[
  {
    "path": ".clang-format",
    "content": "---\nLanguage:        Cpp\n# BasedOnStyle:  LLVM\nAccessModifierOffset: -2\nAlignAfterOpenBracket: Align\nAlignConsecutiveAssignments: false\nAlignConsecutiveDeclarations: false\nAlignEscapedNewlines: Right\nAlignOperands:   true\nAlignTrailingComments: true\nAllowAllArgumentsOnNextLine: true\nAllowAllConstructorInitializersOnNextLine: true\nAllowAllParametersOfDeclarationOnNextLine: true\nAllowShortBlocksOnASingleLine: false\nAllowShortCaseLabelsOnASingleLine: false\nAllowShortFunctionsOnASingleLine: All\nAllowShortLambdasOnASingleLine: All\nAllowShortIfStatementsOnASingleLine: Never\nAllowShortLoopsOnASingleLine: false\nAlwaysBreakAfterDefinitionReturnType: None\nAlwaysBreakBeforeMultilineStrings: false\nAlwaysBreakTemplateDeclarations: MultiLine\nBinPackArguments: true\nBinPackParameters: true\nBreakBeforeBraces: Custom\nBraceWrapping:\n  AfterCaseLabel:  false\n  AfterClass:      false\n  AfterControlStatement: false\n  AfterEnum:       false\n  AfterFunction:   true\n  AfterNamespace:  false\n  AfterObjCDeclaration: false\n  AfterStruct:     false\n  AfterUnion:      false\n  AfterExternBlock: false\n  BeforeCatch:     false\n  BeforeElse:      false\n  IndentBraces:    false\n  SplitEmptyFunction: true\n  SplitEmptyRecord: true\n  SplitEmptyNamespace: true\nBreakBeforeBinaryOperators: None\nBreakBeforeInheritanceComma: false\nBreakInheritanceList: BeforeColon\nBreakBeforeTernaryOperators: true\nBreakConstructorInitializersBeforeComma: false\nBreakConstructorInitializers: BeforeColon\nBreakAfterJavaFieldAnnotations: false\nBreakStringLiterals: true\nColumnLimit:     0\nCommentPragmas:  '^ IWYU pragma:'\nCompactNamespaces: false\nConstructorInitializerAllOnOneLineOrOnePerLine: false\nConstructorInitializerIndentWidth: 4\nContinuationIndentWidth: 8\nCpp11BracedListStyle: true\nDerivePointerAlignment: false\nDisableFormat:   false\nExperimentalAutoDetectBinPacking: false\nFixNamespaceComments: true\nForEachMacros:\n  - foreach\n  - Q_FOREACH\n  - 
BOOST_FOREACH\nIncludeBlocks:   Preserve\nIncludeCategories:\n  - Regex:           '[<\"]pbs_config.h[>\"]'\n    Priority:        -1\n  - Regex:           '.*'\n    Priority:        3\n  - Regex:           '^\"(llvm|llvm-c|clang|clang-c)/'\n    Priority:        2\n  - Regex:           '^(<|\"(gtest|gmock|isl|json)/)'\n    Priority:        4\nIncludeIsMainRegex: '(Test)?$'\nIndentCaseLabels: true\nIndentPPDirectives: None\nIndentWidth:     8\nIndentWrappedFunctionNames: false\nJavaScriptQuotes: Leave\nJavaScriptWrapImports: true\nKeepEmptyLinesAtTheStartOfBlocks: true\nMacroBlockBegin: ''\nMacroBlockEnd:   ''\nMaxEmptyLinesToKeep: 1\nNamespaceIndentation: None\nObjCBinPackProtocolList: Auto\nObjCBlockIndentWidth: 2\nObjCSpaceAfterProperty: false\nObjCSpaceBeforeProtocolList: true\nPenaltyBreakAssignment: 2\nPenaltyBreakBeforeFirstCallParameter: 19\nPenaltyBreakComment: 300\nPenaltyBreakFirstLessLess: 120\nPenaltyBreakString: 1000\nPenaltyBreakTemplateDeclaration: 10\nPenaltyExcessCharacter: 1000000\nPenaltyReturnTypeOnItsOwnLine: 60\nPointerAlignment: Right\nReflowComments:  false\nSortIncludes:    false\nSortUsingDeclarations: true\nSpaceAfterLogicalNot: false\nSpaceAfterTemplateKeyword: true\nSpaceBeforeAssignmentOperators: true\nSpaceBeforeCpp11BracedList: false\nSpaceBeforeCtorInitializerColon: true\nSpaceBeforeInheritanceColon: true\nSpaceBeforeParens: ControlStatements\nSpaceBeforeRangeBasedForLoopColon: true\nSpaceInEmptyParentheses: false\nSpacesBeforeTrailingComments: 1\nSpacesInAngles:  false\nSpacesInContainerLiterals: true\nSpacesInCStyleCastParentheses: false\nSpacesInParentheses: false\nSpacesInSquareBrackets: false\nStandard:        Cpp11\nStatementMacros:\n  - Q_UNUSED\n  - QT_REQUIRE_VERSION\nTabWidth:        8\nUseTab:          Always\nAlwaysBreakAfterReturnType: AllDefinitions\nSpaceAfterCStyleCast: true\n...\n\n"
  },
  {
    "path": ".github/PULL_REQUEST_TEMPLATE.md",
    "content": "<!--- Please review your changes in preview mode -->\n<!--- Provide a general summary of your changes in the Title above -->\n\n#### Describe Bug or Feature\n<!--- Describe the problem, ideally from the customer's viewpoint  -->\n\n\n#### Describe Your Change\n<!--- Say how you fixed the problem.  Please describe your code changes in detail for reviewer -->\n\n\n#### Link to Design Doc\n<!--- If there is a design, link to it here: **[project documentation area](https://pbspro.atlassian.net/wiki/display/PD)** -->\n\n\n#### Attach Test and Valgrind Logs/Output\n<!--- Please attach your test log output from running the test you added (or from existing tests that cover your changes) -->\n<!--- Don't forget to run Valgrind if appropriate and attach the resulting logs -->\n\n\n\n<!--- Pull Request Guidelines: [Pull Request Guidelines](https://pbspro.atlassian.net/wiki/spaces/DG/pages/1187348483/Pull+Request+Guidelines) -->\n"
  },
  {
    "path": ".github/checkclang",
    "content": "#!/bin/bash\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nexport PATH=$PATH:/usr/local/bin/\n\ncheckdir=\"$(readlink -f $(dirname $0))\"\n\nwhich clang-format 1>/dev/null 2>/dev/null\nif [ $? -ne 0 ]; then\n    echo \"Could not find clang-format command\" 1>&2\n    exit 1\nfi\n\ncd ${checkdir}/..\n\nfind . 
-iname '*.h' -o -iname '*.c' -o -iname '*.cpp' | xargs clang-format --dry-run\nexit $?\n\n"
  },
  {
    "path": ".github/checkpep8",
    "content": "#!/bin/bash\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\ncheckdir=\"$(readlink -f $(dirname $0))\"\nerrors=0\n\nwhich pep8 1>/dev/null 2>/dev/null\nif [ $? 
-ne 0 ]; then\n    echo \"Could not find pep8 command\" 1>&2\n    exit 1\nfi\n\ncd ${checkdir}/..\n\nis_python_file() {\n    name=$(basename ${1})\n\n    # special case\n    # a .rst file is treated as a plain text file\n    if [ \"x${name##*.}\" == \"xrst\" ]; then\n        return 1\n    fi\n\n    # special case\n    # if __init__.py does not contain any code then the file\n    # command will consider it a plain text file\n    if [ \"x${name}\" == \"x__init__.py\" ]; then\n        return 0\n    fi\n\n    if [ \"x$(file --mime-type -b ${1})\" == \"xtext/x-python\" ]; then\n        return 0\n    fi\n    return 1\n\n}\n\ncheck_pep8() {\n    pep8 --show-source ${1} >out_check_pep8 2>&1\n    return $?\n}\n\nfor f in $(find test -type f)\ndo\n    if is_python_file ${f}\n    then\n        if ! check_pep8 ${f}\n        then\n            cat out_check_pep8 1>&2\n            errors=$((errors + 1))\n        fi\n        rm -f out_check_pep8\n        if [ -x \"${f}\" ]; then\n            echo \"${f}: executable bit set\" 1>&2\n            errors=$((errors + 1))\n        fi\n    fi\ndone\n\nif [ ${errors} -ne 0 ]; then\n    exit 1\nelse\n    exit 0\nfi\n"
  },
  {
    "path": ".github/runchecks",
    "content": "#!/bin/bash\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\ndeclare -a listofchecks\nlistofchecks[0]=\"checkpep8\"\nlistofchecks[1]=\"checkclang\"\n\ncheckdir=$(readlink -f $(dirname $0))\nerrors_fails=0\n\nfor check in ${listofchecks[@]}\ndo\n    echo -n \"Running check: '${check}' ... \"\n    if [ ! 
-x \"${checkdir}/${check}\" ]; then\n        echo \"NOTFOUND\"\n        errors_fails=$((errors_fails + 1))\n        continue\n    fi\n    \"${checkdir}/${check}\" >out 2>err\n    if [ $? -ne 0 ]; then\n        echo \"FAILED\"\n        cat err\n        errors_fails=$((errors_fails + 1))\n    else\n        echo \"OK\"\n        cat out\n    fi\ndone\n\nrm -f out err\n\nif [ ${errors_fails} -ne 0 ]; then\n    exit 1\nelse\n    exit 0\nfi\n"
  },
  {
    "path": ".gitignore",
    "content": "# Object files\n*.o\n*.ko\n*.obj\n*.elf\n*.slo\n\n# Precompiled Headers\n*.gch\n*.pch\n\n# Libraries\n*.lib\n*.libs\n*.a\n*.la\n*.lo\n*.lai\n\n# module files\n*.mod\n\n# Shared objects (inc. Windows DLLs)\n*.dll\n*.so\n*.so.*\n*.dylib\n\n# Executables\n*.exe\n*.out\n*.app\n*.i*86\n*.x86_64\n*.hex\n\n# Debug files\n*.dSYM/\n\n# Eclipse project files\n.project\n.cproject\n.pydevproject\n.settings/\n.autotools\n.csettings/\n.devcontainer/\n\n# Files used by ctags\ntags\n\n# Files used by cscope\ncscope.files\ncscope.out\n\n#Visual Studio files\n*.user\n*.ncb\n*.suo\n.vscode/\nwin_configure/.vs/\n\n# Files used by gtags\nGPATH\nGRTAGS\nGTAGS\n\n# Files/Directory generated by PBSTestLab\nptl_test_results.html\nptl_test_results.json\ntest/fw/build/\ntest/fw/ptl/__init__.py\ntest/fw/setup.py\ntest/tests/ptl_test_results.json\ntest/tests/*/ptl_test_results.json\n\n# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n\n# Python Distribution / packaging\n.Python\ndevelop-eggs/\ndist/\ndownloads/\neggs/\n.eggs/\nsdist/\n*.egg-info/\n.installed.cfg\n*.egg\n\n# pip Installer logs\npip-log.txt\npip-delete-this-directory.txt\n*.log\n\n# Build directory\ntarget/\ntarget-*/\n\n#PyCharm project directory\n.idea/\n\n# From automake/autoconf\nautom4te.cache/\nautoscan.log\nautoscan-*.log\nconfigure.scan\nconfig.status\naclocal.m4\nbuildutils/config.guess\nbuildutils/config.sub\n\n# Libtool\nlibtool\nm4/libtool.m4\nm4/ltoptions.m4\nm4/ltsugar.m4\nm4/ltversion.m4\nm4/lt~obsolete.m4\nbuildutils/ltmain.sh\n\n# Build related files\nconfigure\nMakefile.in\nMakefile\n*.deps\nbuildutils/pbs_mkdirs\nbuildutils/ar-lib\nbuildutils/compile\nbuildutils/depcomp\nbuildutils/install-sh\nbuildutils/missing\nbuildutils/py-compile\nbuildutils/exclude_script\nbuildutils/makedepend-sh\nbuildutils/ylwrap\n\n# Generated 
binaries\n\nsrc/cmds/mpiexec\nsrc/cmds/nqs2pbs\nsrc/cmds/pbs_attach\nsrc/cmds/pbs_demux\nsrc/cmds/pbs_ds_password.bin\nsrc/cmds/pbs_lamboot\nsrc/cmds/pbs_mpihp\nsrc/cmds/pbs_mpilam\nsrc/cmds/pbs_mpirun\nsrc/cmds/pbs_rdel\nsrc/cmds/pbs_remsh\nsrc/cmds/pbs_rstat\nsrc/cmds/pbs_rsub\nsrc/cmds/pbs_ralter\nsrc/cmds/pbs_tmrsh\nsrc/cmds/pbsdsh\nsrc/cmds/pbsnodes\nsrc/cmds/pbs_release_nodes\nsrc/cmds/pbs_dataservice.bin\nsrc/cmds/pbsrun\nsrc/cmds/pbsrun_unwrap\nsrc/cmds/pbsrun_wrap\nsrc/cmds/qalter\nsrc/cmds/qdel\nsrc/cmds/qdisable\nsrc/cmds/qenable\nsrc/cmds/qhold\nsrc/cmds/qmgr\nsrc/cmds/qmove\nsrc/cmds/qmsg\nsrc/cmds/qorder\nsrc/cmds/qrerun\nsrc/cmds/qrls\nsrc/cmds/qrun\nsrc/cmds/qselect\nsrc/cmds/qsig\nsrc/cmds/qstart\nsrc/cmds/qstat\nsrc/cmds/qstop\nsrc/cmds/qsub\nsrc/cmds/qterm\nsrc/cmds/scripts/limits.pbs_mom\nsrc/cmds/scripts/limits.post_services\nsrc/cmds/scripts/pbs_habitat\nsrc/cmds/scripts/pbs_init.d\nsrc/cmds/scripts/pbs_poerun\nsrc/cmds/scripts/pbs_postinstall\nsrc/cmds/scripts/pbsrun.poe\nsrc/cmds/scripts/pbs_reload\nsrc/iff/pbs_iff\nsrc/mom_rcp/pbs_rcp\nsrc/resmom/pbs_mom\nsrc/scheduler/pbs_sched\nsrc/scheduler/pbs_sched_bare\nsrc/scheduler/pbsfs\nsrc/server/pbs_comm\nsrc/server/pbs_server.bin\nsrc/tools/pbs_ds_monitor\nsrc/tools/pbs_hostn\nsrc/tools/pbs_idled\nsrc/tools/pbs_probe\nsrc/tools/pbs_python\nsrc/tools/pbs_tclsh\nsrc/tools/pbs_upgrade_job\nsrc/tools/pbs_wish\nsrc/tools/printjob.bin\nsrc/tools/printjob_svr.bin\nsrc/tools/tracejob\nsrc/tools/wrap_tcl.sh\nsrc/tools/pbs_sleep\nsrc/unsupported/pbs_rmget\nsrc/unsupported/renew-test/renew-test\n\n# Generated source 
files\nsrc/include/pbs_version.h\nsrc/include/pbs_config.h\nsrc/include/pbs_config.h.in\nsrc/include/pbs_config.h.in~\nsrc/include/job_attr_enum.h\nsrc/include/node_attr_enum.h\nsrc/include/queue_attr_enum.h\nsrc/include/resc_def_enum.h\nsrc/include/resv_attr_enum.h\nsrc/include/sched_attr_enum.h\nsrc/include/svr_attr_enum.h\nsrc/lib/Libattr/queue_attr_def.c\nsrc/lib/Libattr/resc_def_all.c\nsrc/lib/Libattr/resv_attr_def.c\nsrc/lib/Libattr/sched_attr_def.c\nsrc/lib/Libattr/svr_attr_def.c\nsrc/lib/Libattr/job_attr_def.c\nsrc/lib/Libattr/node_attr_def.c\nsrc/lib/Libpbs/ecl_job_attr_def.c\nsrc/lib/Libpbs/ecl_node_attr_def.c\nsrc/lib/Libpbs/ecl_queue_attr_def.c\nsrc/lib/Libpbs/ecl_resc_def_all.c\nsrc/lib/Libpbs/ecl_resv_attr_def.c\nsrc/lib/Libpbs/ecl_sched_attr_def.c\nsrc/lib/Libpbs/ecl_svr_attr_def.c\nsrc/include/stamp-h1\nsrc/lib/Libpython/pbs_ifl.i\nsrc/lib/Libpython/pbs_ifl.py\nsrc/lib/Libpython/pbs_ifl_wrap.c\nsrc/lib/Libifl/pbs_ifl.py\nsrc/lib/Libifl/pbs_ifl_wrap.c\nsrc/include/job_attr_enum.h\nsrc/include/node_attr_enum.h\nsrc/include/queue_attr_enum.h\nsrc/include/resc_def_enum.h\nsrc/include/resv_attr_enum.h\nsrc/include/sched_attr_enum.h\nsrc/include/svr_attr_enum.h\n\n#Generated source files - Windows\nsrc/lib/Libecl/ecl_node_attr_def.c\nsrc/lib/Libecl/ecl_job_attr_def.c\nsrc/lib/Libecl/ecl_queue_attr_def.c\nsrc/lib/Libecl/ecl_resc_def_all.c\nsrc/lib/Libecl/ecl_resv_attr_def.c\nsrc/lib/Libecl/ecl_sched_attr_def.c\nsrc/lib/Libecl/ecl_svr_attr_def.c\nwin_configure/projects/pbs_ifl.i\nwin_configure/projects/pbs_ifl.py\nwin_configure/projects/pbs_ifl_wrap.c\n\n#ci logs\nci/logs/\nci/logs/prev_LOGS/\nci/.*\nci/packages\nci/ptl_ts_tree.json\nci/docker-compose.json\n\n# Generated scripts\nsrc/cmds/scripts/modulefile\nsrc/cmds/scripts/pbs.service\n\n# Generated by make dist\n*.tar.gz\nsrc/lib/Libpbs/pbs.pc\n\n# rpm spec file\n*.spec\n\n# Other archive file types\n*.tar\n*.tar.bz\n*.tgz\n*.zip\n*.cpio\n*.rpm\n*.deb\n\n# Generated directories by autotools\n*.libs\n"
  },
  {
    "path": "CODE_OF_CONDUCT.md",
    "content": "#### OpenPBS Open Source Project \n\n## **Code of Conduct**\n\nThis code of conduct is a guide for members of the OpenPBS community. We are committed to providing an open and welcoming environment for the OpenPBS community.  We expect that all members of the community will behave according to this code of conduct.  This code of conduct is intended to explain the spirit in which we expect to communicate, not to be an exhaustive list.  This code of conduct applies to all elements of the OpenPBS community: mailing lists, bug tracking systems, etc.  Anyone who violates this code of conduct may be banned from the OpenPBS community.  It is unacceptable to follow the letter but not the spirit of this code of conduct.\n\nGuidelines for code of conduct:\n\n* **Be friendly and patient.**\n* **Be welcoming:** We strive to be a community that welcomes and supports people of all backgrounds and identities.\n* **Be considerate:** Your work will be used by other people, and you in turn will depend on the work of others. Decisions you make affect everyone in the community, so please be mindful of your actions and always choose a non-confrontational approach. Remember: this is a global community and English is not everyone's primary language.\n* **Be respectful:** Disagreements may occur, but we cannot abide personal attacks. The health of the community depends on all members feeling comfortable and supported. If you don't agree, use discretion and be polite.\n* **Be careful in the words that we choose:** we are a community of professionals, and we conduct ourselves professionally. Be kind to others. Do not insult or put down other participants. Harassment and other exclusionary behavior aren’t acceptable.\n* **Try to understand why we disagree:** Disagreements, both social and technical, happen all the time. It is important that we resolve disagreements and differing views constructively. Different people have different perspectives on issues. 
Being unable to understand why someone holds a viewpoint doesn’t mean that they’re wrong. Don’t forget that it is human to err and blaming each other doesn’t get us anywhere. Instead, focus on helping to resolve issues and learning from mistakes.\n\nIn addition, our open source community members are expected to abide by the **[OpenPBS Acceptable Use Policy](https://openpbs.atlassian.net/wiki/spaces/PBSPro/pages/5537837/Acceptable+Use+Policy)**.\n\n### Reporting Issues\nIf you experience or witness unacceptable behavior — or have any other concerns — please report it by sending e-mail to webmaster@pbspro.org. All reports will be handled with discretion. In your report please include:\n* Your contact information.\n* Names (real, nicknames, or pseudonyms) of any individuals involved. If there are additional witnesses, please include them as well.\n* Your account of what occurred, and whether you believe the incident is ongoing. If there is a publicly available record (e.g. a mailing list archive or a public IRC logger), please include a link.\n* Any additional information that may be helpful.\n\nAfter filing a report, a representative will contact you personally, review the incident, follow up with any additional questions, and make a decision as to how to respond. If the person who is harassing you is part of the response team, they will recuse themselves from handling your incident. If the complaint originates from a member of the response team, it will be handled by a different member of the response team. We will respect confidentiality requests for the purpose of protecting victims of abuse.\n\n### Attribution & Acknowledgements\nThis code of conduct is based on the **[Open Code of Conduct v1.0](https://github.com/todogroup/opencodeofconduct)** from the **[TODOGroup](http://todogroup.org)**. 
We are thankful for their work and all the communities who have paved the way with codes of conduct.\n\n### PBS Pro Contributor's Portal\nPlease see the PBS Pro Contributor's Portal for the PBS Pro **[Code of Conduct](https://openpbs.atlassian.net/wiki/spaces/PBSPro/pages/5537835/Code+of+Conduct)**.\n\nNote: In May 2020, OpenPBS became the new name for the PBS Professional Open Source Project. (PBS Professional will be used to refer to the commercial version; OpenPBS to the Open Source version -- same code, easier naming.) As there are many parts to the project, it will take several weeks to change the name in all places, so you will continue to see references to PBS Pro (as in the above) -- stay tuned.\n"
  },
  {
    "path": "CONTRIBUTING.md",
    "content": "### Contributing to the OpenPBS Open Source Project\n\nWe're so happy that you want to contribute to OpenPBS!\n\nPlease see the Contributor's Portal for details, guidelines, and how-to tutorials.  Start at **[Becoming a Contributor to OpenPBS](https://pbspro.atlassian.net/wiki/spaces/DG/pages/20414474/Becoming+a+Contributor+to+PBS+Pro)**.\n\nNote: In May 2020, OpenPBS became the new name for the PBS Professional Open Source Project.  (PBS Professional will be used to refer to the commercial version; OpenPBS to the Open Source version -- same code, easier naming.)  As there are many parts to the project, it will take several weeks to change the name in all places, so you will continue to see references to PBS Pro (as in the above) -- stay tuned. \n"
  },
  {
    "path": "COPYRIGHT",
    "content": "Copyright (C) 1994-2021 Altair Engineering, Inc.\nFor more information, contact Altair at www.altair.com.\n\nThis file is part of both the OpenPBS software (\"OpenPBS\")\nand the PBS Professional (\"PBS Pro\") software.\n\nOpen Source License Information:\n\nOpenPBS is free software. You can redistribute it and/or modify it under\nthe terms of the GNU Affero General Public License as published by the\nFree Software Foundation, either version 3 of the License, or (at your\noption) any later version.\n\nOpenPBS is distributed in the hope that it will be useful, but WITHOUT\nANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\nFITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\nLicense for more details.\n\nYou should have received a copy of the GNU Affero General Public License\nalong with this program.  If not, see <http://www.gnu.org/licenses/>.\n\nCommercial License Information:\n\nPBS Pro is commercially licensed software that shares a common core with\nthe OpenPBS software.  For a copy of the commercial license terms and\nconditions, go to: (http://www.pbspro.com/agreement.html) or contact the\nAltair Legal Department.\n\nAltair's dual-license business model allows companies, individuals, and\norganizations to create proprietary derivative works of OpenPBS and\ndistribute them - whether embedded or bundled with other software -\nunder a commercial license agreement.\n\nUse of Altair's trademarks, including but not limited to \"PBS™\",\n\"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\nsubject to Altair's trademark licensing policies.\n"
  },
  {
    "path": "INSTALL",
"content": "\n--------------------------------------------------------------------\n\nHow to install PBS using the configure script.\n\n0. Disable SELinux.\n\n  OpenPBS does not support SELinux. With SELinux enabled, the initial start fails\n    with a datastore permission error. You could instead define an appropriate\n    SELinux policy, but that is out of scope for this guide.\n\n1. Install the prerequisite packages for building PBS.\n\n  For CentOS-8 systems you should configure and enable the powertools\n  repo to obtain the hwloc-devel and libedit-devel packages.\n\n  You should run the following commands as root:\n\n    dnf install -y dnf-plugins-core\n    dnf config-manager --set-enabled powertools\n\n    dnf install -y gcc make rpm-build libtool hwloc-devel \\\n      libX11-devel libXt-devel libedit-devel libical-devel \\\n      ncurses-devel perl postgresql-devel postgresql-contrib python3-devel tcl-devel \\\n      tk-devel swig expat-devel openssl-devel libXext libXft \\\n      autoconf automake gcc-c++ cjson-devel\n\n  For CentOS-7 systems you should run the following command as root:\n\n    yum install -y gcc make rpm-build libtool hwloc-devel \\\n      libX11-devel libXt-devel libedit-devel libical-devel \\\n      ncurses-devel perl postgresql-devel postgresql-contrib python3-devel tcl-devel \\\n      tk-devel swig expat-devel openssl-devel libXext libXft \\\n      autoconf automake gcc-c++\n\n  For openSUSE systems you should run the following command as root:\n\n    zypper install gcc make rpm-build libtool hwloc-devel \\\n      libX11-devel libXt-devel libedit-devel libical-devel \\\n      ncurses-devel perl postgresql-devel postgresql-contrib python3-devel tcl-devel \\\n      tk-devel swig libexpat-devel libopenssl-devel libXext-devel \\\n      libXft-devel fontconfig autoconf automake gcc-c++ cJSON-devel\n\n  For Debian systems you should run the following command as root:\n\n    apt-get install gcc make libtool libhwloc-dev libx11-dev \\\n      libxt-dev libedit-dev libical-dev ncurses-dev perl \\\n      postgresql-server-dev-all postgresql-contrib python3-dev tcl-dev tk-dev swig \\\n      libexpat-dev libssl-dev libxext-dev libxft-dev autoconf \\\n      automake g++ libcjson-dev\n\n  For Ubuntu-18.04 systems you should run the following command as root:\n\n    apt install gcc make libtool libhwloc-dev libx11-dev \\\n      libxt-dev libedit-dev libical-dev ncurses-dev perl \\\n      postgresql-server-dev-all postgresql-contrib python3-dev tcl-dev tk-dev swig \\\n      libexpat-dev libssl-dev libxext-dev libxft-dev autoconf \\\n      automake g++\n\n  For Ubuntu-24.04 systems you should run the following command as root:\n\n    apt install gcc make libtool libhwloc-dev libx11-dev \\\n      libxt-dev libedit-dev libical-dev ncurses-dev perl \\\n      postgresql-server-dev-all postgresql-contrib python3-dev tcl-dev tk-dev swig \\\n      libexpat-dev libssl-dev libxext-dev libxft-dev autoconf \\\n      automake g++ libcjson-dev\n\n  For macOS systems using MacPorts you should run the following command as root:\n\n    port install autoconf automake libtool pkgconfig \\\n      expat hwloc libedit libical openssl postgresql14 python310 \\\n      swig-python tcl tk xorg-libX11 xorg-libXt\n\n2. Install the prerequisite packages for running PBS. 
In addition\n  to the commands below, you should also install a text editor of\n  your choosing (vim, emacs, gedit, etc.).\n\n  For CentOS systems you should run the following command as root:\n\n    yum install -y expat libedit postgresql-server postgresql-contrib python3 \\\n      sendmail sudo tcl tk libical chkconfig cjson\n\n  For openSUSE systems you should run the following command as root:\n\n    zypper install expat libedit postgresql-server postgresql-contrib python3 \\\n      sendmail sudo tcl tk libical1 libcjson1\n\n  For Debian (jessie) systems you should run the following command as root:\n\n    apt-get install expat libedit2 postgresql python3 postgresql-contrib sendmail-bin \\\n      sudo tcl tk libical1a\n\n  For Debian (stretch) systems you should run the following command as root:\n\n    apt-get install expat libedit2 postgresql python3 postgresql-contrib sendmail-bin \\\n      sudo tcl tk libical2\n\n  For Debian (buster) systems you should run the following command as root:\n\n    apt-get install expat libedit2 postgresql python3 postgresql-contrib sendmail-bin \\\n      sudo tcl tk libical3 libcjson1\n\n  For Ubuntu-18.04 systems you should run the following command as root:\n\n    apt install expat libedit2 postgresql python3 postgresql-contrib sendmail-bin \\\n      sudo tcl tk libical3 postgresql-server-dev-all\n\n  For Ubuntu-24.04 systems you should run the following command as root:\n\n    apt install expat libedit2 postgresql python3 postgresql-contrib sendmail-bin \\\n      sudo tcl tk libical3 postgresql-server-dev-all\n\n  For macOS systems using MacPorts you should run the following command as root:\n\n    port install expat libedit libical openssl postgresql14-server python310 \\\n      tcl tk\n\n3. Open a terminal as a normal (non-root) user, unpack the PBS\n  tarball, and cd to the package directory.\n\n    tar -xpvf openpbs-20.0.0.tar.gz\n    cd openpbs-20.0.0\n\n4. Generate the configure script and Makefiles. 
(See note 1 below)\n\n    ./autogen.sh\n\n5. Display the available build parameters.\n\n    ./configure --help\n\n6. Configure the build for your environment. You may use the\n  parameters displayed in the previous step. (See note 2 below)\n\n  For CentOS and Debian systems you should run the following\n  command:\n\n    ./configure --prefix=/opt/pbs\n\n  For openSUSE systems (see note 3 below) you should run the\n  following command:\n\n    ./configure --prefix=/opt/pbs --libexecdir=/opt/pbs/libexec\n\n  For macOS systems using MacPorts you should run the following commands:\n\n    export CPATH=/opt/local/include/postgresql14:/opt/local/include\n    export LIBRARY_PATH=/opt/local/lib/postgresql14:/opt/local/lib\n    ./configure --with-swig=/opt/local --with-tcl=/opt/local\n\n  If PTL needs to be installed along with PBS, use the\n  \"--enable-ptl\" option (see note 5 below), e.g.:\n\n    ./configure --prefix=/opt/pbs --enable-ptl\n\n7. Build PBS by running \"make\". (See note 4 below)\n\n    make\n\n8. Configure sudo to allow your user account to run commands as\n  root. Refer to the online manual pages for sudo, sudoers, and\n  visudo.\n\n9. Install PBS. Use sudo to run the command as root.\n\n    sudo make install\n\n10. Configure PBS by executing the post-install script.\n\n    sudo /opt/pbs/libexec/pbs_postinstall\n\n11. Edit /etc/pbs.conf to configure the PBS services that\n  should be started. If you are installing PBS on only\n  one system, you should change the value of PBS_START_MOM\n  from zero to one. If you use vi as your editor, you would\n  run:\n\n    sudo vi /etc/pbs.conf\n\n12. Some file permissions must be modified to add SUID privilege.\n\n    sudo chmod 4755 /opt/pbs/sbin/pbs_iff /opt/pbs/sbin/pbs_rcp\n\n13. Start the PBS services.\n\n    sudo /etc/init.d/pbs start\n\n14. All configured PBS services should now be running. 
Update\n  your PATH and MANPATH variables by sourcing the appropriate\n  PBS profile or logging out and back in.\n\n  For Bourne shell (or similar) run the following:\n    . /etc/profile.d/pbs.sh\n\n  For C shell (or similar) run the following:\n    source /etc/profile.d/pbs.csh\n\n15. You should now be able to run PBS commands to submit\n  and query jobs. Some examples follow.\n\nbash$ qstat -B\nServer             Max   Tot   Que   Run   Hld   Wat   Trn   Ext Status\n---------------- ----- ----- ----- ----- ----- ----- ----- ----- -----------\nhost1                0     0     0     0     0     0     0     0 Active\nbash$ pbsnodes -a\nhost1\n     Mom = host1\n     ntype = PBS\n     state = free\n     pcpus = 2\n     resources_available.arch = linux\n     resources_available.host = host1\n     resources_available.mem = 2049248kb\n     resources_available.ncpus = 2\n     resources_available.vnode = host1\n     resources_assigned.accelerator_memory = 0kb\n     resources_assigned.mem = 0kb\n     resources_assigned.naccelerators = 0\n     resources_assigned.ncpus = 0\n     resources_assigned.vmem = 0kb\n     resv_enable = True\n     sharing = default_shared\n     license = l\n\nbash$ echo \"sleep 60\" | qsub\n0.host1\nbash$ qstat -a\n\nhost1:\n                                                            Req'd  Req'd   Elap\nJob ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time\n--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----\n0.host1         mike     workq    STDIN        2122   1   1    --    --  R 00:00\n\nbash$\n\n--------------------------------------------------------------------\n\nNOTES:\n\nNote 1: If you modify configure.ac or adjust timestamps on any files\n  that are automatically generated, you will need to regenerate them\n  by re-running autogen.sh.\n\nNote 2: It is advisable to create a simple shell script that calls\n  configure with the appropriate options for your environment. 
This\n  ensures configure will be called with the same arguments during\n  subsequent invocations. If you have already run configure, you can\n  regenerate all of the Makefiles by running \"./config.status\".\n  The first few lines of config.status will reveal the options that\n  were specified when configure was run. If you set environment\n  variables such as CFLAGS, it is best to do so as an argument to\n  configure (e.g. ./configure CFLAGS=\"-O0 -g\" --prefix=/opt/pbs).\n  This will ensure consistency when config.status regenerates the\n  Makefiles.\n\nNote 3: The openSUSE rpm package expands %_libexecdir to /opt/pbs/lib\n  rather than /opt/pbs/libexec which causes problems for the post-\n  install scripts. Providing the --libexecdir value to configure\n  overrides this behavior.\n\nNote 4: You need to use a POSIX (or nearly POSIX) make. GNU make\n  works quite well in this regard; BSD make does not. If you are\n  having any sort of build problems, your make should be a prime\n  suspect. Tremendous effort has been expended to provide proper\n  dependency generation and makefiles without relying on any\n  non-POSIX features. The build should work fine with a simple call\n  to make; however, complicating things by using various make flags\n  is not guaranteed to work. Don't be surprised if the first thing\n  that make does is call configure again.\n\nNote 5: PTL gets installed in the parent directory of where PBS\n  is installed. For example, if you have given the install prefix=/opt/pbs, then\n  you can find the PTL installation in the directory /opt/ptl. You may need to\n  log out and log back in from the terminal for PATH and PYTHONPATH to update.\n\n\n\nUsing valgrind with PBS.\n-------------------------------------\nHere is a set of steps to detect memory errors/leaks within PBS code.\n\n1. Install the valgrind development package.\n\n   yum install valgrind-devel (zypper for openSUSE).\n\n\n2. 
Compile Python in a way that valgrind can work with it, as follows:\n\n   ./configure --prefix=<installdir> --without-pymalloc --with-pydebug --with-valgrind\n   make; make install\n\n\n3. Compile PBS against the specially built Python, in debug mode, as follows:\n\n   ./configure --prefix=<installdir> --with-python=<python-dir>  CFLAGS=\"-g -DPy_DEBUG -DDEBUG -Wall -Werror\"\n\n\n4. Run the PBS daemons under valgrind.\n\n   a) To detect memory errors (not leaks), run the PBS daemons as follows:\n\n   export LD_LIBRARY_PATH=/opt/pbs/pgsql/lib:/opt/pbs/lib:$LD_LIBRARY_PATH\n   valgrind --tool=memcheck --log-file=/tmp/val.out /opt/pbs/sbin/pbs_server.bin\n\n\n   b) To detect memory leaks, use the supplied leak-suppression file valgrind.supp, as follows:\n\n   export LD_LIBRARY_PATH=/opt/pbs/pgsql/lib:/opt/pbs/lib:$LD_LIBRARY_PATH\n   valgrind --tool=memcheck --log-file=/tmp/val.out --suppressions=./valgrind.supp --leak-check=full --track-origins=yes /opt/pbs/sbin/pbs_server.bin\n"
  },
  {
    "path": "LICENSE",
    "content": "----------------------------------------------------\nOpen Source License for OpenPBS and PBS Professional\n----------------------------------------------------\n\nCopyright (C) 1994-2021 Altair Engineering, Inc.\nFor more information, contact Altair at www.altair.com.\n\nThis file is part of both the OpenPBS software (\"OpenPBS\")\nand the PBS Professional (\"PBS Pro\") software.\n\nOpen Source License Information:\n\nOpenPBS is free software. You can redistribute it and/or modify it under\nthe terms of the GNU Affero General Public License as published by the\nFree Software Foundation, either version 3 of the License, or (at your\noption) any later version.\n\nOpenPBS is distributed in the hope that it will be useful, but WITHOUT\nANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\nFITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\nLicense for more details.\n\nYou should have received a copy of the GNU Affero General Public License\nalong with this program.  If not, see <http://www.gnu.org/licenses/>.\n\nCommercial License Information:\n\nPBS Pro is commercially licensed software that shares a common core with\nthe OpenPBS software.  
For a copy of the commercial license terms and\nconditions, go to: (http://www.pbspro.com/agreement.html) or contact the\nAltair Legal Department.\n\nAltair's dual-license business model allows companies, individuals, and\norganizations to create proprietary derivative works of OpenPBS and\ndistribute them - whether embedded or bundled with other software -\nunder a commercial license agreement.\n\nUse of Altair's trademarks, including but not limited to \"PBS™\",\n\"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\nsubject to Altair's trademark licensing policies.\n\n==============================================================================\n\n                    GNU AFFERO GENERAL PUBLIC LICENSE\n                       Version 3, 19 November 2007\n\n Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>\n Everyone is permitted to copy and distribute verbatim copies\n of this license document, but changing it is not allowed.\n\n                            Preamble\n\n  The GNU Affero General Public License is a free, copyleft license for\nsoftware and other kinds of works, specifically designed to ensure\ncooperation with the community in the case of network server software.\n\n  The licenses for most software and other practical works are designed\nto take away your freedom to share and change the works.  By contrast,\nour General Public Licenses are intended to guarantee your freedom to\nshare and change all versions of a program--to make sure it remains free\nsoftware for all its users.\n\n  When we speak of free software, we are referring to freedom, not\nprice.  
Our General Public Licenses are designed to make sure that you\nhave the freedom to distribute copies of free software (and charge for\nthem if you wish), that you receive source code or can get it if you\nwant it, that you can change the software or use pieces of it in new\nfree programs, and that you know you can do these things.\n\n  Developers that use our General Public Licenses protect your rights\nwith two steps: (1) assert copyright on the software, and (2) offer\nyou this License which gives you legal permission to copy, distribute\nand/or modify the software.\n\n  A secondary benefit of defending all users' freedom is that\nimprovements made in alternate versions of the program, if they\nreceive widespread use, become available for other developers to\nincorporate.  Many developers of free software are heartened and\nencouraged by the resulting cooperation.  However, in the case of\nsoftware used on network servers, this result may fail to come about.\nThe GNU General Public License permits making a modified version and\nletting the public access it on a server without ever releasing its\nsource code to the public.\n\n  The GNU Affero General Public License is designed specifically to\nensure that, in such cases, the modified source code becomes available\nto the community.  It requires the operator of a network server to\nprovide the source code of the modified version running there to the\nusers of that server.  Therefore, public use of a modified version, on\na publicly accessible server, gives the public access to the source\ncode of the modified version.\n\n  An older license, called the Affero General Public License and\npublished by Affero, was designed to accomplish similar goals.  
This is\na different license, not a version of the Affero GPL, but Affero has\nreleased a new version of the Affero GPL which permits relicensing under\nthis license.\n\n  The precise terms and conditions for copying, distribution and\nmodification follow.\n\n                       TERMS AND CONDITIONS\n\n  0. Definitions.\n\n  \"This License\" refers to version 3 of the GNU Affero General Public License.\n\n  \"Copyright\" also means copyright-like laws that apply to other kinds of\nworks, such as semiconductor masks.\n\n  \"The Program\" refers to any copyrightable work licensed under this\nLicense.  Each licensee is addressed as \"you\".  \"Licensees\" and\n\"recipients\" may be individuals or organizations.\n\n  To \"modify\" a work means to copy from or adapt all or part of the work\nin a fashion requiring copyright permission, other than the making of an\nexact copy.  The resulting work is called a \"modified version\" of the\nearlier work or a work \"based on\" the earlier work.\n\n  A \"covered work\" means either the unmodified Program or a work based\non the Program.\n\n  To \"propagate\" a work means to do anything with it that, without\npermission, would make you directly or secondarily liable for\ninfringement under applicable copyright law, except executing it on a\ncomputer or modifying a private copy.  Propagation includes copying,\ndistribution (with or without modification), making available to the\npublic, and in some countries other activities as well.\n\n  To \"convey\" a work means any kind of propagation that enables other\nparties to make or receive copies.  
Mere interaction with a user through\na computer network, with no transfer of a copy, is not conveying.\n\n  An interactive user interface displays \"Appropriate Legal Notices\"\nto the extent that it includes a convenient and prominently visible\nfeature that (1) displays an appropriate copyright notice, and (2)\ntells the user that there is no warranty for the work (except to the\nextent that warranties are provided), that licensees may convey the\nwork under this License, and how to view a copy of this License.  If\nthe interface presents a list of user commands or options, such as a\nmenu, a prominent item in the list meets this criterion.\n\n  1. Source Code.\n\n  The \"source code\" for a work means the preferred form of the work\nfor making modifications to it.  \"Object code\" means any non-source\nform of a work.\n\n  A \"Standard Interface\" means an interface that either is an official\nstandard defined by a recognized standards body, or, in the case of\ninterfaces specified for a particular programming language, one that\nis widely used among developers working in that language.\n\n  The \"System Libraries\" of an executable work include anything, other\nthan the work as a whole, that (a) is included in the normal form of\npackaging a Major Component, but which is not part of that Major\nComponent, and (b) serves only to enable use of the work with that\nMajor Component, or to implement a Standard Interface for which an\nimplementation is available to the public in source code form.  
A\n\"Major Component\", in this context, means a major essential component\n(kernel, window system, and so on) of the specific operating system\n(if any) on which the executable work runs, or a compiler used to\nproduce the work, or an object code interpreter used to run it.\n\n  The \"Corresponding Source\" for a work in object code form means all\nthe source code needed to generate, install, and (for an executable\nwork) run the object code and to modify the work, including scripts to\ncontrol those activities.  However, it does not include the work's\nSystem Libraries, or general-purpose tools or generally available free\nprograms which are used unmodified in performing those activities but\nwhich are not part of the work.  For example, Corresponding Source\nincludes interface definition files associated with source files for\nthe work, and the source code for shared libraries and dynamically\nlinked subprograms that the work is specifically designed to require,\nsuch as by intimate data communication or control flow between those\nsubprograms and other parts of the work.\n\n  The Corresponding Source need not include anything that users\ncan regenerate automatically from other parts of the Corresponding\nSource.\n\n  The Corresponding Source for a work in source code form is that\nsame work.\n\n  2. Basic Permissions.\n\n  All rights granted under this License are granted for the term of\ncopyright on the Program, and are irrevocable provided the stated\nconditions are met.  This License explicitly affirms your unlimited\npermission to run the unmodified Program.  The output from running a\ncovered work is covered by this License only if the output, given its\ncontent, constitutes a covered work.  This License acknowledges your\nrights of fair use or other equivalent, as provided by copyright law.\n\n  You may make, run and propagate covered works that you do not\nconvey, without conditions so long as your license otherwise remains\nin force.  
You may convey covered works to others for the sole purpose\nof having them make modifications exclusively for you, or provide you\nwith facilities for running those works, provided that you comply with\nthe terms of this License in conveying all material for which you do\nnot control copyright.  Those thus making or running the covered works\nfor you must do so exclusively on your behalf, under your direction\nand control, on terms that prohibit them from making any copies of\nyour copyrighted material outside their relationship with you.\n\n  Conveying under any other circumstances is permitted solely under\nthe conditions stated below.  Sublicensing is not allowed; section 10\nmakes it unnecessary.\n\n  3. Protecting Users' Legal Rights From Anti-Circumvention Law.\n\n  No covered work shall be deemed part of an effective technological\nmeasure under any applicable law fulfilling obligations under article\n11 of the WIPO copyright treaty adopted on 20 December 1996, or\nsimilar laws prohibiting or restricting circumvention of such\nmeasures.\n\n  When you convey a covered work, you waive any legal power to forbid\ncircumvention of technological measures to the extent such circumvention\nis effected by exercising rights under this License with respect to\nthe covered work, and you disclaim any intention to limit operation or\nmodification of the work as a means of enforcing, against the work's\nusers, your or third parties' legal rights to forbid circumvention of\ntechnological measures.\n\n  4. 
Conveying Verbatim Copies.\n\n  You may convey verbatim copies of the Program's source code as you\nreceive it, in any medium, provided that you conspicuously and\nappropriately publish on each copy an appropriate copyright notice;\nkeep intact all notices stating that this License and any\nnon-permissive terms added in accord with section 7 apply to the code;\nkeep intact all notices of the absence of any warranty; and give all\nrecipients a copy of this License along with the Program.\n\n  You may charge any price or no price for each copy that you convey,\nand you may offer support or warranty protection for a fee.\n\n  5. Conveying Modified Source Versions.\n\n  You may convey a work based on the Program, or the modifications to\nproduce it from the Program, in the form of source code under the\nterms of section 4, provided that you also meet all of these conditions:\n\n    a) The work must carry prominent notices stating that you modified\n    it, and giving a relevant date.\n\n    b) The work must carry prominent notices stating that it is\n    released under this License and any conditions added under section\n    7.  This requirement modifies the requirement in section 4 to\n    \"keep intact all notices\".\n\n    c) You must license the entire work, as a whole, under this\n    License to anyone who comes into possession of a copy.  This\n    License will therefore apply, along with any applicable section 7\n    additional terms, to the whole of the work, and all its parts,\n    regardless of how they are packaged.  
This License gives no\n    permission to license the work in any other way, but it does not\n    invalidate such permission if you have separately received it.\n\n    d) If the work has interactive user interfaces, each must display\n    Appropriate Legal Notices; however, if the Program has interactive\n    interfaces that do not display Appropriate Legal Notices, your\n    work need not make them do so.\n\n  A compilation of a covered work with other separate and independent\nworks, which are not by their nature extensions of the covered work,\nand which are not combined with it such as to form a larger program,\nin or on a volume of a storage or distribution medium, is called an\n\"aggregate\" if the compilation and its resulting copyright are not\nused to limit the access or legal rights of the compilation's users\nbeyond what the individual works permit.  Inclusion of a covered work\nin an aggregate does not cause this License to apply to the other\nparts of the aggregate.\n\n  6. Conveying Non-Source Forms.\n\n  You may convey a covered work in object code form under the terms\nof sections 4 and 5, provided that you also convey the\nmachine-readable Corresponding Source under the terms of this License,\nin one of these ways:\n\n    a) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by the\n    Corresponding Source fixed on a durable physical medium\n    customarily used for software interchange.\n\n    b) Convey the object code in, or embodied in, a physical product\n    (including a physical distribution medium), accompanied by a\n    written offer, valid for at least three years and valid for as\n    long as you offer spare parts or customer support for that product\n    model, to give anyone who possesses the object code either (1) a\n    copy of the Corresponding Source for all the software in the\n    product that is covered by this License, on a durable physical\n    medium 
customarily used for software interchange, for a price no\n    more than your reasonable cost of physically performing this\n    conveying of source, or (2) access to copy the\n    Corresponding Source from a network server at no charge.\n\n    c) Convey individual copies of the object code with a copy of the\n    written offer to provide the Corresponding Source.  This\n    alternative is allowed only occasionally and noncommercially, and\n    only if you received the object code with such an offer, in accord\n    with subsection 6b.\n\n    d) Convey the object code by offering access from a designated\n    place (gratis or for a charge), and offer equivalent access to the\n    Corresponding Source in the same way through the same place at no\n    further charge.  You need not require recipients to copy the\n    Corresponding Source along with the object code.  If the place to\n    copy the object code is a network server, the Corresponding Source\n    may be on a different server (operated by you or a third party)\n    that supports equivalent copying facilities, provided you maintain\n    clear directions next to the object code saying where to find the\n    Corresponding Source.  
Regardless of what server hosts the\n    Corresponding Source, you remain obligated to ensure that it is\n    available for as long as needed to satisfy these requirements.\n\n    e) Convey the object code using peer-to-peer transmission, provided\n    you inform other peers where the object code and Corresponding\n    Source of the work are being offered to the general public at no\n    charge under subsection 6d.\n\n  A separable portion of the object code, whose source code is excluded\nfrom the Corresponding Source as a System Library, need not be\nincluded in conveying the object code work.\n\n  A \"User Product\" is either (1) a \"consumer product\", which means any\ntangible personal property which is normally used for personal, family,\nor household purposes, or (2) anything designed or sold for incorporation\ninto a dwelling.  In determining whether a product is a consumer product,\ndoubtful cases shall be resolved in favor of coverage.  For a particular\nproduct received by a particular user, \"normally used\" refers to a\ntypical or common use of that class of product, regardless of the status\nof the particular user or of the way in which the particular user\nactually uses, or expects or is expected to use, the product.  A product\nis a consumer product regardless of whether the product has substantial\ncommercial, industrial or non-consumer uses, unless such uses represent\nthe only significant mode of use of the product.\n\n  \"Installation Information\" for a User Product means any methods,\nprocedures, authorization keys, or other information required to install\nand execute modified versions of a covered work in that User Product from\na modified version of its Corresponding Source.  
The information must\nsuffice to ensure that the continued functioning of the modified object\ncode is in no case prevented or interfered with solely because\nmodification has been made.\n\n  If you convey an object code work under this section in, or with, or\nspecifically for use in, a User Product, and the conveying occurs as\npart of a transaction in which the right of possession and use of the\nUser Product is transferred to the recipient in perpetuity or for a\nfixed term (regardless of how the transaction is characterized), the\nCorresponding Source conveyed under this section must be accompanied\nby the Installation Information.  But this requirement does not apply\nif neither you nor any third party retains the ability to install\nmodified object code on the User Product (for example, the work has\nbeen installed in ROM).\n\n  The requirement to provide Installation Information does not include a\nrequirement to continue to provide support service, warranty, or updates\nfor a work that has been modified or installed by the recipient, or for\nthe User Product in which it has been modified or installed.  Access to a\nnetwork may be denied when the modification itself materially and\nadversely affects the operation of the network or violates the rules and\nprotocols for communication across the network.\n\n  Corresponding Source conveyed, and Installation Information provided,\nin accord with this section must be in a format that is publicly\ndocumented (and with an implementation available to the public in\nsource code form), and must require no special password or key for\nunpacking, reading or copying.\n\n  7. Additional Terms.\n\n  \"Additional permissions\" are terms that supplement the terms of this\nLicense by making exceptions from one or more of its conditions.\nAdditional permissions that are applicable to the entire Program shall\nbe treated as though they were included in this License, to the extent\nthat they are valid under applicable law.  
If additional permissions\napply only to part of the Program, that part may be used separately\nunder those permissions, but the entire Program remains governed by\nthis License without regard to the additional permissions.\n\n  When you convey a copy of a covered work, you may at your option\nremove any additional permissions from that copy, or from any part of\nit.  (Additional permissions may be written to require their own\nremoval in certain cases when you modify the work.)  You may place\nadditional permissions on material, added by you to a covered work,\nfor which you have or can give appropriate copyright permission.\n\n  Notwithstanding any other provision of this License, for material you\nadd to a covered work, you may (if authorized by the copyright holders of\nthat material) supplement the terms of this License with terms:\n\n    a) Disclaiming warranty or limiting liability differently from the\n    terms of sections 15 and 16 of this License; or\n\n    b) Requiring preservation of specified reasonable legal notices or\n    author attributions in that material or in the Appropriate Legal\n    Notices displayed by works containing it; or\n\n    c) Prohibiting misrepresentation of the origin of that material, or\n    requiring that modified versions of such material be marked in\n    reasonable ways as different from the original version; or\n\n    d) Limiting the use for publicity purposes of names of licensors or\n    authors of the material; or\n\n    e) Declining to grant rights under trademark law for use of some\n    trade names, trademarks, or service marks; or\n\n    f) Requiring indemnification of licensors and authors of that\n    material by anyone who conveys the material (or modified versions of\n    it) with contractual assumptions of liability to the recipient, for\n    any liability that these contractual assumptions directly impose on\n    those licensors and authors.\n\n  All other non-permissive additional terms are considered 
\"further\nrestrictions\" within the meaning of section 10.  If the Program as you\nreceived it, or any part of it, contains a notice stating that it is\ngoverned by this License along with a term that is a further\nrestriction, you may remove that term.  If a license document contains\na further restriction but permits relicensing or conveying under this\nLicense, you may add to a covered work material governed by the terms\nof that license document, provided that the further restriction does\nnot survive such relicensing or conveying.\n\n  If you add terms to a covered work in accord with this section, you\nmust place, in the relevant source files, a statement of the\nadditional terms that apply to those files, or a notice indicating\nwhere to find the applicable terms.\n\n  Additional terms, permissive or non-permissive, may be stated in the\nform of a separately written license, or stated as exceptions;\nthe above requirements apply either way.\n\n  8. Termination.\n\n  You may not propagate or modify a covered work except as expressly\nprovided under this License.  
Any attempt otherwise to propagate or\nmodify it is void, and will automatically terminate your rights under\nthis License (including any patent licenses granted under the third\nparagraph of section 11).\n\n  However, if you cease all violation of this License, then your\nlicense from a particular copyright holder is reinstated (a)\nprovisionally, unless and until the copyright holder explicitly and\nfinally terminates your license, and (b) permanently, if the copyright\nholder fails to notify you of the violation by some reasonable means\nprior to 60 days after the cessation.\n\n  Moreover, your license from a particular copyright holder is\nreinstated permanently if the copyright holder notifies you of the\nviolation by some reasonable means, this is the first time you have\nreceived notice of violation of this License (for any work) from that\ncopyright holder, and you cure the violation prior to 30 days after\nyour receipt of the notice.\n\n  Termination of your rights under this section does not terminate the\nlicenses of parties who have received copies or rights from you under\nthis License.  If your rights have been terminated and not permanently\nreinstated, you do not qualify to receive new licenses for the same\nmaterial under section 10.\n\n  9. Acceptance Not Required for Having Copies.\n\n  You are not required to accept this License in order to receive or\nrun a copy of the Program.  Ancillary propagation of a covered work\noccurring solely as a consequence of using peer-to-peer transmission\nto receive a copy likewise does not require acceptance.  However,\nnothing other than this License grants you permission to propagate or\nmodify any covered work.  These actions infringe copyright if you do\nnot accept this License.  Therefore, by modifying or propagating a\ncovered work, you indicate your acceptance of this License to do so.\n\n  10. 
Automatic Licensing of Downstream Recipients.\n\n  Each time you convey a covered work, the recipient automatically\nreceives a license from the original licensors, to run, modify and\npropagate that work, subject to this License.  You are not responsible\nfor enforcing compliance by third parties with this License.\n\n  An \"entity transaction\" is a transaction transferring control of an\norganization, or substantially all assets of one, or subdividing an\norganization, or merging organizations.  If propagation of a covered\nwork results from an entity transaction, each party to that\ntransaction who receives a copy of the work also receives whatever\nlicenses to the work the party's predecessor in interest had or could\ngive under the previous paragraph, plus a right to possession of the\nCorresponding Source of the work from the predecessor in interest, if\nthe predecessor has it or can get it with reasonable efforts.\n\n  You may not impose any further restrictions on the exercise of the\nrights granted or affirmed under this License.  For example, you may\nnot impose a license fee, royalty, or other charge for exercise of\nrights granted under this License, and you may not initiate litigation\n(including a cross-claim or counterclaim in a lawsuit) alleging that\nany patent claim is infringed by making, using, selling, offering for\nsale, or importing the Program or any portion of it.\n\n  11. Patents.\n\n  A \"contributor\" is a copyright holder who authorizes use under this\nLicense of the Program or a work on which the Program is based.  
The\nwork thus licensed is called the contributor's \"contributor version\".\n\n  A contributor's \"essential patent claims\" are all patent claims\nowned or controlled by the contributor, whether already acquired or\nhereafter acquired, that would be infringed by some manner, permitted\nby this License, of making, using, or selling its contributor version,\nbut do not include claims that would be infringed only as a\nconsequence of further modification of the contributor version.  For\npurposes of this definition, \"control\" includes the right to grant\npatent sublicenses in a manner consistent with the requirements of\nthis License.\n\n  Each contributor grants you a non-exclusive, worldwide, royalty-free\npatent license under the contributor's essential patent claims, to\nmake, use, sell, offer for sale, import and otherwise run, modify and\npropagate the contents of its contributor version.\n\n  In the following three paragraphs, a \"patent license\" is any express\nagreement or commitment, however denominated, not to enforce a patent\n(such as an express permission to practice a patent or covenant not to\nsue for patent infringement).  To \"grant\" such a patent license to a\nparty means to make such an agreement or commitment not to enforce a\npatent against the party.\n\n  If you convey a covered work, knowingly relying on a patent license,\nand the Corresponding Source of the work is not available for anyone\nto copy, free of charge and under the terms of this License, through a\npublicly available network server or other readily accessible means,\nthen you must either (1) cause the Corresponding Source to be so\navailable, or (2) arrange to deprive yourself of the benefit of the\npatent license for this particular work, or (3) arrange, in a manner\nconsistent with the requirements of this License, to extend the patent\nlicense to downstream recipients.  
\"Knowingly relying\" means you have\nactual knowledge that, but for the patent license, your conveying the\ncovered work in a country, or your recipient's use of the covered work\nin a country, would infringe one or more identifiable patents in that\ncountry that you have reason to believe are valid.\n\n  If, pursuant to or in connection with a single transaction or\narrangement, you convey, or propagate by procuring conveyance of, a\ncovered work, and grant a patent license to some of the parties\nreceiving the covered work authorizing them to use, propagate, modify\nor convey a specific copy of the covered work, then the patent license\nyou grant is automatically extended to all recipients of the covered\nwork and works based on it.\n\n  A patent license is \"discriminatory\" if it does not include within\nthe scope of its coverage, prohibits the exercise of, or is\nconditioned on the non-exercise of one or more of the rights that are\nspecifically granted under this License.  You may not convey a covered\nwork if you are a party to an arrangement with a third party that is\nin the business of distributing software, under which you make payment\nto the third party based on the extent of your activity of conveying\nthe work, and under which the third party grants, to any of the\nparties who would receive the covered work from you, a discriminatory\npatent license (a) in connection with copies of the covered work\nconveyed by you (or copies made from those copies), or (b) primarily\nfor and in connection with specific products or compilations that\ncontain the covered work, unless you entered into that arrangement,\nor that patent license was granted, prior to 28 March 2007.\n\n  Nothing in this License shall be construed as excluding or limiting\nany implied license or other defenses to infringement that may\notherwise be available to you under applicable patent law.\n\n  12. 
No Surrender of Others' Freedom.\n\n  If conditions are imposed on you (whether by court order, agreement or\notherwise) that contradict the conditions of this License, they do not\nexcuse you from the conditions of this License.  If you cannot convey a\ncovered work so as to satisfy simultaneously your obligations under this\nLicense and any other pertinent obligations, then as a consequence you may\nnot convey it at all.  For example, if you agree to terms that obligate you\nto collect a royalty for further conveying from those to whom you convey\nthe Program, the only way you could satisfy both those terms and this\nLicense would be to refrain entirely from conveying the Program.\n\n  13. Remote Network Interaction; Use with the GNU General Public License.\n\n  Notwithstanding any other provision of this License, if you modify the\nProgram, your modified version must prominently offer all users\ninteracting with it remotely through a computer network (if your version\nsupports such interaction) an opportunity to receive the Corresponding\nSource of your version by providing access to the Corresponding Source\nfrom a network server at no charge, through some standard or customary\nmeans of facilitating copying of software.  This Corresponding Source\nshall include the Corresponding Source for any work covered by version 3\nof the GNU General Public License that is incorporated pursuant to the\nfollowing paragraph.\n\n  Notwithstanding any other provision of this License, you have\npermission to link or combine any covered work with a work licensed\nunder version 3 of the GNU General Public License into a single\ncombined work, and to convey the resulting work.  The terms of this\nLicense will continue to apply to the part which is the covered work,\nbut the work with which it is combined will remain governed by version\n3 of the GNU General Public License.\n\n  14. 
Revised Versions of this License.\n\n  The Free Software Foundation may publish revised and/or new versions of\nthe GNU Affero General Public License from time to time.  Such new versions\nwill be similar in spirit to the present version, but may differ in detail to\naddress new problems or concerns.\n\n  Each version is given a distinguishing version number.  If the\nProgram specifies that a certain numbered version of the GNU Affero General\nPublic License \"or any later version\" applies to it, you have the\noption of following the terms and conditions either of that numbered\nversion or of any later version published by the Free Software\nFoundation.  If the Program does not specify a version number of the\nGNU Affero General Public License, you may choose any version ever published\nby the Free Software Foundation.\n\n  If the Program specifies that a proxy can decide which future\nversions of the GNU Affero General Public License can be used, that proxy's\npublic statement of acceptance of a version permanently authorizes you\nto choose that version for the Program.\n\n  Later license versions may give you additional or different\npermissions.  However, no additional obligations are imposed on any\nauthor or copyright holder as a result of your choosing to follow a\nlater version.\n\n  15. Disclaimer of Warranty.\n\n  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY\nAPPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT\nHOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM \"AS IS\" WITHOUT WARRANTY\nOF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,\nTHE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR\nPURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM\nIS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF\nALL NECESSARY SERVICING, REPAIR OR CORRECTION.\n\n  16. 
Limitation of Liability.\n\n  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING\nWILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS\nTHE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY\nGENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE\nUSE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF\nDATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD\nPARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),\nEVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF\nSUCH DAMAGES.\n\n  17. Interpretation of Sections 15 and 16.\n\n  If the disclaimer of warranty and limitation of liability provided\nabove cannot be given local legal effect according to their terms,\nreviewing courts shall apply local law that most closely approximates\nan absolute waiver of all civil liability in connection with the\nProgram, unless a warranty or assumption of liability accompanies a\ncopy of the Program in return for a fee.\n\n                     END OF TERMS AND CONDITIONS\n\n            How to Apply These Terms to Your New Programs\n\n  If you develop a new program, and you want it to be of the greatest\npossible use to the public, the best way to achieve this is to make it\nfree software which everyone can redistribute and change under these terms.\n\n  To do so, attach the following notices to the program.  
It is safest\nto attach them to the start of each source file to most effectively\nstate the exclusion of warranty; and each file should have at least\nthe \"copyright\" line and a pointer to where the full notice is found.\n\n    <one line to give the program's name and a brief idea of what it does.>\n    Copyright (C) <year>  <name of author>\n\n    This program is free software: you can redistribute it and/or modify\n    it under the terms of the GNU Affero General Public License as published by\n    the Free Software Foundation, either version 3 of the License, or\n    (at your option) any later version.\n\n    This program is distributed in the hope that it will be useful,\n    but WITHOUT ANY WARRANTY; without even the implied warranty of\n    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n    GNU Affero General Public License for more details.\n\n    You should have received a copy of the GNU Affero General Public License\n    along with this program.  If not, see <http://www.gnu.org/licenses/>.\n\nAlso add information on how to contact you by electronic and paper mail.\n\n  If your software can interact with users remotely through a computer\nnetwork, you should also make sure that it provides a way for users to\nget its source.  For example, if your program is a web application, its\ninterface could display a \"Source\" link that leads users to an archive\nof the code.  
There are many ways you could offer source, and different\nsolutions will be better for different programs; see section 13 for the\nspecific requirements.\n\n  You should also get your employer (if you work as a programmer) or school,\nif any, to sign a \"copyright disclaimer\" for the program, if necessary.\nFor more information on this, and how to apply and follow the GNU AGPL, see\n<http://www.gnu.org/licenses/>.\n\n==============================================================================\n\n--------------------------------\nThird Party Software Information\n--------------------------------\n\nPBS Pro includes code created by other parties which is provided under\nthe open source software license agreements chosen by the authors. All\nunmodified files from these and other sources retain their original\ncopyright and license notices.\n\n_ _ _ _ _ _\n\nsrc/scheduler/sort.c\n\nCopyright (c) 1992, 1993. Regents of the University of California.\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions\nare met:\n1. Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n2. Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n3. All advertising materials mentioning features or use of this software\n   must display the following acknowledgement:\n     This product includes software developed by the University of\n     California, Berkeley and its contributors.\n4. 
Neither the name of the University nor the names of its contributors\n   may be used to endorse or promote products derived from this software\n   without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND\nANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\nARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS\nOR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\nHOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\nLIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\nOUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\nSUCH DAMAGE.\n\n_ _ _ _ _ _\n\nsrc/lib/Libwin/rcmd.c\n\nCopyright (c) 1983 Regents of the University of California.\nAll rights reserved.\n\nRedistribution and use in source and binary forms are permitted\nprovided that the above copyright notice and this paragraph are\nduplicated in all such forms and that any documentation,\nadvertising materials, and other materials related to such\ndistribution and use acknowledge that the software was developed\nby the University of California, Berkeley.  The name of the\nUniversity may not be used to endorse or promote products derived\nfrom this software without specific prior written permission.\nTHIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR\nIMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED\nWARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.\n\n_ _ _ _ _ _\n\nsrc/resmom/popen.c\n\nCopyright (c) 1988, 1993\n     The Regents of the University of California.  
All rights reserved.\n\nThis code is derived from software written by Ken Arnold and\npublished in UNIX Review, Vol. 6, No. 8.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions\nare met:\n1. Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n2. Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n3. All advertising materials mentioning features or use of this software\n   must display the following acknowledgement:\n     This product includes software developed by the University of\n     California, Berkeley and its contributors.\n4. Neither the name of the University nor the names of its contributors\n   may be used to endorse or promote products derived from this software\n   without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND\nANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\nIMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\nARE DISCLAIMED.  
IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE\nFOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\nDAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS\nOR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\nHOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\nLIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\nOUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\nSUCH DAMAGE.\n\n_ _ _ _ _ _\n\nsrc/lib/Libutil/avltree.c\n\nCopyright (c) 2000 Gregory Tseytin <tseyting@acm.org>\n  All rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions\nare met:\n1. Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer as\n   the first lines of this file unmodified.\n2. Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n\nTHIS SOFTWARE IS PROVIDED BY Gregory Tseytin ``AS IS'' AND ANY EXPRESS OR\nIMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES\nOF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.\nIN NO EVENT SHALL Gregory Tseytin BE LIABLE FOR ANY DIRECT, INDIRECT,\nINCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT\nNOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\nDATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\nTHEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF\nTHIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n_ _ _ _ _ _\n\nbuildutils/depcomp\nbuildutils/compile\n\nCopyright (C) 1999-2013 Free 
Software Foundation, Inc.\n\nThis program is free software; you can redistribute it and/or modify\nit under the terms of the GNU General Public License as published by\nthe Free Software Foundation; either version 2, or (at your option)\nany later version.\n\nThis program is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\nGNU General Public License for more details.\n\nYou should have received a copy of the GNU General Public License\nalong with this program.  If not, see <http://www.gnu.org/licenses/>.\n\nAs a special exception to the GNU General Public License, if you\ndistribute this file as part of a program that contains a\nconfiguration script generated by Autoconf, you may include it under\nthe same distribution terms that you use for the rest of that program.\n\n_ _ _ _ _ _\n\nbuildutils/install-sh\n\nCopyright (C) 1994 X Consortium\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to\ndeal in the Software without restriction, including without limitation the\nrights to use, copy, modify, merge, publish, distribute, sublicense, and/or\nsell copies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in\nall copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  
IN NO EVENT SHALL THE\nX CONSORTIUM BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN\nAN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNEC-\nTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\nExcept as contained in this notice, the name of the X Consortium shall not\nbe used in advertising or otherwise to promote the sale, use or other deal-\nings in this Software without prior written authorization from the X Consor-\ntium.\n\n_ _ _ _ _ _\n\nbuildutils/ltmain.sh\nm4/libtool.m4\n\n  Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,\n                2006, 2007, 2008, 2009, 2010, 2011 Free Software\n                Foundation, Inc.\n  Written by Gordon Matzigkeit, 1996\n\n  This file is part of GNU Libtool.\n\nGNU Libtool is free software; you can redistribute it and/or\nmodify it under the terms of the GNU General Public License as\npublished by the Free Software Foundation; either version 2 of\nthe License, or (at your option) any later version.\n\nAs a special exception to the GNU General Public License,\nif you distribute this file as part of a program or library that\nis built using GNU Libtool, you may include this file under the\nsame distribution terms that you use for the rest of that program.\n\nGNU Libtool is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\nGNU General Public License for more details.\n\nYou should have received a copy of the GNU General Public License\nalong with GNU Libtool; see the file COPYING.  
If not, a copy\ncan be downloaded from http://www.gnu.org/licenses/gpl.html, or\nobtained by writing to the Free Software Foundation, Inc.,\n51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\n_ _ _ _ _ _\n\nm4/lt~obsolete.m4\n\n  Copyright (C) 2004, 2005, 2007, 2009 Free Software Foundation, Inc.\n  Written by Scott James Remnant, 2004. (see license below)\n\nm4/ltoptions.m4\n\n  Copyright (C) 2004, 2005, 2007, 2008, 2009 Free Software Foundation,\n  Inc.\n  Written by Gary V. Vaughan, 2004  (see license below)\n\nm4/ltsugar.m4\n\nCopyright (C) 2004, 2005, 2007, 2008 Free Software Foundation, Inc.\nWritten by Gary V. Vaughan, 2004   (see license below)\n\nm4/ltversion.m4\n\n  Copyright (C) 2004 Free Software Foundation, Inc.\n  Written by Scott James Remnant, 2004.   (see license below)\n\nm4 GNU license:\n\nLicense: GPL-2+ or configure-same-as-package\n This file is free software; you can redistribute it and/or modify it\n under the terms of the GNU General Public License as published by the\n Free Software Foundation; either version 2 of the License, or (at your\n option) any later version.\n\n This program is distributed in the hope that it will be useful, but\n WITHOUT ANY WARRANTY; without even the implied warranty of\n MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General\n Public License for more details.\n\n The full text of the GNU General Public License version 2 is available on\n Debian systems in /usr/share/common-licenses/GPL-2.\n\n As a special exception to the GNU General Public License, if you\n distribute this file as part of a program that contains a configuration\n script generated by Autoconf, you may include it under the same\n distribution terms that you use for the rest of that program.\n\n_ _ _ _ _ _\n\nbuildutils/makedepend-sh\n\nCopyright (c) 1996, 1998 The NetBSD Foundation, Inc.\nAll rights reserved.\n\nThis code is derived from software contributed to The NetBSD Foundation\nby Lonhyn T. 
Jasinskyj.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions\nare met:\n1. Redistributions of source code must retain the above copyright\n   notice, this list of conditions and the following disclaimer.\n2. Redistributions in binary form must reproduce the above copyright\n   notice, this list of conditions and the following disclaimer in the\n   documentation and/or other materials provided with the distribution.\n3. All advertising materials mentioning features or use of this software\n   must display the following acknowledgement:\n       This product includes software developed by the NetBSD\n       Foundation, Inc. and its contributors.\n4. Neither the name of The NetBSD Foundation nor the names of its\n   contributors may be used to endorse or promote products derived\n   from this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS\n``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED\nTO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR\nPURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS\nBE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\nCONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\nSUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\nINTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\nCONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\nARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\nPOSSIBILITY OF SUCH DAMAGE.\n"
  },
  {
    "path": "Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nACLOCAL_AMFLAGS = -I m4\n\nSUBDIRS = buildutils src doc test\nEXTRA_DIST = \\\n\tCOPYRIGHT \\\n\tINSTALL \\\n\tLICENSE \\\n\tREADME.md \\\n\tautogen.sh \\\n\topenpbs-rpmlintrc \\\n\topenpbs.spec\n"
  },
  {
    "path": "PBS_License.txt",
    "content": "Copyright (C) 1994-2021 Altair Engineering, Inc.\nFor more information, contact Altair at www.altair.com.\n\nThis file is part of both the OpenPBS software (\"OpenPBS\")\nand the PBS Professional (\"PBS Pro\") software.\n\nOpen Source License Information:\n\nOpenPBS is free software. You can redistribute it and/or modify it under\nthe terms of the GNU Affero General Public License as published by the\nFree Software Foundation, either version 3 of the License, or (at your\noption) any later version.\n\nOpenPBS is distributed in the hope that it will be useful, but WITHOUT\nANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\nFITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\nLicense for more details.\n\nYou should have received a copy of the GNU Affero General Public License\nalong with this program.  If not, see <http://www.gnu.org/licenses/>.\n\nCommercial License Information:\n\nPBS Pro is commercially licensed software that shares a common core with\nthe OpenPBS software.  For a copy of the commercial license terms and\nconditions, go to: (http://www.pbspro.com/agreement.html) or contact the\nAltair Legal Department.\n\nAltair's dual-license business model allows companies, individuals, and\norganizations to create proprietary derivative works of OpenPBS and\ndistribute them - whether embedded or bundled with other software -\nunder a commercial license agreement.\n\nUse of Altair's trademarks, including but not limited to \"PBS™\",\n\"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\nsubject to Altair's trademark licensing policies.\n"
  },
  {
    "path": "README.md",
    "content": "### OpenPBS Open Source Project\n\nIf you are new to this project, please start at https://www.openpbs.org/\n\nNote: In May 2020, OpenPBS became the new name for the PBS Professional Open Source Project. (PBS Professional will be used to refer to the commercial version; OpenPBS to the Open Source version -- same code, easier naming.)  As there are many parts to the project, it will take several weeks to change the name in all places, so you will continue to see references to PBS Pro -- stay tuned.\n\n### What is OpenPBS?\nOpenPBS® software optimizes job scheduling and workload management in high-performance computing (HPC) environments – clusters, clouds, and supercomputers – improving system efficiency and people’s productivity.  Built by HPC people for HPC people, OpenPBS is fast, scalable, secure, and resilient, and supports all modern infrastructure, middleware, and applications.\n\n* **Scalability:** supports millions of cores with fast job dispatch and minimal latency; tested beyond 50,000 nodes\n* **Policy-Driven Scheduling:** meets unique site goals and SLAs by balancing job turnaround time and utilization with optimal job placement\n* **Resiliency:** includes automatic fail-over architecture with no single point of failure – jobs are never lost, and jobs continue to run despite failures\n* **Flexible Plugin Framework:** simplifies administration with enhanced visibility and extensibility; customize implementations to meet complex requirements\n* **Health Checks:** monitors and automatically mitigates faults with a comprehensive health check framework\n* **Voted #1 HPC Software** by HPC Wire readers and proven for over 20 years at thousands of sites around the globe in both the private sector and public sector\n\n### Community and Ways to Participate\n\nOpenPBS is a community effort and there are a variety of ways to engage, from helping answer questions to benchmarking to developing new capabilities and tests.  
We value being aggressively open and inclusive, but also aggressively respectful and professional.  See our [Code of Conduct](https://openpbs.atlassian.net/wiki/display/PBSPro/Code+of+Conduct).\n\nThe best place to start is by joining the community forum.  You may sign up or view the archives via:\n\n* [Announcements](http://community.openpbs.org/c/announcements) -- important updates relevant to the entire PBS Pro community\n* [Users/Site Admins](http://community.openpbs.org/c/users-site-administrators) -- general questions and discussions among end users (system admins, engineers, scientists)\n* [Developers](http://community.openpbs.org/c/developers) -- technical discussions among developers\n\nTo dive in deeper and learn more about the project and what the community is up to, visit:\n\n* [Contributor’s portal](https://openpbs.atlassian.net/wiki) -- includes roadmaps, processes, how to articles, coding standards, release notes, etc  (Uses Confluence)\n* [Source code](https://github.com/OpenPBS/openpbs) -- includes full source code and test framework (Uses Github)\n* [Issue tracking system](https://github.com/OpenPBS/openpbs/issues) -- includes bugs and feature requests and status (Uses Github).  Previously, we used [JIRA](https://pbspro.atlassian.net), which contains older issues.\n\nOpenPBS is also integrated in the OpenHPC software stack. The mission of OpenHPC is to provide an integrated collection of HPC-centric components to provide full-featured HPC software stacks. OpenHPC is a Linux Foundation Collaborative Project.  
Learn more at:\n\n* [OpenHPC.community](http://openhpc.community)\n* [The Linux Foundation](http://thelinuxfoundation.org)\n\n### Our Vision:  One Scheduler for the whole HPC World\n\nThere is a huge opportunity to advance the state of the art in HPC scheduling by bringing the whole HPC world together, marrying public sector innovations with private sector enterprise know-how, and retargeting the effort wasted re-implementing the same old capabilities again and again towards pushing the outside of the envelope.  At the heart of this vision is fostering common standards (at least defacto standards like common software).  To this end, Altair has made a big investment by releasing the PBS Professional technology as OpenPBS (under an Open Source license to meet the needs of the public sector), while also continuing to offer PBS Professional (under a commercial license to meet the needs of the private sector).  One defacto standard that can work for the whole HPC community.\n\n### Current Build status\n[![Build Status](https://travis-ci.com/OpenPBS/openpbs.svg?branch=master)](https://travis-ci.com/OpenPBS/openpbs)\n"
  },
  {
    "path": "autogen.sh",
    "content": "#!/bin/sh\n\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nif test -d ./src/resmom; then\n\techo \"Generating configure script and Makefile templates.\"\n\texec autoreconf --force --install -I m4 $*\nelse\n\techo \"Execute `basename $0` from the top level distribution directory.\"\nfi\n"
  },
  {
    "path": "azure-pipelines.yml",
    "content": "trigger:\n  branches:\n    include:\n      - master\n      - release_*\n\npr:\n  branches:\n    include:\n      - master\n      - release_*\n\npool:\n  vmImage: 'ubuntu-latest'  # Changed from ubuntu-20.04\n\nvariables:\n  - name: DOCKER_BUILDKIT\n    value: 1\n\njobs:\n  - job: runcheck\n    displayName: 'Code Quality Checks'\n    pool:\n      vmImage: 'ubuntu-latest'\n    steps:\n      - checkout: self\n        displayName: 'Checkout code'\n      \n      - bash: |\n          sudo apt-get update\n          sudo apt-get install -y python3-pip\n          sudo pip3 install --upgrade pip\n          sudo pip3 install pycodestyle pep8 flake8 clang-format\n          \n          # Create pep8 symlink if it doesn't exist (for compatibility)\n          if ! command -v pep8 &> /dev/null; then\n            sudo ln -sf $(which pycodestyle) /usr/local/bin/pep8\n          fi\n          \n          # Verify installations\n          echo \"Checking installed tools:\"\n          python3 --version\n          pip3 --version\n          pep8 --version || pycodestyle --version\n          clang-format --version\n          \n          # Check if runchecks script exists\n          if [ -f \".github/runchecks\" ]; then\n            chmod +x .github/runchecks\n            ./.github/runchecks\n          else\n            echo \"Warning: .github/runchecks script not found\"\n            # Run basic checks if script is missing\n            echo \"Running basic Python style checks...\"\n            find . 
-name \"*.py\" -exec pep8 {} \\; || true\n          fi\n        displayName: 'Run code quality checks'\n\n  - job: ubuntu_2004_build\n    displayName: 'Ubuntu 20.04'\n    dependsOn: runcheck\n    pool:\n      vmImage: 'ubuntu-latest'\n    variables:\n      OS_TYPE: \"ubuntu:20.04\"\n      PKG_INSTALL_CMD: \"apt-get -y update && apt-get -y upgrade && apt-get install -y python3 build-essential\"\n      DOCKER_EXTRA_ARG: \"-e DEBIAN_FRONTEND=noninteractive -e LANGUAGE=C.UTF-8 -e LANG=C.UTF-8 -e LC_ALL=C.UTF-8\"\n      CI_CMD: \"./ci --local\"\n      CONTAINER_NAME: \"ubuntu2004-$(Build.BuildId)\"\n    steps:\n    - checkout: self\n      displayName: 'Checkout code'\n      \n    - script: |\n        echo \"Starting build for Ubuntu 20.04\"\n        echo \"OS Type: $(OS_TYPE)\"\n        echo \"Package Install: $(PKG_INSTALL_CMD)\"\n        echo \"Docker Args: $(DOCKER_EXTRA_ARG)\"\n        echo \"CI Command: $(CI_CMD)\"\n        echo \"Container Name: $(CONTAINER_NAME)\"\n      displayName: 'Display build configuration'\n      \n    - script: |\n        # Pull the Docker image\n        docker pull $(OS_TYPE)\n        \n        # Start the container with proper init to handle zombie processes\n        docker run -d \\\n          $(DOCKER_EXTRA_ARG) \\\n          -h pbs.dev.local \\\n          --name $(CONTAINER_NAME) \\\n          -v $(pwd):$(pwd) \\\n          --privileged \\\n          --init \\\n          -w $(pwd) \\\n          $(OS_TYPE) \\\n          /bin/bash -c \"sleep 3600\"\n          \n        # Verify container is running\n        docker ps | grep $(CONTAINER_NAME)\n        \n      displayName: 'Start Docker container'\n      \n    - script: |\n        # Install packages\n        docker exec $(CONTAINER_NAME) bash -c \"$(PKG_INSTALL_CMD)\"\n        \n        # Install additional tools for process management\n        docker exec $(CONTAINER_NAME) bash -c \"apt-get install -y procps psmisc\"\n        \n        # Verify Python installation\n        docker exec 
$(CONTAINER_NAME) python3 --version\n        \n      displayName: 'Install dependencies'\n      \n    - script: |\n        # Monitor processes before running CI\n        echo \"=== Process monitoring before CI ===\"\n        docker exec $(CONTAINER_NAME) bash -c \"\n          echo 'Current processes:'\n          ps aux | head -20\n          echo ''\n          echo 'Checking for zombie/defunct processes:'\n          ps aux | grep -E 'defunct|<zombie>' || echo 'No zombie processes found'\n          echo ''\n          echo 'PBS-related processes:'\n          ps aux | grep -E 'pbs_|openpbs' || echo 'No PBS processes found'\n        \"\n      displayName: 'Monitor processes before CI'\n      \n    - script: |\n        # Check if ci directory and script exist\n        docker exec $(CONTAINER_NAME) bash -c \"ls -la\"\n        docker exec $(CONTAINER_NAME) bash -c \"if [ -d 'ci' ]; then ls -la ci/; else echo 'ci directory not found'; fi\"\n        \n        # Run CI script if it exists\n        if docker exec $(CONTAINER_NAME) bash -c \"[ -f 'ci/ci' ] || [ -f './ci' ]\"; then\n          docker exec --privileged $(CONTAINER_NAME) bash -c \"cd ci && $(CI_CMD)\"\n        else\n          echo \"CI script not found, running basic build test\"\n          docker exec $(CONTAINER_NAME) bash -c \"python3 -c 'print(\\\"Python test successful\\\")'\"\n        fi\n        \n        # Check for any PBS processes and stop them properly\n        echo \"Checking for PBS processes...\"\n        docker exec $(CONTAINER_NAME) bash -c \"ps aux | grep -E 'pbs_|openpbs' || echo 'No PBS processes found'\"\n        \n        # Stop PBS services properly if they're running\n        docker exec $(CONTAINER_NAME) bash -c \"\n          if command -v pbs_server &> /dev/null; then\n            echo 'Stopping PBS services...'\n            pkill -TERM pbs_server || true\n            pkill -TERM pbs_sched || true\n            pkill -TERM pbs_mom || true\n            pkill -TERM pbs_ds_monitor || true\n    
        sleep 2\n            # Force kill if still running\n            pkill -KILL pbs_server || true\n            pkill -KILL pbs_sched || true\n            pkill -KILL pbs_mom || true\n            pkill -KILL pbs_ds_monitor || true\n          fi\n        \" || true\n        \n      displayName: 'Run CI tests'\n      \n    - script: |\n        # Proper PBS cleanup and container shutdown\n        echo \"Cleaning up PBS processes and container...\"\n        \n        # Stop PBS services gracefully first\n        docker exec $(CONTAINER_NAME) bash -c \"\n          echo 'Stopping PBS services gracefully...'\n          if command -v qterm &> /dev/null; then\n            qterm -t quick || true\n          fi\n          \n          # Stop individual PBS components\n          pkill -TERM pbs_server || true\n          pkill -TERM pbs_sched || true  \n          pkill -TERM pbs_mom || true\n          pkill -TERM pbs_ds_monitor || true\n          \n          # Wait a bit for graceful shutdown\n          sleep 3\n          \n          # Force kill any remaining PBS processes\n          pkill -KILL pbs_server || true\n          pkill -KILL pbs_sched || true\n          pkill -KILL pbs_mom || true\n          pkill -KILL pbs_ds_monitor || true\n          \n          # Clean up any remaining zombie processes\n          ps aux | grep -E 'defunct|<zombie>' || echo 'No zombie processes found'\n        \" || true\n        \n        # Stop and remove container\n        docker stop $(CONTAINER_NAME) || true\n        docker rm $(CONTAINER_NAME) || true\n        \n      displayName: 'Cleanup Docker container'\n      condition: always()\n\n  - job: ubuntu_2404_build\n    displayName: 'Ubuntu 24.04'\n    dependsOn: runcheck\n    pool:\n      vmImage: 'ubuntu-latest'\n    variables:\n      OS_TYPE: \"ubuntu:24.04\"\n      PKG_INSTALL_CMD: \"apt-get -y update && apt-get -y upgrade && apt-get install -y python3 build-essential\"\n      DOCKER_EXTRA_ARG: \"-e DEBIAN_FRONTEND=noninteractive -e 
LANGUAGE=C.UTF-8 -e LANG=C.UTF-8 -e LC_ALL=C.UTF-8\"\n      CI_CMD: \"./ci --local\"\n      CONTAINER_NAME: \"ubuntu2404-$(Build.BuildId)\"\n    steps:\n    - checkout: self\n      displayName: 'Checkout code'\n      \n    - script: |\n        echo \"Starting build for Ubuntu 24.04\"\n        echo \"OS Type: $(OS_TYPE)\"\n        echo \"Package Install: $(PKG_INSTALL_CMD)\"\n        echo \"Docker Args: $(DOCKER_EXTRA_ARG)\"\n        echo \"CI Command: $(CI_CMD)\"\n        echo \"Container Name: $(CONTAINER_NAME)\"\n      displayName: 'Display build configuration'\n      \n    - script: |\n        # Pull the Docker image\n        docker pull $(OS_TYPE)\n        \n        # Start the container with proper init to handle zombie processes\n        docker run -d \\\n          $(DOCKER_EXTRA_ARG) \\\n          -h pbs.dev.local \\\n          --name $(CONTAINER_NAME) \\\n          -v $(pwd):$(pwd) \\\n          --privileged \\\n          --init \\\n          -w $(pwd) \\\n          $(OS_TYPE) \\\n          /bin/bash -c \"sleep 3600\"\n          \n        # Verify container is running\n        docker ps | grep $(CONTAINER_NAME)\n        \n      displayName: 'Start Docker container'\n      \n    - script: |\n        # Install packages\n        docker exec $(CONTAINER_NAME) bash -c \"$(PKG_INSTALL_CMD)\"\n        \n        # Install additional tools for process management\n        docker exec $(CONTAINER_NAME) bash -c \"apt-get install -y procps psmisc\"\n        \n        # Verify Python installation\n        docker exec $(CONTAINER_NAME) python3 --version\n        \n      displayName: 'Install dependencies'\n      \n    - script: |\n        # Monitor processes before running CI\n        echo \"=== Process monitoring before CI ===\"\n        docker exec $(CONTAINER_NAME) bash -c \"\n          echo 'Current processes:'\n          ps aux | head -20\n          echo ''\n          echo 'Checking for zombie/defunct processes:'\n          ps aux | grep -E 'defunct|<zombie>' || echo 
'No zombie processes found'\n          echo ''\n          echo 'PBS-related processes:'\n          ps aux | grep -E 'pbs_|openpbs' || echo 'No PBS processes found'\n        \"\n      displayName: 'Monitor processes before CI'\n      \n    - script: |\n        # Check if ci directory and script exist\n        docker exec $(CONTAINER_NAME) bash -c \"ls -la\"\n        docker exec $(CONTAINER_NAME) bash -c \"if [ -d 'ci' ]; then ls -la ci/; else echo 'ci directory not found'; fi\"\n        \n        # Run CI script if it exists\n        if docker exec $(CONTAINER_NAME) bash -c \"[ -f 'ci/ci' ] || [ -f './ci' ]\"; then\n          docker exec --privileged $(CONTAINER_NAME) bash -c \"cd ci && $(CI_CMD)\"\n        else\n          echo \"CI script not found, running basic build test\"\n          docker exec $(CONTAINER_NAME) bash -c \"python3 -c 'print(\\\"Python test successful\\\")'\"\n        fi\n        \n        # Check for any PBS processes and stop them properly\n        echo \"Checking for PBS processes...\"\n        docker exec $(CONTAINER_NAME) bash -c \"ps aux | grep -E 'pbs_|openpbs' || echo 'No PBS processes found'\"\n        \n        # Stop PBS services properly if they're running\n        docker exec $(CONTAINER_NAME) bash -c \"\n          if command -v pbs_server &> /dev/null; then\n            echo 'Stopping PBS services...'\n            pkill -TERM pbs_server || true\n            pkill -TERM pbs_sched || true\n            pkill -TERM pbs_mom || true\n            pkill -TERM pbs_ds_monitor || true\n            sleep 2\n            # Force kill if still running\n            pkill -KILL pbs_server || true\n            pkill -KILL pbs_sched || true\n            pkill -KILL pbs_mom || true\n            pkill -KILL pbs_ds_monitor || true\n          fi\n        \" || true\n        \n      displayName: 'Run CI tests'\n      \n    - script: |\n        # Proper PBS cleanup and container shutdown\n        echo \"Cleaning up PBS processes and container...\"\n        
\n        # Stop PBS services gracefully first\n        docker exec $(CONTAINER_NAME) bash -c \"\n          echo 'Stopping PBS services gracefully...'\n          if command -v qterm &> /dev/null; then\n            qterm -t quick || true\n          fi\n          \n          # Stop individual PBS components\n          pkill -TERM pbs_server || true\n          pkill -TERM pbs_sched || true  \n          pkill -TERM pbs_mom || true\n          pkill -TERM pbs_ds_monitor || true\n          \n          # Wait a bit for graceful shutdown\n          sleep 3\n          \n          # Force kill any remaining PBS processes\n          pkill -KILL pbs_server || true\n          pkill -KILL pbs_sched || true\n          pkill -KILL pbs_mom || true\n          pkill -KILL pbs_ds_monitor || true\n          \n          # Clean up any remaining zombie processes\n          ps aux | grep -E 'defunct|<zombie>' || echo 'No zombie processes found'\n        \" || true\n        \n        # Stop and remove container\n        docker stop $(CONTAINER_NAME) || true\n        docker rm $(CONTAINER_NAME) || true\n        \n      displayName: 'Cleanup Docker container'\n      condition: always()\n\n  - job: rocky_sanitize_build\n    displayName: 'Rocky Linux 9 Sanitize'\n    dependsOn: runcheck\n    pool:\n      vmImage: 'ubuntu-latest'\n    variables:\n      OS_TYPE: \"rockylinux/rockylinux:9.2\"\n      PKG_INSTALL_CMD: \"yum -y update && yum -y install python3 gcc gcc-c++ make\"\n      DOCKER_EXTRA_ARG: \"-e BUILD_MODE=sanitize\"\n      CI_CMD: \"./ci --local=sanitize\"\n      CONTAINER_NAME: \"rocky-sanitize-$(Build.BuildId)\"\n    steps:\n    - checkout: self\n      displayName: 'Checkout code'\n      \n    - script: |\n        echo \"Starting build for Rocky Linux 9 Sanitize\"\n        echo \"OS Type: $(OS_TYPE)\"\n        echo \"Package Install: $(PKG_INSTALL_CMD)\"\n        echo \"Docker Args: $(DOCKER_EXTRA_ARG)\"\n        echo \"CI Command: $(CI_CMD)\"\n        echo \"Container Name: 
$(CONTAINER_NAME)\"\n      displayName: 'Display build configuration'\n      \n    - script: |\n        # Pull the Docker image\n        docker pull $(OS_TYPE)\n        \n        # Start the container with proper init to handle zombie processes\n        docker run -d \\\n          $(DOCKER_EXTRA_ARG) \\\n          -h pbs.dev.local \\\n          --name $(CONTAINER_NAME) \\\n          -v $(pwd):$(pwd) \\\n          --privileged \\\n          --init \\\n          -w $(pwd) \\\n          $(OS_TYPE) \\\n          /bin/bash -c \"sleep 3600\"\n          \n        # Verify container is running\n        docker ps | grep $(CONTAINER_NAME)\n        \n      displayName: 'Start Docker container'\n      \n    - script: |\n        # Install packages\n        docker exec $(CONTAINER_NAME) bash -c \"$(PKG_INSTALL_CMD)\"\n        \n        # Install additional tools for process management\n        docker exec $(CONTAINER_NAME) bash -c \"yum install -y procps-ng psmisc || dnf install -y procps-ng psmisc\"\n        \n        # Verify Python installation\n        docker exec $(CONTAINER_NAME) python3 --version\n        \n      displayName: 'Install dependencies'\n      \n    - script: |\n        # Monitor processes before running CI\n        echo \"=== Process monitoring before CI ===\"\n        docker exec $(CONTAINER_NAME) bash -c \"\n          echo 'Current processes:'\n          ps aux | head -20\n          echo ''\n          echo 'Checking for zombie/defunct processes:'\n          ps aux | grep -E 'defunct|<zombie>' || echo 'No zombie processes found'\n          echo ''\n          echo 'PBS-related processes:'\n          ps aux | grep -E 'pbs_|openpbs' || echo 'No PBS processes found'\n        \"\n      displayName: 'Monitor processes before CI'\n      \n    - script: |\n        # Check if ci directory and script exist\n        docker exec $(CONTAINER_NAME) bash -c \"ls -la\"\n        docker exec $(CONTAINER_NAME) bash -c \"if [ -d 'ci' ]; then ls -la ci/; else echo 'ci directory 
not found'; fi\"\n        \n        # Run CI script if it exists\n        if docker exec $(CONTAINER_NAME) bash -c \"[ -f 'ci/ci' ] || [ -f './ci' ]\"; then\n          docker exec --privileged $(CONTAINER_NAME) bash -c \"cd ci && $(CI_CMD)\"\n        else\n          echo \"CI script not found, running basic build test\"\n          docker exec $(CONTAINER_NAME) bash -c \"python3 -c 'print(\\\"Python test successful\\\")'\"\n        fi\n        \n        # Check for any PBS processes and stop them properly\n        echo \"Checking for PBS processes...\"\n        docker exec $(CONTAINER_NAME) bash -c \"ps aux | grep -E 'pbs_|openpbs' || echo 'No PBS processes found'\"\n        \n        # Stop PBS services properly if they're running\n        docker exec $(CONTAINER_NAME) bash -c \"\n          if command -v pbs_server &> /dev/null; then\n            echo 'Stopping PBS services...'\n            pkill -TERM pbs_server || true\n            pkill -TERM pbs_sched || true\n            pkill -TERM pbs_mom || true\n            pkill -TERM pbs_ds_monitor || true\n            sleep 2\n            # Force kill if still running\n            pkill -KILL pbs_server || true\n            pkill -KILL pbs_sched || true\n            pkill -KILL pbs_mom || true\n            pkill -KILL pbs_ds_monitor || true\n          fi\n        \" || true\n        \n      displayName: 'Run CI tests'\n      \n    - script: |\n        # Proper PBS cleanup and container shutdown\n        echo \"Cleaning up PBS processes and container...\"\n        \n        # Stop PBS services gracefully first\n        docker exec $(CONTAINER_NAME) bash -c \"\n          echo 'Stopping PBS services gracefully...'\n          if command -v qterm &> /dev/null; then\n            qterm -t quick || true\n          fi\n          \n          # Stop individual PBS components\n          pkill -TERM pbs_server || true\n          pkill -TERM pbs_sched || true  \n          pkill -TERM pbs_mom || true\n          pkill -TERM 
pbs_ds_monitor || true\n          \n          # Wait a bit for graceful shutdown\n          sleep 3\n          \n          # Force kill any remaining PBS processes\n          pkill -KILL pbs_server || true\n          pkill -KILL pbs_sched || true\n          pkill -KILL pbs_mom || true\n          pkill -KILL pbs_ds_monitor || true\n          \n          # Clean up any remaining zombie processes\n          ps aux | grep -E 'defunct|<zombie>' || echo 'No zombie processes found'\n        \" || true\n        \n        # Stop and remove container\n        docker stop $(CONTAINER_NAME) || true\n        docker rm $(CONTAINER_NAME) || true\n        \n      displayName: 'Cleanup Docker container'\n      condition: always()\n\n  - job: rocky_kerberos_build\n    displayName: 'Rocky Linux 9 Kerberos'\n    dependsOn: runcheck\n    pool:\n      vmImage: 'ubuntu-latest'\n    variables:\n      OS_TYPE: \"rockylinux/rockylinux:9.2\"\n      PKG_INSTALL_CMD: \"yum -y update && yum -y install python3 gcc gcc-c++ make\"\n      DOCKER_EXTRA_ARG: \"-e BUILD_MODE=kerberos\"\n      CI_CMD: \"./ci --local\"\n      CONTAINER_NAME: \"rocky-kerberos-$(Build.BuildId)\"\n    steps:\n    - checkout: self\n      displayName: 'Checkout code'\n      \n    - script: |\n        echo \"Starting build for Rocky Linux 9 Kerberos\"\n        echo \"OS Type: $(OS_TYPE)\"\n        echo \"Package Install: $(PKG_INSTALL_CMD)\"\n        echo \"Docker Args: $(DOCKER_EXTRA_ARG)\"\n        echo \"CI Command: $(CI_CMD)\"\n        echo \"Container Name: $(CONTAINER_NAME)\"\n      displayName: 'Display build configuration'\n      \n    - script: |\n        # Pull the Docker image\n        docker pull $(OS_TYPE)\n        \n        # Start the container with proper init to handle zombie processes\n        docker run -d \\\n          $(DOCKER_EXTRA_ARG) \\\n          -h pbs.dev.local \\\n          --name $(CONTAINER_NAME) \\\n          -v $(pwd):$(pwd) \\\n          --privileged \\\n          --init \\\n          -w 
$(pwd) \\\n          $(OS_TYPE) \\\n          /bin/bash -c \"sleep 3600\"\n          \n        # Verify container is running\n        docker ps | grep $(CONTAINER_NAME)\n        \n      displayName: 'Start Docker container'\n      \n    - script: |\n        # Install packages\n        docker exec $(CONTAINER_NAME) bash -c \"$(PKG_INSTALL_CMD)\"\n        \n        # Install additional tools for process management\n        docker exec $(CONTAINER_NAME) bash -c \"yum install -y procps-ng psmisc || dnf install -y procps-ng psmisc\"\n        \n        # Verify Python installation\n        docker exec $(CONTAINER_NAME) python3 --version\n        \n      displayName: 'Install dependencies'\n      \n    - script: |\n        # Monitor processes before running CI\n        echo \"=== Process monitoring before CI ===\"\n        docker exec $(CONTAINER_NAME) bash -c \"\n          echo 'Current processes:'\n          ps aux | head -20\n          echo ''\n          echo 'Checking for zombie/defunct processes:'\n          ps aux | grep -E 'defunct|<zombie>' || echo 'No zombie processes found'\n          echo ''\n          echo 'PBS-related processes:'\n          ps aux | grep -E 'pbs_|openpbs' || echo 'No PBS processes found'\n        \"\n      displayName: 'Monitor processes before CI'\n      \n    - script: |\n        # Check if ci directory and script exist\n        docker exec $(CONTAINER_NAME) bash -c \"ls -la\"\n        docker exec $(CONTAINER_NAME) bash -c \"if [ -d 'ci' ]; then ls -la ci/; else echo 'ci directory not found'; fi\"\n        \n        # Run CI script if it exists\n        if docker exec $(CONTAINER_NAME) bash -c \"[ -f 'ci/ci' ] || [ -f './ci' ]\"; then\n          docker exec --privileged $(CONTAINER_NAME) bash -c \"cd ci && $(CI_CMD)\"\n        else\n          echo \"CI script not found, running basic build test\"\n          docker exec $(CONTAINER_NAME) bash -c \"python3 -c 'print(\\\"Python test successful\\\")'\"\n        fi\n        \n        # Check for 
any PBS processes and stop them properly\n        echo \"Checking for PBS processes...\"\n        docker exec $(CONTAINER_NAME) bash -c \"ps aux | grep -E 'pbs_|openpbs' || echo 'No PBS processes found'\"\n        \n        # Stop PBS services properly if they're running\n        docker exec $(CONTAINER_NAME) bash -c \"\n          if command -v pbs_server &> /dev/null; then\n            echo 'Stopping PBS services...'\n            pkill -TERM pbs_server || true\n            pkill -TERM pbs_sched || true\n            pkill -TERM pbs_mom || true\n            pkill -TERM pbs_ds_monitor || true\n            sleep 2\n            # Force kill if still running\n            pkill -KILL pbs_server || true\n            pkill -KILL pbs_sched || true\n            pkill -KILL pbs_mom || true\n            pkill -KILL pbs_ds_monitor || true\n          fi\n        \" || true\n        \n      displayName: 'Run CI tests'\n      \n    - script: |\n        # Proper PBS cleanup and container shutdown\n        echo \"Cleaning up PBS processes and container...\"\n        \n        # Stop PBS services gracefully first\n        docker exec $(CONTAINER_NAME) bash -c \"\n          echo 'Stopping PBS services gracefully...'\n          if command -v qterm &> /dev/null; then\n            qterm -t quick || true\n          fi\n          \n          # Stop individual PBS components\n          pkill -TERM pbs_server || true\n          pkill -TERM pbs_sched || true  \n          pkill -TERM pbs_mom || true\n          pkill -TERM pbs_ds_monitor || true\n          \n          # Wait a bit for graceful shutdown\n          sleep 3\n          \n          # Force kill any remaining PBS processes\n          pkill -KILL pbs_server || true\n          pkill -KILL pbs_sched || true\n          pkill -KILL pbs_mom || true\n          pkill -KILL pbs_ds_monitor || true\n          \n          # Clean up any remaining zombie processes\n          ps aux | grep -E 'defunct|<zombie>' || echo 'No zombie processes 
found'\n        \" || true\n        \n        # Stop and remove container\n        docker stop $(CONTAINER_NAME) || true\n        docker rm $(CONTAINER_NAME) || true\n        \n      displayName: 'Cleanup Docker container'\n      condition: always()\n"
  },
  {
    "path": "buildutils/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nEXTRA_DIST = \\\n\tattr_parser.py\n"
  },
  {
    "path": "buildutils/attr_parser.py",
    "content": "# coding: utf-8\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n\"\"\"\n    attr_parser.py parses XML files (also called master attribute files)\n    containing all the members of both the server and ECL files, and\n    generates two corresponding output files: one for the server and one\n    for ECL.\n\"\"\"\nimport getopt\nimport os\nimport pdb\nimport re\nimport string\nimport sys\nimport enum\nimport xml.dom.minidom\nimport xml.parsers.expat\n\nlist_ecl = []\nlist_svr = []\nlist_defs = []\n\nglobal attr_type\nglobal newattr\n\nclass PropType(enum.Enum):\n    '''\n    BOTH - Write information for this tag to all the output files\n    SERVER - Write information for this tag to the SERVER file only\n    ECL - Write information for this tag to the ECL file only\n    '''\n    BOTH = 0\n    SERVER = 1\n    ECL = 2\n\nclass switch(object):\n    \"\"\"\n    This class provides functionality equivalent to switch/case\n    statements in C. It only needs to be defined once.\n    \"\"\"\n\n    def __init__(self, value):\n        self.value = value\n        self.fall = False\n\n    def __iter__(self):\n        \"\"\"Return the match method once, then stop\"\"\"\n        yield self.match\n\n    def match(self, *args):\n        \"\"\"Indicate whether or not to enter a case suite\"\"\"\n        if self.fall or not args:\n            return True\n        elif self.value in args:\n            self.fall = True\n            return True\n        else:\n            return False\n\n\ndef fileappend(prop_type, line):\n    '''\n    Selects the files to append line to depending on prop_type\n    prop_type - BOTH, SERVER, ECL\n    line - The string line to append to the file(s)\n    '''\n    global attr_type\n\n    if prop_type == PropType.SERVER:\n        if attr_type == PropType.SERVER or attr_type == PropType.BOTH:\n            list_svr.append(line)\n    elif prop_type == PropType.ECL:\n        if attr_type == PropType.ECL or attr_type == PropType.BOTH:\n            list_ecl.append(line)\n    elif prop_type == PropType.BOTH:\n        if attr_type == PropType.SERVER or attr_type == PropType.BOTH:\n            list_svr.append(line)\n        if attr_type == PropType.ECL or attr_type == PropType.BOTH:\n            list_ecl.append(line)\n\n\ndef getText(svr_file, ecl_file, defines_file):\n    '''\n    Writes the data stored in the lists to the output files\n    svr_file - the server side output file\n    ecl_file - the output file to be used by the ECL layer\n    defines_file - the output file containing the macro definitions for the\n                   index positions\n    '''\n    svr_file.write(\"\".join(list_svr))\n    ecl_file.write(\"\".join(list_ecl))\n    defines_file.write(\"\".join(list_defs))\n\n\ndef do_head(node):\n    
'''\n    Processes the head element of the node passed\n    '''\n    alist = node.getElementsByTagName('head')\n    for a in alist:\n        list_svr.append(\"/*Disclaimer: This is a machine generated file.*/\" +\n                        '\\n')\n        list_svr.append(\"/*For modifying any attribute change corresponding \"\n                        \"XML file */\" + '\\n')\n        list_ecl.append(\"/*Disclaimer: This is a machine generated file.*/\" +\n                        '\\n')\n        list_ecl.append(\"/*For modifying any attribute change corresponding \"\n                        \"XML file */\" + '\\n')\n        blist = a.getElementsByTagName('SVR')\n        blist_ecl = a.getElementsByTagName('ECL')\n        for s in blist:\n            text1 = s.childNodes[0].nodeValue\n            text1 = text1.strip(' \\t')\n            list_svr.append(text1)\n        for e in blist_ecl:\n            text2 = e.childNodes[0].nodeValue\n            text2 = text2.strip(' \\t')\n            list_ecl.append(text2)\n\n\ndef do_index(attr):\n    '''\n    Processes the member_index attribute of attr\n    '''\n    li = attr.getElementsByTagName('member_index')\n    if li:\n        for v in li:\n            buf = v.childNodes[0].nodeValue\n            list_defs.append(\"\\n\\t\" + buf + \",\")\n\n\ndef do_member(attr, p_flag, tag_name):\n    '''\n    Processes the member identified by tag_name\n    attr - the attribute definition node\n    p_flag - property flag - SVR, ECL, BOTH\n    tag_name - the tag name string to process\n    '''\n    global newattr\n    buf = None\n    comma = ','\n    if newattr:\n        comma = ''\n\n    newattr = False\n    li = attr.getElementsByTagName(tag_name)\n    if li:\n        svr = li[0].getElementsByTagName('SVR')\n        if svr:\n            for v in svr:\n                buf = v.childNodes[0].nodeValue\n                fileappend(PropType.SERVER, comma + '\\n' + '\\t' + '\\t' + buf)\n\n        ecl = li[0].getElementsByTagName('ECL')\n        if ecl:\n            for v in ecl:\n                buf = v.childNodes[0].nodeValue\n                fileappend(PropType.ECL, comma + '\\n' + '\\t' + '\\t' + buf)\n\n        for v in li:\n            buf = v.childNodes[0].nodeValue\n            if buf:\n                s = buf.strip('\\n \\t')\n                if s:\n                    fileappend(p_flag, comma + '\\n' + '\\t' + '\\t' + buf)\n\n\ndef process(master_file, svr_file, ecl_file, defines_file):\n    '''\n    Processes the master XML file and produces the output files as requested\n    master_file - the master XML file to process\n    svr_file - the server side output file\n    ecl_file - the output file to be used by the ECL layer\n    defines_file - the output file containing the macro definitions for the\n                   index positions\n    '''\n    from xml.dom import minidom\n\n    global attr_type\n    global newattr\n    newattr = False\n\n    doc = minidom.parse(master_file)\n    nodes = doc.getElementsByTagName('data')\n\n    for node in nodes:\n        do_head(node)\n\n        at_list = node.getElementsByTagName('attributes')\n        for attr in at_list:\n            attr_type = PropType.BOTH\n            newattr = True\n\n            flag_name = attr.getAttribute('flag')\n            if flag_name == 'SVR':\n                attr_type = PropType.SERVER\n            if flag_name == 'ECL':\n                attr_type = PropType.ECL\n\n            inc_name = attr.getAttribute('include')\n            if inc_name:\n                fileappend(PropType.SERVER, '\\n' + inc_name)\n\n            mem_list = attr.childNodes[0].nodeValue\n            mem_list = mem_list.strip(' \\t')\n            fileappend(PropType.BOTH, mem_list)\n\n            macro_name = attr.getAttribute('macro')\n            if macro_name:\n                fileappend(PropType.BOTH, '\\n' + macro_name + \"\\n\")\n\n            
do_index(attr)\n            fileappend(PropType.BOTH, '\\t{')\n\n            do_member(attr, PropType.BOTH, 'member_name')\n            do_member(attr, PropType.SERVER, 'member_at_decode')\n            do_member(attr, PropType.SERVER, 'member_at_encode')\n            do_member(attr, PropType.SERVER, 'member_at_set')\n            do_member(attr, PropType.SERVER, 'member_at_comp')\n            do_member(attr, PropType.SERVER, 'member_at_free')\n            do_member(attr, PropType.SERVER, 'member_at_action')\n            do_member(attr, PropType.BOTH, 'member_at_flags')\n            do_member(attr, PropType.BOTH, 'member_at_type')\n            do_member(attr, PropType.SERVER, 'member_at_parent')\n            do_member(attr, PropType.ECL, 'member_verify_function')\n            do_member(attr, PropType.SERVER, 'member_at_entlim')\n            do_member(attr, PropType.SERVER, 'member_at_struct')\n\n            fileappend(PropType.BOTH, '\\n\\t}')\n            fileappend(PropType.BOTH, \",\")\n\n            if macro_name:\n                fileappend(PropType.BOTH, '\\n#else')\n                fileappend(PropType.BOTH, '\\n\\t{\\n\\t\\t\"noop\"\\n\\t},')\n                fileappend(PropType.BOTH, '\\n#endif')\n\n        tail_list = node.getElementsByTagName('tail')\n        for t in tail_list:\n            tail_value = t.childNodes[0].nodeValue\n            if tail_value is None:\n                pass\n            fileappend(PropType.BOTH, '\\n')\n            tail_both = t.getElementsByTagName('both')\n            tail_svr = t.getElementsByTagName('SVR')\n            tail_ecl = t.getElementsByTagName('ECL')\n            for tb in tail_both:\n                b = tb.childNodes[0].nodeValue\n                b = b.strip(' \\t')\n                list_ecl.append(b)\n                list_svr.append(b)\n            for ts in tail_svr:\n                s = ts.childNodes[0].nodeValue\n                s = s.strip(' \\t')\n                list_svr.append(s)\n            for te in 
tail_ecl:\n                e = te.childNodes[0].nodeValue\n                e = e.strip(' \\t')\n                list_ecl.append(e)\n\n        getText(svr_file, ecl_file, defines_file)\n\n\ndef main(argv):\n    '''\n    Opens files,and calls appropriate functions based on Object values.\n    '''\n    global SVR_FILENAME\n    global ECL_FILENAME\n    global DEFINES_FILENAME\n    global MASTER_FILENAME\n\n    SVR_FILENAME = \"/dev/null\"\n    ECL_FILENAME = \"/dev/null\"\n    DEFINES_FILENAME = \"/dev/null\"\n    MASTER_FILENAME = \"/dev/null\"\n\n    if len(sys.argv) == 2:\n        usage()\n        sys.exit(1)\n    try:\n        opts, args = getopt.getopt(\n            argv, \"m:s:e:d:h\",\n            [\"master=\", \"svr=\", \"ecl=\", \"attr=\", \"help=\", \"defines=\"])\n    except getopt.error as err:\n        print(str(err))\n        usage()\n        sys.exit(1)\n    for opt, arg in opts:\n        if opt in ('-h', \"--help\"):\n            usage()\n            sys.exit(1)\n        elif opt in (\"-m\", \"--master\"):\n            MASTER_FILENAME = arg\n        elif opt in (\"-s\", \"--svr\"):\n            SVR_FILENAME = arg\n        elif opt in (\"-d\", \"--defines\"):\n            DEFINES_FILENAME = arg\n        elif opt in (\"-e\", \"--ecl\"):\n            ECL_FILENAME = arg\n        else:\n            print(\"Invalid Option!\")\n            sys.exit(1)\n#    Error conditions are checked here.\n\n    if (MASTER_FILENAME is None or not os.path.isfile(MASTER_FILENAME) or\n            not os.path.getsize(MASTER_FILENAME) > 0):\n        print(\"Master file not found or data is not present in File\")\n        sys.exit(1)\n\n    try:\n        master_file = open(MASTER_FILENAME, encoding='utf-8')\n    except IOError as err:\n        print(str(err))\n        print('Cannot open master file ' + MASTER_FILENAME)\n        sys.exit(1)\n\n    try:\n        svr_file = open(SVR_FILENAME, 'w', encoding='utf-8')\n    except IOError as err:\n        print(str(err))\n        
print('Cannot open server file ' + SVR_FILENAME)\n        sys.exit(1)\n\n    try:\n        defines_file = open(DEFINES_FILENAME, 'w', encoding='utf-8')\n    except IOError as err:\n        print(str(err))\n        print('Cannot open defines file ' + DEFINES_FILENAME)\n        sys.exit(1)\n\n    try:\n        ecl_file = open(ECL_FILENAME, 'w', encoding='utf-8')\n    except IOError as err:\n        print(str(err))\n        print('Cannot open ecl file ' + ECL_FILENAME)\n        sys.exit(1)\n\n    process(master_file, svr_file, ecl_file, defines_file)\n\n    master_file.close()\n    svr_file.close()\n    ecl_file.close()\n    defines_file.close()\n\n\ndef usage():\n    \"\"\"\n    Print the usage of the script\n    \"\"\"\n    print(\"usage: prog -m <MASTER_FILENAME> -s <svr_attr_file> \"\n          \"-e <ecl_attr_file> -d <defines_file>\")\n\n\nif __name__ == \"__main__\":\n    main(sys.argv[1:])\n"
  },
  {
    "path": "ci/README.md",
    "content": "Instant-CI is a developer tool that provides continuous integration to developers locally on their development systems.\nUsers can build and install PBS and run PTL tests with a single command, without worrying about any underlying dependencies.\nIt also keeps build and test history in the form of logs.\n\nDependencies for this tool are:\n* python3.5 or above\n* docker (17.12.0+)\n* docker-compose\n\n***How to setup:***\n\nSimply invoke the following command:\n\n`./ci`\n\n***CLI interface for ci:***\n\n* **./ci :** This is the primary command for ci. It starts the container (if not already running), builds PBS dependencies, then configures (if required), makes, and installs PBS. If the tests option is set, it will also run PTL with it. It does not take any arguments.\n```bash\n./ci\n```\n\n* **./ci --params:** The params option can be used to run ci with a custom configuration.\nThe following parameters can be set: `os`, `nodes`, `configure`, `tests`.\n> os: sets the OS platform of the container (single node) <br>\n> nodes: defines a multi-node configuration for the containers <br>\n> configure: holds the configure options for PBS <br>\n> tests: holds the pbs_benchpress arguments for PTL; if set empty, PTL tests are skipped <br>\n\n```bash\n# When the params option is called without any arguments it will display the currently set \"configuration\" and then proceed to run ci,\n# as in the following example.\n./ci --params\n# or\n./ci -p\n\n\n# The following command is an example of how to provide a custom configure option for PBS. Everything to the right of the first '=' after configure will\n# be taken as-is and given as an argument to the configure script of PBS. 
The same convention follows for the other configuration options as well.\n./ci --params 'configure=CFLAGS=\" -O2 -Wall -Werror\" --prefix=/tmp/pbs --enable-ptl'\n\n# You can also pass multiple parameters with this option, for example:\n./ci -p 'configure=--enable-ptl --prefix=/opt/pbs' -p 'tests=-t SmokeTest.test_basic'\n\n\n# The following are examples of how to define a custom test case for pbs_benchpress.\n# NOTE: The string is passed to the pbs_benchpress command, therefore all available options of pbs_benchpress can be used here.\n# By default the tests option is set to '-t SmokeTest'\n./ci --params 'tests=-f pbs_smoketest.py'\n./ci --params 'tests=--tags=smoke'\n\n\n# If you do not wish to run any PTL tests, use the command below. It sets tests to empty, so PTL is not invoked.\n./ci --params 'tests='\n\n\n# Below is an example of setting the container operating system. This will set up a single container running the PBS server.\n# NOTE: ci uses cached images to increase performance. These cached images are saved on the local system\n#\t\twith the suffix '-ci-pbs'. If you do not wish to use the cached image(s), delete them using <docker rmi {image_name}>.\n# The OS platform can be defined by any image from docker-hub\n./ci --params 'os=centos:7'\n\n\n# The following is an example of how to define a multi-node setup for PBS.\n# You can define multiple 'mom' or 'comm' nodes but only one 'server' node\n./ci --params 'nodes=mom=centos:7;server=ubuntu:16.04;comm=ubuntu:18.04;mom=centos:8'\n\n```\n\n\n* **./ci --build-pkgs:** Invoke this command to build PBS packages. By default it builds packages for the platform the ci container was started for.\nIt optionally accepts an argument for another platform. The packages can be found in the 'ci/packages' folder.\n\n```bash\n# The command below builds packages for the platform ci was started/is currently running on.\n./ci --build-pkgs\n# or\n./ci -b\n\n```\n\n* **./ci --delete:** This will delete any containers created by this tool and take a backup of the logs. 
The current logs can be found in the \"logs\" folder inside the ci folder. Backups of previous sessions' logs can be found in the ci/logs/session-{date}-{timestamp} folder.\n\n```bash\n# If you want to delete the container simply invoke this command.\n./ci --delete\n# or\n./ci -d\n```\n\n* **./ci --local:** This will build, install PBS, and run smoke tests on the local machine. This option cannot be combined with other options. It does not take its configuration from params but runs with predefined params (as run in Travis).\n```bash\n# The command to run\n./ci --local\n# or\n./ci -l\n\n# Optionally one can run the sanitize version (works only on centos:7) with the following argument\n./ci --local sanitize\n```\n"
  },
  {
    "path": "ci/ci",
    "content": "#!/usr/bin/env python3\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nimport argparse\nimport configparser\nimport copy\nimport fileinput\nimport json\nimport os\nimport platform\nimport re\nimport shlex\nimport shutil\nimport subprocess\nimport sys\nimport textwrap\nimport threading\nimport time\nfrom argparse import RawTextHelpFormatter\nfrom string import Template\n\nci_dirname = ''\ndefault_platform = ''\nMACROS = {}\n\n\ndef read_macros():\n    for line in open(os.path.join(ci_dirname, 'etc', 'macros')):\n        var, value = line.split('=')\n        MACROS[var] = value.replace('\\n', '')\n\n\nrequirements_template = Template('''num_servers=${num_servers}\nnum_moms=${num_moms}\nnum_comms=${num_comms}\nno_mom_on_server=${no_mom_on_server}\nno_comm_on_server=${no_comm_on_server}\nno_comm_on_mom=${no_comm_on_mom}\n''')\n\nservice_template_prist = Template('''{\n\"image\": \"${image}\",\n\"volumes\": [\n    \"../:/pbssrc\",\n    \"./:/src\",\n    \"./logs:/logs\",\n    \"./etc:/workspace/etc\"\n],\n\"entrypoint\": \"/workspace/etc/container-init\",\n\"environment\": [\n    \"NODE_TYPE=${node_type}\",\n    \"LANG=en_US.utf-8\"\n],\n\"networks\": {\n    \"ci.local\": { }\n},\n\"domainname\": \"ci.local\",\n\"container_name\": \"${hostname}\",\n\"hostname\": \"${hostname}\",\n\"user\": \"root\",\n\"privileged\": true,\n\"stdin_open\": true,\n\"tty\": true\n}''')\n\n\ndef log_error(msg):\n    print(\"ERROR ::: \" + 
str(msg))\n\n\ndef log_info(msg):\n    t = time.localtime()\n    current_time = time.strftime(\"%H:%M:%S\", t)\n    print(current_time + \" ---> \" + str(msg))\n\n\ndef log_warning(msg):\n    print(\"WARNING ::: \" + str(msg))\n\n\ndef get_services_list():\n    _ps = subprocess.run(\n        [\"docker-compose\", \"-f\", \"docker-compose.json\",\n         \"ps\", \"--filter\", \"status=running\", \"--services\"],\n        stdout=subprocess.PIPE)\n    _p = str((_ps.stdout).decode('utf-8'))\n    return [x for x in _p.splitlines() if len(x) > 0]\n\n\ndef get_compose_file_services_list():\n    compose_file = os.path.join(ci_dirname, 'docker-compose.json')\n    with open(compose_file) as f:\n        compose_file = json.loads(f.read())\n    return list(compose_file['services'].keys())\n\n\ndef run_cmd(cmd, return_output=False):\n    '''\n    Run a terminal command, and if needed return output of the command.\n    '''\n    cmd = shlex.split(cmd)\n    try:\n        a = subprocess.Popen(cmd, stdout=subprocess.PIPE)\n        out, err = a.communicate()\n        if a.returncode != 0:\n            log_error(\"command failed\")\n            log_error(str(err))\n        else:\n            if return_output:\n                return str(out)\n    except Exception as e:\n        log_error(\"The command failed.\")\n        log_error(e)\n\n\ndef run_docker_cmd(run_cmd, run_on='all'):\n    '''\n    Runs a docker command and on failure redirects user to\n    the container terminal\n    '''\n    services = get_services_list()\n    services.sort(reverse=True)  # we want server cmds to run first\n    for service in services:\n        cmd = \"docker-compose -f docker-compose.json exec \"\n        cmd += service + \" bash -c \\'\" + run_cmd + \"\\'\"\n        if run_on != 'all' and service.find(run_on) == -1:\n            log_info('Skipping on ' + service +\n                     ' as command only to be run on ' + run_on)\n            continue\n        try:\n            log_info(cmd)\n          
  docker_cmd = shlex.split(cmd)\n            a = subprocess.Popen(docker_cmd)\n            a.communicate()\n            if a.returncode != 0:\n                _msg = \"docker cmd returned with non zero exit code,\"\n                _msg += \"redirecting you to container terminal\"\n                log_error(_msg)\n                _docker_cmd = \"docker-compose -f docker-compose.json exec \"\n                _docker_cmd += service + \" bash -c \\'cd /pbssrc && /bin/bash\\'\"\n                docker_cmd = shlex.split(_docker_cmd)\n                subprocess.run(docker_cmd)\n                os._exit(1)\n        except Exception as e:\n            log_error(\"Failed\\n:\")\n            log_error(e)\n\n\ndef write_to_file(file_path, value):\n    with open(file_path, \"w+\") as f:\n        f.write(value)\n\n\ndef read_from_file(file_path):\n    if not os.path.isfile(file_path):\n        open(file_path, 'a').close()\n    with open(file_path, 'r+') as f:\n        val = f.read()\n    return val\n\n\ndef commit_docker_image():\n    '''\n    Watch for readiness of ci containers to commit a new image\n    '''\n\n    images_to_commit = {}\n    time_spent = 0\n    services = get_services_list()\n    service_count = len(services)\n    timeout = 1 * 60 * 60\n    while service_count > 0:\n        # Do not want to check constantly as it increases cpu load\n        time.sleep(15)\n        time_spent = time_spent + 15\n        if time_spent > timeout:\n            log_error(\"build is taking too long, timed out\")\n            sys.exit(1)\n        status = read_from_file(os.path.join(\n            ci_dirname, MACROS['CONFIG_DIR'], MACROS['STATUS_FILE']))\n        for service in services:\n            if str(status).find(service) != -1:\n                services.remove(service)\n                service_count -= 1\n                image = (service.split('-', 1)[1][:-2]).replace('-', ':')\n                image = image.replace(\"_\", \".\")\n                images_to_commit[image] = 
service\n    for key in images_to_commit:\n        try:\n            build_id = 'docker-compose -f docker-compose.json ps -q ' + \\\n                images_to_commit[key]\n            build_id = run_cmd(build_id, True)\n            build_id = build_id.split(\"'\")[1]\n            build_id = build_id[:12]\n            image_name = (str(key).replace(':', '-')\n                          ).replace('.', '_') + '-ci-pbs'\n            # shortening the build id to 12 characters as is displayed by\n            # 'docker ps' unlike 'docker-compose ps'  which shows full id\n            cmd = 'docker commit '+build_id+' '+image_name+':latest'\n            log_info(cmd)\n            run_cmd(cmd)\n        except Exception as e:\n            log_error(e)\n        try:\n            bad_images = \"docker images -qa -f'dangling=true'\"\n            bad_images = run_cmd(bad_images, True)\n            if bad_images != \"b''\":\n                bad_images = (bad_images.split(\"'\")[1]).replace(\"\\\\n\", \" \")\n                print(\"The following untagged images will be removed -> \" +\n                      bad_images)\n                cmd = 'docker rmi ' + bad_images\n                run_cmd(cmd)\n        except Exception as e:\n            log_warning(\n                \"could not remove bad (dangling) images, \\\n                please remove manually\")\n            print(e)\n    return True\n\n\ndef create_ts_tree_json():\n    benchpress_opt = os.path.join(\n        ci_dirname, MACROS['CONFIG_DIR'], MACROS['BENCHPRESS_OPT_FILE'])\n    benchpress_value = read_from_file(benchpress_opt)\n    try:\n        cmd = '/src/etc/gen_ptl_json.sh \"' + benchpress_value + '\"'\n        run_docker_cmd(cmd, run_on='server')\n    except Exception:\n        log_error('Failed to generate testsuite info json')\n        sys.exit(1)\n\n\ndef get_node_config(node_image=default_platform):\n    '''\n    Calculate the required node configuration for given\n    requirements decorator and return node 
config\n    '''\n    json_data = {}\n    max_servers_needed = 1\n    max_moms_needed = 1\n    max_comms_needed = 1\n    no_mom_on_server_flag = False\n    no_comm_on_mom_flag = True\n    no_comm_on_server_flag = False\n    try:\n        with open(os.path.join(ci_dirname, 'ptl_ts_tree.json')) as f:\n            json_data = json.load(f)\n    except Exception:\n        log_error('Could not find ptl tree json file')\n    for ts in json_data.values():\n        for tclist in ts['tclist'].values():\n            max_moms_needed = max(\n                tclist['requirements']['num_moms'], max_moms_needed)\n            max_servers_needed = max(\n                tclist['requirements']['num_servers'], max_servers_needed)\n            max_comms_needed = max(\n                tclist['requirements']['num_comms'], max_comms_needed)\n            no_mom_on_server_flag = tclist['requirements']['no_mom_on_server']\\\n                or no_mom_on_server_flag\n            no_comm_on_server_flag = tclist['requirements']['no_comm_on_server']\\\n                or no_comm_on_server_flag\n            no_comm_on_mom_flag = tclist['requirements']['no_comm_on_mom']\\\n                or no_comm_on_mom_flag\n    # Create a bash readable requirements decorator file\n    write_to_file(os.path.join(ci_dirname, MACROS['CONFIG_DIR'],\n                               MACROS['REQUIREMENT_DECORATOR_FILE']),\n                  requirements_template.substitute(num_servers=max_servers_needed,\n                                                   num_moms=max_moms_needed,\n                                                   num_comms=max_comms_needed,\n                                                   no_mom_on_server=no_mom_on_server_flag,\n                                                   no_comm_on_server=no_comm_on_server_flag,\n                                                   no_comm_on_mom=no_comm_on_mom_flag))\n\n    server_nodes = []\n    mom_nodes = []\n    comm_nodes = []\n    # get required 
number of servers and moms\n    for _ in range(max_servers_needed):\n        server_nodes.append(node_image)\n    if not no_mom_on_server_flag:\n        max_moms_needed = max(max_moms_needed, max_servers_needed)\n        if max_moms_needed > max_servers_needed:\n            for _ in range(max_moms_needed - max_servers_needed):\n                mom_nodes.append(node_image)\n    else:\n        for _ in range(max_moms_needed):\n            mom_nodes.append(node_image)\n\n    only_moms = len(mom_nodes)\n    # get required num of comms\n    if no_comm_on_mom_flag and no_comm_on_server_flag:\n        for _ in range(max_comms_needed):\n            comm_nodes.append(node_image)\n    elif no_comm_on_mom_flag and not no_comm_on_server_flag:\n        if max_comms_needed > max_servers_needed:\n            for _ in range(max_comms_needed-max_servers_needed):\n                comm_nodes.append(node_image)\n    else:\n        if max_comms_needed > only_moms:\n            for _ in range(max_comms_needed - only_moms):\n                comm_nodes.append(node_image)\n\n    # remove the trailing ';' from the node_config string\n    mom_nodes = ['mom=' + x for x in mom_nodes]\n    server_nodes = ['server=' + x for x in server_nodes]\n    comm_nodes = ['comm=' + x for x in comm_nodes]\n    node_images = \";\".join(server_nodes + mom_nodes + comm_nodes)\n    return node_images\n\n\ndef tail_build_log():\n    server_name = ''\n    build_log_path = get_services_list()\n    for i in build_log_path:\n        if i.find('server') != -1:\n            build_log_path = i\n            server_name = i\n    build_log_path = os.path.join(\n        ci_dirname, 'logs', 'build-' + build_log_path)\n    prev = ''\n    next = ''\n    with open(build_log_path, 'rb') as f:\n        while True:\n            f.seek(-2, os.SEEK_END)\n            while f.read(1) != b'\\n':\n                f.seek(-2, os.SEEK_CUR)\n            next = f.readline().decode()\n            if next != prev:\n                print(next, 
end='')\n                prev = next\n            else:\n                status = os.path.join(\n                    ci_dirname, MACROS['CONFIG_DIR'], MACROS['STATUS_FILE'])\n                status = read_from_file(status)\n                if status.find(server_name) != -1:\n                    return\n\n\ndef check_for_existing_image(val=default_platform):\n    '''\n    This function will check whether an existing image with the\n    suffix '-ci-pbs' exists for the given docker image.\n    '''\n    if val.find('-ci-pbs') == -1:\n        search_str = val.replace(\":\", \"-\")\n        search_str = search_str.replace(\".\", '_')\n        search_str += '-ci-pbs'\n    else:\n        search_str = val\n    cmd = 'docker images -q ' + search_str\n    search_result = run_cmd(cmd, True)\n    if search_result != \"b''\":\n        return True, search_str\n    else:\n        return False, val\n\n\ndef get_current_setup():\n    '''\n    Returns the node config for currently running ci containers\n    '''\n    compose_file = os.path.join(ci_dirname, 'docker-compose.json')\n    node_config = ''\n    with open(compose_file) as f:\n        compose_file = json.loads(f.read())\n    for service in compose_file['services']:\n        image = compose_file[\"services\"][service]['image']\n        if image[-7:] == '-ci-pbs':\n            image = image[:-7][::-1].replace('-', ':', 1)[::-1]\n        node_type = compose_file[\"services\"][service]['environment'][0]\n        node_type = node_type.split('=')[1]\n        node_config += node_type + '=' + image + ';'\n    node_config = node_config[:-1]\n    return node_config\n\n\ndef load_conf():\n    conf_file = os.path.join(\n        ci_dirname, MACROS['CONFIG_DIR'], MACROS['CONF_JSON_FILE'])\n    with open(conf_file) as f:\n        conf_file = json.loads(f.read())\n    return conf_file\n\n\ndef show_set_opts():\n    conf_opts = load_conf()\n    os_file_list = get_compose_file_services_list()\n    os_file_list = [(x.split('-', 1)[0] + '=' + x.split('-', 1)[1][:-2]\n   
                  ).replace('-', ':').replace('_', '.')\n                    for x in os_file_list]\n    os_file_list.sort()\n    conf_opts['OS'] = os_file_list\n    print(json.dumps(conf_opts, indent=2, sort_keys=True))\n\n\ndef create_param_file():\n    '''\n    Create param file with necessary node configuration for\n    multi node PTL tests.\n    '''\n    moms = []\n    comms = []\n    include_server_mom = False\n    include_server_comm = False\n    include_mom_comm = False\n    reqs = read_from_file(os.path.join(\n        ci_dirname, MACROS['CONFIG_DIR'],\n        MACROS['REQUIREMENT_DECORATOR_FILE']))\n    if reqs.find('no_mom_on_server=False') != -1:\n        include_server_mom = True\n    if reqs.find('no_comm_on_server=False') != -1:\n        include_server_comm = True\n    if reqs.find('no_comm_on_mom=False') != -1:\n        include_mom_comm = True\n    for service in get_services_list():\n        service = service+'.ci.local'\n        if service.find('server') != -1:\n            if include_server_mom:\n                moms.append(service)\n            if include_server_comm:\n                comms.append(service)\n        if service.find('mom') != -1:\n            moms.append(service)\n            if include_mom_comm:\n                comms.append(service)\n        if service.find('comm') != -1:\n            comms.append(service)\n    write_str = ''\n    if len(moms) != 0:\n        write_str = 'moms=' + ':'.join(moms) + '\\n'\n    if len(comms) != 0:\n        write_str += 'comms=' + ':'.join(comms)\n    param_path = os.path.join(\n        ci_dirname, MACROS['CONFIG_DIR'], MACROS['PARAM_FILE'])\n    write_to_file(param_path, write_str)\n\n\ndef unpack_node_string(nodes):\n    '''\n    Helper function to expand abbreviated node config\n    '''\n    for x in nodes:\n        if x.find('*') != -1:\n            num = x.split('*')[0]\n            try:\n                num = int(num)\n            except Exception:\n                log_error('invalid string 
provided for \"nodes\" configuration')\n                sys.exit(1)\n            val = x.split('*')[1]\n            nodes.remove(x)\n            for _ in range(num):\n                nodes.append(val)\n    return ';'.join(nodes)\n\n\ndef build_compose_file(nodes):\n    '''\n    Build docker-compose file for given node config in function parameter\n    '''\n    compose_template = {\n        \"version\": \"3.5\",\n        \"networks\": {\n            \"ci.local\": {\n                \"name\": \"ci.local\"\n            }\n        },\n        \"services\": {}\n    }\n    if nodes.find(\"*\") != -1:\n        nodes = unpack_node_string(nodes.split(';'))\n    count = 0\n    server = ''\n    for n in nodes.split(';'):\n        count = count + 1\n        node_key, node_val = n.split('=')\n        if (node_val not in MACROS['SUPPORTED_PLATFORMS'].split(',')\n                and ''.join(sys.argv).find(node_val) != -1):\n            log_warning(\"Given platform '\" + node_val + \"' is not supported by\" +\n                        \" ci, will result in unexpected behaviour\")\n            log_warning(\"Supported platforms are \" +\n                        MACROS['SUPPORTED_PLATFORMS'])\n        node_name = node_key + '-' + \\\n            (node_val.replace(':', '-')).replace('.', '_') + '-' + str(count)\n        image_value = node_val\n        _, image_value = check_for_existing_image(node_val)\n        service_template = json.loads(service_template_prist.substitute(\n            image=image_value, node_type=node_key,\n            hostname=node_name))\n        if node_key == 'server':\n            server = node_name\n        compose_template['services'][node_name] = service_template\n    for service in compose_template['services']:\n        compose_template['services'][service]['environment'].append(\n            \"SERVER=\"+server)\n    f = open(os.path.join(ci_dirname, 'docker-compose.json'), 'w')\n    json.dump(compose_template, f, indent=2, sort_keys=True)\n    f.close()\n  
  log_info(\"Configured nodes for ci\")\n\n\ndef ensure_ci_running():\n    '''\n    Check for a running ci container; if not found, start one.\n    '''\n    try:\n        service_count = len(get_services_list())\n        if service_count == 0:\n            log_info(\"No running service found\")\n            try:\n                log_info('Attempting to start container')\n                os.chdir(ci_dirname)\n                subprocess.run([\"docker-compose\", \"-f\",\n                                \"docker-compose.json\", \"down\",\n                                \"--remove-orphans\"],\n                               stdout=subprocess.DEVNULL)\n                if os.path.exists(os.path.join(ci_dirname,\n                                               MACROS['CONFIG_DIR'],\n                                               MACROS['STATUS_FILE'])):\n                    os.remove(os.path.join(\n                        ci_dirname, MACROS['CONFIG_DIR'],\n                        MACROS['STATUS_FILE']))\n                write_to_file(os.path.join(\n                    ci_dirname, MACROS['CONFIG_DIR'],\n                    MACROS['STATUS_FILE']), '')\n                subprocess.run(\n                    [\"docker-compose\", \"-f\",\n                     \"docker-compose.json\", \"up\", \"-d\"])\n                log_info('Waiting for container build to complete ')\n                build_log_path = os.path.join(ci_dirname, 'logs')\n                log_info(\"Build logs can be found in \" + build_log_path)\n                # wait for build to complete and commit newly built container\n                tail_build_log()\n                commit_docker_image()\n            except Exception as e:\n                log_error(e)\n        else:\n            log_info(\"running container found\")\n            return 0\n    except Exception as e:\n        log_error(e)\n\n\ndef check_prerequisites():\n    '''\n    This function will check whether the docker and docker-compose commands\n    are 
available. Also check that the docker version meets the minimum requirement.\n    '''\n    cmd = \"where\" if platform.system() == \"Windows\" else \"which\"\n\n    try:\n        subprocess.run([cmd, \"docker\"], stdout=subprocess.DEVNULL, check=True)\n    except Exception:\n        log_error(\"docker not found in PATH\")\n        sys.exit(1)\n\n    def version_tuple(s: str):\n        return tuple(int(x) for x in s.split(\".\"))\n\n    try:\n        version = subprocess.run(\n            [\"docker\", \"--version\"], stdout=subprocess.PIPE)\n        version = re.findall(r'\\s*([\\d.]+)', version.stdout.decode('utf-8'))\n        req_version = MACROS['REQ_DOCKER_VERSION']\n        if version_tuple(version[0]) < version_tuple(req_version):\n            log_error(\"Docker version \" + version[0] +\n                      \" is less than the minimum required \" + req_version)\n            sys.exit(1)\n    except Exception:\n        log_error(\"Failed to get docker version\")\n        sys.exit(1)\n\n    try:\n        subprocess.run([cmd, \"docker-compose\"], stdout=subprocess.DEVNULL, check=True)\n    except Exception:\n        log_error(\"docker-compose not found in PATH\")\n        sys.exit(1)\n\n\ndef is_restart_required():\n    '''\n    This function checks whether the nodes currently running meet the\n    requirements of the given test case. 
If not, it builds a new docker-compose file\n    and returns a bool indicating that ci must be restarted.\n    '''\n    create_ts_tree_json()\n    current_file_services_list = get_compose_file_services_list()\n    current_node_image = current_file_services_list[0].split(\n        '-', 1)[1][:-2].replace('-', ':')\n    node_config = get_node_config(node_image=current_node_image)\n    potential_list = []\n    for val in node_config.split(';'):\n        val = val.replace('=', '-')\n        val = val.replace(':', '-')\n        potential_list.append(val)\n    current_file_services_list = [i[:-2] for i in current_file_services_list]\n    # compare without platform names\n    current_file_services_list = [\n        i.split('-', 1)[0] for i in current_file_services_list]\n    potential_list = [i.split('-', 1)[0] for i in potential_list]\n    potential_list.sort()\n    current_file_services_list.sort()\n    if current_file_services_list != potential_list:\n        build_compose_file(node_config)\n        return True\n    else:\n        return False\n\n\ndef setup_config_dir():\n    '''\n    Initializes config directory and files for ci\n    '''\n    command_path = os.path.join(ci_dirname, MACROS['CONFIG_DIR'])\n    if not os.path.exists(command_path):\n        os.mkdir(command_path)\n    target_path = os.path.join(command_path, MACROS['CONF_JSON_FILE'])\n    if not os.path.exists(target_path):\n        value = '{ \"configure\": \"--prefix=/opt/pbs '\n        value += '--enable-ptl\", \"tests\" : \"-t SmokeTest\" }'\n        write_to_file(target_path, value)\n    target_path = os.path.join(command_path, MACROS['CONFIGURE_OPT_FILE'])\n    if not os.path.exists(target_path):\n        value = \"--prefix=/opt/pbs --enable-ptl\"\n        write_to_file(target_path, value)\n    target_path = os.path.join(command_path, MACROS['BENCHPRESS_OPT_FILE'])\n    if not os.path.exists(target_path):\n        value = \"-t SmokeTest\"\n        write_to_file(target_path, value)\n    target_path = 
os.path.join(ci_dirname, 'docker-compose.json')\n    if not os.path.exists(target_path):\n        build_compose_file('server=' + default_platform)\n        run_cmd('docker-compose -f docker-compose.json down --remove-orphans')\n\n\ndef delete_ci():\n    '''\n    Takes backup of logs and deletes running containers.\n    '''\n    services = get_services_list()\n    if len(services) != 0:\n        build_compose_file(nodes=get_current_setup())\n        cmd = '/src/etc/killit.sh backup'\n        run_docker_cmd(cmd, run_on='server')\n        log_warning('Removed log files')\n        log_info('Backup files can be found in ' + build_log_path)\n    else:\n        log_info('No running container found, nothing to backup')\n    try:\n        os.chdir(ci_dirname)\n        run_cmd(\n            \"docker-compose -f docker-compose.json down --remove-orphans\")\n        log_info(\n            \"Deleted containers and services\")\n    except Exception as e:\n        log_error(\"Failed to destroy container and services: \" + str(e))\n\n\ndef parse_params(params_list):\n    '''\n    Update the given params\n    '''\n    if params_list[0] != 'called':\n        container_running = False\n        conf_opts = load_conf()\n        for set_opts in params_list:\n            key, value = (set_opts).split('=', 1)\n            service_count = len(get_services_list())\n            if service_count > 0:\n                container_running = True\n            if key.lower() == 'nodes':\n                if container_running:\n                    log_warning(\n                        \"Deleting existing containers first, \"\n                        \"find backup in logs folder\")\n                    delete_ci()\n                build_compose_file(value)\n            elif key.lower() == 'os':\n                if container_running:\n                    log_warning(\n                        \"Deleting existing containers first, \"\n                        \"find backup in logs folder\")\n                    
delete_ci()\n                node_string = value.replace('\"', '')\n                node_string = 'server=' + node_string\n                build_compose_file(node_string)\n            else:\n                if key in conf_opts:\n                    conf_opts[key] = value\n                    f = open(os.path.join(\n                        ci_dirname, MACROS['CONFIG_DIR'],\n                        MACROS['CONF_JSON_FILE']), 'w')\n                    json.dump(conf_opts, f, indent=2, sort_keys=True)\n                    f.close()\n                else:\n                    log_error(\"Unrecognised key in parameter: '\" +\n                              key + \"', nothing updated\")\n                    sys.exit(1)\n\n\ndef run_ci_local(local):\n    '''\n    Run ci locally on the host without spawning containers\n    '''\n    os.chdir(ci_dirname)\n    # use subprocess.run instead of the run_cmd function\n    # so we don't suppress stdout and stderr\n    if local == 'normal':\n        exit_code = subprocess.run(\"./etc/do.sh\")\n        sys.exit(exit_code.returncode)\n    if local == 'sanitize':\n        exit_code = subprocess.run(\"./etc/do_sanitize_mode.sh\")\n        sys.exit(exit_code.returncode)\n\n\ndef run_ci(build_pkgs=False):\n    '''\n    Run PBS configure, install PBS and run PTL tests; if build_pkgs\n    is set to True, it will instead run only the package build script\n    '''\n    # Display current options\n    log_info(\"Running ci with the following options\")\n    show_set_opts()\n    if len(get_services_list()) > 0:\n        build_compose_file(get_current_setup())\n    ret = ensure_ci_running()\n    if ret == 1:\n        log_error(\n            \"container build failed, build logs can be found in \" +\n            build_log_path)\n        sys.exit(1)\n    command_path = os.path.join(ci_dirname, MACROS['CONFIG_DIR'])\n    conf_opts = load_conf()\n    if build_pkgs:\n        build_cmd = '/src/etc/build-pbs-packages.sh'\n        log_info('The package build logs can 
be found in logs/pkglogs')\n        run_docker_cmd(build_cmd + ' | tee /logs/pkglogs',\n                       run_on='server')\n        sys.exit(0)\n    if conf_opts['tests'] != '':\n        target_path = os.path.join(command_path, MACROS['BENCHPRESS_OPT_FILE'])\n        write_to_file(target_path, conf_opts['tests'])\n        if is_restart_required():\n            delete_ci()\n            ensure_ci_running()\n    target_path = os.path.join(command_path, MACROS['CONFIGURE_OPT_FILE'])\n    if conf_opts['configure'] != read_from_file(target_path):\n        write_to_file(target_path, conf_opts['configure'])\n        cmd = ' export ONLY_CONFIGURE=1 && /src/etc/do.sh 2>&1 \\\n            | tee -a /logs/build-$(hostname -s) '\n        run_docker_cmd(cmd)\n    cmd = ' export ONLY_REBUILD=1 && /src/etc/do.sh 2>&1 \\\n        | tee -a /logs/build-$(hostname -s) '\n    run_docker_cmd(cmd)\n    cmd = ' export ONLY_INSTALL=1 && /src/etc/do.sh 2>&1 \\\n        | tee -a /logs/build-$(hostname -s) '\n    run_docker_cmd(cmd)\n    target_path = os.path.join(command_path, MACROS['BENCHPRESS_OPT_FILE'])\n    if conf_opts['tests'] == '':\n        write_to_file(target_path, conf_opts['tests'])\n        log_warning(\"No tests assigned, skipping PTL run\")\n    else:\n        create_param_file()\n        write_to_file(target_path, conf_opts['tests'])\n        cmd = 'export RUN_TESTS=1 && export ONLY_TEST=1 && /src/etc/do.sh '\n        run_docker_cmd(cmd, run_on='server')\n\n\nif __name__ == \"__main__\":\n\n    ci_dirname = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n    ci_dirname = os.path.join(ci_dirname, 'ci')\n    os.chdir(ci_dirname)\n    read_macros()\n    _help = '''\n    Examples of using arguments.\n        ./ci -p 'OS=centos:7'\n        ./ci -p 'tests=-t SmokeTest'\n        ./ci -p 'configure=CFLAGS=\"-g -O2\" --enable-ptl'\n        ./ci -p 'nodes=mom=centos:7;server=ubuntu:16.04'\n        ./ci -d or ./ci --delete\n        ./ci -b or ./ci --build\n        ./ci 
-l or ./ci --local\n    Note: Set tests to empty if you don't want to run PTL\n    '''\n    _help += 'Supported platforms are ' + MACROS['SUPPORTED_PLATFORMS']\n    ap = argparse.ArgumentParser(prog='ci',\n                                 description='Runs the ci tool for pbs',\n                                 formatter_class=argparse.RawTextHelpFormatter,\n                                 epilog=textwrap.dedent(_help),\n                                 conflict_handler='resolve')\n    _help = 'set configuration values for os | nodes | configure | tests'\n    ap.add_argument('-p', '--params', nargs='+',\n                    action='append', help=_help, metavar='param')\n    _help = 'destroy pbs container'\n    ap.add_argument('-d', '--delete', action='store_true', help=_help)\n    _help = 'build packages for the current platform.'\n    ap.add_argument('-b', '--build-pkgs', nargs='?', const='called',\n                    help=_help)\n    _help = 'Simply run the tests locally, without spawning any containers.'\n    _help += '\\ntype can be one of normal (default) or sanitize'\n    ap.add_argument('-l', '--local', nargs='?', const='normal',\n                    help=_help, metavar='type')\n    args = ap.parse_args()\n    build_pkgs = False\n    default_platform = MACROS['DEFAULT_PLATFORM']\n    build_log_path = os.path.join(ci_dirname, 'logs')\n    not_local_run = sys.argv.count('-l') == 0 \\\n        and sys.argv.count('--local') == 0 \\\n        and sys.argv.count('-l=sanitize') == 0 \\\n        and sys.argv.count('--local=sanitize') == 0 \\\n        and sys.argv.count('-l=normal') == 0 \\\n        and sys.argv.count('--local=normal') == 0\n    if not_local_run:\n        setup_config_dir()\n        check_prerequisites()\n    if (not args.delete) and not_local_run and (args.params is None):\n        ret = ensure_ci_running()\n        if ret == 1:\n            log_error(\n                \"container build failed, build logs can be found in \" +\n                
build_log_path)\n            sys.exit(1)\n    try:\n        if args.params is not None:\n            for p in args.params:\n                parse_params(p)\n        if args.build_pkgs is not None:\n            build_pkgs = True\n        if args.delete is True:\n            confirm = input(\n                'Are you sure you want to delete containers (Y/N)?: ')\n            if confirm[:1].lower() == 'n':\n                sys.exit(0)\n            elif confirm[:1].lower() == 'y':\n                delete_ci()\n            else:\n                log_error(\"Invalid option provided\")\n            sys.exit(0)\n        if args.local is not None:\n            run_ci_local(args.local)\n    except Exception as e:\n        ap.print_help()\n        log_error(e)\n\n    run_ci(build_pkgs)\n"
  },
  {
    "path": "ci/etc/build-pbs-packages.sh",
    "content": "#! /bin/bash -xe\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n. 
/etc/os-release\n\npbsdir=/pbssrc\nrpm_dir=/root/rpmbuild\n\nrm -rf /src/packages\nmkdir -p /src/packages\nmkdir -p ${rpm_dir}/{BUILD,RPMS,SOURCES,SPECS,SRPMS}\n\nif [ \"x${ID}\" == \"xcentos\" -a \"x${VERSION_ID}\" == \"x8\" ]; then\n\texport LANG=\"C.utf8\"\n\tswig_opt=\"--with-swig=/usr/local\"\n\tif [ ! -f /tmp/swig/swig/configure ]; then\n\t\t# source install swig\n\t\tdnf -y install gcc-c++ byacc pcre-devel\n\t\tmkdir -p /tmp/swig/\n\t\tcd /tmp/swig\n\t\tgit clone https://github.com/swig/swig --branch rel-4.0.0 --single-branch\n\t\tcd swig\n\t\t./autogen.sh\n\t\t./configure\n\t\tmake -j8\n\t\tmake install\n\t\tcd ${pbsdir}\n\tfi\nfi\n\ncp -r $pbsdir /tmp/pbs\ncd /tmp/pbs\n./autogen.sh\nmkdir -p target\ncd target\n../configure --prefix=/opt/pbs --enable-ptl ${swig_opt}\nmake dist\ncp *.tar.gz ${rpm_dir}/SOURCES\ncp ../*-rpmlintrc ${rpm_dir}/SOURCES\ncp *.spec ${rpm_dir}/SPECS\ncflags=\"-g -O2 -Wall -Werror\"\ncxxflags=\"-g -O2 -Wall -Werror\"\nif [ \"x${ID}\" == \"xdebian\" -o \"x${ID}\" == \"xubuntu\" ]; then\n\tCFLAGS=\"${cflags} -Wno-unused-result\" CXXFLAGS=\"${cxxflags} -Wno-unused-result\" rpmbuild -ba --nodeps *.spec --with ptl\nelse\n\tif [ \"x${ID}\" == \"xcentos\" -a \"x${VERSION_ID}\" == \"x8\" ]; then\n\t\tCFLAGS=\"${cflags}\" CXXFLAGS=\"${cxxflags}\" rpmbuild -ba *.spec --with ptl -D \"_with_swig ${swig_opt}\"\n\telse\n\t\tCFLAGS=\"${cflags}\" CXXFLAGS=\"${cxxflags}\" rpmbuild -ba *.spec --with ptl\n\tfi\nfi\n\ncp ${pbsdir}/README.md /src/packages/\ncp ${pbsdir}/LICENSE /src/packages/\ncp ${pbsdir}/COPYRIGHT /src/packages/\nmv ${rpm_dir}/RPMS/*/*pbs* /src/packages/\nmv ${rpm_dir}/SRPMS/*pbs* /src/packages/\ncd /src/packages\nrm -rf /tmp/pbs\n\nif [ \"x${ID}\" == \"xdebian\" -o \"x${ID}\" == \"xubuntu\" ]; then\n\t_target_arch=$(dpkg --print-architecture)\n\tfakeroot alien --to-deb --scripts --target=${_target_arch} *-debuginfo*.rpm -g\n\t_dir=$(/bin/ls -1d *debuginfo* | grep -vE '(rpm|orig)')\n\tmv ${_dir}/opt/pbs/usr/ ${_dir}/\n\trm -rf 
${_dir}/opt\n\t(\n\t\tcd ${_dir}\n\t\tdpkg-buildpackage -d -b -us -uc\n\t)\n\trm -rf ${_dir} ${_dir}.orig *debuginfo*.buildinfo *debuginfo*.changes *debuginfo*.rpm\n\tfakeroot alien --to-deb --scripts --target=${_target_arch} *.rpm\n\trm -f *.rpm\nfi\n"
  },
  {
    "path": "ci/etc/ci-script-wrapper.service",
    "content": "# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n[Unit]\nDescription=Run ci docker entrypoint script at startup after all systemd services are loaded\nAfter=getty.target\n\n[Service]\nType=forking\nRemainAfterExit=yes\nEnvironmentFile=/.env-file\nExecStart=/src/etc/docker-entrypoint\nTimeoutStartSec=0\n\n[Install]\nWantedBy=default.target\n"
  },
  {
    "path": "ci/etc/configure_node.sh",
    "content": "#! /bin/bash -x\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n. /src/etc/macros\nif [ -f /src/${CONFIG_DIR}/${REQUIREMENT_DECORATOR_FILE} ]; then\n\t. 
/src/${CONFIG_DIR}/${REQUIREMENT_DECORATOR_FILE}\nfi\n\nif [ \"x${NODE_TYPE}\" == \"xmom\" ]; then\n\tsed -i \"s@PBS_SERVER=.*@PBS_SERVER=${SERVER}@\" /etc/pbs.conf\n\tsed -i \"s@PBS_START_SERVER=.*@PBS_START_SERVER=0@\" /etc/pbs.conf\n\tssh -t root@${SERVER} \" /opt/pbs/bin/qmgr -c 'c n $(hostname -s)'\"\n\tif [ \"x${no_comm_on_mom}\" == \"xTrue\" ]; then\n\t\tsed -i \"s@PBS_START_COMM=.*@PBS_START_COMM=0@\" /etc/pbs.conf\n\telse\n\t\tsed -i \"s@PBS_START_COMM=.*@PBS_START_COMM=1@\" /etc/pbs.conf\n\tfi\n\tsed -i \"s@PBS_START_SCHED=.*@PBS_START_SCHED=0@\" /etc/pbs.conf\nfi\n\nif [ \"x${NODE_TYPE}\" == \"xserver\" ]; then\n\tsed -i \"s@PBS_SERVER=.*@PBS_SERVER=$(hostname)@\" /etc/pbs.conf\n\tif [ \"x${no_comm_on_server}\" == \"xTrue\" ]; then\n\t\tsed -i \"s@PBS_START_COMM=.*@PBS_START_COMM=0@\" /etc/pbs.conf\n\telse\n\t\tsed -i \"s@PBS_START_COMM=.*@PBS_START_COMM=1@\" /etc/pbs.conf\n\tfi\n\tif [ \"x${no_mom_on_server}\" == \"xTrue\" ]; then\n\t\tsed -i \"s@PBS_START_MOM=.*@PBS_START_MOM=0@\" /etc/pbs.conf\n\telse\n\t\tsed -i \"s@PBS_START_MOM=.*@PBS_START_MOM=1@\" /etc/pbs.conf\n\tfi\n\tsed -i \"s@PBS_START_SERVER=.*@PBS_START_SERVER=1@\" /etc/pbs.conf\n\tsed -i \"s@PBS_START_SCHED=.*@PBS_START_SCHED=1@\" /etc/pbs.conf\nfi\n\nif [ \"x${NODE_TYPE}\" == \"xcomm\" ]; then\n\tsed -i \"s@PBS_START_COMM=.*@PBS_START_COMM=1@\" /etc/pbs.conf\n\tsed -i \"s@PBS_SERVER=.*@PBS_SERVER=${SERVER}@\" /etc/pbs.conf\n\tsed -i \"s@PBS_START_MOM=.*@PBS_START_MOM=0@\" /etc/pbs.conf\n\tsed -i \"s@PBS_START_SERVER=.*@PBS_START_SERVER=0@\" /etc/pbs.conf\n\tsed -i \"s@PBS_START_SCHED=.*@PBS_START_SCHED=0@\" /etc/pbs.conf\nfi\n"
  },
  {
    "path": "ci/etc/container-env-setup.sh",
    "content": "# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nexport container=docker\nexport TERM=xterm\nif [ -e /etc/debian_version ]; then\n  export DEBIAN_FRONTEND=noninteractive\nfi\nexport LOGNAME=${LOGNAME:-\"$(id -un)\"}\nexport USER=${USER:-\"$(id -un)\"}\nexport TZ=UTC\nexport PBS_TZID=UTC\nexport PATH=\"$(printf \"%s\" \"/usr/local/bin:/usr/local/sbin:${PATH}\" | awk -v RS=: -v ORS=: '!($0 in a) {a[$0]; print}')\"\nexport DOMAIN=$(hostname -d)\nexport PERL5LIB=${HOME}/AUTO/lib/perl5/site_perl\nexport PERL5LIB=${PERL5LIB}:${HOME}/AUTO/lib/site_perl\nexport PERL5LIB=${PERL5LIB}:${HOME}/AUTO/share/perl5\nexport PERL5LIB=${PERL5LIB}:${HOME}/AUTO/share/perl\nexport PBS_TEST_DEBUG=1\nexport PBS_TEST_VERBOSE=1\nexport PBS_PRINT_STACK_TRACE=1\nexport MAIL=\"${MAIL:-\"/var/mail/$(id -un)\"}\"\n"
  },
  {
    "path": "ci/etc/container-init",
    "content": "#!/bin/bash -x\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n/workspace/etc/install-system-packages 2>&1 | tee -a /logs/build-$(hostname -s)\n\n# set environment var file\ntouch /.env-file\nset >/.env-file\n\ncapsh --print | grep -Eq '*cap_sys_admin*'\nif [ $? 
-eq 0 ]; then\n\tif [ -x \"/usr/lib/systemd/systemd\" ]; then\n\t\texec /usr/lib/systemd/systemd --system\n\telif [ -x \"/lib/systemd/systemd\" ]; then\n\t\texec /lib/systemd/systemd --system\n\telif [ -x \"/usr/sbin/init\" ]; then\n\t\texec /usr/sbin/init\n\telif [ -x \"/sbin/init\" ]; then\n\t\texec /sbin/init\n\telse\n\t\techo \"Couldn't start container in systemd mode, starting in default mode\"\n\tfi\nfi\n"
  },
  {
    "path": "ci/etc/do.sh",
    "content": "#!/bin/bash -xe\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nif [ $(id -u) -ne 0 ]; then\n  echo \"This script must be run by root user\"\n  exit 1\nfi\n\nif [ -f /src/ci ]; then\n  IS_CI_BUILD=1\n  FIRST_TIME_BUILD=$1\n  . 
/src/etc/macros\n  config_dir=/src/${CONFIG_DIR}\n  chmod -R 755 ${config_dir}\n  logdir=/logs\n  chmod -R 755 ${logdir}\n  PBS_DIR=/pbssrc\nelse\n  PBS_DIR=$(readlink -f $0 | awk -F'/ci/' '{print $1}')\nfi\n\ncd ${PBS_DIR}\n. /etc/os-release\n# Extract major version number\nMAJOR_VERSION=\"${VERSION_ID%%.*}\"\nSPEC_FILE=$(/bin/ls -1 ${PBS_DIR}/*.spec)\nREQ_FILE=${PBS_DIR}/test/fw/requirements.txt\nif [ ! -r ${SPEC_FILE} -o ! -r ${REQ_FILE} ]; then\n  echo \"Couldn't find pbs spec file or ptl requirements file\"\n  exit 1\nfi\n\nif [ \"x${IS_CI_BUILD}\" != \"x1\" ] || [ \"x${FIRST_TIME_BUILD}\" == \"x1\" -a \"x${IS_CI_BUILD}\" == \"x1\" ]; then\n  if [ \"x${ID}\" == \"xcentos\" -a \"x${VERSION_ID}\" == \"x7\" ]; then\n    yum clean all\n    yum -y install yum-utils epel-release rpmdevtools\n    yum -y install python3-pip sudo which net-tools man-db time.x86_64 \\\n      expat libedit postgresql-server postgresql-contrib python3 \\\n      sendmail sudo tcl tk libical libasan llvm git\n    rpmdev-setuptree\n    yum-builddep -y ${SPEC_FILE}\n    yum -y install $(rpmspec --requires -q ${SPEC_FILE} | awk '{print $1}' | sort -u | grep -vE '^(/bin/)?(ba)?sh$')\n    pip3 install --trusted-host pypi.org --trusted-host files.pythonhosted.org -r ${REQ_FILE}\n    if [ \"x${BUILD_MODE}\" == \"xkerberos\" ]; then\n      yum -y install krb5-libs krb5-devel libcom_err libcom_err-devel\n    fi\n    yum -y install cmake3\n    rm -rf cJSON\n    git clone https://github.com/DaveGamble/cJSON.git\n    cd cJSON; mkdir build; cd build; cmake3 .. 
-DCMAKE_INSTALL_PREFIX=/usr; make; make install; cd ../../\n  elif [ \"x${ID}\" == \"xcentos\" -a \"x${VERSION_ID}\" == \"x8\" ]; then\n    export LANG=\"C.utf8\"\n    sed -i -e \"s|mirrorlist=|#mirrorlist=|g\" /etc/yum.repos.d/CentOS-*\n    sed -i -e \"s|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g\" /etc/yum.repos.d/CentOS-*\n    dnf -y clean all\n    dnf -y install 'dnf-command(config-manager)'\n    dnf -y config-manager --set-enabled powertools\n    dnf -y install epel-release\n    dnf -y install python3-pip sudo which net-tools man-db time.x86_64 \\\n      expat libedit postgresql-server postgresql-contrib python3 \\\n      sendmail sudo tcl tk libical libasan llvm git\n    dnf -y builddep ${SPEC_FILE}\n    dnf -y install $(rpmspec --requires -q ${SPEC_FILE} | awk '{print $1}' | sort -u | grep -vE '^(/bin/)?(ba)?sh$')\n    pip3 install --trusted-host pypi.org --trusted-host files.pythonhosted.org -r ${REQ_FILE}\n    if [ \"x${BUILD_MODE}\" == \"xkerberos\" ]; then\n      dnf -y install krb5-libs krb5-devel libcom_err libcom_err-devel\n    fi\n  elif [ \"x${ID}\" == \"xrocky\" -a \"x${MAJOR_VERSION}\" == \"x9\" ]; then\n    export LANG=\"C.utf8\"\n    dnf -y clean all\n    yum -y install yum-utils\n    dnf -y install 'dnf-command(config-manager)'\n    dnf config-manager --set-enabled crb\n    dnf -y install epel-release\n    dnf -y install python3-pip sudo which net-tools man-db time.x86_64 procps \\\n      expat libedit postgresql-server postgresql-contrib python3 \\\n      sendmail sudo tcl tk libical libasan llvm git chkconfig\n    dnf -y builddep ${SPEC_FILE}\n    dnf -y install $(rpmspec --requires -q ${SPEC_FILE} | awk '{print $1}' | sort -u | grep -vE '^(/bin/)?(ba)?sh$')\n    pip3 install --trusted-host pypi.org --trusted-host files.pythonhosted.org -r ${REQ_FILE}\n    if [ \"x${BUILD_MODE}\" == \"xkerberos\" ]; then\n      dnf -y install krb5-libs krb5-devel libcom_err libcom_err-devel\n    fi\n  elif [ \"x${ID}\" == 
\"xopensuse\" -o \"x${ID}\" == \"xopensuse-leap\" ]; then\n    zypper -n ref\n    zypper -n install rpmdevtools python3-pip sudo which net-tools man time.x86_64 git\n    rpmdev-setuptree\n    zypper -n install --force-resolution $(rpmspec --buildrequires -q ${SPEC_FILE} | sort -u | grep -vE '^(/bin/)?(ba)?sh$')\n    zypper -n install --force-resolution $(rpmspec --requires -q ${SPEC_FILE} | sort -u | grep -vE '^(/bin/)?(ba)?sh$')\n    pip3 install --trusted-host pypi.org --trusted-host files.pythonhosted.org -r ${REQ_FILE}\n  elif [ \"x${ID}\" == \"xdebian\" ]; then\n    if [ \"x${DEBIAN_FRONTEND}\" == \"x\" ]; then\n      export DEBIAN_FRONTEND=noninteractive\n    fi\n    apt-get -y update\n    apt-get install -y build-essential dpkg-dev autoconf libtool rpm alien libssl-dev \\\n      libxt-dev libpq-dev libexpat1-dev libedit-dev libncurses5-dev \\\n      libical-dev libhwloc-dev pkg-config tcl-dev tk-dev python3-dev \\\n      swig expat postgresql postgresql-contrib python3-pip sudo \\\n      man-db git elfutils libcjson-dev\n    pip3 install --trusted-host pypi.org --trusted-host files.pythonhosted.org -r ${REQ_FILE}\n  elif [ \"x${ID}\" == \"xubuntu\" ]; then\n    if [ \"x${DEBIAN_FRONTEND}\" == \"x\" ]; then\n      export DEBIAN_FRONTEND=noninteractive\n    fi\n    apt-get -y update\n    apt-get install -y build-essential dpkg-dev autoconf libtool rpm alien libssl-dev \\\n      libxt-dev libpq-dev libexpat1-dev libedit-dev libncurses5-dev \\\n      libical-dev libhwloc-dev pkg-config tcl-dev tk-dev python3-dev \\\n      swig expat postgresql python3-pip sudo man-db git elfutils libcjson-dev\n    if [[ $(printf '%s\\n' \"24.04\" \"$VERSION_ID\" | sort -V | head -n1) == \"24.04\" ]]; then\n\tapt-get -y install python3-nose python3-bs4 python3-defusedxml python3-pexpect\n    else\n        pip3 install --trusted-host pypi.org --trusted-host files.pythonhosted.org -r ${REQ_FILE}\n    fi\n  else\n    echo \"Unknown platform...\"\n    exit 1\n  fi\nfi\n\nif [ 
\"x${FIRST_TIME_BUILD}\" == \"x1\" -a \"x${IS_CI_BUILD}\" == \"x1\" ]; then\n  echo \"### First time build is complete ###\"\n  echo \"READY:$(hostname -s)\" >>${config_dir}/${STATUS_FILE}\n  exit 0\nfi\n\nif [ \"x${ID}\" == \"xcentos\" -a \"x${VERSION_ID}\" == \"x8\" ]; then\n  export LANG=\"C.utf8\"\n  swig_opt=\"--with-swig=/usr/local\"\n  if [ ! -f /tmp/swig/swig/configure ]; then\n    # source install swig\n    dnf -y install gcc-c++ byacc pcre-devel\n    mkdir -p /tmp/swig/\n    cd /tmp/swig\n    git clone https://github.com/swig/swig --branch rel-4.0.0 --single-branch\n    cd swig\n    ./autogen.sh\n    ./configure\n    make -j8\n    make install\n    cd ${PBS_DIR}\n  fi\nfi\n\nif [ \"x${ONLY_INSTALL_DEPS}\" == \"x1\" ]; then\n  exit 0\nfi\n_targetdirname=target-${ID}-$(hostname -s)\nif [ \"x${ONLY_INSTALL}\" != \"x1\" -a \"x${ONLY_REBUILD}\" != \"x1\" -a \"x${ONLY_TEST}\" != \"x1\" ]; then\n  rm -rf ${_targetdirname}\nfi\nmkdir -p ${_targetdirname}\n[[ -f Makefile ]] && make distclean || true\nif [ ! -f ./${SPEC_FILE} ]; then\n  git config --global --add safe.directory ${PBS_DIR}\n  git checkout ${SPEC_FILE}\nfi\nif [ ! 
-f ./configure ]; then\n  ./autogen.sh\nfi\nif [ \"x${ONLY_REBUILD}\" != \"x1\" -a \"x${ONLY_INSTALL}\" != \"x1\" -a \"x${ONLY_TEST}\" != \"x1\" ]; then\n  _cflags=\"-g -O2 -Wall -Werror\"\n  if [ \"x${ID}\" == \"xubuntu\" ]; then\n    _cflags=\"${_cflags} -Wno-unused-result\"\n  fi\n  cd ${_targetdirname}\n  if [ -f /src/ci ]; then\n    if [ -f ${config_dir}/${CONFIGURE_OPT_FILE} ]; then\n      PYTHON_CODE=$(cat <<END\nwith open('${config_dir}/${CONFIGURE_OPT_FILE}') as f:\n  x = f.read()\nimport re\nif len(x.split(\"'\")) > 1:\n  if re.search(r\"CFLAGS=(\\\"|\\').*(\\\"|\\')\",x) != None:\n    print(re.search(r\"CFLAGS=(\\\"|\\').*(\\\"|\\')\",x).group(0).split('\\'')[1])\nelse:\n  if re.search(r\"CFLAGS=(\\\"|\\').*(\\\"|\\')\",x) != None:\n    print(re.search(r\"CFLAGS=(\\\"|\\').*(\\\"|\\')\",x).group(0).split('\"')[1])\nEND\n)\n      _cflags=\"$(python3 -c \"$PYTHON_CODE\")\"\n      PYTHON_CODE=$(cat <<END\nwith open('${config_dir}/${CONFIGURE_OPT_FILE}') as f:\n  x = f.read()\nimport re\nprint(re.sub(r\"CFLAGS=(\\\"|\\').*(\\\"|\\')\",\"\",x))\nEND\n)\n    configure_opt=\"$(python3 -c \"$PYTHON_CODE\")\"\n    else\n      configure_opt='--prefix=/opt/pbs --enable-ptl'\n    fi\n    if [ -z \"${_cflags}\" ]; then\n      ../configure ${configure_opt} ${swig_opt}\n    else\n      ../configure CFLAGS=\"${_cflags}\" ${configure_opt} ${swig_opt}\n    fi\n    if [ \"x${ONLY_CONFIGURE}\" == \"x1\" ]; then\n      exit 0\n    fi\n  else\n    configure_opt='--prefix=/opt/pbs --enable-ptl'\n    if [ \"x${BUILD_MODE}\" == \"xkerberos\" ]; then\n      configure_opt=\"${configure_opt} --with-krbauth PATH_KRB5_CONFIG=/usr/bin/krb5-config\"\n    fi\n    ../configure CFLAGS=\"${_cflags}\" ${configure_opt} ${swig_opt}\n  fi\n  cd -\nfi\ncd ${_targetdirname}\nprefix=$(cat ${config_dir}/${CONFIGURE_OPT_FILE} | awk -F'prefix=' '{print $2}' | awk -F' ' '{print $1}')\nif [ \"x${prefix}\" == \"x\" ]; then\n  prefix='/opt/pbs'\nfi\nif [ \"x${ONLY_INSTALL}\" == \"x1\" -o 
\"x${ONLY_TEST}\" == \"x1\" ]; then\n  echo \"skipping make\"\nelse\n  if [ ! -f ${PBS_DIR}/${_targetdirname}/Makefile ]; then\n    if [ -f ${config_dir}/${CONFIGURE_OPT_FILE} ]; then\n      PYTHON_CODE=$(cat <<END\nwith open('${config_dir}/${CONFIGURE_OPT_FILE}') as f:\n  x = f.read()\nimport re\nif len(x.split(\"'\")) > 1:\n  if re.search(r\"CFLAGS=(\\\"|\\').*(\\\"|\\')\",x) != None:\n    print(re.search(r\"CFLAGS=(\\\"|\\').*(\\\"|\\')\",x).group(0).split('\\'')[1])\nelse:\n  if re.search(r\"CFLAGS=(\\\"|\\').*(\\\"|\\')\",x) != None:\n    print(re.search(r\"CFLAGS=(\\\"|\\').*(\\\"|\\')\",x).group(0).split('\"')[1])\nEND\n)\n      _cflags=\"$(python3 -c \"$PYTHON_CODE\")\"\n      PYTHON_CODE=$(cat <<END\nwith open('${config_dir}/${CONFIGURE_OPT_FILE}') as f:\n  x = f.read()\nimport re\nprint(re.sub(r\"CFLAGS=(\\\"|\\').*(\\\"|\\')\",\"\",x))\nEND\n)\n      configure_opt=\"$(python3 -c \"$PYTHON_CODE\")\"\n    else\n      configure_opt='--prefix=/opt/pbs --enable-ptl'\n    fi\n    if [ -z \"${_cflags}\" ]; then\n      ../configure ${configure_opt}\n    else\n      ../configure CFLAGS=\"${_cflags}\" ${configure_opt}\n    fi\n  fi\n  make -j8\nfi\nif [ \"x${ONLY_REBUILD}\" == \"x1\" ]; then\n  exit 0\nfi\nif [ \"x${ONLY_TEST}\" != \"x1\" ]; then\n  if [ ! 
-f ${PBS_DIR}/${_targetdirname}/Makefile ]; then\n    if [ -f ${config_dir}/${CONFIGURE_OPT_FILE} ]; then\n      PYTHON_CODE=$(cat <<END\nwith open('${config_dir}/${CONFIGURE_OPT_FILE}') as f:\n  x = f.read()\nimport re\nif len(x.split(\"'\")) > 1:\n  if re.search(r\"CFLAGS=(\\\"|\\').*(\\\"|\\')\",x) != None:\n    print(re.search(r\"CFLAGS=(\\\"|\\').*(\\\"|\\')\",x).group(0).split('\\'')[1])\nelse:\n  if re.search(r\"CFLAGS=(\\\"|\\').*(\\\"|\\')\",x) != None:\n    print(re.search(r\"CFLAGS=(\\\"|\\').*(\\\"|\\')\",x).group(0).split('\"')[1])\nEND\n)\n      _cflags=\"$(python3 -c \"$PYTHON_CODE\")\"\n      PYTHON_CODE=$(cat <<END\nwith open('${config_dir}/${CONFIGURE_OPT_FILE}') as f:\n  x = f.read()\nimport re\nprint(re.sub(r\"CFLAGS=(\\\"|\\').*(\\\"|\\')\",\"\",x))\nEND\n)\n    configure_opt=\"$(python3 -c \"$PYTHON_CODE\")\"\n    else\n      configure_opt='--prefix=/opt/pbs --enable-ptl'\n    fi\n    if [ -z \"${_cflags}\" ]; then\n      ../configure ${configure_opt}\n    else\n      ../configure CFLAGS=\"${_cflags}\" ${configure_opt}\n    fi\n    make -j8\n  fi\n  make -j8 install\n  chmod 4755 ${prefix}/sbin/pbs_iff ${prefix}/sbin/pbs_rcp\n  if [ \"x${DONT_START_PBS}\" != \"x1\" ]; then\n    ${prefix}/libexec/pbs_postinstall server\n    sed -i \"s@PBS_START_MOM=0@PBS_START_MOM=1@\" /etc/pbs.conf\n    if [ \"x$IS_CI_BUILD\" == \"x1\" ]; then\n      /src/etc/configure_node.sh\n    fi\n    /etc/init.d/pbs restart\n  fi\nfi\n\nif [ \"x${BUILD_MODE}\" == \"xkerberos\" ]; then\n  echo \"PTL with Kerberos support not implemented yet.\"\n  exit 0\nfi\n\nset +e\n. 
/etc/profile.d/ptl.sh\nset -e\npbs_config --make-ug\n\nif [ \"x${RUN_TESTS}\" == \"x1\" ]; then\n  if [ \"x${ID}\" == \"xcentos\" ]; then\n    export LC_ALL=en_US.utf-8\n    export LANG=en_US.utf-8\n  elif [ \"x${ID}\" == \"xopensuse\" ]; then\n    export LC_ALL=C.utf8\n  fi\n  ptl_tests_dir=/pbssrc/test/tests\n  cd ${ptl_tests_dir}/\n  benchpress_opt=\"$(cat ${config_dir}/${BENCHPRESS_OPT_FILE})\"\n  eval_tag=\"$(echo ${benchpress_opt} | awk -F'\"' '{print $2}')\"\n  benchpress_opt=\"$(echo ${benchpress_opt} | sed -e 's/--eval-tags=\\\".*\\\"//g')\"\n  params=\"--param-file=${config_dir}/${PARAM_FILE}\"\n  time_stamp=$(date -u \"+%Y-%m-%d-%H%M%S\")\n  ptl_log_file=${logdir}/logfile-${time_stamp}\n  chown pbsroot ${logdir}\n  if [ -z \"${eval_tag}\" ]; then\n    sudo -Hiu pbsroot pbs_benchpress ${benchpress_opt} --db-type=html --db-name=${logdir}/result.html -o ${ptl_log_file} ${params}\n  else\n    sudo -Hiu pbsroot pbs_benchpress --eval-tags=\"'${eval_tag}'\" ${benchpress_opt} --db-type=html --db-name=${logdir}/result.html -o ${ptl_log_file} ${params}\n  fi\nfi\n\nif [ \"x${IS_CI_BUILD}\" != \"x1\" ]; then\n  cd /opt/ptl/tests/\n  /opt/pbs/bin/pbsnodes -av\n  ps -ef | grep pbs\n#Azure pipeline : on Suse platform man page test is failing. Need to analyze.\n#for time being skipping man page test case\n  pbs_benchpress --tags=smoke --exclude=SmokeTest.test_man_pages\n  /opt/pbs/bin/pbsnodes -av\nfi\n"
  },
  {
    "path": "ci/etc/do_sanitize_mode.sh",
    "content": "#!/bin/bash -xe\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nPBS_DIR=$(readlink -f $0 | awk -F'/ci/' '{print $1}')\ncd ${PBS_DIR}\n\n[ -f /sys/fs/selinux/enforce ] && echo 0 > /sys/fs/selinux/enforce\nyum clean all\nyum -y update\nyum -y install yum-utils epel-release rpmdevtools libasan llvm\ndnf config-manager --set-enabled crb\nrpmdev-setuptree\nyum -y install python3-pip sudo which net-tools man-db time.x86_64 procps\nyum-builddep -y ./*.spec\nyum -y install cmake3 git\nrm -rf cJSON\ngit clone https://github.com/DaveGamble/cJSON.git\ncd cJSON; mkdir build; cd build; cmake3 .. -DCMAKE_INSTALL_PREFIX=/usr; make; make install; cd ../../\n./autogen.sh\nrm -rf target-sanitize\nmkdir -p target-sanitize\ncd target-sanitize\n../configure\nmake dist\ncp -fv *.tar.gz /root/rpmbuild/SOURCES/\nCFLAGS=\"-g -O2 -Wall -Werror -fsanitize=address -fno-omit-frame-pointer\" CXXFLAGS=\"-g -O2 -Wall -Werror -fsanitize=address -fno-omit-frame-pointer\" rpmbuild -bb --with ptl *.spec\nyum -y install /root/rpmbuild/RPMS/x86_64/*-server-??.*.x86_64.rpm\nyum -y install /root/rpmbuild/RPMS/x86_64/*-debuginfo-??.*.x86_64.rpm\nyum -y install /root/rpmbuild/RPMS/x86_64/*-ptl-??.*.x86_64.rpm\nsed -i \"s@PBS_START_MOM=0@PBS_START_MOM=1@\" /etc/pbs.conf\n/etc/init.d/pbs start\nset +e\n. 
/etc/profile.d/ptl.sh\nset -e\npbs_config --make-ug\ncd /opt/ptl/tests/\n# Ignore address sanitizer link order because of\n# importing pbs python modules (like pbs and pbs_ifl) in ptl.\n# The problem is that original Python bin is not compiled with ASAN.\n# This will not affect pbs service as it has its own env.\nexport ASAN_OPTIONS=verify_asan_link_order=0\npbs_benchpress --tags=smoke\n"
  },
  {
    "path": "ci/etc/docker-entrypoint",
    "content": "#!/bin/bash -ex\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nworkdir=/src/etc\nlogdir=/logs\n\ncd /pbssrc\n${workdir}/do.sh 1 2>&1 | tee -a ${logdir}/build-$(hostname -s)\n# $? after a pipeline is tee's exit status; check do.sh's via PIPESTATUS\nif [ ${PIPESTATUS[0]} -ne 0 ]; then\n    exit 1\nelse\n    exit 0\nfi\n"
  },
  {
    "path": "ci/etc/gen_ptl_json.sh",
    "content": "#!/bin/bash -x\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\ncleanup() {\n\tcd ${etcdir}\n\trm -rf ./tmpptl\n}\n\netcdir=$(dirname $(readlink -f \"$0\"))\ncidir=/pbssrc/ci\ncd ${etcdir}\nmkdir tmpptl\nworkdir=${etcdir}/tmpptl\ncd ${workdir}\nmkdir -p ptlsrc\n/bin/cp -rf ${cidir}/../test/* ptlsrc/\nif [ -f ptlsrc/fw/setup.py.in ]; then\n\tsed \"s;@PBS_VERSION@;1.0.0;g\" ptlsrc/fw/setup.py.in >ptlsrc/fw/setup.py\n\tsed \"s;@PBS_VERSION@;1.0.0;g\" ptlsrc/fw/ptl/__init__.py.in >ptlsrc/fw/ptl/__init__.py\nfi\ncd ${workdir}/ptlsrc\nmkdir ../tp\n__python=\"$(grep -rE '^#!/usr/bin/(python|env python)[23]' fw/bin/pbs_benchpress | awk -F[/\" \"] '{print $NF}')\"\n${__python} -m pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org --prefix $(pwd)/tp -r fw/requirements.txt fw/.\ncd tests\nPYTHONPATH=../tp/lib/$(/bin/ls -1 ../tp/lib)/site-packages ${__python} ../tp/bin/pbs_benchpress $1 --gen-ts-tree\nret=$?\nif [ ${ret} -ne 0 ]; then\n\techo \"Failed to generate ptl json\"\n\tcleanup\n\texit $ret\nelse\n\tmv ptl_ts_tree.json ${cidir}\nfi\n\ncleanup\n"
  },
  {
    "path": "ci/etc/id_rsa",
    "content": "-----BEGIN OPENSSH PRIVATE KEY-----\nb3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAABlwAAAAdzc2gtcn\nNhAAAAAwEAAQAAAYEAt6kNw0C2ZMybkld0sucLkpaMuwn6SXB6+9scN3ZMTSFRSMxa85MT\nee8sOsiyrkIjv85nAWdYsGjLBKgr43IlV2qBCxZO2YsTryl52E6pVBbVuizBj8m6sO+3hM\nhUBEbIrqvplrxf19y2HlNsygSlNFfMb3ptIIvTGGez+o8ZTAI3wXcFqxNxi8flo77yp6UH\nx31zIDOJCfN98W1GYXVwXiowfkoKkROvbH9B/HsLTjuxkHzFCGwGNzEClr3ayJSmYyJu0P\nnfjBPeZrL7Dxt1RwSfqI8j1kp4VhLCeEyFYS5pi8CypLgtvL37gLdqEGpBjcf4J/AyjDZJ\ncDgzTI+ZrTP/ldhnVMy84B8TAC53swauaec1JKDtc+FNSN28GY/0VTcyH7Pwt9gRESWFsV\nzrN4lwRWZivwndi3mj3zUcge3LQ6pBpjTEGiYIgNNJd5mjDZM9ieB4lC7+MTmq9Yg0Dzm4\nu6uanAP5t2up6F5jck/7sLiAX4+fQ8vLZOAqsZdhAAAFgPS9UiD0vVIgAAAAB3NzaC1yc2\nEAAAGBALepDcNAtmTMm5JXdLLnC5KWjLsJ+klwevvbHDd2TE0hUUjMWvOTE3nvLDrIsq5C\nI7/OZwFnWLBoywSoK+NyJVdqgQsWTtmLE68pedhOqVQW1boswY/JurDvt4TIVARGyK6r6Z\na8X9fcth5TbMoEpTRXzG96bSCL0xhns/qPGUwCN8F3BasTcYvH5aO+8qelB8d9cyAziQnz\nffFtRmF1cF4qMH5KCpETr2x/Qfx7C047sZB8xQhsBjcxApa92siUpmMibtD534wT3may+w\n8bdUcEn6iPI9ZKeFYSwnhMhWEuaYvAsqS4Lby9+4C3ahBqQY3H+CfwMow2SXA4M0yPma0z\n/5XYZ1TMvOAfEwAud7MGrmnnNSSg7XPhTUjdvBmP9FU3Mh+z8LfYERElhbFc6zeJcEVmYr\n8J3Yt5o981HIHty0OqQaY0xBomCIDTSXeZow2TPYngeJQu/jE5qvWINA85uLurmpwD+bdr\nqeheY3JP+7C4gF+Pn0PLy2TgKrGXYQAAAAMBAAEAAAGAJVEQHtATPz/jjESAzajsTQiR55\n8LX8ie9HV8sjgzIKjYXzZGdJ85odja38bPp2CA6wQBIePhvVZNidCxujEDLVPSjHIn60O6\n6ChBPZYeCZvqKT3WxmRyrmjGnRAnIgdP103O1HXJ845A4sCIpjNzbcM5Ip15dtdyOM85Xn\nuc5Di/I2wPlscIlyIyoqa1nyKFBh+TOMO/4Gm8+UT+u+akwj1IRSC+LOQXDLB+s9I8ZdTz\nKyxuzFtGmAg5Qm+o+IBbRbvTzpdx2UHkiFw8+VQn8fwHuzfR+Od48D1kFBCk5yGcAMTQP3\ng4AV8vp/UAVU3f4stYWh7okxXE7dKY+YTb1qHbjadNp9KqJUY3d+LO2F2vT7QBD4eIDS22\n1emtqfaiLXXWDG1vZHXq3wx5MlvnwFE4gSY9yxF0FsSwi3s0j8zEYjszKQBiAoPLkxmqDq\n2/WcmT9GhKd5FsMQEy0W8lBePtRYw85BRfhZH7Lzh0gGZ+3ZYss4qQS2vAzqWWiuhRAAAA\nwHsVos2ccAcgMeTVYmo3JNgahAF0orP+NPxFLgZrK7Z0nwjICpKfaR6D3lWiFvlhUH33iv\nwr3gCAFTNL7zblbJXTebA5dvw8kFmUuXhe7/uRGNjn2l38j0t+aHMXDVafo7Dm1chh6pa8\nAyP5/OR9sVXsFVrkQ3+iVQHJBpsXDYlI7q5j51CrNb7wgr8l8HhWyDLDTg0irmzfrvPJ43\nH7U
RIgDIDuX7mbSnYoDDtP2azdpaZyG1IZlbFkCNyaQtjycwAAAMEA8fq3kVuTqntNXqTE\n3H7CnKSwR8w7yE/VGaVs7jLRvPyHpC3umUiKWjO/ebLMKBKdS3fQ0I72MB2BdeQbmuYTBY\n2FwRQOAopjketAZDrZWhjmzRgSsSRofl3N/cqya6L+0RcAfwR/2OGM9E1QzEIyPcH8khVo\nBK2I+xRpU5s1b5SXw5TOge9PXWgEWvRRtFRgbgOJ5WfPiLabKMm9skVx8BiNFzsVJxsEnb\nWdwJKwnT+2a7gIOnM+DvFQiLyEr8QTAAAAwQDCTUpPyB8cqP0cCOFFH46im7ryV3ROZLlj\nhj5dVKpXPyA5iHEQbPTx+VXOLSM1MysNRFPWlisE2OCES897kPgD5cypatnC1aa+sztOeD\nfuuEN4wZXjDo97DhIaO6YtfzhXI5Y/CEOKWQmrWlEQEf4HEGoK2kQka5KeOPKQTACLcLqi\nATLFxSEDr6wyEwHA0EGh7WjH1zEpFDDY9pUCAwmyETD/OriqfCbRPhGrTTQVrcadG/Sc72\nV5hjFzgl3J3TsAAAALcm9vdEBwYnMuY2k=\n-----END OPENSSH PRIVATE KEY-----\n"
  },
  {
    "path": "ci/etc/id_rsa.pub",
    "content": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3qQ3DQLZkzJuSV3Sy5wuSloy7CfpJcHr72xw3dkxNIVFIzFrzkxN57yw6yLKuQiO/zmcBZ1iwaMsEqCvjciVXaoELFk7ZixOvKXnYTqlUFtW6LMGPybqw77eEyFQERsiuq+mWvF/X3LYeU2zKBKU0V8xvem0gi9MYZ7P6jxlMAjfBdwWrE3GLx+WjvvKnpQfHfXMgM4kJ833xbUZhdXBeKjB+SgqRE69sf0H8ewtOO7GQfMUIbAY3MQKWvdrIlKZjIm7Q+d+ME95msvsPG3VHBJ+ojyPWSnhWEsJ4TIVhLmmLwLKkuC28vfuAt2oQakGNx/gn8DKMNklwODNMj5mtM/+V2GdUzLzgHxMALnezBq5p5zUkoO1z4U1I3bwZj/RVNzIfs/C32BERJYWxXOs3iXBFZmK/Cd2LeaPfNRyB7ctDqkGmNMQaJgiA00l3maMNkz2J4HiULv4xOar1iDQPObi7q5qcA/m3a6noXmNyT/uwuIBfj59Dy8tk4Cqxl2E= root@pbs.ci\n"
  },
  {
    "path": "ci/etc/install-system-packages",
    "content": "#!/bin/bash -x\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\ngroupadd -g 1900 tstgrp00\ngroupadd -g 1901 tstgrp01\ngroupadd -g 1902 tstgrp02\ngroupadd -g 1903 tstgrp03\ngroupadd -g 1904 tstgrp04\ngroupadd -g 1905 tstgrp05\ngroupadd -g 1906 tstgrp06\ngroupadd -g 1907 tstgrp07\ngroupadd -g 901 pbs\ngroupadd -g 1146 agt\nuseradd -m -s /bin/bash -u 4357 -g tstgrp00 -G tstgrp00 pbsadmin\nuseradd -m -s /bin/bash -u 9000 -g tstgrp00 -G tstgrp00 pbsbuild\nuseradd -m -s /bin/bash -u 884 -g tstgrp00 -G tstgrp00 pbsdata\nuseradd -m -s /bin/bash -u 4367 -g tstgrp00 -G tstgrp00 pbsmgr\nuseradd -m -s /bin/bash -u 4373 -g tstgrp00 -G tstgrp00 pbsnonroot\nuseradd -m -s /bin/bash -u 4356 -g tstgrp00 -G tstgrp00 pbsoper\nuseradd -m -s /bin/bash -u 4358 -g tstgrp00 -G tstgrp00 pbsother\nuseradd -m -s /bin/bash -u 4371 -g tstgrp00 -G tstgrp00 pbsroot\nuseradd -m -s /bin/bash -u 4355 -g tstgrp00 -G tstgrp02,tstgrp00 pbstest\nuseradd -m -s /bin/bash -u 4359 -g tstgrp00 -G tstgrp00 pbsuser\nuseradd -m -s /bin/bash -u 4361 -g tstgrp00 -G tstgrp01,tstgrp02,tstgrp00 pbsuser1\nuseradd -m -s /bin/bash -u 4362 -g tstgrp00 -G tstgrp01,tstgrp03,tstgrp00 pbsuser2\nuseradd -m -s /bin/bash -u 4363 -g tstgrp00 -G tstgrp01,tstgrp04,tstgrp00 pbsuser3\nuseradd -m -s /bin/bash -u 4364 -g tstgrp01 -G tstgrp04,tstgrp05,tstgrp01 pbsuser4\nuseradd -m -s /bin/bash -u 4365 -g tstgrp02 -G tstgrp04,tstgrp06,tstgrp02 pbsuser5\nuseradd -m -s /bin/bash -u 4366 -g 
tstgrp03 -G tstgrp04,tstgrp07,tstgrp03 pbsuser6\nuseradd -m -s /bin/bash -u 4368 -g tstgrp01 -G tstgrp01 pbsuser7\nuseradd -m -s /bin/bash -u 11000 -g tstgrp00 -G tstgrp00 tstusr00\nuseradd -m -s /bin/bash -u 11001 -g tstgrp00 -G tstgrp00 tstusr01\nchmod g+x,o+x /home/*\n\n. /etc/os-release\n\nif [ \"x${ID}\" == \"xcentos\" -a \"x${VERSION_ID}\" == \"x8\" ]; then\n  sed -i -e \"s|mirrorlist=|#mirrorlist=|g\" /etc/yum.repos.d/CentOS-*\n  sed -i -e \"s|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g\" /etc/yum.repos.d/CentOS-*\n  dnf -y clean all\n  dnf -y install 'dnf-command(config-manager)'\n  dnf -y config-manager --set-enabled powertools\n  dnf -y install epel-release\n  dnf -y update\n  dnf -y install git gcc make m4 autoconf automake libtool rpm-build rpmdevtools \\\n    hwloc-devel libX11-devel libXt-devel libXext-devel libXft-devel \\\n    libedit-devel libical-devel cmake glibc-common yum-utils \\\n    ncurses-devel postgresql-devel python3-devel tcl-devel tk-devel swig \\\n    expat-devel openssl-devel libXext libXft expat libedit glibc-static \\\n    postgresql-server python3 tcl tk libical perl tar sendmail sudo perl-Env \\\n    perl-Switch gcc-c++ doxygen elfutils bison flex glibc-langpack-en \\\n    which net-tools man-db time csh lsof tzdata file \\\n    expect perl-App-cpanminus cpan initscripts \\\n    systemd systemd-sysv libcap rsyslog \\\n    openssh-clients openssh-server valgrind-devel valgrind libasan \\\n    llvm bc gzip gdb rsync wget curl ccache bind-utils vim iputils pam-devel\n  dnf -y clean all\n  rpmdev-setuptree\n  __systemd_paths='/etc/systemd/system /usr/lib/systemd/system'\nelif [ \"x${ID}\" == \"xcentos\" -a \"x${VERSION_ID}\" == \"x7\" ]; then\n  yum -y clean all\n  rpm --import https://package.perforce.com/perforce.pubkey &&\n    {\n      echo [perforce]\n      echo name=Perforce\n      echo baseurl=http://package.perforce.com/yum/rhel/7/x86_64\n      echo enabled=1\n      echo gpgcheck=1\n    } 
>/etc/yum.repos.d/perforce.repo\n  yum -y install epel-release\n  yum -y update\n  yum -y install git gcc make m4 autoconf automake libtool rpm-build rpmdevtools \\\n    hwloc-devel libX11-devel libXt-devel libXext-devel libXft-devel \\\n    libedit-devel libical-devel cmake glibc-common yum-utils \\\n    ncurses-devel postgresql-devel python3-devel tcl-devel tk-devel swig \\\n    expat-devel openssl-devel libXext libXft expat libedit glibc-static \\\n    postgresql-server python3 tcl tk libical perl tar sendmail sudo perl-Env \\\n    perl-Switch gcc-c++ doxygen elfutils bison flex postgresql-contrib \\\n    which net-tools man-db time csh lsof tzdata file glibc-langpack-en \\\n    expect perl-App-cpanminus cpan \\\n    systemd systemd-sysv libcap rsyslog \\\n    openssh-clients openssh-server valgrind-devel valgrind libasan pam-devel \\\n    llvm bc gzip gdb rsync wget curl ccache bind-utils vim iputils python2-pip helix-cli\n  yum -y clean all\n  rpmdev-setuptree\n  __systemd_paths='/etc/systemd/system /usr/lib/systemd/system'\nelif [ \"x${ID}\" == \"xopensuse\" -o \"x${ID}\" == \"xopensuse-leap\" ]; then\n  __on=\"$(grep -oP '(?<=^NAME=\").*(?=\")' /etc/os-release)\"\n  __ov=\"$(grep -oP '(?<=^VERSION=\").*(?=\")' /etc/os-release)\"\n  zypper -n addrepo -ceKfG \"https://download.opensuse.org/repositories/devel:tools/${__on// /_}_${__ov// /_}/devel:tools.repo\"\n  zypper -n addrepo -ceKfG \"https://download.opensuse.org/repositories/devel:languages:perl/${__on// /_}_${__ov// /_}/devel:languages:perl.repo\"\n  zypper -n addrepo -ceKfG \"http://package.perforce.com/yum/rhel/7/x86_64\" p4\n  zypper -n clean -mMa\n  zypper -n refresh -fbd\n  zypper --no-gpg-checks -n update --force-resolution\n  zypper --no-gpg-checks -n install --force-resolution git m4 \\\n    gcc make autoconf automake libtool rpm-build rpmdevtools helix-cli hwloc-devel \\\n    libX11-devel libXt-devel libedit-devel libical-devel cmake ncurses-devel \\\n    postgresql-devel python3-devel tcl-devel 
tk-devel swig libexpat-devel \\\n    libopenssl-devel libXext-devel libXft-devel expat libedit fontconfig net-tools-deprecated net-tools \\\n    timezone python3-xml glibc-devel-static postgresql-server python3 python3-pip tcl tk \\\n    perl tar sendmail sudo gcc-c++ doxygen elfutils bison flex \\\n    which net-tools net-tools-deprecated man time tcsh lsof file vim \\\n    expect perl-App-cpanminus perl-Parse-PMFile hostname bind-utils \\\n    systemd systemd-sysvinit libcap-progs iputils rsyslog openssh pam-devel \\\n    valgrind-devel valgrind llvm gdb rsync wget ccache bc gzip python-pip\n  zypper -n clean -mMa\n  zypper -n rr devel_tools\n  rpmdev-setuptree\n  __systemd_paths='/etc/systemd/system /usr/lib/systemd/system'\nelif [ \"x${ID}\" == \"xubuntu\" ]; then\n  if [ \"x${DEBIAN_FRONTEND}\" == \"x\" ]; then\n    export DEBIAN_FRONTEND=noninteractive\n  fi\n  apt -y update\n  apt -y upgrade\n  apt -y install git build-essential gcc g++ make dpkg-dev m4 \\\n    autoconf automake libtool rpm alien elfutils dh-make \\\n    libhwloc-dev libx11-dev libxt-dev libedit-dev libical-dev cmake \\\n    libncurses-dev libpq-dev python3-dev tcl-dev tk-dev swig libexpat1-dev \\\n    libssl-dev libxext-dev libxft-dev pkg-config expat postgresql perl tar \\\n    sendmail sendmail-bin sudo doxygen bison flex fakeroot libnuma1 \\\n    net-tools man time csh lsof curl gzip iputils-ping \\\n    expect cpanminus locales-all dnsutils tzdata vim bc file \\\n    systemd systemd-sysv sysvinit-utils libcap2-bin rsyslog libpam-dev \\\n    openssh-server openssh-client valgrind llvm gdb rsync wget ccache \\\n    python3 python3-pip cpanminus\n  if [ \"x${ID}\" == \"xubuntu\" -a \"x${VERSION_ID}\" == \"x16.04\" ]; then\n    wget -qO - https://package.perforce.com/perforce.pubkey | apt-key add - &&\n      echo 'deb http://package.perforce.com/apt/ubuntu/ xenial release' >/etc/apt/sources.list.d/perforce.list\n  else\n    wget -qO - https://package.perforce.com/perforce.pubkey | apt-key 
add - &&\n      echo 'deb http://package.perforce.com/apt/ubuntu/ bionic release' >/etc/apt/sources.list.d/perforce.list\n  fi\n  apt -y update\n  apt -y install helix-cli\n  __systemd_paths='/etc/systemd/system /lib/systemd/system'\n  apt -y autoremove\n  apt -y clean\n  rm -rf /var/lib/apt/lists/*\n  mkdir -p /root/rpmbuild/SOURCES\nfi\n\n# Install pip, requests and sh python modules\nset -ex &&\n  python -m pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org requests sh &&\n  rm -rf ~/.cache /tmp/*\n\n# QALib deps modules\ncpanm -n --no-wget --no-lwp --curl \\\n  IO::Pty IPC::Run IPC::Cmd Class::Accessor Module::Build Pod::Usage \\\n  Getopt::Long DateTime Date::Parse Proc::ProcessTable Test::More \\\n  Unix::Process Time::HiRes File::FcntlLock File::Remote\n\nfind ${__systemd_paths} -path '*.wants/*' \\\n  -not -name '*journald*' \\\n  -not -name '*systemd-tmpfiles*' \\\n  -not -name '*systemd-user-sessions*' \\\n  -not -name '*getty*' \\\n  -not -name '*dbus*' \\\n  -exec rm -fv {} \\;\n\ncp /workspace/etc/ci-script-wrapper.service /etc/systemd/system\nsystemctl set-default multi-user.target\n\nsystemctl enable sshd || systemctl enable ssh\nsystemctl enable sendmail\n# skip sm-client handling only on Ubuntu 16.04\nif [ \"x${ID}\" != \"xubuntu\" -o \"x${VERSION_ID}\" != \"x16.04\" ]; then\n  systemctl disable sm-client\n  systemctl mask sm-client\nfi\nsystemctl enable rsyslog\nsystemctl disable getty@.service\nsystemctl unmask getty.target\nsystemctl unmask console-getty\nsystemctl enable getty.target\nsystemctl enable console-getty\nsystemctl enable ci-script-wrapper\n\ncp /workspace/etc/container-env-setup.sh /etc/profile.d/0container-env-setup.sh\ncp /workspace/etc/sudoers-overrides /etc/sudoers.d/container-overrides\n\necho '' >/etc/security/limits.conf\nrm -f /etc/security/limits.d/*.conf\nrm -rf ~/.ssh\nmkdir --mode=700 ~/.ssh\ncp /workspace/etc/id_rsa* ~/.ssh/\nchmod 0600 ~/.ssh/id_rsa\nchmod 0644 ~/.ssh/id_rsa.pub\ncp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys\nchmod 0600 ~/.ssh/authorized_keys\necho 'root:pbs' | chpasswd\ncat /etc/profile.d/0container-env-setup.sh >>/root/.profile\ncat /etc/profile.d/0container-env-setup.sh >>/root/.bash_profile\ncat /etc/profile.d/0container-env-setup.sh >>/root/.bashrc\nfor user in $(awk -F: '/^(pbs|tst)/ {print $1}' /etc/passwd); do\n  rm -rf /home/${user}/.ssh\n  cp -rfp ~/.ssh /home/${user}/\n  chown -R ${user}: /home/${user}/.ssh\n  echo \"${user}:pbs\" | chpasswd\n  cat /etc/profile.d/0container-env-setup.sh >>/home/${user}/.profile\n  cat /etc/profile.d/0container-env-setup.sh >>/home/${user}/.bash_profile\n  cat /etc/profile.d/0container-env-setup.sh >>/home/${user}/.bashrc\n  chown ${user}: /home/${user}/.bashrc /home/${user}/.profile /home/${user}/.bash_profile\ndone\necho 'Host *' >>/etc/ssh/ssh_config\necho '  StrictHostKeyChecking no' >>/etc/ssh/ssh_config\necho '  ConnectionAttempts 3' >>/etc/ssh/ssh_config\necho '  IdentityFile ~/.ssh/id_rsa' >>/etc/ssh/ssh_config\necho '  PreferredAuthentications publickey,password' >>/etc/ssh/ssh_config\necho 'PermitRootLogin yes' >>/etc/ssh/sshd_config\necho 'UseDNS no' >>/etc/ssh/sshd_config\nsed -i 's/AcceptEnv/# AcceptEnv/g' /etc/ssh/sshd_config\nssh-keygen -A\nrm -f /var/run/*.pid /run/nologin\n\nrm -rf ~/.cache ~/.cpanm /var/{log,cache} /tmp /var/tmp /run/*.pid /var/run/*.pid\nmkdir -p --mode=0755 /var/{log,cache}\nmkdir -p --mode=1777 /tmp /var/tmp\n"
  },
  {
    "path": "ci/etc/killit.sh",
    "content": "#!/bin/bash -x\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nkillit() {\n    if [ -z \"$1\" ]; then\n        return 0\n    fi\n    pid=$(ps -ef 2>/dev/null | grep $1 | grep -v grep | awk '{print $2}')\n    if [ ! 
-z \"${pid}\" ]; then\n        echo \"kill -TERM ${pid}\"\n        kill -TERM ${pid} 2>/dev/null\n    else\n        return 0\n    fi\n    sleep 10\n    pid=$(ps -ef 2>/dev/null | grep $1 | grep -v grep | awk '{print $2}')\n    if [ ! -z \"${pid}\" ]; then\n        echo \"kill -KILL ${pid}\"\n        kill -KILL ${pid} 2>/dev/null\n    fi\n}\n\nkill_pbs_process() {\n    ret=$(ps -eaf 2>/dev/null | grep pbs_ | grep -v grep | wc -l)\n    if [ ${ret} -gt 0 ]; then\n        killit pbs_server\n        killit pbs_mom\n        killit pbs_comm\n        killit pbs_sched\n        killit pbs_ds_monitor\n        killit /opt/pbs/pgsql/bin/postgres\n        killit pbs_benchpress\n        ps_count=$(ps -eaf 2>/dev/null | grep pbs_ | grep -v grep | wc -l)\n        if [ ${ps_count} -eq 0 ]; then\n            return 0\n        else\n            return 1\n        fi\n    fi\n}\n\n. /etc/os-release\n\nif [ \"x$1\" == \"xbackup\" ]; then\n    time_stamp=$(date -u \"+%Y-%m-%d-%H%M%S\")\n    folder=session-${time_stamp}\n    mkdir -p /logs/${folder}\n    cp /logs/build-* /logs/${folder}\n    cp /logs/logfile* /logs/${folder}\n    cp /logs/result* /logs/${folder}\n    cp /src/.config_dir/.conf.json /logs/${folder}/conf.json\n    cp /src/docker-compose.json /logs/${folder}/\n    rm -rf /logs/build-*\n    rm -rf /logs/logfile*\n    rm -rf /logs/result*\n    rm -rf /pbssrc/target-*\n    exit 0\nfi\n\nclean=${1}\necho \"Trying to stop all processes via init.d\"\n/etc/init.d/pbs stop\nret=$?\nif [ ${ret} -ne 0 ]; then\n    echo \"failed graceful stop\"\n    echo \"force kill all processes\"\n    kill_pbs_process\nelse\n    echo \"checking for running ptl\"\n    benchpress_count=$(ps -ef 2>/dev/null | grep pbs_benchpress | grep -v grep | wc -l)\n    if [ ${benchpress_count} -gt 0 ]; then\n        killit pbs_benchpress\n    else\n        echo \"No running ptl tests found\"\n    fi\nfi\n\nif [ \"XX${clean}\" == \"XXclean\" ]; then\n    cd /pbssrc/target-${ID} && make uninstall\n    rm -rf /etc/init.d/pbs\n    rm -rf 
/etc/pbs.conf\n    rm -rf /var/spool/pbs\n    rm -rf /opt/ptl\n    rm -rf /opt/pbs\nfi\n"
  },
  {
    "path": "ci/etc/macros",
    "content": "CONFIG_DIR=.config_dir\nSTATUS_FILE=status\nPARAM_FILE=params\nREQUIREMENT_DECORATOR_FILE=requirements_decorator\nCONFIGURE_OPT_FILE=configure_opt\nBENCHPRESS_OPT_FILE=benchpress_opt\nCONF_JSON_FILE=conf.json\nREQ_DOCKER_VERSION=17.12.0\nDEFAULT_PLATFORM=centos:8\nSUPPORTED_PLATFORMS=centos:7,centos:8,ubuntu:16.04,ubuntu:18.04\n"
  },
  {
    "path": "ci/etc/sudoers-overrides",
    "content": "Defaults\tsyslog = local7\nDefaults\talways_set_home\nDefaults\t!requiretty\nDefaults\t!env_reset\nDefaults\t!secure_path\nDefaults\tenv_keep = \"*\"\nALL ALL=(ALL)       NOPASSWD: ALL\n"
  },
  {
    "path": "configure.ac",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_PREREQ([2.63])\n# Use PBS_VERSION to override the version statically defined here. 
For example:\n# ./configure PBS_VERSION=20.0.0 --prefix=/opt/pbs\nAC_INIT([OpenPBS],\n  [23.06.06],\n  [pbssupport@altair.com],\n  [openpbs],\n  [http://www.openpbs.org/])\nAC_CONFIG_HEADERS([src/include/pbs_config.h])\nAC_CONFIG_SRCDIR([src/cmds/qmgr.c])\nAC_CONFIG_AUX_DIR([buildutils])\nAC_CONFIG_MACRO_DIR([m4])\nAC_CANONICAL_TARGET([])\nos_id=`grep ^ID= /etc/os-release | sed -n 's/.*\"\\(.*\\)\"/\\1/p'`\nAS_CASE([$os_id],\n  [opensuse-tumbleweed], m4_define([am_init_string], [-Wall foreign subdir-objects]),\n  [*], m4_define([am_init_string], [-Wall foreign]))\nAM_INIT_AUTOMAKE(am_init_string)\nAC_USE_SYSTEM_EXTENSIONS\n\n\n# Checks for programs.\nAC_PROG_AWK\nAC_PROG_YACC\nAC_PROG_SED\nAC_PROG_CC\nAC_PROG_LEX\nAC_PROG_INSTALL\nAC_PROG_LN_S\nAC_PROG_CXX\n\nAC_SUBST([AM_CXXFLAGS], [--std=c++11])\n\n# Automake macros\n#AM_PROG_AR macro is defined with automake version >= 1.12\nm4_ifdef([AM_PROG_AR], [AM_PROG_AR])\nAM_PROG_CC_C_O\n\n# Initialize libtool\nAM_PROG_LIBTOOL\nLT_INIT([shared static])\n\n# Checks for libraries.\nAC_CHECK_LIB([c], [xdr_int],\n  [],\n  AC_CHECK_LIB(nsl, xdr_int)\n)\nAC_CHECK_LIB([c], [ruserok],\n  [],\n  AC_CHECK_LIB(socket, ruserok)\n)\nAC_CHECK_LIB([c], [crypt],\n  [],\n  AC_CHECK_LIB(crypt, crypt)\n)\nAC_CHECK_LIB([c], [posix_openpt],\n  AC_DEFINE([HAVE_POSIX_OPENPT], [], [Defined when posix_openpt is available])\n)\nAC_CHECK_LIB(dl, dlopen)\nAC_CHECK_LIB([kvm], [kvm_open])\nAC_CHECK_LIB([socket], [socket],\n  [socket_lib=\"-lsocket -lnsl\"]\n  AC_SUBST(socket_lib),\n  [socket_lib=\"\"]\n  AC_SUBST(socket_lib),\n  [-lnsl]\n)\nAC_CHECK_LIB([c], [malloc_info],\n  AC_DEFINE([HAVE_MALLOC_INFO], [], [Defined when malloc_info is available])\n)\n\n# Check for X Window System\nAC_PATH_XTRA\n\n# Checks for optional header files.\nAC_CHECK_HEADERS([ \\\n\tcom_err.h \\\n\tgssapi.h \\\n\tkrb5.h \\\n\tlibpq-fe.h \\\n\tmach/mach.h \\\n\tnlist.h \\\n\tsys/eventfd.h \\\n\tsys/systeminfo.h \\\n])\n\n# Checks for required header files.\nAC_CHECK_HEADERS([ 
\\\n\tstdio.h \\\n\talloca.h \\\n\tarpa/inet.h \\\n\tassert.h \\\n\tctype.h \\\n\tdirent.h \\\n\tdlfcn.h \\\n\texecinfo.h \\\n\tfcntl.h \\\n\tfloat.h \\\n\tfstab.h \\\n\tftw.h \\\n\tgrp.h \\\n\tlibgen.h \\\n\tlimits.h \\\n\tmath.h \\\n\tmemory.h \\\n\tnetdb.h \\\n\tnetinet/in.h \\\n\tnetinet/in_systm.h \\\n\tnetinet/ip.h \\\n\tnetinet/tcp.h \\\n\topenssl/aes.h \\\n\topenssl/bio.h \\\n\topenssl/err.h \\\n\topenssl/evp.h \\\n\topenssl/ssl.h \\\n\tpaths.h \\\n\tpoll.h \\\n\tpthread.h \\\n\tpwd.h \\\n\tregex.h \\\n\tsignal.h \\\n\tstdbool.h \\\n\tstddef.h \\\n\tstdint.h \\\n\tstdio.h \\\n\tstdlib.h \\\n\tstring.h \\\n\tstrings.h \\\n\tsyslog.h \\\n\tsys/fcntl.h \\\n\tsys/file.h \\\n\tsys/ioctl.h \\\n\tsys/mman.h \\\n\tsys/mount.h \\\n\tsys/param.h \\\n\tsys/poll.h \\\n\tsys/quota.h \\\n\tsys/resource.h \\\n\tsys/select.h \\\n\tsys/signal.h \\\n\tsys/socket.h \\\n\tsys/stat.h \\\n\tsys/statvfs.h \\\n\tsys/time.h \\\n\tsys/timeb.h \\\n\tsys/times.h \\\n\tsys/types.h \\\n\tsys/uio.h \\\n\tsys/un.h \\\n\tsys/user.h \\\n\tsys/utsname.h \\\n\tsys/wait.h \\\n\ttermios.h \\\n\ttime.h \\\n\tunistd.h \\\n\tutime.h \\\n\tX11/Intrinsic.h \\\n\tX11/X.h \\\n\tX11/Xlib.h \\\n\tzlib.h \\\n\t],, AC_MSG_ERROR([Required header file is missing.]) \\\n)\n\n# Checks for typedefs, structures, and compiler characteristics.\n#AC_CHECK_HEADER_STDBOOL macro is defined with autoconf version >= 2.67\nm4_ifdef([AC_CHECK_HEADER_STDBOOL], [AC_CHECK_HEADER_STDBOOL])\nAC_TYPE_UID_T\nAC_TYPE_MODE_T\nAC_TYPE_OFF_T\nAC_TYPE_PID_T\nAC_C_RESTRICT\nAC_TYPE_SIZE_T\nAC_TYPE_SSIZE_T\nAC_CHECK_MEMBERS([struct stat.st_blksize])\nAC_TYPE_UINT16_T\nAC_TYPE_UINT32_T\nAC_TYPE_UINT64_T\nAC_TYPE_UINT8_T\nAC_CHECK_TYPES([ptrdiff_t])\n\n# Checks for library functions.\nAC_FUNC_ALLOCA\nAC_FUNC_CHOWN\nAC_FUNC_ERROR_AT_LINE\nAC_FUNC_FORK\nAC_FUNC_GETGROUPS\nAC_FUNC_GETMNTENT\nAC_FUNC_LSTAT_FOLLOWS_SLASHED_SYMLINK\nAC_FUNC_MKTIME\nAC_FUNC_MMAP\nAC_FUNC_STRERROR_R\nAC_FUNC_STRTOD\nAC_CHECK_FUNCS([ \\\n\talarm \\\n\tatexit 
\\\n\tbzero \\\n\tdup2 \\\n\tendpwent \\\n\tfloor \\\n\tftruncate \\\n\tgetcwd \\\n\tgethostbyaddr \\\n\tgethostbyname \\\n\tgethostname \\\n\tgetmntent \\\n\tgetpagesize \\\n\tgettimeofday \\\n\thasmntopt \\\n\tinet_ntoa \\\n\tlocaltime_r \\\n\tmemchr \\\n\tmemmove \\\n\tmemset \\\n\tmkdir \\\n\tmunmap \\\n\tpathconf \\\n\tpoll \\\n\tpstat_getdynamic \\\n\tputenv \\\n\trealpath \\\n\tregcomp \\\n\trmdir \\\n\tselect \\\n\tsetresuid \\\n\tsetresgid \\\n\tgetpwuid \\\n\tinitgroups \\\n\tseteuid \\\n\tsetegid \\\n\tstrerror_r \\\n\tsocket \\\n\tstrcasecmp \\\n\tstrchr \\\n\tstrcspn \\\n\tstrdup \\\n\tstrerror \\\n\tstrncasecmp \\\n\tstrpbrk \\\n\tstrrchr \\\n\tstrspn \\\n\tstrstr \\\n\tstrtol \\\n\tstrtoul \\\n\tstrtoull \\\n\tsysinfo \\\n\tuname \\\n\tutime \\\n])\n\nPKG_PROG_PKG_CONFIG\nm4_ifdef([PKG_INSTALLDIR],\n  [PKG_INSTALLDIR],\n  [\n    pkgconfigdir=/usr/lib64/pkgconfig\n    AC_SUBST([pkgconfigdir])\n  ])\n\n\n# PBS macros (order matters for some of these)\nPBS_AC_PBS_VERSION\nPBS_AC_DECL_H_ERRNO\nPBS_AC_DECL_SOCKLEN_T\nPBS_AC_DECL_EPOLL\nPBS_AC_DECL_EPOLL_PWAIT\nPBS_AC_DECL_PPOLL\nPBS_AC_WITH_SERVER_HOME\nPBS_AC_WITH_SERVER_NAME_FILE\nPBS_AC_WITH_DATABASE_DIR\nPBS_AC_WITH_DATABASE_USER\nPBS_AC_WITH_DATABASE_PORT\nPBS_AC_WITH_PBS_CONF_FILE\nPBS_AC_WITH_TMP_DIR\nPBS_AC_WITH_UNSUPPORTED_DIR\nPBS_AC_WITH_CORE_LIMIT\nPBS_AC_WITH_PYTHON\nPBS_AC_WITH_EXPAT\nPBS_AC_WITH_EDITLINE\nPBS_AC_WITH_HWLOC\nPBS_AC_WITH_LIBICAL\nPBS_AC_WITH_PMIX\nPBS_AC_WITH_SENDMAIL\nPBS_AC_WITH_SWIG\nPBS_AC_WITH_TCL\nPBS_AC_WITH_TCLATRSEP\nPBS_AC_WITH_XAUTH\nPBS_AC_WITH_KRBAUTH\nPBS_AC_WITH_MIN_STACK_LIMIT\nPBS_AC_DISABLE_SHELL_PIPE\nPBS_AC_DISABLE_SYSLOG\nPBS_AC_SECURITY\nPBS_AC_ENABLE_ALPS\nPBS_AC_WITH_LIBZ\nPBS_AC_ENABLE_PTL\nPBS_AC_SYSTEMD_UNITDIR\nPBS_AC_PATCH_LIBTOOL\nPBS_AC_WITH_CJSON\n\nAC_CONFIG_FILES([\n\topenpbs.spec\n\tMakefile\n\tbuildutils/Makefile\n\tdoc/Makefile\n\ttest/Makefile\n\ttest/fw/Makefile\n\ttest/tests/Makefile\n\ttest/fw/setup.py\n\ttest/fw/ptl/__init__.py\n\tsrc/
Makefile\n\tsrc/cmds/Makefile\n\tsrc/cmds/mpiexec\n\tsrc/cmds/pbs_lamboot\n\tsrc/cmds/pbs_mpihp\n\tsrc/cmds/pbs_mpilam\n\tsrc/cmds/pbs_mpirun\n\tsrc/cmds/pbs_remsh\n\tsrc/cmds/pbsrun_unwrap\n\tsrc/cmds/pbsrun_wrap\n\tsrc/cmds/pbsrun\n\tsrc/cmds/scripts/Makefile\n\tsrc/cmds/scripts/modulefile\n\tsrc/cmds/scripts/pbs_habitat\n\tsrc/cmds/scripts/pbs_init.d\n\tsrc/cmds/scripts/pbs_reload\n\tsrc/cmds/scripts/pbs_poerun\n\tsrc/cmds/scripts/pbs_postinstall\n\tsrc/cmds/scripts/pbs.service\n\tsrc/cmds/scripts/pbsrun.poe\n\tsrc/hooks/Makefile\n\tsrc/iff/Makefile\n\tsrc/include/Makefile\n\tsrc/include/pbs_version.h\n\tsrc/lib/Libattr/Makefile\n\tsrc/lib/Libdb/Makefile\n\tsrc/lib/Libdb/pgsql/Makefile\n\tsrc/lib/Libifl/Makefile\n\tsrc/lib/Liblog/Makefile\n\tsrc/lib/Libnet/Makefile\n\tsrc/lib/Libpbs/Makefile\n\tsrc/lib/Libpbs/pbs.pc\n\tsrc/lib/Libpython/Makefile\n\tsrc/lib/Libsec/Makefile\n\tsrc/lib/Libsite/Makefile\n\tsrc/lib/Libtpp/Makefile\n\tsrc/lib/Libutil/Makefile\n\tsrc/lib/Libauth/Makefile\n\tsrc/lib/Libauth/gss/Makefile\n\tsrc/lib/Libauth/munge/Makefile\n\tsrc/lib/Liblicensing/Makefile\n\tsrc/lib/Libjson/Makefile\n\tsrc/lib/Libjson/cJSON/Makefile\n\tsrc/lib/Makefile\n\tsrc/modules/Makefile\n\tsrc/modules/python/Makefile\n\tsrc/mom_rcp/Makefile\n\tsrc/resmom/Makefile\n\tsrc/scheduler/Makefile\n\tsrc/server/Makefile\n\tsrc/tools/Makefile\n\tsrc/tools/wrap_tcl.sh\n\tsrc/unsupported/Makefile\n])\nAC_OUTPUT\n"
  },
  {
    "path": "doc/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nnotrans_dist_man1_MANS = \\\n\tman1/pbsdsh.1B \\\n\tman1/pbs_login.1B \\\n\tman1/pbs_python.1B \\\n\tman1/pbs_ralter.1B \\\n\tman1/pbs_rdel.1B \\\n\tman1/pbs_release_nodes.1B \\\n\tman1/pbs_rstat.1B \\\n\tman1/pbs_rsub.1B \\\n\tman1/qalter.1B \\\n\tman1/qdel.1B \\\n\tman1/qhold.1B \\\n\tman1/qmove.1B \\\n\tman1/qmsg.1B \\\n\tman1/qorder.1B \\\n\tman1/qrerun.1B \\\n\tman1/qrls.1B \\\n\tman1/qselect.1B \\\n\tman1/qsig.1B \\\n\tman1/qstat.1B \\\n\tman1/qsub.1B\n\nnotrans_dist_man3_MANS = \\\n\tman3/pbs_alterjob.3B \\\n\tman3/pbs_asyrunjob.3B \\\n\tman3/pbs_confirmresv.3B \\\n\tman3/pbs_connect.3B \\\n\tman3/pbs_default.3B \\\n\tman3/pbs_deljob.3B \\\n\tman3/pbs_delresv.3B \\\n\tman3/pbs_disconnect.3B \\\n\tman3/pbs_geterrmsg.3B \\\n\tman3/pbs_holdjob.3B \\\n\tman3/pbs_locjob.3B \\\n\tman3/pbs_manager.3B \\\n\tman3/pbs_modify_resv.3B \\\n\tman3/pbs_movejob.3B \\\n\tman3/pbs_msgjob.3B \\\n\tman3/pbs_orderjob.3B \\\n\tman3/pbs_preempt_jobs.3B \\\n\tman3/pbs_rerunjob.3B \\\n\tman3/pbs_rescreserve.3B \\\n\tman3/pbs_relnodesjob.3B \\\n\tman3/pbs_rlsjob.3B \\\n\tman3/pbs_runjob.3B \\\n\tman3/pbs_selectjob.3B \\\n\tman3/pbs_selstat.3B \\\n\tman3/pbs_sigjob.3B \\\n\tman3/pbs_stagein.3B \\\n\tman3/pbs_statfree.3B \\\n\tman3/pbs_stathook.3B \\\n\tman3/pbs_stathost.3B \\\n\tman3/pbs_statjob.3B \\\n\tman3/pbs_statnode.3B \\\n\tman3/pbs_statque.3B 
\\\n\tman3/pbs_statresv.3B \\\n\tman3/pbs_statrsc.3B \\\n\tman3/pbs_statsched.3B \\\n\tman3/pbs_statserver.3B \\\n\tman3/pbs_statvnode.3B \\\n\tman3/pbs_submit.3B \\\n\tman3/pbs_submit_resv.3B \\\n\tman3/pbs_tclapi.3B \\\n\tman3/pbs_terminate.3B \\\n\tman3/rm.3B \\\n\tman3/tm.3\n\nnoinst_man3_MANS = \\\n\tman3/pbs_rescquery.3B \\\n\tman3/pbs_submitresv.3B\n\nnotrans_dist_man7_MANS = \\\n\tman1/pbs_hook_attributes.7B \\\n\tman1/pbs_job_attributes.7B \\\n\tman1/pbs_module.7B \\\n\tman1/pbs_node_attributes.7B \\\n\tman1/pbs_professional.7B \\\n\tman1/pbs_queue_attributes.7B \\\n\tman1/pbs_resources.7B \\\n\tman1/pbs_resv_attributes.7B \\\n\tman1/pbs_sched_attributes.7B \\\n\tman1/pbs_server_attributes.7B\n\nnotrans_dist_man8_MANS = \\\n\tman8/mpiexec.8B \\\n\tman8/pbs.8B \\\n\tman8/pbs_account.8B \\\n\tman8/pbs_attach.8B \\\n\tman8/pbs_comm.8B \\\n\tman8/pbs.conf.8B \\\n\tman8/pbs_dataservice.8B \\\n\tman8/pbs_ds_password.8B \\\n\tman8/pbsfs.8B \\\n\tman8/pbs_hostn.8B \\\n\tman8/pbs_idled.8B \\\n\tman8/pbs_iff.8B \\\n\tman8/pbs_interactive.8B \\\n\tman8/pbs_lamboot.8B \\\n\tman8/pbs_mkdirs.8B \\\n\tman8/pbs_mom.8B \\\n\tman8/pbs_mpihp.8B \\\n\tman8/pbs_mpilam.8B \\\n\tman8/pbs_mpirun.8B \\\n\tman8/pbsnodes.8B \\\n\tman8/pbs_probe.8B \\\n\tman8/pbsrun.8B \\\n\tman8/pbsrun_unwrap.8B \\\n\tman8/pbsrun_wrap.8B \\\n\tman8/pbs_sched.8B \\\n\tman8/pbs_server.8B \\\n\tman8/pbs_snapshot.8B \\\n\tman8/pbs_tclsh.8B \\\n\tman8/pbs_tmrsh.8B \\\n\tman8/pbs_topologyinfo.8B \\\n\tman8/pbs_wish.8B \\\n\tman8/printjob.8B \\\n\tman8/qdisable.8B \\\n\tman8/qenable.8B \\\n\tman8/qmgr.8B \\\n\tman8/qrun.8B \\\n\tman8/qstart.8B \\\n\tman8/qstop.8B \\\n\tman8/qterm.8B \\\n\tman8/tracejob.8B \\\n\tman8/win_postinstall.py.8B\n\n"
  },
  {
    "path": "doc/man1/pbs_hook_attributes.7B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_hook_attributes 7B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_hook_attributes \n\\- attributes of PBS hooks\n\n.SH DESCRIPTION\n.LP\nHook attributes can be set, unset, and viewed using the \n.B qmgr \ncommand.\nSee the \n.B qmgr(1B) \nman page.\n\nAn unset hook attribute takes the default value for that attribute.\n\nUnder UNIX/Linux, root privilege is required in order to operate on\nhooks.  Under Windows, this must be done from the installation\naccount.  For domained environments, the installation account must be\na local account that is a member of the local Administrators group on\nthe local computer.  For standalone environments, the installation\naccount must be a local account that is a member of the local\nAdministrators group on the local computer.\n\n.IP \"alarm=<n>\"\nSpecifies the number of seconds to allow a hook to run before\nthe hook times out.\n.br\nSet by administrator.\n.br\nValid values: >0\n.br\nFormat: Integer\n.br\nDefault value: 30\n\n.IP \"debug\"\nSpecifies whether or not the hook produces debugging files under\nPBS_HOME/server_priv/hooks/tmp or PBS_HOME/mom_priv/hooks/tmp.  
Files\nare named hook_<hook event>_<hook name>_<unique ID>.in, .data, and .out.\nWhen this is set to \n.I true, \nthe hook leaves debugging files.\n.br\nSet by administrator.\n.br\nFormat: Boolean\n.br\nDefault value: False\n\n.IP \"enabled\"\nDetermines whether or not a hook is run when its triggering event occurs.\nIf a hook's \n.I enabled\nattribute is \n.I True, \nthe hook is run.\n.br\nSet by administrator.\n.br\nFormat: Boolean\n.br\nDefault: True\n\n.IP \"event\"\nList of events that trigger the hook.  Can be operated on with \nthe \"=\", \"+=\", and \"-=\" operators.  The \n.I provision\nevent cannot be combined with any other events.\n.br\nValid events: \n.RS 11\n.nf\n\"exechost_periodic\"\n\"exechost_startup\" \n\"execjob_attach\"\n\"execjob_begin\"\n\"execjob_end\"\n\"execjob_epilogue\" \n\"execjob_launch\" \n\"execjob_postsuspend\"\n\"execjob_preresume\"\n\"execjob_preterm\" \n\"execjob_prologue\"\n\"modifyjob\" \n\"movejob\" \n\"periodic\"\n\"provision\"\n\"queuejob\"\n\"resvsub\" \n\"runjob\"\n\"\" (meaning no event)\n.fi\n.RE\n.IP\n.br\nSet by administrator.\n.br\nFormat: string array \n.br\nDefault value: \"\" (meaning none, i.e. the hook is not triggered)\n\n.IP \"fail_action\"\nSpecifies the action to be taken when hook fails due to alarm call or\nunhandled exception, or to an internal error such as not enough disk\nspace or memory.  Can also specify a subsequent action to be taken\nwhen hook runs successfully.  Value can be either \"none\" or one or more of \n\"offline_vnodes\", \"clear_vnodes_upon_recovery\", and \"scheduler_restart_cycle\". \nIf this attribute is set to multiple values, scheduler restart happens last.\n.br\n.I offline_vnodes\n.RS 11\nAfter unsuccessful hook execution, offlines the vnodes managed by the MoM\nexecuting the hook.  
Only available for execjob_prologue, exechost_startup\nand execjob_begin hooks.\n.RE\n.IP\n.I clear_vnodes_upon_recovery\n.RS 11\nAfter successful hook execution, clears vnodes previously offlined via\noffline_vnodes fail action.  Only available for exechost_startup hooks.\n.RE\n.IP\n.I scheduler_restart_cycle\n.RS 11\nAfter unsuccessful hook execution, restarts scheduling cycle.  Only \navailable for execjob_begin and execjob_prologue hooks.\n.RE\n.IP\n.br\nSet by administrator.\n.br\nFormat: string_array\n.br\nDefault value: \"none\"\n\n.IP \"freq\"\nNumber of seconds between periodic or exechost_periodic triggers.\n.br\nSet by administrator.\n.br \nFormat: integer\n.br\nDefault: 120 seconds\n\n.IP \"order\"\nIndicates relative order of hook execution, for hooks of the same \ntype sharing a trigger.  Hooks with lower \n.I order\nvalues execute before those with higher values.\nDoes not apply to periodic or exechost_periodic hooks.\n.br\nSet by administrator.\n.br\nValid values:\n.RS 8\nBuilt-in hooks:\n.I [-1000, 2000]\n.br\nSite hooks:\n.I [1, 1000]\n.RE\n.IP\nFormat: Integer\n.br\nDefault value: 1\n\n.IP \"Type\"\nThe type of the hook.  Cannot be set for a built-in hook.\n.br\nValid values: \"pbs\", \"site\"\n.br\n.I pbs\n.RS 11\nHook is built in.\n.RE\n.IP\n.I site\n.RS 11\nHook is custom (site-defined).\n.RE\n.IP\n.br\nSet by administrator.\n.br\nFormat: String\n.br\nDefault value: \"site\"\n\n.IP \"user\"\nSpecifies who executes the hook.  \n.br\nValid values: \"pbsadmin\", \"pbsuser\"\n.br\n.I \"pbsadmin\"\n.RS 11\nThe hook executes as root.  \n.RE\n.IP\n.I \"pbsuser\"\n.RS 11\nThe hook executes as the triggering job's owner.\n.RE\n.IP\n.br\nSet by administrator.\n.br\nFormat: String\n.br\nDefault value: \"pbsadmin\"\n\n.SH SEE ALSO\nqmgr(1B),\npbs_module(7B), \npbs_stathook(3B)\n\n\n"
  },
  {
    "path": "doc/man1/pbs_job_attributes.7B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_job_attributes 7B \"4 March 2021\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_job_attributes \n\\- attributes of PBS jobs\n.SH DESCRIPTION\nEach PBS job has attributes that characterize that job.\n\n.IP \"Account_Name\" 8\nString used for accounting purposes.  Can be used for fairshare.\n.br\nCan be read and set by user, Operator, Manager.\n.br\nFormat: \n.I String; \ncan contain any character.\n.br\nPython type: \n.I str\n.br\nDefault: No default\n\n.IP accounting_id 8\nAccounting ID for tracking accounting data not produced by PBS.\nReadable by all.\n.br\nFormat: \n.I String\n.br\nPython type: \n.I str\n\n.IP accrue_type 8\nIndicates what kind of time the job is accruing.  \n.br\nReadable by Manager only.\n.br\nFormat: \n.I Integer\n.br\nPython type: \n.I int\n.br\nValid values: \n.RS 11\nOne of 0 (initial_time), 1 (ineligible_time), 2 (eligible_time), or 3 (run_time).  \n.RE\n.IP\nDefault:\n.I 2 (eligible_time)\n\n.IP alt_id 8\nFor a few systems, the session ID is insufficient to track which\nprocesses belong to the job.  Where a different identifier is\nrequired, it is recorded in this attribute.  If set, it will also be\nrecorded in the end-of-job accounting record.  
\n.br\nOn Windows, holds PBS home directory.\n.br\nReadable by all; settable by None.\n.br\nFormat:\n.I String; \nmay contain white spaces.\n.br\nPython type: \n.I str\n.br\nDefault: No default\n\n.IP \"argument_list\" 8\nJob executable's argument list.  Shown if job is submitted with \n\"-- <executable> [<argument list>]\".\n.br\nCan be read and set by user, Operator, Manager.\n.br\nFormat: \n.I JSDL=encoded string\n.RS 11\n.I <jsdl-hpcpa:Argument> <1st arg> </jsdl-hpcpa:Argument>\n.br\n.I <jsdl-hpcpa:Argument> <2nd arg> </jsdl-hpcpa:Argument>\n.br\n.I <jsdl-hpcpa:Argument> <nth arg> </jsdl-hpcpa:Argument>\n.RE\n.IP\nExample: \n.RS 11\nIf arguments are \"A B\": \n<jsdl-hpcpa:Argument>A</jsdl-hpcpa:Argument> <jsdl-hpcpa:Argument>B</jsdl-hpcpa:Argument>\n.RE\n.IP\nPython type: \n.I str\n.br\nDefault: No default\n\n.IP array 8\nIndicates whether this is a job array.  Set to \n.I True\nif this is an array job.\n.br\nCan be read and set by user.  Can be read by Manager and Operator.\n.br\nFormat: \n.I Boolean\n.br\nPython type: \n.I bool\n.br\nDefault: \n.I False\n\n.IP array_id 8\nApplies only to subjobs.  Array identifier of subjob.\nReadable by all; set by PBS.\n.br\nFormat: \n.I String\n.br\nPython type: \n.I str\n.br\nDefault: No default\n\n.IP array_index 8\nApplies only to subjobs.  Index number of subjob. \nReadable by all; set by PBS.\n.br\nFormat: \n.I String\n.br\nPython type: \n.I int\n.br\nDefault: No default\n\n.IP array_indices_remaining 8\nApplies only to job arrays.  List of indices of subjobs still queued.  \nReadable by all; set by PBS.\n.br\nFormat: \n.I String\n.br\nSyntax: Range or list of ranges, e.g. 500, 552, 596-1000\n.br\nPython type: \n.I str\n.br\nDefault: No default\n\n.IP array_indices_submitted 8\nApplies only to job arrays.  Complete list of indices of subjobs \ngiven at submission time.  \n.br\nCan be read and set by user.  Can be read by Manager and Operator.\n.br\nFormat: \n.I String\n.br\nSyntax: \nGiven as range, e.g. 
1-100\n.br\nPython type: \n.I pbs.range\n.br\nDefault: No default\n\n.IP array_state_count 8\nApplies only to job arrays.  Lists number of subjobs in each state. \n.br \nReadable by all; set by PBS.\n.br\nFormat: \n.I String\n.br\nPython type: \n.I pbs.state_count\n.br\nDefault: No default\n\n.IP \"block\" 8\nSpecifies whether qsub will wait for the job to complete and\nreturn the exit value of the job.  \n.br\nFor X11 forwarding jobs, and jobs with \n.I interactive\nand \n.I block\nattributes set to\n.I True, \nthe job's exit status is not returned.\n.br\nWhen \n.I block \nis \n.I True, \nqsub waits for the job to finish.  \n.br\nCan be read and set by user.  Can be read by Manager and Operator.\n.br\nFormat: \n.I Boolean\n.br\nPython type: \n.I int\n.br\nDefault: \n.I False\n\n.IP \"Checkpoint\" 8\nDetermines when the job will be checkpointed.  An \n.I $action\nscript is required to checkpoint the job.  See the \n.I pbs_mom(8B)\nman page. \n.br\nCan be read and set by user, Operator, Manager.\n.br\nFormat: \n.I String, \ncontaining description of interval at which to checkpoint.\n.br\nPython type: \n.I pbs.checkpoint\n.br\nValid values:  \n.RS\n.IP c 3\nCheckpoint at intervals, measured in CPU time, set on the job's \nexecution queue.  If no interval set at queue, job is not checkpointed.\n\n.IP \"c=<minutes of CPU time>\" 3\nCheckpoint at intervals of the specified number of minutes of job CPU\ntime.  This value must be greater than zero.  If the interval\nspecified is less than that set on the job's execution queue, the\nqueue's interval is used.\n.br\nFormat:\n.I Integer\n\n.IP w 3\nCheckpoint at intervals, measured in walltime, set on job's execution \nqueue.  If no interval set at queue, job is not checkpointed.\n\n.IP \"w=<minutes of walltime>\" 3\nCheckpoint at intervals of the specified number of minutes of job \nwalltime.  This value must be greater than zero.  
If the interval \nspecified is less than that set on the job's execution queue, the \nqueue's interval is used.\n.br\nFormat: \n.I Integer\n.IP n 3\nNo checkpointing.\n.IP s 3\nCheckpoint only when the server is shut down.\n.IP u 3\nUnset.  Defaults to behavior when \n.I interval\nargument is set to \n.I s.\n.LP\nDefault: \n.I u\n.RE\n\n.IP comment 8\nComment about job.  Informational only.\n.br\nCan be read by user.  Can be read and set by Operator, Manager.\n.br\nFormat: \n.I String\n.br\nPython type: \n.I str\n.br\nDefault: No default\n\n.IP create_resv_from_job 8\nWhen this attribute is\n.I True, \nwhen this job is run, PBS immediately creates and confirms a job-specific\nstart reservation on the same resources as the job (including\nresources inherited by the job), and places the job in the\njob-specific reservation's queue.  Sets the job's\n.I create_resv_from_job \nattribute to \n.I True.  \nSets the job-specific reservation's \n.I reserve_job \nattribute to the ID of the job from which the reservation was created.\nThe new reservation's duration and start time are the same as the\njob's walltime and start time.  If the job is peer scheduled, the\njob-specific reservation is created in the pulling complex.\n.br\nReadable and settable by all.\n.br\nFormat: \n.I Boolean\n.br\nPython type:\n.I bool\n.br\nDefault: \n.I False\n\n\n.IP ctime 8\nTimestamp; time at which the job was created.  \n.br\nReadable by all; set by PBS.\n.br\nFormat: \n.I Integer\n.br\nSyntax: Timestamp.  \n.RS 11\nPrinted by \n.B qstat\nin human-readable format.  Output by hooks as seconds since epoch.\n.RE\n.IP\nPython type: \n.I int\n.br\nDefault: No default\n\n.IP depend 8\nSpecifies inter-job dependencies.\n.br\nNo limit on number of dependencies.\n.br\nCan be read and set by user, Operator, Manager.\n.br\nFormat: \n.I String\n.br\nSyntax: \n.RS 11\n\"<type>:<job ID>[,<job ID> ...][,<type>:<job ID>[,<job ID> ...] ...]\"\n.br\nMust be enclosed in double quotes if it contains commas.  
\n.RE\n.IP\nExample: \"before:123,456\"\n.br\nPython type: \n.I pbs.depend\n.br\nValid values: \n.RS\n.IP \"after:<job ID list>\" 3\nThis job may run at any point after all jobs in \n.I job ID list \nhave started execution.\n\n.IP \"afterok:<job ID list>\" 3\nThis job may run only after all jobs in \n.I job ID list \nhave terminated with no errors.\n\n.IP \"afternotok:<job ID list>\" 3\nThis job may run only after all jobs in \n.I job ID list \nhave terminated with errors.\n\n.IP \"afterany:<job ID list>\" 3\nThis job can run after all jobs in \n.I job ID list\nhave finished execution, with or without errors. This job will not run\nif a job in the \n.I job ID list \nwas deleted without ever having been run.\n\n.IP \"before:<job ID list>\" 3\nJobs in \n.I job ID list \nmay start once this job has started.\n\n.IP \"beforeok:<job ID list>\" 3\nJobs in \n.I job ID list \nmay start once this job terminates without errors.\n\n.IP \"beforenotok:<job ID list>\" 3\nJobs in \n.I job ID list \nmay start once this job terminates with errors.\n\n.IP \"beforeany:<job ID list>\" 3\nJobs in \n.I job ID list \nmay begin execution once this job terminates execution, with or without errors.\n\n.IP \"on:<count>\" 3\nThis job may run after \n.I count \ndependencies on other jobs have been satisfied. This type is used with one of the \n.I before \ntypes listed.\n.I Count \nis an integer greater than \n.I 0.\n.RE\n.IP\nDefault: No dependencies\n\n.IP egroup 8\nIf the job is queued, this attribute is set to the\ngroup name under which the job is to be run.  \n.br\nReadable by Manager only.\n.br\nFormat: \n.I String\n.br\nPython type: \n.I str\n.br\nDefault: No default\n\n.IP eligible_time 8\nThe amount of wall clock wait time a job has accrued while the job\nis blocked waiting for resources.  
For a job currently accruing \n.I eligible_time, \nif we were to add enough of the right type of resources, the job would\nstart immediately.\n.br\nViewable via \n.B qstat -f.\n.br\nReadable by job owner, Manager and Operator.  Settable by Operator or Manager.\n.br\nFormat: \n.I Duration\n.br\nPython type: \n.I pbs.duration\n.br\nDefault: \n.I Zero\n\n.IP \"Error_Path\" 8\nThe final path name for the file containing the job's standard error\nstream.  See the\n.B qsub\nand\n.B qalter\ncommands.\n.br\nCan be read and set by user, Operator, Manager.\n.br\nFormat: \n.I String\n.br\nPython type: \n.I str\n.br\nSyntax: \n.I [<hostname>:]<path>\n.br\nValid values:\n.RS\n.IP \"<relative path>\" 3\nPath is relative to the current working directory of command executing\non current host.\n.IP \"<absolute path>\" 3\nPath is absolute path on current host where command is executing.\n.IP \"<hostname>:<relative path>\" 3\nPath is relative to user's home directory on specified host.\n.IP \"<hostname>:<absolute path>\" 3\nPath is absolute path on named host.\n.IP \"No path\" 3 \nPath is current working directory where qsub is executed.\n.RE\n.IP\nDefault: Default path is current working directory where qsub is run.\nIf the output path is specified, but does not include a\nfilename, the default filename is \n.I <job ID>.ER.  \nIf the path name is not specified, the default filename is \n.I <job name>.e<sequence number>.\n\n\n.IP estimated 8\nList of estimated values for job.  Used to report job's \n.I exec_vnode, start_time, \nand \n.I soft_walltime.\nCan be set in a hook or via qalter, but PBS will overwrite the values.  
\n.br\nFormat: Format of reported element\n.br\nSyntax: \n.RS 11\n.I estimated.<resource name>=<value>[, estimated.<resource name>=<value> ...]\n.RE\n.IP\nPython type: \n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\nestimated.<resource name>=<value>\n.br\nwhere <resource name> is a resource\n.RE\n.IP\nReported values: \n.RS\n.IP \"exec_vnode\" 3\nThe estimated vnodes used by this job.     \n.br\nReadable by all; settable by Manager and Operator.\n.br\nFormat: \n.I String\n.br\nPython type: \n.I pbs.exec_vnode\n.br\nDefault: Unset   \n\n.IP \"soft_walltime\" 3\nThe estimated soft walltime for this job.  Calculated when a job\nexceeds its soft_walltime resource.\n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I Duration\n.br\nPython type: \n.I pbs.duration\n.br\nDefault: Unset\n\n.IP \"start_time\" 3\nThe estimated start time for this job.\n.br\nReadable by all; settable by Manager and Operator.\n.br\nFormat: \n.I start_time \nis printed by qstat\nin human-readable format; \n.I start_time \noutput in hooks as seconds since epoch.\n.br\nPython type: \n.I int\n.br\nDefault: Unset\n\n.RE\n.IP\n\n.IP etime 8\nTimestamp; time when job became eligible to run, i.e. was enqueued in\nan execution queue and was in the \"Q\" state.  Reset when a job moves\nqueues, or is held then released.  Not affected by qaltering.\n.br\nReadable by all; set by PBS.\n.br\nFormat: \n.I Integer \n.br\nSyntax:\n.RS 11\nPrinted by qstat in human-readable format.  Output in hooks as seconds since epoch.\n.RE\n.IP\nPython type: \n.I int\n.br\nDefault: No default\n\n.IP euser 8\nIf the job is queued, this attribute is set to the\nuser name under which the job is to be run.  \n.br\nReadable by Manager only; set by PBS.\n.br\nFormat: \n.I String\n.br\nPython type: \n.I str\n.br\nDefault: No default\n\n.IP \"executable\" 8\nJSDL-encoded listing of job's executable.  
\nShown if job is submitted with \"-- <executable> [<arg list>]\".\n.br\nCan be read and set by user, Operator, Manager.\n.br\nFormat: \n.I JSDL-encoded string\n.br\nSyntax: <jsdl-hpcpa:Executable> <name of executable> \n.br\nExample: \n.RS 11\nIf the executable is ping, the string\nis <jsdl-hpcpa:Executable>ping</jsdl-hpcpa:Executable>\n.RE\n.IP\nPython type: \n.I str\n.br\nDefault: No default\n\n.IP \"Execution_Time\" 8\nTimestamp; time after which the job may execute.  Before this time,\nthe job remains queued in the (W)ait state.  Can be set when stage-in\nfails and PBS moves the job's start time out 30 minutes to allow the user to fix\nthe problem.\n.br\nCan be read and set by user, Operator, Manager.\n.br\nFormat: \n.I Datetime\n.br\nSyntax:\n.I [[[[CC]YY]MM]DD]hhmm[.SS]\n.br\nPython type: \n.I int\n.br\nDefault: Unset (no delay)  \n\n.IP exec_host 8\nIf the job is running, this is set to the name of the host or hosts on which\nthe job is executing.  \n.br\nCan be read by user, Operator, Manager.\n.br\nFormat: \n.I String\n.br\nSyntax:\n.RS 11\n.I <hostname>/N[*C][+...], \n.br\nwhere \n.I N \nis task slot number, starting with 0,\non that vnode, and \n.I C \nis the number of CPUs allocated to the job.\n.I *C \ndoes not appear if \n.I C \nhas a value of \n.I 1.\n.RE\n.IP\nPython type: \n.I pbs.exec_host\n.br\nDefault: No default\n\n.IP exec_vnode 8\nList of chunks for the job.  Each chunk shows the name of the vnode(s)\nfrom which it is taken, along with the host-level, consumable resources \nallocated from that vnode, and any AOE provisioned on this vnode for this job.  \n.br\nIf a vnode is allocated to the job but no resources \nfrom the vnode are used by the job, the vnode\nname appears alone.  \n.br\nIf a chunk is split across vnodes, the name of \neach vnode and its resources appear inside one pair of parentheses, \njoined with a plus sign (\"+\").\n.br\nCan be read by user.  
Can be read and set by Manager, Operator.\n.br\nFormat: \n.I String\n.br\nSyntax: \n.RS 11 \nEach chunk is enclosed in parentheses, and\nchunks are connected by plus signs.  \n.RE\n.IP\nExample: \n.RS 11\nFor a job which requested two chunks\nthat were satisfied by resources from three vnodes,\n.I exec_vnode \nis\n.br\n(vnodeA:ncpus=N:mem=X)+(vnodeB:ncpus=P:mem=Y+vnodeC:mem=Z).\n.br\nFor a job which requested one chunk and exclusive use of a 2-vnode host,\nwhere the chunk was satisfied by resources from one vnode,\n.I exec_vnode \nis\n.br\n(vnodeA:ncpus=N:mem=X)+(vnodeB).\n.RE\n.IP\nPython type: \n.I pbs.exec_vnode\n.br\nDefault: No default\n\n.IP Exit_status 8\nExit status of job.  Set to zero for successful execution.  If any\nsubjob of an array job has non-zero exit status, the array job has\nnon-zero exit status.  \n.br\nReadable by all; set by PBS.\n.br\nFormat: \n.I Integer\n.br\nPython type: \n.I int\n.br\nDefault: No default\n\n.IP \"forward_x11_cookie\" 8\nContains the X authorization cookie.\n.br\nReadable by all; set by PBS.\n.br\nFormat: \n.I String\n.br\nPython type:\n.I str\n.br\nDefault: No default\n\n.IP \"forward_x11_port\" 8\nContains the number of the port being listened to by the port \nforwarder on the submission host.\n.br\nReadable by all; set by PBS.\n.br\nFormat: \n.I Integer\n.br\nPython type: \n.I int\n.br\nDefault: No default\n\n.IP \"group_list\" 8\nA list of group names used to determine the group under which the job\nruns. When a job runs, the server selects a group name from the list\naccording to the following ordered set of rules:\n.RS\n.IP \"1.\" 3 \nSelect the group name for which the associated host name matches the\nname of the server host.\n\n.IP \"2.\" 3 \nSelect the group name which has no associated host name.\n\n.IP \"3.\" 3 \nUse the login group for the user name under which the job will be run. 
\n.RE\n.IP\nCan be read and set by user, Operator, Manager.\n.br\nFormat: \n.I String\n.br\nSyntax: \n.RS 11\n.I <group name>[@<hostname>] [,<group name>[@<hostname>] ...]\n.br\nMust be enclosed in double quotes if it contains commas.\n.RE\n.IP\nPython type: \n.I pbs.group_list\n.br\nDefault: No default\n\n.IP hashname 8\nNo longer used.\n\n.IP \"Hold_Types\" 8\nThe set of holds currently applied to the job.  If the set is not null, \nthe job will not be scheduled for execution and is said to be in the\n.I held\nstate.  The\n.I held\nstate takes precedence over the\n.I wait\nstate.  \n.br\nCan be read and set by user, Operator, Manager.\n.br\nFormat: \n.I String, \nmade up of the letters \n.I 'n', 'o', 'p', 's', 'u'\n.br\nHold types:\n.RS\n.IP n 3\nNo hold\n.IP o 3\nOther hold\n.IP p 3\nBad password\n.IP s 3\nSystem hold\n.IP u 3\nUser hold\n.RE\n.IP\nPython type: \n.I pbs.hold_types\n.br\nDefault:\n.I n\n(no hold)\n\n.IP \"interactive\" 8\nSpecifies whether the job is interactive.  \n.br\nWhen both this attribute and the \n.I block\nattribute are \n.I True, \nno exit status is returned.  \nFor X11 forwarding jobs, the job's exit status is not returned.\n.br\nCannot be set using a PBS directive.\n.br\nJob arrays cannot be interactive.\n.br\nCan be set, but not altered, by unprivileged user.  Can be read by Operator, Manager.\n.br\nFormat: \n.I Boolean\n.br\nPython type: \n.I int\n.br\nDefault: \n.I False\n\n.IP \"jobdir\" 8\nPath of the job's staging and execution directory on the \nprimary execution host.  Either the user's home directory or a private sandbox,\ndepending on the value of the job's \n.I sandbox\nattribute.  Viewable via \n.B qstat -f.\n.br\nReadable by all; set by PBS.\n.br\nFormat: \n.I String\n.br\nPython type: \n.I str\n.br\nDefault: No default\n\n.IP \"Job_Name\" 8\nThe job name.  See the \n.B qsub\nand\n.B qalter\ncommands.   
\n.br\nCan be read and set by user, Operator, Manager.\n.br\nFormat: \n.I String \nup to 236 characters, first character must be alphabetic or numeric\n.br\nPython type: \n.I str\n.br\nDefault: Base name of job script, or STDIN\n\n.IP \"Job_Owner\" 8\nThe login name on the submitting host of the user who submitted the batch job.\n.br\nReadable by all; set by PBS.\n.br\nFormat: \n.I String\n.br\nPython type: \n.I str\n.br\nDefault: No default\n\n.IP \"job_state\" 8\nThe state of the job.\n.br\nReadable by all.  Can be set indirectly by all.\n.br\nFormat: \n.I Character\n.br\nJob states:\n.RS \n.IP B 3\n.I Begun.\nJob arrays only.  The job array has begun execution.\n.br\nPython type: PBS job state constant\n.I pbs.JOB_STATE_BEGUN\n.IP E 3\n.I Exiting.  \nThe job has finished, with or without errors,\nand PBS is cleaning up post-execution.\n.br\nPython type: PBS job state constant\n.I pbs.JOB_STATE_EXITING\n.IP F 3\n.I Finished.\nJob is finished.  Job has completed execution, job failed during execution,\nor job was deleted.\n.br\nPython type: PBS job state constant\n.I pbs.JOB_STATE_FINISHED\n.IP H 3\n.I Held.  \nThe job is held. \n.br\nPython type: PBS job state constant\n.I pbs.JOB_STATE_HELD\n.IP M 3\n.I Moved.\nJob has been moved to another server. \n.br\nPython type: PBS job state constant\n.I pbs.JOB_STATE_MOVED\n.IP Q 3\n.I Queued.  \nThe job resides in an execution or routing queue pending\nexecution or routing.  It is not in\n.B held\nor\n.B waiting\nstate.\n.br\nPython type: PBS job state constant\n.I pbs.JOB_STATE_QUEUED\n.IP R 3\n.I Running.  \nThe job is in an execution queue and is running.\n.br\nPython type: PBS job state constant\n.I pbs.JOB_STATE_RUNNING\n.IP S 3\n.I Suspended.  \nThe job was executing and has been suspended.   \nThe job does not use CPU cycles or walltime.\n.br\nPython type: PBS job state constant\n.I pbs.JOB_STATE_SUSPEND\n.IP T 3\n.I Transiting.  
\nThe job is being routed or moved to a new destination.\n.br\nPython type: PBS job state constant\n.I pbs.JOB_STATE_TRANSIT\n.IP U 3\n.I User suspended.  \nThe job was running on a workstation configured for cycle\nharvesting and the keyboard/mouse is currently busy.  The job is suspended\nuntil the workstation has been idle for a configured amount of time.\n.br\nPython type: PBS job state constant\n.I pbs.JOB_STATE_SUSPEND_USERACTIVE\n.IP W 3\n.I Waiting.  \nThe \n.I Execution_Time\nattribute contains a time in the future.  Can be set when\nstage-in fails and PBS moves the job's start time out \n30 minutes to allow the user to fix the problem.\n.br\nPython type: PBS job state constant\n.I pbs.JOB_STATE_WAITING\n.IP X 3\n.I Expired.\nSubjobs only.  Subjob is finished (expired).\n.br\nPython type: PBS job state constant\n.I pbs.JOB_STATE_EXPIRED\n.LP\n.RE\n\n.IP \"Join_Path\" 8\nSpecifies whether the job's standard error and standard output streams\nare to be merged and placed in the file specified in the \n.I Output_Path\njob attribute.\n.br\nWhen set to \n.I eo \nor \n.I oe, \nthe job's standard error and standard output streams are merged.\n.br\nCan be read and set by user, Operator, Manager.\n.br\nFormat: \n.I String\n.br\nBehavior:\n.RS\n.IP eo 3\nStandard output and standard error are merged, intermixed, into a \nsingle stream, which becomes standard error.\n.IP oe 3\nStandard output and standard error are merged, intermixed, into a single stream, which becomes standard output.\n.IP n 3\nStandard output and standard error are not merged.\n.RE\n.IP\nPython type: \n.I pbs.join_path\n.br\nDefault: \n.I n\n(not merged)\n\n.IP \"Keep_Files\" 8\nSpecifies whether the standard output and/or standard error streams\nare retained on the execution host in the job's staging and execution\ndirectory after the job has executed.  Otherwise these files are\nreturned to the submission host.  
\n.I Keep_Files \noverrides the \n.I Output_Path\nand \n.I Error_Path \nattributes.\n.br\nReadable and settable by all.\n.br\nFormat: \n.I String\n.br\nPython type:\n.I pbs.keep_files\n.br\nValid values: Can be one of the following:\n.RS\n.IP o 3\nThe standard output stream is retained.  The filename is:\n.I <job name>.o<sequence number>\n.IP e 3\nThe standard error stream is  retained.  The filename is: \n.I <job name>.e<sequence number>\n.IP \"eo, oe\" 3\nBoth standard output and standard error streams are retained.\n.IP d 3\nOutput and error are written directly to their final destination\n.IP n 3\nNeither stream is retained.  Files are returned to submission host.\n.RE\n.IP \nDefault: \n.I n\n.RS 11\n(neither stream is retained, and files are returned to submission host.)\n.RE\n\n.IP \"Mail_Points\" 8\nSpecifies state changes for which the server sends mail about the job.\n.br\nCan be read and set by user, Operator, Manager.\n.br\nFormat:\n.I String\n.br\nPython type: \n.I pbs.mail_points\n.br\nValid values: Combination of\n.I a, b, \nand\n.I e, \nwith optional \n.I j, \nor \n.I n\nby itself.\n.RS\n.IP a 3\nMail is sent when job is aborted\n.IP b 3\nMail is sent at beginning of job\n.IP e 3\nMail is sent when job ends\n.IP j 3\nMail is sent for subjobs.  Must be combined with one or more of\n.I a, b, \nand \n.I e\noptions.\n.IP n 3\nNo mail is sent.  
Cannot be combined with other options.\n.RE\n.IP\nDefault: \n.I a\n\n.IP \"Mail_Users\" 8\nThe set of users to whom mail is sent when the job makes state changes\nspecified in the \n.I Mail_Points\njob attribute.\n.br\nCan be read and set by user, Operator, Manager.\n.br\nFormat: \n.I String\n.br\nSyntax: \"<username>@<hostname>[,<username>@<hostname>]\"\n.br\nPython type: \n.I pbs.email_list\n.br\nDefault: Job owner only\n\n.IP \"max_run_subjobs\" 8\nSets a limit on the number of subjobs that can be running at one time.\nCan be set using \n.B qsub -J <range> [%<max subjobs>] \nor \n.B qalter -Wmax_run_subjobs=<new value> <job ID>.\n\n.IP mtime 8\nTimestamp; the time that the job was last modified, changed state, or changed locations.\n.br\nFormat: \n.I Integer.\n.br\nSyntax: Timestamp.  \n.RS 11\nPrinted by qstat in human-readable format; output in hooks \nas seconds since epoch.\n.RE\n.IP\nPython type: \n.I int\n.br\nDefault: No default\n\n.IP \"no_stdio_sockets\" 8\n.B Not used.\n\n.IP \"Output_Path\" 8\nThe final path name for the file containing the job's standard output\nstream.  See the\n.B qsub\nand\n.B qalter\ncommands.\n.br\nCan be read and set by user, Operator, Manager.\n.br\nFormat: \n.I String\n.br\nPython type: \n.I str\n.br\nSyntax: \n.I [<hostname>:]<path>\n.br\nValid values:\n.RS\n.IP \"<relative path>\" 3\nPath is relative to the current working directory of command executing\non current host.\n.IP \"<absolute path>\" 3\nPath is absolute path on current host where command is executing.\n.IP \"<hostname>:<relative path>\" 3\nPath is relative to user's home directory on specified host.\n.IP \"<hostname>:<absolute path>\" 3\nPath is absolute path on named host.\n.IP \"No path\" 3 \nPath is current working directory where qsub is executed.\n.RE\n.IP\nDefault: \n.RS 11\nDefault path is current working directory where qsub is run.\n.br\nIf the output path is specified, but does not include a\nfilename, the default filename is \n.I <job ID>.OU.  
\n.br\nIf the path name is not specified, the default filename is \n.I <job name>.o<sequence number>.\n.RE\n\n.IP \"pcap_accelerator\" 8\nPower attribute.  Power cap for an accelerator.  Corresponds to Cray \n.I capmc set_power_cap --accel\nsetting.  See \n.I capmc\ndocumentation.\n.br\nReadable and settable by all.\n.br\nFormat: \n.I Integer\n.br\nUnits: \n.I Watts\n.br\nPython type: \n.I int\n.br\nDefault: Unset\n\n.IP \"pcap_node\" 8\nPower attribute.  Power cap for a node.  Corresponds to Cray \n.I capmc set_power_cap --node\nsetting.  See \n.I capmc\ndocumentation.\n.br\nReadable and settable by all.\n.br\nFormat: \n.I Integer\n.br\nUnits: \n.I Watts\n.br\nPython type: \n.I int\n.br\nDefault: Unset\n\n.IP \"pgov\" 8\nPower attribute.  Cray ALPS reservation setting for CPU throttling\ncorresponding to \n.I p-governor.\nSee BASIL 1.4 documentation.  We do not recommend using this attribute.\n.br\nReadable and settable by all.\n.br\nFormat: \n.I String\n.br\nPython type: \n.I str\n.br\nDefault: Unset\n\n.IP \"Priority\" 8\nThe scheduling priority for the job.  Higher value indicates higher priority.\n.br\nCan be read and set by user, Operator, Manager.\n.br\nFormat: \n.I Integer\n.br\nSyntax: \n.I [+|-]nnnnn\n.br\nValid values: [-1024, +1023] inclusive\n.br\nPython type: \n.I int\n.br\nDefault: Unset\n\n.IP \"project\" 8\nThe job's project. A project is a way to tag jobs.  Each job can belong\nto at most one project.\n.br\nReadable and settable by user, Operator, Manager.\n.br\nFormat: \n.I String\n.RS 11\nCan contain any characters except for the following:\nSlash (\"/\"), left bracket (\"[\"), right bracket (\"]\"), double quote (\"\"\"), \nsemicolon (\";\"), colon (\":\"), vertical bar (\"|\"), left angle bracket (\"<\"), \nright angle bracket (\">\"), plus (\"+\"), comma (\",\"), question mark (\"?\"), \nand asterisk (\"*\").\n.RE\n.IP\nPython type: \n.I str\n.br\nDefault: \"_pbs_project_default\"\n\n.IP \"pset\" 8\n.B Deprecated.  
\nName of the placement set used by the job.  \n.br\nCan be read by user, Operator.  Can be read and set by Manager.\n.br\nFormat: \n.I String\n.br\nPython type: \n.I str\n.br\nDefault: No default\n\n.IP \"pstate\" 8\nPower attribute.  Cray ALPS reservation setting for CPU frequency\ncorresponding to \n.I p-state.\nSee BASIL 1.4 documentation.  \n.br\nReadable and settable by all.\n.br\nFormat: \n.I String\n.br\nUnits:\n.I Hertz\n.br\nPython type: \n.I str\n.br\nDefault: Unset\n\n.IP qtime 8\nTimestamp; the time that the job entered the current queue.\n.br\nReadable by all; settable only by PBS.\n.br\nFormat: \n.I Integer\n.br\nSyntax: Timestamp.  \n.RS 11\nPrinted by qstat in human-readable format; output in hooks \nas seconds since epoch.\n.RE\n.IP\nPython type: \n.I int\n.br\nDefault: No default\n\n.IP queue 8\nThe name of the queue in which the job currently resides.\n.br\nReadable by all; settable only by PBS.\n.br\nFormat: \n.I String\n.br\nPython type: \n.I pbs.queue\n.br\nDefault: No default\n\n.IP queue_rank 8\nA number indicating the job's position within the\nqueue.  Only used internally by PBS.  \n.br\nReadable by Manager only.\n.br\nFormat: \n.I Integer\n.br\nPython type: \n.I int\n.br\nDefault: No default\n\n.IP queue_type 8\nThe type of queue in which the job is currently residing.  
\n.br\nReadable by Manager only.\n.br\nFormat: \n.I Character\n.br\nValid values: One of \n.I E\nor \n.I R\n.RS\n.IP E 3\nExecution queue\n.br\nPython type: \n.RS 3\nPBS queue type constant\n.I pbs.QUEUE_TYPE_EXECUTION\n.RE\n.IP R 3\nRouting queue\n.br\nPython type: \n.RS 3\nPBS queue type constant\n.I pbs.QUEUE_TYPE_ROUTE\n.RE\n.RE\n.IP\nDefault: No default\n\n.IP \"release_nodes_on_stageout\" 8\nControls whether job vnodes are released when stageout begins.\n.br\nCannot be used with vnodes tied to Cray X* series systems.\n.br\nWhen cgroups is enabled and this is used with some but not all vnodes\nfrom one MoM, resources on those vnodes that are part of a cgroup are\nnot released until the entire cgroup is released.\n.br\nThe job's \n.I stageout \nattribute must be set for the\n.I release_nodes_on_stageout \nattribute to take effect.\n.br\nWhen set to \n.I True, \nall of the job's vnodes not on the primary execution\nhost are released when stageout begins.  \n.br\nWhen set to \n.I False,\nthe job's vnodes are released when the job finishes and MoM cleans up the job.\n.br\nReadable and settable by all.\n.br\nFormat: \n.I Boolean\n.br\nPython type:\n.I bool\n.br\nDefault:\n.I False\n\n.IP \"Remove_Files\" 8\nSpecifies whether standard output and/or standard error files are \nautomatically removed upon job completion.\n.br\nReadable and settable by all.\n.br\nFormat: \n.I String\n.br\nPython type:\n.I str\n.br\nValid values: \"e\", \"o\", \"eo\", \"oe\", or unset\n.RS\n.IP e 3\nStandard error is removed upon job completion.\n.IP o 3\nStandard output is removed upon job completion.\n.IP \"eo, oe\" 3\nStandard output and standard error are removed upon job completion.\n.IP unset 3\nNeither is removed.\n.RE\n.IP\nDefault: Unset\n\n.IP \"Rerunable\" 8\nSpecifies whether the job can be rerun.  Does not affect how a job is\ntreated if the job could not begin execution.  
\n.br\nJob arrays are required to be rerunnable and are rerunnable by\ndefault.\n.br\nReadable and settable by all.\n.br\nFormat: \n.I Character\n.br\nSyntax: One of \n.I y\nor \n.I n\n.br\nPython type: \n.I bool\n.br\nDefault: y (job is rerunnable)\n\n.IP \"Resource_List\" 8\nThe list of resources required by the job. List is a set of \n.I <resource name>=<value> \nstrings. The meaning of name and value is dependent upon\ndefined resources. Each value establishes the limit of usage of that\nresource. If not set, the value for a resource may be determined by a\nqueue or server default established by the administrator. \n.br\nReadable and settable by all.\n.br\nFormat: \n.I String\n.br\nSyntax: \n.RS 11\n.I Resource_List.<resource name>=<value>[, Resource_List.<resource name>=<value>, ...]\n.RE\n.IP\nPython type: \n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\nResource_List[\"<resource name>\"]=<value>\n.br\nwhere <resource name> is any built-in or custom resource\n.RE\n.IP\nDefault: No default\n\n.IP \"resources_released\" 8\nListed by vnode, consumable resources that were released when the job\nwas suspended.  Populated only when the \n.I restrict_res_to_release_on_suspend\nserver attribute is set.  \n.br\nReadable by all.  Set by server.\n.br\nFormat: \n.I String \n.br\nSyntax: \n.RS 11\n.I (<vnode>:<resource name>=<value>:\n.I <resource name>=<value>:...)+\n.I (<vnode>:<resource name>=<value>:...)\n.RE\n.IP\nPython type:\n.I str\n.br\nDefault: No default\n\n.IP \"resource_released_list\" 8\nSum of each consumable resource requested by the job that was released\nwhen the job was suspended.  Populated only when the\n.I restrict_res_to_release_on_suspend \nserver attribute is set. \n.br\nReadable by Manager and Operator.  
Set by server.\n.br\nFormat: \n.I String \n.br\nSyntax: \n.RS 11\n.I resource_released_list.<resource name>=<value>,\n.I resource_released_list.<resource name>=<value>, ...\n.RE\n.IP\nPython type: \n.I pbs.pbs_resource\n.br\nDefault: No default\n\n.IP \"resources_used\" 8\nThe amount of each resource used by the job.  \n.br\nReadable by all; set by PBS.\n.br\nFormat: \n.I String\n.br\nSyntax: \n.RS 11\nList of \n.I resources_used.<resource name>=<value>,resources_used.<resource name>=<value>\npairs.  \n.RE\n.IP\nExample: resources_used.mem=2mb\n.br\nPython type: \n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\nresources_used[\"<resource name>\"]=<value> \n.br\nwhere <resource name> is any built-in or custom resource\n.RE\n.IP \nDefault: No default\n\n.IP run_count 8\nThe number of times the server thinks the job has been executed.  \n.br\nThe\n.I run_count\nattribute starts at zero.  Job is held after 21 tries.  \n.br\nCan be set via qsub, qalter, or a hook.\n.br\nCan be read and set by Manager and Operator.\n.br\nFormat: \n.I Integer;\nmust be greater than or equal to zero.\n.br\nPython type: \n.I int\n.br\nDefault: \n.I Zero\n\n.IP \"run_version\" 8\nUsed internally by PBS to track the instance of the job.  \n.br\nSet by PBS.  Visible to Manager only.  \n.br\nFormat: \n.I Integer\n.br\nPython type: \n.I int\n.br\nDefault: No default\n\n.IP \"sandbox\" 8\nSpecifies type of location PBS uses for job staging and execution.\n.br\nUser-settable via \n.B qsub -Wsandbox=<value>\nor via a PBS directive.\n.br\nSee the $jobdir_root MoM configuration option in \n.B pbs_mom.8B. 
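The dotted <attribute>.<resource name>=<value> lists shown above for resources_used, Resource_List, and resource_released_list can be unpacked with a short Python sketch. The function name is hypothetical and not part of any PBS API.

```python
def parse_resource_list(value):
    """Parse a dotted resource list such as
    "resources_used.mem=2mb,resources_used.ncpus=4" into
    {"mem": "2mb", "ncpus": "4"}.  Works for Resource_List-style
    strings as well.  Illustrative sketch only; not a PBS API.
    """
    resources = {}
    for item in value.split(","):
        name, _, val = item.strip().partition("=")
        # keep only the part after "resources_used." / "Resource_List."
        resources[name.split(".", 1)[-1]] = val
    return resources

# parse_resource_list("resources_used.mem=2mb,resources_used.ncpus=4")
# -> {"mem": "2mb", "ncpus": "4"}
```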
\n.br\nCan be read and set by user, Operator, Manager.\n.br\nFormat: \n.I String\n.br\nValid values: \n.I PRIVATE, HOME, \nunset\n.br\n.RS\n.IP PRIVATE 3\nWhen set to PRIVATE, PBS creates job-specific staging and\nexecution directories under the directory specified in \nthe \n.I $jobdir_root \nMoM configuration option.\n.IP \"HOME or unset\" 3\nPBS uses the job owner's home directory for staging and execution.\n.RE\n.IP\nPython type: \n.I str \n.br\nDefault: Unset\n\n.IP schedselect  8\nThe union of the select specification of the job, and the queue and \nserver defaults for resources in a chunk.  \n.br\nCan be read by PBS Manager only.\n.br\nFormat: \n.I String\n.br\nPython type: \n.I pbs.select\n.br\nDefault: No default\n\n.IP sched_hint 8\n.B No longer used.\n\n.IP security_context 8\nContains security context of job submitter.  Set by PBS to the\nsecurity context of the job submitter at the time of job\nsubmission. If not present when a request is submitted, an error\noccurs, a server message is logged, and the request is rejected.\n.br\nReadable by all; set by PBS.\n.br\nFormat: \n.I String in SELinux format\n.br\nDefault: Unset\n\n.IP server 8\nThe name of the server which is currently managing the job.\nWhen the secondary server is running during failover, shows the name\nof the primary server.  
After a job is moved to another server, either\nvia qmove or peer scheduling, shows the name of the new server.\n.br\nReadable by all; set by PBS.\n.br\nFormat: \n.I String\n.br\nPython type: \n.I pbs.server\n.br\nDefault: No default\n\n.IP session_id 8\nIf the job is running, this is set to the session ID of the first\nexecuting task.\n.br\nReadable by all; set by PBS.\n.br\nFormat: \n.I Integer\n.br\nPython type: \n.I int\n.br\nDefault: No default\n\n.IP \"Shell_Path_List\" 8\nOne or more absolute paths to the program(s) to process the job's\nscript file.\n.br\nCan be read and set by user, Operator, Manager.\n.br\nFormat: \n.I String\n.br\nSyntax: \n.RS 11\n\"<path>[@<hostname>][,<path>[@<hostname>]...]\"\n.br\nMust be enclosed in double quotes if it contains commas.\n.RE\n.IP\nPython type: \n.I pbs.path_list\n.br\nDefault: User's login shell on execution host\n\n.IP stagein 8\nThe list of files to be staged in prior to job execution.\n.br\nCan be read and set by user, Operator, Manager.\n.br\nFormat: \n.I String\n.br\nSyntax: \n.RS 11\n\"<execution path>@<storage host>:<storage path>\n[, <execution path>@<storage host>:<storage path>, ...]\"\n.RE\n.IP\nPython type: \n.I pbs.staging_list\n.br\nDefault: No default\n\n.IP stageout 8\nThe list of files to be staged out after job execution.\n.br\nCan be read and set by user, Operator, Manager.\n.br\nFormat: \n.I String\n.br\nSyntax: \n.RS 11\n\"<execution path>@<storage host>:<storage path>\n[, <execution path>@<storage host>:<storage path>, ...]\"\n.RE\n.IP\nPython type: \n.I pbs.staging_list\n.br\nDefault: No default\n\n.IP Stageout_status 8\nStatus of stageout.  If stageout succeeded, this is set to 1.  \nIf stageout failed, this is \nset to 0.  Displayed only if set.  \nIf stageout fails for any subjob of an array job, the value of \n.I Stageout_status\nis zero for the array job.  
Available only for finished jobs.\n.br\nReadable by all; set by PBS.\n.br\nFormat: \n.I Integer\n.br\nPython type: \n.I int\n.br\nDefault: No default\n\n.IP stime 8\nTimestamp; time when the job started execution.  Changes when job is restarted.\n.br\nReadable by all; set by PBS.\n.br\nFormat: \n.I Integer \n.br\nSyntax: Timestamp.  \n.RS 11\nPrinted by qstat in human-readable format; output in hooks \nas seconds since epoch.\n.RE\n.IP\nPython type: \n.I int\n.br\nDefault: No default\n\n.IP \"Submit_arguments\" 8\nJob submission arguments given on the \n.B qsub\ncommand line.  Available for all jobs.\n.br\nCan be read and set by user, Operator, Manager.\n.br\nFormat: \n.I String\n.br\nPython type: \n.I str\n.br\nDefault: No default\n\n.IP \"substate\" 8\nThe substate of the job.  The substate is used internally by PBS.\n.br\nReadable by all; set by PBS.\n.br\nFormat: \n.I Integer\n.br\nPython type: \n.I int\n.br \nDefault: No default\n\n.IP sw_index 8\n.B No longer used.\n\n.IP \"topjob_ineligible\" 8\nAllows administrators to mark this job as ineligible to be a top job.\n.br\nWhen set to \n.I True, \nthis job is not eligible to be the top job.\n.br\nCan be read and set by Manager.\n.br\nFormat: \n.I Boolean\n.br\nPython type: \n.I bool\n.br\nDefault: \n.I False\n\n.IP umask 8\nThe initial umask of the job is set to the value of this attribute when the\njob is created.  This may be changed by umask commands in the shell\ninitialization files such as .profile or .cshrc.\n.br\nCan be read and set by user, Operator, Manager.\n.br\nFormat: \n.I Decimal integer\n.br\nPython type: \n.I int\n.br\nDefault: \n.I 077\n\n.IP \"User_List\" 8\nThe list of users which determines the user name under which the job\nis run on a given host.  No length limit.\n.br\nWhen a job is to be executed, the server selects a user name from the\nlist according to the following ordered set of rules:\n.RS\n.IP 1. 
3 \nSelect the user name from the list for which the associated host name\nmatches the name of the server.\n.IP 2. 3\nSelect the user name which has no associated host name; the wild card name.\n.IP 3. 3 \nUse the value of \n.I Job_Owner \nas the user name.\n.RE\n.IP\nReadable and settable by all.\n.br\nFormat: \n.I String\n.br\nSyntax: \n.RS 11\n\"<username>@<hostname> [,<username>@<hostname>...]\" \n.br\nMust be enclosed in double quotes if it contains commas.  May be up to\n256 characters in length.\n.RE\n.IP\nPython type:\n.I pbs.user_list\n.br\nDefault: Value of \n.I Job_Owner\njob attribute\n\n.IP \"Variable_List\" 8\nList of environment variables set in the job's execution environment.\nSee the qsub(1B) command.\n.br\nReadable and settable by all.\n.br\nFormat: \n.I String\n.br\nSyntax: \n.RS 11\n\"<variable name>=<value> [,<variable name>=<value>...]\"\n.br\nMust be enclosed in double quotes if it contains commas. \n.RE\n.IP\nPython type: \n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11 \nVariable_List[\"<variable name>\"]=<value>\n.RE\n.IP\nDefault: No default\n\n\n.SH SEE ALSO\nqsub(1B), qalter(1B), qhold(1B), qrls(1B), pbs_resources(7B)\n"
  },
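The ordered User_List selection rules documented in pbs_job_attributes above can be sketched in Python. This is an illustrative sketch only, not PBS source; the function name `select_user_name` and its arguments are hypothetical:

```python
def select_user_name(user_list, server_host, job_owner):
    """Pick the user name a job runs under, following the ordered
    rules the man page gives for the User_List job attribute."""
    entries = []
    for item in user_list.split(","):
        user, _, host = item.strip().partition("@")
        entries.append((user, host))
    # Rule 1: user name whose associated host matches the server's name
    for user, host in entries:
        if host == server_host:
            return user
    # Rule 2: user name with no associated host (the wild card name)
    for user, host in entries:
        if host == "":
            return user
    # Rule 3: fall back to the value of Job_Owner
    return job_owner
```

For a list such as "alice@hostA,bob", rule 1 picks alice only on hostA; elsewhere rule 2 picks the wildcard entry bob.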
  {
    "path": "doc/man1/pbs_login.1B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_login 1B \"15 July 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_login \n- cache encrypted user password for authentication\n.SH Synopsis\n.B pbs_login\n.br\n.B pbs_login\n-m <PBS service account password>\n.br\necho <password>|\n.B pbs_login\n-p\n\n.SH Description\nThe \n.B pbs_login \ncommand encrypts the password and caches it locally where it can be\nused by daemons for authorization.\n\nJob submitters must run this command at each submission host each time\ntheir password changes.\n\nOn Windows, the \n.B win_postinstall \nscript calls \n.B pbs_login \nto store the PBS service account password so that the account user can\nbe authenticated by daemons.\n\n.SH Required Privilege\nCan be run by any user.\n\n\n.SH Options to pbs_login\n.IP \"(no options)\" 8\nQueries user for password.\n\n.IP \"-m <PBS service account password>\" 8\nThis option is intended to be used only by the PBS service account,\nwhich is the account that is used to execute \n.B pbs_mom \nvia the Service Control Manager on Windows.  This option is used during installation\nwhen invoked by the\n.B win_postinstall \nscript, or by the administrator when the\nPBS service account password has changed.  Stores PBS service account\npassword in the mom_priv directory.\n\n.IP \"-p\" 8\nCaches user password on client host.  
Intended to be run by job\nsubmitter at client host.  Allows job submitter to be authenticated\nby daemons.\n\n"
  },
  {
    "path": "doc/man1/pbs_module.7B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_module 7B \"6 April 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_module\n\\- Python interface to PBS and PBS hook environment\n\n\n.SH DESCRIPTION\n \nThe \n.I pbs module \nprovides an interface to PBS and the hook environment.\nThe interface is made up of Python objects, which have attributes and\nmethods.  You can operate on these objects using Python code.\n\n.SH PBS MODULE OBJECTS\n\n.IP  pbs.acl\nRepresents a PBS ACL type. 
\n.IP  pbs.args\nRepresents a space-separated list of PBS arguments to commands like qsub, qdel.\n.IP  pbs.argv[]\nRepresents a list of argument strings to be passed to the program\n.IP  pbs.BadAttributeValueError\nRaised when setting the attribute value of a pbs.* object to an invalid value.\n.IP  pbs.BadAttributeValueTypeError\nRaised when setting the attribute value of a pbs.* object to an invalid value type.\n.IP  pbs.BadResourceValueError\nRaised when setting the resource value of a pbs.* object to an invalid value.\n.IP  pbs.BadResourceValueTypeError\nRaised when setting the resource value of a pbs.* object to an invalid value type.\n.IP  pbs.checkpoint\nRepresents a job's \n.I Checkpoint \nattribute.\n.IP  pbs.depend\nRepresents a job's \n.I depend\nattribute.\n.IP  pbs.duration\nRepresents a time interval.\n.IP  pbs.email_list\nRepresents the set of users to whom mail may be sent.\n.IP  pbs.env[]\nDictionary of environment variables.\n.IP  pbs.event\nRepresents a PBS event.\n.IP  pbs.EventIncompatibleError\nRaised when referencing a non-existent attribute in pbs.event().\n.IP  pbs.EXECHOST_PERIODIC\nThe \n.I exechost_periodic \nevent type.\n.IP  pbs.EXECHOST_STARTUP\nThe \n.I exechost_startup\nevent type.\n.IP  pbs.EXECJOB_ATTACH\nThe \n.I execjob_attach\nevent type.\n.IP  pbs.EXECJOB_BEGIN\nThe \n.I execjob_begin\nevent type.\n.IP  pbs.EXECJOB_END\nThe \n.I execjob_end\nevent type.\n.IP  pbs.EXECJOB_EPILOGUE\nThe \n.I execjob_epilogue\nevent type.\n.IP  pbs.EXECJOB_LAUNCH\nThe \n.I execjob_launch\nevent type.\n.IP  pbs.EXECJOB_PRETERM\nThe \n.I execjob_preterm\nevent type.\n.IP  pbs.EXECJOB_PROLOGUE\nThe \n.I execjob_prologue\nevent type.\n.IP  pbs.exec_host\nRepresents a job's \n.I exec_host \nattribute.\n.IP  pbs.exec_vnode\nRepresents a job's \n.I exec_vnode \nattribute.\n.IP  pbs.group_list\nRepresents a list of group names.\n.IP  pbs.hold_types\nRepresents a job's \n.I Hold_Types \nattribute.\n.IP  pbs.hook_config_filename\nContains path to hook's 
configuration file.\n.IP  pbs.job\nRepresents a PBS job.\n.IP  pbs.job_list[]\nRepresents a list of pbs.job objects.\n.IP  pbs.job_sort_formula\nRepresents the server's\n.I job_sort_formula \nattribute.\n.IP  pbs.JOB_STATE_BEGUN\nRepresents the job array state of having started.\n.IP  pbs.JOB_STATE_EXITING\nRepresents the job state of exiting.\n.IP  pbs.JOB_STATE_EXPIRED\nRepresents the subjob state of expiring.\n.IP  pbs.JOB_STATE_FINISHED\nRepresents the job state of finished.\n.IP  pbs.JOB_STATE_HELD\nRepresents the job state of held.\n.IP  pbs.JOB_STATE_MOVED\nRepresents the job state of moved.\n.IP  pbs.JOB_STATE_QUEUED\nRepresents the job state of queued.\n.IP  pbs.JOB_STATE_RUNNING\nRepresents the job state of running.\n.IP  pbs.JOB_STATE_SUSPEND\nRepresents the job state of suspended.\n.IP  pbs.JOB_STATE_SUSPEND_USERACTIVE\nRepresents the job state of suspended due to user activity.\n.IP  pbs.JOB_STATE_TRANSIT\nRepresents the job state of transiting.\n.IP  pbs.JOB_STATE_WAITING\nRepresents the job state of waiting.\n.IP  pbs.join_path\nRepresents a job's \n.I Join_Path\nattribute.\n.IP  pbs.keep_files\nRepresents a job's \n.I Keep_Files \nattribute.\n.IP pbs.license_count\nRepresents a set of licensing-related counters.\n.IP  pbs.LOG_DEBUG\nLog level 004.\n.IP  pbs.LOG_ERROR\nLog level 004.\n.IP  pbs.LOG_WARNING\nLog level 004.\n.IP  pbs.mail_points\nRepresents a job's \n.I Mail_Points \nattribute.\n.IP  pbs.MODIFYJOB\nThe \n.I modifyjob \nevent type.\n.IP  pbs.MOVEJOB\nThe \n.I movejob \nevent type.\n\n.IP pbs.ND_BUSY\nRepresents \n.I busy \nvnode state.\n\n.IP pbs.ND_DEFAULT_EXCL\nRepresents \n.I default_excl sharing \nvnode attribute value\n\n.IP pbs.ND_DEFAULT_SHARED\nRepresents \n.I default_shared sharing \nvnode attribute value.\n\n.IP pbs.ND_DOWN\nRepresents \n.I down \nvnode state\n\n.IP pbs.ND_FORCE_EXCL\nRepresents \n.I force_excl sharing \nvnode attribute value.\n\n.IP pbs.ND_FREE\nRepresents \n.I free \nvnode state. 
\n\n.IP pbs.ND_GLOBUS\nPBS no longer supports Globus.  The Globus functionality has been \n.B removed \nfrom PBS.\n\nRepresents\n.I globus \nvnode \n.I ntype.\n\n.IP pbs.ND_IGNORE_EXCL\nRepresents \n.I ignore_excl sharing \nvnode attribute value.\n\n.IP pbs.ND_JOBBUSY\nRepresents \n.I job-busy \nvnode state.\n\n.IP pbs.ND_JOB_EXCLUSIVE\nRepresents \n.I job-exclusive \nvnode state.\n\n.IP pbs.ND_OFFLINE\nRepresents \n.I offline \nvnode state.  \n\n.IP pbs.ND_PBS\nRepresents \n.I pbs \nvnode \n.I ntype.\n\n.IP pbs.ND_PROV\nRepresents \n.I provisioning \nvnode state.\n\n.IP pbs.ND_RESV_EXCLUSIVE\nRepresents\n.I resv-exclusive\nvnode state.\n\n.IP pbs.ND_STALE\nRepresents \n.I stale \nvnode state.\n\n.IP pbs.ND_STATE_UNKNOWN\nRepresents \n.I state-unknown, down \nvnode state.\n\n.IP pbs.ND_UNRESOLVABLE\nRepresents the \n.I unresolvable\nvnode state.\n\n.IP pbs.ND_WAIT_PROV\nRepresents \n.I wait-provisioning \nvnode state.\n\n.IP  pbs.node_group_key\nRepresents the server or queue \n.I node_group_key\nattribute.\n.IP  pbs.path_list\nRepresents a list of pathnames.\n.IP  pbs.pbs_conf[]\nDictionary of entries in pbs.conf.\n.IP  pbs.pid\nRepresents the process ID of a process belonging to a job.\n\n.IP  pbs.place\nRepresents the \n.I place\njob submission specification.\n.IP pbs.progname\nPath of job shell or executable.\n.IP  pbs.QTYPE_EXECUTION\nThe \n.I execution\nqueue type.\n.IP  pbs.QTYPE_ROUTE\nThe \n.I route\nqueue type.\n.IP  pbs.queue\nRepresents a PBS queue.\n.IP  pbs.QUEUEJOB\nThe \n.I queuejob \nevent type.\n.IP  pbs.range\nRepresents a range of numbers referring to array indices.\n.IP  pbs.resv\nRepresents a PBS reservation.\n.IP  pbs.RESVSUB\nThe \n.I resvsub\nevent type.\n.IP  pbs.RESV_STATE_BEING_DELETED\nRepresents the reservation state RESV_BEING_DELETED.\n.IP  pbs.RESV_STATE_CONFIRMED\nRepresents the reservation state RESV_CONFIRMED.\n.IP  pbs.RESV_STATE_DEGRADED\nRepresents the reservation state RESV_DEGRADED.\n.IP  pbs.RESV_STATE_DELETED\nRepresents 
the reservation state RESV_DELETED.\n.IP  pbs.RESV_STATE_DELETING_JOBS\nRepresents the reservation state RESV_DELETING_JOBS.\n.IP  pbs.RESV_STATE_FINISHED\nRepresents the reservation state RESV_FINISHED.\n.IP  pbs.RESV_STATE_NONE\nRepresents the reservation state RESV_NONE.\n.IP  pbs.RESV_STATE_RUNNING\nRepresents the reservation state RESV_RUNNING.\n.IP  pbs.RESV_STATE_TIME_TO_RUN\nRepresents the reservation state RESV_TIME_TO_RUN.\n.IP  pbs.RESV_STATE_UNCONFIRMED\nRepresents the reservation state RESV_UNCONFIRMED.\n.IP  pbs.RESV_STATE_WAIT\nRepresents the reservation state RESV_WAIT.\n.IP  pbs.route_destinations\nRepresents a queue's \n.I route_destinations\nattribute.\n.IP  pbs.RUNJOB\nThe \n.I runjob\nevent type.\n.IP  pbs.select\nRepresents the \n.I select\njob submission specification.\n.IP  pbs.server\nRepresents the local PBS server.\n.IP  pbs.size\nRepresents a PBS \n.I size\ntype.\n.IP  pbs.software\nRepresents a site-dependent software specification resource.\n.IP  pbs.staging_list\nRepresents a list of file stagein or stageout parameters.\n.IP  pbs.state_count\nRepresents a set of job-related state counters.\n.IP  pbs.SV_STATE_ACTIVE\nRepresents the server state \"Scheduling\".\n.IP  pbs.SV_STATE_HOT\nRepresents the server state \"Hot_Start\".\n.IP  pbs.SV_STATE_IDLE\nRepresents the server state \"Idle\".\n.IP  pbs.SV_STATE_SHUTDEL\nRepresents the server state \"Terminating, Delayed\".\n.IP  pbs.SV_STATE_SHUTIMM\nRepresents the server state \"Terminating\".\n.IP  pbs.SV_STATE_SHUTSIG\nRepresents the server state \"Terminating\", when a signal has been caught.\n.IP  pbs.UnsetAttributeNameError\nRaised when referencing a non-existent name of a pbs.* object.\n.IP  pbs.UnsetResourceNameError\nRaised when referencing a non-existent name of a pbs.* object.\n.IP  pbs.user_list\nRepresents a list of user names.\n.IP  pbs.vchunk\nRepresents a resource chunk assigned to a job.\n.IP  pbs.version\nRepresents PBS version information.\n.IP  pbs.vnode\nRepresents a 
PBS vnode.\n.IP  pbs.vnode_list[]\nRepresents a list of PBS vnodes.\n.IP  SystemExit\nRaised when accepting or rejecting an action.\n.LP\n\n.SH PBS MODULE GLOBAL METHODS\n.IP pbs.acl(\"[+|-]<entity>[,...]\")\nCreates an object representing a PBS ACL, using the given string parameter.\nInstantiation of these objects requires a formatted input string.\n\n\n.IP pbs.args(\"<args>\")\nwhere \n.I <args> \nare space-separated arguments to a command such as \n.B qsub \nor \n.B qdel.\nCreates an object representing the arguments to the command.\nExample:\n.RS 10\npbs.args(\"-Wsuppress_email=N -r y\")\n.RE\n.IP\nInstantiation of these objects requires a formatted input string.\n\n.IP pbs.checkpoint(\"<checkpoint_string>\")\nwhere \n.I <checkpoint_string> \nmust be one of \"n\", \"s\", \"c\", \"c=mmm\", \"w\", or \"w=mmm\".\nCreates an object representing the job's\n.I Checkpoint \nattribute, using the given string.\nInstantiation of these objects requires a formatted input string.\n.IP pbs.depend(\"<depend_string>\")\n.I <depend_string> \nmust be of format \"<type>:<jobid>[,<jobid>...]\", or \"on:<count>\", and\nwhere \n.I <type> \nis one of \"after\", \"afterok\", \"afterany\", \"before\", \"beforeok\",\nand \"beforenotok\".\nCreates a PBS dependency specification object representing the job's\n.I depend \nattribute, using the given \n.I <depend_string>.\nInstantiation of these objects requires a formatted input string.\n.IP pbs.duration(\"[[hours:]minutes:]seconds[.milliseconds]\")\nCreates a time specification duration instance, returning the\nequivalent number of seconds from the given time string. Represents an\ninterval or elapsed time in number of seconds. Duration objects can be\nspecified using either a time or an integer. See the\n\"pbs.duration(<integer>)\" creation method.\n.IP pbs.duration(<integer>)\nCreates an integer duration instance using the specified number of\nseconds.  
A \n.I pbs.duration \ninstance can be operated on by any of the\nPython \n.I int \nfunctions.  When performing arithmetic operations on a\n.I pbs.duration \ntype, ensure the resulting value is a \n.I pbs.duration()\ntype, before assigning to a job member that expects such a type.\n.IP pbs.email_list(\"<email_address1>[,<email_address2>...]\")\nCreates an object representing a mail list.\nInstantiation of these objects requires a formatted input string.\n.IP pbs.exec_host(\"host/N[*C][+...]\")\nCreate an object representing the \n.I exec_host \njob attribute, using the\ngiven host and resource specification.\nInstantiation of these objects requires a formatted input string.\n.IP pbs.exec_vnode(\"<vchunk>[+<vchunk>...]\")\n.I <vchunk> \nis (<vnodename:ncpus=N:mem=M>)\nCreates an object representing the \n.I exec_vnode \njob attribute, using the\ngiven vnode and resource specification. When the \n.B qrun -H \ncommand is used, or when the scheduler runs a job, the \n.I pbs.job.exec_vnode \nobject contains the vnode specification for the job.  Instantiation of these\nobjects requires a formatted input string.\n.br\nExample:\n.br\npbs.exec_vnode(\"(vnodeA:ncpus=N:mem=X)+(nodeB:ncpus=P:mem=Y+nodeC:mem=Z)\")\n.br\nThis object is managed and accessed via the \n.I str() \nor \n.I repr() \nfunctions. \n.br\nExample:\n.br\nPython> ev = pbs.server().job(\"10\").exec_vnode\n.br\nPython> str(ev)\n.br\n\"(vnodeA:ncpus=2:mem=200m)+(vnodeB:ncpus=5:mem=1g)\"\n\n.IP pbs.get_hook_config_file()\nReturns the path to the hook's configuration file, or None if there is\nno configuration file.  For example:\n.RS 10\nconfigfilepath = pbs.get_hook_config_file()\n.RE\n\n.IP pbs.get_local_nodename()\nThis returns a Python str whose value is the name of the local natural vnode.\nIf you want to refer to the vnode object representing the current\nhost, you can pass this vnode name as the key to\n.I pbs.event().vnode_list[].  
\nFor example:\n.RS 10\nVn = pbs.event().vnode_list[pbs.get_local_nodename()]\n.RE\n\n.IP pbs.get_pbs_conf()\nThis method returns a dictionary of values which represent entries in\nthe pbs.conf file.  The method reads the file on the host where a hook\nruns, so pre-execution event hooks get the entries on the server host,\nand execution event hooks get the entries on the execution host where\nthe hook runs.  The method reads /etc/pbs.conf on the host where\npbs_python runs.\nExample:\n.RS 10\npbs_conf = pbs.get_pbs_conf()\n.br\npbs.logmsg(pbs.LOG_DEBUG, \"pbs home is %s\" % (pbs_conf['PBS_HOME']))\n.RE\n.IP\nIf you HUP pbs_mom (Linux/UNIX), pbs.get_pbs_conf returns the reloaded\ncontents of the pbs.conf file.\n\n.IP pbs.group_list(\"<group_name>[@<host>][,<group_name>[@<host>]...]\")\nCreates an object representing a PBS group list.\nTo use a group list object:\n.br\npbs.job.group_list = pbs.group_list(....)\n.br\nInstantiation of these objects requires a formatted input string.\n.IP pbs.hold_types(\"<hold_type_str>\")\nwhere \n.I <hold_type_str> \nis one of \"u\", \"o\", \"s\", or \"n\".\nCreates an object representing the \n.I Hold_Types \njob attribute.\nInstantiation of these objects requires a formatted input string.\n.IP pbs.job_sort_formula(\"<formula_string>\")\nwhere \n.I <formula_string> \nis a string containing a math formula. 
\nCreates an object representing the \n.I job_sort_formula \nserver attribute.\nInstantiation of these objects requires a formatted input string.\n.IP pbs.join_path({\"oe\"|\"eo\"|\"n\"})\nCreates an object representing the \n.I Join_Path \njob attribute.\nInstantiation of these objects requires a formatted input string.\n.IP pbs.keep_files(\"<keep_files_str>\")\nwhere \n.I <keep_files_str> \nis one of \"o\", \"e\", \"oe\", \"eo\".\nCreates an object representing the \n.I Keep_Files \njob attribute.\nInstantiation of these objects requires a formatted input string.\n.IP pbs.license_count(\"Avail_Global:<W>Avail_Local:<X>Used:<Y>High_Use:<Z>\")\nInstantiates an object representing a \n.I license_count \nattribute.\nInstantiation of these objects requires a formatted input string.\n.IP pbs.logjobmsg(job_ID,message)\nwhere \n.I job_ID \nmust be an existing or previously existing job ID and where \n.I message \nis an arbitrary string.  This puts a custom string in the PBS Server\nlog. The \n.B tracejob \ncommand can be used to print out the job-related\nmessages logged by a hook script.  
Messages are logged at log event\nclass \n.I pbs.LOG_DEBUG.\n.IP pbs.logmsg(log_event_class,message)\nwhere \n.I message \nis an arbitrary string, and where \n.I log_event_class \ncan be one of the message log event class constants:\n.br\npbs.LOG_WARNING\n.br\npbs.LOG_ERROR\n.br\npbs.LOG_DEBUG\n.br\nThis puts a custom string in the daemon log.\n.IP pbs.mail_points(\"<mail_points_string>\")\nwhere \n.I <mail_points_string> \nis \"a\", \"b\", and/or \"e\", or \"n\".\nCreates an object representing a \n.I Mail_Points \nattribute.\nInstantiation of these objects requires a formatted input string.\n.IP pbs.node_group_key(\"<resource>\")\nCreates an object representing the resource to be used for node\ngrouping, using the specified resource.\n.IP pbs.path_list(\"<path>[@<host>][,<path>@<host>...]\")\nCreates an object representing a PBS pathname list.\nTo use a path list object:\n.br\npbs.job.Shell_Path_List = pbs.path_list(....)\n.br\nInstantiation of these objects requires a formatted input string.\n\n.IP pbs.pbs_env()\nCreates an empty environment variable list.  
For example, to create\nan empty environment variable list:\n.RS 10\npbs.event().env = pbs.pbs_env()\n.RE\n\n.IP pbs.place(\"[arrangement]:[sharing]:[group]\")\n.I arrangement \ncan be \"pack\", \"scatter\", \"free\", \"vscatter\"\n.br\n.I sharing \ncan be \"shared\", \"excl\", \"exclhost\"\n.br\n.I group \ncan be of the form \"group=<resource>\"\n.br\n.I [arrangement], [sharing], \nand \n.I [group] \ncan be given in any order or combination.\n.br\nCreates a place object representing the job's place specification.\nInstantiation of these objects requires a formatted input string.\n.br\nExample:\n.br\npl = pbs.place(\"pack:excl\")\n.br\ns = repr(pl) (or s = `pl`)\n.br\nletter = pl[0] (assigns 'p' to letter)\n.br\ns = s + \":group=host\" (append to string)\n.br\npl = pbs.place(s) (update original pl)\n.IP pbs.range(\"<start>-<stop>:<step>\")\nCreates a PBS object representing a range of values.\n.br\nExample:\n.br\npbs.range(\"1-30:3\")\n.br\nInstantiation of these objects requires a formatted input string.\n\n.IP pbs.reboot([<command>])\nThis stops hook execution, so that remaining lines in the hook script\nare not executed, and starts the tasks that would normally begin after\nthe hook is finished, such as flagging the current host to be\nrebooted.  The MoM logs show the following:\n.RS 10\n<hook name> requested for host to be rebooted\n.RE\n.IP\nWe recommend that before calling pbs.reboot(), you set any vnodes\nmanaged by this MoM offline, and requeue the current job, if this hook\nis not an exechost_periodic hook.  For example:\n.RS 10\nfor v in pbs.event().vnode_list.keys():\n.br\n\\ \\ \\ pbs.event().vnode_list[v].state = pbs.ND_OFFLINE\n.br\n\\ \\ \\ pbs.event().vnode_list[v].comment = \"MoM host rebooting\"\n.br\npbs.event().job.rerun()\n.br\npbs.reboot()\n.RE\n.IP\nThe effect of the call to pbs.reboot() is not instantaneous. 
The\nreboot happens after the hook executes, and after any of the other\nactions such as pbs.event().job.rerun(), pbs.event().delete(), and\npbs.event().vnode_list[] take effect.\n\nA hook with its user attribute set to pbsuser cannot successfully\ninvoke pbs.reboot(), even if the owner is a PBS Manager or Operator.\nIf this is attempted, the host is not rebooted, and the following\nmessage appears at log event class PBSEVENT_DEBUG2 in the MoM logs:\n.RS 10\n<hook_name>; Not allowed to issue reboot if run as user.\n.RE\n.IP\nThe <command> is an optional argument.  It is a Python str which is\nexecuted instead of the reboot command that is the default for the\nsystem.  For example:\n.RS 10\npbs.reboot(\"/usr/local/bin/my_reboot -s 10 -c 'going down in 10'\")\n.RE\n.IP\nThe specified <command> is executed in a shell on Linux/UNIX or via cmd on Windows.\n\n\n.IP pbs.route_destinations(\"<queue_spec>[,<queue_spec>,...]\")\nwhere \n.I <queue_spec> \nis queue_name[@server_host[:port]]\n.br\nCreates an object that represents a \n.I route_destinations \nrouting queue attribute.\nInstantiation of these objects requires a formatted input string.\n.IP pbs.select(\"[N:]res=val[:res=val][+[N:]res=val[:res=val]...]\")\nCreates a \n.I select \nobject representing the job's select specification.\nInstantiation of these objects requires a formatted input string.\nExample:\n.br\nsel = pbs.select(\"2:ncpus=1:mem=5gb+3:ncpus=2:mem=5gb\")\n.br\ns = repr(sel) (or s = `sel`)\n.br\nletter = s[3] (assigns 'c' to letter)\n.br\ns = s + \"+5:scratch=10gb\" (append to string)\n.br\nsel = pbs.select(s) (reset the value of sel)\n.br\n.IP pbs.size(<integer>)\nCreates a PBS \n.I size \nobject using the given integer value, storing the\nvalue as the number of bytes. Size objects can be specified using\neither an integer or a string. 
See the \"pbs.size(\"<integer><suffix>\")\"\ncreation method.\n.IP pbs.size(\"<integer><suffix>\")\nCreates a PBS \n.I size \nobject out of the given string specification. \nThe size of a word\nis the word size on the execution host. \n.I Size \nobjects can be specified\nusing either an integer or a string.  To operate on \n.I pbs.size\ninstances, use the \"+\" and \"-\" operators.  To compare \n.I pbs.size\ninstances, use the \"==\", \"!=\", \">\", \"<\", \">=\", and \"<=\" operators.\nExample: the sizes are normalized to the smaller of the 2 suffixes. In\nthis case, \"10gb\" becomes \"10240mb\" and is added to \"10mb\":\n.br\nsz = pbs.size(\"10gb\")\n.br\nsz = sz + pbs.size(\"10mb\")\n.br\n10250mb\n.br\nExample: the following returns \n.I True \nbecause \n.I sz \nis greater than 100 bytes:\n.br\nif sz > 100:\n.br\n\\ \\ \\ \\ gt100 = True\n.br\n\n.IP pbs.software(\"<software_info_string>\")\nCreates an object representing a site-dependent software resource.\nInstantiation of these objects requires a formatted input string.\n\n.IP pbs.staging_list(\"<filespec>[,<filespec>,...]\")\nwhere \n.I <filespec> \nis <execution_path>@<storage_host>:<storage_path>.\nCreates an object representing a job file staging parameters list.\nTo use a staging list object:\n.br\npbs.job.stagein = pbs.staging_list(....)\n.br\nInstantiation of these objects requires a formatted input string.\n.IP pbs.state_count(\"Transit:<U>Queued:<V>Held:<W>Running:<X>Exiting:<Y>Begun:<Z>\")\nInstantiates an object representing a \n.I state_count \nattribute.\nInstantiation of these objects requires a formatted input string.\n.IP pbs.user_list(\"<user>[@<host>][,<user>@<host>...]\")\nCreates an object representing a PBS user list.\nTo use a user list object:\n.br\npbs.job.User_List = pbs.user_list(....)\n.br\nInstantiation of these objects requires a formatted input string.\n\n.IP pbs.version(\"<pbs_version_string>\")\nCreates an object representing the PBS version string.\nInstantiation of these objects requires a 
formatted input string.\n\n\n\n.SH ATTRIBUTES AND RESOURCES\n.br\nHooks can read Server, Queue, or reservation resources. \nHooks can read vnode or job attributes and resources.  Hooks can modify:\n.IP\nThe resources requested by a job\n.br\nThe resources used by a job\n.br\nThe attributes of a job\n.br\nThe resource arguments to pbs_rsub\n.br\nVnode attributes and resources\n.br\nThe shell or program to be executed in a job\n.br \nThe arguments to the shell or program to be executed in a job\n.br\nThe environment of the job\n.LP\n\nCustom and built-in PBS resources are represented in Python dictionaries,\nwhere the resource names are the dictionary keys.  Built-in resources are\nlisted in pbs_resources(7B).  You reference a resource through a vnode,\nthe Server, the event that triggered the hook, or the current job, for example:\n\n.IP\npbs.server().resources_available[\"<resource name>\"]\n.br\npbs.event().job.Resource_List[\"<resource name>\"]\n.br \npbs.event().vnode_list[<vnode name>].resources_available[\"<resource name>\"]\n.LP\n\nThe resource name must be in quotes.\nExample: Get the number of CPUs:\n\n.IP\nncpus = pbs.event().job.Resource_List[\"ncpus\"]\n.LP\n\nAn instance R of a job resource can be set as follows:\n.IP\nR[\"<resource name>\"] = <resource value>\n.LP\n\nFor example:\n.IP\npbs.event().job.Resource_List[\"mem\"] = pbs.size(\"8gb\")\n.LP\n\n.SH EXCEPTIONS\n\n.IP  pbs.BadAttributeValueError\nRaised when setting the attribute value of a pbs.* object to an invalid value.\n.IP  pbs.BadAttributeValueTypeError\nRaised when setting the attribute value of a pbs.* object to an invalid value type.\n.IP  pbs.BadResourceValueError\nRaised when setting the resource value of a pbs.* object to an invalid value.\n.IP  pbs.BadResourceValueTypeError\nRaised when setting the resource value of a pbs.* object to an invalid value type.\n.IP  pbs.EventIncompatibleError\nRaised when referencing a non-existent attribute in pbs.event().\n.IP  pbs.UnsetAttributeNameError\nRaised when 
referencing a non-existent name of an attribute.\n.IP  pbs.UnsetResourceNameError\nRaised when referencing a non-existent name of a resource.\n.IP  SystemExit\nRaised when accepting or rejecting an action.\n.LP\n\nIf a hook encounters an unhandled exception:\n.IP\nPBS rejects the corresponding action, and an error message is printed \nto stderr.\n.br\nA message is printed to the daemon log.\n.LP\n\n\n\n\n\n.SH SEE ALSO\npbs_hook_attributes(7B), pbs_resources(7B),\nqmgr(1B)\n\n\n"
  },
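The pbs.size arithmetic documented in pbs_module above normalizes both operands to the smaller of the two suffixes before adding, so "10gb" + "10mb" yields "10250mb". That normalization can be mimicked in plain Python; this is an illustrative sketch only, and `add_sizes` is a hypothetical helper, not the PBS class:

```python
# Suffix exponents for PBS size units: 1kb = 1024b, 1mb = 1024kb, and so on.
SUFFIXES = {"b": 0, "kb": 1, "mb": 2, "gb": 3, "tb": 4, "pb": 5}

def add_sizes(a, b):
    """Add two PBS-style size strings, normalizing to the smaller suffix
    the way pbs_module(7B) describes for pbs.size addition."""
    def parse(s):
        i = 0
        while i < len(s) and s[i].isdigit():
            i += 1
        return int(s[:i]), s[i:] or "b"
    (na, sa), (nb, sb) = parse(a), parse(b)
    # Pick the smaller of the two suffixes and scale both values to it.
    small = sa if SUFFIXES[sa] <= SUFFIXES[sb] else sb
    na *= 1024 ** (SUFFIXES[sa] - SUFFIXES[small])
    nb *= 1024 ** (SUFFIXES[sb] - SUFFIXES[small])
    return "%d%s" % (na + nb, small)
```

Here "10gb" scales to 10240mb before the 10mb is added, matching the man page's worked example.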
  {
    "path": "doc/man1/pbs_node_attributes.7B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_node_attributes 7B \"17 July 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_node_attributes \n\\- attributes of PBS vnodes\n\n\n.SH DESCRIPTION\nVnodes have the following attributes:\n\n.IP comment 8\nInformation about this vnode.  This attribute may be set by the\nmanager to any string to inform users of any information relating to\nthe node. If this attribute is not explicitly set, the PBS server will\nuse the attribute to pass information about the node status,\nspecifically why the node is down. If the attribute is explicitly set\nby the manager, it will not be modified by the server.\n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I String\n.br\nPython type: \n.I str\n.br\nDefault: No default\n\n.IP current_aoe 8\nThe AOE currently instantiated on this vnode.  Cannot be set on \nserver's host.\n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I String\n.br\nPython type: \n.I str\n.br\nDefault: \n.I Unset\n\n.IP current_eoe 8\nCurrent value of eoe on this vnode.  We do not recommend setting this\nattribute manually.\n.br\nReadable by all; settable by Manager (not recommended).\n.br\nFormat: \n.I String\n.br\nPython type: \n.I str\n.br\nDefault:\n.I Unset\n\n.IP in_multivnode_host 8\nSpecifies whether a vnode is part of a multi-vnoded host.  Used\ninternally.  
Do not set.\n.br\nReadable and settable by Manager (not recommended).\n.br\nFormat:\n.I Integer \n.br\nPython type:\n.I int\n.br\nBehavior:\n.RS\n.IP 1 3\nPart of a multi-vnode host\n.IP Unset 3\nNot part of a multi-vnode host\n.RE\n.IP\nDefault: \n.I Unset   \n\n.IP jobs 8\nList of jobs running on this vnode.\n.br\nReadable by all; set by PBS.\n.br\nFormat: \n.I String\n.br\nSyntax: \n.I <processor number>/<job ID>, ... \n.br\nPython type:\n.I int\n.br\n\n.IP last_state_change_time 8\nRecords the most recent time that this node changed state.              \n.br\nFormat: \n.RS 11\nTimestamp\n.br\nPrinted by qstat in human-readable Date format.  \n.br\nOutput in hooks as seconds since epoch.\n.RE\n.IP\n\n.IP last_used_time 8\nRecords the most recent time that this node finished being used for a\njob or reservation.  Set at creation or reboot time.  Updated when\nnode is released early from a running job.  Reset when node is ramped\nup.\n.br\nFormat: \n.RS 11\nTimestamp\n.br\nPrinted by qstat in human-readable Date format.  \n.br\nOutput in hooks as seconds since epoch.\n.RE\n\n.IP license 8\n.br\nIndicates whether this vnode is licensed.\n.br\nReadable by all; set by PBS.\n.br\nFormat:\n.I Character\n.br\nPython type: \n.I str\n.br\nValid values:\n.RS\n.IP l 3\nThis vnode is licensed.\n.RE\n.IP \nDefault: \n.I Unset\n\n.IP license_info 8\nNumber of licenses assigned to this vnode.  \n.br\nReadable by all; set by PBS.             \n.br\nFormat: \n.I Integer\n.br\nPython type:\n.I int\n.br\nDefault: \n.I Unset\n\n.IP lictype 8\nNo longer used.\n\n.IP maintenance_jobs 8\nList of jobs that were running on this vnode, but have been suspended\nvia the\n.I admin-suspend \nsignal to qsig.\n.br\nReadable by Manager; set by PBS.\n.br\nFormat:\n.I string_array                      \n.br\nPython type:\n.I str\n.br\nDefault: No default \n\n.IP Mom\nHostname of host on which MoM daemon runs.  \n.br\nReadable by all.  
Can be explicitly set by Manager only via \n.B qmgr, \nand only at vnode creation. The server can set this to the FQDN of the host \non which MoM runs, if the vnode name is the same as the hostname.\n.br\nFormat: \n.I String\n.br\nPython type: \n.I str\n.br\nDefault: Value of \n.I vnode \nresource (vnode name)\n\n.IP name 8\nThe name of this vnode.\n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I String\n.br\nPython type: \n.I str\n.br\nDefault: No default\n\n.IP no_multinode_jobs 8\nControls whether jobs which request more than one chunk are allowed to execute\non this vnode.  Used for cycle harvesting.\n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I Boolean\n.br\nPython type:\n.I bool\n.br\nBehavior: \n.RS\n.IP True 3\nJobs requesting more than one chunk are not allowed to execute on this vnode.\n.RE\n.IP\nDefault: \n.I False\n\n.IP ntype 8\nThe type of this vnode.\n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I String\n.br\nValid values:\n.RS\n.IP PBS 3\nNormal vnode\n.br\nPython type: \n.I pbs.ND_PBS\n.br\nDefault: \n.I PBS\n.RE\n\n.IP partition 8\nName of partition to which this vnode is assigned.  A vnode can be\nassigned to at most one partition.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I String\n.br\nPython type:\n.I str\n.br\nDefault: No default\n\n.IP pbs_version 8\nThe version of PBS for this MoM.  \n.br\nReadable by all; set by PBS.\n.br\nFormat: \n.I String\n.br\nPython type: \n.I str\n.br\nDefault: No default\n\n.IP pcpus 8\n.B Deprecated.  \nThe number of physical CPUs on this vnode.  This is set to the number\nof CPUs available when MoM starts.  For a multiple-vnode MoM, only the\nnatural vnode has \n.I pcpus.\n.br\nReadable by all; set by PBS.\n.br\nFormat: \n.I Integer\n.br\nPython type: \n.I int\n.br\nDefault: \n.I Number of CPUs on startup\n\n.IP pnames\nThe list of resources being used for placement sets.  
\nNot used for scheduling; advisory only.\n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I String\n.br\nSyntax: \n.I Comma-separated list of resource names\n.br\nPython type: \n.I str\n.br\nDefault: No default\n\n.IP Port 8\nPort number on which MoM daemon listens.  \n.br\nCan be explicitly set only via\n.B qmgr, \nand only at vnode creation.  \nReadable and settable by Operator and Manager.\n.br\nFormat: \n.I Integer\n.br\nPython type: \n.I int\n.br\nDefault: \n.I 15002\n\n.IP poweroff_eligible 8\nEnables powering this vnode up and down by PBS.\n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I Boolean\n.br\nPython type:\n.I bool\n.br\nValues:\n.RS 11\n.IP True\nPBS can power this vnode on and off.\n.IP False\nPBS cannot power this vnode on and off.\n.RE\n.IP\nDefault: \n.I False\n\n\n\n.IP power_provisioning 8\nEnables use of power profiles by this vnode.\n.br\nReadable by all; settable by Manager.\n.br\nFormat:\n.I Boolean\n.br\nPython type:\n.I bool\n.br\nBehavior:\n.RS\n.IP True 3\nPower provisioning is enabled at this vnode.\n.IP False 3\nPower provisioning is disabled at this vnode.\n.RE\n.IP\nDefault: \n.I False\n\n.IP Priority 8\nThe priority of this vnode compared with other vnodes.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Integer\n.br\nPython type: \n.I int\n.br\nValid values: \n.I -1024 to +1023\n.br\nDefault: No default\n\n.IP provision_enable\nControls whether this vnode can be provisioned.  \nCannot be set on server's host.  \n.br\nFormat: \n.I Boolean\n.br\nPython type: \n.I bool\n.br\nBehavior:\n.RS\n.IP True 3\nThis vnode may be provisioned.  \n.IP False 3\nThis vnode may not be provisioned.  \n.RE\n.IP\nDefault: \n.I False\n\n.IP queue 8\n.B Deprecated.  \nThe queue with which this vnode is associated.  Each vnode can be\nassociated with at most 1 queue.  Queues can be associated with\nmultiple vnodes.  Any jobs in a queue that has associated vnodes can\nrun only on those vnodes.  
If a vnode has an associated queue, only\njobs in that queue can run on that vnode.  \n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I String\n.br\nPython type:\n.I pbs.queue\n.br\nBehavior:\n.RS\n.IP \"<name of queue>\" 3\nOnly jobs in specified queue may run on this vnode.  \n.IP Unset 3\nAny job in any queue that does not have associated vnodes can run on this vnode.\n.RE\n.IP \nDefault: No default\n\n.IP resources_assigned 8\nThe total amount of each resource allocated to jobs and started\nreservations running on this vnode.\n.br\nReadable by all; set by PBS.\n.br\nFormat: \n.I String\n.br\nSyntax: \n.RS 11\n.I resources_assigned.<resource name>=<value>[, resources_assigned.<resource name>=<value>, ...]\n.RE\n.IP\nPython type: \n.I pbs.pbs_resource\n.br\nSyntax: \n.RS 11\nresources_assigned['<resource name>'] = <value> \n.br\nwhere \n.I resource name \nis any built-in or custom resource\n.RE\n.IP\nDefault: No default\n\n.IP resources_available 8\nThe list of resources and amounts available on this vnode.  If not\nexplicitly set, the amount shown is that reported by the pbs_mom\nrunning on this vnode.  
If a resource value is explicitly set, that\nvalue is retained across restarts.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I String\n.br\nSyntax:\n.RS 11\n.I resources_available.<resource name>=<value>, \n.I resources_available.<resource name> = <value>, ...\n.RE\n.IP \nPython type:\n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\nresources_available['<resource name>'] = <value> \n.br\nwhere \n.I resource name \nis any built-in or custom resource\n.RE\n.IP \nDefault: No default\n\n.IP resv 8 \nList of advance and standing reservations pending on this vnode.\n.br\nReadable by all; set by PBS.\n.br\nFormat: \n.I String\n.br\nSyntax: \n.RS 11\n.I <reservation ID>[, <reservation ID>, ...]\n.br\n(Comma-separated list of reservation IDs)\n.RE\n.IP\nPython type:\n.I str\n.br\nExample: resv = R142.examplemachine, R143.examplemachine\n.br\nDefault: No default\n\n.IP resv_enable 8\nControls whether the vnode can be used for advance and standing\nreservations.  Reservations are incompatible with cycle harvesting.\n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I Boolean\n.br\nPython type: \n.I bool\n.br\nBehavior: \n.RS 11\nWhen set to \n.I True, \nthis vnode can be used for reservations. Existing reservations are\nhonored when this attribute is changed from \n.I True \nto \n.I False.\n.RE\n.IP\nDefault: \n.I True\n\n.IP sharing 8\nSpecifies whether more than one job at a time can use the resources of\nthe vnode or the vnode's host.  Either (1) the vnode or host is\nallocated exclusively to one job, or (2) the vnode's or host's unused\nresources are available to other jobs.  
\n.br\nCan be set using \n.I pbs_mom -s insert\nonly.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I String\n.br\nPython type:\n.I int \n.br\nValid values: \n.RS\n.IP default_shared 3\nDefaults to \n.I shared\n.br\nPython type:\n.I pbs.ND_DEFAULT_SHARED\n.IP default_excl 3\nDefaults to \n.I exclusive\n.br\nPython type:\n.I pbs.ND_DEFAULT_EXCL\n.IP default_exclhost 3\nEntire host is assigned to the job unless the job's sharing request\nspecifies otherwise\n.br\nPython type:\n.I pbs.ND_DEFAULT_EXCLHOST\n.IP ignore_excl 3\nOverrides any job \n.I place=excl\nsetting\n.br\nPython type:\n.I pbs.ND_IGNORE_EXCL\n.IP force_excl 3\nOverrides any job\n.I place=shared\nsetting\n.br\nPython type:\n.I pbs.ND_FORCE_EXCL\n.IP force_exclhost 3\nThe entire host is assigned to the job, regardless of the job's sharing request\n.br\nPython type:\n.I pbs.ND_FORCE_EXCLHOST\n.IP Unset 3\nDefaults to \n.I shared\n.RE\n.IP\nBehavior of a vnode or host is determined by a combination of the \n.I sharing\nattribute and a job's placement directive, defined as follows:\n.nf\n                 | Vnode Behavior      | Host Behavior\n                 | when place=         | when place=\n                 |                     |   \nsharing value    | unset  shared excl  |exclhost !=exclhost\n----------------------------------------------------------------\nnot set          | shared shared excl  | excl   depends on place\ndefault_shared   | shared shared excl  | excl   depends on place\ndefault_excl     | excl   shared excl  | excl   depends on place\ndefault_exclhost | excl   shared excl  | excl   depends on place\nignore_excl      | shared shared shared| shared not exclusive\nforce_excl       | excl   excl   excl  | excl   not exclusive\nforce_exclhost   | excl   excl   excl  | excl   excl\n.fi\n\nExample: <vnode name>: sharing=force_excl\n.br\nDefault value: \n.I default_shared\n\n.IP state 8\nShows or sets the state of the vnode.  \n.br\nReadable by all.  
All states are set by PBS; Operator and Manager \ncan set \n.I state\nto \n.I offline.\n.br\nFormat: \n.I String \n.br\nSyntax: \n.I <state>[, <state>, ...]\n.br\n(Comma-separated list of one or more states)\n.br\nPython type: \n.I int\n.br\nValid values:\n.RS\n.IP busy 3\nVnode is reporting load average greater than allowed max.\nCan combine with \n.I offline\n.IP down 3\nNode is not responding to queries from the server.  \nCannot be combined with \n.I free, provisioning.\n.IP free 3\nVnode is up and capable of accepting additional job(s).\nCannot be combined with other states.\n.IP job-busy 3\nAll CPUs on the vnode are allocated to jobs.\nCan combine with: \n.I offline, resv-exclusive\n.IP job-exclusive 3\nEntire vnode is exclusively allocated to one job at the job's request.\nCan combine with:\n.I offline, resv-exclusive\n.IP offline 3\nJobs are not to be assigned to this vnode.  \nCan combine with: \n.I busy, job-busy, job-exclusive, resv-exclusive\n.IP provisioning 3\nVnode is being provisioned.  Cannot be combined with other states.\n.IP resv-exclusive 3\nRunning reservation has requested exclusive use of vnode.  Can combine\nwith: \n.I job-exclusive, offline\n.IP stale 3\nVnode was previously reported to server, but is no longer reported to\nserver.  Cannot be combined with \n.I free, provisioning.\n.IP state-unknown 3\nThe server has never been able to contact the vnode.  Either MoM is\nnot running on the vnode, the vnode hardware is down, or there is a\nnetwork problem.\n.IP unresolvable 3\nThe server cannot resolve the name of the vnode.\n.IP wait-provisioning 3\nVnode needs to be provisioned, but can't: limit reached\nfor concurrently provisioning vnodes.  See the \n.I max_concurrent_provision \nserver attribute.\n.RE\n.IP\nDefault: No default\n\n.IP topology_info\nContains information intended to be used in hooks.  
\n.br\nVisible in and usable by hooks only.\nInvisible to Manager, Operator, User.\n.br\nFormat: \n.I XML String\n.br\nPython type: \n.I str\n.br\nDefault value: \n.I Unset\n\n.IP vnode_pool 8\nCray only.  Allows just one MoM, instead of all, to report inventory\nupon startup, allowing faster startup and less network communication\nbetween server and non-reporting MoMs.  On each Cray, all MoMs must\nhave same setting for this attribute.  Can be set only at vnode\ncreation; valid only on login nodes running a MoM.  Not supported on\nnon-Cray machines.\n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I Integer\n.br\nPython type:\n.I int\n.br\nBehavior:\n.RS\n.IP \">0\" 3\nOnly one MoM per Cray reports inventory.\n.IP Unset 3\nEach MoM reports inventory separately.\n.RE\n.IP \nDefault: \n.I 0\n(Unset)\n\n\n.SH SEE ALSO\nqmgr(1B)\n\n"
  },
  {
    "path": "doc/man1/pbs_professional.7B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_professional 7B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B PBS Professional\n\\- The PBS Professional workload management system\n\n\n.SH DESCRIPTION\nPBS Professional is an HPC workload manager and job scheduler.\nPBS schedules jobs onto systems with the required resources, according\nto specified policy.  PBS runs on most major platforms.  See \n.B www.pbsworks.com \nand\n.B https://pbspro.atlassian.net/wiki/spaces/PBSPro/overview.\n\n.B Primary Commands\n.br\n.IP \"init.d/pbs\" 8\nStarts, stops or restarts PBS daemons on the local machine.\nThis command is typically placed in /etc/init.d so that \nPBS starts up automatically.  See the \n.B pbs.8B\nman page.\n.br\n.IP \"qmgr\" 8\nAdministrator's interface for configuring and managing PBS.  See the \n.B qmgr.8B \nman page.\n.IP \"qstat\" 8\nAdministrator's and job submitter's tool for checking server, queue, and job status.\nSee the \n.B qstat.1B \nman page.\n.IP \"qsub\" 8\nJob submitter's tool for submitting jobs to PBS.  See the \n.B qsub.1B\nman page.\n.LP\n\n.SH SEE ALSO\n.br\npbs_mom(8B), pbs_server(8B), pbs_sched(8B), pbs_comm(8B)\n"
  },
  {
    "path": "doc/man1/pbs_python.1B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_python 1B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_python \n\\- Python interpreter for debugging a hook script from the command line\n.SH SYNOPSIS\n.B pbs_python\n--hook  [-e <log event mask>] [-i <event input file>] \n.RS 11\n[-L <log dir>] [-l <log file>] [-o <hook execution record>] \n.br\n[-r <resourcedef file>] [-s <site data file>] [<Python script>]\n.RE\n\n.B pbs_python\n<standard Python options>\n\n.B pbs_python\n--version\n\n.SH DESCRIPTION\nThe PBS Python interpreter, \n.B pbs_python,\nis a wrapper for Python.  \n\nYou can use the \n.B pbs_python\nwrapper that is shipped with PBS to debug hooks.  Either:\n.RS 5\nUse the \n.I --hook \noption to \n.B pbs_python\nto run \n.B pbs_python\nas a wrapper to Python, employing the \n.B pbs_python \noptions.  With the \n.I --hook \noption, you cannot use the standard Python options.  The rest of this\nman page covers how to use \n.B pbs_python\nwith the \n.I --hook\noption.\n\nOr, do not use the \n.I --hook\noption, in which case \n.B pbs_python\nruns the Python interpreter, with the standard Python options, and without access to\nthe \n.B pbs_python \noptions.\n.RE\n\n.B Debugging Hooks\n.br\nYou can get each hook to write out debugging files, and then modify the files\nand use them as debugging input to\n.B pbs_python.  
\nAlternatively, you can write them yourself.  \n\nDebugging files can contain information about the event, about the site, and \nabout what the hook changed.  You can use these as inputs to a hook when debugging.\n\n.SH Options to pbs_python\n.IP \"--hook\" 6\nThis option is a switch.  When you use this option, you can use the\nPBS Python module (via \"import pbs\"), and the other options described\nhere are available.  When you use this option, you cannot use the\nstandard Python options.  This option is useful for debugging.  \n\nWhen you do not use this option, you cannot use\nthe other options listed here, but you can use the standard \nPython options.\n\n.IP \"-e <log event mask>\" 6\nSets the mask that determines which event types are logged by\n.B pbs_python. \nTo see only debug messages, set the value to 0xd80. To\nsee all messages, set the value to 0xffff.\n.br\nThe \n.B pbs_python \ninterpreter uses the same set of mask values that are\nused for the \n.I $logevent <mask> \nentry in the \n.B pbs_mom \nconfiguration file.\nSee the pbs_mom.8B man page.  Available only when \n.I --hook \noption is used.\n\n.IP \"-i <event input file>\" 6\nText file containing data to populate pbs.event() objects.  \n\nEach line specifies an attribute value or a resource value.  Syntax of\neach input line is one of the following:\n.RS 10\n<object name>.<attribute name>=<attribute value>\n.br\n<object name>.<resource list>[<resource name>]=<resource value>\n.RE\n.IP\nWhere\n.RS 10\n<object name> is a PBS object name which can refer to its sub-objects.  
Examples:  \n\"pbs.event()\", \"pbs.event().job\", \"pbs.event().vnode_list[\"<vnode name>\"]\".\n.RE\n.IP\nExample input file:\n.RS 10\n.br\npbs.event().hook_name=proto\n.br\npbs.event().hook_type=site\n.br\npbs.event().type=queuejob\n.br\npbs.event().requestor=user1\n.br\npbs.event().requestor_host=host1\n.br\npbs.event().hook_alarm=40\n.br\npbs.event().job.id=72\n.br\npbs.event().job.Job_Name=job1\n.br\npbs.event().job.Resource_List[ncpus]=5\n.br\npbs.event().job.Resource_List[mem]=6mb\n.br\npbs.event().vnode_list[\"host1\"].resources_available[\"ncpus\"] = 5\n.br\npbs.event().vnode_list[\"host1\"].resources_available[\"mem\"] = 300gb\n.RE\n\n.IP\nAvailable only when \n.I --hook \noption is used.\n\n\n.IP \"-L <log dir>\" 6\nDirectory holding the log file where pbs.logmsg() and pbs.logjobmsg()\nwrite their output.  Default is current working directory where \n.B pbs_python\nis executed.\nAvailable only when \n.I --hook \noption is used.\n\n.IP \"-l <log file>\" 6\nLog file where pbs.logmsg() and pbs.logjobmsg() write their output.\nDefault file name is current date in \n.I yyyymmdd \nformat.\nAvailable only when \n.I --hook \noption is used.\n\n.IP \"-o <hook execution record>\" 6\nThe hook execution record contains the changes made after executing the hook\nscript, such as the attributes and resources set in any pbs.event()\njobs and reservations, whether an action was accepted or rejected, and any\npbs.reject() messages.\n.br\nExample hook execution record:\n.RS 10\n.br\npbs.event().job.Job_Name=job2\n.br\npbs.event().job.Resource_List[file]=60gb\n.br\npbs.event().job.Resource_List[ncpus]=5\n.br\npbs.event().job.Resource_List[mem]=20gb\n.br\npbs.event().job.Account_Name=account2\n.br\npbs.event().reject=True\n.br\npbs.event().reject_msg=No way! 
\n.RE\n\n.IP \nWithout this option, output goes to stdout.\n\n.IP\nAvailable only when \n.I --hook \noption is used.\n\n.IP \"-r <resourcedef file>\" 6\nFile/path name containing a resource definition specifying a custom\nresource whose Python type is \n.I pbs.resource.\n.br \nFormat: \n.br\n.I <resource name> type=<typename> [flag=<value>]\n.br\nThis file has the same format as the PBS_HOME/server_priv/resourcedef\nfile.  Available only when \n.I --hook \noption is used.\n\n.IP \"-s <site data file>\" 6\nThe site data file can contain any relevant information about the\nserver, queues, vnodes, and jobs at the server.  This file \ncan be written by a hook or by the administrator.  \n.br\nWhen the hook writes it, this file contains the values that populate\nthe server, queues, vnodes, reservations, and jobs, with all\nattributes and resources for which there are values.\n.br \nThe site data file is named \n.I hook_<event type>_<hook name>_<random integer>.data.  \nAvailable only when \n.I --hook \noption is used.\n\n.IP \"--version\" 6\nThe \n.B pbs_python \ncommand prints its version information and exits.  This option\ncan only be used alone.  \n\n.SH ARGUMENTS\n.IP \"<Python script>\" 6\nThe hook script to execute.\nWe recommend importing the PBS Python module at the start of the script:\n.RS 9\nimport pbs\n.RE\n.IP\nIf you do not specify \n.I Python script, \nyou can perform interactive\ndebugging.  If you type the following:\n.RS 9\npbs_python --hook -i hook.input\n.RE\n.IP\nThe interpreter displays a prompt:\n.RS 9\n>>\n.RE\n.IP\nYou can type your Python lines at the prompt:\n.RS 9\n>> import pbs\n.br\n>> e=pbs.event().job\n.br\n>> print(e.id)\n.br\n<job ID>\n.br\n...\n.RE\n\n"
  },
  {
    "path": "doc/man1/pbs_queue_attributes.7B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_queue_attributes 7B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\npbs_queue_attributes\n\\- Attributes of PBS queues\n.SH DESCRIPTION\nQueues have the following attributes:\n\n.IP acl_group_enable 8\nControls whether group access to the queue obeys the access control list defined in\nthe \n.I acl_groups \nqueue attribute.\n.br\nApplies to routing and execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Boolean\n.br\nPython type:\n.I bool\n.br\nBehavior: \n.RS \n.IP True 3\nGroup access to the queue is limited according to the group access control list.\n.IP False 3\nAll groups are allowed access.\n.RE\n.IP\nDefault: \n.I False\n\n.IP acl_groups 8\nList of groups which are allowed or denied access to this queue. The\ngroups in the list are groups on the server host, not submitting\nhosts.  
List is evaluated left-to-right; first match in list is used.\n.br\nApplies to routing and execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I String\n.br\nSyntax: \"[+|-]<group name>[, ...]\"\n.br\nPython type:\n.I pbs.acl\n.br\nDefault: No default\n\n.IP acl_host_enable 8\nControls whether host access to the queue obeys the access control list defined in\nthe \n.I acl_hosts \nqueue attribute.\n.br\nApplies to routing and execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Boolean\n.br\nPython type:\n.I bool\n.br\nBehavior: \n.RS \n.IP True 3\nHost access to the queue is limited according to the host access control list.\n.IP False 3\nAll hosts are allowed access.\n.RE\n.IP\nDefault: \n.I False\n\n.IP acl_hosts 8\nList of hosts from which jobs may be submitted to this queue.  List is\nevaluated left-to-right; first match in list is used.\n.br\nApplies to routing and execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I String\n.br\nSyntax: \"[+|-]<hostname>[, ...]\"\n.br\nPython type:\n.I pbs.acl\n.br\nDefault: No default\n\n.IP acl_user_enable 8\nControls whether user access to the queue obeys the access control list defined in\nthe \n.I acl_users \nqueue attribute.\n.br\nApplies to routing and execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Boolean\n.br\nPython type:\n.I bool\n.br\nBehavior: \n.RS \n.IP True 3\nUser access to the queue is limited according to the user access control list.\n.IP False 3\nAll users are allowed access.\n.RE\n.IP\nDefault: \n.I False\n\n.IP acl_users 8\nList of users which are allowed or denied access to this queue.\nList is evaluated left-to-right; first match in list is used.\n.br\nApplies to routing and execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I String\n.br\nSyntax: \"[+|-]<username>[, ...]\"\n.br\nPython type:\n.I 
pbs.acl\n.br\nDefault: No default\n\n.IP alt_router 8\nNo longer used.\n\n.IP backfill_depth 8\nModifies backfilling behavior for this queue.  Sets the number of jobs \nto be backfilled around in this queue.  Overrides \n.I backfill_depth\nserver attribute.\n.br\nRecommendation: set this to less than \n.I 100.\n.br\nApplies to execution queues.\n.br\nReadable by all; settable by all.\n.br\nFormat:\n.I Integer\n.br\nValid values: Must be >=0\n.br\nBehavior: \n.RS\n.IP \">= 0\" 3\nPBS backfills around the specified number of jobs.\n.IP \"Unset\" 3\nBackfill depth is set to \n.I 1.\n.RE\n.IP\nPython type:\n.I int\n.br\nDefault: Unset (backfill depth is 1)\n\n.IP checkpoint_min 8\nMinimum number of minutes of CPU time or walltime allowed\nbetween checkpoints of a job.  \nIf a user specifies a time less than this\nvalue, this value is used instead.  The value given in \n.I checkpoint_min\nis used for both CPU minutes and walltime minutes.\n.br\nApplies to execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Integer\n.br\nPython type:\n.I pbs.duration\n.br\nDefault: No default\n\n.IP default_chunk 8\nThe list of resources which will be inserted into each chunk of a job's select \nspecification if the corresponding resource is not specified by the user.\nThis provides a means for a site to be sure a given resource is properly \naccounted for even if not specified by the user.\n.br\nApplies to execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I String\n.br\nSyntax:\n.RS 11\n.nf\n.I default_chunk.<resource name>=<value>\n.I [, default_chunk.<resource name>=<value>, ...]\n.fi\n.RE\n.IP\nPython type:\n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\ndefault_chunk[\"<resource name>\"]=<value> \n.br\nwhere \n.I resource name\nis any built-in or custom resource\n.RE\n.IP\nDefault: No default\n\n.IP enabled 8\nSpecifies whether this queue accepts new jobs.  
\n.br\nApplies to routing and execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Boolean\n.br\nPython type:\n.I bool\n.br\nBehavior:\n.RS\n.IP True 3\nThis queue is enabled.  This queue accepts new jobs; new jobs can be enqueued.\n.IP False 3\nThis queue does not accept new jobs.\n.RE\n.IP\nDefault: \n.I False\n(disabled)\n\n.IP from_route_only 8\nSpecifies whether this queue accepts jobs only from routing queues, or\nfrom both execution and routing queues.\n.br\nApplies to routing and execution queues.\n.br\nReadable by all; settable by Manager.\n.br\nFormat:\n.I Boolean\n.br\nPython type:\n.I bool\n.br\nBehavior:\n.RS\n.IP True 3\nThis queue accepts jobs only from routing queues.\n.IP False 3\nThis queue accepts jobs from both execution and routing queues,\nas well as directly from submitter.\n.RE\n.IP\nDefault: \n.I False\n\n.IP hasnodes 8\nIndicates whether vnodes are associated with this queue.\n.br\nApplies to execution queues.\n.br\nReadable by all; set by PBS.\n.br\nFormat:\n.I Boolean\n.br\nPython type:\n.I bool\n.br\nBehavior: \n.RS 11\nWhen \n.I True,\nthere are vnodes associated with this queue.\n.RE\n.IP\nDefault: \n.I False\n\n.IP kill_delay 8\nThe time delay between sending SIGTERM and SIGKILL when a qdel command\nis issued against a running job.\n.br\nApplies to execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Integer\n.br\nUnits:\n.I Seconds\n.br\nPython type:\n.I pbs.duration\n.br\nValid values: Must be >= 0\n.br\nDefault: \n.I 10 seconds\n\n.IP max_array_size 8\nThe maximum number of subjobs that are allowed in an array job.\n.br\nApplies to routing and execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Integer\n.br\nPython type:\n.I int\n.br\nDefault: No default\n\n.IP max_group_res 8\nOld limit attribute.  
Incompatible with new limit attributes.\nThe maximum amount of the specified resource that any single group may consume\nin a complex.\n.br\nApplies to execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I String\n.br\nSyntax: \n.I max_group_res.<resource name>=<value>\n.br\nPython type:\n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\nmax_group_res[\"<resource name>\"]=<value> \n.br\nwhere \n.I resource name \nis any built-in or custom resource\n.RE\n.IP\nValid values: Any PBS resource, e.g. \"ncpus\", \"mem\", \"pmem\"\n.br\nExample: \n.I set queue workq max_group_res.ncpus=6\n.br\nDefault: No default\n\n.IP max_group_res_soft 8\nOld limit attribute.  Incompatible with new limit attributes.\nThe soft limit on the amount of the specified resource that any single group may consume\nin a complex.\nIf a group is consuming more than this amount of the specified resource,\ntheir jobs are eligible to be preempted by jobs from groups who are not over\ntheir soft limit.\n.br\nApplies to execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I String\n.br\nSyntax: \n.I max_group_res_soft.<resource name>=<value>\n.br\nPython type:\n.I pbs.pbs_resource\n.br\nSyntax: \n.RS 11\nmax_group_res_soft[\"<resource name>\"]=<value> \n.br\nwhere \n.I resource name \nis any built-in or custom resource\n.RE\n.IP\nValid values: Any PBS resource, e.g. \"ncpus\", \"mem\", \"pmem\"\n.br\nExample: \n.I set queue workq max_group_res_soft.ncpus=3\n.br\nDefault: No default\n\n.IP max_group_run 8\nOld limit attribute.  Incompatible with new limit attributes.\nThe maximum number of jobs owned by a group that are\nallowed to be running from this queue at one time.\n.br\nApplies to execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Integer\n.br\nPython type:\n.I int\n.br\nDefault: No default\n\n.IP max_group_run_soft 8\nOld limit attribute.  
Incompatible with new limit attributes.\nThe soft limit on the number of jobs owned by users in a single group that are\nallowed to be running from this queue at one time.\nIf a group has more than this number of jobs\nrunning, their jobs are eligible to be preempted by jobs from groups who are not over\ntheir soft limit.\n.br\nApplies to execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Integer\n.br\nPython type:\n.I int\n.br\nDefault: No default\n\n.IP max_queuable 8\nOld limit attribute.  Incompatible with new limit attributes.\nThe maximum number of jobs allowed to reside in this queue at any given time.\n.br\nApplies to routing and execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Integer\n.br\nPython type:\n.I int\n.br\nDefault: No default (no limit)\n\n.IP max_queued 8\nLimit attribute.  The maximum number of jobs allowed to be queued\nin or running from this queue.  Can be specified for projects, users, groups, or all.\nCannot be used with old limit attributes. \n.br\nApplies to routing and execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Limit specification.  \nSee \n.B FORMATS.\n.br\nPython type:\n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\nmax_queued[\"<resource name>\"]=<value> \n.br\nwhere \n.I resource name \nis any built-in or custom resource\n.RE\n.IP\nDefault: No default\n\n.IP max_queued_res 8\nLimit attribute.  The maximum amount of the specified resource \nallowed to be allocated to jobs queued in or running from this queue.\nCan be specified for projects, users, groups, or all.\nCannot be used with old limit attributes.  \n.br\nApplies to routing and execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Limit specification.  
\nSee \n.B FORMATS.\n.br\nSyntax: \n.I max_queued_res.<resource name>=<value>\n.br\nPython type:\n.I pbs.pbs_resource\n.br\nSyntax: \n.RS 11\nmax_queued_res[\"<resource name>\"]=<value> \n.br\nwhere \n.I resource name \nis any built-in or custom resource\n.RE\n.IP\nValid values: Any PBS resource, e.g. \"ncpus\", \"mem\", \"pmem\"\n.br\nExample: \n.I set queue workq max_queued_res.ncpus=4\n.br\nDefault: No default\n\n.IP max_run 8\nLimit attribute.  The maximum number of jobs allowed to be running \nfrom this queue.  Can be specified for projects, users, groups, or all.\nCannot be used with old limit attributes. \n.br\nApplies to routing and execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Limit specification.  \nSee \n.B FORMATS.\n.br\nPython type:\n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\nmax_run[\"<resource name>\"]=<value> \n.br\nwhere \n.I resource name \nis any built-in or custom resource\n.RE\n.IP\nDefault: No default\n\n.IP max_run_res 8\nLimit attribute.  The maximum amount of the specified resource \nallowed to be allocated to jobs running from this queue.\nCan be specified for projects, users, groups, or all.\nCannot be used with old limit attributes.  \n.br\nApplies to execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Limit specification.  \nSee \n.B FORMATS.\n.br\nSyntax: \n.I max_run_res.<resource name>=<value>\n.br\nPython type:\n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\nmax_run_res[\"<resource name>\"]=<value> \n.br\nwhere \n.I resource name \nis any built-in or custom resource\n.RE\n.IP\nValid values: Any PBS resource, e.g. \"ncpus\", \"mem\", \"pmem\"\n.br\nExample: \n.I set queue workq max_run_res.ncpus=4\n.br\nDefault: No default\n\n.IP max_run_res_soft 8\nLimit attribute.  
Soft limit on the amount of the specified resource \nallowed to be allocated to jobs running from this queue.\nCan be specified for projects, users, groups, or all.\nCannot be used with old limit attributes.  \n.br\nApplies to execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Limit specification.  \nSee \n.B FORMATS.\n.br\nSyntax: \n.I max_run_res_soft.<resource name>=<value>\n.br\nPython type:\n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\nmax_run_res_soft[\"<resource name>\"]=<value> \n.br\nwhere \n.I resource name \nis any built-in or custom resource\n.RE\n.IP\nValid values: Any PBS resource, e.g. \"ncpus\", \"mem\", \"pmem\"\n.br\nExample: \n.I set queue workq max_run_res_soft.ncpus=2\n.br\nDefault: No default\n\n.IP max_run_soft 8\nLimit attribute.  Soft limit on the number of jobs allowed to be running \nfrom this queue.  Can be specified for projects, users, groups, or all.\nCannot be used with old limit attributes.  \n.br\nApplies to execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Limit specification.  \nSee \n.B FORMATS.\n.br\nPython type:\n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\nmax_run_soft[\"<resource name>\"]=<value> \n.br\nwhere \n.I resource name \nis any built-in or custom resource\n.RE\n.IP\nDefault: No default\n\n.IP max_running 8\nOld limit attribute.  Incompatible with new limit attributes.\nFor an execution queue, this is the largest number of jobs allowed to \nbe running at any given time.  For a routing queue, this is the largest\nnumber of jobs allowed to be transiting from this queue at any given \ntime.\n.br\nApplies to routing and execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Integer\n.br\nPython type:\n.I int\n.br\nDefault: No default\n\n.IP max_user_res 8 \nOld limit attribute.  
Incompatible with new limit attributes.\nThe maximum amount of the specified resource that any single user may consume.\n.br\nApplies to execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I String\n.br\nSyntax:\n.I max_user_res.<resource name>=<value>\n.br\nPython type:\n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\nmax_user_res[\"<resource name>\"]=<value> \n.br\nwhere \n.I resource name \nis any built-in or custom resource\n.RE\n.IP\nValid values: Any PBS resource, e.g. \"ncpus\", \"mem\", \"pmem\"\n.br\nExample: \n.I set queue workq max_user_res.ncpus=2\n.br\nDefault: No default\n\n.IP max_user_res_soft 8\nOld limit attribute.  Incompatible with new limit attributes.\nThe soft limit on the amount of the specified resource that any single user may consume.\nIf a user is consuming more than this amount of the specified resource,\ntheir jobs are eligible to be preempted by jobs from users who are not over\ntheir soft limit.\n.br\nApplies to execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I String\n.br\nSyntax:\n.I max_user_res_soft.<resource name>=<value>\n.br\nPython type:\n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\nmax_user_res_soft[\"<resource name>\"]=<value> \n.br\nwhere \n.I resource name \nis any built-in or custom resource\n.RE\n.IP\nValid values: Any PBS resource, e.g. \"ncpus\", \"mem\", \"pmem\"\n.br\nExample: \n.I set queue workq max_user_res_soft.ncpus=2\n.br\nDefault: No default\n\n.IP max_user_run 8\nOld limit attribute.  Incompatible with new limit attributes.\nThe maximum number of jobs owned by a single user that are allowed to be \nrunning from this queue at one time.  \n.br\nApplies to execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Integer\n.br\nPython type:\n.I int\n.br\nDefault: No default\n\n.IP max_user_run_soft 8\nOld limit attribute.  
Incompatible with new limit attributes.\nThe soft limit on the number of jobs owned by any single user that are allowed to be\nrunning from this queue at one time.  If a user has more than this number of jobs\nrunning, their jobs are eligible to be preempted by jobs from users who are not over\ntheir soft limit.\n.br\nApplies to execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Integer\n.br\nPython type:\n.I int\n.br\nDefault: No default\n\n.IP node_group_key 8\nSpecifies the resources to use for placement sets.  Overrides server's \n.I node_group_key\nattribute.  Specified resources must be of type \n.I string_array.\n.br\nApplies to routing and execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I string_array\n.br\nSyntax:\n.RS 11\n.I Comma-separated list of resource names.  \n.br\nWhen specifying multiple resources, enclose value in double quotes.\n.RE\n.IP\nPython type:\n.I pbs.node_group_key\n.br\nExample:\n.RS 11\nQmgr> set queue workq node_group_key=<resource name>\n.RE\n.IP\nDefault: No default\n\n.IP partition 8\nName of partition to which this queue is assigned.  Cannot be set for\nrouting queue.  An execution queue cannot be changed to a routing\nqueue while this attribute is set.\n.br\nApplies to execution queues.\n.br\nReadable by all; settable by Manager.\n.br\nFormat:\n.I String\n.br\nPython type: \n.I str\n.br\nDefault: No default\n\n.IP Priority 8\nThe priority of this queue compared to other queues of the same type\nin this PBS complex.  Priority can define a queue as an express queue.  See \n.I preempt_queue_prio\nin the pbs_sched(8B) man page.  
Used for execution queues only; the value\nof \n.I Priority \nhas no meaning for routing queues.\n.br\nApplies to execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Integer\n.br\nValid values: -1024 to 1023\n.br\nPython type:\n.I int\n.br\nDefault: No default\n\n.IP queued_jobs_threshold 8\nLimit attribute.  The maximum number of jobs allowed\nto be queued in this queue.  Can be specified for\nprojects, users, groups, or all.  Cannot be used with old limit\nattributes.\n.br\nApplies to routing and execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Limit specification.\nSee \n.B FORMATS.\n.br\nPython type:\n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\nqueued_jobs_threshold[\"<resource name>\"]=<value> \n.br\nwhere \n.I resource name \nis any built-in or custom resource\n.RE\n.IP\nDefault: No default\n\n.IP queued_jobs_threshold_res 8\nLimit attribute.  The maximum amount of the specified resource allowed\nto be allocated to jobs queued in this queue.  Can be specified for\nprojects, users, groups, or all.  Cannot be used with old limit\nattributes.\n.br\nApplies to routing and execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Limit specification.\nSee \n.B FORMATS.\n.br\nSyntax:\n.I queued_jobs_threshold_res.<resource name>=<value>\n.br\nPython type:\n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\nqueued_jobs_threshold_res[\"<resource name>\"]=<value> \n.br\nwhere \n.I resource name \nis any built-in or custom resource\n.RE\n.IP\nValid values: Any PBS resource, e.g. \"ncpus\", \"mem\", \"pmem\"\n.br\nExample: \n.I set queue workq queued_jobs_threshold_res.ncpus=8\n.br\nDefault: No default\n\n.IP queue_type 8\nThe type of the queue.  
This attribute must be explicitly set \nat queue creation.\n.br\nApplies to routing and execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I String\n.br\nPython type: \n.RS 11\nPBS queue type constant: \n.I pbs.QUEUETYPE_EXECUTION \nor \n.I pbs.QUEUETYPE_ROUTE\n.RE\n.IP\nValid values: \"e\", \"execution\", \"r\", \"route\"\n.br\nDefault: No default\n\n.IP require_cred 8\nSpecifies the credential type required.  All jobs submitted to the named \nqueue without the specified credential will be rejected.  \n.br\nApplies to routing and execution queues.\n.br\nReadable by all; settable by Manager.\n.br\nFormat:\n.I String\n.br\nPython type:\n.I str\n.br\nValid values:\n.I krb5\nor \n.I dce\n.br\nDefault: Unset\n\n.IP require_cred_enable 8\nSpecifies whether the credential authentication method specified in the \n.I require_cred \nqueue attribute is required for this queue. \n.br\nApplies to routing and execution queues.\n.br\nReadable by all; settable by Manager.\n.br\nFormat:\n.I Boolean\n.br\nPython type:\n.I bool\n.br\nBehavior:\n.RS 11\nWhen set to \n.I True,\nthe credential authentication method is required.\n.RE\n.IP\nDefault: \n.I False\n\n.IP resources_assigned 8\nThe total for each kind of resource allocated to running and \nexiting jobs in this queue.\n.br\nApplies to execution queues.\n.br\nReadable by all; set by PBS.\n.br\nFormat:\n.I String\n.br\nSyntax:\n.RS 11\n.nf\n.I resources_assigned.<resource name>=<value><newline> \n.I resources_assigned.<resource name>=<value><newline> ...\n.fi\n.RE\n.IP\nPython type:\n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\nresources_assigned[\"<resource name>\"]=<value> \n.br\nwhere \n.I resource name \nis any built-in or custom resource\n.RE\n.IP \nDefault value: No default\n\n.IP resources_available 8\nThe list of resources and amounts available to jobs running in this\nqueue. 
The sum of the resource of each type used by all jobs running\nfrom this queue cannot exceed the total amount listed here.  See the\n.I qmgr(1B) \nman page.\n.br\nApplies to execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I String\n.br\nSyntax:\n.RS 11\n.nf\n.I resources_available.<resource name>=<value><newline> \n.I resources_available.<resource name>=<value><newline> ...\n.fi\n.RE\n.IP\nPython type:\n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\nresources_available[\"<resource name>\"]=<value> \n.br\nwhere \n.I resource name \nis any built-in or custom resource\n.RE\n.IP \nDefault value: No default\n\n.IP resources_default 8\nThe list of default resource values which are set as limits for a job \nresiding in this queue and for which the job did not specify a limit.  \nIf not explicitly set, the default limit for a job is determined by\nthe first of the following attributes which is set: server's \n.I resources_default,\nqueue's \n.I resources_max, \nserver's \n.I resources_max.  \nIf none of these is set, the job gets unlimited resource usage.\n.br\nApplies to routing and execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I String\n.br\nSyntax:\n.RS 11\n.nf\n.I resources_default.<resource name>=<value>,\n.I resources_default.<resource name>=<value>, ...\n.fi\n.RE\n.IP\nPython type:\n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\nresources_default[\"<resource name>\"]=<value> \n.br\nwhere \n.I resource name \nis any built-in or custom resource\n.RE\n.IP \nDefault value: No default\n\n.IP resources_max 8\nThe maximum amount of each resource that can be requested by a single job\nin this queue.  
The queue value supersedes any server wide maximum limit.\n.br\nApplies to routing and execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I String\n.br\nSyntax:\n.RS 11\n.nf\n.I resources_max.<resource name>=<value>,\n.I resources_max.<resource name>=<value>, ...\n.fi\n.RE\n.IP\nPython type:\n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\nresources_max[\"<resource name>\"]=<value> \n.br\nwhere \n.I resource name \nis any built-in or custom resource\n.RE\n.IP \nDefault value: No default (infinite usage)\n\n.IP resources_min 8\nThe minimum amount of each resource that can be requested by a single job\nin this queue.\n.br\nApplies to routing and execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I String\n.br\nSyntax:\n.RS 11\n.nf\n.I resources_min.<resource name>=<value>,\n.I resources_min.<resource name>=<value>, ...\n.fi\n.RE\n.IP\nPython type:\n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\nresources_min[\"<resource name>\"]=<value> \n.br\nwhere \n.I resource name \nis any built-in or custom resource\n.RE\n.IP \nDefault value: No default (zero usage)\n\n.IP route_destinations 8\nThe list of destinations to which jobs may be routed.\n.br\nMust be set to at least one valid destination.\n.br\nApplies to routing queues.\n.br\nReadable by all; settable by Manager.\n.br\nFormat:\n.I String\n.br\nSyntax:\n.RS 11\nList of comma-separated strings:\n.br\n.I <queue name>[@<server host>[:<port>]]\n.RE\n.IP\nPython type:\n.I pbs.route_destinations\n.br\nExample:  \n.I Q1,Q2@remote,Q3@remote:15501\n.br\nDefault: No default\n\n.IP route_held_jobs 8\nSpecifies whether jobs in the \n.I held\nstate can be routed from this queue.\n.br\nApplies to routing queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Boolean\n.br\nPython type:\n.I bool\n.br\nBehavior:\n.RS 11\nWhen \n.I True, \njobs with a hold can be routed from this queue.\n.RE\n.IP\nDefault: \n.I False\n\n.IP route_lifetime 
8\nThe maximum time a job is allowed to reside in this routing queue.  If a job\ncannot be routed in this amount of time, the job is aborted.\n.br\nApplies to routing queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Integer\n.br\nUnits:\n.I Seconds\n.br\nPython type:\n.I pbs.duration\n.br\nBehavior:\n.RS \n.IP >0 3\nJobs can reside for the specified number of seconds\n.IP \"0 or unset\" 3\nJobs can reside for an unlimited time\n.RE\n.IP\nDefault: Unset\n\n.IP route_retry_time 8\nTime delay between routing retries.  Typically used when the network between\nservers is down.  \n.br\nApplies to routing queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Integer \n.br\nUnits: \n.I Seconds\n.br\nPython type:\n.I pbs.duration\n.br\nDefault: \n.I 30 seconds\n\n.IP route_waiting_jobs 8\nSpecifies whether jobs whose \n.I Execution_Time \nattribute value is in the future can be routed from this queue. \n.br\nApplies to routing queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Boolean \n.br\nPython type:\n.I bool\n.br\nBehavior:\n.RS 11\nWhen \n.I True,\njobs with a future\n.I Execution_Time \ncan be routed from this queue.\n.RE\n.IP\nDefault: \n.I False\n\n.IP started 8\nIf this is an execution queue, specifies whether jobs in this queue can be scheduled for execution; \nif this is a routing queue, specifies whether jobs can be routed.\n.br\nApplies to routing and execution queues.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Boolean\n.br\nPython type:\n.I bool\n.br\nBehavior: When\n.I True, \njobs in this queue can run or be routed.\n.br\nDefault: \n.I False\n\n.IP state_count 8\nThe total number of jobs in each state currently residing in this queue.\n.br\nApplies to routing and execution queues.\n.br\nReadable by all; set by PBS.\n.br\nFormat:\n.I String\n.br\nSyntax: \n.I transiting=<value>,exiting=<value>, ...\n.br\nPython type:\n.I pbs.state_count\n.br\nDefault: 
No default\n\n.IP total_jobs 8\nThe number of jobs currently residing in this queue.\n.br\nApplies to routing and execution queues.\n.br\nReadable by all; set by PBS.\n.br \nFormat:\n.I Integer\n.br\nPython type:\n.I int\n.br\nDefault: No default\n\n\n.SH FORMATS\n\n.IP \"Limit specification\" 8\nLimit attributes can be set, added to, or removed from.\n\nFormat for setting a limit specification:\n.RS 11\n.nf\nset server <limit attribute> = \"<limit specification>=<limit>[, <limit specification>=<limit>] ...\"\n.fi\n.RE\n.IP\nFormat for adding to a limit specification:\n.RS 11\n.nf\nset server <limit attribute> += \"<limit specification>=<limit>[, <limit specification>=<limit>] ...\"\n.fi\n.RE\n.IP\nFormat for removing from a limit specification:\n.RS 11\n.nf\nset server <limit attribute> -= \"<limit specification>=<limit>[, <limit specification>=<limit>] ...\"\n.br\nor\n.br\nset server <limit attribute> -= \"<limit specification>[, <limit specification>] ...\"\n.fi\n.RE\n.IP\nWhere \n.I limit specification \nis \n.RS 11\no:PBS_ALL         Overall limit\n.br\nu:PBS_GENERIC     Generic users\n.br\nu:<username>      A specific user\n.br\ng:PBS_GENERIC     Generic groups\n.br\ng:<group name>    A specific group\n.br\np:PBS_GENERIC     Generic projects\n.br\np:<project name>  A specific project\n.RE\n.IP\nThe \n.I limit specification \ncan contain spaces anywhere except after the colon\n(\":\").\n.br\nIf there are comma-separated \n.I limit specifications, \nthe entire string must be enclosed in double quotes.\n.br\nA username, groupname, or project name containing spaces must be\nenclosed in quotes.\n.br\nIf a username, groupname, or project name is quoted using double\nquotes, and the entire string requires quotes, the outer enclosing\nquotes must be single quotes.  
Similarly, if the inner quotes are\nsingle quotes, the outer quotes must be double quotes.\n.br\n.I PBS_ALL \nis a keyword which indicates that this limit applies to the usage total.\n.br\n.I PBS_GENERIC \nis a keyword which indicates that this limit applies to\ngeneric users, groups, or projects.\n.br\nWhen removing a limit, the \n.I limit value \ndoes not need to be specified.\n.br\n\nFor example, to set the \n.I max_queued \nlimit on QueueA to 5 for total usage, and to limit user bill to 3:\n.RS 11\ns q QueueA max_queued = \"[o:PBS_ALL=5], [u:bill =3]\"\n.RE\n.IP\n\nExamples of setting, adding, and removing: \n.br\n.RS 11\nset server max_run=\"[u:PBS_GENERIC=2], [g:group1=10], [o:PBS_ALL = 100]\"\n.br\nset server max_run+=\"[u:user1=3], [g:PBS_GENERIC=8]\"\n.br\nset server max_run-=\"[u:user2], [g:group3]\"\n.br\nset server max_run_res.ncpus=\"[u:PBS_GENERIC=2], [g:group1=8], [o:PBS_ALL = 64]\"\n.RE\n.IP\n\n.SH Incompatible Limit Attributes\nThe old and new limit attributes are incompatible.  \nIf any of one kind is set, none of the other kind can be set.\nAll of one kind must be unset in order to set any of the other kind.\n.br\n\n\n.SH SEE ALSO\nqmgr(1B)\n"
  },
  {
    "path": "doc/man1/pbs_ralter.1B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_ralter 1B \"28 February 2021\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_ralter \n\\- modify an existing reservation\n.SH SYNOPSIS\n.B pbs_ralter\n[-D <duration>] [-E <end time>] [-G <auth group list>] \n           [-I <block time>] [-l select=<select spec>]\n           [-m <mail points>] [-M <mail list>] \n           [-N <reservation name>] [-R <start time>] \n           [-U <auth user list>] <reservation ID>\n.br\n.B pbs_ralter\n-Wforce [-D <duration>] [-E <end time>] [-R <start time>]\n           <reservation ID>\n.br\n.B pbs_ralter\n--version\n.SH DESCRIPTION\nYou can use the \n.B pbs_ralter \ncommand to alter an existing reservation, whether it is an \nindividual job-specific, advance, or maintenance reservation,\nor the next or current occurrence of a standing\nreservation.  You can change the start time, end time, duration,\nevents that generate mail, mail recipient list, authorized groups,\nauthorized users, and reservation name.\n\nThe PBS Administrator can use the \n.I -Wforce\noption to this command to change the start time, end time, or duration\nof a reservation; this option overrides the scheduler's actions.\n\nAfter the change is requested, the change is either confirmed or\ndenied. 
On denial of the change, the reservation is not deleted and is\nleft as is, and the following message appears in the server's log:\n.nf\n   Unable to alter reservation <reservation ID> \n.fi\n\nWhen a reservation is\nconfirmed, the following message appears in the server's log:\n.nf\n   Reservation alter successful for <reservation ID> \n.fi\n\nTo find out whether or not the change was allowed: \n.RS 3\nUse the pbs_rstat command: check whether the reservation attribute(s) were changed\n\nUse the interactive option: check for confirmation after the blocking\ntime has run out\n\nCheck the server log for confirmation or denial messages\n.RE\n\nBefore the change is confirmed or denied, the change is unconfirmed,\nand the reservation state is \n.I AL.  \n\nOnce a reservation change is confirmed, the reservation state is \n.I CO \nor \n.I RN.  \n\nIf the reservation has not started and it cannot be confirmed on the same vnodes, PBS\nsearches for another set of vnodes.\n\nIf the reservation is altered, PBS logs a \n.I Y \naccounting record.  \n\n.B Caveats and Restrictions\n.br\nYou cannot change the start time of a reservation if jobs are running in it.\n\nIf you change the end time of a reservation so that it ends before a job running in the \nreservation finishes, the job is killed when the reservation ends.\n\n.B Required Privilege\n.br\nWithout the \n.I -Wforce\noption, this command can be used by the reservation owner or the PBS\nAdministrator.\n\nWith the \n.I -Wforce \noption, this command can be used only by the PBS Administrator.\n\n.SH Options to pbs_ralter\n\n.IP \"-D <duration>\" 10\nSpecifies reservation's new duration. This option can be used even\nwhen the reservation is running and has jobs that are submitted to\nand/or are running in the reservation.  \n\nCan be specified with start and/or end time.  PBS calculates anything\nnot specified.  
When specified without start or end time, PBS keeps\nprevious start time.\n\nIf you change the duration to less than the time the reservation has\nalready run, PBS deletes the reservation.  \n\nFormat: \n.I Duration, \nas \n.I seconds \nor \n.I hh:mm:ss\n\n.IP \"-E <end time>\" 10\nSpecifies reservation's new end time. This option can be used even\nwhen the reservation is running and has jobs that are submitted to\nand/or are running in the reservation.\n\nFormat: \n.I Datetime\n\n.IP \"-G <auth group list>\" 10\nComma-separated list of names of groups who\ncan or cannot submit jobs to this reservation.  Sets reservation's \n.I Authorized_Groups \nattribute to \n.I auth group list.\n.br\nThis list becomes the \n.I acl_groups \nlist for the \nreservation's queue. \n.br\nMore specific entries should be listed before more general, because the\nlist is read left-to-right, and the first match determines access.\n.br\nIf both the\n.I Authorized_Users\nand \n.I Authorized_Groups\nreservation attributes are set, a user must belong to both in order to\nbe able to submit jobs to this reservation.  \n.br\nGroup names are\ninterpreted in the context of the server's host, not the context of the host \nfrom which the job is submitted. \n.br\nSee the \n.I Authorized_Groups \nreservation attribute in the \npbs_resv_attributes(7B) man page.  \n.br\nSyntax: \n.I [+|-]<group name>[,[+|-]<group name> ...]\n.br\nDefault: no default; group names are unchanged\n\n.IP \"-I <block time>\" 10\n(Capital I) Specifies interactive mode. The pbs_ralter command will block, up to\n.I block time \nseconds, while waiting for the reservation's change request\nto be confirmed or denied.\n\nThe value for \n.I block time \nmust be positive. The pbs_ralter command\nreturns either the status \n.I \"CONFIRMED\" \nor the status \n.I \"DENIED\".\n\nFormat: \n.I Integer\n\nDefault: Not interactive\n\n.IP \"-l select=<select spec>\" 10\n(Lowercase L) Specifies new select specification for reservation.  
New\nspecification must be a subset of the chunks requested by the\noriginal reservation.  If reservation is started, cannot be used to\nrelease chunks where reservation jobs are running.  If reservation is\nstarted and degraded, you must release all unavailable chunks in\norder to alter the reservation select specification.\n\n.IP \"-m <mail points>\" 10\nSpecifies the set of events that cause mail to be sent to the list of\nusers specified in the \n.I -M <mail list> \noption.\n\nFormat: \n.I String\n.br\nSyntax: Either of\n.RS 13\n1) any combination of \"a\", \"b\", \"c\" or \"e\", or\n.br\n2) the single character \"n\"\n.RE\n.IP\n.nf\nSuboptions to -m Option:\n\nCharacter   Meaning\n--------------------------------------------------------------\na           Notify if reservation is terminated for any reason\nb           Notify when the reservation period begins\nc           Notify when the reservation is confirmed\ne           Notify when the reservation period ends\nn           Send no mail.  Cannot be used with any of a, b, c or e.\n.fi\n\nDefault: No default; if not specified, mail events are unchanged\n\n.IP \"-M <mail list>\" 10\nThe list of users to whom mail is sent whenever the reservation\ntransitions to one of the states specified in the\n.I -m <mail points> \noption.  \n\nFormat: \n.I <username>[@<hostname>][,<username>[@<hostname>]...]\n\nDefault: No default; if not specified, user list is unchanged\n\n.IP \"-N <reservation name>\" 10\nSpecifies a name for the reservation.  \n\nFormat: \n.RS 13\nString up to 15 characters in length.  It must consist of printable,\nnon-white space characters with the first character alphabetic.\n.RE\n.IP\nDefault: No default; if not specified, reservation name is unchanged\n\n.IP \"-R <start time>\" 10\nSpecifies reservation's new start time. This option can be used either\nwhen the reservation is not running or when no jobs have been submitted\nto the reservation.  
You cannot use this option when a reservation is\nnot empty and has started running.\n\nThe specifications for providing the time are the same as for pbs_rsub:\n.br\nIf the day, \n.I DD, \nis not specified, it defaults to today if the time\n.I hhmm \nis in the future.  Otherwise, the day is set to tomorrow.  For\nexample, if you alter a reservation with the specification -R 1110 at\n11:15 a.m., it is interpreted as being for 11:10 a.m. tomorrow.  If\nthe month portion,\n.I MM, \nis not specified, it defaults to the current month, provided that the specified day \n.I DD, \nis in the future.  Otherwise, the month is set to next month.  Similar\nrules apply to the two other optional, left-side components.\n\nFormat: \n.I Datetime\n\n.IP \"-U <auth user list>\" 10\nComma-separated list of users who are and are not allowed to \nsubmit jobs to this reservation.  Sets reservation's \n.I Authorized_Users \nattribute to \n.I auth user list.\n.br\nThis list becomes the \n.I acl_users \nattribute for the reservation's queue. \n.br\nMore specific entries should be listed before more general, because the\nlist is read left-to-right, and the first match determines access. \nThe reservation creator's username is automatically added to this list,\nwhether or not the reservation creator specifies this list.\n.br\nIf both the\n.I Authorized_Users\nand \n.I Authorized_Groups\nreservation attributes are set, a user must belong to both in order to be able to \nsubmit jobs to this reservation.\n.br\nSee the \n.I Authorized_Users\nreservation attribute in the pbs_resv_attributes(7B) man page.\n.br\nSyntax:\n.I [+|-]<username>[@<hostname>][,[+|-]<username>[@<hostname>]...]\n.br \nDefault: no default; user list is unchanged\n.br\n\n.IP \"-Wforce\" 10\nEnforces changes made to the reservation start time, end time, or\nduration, regardless of the actions of the scheduler.  Can be used\nonly by the PBS Administrator.  
Note that with this option you can\nforce PBS to oversubscribe resources, in which case you (the\nadministrator) may need to manage them yourself.  Cannot be used to\nchange the start time of a reservation in which jobs are running.\n\n.IP \"--version\" 10\nThe \n.B pbs_ralter\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH OPERANDS\nThe pbs_ralter command takes a reservation ID.\n.br\nFor an advance or job-specific reservation this has the form:\n.RS 4\n.I \"R<sequence number>[.<server name>][@<remote server>]\"\n.RE\nFor a standing reservation this has the form:\n.RS 4\n.I \"S<sequence number>[.<server name>][@<remote server>]\"\n.RE\nFor a maintenance reservation this has the form:\n.RS 4\n.I \"M<sequence number>[.<server name>][@<remote server>]\"\n.RE\n\n.I @<remote server> \nspecifies a reservation at a server other than the default server.\n\n"
  },
  {
    "path": "doc/man1/pbs_rdel.1B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_rdel 1B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_rdel \n\\- delete a PBS reservation \n.SH SYNOPSIS\n.B pbs_rdel\n<reservation ID>[,<reservation ID>...]\n.br\n.B pbs_rdel\n--version\n.SH DESCRIPTION\nThe \n.B pbs_rdel\ncommand deletes reservations in the order specified.\n\nThis command deletes the specified reservations, whether or not they\nare running, all jobs in the reservations, and the reservation queues.\n\n.B Required Privilege\n.br\nA reservation may be deleted by its owner, a PBS Operator,\nor a PBS Manager.\n\n.SH OPTIONS\n.IP \"--version\" 10\nThe \n.B pbs_rdel\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH OPERANDS\nThe pbs_rdel command accepts one or more \n.I reservation ID\noperands.  
\n.br\nFor an advance or job-specific reservation this has the form:\n.RS 4\n.I \"R<sequence number>[.<server name>][@<remote server>]\"\n.RE\nFor a standing reservation this has the form:\n.RS 4\n.I \"S<sequence number>[.<server name>][@<remote server>]\"\n.RE\nFor a maintenance reservation this has the form:\n.RS 4\n.I \"M<sequence number>[.<server name>][@<remote server>]\"\n.RE\n\n.I @<remote server> \nspecifies a reservation at a server other than the default server.\n\n.SH EXIT STATUS\n.IP \"Zero\" 10 \nUpon success\n.IP \"Greater than zero\" 10\nUpon failure to process any operand\n\n.SH SEE ALSO\npbs_rsub(1B),\npbs_rstat(1B),\npbs_resv_attributes(7B)\n"
  },
  {
    "path": "doc/man1/pbs_release_nodes.1B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_release_nodes 1B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_release_nodes \n\\- release vnodes assigned to a PBS job\n.SH SYNOPSIS\n.B pbs_release_nodes\n[-j <job ID>] \n.RS 18\n.br\n[-k (<number of hosts to keep> | \n.br\n<selection of vnodes to keep>)] \n.br\n<vnode> [<vnode> [<vnode>] ...]\n.RE\n.br\n.B pbs_release_nodes \n[-j <job ID>] -a\n.br\n.B pbs_release_nodes\n--version\n.SH DESCRIPTION\n\nYou can use the \n.B pbs_release_nodes \ncommand to release no-longer-needed\nsister hosts or vnodes assigned to a running job, before the job would\nnormally release them.  These vnodes are then available for use by\nother jobs.  \n\nYou can specify the names of sister vnodes to be released, or you can\nrelease all sister vnodes not on the primary execution host that are\nassigned to a running job via the\n.I -a \noption.\n\nPBS can keep the number of sister hosts you specify, or PBS can release\nall sister vnodes except for the ones you specify via a select\nstatement.  \n\nCan be used on jobs and subjobs, but not on job arrays or ranges of\nsubjobs.\n\n.B Caveats and Restrictions\n.br\n\nYou can release only sister hosts or vnodes that are not on the\nprimary execution host.  
You cannot release vnodes on the primary\nexecution host.\n\nThe job must be running (in the \n.I R \nstate).\n\nThe pbs_release_nodes command is not supported on vnodes tied to Cray\nX* series systems (vnodes whose \n.I vntype \nhas the \"cray_\" prefix).\n\nIf cgroups support is enabled, and pbs_release_nodes is called to\nrelease some but not all the vnodes managed by a MoM, resources on\nthose vnodes that are part of a cgroup are not released until the\nentire cgroup is released.\n\nYou cannot release a partial host.  If you try to release some but not\nall of a host, the job's \n.I exec_vnode \nattribute shows the new, smaller list of vnodes, but the pbsnodes\ncommand will reveal that the host is still allocated to the job.\n\nIf you specify release of a vnode on which a job process is running,\nthat process is terminated when the vnode is released.\n\n.B Required Privilege\n.br\nThis command can be run by the job owner, the PBS Manager, Operator,\nand Administrator, as well as root on Linux and Admin on Windows.\n\n.SH Options to pbs_release_nodes\n\n.IP \"-a\" 10\nReleases all job vnodes not on the primary execution host.  Cannot be\nused with the\n.I -k\noption, or with a list of vnode names.\n\n.IP \"-j <job ID>\" 10\nSpecifies the job ID for the job or subjob whose vnode(s) are to be released.\n\n.IP \"-k <keep number> | <keep selection>\" 10\nUse \n.I keep number \nto specify how many sister hosts to keep.\n\nUse \n.I keep selection \nto specify which sister vnodes to keep.  The \n.I keep selection \nis a select statement beginning with \"select=\" specifying which vnodes to keep.  
\n\nThe primary execution host and its vnodes are not released.\n\nFor example, to release all sister hosts except 8:\n.br\n.B \\ \\ \\ pbs_release_nodes -k 8\n.br\n\nTo release all sister vnodes except for 4 of the ones marked with\n\"bigmem\":\n\n.br\n.B \\ \\ \\ pbs_release_nodes -k select=4:bigmem=true\n\nCannot be used with \n.I -a \noption or with vnode list argument.\n\n.IP \"(no options)\" 10\nWith no options, pbs_release_nodes uses the value of the\n.I PBS_JOBID \nenvironment variable as the job ID of the job whose vnodes are to be released.\n\n.IP \"--version\" 10\nThe pbs_release_nodes command returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH Operands for pbs_release_nodes\nThe pbs_release_nodes command can take as an operand a list of vnodes.  Format:\n.br\n.I <vnode name> [<vnode name> [<vnode name>] ...]\n.br\nCannot be used with the \n.I -a \noption.\n\n.SH Usage\nThis command can be run at the command line, or called inside a job\nscript, where it can use the value of the \n.I PBS_JOBID \nenvironment variable.\n\nYou can release any vnode that appears in the job's \n.I exec_vnode \nattribute that is not on the primary execution host.  You can release\na particular set of a job's vnodes, or you can release all of a job's\nnon-primary-execution-host vnodes.\n\nTo release specific vnodes:\n.br\n.B \\ \\ \\  pbs_release_nodes [-j <job ID>] <vnode name> [<vnode name> ...]\n\nTo release all of a job's vnodes that are not on the primary execution host:\n.br\n.B \\ \\ \\  pbs_release_nodes [-j <job ID>] -a\n\nTo release all except a specified number of vnodes:\n.br\n.B \\ \\ \\ pbs_release_nodes -k <number of sister hosts to keep>\n\nTo release all vnodes except for those in a select specification:\n.br\n.B \\ \\ \\ pbs_release_nodes -k <select specification>\n"
  },
  {
    "path": "doc/man1/pbs_resources.7B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_resources 7B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_resources \n\\- computational resources for PBS jobs\n\n\n.SH DESCRIPTION\nPBS provides computational resources for jobs, limits on using \nresources, and control over how\njobs are placed on the vnodes from which resources may be allocated for\na job.\n\nPBS provides built-in resources and custom resources for some systems, \nand allows the administrator to define\ncustom resources.  The administrator can specify which resources are\navailable on a given vnode, as well as at the queue or server level\n(e.g. walltime).  Resources can be \"stretched\" across vnodes.\nSee the \n.I qmgr(8B)\nman page.\n\nResources defined at the queue or server level apply to an entire job.\nIf resources are defined at the host level, they apply only to the part of\nthe job running on that host.\n\n.B Allocating Resources to Jobs\n.br\nResources are allocated to jobs when jobs explicitly request them, \nwhen hooks assign them, and when defaults are assigned by PBS.  \nResources are explicitly requested in this order of precedence:\n.RS 3\nThrough a\n.B qalter\noperation\n.br\nVia the \n.B qsub\ncommand line\n.br\nVia PBS job script directives  \n.RE \nThe administrator writes\nany hooks that assign resources to jobs.  
Default resources can be\nspecified by the administrator (in order of precedence) for\n.B qsub\narguments, \nqueues, the server, and vnodes.  \n\n.B Limits on Resource Usage\n.br\nA resource allocated to a job, whether explicitly requested or\nassigned via hooks or defaults, places a limit on the amount of that\nresource a job can use.  This limit applies to how much the job can\nuse on each vnode and to how much the whole job can use.\n\n.B Placing Jobs on Vnodes\n.br\nJobs are placed on vnodes according to their explicit placement request, \nor according to default placement rules.\nThe explicit placement request can be specified (in order of precedence) using \n.B qalter, qsub, \nand PBS job script directives.\nDefault placement rules can be specified for queues and the server, \nand rules for default placement take effect if no other placement\nspecifications exist.  \nA job's placement request is specified in its \n.I place statement.\n\n.B Old Syntax\n.br\nA job submitted with the old node or resource specification syntax\nwill be converted to the new select and place syntax.  If the job is\nsubmitted with\n.I -lnodes= \nor \n.I -lncpus=\nit will be converted to \n.I -l select=\nand \n.I -l place=.  See \n.B BACKWARD COMPATIBILITY.  \nJobs cannot use both new and old syntax for resource requests.\n\n.B Allocating Chunks and Job-wide Resources\n.br\nJob resource requests are defined either at the host level in \n.I chunks\nspecified in a \n.I selection statement,\nor as job-wide resources.  \n.br\nJob-wide format: \n.RS 3\n.nf\n.I qsub ... -l <resource name>=<value>\n.RE\nChunk format:\n.RS 3\n.I -l select=<chunks>\n.fi\n.RE\nThe only resources that can be\nrequested in chunks are host-level resources, such as \n.I mem \nand \n.I ncpus.\nThe only resources that can be in a job-wide request are server-level\nor queue-level resources, such as \n.I walltime.  
\n\n.IP \" \" 3\n.RS\n.B Requesting Resources in Chunks\n.br\nA \n.I chunk \ndeclares the value of each resource in a set of resources\nwhich are to be allocated as a unit to a job.  All of a chunk \nmust be taken from a single vnode.  \nA \n.I chunk\nrequest is a host-level request, and it must be for a host-level resource.\nA\n.I chunk\nis the smallest \nset of resources that will be allocated to a job.  It is one or \nmore \n.I <resource name>=<value>\nstatements separated by a colon, e.g.:\n.RS 3\nncpus=2:mem=10GB:host=Host1\n.br\nncpus=1:mem=20GB:arch=linux\n.RE\n\n.I Chunks \nare described in a \n.I selection statement, \nwhich specifies how many of each kind of chunk.\nA selection statement is of the form:\n.RS 3\n.I -l select=[<N>:]<chunk>[+[<N>:]<chunk> ...]\n.RE\nIf \n.I N \nis not specified, it is taken to be 1.\n.br\nNo spaces are allowed between chunks.\n\nExample of multiple chunks in a \n.I selection statement:\n.RS 3\n-l select=2:ncpus=1:mem=10GB+3:ncpus=2:mem=8GB\n.RE\n\n.B Requesting Job-wide Resources\n.br\nA job-wide resource request is for resource(s) at the server or queue\nlevel.  This resource must be a server-level or queue-level resource.\nA job-wide resource is designed to be used by the entire job, and is\navailable to the complex, not just one execution host.  
Job-wide\nresources are requested outside of a\n.I selection statement, \nin this form:\n.RS 3\n.I -l <resource name>=<value>[,<resource name>=<value> ...]\n.RE\nwhere \n.I resource name\nis either a consumable resource or a time-based resource such\nas\n.I walltime.\n\nExample of job-wide resources: walltime\n\n.B Do not mix old style resource or node specification with the new select and place statements.\nDo not use one in a job script and the other on the command line.\nThis will result in an error.\n\nSee the qsub(1B) man page for a detailed description of how to request \nresources and place jobs on vnodes.\n\n.B Applying Resource Defaults\n.br\nWhen a default resource is defined, it is applied to a job when that\njob does not explicitly request the resource.  Jobs get default\nresources, both\n.I job-wide \nand per-\n.I chunk,\nin the following order of precedence, from highest to lowest:\n.RS 3\nDefault\n.B qsub\narguments\n.br\t\nDefault queue resources\n.br\t\nDefault server resources\n.RE\n\nFor each \n.I chunk\nin the job's selection statement, first queue chunk defaults are\napplied, then server chunk defaults are applied.  If the chunk does not\ncontain a resource defined in the defaults, the default is\nadded.  
The chunk defaults are specified with \"default_chunk.<resource name>\".\n\nFor example, if the queue in which the job is enqueued has the\nfollowing defaults defined: \n.RS 3\ndefault_chunk.ncpus=1\n.br\ndefault_chunk.mem=2gb\n.RE\na job submitted with this selection statement:\n.RS 3\nselect=2:ncpus=4+1:mem=9gb \n.RE\nwill have this specification after the \n.I default_chunk \nelements are applied:\n.RS 3\nselect=2:ncpus=4:mem=2gb+1:ncpus=1:mem=9gb \n.RE\nIn the above, mem=2gb and ncpus=1 are inherited from \n.I default_chunk.\n\nIf a default job-wide resource is defined which is not specified in a\njob's resource request, it is added to the resource request.\nQueue defaults are applied first, then server defaults are applied\nto any remaining gaps in the resource request.\n\n.B Specifying Default Resources\n.br\nThe administrator can specify default resources on the server and\nqueue.  These resources can be job-wide or apply to chunks.  Job-wide\nresources are specified via\n.I resources_default \non the server or queue, and chunk\nresources are specified via \n.I default_chunk \non the server or queue.\n\nThe administrator can specify default resources to be added to\nany qsub arguments via the server's \n.I default_qsub_arguments\nattribute.\n\n.B Specifying Default Placement\n.br\nThe administrator can specify default job placement by setting \na value for \n.I resources.default \nat the queue or server. \n\nSee the qmgr(8B) man page for how to set default resources.\n\n.B How Default Resources Work When \n.B Moving Jobs Between Queues\n.br\nIf the job is moved from the current queue to a new queue, any default\nresources in the job's resource list are removed.  This\nincludes a select specification and place directive generated by the\nrules for conversion from the old syntax.  If a job's resource is\nunset (undefined) and there exists a default value at the new queue or server,\nthat default value is applied to the job's resource list.  
If either\n.I select \nor \n.I place\nis missing from the job's new resource list, it will \nbe automatically generated,\nusing any newly-inherited default values.\n\nExample:\n.RS 3\nGiven the following set of queue and server default values:\n.IP Server 3\nresources_default.ncpus=1\n.IP \"Queue QA\" 3\nresources_default.ncpus=2\n.br\ndefault_chunk.mem=2gb\n.IP \"Queue QB\" 3\ndefault_chunk.mem=1gb\n.br\nno default for ncpus\n.LP\n\nThe following illustrate the equivalent select specification for jobs\nsubmitted into queue QA and then moved to (or submitted directly to)\nqueue QB:\n.IP \"qsub -l ncpus=1 -lmem=4gb\" 3\nIn QA:  select=1:ncpus=1:mem=4gb  -  no defaults need be applied\n.br\nIn QB:  select=1:ncpus=1:mem=4gb  -  no defaults need be applied\n\n.IP \"qsub -l ncpus=1\" 3\nIn QA: select=1:ncpus=1:mem=2gb\n.br\nIn QB: select=1:ncpus=1:mem=1gb\n\n.IP \"qsub -lmem=4gb\" 3\nIn QA: select=1:ncpus=2:mem=4gb\n.br\nIn QB: select=1:ncpus=1:mem=4gb\n\n.IP \"qsub -l nodes=4\" 3\nIn QA: select=4:ncpus=1:mem=2gb\n.br\nIn QB: select=4:mem=1gb\n.IP \"qsub -l mem=16gb -l nodes=4\" 3\nIn QA: select=4:ncpus=1:mem=4gb\n.br\nIn QB: select=4:ncpus=1:mem=4gb\n.RE\n.RE\n.LP\n\n.B Limits on Resource Usage\n.br\nEach chunk's\nper-chunk limits determine how much of any resource can be used in\nthat chunk.\nPer-chunk resource usage limits are established by per-chunk\nresources, both from explicit requests and from defaults.  \n\n.I Job-wide resource limits \nset a limit for per-job resource usage.  \n.I Job-wide resource limits \nare established both by requesting job-wide resources and \nby summing per-chunk consumable resources.  \n.I Job-wide resource limits \nfrom sums of all chunks, including defaults, override those from\njob-wide defaults and resource requests.  Limits include both\nexplicitly requested resources and default resources.\n\nIf a job-wide resource limit exceeds queue or server restrictions, it\nwill not be put in the queue or accepted by the server.  
If, while\nrunning, a job exceeds its limit for a consumable or time-based\nresource, it will be terminated.  \n\n.B Controlling Placement of Jobs\n.br\nJobs are placed on vnodes according to their \n.I place\nstatements.  The \n.I place \nstatement is specified, in order of precedence, via:\n.RS 3\nExplicit placement request in \n.B qalter\n.br\nExplicit placement request in\n.B qsub\n.br\nExplicit placement request in PBS job script directives\n.br\nDefault \n.B qsub\nplace statement\n.br\nQueue default placement rules\n.br\nServer default placement rules\n.br\nBuilt-in default conversion and placement rules\n.RE\n\nThe \n.I place\nstatement may not be used without the \n.I select\nstatement.  \n\nFor a detailed description of the \n.I place \nstatement, see the qsub(1B) man page.\n\nNote that vnodes can have sharing attributes that override\njob placement requests.  See the\n.I pbs_node_attributes(7B)\nman page.\n.LP\n.RS 3\n.B Default Placement\n.br\nIf, after all defaults have been applied to a resource request that\ncontains a selection statement, there is no place statement, then\n.I arrangement\nis set to \n.I free.\nDefault \n.I sharing\nis \n.I shared.\n\nIf the job's place statement does not contain \n.I group=resource,\nthen a grouping defined at the queue level may be used, \nor a grouping defined at the server level if there is\nnone at the queue level.\n\n.B Placement of Jobs Submitted \n.B with Old Syntax\n.br\nA job submitted with a \nnode (\n.I -lnodes=\n) or resource (\n.I -lncpus=\n) specification will be converted to select and place,\naccording to the rules described below in \n.B BACKWARD COMPATIBILITY.\n.RE\n\n.B Boolean Resources\n.br \nA Boolean resource can be either true or false.  A resource request\nfor a Boolean specifies the required value for the Boolean resource.  
For\nexample, if some vnodes have\n.I green=true \nand some have\n.I red=true,\na selection statement for two vnodes, each with one CPU, all green and\nno red, would be:\n.RS 3\n-l select=2:green=true:red=false:ncpus=1\n.RE\n\n.B Consumable Resources \n.br\nConsumable resources are those whose use by a job reduces the amount\navailable to other concurrent jobs, e.g. memory \n.I (mem), \nCPUs (ncpus) and licenses.  \nNon-consumable resources include time-based resources such as \n.I walltime \nand CPU time\n.I (cput), \nand string-value resources such as architecture\n.I (arch).\n\n.B Custom Resources\n.br\nCustom resources are site-defined and site-dependent.  The\nadministrator defines custom resources.  These are typically used for\nlicenses and scratch space.  PBS provides custom resources\nspecifically for Cray systems.  \n\nA job requesting a floating license must specify it outside of a\nselection statement, as a job-wide resource limit.  A job requesting a\nnode-locked license must specify it inside a selection statement in a\nchunk.  See your system administrator.  \n\nCustom resources can be created to be invisible or read-only for\nunprivileged users.  See the pbsnodes(8B), pbs_rstat(1B), pbs_rsub(1B),\nqalter(1B), qselect(1B), qstat(1B), and qmgr(8B) man pages.\nThese restricted resources cannot be requested by a job via the \n.B qsub\ncommand, regardless of privilege.  \n\n.B Behavior of Unset Resources\n.br\nAn unset resource is undefined.  An unset numerical resource\n(i.e. float, long, size, or time) at the host level behaves as if its\nvalue is zero, but at the server or queue level it behaves as if it\nwere infinite.  An unset string or string array resource at the\nserver, queue or vnode level cannot be matched by a job's resource\nrequest.  
An unset Boolean resource at a server, queue, or vnode\nbehaves as if that resource is set to \"false\".\n\n.SH Resources Built Into PBS\n\n.IP accelerator 8\nIndicates whether this vnode is associated with an accelerator.\nUsed for requesting accelerators.\n.br\nOn Cray, this resource exists only when there is at least one associated\naccelerator. \n.br \nBehavior:\n.RS\n.IP True 3\nOn Cray, this is set to \n.I True \nwhen there is at least one associated accelerator whose state is UP.  \n.IP False 3\nOn Cray, set to \n.I False \nwhen all associated accelerators are in state DOWN.  \n.RE\n.IP\nHost-level.  Can be requested only inside of a select statement.  \n.br\nNot consumable.\n.br\nFormat: \n.I Boolean\n.br\nPython type: \n.I bool\n.br\nDefault: \n.I False\n\n.IP accelerator_memory 8\nIndicates amount of memory for accelerator(s) associated with this\nvnode.  \n.br\nOn Cray, PBS sets this resource only on vnodes with at\nleast one accelerator with state = UP.  For Cray, PBS sets this\nresource on the 0th NUMA node (the vnode with PBScrayseg=0), and the\nresource is shared by other vnodes on the compute node.\n.br\nFor example, on vnodeA_2_0: \n.br\n.nf\n   resources_available.accelerator_memory=4196mb \nOn vnodeA_2_1: \n   resources_available.accelerator_memory=@vnodeA_2_0\n.fi\n.br\nA scheduler rounds all resources of type \n.I size\nup to the nearest kb.\n.br\nHost-level.  Can be requested only inside of a select statement.  \n.br\nConsumable.  \n.br\nFormat: \n.I Size\n.br\nPython type: \n.I pbs.size\n.br\nDefault: No default\n\n.IP accelerator_model 8\nIndicates model of accelerator(s) associated with this vnode.\n.br\nOn Cray, PBS sets this resource only on vnodes with at\nleast one accelerator with state = UP.  \n.br\nHost-level.  Can be requested only inside of a select statement.\n.br\nNon-consumable.  
\n.br\nFormat: \n.I String\n.br\nPython type: \n.I str\n.br\nDefault: No default\n\n.IP aoe 8\nList of AOEs (Application Operating Environments) \nthat can be instantiated on this vnode.  Case-sensitive.  \nAn AOE is the environment that results from provisioning a vnode.\nEach job can request at most one AOE.  Cannot be set on server's host.\n.br\nValid values: Allowable values are site-dependent.  \n.br\nHost-level.  Can be requested only inside of a select statement.  \n.br\nNon-consumable.\n.br\nType: \n.I String_array\n.br\nPython type: \n.I str\n.br\nDefault: No default\n\n.IP arch 8\nSystem architecture.  One architecture\ncan be defined for a vnode.  One architecture can be requested per\nvnode.  \n.br\nValid values: \n.RS 11\nAllowable values and effect on job placement are site-dependent.  \n.RE\n.IP\nHost-level.  Can be requested only inside of a select statement.\n.br\nNon-consumable.  \n.br\nType: \n.I String\n.br\nPython type: \n.I str\n.br\nDefault: No default\n\n.IP cput 8\nAmount of CPU time used by the job for all processes on all\nvnodes.  Establishes a job-wide resource limit.  \n.br\nJob-wide.  Can be requested only outside of a select statement.\n.br\nNon-consumable.\n.br\nType: \n.I Duration\n.br\nPython type: \n.I pbs.duration\n.br\nDefault: No default\n\n.IP energy 8\nThe energy used by a job.  Set by PBS.  \n.br\nConsumable.\n.br\nFormat: \n.I Float\n.br\nUnits: \n.I kWh\n.br\nDefault: No default\n\n.IP eoe 8\nStands for \"Energy Operational Environment\".  When set on a vnode in\n.I resources_available.eoe, \ncontains the list of available power profiles.  When set for a job\nin\n.I Resource_List.eoe,\ncan contain at most one power profile.  (A job can request only one\npower profile.)  
\n.br\nNon-consumable.\n.br\nFormat: \n.I String_array\n.br\nPython type:\n.I str\n.br\nDefault value for \n.I resources_available.eoe: \nunset\n.br\nDefault value for \n.I Resource_List.eoe:\nno default\n\n.IP exec_vnode 8\nThe vnodes that PBS estimates this job will use.  Cannot\nbe requested for a job; used for reporting only. Read-only.  \n.br\nType: \n.I String\n.br\nPython type: \n.I str\n.br\nDefault: No default\n\n.IP file 8\nSize of any single file that may be created by the job.\n.br\nThe scheduler rounds all resources of type \n.I size\nup to the nearest kb.\n.br\nJob-wide.  Can be requested only outside of a select statement.\n.br\nType: \n.I Size\n.br\nPython type: \n.I pbs.size\n.br\nDefault: No default\n\n.IP hbmem 8\nHigh-bandwidth memory.  Available only on some architectures such as \nXeon Phi KNL.\n.br\nValid values: Greater than or equal to zero.\n.br\nHost-level.  \n.br\nFormat:\n.I Size\n.br\nPython type:\n.I pbs.size\n.br\nDefault: No default\n\n.IP host 8\nName of execution host.  Cannot be changed.  Site-dependent.\n.br\nCan be requested only inside of a select statement.  \n.br\nBehavior:\n.RS 11\nAutomatically set to the short form of the hostname in the \n.I Mom \nattribute. \n.br\nOn Cray compute node, set to \n.I <mpp_host>_<nid>.\n.RE\n.IP\nType: \n.I String\n.br\nPython type: \n.I str\n\n.IP max_walltime 8\nMaximum walltime allowed for a shrink-to-fit job.  Job's actual\nwalltime is between \n.I max_walltime \nand \n.I min_walltime.  \nPBS sets \n.I walltime\nfor a shrink-to-fit job.  If \n.I max_walltime \nis specified, \n.I min_walltime\nmust also be specified.  \nCannot be used for \n.I resources_min \nor \n.I resources_max.\nCannot be set on job arrays or reservations.  \n.br\nValid values: Must be greater than or equal to\n.I min_walltime.  \n.br\nCan be requested only outside of a select statement.  
\n.br\nNon-consumable.\n.br\nFormat: \n.I Duration\n.br\nPython type: \n.I pbs.duration\n.br\nDefault: \n.I 5 years\n\n.IP mem 8\nAmount of physical memory i.e. workingset allocated to \nthe job, either job-wide or host-level.  \n.br\nThe scheduler rounds all resources of type \n.I size\nup to the nearest kb.\n.br\nCan be requested only inside of a select statement. \n.br\nConsumable.\n.br\nFormat: \n.I Size\n.br\nPython type: \n.I pbs.size\n.br\nDefault: No default\n\n.IP min_walltime 8\nMinimum walltime allowed for a shrink-to-fit job.  When \n.I min_walltime\nis specified, job is a shrink-to-fit job.  If this attribute is set,\nPBS sets the job \n.I walltime.  \nJob's actual \n.I walltime \nis between\n.I max_walltime \nand \n.I min_walltime.  \nCannot be used for \n.I resources_min \nor \n.I resources_max.\nCannot be set on job arrays or reservations.  \n.br\nValid values: Must be less than or equal to\n.I max_walltime.  \n.br\nCan be requested only outside of a select statement.  \n.br\nNon-consumable.\n.br\nType: \n.I Duration\n.br\nPython type: \n.I pbs.duration\n.br\nDefault: No default\n\n.IP mpiprocs 8\nNumber of MPI processes for this chunk.  Cannot use sum from chunks\nas job-wide limit.\n.br\nThe number of lines in PBS_NODEFILE is the sum of the values\nof \n.I mpiprocs\nfor all chunks requested by the job.  For each chunk with \n.I mpiprocs=P, \nthe host name for that chunk is written to the PBS_NODEFILE\n.I P\ntimes.  \n.br\nHost-level.  Can be requested only inside of a select statement.\n.br\nFormat: \n.I Integer\n.br\nPython type: \n.I int\n.br\nDefault: \n.RS 11\n.nf\nIf ncpus > 0\n\\ \\ \\ 1\nOtherwise\n\\ \\ \\ 0\n.fi\n.RE\n.IP\n\n.IP naccelerators 8\nNumber of accelerators on the host.  PBS sets this resource to the\nnumber of accelerators with state = \n.I UP.\n.br\nOn Cray, PBS sets this resource only on vnodes whose hosts have at \nleast one accelerator with state = \n.I UP.  
\nFor Cray, PBS sets this resource on the 0th NUMA\nnode (the vnode with PBScrayseg=0), and the resource is shared by\nother vnodes on the compute node.\n.nf\nFor example, on vnodeA_2_0:\n   resources_available.naccelerators=1\nOn vnodeA_2_1:\n   resources_available.naccelerators=@vnodeA_2_0\n.fi\nHost-level.  Can be requested only inside of a select statement.\n.br\nConsumable.\n.br\nFormat: \n.I Integer\n.br\nPython type: \n.I int\n.br\nDefault: No default\n\n.IP nchunk 8\nNumber of chunks requested between plus symbols in a\nselect statement.  For example, if the select statement is \n.br\n\\ \\ \\ -lselect=4:ncpus=2+12:ncpus=8 \n.br\nthe value of nchunk for the first part is 4, and\nfor the second part it is 12.  The \n.I nchunk \nresource cannot be named in\na select statement; it can only be specified as a number preceding the\ncolon, as in the above example.  When the number is omitted, \n.I nchunk \nis \n.I 1.\n.br\nThis resource can be used to specify the default\nnumber of chunks at the server or queue.  \nExample:  \n.br\n\\ \\ \\ set queue myqueue default_chunk.nchunk=2\n.br\nThis resource cannot be used in server and queue \n.I resources_min \nand \n.I resources_max.\n.br\nNon-consumable.\n.br\nFormat: \n.I Integer\n.br\nPython type: \n.I int\n.br\nDefault: \n.I 1\n\n.IP ncpus 8\nNumber of processors.\n.br\nCan be requested only inside of a select statement.\n.br\nConsumable.  \n.br\nFormat: \n.I Integer\n.br\nPython type: \n.I int\n.br\nDefault: No default\n\n.IP nice 8\nNice value with which the job is to be run.  Host-dependent.\n.br\nCan be requested only outside of a select statement.\n.br\nFormat: \n.I Integer\n.br\nPython type: \n.I int\n.br\nDefault: No default\n\n.IP nodect 8\n.B Deprecated.    \nNumber of chunks in resource request from selection\ndirective, or number of nodes requested from node specification.\nOtherwise defaults to value of 1.  Can be requested only outside \nof a select statement.  
Read-only.\n.br\nFormat:\n.I Integer\n.br\nPython type:\n.I int\n.br\nDefault: \n.I 1\n\n.IP nodes\n.B Deprecated.\n.br\nNumber of hosts requested.  \n.br\nFormat:\n.I Integer  \n.br\nSee \n.B BACKWARD COMPATIBILITY.\n\n.IP ompthreads 8\nNumber of OpenMP threads for this chunk.  \n.br\nCannot use sum from chunks as job-wide limit.  \n.br\nFor the MPI process with rank 0, the environment variables NCPUS and  \nOMP_NUM_THREADS are set to the value of \n.I ompthreads.\nFor other MPI processes, behavior is dependent on MPI implementation.\n.br\nCan be requested only inside of a select statement.\n.br\nNon-consumable.  \n.br\nFormat:\n.I Integer\n.br\nPython type:\n.I int\n.br\nDefault: Value of\n.I ncpus \n\n.IP PBScrayhost 8\nUsed to differentiate a Cray system, containing\nALPS, login nodes running PBS MoMs, and compute nodes, from a separate\nCray system with a separate ALPS.  \n.br\nNon-consumable.  \n.br\nFormat: \n.I String\n.br\nPython type:\n.I str\n.br\nDefault: Value of \n.I mpp_host \nfor this system\n\n.IP \"PBScraylabel_<label name>\" 8\nTracks labels applied to compute nodes.  \n.br\nFor each label on a compute node, PBS creates a custom resource \nwhose name is a concatenation of\n.I PBScraylabel_ \nand the name of the label.  \nName format: PBScraylabel_<label name>\n.br\nFor example, if the label name is \n.I Blue,\nthe name of this resource is \n.I PBScraylabel_Blue.\n.br\nBehavior: \n.RS 11\nPBS sets the value of the resource to \n.I True \non all vnodes representing the compute node.\n.RE\n.IP\nFormat: \n.I Boolean\n.br\nPython type:\n.I bool\n.br\nDefault: No default\n\n.IP PBScraynid 8\nTracks the node ID of the associated compute node.  All vnodes\nrepresenting a particular compute node share a value\nfor \n.I PBScraynid.  \n.br \nNon-consumable.  
\n.br \nFormat: \n.I String \n.br \nPython type:\n.I str\n.br\nDefault: Value of \n.I node_id \nfor this compute node\n\n.IP PBScrayorder 8\nTracks the order in which compute nodes are listed in the Cray\ninventory.  All vnodes associated with a particular compute node share\na value for \n.I PBScrayorder.  \n.br\nBehavior:\n.RS 11\nVnodes for the first compute node listed are assigned a value of \n.I 1 \nfor \n.I PBScrayorder.  \n.br\nThe vnodes for each subsequent compute node listed are assigned a value\none greater than the previous value.  \n.RE\n.IP \nDo not use this resource in a resource request.\n.br\nNon-consumable.  \n.br\nFormat: \n.I Integer \n.br\nPython type: \n.I int\n.br\nDefault: No default\n\n.IP PBScrayseg 8\nNot used. \n.br\nFormat: \n.I String\n.br\nDefault: No default\n\n.IP pcput 8\nAmount of CPU time allocated to any single process in the job.\nEstablishes a per-process resource limit.  \n.br\nCan be requested only outside of a select statement.\n.br\nNon-consumable.\n.br\nFormat:\n.I Duration\n.br\nPython type:\n.I pbs.duration\n.br\nDefault: No default\n\n.IP pmem 8\nAmount of physical memory (workingset) for use by any single\nprocess of the job.  Establishes a\nper-process resource limit.  \n.br\nThe scheduler rounds all resources of type\n.I size\nup to the nearest kb.\n.br\nCan be requested only outside of a select statement.\n.br\nNon-consumable.\n.br\nFormat:\n.I Size\n.br\nPython type:\n.I pbs.size\n.br\nDefault: No default\n\n.IP preempt_targets\nList of resources and/or queues.  Jobs requesting those resources or\nin those queues are preemption targets.  \n.br\nJob-wide.  Can be requested only outside of a select statement.  \n.br\nNon-consumable.  
\n.br\nFormat:\n.I String_array\n.br\nSyntax:\n.RS 11\npreempt_targets=\"Queue=<queue name>[,Queue=<queue name>],\nResource_List.<resource>=<value>[,Resource_List.<resource>=<value>]\" \n.RE\n.IP\nor \n.RS 11\npreempt_targets=None\n.RE\n.IP \nKeywords \"queue\" and \"none\" are case-insensitive.  You can\nlist multiple comma-separated targets.  \n.br\nPython type: \n.I str\n.br\nDefault: No default\n\n.IP pvmem 8\nAmount of virtual memory for use by any single process in the\njob.  Establishes a per-process resource limit.  \n.br\nThe scheduler rounds all resources of type\n.I size\nup to the nearest kb.\n.br\nCan be requested only outside of a select statement.\n.br\nNon-consumable.  \n.br\nFormat:\n.I Size\n.br\nPython type:\n.I pbs.size\n.br\nDefault: No default\n\n.IP site 8\nArbitrary string resource.  \n.br\nCan be requested only outside of a select statement.  \n.br\nNon-consumable.  \n.br\nFormat:\n.I String\n.br  \nPython type:\n.I str\n.br\nDefault: No default\n\n.IP software 8\nSite-specific software specification.  \n.br\nValues: Allowable values and effect on job placement are site-dependent.  \n.br\nCan be requested only outside of a select statement.\n.br\nFormat:\n.I String\n.br\nPython type:\n.I pbs.software\n.br\nDefault: No default\n\n.IP soft_walltime 8\nSoft limit on walltime.  Similar to \n.I walltime, \nbut cannot be requested by unprivileged users, and job is not killed\nif it exceeds its \n.I soft_walltime.  \nA job's \n.I soft_walltime \ncannot exceed its \n.I walltime.  \nCan be set by Manager only.\n.br\nFormat:\n.I Duration\n.br\nPython type:\n.I pbs.duration\n.br\nDefault: No default\n\n.IP start_time 8\nThe estimated start time for this job.  Cannot be requested\nfor a job; used for reporting only.  Appears only in job's \n.I estimated\nattribute.  
Read-only.\n.br\nHost-level.\n.br\nConsumable.\n.br\nFormat:\n.I Integer\n.br\nPython type:\n.I int\n.br\nDefault: No default\n\n.IP vmem 8\nAmount of virtual memory for use by all concurrent processes in\nthe job.  Establishes a per-chunk resource limit.  \n.br\nThe scheduler rounds all resources of type\n.I size\nup to the nearest kb.\n.br\nCan be requested only inside of a select statement.\n.br\nConsumable.\n.br\nFormat:\n.I Size\n.br\nPython type:\n.I pbs.size\n.br\nDefault: No default\n\n.IP vnode 8\nName of virtual node (vnode) on which to execute.  Site-dependent.  \nSee the \n.I pbs_node_attributes(7B) \nman page.\n.br\nCan be requested only inside of a select statement.\n.br\nFormat:\n.I String\n.br\nPython type:\n.I str\n.br\nDefault: No default\n\n.IP vntype 8\nThe type of the vnode.  \nAutomatically set by PBS to one of two specific values for Cray vnodes.  \nHas no meaning for non-Cray vnodes.  \n.br\nAutomatically assigned values for Cray vnodes:\n.RS 11\n.IP cray_compute 3\nThis vnode represents part of a compute node.\n.IP cray_login 3\nThis vnode represents a login node.\n.RE\n.IP\nCan be requested only inside of a select statement.\n.br\nNon-consumable. \n.br\nFormat: \n.I String_array\n.br\nPython type:\n.I str\n.br\nDefault:  No default\n\n.IP walltime 8\nAmount of wall-clock time.  Establishes a job-wide resource limit. \n.br\nActual elapsed time may differ from \n.I walltime \nduring Daylight Saving Time transitions.\n.br \nCan be requested only outside of a select statement.\n.br\nNon-consumable.  
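\n.br\nExample of requesting a one-hour job-wide limit (the duration is illustrative):\n.br\n\\ \\ \\ qsub -l walltime=01:00:00 my_script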
\n.br\nFormat:\n.I Duration\n.br\nPython type:\n.I pbs.duration\n.br\nDefault: \n.I 5 years\n\n.SH BACKWARD COMPATIBILITY\n.B Conversion to Select and Place\n.br\nFor backward compatibility, a legal node specification or resource\nspecification is converted into selection and placement directives.\n\n.B Node Specification Conversion\n.br\nNode specification format:\n.IP \" \" 3\n-lnodes=[<N>:<spec list> | <spec list>]\n.br\n        [[+<N>:<spec list> | +<spec list>] ...]\n.br\n        [#<suffix> ...][-lncpus=Z]\n.LP\nwhere:\n.IP \" \" 3\n.I spec list  \nhas syntax:  \n.I <spec>[:<spec> ...]\n.br\n.I spec \nis any of:  \n.I <hostname> | <property> | ncpus=X | cpp=X | ppn=P \n.br\n.I suffix \nis any of: \n.I <property> | excl | shared\n.br\n.I N\nand \n.I P \nare positive integers\n.br\n.I X \nand \n.I Z \nare non-negative integers\n.LP\nThe node specification is converted into selection and placement directives as follows:\n.IP \" \" 3\nEach\n.I spec list \nis converted into one chunk, \nso that \n.I <N>:<spec list> \nis converted into \n.I N \nchunks.\n\nIf \n.I spec \nis \n.I hostname\n:\n.br\nThe chunk will include \n.I host=<hostname>\n\nIf \n.I spec \nmatches any vnode's \n.I resources_available.host \nvalue:\n.br\nThe chunk will include \n.I host=<hostname>\n\nIf \n.I spec \nis \n.I <property>\n:\n.br\nThe chunk will include \n.I <property>=true\n.br\nwhere\n.I property \nmust be a site-defined host-level Boolean resource.\n\nIf \n.I spec \nis \n.I ncpus=X \nor \n.I cpp=X\n:\n.br \nThe chunk will include \n.I ncpus=X\n\nIf no \n.I spec \nis \n.I ncpus=X \nand no \n.I spec \nis \n.I cpp=X\n:\n.br \nThe chunk will include \n.I ncpus=1\n\nIf \n.I spec \nis \n.I ppn=P\n:\n.br\nThe chunk will include \n.I mpiprocs=P\n.br\nExample: \n    -lnodes=4:ppn=2 \n.br\nis converted into \n.br\n    -lselect=4:ncpus=2:mpiprocs=2\n\nIf \n.I -lncpus=Z \nis specified and no \n.I spec \ncontains \n.I ncpus=X \nand no \n.I spec \nis \n.I cpp=X\n:\n.br\nEvery chunk will include \n.I 
ncpus=W,\n.br\nwhere \n.I W \nis \n.I Z \ndivided by the total number of chunks.\n.br\n(Note:\n.I W \nmust be an integer; \n.I Z \nmust be evenly divisible by the number of chunks.)\n\nIf \n.I property \nis a \n.I suffix \n:\n.br\nAll chunks will include \n.I <property>=true\n\nIf \n.I excl \nis a \n.I suffix \n:\n.br\nThe placement directive will be \n.I -lplace=scatter:excl\n\nIf \n.I shared \nis a \n.I suffix\n:\n.br\nThe placement directive will be \n.I -lplace=scatter:shared\n\nIf neither \n.I excl \nnor shared is a \n.I suffix\n:\n.br \nThe placement directive will be \n.I -lplace=scatter\n.LP\nExample: \n.IP \" \" 3\n-l nodes=3:green:ncpus=2:ppn=2+2:red\n.LP\nis converted to:\n.IP \" \" 3\n-l select=3:green=true:ncpus=4:mpiprocs=2+2:red=true:ncpus=1 \n.br\n-l place=scatter\n.LP\nNode specification syntax for requesting properties is \n.B deprecated.\nThe new Boolean resource syntax \"<property>=<value>\" is accepted only in a\nselection directive.  It is erroneous to mix old and new syntax.\n\n.B Resource Specification Conversion\n.br\nThe resource specification is converted to select and place statements \nafter any defaults have been applied.\n\nResource specification format:\n.IP \" \" 3\n-l<resource>=<value>[:<resource>=<value> ...]\n.LP\nThe resource specification is converted to:\n.IP \" \" 3\nselect=1[:<resource>=<value> ...]\n.br\nplace=pack\n.br\n.LP\nwith one instance of \n.I <resource>=<value> \nfor each of the following \nhost-level resources in the resource request:\n.IP \" \" 3\nBuilt-in resources: \n.I ncpus \n| \n.I mem \n| \n.I vmem \n| \n.I arch \n| \n.I host\n\nSite-defined host-level resources with flags including \"h\"\n\n.SH SEE ALSO\npbs_node_attributes(7B),\npbs_rsub(1B),\nqalter(1B), \nqmgr(8B), \nqstat(1B),\nqsub(1B)\n\n"
  },
  {
    "path": "doc/man1/pbs_resv_attributes.7B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_resv_attributes 7B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_resv_attributes \n\\- attributes of PBS reservations\n\n.SH DESCRIPTION\nPBS reservations have the following attributes:\n\n.IP Account_Name 8\nNo longer used.\n\n.IP Authorized_Groups 8\nList of groups who can or cannot submit jobs to this reservation.\nGroup names are interpreted relative to the server, not the \nsubmission host.  List is evaluated left-to-right; first match in \nlist is used.  This list is used to set the reservation queue's \n.I acl_groups \nattribute.  See the \n.I G \noption to the \n.I pbs_rsub \ncommand.\n.br\nReadable by all; settable by all.\n.br\nFormat: \n.I String\n.br\nSyntax:\n.RS 11\n.I [+|-]<group name> [,[+|-]<group name> ...] \n.RE\n.IP\nPython type:\n.I pbs.acl\n.br\nDefault: Reservation owner's login group\n\n.IP Authorized_Hosts 8\nThe list of hosts from which jobs can and cannot be submitted to this reservation. \nList is evaluated left-to-right; first match in list is used.  \nThis list is used to set the reservation queue's \n.I acl_hosts\nattribute.  See the \n.I H\noption to the \n.I pbs_rsub \ncommand.  
\n.br\nReadable by all; settable by all.\n.br\nFormat: \n.I String\n.br\nSyntax:\n.RS 11\n.I [+|-]<hostname> [,[+|-]<hostname> ...]\n.br\nHostnames may be wildcarded using an asterisk, according to the following rules:\n.RS 3\n- A hostname can contain at most one asterisk\n.br\n- The asterisk must be the leftmost label\n.RE\nExamples:\n.RS 3\n*.test.example.com\n.br\n*.example.com\n.br\n*.com\n.RE\n.RE\n.RE\n.IP\nPython type:\n.I pbs.acl\n.br\nDefault: No default (Jobs can be submitted from all hosts)\n\n.IP Authorized_Users 8\nThe list of users who can or cannot submit jobs to this reservation.\nList is evaluated left-to-right; first match in list is used.  \nThis list is used to set the reservation queue's \n.I acl_users\nattribute.  See the \n.I U \noption to the \n.I pbs_rsub \ncommand.\n.br\nReadable by all; settable by all.\n.br\nFormat: \n.I String\n.br\nSyntax:\n.RS 11\n.I [+|-]<username>[@<hostname>.<domain>] [, [+|-]<username>[@<hostname>.<domain>] ...]\n.br\nwhere '-' means \"deny\" and '+' means \"allow\". \n.br\nThe hostname portion of a username may be wildcarded using an asterisk, \naccording to the following rules:\n.RS 3\n- A hostname can contain at most one asterisk\n.br\n- The asterisk must be the leftmost label in the hostname\n.RE\nExamples:\n.RS 3\n*.test.example.com\n.br\n*.example.com\n.br\n*.com\n.RE\n.RE\n.RE\n.IP\nPython type:\n.I pbs.acl\n.br\nDefault: Reservation owner only\n\n.IP ctime 8\nThe time that the reservation was created.\n.br\nReadable by all; set by PBS.\n.br\nFormat: \n.I Timestamp\n.RS 11\nPrinted by qstat in human-readable\n.I Date\nformat.  \n.br\nOutput in hooks as seconds since epoch.\n.RE\n.IP \nPython type:\n.I int\n.br\nDefault: No default\n\n.IP group_list 8\nNo longer used.\n\n.IP hashname 8\nNo longer used.\n\n.IP \"interactive\" 8\nNumber of seconds that the \n.I pbs_rsub \ncommand will block while waiting for confirmation or denial of the \nreservation.  
\nSee the \n.I -I block_time\noption to the \n.I pbs_rsub \ncommand.\n.br\nReadable by all; settable by all.\n.br\nFormat:\n.I Integer\n.br\nBehavior:\n.RS 11\n.IP \"Less than zero\" 3\nThe reservation is automatically deleted if it cannot be confirmed in the time specified.\n.IP \"Zero or greater than zero\" 3\nThe reservation is not automatically deleted after this time.  \n.RE\n.IP\nPython type:\n.I int\n.br\nDefault: \n.I Zero\n\n.IP \"Mail_Points\" 8\nSets the list of events for which mail is sent by the server.  Mail \nis sent to the list of users specified in the \n.I Mail_Users\nattribute.  See the \n.I m mail_points \noption to the \n.I pbs_rsub \ncommand.\n.br\nReadable by all; settable by all.\n.br\nFormat:\n.I String \n.br\nSyntax:\n.RS 11\nOne or more letters \"a\", \"b\", \"c\", \"e\", or the string \"n\".  \n.br\nCannot use \"n\" with any other letter.\n.RE\n.IP\nBehavior:\n.RS 11\n.IP a 3\nNotify when reservation is terminated\n.IP b 3\nNotify when reservation period begins\n.IP c 3\nNotify when reservation is confirmed\n.IP e 3\nNotify when reservation period ends\n.IP n 3\nDo not send mail.  Cannot be used with other letters.\n.RE\n.IP\nPython type:\n.I pbs.group_list\n.br\nDefault: \"ac\"\n\n.IP \"Mail_Users\" 8\nThe set of users to whom mail is sent for the reservation events\nspecified in the \n.I Mail_Points\nattribute.  See the \n.I M mail_list \noption to the \n.I pbs_rsub \ncommand.\n.br\nReadable by all; settable by all.\n.br\nFormat:\n.I String \n.br\nSyntax:\n.RS 11\n.I <username>@<hostname>[,<username>@<hostname> ...]\n.RE\n.IP\nPython type:\n.I pbs.user_list\n.br\nDefault: Reservation owner only\n\n.IP mtime 8\nThe time that the reservation was last modified.\n.br\nReadable by all; set by PBS.\n.br\nFormat: \n.I Timestamp\n.RS 11\nPrinted by qstat in human-readable\n.I Date\nformat.  
\n.br\nOutput in hooks as seconds since epoch.\n.RE\n.IP\nPython type:\n.I int\n.br\nDefault: No default\n\n.IP Priority 8\nNo longer used.\n\n.IP queue 8\nName of the reservation queue.  Jobs that \nare to use resources belonging to this reservation are submitted to this queue. \n.br\nReadable by all; set by PBS.\n.br\nFormat:\n.I String\n.RS 11\nFormat for an advance or job-specific reservation:\n.I R<unique integer>\n.br\nFormat for a standing reservation:\n.I S<unique integer>\n.RE\n.IP\nPython type:\n.I pbs.queue\n.br\nDefault: No default\n\n.IP reserve_count 8\nThe total number of occurrences in a standing reservation.\n.br \nReadable by all; settable by all.\n.br\nFormat:\n.I Integer\n.br\nPython type:\n.I int\n.br\nDefault: No default\n\n.IP reserve_duration 8\nReservation duration in seconds.  For a standing reservation, this is the\nduration for one occurrence.\n.br\nReadable by all; settable by all.\n.br\nFormat:\n.I Integer\n.br\nPython type:\n.I pbs.duration\n.br\nDefault: No default\n\n.IP reserve_end 8\nThe date and time when an advance reservation or soonest occurrence of a\nstanding reservation ends.  \n.br\nReadable by all; settable by all.\n.br\nFormat: \n.I Timestamp\n.RS 11\nPrinted by qstat in human-readable\n.I Date\nformat.  \n.br\nOutput in hooks as seconds since epoch.\n.RE\n.IP\nPython type:\n.I int\n.br\nDefault: No default\n\n.IP reserve_ID 8\nThe reservation identifier. 
\n.br\nFormat:\n.I String \n.RS 11\nFormat for an advance or job-specific reservation: \n.I R<unique integer>.<server name>\n.br\nFormat for a standing reservation: \n.I S<unique integer>.<server name>\n.RE\n.IP\nPython type:\n.I str\n.br\nDefault: No default\n\n.IP reserve_index 8\nThe index of the soonest occurrence of a standing reservation.\n.br\nReadable by all; set by PBS.\n.br\nFormat:\n.I Integer\n.br\nPython type:\n.I int\n.br\nDefault: No default\n\n.IP \"reserve_job\" 8\nIf this reservation is a job-specific start or now reservation, \nshows the ID of the job from which the reservation was created.\n.br\nReadable by all; set by PBS.\n.br\nFormat:\n.I String\n.br\nPython type:\n.I str\n.br\nDefault: no default\n\n\n.IP \"Reserve_Name\" 8\nThe name assigned to the reservation during creation, if specified.  See the \n.I N \noption to the \n.I pbs_rsub \ncommand.\n.br\nReadable by all; settable by all.\n.br\nFormat:\n.I String \n.br\nSyntax:\nUp to 15 characters.  First character is alphabetic.\n.br\nPython type:\n.I str\n.br\nDefault: No default\n\n.IP \"Reserve_Owner\" 8\nThe login name on the submitting host of the user who created the\nreservation.\n.br\nReadable by all; set by PBS.\n.br\nFormat:\n.I String\n.br\nSyntax:\n.I <username>@<hostname>\n.br\nPython type:\n.I str\n.br\nDefault: Login name of creator\n\n.IP reserve_retry 8\nIf this reservation becomes degraded, this is set to the \nnext time that PBS will attempt to reconfirm this reservation.\n.br\nReadable by all; set by PBS.\n.br\nFormat: \n.I Timestamp\n.RS 11\nPrinted by qstat in human-readable\n.I Date\nformat.  
\n.br\nOutput in hooks as seconds since epoch.\n.RE\n.IP\nPython type:\n.I int\n.br\nDefault: No default\n\n.IP reserve_rrule 8\nThe rule that describes the recurrence pattern of a standing reservation.\nSee the \n.I r\noption to the \n.I pbs_rsub\ncommand.\n.br\nReadable by all; settable by all.\n.br\nFormat: \n.I String\n.br\nSyntax: either of two forms:\n.RS 11\n\"FREQ=\n.I <freq_spec>;\nCOUNT=\n.I <count_spec>;\n.I <interval_spec>\"\n.br\nor\n.br\n\"FREQ=\n.I <freq_spec>;\nUNTIL=\n.I <until_spec>; <interval_spec>\"\n.br\nwhere\n.IP freq_spec 15\nFrequency with which the standing reservation repeats.  Valid values are:\n.br\nWEEKLY|DAILY|HOURLY\n\n.IP count_spec 15\nThe exact number of occurrences.  Number up to 4 digits in length.  \n.br\nFormat:\n.I Integer\n\n.IP interval_spec 15\nSpecifies interval.  \n.br\nFormat is one or both of:\n.br\n.I BYDAY = MO|TU|WE|TH|FR|SA|SU \n.br\nor\n.br\n.I BYHOUR = 0|1|2|...|23\n.br\n\n.IP until_spec 15\nOccurrences will start up to but not after date and time \nspecified.\n.br\n.br\nFormat:\n.I YYYYMMDD[THHMMSS] \n.br\nNote that the year-month-day section is separated from \nthe hour-minute-second section by a capital T.\n.RE\n.IP\nPython type:\n.I str\n.br\nDefault: No default\n\n.IP reserve_start 8\nThe date and time when the reservation period for the reservation \nor soonest occurrence begins.  \n.br\nReadable by all; settable by all.\n.br\nFormat: \n.I Timestamp\n.RS 11\nPrinted by qstat in human-readable\n.I Date\nformat.  \n.br\nOutput in hooks as seconds since epoch.\n.RE\n.IP\nPython type:\n.I int\n\n.IP reserve_state 8\nThe state of the reservation.  
\n.br\nReadable by all; set by PBS.\n.br\nFormat: \n.I String\n.br\nPython type:\nEach value has its own reservation state constant.\n\nThe following table shows each abbreviation, state, and Python constant:\n.RS\n\n.IP \"NO   RESV_NONE            pbs.RESV_STATE_NONE\"\nNo reservation yet.\n\n.IP \"UN   RESV_UNCONFIRMED     pbs.RESV_STATE_UNCONFIRMED \"\nReservation request is awaiting confirmation.\n\n.IP \"CO   RESV_CONFIRMED       pbs.RESV_STATE_CONFIRMED \"\nReservation has been confirmed.  For a standing reservation, this means\nthat all occurrences of the reservation have been confirmed.\n\n.IP \"WT   RESV_WAIT            pbs.RESV_STATE_WAIT \"\nUnused.  \n\n.IP \"TR   RESV_TIME_TO_RUN     pbs.RESV_STATE_TIME_TO_RUN \"\nStart of the reservation period.\n\n.IP \"RN   RESV_RUNNING         pbs.RESV_STATE_RUNNING \"\nReservation period has started and reservation is running.\n\n.IP \"FN   RESV_FINISHED        pbs.RESV_STATE_FINISHED \"\nEnd of the reservation period.\n\n.IP \"BD   RESV_BEING_DELETED   pbs.RESV_STATE_BEING_DELETED \"\nReservation is being deleted.\n\n.IP \"DE   RESV_DELETED         pbs.RESV_STATE_DELETED \"\nReservation has been deleted.\n\n.IP \"DJ   RESV_DELETING_JOBS   pbs.RESV_STATE_DELETING_JOBS \"\nJobs belonging to the reservation are being deleted.\n.RE\n.IP\nDefault: No default\n\n.IP reserve_substate 8\nThe substate of the reservation or occurrence.  The \nsubstate is used internally by PBS.\n.br\nReadable by all; set by PBS.\n.br\nFormat:\n.I Integer\n.br\nPython type:\n.I int\n.br\nDefault: No default\n\n.IP reserve_type 8\nNo longer used.\n\n.IP \"Resource_List\" 8\nThe list of resources allocated to the reservation.  
Jobs running in\nthe reservation cannot use in aggregate more than the specified amount\nof a resource.\n.br\nReadable by all; settable by all.\n.br\nFormat:\n.I String\n.br\nSyntax:\n.RS 11\n.I Resource_List.<resource name>=<value>[,Resource_List.<resource name>=<value> ...]\n.RE\n.IP\nPython type: \n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\n.I Resource_List[\"<resource name>\"]=<resource value>\n.br\nwhere \n.I resource name \nis any built-in or custom resource\n.RE\n.IP\nDefault: No default\n\n.IP resv_nodes 8\nThe list of vnodes and resources allocated from them to satisfy \nthe chunks requested for this reservation or occurrence.\n.br\nReadable by all; set by PBS.\n.br\nFormat:\n.I String\n.br\nSyntax:  \n.RS 11\n.I (<vnode name>:<resource name>=<value>[:<resource name>=<value>]...) \n.I [+(<vnode name>:<resource name>=<value>[:<resource name>=<value>])+...]\n.RE\n.IP\nPython type:\n.I pbs.exec_vnode\n.br\nDefault: No default\n\n.IP server 8\nName of server.\n.br\nReadable by all; set by PBS.\n.br\nFormat:\n.I String\n.br\nPython type:\n.I pbs.server\n.br\nDefault: No default\n\n.IP User_List 8\nNo longer used.\n\n.IP Variable_List 8\nNot used.\n\n\n.SH SEE ALSO\npbs_rstat(1B), \npbs_rsub(1B), \npbs_resources(7B)\n\n"
  },
  {
    "path": "doc/man1/pbs_rstat.1B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_rstat 1B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_rstat \n\\- show status of PBS reservations\n.SH SYNOPSIS\n.B pbs_rstat \n[-B] [-f | -F] [-S] [<reservation ID>...]\n.br\n.B pbs_rstat \n--version\n.SH DESCRIPTION\nThe\n.B pbs_rstat\ncommand shows the status of all reservations at the PBS server.\nDenied reservations are not displayed.\n\n.B Required Privilege\n.br\nThis command can be run by a user with any level of PBS privilege.  In\nfull output, users without manager or operator privilege cannot print\ncustom resources that were created to be invisible to users.\n\n.SH OUTPUT\nThe\n.B pbs_rstat\ncommand displays output in any of brief, short, or full formats.\n\nSee the \n.B pbs_resv_attributes(7B) \nman page for information about reservation attributes.\n\n.SH OPTIONS\n.IP \"-B\" 10\nBrief output.  Displays each reservation identifier only.\n\n.IP \"-f, -F\" 10\nFull output.  Displays all reservation attributes that are not set to \nthe default value.  Users without manager or operator privilege\ncannot print custom resources that were created to be invisible to users.\n\n.IP \"-S\" 10\nShort output.  
Displays a table showing the name, queue, owner, state, \nstart time, duration, and end time of each reservation.\n\n.IP \"--version\" 10\nThe \n.B pbs_rstat\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.IP \"(no options)\" 10\nShort output.  Same behavior as \n.I -S \noption.\n\n.SH OPERANDS\nThe \n.B pbs_rstat \ncommand accepts one or more\n.I reservation ID\noperands.  \n.br\n\nFormat for an advance or job-specific reservation:\n.RS 4\n.I \"R<sequence number>[.<server name>][@<remote server>]\"\n.RE\n\nFormat for a standing reservation:\n.RS 4\n.I \"S<sequence number>[.<server name>][@<remote server>]\"\n.RE\n\nFormat for a maintenance reservation:\n.RS 4\n.I \"M<sequence number>[.<server name>][@<remote server>]\"\n.RE\n\n\n.I @<remote server> \nspecifies a reservation at a server other than the default server.\n\n.SH SEE ALSO\npbs_rsub(1B), pbs_rdel(1B), pbs_resv_attributes(7B)\n\n"
  },
  {
    "path": "doc/man1/pbs_rsub.1B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_rsub 1B \"24 September 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_rsub \n\\- create a PBS reservation \n\n.SH SYNOPSIS\n.B For advance and standing reservations:\n.br\n.B pbs_rsub \n[-D <duration>] [-E <end time>] [-g <group list>] \n.RS 9\n[-G <auth group list>] [-H <auth host list>] [-I <block time>]\n[-l <placement>] [-l <resource request>] [-m <mail events>] \n[-M <mail list>] [-N <reservation name>] [-q <destination>] \n[-r <recurrence rule>] [-R <start time>]  [-u <user list>] \n[-U <auth user list>] [-W <attribute value list>]\n.RE\n\n.B For job-specific now reservations:\n.br\n.B pbs_rsub \n[-I <block time>] [-m <mail events>] [-M <mail list>] \n.RS 9\n.br\n--job <job ID>\n.RE\n\n.B For maintenance reservations:\n.br\n.B pbs_rsub \n[-D <duration>] [-E <end time>] [--hosts <host list>]\n.RS 9\n[-N <reservation name>] [-q <destination>] [-R <start time>]  \n.RE\n\n.B For version information:\n.br\n.B pbs_rsub \n--version\n\n.SH DESCRIPTION\nThe \n.B pbs_rsub\ncommand is used to create advance, standing, job-specific now, \njob-specific ASAP, or maintenance reservations.  
\n.RS 3\n\nAn advance reservation reserves specific resources for the requested\ntime period.\n\nA standing reservation reserves specific resources for recurring time\nperiods.\n\nA job-specific now reservation reserves the resources being used by a\nspecific job in case the job fails and needs to be re-submitted,\nallowing it to run again without having to wait to be scheduled.  The\nreservation is created and starts running when a queued job starts\nrunning, or immediately when you use \n.B pbs_rsub --job <job ID> \non a running job.\n\nA job-specific ASAP reservation is created from a queued job via\n.B pbs_rsub -Wqmove=<job ID>.  \nThe reservation runs as soon as possible,\nand the job is moved into the reservation.  The reservation is created\nusing the same resources as the job requested.  \n\nA job-specific start reservation is created immediately using a \nrunning job's resources, and the job is moved into the reservation.  \nYou create job-specific start reservations using \n.B qsub -Wcreate_resv_from_job=true \non a running job.  See the\n.B qsub \ncommand.\n\nA maintenance\nreservation reserves the specified hosts for the specified time\nregardless of other circumstances.\n\n.RE\n\nAdvance, standing, and job-specific reservations are \"job reservations\", \nto distinguish them from maintenance reservations.  When a reservation\nis created, it has an associated queue.  \n\nTo get information about a reservation, use the \n.B pbs_rstat \ncommand.  \n\nTo delete a reservation, use the \n.B pbs_rdel \ncommand.  Do not use the qdel command.  \n\nThe behavior of the \n.B pbs_rsub \ncommand may be affected by any site hooks.  Site hooks can modify the\nreservation's attributes.\n\n.B Job Reservations\n.br\nAfter an advance or standing reservation is requested, it is either\nconfirmed or denied.  A job-specific now reservation is created when\nthe job is started and confirmed immediately.  A job-specific ASAP\nreservation is scheduled as soon as possible.  
Once the reservation\nhas been confirmed, authorized users submit jobs to the\nreservation's queue via qsub and qmove.  \n\nA confirmed job reservation will accept jobs at any time.  The jobs in\nits queue can run only during the reservation period.  Jobs in a\nsingle advance reservation or job-specific reservation can run only\nduring the reservation's time slot, and jobs in a standing\nreservation can run only during the time slots of occurrences of the\nstanding reservation.  \n\nWhen an advance reservation ends, all of its jobs are deleted, whether\nrunning or queued.  When an occurrence of a standing reservation ends,\nonly its running jobs are deleted; those jobs still in the queue are\nnot deleted.\n\n.B Maintenance Reservations \n.br\nYou can create maintenance reservations using \n.B pbs_rsub --hosts <host list>.  \nMaintenance reservations are designed to make the specified\nhosts available for the specified amount of time, regardless of what\nelse is happening:\n.RS 3\nYou can create a maintenance reservation that includes or is made up\nof vnodes that are down or offline.  \n\nMaintenance reservations ignore the value of a vnode's \n.I resv_enable \nattribute.  \n\nPBS immediately confirms any maintenance reservation.  \n\nMaintenance reservations take precedence over other reservations; if you create a\nmaintenance reservation that overlaps an advance or standing job\nreservation, the overlapping vnodes become unavailable to the job\nreservation, and the job reservation becomes degraded.  PBS looks for\nreplacement vnodes.\n.RE\n\nPBS will not start any new jobs on vnodes overlapping or in a\nmaintenance reservation.  However, jobs that were already running on\noverlapping vnodes continue to run; you can let them run or requeue\nthem.  
\n\nYou cannot specify place or select for a maintenance\nreservation; these are created by PBS: \n\n.RS 3\nPBS creates the\nreservation's placement specification so that hosts are\nassigned exclusively to the reservation.  The placement specification\nis always the following: \n.RS 3\n.I -lplace=exclhost \n.RE\n\nPBS sets the reservation's \n.I resv_nodes\nattribute value so that all CPUs on the reserved hosts are assigned\nto the\nmaintenance reservation.  The select specification is always the\nfollowing: \n.IP \"\" 3\n.I -lselect=host=<host1>:ncpus=<number of CPUs at host1> \n.I +host=<host2>:ncpus=<number of CPUs at host2>+...\n.LP\n.RE\n\nMaintenance reservations are prefixed with M.  A maintenance\nreservation ID has the format: \n.RS 3\n.I M<unique integer>.<server name>\n.RE\n\nCreating a maintenance reservation does not trigger a scheduling\ncycle.  \n\nYou must have manager or operator privilege to create a maintenance reservation.\n\n.SH REQUIREMENTS\n\nWhen using\n.B pbs_rsub\nto request a standing, advance, or maintenance reservation, you must \nspecify two of the following options: \n.I -R, -E, \nand\n.I -D.\nThe resource request \n.I -l walltime\ncan be used instead of the \n.I -D \noption.\n\nIf you want to run jobs in a reservation that will request exclusive \nplacement, you must create the reservation with exclusive placement\nvia -l place=excl.\n\n.SH OPTIONS\n.IP \"-D <duration>\" 8\nSpecifies reservation duration. If the start\ntime and end time are the only times specified, this duration time is\ncalculated.  \n.br\nFormat: \n.I Duration.  \nSee \n.B FORMATS.\n.br\nDefault: none\n.RE\n.IP \"-E <end time>\" 8\nSpecifies the reservation end time.  If start time and duration are\nthe only times specified, the end time value is calculated.    \n.br\nFormat: \n.I Datetime. \nSee \n.B FORMATS.\n.br\nDefault: none.\n.RE\n.IP \"-g <group list>\" 8\nThe \n.I group list\nis a comma-separated list of group names. 
\nThe server uses entries in this list, along with an ordered set \nof rules, to associate a group name with the reservation. \nThe reservation creator's primary group is automatically added to this list.\n.br\nFormat: <group>@<hostname>[,<group>@<hostname> ...]\n.RE\n.IP \"-G <auth group list>\" 8\nComma-separated list of names of groups who\ncan or cannot submit jobs to this reservation.  Sets reservation's \n.I Authorized_Groups \nattribute to \n.I auth group list.\n.br\nThis list becomes the \n.I acl_groups \nlist for the \nreservation's queue. \n.br\nMore specific entries should be listed before more general, because the\nlist is read left-to-right, and the first match determines access.\n.br\nIf both the\n.I Authorized_Users\nand \n.I Authorized_Groups\nreservation attributes are set, a user must belong to both in order to\nbe able to submit jobs to this reservation.  \n.br\nGroup names are\ninterpreted in the context of the server's host, not the context of the host \nfrom which the job is submitted. \n.br\nSee the \n.I Authorized_Groups \nreservation attribute in the \npbs_resv_attributes(7B) man page.  \n.br\nSyntax: \n.I [+|-]<group name>[,[+|-]<group name> ...]\n.br\nDefault: No groups are authorized to submit jobs\n.RE\n\n.IP \"--hosts <host list>\" 8\nSpace-separated list of hosts to be included in maintenance\nreservation.  Cannot be used with the \n.I -l <placement> \nor \n.I -l <resource request> \noptions.  PBS creates placement and resource requests.  Placement is always \n.I exclhost, \nand all CPUs of requested hosts are assigned to maintenance reservation.\n\n.IP \"-H <auth host list>\" 8\nComma-separated list of hosts from which jobs can and cannot be \nsubmitted to this reservation.  This list becomes the \n.I acl_hosts \nlist for the reservation's queue. 
\n.br\nMore specific entries should be listed before more general, because the\nlist is read left-to-right, and the first match determines access.\nIf the reservation creator specifies this list, the creator's \nhost is not automatically added to the list.\n.br\nSee the \n.I Authorized_Hosts \nreservation attribute in the pbs_resv_attributes(7B) man page.\n.br\nFormat: [+|-]<hostname>[,[+|-]<hostname> ...]\n.br\nDefault: All hosts are authorized to submit jobs\n.RE\n\n.IP \"-I <block time>\" 8\nSpecifies interactive mode.  The \n.B pbs_rsub \ncommand will block, up to \n.I block time \nseconds, while waiting for the reservation request to be \nconfirmed or denied.\n.br\nIf \n.I block time\nis positive, and the reservation isn't confirmed or denied in \nthe specified time, the ID string for the reservation is returned \nwith the status \"UNCONFIRMED\".\n.br\nIf \n.I block time\nis negative, and a scheduler doesn't confirm or deny the reservation in \nthe specified time, the reservation is deleted.\n.br\nCannot be used with\n.I --hosts\noption.  Has no effect when used with \n.I --job \noption.\n.br\nFormat: Integer\n.br\nDefault: Not interactive\n.RE\n\n.IP \"--job <job ID>\" 8\nImmediately creates and confirms a job-specific now reservation on the\nsame resources as the job (including resources inherited by the job),\nand places the job in the job-specific now reservation queue.  Sets\nthe job's \n.I create_resv_from_job \nattribute to \n.I True.  
\nSets the now reservation's \n.I reserve_job \nattribute to the ID of the job from which the reservation was created,\nsets the reservation's \n.I Reserve_Owner \nattribute to the value of the job's \n.I Job_Owner \nattribute, sets the reservation's \n.I resv_nodes \nattribute to the job's \n.I exec_vnode \nattribute, sets the reservation's resources to match the job's \n.I schedselect \nattribute, and sets the reservation's\n.I Resource_List \nattribute to the job's \n.I Resource_List \nattribute.\n\nThe now reservation's duration and start time are the same as the\njob's walltime and start time.  If the job is peer scheduled, the now\nreservation is created in the pulling complex.\n.br\nFormat: \n.I Boolean\n.br\nDefault: no default\n.br\nExample:\n.B pbs_rsub --job 1234.myserver\n\nCan be used on running jobs only (jobs in the \n.I R \nstate, with substate \n.I 42\n).\n\nCannot be used with job arrays, jobs already in reservations, or other users' jobs.\n\n.IP \"-l <placement>\" 8\nThe \n.I placement\nspecifies how vnodes are reserved. \nThe \n.I place\nstatement can contain the following elements, in any order:\n.IP \" \" 11\n-l place=[\n.I <arrangement>\n][:\n.I <sharing>\n][:\n.I <grouping>\n]\n.LP\n.IP \" \" 8\nwhere\n.IP \" \" 11\n.RS \n.IP arrangement 3\nWhether this reservation chunk is willing to share this vnode or host \nwith other chunks from this reservation.  One of \n.I free\n|\n.I pack\n|\n.I scatter\n| \n.I vscatter\n\n.IP sharing 3\nWhether this reservation chunk is willing to share this vnode or host\nwith other reservations or jobs.  One of \n.I excl \n| \n.I shared \n| \n.I exclhost\n\n.IP grouping 3\nWhether the chunks from this reservation should be placed on vnodes\nthat all have the same value for a resource.  
Can have only one\ninstance of\n.I group=resource\n.LP\n.LP\n.RE\n.LP\n.IP \" \" 8\nand where\n.IP \" \" 11\n.RS\n.IP free 3\nPlace reservation on any vnode(s).\n.IP pack\nAll chunks are taken from one host.\n.IP scatter 3\nOnly one chunk with any MPI processes is taken from a host.\nA chunk with no MPI processes may be taken from the same vnode as\nanother chunk.\n.IP vscatter 3\nOnly one chunk is taken from any vnode.  Each chunk must fit on a vnode.\n.IP excl 3\nOnly this reservation uses the vnodes chosen.\n.IP exclhost 3\nThe entire host is allocated to the reservation.\n.IP shared 3\nThis reservation can share the vnodes chosen.\n.IP group=<resource> 3\nChunks are grouped according to a \n.I resource.  \nAll vnodes in the group must have a common value for \n.I resource, \nwhich can be either the built-in resource\n.I host\nor a custom vnode-level resource.\n.br\n.I resource\nmust be a string or a string array.\n.LP\n.LP\n.RE\n.LP\n.IP \" \" 8\nIf you want to run jobs in the reservation that will request exclusive \nplacement, you must create the reservation with exclusive placement via \n.B -l place=excl.\n\nThe place statement cannot begin with a colon.  Colons are delimiters; use \nthem only to separate parts of a place statement, unless they are quoted\ninside resource values.\n\nNote that nodes can have sharing attributes that override\njob placement requests.  See the\n.B pbs_node_attributes(7B)\nman page.\n.LP\n.IP \"-l resource request\" 8\nThe \n.I resource request \nspecifies the resources required for the reservation. These \nresources are used for the limits on the queue that is dynamically created \nfor the reservation. The aggregate amount of resources for currently \nrunning jobs from this queue will not exceed these resource limits. \nJobs in the queue that \nrequest more of a resource than the queue limit for that resource are not \nallowed to run. 
Also, the queue inherits the value of any resource limit set \non the server, and these are used for the job if the reservation request \nitself is silent about that resource.\nA non-privileged user cannot submit a reservation requesting a custom \nresource which has\nbeen created to be invisible or read-only for users.\n\nResources are requested\nby using the\n.I -l\noption, either in\n.I chunks\ninside of\n.I selection statements,\nor in job-wide requests using\n.I <resource name>=<value>\npairs.\n\nRequesting resources in chunks:\n.RS \n.IP \" \" 3\n.I -l select=[N:]<chunk>[+[N:]<chunk> ...]\n.LP\nwhere\n.I N\nspecifies how many of that chunk, and\na\n.I chunk\nis of the form:\n\n.IP \" \" 3\n.I <resource name>=<value>[:<resource name>=<value> ...]\n.LP\n\nRequesting job-wide resources:\n.IP \" \" 3\n.I -l <resource name>=<value>[,<resource name>=<value> ...]\n.LP\n.RE\n\n.IP \"-m <mail events>\" 8\nSpecifies the set of events that cause mail to be sent to the \nlist of users specified in the \n.I -M mail list\noption. \n.br\nFormat: string consisting of one of the following:\n.RS\n1) any combination of \"a\", \"b\", \"c\" or \"e\"\n.br\n2) the single character \"n\"\n.IP a\nNotify if the reservation is terminated for whatever reason\n.IP b\nNotify when the reservation period begins\n.IP c \nNotify when the reservation is confirmed\n.IP e \nNotify when the reservation period ends\n.IP n\nSend no mail.  Cannot be used with any of \n.I a, b, c\nor \n.I e.\n.LP\nDefault:\n.I ac\n.RE\n\n.IP \"-M <mail list>\" 8\nThe list of users to whom mail is sent \nwhenever the reservation transitions to one of the states \nspecified in the \n.I -m mail events\noption. \n.br\nFormat: <username>[@<hostname>][,<username>[@<hostname>]...]\n.br\nDefault: reservation owner.\n.RE\n.IP \"-N <reservation name>\" 8\nSpecifies a name for the reservation. \n.br\nFormat: \n.I Reservation name.  
\nSee \n.B FORMATS.\n.br\nDefault: None.\n.RE\n.IP \"-q <destination>\" 8\nSpecifies the destination server at which to create the reservation. \n.br\nDefault: The default server is used if this option is not selected.\n.RE\n\n.IP \"-r <recurrence rule>\" 8\nSpecifies rule for recurrence of standing reservations.  Rule must conform to iCalendar\nsyntax, and is specified using a subset of parameters from RFC 2445.\n.br\nValid syntax for \n.I recurrence rule \ntakes one of two forms:\n.RS 11\n.I FREQ=<freq spec>;COUNT=<count spec>;<interval spec>\n.RE\n.IP \" \" 8\nor\n.RS 11\n.I FREQ=<freq spec>;UNTIL=<until spec>; <interval spec>\n.RE\n.IP \" \" 8\nwhere\n.RS 11\n.IP \"freq spec\" 5\nFrequency with which the standing reservation repeats.  Valid values are:\n.br\nWEEKLY|DAILY|HOURLY\n\n.IP \"count spec\" 5\nThe exact number of occurrences.  Number up to 4 digits in length.  \nFormat: integer.\n\n.IP \"interval spec\" 5\nSpecifies interval.  Format is one or both of:\n.br\nBYDAY = MO|TU|WE|TH|FR|SA|SU \n.br\nor\n.br\nBYHOUR = 0|1|2|...|23\n.br\nWhen using both, separate them with a semicolon.\n.br\nElements specified in the recurrence rule override those \nspecified in the arguments to the \n.I -R \nand \n.I -E\noptions.  For example, the \n.I BYHOUR \nspecification overrides the hourly part of the\n.I -R\noption.  For example, \n.br\n-R 0730 -E 0830 ... BYHOUR=9\n.br\nresults in a reservation that starts at 9:30 and runs for 1 hour.\n\n.IP \"until spec\" 5\nOccurrences will start up to but not after date and time \nspecified.\n.br\nFormat: YYYYMMDD[THHMMSS] \n.br\nNote that the year-month-day section is separated from \nthe hour-minute-second section by a capital T.\n.RE\n.IP \" \" 8\nRequirements:\n.br\nThe recurrence rule must be on one unbroken line and must be enclosed\nin double quotes.  \n\nA start and end date must be used when specifying a recurrence rule.  
\nSee the \n.I -R\nand\n.I -E \noptions.\n\nThe PBS_TZID environment variable must be set at the submission host.  The \nformat for PBS_TZID is a timezone location.  Examples: America/Los_Angeles,\nAmerica/Detroit, Europe/Berlin, Asia/Calcutta.  \n\n.B Examples of Standing Reservations\n.br\nFor a reservation that runs every day from 8am to 10am, for a total of 10 occurrences:\n.RS \n.IP \" \" 3\npbs_rsub -R 0800 -E 1000 -r \"FREQ=DAILY;COUNT=10\"\n.LP\n\nEvery weekday from 6am to 6pm until December 10 2008:\n.IP \" \" 3\npbs_rsub -R 0600 -E 1800 \n.br\n-r \"FREQ=WEEKLY; BYDAY=MO,TU,WE,TH,FR; UNTIL=20081210\"\n.LP\n\nEvery week from 3pm to 5pm on Monday, Wednesday, and Friday, for 9 occurrences, \ni.e., for three weeks:\n.RS 5\npbs_rsub -R 1500 -E 1700\n.br\n-r \"FREQ=WEEKLY;BYDAY=MO,WE,FR; COUNT=9\"\n.RE\n.RE\n\n.IP \"-R <start time>\" 8\n.RS\n.LP\nSpecifies reservation starting time. If the reservation's end time \nand duration are the only times specified, this start time is calculated.\n\nIf the day,\n.I DD ,\nis not specified, it defaults to today if the time\n.I hhmm\nis in the future. Otherwise, the day is set to tomorrow.\nFor example, if you submit a reservation with the specification \n.I \"-R 1110\"\nat 11:15 a.m., it is interpreted as being for 11:10am tomorrow.\nIf the month portion,\n.I MM ,\nis not specified, it defaults to the current month, provided that the specified \nday\n.I DD ,\nis in the future. Otherwise, the month is set to next month. Similar\nrules apply to the two other optional, left-side components.\n.br\nFormat: \n.I Datetime\n.LP\n.RE\n.IP \"-u <user list>\" 8\nNot used. Comma-separated list of user names.\n.br\nFormat: <username>[@<hostname>][,<username>[@<hostname>] ...]\n.br\nDefault: None\n.RE\n.IP \"-U <auth user list>\" 8\nComma-separated list of users who are and are not allowed to \nsubmit jobs to this reservation.  
Sets reservation's \n.I Authorized_Users \nattribute to \n.I auth user list.\n.br\nThis list becomes the \n.I acl_users \nattribute for the reservation's queue. \n.br\nMore specific entries should be listed before more general, because the\nlist is read left-to-right, and the first match determines access. \nThe reservation creator's username is automatically added to this list,\nwhether or not the reservation creator specifies this list.\n.br\nIf both the\n.I Authorized_Users\nand \n.I Authorized_Groups\nreservation attributes are set, a user must belong to both in order to be able to \nsubmit jobs to this reservation.\n.br\nSee the \n.I Authorized_Users\nreservation attribute in the pbs_resv_attributes(7B) man page.\n.br\nSyntax:\n.I [+|-]<username>[@<hostname>][,[+|-]<username>[@<hostname>]...]\n.br\nDefault: Reservation owner only\n.br\n.RE\n.IP \"-W attribute value list\" 8\nThis allows you to define other attributes for the reservation.\nSupported attributes:\n.RS\n\n.IP \"qmove=<job ID>\" 5\nTakes as input a queued job, creates a job-specific ASAP reservation\nfor the same resources the job requests, and moves the job into the\nreservation's queue.  The reservation is scheduled to run as soon as\npossible.  \n\nWhen the reservation is created, it inherits its resources from the\njob, not from the resources requested through the pbs_rsub command.\n\nYou can use the \n.I -I \noption to specify a timeout for the conversion.  If you use the \n.I qmove \noption to convert a job to a reservation, and the\nreservation is not confirmed within the timeout period, the\nreservation is deleted.  The default timeout period is 10 seconds.\nThere is no option for this kind of reservation to be unconfirmed.  \n\nTo specify the timeout, you must give a negative value for the \n.I -I\noption.  
For example, to specify a timeout of 300 seconds: \n.br\n.B \\ \\ \\ pbs_rsub -Wqmove=<job ID> -I -300 \n\nThe default value for the \n.I delete_idle_time\nattribute for an ASAP reservation is 10 minutes.  \n\nThe \n.I -R \nand \n.I -E \noptions to pbs_rsub are disabled when using the\n.I qmove=<job ID> \noption.  \n\nSome shells require that you enclose a job array ID in double quotes.\n\nCan be used on queued jobs only.\n.RE\n\n.IP \"--version\" 8\nThe \n.B pbs_rsub\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH OUTPUT\nThe \n.B pbs_rsub \ncommand returns the reservation identifier.  \n.br\nFormat for an advance or job-specific reservation:\n.IP\n.I R<sequence number>.<server name>\n.br\nThe associated queue's name is the prefix, \n.I R<sequence number>.\n.LP\n\nFormat for a standing reservation:\n.IP\n.I S<sequence number>.<server name>\n.br\nThe associated queue's name is the prefix, \n.I S<sequence number>.\n.LP\n\nFormat for a maintenance reservation:\n.IP\n.I M<sequence number>.<server name>\n.LP\n\n.SH FORMATS\n.IP \"Datetime format\"\n.I \"[[[[CC]YY]MM]DD]hhmm[.SS]\"\n\n.IP \"Duration format\"\nA period of time, expressed either as \n.RS 11\n.I    An integer whose units are seconds\n.RE\n.IP \nor \n.RS 11\n.I [[hours:]minutes:]seconds[.milliseconds]\n.br\nin the form\n.br\n.I [[HH:]MM:]SS[.milliseconds]\n.RE\n.IP\nMilliseconds are rounded to the nearest second.\n\n.IP \"Reservation name format\"\nString up to 230 characters in length. It must consist of printable,\nnon-white space characters.  It can contain alphabetic and numeric\ncharacters, and plus sign, dash or minus, underscore, and dot or period.\n\n.SH SEE ALSO\npbs_resv_attributes(7B),\npbs_rdel(1B),\npbs_rstat(1B), \nqmove(1B),\nqsub(1B)\t\n"
  },
  {
    "path": "doc/man1/pbs_sched_attributes.7B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_sched_attributes 7B \"4 March 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_sched_attributes\n\\- attributes of default PBS scheduler and multischeds\n.SH DESCRIPTION\nThese are the attributes of the default PBS scheduler and PBS multischeds.\n\n.IP comment 8\nFor certain scheduler errors, PBS sets the scheduler's \n.I comment\nattribute to specific error messages.  You can use the \n.I comment \nattribute to notify another administrator of something, but PBS does overwrite\nthe value of \n.I comment \nunder certain circumstances.\n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I String\n.br\nDefault: no default\n.br\nPython type: No Python type\n\n.IP do_not_span_psets 8\nSpecifies whether or not the scheduler requires the job to fit within\none existing placement set.\n.br\nReadable by all; settable by Manager and Operator.\n.br\nFormat: \n.I Boolean\n.br\nBehavior:\n.nf\n   True   The job must fit in one existing placement set.  All \n          existing placement sets are checked.  If the job fits \n          in an occupied placement set, the job waits for the \n          placement set to be available.  
If the job cannot fit \n          within a single placement set, it won't run.\n   False  The scheduler first attempts to place the job in a \n          single placement set, but if it cannot, it allows the \n          job to span placement sets, running on whichever\n          vnodes can satisfy the job's resource request.\n.fi\nDefault: \n.I False\n.br\nPython type: No Python type\n\n.IP job_sort_formula_threshold 8\nLower bound on a job's calculated priority.  If the job's priority is at \nor below this value, the job is not eligible to run in the current\nscheduler cycle.  \n.br\nReadable by all; settable by Manager.  \n.br\nFormat: \n.I Float\n.br\nDefault: None\n.br\nPython type: No Python type\n\n.IP log_events 8\nTypes of events logged by this scheduler.\n.br\nReadable by all; settable by Manager and Operator.\n.br\nFormat:\n.I Integer\n.br\nDefault:\n.I 767\n.br\nPython type: No Python type\n\n.IP only_explicit_psets 8\nSpecifies whether placement sets are created for unset resources.  \n.br\nReadable by all; settable by Manager and Operator.\n.br\nFormat: \n.I Boolean\n.br\nBehavior:\n.nf\n   True   Placement sets are not created from vnodes \n          whose value for a resource is unset.\n   False  Placement sets are created from vnodes whose \n          value for a resource is unset.\n.fi\nDefault:\n.I False\n.br\nPython type: No Python type\n\n.IP opt_backfill_fuzzy 8\nSets the tradeoff between scheduling cycle speed and granularity of \nestimated start time calculation.  
\n.br\nReadable by all; settable by Manager.\n.br\nBehavior: \n.RS 11\n.IP off 8\nFinest granularity; no speedup.\n.IP low 8\nFairly fine granularity; some speedup.\n.IP medium 8\nMedium granularity; medium speedup.\n.IP high 8\nCoarse granularity; greatest speedup.\n.RE\n.IP\n.br\nFormat: \n.I String\n.br \nDefault: unset (behaves like \n.I low\n).\n.br\nPython type: No Python type\n\n.IP partition 8\nName of partition for which this scheduler is to run jobs.\nCannot be set on default scheduler.\n.br\nFormat:\n.I String\n.br\nDefault: \"None\"\n.br\nPython type: No Python type\n\n.IP pbs_version 8\nThe version of PBS for this scheduler.  \n.br\nReadable by Manager and Operator; not settable.\n.br\nFormat:\n.I String\n.br\nDefault: no default\n.br\nPython type: No Python type\n\n.IP preempt_order 8\nDefines the order of preemption methods which this scheduler uses on\njobs. This order can change depending on the percentage of time\nremaining on the job. The ordering can be any combination of \n.I S, C, R, \nand \n.I D.  \n\nUsage: an ordering \n.I (SCR) \noptionally followed by a percentage of time remaining and another\nordering.\n\nFor example, PBS should first attempt to use suspension to preempt a\njob, and if that is unsuccessful, requeue the job: \n.nf\n   preempt_order: \"SR\"\n.fi\n\nFor example, if the job has between 100% and 80% of requested time\nremaining, first try to suspend the job, then try checkpoint, then\nrequeue. If the job has between 80% and 50% of requested time\nremaining, attempt suspend, then checkpoint.  Between 50% and 0% time\nremaining, just attempt to suspend the job:\n.nf\n   preempt_order: \"SCR 80 SC 50 S\"\n.fi\n\nFor each job percentage, each method can be used only once.  Note that\nin the example above, the \n.I S \nmethod appears only once per percentage. 
\n\nReadable by all; settable by Manager.\n.br\nFormat:\n.I String, \nas a quoted list\n.br\nPreemption methods:\n.nf\n   C   Checkpoint job\n   D   Delete job\n   R   Requeue job\n   S   Suspend job\n.fi\nDefault: \n.I \"SCR\"\n.br\nPython type: No Python type\n\n.IP \"preempt_prio\" 8\nSpecifies the ordering of priority for different preemption levels.\nTwo or more job types may be combined at the same priority level with\na plus sign (\"+\") between them, using no whitespace.  Comma-separated\npreemption levels are evaluated left to right, with higher priority to\nthe left.  Any level not specified in the\n.I preempt_prio \nlist is ignored.  \n\nFor example, starving jobs have the highest priority, then normal\njobs, and jobs whose entities are over their fairshare limit are third\nhighest:\n.nf\n   preempt_prio: \"starving_jobs, normal_jobs, fairshare\"\n.fi\n\nFor example, starving jobs whose entities are also over their\nfairshare limit are lower priority than normal jobs: \n.nf \n   preempt_prio: \"normal_jobs, starving_jobs+fairshare\"\n.fi\n\nReadable by all; settable by Manager.\n.br\nFormat:\n.I string_array,\nas quoted string\n.br\nPreemption levels:\n.nf\n   express_queue      Jobs in express queues preempt other jobs.  \n.fi\n.RS 30\nSee \n.I preempt_queue_prio.  \nDoes not require \n.I by_queue \nto be \n.I True. \n.RE\n.RS 8\n.nf\n   starving_jobs      When a job becomes starving it can preempt \n                      other jobs.  \n   fairshare          When the entity owning a job exceeds its \n                      fairshare limit.       \n   queue_softlimits   Jobs which are over their queue soft limits     \n   server_softlimits  Jobs which are over their server soft limits    \n   normal_jobs        The preemption level into which a job falls \n                      if it does not fit into any other specified \n                      level.     
\n.fi\n\nDefault:\n.I \"express_queue, normal_jobs\"\n.br\nPython type: No Python type\n.RE\n\n.IP preempt_queue_prio 8\nSpecifies the minimum queue priority required for a queue to be\nclassified as an express queue.  Express queues do not require\n.I by_queue \nto be \n.I True.\n.br\nReadable by all; settable by Manager.\n.br\nFormat:\n.I Integer\n.br\nDefault:\n.I 150\n.br\nPython type: No Python type\n\n.IP preempt_sort 8\nSpecifies whether jobs most eligible for preemption are sorted\naccording to their start times.  \n.br\nReadable by all; settable by Manager.\n.br\nFormat:\n.I String\n.br\nBehavior:\n.nf\n   unset                 The job preempted first is the one \n                         with the longest running time.\n   min_time_since_start  The job preempted first is the one \n                         with the most recent start time.\n.fi\n.br\nDefault:\n.I min_time_since_start\n.br\nPython type: No Python type\n\n.IP scheduler_iteration 8\nTime in seconds between scheduling iterations.  If you set the\nserver's \n.I scheduler_iteration \nattribute, that value is assigned to the default scheduler's \n.I scheduler_iteration\nattribute, and vice versa.\n.br\nReadable by all; settable by Manager.\n.br\nFormat:\n.I Integer\n.br\nUnits:\nSeconds\n.br\nDefault:\n.I 600\n.br\nPython type: No Python type\n\n.IP scheduling 8\nEnables scheduling of jobs.  \n.br\nIf you set the server's \n.I scheduling \nattribute, that value is\nassigned to the default scheduler's \n.I scheduling \nattribute, and\nvice versa.\n.br\nReadable by all; settable by Manager.\n.br\nFormat:\n.I Boolean\n.br\nDefault value for default scheduler:\n.I True\n.br\nDefault value for multischeds: \n.I False\n.br\nPython type: No Python type\n\n.IP sched_cycle_length 8\nThe scheduler's maximum cycle length.  
\nOverridden by the \n.I -a\noption to the\n.I pbs_sched\ncommand.\n.br\nReadable by all; settable by Manager and Operator.\n.br\nFormat: \n.I Duration,\nexpressed as integer seconds, or \n.I [[hours:]minutes:]seconds[.milliseconds]\n.br\nDefault: \n.I 20:00\n(20 minutes)\n.br\nPython type: No Python type\n\n.IP sched_host 8\nThe hostname of the machine on which this scheduler runs.  \n.br\nCannot be set on default scheduler; value for default scheduler is server\nhostname.  \n.br\nMust be set by administrator.\n.br\nReadable by Manager and Operator.\n.br\nFormat: \n.I String\n.br\nDefault value for multischeds: server hostname\n.br\nPython type: No Python type\n\n.IP sched_log 8\nDirectory where this scheduler writes its logs.  Permissions should be\n755.  Must be owned by root.  Cannot be shared with another scheduler.\n.br\nFormat: \n.I String\n.br\nDefault: \n.I $PBS_HOME/sched_logs_<scheduler name>\n\n.IP sched_port 8\n.B Removed. \n\n.IP sched_preempt_enforce_resumption 8\nControls whether the scheduler treats preempted jobs as top jobs.  \n.br\nWhen set to\n.I True,\npreempted jobs are treated as top jobs.  \n.br\nReadable by all; settable by Manager.\n.br \nFormat: \n.I Boolean\n.br\nDefault: \n.I False\n.br\nPython type: No Python type\n\n.IP sched_priv 8\nDirectory where this scheduler keeps its fairshare usage, resource_group, \nholidays, and sched_config files. Must be owned by root.  \nFor default scheduler, use default value; do not set.  Settable for \nmultischeds.\n.br\nReadable by all; settable by Manager (for multischeds).\n.br\nFormat: \n.I String\n.br\nDefault: \n.I $PBS_HOME/sched_priv_<scheduler name>\n.br\nPython type: No Python type\n\n\n.IP state 8\nState of this scheduler. 
\n.br\nStates:\n.RS 11\n.IP down\nScheduler is not running.\n.IP idle\nScheduler is running and is waiting for a scheduling cycle to be triggered.\n.IP scheduling\nScheduler is running and is in a scheduling cycle.\n.RE\n.IP \nFormat:\n.I String\n.br\nDefault value for default scheduler: \n.I idle\n.br\nDefault value for multischeds:\n.I down\n.br\nPython type: No Python type\n\n.IP throughput_mode 8\nAllows the scheduler to run faster; it does not have to wait for each job\nto be accepted, and does not wait for \n.I execjob_begin \nhooks to finish.  Also allows jobs that were changed via \n.I qalter, server_dyn_res \nscripts, or peering to run in the same scheduling cycle where they\nwere changed.\n.br\nReadable by all; settable by Manager and Operator.\n.br\nFormat: \n.I Boolean\n.br\nBehavior:\n.nf\n   True    Scheduler runs asynchronously and faster.  Only available \n           when the PBS complex is in TPP mode.\n   False   Scheduler does not run asynchronously.\n.fi\n.br\nDefault: \n.I True\n.br\nPython type: No Python type\n\n\n.SH SEE ALSO\n.B qmgr(1B)\n"
  },
  {
    "path": "doc/man1/pbs_server_attributes.7B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_server_attributes 7B \"18 July 2020\" Local \"PBS Professional\"\n.SH NAME\npbs_server_attributes \n\\- PBS server attributes\n.SH DESCRIPTION\nA PBS server has the following attributes.\n\n.IP acl_host_enable 8\nSpecifies whether the server obeys the host access control list in the\n.I acl_hosts \nserver attribute.  \n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I Boolean\n.br\nPython type: \n.I bool\n.br\nBehavior:\n.RS\n.IP True 3\nThe server limits host access according to the access control list.\n.IP False 3\nAll hosts are allowed access.\n.RE\n.IP\nDefault: \n.I False \n\n.IP acl_host_moms_enable 8\nSpecifies whether all MoMs are automatically allowed to contact the\nserver with the same privilege as hosts listed in the acl_hosts server\nattribute.\n.br\nReadable by all; settable by Manager.\n.br\nFormat:\n.I Boolean\n.br\nPython type:\n.I bool\n.br\nBehavior:\n.RS \n.IP True 3\nAll MoMs are automatically allowed to contact the server with the same\nprivilege as hosts listed in the \n.I acl_hosts \nserver attribute.\n.IP False 3\nMoMs are not automatically allowed to contact the server with the same\nprivilege as hosts listed in the \n.I acl_hosts \nserver attribute.\n.RE\n.IP\nDefault: \n.I False\n\n.IP acl_hosts 8\nList of hosts from which services can be requested of 
this\nserver. Requests from the server host are always honored whether or\nnot that host is in the list.  This list contains the fully qualified\ndomain names of the hosts. List is evaluated left-to-right; first\nmatch in list is used.\n.br\nReadable by all; settable by Manager.\n.br\nFormat:\n.I String\n.br\nSyntax: \"[+|-]<hostname>.<domain>[, ...]\"\n.br\nPython type: \n.I pbs.acl\n.br\nDefault: No default (all hosts are allowed access)\n\n.IP acl_resv_group_enable 8\nSpecifies whether the server obeys the group reservation access\ncontrol list in the \n.I acl_resv_groups \nserver attribute.\n.br\nReadable by all; settable by Manager.\n.br\nFormat:\n.I Boolean\n.br\nPython type:\n.I bool\n.br\nBehavior:\n.RS \n.IP True 3\nThe server limits group access according to the access control list.\n.IP False 3\nAll groups are allowed access.\n.RE\n.IP\nDefault:\n.I False\n\n.IP acl_resv_groups 8\nList of groups allowed or denied permission to create reservations in\nthis PBS complex.  The groups in the list are groups on the server\nhost, not submission hosts.  List is evaluated left-to-right; first\nmatch in list is used.\n.br\nReadable by all; settable by Manager.\n.br\nFormat:\n.I String\n.br\nSyntax: \"[+|-]<group name>[, ...]\"\n.br\nPython type:\n.I pbs.acl\n.br\nDefault: No default\n\n.IP acl_resv_host_enable 8\nSpecifies whether the server obeys the host reservation access control\nlist in the \n.I acl_resv_hosts \nserver attribute.\n.br\nReadable by all; settable by Manager.\n.br\nFormat:\n.I Boolean\n.br\nPython type:\n.I bool\n.br\nBehavior:\n.RS \n.IP True 3\nThe server limits host access according to the access control list.\n.IP False 3\nAll hosts are allowed access.\n.RE\n.IP\nDefault:\n.I False\n\n.IP acl_resv_hosts 8\nList of hosts from which reservations can be created in this PBS\ncomplex. This list is made up of the fully-qualified domain names of\nthe hosts.  
List is evaluated left-to-right; first match in list is\nused.\n.br\nReadable by all; settable by Manager.\n.br\nFormat:\n.I String\n.br\nSyntax: \"[+|-]<hostname>.<domain>[, ...]\"\n.br\nPython type:\n.I pbs.acl\n.br\nDefault: No default\n\n.IP acl_resv_user_enable 8\nSpecifies whether the server limits which users are allowed to create\nreservations, according to the access control list in the \n.I acl_resv_users \nserver attribute.\n.br\nReadable by all; settable by Manager.\n.br\nFormat:\n.I Boolean\n.br\nPython type:\n.I bool\n.br\nBehavior:\n.RS \n.IP True 3\nThe server limits user reservation creation according to the access control list.\n.IP False 3\nAll users can create reservations.\n.RE\n.IP\nDefault:\n.I False\n\n.IP acl_resv_users 8\nList of users allowed or denied permission to create reservations in\nthis PBS complex.  List is evaluated left-to-right; first match in\nlist is used.\n.br\nReadable by all; settable by Manager.\n.br\nFormat:\n.I String\n.br\nSyntax: \"[+|-]<username>[@<hostname>][, ...]\"\n.br\nPython type:\n.I pbs.acl\n.br\nDefault: No default\n\n.IP acl_roots 8\nList of users with root privilege who may run jobs\nat this server.  If the owner of a job is root or Administrator,\nthe owner must be listed in this access control list or\nthe job is rejected.  
More specific entries should be listed before\nmore general, because the list is read left-to-right, and the first\nmatch determines access.\n.br\nReadable by all; can be set or altered by root only, and only at \nthe server host.\n.br\nFormat: \n.I String\n.br\nSyntax: \"[+|-]<username>[@<hostname>][, ...]\"\n.br\nPython type: \n.I pbs.acl\n.br\nDefault: No default; no root jobs allowed\n\n.IP acl_user_enable 8\nSpecifies whether the server limits which users are allowed to run\ncommands at the server, according to the control list in the \n.I acl_users\nserver attribute.\n.br\nReadable by all; settable by Manager.\n.br\nFormat:\n.I Boolean\n.br\nPython type:\n.I bool\n.br\nBehavior:\n.RS \n.IP True 3\nThe server limits user access according to the access control list.\n.IP False 3\nAll users have access.\n.RE\n.IP\nDefault:\n.I False\n\n.IP acl_users 8\nList of users allowed or denied permission to run commands at this\nserver.  List is evaluated left-to-right; first match in list is used.\n.br\nReadable by all; settable by Manager.\n.br\nFormat:\n.I String\n.br\nSyntax: \"[+|-]<username>[@<hostname>][, ...]\"\n.br\nPython type:\n.I pbs.acl\n.br\nDefault: No default\n\n.IP backfill_depth 8\nModifies backfilling behavior.  Sets the number of jobs that are to be backfilled \naround.  Overridden by \n.I backfill_depth \nqueue attribute.\n.br\nRecommendation: set this to less than 100.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I Integer\n.br\nValid values: Must be >= 0\n.br\nBehavior:\n.RS\n.IP \">= 0\" 3\nPBS backfills around the specified number of jobs.\n.IP \"Unset\" 3\nBackfill depth is set to\n.I 1.\n.RE\n.IP\nPython type:\n.I int\n.br\nDefault: Unset (backfill depth is 1)\n\n.IP comment 8\nInformational text.  
Can be set by a scheduler or other privileged client.\n.br\nReadable by all; settable by Operator, Manager, and PBS.\n.br\nFormat: \n.I String\nof any form\n.br\nPython type:\n.I str\n.br\nDefault: No default\n\n.IP default_chunk  8\nThe list of resources which will be inserted into each chunk of a\njob's select specification if the corresponding resource is not\nspecified by the user.  This provides a means for a site to be sure a\ngiven resource is properly accounted for even if not specified by the\nuser.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I String\n.br\nSyntax: \n.RS 11\n.I default_chunk.<resource name>=<value>, default_chunk.<resource name>=<value>, ...\n.RE\n.IP\nPython type: \n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\ndefault_chunk[\"<resource name>\"]=<value> \n.br\nwhere \n.I resource name \nis any built-in or custom resource\n.RE\n.IP\nDefault: No default\n\n.IP default_node 8\nNo longer used.\n\n.IP default_qdel_arguments 8\nArgument to qdel command.  Automatically added to all qdel commands.\nSee qdel(1B).  Overrides standard defaults. Overridden by arguments\ngiven on the command line.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I String\n.br\nSyntax: \"-Wsuppress_mail=<N>\"\n.br\nPython type: \n.I pbs.args\n.br\nDefault: No default\n\n.IP default_qsub_arguments 8\nArguments that are automatically added to the qsub command.  Any valid\narguments to qsub command, such as job attributes. Setting a job\nattribute via default_qsub_arguments sets that attribute for each job\nwhich does not explicitly override it. See qsub(1B). Settable by the\nadministrator via the qmgr command. Overrides standard\ndefaults. 
Overridden by arguments given on the command line and in\nscript directives.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I String\n.br\nSyntax: \n.RS 11\n\"<option> <value> <option> <value>\"\n.RE\n.IP \nPython type:\n.I pbs.args\n.br\nTo set: \n.RS 11\nQmgr: s s default_qsub_arguments =\"<option> <value>\"\n.RE\n.IP\nTo add to existing: \n.RS 11\nQmgr: s s default_qsub_arguments +=\"<option> <value>\"\n.RE\n.IP\nExample: \n.RS 11\nQmgr: set server default_qsub_arguments = \"-r y -N MyJob\"\n.br\nQmgr: set server default_qsub_arguments += \"-l Blue=False\"\n.RE\n.IP\nDefault: No default\n\n.IP default_queue 8\nThe name of the default target queue.  Used for requests that do not\nspecify a queue name.  Must be set to an existing queue.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I Queue name\n.br\nPython type:\n.I pbs.queue\n.br\nDefault: \n.I workq\n\n.IP eligible_time_enable 8\nControls starving behavior. Toggles between using the value of the job's \n.I eligible_time \nattribute and the value of \n.I now() - etime\nto evaluate whether a job is starving.\n.br\nReadable by all; settable by Manager.\n.br\nFormat:\n.I Boolean\n.br\nPython type:\n.I bool\n.br\nBehavior:\n.RS\n.IP True 3\nThe value of the job's \n.I eligible_time \nattribute is used for its starving time.\n.IP False 3\nThe value of \n.I now() - etime \nis used for the job's starving time.\n.RE\n.IP \nDefault: \n.I False\n\n.IP est_start_time_freq 8\n.B Obsolete. \nNo longer used.\n\n.IP flatuid 8\nUsed for authorization allowing users to submit and alter jobs.\nSpecifies whether user names are treated as being the same across the\nPBS server and all submission hosts in the complex.  
Can be used to\nallow users without accounts at the server host to submit jobs.\n.br\nIf UserA has an account at the server host, PBS requires that\nUserA@<server host> is the same as UserA@<execution host>.\n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I Boolean\n.br\nPython type:\n.I bool\n.br\nBehavior:\n.RS\n.IP True 3\nPBS assumes that UserA@<submission host> is the same user as UserA@<server\nhost>.  Jobs that run under the name of the job owner do not need\nauthorization.  \n.br\nA job submitted under a different username, by using the \n.I -u \noption to the qsub command, requires authorization.  \n.br\nEntries in .rhosts or hosts.equiv are not checked, so even if\nUserA@host1 has an entry for UserB@host2, UserB@host2 cannot operate\non UserA@host1's jobs.  Users without accounts on the server host can\nsubmit jobs.\n.IP False 3\nPBS does not assume that UserA@<submission host> is the same user as\nUserA@<server host>.  Jobs that run under the name of the job owner\nneed authorization.  Users must have accounts on the server host to\nsubmit jobs.\n.RE\n.IP\nDefault: \n.I False\n(authorization is required)\n\n.IP FLicenses 8\nThe number of licenses currently available for allocation to unlicensed\nhosts.\n.br\nReadable by all; settable by Manager.\n.br\nFormat:\n.I Integer\n.br\nPython type: \n.I int\n.br\nDefault: No default\n\n.IP job_history_duration 8\nThe length of time PBS will keep each job's history.\n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I Duration\n.br\nSyntax: \n.I [[hours:]minutes:]seconds[.milliseconds]\n.br\nPython type:\n.I pbs.duration\n.br\nDefault: \n.I Two weeks\n\n.IP job_history_enable 8\nEnables job history management.  \nWhen set to \n.I True,\nPBS maintains job history.  
\n.br \nReadable by all; settable by Manager.\n.br\nFormat: \n.I Boolean\n.br\nPython type:\n.I bool\n.br\nDefault: \n.I False\n\n.IP job_requeue_timeout 8\nThe amount of time that can be taken while requeueing a job.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I Duration\n.br\nPython type:\n.I pbs.duration\n.br\nMinimum allowed value: \n.I 1 second\n.br \nMaximum allowed value: \n.I 3 hours\n.br\nDefault: \n.I 45 seconds\n\n.IP job_sort_formula 8\nFormula for computing job priorities.\nIf the attribute \n.I job_sort_formula \nis set, all schedulers use the formula in it to compute job\npriorities.  When a scheduler sorts jobs according to the formula, it\ncomputes a priority for each job, where that priority is the value\nproduced by the formula.  Jobs with a higher value get higher\npriority.\n.br\nReadable by all; settable by root.\n.br\nFormat: \n.I String \n.br\nSyntax:\nMathematical formula; can be made up of expressions, where\nexpressions contain terms which are added, subtracted,\nmultiplied, or divided, and which can contain parentheses, \nexponents, and unary plus and minus. \n.br\nPython type: \n.I pbs.job_sort_formula\n.br\nDefault: Unset\n\n.IP jobscript_max_size 8\nLimit on the size of any job script.  
\n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I size\n.br\nUnits default to bytes\n.br \nPython type: \n.I pbs.size\n.br\nDefault: \n.I 100MB\n\n.IP license_count 8\nThe \n.I license_count\nattribute contains the following elements with their values: \n.I Avail_Global, Avail_Local, Used, High_Use.\n.br\nReadable by all; settable by PBS only.\n.br\nFormat: \n.I String\n.br\nSyntax:\n.RS 11\n.I Avail_Global:<value> Avail_Local:<value> Used:<value> \n.I High_Use:<value>\n.RE\n.IP\n.RS\n.IP Avail_Global 3\nThe number of licenses available at the ALM license server\n(checked in).\n.LP\n.IP Avail_Local 3\nThe number of licenses kept by PBS (checked out).\n.LP\n.IP Used 3\nThe number of licenses currently in use.\n.LP\n.IP High_Use 3\nThe highest number of licenses checked out and used at any time \nby the current instance of the PBS server.\n.LP\n.RE\n.IP\nPython type: \n.I pbs.license_count\n.br\nDefault value:\n.RS 11\n.I Avail_Global:0 Avail_Local:0 Used:0 High_Use:0\n.RE\n\n.IP log_events 8\nThe types of events the server logs.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I Integer\nrepresentation of bit string\n.br\nPython type: \n.I int\n.br\nDefault: \n.I 511\n(all events)\n\n.IP mail_from 8\nThe username from which server-generated mail is sent to users.  \nMail is sent \n.B to\nthis address upon failover.\n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I String\n.br\nPython type: \n.I str\n.br\nDefault: \"adm\"\n\n.IP managers 8\nList of PBS Managers.\n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I String\n.br\nSyntax:\n.RS 11\n\"<username>@<hostname>.<subdomain>.<domain>[,<username>@<hostname>.<subdomain>.<domain> ...]\"\n.br\nThe \n.I hostname, subdomain, \nor \n.I domain \nmay be wildcarded with an asterisk (\"*\").\n.RE\n.IP\nPython type: \n.I pbs.acl\n.br\nDefault: Root on the server host\n\n.IP max_array_size 8\nThe maximum number of subjobs that are allowed in any array job.  
\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I Integer\n.br\nPython type: \n.I int\n.br\nDefault: \n.I 10000\n\n.IP max_concurrent_provision 8\nThe maximum number of vnodes allowed to be in the process of being provisioned.\nCannot be set to zero.\n.br\nWhen unset, default value is used.  \n.br\nReadable by all; settable by Manager.\n.br\nFormat:\n.I Integer\n.br\nPython type: \n.I int\n.br\nDefault:\n.I 5\n\n.IP max_group_res 8\nOld limit attribute.  Incompatible with new limit attributes.\nThe maximum amount of the specified resource that any single group may consume\nin this PBS complex. \n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I String\n.br\nSyntax: \n.I max_group_res.<resource name>=<value>\n.br\nPython type:\n.I pbs.pbs_resource\n.br\nSyntax:\n.br\n.RS 11\nmax_group_res[\"<resource name>\"]=<value> \n.br\nwhere \n.I resource name\nis any built-in or custom resource\n.RE\n.IP\nExample: set server max_group_res.ncpus=6\n.br\nDefault: No default\n.br\n\n.IP max_group_res_soft 8\nOld limit attribute.  Incompatible with new limit attributes.  The\nsoft limit on the amount of the specified resource that any single\ngroup may consume in this complex.  If a group is\nconsuming more than this amount of the specified resource, their jobs\nare eligible to be preempted by jobs from groups who are not over\ntheir soft limit.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I String\n.br\nSyntax: \n.I max_group_res_soft.<resource name>=<value>\n.br\nPython type:\n.I pbs.pbs_resource\n.br\nSyntax:\n.br\n.RS 11\nmax_group_res_soft[\"<resource name>\"]=<value> \n.br\nwhere \n.I resource name\nis any built-in or custom resource\n.RE\n.IP\nDefault: No default\n.br\n\n.IP max_group_run 8\nOld limit attribute.  
Incompatible with new limit attributes.\nThe maximum number of jobs owned by users in a single group that are\nallowed to be running within this complex at one time.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I Integer\n.br\nPython type: \n.I int\n.br\nDefault: No default\n\n.IP max_group_run_soft 8\nOld limit attribute.  Incompatible with new limit attributes.  The\nmaximum number of jobs owned by the users in one group allowed to be\nrunning in this complex at one time.  If a group has more than this\nnumber of jobs running, their jobs are eligible to be preempted by\njobs from groups who are not over their soft limit.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I Integer\n.br\nPython type: \n.I int\n.br\nDefault: No default\n\n.IP max_queued 8\nLimit attribute.  The maximum number of jobs allowed to be queued\nor running in the complex.  Can be specified for projects, users, groups, or all.\nCannot be used with old limit attributes. \n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I Limit specification\n.br\nPython type:\n.I str\n.br\nDefault: No default\n\n.IP max_queued_res 8\nLimit attribute.  The maximum amount of the specified resource \nallowed to be allocated to jobs queued or running in the complex.\nCan be specified for projects, users, groups, or all.\nCannot be used with old limit attributes. \n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I Limit specification\n.br\nPython type: \n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\nmax_queued_res[\"<resource name>\"]=<value> \n.br \nwhere \n.I resource name \nis any built-in or custom resource\n.RE\n.IP\nDefault: No default\n\n.IP max_run 8\nLimit attribute.  The maximum number of jobs allowed to be running \nin the complex.  Can be specified for projects, users, groups, or all.\nCannot be used with old limit attributes. 
\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I Limit specification\n.br\nPython type:\n.I str\n.br\nDefault: No default\n\n.IP max_run_res 8\nLimit attribute.  The maximum amount of the specified resource \nallowed to be allocated to jobs running in the complex.\nCan be specified for projects, users, groups, or all.\nCannot be used with old limit attributes.  \n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I Limit specification\n.br\nPython type: \n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\nmax_run_res[\"<resource name>\"]=<value> \n.br \nwhere \n.I resource name\nis any built-in or custom resource\n.RE\n.IP\nDefault: No default\n\n.IP max_run_res_soft 8\nLimit attribute.  Soft limit on the amount of the specified resource \nallowed to be allocated to jobs running in the complex.\nCan be specified for projects, users, groups, or all.\nCannot be used with old limit attributes.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I Limit specification\n.br\nPython type: \n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\nmax_run_res_soft[\"<resource name>\"]=<value> \n.br \nwhere\n.I resource name\nis any built-in or custom resource\n.RE\n.IP\nDefault: No default\n\n.IP max_run_soft 8\nLimit attribute.  Soft limit on the number of jobs allowed to be running \nin the complex.  Can be specified for projects, users, groups, or all.\nCannot be used with old limit attributes.  \n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I Limit specification\n.br\nPython type:\n.I str\n.br\nDefault: No default\n\n.IP max_running 8\nOld limit attribute.  Incompatible with new limit attributes.\nThe maximum number of jobs in this complex allowed to be \nrunning at any given time.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I Integer\n.br\nPython type:\n.I int\n.br\nDefault: No default\n\n.IP max_user_res 8\nOld limit attribute.  
Incompatible with new limit attributes.\nThe maximum amount of the specified resource that any single user may consume\nwithin this complex. \n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I String\n.br\nSyntax:\n.I max_user_res.<resource name>=<value>\n.br\n.br\nPython type: \n.I pbs.pbs_resource\n.br\nSyntax:\n.br\n.RS 11\nmax_user_res[\"<resource name>\"]=<value> \n.br\nwhere \n.I resource name \nis any built-in or custom resource\n.RE\n.IP\nExample: set server max_user_res.ncpus=6\n.br\nDefault: No default\n\n.IP max_user_res_soft 8\nOld limit attribute.  Incompatible with new limit attributes.  The\nsoft limit on the amount of the specified resource that any single\nuser may consume within this complex.  If a user is consuming more\nthan this amount of the specified resource, their jobs are eligible to\nbe preempted by jobs from users who are not over their soft limit.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I String\n.br\nSyntax:\n.I max_user_res_soft.<resource name>=<value>\n.br\nPython type: \n.I pbs.pbs_resource\n.br\nSyntax:\n.br\n.RS 11\nmax_user_res_soft[\"<resource name>\"]=<value> \n.br\nwhere \n.I resource name\nis any built-in or custom resource\n.RE\n.IP\nExample: set server max_user_res_soft.ncpus=3\n.br\nDefault: No default\n\n.IP max_user_run 8\nOld limit attribute.  Incompatible with new limit attributes.\nThe maximum number of jobs owned by a single user that are allowed to be\nrunning within this complex at one time.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I Integer\n.br\nPython type:\n.I int\n.br\nDefault: No default\n\n.IP max_user_run_soft 8\nOld limit attribute.  Incompatible with new limit attributes.\nThe soft limit on the number of jobs owned by a single user that are allowed to be\nrunning within this complex at one time.  
If a user has more than this number of jobs \nrunning, their jobs are eligible to be preempted by jobs from users who are not over \ntheir soft limit.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I Integer\n.br\nPython type: \n.I int\n.br\nDefault: No default\n\n.IP node_fail_requeue 8\nControls whether running jobs are automatically requeued or deleted\nwhen the primary execution host fails.  Number of seconds to wait after\nlosing contact with Mother Superior before requeueing or deleting jobs.  \n.br\nReverts to default value when server is restarted.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I Integer\n.br\nPython type:\n.I int\n.br\nBehavior:\n.RS\n.IP \"< 0\" 3\nBehaves as if set to \n.I 1.\n.IP 0 3\nJobs are not requeued; they are left in the \n.I Running \nstate until the execution host is recovered.  \n.IP \"> 0\" 3\nWhen the host has been down for the specified number of seconds, \njobs are requeued if they are marked as rerunnable, or are deleted.\n.IP Unset 3\nBehaves as if set to default value of \n.I 310.\n.RE\n.IP\nDefault: \n.I 310\n\n.IP resend_term_delay 8\nDelay in seconds before resending the TERM signal to a job.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nMust be >= 0 and <= 1800.\n.br\nFormat: \n.I Integer\n.br\nPython type:\n.I int\n.br\nBehavior:\n.RS\n.IP Unset 3\nBehaves as if set to default value of \n.I 5.\n.RE\n.IP\nDefault: \n.I 5\n\n.IP \"node_group_enable\" 8\nSpecifies whether placement sets (which include node grouping) are\nenabled.  See the \n.I node_group_key\nserver attribute.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I Boolean\n.br  \nPython type: \n.I bool\n.br\nExample: Qmgr> set server node_group_enable=true\n.br\nDefault: \n.I False\n\n.IP node_group_key 8\nSpecifies the resources to use for placement sets (node grouping).  \nOverridden by queue's \n.I node_group_key \nattribute.  
See \n.I node_group_enable\nserver attribute.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat:\n.I string_array\n.br\nSyntax:\n.RS 11\nWhen specifying multiple resources, separate them with commas and \nenclose the value in double quotes.\n.RE\n.IP\nPython type:\n.I pbs.node_group_key\n.br\nExample: Qmgr> set server node_group_key=\"ncpus,mem\"\n.br\nDefault: \n.I Unset\n\n.IP operators 8\nList of PBS Operators.\n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I String\n.br\nSyntax: \n.RS 11 \n<username>@<hostname>.<subdomain>.<domain\nname>[,<username>@<hostname>.<subdomain>.<domain name> ...]\n.br\nThe \n.I host, subdomain, \nor \n.I domain name \nmay be wildcarded with an asterisk (*).\n.RE\n.IP\nPython type:\n.I pbs.acl\n.br\nDefault: No default\n\n.IP pbs_license_file_location 8\n.B Deprecated.  \nDo not use.\n\n.IP pbs_license_info 8\nLocation of license server(s).  \n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I String\n.br\nSyntax: \n.RS 11 \nOne or more port number and hostname combinations:\n.br\n.I <port1>@<host1>[:<port2>@<host2>:...:<portN>@<hostN>]\n.br\nwhere \n.I host1, host2, ... hostN \ncan be IP addresses.\n.br\nDelimiter between items is colon (\":\").\n.RE\n.IP \nPython type: \n.I str\n.br\nDefault: No default\n\n.IP pbs_license_linger_time 8\nThe number of seconds to keep an unused license, when the number\nof licenses is above the value given by \n.I pbs_license_min.  \n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I Integer \n.br\nUnits:\n.I Seconds\n.br\nPython type: \n.I pbs.duration\n.br\nTo set \n.I pbs_license_linger_time:\n.br\n\\ \\ \\ Qmgr> set server pbs_license_linger_time=<value>\n.br\nTo unset \n.I pbs_license_linger_time:\n.br\n\\ \\ \\ Qmgr> unset server pbs_license_linger_time\n.br\nDefault: \n.I 3600 seconds\n\n.IP pbs_license_max 8\nMaximum number of licenses to be checked out at any time, i.e. the maximum\nnumber of licenses to keep in the PBS local license pool.  
Sets a\ncap on the number of nodes or sockets that can be licensed at one time.\n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I Integer\n.br\nPython type: \n.I int\n.br\nTo set\n.I pbs_license_max:\n.br\n\\ \\ \\ Qmgr> set server pbs_license_max=<value>\n.br\nTo unset \n.I pbs_license_max:\n.br\n\\ \\ \\ Qmgr> unset server pbs_license_max\n.br\nDefault: \n.I Maximum value for an integer\n\n.IP pbs_license_min 8\nMinimum number of nodes or sockets to permanently keep licensed, i.e. the minimum\nnumber of licenses to keep checked out in the PBS local license pool.  \n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I Integer\n.br\nPython type: \n.I int\n.br\nTo set \n.I pbs_license_min:\n.br\n\\ \\ \\ Qmgr> set server pbs_license_min=<value>\n.br\nTo unset \n.I pbs_license_min:\n.br\n\\ \\ \\ Qmgr> unset server pbs_license_min\n.br\nIf unset, PBS automatically sets the value to \n.I 0.\n.br\nDefault: \n.I 0\n\n.IP pbs_version 8\nThe version of PBS for this server.  \n.br\nReadable by all.\n.br\nFormat: \n.I String\n.br\nPython type: \n.I pbs.version\n.br\nDefault: No default\n\n.IP power_provisioning 8\nShows use of power profiles via PBS.\n.br\nReadable by all; set by PBS.\n.br\nFormat:\n.I Boolean\n.br\nPython type:\n.I bool\n.br\nBehavior:\n.RS\n.IP True 3\nPower provisioning is enabled.  \n.IP False 3\nPower provisioning is disabled.\n.RE\n.IP \nDefault: \n.I False\n\n.IP python_restart_max_hooks 8\nThe maximum number of hooks to be serviced before the Python\ninterpreter is restarted.  If this number is exceeded, and the time\nlimit set in \n.I python_restart_min_interval \nhas elapsed, the Python interpreter is restarted.\n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I Integer\n.br\nPython type: \n.I int\n.br\nDefault: \n.I 100\n\n.IP python_restart_max_objects 8\nThe maximum number of objects to be created before the Python\ninterpreter is restarted.  
If this number is exceeded, and the time\nlimit set in \n.I python_restart_min_interval \nhas elapsed, the Python interpreter is restarted.\n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I Integer\n.br\nPython type:\n.I int\n.br\nDefault: \n.I 1000\n\n.IP python_restart_min_interval 8\nThe minimum time interval before the Python interpreter is restarted.\nIf this interval has elapsed, and either the maximum number of hooks\nto be serviced (set in \n.I python_restart_max_hooks\n) has been exceeded or\nthe maximum number of objects to be created (set in\n.I python_restart_max_objects\n) has been exceeded, the Python interpreter is restarted.\n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I Integer \n.br\nUnits: \n.I Seconds \nor \n.I Duration [[HH:]MM:]SS\n.br\nPython type: \n.I pbs.duration\n.br\nDefault: \n.I 30\n\n.IP query_other_jobs 8\nControls whether unprivileged users are allowed to select or query the\nstatus of jobs owned by other users.\n.br\nWhen \n.I True, \nunprivileged users can query or select other users' jobs.\n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I Boolean\n.br\nPython type: \n.I bool\n.br\nDefault: \n.I True\n\n.IP queued_jobs_threshold 8\nLimit attribute.  The maximum number of jobs allowed\nto be queued in the complex.  Can be specified for\nprojects, users, groups, or all.  Cannot be used with old limit\nattributes.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I Limit specification\n.br\nPython type: \n.I str\n.br\nDefault: No default\n\n.IP queued_jobs_threshold_res 8\nLimit attribute.  The maximum amount of the specified resource allowed\nto be allocated to jobs queued in the complex.  Can be specified for\nprojects, users, groups, or all.  
Cannot be used with old limit\nattributes.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I Limit specification\n.br\nPython type: \n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\nqueued_jobs_threshold_res[\"<resource name>\"]=<value> \n.br\nwhere \n.I resource name \nis any built-in or custom resource\n.RE\n.IP\nDefault: No default\n\n.IP reserve_retry_cutoff 8\n.B Obsolete.\nNo longer used.\n\n.IP reserve_retry_init 8\n.B Deprecated.\nThe amount of time after a reservation becomes degraded that\nPBS waits before attempting to reconfirm the reservation.\nWhen this value is changed, only reservations that become degraded\nafter the change use the new value.\n.br\nReadable and settable by Manager only.\n.br\nFormat: \n.I Integer \n.br\nUnits: \n.I Seconds\n.br\nPython type: \n.I int\n.br\nValid values: Must be greater than \n.I zero\n.br\nDefault: \n.I 7200 \n(2 hours)\n\n.IP reserve_retry_time 8\nThe amount of time after a reservation becomes degraded that\nPBS waits before attempting to reconfirm the reservation, as well\nas the amount of time between attempts to reconfirm degraded \nreservations.\nWhen this value is changed, only reservations that become degraded\nafter the change use the new value.\n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I Integer \n.br\nUnits: \n.I Seconds\n.br\nPython type: \n.I int\n.br\nValid values: Must be greater than \n.I zero\n.br\nDefault: \n.I 600 \n(10 minutes)\n\n.IP resources_assigned 8\nThe total of each type of resource allocated to jobs running in this\ncomplex, plus the total of each type of resource allocated to any reservation.\nReservation resources are added when the reservation starts.\n.br\nReadable by all; settable by PBS only.\n.br\nFormat: \n.I String\n.br\nPython type: \n.I pbs.pbs_resource\n.br\nSyntax:\n.br\n.RS 11\nresources_assigned[\"<resource name>\"]=<value> \n.br\nwhere\n.I resource name\nis any built-in or custom resource\n.RE\n.IP\nDefault: No default\n\n.IP 
resources_available 8\nThe list of available resources and their values defined on the server.  \nEach resource is listed on a separate line.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I String \n.br\nSyntax: \n.RS 11\n\"resources_available.<resource name>=<value>\"\n.br\nwhere \n.I resource name\nis any built-in or custom resource\n.RE\n.IP\nPython type:\n.I pbs.pbs_resource\n.br\nSyntax:\n.br\n.RS 11\nresources_available[\"<resource name>\"]=<value> \n.br\nwhere\n.I resource name\nis any built-in or custom resource\n.RE\n.IP\nDefault: No default\n\n.IP resources_cost 8\nNo longer used.\n\n.IP resources_default 8\nThe list of default job-wide resource values that are set as limits\nfor jobs in this complex when a) the job does not specify a limit, and\nb) there is no queue default.  The value for a string array,\ne.g. \n.I resources_default.<string array resource>, \ncan contain only one string.  For host-level resources, see the \n.I default_chunk.<resource name> \nserver attribute.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I String\n.br\nSyntax:\n.I resources_default.<resource name>=value[, ...]\n.br\nPython type: \n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\nresources_default[\"<resource name>\"]=<value> \n.br\nwhere\n.I resource name \nis any built-in or custom resource\n.RE\n.IP \nDefault: No limit\n\n.IP resources_max 8 \nThe maximum amount of each resource that can be requested by any\nsingle job in this complex, if there is not a \n.I resources_max \nvalue defined for the queue at which the job is targeted.  
This\nattribute functions as a gating value for jobs entering the PBS\ncomplex.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I String\n.br\nSyntax:\n.I resources_max.<resource name>=value[, ...]\n.br\nPython type: \n.I pbs.pbs_resource\n.br\nSyntax:\n.RS 11\nresources_max[\"<resource name>\"]=<value> \n.br\nwhere\n.I resource name \nis any built-in or custom resource\n.RE\n.IP\nDefault: No limit\n\n.IP restrict_res_to_release_on_suspend 8\nComma-separated list of consumable resources to be released when\njobs are suspended.  If unset, all consumable resources are released\non suspension.\n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I String array\n.br\nSyntax: \n.I comma-separated list\n.br\nPython type:\n.I Python list\n.br\nDefault: \n.I Unset\n\n.IP \"resv_enable\" 8\nSpecifies whether or not advance and standing reservations can be \ncreated at this server.  \n.br\nReadable by all; settable by Manager.\n.br\nFormat:\n.I Boolean\n.br\nPython type:\n.I bool\n.br\nBehavior:\n.RS 11\nWhen set to \n.I True, \nnew reservations can be created.  When changed from \n.I True \nto \n.I False, \nnew reservations cannot be created, but existing reservations are honored.\n.RE\n.IP\nDefault: \n.I True\n\n.IP \"resv_post_processing_time\" 8\nThe amount of time allowed for reservations to clean up after running jobs.\n.br\nReservation duration and end time are extended by this amount of time.  
Jobs\nare not allowed to run during the cleanup period.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I Duration\n.br\nSyntax: \n.I [[hours:]minutes:]seconds[.milliseconds]\n.br\nPython type: \n.I int\n.br\nBehavior: When unset, behaves as if set to \n.I zero.\n.br\nDefault: \n.I Unset\n\n.IP rpp_highwater 8\nThe maximum number of messages.\n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I Integer\n.br\nPython type:\n.I int\n.br\nValid values: Greater than or equal to \n.I one\n.br\nDefault: \n.I 1024\n\n.IP rpp_max_pkt_check 8\nMaximum number of TPP messages processed by the main server thread per iteration.\n.br\nReadable by all; settable by Manager.\n.br\nFormat:\n.I Integer\n.br\nPython type: \n.I int\n.br\nDefault: \n.I 1024\n\n.IP rpp_retry 8\nIn a fault-tolerant setup (multiple pbs_comms), when the first\npbs_comm fails partway through a message, this is the number of times TPP\ntries to use the first pbs_comm.\n.br\nReadable by all; settable by Manager.\n.br\nFormat: \n.I Integer\n.br\nPython type:\n.I int\n.br\nValid values: Greater than or equal to \n.I zero\n.br\nDefault: \n.I 10\n\n.IP scheduler_iteration 8\nThe time between scheduling iterations.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I Integer \n.br\nUnits:\n.I Seconds\n.br\nPython type: \n.I pbs.duration\n.br\nDefault: \n.I 600\n(10 minutes)\n\n.IP scheduling 8\nEnables scheduling of jobs.  Specified by value of \n.I -a \noption to\n.I pbs_server \ncommand.  If \n.I -a \nis not specified, value is taken from previous invocation of \n.I pbs_server.\n.br\nReadable by all; settable by Operator and Manager.\n.br\nFormat: \n.I Boolean \n.br\nPython type: \n.I bool\n.br\nDefault: \n.I False\nif never set via \n.I pbs_server \ncommand\n\n.IP server_host 8\nThe name of the host on which the active server is running.\nIf the secondary server takes over, this attribute is set to the \nname of the secondary server's host.  
When the primary server\ntakes control again, this attribute shows the name of the primary\nserver's host.\n.br\nReadable by all; settable by PBS only.\n.br\nFormat: \n.I String\n.br\nSyntax: \n.I <hostname>.<domain name>\n.RS 11\nIf the server is listening to a non-standard port, the port number is\nappended, with a colon, to the hostname: \n.I <hostname>.<domain name>:<port number>\n.RE\n.IP\nPython type: \n.I str\n.br\nDefault: No default\n \n.IP server_state 8\nThe current state of the server.\n.br\nReadable by all; settable by PBS only.\n.br\nFormat: \n.I String\n.br\nPython type: \n.RS 11\nEach server state's Python type is its corresponding server state constant\n.RE\n.IP\nServer states and their Python types:\n.RS\n.IP \"Active                pbs.SV_STATE_ACTIVE\" 3\nThe server is running.  The scheduler is between scheduling cycles.\n\n.IP \"Hot_Start             pbs.SV_STATE_HOT \" 3\nThe server will run first any jobs that were running when it \nwas shut down.\n\n.IP \"Idle                  pbs.SV_STATE_IDLE\" 3\nThe server is running.  The default scheduler's \n.I scheduling\nattribute is \n.I False.\n\n.IP \"Scheduling            pbs.SV_STATE_ACTIVE\" 3\nThe server is running.  The scheduler is in a scheduling cycle.\n\n.IP \"Terminating    pbs.SV_STATE_SHUTIMM or pbs.SV_STATE_SHUTSIG\" 3\nThe server is terminating.  No additional jobs will be run.\n\n.IP \"Terminating_Delayed   pbs.SV_STATE_SHUTDEL\" 3\nThe server is terminating in delayed mode.  No new jobs will be run.\nThe server will shut down when all running jobs have finished.\n.RE\n.IP\nDefault: No default\n\n.IP single_signon_password_enable 8\n.B Removed. (2020.1)\n\n.IP state_count 8\nList of the number of jobs in each state in the complex.  
Suspended\njobs are counted as running.\n.br\nReadable by all; settable by PBS only.\n.br\nFormat: \n.I String\n.br\nSyntax:\n.I transiting=<value>, queued=<value>, ...\n.br\nPython type: \n.I pbs.state_count\n.br\nDefault: No default\n\n.IP system_cost 8\nNo longer used.\n\n.IP total_jobs 8\nThe total number of jobs in the complex.  If the \n.I job_history_enable\nattribute is set to \n.I True, \nthis includes jobs that are finished, deleted, and moved.\n.br\nReadable by all; settable by PBS only.\n.br\nFormat: \n.I Integer\n.br\nPython type: \n.I int\n.br\nDefault: No default\n\n.SH Incompatible Limit Attributes\nThe old and new limit attributes are incompatible.  \nIf any of one kind is set, none of the other kind can be set.\nAll of one kind must be unset in order to set any of the other kind.\n\n.SH FORMATS\n\n.IP Duration 8\nSyntax: either\n.I [[hours:]minutes:]seconds[.milliseconds]\nor \n.I an integer, whose units are seconds \n\n.IP \"Limit specification\" 8\nLimit attributes can be set, added to, or removed from.\n.RS\nFormat for setting a \n.I limit specification:\n.RS 3\nset server <limit attribute> = \"[<limit specification>=<limit value>], [<limit specification>=<limit value>] ...\"\n.RE\nFormat for adding to a \n.I limit specification:\n.RS 3\nset server <limit attribute> += \"[<limit specification>=<limit value>], [<limit specification>=<limit value>] ...\"\n.RE\nFormat for removing from a \n.I limit specification:\n.RS 3\nset server <limit attribute> -= \"[<limit specification>=<limit value>], [<limit specification>=<limit value>] ...\"\n.br\nor\n.br\nset server <limit attribute> -= \"[<limit specification>], [<limit specification>] ...\"\n.RE\n\nWhere \n.I limit specification\nis \n.RS 3\n.IP \"o:PBS_ALL\" 3\nOverall limit\n.IP \"p:PBS_GENERIC\" 3\nGeneric projects\n.IP \"p:<project name>\" 3\nAn individual project\n.IP \"u:PBS_GENERIC\" 3\nGeneric users\n.IP \"u:<user name>\" 3\nAn individual user\n.IP \"g:PBS_GENERIC\" 3\nGeneric groups\n.IP 
\"g:<group name>\" 3\nAn individual group\n.RE\n\nThe \n.I limit specification \ncan contain spaces anywhere except after the colon\n(\":\").\n.br\nIf there are comma-separated \n.I limit specifications, \nthe entire string must be enclosed in double quotes.\n.br\nA user name, group name, or project name containing spaces must be\nenclosed in quotes.\n.br\nIf a user name, group name, or project name is quoted using double\nquotes, and the entire string requires quotes, the outer enclosing\nquotes must be single quotes.  Similarly, if the inner quotes are\nsingle quotes, the outer quotes must be double quotes.\n.br\nPBS_ALL is a keyword which indicates that this limit applies to \nthe usage total.\n.br\nPBS_GENERIC is a keyword which indicates that this limit applies to\ngeneric users, groups, or projects.\n.br\nWhen removing a limit, the \n.I limit value \ndoes not need to be specified.\n.br\nPBS_ALL and PBS_GENERIC are case-sensitive.\n.br\n\nFormat for setting a limit attribute:\n.RS 3\nset server <limit attribute> = \"[<limit specification>=<limit value>],\n[<limit specification>=<limit value>], ...\"\n.br\n\nset queue <queue name> <limit attribute> = \"[<limit specification>=\n<limit value>], [<limit specification>=<limit value>], ...\"\n.RE\n.RE\n\n.IP\nFor example, to set the \n.I max_queued \nlimit on QueueA to 5 for total usage, and to limit user bill to 3:\n.br\n\\ \\ \\ Qmgr> set queue QueueA max_queued = \"[o:PBS_ALL=5], [u:bill=3]\"\n\nExamples of setting, adding, and removing: \n.RS \n.RS 3\nset server max_run=\"[u:PBS_GENERIC=2], [g:group1=10], [o:PBS_ALL = 100]\"\n.br\nset server max_run+=\"[u:user1=3], [g:PBS_GENERIC=8]\"\n.br\nset server max_run-=\"[u:user2], [g:group3]\"\n.RE\n.RE\n\n.IP\n\n.SH SEE ALSO\nqdel(1B),\nqmgr(1B),\nqsub(1B)\n"
  },
  {
    "path": "doc/man1/pbsdsh.1B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbsdsh 1B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbsdsh \n\\- distribute tasks to vnodes under PBS\n\n.SH SYNOPSIS\n.B pbsdsh \n[-c <copies>] [-s] [-v] [-o] -- <program> [<program args>]\n.br\n.B pbsdsh \n[-n <vnode index>] [-s] [-v] [-o] -- <program> [<program args>]\n.br\n.B pbsdsh \n--version\n\n.SH DESCRIPTION\nThe \n.B pbsdsh\ncommand allows you to distribute and execute a task on each of the vnodes\nassigned to your job by executing (spawning) the application on each\nvnode.  The \n.B pbsdsh \ncommand uses the PBS Task Manager, or TM, to distribute the program on \nthe allocated vnodes.\n\nWhen run without the \n.I -c \nor the \n.I -n \noption, \n.B pbsdsh \nwill spawn the program\non all vnodes allocated to the PBS job.  The spawns take place concurrently;\nall execute at (about) the same time.\n\nNote that the double dash must come after the options and before the \nprogram and arguments.  The double dash is only required for Linux.\n\nThe \n.B pbsdsh \ncommand runs one task for each line in the $PBS_NODEFILE.  
Each MPI rank \ngets a single line in the $PBS_NODEFILE, so if you are running multiple \nMPI ranks on the same host, you still get multiple \n.B pbsdsh \ntasks on that host.\n\n.B Example\n.br\nThe following example shows the \n.B pbsdsh \ncommand inside a PBS batch job.  The options indicate that the user wants \n.B pbsdsh \nto run the myapp program with one argument (app-arg1) on all four vnodes \nallocated to the job (i.e. the default behavior).\n\n.RS 5\n#!/bin/sh\n.br\n#PBS -l select=4:ncpus=1\n.br\n#PBS -l walltime=1:00:00\n\npbsdsh ./myapp app-arg1\n.RE\n\n.SH OPTIONS\n.IP \"-c <copies>\"\nThe program is spawned \n.I copies\ntimes on the vnodes allocated, one per vnode, unless\n.I copies \nis greater than the number of vnodes.\nIf \n.I copies\nis greater than the number of vnodes, \nit wraps around, running multiple instances on some vnodes.\nThis option is mutually exclusive with \n.I -n.\n.IP \"-n <vnode index>\"\nThe program is spawned only on a single vnode, which is the \n.I vnode index -th\nvnode allocated.  This option is mutually exclusive with \n.I -c.\n.IP -o\nNo obit request is made for spawned tasks.  The program does not wait for\nthe tasks to finish.\n.IP -s\nThe program is run in turn on each vnode, one after the other.\n.IP -v\nProduces verbose output about error conditions and task exit status.\n\n.IP --version\nThe \n.B pbsdsh\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH OPERANDS\n.IP program\nThe first operand,\n.I program ,\nis the program to execute.  The double dash must precede the \n.I program \nunder Linux.\n.LP\n.IP \"program args\"\nAdditional operands,\n.I program args ,\nare passed as arguments to the \n.I program.\n\n.SH STANDARD ERROR\nThe \n.B pbsdsh \ncommand writes a diagnostic message to standard error for\neach error occurrence.\n\n.SH SEE ALSO\nqsub(1B), tm(3).\n"
  },
  {
    "path": "doc/man1/qalter.1B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH qalter 1B \"25 January 2021\" Local \"PBS Professional\"\n.SH NAME\n.B qalter \n\\- alter PBS job\n\n\n.SH SYNOPSIS\n.B qalter\n[-a <date and time>] [-A <account string>] \n.RS 7\n[-c <checkpoint spec>] \n[-e <error path>] \n[-h <hold list>] [-j <join>] [-k <discard>] [-l <resource list>]\n[-m <mail events>] [-M <user list>] [-N <name>] [-o <output path>] \n[-p <priority>] [-P <project>] [-r <y|n>] [-S <path list>] [-u <user list>] \n[-W <additional attributes>] \n<job ID> [<job ID> ...]\n.RE\n\n.B qalter\n--version\n\n.SH DESCRIPTION\nThe \n.B qalter \ncommand is used to alter one or more PBS batch jobs.\nEach of certain job attributes can be modified using the \n.B qalter\noption for that attribute.\n\n\n.B Required Privilege\n.br\nA non-privileged user can alter their own jobs, whether they are \nqueued or running.  An Operator or Manager can alter any job, \nwhether it is queued or running.\n\nA non-privileged user can only lower resource requests.\nA Manager or Operator can raise or lower resource requests.\n\n.B Modifying resources and job placement\n.br\n\nA Manager or Operator may lower or raise requested resource limits,\nexcept for per-process limits such as \n.I pcput \nand \n.I pmem, \nbecause these are set when the process starts, and enforced by the\nkernel.  
A non-privileged user can only lower resource requests.\n\nThe \n.B qalter \ncommand cannot be used by a non-privileged user to alter a\ncustom resource which has been created to be invisible or read-only\nfor users.\n\nIf a job is running, the only resources that can be modified are\n.I mppnodes, mppt, cput, walltime, min_walltime, \nand \n.I max_walltime.\n\nIf a job is queued, any resource mentioned in the options to the\n.B qalter \ncommand can be modified, but requested modifications must fit\nwithin the limits set at the server and queue for the amount of each\nresource allocated for queued jobs.  If a requested modification does\nnot fit within these limits, the modification is rejected.\n\nA job's resource request must fit within the queue's and server's\nresource run limits.  If a modification to a resource exceeds the\namount of the resource allowed by the queue or server to be used by\nrunning jobs, the job is never run.\n\nRequesting resources includes setting limits on resource usage and\ncontrolling how the job is placed on vnodes.\n\n.B Syntax for Modifying Resources and Job Placement\n.br\nResources are modified\nby using the \n.I -l\noption, either in\n.I chunks \ninside of \n.I selection statements,\nor in job-wide requests using\n.I <resource name>=<value>\npairs.\nThe \n.I selection statement \nis of the form:\n.IP\n.I -l select=[<N>:]<chunk>[+[<N>:]<chunk> ...]\n.LP\nwhere \n.I N\nspecifies how many of that chunk, and \na \n.I chunk \nis of the form:\n.IP\n.I <resource name>=<value>[:<resource name>=<value> ...]\n.LP\nJob-wide \n.I <resource name>=<value>\nrequests are of the form:\n.IP\n.I -l <resource name>=<value>[,<resource name>=<value> ...]\n.LP\n.B The Place Statement\n.br\nYou choose how your chunks are placed using the \n.I place statement.  
\nThe \n.I place statement\ncan contain the following elements, in any order:\n.IP\n.I -l place=[<arrangement>][:<sharing>][:<grouping>]\n.LP\nwhere\n.br\n.I \\ \\ \\ arrangement\n.RS 13\nWhether this chunk is willing to share this vnode or host with other\nchunks from the same job.  One of \n.I free | pack | scatter | vscatter\n.RE\n.LP\n.I \\ \\ \\ sharing\n.RS 13\nWhether this chunk is willing to share this vnode or host with other\njobs.  One of \n.I excl | shared | exclhost\n.RE\n.LP \n.I \\ \\ \\ grouping\n.RS 13\nWhether the chunks from this job should be placed on vnodes that all \nhave the same value for a resource.  Can have only one instance of \n.I group=<resource name>\n.RE\n.LP\n.I \\ \\ \\ \\ \\ free\n.RS 15\nPlace job on any vnodes.\n.RE\n.LP\n.I \\ \\ \\ \\ \\ pack\n.RS 15\nAll chunks are taken from one host.\n.RE\n.LP\n.I \\ \\ \\ \\ \\ scatter\n.RS 15\nOnly one chunk with any MPI processes is taken from a host.  A chunk\nwith no MPI processes may be taken from the same vnode as another chunk.\n.RE\n.LP\n.I \\ \\ \\ \\ \\ vscatter\n.RS 15\nOnly one chunk is taken from any vnode.  Each chunk must fit on a vnode.\n.RE\n.LP\n.I \\ \\ \\ \\ \\ excl\n.RS 15\nOnly this job uses the vnodes chosen.\n.RE\n.LP\n.I \\ \\ \\ \\ \\ shared\n.RS 15\nThis job can share the vnodes chosen.\n.RE\n.LP\n.I \\ \\ \\ \\ \\ exclhost\n.RS 15\nThe entire host is allocated to the job.\n.RE\n.LP\n.I \\ \\ \\ \\ \\ group=<resource name>\n.RS 15\nChunks are grouped according to a resource.  All vnodes in the group must \nhave a common value for \n.I resource, \nwhich can be either the built-in resource\n.I host\nor a custom vnode-level resource.  The\n.I resource name \nmust be a string or a string array.\n.RE\n.LP\n\nThe \n.I place statement\ncannot begin with a colon.  
Colons are delimiters; use them only to separate\nparts of a place statement, unless they are quoted inside resource values.\n\nFor more on resource requests, usage limits and job placement, see\n.B pbs_resources(7B).\n\n.B Modifying Attributes\n.br\nThe user alters job attributes via options to the \n.B qalter \ncommand.  Each\n.B qalter \noption changes a job attribute.  See pbs_job_attributes(7B).\n\nThe behavior of the \n.B qalter \ncommand may be affected by \nany site hooks.  Site hooks can modify the job's attributes, \nchange its routing, etc.\n\nTo modify the \n.I max_run_subjobs \nattribute, use \n.B qalter -Wmax_run_subjobs=<new value> <job ID>.\n\n.SH Caveats and Restrictions for Altering Jobs\nWhen you lengthen the \n.I walltime\nof a running job, make sure that the new \n.I walltime\nwill not interfere with any existing reservations etc.\n\nIf any of the modifications to a job fails, \nnone of the job's attributes is modified.\n\nA job that is in the process of provisioning cannot be altered.\n\n.SH OPTIONS\n\n.IP \"-a <date and time>\" 8\nChanges the point in time after which the job is eligible for execution.\nGiven in pairs of digits.  Sets job's \n.I Execution_Time\nattribute to \n.I date and time.\nFormat:\n.RS 13\n.I \"[[[[CC]YY]MM]DD]hhmm[.SS]\"\n.RE\n.IP\nwhere CC is the century,\nYY is the year, \nMM is the month,\nDD is the day of the month, \nhh is the hour, mm is the minute,\nand SS is the seconds.\n\nEach portion of the date defaults to the current date, as long as the \nnext-smaller portion is in the future.  For example, if today is the\n3rd of the month and the specified day \n.I DD \nis the 5th, the month \n.I MM\nis set to the current month.\n\nIf a specified portion has already passed, the next-larger portion is set\nto one after the current date.  For example, if the day\n.I DD\nis not specified, but the hour \n.I hh \nis specified to be 10:00 a.m. 
and the current time is 11:00 a.m., \nthe day \n.I DD\nis set to tomorrow.\n\nThe job's \n.I Execution_Time\nattribute can be altered after the job has begun execution, in which\ncase it will not take effect until the job is rerun.\n\n.IP \"-A <account string>\" 8\nReplaces the accounting string associated with the job.  Used for labeling accounting data.\nSets job's \n.I Account_Name \nattribute to \n.I account string.\nThis attribute cannot be altered once the job has begun execution.\n.br\nFormat: \n.I String\n\n.IP \"-c <checkpoint spec>\" 8\nChanges when the job will be checkpointed.  Sets job's \n.I Checkpoint\nattribute.  An \n.I $action\nscript is required to checkpoint the job.  See the \n.B pbs_mom(8B)\nman page.  \nThis attribute can be altered after the job has begun execution, \nin which case the new value will not take effect until the\njob is rerun.\n.IP\nThe argument \n.I checkpoint spec\ncan take one of the following values:\n.RS\n.IP c 5\nCheckpoint at intervals, measured\nin CPU time, set on job's execution queue.  \nIf no interval set at queue, job is not checkpointed.\n\n.IP \"c=<minutes of CPU time>\" 5\nCheckpoint at intervals of specified number of minutes of job CPU\ntime.  This value must be greater than zero.  If the interval specified is less than\nthat set on the job's execution queue, the queue's interval is used.\n.br\nFormat: \n.I Integer\n\n.IP w 5\nCheckpoint at intervals, measured in walltime, set on job's execution queue.\nIf no interval set at queue, job is not checkpointed.\n\n.IP \"w=<minutes of walltime>\" 5\nCheckpoint at intervals of the specified number\nof minutes of job walltime.  This value must be greater\nthan zero.  If the interval specified is less than that set on the\njob's execution queue, the queue's interval is used.\n.br\nFormat: \n.I Integer\n.IP n 5\nNo checkpointing.\n.IP s 5\nCheckpoint only when the server is shut down.\n\n.IP u 5\nUnset.  
Defaults to behavior when \n.I interval\nargument is set to \n.I s.\n.LP\nDefault: \n.I u\n.br\nFormat: \n.I String\n.RE\n.RE\n\n.IP \"-e <error path>\" 8\nReplaces the path to be used for the job's standard error stream.\nSets job's \n.I Error_Path \nattribute to \n.I error path.\nOverridden by \n.I -k \noption.\n.br\nFormat:\n.RS 13\n.I [<hostname>:]<path>\n.RE\n.RS\nThe \n.I error path \nis interpreted as follows:\n\n.IP path 5\nIf\n.I path \nis relative, it is taken to be relative to \nthe current working directory of the \n.B qalter\ncommand, where it is executing \non the current host.\n\nIf\n.I path\nis absolute, it is taken to be an absolute path on \nthe current host where the \n.B qalter\ncommand is executing.\n\n.IP hostname:path 5\nIf \n.I path\nis relative, it is taken to be relative to the user's \nhome directory on the host named\n.I hostname.\n\nIf \n.I path\nis absolute, it is the absolute path on the host named\n.I hostname.\n.LP\nIf \n.I path\ndoes not include a filename, the default filename is\n.RS \n<job ID>.ER\n.RE\nIf the\n.I -e\noption is not specified, PBS writes standard error to the default filename,\nwhich has this form:\n.RS \n.I <job name>.e<sequence number>\n.RE\n\nThis attribute can be altered after the job has begun execution, \nin which case the new value will not take effect until the\njob is rerun.\n\nIf you use a UNC path, the hostname is optional.  If you use a non-UNC \npath, the hostname is required.\n.RE\n\n.IP \"-h <hold list>\" 8\nUpdates the job's hold list.\nAdds \n.I hold list\nto the job's \n.I Hold_Types \nattribute.\nThe \n.I hold list\nis a string of one or more characters.  
The following table shows the \nholds and the privilege required to set each:\n.RS\n\nHold  Meaning       Who Can Set\n.br\n--------------------------------------------------------------\n.IP u 6\nUser          Job owner, Operator, Manager, \n.br\n              administrator, root\n.IP o 6\nOther         Operator, Manager, administrator, root\n.IP s 6\nSystem        Manager, administrator, root, \n.br\n              PBS (dependency)\n.IP n 6\nNone          Job owner, Operator, Manager, \n.br\n              administrator, root\n.IP p 6\nBad password  Administrator, root\n.LP\nThis attribute can be altered after the job has begun execution, \nin which case the new value will not take effect until the\njob is rerun.\n.RE\n\n.IP \"-j <join>\" 8\nChanges whether and how to join the job's standard error \nand standard output streams.\nSets job's \n.I Join_Path\nattribute to \n.I join.\n.br\nThis attribute can be altered after the job has begun execution, in which\ncase the new value will not take effect until the job is rerun.\n\nThe \n.I join\nargument can take the following values:\n.RS\n.IP oe 5\nStandard error and standard output are merged into standard output.\n.IP eo 5\nStandard error and standard output are merged into standard error.\n.IP n 5\nStandard error and standard output are not merged.\n.RE\n.IP\nDefault: \n.I n;\nnot merged\n.LP\n\n.IP \"-k <discard>\" 8\nChanges whether and which of the standard output and standard error streams\nare left behind on the execution host, and whether they are written to their \nfinal destinations.  
Sets the job's \n.I Keep_Files \nattribute to \n.I discard.\n\nThis attribute cannot be altered once the job has begun execution.\n\nIn the case where output and/or error is retained on the execution host in\na job-specific staging and execution directory created by PBS, these\nfiles are deleted when PBS deletes the directory.\n\nDefault: \n.I n;\nneither stream is retained, and files are not written directly to their final destinations\n\nThe \n.I discard\nargument can take the following values:\n.RS\n\n.IP e 5\nThe standard error stream is retained on the execution host, in the \njob's staging and execution directory.  The filename is:\n.RS\n.RS 5\n<job name>.e<sequence number>\n.RE\n.RE\n\n.IP o 5\nThe standard output stream is retained on the execution host, in the \njob's staging and execution directory.  The filename is:\n.RS\n.RS 5\n<job name>.o<sequence number>\n.RE\n.RE\n\n.IP \"eo, oe\" 5\nBoth standard output and standard error streams are \nretained on the execution host, in the job's staging \nand execution directory.\n\n.IP d 5\nOutput and/or error are written directly to their final destination.\nOverrides the action of leaving files behind on the execution host.\n\n.IP n 5\nNeither stream is retained.\n.RE\n\n.IP \"-l <resource list>\" 8\n.RS\nAllows the user to change requested resources and job\nplacement.  
Sets job's \n.I Resource_List \nattribute to \n.I resource list.\nUses resource request syntax.\nRequesting a resource places a limit on its usage.\nUsers without manager or operator privilege cannot alter a custom \nresource which was created to be invisible or read-only for users.\n\nFor syntax, see \n.B \"Syntax for Modifying Resources and Job Placement\"\nabove.\n\nIf a requested modification to a resource would exceed the server's or\njob queue's limits, the resource request is rejected.\nWhich resources can be altered is system-dependent.\n\nIf the job was submitted with an explicit \"-l select=\", vnode-level \nresources must be qaltered using the \"-l select=\" form.  In this\ncase a vnode-level resource cannot be qaltered with \nthe \"-l <resource name>\" form.\n\nThe place statement cannot begin with a colon.\n\nFor example: \n.RS\nSubmit the job:\n.br\n% qsub -l select=1:ncpus=2:mem=512mb jobscript\n.br\nJob's ID is 230\n\nqalter the job using the \"-l <resource name>\" form:\n.br\n% qalter -l ncpus=4 230\n.br\n\nError reported by qalter:\n.br\nqalter: Resource must only appear in \"select\" specification \nwhen select is used: ncpus 230\n.br\n\nqalter the job using the \"-l select=\" form:\n.br\n% qalter -l select=1:ncpus=4:mem=512mb 230\n.br\n\nNo error reported by qalter:\n.br\n%\n.RE\n\nFor more on resource requests, usage limits and job placement, see\n.B pbs_resources(7B).\n.RE\n\n.IP \"-m <mail events> \" 8\nChanges the set of conditions under which mail about the job is sent.\nSets job's \n.I Mail_Points\nattribute to \n.I mail events.  
\nThe \n.I mail events\nargument can be one of the following:\n.RS 11\nThe single character \"n\"\n.br\nAny combination of \"a\", \"b\", and \"e\", with optional \"j\"\n.RE\n.IP\nThe following table lists the sub-options to the \n.I -m \noption:\n.RS\n.IP n 5\nNo mail is sent\n.IP a 5\nMail is sent when the job is aborted by the batch system\n.IP b 5\nMail is sent when the job begins execution\n.IP e 5\nMail is sent when the job terminates\n.IP j 5\nMail is sent for subjobs.  Must be combined with one or more of the\n.IR a ,\n.IR b ,\nor\n.I e\nsub-options\n.RE\n.IP\nCan be used with job arrays but not subjobs.\n.br\nFormat: \n.I String\n.br\nSyntax: \n.I n | [j](one or more of a, b, e)\n.br\nExample: -m ja\n.br\nDefault: \n.I \"a\"\n\n.IP \"-M <user list>\" 8\nAlters list of users to whom mail about the job is sent.  Sets job's \n.I Mail_Users \nattribute to \n.I user list.  \n.br\nFormat:\n.br\n.I \\ \\ \\ <username>[@<hostname>][,<username>[@<hostname>],...]\n.br\nDefault: job owner\n\n.IP \"-N <name> \" 8\nRenames the job.\nSets job's \n.I Job_Name\nattribute to \n.I name.\n.br\nFormat: \n.I Job Name \n.br\nDefault: if a script is used to submit the job, the job's name is the\nname of the script.  
If no script is used, the job's name is \n.I \"STDIN\".\n\n.IP \"-o <output path>\" 8\nAlters path to be used for the job's standard output stream.\nSets job's \n.I Output_Path \nattribute to \n.I output path.\nOverridden by \n.I -k\noption.\n.br\nFormat:\n.RS 13\n.I [<hostname>:]<path>\n.RE\n.RS\nThe \n.I output path \nis interpreted as follows:\n\n.IP path 5\nIf\n.I path\nis relative, it is taken to be relative to \nthe current working directory of the command, where it is executing \non the current host.\n\nIf\n.I path\nis absolute, it is taken to be an absolute path on \nthe current host where the command is executing.\n\n.IP hostname:path 5\nIf \n.I path\nis relative, it is taken to be relative to the user's \nhome directory on the host named\n.I hostname.\n\nIf \n.I path \nis absolute, it is the absolute path on the host named\n.I hostname.\n\nIf \n.I path\ndoes not include a filename, the default filename is\n.RS \n<job ID>.OU\n.RE\n\nIf the\n.I -o\noption is not specified, PBS writes standard output to \nthe default filename, which has this form:\n.RS \n.I <job name>.o<sequence number>\n.RE\n\nThis attribute can be altered after the job has begun execution, \nin which case the new value will not take effect until the\njob is rerun.\n\nIf you use a UNC path, the hostname is optional.  If you use a non-UNC\npath, the hostname is required.\n.RE\n\n.IP \"-p <priority>\" 8\nAlters priority of the job.  Sets job's \n.I Priority\nattribute to \n.I priority.\n\nThis attribute can be altered after the job has begun execution, \nin which case the new value will not take effect until the\njob is rerun.\n\nFormat: \n.I Host-dependent integer\n.br\nRange: [-1024, +1023] inclusive\n.br\nDefault: \n.I Zero  \n\n.IP \"-P <project>\" 8\nSpecifies a project for the job.  Sets job's \n.I project\nattribute to specified value.\n\nFormat: \n.I Project Name\n.br\nDefault: \n.I \"_pbs_project_default\"\n\n.IP \"-r <y|n>\" 8\nChanges whether the job is rerunnable.  
Sets job's \n.I Rerunable\nattribute to the argument.  Does not affect how the job is handled when\nthe job is unable to begin execution.\n.br\nSee the\n.B qrerun(1B)\ncommand.  \n.br\nFormat: Single character, \n.I \"y\" \nor \n.I \"n\"\n.br\n\n.RS\n.IP y 5\nJob is rerunnable.\n.IP n 5\nJob is not rerunnable.\n.LP\n\nDefault: \n.I \"y\"\n\nInteractive jobs are not rerunnable.  Job arrays are always rerunnable.\n.RE\n.LP\n\n.IP \"-R <remove options>\" 8\nChanges whether standard output and/or standard error files are automatically\nremoved upon job completion.  Sets job's \n.I Remove_Files\nattribute to \n.I remove options.\nOverrides default path names for these streams.  Overrides\n.I -o\nand \n.I -e\noptions.\n\nThis attribute cannot be altered once the job has begun execution.\n\nDefault:\n.I Unset; \nneither is removed\n\nThe \n.I remove options\nargument can take the following values:\n\n.RS\n.IP e 5\nThe standard error stream is removed (deleted) upon job completion\n.IP o 5\nThe standard output stream is removed (deleted) upon job completion\n.IP \"eo, oe\" 5\nBoth standard error and standard output streams are removed (deleted) \nupon job completion\n.IP unset 5\nNeither stream is removed\n.RE\n\n.IP \"-S <path list>\" 8\nSpecifies the interpreter or shell path for the job script.  Sets job's \n.I Shell_Path_List \nattribute to \n.I path list.\n\nThe \n.I path list\nargument is the full path to the interpreter or shell including the \nexecutable name.  \n\nOnly one path may be specified without a hostname.\nOnly one path may be specified per named host.  The path selected\nis the one whose hostname is that of the server on which the job\nresides.  
\n\nThis attribute can be altered after the job has begun execution, \nin which case the new value will not take effect until the\njob is rerun.\n\nFormat:\n.RS\n.IP\n.I <path>[@<hostname>][,<path>@<hostname> ...]\n.LP\nIf the path contains spaces, it must be quoted, for example:\n.IP\nqsub -S \"C:\\\\Program Files\\\\PBS Pro\\\\bin\\\\pbs_python.exe\" <script name>\n.LP\nDefault: User's login shell on execution node\n\nExample of using bash via a directive:\n.IP\n#PBS -S /bin/bash@mars,/usr/bin/bash@jupiter\n.LP\nExample of using a Python script from the command line on Linux: \n.IP\nqsub -S $PBS_EXEC/bin/pbs_python <script name>\n.LP\nExample of using a Python script from the command line on Windows: \n.IP\nqsub -S \\%PBS_EXEC\\%\\\\bin\\\\pbs_python.exe <script name>\n.LP\n\n.RE\n.IP \"-u <user list>\" 8\nAlters list of usernames.  Job will be run under a username from this list.\nSets job's \n.I User_List \nattribute to \n.I user list.\n\nOnly one username may be specified without a hostname.\nOnly one username may be specified per named host.  \nThe server on which the job resides will select first the username whose\nhostname is the same as the server name.  
Failing that, \nthe next selection will be the username with no specified hostname.\nThe usernames on the server and execution hosts must be the same.\nThe job owner must have authorization to run as the specified user.\n\nThis attribute cannot be altered once the job has begun execution.\n\nFormat: \n.br\n.I \\ \\ \\ <username>[@<hostname>][,<username>@<hostname> ...]\n\nDefault: Job owner (username on submit host)\n\n.IP \"-W <additional attributes>\" 8\nEach sub-option to the \n.I -W \noption allows you to change a specific job attribute.\n.br\nFormat:\n.br\n.I \\ \\ \\ -W <attribute name>=<value>[,<attribute name>=<value>...]\n\nIf white space occurs within the \n.I additional attributes\nargument, or the equal sign \"=\" occurs within an \n.I attribute value\nstring, that argument or string must be enclosed in single or double quotes.\nPBS supports the following attributes via the \n.I -W \noption:\n\n.I \"depend=<dependency list>\"\n.IP\nDefines dependencies between this and other jobs.  
\nSets the job's\n.I depend\nattribute to \n.I dependency list.\nThe \n.I dependency list\nhas the form:\n.RS\n.RS 5\n.I <type>:<arg list>[,<type>:<arg list> ...]\n.RE\nwhere except for the \n.I on\ntype, \nthe\n.I arg list\nis one or more PBS job IDs in the form:\n.RS 5\n.I <job ID>[:<job ID> ...]\n.RE\nThe types and their argument lists can be:\n\n.IP \" after: <arg list>\" 4\nThis job may be scheduled for execution at any point after all jobs\nin \n.I arg list\nhave started execution.\n\n.IP \" afterok: <arg list>\" 4\nThis job may be scheduled for execution only after all jobs in\n.I arg list\nhave terminated with no errors.\nSee \"Warning about exit status with csh\" in \n.B EXIT STATUS.\n\n.IP \" afternotok: <arg list>\" 4\nThis job may be scheduled for execution only after all jobs in \n.I arg list\nhave terminated with errors.\nSee \"Warning about exit status with csh\" in \n.B EXIT STATUS.\n\n.IP \" afterany: <arg list>\" 4\nThis job may be scheduled for execution after all jobs in\n.I arg list\nhave finished execution, with any exit status (with or without errors).\nThis job will not run if a job in the \n.I arg list \nwas deleted without ever having been run.\n\n.IP \" before: <arg list>\" 4\nJobs in \n.I arg list \nmay begin execution once this job has begun execution.\n\n.IP \" beforeok: <arg list>\" 4\nJobs in \n.I arg list\nmay begin execution once this job terminates without errors.\nSee \"Warning about exit status with csh\" in \n.B EXIT STATUS.\n\n.IP \" beforenotok: <arg list>\" 4\nIf this job terminates execution with errors, jobs in \n.I arg list\nmay begin.\nSee \"Warning about exit status with csh\" in \n.B EXIT STATUS.\n\n.IP \" beforeany: <arg list>\" 4\nJobs in \n.I arg list\nmay begin execution once this job terminates execution,\nwith or without errors.\n\n.IP \" on: count\" 4\nThis job may be scheduled for execution after\n.I count\ndependencies on\nother jobs have been satisfied.  
This type is used in conjunction\nwith one of the \n.I before\ntypes listed.\nThe \n.I count\nargument is an integer greater than zero.\n.LP\n\n.B Restrictions\n.br\nJob IDs in the\n.I arg list \nof \n.I before \ntypes must have been submitted with a \n.I type \nof \n.I on.\n\nTo use the \n.I before \ntypes, the user must have the authority to alter the jobs in \n.I arg list.\nOtherwise, the dependency is rejected and the new job is aborted.\n\nError processing of the existence, state, or condition of the job on which the\nnewly submitted job depends is performed after\nthe job is queued.  If an error is detected, the new job is deleted by\nthe server.  Mail is sent to the job submitter stating the error.\n\n.B Dependency examples:\n.br\nqalter -W depend=afterok:123.host1.domain.com 456.host1.domain.com\n.br\nqalter -W depend=before:234.host1.com:235.host1.com 233.host1.com\n.RE\n\n.IP \"group_list=<group list>\"\nAlters list of group names.  Job will run under a group name from this list.\nSets job's\n.I group_list\nattribute to\n.I group list.\n\nOnly one group name may be specified without a hostname.\nOnly one group name may be specified per named host.\nThe server on which the job resides will select first the group name whose\nhostname is the same as the server name.  
Failing that,\nthe next selection will be the group name with no specified hostname.\nThe group names on the server and execution hosts must be the same.\n\nFormat:\n.br\n.I \\ \\ \\ <group>[@<hostname>][,<group>@<hostname> ...]\n\nDefault: no default\n.LP\n\n.IP \"release_nodes_on_stageout=<value>\"\nWhen set to \n.I True, \nall of the job's vnodes not on the primary execution host are released\nwhen stageout begins.\n\nCannot be used with vnodes tied to Cray X* series systems.\n\nWhen cgroups is enabled and this is used with some but not all vnodes\nfrom one MoM, resources on those vnodes that are part of a cgroup are\nnot released until the entire cgroup is released.\n\nThe job's \n.I stageout \nattribute must be set for the \n.I release_nodes_on_stageout \nattribute to take effect.\n\nFormat: \n.I Boolean\n.br\nDefault: \n.I False\n\n\n.IP \"run_count=<count>\"\nSets the number of times the server thinks it has run the job.  Sets the job's\n.I run_count\nattribute to \n.I count.  \n\nCan be set while job is running.\n\nJob is held when the value of this attribute goes over \n.I 20.\n\nFormat: Integer greater than or equal to zero\n.LP\n\n.IP \"sandbox=<sandbox spec>\"\nChanges which directory PBS uses for the job's staging and execution.\nSets job's \n.I sandbox\nattribute to the value of \n.I sandbox spec.\n\nFormat: \n.I String\n\nAllowed values for \n.I sandbox spec:\n.RS\n.IP PRIVATE\nPBS creates a job-specific directory for staging and execution.\n\n.IP \"HOME or unset\"\nPBS uses the user's home directory for staging and execution.\n.RE\n.LP\n\n.IP \"stagein=<path list>\"\n.br\n.IP \"stageout=<path list>\"\nChanges files or directories to be staged in before execution or staged out\nafter execution is complete.  Sets the job's \n.I stagein\nand \n.I stageout\nattributes to the specified\n.I path lists.\nOn completion of the job, all staged-in and staged-out files and directories\nare removed from the execution host(s).  
A\n.I path list\nhas the form:\n.br\n.I \\ \\ \\ <filespec>[,<filespec>]\n.br\nwhere \n.I filespec \nis \n.br\n.I \\ \\ \\ <execution path>@<hostname>:<storage path>\n.br\nregardless of the direction of the copy.\nThe\n.I execution path \nis the name of the file or directory on the primary execution host.\nIt can be relative to the staging and execution directory on the\nexecution host, or it can be an absolute path.\n\nThe \"@\" character separates \n.I execution path \nfrom \n.I storage path.\n\nThe\n.I storage path\nis the path on \n.I hostname. \nThe name can be relative to the staging and execution directory on the\nprimary execution host, or it can be an absolute path.\n\nIf \n.I path list\nhas more than one \n.I filespec,\ni.e. it contains commas, it must be enclosed in double quotes.\n\nIf you use a UNC path, the hostname is optional.  If you use a non-UNC \npath, the hostname is required.\n.RE\n.LP\n\n.IP \"umask=<value>\"\nAlters the umask with which the job is started.  Controls umask of\njob's standard output and standard error.  Sets job's\n.I umask\nattribute to \n.I value.\n\nFormat: One to four digits; typically two\n\nDefault: \n.I 077\n\nThe following example allows group and world read of the job's output\nand error:\n.br\n.I \\ \\ \\ -W umask=33\n\n.IP \"--version\"\nThe \n.B qalter\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n.LP\n\n\n.SH  OPERANDS\nThe \n.B qalter \ncommand accepts a \n.I job ID\nlist as its operand.  The \n.I job ID\nlist is a space-separated list of \none or more job IDs for normal jobs or array jobs.  
\n\nSubjobs and ranges of subjobs are not alterable.\n\nJob IDs have the form:\n.br\n.I \\ \\ \\ <sequence number>[.<server name>][@<server name>]\n\nJob array IDs have the form:\n.br\n.I \\ \\ \\ <sequence number>[][.<server name>][@<server name>]\n\nNote that some shells require that you enclose a job array ID \nin double quotes.\n\n.SH STANDARD ERROR\nThe\n.B qalter\ncommand writes a diagnostic message to standard error for\neach error occurrence.\n\n.SH EXIT STATUS\n.IP Zero 8\nUpon successful processing of input\n.IP \"Greater than zero\" 8\nUpon failure\n\n.SH \"Warning about exit status with csh\"\nIf a job is run in csh and a .logout file\nexists in the home directory in which the job executes, the exit status\nof the job is that of the .logout script, not the job script.  This may\nimpact any inter-job dependencies.  \n\n.SH SEE ALSO\npbs_job_attributes(7B),\npbs_resources(7B),\nqdel(1B), \nqhold(1B), \nqmove(1B), \nqmsg(1B), \nqrerun(1B),\nqrls(1B), \nqselect(1B), \nqstat(1B),\nqsub(1B)\n"
  },
  {
    "path": "doc/man1/qdel.1B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH qdel 1B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B qdel \n- delete PBS jobs\n.SH SYNOPSIS\n.B qdel \n[ -Wforce | -Wsuppress_email=<N>] [-x] <job ID> [<job ID> ...]\n.br\n.B qdel\n--version\n.SH DESCRIPTION\nThe \n.B qdel \ncommand deletes jobs in the order given, whether they are at the local\nserver or at a remote server.  \n\n.B Usage\n.br\nThe qdel command is used without options to delete queued, running,\nheld, or suspended jobs; the\n.I -x \noption additionally allows it to delete finished and moved jobs.\n\nWhen this command is used without the \n.I -x \noption, if job history is enabled, the deleted job's history is\nretained.  The \n.I -x \noption is used to additionally remove the history of the job being\ndeleted.\n\nIf someone other than the job's owner deletes the job, mail is\nsent to the job's owner, or to a list of mail recipients if \nspecified during \n.B qsub.  
\nSee the \n.B qsub(1B)\nman page.\n\nIf the job is in the process of provisioning,\nit can be deleted only by using the \n.I -W force\noption.\n\n.B \"How Behavior of qdel Command Can Be Affected\"\n.br\nThe server's \n.I default_qdel_arguments\nattribute may affect the behavior of the \n.B qdel \ncommand.  This attribute is settable by the administrator \nvia the \n.B qmgr\ncommand.  The attribute may be set to \"-Wsuppress_email=<N>\".\nThe server attribute is overridden by command-line arguments.\nSee the \n.B pbs_server_attributes(1B) \nman page.\n\n.B Sequence of Events\n.IP \" \" 3\nThe job's running processes are killed.\n.IP\nThe epilogue runs.\n.IP\nFiles that were staged in are staged out.  This includes\nstandard out (.o) and standard error (.e) files.\n.IP\nFiles that were staged in or out are deleted.\n.IP\nThe job's temp directory is removed.\n.IP\nThe job is removed from the MOM(s) and the server.\n.LP\n\n.B Required Privilege\n.br\nA PBS job may be deleted by its owner, an Operator, or the\nadministrator.  The MoM deletes a PBS job by sending a \n.B SIGTERM\nsignal, then, if there are remaining processes, a\n.B SIGKILL \nsignal.  \n\n\n.SH OPTIONS\n.IP \"(no options)\" 8\nCan delete queued, running, held, or suspended jobs.  \nDoes not delete job history for specified job(s).\n.IP \"-W force\" 8\nDeletes the job whether or not the job's execution host is \nreachable.  Deletes the job whether or not the job is in the\nprocess of provisioning.  Cannot be used with the \n.I -Wsuppress_email\noption.\n\nIf the server can contact the MoM, this option is ignored; the \nserver allows the job to be deleted normally.  If the server \ncannot contact the MoM or the job is in the \n.I E\nstate, the server deletes its information about the job.\n\n.IP \"-Wsuppress_email=<N>\" 8\nSets limit on number of emails sent when deleting multiple jobs.  \n.RS 11\nIf \n.I N\n>= 1 and \n.I N \nor more \n.I job IDs\nare given, \n.I N \nemails are sent.  
\n.br\nIf \n.I N\n>= 1 and fewer than \n.I N\njob IDs\nare given, the number of emails is\nthe same as the number of jobs.  \n.br\nIf \n.I N\n= 0, this option is ignored.  \n.br\nIf \n.I N\n= -1, no email is sent.\n.RE\n.LP\n.IP \" \" 8\nThe \n.I <N>\nargument is an integer.  Note that there is no space between \"W\" and \"suppress_email\".\nCannot be used with the \n.I -Wforce\noption.\n.LP\n.IP \"-x\" 8\nCan delete running, queued, suspended, held, finished, or moved jobs.\nDeletes job history for specified job(s).\n.LP\n.IP \"--version\" 8\nThe \n.B qdel\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH OPERANDS\nThe \n.B qdel \ncommand accepts one or more space-separated\n.I job ID\noperands.  These operands can be job identifiers, job array\nidentifiers, subjob identifiers, or subjob range identifiers.\n\nJob IDs have the form:\n.br\n.I \\ \\ \\ <sequence number>[.<server name>][@<server name>]\n\nJob arrays have the form:\n.br\n.I \\ \\ \\ <sequence number>[][.<server name>][@<server name>]\n\nSubjobs have the form:\n.br\n.I \\ \\ \\ <sequence number>[<index>][.<server name>][@<server name>]\n\nRanges of subjobs have the form:\n.br\n.I \\ \\ \\ <sequence number>[<first>-<last>][.<server name>][@<server name>]\n\nJob array identifiers must be enclosed in double quotes for some shells.\n\n.SH STANDARD ERROR\nThe \n.B qdel \ncommand writes a diagnostic message to standard error for each\nerror occurrence.\n\n.SH EXIT STATUS\n.IP Zero 8\nUpon successful processing of input\n\n.IP \"Greater than zero\" 8\nUpon error\n\n.SH SEE ALSO\npbs_queue_attributes(7B), pbs_server_attributes(1B), \nqsub(1B), qsig(1B), pbs_deljob(3B)\n"
  },
  {
    "path": "doc/man1/qhold.1B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH qhold 1B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B qhold \n\\- hold PBS batch jobs\n\n\n.SH SYNOPSIS\n.B qhold\n[-h <hold list>] <job ID> [<job ID> ...]\n.br\n.B qhold\n--version\n\n.SH DESCRIPTION\nPlaces one or more holds on a job.  A job that has a hold is not\neligible for execution.  Can be used on jobs and job arrays, but not\non subjobs or ranges of subjobs.\n\nIf a job identified by\n.I job ID\nis in the queued, held, or waiting states, then all that occurs is\nthat the hold type is added to the job.  The job is then put into the\nheld state if it resides in an execution queue.\n\nIf the job is running, the result of the \n.B qhold \ncommand depends upon whether the job can be checkpointed.\nThe job can be checkpointed if the OS supports checkpointing, or \nif the application being checkpointed supports checkpointing. \n.br\nIf the job can be checkpointed, the following happens:\n.RS 3\nThe job is checkpointed and its execution is interrupted.\n\nThe resources assigned to the job are released.\n\nThe job is placed in the held state in the execution queue.\n\nThe job's \n.I Hold_Types \nattribute is set to \n.I u\nfor \n.I user hold.\n.RE\n\nIf checkpoint / restart is not supported, \n.B qhold \nsimply sets the\njob's \n.I Hold_Types \nattribute to \n.I u.  
\nThe job continues to execute.\n\nA job's dependency places a \n.I system \nhold on the job.  When the\ndependency is satisfied, the \n.I system \nhold is removed.  This \n.I system \nhold\nis the same as the one set by an administrator.  If the administrator\nsets a \n.I system \nhold on a job with a dependency, when the\ndependency is satisfied, the job becomes eligible for execution.\n\nIf the job is in the process of provisioning, it cannot be held.\n\nA hold on a job can be released by the administrator, a Manager, \nan Operator, or the job owner, when the job reaches the time set\nin its \n.I Execution_Time\nattribute, or when a dependency clears.  See \n.B qrls.1B.\n\n.B Effect of Privilege on Behavior\n.br\nThe following table shows the holds and the privilege required to set each:\n.RS 3\nHold  Meaning       Who Can Set\n.br\n--------------------------------------------------------------\n.IP u 6\nUser          Job owner, Operator, Manager, \n.br\n              administrator, root\n.IP o 6\nOther         Operator, Manager, administrator, root\n.IP s 6\nSystem        Manager, administrator, root, \n.br\n              PBS (dependency)\n.IP n 6\nNone          Job owner, Operator, Manager, \n.br\n              administrator, root\n.IP p 6\nBad password  Administrator, root\n.RE\n.LP\n\n.SH OPTIONS\n.IP \"(no options)\" 8\nSame as \n.I -h u.\nApplies the \n.I user\nhold to the specified job(s).\n.IP \"-h <hold list>\" 8\nTypes of holds to be placed on the job(s).\n\nThe\n.I hold list\nargument is a string consisting of one or more of the letters\n.I \"\"\"u\"\"\", \"\"\"o\"\"\", \nor \n.I \"\"\"s\"\"\"\nin any combination, or one of the letters\n.I \"\"\"n\"\"\" \nor \n.I \"\"\"p\"\"\".\n\n.IP \"--version\" 8\nThe \n.B qhold\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n\n.SH OPERANDS\nThe \n.B qhold \ncommand can be used on jobs and job arrays, but not on subjobs or ranges \nof subjobs.  
The \n.B qhold \ncommand accepts one or more \n.I job IDs\nin the form:\n.RS 4\n.I <sequence number>[.<server name>][@<server name>]\n.br\n.I <sequence number>[][.<server name>][@<server name>]\n.RE\nNote that some shells require that you enclose a job array identifier in\ndouble quotes.\n\n.SH STANDARD ERROR\nThe \n.B qhold \ncommand writes a diagnostic message to standard error for each\nerror occurrence.\n\n.SH EXIT STATUS\n.IP Zero 8\nUpon successful processing of all operands\n\n.IP \"Greater than zero\" 8\nIf the \n.B qhold \ncommand fails to process any operand\n\n.SH SEE ALSO\nqrls(1B), qalter(1B), qsub(1B), pbs_alterjob(3B), pbs_holdjob(3B),\npbs_rlsjob(3B), pbs_job_attributes(7B), pbs_resources(7B)\n"
  },
  {
    "path": "doc/man1/qmove.1B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH qmove 1B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B qmove \n- move PBS batch job\n\n.SH SYNOPSIS\n.B qmove \n<destination> <job ID> [<job ID> ...]\n.br\n.B qmove\n--version\n\n.SH DESCRIPTION\nMoves a job from one queue to another.  \n\nThe behavior of the \n.B qmove \ncommand may be affected by \nany site hooks.  Site hooks can modify the job's attributes, \nchange its routing, etc.\n\n.B Restrictions\n.br\nThe\n.B qmove\ncommand can be used on job arrays, but not on subjobs or ranges of\nsubjobs.  \n\nJob arrays can only be moved from one server to another if\nthey are in the 'Q', 'H', or 'W' states, and only if there are no\nrunning subjobs.  The state of the job array is preserved, and the job\narray will run to completion on the new server.\n.LP\nA job in the\n.B Running ,\n.B Transiting ,\nor\n.B Exiting\nstate cannot be moved.  \n\nA job in the process of provisioning cannot \nbe moved.\n\n\n\n.SH EFFECT OF PRIVILEGE ON BEHAVIOR\n\nAn unprivileged user can use the \n.B qmove \ncommand to move a job only when\nthe move would not violate queue restrictions.  A privileged user\n(root, Manager, Operator) can use the \n.B qmove \ncommand to move a job\nunder some circumstances where an unprivileged user cannot.  
The\nfollowing restrictions apply only to unprivileged users:\n\n.RS 4\nThe queue must be enabled\n\n.br\nMoving the job into the queue must not exceed the queue limits for\njobs or resources\n\n.br\nIf the job is an array job, the size of the job array must not exceed \n.I max_array_size \nfor the queue\n\n.br \nThe queue cannot have its \n.I from_route_only\nattribute set to \n.I True\n(accepting jobs only from routing queues)\n.RE\n\n.SH OPTIONS\n.IP \"--version\" 4\nThe \n.B qmove\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n\n.SH OPERANDS\n.IP destination 4\nWhere job(s) are to end up.  First operand.  Syntax:\n.RS 4\n.I <queue name>\n.RS 4\nMoves the job(s) into the specified queue at the job's current server.\n.RE\n.br\n.I @<server name>\n.RS 4\nMoves the job(s) into the default queue at the specified server.\n.RE\n.br\n.I <queue name>@<server name>\n.RS 4\nMoves the job(s) into the specified queue at the specified server.\n.RE\n.RE\n\n.LP\n.IP \"job ID\" 4\nJob(s) and/or job array(s) to be moved to the new destination.  The\n.B qmove\ncommand accepts one or more\n.I job ID\noperands of the form:\n.RS 4\n.I <sequence number>[.<server name>][@<server name>]\n.br\n.I <sequence number>[][.<server name>][@<server name>]\n.RE\n\nNote that some shells require that you enclose a job array identifier \nin double quotes.\n\n\n.br\n.SH STANDARD ERROR\nThe \n.B qmove \ncommand writes a diagnostic message to standard error for each\nerror occurrence.\n\n.SH EXIT STATUS\n.IP Zero 4\nUpon success\n\n.IP \"Greater than zero\" 4\nUpon failure\n\n\n.SH SEE ALSO\nqsub(1B), pbs_movejob(3B)\n"
  },
  {
    "path": "doc/man1/qmsg.1B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH qmsg 1B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B qmsg \n- write message string into one or more job output files\n.SH SYNOPSIS\n.B qmsg\n[-E] [-O] <message string>\n<job ID> [<job ID> ...]\n.br\n.B qmsg\n--version\n\n.SH DESCRIPTION\nWrites a message string into one or more output files of the job.\nTypically this is done to leave an informative message in the output\nof the job.  Also called \"sending a message to a job\".  \n\nThe \n.B qmsg\ncommand writes messages into the files of jobs by sending a \n.I Message Job \nbatch request to the batch server that owns the job.  The \n.B qmsg\ncommand does not directly write the message into the files of the job.\n\nThe \n.B qmsg \ncommand can be used on jobs and subjobs, but not on job arrays or ranges of\nsubjobs.\n\n.SH OPTIONS\n.IP \"-E\" 8\nThe message is written to the standard error of each job.\n\n.IP \"-O\" 8\nThe message is written to the standard output of each job.\n\n.IP \"--version\" 8\nThe \n.B qmsg\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.IP \"(no options)\" 8\nThe message is written to the standard error of each job.\n\n.SH  OPERANDS\n.IP \"message string\" 8\nThe message to be written. String.  First operand.  If the string contains\nblanks, the string must be quoted.  
If the final character of the string \nis not a newline, a newline character is added when written to the job's file.\n\n.IP \"job ID\" 8\nThe job(s) to receive the message string.  This operand follows the \n.I message string\noperand.  Can be a job or subjob.  Cannot be a job array or range of subjobs.  The \n.B qmsg \ncommand accepts one or more\n.I job ID\noperands.\n.br\nFormat for job:\n.br\n.I <sequence number>[.<server name>][@<server name>]\n.br\nFormat for subjob. Note that a subjob has square brackets around its index number:\n.br\n.I <sequence number>[<index>][.<server name>][@<server name>]\n\n.SH STANDARD ERROR\nThe \n.B qmsg\ncommand writes a diagnostic message to standard error for\neach error occurrence.\n\n.SH EXIT STATUS\n.IP Zero 8\nUpon success\n.IP \"Greater than zero\" 8\nUpon failure\n\n.SH SEE ALSO\nqsub(1B), pbs_msgjob(3B)\n"
  },
  {
    "path": "doc/man1/qorder.1B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH qorder 1B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B qorder \n\\- exchange order of two PBS batch jobs.\n.SH SYNOPSIS\n.B qorder \n<job ID> <job ID>\n.br\n.B qorder \n--version\n.SH DESCRIPTION\n\nExchanges positions in queue(s) of two jobs, whether in the same or different\nqueue(s).\n\nNo attribute of either job, e.g. priority, is changed.\nThe impact of interchanging the order within or between queues is dependent on local\njob scheduling policy; contact your systems administrator.  \n\n.B Restrictions\n.br\nA job in the\n.B running\nstate cannot be reordered.\n.br\nThe \n.B qorder\ncommand can be used on job arrays, but not on subjobs or ranges of subjobs.\n.br\nThe two jobs must be located at the same server.\n\n.B Effect of Privilege on Behavior\n.br\nFor an unprivileged user to reorder jobs, both jobs must be owned by\nthe user.  
A privileged user (Manager, Operator) can reorder any jobs.\n\n.SH OPTIONS\n.IP \"--version\" 8\nThe \n.B qorder\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH OPERANDS\nBoth operands are\n.I job IDs\nwhich specify the jobs to be exchanged.\nThe \n.B qorder\ncommand accepts two\n.I job ID\noperands of the form:\n.RS 4\n.I <sequence number>[.<server name>][@<server name>]\n.br\n.I <sequence number>[][.<server name>][@<server name>]\n.RE\n\nIf you specify a server for both jobs, they must be at the same server.\n\nNote that some shells require that you enclose a job array identifier in\ndouble quotes.\n\n.SH STANDARD ERROR\nThe \n.B qorder \ncommand writes diagnostic messages to standard error for each\nerror occurrence.\n\n.SH EXIT STATUS\n.IP \"Zero\" 8\nUpon successful processing of all operands\n.IP \"Greater than zero\" 8\nIf the \n.B qorder \ncommand fails to process any operand\n\n.SH SEE ALSO\nqsub(1B), qmove(1B), pbs_orderjob(3B), pbs_movejob(3B)\n"
  },
  {
    "path": "doc/man1/qrerun.1B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH qrerun 1B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B qrerun \n- requeue a PBS job\n.SH SYNOPSIS\n.B qrerun \n[-W force]\n<job ID> [<job ID> ...]\n.br\n.B qrerun \n--version\n\n.SH DESCRIPTION\nIf possible, kills the specified job(s), then requeues each job in the\nexecution queue from which it was run. \n\nThe \n.B qrerun\ncommand can be used on jobs, job arrays, subjobs, and ranges of subjobs.\nIf you give a job array identifier as an argument, the job array is\nreturned to its initial state at submission time, or to its altered\nstate if it has been qaltered.  All of that job array's subjobs are\nrequeued, which includes those that are currently running, and those\nthat are completed and deleted. If you give a subjob or range as an\nargument, those subjobs are requeued.\n\n.B Restrictions\n.br\nIf a job is marked as not rerunnable, \n.B qrerun \nneither kills nor requeues the job.  
See the\n.I -r \noption for the\n.B qsub \nand\n.B qalter\ncommands, and the \n.I Rerunable \njob attribute.\n\nThe\n.B qrerun \ncommand cannot requeue a job or subjob which is not running, is held, \nor is suspended.\n\n.B Required Privilege\n.br\nPBS Manager or Operator privilege is required to use this command.\n\n.SH OPTIONS\n.IP \"-W force\" 8\nThe job is to be requeued even if the vnode on which the job is\nexecuting is unreachable, or if the job's substate is \n.I provisioning.\n.IP \"--version\" 8\nThe \n.B qrerun\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n\n.SH  OPERANDS\nThe qrerun\ncommand accepts one or more\n.I job ID\noperands of the form:\n.RS 4\n.I <sequence number>[.<server name>][@<server name>]\n.br\n.I <sequence number>[][.<server name>][@<server name>]\n.br\n.I <sequence number>[<index>][.<server name>][@<server name>]\n.br\n.I <sequence number>[<index start>-<index end>][.<server name>] \n.br\n\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ [@<server name>]\n.RE\n\nNote that some shells require that you enclose a job array identifier in\ndouble quotes.\n.br\n\n.SH STANDARD ERROR\nThe\n.B qrerun\ncommand writes a diagnostic message to standard error for\neach error occurrence.\n\n.SH EXIT STATUS\n.IP Zero 8\nUpon successful processing of all operands\n.IP \"Greater than zero\" 8\nUpon failure to process any operand\n\n.SH SEE ALSO\nqsub(1B), qalter(1B), pbs_alterjob(3B), pbs_rerunjob(3B)\n"
  },
  {
    "path": "doc/man1/qrls.1B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH qrls 1B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B qrls \n- release hold on PBS jobs\n.SH SYNOPSIS\n.B qrls\n[-h <hold list>] <job ID> [<job ID> ...]\n.br\n.B qrls\n--version\n.SH DESCRIPTION\nThe\n.B qrls\ncommand removes or releases holds on batch jobs or job arrays, \nbut not on subjobs or ranges of subjobs.\n\nA job may have one or more types of holds which make the job\nineligible for execution.\n\nIf you \n.B qrls\na job whose \n.I Execution_Time\nattribute is not set to a time in the future, the job changes to the \n.I queued\nstate.  If \n.I Execution_Time\nis in the future, the job changes to the\n.I waiting\nstate.\n\nHolds can be set by the owner, an Operator, or Manager, when a job \nhas a dependency, or when a job has its \n.I Execution_Time \nattribute set to a time in the future.  
See the \n.B qhold\nman page.\n\n.B Effect of Privilege on Behavior\n.br\nThe following table shows the holds and the privilege required to release each:\n.RS 3\nHold  Meaning       Privilege Required to Release\n.br\n--------------------------------------------------------------\n.IP u 6\nUser          Job owner, Operator, Manager, \n.br\n              administrator, root\n.IP o 6\nOther         Operator, Manager, administrator, root\n.IP s 6\nSystem        Manager, administrator, root, \n.br\n              PBS (dependency)\n.IP n 6\nNone          Job owner, Operator, Manager, \n.br\n              administrator, root\n.IP p 6\nBad password  Administrator, root\n.RE\n.LP\n\n.SH OPTIONS\n.IP \"(no options)\" 8\nDefaults to\n.I -h u,\nremoving\n.I user\nhold.\n.IP \"-h <hold list>\" 8\nTypes of holds to be released for the job(s).\n\nThe\n.I hold list\nargument is a string consisting of one or more of the letters\n.I \"\"\"u\"\"\", \"\"\"o\"\"\", \nor \n.I \"\"\"s\"\"\"\nin any combination, or one of the letters\n.I \"\"\"n\"\"\" \nor \n.I \"\"\"p\"\"\".\n\n.IP \"--version\" 8\nThe \n.B qrls\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH  OPERANDS\nThe \n.B qrls\ncommand can be used on jobs and job arrays, but not on subjobs or ranges \nof subjobs.  The \n.B qrls \ncommand accepts one or more \n.I job IDs\nin the form:\n.RS 4\n.I <sequence number>[.<server name>][@<server name>]\n.br\n.I <sequence number>[][.<server name>][@<server name>]\n.RE\nNote that some shells require that you enclose a job array identifier in\ndouble quotes.\n\n.SH STANDARD ERROR\nThe \n.B qrls \ncommand writes a diagnostic message to standard error for\neach error occurrence.\n\n.SH EXIT STATUS\n.IP Zero 8\nUpon successful processing of all operands\n\n.IP \"Greater than zero\" 8\nIf the \n.B qrls\ncommand fails to process any operand\n\n.SH SEE ALSO\nqsub(1B), qalter(1B), qhold(1B), pbs_alterjob(3B), pbs_holdjob(3B), and\npbs_rlsjob(3B).\n"
  },
  {
    "path": "doc/man1/qselect.1B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH qselect 1B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B qselect \n- select specified PBS jobs\n.SH SYNOPSIS\n.B qselect\n[-a [<op>] <date and time>] [-A <account string>] \n.RS 8\n[-c [<op>] <interval>] [-h <hold list>] [-H] [-J] \n.br\n[-l <resource list>] [-N <name>] [-p [<op>] <priority>] \n.br\n[-P <project>] [-q <destination>] [-r <rerun>] [-s <states>] \n.br\n[-t <time option> [<comparison>] <specified time>] [-T] \n.br\n[-u <user list>] [-x]\n.RE\n.br\n.B qselect\n--version\n\n.SH DESCRIPTION\nThe\n.B qselect\ncommand lists those jobs that meet the specified selection criteria.\nYou can compare certain job attribute values to specified values using\na comparison operator shown as \n.I op \nin the option description.\n\nYou can select jobs, job arrays, or subjobs.  You can select jobs from\none server per call to the command.\n\nEach option acts as a filter restricting which jobs are listed.  \n\nYou can select jobs according to the values of some of the resources\nin the \n.I Resource_List \njob attribute.  You can also select jobs\naccording to the selection directive (although because this is a string,\nyou can only check for equality or inequality).\n\nJobs that are finished or moved are listed only when the \n.I -x \nor \n.I -H\noptions are used.  
Otherwise, job selection is limited to queued and\nrunning jobs.\n\n.B Comparison Operations\n.br\nYou can select jobs by comparing the values of certain job attributes\nto values you specify.  The following table lists the comparison \noperations you can use:\n\n.B Operation \\ \\ Type of Comparison\n.br\n-----------------------------------------------------------------------\n.IP .eq. 12\nThe value of the attribute of the job is equal to the value of the\noption argument.\n.IP .ne. 12\nThe value of the attribute of the job is not equal to the value of the\noption argument.\n.IP .ge. 12\nThe value of the attribute of the job is greater than or equal to the\nvalue of the option argument.\n.IP .gt. 12\t\nThe value of the attribute of the job is greater than the value of the\noption argument.\n.IP .le. 12\nThe value of the attribute of the job is less than or equal to the\nvalue of the option argument.\n.IP .lt. 12\nThe value of the attribute of the job is less than the\nvalue of the option argument.\n.LP\n\nFor example, to select jobs whose \n.I Priority \nattribute has a value greater than \n.I 5:\n.br\n.B \\ \\ \\ qselect -p.gt.5\n\nWhere an optional comparison is not specified, the comparison\noperation defaults to \n.I .eq., \nmeaning PBS checks whether the value of the\nattribute is equal to the option argument.\n\n.B Required Permissions\n.br\nWhen selecting jobs according to resource values, users without\nOperator or Manager privilege cannot specify custom resources which\nwere created to be invisible to unprivileged users.\n\n.SH Options to qselect\n.IP \"(no options)\" 8\nLists all jobs at the server which the user is authorized to list\n(query status of).\n\n.IP \"-a [<op>] <date and time>\" 8\n.B Deprecated.  \nRestricts selection to those jobs whose \n.I Execution_Time\nattribute qualifies when compared to the \n.I date and time\nargument.  
You can select a range of execution times by using this\noption twice, to compare to a minimum time and a maximum time.\n\nThe \n.I date and time\nargument has the format:\n.br\n.I [[CC]YY]MMDDhhmm[.SS]\n.br\nwhere the \n.I MM \nis the two digits for the month, \n.I DD \nis the day of the month, \n.I hh \nis the hour, \n.I mm \nis the minute, and the optional \n.I SS \nis the seconds.  \n.I CC \nis the century and \n.I YY \nthe year.\n\n.IP \"-A <account string>\" 8\nRestricts selection to jobs whose \n.I Account_Name\nattribute matches the specified\n.I account string.\n\n.IP \"-c [<op>] <interval>\" 8\nRestricts selection to jobs whose \n.I Checkpoint\ninterval attribute meets the comparison criteria.\n\nThe \n.I interval\nargument can take one of the following values:\n.RS 11\n.I c\n.br\n.I c=<minutes>\n.br\n.I n\n.br\n.I s\n.br\n.I w\n.br\n.I w=<minutes>\n.RE\n.IP \" \"  8\nWe give the range of interval  values for the \n.I Checkpoint\nattribute the following ordered relationship:\n.br\n.I  n\\ >\\ s\\ >\\ c=minutes\\ >\\ c\\ >\\ u\n\n(Information about \n.I w\nand \n.I w=<minutes>\nis not available.)\n\nFor an interval value of \n\"u\", only \".eq\" and \".ne\" are valid.\n\n.IP \"-h <hold list>\" 8\nRestricts the selection of jobs to those with a specific set of hold types.\nThe holds in the \n.I Hold_Types\njob attribute must be the same as those in the \n.I hold list\nargument, but can be in a different order.\n\nThe\n.I hold list\nargument is a string consisting of the single letter\n.I n,\nor one or more of the letters\n.I u,\n.I o,\n.I p,\nor \n.I s \nin any combination.  
If letters are duplicated, they are treated as if they\noccurred once.\nThe letters represent the hold types:\n\n.B Letter \\ \\ Hold Type\n.br\n---------------------------------------------------------------\n.nf\nn        None\nu        User\no        Other\np        Bad password\ns        System\n.fi\n\n.IP \"-H\" 8\nRestricts selection to finished and moved jobs.\n\n.IP \"-J\" 8\nLimits selection to job arrays only.\n\n.IP \"-l <resource list>\" 8\nRestricts selection of jobs to those with specified resource amounts.\nResource must be job-wide, or be \n.I mem, ncpus, \nor \n.I vmem.\n\nThe \n.I resource list\nis in the following format:\n.br\n.I <resource name> <op> <value>[,<resource name> <op> <value> ...]\n\nYou must specify\n.I op,\nand you can use any of the comparison operators.\n\nBecause resource specifications for chunks using the select statement,\nand placement using the place statement, are stored as strings, the\nonly useful operators for these are \n.I .eq.  \nand \n.I .ne.\n\nUnprivileged users cannot specify custom resources\nwhich were created to be invisible to unprivileged users.\n\n.IP \"-N <name>\" 8\nRestricts selection of jobs to those with the specified value for the\n.I Job_Name\nattribute.\n\n.IP \"-p [<op>] <priority>\" 8\nRestricts selection of jobs to those with the specified \n.I Priority \nvalue(s).\n\n.IP \"-P <project>\" 8\nRestricts selection of jobs to those matching the specified value for the \n.I project\nattribute.\n\nFormat: \n.I Project Name\n\n.IP \"-q <destination>\" 8\nRestricts selection to those jobs at the specified \n.I destination.\n\nThe\n.I destination\nmay take one of the following forms:\n.RS 11\n.I <queue name>\n.br\nRestricts selection to the specified queue at the default server.\n.br\n.I @<server name>\n.br\nRestricts selection to the specified server.\n.br\n.I <queue name>@<server name>\n.br\nRestricts selection to the specified queue at the specified server. 
\n.RE\n.IP \" \" 8\nIf the -q option is not specified, jobs are selected from the default server.\n\n.IP \"-r <rerun>\" 8\nRestricts selection of jobs to those with the specified value for the \n.I Rerunable\nattribute.  The option argument \n.I rerun\nmust be a single character, either\n.I y\nor \n.I n.\n\n.IP \"-s <states>\" 8\nRestricts job selection to those whose\n.I job_state\nattribute has the specified value(s).\n\nThe\n.I states\nargument is a character string consisting of any combination of these\ncharacters:\n.I B\n, \n.I E\n, \n.I F\n,\n.I H\n,\n.I M\n,\n.I Q\n,\n.I R\n,\n.I S\n,\n.I T\n,\n.I U\n,\n.I W\n, and\n.I X.\n(A repeated character is accepted, but no additional meaning is\nassigned to it.)\n\n.nf\n.B State \\ \\ Meaning\n---------------------------------------------------------------\nB       Job array has started execution\nE       The Exiting state\nF       The Finished state\nH       The Held state\nM       The Moved state\nQ       The Queued state\nR       The Running state\nS       The Suspended state\nT       The Transiting state\nU       Job suspended due to workstation user activity\nW       The Waiting state\nX       The eXited state.  Subjobs only\n.fi\n\n.IP\nJobs in any of the specified states are selected.\n\nJob arrays are never in states \n.I R, S, T, \nor \n.I U.  \nSubjobs may be in those states.\n\n.IP \"-t <time option> [<op>] <specified time>\" 8\nJobs are selected according to one of their time-based attributes.  The \n.I time option\nspecifies which time-based attribute is tested.  You give the \n.I specified time\nin \n.I datetime\nformat. \n\nThe \n.I time option \nis one of the following:\n\n.nf\n.B Time \n.B Option \\ Time Attribute \\ Attribute Description\n---------------------------------------------------------------\na       Execution_Time  Timestamp.  Time the job is eligible \n                        for execution.  
Specified in datetime \n                        format.\n\nc       ctime           Timestamp; time at which the job was\n                        created.  Printed by qstat in \n                        human-readable format.  Output in hooks\n                        as seconds since epoch.\n\ne       etime           Timestamp; time when job became \n                        eligible to run, i.e. was enqueued in \n                        an execution queue and was in the \"Q\" \n                        state.  Reset when a job moves queues, \n                        or is held then released.  Not affected \n                        by qaltering.  Printed by qstat in \n                        human-readable format.  Output in hooks \n                        as seconds since epoch.\n\ng       eligible_time   Amount of eligible time job accrued \n                        waiting to run.  Specified as duration.\n\nm       mtime           Timestamp; the time that the job was \n                        last modified, changed state, or \n                        changed locations.  Printed by qstat in \n                        human-readable format.  Output in hooks \n                        as seconds since epoch.\n\nq       qtime           Timestamp; the time that the job \n                        entered the current queue.  Printed by \n                        qstat in human-readable format.  Output \n                        in hooks as seconds since epoch.\n\ns       stime           Timestamp; time the job started.  \n                        Updated when job is restarted.  Printed \n                        by qstat in human-readable format.  \n                        Output in hooks as seconds since epoch.\n\nt       estimated.      Job's estimated start time.  Specified \n        start_time      in datetime format.  Printed by qstat in \n                        human-readable format.  
Output in hooks \n                        as seconds since epoch.\n.fi\n\nTo bracket a time period, use the \n.I -t\noption twice.\n.br\nFor example, to select jobs using \n.I stime \nbetween noon and 3 p.m.:\n.br\n.B \\ \\ \\ qselect -ts.gt.09251200 -ts.lt.09251500\n\n.IP \"-T\" 8\nLimits selection to jobs and subjobs.\n\n.IP \"-u <user list>\" 8\nRestricts selection to jobs owned by the specified usernames.\n\nSyntax of\n.I user_list:\n.br\n.I <username>[@<hostname>][,<username>[@<hostname>],...]\n\nSelects jobs which are owned by the listed users at the corresponding hosts. \nHostnames may be wildcarded on the left end, e.g. \"*.nasa.gov\".  A username\nwithout a \"@<hostname>\" is equivalent to \"<username>@*\", meaning that it is \nvalid at any host.\n\n.IP \"-x\" 8\nSelects finished and moved jobs in addition to queued and running jobs.\n\n.IP \"--version\" 8\nThe \n.B qselect\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH STANDARD OUTPUT\nPBS writes a list of the selected job IDs to standard output.  Each\njob ID is separated by white space.  A job ID can represent a job, a\njob array, or a subjob.  Each job ID has one of the forms:\n.br\n.I <sequence number>.<server name>[@<server name>]\n.br\n.I <sequence number>[].<server name>[@<server name>]   \n.br\n.I <sequence number>[<index>].<server name>[@<server name>]\n.br\n.I @<server name> \nidentifies the server which currently owns the job.\n\n.SH STANDARD ERROR\nThe \n.B qselect \ncommand writes a diagnostic message to standard error for\neach error occurrence.\n\n.SH EXIT STATUS\n.IP Zero 8\nUpon successful processing of all options presented to the\n.B qselect \ncommand\n.IP \"Greater than zero\" 8\nIf the \n.B qselect \ncommand fails to process any option\n\n.SH SEE ALSO\nqstat(1B),\nqsub(1B), \npbs_job_attributes(7B),\npbs_resources(7B)\n\n"
  },
  {
    "path": "doc/man1/qsig.1B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH qsig 1B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B qsig \n\\- signal a PBS batch job\n.SH SYNOPSIS\n.B qsig\n[-s <signal>] <job ID> [<job ID> ...]\n.br\n.B qsig\n--version\n\n.SH DESCRIPTION\nThe\n.B qsig\ncommand sends a signal to all the processes of the specified jobs.  \nThe \n.B qsig \ncommand sends a \n.I Signal Job\nbatch request to the server which owns the job.\n\nThe \n.B qsig\ncommand can be used for jobs, job arrays, subjobs, and ranges of subjobs.\nIf it is used on a range of subjobs, the running subjobs in the range \nare signaled.\n\nNot all signal names are recognized by \n.B qsig\n; if using a signal name does not work, try issuing the signal number instead.\n\n.B Using admin-suspend and admin-resume\n.br\nIf you have a vnode requiring maintenance while remaining powered up,\nwhere you do not want jobs running during the maintenance, you can use\nthe special signals \n.I admin-suspend \nand \n.I admin-resume \nto suspend and resume the jobs on the vnode.  When you use \n.I admin-suspend \non a vnode's\njob(s), the vnode goes into the \n.I maintenance \nstate, and its scheduler does not schedule jobs on it.  You must separately \n.I admin-suspend \neach job on the vnode.  When its last \n.I admin-suspended \njob is admin-resumed, a vnode leaves the \n.I maintenance \nstate.  
\n\n.B Restrictions\n.br\nThe request to signal a job is rejected if:\n.IP -\nThe user is not authorized to signal the job\n.IP -\nThe job is not in the \n.I running\nor \n.I suspended\nstate\n.IP -\nThe requested signal is not supported by the system upon which the\njob is executing\n.IP -\nThe job is in the process of provisioning\n.IP -\nYou attempt to use \n.I admin-resume \non a job that was \n.I suspended\n.IP -\nYou attempt to use \n.I resume \non a job that was \n.I admin-suspended\n.LP\n\n.B Required Privilege\n.br\nManager or Operator privilege is required to use the \n.I admin-suspend, admin-resume, suspend, \nor \n.I resume\nsignals.  Unprivileged users can use other signals.\n\n.SH OPTIONS\n.IP \"-s\" 8\nPBS sends SIGTERM to the job.\n.IP \"-s <signal>\" 8\nPBS sends signal\n.I signal\nto the job.\n.IP \"--version\" 8\nThe \n.B qsig\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH SIGNALS\nYou can send standard signals to a job, or the special signals described below.\nThe\n.I signal\nargument can be in any of the following formats:\n.RS 4\nA signal name, e.g.\n.I SIGKILL\n\nThe signal name without the \n.I SIG\nprefix, e.g. \n.I KILL\n\nAn unsigned signal number, e.g.\n.I 9\n.RE\n\nThe signal name\n.I SIGNULL\nis allowed; in this case the server sends the signal 0 to the job, which \nhas no effect.\n\n.B Special Signals\n.br\nThe following special signals are all lower-case, and have no\nassociated signal number:\n\n.IP \"admin-suspend\"\nSuspends a job and puts its vnodes into the \n.I maintenance \nstate.  The job is put into the \n.I S \nstate and its processes are suspended.  \nWhen suspended, a job is not executing and is not charged for\nwalltime.\n.br\nSyntax:\n.I qsig -s admin-suspend <job ID>\n\n.IP \"admin-resume\"\nResumes a job that was suspended using the \n.I admin-suspend \nsignal, without waiting for its scheduler.   
Cannot be used on jobs that were suspended with the \n.I suspend \nsignal. When the last \n.I admin-suspended\njob has been \n.I admin-resumed, \nthe vnode leaves the maintenance state.\n.br\nSyntax:\n.I qsig -s admin-resume <job ID>\n\n.IP \"suspend\" \nSuspends specified job(s).  Job goes into \n.I suspended (S)\nstate.  When suspended, a job is not\nexecuting and is not charged for walltime.\n\n.IP \"resume\"\nMarks specified job(s) for resumption by a\nscheduler when there are sufficient resources.  If you use\n.B qsig -s resume \non a job that was suspended using \n.B qsig -s suspend\n, the job is resumed when there are sufficient resources.  Cannot\nbe used on jobs that were suspended with the \n.I admin-suspend \nsignal.  \n\n.SH  OPERANDS\nThe \n.B qsig \ncommand accepts one or more\n.I job ID\noperands.  For a job, this has the form:\n.RS 4\n.I <sequence number>[.<server name>][@<server name>]\n.RE\n\nFor a job array, \n.I job ID\ntakes the form:\n.RS 4\n.I <sequence number>[][.<server name>][@<server name>]\n.RE\n\nNote that some shells require that you enclose a job array identifier in\ndouble quotes.\n\n.SH STANDARD ERROR\nThe \n.B qsig \ncommand writes a diagnostic message to standard error for\neach error occurrence.\n\n.SH EXIT STATUS\n.IP Zero 8\nUpon successful processing of all the operands presented to the\n.B qsig \ncommand\n.IP \"Greater than zero\" 8\nIf the \n.B qsig \ncommand fails to process any operand\n\n.SH SEE ALSO\nqsub(1B), pbs_sigjob(3B),\npbs_resources(7B)\n
  },
  {
    "path": "doc/man1/qstat.1B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH qstat 1B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B qstat \n- display status of PBS jobs, queues, or servers\n\n\n.SH SYNOPSIS\n.RE\n.B Displaying Job Status\n.RS 7\nDefault format:\n.br\n.B qstat \n[-E] [-J] [-p] [-t] [-w] [-x] [[<job ID> | <destination>] ...]\n\n.sp\nLong format:\n.br\n.B qstat \n-f [-F json | dsv [-D <delimiter>]] [-E] [-J] [-p] \n[-t] [-w] \n.RS 6\n[-x] [[<job ID> | <destination>] ...]\n.RE\n.sp\nAlternate format:\n.br\n.B qstat \n[-a | -H | -i | -r ] [-E] [-G | -M] [-J] [-n [-1]]   \n.RS 6\n[-s [-1]] [-t] [-T] [-u <user list>] [-w]\n.br\n[[<job ID> | <destination>] ...]\n.RE\n.RE\n\n.B Displaying Queue Status\n.RS 7\nDefault format:\n.br\n.B qstat \n-Q [<destination> ...]\n.sp\nLong format:\n.br\n.B qstat \n-Q -f [-F json | dsv [-D <delimiter>]] [-w] [<destination> ...]\n.sp\nAlternate format:\n.br\n.B qstat \n-q [-G | -M] [<destination> ...]\n.RE\n\n.B Displaying Server Status\n.RS 7\nDefault format:\n.br\n.B qstat \n-B [<server> ...]\n.sp\nLong format:\n.br\n.B qstat \n-B -f [-F json | dsv [-D <delimiter>]] [-w] [<server> ...]\n\n.B Version Information\n.br\n.B qstat\n--version\n\n.SH DESCRIPTION\nThe\n.B qstat\ncommand displays the status of jobs, queues, or servers, writing\nthe status information to standard output.\n.LP\nWhen displaying job status information, the 
\n.B qstat \ncommand displays status information about all specified jobs, job\narrays, and subjobs.  You can specify jobs by ID, or by destination,\nfor example all jobs at a specified queue or server.\n\n.B Display Formats\n.br\nYou can use particular options to display status information in a\ndefault format, an alternate format, or a long format.  Default and\nalternate formats display all status information for a job, queue, or\nserver with one line per object, in columns.  Long formats display\nstatus information showing all attributes, one attribute to a line.\n\n.B Displaying Information for Finished and Moved Jobs\n.br\nYou can display status information for finished and moved jobs by\nusing the \n.I -x \nand \n.I -H \noptions.\n\nIf your job has been moved to another server through peer scheduling,\ngive the job ID as an argument to \n.B qstat.  \nIf you do not specify the job ID, your job will not appear to exist.  \nFor example, your job 123.ServerA is moved to ServerB.  In this case, you can use\n.br\n.B \\ \\ \\ qstat 123\n.br\nor\n.br\n.B \\ \\ \\ qstat 123.ServerA\n\nSpecifying the full job name, including the server, avoids the possibility\nthat \n.B qstat\nwill report on a job named 123.ServerB that was moved to ServerA.\n\nTo list all jobs at ServerB, you can use:\n.br\n.B \\ \\ \\ qstat @ServerB\n\n.B Displaying Truncated Data\n.br\nWhen the number of characters required would exceed the space\navailable, qstat truncates the output and puts an asterisk (\"*\") in\nthe last position.  For example, in default job display format, there\nare three characters allowed for the number of cores.  
If the actual\noutput were \n.I 1234\n, the value displayed would be \n.I 12*\ninstead.\n\n.B Required Privilege\n.br\nUsers without Manager or Operator privilege cannot view \nresources or attributes that are invisible to unprivileged users.\n\n.SH DISPLAYING JOB STATUS\n.B Job Status in Default Format\n.br\nTriggers: no options, or any of the \n.I -J, -p, -t, \nor \n.I -x\noptions.\n\nThe \n.B qstat \ncommand displays job status in default format when you specify no\noptions, or any of the \n.I -J, -p, -t,\nor \n.I -x, \noptions.\nJobs are displayed one to a line, with these column headers:\n\n.nf\nJob ID   Name       User      Time Use S Queue   \n-------- ---------- --------- -------- - -----\n\nDescription of columns:\n\nColumn    Width                Width    Description\nName      without -w           with -w\n-------------------------------------------------------------------------------\n\nJob ID    17 (22 when          30       Job ID assigned by PBS\n          max_job_sequence_id\n          > 10 million)\n\nName      16                   15       Job name specified by submitter       \n\nUser      16                   15       Username of job owner                 \n\nTime Use  8                    8        The CPU time used by the job.  Before \nor                                      the application has actually started  \nPercent                                 running, for example during stage-in, \nComplete                                this field is \"0\". At the point where \n                                        the application starts accumulating   \n                                        cput, this field changes to \"00:00:00\".  
\n                                        After that, every time the MoM polls  \n                                        for resource usage, the field is      \n                                        updated.\n\n                                        The MoM on each execution host polls   \n                                        for the usage of all processes on her \n                                        host belonging to the job.  Usage is  \n                                        summed.  The polling interval is short \n                                        when a job first starts running and   \n                                        lengthens to a maximum of 2 minutes.  \n\n                                        If you specify -p, the Time Use column \n                                        is replaced with the percentage \n                                        completed for the job.  For a job array \n                                        this is the percentage of subjobs \n                                        completed.  
For a normal job, it is the \n                                        percentage of allocated CPU time used.\n\nS         1                    1        The job's state:\n                                          B  Array job has at least one subjob\n                                             running\n                                          E  Job is exiting after having run\n                                          F  Job is finished\n                                          H  Job is held\n                                          M  Job was moved to another server\n                                          Q  Job is queued\n                                          R  Job is running\n                                          S  Job is suspended\n                                          T  Job is being moved to new location\n                                          U  Cycle-harvesting job is suspended \n                                             due to keyboard activity\n                                          W  Job is waiting for its submitter-\n                                             assigned start time to be reached\n                                          X  Subjob has completed execution or\n                                             has been deleted\n\nQueue     16                   15       The queue in which the job resides\n.fi\n.LP\n\n.B Job Status in Long Format\n.br\nTrigger: the \n.I -f\noption.\n.br\nIf you specify the \n.I -f\n(full) option, full job status information for each job is displayed \nin this order:\n.RS 5\nThe job ID\n.br\nEach job attribute, one to a line\n.br\nThe job's submission arguments \n.br\nThe job's executable, in JSDL format\n.br\nThe executable's argument list, in JSDL format\n.RE\n\nThe job attributes are listed \nas \n.I <name> = <value>\npairs. This includes the \n.I exec_host \nand \n.I exec_vnode \nstrings.  
\nThe full output can be very large.\n.sp\n\nThe \n.I exec_host \nstring has the format:\n.br\n.I <host1>/<T1>*<P1>[+<host2>/<T2>*<P2>+...]\n.br\nwhere \n.br\n.I T1\nis the task slot number (the index) of the job on host1.\n.br\n.I P1\nis the number of processors allocated to the job from host1.  The \nnumber of processors allocated does not appear if it is 1.\n\nThe \n.I exec_vnode \nstring has the format:\n.br\n.I (<vnode1>:ncpus=<N1>:mem=<M1>)[+(<vnode2>:ncpus=<N2>:mem=<M2>)+...]\n.br\nwhere\n.br\nN1 is the number of CPUs allocated to that job on vnode1.\n.br\nM1 is the amount of memory allocated to that job on vnode1.\n\n.B Job Status in Alternate Format\n.br\nTriggers: any of the \n.I -a, -i, -G, -H, -M, -n, -r, -s, -T,\nor\n.I -u <user list>\noptions.\n.br\nThe \n.B qstat\ncommand displays job status in the alternate format if you specify any of the\n.I -a, -i, -G, -H, -M, -n, -r, -s, -T,\nor \n.I -u <user list>\noptions.\nJobs are displayed one to a line.  If jobs are running and the -n\noption is specified, or if jobs are finished or moved and the -H and\n-n options are specified, there is a second line for the \n.I exec_host\nstring.\n\n.nf\nColumn headers:\n\n                                             Req'd  Req'd   Elap\nJob ID Username Queue Jobname SessID NDS TSK Memory Time  S Time\n------ -------- ----- ------- ------ --- --- ------ ----- - ---- \n\nDescription of columns:\n   \nColumn    Width                Width    Description\nName      without -w           with -w  \n--------------------------------------------------------------------------------\n\nJob ID    15 (20 when          30       The job ID assigned by PBS\n          max_job_sequence_id\n          > 10 million)\n\nUsername  8                    15       Username of job owner\n\nQueue     8                    15       Queue in which the job resides\n\nJobname   10                   15       Job name specified by submitter\n\nSessID    6                    8        Session ID.  
Appears only if the job \n                                        is running.\n\nNDS       3                    4        Number of chunks or vnodes requested \n                                        by the job\n\nTSK       3                    5        Number of CPUs requested by the job\n\nReq'd     6                    6        Amount of memory requested by the job\nMemory\n\nReq'd     5                    5        If CPU time is requested, this shows CPU \nTime                                    time.  Otherwise, shows walltime\n\nS         1                    1        The job's state.  (See listing above  \n                                        for default format)\n\nElap      5                    5        If CPU time is requested, this shows CPU \nTime                                    time.  Otherwise, shows walltime.\nor       \nEst Start                               If you use the -P option, displays \nTime                                    estimated start time for queued jobs, \n                                        replacing the Elap Time field with the \n                                        Est Start Time field.\n.fi\n\n.SH Grouping Jobs and Sorting by ID\nTrigger: the \n.I -E \noption.  \n.br\nYou can use the \n.I -E \noption to sort and group jobs in the output of \n.B qstat.  \nThe \n.I -E \noption groups jobs by server and displays each group by ascending ID.\nThis option also improves \n.B qstat \nperformance.  
The following table shows how the \n.I -E \noption affects the behavior of \n.B qstat:\n  \n.B How qstat is Used \\ \\  Result Without -E \\ \\ \\ \\ \\ \\ \\ \\ \\  Result With -E \n.br\n-----------------------------------------------------------------------\n.br\nqstat (no job ID     Queries the default server  No change in behavior; \n.br\nspecified)           and displays result         same as without \n.I -E\n.br\n                                                 option\n.br\n-----------------------------------------------------------------------\n.br\nqstat <list of job   Displays results in the     Displays results in \n.br\nIDs from single      order specified             ascending ID order\n.br\nserver>\n.br\n-----------------------------------------------------------------------\n.br\nqstat <job IDs at    Displays results in the     Groups jobs by server.  \n.br\nmultiple servers>    order they are specified    Displays each group in \n.br\n                                                 ascending order\n.br\n-----------------------------------------------------------------------\n\n\n.SH DISPLAYING QUEUE STATUS\n.B Queue Status in Default Format\n.br\nTrigger: the \n.I -Q\noption by itself.\n.br\nThe \n.B qstat\ncommand displays queue status in the default format if \nthe only option is\n.I -Q.\nQueue status is displayed one queue to a line, with these column headers: \n\n.nf\nQueue Max  Tot  Ena  Str Que  Run  Hld  Wat  Trn  Ext  Type\n----- ---- ---- ---- --- ---- ---- ---- ---- ---- ---- ----\n.fi\n\nDescription of columns:\n.IP \"Queue\" 15\nQueue name\n.IP \"Max\" 15\nMaximum number of jobs allowed to run concurrently in this queue\n.IP \"Tot\" 15\nTotal number of jobs in the queue\n.IP \"Ena\" 15\nWhether the queue is \nenabled\nor \ndisabled\n.IP \"Str\" 15\nWhether the queue is \nstarted\nor \nstopped\n.IP \"Que\" 15\nNumber of queued jobs\n.IP \"Run\" 15\nNumber of running jobs\n.IP \"Hld\" 15\nNumber of held jobs\n.IP \"Wat\" 15\nNumber of 
waiting jobs\n.IP \"Trn\" 15\nNumber of jobs being moved (transiting)\n.IP \"Ext\" 15\nNumber of exiting jobs\n.IP \"Type\" 15\nType of queue: execution or routing\n.sp\n\n.LP\n\n.B Queue Status in Long Format\n.br\nTrigger: the \n.I -Q\nand \n.I -f\noptions together.\n.br\nIf you specify the\n.I -f\n(full) option with the \n.I -Q\noption, full queue status information for each queue is displayed\nstarting with the queue name, followed by each attribute, one to a line,\nas\n.I <name> = <value>\npairs.  \n.sp\n\n.B Queue Status: Alternate Format\n.br\nTriggers: any of the \n.I -q, -G, \nor \n.I -M\noptions.\n.br\nThe\n.B qstat\ncommand displays queue status in the alternate format if you \nspecify any of the\n.I -q, -G,\nor \n.I -M\noptions.\nQueue status is displayed one queue to a line, and the lowest line \ncontains totals for some columns.\n\nThese are the alternate format queue status column headers:\n\n.nf\nQueue Memory CPU Time Walltime Node Run Que Lm State\n----- ------ -------- -------- ---- --- --- -- -----\n.fi\n\nDescription of columns:\n.IP \"Queue\" 15\nQueue name\n.IP \"Memory\" 15\nMaximum amount of memory that can be requested by a job in this queue\n.IP \"CPU Time\" 15\nMaximum amount of CPU time that can be requested by a job in this queue\n.IP \"Walltime\" 15\nMaximum amount of walltime that can be requested by a job in this queue\n.IP \"Node\" 15\nMaximum number of vnodes that can be requested by a job in this queue\n.IP \"Run\" 15\nNumber of running and suspended jobs.  Lowest row is total number of\nrunning and suspended jobs in all the queues shown\n.IP \"Que\" 15\nNumber of queued, waiting, and held jobs.  
Lowest row is total number \nof queued, waiting, and held jobs in all the queues shown\n.IP \"Lm\" 15\nMaximum number of jobs allowed to run concurrently in this queue\n.IP \"State\" 15\nState of this queue: \n.I E\n(enabled) or \n.I D\n(disabled), \nand \n.I R\n(running) or \n.I S\n(stopped)\n\n.sp\n\n\n.SH DISPLAYING SERVER STATUS\n\n.B Server Status in Default Format:\n.br\nTrigger: the \n.I -B \noption.\n.br\nThe \n.B qstat \ncommand displays server status if the only option given is \n.I -B.\n.sp\nColumn headers for default server status output:\n\n.nf\nServer Max Tot Que Run Hld Wat Trn Ext Status\n------ --- --- --- --- --- --- --- --- ------\n.fi\n\nDescription of columns:\n.IP \"Server\" 15\nName of server\n.IP \"Max\" 15\nMaximum number of jobs allowed to be running concurrently \non the server\n.IP \"Tot\" 15\nTotal number of jobs currently managed by the server\n.IP \"Que\" 15\nNumber of queued jobs\n.IP \"Run\" 15\nNumber of running jobs\n.IP \"Hld\" 15\nNumber of held jobs\n.IP \"Wat\" 15\nNumber of waiting jobs\n.IP \"Trn\" 15\nNumber of transiting jobs\n.IP \"Ext\" 15\nNumber of exiting jobs\n.IP \"Status\" 15\nStatus of the server\n.RE\n\n.B Server Status in Long Format\n.br\nTrigger: the\n.I -B\nand\n.I -f\noptions together.\n.br\nIf you specify the\n.I -f\n(full) option with the\n.I -B\noption, qstat displays full server status information\nstarting with the server name, followed by each server attribute, \none to a line, as\n.I <name> = <value>\npairs. Includes PBS version information.\n.sp\n\n.SH OPTIONS\n.B Generic Job Status Options\n.IP \"-E\" 10\nGroups jobs by server and displays jobs sorted by ascending ID.  When\n.B qstat\nis presented with a list of jobs, jobs are grouped by server and each \ngroup is displayed by ascending ID.  This option also improves \n.B qstat\nperformance.  
\n.LP\n.B Default Job Status Options\n.IP \"-J\" 10\nPrints status information for job arrays (not subjobs).\n.IP \"-t\" 10\nDisplays status information for jobs, job arrays, and subjobs.\nWhen used with \n.I -J\noption, prints status information for subjobs only.\n\n.IP \"-p\" 10\nThe \n.I Time Use \ncolumn is replaced with the percentage completed for the job.  For an \narray job this is the percentage of subjobs completed.  For a normal\njob, it is the percentage of allocated CPU time used.\n\n.IP \"-x\" 10\nDisplays status information for finished and moved jobs in \naddition to queued and running jobs.\n.LP\n\n.B Alternate Job Status Options\n.IP \"-a\" 10\nAll queued and running jobs are displayed.  \nIf a\n.I destination\nis specified, information for all jobs at that\n.I destination\nis displayed.\nIf a \n.I job ID\nis specified, information about that job is displayed.  \nAlways specify this option before the \n.I -n \nor \n.I -s\noptions, otherwise they will not take effect.\n\n.IP \"-H\" 10\nWithout a job identifier, displays information for all finished or moved jobs.\nIf a \n.I job ID \nis given, displays information for that job regardless of \nits state.  If a \n.I destination\nis specified, displays information for finished or moved jobs, or \nspecified job(s), at \n.I destination.\n\n.IP \"-i\" 10\nIf a\n.I destination\nis given, information for queued, held or waiting jobs at that \n.I destination\nis displayed.\nIf a \n.I job ID\nis given, information about that job is displayed regardless\nof its state. 
\n\n.IP \"-n\" 10\nThe \n.I exec_host \nstring is listed on the line below the basic information.\nIf the \n.I -1\noption is given, the \n.I exec_host \nstring is listed on the end of the same line.\nIf using the \n.I -a \noption, always specify the\n.I -n \noption after\n.I -a,\notherwise the\n.I -n \noption does not take effect.\n \n.IP \"-r\" 10\nIf a \n.I destination\nis given, information for running or suspended jobs at that \n.I destination\nis displayed.\nIf a \n.I job ID\nis given, information about that job is displayed regardless of its state.  \n\n.IP \"-s\" 10\nAny comment added by the administrator or scheduler is shown on the line below the basic information.\nIf the \n.I -1 \noption is given, the comment string is listed on the end of the same line.\nIf using the \n.I -a \noption, always specify the\n.I -s \noption after\n.I -a,\notherwise the\n.I -s \noption does not take effect.\n\n.IP \"-T\" 10\nDisplays estimated start time for queued jobs, replacing the \n.I Elap Time\nfield with the \n.I Est Start Time\nfield.  Jobs with earlier estimated start\ntimes are displayed before those with later estimated start times. \n\nRunning jobs are displayed before other jobs.  Running jobs are sorted\nby their \n.I stime \nattribute (start time).\n\nQueued jobs whose estimated start times are unset\n(estimated.start_time = unset) are displayed after those with\nestimated start times, with estimated start time shown as a double dash\n(\"--\").  Queued jobs with estimated start times in the past are treated\nas if their estimated start times are unset.\n\nIf a job's estimated start time cannot be calculated, the start time\nis shown as a question mark (\"?\").\n\nTime displayed is local to the qstat command.  
Current week begins on\nSunday.\n\nThe following table shows the format for the \n.I Est Start Time \nfield when the\n.I -w \noption is not used:\n\n.IP \" \" 10\n.nf\n.B Format\\ \\ \\ \\ \\ \\ \\ \\ \\ Job's Estimated Start Time\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ Example\n-------------------------------------------------------------\n<HH>:<MM>      Today                                  15:34\n-------------------------------------------------------------\n<2-letter      Within 7 days, but after today         We 15\nweekday> <HH>\n-------------------------------------------------------------\n<3-letter      This calendar year, but after this     Feb \nmonth name>    week\n-------------------------------------------------------------\n<YYYY>         Less than or equal to 5 years from     2018\n               today, after this year\n-------------------------------------------------------------\n>5yrs          More than 5 years from today           >5yrs\n-------------------------------------------------------------\n.fi\n\n.IP \" \" 10\nThe following table shows the format for the \n.I Est Start Time \nfield when the\n.I -w \noption is used:\n.IP \" \" 10\n.nf\n.B Format\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ Job Est Start Time\\ \\ Example\n-------------------------------------------------------------\nToday <HH>:<MM>     Today               Today 13:34\n-------------------------------------------------------------\n<Day> <HH>:<MM>     This week, but      Wed 15:34\n                    after today  \n-------------------------------------------------------------\n<Day> <Month>       This year, but      Wed Feb 10 15:34\n<Daynum> <HH>:<MM>  after this week\n-------------------------------------------------------------\n<Day> <Month>       After this year     Wed Feb 10 2018 15:34\n<Daynum> <YYYY>\n<HH>:<MM>\n-------------------------------------------------------------\n.fi\n\n.IP \" \" 10\nWhen used with the \n.I -f \noption, prints the full timezone-qualified 
start time.\n\nEstimated start time information can be made unavailable to unprivileged \nusers; in this case, the estimated start time appears to be unset.\n\n.IP \"-u <user list>\" 10\nIf a \n.I destination\nis given, status for jobs at that \n.I destination \nowned by users in \n.I <user list>\nis displayed.\nIf a \n.I job ID\nis given, status information for that job is displayed regardless of the job's \nownership.\n.sp\nFormat: \n.I <username>[@<hostname>][, <username>[@<hostname>], ...] \nin comma-separated list.  \n\nHostnames may be wildcarded, but not domain names.\nWhen no hostname is specified, username is for any host.\n\n.IP \"-w\" 10\nCan be used with job status in default and alternate formats.  Allows display\nof wider fields up to 120 characters.  See\n.B Job Status in Default Format\nand \n.B Job Status in Alternate Format\nfor column widths.\n\nThis option is different from the \n.I -w\noption used with the\n.I -f \nlong-format option.\n\n.IP \"-1 (hyphen one)\" 10\nReformats \n.B qstat \noutput to a single line.  Can be used only in conjunction with the \n.I -n\nand/or \n.I -s\noptions.\n\n.LP\n.B Queue Status Options\n.IP \"-Q\" 10\nDisplays queue status in default format.  \nOperands must be \n.I destinations.\n.IP \"-q\" 10\nDisplays queue status in alternate format.  \nOperands must be \n.I destinations.\n\n.LP\n.B Server Status Options\n.IP \"-B\" 10\nDisplay server status.\nOperands must be names of servers.\n\n.LP\n.B Job, Queue, and Server Status Options\n.IP \"-f [-w]\" 10\nFull display for long format. 
Job, queue or server attributes displayed one to a line.\n.br\nJSON output: PBS reports \n.I resources_used \nvalues for resources that are created or set in a hook as JSON strings\nin the output of \n.B qstat -f.\n\nIf MoM returns a JSON object (a Python dictionary), PBS reports the\nvalue as a string in single quotes:  \n.br\n.I resources_used.<resource_name> = '{ <mom_JSON_item_value>, \n.I <mom_JSON_item_value>, <mom_JSON_item_value>, ..}'\n.br\nExample:  MoM returns { \"a\":1, \"b\":2, \"c\":1,\"d\": 4} for \n.I resources_used.foo_str.  \nWe get: \n.br\nresources_used.foo_str='{\"a\": 1, \"b\": 2, \"c\":1,\"d\": 4}'\n\nIf MoM returns a value that is\nnot a valid JSON object, the value is reported verbatim.\n.br\nExample: MoM returns \"hello\"  for \n.I resources_used.foo_str.  \nWe get:\n.br\nresources_used.foo_str=\"hello\"\n\n.br\nOptional \n.I -w \nprints each attribute on one unbroken line.  Feed characters are converted:\n.RS 13\nNewline is converted to backslash concatenated with \"n\", resulting in \"\\\\n\"\n.br\nForm feed is converted to backslash concatenated with \"f\", resulting in \"\\\\f\"\n.RE\n.RS 10\nThis \n.I -w \nis independent of the \n.I -w \njob output option used with default and alternate formats.\n.RE\n\n.IP \"-F dsv [-D <delimiter>]\" 10\n\nPrints output in delimiter-separated value format.  The default\n.I delimiter \nis a pipe (\"|\").  You can specify a character or a string\ndelimiter using the \n.I -D \nargument to the \n.I -F dsv \noption.  
For example, to use a comma as the delimiter:\n.RS 13\n.B qstat -f -F dsv -D,\n.RE\n.RS 10 \nIf the delimiter itself appears in a value, it is escaped:\n.RS 3\nOn Linux, the delimiter is escaped with a backslash (\"\\\\\").\n.br\nOn Windows, the delimiter is escaped with a caret (\"^\").\n.RE\n.RE\n.RS 10\n.sp\nFeed characters are converted:\n.RS 3\nNewline is converted to backslash concatenated with \"n\", resulting in \"\\\\n\"\n.br\nForm feed is converted to backslash concatenated with \"f\", resulting in \"\\\\f\"\n.RE\n\nA newline separates each job from the next.  Using newline as the\ndelimiter leads to undefined behavior.\n\n.br\nExample of getting output in delimiter-separated value format:\n.RS 3\n.B qstat -f -Fdsv\n\nJob Id: 1.vbox|Job_Name = STDIN|Job_Owner = root@vbox|job_state = Q|queue = workq|server = vbox|Checkpoint = u|ctime = Fri Nov 11 17:57:05 2016|Error_Path = ...\n.RE\n.RE\n\n.IP \"-F json\" 10\nPrints output in JSON format (http://www.json.org/).  \nAttribute output is preceded by timestamp, PBS version, and PBS server hostname.  \nExample: \n\n.RS 13\n.B qstat -f -F json\n\n{\n        \"timestamp\":1479277336,\n        \"pbs_version\":\"14.1\",\n        \"pbs_server\":\"vbox\",\n        \"Jobs\":{\n                \"1.vbox\":{\n                        \"Job_Name\":\"STDIN\",\n                        \"Job_Owner\":\"root@vbox\",\n                        \"job_state\":\"Q\",\n...\n\n.RE\n\n.IP \"-G\" 10\nShows size in gigabytes.  Triggers alternate format.\n.IP \"-M\" 10\nShows size in megawords.  A word is considered to be 8 bytes.  \nTriggers alternate format.\n\n.LP\n\n.B Version Information\n.IP \"--version\" 8\nThe \n.B qstat\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n\n.SH OPERANDS\n.B Job Identifier Operands\n.br\nThe \n.I job ID\nis assigned by PBS at submission.  
\nJob IDs are used only with job status requests.\nStatus information for specified job(s) is displayed.\nFormats:\n.IP Job 15\n.I <sequence number>[.<server>][@<server>]\n.IP \"Job array\" 15\n.I <sequence number>[][.<server>][@<server>]\n.IP Subjob 15\n.I <sequence number>[<index>][.<server>][@<server>]\n.IP \"Range of subjobs\" 15\n.I <sequence number>[<start>-<end>][.<server>][@<server>]\n.LP\nNote that some shells require that you enclose a job array identifier in\ndouble quotes.\n\n.RE\n.LP\n\n.B Destination Operands\n.br\nName of queue, name of server, or name of queue at a specific server.\nFormats:\n.IP \"queue name\" 15\nSpecifies name of queue for job or queue display.\n.RS 18\nWhen displaying job status, PBS displays status for all jobs in the\nspecified queue at the default server.\n.br\nWhen displaying queue status, PBS displays status for the specified\nqueue at the default server.\n.RE\n.IP \"queue name@server name\" 15\nSpecifies name of queue at server for job or queue display.\n.RS 18\nWhen displaying job status, PBS displays status for all jobs in the\nspecified queue at the specified server.\n.br\nWhen displaying queue status, PBS displays status for the specified\nqueue at the specified server.\n.RE\n.IP \"@server name\" 15\nSpecifies server name for job or queue display.  
\n.RS 18\nWhen displaying job status, PBS displays status for all jobs at all\nqueues at the specified server.\n.br\nWhen displaying queue status, PBS displays status for all queues at\nthe specified server.\n.RE\n.IP \"server name\" 15\nSpecifies server name for server display.\n.RS 18\nWhen displaying server status (with the -B option), PBS displays status\nfor the specified server.\n.RE\n.LP\n\n.SH STANDARD ERROR\nThe \n.B qstat \ncommand writes a diagnostic message to standard error for\neach error occurrence.\n\n.SH EXIT STATUS\n.IP Zero 15\nUpon successful processing of all operands\n.IP \"Greater than zero\" 15\nIf any operands could not be processed\n\n.SH SEE ALSO\nqalter(1B), qsub(1B), pbs_alterjob(3B), pbs_statjob(3B), pbs_statque(3B),\npbs_statserver(3B), pbs_submit(3B),\npbs_job_attributes(7B), pbs_queue_attributes(7B), pbs_server_attributes(7B),\npbs_resources(7B) \n"
  },
  {
    "path": "doc/man1/qsub.1B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH qsub 1B \"25 January 2021\" Local \"PBS Professional\"\n.SH NAME\n.B qsub \n\\- submit a job to PBS \n\n\n.SH SYNOPSIS\n.B qsub \n[-a <date and time>] [-A <account string>] [-c <checkpoint spec>] \n.RS 5\n[-C <directive prefix>] [-e <path>] [-f] [-h] \n.br\n[-I [-G [-- <GUI application/script>]] | [-X]] [-j <join>] \n.br\n[-J <range> [%<max subjobs>]] [-k <discard>] [-l <resource list>] \n.br\n[-m <mail events>] [-M <user list>] [-N <name>] [-o <path>]\n.br\n[-p <priority>] [-P <project>] [-q <destination>] [-r <y|n>]\n.br\n[-R <remove options>] [-S <path list>] [-u <user list>] \n.br\n[-v <variable list>] [-V] [-W <additional attributes>] [-z] \n.br\n[- | <script> | -- <executable> [<arguments to executable>]]\n.RE\n.B qsub\n--version\n\n.SH DESCRIPTION\nYou use the \n.B qsub \ncommand to submit a batch job to PBS.\nSubmitting a PBS job specifies a task, requests resources, and sets job attributes.\n\nThe \n.B qsub \ncommand can read from a job script, from standard input, or from the command line.\nWhen the user has submitted the job, PBS returns the job identifier for that job.\nFor a job, this is of the form:\n.RS 3\n.I <sequence number>.<server name>\n.RE\n\nFor an array job, this is of the form:\n.RS 3\n.I <sequence number>[].<server name>\n.RE\n\nDuring execution, jobs can be interactive or 
non-interactive.\nInteractive jobs are not rerunnable, and if they are blocking, you\ncannot use their exit status.\n\nJobs are run as the user and group who submitted the job.\n\n.B Background Process\n.br\nBy default, on the first invocation, qsub spawns a background process\nto manage communication with the PBS server.  Later invocations of\nqsub attempt to communicate with this background process.  Under\ncertain circumstances, calls to qsub when it uses the background\nprocess can result in communication problems.  You can prevent qsub\nfrom spawning a background process by using the \n.I -f \noption, although this can degrade performance.\n\n.B Where PBS Puts Job Files\n.br\nBy default, PBS copies the stdout and stderr files from the job back to the \ncurrent working directory where the \n.B qsub\ncommand is executed.  However, you can specify the output paths using\nthe \n.I -o \nand \n.I -e \noptions.  You can also specify which and whether these\nfiles should be kept on the execution host via the \n.I -k \noption, or deleted, using the \n.I -R \noption.  See the \n.I -k, -o, -e, \nand \n.I -R \noptions.\n\n.B Submitting Jobs By Using Scripts\n.br\nTo submit a PBS job by using a script, you specify a job script on the\ncommand line:\n.br\n.I \\ \\ \\ qsub [<options>] <script name>\n\nFor example:\n.br\n.B \\ \\ \\ qsub myscript.sh\n\nJob scripts are run as the user and group who submitted the job.  Job\nscripts can be written in Python, Linux shells such as csh and sh, the\nWindows command batch language, Perl, etc.\n\nA PBS job script consists of the following:\n.RS 3\nOptional shell specification \n\nAny PBS directives\n\nThe user's tasks: programs, commands, or applications \n\nOptional comments (Under Windows, comments can contain only ASCII\ncharacters.)\n.RE\n\n.B Using Shells and Interpreters\n.br\nBy default, PBS uses your login shell to run your script.  
You can\noptionally specify a different shell or interpreter to run your\nscript:\n\nVia the \n.I -S \noption to qsub:\n.RS 3\n.I qsub -S <path to shell> <script name>\n\nFor example:\n.br\n.B qsub -S /bin/bash myscript.sh\n.RE\n\nYou can specify a different interpreter in the first line of your script.\nFor example:\n.RS 3\n.nf\n.B cat myscript.sh\n#!/bin/bash\n#PBS -N MyHelloJob\necho \"Hello\"\n.fi\n.RE\n\n.B Python Job Scripts\n.br\nYou can use the same Python script under Linux or under Windows, if\nthe script is written to be portable.  PBS includes a Python package,\nallowing Python job scripts to run; you do not need to install Python.\nYou can include PBS directives in a Python job script as you would in\na Linux shell script.  Python job scripts can access Win32 APIs,\nincluding the following modules:\n.RS 3\nWin32api\n\nWin32con\n\nPywintypes\n.RE\n\nFor example, we have a Python job script that includes PBS directives:\n.RS 3\n.B cat myjob.py\n.nf\n#!/usr/bin/python\n#PBS -l select=1:ncpus=3:mem=1gb\n#PBS -N HelloJob\nprint \"Hello\"\n.fi\n.RE\n\nSo long as the first line of the script is the \"#!/usr/bin/python\" line\nor similar, you don't need to do anything special to run a Python script:\n.RS 3\n.I qsub <script name>\n.RE\n\nTo run a Python job script under Windows, use the path to the pbs_python\nexecutable on the execution host:\n.RS 3\n.I qsub -S <pbs_python path on execution host> <script name>\n\nFor example, \n.br\nqsub -S %PBS_EXEC%\\\\bin\\\\pbs_python.exe <script name>\n.RE\n\nIf the script pathname contains spaces, it must be quoted, for example:\n.RS 3\nqsub -S \"C:\\\\Program Files\\\\PBS Pro\\\\bin\\\\pbs_python.exe\" <script name>\n.RE\n\n.B Linux Shell Job Scripts\n.br\nFor example, we have a Linux job script named \"weatherscript\" for a job named\n\"Weather1\" which runs the executable \"weathersim\" on Linux:\n.RS 3\n.nf\n#!/bin/sh\n#PBS -N Weather1\n#PBS -l walltime=1:00:00\n/usr/local/weathersim\n.fi\n.RE\n\nTo submit the job, 
the user types:\n.RS 3\n.B qsub weatherscript <return>\n.RE\n\n.B Windows Command Job Scripts\n.br\nFor example, we have a script named \"weather.exe\" for a job named \"Weather1\" which\nruns under Windows:\n.RS 3\n.nf\n#PBS -N Weather1\n#PBS -l walltime=1:00:00\nweathersim.exe\n.fi\n.RE\n\nTo submit the job, the user types:\n.RS 3\n.B qsub weather.exe <return>\n.RE\n\nIn Windows, if you use Notepad to create a job script, the last line\ndoes not automatically get newline-terminated.  Be sure to add one\nexplicitly; otherwise, the job gets the following error message:\n\n   More?\n\nwhen the Windows command interpreter tries to execute that last line.\n\n.B Submitting Jobs From Standard Input\n.br\nTo submit a PBS job by typing job specifications at the command line,\nyou type:\n.br\n.I qsub [<options>] [-] <return>\n.br\nthen type any directives, then any tasks, followed by:\n.RS 3\nLinux: CTRL-D on a line by itself\n\nWindows: CTRL-Z <return>\n.RE\n\nto terminate the input.\n\nThe qsub command behaves the same both with and without the dash operand.\n.br\nFor example, on Linux:\n.RS 3\n.nf\n.B qsub <return>\n#PBS -N StdInJob\nsleep 100\n<CTRL-D>\n.fi\n.RE\n\n.B Submitting Job Directly by Specifying Executable on Command Line\n.br\nTo submit a job directly, you specify the executable on the command\nline:\n.RS 3\n.I qsub [<options>] -- <executable> [<arguments to executable>] <return>\n.RE\n\nWhen you run qsub this way, it runs the executable directly.  It does\nnot start a shell, so no shell initialization scripts are run, and\nexecution paths and other environment variables are not set.  There is\nnot an easy way to run your command in a different directory.  
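The path caveat above can be sketched in a short shell fragment.  This is illustrative only: actually submitting requires a PBS server, so the qsub command line is printed rather than run, and the program name ("sleep") and job name ("JobA") are stand-ins.

```shell
#!/bin/sh
# Because `qsub -- <executable>` starts no shell, PATH lookup and shell
# initialization do not happen for the job; resolve the absolute path
# at submission time instead.
prog=$(command -v sleep)   # stand-in for your program, e.g. /usr/bin/sleep

# Only print the submission command here; running it needs a PBS server.
echo "would run: qsub -N JobA -- $prog 100"
```

The same `command -v` resolution step works for any executable that would otherwise rely on PATH being set on the execution host.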
You\nshould make sure that environment variables are set correctly, and you\nwill usually have to specify the full path to the command.\n\nFor example, to run myprog with the arguments a and b:\n.RS 3\n.B qsub -- myprog a b <return>\n.RE\n\nFor example, to run myprog with the arguments a and b, naming the job \"JobA\":\n.RS 3\n.B qsub -N JobA -- myprog a b <return>\n.RE\n\nOn Linux, you need to specify the path to myprog, so the previous example\nbecomes:\n.RS 3\n.B qsub -N JobA -- /path/to/myprog a b <return>\n.RE\n\n.B Requesting Resources and Placing Jobs\n.br\nRequesting resources includes setting limits \non resource usage and controlling how the job is placed on vnodes.\n\nResources are requested by using the \n.I -l\noption, either in job-wide requests using\n.I <resource name>=<value>\npairs, or in\n.I chunks \ninside of \n.I selection statements.\nSee the \n.B pbs_resources(7B) \nman page.\n\nJob-wide \n.I <resource name>=<value> \nrequests are of the form:\n.RS 3\n.I -l <resource name>=<value>[,<resource name>=<value> ...]\n.RE\n\nThe \n.I selection statement \nis of the form:\n.RS 3\n.I -l select=[<N>:]<chunk>[+[<N>:]<chunk> ...]\n.RE\nwhere \n.I N\nspecifies how many of that chunk, and a \n.I chunk \nis of the form:\n.RS 3\n.I <resource name>=<value>[:<resource name>=<value> ...]\n.RE\n\nYou choose how your chunks are placed using the \n.I place statement.  \nThe place statement can contain the following elements, in any order:\n.RS 3\n.I -l place=[<arrangement>][:<sharing>][:<grouping>]\n\nwhere\n.br\n.I arrangement \n.RS 3\nWhether this chunk is willing to share this vnode or host with other\nchunks from the same job.  \n.br\nOne of \n.I free | pack | scatter | vscatter\n.RE\n\n.I sharing\n.RS 3\nWhether this chunk is willing to share this vnode or host with\nother jobs.  
\n.br\nOne of \n.I excl | shared | exclhost\n.RE\n\n.I grouping \n.RS 3\nWhether the chunks from this job should be placed on vnodes that all\nhave the same value for a resource.  \n.br\nCan have only one instance of\n.I group=<resource name>\n.RE\n\n.I free\n.RS 3\nPlace job on any vnode(s).\n.RE\n\n.I pack\n.RS 3\nAll chunks are taken from one host.\n.RE\n\n.I scatter\n.RS 3\nOnly one chunk with any MPI processes is taken from a host.  A chunk\nwith no MPI processes may be taken from the same vnode as another\nchunk.\n.RE\n\n.I vscatter\n.RS 3\nOnly one chunk is taken from any vnode.  Each chunk must fit on a vnode.\n.RE\n\n.I excl\n.RS 3\nOnly this job uses the vnodes chosen.\n.RE\n\n.I shared\n.RS 3\nThis job can share the vnodes chosen.\n.RE\n\n.I exclhost\n.RS 3\nThe entire host is allocated to the job.\n.RE\n\n.I group=<resource name>\n.RS 3\nChunks are grouped according to a resource.  All vnodes in the group\nmust have a common value for resource, which can be either the\nbuilt-in resource host or a custom vnode-level resource.\n\n.I resource name \nmust be a string or a string array.\n.RE\n\nThe place statement cannot begin with a colon.  Colons are delimiters;\nuse them only to separate parts of a place statement, unless they are\nquoted inside resource values.\n.RE\n\nNote that vnodes can have \n.I sharing \nattributes that override job placement requests.  \nSee the \n.I pbs_node_attributes.7B\nman page.\n\n.B Caveats for Requesting Resources\n.br\nDo not mix old-style resource or vnode specifications with the new\nselect and place statements.  Do not use one in a job script and the\nother on the command line.  Mixing the two will result in an error.\n\nYou cannot submit a job requesting a custom resource which has been\ncreated to be invisible or read-only for unprivileged users,\nregardless of your privilege.  
A Manager or Operator can use the\n.B qalter \ncommand to change a job's request for this kind of custom resource.\n\nFor more on resource requests, usage limits and job placement, see\n.I pbs_resources(7B).\n\n\n\n.B Setting Attributes\n.br\nThe job submitter sets job attributes by giving options to the \n.B qsub \ncommand or by using \n.I PBS directives.\nMost qsub options set a job attribute, and have a corresponding PBS\ndirective with the same syntax as the option.  Attributes set via\ncommand-line options take precedence over those set using PBS\ndirectives. See\n.I pbs_job_attributes.7B.\n\n.B Changing qsub Behavior\n.br\nThe behavior of the \n.B qsub\ncommand\nmay be affected by \nthe server's \n.I default_qsub_arguments\nattribute.  \nThis attribute can set the default for any job attribute.  \nThe \n.I default_qsub_arguments\nserver attribute is settable by the administrator,\nand is overridden by command-line arguments and\nscript directives.\nSee the \n.I pbs_server_attributes(7B) \nman page.\n\nThe behavior of the \n.B qsub \ncommand may also be affected by \nany site hooks.  Site hooks can modify the job's attributes, \nchange its routing, etc.\n\n.SH Options to qsub\n\n.IP \"-a <date and time>\" 8\nPoint in time after which the job is eligible for execution.\nGiven in pairs of digits.  Sets job's \n.I Execution_Time\nattribute to \n.I date and time.\nFormat: \n.I datetime, \nexpressed as\n.RS 11\n.I [[[[CC]YY]MM]DD]hhmm[.SS]\n.RE\n.IP \" \" 8\nwhere \n.I CC \nis the century,\n.I YY \nis the year, \n.I MM \nis the month,\n.I DD \nis the day of the month, \n.I hh \nis the hour, \n.I mm \nis the minute, and \n.I SS \nis the seconds.\n\n.IP \" \" 8\nEach portion of the date defaults to the current date, as long as the \nnext-smaller portion is in the future.  
For example, if today is the\n3rd of the month and the specified day \n.I DD \nis the 5th, the month \n.I MM\nis set to the current month.\n\nIf a specified portion has already passed, the next-larger portion is set\nto one after the current date.  For example, if the day\n.I DD\nis not specified, but the hour \n.I hh \nis specified to be 10:00 a.m. and the current time is 11:00 a.m., \nthe day \n.I DD\nis set to tomorrow.\n\n.IP \"-A <account string>\" 8\nAccounting string associated with the job.  Used for labeling accounting data.\nSets job's \n.I Account_Name \nattribute to \n.I account string.\n.br\nFormat: \n.I String\n\n.IP \"-c <checkpoint spec>\" 8\nDetermines when the job will be checkpointed.  Sets job's \n.I Checkpoint\nattribute to \n.I checkpoint spec.  \nAn \n.I $action\nscript is required to checkpoint the job.  \n\nThe argument \n.I checkpoint spec\ncan take on one of the following values:\n.RS\n.IP c 5\nCheckpoint at intervals, measured in CPU time, set on job's execution\nqueue.  If there is no interval set at the queue, the job is not\ncheckpointed.\n\n.IP \"c=<minutes of CPU time>\" 5\nCheckpoint at intervals of specified number of minutes of job CPU\ntime.  This value must be greater than zero.  If the interval\nspecified is less than that set on the job's execution queue, the\nqueue's interval is used.\n.br\nFormat: \n.I Integer\n.IP w 5\nCheckpoint at intervals, measured in walltime, set on job's execution\nqueue.  If there is no interval set at the queue, the job is not\ncheckpointed.\n\n.IP \"w=<minutes of walltime>\" 5\nCheckpoint at intervals of the specified number of minutes of job walltime.\nThis value must be greater\nthan zero.  If the interval specified is less than that set on the\njob's execution queue, the queue's interval is\nused.\n.br\nFormat: \n.I Integer\n.IP n 5\nNo checkpointing.\n.IP s 5\nCheckpoint only when the server is shut down.\n.IP u 5\nUnset.  
Defaults to behavior when \n.I interval\nargument is set to \n.I s.\n.LP\nDefault: \n.I u\n.br\nFormat: \n.I String\n.RE\n.RE\n\n.IP \"-C <directive prefix>\" 8\nDefines the prefix identifying a \n.I PBS directive.\nDefault prefix is \"#PBS\".\n.IP\nIf the\n.I directive prefix\nargument is a null string, qsub\ndoes not scan the script file for directives.\nOverrides the PBS_DPREFIX environment variable and the default.\nThe string \"PBS_DPREFIX\" cannot be used as a PBS directive.\nLength limit: 4096 characters.\n\n.IP \"-e <path>\" 8\nPath to be used for the job's standard error stream.\nSets job's \n.I Error_Path \nattribute to \n.I path.\nThe\n.I path\nargument is of the form:\n.RS 13\n.I [<hostname>:]<path>\n.RE\n.RS\nThe \n.I path \nis interpreted as follows:\n\n.IP path 5\nIf\n.I path\nis relative, it is taken to be relative to the current working directory of the \n.B qsub\ncommand, where it is executing on the current host.\n\nIf\n.I path\nis absolute, it is taken to be an absolute path on the current host where the \n.B qsub\ncommand is executing.\n\n.IP hostname:path\nIf \n.I path\nis relative, it is taken to be relative to the user's \nhome directory on the host named\n.I hostname.\n\nIf \n.I path\nis absolute, it is an absolute path on the host named\n.I hostname.\n.LP\nIf \n.I path\ndoes not include a filename, the default filename has the form\n.I <job ID>.ER\n\nIf the\n.I -e\noption is not specified, PBS copies the standard error to the current\nworking directory where the \n.B qsub \ncommand was executed, and writes standard error to the default filename, \nwhich has this form:\n.br\n.I <job name>.e<sequence number>\n\nIf you use a UNC path for output or error files, the \n.I hostname \nis optional.  If you use a non-UNC path, the \n.I hostname \nis required.\n\nThis option is overridden by the \n.I -k\noption.\n.RE\n\n.IP \"-f\" 8\nPrevents \n.B qsub\nfrom spawning a background process.  
By default, \n.B qsub \nspawns a background process to manage communication with the PBS server.  \nWhen this option is specified, the \n.B qsub\nprocess connects directly to the server and no background process is created.\n\nNOTE: Use of this option degrades performance of the \n.B qsub\ncommand when calls are made in rapid succession.\n\n.IP \"-G [-- <GUI application>]\" 8\nStarts a GUI session.  When no application or script is provided,\nstarts a GUI-enabled interactive shell.  When an application or script\nis provided, starts the GUI application or script.  Use full path to\napplication or script unless the path is part of the user's\nPATH environment variable on the execution host.  When submission and\nexecution hosts are different, this uses a remote viewer.\n.br\nSession is terminated when remote viewer, GUI application, or \ninteractive shell is terminated, or when job is deleted.\n.br\nCan be used only with interactive jobs (the \n.I -I \noption).\n.br\nAvailable only under Windows.\n\n.IP \"-h\" 8\nApplies a \n.I User \nhold to the job.\nSets the job's \n.I Hold_Types \nattribute to \"u\".\n\n.IP \"-I\" 8\nJob is to be run interactively.  Sets job's \n.I interactive\nattribute to \n.I True.\nThe job is queued\nand scheduled as any PBS batch job, but when executed, the standard input,\noutput, and error streams of the job are connected to the\nterminal session in which \n.B qsub \nis running.\nIf a job script is given, only its directives are processed.  When the job\nbegins execution, all input to the job is taken from the terminal session.\n\nInteractive jobs are not rerunnable.\n\nJob arrays cannot be interactive.\n\nWhen used with \n.I -Wblock=true, \nno exit status is returned.\n\n.IP \"-j <join>\" 8\nSpecifies whether and how to join the job's standard error and standard output streams.\nSets job's \n.I Join_Path\nattribute to \n.I join.\nDefault: \n.I n, \nnot merged.  
\nThe \n.I join\nargument can take the following values:\n.RS\n.IP oe 11\nStandard error and standard output are merged into standard output.\n\n.IP eo 11\nStandard error and standard output are merged into standard error.\n\n.IP n 11\nStandard error and standard output are not merged.\n.RE\n\n.IP \"-J <range> [%<max subjobs>]\" 8\nMakes this job an array job.  Sets job's \n.I array\nattribute to \n.I True.\n\nUse the\n.I range \nargument to specify the indices of the subjobs of the array.\n.I range \nis specified in the form\n.I X-Y[:Z]  \nwhere \n.I X \nis the first index, \n.I Y \nis the upper bound on the indices, and\n.I Z \nis the stepping factor.  For example,  2-7:2 will produce indices of 2, 4, and\n6.  If \n.I Z \nis not specified, it is taken to be 1.  Indices must be greater than or \nequal to zero.\n\nUse the optional \n.I %max subjobs \nargument to set a limit on the number of subjobs that can be running\nat one time.  This sets the value of the \n.I max_run_subjobs \njob attribute to the specified maximum.\n\nJob arrays are always rerunnable.\n\n.IP \"-k <discard>\" 8\nSpecifies whether and which of the standard output and standard error streams\nis left behind on the execution host, or written to their final destination.\nSets the job's \n.I Keep_Files \nattribute to \n.I discard.\nOverrides default path names for these streams.  \nOverrides \n.I -o\nand \n.I -e\noptions.\n\nDefault: \n.I n; \nneither is retained, and files are not written to their final destinations.\n\nIn the case where output and/or error is retained on the execution host in\na job-specific staging and execution directory created by PBS, these\nfiles are deleted when PBS deletes the directory.\n\nThe \n.I discard\nargument can take the following values:\n.RS\n.IP e 5\nThe standard error stream is retained on the execution host, in the \njob's staging and execution directory.  
The filename is\n.RS\n.RS 3\n.I <job name>.e<sequence number>\n.RE\n.RE\n\n.IP o  5\nThe standard output stream is retained on the execution host, in the\njob's staging and execution directory.  The filename is\n.RS\n.RS 3\n.I <job name>.o<sequence number>\n.RE\n.RE\n\n.IP \"eo, oe\"  5\nBoth standard output and standard error streams are \nretained on the execution host, in the\njob's staging and execution directory. \n\n.IP d 5\nOutput and/or error are written directly to their final destination.\nOverrides action of leaving files on execution host.\n\n.IP n  5\nNeither stream is retained.\n.RE\n\n.IP \"-l <resource list>\" 8\n.RS\nAllows the user to request resources and specify job placement.  Sets job's \n.I Resource_list \nattribute to \n.I resource list.\nRequesting a resource places a limit on its usage.\n\nFor how to request resources and place jobs, see \n.B \"Requesting Resources and Placing Jobs\" above.\n.RE\n\n.IP \"-m <mail events> \" 8\nSpecifies the set of conditions under which mail about the job is sent.\nSets job's \n.I Mail_Points\nattribute to \n.I mail events.  \nThe \n.I mail events\nargument can be one of the following:\n.RS 11\nThe single character \"n\"\n.br\nAny combination of \"a\", \"b\", and \"e\", with optional \"j\"\n.RE\n.IP\nThe following table lists the sub-options to the \n.I -m \noption:\n.RS\n.IP n 5\nNo mail is sent\n.IP a 5\nMail is sent when the job is aborted by the batch system\n.IP b 5\nMail is sent when the job begins execution\n.IP e 5\nMail is sent when the job terminates\n.IP j 5\nMail is sent for subjobs. Must be combined with one or more of the\n.I a\n, \n.I b\n, or \n.I e \nsub-options\n.RE\n.IP\nFormat: \n.I String\n.br\nSyntax: \n.I n | [j](one or more of a, b, e)\n.br\nExample: -m ja\n.br\nDefault: \n.I \"a\"  \n\n.IP \"-M <user list>\" 8\nList of users to whom mail about the job is sent.  Sets job's \n.I Mail_Users \nattribute to \n.I user list.  
\n\n.RS\nThe\n.I user list\nargument has the form:\n.RS 3\n.I <username>[@<hostname>][,<username>[@<hostname>],...]\n.RE\n\nDefault: Job owner\n.RE\n\n.IP \"-N <name> \" 8\nSets job's \n.I Job_Name \nattribute and name to \n.I name.  \n\nFormat: \n.I Job Name\n\nDefault: if a script is used to submit the job, the job's name is the\nname of the script.  If no script is used, the job's name is \"STDIN\".\n\n.IP \"-o <path>\" 8\nPath to be used for the job's standard output stream.\nSets job's \n.I Output_Path \nattribute to \n.I path.\nThe\n.I path\nargument has the form:\n.RS 13\n.I [<hostname>:]<path>\n.RE\n.RS\nThe \n.I path \nis interpreted as follows:\n.IP path 5\nIf\n.I path \nis relative, it is taken to be relative to \nthe current working directory of the\n.B qsub\ncommand, where it is executing on the current host.\n\nIf\n.I path\nis absolute, it is taken to be an absolute path on \nthe current host where the\n.B qsub\ncommand is executing.\n.IP hostname:path 5\nIf \n.I path\nis relative, it is taken to be relative to the user's \nhome directory on the host named\n.I hostname.\n\nIf \n.I path \nis absolute, it is an absolute path on the host named\n.I hostname.\n.LP\n\nIf \n.I path\ndoes not include a filename, the default filename has the form \n.I <job ID>.OU\n\nIf the\n.I -o\noption is not specified, PBS copies the standard output to the current\nworking directory where the \n.B qsub \ncommand was executed, and writes standard output to the default filename,\nwhich has this form: \n.I <job name>.o<sequence number>\n\nIf you use a UNC path, the hostname is optional.  If you use a non-UNC\npath, the hostname is required.\n\nThis option is overridden by the \n.I -k\noption.\n.RE\n\n.IP \"-p <priority>\" 8\nPriority of the job.  Sets job's \n.I Priority\nattribute to \n.I priority.\n\nFormat: host-dependent integer\n.br\nRange: \n.I [-1024, +1023] \ninclusive\n.br\nDefault: \n.I Zero  \n\n.IP \"-P <project>\" 8\nSpecifies a project for the job. 
Sets job's \n.I project\nattribute to \n.I project.\n\nFormat: \n.I Project Name\n\nDefault value: \"_pbs_project_default\"\n\n.IP \"-q <destination>\" 8\nWhere the job is sent upon submission.  \n\nSpecifies a queue, a server, or a queue at a server.  The destination\nargument can have one of these formats:\n.RS\n.I <queue name>\n.RS 3\nJob is submitted to the specified queue at the default server.\n.RE\n\n.I @<server name>\n.RS 3\nJob is submitted to the default queue at the specified server.\n.RE\n\n.I <queue name>@<server name>\n.RS 3\nJob is submitted to the specified queue at the specified server.\n.RE\n\nDefault: Default queue at default server \n.RE\n\n.IP \"-r <y|n>\" 8\nDeclares whether the job is rerunnable.  Sets job's \n.I Rerunable\nattribute to the argument value.\nDoes not affect how the job is handled in the case where the job was\nunable to begin execution.\n\nFormat: Single character, \"y\" or \"n\"\n.br\n\n.RS\n.IP y 5\nJob is rerunnable.\n.IP n 5\nJob is not rerunnable.\n.RE\n\n.IP \"\" 8\nDefault: \"y\"\n\nInteractive jobs are not rerunnable.\nJob arrays are automatically marked as rerunnable.\nSee the\n.I qrerun.1B\nman page.\n\n.IP \"-R <remove options>\" 8\nSpecifies whether standard output and/or standard error files are\nautomatically removed (deleted) upon job completion.\n\nSets the job's \n.I Remove_Files \nattribute to \n.I remove options.  \nOverrides default path names for these streams.  
Overrides \n.I -o \nand \n.I -e \noptions.\n\nThis attribute cannot be altered once the job has begun execution.\n\nDefault: \n.I Unset; \nneither is removed\n\nThe \n.I remove options \nargument can take the following values:\n.RS\n.IP e 5     \nThe standard error stream is removed (deleted) upon job completion\n.IP o 5      \nThe standard output stream is removed (deleted) upon job completion\n.IP \"eo, oe\" 5  \nBoth standard output and standard error streams are removed (deleted) upon job completion\n.IP unset 5  \nNeither stream is removed.\n.RE\n\n.IP \"-S <path list>\" 8\nSpecifies the interpreter or shell path for the job script.  Sets job's \n.I Shell_Path_List \nattribute to \n.I path list.\n\nThe \n.I path list\nargument is the full path to the interpreter or shell including the \nexecutable name.  \n\nOnly one path may be specified without a host name.\nOnly one path may be specified per named host.  The path selected\nis the one whose host name is that of the server on which the job\nresides.  \n\n.RS\nFormat:\n.RS 3\n.I <path>[@<hostname>][,<path>@<hostname> ...]\n.RE\n\nDefault: user's login shell on execution host\n\nExample of using bash via a directive:\n.RS 3\n.B #PBS -S /bin/bash@mars,/usr/bin/bash@jupiter\n.RE\nExample of running a Python script from the command line on Linux: \n.RS 3\n.B qsub -S $PBS_EXEC/bin/pbs_python <script name>\n.RE\nExample of running a Python script from the command line on Windows: \n.RS 3\nqsub -S %PBS_EXEC%\\\\bin\\\\pbs_python.exe <script name>\n.RE\n.RE\n.IP \n\n.IP \"-u <user list>\" 8\nList of usernames.  Job is run under a username from this list.\nSets job's \n.I User_List \nattribute to \n.I user list.\n\nOnly one username may be specified without a hostname.\nOnly one username may be specified per named host.  \nThe server on which the job resides will select first the username whose\nhostname is the same as the server name.  
Failing that, \nthe next selection is the username with no specified hostname.\nThe usernames on the server and execution hosts must be the same.\nThe job owner must have authorization to run as the specified user.\n\n.RS\nFormat of\n.I user list: \n.RS 3\n.I <username>[@<hostname>][,<username>@<hostname> ...]\n.RE\n\nDefault: job owner (username on submission host)  \n.RE\n\n.IP \"-v <variable list>\" 8\nLists environment variables and shell functions to be exported to the job.\nThis is the list of environment variables that are added to\nthose already automatically exported.  These variables exist in\nthe user's environment from which\n.B qsub\nis run.\nThe job's \n.I Variable_List\nattribute is appended with the variables in\n.I variable list \nand their values.\nSee the \n.B ENVIRONMENT VARIABLES\nsection of this man page.\n\n.RS\nFormat: comma-separated list of strings in the form:\n.RS 3\n.I <variable>\n.RE\nor\n.RS 3\n.I <variable>=<value>\n.RE\n\nIf a \n.I <variable>=<value> \npair contains any commas, the value must be \nenclosed in single or double quotes, and the \n.I <variable>=<value> \npair must be enclosed in the kind of quotes not used to \nenclose the value.  For example:\n.RS 3\nqsub -v \"var1='A,B,C,D'\" job.sh\n.br\nqsub -v \"a=10,var2='A,B',c=20,d='Hello world'\" job.sh\n.RE\n\nDefault: no environment variables are added to the job's variable list.\n.RE\n\n.IP \"-V\" 8\nAll environment variables and shell functions in the user's\nenvironment where\n.B qsub\nis run are exported to the job.\nThe job's\n.I Variable_List\nattribute is appended with all of these environment variables and\ntheir values.\n\n.IP \"-W <additional attributes>\" 8\nThe \n.I -W \noption allows specification of some job attributes.  Some\njob attributes must be specified using this option.  
Those attributes\nare listed below.\n.RS\nFormat:\n.RS 3\n.I -W <attribute name>=<value>[,<attribute name>=<value>...]\n.RE\n\nIf white space occurs within the \n.I additional attributes\nargument, or the equal sign \"=\" occurs within a\n.I value\nstring, it must be enclosed in single or double quotes.\n\nThe following attributes can be set using the -W option only:\n\n.I \"block=true\"\n.RS 3\nThe \n.B qsub\ncommand waits for the job to terminate, then returns the job's exit value.\nSets job's \n.I block\nattribute to \n.I True.\nWhen used with X11 forwarding or interactive jobs, no exit value is returned.\nSee the \n.B EXIT STATUS\nsection in this man page.\n.LP\n.RE\n\n.I \"create_resv_from_job=<value>\"\n.RS 3\nWhen this job starts, immediately creates and confirms a job-specific\nstart reservation on the same resources as the job (including\nresources inherited by the job), and places the job in the\njob-specific reservation queue.  Sets the job's\n.I create_resv_from_job \nattribute to \n.I True.  \nSets the job-specific reservation's \n.I reserve_job \nattribute to the ID of the job from which the reservation was created.\nThe new reservation's duration and start time are the same as the\njob's walltime and start time.  If the job is peer scheduled, the\njob-specific reservation is created in the pulling complex.\n.br\nFormat: \n.I Boolean \n.br\nExample: \n.B qsub myscript.sh -Wcreate_resv_from_job=1 \n\nCannot be used with job arrays or jobs being submitted into a reservation.\n.RE\n\n.I \"depend=<dependency list>\"\n.RS 3\nDefines dependencies between this and other jobs.  
\nSets the job's\n.I depend\nattribute to \n.I dependency list.\nThe \n.I dependency list\nhas the form:\n.RS 3\n.I <type>:<arg list>[,<type>:<arg list> ...]\n.RE\nwhere except for the \n.I on\ntype, the\n.I arg list\nis one or more PBS job IDs, and has the form:\n.RS 3\n.I <job ID>[:<job ID> ...]\n.RE\nThe \n.I type \ncan be:\n\n.IP \"after: <arg list>\" 4\nThis job may be scheduled for execution at any point after all jobs\nin \n.I arg list\nhave started execution.\n\n.IP \"afterok: <arg list>\" 4\nThis job may be scheduled for execution only after all jobs in\n.I arg list\nhave terminated with no errors.\nSee \"Warning about exit status with csh\" in \n.B EXIT STATUS.\n\n.IP \"afternotok: <arg list>\" 4\nThis job may be scheduled for execution only after all jobs in \n.I arg list\nhave terminated with errors.\nSee \"Warning about exit status with csh\" in \n.B EXIT STATUS.\n\n.IP \"afterany: <arg list>\" 4\nThis job may be scheduled for execution after all jobs in\n.I arg list\nhave finished execution, with any exit status (with or without errors.)\nThis job will not run if a job in the \n.I arg list \nwas deleted without ever having been run.\n\n.IP  \"before: <arg list> \" 4\nJobs in \n.I arg list \nmay begin execution once this job has begun execution.\nIt is uncommon for users to specify a before condition. 
Rather,\nPBS adds before dependencies automatically to the targets of\nafter dependencies.\n\n.IP  \"beforeok: <arg list>\" 4\nJobs in \n.I arg list\nmay begin execution once this job terminates without errors.\nSee \"Warning about exit status with csh\" in \n.B EXIT STATUS.\n\n.IP  \"beforenotok: <arg list>\" 4\nIf this job terminates execution with errors, jobs in \n.I arg list\nmay begin.\nSee \"Warning about exit status with csh\" in \n.B EXIT STATUS.\n\n.IP  \"beforeany: <arg list>\" 4\nJobs in \n.I arg list\nmay begin execution once this job terminates execution,\nwith or without errors.\n\n.IP \"on: count\" 4\nThis job may be scheduled for execution after \n.I count \ndependencies on\nother jobs have been satisfied.  This type is used in conjunction\nwith one of the \n.I before\ntypes listed.\n.I count \nis an integer greater than 0.\n.LP\n\nJob IDs in the\n.I arg list \nof \n.I before \ntypes must have been submitted with a \n.I type \nof \n.I on.\n\nTo use the \n.I before types,\nthe user must have the authority to alter the jobs in \n.I arg list.\nOtherwise, the dependency is rejected and the new job aborted.\n\nError processing of the existence, state, or condition of the job on\nwhich the newly submitted job depends is performed after the job is queued.\nIf an error is detected, the new job is deleted by the server.  Mail\nis sent to the job submitter stating the error.\n\nDependency example:\n.RS 3\nIn this example, we save the output (the jobid) from the first qsub into\nshell variable \"jobid\" so we can supply it to the depend option on the\nsecond job.\n.LP\n.B \"jobid=`qsub first_step.sh`\"\n.br\n.B \"qsub -W depend=afterok:$jobid second_step.sh\"\n.br\n.RE\n.RE\n\n.I \"group_list=<group list>\"\n.RS 3\nList of group names.  
Job is run under a group name from this list.\nSets job's\n.I group_list\nattribute to\n.I group list.\n\nOnly one group name may be specified without a hostname.\nOnly one group name may be specified per named host.\nThe server on which the job resides will select first the group name whose\nhostname is the same as the server name.  Failing that,\nthe next selection is the group name with no specified hostname.\nThe group names on the server and execution hosts must be the same.\nJob submitter's primary group is automatically added to this \nlist.  \n.LP\nUnder Windows, the primary group is the first group found for \nthe user by PBS when it queries the accounts database.\n\nFormat of\n.I group list:\n.RS 3\n.I <group name>[@<hostname>][,<group name>@<hostname> ...]\n.RE\n\nDefault: Login group name of job owner\n.RE\n\n.I pwd\n.br\n.I pwd=''\n.br\n.I pwd=\"\"\n.RS 3\nThese forms prompt the user for a password.  A space between\n.I W\nand\n.I pwd\nis optional.  Spaces between the quotes are optional.\nExamples:\n.nf\n    qsub ... -Wpwd <return>\n    qsub ... -W pwd='' <return>\n    qsub ... -W pwd=\"  \" <return>\n.fi\nAvailable on supported Linux platforms only.\n.RE\n\n.I release_nodes_on_stageout=<value>\n.RS 3\nWhen set to \n.I True, \nall of the job's vnodes not on the primary execution\nhost are released when stageout begins.\n\nCannot be used with vnodes tied to Cray X* series systems.\n\nWhen cgroups is enabled and this is used with some but not all vnodes\nfrom one MoM, resources on those vnodes that are part of a cgroup are\nnot released until the entire cgroup is released.\n\nThe job's \n.I stageout \nattribute must be set for the\n.I release_nodes_on_stageout \nattribute to take effect.\n\nFormat: \n.I Boolean\n\nDefault: \n.I False\n.RE\n\n.I \"run_count=<value>\"\n.RS 3\nSets the number of times the server thinks it has run the job.  Sets the value of\nthe job's \n.I run_count\nattribute to \n.I value.  
\n\nFormat: Integer greater than or equal to zero\n.RE\n\n.I \"sandbox=<sandbox spec>\"\n.RS 3\nDetermines which directory PBS uses for the job's staging and execution.  Sets\njob's \n.I sandbox\nattribute to \n.I sandbox spec.\n\nAllowed values for \n.I sandbox spec:\n\n.IP PRIVATE 5\nPBS creates a job-specific directory for staging and execution.\n.IP \"HOME or unset\" 5\nPBS uses the user's home directory for staging and execution.\n.LP\n\nFormat:\n.I String\n.RE\n\n.I \"stagein=<path list>\"\n.br\n.I \"stageout=<path list>\"\n.RS 3\nSpecifies files or directories to be staged in before execution or staged out\nafter execution is complete.  Sets the job's \n.I stagein\nand \n.I stageout\nattributes to the specified\n.I path lists.\nOn completion of the job, all staged-in and staged-out files and directories\nare removed from the execution host(s).  The\n.I path list\nhas the form:\n.RS 3\n.I <file spec>[,<file spec>]\n.RE\nwhere \n.I file spec \nis \n.RS 3\n.I <execution path>@<hostname>:<storage path>\n.RE\nregardless of the direction of the copy.\nThe name\n.I execution path\nis the name of the file or directory on the primary execution host.\nIt can be relative to the staging and execution directory on the\nexecution host, or it can be an absolute path.\n\nThe \"@\" character separates \n.I execution path\nfrom \n.I storage path.\n\nThe name\n.I storage path\nis the path on \n.I <hostname>. \nThe storage path can be absolute, or it can be relative to the user's home\ndirectory on hostname.\n\nIf \n.I path list\nhas more than one \n.I file spec,\ni.e. it contains commas, it must be enclosed in double quotes.\n\nIf you use a UNC path, the \n.I hostname \nis optional.  If you use a non-UNC path, the \n.I hostname \nis required.\n.RE\n\n.I \"umask=<mask value>\"\n.RS 3\nThe umask with which the job is started.  
Sets job's \n.I umask\nattribute to \n.I mask value.\nControls umask of job's standard output and standard error.\n\nThe following example allows group and world read on the job's output:\n.RS 3\n.B -W umask=33\n.RE\n\nFormat: One to four octal digits; typically two\n\nDefault value: \n.I 077\n.RE\n.RE\n\n.IP \"-X\" 8\nAllows user to receive X output from interactive job.\n\nDISPLAY variable in submission environment must be set to \ndesired display.\n\nCan be used with interactive jobs only: must be used with\none of the following:\n.RS\n.RS 3\n.I -I \n.br\n.I -W interactive=true\n.B (deprecated)\n.RE\n\nCannot be used with \n.I -v DISPLAY.\n\nWhen used with \n.I -Wblock=true,\nno exit status is returned.\n\nCan be used with \n.I -V \noption.\n\nNot available under Windows.\n.RE\n\n.IP \"-z\" 8\nJob identifier is not written to standard output.\n.RE\n\n.IP \"--version\" 8\nThe \n.B qsub\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH  OPERANDS\nThe \n.B qsub \ncommand accepts as operands one of the following:\n.RS 3\n.I (no operands)\n.RS 3\nSame as with a dash. Any PBS directives and user tasks are read\nfrom the command line.\n.RE\n\n.I <script>\n.RS 3\nPath to script.  Can be absolute or relative to current directory\nwhere \n.B qsub \nis run.\nThe script must be the last argument to\n.BR qsub .\n.RE\n\n.I \"-\"\n.RS 3\nWhen you use a dash, any PBS directives and user tasks are read\nfrom the command line.\n.RE\n\n.I -- <executable> [<arguments to executable>]\n.RS 3\nA single executable (preceded by two dashes) and its arguments\n\nThe executable, and any arguments to the executable, are given on the\nqsub command line.  The executable is preceded by two dashes, \"--\".\nAll\n.B qsub\noptions must come before the \"--\".\n\nWhen you run qsub this way, it runs the executable directly.  
It does\nnot start a shell, so no shell initialization scripts are run, and\nexecution paths and other environment variables are not set.  You\nshould make sure that environment variables are set correctly.\n.RE\n.RE\n\n.SH STANDARD OUTPUT\n.IP \"Job ID for submitted jobs\" 8\nIf the job is successfully created\n\n.IP \"(No output)\" 8\nIf the \n.I -z\noption is set\n\n.SH STANDARD ERROR\nThe\n.B qsub\ncommand writes a diagnostic message to standard error for\neach error occurrence.\n\n.SH ENVIRONMENT VARIABLES\nThe\n.B qsub\ncommand uses the following environment variables:\n.RS 3\n.IP PBS_DEFAULT 3\nName of default server.\n\n.IP PBS_DPREFIX 3\nPrefix string which identifies PBS directives.\n.RE\n\nPBS automatically exports the following environment variables to the job.\nEnvironment variables beginning with \"PBS_O_\" are created by \n.B qsub\nand included in the job's Variable_List.\n\n.IP PBS_ENVIRONMENT 8\nSet to \n.I PBS_BATCH \nfor a batch job.  Set to \n.I PBS_INTERACTIVE\nfor an interactive job.\n\n.IP PBS_JOBDIR 8\nPathname of job's staging and execution directory on the\nprimary execution host.  \n\n.IP PBS_JOBID 8\nJob identifier given by PBS when the job is submitted.\n\n.IP PBS_JOBNAME 8\nJob name specified by submitter.\n\n.IP PBS_NODEFILE 8\nName of file containing the list of vnodes assigned to the job when\nthe job runs.\n\n.IP PBS_O_HOME 8\nUser's home directory.  
\nValue of HOME taken from user's submission environment.\n\n.IP PBS_O_HOST 8\nName of submit host.\nValue taken from user's submission environment.\n\n.IP PBS_O_LANG 8\nValue of LANG taken from user's submission environment.\n\n.IP PBS_O_LOGNAME 8\nUser's login name.\nValue of LOGNAME taken from user's submission environment.\n\n.IP PBS_O_MAIL 8\nValue of MAIL taken from user's submission environment.\n\n.IP PBS_O_PATH 8\nUser's PATH.\nValue of PATH taken from user's submission environment.\n\n.IP PBS_O_QUEUE 8\nName of the queue to which the job was submitted.\nValue taken from job submission, otherwise default queue.\n\n.IP PBS_O_SHELL 8\nValue of SHELL taken from user's submission environment.\n\n.IP PBS_O_SYSTEM 8\nOperating system, from \n.I uname -s, \non submit host.\nValue taken from user's submission environment.\n\n.IP PBS_O_TZ 8\nTimezone.  Value taken from user's submission environment.\n\n.IP PBS_O_WORKDIR 8\nAbsolute path to directory where\n.B qsub\nis run.\nValue taken from user's submission environment.\n\n.IP PBS_QUEUE 8\nName of the queue from which the job is executed.\n\n.IP TMPDIR 8\nPathname of job's scratch directory.  Set when PBS assigns it.\n\n.SH EXIT STATUS\nFor non-blocking jobs:\n.RS 3\n.IP Zero 8\nUpon successful processing of input\n\n.IP \"Greater than zero\" 8\nUpon failure of \n.B qsub\n.RE\n\nFor blocking jobs:\n.RS 3\n.IP \"Exit value of job\" 8\nWhen job runs successfully\n\n.IP 3 8\nIf the job is deleted without being run\n.RE\n\n.B Warning About Exit Status with csh:\n.br\nIf a job is run in csh and a .logout file\nexists in the user's home directory on the host where the job executes,\nthe exit status\nof the job is that of the .logout script, not the job script.  This may\nimpact any inter-job dependencies.  \n\n.SH SEE ALSO\npbs_job_attributes(7B),\npbs_server_attributes(7B),\npbs_resources(7B),\nqalter(1B), \nqhold(1B), \nqmove(1B), \nqmsg(1B), \nqrerun(1B),\nqrls(1B), \nqselect(1B), \nqstat(1B)\n\n"
  },
  {
    "path": "doc/man3/pbs_alterjob.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_alterjob 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_alterjob \n\\- alter a PBS batch job\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.nf\n.B int pbs_alterjob(int connect, char *jobID, struct attropl *change_list, \n.B \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ char *extend)\n.fi\n\n.SH DESCRIPTION\nIssues a batch request to alter a batch job.\n\nThis command generates a \n.I Modify Job \n(11) batch request and sends it to the server over the connection specified by \n.I connect.\n\nJob state may affect which attributes can be altered.  See the qalter(1B) man page.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection over which to send batch request to server.\n\n.IP jobID 8\nID of job or job array to be altered.  \n.br\nFormat for a job:\n.br\n.I <sequence number>.<server name>\n.br\nFormat for an array job:\n.br\n.I <sequence number>[].<server name>\n\n.IP change_list 8\nPointer to a list of attributes to change.  
Each attribute is described in an \n.I attropl \nstructure, defined in pbs_ifl.h as:\n.nf\nstruct attropl {\n        struct attropl *next;\n        char           *name;\n        char           *resource;\n        char           *value;\n        enum batch_op  op;\n};\n.fi\n\n.IP extend 8\nCharacter string for extensions to command.  Not currently used.\n.LP\n.B Members of attropl Structure\n.br\n.IP next 8\nPoints to next attribute in list.  A null pointer terminates the list.\n\n.IP name 8\nPoints to a string containing the name of the attribute.  \n\n.IP resource 8\nPoints to a string containing the name of a resource.  Used only when\nthe specified attribute contains resource information.  Otherwise,\n.I resource \nshould be a pointer to a null string.\n\nIf the resource is already present in the job's \n.I Resource_List \nattribute, the value is altered as specified.  Otherwise the resource is added.\n\n.IP value 8\nPoints to a string containing the value of the attribute or resource.  \n\n.IP op 8\nDefines the operation to perform on the attribute or resource.  For\nthis command, operators are \n.I SET, UNSET, INCR, DECR.\n\n.SH RETURN VALUE\nThe routine returns 0 (zero) on success.\n.br\nIf an error occurred, the routine returns a non-zero exit value, and\nthe error number is available in the global integer \n.I pbs_errno.\n\n.SH SEE ALSO\nqalter(1B), qhold(1B), qrls(1B), qsub(1B), pbs_connect(3B), pbs_holdjob(3B),\npbs_rlsjob(3B)\n\n"
  },
  {
    "path": "doc/man3/pbs_asyrunjob.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_asyrunjob 3B \"11 December 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_asyrunjob\n\\- run an asynchronous PBS batch job\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.nf\n.B int pbs_asyrunjob(int connect, char *jobID, char *location, \n.B \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ char *extend)\n.fi\n.SH DESCRIPTION\nIssues a batch request to run a batch job.\n\nGenerates an \n.I Asynchronous Run Job \nrequest and sends it to the server over the connection specified by \n.I connect.  \n\nThe server validates the request and replies before initiating the execution of the job.  \n\nYou can use this version of the call to reduce latency in scheduling,\nespecially when the scheduler must start a large number of jobs.\n\n.SH REQUIRED PRIVILEGE\nYou must have Manager or Operator privilege to use this command.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection over which to send batch request to server.\n\n.IP jobID 8\nID of job to be run.  \n.br\nFormat for a job:\n.br\n.I <sequence number>.<server name>\n.br\nFormat for an array job:\n.br\n.I <sequence number>[].<server name>\n\n.IP location 8\nLocation where job should run, and optionally resources to use.  
Same as \n.B qrun -H:\n\n.RS 8\n.IP \"-H <vnode specification without resources>\" 3\nThe \n.I vnode specification without resources\nhas this format:\n.br\n.I \\ \\ \\ (<vchunk>)[+(<vchunk>) ...]\n.br\nwhere \n.I vchunk \nhas the format\n.br\n.I \\ \\ \\ <vnode name>[+<vnode name> ...]\n.br\nExample: -H (VnodeA+VnodeB)+(VnodeC)\n\nPBS applies one requested chunk from the job's selection directive in round-robin\nfashion to each \n.I vchunk \nin the list.  Each \n.I vchunk \nmust be sufficient to run the job's corresponding chunk, otherwise\nthe job may not execute correctly.\n.RE\n\n.RS 8\n.IP \"-H <vnode specification with resources>\" 3\nThe \n.I vnode specification with resources\nhas this format:\n.br\n.I \\ \\ \\ (<vchunk>)[+(<vchunk>) ...]\n.br\nwhere \n.I vchunk \nhas the format\n.IP \"\" 6\n.I <vnode name>:<vnode resources>[+<vnode name>:<vnode resources> ...]\n.LP\n.RS 3\nand where\n.I vnode resources\nhas the format\n.RS 3\n<resource name>=<value>[:<resource name>=<value> ...]\n.RE\n\n.IP \"Example:\" 3\n.nf\n-H (VnodeA:mem=100kb:ncpus=1)+ \n.br \n     (VnodeB:mem=100kb:ncpus=2+ VnodeC:mem=100kb)\n.fi\n.LP\n\nPBS creates a new selection directive from the \n.I vnode specification with resources, \nusing it instead of the original specification from the user.\nAny single resource specification results in the\njob's original selection directive being ignored.  Each \n.I vchunk \nmust be sufficient to run the job's corresponding chunk, otherwise\nthe job may not execute correctly.\n\nIf the job being run requests\n.I -l place=exclhost,\ntake extra care to satisfy the \n.I exclhost \nrequest.  Make sure that if any vnodes are from a multi-vnoded host, \nall vnodes from that host are allocated.  Otherwise those vnodes can \nbe allocated to other jobs.\n.RE\n.RE\n\n.IP extend 8\nCharacter string for extensions to command.  
Not currently used.\n\n.SH RETURN VALUE\nThe routine returns 0 (zero) on success.\n.br\n\nIf an error occurred, the routine returns a non-zero exit value, and\nthe error number is available in the global integer \n.I pbs_errno.\n\n.SH SEE ALSO\nqrun(1B), pbs_connect(3B), pbs_runjob(3B)\n"
  },
  {
    "path": "doc/man3/pbs_confirmresv.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_confirmresv 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_confirmresv \n\\- confirm a PBS reservation \n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.nf\n.B int pbs_confirmresv(int connect, char *reservationID, char *location,\n.B \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ unsigned long start_time, char *extend)\n.fi\n\n.SH DESCRIPTION\nIssues a batch request to confirm a PBS advance, standing, or maintenance reservation.\n\nThis function generates a \n.I Confirm Reservation \n(75) batch request and sends it to the server over the connection specified by \n.I connect.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection over which to send batch request to server.\n\n.IP reservationID 8\nReservation to be confirmed.  \n.br\nFormat for advance reservation:\n.br\n.I R<sequence number>.<server name>\n.br\nFormat for standing reservation:\n.br\n.I S<sequence number>.<server name>\n.br\nFormat for maintenance reservation:\n.br\n.I M<sequence number>.<server name>\n\n.IP location 8\nString describing vnodes and resources to be used for reservation.  
\nFormat:\n.br\n.I (<vchunk>)[+(<vchunk>) ...]\n.br\nwhere \n.I vchunk \nhas the format\n.br\n.nf\n.I <vnode name>:<vnode resources>[+<vnode name>:<vnode \n.br\n.I resources> ...]\n.fi\n.br\nand where\n.I vnode resources\nhas the format\n.br\n.I <resource name>=<value>[:<resource name>=<value> ...]\n\nExample:\n.nf\n-H (VnodeA:mem=100kb:ncpus=1)+ \n.br \n     (VnodeB:mem=100kb:ncpus=2+VnodeC:mem=100kb)\n.fi\n.LP\n\n.IP start_time 8\nUnsigned long containing start time in seconds since epoch.  Used only\nfor ASAP reservations (reservations created by using\n.I pbs_rsub -W qmove=<jobID> \non an existing job).  \n\n.IP extend 8\nCharacter string for specifying confirmation/non-confirmation action:\n.RS 11\nTo confirm a normal reservation, pass in PBS_RESV_CONFIRM_SUCCESS.\n.br\nTo have an unconfirmed reservation deleted, pass in PBS_RESV_CONFIRM_FAIL.\n.br\nTo have the scheduler set the time when it will try to reconfirm a\ndegraded reservation, pass in PBS_RESV_CONFIRM_FAIL.\n.RE\n\n.SH RETURN VALUE\nThe routine returns 0 (zero) on success.\n\nOn error, the routine returns a non-zero exit value, and the error\nnumber is available in the global integer \n.I pbs_errno.\n\n.SH SEE ALSO\npbs_rdel(1B), pbs_connect(3B)\n\n"
  },
  {
    "path": "doc/man3/pbs_connect.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_connect 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_connect \n\\- return a connection handle from a PBS batch server\n\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.B int pbs_connect(char *server) \n\n.SH DESCRIPTION\nThis function establishes a virtual stream (TCP/IP) connection with\nthe specified batch server.\n\nReturns a connection handle.\n\n.B pbs_connect() \ndetermines whether or not the complex has a failover server\nconfigured.  It also determines which server is the primary and which\nis the secondary.\n\n.SH ARGUMENTS\n.IP server 8\nSpecifies name of server to connect to.  Format:\n.br\n.I <hostname>[:<port>]\n\nIf you do not specify a port, PBS uses the default.\n\nIf \n.I server \nis a null pointer or a null string, this function opens a\nconnection to the default server.  The default server is specified in\nthe PBS_DEFAULT environment variable or the PBS_SERVER parameter in\n/etc/pbs.conf.\n\n.SH USAGE\nUse this function to establish a connection handle to the desired\nserver before calling any of the other pbs_* API functions.  They\nwill send their batch requests over the connection established by this\nfunction.  
You can send multiple requests over one connection.\n\n.SH CLEANUP\nAfter you are done using the connection handle, close the connection\nvia a call to \n.B pbs_disconnect().\n\n.SH SIDE EFFECTS\n\nThe global variable \n.I pbs_server \nis declared in pbs_ifl.h.  This\nvariable is set on return to point to the server name to which\n.B pbs_connect() \nconnected or attempted to connect.\n\n.SH Windows Requirement\n\nIn order to use \n.B pbs_connect() \nwith Windows, initialize the network\nlibrary and link with winsock2.  To do this, call winsock_init()\nbefore calling \n.B pbs_connect(), \nand link against the ws2_32.lib library.\n\n.SH RETURN VALUE\nOn success, the routine returns a connection handle which is a\nnon-negative integer.\n\nIf an error occurred, the routine returns -1, and the error number is\navailable in the global integer \n.I pbs_errno.\n\n.SH SEE ALSO\npbs_disconnect(3B)\n\n\n"
  },
  {
    "path": "doc/man3/pbs_default.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_default 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_default \n\\- return the name of the default PBS server\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.B char *pbs_default()\n\n.SH DESCRIPTION\nReturns a pointer to a character string containing the name of the default PBS server.  \n\nThe default server is specified in the PBS_DEFAULT environment\nvariable or the PBS_SERVER parameter in /etc/pbs.conf.\n\n.SH RETURN VALUE\nOn success, returns a pointer to a character string containing the\nname of the default PBS server.  You do not need to free the character string.\n\nReturns null if it cannot determine the name of the default server.\n\n\n\n"
  },
  {
    "path": "doc/man3/pbs_deljob.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_deljob 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_deljob \n\\- delete a PBS batch job\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.B int pbs_deljob(int connect, char *jobID, char *extend)\n\n.SH DESCRIPTION\nIssues a batch request to delete a batch job.  \n\nThis function generates a \n.I Delete Job \n(6) batch request and sends it to the server over the connection specified by \n.I connect.\n\nIf the batch job is running, the MoM sends the SIGTERM signal followed\nby SIGKILL.\n\nIf the batch job is deleted by a user other than the job owner, PBS\nsends mail to the job owner.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection over which to send batch request to server.\n\n.IP jobID 8\nID of job, job array, subjob, or range of subjobs to be deleted.  \n.br\nFormat for a job:\n.br\n.I <sequence number>.<server name>\n.br\nFormat for an array job:\n.br\n.I <sequence number>[].<server name>\n.br\nFormat for a subjob:\n.br\n.I <sequence number>[<index>][.<server name>]\n.br\nFormat for a range of subjobs:\n.br\n.I <sequence number>[<first>-<last>][.<server name>]\n\n.IP extend 8\nCharacter string for extensions to command.  
If the string is not\nnull, it is appended to the message mailed to the job owner.\n\n.SH RETURN VALUE\nThe routine returns 0 (zero) on success.\n\nOn error, the routine returns a non-zero exit value, and the error\nnumber is available in the global integer \n.I pbs_errno.\n\n.SH SEE ALSO\nqdel(1B), pbs_connect(3B)\n"
  },
  {
    "path": "doc/man3/pbs_delresv.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_delresv 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_delresv \n\\- delete a reservation \n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.B int pbs_delresv(int connect, char *reservationID, char *extend)\n\n.SH DESCRIPTION\nIssues a batch request to delete a reservation.\n\nThis function generates a \n.I Delete Reservation \n(72) batch request and sends it to the server over the connection specified by \n.I connect.\n\nIf the reservation is in state \n.I RESV_RUNNING, \nand there are jobs in the reservation queue, those jobs are deleted \nbefore the reservation is deleted.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection over which to send batch request to server.\n\n.IP reservationID 8\nReservation to be deleted.  \n.br\nFormat for advance reservation:\n.br\n.I R<sequence number>.<server name>\n.br\nFormat for standing reservation:\n.br\n.I S<sequence number>.<server name>\n.br\nFormat for maintenance reservation:\n.br\n.I M<sequence number>.<server name>\n\n.IP extend 8\nCharacter string for extensions to command.  
Not currently used.\n\n.SH RETURN VALUE\nThe routine returns 0 (zero) on success.\n\nOn error, the routine returns a non-zero exit value, and the error\nnumber is available in the global integer \n.I pbs_errno.\n\n.SH SEE ALSO\npbs_rdel(1B), pbs_connect(3B)\n\n"
  },
  {
    "path": "doc/man3/pbs_disconnect.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_disconnect 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_disconnect \n\\- disconnect from a PBS batch server\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.B int pbs_disconnect(int connect)\n\n.SH DESCRIPTION\nCloses the virtual stream connection to a PBS batch server.\nThe connection was previously returned from a call to\n.B pbs_connect().\n\n.SH ARGUMENTS\n.IP connect 8\nConnection to be closed.  Return value of \n.B pbs_connect().\nSpecifies connection used earlier to send batch requests to server.\n\n.SH RETURN VALUE\nThe routine returns 0 (zero) after successfully closing the connection.\n\nIf an error occurred, the routine returns -1, and the error number is \navailable in the global integer \n.I pbs_errno.\n\n.SH SEE ALSO\npbs_connect(3B)\n"
  },
  {
    "path": "doc/man3/pbs_geterrmsg.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_geterrmsg 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_geterrmsg \n\\- get error message for most recent PBS batch operation\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.B char *pbs_geterrmsg(int connect)\n\n.SH DESCRIPTION\nReturns most recent error message text associated with a batch server request.\n\nIf a preceding batch interface library call over the connection\nspecified by \n.I connect \nreturned an error from the server, the server may\nhave created an associated text message.  If there is a text message,\nthis function returns a pointer to the text message.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection handle over which to request error message from server.\n\n.SH RETURN VALUE\nIf the server returned an error and created an error text string in\nreply to a previous batch request, this function returns a pointer to\nthe text string.  The text string is null-terminated.\n\nIf the server does not have an error text string, this function\nreturns a null pointer.  The text string is a global variable; you do\nnot need to free it.\n\n.SH SEE ALSO\npbs_connect(3B)\n"
  },
  {
    "path": "doc/man3/pbs_holdjob.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_holdjob 3B \"11 December 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_holdjob \n\\- place a hold on a PBS batch job\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.nf\n.B int pbs_holdjob(int connect, char *jobID, char *hold_type, \n.B \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ char *extend)\n.fi\n\n.SH DESCRIPTION\nIssues a batch request to place a hold on a job or job array.\n\nThis function generates a\n.I Hold Job \nbatch request and sends it to the server over the connection specified by \n.I connect.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection over which to send batch request to server.\n\n.IP jobID 8\nID of job which is to be held.\n.br\nFormat for a job:\n.br\n.I <sequence number>.<server name>\n.br\nFormat for a job array:\n.br\n.I <sequence number>[].<server name>\n\n.IP hold_type 8\nType of hold to apply to job or job array.  Valid values are defined\nin pbs_ifl.h.  If hold_type is a null pointer or points to a null\nstring, PBS applies a \n.I User \nhold to the job or job array.\n\n.IP extend 8\nCharacter string for extensions to command.  
Not currently used.\n\n.SH RETURN VALUE\nThe routine returns 0 (zero) on success.\n\nIf an error occurred, the routine returns a non-zero exit value, and\nthe error number is available in the global integer \n.I pbs_errno.\n\n.SH SEE ALSO\nqhold(1B), pbs_connect(3B), pbs_alterjob(3B), pbs_rlsjob(3B)\n"
  },
  {
    "path": "doc/man3/pbs_locjob.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_locjob 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_locjob \n\\- return current location of a PBS batch job\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.B char *pbs_locjob(int connect, char *jobID, char *extend)\n\n.SH DESCRIPTION\nIssues a batch request to locate a batch job or job array.\n\nThis function generates a \n.I Locate Job \n(8) batch request and sends it to the server over the connection specified by \n.I connect.\n\nIf the server currently manages the batch job, or knows which server\ncurrently manages the job, the server returns the location of the\njob.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection over which to send batch request to server.\n\n.IP jobID 8\nID of job to be located.  \n.br\nFormat for a job:\n.br\n.I <sequence number>.<server name>\n.br\nFormat for a job array:\n.br\n.I <sequence number>[].<server name>\n\n.IP extend 8\nCharacter string for extensions to command.  Not currently used.\n\n.SH CLEANUP\nThe character string returned by \n.B pbs_locjob() \nis allocated by\n.B pbs_locjob().  \nYou must free it via a call to \n.B free().\n\n.SH RETURN VALUE\nOn success, returns a pointer to a character string containing current\nlocation.  
Format: \n.br\n.I <server name> \n\nOn failure, returns a null pointer, and the error number is available\nin the global integer \n.I pbs_errno.\n\n.SH SEE ALSO\npbs_connect(3B)\n"
  },
  {
    "path": "doc/man3/pbs_manager.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_manager 3B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_manager \n\\- modify a PBS batch object\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.nf\n.B int pbs_manager(int connect, int command, int object_type, \n.B \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ char *object_name, struct attropl *attrib_list, \n.B \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ char *extend)\n.fi\n.SH DESCRIPTION\n\nIssues a batch request to perform administrative functions at a server.\n\nGenerates a \n.I Manager\n(9) batch request and sends it to the server over the connection specified by \n.I connect.  \n\nYou can use this to create, delete, and set attributes of objects such as queues.\n\n.SH REQUIRED PRIVILEGE\nThis function requires Manager or Operator privilege depending on the\noperation, and root privilege when used with hooks.\n\nWhen not used with hooks:\n.RS 3\nFunctions MGR_CMD_CREATE and MGR_CMD_DELETE require PBS Manager privilege.\n.br\nFunctions MGR_CMD_SET and MGR_CMD_UNSET require PBS Manager or Operator privilege.\n.RE\n\nWhen used with hooks:\n.RS 3\nAll commands require root privilege on the server host.  
\n.br\nThe commands MGR_CMD_IMPORT and MGR_CMD_EXPORT and the object type\nMGR_OBJ_HOOK are used only with hooks, and therefore require root\nprivilege on the server host.\n.br\nHook commands are run at the server host.\n.RE\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection over which to send batch request to server.\n\n.IP command 8\nOperation to be performed.  Valid values are specified in pbs_ifl.h. \n\n.IP object_type 8\nSpecifies type of object on which command is to operate.  Valid values\nare specified in pbs_ifl.h.\n\n.IP object_name 8\nName of object on which to operate.\n\n.IP attrib_list 8\nPointer to a list of attributes to be operated on.  Each attribute is\ndescribed in an \n.I attropl \nstructure, defined in pbs_ifl.h as:\n.nf\nstruct attropl {\n        struct attropl *next;\n        char           *name;\n        char           *resource;\n        char           *value;\n        enum batch_op  op;\n};\n.fi\n\n.IP extend 8\nCharacter string for extensions to command.  Not currently used.\n\n.LP\n\n.B Members of attropl Structure\n.br\n.IP next 8\nPoints to next attribute in list.  A null pointer terminates the list.\n\n.IP name 8\nPoints to a string containing the name of the attribute.  \n\n.IP resource 8\nPoints to a string containing the name of a resource.  Used only when\nthe specified attribute contains resource information.\nOtherwise, \n.I resource \nshould be a null pointer.  \n\nIf the resource is already present in the object's attribute, the\nvalue is altered as specified.  Otherwise the resource is added.\n\n.IP value 8\nPoints to a string containing the new value of the attribute or\nresource.  For parameterized limit attributes, this string contains\nall parameters for the attribute.\n\n.IP op 8\nDefines the manner in which the new value is assigned to the attribute\nor resource.  
The operators used for this function are \n.I SET, UNSET, INCR, DECR.\n\n.SH USAGE FOR HOOKS\nWhen importing a hook or hook configuration file:\n.RS 3\nSet \n.I command \nto \n.I MGR_CMD_IMPORT\n\nSet \n.I object_type \nto \n.I SITE_HOOK \n(or \n.I PBS_HOOK \nif you are importing a\nconfiguration file for a built-in hook; you cannot import a built-in hook)\n\nSet \n.I object_name \nto the name of the hook\n\nIn one attropl structure:\n.RS 6\nSet \n.I name \nto \"content-type\"\n\nSet \n.I value \nto \"application/x-python\" for a hook, or \"application/x-config\" for a configuration file\n.RE\n\nIn another attropl structure:\n.RS 6\nSet \n.I name \nto \"content-encoding\"\n\nSet \n.I value \nto \"default\" or \"base64\"\n.RE\n\nIn a third attropl structure:\n.RS 6\nSet \n.I name \nto \"input-file\"\n\nSet \n.I value \nto the name of the input file\n.RE\n\nSet \n.I op \nto \n.I SET\n.RE\n\nWhen exporting a hook or hook configuration file:\n.RS 3\nSet \n.I command \nto \n.I MGR_CMD_EXPORT\n\nSet \n.I object_type \nto \n.I SITE_HOOK \n(or \n.I PBS_HOOK \nif you are exporting a configuration file for a built-in hook; you cannot export \na built-in hook) \n\nSet \n.I object_name \nto the name of the hook\n\nIn one attropl structure:\n.RS 6\nSet \n.I name \nto \"content-type\"\n\nSet \n.I value \nto \"application/x-python\" for a hook, or \"application/x-config\" for a configuration file\n.RE\n\nIn another attropl structure:\n.RS 6\nSet \n.I name \nto \"content-encoding\"\n\nSet \n.I value \nto \"default\" or \"base64\"\n.RE\n\nIn a third attropl structure:\n.RS 6\nSet \n.I name \nto \"output-file\"\n\nSet \n.I value \nto the name of the output file\n.RE\n\nSet \n.I op \nto \n.I SET\n.RE\n\n.SH RETURN VALUE\nThe routine returns 0 (zero) on success.\n\nIf an error occurred, the routine returns a non-zero exit value, and\nthe error number is 
available in the global integer \n.I pbs_errno.\n\n.SH SEE ALSO\nqmgr(1B), pbs_connect(3B)\n\n\n"
  },
  {
    "path": "doc/man3/pbs_modify_resv.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_modify_resv 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_modify_resv \n\\- modify a PBS reservation\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.nf\n.B char *pbs_modify_resv(int connect, char *reservationID, \n.B \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ struct attropl *attrib_list, char *extend)\n.fi\n\n.SH DESCRIPTION\nIssues a batch request to modify a reservation.\n\nGenerates a \n.I Modify Reservation \n(91) batch request and sends it to the server over the connection specified by \n.I connect.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection over which to send batch request to server.\n\n.IP reservationID 8\nReservation to be modified.  \n.br\nFormat for advance reservation:\n.br\n.I R<sequence number>.<server name>\n.br\nFormat for standing reservation:\n.br\n.I S<sequence number>.<server name>\n.br\n\n.IP attrib_list 8\nPointer to a list of attributes to modify.  
Each attribute is\ndescribed in an \n.I attropl \nstructure, defined in pbs_ifl.h as:\n.nf\nstruct attropl {\n        struct attropl *next;\n        char           *name;\n        char           *resource;\n        char           *value;\n        enum batch_op  op;\n};\n.fi\n\nFor any attribute that is not specified or that is a null pointer, PBS\ntakes the default action for that attribute.  The default action is to\nassign the default value or to not pass the attribute with the\nreservation; the action depends on the attribute.\n\n.IP extend 8\nCharacter string for extensions to command.  Not currently used.\n.LP\n.B Members of attropl Structure\n.br\n.IP next 8\nPoints to next attribute in list.  A null pointer terminates the list.\n\n.IP name 8\nPoints to a string containing the name of the attribute.  \n\n.IP resource 8\nPoints to a string containing the name of a resource.  Used only when\nthe specified attribute contains resource information.  Otherwise,\n.I resource \nshould be a null pointer.\n\nIf the resource is already present in the reservation's\n.I Resource_List \nattribute, the value is altered as specified.  Otherwise the resource is added.\n\n.IP value 8\nPoints to a string containing the value of the attribute or resource.  \n\n.IP op 8\nOperator.  The only allowed operator for this function is \n.I SET.\n\n.SH RETURN VALUE\nOn success, returns a character string containing the reservation ID assigned by the server.  \n\nOn failure, returns a null pointer, and the error number is available\nin the global integer \n.I pbs_errno.\n\n.SH CLEANUP\nThe space for the reservation ID string is allocated by \n.B pbs_modify_resv().\nRelease the reservation ID via a call to \n.B free() \nwhen no longer needed.\n\n.SH SEE ALSO\npbs_rsub(1B), pbs_connect(3B)\n\n\n"
  },
  {
    "path": "doc/man3/pbs_movejob.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_movejob 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_movejob \n\\- move a PBS batch job to a new destination\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.nf\n.B int pbs_movejob(int connect, char *jobID, char *destination, \n.B \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ char *extend)\n.fi\n.SH DESCRIPTION\nIssues a batch request to move a job or job array to a new destination.\n\nGenerates a \n.I Move Job \n(12) batch request and sends it to the server over the connection specified by \n.I connect.\n\nMoves specified job or job array from its current queue and server to\nthe specified queue and server.\n\nYou cannot move a job in the \n.I Running, Transiting, \nor \n.I Exiting \nstates.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection over which to send batch request to server.\n\n.IP jobID 8\nID of job to be moved.  \n.br\nFormat for a job:\n.br\n.I <sequence number>.<server name>\n.br\nFormat for a job array:\n.br\n.I <sequence number>[].<server name>\n\n.IP destination 8\nNew location for job or job array.  
Formats:\n.br\n.I <queue name>@<server name>\n.br\n   Specified queue at specified server\n.br\n.I <queue name>\n.br\n   Specified queue at default server\n.br\n.I @<server name> \n.br\n   Default queue at specified server\n.br\n.I @default\n.br\n   Default queue at default server\n.br\n.I (null pointer or null string)\n.br\n   Default queue at default server\n\n.IP extend 8\nCharacter string for extensions to command.  Not currently used.\n\n.SH RETURN VALUE\nThe routine returns 0 (zero) on success.\n\nIf an error occurred, the routine returns a non-zero exit value, and\nthe error number is available in the global integer \n.I pbs_errno.\n\n.SH SEE ALSO\nqmove(1B), pbs_connect(3B)\n"
  },
  {
    "path": "doc/man3/pbs_msgjob.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_msgjob 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_msgjob \n\\- record a message for a running PBS batch job\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.nf\n.B int pbs_msgjob(int connect, char *jobID, int file, char *message, \n.B \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ char *extend)\n.fi\n.SH DESCRIPTION\n\nIssues a batch request to write a message in one or more output files of a batch job.\n\nGenerates a \n.I Message Job \n(10) batch request and sends it to the server over the connection specified by \n.I connect.\n\nYou can write a message into a job's stdout and/or stderr files.  Can\nbe used on jobs or subjobs, but not job arrays or ranges of subjobs.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection over which to send batch request to server.\n\n.IP jobID 8\nID of job into whose output file(s) to write.  
\n.br\nFormat for a job:\n.br\n.I <sequence number>.<server name>\n.br\nFormat for a subjob:\n.br\n.I <sequence number>[<index>].<server name>\n\n.IP file 8\nIndicates whether to write to stdout, stderr, or both:\n.RS 8\n1\n.br\n   Writes to stdout\n.br\n2 \n.br\n   Writes to stderr\n.br\n3 \n.br\n   Writes to stdout and stderr\n.RE\n.IP message 8\nCharacter string to be written to output file(s).\n\n.IP extend 8\nCharacter string for extensions to command.  Not currently used.\n\n.SH RETURN VALUE\nThe routine returns 0 (zero) on success.\n\nIf an error occurred, the routine returns a non-zero exit value, and\nthe error number is available in the global integer \n.I pbs_errno.\n\n.SH SEE ALSO\nqmsg(1B), pbs_connect(3B)\n"
  },
  {
    "path": "doc/man3/pbs_orderjob.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_orderjob 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_orderjob \n\\- swap positions of two PBS batch jobs\n\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.nf\n.B int pbs_orderjob(int connect, char *jobID1, char *jobID2, \n.B \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ char *extend)\n.fi\n\n.SH DESCRIPTION\nIssues a batch request to swap the positions of two jobs.\n\nGenerates an \n.I Order Job \n(50) batch request and sends it to the server over the connection specified by \n.I connect.\n\nCan be used on jobs and job arrays.  Can be used on jobs in different\nqueues.  Both jobs must be at the same server.\n\nYou cannot swap positions of jobs that are running.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection over which to send batch request to server.\n\n.IP \"jobID1, jobID2\" 8\nIDs of jobs to be swapped.\n.br\nFormat for a job:\n.br\n.I <sequence number>.<server name>\n.br\nFormat for a job array:\n.br\n.I <sequence number>[].<server name>\n\n.IP extend 8\nCharacter string for extensions to command.  
Not currently used.\n\n.SH RETURN VALUE\nThe routine returns 0 (zero) on success.\n\nIf an error occurred, the routine returns a non-zero exit value, and\nthe error number is available in the global integer \n.I pbs_errno.\n\n.SH SEE ALSO\nqmove(1B), qorder(1B), qsub(1B), pbs_connect(3B)\n"
  },
  {
    "path": "doc/man3/pbs_preempt_jobs.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_preempt_jobs 3B \"11 December 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_preempt_jobs \n\\- preempt a list of jobs\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.nf\n.B preempt_job_info *pbs_preempt_jobs(int connect, char **jobID_list)\n.fi\n.SH DESCRIPTION\nSends the server a list of jobs to be preempted.\n\nSends a \n.I Preempt Jobs\n(93) batch request to the server over the connection specified by \n.I connect.\n\nReturns a list of preempted jobs along with the method used to preempt\neach one.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection handle over which to send batch request to server.\n\n.IP jobID_list 8\nList of job IDs to be preempted, as a null-terminated array of pointers to strings.  \n.br\nFormat for a job ID:\n.br\n.I <sequence number>.<server name>\n.br\nFormat for a job array ID:\n.br\n.I <sequence number>[].<server name>\n.br\nFor example:\n.RS 11\n.nf\nconst char *joblist[3];\njoblist[0]=\"123.myserver\";\njoblist[1]=\"456.myserver\";\njoblist[2]=NULL;\n.fi\n.RE\n\n.SH RETURN VALUE\nReturns a list of preempted jobs.  
Each job is represented in a\n.I preempt_job_info \nstructure, which has the following fields:\n\n.IP job_id 8\nThe job ID, in a char*\n\n.IP preempt_method 8\nHow the job was preempted, in a char:\n.br\nS \n.br\n   The job was preempted using suspension.\n.br\nC \n.br\n   The job was preempted using checkpointing.\n.br\nR \n.br\n   The job was preempted by being requeued.\n.br\nD \n.br\n   The job was preempted by being deleted.\n.br\n0 (zero)\n.br\n   The job could not be preempted.\n.SH CLEANUP\nYou must free the list of preempted jobs by passing it directly to \n.B free().\n"
  },
  {
    "path": "doc/man3/pbs_relnodesjob.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_relnodesjob 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_relnodesjob \n\\- release some or all of the non-primary-execution-host vnodes assigned to a PBS job\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.nf\n.B int pbs_relnodesjob (int connect, char *jobID, char *vnode_list, \n.B \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ char *extend)\n.fi\n.SH DESCRIPTION\n\nIssues a batch request to release some or all of the vnodes of a batch job.\n\nGenerates a \n.I RelnodesJob \n(90) batch request and sends it to the server over the connection specified by \n.I connect.\n\nYou cannot release vnodes on the primary execution host.\n\nYou can use this on jobs and subjobs, but not on job arrays or ranges of subjobs.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection handle over which to send batch request to server.\n\n.IP jobID 8\nID of job or subjob whose vnodes are to be released.  
\n.br\nFormat for a job:\n.br\n.I <sequence number>.<server name>\n.br\nFormat for a subjob:\n.br\n.I <sequence number>[<index>].<server name>\n\n.IP vnode_list 8\nList of vnode names separated by plus signs (\"+\").\n\nIf \n.I vnode_list \nis a null pointer, this specifies that all the vnodes assigned to the\njob that are not on the primary execution host are to be released.\n\n.IP extend 8\nCharacter string for extensions to command.  Not currently used.\n\n.SH RETURN VALUE\nThe routine returns 0 (zero) on success.\n\nIf an error occurred, the routine returns a non-zero exit value, and\nthe error number is available in the global integer \n.I pbs_errno.\n\n
  },
  {
    "path": "doc/man3/pbs_rerunjob.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_rerunjob 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_rerunjob \n\\- requeue a PBS batch job\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.B int pbs_rerunjob(int connect, char *jobID, char *extend)\n\n.SH DESCRIPTION\nIssues a batch request to requeue a batch job, job array, subjob, or range of subjobs.\n\nGenerates a \n.I Rerun Job \n(14) batch request and sends it to the server over the connection specified by \n.I connect.\n\nYou cannot requeue a job that is marked as not rerunnable (the \n.I Rerunable \nattribute is False).\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection handle over which to send batch request to server.\n\n.IP jobID 8\nID of job to be requeued.  \n.br\nFormat for a job:\n.br\n.I <sequence number>.<server name>\n.br\nFormat for a job array:\n.br\n.I <sequence number>[].<server name>\n.br\nFormat for a subjob:\n.br\n.I <sequence number>[<index>].<server name>\n.br\nFormat for a range of subjobs:\n.br\n.I <sequence number>[<index start>-<index end>].<server name>\n\n.IP extend 8\nCharacter string for extensions to command.  
Not currently used.\n\n.SH RETURN VALUE\nThe routine returns 0 (zero) on success.\n\nIf an error occurred, the routine returns a non-zero exit value, and\nthe error number is available in the global integer \n.I pbs_errno.\n\n.SH SEE ALSO\nqrerun(1B), pbs_connect(3B)\n"
  },
  {
    "path": "doc/man3/pbs_rescquery.3B",
    "content": ".\\\"\n.\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n\n.if \\n(Pb .ig Ig\n.TH pbs_rescquery 3B \"1 Oct 2009\" Local \"PBS\"\n.\\\" The following macros are style for object names and values.\n.de Ar\t\t\\\" command/function arguments and operands (italic)\n.ft 2\n.if \\\\n(.$>0 \\&\\\\$1\\f1\\\\$2\n..\n.de Av\t\t\\\" data item values  (Helv)\n.if  \\n(Pb .ft 6\n.if !\\n(Pb .ft 3\n.ps -1\n.if \\\\n(.$>0 \\&\\\\$1\\s+1\\f1\\\\$2\n..\n.de At\t\t\\\" attribute and data item names (Helv Bold)\n.if  \\n(Pb .ft 6\n.if !\\n(Pb .ft 2\n.ps -1\n.if \\\\n(.$>0 \\&\\\\$1\\s+1\\f1\\\\$2\n..\n.de Ty\t\t\\\" Type-ins and examples (typewriter)\n.if  \\n(Pb .ft 5\n.if !\\n(Pb .ft 3\n.if \\\\n(.$>0 \\&\\\\$1\\f1\\\\$2\n..\n.de Er\t\t\\\" Error values ( [Helv] )\n.if  \\n(Pb .ft 6\n.if !\\n(Pb .ft 3\n\\&\\s-1[\\^\\\\$1\\^]\\s+1\\f1\\\\$2\n..\n.de Sc\t\t\\\" Symbolic constants ( {Helv} )\n.if  \\n(Pb .ft 6\n.if !\\n(Pb .ft 3\n\\&\\s-1{\\^\\\\$1\\^}\\s+1\\f1\\\\$2\n..\n.de Al\t\t\\\" Attribute list item, like .IP but set font and size\n.if !\\n(Pb .ig Ig\n.ft 6\n.IP \"\\&\\s-1\\\\$1\\s+1\\f1\"\n.Ig\n.if  \\n(Pb .ig Ig\n.ft 2\n.IP \"\\&\\\\$1\\s+1\\f1\"\n.Ig\n..\n.\\\" the following pair of macros are used to bracket sections of code\n.de Cs\n.ft 5\n.nf\n..\n.de Ce\n.sp\n.fi\n.ft 1\n..\n.\\\" End of macros\n.Ig\n.SH NAME\npbs_rescquery, avail, totpool, usepool - query resource 
availability\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.ft 3\n.nf\nint pbs_rescquery\\^(\\^int\\ connect, char\\ **resourcelist, int *arraysize,\nint *available, int *allocated, int *reserved, int *down \\^)\n.sp\nchar *avail\\^(\\^int connect, char *resc\\^)\n.sp\nint totpool\\^(\\^int connect, int update\\^)\n.sp\nint usepool\\^(\\^int connect, int update\\^)\n.fi\n.ft 1\n.SH DESCRIPTION\n.if \\n(Pb .ig Ig\n.HP 2\n.Ig\n.if !\\n(Pb .ig Ig\n.sp\n.Ig\n.B pbs_rescquery\n.br\nIssue a request to the batch server to query the availability of resources.\n.Ar connect\nis the connection returned by \\f3pbs_connect\\fP().\n.Ar resourcelist\nis an array of one or more strings specifying the resources to be queried.\n.Ar arraysize\nis the number of strings in resourcelist.\n.Ar available ,\n.Ar allocated ,\n.Ar reserved ,\nand\n.Ar down\nare integer arrays of size arraysize.  The amount of resource specified in\nthe corresponding resourcelist string which is available, already allocated,\nreserved, and down/off-line is returned in the integer arrays.\n.IP\nAt the present time the only resource which may be specified is \"nodes\".\nIt may be specified as\n.br\n.Ty \\ \\ \\ \\ nodes\n.br\n.Ty \\ \\ \\ \\ nodes=\n.br\n.Ty \\ \\ \\ \\ nodes=\\f2specification\\f1\n.br\nwhere specification is what a user specifies in the -l option argument list\nfor nodes.  
See the qsub(1B) and pbs_resources(7B) man pages.\n.IP\nWhere the node resourcelist is a simple type, such as \"nodes\", \"nodes=\",\nor \"nodes=\\f2type\\fP\", the numbers returned reflect the actual number of nodes\n(of the specified type) which are \\f2available\\fP, \\f2allocated\\fP,\n\\f2reserved\\fP, or \\f2down\\fP.\n.IP\nFor a more complex node resourcelist, such as\n\"nodes=2\" or \"nodes=type1:type2\", only the value returned in\n.I available\nhas meaning.\nIf the number in\n.I available\nis positive, it is the number of nodes required to satisfy the specification,\nand some set of available nodes will satisfy it; see\n.I avail ().\nIf the number in\n.I available\nis zero, some of the nodes required for the specification are\ncurrently unavailable; the request might be satisfied at a later time.\nIf the number in\n.I available\nis negative, no combination of known nodes can fulfill the specification.\n.if \\n(Pb .ig Ig\n.HP 2\n.Ig\n.if !\\n(Pb .ig Ig\n.sp\n.Ig\n.B avail\n.br\nThe\n.I avail ()\ncall is provided as a conversion aid for schedulers written for early versions\nof PBS.  The avail() routine uses pbs_rescquery() and returns a character\nstring answer.\n.Ar connect\nis the connection returned by \\f3pbs_connect\\fP().\n.Ar resc\nis a single\n.I node=specification\nspecification as discussed above.  If the nodes to satisfy the specification\nare currently available, the return value is the character string\n.B yes .\nIf the nodes are currently unavailable, the return is the character string\n.B no .\nIf the specification could never be satisfied, the return is the string\n.B never .\nAn error in the specification returns the character string\n.B ? .\n.if \\n(Pb .ig Ig\n.HP 2\n.Ig\n.if !\\n(Pb .ig Ig\n.sp\n.Ig\n.B totpool\n.br\nThe\n.I totpool ()\nfunction returns the total number of nodes known to the PBS server.  
This is\nthe sum of the number of nodes available, allocated, reserved, and down.\nThe parameter\n.Ar connect\nis the connection returned by pbs_connect().\nThe parameter\n.Ar update ,\nif non-zero, causes totpool() to issue a pbs_rescquery() call to obtain\nfresh information.  If zero, numbers from the prior pbs_rescquery() are used.\n.if \\n(Pb .ig Ig\n.HP 2\n.Ig\n.if !\\n(Pb .ig Ig\n.sp\n.Ig\n.B usepool\n.br\n.I usepool ()\nreturns the number of nodes currently in use, the sum of allocated, reserved,\nand down.\nThe parameter\n.Ar connect\nis the connection returned by pbs_connect().\nThe parameter\n.Ar update ,\nif non-zero, causes usepool() to issue a pbs_rescquery() call to obtain\nfresh information.  If zero, numbers from the prior pbs_rescquery() are used.\n.SH \"SEE ALSO\"\nqsub(1B), pbs_connect(3B), pbs_disconnect(3B), pbs_rescreserve(3B) and\npbs_resources(7B)\n.SH DIAGNOSTICS\nWhen the batch request generated by the \\f3pbs_rescquery\\f1()\nfunction has been completed successfully\nby a batch server, the routine will return 0 (zero).\nOtherwise, a non-zero error is returned.  The error number is also set\nin pbs_errno.\n.LP\nThe functions usepool() and totpool() return -1 on error.\n"
  },
  {
    "path": "doc/man3/pbs_rescreserve.3B",
    "content": ".\\\"\n.\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_rescreserve 3B \"13 Sept 2011\" Local \"PBS Professional\"\n.SH NAME\npbs_rescreserve, pbs_rescrelease - reserve/free batch resources\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.B int pbs_rescreserve\\^(\\^int\\ connect, char\\ **resourcelist, \n.B int arraysize, resource_t *resource_id\\^)\n.sp\n.B int pbs_rescrelease\\^(\\^int connect, resource_t resource_id\\^)\n.SH DESCRIPTION\n.HP 2\n.B pbs_rescreserve\n.br\nIssue a request to the batch server to reserve specified resources.\n.I connect\nis the connection returned by \\f3pbs_connect\\fP().\n.I resourcelist \nis an array of one or more strings specifying the resources to be reserved.\n.I arraysize\nis the number of strings in resourcelist.\n.I resource_id\nis a pointer to a resource handle.\nThe pointer cannot be null.\nIf the present value of the resource handle is \n.B RESOURCE_T_NULL ,\nthis request is for a new reservation and, if successful, a resource handle\nwill be returned in resource_id.\n.IP\nIf the value of resource_id as supplied by the caller is not \n.B RESOURCE_T_NULL ,\nthis is an existing (partial) reservation.   
Resources currently reserved \nfor this handle will be released and the full reservation will be attempted\nagain.\nIf the caller wishes to release the resources allocated to a partial\nreservation, the caller should pass the resource handle to \n\\f2pbs_rescrelease\\fP().\n.IP\nAt the present time the only resource which may be specified is \"nodes\". \nIt should be specified as:\n.RS 4\n.I nodes=specification\n.RE\n.IP\nwhere specification is what a user specifies in the -l option argument list\nfor nodes; see \n.I qsub (1B).\n.HP 2\n.B pbs_rescrelease\n.br\nThe \\f2pbs_rescrelease\\fP()\ncall releases or frees resources reserved with the resource handle of\n.I resource_id\nreturned from a prior \\f2pbs_rescreserve\\fP() call.\n.I connect\nis the connection returned by \\f3pbs_connect\\fP().\n.LP\nBoth functions require that the issuing user have operator or administrator\nprivilege.\n.SH \"SEE ALSO\"\nqsub(1B), pbs_connect(3B), pbs_disconnect(3B) and\npbs_resources(7B)\n.SH DIAGNOSTICS\npbs_rescreserve() and pbs_rescrelease() return zero on success.\nOtherwise, a non-zero error is returned.  The error number is also set\nin pbs_errno.\n.IP PBSE_RMPART\nis a special case indicating that some but not all of the requested resources\ncould be reserved; a partial reservation was made.  The reservation request\nshould either be reissued with the returned handle or the partial \nresources released.\n.IP PBSE_RMBADPARAM\na parameter is incorrect, such as a null for the pointer to the resource_id.\n.IP PBSE_RMNOPARAM\na parameter is missing, such as a null resource list. \n.LP\n"
  },
  {
    "path": "doc/man3/pbs_rlsjob.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_rlsjob 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_rlsjob \n\\- release a hold on a PBS batch job\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.B int pbs_rlsjob(int connect, char *jobID, char *hold_type, char *extend)\n\n.SH DESCRIPTION\nIssues a batch request to release a hold on a job or job array.\n\nGenerates a \n.I Release Job \n(13) batch request and sends it to the server over the connection specified by \n.I connect.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection handle over which to send batch request to server.\n\n.IP jobID 8\nID of job to be released.  \n.br\nFormat for a job:\n.br\n.I <sequence number>.<server name>\n.br\nFormat for a job array:\n.br\n.I <sequence number>[].<server name>\n.br\n\n.IP hold_type 8\nType of hold to remove from job or job array.  Valid values are defined\nin pbs_ifl.h.  If hold_type is a null pointer or points to a null\nstring, PBS removes a \n.I User \nhold from the job or job array.\n\n.IP extend 8\nCharacter string for extensions to command.  
Not currently used.\n\n.SH RETURN VALUE\nThe routine returns 0 (zero) on success.\n\nIf an error occurred, the routine returns a non-zero exit value, and\nthe error number is available in the global integer \n.I pbs_errno.\n\n.SH SEE ALSO\nqhold(1B), qrls(1B), pbs_connect(3B), pbs_holdjob(3B)\n"
  },
  {
    "path": "doc/man3/pbs_runjob.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_runjob 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_runjob\n\\- run a PBS batch job\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.nf\n.B int pbs_runjob(int connect, char *jobID, char *location, \n.B \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ char *extend)\n.fi\n.SH DESCRIPTION\nIssues a batch request to run a batch job.\n\nGenerates a\n.I Run Job \n(15) batch request and sends it to the server over the connection specified by \n.I connect.  \n\nIf no file stagein is required, the server replies when the job has\nstarted execution.  If file stagein is required, the server replies\nwhen staging is started.\n\n.SH REQUIRED PRIVILEGE\nYou must have Manager or Operator privilege to use this command.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection over which to send batch request to server.\n\n.IP jobID 8\nID of job to be run.  \n.br\nFormat for a job:\n.br\n.I <sequence number>.<server name>\n.br\nFormat for a job array:\n.br\n.I <sequence number>[].<server name>\n\n.IP location 8\nLocation where job should run, and optionally resources to use.  
Same as \n.B qrun -H:\n\n.RS 8\n.IP \"-H <vnode specification without resources>\" 3\nThe \n.I vnode specification without resources\nhas this format:\n.br\n.I \\ \\ \\ (<vchunk>)[+(<vchunk>) ...]\n.br\nwhere \n.I vchunk \nhas the format\n.br\n.I \\ \\ \\ <vnode name>[+<vnode name> ...]\n.br\nExample: -H (VnodeA+VnodeB)+(VnodeC)\n\nPBS applies one requested chunk from the job's selection directive in round-robin\nfashion to each \n.I vchunk \nin the list.  Each \n.I vchunk \nmust be sufficient to run the job's corresponding chunk, otherwise\nthe job may not execute correctly.\n.RE\n\n.RS 8\n.IP \"-H <vnode specification with resources>\" 3\nThe \n.I vnode specification with resources\nhas this format:\n.br\n.I \\ \\ \\ (<vchunk>)[+(<vchunk>) ...]\n.br\nwhere \n.I vchunk \nhas the format\n.IP \"\" 6\n.I <vnode name>:<vnode resources>[+<vnode name>:<vnode resources> ...]\n.LP\n.RS 3\nand where\n.I vnode resources\nhas the format\n.RS 3\n<resource name>=<value>[:<resource name>=<value> ...]\n.RE\n\n.IP \"Example:\" 3\n.nf\n-H (VnodeA:mem=100kb:ncpus=1)+ \n.br \n     (VnodeB:mem=100kb:ncpus=2+ VnodeC:mem=100kb)\n.fi\n.LP\n\nPBS creates a new selection directive from the \n.I vnode specification with resources, \nusing it instead of the original specification from the user.\nAny single resource specification results in the\njob's original selection directive being ignored.  Each \n.I vchunk \nmust be sufficient to run the job's corresponding chunk, otherwise\nthe job may not execute correctly.\n\nIf the job being run requests\n.I -l place=exclhost,\ntake extra care to satisfy the \n.I exclhost \nrequest.  Make sure that if any vnodes are from a multi-vnoded host, \nall vnodes from that host are allocated.  Otherwise those vnodes can \nbe allocated to other jobs.\n.RE\n.RE\n\n.IP extend 8\nCharacter string for extensions to command.  
Not currently used.\n\n.SH RETURN VALUE\nThe routine returns 0 (zero) on success.\n.br\n\nIf an error occurred, the routine returns a non-zero exit value, and\nthe error number is available in the global integer \n.I pbs_errno.\n\n.SH SEE ALSO\nqrun(1B), pbs_asyrunjob(3B), pbs_connect(3B)\n"
  },
  {
    "path": "doc/man3/pbs_selectjob.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_selectjob 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_selectjob \n\\- select PBS batch jobs according to specified criteria\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.nf\n.B char **pbs_selectjob(int connect, struct attropl *criteria_list, \n.B \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ char *extend)\n.fi\n\n.SH DESCRIPTION\n\n.B pbs_selectjob() \nissues a batch request to select jobs that meet specified criteria,\nand returns an array of the matching job IDs.\n\nThis command generates a \n.I Select Jobs \n(16) batch request and sends it to the server over the connection handle specified by \n.I connect.\n\nBy default, \n.B pbs_selectjob() \nreturns all batch jobs for which the user\nis authorized to query status.  You can filter the jobs by specifying\nvalues for job attributes and resources.  You send a linked list of\nattributes with associated values and operators.  Job attributes are\nlisted in pbs_job_attributes(7B).\n\nReturns a list of jobs that meet all specified criteria.  \n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection handle over which to send batch request to server.\n\n.IP criteria_list 8\nPointer to a list of attributes to use as selection criteria.  
Each\nattribute is described in an \n.I attropl \nstructure, defined in pbs_ifl.h as:\n.nf\nstruct attropl {\n        struct attropl *next;\n        char           *name;\n        char           *resource;\n        char           *value;\n        enum batch_op  op;\n};\n.fi\n\nIf \n.I criteria_list \nitself is null, you are not using attributes or resources as selection criteria.\n\n.IP extend 8\nCharacter string where you can specify limits or extensions of your search.\n.LP\n\n.B Members of attropl Structure\n.IP next 8\nPoints to next attribute in list.  A null pointer terminates the list.\n\n.IP name 8\nPoints to a string containing the name of the attribute.  \n\n.IP resource 8\nPoints to a string containing the name of a resource.  Used only when\nthe specified attribute contains resource information.  Otherwise,\n.I resource \nshould be a null pointer.\n\n.IP value 8\nPoints to a string containing the value of the attribute or resource.  \n\n.IP op 8\nDefines the operator in the logical expression:\n.br\n.I <existing value> <operator> <specified limit>\n.br\nJobs for which the logical expression evaluates to True are selected.\n\nFor this command, \n.I op \ncan be\n.I EQ, NE, GE, GT, LE, LT.\n\n.SH QUERYING STATES\nYou can select jobs in more than one state using a single request, by\nlisting all states you want returned.  For example, to get jobs in\n.I Held \nand \n.I Waiting \nstates:\n.RS 3\nFill in \n.I criteria_list->name \nwith \"job_state\"\n.br\nFill in \n.I criteria_list->value \nwith \"HW\" for \n.I Held \nand \n.I Waiting\n.RE\n\n.SH EXTENDING YOUR QUERY\nYou can use the following characters in the extend parameter:\n.IP \"T, t\" 8\nExtends query to include subjobs.  
Job arrays are not included.\n.IP x 8\nExtends query to include finished and moved jobs.\n.LP\n\n.B Querying Finished and Moved Jobs\n.br\nTo get information on finished or moved jobs, as well as current jobs,\nadd an 'x' character to the \n.I extend \nparameter (set one character to be the 'x' character).  For example:\n.nf \n   pbs_selectjob ( ..., ..., <extend characters>) ...\n.fi\nTo get information on finished jobs only:\n.RS 3\nAdd the \"x\" character to the \n.I extend \nparameter\n.br\nFill in \n.I criteria_list->name \nwith \"ATTR_state\"\n.br\nFill in \n.I criteria_list->value \nwith \"FM\" for Finished and Moved\n.RE\nSubjobs are not considered finished until the parent array job is finished.\n\n.B Querying Job Arrays and Subjobs\n.br\nTo query only job arrays (not jobs or subjobs):\n.RS 3\nFill in \n.I criteria_list->name \nwith \"array\"\n.br\nFill in \n.I criteria_list->value \nwith \"True\"\n.RE\nTo query only job arrays and subjobs (not jobs):\n.RS 3\nFill in \n.I criteria_list->name \nwith \"array\"\n.br\nFill in \n.I criteria_list->value \nwith \"True\"\n.br\nAdd the \"T\" or \"t\" character to the \n.I extend \nparameter\n.RE\nTo query only jobs and subjobs (not job arrays), add the \"T\" or \"t\" character to the \n.I extend \nparameter.\n\n.SH RETURN VALUE\nThe return value is a pointer to a null-terminated array of character\npointers.  Each character pointer in the array points to a character\nstring which is a job ID in the form:\n.br\n.I <sequence number>.<server>@<server>\n\nIf no jobs met the criteria, the first pointer in the array is null.\n\nIf an error occurred, the routine returns a null pointer, and the\nerror number is available in the global integer \n.I pbs_errno.\n\n.SH CLEANUP REQUIRED\nThe returned array of character pointers is malloc()'ed by\n.B pbs_selectjob().  \nWhen the array is no longer needed, you must free it via a call to \n.B free().\n\n.SH SEE ALSO\nqselect(1B), pbs_connect(3B)\n\n"
  },
  {
    "path": "doc/man3/pbs_selstat.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_selstat 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_selstat \n\\- get status of selected PBS batch jobs\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.nf\n.B struct batch_status *\n.B pbs_selstat(int connect, struct attropl *criteria_list, \n.B \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ struct attrl *output_attribs, char *extend)\n.fi\n\n.SH DESCRIPTION\nIssues a batch request to get the status of jobs which meet the specified criteria.  \n\nGenerates a \n.I Select Status \n(51) batch request and sends it to the server over the connection specified by \n.I connect.\n\nReturns a list of \n.I batch_status \nstructures for jobs that meet the selection criteria.\n\nThis function is a combination of \n.B pbs_selectjob() \nand \n.B pbs_statjob().  \n\nBy default this gives status for all jobs for which you are authorized\nto query status.  You can filter the results by specifying selection\ncriteria.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection handle over which to send batch request to server.\n\n.IP criteria_list 8\nPointer to a list of selection criteria, which are attributes and\nresources with required values.  If this list is null, you are not\nfiltering your results via selection criteria.  
Each attribute or\nresource is described in an \n.I attropl \nstructure, defined in pbs_ifl.h\nas:\n.nf\nstruct attropl {\n        struct attropl *next;\n        char           *name;\n        char           *resource;\n        char           *value;\n        enum batch_op  op;\n};\n.fi\nIf \n.I criteria_list \nitself is null, you are not using attributes or resources as selection criteria.\n\n.IP output_attribs 8\nPointer to a list of attributes to return.  If this list is null, all\nattributes are returned.  Each attribute is described in an \n.I attrl\nstructure, defined in pbs_ifl.h as:\n.nf\nstruct attrl {\n        char         *name;\n        char         *resource;\n        char         *value;\n        struct attrl *next;\n};\n.fi\n.IP extend 8\nCharacter string where you can specify limits or extensions of your selection.\n.LP\n.B Members of attropl Structure\n.br\n.IP next 8\nPoints to next attribute in list.  A null pointer terminates the list.\n\n.IP name 8\nPoints to a string containing the name of the attribute.  \n\n.IP resource 8\nPoints to a string containing the name of a resource.  Used only when\nthe specified attribute contains resource information.  Otherwise,\n.I resource\nshould be a null pointer.\n\n.IP value 8\nPoints to a string containing the value of the attribute or resource.\nFor parameterized limit attributes, this string contains all\nparameters for the attribute.\n\n.IP op 8\nSpecifies the test to be applied to the attribute or resource.  The\noperators are \n.I EQ, NE, GE, GT, LE, LT.\n.LP \n\n.B Members of attrl Structure\n.br\n.IP name 8\nPoints to a string containing the name of the attribute.  \n\n.IP resource 8\nPoints to a string containing the name of a resource.  Used only when\nthe specified attribute contains resource information.  Otherwise, \n.I resource \nshould be a null pointer.\n\n.IP value 8\nPoints to a string containing the value of the attribute or resource.  
Should always be null.\n\n.IP next 8\nPoints to next attribute in list.  A null pointer terminates the list.\n\n.SH QUERYING STATES\nYou can select jobs in more than one state using a single request, by\nlisting all states you want returned.  For example, to get jobs in\n.I Held \nand \n.I Waiting \nstates:\n.RS 3\nFill in \n.I criteria_list->name \nwith \"job_state\"\n.br\nFill in \n.I criteria_list->value \nwith \"HW\" for \n.I Held \nand \n.I Waiting\n.RE\n.SH EXTENDING YOUR QUERY\nYou can use the following characters in the \n.I extend \nparameter:\n.IP \"T, t\" 8\nExtends query to include subjobs.  Job arrays are not included.\n.IP x 8\nExtends query to include finished and moved jobs.\n.LP\n\n.B Querying Finished and Moved Jobs\n.br\nTo get information on finished or moved jobs, as well as current jobs,\nadd an 'x' character to the \n.I extend \nparameter (set one character to be the 'x' character).  For example:\n.nf\n   pbs_selstat ( ..., ..., <extend characters>) ...\n.fi\nTo get information on finished jobs only:\n.RS 3\nAdd the \"x\" character to the \n.I extend \nparameter\n.br\nFill in \n.I criteria_list->name \nwith \"ATTR_state\"\n.br\nFill in \n.I criteria_list->value \nwith \"FM\" for \n.I Finished \nand \n.I Moved\n.RE\nFor example:\n.nf\n   criteria_list->name = ATTR_state;\n   criteria_list->value = \"FM\";\n   criteria_list->op = EQ;\n   pbs_selstat ( ..., criteria_list, ..., extend) ...\n.fi\nSubjobs are not considered finished until the parent array job is finished.\n\n.B Querying Job Arrays and Subjobs\n.br\nTo query only job arrays (not jobs or subjobs):\n.RS 3\nFill in \n.I criteria_list->name \nwith \"array\"\n.br\nFill in \n.I criteria_list->value \nwith \"True\"\n.RE\n\nTo query only job arrays and subjobs (not jobs):\n.RS 3\nFill in \n.I criteria_list->name \nwith \"array\"\n.br\nFill in \n.I criteria_list->value \nwith \"True\"\n.br\nAdd the \"T\" or \"t\" character to the \n.I extend \nparameter\n.RE\nTo query only jobs and subjobs 
(not job arrays), add the \"T\" or \"t\" character to the \n.I extend \nparameter.\n\n.SH RETURN VALUE\nReturns a pointer to a list of \n.I batch_status \nstructures for jobs that meet the selection criteria.  If no jobs meet\nthe criteria or can be queried for status, returns the null pointer.\n\nIf an error occurred, the routine returns a null pointer, and the\nerror number is available in the global integer \n.I pbs_errno.\n\n.B The batch_status Structure\n.br\nThe \n.I batch_status \nstructure is defined in pbs_ifl.h as\n.nf\nstruct batch_status {\n        struct batch_status *next;\n        char                *name;\n        struct attrl        *attribs;\n        char                *text;\n};\n.fi\n\n.SH CLEANUP\nYou must free the list of \n.I batch_status \nstructures when no longer needed, by calling \n.B pbs_statfree().\n\n.SH SEE ALSO\nqselect(1B), qstat(1B), pbs_connect(3B), pbs_selectjob(3B), pbs_statjob(3B)\n\n"
  },
  {
    "path": "doc/man3/pbs_sigjob.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_sigjob 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_sigjob \n\\- send a signal to a PBS batch job\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.B int pbs_sigjob(int connect, char *jobID, char *signal, char *extend)\n\n.SH DESCRIPTION\nIssues a batch request to send a signal to a batch job.\n\nGenerates a \n.I Signal Job \n(18) batch request and sends it to the server over the connection specified by \n.I connect.\n\nYou can send a signal to a job, job array, subjob, or range of subjobs.\n\nThe batch server sends the job the specified signal.\n\nThe job must be in the \n.I running \nor \n.I suspended \nstate.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection handle over which to send batch request to server.\n\n.IP jobID 8\nID of job to be signaled.  \n.br\nFormat for a job:\n.br\n.I <sequence number>.<server name>\n.br\nFormat for a job array:\n.br\n.I <sequence number>[].<server name>\n.br\nFormat for a subjob:\n.br\n.I <sequence number>[<index>].<server name>\n.br\nFormat for a range of subjobs:\n.br\n.I <sequence number>[<index start>-<index end>].<server name>\n\n.IP signal 8\nName of signal to send to job.  Can be alphabetic, with or without \n.I SIG\nprefix.  
Can be a signal number.\n\nThe following special signals are all lower-case, and have no associated signal number:\n.RS 8\n.IP admin-suspend 6\nSuspends a job and puts its vnodes into the \n.I maintenance \nstate.  The job is put into the \n.I S \nstate and its processes are suspended.\n\n.IP admin-resume 6\nResumes a job that was suspended using the \n.I admin-suspend \nsignal, without waiting for scheduler.  Cannot be used on jobs that were\nsuspended with the \n.I suspend \nsignal.  When the last \n.I admin-suspend\ned job has been \n.I admin-resume\nd, the vnode leaves the maintenance state.\n\n.IP suspend 6\nSuspends specified job(s).  Job goes into \n.I suspended (S) \nstate.\n\n.IP resume 6\nMarks specified job(s) for resumption by scheduler when there are\nsufficient resources.  Cannot be used on jobs that were suspended\nwith the \n.I admin-suspend \nsignal.  \n.LP\nIf the signal is not recognized on the execution host, no signal is\nsent and an error is returned.\n.RE\n\n.IP extend 8\nCharacter string for extensions to command.  Not currently used.\n\n.SH RETURN VALUE\nThe routine returns 0 (zero) on success.\n\nIf an error occurred, the routine returns a non-zero exit value, and\nthe error number is available in the global integer \n.I pbs_errno.\n\n.SH SEE ALSO\nqsig(1B), pbs_connect(3B)\n"
  },
  {
    "path": "doc/man3/pbs_stagein.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_stagein 3B \"3 March 2015\" Local \"PBS Professional\"\n.SH NAME\npbs_stagein - request that files for a PBS batch job be staged in.\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.B int pbs_stagein(\\^int\\ connect, char\\ *job_id, char\\ *location, char\\ *extend)\n\n.SH DESCRIPTION\nIssue a batch request to start the stage in of files specified in the stagein attribute of a batch job.\n.LP\nA\n.I \"stage in\"\nbatch request is generated and sent to the server over the connection\nspecified by\n.I connect \nwhich is the return value of \\f3pbs_connect\\f1().\n.LP\nThis request directs the server to begin the stage in of files specified in the\njob's stage in attribute.\nThis request requires that the issuing user have operator or\nadministrator privilege.\n.LP\nThe argument,\n.I job_id ,\nidentifies the job for which file staging is to begin.  
It is specified in\nthe form:\n.RS 4\n.I \"sequence_number.server\"\n.RE\n.LP\nThe argument,\n.I location ,\nif not the null pointer or null string, specifies the location where the \njob will be run and hence to where the files will be staged.\nThe location is the name of a host in the cluster managed by the server.\nIf the job is then directed to run at a different location, the run request will\nbe rejected.\n.LP\nThe argument,\n.I extend ,\nis reserved for implementation-defined extensions.\n.SH \"SEE ALSO\"\nqrun(8B), qsub(1B), and pbs_connect(3B)\n.SH DIAGNOSTICS\nWhen the batch request generated by the \\f3pbs_stagein\\f1()\nfunction has been completed successfully by a batch server, the routine will\nreturn 0 (zero).\nOtherwise, a non-zero error is returned.  The error number is also set\nin pbs_errno.\n"
  },
  {
    "path": "doc/man3/pbs_statfree.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_statfree 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_statfree \n\\- free a PBS status object\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.B void pbs_statfree(struct batch_status *psj)\n\n.SH DESCRIPTION\nFrees the specified PBS status object returned by PBS API routines\nsuch as \n.B pbs_statque, pbs_statserver, pbs_stathook, \netc.\n\n.SH ARGUMENTS\n\n.IP psj 8\nPointer to the \n.I batch_status \nstructure to be freed.\n.LP\n.B The batch_status Structure\n.br\nThe \n.I batch_status \nstructure is defined in pbs_ifl.h as\n.nf\nstruct batch_status {\n     struct batch_status *next;\n     char                *name;\n     struct attrl        *attribs;\n     char                *text;\n}\n.fi\n\n.SH RETURN VALUE\nNo return value.\n\n\n"
  },
  {
    "path": "doc/man3/pbs_stathook.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_stathook 3B \"19 July 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_stathook \n- get status information about PBS site hooks\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.nf\n.B struct batch_status \n.B *pbs_stathook(int connect, char *hook_id, struct attrl *output_attribs, \n.B \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ char *extend)\n.sp\n.B void pbs_statfree(struct batch_status *psj)\n.fi\n.SH DESCRIPTION\nIssues a batch request to get the status of a specified site hook\nor a set of site hooks at the current server.\n\nGenerates a \n.I Status Hook \nbatch request and sends it to the server over the connection specified\nby connect.\n\n.B Required Privilege\n.br\nThis API can be executed only by root on the local server host.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection handle over which to send batch request to server.\n\n.IP hook_id 8\nHook name, null string, or null pointer.  \n.br\nIf \n.I hook_id \nspecifies a name, the attribute-value list for that hook is returned.  \n.br\nIf \n.I hook_id \nis a null string or a null pointer, the status of all hooks\nat the current server is returned.\n\n.IP output_attribs 8\nPointer to a list of attributes to return.  If this list is null, all\nattributes are returned.  
Each attribute is described in an \n.I attrl\nstructure, defined in pbs_ifl.h as:\n.nf\nstruct attrl {\n        char         *name;\n        char         *resource;\n        char         *value;\n        struct attrl *next;\n};\n.fi\n.IP extend 8\nCharacter string where you can specify limits or extensions of your selection.\n.LP\n\n.B Members of attrl Structure\n.br\n.IP name 8\nPoints to a string containing the name of the attribute.  \n\n.IP resource 8\nPoints to a string containing the name of a resource.  Used only when\nthe specified attribute contains resource information.  Otherwise, \n.I resource \nshould be a null pointer.\n\n.IP value 8\nShould always be a pointer to a null string.\n\n.IP next 8\nPoints to next attribute in list.  A null pointer terminates the list.\n\n.SH RETURN VALUE\nReturns a pointer to a list of \n.I batch_status \nstructures for the specified site hook.  If no site hook can be\nqueried for status, returns the null pointer.\n\nIf an error occurred, the routine returns a null pointer, and the\nerror number is available in the global integer \n.I pbs_errno.\n\n.B The batch_status Structure\n.br\nThe \n.I batch_status \nstructure is defined in pbs_ifl.h as\n.nf\nstruct batch_status {\n        struct batch_status *next;\n        char                *name;\n        struct attrl        *attribs;\n        char                *text;\n}\n.fi\n\n.SH CLEANUP\nYou must free the list of \n.I batch_status \nstructures when no longer needed, by calling \n.B pbs_statfree().\n\n.SH SEE ALSO\npbs_hook_attributes(7B), \npbs_connect(3B), pbs_statfree(3B)\n"
  },
  {
    "path": "doc/man3/pbs_stathost.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_stathost 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_stathost\n\\- get status of PBS execution host(s)\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.nf\n.B struct batch_status *\n.B pbs_stathost(int connect, char *target, struct attrl *output_attribs, \n.B \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ char *extend)\n.fi\n\n.SH DESCRIPTION\nIssues a batch request to get the status of PBS execution hosts.\n\nGenerates a \n.I Status Node \n(58) batch request and sends it to the server over the connection specified by \n.I connect.\n\nReturns specified attributes or all attributes of specified execution\nhost or all execution hosts.  If an execution host has multiple\nvnodes, this command reports aggregated information from the vnodes\nfor that host.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection handle over which to send batch request to server.\n\n.IP target 8\nName of execution host whose attributes are to be reported.  If this\nargument is a null pointer or points to a null string, returns\nattributes of all execution hosts known to the server.\n\n.IP output_attribs 8\nPointer to a list of attributes to return.  If this argument is null,\nreturns all attributes.  
Each attribute is described in an \n.I attrl\nstructure, defined in pbs_ifl.h as:\n.nf\nstruct attrl {\n        char         *name;\n        char         *resource;\n        char         *value;\n        struct attrl *next;\n};\n.fi\n\n.IP extend 8\nCharacter string for extensions to command.  Not currently used.\n.LP\n.B Members of attrl Structure\n.br\n.IP name 8\nPoints to a string containing the name of the attribute.  \n\n.IP resource 8\nPoints to a string containing the name of a resource.  Used only when\nthe specified attribute contains resource information.  Otherwise,\n.I resource \nshould be a null pointer.\n\n.IP value 8\nPoints to a string containing the value of the attribute or resource.  \n\n.IP next 8\nPoints to next attribute in list.  A null pointer terminates the list.\n\n.SH RETURN VALUE\nReturns a pointer to a list of\n.I batch_status \nstructures describing the execution host(s).\n\nIf an error occurred, the routine returns a null pointer, and the\nerror number is available in the global integer \n.I pbs_errno.\n\n.B The batch_status Structure\n.br\nThe \n.I batch_status \nstructure is defined in pbs_ifl.h as\n.nf\nstruct batch_status {\n        struct batch_status *next;\n        char                *name;\n        struct attrl        *attribs;\n        char                *text;\n}\n.fi\n\n.SH CLEANUP \nYou must free the list of \n.I batch_status \nstructures when no longer needed, by calling \n.B pbs_statfree().\n\n.SH SEE ALSO\nqstat(1B), pbs_connect(3B), pbs_statfree(3B)\n\n"
  },
  {
    "path": "doc/man3/pbs_statjob.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_statjob 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_statjob \n\\- get status of PBS batch jobs\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.nf\n.B struct batch_status *\n.B pbs_statjob(int connect, char *ID, struct attrl *output_attribs, \n.B \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ char *extend)\n.fi\n.SH DESCRIPTION\nIssues a batch request to get the status of a specified batch job, a\nlist of batch jobs, or the batch jobs at a queue or server.\n\nGenerates a \n.I Status Job \n(19) batch request and sends it to the server over the connection specified by \n.I connect.\n\nYou can query status of jobs, job arrays, subjobs, and ranges of subjobs.\n\nQueries all specified jobs that the user is authorized to query.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection handle over which to send batch request to server.\n\n.IP ID 8\nJob ID, list of job IDs, queue, server, or null.  \n.br\nIf \n.I ID \nis a null pointer or points to a null string, gets status of jobs at connected server. 
\n.br\nFormat for a job:\n.br\n.I <sequence number>.<server name>\n.br\nFormat for a job array:\n.br\n.I <sequence number>[].<server name>\n.br\nFormat for a subjob:\n.br\n.I <sequence number>[<index>].<server name>\n.br\nFormat for a range of subjobs:\n.br\n.I <sequence number>[<index start>-<index end>].<server name>\n.br\nFormat for a list of jobs: comma-separated list of job IDs in a single\nstring.  White space is ignored.  No limit on length:\n.br\n.I \"<job ID>,<job ID>,<job ID>, ...\"\n.br\nFormat for a queue:\n.br\n.I <queue name>@<server name>\n.br\nFormat for a server:\n.br\n.I <server name>\n.br\n\n.IP output_attribs 8\nPointer to a list of attributes to return.  If this list is null, all\nattributes are returned.  Each attribute is described in an \n.I attrl\nstructure, defined in pbs_ifl.h as:\n.nf\nstruct attrl {\n        char         *name;\n        char         *resource;\n        char         *value;\n        struct attrl *next;\n};\n.fi\n\n.IP extend 8\nCharacter string where you can specify limits or extensions of your\nsearch.  Order of characters is not important.\n\n.LP\n.B Members of attrl Structure\n.br\n.IP name 8\nPoints to a string containing the name of the attribute.  \n\n.IP resource 8\nPoints to a string containing the name of a resource.  Used only when\nthe specified attribute contains resource information.  Otherwise,\n.I resource \nshould be a null pointer.\n\n.IP value 8\nPoints to a string containing the value of the attribute or resource.  \n\n.IP next 8\nPoints to next attribute in list.  A null pointer terminates the list.\n\n.SH QUERYING JOB ARRAYS AND SUBJOBS\nYou can query status of job arrays and their subjobs, or just the parent job arrays only.\n\nTo query status of job arrays and their subjobs, include the job array\nIDs in the \n.I ID \nargument, and include the \"t\" character in the \n.I extend\nargument.  
The function returns the status of each parent job array\nfollowed by status of each subjob in that job array.\n\nTo query status of one or more parent job arrays only, but not their\nsubjobs, include their job IDs in the \n.I ID \nargument, but do not include anything in the \n.I extend \nargument.\n\n.SH QUERYING THE JOBS AT A QUEUE OR SERVER\nTo query status of all jobs at a queue, give the queue name in the \n.I ID \nargument.\n\nTo query status of all jobs at a server, give the server name in the\n.I ID \nargument.  If you give a null ID argument, the function queries the\ndefault server.\n\n.SH EXTENDING YOUR QUERY\nYou can use the following characters in the extend parameter:\n.IP \"T, t\" 8\nExtends query to include subjobs.  Job arrays are not included.\n\n.IP x 8\nExtends query to include finished and moved jobs.\n.LP\n.B Querying Finished and Moved Jobs\n.br\nTo get status for finished or moved jobs, as well as current jobs, add\nan 'x' character to the \n.I extend \nparameter (set one character to be the 'x' character).  For example: \n.br\n.I \\ \\ \\ pbs_statjob ( ..., ..., <extend characters>) ...\n\nSubjobs are not considered finished until the parent array job is finished.\n\n\n.SH RETURN VALUES\n\nFor a single job, if the job can be queried, returns a pointer to a\n.I batch_status \nstructure containing the status of the specified job.\nIf the job cannot be queried, returns a NULL pointer, and \n.I pbs_errno \nis set to an error number indicating the reason the job could not be queried.\n\nFor a list of jobs, if any of the specified jobs can be queried,\nreturns a pointer to a \n.I batch_status \nstructure containing the status\nof all the queryable jobs.  
If none of the jobs can be queried,\nreturns a NULL pointer, and \n.I pbs_errno \nis set to the error number that indicates the reason that the last\njob in the list could not be queried.\n\nFor a queue, if the queue exists, returns a pointer to a \n.I batch_status\nstructure containing the status of all the queryable jobs in the\nqueue.  If the queue does not exist, returns a NULL pointer, and\n.I pbs_errno \nis set to \n.I PBSE_UNKQUE (15018).  \nIf the queue exists but contains no queryable jobs, returns a NULL pointer, and \n.I pbs_errno \nis set to \n.I PBSE_NONE (0).\n\nWhen querying a server, the connection to the server is already\nestablished by \n.B pbs_connect().  \nIf there are jobs at the server, returns a pointer to a \n.I batch_status \nstructure containing the status of all the queryable jobs at the\nserver.  If the server does not contain any queryable jobs, returns a\nNULL pointer, and \n.I pbs_errno \nis set to \n.I PBSE_NONE (0).  \n\n.B The batch_status Structure \n.br \nThe \n.I batch_status \nstructure is defined in pbs_ifl.h as\n.nf \nstruct batch_status {\n        struct batch_status *next;\n        char                *name;\n        struct attrl        *attribs;\n        char                *text;\n}\n.fi\n\n.SH CLEANUP\nYou must free the list of \n.I batch_status \nstructures when no longer needed, by calling \n.B pbs_statfree().\n\n.SH SEE ALSO\nqstat(1B), pbs_connect(3B), pbs_statfree(3B)\n"
  },
  {
    "path": "doc/man3/pbs_statnode.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_statnode 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_statnode\n\\- get status of PBS execution host(s)\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.nf\n.B struct batch_status *\n.B pbs_statnode(int connect, char *target, struct attrl *output_attribs, \n.B \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ char *extend)\n.fi\n\n.SH DESCRIPTION\nIssues a batch request to get the status of PBS execution hosts.\n\nGenerates a \n.I Status Node \n(58) batch request and sends it to the server over the connection specified by \n.I connect.\n\nReturns specified attributes or all attributes of specified execution\nhost or all execution hosts.  If an execution host has multiple\nvnodes, this command reports aggregated information from the vnodes\nfor that host.\n\nIdentical to \n.B pbs_stathost(); \nretained for backward compatibility.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection handle over which to send batch request to server.\n\n.IP target 8\nName of execution host whose attributes are to be reported.  If this\nargument is a null pointer or points to a null string, returns\nattributes of all execution hosts known to the server.\n\n.IP output_attribs 8\nPointer to a list of attributes to return.  
If this argument is null,\nreturns all attributes.  Each attribute is described in an \n.I attrl\nstructure, defined in pbs_ifl.h as:\n.nf\nstruct attrl {\n        char         *name;\n        char         *resource;\n        char         *value;\n        struct attrl *next;\n};\n.fi\n\n.IP extend 8\nCharacter string for extensions to command.  Not currently used.\n.LP\n.B Members of attrl Structure\n.br\n.IP name 8\nPoints to a string containing the name of the attribute.  \n\n.IP resource 8\nPoints to a string containing the name of a resource.  Used only when\nthe specified attribute contains resource information.  Otherwise,\n.I resource \nshould be a null pointer.\n\n.IP value 8\nPoints to a string containing the value of the attribute or resource.  \n\n.IP next 8\nPoints to next attribute in list.  A null pointer terminates the list.\n\n.SH RETURN VALUE\nReturns a pointer to a list of\n.I batch_status \nstructures describing the execution host(s).\n\nIf an error occurred, the routine returns a null pointer, and the\nerror number is available in the global integer \n.I pbs_errno.\n\n.B The batch_status Structure\n.br\nThe \n.I batch_status \nstructure is defined in pbs_ifl.h as\n.nf\nstruct batch_status {\n        struct batch_status *next;\n        char                *name;\n        struct attrl        *attribs;\n        char                *text;\n}\n.fi\n\n\n.SH CLEANUP \nYou must free the list of \n.I batch_status \nstructures when no longer needed, by calling \n.B pbs_statfree().\n\n.SH SEE ALSO\nqstat(1B), pbs_connect(3B), pbs_statfree(3B)\n\n"
  },
  {
    "path": "doc/man3/pbs_statque.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_statque 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_statque \n\\- get status of PBS queue(s)\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.nf\n.B struct batch_status *\n.B pbs_statque(int connect, char *target, struct attrl *output_attribs, \n.B \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ char *extend)\n.fi\n\n.SH DESCRIPTION\nIssues a batch request to get the status of PBS queues.\n\nGenerates a \n.I Status Queue \n(20) batch request and sends it to the server over the connection specified by \n.I connect.\n\nReturns specified attributes or all attributes of specified queue or all queues.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection handle over which to send batch request to server.\n\n.IP target 8\nName of queue whose attributes are to be reported.  If this argument\nis null, returns attributes of all queues known to the server.\n\n.IP output_attribs 8\nPointer to a list of attributes to return.  If this argument is a null\npointer or points to a null string, returns all attributes.  
Each\nattribute is described in an \n.I attrl \nstructure, defined in pbs_ifl.h as:\n.nf\nstruct attrl {\n        char         *name;\n        char         *resource;\n        char         *value;\n        struct attrl *next;\n};\n.fi\n\n.IP extend 8\nCharacter string for extensions to command.  Not currently used.\n.LP\n.B Members of attrl Structure\n.IP name 8\nPoints to a string containing the name of the attribute.  \n\n.IP resource 8\nPoints to a string containing the name of a resource.  Used only when\nthe specified attribute contains resource information.  Otherwise,\n.I resource \nshould be a null pointer.\n\n.IP value 8\nPoints to a string containing the value of the attribute or resource.  \n\n.IP next 8\nPoints to next attribute in list.  A null pointer terminates the list.\n\n.SH RETURN VALUE\nReturns a pointer to a list of \n.I batch_status \nstructures describing the queue(s).  \n\nIf an error occurred, the routine returns a null pointer, and the\nerror number is available in the global integer \n.I pbs_errno.\n\n.B The batch_status Structure\n.br\nThe \n.I batch_status \nstructure is defined in pbs_ifl.h as\n.nf\nstruct batch_status {\n        struct batch_status *next;\n        char                *name;\n        struct attrl        *attribs;\n        char                *text;\n};\n.fi\n\n\n.SH CLEANUP\nYou must free the list of \n.I batch_status \nstructures when no longer needed, by calling \n.B pbs_statfree().\n\n.SH SEE ALSO\nqstat(1B), pbs_connect(3B), pbs_statfree(3B)\n"
  },
  {
    "path": "doc/man3/pbs_statresv.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_statresv 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_statresv \n\\- get status of PBS reservation(s)\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.nf\n.B struct batch_status *\n.B pbs_statresv(int connect, char *target, struct attrl *output_attribs, \n.B \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ char *extend)\n.fi\n.SH DESCRIPTION\nIssues a batch request to get the status of PBS reservation(s).\n\nGenerates a \n.I Status Reservation \n(71) batch request and sends it to the server over the connection specified by \n.I connect.\n\nReturns specified attributes or all attributes of specified reservation or all reservations.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection handle over which to send batch request to server.\n\n.IP target 8\nID of reservation whose attributes are to be reported.  
If this\nargument is a null pointer or points to a null string, returns\nattributes of all reservations the user is authorized to query.\n.br\nFormat for advance reservation:\n.br\n.I R<sequence number>.<server name>\n.br\nFormat for standing reservation:\n.br\n.I S<sequence number>.<server name>\n.br\nFormat for maintenance reservation:\n.br\n.I M<sequence number>.<server name>\n\n.IP output_attribs 8\nPointer to a list of attributes to return.  If this argument is null,\nreturns all attributes.  Each attribute is described in an\n.I attrl \nstructure, defined in pbs_ifl.h as:\n.nf\nstruct attrl {\n        char         *name;\n        char         *resource;\n        char         *value;\n        struct attrl *next;\n};\n.fi\n\n.IP extend 8\nCharacter string for extensions to command.  Not currently used.\n.LP\n.B Members of attrl Structure\n\n.IP name 8\nPoints to a string containing the name of the attribute.  \n\n.IP resource 8\nPoints to a string containing the name of a resource.  Used only when\nthe specified attribute contains resource information.  Otherwise,\n.I resource \nshould be a null pointer.\n\n.IP value 8\nPoints to a string containing the value of the attribute or resource.  \n\n.IP next 8\nPoints to next attribute in list.  A null pointer terminates the list.\n\n.SH RETURN VALUE\nReturns a pointer to a list of \n.I batch_status \nstructures describing the reservation(s).  
\n\nIf an error occurred, the routine returns a null pointer, and the\nerror number is available in the global integer \n.I pbs_errno.\n\n.B The batch_status Structure\n.br\nThe \n.I batch_status \nstructure is defined in pbs_ifl.h as\n.nf\nstruct batch_status {\n        struct batch_status *next;\n        char                *name;\n        struct attrl        *attribs;\n        char                *text;\n};\n.fi\n\n.SH CLEANUP\nYou must free the list of \n.I batch_status \nstructures when no longer needed, by calling \n.B pbs_statfree().\n\n.SH SEE ALSO\npbs_rstat(1B), pbs_connect(3B), pbs_statfree(3B)\n\n"
  },
  {
    "path": "doc/man3/pbs_statrsc.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_statrsc 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_statrsc \n\\- get status of PBS resources\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.nf\n.B struct batch_status *\n.B pbs_statrsc(int connect, char *rescname, struct attrl *output_attribs, \n.B \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ char *extend)\n.fi\n\n.SH DESCRIPTION\nIssues a batch request to query and return the status of a specified\nresource, or a set of resources at a server.\n\nGenerates a \n.I Status Resource \n(82) batch request and sends it to the server over the connection specified by \n.I connect.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection handle over which to send batch request to server.\n\n.IP rescname 8\nName of resource to be queried.  If this is null, queries all resources at the server.  \n\n.IP output_attribs 8\nPointer to a list of attributes to return.  If this argument is a null\npointer or points to a null string, returns all attributes.  
Each\nattribute is described in an \n.I attrl \nstructure, defined in pbs_ifl.h as:\n.nf\nstruct attrl {\n        char         *name;\n        char         *resource;\n        char         *value;\n        struct attrl *next;\n};\n.fi\n\n.IP extend 8\nCharacter string for extensions to command.  Not currently used.\n.LP\n\n.B Members of attrl Structure\n.IP name 8\nPoints to a string containing the name of the attribute.  \n\n.IP resource 8\nPoints to a string containing the name of a resource.  Should be a null pointer.  \n\n.IP value 8\nPoints to a string containing the value of the attribute or resource.\nShould always be a pointer to a null string.\n\n.IP next 8\nPoints to next attribute in list.  A null pointer terminates the list.\n\n.SH QUERYING RESOURCES AT SERVER\nUse the \n.B pbs_connect() \ncommand to get a connection handle at the server.  \n\nTo query all resources at the server, pass a null pointer as the name\nof the resource.\n\n.SH RETURN VALUE\nFor a single resource, if the resource can be queried, returns a\npointer to a \n.I batch_status \nstructure containing the status of the specified resource.\n\nIf the resource cannot be queried, the routine returns a null pointer,\nand the error number is available in the global integer\n.I pbs_errno.\n\nWhen querying a server, the connection to the server is already\nestablished by \n.B pbs_connect().  \nIf there are resources at the server, returns a pointer to a \n.I batch_status \nstructure describing the queryable resource(s) at the server.  
\nIn the unlikely event that the server does not contain any queryable\nresources because the user is unprivileged and all resources are\nmarked as invisible (the \n.I i \nflag is set), returns a NULL pointer, and\n.I pbs_errno \nis set to \n.I PBSE_NONE (0).\n\n.B The batch_status Structure\n.br\nThe\n.I batch_status \nstructure is defined in pbs_ifl.h as\n.nf\nstruct batch_status {\n        struct batch_status *next;\n        char                *name;\n        struct attrl        *attribs;\n        char                *text;\n};\n.fi\n\n.SH CLEANUP\nYou must free the list of \n.I batch_status \nstructures when no longer needed, by calling \n.B pbs_statfree().\n\n.SH SEE ALSO\nqstat(1B), pbs_connect(3B), pbs_statfree(3B)\n"
  },
  {
    "path": "doc/man3/pbs_statsched.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_statsched 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_statsched \n\\- get status of PBS schedulers\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.nf\n.B struct batch_status *\n.B pbs_statsched(int connect, \n.B \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ struct attrl *output_attribs, char *extend)\n.fi\n\n.SH DESCRIPTION\nIssues a batch request to get the status of the PBS schedulers.\n\nGenerates a \n.I Status Scheduler \n(81) batch request and sends it to the server over the connection specified by \n.I connect.\n\nThis command returns status of the default scheduler and all multischeds.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection handle over which to send batch request to server.\n\n.IP output_attribs 8\nPointer to a list of attributes to return.  If this argument is a null\npointer or points to a null string, returns all attributes.  Each\nattribute is described in an \n.I attrl \nstructure, defined in pbs_ifl.h as:\n.nf\nstruct attrl {\n        char         *name;\n        char         *resource;\n        char         *value;\n        struct attrl *next;\n};\n.fi\n\n.IP extend 8\nCharacter string for extensions to command.  
Not currently used.\n.LP\n\n.B Members of attrl Structure\n.IP name 8\nPoints to a string containing the name of the attribute.  \n\n.IP resource 8\nPoints to a string containing the name of a resource.  Used only when\nthe specified attribute contains resource information.  Otherwise,\n.I resource \nshould be a null pointer.\n\n.IP value 8\nPoints to a string containing the value of the attribute or resource.\nShould always be a pointer to a null string.\n\n.IP next 8\nPoints to next attribute in list.  A null pointer terminates the list.\n\n.SH RETURN VALUE\nReturns a pointer to a list of \n.I batch_status \nstructures describing the default scheduler and all multischeds.\n\nIf an error occurred, the routine returns a null pointer, and the\nerror number is available in the global integer \n.I pbs_errno.\n\n.B The batch_status Structure\n.br\nThe \n.I batch_status \nstructure is defined in pbs_ifl.h as\n.nf\nstruct batch_status {\n        struct batch_status *next;\n        char                *name;\n        struct attrl        *attribs;\n        char                *text;\n};\n.fi\n\n.SH CLEANUP\nYou must free the list of \n.I batch_status \nstructures when no longer needed, by calling \n.B pbs_statfree().\n\n.SH \"SEE ALSO\"\nqstat(1B), pbs_connect(3B), pbs_statfree(3B)\n\n"
  },
  {
    "path": "doc/man3/pbs_statserver.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_statserver 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_statserver \n\\- get status of a PBS batch server\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.nf\n.B struct batch_status *\n.B pbs_statserver(int connect, struct attrl *output_attribs, char *extend)\n.fi\n\n.SH DESCRIPTION\nIssues a batch request to get the status of a batch server.\n\nGenerates a \n.I Status Server \n(21) batch request and sends it to the server over the connection specified by \n.I connect.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection handle over which to send batch request to server.\n\n.IP output_attribs 8\nPointer to a list of attributes to return.  If this argument is a null\npointer or points to a null string, returns all attributes.  Each\nattribute is described in an \n.I attrl \nstructure, defined in pbs_ifl.h as:\n.nf\nstruct attrl {\n        char         *name;\n        char         *resource;\n        char         *value;\n        struct attrl *next;\n};\n.fi\n\n.IP extend 8\nCharacter string for extensions to command.  Not currently used.\n.LP\n\n.B Members of attrl Structure\n.IP name 8\nPoints to a string containing the name of the attribute.  \n\n.IP resource 8\nPoints to a string containing the name of a resource. 
 Used only when\nthe specified attribute contains resource information.  Otherwise,\n.I resource \nshould be a null pointer.\n\n.IP value 8\nPoints to a string containing the value of the attribute or resource.\nShould always be a pointer to a null string.\n\n.IP next 8\nPoints to next attribute in list.  A null pointer terminates the list.\n\n.SH RETURN VALUE\nReturns a pointer to a \n.I batch_status \nstructure describing the server.  \n\nIf an error occurred, the routine returns a null pointer, and the\nerror number is available in the global integer \n.I pbs_errno.\n\n.B The batch_status Structure\n.br\nThe \n.I batch_status \nstructure is defined in pbs_ifl.h as\n.nf\nstruct batch_status {\n        struct batch_status *next;\n        char                *name;\n        struct attrl        *attribs;\n        char                *text;\n};\n.fi\n\n.SH CLEANUP\nYou must free the \n.I batch_status \nstructure when no longer needed, by calling \n.B pbs_statfree().\n\n.SH SEE ALSO\nqstat(1B), pbs_connect(3B), pbs_statfree(3B)\n\n"
  },
  {
    "path": "doc/man3/pbs_statvnode.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_statvnode 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_statvnode\n\\- get status of PBS vnode(s) on execution hosts\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.nf\n.B struct batch_status *\n.B pbs_statvnode(int connect, char *target, struct attrl *output_attribs, \n.B \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ char *extend)\n.fi\n\n.SH DESCRIPTION\nIssues a batch request to get the status of PBS vnodes on execution hosts.\n\nGenerates a \n.I Status Node \n(58) batch request and sends it to the server over the connection specified by \n.I connect.\n\nReturns specified attributes or all attributes of specified execution\nhost vnode or all execution host vnodes.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection handle over which to send batch request to server.\n\n.IP target 8\nName of execution host vnode whose attributes are to be reported.  If this\nargument is a null pointer or points to a null string, returns\nattributes of all execution host vnodes known to the server.\n\n.IP output_attribs 8\nPointer to a list of attributes to return.  If this argument is null,\nreturns all attributes.  
Each attribute is described in an \n.I attrl\nstructure, defined in pbs_ifl.h as:\n.nf\nstruct attrl {\n        char         *name;\n        char         *resource;\n        char         *value;\n        struct attrl *next;\n};\n.fi\n\n.IP extend 8\nCharacter string for extensions to command.  Not currently used.\n.LP\n.B Members of attrl Structure\n.br\n.IP name 8\nPoints to a string containing the name of the attribute.  \n\n.IP resource 8\nPoints to a string containing the name of a resource.  Used only when\nthe specified attribute contains resource information.  Otherwise,\n.I resource \nshould be a null pointer.\n\n.IP value 8\nPoints to a string containing the value of the attribute or resource.  \n\n.IP next 8\nPoints to next attribute in list.  A null pointer terminates the list.\n\n.SH RETURN VALUE\nReturns a pointer to a list of\n.I batch_status \nstructures describing the vnode(s).\n\nIf an error occurred, the routine returns a null pointer, and the\nerror number is available in the global integer \n.I pbs_errno.\n\n.B The batch_status Structure\n.br\nThe \n.I batch_status \nstructure is defined in pbs_ifl.h as\n.nf\nstruct batch_status {\n        struct batch_status *next;\n        char                *name;\n        struct attrl        *attribs;\n        char                *text;\n};\n.fi\n\n.SH CLEANUP \nYou must free the list of \n.I batch_status \nstructures when no longer needed, by calling \n.B pbs_statfree().\n\n.SH SEE ALSO\nqstat(1B), pbs_connect(3B), pbs_statfree(3B)\n\n"
  },
  {
    "path": "doc/man3/pbs_submit.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_submit 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_submit \n\\- submit a PBS batch job\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.nf\n.B char *pbs_submit(int connect, struct attropl *attrib_list, \n.B \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ char *jobscript,  char *destqueue, char *extend)\n.fi\n\n.SH DESCRIPTION\nIssues a batch request to submit a new batch job.\n\nGenerates a \n.I Queue Job \n(1) batch request and sends it to the server over the connection specified by \n.I connect.\n\nSubmits job to specified queue at connected server, or if no queue is\nspecified, to default queue at connected server.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection handle over which to send batch request to server.\n\n.IP attrib_list 8\nPointer to a list of attributes explicitly requested for\njob.  Each attribute is described in an \n.I attropl \nstructure, defined in pbs_ifl.h as:\n.nf\nstruct attropl {\n        struct attropl *next;\n        char           *name;\n        char           *resource;\n        char           *value;\n        enum batch_op  op;\n};\n.fi\n\nFor any attribute that is not specified or that is a null pointer, PBS\ntakes the default action for that attribute.  
The default action is\nto assign the default value or to not pass the attribute with the job;\nthe action depends on the attribute.\n\n.IP jobscript 8\nPointer to path to job script.  Can be absolute or relative.  Relative\npath begins with the directory where the user submits the job.\n\nIf null pointer or pointer to null string, no script is passed with job.\n\n.IP destqueue 8\nPointer to name of destination queue at connected server.  If this is\na null pointer or points to a null string, the job is submitted to the\ndefault queue at the connected server.\n\n.IP extend 8\nCharacter string for extensions to command.  Not currently used.\n.LP\n\n.B Members of attropl Structure\n.IP next 8\nPoints to next attribute in list.  A null pointer terminates the list.\n\n.IP name 8\nPoints to a string containing the name of the attribute.  \n\n.IP resource 8\nPoints to a string containing the name of a resource.  Used only when\nthe specified attribute contains resource information.  Otherwise,\n.I resource \nshould be a null pointer.  \n\n.IP value 8\nPoints to a string containing the value of the attribute or resource.  \n\n.IP op 8\nOperation to perform on the attribute or resource.  In this command,\nthe only allowed operator is \n.I SET.\n\n.SH RETURN VALUE\nReturns a pointer to a character string containing the job ID assigned\nby the server.\n\nIf an error occurred, the routine returns a null pointer, and the\nerror number is available in the global integer \n.I pbs_errno.\n\n.SH CLEANUP\n\nThe space for the job ID returned by \n.B pbs_submit() \nis allocated by \n.B pbs_submit().  \nFree it via a call to \n.B free() \nwhen you no longer need it.\n\n.SH SEE ALSO\nqsub(1B), pbs_connect(3B)\n"
  },
  {
    "path": "doc/man3/pbs_submit_resv.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_submit_resv 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_submit_resv \n\\- submit a PBS reservation\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.nf\n.B char *pbs_submit_resv(int connect, struct attropl *attrib_list, \n.B \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ char *extend)\n.fi\n\n.SH DESCRIPTION\nIssues a batch request to submit a new reservation.\n\nGenerates a \n.I Submit Reservation \n(70) batch request and sends it to the server over the connection specified by \n.I connect.\n\nReturns a pointer to the reservation ID. \n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection over which to send batch request to server.\n\n.IP attrib_list 8\nPointer to a list of attributes to set, with values.  Each attribute\nis described in an \n.I attropl \nstructure, defined in pbs_ifl.h as:\n.nf\nstruct attropl {\n        struct attropl *next;\n        char           *name;\n        char           *resource;\n        char           *value;\n        enum batch_op  op;\n};\n.fi\n\nFor any attribute that is not specified or that is a null pointer, PBS\ntakes the default action for that attribute.  
The default action is\nto assign the default value or to not pass the attribute with the\nreservation; the action depends on the attribute.\n\n.IP extend 8\nCharacter string for extensions to command.  Not currently used.\n.LP\n\n.B Members of attropl Structure\n.IP next 8\nPoints to next attribute in list.  A null pointer terminates the list.\n\n.IP name 8\nPoints to a string containing the name of the attribute.  \n\n.IP resource 8\nPoints to a string containing the name of a resource.  Used only when\nthe specified attribute contains resource information.  Otherwise,\n.I resource \nshould be a null pointer.\n\n.IP value 8\nPoints to a string containing the value of the attribute or resource.  \n\n.IP op 8\nOperator.  The only allowed operator for this function is \n.I SET.\n\n.SH RETURN VALUE\n\nReturns a pointer to a character string containing the reservation ID\nassigned by the server.\n\nIf an error occurred, the routine returns a null pointer, and the\nerror number is available in the global integer \n.I pbs_errno.\n\n.SH CLEANUP\n\nThe space for the reservation ID returned by \n.B pbs_submit_resv() \nis allocated by \n.B pbs_submit_resv().  \nFree it via a call to \n.B free() \nwhen you no longer need it.\n\n.SH SEE ALSO\npbs_rsub(1B), pbs_connect(3B)\n"
  },
  {
    "path": "doc/man3/pbs_submitresv.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n\n.TH pbs_submitresv 3B \"29 August 2011\" Local \"PBS\"\n.SH NAME\npbs_submitresv - submit a PBS reservation\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.B char *pbs_submitresv(\\^int\\ connect, struct\\ attropl\\ *attrib, char\\ *extend)\n\n.SH DESCRIPTION\nIssue a batch request to submit a new reservation.\n.LP\nA\n.I \"Submit Reservation\"\nbatch request is generated and sent to the server over the connection\nspecified by\n.I connect\nwhich is the return value of \\f3pbs_connect\\f1().\n.LP\nThe parameter,\n.I attrib ,\nis a list of\n.I attropl\nstructures which is defined in pbs_ifl.h as:\n.sp\n.Ty\n.nf\n    struct attropl {\n        struct attropl *next;\n        char   *name;\n        char   *resource;\n        char   *value;\n        enum batch_op op;\n    };\n.fi\n.sp\nThe\n.I attrib\nlist is terminated by the first entry where\n.I next\nis a null pointer.\n.LP\nThe\n.I name\nmember points to a string which is the name of the attribute.  The\n.I value\nmember points to a string which is the value of the attribute.\nThe attribute names are defined in pbs_ifl.h.\n.LP\nIf an attribute is not named in the\n.I attrib\nlist, the default action will be taken.  It will either be assigned\nthe default value or will not be passed with the reservation.  
The action\ndepends on the attribute.\nIf\n.I attrib\nitself is a null pointer, then the default action will be taken for\neach attribute.\n.LP\nAssociated with an attribute of type ATTR_l (the letter ell)\nis a resource name indicated by\n.I resource\nin the\n.I attropl\nstructure.\nAll other attribute types should have a pointer to a null string for\n.I resource .\n.LP\nThe\n.I op\nmember is forced to a value of\n.I SET\nby pbs_submitresv().\n.LP\nThe parameter,\n.I extend ,\nis reserved for implementation-defined extensions.\n.LP\nThe return value is a character string which is the\n.I reservation_identifier\nassigned to the reservation by the server.\nThe space for the\n.I reservation_identifier\nstring is allocated by \\f3pbs_submitresv\\f1()\nand should be released via a call to \\f3free\\f1()\nby the user when no longer needed.\n.SH \"SEE ALSO\"\npbs_rsub(1B) and pbs_connect(3B)\n.SH DIAGNOSTICS\nWhen the batch request generated by the pbs_submitresv()\nfunction has been completed successfully by a batch server, the routine will\nreturn a pointer to a character string which is the reservation identifier of the\nsubmitted reservation.\nOtherwise, a null pointer is returned and the error code is set in pbs_errno.\n"
  },
  {
    "path": "doc/man3/pbs_tclapi.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_tclapi 3B \"4 December 2019\" Local \"PBS Professional\"\n.SH NAME\npbs_tclapi \\- PBS TCL Application Programming Interface \n.SH DESCRIPTION\n.B Deprecated.  \nThe\n.B pbs_tclapi\nis a subset of the PBS external API wrapped in a TCL\nlibrary. This functionality allows the creation of \nscripts that query the PBS system. Specifically,\nit permits the user to query the\n.B pbs_server\nabout the state of PBS, jobs,\nqueues, and nodes, and communicate with\n.B pbs_mom\nto get information about the status of running jobs, available\nresources on nodes, etc.\n.LP\n.SH USAGE\nA set of functions to communicate with the PBS\nserver and resource monitor has been added to those normally\navailable with Tcl. All these calls will set the Tcl variable\n\"pbs_errno\" to a value to indicate if an error occurred.\nIn all cases, the value \"0\" means no error. If a call to\na Resource Monitor function is made, any error value will\ncome from the system supplied\n.B errno\nvariable. If the function call communicates with the PBS\nServer, any error value will come from the error number returned\nby the server. 
This is the same TCL interface used by the\n.B pbs_tclsh\nand \n.B pbs_wish\ncommands.\n\n.IP \"openrm host ?port?\" 6\nCreates a connection to the PBS Resource Monitor on\n.I host\nusing\n.I port\nas the port number or the standard port for the resource monitor\nif it is not given.  A connection handle is returned.\nIf the open is successful, this will be a non-negative integer.\nIf not, an error occurred.\n.IP \"closerm connection\" 6\nThe parameter\n.I connection\nis a handle to a resource monitor which was previously returned from\n.B openrm.\nThis connection is closed.  Nothing is returned.\n.LP\n.IP \"downrm connection\" 6\nSends a command to the connected resource monitor to shutdown.\nNothing is returned.\n.LP\n.IP \"configrm connection filename\" 6\nSends a command to the connected resource monitor to read the configuration\nfile given by\n.I filename.\nIf this is successful, a \"0\" is returned, otherwise, \"-1\" is returned.\n.LP\n.IP \"addreq connection request\" 6\nA resource request is sent to the connected resource monitor.\nIf this is successful, a \"0\" is returned, otherwise, \"-1\" is returned.\n.LP\n.IP \"getreq connection\" 6\nOne resource request response from the connected resource monitor is\nreturned.  If an error occurred or there are no more responses, an\nempty string is returned.\n.LP\n.IP \"allreq request\" 6\nA resource request is sent to all connected resource monitors.\nThe number of streams acted upon is returned.\n.LP\n.IP \"flushreq\" 6\nAll resource requests previously sent to all connected resource monitors\nare flushed out to the network.  
Nothing is returned.\n.LP\n.IP \"activereq\" 6\nThe connection number of the next stream with something to read is returned.\nIf there is nothing to read from any of the connections, a negative\nnumber is returned.\n.LP\n.IP \"fullresp flag\" 6\nEvaluates\n.I flag\nas a boolean value and sets\nthe response mode used by\n.B getreq\nto\n.B full\nif\n.I flag\nevaluates to \"true\".\nThe full return from a resource monitor includes the original request\nfollowed by an equal sign followed by the response.  The default\nsituation is only to return the response following the equal sign.\nIf a script needs to \"see\" the entire line, this function may be used.\n.LP\n.IP \"pbsstatserv\" 6\nThe server is sent a status request for information about the server\nitself.\nIf the request succeeds, a list with three elements is returned,\notherwise an empty string is returned.\nThe first element is the server's name.  The second is a list of attributes.\nThe third is the \"text\" associated with the server (usually blank).\n.LP\n.IP \"pbsstatjob\" 6\nThe server is sent a status request for information about all\njobs resident within the server.\nIf the request succeeds, a list is returned, otherwise an empty string\nis returned.\nThe list contains an entry for each job.  Each element is a list\nwith three elements.  The first is the job's jobid.  The second\nis a list of attributes.  The attribute names which specify\nresources will have a name of the form \"Resource_List:name\" where\n\"name\" is the resource name.\nThe third is the \"text\" associated with the job (usually blank).\n.LP\n.IP \"pbsstatque\" 6\nThe server is sent a status request for information about all\nqueues resident within the server.\nIf the request succeeds, a list is returned, otherwise an empty string\nis returned.\nThe list contains an entry for each queue.  Each element is a list\nwith three elements.  The first is the queue's name.  
The second\nis a list of attributes similar to\n.B pbsstatjob.\nThe third is the \"text\" associated with the queue (usually blank).\n.LP\n.IP \"pbsstatnode\" 6\nThe server is sent a status request for information about all\nnodes defined within the server.\nIf the request succeeds, a list is returned, otherwise an empty string\nis returned.\nThe list contains an entry for each node.  Each element is a list\nwith three elements.  The first is the node's name.  The second\nis a list of attributes similar to\n.B pbsstatjob.\nThe third is the \"text\" associated with the node (usually blank).\n.LP\n.IP \"pbsselstat\" 6\nThe server is sent a status request for information about all runnable\njobs resident within the server.\nIf the request succeeds, a list similar to\n.B pbsstatjob\nis returned, otherwise an empty string is returned.\n.LP\n.IP \"pbsrunjob jobid ?location?\" 6\nRun the job given by\n.I jobid\nat the location given by\n.I location.\nIf\n.I location\nis not given, the default location is used.\nIf this is successful, a \"0\" is returned, otherwise, \"-1\" is returned.\n.LP\n.IP \"pbsasyrunjob jobid ?location?\" 6\nRun the job given by\n.I jobid\nat the location given by\n.I location\nwithout waiting for a positive response that the job\nhas actually started.\nIf\n.I location\nis not given, the default location is used.\nIf this is successful, a \"0\" is returned, otherwise, \"-1\" is returned.\n.LP\n.IP \"pbsrerunjob jobid\" 6\nRe-runs the job given by\n.I jobid.\nIf this is successful, a \"0\" is returned, otherwise, \"-1\" is returned.\n.LP\n.IP \"pbsdeljob jobid\" 6\nDelete the job given by\n.I jobid.\nIf this is successful, a \"0\" is returned, otherwise, \"-1\" is returned.\n.LP\n.IP \"pbsholdjob jobid\" 6\nPlace a hold on the job given by\n.I jobid.\nIf this is successful, a \"0\" is returned, otherwise, \"-1\" is returned.\n.LP\n.IP \"pbsmovejob jobid ?location?\" 6\nMove the job given by\n.I jobid\nto the location given by\n.I 
location.\nIf\n.I location\nis not given, the default location is used.\nIf this is successful, a \"0\" is returned, otherwise, \"-1\" is returned.\n.LP\n.IP \"pbsqenable queue\" 6\nSet the \"enabled\" attribute for the queue given by\n.I queue\nto true.\nIf this is successful, a \"0\" is returned, otherwise, \"-1\" is returned.\n.LP\n.IP \"pbsqdisable queue\" 6\nSet the \"enabled\" attribute for the queue given by\n.I queue\nto false.\nIf this is successful, a \"0\" is returned, otherwise, \"-1\" is returned.\n.LP\n.IP \"pbsqstart queue\" 6\nSet the \"started\" attribute for the queue given by\n.I queue\nto true.\nIf this is successful, a \"0\" is returned, otherwise, \"-1\" is returned.\n.LP\n.IP \"pbsqstop queue\" 6\nSet the \"started\" attribute for the queue given by\n.I queue\nto false.\nIf this is successful, a \"0\" is returned, otherwise, \"-1\" is returned.\n.LP\n.IP \"pbsalterjob jobid attribute_list\" 6\nAlter the attributes for a job specified by\n.I jobid.\nThe parameter\n.I attribute_list\nis the list of attributes to be altered.  There can be more than one.\nEach attribute consists of a list of three elements.  The first\nis the name, the second the resource and the third is the new value.\nIf the alter is successful, a \"0\" is returned, otherwise, \"-1\" is returned.\n.LP\n.IP \"pbsrescquery resource_list\" 6\n.B Deprecated. \nObtain information about the resources specified by\n.I resource_list.\nThis will be a list of strings.  If the request succeeds, a list\nwith the same number of elements as\n.I resource_list\nis returned.  Each element in this list will be a list with four\nnumbers.  
The numbers specify\n.I available,\n.I allocated,\n.I reserved,\nand\n.I down\nin that order.\n.LP\n\n.IP \"pbsconnect ?server?\" 6\nMake a connection to the named server or the default server if\na parameter is not given.\nOnly one connection to a server is allowed at any one time.\n.LP\n.IP pbsdisconnect 6\nDisconnect from the currently connected server.\n.LP\nThe above Tcl functions use PBS interface library calls for communication\nwith the server and the PBS resource monitor library to communicate\nwith pbs_mom.\n.LP\n.IP \"datetime ?day? ?time?\" 6\nThe number of arguments used determines the type of\ndate to be calculated.  With no arguments, the current POSIX\ndate is returned.  This is an integer in seconds.\n.sp\nWith one argument there are two possible formats.  The first is a 12\n(or more) character string specifying a complete date in\nthe following format:\n.br\n.I \\ \\ \\ YYMMDDhhmmss\n.br\nAll characters must be digits.  The year (YY) is given by the first\ntwo (or more) characters and is the number of years since 1900.\nThe month (MM) is the number of the month [01-12].\nThe day (DD) is the day of the month [01-31].  The hour (hh) is the hour\nof the day [00-23].  The minute (mm) is minutes after the hour [00-59].\nThe second (ss) is seconds after the minute [00-59].  The POSIX date\nfor the given date/time is returned.\n.sp\nThe second option with one argument is a relative time.  The format\nfor this is\n.br\n.I \\ \\ \\ HH:MM:SS\n.br\nWith hours (HH), minutes (MM) and seconds (SS) being separated by\ncolons \":\".  The number returned in this case will be the number of seconds\nin the interval specified, not an absolute POSIX date.\n.sp\nWith two arguments a relative date is calculated.  The first argument\nspecifies a day of the week and must be one of the following strings:\n\"Sun\", \"Mon\", \"Tue\", \"Wed\", \"Thr\", \"Fri\", or \"Sat\".  The second\nargument is a relative time as given above.  
The POSIX date\ncalculated will be the day of the week given which follows the\ncurrent day, and the time given in the second argument.  For example,\nif the current day was Monday, and the two arguments were\n\"Fri\" and \"04:30:00\", the date calculated would be the POSIX date\nfor the Friday following the current Monday, at four-thirty in the\nmorning.  If the day specified and the current day are the same,\nthe current day is used, not the day one week later.\n.LP\n.IP \"strftime format time\" 6\nThis function calls the POSIX function\n.I strftime().\nIt requires two arguments.  The first\nis a format string.  The format conventions are the same as those\nfor the POSIX function strftime().  The second argument is POSIX\ncalendar time in seconds as returned by\n.I datetime.\nIt returns a string based on the format given.  This gives the ability to\nextract information about a time, or format it for printing.\n.LP\n.IP \"logmsg tag message\" 6\nThis function calls the internal PBS function\n.I log_err().\nIt will cause a log message to be written to the scheduler's log file.  The \n.I tag\nspecifies a function name or other word used to identify the area\nwhere the message is generated.  The\n.I message\nis the string to be logged.\n.LP\n.SH \"SEE ALSO\"\npbs_tclsh(8B), pbs_wish(8B), pbs_mom(8B),\npbs_server(8B), pbs_sched(8B)\n"
  },
  {
    "path": "doc/man3/pbs_terminate.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_terminate 3B \"15 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_terminate \n\\- shut down a PBS batch server\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.nf\n.B int pbs_terminate(int connect, int manner, char *extend)\n.fi\n\n.SH DESCRIPTION\nIssues a batch request to shut down a batch server.  \n\nGenerates a \n.I Server Shutdown \n(17) batch request and sends it to the server over the connection specified by \n.I connect.\n\nThe \n.B pbs_terminate() \nfunction returns after the server has completed its shutdown procedure.\n\n.SH REQUIRED PRIVILEGE\nYou must have Operator or Manager privilege to use this function.\n\n.SH ARGUMENTS\n.IP connect 8\nReturn value of \n.B pbs_connect().  \nSpecifies connection handle over which to send batch request to server.\n\n.IP manner 8\nManner in which to shut down server.  The available manners are\ndefined in pbs_ifl.h.  Valid values: \n.I SHUT_IMMEDIATE, SHUT_DELAY, SHUT_QUICK.\nSee qterm(8B) for information on manner in which to shut down server.\n\n.IP extend 8\nCharacter string for extensions to command.  
Not currently used.\n\n.SH RETURN VALUE\nThe routine returns 0 (zero) on success.\n\nIf an error occurred, the routine returns a non-zero value, and\nthe error number is available in the global integer \n.I pbs_errno.\n\n.SH SEE ALSO\nqterm(8B), pbs_connect(3B)\n"
  },
  {
    "path": "doc/man3/rm.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH RM 3 \"6 March 2019\" Local \"PBS Professional\"\n.SH NAME\nopenrm, closerm, downrm, configrm, addreq, allreq, getreq, flushreq, activereq, fullresp \\- resource monitor API\n.SH SYNOPSIS\n\n.B #include <sys/types.h>\n.br\n.B #include <netinet/in.h>\n.br\n.B #include <rm.h>\n\n.LP\n.B\nint openrm (host, port)\n.br\n.RS 6\nchar *host;\n.br\nunsigned int port;\n.RE\n.LP\n.B int closerm (stream) \n.br\n.RS 6\nint stream;\n.RE\n.LP\n.B\nint downrm (stream)   \n.br\n.RS 6\nint stream;\n.RE\n.LP\n.B\nint configrm (stream, file)\n.br\n.RS 6\nint stream;\n.br\nchar *file;\n.RE\n.LP\n.B\nint addreq (stream, line)\n.br\n.RS 6\nint stream; \n.br\nchar *line;\n.RE\n.LP\n.B\nint allreq (line) \n.br\n.RS 6\nchar *line; \n.RE\n.LP\n.B\nchar *getreq(stream)  \n.br\n.RS 6\nint stream;\n.RE\n.LP\n.B\nint flushreq() \n.LP\n.B\nint activereq()\n.LP\n.B\nvoid fullresp(flag)  \n.br\n.RS 6\nint flag; \n.RE\n.SH DESCRIPTION\n.LP\nThe resource monitor library contains functions to facilitate\ncommunication with the PBS Professional resource monitor. 
It is set up\nto make it easy to connect to several resource monitors and\nhandle the network communication efficiently.\n.LP\nIn all these routines, the variable\n.B pbs_errno\nwill be set when an error is indicated.\nThe lower levels of network protocol are handled by the\n\"Data Is Strings\" \n.I DIS\nlibrary and the \"Reliable Packet Protocol\"\n.I RPP\nlibrary.\n\n.LP\n.B configrm() \ncauses the resource monitor to read the file named. \n.B Deprecated.\n\n.LP\n.B addreq()\nbegins a new message to the resource monitor if necessary.\nThen adds a line to the body of an outstanding command to the resource\nmonitor.\n\n.LP\n.B allreq()\nbegins, for each stream, a new message to the resource monitor if necessary.\nThen adds a line to the body of an outstanding command to the resource\nmonitor.\n\n.LP\n.B getreq()\nfinishes and sends any outstanding message to the resource monitor.\nIf\n.B fullresp()\nhas been called to turn off \"full response\" mode, the routine\nsearches down the line to find the equal sign just before the\nresponse value.\nThe returned string (if it is not NULL) has been allocated by\n.I malloc\nand thus\n.I free\nmust be called when it is no longer needed to prevent memory leaks.\n\n.LP\n.B flushreq()\nfinishes and sends any outstanding messages to all resource monitors.\nFor each active resource monitor structure, it checks if any\noutstanding data is waiting to be sent. If there is, it is sent and\nthe internal structure is marked to show \"waiting for response\".\n\n.LP\n.B fullresp()\nturns on, if flag is true, \"full response\" mode where\n.B getreq()\nreturns a pointer to the beginning of a line of response.\nThis is the default.  
If flag is false,\nthe line returned by\n.B getreq()\nis just the answer following the equal sign.\n\n.LP\n.B activereq()\nReturns the stream number of the next stream with something\nto read or a negative number (the return from\n.I rpp_poll )\nif there is no stream to read.\n\nIn order to use any of the above with Windows, initialize the network\nlibrary and link with \n.B winsock2.  \nTo do this, call \n.B winsock_init() \nbefore calling the function and link against the \n.B ws2_32.lib \nlibrary.\n\n.SH SEE ALSO\n.BR tcp (4P),\n.BR udp (4P)\n"
  },
  {
    "path": "doc/man3/tm.3",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH TM 3 \"24 February 2015\" Local \"PBS Professional\"\n.SH NAME\ntm_init, tm_nodeinfo, tm_poll, tm_notify, tm_spawn, tm_kill, tm_obit, tm_taskinfo, tm_atnode, tm_rescinfo, tm_publish, tm_subscribe, tm_finalize, tm_attach \\- task management API\n.SH SYNOPSIS\n.B\n#include <tm.h>\n.LP\n.B\nint tm_init(info, roots)\n.RS 6\nvoid \\(**info;\n.br\nstruct tm_roots \\(**roots;\n.RE\n.LP\n.B\nint tm_nodeinfo(list, nnodes)\n.RS 6\ntm_node_id \\(**\\(**list;\n.br\nint \\(**nnodes;\n.RE\n.LP\n.B\nint tm_poll(poll_event, result_event, wait, tm_errno)\n.RS 6\ntm_event_t poll_event;\n.br\ntm_event_t \\(**result_event;\n.br\nint wait;\n.br\nint \\(**tm_errno;\n.RE\n.LP\n.B\nint tm_notify(tm_signal)\n.RS 6\nint tm_signal;\n.RE\n.LP\n.B\nint tm_spawn(argc, argv, envp, where, tid, event)\n.RS 6\nint argc;\n.br\nchar \\(**\\(**argv;\n.br\nchar \\(**\\(**envp;\n.br\ntm_node_id where;\n.br\ntm_task_id \\(**tid;\n.br\ntm_event_t \\(**event;\n.RE\n.LP\n.B\nint tm_kill(tid, sig, event)\n.RS 6\ntm_task_id tid;\n.br\nint sig;\n.br\ntm_event_t \\(**event;\n.RE\n.LP\n.B\nint tm_obit(tid, obitval, event)\n.RS 6\ntm_task_id tid;\n.br\nint \\(**obitval;\n.br\ntm_event_t \\(**event;\n.RE\n.LP\n.B\nint tm_taskinfo(node, tid_list, list_size, ntasks, event)\n.RS 6\ntm_node_id node;\n.br\ntm_task_id \\(**tid_list;\n.br\nint 
list_size;\n.br\nint \\(**ntasks;\n.br\ntm_event_t \\(**event;\n.RE\n.LP\n.B\nint tm_atnode(tid, node)\n.RS 6\ntm_task_id tid;\n.br\ntm_node_id \\(**node;\n.RE\n.LP\n.B\nint tm_rescinfo(node, resource, len, event)\n.RS 6\ntm_node_id node;\n.br\nchar \\(**resource;\n.br\nint len;\n.br\ntm_event_t \\(**event;\n.RE\n.LP\n.B\nint tm_publish(name, info, len, event)\n.RS 6\nchar \\(**name;\n.br\nvoid \\(**info;\n.br\nint len;\n.br\ntm_event_t \\(**event;\n.RE\n.LP\n.B\nint tm_subscribe(tid, name, info, len, info_len, event)\n.RS 6\ntm_task_id tid;\n.br\nchar \\(**name;\n.br\nvoid \\(**info;\n.br\nint len;\n.br\nint \\(**info_len;\n.br\ntm_event_t \\(**event;\n.RE\n.LP\n.B\nint tm_attach(jobid, cookie, pid, tid, host, port)\n.RS 6\nchar \\(**jobid;\n.br\nchar \\(**cookie;\n.br\npid_t pid;\n.br\ntm_task_id \\(**tid;\n.br\nchar \\(**host;\n.br\nint port;\n.RE\n.LP\n.B\nint tm_finalize()\n\n.SH DESCRIPTION\n.LP\nThese functions provide a partial implementation of the task\nmanagement interface part of the PSCHED API.  In PBS, MOM\nprovides the task manager functions.  This library opens a\nTCP socket to the MOM running on the local host and sends\nand receives messages using the DIS protocol (described in\nthe PBS IDS).  The \n.B tm \ninterface can only be used by a process within a PBS job.\n.LP\nThe PSCHED Task Management API description used to create this\nlibrary was committed to paper on November 15, 1996 and was\ngiven the version number 0.1.  Changes may have taken place since\nthat time which are not reflected in this library.\n.LP\nThe API description uses several data types that it purposefully\ndoes not define.  This was done so an implementation would not be\nconfined in the way it was written.  
For this specific work,\nthe definitions follow:\n.sp\n.nf\ntypedef\tint\t\t\ttm_node_id;\t/* job-relative node id */\n#define\tTM_ERROR_NODE\t((tm_node_id)-1)\n\ntypedef\tint\t\t\ttm_event_t;\t/* > 0 for real events */\n#define\tTM_NULL_EVENT\t((tm_event_t)0)\n#define\tTM_ERROR_EVENT\t((tm_event_t)-1)\n\ntypedef\tunsigned long\ttm_task_id;\n#define\tTM_NULL_TASK\t(tm_task_id)0\n.fi\n.LP\nThere are a number of error values defined as well:\n.na\nTM_SUCCESS, TM_ESYSTEM, TM_ENOEVENT, TM_ENOTCONNECTED, TM_EUNKNOWNCMD,\nTM_ENOTIMPLEMENTED, TM_EBADENVIRONMENT, TM_ENOTFOUND.\n.ad\n.LP\n\n.B tm_init(\\|)\ninitializes the library by opening a socket to the MOM on the local\nhost and sending a TM_INIT message, then waiting for the reply.\nThe\n.IR info\nparameter has no use and is included to conform with the PSCHED\ndocument.  The\n.IR roots\npointer will contain valid data after the function returns and\nhas the following structure:\n.sp\n.nf\nstruct\ttm_roots {\n\ttm_task_id\ttm_me;\n\ttm_task_id\ttm_parent;\n\tint\t\ttm_nnodes;\n\tint\t\ttm_ntasks;\n\tint\t\ttm_taskpoolid;\n\ttm_task_id\t*tm_tasklist;\n};\n.fi\n.sp\n.IP tm_me 20\nThe task id of the calling task.\n.IP tm_parent 20\nThe task id of the task which spawned this task or TM_NULL_TASK if\nthe calling task is the initial task started by PBS.\n.IP tm_nnodes 20\nThe number of nodes allocated to the job.\n.IP tm_ntasks 20\nThis will always be 0 for PBS.\n.IP tm_taskpoolid 20\nPBS does not support task pools so this will always be -1.\n.IP tm_tasklist 20\nThis will be NULL for PBS.\n.LP\nThe\n.IR tm_ntasks ,\n.IR tm_taskpoolid\nand\n.IR tm_tasklist\nfields are not filled with data specified by the PSCHED document.  PBS does\nnot support task pools and, at this time, does not return information\nabout currently running tasks from\n.B tm_init.\nThere is a separate call to get information for currently running tasks called\n.B tm_taskinfo\nwhich is described below.  
The return value from\n.B tm_init\nis TM_SUCCESS if the library initialization was successful, or an error\nis returned otherwise.\n.LP\n.B tm_nodeinfo(\\|)\nplaces a pointer to a malloc'ed\narray of tm_node_id's in the pointer pointed at by\n.IR list .\nThe order of the tm_node_id's in\n.IR list\nis the same as that specified to MOM in the \"exec_host\" attribute.  The\nint pointed to by\n.IR nnodes\ncontains the number of nodes allocated to the job.\nThis is information that is returned during initialization and does\nnot require communication with MOM.  If\n.B tm_init\nhas not been called, TM_ESYSTEM is returned, otherwise TM_SUCCESS is\nreturned.\n.LP\n.B tm_poll(\\|)\nis the function which retrieves information from the task management\nsystem and places it in the locations specified when other routines\nrequested that an action take place.  The bookkeeping for this is done by generating an\n.IR event\nfor each action.  When the task manager (MOM) sends a message that an\naction is complete, the event is reported by\n.B tm_poll\nand information is placed where the caller requested it.\nThe argument\n.IR poll_event\nis meant to be used to request a specific event.  This implementation\ndoes not use it and it must be set to TM_NULL_EVENT or an error\nis returned.  Upon return, the argument\n.IR result_event\nwill contain a valid event number or TM_ERROR_EVENT on error.  If\n.IR wait\nis zero and there are no events to report,\n.IR result_event\nis set to TM_NULL_EVENT.  If\n.IR wait\nis non-zero and there are no events to report, the function will block\nwaiting for an event.  If no local error takes place, TM_SUCCESS is\nreturned.  If an error is reported by MOM for an event, then the argument\n.IR tm_errno\nwill be set to an error code.\n.LP\n.B tm_notify(\\|)\nis described in the PSCHED documentation, but is not implemented for\nPBS yet.  It will return TM_ENOTIMPLEMENTED.\n.LP\n.B tm_spawn(\\|)\nsends a message to MOM to start a new task.  
The node id of the\nhost to run the task is given by\n.IR where .\nThe parameters\n.IR argc ,\n.IR argv\nand\n.IR envp\nspecify the program to run and its arguments and environment very\nmuch like\n.B exec(\\|).\nThe full path of the program executable must be given by\n.IR argv[0]\nand the number of elements in the argv array is given by\n.IR argc .\nThe array\n.IR envp\nis NULL terminated.  The argument\n.IR event\npoints to a tm_event_t variable which is filled in with an event\nnumber.  When this event is returned by\n.B tm_poll ,\nthe tm_task_id pointed to by\n.IR tid\nwill contain the task id of the newly created task.\n.LP\n.B tm_kill(\\|)\nsends a signal specified by\n.IR sig\nto the task\n.IR tid\nand puts an event number in the tm_event_t pointed to by\n.IR event .\n.LP\n.B tm_obit(\\|)\ncreates an event which will be reported when the task\n.IR tid\nexits.  The int pointed to by\n.IR obitval\nwill contain the exit value of the task when the event is reported.\n.LP\n.B tm_taskinfo(\\|)\nreturns the list of tasks running on the node specified by\n.IR node .\nThe PSCHED documentation mentions a special ability to retrieve\nall tasks running in the job.  This is not supported by PBS.\nThe argument\n.IR tid_list\npoints to an array of tm_task_id's which contains\n.IR list_size\nelements.  Upon return,\n.IR event\nwill contain an event number.  When this event is polled, the int\npointed to by\n.IR ntasks\nwill contain the number of tasks running on the node and the array\nwill be filled in with tm_task_id's.  If\n.IR ntasks\nis greater than\n.IR list_size ,\nonly\n.IR list_size\ntasks will be returned.\n.LP\n.B tm_atnode(\\|)\nwill place the node id where the task\n.IR tid\nexists in the tm_node_id pointed to by\n.IR node .\n.LP\n.B tm_rescinfo(\\|)\nmakes a request for a string specifying the resources available on\na node given by the argument\n.IR node .  
\nThe string is returned in the buffer pointed to by\n.IR resource\nand is terminated by a NUL character unless the number of characters\nof information is greater than specified by\n.IR len .\nThe resource string PBS returns is formatted as follows:\n.sp\nA space separated set of strings from the\n.B uname\nsystem call.  The order of the strings is \n.B sysname,\n.B nodename,\n.B release,\n.B version,\n.B machine.\n.sp\nA comma separated set of strings giving the components of the\n\"Resource_List\" attribute of the job, preceded by a colon (:).\nEach component has the\nresource name, an equal sign, and the limit value.\n.LP\n.B tm_publish(\\|)\ncauses\n.IR len\nbytes of information pointed at by\n.IR info\nto be sent to the local MOM to be saved under the name given by\n.IR name .\n.LP\n.B tm_subscribe(\\|)\nreturns a copy of the information named by\n.IR name\nfor the task given by\n.IR tid .\nThe argument\n.IR info\npoints to a buffer of size\n.IR len\nwhere the information will be returned.  The argument\n.IR info_len\nwill be set with the size of the published data.  If this is larger\nthan the supplied buffer, the data will have been truncated.\n.LP\n.B tm_attach(\\|)\ncommands MOM to create a new PBS \"attached task\" out of a session running on MOM's host.\nThe \n.IR jobid\nparameter specifies the job which is to have a new task attached.  If it is NULL, the system \nwill try to determine the correct \n.IR jobid.\nThe \n.IR cookie\nparameter must be NULL.  The \n.IR pid\nparameter must be a non-zero process id for the process which is to be \nadded to the job specified by \n.IR jobid.\nIf \n.IR tid\nis non-NULL, it will be used to store the task id of the new task.  The \n.IR host\nand \n.IR port \nparameters specify where to contact MOM.  \n.IR host\nshould be NULL.  The return value will be 0 if a new \ntask has been successfully\ncreated and non-zero on error.  
The return value will be one of the \nTM error numbers defined in \n.B tm.h\nas follows:\n   TM_ESYSTEM          MOM cannot be contacted\n   TM_ENOTFOUND        No matching job was found\n   TM_ENOTIMPLEMENTED  The call is not implemented/supported\n   TM_ESESSION         The session specified is already attached\n   TM_EUSER            The calling user is not permitted to attach\n   TM_EOWNER           The process owner does not match the job\n   TM_ENOPROC          The process does not exist\n.LP\n.B tm_finalize(\\|)\nmay be called to free any memory in use by the library and close\nthe connection to MOM.\n.SH SEE ALSO\npbs_mom(8B),\npbs_sched(8B)\n\n\n"
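As a usage sketch, the calls above combine as follows to spawn one task on the job's first node and wait for it to exit. This is illustrative only: it can compile and run only inside a PBS job on a host with a PBS installation (it needs tm.h and the PBS TM library at link time), and event matching and error handling are abbreviated.

```c
/*
 * Usage sketch only.  Assumes a PBS installation (tm.h and the PBS
 * TM library) and must run from within a PBS job; a careful caller
 * would also check each return value and compare the event returned
 * by tm_poll against spawn_ev and obit_ev.
 */
#include <stdio.h>
#include <tm.h>

int main(void)
{
	struct tm_roots roots;
	tm_node_id *nodes;
	tm_task_id tid;
	tm_event_t spawn_ev, obit_ev, ev;
	int nnodes, tm_errno, obitval;
	char *task_argv[] = { "/bin/hostname", NULL };
	char *task_envp[] = { NULL };   /* envp must be NULL terminated */

	if (tm_init(NULL, &roots) != TM_SUCCESS)
		return 1;
	if (tm_nodeinfo(&nodes, &nnodes) != TM_SUCCESS)
		return 1;

	/* Ask MOM to start /bin/hostname on the job's first node. */
	tm_spawn(1, task_argv, task_envp, nodes[0], &tid, &spawn_ev);

	/* Block until the spawn event is reported; tid is then valid. */
	tm_poll(TM_NULL_EVENT, &ev, 1, &tm_errno);

	/* Register interest in the task's exit, then wait for it. */
	tm_obit(tid, &obitval, &obit_ev);
	tm_poll(TM_NULL_EVENT, &ev, 1, &tm_errno);
	printf("task %lu exited with value %d\n",
	       (unsigned long)tid, obitval);

	tm_finalize();
	return 0;
}
```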
  },
  {
    "path": "doc/man8/mpiexec.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH mpiexec 8B \"26 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B mpiexec \n\\- run MPI programs under PBS on Linux\n\n.SH SYNOPSIS\n.B mpiexec \n\n.B mpiexec \n--version\n\n.SH DESCRIPTION\nThe PBS \n.B mpiexec\ncommand provides the standard mpiexec interface on a system\nrunning supported versions of Performance Suite.  \nIf executed on a different system, it will assume\nit was invoked by mistake.  In this case it will use the value of\n.B PBS_O_PATH\nto search for the correct mpiexec.  If one is found, the PBS \n.B mpiexec \nwill\nexec it.\n\nThe PBS\n.B mpiexec\ncalls the HPE mpirun(1).  \n\nIt is transparent to the user; MPI jobs submitted outside of PBS will\nrun as they would normally.  MPI jobs can be launched across multiple\nHPE machines.  PBS will manage, track, and cleanly terminate multi-host MPI\njobs.  PBS users can run MPI jobs within specific partitions.\n\nIf CSA has been configured and enabled, PBS will collect accounting\ninformation on all tasks launched by an MPI job.  
CSA information will\nbe associated with the PBS job ID that invoked it, on each execution\nhost.\n\nIf the \n.B PBS_MPI_DEBUG \nenvironment variable's value has a nonzero\nlength, PBS will write debugging information to standard output.\n\n.SH USAGE \nThe PBS\n.B mpiexec \ncommand presents the mpiexec interface described in section \n\"4.1 Portable MPI Process Startup\" of the \"MPI-2: Extensions \nto the Message-Passing Interface\" document in \nhttp://www.mpi-forum.org/docs/mpi-20-html/node42.htm\n\n.SH OPTIONS\n.IP \"--version\" 8\nThe \n.B mpiexec\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH REQUIREMENTS\nSystem running a supported version of Performance Suite.\n\nPBS uses HPE's mpirun(1) command to launch MPI jobs.  HPE's mpirun\nmust be in the standard location.\n\nThe location of \n.B pbs_attach() \non each node of a multinode MPI job\nmust be the same as it is on the mother superior node.\n\nIn order to run multihost jobs, the HPE Array Services must be\ncorrectly configured.  HPE systems communicating via HPE's Array\nServices must all use the same version of the sgi-arraysvcs package.\nHPE systems communicating via HPE's Array Services must have been\nconfigured to interoperate with each other using the default array.\nSee HPE's array_services(5) man page.\n\n.SH ENVIRONMENT VARIABLES\nThe PBS \n.B mpiexec \nchecks the \n.B PBS_MPI_DEBUG \nenvironment variable.  
If\nthis variable has a nonzero length, debugging information is written.\n\nThe \n.B PBS_ENVIRONMENT \nenvironment variable is used to determine whether\n.B mpiexec \nis being called from within a PBS job.\n\nThe PBS \n.B mpiexec \nuses the value of \n.B PBS_O_PATH \nto search for the correct\n.B mpiexec \nif it was invoked by mistake.\n\n.SH PATH\nPBS' \n.B mpiexec \nis located in \n.I PBS_EXEC/bin/mpiexec.\n\n\n.SH SEE ALSO\nHPE man pages: \nHPE's mpirun(1), \nHPE's mpiexec_mpt(1),\nHPE's array_services(5)\n.LP\nPBS man pages:\npbs_attach(8B)\n"
  },
  {
    "path": "doc/man8/pbs.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs \n\\- Start, stop, restart, or get the PIDs of PBS daemons\n\n\n.SH SYNOPSIS\n.B pbs \n[start|stop|restart|status] \n\n\n.SH DESCRIPTION\nThe\n.B pbs\ncommand starts, stops or restarts all PBS daemons on the local machine, or\nreports the PIDs of all daemons when given the \n.I status\nargument.  Does not affect other hosts.\n\nYou can start, stop, restart, or status the PBS daemons using the \n.B systemctl\ncommand.\n\n.B Caveats\n.br\nThis command operates only on daemons that are marked as active\nin pbs.conf.  For example, if PBS_START_MOM is set to 0 in the local\npbs.conf, this command will not operate on pbs_mom, and will not\nstart, stop, or restart pbs_mom.\n\nThis command is typically placed in /etc/init.d so that \nPBS starts up automatically.  \n\n.B Required Privilege\n.br\nRoot privilege is required to use this command.\n\n.SH ARGUMENTS\n\n.IP \"restart\" 10\nAll daemons on the local machine are stopped, then they are restarted.\nPBS reports the name of the license server and the \nnumber and type of licenses available.\n\n.IP \"start\" 10\nEach daemon on the local machine is started.  PBS reports the number\nand type of licenses available, as well as the name of the license\nserver.  
Any running jobs are killed.\n\n.IP \"status\" 10\nPBS reports the PID of each daemon on the local machine.\n\n.IP \"stop\" 10\nEach daemon on the local machine is stopped, and its PID is reported.\n\n\n.SH SEE ALSO\npbs_mom(8B), pbs_server(8B), pbs_sched(8B), and pbs_comm(8B)\n"
  },
  {
    "path": "doc/man8/pbs.conf.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs.conf 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs.conf\n- PBS configuration parameters\n\n.SH DESCRIPTION\nThe \n.I /etc/pbs.conf \nfile contains PBS configuration parameters.  You may also set these using\nenvironment variables.  Environment variables override settings in \n.I pbs.conf.\n\nTo specify an alternate location for \n.I pbs.conf, \nyou may set \n.I PBS_CONF_FILE in your environment.\n\n.SH CONFIGURATION PARAMETERS\n\n.IP PBS_AUTH_METHOD \nAuthentication method to be used by PBS. Allowed values are\n\"munge\" and \"resvport\" (case-insensitive). Defaults to \"resvport\".\n\n.IP PBS_AUTH_SERVICE_USERS\nList of daemon users allowed to connect to the communication daemon.\nDefaults to \"root\". \n\n.IP PBS_BATCH_SERVICE_PORT  \nPort on which server listens.  Default: 15001\n\n.IP PBS_BATCH_SERVICE_PORT_DIS      \nDIS port on which server listens.\n\n.IP PBS_COMM_LOG_EVENTS     \nCommunication daemon log mask.  \n.br\nDefault: 511\n\n.IP PBS_COMM_ROUTERS        \nTells a \n.I pbs_comm \nthe location of the other \n.I pbs_comms.\n\n.IP PBS_COMM_THREADS        \nNumber of threads for communication daemon.\n\n.IP PBS_CONF_REMOTE_VIEWER  \nSpecifies remote viewer client.  If not specified, PBS uses native\nRemote Desktop client for remote viewer.  
Set on submission host(s).\nSupported on Windows only.\n\n.IP PBS_CORE_LIMIT  \nLimit on corefile size for PBS daemons.  Can be set to an integer\nnumber of bytes or to the string \"unlimited\".\n\n.IP PBS_DATA_SERVICE_PORT   \nUsed to specify non-default port for connecting to data service.  Default: 15007\n\n.IP PBS_ENVIRONMENT \nLocation of pbs_environment file.\n\n.IP PBS_EXEC        \nLocation of PBS \n.I bin \nand \n.I sbin \ndirectories.\n\n.IP PBS_HOME        \nLocation of PBS working directories.\n\n.IP PBS_LEAF_NAME   \nTells endpoint what hostname to use for network.\n\n.IP PBS_LEAF_ROUTERS        \nLocation of endpoint's \n.I pbs_comm \ndaemon(s).\n\n.IP PBS_LOCALLOG    \nEnables logging to local PBS log files.\n\n.IP PBS_MAIL_HOST_NAME      \nUsed in addressing mail regarding jobs and reservations that is sent\nto users specified in a job or reservation's Mail_Users attribute.\nOptional.  Must be a fully qualified domain name.  Cannot contain a\ncolon (\":\").  \n\n.IP PBS_MANAGER_SERVICE_PORT        \nPort on which MoM listens.  Default: 15003\n\n.IP PBS_MOM_HOME    \nLocation of MoM working directories.\n\n.IP PBS_MOM_SERVICE_PORT    \nPort on which MoM listens. Default: 15002\n\n.IP PBS_OUTPUT_HOST_NAME \nHost to which all job standard output and\nstandard error are delivered.  If specified in \n.I pbs.conf \non a job\nsubmission host, the value of PBS_OUTPUT_HOST_NAME is used in the host\nportion of the job's \n.I Output_Path \nand \n.I Error_Path \nattributes.  If the\njob submitter does not specify paths for standard output and standard\nerror, the current working directory for the \n.I qsub \ncommand is used, and\nthe value of PBS_OUTPUT_HOST_NAME is appended after an at sign (\"@\").\nIf the job submitter specifies only a file path for standard output\nand standard error, the value of PBS_OUTPUT_HOST_NAME is appended\nafter an at sign (\"@\").  
If the job submitter specifies paths for\nstandard output and standard error that include host names, the\nspecified paths are used.  Optional.  Must be a fully qualified domain\nname.  Cannot contain a colon (\":\").  \n\n.IP PBS_PRIMARY     \nHostname of primary server.  Used only for failover configuration.  \nOverrides PBS_SERVER_HOST_NAME.\n\n.IP PBS_INTERACTIVE_AUTH_METHOD\nAuthentication method to be used when establishing a qsub interactive session.\nSupported methods are \"resvport\" and \"munge\" (case-insensitive). Defaults to \"resvport\".\n\n.IP PBS_CP \nLocation of local copy command. Default is cp on Linux systems and xcopy on Windows.\n\n.IP PBS_RCP\nLocation of rcp command if rcp is used.\n\n.IP PBS_SCHEDULER_SERVICE_PORT      \nPort on which default scheduler listens.  Default value: 15004\n\n.IP PBS_SCP \nLocation of scp command if scp is used; setting this parameter causes\nPBS to first try scp rather than rcp for file transport.\n\n.IP PBS_SCP_ARGS\nArguments for the scp command if scp is used; if not set, defaults will be used.\n\n.IP PBS_SECONDARY   \nHostname of secondary server.  Used only for failover configuration.  \nOverrides PBS_SERVER_HOST_NAME.\n\n.IP PBS_SERVER \nHostname of host running the server.  \nIf the short name of the server host resolves to the\ncorrect IP address, you can use the short name for the value of the\nPBS_SERVER entry in pbs.conf.  If only the FQDN of the server host\nresolves to the correct IP address, you must use the FQDN for the\nvalue of PBS_SERVER.  Overridden by PBS_SERVER_HOST_NAME and\nPBS_PRIMARY.  Cannot be longer than 255 characters.  \n\n.IP PBS_SERVER_HOST_NAME\nThe FQDN of the server host.  Used by clients to contact server.\nOverridden by PBS_PRIMARY and PBS_SECONDARY failover parameters.\nOverrides PBS_SERVER parameter.  Optional.  Must be a fully qualified\ndomain name.  Cannot contain a colon (\":\").  \n\n.IP PBS_SMTP_SERVER_NAME    \nName of SMTP server PBS will use to send mail.  
Should be a fully\nqualified domain name.  Cannot contain a colon (\":\").  \n\n.IP PBS_START_COMM  \nSet to 1 if a communication daemon is to run on this host.\n\n.IP PBS_START_MOM   \nSet to 1 if a MoM is to run on this host.\n\n.IP PBS_START_SCHED \n.B Deprecated. \nSet to 1 if the scheduler is to run on this host.  Overridden by \nscheduler's \n.I scheduling\nattribute.\n\n.IP PBS_START_SERVER        \nSet to 1 if the server is to run on this host.\n\n.IP PBS_SYSLOG      \nControls use of syslog facility.\n\n.IP PBS_SYSLOGSEVR  \nFilters syslog messages by severity.\n\n.IP PBS_TMPDIR      \nRoot directory for temporary files for PBS components.\n\n\n"
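To illustrate how these parameters combine, a minimal /etc/pbs.conf for a single host running the server, scheduler, communication daemon, and MoM might look like the following; the hostname and paths are examples only, and every parameter used is described above.

```conf
PBS_SERVER=headnode.example.com
PBS_EXEC=/opt/pbs
PBS_HOME=/var/spool/pbs
PBS_START_SERVER=1
PBS_START_SCHED=1
PBS_START_COMM=1
PBS_START_MOM=1
PBS_CORE_LIMIT=unlimited
PBS_SCP=/usr/bin/scp
```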
  },
  {
    "path": "doc/man8/pbs_account.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_account 8B \"18 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_account \nFor Windows.  
Manage PBS service account\n.SH SYNOPSIS\n.B pbs_account \n[-a <PBS service account name>] [-c [<password>]] [--ci] \n.RS 12\n[--instid <instance ID>] [-o <output path>] \n.br\n[-p [<password>]] [--reg <service path>] [-s] \n.br\n[--unreg <service path>]\n.RE\n\n.SH DESCRIPTION\nThe\n.B pbs_account\ncommand is used to manage the PBS service account. It is used to\ncreate the account, set or validate the account password, add\nprivileges to the account, and register or unregister the account with\nthe SCM.\n\n.SH Permissions\nThis command can be run by administrators only.\n\n.SH Platforms\nThis command is available on Windows only.\n\n.SH Caveats\nUsing \n.B pbs_account --unreg\nand\n.B pbs_account --reg \nstops and restarts MoM, which can kill jobs.\n\n.SH OPTIONS\n.IP \"-a <account name>\" 15\nSpecifies the service account name.\n\n.IP \"-c [<password>]\" 15  \nIf the specified account does not exist, creates the account with the password.\n\nIf the specified account exists, validates the password against it.\n\nGives necessary privileges to the specified account: \n.I Create Token Object, Replace Process Level Token, Log on as a Service, \nand \n.I Act as Part of the Operating System\n\nIf the password is not specified, the user is prompted for a password.\n\n.IP \"--ci\" 15\nInformational only.  Prints the actions taken by pbs_account while\ncreating the PBS service account, as the operations are performed.\n\n.IP \"--instid <instance ID>\" 15\nSpecifies the instance ID when registering or unregistering multiple\ninstances of a service.  Example:\n.RS 18\npbs_account --reg \"C:\\\\Program Files (x86)\\\\PBS Pro_2\\\\exec\\\\sbin\\\\pbs_mom\" --instid 2 -a <username> -p <password>\n.br\npbs_account --unreg \"C:\\\\Program Files (x86)\\\\PBS Pro_2\\\\exec\\\\sbin\\\\pbs_mom\" --instid 2\n\n.RE\n\n.IP \"-o <output path>\" 15   \nWrites stdout and stderr messages to the specified output path.\n\n.IP \"-p [<password>]\" 15\nUpdates the PBS service account password. 
If no password is specified,\nthe user is prompted for a password.\n\n.IP \"--reg <path to service>\" 15\nRegisters the PBS service with the SCM, instructing it to run the services \nunder the PBS service account.  \n.I path to service\nmust be in double quotes.  Restarts MoM. \n\n.IP \"-s\" 15\nAdds necessary privileges to the PBS service account. Grants the\n\"Create Token Object\", \"Replace Process Level Token\", \"Log On as a\nService\", and \"Act as Part of the Operating System\" privileges to the PBS\nservice account.\n\n.IP \"--unreg <path to service>\" 15\nUnregisters the PBS service from the SCM.  \n.I path to service\nmust be in double quotes.  Stops MoM.\n\n.IP \"(no options)\" 15\nPrints the name of the PBS service account, if it exists.  Exit value is 0.\n\n\n.SH Examples\n\nTo create the PBS service account:\n.RS 4\npbs_account -c -s -p <password>\n.RE\nTo change the PBS service account:\n.RS 4\npbs_account [--reg <service path>] -a <PBS service account name>\n.RE\nTo register the MoM service:\n.RS 4\n.nf\npbs_account --reg \"\\\\Program Files\\\\PBS Pro\\\\exec\\\\sbin\\\\pbs_mom.exe\" -p <password>\n.fi\n.RE\n\n.SH Exit Value\n\n.IP \"Zero\" 15\nUpon success\n"
  },
  {
    "path": "doc/man8/pbs_attach.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_attach 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_attach \n\\- attach a session ID to a PBS job\n\n\n.SH SYNOPSIS\n.B Linux\n.br\n.B pbs_attach \n[-j <job ID>] [-m <port>] -p <PID>\n.br\n.B pbs_attach \n[-j <job ID>] [-m <port>] [-P] [-s] <cmd> [<arg> ...] \n.br\n.B pbs_attach \n--version\n\n.B Windows\n.br\n.B pbs_attach \n[-c <path to script>] [-j <job ID>] [-m <port>] -p <PID>\n.br\n.B pbs_attach\n[-c <path to script>] [-j <job ID>] [-m <port>] [-P] [-s] \n.RS 11\n<cmd> [<arg> ...] \n.RE\n.br\n.B pbs_attach \n--version\n\n.SH DESCRIPTION\nThe\n.B pbs_attach\ncommand associates the processes in a session with \na PBS job by attaching the session ID to the job.  This allows PBS MoM\nto monitor and control those processes.\n\nMoM uses process IDs to determine session IDs, which are\nput into MoM's task list (attached to the job.)  All process IDs in \na session are then associated with the job.\n\nWhen a command\n.I cmd\nis given as an operand, \nthe \n.B pbs_attach\nprocess becomes the parent process of \n.I cmd, \nand the session ID of \n.B pbs_attach\nis attached to the job.  \n\n\n\n.SH OPTIONS\n.IP \"-c <path to script>\" 15\nWindows only.  
Specified command is invoked using a new command shell.\nIn order to spawn and attach built-in DOS commands such as set or\necho, it is necessary to open the task using a \n.I cmd \nshell.  The new\ncommand shell, \n.I cmd.exe\n, is attached as a task to the PBS job.  The\n.B pbs_attach \ncommand spawns a program using a new command shell when\nattaching a batch script, or when invoked with the \n.I -c \noption.\n.IP \"-j <job ID>\" 15\nThe job ID to which the session ID is to be attached.\nIf \n.I job ID\nis not specified,\na best effort is made to determine the job to which to attach\nthe session.\n.IP \"-m <port>\" 15\nThe port at which to contact MoM.  Default: value of PBS_MANAGER_SERVICE_PORT\nfrom pbs.conf\n.IP \"-p <PID>\" 15\nProcess ID whose session ID is to be attached to the job.  \nDefault: process ID of pbs_attach.\nCannot be used with the \n.I -P \nor\n.I -s\noptions or the \n.I cmd\noperand.\n\n.IP \"-P\" 15\nAttach sessions of both \n.B pbs_attach \nand the parent of \n.B pbs_attach \nto the job.  \nWhen used with the \n.I -s\noption, \nthe sessions of the new\n.B fork()\ned\n.B pbs_attach\nand its parent, which is \n.B pbs_attach,\nare attached to the job.  Cannot be used with the \n.I -p\noption.\n\n.IP \"-s\" 15\nStarts a new session and attaches it to the job: \n.B pbs_attach\ncalls \n.B fork(), \nthen the child\n.B pbs_attach\nfirst calls \n.B setsid() \nand then calls \n.B tm_attach\nto attach the new session to the job.  
The session ID of the new\n.B pbs_attach\nis attached to the job.\n\n.IP \"--version\" 15\nThe \n.B pbs_attach\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH OPERANDS\n.IP \"cmd\" 15 \nName of command whose process ID is to be associated with the job.\n\n\n.SH EXIT STATUS\n.IP \"0\" 15 \nSuccess\n.IP \"1\" 15\nAny error following successful command line processing.\nA message is printed to standard error.\n.LP\nIf \n.I cmd\nis specified, \n.B pbs_attach \nwaits for\n.I cmd\nto exit, then exits with the exit value of\n.I cmd.\n.LP\nIf \n.I cmd\nis not specified, \n.B pbs_attach\nexits after attaching the session ID(s) to the job.\n\n\n.SH SEE ALSO\npbs_mom(8B), pbs_tmrsh(8B), setsid(2), tm(3)\n"
  },
  {
    "path": "doc/man8/pbs_comm.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_comm 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_comm \n- start the PBS communication daemon\n.SH SYNOPSIS\n.B pbs_comm \n[-N] [-r <other routers>] [-t <number of threads>]\n.br\n.B pbs_comm\n--version\n.br\n\n.SH DESCRIPTION\nThe PBS communication daemon, \n.B pbs_comm, \nhandles communication between daemons, except for scheduler-server and\nserver-server communication, which uses TCP.  The server, scheduler,\nand MoMs are connected by one or more pbs_comm daemons.\n\nAvailable on Linux only.\n\n.SH OPTIONS\n.IP \"-N\" 10\nRuns the communication daemon in standalone mode.\n\n.IP \"-r <other routers>\" 10\nList of other \n.B pbs_comm \ndaemons to which this\n.B pbs_comm \nmust connect. This is equivalent to the pbs.conf variable\n.I PBS_COMM_ROUTERS. \nThe command line overrides the variable.  \n.br\nFormat: <host>[:<port>][,<host>[:<port>]]\n\n.IP \"-t\" 10\nNumber of threads the \n.B pbs_comm \ndaemon uses.  This\nis equivalent to the pbs.conf variable \n.I PBS_COMM_THREADS.  \nThe command\nline overrides the variable.  \n.br\nFormat: Integer\n\n.IP \"--version\" 10\nPrints the PBS version and exits.  This option can only be used alone.\n\n.SH CONFIGURATION PARAMETERS\n\n.IP \"PBS_LEAF_ROUTERS\" 10\nParameter in /etc/pbs.conf.  
Tells an endpoint where to find its\ncommunication daemon.  You can tell each endpoint which communication\ndaemon it should talk to.  Specifying the port is optional.\n.br\nFormat: PBS_LEAF_ROUTERS=<host>[:<port>][,<host>[:<port>]]\n\n.IP \"PBS_COMM_ROUTERS\" 10\nParameter in /etc/pbs.conf.  Tells a pbs_comm where to find its fellow\ncommunication daemons.  When you add a communication daemon, you must\ntell it about the other pbs_comms in the complex.  When you inform\ncommunication daemons about each other, you only tell one of each pair\nabout the other.  Do not tell both about each other.  An easy way to\ndo this is to tell each new pbs_comm about each existing pbs_comm, and\nleave it at that.\n.br\nFormat: PBS_COMM_ROUTERS=<host>[:<port>][,<host>[:<port>]]\n\n.IP \"PBS_COMM_THREADS\" 10\nParameter in /etc/pbs.conf.  Tells pbs_comm how many threads to start.\nBy default, each pbs_comm process starts four threads.  You can\nconfigure the number of threads that each pbs_comm uses.  Usually, you\nwant no more threads than the number of processors on the host.\n.br\nMaximum allowed value: 100\n.br\nFormat: Integer\n.br\nExample: PBS_COMM_THREADS=8\n\n.IP \"PBS_COMM_LOG_EVENTS\" 10\nParameter in /etc/pbs.conf.  Tells pbs_comm which log mask to use.  By\ndefault, pbs_comm produces few log messages.  You can choose more\nlogging, usually for troubleshooting.\n.br\nFormat: Integer\n.br\nDefault: 511\n.br\nExample: PBS_COMM_LOG_EVENTS=<log level>\n\n.IP \"PBS_LEAF_NAME\" 10\nParameter in /etc/pbs.conf.  Tells the endpoint what name to use on the\nnetwork.  The value does not include a port, since that is usually set\nby the daemon.  By default, the name of the endpoint's host is the\nhostname of the machine.  You can set the name that an endpoint uses.\nThis is useful when you have multiple networks configured, and you\nwant PBS to use a particular network.  
TPP internally resolves the\nname to a set of IP addresses, so you do not affect how pbs_comm\nworks.\n.br\nFormat: String\n.br\nExample: PBS_LEAF_NAME=host1\n\n.IP \"PBS_START_COMM\" 10\nParameter in /etc/pbs.conf.  Tells PBS init script whether to start a\npbs_comm on this host if one is installed.  When set to 1, pbs_comm is\nstarted.  Just as with the other PBS daemons, you can specify whether\neach host should start pbs_comm.\n.br\nFormat: Boolean\n.br\nDefault: 0\n.br\nExample: PBS_START_COMM=1\n\n.SH COMMUNICATION DAEMON LOGFILES\n\nThe pbs_comm daemon creates its log files under $PBS_HOME/comm_logs.\nThis directory is automatically created by the PBS installer.  \n\nIn a\nfailover configuration, this directory is in the shared PBS_HOME,\nand is used by the pbs_comm daemons running on both the primary and\nsecondary servers.  This directory must never be shared across\nmultiple pbs_comm daemons in any other case.\n\nThe log filename format is yyyymmdd (the same as for other PBS\ndaemons).  \n\nThe log record format is the same as used by other pbs\ndaemons, with the addition of the thread number and the daemon name in\nthe log record. The log record format is as follows:\n.br\n<date and time>;<event code>;<daemon name>(<thread number>);<object\ntype>;<object name>;<message>\n\nExample: \n.br\n03/25/2014\n15:13:39;0d86;host1.example.com;TPP;host1.example.com(Thread 2);\nConnection from leaf 192.168.184.156:19331, tfd=81 down\n\n.SH SIGNAL HANDLING\nThe \n.B pbs_comm\ndaemon handles the following signals:\n\n.IP \"HUP\" 10\nRe-reads the value of\n.I PBS_COMM_LOG_EVENTS\nfrom pbs.conf.\n\n.IP \"TERM\" 10\nThe \n.B pbs_comm\ndaemon exits.\n\n"
  },
  {
    "path": "doc/man8/pbs_dataservice.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_dataservice 8B \"19 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_dataservice \n\\- start, stop, or check the status of the PBS data service\n.SH SYNOPSIS\n.B pbs_dataservice \n[start|stop|status]\n\n.SH DESCRIPTION\nThe\n.B pbs_dataservice\ncommand starts, stops, or gets the status of the PBS data service.\n\n.SH Permission\nRoot privilege is required to use this command. \n\n.SH Arguments\n.IP \"start\" 15\nStarts the PBS data service.\n\n.IP \"stop\" 15  \nStops the PBS data service.\n\nCan be used only when the PBS server is not running.\n\n.IP \"status\" 15   \nDisplays the status of the PBS data service, as follows:\n\nData service running\n.RS 20\n\"PBS Data Service running\"\n.RE\n\n.IP \" \" 15 \nData service not running\n.RS 20\n\"PBS Data Service not running\"\n.RE\n\n.SH EXIT STATUS\n.IP \"Zero\" 15\nSuccess\n.IP \"Non-zero\" 15\nFailure\n"
  },
  {
    "path": "doc/man8/pbs_ds_password.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_ds_password 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_ds_password \n\\- set or change data service user account or its password\n.SH SYNOPSIS\n.B pbs_ds_password \n[-C <username>] [-r]\n\n.SH DESCRIPTION\nYou can use this command to change the user account or account password \nfor the data service.  \n\n.B Passwords\n.br\nBlank passwords are not allowed.\n\nIf you type in a password, make sure it does not contain restricted\ncharacters.  The \n.B pbs_ds_password\ncommand generates passwords\ncontaining the following characters:\n\n0123456789abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ!@#$%^&*()_+\n\nWhen creating a password manually, do not use \\\\ (backslash) or `\n(backquote). This can prevent certain commands such as pbs_server,\npbs_ds_password, and printjob from functioning properly, as they rely\non connecting to the database.  \n\n.B Permissions\nOn Linux, root privilege is required to use this command. On Windows, \nAdmin privilege is required.\n\n.B Restrictions\nDo not run this command if failover is configured. 
It is important \nnot to inadvertently start two separate instances of the data service \non two machines, thus potentially corrupting the database.\nIf failover is configured, stop the secondary server, remove\ndefinitions for PBS_PRIMARY and PBS_SECONDARY from pbs.conf on the\nprimary server host, start PBS, run pbs_ds_password, stop PBS, replace\nthe definitions, and start PBS again.\n\n.SH OPTIONS\n\n.IP \"-C <username>\" 15  \nChanges the user account for the data service to the specified account.\nThe specified user account must already exist. \n.br\n\nOn Linux-based systems, the specified user account must not be root.\n.br\n\nOn Windows, the specified user account must match the PBS service\naccount (which can be any user account).\n.br\n\nThis option cannot be used while the data service is running. \n.br\n\nCan be used with the \n.I -r\noption to automatically generate a password\nfor the new account.\n\n.IP \"-r\" 15\nGenerates a random password. The data service is updated with the new\npassword. \n\nCan be used with the \n.I -C\noption.\n\n.IP \"(no options)\" 15\nAsks the user to enter a new password twice. Entries must\nmatch. Updates the data service with the new password.\n\n.SH EXIT STATUS\n.IP \"Zero\" 15\nSuccess\n.IP \"Non-zero\" 15\nFailure\n"
  },
  {
    "path": "doc/man8/pbs_hostn.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_hostn 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_hostn \n- report hostname and network address(es)\n.SH SYNOPSIS\n.B pbs_hostn \n[ -v ] <hostname>\n.br\n.B pbs_hostn \n--version\n\n.SH DESCRIPTION\nThe\n.B pbs_hostn\ncommand takes a hostname, and reports the results of both the\ngethostbyname(3) and gethostbyaddr(3) library calls. 
Both forward and\nreverse lookup of hostname and network addresses need to succeed in order\nfor PBS to authenticate a host.\n.LP\nRunning this command can assist in\ntroubleshooting problems related to incorrect or non-standard network\nconfiguration, especially within clusters.\n.SH OPTIONS\n.IP \"-v\" 15\nTurns on verbose mode.\n.LP\n.IP \"--version\" 15\nThe \n.B pbs_hostn\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n\n.SH OPERANDS\n.IP \"hostname\" 15\nThe\n.B pbs_hostn\ncommand accepts a \n.I hostname\noperand either in short name form, or in fully qualified\ndomain name (FQDN) form.\n.SH STANDARD ERROR\nThe\n.B pbs_hostn\ncommand writes a diagnostic message to standard error for\neach error occurrence.\n\n.SH EXIT STATUS\n.IP \"Zero\" 15\nUpon successful processing of all the operands presented to the\n.B pbs_hostn\ncommand\n.LP\n.IP \"Greater than zero\" 15\nIf the\n.B pbs_hostn\ncommand fails to process any operand.\n\n.SH SEE ALSO\npbs_server(8B)\n"
  },
  {
    "path": "doc/man8/pbs_idled.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_idled 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_idled \n\\- run PBS daemon that monitors the console and informs pbs_mom of idle time\n.SH LINUX SYNOPSIS\n.B pbs_idled \n[-D <display>] [-r <reconnect delay>] [-w <wait time>] \n.br\n.B pbs_idled\n--version\n.SH WINDOWS SYNOPSIS\n.B pbs_idled \n[start | stop] \n.br\n.B pbs_idled\n--version\n\n.SH LINUX DESCRIPTION\nOn Linux, the \n.B pbs_idled \nprogram monitors an X windows display and communicates the idle time\nof the display back to PBS.  If the mouse is moved or a key is\ntouched, PBS is informed that the vnode is busy.\n.br\nYou should run this program from the system-wide Xsession file, in the\nbackground before the window manager is run.  If this program is run\noutside of the Xsession, it will need to be able to make a connection\nto the X display.  See the xhost or xauth man pages for a description\nof X security.\n\n.SH WINDOWS DESCRIPTION\nOn Windows, \n.B pbs_idled \nreads its polling interval from a file called\n.I idle_poll_time \nwhich is created by MoM.  The process monitors\nkeyboard, mouse, and console activity, and updates a file called\n.I idle_touch \nwhen it finds user activity.  
The \n.I idle_touch \nfile is created by MoM.\n\n.SH LINUX OPTIONS\n.IP \"-D <display>\" 10\nThe display to connect to and monitor\n.IP \"-r <reconnect delay>\" 10\nTime to wait before we try to reconnect to the X display if the previous \nattempt was unsuccessful\n.IP \"-w <wait time>\" 10\nInterval between times when the daemon checks for events or pointer movement\n.IP \"--version\" 10\nThe \n.B pbs_idled\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n.SH WINDOWS OPTIONS\n.IP \"start\" 10\nStarts the \n.B pbs_idled \nprocess.\n.IP \"stop\" 10\nStops the \n.B pbs_idled \nprocess.\n.IP \"--version\" 10\nThe \n.B pbs_idled \nprocess returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH SEE ALSO\npbs_mom(8B), xhost(1), xauth(1)\n"
  },
  {
    "path": "doc/man8/pbs_iff.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_iff 8B \"18 October 2017\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_iff\n\\- tests authentication with the server\n.SH SYNOPSIS\n.B pbs_iff \n[-t] <server host> <server port>\n\n.B pbs_iff \n--version\n.SH DESCRIPTION\nCalled from the pbs_connect() IFL API to authenticate a connection\nwith the PBS server.  It is designed to be called internally by PBS commands\nand components; the IFL layer uses it to talk to the server.\n\nIf \n.B pbs_iff\ncannot authenticate, it returns an error message.\n\n.B Required Privilege\n.br\nCan be run by any user.\n\nIt is a setuid root binary: it starts as the user who requests a\nconnection to a server, then becomes root so that it can bind a\nprivileged port.\n\n.SH OPTIONS TO pbs_iff\n.IP \"-t\" 10\nTest mode; tests whether \n.B pbs_iff \ncan authenticate with the server\n.IP \"--version\" 10\nReports version and exits; can only be used alone\n\n.SH ARGUMENTS TO pbs_iff\n.IP \"server host\" 10\nHost where the server is running\n.IP \"server port\" 10\nPort on which the server is listening; default is 15001\n\n.SH EXIT STATUS\n.IP \"Zero\" 10\nIf \n.B pbs_iff \nis able to contact the server at the specified port\n.IP \"Non-zero\" 10\nIf \n.B pbs_iff \nis unable to contact the server at the specified port\n"
  },
  {
    "path": "doc/man8/pbs_interactive.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_interactive 8B \"19 October 2017\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_interactive \n\\- For Windows. 
Register, unregister, or get the version of the PBS_INTERACTIVE service\n.SH SYNOPSIS\n.B pbs_interactive \n[R | U]\n.br\n.B pbs_interactive\n--version\n\n.SH DESCRIPTION\nThe\n.B pbs_interactive\ncommand registers, unregisters, or gets the version of the PBS_INTERACTIVE service.\nThe service must be registered manually; the installer does not register it.\n\nOn Windows, the PBS_INTERACTIVE service itself monitors logging in and out by users,\nstarts a pbs_idled process for each user logging in, and stops the pbs_idled process\nof each user logging out.\n\n.SH Permission\nAdmin privilege is required to use this command.\n\n.SH Arguments\n.IP \"R\" 15\nRegisters the PBS_INTERACTIVE service.\n\n.IP \"U\" 15  \nUnregisters the PBS_INTERACTIVE service.\n\n.IP \"--version\" 15   \nThe \n.B pbs_interactive\ncommand returns its version information and exits.\nThis option can only be used alone.\n\n\n"
  },
  {
    "path": "doc/man8/pbs_lamboot.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_lamboot 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_lamboot \n\\- PBS front end to LAM's lamboot program\n\n.SH SYNOPSIS\n.B pbs_lamboot\n\n.B pbs_lamboot\n--version\n\n.SH DESCRIPTION\n.B Deprecated.  \nThe PBS command \n.B pbs_lamboot \nreplaces the standard \n.I lamboot\ncommand in a PBS\nLAM MPI job, for starting LAM software on each of the \nPBS execution hosts running Linux 2.4 or higher.\n\nUsage is the same as for LAM's\n.I lamboot.\nAll arguments except for\n.I bhost \nare passed directly to \n.I lamboot.  \nPBS will issue a warning saying that the\n.I bhost \nargument is ignored by PBS since input is taken automatically \nfrom \n.B $PBS_NODEFILE.\nThe \n.B pbs_lamboot \nprogram will not redundantly consult the\n.B $PBS_NODEFILE\nif it has been instructed to boot the nodes using the \n.I tm\nmodule.  
This instruction happens when an argument is\npassed to\n.B pbs_lamboot\ncontaining \"-ssi boot tm\" or when the \n.B LAM_MPI_SSI_boot \nenvironment variable exists with the value\n.I tm.\n\n.SH OPTIONS\n.IP \"--version\" 8\nThe \n.B pbs_lamboot\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH OPERANDS\nThe operands for\n.B pbs_lamboot \nare the same as for \n.I lamboot.\n\n\n.SH ENVIRONMENT VARIABLES\n\n\n.SH PATH\nThe PATH on remote machines must contain \n.I PBS_EXEC/bin.\n\n\n.SH SEE ALSO\nlamboot(1), tm(3)\n"
  },
  {
    "path": "doc/man8/pbs_mkdirs.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_mkdirs 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_mkdirs\n- For Windows. 
Create, or fix the permissions of, the directories and files used by PBS\n\n\n.SH SYNOPSIS\n.B pbs_mkdirs\n.br\n.B pbs_mkdirs \n[ mom ]\n.br\n\n.SH DESCRIPTION\nRuns on Windows only.  If the directories and files used by PBS exist, the \n.B pbs_mkdirs \ncommand fixes their permissions.  If the directories and/or files do not\nexist, the \n.B pbs_mkdirs \ncommand creates them, with the correct\npermissions.  The \n.B pbs_mkdirs \ncommand always examines the following directories and files:\n.RS 5\npbs.conf\n.br\nPBS_EXEC\n.br\nPBS_HOME/spool\n.br\nPBS_HOME/undelivered\n.br\nPBS_HOME/pbs_environment\n.RE\n.B Required Privilege\n.br\nYou must have Administrator privilege to run this command.\n\n.SH OPTIONS\n.IP \"mom\" 5\nThe \n.B pbs_mkdirs \ncommand examines the following additional items:\n.RS 10\nPBS_HOME/mom_priv\n.br\nPBS_HOME/mom_logs\n.RE\n\n\n.IP \"(no options)\" 5\nThe \n.B pbs_mkdirs \ncommand examines all of the files and directories\nspecified for the \n.I mom\noption.\n\n\n.SH SEE ALSO\npbs_probe(8B)\n\n\n"
  },
  {
    "path": "doc/man8/pbs_mom.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_mom 8B \"1 March 2021\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_mom \n- run the PBS job monitoring and execution daemon\n.SH SYNOPSIS\n.B pbs_mom \n[-a <alarm timeout>] \n[-C <checkpoint directory>] \n.RS 8\n[-c <config file>] \n[-d <MoM home directory>] \n[-L <logfile>] \n.br\n[-M <MoM service port>] \n[-N]\n[-n <nice value>] \n[-p|-r] \n.br\n[-R <inter-MoM communication port>] \n[-S <server port>]\n.br\n[-s <options>]\n.RE\n.B pbs_mom\n--version\n\n.SH DESCRIPTION\nThe\n.B pbs_mom\ncommand starts the PBS job monitoring and execution daemon, called\nMoM.  \n\nThe standard MoM starts jobs on the execution host, monitors and reports\nresource usage, enforces resource usage limits, and notifies the\nserver when the job is finished.  The MoM also runs any prologue\nscripts before the job runs, and runs any epilogue scripts after the\njob runs.  \n\nThe MoM performs any communication with job tasks and with other MoMs.\nThe MoM on the first vnode on which a job is running manages\ncommunication with the MoMs on the remaining vnodes on which the job\nruns.\n\nThe MoM manages one or more vnodes.  PBS may treat a  host as \na set of virtual nodes, in which case one MoM \nmanages all of the host's vnodes.  \n\n.B Logging\n.br\nThe MoM's log file is in PBS_HOME/mom_logs.  
The MoM writes an\nerror message in its log file when it encounters any error.  If it\ncannot write to its log file, it writes to standard error.  The\nMoM writes events to its log file.  \nThe MoM writes its PBS \nversion and build information to the logfile whenever it starts up or \nthe logfile is rolled to a new file.\n\n.B Required Permission\n.br\nThe executable for \n.B pbs_mom\nis in PBS_EXEC/sbin, and can be run only by root on Linux, and Admin \non Windows.\n\n\n.B HPE machine Running Supported Versions of Performance Software - \nMessage Passing Interface\n.br\n\nA PBS job can run across multiple machines that run supported versions \nof Performance Software - Message Passing Interface.\n\nPBS can run using HPE's MPI (MPT) over InfiniBand.\n.LP\n\n.B Effect on Jobs of Starting MoM\n.br\nWhen MoM is started or restarted, her default behavior is to leave\nany running processes running, but to tell the PBS server to requeue\nthe jobs she manages.  MoM tracks the process ID of jobs across \nrestarts.\n\nIn order to have all jobs killed and requeued, use the \n.I -r\noption when starting or restarting MoM.\n\nIn order to leave any running processes running, and not to requeue\nany jobs, use the \n.I -p\noption when starting or restarting MoM.\n\n.SH OPTIONS\n.IP \"-a <alarm timeout>\" 10\nNumber of seconds before alarm timeout.  \nWhenever a resource request is processed, an alarm is set for the\ngiven amount of time.  If the request has not completed before \n.I alarm timeout, \nthe OS generates an alarm signal and sends it to MoM.  \nDefault: 10 seconds.  Format: integer.\n\n.IP \"-C <checkpoint directory>\" 10\nSpecifies the path of the directory where MoM creates job-specific\nsubdirectories used to hold each job's restart files.  MoM passes this\npath to checkpoint and restart scripts.  Overrides other checkpoint\npath specification methods.  
Any directory specified with the\n.I -C \noption must be owned, readable, writable, and executable by root only \n.I (rwx,---,---, or 0700), \nto protect the security of the checkpoint files.  See the \n.I -d \noption.  Format: string.\n.br\nDefault: PBS_HOME/spool/checkpoint.  \n\n.IP \"-c <config file>\" 10\nMoM will read this alternate default configuration file upon starting.\nIf this is a relative file name it will be relative to\nPBS_HOME/mom_priv.  If the specified file cannot be opened,\n.B pbs_mom \nwill abort.  See the \n.I -d \noption.\n\nMoM's normal operation, when the -c option is not given, is to attempt\nto open the default configuration file PBS_HOME/mom_priv/config.\nIf this file is not present,\n.B pbs_mom \nwill log the fact and continue.\n\n.IP \"-d <MoM home directory>\" 10\nSpecifies the path of the \n.I directory \nto be used in place of PBS_HOME by\n.B pbs_mom.\nThe default directory is given by $PBS_HOME.  Format: string.\n\n.IP \"-L <logfile>\" 10\nSpecifies an absolute path and filename for the log file.\nThe default is a file named for the current date in PBS_HOME/mom_logs/.\nSee the\n.I -d\noption.  Format: string.\n\n.IP \"-M <MoM port>\" 10\nSpecifies the number of the port on which MoM will\nlisten for server requests and instructions.  Overrides \nPBS_MOM_SERVICE_PORT setting in pbs.conf and environment variable.\nDefault: 15002.\nFormat: integer port number.\n\n.IP \"-n <nice value>\" 10\nSpecifies the priority for the\n.B pbs_mom \ndaemon.  Format: integer.\n\n.IP \"-N\" 10\nSpecifies that when starting, MoM should not detach from the\ncurrent session.\n\n.IP \"-p\" 10\nSpecifies that when starting, MoM should allow any running jobs\nto continue running, and not have them requeued.  This option \ncan be used for single-host jobs only; multi-host jobs cannot\nbe preserved.\nCannot be used with the\n.I -r\noption.  
\nMoM is not the parent of these jobs.\n\n.IP \"-r\" 10\nSpecifies that when starting, MoM should requeue any rerunnable jobs and \nkill any non-rerunnable jobs that \nshe was tracking, and mark the \njobs as terminated.  Cannot be used with the\n.I -p\noption.  \nMoM is not the parent of these jobs.  \n\nIt is not recommended to use the \n.I -r \noption after a reboot, because process IDs of new, legitimate tasks\nmay match those MoM was previously tracking.  If they match and MoM is\nstarted with the \n.I -r\noption, MoM will kill the new tasks.\n\n.IP \"-R <inter-MoM communication port>\" 10\nSpecifies the number of the port on which MoM will listen for pings,\nresource information requests, communication from other MoMs, etc.  \nOverrides PBS_MANAGER_SERVICE_PORT setting in pbs.conf and environment variable.\nDefault:  15003.  Format: integer port number.\n\n.IP \"-S <server port>\" 10\nSpecifies the port number on which \n.B pbs_mom\ninitially contacts the server.  Default: 15001.  Format: integer port number.\n\n.IP \"-s <file options>\" 5\nIf you are running the cgroups hook, make sure that the vnode names in any Version 2 \nconfiguration file exactly match those in the output of pbsnodes -av.\nThis option lets you add, delete, and display MoM Version 2 configuration files.  \nSee \n.B CONFIGURATION FILES.  \nRun this command at the host you want to change.\nThe \n.I file options \nare used this way:\n.RS\n.IP \"-s insert <Version 2 filename> <inputfile>\" 5\nReads \n.I inputfile \nand copies it to a Version 2 \n.B pbs_mom\nconfiguration file with the filename \n.I Version 2 filename.  \nFor example, to create a Version 2 file named \"Myhost_V2\":\n\n.B pbs_mom -s insert <Myhost_V2> <myhost_v2_input>\n\nIf a configuration file with the specified \n.I Version 2 filename \nalready exists,\nthe operation fails, and\n.B pbs_mom\nwrites a diagnostic and exits with a nonzero status.  \nConfiguration files whose names begin with\nthe prefix \"PBS\" are reserved.  
You cannot add a file\nwhose name begins with \"PBS\";\n.B pbs_mom \nwill print a diagnostic message and exit with a nonzero status.  \n\n.IP \"-s remove <Version 2 filename>\" 5\nRemoves the configuration file named \n.I Version 2 filename \nif it exists.  Example:\n\n.B pbs_mom -s remove <Version 2 filename> \n\nIf the file does not exist or if you try to remove a file with the reserved \"PBS\"\nprefix, the operation fails, and\n.B pbs_mom \nprints a diagnostic and exits with a nonzero status.  \n\n.IP \"-s show <Version 2 filename>\" 5\nPrints the contents of the named file to standard output.  Example:\n\n.B pbs_mom -s show <Version 2 filename> \n\nIf \n.I Version 2 filename \ndoes not exist, the\noperation fails, and \n.B pbs_mom\nwrites a diagnostic and exits with a nonzero status.  \n\n.IP \"-s list\" 5\nMoM lists the PBS-prefixed and site-defined configuration\nfiles in the order in which they are executed.  Example:\n\n.B pbs_mom -s list\n\n.LP\n\n.B WINDOWS:\n.RS 5\nUnder Windows, use the \n.I -N\noption so that \n.B pbs_mom \nwill start up as a standalone\nprogram.  For example:\n\n.B pbs_mom -N -s insert <Version 2 filename> <inputfile>\n\nor \n\n.B pbs_mom -N -s list\n\n.RE\n\n\n.RE\n\n.LP\n.IP \"--version\" 10\nThe \n.B pbs_mom\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH CONFIGURATION FILES\nMoM's configuration information can be contained in configuration\nfiles of three types: \n.I default, PBS-prefixed, \nand \n.I site-defined.  \nThe\ndefault configuration file is usually PBS_HOME/mom_priv/config.  The\n\"PBS\" prefix is reserved for files created by PBS.  Site-defined\nconfiguration files are those created by the site administrator.\nMoM reads the configuration files at startup and reinitialization.  
\nThe files are processed in this order:\n.br\n     The default configuration file\n.br\n     PBS-prefixed configuration files\n.br\n     Site-defined configuration files\n.br\n\nThe contents of a file read later override the contents of a file read earlier.\nFor example, to change flags, create a script \"update_flags\"\nthen use the \n.I -s insert\noption:\n.RS 4\n.B pbs_mom -s insert update_script update_flags\n.RE\nThis adds the configuration file \"update_script\".\nConfiguration files can be added, deleted and displayed using\nthe \n.I -s\noption.  \n\nMoM's configuration files can use either the syntax shown \nbelow under \n.B Default Syntax and Contents\nor the syntax for describing \n.I vnodes\nshown in \n.B Vnode Syntax.\n\n.B Location\n.br\nThe default configuration file is in PBS_HOME/mom_priv/.\n\nPBS places PBS-prefixed and site-defined configuration files \nin an area that is private to each installed instance of PBS.\nThis area is relative to the default PBS_HOME.  Note that the \n.I -d \noption changes where MoM looks for PBS_HOME.\n\nThe \n.I -c\noption will change which default configuration file MoM reads.\n\nSite-defined configuration files can be moved from one installed\ninstance of PBS to another.  Do not move PBS-prefixed configuration\nfiles.  To move a set of site-defined configuration files from one\ninstalled instance of PBS to another:\n\n.IP \"1\" 5\nUse the \n.I -s list \ndirective with the \"source\" instance of PBS to enumerate the \nsite-defined files.\n\n.IP \"2\" 5\nUse the \n.I -s show\ndirective with each site-defined file of the \"source\" instance of PBS\nto save a copy of that file.\n\n.IP \"3\" 5\nUse the \n.I -s insert \ndirective with each file at the \"target\" instance of PBS \nto create a copy of each site-defined configuration file.\n.LP\n\n.B Vnode Configuration File Syntax and Contents\n.br\nConfiguration files with the following syntax describe vnodes and\nthe resources available on them.  
They do not contain initialization \nvalues for MoM.\n\nPBS-prefixed configuration files use the following syntax.  Other\nconfiguration files can use the following syntax.  \n\nAny configuration file containing vnode-specific assignments must\nbegin with this line:\n.RS 4\n.B $configversion 2\n.RE\nThe format of a file containing vnode information is:\n.RS 4\n.I <ID> : <ATTRNAME> = <ATTRVAL>\n.RE\nwhere\n.RS 4\n.IP \"<ID>\" 12\nsequence of characters not including a colon (\":\")\n\n.IP \"<ATTRNAME>\" 12\nsequence of characters beginning with alphabetics or numerics, which\ncan contain underscore (\"_\") and dash (\"-\")\n\n.IP \"<ATTRVAL>\" 12\nsequence of characters not including an equal sign (\"=\")\n.LP\nThe colon and equal sign may be surrounded by spaces.\n.RE\n\nA vnode's \n.I ID \nis an identifier that will be unique across all\nvnodes known to a given \n.B pbs_server \nand will be stable across\nreinitializations or invocations of \n.B pbs_mom.  \nID stability is\nof importance when a vnode's CPUs or memory might be expected\nto change over time and PBS is expected to adapt to such changes\nby resuming suspended jobs on the same vnodes to which they\nwere originally assigned.  Vnodes for which this is not a\nconsideration may simply use IDs of the form \"0\", \"1\", etc.\nconcatenated with some identifier that ensures uniqueness across\nthe vnodes served by the \n.B pbs_server.\n\nA \n.I natural vnode \ndoes not correspond to any actual hardware.  It is used to define\nany placement set information that is invariant for a given host, \nsuch as pnames.\n\nIt is defined as\nfollows:\n.br\n.IP \"\" 5\nThe name of the natural vnode is, by convention,\nthe MoM contact name, which is usually the hostname.\nThe MoM contact name is the vnode's MoM attribute.  
See the\n.B pbs_node_attributes(7B) man page.\n\n.IP \"\" 5\nAn attribute, \"pnames\", with value set to the list of\nresource names that define the placement sets' types for\nthis machine.\n\n.IP \"\" 5\nAn attribute, \"sharing\", set to the value \"force_shared\".\n.LP\n\nThe \n.I natural vnode\nis used to define any placement set information that is invariant for\na given host (e.g. the placement set resource names themselves).\n\nThe order of the \n.I pnames\nattribute follows placement set organization.  If\nname X appears to the left of name Y in this attribute's value, an\nentity of type X may be assumed to be smaller (that is, be\ncapable of containing fewer vnodes) than one of type Y.  No such\nguarantee is made for specific instances of the types.\n\nFor example, on an HPE machine named \"HostA\", with two vnodes, a natural\nvnode, four processors and two cbricks, the description would\nlook like this:\n.br\n     HostA:  pnames = cbrick\n.br\n     HostA:  sharing = force_shared\n.br\n     HostA[001c02#0]:  sharing = default_excl\n.br\n     HostA[001c02#0]:  resources_available.ncpus = 2\n.br\n     HostA[001c02#0]:  resources_available.cbrick = cbrick-0\n.br\n     HostA[001c02#0]:  resources_available.mem = 1968448kb\n.br\n     HostA[001c04#0]:  sharing = default_excl\n.br\n     HostA[001c04#0]:  resources_available.ncpus = 2\n.br\n     HostA[001c04#0]:  resources_available.cbrick = cbrick-1\n.br\n     HostA[001c04#0]:  resources_available.mem = 1961328kb\n.br\nThe natural vnode is described in the first two lines.\nThe first vnode uses cbrick-0, and the second one uses cbrick-1.\n\n.B Default Syntax and Contents\n.br\nConfiguration files with this syntax list local resources and\ninitialization values for MoM.  Local resources are either static,\nlisted by name and value, or externally-provided, listed by name and\ncommand path.  See the\n.I -c\noption.\n\nEach configuration item is listed on a single line, with its parts \nseparated by white space.  
Comments begin with a hashmark (\"#\").\n\nThe default configuration file must be secure.  It must be owned by a user ID\nand group ID both less than 10 and must not be world-writable.\n\n.B Externally-provided Resources\n.br\nExternally-provided resources use a shell escape to run a command.\nThese resources are described with a name and value, \nwhere the first character of the value is an exclamation mark (\"!\").\nThe remainder of the value is the path and command to execute.\n\nParameters in the command beginning with a percent sign (\"%\") can\nbe replaced when the command is executed.  \nFor example, this line in a configuration file describes a \nresource named \"escape\":\n.RS 14\nescape     !echo %xxx %yyy\n.RE\n.IP\nIf a query for the \"escape\" resource is sent with no parameter replacements, \nthe command executed would be \"echo %xxx %yyy\".  If one parameter replacement is sent,\n\"escape[xxx=hi there]\", the command executed would be \"echo hi there %yyy\".\nIf two parameter replacements are sent, \"escape[xxx=hi][yyy=there]\", the command\nexecuted would be \"echo hi there\".  If a parameter replacement is sent with\nno matching token in the command line, \"escape[zzz=snafu]\", an error\nis reported.\n.LP\n\n.B Windows Notes\n.br\nIf the argument to a MoM option is a pathname containing a space,\nenclose it in double quotes as in the following:\n\nhostn !\"\\\\Program Files\\\\PBS Pro\\\\exec\\\\bin\\\\hostn\" host\n\nWhen you edit any PBS configuration file, make sure that you put a\nnewline at the end of the file.  The Notepad application does not\nautomatically add a newline at the end of a file; you must explicitly\nadd the newline.\n\n\n\n.B Replacing Actions\n.br\n.IP \"$action <default action> <timeout> <new action>\" 5\nReplaces the \n.I default action\nfor an event with the site-specified\n.I new action.  \n.I timeout\nis the time allowed for \n.I new action \nto run. 
\nThe \n.I default action \ncan be one of:\n.RS\n.IP \"checkpoint\" 5\nRun\n.I new action \nin place of the periodic job checkpoint, after which the job \ncontinues to run.\n.IP \"checkpoint_abort\" 5\nRun\n.I new action \nto checkpoint the job, after which the job must be terminated by the script.\n.IP \"multinodebusy <timeout> requeue\" 5\nUsed with cycle harvesting and multi-vnode jobs.\nChanges default behavior when a vnode becomes busy.  Instead of \nallowing the job to run, the job is requeued.  \n.I timeout\nis ignored.  The only\n.I new action \nis \n.I requeue.  \n.IP \"restart\" 5\nRuns \n.I new action \nin place of \n.I restart.\n.IP \"terminate\" 5\nRuns \n.I new action \nin place of SIGTERM or SIGKILL when MoM terminates a job.\n.RE\n\n.SH MoM Parameters\n\n.IP \"$alps_client <path>\" 5\nCray only.  MoM runs this command to get the ALPS inventory.  Must \nbe full path to command.  \n.br \nFormat: path to command\n.br\nDefault: None\n\n.IP \"$alps_release_jitter <maximum jitter>\" 5\nCray only.  PBS sends requests to ALPS to release a finished job at\nintervals specified by the sum of \n.I $alps_release_wait_time \nand a randomly generated value between zero and \n.I maximum jitter, \nin seconds.\n.br\nFormat: Float\n.br\nDefault: 0.12 seconds\n\n.IP \"$alps_release_timeout <timeout>\" 5\nCray only.  Specifies the amount of time that PBS tries to release an\nALPS reservation before giving up.  After this amount of time has\npassed, PBS stops trying to release the ALPS reservation, the job\nexits, and the job's resources are released.  PBS sends a HUP to the\nMoM so that she re-reads the ALPS inventory to get the current\navailable ALPS resources.\n.br\nWe recommend that the value for this parameter be greater than the value for \n.I suspectbegin.\n.br\nFormat: Seconds, specified as positive integer\n.br\nDefault: 600 (10 minutes)\n\n.IP \"$alps_release_wait_time <wait time>\" 5\nCray only.  
PBS sends requests to ALPS to release a finished job at\nintervals specified by the sum of \n.I wait time \nand a randomly generated value between zero and the maximum specified in \n.I $alps_release_jitter,\nin seconds.\n.br\nFormat: Float\n.br\nDefault: 0.4 seconds\n\n.IP \"$checkpoint_path <path>\" 5\nMoM passes this path to checkpoint and restart scripts.\nThis path can be absolute or relative to PBS_HOME/mom_priv.\nOverrides default.  Overridden by \n.I pbs_mom -C \noption and by \n.I PBS_CHECKPOINT_PATH \nenvironment variable.\n\n.IP \"$clienthost <hostname>\" 5\n.I hostname \nis added to the list of hosts which are allowed\nto connect to MoM as long as they are using a privileged port.\nFor example, \nthis allows the hosts \"fred\" and \"wilma\"\nto connect to MoM:\n.br\n \"$clienthost      fred\"\n.br\n \"$clienthost      wilma\"\n.br\n\nThe following hostnames are added to \n.I $clienthost \nautomatically: the\nserver, the localhost, and if configured, the secondary server.  The\nserver sends each MoM a list of the hosts in the nodes file, and these\nare added internally to \n.I $clienthost.  \nNone of these hostnames need to\nbe listed in the configuration file.\n\nTwo hostnames are always allowed to connect to \n.B pbs_mom, \n\"localhost\" and the name returned to MoM \nby the system call gethostname().  These\nhostnames do not need to be listed in the configuration file.  \n\nThe hosts listed\nas \"clienthosts\" make up a \"sisterhood\" of machines.  Any one of the\nsisterhood will accept connections from within the\nsisterhood.  The sisterhood must all use the same port number.\n\n.IP \"$cputmult <factor>\" 5\nThis sets a \n.I factor \nused to adjust CPU time used by each job.  This allows adjustment of time\ncharged and limits enforced where jobs run on a system with\ndifferent CPU performance.  If MoM's system is faster than the\nreference system, set\n.I factor\nto a decimal value greater than 1.0.  
For example:\n.RS 9\n$cputmult 1.5\n.RE\n.IP\nIf MoM's system is slower, set \n.I factor \nto a value between 1.0 and 0.0.  For example:\n.RS 9\n$cputmult 0.75\n.RE\n.IP\n\n.IP \"$dce_refresh_delta <delta>\" 5\nDefines the number of seconds between successive refreshings of a job's\nDCE login context.\nFor example:\n.RS 9\n$dce_refresh_delta 18000\n.RE\n.IP\n\n.IP \"$enforce <limit>\" 5\nMoM will enforce the given \n.I limit.\nSome\n.I limits\nhave associated values.  Syntax:\n.br\n.I $enforce <variable name> <value>\n.br\n\n.RS\n.IP \"$enforce mem\" 5\nMoM will enforce each job's memory limit.\n\n.IP \"$enforce cpuaverage\" 5\nMoM will enforce ncpus when the average CPU usage over a job's\nlifetime is greater than the job's limit.\n\n.RS\n.IP \"$enforce average_trialperiod <seconds>\" 5\nModifies \n.I cpuaverage.\nMinimum number of \n.I seconds \nof job walltime before enforcement begins.  Default: 120.  \nInteger.\n\n.IP \"$enforce average_percent_over <percentage>\" 5\nModifies \n.I cpuaverage.\nGives \n.I percentage\nby which a job may exceed its ncpus limit.  Default: 50.\nInteger.\n\n.IP \"$enforce average_cpufactor <factor>\" 5\nModifies \n.I cpuaverage.\nThe ncpus limit is multiplied by \n.I factor \nto produce actual\nlimit.  Default: 1.025.  Float.\n.RE\n\n.IP \"$enforce cpuburst\" 5\nMoM will enforce the ncpus limit when CPU burst usage exceeds\nthe job's limit.\n.RS\n.IP \"$enforce delta_percent_over <percentage>\" 5\nModifies \n.I cpuburst.\nGives\n.I percentage\nover limit to be allowed.  Default: 50.  Integer.\n\n.IP \"$enforce delta_cpufactor <factor>\" 5\nModifies \n.I cpuburst.\nThe ncpus limit is multiplied by \n.I factor\nto produce actual limit.  Default: 1.5.  Float.\n\n.IP \"$enforce delta_weightup <factor>\" 5\nModifies \n.I cpuburst.\nWeighting factor for smoothing burst usage when average is increasing.  
Default: 0.4.\nFloat.\n\n.IP \"$enforce delta_weightdown <factor>\" 5\nModifies \n.I cpuburst.\nWeighting factor\nfor smoothing burst usage when average is decreasing.  Default: 0.4.\nFloat.\n.RE\n.RE\n\n.IP \"$ideal_load <load>\" 5\nDefines the \n.I load \nbelow which the vnode is not considered to be busy.\nUsed with \nthe \n.I $max_load \ndirective.  \nNo default.  Float.  \n.RS\n.IP \"Example:\" 5\n$ideal_load 1.8\n.LP\n.br\nUse of $ideal_load adds a static resource to the vnode called \"ideal_load\", \nwhich is only internally visible.\n.LP\n.RE\n\n.IP \"$jobdir_root <stage directory root>\" 5\nDirectory under which PBS creates job-specific staging and execution directories.\nPBS creates a job's staging and execution directory when the job's \n.I sandbox\nattribute is set to PRIVATE.  If \n.I $jobdir_root\nis unset, it defaults to the job owner's home directory.  \nIn this case the user's home directory must exist.  \nIf \n.I stage directory root\ndoes not exist when MoM starts up, MoM will abort.  If \n.I stage directory root\ndoes not exist when MoM tries to run a job, MoM will kill the job.\nPath must be owned by root, and permissions must be 1777.  On Windows,\nthis directory should have Full Control Permission for the local\nAdministrators group.\n.br\nWhen you set \n.I $jobdir_root\nto a shared (e.g. NFS) directory, tell MoM it is shared by setting the\n.I shared\ndirective after the directory name:\n.br\n.I $jobdir_root <directory name> shared\n.br\nOtherwise sister MoMs can prematurely delete files and directories\nwhen nodes are released.  This is because when a job's sandbox\nattribute is set to PRIVATE and $jobdir_root is set to a shared\ndirectory, PBS can use a shared location for job files.  When sister\nnodes are released, those sister MoMs would normally clean up their\nown files upon release.  
So if $jobdir_root is set to a shared\ndirectory, you need to tell the sister MoMs not to do the cleanup, and\nlet the primary execution host MoM clean up when the job is finished.\n.RS\n.IP \"Example of a shared directory:\" 5\n$jobdir_root /r/shared shared\n.IP \"Example of a non-shared directory:\" 5\n$jobdir_root /scratch/foo\n.RE\n\n.IP \"$job_launch_delay\" 5\nWhen the primary MoM gets a job whose \n.I tolerate_node_failures \nattribute is set to \n.I all \nor \n.I job_start, \nthe primary MoM can wait to start the job (running the job script or\nexecutable) for up to a configured number of seconds.  During this\ntime, execjob_prologue hooks can finish and the primary MoM can check\nfor communication problems with sister MoMs.  You configure the number\nof seconds for the primary MoM to wait for hooks via the\n.I $job_launch_delay \nconfiguration parameter in MoM's config file:\n.br\n   $job_launch_delay <number of seconds to wait>\n.br\nDefault: the sum of the values of the alarm attributes of any enabled\nexecjob_prologue hooks. If there are no enabled execjob_prologue\nhooks, the default value is 30 seconds.  For example, if there are two\nenabled execjob_prologue hooks, one with alarm = 30 and one with alarm\n= 60, the default value of MoM's \n.I $job_launch_delay\nis 90 seconds.\n\nAfter all the execjob_prologue hooks have finished, or MoM has waited\nfor the value of the \n.I $job_launch_delay \nparameter, she starts the job.\n\n.IP \"$kbd_idle <idle wait> <min use> <poll interval>\" 5\nDeclares that the vnode will be used for batch jobs during periods when\nthe keyboard and mouse are not in use.  \n\nThe vnode must be idle for a minimum of \n.I idle wait\nseconds before being considered available for batch jobs.  \nNo default.  Integer.\n\nThe vnode must be in use for a minimum of \n.I min use\nseconds before it becomes unavailable for batch jobs.  Default: 10.  Integer.\n\nMoM checks for activity every\n.I poll interval\nseconds.  Default: 1.  
Integer.\n.RS\n.IP \"Example:\" 5\n$kbd_idle 1800 10 5\n.RE\n\n\n.IP \"$logevent <mask>\" 5\nSets the \n.I mask \nthat determines which event types are logged by \n.B pbs_mom.\nTo include all debug events, use 0xffffffff.\n.nf\nLog events:\n\nName       Hex Value  Message Category                   \n---------------------------------------------------\nERROR      0001       Internal errors\nSYSTEM     0002       System errors \nADMIN      0004       Administrative events\nJOB        0008       Job-related events\nJOB_USAGE  0010       Job accounting info\nSECURITY   0020       Security violations\nSCHED      0040       Scheduler events\nDEBUG      0080       Common debug messages\nDEBUG2     0100       Uncommon debug messages\nRESV       0200       Reservation-related info\nDEBUG3     0400       Rare debug messages\nDEBUG4     0800       Limit-related messages\n.fi\n\n.IP \"$max_check_poll <seconds>\" 5\nMaximum time between polling cycles, in seconds.  Minimum recommended\nvalue: 30 seconds.  \n\nThe interval between each poll starts at \n.I $min_check_poll \nand increases with each cycle until it reaches \n.I $max_check_poll, \nafter which it remains the same. The amount by which the cycle increases is 1/20 of\nthe difference between \n.I $max_check_poll \nand \n.I $min_check_poll.\n.br\nFormat: Integer\n.br\nMinimum value: 1 second\n.br\nDefault value: 120 seconds\n\n.IP \"$max_load <load> [suspend]\" 5\nDefines the load above which the vnode is considered to be busy.\nUsed with \nthe \n.I $ideal_load \ndirective. 
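The polling-interval ramp described for $max_check_poll (start at $min_check_poll, grow each cycle by 1/20 of the difference, stop at $max_check_poll) can be sketched in a few lines. The function name and cycle count are illustrative, not MoM internals; the defaults of 10 and 120 seconds come from this page.

```python
# Sketch of the polling-interval ramp described for $min_check_poll /
# $max_check_poll: start at the minimum and grow by 1/20 of the
# difference each cycle until the maximum is reached.

def poll_intervals(min_check_poll=10, max_check_poll=120, cycles=25):
    """Return the interval used for each successive polling cycle."""
    step = (max_check_poll - min_check_poll) / 20.0
    intervals, interval = [], float(min_check_poll)
    for _ in range(cycles):
        intervals.append(interval)
        interval = min(interval + step, max_check_poll)
    return intervals

print(poll_intervals()[:3])
```

With the default values the step is 5.5 seconds, so the interval reaches 120 seconds on the 21st cycle and stays there.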
No new jobs are started on a \n.I busy\nvnode.\n\nThe optional \n.I suspend \ndirective tells PBS to suspend jobs running on\nthe vnode if the load average exceeds the \n.I $max_load \nnumber, regardless of the source of the load (PBS and/or logged-in users).\nWithout this directive, PBS will not suspend jobs due to load.\n\nWe recommend setting \n.I load\nto a value that is slightly higher than the number of CPUs, \nfor example 0.25 + \n.I ncpus.\n\n.br\nDefault: number of CPUs on machine\n.br\nFormat: Float  \n.br\nExample:\n.RS 8\n$max_load 3.5\n.RE\n.IP\n\n.IP \"$max_poll_downtime\" 5\nWhen mother superior detects that a sister MoM has lost connectivity (e.g. the MoM went down or the network is having problems), it waits \n.I downtime \nseconds for the sister to reconnect before it gives up and kills the job.\n.br\nFormat: \n.I Integer\n.br\nDefault: \n.I Five minutes\n\n.IP \"memreserved <megabytes>\" 5\n.B Deprecated.  \nThe amount of per-vnode memory reserved for system overhead. \nThis much memory is deducted from the value of \n.I resources_available.mem\nfor each vnode managed by this MoM.\n.br\nExample:\n.RS 9\nmemreserved 16\n.RE\n.IP\nDefault: 0MB\n.br\n\n.IP \"$min_check_poll <seconds>\" 5\nMinimum time between polling cycles, in seconds.  Must be \ngreater than zero and less than\n.I $max_check_poll.  \nMinimum recommended value: 10 seconds.\n.br\nFormat: Integer.\n.br\nMinimum value: 1 second\n.br\nDefault value: 10 seconds\n\n.IP \"pbs_accounting_workload_mgmt <value>\" 5\nControls whether CSA accounting is enabled.  The name does not start with a\ndollar sign.  If set to \n.I 1, on, \nor \n.I true, \nCSA accounting is enabled.  If set to\n.I 0, off, \nor \n.I false,\naccounting is disabled.  Cray only.  Requires CLE 5.2.\n.br\nDefault: \"true\"; enabled\n\n.IP \"$prologalarm <timeout>\" 5\nDefines the maximum number of seconds the prologue and epilogue\nmay run before timing out.  Default: 30 seconds.  
Integer.\nExample:\n.RS 8\n$prologalarm 30\n.RE\n.IP\n\n.IP \"$reject_root_scripts <True | False>\" 5\nWhen set to \n.I True,\nMoM won't acquire any new hook scripts, and MoM won't run job scripts that would execute\nas root or Admin. However, MoM will run previously-acquired hooks that run as root.\n.br \nFormat: Boolean\n.br\nDefault: False\n\n.IP \"$restart_background <True | False>\" 5\nControls how MoM runs a restart script after checkpointing a job.\nWhen this option is set to \n.I True, \nMoM forks a child which runs the restart script.  The child returns\nwhen all restarts for all the local tasks of the job are done.  MoM\ndoes not block on the restart.  When this option is set to \n.I False,\nMoM runs the restart script and waits for the result.\n.br\nFormat: Boolean\n.br  \nDefault: False\n\n\n.IP \"$restart_transmogrify <True | False>\" 5\nControls how MoM runs a restart script after checkpointing a job.\nWhen this option is set to \n.I True, \nMoM runs the restart script, replacing the session ID of the original \ntask's top process with the session ID of the script.  \n\nWhen this option is set to \n.I False,\nMoM runs the restart script and waits for the result.  The restart\nscript must restore the original session ID for all the processes of\neach task so that MoM can continue to track the job.  \n\nWhen this option is set to \n.I False\nand the restart uses an external command, the configuration parameter\n.I $restart_background\nis ignored and treated as if it were set to \n.I True,\npreventing MoM from blocking on the restart.\n.br\nFormat: Boolean\n.br\nDefault: False\n.br\n\n.IP \"$restrict_user <True | False>\" 5\nControls whether users not submitting jobs have access to this\nmachine.  If \n.I value\nis \n.I True, \nrestrictions are applied.  
See \n.I $restrict_user_exceptions\nand \n.I $restrict_user_maxsysid.\nNot supported on Windows.\n.br\nFormat: Boolean\n.br\nDefault: False\n\n.IP \"$restrict_user_exceptions <user list>\" 5\nComma-separated list of users who are exempt from access\nrestrictions applied by \n.I $restrict_user.\nLeading spaces within each entry are allowed.  \nMaximum of 10 names.\n\n.IP \"$restrict_user_maxsysid <value>\" 5\nAny user with a numeric user ID less than or equal to \n.I value\nis exempt from restrictions applied by \n$restrict_user.  \n\nIf \n.I $restrict_user \nis \n.I True \nand no \n.I value\nexists for \n.I $restrict_user_maxsysid, \nPBS looks in /etc/login.defs, if it exists, for the \n.I value. \nOtherwise the default is used.\n.br\nFormat: \n.I Integer\n.br\nDefault: \n.I 999\n\n.IP \"$restricted <hostname>\" 5\nThe \n.I hostname\nis added to the list of hosts which are allowed to connect to MoM \nwithout being required to use a privileged port.  \n\nHostnames can be\nwildcarded.  For example, to allow queries from any host from the \ndomain \"xyz.com\":\n.RS 9\n$restricted      *.xyz.com\n.RE\n.IP\nQueries from the hosts in the restricted list are only allowed \naccess to information internal to this host, such as load\naverage, memory available, etc.  They may not run shell commands.\n\n.IP \"$sister_join_job_alarm\" 5\n\nWhen the primary MoM gets a job whose \n.I tolerate_node_failures \nattribute is set to \n.I all \nor \n.I job_start, \nthe primary MoM can wait to start the job\nfor up to a configured number of seconds if the sister MoMs do not\nimmediately acknowledge joining the job.  This gives the sister MoMs\nmore time to join the job.  
You configure the number of seconds for\nthe primary MoM to wait for sister MoMs via the \n.I $sister_join_job_alarm\nconfiguration parameter in MoM's config file:\n.br\n   $sister_join_job_alarm <number of seconds to wait>\n.br\nDefault: the sum of the values of the \n.I alarm \nattributes of any enabled\nexecjob_begin hooks. If there are no enabled execjob_begin hooks, the\ndefault value is 30 seconds.  For example, if there are two enabled\nexecjob_begin hooks, one with alarm = 30 and one with alarm = 20, the\ndefault value of MoM's \n.I $sister_join_job_alarm \nis 50 seconds.\n\nAfter all the sister MoMs have joined the job, or MoM has waited for\nthe value of the \n.I $sister_join_job_alarm \nparameter, she starts the job.\n\n.IP \"$suspendsig <suspend signal> [resume signal]\" 5\nAlternate signal \n.I suspend signal\nis used to suspend jobs instead of SIGSTOP.  Optional \n.I resume signal\nis used to resume jobs instead of SIGCONT.\n\n.IP \"$tmpdir <directory>\" 5\nLocation where each job's scratch directory will be created.\n\nPBS creates a temporary directory for use by the job, not by PBS.\nPBS creates the directory before the job is run and removes \nthe directory and its contents when the job is finished.  It is \nscratch space for use by the job.  Permissions must be 1777 on\nLinux, writable by \n.I Everyone \non Windows.\n\nExample: \n.RS 9\n$tmpdir /memfs\n.RE\n\n.IP\nDefault on Linux: /var/tmp\n.br\nDefault on Windows: value of the\n.I TMP\nenvironment variable \n\n.IP \"$usecp <hostname:source prefix> <destination prefix>\" 5\nMoM uses /bin/cp or the program specified by PBS_CP to deliver \noutput files when the destination is a network mounted file system, \nor when the source and destination are both on the local host, or \nwhen the \n.I source prefix \ncan be replaced with the \n.I destination prefix\non \n.I hostname.  \nBoth\n.I source prefix\nand \n.I destination prefix\nare absolute pathnames of directories, not files.  
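The default-value rule shared by $sister_join_job_alarm and $job_launch_delay, as described above, is the sum of the alarm attributes of the relevant enabled hooks, or 30 seconds when there are none. A minimal sketch of that rule (the helper function is hypothetical, not a MoM API):

```python
# Default for $sister_join_job_alarm (and, analogously,
# $job_launch_delay) as described above: the sum of the alarm
# attributes of the relevant enabled hooks, or 30 seconds if none.

def default_hook_wait(enabled_hook_alarms):
    """enabled_hook_alarms: alarm values of the relevant enabled hooks."""
    return sum(enabled_hook_alarms) if enabled_hook_alarms else 30

# The page's examples: execjob_begin alarms 30 and 20 give 50 seconds;
# execjob_prologue alarms 30 and 60 give 90 seconds; no hooks gives 30.
print(default_hook_wait([30, 20]),
      default_hook_wait([30, 60]),
      default_hook_wait([]))
```
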
\n\nOverrides\n.I PBS_CP\nand\n.I PBS_RCP\nand \n.I PBS_SCP.\n\nUse trailing\nslashes on both source and destination.  For example:\n.RS 9\n$usecp HostA:/users/work/myproj/ /sharedwork/proj_results/\n.RE\n.IP\n\n.IP \"$vnodedef_additive\" 5\nSpecifies whether MoM considers a vnode that appeared previously\neither in the inventory or in a vnode definition file, but that does\nnot appear now, to be in her list of vnodes.\n.br\nWhen \n.I $vnodedef_additive \nis True, MoM treats missing vnodes as if they\nare still present, and continues to report them as if they are\npresent.  This means that the server does not mark missing vnodes as\n.I stale.\n.br\nWhen \n.I $vnodedef_additive \nis False, MoM does not list missing vnodes,\nthe server's information is brought up to date with the inventory and\nvnode definition files, and the server marks missing vnodes as \n.I stale.\n.br\nVisible in configuration file on Cray only.\n.br\nFormat: Boolean\n.br\nDefault for MoM on Cray login node: False\n\n\n.IP \"$wallmult <factor>\" 5\nEach job's walltime usage is multiplied by \n.I factor.\nFor example:\n.RS 9\n$wallmult 1.5\n.RE\n.IP\n\n.RE\n\n\n.RE\n.B Static Resources\n.br\nStatic resources local to the vnode are described\none resource to a line,\nwith a name and value separated by white space.\nFor example, tape drives of different types could be specified by:\n.RS 15\n.nf\n.B tape3480 \\ \\  4\n.B tape3420 \\ \\  2\n.B tapedat  \\ \\ \\ \\ 1\n.B tape8mm  \\ \\ \\ \\ 1\n.fi\n.RE\n\n\n.RE\n\n.SH FILES AND DIRECTORIES\n.IP $PBS_HOME/mom_priv 10\nDefault directory for default configuration files.\n\n.IP $PBS_HOME/mom_priv/config 10\nMoM's default configuration file.\n\n.IP $PBS_HOME/mom_logs 10\nDefault directory for log files written by MoM.\n\n.IP $PBS_HOME/mom_priv/prologue 10\nFile containing administrative script to be run before job execution.\n\n.IP $PBS_HOME/mom_priv/epilogue 10\nFile containing administrative script to be run after job execution.\n\n\n.SH SIGNAL 
HANDLING\n.B pbs_mom \nhandles the following signals:\n\n.IP SIGHUP 10\nThe \n.B pbs_mom \ndaemon rereads its configuration files, closes and reopens the log\nfile, and reinitializes resource structures.  \n\n.IP SIGALRM 10\nMoM writes a log file entry.  See the \n.I -a alarm_timeout \noption.\n\n.IP SIGINT 10\nThe \n.B pbs_mom \ndaemon exits, leaving all running jobs still running.  \nSee the \n.I -p\noption. \n\n.IP SIGKILL 10\nThis signal is not caught.  The \n.B pbs_mom \ndaemon exits immediately.\n\n.IP \"SIGTERM, SIGXCPU, SIGXFSZ, SIGCPULIM, SIGSHUTDN\" 10\nThe \n.B pbs_mom \ndaemon terminates all running children and exits.\n\n.IP \"SIGPIPE, SIGUSR1, SIGUSR2, SIGINFO\" 10\nThese are ignored.\n\n.LP\nAll other signals have their default behavior installed.\n\n.SH EXIT STATUS\n.IP \"Greater than zero\" 5\nIf the \n.B pbs_mom \ndaemon fails to start\n.br\nIf the \n.I -s insert \noption is used with an existing \n.I Version 2 filename\n.br\nIf the administrator attempts to add a script whose name \nbegins with \"PBS\"\n.br\nIf the administrator attempts to use the \n.I -s remove \noption on a nonexistent configuration file, or on a configuration\nfile whose name begins with \"PBS\"\n.br\nIf the administrator attempts to use the \n.I -s show\noption on a nonexistent script\n\n\n.SH SEE ALSO\npbs_server(8B), \npbs_sched(8B), \nqstat(1B)\n"
  },
  {
    "path": "doc/man8/pbs_mpihp.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_mpihp 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_mpihp \n\\- run an MPI application in a PBS job with HP MPI\n\n.SH SYNOPSIS\n.B pbs_mpihp\n[-h <host>] [-np <number>] [<other HP mpirun options>] <program> [<args>]\n\n.B pbs_mpihp  \n[<HP mpirun options>] -f <appfile> [-- [<extra_args>]]\n\n.B pbs_mpihp\n--version\n\n.SH DESCRIPTION\nThe PBS command \n.B pbs_mpihp \nreplaces the standard \n.I mpirun\ncommand in a PBS\nHP MPI job, for executing programs.  \n.B pbs_mpihp \nis a front end to the HP MPI version of mpirun.  \n\nWhen \n.B pbs_mpihp \nis invoked from a PBS job, it \nprocesses the command line arguments, then calls standard HP mpirun to\nactually start the MPI ranks.  The ranks created are mapped onto\nCPUs on the vnodes allocated to the PBS job.  The environment variable \n.B MPI_REMSH \nis set to \n.I $PBS_EXEC/bin/pbs_tmrsh.  \nThis causes\nthe processes that are created to become part of the PBS job.\n\nThe path to standard HP mpirun is found by checking to see if a link\nexists with the name \n.I \"PBS_EXEC/etc/pbs_mpihp\".  \nIf this link exists,\nit points to standard HP mpirun.  
If it does not exist, a call\nto \n.I \"mpirun -version\" \nis made to determine whether it is HP mpirun.\nIf so, the call is made to \"mpirun\" without an absolute path.\nIf HP mpirun cannot be found, an error is output, all temp\nfiles are cleaned up and the script exits with value 127.\n\nIf \n.B pbs_mpihp \nis invoked from outside a PBS job, it passes all of\nits arguments directly to standard HP mpirun without further processing.\n\n.B Configuration\n.br\nWhen HP MPI is wrapped with pbs_mpihp, \"rsh\" is the default used to \nstart the mpids. If you wish to use \"ssh\" or something else, \nbe sure to set the following in $PBS_HOME/pbs_environment:\n\n.RS 5\n.I PBS_RSHCOMMAND=ssh\n.RE\n\nor put the following in the job script:\n\n.RS 5\n.I export PBS_RSHCOMMAND=<rsh_cmd>\n.RE\n\n.SH USAGE\nUsage is the same as for HP \n.I mpirun.\n\n.B pbs_mpihp <program>\nallows one executable to be specified.\n\n.B pbs_mpihp -f <appfile>\nuses an\n.I appfile \nto list multiple executables.\nThe format is described in the HP mpirun man page.  If this form\nis used from inside a PBS job, the file is read to determine\nwhat executables are to be run and how many processes are\nstarted for each.\n\nExecuting \n.B pbs_mpihp \nwith the \n.I -client \noption is not supported under PBS.\n\n.SH OPTIONS\nAll options except the following are passed directly to HP mpirun\nwith no modification.\n\n.IP \"-client\" 15  \nNot supported.\n\n.IP \"-f <appfile>\"  \nThe specified \n.I appfile \nis read by \n.B pbs_mpihp.\n\n.IP \"-h <host>\" 15    \nIgnored by \n.B pbs_mpihp.\n\n.IP \"-l <user>\"     \nIgnored by pbs_mpihp.\n\n.IP \"-np <number>\" 15  \nSpecifies the \n.I number \nof processes to run on the PBS vnodes.\n\n.IP \"--version\" 15\nThe \n.B pbs_mpihp\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH EXIT VALUES\n\n.IP 127 15\nIf HP\n.B mpirun\ncannot be found\n\n.SH SEE ALSO\nmpirun(1)\n"
  },
  {
    "path": "doc/man8/pbs_mpilam.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_mpilam 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_mpilam \n\\- run MPI programs under PBS with LAM MPI\n\n.SH SYNOPSIS\n.B pbs_mpilam \n[<mpilam options>]\n\n.B pbs_mpilam \n--version\n\n.SH DESCRIPTION\n.B Deprecated.  \nThe PBS command \n.B pbs_mpilam \nreplaces the standard \n.I mpirun\ncommand in a PBS LAM MPI job.\n\nIf used to run a single program, PBS tracks resource usage \nand controls all user processes spawned by the program.\nIf used to run multiple programs as specified in an application file\n(no \n.I <where> \nargument and no \n.I -np/-c \noption), PBS does not manage the spawned user processes of each program.\n\nIf the \n.I where \nargument is not specified,\n.I pbs_mpilam\nwill try to run the user's program on all available CPUs\nusing the \n.I C\nkeyword.\n\n.B Prerequisites\n.br\nThe PATH on remote machines must contain \n.I PBS_EXEC/bin.\n\n.SH USAGE\nUsage is the same as for LAM \n.I mpirun.\nAll options are passed directly to \n.I mpirun.  \n\n.SH OPTIONS\n.IP \"<mpilam options>\" 8\nThe \n.B pbs_mpilam \ncommand uses the same options as \n.I mpirun.\n\n.IP \"--version\" 8\nThe \n.B pbs_mpilam\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH SEE ALSO\nmpirun(1)\n"
  },
  {
    "path": "doc/man8/pbs_mpirun.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_mpirun 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_mpirun\n\n.B Deprecated.  
\nRun MPI programs under PBS with MPICH\n\n.SH SYNOPSIS\n.B pbs_mpirun \n[<mpirun options>]\n\n.B pbs_mpirun \n--version\n.SH DESCRIPTION\n\nThe PBS command \n.B pbs_mpirun \nreplaces the standard \n.I mpirun \ncommand in a PBS\nMPICH job using P4.  \n\nOn Windows, this command cannot be used to start job processes or\nto track a job's resource usage.\n\n.B Prerequisite\n.br\nThe PATH on remote machines must contain \n.I PBS_EXEC/bin.\n\n.SH USAGE\nUsage is the same as for \n.I mpirun, \nexcept for the \n.I -machinefile \noption.  All other options are passed directly to \n.I mpirun.\n\n.SH OPTIONS\n.IP \"<mpirun options>\" 8\nThe \n.I options \nto \n.B pbs_mpirun \nare the same as for \n.I mpirun, \nexcept for the\n.I -machinefile \noption.  This is generated by \n.B pbs_mpirun.\nThe user should not attempt to specify \n.I -machinefile.\n\nThe value for \n.I -machinefile\nis a temporary\nfile created from \n.I PBS_NODEFILE \nin the format:\n       hostname-1[:number of processors]\n       hostname-2[:number of processors]\n       hostname-n[:number of processors]\n\nwhere if the number of processors is not specified, it is 1.\nAn attempt  by the user to specify the \n.I -machinefile \noption\nwill result in a warning saying \"Warning, -machinefile value\nreplaced by PBS\".\n\nThe default value for the \n.I -np \noption is the number of entries in PBS_NODEFILE.\n\n.IP \"--version\" 8\nThe \n.B pbs_mpirun\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n.LP\n.SH ENVIRONMENT VARIABLES\n.B pbs_mpirun \nmodifies \n.I P4_RSHCOMMAND \nand \n.I PBS_RSHCOMMAND.  \nUsers should not\nedit these.  \n.B pbs_mpirun \ncopies the value of \n.I P4_RSHCOMMAND \ninto \n.I PBS_RSHCOMMAND.\n\n.SH SEE ALSO\nmpirun(1)\n"
  },
  {
    "path": "doc/man8/pbs_probe.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_probe 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_probe \n- report PBS diagnostic information and fix permission errors\n.SH SYNOPSIS\n.B pbs_probe \n[ -f | -v ] \n.br\n.B pbs_probe\n--version\n.SH DESCRIPTION\n.B Deprecated.\nThe\n.B pbs_probe\ncommand reports post-installation information that is useful for PBS\ndiagnostics, and fixes permission errors.\n\n.B Information Sources\n.RS 3\nInformation that is supplied on the command line\n.br\nThe file /etc/pbs.conf \n.br\nThe file /etc/init.d/pbs\n.br\nThe values of any of the following environment variables; these may \nbe set in the environment in which\n.B pbs_probe\nis run: PBS_CONF_FILE, PBS_HOME, PBS_EXEC, PBS_START_SERVER, PBS_START_MOM,\nand PBS_START_SCHED\n.RE\n\n.B Required Privilege\n.br\nIn order to execute\n.B pbs_probe,\nthe user must have PBS Operator or Manager privilege.\n\n.SH OPTIONS\n.IP \"(no options)\" 10\nRun in \"report\" mode. In this mode\n.B pbs_probe\nreports any permission errors detected in the PBS infrastructure files.\nThe command categorizes the errors and writes a list of them by category.\nEmpty categories are not written.\n\n.IP \"-f \" 10\nRun in \"fix\" mode. 
In this mode pbs_probe examines each of the relevant\ninfrastructure files and, where possible, fixes any errors that it detects,\nand prints a message of what got changed. If it is unable to fix a problem,\nit prints a message regarding what was detected. \n.IP \"-v\" 10\nRun in \"verbose\" mode. In this mode\n.B pbs_probe\nwrites a complete list of the infrastructure files that it checked.\n.LP\n.IP \"--version\" 10\nThe \n.B pbs_probe\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH STANDARD ERROR\nThe\n.B pbs_probe\ncommand writes a diagnostic message to standard error for each error occurrence.\n\n.SH EXIT STATUS\nExit code does not reflect results of probe; it reflects whether or not\nthe program ran.  \n.IP \"Zero\" 10\nWhen run correctly, whether or not \n.B pbs_probe \nfinds any problems or errors\n\n.IP \"Non-negative\" 10\nWhen run incorrectly\n.RE\n.SH SEE ALSO\npbs_server(8B), pbs_sched(8B), pbs_mom(8B).\n"
  },
  {
    "path": "doc/man8/pbs_sched.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_sched 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_sched\n\\- run a PBS scheduler\n\n.SH SYNOPSIS\n.B pbs_sched\n[-a <alarm>] [-c <clientsfile>] [-d <home dir>]\n          [-I <scheduler name>] [-L <logfile>] [-n] [-N]\n          [-p <output file>] [-R <port number>] [-S <port number>]\n          [-t <num threads>]\n\n.B pbs_sched\n--version\n\n.SH DESCRIPTION\nRuns the default scheduler or a multisched.\n.LP\n.B pbs_sched\nmust be executed with root permission.\n\n.SH OPTIONS\n.IP \"-a <alarm>\" 13\n.B Deprecated.\nOverwrites value of\n.I sched_cycle_length\nscheduler attribute.\n.br\nTime in seconds to wait for a scheduling cycle to finish.\n.br\nFormat: Time, in seconds.\n.br\n\n.IP \"-c <clientsfile>\" 13\nAdd clients to this scheduler's list of known clients.\nThe\n.I clientsfile\ncontains single-line entries of the form\n.RS 17\n.I $clienthost <hostname>\n.RE\n.IP\nEach\n.I hostname\nis added to the list of hosts allowed to connect to this scheduler.\nIf\n.I clientsfile\ncannot be opened, this scheduler aborts.\nPath can be absolute or relative.  If relative, it is relative to\nPBS_HOME/sched_priv.\n\n.IP \"-d <home dir>\" 13\nThe directory in which this scheduler will run.\n.br\nDefault: PBS_HOME/sched_priv.\n\n.IP \"-I <scheduler name>\" 13\nName of scheduler to start.  
Required when starting a multisched.\n\n.IP \"-L <logfile>\" 13\nThe absolute path and filename of the log file.\nA scheduler writes its PBS version and build information to\nthe logfile whenever it starts up or\nthe logfile is rolled to a new file.  See the\n.I -d\noption.\n.br\nDefault: A scheduler opens a file named for the current\ndate in the PBS_HOME/sched_logs directory.\n\n.IP \"-n\" 13\nTells this scheduler to not restart itself if it receives a\n.I sigsegv\nor a\n.I sigbus.\nA scheduler by default restarts itself if it receives either\nof these two signals more than five minutes after starting.\nA scheduler does not restart itself if it receives\neither one within five minutes of starting.\n\n.IP \"-N\" 13\nRuns the scheduler in standalone mode.\n.LP\n\n.IP \"-p <output file>\" 13\nAny output which is written to standard out or standard error is\nwritten to\n.I output file.\nThe pathname can be absolute or relative,\nin which case it is relative to PBS_HOME/sched_priv.\nSee the\n.I -d\noption.\n.br\nDefault: PBS_HOME/sched_priv/sched_out\n\n\n.IP \"-R <port number>\" 13\nThe port for MOM to use.  
If this option is not given,\nthe port number is taken from PBS_MANAGER_SERVICE_PORT, in pbs.conf.\n.br\nDefault: 15003\n\n.IP \"-S <port number>\" 13\nThe port for this scheduler to use.\n\nRequired when starting a multisched.\n\nFor the default scheduler, if this option is not specified\nthe default port is taken from PBS_SCHEDULER_SERVICE_PORT,\nin pbs.conf.\n.br\nDefault value for default scheduler: 15004\n.br\nDefault value for multisched: none\n\n.IP \"-t <num threads>\" 13\nSpecifies number of threads for this scheduler.\n.br\nScheduler automatically caps number of threads at the number of cores\n(or hyperthreads if applicable), regardless of value of\n.I num threads.\n.br\nOverrides PBS_SCHED_THREADS environment variable and PBS_SCHED_THREADS\nparameter in pbs.conf.\n.br\nValid values:\n.I >=1\n.br\nDefault: half the number of cores (or hyperthreads if applicable) on this host\n\n.IP \"--version\" 13\nThe\n.B pbs_sched\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH CONFIGURATION FILE\nThe file PBS_HOME/sched_priv/sched_config contains configuration parameters\nfor this scheduler.  Each entry must be a single unbroken line.\n.br\nFormat:\n.I name: value [prime | non-prime | all | none]\n.br\nwhere\n.RS 3\n.IP name 13\nMust not contain whitespace.\n.IP value 13\nMust be double-quoted if it contains whitespace.\n.I value\ncan be\n.I True | False | <number> | <string>.\n.I True\nand\n.I False\nare not case-sensitive.\n.IP \"[prime | non-prime | all | none]\" 13\nSpecifies when this setting applies:\nduring primetime,\nnon-primetime, all the time, or none of the time.  
A blank third field\nis equivalent to\n.I all\nwhich means that it applies to both primetime and non-primetime.\n.br\nValid values:\n.I \"all\", \"ALL\", \"none\", \"NONE\", \"prime\", \"PRIME\", \"non_prime\", \"NON_PRIME\"\n.LP\n.RE\nAny line starting with a hashmark, \"#\", is a comment, and is ignored.\n\n.B Configuration Parameters\n.br\n.IP \"backfill \" 13\n.B Deprecated.\nUse the\n.I backfill_depth\nqueue/server attribute instead.\nToggle that controls whether PBS uses backfilling.\nIf this is set to True, this scheduler attempts to schedule\nsmaller jobs around higher-priority jobs when using\n.I strict_ordering,\nas long as running the smaller jobs won't change the\nstart time of the jobs they were scheduled around. A scheduler\nchooses jobs in the standard order, so other high-priority jobs will be\nconsidered first in the set to fit around the highest-priority job.\n\nWhen this parameter is\n.I True\nand\n.I help_starving_jobs\nis\n.I True,\nthis scheduler backfills around starving jobs.\n\nCan be used\nwith\n.I strict_ordering\nand\n.I help_starving_jobs.\n.br\nFormat: Boolean.\n.br\nDefault:\n.I True all\n\n\n.IP \"backfill_prime \" 13\nThis Scheduler will not run jobs which would overlap\nthe boundary between primetime and non-primetime. This assures\nthat jobs restricted to running in either primetime or non-primetime\ncan start as soon as the time boundary happens. See also\n.I prime_spill, prime_exempt_anytime_queues.\n.br\nFormat: Boolean.\n.br\nDefault:\n.I False all\n\n.IP \"by_queue \" 13\nIf set to\n.I True,\nall jobs that can be run from the highest-priority\nqueue are run, then any jobs that can be run from the next queue are\nrun, and so on.  If\n.I by_queue\nis set to\n.I False,\nall jobs are treated as if they are in one large queue. 
The\n.I by_queue\nparameter is overridden\nby the\n.I round_robin\nparameter when\n.I round_robin\nis set to\n.I True.\n.br\nFormat: Boolean.\n.br\nDefault:\n.I True all\n\n.IP cpus_per_ssinode 13\n.B Obsolete.\n\n.IP dedicated_prefix 13\nQueue names with this prefix are treated as dedicated\nqueues, meaning jobs in that queue are considered for\nexecution only when the system is in dedicated time as specified in the\nconfiguration file PBS_HOME/sched_priv/dedicated_time.\n.br\nFormat: String\n.br\nDefault:\n.I ded\n\n.IP fair_share 13\nEnables the fairshare algorithm, and\nturns on usage collecting. Jobs will be selected based on a\nfunction of their recent usage and priority (shares). Not a prime option.\n.br\nFormat: Boolean\n.br\nDefault:\n.I False all\n\n.IP fairshare_decay_factor 13\nDecay multiplier for fairshare usage reduction.  Each decay period, the\nusage is multiplied by this value.\n.br\nValid values: between 0 and 1, not inclusive.  Not a prime option.\n.br\nFormat: Float\n.br\nDefault:\n.I 0.5\n\n.IP fairshare_decay_time 13\nTime between fairshare usage decay operations.  Not a prime option.\n.br\nFormat: Duration\n.br\nDefault:\n.I 24:00:00\n\n.IP fairshare_entity 13\nSpecifies the entity for which fairshare usage data will\nbe collected. Can be\n.I euser, egroup, Account_Name, queue,\nor\n.I egroup:euser.\nNot a prime option.\n.br\nFormat: String\n.br\nDefault:\n.I euser\n\n.IP fairshare_enforce_no_shares 13\nIf this option is enabled, jobs whose entity has zero\nshares will never run. Requires\n.I fair_share\nparameter to be enabled.\n.br\nFormat: Boolean\n.br\nDefault:\n.I False\n\n.IP fairshare_usage_res 13\nSpecifies the mathematical formula to use in fairshare calculations.\nIs composed of PBS resources as well as mathematical operations that\nare standard Python operators and/or those in the Python math module.\nWhen using a PBS resource, if\n.I resources_used.<resource name>\nexists, that value is used.  
Otherwise, the value is taken from\n.I Resource_List.<resource name>.\nNot a prime option.\n.br\nFormat: String\n.br\nDefault:\n.I cput\n\n.IP half_life 13\n.B Deprecated\n(as of 13.0).\nThe half-life for fairshare usage; after the amount of time\nspecified, the fairshare usage is halved. Requires that\n.I fair_share\nparameter be enabled.  Not a prime option.\n.br\nFormat: Duration\n.br\nDefault:\n.I 24:00:00\n\n.IP help_starving_jobs 13\nSetting this option enables starving job support. Once\njobs have waited for the amount of time given by\n.I max_starve\nthey are considered starving. If a job is considered starving, no\nlower-priority jobs will run until the starving job can be run, unless\nbackfilling is also specified. To use this option, the\n.I max_starve\nconfiguration parameter needs to be set as well. See also\n.I backfill, max_starve,\nand the server's\n.I eligible_time_enable\nattribute.\n.br\nAt each scheduler iteration, PBS calculates\n.I estimated.start_time\nand\n.I estimated.exec_vnode\nfor starving jobs being backfilled around.\n.br\nFormat: Boolean\n.br\nDefault:\n.I True all\n\n.IP job_sort_key 13\n.RS\nSpecifies how jobs should be sorted.\n.I job_sort_key\ncan be used to sort either by resources or by special case sorting\nroutines. Multiple\n.I job_sort_key\nentries can be used, one to a line, in which\ncase the first entry will be the primary sort key, the second will be\nused to sort equivalent items from the first sort, etc.\nThe\n.I HIGH\noption implies descending sorting,\n.I LOW\nimplies ascending. 
See example for details.\n\nThis attribute is overridden by the\n.I job_sort_formula\nattribute.\nIf both are set,\n.I job_sort_key\nis ignored and an error message is\nprinted.\n\n.br\nSyntax:\n.br\n.I job_sort_key: \"\"\"<resource name> HIGH|LOW\"\"\"\n.br\n.I job_sort_key: \"\"\"fairshare_perc HIGH|LOW\"\"\"\n.br\n.I job_sort_key: \"\"\"job_priority HIGH|LOW\"\"\"\n\n.br\nThere are three special case sorting routines, which can be used\ninstead of\n.I <resource name>:\n.RS\n.IP \"fairshare_perc HIGH\" 13\nSort based on how much fairshare percentage the entity deserves,\nbased on the values in the\n.I resource group\nfile. If user A has more priority than user B, all of user A's\njobs will always run first.  Past history is not used.\n\nThis should only\nbe used if entity share (strict priority) sorting is needed.\nIncompatible with\n.I fair_share\nscheduling parameter being\n.I True.\n\n.IP \"job_priority HIGH | LOW\" 13\nSort jobs by the job\n.I priority\nattribute regardless of job owner.\n\n.IP \"sort_priority HIGH|LOW\" 13\n.B Deprecated.\nSee\n.I job_priority\nabove.\n.RE\n\nThe following example shows how to sort jobs so that those with\nhigh CPU count come first:\n.RS\njob_sort_key: \"ncpus HIGH\" all\n.br\n.RE\nThe following example shows how to sort jobs so that those with\nlower memory come first:\n.br\n.RS\njob_sort_key: \"mem LOW\" prime\n.RE\n\n.br\nFormat: Quoted string\n.br\nDefault: Not in force\n.RE\n\n.IP load_balancing 13\nWhen set to\n.I True,\nthis scheduler takes into account the load average on vnodes as\nwell as the resources listed in the\n.I resources\nline in sched_config.  
Load balancing can result in overloaded CPUs.\n.br\nFormat: Boolean\n.br\nDefault:\n.I False all\n\n.IP load_balancing_rr 13\n.B Deprecated.\nTo duplicate this setting, enable\n.I load_balancing\nand set\n.I smp_cluster_dist\nto\n.I round_robin.\n\n.IP log_filter 13\n.B Obsolete.\nSee the\n.I log_events\nscheduler attribute.\n\n.IP max_starve 13\nThe amount of time before a job is considered starving. This\nvariable is used only if\n.I help_starving_jobs\nis set.\n.br\nUpper limit: None\n.br\nFormat: Duration\n.br\nDefault:\n.I 24:00:00\n\n.IP mem_per_ssinode 13\n.B Obsolete.\n\n.IP mom_resources 13\nThis option is used to query the MoMs to set the value of\n.I resources_available.<resource name>\nwhere\n.I resource name\nis a site-defined\nresource. Each MoM is queried with the resource name and the\nreturn value is used to replace\n.I resources_available.<resource name>\non that vnode. On a multi-vnoded machine with a natural vnode,\nall vnodes share anything set in\n.I mom_resources.\n.br\nFormat: String\n\n.IP node_sort_key 13\n.RS\nDefines sorting on resource or priority values on vnodes. Resource\nmust be numerical, for example, long or float.  
Up to 20\n.I node_sort_key entries\ncan be used, in which\ncase the first entry will be the primary sort key, the second will be\nused to sort equivalent items from the first sort, etc.\n\n.br\nSyntax:\n.br\n.I node_sort_key: <resource name>|sort_priority <HIGH | LOW>\n.br\n.I node_sort_key: <resource name> <HIGH | LOW> <total | assigned | unused>\n.br\nwhere\n.RS\n.IP total\nUse the\n.I resources_available\nvalue.\n.IP assigned\nUse the\n.I resources_assigned\nvalue.\n.IP unused\nUse the value given by\n.I resources_available - resources_assigned.\n.IP sort_priority\nSort vnodes by the value of the vnode\n.I priority\nattribute.\n.RE\n\n.br\nWhen sorting on a resource, the default third field is \"total\".\n.br\nFormat: String\n.br\nDefault:\n.I node_sort_key: sort_priority HIGH all\n.RE\n\n.IP nonprimetime_prefix 13\nQueue names which start with this prefix are treated\nas non-primetime queues. Jobs in these queues\nrun only during non-primetime. Primetime and non-primetime are\ndefined in the holidays file.\n.br\nFormat: String\n.br\nDefault:\n.I np_\n\n.IP peer_queue 13\nDefines the mapping of a pulling queue to a furnishing queue\nfor peer scheduling. Maximum number is 50 peer queues per\nscheduler.\n.br\nFormat: String\n.br\nDefault: Unset\n\n.IP preemptive_sched 13\nEnables job preemption.  See the\n.I preempt_order\nscheduler attribute.\n.br\nFormat: String\n.br\nDefault:\n.I True all\n\n.IP preempt_order 13\n.B Obsolete.\nSee the\n.I preempt_order\nscheduler attribute.\n\n.IP preempt_prio 13\n.B Obsolete.\nSee the\n.I preempt_prio\nscheduler attribute.\n\n.IP preempt_queue_prio 13\n.B Obsolete.\nSee the\n.I preempt_queue_prio\nscheduler attribute.\n\n.IP preempt_sort 13\n.B Obsolete.\nSee the\n.I preempt_sort\nscheduler attribute.\n\n.IP primetime_prefix 13\nQueue names starting with this prefix are treated as\nprimetime queues. Jobs in these queues run only during\nprimetime. 
Primetime and non-primetime are defined in the\nholidays file.\n.br\nFormat: String\n.br\nDefault:\n.I p_\n\n.IP prime_exempt_anytime_queues 13\nDetermines whether\n.I anytime\nqueues are controlled by\n.I backfill_prime.\nIf set to true, jobs in an\n.I anytime\nqueue\nare not prevented from running across a primetime/nonprimetime\nor non-primetime/primetime boundary. If set to\nfalse, the jobs in an\n.I anytime\nqueue may not cross this boundary,\nexcept for the amount specified by their\n.I prime_spill\nsetting.\nSee also\n.I backfill_prime, prime_spill.\n.br\nFormat: Boolean\n.br\nDefault:\n.I False\n\n.IP prime_spill 13\n.RS\nSpecifies the amount of time a job can spill over from non-primetime\ninto primetime or from primetime into non-primetime.\nThis option is only meaningful if\n.I backfill_prime\nis\n.I True.\nThis option can be separately specified for\nprimetime and non-primetime. See also\n.I backfill_prime, prime_exempt_anytime_queues.\n.br\nFor example, non-primetime jobs can spill into primetime by 1 hour:\n.RS\n.I prime_spill: 1:00:00 prime\n.RE\nFor example, jobs in either prime/non-prime can spill into\nthe other by 1 hour:\n.RS\n.I prime_spill: 1:00:00 all\n.RE\n.br\nFormat: Duration\n.br\nDefault:\n.I 00:00:00\n.RE\n\n.IP provision_policy\nSpecifies how vnodes are selected for provisioning.  Can be set by\nManager only; readable by all.  Can be set to one of the following:\n\n.RS\n.IP \"avoid_provision\" 5\n\nPBS first tries to satisfy the job's request from free vnodes that\nalready have the requested AOE instantiated.  
PBS uses\n.I node_sort_key\nto sort these vnodes.\n\nIf it cannot satisfy the job's request using vnodes that already have\nthe requested AOE instantiated, PBS uses the server's\n.I node_sort_key\nto select the free vnodes that must be\nprovisioned in order to run the job, choosing from any free vnodes,\nregardless of which AOE is instantiated on them.\n\nOf the selected vnodes, PBS provisions any that do not have the\nrequested AOE instantiated on them.\n\n\n.IP \"aggressive_provision\" 5\nPBS selects vnodes to be provisioned without considering which AOE\nis currently instantiated.\n\nPBS uses the server's\n.I node_sort_key\nto select the vnodes on which to run the job,\nchoosing from any free vnodes, regardless of which AOE is instantiated\non them.  Of the selected vnodes, PBS provisions any that do not have\nthe requested AOE instantiated on them.\n.LP\n\nFormat: string\n.br\nDefault:\n.I \"aggressive_provision\"\n.RE\n\n.IP resources 13\n.RS\nSpecifies those resources which are not to be over-allocated,\nor if Boolean, are to be honored, when\nscheduling jobs. Vnode-level Boolean resources are automatically\nenforced and do not need to be listed here. Limits are set\nby setting\n.I resources_available.<resource name>\non vnodes, queues, and the server. A scheduler\nconsiders numeric (integer or float) items as consumable\nresources and ensures that no more are assigned than are available\n(e.g.\n.I ncpus\nor\n.I mem\n). 
Any string resources are compared\nusing string comparisons (e.g.\n.I arch\n).\n.br\nIf \"host\" is not added to the\nresources line, then when the user submits a job requesting a specific\nvnode in the following syntax:\n.RS\nqsub -l select=host=vnodeName\n.RE\nthe job will run on any host.\n.br\nFormat: String\n.br\nDefault:\n.I ncpus, mem, arch, host, vnode, aoe\n.RE\n\n.IP resource_unset_infinite 13\nResources in this list are treated as infinite if they are unset.\nCannot be set differently\nfor primetime and non-primetime.\n.br\nExample:\n.I resource_unset_infinite: vmem, foo_licenses\n.br\nFormat: Comma-delimited list of resources\n.br\nDefault: Empty list\n\n.IP round_robin 13\nIf set to\n.I True,\nthis scheduler considers one job from the first queue, then one job\nfrom the second queue, and so on in a circular fashion.  The queues\nare ordered with the highest-priority queue first.  Each scheduling\ncycle starts with the same highest-priority queue, which will\ntherefore get preferential treatment.\n\nIf there are groups of queues with the same priority, and this\nparameter is set to\n.I True,\nthis scheduler round-robins through each\ngroup of queues before moving to the next group.\n\nIf\n.I round_robin\nis set to\n.I False,\nthis scheduler considers jobs according to the setting of the\n.I by_queue\nparameter.  When\n.I True, overrides the\n.I by_queue parameter.\n\nFormat:\n.I Boolean\n.br\nDefault:\n.I False all\n\n.IP server_dyn_res 13\nDirects this scheduler to replace the server's\n.I resources_available\nvalues with new values returned\nby a site-specific external program.\n.br\nFormat: String\n.br\nDefault: Unset\n\n.IP smp_cluster_dist 13\n.RS\n.B Deprecated (12.2).\nSpecifies how single-host jobs should be distributed to\nall hosts of the complex. 
Options:\n.RS\n.IP pack\nKeep putting jobs onto one host until it is full and then move on to the next.\n.IP round_robin\nPut one job on each vnode in turn before cycling back to the first one.\n.IP lowest_load\nPut the job on the lowest-loaded host.\n.RE\n.br\nFormat: String\n.br\nDefault:\n.I pack all\n.RE\n\n.IP sort_queues 13\n.B Obsolete.\n\n.IP strict_fifo 13\n.B Deprecated.\nUse\n.I strict_ordering.\n\n.IP strict_ordering 13\nSpecifies that jobs must be run in the order determined\nby whatever sorting parameters are being used. This means that\na job cannot be skipped due to resources required not being\navailable.\nIf a job due to run next cannot run, no job will run, unless\nbackfilling is used, in which case jobs can be backfilled around the job that is\ndue to run next.\n.br\nExample line in PBS_HOME/sched_priv/sched_config:\n.RS\n.I strict_ordering: true ALL\n.br\nFormat: Boolean\n.br\nDefault:\n.I False all\n.RE\n\n.IP sync_time 13\n.B Obsolete.\n\n.IP unknown_shares 13\nThe number of shares for the\n.I unknown\ngroup. These\nshares determine the portion of a resource to be allotted to that\ngroup via fairshare. Requires\n.I fair_share\nto be enabled.\n.br\nFormat: Integer\n.br\nDefault: The unknown group gets 0 shares\n\n.SH FORMATS\n.IP Boolean 10\nAllowable values (case insensitive): True|T|Y|1|False|F|N|0\n\n.IP Duration 10\nPeriod of time.  Expressed either as\n.br\n.I \\ \\ \\ integer seconds\n.br\nor\n.br\n.I \\ \\ \\ [[hours:]minutes:]seconds[.milliseconds]\n.br\nMilliseconds are rounded to the nearest second.\n.LP\n\n.IP Float 10\nAllowable values: [+-] 0-9 [[0-9] ...][.][[0-9] ...]\n\n.IP Long 10\nLong integer.\nAllowable values: 0-9 [[0-9] ...], and + and -\n\n.IP Size 10\nNumber of bytes or words.  
The size of a word is 64 bits.\n.br\nFormat:\n.I <integer>[<suffix>]\n.br\nwhere\n.I suffix\ncan be\n.RS 13\n.IP \"\\ b\\ or\\ \\ w\" 13\nbytes or words\n.IP \"kb\\ or\\ kw\"\nKilobytes or kilowords (2 to the 10th, or 1024)\n.IP \"mb\\ or\\ mw\" 13\nMegabytes or megawords (2 to the 20th, or 1,048,576)\n.IP \"gb\\ or\\ gw\" 13\nGigabytes or gigawords (2 to the 30th, or 1,073,741,824)\n.IP \"tb\\ or\\ tw\" 13\nTerabytes or terawords (2 to the 40th, or 1024 gigabytes)\n.IP \"pb\\ or\\ pw\" 13\nPetabytes or petawords (2 to the 50th, or 1,048,576 gigabytes)\n.RE\n.IP\nDefault:\n.I bytes\n\n\n.IP String 10\n(Resource value)\n.br\nAny character, including the space character.\n.br\nOnly one of the two types of quote characters, \" or ', may appear in any given value.\n.br\nAllowable values: [_a-zA-Z0-9][[-_a-zA-Z0-9 !\"#$%' ()*+,-./:;<=>?@[\\\\]^_{|}~] ...]\n.br\nString resource values are case-sensitive.\n\n.SH FILES\n$PBS_HOME/sched_priv is\nthe default directory for configuration files.\n\n$PBS_HOME/sched_priv/holidays is the holidays file.\n\n.SH SIGNAL HANDLING\n\nAll signals are ignored until the end of the cycle.  Most signals are\nhandled in the standard UNIX fashion.\n\n.IP SIGHUP\nThis scheduler closes and reopens its log file and rereads its\nconfiguration file if one exists.\n.IP \"SIGALRM, SIGBUS, etc.\"\nIgnored until end of scheduling cycle.  This scheduler quits.\n.IP \"SIGINT and SIGTERM\"\nThis scheduler closes its log file and shuts down.\n.LP\n\n\n.SH EXIT STATUS\nZero upon normal termination.\n\n.SH SEE ALSO\npbs_server(8B), pbs_mom(8B)\n"
  },
  {
    "path": "doc/man8/pbs_server.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_server 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_server \n- start a PBS batch server\n.SH SYNOPSIS\n.B pbs_server \n[-a <active>] \n[-A <acctfile>] \n[-C]\n[-d <config path>] \n.RS 11\n[-e <mask>] \n[-F <delay>]\n[-L <logfile>]\n[-M <MoM port>] \n[-N]\n[-p <port number>]\n[-R <MoM RPP port>] \n[-S <default scheduler port>]\n[-s <replacement string>]\n[-t <restart type>] \n.RE\n.B pbs_server\n--version\n\n.SH DESCRIPTION\nThe\n.B pbs_server\ncommand starts a batch server on the local host.\nTypically, this command is in a local boot file such as\n.I /etc/rc.local.\nIf the batch server is already running, \n.B pbs_server\nexits with an error.\n\n.B Required Permission\n.br\nTo ensure that the \n.B pbs_server\ncommand is not runnable by the general user community, the server\nruns only if its real and effective uid is zero.  
You must be root.\n\n.SH OPTIONS\n.IP \"-A <acctfile>\" 10\nSpecifies an absolute path name for the file to use as the accounting file.\nIf not specified, the file is named for the current date in the\nPBS_HOME/server_priv/accounting directory.\n\n.IP \"-a <active> \" 10\nWhen \n.I True, \nthe server is in state \"active\" and the default scheduler is called\nto schedule jobs.\nWhen \n.I False, \nthe server is in state \"idle\" and the default scheduler is not called\nto schedule jobs.\nSets the server's \n.I scheduling \nattribute.\nIf this option is not specified, the server uses the prior \n.I value \nfor the \n.I scheduling\nattribute.\n.br\nFormat: Boolean\n.br\n\n.IP \"-C\" 10\nThe server starts up, creates the database, and exits.  Windows only.\n\n.IP \"-d <config path>\" 10\nSpecifies the absolute path to the directory containing the server\nconfiguration files, PBS_HOME.  Each server must have a different\nconfiguration directory.  The default configuration directory is\nspecified in $PBS_HOME, and is typically\n.I /var/spool/PBS .\n\n.IP \"-e <mask>\"\nSpecifies a log event mask to be used when logging.  See \"log_events\" in the\npbs_server_attributes(7B) man page.\n\n.IP \"-F <delay>\"\nSpecifies the number of seconds that the secondary server should wait \nbefore taking over when it believes the primary server is down.  If \nthe number of seconds is specified as \n.I -1, \nthe secondary will make one\nattempt to contact the primary and then become active.\n.br\nDefault: \n.I 30 seconds\n\n.IP \"-L <logfile>\"\nSpecifies the absolute path name for the log file.\nIf not specified, the file is named for the current date in the\nPBS_HOME/server_logs directory.  PBS_HOME is specified in the $PBS_HOME\nenvironment variable or in /etc/pbs.conf; see the\n.I -d\noption.\n\n.IP \"-M <MoM port>\"\nSpecifies the host name and/or port number on which the server should connect\nto MOM.  
The option argument,\n.I MoM port,\nuses the syntax: \n.br\n.I \\ \\ \\ [<hostname>][:<port number>]\n.br\nIf \n.I hostname \nis not specified, the local host is assumed.   \n.br\nIf \n.I port number \nis not specified, the default port is assumed.  See the \n.I -M \noption for pbs_mom(8).  \n.br\nDefault: \n.I 15002\n\n.IP \"-N\" 10\nRuns the server in standalone mode.\n\n.IP \"-p <port number>\" 10\nSpecifies the port number on which the server is to listen for batch requests.\nIf multiple servers are running on a\nsingle host, each must have its own unique port number.\nThis option is for use in testing with multiple batch systems on a single host.\n.br\nDefault: \n.I 15001\n\n.IP \"-R <MoM RPP port>\"\nSpecifies the port number on which the server should query the up/down\nstatus of MoM.    See the \n.I -R \noption for pbs_mom(8).  \n.br\nDefault: \n.I 15003\n\n.IP \"-S <default scheduler port>\" 10\nSpecifies the port number to which the server should connect when\ncontacting the default scheduler.  The option argument,\n.I default scheduler port,\nuses the syntax:\n.br\n.I \\ \\ \\ [<hostname>][:<port number>]\n.br\nDefault: \n.I 15004\n\n.IP \"-s <replacement string>\" 10\nSpecifies the string to use when replacing spaces in accounting \nentity names.  Only available under Windows.\n\n.IP \"-t <restart type>\"\nSpecifies behavior when the server restarts.  The\n.I restart type\nargument is one of the following:\n.RS\n.IP cold 7\nAll jobs are purged.  Positive confirmation is required before\nthis direction is accepted.\n\n.IP create 7\nThe server discards any existing configuration files: server, nodes, queues,\nand jobs, and initializes configuration files to the default values.  \nThe default scheduler is idled (\n.I scheduling \nis set to \n.I False\n).  Any multischeds are deleted.\n\n.IP hot 7\nAll jobs in the Running state are retained in that state.  
Any job\nthat was requeued into the Queued state from the Running state when\nthe server last shut down is run immediately, assuming the required\nresources are available.  This returns the server to the same state as\nwhen it went down.  After those jobs are restarted, normal scheduling\ntakes place for all remaining queued jobs.  All other jobs are\nretained in their current state.\n\n.IP\nIf a job cannot be restarted immediately because of a missing resource, such\nas a vnode being down, the server attempts to restart it periodically for\nup to 5 minutes.   After that period, the server will revert to a normal state,\nas if \n.I warm \nstarted, and will no longer attempt to restart any remaining jobs\nwhich were running prior to the shutdown.\n\n.IP updatedb 7\nUpdates format of PBS data from the previous format to the\ndata service format.\n\n.IP warm 7\nAll jobs in the Running state are retained in that state.  All other\njobs are maintained in their current state.  The default scheduler\ntypically chooses new jobs for execution.  \n.I warm \nis the default if \n.I -t\nis not specified.\n\n\n.RE\n.LP\n.IP \"--version\" 10\nThe \n.B pbs_server\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH FILES\n.IP $PBS_HOME/server_priv 10\nDefault directory for configuration files\n\n.IP $PBS_HOME/server_logs 10\nDirectory for log files recorded by the server\n\n\n.SH Signal Handling\nWhen it receives the following signals, the server performs the following actions:\n\n.IP SIGHUP\nThe current server log and accounting log are closed and reopened.  
This \nallows for the prior log to be renamed and a new log started from the time\nof the signal.\n\n.IP SIGTERM\nCauses a rapid orderly shutdown of \n.B pbs_server, \nidentical to \n.I \"qterm -t quick\".\n\n.IP SIGSHUTDN\nOn systems where SIGSHUTDN is defined, causes an orderly\n.I \"quick\" \nshutdown of the server.\n\n.IP \"SIGPIPE, SIGUSR1, SIGUSR2\"\nThese signals are ignored.\n.LP\nAll other signals have their default behavior installed.\n\n.SH Diagnostic Messages\nThe server will record a diagnostic message in a log file for any\nerror occurrence.  The log files are maintained in the server_logs\ndirectory below the home directory of the server.\nIf the log file cannot be opened, the diagnostic message is written\nto the system console.  The server writes its PBS \nversion and build information to the logfile whenever it starts up or \nthe logfile is rolled to a new file.\n\n.SH Stopping the PBS Server\n.B Stopping the Server on Linux\n.br\nUse the \n.B qterm\ncommand:\n.br\n.I \\ \\ \\ qterm \n.br\n(see qterm(8B)) \n.br\nor send a SIGTERM:\n.br\n.I \\ \\ \\ kill <server PID>\n\n.SH EXIT STATUS\n.IP Zero\nWhen the server has run in the background and then exits\n\n.IP \"Greater than zero\"\nIf the server command fails to begin batch operation\n\n\n.SH SEE ALSO\nqsub (1B), pbs_connect(3B),\npbs_mom(8B), pbs_sched(8B),\npbsnodes(8B), qdisable(8B), qenable(8B), qmgr(8B), qrun(8B), qstart(8B),\nqstop(8B), and qterm(8B)\n"
  },
  {
    "path": "doc/man8/pbs_snapshot.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_snapshot 8B \"20 September 2019\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_snapshot\n- Linux only.  
Capture PBS data to be used for diagnostics\n.SH SYNOPSIS\n.B pbs_snapshot\n-h, --help\n.br\n.B pbs_snapshot \n-o <output directory path> \n.RS 12\n[--accounting-logs=<number of days>] \n.br\n[--additional-hosts=<hostname list>] [--basic]\n.br\n[--daemon-logs=<number of days>] \n[-H <server host>]\n.br\n[-l <log level>] \n[--map=<file path>]\n[--obfuscate]\n.br\n[--with-sudo]\n.RE\n.B pbs_snapshot\n--version\n.SH DESCRIPTION\n\nYou use \n.B pbs_snapshot \nto capture PBS data for diagnostics.  This tool is written in Python\nand uses PTL libraries, including PBSSnapUtils, to extract the data.\nYou can optionally anonymize the PBS data.  The \n.B pbs_snapshot \ncommand captures data from all multischeds.  The\ncommand detects which daemon or daemons are running on the host where\nit is collecting information, and captures daemon and system data\naccordingly.  If no PBS daemons are running, the command collects\nsystem information.  The output tarball contains information about the\nhost specified via the \n.I -H \noption, or if that is not specified, the\nlocal host.  If you specify additional hosts, the command creates a\ntarball for each additional host and includes it as a sub-tarball in\nthe output.\n\nIf you want to capture only PBS configuration information, use the \n.I --basic\noption.\n\n.B Required Privilege\n.br\nThe \n.B pbs_snapshot \ncommand allows you to use the \n.B sudo \ninfrastructure\nprovided by the PTL framework to capture root-owned information via\n.I --with-sudo.  \nAll other information is collected as a normal user.  
If\nyou need to run \n.B pbs_snapshot \nas a non-privileged user, and without\nusing the PTL \n.I --with-sudo \ninfrastructure, you must be root if you want\nroot-owned information to be collected.\n\n.B Restrictions\n.br\nThe \n.B pbs_snapshot \ncommand is not available on Windows.\n\n.SH OPTIONS\n.IP \"--accounting-logs=<number of days>\" 5\nSpecifies number of days of accounting logs to be collected; this\ncount includes the current day.\n.br\nValue of \n.I number of days \nmust be >=0:\n.br\n   If number of days is 0, no logs are captured.\n.br\n   If number of days is 1, only the logs for the current day \n.br\n   are captured.\n.br\nDefault: \n.B pbs_snapshot\ncollects 30 days of accounting logs\n\n.IP \"--additional-hosts=<hostname list>\" 5\nSpecifies that \n.B pbs_snapshot\nshould gather data from the specified list of additional hosts.  \nLaunches the \n.B pbs_snapshot \ncommand on each specified host, creates a\ntarball there named \n.I <hostname>_snapshot.tgz, \nand includes it as a sub-tarball in the main output tarball.  If you use the\n.I --with-sudo \noption, each launched copy uses that option as well.\n\nThe command does not query the server when it runs at a non-server host.  \n\nThe command collects a full snapshot, including the following information:\n.RS 8\nDaemon logs, for the number of days of logs being captured, specified\nvia the \n.I --daemon-logs=<number of days> \noption\n.br\nThe PBS_HOME/<daemon>_priv directory\n.br\nAccounting logs if server daemon runs on host\n.br\nSystem information\n.RE\n.IP\nFormat for \n.I hostname list \nis a comma-separated list of one or more hostnames: \n.br\n.I <hostname>[, <hostname> ...]\n\n.IP \"--basic\" 5\nCaptures PBS configuration information only.  
Captures the following:\n.nf\n\nOutput File                   Description of Captured Information\n------------------------------------------------------------------\nserver/qstat_Bf.out           Output of qstat -Bf\nserver/qstat_Qf.out           Output of qstat -Qf\nscheduler/qmgr_lsched.out     Output of qmgr -c 'list sched'\nnode/pbsnodes_va.out          Output of pbsnodes -va\nreservation/pbs_rstat_f.out   Output of pbs_rstat -f\njob/qstat_f.out               Output of qstat -f\nhook/qmgr_lpbshook.out        Output of qmgr -c 'list pbshook'\nsched_priv/ for each          Copy of each scheduler's \nscheduler instance            sched_priv directory\nserver_priv/resourcedef       Copy of server_priv/resourcedef file  \npbs.conf                      Copy of /etc/pbs.conf on server host    \npbs_snapshot.log              Log of pbs_snapshot execution\nctime                         Timestamp of when the snapshot\n                              was taken\n.fi\n\nCan be combined with other options such as \n.I --accounting-logs\nand \n.I --daemon-logs\nin order to capture additional information.\n\n.IP \"--daemon-logs=<number of days>\" 5\nSpecifies number of days of daemon logs to be collected; this count\nincludes the current day. \n.br\nValue of \n.I number of days \nmust be >=0:\n.RS 8\nIf number of days is 0, no logs are captured.\n.br\nIf number of days is 1, only the logs for the current day are captured.\n.RE\n.IP\nDefault: \n.B pbs_snapshot\ncollects 5 days of daemon logs\n\n.IP \"-h, --help\" 5\nPrints usage and exits.\n\n.IP \"-H <hostname>\" 5\nSpecifies hostname for host whose retrieved data is to be at the top\nlevel in the output tarball.  If not specified, \n.B pbs_snapshot \nputs data for the local host at the top level in the output tarball.\n\n.IP \"-l <log level>\" 5\nSpecifies level at which \n.B pbs_snapshot\nwrites its log.  
The log file is \npbs_snapshot.log, in the output directory path specified using the \n.I -o <output directory path> \noption.\n\nValid values, from most comprehensive to least: DEBUG2, DEBUG,\nINFOCLI2, INFOCLI, INFO, WARNING, ERROR, FATAL\n.br\nDefault: INFOCLI2\n\n\n.IP \"--map=<file path>\" 5\nSpecifies path for file containing obfuscation map, which is a\n<key>:<value> pair-mapping of obfuscated data.  Path can be absolute\nor relative to current working directory.\n.br\nDefault: \n.B pbs_snapshot\nwrites its obfuscation map in a file called obfuscate.map in the\nlocation specified via the\n.I -o <output directory path> \noption.  \n.br\nCan only be used with the \n.I --obfuscate \noption.\n\n.IP \"--obfuscate\" 5\nObfuscates (anonymizes) or deletes sensitive PBS data captured by \n.B pbs_snapshot.\n.br\nObfuscates the following data: \n.RS 8\neuser, egroup, project, Account_Name, operators, managers, group_list,\nMail_Users, User_List, server_host, acl_groups, acl_users,\nacl_resv_groups, acl_resv_users, sched_host, acl_resv_hosts,\nacl_hosts, Job_Owner, exec_host, Host, Mom, resources_available.host,\nresources_available.vnode\n.RE\n.IP \" \" 5\nDeletes the following data: \n.RS 8\nVariable_List, Error_Path, Output_Path, mail_from, Mail_Points,\nJob_Name, jobdir, Submit_arguments, Shell_Path_List\n.RE\n\n.IP \"--version\" 5\nThe \n.B pbs_snapshot\ncommand prints its PBS version information and exits.\nCan only be used alone.\n\n.IP \"--with-sudo\" 5\nUses the PTL \n.B sudo \ninfrastructure in order to capture root-owned\ninformation via \n.B sudo.  \n(Information not owned by root is captured\nusing normal privilege, not root privilege.)  With this option, you do\nnot need to prefix your \n.B pbs_snapshot \ncommand with \n.B sudo, \nand you do not need root privilege.\n\n.SH Arguments to pbs_snapshot\n.IP \"-o <output directory path>\" 5\nPath to directory where \n.B pbs_snapshot\nwrites its output tarball.  Required.  
Path can be absolute or\nrelative to current working directory.\n.br\nFor example, if you specify \n.I -o /temp,\n.B pbs_snapshot\nwrites \"/temp/snapshot_<timestamp>.tgz\".\n.br\nThe output directory path must already exist. \n\n.SH Output\n.B Output Location\n.br\nYou must use the \n.I -o <output directory path> \noption to specify the directory where \n.B pbs_snapshot\nwrites its output.  The path can be absolute or relative to current\nworking directory.  The output directory must already exist.  As an\nexample, if you specify \"-o /temp\", \n.B pbs_snapshot\nwrites \"/temp/snapshot_<timestamp>.tgz\".\n\n.B Output Contents\n.br\nThe \n.B pbs_snapshot\ncommand writes \nthe output for the local host and each specified remote host as a\ntarball.  Tarballs for remote hosts are included in the main tarball.\n\nThe command captures JSON output from \n.B qstat -f -F json \nand \n.B pbsnodes -av -F json.  \n.br\nThe main tarball contains the following directory structure, files, and tarballs:\n.nf\nDirectory  Directory\nor File    Contents             Description\n------------------------------------------------------------------------\nserver/         \n           qstat_B.out          Output of qstat -B\n           qstat_Bf.out         Output of qstat -Bf\n           qmgr_ps.out          Output of qmgr print server\n           qstat_Q.out          Output of qstat -Q\n           qstat_Qf.out         Output of qstat -Qf\n           qmgr_pr.out          Output of qmgr print resource\n\nserver_priv/                    Copy of the PBS_HOME/server_priv \n                                directory.   \n                                Core files are captured separately; \n.fi\n.RS 32\nsee\n.I core_file_bt/.  
\n.RE\n.nf\n\n           accounting/          Accounting logs from \n                                PBS_HOME/server_priv/accounting/ \n                                directory for the number of days \n.fi\n.RS 32\nspecified via \n.I --accounting-logs \noption\n.RE\n.nf\n\nserver_logs/                    Server logs from the \n                                PBS_HOME/server_logs directory for the\n                                number of days specified \n.fi\n.RS 32\nvia\n.I --daemon-logs \noption\n.RE\n.nf\n\njob/            \n           qstat.out            Output of qstat\n           qstat_f.out          Output of qstat -f\n           qstat_f_F_json.out   Output of qstat -f -F json\n           qstat_t.out          Output of qstat -t\n           qstat_tf.out         Output of qstat -tf\n           qstat_x.out          Output of qstat -x\n           qstat_xf.out         Output of qstat -xf\n           qstat_ns.out         Output of qstat -ns\n           qstat_fx_F_dsv.out   Output of qstat -fx -F dsv\n           qstat_f_F_dsv.out    Output of qstat -f -F dsv\nnode/           \n           pbsnodes_va.out      Output of pbsnodes -va\n           pbsnodes_a.out       Output of pbsnodes -a\n           pbsnodes_avSj.out    Output of pbsnodes -avSj\n           pbsnodes_aSj.out     Output of pbsnodes -aSj\n           pbsnodes_avS.out     Output of pbsnodes -avS\n           pbsnodes_aS.out      Output of pbsnodes -aS\n           pbsnodes_aFdsv.out   Output of pbsnodes -aF dsv\n           pbsnodes_avFdsv.out  Output of pbsnodes -avF dsv\n           pbsnodes_avFjson.out Output of pbsnodes -avF json\n           qmgr_pn_default.out  Output of qmgr print node @default\n\nmom_priv/                       Copy of the PBS_HOME/mom_priv \n                                directory.\n                                Core files are captured separately; \n                                see core_file_bt/.  
\n\nmom_logs/                       MoM logs from the PBS_HOME/mom_logs \n                                directory for the number of days \n.fi\n.RS 32\nspecified via \n.I --daemon-logs \noption\n.RE\n.nf\n\ncomm_logs/                      Comm logs from the PBS_HOME/comm_logs \n                                directory for the number of days \n.fi\n.RS 32\nspecified via \n.I --daemon-logs \noption\n.RE\n.nf  \n\nsched_priv/                     Copy of the PBS_HOME/sched_priv \n                                directory, with all files.\n                                Core files are not captured; \n                                see core_file_bt/. \n\nsched_logs/                     Scheduler logs from the \n                                PBS_HOME/sched_logs directory for \n                                the number of days specified \n.fi\n.RS 32\nvia\n.I --daemon-logs \noption    \n.RE\n.nf\n\nsched_priv_<multisched name>/   Copy of the \n                                PBS_HOME/sched_priv_<multisched_name>\n                                directory, with all files.\n                                Core files are not captured; \n                                see core_file_bt/. 
\n\nsched_logs_<multisched name>/   Scheduler logs from the \n                                PBS_HOME/sched_logs_<multisched_name> \n                                directory for the number\n                                of days specified\n.fi\n.RS 32\nvia\n.I --daemon-logs \noption    \n.RE\n.nf\n\nreservation/            \n           pbs_rstat_f.out      Output of pbs_rstat -f\n\n           pbs_rstat.out        Output of pbs_rstat\n\nscheduler/              \n           qmgr_lsched.out      Output of qmgr list sched\n\nhook/           \n           qmgr_ph_default.out  Output of qmgr print hook @default\n\n           qmgr_lpbshook.out    Output of qmgr list pbshook\n\ndatastore/              \n           pg_log/              Copy of the \n                                PBS_HOME/datastore/pg_log directory \n                                for the number of days specified \n.fi\n.RS 32\nvia\n.I --daemon-logs \noption\n.RE\n.nf\n\ncore_file_bt/                   Stack backtrace from core files \n\n           sched_priv/          Files containing the output of thread \n                                apply all backtrace full on all core \n                                files captured from PBS_HOME/sched_priv\n\n           sched_priv_          Files containing the output of thread \n           <multisched name>/   apply all backtrace full on all core \n                                files captured from PBS_HOME/\n                                sched_priv_<multisched name>\n\n           server_priv/         Files containing the output of thread \n                                apply all backtrace full on all core \n                                files captured from \n                                PBS_HOME/server_priv\n\n           mom_priv/            Files containing the output of thread \n                                apply all backtrace full on all core \n                                files captured from PBS_HOME/mom_priv\n\n           misc/                Files containing the output of thread \n        
                        apply all backtrace full on any other \n                                core files found inside PBS_HOME\n\nsystem/         \n           pbs_probe_v.out      Output of pbs_probe -v\n\n           pbs_hostn_v.out      Output of pbs_hostn -v $(hostname)\n\n           pbs_environment      Copy of PBS_HOME/pbs_environment file\n\n           os_info              Information about the OS\n\n           process_info         List of processes running on the system \n                                when the snapshot was taken.  Output of\n                                ps -aux | grep [p]bs on Linux systems,\n                                or tasklist /v on Windows systems\n\n           ps_leaf.out          Output of ps -leaf.  Linux only.\n\n           lsof_pbs.out         Output of lsof | grep [p]bs.\n                                Linux only.\n           etc_hosts            Copy of /etc/hosts file.  Linux only.\n\n           etc_nsswitch_conf    Copy of /etc/nsswitch.conf file.\n                                Linux only.\n\n           vmstat.out           Output of the command vmstat.  \n                                Linux only.\n\n           df_h.out             Output of the command df -h.  \n                                Linux only.\n\n           dmesg.out            Output of the dmesg command.  
\n                                Linux only.\n\npbs.conf                        Copy of the pbs.conf file on the \n                                server host    \n\nctime                           Contains the time in seconds since \n                                epoch when the snapshot was taken    \n\npbs_snapshot.log                Log messages written by pbs_snapshot\n    \n<remote hostname>.tgz           Tarball of output from running the \n                                pbs_snapshot command at a remote host\n.fi\n\n.SH Examples\n.IP \"pbs_snapshot -o /tmp\" 5\nWrites a snapshot to /tmp/snapshot_<timestamp>.tgz that includes 30\ndays of accounting logs and 5 days of daemon logs from the server\nhost.\n\n.IP \"pbs_snapshot --daemon-logs=1 --accounting-logs=1 -o /tmp --obfuscate --map=mapfile.txt\" 5\nWrites a snapshot to /tmp/snapshot_<timestamp>.tgz that includes 1\nday of accounting and daemon logs.  Obfuscates the data and stores the\ndata mapping in the map file named \"mapfile.txt\".\n\n"
  },
  {
    "path": "doc/man8/pbs_tclsh.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_tclsh 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_tclsh \n- TCL shell with TCL-wrapped PBS API\n.SH SYNOPSIS\n.B pbs_tclsh \n\n.B pbs_tclsh \n--version\n.SH DESCRIPTION\n.B Deprecated.  \nThe\n.B pbs_tclsh\ncommand starts a version of the TCL shell which includes wrapped versions of\nthe PBS external API. The PBS TCL API is documented in the pbs_tclapi(3B) manual page.\n\nThe \n.B pbs_tclsh\ncommand is used to query MoM.  For example:\n\n.RS\n.nf\n\\> pbs_tclsh\ntclsh> openrm <hostname>\n<file descriptor>\ntclsh> addreq <file descriptor> \"loadave\"\ntclsh> getreq <file descriptor>\n<load average>\ntclsh> closereq <file descriptor>\n.fi\n.RE\n\n.B Required Privilege\n.br\nRoot privilege is required in order to query MoM for dynamic \nresources.  Root privilege is not required in order to query\nMoM for built-in resources and site-defined static resources.\n\n\n.SH OPTIONS\n.IP \"--version\" 8\nThe \n.B pbs_tclsh\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH STANDARD ERROR\nThe\n.B pbs_tclsh\ncommand writes a diagnostic message to standard error for\neach error occurrence.\n\n.SH SEE ALSO\npbs_wish(8B), pbs_server(8B), pbs_mom(8B), pbs_sched(8B)\n"
  },
  {
    "path": "doc/man8/pbs_tmrsh.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_tmrsh 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_tmrsh \n- TM-enabled replacement for rsh/ssh for use by MPI implementations\n.SH SYNOPSIS\n.B pbs_tmrsh \n<hostname> [-l <username>] [-n] <command> [<args> ...]\n.br\n.B pbs_tmrsh \n--version\n\n.SH DESCRIPTION\nThe\n.B pbs_tmrsh\ncommand attempts to emulate an \"rsh\" connection to the specified host,\nvia underlying calls to the Task Management (TM) API. The program is\nintended to be used during MPI integration activities, and not by\nend-users. \n\nRunning \"pbs_tmrsh <hostname> <command>\" causes a PBS task to be started\non \n.I hostname \nrunning \n.I command. \n\n.B Requirements for Environment Variables\n.br\nThe environment variables used by the two MPI implementations\nto point to the rsh work-alike (MPI_REMSH in the case of HP and\nP4_RSHCOMMAND for MPICH) must be set in the job environment\nand point to the full path for\n.B pbs_tmrsh.\n.LP\nThe file $PBS_HOME/pbs_environment should contain an environment\nvariable PATH in which to search for the program executable. This\napplies to both Windows and Linux. It is expected that a full path will\nbe specified for the \n.I command \nand the PATH variable will not be needed.\n\n.SH OPTIONS\n.IP \"-l <username>\" 13\nSpecifies the username under which to execute the task. 
If used, \n.I username \nmust\nmatch the username running the\n.B pbs_tmrsh \ncommand.\n.IP \"-n\" 13\nA no-op; provided for MPI implementations that expect to call\nrsh with the \n.I -n \noption.\n.LP\n.IP \"--version\" 13\nThe \n.B pbs_tmrsh\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH OPERANDS\n.IP command\nSpecifies command to be run as a PBS task.\n\n.IP hostname\nSpecifies host on which to run PBS task.  The \n.I hostname \nmay be in IP-dot-address form.\n\n.SH Output and Error\nOutput and errors are written to the PBS job's output and error files,\nnot to standard output/error.\n\nThe\n.B pbs_tmrsh\ncommand writes a diagnostic message to the PBS job's error file for\neach error occurrence.\n\n.SH EXIT STATUS\nThe\n.B pbs_tmrsh\nprogram exits with the exit status of the remote command, or with\n255 if an error occurred, matching the behavior of ssh.\n\n.SH SEE ALSO\npbs_attach(8B), tm(3) \n"
  },
  {
    "path": "doc/man8/pbs_topologyinfo.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_topologyinfo 8B \"17 July 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_topologyinfo \n\\- report topological information\n\n\n.SH SYNOPSIS\n.B pbs_topologyinfo \n(-a | --all) [(-l | --license) | (-s | --sockets)]\n.br\n.B pbs_topologyinfo \n(-l | --license) <vnode name> [<vnode name> ...]\n.br\n.B pbs_topologyinfo \n(-s | --sockets) <vnode name> [<vnode name> ...]\n.br\n.B pbs_topologyinfo \n-h | --help\n\n.SH DESCRIPTION\nThe\n.B pbs_topologyinfo\ncommand reports topological information for one or more vnodes.  This\ninformation is used for licensing purposes.  To use the command, you\nmust specify what kind of topological information you want.  The\ncommand reports only the requested information.\n\nThis command must be run on the server host.\n\n.SH Usage\n.B pbs_topologyinfo -al\nreports number of node licenses needed for all vnodes.\n\n.B pbs_topologyinfo -l <vnode name>\nreports number of node licenses needed for \n.I vnode name.\n\n.B pbs_topologyinfo -as \nreports socket counts for all vnodes that have reported sockets.\n\n.B pbs_topologyinfo -s <vnode name>\nreports socket count for vnode \n.I vnode name.\n\n.SH Prerequisites\nBefore you use this command, the server and MoMs must be configured \nso that they can contact each other, and must have been run.  
\n\n.SH Required Privilege\n\nThis command can be run only by root, or by Admin on Windows.\n\n.SH OPTIONS\n.IP \"-a, --all\" 15\nReports requested topological information for all vnodes.  When this\noption is used alone, the command does not report any information.\n\n.IP \"-h, --help \" 15\nPrints usage and exits.\n\n.IP \"-l, --license [<vnode name(s)>]\" 15\nReports number of node licenses required.  If you specify\n.I vnode name(s),\nthe command reports node licenses needed for the specified vnode(s) only.\n\n.IP \"-s, --sockets [<vnode name(s)>]\" 15\nReports derived socket counts.  If you specify\n.I vnode name(s),\nthe command reports socket count information for the specified vnode(s) only.\n\n.IP \"(no options)\" 15\nDoes not report any information.\n\n.SH OPERANDS\n.IP \"vnode name [<vnode name> ...]\" 15 \nName(s) of vnode(s) about which to report.\n\n.SH Errors\nIf you specify an invalid vnode name, the command prints a message to standard error.\n\n\n.SH EXIT STATUS\n.IP \"0\" 15 \nSuccess\n.IP \"1\" 15\nAny error following successful command line processing.\n\n\n"
  },
  {
    "path": "doc/man8/pbs_wish.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbs_wish 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbs_wish \n- TK window shell with TCL-wrapped PBS API\n.SH SYNOPSIS\n.B pbs_wish \n\n.B pbs_wish\n--version\n.SH DESCRIPTION\n.B Deprecated.  \nThe\n.B pbs_wish\ncommand is a version of the TK window shell which includes wrapped versions of\nthe PBS external API. The PBS TCL API is documented in the\npbs_tclapi(3B) manual page.\n\n.SH OPTIONS\n.IP \"--version\" 8\nThe \n.B pbs_wish\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH STANDARD ERROR\nThe\n.B pbs_wish\ncommand writes a diagnostic message to standard error for each error occurrence.\n\n.SH SEE ALSO\npbs_tclsh(8B), pbs_mom(8B), pbs_server(8B), pbs_sched(8B)\n"
  },
  {
    "path": "doc/man8/pbsfs.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbsfs 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbsfs \n\\- show or manipulate PBS fairshare usage data\n.SH SYNOPSIS\n\nShowing usage data:\n.br\n.B pbsfs \n[-c <entity1> <entity2>] [-g <entity>] [-I <scheduler name>] \n      [-p] [-t]\n\n.br\nManipulating usage data:\n.br\n.B pbsfs \n[-d] [-e] [-I <scheduler name>] [-s <entity> <usage value>]\n\n.br\nPrinting version:\n.br\n.B pbsfs \n--version\n\n.SH DESCRIPTION\n\nYou can use the \n.B pbsfs \ncommand to print or manipulate the PBS scheduler's\nfairshare usage data.  You can print the usage data in various\nformats, described below.  Changes made using \n.B pbsfs \ntake effect in the next scheduling cycle; you do not need to restart or HUP the\nscheduler for changes to take effect.  \n\nWe recommend that if you use the options that manipulate usage data,\nyou should do this when the scheduler is not scheduling jobs, because\nscheduling while changing fairshare usage data may give unwanted\nresults.  
\n\n.B Prerequisites\n.br\nThe server must be running in order to use the \n.B pbsfs\ncommand.\n\n.B Permissions \n.br\nYou must be root to run the \n.B pbsfs \ncommand; if not, it will print the error message, \n\"Unable to access fairshare data\".\n\n.SH OPTIONS\n\n.B You can safely use the following options while jobs are being scheduled:\n.IP \"(no options)\" 10\nSame as \n.B pbsfs -p\n.IP \"-c <entity1> <entity2>\" 10\nCompare two fairshare entities\n.IP \"-g <entity>\" 10\nPrint a detailed listing for the specified entity, including the path \nfrom the root of the tree to the entity.\n.IP \"-I <scheduler name>\" 10\nSpecifies name of scheduler whose data is to be manipulated or shown.  \nRequired for multischeds; optional for default scheduler.  Name of \ndefault scheduler is \"default\".  If not specified, \n.B pbsfs\noperates on default scheduler.\n.IP \"-p\" 10\nPrint the fairshare tree as a table, showing for each internal and\nleaf vertex the group ID of the vertex's parent, group ID of the vertex,\nvertex shares, vertex usage, and percent of shares allotted to the\nvertex.\n.IP \"-t\" 10\nPrint the fairshare tree in a hierarchical format.\n.IP \"--version\" 10\nThe \n.B pbsfs\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n.LP\n\n.B It is not recommended to be scheduling jobs when you use the following options:\n.IP \"-d\" 10\nDecay the fairshare tree by the amount specified in the \n.I fairshare_decay_factor\nscheduler parameter.\n.IP \"-e\" 10\nTrim fairshare tree to just the entities in the \n.I resource_group \nfile.  Unknown entities and their usage are deleted; as a result the unknown\ngroup has no usage and no children.\n.IP \"-s <entity> <usage value>\" 10\nSet \n.I entity's \nusage value to \n.I usage value.  \nEditing a non-leaf entity is ignored.  
All non-leaf entity usage\nvalues are calculated each time you use the pbsfs command to make\nchanges.\n.LP\n\n.SH Output Formats for pbsfs\n\nThe pbsfs command can print output in three different formats:\n\n.B pbsfs -g <entity>\n.br\nPrints a detailed listing for the specified entity.  Example:\n.br\n.B pbsfs -g pbsuser3\n fairshare entity: pbsuser3\n Resgroup: 20\n cresgroup: 22\n Shares: 40\n Percentage: 24.000000%\n fairshare_tree_usage: 0.832973\n usage: 1000 (cput)\n usage/perc: 4167\n Path from root:\n TREEROOT  :     0       1201 / 1.000 = 1201\n group2    :    20       1001 / 0.600 = 1668\n pbsuser3  :    22       1000 / 0.240 = 4167\n\n.B pbsfs or pbsfs -p\n.br\nPrints the entire tree as a table, with data in columns.  Example:\n.br\n.B pbsfs\n.br\nFairshare usage units are in: cput\n.br\nTREEROOT\\ : Grp:\\ -1\\ \\ cgrp:\\ \\ 0  Shares:\\ -1  Usage:\\ 1201   Perc:\\ 100.000%\n.br\ngroup2\\ \\ \\ : Grp:\\ \\ 0\\ \\ cgrp:\\ 20  Shares:\\ 60  Usage:\\ 1001   Perc:\\ \\ 60.000%\n.br\npbsuser3\\ : Grp:\\ 20\\ \\ cgrp:\\ 22  Shares:\\ 40  Usage:\\ 1000   Perc:\\ \\ 24.000%\n.br\npbsuser2\\ : Grp:\\ 20\\ \\ cgrp:\\ 21  Shares:\\ 60  Usage:\\ \\ \\ \\ 1   Perc:\\ \\ 36.000%\n.br\ngroup1\\ \\ \\ : Grp:\\ \\ 0\\ \\ cgrp:\\ 10  Shares:\\ 40  Usage:\\ \\ 201   Perc:\\ \\ 40.000%\n.br\npbsuser1\\ : Grp:\\ 10\\ \\ cgrp:\\ 12  Shares:\\ 50  Usage:\\ \\ 100   Perc:\\ \\ 20.000%\n.br\npbsuser\\ \\ : Grp:\\ 10\\ \\ cgrp:\\ 11  Shares:\\ 50  Usage:\\ \\ 100   Perc:\\ \\ 20.000%\n.br\nunknown\\ \\ : Grp:\\ \\ 0\\ \\ cgrp:\\ \\ 1  Shares:\\ \\ 0  Usage:\\ \\ \\ \\ 1   Perc:\\ \\ \\ 0.000%\n\n.B pbsfs -t\n.br\nPrints the entire tree as a tree, showing group-child relationships.  
Example:\n.br\n.B pbsfs -t\n  TREEROOT(0)\n      group2(20)\n           pbsuser3(22)\n           pbsuser2(21)\n      group1(10)\n           pbsuser1(12)\n           pbsuser(11)\n      unknown(1)\n\n.SH Data Output by pbsfs\n.IP \"cresgroup or cgrp\" 10\nGroup ID of the entity.\n.IP \"fairshare entity\" 10\nThe specified fairshare tree entity.\n.IP \"fairshare usage units\" 10\nThe resource for which the scheduler accumulates usage for fairshare\ncalculations.  This defaults to \n.I cput \n(CPU seconds) but can be set in\nthe scheduler's configuration file.\n.IP \"fairshare_tree_usage\" 10\nThe entity's effective usage.  \n\n.IP \"Path from root\" 10\nThe path from the root of the tree to the entity.  The scheduler\nfollows this path when comparing priority between two entities.\n.IP \"Percentage or perc\" 10\nThe percentage of the shares in the tree allotted to the entity,\ncomputed as \n.I fairshare_perc.  \n\n.IP \"Resgroup or Grp\" 10\nGroup ID of the entity's parent group.\n.IP \"Shares\" 10\nThe number of shares allotted to the entity.\n.IP \"usage\" 10\nThe amount of usage by the entity.\n.IP \"usage / perc\" 10\nThe value the scheduler uses to pick which entity has priority\nover another.  The smaller the number, the higher the priority.\n\n.SH SEE ALSO\npbs_sched(8B)\n"
  },
  {
    "path": "doc/man8/pbsnodes.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbsnodes 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n\n.B pbsnodes \n\\- query PBS host or vnode status, mark hosts free or offline, change \nthe comment for a host, or output vnode information\n.SH SYNOPSIS\n\n.B pbsnodes \n[-o | -r] [-s <server>] [-C <comment>] <hostname> [<hostname> ...]\n\n.B pbsnodes \n[-l] [-s <server>] \n\n.B pbsnodes \n-v <vnode> [<vnode> ...] [-s <server>]\n\n.B pbsnodes \n-a[v] [-S[j][L]] [-F json|dsv [-D <delim>]] [-s <server>]\n\n.B pbsnodes \n[-H] [-S[j][L]] [-F json|dsv [-D <delim>]] <hostname> [<hostname> ...]\n\n.B pbsnodes \n--version\n\n.SH DESCRIPTION\nThe \n.B pbsnodes\ncommand is used to query the status of hosts or vnodes, to mark hosts FREE or\nOFFLINE, to edit a host's \n.I comment \nattribute, or to output vnode information.  The \n.B pbsnodes \ncommand obtains host information by sending a request to the PBS server.\n.LP\n\n.B Using pbsnodes\n.br\nTo list all vnodes:\n.br\n   pbsnodes -av\n.br   \n\nTo print the status of the specified host or hosts, run \n.B pbsnodes\nwith no options (except the \n.I -s\noption) and with a list of hosts.\n.LP\nTo print the command usage, run \n.B pbsnodes \nwith no options and without a list of hosts.\n\nTo remove a node from the scheduling pool, mark it OFFLINE.  
If a host is marked\nDOWN, it is marked FREE when the server next queries its MoM and is\nable to connect.\n\nTo offline a single vnode in a multi-vnoded system, use: \n.RS 5\nqmgr -c \"set node <vnode name> state=offline\"\n.RE\n\n.SH OUTPUT\nThe order in which hosts or vnodes are listed in the output of the\n.B pbsnodes \ncommand is undefined.  Do not rely on output being ordered.\nIf you print attributes, \n.B pbsnodes\nprints out only those attributes which are not at default values.\n\n.SH PERMISSIONS\nPBS Manager or Operator privilege is required to execute \n.B pbsnodes \nwith the\n.B \\-o \nor\n.B \\-r\noptions, to view custom resources which have been created to be\ninvisible to users, and to see some output such as PBS version.\n\n.SH OPTIONS\n.IP \"(no options)\" 8\nIf neither options nor a host list is given, the \n.B pbsnodes\ncommand prints usage syntax.\n\n.IP \"-a\" 8\nLists all hosts and all their attributes (available and used).\n\nWhen used with the \n.I -v\noption, lists all vnodes.\n\nWhen listing a host with multiple vnodes:\n\n.RS \n.IP \"\" 5\nThe output for the \n.I jobs \nattribute lists all the jobs on all the vnodes\non that host.\nJobs that run on more than one vnode appear once for each vnode they\nrun on.\n\nFor consumable resources, the output for each resource is the sum of that\nresource across all vnodes on that host.\n\nFor all other resources, e.g. string and Boolean, if the value of that\nresource is the same on all vnodes on that host, the value is returned.\nOtherwise the output is the literal string \"<various>\".\n.LP\n.RE\n\n.IP \"-C <comment>\" 8\nSets the \n.I comment\nattribute for the specified host(s) to the value of <comment>.\nComments containing spaces must be quoted.  The comment string is limited\nto 80 characters.\n
Usage:\n.br\n.B \\ \\ \\  pbsnodes -C <comment> <hostname> [<hostname> ...]\n\nTo set the comment for a vnode:\n.br\n.B \\ \\ \\ qmgr -c \"s n <vnode name> comment=<comment>\"\n\n.IP \"-F dsv [-D <delim>]\"\nPrints output in delimiter-separated value format.  Optional delimiter\nspecification.  Default delimiter is vertical bar (\"|\").\n\n.IP \"-F json\"\nPrints output in JSON format.\n\n.IP \"-H <host> [<host> ...]\"\nPrints all non-default-valued attributes for specified hosts and all\nvnodes on specified hosts.\n\n.IP \"-j\"\nDisplays the following job-related headers for specified vnodes:\n.nf\nHeader\\ \\ \\ \\ \\ Width\\ \\ Description\n------------------------------------\nvnode       15    Vnode name\nstate       15    Vnode state\nnjobs        6    Number of jobs on vnode\nrun          5    Number of running jobs at vnode\nsusp         6    Number of suspended jobs at vnode\nmem f/t     12    Vnode memory free/total\nncpus f/t    7    Number of CPUs at vnode free/total\nnmics f/t    7    Number of MICs at vnode free/total\nngpus f/t    7    Number of GPUs at vnode free/total\njobs        any   List of job IDs on vnode\n.fi\n\n.br\nNote that \n.I nmics \nis a custom resource that must be created by the administrator if you \nwant it displayed here.\n\nEach subjob is treated as a unique job.\n\n.IP \"-L\"\nDisplays output with no restrictions on column width.\n\n.IP \"-l\" 8\nLists all hosts marked as DOWN or OFFLINE. Each such host's state and\ncomment attribute (if set) is listed.  If a host also has state \nSTATE-UNKNOWN, it is listed. For hosts with multiple vnodes, \nonly hosts where all vnodes are marked as DOWN or OFFLINE are listed.\n\n.IP \"-o <hostname> [<hostname> ...]\" 8\nMarks listed hosts as OFFLINE even if currently in use.  This is\ndifferent from being marked DOWN.  A host that is marked OFFLINE \ncontinues to execute the jobs already on it, but is removed from\nthe scheduling pool (no more jobs are scheduled on it.)  
\n\nFor hosts with multiple vnodes, pbsnodes operates on a host and all of\nits vnodes, where the hostname is resources_available.host, which is\nthe name of the natural vnode.  \n\nTo offline all vnodes on a multi-vnoded machine:\n.br\n.B \\ \\ \\ pbsnodes -o <name of natural vnode>\n\nTo offline a single vnode on a multi-vnoded system, use: \n.br\n.B \\ \\ \\ qmgr -c \"set node <vnode name> state=offline\"\n\nRequires PBS Manager or Operator privilege.\n.RE\n\n.IP \"-r <hostname> [<hostname> ...]\" 8\nClears OFFLINE from listed hosts.\n\n.IP \"-S\"\nDisplays the following vnode information:\n.nf\nHeader\\ \\ \\ \\ \\ Width\\ \\ Description\n------------------------------------\nname        15    Vnode name\nstate       15    Vnode state\nOS           8    Value of OS custom resource, if any\nhardware     8    Value of hardware custom resource, if any\nhost        15    Hostname\nqueue       10    Value of vnode's queue attribute\nncpus        7    Number of CPUs at vnode\nnmics        7    Number of MICs at vnode\nmem          8    Vnode memory\nngpus        7    Number of GPUs at vnode\ncomment     any   Vnode comment\n.fi\n\nNote that \n.I nmics \nand \n.I OS\nare custom resources that must be created by the administrator\nif you want their values displayed here.\n\n.IP \"-s <server>\" 8\nSpecifies the PBS server to which to connect.\n\n.IP \"-v <vnode> [<vnode> ...]\" 8\nLists all non-default-valued attributes for each specified vnode.\n.br\nWith no arguments, prints one entry for each vnode in the PBS complex.\n.br\nWith one or more vnodes specified, prints one entry for each specified\nvnode.\n.br\nWhen used with \n.I -a,\nlists all vnodes.\n\n.IP \"--version\" 8\nThe \n.B pbsnodes\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n\n.SH OPERANDS\n.IP \"<server>\" 8\nSpecifies the server to which to connect.
\nDefault: default server.\n\n.IP \"<hostname> [<hostname> ...]\" 8\nSpecifies the host(s) to be queried or operated on.\n\n.IP \"<vnode> [<vnode> ...]\" 8\nSpecifies the vnode(s) to be queried or operated on.\n\n.SH EXIT STATUS\n.IP \"Zero\"\nSuccess\n\n.IP \"Greater than zero\"\nIncorrect operands are given\n.br\n.B pbsnodes\ncannot connect to the server \n.br\nThere is an error querying the server for the vnodes\n\n.SH SEE ALSO\npbs_server(8B) and qmgr(8B)\n\n"
  },
  {
    "path": "doc/man8/pbsrun.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbsrun 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbsrun\n\\- general-purpose wrapper script for mpirun\n\n.SH SYNOPSIS\n.B pbsrun\n\n.B pbsrun\n--version\n\n.SH DESCRIPTION\n.B pbsrun\nis a wrapper script for any of several versions of mpirun.\nThis provides a user-transparent way for PBS to control jobs\nwhich call mpirun in their jobscripts.\nThe\n.B pbsrun_wrap\nscript instantiates \n.B pbsrun\nso that the wrapper script for the specific version of mpirun\nbeing used has the same name as that version of mpirun.\n\nIf the mpirun wrapper script is run inside a PBS job, it\ntranslates any mpirun call of the form:\n.br\n    mpirun [<options>] <executable> [<args>]\n.br\ninto\n.br\n    mpirun [<options>] pbs_attach [<special options to pbs_attach>] \\\\\n.br\n            <executable> [<args>]\n.br\nwhere \n.I special options to pbs_attach \nrefers to any options needed by \n.B pbs_attach \nto do its job (e.g. 
-j $PBS_JOBID).\n\nIf the wrapper script is executed outside of PBS, a warning is issued\nabout \"not running under PBS\", but it proceeds as if the actual\nprogram had been called in standalone fashion.\n\nThe \n.B pbsrun \nwrapper script is not meant to be executed directly; instead\nit is instantiated by \n.B pbsrun_wrap.\nIt is copied to the target directory and renamed\n\"pbsrun.<mpirun version/flavor>\" where \n.I mpirun version/flavor\nis a string that identifies the mpirun\nversion being wrapped (e.g. ch_gm).\n\nThe \n.B pbsrun \nscript, if executed inside a PBS job,\nruns an \n.B initialization script, \nnamed $PBS_EXEC/lib/MPI/pbsrun.<mpirun version/flavor>.init, then\nparses mpirun-like arguments from the command line, sorting which\noptions and option values to retain, to ignore, or to transform,\nbefore calling the actual mpirun script with a \"pbs_attach\" prefixed\nto the executable.  The actual mpirun to call is found by tracing the\nlink pointed to by $PBS_EXEC/lib/MPI/pbsrun.<mpirun\nversion/flavor>.link.\n\nFor all of the wrapped MPIs, the maximum number of ranks that can be\nlaunched is the number of entries in $PBS_NODEFILE.\n\nThe wrapped MPIs are:\n.RS 5\nMPICH-GM's mpirun (mpirun.ch_gm) with rsh/ssh (\n.B deprecated\nas of 14.2.1)\n.br\nMPICH-MX's mpirun (mpirun.ch_mx) with rsh/ssh (\n.B deprecated\nas of 14.2.1)\n.br\nMPICH-GM's mpirun (mpirun.mpd) with MPD (\n.B deprecated\nas of 14.2.1)\n.br\nMPICH-MX's mpirun (mpirun.mpd) with MPD (\n.B deprecated\nas of 14.2.1)\n.br\nMPICH2's mpirun\n.br\nIntel MPI's mpirun (\n.B deprecated\nas of 13.0)\n.br\nMVAPICH1's mpirun (\n.B deprecated\nas of 14.2.1)\n.br\nMVAPICH2's mpiexec\n.RE\n\n.SH OPTIONS\n.IP \"--version\" 8\nThe \n.B pbsrun\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH INITIALIZATION SCRIPT\n\nThe initialization script, called $PBS_EXEC/lib/MPI/pbsrun.<mpirun version/flavor>.init,\nwhere \n.I mpirun version/flavor \nreflects the 
mpirun flavor/version being wrapped,\ncan be modified by an administrator to customize against the local\nflavor/version of mpirun being wrapped.\n\nInside this sourced init script, 8 variables are set:\n.RS 5\noptions_to_retain=\"-optA -optB <val> -optC <val1> <val2> ...\"\n.br\noptions_to_ignore=\"-optD -optE <n> -optF <val1> <val2> ...\"\n.br\noptions_to_transform=\"-optG -optH <val> -optI <val1> <val2> ...\"\n.br\noptions_to_fail=\"-optY -optZ ...\"\n.br\noptions_to_configfile=\"-optX <val> ...\"\n.br\noptions_with_another_form=\"-optW <val> ...\"\n.br\npbs_attach=pbs_attach\n.br\noptions_to_pbs_attach=\"-J $PBS_JOBID\"\n.br\n.RE\n\n.B Initialization Script Options\n.br\n.IP \"options_to_retain\" 5\nSpace-separated list of options and values that\npbsrun.<mpirun version/flavor> passes on to the actual mpirun call. Options must\nbegin with \"-\" or \"--\", and option arguments must be specified by some\narbitrary name with left and right arrows, as in \"<val1>\".\n\n.IP \"options_to_ignore\" 5\nSpace-separated list of options and values that pbsrun.<mpirun\nversion/flavor> does not pass on to the actual mpirun call. Options\nmust begin with \"-\" or \"--\", and option arguments must be specified by\narbitrary names with left and right arrows, as in \"<n>\".\n\n.IP \"options_to_transform\" 5\nSpace-separated list of options and values\nthat \n.B pbsrun \nmodifies before passing on to the actual mpirun call.\n\n.IP \"options_to_fail\" 5\nSpace-separated list of options that will cause\n.B pbsrun \nto exit upon encountering a match.\n\n.IP \"options_to_configfile\" 5\nSingle option and value that refers to\nthe name of the \"configfile\" containing command line segments found in\ncertain versions of mpirun.
\n\n.IP \"options_with_another_form\" 5\nSpace-separated list of options and values that can be found in\noptions_to_retain, options_to_ignore, or options_to_transform, whose\nsyntax has an alternate, unsupported form.\n\n.IP \"pbs_attach\" 5\nPath to \n.B pbs_attach, \nwhich is called before the\n<executable> argument of mpirun.\n\n.IP \"options_to_pbs_attach\" 5\nSpecial options to pass to the\n.B pbs_attach call. \nYou may pass variable references (e.g. $PBS_JOBID) and\nthey are substituted  by \n.B pbsrun \nto actual values.\n.LP\n.RE\n\nIf \n.B pbsrun \nencounters any option not found in \n.I options_to_retain, options_to_ignore, \nand \n.I options_to_transform, \nthen it is flagged\nas an error.\n\n.B Functions Created\n.br\nThese functions are created inside the init script.  These can be\nmodified by the PBS administrator.\n.RS 5\n\n.IP \"transform_action () {\" 5\n     # passed actual values of $options_to_transform\n.br\n     args=$*\n.br\n}\n\n.IP \"boot_action () {\" 5\n     mpirun_location=$1\n.br\n}\n\n.IP \"evaluate_options_action () {\" 5\n     # passed actual values of transformed options\n.br\n     args=$*\n.br\n}\n\n.IP \"configfile_cmdline_action () {\" 5\n     args=$*\n.br\n}\n\t\t\n.IP \"end_action () {\" 5\n     mpirun_location=$1\n.br\n}\n.LP\n\n.IP \"transform_action()\" 5\nThe pbsrun.<mpirun version/flavor> wrapper script invokes the function\n.I transform_action() \n(called once on each matched item and value) with\nactual options and values received matching one of the\n\"options_to_transform\". The function returns a string to pass on\nto the actual mpirun call.\n\n\n.IP \"boot_action()\" 5\nPerforms any initialization tasks needed before running the actual\nmpirun call. For instance, GM's MPD requires the MPD daemons to be\nuser-started first. This function is called by the pbsrun.<mpirun\nversion/flavor> script with the location of actual mpirun passed as\nthe first argument. 
Also, the pbsrun.<mpirun version/flavor> script checks\nfor the exit value of this function to determine whether or not to\nprogress to the next step.\n\n\n.IP \"evaluate_options_action()\" 5\nCalled with the actual options\nand values that resulted after consulting \n.I options_to_retain, options_to_ignore, options_to_transform, \nand executing \n.I transform_action().\nThis provides one more chance for the script writer to evaluate all\nthe options and values in general, and make any necessary adjustments,\nbefore passing them on to the actual mpirun call. For instance,\nthis function can specify what the default value is for a missing \n.I -np\noption.\n\n\n.IP \"configfile_cmdline_action()\" 5\nReturns the actual options and values\nto be put in before the options_to_configfile parameter. \n\n.IP \"configfile_firstline_action()\" 5\nReturns the item that is \nput in the first line of the configuration file specified in the\n.I options_to_configfile \nparameter.\n\n.IP \"end_action()\" 5\nCalled by pbsrun.<mpirun version/flavor> at the end of execution.  It\nundoes any action done by transform_action(), such as cleanup of\ntemporary files. It is also called when pbsrun.<mpirun\nversion/flavor> is prematurely killed. This function is called\nwith the location of the actual mpirun passed as the first argument.\n.RE\n\nThe actual mpirun program to call is the path pointed to by\n    $PBS_EXEC/lib/MPI/pbsrun.<mpirun version/flavor>.link.\n\n\n.B Modifying *.init scripts\n.br\nIn order for administrators to modify *.init scripts without breaking\npackage verification in RPM, master copies of the initialization\nscripts are named *.init.in.  \n.B pbsrun_wrap \ninstantiates the *.init.in files as *.init.\n
For instance,\n$PBS_EXEC/lib/MPI/pbsrun.mpich2.init.in is the master copy, and\n.B pbsrun_wrap \ninstantiates it as $PBS_EXEC/lib/MPI/pbsrun.mpich2.init.\n.B pbsrun_unwrap \ntakes care of removing the *.init files.\n\n\n.SH MPIRUN VERSIONS/FLAVORS\n-----------------------------------------------------------\n.br\n.B MPICH-GM's mpirun (mpirun.ch_gm) with rsh/ssh: pbsrun.ch_gm\n.br\n-----------------------------------------------------------\n\nSYNTAX\n.RS 5\n\n.B pbsrun.ch_gm <options> <executable> <arg1> \n.B <arg2> ... <argn>\n\n.B Deprecated.\nThis is the PBS wrapper script to MPICH-GM's mpirun (mpirun.ch_gm) with\nrsh/ssh process startup method.\n\nIf executed inside a PBS job, this allows for PBS to track all MPICH-GM\nprocesses started by rsh/ssh so that PBS can perform accounting and \nhave complete job control.\n\nIf executed outside of a PBS job, it behaves exactly as if standard\n.B mpirun.ch_gm \nwas used.\n.RE\n\nOPTIONS HANDLING\n.RS 5\nIf executed inside a PBS job script, all \n.B mpirun.ch_gm \noptions given are\npassed on to the actual mpirun call with these exceptions:\n\n.IP \"-machinefile <file>\" 5\nThe \n.I file \nargument contents are ignored and replaced by the contents of\nthe $PBS_NODEFILE.\n\n.IP \"-np\" 5\nIf not specified, the number of entries found in the\n$PBS_NODEFILE is used.\n\n.IP \"-pg\" 5\nThe use of the \n.I -pg \noption, for having multiple executables on multiple\nhosts, is allowed but it is up to user to make sure \nonly PBS hosts are specified in the process group file; MPI processes\nspawned are not guaranteed to be under the control of PBS.\n.RE\n\nWRAP/UNWRAP\n.RS 5\nTo wrap MPICH-GM's mpirun script:\n.RS 4\n.B # pbsrun_wrap [MPICH-GM_BIN_PATH]/mpirun.ch_gm pbsrun.ch_gm\n.RE\nTo unwrap MPICH-GM's mpirun script:\n.RS 4\n.B # pbsrun_unwrap pbsrun.ch_gm\n.RE\n.RE\n.RE\n\n-----------------------------------------------------------\n.br\n.B MPICH-MX's mpirun (mpirun.ch_mx) with rsh/ssh: 
pbsrun.ch_mx\n.br\n-----------------------------------------------------------\n\nSYNTAX\n.RS 5\n\n.B pbsrun.ch_mx <options> <executable> <arg1> \n.B <arg2> ... <argn>\n\n.B Deprecated.  \nThis is the PBS wrapper script to MPICH-MX's mpirun (mpirun.ch_mx) with\nrsh/ssh process startup method.\n\nIf executed inside a PBS job, this allows for PBS to track all MPICH-MX\nprocesses started by rsh/ssh so that PBS can perform accounting and \nhave complete job control.\n\nIf executed outside of a PBS job, it behaves exactly as if standard\nmpirun.ch_mx was used.\n.RE\n\nOPTIONS HANDLING\n.RS 5\nIf executed inside a PBS job script, all mpirun.ch_mx options given are\npassed on to the actual mpirun call with some exceptions:\n\n.IP \"-machinefile <file>\" 5\nThe \n.I file \nargument contents are ignored and replaced by the contents\nof the $PBS_NODEFILE.\n\n.IP \"-np\" 5\nIf not specified, the number of entries found in the\n$PBS_NODEFILE is used.\n\n.IP \"-pg\" 5\nThe use of the \n.I -pg \noption, for having multiple executables on multiple\nhosts, is allowed but it is up to the user to make sure \nonly PBS hosts are specified in the process group file; MPI processes\nspawned are not guaranteed to be under the control of PBS.\n.RE\n\nWRAP/UNWRAP\n.RS 5\nTo wrap MPICH-MX's mpirun script:\n.RS 4\n.B # pbsrun_wrap [MPICH-MX_BIN_PATH]/mpirun.ch_mx pbsrun.ch_mx\n.RE\nTo unwrap MPICH-MX's mpirun script:\n.RS 4\n.B # pbsrun_unwrap pbsrun.ch_mx\n.RE\n.RE \n.RE\n\n--------------------------------------------------------\n.br\n.B MPICH-GM's mpirun (mpirun.mpd) with MPD: pbsrun.gm_mpd\n.br\n--------------------------------------------------------\n\nSYNTAX\n.RS 5\n\n.B pbsrun.gm_mpd <options> <executable> <arg1> \n.B <arg2> ... <argn>\n\n.B Deprecated.
\nThis is the PBS wrapper script to MPICH-GM's mpirun (mpirun.mpd) with\nMPD process startup method.\n\nIf executed inside a PBS job, this allows for PBS to track all MPICH-GM\nprocesses started by the MPD daemons so that PBS can perform accounting \nand have complete job control.\n\nIf executed outside of a PBS job, it behaves exactly as if standard\nmpirun.ch_gm with MPD was used.\n.RE\n\nOPTIONS HANDLING\n.RS 5 \nIf executed inside a PBS job script, all mpirun.ch_gm with MPD options given\nare passed on to the actual mpirun call with these exceptions:\n\n.IP \"-m <file>\" 5\nThe \n.I file \nargument contents are ignored and replaced by the contents of\nthe $PBS_NODEFILE.\n\n.IP \"-np\" 5\nIf not specified, the number of entries found in the\n$PBS_NODEFILE is used.\n\n.IP \"-pg\" 5\nThe use of the \n.I -pg \noption, for having multiple executables on multiple\nhosts, is allowed but it is up to the user to make sure \nonly PBS hosts are specified in the process group file; MPI processes\nspawned are not guaranteed to be under the control of PBS.\n.RE\n\nSTARTUP/SHUTDOWN\n.RS 5\nThe script starts MPD daemons on each of the unique hosts listed in\n$PBS_NODEFILE, using either the rsh or ssh method based on the value of the\nenvironment variable RSHCOMMAND.  The default is rsh.\n\nThe script also takes care of shutting down the MPD daemons at the end of\na run.\n.RE\n\nWRAP/UNWRAP\n.RS 5\nTo wrap MPICH-GM's mpirun script with MPD:\n.RS 4\n.B # pbsrun_wrap [MPICH-GM_BIN_PATH]/mpirun.mpd pbsrun.gm_mpd\n.RE\nTo unwrap MPICH-GM's mpirun script with MPD:\n.RS 4\n.B # pbsrun_unwrap pbsrun.gm_mpd\n.RE\n.RE\n.RE\n\n--------------------------------------------------------\n.br\n.B MPICH-MX's mpirun (mpirun.mpd) with MPD: pbsrun.mx_mpd\n.br\n--------------------------------------------------------\n\nSYNTAX\n.RS 5\n\n.B pbsrun.mx_mpd <options> <executable> <arg1> \n.B <arg2> ... <argn>\n\n.B Deprecated.
\nThis is the PBS wrapper script to MPICH-MX's mpirun (mpirun.mpd) with\nMPD process startup method.\n\nIf executed inside a PBS job, this allows for PBS to track all MPICH-MX\nprocesses started by the MPD daemons so that PBS can perform accounting \nand have complete job control.\n\nIf executed outside of a PBS job, it behaves exactly as if standard\nmpirun.ch_mx with MPD was used.\n.RE\n\nOPTIONS HANDLING\n.RS 5\nIf executed inside a PBS job script, all mpirun.ch_mx with MPD options given\nare passed on to the actual mpirun call with these exceptions:\n\n.IP \"-m <file>\" 5\nThe \n.I file\nargument contents are ignored and replaced by the contents of\nthe $PBS_NODEFILE.\n\n.IP \"-np\" 5\nIf not specified, the number of entries found in the $PBS_NODEFILE is used.\n\n.IP \"-pg\" 5\nThe use of the \n.I -pg \noption, for having multiple executables on multiple\nhosts, is allowed but it is up to the user to make sure \nonly PBS hosts are specified in the process group file; MPI processes\nspawned are not guaranteed to be under the control of PBS.\n.RE\n\nSTARTUP/SHUTDOWN\n.RS 5\nThe script starts MPD daemons on each of the unique hosts listed in\n$PBS_NODEFILE, using either the rsh or ssh method, based on the value of the\nenvironment variable RSHCOMMAND -- rsh is the default.\n\nThe script also takes care of shutting down the MPD daemons at the end of\na run.\n.RE\n\nWRAP/UNWRAP\n.RS 5\nTo wrap MPICH-MX's mpirun script with MPD:\n.RS 4\n.B # pbsrun_wrap [MPICH-MX_BIN_PATH]/mpirun.mpd pbsrun.mx_mpd\n.RE\nTo unwrap MPICH-MX's mpirun script with MPD:\n.RS 4\n.B # pbsrun_unwrap pbsrun.mx_mpd\n.RE\n.RE\n.RE\n\n------------------------------\n.br\n.B MPICH2's mpirun: pbsrun.mpich2\n.br\n------------------------------\n\nSYNTAX\n.RS 5\n\n.B pbsrun.mpich2 [<global args>] [<local args>] \n.B <executable> \n.RS 14\n.B [<args>]\n.B [: [<local args>] <executable> [<args>]] \n.RE\n.br\n- or - \n.br\n.B pbsrun.mpich2 -configfile <configfile>\n\nwhere <configfile> contains command line segments as\n
lines:\n.RS 5\n[local args] executable1 [args]\n.br\n[local args] executable2 [args]\n.br\n[local args] executable3 [args]\n.RE\n\nThis is the PBS wrapper script to MPICH2's mpirun.\n\nIf executed inside a PBS job, this allows for PBS to track all MPICH2\nprocesses so that PBS can perform accounting and have complete job control.\n\nIf executed outside of a PBS job, it behaves exactly as if standard\nMPICH2's mpirun was used.\n.RE\n\nOPTIONS HANDLING\n.RS 5\nIf executed inside a PBS job script, all MPICH2's mpirun options given\nare passed on to the actual mpirun call with these exceptions:\n\n.IP \"-host and -ghost\" 5\nFor specifying the execution host to run\non.  Not passed on to the actual mpirun call.\n\n.IP \"-machinefile <file>\" 5\nThe \n.I file \nargument contents are ignored and replaced by the\ncontents of the $PBS_NODEFILE.\n\n.IP \"MPICH2's mpirun -localonly <x>\" 5\nFor specifying the <x> number of\nprocesses to run locally. Not supported. The user is advised\ninstead to use the equivalent arguments: \n.I \"-np <x> -localonly\".  \nThe reason\nfor this is that the \n.B pbsrun \nwrapper script cannot handle a variable number\nof arguments to an option (e.g. \"-localonly\" has 1 argument and \"-localonly <x>\"\nhas 2 arguments).\n\n.IP \"-np\" 5\nIf user did not specify a \n.I -np \noption, then no default value is provided\nby the PBS wrapper scripts. It is up to the local mpirun to decide what\nthe reasonable default value should be, which is usually 1.\n.RE \n\nSTARTUP/SHUTDOWN\n.RS 5\nThe script takes care of ensuring that the MPD daemons on each of the hosts\nlisted in the $PBS_NODEFILE are started. It also takes care of ensuring\nthat the MPD daemons have been shut down at the end of MPI job execution. 
\n.RE\n\nWRAP/UNWRAP\n.RS 5\nTo wrap MPICH2's mpirun script:\n.RS 4\n.B # pbsrun_wrap [MPICH2_BIN_PATH]/mpirun pbsrun.mpich2\n.RE\nTo unwrap MPICH2's mpirun script:\n.RS 4\n.B # pbsrun_unwrap pbsrun.mpich2\n.RE\n.RE\n.RE\n\n
-----------------------------------\n.br\n.B Intel MPI's mpirun: pbsrun.intelmpi \n.br\n-----------------------------------\n\nWrapping Intel MPI, and support for mpdboot, are \n.B deprecated.\n\nSYNTAX\n.RS 5\n\n
.B pbsrun.intelmpi  [<mpdboot options>] [<mpiexec options>]\n.RS 16\n.B <executable> [<prog args>]\n.br \n.B [: [<mpiexec options>] <executable> [<prog args>]]\n.RE\n.br\n- or - \n.br\n.B pbsrun.intelmpi [<mpdboot options>] -f <configfile>\n\n
where \n.I mpdboot options \nare any options to pass to the mpdboot program,\nwhich is automatically called by Intel MPI's mpirun to start MPDs, and\n.I configfile \ncontains command line segments as lines.\n\nThis is the PBS wrapper script to Intel MPI's mpirun.\n\n
If executed inside a PBS job, this allows for PBS to track all Intel MPI\nprocesses so that PBS can perform accounting and have complete job control.\n\nIf executed outside of a PBS job, it behaves exactly as if standard\nIntel MPI's mpirun was used.\n.RE\n\n
Using \n.B pbsrun, pbsrun_wrap, \nand \n.B pbsrun_unwrap\nwith Intel MPI is \n.B deprecated \nas of 13.0.\n\nOPTIONS HANDLING\n.RS 5\nIf executed inside a PBS job script, all of the options to the PBS \ninterface to Intel MPI's mpirun are passed to the actual \nmpirun call with these exceptions:\n\n
.IP \"-host and -ghost\" 5\nFor specifying the execution host to run\non.  Not passed on to the actual mpirun call.\n\n.IP \"-machinefile <file>\" 5\nThe \n.I file\nargument contents are ignored and replaced by the\ncontents of the $PBS_NODEFILE.\n\n
.IP \"mpdboot options --totalnum=* and --file=*\" 5\nIgnored and replaced by the number of unique entries in $PBS_NODEFILE\nand the name of $PBS_NODEFILE, respectively.\n\n.IP \"arguments to mpdboot options --file=* and -f <mpd_hosts_file>\" 5\nReplaced by $PBS_NODEFILE.\n\n
.IP \"-s\" 5 \nIf \n.B pbsrun.intelmpi \nis called inside a PBS job, Intel MPI's mpirun \n.I -s \nargument\nto mpdboot is not supported, as it closely matches the mpirun option\n.I \"-s <spec>\". \nThe user can simply run a separate mpdboot \n.I -s \nbefore calling\nmpirun.  A warning message is issued by \n.B pbsrun.intelmpi \nupon\nencountering a \n.I -s \noption, telling users of the supported form.\n\n
.IP \"-np\" 5\nIf the user does not specify a \n.I -np \noption, then no default value is provided\nby the PBS wrapper scripts. It is up to the local mpirun to decide what\nthe reasonable default value should be, which is usually 1.\n.RE\n\n
STARTUP/SHUTDOWN\n.RS 5\nIntel MPI's mpirun itself takes care of starting/stopping the\nMPD daemons. \n.B pbsrun.intelmpi \nalways passes the arguments\n.I -totalnum=<number of mpds to start> \nand \n.I -file=<mpd_hosts_file> \nto the actual\nmpirun, taking its input from unique entries in $PBS_NODEFILE.\n.RE\n\n
WRAP/UNWRAP\n.RS 5\nTo wrap Intel MPI's mpirun script:\n.RS 4\n.B # pbsrun_wrap [INTEL_MPI_BIN_PATH]/mpirun pbsrun.intelmpi\n.RE\nTo unwrap Intel MPI's mpirun script:\n.RS 4\n.B # pbsrun_unwrap pbsrun.intelmpi\n.RE\n.RE\n.RE\n\n\n
-----------------------------------------------------------\n.br\n.B MVAPICH1's mpirun: pbsrun.mvapich1\n.br\n-----------------------------------------------------------\n\nSYNTAX\n.RS 5\n\n.B pbsrun.mvapich1 <mpirun options> <executable> <options>\n\n.B Deprecated.  \nOnly one executable can be specified.\nThis is the PBS wrapper script to MVAPICH1's mpirun.  
\n\nIf executed inside a PBS job, this allows for PBS to be aware of all MVAPICH1 \nranks and track their resources, so that PBS can perform accounting and \nhave complete job control.\n\nIf executed outside of a PBS job, it behaves exactly as if standard\n.B mpirun\nwas used.\n.RE\n\n
OPTIONS HANDLING\n.RS 5\nIf executed inside a PBS job script, all \n.B mpirun\noptions given are\npassed on to the actual mpirun call with these exceptions:\n\n.IP \"-map <list>\" 5\nThe \n.I map\noption is ignored.\n\n.IP \"-exclude <list>\" 5\nThe \n.I exclude\noption is ignored.\n\n
.IP \"-machinefile <file>\" 5\nThe \n.I machinefile\noption is ignored.\n\n.IP \"-np\" 5\nIf not specified, the number of entries found in the\n$PBS_NODEFILE is used.\n\n.RE\n\n\n
WRAP/UNWRAP\n.RS 5\nTo wrap MVAPICH1's mpirun script:\n.RS 4\n.B # pbsrun_wrap  <path-to-actual-mpirun> pbsrun.mvapich1\n.RE\nTo unwrap MVAPICH1's mpirun script:\n.RS 4\n.B # pbsrun_unwrap pbsrun.mvapich1\n.RE\n.RE\n.RE\n\n\n\n
-----------------------------------------------------------\n.br\n.B MVAPICH2's mpiexec: pbsrun.mvapich2\n.br\n-----------------------------------------------------------\n\nSYNTAX\n.RS 5\n\n
.B pbsrun.mvapich2 <mpiexec args> <executable> <executable's \n.RS 16\n.B args> \n.B [: <mpiexec args> <executable> <executable's args>]\n.RE\nMultiple executables can be specified using the colon notation.\nThis is the PBS wrapper script to MVAPICH2's mpiexec, which has \nthe same format.\n\n
If executed inside a PBS job, this allows for PBS to be aware of all MVAPICH2\nranks and track their resources, so that PBS can perform accounting and \nhave complete job control.\n\nIf executed outside of a PBS job, it behaves exactly as if standard\n.B mpiexec\nwas used.\n.RE\n\n
OPTIONS HANDLING\n.RS 5\nIf executed inside a PBS job script, all \n.B mpiexec\noptions given are\npassed on to the actual mpiexec call with these exceptions:\n\n.IP \"-host <host>\" 5\nThe \n.I host\nargument contents are ignored.\n\n
.IP \"-machinefile <file>\" 5\nThe \n.I file \nargument contents are ignored and replaced by the contents of\nthe $PBS_NODEFILE.\n\n.RE\n\n
WRAP/UNWRAP\n.RS 5\nTo wrap MVAPICH2's mpiexec script:\n.RS 4\n.B # pbsrun_wrap  <path-to-actual-mpiexec> pbsrun.mvapich2\n.RE\nTo unwrap MVAPICH2's mpiexec script:\n.RS 4\n.B # pbsrun_unwrap pbsrun.mvapich2\n.RE\n.RE\n.RE\n\n\n
.SH REQUIREMENTS\nThe mpirun being wrapped\nmust be installed and working on all the nodes in the PBS cluster.\n\n.SH ERRORS\nIf \n.B pbsrun \nencounters any option not found in \n.I options_to_retain, options_to_ignore, \nand \n.I options_to_transform, \nthen it is flagged as an error.\n\n
.SH SEE ALSO\npbs_attach(8B), \npbsrun_wrap(8B), \npbsrun_unwrap(8B)\n"
  },
  {
    "path": "doc/man8/pbsrun_unwrap.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n
.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n
.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n
.TH pbsrun_unwrap 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbsrun_unwrap\n\\- unwraps mpirun, reversing \n.B pbsrun_wrap\n\n.SH SYNOPSIS\n.B pbsrun_unwrap\npbsrun.<mpirun version/flavor>\n\n.B pbsrun_unwrap\n--version\n\n
.SH DESCRIPTION\nThe \n.B pbsrun_unwrap\nscript is used to reverse the actions of the \n.B pbsrun_wrap\nscript.\n\nUse \n.B pbsrun_wrap\nto wrap mpirun.\n\nUsing \n.B pbsrun_unwrap\nwith Intel MPI is \n.B deprecated\nas of 13.0.\n\n
.SH USAGE\n\nSyntax: \n.RS 5\n.B pbsrun_unwrap \npbsrun.<mpirun version/flavor>\n.RE\n\nFor example, running the following:\n\n     pbsrun_unwrap pbsrun.ch_gm\n\ncauses the following actions:\n\n
.IP \" \" 5\nChecks for a link in $PBS_EXEC/lib/MPI/pbsrun.ch_gm.link;\nif one exists, gets the pathname it points to:\n.br\n/opt/mpich-gm/bin/mpirun.ch_gm.actual\n\n.IP \" \" 5\nrm $PBS_EXEC/lib/MPI/pbsrun.ch_gm.link\n\n.IP \" \" 5\nrm /opt/mpich-gm/bin/mpirun.ch_gm\n\n.IP \" \" 5\nrm $PBS_EXEC/bin/pbsrun.ch_gm\n\n
.RS 5\n.IP \"mv\" 4\n/opt/mpich-gm/bin/mpirun.ch_gm.actual\n.br\n/opt/mpich-gm/bin/mpirun.ch_gm\n.RE\n\n.SH OPTIONS\n.IP \"--version\" 5\nThe \n.B pbsrun_unwrap\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH SEE ALSO\npbs_attach(8B), \npbsrun(8B), \npbsrun_wrap(8B)\n"
  },
  {
    "path": "doc/man8/pbsrun_wrap.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH pbsrun_wrap 8B \"3 June 2020\" Local \"PBS Professional\"\n.SH NAME\n.B pbsrun_wrap\n\\- general-purpose script for wrapping mpirun in pbsrun\n\n.SH SYNOPSIS\n.B pbsrun_wrap\n[-s] <path to actual mpirun> pbsrun.<mpirun version/flavor>\n\n.B pbsrun_wrap\n--version\n\n.SH DESCRIPTION\nThe \n.B pbsrun_wrap\nscript is used to wrap any of several versions of mpirun in \n.B pbsrun. \nThe \n.B pbsrun_wrap \nscript creates a symbolic link with the same\npath and name as the mpirun being wrapped.  This calls \n.B pbsrun, \nwhich uses \n.B pbs_attach\nto give MoM control of jobs.  
The result is transparent to the\nuser; \nwhen mpirun is called from inside a\nPBS job, \nPBS can monitor and control the job, but when mpirun is called\nfrom outside of a PBS job, it behaves as it would normally.\nSee the \n.B pbs_attach(8B) \nand \n.B pbsrun(8B)\nman pages.\n\n
Use \n.B pbsrun_unwrap\nto reverse the process.\n\nUsing \n.B pbsrun_wrap\nwith Intel MPI is \n.B deprecated \nas of 13.0.\n\nAvailable only under Linux.\n\n.SH USAGE\n\nSyntax: \n.RS 5\n.B pbsrun_wrap \n[-s] <path to actual mpirun> pbsrun.<mpirun version/flavor>\n.RE\n\n
Any mpirun version/flavor that can be wrapped has\nan initialization script ending in \".init\", \nfound in $PBS_EXEC/lib/MPI:\n.br\n.RS 5\n$PBS_EXEC/lib/MPI/pbsrun.<mpirun version/flavor>.init\n.RE\n\n
The \n.B pbsrun_wrap \nscript\ninstantiates the \n.B pbsrun \nwrapper script as\n.B pbsrun.<mpirun version/flavor> \nin the same directory where \n.B pbsrun \nis located, and sets up the link to the actual mpirun via the symbolic link\n.RS 5\n$PBS_EXEC/lib/MPI/pbsrun.<mpirun version/flavor>.link\n.RE\n\n
For example, running:\n.RS 5\n.B pbsrun_wrap \n/opt/mpich-gm/bin/mpirun.ch_gm pbsrun.ch_gm\n.RE\ncauses the following actions:\n.RS 4\nSave original mpirun.ch_gm script:\n.RS 4\n.IP \"mv\" 4\n/opt/mpich-gm/bin/mpirun.ch_gm \n.br \n/opt/mpich-gm/bin/mpirun.ch_gm.actual\n.LP\n.RE\n
Instantiate pbsrun wrapper script as pbsrun.ch_gm:\n.RS 4\n.IP \"cp\" 4\n$PBS_EXEC/bin/pbsrun $PBS_EXEC/bin/pbsrun.ch_gm\n.LP\n.RE\nLink \"mpirun.ch_gm\" to actually call \"pbsrun.ch_gm\":\n.RS 4\n.IP \"ln -s\" 6\n$PBS_EXEC/bin/pbsrun.ch_gm /opt/mpich-gm/bin/mpirun.ch_gm\n.LP\n.RE\n
Create a link so that \"pbsrun.ch_gm\" calls \"mpirun.ch_gm.actual\":\n.RS 4\n.IP \"ln -s\" 6\n/opt/mpich-gm/bin/mpirun.ch_gm.actual\n$PBS_EXEC/lib/MPI/pbsrun.ch_gm.link\n.RE\n.RE\n.RE\n\n\n\n
.SH OPTIONS\n\n.IP \"-s\" 5\nSets the \"strict_pbs\" option in the various \ninitialization scripts (e.g. pbsrun.bgl.init, pbsrun.ch_gm.init, etc.)\nto 1 from the default of 0. This means that the mpirun being wrapped by \npbsrun will be executed only if inside a PBS environment. Otherwise, the user \ngets the error:\n.RS\n.IP \nNot running under PBS\nexiting since strict_pbs is enabled; execute only in PBS\n.LP\n\n.RE\n\n
.IP \"--version\" 5\nThe \n.B pbsrun_wrap\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n\n\n.SH REQUIREMENTS\nThe mpirun being wrapped\nmust be installed and working on all the nodes in the PBS complex.\n\n\n.SH SEE ALSO\npbs_attach(8B), \npbsrun(8B), \npbsrun_unwrap(8B)\n"
  },
  {
    "path": "doc/man8/printjob.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH printjob 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B printjob \n\\- print job information\n.SH SYNOPSIS\n.B printjob \n[ -a | -s ] <job ID>\n.br\n.B printjob \n[ -a ] <file path> [<file path>...]\n.br\n.B printjob\n--version\n.SH DESCRIPTION\nPrints job information.  This command is mainly useful for troubleshooting, \nas during normal operation, the \n.B qstat(8B) \ncommand is the preferred method for displaying job-specific data and attributes.\nThe server and MoM do not have to be running to execute this command.\n\n.SH Usage\nFor a running job, you can run this command at any host using a \njob ID, and you can run this command at any execution host where\nthe job is running using a .JB file path.\n\nFor a finished job, if job history is enabled, you can run this command at\nthe server using the job ID.\n\nWhen querying the server, you must use the job ID, and the data service\nmust be running.\n\nResults will vary depending on whether you use the job ID or\na .JB file, and on which execution host you query with a .JB file.\n\n.SH PERMISSIONS\nIn order to execute\n.B printjob,\nyou must have root or Windows Administrator privilege.\n\n.SH OPTIONS\n.IP \"(no options)\" 15\nPrints all job data including job attributes.\n.IP \"-a\" 15\nSuppresses the printing of job attributes.  
Cannot be used with \n.I -s \noption.\n.IP \"-s\" 15\nPrints out the job script only.  Can be used at server or primary execution host.\nCannot be used with \n.I -a\noption.  Must be used with a job ID.\n\n.IP \"--version\" 15\nThe \n.B printjob\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH OPERANDS\n.IP \"file path\" 15\nThe \n.B printjob\ncommand accepts one or more\n.I file path\noperands at the execution host.  Files are found in PBS_HOME/mom_priv/jobs/ \non the primary execution host.  File path must include full path to file.  Cannot be\nused with \n.I -s \noption.\n\n.IP \"job ID\" 15\nThe \n.B printjob \ncommand accepts a job ID at the server host. \n.br\nFormat: <sequence number>[.<server name>][@<server name>]\n.br\nData service must be running.\n.SH STANDARD ERROR\nThe\n.B printjob\ncommand writes a diagnostic message to standard error for\neach error occurrence.\n.SH EXIT STATUS\n.IP Zero 15\nUpon successful processing of all the operands presented to the\n.B printjob\ncommand\n.LP\n.IP \"Greater than zero\" 15\nIf the \n.B printjob \ncommand fails to process any operand\n\n.SH SEE ALSO\npbs_server(8B), qstat(8B)\n"
  },
  {
    "path": "doc/man8/qdisable.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH qdisable 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B qdisable \n- prevent a queue from accepting jobs\n.SH SYNOPSIS\n.B qdisable \n<destination> [<destination> ...]\n.br\n.B qdisable\n--version\n.SH DESCRIPTION\nThe\n.B qdisable\ncommand prevents a queue from accepting batch jobs.  Sets the value of \nthe queue's \n.I enabled\nattribute to\n.I False.\nIf the command is accepted, the queue no longer accepts\n.I \"Queue Job\"\nrequests.  Jobs already in the queue continue to be processed.  You\ncan use this to drain a queue of jobs.\n\n.B Required Permission\n.br\nIn order to execute \n.B qdisable, \nthe user must have PBS Operator or Manager privilege.\n\n.SH OPTIONS\n.IP \"--version\" 8\nThe \n.B qdisable\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH  OPERANDS\nThe qdisable command accepts one or more space-separated\n.I destination\noperands.  
The operands take any of the following forms:\n\n.I <queue name>\n.IP \" \" 8\nPrevents specified queue at default server from accepting jobs.\n.LP\n.I @<server name>\n.IP \" \" 8\nPrevents all queues at specified server from accepting jobs.\n.LP\n.I <queue name>@<server name>\n.IP \" \" 8\nPrevents specified queue at specified server from accepting jobs.\n.LP\nTo prevent all queues at the default server from accepting jobs, \nuse the \n.B qmgr \ncommand:\n.br\n.B \\ \\ \\ Qmgr: set queue @default enabled=false\n\n.SH STANDARD ERROR\nThe \n.B qdisable\ncommand writes a diagnostic message to standard error for\neach error occurrence.\n\n.SH EXIT STATUS\n.IP Zero 8\nUpon successful processing of all the operands\n.IP \"Greater than zero\" 8\nIf the \n.B qdisable \ncommand fails to process any operand\n\n.SH SEE ALSO\npbs_server(8B), qmgr(8B), and qenable(8B)\n"
  },
  {
    "path": "doc/man8/qenable.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH qenable 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B qenable \n- allow a queue to accept jobs\n.SH SYNOPSIS\n.B qenable \n<destination> [<destination> ...]\n.br\n.B qenable\n--version\n.SH DESCRIPTION\nThe\n.B qenable\ncommand allows a queue to accept batch jobs.  Sets the value of \nthe queue's \n.I enabled\nattribute to\n.I True.\nIf the command is accepted, the \n.I destination\naccepts\n.I \"Queue Job\"\nrequests.  \n\n.B Required Permission\n.br\nIn order to execute \n.B qenable, \nthe user must have PBS Operator or Manager privilege.\n\n.SH OPTIONS\n.IP \"--version\" 8\nThe \n.B qenable\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH  OPERANDS\nThe qenable command accepts one or more space-separated\n.I destination\noperands.  
The operands take any of the following forms:\n\n.I <queue name>\n.IP \" \" 8\nAllows specified queue at default server to accept jobs.\n.LP\n.I @<server name>\n.IP \" \" 8\nAllows all queues at specified server to accept jobs.\n.LP\n
.I <queue name>@<server name>\n.IP \" \" 8\nAllows specified queue at specified server to accept jobs.\n.LP\nTo allow all queues at the default server to accept jobs, \nuse the \n.B qmgr \ncommand:\n.br\n.B \\ \\ \\ Qmgr: set queue @default enabled=true\n\n
.SH STANDARD ERROR\nThe \n.B qenable\ncommand writes a diagnostic message to standard error for\neach error occurrence.\n\n.SH EXIT STATUS\n.IP Zero 8\nUpon successful processing of all the operands\n.IP \"Greater than zero\" 8\nIf the \n.B qenable \ncommand fails to process any operand\n\n
.SH SEE ALSO\npbs_server(8B), qmgr(8B), and qdisable(8B)\n"
  },
  {
    "path": "doc/man8/qmgr.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH qmgr 8B \"6 May 2020\" Local \"PBS Professional\"\n.nf\n.SH NAME\n.B qmgr \n\\- administrator's command interface for managing PBS\n.SH SYNOPSIS\n.B At shell command line:\n.br\n.B qmgr \n-c '<directive> [-a] [-e] [-n] [-z]'\n.br\n.B qmgr \n-c 'help [<help option>]'\n.br\n.B qmgr \n<return>\n.br\n.B qmgr \n--version\n.br\n\n.B In qmgr session:\n.br\n<directive> [-a] [-e] [-n] [-z]\n.br\nhelp <help option>\n\n.SH DESCRIPTION\nThe PBS manager command, \n.B qmgr, \nprovides a command-line interface to parts of PBS.  The \n.B qmgr \ncommand is used to create or delete queues,\nvnodes, resources, and hooks, to set or change vnode, queue, hook,\nserver, or scheduler attributes and resources, and to view information\nabout hooks, queues, vnodes, resource definitions, the server, and \nschedulers.  \n\nFor a list of quick summaries of information about syntax, commands,\nattributes, operators, names, and values, type \"help\" or \"?\" at the\nqmgr prompt.  See \"Printing Usage Information\", below.\n\n.B Modes of Operation\n.br\nWhen you type qmgr -c '<directive>', \n.B qmgr \nperforms its\ntask and then exits.  \n\nWhen you type qmgr <return>, \n.B qmgr \nstarts a session and presents you with its command line prompt.  The \n.B qmgr\ncommand then reads directives etc. 
from standard input; see \n\"Directive Syntax\", below.  You can edit the command\nline; see \"Reusing and Editing the qmgr Command\nLine\", below.  \n\nFor a qmgr prompt, type: \n.br\n.B \\ \\ \\ qmgr <return> \n.br\nYou will see the qmgr prompt:\n.br\n.B \\ \\ \\ Qmgr:\n.br\n\n.B Required Privilege\n.br\nThe qmgr command requires different levels of privilege depending on\nthe operation to be performed.\n\nAll users can list or print attributes except for hook attributes.  \n\nPBS Operator or Manager privilege is required in order to set or\nchange vnode, queue, server, or scheduler attributes.  PBS Manager\nprivilege is required in order to create or delete queues, vnodes, and\nresources.\n\nUnder Linux, root privilege is required in order to create hooks, or\noperate on hooks or the \n.I job_sort_formula \nserver attribute.  Under\nWindows, this must be done from the installation account.\n\nFor domained environments, the installation account must be a local\naccount that is a member of the local Administrators group on the\nlocal computer.  For standalone environments, the installation account\nmust be a local account that is a member of the local Administrators\ngroup on the local computer.\n\nUsers without manager or operator privilege cannot view custom\nresources or resource definitions which were created to be invisible\nto users.\n\n.B When To Run qmgr At Server Host\n.br\nWhen operating on hooks or on the \n.I job_sort_formula \nserver attribute,\nthe qmgr command must be run at the server host.\n\n.B Reusing and Editing the qmgr Command Line\n.br\nYou can reuse or edit qmgr command lines.  The qmgr command maintains\na history of commands entered, up to a maximum of 500.  You can use\nthe 'history' command to see a numbered list of commands, and the !<n>\ncommand to execute the line whose number is n.  You must not put any\nspaces between the bang (\"!\") and the number.  
For example, to execute\nthe 123rd command, type the following:\n.br\n.B \\ \\ \\ !123\n.br\nYou can see the last m commands by typing 'history m'.  For example,\nto see the last 6 commands, type the following:\n.br\n.B \\ \\ \\ history 6\n.br\nYou can use the up and down arrows to navigate through the command\nhistory list, and the left and right arrows to navigate within a\ncommand line.  Within a command line, you can use emacs commands to\nmove forward and backward, and delete characters.\n\nYou can edit the qmgr command line using the backspace and delete\nkeys, and you can insert characters anywhere in a command line.\n\nHistory is maintained across qmgr sessions, so that if you start qmgr,\nthen exit, then restart it, you can reuse your commands from the\nprevious session.  If you exit qmgr and then restart it, the command\nlines are renumbered.\n\nIf you enter the same command line more than once in a row, only one\noccurrence is recorded in the history.  If you enter the same command\nline multiple times, but intersperse other command lines after each\nline, each occurrence is recorded.\n\nEach user's history is unique to that user on that host.\n\nIn the case where an account runs concurrent sessions, the most recent\nlogout of a session overwrites history from previous logouts.  For\nexample, if two people are both logged in as root and using qmgr, the\nsecond person to log out overwrites the history file.\n\n.B The qmgr History File\n.br\nThe qmgr command stores and retrieves its history.  First, it tries to\nwrite its history in the ${HOME}/.pbs_qmgr_history file.  If this file\nor directory location is not writable, the command stores its history\nin $PBS_HOME/spool/.pbs_qmgr_history_<user name>.  
If this file is\nalso not writable, the following happens:\n\n   The qmgr command prints error messages once at qmgr startup\n\n   The qmgr command cannot provide history across qmgr sessions\n\n.SH OPTIONS TO qmgr\nThe following table lists the options to qmgr:\n\n.nf\n.B Option \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\  Action\n-----------------------------------------------------------------------\n<return>           Starts a qmgr session and presents the user with \n                   the qmgr prompt\n-----------------------------------------------------------------------\n-a                 Aborts qmgr on any syntax errors or any requests \n                   rejected by a server.\n-----------------------------------------------------------------------\n-c '<directive>'   Executes a single command (directive) and exits qmgr. \n                   The directive must be enclosed in single or double \n                   quote marks, for example:\n.B \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ qmgr -c \"print server\" \n-----------------------------------------------------------------------\n-c 'help           Prints out usage information.  \n [<help option>]'  See \"Printing Usage Information\"\n-----------------------------------------------------------------------\n-e                 Echoes all commands to standard output\n-----------------------------------------------------------------------\n-n                 No commands are executed; syntax checking only \n                   is performed\n-----------------------------------------------------------------------\n-z                 No errors are written to standard error\n-----------------------------------------------------------------------\n--version          The qmgr command returns its PBS version information \n                   and exits.  
This option can only be used alone\n-----------------------------------------------------------------------\n\n\n.SH Directives\nA qmgr \n.I directive \nis a command together with the object(s) to be operated on, the\nattribute(s) belonging to the object that is to be changed, the\noperator, and the value(s) the attribute(s) will take.  In the case of\nresources, you can set the type and/or flag(s).\n\n.B Directive Syntax\n.br\nA directive is terminated by a newline or a semicolon (\";\"). Multiple\ndirectives may be entered on a single line.  A directive may extend\nacross lines by escaping the newline with a backslash (\"\\\").\n\nComments begin with the \"#\" character and continue to the end of the\nline. Comments and blank lines are ignored by qmgr.\n\n.B Server, Scheduler, Queue, Vnode Directives\n.br\nSyntax for operating on servers, schedulers, queues, vnodes:\n.RS 3\n.I <command> <object type> [<object name(s)>] [<attribute> <operator> <value>[,<attribute> <operator> <value>,...]]\n.RE\n\n.B Resource Directives\n.br\nSyntax for operating on resources:\n.nf\n.RS 3\n.I <command> <resource name> [<resource name> ...] [type = <type>][,flag = <flag(s)>]\n.RE\n.fi\nFor information about resources, see \n.I pbs_resources.7B.\n\n.B Hook-only Directives\n.br\nThe directives here apply only to hooks.  Other directives apply to all objects such as \nqueues, resources, hooks, etc. 
\n\nSyntax for importing and exporting site-defined hooks:\n.nf\n.RS 3\n.I import hook <hook name> application/x-python <content-encoding> (<input file> | -) \n.br\n.I export hook <hook name> <content-type> <content-encoding> [<output file>]\n.fi\n.RE\nSyntax for importing site-defined hook configuration file:\n.nf\n.RS 3\n.I import hook <hook name> application/x-config <content-encoding> (<input file> | -)  \n.fi\n.RE\nSyntax for importing built-in hook configuration file:\n.nf\n.RS 3\n.I import pbshook <hook name> application/x-config <content-encoding> (<input file> | -)  \n.fi\n.RE\n\n.B Using Directives\n.br\nYou can use a directive from the shell command line or from within\nthe qmgr session.\n\nTo use a directive from the command line, enclose the command and its\narguments in single or double quotes.\n.br\n.I \\ \\ \\ qmgr -c '<command> <command arguments>'\n\nFor example, to have qmgr print server information and exit:\n.br\n.B \\ \\ \\ qmgr -c \"print server\"\n\nTo use a directive from within the qmgr session, first start qmgr:\n.br\n.I \\ \\ \\ qmgr <return>\n\nThe qmgr session presents a qmgr prompt:\n.br\n.B \\ \\ \\ Qmgr:\n\nAt the qmgr prompt, enter the directive (a command and its arguments).\nFor example, to enter the same \"print server\" directive:\n.br\n.B \\ \\ \\ Qmgr: print server\n\n.B Commands Used in Directives\n.br\nCommands can be abbreviated to their minimum unambiguous form.\nCommands apply to all target objects unless explicitly limited.  
The\nfollowing table lists the commands, briefly tells what they do, and\nlists the section with the full description:\n\n.nf\n.B Command \\ Abbr \\ Effect \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ See Description\n-----------------------------------------------------------------------\nactive   a     Specifies active objects   \"Making Objects Active\"\n-----------------------------------------------------------------------\ncreate   c     Creates object             \"Creating Objects (Server, \n                                           Scheduler, Vnode, Queue, \n                                           Hook)\"\n-----------------------------------------------------------------------\ndelete   d     Deletes object             \"Deleting Objects\"\n-----------------------------------------------------------------------\nexit           Exits (quits) the qmgr     \n               session\n-----------------------------------------------------------------------\nexport   e     Exports hook               \"Exporting Hooks\"\n-----------------------------------------------------------------------\nhelp     h|?   
Prints usage to stdout     \"Printing Usage Information\"\n-----------------------------------------------------------------------\nimport   i     Imports hook or            \"Importing Hooks\"\n               configuration file         \"Importing Hook \n                                           Configuration Files\"\n-----------------------------------------------------------------------\nlist     l     Lists object attributes    \"Listing Objects and Their \n               and their values            Attributes\"\n-----------------------------------------------------------------------\nprint    p     Prints creation and        \"Printing Creation and \n               configuration commands      Configuration Commands\"\n-----------------------------------------------------------------------\nquit     q     Quits (exits) the qmgr \n               session\n-----------------------------------------------------------------------\nset      s     Sets value of attribute    \"Setting Attribute and \n                                           Resource Values\"\n-----------------------------------------------------------------------\nunset    u     Unsets value of attribute  \"Unsetting Attribute and \n                                           Resource Values\"\n-----------------------------------------------------------------------\n\n.SH Arguments to Directive Commands\n\n.B Object Arguments to Directive Commands\n.br\nThe qmgr command can operate on objects (servers, schedulers, queues, vnodes,\nresources, hooks, and built-in hooks).  Each of these can be\nabbreviated inside a directive.  
The following table lists the objects\nand their abbreviations:\n\n.nf\n.B Object \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ Can Be Created or \\ \\ \\ \\ \\  Can Be\n.B Name \\ \\ \\ \\ \\ Abbr \\ Object \\ \\ \\ Deleted By: \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ Modified By:\n-----------------------------------------------------------------------\nserver    s    server     No one (created         Administrator,\n                          at installation)        Operator, Manager\n-----------------------------------------------------------------------\nsched     sc   default    No one (created         Administrator,\n               scheduler  at installation)        Operator, Manager \n               --------------------------------------------------------                 \n               multisched Administrator, Manager  Administrator, \n                                                  Operator, Manager\n-----------------------------------------------------------------------\nqueue     q    queue      Administrator,          Administrator,\n                          Operator, Manager       Operator, Manager\n-----------------------------------------------------------------------\nnode      n    vnode      Administrator,          Administrator,\n                          Operator, Manager       Operator, Manager\n-----------------------------------------------------------------------\nresource  r    resource   Administrator, Manager  Administrator, \n                                                  Manager\n-----------------------------------------------------------------------\nhook      h    hook       Linux: root             Linux: root\n\n                          Windows: installation   Windows: installation   \n                                   account                 account\n-----------------------------------------------------------------------\npbshook   p    built-in   No one (created         Linux: root\n               hook       at 
installation)        \n                                                  Windows: installation \n                                                           account \n-----------------------------------------------------------------------\n\n.B Specifying Active Server\n.br\nThe qmgr command operates on objects (queues, vnodes, etc.) at the\nactive server.  There is always at least one active server; the\ndefault server is the active server unless other servers have been\nmade active.  The default server is the server managing the host where\nthe qmgr command runs, meaning it is the server specified in that\nhost's pbs.conf file.  Server names have the following format:\n.br\n.I \\ \\ \\ <hostname>[:<port number>]\n.br\nwhere \n.I hostname \nis the fully-qualified domain name of the host on which\nthe server is running and \n.I port number \nis the port number to which to\nconnect.  If \n.I port number \nis not specified, the default port number, 15001, is used.\n\nTo specify the default server:\n.br\n.I \\ \\ \\ @default\n\nTo specify a named server:\n.br\n.I \\ \\ \\ @<server name>\n\nTo specify all active servers:\n.br\n.I \\ \\ \\ @active\n\n.B Using Lists of Object Names \n.br\nIn a qmgr directive, \n.I object name(s) \nis a list of one or more names of\nspecific objects.  The administrator specifies the name of an object\nwhen creating the object.  The name list is in the form:\n.br\n.I \\ \\ \\ <object name>[@<server>][,<object name>[@<server>] ...]\n.br\nwhere \n.I server \nis replaced in the directive with \"default\", \"active\", or\nthe name of the server.  The name list must conform to the following:\n.RS 3\nThere must be no space between the object name and the @ sign.\n\nName lists must not contain white space between entries.  \n\nAll objects in a list must be of the same type.  
\n\nNode attributes cannot be used as vnode names.\n.RE\n\n.B Specifying Object Type and Name\n.br\nYou can specify objects in the following ways:\n\nTo act on the active objects of the named type, at the active server:\n.RS 3\n.I <object type>\n\nFor example, to list all active vnodes, along with their attributes,\nat the active server:\n.br\n.B Qmgr: list node\n.RE\n\nTo act on the active objects of the named type, at a specified server:\n.RS 3\n.I <object type> @<server name>  \n(note space before @ sign)\n\nFor example, to list all active vnodes at the default server, along\nwith their attributes:\n.br\n.B Qmgr: list node @default\n\nFor example, to print out all queues at the default server, along with\ntheir attributes:\n.br\n.B qmgr -c \"print queue @default\"\n.RE\n\nTo act on a specific named object:\n.RS 3\n.I <object type> <object name>\n\nFor example, to list Node1 and its attributes:\n.br\n.B Qmgr: list node Node1\n\nTo list queues workq, slowq, and fastq at the active server:\n.br\n.B Qmgr: list queue workq,slowq,fastq\n.RE\n\nTo act on the named object at the specified server:\n.RS 3\n.I <object type> <object name>@<server name>\n\nFor example, to list Node1 at the default server, along with the\nattributes of Node1:\n.br\n.B Qmgr: list node Node1@default\n\nTo list queues Queue1 at the default server, Queue2 at Server2, and\nQueue3 at the active server:\n.br\n.B Qmgr: list queue Queue1@default,Queue2@Server2,Queue3@active\n.RE\n\n.B Operators in Directive Commands\n.br\nIn a qmgr directive, \n.I operator \nis the operation to be performed with\nthe attribute and its value.  Operators are listed here:\n\n.nf\n.B Operator \\ \\ Effect\n-----------------------------------------------------------------------\n=          Sets the value of the attribute or resource.  
If the \n           attribute or resource has an existing value, the current \n           value is replaced with the new value.\n-----------------------------------------------------------------------\n+=         Increases the current value of the attribute or resource by \n           the amount in the new value.  When used for a string array, \n           adds the new value as another string after a comma.\n-----------------------------------------------------------------------\n-=         Decreases the current value of the attribute or resource by \n           the specified amount.  When used for a string array, removes \n           the first matching string.\n-----------------------------------------------------------------------\n\nExample: Set routing destination for queue Queue1 to be Dest1:\n.br\n.B \\ \\ \\ Qmgr: set queue Queue1 route_destinations = Dest1\n\nExample: Add a new routing destination for queue Queue1:\n.br\n.B \\ \\ \\ Qmgr: set queue Queue1 route_destinations += Dest2\n\nExample: Remove a routing destination from queue Queue1:\n.br\n.B \\ \\ \\ Qmgr: set queue Queue1 route_destinations -= Dest2\n\nWhen setting numerical resource values, you can use only the equal sign (\"=\").\n\n.B Windows Requirements For Directive Arguments\n.br\nUnder Windows, use double quotes when specifying arguments to qmgr.\nFor example:\n.br\n   Qmgr: import hook hook1 application/x-python default \\*(lq\\\\Documents and Settings\\\\pbsuser1\\\\hook1.py\\\\\\*(rq\n.br\nor\n.br\n   qmgr -c 'import hook hook1 application/x-python default \\*(lq\\\\Documents and Settings\\\\pbsuser1\\\\hook1.py\\\\\\*(rq'\n\n.SH Operating on Objects (Server, Scheduler, Vnode, Queue, Hook)\n.B Making Objects Active\n.br\nMaking objects active is a way to set up a list of objects, all of the\nsame type, on which you can then use a single command.  
For example,\nif you are going to set the same attribute to the same value on\nseveral vnodes, you can make all of the target vnodes active before\nusing a single command to set the attribute value, instead of having\nto give the command once for each vnode.  You can make any type of\nobject active except for resources or hooks.\n\nWhen an object is active, it is acted upon when you specify its type\nbut do not specify names.  When you specify any object names in a\ndirective, active objects are not operated on unless they are named in\nthe directive.\n\nYou can specify a list of active objects for each type of object. You\ncan have active objects of multiple types at the same time.  The\nactive objects of one type have no effect on whether objects of\nanother type are active.\n\nObjects are active only until the qmgr command is exited, so this\nfeature can be used only at the qmgr prompt.\n\nEach time you make any objects active at a given server, that list of\nobjects replaces any active objects of the same kind at that server.\nFor example, if you have four queues at a particular server, and you\nmake Q1 and Q2 active, then later make Q3 and Q4 active, the result is\nthat Q3 and Q4 are the only active queues.\n\nYou can make different objects be active at different servers\nsimultaneously.  For example, you can set vnodes N1 and N2 at the\ndefault server, and vnodes N3 and N4 at server Server2 to be active at\nthe same time.\n\nTo make all objects inactive, quit qmgr.  
When you quit qmgr, any\nobject that was active is no longer active.\n\n.B Using the active Command\n.br\nTo make the named object(s) of the specified type active:\n.RS 3\n.I active <object type> [<object name>[,<object name> ...]]\n\nExample: To make queue Queue1 active:\n.br\n.B Qmgr: active queue Queue1\n\nExample: To make queues Queue1 and Queue2 at the active server be\nactive, then enable them:\n.br\n.B Qmgr: active queue Queue1,Queue2\n.br\n.B Qmgr: set queue enabled=True\n\nExample: To make queue Queue1 at the default server and queue Queue2\nat Server2 be active:\n.br\n.B Qmgr: active queue Queue1@default,Queue2@Server2\n\nExample: To make vnodes N1, N2, N3, and N4 active, and then give them\nall the same value for their \n.I max_running \nattribute:\n.br\n.B Qmgr: active node N1,N2,N3,N4\n.br\n.B Qmgr: set node max_running = 2\n.RE\n\nTo make all object(s) of the specified type at the specified server\nactive:\n.RS 3\n.I active <object type> @<server name>        \n(note space before @ sign)\n\nExample: To make all queues at the default server active:\n.br\n.B Qmgr: active queue @default\n\nExample: To make all vnodes at server Server2 active:\n.br\n.B Qmgr: active node @Server2\n.RE\n\nTo report which objects of the specified type are active:\n.RS 3\n.I active <object type>  \n\nThe qmgr command prints a list of names of active objects of the\nspecified type to stdout.\n.RE\n\n.B Creating Objects (Server, Scheduler, Vnode, Queue, Hook)\n.br\nTo create one new object of the specified type for each name, and give\nit the specified name:\n.RS 3\n.I create <object type> <object name>[,<object name> ...] [[<attribute> = <value>] [,<attribute> = <value>] ...]\n.RE\n\nCan be used only with multischeds, queues, vnodes, resources, and hooks.  
Cannot be\nused with built-in hooks.\n\n.RS 3\nFor example, to create a multisched named multisched_1 at the active server:\n.br\n.B Qmgr: create sched multisched_1\n\nFor example, to create a queue named Q1 at the active server: \n.br\n.B Qmgr: create queue Q1\n\nFor example, to create a vnode named N1 and a vnode named N2:\n.br\n.B Qmgr: create node N1,N2\n\nFor example, to create queue Queue1 at the default server and queue\nQueue2 at Server2:\n.br\n.B Qmgr: create queue Queue1@default,Queue2@Server2\n\nFor example, to create vnodes named N1, N2, N3, and N4 at the active\nserver, and to set their Mom attribute to Host1 and their \n.I max_running\nattribute to 1:\n.br\n.B Qmgr: create node N1,N2,N3,N4 Mom=Host1, max_running = 1\n\nFor example, to create a host-level consumable string resource named \"foo\":\n.br\n.B qmgr -c \"create resource foo type=string,flag=nh\"\n.RE\n\nAll objects of the same type at a server must have unique names.  For\nexample, each queue at server Server1 must have a unique name.\nObjects at one server can have the same name as objects at another\nserver.\n\nYou can create multiple objects of the same type with a single\ncommand.  You cannot create multiple types of objects in a single\ncommand.  \n\nTo create multiple resources of the same type\nand flag, separate each resource name with a comma:\n.RS 3\n.I qmgr -c \"create resource <resource>[,<resource> ...] 
type=<type>,flag=<flag(s)>\"\n.RE\n\n.B Examples of Creating Objects\n.br\nExample: Create queue:\n.RS 3\n.B Qmgr: create queue fast priority=10,queue_type=e,enabled = true,max_running=0\n.RE\n\nExample: Create queue, set resources:\n.RS 3\n.B Qmgr: create queue little\n.br\n.B Qmgr: set queue little resources_max.mem=8mw,resources_max.cput=10\n.RE\n\n.B Deleting Objects\n.br\nTo delete the named object(s):\n.RS 3\n.I delete <object type> <object name>[,<object name> ...]\n\nWhen you delete more than one object, do not put a space after a comma.\n\nCan be used only with queues, vnodes, resources, and hooks.  Cannot be\nused with built-in hooks.\n\nFor example, to delete queue Q1 at the active server:\n.br\n.B Qmgr: delete queue Q1\n\nFor example, to delete vnodes N1 and N2 at the active server:\n.br\n.B Qmgr: delete node N1,N2\n\nFor example, to delete queue Queue1 at the default server and queue\nQueue2 at Server2:\n.br\n.B Qmgr: delete queue Queue1@default,Queue2@Server2\n\nFor example, to delete resource \"foo\" at the active server:\n.br\n.B Qmgr: delete resource foo\n.RE\n\nTo delete the active objects of the specified type:\n.RS 3\n.I delete <object type>\n\nFor example, to delete the active queues:\n.br\n.B Qmgr: delete queue\n.RE\n\nTo delete the active objects of the specified type at the specified\nserver:\n.RS 3\n.I delete <object type> @<server name>\n\nFor example, to delete the active queues at server Server2:\n.br\n.B Qmgr: delete queue @Server2\n.RE\n\nYou can delete multiple objects of the same type with a single\ncommand.  You cannot delete multiple types of objects in a single\ncommand.  
To delete multiple resources, separate the\nresource names with commas.\n.RS 3\n\nFor example:\n.br\n.B Qmgr: delete resource r1,r2\n.RE\n\nYou cannot delete a resource that is requested by a job or\nreservation, or that is set on a server, queue, or vnode.\n\n.SH Operating on Attributes and Resources\nYou can specify attributes and resources for named objects or for all\nobjects of a type.\n\n.B Setting Attribute and Resource Values\n.br\nTo set the value of the specified attribute(s) for the named\nobject(s):\n.RS 3\n.I set <object type> <object name>[,<object name> ...] <attribute> = <value> [,<attribute> = <value> ...]\n.RE\n\nEach specified attribute is set for each named object, so if you\nspecify three attributes and two objects, both objects get all three\nattributes set.\n\nTo set the attribute value for all active objects when there are\nactive objects of the type specified:\n.br\n.I \\ \\ \\ set <object type> <attribute> = <value>\n\nTo set the attribute value for all active objects at the specified\nserver when there are active objects of the type specified:\n.RS 3\n.I set <object type> @<server name> <attribute> = <value>\n\nFor example, to set the amount of memory on a vnode:\n.br\n.B Qmgr: set node Vnode1 resources_available.mem = 2mb\n.RE\n\nIf the attribute is one which describes a set of resources such as\n.I resources_available, resources_default, resources_max, resources_used,\netc., the attribute is specified in the form:\n.br\n.I \\ \\ \\ <attribute name>.<resource name>\n\nYou can have spaces between attribute=value pairs.  
\n\n.B Examples of Setting Attribute Values\n.br\nIncrease limit on queue:\n.RS 3\n.B Qmgr: set queue fast max_running +=2\n.RE\n\nSet software resource on mynode:\n.RS 3\n.B Qmgr: set node mynode resources_available.software = \"myapp=/tmp/foo\"\n.RE\n\nSet limit on queue:\n.RS 3\n.B Qmgr: set queue fast max_running = 10\n.RE\n\nSet vnode offline:\n.RS 3\n.B Qmgr: set node mynode state = \"offline\"\n.RE\n\n.B Unsetting Attribute and Resource Values \n.br\nYou can use the qmgr command to unset attributes of any object, except\nfor the \n.I type \nattribute of a built-in hook.\n\nTo unset the value of the specified attributes of the named object(s):\n.nf\n.RS 3\n.I unset <object type> <object name>[,<object name> ...] <attribute>[,<attribute>...]\n.RE\n\nTo unset the value of specified attributes of active objects:\n.br\n.I \\ \\ \\ unset <object type> <attribute>[,<attribute>...]\n\nTo unset the value of specified attributes of the named object:\n.br\n.I \\ \\ \\ unset <object type> <object name> <attribute>[,<attribute>...]\n\nTo unset the value of specified attributes of active objects at the\nspecified server:\n.br\n.I \\ \\ \\ unset <object type> @<server name> <attribute>[,<attribute>...]\n\n.B Example of Unsetting Attribute Value\n.br\nUnset limit on queue:\n.br\n.B \\ \\ \\ Qmgr: unset queue fast max_running\n\n.B Caveats and Restrictions for Setting Attribute and Resource Values\n.br\nIf the value includes whitespace, commas, or other special characters,\nsuch as the # character, the value string must be enclosed in single\nor double quotes.  
For example:\n.RS 3\n.B Qmgr: set node Vnode1 comment=\"Node will be taken offline Friday at 1:00 for memory upgrade.\"\n.RE\n\nYou can set or unset attribute values for only one type of object in each command.\n\nYou can use the qmgr command to set attributes of any object, except\nfor the \n.I type \nattribute of a built-in hook.\n\nYou can have spaces between attribute names.\n\nAttribute and resource values must conform to the format for the\nattribute or resource type.  \n\nMost of a vnode's attributes may be set using qmgr.  However, some\nmust be set on the individual execution host in local vnode definition\nfiles, NOT by using qmgr.  \n\n.B Setting Resource Type and Flag(s)\n.br\nYou can use the qmgr command to set or unset the type and flag(s) for\nresources.\n\nResource types can be the following:\n.RS 3\nstring\n.br\nboolean\n.br\nstring_array\n.br\nlong\n.br\nsize\n.br\nfloat\n.RE\n\nTo set a resource type:\n.br\n.I \\ \\ \\ set resource <resource name> type = <type>\n\nSets the type of the named resource to the specified type.  For\nexample:\n.br\n.B \\ \\ \\ qmgr -c \"set resource foo type=string_array\"\n\n.B Resource Accumulation Flags\nThe resource accumulation flag for a resource can be one of the\nfollowing:\n\n.B Flag \\ \\ \\ \\ \\ \\ \\ Meaning\n-----------------------------------------------------------------------\n(no flags)  Indicates a queue-level or server-level resource that is \n            not consumable.\n-----------------------------------------------------------------------\nfh          The amount is consumable at the host level for only the \n            first vnode allocated to the job (vnode with first task.) \n            Must be consumable or time-based. 
Cannot be used with \n            Boolean or string resources.\n\n            This flag specifies that the resource is accumulated at the \n            first vnode, meaning that the value of \n            resources_assigned.<resource> is incremented only at the \n            first vnode when a job is allocated this resource or when a \n            reservation requesting this resource on this vnode starts.\n-----------------------------------------------------------------------\nh           Indicates a host-level resource. Used alone, means that the \n            resource is not consumable. Required for any resource that \n            will be used inside a select statement. This flag selects \n            hardware. This flag indicates that the resource must be \n            requested inside of a select statement.\n\n            Example: for a Boolean resource named \"green\":\n.B \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ green type=boolean,flag=h\n-----------------------------------------------------------------------\nnh          The amount is consumable at the host level, for all vnodes \n            assigned to the job. Must be consumable or time-based. \n            Cannot be used with Boolean or string resources. \n\n            This flag specifies that the resource is accumulated at the \n            vnode level, meaning that the value of \n            resources_assigned.<resource> is incremented at relevant \n            vnodes when a job is allocated this resource or when a \n            reservation requesting this resource on this vnode starts.\n\n            This flag is not used with dynamic consumable resources. \n            A scheduler will not oversubscribe dynamic consumable \n            resources.\n-----------------------------------------------------------------------\nq           The amount is consumable at the queue and server level. 
\n            When a job is assigned one unit of a resource with this \n            flag, the resources_assigned.<resource> attribute at the \n            server and any queue is incremented by one. Must be \n            consumable or time-based.\n\n            This flag specifies that the resource is accumulated at the \n            queue and server level, meaning that the value of \n            resources_assigned.<resource> is incremented at each queue \n            and at the server when a job is allocated this resource. \n            When a reservation starts, allocated resources are added to \n            the server's resources_assigned attribute.\n\n            This flag is not used with dynamic consumable resources. \n            A scheduler will not oversubscribe dynamic consumable \n            resources.\n-----------------------------------------------------------------------\n\n.B Resource Permission Flags\nThe permission flag for a resource can be one of the following:    \n\n.B Flag \\ \\ \\ \\ \\ \\ \\ Meaning\n-----------------------------------------------------------------------\n(no flag)   Users can view and request the resource, and qalter a \n            resource request for this resource.\n-----------------------------------------------------------------------\ni           \"Invisible\".  Users cannot view or request the resource.  \n            Users cannot qalter a resource request for this resource.\n-----------------------------------------------------------------------\nr           \"Read only\".  
Users can view the resource, but cannot \n            request it or qalter a resource request for this resource.\n-----------------------------------------------------------------------\n\nTo set resource flags, concatenate the flags you want without spaces\nor commas.\n\nTo set the flag(s) of the named resource to the specified flag(s):\n.RS 3\n.I set resource <resource name> flag=<flag(s)>\n\nFor example:\n.br\n.B qmgr -c \"set resource foo flag=nhi\"\n.RE\n\nTo set both type and flag(s):\n.RS 3\n.I set resource <resource name> type=<type>,flag=<flag(s)> \n\nSets the type and flag(s) of the named resource to the specified type\nand flag(s).  For example:\n.br\n.B qmgr -c \"set resource foo type=long,flag=nhi\"\n.RE\n\nYou can set multiple resources by separating the names with commas.\n.RS 3\nFor example:\n.br\n.B qmgr -c \"set resource r1,r2 type=long\"\n.RE\n\nYou cannot set the type for a resource that is requested by a job or\nreservation, or set on a server, queue, or vnode.\n\nYou cannot set the flag(s) to h, nh, fh, or q for a resource that is\nrequested by a job or reservation.\n\n.B Unsetting Resource Flag(s)\n.br\nYou can use the qmgr command to unset the flag(s) for resources.\n\nTo unset the flag(s) of the named resource:\n.RS 3\n.I unset resource <resource name> flag\n\nFor example:\n.br\n.B qmgr -c \"unset resource foo flag\"\n.RE\n\nYou can unset the flag(s) of multiple resources by separating the\nresource names with commas.  
\n.RS 3\nFor example:\n.br\n.B qmgr -c \"unset resource r1,r2 flag\"\n.RE\n\nYou cannot unset the type for a resource.\n\nYou cannot unset the flag(s) for a resource that is requested by a job\nor reservation, or set on any server, queue, or vnode.\n\n.SH Viewing Object, Attribute, and Resource Information\n.B Listing Objects and Their Attributes\n.br\nYou can use the qmgr command to list attributes of any object,\nincluding attributes at their default values.\n\nTo list the attributes, with associated values, of the named\nobject(s):\n.RS 3\n.I list <object type> <object name>[,<object name> ...]\n.RE\n\nTo list values of the specified attributes of the named object:\n.RS 3\n.I list <object type> <object name> <attribute name>[, <attribute name>]...\n.RE\n\nTo list attributes, with associated values, of active objects of the\nspecified type at the active server:\n.br\n.I \\ \\ \\ list <object type>\n\nTo list all objects of the specified type at the specified server,\nwith their attributes and the values associated with the attributes:\n.br\n.I \\ \\ \\ list <object type> @<server name>\n\nTo list attributes of the active server: \n.RS 3\n.I list server\n.br\nIf no server other than the default server has been made active,\nlists attributes of the default server (it is the active server).\n.RE\n\nTo list attributes of the specified server:\n.br\n.I \\ \\ \\ list server <server name>\n\nTo list all attributes of all schedulers:\n.br\n.I \\ \\ \\ list sched\n\nTo list all attributes of the specified scheduler:\n.br\n.I \\ \\ \\ list sched <scheduler name>\n\nTo list all hooks, along with their attributes:\n.br\n.I \\ \\ \\ list hook\n\nTo list attributes of the specified hook:\n.br\n.I \\ \\ \\ list hook <hook name>\n\n.B Examples of Listing Objects and Their Attributes\n.br\nList serverA's schedulers' attributes:\n.br\n.B \\ \\ \\ Qmgr: list sched @serverA \n\nList attributes for default server's scheduler(s):\n.br\n.B \\ \\ \\ Qmgr: l sched @default\n\nList PBS 
version for default server's scheduler:\n.br\n.B \\ \\ \\ Qmgr: l sched @default pbs_version\n\nList queues at a specified server:\n.br\n.B \\ \\ \\ Qmgr: list queue @server1\n\n.B Listing Resource Definitions\n.br\nYou can use the qmgr \n.B list \nand \n.B print \ncommands to list resource\ndefinitions showing resource name, type, and flag(s).\n\nTo list the name, type, and flag(s) of the named resource(s):\n.RS 3\n.I list resource <resource name>[,<resource name> ...]\n.RE\nor\n.RS 3\n.I print resource <resource name>[,<resource name> ...]\n.RE\n\nTo list name, type, and flag(s) of custom resources only: \n.RS 3\n.I list resource\n.RE\nor\n.RS 3\n.I print resource\n.RE\nor\n.RS 3\n.I print server \n(note that this also prints information for the active server)\n.RE\n\nTo list all custom resources at the specified server, with their\nnames, types, and flags:\n.RS 3\n.I list resource @<server name>\n.RE\nor\n.RS 3\n.I print resource @<server name>\n.RE\n\nWhen used by a non-privileged user, qmgr prints only resource\ndefinitions for resources that are visible to non-privileged users\n(those that do not have the \n.B i \nflag set).\n\n.B Printing Creation and Configuration Commands\n.br\nFor printing the creation commands for any object except for a\nbuilt-in hook.\n\nTo print out the commands to create the named object(s) and set their\nattributes to their current values:\n.RS 3\n.I print <object type> <object name>[,<object name> ...] 
\n.br\nwhere object name follows the name rules in \"Using Lists of Object Names\".\n.RE\n\nTo print out the commands to create the named object and set its\nattributes to their current values:\n.RS 3\n.I print <object type> <object name> [<attribute name>[, <attribute name>]...]\n.br\nwhere object name follows the name rules in \"Using Lists of Object Names\".\n.RE\n\nTo print out the commands to create and configure the active objects\nof the named type:\n.br\n.I \\ \\ \\ print <object type>\n\nTo print out the commands to create and configure all of the objects\nof the specified type at the specified server:\n.br\n.I \\ \\ \\ print <object type> @<server name>\n\nTo print out the commands to create each queue, set the attributes of\neach queue to their current values, and set the attributes of the\nserver to their current values:\n.br\n.I \\ \\ \\ print server\n\nThis is used for the server and queues, but not hooks.\n\nPrints information for the active server.  If there is no active\nserver, prints information for the default server.\n\n.B Caveats for Viewing Information\n.br\nSome attributes whose values are unset do not appear in the output of the\nqmgr command.\n\nDefinitions for built-in resources do not appear in the output of the\nqmgr command.\n\nWhen a non-privileged user prints resource definitions, qmgr prints\nonly resource definitions for resources that are visible to\nnon-privileged users (those that do not have the \n.B i \nflag set).\n\n.SH Saving and Re-creating Server and Queue Information\nTo save and recreate server and queue configuration, print the\nconfiguration information to a file, then read it back in later.  For\nexample, to save your configuration:\n.br\n.B \\ \\ \\ # qmgr -c \"print server\" > savedsettings\n.br\nor\n.br\n.B \\ \\ \\ Qmgr: print server > savedsettings\n\nWhen re-creating queue and server configuration, read the commands\nback into qmgr.  
For example:\n.br\n.B \\ \\ \\ qmgr < savedsettings\n\n.SH Operating on Hooks\n.B Creating Hooks\n.br\nTo create a hook:\n.RS 3\n.I Qmgr: create hook <hook name>\n\nFor example:\n.br\n.B Qmgr: create hook my_hook\n.RE\n\n.B Deleting Hooks\n.br\nTo delete a hook:\n.RS 3\n.I Qmgr: delete hook <hook name>\n\nFor example:\n.br\n.B Qmgr: delete hook my_hook\n.RE\n\n.B Setting and Unsetting Hook Attributes\n.br\nTo set a hook attribute:\n.RS 3\n.I Qmgr: set hook <hook name> <attribute> = <value>\n.RE\n\nTo unset a hook attribute: \n.RS 3\n.I Qmgr: unset hook <hook name> <attribute>\n\nExample: Unset hook1's \n.I alarm \nattribute, causing hook1's alarm to revert to its\ndefault value of 30 seconds:\n.br\n.B Qmgr: unset hook hook1 alarm\n.RE\n\n.B Importing Hooks\n.br\nFor importing the contents of a site-defined hook.  Cannot be used with built-in\nhooks.\n\nTo import a hook, you import the contents of a hook script into the\nhook.  You must specify a filename that is locally accessible to qmgr\nand the PBS server.\n\nFormat for importing a site-defined hook:\n.RS 3\n.I import hook <hook name> application/x-python <content encoding> {<input file> | -} \n.RE\nThis uses the contents of \n.I input file \nor stdin (-) as the contents of\nhook \n.I hook name.\n\nThe \n.I input file \nor stdin (-) data must have a format of \n.I content type\nand must be encoded with \n.I content encoding.\n\nThe allowed values for \n.I content encoding \nare \"default\" (7bit) and\n\"base64\".\n\nIf the source of input is stdin (-) and \n.I content encoding \nis \"default\",\nqmgr expects the input data to be terminated by EOF.\n\nIf the source of input is stdin (-) and \n.I content encoding \nis \"base64\",\nqmgr expects input data to be terminated by a blank line.\n\n.I input file \nmust be locally accessible to both qmgr and the requested\nbatch server.\n\nA relative path \n.I input file \nis relative to the directory where qmgr was\nexecuted.\n\nIf a hook already has a content script, 
that is overwritten by this\nimport call.\n\nIf the name in \n.I input file \ncontains spaces as are used in Windows filenames, input file must be quoted.\n\nThere is no restriction on the size of the hook script.\n\n.B Examples of Importing Hooks\n.br\nExample: Given a Python script in ASCII text file \"hello.py\", use its contents\nas the script contents of hook1:\n\n   # cat hello.py\n   import pbs\n\n   pbs.event().job.comment=\"Hello, world\"\n\n.RS 3\n.B # qmgr -c 'import hook hook1 application/x-python default hello.py'\n.RE\n\nExample: Given a base64-encoded file \"hello.py.b64\", qmgr decodes the file's\ncontents, and then makes this the script contents of hook1:\n\n.B \\ \\ \\ # cat hello.py.b64\n.br\n   cHJpbnQgImhlbGxvLCB3b3JsZCIK\n\n.RS 3\n.B # qmgr -c 'import hook hook1 application/x-python base64 hello.py.b64'\n.RE\n\nExample: To create a provisioning hook called Provision_Hook, and import the\nASCII hook script called \"master_provision.py\" located in /root/data/:\n.RS 3\n.B Qmgr: create hook Provision_Hook\n.br\n.B Qmgr: import hook Provision_Hook application/x-python default /root/data/master_provision.py\n.RE\n\n.B Importing and Exporting Hook Configuration Files\n.br\n.B Importing Configuration Files\n.br\nFor importing the contents of a site-defined or built-in hook configuration file.  
To import a\nhook configuration file, you import the contents of a file to a hook.\nYou must specify a filename that is locally accessible to qmgr and the\nPBS server.\n\nFormat for importing a site-defined hook configuration file:\n.RS 3\n.I import hook <hook name> application/x-config <content encoding> {<config file>|-}\n.RE\n\nFormat for importing a built-in hook configuration file:\n.RS 3\n.I import pbshook <hook name> application/x-config <content encoding> {<config file>|-}\n.RE\n\nThis uses the contents of \n.I config file \nor stdin (-) as the contents of the configuration file for hook \n.I hook name.\n\nThe \n.I config file \nor stdin (-) data must have a format of \n.I content type\nand must be encoded with \n.I content encoding.\n\nThe allowed values for \n.I content encoding \nare \"default\" (7bit) and \"base64\".\n\nIf the source of input is stdin (-) and \n.I content encoding \nis \"default\", qmgr expects the input data to be terminated by EOF.\n\nIf the source of input is stdin (-) and \n.I content encoding \nis \"base64\", qmgr expects input data to be terminated by a blank line.\n\n.I config file \nmust be locally accessible to both qmgr and the requested batch server.\n\nA relative path \n.I config file \nis relative to the directory where qmgr was executed.\n\nIf a hook already has a configuration file, that file is overwritten\nby this import call.\n\nIf the name in \n.I config file \ncontains spaces as are used in Windows filenames, config file must be quoted.\n\nThere is no restriction on the size of the hook configuration file.\n\n.B Exporting Configuration Files\n.br\n\nFormat for exporting a site-defined hook configuration file:\n.RS 3\n.I export hook <hook name> application/x-config default {<config file>|-}\n.RE\n\nFormat for exporting a built-in hook configuration file:\n.RS 3\n.I export pbshook <hook name> application/x-config default {<config file>|-}\n.RE\n\n\n.B Hook Configuration File Format\n.br\nPBS supports several file 
formats for configuration files.  The format\nof the file is specified in its suffix.  Formats can be any of the\nfollowing:\n.RS 3\n .ini\n.br\n .json\n.br\n .py (Python)\n.br\n .txt (generic, no special format)\n.br\n .xml\n.RE\n.RS 4\nNo suffix: treat the input file as if it is a .txt file\n.br\nThe dash (-) symbol: configuration file content is taken from STDIN. The content is treated as if it is a .txt file.\n.RE\n\nExample: To import a configuration file in .json format:\n.RS 3\n.I # qmgr -c \"import hook my_hook application/x-config default my_input_file.json\"\n.RE\n\n.B Exporting Hooks\n.br\nFor exporting the contents of a site-defined hook.  Cannot be used with built-in\nhooks.\n\nFormat for exporting a hook: \n.br\n.RS 3\n.I export hook <hook name> <content type> <content encoding> [<output file>]\n.RE\nThis dumps the script contents of hook \n.I hook name \ninto \n.I output file, \nor stdout if \n.I output file \nis not specified.\n\nThe resulting \n.I output file \nor stdout data is of \n.I content type \nand \n.I content encoding.\n\nThe only \n.I content type \ncurrently supported is \"application/x-python\".\n\nThe allowed values for \n.I content encoding \nare \"default\" (7bit) and \"base64\".\n\n.I output file \nmust be a path that can be created by qmgr.\n\nAny relative path \n.I output file \nis relative to the directory where qmgr was executed.\n\nIf \n.I output file \nalready exists it is overwritten. 
If PBS is unable to\noverwrite the file due to ownership or permission problems, an error\nmessage is displayed in stderr.\n\nIf the \n.I output file \nname contains spaces like the ones used in Windows\nfile names, \n.I output file \nmust be enclosed in quotes.\n\n.B Examples of Exporting Hooks\n.br\nExample: Dump hook1's script contents directly into a file \"hello.py\":\n.RS 3\n.B # qmgr -c 'export hook hook1 application/x-python default hello.py'\n.br\n.B # cat hello.py\n.br\nimport pbs\n.br\npbs.event().job.comment=\"Hello, world\"\n.RE\n\nExample: To dump the script contents of a hook named hook1 into a file named hook1.py:\n.RS 3\n.B Qmgr: export hook hook1 application/x-python default hook1.py\n.RE\n\n.B Printing Hook Information\n.br\nTo print out the commands to create and configure all hooks, including\ntheir configuration files:\n.br\n.I \\ \\ \\ print hook\n\nTo print out the commands to create and configure the specified hook,\nincluding its configuration file:\n.br\n.I \\ \\ \\ print hook <hook name>\n\n.B Saving and Re-creating Hook Information\n.br\nYou can save creation and configuration information for all hooks.\nFor example:\n.br\n.B \\ \\ \\ # qmgr -c \"print hook\" > hook.qmgr\n\nYou can re-create all hooks and their configuration files.  For example:\n.br\n.B \\ \\ \\ # qmgr < hook.qmgr\n\n.B Restrictions on Built-in Hooks\n.br\nYou cannot do the following with built-in hooks:\n.RS 3\nImport a built-in hook\n.br\nExport a built-in hook\n.br\nPrint creation commands for a built-in hook\n.br\nCreate a built-in hook\n.br\nDelete a built-in hook\n.br\nSet the type attribute for a built-in hook\n.RE\n\n.SH Printing Usage Information\nYou use the help command or a question mark (\"?\") to invoke the qmgr\nbuilt-in help function.  
You can request usage information for any of\nthe qmgr commands, and for topics including attributes, operators,\nnames, and values.\n\nTo print out usage information for the specified command or topic:\n.br\n.B \\ \\ \\ Qmgr: help [<command or topic>]\n.br\nor\n.br\n.B \\ \\ \\ Qmgr: ? [<command or topic>]\n\nFor example, to print usage information for the set command:\n.RS 3\n.B qmgr\n.br\n.B Qmgr: help set\n.br\nSyntax: set object [name][,name...] attribute[.resource] OP value\n.RE\n\n.SH Standard Input\nWhen you start a qmgr session, the qmgr command reads standard input\nfor directives until it reaches end-of-file, or it reads the exit or quit\ncommand.\n\n.SH Standard Output\nWhen you start a qmgr session and standard output is connected to a\nterminal, qmgr writes a command prompt to standard output.\n\nIf you specify the -e option, qmgr echoes the directives it reads from\nstandard input to standard output.\n\n.SH Standard Error\nIf you do not specify the -z option, the qmgr command writes a\ndiagnostic message to standard error for each error occurrence.\n\n.SH Exit Status\n.IP 0 5\nSuccess\n\n.IP 1 5\nError in parsing\n\n.IP 2 5\nError in execution\n\n.IP 3 5\nError connecting to server\n\n.IP 4 5\nError making object active\n\n.IP 5 5\nMemory allocation error\n\n.SH See Also\npbs_server_attributes.7B, \npbs_job_attributes.7B, pbs_hook_attributes.7B, pbs_node_attributes.7B, \npbs_queue_attributes.7B, pbs_resv_attributes.7B, and pbs_sched_attributes.7B"
  },
  {
    "path": "doc/man8/qrun.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH qrun 8B \"25 January 2021\" Local \"PBS Professional\"\n.SH NAME\n.B qrun \n\\- run a PBS batch job now\n\n.SH SYNOPSIS\n.B qrun \n[-a] [-H <vnode specification>] <job ID> [<job ID> ...]\n.br\n.B qrun\n[-a] [-H - ] <job ID> [<job ID> ...]\n.br\n.B qrun\n--version\n\n.SH DESCRIPTION\nForces a job to run, regardless of scheduling position or resource requirements.\n\nThe \n.B qrun \ncommand can be used on jobs, subjobs, or ranges of subjobs, but\nnot on job arrays.  When it is used on a range of subjobs, the\nnon-running subjobs in that range are run.\n\nWhen preemption is enabled, the scheduler preempts other jobs in order\nto run this job.  Running a job via \n.B qrun \ngives the job higher preemption priority than any of the priorities defined\nin the \n.I preempt_prio \nscheduler parameter.  \n\n.B Required Privilege\n.br\nIn order to execute \n.B qrun, \nyou must have PBS Operator or Manager privilege.\n\n.B Caveats for qrun\n.RS 3\nThe job is run without respect for limits, primetime, or dedicated time.\n\nIf you use a\n.B -H vnode_specification\noption to run a job, but specify insufficient vnodes or resources, the\njob may not run correctly.  
Avoid using this option unless you are\nsure.\n\nIf you don't use the \n.I -H \noption, the job must be in the \n.I Queued\nstate and reside in an execution queue.\n\nIf you do use the \n.I -H \noption, the job must be in the \n.I Queued \nor \n.I Suspended \nstate and reside in an execution queue.\n\nIf you use the \n.I -H\noption, all schedulers are bypassed, and partition boundaries are ignored.\n\nThe \n.B qrun\ncommand cannot be used on a job that is in the process of provisioning.\n\nIf you use \n.B qrun\non a subjob, PBS will try to run the subjob regardless of whether the job \nhas hit the limit specified in \n.I max_run_subjobs.\n.RE\n\n.SH OPTIONS\n.IP \"-a\" 6\nThe \n.B qrun \ncommand exits before the job actually starts execution.\n\n.IP \"(no -H option)\" 6\nThe job is run immediately regardless of scheduling policy as long as \nthe following are true:\n.RS 9\nThe queue in which the job resides is an execution queue.\n\nEither the resources required by the job are available, or preemption\nis enabled and the required resources can be made available by\npreempting jobs that are running.\n.RE\n\n.IP \"(with -H option)\" 6\nDo \n.B NOT\nuse this option unless you know exactly what you are doing.\n\nWith the -H option, all scheduling policies are bypassed and the job\nis run directly.  The job is run immediately on the named or\npreviously assigned vnodes, regardless of current usage on those\nvnodes or which scheduler manages the vnodes, \nwith the exception of vnode state.  The job is not run and\nthe qrun request is rejected if any named vnode is down, \nalready allocated exclusively, or would need to be allocated\nexclusively and another job is already running on the vnode.  
The job\nis run if the vnode is \n.I offline.\n\nThe \n.I -H\noption runs jobs that are queued or suspended.\n\nIf the \n.B qrun -H \ncommand is used on a job that requests an AOE, and that AOE is not instantiated\non those vnodes, the vnodes are provisioned with the AOE.\n\nIf the job requests an AOE, and that AOE is not available on the \nspecified vnodes, the job is held.\n.RS 6\n.IP \"-H <vnode specification without resources>\" 3\nThe \n.I vnode specification without resources\nhas this format:\n.br\n.I \\ \\ \\ (<vchunk>)[+(<vchunk>) ...]\n.br\nwhere \n.I vchunk \nhas the format\n.br\n.I \\ \\ \\ <vnode name>[+<vnode name> ...]\n.br\nExample: -H (VnodeA+VnodeB)+(VnodeC)\n\nPBS applies one requested chunk from the job's selection directive in round-robin\nfashion to each \n.I vchunk \nin the list.  Each \n.I vchunk \nmust be sufficient to run the job's corresponding chunk, otherwise\nthe job may not execute correctly.\n.RE\n\n.RS 6\n.IP \"-H <vnode specification with resources>\" 3\nThe \n.I vnode specification with resources\nhas this format:\n.br\n.I \\ \\ \\ (<vchunk>)[+(<vchunk>) ...]\n.br\nwhere \n.I vchunk \nhas the format\n.IP \"\" 6\n.I <vnode name>:<vnode resources>[+<vnode name>:<vnode resources> ...]\n.LP\n.RS 3\nand where\n.I vnode resources\nhas the format\n.RS 3\n<resource name>=<value>[:<resource name>=<value> ...]\n.RE\n\n.IP \"Example:\" 3\n-H (VnodeA:mem=100kb:ncpus=1)+ (VnodeB:mem=100kb:ncpus=2+ VnodeC:mem=100kb)\n.LP\n\nPBS creates a new selection directive from the \n.I vnode specification with resources, \nusing it instead of the original specification from the user.\nAny single resource specification results in the\njob's original selection directive being ignored.  Each \n.I vchunk \nmust be sufficient to run the job's corresponding chunk, otherwise\nthe job may not execute correctly.\n\nIf the job being run requests\n.I -l place=exclhost,\ntake extra care to satisfy the \n.I exclhost \nrequest.  
Make sure that if any vnodes are from a multi-vnoded host, \nall vnodes from that host are allocated.  Otherwise those vnodes can \nbe allocated to other jobs.\n.RE\n\n.IP \"-H -\" 3\nRuns the job on the set of resources to which it is already assigned.\nYou can run a job on the set of resources already assigned to the job, without having to list the resources, by using the \n.I -\n(dash) argument to the\n.I -H \noption.\n.RE\n\n.IP \"--version\" 6\nThe \n.B qrun\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH OPERANDS\n.IP \"Job ID\" 6\nThe \n.B qrun \ncommand accepts a list of job IDs, of the form\n.I \\ \\ \\ <sequence number>[.<server name>][@<server name>]\n.br\n.I \\ \\ \\ <sequence number>[<index>][.<server name>][@<server name>]\n.IP \" \" 9\n.I <sequence number>[<index start>-<index end>][.<server name>][@<server name>]\n.IP \" \" 6\nNote that some shells require that you enclose a job array identifier in\ndouble quotes.\n\n.IP \"vnode specification\" 6\nThe \n.I vnode specification without resources\nhas this format:\n.IP \"\" 9\n.I (<vchunk>)[+(<vchunk>) ...]\n.br\nwhere \n.I vchunk \nhas the format\n.br\n.I <vnode name>[+<vnode name> ...]\n.br\nExample: -H (VnodeA+VnodeB)+(VnodeC)\n.LP\n.IP \"\" 6\nThe \n.I vnode specification with resources\nhas this format:\n.IP \"\" 9\n.I (<vchunk>)[+(<vchunk>) ...]\n.br\nwhere \n.I vchunk \nhas the format\n.br\n.I <vnode name>:<vnode resources>[+<vnode name>:<vnode resources> ...]\n.br\nand where\n.I vnode resources\nhas the format\n.br\n.I <resource name>=<value>[:<resource name>=<value> ...]\n\nExample: -H (VnodeA:mem=100kb:ncpus=1) + (VnodeB:mem=100kb:ncpus=2 + VnodeC:mem=100kb)\n.IP \"\" 6\nA \n.I vnode name\nis the name of the vnode, not the name of the host.\n\n.SH STANDARD ERROR\nThe\n.B qrun\ncommand writes a diagnostic message to standard error for\neach error occurrence.\n\n.SH EXIT STATUS\n.IP Zero 6\nOn success\n\n.IP \"Greater than zero\" 6\nIf the 
\n.B qrun \ncommand fails to process any operand\n\n.SH SEE ALSO\nqsub(1B), \nqmgr(8B), \npbs_runjob(3B)\n"
  },
  {
    "path": "doc/man8/qstart.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH qstart 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B qstart \n- turn on scheduling or routing for jobs in a PBS queue\n.SH SYNOPSIS\n.B qstart \n<destination> [<destination> ...]\n.br\n.B qstart\n--version\n.SH DESCRIPTION\nIf \n.I destination \nis an execution queue, the \n.B qstart \ncommand allows a PBS scheduler to schedule jobs residing in the specified queue.\nIf \n.I destination\nis a routing queue, the server can begin routing jobs from that queue.  Sets\nthe value of the queue's \n.I started \nattribute to\n.I True.\n\n.B Required Privilege\n.br\nIn order to execute \n.B qstart, \nyou must have PBS Operator or Manager privilege.\n\n.SH OPTIONS\n.IP \"--version\" \nThe \n.B qstart\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n\n.SH  OPERANDS\nThe qstart command accepts one or more space-separated\n.I destination\noperands.  
The operands take one of three forms:\n.IP \"<queue name>\" 3\nStarts scheduling or routing from the specified queue.\n.IP \"@<server name>\" 3\nStarts scheduling or routing from all queues at the specified server.\n.IP \"<queue name>@<server name>\" 3\nStarts scheduling or routing from the specified queue at the specified server.\n.LP\nTo start scheduling at all queues at the default server, use the \n.B qmgr\ncommand:\n.br\n.B \\ \\ \\ Qmgr: set queue @default started=true\n\n.SH STANDARD ERROR\nThe \n.B qstart\ncommand writes a diagnostic message to standard error for\neach error occurrence.\n\n.SH EXIT STATUS\n.IP Zero 3\nUpon successful processing of all the operands presented to the\n.B qstart\ncommand\n.IP \"Greater than zero\" 3\nIf the qstart command fails to process any operand\n\n.SH SEE ALSO\npbs_server(8B), qstop(8B), and qmgr(8B)\n"
  },
  {
    "path": "doc/man8/qstop.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH qstop 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B qstop \n\\- prevent PBS jobs in the specified queue from being scheduled or routed\n.SH SYNOPSIS\n.B qstop \n<destination> [<destination> ...]\n.br\n.B qstop\n--version\n\n.SH DESCRIPTION\n\nIf \n.I destination \nis an execution queue, the \n.B qstop \ncommand stops the\nscheduler from scheduling jobs residing in \n.I destination.  \nIf\n.I destination \nis a routing queue, the server stops routing jobs from\nthat queue.  Sets the value of the queue's \n.I started \nattribute to \n.I False.\n\n.B Required Privilege\n.br\nYou must have PBS Operator or Manager privilege to run this command.\n\n.SH OPTIONS\n.IP \"--version\" 8\nThe \n.B qstop\ncommand returns its PBS version information and exits.\nThis option can only be used alone\n\n.SH  OPERANDS\nThe \n.B qstop \ncommand accepts one or more space-separated\n.I destination\noperands.  
The operands take one of three forms:\n.br\n.I <queue name>\n.RS 3\nStops scheduling or routing from the specified queue.\n.RE\n\n.I @<server name>\n.RS 3\nStops scheduling or routing from all queues at the specified server.\n.RE\n\n.I <queue name>@<server name>\n.RS 3\nStops scheduling or routing from the specified queue at the specified server.\n.RE\n\nTo stop scheduling at all queues at the default server, use the \n.B qmgr\ncommand:\n.RS 3\n.B Qmgr: set queue @default started=false\n.RE\n\n.SH STANDARD ERROR\nThe \n.B qstop\ncommand writes a diagnostic message to standard error for each error occurrence.\n\n.SH EXIT STATUS\n.IP Zero 8\nUpon successful processing of all operands\n\n.IP \"Greater than zero\" 8\nIf the \n.B qstop \ncommand fails to process any operand\n\n.SH SEE ALSO\npbs_server(8B), qstart(8B), and qmgr(8B)\n"
  },
  {
    "path": "doc/man8/qterm.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH qterm 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B qterm \n\\- terminate one or both PBS servers, and optionally terminate scheduler(s) and/or MoMs\n\n.SH SYNOPSIS\n.B qterm \n[-f | -F | -i] [-m] [-s] [-t <type>] [<server>[ <server> ...]]\n.br\n.B qterm\n--version\n\n.SH DESCRIPTION\nThe\n.B qterm \ncommand terminates a PBS batch server.\n\nOnce the server is terminating, no new jobs are accepted by the\nserver, and no jobs are allowed to begin execution.  The impact on\nrunning jobs depends on the way the server is shut down.\n\nThe \n.B qterm \ncommand does not exit until the server has completed its shutdown procedure.\n\nIf the complex is configured for failover, and the primary server is\nshut down, the normal behavior for the secondary server is to become\nactive.  The \n.B qterm \ncommand provides options to manage the behavior of\nthe secondary server; it can be shut down, forced to remain idle, or\nshut down in place of the primary server.\n\n.B Required Privilege\n.br\nIn order to\nrun the \n.B qterm \ncommand, you must have PBS Operator or Manager privilege.\n\n.SH OPTIONS\n.IP \"(no options)\" 10\nThe \n.B qterm \ncommand defaults to \n.B qterm -t quick.\n\n.IP \"-f\" 10\nIf the complex is configured for failover, shuts down both the primary and\nsecondary servers.  
\n.br\nWithout the \n.I -f \noption, \n.B qterm \nshuts down the primary server and makes the secondary server active.  \n.br\nThe \n.I -f\noption cannot be used with the \n.I -i \nor \n.I -F \noptions.\n\n.IP \"-F\" 10\nIf the complex is configured for failover, shuts down only the secondary server,\nleaving the primary server active.\n.br\nThe \n.I -F \noption cannot be used with the \n.I -f \nor \n.I -i \noptions.\n\n.IP \"-i\" 10\nIf the complex is configured for failover, leaves the secondary server\nidle when the primary server is shut down.\n.br\nThe \n.I -i \noption cannot be used with the \n.I -f \nor \n.I -F \noptions.\n\n.IP \"-m\" 10\nShuts down the primary server and all MoMs \n.B (pbs_mom).\nThis option does not cause jobs or subjobs to be killed.\nJobs are left running subject to other options to the \n.B qterm\ncommand.  \n\n.IP \"-s\" 10\nShuts down the primary server and the scheduler\n.B (pbs_sched).\n\n.IP \"-t <type>\" 10\nThe \n.I type \nspecifies how the server is shut down.  The \n.I types\nare the following:\n.RS\n.IP immediate\nShuts down the primary server.  Immediately stops all running jobs.\nAny running jobs that can be checkpointed are checkpointed,\nterminated, and requeued.  Jobs that cannot be checkpointed are\nterminated and requeued if they are rerunnable, otherwise they are\nkilled.\n\nIf any job cannot be terminated, for example the server cannot contact\nthe MoM of a running job, the server continues to execute and the job\nis listed as running.  The server can be terminated by a second \n.B qterm -t immediate \ncommand.\n\nWhile terminating, the server is in the \n.I Terminating \nstate.\n\n.IP delay \nShuts down the primary server.  The server waits to terminate until\nall non-checkpointable, non-rerunnable jobs are finished executing.\nAny running jobs that can be checkpointed are checkpointed,\nterminated, and requeued.  
Jobs that cannot be checkpointed are\nterminated and requeued if they are rerunnable, otherwise they are\nallowed to continue to run.\n\nWhile terminating, the server is in the \n.I Terminating-Delayed \nstate.\n\n.IP quick\nShuts down the primary server.  Running jobs and subjobs are left running.\n\nThis is the default behavior when no options are given to the \n.B qterm\ncommand.\n\nWhile terminating, the server is in the \n.I Terminating \nstate.\n\n.RE\n.LP\n.IP \"--version\" 10\nThe \n.B qterm\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH OPERANDS\nYou optionally specify the list of servers to shut down using \n.I [<server>[ <server> ...]].\n\nIf you do not specify any servers, the \n.B qterm\ncommand shuts down the default server.\n\n.SH STANDARD ERROR\nThe\n.B qterm\ncommand writes a diagnostic message to standard error for\neach error occurrence.\n\n.SH EXIT STATUS\n.IP Zero 8\nUpon successful processing of all operands presented to the \n.B qterm\ncommand.\n.IP \"Greater than zero\" 8\nIf the \n.B qterm \ncommand fails to process any operand\n\n.SH SEE ALSO\npbs_server(8B), pbs_mom(8B), pbs_sched(8B)\n"
  },
  {
    "path": "doc/man8/tracejob.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH tracejob 8B \"6 May 2020\" Local \"PBS Professional\"\n.SH NAME\n.B tracejob \n\\- extract and print log messages for a PBS job\n.SH SYNOPSIS\n.B tracejob \n[-a] [-c <count>] [-f <filter>] [-l] [-m] [-n <days>] \n.RS 9\n[-p <path>] [-s] [-v] [-w <cols>] [-z] <job ID>\n.RE\n.B tracejob\n--version\n.SH DESCRIPTION\nThe\n.B tracejob\ncommand extracts log messages for a given \n.I job ID \nand  prints them in chronological order.\n.LP\nThe \n.B tracejob\ncommand extracts information from the server, scheduler, accounting, and MoM logs.\nServer logs contain information such as when a job was queued or modified.\nScheduler logs contain clues as to why a job is not running.  Accounting\nlogs contain accounting records for when a job was queued, started, ended,\nor deleted.  MoM logs contain information about what happened to a job\nwhile it was running.  \n.LP\nTo get MoM log messages for a job, \n.B tracejob \nmust be run on the machine on which the job ran.  If the job ran on multiple\nhosts, you must run \n.B tracejob\non each of those hosts.\n.LP\nSome log messages appear many times.  
In order to make the output of \n.B tracejob\nmore readable, messages that appear over a certain number of times (see option \n.I -c \nbelow) are restricted to only the most recent message.\n\n.B Using tracejob on Job Arrays\n.br\nIf \n.B tracejob \nis run on a job array, the information returned is about\nthe job array itself, and not its subjobs.  Job arrays do not have associated\nMoM log messages.  If \n.B tracejob \nis run on a subjob, the same types of log \nmessages are available as for a job.  Certain log messages that occur for \na regular job will not occur for a subjob.\n\n.B Required Privilege\n.br \nAll users have access to server, scheduler, and MoM information. Only \nAdministrator or root can access accounting information.\n.LP\n\n.SH Options to tracejob\n.IP \"-a\" 8\nDo not report accounting information.\n.IP \"-c <count>\" 8\nSet excessive message limit to \n.I count.\nIf a message is logged \nat least \n.I count\ntimes, only the most recent message is printed.\nThe default for \n.I count\nis 15.\n\n.IP \"-f <filter>\" 8\nDo not include log events of type \n.I filter.\nThe \n.B -f \noption can be used \nmore than once on the command line.  \nThe following table shows each filter with its hex value and category:\n\n.nf\nFilter      Hex Value  Message Category                   \n---------------------------------------------------\nerror       0001       Internal errors\nsystem      0002       System errors \nadmin       0004       Administrative events\njob         0008       Job-related events\njob_usage   0010       Job accounting info\nsecurity    0020       Security violations\nsched       0040       Scheduler events\ndebug       0080       Common debug messages\ndebug2      0100       Uncommon debug messages\nresv        0200       Reservation debug messages\ndebug3      0400       Less common than debug2\ndebug4      0800       Less common than debug3\n.fi\n.RE\n\n.IP \"-l\" 8\nDo not report scheduler information.            
\n\n.IP \"-m\" 8\nDo not report MoM information.\n\n.IP \"-n <days>\" 8\nReport information from up to \n.I days \ndays in the past.  Default number of days: \n.I 1 \n= today\n\n.IP \"-p <path>\" 8\nUse \n.I path \nas path to PBS_HOME on machine being queried.\n\n.IP \"-s\"   8\nDo not report server information.\n\n.IP \"-w <cols>\" 8\nWidth of current terminal.  If \n.I cols \nis not specified, \n.B tracejob \nqueries OS to get terminal width.  If OS doesn't \nreturn anything, defaults to \n.I 80.\n\n.IP \"-v\" 8\nVerbose.  Report more of \n.B tracejob's \nerrors than default.\n\n.IP \"-z\" 8\nSuppresses printing of duplicate messages.\n\n.RE\n.LP\n.IP \"--version\" 8\nThe \n.B tracejob\ncommand returns its PBS version information and exits.\nThis option can only be used alone.\n\n.SH Operands\nThe tracejob command accepts one \n.I job ID \noperand. \n.br\nFor a job, this has the form: \n.br\n.I <sequence number>[.<server name>][@<server name>]\n.br\nFor a job array, the form is:\n.br\n.I <sequence number>[][.<server name>][@<server name>]\n.br\nFor a subjob, the form is: \n.br\n.I <sequence number>[<index>][.<server name>][@<server name>]\n.br\nNote that some shells require that you enclose a job array identifier in double quotes.\n\n\n.SH EXIT STATUS\n.IP Zero 8\nupon successful processing of all options\n.IP \"Greater than zero\" 8\nIf\n.B tracejob \nis unable to process any options\n\n.SH SEE ALSO\npbs_server(8B), pbs_sched(8B), pbs_mom(8B)\n"
  },
  {
    "path": "doc/man8/win_postinstall.py.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n.TH win_postinstall.py 8B \"20 November 2019\" Local \"PBS Professional\"\n.SH NAME\n.B win_postinstall.py \n\\- For Windows.  
Configures PBS MoM or client\n\n\n.SH SYNOPSIS\n<PBS_EXEC>\\\\etc\\\\python win_postinstall.py \n-u <PBS service account> \n.br\n        -p <PBS service account password> -t <installation type>   \n.br\n        -s <server name> [-c <path to scp command>]\n\n.SH DESCRIPTION\nThe\n.B win_postinstall.py\ncommand configures the PBS MoM and commands.  It performs\npost-installation steps such as validating the PBS service account\nusername and password, installing the Visual C++ redistributable\nbinary, and creating the\n.I pbs.conf\nfile in the PBS destination folder.\n\nFor an \"execution\" type of installation, it creates PBS_HOME, \nand registers and starts the PBS_MOM service.\n\nWhen you use this command during an \"execution\" type installation of\nPBS, the command automatically un-registers any old PBS MoM.\n\nAvailable on Windows only.\n\n.SH Required Privilege\n\nYou must have Administrator privilege to run this command.\n\n.SH Options to win_postinstall.py\n.IP \"-c, --scp-path <path to scp command>\" 8\nSpecifies path to \n.B scp\ncommand.\n\n.IP \"-p, --passwd <PBS service account password>\" 8\nSpecifies password for PBS service account.  \n\n.IP \"-s, --server <server name>\" 8\nSpecifies the hostname on which the PBS server will run; required when\nthe installation type is one of \"execution\" or \"client\".\n\n.IP \"-t, --type <installation type>\" 8\nSpecifies type of installation.  Type can be one of \n\"execution\" or \"client\".\n\n.IP \"-u, --user <PBS service account>\" 8\nSpecifies PBS service account.  When you specify the PBS service\naccount, whether or not you are on a domain machine, include only the\nusername, not the domain.  For example, if the full username on a\ndomain machine is \n.I <domain>\n\\\\\n.I <username>\n, pass only \n.I <username>\nas an argument.\n\n"
  },
  {
    "path": "m4/disable_shell_pipe.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_DISABLE_SHELL_PIPE],\n[\n  AC_MSG_CHECKING([for method to pass job script name])\n  AC_ARG_ENABLE([shell-pipe],\n    AS_HELP_STRING([--disable-shell-pipe],\n      [Pass the job script name via STDIN rather than using a pipe.]\n    )\n  )\n  AS_IF([test \"x$enable_shell_pipe\" != \"xno\"],\n    AC_MSG_RESULT([pipe])\n    AC_DEFINE([SHELL_INVOKE], [1], [Define to 0 for STDIN or 1 for pipe]),\n    AC_MSG_RESULT([stdin])\n    AC_DEFINE([SHELL_INVOKE], [0], [Define to 0 for STDIN or 1 for pipe])\n  )\n])\n"
  },
  {
    "path": "m4/disable_syslog.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_DISABLE_SYSLOG],\n[\n  AC_MSG_CHECKING([whether to disable syslog support])\n  AC_ARG_ENABLE([syslog],\n    AS_HELP_STRING([--disable-syslog],\n      [Do not provide support for logging via syslog.]\n    )\n  )\n  AS_IF([test \"x$enable_syslog\" != \"xno\"],\n    AC_MSG_RESULT([no])\n    AC_DEFINE([SYSLOG], [1], [Define as 0 to disable syslog, 1 to enable]),\n    AC_MSG_RESULT([yes])\n    AC_DEFINE([SYSLOG], [0], [Define as 0 to disable syslog, 1 to enable])\n  )\n])\n"
  },
  {
    "path": "m4/enable_alps.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_ENABLE_ALPS],\n[\n  AC_MSG_CHECKING([whether Cray ALPS support was requested])\n  AC_ARG_ENABLE([alps],\n    AS_HELP_STRING([--enable-alps],\n      [Enable support for Cray ALPS.]\n    )\n  )\n  AS_IF([test \"x$enable_alps\" = \"xyes\"],\n    AC_MSG_RESULT([yes]),\n    AC_MSG_RESULT([no])\n  )\n  AM_CONDITIONAL([ALPS_ENABLED], [test x$enable_alps = xyes])\n])\n"
  },
  {
    "path": "m4/enable_ptl.m4",
    "content": "#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_ENABLE_PTL],\n[\n  # check for PTL enable to generate PTL package which is disabled by default\n  AC_MSG_CHECKING([whether PTL package was requested])\n  AC_ARG_ENABLE([ptl],\n    AS_HELP_STRING([--enable-ptl],\n    [Enable package creation for PTL]))\n  AS_IF([test \"x$enable_ptl\" = \"xyes\"],\n    [AC_MSG_RESULT([yes])],\n    [AC_MSG_RESULT([no])])\n  AM_CONDITIONAL([ENABLEPTL], [test \"x${enable_ptl}\" = \"xyes\"])\n  [ptl_prefix=`dirname ${prefix}`/ptl]\n  AC_SUBST(ptl_prefix)\n])\n"
  },
  {
    "path": "m4/pbs_decl_epoll.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\n\n#\n# Prefix the macro names with PBS_ so they don't conflict with Python definitions\n#\n\nAC_DEFUN([PBS_AC_DECL_EPOLL],\n[\n  AS_CASE([x$target_os],\n    [xlinux*],\n      AC_MSG_CHECKING([for epoll])\n      AC_TRY_RUN(\n[\n#include <sys/epoll.h>\nint main()\n{\n  return ((epoll_create(100) == -1) ? -1 : 0);\n}\n],\n        AC_DEFINE([PBS_HAVE_EPOLL], [], [Defined when epoll is available])\n        AC_MSG_RESULT([yes]),\n        AC_MSG_RESULT([no])\n      ),\n)])\n"
  },
  {
    "path": "m4/pbs_decl_epoll_pwait.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\n\n#\n# Prefix the macro names with PBS_ so they don't conflict with\n# Python definitions\n#\n\nAC_DEFUN([PBS_AC_DECL_EPOLL_PWAIT],\n[\n  AS_CASE([x$target_os],\n    [xlinux*],\n      AC_MSG_CHECKING(whether epoll_pwait system call is supported)\n      AC_TRY_RUN([\n#include <unistd.h>\n#include <poll.h>\n#include <signal.h>\n#include <stdio.h>\n#include <errno.h>\n#include <sys/epoll.h>\nint main()\n{\n  sigset_t allsigs;\n  int n;\n  int maxevents = 1;\n  int timeout = 0;\n  int epollfd;\n  struct   epoll_event  events;\n  sigemptyset(&allsigs);\n  events.events = EPOLLIN;\n  epollfd = epoll_create1(0);\n  if (epollfd == -1) {\n    perror(\"epoll_create1\");\n    return (1);\n  }\n  n = epoll_pwait(epollfd, &events, maxevents, timeout, &allsigs);\n  return (n);\n}],\n        AC_DEFINE([PBS_HAVE_EPOLL_PWAIT], [],\n                  [Defined when epoll_pwait is available])\n        AC_MSG_RESULT([yes]),\n        AC_MSG_RESULT([no])\n      )\n  )\n])\n"
  },
  {
    "path": "m4/pbs_decl_h_errno.m4",
    "content": "\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n# Test to see whether h_errno is visible when netdb.h is included.\n# At least under HP-UX 10.x this is not the case unless\n# XOPEN_SOURCE_EXTENDED is declared but then other nasty stuff happens.\n# The appropriate thing to do is to call this macro and then\n# if it is not available do a \"extern int h_errno;\" in the code.\n\nAC_DEFUN([PBS_AC_DECL_H_ERRNO],\n  [AC_CACHE_CHECK([for h_errno declaration in netdb.h],\n    ac_cv_decl_h_errno,\n    [AC_TRY_COMPILE([#include <sys/types.h>\n#ifdef HAVE_UNISTD_H\n#include <unistd.h>\n#endif\n#include <netdb.h>\n],\n      [int _ZzQ = (int)(h_errno + 1);],\n      [ac_cv_decl_h_errno=yes],\n      [ac_cv_decl_h_errno=no])\n    ])\n  AS_IF([test x$ac_cv_decl_h_errno = xyes],\n    AC_DEFINE(H_ERRNO_DECLARED, [], [Defined when h_errno is declared in netdb.h]))\n])\n"
  },
  {
    "path": "m4/pbs_decl_ppoll.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\n\n#\n# Prefix the macro names with PBS_ so they don't conflict with Python definitions\n#\nAC_DEFUN([PBS_AC_DECL_PPOLL],\n[\n  AS_CASE([x$target_os],\n    [xlinux*],\n      AC_MSG_CHECKING([whether ppoll API is supported])\n      AC_TRY_RUN([\n#include <unistd.h>\n#include <poll.h>\n#include <signal.h>\nint main()\n{\n  sigset_t allsigs;\n  int n;\n  int fd[2];\n  struct timespec timeoutspec;\n  struct pollfd pollfds[1];\n  timeoutspec.tv_nsec = 1000;\n  timeoutspec.tv_sec = 0;\n  pipe(fd);\n  pollfds[0].fd = fd[0];\n  pollfds[0].events = POLLIN;\n  sigemptyset(&allsigs);\n  n = ppoll(pollfds, 1, &timeoutspec, &allsigs);\n  return (n);\n}],\n        AC_DEFINE([PBS_HAVE_PPOLL], [], [Defined when ppoll is available])\n        AC_MSG_RESULT([yes]),\n        AC_MSG_RESULT([no])\n      )\n  )\n])\n"
  },
  {
    "path": "m4/pbs_decl_socklen_t.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_DECL_SOCKLEN_T],\n  [AC_CACHE_CHECK([for socklen_t],\n    pbs_ac_cv_decl_socklen_t,\n    [AC_TRY_COMPILE([\n#include <sys/types.h>\n#include <unistd.h>\n#include <sys/socket.h>\n#include <netdb.h>\n], [\n  socklen_t       len = 0;\n  len++;\n],\n    pbs_ac_cv_decl_socklen_t=yes,\n    pbs_ac_cv_decl_socklen_t=no)])\n  AS_IF([test x$pbs_ac_cv_decl_socklen_t = xno],\n    AC_DEFINE([pbs_socklen_t], [int], [socklen_t was not defined]),\n    AC_DEFINE([pbs_socklen_t], [socklen_t], [socklen_t was defined]))\n])\n"
  },
  {
    "path": "m4/pbs_patch_libtool.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_PATCH_LIBTOOL], [\n\tAC_CONFIG_COMMANDS([patch-libtool], [\n\t\tAS_IF([! grep '[[-]]fsanitize=\\*' libtool >/dev/null 2>&1], [\n\t\t\tAC_MSG_NOTICE([patching libtool to support -fsanitize])\n\t\t\tAS_IF([! 
grep '[[-]]pg[[|)]]' libtool >/dev/null 2>&1], [\n\t\t\t\tgrep -A 30 'Flags to be passed through unchanged' libtool \\\n\t\t\t\t\t>libtool.patched.err\n\t\t\t\tAC_MSG_ERROR([libtool does not pass through -pg])\n\t\t\t])\n\t\t\t$SED 's/\\(-pg\\)\\([[|)]]\\)/\\1|-fsanitize=\\*\\2/' \\\n\t\t\t\tlibtool >libtool.patched 2>libtool.patched.err\n\t\t\tAS_IF([! grep '[[-]]fsanitize=\\*' libtool.patched \\\n\t\t\t\t\t>/dev/null 2>&1 ], [\n\t\t\t\tAC_MSG_ERROR([Failed to patch libtool])\n\t\t\t], [])\n\t\t\tmv -f libtool.patched libtool\n\t\t\trm -f libtool.patched.err\n\t\t])\n\t])\n])\n"
  },
  {
    "path": "m4/pbs_systemd_unitdir.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_SYSTEMD_UNITDIR],\n[\n  AC_MSG_CHECKING([system/machine type for systemd unit dir])\n  systemd_dir=\"/usr/lib/systemd/system\"\n  AS_IF([test -r \"/etc/os-release\"],\n    [system_type=$( cat /etc/os-release | awk -F'=' '/^ID=/' | cut -d \"=\" -f 2 )\n      AS_IF([test \"x$system_type\" = \"xubuntu\" -o \"x$system_type\" = \"xdebian\"],\n      [systemd_dir=\"/lib/systemd/system\"])\n    ]\n  )\n  _unitdir=$systemd_dir\n  AC_MSG_RESULT([$_unitdir])\n  AC_SUBST([_unitdir])\n])\n"
  },
  {
    "path": "m4/pbs_version.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_PBS_VERSION],\n[\n  AC_MSG_CHECKING([for PBS version])\n  AC_ARG_VAR([PBS_VERSION], [Specifies the PBS version number.])\n  AS_IF([test \"x$PBS_VERSION\" = \"x\"],\n    [PBS_VERSION=$PACKAGE_VERSION]\n  )\n  AC_MSG_RESULT([$PBS_VERSION])\n  AC_SUBST([PBS_VERSION])\n  VERSION=$PBS_VERSION\n  AC_SUBST([VERSION])\n  PACKAGE_VERSION=$PBS_VERSION\n  AC_SUBST([PACKAGE_VERSION])\n])\n"
  },
  {
    "path": "m4/security_check.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_SECURITY],\n[\n  AC_MSG_CHECKING([whether to disable security check])\n  AC_ARG_ENABLE([security],\n    AS_HELP_STRING([--disable/--enable-security],\n      [whether to perform security checks, enabled by default]\n    )\n  )\n  AS_IF([test \"x$enable_security\" != \"xno\"],\n    AC_MSG_RESULT([no])\n    AS_ECHO(\"Security checks will be performed\"),\n    AC_MSG_RESULT([yes])\n    AC_DEFINE([NO_SECURITY_CHECK], [], [Define to disable security])\n  )\n])\n"
  },
  {
    "path": "m4/with_cjson.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_WITH_CJSON],\n[\n  AC_ARG_WITH([cjson],\n    AS_HELP_STRING([--with-cjson=DIR],\n      [Specify the directory where cJSON is installed.]\n    )\n  )\n  [cjson_dir=\"$with_cjson\"]\n  AC_MSG_CHECKING([for cJSON])\n  AS_IF(\n    [test \"$cjson_dir\" = \"\"],\n    AC_CHECK_HEADER([cjson/cJSON.h], [], AC_MSG_ERROR([cJSON headers not found.])),\n    [test -r \"$cjson_dir/include/cjson/cJSON.h\"],\n    [cjson_inc=\"-I$cjson_dir/include\"],\n    AC_MSG_ERROR([cJSON headers not found.])\n  )\n  AS_IF(\n    [test \"$cjson_dir\" = \"\"],\n    # Using system installed cjson\n    AC_CHECK_LIB([cjson], [cJSON_Parse],\n      [cjson_lib=\"-lcjson\"],\n      AC_MSG_ERROR([cJSON shared object library not found.])),\n    # Using developer installed cJSON\n    [test -r \"${cjson_dir}/lib64/libcjson.so\"],\n    [cjson_lib=\"-L${cjson_dir}/lib64 -lcjson\"],\n    [test -r \"${cjson_dir}/lib/libcjson.so\"],\n    [cjson_lib=\"-L${cjson_dir}/lib -lcjson\"],\n    AC_MSG_ERROR([cJSON library not found.])\n  )\n  AC_MSG_RESULT([$cjson_dir])\n  AC_SUBST(cjson_inc)\n  AC_SUBST(cjson_lib)\n  AC_DEFINE([CJSON], [], [Defined when cjson is available])\n])\n"
  },
  {
    "path": "m4/with_core_limit.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_WITH_CORE_LIMIT],\n[\n  AC_MSG_CHECKING([for daemon coredump limit])\n  AC_ARG_WITH([core-limit],\n    AS_HELP_STRING([--with-core-limit=VALUE],\n      [Specify the daemon coredump limit.]\n    )\n  )\n  AS_IF([test x$with_core_limit != x],\n    [PBS_CORE_LIMIT=\"$with_core_limit\"],\n    [PBS_CORE_LIMIT=\"unlimited\"]\n  )\n  AC_MSG_RESULT([$PBS_CORE_LIMIT])\n  AC_SUBST(PBS_CORE_LIMIT)\n])\n"
  },
  {
    "path": "m4/with_database_dir.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_WITH_DATABASE_DIR],\n[\n  AC_MSG_CHECKING([for PBS database directory])\n  AC_ARG_WITH([database-dir],\n    AS_HELP_STRING([--with-database-dir=DIR],\n      [Specify the directory where the PBS database is installed.]\n    )\n  )\n  [database_dir=\"$with_database_dir\"]\n  AS_IF(\n    [test \"$database_dir\" = \"\"],\n    AC_CHECK_HEADER([libpq-fe.h], [], [database_dir=\"/usr\"])\n  )\n  AS_IF(\n    [test \"$database_dir\" != \"\"],\n    AS_IF(\n      [test -r \"$database_dir/include/libpq-fe.h\"],\n      [database_inc=\"-I$database_dir/include\"],\n      [test -r \"$database_dir/include/pgsql/libpq-fe.h\"],\n      [database_inc=\"-I$database_dir/include/pgsql\"],\n      [test -r \"$database_dir/include/postgresql/libpq-fe.h\"],\n      [database_inc=\"-I$database_dir/include/postgresql\"],\n      AC_MSG_ERROR([Database headers not found.])\n    )\n  )\n  AS_IF(\n    # Using system installed PostgreSQL\n    [test \"$with_database_dir\" = \"\"],\n    AC_CHECK_LIB([pq], [PQconnectdb],\n      [database_lib=\"-lpq\"],\n      AC_MSG_ERROR([PBS database shared object library not found.])),\n    # Using developer installed PostgreSQL\n    [test -r \"$database_dir/lib64/libpq.a\"],\n    [database_lib=\"$database_dir/lib64/libpq.a\"],\n    [test -r \"$database_dir/lib/libpq.a\"],\n    [database_lib=\"$database_dir/lib/libpq.a\"],\n    
AC_MSG_ERROR([PBS database library not found.])\n  )\n  AC_MSG_RESULT([$database_dir])\n  AC_SUBST([database_dir])\n  AC_SUBST([database_inc])\n  AC_SUBST([database_lib])\n  AC_DEFINE([DATABASE], [], [Defined when PBS database is available])\n])\n"
  },
  {
    "path": "m4/with_database_port.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_WITH_DATABASE_PORT],\n[\n  AC_MSG_CHECKING([for PBS database port])\n  AC_ARG_WITH([database-port],\n    AS_HELP_STRING([--with-database-port=PORT],\n      [Specify the port number for the PBS database.]\n    )\n  )\n  AS_IF([test \"x$with_database_port\" != \"x\"],\n    database_port=[$with_database_port],\n    database_port=[15007]\n  )\n  AC_MSG_RESULT([$database_port])\n  AC_SUBST([database_port])\n  AC_DEFINE_UNQUOTED([PBS_DATA_SERVICE_PORT], [$database_port], [Port number for the PBS database])\n])\n"
  },
  {
    "path": "m4/with_database_user.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_WITH_DATABASE_USER],\n[\n  AC_MSG_CHECKING([for PBS database user])\n  AC_ARG_WITH([database-user],\n    AS_HELP_STRING([--with-database-user=USER],\n      [Specify the user account that owns the PBS database.]\n    )\n  )\n  AS_IF([test \"x$with_database_user\" != \"x\"],\n    database_user=[$with_database_user],\n    database_user=[postgres]\n  )\n  AC_MSG_RESULT([$database_user])\n  AC_SUBST([database_user])\n  AC_DEFINE_UNQUOTED([PBS_DATA_SERVICE_USER], [\"$database_user\"], [User that owns the PBS database])\n])\n"
  },
  {
    "path": "m4/with_editline.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_WITH_EDITLINE],\n[\n  AC_ARG_WITH([editline],\n    AS_HELP_STRING([--with-editline=DIR],\n      [Specify the directory where editline is installed.]\n    )\n  )\n  [editline_dir=\"$with_editline\"]\n  AC_MSG_CHECKING([for editline])\n  AS_IF(\n    [test \"$editline_dir\" = \"\"],\n    AC_CHECK_HEADER([histedit.h], [], AC_MSG_ERROR([editline headers not found.])),\n    [test -r \"$editline_dir/include/histedit.h\"],\n    [editline_inc=\"-I$editline_dir/include\"],\n    AC_MSG_ERROR([editline headers not found.])\n  )\n  AS_IF(\n    # Using system installed editline\n    [test \"$editline_dir\" = \"\"],\n    AC_CHECK_LIB([edit], [el_init],\n      [editline_lib=\"-ledit\"],\n      AC_MSG_ERROR([editline shared object library not found.])),\n    # Using developer installed editline\n    [test -r \"${editline_dir}/lib64/libedit.a\"],\n    [editline_lib=\"${editline_dir}/lib64/libedit.a\"],\n    [test -r \"${editline_dir}/lib/libedit.a\"],\n    [editline_lib=\"${editline_dir}/lib/libedit.a\"],\n    AC_MSG_ERROR([editline library not found.])\n  )\n  AC_MSG_RESULT([$editline_dir])\n  AC_CHECK_LIB([ncurses], [tgetent],\n    [curses_lib=\"-lncurses\"],\n    AC_CHECK_LIB([curses], [tgetent],\n      [curses_lib=\"-lcurses\"],\n      AC_MSG_ERROR([curses library not found.])))\n  [editline_lib=\"$editline_lib $curses_lib\"]\n  
AC_SUBST(editline_inc)\n  AC_SUBST(editline_lib)\n  AC_DEFINE([QMGR_HAVE_HIST], [], [Defined when editline is available])\n])\n"
  },
  {
    "path": "m4/with_expat.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_WITH_EXPAT],\n[\n  AC_ARG_WITH([expat],\n    AS_HELP_STRING([--with-expat=DIR],\n      [Specify the directory where expat is installed.]\n    )\n  )\n  [expat_dir=\"$with_expat\"]\n  AC_MSG_CHECKING([for expat])\n  AS_IF(\n    [test \"$expat_dir\" = \"\"],\n    AC_CHECK_HEADER([expat.h], [], AC_MSG_ERROR([expat headers not found.])),\n    [test -r \"$expat_dir/include/expat.h\"],\n    [expat_inc=\"-I$expat_dir/include\"],\n    AC_MSG_ERROR([expat headers not found.])\n  )\n  AS_IF(\n    [test \"$expat_dir\" = \"\"],\n    # Using system installed expat\n    AC_CHECK_LIB([expat], [XML_Parse],\n      [expat_lib=\"-lexpat\"],\n      AC_MSG_ERROR([expat shared object library not found.])),\n    # Using developer installed expat\n    [test -r \"${expat_dir}/lib64/libexpat.a\"],\n    [expat_lib=\"${expat_dir}/lib64/libexpat.a\"],\n    [test -r \"${expat_dir}/lib/libexpat.a\"],\n    [expat_lib=\"${expat_dir}/lib/libexpat.a\"],\n    AC_MSG_ERROR([expat library not found.])\n  )\n  AC_MSG_RESULT([$expat_dir])\n  AC_SUBST(expat_inc)\n  AC_SUBST(expat_lib)\n  AC_DEFINE([EXPAT], [], [Defined when expat is available])\n])\n"
  },
  {
    "path": "m4/with_hwloc.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_WITH_HWLOC],\n[\n  AC_ARG_WITH([hwloc],\n    AS_HELP_STRING([--with-hwloc=DIR],\n      [Specify the directory where hwloc is installed.]\n    )\n  )\n  hwloc_dir=[\"$with_hwloc\"]\n  AC_MSG_CHECKING([for hwloc])\n  [hwloc_flags=\"\"]\n  [hwloc_inc=\"\"]\n  [hwloc_lib=\"\"]\n  AS_IF(\n    [test \"$hwloc_dir\" = \"\"],\n    AC_CHECK_HEADER([hwloc.h], [], AC_MSG_ERROR([hwloc headers not found.])),\n    [test -r \"$hwloc_dir/include/hwloc.h\"],\n    [hwloc_inc=\"-I$hwloc_dir/include\"],\n    AC_MSG_ERROR([hwloc headers not found.])\n  )\n  AS_IF(\n    # Using system installed hwloc\n    [test \"$hwloc_dir\" = \"\"],\n    AC_CHECK_LIB([hwloc], [hwloc_topology_init],\n      [hwloc_lib=\"-lhwloc\"],\n      AC_MSG_ERROR([hwloc shared object library not found.])\n    ),\n    # Using developer installed hwloc\n    [test -r \"${hwloc_dir}/lib64/libhwloc_embedded.a\"],\n    [hwloc_lib=\"${hwloc_dir}/lib64/libhwloc_embedded.a\"],\n    [test -r \"${hwloc_dir}/lib/libhwloc_embedded.a\"],\n    [hwloc_lib=\"${hwloc_dir}/lib/libhwloc_embedded.a\"],\n    AC_MSG_ERROR([hwloc library not found.])\n  )\n  AC_MSG_RESULT([$hwloc_dir])\n  AS_CASE([x$target_os],\n    [xlinux*],\n      AC_CHECK_LIB([numa], [mbind], [hwloc_lib=\"$hwloc_lib -lnuma\"])\n      AC_CHECK_LIB([udev], [udev_new], [hwloc_lib=\"$hwloc_lib -ludev\"])\n      AC_CHECK_LIB([pciaccess], 
[pci_system_init], [hwloc_lib=\"$hwloc_lib -lpciaccess\"])\n  )\n  AC_SUBST(hwloc_flags)\n  AC_SUBST(hwloc_inc)\n  AC_SUBST(hwloc_lib)\n  AC_DEFINE([HWLOC], [], [Defined when hwloc is available])\n])\n"
  },
  {
    "path": "m4/with_krbauth.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([_KRB5_CONFIG_PATH],\n[\n  AC_ARG_VAR([PATH_KRB5_CONFIG], [Path to krb5-config.])\n  AC_PATH_PROG([PATH_KRB5_CONFIG], [krb5-config], [], [${PATH}:/usr/kerberos/bin])\n  AS_IF([test -x \"$PATH_KRB5_CONFIG\"],\n    [\n    AC_MSG_NOTICE([krb5-config found])\n    ],\n    [\n    AC_MSG_ERROR([krb5-config not found at provided/default path])\n    ])\n])\n\nAC_DEFUN([_KRB5_CONFIG_LIBS],\n  [AC_REQUIRE([_KRB5_CONFIG_PATH])\n  $3[]_LIBS=`\"$1\" --libs $2 2>/dev/null | grep -vi unknown 2>/dev/null`\n])\n\nAC_DEFUN([_KRB5_CONFIG_CFLAGS],\n  [AC_REQUIRE([_KRB5_CONFIG_PATH])\n  $3[]_CFLAGS=`\"$1\" --cflags $2 2>/dev/null`\n])\n\nAC_DEFUN([_KRB5_CHECK_HEIMDAL],\n  [AC_REQUIRE([_KRB5_CONFIG_PATH])\n  _KRB5_HEIMDAL=`\"$1\" --vendor 2>/dev/null | grep Heimdal 2>/dev/null`\n])\n\nAC_DEFUN([KRB5_CONFIG],\n[\n  AC_REQUIRE([_KRB5_CONFIG_PATH])\n\n  PKG_CHECK_MODULES(com_err, [com_err])\n\n  _KRB5_CONFIG_CFLAGS([$PATH_KRB5_CONFIG],[],[_KRB5])\n  _KRB5_CONFIG_LIBS([$PATH_KRB5_CONFIG],[krb5],[_KRB5_KRB5])\n  _KRB5_CONFIG_LIBS([$PATH_KRB5_CONFIG],[gssapi],[_KRB5_GSSAPI])\n  _KRB5_CONFIG_LIBS([$PATH_KRB5_CONFIG],[kafs],[_KRB5_KAFS])\n\n  # we don't want to add -lkafs into the general LIBS\n  ac_save_libs=${LIBS}\n\n  AS_IF([test \"x$_KRB5_KAFS_LIBS\" != \"x\"],\n  [ac_save_ldflags=${LDFLAGS}\n  LDFLAGS=\"${_KRB5_KAFS_LIBS} ${LDFLAGS}\"\n  
AC_CHECK_LIB([kafs],[k_hasafs],[],[_KRB5_KAFS_LIBS=\"\"])\n  LDFLAGS=\"$ac_save_ldflags\"],\n  [])\n\n  AS_IF([test \"x$_KRB5_KAFS_LIBS\" = \"x\"],\n    [\n    AC_CHECK_LIB([kafs],[k_hasafs],\n      [_KRB5_KAFS_LIBS=\"-lkafs\"],\n      AC_CHECK_LIB([kopenafs],[k_hasafs],[_KRB5_KAFS_LIBS=\"-lkopenafs\"],\n        AC_MSG_WARN([k(open)afs library not found - afs will be ignored])))\n    ],[])\n\n  LIBS=\"$ac_save_libs\"\n\n  _KRB5_CHECK_HEIMDAL([$PATH_KRB5_CONFIG])\n  AS_IF([test \"x$_KRB5_HEIMDAL\" != \"x\"],\n    [AC_MSG_NOTICE([Kerberos vendor is Heimdal])\n    AC_DEFINE_UNQUOTED([KRB5_HEIMDAL],[],[Kerberos is Heimdal])\n    ],[])\n\n  AC_SUBST([KRB5_CFLAGS],[\"$_KRB5_CFLAGS $com_err_CFLAGS\"])\n  _KRB5_LIBS=\"$_KRB5_KRB5_LIBS $_KRB5_GSSAPI_LIBS $_KRB5_KAFS_LIBS $com_err_LIBS\"\n  AC_SUBST([KRB5_LIBS],[$_KRB5_LIBS])\n])\n\nAC_DEFUN([PBS_AC_WITH_KRBAUTH],\n[\n  AC_MSG_CHECKING([for kerberos support])\n  AC_ARG_WITH([krbauth],\n    [AS_HELP_STRING([--with-krbauth],\n       [enable kerberos authentication, krb5-config required for setup])],\n    [],[with_krbauth=no])\n  AM_CONDITIONAL([KRB5_ENABLED], [test \"x$with_krbauth\" != xno])\n  AS_IF([test \"x$with_krbauth\" != xno],\n    [\n    AC_MSG_RESULT([requested])\n    _KRB5_CONFIG_PATH\n    KRB5_CONFIG\n    AC_DEFINE_UNQUOTED([PBS_SECURITY],[KRB5],[Enable krb5/gssapi security.])\n    ],\n    [\n    AC_MSG_RESULT([disabled])\n    ])\n])\n"
  },
  {
    "path": "m4/with_libical.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_WITH_LIBICAL],\n[\n  AC_ARG_WITH([libical],\n    AS_HELP_STRING([--with-libical=DIR],\n      [Specify the directory where the ical library is installed.]\n    )\n  )\n  [libical_dir=\"$with_libical\"]\n  AC_MSG_CHECKING([for libical])\n  AS_IF(\n    [test \"$libical_dir\" = \"\"],\n    AC_CHECK_HEADER([libical/ical.h], [], AC_MSG_ERROR([libical headers not found.])),\n    [test -r \"$libical_dir/include/libical/ical.h\"],\n    [libical_include=\"$libical_dir/include\"],\n    AC_MSG_ERROR([libical headers not found.])\n  )\n  AS_IF(\n    [test \"$libical_include\" = \"\"],\n    [AC_PREPROC_IFELSE(\n      [AC_LANG_SOURCE([[#include <libical/ical.h>\n        ICAL_VERSION]])],\n      [libical_version=`tail -n1 conftest.i | $SED -n 's/\"\\([[0-9]]*\\)..*/\\1/p'`]\n    )],\n    [libical_version=`$SED -n 's/^#define ICAL_VERSION \"\\([[0-9]]*\\)..*/\\1/p' \"$libical_include/libical/ical.h\"`]\n  )\n  AS_IF(\n    [test \"x$libical_version\" = \"x\"],\n    AC_MSG_ERROR([Could not determine libical version.]),\n    [test \"$libical_version\" -gt 1],\n    AC_DEFINE([LIBICAL_API2], [], [Defined when libical version >= 2])\n  )\n  AS_IF([test \"$libical_dir\" = \"\"],\n    dnl Using system installed libical\n    libical_inc=\"\"\n    AC_CHECK_LIB([ical], [icalrecurrencetype_from_string],\n      [libical_lib=\"-lical\"],\n      AC_MSG_ERROR([libical 
shared object library not found.])),\n    dnl Using developer installed libical\n    libical_inc=\"-I$libical_include\"\n    AS_IF([test -r \"${libical_dir}/lib64/libical.a\"],\n      [libical_lib=\"${libical_dir}/lib64/libical.a\"],\n      AS_IF([test -r \"${libical_dir}/lib/libical.a\"],\n        [libical_lib=\"${libical_dir}/lib/libical.a\"],\n        AC_MSG_ERROR([ical library not found.])\n      )\n    )\n  )\n  AC_MSG_RESULT([$libical_dir])\n  AC_SUBST(libical_inc)\n  AC_SUBST(libical_lib)\n  AC_DEFINE([LIBICAL], [], [Defined when libical is available])\n])\n"
  },
  {
    "path": "m4/with_libz.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_WITH_LIBZ],\n[\n  AC_ARG_WITH([libz],\n    AS_HELP_STRING([--with-libz=DIR],\n      [Specify the directory where libz is installed.]\n    )\n  )\n  [libz_dir=\"$with_libz\"]\n  AC_MSG_CHECKING([for libz])\n  AS_IF(\n    [test \"$libz_dir\" = \"\"],\n    AC_CHECK_HEADER([zlib.h], [], AC_MSG_ERROR([libz headers not found.])),\n    [test -r \"$libz_dir/include/zlib.h\"],\n    [libz_inc=\"-I$libz_dir/include\"],\n    AC_MSG_ERROR([libz headers not found.])\n  )\n  AS_IF(\n    # Using system installed libz\n    [test \"$libz_dir\" = \"\"],\n    AC_CHECK_LIB([z], [deflateInit_],\n      [libz_lib=\"-lz\"],\n      AC_MSG_ERROR([libz shared object library not found.])),\n    # Using developer installed libz\n    [test -r \"${libz_dir}/lib64/libz.a\"],\n    [libz_lib=\"${libz_dir}/lib64/libz.a\"],\n    [test -r \"${libz_dir}/lib/libz.a\"],\n    [libz_lib=\"${libz_dir}/lib/libz.a\"],\n    AC_MSG_ERROR([libz not found.])\n  )\n  AC_MSG_RESULT([$libz_dir])\n  AC_SUBST(libz_inc)\n  AC_SUBST(libz_lib)\n  AC_DEFINE([PBS_COMPRESSION_ENABLED], [], [Defined when libz is available])\n])\n"
  },
  {
    "path": "m4/with_min_stack_limit.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_WITH_MIN_STACK_LIMIT],\n[\n  AC_MSG_CHECKING([for min stack limit])\n  AC_ARG_WITH([min-stack-limit],\n    AS_HELP_STRING([--with-min-stack-limit=LIMIT],\n      [Specify the minimum stack limit.]\n    )\n  )\n  AS_IF([test \"x$with_min_stack_limit\" != \"x\"],\n    limit_value=[$with_min_stack_limit],\n    limit_value=[0x1000000]\n  )\n  AC_MSG_RESULT([$limit_value])\n  AC_SUBST([limit_value])\n  AC_DEFINE_UNQUOTED([MIN_STACK_LIMIT], [$limit_value], [Minimum stack limit])\n])\n"
  },
  {
    "path": "m4/with_pbs_conf_file.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_WITH_PBS_CONF_FILE],\n[\n  AC_MSG_CHECKING([for PBS configuration file location])\n  AC_ARG_WITH([pbs-conf-file],\n    AS_HELP_STRING([--with-pbs-conf-file=FILE],\n      [Location of the PBS configuration file.]\n    )\n  )\n  AS_IF([test \"x$with_pbs_conf_file\" != \"x\"],\n    pbs_conf_file=[$with_pbs_conf_file],\n    pbs_conf_file=[/etc/pbs.conf]\n  )\n  AC_MSG_RESULT([$pbs_conf_file])\n  AC_SUBST([PBS_CONF_FILE], [\"$pbs_conf_file\"])\n  AC_DEFINE_UNQUOTED([PBS_CONF_FILE], [\"$pbs_conf_file\"], [Location of the PBS configuration file])\n])\n"
  },
  {
    "path": "m4/with_pmix.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_WITH_PMIX],\n[\n  AC_ARG_WITH([pmix],\n    AS_HELP_STRING([--with-pmix=DIR],\n      [Specify the directory where the pmix library is installed.]\n    )\n  )\n  AC_MSG_CHECKING([for PMIx])\n  AS_IF([test \"x$with_pmix\" = \"xno\" -o \"x$with_pmix\" = \"x\"],\n    AC_MSG_RESULT([no]),\n    AS_IF([test \"x$with_pmix\" = \"xyes\"],\n      pmix_dir=[\"/usr\"],\n      pmix_dir=[\"$with_pmix\"]\n    )\n    AS_IF([test -r \"$pmix_dir/include/pmix_version.h\"],\n      [pmix_version_h=\"$pmix_dir/include/pmix_version.h\"],\n      AC_MSG_ERROR([PMIx headers not found.])\n    )\n    AC_MSG_RESULT([$pmix_dir])\n    AC_MSG_CHECKING([PMIx version])\n    pmix_version_major=`${SED} -n 's/^#define PMIX_VERSION_MAJOR \\([[0-9]]*\\)L/\\1/p' \"$pmix_version_h\"`\n    AS_IF([test \"x$pmix_version_major\" = \"x\"],\n      AC_MSG_ERROR([Could not determine PMIx major version.])\n    )\n    pmix_version_minor=`${SED} -n 's/^#define PMIX_VERSION_MINOR \\([[0-9]]*\\)L/\\1/p' \"$pmix_version_h\"`\n    AS_IF([test \"x$pmix_version_minor\" = \"x\"],\n      AC_MSG_ERROR([Could not determine PMIx minor version.])\n    )\n    pmix_version_release=`${SED} -n 's/^#define PMIX_VERSION_RELEASE \\([[0-9]]*\\)L/\\1/p' \"$pmix_version_h\"`\n    AS_IF([test \"x$pmix_version_release\" = \"x\"],\n      AC_MSG_ERROR([Could not determine PMIx release version.])\n    )\n    
pmix_version=\"$pmix_version_major.$pmix_version_minor.$pmix_version_release\"\n    AC_MSG_RESULT([$pmix_version])\n    AS_IF([test \"$pmix_dir\" = \"/usr\"],\n      [pmix_lib=\"-lpmix\"; pmix_inc=\"\"],\n      AS_IF([test -r \"$pmix_dir/lib/libpmix.so\"],\n        [pmix_lib=\"-L$pmix_dir/lib -lpmix\"],\n        AS_IF([test -r \"$pmix_dir/lib64/libpmix.so\"],\n          [pmix_lib=\"-L$pmix_dir/lib64 -lpmix\"],\n          AC_MSG_ERROR([PMIx library not found.])\n        )\n      )\n      pmix_inc=\"-I$pmix_dir/include\"\n    )\n    AC_SUBST(pmix_inc)\n    AC_SUBST(pmix_lib)\n    AC_DEFINE([PMIX], [], [Defined when PMIx is available])\n  )\n])\n"
  },
  {
    "path": "m4/with_python.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_WITH_PYTHON],\n[\n  AC_ARG_WITH([python],\n    AS_HELP_STRING([--with-python=DIR],\n      [Specify the directory where Python is installed.]\n    )\n  )\n  AS_IF([test \"x$with_python\" != \"x\"],\n    [PYTHON=\"$with_python/bin/python3\"]\n  )\n  AM_PATH_PYTHON([3.6])\n  [PYTHON_CONFIG=\"$PYTHON-config\"]\n  [python_major_version=`echo $PYTHON_VERSION | sed -e 's/\\..*$//'`]\n  [python_minor_version=`echo $PYTHON_VERSION | sed -e 's/^[^.]*\\.//'`]\n  AS_IF([test $python_major_version -eq 3],\n  [\n    python_config_embed=\"\"\n    AS_IF([test $python_minor_version -ge 8], [python_config_embed=\"--embed\"])\n    [PYTHON_INCLUDES=`$PYTHON_CONFIG --includes ${python_config_embed}`]\n    AC_SUBST(PYTHON_INCLUDES)\n    [PYTHON_CFLAGS=`$PYTHON_CONFIG --cflags ${python_config_embed}`]\n    AC_SUBST(PYTHON_CFLAGS)\n    [PYTHON_LDFLAGS=`$PYTHON_CONFIG --ldflags ${python_config_embed}`]\n    AC_SUBST(PYTHON_LDFLAGS)\n    [PYTHON_LIBS=`$PYTHON_CONFIG --libs ${python_config_embed}`]\n    AC_SUBST(PYTHON_LIBS)\n    AC_DEFINE([PYTHON], [], [Defined when Python is available])\n    AC_DEFINE_UNQUOTED([PYTHON_BIN_PATH], [\"$PYTHON\"], [Python executable path])\n  ],\n  [AC_MSG_ERROR([Python version 3 is required.])])\n])\n"
  },
  {
    "path": "m4/with_sendmail.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_WITH_SENDMAIL],\n[\n  AC_ARG_WITH([sendmail],\n    AS_HELP_STRING([--with-sendmail=EXECUTABLE],\n      [Specify the full path where the sendmail executable is installed.]\n    )\n  )\n  AS_IF([test \"x$with_sendmail\" != \"x\"],\n    sendmail_cmd=[\"$with_sendmail\"],\n    sendmail_cmd=[\"/usr/sbin/sendmail\"]\n  )\n  AC_MSG_CHECKING([for sendmail])\n  AC_MSG_RESULT([$sendmail_cmd])\n  AC_DEFINE_UNQUOTED([SENDMAIL_CMD], [\"$sendmail_cmd\"], [Full path to the sendmail executable])\n])\n"
  },
  {
    "path": "m4/with_server_home.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_WITH_SERVER_HOME],\n[\n  AC_MSG_CHECKING([for PBS home directory])\n  AC_ARG_WITH([pbs-server-home],\n    AS_HELP_STRING([--with-pbs-server-home=DIR],\n      [Location of the PBS spool directory. 
Default is /var/spool/pbs]\n    )\n  )\n  AS_IF([test \"x$with_pbs_server_home\" != \"x\"],\n    PBS_SERVER_HOME=[$with_pbs_server_home],\n    PBS_SERVER_HOME=[/var/spool/pbs]\n  )\n  AC_MSG_RESULT([$PBS_SERVER_HOME])\n  AC_SUBST(PBS_SERVER_HOME)\n  AC_DEFINE_UNQUOTED([PBS_SERVER_HOME], [\"$PBS_SERVER_HOME\"], [Location of the PBS spool directory])\n])\n"
  },
  {
    "path": "m4/with_server_name_file.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_WITH_SERVER_NAME_FILE],\n[\n  AC_MSG_CHECKING([for PBS server name file])\n  AC_ARG_WITH([pbs-server-name-file],\n    AS_HELP_STRING([--with-pbs-server-name-file=FILE],\n      [Location of the PBS server name file relative to PBS_HOME. 
Default is PBS_HOME/server_name]\n    )\n  )\n  AS_IF([test \"x$with_pbs_server_name_file\" != \"x\"],\n    [pbs_default_file=$with_pbs_server_name_file],\n    [pbs_default_file=server_name]\n  )\n  AS_CASE([$pbs_default_file],\n    [/*],\n      [PBS_DEFAULT_FILE=$pbs_default_file],\n    [PBS_DEFAULT_FILE=$PBS_SERVER_HOME/$pbs_default_file]\n  )\n  AC_MSG_RESULT([$PBS_DEFAULT_FILE])\n  AC_SUBST(PBS_DEFAULT_FILE)\n  AC_DEFINE_UNQUOTED([PBS_DEFAULT_FILE], [\"$PBS_DEFAULT_FILE\"], [Location of the PBS server name file])\n])\n"
  },
  {
    "path": "m4/with_swig.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_WITH_SWIG],\n[\n  AC_ARG_WITH([swig],\n    AS_HELP_STRING([--with-swig=EXECUTABLE],\n      [Specify the full path where the swig executable is installed.]\n    )\n  )\n  AS_IF([test \"x$with_swig\" != \"x\"],\n    swig_dir=[\"$with_swig\"],\n    swig_dir=[\"/usr\"]\n  )\n  AC_MSG_CHECKING([for swig])\n  AS_IF([test -x \"$swig_dir/bin/swig\"],\n    AC_MSG_RESULT([$swig_dir/bin/swig])\n    AC_DEFINE([SWIG], [], [Defined when swig is available]),\n    AC_MSG_RESULT([not found])\n    AC_MSG_WARN([swig command not found.]))\n  AS_IF([test \"x`ls -d ${swig_dir}/share/swig/* 2>/dev/null`\" = \"x\" ],\n          [swig_py_inc=\"-I`ls -d ${swig_dir}/share/swig* | tail -n 1` -I`ls -d ${swig_dir}/share/swig*/python | tail -n 1`\"],\n          [swig_py_inc=\"-I`ls -d ${swig_dir}/share/swig/* | tail -n 1` -I`ls -d ${swig_dir}/share/swig/*/python | tail -n 1`\"])\n  AC_SUBST([swig_dir])\n  AC_SUBST([swig_py_inc])\n])\n"
  },
  {
    "path": "m4/with_tcl.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_WITH_TCL],\n[\n  AC_ARG_WITH([tcl],\n    AS_HELP_STRING([--with-tcl=DIR],\n      [Specify the directory where Tcl is installed.]\n    )\n  )\n  AS_IF([test \"x$with_tcl\" != \"x\"],\n    tcl_dir=[\"$with_tcl\"],\n    tcl_dir=[\"/usr\"]\n  )\n  AC_MSG_CHECKING([for Tcl])\n  AS_IF([test -r \"$tcl_dir/lib64/tclConfig.sh\"],\n    [. \"$tcl_dir/lib64/tclConfig.sh\"],\n    AS_IF([test -r \"$tcl_dir/lib/tclConfig.sh\"],\n      [. \"$tcl_dir/lib/tclConfig.sh\"],\n      AS_IF([test -r \"$tcl_dir/lib/x86_64-linux-gnu/tclConfig.sh\"],\n        [. \"$tcl_dir/lib/x86_64-linux-gnu/tclConfig.sh\"],\n        AC_MSG_ERROR([tclConfig.sh not found]))))\n  AC_MSG_RESULT([$tcl_dir])\n  AC_MSG_CHECKING([for Tcl version])\n  AS_IF([test \"x$TCL_VERSION\" = \"x\"],\n    AC_MSG_ERROR([Could not determine Tcl version]))\n  AC_MSG_RESULT([$TCL_VERSION])\n  [tcl_version=\"$TCL_VERSION\"]\n  AC_SUBST(tcl_version)\n  AC_MSG_CHECKING([for Tk])\n  AS_IF([test -r \"$tcl_dir/lib64/tkConfig.sh\"],\n    [. \"$tcl_dir/lib64/tkConfig.sh\"],\n    AS_IF([test -r \"$tcl_dir/lib/tkConfig.sh\"],\n      [. \"$tcl_dir/lib/tkConfig.sh\"],\n      AS_IF([test -r \"$tcl_dir/lib/x86_64-linux-gnu/tkConfig.sh\"],\n        [. 
\"$tcl_dir/lib/x86_64-linux-gnu/tkConfig.sh\"],\n        AC_MSG_ERROR([tkConfig.sh not found]))))\n  AC_MSG_RESULT([$tcl_dir])\n  AC_MSG_CHECKING([for Tk version])\n  AS_IF([test \"x$TK_VERSION\" = \"x\"],\n    AC_MSG_ERROR([Could not determine Tk version]))\n  AC_MSG_RESULT([$TK_VERSION])\n  [tk_version=\"$TK_VERSION\"]\n  AC_SUBST(tk_version)\n  AS_IF([test x$TCL_INCLUDE_SPEC = x],\n    # Using developer installed tcl\n    [tcl_inc=\"-I$tcl_dir/include\"]\n    [tcl_lib=\"$tcl_dir/lib/libtcl$TCL_VERSION.a $TCL_LIBS\"]\n    [tk_inc=\"-I$tcl_dir/include\"]\n    [tk_lib=\"$tcl_dir/lib/libtcl$TCL_VERSION.a $tcl_dir/lib/libtk$TK_VERSION.a $TK_LIBS\"],\n    # Using system installed tcl\n    [tcl_inc=\"$TCL_INCLUDE_SPEC\"]\n    [tcl_lib=\"$TCL_LIB_SPEC $TCL_LIBS\"]\n    [tk_inc=\"$TK_INCLUDE_SPEC\"]\n    [tk_lib=`echo \"$TCL_LIB_SPEC $TK_LIB_SPEC $TK_LIBS\" | ${SED} -e 's/-lXss //'`])\n  save_CPPFLAGS=\"$CPPFLAGS\"\n  CPPFLAGS=\"$CPPFLAGS $tcl_inc\"\n  AC_CHECK_TYPES([Tcl_Size], [], [], [[#include <tcl.h>]])\n  CPPFLAGS=\"$save_CPPFLAGS\"\n  AC_SUBST(tcl_inc)\n  AC_SUBST(tcl_lib)\n  AC_SUBST(tk_inc)\n  AC_SUBST(tk_lib)\n  [TCLSH_PATH=$tcl_dir/bin/tclsh$tcl_version]\n  AC_SUBST(TCLSH_PATH)\n  AC_DEFINE([TCL], [], [Defined when Tcl is available])\n  AC_DEFINE([TK], [], [Defined when TK is available])\n])\n"
  },
  {
    "path": "m4/with_tclatrsep.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_WITH_TCLATRSEP],\n[\n  AC_ARG_WITH([tclatrsep],\n    AS_HELP_STRING([--with-tclatrsep=CHAR],\n      [Specify the Tcl attribute separator.]\n    )\n  )\n  AS_IF([test x$with_tclatrsep != x],\n    [tcl_atrsep=\"$with_tclatrsep\"],\n    [tcl_atrsep=\".\"]\n  )\n  AC_MSG_CHECKING([for Tcl attribute separator])\n  AC_MSG_RESULT([\"$tcl_atrsep\"])\n  AC_DEFINE_UNQUOTED([TCL_ATRSEP], [\"$tcl_atrsep\"], [The Tcl attribute separator character])\n])\n"
  },
  {
    "path": "m4/with_tmpdir.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_WITH_TMP_DIR],\n[\n  AC_MSG_CHECKING([for PBS temporary file location])\n  AC_ARG_WITH([tmpdir],\n    AS_HELP_STRING([--with-tmpdir=DIR],\n      [Location of the PBS temporary file directory.]\n    )\n  )\n  AS_IF([test \"x$with_tmpdir\" != \"x\"],\n    pbs_tmpdir=[$with_tmpdir],\n    pbs_tmpdir=[/var/tmp]\n  )\n  AC_MSG_RESULT([$pbs_tmpdir])\n  AC_DEFINE_UNQUOTED([TMP_DIR], [\"$pbs_tmpdir\"], [Location of the PBS temporary file directory])\n])\n"
  },
  {
    "path": "m4/with_unsupported_dir.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_WITH_UNSUPPORTED_DIR],\n[\n  AC_MSG_CHECKING([for unsupported directory])\n  AC_ARG_WITH([unsupported-dir],\n    AS_HELP_STRING([--with-unsupported-dir=DIR],\n      [Specify the installation directory for unsupported tools.]\n    )\n  )\n  AS_IF([test x$with_unsupported_dir != x],\n    [unsupporteddir=\"$with_unsupported_dir\"],\n    [unsupporteddir='${exec_prefix}/unsupported']\n  )\n  AC_MSG_RESULT([$unsupporteddir])\n  AC_SUBST(unsupporteddir)\n])\n"
  },
  {
    "path": "m4/with_xauth.m4",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nAC_DEFUN([PBS_AC_WITH_XAUTH],\n[\n  AC_ARG_WITH([xauth],\n    AS_HELP_STRING([--with-xauth=EXECUTABLE],\n      [Specify the name of the xauth command.]\n    )\n  )\n  AS_IF([test \"x$with_xauth\" != \"x\"],\n    xauth_cmd=[\"$with_xauth\"],\n    xauth_cmd=[\"xauth\"]\n  )\n  AC_MSG_CHECKING([for xauth])\n  AC_MSG_RESULT([$xauth_cmd])\n  AC_DEFINE_UNQUOTED([XAUTH_BINARY], [\"$xauth_cmd\"], [Name of the xauth executable])\n])\n"
  },
  {
    "path": "openpbs-rpmlintrc",
    "content": "# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n# IGNORE: The following are purposely suppressed.\naddFilter(\"dir-or-file-in-opt\")\naddFilter(\"non-executable-script\")\naddFilter(\"non-standard-executable-perm .*pbs_iff\")\naddFilter(\"setuid-binary .*pbs_iff\")\naddFilter(\"non-standard-executable-perm .*pbs_rcp\")\naddFilter(\"setuid-binary .*pbs_rcp\")\naddFilter(\"only-non-binary-in-usr-lib\")\n# FIXME: The following errors need to be addressed rather than ignored\naddFilter('permissions-file-setuid-bit')\nsetBadness('permissions-file-setuid-bit', 0)\naddFilter('non-position-independent-executable')\nsetBadness('non-position-independent-executable', 0)\naddFilter('devel-file-in-non-devel-package')\nsetBadness('devel-file-in-non-devel-package', 0)\n"
  },
  {
    "path": "openpbs.spec.in",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\n%if !%{defined pbs_name}\n%define pbs_name openpbs\n%endif\n\n%if !%{defined pbs_version}\n%define pbs_version @PBS_VERSION@\n%endif\n\n%if !%{defined pbs_release}\n%define pbs_release 0\n%endif\n\n%if !%{defined pbs_prefix}\n%define pbs_prefix /opt/pbs\n%endif\n\n%if !%{defined pbs_home}\n%define pbs_home @PBS_SERVER_HOME@\n%endif\n\n%if !%{defined pbs_dbuser}\n%define pbs_dbuser postgres\n%endif\n\n%define pbs_client client\n%define pbs_execution execution\n%define pbs_server server\n%define pbs_devel devel\n%define pbs_dist %{pbs_name}-%{pbs_version}.tar.gz\n\n%if !%{defined _unitdir}\n%define _unitdir @_unitdir@\n%endif\n%if \"%{_vendor}\" == \"debian\" && %(test -f /etc/os-release && echo 1 || echo 0)\n%define _vendor_ver %(cat /etc/os-release | awk -F[=\\\\\".] 
'/^VERSION_ID=/ {print \\$3}')\n%define _vendor_id %(cat /etc/os-release | awk -F= '/^ID=/ {print \\$2}')\n%endif\n%if ( 0%{?suse_version} >= 1210 ) || ( 0%{?rhel} >= 7 ) || (\"x%{?_vendor_id}\" == \"xdebian\" && 0%{?_vendor_ver} >= 8) || (\"x%{?_vendor_id}\" == \"xubuntu\" && 0%{?_vendor_ver} >= 16)\n%define have_systemd 1\n%endif\n\n%global __python %{__python3}\n\nName: %{pbs_name}\nVersion: %{pbs_version}\nRelease: %{pbs_release}\nSource0: %{pbs_dist}\nSummary: OpenPBS\nLicense: AGPLv3 with exceptions\nURL: http://www.openpbs.org\nVendor: Altair Engineering, Inc.\nPrefix: %{?pbs_prefix}%{!?pbs_prefix:%{_prefix}}\n\n%bcond_with alps\n%bcond_with ptl\n%bcond_with pmix\n\nBuildRoot: %{buildroot}\nBuildRequires: gcc\nBuildRequires: gcc-c++\nBuildRequires: make\nBuildRequires: rpm-build\nBuildRequires: autoconf\nBuildRequires: automake\nBuildRequires: libtool\nBuildRequires: libtool-ltdl-devel\nBuildRequires: hwloc-devel\nBuildRequires: libX11-devel\nBuildRequires: libXt-devel\nBuildRequires: libedit-devel\nBuildRequires: libical-devel\nBuildRequires: ncurses-devel\nBuildRequires: perl\nBuildRequires: postgresql-devel >= 9.1\nBuildRequires: postgresql-contrib >= 9.1\nBuildRequires: python3-devel >= 3.5\nBuildRequires: tcl-devel\nBuildRequires: tk-devel\nBuildRequires: swig\nBuildRequires: zlib-devel\n%if %{with pmix}\nBuildRequires: pmix-devel\n%endif\n%if %{defined suse_version}\nBuildRequires: libexpat-devel\nBuildRequires: libopenssl-devel\nBuildRequires: libXext-devel\nBuildRequires: libXft-devel\nBuildRequires: fontconfig\nBuildRequires: timezone\nBuildRequires: cJSON-devel\n%if ( ( !%{defined sle_version} ) || ( 0%{?sle_version} < 150500 ) )\nBuildRequires: python-xml\n%endif\n%else\nBuildRequires: expat-devel\nBuildRequires: openssl-devel\nBuildRequires: libXext\nBuildRequires: libXft\n%if ( ( !%{defined rhel} ) || ( 0%{?rhel} >= 8 ) )\nBuildRequires: cjson-devel\n%endif\n%endif\n\n# Pure python extensions use the 32 bit library path\n%{!?py_site_pkg_32: 
%global py_site_pkg_32 %(%{__python} -c \"from distutils.sysconfig import get_python_lib; print(get_python_lib(0))\")}\n%{!?py_site_pkg_64: %global py_site_pkg_64 %(%{__python} -c \"from distutils.sysconfig import get_python_lib; print(get_python_lib(1))\")}\n\n%description\nOpenPBS is a fast, powerful workload manager and\njob scheduler designed to improve productivity, optimize\nutilization & efficiency, and simplify administration for\nHPC clusters, clouds and supercomputers.\n\n%package %{pbs_server}\nSummary: OpenPBS for a server host\nGroup: System Environment/Base\nConflicts: openpbs-execution\nConflicts: openpbs-client\nConflicts: openpbs-execution-ohpc\nConflicts: openpbs-client-ohpc\nConflicts: openpbs-server-ohpc\nConflicts: pbspro-server\nConflicts: pbspro-execution\nConflicts: pbspro-client\nConflicts: pbspro-client-ohpc\nConflicts: pbspro-execution-ohpc\nConflicts: pbspro-server-ohpc\nConflicts: pbs\nConflicts: pbs-mom\nConflicts: pbs-cmds\n%if %{defined rhel}\nRequires: chkconfig\n%endif\nRequires: bash\nRequires: expat\nRequires: libedit\nRequires: postgresql-server >= 9.1\nRequires: postgresql-contrib >= 9.1\nRequires: python3 >= 3.5\nRequires: tcl\nRequires: tk\n%if %{with pmix}\nRequires: pmix\n%endif\n%if %{defined suse_version}\nRequires: smtp_daemon\nRequires: libhwloc15\nRequires: net-tools\nRequires: libcjson1\n%else\nRequires: smtpdaemon\nRequires: hostname\n%if ( ( !%{defined rhel} ) || ( 0%{?rhel} >= 8 ) )\nRequires: cjson\n%endif\n%endif\n%if 0%{?rhel} >= 7\nRequires: hwloc-libs\n%endif\nRequires: libical\nAutoreq: 1\n\n%description %{pbs_server}\nOpenPBS is a fast, powerful workload manager and\njob scheduler designed to improve productivity, optimize\nutilization & efficiency, and simplify administration for\nHPC clusters, clouds and supercomputers.\n\nThis package is intended for a server host. 
It includes all\nPBS components.\n\n%package %{pbs_execution}\nSummary: OpenPBS for an execution host\nGroup: System Environment/Base\nConflicts: openpbs-server\nConflicts: openpbs-client\nConflicts: openpbs-execution-ohpc\nConflicts: openpbs-client-ohpc\nConflicts: openpbs-server-ohpc\nConflicts: pbspro-server\nConflicts: pbspro-execution\nConflicts: pbspro-client\nConflicts: pbspro-client-ohpc\nConflicts: pbspro-execution-ohpc\nConflicts: pbspro-server-ohpc\nConflicts: pbs\nConflicts: pbs-mom\nConflicts: pbs-cmds\n%if %{defined rhel}\nRequires: chkconfig\n%endif\nRequires: bash\nRequires: expat\nRequires: python3 >= 3.5\n%if %{with pmix}\nRequires: pmix\n%endif\n%if %{defined suse_version}\nRequires: libhwloc15\nRequires: net-tools\nRequires: libcjson1\n%else\nRequires: hostname\n%if ( ( !%{defined rhel} ) || ( 0%{?rhel} >= 8 ) )\nRequires: cjson\n%endif\n%endif\n%if 0%{?rhel} >= 7\nRequires: hwloc-libs\n%endif\nAutoreq: 1\n\n%description %{pbs_execution}\nOpenPBS is a fast, powerful workload manager and\njob scheduler designed to improve productivity, optimize\nutilization & efficiency, and simplify administration for\nHPC clusters, clouds and supercomputers.\n\nThis package is intended for an execution host. It does not\ninclude the scheduler, server, or communication agent. 
It\ndoes include the PBS user commands.\n\n%package %{pbs_client}\nSummary: OpenPBS for a client host\nGroup: System Environment/Base\nConflicts: openpbs-server\nConflicts: openpbs-execution\nConflicts: openpbs-execution-ohpc\nConflicts: openpbs-client-ohpc\nConflicts: openpbs-server-ohpc\nConflicts: pbspro-server\nConflicts: pbspro-execution\nConflicts: pbspro-client\nConflicts: pbspro-client-ohpc\nConflicts: pbspro-execution-ohpc\nConflicts: pbspro-server-ohpc\nConflicts: pbs\nConflicts: pbs-mom\nConflicts: pbs-cmds\nRequires: bash\nRequires: python3 >= 3.5\n%if %{defined suse_version}\nRequires: libcjson1\n%else\n%if ( ( !%{defined rhel} ) || ( 0%{?rhel} >= 8 ) )\nRequires: cjson\n%endif\n%endif\nAutoreq: 1\n\n%description %{pbs_client}\nOpenPBS is a fast, powerful workload manager and\njob scheduler designed to improve productivity, optimize\nutilization & efficiency, and simplify administration for\nHPC clusters, clouds and supercomputers.\n\nThis package is intended for a client host and provides\nthe PBS user commands.\n\n\n%package %{pbs_devel}\nSummary: OpenPBS Development Package\nGroup: Development/System\nConflicts: pbspro-devel\nConflicts: pbspro-devel-ohpc\nConflicts: openpbs-devel-ohpc\n\n%description %{pbs_devel}\nOpenPBS is a fast, powerful workload manager and\njob scheduler designed to improve productivity, optimize\nutilization & efficiency, and simplify administration for\nHPC clusters, clouds and supercomputers.\n\n\n%if %{with ptl}\n\n%define pbs_ptl ptl\n\n%if !%{defined ptl_prefix}\n%define ptl_prefix %{pbs_prefix}/../ptl\n%endif\n\n%package %{pbs_ptl}\nSummary: Testing framework for PBS\nGroup: System Environment/Base\nPrefix: %{ptl_prefix}\nConflicts: pbspro-ptl\n\n%description %{pbs_ptl}\nPBS Test Lab is a testing framework intended to test and validate the\nfunctionality of PBS.\n\n%endif\n\n%if 0%{?opensuse_bs}\n# Do not specify debug_package for OBS builds.\n%else\n%if 0%{?suse_version} || \"x%{?_vendor_id}\" == \"xdebian\" || 
\"x%{?_vendor_id}\" == \"xubuntu\"\n%debug_package\n%endif\n%endif\n\n%prep\n%setup\n\n%build\n[ -f configure ] || ./autogen.sh\n[ -d build ] && rm -rf build\nmkdir build\ncd build\n../configure \\\n\tPBS_VERSION=%{pbs_version} \\\n\t--prefix=%{pbs_prefix} \\\n%if %{with ptl}\n\t--enable-ptl \\\n%endif\n\t%{?_with_swig} \\\n%if %{defined suse_version}\n\t--libexecdir=%{pbs_prefix}/libexec \\\n%endif\n%if %{with alps}\n\t--enable-alps \\\n%endif\n%if %{with pmix}\n\t--with-pmix \\\n%endif\n\t--with-pbs-server-home=%{pbs_home} \\\n\t--with-database-user=%{pbs_dbuser}\n%{__make} %{?_smp_mflags}\n\n%install\ncd build\n%make_install\nmandir=$(find %{buildroot} -type d -name man)\n[ -d \"$mandir\" ] && find $mandir -type f -exec gzip -9 -n {} \\;\ninstall -D %{buildroot}/%{pbs_prefix}/libexec/pbs_init.d %{buildroot}/etc/init.d/pbs\n%if 0%{?rhel} >= 7\nexport QA_RPATHS=$[ 0x0002 ]\n%endif\n\n%post %{pbs_server}\nldconfig %{_libdir}\n# do not run pbs_postinstall when the CLE is greater than or equal to 6\nimps=0\ncle_release_version=0\ncle_release_path=/etc/opt/cray/release/cle-release\nif [ -f ${cle_release_path} ]; then\n\tcle_release_version=`grep RELEASE ${cle_release_path} | cut -f2 -d= | cut -f1 -d.`\nfi\n[ \"${cle_release_version}\" -ge 6 ] 2>/dev/null && imps=1\nif [ $imps -eq 0 ]; then\n${RPM_INSTALL_PREFIX:=%{pbs_prefix}}/libexec/pbs_postinstall server \\\n\t%{version} ${RPM_INSTALL_PREFIX:=%{pbs_prefix}} %{pbs_home} %{pbs_dbuser}\nfi\n\n%post %{pbs_execution}\nldconfig %{_libdir}\n# do not run pbs_postinstall when the CLE is greater than or equal to 6\nimps=0\ncle_release_version=0\ncle_release_path=/etc/opt/cray/release/cle-release\nif [ -f ${cle_release_path} ]; then\n\tcle_release_version=`grep RELEASE ${cle_release_path} | cut -f2 -d= | cut -f1 -d.`\nfi\n[ \"${cle_release_version}\" -ge 6 ] 2>/dev/null && imps=1\nif [ $imps -eq 0 ]; then\n${RPM_INSTALL_PREFIX:=%{pbs_prefix}}/libexec/pbs_postinstall execution \\\n\t%{version} 
${RPM_INSTALL_PREFIX:=%{pbs_prefix}} %{pbs_home}\nfi\n\n%post %{pbs_client}\nldconfig %{_libdir}\n# do not run pbs_postinstall when the CLE is greater than or equal to 6\nimps=0\ncle_release_version=0\ncle_release_path=/etc/opt/cray/release/cle-release\nif [ -f ${cle_release_path} ]; then\n\tcle_release_version=`grep RELEASE ${cle_release_path} | cut -f2 -d= | cut -f1 -d.`\nfi\n[ \"${cle_release_version}\" -ge 6 ] 2>/dev/null && imps=1\nif [ $imps -eq 0 ]; then\n${RPM_INSTALL_PREFIX:=%{pbs_prefix}}/libexec/pbs_postinstall client \\\n\t%{version} ${RPM_INSTALL_PREFIX:=%{pbs_prefix}}\nfi\n\n%post %{pbs_devel}\nldconfig %{_libdir}\n\n%preun %{pbs_server}\nif [ \"$1\" != \"1\" ]; then\n\t# This is an uninstall, not an upgrade.\n\t${RPM_INSTALL_PREFIX:=%{pbs_prefix}}/libexec/pbs_preuninstall server \\\n\t\t%{version} ${RPM_INSTALL_PREFIX:=%{pbs_prefix}} %{pbs_home} %{defined have_systemd}\nfi\n\n%preun %{pbs_execution}\nif [ \"$1\" != \"1\" ]; then\n\t# This is an uninstall, not an upgrade.\n\t${RPM_INSTALL_PREFIX:=%{pbs_prefix}}/libexec/pbs_preuninstall execution \\\n\t\t%{version} ${RPM_INSTALL_PREFIX:=%{pbs_prefix}} %{pbs_home} %{defined have_systemd}\nfi\n\n%preun %{pbs_client}\nif [ \"$1\" != \"1\" ]; then\n\t# This is an uninstall, not an upgrade.\n\t${RPM_INSTALL_PREFIX:=%{pbs_prefix}}/libexec/pbs_preuninstall client \\\n\t\t%{version} ${RPM_INSTALL_PREFIX:=%{pbs_prefix}} %{pbs_home} %{defined have_systemd}\nfi\n\n%postun %{pbs_server}\nif [ \"$1\" != \"1\" ]; then\n\t# This is an uninstall, not an upgrade.\n\tldconfig %{_libdir}\n\techo\n\techo \"NOTE: @PBS_CONF_FILE@ and the PBS_HOME directory must be deleted manually\"\n\techo\nfi\n\n%postun %{pbs_execution}\nif [ \"$1\" != \"1\" ]; then\n\t# This is an uninstall, not an upgrade.\n\tldconfig %{_libdir}\n\techo\n\techo \"NOTE: @PBS_CONF_FILE@ and the PBS_HOME directory must be deleted manually\"\n\techo\nfi\n\n%postun %{pbs_client}\nif [ \"$1\" != \"1\" ]; then\n\t# This is an uninstall, not an 
upgrade.\n\tldconfig %{_libdir}\n\techo\n\techo \"NOTE: @PBS_CONF_FILE@ must be deleted manually\"\n\techo\nfi\n\n%postun %{pbs_devel}\nldconfig %{_libdir}\n\n%posttrans %{pbs_server}\n${RPM_INSTALL_PREFIX:=%{pbs_prefix}}/libexec/pbs_posttrans \\\n\t${RPM_INSTALL_PREFIX:=%{pbs_prefix}}\n\n%posttrans %{pbs_execution}\n${RPM_INSTALL_PREFIX:=%{pbs_prefix}}/libexec/pbs_posttrans \\\n\t${RPM_INSTALL_PREFIX:=%{pbs_prefix}}\n\n%files %{pbs_server}\n%defattr(-,root,root, -)\n%dir %{pbs_prefix}\n%{pbs_prefix}/*\n%attr(4755, root, root) %{pbs_prefix}/sbin/pbs_rcp\n%attr(4755, root, root) %{pbs_prefix}/sbin/pbs_iff\n#%attr(644, root, root) %{pbs_prefix}/lib*/libpbs.la\n%{_sysconfdir}/profile.d/pbs.csh\n%{_sysconfdir}/profile.d/pbs.sh\n%config(noreplace) %{_sysconfdir}/profile.d/pbs.*\n%exclude %{_sysconfdir}/profile.d/ptl.csh\n%exclude %{_sysconfdir}/profile.d/ptl.sh\n%if %{defined have_systemd}\n%attr(644, root, root) %{_unitdir}/pbs.service\n%attr(644, root, root) %{pbs_prefix}/libexec/pbs_reload\n%else\n%exclude %{_unitdir}/pbs.service\n%exclude %{pbs_prefix}/libexec/pbs_reload\n%endif\n#%exclude %{pbs_prefix}/unsupported/*.pyc\n#%exclude %{pbs_prefix}/unsupported/*.pyo\n%exclude %{pbs_prefix}/lib*/*.a\n%exclude %{pbs_prefix}/include/*\n%doc README.md\n%license LICENSE\n\n%files %{pbs_execution}\n%defattr(-,root,root, -)\n%dir %{pbs_prefix}\n%{pbs_prefix}/*\n%attr(4755, root, root) %{pbs_prefix}/sbin/pbs_rcp\n%attr(4755, root, root) %{pbs_prefix}/sbin/pbs_iff\n#%attr(644, root, root) %{pbs_prefix}/lib*/libpbs.la\n%{_sysconfdir}/profile.d/pbs.csh\n%{_sysconfdir}/profile.d/pbs.sh\n%config(noreplace) %{_sysconfdir}/profile.d/pbs.*\n%exclude %{_sysconfdir}/profile.d/ptl.csh\n%exclude %{_sysconfdir}/profile.d/ptl.sh\n%if %{defined have_systemd}\n%attr(644, root, root) %{_unitdir}/pbs.service\n%else\n%exclude %{_unitdir}/pbs.service\n%endif\n%exclude %{pbs_prefix}/bin/printjob_svr.bin\n%exclude %{pbs_prefix}/etc/pbs_dedicated\n%exclude %{pbs_prefix}/etc/pbs_holidays*\n%exclude 
%{pbs_prefix}/etc/pbs_resource_group\n%exclude %{pbs_prefix}/etc/pbs_sched_config\n%exclude %{pbs_prefix}/lib*/init.d/sgiICEplacement.sh\n%exclude %{pbs_prefix}/lib*/python/altair/pbs_hooks/*\n%exclude %{pbs_prefix}/libexec/pbs_db_utility\n%exclude %{pbs_prefix}/sbin/pbs_comm\n%exclude %{pbs_prefix}/sbin/pbs_dataservice\n%exclude %{pbs_prefix}/sbin/pbs_ds_monitor\n%exclude %{pbs_prefix}/sbin/pbs_ds_password\n%exclude %{pbs_prefix}/sbin/pbs_ds_password.bin\n%exclude %{pbs_prefix}/sbin/pbs_ds_systemd\n%exclude %{pbs_prefix}/sbin/pbs_sched\n%exclude %{pbs_prefix}/sbin/pbs_server\n%exclude %{pbs_prefix}/sbin/pbs_server.bin\n%exclude %{pbs_prefix}/sbin/pbsfs\n#%exclude %{pbs_prefix}/unsupported/*.pyc\n#%exclude %{pbs_prefix}/unsupported/*.pyo\n%exclude %{pbs_prefix}/lib*/*.a\n%exclude %{pbs_prefix}/include/*\n%doc README.md\n%license LICENSE\n\n%files %{pbs_client}\n%defattr(-,root,root, -)\n%dir %{pbs_prefix}\n%{pbs_prefix}/*\n%attr(4755, root, root) %{pbs_prefix}/sbin/pbs_iff\n#%attr(644, root, root) %{pbs_prefix}/lib*/libpbs.la\n%{_sysconfdir}/profile.d/pbs.csh\n%{_sysconfdir}/profile.d/pbs.sh\n%config(noreplace) %{_sysconfdir}/profile.d/pbs.*\n%exclude %{_sysconfdir}/profile.d/ptl.csh\n%exclude %{_sysconfdir}/profile.d/ptl.sh\n%exclude %{pbs_prefix}/bin/mpiexec\n%exclude %{pbs_prefix}/bin/pbs_attach\n%exclude %{pbs_prefix}/bin/pbs_tmrsh\n%exclude %{pbs_prefix}/bin/printjob_svr.bin\n%exclude %{pbs_prefix}/etc/pbs_dedicated\n%exclude %{pbs_prefix}/etc/pbs_holidays*\n%exclude %{pbs_prefix}/etc/pbs_resource_group\n%exclude %{pbs_prefix}/etc/pbs_sched_config\n%exclude %{pbs_prefix}/include\n%exclude %{pbs_prefix}/lib*/MPI\n%exclude %{pbs_prefix}/lib*/init.d\n%exclude %{pbs_prefix}/lib*/python/altair/pbs_hooks\n%exclude %{pbs_prefix}/lib*/python/pbs_bootcheck*\n%exclude %{pbs_prefix}/libexec/pbs_db_utility\n%exclude %{pbs_prefix}/libexec/pbs_habitat\n%exclude %{pbs_prefix}/libexec/pbs_init.d\n%exclude %{pbs_prefix}/sbin/pbs_comm\n%exclude 
%{pbs_prefix}/sbin/pbs_demux\n%exclude %{pbs_prefix}/sbin/pbs_dataservice\n%exclude %{pbs_prefix}/sbin/pbs_ds_monitor\n%exclude %{pbs_prefix}/sbin/pbs_ds_password\n%exclude %{pbs_prefix}/sbin/pbs_ds_password.bin\n%exclude %{pbs_prefix}/sbin/pbs_ds_systemd\n%exclude %{pbs_prefix}/sbin/pbs_idled\n%exclude %{pbs_prefix}/sbin/pbs_mom\n%exclude %{pbs_prefix}/sbin/pbs_rcp\n%exclude %{pbs_prefix}/sbin/pbs_sched\n%exclude %{pbs_prefix}/sbin/pbs_server\n%exclude %{pbs_prefix}/sbin/pbs_server.bin\n%exclude %{pbs_prefix}/sbin/pbs_upgrade_job\n%exclude %{pbs_prefix}/sbin/pbsfs\n#%exclude %{pbs_prefix}/unsupported/*.pyc\n#%exclude %{pbs_prefix}/unsupported/*.pyo\n%exclude %{_unitdir}/pbs.service\n%exclude %{pbs_prefix}/lib*/*.a\n%exclude %{pbs_prefix}/include/*\n%exclude /etc/init.d/pbs\n%doc README.md\n%license LICENSE\n\n%files %{pbs_devel}\n%defattr(-,root,root, -)\n%{pbs_prefix}/lib*/*.a\n%{pbs_prefix}/include/*\n%doc README.md\n%license LICENSE\n\n%if %{with ptl}\n%files %{pbs_ptl}\n%defattr(-,root,root, -)\n%dir %{ptl_prefix}\n%{ptl_prefix}/*\n%{_sysconfdir}/profile.d/ptl.csh\n%{_sysconfdir}/profile.d/ptl.sh\n%config(noreplace) %{_sysconfdir}/profile.d/ptl.*\n\n%post %{pbs_ptl}\npip3 install --trusted-host pypi.org --trusted-host files.pythonhosted.org -r \"%{ptl_prefix}/fw/requirements.txt\"\n\n%preun %{pbs_ptl}\npip3 uninstall --yes -r \"%{ptl_prefix}/fw/requirements.txt\"\n\n%endif\n\n%changelog\n* Fri Apr 17 2020 Hiren Vadalia <hiren.vadalia@altair.com> - 1.31\n- We are not using this changelog, see commit history\n"
  },
  {
    "path": "src/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nSUBDIRS = \\\n\tinclude \\\n\tlib \\\n\tserver \\\n\tscheduler \\\n\tmodules \\\n\tresmom \\\n\tmom_rcp \\\n\tiff \\\n\tcmds \\\n\ttools \\\n\thooks \\\n\tunsupported\n"
  },
  {
    "path": "src/cmds/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nSUBDIRS = scripts\n\nbin_PROGRAMS = \\\n\tpbsdsh \\\n\tpbsnodes \\\n\tpbs_attach \\\n\tpbs_tmrsh \\\n\tpbs_ralter \\\n\tpbs_rdel \\\n\tpbs_rstat \\\n\tpbs_rsub \\\n\tpbs_release_nodes \\\n\tqalter \\\n\tqdel \\\n\tqdisable \\\n\tqenable \\\n\tqhold \\\n\tqmgr \\\n\tqmove \\\n\tqorder \\\n\tqmsg \\\n\tqrerun \\\n\tqrls \\\n\tqrun \\\n\tqselect \\\n\tqsig \\\n\tqstat \\\n\tqstart \\\n\tqstop \\\n\tqsub \\\n\tqterm\n\nsbin_PROGRAMS = \\\n\tpbs_dataservice.bin \\\n\tpbs_ds_password.bin \\\n\tpbs_demux\n\ndist_bin_SCRIPTS = \\\n\tmpiexec \\\n\tpbs_lamboot \\\n\tpbs_mpihp \\\n\tpbs_mpilam \\\n\tpbs_mpirun \\\n\tpbs_remsh \\\n\tpbsrun \\\n\tpbsrun_unwrap \\\n\tpbsrun_wrap\n\ncommon_cflags = \\\n\t-I$(top_srcdir)/src/include \\\n\t@KRB5_CFLAGS@\n\ncommon_libs = \\\n\t$(top_builddir)/src/lib/Libpbs/libpbs.la \\\n\t$(top_builddir)/src/lib/Libnet/libnet.a \\\n\t$(top_builddir)/src/lib/Libsec/libsec.a \\\n\t$(top_builddir)/src/lib/Libutil/libutil.a \\\n\t@KRB5_LIBS@ \\\n\t-lpthread \\\n\t@socket_lib@\n\ncommon_sources = \\\n\t$(top_srcdir)/src/lib/Libcmds/cmds_common.c\n\npbsdsh_CPPFLAGS = ${common_cflags}\npbsdsh_LDADD = ${common_libs}\npbsdsh_SOURCES = pbsdsh.c ${common_sources}\n\npbsnodes_CPPFLAGS = ${common_cflags}\npbsnodes_LDADD = ${common_libs} \\\n\t$(top_builddir)/src/lib/Libjson/libpbsjson.la\npbsnodes_SOURCES = pbsnodes.c 
${common_sources}\n\npbs_attach_CPPFLAGS = ${common_cflags}\npbs_attach_LDADD = ${common_libs}\npbs_attach_SOURCES = pbs_attach.c pbs_attach_sup.c ${common_sources}\n\npbs_demux_CPPFLAGS = ${common_cflags}\npbs_demux_LDADD = ${common_libs}\npbs_demux_SOURCES = pbs_demux.c\n\npbs_dataservice_bin_CPPFLAGS = \\\n\t${common_cflags}\npbs_dataservice_bin_LDADD = \\\n\t$(top_builddir)/src/lib/Libdb/libpbsdb.la \\\n\t${common_libs} \\\n\t-lssl \\\n\t-lcrypto\npbs_dataservice_bin_SOURCES = pbs_dataservice.c ${common_sources}\n\npbs_ds_password_bin_CPPFLAGS = \\\n\t${common_cflags}\npbs_ds_password_bin_LDADD = \\\n\t$(top_builddir)/src/lib/Libdb/libpbsdb.la \\\n\t${common_libs} \\\n\t-lssl \\\n\t-lcrypto\npbs_ds_password_bin_SOURCES = pbs_ds_password.c ${common_sources}\n\npbs_tmrsh_CPPFLAGS = ${common_cflags}\npbs_tmrsh_LDADD = ${common_libs}\npbs_tmrsh_SOURCES = pbs_tmrsh.c ${common_sources}\n\npbs_ralter_CPPFLAGS = ${common_cflags}\npbs_ralter_LDADD = ${common_libs}\npbs_ralter_SOURCES = pbs_ralter.c ${common_sources}\n\npbs_rdel_CPPFLAGS = ${common_cflags}\npbs_rdel_LDADD = ${common_libs}\npbs_rdel_SOURCES = pbs_rdel.c ${common_sources}\n\npbs_rstat_CPPFLAGS = ${common_cflags}\npbs_rstat_LDADD = ${common_libs}\npbs_rstat_SOURCES = pbs_rstat.c ${common_sources}\n\npbs_rsub_CPPFLAGS = ${common_cflags}\npbs_rsub_LDADD = ${common_libs}\npbs_rsub_SOURCES = pbs_rsub.c ${common_sources}\n\npbs_release_nodes_CPPFLAGS = ${common_cflags}\npbs_release_nodes_LDADD = ${common_libs}\npbs_release_nodes_SOURCES = pbs_release_nodes.c ${common_sources}\n\nqalter_CPPFLAGS = ${common_cflags}\nqalter_LDADD = ${common_libs}\nqalter_SOURCES = qalter.c ${common_sources}\n\nqdel_CPPFLAGS = ${common_cflags}\nqdel_LDADD = ${common_libs}\nqdel_SOURCES = qdel.c ${common_sources}\n\nqdisable_CPPFLAGS = ${common_cflags}\nqdisable_LDADD = ${common_libs}\nqdisable_SOURCES = qdisable.c ${common_sources}\n\nqenable_CPPFLAGS = ${common_cflags}\nqenable_LDADD = ${common_libs}\nqenable_SOURCES = qenable.c 
${common_sources}\n\nqhold_CPPFLAGS = ${common_cflags}\nqhold_LDADD = ${common_libs}\nqhold_SOURCES = qhold.c ${common_sources}\n\nqmgr_CPPFLAGS = \\\n\t${common_cflags} \\\n\t@editline_inc@\nqmgr_LDADD = \\\n\t${common_libs} \\\n\t@editline_lib@\nqmgr_SOURCES = qmgr.c qmgr_sup.c ${common_sources}\n\nqmove_CPPFLAGS = ${common_cflags}\nqmove_LDADD = ${common_libs}\nqmove_SOURCES = qmove.c ${common_sources}\n\nqorder_CPPFLAGS = ${common_cflags}\nqorder_LDADD = ${common_libs}\nqorder_SOURCES = qorder.c ${common_sources}\n\nqmsg_CPPFLAGS = ${common_cflags}\nqmsg_LDADD = ${common_libs}\nqmsg_SOURCES = qmsg.c ${common_sources}\n\nqrerun_CPPFLAGS = ${common_cflags}\nqrerun_LDADD = ${common_libs}\nqrerun_SOURCES = qrerun.c ${common_sources}\n\nqrls_CPPFLAGS = ${common_cflags}\nqrls_LDADD = ${common_libs}\nqrls_SOURCES = qrls.c ${common_sources}\n\nqrun_CPPFLAGS = ${common_cflags}\nqrun_LDADD = ${common_libs}\nqrun_SOURCES = qrun.c ${common_sources}\n\nqselect_CPPFLAGS = ${common_cflags}\nqselect_LDADD = ${common_libs}\nqselect_SOURCES = qselect.c ${common_sources}\n\nqsig_CPPFLAGS = ${common_cflags}\nqsig_LDADD = ${common_libs}\nqsig_SOURCES = qsig.c ${common_sources}\n\nqstat_CPPFLAGS = ${common_cflags}\nqstat_LDADD = ${common_libs} \\\n\t$(top_builddir)/src/lib/Libjson/libpbsjson.la\nqstat_SOURCES = qstat.c ${common_sources}\n\nqstart_CPPFLAGS = ${common_cflags}\nqstart_LDADD = ${common_libs}\nqstart_SOURCES = qstart.c ${common_sources}\n\nqstop_CPPFLAGS = ${common_cflags}\nqstop_LDADD = ${common_libs}\nqstop_SOURCES = qstop.c ${common_sources}\n\nqsub_CPPFLAGS = ${common_cflags}\nqsub_LDADD = ${common_libs} \\\n\t\t-lssl \\\n\t\t-lcrypto\nqsub_SOURCES = qsub.c qsub_sup.c ${common_sources}\n\nqterm_CPPFLAGS = ${common_cflags}\nqterm_LDADD = ${common_libs}\nqterm_SOURCES = qterm.c ${common_sources}\n"
  },
  {
    "path": "src/cmds/mpiexec.in",
    "content": "#!/bin/sh -\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n#   This text is cited from\n#\n#\t\thttp://www.mpi-forum.org/docs/mpi-20-html/node42.htm,\n#\n# section \"4.1. 
Portable MPI Process Startup\":\n#\n# A number of implementations of MPI-1 provide a startup command for MPI\n# programs that is of the form\n#\n#     mpirun <mpirun arguments> <program> <program arguments>\n#\n# Separating the command to start the program from the program itself\n# provides flexibility, particularly for network and heterogeneous\n# implementations. For example, the startup script need not run on one of\n# the machines that will be executing the MPI program itself.\n#\n# Having a standard startup mechanism also extends the portability of MPI\n# programs one step further, to the command lines and scripts that manage\n# them. For example, a validation suite script that runs hundreds of\n# programs can be a portable script if it is written using such a standard\n# startup mechanism. In order that the ``standard'' command not be confused\n# with existing practice, which is not standard and not portable among\n# implementations, instead of mpirun MPI specifies mpiexec.\n#\n# While a standardized startup mechanism improves the usability of MPI,\n# the range of environments is so diverse (e.g., there may not even be a\n# command line interface) that MPI cannot mandate such a mechanism.\n# Instead, MPI specifies an mpiexec startup command and recommends but\n# does not require it, as advice to implementors. However, if an\n# implementation does provide a command called mpiexec, it must be of the\n# form described below.\n#\n# It is suggested that\n#\n#     mpiexec -n <numprocs> <program>\n#\n# be at least one way to start <program> with an initial MPI_COMM_WORLD\n# whose group contains <numprocs> processes. Other arguments to mpiexec\n# may be implementation-dependent.\n#\n# This is advice to implementors, rather than a required part of MPI-2. It\n# is not suggested that this be the only way to start MPI programs. 
If an\n# implementation does provide a command called mpiexec, however, it must\n# be of the form described here.\n#\n#\n# / Advice to implementors./\n#\n# Implementors, if they do provide a special startup command for MPI\n# programs, are advised to give it the following form. The syntax is\n# chosen in order that mpiexec be able to be viewed as a command-line\n# version of MPI_COMM_SPAWN (See Section Reserved Keys <node97.htm#Node97>).\n#\n# Analogous to MPI_COMM_SPAWN, we have\n#\n#\n#     mpiexec -n    <maxprocs>\n#            -soft  <        >\n#            -host  <        >\n#            -arch  <        >\n#            -wdir  <        >\n#            -path  <        >\n#            -file  <        >\n#             ...\n#            <command line>\n#\n# for the case where a single command line for the application program and\n# its arguments will suffice. See Section Reserved Keys\n# <node97.htm#Node97> for the meanings of these arguments. For the case\n# corresponding to MPI_COMM_SPAWN_MULTIPLE there are two possible formats:\n#\n# Form A:\n#\n#\n#     mpiexec { <above arguments> } : { ... } : { ... } : ... : { ... }\n#\n# As with MPI_COMM_SPAWN, all the arguments are optional. (Even the -n x\n# argument is optional; the default is implementation dependent. It might\n# be 1, it might be taken from an environment variable, or it might be\n# specified at compile time.) The names and meanings of the arguments are\n# taken from the keys in the info argument to MPI_COMM_SPAWN. There may be\n# other, implementation-dependent arguments as well.\n#\n# Note that Form A, though convenient to type, prevents colons from being\n# program arguments. Therefore an alternate, file-based form is allowed:\n#\n# Form B:\n#\n#\n#     mpiexec -configfile <filename>\n#\n# where the lines of /</filename/>/ are of the form separated by the\n# colons in Form A. 
Lines beginning with ` #' are comments, and lines may\n# be continued by terminating the partial line with `\\'.\n#\n#\n# * Example* Start 16 instances of myprog on the current or default machine:\n#\n#     mpiexec -n 16 myprog\n#\n#\n# * Example* Start 10 processes on the machine called ferrari:\n#\n#     mpiexec -n 10 -host ferrari myprog\n#\n#\n# * Example* Start three copies of the same program with different\n# command-line arguments:\n#\n#     mpiexec myprog infile1 : myprog infile2 : myprog infile3\n#\n#\n# * Example* Start the ocean program on five Suns and the atmos program on\n# 10 RS/6000's:\n#\n#     mpiexec -n 5 -arch sun ocean : -n 10 -arch rs6000 atmos\n#\n# It is assumed that the implementation in this case has a method for\n# choosing hosts of the appropriate type. Their ranks are in the order\n# specified.\n# * Example* Start the ocean program on five Suns and the atmos program on\n# 10 RS/6000's (Form B):\n#\n#     mpiexec -configfile myfile\n#\n# where myfile contains\n#\n#     -n 5  -arch sun    ocean\n#     -n 10 -arch rs6000 atmos\n#\n# (/ End of advice to implementors./)\n# ...\n# MPI-2.0 of July 18, 1997\n# HTML Generated on September 10, 2001\n\nif [ $# -eq 1 ] && [ $1 = \"--version\" ]; then\n   echo pbs_version = @PBS_VERSION@\n   exit 0\nfi\n\n#\tstartup initializations\ninit()\n{\n\tpbsconffile=${PBS_CONF_FILE:-\"@PBS_CONF_FILE@\"}\n\tif [ -r $pbsconffile ]\n\tthen\n\t\t. 
$pbsconffile\n\t\texport PBS_TMPDIR=\"${PBS_TMPDIR:-${TMPDIR:-/var/tmp}}\"\n\telse\n\t\tlogerr \"cannot read PBS configuration file \\\"$pbsconffile\\\"\"\n\t\texit 1\n\tfi\n\n\tvendor_init ${1+\"$@\"}\n\n\tconfigfile=\"\"\n\trunfile=\"`mktemp ${PBS_TMPDIR}/mpiexec_runfileXXXXXX`\"\n\ttmpconfigfile=\"`mktemp ${PBS_TMPDIR}/mpiexec_configfileXXXXXX`\"\n\ttrap \"rm -f $runfile $tmpconfigfile\" 0 1 2 3 15\n\n\tranknum=0\n\treset_rank\n\n\tif [ -n \"$PBS_MPI_DEBUG\" ]\n\tthen\n\t\tdebug=1\n\telse\n\t\tdebug=0\n\tfi\n}\n\n#\tThe purpose of this function is to find another mpiexec in the user's\n#\tpath and hand control over to it.  If executing outside of PBS, the\n#\tuser's normal PATH is used;  if within PBS, we consult the pre-PBS path\n#\t(available within PBS as $PBS_O_PATH).  In either case, we take special\n#\tprecautions to avoid attempting to re-exec ourselves.\nrempiexec()\n{\n\t#\tif PBS_ENVIRONMENT is not set, this function is called before\n\t#\tthe init() function.\n\tif [ -z \"$PBS_ENVIRONMENT\" ]\n\tthen\n\t        testPATH=$PATH\n\n\t\t# make implicit \".\" in $testPATH explicit\n\t\tprepPATH=`echo $testPATH | sed\t-e 's/^:/.:/'\t\t\\\n\t\t\t\t\t\t-e 's/:$/:./'\t\t\\\n\t\t\t\t\t\t-e 's/:::*/:.:/g'`\n\n\t\tfor component in `echo $prepPATH | tr : \" \"`\n\t\tdo\n\t\t\tif [ $component = `dirname $0` ]\n\t\t\tthen\n\t\t\t\tcontinue\n\t\t\tfi\n\t\t\tif [ -x $component/mpiexec ]\n\t\t\tthen\n\t\t\t\texec $component/mpiexec ${1+\"$@\"}\n\t\t\tfi\n\t\tdone\n\t\tlogerr \"unexpected error - no non-PBS mpiexec in PATH\"\n\telse\n\t\ttestPATH=$PBS_O_PATH\n\t\tpbsbindirID=\"`filestat $PBS_EXEC/bin`\"\n\n\t\t# make implicit \".\" in $testPATH explicit\n\t\tprepPATH=`echo $testPATH | sed\t-e 's/^:/.:/'\t\t\\\n\t\t\t\t\t\t-e 's/:$/:./'\t\t\\\n\t\t\t\t\t\t-e 's/:::*/:.:/g'`\n\n\t\tfor component in `echo $prepPATH | tr : \" \"`\n\t\tdo\n\t\t\t#\tCheck to see whether . 
is $PBS_EXEC/bin\n\t\t\tif [ $component = \".\" ]\n\t\t\tthen\n\t\t\t\tif [ \"`filestat .`\" = $pbsbindirID ]\n\t\t\t\tthen\n\t\t\t\t\tcontinue\n\t\t\t\tfi\n\t\t\telse\n\t\t\t\tif [ $component = \"$PBS_EXEC/bin\" ]\n\t\t\t\tthen\n\t\t\t\t\tcontinue\n\t\t\t\tfi\n\t\t\tfi\n\t\t\tif [ -x $component/mpiexec ]\n\t\t\tthen\n\t\t\t\texec $component/mpiexec ${1+\"$@\"}\n\t\t\tfi\n\t\tdone\n\t\tlogerr \"unexpected error - no non-PBS mpiexec in PBS_O_PATH\"\n\tfi\n\n\texit 1\n}\n\nfilestat()\n{\n\tif [ $# -ne 1 ]\n\tthen\n\t\tlogerr \"filestat:  unexpected internal error ($#)\"\n\t\texit 1\n\telse\n\t\t#\tCheck for GNU stat(1) command, which allows us to\n\t\t#\treturn a tag (the file's serial number on a given\n\t\t#\tdevice) more likely to be unique than the serial\n\t\t#\tnumber alone.\n\t\tif type stat > /dev/null 2>&1\n\t\tthen\n\t\t\tstatformat=\"%d:%i\"\n\t\t\tstat -c $statformat $1\n\t\telse\n\t\t\t#\tSigh - best we can do under the circumstances\n\t\t\tls -id $1 | awk '{print $1}'\n\t\tfi\n\tfi\n}\n\n#\treset per-rank settings\nreset_rank()\n{\n\tmaxprocs=`wc -l $PBS_NODEFILE | cut -f1 -d' '`\n\tarch=\"\"\n\tfile=\"\"\n\thost=\"\"\n\tpath=\"\"\n\tprog=\"\"\n\tprogargs=\"\"\n\tset=\"\"\n\twdir=\"\"\n}\n\nusage()\n{\n\tprintf \"Usage:\\n\"\n\tprintf \"\\t%s\\n\" \"mpiexec -n    <maxprocs>\"\n\tprintf \"\\t%s\\n\" \"-soft  <        >\"\n\tprintf \"\\t%s\\n\" \"-host  <        >\"\n\tprintf \"\\t%s\\n\" \"-arch  <        >\"\n\tprintf \"\\t%s\\n\" \"-wdir  <        >\"\n\tprintf \"\\t%s\\n\" \"-path  <        >\"\n\tprintf \"\\t%s\\n\" \"-file  <        >\"\n\tprintf \"\\t%s\\n\" \"...\"\n\tprintf \"\\t%s\\n\" \"<command line>\"\n\tprintf \"or\\n\"\n\tprintf \"\\t%s\\n\" \"mpiexec <args as above> [ : <args as above> ... 
]\"\n\tprintf \"or\\n\"\n\tprintf \"\\t%s\\n\" \"mpiexec -configfile <filename>\"\n\tprintf \"\\t%s\\n\" \"mpiexec --version\"\n\n\texit 1\n}\n\nlogerr()\n{\n\tprintf \"%s:  %s\\n\" $MyName ${1+\"$@\"} > /dev/stderr\n}\n\nevalsimpleargs()\n{\n\tif [ $# -le 1 ]\n\tthen\n\t\tif [ $# -eq 1 ]\n\t\tthen\n\t\t\tlogerr \"$1:  argument expected\"\n\t\tfi\n\t\tusage\n\telse\n\t\topt=\"$1\"\n\t\targ=\"$2\"\n\tfi\n\n\tcase \"$opt\" in\n\t\t\"-n\")\t\tmaxprocs=$arg\t;;\n\t\t\"-soft\")\tset=$arg\t;;\t# unimplemented?\n\t\t\"-host\")\thost=$arg\t;;\n\t\t\"-arch\")\tarch=$arg\t;;\n\t\t\"-wdir\")\twdir=$arg\t;;\n\t\t\"-path\")\tpath=$arg\t;;\n\t\t\"-file\")\tfile=$arg\t;;\n\t\t*)\t\tlogerr \"internal error - option \\\"$opt\\\"\"\n\t\t\t\texit 1\n\t\t\t\t;;\n\tesac\n}\n\n#\tdebugging hook\nprintargs()\n{\n\tprintf \"%s:\\n\" $MyName\n\tprintf \"\\tmaxprocs:  %s\\n\" $maxprocs\n\tprintf \"\\tsoft:  %s\\n\" $soft\n\tprintf \"\\thost:  %s\\n\" $host\n\tprintf \"\\tarch:  %s\\n\" $arch\n\tprintf \"\\twdir:  %s\\n\" $wdir\n\tprintf \"\\tpath:  %s\\n\" $path\n\tprintf \"\\tfile:  %s\\n\" $file\n\tprintf \"\\tprog:  %s\\n\" $prog\n\tprintf \"\\targs:  %s\\n\" $progargs\n\tprintf \"\\tconfigfile:  %s\\n\" $configfile\n}\n\ndorank()\n{\n\tline=\"\"\n\t[ -n \"$maxprocs\" ] && line=\"$line -n $maxprocs\"\n\t[ -n \"$soft\" ] && line=\"$line -soft $soft\"\n\t[ -n \"$host\" ] && line=\"$line -host $host\"\n\t[ -n \"$arch\" ] && line=\"$line -arch $arch\"\n\t[ -n \"$wdir\" ] && line=\"$line -wdir $wdir\"\n\t[ -n \"$path\" ] && line=\"$line -path $path\"\n\t[ -n \"$file\" ] && line=\"$line -file $file\"\n\t[ -n \"$prog\" ] && line=\"$line $prog\"\n\t[ -n \"$progargs\" ] && line=\"$line $progargs\"\n\n\techo \"$line\" >> $tmpconfigfile\n\treset_rank\n}\n\n#\tThis function is passed the script's initial arguments (via init()).\n#\tIt does any necessary vendor-specific initializations, including\n#\tdetermining whether this is a supported platform.\n#\n#\tCurrently the only supported platforms are 
Altix systems running\n#\teither the SGI MPI bundle of Performance Suite or older SGIs with\n#\tProPack version 4 or greater.  On Altix systems with an earlier\n#\tversion of ProPack, we complain about an unsupported version.  On\n#\tnon-Altix systems, we assume we were invoked by mistake and use\n#\tthe value of PBS_O_PATH to search for the correct version of mpiexec\n#\tto execute.\nvendor_init()\n{\n\tsupported_platform=0\n\n\tsgi_release=\"/etc/sgi-release\"\n\tsgi_compute_node=\"/etc/sgi-compute-node-release\"\n\tsgi_service_node=\"/etc/sgi-service-node-release\"\n\n\tif [ -f $sgi_release -o -f $sgi_compute_node -o -f $sgi_service_node ]\n\tthen\n\t\tsgiinit\n\t\tsupported_platform=1\n\tfi\n\n\tif [ $supported_platform -eq 0 ]\n\tthen\n\t\trempiexec ${1+\"$@\"}\n\tfi\n}\n\n#\tThis function is passed as its argument an mpiexec-style configuration\n#\tfile and is expected to reformat it into a format suitable for native\n#\tconsumption by the vendor's MPI infrastructure.\nvendor_config()\n{\n\tif [ $# -ne 1 ]\n\tthen\n\t\tusage\n\tfi\n\n\tsgiconfig ${1+\"$@\"}\n}\n\n#\tThis function takes no arguments.  It causes the native-format MPI job\n#\tconstructed by vendor_config() to be executed.\nvendor_run()\n{\n\tsgimpirun\n}\n\n# Handle initialization of an SGI system based on either the chkfeature\n# utility or, as a fallback, the presence of various text files on an\n# older ProPack system.\nsgiinit()\n{\n\tPATH=$PATH:/usr/sbin\t# ensure access to chkfeature CLI if present\n\tif type chkfeature > /dev/null 2>&1\n\tthen\n\t    #\tIf MPT is present, make sure it's loaded so we can exec mpirun\n\t    if chkfeature -p sgi-mpt > /dev/null 2>&1\n\t    then\n\t\tmodule load mpt > /dev/null 2>&1\n\t    else\n\t\t#\tchkfeature says MPT is not present.  
If there is an\n\t\t#\tmpirun in our path, we assume (as we did before) that\n\t\t#\tit's the SGI one.\n\t\tif type mpirun > /dev/null 2>&1\n\t\tthen\n\t\t    :\n\t\telse\n\t\t    logerr \"unexpected error - MPT mpirun unavailable\"\n\t\t    exit 1\n\t\tfi\n\t    fi\n\telse\n\t    #\tno chkfeature - older ProPack\n\t    sgippinit\n\tfi\n}\n\n#\tFor the release into which this change is targeted, only the Altix\n#\tshould use the PBS version of mpiexec.  Therefore, our mpiexec will\n#\tdetermine whether it is executing on an SGI system running ProPack 4\n#\tor greater.  This is accomplished by examining the /etc/sgi-release\n#\tfile to look for a string of the form\n#\n#\t\t\"SGI ProPack N ...\"\n#\n#\twhere N is an integer greater than or equal to 4.  There are three\n#\tcases to consider:\n#\n#\t\t-  if the file does not exist, \tPBS's mpiexec assumes that it\n#\t\t   was the unintentional recipient of control and searches the\n#\t\t   user's pre-PBS path, whose value is found in the PBS_O_PATH\n#\t\t   environment variable, for mpiexec.  If one is found, we exec\n#\t\t   it;  otherwise, an appropriate error message is displayed\n#\t\t   and we exit with an error\n#\n#\t\t-  if the file does exist but its format is not what we expect\n#\t\t   to find, or N is less than 4, an appropriate error message\n#\t\t   is displayed and we exit with an error\n#\n#\t\t-  otherwise, we proceed with normal execution\nsgippinit()\n{\n\tif ! 
sgicheckppversion\n\tthen\n\t\tlogerr \"unexpected error - sgicheckppversion returned $?\"\n\t\texit 1\n\tfi\n}\n\nsgicheckppversion()\n{\n\tsgi_release=\"/etc/sgi-release\"\n\tsgi_compute_node=\"/etc/sgi-compute-node-release\"\n\tsgi_ppversmin=4\n\tif [ -r $sgi_release ]\n\tthen\n\t\tread sgi propack propackvers rest < $sgi_release\n\t\tif [ \"$sgi\" != \"SGI\" -o \"$propack\" != \"ProPack\" ]\n\t\tthen\n\t\t\tlogerr \"$sgi_release:  unexpected contents\"\n\t\t\texit 1\n\t\tfi\n\t\tif [ `expr substr $propackvers 1 1` -lt $sgi_ppversmin ]\n\t\tthen\n\t\t\tlogerr \"ProPack version $propackvers is unsupported\"\n\t\t\texit 1\n\t\telse\n\t\t\treturn 0\n\t\tfi\n\telif [ -r $sgi_compute_node ]\n\tthen\n\t\tif grep \"Build 5\" $sgi_compute_node > /dev/null\n\t\tthen\n\t\t\treturn 0\n\t\tfi\n\telse\n\t\treturn 1\n\tfi\n}\n\n#\tTranslate the mpiexec-style configuration file into an mpirun-style\n#\tone ...\nsgiconfig()\n{\n\tif [ $debug -eq 1 ]\n\tthen\n\t\treport_config \"$1\"\n\tfi\n\n\tPBS_LIB_PATH=${PBS_EXEC}/lib\n\tif [ ! -d ${PBS_LIB_PATH} -a -d ${PBS_EXEC}/lib64 ] ; then\n\t\tPBS_LIB_PATH=${PBS_EXEC}/lib64\n\tfi\n\n\tawk -f ${PBS_LIB_PATH}/MPI/sgiMPI.awk\t-v configfile=\"$1\"\t\\\n\t\t\t\t\t\t-v runfile=\"$runfile\"\t\\\n\t\t\t\t\t\t-v pbs_exec=\"$PBS_EXEC\"\t\\\n\t\t\t\t\t\t-v debug=$debug\n}\n\n#\t... 
and execute it.\nsgimpirun()\n{\n\tif [ $debug -eq 1 ]\n\tthen\n\t\treport_run\n\tfi\n\n\t# The Performance Suite version of mpirun needs to be told\n\t# that it ought not complain when we exec it via this script.\n\t# Don't be confused by the name - it's simply a directive to\n\t# the SGI mpirun command to assert that mpirun need not worry\n\t# its pretty little head about the absence of a pbs_attach\n\t# command among its command line arguments.\n\texport MPI_IGNORE_PBS=1\n\n\ta_opt=''\n\tif [ -n \"$PBS_MPI_SGIARRAY\" ]\n\tthen\n\t\ta_opt=\"-a $PBS_MPI_SGIARRAY\"\n\tfi\n\n\tmpirun $a_opt -f $runfile\n}\n\n#\tdebugging hooks\nreport_config()\n{\n\techo \"generated mpiexec configuration file ($1) contains\"\n\tcat \"$1\" | sed -e 's/^/\\t/'\n}\nreport_run()\n{\n\tif [ -n \"$PBS_MPI_SGIARRAY\" ]\n\tthen\n\t\techo \"mpirun -a $PBS_MPI_SGIARRAY -f $runfile\"\n\telse\n\t\techo \"mpirun -f $runfile\"\n\tfi\n\techo \"where $runfile contains:\"\n\tcat $runfile | sed -e 's/^/\\t/'\n}\n\nMyName=\"`basename $0`\"\t\t\t\t\t# must occur first\n\n[ -z \"$PBS_ENVIRONMENT\" ] && rempiexec ${1+\"$@\"}\t# no work for us to do\n\ninit ${1+\"$@\"}\n\nin_rankdef=0\nwhile [ $# -gt 0 ]\ndo\n\tcase \"$1\" in\n\t\t\"-n\"|\"-soft\"|\"-host\"|\"-arch\"|\"-wdir\"|\"-path\"|\"-file\")\n\t\t\tin_rankdef=1\n\t\t\tevalsimpleargs ${1+\"$@\"}\n\t\t\tshift 2\n\t\t\t;;\n\t\t\"-configfile\")\n\t\t\tif [ $in_rankdef -eq 1 ]\n\t\t\tthen\n\t\t\t\tlogerr \"-configfile in rank definition\"\n\t\t\t\texit 1\n\t\t\tfi\n\t\t\t# first \"-configfile\" option terminates argument parsing\n\t\t\tshift\n\t\t\tconfigfile=\"$1\"\n\t\t\tvendor_config $configfile && vendor_run\n\t\t\texit $?\n\t\t\t;;\n\t\t\":\")\n\t\t\tin_rankdef=0\n\t\t\tshift\n\t\t\tevalsimpleargs ${1+\"$@\"}\n\t\t\twhile [ $# -gt 0 -a \"$1\" != \":\" ]\n\t\t\tdo\n\t\t\t\tshift\n\t\t\tdone\n\t\t\t;;\n\t\t*)\n\t\t\tprog=\"$1\"\n\t\t\tshift\n\t\t\twhile [ $# -gt 0 ]\n\t\t\tdo\n\t\t\t\tif [ \"$1\" = \":\" 
]\n\t\t\t\tthen\n\t\t\t\t\tin_rankdef=0\n\t\t\t\t\tdorank\n\t\t\t\t\tbreak\n\t\t\t\telse\n\t\t\t\t\tprogargs=\"$progargs $1\"\n\t\t\t\tfi\n\t\t\t\tshift\n\t\t\tdone\n\t\t\tif [ $# -gt 0 ]\n\t\t\tthen\n\t\t\t\tif [ `expr substr \"$1\" 1 1` = \":\" ]\n\t\t\t\tthen\n\t\t\t\t\tshift\n\t\t\t\tfi\n\t\t\telse\n\t\t\t\tranknum=`expr $ranknum + 1`\n\t\t\t\tin_rankdef=0\n\t\t\t\tdorank\n\t\t\tfi\n\t\t\t;;\n\tesac\ndone\n\nif [ $in_rankdef -eq 1 -a -z \"$prog\" ]\nthen\n\tlogerr \"rank $ranknum has no executable\"\n\texit 1\nelse\n\tvendor_config $tmpconfigfile && vendor_run\nfi\n"
  },
  {
    "path": "src/cmds/pbs_attach.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_attach.c\n * @brief\n * pbs_attach - attach a session to a job.\n *\n */\n#include <pbs_config.h>\n\n#include \"cmds.h\"\n#include \"pbs_version.h\"\n\nextern char *getoptargstr;\n\nextern void usage(char *);\n\nextern void attach(int use_cmd, int newsid, int port, int doparent, pid_t pid, char *jobid, char *host, int argc, char *argv[]);\n\nint\nmain(int argc, char *argv[])\n{\n\tchar *jobid = NULL;\n\tchar *host = NULL;\n\tint c;\n\tint newsid = 0;\n\tint port = 0;\n\tint err = 0;\n\tint use_cmd = FALSE; /* spawn the process using a new cmd shell */\n\textern char *optarg;\n\textern int optind;\n\tpid_t pid = 0;\n\tchar *end;\n\tint doparent = 0;\n\n\t/*test for real deal or just version and exit*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\twhile ((c = getopt(argc, argv, getoptargstr)) != EOF) {\n\t\tswitch (c) {\n\t\t\tcase 'j':\n\t\t\t\tjobid = optarg;\n\t\t\t\tbreak;\n\n\t\t\tcase 'p':\n\t\t\t\tpid = strtol(optarg, &end, 10);\n\t\t\t\tif (pid <= 0 || *end != '\\0') {\n\t\t\t\t\tfprintf(stderr, \"bad pid: %s\\n\", optarg);\n\t\t\t\t\terr = 1;\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase 'P':\n\t\t\t\tdoparent = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 'h':\n\t\t\t\thost = optarg;\n\t\t\t\tbreak;\n\n\t\t\tcase 'c':\n\t\t\t\tuse_cmd = TRUE;\n\t\t\t\tbreak;\n\n\t\t\tcase 'm':\n\t\t\t\tport 
= strtol(optarg, &end, 10);\n\t\t\t\tif (port <= 0 || *end != '\\0') {\n\t\t\t\t\tfprintf(stderr, \"bad port: %s\\n\", optarg);\n\t\t\t\t\terr = 1;\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase 's':\n\t\t\t\tnewsid = 1;\n\t\t\t\tbreak;\n\n\t\t\tdefault:\n\t\t\t\terr = 1;\n\t\t\t\tbreak;\n\t\t}\n\t}\n\n\tif (pid != 0) {\n\t\tif (newsid) {\n\t\t\tfprintf(stderr, \"cannot specify pid and session\\n\");\n\t\t\terr = 1;\n\t\t}\n\t\tif (doparent) {\n\t\t\tfprintf(stderr, \"cannot specify pid and parent\\n\");\n\t\t\terr = 1;\n\t\t}\n\t\tif (optind < argc) {\n\t\t\tfprintf(stderr, \"cannot specify pid and command\\n\");\n\t\t\terr = 1;\n\t\t}\n\t} else if (optind == argc) {\n\t\tfprintf(stderr, \"must specify pid or command\\n\");\n\t\terr = 1;\n\t}\n\n\tif (err)\n\t\tusage(argv[0]);\n\n\tif (port == 0) {\n\t\tpbs_loadconf(0);\n\t\tport = pbs_conf.manager_service_port;\n\t}\n\n\tattach(use_cmd, newsid, port, doparent, pid, jobid, host, argc, argv);\n\n\treturn 0;\n}\n"
  },
  {
    "path": "src/cmds/pbs_attach_sup.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_attach_sup.c\n * @brief\n * supporting file for pbs_attach.c\n *\n */\n\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <sys/wait.h>\n\n#include \"cmds.h\"\n#include \"tm.h\"\n\nextern char *get_ecname(int rc);\n\nchar *getoptargstr = \"+j:p:h:m:sP\";\n/**\n * @brief\n * \tdisplays how to use pbs_attach command\n *\n * @param[in] id - command name i.e pbs_attach\n *\n * @return Void\n *\n */\nvoid\nusage(char *id)\n{\n\tfprintf(stderr, \"usage: %s [-j jobid] [-m port] -p pid\\n\", id);\n\tfprintf(stderr, \"usage: %s [-j jobid] [-m port] [-P] [-s] cmd [arg1 ...]\\n\", id);\n\tfprintf(stderr, \"usage: %s --version\\n\", id);\n\texit(2);\n}\n\n/**\n * @brief\n *\tattach the process session to a job via TM\n *\n * @param[in] use_cmd : if TRUE, launch the process using a new command shell. 
(Not used)\n * @param[in] newsid : if TRUE, create a new process group for the newly spawned process\n * @param[in] port : port to connect to Mom\n * @param[in] doparent : if non-zero, attach the parent pid\n * @param[in] jobid : job id\n * @param[in] host : name of the local host\n * @param[in] argc : number of command line arguments for attach request\n * @param[in] argv : command line arguments for attach request\n *\n * @return void\n *\n */\nvoid\nattach(int use_cmd, int newsid, int port, int doparent, pid_t pid, char *jobid, char *host, int argc, char *argv[])\n{\n\tchar *cookie = NULL;\n\ttm_task_id tid;\n\tint rc = 0;\n\n\tif (newsid) {\n\t\tif ((pid = fork()) == -1) {\n\t\t\tperror(\"pbs_attach: fork\");\n\t\t\texit(1);\n\t\t} else if (pid > 0) { /* parent */\n\t\t\tint status;\n\n\t\t\tif (wait(&status) == -1) {\n\t\t\t\tperror(\"pbs_attach: wait\");\n\t\t\t\texit(1);\n\t\t\t}\n\t\t\tif (WIFEXITED(status))\n\t\t\t\texit(WEXITSTATUS(status));\n\t\t\telse\n\t\t\t\texit(2);\n\t\t}\n\t\tif (setsid() == -1) {\n\t\t\tperror(\"pbs_attach: setsid\");\n\t\t\texit(1);\n\t\t}\n\t}\n\n\tif (pid == 0)\n\t\tpid = getpid();\n\n\t/*\n\t **\tDo the attach.\n\t */\n\trc = tm_attach(jobid, cookie, pid, &tid, host, port);\n\n\t/*\n\t **\tIf an error other than \"session already attached\" is returned,\n\t **\tcomplain and return failure.\n\t */\n\tif ((rc != TM_SUCCESS) && (rc != TM_ESESSION)) {\n\t\tfprintf(stderr, \"%s: tm_attach: %s\\n\", argv[0], get_ecname(rc));\n\t\texit(1);\n\t}\n\t/*\n\t **\tOptional attach of the parent pid.\n\t */\n\tif (doparent) {\n\t\tpid = getppid();\n\t\trc = tm_attach(jobid, cookie, pid, &tid, host, port);\n\t\tif ((rc != TM_SUCCESS) && (rc != TM_ESESSION)) {\n\t\t\tfprintf(stderr, \"%s: tm_attach parent: %s\\n\", argv[0], get_ecname(rc));\n\t\t}\n\t}\n\n\tif (optind < argc) {\n\t\t/*\n\t\t ** Put MPICH_PROCESS_GROUP into the environment so some\n\t\t ** installations of MPICH will not call setsid() and escape\n\t\t ** the new task.\n\t\t 
*/\n\t\t(void) setenv(\"MPICH_PROCESS_GROUP\", \"no\", 1);\n\n\t\targv += optind;\n\t\targc -= optind;\n\n\t\texecvp(argv[0], argv);\n\t\tperror(argv[0]);\n\t\texit(255); /* reached only if execvp fails */\n\t}\n\texit(0);\n}\n"
  },
  {
    "path": "src/cmds/pbs_dataservice.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include <pbs_version.h>\n#include <assert.h>\n#include <pwd.h>\n#include <stdio.h>\n#include <unistd.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <fcntl.h>\n#include <libgen.h>\n#include <dirent.h>\n#include <errno.h>\n#include \"libpbs.h\"\n#include \"portability.h\"\n#include \"ticket.h\"\n#include \"server_limits.h\"\n#include \"pbs_db.h\"\n\n/**\n * @brief\n *\tThe main function in C - entry point\n *\n * @param[in]  argc - argument count\n * @param[in]  argv - pointer to argument array\n *\n * @return  int\n * @retval  0 - success\n * @retval  !0 - error\n */\nint\nmain(int argc, char *argv[])\n{\n\tint rc;\n\tint i;\n\tint errflg = 0;\n\tchar prog[] = \"pbs_dataservice\";\n\tchar sopt[256] = {'\\0'}; /* initialized so a missing -s option falls through to the usage message */\n\tchar conn_db_host[PBS_MAXSERVERNAME + 1];\n\tchar *errmsg = NULL;\n\n\t/* test for real deal or just version and exit */\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\twhile ((i = getopt(argc, argv, \"s:\")) != EOF) {\n\t\tswitch (i) {\n\t\t\tcase 's':\n\t\t\t\tpbs_strncpy(sopt, optarg, sizeof(sopt));\n\t\t\t\tbreak;\n\t\t\tcase '?':\n\t\t\tdefault:\n\t\t\t\terrflg++;\n\t\t}\n\t}\n\tif (errflg) {\n\t\tfprintf(stderr, \"\\nusage: %s -s [start|stop|status]\\n\", prog);\n\t\treturn (-1);\n\t}\n\t/* read configuration file */\n\tif (pbs_loadconf(0) == 0) 
{\n\t\tfprintf(stderr, \"%s: Could not load pbs configuration\\n\", prog);\n\t\treturn (-1);\n\t}\n\n\t/* check admin privileges */\n\tif (getuid() != 0 || geteuid() != 0) {\n\t\tfprintf(stderr, \"%s: Must be run by root\\n\", prog);\n\t\treturn (1);\n\t}\n\n\tif (pbs_conf.pbs_data_service_host)\n\t\tpbs_strncpy(conn_db_host, pbs_conf.pbs_data_service_host, PBS_MAXSERVERNAME);\n\telse\n\t\tpbs_strncpy(conn_db_host, pbs_default(), PBS_MAXSERVERNAME);\n\n\tif (strcmp(sopt, PBS_DB_CONTROL_START) == 0) {\n\t\trc = pbs_start_db(conn_db_host, pbs_conf.pbs_data_service_port);\n\t\tif (rc == PBS_DB_OOM_ERR)\n\t\t\trc = 0; /* Ignore OOM access error */\n\t} else if (strcmp(sopt, PBS_DB_CONTROL_STOP) == 0) {\n\t\trc = pbs_stop_db(conn_db_host, pbs_conf.pbs_data_service_port);\n\t} else if (strcmp(sopt, PBS_DB_CONTROL_STATUS) == 0) {\n\t\trc = pbs_status_db(conn_db_host, pbs_conf.pbs_data_service_port);\n\t\tif (rc) {\n\t\t\tpbs_db_get_errmsg(PBS_DB_ERR, &errmsg);\n\t\t\tif (errmsg) {\n\t\t\t\tfprintf(stderr, \"%s: %s\", prog, errmsg);\n\t\t\t\tfree(errmsg);\n\t\t\t}\n\t\t}\n\t} else {\n\t\tfprintf(stderr, \"\\nusage: %s -s [start|stop|status]\\n\", prog);\n\t\treturn -1;\n\t}\n\treturn (rc);\n}\n"
  },
  {
    "path": "src/cmds/pbs_demux.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file pbs_demux.c\n * @brief\n * pbs_demux - handle I/O from multiple node job\n *\n *\tStandard Out and Standard Error of each task is bound to\n *\tstream sockets connected to pbs_demux which inputs from the\n *\tvarious streams and writes to the JOB's out and error.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include <pbs_version.h>\n\n#include <sys/types.h>\n#include <sys/time.h>\n#include <sys/socket.h>\n#include <netinet/in.h>\n#include <netdb.h>\n#include <errno.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <string.h>\n#include <signal.h>\n\n#if defined(FD_SET_IN_SYS_SELECT_H)\n#include <sys/select.h>\n#endif\n\n#include \"cmds.h\"\n\nenum rwhere { invalid,\n\t      new_out,\n\t      new_err,\n\t      old_out,\n\t      old_err };\nstruct routem {\n\tenum rwhere r_where;\n\tshort r_nl;\n\tshort r_first;\n};\nfd_set readset;\nchar *cookie = 0;\n\n/**\n * @brief\n *\tread data from socket\n *\n * @param[in] sock - socket\n * @param[in] prm  - routem structure pointer\n *\n * @return - Void\n *\n */\nvoid\nreadit(int sock, struct routem *prm)\n{\n\tint amt;\n\tchar buf[256];\n\tFILE *fil;\n\tint i;\n\tchar *pc;\n\n\tif (prm->r_where == old_out)\n\t\tfil = stdout;\n\telse\n\t\tfil = stderr;\n\n\ti = 0;\n\tif ((amt = read(sock, buf, 256)) > 0) {\n\t\tif (prm->r_first 
== 1) {\n\n\t\t\t/* first data on connection must be the cookie to validate it */\n\n\t\t\ti = strlen(cookie);\n\t\t\tif (strncmp(buf, cookie, i) != 0) {\n\t\t\t\t(void) close(sock);\n\t\t\t\tprm->r_where = invalid;\n\t\t\t\tFD_CLR(sock, &readset);\n\t\t\t\treturn; /* discard data from a connection that failed cookie validation */\n\t\t\t}\n\t\t\tprm->r_first = 0;\n\t\t}\n\t\tfor (pc = buf + i; pc < buf + amt; ++pc) {\n#ifdef DEBUG\n\t\t\tif (prm->r_nl) {\n\t\t\t\tfprintf(fil, \"socket %d: \", sock);\n\t\t\t\tprm->r_nl = 0;\n\t\t\t}\n#endif /* DEBUG */\n\t\t\tputc(*pc, fil);\n\t\t\tif (*pc == '\\n') {\n\t\t\t\tprm->r_nl = 1;\n\t\t\t\tfflush(fil);\n\t\t\t}\n\t\t}\n\t} else {\n\t\tclose(sock);\n\t\tprm->r_where = invalid;\n\t\tFD_CLR(sock, &readset);\n\t}\n\treturn;\n}\n\nint\nmain(int argc, char *argv[])\n{\n\tstruct timeval timeout;\n\tint i;\n\tint maxfd;\n\tint main_sock_out = 3;\n\tint main_sock_err = 4;\n\tint n;\n\tint newsock;\n\tpid_t parent;\n\tfd_set selset;\n\tstruct routem *routem;\n\n\t/*test for real deal or just version and exit*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tparent = getppid();\n\tcookie = getenv(\"PBS_JOBCOOKIE\");\n\tif (cookie == 0) {\n\t\tfprintf(stderr, \"%s: no PBS_JOBCOOKIE found in the env\\n\",\n\t\t\targv[0]);\n\t\texit(3);\n\t}\n#ifdef DEBUG\n\tprintf(\"Cookie found in environment: %s\\n\", cookie);\n#endif\n\n\tmaxfd = sysconf(_SC_OPEN_MAX);\n\troutem = (struct routem *) malloc(maxfd * sizeof(struct routem));\n\tif (routem == NULL) {\n\t\tfprintf(stderr, \"%s: out of memory\\n\", argv[0]);\n\t\texit(2);\n\t}\n\tfor (i = 0; i < maxfd; ++i) {\n\t\t(routem + i)->r_where = invalid;\n\t\t(routem + i)->r_nl = 1;\n\t\t(routem + i)->r_first = 0;\n\t}\n\t(routem + main_sock_out)->r_where = new_out;\n\t(routem + main_sock_err)->r_where = new_err;\n\n\tFD_ZERO(&readset);\n\tFD_SET(main_sock_out, &readset);\n\tFD_SET(main_sock_err, &readset);\n\n\tif (listen(main_sock_out, 5) < 0) {\n\t\tperror(\"listen on out\");\n\t\texit(5);\n\t}\n\n\tif (listen(main_sock_err, 5) < 0) {\n\t\tperror(\"listen on err\");\n\t\texit(5);\n\t}\n\n\twhile (1) {\n\n\t\tselset = readset;\n\t\ttimeout.tv_usec = 0;\n\t\ttimeout.tv_sec = 10;\n\n\t\tn = select(FD_SETSIZE, &selset, NULL, NULL, &timeout);\n\t\tif (n == -1) {\n\t\t\tif (errno == EINTR) {\n\t\t\t\tn = 0;\n\t\t\t} else {\n\t\t\t\tfprintf(stderr, \"%s: select failed\\n\", argv[0]);\n\t\t\t\texit(1);\n\t\t\t}\n\t\t} else if (n == 0) {\n\t\t\tif (kill(parent, 0) == -1) {\n#ifdef DEBUG\n\t\t\t\tfprintf(stderr, \"%s: Parent has gone, and so do I\\n\",\n\t\t\t\t\targv[0]);\n#endif /* DEBUG */\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\n\t\tfor (i = 0; n && i < maxfd; ++i) {\n\t\t\tif (FD_ISSET(i, &selset)) { /* this socket has data */\n\t\t\t\tn--;\n\t\t\t\tswitch ((routem + i)->r_where) {\n\t\t\t\t\tcase new_out:\n\t\t\t\t\tcase new_err:\n\t\t\t\t\t\tnewsock = accept(i, 0, 0);\n\t\t\t\t\t\tif (newsock == -1)\n\t\t\t\t\t\t\tbreak; /* accept failed; nothing to register */\n\t\t\t\t\t\t(routem + newsock)->r_where = (routem + i)->r_where == new_out ? old_out : old_err;\n\t\t\t\t\t\tFD_SET(newsock, &readset);\n\t\t\t\t\t\t(routem + newsock)->r_first = 1;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tcase old_out:\n\t\t\t\t\tcase old_err:\n\t\t\t\t\t\treadit(i, routem + i);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tdefault:\n\t\t\t\t\t\tfprintf(stderr, \"%s: internal error\\n\", argv[0]);\n\t\t\t\t\t\texit(2);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\tfree(routem);\n\treturn 0;\n}\n"
  },
  {
    "path": "src/cmds/pbs_ds_password.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    pbs_ds_password.c\n *\n * @brief\n *      This is a tool to allow the admin to change the database password.\n *\tThis file uses the Libaes (AES) encryption to encrypt the chosen\n *\tpassword to file $PBS_HOME/server_priv/db_password.\n *\n * @par\tThis tool has two modes.\n *\t-r - No password is asked from the user. A random password is generated\n *\tand set to the database, then the password is encrypted using AES\n *\tencryption and stored in the above mentioned location. This option\n *\tis used by the PBS installer to generate and set an initial password\n *\tfor the database.\n *\n *\t-C <username>- Change the data-service account name that PBS uses to access\n *\tthe data-service. If the user name specified is different from what is listed\n *\tin pbs.conf file, then pbs_ds_password asks the user to confirm whether\n *\the/she really intends to change the data-service user. On Unix, the user-name\n *\tsupplied must be an existing non-root system user. pbs_ds_password will\n *\tcheck to ensure that the user is non-root.\n *\tIf the admin wishes to change the data-service user, then pbs_ds_password\n *\twill also prompt the user to enter the password to be set for this new user.\n *\tpbs_ds_password then creates the new user as a superuser in the database,\n *\tand sets the chosen password. 
It then updates the db_usr file in\n *\tserver_priv with the new data service user name. On Unix,\n *\tpbs_ds_password displays a reminder to the user to run \"pbs_probe -f \"\n *\tcommand to \"fix\" the change in ownership of the files related to the data-service.\n *\n *\tNo options: This is the interactive mode. In this mode, the tool asks the\n *\tuser to enter a password twice. If both the passwords match then the tool\n *\tsets the password to the database and stores the encrypted password\n *\tin the above mentioned location.\n *\n *\tChanges can be made only when the pbs data-service is running. This\n *\tcan be done when pbs_server is running (which means data-service is also\n *\trunning), or if pbs_server is down, the admin can start the data-service and\n *\tthen run this command.\n *\n *\tThis tool uses the usual way to connect to the database, which means to\n *\tchange the database it has to first authenticate with the database with the\n *\tcurrently set password. The connect_db function it calls, automatically\n *\tuses the current password from $PBS_HOME/server_priv/db_password\n *\tto connect to the database.\n *\n *\tThe tool attempts to connect to the data-service running on the localhost\n *\tonly. Thus this tool can be used only from the same host that is running the\n *\tpbs_dataservice. 
(For example, in failover scenario, this tool needs to be\n *\tinvoked from the same host which is currently running the data-service)\n *\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include <pbs_version.h>\n#include <assert.h>\n#include <pwd.h>\n#include <stdio.h>\n#include <unistd.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <fcntl.h>\n#include <libgen.h>\n#include <dirent.h>\n#include <errno.h>\n\n#include \"libpbs.h\"\n#include \"portability.h\"\n\n#include \"ticket.h\"\n\n#include \"server_limits.h\"\n#include \"pbs_db.h\"\n\n#ifndef LOGIN_NAME_MAX\n#define LOGIN_NAME_MAX 256\n#endif\n\nint cred_type;\nsize_t cred_len;\nchar *cred_buf = NULL;\n\nint started_db = 0;\nvoid *conn = NULL;\nstruct passwd *pwent;\nchar pwd_file_new[MAXPATHLEN + 1];\nchar conn_db_host[PBS_MAXSERVERNAME + 1];\n\nextern unsigned char pbs_aes_key[][16];\nextern unsigned char pbs_aes_iv[][16];\n\n/**\n * @brief\n *\tAt exit handler to close database connection,\n *\tstop database if this program had started it,\n *\tand to remove the temp password file, if it\n *\twas created\n *\n * @return\tVoid\n *\n */\nstatic void\ncleanup()\n{\n\tchar *db_err = NULL;\n\n\tif (pwd_file_new[0] != 0)\n\t\tunlink(pwd_file_new);\n\n\tif (conn != NULL) {\n\t\tpbs_db_disconnect(conn);\n\t\tconn = NULL;\n\t}\n\n\tif (started_db == 1) {\n\t\tif (pbs_stop_db(conn_db_host, pbs_conf.pbs_data_service_port) != 0) {\n\t\t\tfprintf(stderr, \"Failed to stop PBS Data Service\");\n\t\t\tpbs_db_get_errmsg(PBS_DB_ERR, &db_err);\n\t\t\tif (db_err) {\n\t\t\t\tfprintf(stderr, \":[%s]\", db_err);\n\t\t\t\tfree(db_err);\n\t\t\t}\n\t\t\tfprintf(stderr, \"\\n\");\n\t\t}\n\t\tstarted_db = 0;\n\t}\n}\n\n#define MAX_PASSWORD_LEN 256\n\n/**\n * @brief\n *\tAccepts a password string without echoing characters\n *\ton the screen\n *\n * @param[out]\tpasswd - password read from user\n *\n * @return - Error code\n * @retval  -1 - Failure\n * @retval   0 - Success\n *\n */\nstatic 
int\nread_password(char *passwd)\n{\n\tint len;\n\tchar *p;\n\n\tif (system(\"stty -echo\") != 0)\n\t\treturn -1;\n\n\tif (fgets(passwd, MAX_PASSWORD_LEN, stdin) == NULL) {\n\t\tfprintf(stderr, \"%s : fgets failed\", __func__);\n\t\t(void) system(\"stty echo\"); /* restore echo before bailing out */\n\t\treturn -1;\n\t}\n\n\tif (system(\"stty echo\") != 0)\n\t\treturn -1;\n\n\tlen = strlen(passwd);\n\tp = passwd + len - 1;\n\twhile (p >= passwd && (*p == '\\r' || *p == '\\n'))\n\t\tp--;\n\t*(p + 1) = 0;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tGenerates a random password for the database.\n *\tThe allowed_chars array contains a list of\n *\tcharacters acceptable for the password. This\n *\tfunction seeds the generator with the current\n *\ttimestamp via srand, then uses rand to select\n *\trandom characters from the array to build the\n *\tpassword string.\n *\n * @param[out]\tpasswd - password generated\n * @param[in] len - Length of password to be generated\n *\n * @return - Error code\n * @retval  -1 - Failure\n * @retval   0 - Success\n *\n */\nstatic int\ngen_password(char *passwd, int len)\n{\n\tint chrs = 0;\n\tchar allowed_chars[] = \"0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@#$%^&*()_+\";\n\tint arr_len = strlen(allowed_chars);\n\n\tsleep(1); /* sleep 1 second to ensure the srand on time(0) truly randomizes the seed */\n\tsrand(time(0));\n\twhile (chrs < len) {\n\t\tint c;\n\t\tc = (char) (rand() % arr_len);\n\t\tpasswd[chrs++] = allowed_chars[c];\n\t}\n\tpasswd[chrs] = '\\0';\n\treturn 0;\n}\n\n/**\n * @brief\n *\tUpdates the db_usr file in server_priv with\n *\tthe new data service user name.\n *\n * @param[in] file - The db_usr file\n * @param[in] userid - The new data service user to be set\n *\n * @return - Error code\n * @retval  -1 - Failure\n * @retval   0 - Success\n *\n */\nint\nupdate_db_usr(char *file, char *userid)\n{\n\tint fd;\n\tint rc = 0;\n\n\tif ((fd = open(file, O_CREAT | O_TRUNC | O_WRONLY, 0600)) == -1) {\n\t\tfprintf(stderr, \"%s: open failed, errno=%d\\n\", file, errno);\n\t\treturn -1;\n\t}\n\tif (write(fd, userid, strlen(userid)) == -1) {\n\t\tfprintf(stderr, \"%s: write failed, errno=%d\\n\", file, errno);\n\t\trc = -1;\n\t}\n\tclose(fd);\n\treturn rc;\n}\n\n/**\n * @brief\n *\tChecks whether the given user exists on the system.\n *\tOn Unix, it additionally checks that the user is not\n *\tthe root user (id 0).\n *\n * @param[in] userid - The id to check\n *\n * @return - user suitable to use?\n * @retval  -1 - Userid not suitable for use\n * @retval   0 - Userid suitable to use\n *\n */\nint\ncheck_user(char *userid)\n{\n\tpwent = getpwnam(userid);\n\tif (pwent == NULL)\n\t\treturn (-1);\n\tif (pwent->pw_uid == 0)\n\t\treturn (-1);\n\n\t/* in unix make sure that the user home dir is accessible */\n\tif (access(pwent->pw_dir, R_OK | W_OK | X_OK) != 0)\n\t\treturn (-1);\n\treturn 0;\n}\n\n/**\n * @brief\n *\tThis function changes the ownership of\n *\tthe whole directory tree (and files) under the\n *\tpbs datastore directory to the new data service\n *\tuser account. 
This is required only in Unix.\n *\tOn Windows, access is given to the admin\n *\tgroup anyway, which allows any service_account\n *\t(part of the admin group) to be able to access\n *\tthese directories.\n *\n * @param[in] path - Path of the datastore directory\n * @param[in] userid - The new data service user\n *\t\t\t\tto change ownership to\n *\n * @return - Error code\n * @retval  -1 - Failure\n * @retval   0 - Success\n *\n */\nint\nchange_ownership(char *path, char *userid)\n{\n\tDIR *dir;\n\tstruct dirent *pdirent;\n\tchar dirfile[MAXPATHLEN + 1];\n\tstruct stat stbuf;\n\n\tif (chown(path, pwent->pw_uid, (gid_t) -1) == -1) {\n\t\tfprintf(stderr, \"%s : chown failed : ERR : %s\\n\", __func__, strerror(errno));\n\t\treturn -1;\n\t}\n\tdir = opendir(path);\n\tif (dir == NULL) {\n\t\treturn -1;\n\t}\n\n\twhile (errno = 0, (pdirent = readdir(dir)) != NULL) {\n\t\tif (strcmp(pdirent->d_name, \".\") == 0 ||\n\t\t    strcmp(pdirent->d_name, \"..\") == 0)\n\t\t\tcontinue;\n\n\t\tsprintf(dirfile, \"%s/%s\", path, pdirent->d_name);\n\t\tif (chown(dirfile, pwent->pw_uid, (gid_t) -1) == -1) {\n\t\t\tfprintf(stderr, \"%s : chown failed : ERR : %s\\n\", __func__, strerror(errno));\n\t\t\tcontinue;\n\t\t}\n\t\tstat(dirfile, &stbuf);\n\t\tif (S_ISDIR(stbuf.st_mode)) {\n\t\t\tchange_ownership(dirfile, userid);\n\t\t\tcontinue;\n\t\t}\n\t}\n\tif (errno != 0 && errno != ENOENT) {\n\t\t(void) closedir(dir);\n\t\treturn -1;\n\t}\n\t(void) closedir(dir);\n\treturn 0;\n}\n\n/**\n * @brief\n *\tThe main function in C - entry point\n *\n * @param[in]  argc - argument count\n * @param[in]  argv - pointer to argument array\n *\n * @return  int\n * @retval  0 - success\n * @retval  !0 - error\n */\nint\nmain(int argc, char *argv[])\n{\n\tint i, rc;\n\tchar passwd[MAX_PASSWORD_LEN + 1] = {'\\0'};\n\tchar passwd2[MAX_PASSWORD_LEN + 1];\n\tchar pwd_file[MAXPATHLEN + 1];\n\tchar userid[LOGIN_NAME_MAX + 1];\n\tint fd, errflg = 0;\n\tint gen_pwd = 0;\n\tint failcode = 0;\n\tchar *db_errmsg = NULL;\n\tint pmode;\n\tint change_user = 0;\n\tchar *olduser = NULL;\n\tint ret = 0;\n\tint update_db = 0;\n\tchar getopt_format[5];\n\tchar prog[] = \"pbs_ds_password\";\n\tchar errmsg[PBS_MAX_DB_CONN_INIT_ERR + 1];\n\n\tconn = NULL;\n\tpwd_file_new[0] = 0;\n\n\t/*test for real deal or just version and exit*/\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\t/* read configuration file */\n\tif (pbs_loadconf(0) == 0) {\n\t\tfprintf(stderr, \"%s: Could not load pbs configuration\\n\", prog);\n\t\treturn (-1);\n\t}\n\n\t/* backup old user name */\n\tif ((olduser = pbs_get_dataservice_usr(errmsg, PBS_MAX_DB_CONN_INIT_ERR)) == NULL) {\n\t\tfprintf(stderr, \"%s: Could not retrieve current data service user\\n\", prog);\n\t\tif (strlen(errmsg) > 0)\n\t\t\tfprintf(stderr, \"%s\\n\", errmsg);\n\t\treturn (-1);\n\t}\n\n\tif (pbs_conf.pbs_data_service_host == NULL)\n\t\tupdate_db = 1;\n\n\tuserid[0] = 0; /* empty user id */\n\n\tstrcpy(getopt_format, \"rC:\");\n\n\twhile ((i = getopt(argc, argv, getopt_format)) != EOF) {\n\t\tswitch (i) {\n\t\t\tcase 'r':\n\t\t\t\tgen_pwd = 1;\n\t\t\t\tbreak;\n\t\t\tcase 'C':\n\t\t\t\tpbs_strncpy(userid, optarg, sizeof(userid));\n\t\t\t\tbreak;\n\t\t\tcase '?':\n\t\t\tdefault:\n\t\t\t\terrflg++;\n\t\t}\n\t}\n\n\tif (errflg) {\n\t\tfprintf(stderr, \"\\nusage:\\t%s [-r] [-C username]\\n\", prog);\n\t\tfprintf(stderr, \"      \\t%s --version\\n\", prog);\n\t\tret = -1;\n\t\tgoto exit;\n\t}\n\n\t/* NOTE : This functionality is added just for automated testing purposes.\n\t * Usage: pbs_ds_password <password>\n\t */\n\tif (argv[optind] != NULL) {\n\t\tgen_pwd = 0;\n\t\tpbs_strncpy(passwd, argv[optind], sizeof(passwd));\n\t}\n\n\t/* check admin privileges */\n\tif ((getuid() != 0) || (geteuid() != 0)) {\n\t\tfprintf(stderr, \"%s: Must be run by root\\n\", prog);\n\t\tret = 1;\n\t\tgoto exit;\n\t}\n\n\tchange_user = 0;\n\t/* if the -C 
option was specified read the user from pbs.conf */\n\tif (userid[0] != 0) {\n\t\tif (strcmp(olduser, userid) != 0) {\n\t\t\tchange_user = 1;\n\t\t}\n\t}\n\n\tif (change_user == 1) {\n\t\t/* check that the supplied user-id exists (and is non-root on unix) */\n\t\tif (check_user(userid) != 0) {\n\t\t\tfprintf(stderr, \"\\n%s: User-id %s does not exist/is root user/home dir is not accessible\\n\", prog, userid);\n\t\t\tret = -1;\n\t\t\tgoto exit;\n\t\t}\n\t}\n\n\tatexit(cleanup);\n\tif (pbs_conf.pbs_data_service_host)\n\t\tpbs_strncpy(conn_db_host, pbs_conf.pbs_data_service_host, sizeof(conn_db_host));\n\telse\n\t\tpbs_strncpy(conn_db_host, pbs_default(), sizeof(conn_db_host));\n\n\tif (update_db == 1) {\n\t\t/* then connect to database */\n\n\t\tfailcode = pbs_db_connect(&conn, NULL, pbs_conf.pbs_data_service_port, PBS_DB_CNT_TIMEOUT_NORMAL);\n\n\t\tif (conn && change_user == 1) {\n\t\t\t/* able to connect? That's bad, PBS or dataservice is running */\n\t\t\tfprintf(stderr, \"%s: PBS Services and/or PBS Data Service is running\\n\", prog);\n\t\t\tfprintf(stderr, \"                 Stop PBS and Data Services before changing Data Service user\\n\");\n\t\t\tret = -1;\n\t\t\tgoto exit;\n\t\t}\n\n\t\tif (!conn) {\n\t\t\t/* start db only if it was not already running */\n\t\t\tfailcode = pbs_start_db(conn_db_host, pbs_conf.pbs_data_service_port);\n\t\t\tif (failcode != 0 && failcode != PBS_DB_OOM_ERR) {\n\t\t\t\tif (failcode == -1)\n\t\t\t\t\tpbs_db_get_errmsg(PBS_DB_ERR, &db_errmsg);\n\t\t\t\telse\n\t\t\t\t\tpbs_db_get_errmsg(failcode, &db_errmsg);\n\t\t\t\tif (db_errmsg)\n\t\t\t\t\tfprintf(stderr, \"%s: Failed to start PBS dataservice:[%s]\\n\", prog, db_errmsg);\n\t\t\t\telse\n\t\t\t\t\tfprintf(stderr, \"%s: Failed to start PBS dataservice\\n\", prog);\n\t\t\t\tret = -1;\n\t\t\t\tgoto exit;\n\t\t\t}\n\t\t\tstarted_db = 1;\n\n\t\t\tfailcode = pbs_db_connect(&conn, NULL, pbs_conf.pbs_data_service_port, PBS_DB_CNT_TIMEOUT_NORMAL);\n\t\t\tif (!conn) 
{\n\t\t\t\tpbs_db_get_errmsg(failcode, &db_errmsg);\n\t\t\t\tif (db_errmsg)\n\t\t\t\t\tfprintf(stderr, \"%s: Could not connect to PBS data service:%s\\n\", prog, db_errmsg);\n\t\t\t\tret = -1;\n\t\t\t\tgoto exit;\n\t\t\t}\n\t\t}\n\t}\n\n\tif (gen_pwd == 0 && passwd[0] == '\\0') {\n\t\t/* ask user to enter password twice */\n\t\tprintf(\"Enter the password:\");\n\t\tread_password(passwd);\n\n\t\tprintf(\"\\nRe-enter the password:\");\n\t\tread_password(passwd2);\n\t\tprintf(\"\\n\\n\");\n\t\tif (strcmp(passwd, passwd2) != 0) {\n\t\t\tfprintf(stderr, \"Entered passwords do not match\\n\");\n\t\t\tret = -2;\n\t\t\tgoto exit;\n\t\t}\n\t\tif (strlen(passwd) == 0) {\n\t\t\tfprintf(stderr, \"Blank password is not allowed\\n\");\n\t\t\tret = -2;\n\t\t\tgoto exit;\n\t\t}\n\t} else if (gen_pwd == 1) {\n\t\tgen_password(passwd, 16);\n\t}\n\n\trc = pbs_encrypt_pwd(passwd, &cred_type, &cred_buf, &cred_len, (const unsigned char *) pbs_aes_key, (const unsigned char *) pbs_aes_iv);\n\tif (rc != 0) {\n\t\tfprintf(stderr, \"%s: Failed to encrypt password\\n\", prog);\n\t\tret = -1;\n\t\tgoto exit;\n\t}\n\n\tsprintf(pwd_file_new, \"%s/server_priv/db_password.new\", pbs_conf.pbs_home_path);\n\tsprintf(pwd_file, \"%s/server_priv/db_password\", pbs_conf.pbs_home_path);\n\n\t/* write encrypted password to the password file */\n\tpmode = 0600;\n\tif ((fd = open(pwd_file_new, O_WRONLY | O_TRUNC | O_CREAT | O_Sync,\n\t\t       pmode)) == -1) {\n\t\tperror(\"open/create failed\");\n\t\tfprintf(stderr, \"%s: Unable to create file %s\\n\", prog, pwd_file_new);\n\t\tret = -1;\n\t\tgoto exit;\n\t}\n\n\tif (update_db == 1) {\n\t\t/* change password only if this config option is not set */\n\t\trc = pbs_db_password(conn, userid, passwd, olduser);\n\t\tfree(olduser);\n\t\tolduser = NULL;\n\t\tmemset(passwd, 0, sizeof(passwd));\n\t\tmemset(passwd2, 0, sizeof(passwd2));\n\t\tif (rc == -1) {\n\t\t\tfprintf(stderr, \"%s: Failed to create/alter user id %s\\n\", prog, userid);\n\t\t\tret = 
-1;\n\t\t\tgoto exit;\n\t\t}\n\t}\n\n\tif (write(fd, cred_buf, cred_len) != cred_len) {\n\t\tperror(\"write failed\");\n\t\tfprintf(stderr, \"%s: Unable to write to file %s\\n\", prog, pwd_file_new);\n\t\tret = -1;\n\t\tgoto exit;\n\t}\n\tclose(fd);\n\tfree(cred_buf);\n\n\tif (rename(pwd_file_new, pwd_file) != 0) {\n\t\tret = -1;\n\t\tgoto exit;\n\t}\n\n\tif (update_db == 1) {\n\t\t/* commit to database */\n\t\tcleanup(); /* cleanup will disconnect and delete tmp file too */\n\t}\n\n\tprintf(\"---> Updated user password\\n\");\n\tif (update_db == 1 && change_user == 1) {\n\t\tprintf(\"---> Updated user in datastore\\n\");\n\t\tprintf(\"---> Stored user password in datastore\\n\");\n\t}\n\n\tif (change_user == 1) {\n\t\tchar usr_file[MAXPATHLEN + 1];\n\t\tsprintf(usr_file, \"%s/server_priv/db_user\", pbs_conf.pbs_home_path);\n\n\t\t/* update PBS_HOME/server_priv/db_user file with the new user name */\n\t\tif (update_db_usr(usr_file, userid) != 0) {\n\t\t\tfprintf(stderr, \"Unable to update file %s\\n\", usr_file);\n\t\t\tret = -1;\n\t\t\tgoto exit;\n\t\t}\n\t\tprintf(\"---> Updated new user\\n\");\n\t}\n\n\tif (update_db == 1 && change_user == 1) {\n\t\tchar datastore[MAXPATHLEN + 1];\n\n\t\t/* ownership is changed only for Unix users\n\t\t * On Windows, these files are always owned by the user who installed the database\n\t\t * and writable by administrators anyway\n\t\t */\n\t\tsprintf(datastore, \"%s/datastore\", pbs_conf.pbs_home_path);\n\t\t/* change ownership of the datastore directories to the new user, so that db can be started again */\n\t\tif (change_ownership(datastore, userid) != 0) {\n\t\t\tfprintf(stderr, \"%s: Failed to change ownership on path %s\\n\", prog, datastore);\n\t\t\tret = -1;\n\t\t\tgoto exit;\n\t\t}\n\t\tprintf(\"---> Changed ownership of %s to user %s\\n\", datastore, userid);\n\n\t\t/* reload configuration file */\n\t\tif (pbs_loadconf(1) == 0) {\n\t\t\tfprintf(stderr, \"%s: Could not load pbs configuration\\n\", prog);\n\t\t\tret = 
-1;\n\t\t\tgoto exit;\n\t\t}\n\n\t\tfailcode = pbs_start_db(conn_db_host, pbs_conf.pbs_data_service_port);\n\t\tif (failcode != 0 && failcode != PBS_DB_OOM_ERR) {\n\t\t\tpbs_db_get_errmsg(failcode, &db_errmsg);\n\t\t\tif (db_errmsg)\n\t\t\t\tfprintf(stderr, \"%s: Failed to start PBS dataservice as new user:[%s]\\n\", prog, db_errmsg);\n\t\t\telse\n\t\t\t\tfprintf(stderr, \"%s: Failed to start PBS dataservice as new user\\n\", prog);\n\t\t\tret = -1;\n\t\t\tgoto exit;\n\t\t}\n\t}\n\tprintf(\"---> Success\\n\");\n\nexit:\n\tfree(olduser);\n\treturn ret;\n}\n"
  },
  {
    "path": "src/cmds/pbs_lamboot.in",
    "content": "#!/bin/sh\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nif [ $# -eq 1 ] && [ $1 = \"--version\" ]; then\n   echo pbs_version = @PBS_VERSION@\n   exit 0\nfi\n\nlamboot=\"lamboot\"\nname=`basename $0`\n\nif [ \"${PBS_NODEFILE:-XX}\" = \"XX\" ]; then\n        if [ \"$name\" != \"lamboot\" ]; then\n               echo \"$name: Warning, not running under PBS\"\n        fi\n\t$lamboot $*\n\texit $?\nfi\n\noptions=\"\"\nbhost=\"\"\nboot_tm=0\nboot_rsh=0\nssi_arg1=\"\"\nssi_arg2=\"\"\nwhile [ $# -gt 0 ]; do\n\n\tif   [ \"XX$1\" = \"XX-c\" ]     \t\t ||\n             [ \"XX$1\" = \"XX-prefix\" ]     \t ||\n             [ \"XX$1\" = \"XX-sessionprefix\" ]     ||\n             [ \"XX$1\" = \"XX-sessionsuffix\" ]     ||\n             [ \"XX$1\" = \"XX-withlamprefixpath\" ] ; then\n\t\toptions=\"$options $1\"\n\t\tshift\n\t\toptions=\"$options $1\"\n        elif [ \"XX$1\" = \"XX-ssi\" ] ; then\n\t\toptions=\"$options $1\"\n\t\tshift\n\t\toptions=\"$options $1\"\n\t\tssi_arg1=$1\n\t\tshift\n\t\tssi_arg2=$1\n\n\t\tif [ \"$ssi_arg1\" = \"boot\" ] &&\n\t\t   [ \"$ssi_arg2\" = \"tm\" ] ; then\n\t\t\tboot_tm=1\n\t\telif [ \"$ssi_arg1\" = \"boot\" ] &&\n\t\t     [ \"$ssi_arg2\" = \"rsh\" ] ; then\n\t\t\tboot_rsh=1\n\t\tfi\n\n\t\tif [ `echo $1 | wc -w` -gt 1 ] ; then\n\t\t\toptions=\"$options \\\"$1\\\"\"\n\t\telse\n\t\t\toptions=\"$options $1\"\n\t\tfi\n\telif [ `expr match \"$1\" \"-\\+\"` -ne 0 ] ; 
then\n\t\toptions=\"$options $1\"\n\telse\n\t\tbhost=\"$1\"\n        fi\n\n\tshift\n\ndone\n\nif [ \"${bhost:-XX}\" != \"XX\" ]; then\n\techo \"$name: Warning, <bhost> value ignored by PBS\"\nfi\n\n# check if tm boot module is specified with lamboot\nif [ $boot_tm -eq 1 ] ||\n   ( [ $boot_rsh -eq 0 ] &&\n     [ \"${LAM_MPI_SSI_boot:-XX}\" != \"XX\" ] &&\n     [ \"${LAM_MPI_SSI_boot}\" = \"tm\" ] ) ; then\n\teval $lamboot $options\nelse\n\teval $lamboot $options ${PBS_NODEFILE}\nfi\n"
  },
  {
    "path": "src/cmds/pbs_mpihp.in",
    "content": "#!/bin/sh\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nif [ $# -eq 1 ] && [ $1 = \"--version\" ]; then\n   echo pbs_version = @PBS_VERSION@\n   exit 0\nfi\n\n#\n# is_comment_line: echoes 0 if line content as specified\n#         by first argument begins with #; returns\n#         1 otherwise.\n#\n\nis_comment_line ()\n{\n   line=$*\n\n   if [ `echo \"$1\" | egrep -c \"^#\"` -ne 0 ] ; then\n      echo 0\n      return\n   fi\n\n   echo 1\n\n}\n\n#\n# read_appfile: read next non-comment line of  user's appfile\n#      named in $1. 
Be sure to set 'appfile_num_entries\" global\n#      variable first.\n# Returns: line read-in put in 'appfile_content\" global\n#      variable, which can be the <empty> string if all\n#      lines have been read.\n#\n# Global variables:\n#   appfile_idx - next line to read in user's appfile\n#   appfile_num_entries - number of entries in user's appfile\n#   appfile_content - upon returns, holds the content\n#           of line read.\n\nappfile_idx=1\nappfile_num_entries=0\nappfile_content=\"\"\n\nread_appfile ()\n{\n   appfile=$1\n\n   while true ; do\n\n      if [ $appfile_idx -gt $appfile_num_entries ] ; then\n         appfile_content=\"\"\n         return\n      fi\n\n      appfile_content=`eval sed -n '${appfile_idx}p' $appfile`\n      appfile_idx=`expr $appfile_idx + 1`\n\n      if [ `is_comment_line $appfile_content` -eq 1 ] ; then\n         return\n      fi\n\n   done\n}\n\n#\n# reset_appfile: resets the  next line to read in user's\n#       appfile back to first line.\n#\n\nreset_appfile ()\n{\n\n   appfile_idx=1\n}\n\n\n#\n# read_nodefile: read next line of  nodefile named in\n#      in $1. 
Be sure to set 'nodefile_num_entries\"\n#      first.\n# Returns: line read-in put in 'nodefile_content\" global\n#      variable, which can be the <empty> string if all\n#      lines have been read.\n#\n# Global variables:\n#   nodefile_idx - next line to read in nodefile\n#   nodefile_num_entries - number of entries in nodefile\n#   nodefile_content - upon return, holds the content\n#           of line read.\n\nnodefile_idx=1\nnodefile_num_entries=0\nnodefile_content=\"\"\n\nread_nodefile ()\n{\n   nodefile=$1\n\n   if [ $nodefile_idx -gt $nodefile_num_entries ] ; then\n      nodefile_content=\"\"\n      return\n   fi\n   nodefile_content=`eval sed -n '${nodefile_idx}p' $nodefile`\n\n   nodefile_idx=`expr $nodefile_idx + 1`\n}\n\n\n#\n# reset_nodefile: resets the next line to read in user's\n#       nodefile back to first line.\n#\n\nreset_nodefile ()\n{\n\n   nodefile_idx=1\n}\n\n#\n# get_rank (): echoes the rank (-np value) value given\n#          in an appfile specification:\n#      [-np X] [-h host] ... 
<prog> <args>\n#\n\nget_rank ()\n{\n   line=$*\n\n   np_val=1   # default is 1\n\n   while [ $# -gt 0 ] ; do\n      if [ \"XX$1\" = \"XX-np\" ] ; then\n         shift\n         np_val=\"$1\"\n         break\n      fi\n      shift\n   done\n   echo $np_val\n}\n\n#\n# get_appfile_args (): given an mpirun command line\n#  argument specification of the form:\n#\n#  (1)  [-np #] [-help] [-version] [-djpv] [-ck]\n#       [-h host] [-l user] [-stdio=[options]] [-e var[=val]]...\n#       [-sp paths] [-i spec] [-tv] program [args]\n#\n#  (2)  [-help] [-version] [-djpv] [-ck]\n#   [-i spec] [-stdio=[options]] [-commd] [-tv]\n#   [-f appfile] [-- extra_args_for_appfile]\n#\n# echoes only the options and arguments that are allowed\n# to appear in an application file except\n# \"-h host\", and \"-l user\" options which are ignored\n# under PBS.\n#\n#   -np <#> [-E <VAR>[=<val>] [...]] [-sp <paths>] <program> [<args>]\n#\n\nget_appfile_args ()\n{\n   line=$*\n\n   args=\"\"\n   in_program_args=0\n   while [ $# -gt 0 ] ; do\n\n      if [ $in_program_args -eq 1 ] ; then\n         args=\"$args $1\"\n      elif [ \"XX$1\" = \"XX-e\" ]  ||\n           [ \"XX$1\" = \"XX-np\" ] ||\n           [ \"XX$1\" = \"XX-sp\" ] ; then\n         args=\"$args $1\"\n         shift\n         args=\"$args $1\"\n\n#      NOTE: we're using egrep for\n#            regular expression matching\n#            as expr MATCH doesn't seemed\n#            to work under HP\n\n      elif [ \"XX$1\" = \"XX-h\" ]  ||\n           [ \"XX$1\" = \"XX-l\" ]  ||\n           [ \"XX$1\" = \"XX-i\" ] ; then\n\t shift\n      elif [ `echo \"$1\" | egrep -c \"^\\-aff=.+\"` -ne 0 ] ; then\n         shift\n         continue\n      elif [ \"XX$1\" = \"XX--\" ] ; then\n         break\n      elif [ `echo \"$1\" | egrep -c \"^\\-.+\"` -ne 0 ] ; then\n         shift\n         continue\n      else\n         args=\"$args $1\"\n\t in_program_args=1\n      fi\n      shift\n   done\n   echo \"$args\"\n}\n\n#\n# get_global_args (): given an 
mpirun command line\n#  argument specification of the form:\n#\n#  (1)  [-np #] [-help] [-version] [-djpv] [-ck]\n#       [-h host] [-l user] [-stdio=[options]] [-e var[=val]]...\n#       [-sp paths] [-i spec] [-tv] program [args]\n#\n#  (2)  [-help] [-version] [-djpv] [-ck]\n#   [-i spec] [-stdio=[options]] [-commd] [-tv]\n#   [-f appfile] [-- extra_args_for_appfile]\n#\n# puts in the global variable 'global_args', the\n# options and arguments that are NOT allowed\n# to appear inside an application file - global arguments:\n#\n#   [-help] [-version] [-djpv] [-ck] [-i spec] [-tv]\n#                         ... [-- extra_args_for_appfile]\n# Also sets global variable 'global_extra_args'\n# the value of\n#   [-- extra_args_for_appfile]\n#\n# NOTE: Under HP Linux, mpirun has the following\n#       additional global arguments:\n#       [-universe=#] [-T] [-prot] [-spawn] [-1sided] [-ha] [-hmp]\n# Returns: value of 0 for success; 1 if -client argument\n#          was encountered.\n#\n#\n\nglobal_args=\"\"\nglobal_extra_args=\"\"\nget_global_args ()\n{\n   in_extra_args=0\n   in_prog_args=0\n\n   while [ $# -gt 0 ] ; do\n\n      if [ $in_prog_args -eq 1 ] ; then\n         break\n      elif [ $in_extra_args -eq 1 ] ; then\n         global_extra_args=\"$global_extra_args $1\"\n      elif [ \"XX$1\" = \"XX-help\" ]  ||\n         [ \"XX$1\" = \"XX-version\" ] ||\n         [ \"XX$1\" = \"XX-tv\" ] ||\n         [ \"XX$1\" = \"XX-commd\" ] ||\n         [ \"XX$1\" = \"XX-T\" ] ||\n         [ \"XX$1\" = \"XX-prot\" ] ||\n         [ \"XX$1\" = \"XX-spawn\" ] ||\n         [ \"XX$1\" = \"XX-1sided\" ] ||\n         [ \"XX$1\" = \"XX-ha\" ] ||\n         [ \"XX$1\" = \"XX-hmp\" ] ||\n         [ \"XX$1\" = \"XX-itapi\" ] ||\n         [ \"XX$1\" = \"XX-ITAPI\" ] ||\n         [ \"XX$1\" = \"XX-TCP\" ] ||\n         [ \"XX$1\" = \"XX-intra=mix\" ] ||\n         [ \"XX$1\" = \"XX-intra=nic\" ] ||\n         [ \"XX$1\" = \"XX-intra=shm\" ] ||\n         [ \"XX$1\" = \"XX-cpu_bind\" ] ||\n        
 [ \"XX$1\" = \"XX-dd\" ] ||\n         [ \"XX$1\" = \"XX-ndd\" ] ||\n         [ \"XX$1\" = \"XX-rdma\" ] ||\n         [ \"XX$1\" = \"XX-srq\" ] ||\n         [ \"XX$1\" = \"XX-ibv\" ] ||\n         [ \"XX$1\" = \"XX-IBV\" ] ||\n         [ `echo \"$1\" | egrep -c \"^\\-[djpv]+\"` -ne 0 ] ||\n         [ `echo \"$1\" | egrep -c \"^\\-stdio=.+\"` -ne 0 ] ||\n         [ `echo \"$1\" | egrep -c \"^\\-aff=.+\"` -ne 0 ] ||\n         [ `echo \"$1\" | egrep -c \"^\\-universe_size=.+\"` -ne 0 ] ||\n         [ \"XX$1\" = \"XX-ck\" ] ; then\n         global_args=\"$global_args $1\"\n\n#      NOTE: we're using egrep for\n#            regular expression matching\n#            as expr MATCH doesn't seemed\n#            to work under HP\n\n      elif [ \"XX$1\" = \"XX-i\" ] ||\n\t   [ \"XX$1\" = \"XX-netaddr\" ] ||\n           [ \"XX$1\" = \"XX-e\" ] ||\n\t   [ \"XX$1\" = \"XX-subnet\" ] ; then\n         global_args=\"$global_args $1\"\n\t shift\n         global_args=\"$global_args $1\"\n      elif [ \"XX$1\" = \"XX-client\" ] ; then\n         return 1\n      elif [ \"XX$1\" = \"XX--\" ] ; then\n\t global_extra_args=\"--\"\n\t in_extra_args=1\n      elif [ \"XX$1\" = \"XX-f\" ]  ||\n\t   [ \"XX$1\" = \"XX-np\" ] ||\n\t   [ \"XX$1\" = \"XX-h\" ]  ||\n\t   [ \"XX$1\" = \"XX-l\" ]  ||\n\t   [ \"XX$1\" = \"XX-sp\" ] ; then\n         shift\n      else\n         in_prog_args=1\n      fi\n      shift\n   done\n\n   return 0\n}\n\n#\n# get_appfile (): echoes the application file name given\n# by the -f command line option. 
This returns an empty\n# string (\"\") if -f option is not given\n#\n\nget_appfile ()\n{\n   appfile=\"\"\n   while [ $# -gt 0 ] ; do\n\n      if [ \"XX$1\" = \"XX-f\" ] ; then\n         shift\n\t if [ \"$appfile\" != \"\" ] ; then\n\t\techo \"Encountered multiple -f arguments\"\n\t\treturn 1\n\t fi\n\t appfile=\"$1\"\n      elif [ \"XX$1\" = \"XX-help\" ]  ||\n         [ \"XX$1\" = \"XX-version\" ] ||\n         [ \"XX$1\" = \"XX-tv\" ] ||\n         [ \"XX$1\" = \"XX-commd\" ] ||\n         [ \"XX$1\" = \"XX-T\" ] ||\n         [ \"XX$1\" = \"XX-prot\" ] ||\n         [ \"XX$1\" = \"XX-spawn\" ] ||\n         [ \"XX$1\" = \"XX-1sided\" ] ||\n         [ \"XX$1\" = \"XX-ha\" ] ||\n         [ \"XX$1\" = \"XX-hmp\" ] ||\n         [ \"XX$1\" = \"XX-client\" ] ||\n         [ \"XX$1\" = \"XX-itapi\" ] ||\n         [ \"XX$1\" = \"XX-ITAPI\" ] ||\n         [ \"XX$1\" = \"XX-TCP\" ] ||\n         [ \"XX$1\" = \"XX-intra=mix\" ] ||\n         [ \"XX$1\" = \"XX-intra=nic\" ] ||\n         [ \"XX$1\" = \"XX-intra=shm\" ] ||\n         [ \"XX$1\" = \"XX-cpu_bind\" ] ||\n         [ \"XX$1\" = \"XX-dd\" ] ||\n         [ \"XX$1\" = \"XX-ndd\" ] ||\n         [ \"XX$1\" = \"XX-rdma\" ] ||\n         [ \"XX$1\" = \"XX-srq\" ] ||\n         [ \"XX$1\" = \"XX-ibv\" ] ||\n         [ \"XX$1\" = \"XX-IBV\" ] ||\n         [ `echo \"$1\" | egrep -c \"^\\-[djpv]+\"` -ne 0 ] ||\n         [ `echo \"$1\" | egrep -c \"^\\-stdio=.+\"` -ne 0 ] ||\n         [ `echo \"$1\" | egrep -c \"^\\-aff=.+\"` -ne 0 ] ||\n         [ `echo \"$1\" | egrep -c \"^\\-universe_size=.+\"` -ne 0 ] ||\n         [ \"XX$1\" = \"XX-ck\" ] ; then\n         shift\n         continue\n\n#      NOTE: we're using egrep for\n#            regular expression matching\n#            as expr MATCH doesn't seemed\n#            to work under HP\n\n      elif [ \"XX$1\" = \"XX-i\" ]  ||\n           [ \"XX$1\" = \"XX-e\" ]\t||\n\t   [ \"XX$1\" = \"XX-np\" ] ||\n\t   [ \"XX$1\" = \"XX-h\" ]  ||\n\t   [ \"XX$1\" = \"XX-l\" ]  ||\n           [ 
\"XX$1\" = \"XX-netaddr\" ] ||\n           [ \"XX$1\" = \"XX-subnet\" ] ||\n\t   [ \"XX$1\" = \"XX-sp\" ] ; then\n         shift\n      else\n         break\n      fi\n\n      shift\n   done\n   echo $appfile\n   return 0\n}\n\n# is_prun_specified(): given as input the HP mpirun\n# command line arguments of the form:\n#\n#   mpirun [-help] [-version] [-jv] [-i <spec>]\n#          [-spawn] [-1sided] [-universe_size=#] [-sp <paths>]\n#          [-T] [-prot] [-e var[=val]] [...] -prun <prun-options> <program> [<args>]\n#\n# RETURN:\n#  Echoes 1 if one of the arguments is \"-prun\"; 0 otherwise.\n#\n\nis_prun_specified ()\n{\n   is_prun=0\n\n   while [ $# -gt 0 ] ; do\n\n      if [ \"XX$1\" = \"XX-prun\" ] ; then\n         is_prun=1\n         break\n      elif [ \"XX$1\" = \"XX-help\" ]  ||\n         [ \"XX$1\" = \"XX-version\" ] ||\n         [ \"XX$1\" = \"XX-tv\" ] ||\n         [ \"XX$1\" = \"XX-commd\" ] ||\n         [ \"XX$1\" = \"XX-T\" ] ||\n         [ \"XX$1\" = \"XX-prot\" ] ||\n         [ \"XX$1\" = \"XX-spawn\" ] ||\n         [ \"XX$1\" = \"XX-1sided\" ] ||\n         [ \"XX$1\" = \"XX-ha\" ] ||\n         [ \"XX$1\" = \"XX-hmp\" ] ||\n         [ \"XX$1\" = \"XX-client\" ] ||\n         [ \"XX$1\" = \"XX-itapi\" ] ||\n         [ \"XX$1\" = \"XX-ITAPI\" ] ||\n         [ \"XX$1\" = \"XX-TCP\" ] ||\n         [ \"XX$1\" = \"XX-intra=mix\" ] ||\n         [ \"XX$1\" = \"XX-intra=nic\" ] ||\n         [ \"XX$1\" = \"XX-intra=shm\" ] ||\n         [ \"XX$1\" = \"XX-cpu_bind\" ] ||\n         [ \"XX$1\" = \"XX-dd\" ] ||\n         [ \"XX$1\" = \"XX-ndd\" ] ||\n         [ \"XX$1\" = \"XX-rdma\" ] ||\n         [ \"XX$1\" = \"XX-srq\" ] ||\n         [ \"XX$1\" = \"XX-ibv\" ] ||\n         [ \"XX$1\" = \"XX-IBV\" ] ||\n         [ `echo \"$1\" | egrep -c \"^\\-[djpv]+\"` -ne 0 ] ||\n         [ `echo \"$1\" | egrep -c \"^\\-stdio=.+\"` -ne 0 ] ||\n         [ `echo \"$1\" | egrep -c \"^\\-aff=.+\"` -ne 0 ] ||\n         [ `echo \"$1\" | egrep -c \"^\\-universe_size=.+\"` -ne 0 ] ||\n 
        [ \"XX$1\" = \"XX-ck\" ] ; then\n         shift\n         continue\n\n#      NOTE: we're using egrep for\n#            regular expression matching\n#            as expr MATCH doesn't seemed\n#            to work under HP\n\n      elif [ \"XX$1\" = \"XX-i\" ]  ||\n           [ \"XX$1\" = \"XX-e\" ]\t||\n           [ \"XX$1\" = \"XX-f\" ]\t||\n\t   [ \"XX$1\" = \"XX-np\" ] ||\n\t   [ \"XX$1\" = \"XX-h\" ]  ||\n\t   [ \"XX$1\" = \"XX-l\" ]  ||\n           [ \"XX$1\" = \"XX-netaddr\" ] ||\n           [ \"XX$1\" = \"XX-subnet\" ] ||\n\t   [ \"XX$1\" = \"XX-sp\" ] ; then\n         shift\n      else\n         break\n      fi\n\n      shift\n   done\n   echo $is_prun\n}\n\n#\n# get_appfile_args_nonp (): same as get_appfile_args ()\n#             except -np argument is\n#             stripped out.\n#\nget_appfile_args_nonp ()\n{\n   line=$*\n\n   args=\"\"\n   in_program_args=0\n   while [ $# -gt 0 ] ; do\n\n      if [ $in_program_args -eq 1 ] ; then\n         args=\"$args $1\"\n      elif [ \"XX$1\" = \"XX-e\" ]  ||\n           [ \"XX$1\" = \"XX-sp\" ] ; then\n         args=\"$args $1\"\n         shift\n         args=\"$args $1\"\n\n#      NOTE: we're using egrep for\n#            regular expression matching\n#            as expr MATCH doesn't seemed\n#            to work under HP\n\n      elif [ \"XX$1\" = \"XX-h\" ]  ||\n           [ \"XX$1\" = \"XX-l\" ]  ||\n           [ \"XX$1\" = \"XX-np\" ]  ||\n           [ \"XX$1\" = \"XX-i\" ] ; then\n   \t shift\n\n      elif [ \"XX$1\" = \"XX--\" ] ; then\n         break\n      elif [ `echo \"$1\" | egrep -c \"^\\-.+\"` -ne 0 ] ; then\n         shift\n         continue\n      else\n         args=\"$args $1\"\n         in_program_args=1\n      fi\n      shift\n   done\n   echo \"$args\"\n}\n\n#\n# transform_appfile (): given a <user_appfile> (named in first argument),\n#      transform its contents into a PBS-environment\n#      friendly <pbs_appfile> (named in second argument)\n#      using nodes assigned in 
<pbs_nodes_file>\n#\ntransform_appfile ()\n{\n   user_appfile=$1\n   pbs_appfile=$2\n   pbs_nodes_file=$3\n\n   cat /dev/null > $pbs_appfile\n\n   appfile_num_entries=`wc -l $user_appfile | awk '{print $1}'`\n   nodefile_num_entries=`wc -l $pbs_nodes_file | awk '{print $1}'`\n\n   nodefile_rank=0\n   appfile_rank=0\n   read_appfile $user_appfile\n   while [ \"$appfile_content\" != \"\" ] ; do\n\n         if [ $appfile_rank -eq 0 ] ; then\n            appfile_rank=`get_rank $appfile_content`\n         fi\n\n         if [ $nodefile_rank -eq 0 ] ; then\n            read_nodefile $pbs_nodes_file\n         fi\n\n         while [ \"$nodefile_content\" != \"\" ] ; do\n            if [ $nodefile_rank -eq 0 ] ; then\n               nodefile_rank=`get_rank $nodefile_content`\n            fi\n            node=`echo $nodefile_content | awk '{print $1}'`\n\n            if [ $appfile_rank -eq $nodefile_rank ] ; then\n               echo \"-np $appfile_rank -h $node  `get_appfile_args_nonp $appfile_content`\" >> $pbs_appfile\n               nodefile_rank=0\n               break\n            elif [ $appfile_rank -gt $nodefile_rank ] ; then\n               echo \"-np $nodefile_rank -h $node `get_appfile_args_nonp $appfile_content`\" >> $pbs_appfile\n               appfile_rank=`expr $appfile_rank - $nodefile_rank`\n               read_nodefile $pbs_nodes_file\n               nodefile_rank=0\n               continue\n            else\n               echo \"-np $appfile_rank -h $node `get_appfile_args_nonp $appfile_content`\" >> $pbs_appfile\n               nodefile_rank=`expr $nodefile_rank - $appfile_rank`\n\n               read_appfile $user_appfile\n               if [ \"$appfile_content\" = \"\" ] ; then\n                  break\n               fi\n               appfile_rank=`get_rank $appfile_content`\n               continue\n            fi\n         done\n\n#        all nodes have been processed\n         if [ \"$nodefile_content\" = \"\" ] ; then\n            break\n        
 fi\n\n         read_appfile $user_appfile\n         appfile_rank=0\n\n\n   done\n\n#  still more appfile lines to process, but we've cycled through all nodes\n\n   while [ \"$appfile_content\" != \"\" ] ; do\n\n      if [ $nodefile_num_entries -eq 0 ] ; then\n         break\n      fi\n\n      if [ $appfile_rank -eq 0 ] ; then\n         appfile_rank=`get_rank $appfile_content`\n      fi\n\n      chunk=`expr $appfile_rank / $nodefile_num_entries`\n\n      if [ $chunk -eq 0 ] ; then\n         chunk=1\n      fi\n\n      reset_nodefile $pbs_nodes_file\n      read_nodefile $pbs_nodes_file\n      while [ \"$nodefile_content\" != \"\" ] ; do\n         node=`echo $nodefile_content | awk '{print $1}'`\n\n         rank_remain=`expr $appfile_rank - $chunk`\n         if [ $rank_remain -eq 0 ] ; then\n            echo \"-np $chunk -h $node `get_appfile_args_nonp $appfile_content`\" >> $pbs_appfile\n            break\n         elif [  $rank_remain -lt 0 ] ; then\n            echo \"-np $appfile_rank -h $node `get_appfile_args_nonp $appfile_content`\" >> $pbs_appfile\n            break\n         else\n            echo \"-np $chunk -h $node `get_appfile_args_nonp $appfile_content`\" >> $pbs_appfile\n            appfile_rank=`expr $appfile_rank - $chunk`\n         fi\n\n         read_nodefile $pbs_nodes_file\n\n#\t if processed all nodes but appfile not completely satisfied,\n#\t we'll continue to round robin through the nodes\n\t if [ \"$nodefile_content\" = \"\" ] && [ $appfile_rank -gt 0 ] ; then\n\t\treset_nodefile\n         \tread_nodefile $pbs_nodes_file\n\t fi\n\n      done\n\n      read_appfile $user_appfile\n      appfile_rank=0\n\n   done\n}\n\n#####################################################\n# MAIN\n#####################################################\n\n. 
${PBS_CONF_FILE:-@PBS_CONF_FILE@}\nexport PBS_TMPDIR=\"${PBS_TMPDIR:-${TMPDIR:-/var/tmp}}\"\n\n# Global variables\n\nuser_appfile=\"${PBS_TMPDIR}/pbs_mpihp_uappfile$$\"\npbs_appfile=\"${PBS_TMPDIR}/pbs_mpihp_pappfile$$\"\npbs_nodefile=\"${PBS_TMPDIR}/pbs_mpihp_nodefile$$\"\n\ntype mpirun >/dev/null 2>&1\nmpirun_found=$?\n\nif [ -h ${PBS_EXEC}/etc/pbs_mpihp ] ; then\n   mpirun=`ls -l ${PBS_EXEC}/etc/pbs_mpihp | awk -F \"->\" '{print $2}'| tr -d ' '`\n   if [ ! -x \"$mpirun\" ] ; then\n\techo \"mpirun=$mpirun is not executable!\"\n\texit 127\n\n   fi\nelif [ $mpirun_found -eq 0 ] && #    Platform bought HP MPI so match either\n     [ `(mpirun -version | egrep -c \"HP MPI|Platform\") 2>/dev/null` -ne 0 ] ; then\n   mpirun=mpirun\nelse\n   echo \"HP version of mpirun not found\"\n   exit 127\nfi\n\nname=`basename $0`\nif [ \"${PBS_NODEFILE:-XX}\" = \"XX\" ]; then\n   if [ \"$name\" != \"mpirun\" ]; then\n      echo \"$name: Warning, not running under PBS\"\n   fi\n   $mpirun $*\n   exit $?\nfi\n\n# mpirun -prun specified, pass all arguments to HP mpirun,\n# but keep processes under control of PBS\nif [ `is_prun_specified $*` -eq 1 ]; then\n   export MPI_REMSH=\"${PBS_EXEC}/bin/pbs_remsh -j $PBS_JOBID -r ${PBS_RSHCOMMAND:-rsh}\"\n   $mpirun $*\n   exit $?\nfi\n\n### get arguments  to appear outside of an appfile\nget_global_args $*\nif [ $? -eq 1 ] ; then\n   echo \"-client option is unsupported. Exiting...\"\n   exit 1\nfi\n\n# Signals to catch\ntrap '(rm -f $user_appfile $pbs_appfile $pbs_nodefile) 2> /dev/null; exit 1;'  1 2 3 9 15\n\n### create interim user appfile\ninput_appfile=`get_appfile $*`\nif [ $? -eq 1 ] ; then\n   echo \"Encountered multiple -f arguments. 
Exiting...\"\n   exit 1\nfi\n\nif [ \"$input_appfile\" = \"\" ] ; then\n   appfile_args=`get_appfile_args $*`\n   echo \"$appfile_args\" > $user_appfile\nelse\n   cat /dev/null > $user_appfile\n   while read line\n   do\n      if [ `is_comment_line $line` -eq 0 ] ; then\n         continue\n      fi\n      appfile_args=`get_appfile_args $line`\n      echo \"$appfile_args\" >> $user_appfile\n   done < $input_appfile\n\nfi\n\n### tally up PBS nodes\ncat -n ${PBS_NODEFILE} | \\\n\tsort -k2 | uniq -f1 -c | \\\n\tawk '{if ($1 == 1) print $2, $3; else print $2, $3 \" -np \" $1}' | \\\n\tsort -n | awk '{print $2, $3, $4}' > $pbs_nodefile\n\n### transform appfile to a PBS-friendly appfile\ntransform_appfile $user_appfile $pbs_appfile $pbs_nodefile\n\nrm -f $user_appfile $pbs_nodefile\n\nexport MPI_REMSH=\"${PBS_EXEC}/bin/pbs_remsh -j $PBS_JOBID -r ${PBS_RSHCOMMAND:-rsh}\"\n\nif [ -s $pbs_appfile ] ; then\n   $mpirun $global_args -f $pbs_appfile $global_extra_args\n   ret=$?\nelse\n   echo \"No MPI application to process\"\n   ret=2\nfi\n\nrm -f $pbs_appfile\n\nexit $ret\n"
  },
  {
    "path": "src/cmds/pbs_mpilam.in",
    "content": "#!/bin/sh\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nif [ $# -eq 1 ] && [ $1 = \"--version\" ]; then\n   echo pbs_version = @PBS_VERSION@\n   exit 0\nfi\n\nmpirun=\"mpirun\"\nname=`basename $0`\n\nif [ \"${PBS_NODEFILE:-XX}\" = \"XX\" ]; then\n\tif [ \"$name\" != \"mpirun\" ]; then\n\t\techo \"$name: Warning, not running under PBS\"\n\tfi\n\techo $mpirun $*\n\t$mpirun $*\n\texit $?\nfi\n\nstart=1\noptions=\"\"\nwhere=\"\"\nprog_args=\"\"\nnp_spec=0\nc_spec=0\nwhile [ $# -gt 0 ]; do\n\n  if [ $start -eq 1 ] ; then\n        if [ \"XX$1\" = \"XX-np\" ]; then\n\t\tnp_spec=1\n\t\toptions=\"$options $1\"\n\t\tshift\n\t\toptions=\"$options $1\"\n        elif [ \"XX$1\" = \"XX-c\" ]; then\n\t\tc_spec=1\n\t\toptions=\"$options $1\"\n\t\tshift\n\t\toptions=\"$options $1\"\n        elif [ \"XX$1\" = \"XX-s\" ]     ||\n             [ \"XX$1\" = \"XX-x\" ]     ||\n             [ \"XX$1\" = \"XX-wd\" ]    ||\n             [ \"XX$1\" = \"XX-p\" ] ; then\n\t\toptions=\"$options $1\"\n\t\tshift\n\t\toptions=\"$options $1\"\n        elif [ \"XX$1\" = \"XX-ssi\" ] ; then\n\t\toptions=\"$options $1\"\n\t\tshift\n\t\toptions=\"$options $1\"\n\t\tshift\n\t\tif [ `echo $1 | wc -w` -gt 1 ] ; then\n\t\t\toptions=\"$options \\\"$1\\\"\"\n\t\telse\n\t\t\toptions=\"$options $1\"\n\t\tfi\n\telif [ `expr match \"$1\" \"-\\+\"` -ne 0 ] ; then\n\t\toptions=\"$options $1\"\n\telif [ `expr match $1 \"n\\+\"` -ne 0 ] \t||\n             [ 
`expr match $1 \"c\\+\"` -ne 0 ]   \t||\n             [ $1 = \"h\" ]   \t\t\t||\n             [ $1 = \"o\" ]   \t\t\t||\n             [ $1 = \"N\" ]   \t\t\t||\n             [ $1 = \"C\" ] ; then\n\t\twhere=\"$where $1\"\n\telse\n\t\tprog_args=\"$prog_args $1\"\n                start=0\n        fi\n  else\n\tprog_args=\"$prog_args $1\"\n  fi\n  shift\n\ndone\n\n# Under LAM >= 7, need to put a -s (new session)\n# option to pbs_attach  so as to not redundantly\n# attach the pids of lamds that may have been\n# started by tm-enabled lamboot.\n\nif [ `(lamboot -V | egrep -c \"LAM 7\") 2>/dev/null` -ne 0 ] ; then\n\ts_opt=\"-s\"\nelse\n\ts_opt=\"\"\nfi\n\n# no <where> parameter\nif [ \"${where:-XX}\" = \"XX\" ]; then\n\tif [ $c_spec -eq 0 ] && [ $np_spec -eq 0 ] ; then\n\t\teval $mpirun $options $prog_args\n\telse\n\t\teval $mpirun $options C pbs_attach -P -j ${PBS_JOBID} $s_opt $prog_args\n\tfi\nelse\n\teval $mpirun $options $where pbs_attach -P -j ${PBS_JOBID} $s_opt $prog_args\nfi\n"
  },
  {
    "path": "src/cmds/pbs_mpirun.in",
    "content": "#!/bin/sh\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nif [ $# -eq 1 ] && [ $1 = \"--version\" ]; then\n   echo pbs_version = @PBS_VERSION@\n   exit 0\nfi\n\nmpirun=\"mpirun\"\nname=`basename $0`\n\nexport PBS_RSHCOMMAND=${P4_RSHCOMMAND:-rsh}\n. 
${PBS_CONF_FILE:-@PBS_CONF_FILE@}\nexport P4_RSHCOMMAND=${PBS_EXEC}/bin/pbs_remsh\nexport MPICH_PROCESS_GROUP=no\nexport PBS_TMPDIR=\"${PBS_TMPDIR:-${TMPDIR:-/var/tmp}}\"\n\nif [ \"${PBS_NODEFILE:-XX}\" = \"XX\" ]; then\n\tif [ \"$name\" != \"mpirun\" ]; then\n\t\techo \"$name: Warning, not running under PBS\"\n\tfi\n\t$mpirun $*\n\texit $?\nfi\n\nlist=\"\"\nusernp=`cat ${PBS_NODEFILE} | wc -l`\n\nwhile [ $# -gt 0 ]; do\n\tif [ \"XX$1\" = \"XX-np\" ]; then\n\t\tshift\n\t\tusernp=$1\n\telif [ \"XX$1\" = \"XX-machinefile\" ]; then\n\t\tshift\n\t\techo \"$name: Warning, -machinefile value replaced by PBS\"\n\telse\n\t\tlist=\"$list $1\"\n\tfi\n\tshift\ndone\n\nmachinefile=\"${PBS_TMPDIR}/pbs_mpimach$$\"\ncat -n ${PBS_NODEFILE} | \\\n\tsort -k2 | uniq -f1 -c | \\\n\tawk '{if ($1 == 1) print $2, $3; else print $2, $3 \":\" $1}' | \\\n\tsort -n | awk '{print $2}' > $machinefile\n\n\n$mpirun -np $usernp -machinefile $machinefile $list\nret=$?\n\nrm $machinefile\nexit $ret\n"
  },
  {
    "path": "src/cmds/pbs_ralter.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include <pbs_version.h>\n\n#include <sys/types.h>\n#include <sys/time.h>\n#include <errno.h>\n#include <pbs_ifl.h>\n#include \"cmds.h\"\n#include \"net_connect.h\"\n\n#define OPT_BUF_LEN 256\n\nstatic struct attrl *attrib = NULL;\nstatic time_t dtstart;\nstatic time_t dtend;\nint force_alter = FALSE;\n\n/*\n * @brief process options input to the command.\n *\n * @param[in]  argc  - the number of arguments to be parsed.\n * @param[in]  argv  - the argument array.\n * @param[out] attrp - attribute list sent to the caller.\n * @param[out] dest  - destination server.\n *\n * @retval 0 on no error.\n * @retval No. 
of options having errors.\n */\nint\nprocess_opts(int argc, char **argv, struct attrl **attrp, char *dest)\n{\n\tint c = 0;\n\n\tint errflg = 0;\n\ttime_t t;\n\n\tchar time_buf[80] = {0};\n\tchar *endptr = NULL;\n\tlong temp = 0;\n\tchar dur_buf[800];\n\tint alter_duration = FALSE;\n\n\twhile ((c = getopt(argc, argv, \"E:I:m:M:N:R:q:U:G:D:l:W:\")) != EOF) {\n\t\tswitch (c) {\n\t\t\tcase 'E':\n\t\t\t\tt = cvtdate(optarg);\n\t\t\t\tif (t >= 0) {\n\t\t\t\t\t(void) sprintf(time_buf, \"%ld\", (long) t);\n\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_resv_end, time_buf);\n\t\t\t\t\tdtend = t;\n\t\t\t\t} else {\n\t\t\t\t\tfprintf(stderr, \"pbs_ralter: illegal -E time value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase 'I':\n\t\t\t\ttemp = strtol(optarg, &endptr, 0);\n\t\t\t\tif (*endptr == '\\0' && temp > 0) {\n\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_inter, optarg);\n\t\t\t\t} else {\n\t\t\t\t\tfprintf(stderr, \"pbs_ralter: illegal -I time value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase 'm':\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_m, optarg);\n\t\t\t\tbreak;\n\n\t\t\tcase 'M':\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_M, optarg);\n\t\t\t\tbreak;\n\n\t\t\tcase 'N':\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_resv_name, optarg);\n\t\t\t\tbreak;\n\n\t\t\tcase 'R':\n\t\t\t\tt = cvtdate(optarg);\n\t\t\t\tif (t >= 0) {\n\t\t\t\t\t(void) sprintf(time_buf, \"%ld\", (long) t);\n\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_resv_start, time_buf);\n\t\t\t\t\tdtstart = t;\n\t\t\t\t} else {\n\t\t\t\t\tfprintf(stderr, \"pbs_ralter: illegal -R time value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase 'q':\n\t\t\t\t/* destination can only be another server */\n\t\t\t\tif (optarg[0] != '@') {\n\t\t\t\t\tfprintf(stderr, \"pbs_ralter: illegal -q value: format \\\"@server\\\"\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tpbs_strncpy(dest, &optarg[1], OPT_BUF_LEN);\n\t\t\t\tbreak;\n\n\t\t\tcase 
'U':\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_auth_u, optarg);\n\t\t\t\tbreak;\n\t\t\tcase 'G':\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_auth_g, optarg);\n\t\t\t\tbreak;\n\n\t\t\tcase 'D':\n\t\t\t\tsnprintf(dur_buf, sizeof(dur_buf), \"%s\", optarg);\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_resv_duration, dur_buf);\n\t\t\t\talter_duration = TRUE;\n\t\t\t\tbreak;\n\t\t\tcase 'l':\n\t\t\t\tif (strncmp(optarg, \"select=\", 7) == 0)\n\t\t\t\t\tset_attr_resc_error_exit(&attrib, ATTR_l, \"select\", (optarg + 7));\n\t\t\t\telse {\n\t\t\t\t\tfprintf(stderr, \"pbs_ralter -l only allows for select\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'W':\n\t\t\t\tif (strcmp(optarg, \"force\") == 0)\n\t\t\t\t\tforce_alter = TRUE;\n\t\t\t\telse {\n\t\t\t\t\tfprintf(stderr, \"pbs_ralter: illegal -W value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\t/* pbs_ralter option not recognized */\n\t\t\t\terrflg++;\n\n\t\t} /* End of lengthy 'switch on option' construction */\n\t}\t  /* End of lengthy while loop on 'options' */\n\n\t/* Check that force option is used with 'R', 'E' or 'D' option */\n\tif ((force_alter == TRUE) && (dtstart == 0) && (dtend == 0) && (alter_duration == 0)) {\n\t\tfprintf(stderr, \"pbs_ralter: No support for requested service\\n\");\n\t\terrflg++;\n\t}\n\t*attrp = attrib;\n\treturn (errflg);\n}\n\n/*\n * @brief - prints correct usage of the command to the console.\n */\n\nstatic void\nprint_usage()\n{\n\tstatic char usag2[] = \"       pbs_ralter --version\\n\";\n\tstatic char usage[] =\n\t\t\"usage: pbs_ralter [-I seconds] [-m mail_points] [-M mail_list]\\n\"\n\t\t\"                [-N reservation_name] [-R start_time] [-E end_time]\\n\"\n\t\t\"                [-U (+/-)username[,(+/-)username]...]\\n\"\n\t\t\"                [-G [(+/-)group[,(+/-)group]...]]\\n\"\n\t\t\"                [-D duration]\\n\"\n\t\t\"                [-W force]\\n\"\n\t\t\"                resv_id\\n\";\n\tfprintf(stderr, \"%s\", 
usage);\n\tfprintf(stderr, \"%s\", usag2);\n}\n\n/**\n * @brief\n * \thandles attribute errors and prints appropriate errmsg\n *\n * @param[in] err_list - list of possible attribute errors\n *\n * @return - Void\n *\n */\nstatic void\nhandle_attribute_errors(struct ecl_attribute_errors *err_list)\n{\n\tstruct attropl *attribute = NULL;\n\tchar *opt = NULL;\n\tint i = 0;\n\n\tfor (i = 0; i < err_list->ecl_numerrors; i++) {\n\t\tattribute = err_list->ecl_attrerr[i].ecl_attribute;\n\t\tif (strcmp(attribute->name, ATTR_resv_duration) == 0)\n\t\t\topt = \"D\";\n\t\telse if (strcmp(attribute->name, ATTR_resv_end) == 0)\n\t\t\topt = \"E\";\n\t\telse if (strcmp(attribute->name, ATTR_auth_g) == 0)\n\t\t\topt = \"G\";\n\t\telse if (strcmp(attribute->name, ATTR_inter) == 0)\n\t\t\topt = \"I\";\n\t\telse if (strcmp(attribute->name, ATTR_m) == 0)\n\t\t\topt = \"m\";\n\t\telse if (strcmp(attribute->name, ATTR_M) == 0)\n\t\t\topt = \"M\";\n\t\telse if (strcmp(attribute->name, ATTR_resv_name) == 0)\n\t\t\topt = \"N\";\n\t\telse if (strcmp(attribute->name, ATTR_resv_start) == 0)\n\t\t\topt = \"R\";\n\t\telse if (strcmp(attribute->name, ATTR_auth_u) == 0)\n\t\t\topt = \"U\";\n\t\telse\n\t\t\treturn;\n\n\t\tCS_close_app();\n\t\tfprintf(stderr, \"pbs_ralter: illegal -%s value\\n\", opt);\n\t\tprint_usage();\n\t\texit(2);\n\t}\n}\n\nint\nmain(int argc, char *argv[], char *envp[]) /* pbs_ralter */\n{\n\tint errflg = 0;\t\t\t /* command line option error */\n\tint connect = -1;\t\t /* return from pbs_connect */\n\tchar *errmsg = NULL;\t\t /* return from pbs_geterrmsg */\n\tchar destbuf[OPT_BUF_LEN] = {0}; /* buffer for option server */\n\tstruct attrl *attrib = NULL;\t /* the attrib list */\n\tstruct ecl_attribute_errors *err_list = NULL;\n\tchar resv_id[PBS_MAXCLTJOBID] = {0};\n\tchar resv_id_out[PBS_MAXCLTJOBID] = {0};\n\tchar server_out[MAXSERVERNAME] = {0};\n\tchar *stat = NULL;\n\tchar *extend = NULL;\n\n\t/*test for real deal or just version and 
exit*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\tdestbuf[0] = '\\0';\n\terrflg = process_opts(argc, argv, &attrib, destbuf); /* get cmdline options */\n\n\tif (errflg || ((optind + 1) != argc) || argc == 1) {\n\t\tprint_usage();\n\t\texit(2);\n\t}\n\n\t/*perform needed security library initializations (including none)*/\n\n\tif (CS_client_init() != CS_SUCCESS) {\n\t\tfprintf(stderr, \"pbs_ralter: unable to initialize security library.\\n\");\n\t\texit(1);\n\t}\n\n\t/* Connect to the server */\n\tconnect = cnt2server(destbuf);\n\tif (connect <= 0) {\n\t\tfprintf(stderr, \"pbs_ralter: cannot connect to server %s (errno=%d)\\n\",\n\t\t\tpbs_server, pbs_errno);\n\t\tCS_close_app();\n\t\texit(pbs_errno);\n\t}\n\n\tpbs_strncpy(resv_id, argv[optind], sizeof(resv_id));\n\tif (get_server(resv_id, resv_id_out, server_out)) {\n\t\tfprintf(stderr, \"pbs_ralter: illegally formed reservation identifier: %s\\n\", resv_id);\n\t\texit(2);\n\t}\n\n\tif (force_alter == TRUE)\n\t\textend = \"force\";\n\tstat = pbs_modify_resv(connect, resv_id_out, (struct attropl *) attrib, extend);\n\n\tif (stat == NULL) {\n\t\tif ((err_list = pbs_get_attributes_in_error(connect)))\n\t\t\thandle_attribute_errors(err_list);\n\n\t\terrmsg = pbs_geterrmsg(connect);\n\t\tif (errmsg != NULL) {\n\t\t\tfprintf(stderr, \"pbs_ralter: %s\\n\", errmsg);\n\t\t} else\n\t\t\tfprintf(stderr, \"pbs_ralter: Error (%d) modifying reservation\\n\", pbs_errno);\n\t\tCS_close_app();\n\t\texit(pbs_errno);\n\t} else {\n\t\tprintf(\"pbs_ralter: %s\\n\", stat);\n\t\tfree(stat);\n\t}\n\t/* Disconnect from the server. */\n\tpbs_disconnect(connect);\n\n\tCS_close_app();\n\texit(0);\n}\n"
  },
  {
    "path": "src/cmds/pbs_rdel.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_rdel.c\n * @brief\n *  pbs_rdel - PBS command to delete reservations\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include <pbs_version.h>\n#include <stdio.h>\n#include <pbs_ifl.h>\n#include <cmds.h>\n\n/**\n * @brief\n *\tThe main function in C - entry point\n *\n * @param[in]  argc - argument count\n * @param[in]  argv - pointer to argument array\n * @param[in]  envp - pointer to environment values\n *\n * @return  int\n * @retval  0 - success\n * @retval  !0 - error\n */\nint\nmain(int argc, char **argv, char **envp)\n{\n\tint c;\n\tint errflg = 0;\n\tint any_failed = 0;\n\n\tchar resv_id[PBS_MAXCLTJOBID]; /* from the command line */\n\n\tchar resv_id_out[PBS_MAXCLTJOBID];\n\tchar server_out[MAXSERVERNAME];\n\n\t/* destqueue=queue + '@' + server + \\0 */\n\tchar dest_queue[PBS_MAXQUEUENAME + PBS_MAXSERVERNAME + 2 + 10] = {'\\0'};\n\n\t/*test for real deal or just version and exit*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\twhile ((c = getopt(argc, argv, \"q:\")) != EOF)\n\t\tswitch (c) {\n\t\t\tcase 'q':\n\t\t\t\tif (optarg[0] == '\\0') {\n\t\t\t\t\tfprintf(stderr, \"pbs_rdel: illegal -q value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tsprintf(dest_queue, \"destqueue=%s\", 
optarg);\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\terrflg++;\n\t\t}\n\n\tif (errflg || optind >= argc) {\n\t\tfprintf(stderr, \"usage:\\tpbs_rdel [-q dest] resv_identifier...\\n\");\n\t\tfprintf(stderr, \"      \\tpbs_rdel --version\\n\");\n\t\texit(2);\n\t}\n\n\tif (CS_client_init() != CS_SUCCESS) {\n\t\tfprintf(stderr, \"pbs_rdel: unable to initialize security library.\\n\");\n\t\texit(1);\n\t}\n\n\tfor (; optind < argc; optind++) {\n\t\tint connect;\n\t\tint stat = 0;\n\n\t\tpbs_strncpy(resv_id, argv[optind], sizeof(resv_id));\n\t\tif (get_server(resv_id, resv_id_out, server_out)) {\n\t\t\tfprintf(stderr, \"pbs_rdel: illegally formed reservation identifier: %s\\n\", resv_id);\n\t\t\tany_failed = 1;\n\t\t\tcontinue;\n\t\t}\n\n\t\tconnect = cnt2server(server_out);\n\t\tif (connect <= 0) {\n\t\t\tfprintf(stderr, \"pbs_rdel: cannot connect to server %s (errno=%d)\\n\",\n\t\t\t\tpbs_server, pbs_errno);\n\t\t\tany_failed = pbs_errno;\n\t\t\tcontinue;\n\t\t}\n\n\t\tstat = pbs_delresv(connect, resv_id_out, dest_queue);\n\t\tif (stat) {\n\t\t\tprt_job_err(\"pbs_rdel\", connect, resv_id_out);\n\t\t\tany_failed = pbs_errno;\n\t\t}\n\t\tpbs_disconnect(connect);\n\t}\n\tCS_close_app();\n\texit(any_failed);\n}\n"
  },
  {
    "path": "src/cmds/pbs_release_nodes.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    pbs_release_nodes.c\n *\n * @brief\n *\n * \tSend release nodes request to batch job.\n *\n */\n\n#include <pbs_config.h>\n\n#include <errno.h>\n#include \"cmds.h\"\n#include \"pbs_ifl.h\"\n#include <pbs_config.h> /* the master config generated by configure */\n#include <pbs_version.h>\n\n#define USAGE                                                                              \\\n\t\"usage: pbs_release_nodes [-j job_identifier] host_or_vnode1 host_or_vnode2 ...\\n\" \\\n\t\"       pbs_release_nodes [-j job_identifier] -a\\n\"                                \\\n\t\"       pbs_release_nodes [-j job_identifier] -k <select string>\\n\"                \\\n\t\"       pbs_release_nodes [-j job_identifier] -k <node count>\\n\"                   \\\n\t\"       pbs_release_nodes --version\\n\"\n\nint\nmain(int argc, char **argv, char **envp) /* pbs_release_nodes */\n{\n\tint c;\n\tint errflg = 0;\n\tint any_failed = 0;\n\n\tchar job_id[PBS_MAXCLTJOBID]; /* from the command line */\n\tchar job_id_out[PBS_MAXCLTJOBID];\n\tchar server_out[MAXSERVERNAME];\n\tchar rmt_server[MAXSERVERNAME];\n\tchar *keep_opt = NULL;\n\tint len;\n\tchar *node_list = NULL;\n\tint connect;\n\tint stat = 0;\n\tint k;\n\tint all_opt = 0;\n\n#define GETOPT_ARGS \"j:k:a\"\n\n\t/*test for real deal or just version and exit*/\n\tPRINT_VERSION_AND_EXIT(argc, 
argv);\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\tjob_id[0] = '\\0';\n\twhile ((c = getopt(argc, argv, GETOPT_ARGS)) != EOF) {\n\t\tswitch (c) {\n\t\t\tcase 'j':\n\t\t\t\tpbs_strncpy(job_id, optarg, sizeof(job_id));\n\t\t\t\tbreak;\n\t\t\tcase 'k':\n\t\t\t\tkeep_opt = optarg;\n\t\t\t\tbreak;\n\t\t\tcase 'a':\n\t\t\t\tall_opt = 1;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\terrflg++;\n\t\t}\n\t}\n\tif (job_id[0] == '\\0') {\n\t\tchar *jid;\n\t\tjid = getenv(\"PBS_JOBID\");\n\t\tpbs_strncpy(job_id, jid ? jid : \"\", sizeof(job_id));\n\t}\n\n\tif (all_opt && keep_opt) {\n\t\terrflg++;\n\t\tfprintf(stderr, \"pbs_release_nodes: -a and -k options cannot be used together\\n\");\n\t}\n\n\tif ((optind != argc) && keep_opt) {\n\t\terrflg++;\n\t\tfprintf(stderr, \"pbs_release_nodes: cannot supply node list with -k option\\n\");\n\t}\n\n\tif (errflg ||\n\t    ((optind == argc) && !(all_opt || keep_opt)) ||\n\t    ((optind != argc) && all_opt)) {\n\t\tfprintf(stderr, \"%s\", USAGE);\n\t\texit(2);\n\t}\n\n\tif (job_id[0] == '\\0') {\n\t\tfprintf(stderr, \"pbs_release_nodes: No jobid given\\n\");\n\t\texit(2);\n\t}\n\n\t/*perform needed security library initializations (including none)*/\n\n\tif (CS_client_init() != CS_SUCCESS) {\n\t\tfprintf(stderr, \"pbs_release_nodes: unable to initialize security library.\\n\");\n\t\texit(2);\n\t}\n\n\tlen = 0;\n\tfor (k = optind; k < argc; k++) {\n\t\tlen += (strlen(argv[k]) + 1); /* +1 for space */\n\t}\n\n\tnode_list = (char *) malloc(len + 1);\n\tif (node_list == NULL) {\n\t\tfprintf(stderr, \"failed to malloc to store data (error %d)\", errno);\n\t\texit(2);\n\t}\n\tnode_list[0] = '\\0';\n\n\tfor (k = optind; k < argc; k++) {\n\t\tif (k != optind)\n\t\t\tstrcat(node_list, \"+\");\n\t\tstrcat(node_list, argv[k]);\n\t}\n\tif (get_server(job_id, job_id_out, server_out)) {\n\t\tfprintf(stderr, \"pbs_release_nodes: illegally formed job identifier: %s\\n\", job_id);\n\t\tfree(node_list);\n\t\texit(2);\n\t}\n\n\tpbs_errno = 0;\n\tstat = 
0;\n\twhile (1) {\n\t\tconnect = cnt2server(server_out);\n\t\tif (connect <= 0) {\n\t\t\tfprintf(stderr,\n\t\t\t\t\"pbs_release_nodes: cannot connect to server %s (errno=%d)\\n\",\n\t\t\t\tpbs_server, pbs_errno);\n\t\t\tbreak;\n\t\t}\n\n\t\tstat = pbs_relnodesjob(connect, job_id_out, node_list, keep_opt);\n\t\tif (stat && (pbs_errno == PBSE_UNKJOBID)) {\n\t\t\tif (locate_job(job_id_out, server_out, rmt_server)) {\n\t\t\t\t/*\n\t\t\t\t * job located at a different server\n\t\t\t\t * retry connect on the new server\n\t\t\t\t */\n\t\t\t\tpbs_disconnect(connect);\n\t\t\t\tstrcpy(server_out, rmt_server);\n\t\t\t} else {\n\t\t\t\tprt_job_err(\"pbs_release_nodes\", connect, job_id_out);\n\t\t\t\tbreak;\n\t\t\t}\n\t\t} else {\n\t\t\tchar *info_msg;\n\n\t\t\tif (stat && (pbs_errno != PBSE_UNKJOBID)) {\n\t\t\t\tprt_job_err(\"pbs_release_nodes\", connect, \"\");\n\t\t\t} else if ((info_msg = pbs_geterrmsg(connect)) != NULL) {\n\t\t\t\t/* print potential warning message */\n\t\t\t\tprintf(\"pbs_release_nodes: %s\\n\", info_msg);\n\t\t\t}\n\t\t\tbreak;\n\t\t}\n\t}\n\tany_failed = pbs_errno;\n\n\tpbs_disconnect(connect);\n\n\t/*cleanup security library initializations before exiting*/\n\tCS_close_app();\n\n\texit(any_failed);\n}\n"
  },
  {
    "path": "src/cmds/pbs_remsh.in",
    "content": "#!/bin/sh\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nif [ $# -eq 1 ] && [ $1 = \"--version\" ]; then\n   echo pbs_version = @PBS_VERSION@\n   exit 0\nfi\n\n# pbs_remsh might be called in a non-PBS environment causing\n# PBS_JOBID and PBS_RSHCOMMAND to not exist. 
The fix is to pass\n# the values to these as arguments to pbs_remsh.\n\nif [ $# -lt 2 ]; then\n        echo \"Usage: $0 --version\"\n        echo \"Usage: $0 [-j jobid] [-r rshcmd] host [-n] [-l username] command\"\n        exit 1;\nfi\n\njobid=\"\"\nrshcmd=\"\"\nwhile [ $# -gt 1 ]; do\n        if [ \"XX$1\" = \"XX-j\" ]; then\n                shift;\n                jobid=$1\n                shift;\n        elif [ \"XX$1\" = \"XX-r\" ]; then\n                shift;\n                rshcmd=$1\n                shift;\n        else\n                break;\n        fi\ndone\n\nhost=\"$1\"\nshift\n\nwhile [ $# -gt 1 ]; do\n\tif [ \"XX$1\" = \"XX-n\" ]; then\n\t\tshift;\n\telif [ \"XX$1\" = \"XX-l\" ]; then\n\t\tshift;\n\t\tshift;\n\telse\n\t\tbreak;\n\tfi\ndone\n\nif [ ! -z \"$jobid\" ] ; then\n        export PBS_JOBID=$jobid\nfi\n\nif [ ! -z \"$rshcmd\" ] ; then\n        export PBS_RSHCOMMAND=$rshcmd\nfi\n\nremsh=\"${PBS_RSHCOMMAND:-rsh} -n\"\n$remsh \"$host\" pbs_attach -j \"$PBS_JOBID\" $*\n"
  },
  {
    "path": "src/cmds/pbs_rstat.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file pbs_rstat.c\n */\n#include <pbs_config.h> /* the master config generated by configure */\n#include <pbs_version.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <time.h>\n#include <errno.h>\n#include \"cmds.h\"\n#include \"pbs_ifl.h\"\n\n#define DISP_RESV_FULL 0x01    /* -F,-f option - full verbose description */\n#define DISP_RESV_NAMES 0x02   /* -B option - Reservation names only */\n#define DISP_RESV_DEFAULT 0x04 /* -S option - Default, Short Description */\n#define DISP_INCR_WIDTH 0x08   /*  Increases the header width */\n\n/* prototypes */\nchar *convert_resv_state(char *pcode, int long_str);\nvoid handle_resv(char *resv_id, char *server, int how);\nstatic int check_width;\n\n/**\n * @brief\n *\tdisplay_single_reservation - display a single reservation\n *\n * @param[in] resv - the reservation to display\n * @param[in] how - 1: long form (all resv info)\n * \t\t      2: print reservation name\n *\t\t      4: short form\n *                  8: increase header width\n * @return Void\n *\n */\nvoid\ndisplay_single_reservation(struct batch_status *resv, int how)\n{\n\tchar *queue_name = NULL;\n#ifdef NAS /* localmod 075 */\n\tchar *resv_name = NULL;\n#endif /* localmod 075 */\n\tchar *user = NULL;\n\tchar *resv_state = NULL;\n\tchar *resv_start = NULL;\n\tchar *resv_end = NULL;\n\ttime_t resv_duration = 0;\n\tchar 
*str;\n\tstruct attrl *attrp = NULL;\n\ttime_t tmp_time;\n\tchar tbuf[64];\n\tchar *fmt = \"%a %b %d %H:%M:%S %Y\";\n\tattrp = resv->attribs;\n\n\tif (how & DISP_RESV_NAMES) { /* display just name of the reservation */\n\t\tprintf(\"Resv ID: %s\\n\", resv->name);\n\t} else if (how & DISP_RESV_DEFAULT) { /* display short form, default*/\n\t\twhile (attrp != NULL) {\n\t\t\tif (strcmp(attrp->name, ATTR_queue) == 0)\n\t\t\t\tqueue_name = attrp->value;\n\t\t\telse if (strcmp(attrp->name, ATTR_auth_u) == 0)\n\t\t\t\tuser = attrp->value;\n\t\t\telse if (strcmp(attrp->name, ATTR_resv_start) == 0)\n\t\t\t\tresv_start = attrp->value;\n\t\t\telse if (strcmp(attrp->name, ATTR_resv_end) == 0)\n\t\t\t\tresv_end = attrp->value;\n\t\t\telse if (strcmp(attrp->name, ATTR_resv_duration) == 0)\n\t\t\t\tresv_duration = strtol(attrp->value, NULL, 10);\n\t\t\telse if (strcmp(attrp->name, ATTR_resv_state) == 0)\n\t\t\t\tresv_state = convert_resv_state(attrp->value, 0); /*short state str*/\n#ifdef NAS\t\t\t\t\t\t\t\t\t  /* localmod 075 */\n\t\t\telse if (strcmp(attrp->name, ATTR_resv_name) == 0)\n\t\t\t\tresv_name = attrp->value;\n#endif /* localmod 075 */\n\n\t\t\tattrp = attrp->next;\n\t\t}\n\t\tif (how & DISP_INCR_WIDTH) {\n\t\t\tprintf(\"%-15.15s %-13.13s %-8.8s %-5.5s \",\n#ifdef NAS /* localmod 075 */\n\t\t\t       (resv_name ? resv_name : resv->name),\n\t\t\t       queue_name, user, resv_state);\n#else\n\t\t\t       resv->name, queue_name, user, resv_state);\n#endif\n\t\t} else {\n\t\t\tprintf(\"%-10.10s %-8.8s %-8.8s %-5.5s \",\n#ifdef NAS /* localmod 075 */\n\t\t\t       (resv_name ? 
resv_name : resv->name),\n\t\t\t       queue_name, user, resv_state);\n#else\n\t\t\t       resv->name, queue_name, user, resv_state);\n#endif /* localmod 075 */\n\t\t}\n\t\tprintf(\"%17.17s / \", convert_time(resv_start));\n\t\tprintf(\"%ld / %-17.17s\\n\", (long) resv_duration, convert_time(resv_end));\n\t} else { /*display long form (all reservation info)*/\n\t\tprintf(\"Resv ID: %s\\n\", resv->name);\n\t\twhile (attrp != NULL) {\n\t\t\tif (attrp->resource != NULL)\n\t\t\t\tprintf(\"%s.%s = %s\\n\", attrp->name, attrp->resource, show_nonprint_chars(attrp->value));\n\t\t\telse {\n\t\t\t\tif (strcmp(attrp->name, ATTR_resv_state) == 0) {\n\t\t\t\t\tstr = convert_resv_state(attrp->value, 1); /* long state str */\n\t\t\t\t} else if (strcmp(attrp->name, ATTR_resv_start) == 0 ||\n\t\t\t\t\t   strcmp(attrp->name, ATTR_resv_end) == 0 ||\n\t\t\t\t\t   strcmp(attrp->name, ATTR_ctime) == 0 ||\n\t\t\t\t\t   strcmp(attrp->name, ATTR_mtime) == 0 ||\n\t\t\t\t\t   strcmp(attrp->name, ATTR_resv_retry) == 0) {\n\t\t\t\t\ttmp_time = atol(attrp->value);\n\t\t\t\t\tstrftime(tbuf, sizeof(tbuf), fmt, localtime((time_t *) &tmp_time));\n\t\t\t\t\tstr = tbuf;\n\t\t\t\t} else if (!strcmp(attrp->name, ATTR_resv_execvnodes)) {\n\t\t\t\t\tattrp = attrp->next;\n\t\t\t\t\tcontinue;\n\t\t\t\t} else if (!strcmp(attrp->name, ATTR_resv_standing)) {\n\t\t\t\t\tattrp = attrp->next;\n\t\t\t\t\tcontinue;\n\t\t\t\t} else if (!strcmp(attrp->name, ATTR_resv_timezone)) {\n\t\t\t\t\tattrp = attrp->next;\n\t\t\t\t\tcontinue;\n\t\t\t\t} else if (!strcmp(attrp->name, ATTR_resv_count)) {\n\t\t\t\t\tstr = attrp->value;\n\t\t\t\t} else if (!strcmp(attrp->name, ATTR_resv_rrule)) {\n\t\t\t\t\tstr = attrp->value;\n\t\t\t\t} else if (!strcmp(attrp->name, ATTR_resv_idx)) {\n\t\t\t\t\tstr = attrp->value;\n\t\t\t\t} else {\n\t\t\t\t\tstr = attrp->value;\n\t\t\t\t}\n\t\t\t\tprintf(\"%s = %s\\n\", attrp->name, show_nonprint_chars(str));\n\t\t\t}\n\t\t\tattrp = attrp->next;\n\t\t}\n\t\tprintf(\"\\n\");\n\t}\n}\n\n/**\n * 
@brief\n *\tdisplay - display the resv data\n *\n * @param[in] resv - the batch_status list to display\n * @param[in] how - 1: long form (all resv info)\n *\t\t      2: print reservation name\n *\t\t      4: short form\n *\t\t      8: increase header width\n *\n * @return Void\n *\n */\nvoid\ndisplay(struct batch_status *resv, int how)\n{\n\tstruct batch_status *cur; /* loop var - current batch_status in loop */\n\tstatic char no_display = 0;\n\n\tif (resv == NULL)\n\t\treturn;\n\n\tcur = resv;\n\n\tif ((how & DISP_RESV_DEFAULT) && (!no_display)) {\n#ifdef NAS /* localmod 075 */\n\t\tif (how & DISP_INCR_WIDTH)\n\t\t\tprintf(\"%-15.15s %-13.13s %-8.8s %-5.5s %17.17s / Duration / %s\\n\",\n\t\t\t       \"Resv ID\", \"Queue\", \"User\", \"State\", \"Start\", \"End\");\n\t\telse\n\t\t\tprintf(\"%-10.10s %-8.8s %-8.8s %-5.5s %17.17s / Duration / %s\\n\",\n\t\t\t       \"Resv ID\", \"Queue\", \"User\", \"State\", \"Start\", \"End\");\n#else\n\t\tif (how & DISP_INCR_WIDTH) {\n\t\t\tprintf(\"%-15.15s %-13.13s %-8.8s %-5.5s %17.17s / Duration / %-17.17s\\n\",\n\t\t\t       \"Resv ID\", \"Queue\", \"User\", \"State\", \"Start\", \"End\");\n\t\t\tprintf(\"-------------------------------------------------------------------------------\\n\");\n\t\t} else {\n\t\t\tprintf(\"%-10.10s %-8.8s %-8.8s %-5.5s %17.17s / Duration / %-17.17s\\n\",\n\t\t\t       \"Resv ID\", \"Queue\", \"User\", \"State\", \"Start\", \"End\");\n\t\t\tprintf(\"---------------------------------------------------------------------\\n\");\n\t\t}\n#endif /* localmod 075 */\n\n\t\t/* only display header once */\n\t\tno_display = 1;\n\t}\n\n\twhile (cur != NULL) {\n\t\tdisplay_single_reservation(cur, how);\n\t\tcur = cur->next;\n\t}\n}\n\nint\nmain(int argc, char *argv[])\n{\n\tint c;\t\t\t     /* for getopts() */\n\tint how = DISP_RESV_DEFAULT; /* how the reservation should be displayed, default to short listing */\n\tint errflg = 0;\n\tint i;\n\tchar *resv_id; /* reservation ID from the command line */\n\tchar resv_id_out[PBS_MAXCLTJOBID];\n\tchar server_out[MAXSERVERNAME];\n\t/*test for real deal or just version and exit*/\n\n\tPRINT_VERSION_AND_EXIT(argc, 
argv);\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\twhile ((c = getopt(argc, argv, \"fFBS\")) != EOF) {\n\t\tswitch (c) {\n\t\t\tcase 'F': /* full verbose description */\n\t\t\tcase 'f':\n\t\t\t\thow = DISP_RESV_FULL;\n\t\t\t\tbreak;\n\n\t\t\tcase 'B': /* Brief, just the names */\n\t\t\t\thow = DISP_RESV_NAMES;\n\t\t\t\tbreak;\n\n\t\t\tcase 'S': /* short desc, default */\n\t\t\t\thow = DISP_RESV_DEFAULT;\n\t\t\t\tbreak;\n\n\t\t\tdefault:\n\t\t\t\terrflg = 1;\n\t\t}\n\t}\n\n\tif (errflg) {\n\t\tfprintf(stderr, \"Usage:\\n\\tpbs_rstat [-fFBS] [reservation-id]\\n\");\n\t\tfprintf(stderr, \"\\tpbs_rstat --version\\n\");\n\t\texit(1);\n\t}\n\n\tif (CS_client_init() != CS_SUCCESS) {\n\t\tfprintf(stderr, \"pbs_rstat: unable to initialize security library.\\n\");\n\t\texit(1);\n\t}\n\n\tif (optind == argc)\n\t\thandle_resv(NULL, NULL, how);\n\telse {\n\t\tfor (i = optind; i < argc; i++) {\n\t\t\tresv_id = argv[i];\n\n#ifdef NAS /* localmod 075 */\n\t\t\tif (*resv_id == '@') {\n\t\t\t\thandle_resv(NULL, resv_id + 1, how);\n\t\t\t\tcontinue;\n\t\t\t} else if (get_server(resv_id, resv_id_out, server_out)) {\n#else\n\t\t\tif (get_server(resv_id, resv_id_out, server_out)) {\n#endif /* localmod 075 */\n\t\t\t\tfprintf(stderr,\n\t\t\t\t\t\"pbs_rstat: illegally formed reservation identifier: %s\\n\", resv_id);\n\t\t\t\terrflg = 1;\n#ifdef NAS /* localmod 075 */\n\t\t\t\tcontinue;\n\t\t\t}\n#else\n\t\t\t} else\n#endif /* localmod 075 */\n\t\t\thandle_resv(resv_id_out, server_out, how);\n\t\t}\n\t}\n\tCS_close_app();\n\texit(errflg);\n}\n\n/**\n * @brief\n *\thandle_resv - handle connecting to the server, and displaying the\n *\t\t\treservations\n *\n * @param[in] resv_id - the id of the reservation\n * @param[in] server  - the server to connect to\n * @param[in] how     - 1: full display (all attributes)\n *                      2: just the names\n *                      4: short display\n *                      8: increase header width\n *\n * @return Void\n *\n */\nvoid\nhandle_resv(char *resv_id, char *server, int
 how)\n{\n\tint pbs_sd;\n\tstruct batch_status *bstat;\n\tchar *errmsg;\n\t/* for dynamic pbs_rstat width */\n\tstruct batch_status *server_attrs;\n\n\tpbs_sd = cnt2server(server);\n\tif (pbs_sd < 0) {\n\t\tfprintf(stderr, \"pbs_rstat: cannot connect to server (errno=%d)\\n\",\n\t\t\tpbs_errno);\n\t\tCS_close_app();\n\t\texit(pbs_errno);\n\t}\n\n\t/* check the server attribute max_job_sequence_id value */\n\tif (check_width == 0) {\n\t\tserver_attrs = pbs_statserver(pbs_sd, NULL, NULL);\n\t\tif (server_attrs == NULL && pbs_errno != PBSE_NONE) {\n\t\t\tif ((errmsg = pbs_geterrmsg(pbs_sd)) != NULL)\n\t\t\t\tfprintf(stderr, \"pbs_rstat: %s\\n\", errmsg);\n\t\t\telse\n\t\t\t\tfprintf(stderr, \"pbs_rstat: Error %d\\n\", pbs_errno);\n\t\t\treturn;\n\t\t}\n\n\t\tif (server_attrs != NULL) {\n\t\t\tint check_seqid_len;\n\t\t\tcheck_seqid_len = check_max_job_sequence_id(server_attrs);\n\t\t\tif (check_seqid_len == 1) {\n\t\t\t\thow |= DISP_INCR_WIDTH; /* increased column width*/\n\t\t\t}\n\t\t\tpbs_statfree(server_attrs);\n\t\t\tserver_attrs = NULL;\n\t\t\tcheck_width = 1;\n\t\t}\n\t}\n\n\tbstat = pbs_statresv(pbs_sd, resv_id, NULL, NULL);\n\tif (pbs_errno) {\n\t\tif ((errmsg = pbs_geterrmsg(pbs_sd)) != NULL)\n\t\t\tfprintf(stderr, \"pbs_rstat: %s\\n\", errmsg);\n\t\telse\n\t\t\tfprintf(stderr, \"pbs_rstat: Error %d\\n\", pbs_errno);\n\t}\n\n\tdisplay(bstat, how);\n\tpbs_statfree(bstat);\n}\n\n/**\n * @brief\n *\tconvert_resv_state - convert the reservation state from a\n *\t\t\t     string integer enum resv_states value into\n *\t\t\t     a human readable string\n *\n * @param[in] pcode - the string enum value\n * @param[in] long_str - int value to indicate short or long human readable string to be printed\n *\n * @return - string\n * @retval   \"state of reservation\"\n *\n */\nchar *\nconvert_resv_state(char *pcode, int long_str)\n{\n\tint i;\n\tstatic char *resv_strings_short[] =\n\t\t{\"NO\", \"UN\", \"CO\", \"WT\", \"TR\", \"RN\", \"FN\", \"BD\", \"DE\", \"DJ\", \"DG\", \"AL\", \"IC\"};\n\tstatic char *resv_strings_long[] =\n\t\t{\"RESV_NONE\", 
\"RESV_UNCONFIRMED\", \"RESV_CONFIRMED\",\n\t\t \"RESV_WAIT\", \"RESV_TIME_TO_RUN\", \"RESV_RUNNING\",\n\t\t \"RESV_FINISHED\", \"RESV_BEING_DELETED\", \"RESV_DELETED\",\n\t\t \"RESV_DELETING_JOBS\", \"RESV_DEGRADED\", \"RESV_BEING_ALTERED\",\n\t\t \"RESV_IN_CONFLICT\"};\n\n\ti = atoi(pcode);\n\tswitch (i) {\n\t\tcase RESV_NONE:\n\t\tcase RESV_UNCONFIRMED:\n\t\tcase RESV_CONFIRMED:\n\t\tcase RESV_DEGRADED:\n\t\tcase RESV_WAIT:\n\t\tcase RESV_TIME_TO_RUN:\n\t\tcase RESV_RUNNING:\n\t\tcase RESV_FINISHED:\n\t\tcase RESV_BEING_DELETED:\n\t\tcase RESV_DELETED:\n\t\tcase RESV_DELETING_JOBS:\n\t\tcase RESV_BEING_ALTERED:\n\t\tcase RESV_IN_CONFLICT:\n\t\t\tif (long_str == 0) /* short */\n\t\t\t\treturn resv_strings_short[i];\n\t\t\telse\n\t\t\t\treturn resv_strings_long[i];\n\t}\n\treturn pcode;\n}\n"
  },
  {
    "path": "src/cmds/pbs_rsub.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_rsub.c\n * @brief\n *  pbs_rsub - PBS command to submit reservations\n */\n#include <pbs_config.h> /* the master config generated by configure */\n#include <pbs_version.h>\n\n#include <sys/types.h>\n#include <sys/time.h>\n#include <errno.h>\n#include <pbs_ifl.h>\n#include \"cmds.h\"\n#include \"net_connect.h\"\n#include \"attribute.h\"\n#include \"portability.h\"\n\n#define DEFAULT_INTERACTIVE \"-10\"\n#define OPT_BUF_LEN 256\n\nstatic struct attrl *attrib = NULL;\nstatic int qmoveflg = FALSE;\nstatic time_t dtstart;\nstatic time_t dtend;\nstatic int is_stdng_resv = 0;\nstatic int is_maintenance_resv = 0;\nstatic int is_job_resv = 0;\nstatic char **maintenance_hosts = NULL;\n\n/* The maximum buffer size that is allowed not to exceed 80 columns.\n * The number 67 (66 chars + 1 EOL) is the result of subtracting the number\n * of characters to print \"reserve_rrule=\" (14 chars) via pbs_rstat.\n */\nchar rrule[67];\n\n/**\n * @brief\n *\tprocesses the argument list for pbs_rsub and validates\n *\tand sets attributes according to the argument values\n *\n * @param[in] argc - commandline args count\n * @param[in] argv - pointer to argument list\n * @param[out] attrp - attribute list that gets populated\n * @param[in] dest - server option\n *\n * @return errflg\n * @retval 0  Success\n * @retval  !0 Failure\n *\n */\nint\nprocess_opts(int argc, char **argv, struct attrl **attrp, char 
*dest)\n{\n\tint c, i;\n\tchar *erp;\n\tint errflg = 0;\n\tchar *keyword;\n\tchar *valuewd;\n\ttime_t t;\n\tchar *pc;\n\tint hhmm = FALSE;\n\n\tchar time_buf[80];\n\tchar dur_buf[800];\n\tchar badw[] = \"pbs_rsub: illegal -W value\\n\";\n\tint opt_re_flg = FALSE;\n\tint opt_inter_flg = FALSE;\n\tint opt_res_req_flg = FALSE;\n\n\twhile ((c = getopt(argc, argv, \"D:E:I:l:m:M:N:q:r:R:u:U:g:G:H:W:-:\")) != EOF) {\n\t\tswitch (c) {\n\t\t\tcase 'D':\n\t\t\t\tsprintf(dur_buf, \"walltime=%s\", optarg);\n\t\t\t\tif ((i = set_resources(&attrib, dur_buf, 0, &erp)) != 0) {\n\t\t\t\t\tfprintf(stderr, \"pbs_rsub: illegal -D value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase 'E':\n\t\t\t\topt_re_flg = TRUE;\n\t\t\t\tt = cvtdate(optarg);\n\t\t\t\tif (t >= 0) {\n\t\t\t\t\t(void) sprintf(time_buf, \"%ld\", (long) t);\n\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_resv_end, time_buf);\n\t\t\t\t\tdtend = t;\n\t\t\t\t} else {\n\t\t\t\t\tfprintf(stderr, \"pbs_rsub: illegal -E time value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase 'I':\n\t\t\t\topt_inter_flg = TRUE;\n\t\t\t\tif ((optarg == NULL) || (*optarg == '\\0'))\n\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_inter, \"0\");\n\t\t\t\telse {\n\t\t\t\t\tchar *endptr;\n\t\t\t\t\t(void) strtol(optarg, &endptr, 0);\n\t\t\t\t\tif (*endptr == '\\0') {\n\t\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_inter, optarg);\n\t\t\t\t\t} else {\n\t\t\t\t\t\tfprintf(stderr, \"pbs_rsub: illegal -I time value\\n\");\n\t\t\t\t\t\terrflg++;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase 'l':\n\t\t\t\topt_res_req_flg = TRUE;\n\t\t\t\tif ((i = set_resources(&attrib, optarg, 0, &erp)) != 0) {\n\t\t\t\t\tif (i > 1) {\n\t\t\t\t\t\tpbs_prt_parse_err(\"pbs_rsub: illegal -l value\\n\", optarg,\n\t\t\t\t\t\t\t\t  (int) (erp - optarg), i);\n\t\t\t\t\t} else\n\t\t\t\t\t\tfprintf(stderr, \"pbs_rsub: illegal -l value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase 'm':\n\t\t\t\twhile (isspace((int) 
*optarg))\n\t\t\t\t\toptarg++;\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_m, optarg);\n\t\t\t\tbreak;\n\n\t\t\tcase 'M':\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_M, optarg);\n\t\t\t\tbreak;\n\n\t\t\tcase 'N':\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_resv_name, optarg);\n\t\t\t\tbreak;\n\n\t\t\tcase 'q':\n\t\t\t\t/* destination can only be another server */\n\t\t\t\tif (optarg[0] != '@') {\n\t\t\t\t\tfprintf(stderr, \"pbs_rsub: illegal -q value: format \\\"@server\\\"\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tpbs_strncpy(dest, &optarg[1], OPT_BUF_LEN);\n\t\t\t\tbreak;\n\n\t\t\tcase 'R':\n\t\t\t\topt_re_flg = TRUE;\n\t\t\t\tt = cvtdate(optarg);\n\t\t\t\tif (t >= 0) {\n\t\t\t\t\t(void) sprintf(time_buf, \"%ld\", (long) t);\n\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_resv_start, time_buf);\n\t\t\t\t\tdtstart = t;\n\t\t\t\t} else {\n\t\t\t\t\tfprintf(stderr, \"pbs_rsub: illegal -R time value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t}\n\t\t\t\tif ((pc = strchr(optarg, (int) '.')) != 0) {\n\t\t\t\t\tif ((pc - optarg) == 4)\n\t\t\t\t\t\thhmm = TRUE;\n\t\t\t\t} else if ((strlen(optarg)) == 4)\n\t\t\t\t\thhmm = TRUE;\n\t\t\t\tbreak;\n\n\t\t\tcase 'r':\n\t\t\t\tis_stdng_resv = 1;\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_resv_rrule, optarg);\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_resv_standing, \"1\");\n\t\t\t\tif (strlen(optarg) > sizeof(rrule) - 1) {\n\t\t\t\t\tfprintf(stderr, \"pbs_rsub: illegal -r value (expression too long)\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tpbs_strncpy(rrule, optarg, sizeof(rrule));\n\t\t\t\tbreak;\n\n\t\t\tcase 'u':\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_u, optarg);\n\t\t\t\tbreak;\n\n\t\t\tcase 'U':\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_auth_u, optarg);\n\t\t\t\tbreak;\n\n\t\t\tcase 'g':\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_g, optarg);\n\t\t\t\tbreak;\n\n\t\t\tcase 'G':\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_auth_g, optarg);\n\t\t\t\tbreak;\n\n\t\t\tcase 
'H':\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_auth_h, optarg);\n\t\t\t\tbreak;\n\n\t\t\tcase 'W':\n\t\t\t\twhile (isspace((int) *optarg))\n\t\t\t\t\toptarg++;\n\n\t\t\t\tif (strlen(optarg) == 0) {\n\t\t\t\t\tfprintf(stderr, \"pbs_rsub: illegal -W value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\n\t\t\t\ti = parse_equal_string(optarg, &keyword, &valuewd);\n\t\t\t\twhile (i == 1) {\n\t\t\t\t\tif (strcmp(keyword, ATTR_convert) == 0)\n\t\t\t\t\t\tqmoveflg = TRUE;\n\n\t\t\t\t\tset_attr_error_exit(&attrib, keyword, valuewd);\n\n\t\t\t\t\t/* move to next attribute in this \"-W\" specification */\n\n\t\t\t\t\ti = parse_equal_string(NULL, &keyword, &valuewd);\n\t\t\t\t}\n\n\t\t\t\tif (i == -1) {\n\t\t\t\t\tfprintf(stderr, \"%s\", badw);\n\t\t\t\t\terrflg++;\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase '-':\n\t\t\t\tif (strcmp(optarg, \"hosts\") == 0)\n\t\t\t\t\tis_maintenance_resv = 1;\n\t\t\t\telse if (strcmp(optarg, \"job\") == 0) {\n\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_resv_job, argv[optind]);\n\t\t\t\t\tis_job_resv = 1;\n\t\t\t\t\t++optind;\n\t\t\t\t} else\n\t\t\t\t\terrflg++;\n\t\t\t\tbreak;\n\n\t\t\tdefault:\n\t\t\t\t/* pbs_rsub option not recognized */\n\t\t\t\terrflg++;\n\n\t\t} /* End of lengthy 'switch on option' construction */\n\t}\t  /* End of lengthy while loop on 'options' */\n\n\tif (opt_re_flg == TRUE && qmoveflg == TRUE) {\n\t\tfprintf(stderr, \"pbs_rsub: -Wqmove is not compatible with -R or -E option\\n\");\n\t\terrflg++;\n\t}\n\n\tif (opt_inter_flg && is_maintenance_resv) {\n\t\tfprintf(stderr, \"pbs_rsub: can't use -I with --hosts\\n\");\n\t\terrflg++;\n\t}\n\n\tif (opt_res_req_flg && is_maintenance_resv) {\n\t\tfprintf(stderr, \"pbs_rsub: can't use -l with --hosts\\n\");\n\t\terrflg++;\n\t}\n\n\tif (is_maintenance_resv) {\n\t\tchar **hostp = NULL;\n\t\tint num_hosts = argc - optind;\n\n\t\tif (num_hosts > 0) {\n\t\t\tint i;\n\n\t\t\tmaintenance_hosts = malloc(sizeof(char *) * (num_hosts + 1));\n\t\t\tif (maintenance_hosts == NULL) 
{\n\t\t\t\tfprintf(stderr, \"pbs_rsub: Out of memory\\n\");\n\t\t\t\treturn (++errflg);\n\t\t\t}\n\n\t\t\tmaintenance_hosts[0] = NULL;\n\n\t\t\tfor (i = 0; optind < argc; optind++, i++) {\n\t\t\t\thostp = maintenance_hosts;\n\t\t\t\tfor (; *hostp; hostp++) {\n\t\t\t\t\tif (strcmp(*hostp, argv[optind]) == 0) {\n\t\t\t\t\t\tfprintf(stderr, \"pbs_rsub: Duplicate host: %s\\n\", argv[optind]);\n\t\t\t\t\t\treturn (++errflg);\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif (strlen(argv[optind]) == 0) {\n\t\t\t\t\ti--;\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\n\t\t\t\tmaintenance_hosts[i + 1] = NULL;\n\t\t\t\tmaintenance_hosts[i] = strdup(argv[optind]);\n\t\t\t\tif (maintenance_hosts[i] == NULL) {\n\t\t\t\t\tfprintf(stderr, \"pbs_rsub: Out of memory\\n\");\n\t\t\t\t\treturn (++errflg);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif (maintenance_hosts == NULL) {\n\t\t\tfprintf(stderr, \"pbs_rsub: missing host(s)\\n\");\n\t\t\treturn (++errflg);\n\t\t}\n\t}\n\n\tif (is_job_resv && ((dtstart != 0) || (dtend != 0))) {\n\t\tfprintf(stderr, \"pbs_rsub: Start/End time cannot be used with --job option\");\n\t\tfprintf(stderr, \"\\n\");\n\t\treturn (++errflg);\n\t}\n\n\tif ((hhmm == TRUE) && (dtend != 0) && (dtend < dtstart)) {\n\t\t/* if end time is behind the start time, move it to the next day */\n\t\ttime_t skew = 60 * 60 * 24;\n\t\tdtend += skew;\n\t\tsprintf(time_buf, \"%ld\", (long) dtend);\n\t\tset_attr_error_exit(&attrib, ATTR_resv_end, time_buf);\n\t}\n\n\tif (!errflg) {\n\t\terrflg = (optind != argc);\n\t\tif (errflg) {\n\t\t\tfprintf(stderr, \"pbs_rsub: directive error: \");\n\t\t\tfor (optind = 1; optind < argc; optind++)\n\t\t\t\tfprintf(stderr, \"%s \", argv[optind]);\n\t\t\tfprintf(stderr, \"\\n\");\n\t\t}\n\t}\n\n\t*attrp = attrib;\n\treturn (errflg);\n}\n\n/**\n * @brief\n *\tsets the environment for reservation\n *\n * @param[in] envp - pointer to pointer to the environment variable\n *\n * @return - Boolean value\n * @retval   TRUE  Success\n * @retval   FALSE Failure\n *\n 
*/\nint\nset_resv_env(char **envp)\n{\n\tchar *resv_env;\n\tchar *c, *env;\n\tchar host[PBS_MAXHOSTNAME + 1];\n\tint len;\n\tint rc;\n\n\t/* Calculate how big to make the variable string. */\n\tlen = 0;\n\tenv = getenv(\"HOME\");\n\tif (env != NULL)\n\t\tlen += strlen(env);\n\tenv = getenv(\"LANG\");\n\tif (env != NULL)\n\t\tlen += strlen(env);\n\tenv = getenv(\"LOGNAME\");\n\tif (env != NULL)\n\t\tlen += strlen(env);\n\tenv = getenv(\"PATH\");\n\tif (env != NULL)\n\t\tlen += strlen(env);\n\tenv = getenv(\"MAIL\");\n\tif (env != NULL)\n\t\tlen += strlen(env);\n\tenv = getenv(\"SHELL\");\n\tif (env != NULL)\n\t\tlen += strlen(env);\n\tenv = getenv(\"TZ\");\n\tif (env != NULL)\n\t\tlen += strlen(env);\n\tlen += PBS_MAXHOSTNAME;\n\tlen += MAXPATHLEN;\n\tlen += len; /* Double it for all the commas, etc. */\n\tif ((resv_env = (char *) malloc(len)) == NULL) {\n\t\tfprintf(stderr, \"pbs_rsub: Out of memory\\n\");\n\t\treturn FALSE;\n\t}\n\t*resv_env = '\\0';\n\n\t/* Send the required variables with the reservation. 
*/\n\tc = getenv(\"LOGNAME\");\n\tif (c != NULL) {\n\t\tstrcat(resv_env, \"PBS_O_LOGNAME=\");\n\t\tstrcat(resv_env, c);\n\t}\n\tif ((rc = gethostname(host, (sizeof(host) - 1))) == 0) {\n\t\tif ((rc = get_fullhostname(host, host, (sizeof(host) - 1))) == 0) {\n\t\t\tif (*resv_env)\n\t\t\t\tstrcat(resv_env, \",PBS_O_HOST=\");\n\t\t\telse\n\t\t\t\tstrcat(resv_env, \"PBS_O_HOST=\");\n\t\t\tstrcat(resv_env, host);\n\t\t}\n\t}\n\n\tc = getenv(\"MAIL\");\n\tif (c != NULL) {\n\t\tfix_path(c, 1);\n\t\tstrcat(resv_env, \",PBS_O_MAIL=\");\n\t\tstrcat(resv_env, c);\n\t}\n\tif (rc != 0) {\n\t\tfprintf(stderr, \"pbs_rsub: cannot get full local host name\\n\");\n\t\texit(3);\n\t}\n\tc = getenv(\"PBS_TZID\");\n\tif (c != NULL) {\n\t\tstrcat(resv_env, \",PBS_TZID=\");\n\t\tstrcat(resv_env, c);\n\t\tset_attr_error_exit(&attrib, ATTR_resv_timezone, c);\n\t} else if (is_stdng_resv) {\n\t\tfprintf(stderr, \"pbs_rsub error: a valid PBS_TZID timezone environment variable is required.\\n\");\n\t\texit(2);\n\t}\n\n\tset_attr_error_exit(&attrib, ATTR_v, resv_env);\n\tfree(resv_env);\n\treturn TRUE;\n}\n\n/**\n * @brief\n *\tconverts and processes the attribute values\n *\n * @param[in] connect - connection handle to the server\n * @param[in] attrp   - attribute list\n * @param[in] dest    - server option\n *\n * @return - int\n * @retval   0 Success\n * @retval   exits on failure\n *\n */\nint\ncnvrt_proc_attrib(int connect, struct attrl **attrp, char *dest)\n{\n\tchar *str;\n\tint setflag, cnt = 0;\n\tstruct attropl *jobid_ptr;\n\tstruct batch_status *p, *p_status;\n\tstruct attrl *a, *ap, *apx, *attr, *cmd_attr;\n\tchar time_buf[80];\n\tchar job[PBS_MAXCLTJOBID];\n\tchar server[MAXSERVERNAME];\n\n\tjobid_ptr = (struct attropl *) attrib;\n\twhile (jobid_ptr != NULL) {\n\t\tif (strcmp(jobid_ptr->name, ATTR_convert) == 0)\n\t\t\tbreak;\n\t\tjobid_ptr = jobid_ptr->next;\n\t}\n\n\tif (get_server(jobid_ptr->value, job, server)) {\n\t\tfprintf(stderr, \"pbs_rsub: illegally formed job 
identifier: %s\\n\", jobid_ptr->value);\n\t\texit(-1);\n\t}\n\t/* update value string with full job-id (seqnum.server) */\n\t(void) free(jobid_ptr->value);\n\tjobid_ptr->value = strdup(job);\n\tif (jobid_ptr->value == NULL) {\n\t\tfprintf(stderr, \"Out of memory\\n\");\n\t\texit(2);\n\t}\n\n\tp_status = pbs_statjob(connect, jobid_ptr->value, NULL, NULL);\n\tif (p_status == NULL) {\n\t\tfprintf(stderr, \"Job %s does not exist\\n\", jobid_ptr->value);\n\t\texit(2);\n\t}\n\n\tp = p_status;\n\twhile (p != NULL) {\n\t\ta = p->attribs;\n\t\twhile (a != NULL) {\n\t\t\tif (a->name != NULL) {\n\t\t\t\t/* avoid qmove job in R, T or E state */\n\t\t\t\tif (strcmp(a->name, ATTR_state) == 0) {\n\t\t\t\t\tif (strcmp(a->value, \"R\") == 0 ||\n\t\t\t\t\t    strcmp(a->value, \"T\") == 0 ||\n\t\t\t\t\t    strcmp(a->value, \"E\") == 0) {\n\t\t\t\t\t\tfprintf(stderr, \"Job not in qmove state\\n\");\n\t\t\t\t\t\texit(2);\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tif (strcmp(a->name, ATTR_l) == 0 &&\n\t\t\t\t\t    strcmp(a->resource, \"nodect\") != 0 &&\n\t\t\t\t\t    strcmp(a->resource, \"neednodes\") != 0) {\n\t\t\t\t\t\tsetflag = FALSE;\n\t\t\t\t\t\tap = attrib;\n\t\t\t\t\t\twhile (ap != NULL) {\n\t\t\t\t\t\t\tif (ap->resource != NULL) {\n\t\t\t\t\t\t\t\tif (strcmp(ap->resource, a->resource) == 0) {\n\t\t\t\t\t\t\t\t\tsetflag = TRUE;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif (ap->next == NULL && setflag == FALSE) {\n\t\t\t\t\t\t\t\tattr = new_attrl();\n\t\t\t\t\t\t\t\tif (attr == NULL) {\n\t\t\t\t\t\t\t\t\tfprintf(stderr, \"pbs_rsub: Out of memory\\n\");\n\t\t\t\t\t\t\t\t\texit(2);\n\t\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t\tstr = strdup(ATTR_l);\n\t\t\t\t\t\t\t\tif (str == NULL) {\n\t\t\t\t\t\t\t\t\tfprintf(stderr, \"pbs_rsub: Out of memory\\n\");\n\t\t\t\t\t\t\t\t\texit(2);\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tattr->name = str;\n\n\t\t\t\t\t\t\t\tstr = strdup(a->resource);\n\t\t\t\t\t\t\t\tif (str == NULL) {\n\t\t\t\t\t\t\t\t\tfprintf(stderr, \"pbs_rsub: Out of 
memory\\n\");\n\t\t\t\t\t\t\t\t\texit(2);\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tattr->resource = str;\n\n\t\t\t\t\t\t\t\tif (a->value != NULL) {\n\t\t\t\t\t\t\t\t\tstr = strdup(a->value);\n\t\t\t\t\t\t\t\t\tif (str == NULL) {\n\t\t\t\t\t\t\t\t\t\tfprintf(stderr, \"pbs_rsub: Out of memory\\n\");\n\t\t\t\t\t\t\t\t\t\texit(2);\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\tattr->value = str;\n\t\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\t\tstr = (char *) malloc(1);\n\t\t\t\t\t\t\t\t\tif (str == NULL) {\n\t\t\t\t\t\t\t\t\t\tfprintf(stderr, \"pbs_rsub: Out of memory\\n\");\n\t\t\t\t\t\t\t\t\t\texit(2);\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\tstr[0] = '\\0';\n\t\t\t\t\t\t\t\t\tattr->value = str;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tattr->next = NULL;\n\t\t\t\t\t\t\t\tap->next = attr;\n\t\t\t\t\t\t\t\tap = ap->next;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tsetflag = FALSE;\n\t\t\t\t\t\t\tap = ap->next;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\ta = a->next;\n\t\t}\n\t\tp = p->next;\n\t}\n\tpbs_statfree(p_status);\n\n\tcmd_attr = attrib;\n\twhile (cmd_attr != NULL) {\n\t\tif (strcmp(cmd_attr->name, ATTR_resv_start) == 0 ||\n\t\t    strcmp(cmd_attr->name, ATTR_resv_end) == 0) {\n\t\t\tif (cmd_attr->name != NULL)\n\t\t\t\tfree(cmd_attr->name);\n\t\t\tif (cmd_attr->resource != NULL)\n\t\t\t\tfree(cmd_attr->resource);\n\t\t\tif (cmd_attr->value != NULL)\n\t\t\t\tfree(cmd_attr->value);\n\t\t\tapx = cmd_attr->next;\n\t\t\tfree(cmd_attr);\n\t\t\tcmd_attr = apx;\n\t\t\tif (cnt == 0)\n\t\t\t\tattrib = cmd_attr;\n\t\t\tcnt++;\n\t\t} else\n\t\t\tcmd_attr = cmd_attr->next;\n\t}\n\n\t(void) sprintf(time_buf, \"%ld\", PBS_RESV_FUTURE_SCH);\n\tset_attr_error_exit(&attrib, ATTR_resv_start, time_buf);\n\t*attrp = attrib;\n\n\treturn (0);\n}\n\n/**\n * @brief\n * \tprints usage format for pbs_rsub command\n *\n * @return - Void\n *\n */\nstatic void\nprint_usage()\n{\n\tstatic char usag2[] = \"       pbs_rsub --version\\n\";\n\tstatic char usage[] =\n\t\t\"usage: pbs_rsub [-I seconds] [-m mail_points] 
[-M mail_list]\\n\"\n\t\t\"                [-N reservation_name] [-u user_list] [-g group_list]\\n\"\n\t\t\"                [-U auth_user_list] [-G auth_group_list] [-H auth_host_list]\\n\"\n\t\t\"                [-R start_time] [-E end_time] [-D duration] [-q destination]\\n\"\n\t\t\"                [-r rrule_expression] [-W otherattributes=value...]\\n\"\n\t\t\"                -l resource_list | --hosts host1 [... hostn]\\n\";\n\n\tfprintf(stderr, \"%s\", usage);\n\tfprintf(stderr, \"%s\", usag2);\n}\n\n/**\n * @brief\n * \thandles attribute errors and prints appropriate errmsg\n *\n * @param[in] err_list - list of possible attribute errors\n *\n * @return - Void\n *\n */\nstatic void\nhandle_attribute_errors(struct ecl_attribute_errors *err_list)\n{\n\tstruct attropl *attribute;\n\tchar *opt;\n\tint i;\n\n\tfor (i = 0; i < err_list->ecl_numerrors; i++) {\n\t\tattribute = err_list->ecl_attrerr[i].ecl_attribute;\n\t\tif (strcmp(attribute->name, ATTR_resv_end) == 0)\n\t\t\topt = \"E\";\n\t\telse if (strcmp(attribute->name, ATTR_g) == 0)\n\t\t\topt = \"g\";\n\t\telse if (strcmp(attribute->name, ATTR_auth_g) == 0)\n\t\t\topt = \"G\";\n\t\telse if (strcmp(attribute->name, ATTR_auth_h) == 0)\n\t\t\topt = \"H\";\n\t\telse if (strcmp(attribute->name, ATTR_inter) == 0)\n\t\t\topt = \"I\";\n\t\telse if (strcmp(attribute->name, ATTR_l) == 0)\n\t\t\topt = \"l\";\n\t\telse if (strcmp(attribute->name, ATTR_m) == 0)\n\t\t\topt = \"m\";\n\t\telse if (strcmp(attribute->name, ATTR_M) == 0)\n\t\t\topt = \"M\";\n\t\telse if (strcmp(attribute->name, ATTR_resv_name) == 0)\n\t\t\topt = \"N\";\n\t\telse if (strcmp(attribute->name, ATTR_resv_start) == 0)\n\t\t\topt = \"R\";\n\t\telse if (strcmp(attribute->name, ATTR_resv_rrule) == 0)\n\t\t\topt = \"r\";\n\t\telse if (strcmp(attribute->name, ATTR_u) == 0)\n\t\t\topt = \"u\";\n\t\telse if (strcmp(attribute->name, ATTR_auth_u) == 0)\n\t\t\topt = \"U\";\n\t\telse if (strcmp(attribute->name, ATTR_convert) == 0)\n\t\t\topt = 
\"W\";\n\t\telse\n\t\t\treturn;\n\n\t\tCS_close_app();\n\t\tif (*opt == 'l') {\n\t\t\tfprintf(stderr, \"pbs_rsub: %s\\n\",\n\t\t\t\terr_list->ecl_attrerr[i].ecl_errmsg);\n\t\t\texit(err_list->ecl_attrerr[i].ecl_errcode);\n\t\t} else if (err_list->ecl_attrerr[i].ecl_errcode == PBSE_JOBNBIG) {\n\t\t\tfprintf(stderr, \"pbs_rsub: Reservation %s\\n\", err_list->ecl_attrerr[i].ecl_errmsg);\n\t\t\texit(2);\n\t\t} else {\n\t\t\tfprintf(stderr, \"pbs_rsub: illegal -%s value\\n\", opt);\n\t\t\tprint_usage();\n\t\t\texit(2);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\tThe main function in C - entry point\n *\n * @param[in]  argc - argument count\n * @param[in]  argv - pointer to argument array\n * @param[in]  envp - pointer to environment values\n *\n * @return  int\n * @retval  0 - success\n * @retval  !0 - error\n */\nint\nmain(int argc, char *argv[], char *envp[])\n{\n\tint errflg;\t\t   /* command line option error */\n\tint connect;\t\t   /* return from pbs_connect */\n\tchar *errmsg;\t\t   /* return from pbs_geterrmsg */\n\tchar destbuf[OPT_BUF_LEN]; /* buffer for option server */\n\tstruct attrl *attrib;\t   /* the attrib list */\n\tchar *new_resvname;\t   /* the name returned from pbs_submit_resv */\n\tstruct ecl_attribute_errors *err_list;\n\tchar *interactive = NULL;\n\tchar *reservid = NULL;\n\tchar extend[2];\n\tstruct batch_status *bstat_head = NULL;\n\tstruct batch_status *bstat = NULL;\n\tstruct attrl *pattr = NULL;\n\tchar *execvnodes_str = NULL;\n\tint execvnodes_str_size = 0;\n\tchar *select_str = NULL;\n\tint select_str_size = 0;\n\tchar **hostp = NULL;\n\tstruct attrl *pal;\n\tchar *erp;\n\tchar *host = NULL;\n\tchar *endp; /* used for strtol() */\n\n\t/*test for real deal or just version and exit*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\tdestbuf[0] = '\\0';\n\textend[0] = '\\0';\n\terrflg = process_opts(argc, argv, &attrib, destbuf); /* get cmdline options */\n\n\tif (errflg || ((optind + 1) < argc) || argc == 1) 
{\n\t\tprint_usage();\n\t\texit(2);\n\t}\n\n\tif (is_maintenance_resv) {\n\t\tpal = attrib;\n\t\twhile (pal) {\n\t\t\tif ((strcasecmp(pal->name, ATTR_l) == 0) &&\n\t\t\t    (strcasecmp(pal->resource, \"select\") == 0)) {\n\t\t\t\tfprintf(stderr, \"pbs_rsub: can't use -l select with --hosts\\n\");\n\t\t\t\tprint_usage();\n\t\t\t\texit(2);\n\t\t\t}\n\t\t\tif ((strcasecmp(pal->name, ATTR_l) == 0) &&\n\t\t\t    (strcasecmp(pal->resource, \"place\") == 0)) {\n\t\t\t\tfprintf(stderr, \"pbs_rsub: can't use -l place with --hosts\\n\");\n\t\t\t\tprint_usage();\n\t\t\t\texit(2);\n\t\t\t}\n\t\t\tpal = pal->next;\n\t\t}\n\t}\n\n\t/* Get any required environment variables needing to be sent. */\n\tif (!set_resv_env(envp)) {\n\t\tfprintf(stderr, \"pbs_rsub: can't send environment with the reservation\\n\");\n\t\texit(3);\n\t}\n\n\t/*perform needed security library initializations (including none)*/\n\n\tif (CS_client_init() != CS_SUCCESS) {\n\t\tfprintf(stderr, \"pbs_rsub: unable to initialize security library.\\n\");\n\t\texit(1);\n\t}\n\n\t/* Connect to the server */\n\tconnect = cnt2server(destbuf);\n\tif (connect <= 0) {\n\t\tfprintf(stderr, \"pbs_rsub: cannot connect to server %s (errno=%d)\\n\",\n\t\t\tpbs_server, pbs_errno);\n\t\tCS_close_app();\n\t\texit(pbs_errno);\n\t}\n\n\tif (qmoveflg == TRUE) {\n\t\tqmoveflg = FALSE;\n\t\tinteractive = get_attr(attrib, ATTR_inter, NULL);\n\t\tif (interactive == NULL) {\n\t\t\tset_attr_error_exit(&attrib, ATTR_inter, DEFAULT_INTERACTIVE);\n\t\t} else {\n\t\t\tif (atoi(interactive) > -1) {\n\t\t\t\tfprintf(stderr, \"pbs_rsub: -I <timeout> value must be negative when used with -Wqmove option.\\n\");\n\t\t\t\tCS_close_app();\n\t\t\t\texit(2);\n\t\t\t}\n\t\t}\n\t\terrflg = cnvrt_proc_attrib(connect, &attrib, destbuf);\n\t\tif (errflg) {\n\t\t\tfprintf(stderr, \"pbs_rsub: can't make a reservation with the qmove option\\n\");\n\t\t\tCS_close_app();\n\t\t\texit(2);\n\t\t}\n\t}\n\n\tif (is_maintenance_resv) {\n\t\tint i;\n\t\tchar 
tmp_str[BUF_SIZE];\n\t\tchar *endp;\n\n\t\tpbs_errno = 0;\n\t\tbstat_head = pbs_statvnode(connect, \"\", NULL, NULL);\n\t\tif (bstat_head == NULL) {\n\t\t\tif (pbs_errno) {\n\t\t\t\terrmsg = pbs_geterrmsg(connect);\n\t\t\t\tif (errmsg != NULL) {\n\t\t\t\t\tfprintf(stderr, \"pbs_rsub: %s\\n\", errmsg);\n\t\t\t\t} else {\n\t\t\t\t\tfprintf(stderr, \"pbs_rsub: Error (%d) submitting reservation\\n\", pbs_errno);\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tfprintf(stderr, \"pbs_rsub: No nodes found\\n\");\n\t\t\t}\n\n\t\t\tCS_close_app();\n\t\t\texit(pbs_errno);\n\t\t}\n\n\t\thostp = maintenance_hosts;\n\t\tfor (; *hostp; hostp++) {\n\t\t\tint host_ncpus = 0;\n\n\t\t\tfor (bstat = bstat_head; bstat; bstat = bstat->next) {\n\t\t\t\tchar *ncpus_str = NULL;\n\t\t\t\tint ncpus = 0;\n\n\t\t\t\tfor (pattr = bstat->attribs; pattr; pattr = pattr->next) {\n\t\t\t\t\tif (pattr->resource && strcmp(pattr->name, ATTR_rescavail) == 0 && strcmp(pattr->resource, \"host\") == 0)\n\t\t\t\t\t\thost = pattr->value;\n\t\t\t\t\tif (pattr->resource && strcmp(pattr->name, ATTR_rescavail) == 0 && strcmp(pattr->resource, \"ncpus\") == 0)\n\t\t\t\t\t\tncpus_str = pattr->value;\n\t\t\t\t}\n\n\t\t\t\tif (ncpus_str != NULL) {\n\t\t\t\t\tncpus = strtol(ncpus_str, &endp, 0);\n\t\t\t\t\t/* endp is set by strtol() only on this path */\n\t\t\t\t\tif (*endp != '\\0') {\n\t\t\t\t\t\tfprintf(stderr, \"pbs_rsub: Attribute value error\\n\");\n\t\t\t\t\t\tCS_close_app();\n\t\t\t\t\t\texit(2);\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\t/* here, the execvnodes is crafted */\n\t\t\t\tif (host != NULL && strcmp(host, *hostp) == 0 && ncpus > 0) {\n\t\t\t\t\t/* count ncpus of a host across vnodes\n\t\t\t\t\t * it will be used for crafting select\n\t\t\t\t\t */\n\t\t\t\t\thost_ncpus += ncpus;\n\n\t\t\t\t\tif (!execvnodes_str) {\n\t\t\t\t\t\texecvnodes_str = malloc(BUF_SIZE);\n\t\t\t\t\t\tif (execvnodes_str == NULL) {\n\t\t\t\t\t\t\tfprintf(stderr, \"pbs_rsub: Out of memory\\n\");\n\t\t\t\t\t\t\tCS_close_app();\n\t\t\t\t\t\t\texit(2);\n\t\t\t\t\t\t}\n\t\t\t\t\t\texecvnodes_str_size = 
BUF_SIZE;\n\n\t\t\t\t\t\tsnprintf(execvnodes_str, BUF_SIZE, \"(%s:ncpus=%d)\", bstat->name, ncpus);\n\t\t\t\t\t} else {\n\t\t\t\t\t\tsnprintf(tmp_str, BUF_SIZE, \"+(%s:ncpus=%d)\", bstat->name, ncpus);\n\n\t\t\t\t\t\tif (pbs_strcat(&execvnodes_str, &execvnodes_str_size, tmp_str) == NULL) {\n\t\t\t\t\t\t\tfprintf(stderr, \"pbs_rsub: Out of memory\\n\");\n\t\t\t\t\t\t\tCS_close_app();\n\t\t\t\t\t\t\texit(2);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t} /* end of part that crafts execvnodes */\n\t\t\t}\n\n\t\t\t/* host not found or host has zero ncpus */\n\t\t\tif (host_ncpus == 0) {\n\t\t\t\tfprintf(stderr, \"pbs_rsub: Host with resources not found: %s\\n\", *hostp);\n\t\t\t\tCS_close_app();\n\t\t\t\texit(2);\n\t\t\t}\n\n\t\t\t/* here, the select is crafted */\n\t\t\tif (host_ncpus > 0) {\n\t\t\t\tif (!select_str) {\n\t\t\t\t\tselect_str = malloc(BUF_SIZE);\n\t\t\t\t\tif (select_str == NULL) {\n\t\t\t\t\t\tfprintf(stderr, \"pbs_rsub: Out of memory\\n\");\n\t\t\t\t\t\tCS_close_app();\n\t\t\t\t\t\texit(2);\n\t\t\t\t\t}\n\t\t\t\t\tselect_str_size = BUF_SIZE;\n\n\t\t\t\t\tsnprintf(select_str, BUF_SIZE, \"select=host=%s:ncpus=%d\", *hostp, host_ncpus);\n\t\t\t\t} else {\n\t\t\t\t\tsnprintf(tmp_str, BUF_SIZE, \"+host=%s:ncpus=%d\", *hostp, host_ncpus);\n\n\t\t\t\t\tif (pbs_strcat(&select_str, &select_str_size, tmp_str) == NULL) {\n\t\t\t\t\t\tfprintf(stderr, \"pbs_rsub: Out of memory\\n\");\n\t\t\t\t\t\tCS_close_app();\n\t\t\t\t\t\texit(2);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} /* end of part that crafts select */\n\t\t}\n\n\t\tpbs_statfree(bstat_head); /* free info returned by pbs_statvnodes() */\n\n\t\tif (select_str == NULL) {\n\t\t\tfprintf(stderr, \"pbs_rsub: missing host(s)\\n\");\n\t\t\tprint_usage();\n\n\t\t\tCS_close_app();\n\t\t\texit(2);\n\t\t}\n\n\t\t/* add crafted select */\n\t\tif ((i = set_resources(&attrib, select_str, 0, &erp)) != 0) {\n\t\t\tif (i > 1) {\n\t\t\t\tpbs_prt_parse_err(\"pbs_rsub: illegal -l value\\n\", select_str,\n\t\t\t\t\t\t  (int) (erp - 
select_str), i);\n\t\t\t} else {\n\t\t\t\tfprintf(stderr, \"pbs_rsub: illegal -l value\\n\");\n\t\t\t}\n\n\t\t\tCS_close_app();\n\t\t\texit(pbs_errno);\n\t\t}\n\n\t\t/* add place=exclhost */\n\t\tif (set_resources(&attrib, \"place=exclhost\", 0, &erp) != 0) {\n\t\t\tfprintf(stderr, \"pbs_rsub: illegal -l value\\n\");\n\n\t\t\tCS_close_app();\n\t\t\texit(pbs_errno);\n\t\t}\n\n\t\tstrcat(extend, \"m\");\n\t}\n\n\tpbs_errno = 0;\n\tnew_resvname = pbs_submit_resv(connect, (struct attropl *) attrib, extend);\n\tif (new_resvname == NULL) {\n\t\tif ((err_list = pbs_get_attributes_in_error(connect)))\n\t\t\thandle_attribute_errors(err_list);\n\n\t\terrmsg = pbs_geterrmsg(connect);\n\t\tif (errmsg != NULL) {\n\t\t\tfprintf(stderr, \"pbs_rsub: %s\\n\", errmsg);\n\t\t} else\n\t\t\tfprintf(stderr, \"pbs_rsub: Error (%d) submitting reservation\\n\", pbs_errno);\n\t\tCS_close_app();\n\t\texit(pbs_errno);\n\t}\n\n\tif (is_maintenance_resv) {\n\t\tchar *rest;\n\t\tchar *resv_start_time_str;\n\t\ttime_t resv_start_time = 0;\n\n\t\treservid = strtok_r(new_resvname, \" \", &rest);\n\n\t\tresv_start_time_str = get_attr(attrib, ATTR_resv_start, NULL);\n\t\tif (resv_start_time_str)\n\t\t\tresv_start_time = strtol(resv_start_time_str, &endp, 10);\n\n\t\tpbs_errno = 0;\n\t\tif (pbs_confirmresv(connect, reservid, execvnodes_str, resv_start_time, PBS_RESV_CONFIRM_SUCCESS) > 0) {\n\t\t\terrmsg = pbs_geterrmsg(connect);\n\t\t\tif (errmsg == NULL)\n\t\t\t\terrmsg = \"\";\n\n\t\t\tfprintf(stderr, \"pbs_rsub: PBS Failed to confirm resv: %s (%d)\\n\", errmsg, pbs_errno);\n\n\t\t\tCS_close_app();\n\t\t\texit(pbs_errno);\n\t\t}\n\n\t\tprintf(\"%s CONFIRMED\\n\", reservid);\n\t} else {\n\t\tprintf(\"%s\\n\", new_resvname);\n\t}\n\n\tfree(new_resvname);\n\n\tif (maintenance_hosts) {\n\t\thostp = maintenance_hosts;\n\t\tfor (; *hostp; hostp++)\n\t\t\tfree(*hostp);\n\t\tfree(maintenance_hosts);\n\t}\n\n\t/* Disconnect from the server. */\n\tpbs_disconnect(connect);\n\n\tCS_close_app();\n\texit(0);\n}\n"
  },
  {
    "path": "src/cmds/pbs_tmrsh.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_tmrsh.c\n * @brief\n * pbs_tmrsh - a replacement for rsh using the Task Management API\n *\n */\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <string.h>\n#include <pwd.h>\n#include <netdb.h>\n#include <sys/types.h>\n#include <sys/socket.h>\n#include <netinet/in.h>\n#include <arpa/inet.h>\n\n#include \"cmds.h\"\n#include \"tm.h\"\n#include \"pbs_version.h\"\n\nchar *host = NULL;\n\nextern char *get_ecname(int rc);\n\n/**\n * @brief\n * \tdisplays how to use pbs_tmrsh\n *\n * @param[in] id - command name i.e pbs_tmrsh\n *\n * @return Void\n *\n */\nvoid\nusage(char *id)\n{\n\tfprintf(stderr, \"usage: %s [-n][-l username] host [-n][-l username] command\\n\", id);\n\tfprintf(stderr, \"       %s --version\\n\", id);\n\texit(255);\n}\n\n/**\n * @brief\n *  \treturns the username\n *\n * @return string\n * @retval \"username\"\n *\n */\nchar *\nmyname(void)\n{\n\tuid_t me = getuid();\n\tstruct passwd *pent;\n\n\tif ((pent = getpwuid(me)) == NULL)\n\t\treturn \"\";\n\telse\n\t\treturn pent->pw_name;\n}\n\n#ifndef INADDR_NONE\n#define INADDR_NONE (in_addr_t) 0xFFFFFFFF\n#endif\n\n/**\n * @brief\n *\tCheck the host to a line read from PBS_NODEFILE.\n *\tThe PBS_NODEFILE will contain node names.  
We want to be able\n *\tto accept IP addresses for the host.\n *\n * @param[in] line - line from PBS_NODEFILE\n *\n * @return  \tError code\n * @retval\t1 - Success i.e matched\n * @retval\t0 - Failure i.e not matched\n *\n */\nint\nhost_match(char *line)\n{\n\tint len = strlen(line);\n\tstatic char domain[PBS_MAXHOSTNAME + 1];\n\tchar fullhost[PBS_MAXHOSTNAME + 1];\n\tstatic struct in_addr addr;\n\tstatic int addrvalid = -1;\n\n\tif (line[len - 1] == '\\n')\n\t\tline[len - 1] = '\\0';\n\n\tif (strcmp(line, host) == 0)\n\t\treturn 1;\n\n\tif (addrvalid == -1) {\n\t\taddr.s_addr = inet_addr(host);\n\t\taddrvalid = (addr.s_addr == INADDR_NONE) ? 0 : 1;\n\t}\n\n\tif (addrvalid) { /* compare IP addresses */\n\t\tstruct hostent *hp = gethostbyname(line);\n\t\tint i;\n\n\t\tif (hp == NULL)\n\t\t\treturn 0;\n\n\t\tfor (i = 0; hp->h_addr_list[i]; i++) {\n\t\t\tif (memcmp(&addr, hp->h_addr_list[i],\n\t\t\t\t   hp->h_length) == 0)\n\t\t\t\treturn 1;\n\t\t}\n\t\treturn 0;\n\t}\n\n\tif (domain[0] == '\\0') {\n\t\tif (getdomainname(domain, (sizeof(domain) - 1)) == -1) {\n\t\t\tperror(\"getdomainname\");\n\t\t\texit(255);\n\t\t}\n\t\tif (domain[0] == '\\0') {\n\t\t\tint i;\n\t\t\tchar *dot;\n\n\t\t\tif (gethostname(domain, (sizeof(domain) - 1)) == -1) {\n\t\t\t\tperror(\"gethostname\");\n\t\t\t\texit(255);\n\t\t\t}\n\t\t\tif (domain[0] == '\\0')\n\t\t\t\treturn 0;\n\t\t\tif ((dot = strchr(domain, '.')) == NULL)\n\t\t\t\treturn 0;\n\t\t\tfor (i = 0, dot++; *dot; i++, dot++)\n\t\t\t\tdomain[i] = *dot;\n\t\t\tdomain[i] = '\\0';\n\t\t}\n\t}\n\tpbs_strncpy(fullhost, line, sizeof(fullhost));\n\tstrcat(fullhost, \".\");\n\tstrcat(fullhost, domain);\n\n\tif (strcmp(fullhost, host) == 0)\n\t\treturn 1;\n\n\treturn 0;\n}\n\nint\nmain(int argc, char *argv[], char *envp[])\n{\n\tchar *id;\n\tchar *jobid;\n\tint i, arg;\n\tFILE *fp;\n\tint numnodes;\n\tint err = 0;\n\tint rc, exitval;\n\tstruct tm_roots rootrot;\n\tchar *nodefile;\n\ttm_node_id *nodelist;\n\ttm_event_t event;\n\ttm_task_id 
tid;\n\tchar line[256], *cp;\n\n\t/*test for real deal or just version and exit*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\tid = argv[0];\n\tif (argc < 3)\n\t\tusage(id);\n\n\tfor (arg = 1; arg < argc; arg++) {\n\t\tchar *c = argv[arg];\n\t\tchar lopt[] = \"-l\";\n\t\tint len = sizeof(lopt) - 1;\n\n\t\tif (*c == '-') {\t\t  /* option */\n\t\t\tif (strcmp(c, \"-n\") == 0) /* noop */\n\t\t\t\tcontinue;\n\n\t\t\tif (strncmp(c, lopt, len) == 0) { /* login name */\n\t\t\t\tif (strlen(c) == len) {\n\t\t\t\t\targ++;\n\t\t\t\t\tif (arg == argc) {\n\t\t\t\t\t\terr = 1; /* no args left */\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\tc = argv[arg];\n\t\t\t\t} else /* -lname */\n\t\t\t\t\tc += len;\n\n\t\t\t\tif (strcmp(c, myname()) != 0) {\n\t\t\t\t\tfprintf(stderr, \"%s: bad user \\\"%s\\\"\\n\",\n\t\t\t\t\t\tid, c);\n\t\t\t\t\terr = 1;\n\t\t\t\t}\n\t\t\t} else { /* unknown option */\n\t\t\t\terr = 1;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t} else if (host == NULL)\n\t\t\thost = c; /* first non-option is host */\n\t\telse\n\t\t\tbreak; /* host is set, must be command */\n\t}\n\n\t/*\n\t **\tIf there was an error processing arguments or there is\n\t **\tno command, exit.\n\t */\n\tif (err || (argc == arg) || (host == NULL))\n\t\tusage(id);\n\n\tif (getenv(\"PBS_ENVIRONMENT\") == 0) {\n\t\tfprintf(stderr, \"%s: not executing under PBS\\n\", id);\n\t\treturn 255;\n\t}\n\tif ((jobid = getenv(\"PBS_JOBID\")) == NULL) {\n\t\tfprintf(stderr, \"%s: PBS jobid not in environment\\n\", id);\n\t\treturn 255;\n\t}\n\n\t/*\n\t **\tSet up interface to the Task Manager\n\t */\n\tif ((rc = tm_init(NULL, &rootrot)) != TM_SUCCESS) {\n\t\tfprintf(stderr, \"%s: tm_init: %s\\n\", id, get_ecname(rc));\n\t\treturn 255;\n\t}\n\n\tif ((rc = tm_nodeinfo(&nodelist, &numnodes)) != TM_SUCCESS) {\n\t\tfprintf(stderr, \"%s: tm_nodeinfo: %s\\n\", id, get_ecname(rc));\n\t\treturn 255;\n\t}\n\n\t/*\n\t ** Check which node number the host is.\n\t */\n\tif ((nodefile = 
getenv(\"PBS_NODEFILE\")) == NULL) {\n\t\tfprintf(stderr, \"%s: cannot find PBS_NODEFILE\\n\", id);\n\t\treturn 255;\n\t}\n\tif ((fp = fopen(nodefile, \"r\")) == NULL) {\n\t\tperror(nodefile);\n\t\treturn 255;\n\t}\n\n\tfor (i = 0; (cp = fgets(line, sizeof(line), fp)) != NULL; i++) {\n\t\tif (host_match(line))\n\t\t\tbreak;\n\t}\n\tfclose(fp);\n\tif (cp == NULL) {\n\t\tfprintf(stderr, \"%s: host \\\"%s\\\" is not a node in job <%s>\\n\",\n\t\t\tid, host, jobid);\n\t\treturn 255;\n\t}\n\tif (i >= numnodes) {\n\t\tfprintf(stderr, \"%s: PBS_NODEFILE contains %d entries, \"\n\t\t\t\t\"only %d nodes in job\\n\",\n\t\t\tid, i, numnodes);\n\t\treturn 255;\n\t}\n\n\tif ((rc = tm_spawn(argc - arg, argv + arg, NULL,\n\t\t\t   nodelist[i], &tid, &event)) != TM_SUCCESS) {\n\t\tfprintf(stderr, \"%s: tm_spawn: host \\\"%s\\\" err %s\\n\",\n\t\t\tid, host, get_ecname(rc));\n\t}\n\n\trc = tm_poll(TM_NULL_EVENT, &event, 1, &err);\n\tif (rc != TM_SUCCESS || event == TM_ERROR_EVENT) {\n\t\tfprintf(stderr, \"%s: tm_poll(spawn): host \\\"%s\\\" err %s %d\\n\",\n\t\t\tid, host, get_ecname(rc), err);\n\t\treturn 255;\n\t}\n\n\tif ((rc = tm_obit(tid, &exitval, &event)) != TM_SUCCESS) {\n\t\tfprintf(stderr, \"%s: obit: host \\\"%s\\\" err %s\\n\",\n\t\t\tid, host, get_ecname(rc));\n\t\treturn 255;\n\t}\n\n\trc = tm_poll(TM_NULL_EVENT, &event, 1, &err);\n\tif (rc != TM_SUCCESS || event == TM_ERROR_EVENT) {\n\t\tfprintf(stderr, \"%s: tm_poll(obit): host \\\"%s\\\" err %s %d\\n\",\n\t\t\tid, host, get_ecname(rc), err);\n\t\treturn 255;\n\t}\n\n\ttm_finalize();\n\n\treturn exitval;\n}\n"
  },
  {
    "path": "src/cmds/pbsdsh.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n\n\n/**\n * @file\tpbs_dsh.c\n * @brief\n * pbs_dsh - a distribute task program using the Task Management API\n *\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include <pbs_version.h>\n\n\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <string.h>\n#include <pwd.h>\n#include <netdb.h>\n#include <sys/types.h>\n#include <sys/socket.h>\n#include <netinet/in.h>\n#include <arpa/inet.h>\n#include \"cmds.h\"\n#include \"tm.h\"\n#include <signal.h>\n\nint *ev;\ntm_event_t *events_spawn;\ntm_event_t *events_obit;\nint numnodes;\ntm_task_id *tid;\nint verbose = 0;\n\n#ifndef WIN32\nsigset_t allsigs;\n#endif\n\nchar *id;\n\nint fire_phasers = 0;\nint no_obit = 0;\nextern char *get_ecname(int rc);\n\n/**\n * @brief\n *\tsignal handler function\n *\n * @param[in] sig - signal number\n *\n * @return - Void\n *\n */\nvoid\nbailout(int sig)\n{\n\tfire_phasers = sig;\n}\n\n\n/**\n * @brief\n *      Check the host to a line read from PBS_NODEFILE.\n *      The PBS_NODEFILE will contain node names.  
We want to be able\n *      to accept IP addresses for the host.\n *\n * @param[in] line - line from PBS_NODEFILE\n * @param[in] host - the host name or IP address to match\n *\n * @return      Error code\n * @retval      1 - Success i.e matched\n * @retval      0 - Failure i.e not matched\n *\n */\nint\nhost_match(char *line, char *host)\n{\n\tint len;\n\tstatic char domain[PBS_MAXHOSTNAME + 1];\n\tchar fullhost[PBS_MAXHOSTNAME + 1];\n\tstruct in_addr addr;\n\tint addrvalid = -1;\n\n\tif (NULL == host)\n\t\treturn 0;\n\tif (NULL == line)\n\t\treturn 0;\n\tlen = strlen(line);\n\tmemset(&addr, 0, sizeof(addr));\n\n\tif (line[len - 1] == '\\n')\n\t\tline[len - 1] = '\\0';\n\n\tif (strcmp(line, host) == 0)\n\t\treturn 1;\n\n\tif (addrvalid == -1) {\n\t\taddr.s_addr = inet_addr(host);\n\t\taddrvalid = (addr.s_addr == INADDR_NONE) ? 0 : 1;\n\t}\n\n\tif (addrvalid) { /* compare IP addresses */\n\t\tstruct addrinfo hints, *res, *p;\n\t\tint status;\n\n\t\tmemset(&hints, 0, sizeof(hints));\n\t\thints.ai_family = AF_INET; /* IPv6 not supported */\n\t\thints.ai_socktype = SOCK_STREAM;\n\n\t\tif ((status = getaddrinfo(line, NULL, &hints, &res)) != 0) {\n\t\t\t/* getaddrinfo() reports errors via gai_strerror(), not errno */\n\t\t\tfprintf(stderr, \"getaddrinfo: %s\\n\", gai_strerror(status));\n\t\t\texit(255);\n\t\t}\n\n\t\tfor (p = res; p; p = p->ai_next) {\n\t\t\tstruct sockaddr_in *s_in_entry = (struct sockaddr_in *) p->ai_addr;\n\t\t\tif (memcmp(&addr, &(s_in_entry->sin_addr), sizeof(addr)) == 0) {\n\t\t\t\tfreeaddrinfo(res);\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t}\n\t\tfreeaddrinfo(res);\n\t\treturn 0;\n\t}\n\n\tif (domain[0] == '\\0') {\n\t\tif (getdomainname(domain, (sizeof(domain) - 1)) == -1) {\n\t\t\tperror(\"getdomainname\");\n\t\t\texit(255);\n\t\t}\n\t\tif (domain[0] == '\\0') {\n\t\t\tint i;\n\t\t\tchar *dot;\n\n\t\t\tif (gethostname(domain, (sizeof(domain) - 1)) == -1) {\n\t\t\t\tperror(\"gethostname\");\n\t\t\t\texit(255);\n\t\t\t}\n\t\t\tif (domain[0] == '\\0')\n\t\t\t\treturn 0;\n\t\t\tif ((dot = strchr(domain, '.')) == NULL)\n\t\t\t\treturn 0;\n\t\t\tfor (i = 0, dot++; *dot; i++, dot++)\n\t\t\t\tdomain[i] = 
*dot;\n\t\t\tdomain[i] = '\\0';\n\t\t}\n\t}\n\tpbs_strncpy(fullhost, line, sizeof(fullhost));\n\tstrcat(fullhost, \".\");\n\tstrcat(fullhost, domain);\n\n\tif (strcmp(fullhost, host) == 0)\n\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tfind_hostline - check if a hostname has an entry in PBS_NODEFILE;\n *\tit calls host_match to do the matching, which does more than a\n *\tplain string comparison.\n *\n * @param[in] host - the hostname to match\n *\n * @return int\n * @retval >=0 - index of the matching line in PBS_NODEFILE\n * @retval -1 - not found or error\n *\n */\nint\nfind_hostline(char *host)\n{\n\tFILE *fp;\n\tchar line[HOST_NAME_MAX], *cp;\n\tint i, host_hit = 0;\n\n\tif (NULL == host)\n\t\treturn -1;\n\tif (NULL == getenv(\"PBS_NODEFILE\"))\n\t\treturn -1;\n\tfp = fopen(getenv(\"PBS_NODEFILE\"), \"r\");\n\tif (NULL == fp)\n\t\treturn -1;\n\n\tfor (i = 0; (cp = fgets(line, sizeof(line), fp)) != NULL; i++) {\n\t\tif ((host_hit = host_match(line, host)) == 1)\n\t\t\tbreak;\n\t}\n\tfclose(fp);\n\tif (host_hit)\n\t\treturn i;\n\treturn -1;\n}\n\n/**\n * @brief\n *      wait_for_task - wait for all spawned tasks to\n *      a. have the spawn acknowledged, and\n *      b. 
the task to terminate and return the obit with the exit status\n * \n * @param[in] first - first event index to consider\n * @param[in] nspawned - number of tasks spawned\n * \n * @return - Void\n *\n */\n\nvoid\nwait_for_task(int first, int *nspawned)\n{\n\tint c;\n\ttm_event_t eventpolled;\n\tint nevents;\n\tint nobits = 0;\n\tint rc;\n\tint tm_errno;\n\n\tnevents = *nspawned;\n\twhile (*nspawned || nobits) {\n\t\tif (verbose) {\n\t\t\tprintf(\"pbsdsh: waiting on %d spawned and %d obits\\n\",\n\t\t\t       *nspawned, nobits);\n\t\t}\n\n\t\tif (fire_phasers) {\n\t\t\ttm_event_t event;\n\n\t\t\tfor (c = first; c < (first + nevents); c++) {\n\t\t\t\tif (*(tid + c) == TM_NULL_TASK)\n\t\t\t\t\tcontinue;\n\t\t\t\tprintf(\"pbsdsh: killing task 0x%08X signal %d\\n\",\n\t\t\t\t       *(tid + c), fire_phasers);\n\t\t\t\t(void) tm_kill(*(tid + c), fire_phasers, &event);\n\t\t\t}\n\t\t\ttm_finalize();\n\t\t\texit(1);\n\t\t}\n\n#ifdef WIN32\n\t\trc = tm_poll(TM_NULL_EVENT, &eventpolled, 1, &tm_errno);\n#else\n\t\tsigprocmask(SIG_UNBLOCK, &allsigs, NULL);\n\t\trc = tm_poll(TM_NULL_EVENT, &eventpolled, 1, &tm_errno);\n\t\tsigprocmask(SIG_BLOCK, &allsigs, NULL);\n#endif\n\n\t\tif (rc != TM_SUCCESS) {\n\t\t\tfprintf(stderr, \"%s: Event poll failed, error %s\\n\",\n\t\t\t\tid, get_ecname(rc));\n\t\t\texit(2);\n\t\t}\n\n\t\tfor (c = first; c < (first + nevents); ++c) {\n\t\t\tif (eventpolled == *(events_spawn + c)) {\n\t\t\t\t/* spawn event returned - register obit */\n\t\t\t\t(*nspawned)--;\n\t\t\t\tif (tm_errno) {\n\t\t\t\t\tfprintf(stderr, \"error %d on spawn\\n\",\n\t\t\t\t\t\ttm_errno);\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\tif (no_obit)\n\t\t\t\t\tcontinue;\n\n\t\t\t\trc = tm_obit(*(tid + c), ev + c, events_obit + c);\n\t\t\t\tif (rc == TM_SUCCESS) {\n\t\t\t\t\tif (*(events_obit + c) == TM_NULL_EVENT) {\n\t\t\t\t\t\tif (verbose) {\n\t\t\t\t\t\t\tfprintf(stderr, \"task already dead\\n\");\n\t\t\t\t\t\t}\n\t\t\t\t\t} else if (*(events_obit + c) == TM_ERROR_EVENT) 
{\n\t\t\t\t\t\tif (verbose) {\n\t\t\t\t\t\t\tfprintf(stderr, \"Error on Obit return\\n\");\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tnobits++;\n\t\t\t\t\t}\n\t\t\t\t} else if (verbose) {\n\t\t\t\t\tfprintf(stderr, \"%s: failed to register for task termination notice, task 0x%08X\\n\", id, c);\n\t\t\t\t}\n\n\t\t\t} else if (eventpolled == *(events_obit + c)) {\n\t\t\t\t/* obit event, task exited */\n\t\t\t\tnobits--;\n\t\t\t\t*(tid + c) = TM_NULL_TASK;\n\t\t\t\tif (verbose || *(ev + c) != 0) {\n\t\t\t\t\tprintf(\"%s: task 0x%08X exit status %d\\n\",\n\t\t\t\t\t       id, c, *(ev + c));\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\nint\nmain(int argc, char *argv[], char *envp[])\n{\n\tint c = 0;\n\tint err = 0;\n\tint max_events;\n\tint ncopies = -1;\n\tint nd = 0;\n\tint onenode = -1;\n\tint rc = 0;\n\tstruct tm_roots rootrot;\n\tint nspawned = 0;\n\ttm_node_id *nodelist = NULL;\n\tint start = 0;\n\tint stop = 0;\n\tint sync = 0;\n\tchar *pbs_environ = NULL;\n#ifndef WIN32\n\tstruct sigaction act;\n#endif\n\textern int optind;\n\textern char *optarg;\n\tchar *targethost = NULL;\n\n\t/*test for real deal or just version and exit*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\twhile ((c = getopt(argc, argv, \"c:n:h:svo\")) != EOF) {\n\t\tswitch (c) {\n\t\t\tcase 'c':\n\t\t\t\ttargethost = NULL;\n\t\t\t\tncopies = atoi(optarg);\n\t\t\t\tif (ncopies < 0) {\n\t\t\t\t\terr = 1;\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'n':\n\t\t\t\ttargethost = NULL;\n\t\t\t\tonenode = atoi(optarg);\n\t\t\t\tif (onenode < 0) {\n\t\t\t\t\terr = 1;\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'h':\n\t\t\t\tonenode = 0;\n\t\t\t\ttargethost = optarg;\t\t\n\t\t\t\tonenode = find_hostline(targethost);\n\t\t\t\tif (onenode < 0) {\n\t\t\t\t\terr = 1;\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 's':\n\t\t\t\tsync = 1; /* force synchronous spawns */\n\t\t\t\tbreak;\n\t\t\tcase 'v':\n\t\t\t\tverbose = 1; /* turn on verbose output */\n\t\t\t\tbreak;\n\t\t\tcase 
'o':\n\t\t\t\tno_obit = 1;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\terr = 1;\n\t\t\t\tbreak;\n\t\t}\n\t}\n\n\tif (err || (onenode >= 0 && ncopies >= 0) || (argc == optind)) {\n\t\tfprintf(stderr, \"Usage: %s [-c copies][-s][-v][-o]\"\n\t\t\t\t\" -- program [args...]\\n\",\n\t\t\targv[0]);\n\t\tfprintf(stderr, \"       %s [-n node_index][-s][-v][-o]\"\n\t\t\t\t\" -- program [args...]\\n\",\n\t\t\targv[0]);\n\t\tfprintf(stderr, \"       %s [-h hostname][-s][-v][-o]\"\n\t\t\t\t\" -- program [args...]\\n\",\n\t\t\targv[0]);\n\t\tfprintf(stderr, \"       %s --version\\n\", argv[0]);\n\t\tfprintf(stderr, \"Where -c copies = run a copy \"\n\t\t\t\t\"of \\\"program\\\" on the first \\\"copies\\\" nodes,\\n\");\n\t\tfprintf(stderr, \"      -n node_index = run a copy \"\n\t\t\t\t\"of \\\"program\\\" on the \\\"node_index\\\"-th node,\\n\");\n\t\tfprintf(stderr, \"      -h hostname = run a copy \"\n\t\t\t\t\"of \\\"program\\\" on the node named by hostname,\\n\");\n\t\tfprintf(stderr, \"      -s = forces synchronous execution,\\n\");\n\t\tfprintf(stderr, \"      -v = forces verbose output.\\n\");\n\t\tfprintf(stderr, \"      -o = no obits are waited for.\\n\");\n\n\t\texit(1);\n\t}\n\n\tid = argv[0];\n\tif ((pbs_environ = getenv(\"PBS_ENVIRONMENT\")) == 0) {\n\t\tfprintf(stderr, \"%s: not executing under PBS\\n\", id);\n\t\treturn 1;\n\t}\n\n\t/*\n\t *\tSet up interface to the Task Manager\n\t */\n\tif ((rc = tm_init(0, &rootrot)) != TM_SUCCESS) {\n\t\tfprintf(stderr, \"%s: tm_init failed, rc = %s (%d)\\n\", id,\n\t\t\tget_ecname(rc), rc);\n\t\treturn 1;\n\t}\n\n#ifdef WIN32\n\tsignal(SIGINT, bailout);\n\tsignal(SIGTERM, bailout);\n#else\n\tsigemptyset(&allsigs);\n\tsigaddset(&allsigs, SIGHUP);\n\tsigaddset(&allsigs, SIGINT);\n\tsigaddset(&allsigs, SIGTERM);\n\n\tact.sa_mask = allsigs;\n\tact.sa_flags = 0;\n\t/*\n\t ** We want to abort system calls and call a function.\n\t */\n#ifdef SA_INTERRUPT\n\tact.sa_flags |= SA_INTERRUPT;\n#endif\n\tact.sa_handler = bailout;\n\tsigaction(SIGHUP, &act, NULL);\n\tsigaction(SIGINT, &act, NULL);\n\tsigaction(SIGTERM, &act, NULL);\n\n#endif /* 
WIN32 */\n\n#ifdef DEBUG\n\tif (rootrot.tm_parent == TM_NULL_TASK) {\n\t\tprintf(\"%s: I am the mother of all tasks\\n\", id);\n\t} else {\n\t\tprintf(\"%s: I am but a child in the scheme of things\\n\", id);\n\t}\n#endif /* DEBUG */\n\n\tif ((rc = tm_nodeinfo(&nodelist, &numnodes)) != TM_SUCCESS) {\n\t\tfprintf(stderr, \"%s: tm_nodeinfo failed, rc = %s (%d)\\n\", id,\n\t\t\tget_ecname(rc), rc);\n\t\treturn 1;\n\t}\n\n\tmax_events = (ncopies > numnodes) ? ncopies : numnodes;\n\n\t/* malloc space for various arrays based on number of nodes/tasks */\n\n\ttid = (tm_task_id *) calloc(max_events, sizeof(tm_task_id));\n\tif (tid == NULL) {\n\t\tfprintf(stderr, \"%s: malloc of task ids failed\\n\", id);\n\t\treturn 1;\n\t}\n\tevents_spawn = (tm_event_t *) calloc(max_events, sizeof(tm_event_t));\n\tif (events_spawn == NULL) {\n\t\tfprintf(stderr, \"%s: out of memory\\n\", id);\n\t\treturn 1;\n\t}\n\tevents_obit = (tm_event_t *) calloc(max_events, sizeof(tm_event_t));\n\tif (events_obit == NULL) {\n\t\tfprintf(stderr, \"%s: out of memory\\n\", id);\n\t\treturn 1;\n\t}\n\tev = (int *) calloc(max_events, sizeof(int));\n\tif (ev == NULL) {\n\t\tfprintf(stderr, \"%s: out of memory\\n\", id);\n\t\treturn 1;\n\t}\n\tfor (c = 0; c < max_events; c++) {\n\t\t*(tid + c) = TM_NULL_TASK;\n\t\t*(events_spawn + c) = TM_NULL_EVENT;\n\t\t*(events_obit + c) = TM_NULL_EVENT;\n\t\t*(ev + c) = 0;\n\t}\n\n\t/* Now spawn the program to where it goes */\n\n\n\t\t\n\tif (onenode >= 0) {\n\n\t\t/* Spawning one copy onto logical node \"onenode\" */\n\n\t\tstart = onenode;\n\t\tstop = onenode + 1;\n\n\t} else if (ncopies >= 0) {\n\t\t/* Spawn a copy of the program to the first \"ncopies\" nodes */\n\n\t\tstart = 0;\n\t\tstop = ncopies;\n\t} else {\n\t\t/* Spawn a copy on all nodes */\n\n\t\tstart = 0;\n\t\tstop = numnodes;\n\t}\n\n#ifndef WIN32\n\tsigprocmask(SIG_BLOCK, &allsigs, NULL);\n#endif\n\n\tfor (c = 0; c < (stop - start); ++c) {\n\t\tnd = (start + c) % numnodes;\n\t\tif ((rc = tm_spawn(argc 
- optind,\n\t\t\t\t   argv + optind,\n\t\t\t\t   NULL,\n\t\t\t\t   *(nodelist + nd),\n\t\t\t\t   tid + c,\n\t\t\t\t   events_spawn + c)) != TM_SUCCESS) {\n\t\t\tfprintf(stderr, \"%s: spawn failed on node %d err %s\\n\",\n\t\t\t\tid, nd, get_ecname(rc));\n\t\t} else {\n\t\t\tif (verbose)\n\t\t\t\tprintf(\"%s: spawned task 0x%08X on logical node %d event %d\\n\", id, c, nd, *(events_spawn + c));\n\t\t\t++nspawned;\n\t\t\tif (sync)\n\t\t\t\twait_for_task(c, &nspawned); /* one at a time */\n\t\t}\n\t}\n\n\tif (sync == 0)\n\t\twait_for_task(0, &nspawned); /* wait for all to finish */\n#ifdef WIN32\n\t/*\n\t * On Windows, in case of interactive jobs - pbs_demux is writing on stdout and stderr\n\t * in parallel to the interactive shell on which pbsdsh executes. Give pbs_demux some time to\n\t * finish writing to the console before we exit from here.\n\t *\n\t */\n\tif (strncmp(pbs_environ, \"PBS_INTERACTIVE\", strlen(\"PBS_INTERACTIVE\")) == 0) {\n\t\tSleep(200); /* 200 ms */\n\t}\n#endif\n\t/*\n\t *\tTerminate interface with Task Manager\n\t */\n\ttm_finalize();\n\n\treturn 0;\n}\n"
  },
  {
    "path": "src/cmds/pbsnodes.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbsnodes.c\n * @brief\n *\t This program exists to give a way to mark nodes\n *\t Down, Offline, or Free in PBS.\n *\n * @par\tusage:\tpbsnodes [-s server] -[F format] host host ...\n *\t\tpbsnodes [-s server] -[F format]-v vnode vnode ...\n *\t\tpbsnodes [-s server] -[F format]-H host host...\n *\t\tpbsnodes [-s server] -[C comment]{o|r} host host ...\n *\t\tpbsnodes [-s server] -a[v][F format]\n *\t\tpbsnodes [-s server] -a[v][S[j][L]][F format]\n *\t\tpbsnodes [-s server] -[S[j][L]][F format][H] host host ...\n *\twhere the node(s) are the names given in the node\n *\tdescription file.\n *\n * @note\n *\t\tpbsnodes -c and -d have been deprecated.\n *\t\tText about -c and -d has been removed from the usage\n *\t\tstatement.  
The code will be removed in the future.\n *\n *\tpbsnodes\t\t\tprint command usage\n *\n *\tpbsnodes -d\t\t\tclear \"DOWN\" from all nodes so marked\n *\n *\tpbsnodes -d node1 node2\t\tset nodes node1, node2 \"DOWN\"\n *\t\t\t\t\tunmark \"DOWN\" from any other node\n *\n *\tpbsnodes -a\t\t\tlist all hosts\n *\tpbsnodes host [host...]\t\tlist specified hosts\n *\tpbsnodes -av\t\t\tlist all hosts and v-nodes\n *\tpbsnodes -v vnode [vnode...]\tlist specified vnodes\n *\n *\tpbsnodes -C <comment> host ...\tset a comment on hosts\n *\n *\tpbsnodes -F <format> host ...\tlist the output in specified format for specified nodes.\n *\n *\tpbsnodes -H host [host ...]\tlist the hosts and vnodes on them.\n *\n *\tpbsnodes -l\t\t\tlist all nodes marked in any way\n *\tpbsnodes -l node1 node2\t\tlist specified nodes\n *\n *\tpbsnodes -o host1 host2\t\tmark hosts host1, host2 as OFF_LINE\n *\t\t\t\t\teven if currently in use.\n *\n *\tpbsnodes -r host1 host2\t\tclear OFF_LINE from listed hosts\n *\n *\tpbsnodes -S host1 host2 ...\tsingle line Node summary of specified nodes\n *\tpbsnodes -Sj\t\t\tsingle line Jobs summary of specified nodes\n *\tpbsnodes -S[j]L\t\t\tlist expanded version of each field in the single line summary\n *\n *\tpbsnodes -c host1 host2\t\tclear OFF_LINE or DOWN from listed hosts\n */\n#include <pbs_config.h> /* the master config generated by configure */\n#include <pbs_version.h>\n\n#include \"cmds.h\"\n#include \"portability.h\"\n#include \"pbs_ifl.h\"\n#include \"pbs_internal.h\"\n#include \"pbs_json.h\"\n\n/* Field width for Single line summary */\n#define NODE_NAME 15\n#define NODE_STATE 15\n#define NODE_OS 8\n#define NODE_HARDW 8\n#define NODE_HOST 15\n#define QUEUE 10\n#define NCPUS 7\n#define MEM 8\n#define NMIC 7\n#define NGPUS 7\n#define COMMENT 20\n#define NJOBS 6\n#define RUNNING_JOBS 5\n#define SUSP_JOBS 6\n#define NCPUS_FT 7\n#define MEM_FT 12\n#define NMIC_FT 7\n#define NGPUS_FT 7\n\ntypedef enum mgr_operation {\n\tDOWN,\t\t/* Set nodes DOWN 
*/\n\tLISTMRK,\t/* List nodes marked DOWN or OFF_LINE */\n\tCLEAR,\t\t/* Clear DOWN and OFF_LINE */\n\tOFFLINE,\t/* Set nodes OFF_LINE */\n\tRESET,\t\t/* Clear nodes OFF_LINE */\n\tUPDATE_COMMENT, /* add comment to nodes */\n\tALL,\t\t/* List all nodes */\n\tLISTSP,\t\t/* List specified nodes */\n\tLISTSPNV\t/* List specified nodes and their associated vnodes */\n} mgr_operation_t;\n\nenum output_format_enum {\n\tFORMAT_DEFAULT = 0,\n\tFORMAT_DSV,\n\tFORMAT_JSON,\n\tFORMAT_MAX /* Add new formats before FORMAT_MAX */\n\t\t   /* and update output_format_names[]  */\n};\n\n/* This array contains the names users may specify for output format. */\nstatic char *output_format_names[] = {\"default\", \"dsv\", \"json\", NULL};\n\nstatic int output_format = FORMAT_DEFAULT;\nstatic int quiet = 0;\nstatic char *dsv_delim = \"|\";\nstatic json_data *json_nodes = NULL; /* json structure for nodes */\n\n/**\n * @brief\n *\tcmp_node_name - compare two node names, allowing the second to match the\n *\tfirst if they are the same up to a dot ('.') in the second; i.e.\n *\t\"foo\" == \"foo.bar\"\n *\n * @param[in]  n1 - first node name to be matched with\n * @param[in]  n2 - second node name to be matched\n *\n * @return - int\n * @retval   1 - names do not match\n * @retval   0 - names match\n *\n */\n\nstatic int\ncmp_node_name(char *n1, char *n2)\n{\n\twhile ((*n1 != '\\0') && (*n2 != '\\0')) {\n\t\tif (*n1 != *n2)\n\t\t\tbreak;\n\t\tn1++;\n\t\tn2++;\n\t}\n\tif (*n1 == *n2)\n\t\treturn 0;\n\telse if ((*n1 == '.') && (*n2 == '\\0'))\n\t\treturn 0;\n\telse\n\t\treturn 1;\n}\n\n/**\n * @brief\n *\tEncodes the information in the batch_status structure into JSON format\n *\n * @param[in] *bstat - structure containing node information\n *\n * @return - Error code\n * @retval   1 - Failure\n * @retval   0 - Success\n *\n */\nstatic int\nencode_to_json(struct batch_status *bstat)\n{\n\tstruct attrl *next;\n\tstruct attrl *pattr;\n\tchar *str;\n\tchar *pc;\n\tchar *pc1;\n\tchar *prev_jobid = \"\";\n\tjson_data 
*json_node;\n\n\tif ((json_node = pbs_json_create_object()) == NULL)\n\t\treturn 1;\n\tpbs_json_insert_item(json_nodes, bstat->name, json_node);\n\tfor (pattr = bstat->attribs; pattr; pattr = pattr->next) {\n\t\tif (strcmp(pattr->name, \"resources_available\") == 0) {\n\t\t\tjson_data *json_resc;\n\t\t\tif ((json_resc = pbs_json_create_object()) == NULL)\n\t\t\t\treturn 1;\n\t\t\tpbs_json_insert_item(json_node, pattr->name, json_resc);\n\t\t\tfor (next = pattr; next;) {\n\t\t\t\tif (pbs_json_insert_parsed(json_resc, next->resource, next->value, 0))\n\t\t\t\t\treturn 1;\n\t\t\t\tif (next->next == NULL || strcmp(next->next->name, \"resources_available\")) {\n\t\t\t\t\tpattr = next;\n\t\t\t\t\tnext = NULL;\n\t\t\t\t} else {\n\t\t\t\t\tnext = next->next;\n\t\t\t\t}\n\t\t\t}\n\t\t} else if (strcmp(pattr->name, \"resources_assigned\") == 0) {\n\t\t\tjson_data *json_resc;\n\t\t\tif ((json_resc = pbs_json_create_object()) == NULL)\n\t\t\t\treturn 1;\n\t\t\tpbs_json_insert_item(json_node, pattr->name, json_resc);\n\t\t\tfor (next = pattr; next;) {\n\t\t\t\tstr = next->value;\n\t\t\t\tstrtod(str, &pc);\n\t\t\t\twhile (pc) {\n\t\t\t\t\tif (isspace(*pc))\n\t\t\t\t\t\tpc++;\n\t\t\t\t\telse\n\t\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\t/* Adding only non zero values.*/\n\t\t\t\tif (pbs_json_insert_parsed(json_resc, next->resource, next->value, 1))\n\t\t\t\t\treturn 1;\n\t\t\t\tif (next->next == NULL || strcmp(next->next->name, \"resources_assigned\")) {\n\t\t\t\t\tpattr = next;\n\t\t\t\t\tnext = NULL;\n\t\t\t\t} else {\n\t\t\t\t\tnext = next->next;\n\t\t\t\t}\n\t\t\t}\n\t\t} else if (strcmp(pattr->name, \"jobs\") == 0) {\n\t\t\tjson_data *json_jobs;\n\t\t\tif ((json_jobs = pbs_json_create_array()) == NULL)\n\t\t\t\treturn 1;\n\t\t\tpbs_json_insert_item(json_node, pattr->name, json_jobs);\n\t\t\tpc = pc1 = str = pattr->value;\n\t\t\twhile (*pc1) {\n\t\t\t\tif (*pc1 != ' ')\n\t\t\t\t\t*pc++ = *(pc1);\n\t\t\t\tpc1++;\n\t\t\t}\n\t\t\t*pc = '\\0';\n\t\t\tfor (pc = strtok(str, \",\"); pc != 
NULL; pc = strtok(NULL, \",\")) {\n\t\t\t\tpc1 = strchr(pc, '/');\n\t\t\t\tif (pc1)\n\t\t\t\t\t*pc1 = '\\0';\n\t\t\t\tif (strcmp(pc, prev_jobid) != 0) {\n\t\t\t\t\tif (pbs_json_insert_string(json_jobs, NULL, pc))\n\t\t\t\t\t\treturn 1;\n\t\t\t\t}\n\t\t\t\tprev_jobid = pc;\n\t\t\t}\n\t\t} else {\n\t\t\tif(pbs_json_insert_parsed(json_node, pattr->name, pattr->value, 0))\n\t\t\t\treturn 1;\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tprints the nodes summary in specified format\n *\n * @param[in] *def_server - server name\n * @param[in] *bstatus - structure with node information\n * @param[in] job_summary - value to test wheteher job running on node\n * @param[in] long_summary - value to test whether to print long summary of node\n *\n * @retval - Error code\n * @retval   1 - Failure\n * @retval   0 - Success\n *\n */\nstatic int\nprt_node_summary(char *def_server, struct batch_status *bstatus, int job_summary, int long_summary)\n{\n\tstruct batch_status *bstat = NULL;\n\tstruct attrl *pattr;\n\tstruct attrl *next;\n\tchar suffixletter[] = \" kmgtp?\";\n\tchar *pc;\n\tchar *pc1;\n\tchar mem_info[50] = \"0kb\";\n\tchar ncpus_info[20] = \"0\";\n\tchar nmic_info[20] = \"0\";\n\tchar ngpus_info[20] = \"0\";\n\tchar *prev_jobid = NULL;\n\tchar *cur_jobid = NULL;\n\tint prefix_assigned = 0;\n\tlong int assigned_mem = 0;\n\tlong int njobs = 0;\n\tlong int run_jobs = 0;\n\tlong int susp_jobs = 0;\n\tlong int value = 0;\n\tstatic int done_headers = 0;\n\tjson_data *json_node;\n\tjson_data *json_jobs;\n\n\tif (output_format == FORMAT_DEFAULT && !done_headers) {\n\t\tif (job_summary) {\n\t\t\tprintf(\"                                                        mem       ncpus   nmics   ngpus\\n\");\n\t\t\tprintf(\"vnode           state           njobs   run   susp      f/t        f/t     f/t     f/t   jobs\\n\");\n\t\t\tprintf(\"--------------- --------------- ------ ----- ------ ------------ ------- ------- ------- -------\\n\");\n\t\t} else {\n\t\t\tprintf(\"vnode           
state           OS       hardware host            queue        mem     ncpus   nmics   ngpus  comment\\n\");\n\t\t\tprintf(\"--------------- --------------- -------- -------- --------------- ---------- -------- ------- ------- ------- ---------\\n\");\n\t\t}\n\t\tdone_headers = 1;\n\t}\n\tif (def_server == NULL)\n\t\tdef_server = \"\";\n\n\tfor (bstat = bstatus; bstat; bstat = bstat->next) {\n\t\tchar *name;\n\t\tchar *state;\n\t\tchar *hardware;\n\t\tchar *queue;\n\t\tchar *os;\n\t\tchar *host;\n\t\tchar *comment;\n\t\tchar *jobs;\n\t\tint count;\n\t\tint prefix_total;\n\t\tint prefix_available; /* magnitude of value when printed */\n\t\tlong int total_mem;\n\t\tlong int available_mem;\n\t\tlong int total_cpus;\n\t\tlong int available_cpus;\n\t\tlong int total_nmic;\n\t\tlong int available_nmic;\n\t\tlong int total_ngpus;\n\t\tlong int available_ngpus;\n\t\tlong int resource_assigned;\n\n\t\tname = bstat->name;\n\t\tstate = \"--\";\n\t\thardware = \"--\";\n\t\tqueue = \"--\";\n\t\tos = \"--\";\n\t\thost = \"--\";\n\t\tcomment = \"--\";\n\t\tjobs = \"--\";\n\t\tcount = 0;\n\t\tprefix_total = 0;\n\t\tprefix_available = 0;\n\t\ttotal_mem = 0;\n\t\tavailable_mem = 0;\n\t\ttotal_cpus = 0;\n\t\tavailable_cpus = 0;\n\t\ttotal_nmic = 0;\n\t\tavailable_nmic = 0;\n\t\ttotal_ngpus = 0;\n\t\tavailable_ngpus = 0;\n\t\tresource_assigned = 0;\n\t\tnjobs = 0;\n\t\trun_jobs = 0;\n\t\tsusp_jobs = 0;\n\t\tvalue = 0;\n\t\tprev_jobid = \"\";\n\t\tcur_jobid = \"\";\n\t\tpc = NULL;\n\n\t\tif (job_summary) {\n\t\t\tstrcpy(mem_info, \"0kb/0kb\");\n\t\t\tstrcpy(ncpus_info, \"0/0\");\n\t\t\tstrcpy(nmic_info, \"0/0\");\n\t\t\tstrcpy(ngpus_info, \"0/0\");\n\t\t} else {\n\t\t\tstrcpy(mem_info, \"0kb\");\n\t\t\tstrcpy(ncpus_info, \"0\");\n\t\t\tstrcpy(nmic_info, \"0\");\n\t\t\tstrcpy(ngpus_info, \"0\");\n\t\t}\n\n\t\tfor (pattr = bstat->attribs; pattr; pattr = pattr->next) {\n\t\t\tif (pattr->resource && (strcmp(pattr->name, \"resources_assigned\") != 0)) {\n\t\t\t\tif ((strcmp(pattr->resource, 
\"mem\") == 0)) {\n\t\t\t\t\ttotal_mem = strtol(pattr->value, &pc, 10);\n\t\t\t\t\tif (*pc == 'k')\n\t\t\t\t\t\tprefix_total = 1;\n\t\t\t\t\telse if (*pc == 'm')\n\t\t\t\t\t\tprefix_total = 2;\n\t\t\t\t\telse if (*pc == 'g')\n\t\t\t\t\t\tprefix_total = 3;\n\t\t\t\t\telse if (*pc == 't')\n\t\t\t\t\t\tprefix_total = 4;\n\t\t\t\t\telse\n\t\t\t\t\t\tprefix_total = 0;\n\t\t\t\t\tprefix_available = prefix_total;\n\t\t\t\t\tfor (next = pattr->next; next && job_summary; next = next->next) {\n\t\t\t\t\t\tif (!next->resource)\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\tif (strcmp(next->resource, \"mem\") == 0) {\n\t\t\t\t\t\t\tassigned_mem = strtol(next->value, &pc, 10);\n\t\t\t\t\t\t\tif (*pc == 'k')\n\t\t\t\t\t\t\t\tprefix_assigned = 1;\n\t\t\t\t\t\t\telse if (*pc == 'm')\n\t\t\t\t\t\t\t\tprefix_assigned = 2;\n\t\t\t\t\t\t\telse if (*pc == 'g')\n\t\t\t\t\t\t\t\tprefix_assigned = 3;\n\t\t\t\t\t\t\telse if (*pc == 't')\n\t\t\t\t\t\t\t\tprefix_assigned = 4;\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\tprefix_assigned = 0;\n\t\t\t\t\t\t\twhile (prefix_assigned != prefix_total) {\n\t\t\t\t\t\t\t\tif (prefix_assigned < prefix_total) {\n\t\t\t\t\t\t\t\t\tassigned_mem = ((assigned_mem % 1024) + assigned_mem) >> 10;\n\t\t\t\t\t\t\t\t\tprefix_assigned++;\n\t\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\t\tassigned_mem = assigned_mem << 10;\n\t\t\t\t\t\t\t\t\tprefix_assigned--;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tavailable_mem = total_mem - assigned_mem;\n\t\t\t\t\t\t\tprefix_available = prefix_total;\n\t\t\t\t\t\t\twhile (available_mem > 999) {\n\t\t\t\t\t\t\t\tavailable_mem = ((available_mem % 1024) + available_mem) >> 10;\n\t\t\t\t\t\t\t\tprefix_available++;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\twhile (total_mem > 999) {\n\t\t\t\t\t\ttotal_mem = ((total_mem % 1024) + total_mem) >> 10;\n\t\t\t\t\t\tprefix_total++;\n\t\t\t\t\t}\n\t\t\t\t\tif (job_summary)\n\t\t\t\t\t\tsnprintf(mem_info, sizeof(mem_info), \"%ld%cb/%ld%cb\", available_mem, 
suffixletter[prefix_available],\n\t\t\t\t\t\t\t total_mem, suffixletter[prefix_total]);\n\t\t\t\t\telse\n\t\t\t\t\t\tsnprintf(mem_info, sizeof(mem_info), \"%ld%cb\", total_mem, suffixletter[prefix_total]);\n\t\t\t\t} else if ((strcmp(pattr->resource, \"ncpus\") == 0)) {\n\t\t\t\t\ttotal_cpus = atol(pattr->value);\n\t\t\t\t\tresource_assigned = 0;\n\t\t\t\t\tfor (next = pattr->next; next && job_summary; next = next->next) {\n\t\t\t\t\t\tif (!next->resource)\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\tif (strcmp(next->resource, \"ncpus\") == 0) {\n\t\t\t\t\t\t\tresource_assigned = atol(next->value);\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tif (job_summary) {\n\t\t\t\t\t\tavailable_cpus = total_cpus - resource_assigned;\n\t\t\t\t\t\tsnprintf(ncpus_info, 20, \"%ld/%ld\", available_cpus, total_cpus);\n\t\t\t\t\t} else\n\t\t\t\t\t\tsnprintf(ncpus_info, 20, \"%ld\", total_cpus);\n\t\t\t\t} else if (strcmp(pattr->resource, \"nmics\") == 0) {\n\t\t\t\t\ttotal_nmic = atol(pattr->value);\n\t\t\t\t\tresource_assigned = 0;\n\t\t\t\t\tfor (next = pattr->next; next && job_summary; next = next->next) {\n\t\t\t\t\t\tif (!next->resource)\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\tif (strcmp(next->resource, \"nmics\") == 0) {\n\t\t\t\t\t\t\tresource_assigned = atol(next->value);\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tif (job_summary) {\n\t\t\t\t\t\tavailable_nmic = total_nmic - resource_assigned;\n\t\t\t\t\t\tsnprintf(nmic_info, 20, \"%ld/%ld\", available_nmic, total_nmic);\n\t\t\t\t\t} else\n\t\t\t\t\t\tsnprintf(nmic_info, 20, \"%ld\", total_nmic);\n\t\t\t\t} else if (strcmp(pattr->resource, \"ngpus\") == 0) {\n\t\t\t\t\ttotal_ngpus = atol(pattr->value);\n\t\t\t\t\tresource_assigned = 0;\n\t\t\t\t\tfor (next = pattr->next; next && job_summary; next = next->next) {\n\t\t\t\t\t\tif (!next->resource)\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\tif (strcmp(next->resource, \"ngpus\") == 0) {\n\t\t\t\t\t\t\tresource_assigned = 
atol(next->value);\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tif (job_summary) {\n\t\t\t\t\t\tavailable_ngpus = total_ngpus - resource_assigned;\n\t\t\t\t\t\tsnprintf(ngpus_info, 20, \"%ld/%ld\", available_ngpus, total_ngpus);\n\t\t\t\t\t} else\n\t\t\t\t\t\tsnprintf(ngpus_info, 20, \"%ld\", total_ngpus);\n\n\t\t\t\t} else if (strcmp(pattr->resource, \"host\") == 0)\n\t\t\t\t\thost = pattr->value;\n\t\t\t\telse if (strcmp(pattr->resource, \"OS\") == 0)\n\t\t\t\t\tos = pattr->value;\n\t\t\t\telse if (strcmp(pattr->resource, \"hardware\") == 0)\n\t\t\t\t\thardware = pattr->value;\n\t\t\t} else if (strcmp(pattr->name, \"state\") == 0) {\n\t\t\t\tif (!long_summary)\n\t\t\t\t\tstate = strtok(pattr->value, \",\");\n\t\t\t\telse\n\t\t\t\t\tstate = pattr->value;\n\t\t\t} else if (strcmp(pattr->name, \"comment\") == 0)\n\t\t\t\tcomment = pattr->value;\n\t\t\telse if (strcmp(pattr->name, \"queue\") == 0)\n\t\t\t\tqueue = pattr->value;\n\t\t\telse if (job_summary) {\n\t\t\t\tif (strcmp(pattr->name, \"jobs\") == 0) {\n\t\t\t\t\tjobs = pattr->value;\n\t\t\t\t\tcount = 1;\n\t\t\t\t\tpc = strtok(pattr->value, \", \");\n\t\t\t\t\twhile (pc != NULL) {\n\t\t\t\t\t\tpc1 = strchr(pc, (int) '/'); /* remove virtual core description from jobid. */\n\t\t\t\t\t\tif (pc1)\n\t\t\t\t\t\t\t*pc1 = '\\0';\n\t\t\t\t\t\tcur_jobid = pc;\n\t\t\t\t\t\tif (output_format == FORMAT_DEFAULT) {\n\t\t\t\t\t\t\tpc1 = strchr(pc, (int) '.');\n\t\t\t\t\t\t\tif (strcmp((pc1 + 1), def_server) == 0) /* if not equal, job is from a peer_server. 
*/\n\t\t\t\t\t\t\t\t*pc1 = '\\0';\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif ((strcmp(cur_jobid, prev_jobid) != 0) && strcmp(cur_jobid, jobs) != 0) { /* will skip concatenating if only one job.*/\n\t\t\t\t\t\t\tstrcat(jobs, \",\");\n\t\t\t\t\t\t\tstrcat(jobs, cur_jobid);\n\t\t\t\t\t\t\tcount++;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tprev_jobid = cur_jobid;\n\t\t\t\t\t\tpc = strtok(NULL, \", \");\n\t\t\t\t\t}\n\t\t\t\t\trun_jobs = count;\n\t\t\t\t\tnjobs = susp_jobs + run_jobs;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tswitch (output_format) {\n\n\t\t\tcase FORMAT_DSV:\n\t\t\t\tif (job_summary) {\n\t\t\t\t\tprintf(\"vnode=%s%sstate=%s%snjobs=%ld%srun=%ld%ssusp=%ld%smem(f/t)=%s%sncpus(f/t)=%s%snmics(f/t)=%s%sngpus(f/t)=%s%sjobs=%s\\n\",\n\t\t\t\t\t       name, dsv_delim, state, dsv_delim, njobs, dsv_delim, run_jobs, dsv_delim, susp_jobs,\n\t\t\t\t\t       dsv_delim, mem_info, dsv_delim, ncpus_info, dsv_delim, nmic_info, dsv_delim, ngpus_info, dsv_delim, jobs);\n\t\t\t\t} else {\n\t\t\t\t\tprintf(\"vnode=%s%sstate=%s%sOS=%s%shardware=%s%shost=%s%squeue=%s%smem=%s%sncpus=%s%snmics=%s%sngpus=%s%scomment=%s\\n\",\n\t\t\t\t\t       name, dsv_delim, state, dsv_delim, os, dsv_delim, hardware, dsv_delim, host, dsv_delim, queue,\n\t\t\t\t\t       dsv_delim, mem_info, dsv_delim, ncpus_info, dsv_delim, nmic_info, dsv_delim, ngpus_info, dsv_delim, show_nonprint_chars(comment));\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase FORMAT_JSON:\n\t\t\t\tif ((json_node = pbs_json_create_object()) == NULL)\n\t\t\t\t\treturn 1;\n\t\t\t\tpbs_json_insert_item(json_nodes, name, json_node);\n\t\t\t\tif (pbs_json_insert_string(json_node, \"State\", state))\n\t\t\t\t\treturn 1;\n\t\t\t\tif (job_summary) {\n\t\t\t\t\tif (pbs_json_insert_number(json_node, \"Total Jobs\", (double) njobs))\n\t\t\t\t\t\treturn 1;\n\t\t\t\t\tif (pbs_json_insert_number(json_node, \"Running Jobs\", (double) run_jobs))\n\t\t\t\t\t\treturn 1;\n\t\t\t\t\tif (pbs_json_insert_number(json_node, \"Suspended Jobs\", (double) susp_jobs))\n\t\t\t\t\t\treturn 
1;\n\t\t\t\t\tif (pbs_json_insert_string(json_node, \"mem f/t\", mem_info))\n\t\t\t\t\t\treturn 1;\n\t\t\t\t\tif (pbs_json_insert_string(json_node, \"ncpus f/t\", ncpus_info))\n\t\t\t\t\t\treturn 1;\n\t\t\t\t\tif (pbs_json_insert_string(json_node, \"nmics f/t\", nmic_info))\n\t\t\t\t\t\treturn 1;\n\t\t\t\t\tif (pbs_json_insert_string(json_node, \"ngpus f/t\", ngpus_info))\n\t\t\t\t\t\treturn 1;\n\t\t\t\t\tif ((json_jobs = pbs_json_create_array()) == NULL)\n\t\t\t\t\t\treturn 1;\n\t\t\t\t\tpbs_json_insert_item(json_node, \"jobs\", json_jobs);\n\t\t\t\t\tif (strcmp(jobs, \"--\") != 0) {\n\t\t\t\t\t\tpc = strtok(jobs, \",\");\n\t\t\t\t\t\twhile (pc != NULL) {\n\t\t\t\t\t\t\tif (pbs_json_insert_string(json_jobs, NULL, pc))\n\t\t\t\t\t\t\t\treturn 1;\n\t\t\t\t\t\t\tpc = strtok(NULL, \",\");\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tif (pbs_json_insert_string(json_node, \"OS\", os))\n\t\t\t\t\t\treturn 1;\n\t\t\t\t\tif (pbs_json_insert_string(json_node, \"hardware\", hardware))\n\t\t\t\t\t\treturn 1;\n\t\t\t\t\tif (pbs_json_insert_string(json_node, \"host\", host))\n\t\t\t\t\t\treturn 1;\n\t\t\t\t\tif (pbs_json_insert_string(json_node, \"queue\", queue))\n\t\t\t\t\t\treturn 1;\n\t\t\t\t\tif (pbs_json_insert_string(json_node, \"Memory\", mem_info))\n\t\t\t\t\t\treturn 1;\n\t\t\t\t\tvalue = atol(ncpus_info);\n\t\t\t\t\tif (pbs_json_insert_number(json_node, \"ncpus\", (double) value))\n\t\t\t\t\t\treturn 1;\n\t\t\t\t\tvalue = atol(nmic_info);\n\t\t\t\t\tif (pbs_json_insert_number(json_node, \"nmics\", (double) value))\n\t\t\t\t\t\treturn 1;\n\t\t\t\t\tvalue = atol(ngpus_info);\n\t\t\t\t\tif (pbs_json_insert_number(json_node, \"ngpus\", (double) value))\n\t\t\t\t\t\treturn 1;\n\t\t\t\t\tif (pbs_json_insert_string(json_node, \"comment\", comment))\n\t\t\t\t\t\treturn 1;\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase FORMAT_DEFAULT:\n\t\t\t\tif (job_summary) {\n\t\t\t\t\tif (long_summary)\n\t\t\t\t\t\tprintf(\"%-*s %-*s %*ld %*ld %*ld %*s %*s %*s %*s 
%s\\n\",\n\t\t\t\t\t\t       NODE_NAME, name, NODE_STATE, state, NJOBS, njobs, RUNNING_JOBS, run_jobs, SUSP_JOBS, susp_jobs,\n\t\t\t\t\t\t       MEM_FT, mem_info, NCPUS_FT, ncpus_info, NMIC_FT, nmic_info,\n\t\t\t\t\t\t       NGPUS_FT, ngpus_info, jobs);\n\t\t\t\t\telse\n\t\t\t\t\t\tprintf(\"%-*.*s %-*.*s %*ld %*ld %*ld %*.*s %*.*s %*.*s %*.*s %s\\n\",\n\t\t\t\t\t\t       NODE_NAME, NODE_NAME, name, NODE_STATE, NODE_STATE, state, NJOBS, njobs, RUNNING_JOBS, run_jobs,\n\t\t\t\t\t\t       SUSP_JOBS, susp_jobs, MEM_FT, MEM_FT, mem_info, NCPUS_FT, NCPUS_FT, ncpus_info, NMIC_FT, NMIC_FT, nmic_info,\n\t\t\t\t\t\t       NGPUS_FT, NGPUS_FT, ngpus_info, jobs);\n\t\t\t\t} else {\n\t\t\t\t\tif (long_summary)\n\t\t\t\t\t\tprintf(\"%-*s %-*s %-*s %-*s %-*s %-*s %*s %*s %*s %*s %s\\n\", NODE_NAME, name, NODE_STATE, state,\n\t\t\t\t\t\t       NODE_OS, os, NODE_HARDW, hardware, NODE_HOST, host, QUEUE, queue, MEM, mem_info, NCPUS, ncpus_info,\n\t\t\t\t\t\t       NMIC, nmic_info, NGPUS, ngpus_info, show_nonprint_chars(comment));\n\t\t\t\t\telse\n\t\t\t\t\t\tprintf(\"%-*.*s %-*.*s %-*.*s %-*.*s %-*.*s %-*.*s %*.*s %*.*s %*.*s %*.*s %s\\n\", NODE_NAME, NODE_NAME, name,\n\t\t\t\t\t\t       NODE_STATE, NODE_STATE, state, NODE_OS, NODE_OS, os, NODE_HARDW, NODE_HARDW, hardware,\n\t\t\t\t\t\t       NODE_HOST, NODE_HOST, host, QUEUE, QUEUE, queue, MEM, MEM, mem_info, NCPUS, NCPUS, ncpus_info,\n\t\t\t\t\t\t       NMIC, NMIC, nmic_info, NGPUS, NGPUS, ngpus_info, show_nonprint_chars(comment));\n\t\t\t\t}\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \tprint node information without summary\n *\n * @param[in] bstat - structure pointer having node information\n *\n * @retval Void\n *\n */\nstatic void\nprt_node(struct batch_status *bstat)\n{\n\tchar *pc;\n\tstruct attrl *pattr = NULL;\n\ttime_t epoch;\n\tif (bstat == NULL)\n\t\treturn;\n\n\tswitch (output_format) {\n\t\tcase FORMAT_JSON:\n\t\t\tif (encode_to_json(bstat)) {\n\t\t\t\tfprintf(stderr, \"pbsnodes: out of 
memory\\n\");\n\t\t\t\texit(1);\n\t\t\t}\n\t\t\tbreak;\n\t\tcase FORMAT_DSV:\n\t\t\tprintf(\"Name=%s%s\", bstat->name, dsv_delim);\n\t\t\tfor (pattr = bstat->attribs; pattr; pattr = pattr->next) {\n\t\t\t\tif (pattr->resource)\n\t\t\t\t\tprintf(\"%s.%s=%s\", pattr->name, pattr->resource, show_nonprint_chars(pattr->value));\n\t\t\t\telse if (strcmp(pattr->name, \"jobs\") == 0) {\n\t\t\t\t\tprintf(\"%s=\", pattr->name);\n\t\t\t\t\tpc = pattr->value;\n\t\t\t\t\twhile (*pc) {\n\t\t\t\t\t\tchar *sbuf;\n\t\t\t\t\t\tchar char_buf[2];\n\t\t\t\t\t\tif (*pc == ' ') {\n\t\t\t\t\t\t\tpc++;\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tsprintf(char_buf, \"%c\", *pc);\n\t\t\t\t\t\tsbuf = show_nonprint_chars(char_buf);\n\t\t\t\t\t\tif (sbuf != NULL) {\n\t\t\t\t\t\t\tint c;\n\t\t\t\t\t\t\tfor (c = 0; c < strlen(sbuf); c++)\n\t\t\t\t\t\t\t\tprintf(\"%c\", sbuf[c]);\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tprintf(\"%c\", *pc);\n\t\t\t\t\t\t}\n\t\t\t\t\t\tpc++;\n\t\t\t\t\t}\n\t\t\t\t} else\n\t\t\t\t\tprintf(\"%s=%s\", pattr->name, show_nonprint_chars(pattr->value));\n\t\t\t\tif (pattr->next)\n\t\t\t\t\tprintf(\"%s\", dsv_delim);\n\t\t\t}\n\t\t\tprintf(\"\\n\");\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tprintf(\"%s\\n\", bstat->name);\n\t\t\tfor (pattr = bstat->attribs; pattr; pattr = pattr->next) {\n\t\t\t\tprintf(\"     %s\", pattr->name);\n\t\t\t\tif (pattr->resource)\n\t\t\t\t\tprintf(\".%s\", pattr->resource);\n\t\t\t\tif ((strcmp(pattr->name, ATTR_NODE_last_used_time) == 0) ||\n\t\t\t\t    (strcmp(pattr->name, ATTR_NODE_last_state_change_time) == 0)) {\n\t\t\t\t\tepoch = (time_t) atol(pattr->value);\n\t\t\t\t\tprintf(\" = %s\", ctime(&epoch));\n\t\t\t\t} else\n\t\t\t\t\tprintf(\" = %s\\n\", show_nonprint_chars(pattr->value));\n\t\t\t}\n\t\t\tprintf(\"\\n\");\n\t\t\tbreak;\n\t}\n}\n\n/**\n * @brief\n *\treturns the state of node\n *\n * @param[in] pbs - structure pointer containing node information\n *\n * @return - string\n * @retval   \"\" - Failure\n * @retval   \"value\" - 
Success\n *\n */\nstatic char *\nget_nstate(struct batch_status *pbs)\n{\n\tstruct attrl *pat;\n\n\tfor (pat = pbs->attribs; pat; pat = pat->next) {\n\t\tif (strcmp(pat->name, ATTR_NODE_state) == 0)\n\t\t\treturn pat->value;\n\t}\n\treturn \"\";\n}\n\n/**\n * @brief\n *\treturns the comment for the node\n *\n * @param pbs - structure pointer containing node information\n *\n * @return - string\n * @retval   \"\" - Failure\n * @retval   \"value\" - Success\n *\n */\nstatic char *\nget_comment(struct batch_status *pbs)\n{\n\tstruct attrl *pat;\n\n\tfor (pat = pbs->attribs; pat; pat = pat->next) {\n\t\tif (strcmp(pat->name, ATTR_comment) == 0)\n\t\t\treturn pat->value;\n\t}\n\treturn \"\";\n}\n\n/**\n * @brief\n *\treturns an indication of whether the node is marked down\n *\n * @param pbs - structure pointer containing node information\n *\n * @return - int\n * @retval   1 - node is down\n * @retval   0 - node is not down\n *\n */\nstatic int\nis_down(struct batch_status *pbs)\n{\n\tif (strstr(get_nstate(pbs), ND_down) != NULL)\n\t\treturn 1;\n\telse\n\t\treturn 0;\n}\n\n/**\n * @brief\n *\treturns an indication of whether the node is marked offline\n *\n * @param pbs - structure pointer containing node information\n *\n * @return - int\n * @retval   1 - node is offline\n * @retval   0 - node is not offline\n *\n */\nstatic int\nis_offline(struct batch_status *pbs)\n{\n\tif (strstr(get_nstate(pbs), ND_offline) != NULL)\n\t\treturn 1;\n\telse\n\t\treturn 0;\n}\n\n/**\n * @brief\n *\tMark the node with values sent as parameters\n *\n * @param[in] con - connection handle to the server\n * @param[in] name - name of node\n * @param[in] state1 - current state\n * @param[in] op1 - batch operation corresponding to state1\n * @param[in] state2 - transition to this state\n * @param[in] op2 - batch operation corresponding to state2\n *\n * @return\tint\n * @retval\t0\t\tsuccess\n * 
@retval\tpbse error\tfailure\n *\n */\nstatic int\nmarknode(int con, char *name,\n\t char *state1, enum batch_op op1,\n\t char *state2, enum batch_op op2,\n\t char *comment)\n{\n\tchar Comment[80];\n\tstruct attropl new[3];\n\tint i;\n\tint rc;\n\n\ti = 0;\n\tif (state1 != NULL) {\n\t\tnew[i].name = ATTR_NODE_state;\n\t\tnew[i].resource = NULL;\n\t\tnew[i].value = state1;\n\t\tnew[i].op = op1;\n\t\tnew[i].next = NULL;\n\t}\n\tif (state2 != NULL) {\n\t\tif (state1 != NULL) {\n\t\t\tnew[i].next = &new[i + 1];\n\t\t\t++i;\n\t\t}\n\t\tnew[i].name = ATTR_NODE_state;\n\t\tnew[i].resource = NULL;\n\t\tnew[i].value = state2;\n\t\tnew[i].op = op2;\n\t\tnew[i].next = NULL;\n\t}\n\n\tif (comment != NULL) {\n\t\tif (state1 != NULL || state2 != NULL) {\n\t\t\tnew[i].next = &new[i + 1];\n\t\t\t++i;\n\t\t}\n\t\tsnprintf(Comment, 80, \"%s\", comment);\n\t\tnew[i].name = ATTR_comment;\n\t\tnew[i].resource = NULL;\n\t\tnew[i].value = Comment;\n\t\tnew[i].op = SET;\n\t\tnew[i].next = NULL;\n\t}\n\n\trc = pbs_manager(con, MGR_CMD_SET, MGR_OBJ_HOST, name, new, NULL);\n\tif (rc && !quiet) {\n\t\tchar *errmsg;\n\n\t\tfprintf(stderr, \"Error marking node %s - \", name);\n\t\tif ((errmsg = pbs_geterrmsg(con)) != NULL)\n\t\t\tfprintf(stderr, \"%s\\n\", errmsg);\n\t\telse\n\t\t\tfprintf(stderr, \"error: %d\\n\", pbs_errno);\n\t}\n\treturn (rc);\n}\n\n/**\n * @brief\n *\tThe main function in C - entry point\n *\n * @param[in]  argc - argument count\n * @param[in]  argv - pointer to argument array\n *\n * @return  int\n * @retval  0 - success\n * @retval  !0 - error\n */\nint\nmain(int argc, char *argv[])\n{\n\ttime_t timenow;\n\tstruct attrl *pattr = NULL;\n\tint con;\n\tchar *def_server = NULL;\n\tint errflg = 0;\n\tchar *errmsg;\n\tint i;\n\tint rc = 0;\n\textern char *optarg;\n\textern int optind;\n\tchar **pa;\n\tchar *comment = NULL;\n\tstruct batch_status *bstat = NULL;\n\tstruct batch_status *bstat_head = NULL;\n\tstruct batch_status *next_bstat = NULL;\n\tint do_vnodes = 
0;\n\tmgr_operation_t oper = LISTSP;\n\tint ret = 0;\n\tint job_summary = 0;\n\tint long_summary = 0;\n\tint format = 0;\n\tint prt_summary = 0;\n\tjson_data *json_root = NULL; /* root of json structure */\n\n\t/*test for real deal or just version and exit*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\tif (argc == 1)\n\t\terrflg = 1;\n\twhile ((i = getopt(argc, argv, \"acC:dD:F:HjlLoqrs:Sv\")) != EOF)\n\t\tswitch (i) {\n\n\t\t\tcase 'a':\n\t\t\t\tif (oper == LISTSP)\n\t\t\t\t\toper = ALL;\n\t\t\t\telse\n\t\t\t\t\terrflg = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 'c':\n\t\t\t\tif (oper == LISTSP || do_vnodes == 1)\n\t\t\t\t\toper = CLEAR;\n\t\t\t\telse\n\t\t\t\t\terrflg = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 'C':\n\t\t\t\tif (optarg && (oper == LISTSP)) {\n\t\t\t\t\toper = UPDATE_COMMENT;\n\t\t\t\t\tcomment = optarg;\n\t\t\t\t} else if (optarg && (oper == OFFLINE || oper == RESET))\n\t\t\t\t\tcomment = optarg;\n\t\t\t\telse\n\t\t\t\t\terrflg = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 'd':\n\t\t\t\tif (oper == LISTSP || do_vnodes == 1)\n\t\t\t\t\toper = DOWN;\n\t\t\t\telse\n\t\t\t\t\terrflg = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 'D':\n\t\t\t\tif (oper == LISTSP || oper == ALL || oper == LISTSPNV)\n\t\t\t\t\tdsv_delim = optarg;\n\t\t\t\telse\n\t\t\t\t\terrflg = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 'F':\n\t\t\t\tfor (format = FORMAT_DEFAULT; format < FORMAT_MAX; format++) {\n\t\t\t\t\tif (strcasecmp(optarg, output_format_names[format]) == 0) {\n\t\t\t\t\t\toutput_format = format;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (format >= FORMAT_MAX)\n\t\t\t\t\terrflg = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 'H':\n\t\t\t\tif (oper == LISTSP)\n\t\t\t\t\toper = LISTSPNV;\n\t\t\t\telse\n\t\t\t\t\terrflg = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 'j':\n\t\t\t\tif (oper == LISTSP || oper == ALL || oper == LISTSPNV)\n\t\t\t\t\tjob_summary = 1;\n\t\t\t\telse\n\t\t\t\t\terrflg = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 'l':\n\t\t\t\tif (oper == LISTSP || do_vnodes == 
1)\n\t\t\t\t\toper = LISTMRK;\n\t\t\t\telse\n\t\t\t\t\terrflg = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 'L':\n\t\t\t\tif (oper == LISTSP || oper == ALL || oper == LISTSPNV)\n\t\t\t\t\tlong_summary = 1;\n\t\t\t\telse\n\t\t\t\t\terrflg = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 'o':\n\t\t\t\tif (oper == LISTSP || do_vnodes == 1 || oper == UPDATE_COMMENT)\n\t\t\t\t\toper = OFFLINE;\n\t\t\t\telse\n\t\t\t\t\terrflg = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 'q':\n\t\t\t\tquiet = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 'r':\n\t\t\t\tif (oper == LISTSP || do_vnodes == 1 || oper == UPDATE_COMMENT)\n\t\t\t\t\toper = RESET;\n\t\t\t\telse\n\t\t\t\t\terrflg = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 's':\n\t\t\t\tdef_server = optarg;\n\t\t\t\tbreak;\n\n\t\t\tcase 'S':\n\t\t\t\tif (oper == LISTSP || oper == ALL || oper == LISTSPNV)\n\t\t\t\t\tprt_summary = 1;\n\t\t\t\telse\n\t\t\t\t\terrflg = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 'v':\n\t\t\t\tif (oper == LISTSP || oper == ALL)\n\t\t\t\t\tdo_vnodes = 1;\n\t\t\t\telse\n\t\t\t\t\terrflg = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase '?':\n\t\t\tdefault:\n\t\t\t\terrflg = 1;\n\t\t\t\tbreak;\n\t\t}\n\n\tif (errflg ||\n\t    (oper == LISTMRK && optind != argc) ||\n\t    (oper == CLEAR && optind == argc) ||\n\t    (oper == OFFLINE && optind == argc) ||\n\t    (oper == RESET && optind == argc) ||\n\t    (oper == LISTSPNV && optind == argc) ||\n\t    (oper == LISTSP && optind == argc) ||\n\t    (oper == UPDATE_COMMENT && optind == argc) ||\n\t    (prt_summary && (oper != LISTSP && oper != LISTSPNV && oper != ALL))) {\n\t\tif (!quiet)\n\t\t\tfprintf(stderr,\n\t\t\t\t\"usage:\\t%s [-{o|r}][-C comment][-s server] host host ...\\n\"\n\t\t\t\t\"\\t%s -l [-s server]\\n\"\n\t\t\t\t\"\\t%s [-s server] -v vnode vnode ...\\n\"\n\t\t\t\t\"\\t%s -a[v][S[j][L]][-F format][-D delim][-s server]\\n\"\n\t\t\t\t\"\\t%s -[H][S[j][L]][-F format][-D delim] host host ...\\n\"\n\t\t\t\t\"\\t%s --version\\n\\n\",\n\t\t\t\targv[0], argv[0], argv[0], argv[0], argv[0], argv[0]);\n\t\texit(1);\n\t}\n\n\tif 
(def_server == NULL) {\n\t\tdef_server = pbs_default();\n\t\tif (def_server == NULL)\n\t\t\tdef_server = \"\";\n\t}\n\n\tif (CS_client_init() != CS_SUCCESS) {\n\t\tfprintf(stderr, \"pbsnodes: unable to initialize security library.\\n\");\n\t\texit(1);\n\t}\n\n\tcon = cnt2server(def_server);\n\tif (con <= 0) {\n\t\tif (!quiet)\n\t\t\tfprintf(stderr, \"%s: cannot connect to server %s, error=%d\\n\",\n\t\t\t\targv[0], def_server, pbs_errno);\n\t\tCS_close_app();\n\t\texit(1);\n\t}\n\n\t/* if do_vnodes is set, get status of all virtual nodes (vnodes) */\n\t/* else if oper is ALL then get status of all hosts              */\n\n\tif ((do_vnodes == 1) || (oper == ALL) ||\n\t    (oper == DOWN) || (oper == LISTMRK) || (oper == LISTSPNV)) {\n\t\tif (do_vnodes || oper == LISTSPNV)\n\t\t\tbstat_head = pbs_statvnode(con, \"\", NULL, NULL);\n\t\telse\n\t\t\tbstat_head = pbs_stathost(con, \"\", NULL, NULL);\n\n\t\tif (bstat_head == NULL) {\n\t\t\tif (pbs_errno) {\n\t\t\t\tif (!quiet) {\n\t\t\t\t\tif ((errmsg = pbs_geterrmsg(con)) != NULL)\n\t\t\t\t\t\tfprintf(stderr, \"%s: %s\\n\", argv[0], errmsg);\n\t\t\t\t\telse\n\t\t\t\t\t\tfprintf(stderr, \"%s: Error %d\\n\", argv[0], pbs_errno);\n\t\t\t\t}\n\t\t\t\texit(1);\n\t\t\t} else {\n\t\t\t\tif (!quiet)\n\t\t\t\t\tfprintf(stderr, \"%s: No nodes found\\n\", argv[0]);\n\t\t\t\texit(0);\n\t\t\t}\n\t\t}\n\t}\n\t/* adding prologue to json output. 
*/\n\tif (output_format == FORMAT_JSON) {\n\t\ttimenow = time(0);\n\t\tif ((json_root = pbs_json_create_object()) == NULL) {\n\t\t\tfprintf(stderr, \"pbsnodes: json error\\n\");\n\t\t\texit(1);\n\t\t}\n\t\tif (pbs_json_insert_number(json_root, \"timestamp\", (double) timenow)) {\n\t\t\tfprintf(stderr, \"pbsnodes: json error\\n\");\n\t\t\texit(1);\n\t\t}\n\t\tif (pbs_json_insert_string(json_root, \"pbs_version\", PBS_VERSION)) {\n\t\t\tfprintf(stderr, \"pbsnodes: json error\\n\");\n\t\t\texit(1);\n\t\t}\n\t\tif (pbs_json_insert_string(json_root, \"pbs_server\", def_server)) {\n\t\t\tfprintf(stderr, \"pbsnodes: json error\\n\");\n\t\t\texit(1);\n\t\t}\n\t\tif ((json_nodes = pbs_json_create_object()) == NULL) {\n\t\t\tfprintf(stderr, \"pbsnodes: json error\\n\");\n\t\t\texit(1);\n\t\t}\n\t\tpbs_json_insert_item(json_root, \"nodes\", json_nodes);\n\t}\n\tswitch (oper) {\n\n\t\tcase DOWN:\n\n\t\t\t/*\n\t\t\t * loop through the list of nodes returned above:\n\t\t\t *   if node is up and is in argv list, mark it down;\n\t\t\t *   if node is down and not in argv list, mark it up;\n\t\t\t * for all changed nodes, send in request to server\n\t\t\t */\n\n\t\t\tfor (bstat = bstat_head; bstat; bstat = bstat->next) {\n\t\t\t\tfor (pa = argv + optind; *pa; pa++) {\n\t\t\t\t\tif (cmp_node_name(*pa, bstat->name) == 0) {\n\t\t\t\t\t\tif (is_down(bstat) == 0) {\n\t\t\t\t\t\t\tret = marknode(con, bstat->name,\n\t\t\t\t\t\t\t\t       ND_down, INCR, NULL, INCR, comment);\n\t\t\t\t\t\t\tif (ret > 0)\n\t\t\t\t\t\t\t\trc = ret;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (*pa == NULL) {\n\n\t\t\t\t\t/* node not in list, if down now, set up */\n\t\t\t\t\tif (is_down(bstat) == 1)\n\t\t\t\t\t\tret = marknode(con, bstat->name,\n\t\t\t\t\t\t\t       ND_down, DECR, NULL, DECR, comment);\n\t\t\t\t\tif (ret > 0)\n\t\t\t\t\t\trc = ret;\n\t\t\t\t}\n\t\t\t}\n\t\t\tpbs_statfree(bstat_head);\n\n\t\t\tbreak;\n\n\t\tcase CLEAR:\n\n\t\t\t/* clear DOWN and OFF_LINE from specified 
nodes\t\t*/\n\n\t\t\tfor (pa = argv + optind; *pa; pa++) {\n\t\t\t\tret = marknode(con, *pa, ND_offline, DECR, ND_down, DECR, comment);\n\t\t\t\tif (ret > 0)\n\t\t\t\t\trc = ret;\n\t\t\t}\n\n\t\t\tbreak;\n\n\t\tcase RESET:\n\n\t\t\t/* clear OFF_LINE from specified nodes\t\t\t*/\n\n\t\t\tfor (pa = argv + optind; *pa; pa++) {\n\t\t\t\tret = marknode(con, *pa, ND_offline, DECR, NULL, DECR, comment);\n\t\t\t\tif (ret > 0)\n\t\t\t\t\trc = ret;\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase OFFLINE:\n\n\t\t\t/* set OFF_LINE on specified nodes\t\t\t*/\n\t\t\tfor (pa = argv + optind; *pa; pa++) {\n\t\t\t\tret = marknode(con, *pa, ND_offline, INCR, NULL, INCR, comment);\n\t\t\t\tif (ret > 0)\n\t\t\t\t\trc = ret;\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase UPDATE_COMMENT:\n\n\t\t\t/*just add comment to specified nodes*/\n\t\t\tfor (pa = argv + optind; *pa; pa++) {\n\t\t\t\tif (strcmp(*pa, \"\") == 0)\n\t\t\t\t\tcontinue;\n\t\t\t\tret = marknode(con, *pa, NULL, INCR, NULL, INCR, comment);\n\t\t\t\tif (ret > 0)\n\t\t\t\t\trc = ret;\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase ALL:\n\n\t\t\tif (prt_summary) {\n\t\t\t\tif (prt_node_summary(def_server, bstat_head, job_summary, long_summary)) {\n\t\t\t\t\tfprintf(stderr, \"pbsnodes: out of memory\\n\");\n\t\t\t\t\treturn 1;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tfor (bstat = bstat_head; bstat; bstat = bstat->next)\n\t\t\t\t\tprt_node(bstat);\n\t\t\t}\n\t\t\tif (output_format == FORMAT_JSON) {\n\t\t\t\tif (pbs_json_print(json_root, stdout))\n\t\t\t\t\tfprintf(stderr, \"json error\\n\");\n\t\t\t\tpbs_json_delete(json_root);\n\t\t\t}\n\t\t\tpbs_statfree(bstat_head);\n\n\t\t\tbreak;\n\n\t\tcase LISTMRK:\n\n\t\t\t/* list any node that is marked DOWN or OFF_LINE\t*/\n\t\t\tfor (bstat = bstat_head; bstat; bstat = bstat->next) {\n\t\t\t\tif (is_down(bstat) || is_offline(bstat)) {\n\t\t\t\t\tprintf(\"%-20s %s %s\\n\", bstat->name,\n\t\t\t\t\t       get_nstate(bstat), 
show_nonprint_chars(get_comment(bstat)));\n\t\t\t\t}\n\t\t\t}\n\t\t\tpbs_statfree(bstat_head);\n\n\t\t\tbreak;\n\n\t\tcase LISTSP:\n\n\t\t\t/* list the specified nodes or vnodes */\n\t\t\tfor (pa = argv + optind; *pa; pa++) {\n\t\t\t\tif (do_vnodes)\n\t\t\t\t\tbstat = pbs_statvnode(con, *pa, NULL, NULL);\n\t\t\t\telse\n\t\t\t\t\tbstat = pbs_stathost(con, *pa, NULL, NULL);\n\t\t\t\tif (!bstat) {\n\t\t\t\t\tif (pbs_errno != 0) {\n\n\t\t\t\t\t\tif (output_format == FORMAT_JSON) {\n\t\t\t\t\t\t\tjson_data *json_error;\n\t\t\t\t\t\t\tif ((json_error = pbs_json_create_object()) == NULL) {\n\t\t\t\t\t\t\t\tfprintf(stderr, \"pbsnodes: json error\\n\");\n\t\t\t\t\t\t\t\texit(1);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tpbs_json_insert_item(json_nodes, *pa, json_error);\n\t\t\t\t\t\t\tif (pbs_json_insert_string(json_error, *pa, pbs_geterrmsg(con))) {\n\t\t\t\t\t\t\t\tfprintf(stderr, \"pbsnodes: json error\\n\");\n\t\t\t\t\t\t\t\texit(1);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tfprintf(stderr, \"Node: %s,  Error: %s\\n\", *pa, pbs_geterrmsg(con));\n\t\t\t\t\t\t\trc = 1;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tif (prt_summary) {\n\t\t\t\t\t\tif (prt_node_summary(def_server, bstat, job_summary, long_summary)) {\n\t\t\t\t\t\t\tfprintf(stderr, \"pbsnodes: out of memory\\n\");\n\t\t\t\t\t\t\texit(1);\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tprt_node(bstat);\n\t\t\t\t\t}\n\t\t\t\t\tpbs_statfree(bstat);\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (output_format == FORMAT_JSON) {\n\t\t\t\tif (pbs_json_print(json_root, stdout))\n\t\t\t\t\tfprintf(stderr, \"json error\\n\");\n\t\t\t\tpbs_json_delete(json_root);\n\t\t\t}\n\t\t\tif (do_vnodes) {\n\t\t\t\tpbs_statfree(bstat_head);\n\t\t\t}\n\n\t\t\tbreak;\n\n\t\tcase LISTSPNV:\n\n\t\t\t/*list nodes and vnodes associated with them.*/\n\t\t\tif (argc - optind) {\n\t\t\t\tfor (bstat = bstat_head; bstat; bstat = bstat->next) {\n\t\t\t\t\tint matched;\n\n\t\t\t\t\tmatched = 0;\n\t\t\t\t\tpa = argv + optind;\n\t\t\t\t\twhile 
(*pa) {\n\t\t\t\t\t\tif (pa == NULL) {\n\t\t\t\t\t\t\tpa++;\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tpattr = bstat->attribs;\n\t\t\t\t\t\twhile (pattr) {\n\t\t\t\t\t\t\tif (pattr->resource) {\n\t\t\t\t\t\t\t\tif (strcmp(pattr->resource, \"host\") == 0) {\n\t\t\t\t\t\t\t\t\tif (strcmp(pattr->value, *pa) == 0)\n\t\t\t\t\t\t\t\t\t\tmatched = 1;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif (matched)\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\tpattr = pattr->next;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (matched) {\n\t\t\t\t\t\t\tif (prt_summary) {\n\t\t\t\t\t\t\t\tnext_bstat = bstat->next;\n\t\t\t\t\t\t\t\tbstat->next = NULL;\n\t\t\t\t\t\t\t\tprt_node_summary(def_server, bstat, job_summary, long_summary);\n\t\t\t\t\t\t\t\tbstat->next = next_bstat;\n\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\tprt_node(bstat);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tmatched = 0;\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tpa++;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tpa = argv + optind;\n\t\t\twhile (*pa) {\n\t\t\t\tbstat = pbs_stathost(con, *pa, NULL, NULL);\n\t\t\t\tif (!bstat) {\n\t\t\t\t\tif (pbs_errno != 0) {\n\t\t\t\t\t\tif (output_format == FORMAT_JSON) {\n\t\t\t\t\t\t\tjson_data *json_error;\n\t\t\t\t\t\t\tif ((json_error = pbs_json_create_object()) == NULL) {\n\t\t\t\t\t\t\t\tfprintf(stderr, \"pbsnodes: json error\\n\");\n\t\t\t\t\t\t\t\texit(1);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tpbs_json_insert_item(json_nodes, *pa, json_error);\n\t\t\t\t\t\t\tif (pbs_json_insert_string(json_error, *pa, pbs_geterrmsg(con))) {\n\t\t\t\t\t\t\t\tfprintf(stderr, \"pbsnodes: json error\\n\");\n\t\t\t\t\t\t\t\texit(1);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tfprintf(stderr, \"Node: %s,  Error: %s\\n\", *pa, pbs_geterrmsg(con));\n\t\t\t\t\t\t\trc = 1;\n\t\t\t\t\t\t}\n\t\t\t\t\t} else\n\t\t\t\t\t\tpbs_statfree(bstat);\n\t\t\t\t}\n\t\t\t\tpa++;\n\t\t\t}\n\t\t\tif (output_format == FORMAT_JSON) {\n\t\t\t\tif (pbs_json_print(json_root, stdout))\n\t\t\t\t\tfprintf(stderr, \"json 
error\\n\");\n\t\t\t\tpbs_json_delete(json_root);\n\t\t\t}\n\t\t\tpbs_statfree(bstat_head);\n\t\t\tbreak;\n\t}\n\t(void) pbs_disconnect(con);\n\treturn (rc ? 1 : 0);\n}\n"
  },
  {
    "path": "src/cmds/pbsrun.in",
    "content": "#!/bin/sh\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n#\n\nif [ $# -eq 1 ] && [ $1 = \"--version\" ]; then\n   echo pbs_version = @PBS_VERSION@\n   exit 0\nfi\n\n# Global variables\n\n# build_arg_counts: given a list of options and their arguments, returns\n# a space-separated list of elements of the form:\n#\n#   -optionN:<# of arguments>\n#\n# For instance,\n#\tgiven: -v -gm-wait <n>\"\n#\toutput: -v:0 -gm-wait:1\n# NOTE: if error encountered parsing the arguments, return value is \"\".\n\nbuild_arg_counts ()\n{\n\n  num_args=0\n  find_opt=1\n  arg_counts=\"\"\n  while [ $# -gt 0 ] ; do\n\n    first_char=`printf \"%s\" \"$1\" | cut -b1`\n    last_char=`printf \"%s\" \"$1\" | awk '{printf \"%s\", substr($1,length($1),1)}'`\n\n    if [ $find_opt -eq 1 ] ; then\n\n\n\tif [ \"$first_char\" = \"-\" ] ; then\n\n\t  if [ \"`printf \"%s\" \"$1\" | sed $SED_OPT_XREGEXP 's/^(-|--)//g' | cut -b1`\" = \"-\" ] ; then\n\t\techo \"\"\n\t\treturn\n\t  fi\n\n    \t  find_opt=0\n          num_args=0\n          arg_counts=\"$arg_counts $1\"\n\t  shift\n\t  if [ \"$1\" = \"\" ] ; then\n             arg_counts=\"${arg_counts}:${num_args}\"\n\t  fi\n\telse\n\t  echo \"\"\n\t  return\n\tfi\n\n     else\n\n\tif [ \"$first_char\" = \"<\" ] && [ \"$last_char\" = \">\" ] ; then\n      \t  num_args=`expr $num_args + 1`\n\t  shift\n\t  if [ \"$1\" = \"\" ] ; then\n             arg_counts=\"${arg_counts}:${num_args}\"\n\t  fi\n        
elif [ \"$first_char\" = \"-\" ] ; then\n          arg_counts=\"${arg_counts}:${num_args}\"\n\t  find_opt=1\n\telse\n\t  echo \"\"\n\t  return\n\tfi\n     fi\n\n  done\n\n  echo \"$arg_counts\"\n\n}\n\n# Given an option, and an options list that was transformed by\n# build_arg_counts(), returns the # of arguments of the option that\n# was matched in the options list. Otherwise, -1 is returned.\n# For instance,\n#\tgiven: -v -x:0 -opt:1 -z:0 -v:0 -gm-buffers:1 -gx:gb:3 --totalnum=*:0\n#\treturn:  0 since \"-v\" (first argument) has no arguments (-v:0)\nmatch_opt ()\n{\n\n  opt_given=$1\n  shift\n\n  if [ \"`printf \"%s\" \"$opt_given\" | cut -b1`\" != \"-\" ] ; then\n\techo -1\n\treturn\n  fi\n\n  while [ $# -gt 0 ] ; do\n\n\topt_2match=`printf \"%s\" \"$1\" | awk -F\":\" '{\n             for(i=1;i<NF;i++) {\n                if( i == (NF-1) ) {\n                  printf \"%s\",$i\n                } else {\n                  printf \"%s:\",$i\n                }\n             } }'`\n\n    \topt_num_args=`printf \"%s\" \"$1\" | awk -F\":\" '{ print $NF }'`\n\n\t# wcard_str is for egrep matching and not glob\n\twcard_str=\"\"\n\tif [ `printf \"%s\" $opt_2match | egrep -c \"\\*\"` -ne 0 ]  ; then\n\t\twcard_str=`printf \"%s\" $opt_2match | sed 's/*/.+/g'`\n\tfi\n\n\t# if opt_given contains wildcard \"*\", then do egrep matching\n\tif [ \"$opt_2match\" = \"$opt_given\" ]  || \\\n\t   ([ \"$wcard_str\" != \"\" ] && \\\n\t    [ `printf \"%s\" $opt_given | egrep -c -e \"$wcard_str\"` -ne 0 ]) ; then\n\t\techo $opt_num_args\n\t\treturn\n\tfi\n\tshift\n\n  done\n\n  echo -1\n\n}\n\n# given a list of options and values as transformed by\n# build_arg_counts(), this returns a space-separated, list of single\n# option letters. 
For instance,\n#\tgiven: -v:0 -opt:1 -x:0 -y:2 -z:0 -gm-buffers:1 -gx:gb:3\n#\treturn: -v -x -z\n\nbuild_single_letter_noargs_opts ()\n{\n  single_letter_opts=\"\"\n  while [ $# -gt 0 ] ; do\n\n\topt_2match=`printf \"%s\" \"$1\" | awk -F\":\" '{\n             for(i=1;i<NF;i++) {\n                if( i == (NF-1) ) {\n                  printf \"%s\",$i\n                } else {\n                  printf \"%s:\",$i\n                }\n             } }'`\n        opt_2match_len=`printf \"%s\" \"$opt_2match\" | sed $SED_OPT_XREGEXP 's/^(-|--)//g' | \\\n\t\t\t\t\tawk '{print length($1)}'`\n    \topt_num_args=`printf \"%s\" \"$1\" | awk -F\":\" '{ print $NF }'`\n\n\tif [ $opt_2match_len -eq 1 ] && [ $opt_num_args -eq 0 ] ; then\n\t\tsingle_letter_opts=\"$single_letter_opts $opt_2match\"\n\tfi\n\tshift\n\n  done\n\n  printf \"%s\" \"$single_letter_opts\"\n\n}\n\n# Given an option, and an options list, return 0 if the given option\n# matches one of the items on the options list.\n# For instance,\n#\tgiven: \"-v\" -x -opt happy -z -v --totalnum=*\n#\treturn 0 since \"-v\" (first argument) is in the  succeeding\n#\targuments list.\nmatch_opt2 ()\n{\n\topt_given=$1\n\tshift\n\n\tif [ \"`printf \"%s\" \"$opt_given\" | cut -b1`\" != \"-\" ] ; then\n\t\techo -1\n\t\treturn\n\tfi\n\n   \twhile [ $# -gt 0 ] ; do\n\n\t\tif [ \"`printf \"%s\" \"$1\" | cut -b1`\" != \"-\" ] ; then\n\t\t\tshift\n\t\t\tcontinue\n\t\tfi\n\n\t\t# wcard_str is for egrep matching and not glob\n\t\twcard_str=\"\"\n\t\tif [ `printf \"%s\" $1 | egrep -c \"\\*\"` -ne 0 ]  ; then\n\t\t\twcard_str=`printf \"%s\" $1 | sed 's/*/.+/g'`\n\t\tfi\n\n\n\t\tif [ \"$opt_given\" = \"$1\" ] || \\\n\t\t   ([ \"$wcard_str\" != \"\" ] && \\\n\t\t    [ `printf \"%s\" $opt_given | egrep -c -e \"$wcard_str\"` -ne 0 ]) ; then\n\t\t\techo 0\n\t\t\treturn\n\t\tfi\n\t\tshift\n\tdone\n\techo -1\n}\n\n# Given an input list of single letter/no option arguments (SLNOA) (arg #1),\n# a valid options list that was transformed by 
build_arg_counts() (arg #2),\n# and a list of actual options and arguments to parse (args #3..#N),\n# return a new list such that any \"combined\" single letter, no option arguments\n# appearing in the SLNOA list are separated out. For instance,\n# given: \"-v -x -y -z -a -b\" \"-v:0 -x:0 -y:0 -z:0 -opt:1\" -v -opt happy -xyz\n# output: -v -opt happy -x -y -z\nexpand_args ()\n{\n\toptions_single_letter=$1\n\toptions_valid=$2\n\tshift 2\n\n\toptions_to_return=\"\"\n   \twhile [ $# -gt 0 ] ; do\n    \t  first_char=`printf \"%s\" $1 | cut -b1`\n\t  second_char=`printf \"%s\" $1 | cut -b2,2`\n\n\t  if [ \"$first_char\"  = \"-\" ] ; then\n\n\t\tif [ \"$second_char\" = \"-\" ] ; then\n\t\t\tprefix=\"--\"\n\t\telse\n\t\t\tprefix=\"-\"\n\t\tfi\n\t  \toption_str=\"`printf \"%s\" \"$1\" | sed $SED_OPT_XREGEXP 's/^(-|--)//g'`\"\n\t  \toption_str_len=`printf \"%s\" \"$option_str\" | awk '{print length($1)}'`\n\n\t\ti=1\n\t  \texp_option_str=\"\"\n\t\twhile [ $i -le $option_str_len ] ; do\n\t\t\texp_option_str=\"$exp_option_str ${prefix}`printf \"%s\" \"$option_str\" | cut -b${i},${i}`\"\n\t\t\ti=`expr $i + 1`\n\t\tdone\n\n\t\tmatch_all=1\n\t\tfor opt in $exp_option_str ; do\n\t\t   if [ `match_opt2 $opt $options_single_letter` -eq -1 ] ; then\n\t\t\tmatch_all=0\n\t\t\tbreak\n\t\t   fi\n\t\tdone\n\n\t\tif [ $match_all -eq 1 ] && \\\n\t\t   [ `match_opt $1 $options_valid` -eq -1 ] ; then\n\t\t\toptions_to_return=\"$options_to_return $exp_option_str\"\n\t\telse\n\t\t\toptions_to_return=\"$options_to_return $1\"\n\t\tfi\n\n\t  else\n\t\toptions_to_return=\"$options_to_return $1\"\n\t  fi\n\t  shift\n\tdone\n\tprintf \"%s\" \"$options_to_return\"\n}\n\n#####################################################\n# MAIN\n#####################################################\n\n# Returns the option in sed for doing extended regular expression.\n# This returns \"-r\", \"-E\", or \"\" if no option could be found.\n\nSED_OPT_XREGEXP=\"\"\nfor e in \"-r\" \"-E\" ; do\n\tsed $e 's/^(-|--)//g' 
/dev/null 2>/dev/null >/dev/null\n\tif [ $? -eq 0 ]; then\n\t\tSED_OPT_XREGEXP=\"$e\"\n\t\tbreak\n\tfi\ndone\n\nif [ \"$SED_OPT_XREGEXP\" = \"\" ] ; then\n\techo \"Could not find option to sed for doing extended regular expressions.\"\n\texit 1\nfi\n\n\n# We need to get name of the actual binary, and not some link\n# that resulted from wrapping\nif [ -h $0 ] ; then\n   realpath=`ls -l $0 | awk -F \"->\" '{print $2}'| tr -d ' '`\n   name=`basename $realpath`\nelse\n   name=`basename $0`\nfi\n\n. ${PBS_CONF_FILE:-@PBS_CONF_FILE@}\nexport PBS_TMPDIR=\"${PBS_TMPDIR:-${TMPDIR:-/var/tmp}}\"\nexport PATH=$PATH:${PBS_EXEC}/bin\n\nPBS_LIB_PATH=${PBS_EXEC}/lib\nif [ ! -d ${PBS_LIB_PATH} -a -d ${PBS_EXEC}/lib64 ] ; then\n\tPBS_LIB_PATH=${PBS_EXEC}/lib64\nfi\n\nif [ -h ${PBS_LIB_PATH}/MPI/${name}.link ] ; then\n   mpirun=`ls -l ${PBS_LIB_PATH}/MPI/${name}.link | awk -F \"->\" '{print $2}'| tr -d ' '`\n   if [ ! -x \"$mpirun\" ] ; then\n\techo \"mpirun=$mpirun is not executable!\"\n\texit 127\n\n   fi\nelse\n   echo \"No mpirun link found under ${PBS_LIB_PATH}/MPI/$name.link !\"\n   echo \"Please run pbsrun_wrap to create the link\"\n   exit 127\nfi\n\n# let's source the initialization script\nmpirun_location=`dirname $mpirun`\n\nif [ -s ${PBS_LIB_PATH}/MPI/${name}.init ] ; then\n   . 
${PBS_LIB_PATH}/MPI/${name}.init\nelse\n   echo \"No ${PBS_LIB_PATH}/MPI/${name}.init file exists!\"\n   exit 127\nfi\n\nif [ \"${PBS_NODEFILE:-XX}\" = \"XX\" ]; then\n   if [ \"$name\" != \"mpirun\" ]; then\n      echo \"$name: Warning, not running under PBS\"\n\n      if [ ${strict_pbs:=0} -eq 1 ] ; then\n\t\techo \"$name: exiting since strict_pbs is enabled; execute only in PBS\"\n\t\texit 1\n      fi\n   fi\n\n   # parse arguments looking for any quoted arguments\n   # to be preserved.\n   # Use 'pres_pos_params' in place of $*\n\n   pres_pos_params=\"\"\n   while [ $# -gt 0 ]\n   do\n        nwords=`echo $1 | wc -w 2>/dev/null`\n        if [ \"$nwords\" = \"\" ] ; then\n           nwords=1\n        fi\n        if [ $nwords -gt 1 ] ; then\n                pres_pos_params=\"${pres_pos_params} \\\"$1\\\"\"\n        else\n                pres_pos_params=\"${pres_pos_params} $1\"\n        fi\n        shift\n   done\n\n   eval $mpirun $pres_pos_params\n   exit $?\nfi\n\nif [ \"$option_to_configfile\" != \"\" ] ; then\n\tif [ `printf \"%s\" \"$option_to_configfile\" | awk '{print NF}'` -ne 2 ] ; then\n\t\techo \"option_to_configfile must contain a single option and its argument: Please fix in ${PBS_LIB_PATH}/MPI/${name}.init!\"\n   \t\texit 127\n\tfi\nfi\n\n_options_to_retain=\"`build_arg_counts $options_to_retain`\"\n\nif [ \"$_options_to_retain\" = \"\" ] && [ \"$options_to_retain\" != \"\" ] ; then\n   echo \"Encountered bad options_to_retain=$options_to_retain in ${PBS_LIB_PATH}/MPI/${name}.init!\"\n   exit 127\nfi\n\n_options_to_ignore=\"`build_arg_counts $options_to_ignore`\"\nif [ \"$_options_to_ignore\" = \"\" ] && [ \"$options_to_ignore\" != \"\" ] ; then\n   echo \"Encountered bad options_to_ignore=$options_to_ignore in ${PBS_LIB_PATH}/MPI/${name}.init!\"\n   exit 127\nfi\n\n_options_to_transform=\"`build_arg_counts $options_to_transform`\"\nif [ \"$_options_to_transform\" = \"\" ] && \\\n\t\t\t[ \"$options_to_transform\" != \"\" ] ; then\n   echo \"Encountered 
bad options_to_transform=$options_to_transform in ${PBS_LIB_PATH}/MPI/${name}.init!\"\n   exit 127\nfi\n\n_options_with_single_letter=`build_single_letter_noargs_opts \\\n\t\t\t\t$_options_to_retain \\\n\t\t\t\t$_options_to_ignore \\\n\t\t\t\t$_options_to_transform`\n\n\nconfigfile=\"\"\nconfigfile_new=\"${PBS_TMPDIR}/pbsrun_config$$\"\nin_configfile=0\n\n# Signals to catch\ntrap '(end_action $mpirun_location); /bin/rm -f ${configfile_new}; exit 1;'  1 2 3 15\n\nnum_lines=1\nline=1\n_option_list_global=\"\"\nwhile [ $line -le $num_lines ] ; do\n\n\n   if [ $in_configfile -eq 1 ] ; then\n\tcmd_seg=`eval sed -n '${line}p' $configfile`\n\n\t# skip past comment lines\n\twhile [ `printf \"%s\" \"$cmd_seg\" | egrep -c \"^#\"` -ne 0 ] ; do\n\t\tline=`expr $line + 1`\n\t\tif [ $line -gt $num_lines ] ; then\n\t\t\t# go to top loop\n\t\t\tbreak 2\n\t\tfi\n\t\tcmd_seg=`eval sed -n '${line}p' $configfile`\n\tdone\n\tset -- $cmd_seg\n   fi\n\n   # parse arguments looking for any quoted arguments\n   # to preserved.\n   # Use 'pres_pos_params' in place of $*\n\n   pres_pos_params=\"\"\n   while [ $# -gt 0 ]\n   do\n        nwords=`echo $1 | wc -w 2>/dev/null`\n        if [ \"$nwords\" = \"\" ] ; then\n           nwords=1\n        fi\n        if [ $nwords -gt 1 ] ; then\n                pres_pos_params=\"${pres_pos_params} \\\"$1\\\"\"\n        else\n                pres_pos_params=\"${pres_pos_params} $1\"\n        fi\n        shift\n   done\n\n   eval set -- `expand_args \"$_options_with_single_letter\" \\\n\t   \"$_options_to_retain $_options_to_ignore $_options_to_transform\" $pres_pos_params`\n\n   option_list=\"\"\n   num_retain=-1\n   num_ignore=-1\n   num_transform=-1\n   prog_args=\"\"\n   in_prog_args=0\n   while [ $# -gt 0 ] ; do\n\n\t# first time matching configfile option\n\tif [ $in_configfile -eq 0 ] &&\n\t\t[ `match_opt2 $1 $option_to_configfile` -eq 0 ] ; then\n\n\t\tshift\n\t\tconfigfile=\"$1\"\n\n\t\tif [ \"$configfile\" = \"\" ] ; then\n\t\t\techo \"$name: No 
configfile value given!\"\n\t\t\t(end_action)\n\t\t\texit 1\n\t\tfi\n\t\tif ! [ -s \"$configfile\" ] ; then\n\t\t\techo \"$name: File $configfile does not exist or is zero length!\"\n\t\t\t(end_action)\n\t\t\texit 1\n\t\tfi\n\n\t\t# reset flags and counters\n\t\tin_configfile=1\n\t\tnum_lines=`wc -l $configfile | awk '{print $1}'`\n\t\tline=1\n   \t\tcat /dev/null > ${configfile_new}\n\n\t\t# whatever command line segment we've encountered\n\t\t# previously will be considered \"global\" arguments.\n   \t\t_option_list_global=`configfile_cmdline_action $option_list`\n\n\t\tshift\n\t\tif [ \"$1\" != \"\" ] ; then\n\t\t\techo \"$name: Extra options after ${option_to_configfile}!\"\n\t\t\t(end_action)\n\t\t\texit 1\n\t\tfi\n\t\t# first line will contain -machinefile option\n\t\t# which maps processes to host\n\t\tconfigfile_first=`configfile_firstline_action`\n\t\tif [ \"$configfile_first\" != \"\" ] ; then\n   \t\t\techo \"$configfile_first\" >> ${configfile_new}\n\t\tfi\n\t\t# go to top loop\n\t\tcontinue 2\n\tfi\n\n\tif [ $in_prog_args -eq 1 ] ; then\n\n\t\t# handle multiple command line segments\n\t\tif [ \"$1\" = \":\" ] ; then\n\t\t\toption_list=\"$option_list :\"\n\t\t\tin_prog_args=0\n\t\t\tshift\n\t\t\tif [ \"$1\" = \":\" ] ; then\n\t\t\t   echo \"$name: encountered empty command args segment!\"\n\t\t\t   (end_action $mpirun_location)\n\t\t    \t   /bin/rm -f ${configfile_new}\n\t\t\t   exit 1\n\t\t\tfi\n\t\telse\n\t\t\toption_list=\"$option_list $1\"\n\t\t\tshift\n\t\tfi\n\t\tcontinue\n\tfi\n\n\tif [ `match_opt2 $1 $options_to_fail` -eq 0 ] ; then\n\t\techo \"$name: option $1 is not allowed!\"\n\t\t(end_action $mpirun_location)\n\t\t/bin/rm -f ${configfile_new}\n\t\texit 1\n\tfi\n\n\tif [ `match_opt2 $1 $options_with_another_form` -eq 0 ] ; then\n\t\techo \"$name: warning: $1 has multiple forms; the one supported is listed in \\\"$options_with_another_form\\\"\"\n\tfi\n\n\tnum_retain=`match_opt $1 $_options_to_retain`\n\tif [ $num_retain -ge 0 ] ; 
then\n\n\t\toption_list=\"$option_list $1\"\n\n\t\tloop=$num_retain\n\t\twhile [ $loop -gt 0 ] ; do\n\t\t\tshift\n\t\t\tnwords=`echo $1 | wc -w 2>/dev/null`\n\t\t\tif [ \"$nwords\" = \"\" ] ; then\n\t\t\t\tnwords=1\n\t\t\tfi\n\t\t\tif [ $nwords -gt 1 ] ; then\n\t\t\t\toption_list=\"$option_list \\\"$1\\\"\"\n\t\t\telse\n\t\t\t\toption_list=\"$option_list $1\"\n\t\t\tfi\n\t\t\tloop=`expr $loop - 1`\n\t\tdone\n\n\t\tshift\n\t\tcontinue\n\tfi\n\n\tnum_ignore=`match_opt $1 $_options_to_ignore`\n\tif [ $num_ignore -ge 0 ] ; then\n\t\techo \"$name: warning: ignoring option $1\"\n\t\tloop=$num_ignore\n\t\twhile [ $loop -gt 0 ] ; do\n\t\t\tshift\n\t\t\tloop=`expr $loop - 1`\n\t\tdone\n\t\tshift\n\t\tcontinue\n\tfi\n\n\tnum_transform=`match_opt $1 $_options_to_transform`\n\tif [ $num_transform -ge 0 ] ; then\n\t\tloop=$num_transform\n\t\ttransform_list=\"$transform_list $1\"\n\n\t\twhile [ $loop -gt 0 ] ; do\n\t\t\tshift\n\t\t\tnwords=`echo $1 | wc -w 2>/dev/null`\n\t\t\tif [ \"$nwords\" = \"\" ] ; then\n\t\t\t\tnwords=1\n\t\t\tfi\n\t\t\tif [ $nwords -gt 1 ] ; then\n\t\t\t\ttransform_list=\"$transform_list \\\"$1\\\"\"\n\t\t\telse\n\t\t\t\ttransform_list=\"$transform_list $1\"\n\t\t\tfi\n\t\t\tloop=`expr $loop - 1`\n\t\tdone\n\n\t\toption_list=\"$option_list `transform_action $transform_list`\"\n\tfi\n\n\tif [ $num_retain -eq -1 ] && [ $num_ignore -eq  -1 ] && \\\n\t\t\t\t\t[ $num_transform -eq -1 ] ; then\n\t\tif [ \"`printf \"%s\" $1 | cut -b1`\" = \"-\" ] ; then\n\t\t\techo \"$name: option $1 is not recognized!\"\n\t\t\techo \"$name: Please update ${PBS_LIB_PATH}/MPI/${name}.init\"\n\t\t\t(end_action $mpirun_location)\n\t\t\t/bin/rm -f ${configfile_new}\n\t\t\texit 127\n\t\telse\n\t\t\toption_list=\"$option_list $pbs_attach $options_to_pbs_attach $1\"\n\t\t\tin_prog_args=1\n\t\tfi\n\tfi\n      shift\n#  inner loop\n   done\n\n   if [ $in_configfile -eq 1 ] ; then\n\tif [ \"$option_list\" != \"\" ] ; then\n   \t\techo \"$option_list\" >> ${configfile_new}\n\tfi\n   else\n 
  \t_option_list=`evaluate_options_action $option_list`\n   fi\n\n   line=`expr $line + 1`\n\n# top loop\ndone\n\n(boot_action $mpirun_location)\nif [ $? -eq 0 ] ; then\n\tif [ $in_configfile -eq 1 ] ; then\n\t\teval $mpirun $_option_list_global -configfile ${configfile_new}\n\telse\n\t\teval $mpirun $_option_list\n\tfi\nfi\nret=$?\n\n(end_action $mpirun_location)\n/bin/rm -f ${configfile_new}\nexit $ret\n"
  },
  {
    "path": "src/cmds/pbsrun_unwrap.in",
    "content": "#!/bin/sh\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n#\n\nif [ $# -eq 1 ] && [ $1 = \"--version\" ]; then\n   echo pbs_version = @PBS_VERSION@\n   exit 0\nfi\n\nexec_cmd () {\n  $* 2>/dev/null\n  if [ $? 
-ne 0 ] ; then\n\techo \"$progname: FAILED:  \\\"$*\\\"\"\n\texit 1\n  else\n\techo \"$progname: EXECUTED: \\\"$*\\\"\"\n  fi\n\n}\n\n. ${PBS_CONF_FILE:-@PBS_CONF_FILE@}\nexport PBS_TMPDIR=\"${PBS_TMPDIR:-${TMPDIR:-/var/tmp}}\"\n\nprogname=`basename $0`\nusage=\"$progname pbsrun.<keyword>\"\nusage2=\"$progname --version\"\n\npbs_mpirun=$1\n\nPBS_EXEC_BIN=${PBS_EXEC}/bin\nPBS_LIB_PATH=${PBS_EXEC}/lib\nif [ ! -d ${PBS_LIB_PATH} -a -d ${PBS_EXEC}/lib64 ] ; then\n\tPBS_LIB_PATH=${PBS_EXEC}/lib64\nfi\n\n# sanity checks\nif [ \"${PBS_EXEC_BIN}\" =  \"*\" ] ; then\n\techo \"$progname: PBS_EXEC_BIN set to *!\"\n\texit 1\nfi\n\nif [ \"${pbs_mpirun}\" =  \"*\" ] ; then\n\techo \"$progname: pbs_mpirun set to *!\"\n\texit 1\nfi\n\n\nif [ $# -ne 1 ] ; then\n   echo \"$usage\"\n   echo \"$usage2\"\n   exit 1\nfi\n\nif [ -h ${PBS_LIB_PATH}/MPI/${pbs_mpirun}.link ] ; then\n   actual_mpirun=`ls -l ${PBS_LIB_PATH}/MPI/${pbs_mpirun}.link | \\\n\t\t\t\t\tawk -F \"->\" '{print $2}'| tr -d ' '`\n   if [ ! -x \"$actual_mpirun\" ] ; then\n\techo \"$progname: mpirun=$pbs_mpirun is not executable!\"\n\texit 1\n   fi\nelse\n   echo \"$progname: did not find a ${PBS_LIB_PATH}/MPI/${pbs_mpirun}.link!\"\n   exit 1\nfi\norig_mpirun_dir=`dirname $actual_mpirun`\norig_mpirun_name=`basename ${actual_mpirun} .actual`\n# sanity check\nif [ \"${orig_mpirun_dir}\" =  \"*\" ] || \\\n\t\t[ \"${orig_mpirun_name}\" =  \"*\" ] ; then\n\techo \"$progname: orig_mpirun_dir or orig_mpirun_name set to *!\"\n\texit 1\nfi\norig_mpirun=\"${orig_mpirun_dir}/${orig_mpirun_name}\"\n\n\necho \"$progname: saving a copy of $actual_mpirun to ${orig_mpirun}.back$$\"\nres=`exec_cmd cp $actual_mpirun ${orig_mpirun}.back$$`\necho $res\nif [ `echo $res | egrep -c FAILED` -ne 0 ] ; then\n        exit 1\nfi\n\necho \"$progname: restoring $actual_mpirun to $orig_mpirun\"\n\nres=`exec_cmd rm -f ${PBS_LIB_PATH}/MPI/${pbs_mpirun}.link`\necho $res\nif [ `echo $res | egrep -c FAILED` -ne 0 ] ; then\n        exit 1\nfi\n\nres=`exec_cmd rm 
-f $orig_mpirun`\necho $res\nif [ `echo $res | egrep -c FAILED` -ne 0 ] ; then\n        exit 1\nfi\n\nres=`exec_cmd rm -f ${PBS_EXEC_BIN}/${pbs_mpirun}`\necho $res\nif [ `echo $res | egrep -c FAILED` -ne 0 ] ; then\n        exit 1\nfi\n\nres=`exec_cmd rm -f ${PBS_LIB_PATH}/MPI/${pbs_mpirun}.init`\necho $res\nif [ `echo $res | egrep -c FAILED` -ne 0 ] ; then\n        exit 1\nfi\n\nres=`exec_cmd mv $actual_mpirun $orig_mpirun`\necho $res\nif [ `echo $res | egrep -c FAILED` -ne 0 ] ; then\n        exit 1\nfi\n"
  },
  {
    "path": "src/cmds/pbsrun_wrap.in",
    "content": "#!/bin/sh\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nif [ $# -eq 1 ] && [ $1 = \"--version\" ]; then\n   echo pbs_version = @PBS_VERSION@\n   exit 0\nfi\n\nexec_cmd () {\n  $* 2>/dev/null\n  if [ $? 
-ne 0 ] ; then\n        echo \"$progname: FAILED:  \\\"$*\\\"\"\n  else\n        echo \"$progname: EXECUTED: \\\"$*\\\"\"\n  fi\n\n}\n\n. ${PBS_CONF_FILE:-@PBS_CONF_FILE@}\nexport PBS_TMPDIR=\"${PBS_TMPDIR:-${TMPDIR:-/var/tmp}}\"\n\nprogname=`basename $0`\nusage=\"$progname [-s] <path_to_actual_mpirun> pbsrun.<keyword>\"\nusag2=\"$progname --version\"\n\noptions_list=`getopt s $*`\nif [ $? != 0 ] ; then\n        echo \"$usage\"\n        echo \"$usag2\"\n        exit 2\nfi\n\nset -- $options_list\n\nstrict_pbs=0\nfor i in $* ; do\n        case $i in\n        -s) strict_pbs=1; shift;;\n        --) shift; break;;\n        esac\ndone\n\nPBS_EXEC_BIN=${PBS_EXEC}/bin\nPBS_LIB_PATH=${PBS_EXEC}/lib\nif [ ! -d ${PBS_LIB_PATH} -a -d ${PBS_EXEC}/lib64 ] ; then\n\tPBS_LIB_PATH=${PBS_EXEC}/lib64\nfi\n\nif [ $# -ne 2 ] ; then\n   echo \"$usage\"\n   echo \"$usag2\"\n   exit 1\nfi\n\nactual_mpirun=$1\npbs_mpirun=$2\n\nif [ \"`echo $pbs_mpirun | awk -F. '{print $1}'`\" != \"pbsrun\" ] ; then\n   \techo \"$usage\"\n   \techo \"$usag2\"\n   \texit 1\nfi\n\nif [ \"`echo $pbs_mpirun | awk -F. '{print $2}'`\" = \"\" ] ; then\n   \techo \"$usage\"\n   \techo \"$usag2\"\n   \texit 1\nfi\n\nif ! [ -x $actual_mpirun ] ; then\n   echo \"$progname: $actual_mpirun does not exist or is not executable!\"\n   exit 1\nfi\n\nactual_mpirun_name=`basename $actual_mpirun`\nactual_mpirun_dir=`dirname $actual_mpirun`\n\n# Save original mpirun script\nres=`exec_cmd mv $actual_mpirun ${actual_mpirun}.actual`\necho $res\nif [ `echo $res | egrep -c FAILED` -ne 0 ] ; then\n\texit 1\nfi\n\n# Instantiate pbsrun\nif [ \"$pbs_mpirun\" = \"pbsrun.poe\" ]; then\n\tres=`exec_cmd cp ${PBS_LIB_PATH}/MPI/${pbs_mpirun} ${PBS_EXEC_BIN}`\nelse\n\tres=`exec_cmd cp ${PBS_EXEC_BIN}/pbsrun ${PBS_EXEC_BIN}/${pbs_mpirun}`\nfi\necho $res\nif [ `echo $res | egrep -c FAILED` -ne 0 ] ; then\n\texit 1\nfi\n\nres=`exec_cmd chmod 755 ${PBS_EXEC_BIN}/${pbs_mpirun}`\necho $res\nif [ `echo $res | egrep -c FAILED` -ne 0 ] ; then\n\texit 1\nfi\n\nres=`exec_cmd ln -s ${PBS_EXEC_BIN}/$pbs_mpirun $actual_mpirun`\necho $res\nif [ `echo $res | egrep -c FAILED` -ne 0 ] ; then\n\texit 1\nfi\n\nres=`exec_cmd ln -s ${actual_mpirun}.actual ${PBS_LIB_PATH}/MPI/${pbs_mpirun}.link`\necho $res\nif [ `echo $res | egrep -c FAILED` -ne 0 ] ; then\n\texit 1\nfi\n\n# Instantiate the *.init.in script to *.init\neval sed 's/^strict_pbs=.*/strict_pbs=${strict_pbs}/' ${PBS_LIB_PATH}/MPI/${pbs_mpirun}.init.in > ${PBS_LIB_PATH}/MPI/${pbs_mpirun}.init 2>/dev/null\nif [ $? -ne 0 ] || [ ! -s ${PBS_LIB_PATH}/MPI/${pbs_mpirun}.init ] ; then\n\techo \"FAILED to instantiate ${PBS_LIB_PATH}/MPI/${pbs_mpirun}.init.in as ${PBS_LIB_PATH}/MPI/${pbs_mpirun}.init\"\n\texit 1\nfi\n\nres=`exec_cmd chmod 644  ${PBS_LIB_PATH}/MPI/${pbs_mpirun}.init`\necho $res\nif [ `echo $res | egrep -c FAILED` -ne 0 ] ; then\n\texit 1\nfi\n"
  },
  {
    "path": "src/cmds/qalter.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tqalter.c\n * @brief\n * \tqalter - (PBS) alter batch job\n *\n * @author  Terry Heidelberg\n * \t\t\tLivermore Computing\n *\n * @author\tBruce Kelly\n *      \tNational Energy Research Supercomputer Center\n *\n * @author\tLawrence Livermore National Laboratory\n *      \tUniversity of California\n */\n\n#include <pbs_config.h>\n\n#include \"cmds.h\"\n#include \"pbs_ifl.h\"\n#include <pbs_config.h> /* the master config generated by configure */\n#include <pbs_version.h>\n#include \"portability.h\"\n\n/**\n * @brief\n * \tprints usage format for qalter command\n *\n * @return - Void\n *\n */\nstatic void\nprint_usage()\n{\n\tstatic char usag2[] = \"       qalter --version\\n\";\n\tstatic char usage[] =\n\t\t\"usage: qalter [-a date_time] [-A account_string] [-c interval] [-e path]\\n\"\n\t\t\"\\t[-h hold_list] [-j y|n] [-k keep] [-l resource_list]\\n\"\n\t\t\"\\t[-m mail_options] [-M user_list] [-N jobname] [-o path] [-p priority]\\n\"\n\t\t\"\\t[-R o|e|oe] [-r y|n] [-S path] [-u user_list] [-W dependency_list]\\n\"\n\t\t\"\\t[-P project_name] job_identifier...\\n\";\n\tfprintf(stderr, \"%s\", usage);\n\tfprintf(stderr, \"%s\", usag2);\n}\n\n/**\n * @brief\n * \thandles attribute errors and prints appropriate errmsg\n *\n * @param[in] connect - value indicating server connection\n * @param[in] err_list - list of possible 
attribute errors\n * @param[in] id - corresponding id(string) for attribute error\n *\n * @return - Void\n *\n */\nstatic void\nhandle_attribute_errors(int connect,\n\t\t\tstruct ecl_attribute_errors *err_list, char *id)\n{\n\tstruct attropl *attribute;\n\tchar *opt;\n\tint i;\n\n\tfor (i = 0; i < err_list->ecl_numerrors; i++) {\n\t\tattribute = err_list->ecl_attrerr[i].ecl_attribute;\n\t\tif (strcmp(attribute->name, ATTR_a) == 0)\n\t\t\topt = \"a\";\n\t\telse if (strcmp(attribute->name, ATTR_A) == 0)\n\t\t\topt = \"A\";\n\t\telse if (strcmp(attribute->name, ATTR_project) == 0)\n\t\t\topt = \"P\";\n\t\telse if (strcmp(attribute->name, ATTR_c) == 0)\n\t\t\topt = \"c\";\n\t\telse if (strcmp(attribute->name, ATTR_e) == 0)\n\t\t\topt = \"e\";\n\t\telse if (strcmp(attribute->name, ATTR_h) == 0)\n\t\t\topt = \"h\";\n\t\telse if (strcmp(attribute->name, ATTR_j) == 0)\n\t\t\topt = \"j\";\n\t\telse if (strcmp(attribute->name, ATTR_k) == 0)\n\t\t\topt = \"k\";\n\t\telse if (strcmp(attribute->name, ATTR_l) == 0) {\n\t\t\topt = \"l\";\n\t\t\tfprintf(stderr, \"qalter: %s %s\\n\",\n\t\t\t\terr_list->ecl_attrerr[i].ecl_errmsg, id);\n\t\t\texit(err_list->ecl_attrerr[i].ecl_errcode);\n\t\t} else if (strcmp(attribute->name, ATTR_m) == 0)\n\t\t\topt = \"m\";\n\t\telse if (strcmp(attribute->name, ATTR_M) == 0)\n\t\t\topt = \"M\";\n\t\telse if (strcmp(attribute->name, ATTR_N) == 0)\n\t\t\topt = \"N\";\n\t\telse if (strcmp(attribute->name, ATTR_o) == 0)\n\t\t\topt = \"o\";\n\t\telse if (strcmp(attribute->name, ATTR_p) == 0)\n\t\t\topt = \"p\";\n\t\telse if (strcmp(attribute->name, ATTR_r) == 0)\n\t\t\topt = \"r\";\n\t\telse if (strcmp(attribute->name, ATTR_R) == 0)\n\t\t\topt = \"R\";\n\t\telse if (strcmp(attribute->name, ATTR_S) == 0)\n\t\t\topt = \"S\";\n\t\telse if (strcmp(attribute->name, ATTR_u) == 0)\n\t\t\topt = \"u\";\n\t\telse if ((strcmp(attribute->name, ATTR_depend) == 0) ||\n\t\t\t (strcmp(attribute->name, ATTR_stagein) == 0) ||\n\t\t\t (strcmp(attribute->name, 
ATTR_stageout) == 0) ||\n\t\t\t (strcmp(attribute->name, ATTR_sandbox) == 0) ||\n\t\t\t (strcmp(attribute->name, ATTR_umask) == 0) ||\n\t\t\t (strcmp(attribute->name, ATTR_runcount) == 0) ||\n\t\t\t (strcmp(attribute->name, ATTR_g) == 0))\n\t\t\topt = \"W\";\n\t\telse\n\t\t\treturn;\n\n\t\tpbs_disconnect(connect);\n\t\tCS_close_app();\n\n\t\tif (err_list->ecl_attrerr->ecl_errcode == PBSE_JOBNBIG)\n\t\t\tfprintf(stderr, \"qalter: Job %s \\n\", err_list->ecl_attrerr->ecl_errmsg);\n\t\telse {\n\t\t\tfprintf(stderr, \"qalter: illegal -%s value\\n\", opt);\n\t\t\tprint_usage();\n\t\t}\n\t\texit(2);\n\t}\n}\n\nint\nmain(int argc, char **argv, char **envp) /* qalter */\n{\n\tint c;\n\tint errflg = 0;\n\tint any_failed = 0;\n\tchar *pc;\n\tint i;\n\tstruct attrl *attrib = NULL;\n\tchar *keyword;\n\tchar *valuewd;\n\tchar *erplace;\n\ttime_t after;\n\tchar a_value[80];\n\n\tchar job_id[PBS_MAXCLTJOBID];\n\n\tchar job_id_out[PBS_MAXCLTJOBID];\n\tchar server_out[MAXSERVERNAME];\n\tchar rmt_server[MAXSERVERNAME];\n\tstruct ecl_attribute_errors *err_list;\n\n#define GETOPT_ARGS \"a:A:c:e:h:j:k:l:m:M:N:o:p:r:R:S:u:W:P:\"\n\n\t/*test for real deal or just version and exit*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\twhile ((c = getopt(argc, argv, GETOPT_ARGS)) != EOF)\n\t\tswitch (c) {\n\t\t\tcase 'a':\n\t\t\t\tif ((after = cvtdate(optarg)) < 0) {\n\t\t\t\t\tfprintf(stderr, \"qalter: illegal -a value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tsprintf(a_value, \"%ld\", (long) after);\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_a, a_value);\n\t\t\t\tbreak;\n\t\t\tcase 'A':\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_A, optarg);\n\t\t\t\tbreak;\n\t\t\tcase 'P':\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_project, optarg);\n\t\t\t\tbreak;\n\t\t\tcase 'c':\n\t\t\t\twhile (isspace((int) *optarg))\n\t\t\t\t\toptarg++;\n\t\t\t\tpc = optarg;\n\t\t\t\tif ((pc[0] == 'u') && (pc[1] == '\\0')) {\n\t\t\t\t\tfprintf(stderr, 
\"qalter: illegal -c value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_c, optarg);\n\t\t\t\tbreak;\n\t\t\tcase 'e':\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_e, optarg);\n\t\t\t\tbreak;\n\t\t\tcase 'h':\n\t\t\t\twhile (isspace((int) *optarg))\n\t\t\t\t\toptarg++;\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_h, optarg);\n\t\t\t\tbreak;\n\t\t\tcase 'j':\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_j, optarg);\n\t\t\t\tbreak;\n\t\t\tcase 'k':\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_k, optarg);\n\t\t\t\tbreak;\n\t\t\tcase 'l':\n\t\t\t\tif ((i = set_resources(&attrib, optarg, TRUE, &erplace)) != 0) {\n\t\t\t\t\tif (i > 1) {\n\t\t\t\t\t\tpbs_prt_parse_err(\"qalter: illegal -l value\\n\", optarg,\n\t\t\t\t\t\t\t\t  erplace - optarg, i);\n\n\t\t\t\t\t} else\n\t\t\t\t\t\tfprintf(stderr, \"qalter: illegal -l value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'm':\n\t\t\t\twhile (isspace((int) *optarg))\n\t\t\t\t\toptarg++;\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_m, optarg);\n\t\t\t\tbreak;\n\t\t\tcase 'M':\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_M, optarg);\n\t\t\t\tbreak;\n\t\t\tcase 'N':\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_N, optarg);\n\t\t\t\tbreak;\n\t\t\tcase 'o':\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_o, optarg);\n\t\t\t\tbreak;\n\t\t\tcase 'p':\n\t\t\t\twhile (isspace((int) *optarg))\n\t\t\t\t\toptarg++;\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_p, optarg);\n\t\t\t\tbreak;\n\t\t\tcase 'r':\n\t\t\t\tif (strlen(optarg) != 1) {\n\t\t\t\t\tfprintf(stderr, \"qalter: illegal -r value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tif (*optarg != 'y' && *optarg != 'n') {\n\t\t\t\t\tfprintf(stderr, \"qalter: illegal -r value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_r, optarg);\n\t\t\t\tbreak;\n\t\t\tcase 'R':\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_R, optarg);\n\t\t\t\tbreak;\n\t\t\tcase 
'S':\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_S, optarg);\n\t\t\t\tbreak;\n\t\t\tcase 'u':\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_u, optarg);\n\t\t\t\tbreak;\n\t\t\tcase 'W':\n\t\t\t\twhile (isspace((int) *optarg))\n\t\t\t\t\toptarg++;\n\t\t\t\tif (strlen(optarg) == 0) {\n\t\t\t\t\tfprintf(stderr, \"qalter: illegal -W value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tfix_path(optarg, 2);\n\t\t\t\ti = parse_equal_string(optarg, &keyword, &valuewd);\n\t\t\t\twhile (i == 1) {\n\t\t\t\t\tset_attr_error_exit(&attrib, keyword, valuewd);\n\t\t\t\t\ti = parse_equal_string(NULL, &keyword, &valuewd);\n\t\t\t\t}\n\t\t\t\tif (i == -1) {\n\t\t\t\t\tfprintf(stderr, \"qalter: illegal -W value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase '?':\n\t\t\tdefault:\n\t\t\t\terrflg++;\n\t\t\t\tbreak;\n\t\t}\n\n\tif (errflg || optind == argc) {\n\t\tprint_usage();\n\t\texit(2);\n\t}\n\n\t/*perform needed security library initializations (including none)*/\n\n\tif (CS_client_init() != CS_SUCCESS) {\n\t\tfprintf(stderr, \"qalter: unable to initialize security library.\\n\");\n\t\texit(1);\n\t}\n\n\tfor (; optind < argc; optind++) {\n\t\tint connect;\n\t\tint stat = 0;\n\t\tint located = FALSE;\n\n\t\tpbs_strncpy(job_id, argv[optind], sizeof(job_id));\n\t\tif (get_server(job_id, job_id_out, server_out)) {\n\t\t\tfprintf(stderr, \"qalter: illegally formed job identifier: %s\\n\", job_id);\n\t\t\tany_failed = 1;\n\t\t\tcontinue;\n\t\t}\n\tcnt:\n\t\tconnect = cnt2server(server_out);\n\t\tif (connect <= 0) {\n\t\t\tfprintf(stderr, \"qalter: cannot connect to server %s (errno=%d)\\n\",\n\t\t\t\tpbs_server, pbs_errno);\n\t\t\tany_failed = pbs_errno;\n\t\t\tcontinue;\n\t\t}\n\n\t\tstat = pbs_alterjob(connect, job_id_out, attrib, NULL);\n\t\tif (stat && (pbs_errno != PBSE_UNKJOBID)) {\n\t\t\tif ((err_list = pbs_get_attributes_in_error(connect)))\n\t\t\t\thandle_attribute_errors(connect, err_list, job_id_out);\n\n\t\t\tprt_job_err(\"qalter\", connect, 
job_id_out);\n\t\t\tany_failed = pbs_errno;\n\t\t} else if (stat && (pbs_errno == PBSE_UNKJOBID) && !located) {\n\t\t\tlocated = TRUE;\n\t\t\tif (locate_job(job_id_out, server_out, rmt_server)) {\n\t\t\t\tpbs_disconnect(connect);\n\t\t\t\tstrcpy(server_out, rmt_server);\n\t\t\t\tgoto cnt;\n\t\t\t}\n\t\t\tprt_job_err(\"qalter\", connect, job_id_out);\n\t\t\tany_failed = pbs_errno;\n\t\t}\n\n\t\tpbs_disconnect(connect);\n\t}\n\tCS_close_app();\n\texit(any_failed);\n}\n"
  },
  {
    "path": "src/cmds/qdel.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tqdel.c\n * @brief\n * \tqdel - (PBS) delete batch job\n *\n * @author  Terry Heidelberg\n * \t\t\tLivermore Computing\n *\n * @author  Bruce Kelly\n * \t\t\tNational Energy Research Supercomputer Center\n *\n * @author  Lawrence Livermore National Laboratory\n * \t\t\tUniversity of California\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <unistd.h>\n#include \"cmds.h\"\n#include \"pbs_ifl.h\"\n#include <pbs_version.h>\n\n#define MAX_TIME_DELAY_LEN 32\n#define GETOPT_ARGS \"W:x\"\n\nextern void free_svrjobidlist(svr_jobid_list_t *list, int shallow);\nextern int add_jid_to_list_by_name(char *job_id, char *svrname, svr_jobid_list_t **svr_jobid_list_hd);\n\nstatic int num_deleted = 0;\n\n/**\n * @brief\tProcess the deljob error response from server\n *\n * @param[in]\tclusterid - cluster name (PBS_SERVER)\n * @param[in]\tlist - list of deljob response objects\n * @param[out]\trmtlist - return pointer to list of jobs to try deleting on remote servers\n * @param[out]\tnfailed - number of failed jobs\n *\n * @return int\n * @retval error code from server\n */\nstatic int\nprocess_deljobstat(char *clusterid, struct batch_deljob_status **list, svr_jobid_list_t **rmtlist, int *nfailed)\n{\n\tstruct batch_deljob_status *p_delstatus;\n\tstruct batch_deljob_status *next = NULL;\n\tstruct 
batch_deljob_status *prev = NULL;\n\tchar *errtxt = NULL;\n\tint any_failed = 0;\n\n\t*nfailed = 0;\n\n\tfor (p_delstatus = *list; p_delstatus != NULL; prev = p_delstatus, p_delstatus = next) {\n\t\tnext = p_delstatus->next;\n\t\tif (p_delstatus->code == PBSE_UNKJOBID && rmtlist != NULL) {\n\t\t\tchar rmt_server[PBS_MAXDEST + 1];\n\n\t\t\t/* Check if job was moved to a remote cluster */\n\t\t\tif (locate_job(p_delstatus->name, clusterid, rmt_server)) {\n\t\t\t\tif (add_jid_to_list_by_name(strdup(p_delstatus->name), rmt_server, rmtlist) != 0)\n\t\t\t\t\treturn pbs_errno;\n\t\t\t\telse { /* Job found on remote server, let's remove it from error list */\n\t\t\t\t\tif (prev != NULL)\n\t\t\t\t\t\tprev->next = next;\n\t\t\t\t\telse\n\t\t\t\t\t\t*list = next;\n\t\t\t\t\tp_delstatus->next = NULL;\n\t\t\t\t\tpbs_delstatfree(p_delstatus);\n\t\t\t\t\tp_delstatus = prev;\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif (p_delstatus->code != PBSE_HISTJOBDELETED) {\n\t\t\terrtxt = pbse_to_txt(p_delstatus->code);\n\t\t\tif ((errtxt != NULL) && (p_delstatus->code != PBSE_HISTJOBDELETED)) {\n\t\t\t\tfprintf(stderr, \"%s: %s %s\\n\", \"qdel\", errtxt, p_delstatus->name);\n\t\t\t\tany_failed = p_delstatus->code;\n\t\t\t}\n\t\t}\n\t\t*nfailed += 1;\n\t}\n\n\treturn any_failed;\n}\n\n/**\n * @brief\tGet the mail suppression limit\n *\n * @param[in]\tconnect - connection fd\n *\n * @return int\n * @retval mail suppression limit\n */\nstatic int\nget_mail_suppress_count(int connect)\n{\n\tstruct batch_status *ss = NULL;\n\tstruct attrl attr = {0};\n\tchar *errmsg;\n\tchar *keystr, *valuestr;\n\tint maillimit = 0;\n\n\tattr.name = ATTR_dfltqdelargs;\n\tattr.value = \"\";\n\tss = pbs_statserver(connect, &attr, NULL);\n\n\tif (ss == NULL && pbs_errno != PBSE_NONE) {\n\t\tif ((errmsg = pbs_geterrmsg(connect)) != NULL)\n\t\t\tfprintf(stderr, \"qdel: %s\\n\", errmsg);\n\t\telse\n\t\t\tfprintf(stderr, \"qdel: Error %d\\n\", pbs_errno);\n\t\texit(pbs_errno);\n\t}\n\n\tif (ss != NULL && 
ss->attribs != NULL && ss->attribs->value != NULL) {\n\t\tif (parse_equal_string(ss->attribs->value, &keystr, &valuestr)) {\n\t\t\tif (strcmp(keystr, \"-Wsuppress_email\") == 0)\n\t\t\t\tmaillimit = atol(valuestr);\n\t\t\telse\n\t\t\t\tfprintf(stderr, \"qdel: unsupported %s \\'%s\\'\\n\",\n\t\t\t\t\tss->attribs->name, ss->attribs->value);\n\t\t}\n\t}\n\tpbs_statfree(ss);\n\n\treturn maillimit;\n}\n\n/**\n * @brief\tHelper function to handle job deletion for a given cluster\n *\n * @param[in]\tclusterid - id of the cluster, currently the PBS_SERVER value\n * @param[in]\tjobids - the list of jobs to delete on this server\n * @param[in]\tnumids - count of jobs to delete\n * @param[in]\tdfltmail - mail suppression limit from the -W CLI value\n * @param[in]\twarg - -W tokens ('force' et al)\n *\n * @return int\n * @retval pbs_errno\n */\nstatic int\ndelete_jobs_for_cluster(char *clusterid, char **jobids, int numids, int dfltmail, char *warg, int wargsz)\n{\n\tint connect;\n\tint mails; /* number of emails we can send */\n\tint numofjobs;\n\tstruct batch_deljob_status *p_delstatus;\n\tint any_failed = 0;\n\tchar warg1[MAX_TIME_DELAY_LEN + 7];\n\tsvr_jobid_list_t *rmtsvr_jobid_list = NULL;\n\tsvr_jobid_list_t *iter_remote = NULL;\n\tint nfailed = 0;\n\n\tstrcpy(warg1, NOMAIL);\n\n\tif (clusterid == NULL || jobids == NULL)\n\t\treturn PBSE_INTERNAL;\n\n\tif (numids <= 0)\n\t\treturn PBSE_NONE;\n\n\tconnect = cnt2server(clusterid);\n\tif (connect <= 0) {\n\t\tfprintf(stderr, \"Couldn't connect to cluster: %s\\n\", clusterid);\n\t\treturn pbs_errno;\n\t}\n\n\t/* retrieve default: suppress_email from server: default_qdel_arguments */\n\tmails = dfltmail;\n\tif (mails == 0)\n\t\tmails = get_mail_suppress_count(connect);\n\tif (mails == 0)\n\t\tmails = QDEL_MAIL_SUPPRESS;\n\tmails = mails - num_deleted;\n\tif (mails < 0)\n\t\tmails = 0;\n\n\t/* First, delete mail limit number of jobs */\n\tint temp = 0;\n\tint diff = 0;\n\tnumofjobs = temp = (mails <= numids) ? 
mails : numids;\n\tp_delstatus = pbs_deljoblist(connect, jobids, numofjobs, warg);\n\tdiff = temp - numofjobs;\n\tany_failed = process_deljobstat(clusterid, &p_delstatus, &rmtsvr_jobid_list, &nfailed);\n\tpbs_delstatfree(p_delstatus);\n\tnum_deleted += (numofjobs - nfailed);\n\n\tif (numofjobs < numids) { /* More jobs to delete */\n\t\tint any_failed_local = 0;\n\t\t/* when jobs to be deleted over the mail suppression limit, mail function is disabled\n\t\t* by sending the flag below to server via its extend field:\n\t\t*   \"\" -- delete a job with a mail\n\t\t*   \"nomail\" -- delete a job without sending a mail\n\t\t*   \"force\" -- force job to be deleted with a mail\n\t\t*   \"nomailforce\" -- force job to be deleted without sending a mail\n\t\t*   \"nomaildeletehist\" -- delete history of a job without sending mail\n\t\t*   \"nomailforcedeletehist\" -- force delete history of a job without sending mail.\n\t\t*\n\t\t* current warg1 \"nomail\" should be at start\n\t\t*/\n\t\tstrcat(warg1, warg);\n\t\tpbs_strncpy(warg, warg1, wargsz);\n\t\tp_delstatus = pbs_deljoblist(connect, &jobids[numofjobs + diff], (numids - numofjobs - diff), warg);\n\t\tany_failed_local = process_deljobstat(clusterid, &p_delstatus, &rmtsvr_jobid_list, &nfailed);\n\t\tpbs_delstatfree(p_delstatus);\n\t\tnum_deleted += ((numids - numofjobs) - nfailed);\n\t\tif (any_failed_local)\n\t\t\tany_failed = any_failed_local;\n\t}\n\n\t/* Delete any jobs which were found on remote servers */\n\tfor (iter_remote = rmtsvr_jobid_list; iter_remote != NULL; iter_remote = iter_remote->next) {\n\t\tint fd;\n\t\tint any_failed_local = 0;\n\n\t\tfd = pbs_connect(iter_remote->svrname);\n\t\tif (fd > 0) {\n\t\t\tp_delstatus = pbs_deljoblist(fd, iter_remote->jobids, iter_remote->total_jobs, warg);\n\t\t\tany_failed_local = process_deljobstat(iter_remote->svrname, &p_delstatus, NULL, &nfailed);\n\t\t\tpbs_delstatfree(p_delstatus);\n\t\t\tnum_deleted += (iter_remote->total_jobs - nfailed);\n\t\t\tif 
(any_failed_local)\n\t\t\t\tany_failed = any_failed_local;\n\t\t\tpbs_disconnect(fd);\n\t\t}\n\t}\n\n\tfree_svrjobidlist(rmtsvr_jobid_list, 0);\n\tpbs_disconnect(connect);\n\n\treturn any_failed;\n}\n\n/**\n * @brief\tHelper function to group the total list of jobs by each cluster\n *\n * @param[in]\tjobids - list of jobids\n * @param[in]\tnumjids - the number of job ids\n *\n * @return svr_jobid_list_t *\n * @retval the svr_jobid_list_t list of clusters and jobids within each\n * @retval NULL for error\n */\nstatic svr_jobid_list_t *\ngroup_jobs_by_cluster(char **jobids, int numjids, int *any_failed)\n{\n\tint i;\n\tchar server_out[PBS_MAXSERVERNAME];\n\tchar job_id_out[PBS_MAXCLTJOBID];\n\tsvr_jobid_list_t *svr_jobid_list_hd = NULL;\n\tchar *dflt_server = pbs_default();\n\n\t/* Club jobs by each server */\n\tfor (i = 0; i < numjids; i++) {\n\t\tif (get_server(jobids[i], job_id_out, server_out)) {\n\t\t\tfprintf(stderr, \"qdel: illegally formed job identifier: %s\\n\", jobids[i]);\n\t\t\t*any_failed = 1;\n\t\t\tcontinue;\n\t\t}\n\t\tif (server_out[0] == '\\0') {\n\t\t\tif (dflt_server != NULL)\n\t\t\t\tpbs_strncpy(server_out, dflt_server, sizeof(server_out));\n\t\t}\n\t\tif (server_out[0] == '\\0') {\n\t\t\tfprintf(stderr, \"Couldn't determine server name for job %s\\n\", jobids[i]);\n\t\t\t*any_failed = 1;\n\t\t\tcontinue;\n\t\t}\n\n\t\tif (add_jid_to_list_by_name(jobids[i], server_out, &svr_jobid_list_hd) != 0)\n\t\t\treturn NULL;\n\t}\n\n\treturn svr_jobid_list_hd;\n}\n\nint\nmain(int argc, char **argv, char **envp) /* qdel */\n{\n\tint c;\n\tint errflg = 0;\n\tint any_failed = 0;\n\tchar *pc;\n\tint forcedel = FALSE;\n\tint deletehist = FALSE;\n\tchar *keystr, *valuestr;\n\tchar **jobids = NULL;\n\tint dfltmail = 0;\n\tint numids = 0;\n\t/* -W no longer supports a time delay */\n\t/* max length is \"nomailforcedeletehist\" plus terminating '\\0' */\n\tchar warg[MAX_TIME_DELAY_LEN + 1];\n\tsvr_jobid_list_t *jobsbycluster = NULL;\n\tsvr_jobid_list_t *iter_list = 
NULL;\n\tint any_failed_local = 0;\n\n\t/*test for real deal or just version and exit*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\twarg[0] = '\\0';\n\twhile ((c = getopt(argc, argv, GETOPT_ARGS)) != EOF) {\n\t\tswitch (c) {\n\t\t\tcase 'W':\n\t\t\t\tpc = optarg;\n\t\t\t\tif (strlen(pc) == 0) {\n\t\t\t\t\tfprintf(stderr, \"qdel: illegal -W value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tif (strcmp(pc, FORCE) == 0) {\n\t\t\t\t\tforcedel = TRUE;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tif (parse_equal_string(optarg, &keystr, &valuestr)) {\n\t\t\t\t\tif (strcmp(keystr, SUPPRESS_EMAIL) == 0) {\n\t\t\t\t\t\tdfltmail = atol(valuestr);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\twhile (*pc != '\\0') {\n\t\t\t\t\tif (!isdigit(*pc)) {\n\t\t\t\t\t\tfprintf(stderr, \"qdel: illegal -W value\\n\");\n\t\t\t\t\t\terrflg++;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\tpc++;\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'x':\n\t\t\t\tdeletehist = TRUE;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\terrflg++;\n\t\t}\n\t}\n\n\tif (errflg || optind >= argc) {\n\t\tstatic char usage[] =\n\t\t\t\"usage:\\n\"\n\t\t\t\"\\tqdel [-W force|suppress_email=X] [-x] job_identifier...\\n\"\n\t\t\t\"\\tqdel --version\\n\";\n\t\tfprintf(stderr, \"%s\", usage);\n\t\texit(2);\n\t}\n\n\tif (forcedel && deletehist)\n\t\tsnprintf(warg, sizeof(warg), \"%s%s\", FORCE, DELETEHISTORY);\n\telse if (forcedel)\n\t\tstrcpy(warg, FORCE);\n\telse if (deletehist)\n\t\tstrcpy(warg, DELETEHISTORY);\n\n\t/*perform needed security library initializations (including none)*/\n\n\tif (CS_client_init() != CS_SUCCESS) {\n\t\tfprintf(stderr, \"qdel: unable to initialize security library.\\n\");\n\t\texit(1);\n\t}\n\n\tjobids = &argv[optind];\n\tnumids = argc - optind;\n\tif (jobids == NULL || numids <= 0) {\n\t\t/* No jobs to delete */\n\t\treturn 0;\n\t}\n\n\t/* Send delete job list request by each cluster */\n\tjobsbycluster = group_jobs_by_cluster(jobids, 
numids, &any_failed);\n\tif (jobsbycluster == NULL) {\n\t\texit(1);\n\t}\n\tfor (iter_list = jobsbycluster; iter_list != NULL; iter_list = iter_list->next) {\n\t\tany_failed_local = delete_jobs_for_cluster(iter_list->svrname, iter_list->jobids,\n\t\t\t\t\t\t\t   iter_list->total_jobs, dfltmail, warg, sizeof(warg));\n\t}\n\tfree_svrjobidlist(jobsbycluster, 1);\n\tif (any_failed_local)\n\t\tany_failed = any_failed_local;\n\n\t/* cleanup security library initializations before exiting */\n\tCS_close_app();\n\n\tif (any_failed == 0 && pbs_errno != PBSE_NONE)\n\t\tany_failed = PBSE_NONE;\n\n\texit(any_failed);\n}\n"
  },
  {
    "path": "src/cmds/qdisable.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tqdisable.c\n * @brief\n *  The qdisable command directs that a destination should no longer accept\n *  batch jobs.\n *\n * @par\tSynopsis:\n *  \tqdisable destination ...\n *\n * @par\tArguments:\n *  \tdestination ...\n *      A list of destinations.  A destination has one of the following\n *      three forms:\n *          queue\n *          @server\n *          queue@server\n *      If queue is specified, the request is to disable the queue at\n *      the default server.  If @server is given, the request is to\n *      disable the default queue at the server.  If queue@server is\n *      used, the request is to disable the named queue at the named\n *      server.\n *\n * @author \tBruce Kelly\n *  \t\tNational Energy Research Supercomputer Center\n *  \t\tLivermore, CA\n *    \t\tMay, 1993\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"cmds.h\"\n#include <pbs_version.h>\n\nint exitstatus = 0; /* Exit Status */\n\nstatic void execute(char *, char *);\n\nint\nmain(int argc, char **argv)\n{\n\t/*\n\t *  This routine sends a Manage request to the batch server specified by\n\t * the destination.  The ENABLED queue attribute is set to {False}.  
If the\n\t * batch request is accepted, the server will no longer accept Queue Job\n\t * requests for the specified queue.\n\t */\n\n\tint dest;     /* Index into the destination array (argv) */\n\tchar *queue;  /* Queue name part of destination */\n\tchar *server; /* Server name part of destination */\n\n\t/*test for real deal or just version and exit*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\tif (argc == 1) {\n\t\tfprintf(stderr, \"Usage: qdisable [queue][@server] ...\\n\");\n\t\tfprintf(stderr, \"       qdisable --version\\n\");\n\t\texit(1);\n\t}\n\n\t/*perform needed security library initializations (including none)*/\n\n\tif (CS_client_init() != CS_SUCCESS) {\n\t\tfprintf(stderr, \"qdisable: unable to initialize security library.\\n\");\n\t\texit(1);\n\t}\n\n\tfor (dest = 1; dest < argc; dest++)\n\t\tif (parse_destination_id(argv[dest], &queue, &server) == 0)\n\t\t\texecute(queue, server);\n\t\telse {\n\t\t\tfprintf(stderr, \"qdisable: illegally formed destination: %s\\n\",\n\t\t\t\targv[dest]);\n\t\t\texitstatus = 1;\n\t\t}\n\n\t/*cleanup security library initializations before exiting*/\n\tCS_close_app();\n\n\texit(exitstatus);\n}\n\n/**\n * @brief\n *\tdisables a queue on server\n *\n * @param[in] queue - The name of the queue to disable.\n * @param[in] server - The name of the server that manages the queue.\n *\n * @return - Void\n *\n * @File Variables:\n *  exitstatus  Set to two if an error occurs.\n *\n */\nstatic void\nexecute(char *queue, char *server)\n{\n\tint ct;\t      /* Connection to the server */\n\tint merr;     /* Error return from pbs_manager */\n\tchar *errmsg; /* Error message from pbs_manager */\n\t/* The disable request */\n\tstatic struct attropl attr = {NULL, \"enabled\", NULL, \"FALSE\", SET};\n\n\tif ((ct = cnt2server(server)) > 0) {\n\t\tmerr = pbs_manager(ct, MGR_CMD_SET, MGR_OBJ_QUEUE, queue, &attr, NULL);\n\t\tif (merr != 0) {\n\t\t\terrmsg = pbs_geterrmsg(ct);\n\t\t\tif (errmsg 
!= NULL) {\n\t\t\t\tfprintf(stderr, \"qdisable: %s \", errmsg);\n\t\t\t} else {\n\t\t\t\tfprintf(stderr, \"qdisable: Error (%d) disabling queue \", pbs_errno);\n\t\t\t}\n\t\t\tif (notNULL(queue))\n\t\t\t\tfprintf(stderr, \"%s\", queue);\n\t\t\tif (notNULL(server))\n\t\t\t\tfprintf(stderr, \"@%s\", server);\n\t\t\tfprintf(stderr, \"\\n\");\n\t\t\texitstatus = 2;\n\t\t}\n\t\tpbs_disconnect(ct);\n\t} else {\n\t\tfprintf(stderr, \"qdisable: could not connect to server %s (%d)\\n\", server, pbs_errno);\n\t\texitstatus = 2;\n\t}\n}\n"
  },
  {
    "path": "src/cmds/qenable.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tqenable.c\n * @brief\n *  The qenable command directs that a destination should accept\n *  batch jobs.\n *\n * @par\tSynopsis:\n *  qenable destination ...\n *\n * @par\tArguments:\n *  destination ...\n *      A list of destinations.  A destination has one of the following\n *      three forms:\n *          queue\n *          @server\n *          queue@server\n *      If queue is specified, the request is to enable the queue at\n *      the default server.  If @server is given, the request is to\n *      enable the default queue at the server.  If queue@server is\n *      used, the request is to enable the named queue at the named\n *      server.\n *\n *  @author\tBruce Kelly\n *   \t\tNational Energy Research Supercomputer Center, Livermore, CA\n *  \t\tMay, 1993\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"cmds.h\"\n#include <pbs_version.h>\n\nint exitstatus = 0; /* Exit Status */\nstatic void execute(char *, char *);\n\nint\nmain(int argc, char **argv)\n{\n\t/*\n\t *  This routine sends a Manage request to the batch server specified by\n\t * the destination.  The ENABLED queue attribute is set to {True}.  
If the\n\t * batch request is accepted, the server will accept Queue Job requests for\n\t * the specified queue.\n\t */\n\n\tint dest;     /* Index into the destination array (argv) */\n\tchar *queue;  /* Queue name part of destination */\n\tchar *server; /* Server name part of destination */\n\n\t/*test for real deal or just version and exit*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\tif (argc == 1) {\n\t\tfprintf(stderr, \"Usage: qenable [queue][@server] ...\\n\");\n\t\tfprintf(stderr, \"       qenable --version\\n\");\n\t\texit(1);\n\t}\n\n\t/*perform needed security library initializations (including none)*/\n\n\tif (CS_client_init() != CS_SUCCESS) {\n\t\tfprintf(stderr, \"qenable: unable to initialize security library.\\n\");\n\t\texit(1);\n\t}\n\n\tfor (dest = 1; dest < argc; dest++)\n\t\tif (parse_destination_id(argv[dest], &queue, &server) == 0)\n\t\t\texecute(queue, server);\n\t\telse {\n\t\t\tfprintf(stderr, \"qenable: illegally formed destination: %s\\n\",\n\t\t\t\targv[dest]);\n\t\t\texitstatus = 1;\n\t\t}\n\n\t/*cleanup security library initializations before exiting*/\n\tCS_close_app();\n\n\texit(exitstatus);\n}\n\n/**\n * @brief\n *\tenables a queue on server\n *\n * @param[in] queue - The name of the queue to enable.\n * @param[in] server - The name of the server that manages the queue.\n *\n * @return - Void\n *\n * @File Variables:\n * exitstatus  Set to two if an error occurs.\n *\n */\nstatic void\nexecute(char *queue, char *server)\n{\n\tint ct;\t      /* Connection to the server */\n\tint merr;     /* Error return from pbs_manager */\n\tchar *errmsg; /* Error message from pbs_manager */\n\t/* The enable request */\n\tstatic struct attropl attr = {NULL, \"enabled\", NULL, \"TRUE\", SET};\n\n\tif ((ct = cnt2server(server)) > 0) {\n\t\tmerr = pbs_manager(ct, MGR_CMD_SET, MGR_OBJ_QUEUE, queue, &attr, NULL);\n\t\tif (merr != 0) {\n\t\t\terrmsg = pbs_geterrmsg(ct);\n\t\t\tif (errmsg != NULL) 
{\n\t\t\t\tfprintf(stderr, \"qenable: %s \", errmsg);\n\t\t\t} else {\n\t\t\t\tfprintf(stderr, \"qenable: Error (%d) enabling queue \", pbs_errno);\n\t\t\t}\n\t\t\tif (notNULL(queue))\n\t\t\t\tfprintf(stderr, \"%s\", queue);\n\t\t\tif (notNULL(server))\n\t\t\t\tfprintf(stderr, \"@%s\", server);\n\t\t\tfprintf(stderr, \"\\n\");\n\t\t\texitstatus = 2;\n\t\t}\n\t\tpbs_disconnect(ct);\n\t} else {\n\t\tfprintf(stderr, \"qenable: could not connect to server %s (%d)\\n\", server, pbs_errno);\n\t\texitstatus = 2;\n\t}\n}\n"
  },
  {
    "path": "src/cmds/qhold.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tqhold.c\n * @brief\n * \tqhold - (PBS) hold batch job\n *\n * @author     \tTerry Heidelberg\n * \t\t\t\tLivermore Computing\n *\n * @author     \tBruce Kelly\n * \t\t\t\tNational Energy Research Supercomputer Center\n *\n * @author     \tLawrence Livermore National Laboratory\n * \t\t\t\tUniversity of California\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"cmds.h\"\n#include \"pbs_ifl.h\"\n#include <pbs_version.h>\n\n/**\n * @brief\n *      prints usage format for qhold command\n *\n * @return - Void\n *\n */\nstatic void\nprint_usage()\n{\n\tstatic char usag2[] = \"       qhold --version\\n\";\n\tstatic char usage[] =\n\t\t\"usage: qhold [-h hold_list] job_identifier...\\n\";\n\n\tfprintf(stderr, \"%s\", usage);\n\tfprintf(stderr, \"%s\", usag2);\n}\n\n/**\n * @brief\n *      handles attribute errors and prints appropriate errmsg\n *\n * @param[in] connect - value indicating server connection\n * @param[in] err_list - list of possible attribute errors\n *\n * @return - Void\n *\n */\nstatic void\nhandle_attribute_errors(int connect,\n\t\t\tstruct ecl_attribute_errors *err_list)\n{\n\tstruct attropl *attribute;\n\tchar *opt;\n\tint i;\n\n\tfor (i = 0; i < err_list->ecl_numerrors; i++) {\n\t\tattribute = err_list->ecl_attrerr[i].ecl_attribute;\n\t\tif (strcmp(attribute->name, 
ATTR_h) == 0)\n\t\t\topt = \"h\";\n\t\telse\n\t\t\treturn;\n\n\t\tfprintf(stderr, \"qhold: illegal -%s value\\n\", opt);\n\t\tprint_usage();\n\t\tpbs_disconnect(connect);\n\n\t\t/*cleanup security library initializations before exiting*/\n\t\tCS_close_app();\n\t\texit(2);\n\t}\n}\n\nint\nmain(int argc, char **argv, char **envp) /* qhold */\n{\n\tint c;\n\tint errflg = 0;\n\tint any_failed = 0;\n\tchar job_id[PBS_MAXCLTJOBID]; /* from the command line */\n\n\tchar job_id_out[PBS_MAXCLTJOBID];\n\tchar server_out[MAXSERVERNAME];\n\tchar rmt_server[MAXSERVERNAME];\n\tstruct ecl_attribute_errors *err_list;\n\n#define MAX_HOLD_TYPE_LEN 32\n\tchar hold_type[MAX_HOLD_TYPE_LEN + 1];\n\n#define GETOPT_ARGS \"h:-:\"\n\n\t/*test for real deal or just version and exit*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\thold_type[0] = '\\0';\n\n\twhile ((c = getopt(argc, argv, GETOPT_ARGS)) != EOF)\n\t\tswitch (c) {\n\t\t\tcase 'h':\n\t\t\t\twhile (isspace((int) *optarg))\n\t\t\t\t\toptarg++;\n\t\t\t\tif (optarg[0] == '\\0') {\n\t\t\t\t\tfprintf(stderr, \"qhold: illegal -h value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t} else\n\t\t\t\t\tpbs_strncpy(hold_type, optarg, sizeof(hold_type));\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\terrflg++;\n\t\t}\n\n\tif (errflg || optind >= argc) {\n\t\tprint_usage();\n\t\texit(2);\n\t}\n\n\t/*perform needed security library initializations (including none)*/\n\n\tif (CS_client_init() != CS_SUCCESS) {\n\t\tfprintf(stderr, \"qhold: unable to initialize security library.\\n\");\n\t\texit(2);\n\t}\n\n\tfor (; optind < argc; optind++) {\n\t\tint connect;\n\t\tint stat = 0;\n\t\tint located = FALSE;\n\n\t\tpbs_strncpy(job_id, argv[optind], sizeof(job_id));\n\t\tif (get_server(job_id, job_id_out, server_out)) {\n\t\t\tfprintf(stderr, \"qhold: illegally formed job identifier: %s\\n\", job_id);\n\t\t\tany_failed = 1;\n\t\t\tcontinue;\n\t\t}\n\tcnt:\n\t\tconnect = cnt2server(server_out);\n\t\tif (connect <= 0) 
{\n\t\t\tfprintf(stderr, \"qhold: cannot connect to server %s (errno=%d)\\n\",\n\t\t\t\tpbs_server, pbs_errno);\n\t\t\tany_failed = pbs_errno;\n\t\t\tcontinue;\n\t\t}\n\n\t\tstat = pbs_holdjob(connect, job_id_out, hold_type, NULL);\n\t\tif (stat && (err_list = pbs_get_attributes_in_error(connect)))\n\t\t\thandle_attribute_errors(connect, err_list);\n\n\t\tif (stat && (pbs_errno != PBSE_UNKJOBID)) {\n\t\t\tprt_job_err(\"qhold\", connect, job_id_out);\n\t\t\tany_failed = pbs_errno;\n\t\t} else if (stat && (pbs_errno == PBSE_UNKJOBID) && !located) {\n\t\t\tlocated = TRUE;\n\t\t\tif (locate_job(job_id_out, server_out, rmt_server)) {\n\t\t\t\tpbs_disconnect(connect);\n\t\t\t\tstrcpy(server_out, rmt_server);\n\t\t\t\tgoto cnt;\n\t\t\t}\n\t\t\tprt_job_err(\"qhold\", connect, job_id_out);\n\t\t\tany_failed = pbs_errno;\n\t\t}\n\n\t\tpbs_disconnect(connect);\n\t}\n\n\t/*cleanup security library initializations before exiting*/\n\tCS_close_app();\n\n\texit(any_failed);\n}\n"
  },
  {
    "path": "src/cmds/qmgr.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tqmgr.c\n * @brief\n *  \tThe qmgr command provides an administrator interface to the batch\n *      system.  The command reads directives from standard input.  
The syntax\n *      of each directive is checked and the appropriate request is sent to the\n *      batch server or servers.\n * @par\tSynopsis:\n *      qmgr [-a] [-c command] [-e] [-n] [-z] [server...]\n *\n * @par Options:\n *      -a      Abort qmgr on any syntax errors or any requests rejected by a\n *              server.\n *\n *      -c command\n *              Execute a single command and exit qmgr.\n *\n *      -e      Echo all commands to standard output.\n *\n *      -n      No commands are executed, syntax checking only is performed.\n *\n *      -z      No errors are written to standard error.\n *\n * \t@par Arguments:\n *      server...\n *              A list of servers to administer.  If no servers are given, then\n *              use the default server.\n *\n *\n *\t@par Exitcodes:\n *\t  0 - successful\n *\t  1 - error in parse\n *\t  2 - error in execute\n *\t  3 - error connect_servers\n *\t  4 - error set_active\n *\t  5 - memory allocation error\n *\n * @author \tBruce Kelly\n * \t\t\tNational Energy Research Supercomputer Center\n * \t\t\tLivermore, CA\n *\t\t\tMarch, 1993\n *\n */\n\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <unistd.h>\n#include <ctype.h>\n#include <pwd.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <errno.h>\n\n/* PBS include files */\n#include <cmds.h>\n#include <qmgr.h>\n#include \"libpbs.h\"\n#include \"pbs_version.h\"\n#include \"pbs_share.h\"\n#include \"hook.h\"\n#include \"pbs_ifl.h\"\n#include \"net_connect.h\"\n#include \"server_limits.h\"\n#include \"attribute.h\"\n#include \"pbs_entlim.h\"\n#include \"resource.h\"\n#include \"pbs_ecl.h\"\n#include \"libutil.h\"\n\n/* Global Variables */\n#define QMGR_TIMEOUT 900 /* qmgr connection timeout set to 15 min */\ntime_t start_time = 0;\ntime_t check_time = 0;\n\nchar prompt[] = \"Qmgr: \"; /* Prompt if input is from terminal */\nchar contin[] = \"Qmgr< \"; /* Prompt if input is continued across lines */\nchar *cur_prompt = prompt;\nconst char 
hist_init_err[] = \"History could not be initialized\\n\";\nconst char histfile_access_err[] = \"Cannot read/write history file %s, history across sessions disabled\\n\";\nint qmgr_hist_enabled = 0;\t     /* history is disabled until initialized */\nchar qmgr_hist_file[MAXPATHLEN + 1]; /* history file for this user */\n\nstatic char hook_tempfile_errmsg[HOOK_MSG_SIZE] = {'\\0'};\n\n/*\n * This variable represents the use of the -z option on the command line.\n * It is declared here because it must be used by the pstderr routine to\n * determine if any message should be printed to standard error.\n */\nint zopt = FALSE; /* -z option */\n\nstatic struct server *servers = NULL; /* Linked list of server structures */\nstatic int nservers = 0;\t      /* Number of servers */\n\n/* active objects */\nstruct objname *active_servers;\nstruct objname *active_queues;\nstruct objname *active_nodes;\nstruct objname *active_scheds;\n\n/* The following refer to who is executing the qmgr and from what host */\nchar cur_host[PBS_MAXHOSTNAME + 1];\nchar cur_user[PBS_MAXHOSTNAME + 1];\nchar conf_full_server_name[PBS_MAXHOSTNAME + 1] = {'\\0'};\n\nconst char syntaxerr[] = \"qmgr: Syntax error\\n\";\n\n/* List of attribute names for attributes of type entlim */\nstatic char *entlim_attrs[] = {\n\tATTR_max_run,\n\tATTR_max_run_res,\n\tATTR_max_run_soft,\n\tATTR_max_run_res_soft,\n\tATTR_max_queued,\n\tATTR_max_queued_res,\n\tATTR_queued_jobs_threshold,\n\tATTR_queued_jobs_threshold_res,\n\tNULL /* keep as last one please */\n};\n\n/* Hook-related variables and functions */\n\nstatic char *hook_tempfile = NULL; /* a temporary file in PBS_HOOK_WORKDIR */\nstatic char *hook_tempdir = NULL;  /* PBS_HOOK_WORKDIR path */\n\nextern void qmgr_list_history(int);\nextern int init_qmgr_hist(char *);\nextern int qmgr_add_history(char *);\nextern int get_request_hist(char **);\n\n/**\n * @brief\n * \tdyn_strcpy: copies the src string into 'dest', adjusting the size of\n * \t'dest' with realloc to fit the 
value of src.\n *\n * @param[in] dest - destination string for holding hookfile name\n * @param[in] src - src string holding filename\n *\n * @return - Void  (exits the program upon error.)\n *\n */\nstatic void\ndyn_strcpy(char **dest, char *src)\n{\n\tchar *p;\n\n\tif ((dest == NULL) || (*dest == NULL) || (src == NULL)) {\n\n\t\tfprintf(stderr, \"dyn_strcpy: bad argument\\n\");\n\t\texit(1);\n\t}\n\n\tif (strlen(*dest) >= strlen(src)) {\n\t\tstrcpy(*dest, src);\n\t} else {\n\t\tp = (char *) realloc((char *) *dest, strlen(src) + 1);\n\t\tif (p == NULL) {\n\t\t\tfprintf(stderr, \"dyn_strcpy: Failed to realloc\\n\");\n\t\t\texit(1);\n\t\t}\n\t\t*dest = p;\n\t}\n\tstrcpy(*dest, src);\n}\n\n/**\n * @brief\n * \tbase: returns the basename of the given 'path'.\n *\n * @param[in] path - file path\n *\n * @return string\n * @retval filename\n * exits from program on failure\n *\n */\nstatic char *\nbase(char *path)\n{\n\tchar *p;\n\n\tif (path == NULL) {\n\t\tfprintf(stderr, \"base: bad argument\\n\");\n\t\texit(1);\n\t}\n\n\tp = (char *) path;\n\n#ifdef WIN32\n\tif (((p = strrchr(path, '/')) != NULL) || ((p = strrchr(path, '\\\\')) != NULL))\n#else\n\tif ((p = strrchr(path, '/')))\n#endif\n\t{\n\t\tp++;\n\t}\n\n\treturn (p);\n}\n\nstatic void\nattrlist_add(struct attropl **attrlist, char *attname,\n\t     size_t attname_len, char *attval, size_t attval_len)\n{\n\tstruct attropl *paol;\n\tint ltxt;\n\n\tif ((attrlist == NULL) || (attname == NULL) || (attval == NULL)) {\n\t\tfprintf(stderr, \"attrlist_add: bad argument\\n\");\n\t\texit(1);\n\t}\n\n\t/* Allocate storage for attribute structure */\n\tMstruct(paol, struct attropl);\n\tpaol->name = NULL;\n\tpaol->resource = NULL;\n\tpaol->value = NULL;\n\tpaol->next = *attrlist;\n\t*attrlist = paol;\n\n\tltxt = attname_len;\n\tMstring(paol->name, ltxt + 1);\n\tpbs_strncpy(paol->name, attname, ltxt + 1);\n\n\tpaol->op = SET;\n\n\tif (attval_len == 0) { /* don't malloc */\n\t\tpaol->value = attval;\n\t} else {\n\t\tltxt = 
attval_len;\n\t\tMstring(paol->value, ltxt + 1);\n\t\tpbs_strncpy(paol->value, attval, ltxt + 1);\n\t}\n}\n\n/* dump_file: dump contents of 'infile' into 'outfile'. If 'infile' is NULL\n or the empty string, then input contents is coming from STDIN; if 'outfile'\n is NULL or the empty string, then contents are dumped into STDOUT.\n Return 0 for success; 1 otherwise, with\n 'msg' filled in with error information.\n */\nint\ndump_file(char *infile, char *outfile, char *infile_encoding, char *msg, size_t msg_len)\n{\n\tFILE *infp;\n\tFILE *outfp;\n\n\tunsigned char in_data[HOOK_BUF_SIZE + 1];\n\tssize_t in_len;\n\tint ret = 0;\n\tint encode_b64 = 0; /* 1 if encode in base 64 */\n\tstruct stat sb;\n\n\tmemset(msg, '\\0', msg_len);\n\n\tif ((infile == NULL) || (infile[0] == '\\0')) {\n\t\tinfp = stdin;\n\t} else {\n\n\t\tinfp = fopen(infile, \"rb\");\n\n\t\tif (infp == NULL) {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"%s - %s\", infile, strerror(errno));\n\t\t\treturn (1);\n\t\t}\n\t\t/* need to check if we really opened a file and not a directory/dev */\n\t\tif ((fstat(fileno(infp), &sb) != -1) && !S_ISREG(sb.st_mode)) {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"%s - Permission denied\", infile);\n\n\t\t\tfclose(infp);\n\t\t\treturn (1);\n\t\t}\n\t}\n\n\tif ((outfile == NULL) || (outfile[0] == '\\0')) {\n\t\toutfp = stdout;\n\t} else {\n\t\toutfp = fopen(outfile, \"wb\");\n\n\t\tif (outfp == NULL) {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"%s - %s\", outfile, strerror(errno));\n\t\t\tret = 1;\n\t\t\tgoto dump_file_exit;\n\t\t}\n#ifdef WIN32\n\t\tsecure_file(outfile, \"Administrators\",\n\t\t\t    READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED);\n#endif\n\t}\n\n\tif (strcmp(infile_encoding, HOOKSTR_BASE64) == 0) {\n\t\tencode_b64 = 1;\n\t}\n\n\twhile (fgets((char *) in_data, sizeof(in_data), infp) != NULL) {\n\t\tin_len = strlen((char *) in_data);\n\t\tif (encode_b64 &&\n\t\t    (strcmp((char *) in_data, \"\\n\") == 0)) { /* empty line */\n\t\t\t/* 
signals end of processing, especially when     */\n\t\t\t/* qmgr -c print hook output is fed back to qmgr  */\n\t\t\t/* The output will have one or more hooks         */\n\t\t\t/* definitions with their encoded contents, and   */\n\t\t\t/* an empty line terminates a hook content.       */\n\t\t\tbreak;\n\t\t}\n\t\tif (in_len > 0) {\n\t\t\tif (fwrite(in_data, 1, in_len, outfp) != in_len) {\n\t\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t\t \"write to %s failed! Aborting...\",\n\t\t\t\t\t outfile);\n\t\t\t\tret = 1;\n\t\t\t\tgoto dump_file_exit;\n\t\t\t}\n\t\t}\n\t}\n\tif (fflush(outfp) != 0) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"Failed to dump file %s (error %s)\", outfile,\n\t\t\t strerror(errno));\n\t\tret = 1;\n\t}\n\ndump_file_exit:\n\tif (infp && infp != stdin)\n\t\tfclose(infp);\n\n\tif (outfp && outfp != stdout) {\n\t\tfclose(outfp);\n\t}\n\tif (ret != 0) {\n\t\tif (outfile)\n\t\t\t(void) unlink(outfile);\n\t}\n\treturn (ret);\n}\n\n/*\n *\n *\tparams_import - parse the parameters to the MGR_CMD_IMPORT\n *\t\t\t<content-type> <content-encoding> <input_file>|-\n *\n *\t  attrs        Text of the import command parameters.\n * \t  OUT: attrlist     Address of the attribute-value structure.\n * \t  doper\tdirective operation type  - must be import.\n *\n * \tReturns:\n *        This routine returns zero upon successful completion.  
If there is\n *        a syntax error, it will return the index into attrs where the error\n *        occurred.\n *\n * \tNote:\n *        The following is an example of the text input and what the resulting\n *        data structure should look like.\n *\n *\t\t\"content-type\" = \"application/x-python\"\n *\t \t\"content-encoding\" = \"base64\"\n *\t\t\"input-file\" = \"in_file.PY\"\n *\n *      attrlist ---> struct attropl *\n *                      |\n *                      |\n *                      \\/\n *                      \"content-type\"\n *                      \"\"\n *                      \"application/x-python\"\n *                      SET\n *                      ---------\n *                              |\n *                      \"content-encoding\"   <-\n *                      \"\"\n *                      \"base64\"\n *                      SET\n *                      ---------\n *                              |\n *                      \"input-file\"   <-\n *                      \"\"\n *                      \"in_file.PY\"\n *                      SET\n *                      NULL\n */\nint\nparams_import(char *attrs, struct attropl **attrlist, int doper)\n{\n\tint i;\n\tchar *c;     /* Pointer into the attrs text */\n\tchar *start; /* Pointer to the start of a word */\n\tchar *v;     /* value returned by pbs_quote_parse */\n\tchar *e;\n\n\tif ((attrs == NULL) || (attrlist == NULL)) {\n\t\tfprintf(stderr, \"params_import: bad argument\\n\");\n\t\texit(1);\n\t}\n\n\tif (doper != MGR_CMD_IMPORT)\n\t\treturn 1;\n\n\t/* Free the space from the previous structure */\n\tfreeattropl(*attrlist);\n\t*attrlist = NULL;\n\n\t/* Is there any thing to parse? 
*/\n\tc = attrs;\n\twhile (White(*c))\n\t\tc++;\n\n\tif (EOL(*c))\n\t\treturn 1; /* no parameter */\n\n\t/* Parse the parameter values */\n\n\t/* Get the content-type */\n\n\tstart = c;\n\twhile (!EOL(*c) && !White(*c))\n\t\tc++;\n\n\tif (c == start) {\n\t\t/* No attribute */\n\t\tif (start == attrs)\n\t\t\tstart++;\n\t\treturn (start - attrs);\n\t}\n\tattrlist_add(attrlist, CONTENT_TYPE_PARAM, strlen(CONTENT_TYPE_PARAM),\n\t\t     start, c - start);\n\n\t/* Get the content-encoding */\n\twhile (White(*c))\n\t\tc++;\n\n\tif (!EOL(*c)) {\n\t\tstart = c;\n\t\twhile (!EOL(*c) && !White(*c))\n\t\t\tc++;\n\n\t\tif (c == start) {\n\t\t\t/* No attribute */\n\t\t\tif (start == attrs)\n\t\t\t\tstart++;\n\t\t\treturn (start - attrs);\n\t\t}\n\n\t\tattrlist_add(attrlist, CONTENT_ENCODING_PARAM,\n\t\t\t     strlen(CONTENT_ENCODING_PARAM), start, c - start);\n\t} else\n\t\treturn (c - attrs);\n\n\t/* Get the input-file */\n\twhile (White(*c))\n\t\tc++;\n\n\tif (!EOL(*c)) {\n\t\ti = pbs_quote_parse(c, &v, &e, QMGR_NO_WHITE_IN_VALUE);\n\t\tif (i == -1) {\n\t\t\tpstderr(\"qmgr: Out of memory\\n\");\n\t\t\tclean_up_and_exit(5);\n\t\t} else if (i > 0)\n\t\t\treturn (c - attrs);\n\n\t\t/* value ok */\n\t\tattrlist_add(attrlist, INPUT_FILE_PARAM, strlen(INPUT_FILE_PARAM), v,\n\t\t\t     strlen(v));\n\t\tif (strlen(v) > 0) {\n\t\t\tfree(v);\n\t\t}\n\n\t\tif (EOL(*e)) {\n\t\t\treturn 0; /* end of line */\n\t\t}\n\t\tc = e; /* otherwise more to parse */\n\t} else\n\t\treturn (c - attrs);\n\n\t/* See if there is another argument */\n\twhile (White(*c))\n\t\tc++;\n\n\tif (!EOL(*c))\n\t\treturn (c - attrs);\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tparams_export - parse the parameters to the MGR_CMD_EXPORT\n *\t\t\t<content-type> <content-encoding> <output_file>\n *\n *\tattrs        Text of the export command parameters.\n * \tOUT: attrlist     Address of the attribute-value structure.\n *\tdoper\tdirective operation type - must be export\n *\n * @return Returns:\n *        This routine 
returns zero upon successful completion.  If there is\n *        a syntax error, it will return the index into attrs where the error\n *        occurred.\n *\n * \tNote:\n *        The following is an example of the text input and what the resulting\n *        data structure should look like.\n *\n *\t\t\"content-type\" = \"application/x-python\"\n *\t \t\"content-encoding\" = \"base64\"\n *\t\t\"input-file\" = \"in_file.PY\"\n *\n *      attrlist ---> struct attropl *\n *                      |\n *                      |\n *                      \\/\n *                      \"content-type\"\n *                      \"\"\n *                      \"application/x-python\"\n *                      SET\n *                      ---------\n *                              |\n *                      \"content-encoding\"   <-\n *                      \"\"\n *                      \"base64\"\n *                      SET\n *                      ---------\n *                              |\n *                      \"output-file\"   <-\n *                      \"\"\n *                      \"out_file.PY\"\n *                      SET\n *                      NULL\n */\nint\nparams_export(char *attrs, struct attropl **attrlist, int doper)\n{\n\tint i;\n\tchar *c;     /* Pointer into the attrs text */\n\tchar *start; /* Pointer to the start of a word */\n\tchar *v;     /* value returned by pbs_quote_parse */\n\tchar *e;\n\n\tif ((attrs == NULL) || (attrlist == NULL)) {\n\t\tfprintf(stderr, \"params_export: bad argument\\n\");\n\t\texit(1);\n\t}\n\n\tif (doper != MGR_CMD_EXPORT)\n\t\treturn 1;\n\n\t/* Free the space from the previous structure */\n\tfreeattropl(*attrlist);\n\t*attrlist = NULL;\n\n\t/* Is there any thing to parse? 
*/\n\tc = attrs;\n\twhile (White(*c))\n\t\tc++;\n\n\tif (EOL(*c))\n\t\treturn 1; /* no parameter */\n\n\t/* Parse the parameter values */\n\n\t/* Get the content-type */\n\n\tstart = c;\n\twhile (!EOL(*c) && !White(*c))\n\t\tc++;\n\n\tif (c == start) {\n\t\t/* No attribute */\n\t\tif (start == attrs)\n\t\t\tstart++;\n\t\treturn (start - attrs);\n\t}\n\tattrlist_add(attrlist, CONTENT_TYPE_PARAM, strlen(CONTENT_TYPE_PARAM),\n\t\t     start, c - start);\n\n\t/* Get the content-encoding */\n\twhile (White(*c))\n\t\tc++;\n\n\tif (!EOL(*c)) {\n\t\tstart = c;\n\t\twhile (!EOL(*c) && !White(*c))\n\t\t\tc++;\n\n\t\tif (c == start) {\n\t\t\t/* No attribute */\n\t\t\tif (start == attrs)\n\t\t\t\tstart++;\n\t\t\treturn (start - attrs);\n\t\t}\n\n\t\tattrlist_add(attrlist, CONTENT_ENCODING_PARAM,\n\t\t\t     strlen(CONTENT_ENCODING_PARAM), start, c - start);\n\t} else\n\t\treturn (c - attrs);\n\n\t/* Get the OUTPUT_FILE_PARAM */\n\twhile (White(*c))\n\t\tc++;\n\n\tif (!EOL(*c)) {\n\t\ti = pbs_quote_parse(c, &v, &e, QMGR_NO_WHITE_IN_VALUE);\n\t\tif (i == -1) {\n\t\t\tpstderr(\"qmgr: Out of memory\\n\");\n\t\t\tclean_up_and_exit(5);\n\t\t} else if (i > 0) {\n\t\t\treturn (c - attrs);\n\t\t}\n\t\t/* value ok */\n\t\tattrlist_add(attrlist, OUTPUT_FILE_PARAM, strlen(OUTPUT_FILE_PARAM), v,\n\t\t\t     strlen(v));\n\t\tif (strlen(v) > 0) {\n\t\t\tfree(v);\n\t\t}\n\n\t\tif (EOL(*e)) {\n\t\t\treturn 0; /* end of line */\n\t\t}\n\t\tc = e; /* otherwise more to parse */\n\t} else {\n\t\t/* ok to not have OUTPUT_FILE_PARAM, just put empty string */\n\t\tattrlist_add(attrlist, OUTPUT_FILE_PARAM, strlen(OUTPUT_FILE_PARAM), \"\", 1);\n\t}\n\n\t/* See if there is another argument */\n\twhile (White(*c))\n\t\tc++;\n\n\tif (!EOL(*c))\n\t\treturn (c - attrs);\n\treturn 0;\n}\n\n/**\n * @brief\n *\twho: returns the username currently running this command\n *\n * @return string\n * @retval username\n *\n */\nchar *\nwho()\n{\n#ifdef WIN32\n\treturn (getlogin()); /* Windows version does not return 
NULL */\n\n#else\n\tstruct passwd *pw;\n\n\tif ((pw = getpwuid(getuid())) == NULL) {\n\t\treturn (\"\");\n\t}\n\n\tif (pw->pw_name == NULL)\n\t\treturn (\"\");\n\n\treturn (pw->pw_name);\n#endif\n}\n\nint\nmain(int argc, char **argv)\n{\n\tstatic char opts[] = \"ac:enz\"; /* See man getopt */\n\tstatic char usage[] = \"Usage: qmgr [-a] [-c command] [-e] [-n] [-z] [server...]\\n\";\n\tstatic char usag2[] = \"       qmgr --version\\n\";\n\tint aopt = FALSE;\t\t/* -a option */\n\tint eopt = FALSE;\t\t/* -e option */\n\tint nopt = FALSE;\t\t/* -n option */\n\tchar *copt = NULL;\t\t/* -c command option */\n\tint c;\t\t\t\t/* Individual option */\n\tint errflg = 0;\t\t\t/* Error flag */\n\tchar *request = NULL;\t\t/* Current request */\n\tint oper = MGR_CMD_CREATE;\t/* Operation: create, delete, set, unset, list, print */\n\tint type = MGR_OBJ_SERVER;\t/* Object type: server or queue */\n\tchar *name = NULL;\t\t/* Object name */\n\tstruct attropl *attribs = NULL; /* Pointer to attribute list */\n\tstruct objname *svrs;\n#ifndef WIN32\n\tint htmp_fd; /* for creating hooks temp file */\n#endif\n\n\t/*test for real deal or just version and exit*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\t/* Command line options */\n\twhile ((c = getopt(argc, argv, opts)) != EOF) {\n\t\tswitch (c) {\n\t\t\tcase 'a':\n\t\t\t\taopt = TRUE;\n\t\t\t\tbreak;\n\t\t\tcase 'c':\n\t\t\t\tcopt = optarg;\n\t\t\t\tbreak;\n\t\t\tcase 'e':\n\t\t\t\teopt = TRUE;\n\t\t\t\tbreak;\n\t\t\tcase 'n':\n\t\t\t\tnopt = TRUE;\n\t\t\t\tbreak;\n\t\t\tcase 'z':\n\t\t\t\tzopt = TRUE;\n\t\t\t\tbreak;\n\t\t\tcase '?':\n\t\t\tdefault:\n\t\t\t\terrflg++;\n\t\t\t\tbreak;\n\t\t}\n\t}\n\n\tif (errflg) {\n\t\tpstderr(usage);\n\t\tpstderr(usag2);\n\t\texit(1);\n\t}\n\n\tif (argc > optind)\n\t\tsvrs = strings2objname(&argv[optind], argc - optind, MGR_OBJ_SERVER);\n\telse\n\t\tsvrs = default_server_name();\n\n\t/*perform needed security library initializations (including none)*/\n\n\tif 
(CS_client_init() != CS_SUCCESS) {\n\t\tfprintf(stderr, \"qmgr: unable to initialize security library.\\n\");\n\t\texit(2);\n\t}\n\n\tpbs_strncpy(cur_user, who(), sizeof(cur_user));\n\tcur_host[0] = '\\0';\n\n\t/* obtain global information for hooks */\n\tif (pbs_loadconf(0) == 0) {\n\t\tfprintf(stderr, \"Failed to load pbs.conf file\\n\");\n\t\texit(2);\n\t}\n\n\tif (gethostname(cur_host, (sizeof(cur_host) - 1)) == 0)\n\t\tget_fullhostname(cur_host, cur_host, (sizeof(cur_host) - 1));\n\n\t/*\n\t * Get the Server Name which is used in hook related error messages:\n\t * 1. from PBS_PRIMARY if defined, if not\n\t * 2. from PBS_SERVER_HOST_NAME if defined, if not\n\t * 3. from PBS_SERVER, and as last resort\n\t * 4. use my host name\n\t */\n\tif (pbs_conf.pbs_primary != NULL) {\n\t\tpbs_strncpy(conf_full_server_name, pbs_conf.pbs_primary,\n\t\t\t    sizeof(conf_full_server_name));\n\t} else if (pbs_conf.pbs_server_host_name != NULL) {\n\t\tpbs_strncpy(conf_full_server_name, pbs_conf.pbs_server_host_name,\n\t\t\t    sizeof(conf_full_server_name));\n\t} else if (pbs_conf.pbs_server_name != NULL) {\n\t\tpbs_strncpy(conf_full_server_name, pbs_conf.pbs_server_name,\n\t\t\t    sizeof(conf_full_server_name));\n\t}\n\tif (conf_full_server_name[0] != '\\0') {\n\t\tget_fullhostname(conf_full_server_name, conf_full_server_name,\n\t\t\t\t (sizeof(conf_full_server_name) - 1));\n\t}\n\n\tpbs_asprintf(&hook_tempdir, \"%s/server_priv/%s\",\n\t\t     pbs_conf.pbs_home_path, PBS_HOOK_WORKDIR);\n\tpbs_asprintf(&hook_tempfile, \"%s/qmgr_hook%dXXXXXX\",\n\t\t     hook_tempdir, getpid());\n\n#ifdef WIN32\n\t/* mktemp() generates a filename */\n\tif (mktemp(hook_tempfile) == NULL) {\n\t\tsnprintf(hook_tempfile_errmsg, sizeof(hook_tempfile_errmsg),\n\t\t\t \"unable to generate a hook_tempfile from %s - %s\\n\",\n\t\t\t hook_tempfile, strerror(errno));\n\t\thook_tempfile[0] = '\\0'; /* hook_tempfile name generation not successful */\n\t}\n#else\n\t/*\n\t * For Linux/Unix, it is recommended to 
use mkstemp() for mktemp() is\n\t * dangerous - see mktemp(3).\n\t * mkstemp() generates and CREATES a filename\n\t */\n\tif ((htmp_fd = mkstemp(hook_tempfile)) == -1) {\n\t\tsnprintf(hook_tempfile_errmsg, sizeof(hook_tempfile_errmsg),\n\t\t\t \"unable to generate a hook_tempfile from %s - %s\\n\",\n\t\t\t hook_tempfile, strerror(errno));\n\t\thook_tempfile[0] = '\\0'; /* hook_tempfile name generation not successful */\n\t} else {\t\t\t /* success */\n\t\t(void) close(htmp_fd);\n\t\t(void) unlink(hook_tempfile); /* we'll recreate later if needed */\n\t}\n#endif /* Linux/Unix */\n\n\terrflg = connect_servers(svrs, ALL_SERVERS);\n\tif ((nservers == 0) || (errflg))\n\t\tclean_up_and_exit(3);\n\n\terrflg = set_active(MGR_OBJ_SERVER, svrs);\n\tif (errflg && aopt)\n\t\tclean_up_and_exit(4);\n\n\t/*\n\t * If no command was given on the command line, then read them from\n\t * stdin until end-of-file.  Otherwise, execute the one command only.\n\t */\n\tif (copt == NULL) {\n\n#ifdef QMGR_HAVE_HIST\n\t\tqmgr_hist_enabled = 0;\n\t\tif (isatty(0) && isatty(1)) {\n\t\t\tif (init_qmgr_hist(argv[0]) == 0)\n\t\t\t\tqmgr_hist_enabled = 1;\n\t\t}\n#endif\n\n\t\tprintf(\"Max open servers: %d\\n\", pbs_query_max_connections());\n\t\t/*\n\t\t * Passing the address of request since the memory is allocated\n\t\t * in the function get_request itself and passed back to the\n\t\t * caller\n\t\t */\n\t\twhile (get_request(&request) != EOF) {\n\t\t\tcheck_time = time(0);\n\t\t\tif (attribs) {\n\t\t\t\tPBS_free_aopl(attribs);\n\t\t\t\tattribs = NULL;\n\t\t\t}\n\t\t\tif (eopt)\n\t\t\t\tprintf(\"%s\\n\", request);\n\n\t\t\terrflg = parse(request, &oper, &type, &name, &attribs);\n\t\t\tif (errflg == -1) /* help */\n\t\t\t\tcontinue;\n\n\t\t\tif (aopt && errflg)\n\t\t\t\tclean_up_and_exit(1);\n\n\t\t\tif (!nopt && !errflg) {\n\t\t\t\terrflg = execute(aopt, oper, type, name, attribs);\n\t\t\t\tif (aopt && errflg)\n\t\t\t\t\tclean_up_and_exit(2);\n\t\t\t}\n\t\t\tif (request != NULL) 
{\n\t\t\t\tfree(request);\n\t\t\t\trequest = NULL;\n\t\t\t}\n\t\t\t/*\n\t\t\t * Deallocate the memory for the variable name whose memory\n\t\t\t * is allocated originally in the function parse\n\t\t\t */\n\t\t\tif (name != NULL) {\n\t\t\t\tfree(name);\n\t\t\t\tname = NULL;\n\t\t\t}\n\t\t}\n\t} else {\n\t\tif (eopt)\n\t\t\tprintf(\"%s\\n\", copt);\n\n\t\terrflg = parse(copt, &oper, &type, &name, &attribs);\n\t\tif (aopt && errflg)\n\t\t\tclean_up_and_exit(1);\n\n\t\tif (!nopt && !errflg) {\n\t\t\terrflg = execute(aopt, oper, type, name, attribs);\n\t\t\tif (aopt && errflg)\n\t\t\t\tclean_up_and_exit(2);\n\t\t}\n\t\tPBS_free_aopl(attribs);\n\t\t/*\n\t\t * Deallocate the memory for the variable name whose memory\n\t\t * is allocated originally in the function parse\n\t\t */\n\t\tif (name != NULL) {\n\t\t\tfree(name);\n\t\t\tname = NULL;\n\t\t}\n\t}\n\tif (errflg)\n\t\tclean_up_and_exit(errflg);\n\n\tclean_up_and_exit(0);\n\n\treturn 0;\n}\n\n/*\n * chk_special_attr_values - do additional syntax checking on the values\n *\tof certain attributes\n */\nstatic int\nchk_special_attr_values(struct attropl *paol)\n{\n\tint i;\n\tchar *dupval;\n\tint r;\n\n\ti = 0;\n\twhile (entlim_attrs[i]) {\n\t\tif (strcmp(paol->name, entlim_attrs[i]) == 0) {\n\t\t\tdupval = strdup(paol->value);\n\t\t\tif (dupval == NULL)\n\t\t\t\treturn 0;\n\t\t\tr = entlim_parse(dupval, paol->resource, NULL, NULL);\n\t\t\tfree(dupval);\n\t\t\treturn (-r);\n\t\t}\n\t\t++i;\n\t}\n\treturn 0;\n}\n\n/*\n *\n *\tattributes - parse attribute-value pairs in the format:\n *\t\t     attribute OP value\n *\t\t     which are on the qmgr input\n *\n * \t  attrs        Text of the attribute-value pairs.\n * \t  OUT: attrlist     Address of the attribute-value structure.\n * \t  doper\tdirective operation type (create, delete, set, ...)\n *\n * \tReturns:\n *        This routine returns zero upon successful completion.  
If there is\n *        a syntax error, it will return the index or offset into the attrs\n *\t  character string (input) where the error occurred.  This can be used\n *\t  by the calling function to indicate to the user the character in error\n *\n * \tNote:\n *        The following is an example of the text input and what the resulting\n *        data structure should look like.\n *\n *      a1 = v1, a2.r2 += v2 , a3-=v3\n *\n *      attrlist ---> struct attropl *\n *                      |\n *                      |\n *                      \\/\n *                      \"a1\"\n *                      \"\"\n *                      \"v1\"\n *                      SET\n *                      ---------\n *                              |\n *                      \"a2\"   <-\n *                      \"r2\"\n *                      \"v2\"\n *                      INCR\n *                      ---------\n *                              |\n *                      \"a3\"   <-\n *                      \"\"\n *                      \"v3\"\n *                      DECR\n *                      NULL\n */\nint\nattributes(char *attrs, struct attropl **attrlist, int doper)\n{\n\tint i;\n\tchar *c;     /* Pointer into the attrs text */\n\tchar *start; /* Pointer to the start of a word */\n\tchar *v;     /* value returned by pbs_quote_parse */\n\tchar *e;\n\tint ltxt; /* Length of a word */\n\tstruct attropl *paol;\n\tchar **pentlim_name;\n\n\t/* Free the space from the previous structure */\n\tfreeattropl(*attrlist);\n\t*attrlist = NULL;\n\n\t/* Is there any thing to parse? 
*/\n\tc = attrs;\n\twhile (White(*c))\n\t\tc++;\n\n\tif (EOL(*c))\n\t\treturn 0;\n\n\t/* Parse the attribute-values */\n\twhile (TRUE) {\n\t\t/* Get the attribute and resource */\n\t\twhile (White(*c))\n\t\t\tc++;\n\t\tif (!EOL(*c)) {\n\t\t\tstart = c;\n\t\t\twhile ((*c != '.') && (*c != ',') && !EOL(*c) && !Oper(c) && !White(*c))\n\t\t\t\tc++;\n\n\t\t\tif (c == start) {\n\t\t\t\t/* No attribute */\n\t\t\t\tif (start == attrs)\n\t\t\t\t\tstart++;\n\t\t\t\treturn (start - attrs);\n\t\t\t}\n\n\t\t\t/* Allocate storage for attribute structure */\n\t\t\tMstruct(paol, struct attropl);\n\t\t\tpaol->name = NULL;\n\t\t\tpaol->resource = NULL;\n\t\t\tpaol->value = NULL;\n\t\t\tpaol->next = *attrlist;\n\t\t\t*attrlist = paol;\n\n\t\t\t/* Copy attribute into structure */\n\t\t\tltxt = c - start;\n\t\t\tMstring(paol->name, ltxt + 1);\n\t\t\tpbs_strncpy(paol->name, start, ltxt + 1);\n\n\t\t\t/* Resource, if any */\n\t\t\tif (*c == '.') {\n\t\t\t\tstart = ++c;\n\t\t\t\tif ((doper == MGR_CMD_UNSET) ||\n\t\t\t\t    (doper == MGR_CMD_LIST) ||\n\t\t\t\t    (doper == MGR_CMD_PRINT)) {\n\t\t\t\t\twhile (!White(*c) && !Oper(c) && !EOL(*c) && !(*c == ','))\n\t\t\t\t\t\tc++;\n\t\t\t\t} else {\n\t\t\t\t\twhile (!White(*c) && !Oper(c) && !EOL(*c))\n\t\t\t\t\t\tc++;\n\t\t\t\t}\n\n\t\t\t\tltxt = c - start;\n\t\t\t\tif (ltxt == 0) /* No resource */\n\t\t\t\t\treturn (start - attrs);\n\n\t\t\t\tMstring(paol->resource, ltxt + 1);\n\t\t\t\tpbs_strncpy(paol->resource, start, ltxt + 1);\n\t\t\t}\n\t\t} else\n\t\t\treturn (c - attrs);\n\n\t\t/* Get the operator */\n\t\twhile (White(*c))\n\t\t\tc++;\n\n\t\tif (!EOL(*c)) {\n\t\t\tswitch (*c) {\n\t\t\t\tcase '=':\n\t\t\t\t\tpaol->op = SET;\n\t\t\t\t\tc++;\n\t\t\t\t\tbreak;\n\t\t\t\tcase '+':\n\t\t\t\t\tpaol->op = INCR;\n\t\t\t\t\tc += 2;\n\t\t\t\t\tbreak;\n\t\t\t\tcase '-':\n\t\t\t\t\tpaol->op = DECR;\n\t\t\t\t\tc += 2;\n\t\t\t\t\tbreak;\n\t\t\t\tcase ',':\n\t\t\t\t\t/* Attribute with no value */\n\t\t\t\t\tMstring(paol->value, 
1);\n\t\t\t\t\tpaol->value[0] = '\\0';\n\t\t\t\t\tgoto next;\n\t\t\t\tdefault:\n\t\t\t\t\treturn (c - attrs);\n\t\t\t}\n\n\t\t\t/* The unset command must not have an operator or value */\n\t\t\tif (doper == MGR_CMD_UNSET)\n\t\t\t\treturn (c - attrs);\n\t\t} else if (doper != MGR_CMD_CREATE && doper != MGR_CMD_SET) {\n\t\t\tMstring(paol->value, 1);\n\t\t\tpaol->value[0] = '\\0';\n\t\t\treturn 0;\n\t\t} else\n\t\t\treturn (c - attrs);\n\n\t\t/* Get the value */\n\t\twhile (White(*c))\n\t\t\tc++;\n\n\t\t/* need to know if the attribute is of the entlim variety */\n\t\t/* by looking through the list of entlim attribute names  */\n\t\tpentlim_name = entlim_attrs;\n\t\twhile (*pentlim_name) {\n\t\t\tif (strcasecmp(*pentlim_name, paol->name) == 0)\n\t\t\t\tbreak;\n\t\t\t++pentlim_name;\n\t\t}\n\n\t\tif (!EOL(*c)) {\n\t\t\tif (*pentlim_name == NULL) {\n\t\t\t\t/* regular type of attribute, unquoted white space not allowed in val */\n\t\t\t\ti = pbs_quote_parse(c, &v, &e, QMGR_NO_WHITE_IN_VALUE);\n\t\t\t} else {\n\t\t\t\t/* entlim type of attribute, unquoted white space is allowed in val */\n\t\t\t\ti = pbs_quote_parse(c, &v, &e, QMGR_ALLOW_WHITE_IN_VALUE);\n\t\t\t}\n\t\t\tif (i == -1) {\n\t\t\t\tpstderr(\"qmgr: Out of memory\\n\");\n\t\t\t\tclean_up_and_exit(5);\n\t\t\t} else if (i > 0)\n\t\t\t\treturn (c - attrs);\n\t\t\t/* value ok */\n\t\t\tpaol->value = v;\n\n\t\t\t/* Add special checks for syntax of value for certain attributes */\n\t\t\ti = chk_special_attr_values(paol);\n\t\t\tif (i > 0)\t\t\t    /* error return,  i is offset of error in input */\n\t\t\t\treturn (c - attrs + i - 1); /* c - attrs = start + offset is err loc */\n\n\t\t\tif (EOL(*e))\n\t\t\t\treturn 0; /* end of line */\n\t\t\tc = e;\t\t  /* otherwise more to parse */\n\t\t} else\n\t\t\treturn (c - attrs);\n\n\t\t/* See if there is another attribute-value pair */\n\tnext:\n\t\twhile (White(*c))\n\t\t\tc++;\n\t\tif (EOL(*c))\n\t\t\treturn 0;\n\n\t\tif (*c == ',')\n\t\t\tc++;\n\t\telse\n\t\t\treturn (c - 
attrs);\n\t}\n}\n\n/**\n * @brief\n *\tmake_connection - open a connection to the server and assign\n *\t\t\t  server entry\n *\n * @param[in] name - name of server to connect to\n *\n *\treturns server struct if connection can be made or NULL if not\n *\n */\nstruct server *\nmake_connection(char *name)\n{\n\tint connection;\n\tstruct server *svr = NULL;\n\n\tif ((connection = cnt2server(name)) > 0) {\n\t\tsvr = new_server();\n\t\tMstring(svr->s_name, strlen(name) + 1);\n\t\tstrcpy(svr->s_name, name);\n\t\tsvr->s_connect = connection;\n\t} else\n\t\tPSTDERR1(\"qmgr: cannot connect to server %s\\n\", name)\n\n\treturn svr;\n}\n\n/**\n * @brief\n *\tconnect_servers - call connect to connect to each server in list\n *\t\t\t  and add them to the global server list\n *\n * @param[in] server_names - list of objnames\n * @param[in] numservers   - the number of servers to connect to or -1 for all\n *\t\t\t the servers on the list\n *\n * @return int\n * @retval TRUE/1    Failure\n * @retval FALSE/0   Success\n *\n */\nint\nconnect_servers(struct objname *server_names, int numservers)\n{\n\tint error = FALSE;\n\tstruct server *cur_svr;\n\tstruct objname *cur_obj;\n\tint i;\n\tint max_servers;\n\n\tmax_servers = pbs_query_max_connections();\n\n\tclose_non_ref_servers();\n\n\tif (nservers < max_servers) {\n\t\tcur_obj = server_names;\n\n\t\t/* if numservers == -1 (all servers) the var i will never equal zero */\n\t\tfor (i = numservers; i && cur_obj; i--, cur_obj = cur_obj->next) {\n\t\t\tnservers++;\n\t\t\tif ((cur_svr = make_connection(cur_obj->svr_name)) == NULL) {\n\t\t\t\tnservers--;\n\t\t\t\terror = TRUE;\n\t\t\t}\n\n\t\t\tif (cur_svr != NULL) {\n\t\t\t\tcur_obj->svr = cur_svr;\n\t\t\t\tcur_svr->ref++;\n\t\t\t\tcur_svr->next = servers;\n\t\t\t\tservers = cur_svr;\n\t\t\t}\n\t\t}\n\t} else {\n\t\tpstderr(\"qmgr: max server connections reached.\\n\");\n\t\terror = 1;\n\t}\n\treturn error;\n}\n\n/**\n * @brief\n *\tblanks - print requested spaces\n *\n * @param[in] number 
- The number of spaces\n *\n * @return Void\n *\n */\nvoid\nblanks(int number)\n{\n\tchar spaces[1024];\n\tint i;\n\n\tif (number < 1023) {\n\t\tfor (i = 0; i < number; i++)\n\t\t\tspaces[i] = ' ';\n\t\tspaces[i] = '\\0';\n\n\t\tpstderr(spaces);\n\t} else\n\t\tpstderr(\"Too many blanks requested.\\n\");\n}\n\n/**\n * @brief\n *\tcheck_list - check a comma delimited list for valid syntax\n *\n * @param[in] list  - A comma delimited list.\n * @param[in] type  - server, queue, node, or resource\n *\n * valid syntax: name[@server][,name]\n *\t\texample: batch@svr1,debug\n *\n * @return int\n * @retval\t0\tIf the syntax of the list is correct for all commands.\n * @retval     >0\tThe number of chars into the list where the error occurred\n *\n */\nint\ncheck_list(char *list, int type)\n{\n\tchar *foreptr, *backptr;\n\n\tbackptr = list;\n\n\twhile (!EOL(*backptr)) {\n\t\tforeptr = backptr;\n\n\t\t/* object names (except nodes) have to start with an alpha character or\n\t\t * can be left off if all objects of the same type are wanted\n\t\t */\n\t\tif (type == MGR_OBJ_NODE) {\n\t\t\tif (!isalnum((int) *backptr) && *backptr != '@')\n\t\t\t\treturn (backptr - list ? backptr - list : 1);\n\t\t} else if (!isalpha((int) *backptr) && *backptr != '@')\n\t\t\treturn (backptr - list ? backptr - list : 1);\n\n\t\twhile (*foreptr != ',' && *foreptr != '@' && !EOL(*foreptr))\n\t\t\tforeptr++;\n\n\t\tif (*foreptr == '@') {\n\t\t\tforeptr++;\n\n\t\t\t/* error on \"name@\" or \"name@,\" */\n\t\t\tif (EOL(*foreptr) || *foreptr == ',')\n\t\t\t\treturn (foreptr - list);\n\n\t\t\twhile (!EOL(*foreptr) && *foreptr != ',')\n\t\t\t\tforeptr++;\n\n\t\t\t/* error on name@svr@blah */\n\t\t\tif (*foreptr == '@')\n\t\t\t\treturn (foreptr - list);\n\t\t}\n\n\t\tif (*foreptr == ',') {\n\t\t\tforeptr++;\n\t\t\t/* error on \"name,\" */\n\t\t\tif (EOL(*foreptr))\n\t\t\t\treturn (foreptr - list ? foreptr - list : 1);\n\t\t}\n\t\tbackptr = foreptr;\n\t}\n\treturn 0; /* Success! 
*/\n}\n\n/**\n * @brief\n *\tdisconnect_from_server  - disconnect from one server and clean up\n *\n * @param[in] svr - the server to disconnect from\n *\n * @return  Void\n *\n */\nstatic void\ndisconnect_from_server(struct server *svr)\n{\n\tpbs_disconnect(svr->s_connect);\n\tfree_server(svr);\n\tnservers--;\n}\n\n/**\n * @brief\n *\tclean_up_and_exit - disconnect from the servers and free memory used\n *\t\t\t    by active object lists and then exits\n *\n * @param[in]  exit_val - value to pass to exit\n *\n * @return Void\n *\n */\nvoid\nclean_up_and_exit(int exit_val)\n{\n\tstruct server *cur_svr, *next_svr;\n\n\tfree(hook_tempdir);\n\tfree(hook_tempfile);\n\tfree_objname_list(active_servers);\n\tfree_objname_list(active_queues);\n\tfree_objname_list(active_nodes);\n\n\tcur_svr = servers;\n\n\twhile (cur_svr) {\n\t\tnext_svr = cur_svr->next;\n\t\tdisconnect_from_server(cur_svr);\n\t\tcur_svr = next_svr;\n\t}\n\n\t/*cleanup security library initializations before exiting*/\n\tCS_close_app();\n\n\texit(exit_val);\n}\n\n/**\n * @brief\n *\tremove_char - remove char from a string\n *\n * @param[in]  ptr    pointer to a string\n * @param[in]  ch     character to be removed\n *\n * @return Void\n *\n */\nvoid\nremove_char(char *ptr, int ch)\n{\n\tint index = 0;\n\tint i;\n\tfor (i = 0; ptr[i] != '\\0'; ++i)\n\t\tif (ptr[i] != ch)\n\t\t\tptr[index++] = ptr[i];\n\tptr[index] = '\\0';\n}\n\n/**\n * @brief\n *\tget_resc_type - for a named resource, look it up in the batch_status\n *\treturned by pbs_statrsc().\n *\n * @return int\n * @retval 0\tIf the resource is not found, or if the type is not available,\n *\n */\nint\nget_resc_type(char *rname, struct batch_status *pbs)\n{\n\tstruct attrl *pat;\n\n\twhile (pbs) {\n\t\tif (strcmp(rname, pbs->name) == 0) {\n\t\t\tpat = pbs->attribs;\n\t\t\twhile (pat) {\n\t\t\t\tif (strcmp(\"type\", pat->name) == 0)\n\t\t\t\t\treturn (atoi(pat->value));\n\t\t\t\tpat = pat->next;\n\t\t\t}\n\t\t\treturn 0;\n\t\t}\n\t\tpbs = 
pbs->next;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tDetermine if a given queue name belongs to a reservation\n *\n * @param[in] sd    - server to which queue belongs\n * @param[in] qname - queue name to check\n *\n * @return Error code\n * @retval 0 if queue is not a reservation queue\n * @retval 1 if queue is a reservation queue\n *\n */\nstatic int\nis_reservation_queue(int sd, char *qname)\n{\n\tstruct batch_status *bs = NULL;\n\tstruct attrl *resv_queue = NULL;\n\n\t/* passing \"\" as value because DIS expects a non-NULL value */\n\tset_attr_error_exit(&resv_queue, ATTR_queue, \"\");\n\tif (resv_queue != NULL) {\n\t\tbs = pbs_statresv(sd, NULL, resv_queue, NULL);\n\t\twhile (bs != NULL) {\n\t\t\tif (bs->attribs != NULL && bs->attribs->value != NULL) {\n\t\t\t\tif (strcmp(qname, bs->attribs->value) == 0)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\tbs = bs->next;\n\t\t}\n\t\tif (resv_queue->name != NULL)\n\t\t\tfree(resv_queue->name);\n\t\tif (resv_queue->value != NULL)\n\t\t\tfree(resv_queue->value);\n\t\tfree(resv_queue);\n\t}\n\tif (bs == NULL)\n\t\treturn 0;\n\tpbs_statfree(bs);\n\treturn 1;\n}\n\n/**\n * @brief\n *\tdisplay - format and output the status information.\n *\n * @par Functionality:\n * \tprints out all the information in a batch_status struct in either\n *\treadable form (one attribute per line) or formatted for inputting back\n *\tinto qmgr.\n *\n * @param[in]\totype\tObject type, MGR_OBJ_*\n * @param[in]\tptype\tParent Object type, MGR_OBJ_*\n * @param[in]\toname\tObject name\n * @param[in]\tstatus\tAttribute list of the object in form of batch_status\n * @param[in]\tformat\tTrue (nonzero) if the output should be formatted\n *\t\t\tto look like qmgr command input\n * @param[in]\tmysvr\tpointer to current \"server\" structure on which to find\n *\t\t     info about resources\n *\n * @return\tvoid\n *\n * @par Side Effects: None\n *\n */\nvoid\ndisplay(int otype, int ptype, char *oname, struct batch_status *status,\n\tint format, struct server 
*mysvr)\n{\n\tstruct attrl *attr;\n\tchar *c, *e;\n\tchar q;\n\tint l, comma, do_comma, first, indent_len;\n\tchar dump_msg[HOOK_MSG_SIZE];\n\tchar *hooktmp = NULL;\n\tint custom_resource = FALSE;\n\tecl_attribute_def *attrdef_l = NULL;\n\tint attrdef_size = 0, i;\n\tstatic struct attropl exp_attribs[] = {\n\t\t{(struct attropl *) &exp_attribs[1],\n\t\t CONTENT_TYPE_PARAM,\n\t\t NULL,\n\t\t HOOKSTR_CONTENT,\n\t\t SET},\n\t\t{(struct attropl *) &exp_attribs[2],\n\t\t CONTENT_ENCODING_PARAM,\n\t\t NULL,\n\t\t HOOKSTR_BASE64,\n\t\t SET},\n\t\t{NULL,\n\t\t OUTPUT_FILE_PARAM,\n\t\t NULL,\n\t\t NULL, /* has to be constant in some compilers like IRIX */\n\t\t SET},\n\t};\n\n\tstatic struct attropl exp_attribs_config[] = {\n\t\t{(struct attropl *) &exp_attribs_config[1],\n\t\t CONTENT_TYPE_PARAM,\n\t\t NULL,\n\t\t HOOKSTR_CONFIG,\n\t\t SET},\n\t\t{(struct attropl *) &exp_attribs_config[2],\n\t\t CONTENT_ENCODING_PARAM,\n\t\t NULL,\n\t\t HOOKSTR_BASE64,\n\t\t SET},\n\t\t{NULL,\n\t\t OUTPUT_FILE_PARAM,\n\t\t NULL,\n\t\t NULL, /* has to be constant in some compilers like IRIX */\n\t\t SET},\n\t};\n\n\t/* the OUTPUT_FILE_PARAM entry */\n\thooktmp = base(hook_tempfile);\n\texp_attribs[2].value = hooktmp ? hooktmp : \"\";\n\texp_attribs_config[2].value = hooktmp ? 
hooktmp : \"\";\n\n\tif (format) {\n\t\tif (otype == MGR_OBJ_SERVER)\n\t\t\tprintf(\"#\\n# Set server attributes.\\n#\\n\");\n\t\telse if (otype == MGR_OBJ_QUEUE)\n\t\t\tprintf(\"#\\n# Create queues and set their attributes.\\n#\\n\");\n\t\telse if (otype == MGR_OBJ_NODE)\n\t\t\tprintf(\"#\\n# Create nodes and set their properties.\\n#\\n\");\n\t\telse if (otype == MGR_OBJ_SITE_HOOK)\n\t\t\tprintf(\"#\\n# Create hooks and set their properties.\\n#\\n\");\n\t\telse if (otype == MGR_OBJ_PBS_HOOK)\n\t\t\tprintf(\"#\\n# Set PBS hooks properties.\\n#\\n\");\n\t}\n\n\tif (otype == MGR_OBJ_SERVER) {\n\t\tattrdef_l = ecl_svr_attr_def;\n\t\tattrdef_size = ecl_svr_attr_size;\n\t} else if (otype == MGR_OBJ_SCHED) {\n\t\tattrdef_l = ecl_sched_attr_def;\n\t\tattrdef_size = ecl_sched_attr_size;\n\t} else if (otype == MGR_OBJ_QUEUE) {\n\t\tattrdef_l = ecl_que_attr_def;\n\t\tattrdef_size = ecl_que_attr_size;\n\t} else if (otype == MGR_OBJ_NODE) {\n\t\tattrdef_l = ecl_node_attr_def;\n\t\tattrdef_size = ecl_node_attr_size;\n\t}\n\n\twhile (status != NULL) {\n\t\tif (otype == MGR_OBJ_SERVER) {\n\t\t\tif (!format)\n\t\t\t\tprintf(\"Server %s\\n\", status->name);\n\t\t} else if (otype == MGR_OBJ_SCHED) {\n\t\t\tif ((oname != NULL) && *oname && strcmp(oname, status->name)) {\n\t\t\t\tstatus = status->next;\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tif (format) {\n\t\t\t\tprintf(\"#\\n# Create and define scheduler %s\\n#\\n\", status->name);\n\t\t\t\tprintf(\"create sched %s\\n\", status->name);\n\t\t\t} else\n\t\t\t\tprintf(\"Sched %s\\n\", status->name);\n\n\t\t} else if (otype == MGR_OBJ_QUEUE) {\n\t\t\t/* When printing server, skip display of reservation queue. 
This is done\n\t\t\t * to prevent recreating the reservation queue upon migration of a server\n\t\t\t * configuration.\n\t\t\t */\n\t\t\tif ((ptype == MGR_OBJ_SERVER) && is_reservation_queue(mysvr->s_connect,\n\t\t\t\t\t\t\t\t\t      status->name)) {\n\t\t\t\tstatus = status->next;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tif (format) {\n\t\t\t\tprintf(\"#\\n# Create and define queue %s\\n#\\n\", status->name);\n\t\t\t\tprintf(\"create queue %s\\n\", status->name);\n\t\t\t} else\n\t\t\t\tprintf(\"Queue %s\\n\", status->name);\n\t\t} else if (otype == MGR_OBJ_NODE) {\n\t\t\tif (format) {\n\t\t\t\tfirst = TRUE;\n\t\t\t\tprintf(\"#\\n# Create and define node %s\\n#\\n\", status->name);\n\t\t\t\tprintf(\"create node %s\", status->name);\n\t\t\t\tif ((c = get_attr(status->attribs, ATTR_NODE_Host, NULL)) != NULL) {\n\t\t\t\t\tif (strcmp(c, status->name) != 0) {\n\t\t\t\t\t\tprintf(\" %s=%s\", ATTR_NODE_Mom, c);\n\t\t\t\t\t\tfirst = 0;\n\t\t\t\t\t}\n\t\t\t\t} else if ((c = get_attr(status->attribs, ATTR_NODE_Mom, NULL)) != NULL) {\n\t\t\t\t\tif (strcmp(c, status->name) != 0) {\n\t\t\t\t\t\tif (format && (strchr(c, (int) ',') != NULL))\n\t\t\t\t\t\t\tprintf(\" %s=\\\"%s\\\"\", ATTR_NODE_Mom, c); /* quote value */\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\tprintf(\" %s=%s\", ATTR_NODE_Mom, c);\n\t\t\t\t\t\tfirst = 0;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif ((c = get_attr(status->attribs, ATTR_NODE_Port, NULL)) != NULL) {\n\t\t\t\t\tif (atoi(c) != PBS_MOM_SERVICE_PORT) {\n\t\t\t\t\t\tif (first)\n\t\t\t\t\t\t\tprintf(\" \");\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\tprintf(\",\");\n\t\t\t\t\t\tprintf(\"%s=%s\", ATTR_NODE_Port, c);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tprintf(\"\\n\");\n\t\t\t} else\n\t\t\t\tprintf(\"Node %s\\n\", status->name);\n\t\t} else if (otype == MGR_OBJ_SITE_HOOK) {\n\t\t\tif (format) {\n\t\t\t\tprintf(\"#\\n# Create and define hook %s\\n#\\n\", show_nonprint_chars(status->name));\n\t\t\t\tprintf(\"create hook %s\\n\", show_nonprint_chars(status->name));\n\t\t\t} 
else\n\t\t\t\tprintf(\"Hook %s\\n\", show_nonprint_chars(status->name));\n\t\t} else if (otype == MGR_OBJ_PBS_HOOK) {\n\t\t\tif (format) {\n\t\t\t\tprintf(\"#\\n# Set pbshook %s\\n#\\n\", show_nonprint_chars(status->name));\n\t\t\t} else\n\t\t\t\tprintf(\"Hook %s\\n\", show_nonprint_chars(status->name));\n\t\t} else if (otype == MGR_OBJ_RSC) {\n\t\t\tif ((oname == NULL) || (strcmp(oname, \"\") == 0)) {\n\t\t\t\tif (strcmp(status->name, RESOURCE_UNKNOWN) == 0) {\n\t\t\t\t\tcustom_resource = TRUE;\n\t\t\t\t\tstatus = status->next;\n\t\t\t\t\tif (status)\n\t\t\t\t\t\tprintf(\"#\\n# Create resources and set their properties.\\n#\\n\");\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\tif (custom_resource == FALSE) {\n\t\t\t\t\tstatus = status->next;\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (format) {\n\t\t\t\tprintf(\"#\\n# Create and define resource %s\\n#\\n\", status->name);\n\t\t\t\tprintf(\"create resource %s\\n\", status->name);\n\t\t\t} else\n\t\t\t\tprintf(\"Resource %s\\n\", status->name);\n\t\t}\n\n\t\tattr = status->attribs;\n\n\t\twhile (attr != NULL) {\n\t\t\tif (format) {\n\t\t\t\tif ((otype == MGR_OBJ_SITE_HOOK) || (otype == MGR_OBJ_PBS_HOOK) ||\n\t\t\t\t    is_attr(otype, attr->name, TYPE_ATTR_PUBLIC)) {\n\t\t\t\t\tif ((otype != MGR_OBJ_SITE_HOOK) && (otype != MGR_OBJ_PBS_HOOK) &&\n\t\t\t\t\t    ((strcmp(attr->name, ATTR_NODE_Host) == 0) ||\n\t\t\t\t\t     (strcmp(attr->name, ATTR_NODE_Mom) == 0) ||\n\t\t\t\t\t     (strcmp(attr->name, ATTR_NODE_Port) == 0))) {\n\t\t\t\t\t\t/* skip Host, Mom and Port, already done on line with name */\n\t\t\t\t\t\tattr = attr->next;\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t\tif ((otype != MGR_OBJ_SITE_HOOK) && (otype != MGR_OBJ_PBS_HOOK) &&\n\t\t\t\t\t    (strcmp(attr->name, ATTR_NODE_state) == 0) &&\n\t\t\t\t\t    ((strncmp(attr->value, ND_state_unknown, strlen(ND_state_unknown)) == 0) ||\n\t\t\t\t\t     (strcmp(attr->value, ND_down) == 0))) {\n\t\t\t\t\t\t/* don't record \"Down\" or \"state-unknown\" 
*/\n\t\t\t\t\t\tattr = attr->next;\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t\tif (otype == MGR_OBJ_RSC) {\n\t\t\t\t\t\tif ((attr != NULL) && (strcmp(attr->name, ATTR_RESC_TYPE) == 0)) {\n\t\t\t\t\t\t\tstruct resc_type_map *rtm = find_resc_type_map_by_typev(atoi(attr->value));\n\t\t\t\t\t\t\tif (rtm) {\n\t\t\t\t\t\t\t\tprintf(\"set resource %s type = %s\\n\", status->name, rtm->rtm_rname);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tattr = attr->next;\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif ((attr != NULL) && (strcmp(attr->name, ATTR_RESC_FLAG) == 0)) {\n\t\t\t\t\t\t\tchar *rfm = find_resc_flag_map(atoi(attr->value));\n\t\t\t\t\t\t\tif ((rfm != NULL) && (strcmp(rfm, \"\") != 0)) {\n\t\t\t\t\t\t\t\tprintf(\"set resource %s flag = %s\\n\", status->name, rfm);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif (rfm != NULL) {\n\t\t\t\t\t\t\t\tfree(rfm);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tattr = attr->next;\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tif ((attr->resource != NULL) &&\n\t\t\t\t\t    (get_resc_type(attr->resource, mysvr->s_rsc) == ATR_TYPE_STR))\n\t\t\t\t\t\tdo_comma = FALSE; /* single string, don't parse substrings on a comma */\n\t\t\t\t\telse\n\t\t\t\t\t\tdo_comma = TRUE;\n\t\t\t\t\tfirst = TRUE;\n\t\t\t\t\tc = attr->value;\n\t\t\t\t\te = c;\n\t\t\t\t\twhile (*c) {\n\t\t\t\t\t\tprintf(\"set \");\n\t\t\t\t\t\tif (otype == MGR_OBJ_SERVER) {\n\t\t\t\t\t\t\tprintf(\"server \");\n\t\t\t\t\t\t} else if (otype == MGR_OBJ_SCHED) {\n\t\t\t\t\t\t\tif (strcmp(status->name, PBS_DFLT_SCHED_NAME) == 0)\n\t\t\t\t\t\t\t\tprintf(\"sched \");\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\tprintf(\"sched %s \", status->name);\n\t\t\t\t\t\t} else if (otype == MGR_OBJ_QUEUE) {\n\t\t\t\t\t\t\tprintf(\"queue %s \", status->name);\n\t\t\t\t\t\t} else if (otype == MGR_OBJ_NODE) {\n\t\t\t\t\t\t\tprintf(\"node %s \", status->name);\n\t\t\t\t\t\t} else if (otype == MGR_OBJ_SITE_HOOK)\n\t\t\t\t\t\t\tprintf(\"hook %s \", show_nonprint_chars(status->name));\n\t\t\t\t\t\telse if (otype 
== MGR_OBJ_PBS_HOOK)\n\t\t\t\t\t\t\tprintf(\"pbshook %s \", show_nonprint_chars(status->name));\n\n\t\t\t\t\t\tif (attr->name != NULL)\n\t\t\t\t\t\t\tprintf(\"%s\", attr->name);\n\t\t\t\t\t\tif (attr->resource != NULL)\n\t\t\t\t\t\t\tprintf(\".%s\", attr->resource);\n\t\t\t\t\t\tif (attr->value != NULL) {\n\t\t\t\t\t\t\tfor (i = 0; i < attrdef_size; i++) {\n\t\t\t\t\t\t\t\tif (strcmp(attr->name, attrdef_l[i].at_name) == 0) {\n\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif ((attrdef_l != NULL) && (attrdef_l[i].at_type == ATR_TYPE_STR)) {\n\t\t\t\t\t\t\t\tif (strpbrk(c, \"\\\"' ,\") != NULL) {\n\t\t\t\t\t\t\t\t\tif (strchr(c, (int) '\"'))\n\t\t\t\t\t\t\t\t\t\tq = '\\'';\n\t\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t\t\tq = '\"';\n\t\t\t\t\t\t\t\t\tprintf(\" = %c%s%c\\n\", q, show_nonprint_chars(c), q);\n\t\t\t\t\t\t\t\t} else\n\t\t\t\t\t\t\t\t\tprintf(\" = %s\\n\", show_nonprint_chars(c));\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\tif (attr->op == INCR)\n\t\t\t\t\t\t\t\t\tprintf(\" += \");\n\t\t\t\t\t\t\t\telse if (first)\n\t\t\t\t\t\t\t\t\tprintf(\" = \");\n\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t\tprintf(\" += \");\n\n\t\t\t\t\t\t\t\tfirst = FALSE;\n\n\t\t\t\t\t\t\t\twhile (*e) {\n\t\t\t\t\t\t\t\t\tif ((do_comma == TRUE) && (*e == ',')) {\n\t\t\t\t\t\t\t\t\t\t*e++ = '\\0';\n\t\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\te++;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tif (strpbrk(c, \"\\\"' ,\") != NULL) {\n\t\t\t\t\t\t\t\t\t/* need to quote string */\n\t\t\t\t\t\t\t\t\tif (strchr(c, (int) '\"'))\n\t\t\t\t\t\t\t\t\t\tq = '\\'';\n\t\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t\t\tq = '\"';\n\t\t\t\t\t\t\t\t\tprintf(\"%c%s%c\", q, show_nonprint_chars(c), q);\n\t\t\t\t\t\t\t\t} else\n\t\t\t\t\t\t\t\t\tprintf(\"%s\", show_nonprint_chars(c)); /* no quoting */\n\n\t\t\t\t\t\t\t\tc = e;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tprintf(\"\\n\");\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tindent_len = 
4;\n\t\t\t\tif (otype == MGR_OBJ_RSC) {\n\t\t\t\t\tif ((attr != NULL) && (strcmp(attr->name, \"type\") == 0)) {\n\t\t\t\t\t\tstruct resc_type_map *rtm = find_resc_type_map_by_typev(atoi(attr->value));\n\t\t\t\t\t\tif (rtm) {\n\t\t\t\t\t\t\tprintf(\"%*s\", indent_len, \" \");\n\t\t\t\t\t\t\tprintf(\"type = %s\\n\", rtm->rtm_rname);\n\t\t\t\t\t\t}\n\t\t\t\t\t} else if ((attr != NULL) && (strcmp(attr->name, \"flag\") == 0)) {\n\t\t\t\t\t\tchar *rfm = find_resc_flag_map(atoi(attr->value));\n\t\t\t\t\t\tif ((rfm != NULL) && (strcmp(rfm, \"\") != 0)) {\n\t\t\t\t\t\t\tprintf(\"%*s\", indent_len, \" \");\n\t\t\t\t\t\t\tprintf(\"flag = %s\\n\", rfm);\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (rfm != NULL) {\n\t\t\t\t\t\t\tfree(rfm);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tattr = attr->next;\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\n\t\t\t\tif (attr->name != NULL) {\n\t\t\t\t\tprintf(\"%*s\", indent_len, \" \");\n\t\t\t\t\tprintf(\"%s\", attr->name);\n\t\t\t\t}\n\n\t\t\t\tif (attr->resource != NULL)\n\t\t\t\t\tprintf(\".%s\", attr->resource);\n\n\t\t\t\tif (attr->value != NULL) {\n\t\t\t\t\tl = strlen(attr->name) + 8;\n\n\t\t\t\t\tif (attr->resource != NULL)\n\t\t\t\t\t\tl += strlen(attr->resource) + 1;\n\n\t\t\t\t\tl += 3; /* length of \" = \" */\n\t\t\t\t\tprintf(\" = \");\n\t\t\t\t\tc = attr->value;\n\t\t\t\t\te = c;\n\t\t\t\t\tcomma = TRUE;\n\t\t\t\t\tfirst = TRUE;\n\t\t\t\t\twhile (comma) {\n\t\t\t\t\t\twhile (*e != ',' && *e != '\\0')\n\t\t\t\t\t\t\te++;\n\n\t\t\t\t\t\tcomma = (*e == ',');\n\t\t\t\t\t\t*e = '\\0';\n\t\t\t\t\t\tl += strlen(c) + 1;\n\n\t\t\t\t\t\tif (!first && (l >= 80)) { /* line extension */\n\t\t\t\t\t\t\tprintf(\"\\n\\t\");\n\t\t\t\t\t\t\twhile (White(*c))\n\t\t\t\t\t\t\t\tc++;\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tprintf(\"%s\", show_nonprint_chars(c));\n\t\t\t\t\t\tfirst = FALSE;\n\n\t\t\t\t\t\tif (comma) {\n\t\t\t\t\t\t\tprintf(\",\");\n\t\t\t\t\t\t\t*e = ',';\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\te++;\n\t\t\t\t\t\tc = 
e;\n\t\t\t\t\t}\n\t\t\t\t\tprintf(\"\\n\");\n\t\t\t\t}\n\t\t\t}\n\t\t\tattr = attr->next;\n\t\t}\n\t\tif (!format) {\n\t\t\tprintf(\"\\n\");\n\t\t} else {\n\t\t\tif (otype == MGR_OBJ_SITE_HOOK) {\n\t\t\t\tif (exp_attribs[2].value[0] == '\\0') {\n\t\t\t\t\tfprintf(stderr, \"%s\", hook_tempfile_errmsg);\n\t\t\t\t\tfprintf(stderr, \"can't display hooks data - no hook_tempfile!\\n\");\n\t\t\t\t} else if (pbs_manager(mysvr->s_connect, MGR_CMD_EXPORT, otype,\n\t\t\t\t\t\t       status->name, exp_attribs, NULL) == 0) {\n\t\t\t\t\tprintf(PRINT_HOOK_IMPORT_CALL, show_nonprint_chars(status->name));\n\t\t\t\t\tif (dump_file(hook_tempfile, NULL, HOOKSTR_BASE64,\n\t\t\t\t\t\t      dump_msg, sizeof(dump_msg)) != 0) {\n\t\t\t\t\t\tfprintf(stderr, \"%s\\n\", dump_msg);\n\t\t\t\t\t}\n\t\t\t\t\tprintf(\"\\n\");\n\t\t\t\t}\n\t\t\t\tif (exp_attribs_config[2].value[0] == '\\0') {\n\t\t\t\t\tfprintf(stderr, \"%s\", hook_tempfile_errmsg);\n\t\t\t\t\tfprintf(stderr, \"can't display hooks data - no hook_tempfile!\\n\");\n\t\t\t\t} else if (pbs_manager(mysvr->s_connect, MGR_CMD_EXPORT, otype,\n\t\t\t\t\t\t       status->name, exp_attribs_config, NULL) == 0) {\n\t\t\t\t\tprintf(PRINT_HOOK_IMPORT_CONFIG, show_nonprint_chars(status->name));\n\t\t\t\t\tif (dump_file(hook_tempfile, NULL, HOOKSTR_BASE64,\n\t\t\t\t\t\t      dump_msg, sizeof(dump_msg)) != 0) {\n\t\t\t\t\t\tfprintf(stderr, \"%s\\n\", dump_msg);\n\t\t\t\t\t}\n\t\t\t\t\tprintf(\"\\n\");\n\t\t\t\t}\n\t\t\t} else if (otype == MGR_OBJ_PBS_HOOK) {\n\t\t\t\tif (exp_attribs_config[2].value[0] == '\\0') {\n\t\t\t\t\tfprintf(stderr, \"%s\", hook_tempfile_errmsg);\n\t\t\t\t\tfprintf(stderr, \"can't display pbs hooks data - no hook_tempfile!\\n\");\n\t\t\t\t} else if (pbs_manager(mysvr->s_connect, MGR_CMD_EXPORT, otype,\n\t\t\t\t\t\t       status->name, exp_attribs_config, NULL) == 0) {\n\t\t\t\t\tprintf(PRINT_HOOK_IMPORT_CONFIG, show_nonprint_chars(status->name));\n\t\t\t\t\tif (dump_file(hook_tempfile, NULL, HOOKSTR_BASE64,\n\t\t\t\t\t\t      
dump_msg, sizeof(dump_msg)) != 0) {\n\t\t\t\t\t\tfprintf(stderr, \"%s\\n\", dump_msg);\n\t\t\t\t\t}\n\t\t\t\t\tprintf(\"\\n\");\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tstatus = status->next;\n\t}\n}\n\n/**\n * @brief\n *\tset_active - sets active objects\n *\n * @param[in] obj_type - the type of object\n * @param[in] obj_names - names of objects to set active - should be caller allocated space\n *\n * @return  Error code\n * @retval  0  on success\n * @retval  !0 on failure\n *\n */\nint\nset_active(int obj_type, struct objname *obj_names)\n{\n\tstruct objname *cur_obj = NULL;\n\tstruct server *svr;\n\tint error = 0;\n\n\tif (obj_names != NULL) {\n\t\tswitch (obj_type) {\n\t\t\tcase MGR_OBJ_SERVER:\n\t\t\t\tcur_obj = obj_names;\n\t\t\t\twhile (cur_obj != NULL && !error) {\n\t\t\t\t\tif (cur_obj->svr == NULL) {\n\t\t\t\t\t\tsvr = find_server(cur_obj->obj_name);\n\t\t\t\t\t\tif (svr == NULL)\n\t\t\t\t\t\t\terror = connect_servers(cur_obj, 1);\n\t\t\t\t\t\telse {\n\t\t\t\t\t\t\tcur_obj->svr = svr;\n\t\t\t\t\t\t\tsvr->ref++;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tcur_obj = cur_obj->next;\n\t\t\t\t}\n\t\t\t\tif (!error) {\n\t\t\t\t\tfree_objname_list(active_servers);\n\t\t\t\t\tactive_servers = obj_names;\n\t\t\t\t} else\n\t\t\t\t\tfree_objname_list(obj_names);\n\n\t\t\t\tbreak;\n\n\t\t\tcase MGR_OBJ_SCHED:\n\t\t\t\tcur_obj = obj_names;\n\t\t\t\twhile (cur_obj != NULL && !error) {\n\t\t\t\t\tif (cur_obj->svr == NULL) {\n\t\t\t\t\t\tsvr = find_server(cur_obj->obj_name);\n\t\t\t\t\t\tif (svr == NULL)\n\t\t\t\t\t\t\terror = connect_servers(cur_obj, 1);\n\t\t\t\t\t\telse {\n\t\t\t\t\t\t\tcur_obj->svr = svr;\n\t\t\t\t\t\t\tsvr->ref++;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tcur_obj = cur_obj->next;\n\t\t\t\t}\n\t\t\t\tif (!error) {\n\t\t\t\t\tfree_objname_list(active_scheds);\n\t\t\t\t\tactive_scheds = obj_names;\n\t\t\t\t} else\n\t\t\t\t\tfree_objname_list(obj_names);\n\n\t\t\t\tbreak;\n\n\t\t\tcase MGR_OBJ_QUEUE:\n\t\t\t\tcur_obj = obj_names;\n\n\t\t\t\twhile (cur_obj != NULL 
&& !error) {\n\t\t\t\t\tif (cur_obj->svr_name != NULL) {\n\t\t\t\t\t\tif (cur_obj->svr == NULL)\n\t\t\t\t\t\t\tif (connect_servers(cur_obj, 1) == TRUE)\n\t\t\t\t\t\t\t\terror = 1;\n\t\t\t\t\t}\n\n\t\t\t\t\tif (!is_valid_object(cur_obj, MGR_OBJ_QUEUE)) {\n\t\t\t\t\t\tPSTDERR1(\"Queue does not exist: %s.\\n\", cur_obj->obj_name)\n\t\t\t\t\t\terror = 1;\n\t\t\t\t\t}\n\n\t\t\t\t\tcur_obj = cur_obj->next;\n\t\t\t\t}\n\n\t\t\t\tif (!error) {\n\t\t\t\t\tfree_objname_list(active_queues);\n\t\t\t\t\tactive_queues = obj_names;\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase MGR_OBJ_NODE:\n\t\t\t\tcur_obj = obj_names;\n\t\t\t\twhile (cur_obj != NULL && !error) {\n\t\t\t\t\tif (cur_obj->svr_name != NULL) {\n\t\t\t\t\t\tif (cur_obj->svr == NULL)\n\t\t\t\t\t\t\tif (connect_servers(cur_obj, 1) == TRUE)\n\t\t\t\t\t\t\t\terror = 1;\n\t\t\t\t\t}\n\t\t\t\t\tif (!is_valid_object(cur_obj, MGR_OBJ_NODE)) {\n\t\t\t\t\t\tPSTDERR1(\"Node does not exist: %s.\\n\", cur_obj->obj_name)\n\t\t\t\t\t\terror = 1;\n\t\t\t\t\t}\n\n\t\t\t\t\tcur_obj = cur_obj->next;\n\t\t\t\t}\n\n\t\t\t\tif (!error) {\n\t\t\t\t\tfree_objname_list(active_nodes);\n\t\t\t\t\tactive_nodes = obj_names;\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tdefault:\n\t\t\t\terror = 1;\n\t\t}\n\t} else {\n\t\tswitch (obj_type) {\n\t\t\tcase MGR_OBJ_SERVER:\n\t\t\t\tprintf(\"Active servers:\\n\");\n\t\t\t\tcur_obj = active_servers;\n\t\t\t\tbreak;\n\t\t\tcase MGR_OBJ_SCHED:\n\t\t\t\tprintf(\"Active schedulers:\\n\");\n\t\t\t\tcur_obj = active_scheds;\n\t\t\t\tbreak;\n\t\t\tcase MGR_OBJ_QUEUE:\n\t\t\t\tprintf(\"Active queues:\\n\");\n\t\t\t\tcur_obj = active_queues;\n\t\t\t\tbreak;\n\t\t\tcase MGR_OBJ_NODE:\n\t\t\t\tprintf(\"Active nodes:\\n\");\n\t\t\t\tcur_obj = active_nodes;\n\t\t\t\tbreak;\n\t\t}\n\t\twhile (cur_obj != NULL) {\n\t\t\tif (obj_type == MGR_OBJ_SERVER)\n\t\t\t\tprintf(\"%s\\n\", Svrname(cur_obj->svr));\n\t\t\telse\n\t\t\t\tprintf(\"%s@%s\\n\", cur_obj->obj_name, Svrname(cur_obj->svr));\n\n\t\t\tcur_obj = 
cur_obj->next;\n\t\t}\n\t}\n\n\treturn error;\n}\n\n/**\n * @brief\n *\thandle_formula - if we're setting the formula, we need to write the\n *\t\t\tvalue into a root-owned file instead of sending it over\n *\t\t\tthe wire, because qmgr is run as root.\n *\n * @param[in] attribs - the attribute we're setting\n *\n * @return Void\n *\n */\nvoid\nhandle_formula(struct attropl *attribs)\n{\n\tstruct attropl *pattr;\n\tchar pathbuf[MAXPATHLEN + 1];\n\tFILE *fp;\n\n\tfor (pattr = attribs; pattr != NULL; pattr = pattr->next) {\n\t\tif (!strcmp(pattr->name, ATTR_job_sort_formula) && pattr->op == SET) {\n\t\t\tsprintf(pathbuf, \"%s/%s\", pbs_conf.pbs_home_path, FORMULA_ATTR_PATH);\n\t\t\tif ((fp = fopen(pathbuf, \"w\")) != NULL) {\n\t\t\t\tfprintf(fp, \"%s\\n\", pattr->value);\n\t\t\t\tfclose(fp);\n#ifdef WIN32\n\t\t\t\t/* Give file an Administrators permission so pbs server can read it */\n\t\t\t\tsecure_file(pathbuf, \"Administrators\",\n\t\t\t\t\t    READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED);\n#endif\n\t\t\t} else {\n\t\t\t\tPSTDERR1(\"qmgr: Failed to open %s for writing.\\n\", pathbuf)\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \texecute - contact the server and execute the command\n *\n * @param[in] aopt      True, if the -a option was given.\n * @param[in] oper      The command, either create, delete, set, unset or list.\n * @param[in] type      The object type, either server or queue.\n * @param[in] names     The object name list.\n * @param[in] attribs   The attribute list with operators.\n *\n * @return int\n * @retval 0 for success\n * @retval non-zero for error\n *\n * @par\n * Uses the following library calls from libpbs:\n *          pbs_manager\n *          pbs_statserver\n *          pbs_statque\n *          pbs_statsched\n *          pbs_statfree\n *          pbs_geterrmsg\n *\n */\nint\nexecute(int aopt, int oper, int type, char *names, struct attropl *attribs)\n{\n\tint len; /* Used for 
length of an err msg*/\n\tint cerror;\n\tint error; /* Error value returned */\n\tint perr;  /* Value returned from pbs_manager */\n\tchar *pmsg;\n\tchar *errmsg;\t\t      /* Error message from pbs_errmsg */\n\tchar errnomsg[256];\t      /* Error message with pbs_errno */\n\tstruct objname *name;\t      /* Pointer to a list of object names */\n\tstruct objname *pname = NULL; /* Pointer to current object name */\n\tstruct objname *sname = NULL; /* Pointer to current server name */\n\tstruct objname *svrs;\t      /* servers to loop through */\n\tstruct attrl *sa;\t      /* Argument needed for status routines */\n\t/* Argument used to request queue names */\n\tstruct server *sp; /* Pointer to server structure */\n\t/* Return structure from a list or print request */\n\tstruct batch_status *ss = NULL;\n\tstruct attropl *attribs_tmp = NULL;\n\tstruct attropl *attribs_file = NULL;\n\tchar infile[MAXPATHLEN + 1];\n\tchar outfile[MAXPATHLEN + 1];\n\tchar dump_msg[HOOK_MSG_SIZE];\n\tchar content_encoding[HOOK_BUF_SIZE];\n\tchar content_type[HOOK_BUF_SIZE];\n\terror = 0;\n\tname = commalist2objname(names, type);\n\n\tif (oper == MGR_CMD_ACTIVE)\n\t\treturn set_active(type, name);\n\n\tif (name == NULL) {\n\t\tswitch (type) {\n\t\t\t\t/* There will always be an active server */\n\t\t\tcase MGR_OBJ_SCHED:\n\t\t\tcase MGR_OBJ_SERVER:\n\t\t\tcase MGR_OBJ_SITE_HOOK:\n\t\t\tcase MGR_OBJ_PBS_HOOK:\n\t\t\tcase MGR_OBJ_RSC:\n\t\t\t\tpname = active_servers;\n\t\t\t\tbreak;\n\t\t\tcase MGR_OBJ_QUEUE:\n\t\t\t\tif (active_queues != NULL)\n\t\t\t\t\tpname = active_queues;\n\t\t\t\telse\n\t\t\t\t\tpstderr(\"No Active Queues, nothing done.\\n\");\n\t\t\t\tbreak;\n\t\t\tcase MGR_OBJ_NODE:\n\t\t\t\tif (active_nodes != NULL)\n\t\t\t\t\tpname = active_nodes;\n\t\t\t\telse\n\t\t\t\t\tpstderr(\"No Active Nodes, nothing done.\\n\");\n\t\t\t\tbreak;\n\t\t}\n\t} else\n\t\tpname = name;\n\n\tfor (; pname != NULL; pname = pname->next) {\n\t\tif (pname->svr_name != NULL)\n\t\t\tsvrs = 
temp_objname(NULL, pname->svr_name, pname->svr);\n\t\telse\n\t\t\tsvrs = active_servers;\n\n\t\tfor (sname = svrs; sname != NULL; sname = sname->next) {\n\t\t\tif (sname->svr == NULL) {\n\t\t\t\tcerror = connect_servers(sname, 1);\n\t\t\t\t/* if connect_servers() returned an error   */\n\t\t\t\t/* update \"error\", otherwise leave it alone */\n\t\t\t\t/* so that any prior error is retained      */\n\t\t\t\tif (cerror) {\n\t\t\t\t\terror = cerror;\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tsp = sname->svr;\n\t\t\tif (oper == MGR_CMD_LIST) {\n\t\t\t\tsa = attropl2attrl(attribs);\n\t\t\t\tswitch (type) {\n\t\t\t\t\tcase MGR_OBJ_SERVER:\n\t\t\t\t\t\tss = pbs_statserver(sp->s_connect, sa, NULL);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tcase MGR_OBJ_QUEUE:\n\t\t\t\t\t\tss = pbs_statque(sp->s_connect, pname->obj_name, sa, NULL);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tcase MGR_OBJ_NODE:\n\t\t\t\t\t\tss = pbs_statvnode(sp->s_connect, pname->obj_name, sa, NULL);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tcase MGR_OBJ_SCHED:\n\t\t\t\t\t\tss = pbs_statsched(sp->s_connect, sa, NULL);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tcase MGR_OBJ_SITE_HOOK:\n\t\t\t\t\t\tss = pbs_stathook(sp->s_connect, pname->obj_name, sa, SITE_HOOK);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tcase MGR_OBJ_PBS_HOOK:\n\t\t\t\t\t\tss = pbs_stathook(sp->s_connect, pname->obj_name, sa, PBS_HOOK);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tcase MGR_OBJ_RSC:\n\t\t\t\t\t\tss = pbs_statrsc(sp->s_connect, pname->obj_name, sa, \"p\");\n\t\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tfree_attrl_list(sa);\n\t\t\t\tperr = (ss == NULL);\n\t\t\t\tif (!perr)\n\t\t\t\t\tdisplay(type, type, pname->obj_name, ss, FALSE, sp);\n\n\t\t\t\t/* For 'list hook' command of all available */\n\t\t\t\t/* hooks, if none are found in the system, */\n\t\t\t\t/* then force a return success value.   
*/\n\t\t\t\tif ((perr != 0) &&\n\t\t\t\t    ((type == MGR_OBJ_SITE_HOOK) ||\n\t\t\t\t     (type == MGR_OBJ_PBS_HOOK)) &&\n\t\t\t\t    ((pname->obj_name == NULL) ||\n\t\t\t\t     (pname->obj_name[0] == '\\0'))) {\n\t\t\t\t\t/* not an error */\n\t\t\t\t\tperr = 0;\n\t\t\t\t}\n\n\t\t\t\tpbs_statfree(ss);\n\t\t\t} else if (oper == MGR_CMD_PRINT) {\n\n\t\t\t\tsa = attropl2attrl(attribs);\n\t\t\t\tswitch (type) {\n\t\t\t\t\tcase MGR_OBJ_SERVER:\n\t\t\t\t\t\tif (sa == NULL) {\n\t\t\t\t\t\t\tsp->s_rsc = pbs_statrsc(sp->s_connect, NULL, NULL, \"p\");\n\t\t\t\t\t\t\tif (sp->s_rsc != NULL) {\n\t\t\t\t\t\t\t\tdisplay(MGR_OBJ_RSC, MGR_OBJ_SERVER, NULL, sp->s_rsc, TRUE, sp);\n\t\t\t\t\t\t\t} else if (pbs_errno != PBSE_NONE) {\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tss = pbs_statque(sp->s_connect, NULL, NULL, NULL);\n\t\t\t\t\t\t\tif (ss != NULL) {\n\t\t\t\t\t\t\t\tdisplay(MGR_OBJ_QUEUE, MGR_OBJ_SERVER, NULL, ss, TRUE, sp);\n\t\t\t\t\t\t\t\tpbs_statfree(ss);\n\t\t\t\t\t\t\t} else if (pbs_errno != PBSE_NONE) {\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\tss = pbs_statserver(sp->s_connect, sa, NULL);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tcase MGR_OBJ_QUEUE:\n\t\t\t\t\t\tss = pbs_statque(sp->s_connect, pname->obj_name, sa, NULL);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tcase MGR_OBJ_NODE:\n\t\t\t\t\t\tss = pbs_statvnode(sp->s_connect, pname->obj_name, sa, NULL);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tcase MGR_OBJ_SCHED:\n\t\t\t\t\t\tss = pbs_statsched(sp->s_connect, sa, NULL);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tcase MGR_OBJ_SITE_HOOK:\n\t\t\t\t\t\tss = pbs_stathook(sp->s_connect, pname->obj_name, sa, SITE_HOOK);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tcase MGR_OBJ_RSC:\n\t\t\t\t\t\tss = pbs_statrsc(sp->s_connect, pname->obj_name, sa, \"p\");\n\t\t\t\t\t\tbreak;\n\t\t\t\t}\n\n\t\t\t\tfree_attrl_list(sa);\n\t\t\t\tperr = (ss == NULL);\n\t\t\t\tif (!perr) {\n\t\t\t\t\tdisplay(type, type, pname->obj_name, ss, TRUE, sp);\n\t\t\t\t}\n\t\t\t\tpbs_statfree(ss);\n\t\t\t} else {\n\t\t\t\tif 
(oper == MGR_CMD_IMPORT) {\n\t\t\t\t\tinfile[0] = '\\0';\n\t\t\t\t\tcontent_encoding[0] = '\\0';\n\t\t\t\t\tcontent_type[0] = '\\0';\n\t\t\t\t\tattribs_tmp = attribs;\n\t\t\t\t\tattribs_file = NULL;\n\t\t\t\t\twhile (attribs_tmp) {\n\t\t\t\t\t\tif (strcmp(attribs_tmp->name, INPUT_FILE_PARAM) == 0) {\n\t\t\t\t\t\t\tpbs_strncpy(infile, attribs_tmp->value, sizeof(infile));\n\t\t\t\t\t\t\tattribs_file = attribs_tmp;\n\t\t\t\t\t\t} else if (strcmp(attribs_tmp->name, CONTENT_ENCODING_PARAM) == 0) {\n\t\t\t\t\t\t\tpbs_strncpy(content_encoding, attribs_tmp->value, sizeof(content_encoding));\n\t\t\t\t\t\t} else if (strcmp(attribs_tmp->name, CONTENT_TYPE_PARAM) == 0) {\n\t\t\t\t\t\t\tpbs_strncpy(content_type, attribs_tmp->value, sizeof(content_type));\n\t\t\t\t\t\t}\n\t\t\t\t\t\tattribs_tmp = attribs_tmp->next;\n\t\t\t\t\t}\n\t\t\t\t\tif (infile[0] == '\\0') {\n\t\t\t\t\t\tfprintf(stderr,\n\t\t\t\t\t\t\t\"hook import command has no <input-file> argument\\n\");\n\t\t\t\t\t\terror = 1;\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t\tif (content_encoding[0] == '\\0') {\n\t\t\t\t\t\tfprintf(stderr,\n\t\t\t\t\t\t\t\"hook import command has no <content-encoding> argument\\n\");\n\t\t\t\t\t\terror = 1;\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t\tif (content_type[0] == '\\0') {\n\t\t\t\t\t\tfprintf(stderr,\n\t\t\t\t\t\t\t\"hook import command has no <content-type> argument\\n\");\n\t\t\t\t\t\terror = 1;\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\n\t\t\t\t\tif (strcmp(infile, \"-\") == 0) {\n\t\t\t\t\t\tinfile[0] = '\\0';\n\t\t\t\t\t}\n\n\t\t\t\t\tif (strcmp(content_type, HOOKSTR_CONFIG) == 0) {\n\t\t\t\t\t\tchar *p;\n\t\t\t\t\t\tint totlen;\n\t\t\t\t\t\tp = strrchr(infile, '.');\n\t\t\t\t\t\t/* need to pass the suffix */\n\t\t\t\t\t\t/* to the server which will */\n\t\t\t\t\t\t/* validate the file based */\n\t\t\t\t\t\t/* on type */\n\t\t\t\t\t\tif (p != NULL) {\n\t\t\t\t\t\t\ttotlen = strlen(p) + strlen(hook_tempfile);\n\n\t\t\t\t\t\t\tif (totlen < 
sizeof(hook_tempfile))\n\t\t\t\t\t\t\t\tstrcat(hook_tempfile, p);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\t/* hook_tempfile could be set to empty if generating this filename */\n\t\t\t\t\t/* by mktemp() was not successful */\n\t\t\t\t\tif ((hook_tempfile[0] == '\\0') ||\n\t\t\t\t\t    dump_file(infile, hook_tempfile, content_encoding,\n\t\t\t\t\t\t      dump_msg, sizeof(dump_msg)) != 0) {\n\t\t\t\t\t\tstruct stat sbuf;\n\n\t\t\t\t\t\terror = 1; /* set error indicator */\n\n\t\t\t\t\t\tif (hook_tempfile_errmsg[0] != '\\0')\n\t\t\t\t\t\t\tfprintf(stderr, \"%s\\n\", hook_tempfile_errmsg);\n\n\t\t\t\t\t\t\t/* Detect failed to access hooks working directory */\n\n#ifdef WIN32\n\t\t\t\t\t\tif ((lstat(hook_tempdir, &sbuf) == -1) && (GetLastError() == ERROR_ACCESS_DENIED))\n#else\n\t\t\t\t\t\tif ((stat(hook_tempdir, &sbuf) == -1) && (errno == EACCES))\n#endif\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tfprintf(stderr, \"%s@%s is unauthorized to access hooks data \"\n\t\t\t\t\t\t\t\t\t\"from server %s\\n\",\n\t\t\t\t\t\t\t\tcur_user, cur_host,\n\t\t\t\t\t\t\t\t(sname->svr_name[0] == '\\0') ? 
pbs_conf.pbs_server_name : sname->svr_name);\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tfprintf(stderr, \"%s\\n\", dump_msg);\n\t\t\t\t\t\t}\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\n\t\t\t\t\tdyn_strcpy(&attribs_file->value, base(hook_tempfile));\n\n\t\t\t\t} else if (oper == MGR_CMD_EXPORT) {\n\t\t\t\t\tchar *hooktmp = NULL;\n\n\t\t\t\t\tif (hook_tempfile[0] == '\\0') {\n\n\t\t\t\t\t\tstruct stat sbuf;\n\n\t\t\t\t\t\terror = 1;\n\n\t\t\t\t\t\tif (hook_tempfile_errmsg[0] != '\\0')\n\t\t\t\t\t\t\tfprintf(stderr, \"%s\\n\", hook_tempfile_errmsg);\n\n\t\t\t\t\t\t\t/* Detect failed to access hooks working directory */\n#ifdef WIN32\n\t\t\t\t\t\tif ((lstat(hook_tempdir, &sbuf) == -1) && (GetLastError() == ERROR_ACCESS_DENIED))\n#else\n\t\t\t\t\t\tif ((stat(hook_tempdir, &sbuf) == -1) && (errno == EACCES))\n#endif\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tfprintf(stderr, \"%s@%s is unauthorized to access hooks data \"\n\t\t\t\t\t\t\t\t\t\"from server %s\\n\",\n\t\t\t\t\t\t\t\tcur_user, cur_host,\n\t\t\t\t\t\t\t\t(sname->svr_name[0] == '\\0') ? conf_full_server_name : sname->svr_name);\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tfprintf(stderr, \"can't export hooks data. no hook_tempfile!\\n\");\n\t\t\t\t\t\t}\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t\toutfile[0] = '\\0';\n\t\t\t\t\tcontent_encoding[0] = '\\0';\n\t\t\t\t\tattribs_tmp = attribs;\n\t\t\t\t\tattribs_file = NULL;\n\t\t\t\t\twhile (attribs_tmp) {\n\t\t\t\t\t\tif (strcmp(attribs_tmp->name, OUTPUT_FILE_PARAM) == 0) {\n\t\t\t\t\t\t\tpbs_strncpy(outfile, attribs_tmp->value, sizeof(outfile));\n\t\t\t\t\t\t\tattribs_file = attribs_tmp;\n\t\t\t\t\t\t} else if (strcmp(attribs_tmp->name,\n\t\t\t\t\t\t\t\t  CONTENT_ENCODING_PARAM) == 0) {\n\t\t\t\t\t\t\tpbs_strncpy(content_encoding, attribs_tmp->value, sizeof(content_encoding));\n\t\t\t\t\t\t}\n\t\t\t\t\t\tattribs_tmp = attribs_tmp->next;\n\t\t\t\t\t}\n\t\t\t\t\thooktmp = base(hook_tempfile);\n\t\t\t\t\t/* dyn_strcpy does not like a NULL second argument. 
*/\n\t\t\t\t\tdyn_strcpy(&attribs_file->value, (hooktmp ? hooktmp : \"\"));\n\t\t\t\t}\n\t\t\t\thandle_formula(attribs);\n\t\t\t\tif (type == MGR_OBJ_PBS_HOOK) {\n\t\t\t\t\tstruct attropl *popl;\n\t\t\t\t\tperr = pbs_manager(sp->s_connect, oper, type, pname->obj_name, attribs, PBS_HOOK);\n\n\t\t\t\t\tpopl = attribs;\n\t\t\t\t\tif (perr == 0) {\n\t\t\t\t\t\twhile (popl != NULL) {\n\n\t\t\t\t\t\t\tif (strcmp(popl->name, \"enabled\") != 0) {\n\t\t\t\t\t\t\t\tpopl = popl->next;\n\t\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif ((strcasecmp(popl->value, HOOKSTR_FALSE) == 0) ||\n\t\t\t\t\t\t\t    (strcasecmp(popl->value, \"f\") == 0) ||\n\t\t\t\t\t\t\t    (strcasecmp(popl->value, \"n\") == 0) ||\n\t\t\t\t\t\t\t    (strcmp(popl->value, \"0\") == 0)) {\n\t\t\t\t\t\t\t\tfprintf(stderr, \"WARNING: Disabling a PBS hook \"\n\t\t\t\t\t\t\t\t\t\t\"results in an unsupported configuration!\\n\");\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tpopl = popl->next;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tif ((strlen(pname->obj_name) == 0) && type == MGR_OBJ_SCHED && oper != MGR_CMD_DELETE) {\n\t\t\t\t\t\tperr = pbs_manager(sp->s_connect, oper, type, PBS_DFLT_SCHED_NAME, attribs, NULL);\n\t\t\t\t\t} else\n\t\t\t\t\t\tperr = pbs_manager(sp->s_connect, oper, type, pname->obj_name, attribs, NULL);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\terrmsg = pbs_geterrmsg(sp->s_connect);\n\t\t\tif (perr) {\n\t\t\t\t/*\n\t\t\t\t ** IF\n\t\t\t\t **\tstdin is a tty\t\t\tOR\n\t\t\t\t **\tthe command is not SET\t\tOR\n\t\t\t\t **\tthe object type is not NODE\tOR\n\t\t\t\t **\tthe error is not \"attempt to set READ ONLY attribute\"\n\t\t\t\t ** THEN print error messages\n\t\t\t\t ** ELSE don't print error messages\n\t\t\t\t **\n\t\t\t\t ** This is to deal with bug 4941 where errors are generated\n\t\t\t\t ** from running qmgr with input from a file which was generated\n\t\t\t\t ** by 'qmgr -c \"p n @default\" > /tmp/nodes_out'.\n\t\t\t\t */\n\t\t\t\tif (isatty(0) ||\n\t\t\t\t    (oper != MGR_CMD_SET) || 
(type != MGR_OBJ_NODE) ||\n\t\t\t\t    (pbs_errno != PBSE_ATTRRO)) {\n\t\t\t\t\tif (errmsg != NULL) {\n\t\t\t\t\t\tlen = strlen(errmsg) + strlen(pname->obj_name) + strlen(Svrname(sp)) + 20;\n\t\t\t\t\t\tif (len < 256) {\n\t\t\t\t\t\t\tsprintf(errnomsg, \"qmgr obj=%s svr=%s: %s\\n\",\n\t\t\t\t\t\t\t\tpname->obj_name, Svrname(sp), errmsg);\n\t\t\t\t\t\t\tpstderr(errnomsg);\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t/*obviously, this is to cover a highly unlikely case*/\n\n\t\t\t\t\t\t\tpstderr_big(Svrname(sp), pname->obj_name, errmsg);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tif (pbs_errno == PBSE_PROTOCOL) {\n\t\t\t\t\t\tif ((check_time - start_time) >= QMGR_TIMEOUT) {\n\t\t\t\t\t\t\tpstderr(\"qmgr: Server disconnected due to idle connection timeout\\n\");\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tpstderr(\"qmgr: Protocol error, server disconnected\\n\");\n\t\t\t\t\t\t}\n\t\t\t\t\t\texit(1);\n\t\t\t\t\t} else if (pbs_errno == PBSE_HOOKERROR) {\n\t\t\t\t\t\tpstderr(\"qmgr: hook error returned from server\\n\");\n\t\t\t\t\t} else if (pbs_errno != 0) /* 0 happens with hooks if no hooks found */\n\t\t\t\t\t\tPSTDERR1(\"qmgr: Error (%d) returned from server\\n\", pbs_errno)\n\t\t\t\t}\n\n\t\t\t\tif (aopt)\n\t\t\t\t\treturn perr;\n\t\t\t\terror = perr;\n\t\t\t} else if (errmsg != NULL) {\n\t\t\t\t/* batch reply code is 0 but a text message is also being returned */\n\n\t\t\t\tif ((pmsg = malloc(strlen(errmsg) + 2)) != NULL) {\n\t\t\t\t\tstrcpy(pmsg, errmsg);\n\t\t\t\t\tstrcat(pmsg, \"\\n\");\n\t\t\t\t\tpstderr(pmsg);\n\t\t\t\t\tfree(pmsg);\n\t\t\t\t}\n\t\t\t} else {\n\n\t\t\t\tif (oper == MGR_CMD_EXPORT) {\n\t\t\t\t\tif (dump_file(hook_tempfile, outfile, content_encoding,\n\t\t\t\t\t\t      dump_msg, sizeof(dump_msg)) != 0) {\n\t\t\t\t\t\tfprintf(stderr, \"%s\\n\", dump_msg);\n\t\t\t\t\t\terror = 1;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\ttemp_objname(NULL, NULL, NULL); /* clears reference count */\n\t\t}\n\t}\n\tif (name != NULL)\n\t\tfree_objname_list(name);\n\treturn 
error;\n}\n\n/**\n * @brief\n *\tfrees the attribute list\n *\n * @param[in] attr   Pointer to the linked list of attropls to clean up.\n *\n * @return Void\n *\n */\nvoid\nfreeattropl(struct attropl *attr)\n{\n\tstruct attropl *ap;\n\n\twhile (attr != NULL) {\n\t\tif (attr->name != NULL)\n\t\t\tfree(attr->name);\n\t\tif (attr->resource != NULL)\n\t\t\tfree(attr->resource);\n\t\tif (attr->value != NULL)\n\t\t\tfree(attr->value);\n\t\tap = attr->next;\n\t\tfree(attr);\n\t\tattr = ap;\n\t}\n}\n\n/**\n * @brief\n *\tcommalist2objname - convert a comma separated list of strings to a\n *\t\t\t    linked list of objname structs\n *\n * @param[in] names - comma separated list of strings\n * @param[in] type - the type of the objects\n *\n * @return  structure\n * @retval  linked list of objname structs\n *\n */\nstruct objname *\ncommalist2objname(char *names, int type)\n{\n\tchar *foreptr, *backptr;\t /* front and back of words */\n\tstruct objname *objs = NULL;\t /* the front of the name object list */\n\tstruct objname *cur_obj;\t /* the current name object */\n\tstruct objname *prev_obj = NULL; /* the previous name object */\n\tint len;\t\t\t /* length of segment of string */\n\tchar error = 0;\t\t\t /* error flag */\n\n\tif (names != NULL) {\n\t\tforeptr = backptr = names;\n\t\twhile (!EOL(*foreptr) && !error) {\n\t\t\twhile (White(*foreptr))\n\t\t\t\tforeptr++;\n\n\t\t\tbackptr = foreptr;\n\n\t\t\twhile (*foreptr != ',' && *foreptr != '@' && !EOL(*foreptr))\n\t\t\t\tforeptr++;\n\n\t\t\tcur_obj = new_objname();\n\t\t\tcur_obj->obj_type = type;\n\t\t\tif (*foreptr == '@') {\n\t\t\t\tlen = foreptr - backptr;\n\t\t\t\tMstring(cur_obj->obj_name, len + 1);\n\t\t\t\tpbs_strncpy(cur_obj->obj_name, backptr, len + 1);\n\t\t\t\tforeptr++;\n\t\t\t\tbackptr = foreptr;\n\t\t\t\twhile (*foreptr != ',' && !EOL(*foreptr))\n\t\t\t\t\tforeptr++;\n\n\t\t\t\tlen = foreptr - backptr;\n\t\t\t\tif (strncmp(backptr, DEFAULT_SERVER, len) == 0) {\n\t\t\t\t\tMstring(cur_obj->svr_name, 
1);\n\t\t\t\t\tcur_obj->svr_name[0] = '\\0';\n\t\t\t\t} else if (strncmp(backptr, ACTIVE_SERVER, len) == 0)\n\t\t\t\t\tcur_obj->svr_name = NULL;\n\t\t\t\telse {\n\t\t\t\t\tMstring(cur_obj->svr_name, len + 1);\n\t\t\t\t\tpbs_strncpy(cur_obj->svr_name, backptr, len + 1);\n\t\t\t\t}\n\n\t\t\t\tif (!EOL(*foreptr))\n\t\t\t\t\tforeptr++;\n\t\t\t} else {\n\t\t\t\tlen = foreptr - backptr;\n\n\t\t\t\tif ((type == MGR_OBJ_SERVER || type == MGR_OBJ_SITE_HOOK || type == MGR_OBJ_PBS_HOOK) && !strcmp(backptr, DEFAULT_SERVER)) {\n\t\t\t\t\tMstring(cur_obj->obj_name, 1);\n\t\t\t\t\tcur_obj->obj_name[0] = '\\0';\n\t\t\t\t} else {\n\t\t\t\t\tMstring(cur_obj->obj_name, len + 1);\n\t\t\t\t\tpbs_strncpy(cur_obj->obj_name, backptr, len + 1);\n\t\t\t\t}\n\n\t\t\t\tif (type == MGR_OBJ_SERVER)\n\t\t\t\t\tcur_obj->svr_name = cur_obj->obj_name;\n\n\t\t\t\tif (!EOL(*foreptr))\n\t\t\t\t\tforeptr++;\n\t\t\t}\n\n\t\t\tif ((cur_obj->svr = find_server(cur_obj->svr_name)) != NULL)\n\t\t\t\tcur_obj->svr->ref++;\n\n\t\t\tif (objs == NULL)\n\t\t\t\tobjs = cur_obj;\n\n\t\t\tif (prev_obj == NULL)\n\t\t\t\tprev_obj = cur_obj;\n\t\t\telse if (cur_obj != NULL) {\n\t\t\t\tprev_obj->next = cur_obj;\n\t\t\t\tprev_obj = cur_obj;\n\t\t\t}\n\t\t}\n\t}\n\n\tif (error) {\n\t\tfree_objname_list(objs);\n\t\treturn NULL;\n\t}\n\n\treturn objs;\n}\n\n/**\n * @brief\n *\tget_request - get a qmgr request from the standard input\n *\n * @param[out] request      The buffer for the qmgr request\n *\n * @return Error code\n * @retval 0     Success\n * @retval EOF   Failure\n *\n * NOTE:\n *      This routine has a static buffer it keeps lines of input in.\n * Since commands can be separated by semicolons, a line may contain\n * more than one command.  In this case, the command is copied to\n * request and the rest of the line is moved up to overwrite the previous\n * command.  
Another line is retrieved from stdin only if the buffer is\n * empty\n *\n */\nint\nget_request(char **request)\n{\n\tstatic char *line = NULL; /* Stdin line */\n\tstatic int empty = TRUE;  /* Line has nothing in it */\n\tint eol;\t\t  /* End of line */\n\tint ll;\t\t\t  /* Length of line */\n\tint i = 0;\t\t  /* Index into line */\n\tchar *rp;\t\t  /* Pointer into request */\n\tchar *lp;\t\t  /* Pointer into line */\n\tint eoc;\t\t  /* End of command */\n\tchar quote;\t\t  /* Either ' or \" */\n\tchar *cur_line = NULL;\t  /* Pointer to the current line */\n\tint line_len = 0;\t  /* Length of the line buffer */\n\tchar *ret;\n\n#ifdef QMGR_HAVE_HIST\n\tif (qmgr_hist_enabled == 1) {\n\t\tif (empty) {\n\t\t\tif (line != NULL) {\n\t\t\t\tfree(line);\n\t\t\t\tline = NULL;\n\t\t\t}\n\n\t\t\tif (get_request_hist(&cur_line) == EOF)\n\t\t\t\treturn EOF;\n\t\t}\n\t}\n#endif\n\n\t/* Make sure something is in the stdin line */\n\tif (empty) {\n\t\teol = FALSE;\n\t\tlp = line;\n\t\tll = 0;\n\t\twhile (!eol) {\n\t\t\t/* The following code block (enclosed within if() {}) is executed only for the special case of\n \t\t\t * qmgr Commands being supplied from a file or within delimiters e.g. $qmgr < cmd_file.txt, where\n\t\t\t * cmd_file.txt contains Commands ... 
or\n\t\t\t * $qmgr <<EOF\n \t\t\t * p q workq\n \t\t\t * EOF <--- EOF is a delimiter.\n \t\t\t * This code block is not needed for the cases where qmgr receives input Interactively\n \t\t\t * or from the Command Line, since these checks already get done in get_request_hist().\n \t\t\t */\n\t\t\tif (qmgr_hist_enabled == 0) {\n\t\t\t\tif (isatty(0) && isatty(1)) {\n\t\t\t\t\tif (lp == line)\n\t\t\t\t\t\tprintf(\"%s\", prompt);\n\t\t\t\t\telse\n\t\t\t\t\t\tprintf(\"%s\", contin);\n\t\t\t\t}\n\n\t\t\t\tstart_time = time(0);\n\t\t\t\tll = 0;\n\t\t\t\tif ((ret = pbs_fgets(&cur_line, &ll, stdin)) == NULL) {\n\t\t\t\t\tif (line != NULL) {\n\t\t\t\t\t\tfree(line);\n\t\t\t\t\t\tline = NULL;\n\t\t\t\t\t}\n\t\t\t\t\tif (cur_line != NULL) {\n\t\t\t\t\t\tfree(cur_line);\n\t\t\t\t\t}\n\t\t\t\t\treturn EOF;\n\t\t\t\t}\n\t\t\t\tcur_line = ret;\n\t\t\t\tll = strlen(cur_line);\n\t\t\t\tif (cur_line[ll - 1] == '\\n') {\n\t\t\t\t\t/* remove newline */\n\t\t\t\t\tcur_line[ll - 1] = '\\0';\n\t\t\t\t\t--ll;\n\t\t\t\t}\n\t\t\t\tlp = cur_line;\n\n\t\t\t\twhile (White(*lp))\n\t\t\t\t\tlp++;\n\n\t\t\t\tif (strlen(lp) == 0) {\n\t\t\t\t\tif (cur_line != NULL) {\n\t\t\t\t\t\tfree(cur_line);\n\t\t\t\t\t\tcur_line = NULL;\n\t\t\t\t\t\tlp = line;\n\t\t\t\t\t}\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tll = strlen(cur_line);\n\t\t\t\tlp = cur_line;\n\t\t\t}\n\n\t\t\tif (cur_line[ll - 1] == '\\\\') {\n\t\t\t\tcur_line[ll - 1] = ' ';\n\t\t\t} else if (*lp != '#')\n\t\t\t\teol = TRUE;\n\n\t\t\tif (*lp != '#') {\n\t\t\t\tif (line != NULL) {\n\t\t\t\t\tline_len = strlen(line);\n\t\t\t\t} else {\n\t\t\t\t\tline_len = 0;\n\t\t\t\t}\n\t\t\t\t/*\n\t\t\t\t * Append the contents of cur_line to the earlier line buffer\n\t\t\t\t * pbs_strcat takes care of increasing the size of the destination\n\t\t\t\t * buffer if required.\n\t\t\t\t */\n\t\t\t\tif ((pbs_strcat(&line, &line_len, cur_line)) == NULL) {\n\t\t\t\t\tfprintf(stderr, \"malloc failure (errno %d)\\n\", 
errno);\n\t\t\t\t\texit(1);\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (cur_line != NULL) {\n\t\t\t\tfree(cur_line);\n\t\t\t}\n\t\t} /* End while(). */\n\t}\t  /* End if(empty). */\n\n\t/* Move a command from line to request */\n\tll = strlen(line);\n\t*request = (char *) malloc(ll + 1);\n\tif (*request == NULL) {\n\t\tfprintf(stderr, \"malloc failure (errno %d)\\n\", errno);\n\t\texit(1);\n\t}\n\t(*request)[ll] = '\\0';\n\trp = *request;\n\tlp = line;\n\teoc = FALSE;\n\twhile (!eoc) {\n\t\tswitch (*lp) {\n\t\t\t\t/* End of command */\n\t\t\tcase ';':\n\t\t\tcase '\\0':\n\t\t\t\teoc = TRUE;\n\t\t\t\tbreak;\n\n\t\t\t\t/* Quoted string */\n\t\t\tcase '\"':\n\t\t\tcase '\\'':\n\t\t\t\tquote = *lp;\n\t\t\t\t*rp = *lp;\n\t\t\t\trp++;\n\t\t\t\tlp++;\n\t\t\t\twhile (*lp != quote && !EOL(*lp)) {\n\t\t\t\t\t*rp = *lp;\n\t\t\t\t\trp++;\n\t\t\t\t\tlp++;\n\t\t\t\t}\n\t\t\t\t*rp = *lp;\n\t\t\t\tif (!EOL(*lp)) {\n\t\t\t\t\trp++;\n\t\t\t\t\tlp++;\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase '#':\n\t\t\t\tif ((lp == line) || isspace(*(lp - 1))) {\n\t\t\t\t\t/* comment */\n\t\t\t\t\teoc = TRUE;\n\t\t\t\t\tbreak;\n\t\t\t\t} /* not comment, fall into default case */\n\t\t\t\t  /* Move the character */\n\t\t\tdefault:\n\t\t\t\t*rp = *lp;\n\t\t\t\trp++;\n\t\t\t\tlp++;\n\t\t\t\tbreak;\n\t\t}\n\t}\n\t*rp = '\\0';\n\n\t/* Is there any thing left in the line? */\n\tswitch (*lp) {\n\t\tcase '\\0':\n\t\tcase '#':\n\t\t\ti = 0;\n\t\t\tempty = TRUE;\n\t\t\tbreak;\n\n\t\tcase ';':\n\t\t\trp = line;\n\t\t\tlp++;\n\t\t\twhile (White(*lp))\n\t\t\t\tlp++;\n\t\t\tif (!EOL(*lp)) {\n\t\t\t\ti = strlen(lp);\n\t\t\t\tmemmove(rp, lp, (size_t) i); /* By using memmove() we avoid strcpy's overlapping buffer issue. */\n\t\t\t\tempty = FALSE;\t\t     /* Note: memmove() doesn't Null terminate; so we take care of this by */\n\t\t\t}\t\t\t\t     /* nullifying 'line', at the end of this function, by setting line[i] to '\\0'. 
*/\n\t\t\telse {\n\t\t\t\ti = 0;\n\t\t\t\tempty = TRUE;\n\t\t\t}\n\t\t\tbreak;\n\t}\n\n\tline[i] = '\\0'; /* Nullify the 'line' buffer at position 'i'. The un-processed command(s) got copied */\n\t\t\t/* to the start of the 'line' buffer by memmove() above. These command(s) are now */\n\t\t\t/* Null terminated appropriately. */\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tshow_help - show help for qmgr\n *\n * @param[in] str - possible sub topic to show help on\n *\n * @return Void\n *\n */\nvoid\nshow_help(char *str)\n{\n\tif (str != NULL) {\n\t\twhile (White(*str))\n\t\t\tstr++;\n\t}\n\n\tif ((str == NULL) || (*str == '\\0')) {\n\t\tprintf(HELP_DEFAULT);\n\t} else if (strncmp(str, \"active\", 6) == 0)\n\t\tprintf(HELP_ACTIVE);\n\telse if (strncmp(str, \"create\", 6) == 0)\n\t\tprintf(HELP_CREATE);\n\telse if (strncmp(str, \"delete\", 6) == 0)\n\t\tprintf(HELP_DELETE);\n\telse if (strncmp(str, \"set\", 3) == 0)\n\t\tprintf(HELP_SET);\n\telse if (strncmp(str, \"unset\", 5) == 0)\n\t\tprintf(HELP_UNSET);\n\telse if (strncmp(str, \"list\", 4) == 0)\n\t\tprintf(HELP_LIST);\n\telse if (strncmp(str, \"print\", 5) == 0)\n\t\tprintf(HELP_PRINT);\n\telse if (strncmp(str, \"import\", 6) == 0)\n\t\tprintf(HELP_IMPORT);\n\telse if (strncmp(str, \"export\", 6) == 0)\n\t\tprintf(HELP_EXPORT);\n\telse if (strncmp(str, \"quit\", 4) == 0)\n\t\tprintf(HELP_QUIT0);\n\telse if (strncmp(str, \"exit\", 4) == 0)\n\t\tprintf(HELP_EXIT);\n\telse if (strncmp(str, \"operator\", 8) == 0)\n\t\tprintf(HELP_OPERATOR);\n\telse if (strncmp(str, \"value\", 5) == 0)\n\t\tprintf(HELP_VALUE);\n\telse if (strncmp(str, \"name\", 4) == 0)\n\t\tprintf(HELP_NAME);\n\telse if (strncmp(str, \"attribute\", 9) == 0)\n\t\tprintf(HELP_ATTRIBUTE);\n\telse if (strncmp(str, \"serverpublic\", 12) == 0)\n\t\tprintf(HELP_SERVERPUBLIC);\n\telse if (strncmp(str, \"serverro\", 8) == 0)\n\t\tprintf(HELP_SERVERRO);\n\telse if (strncmp(str, \"queuepublic\", 11) == 0)\n\t\tprintf(HELP_QUEUEPUBLIC);\n\telse if (strncmp(str, 
\"queueexec\", 9) == 0)\n\t\tprintf(HELP_QUEUEEXEC);\n\telse if (strncmp(str, \"queueroute\", 10) == 0)\n\t\tprintf(HELP_QUEUEROUTE);\n\telse if (strncmp(str, \"queuero\", 7) == 0)\n\t\tprintf(HELP_QUEUERO);\n\telse if (strncmp(str, \"nodeattr\", 8) == 0)\n\t\tprintf(HELP_NODEATTR);\n\telse\n\t\tprintf(\"No help available on: %s\\nCheck the PBS Reference Guide for more help.\\n\", str);\n\n\tprintf(\"\\n\");\n}\n\n/**\n * @brief\n *\tparse - parse the qmgr request\n *\n * @param[in]  request      The text of a single qmgr command.\n * @param[out] oper    Indicates the operation: create, delete, set, unset, list, etc.\n * @param[out] type    Indicates the object type: server, queue, node, resource, sched, or hook.\n * @param[out] names   The names of the objects.\n * @param[out] attr    The attribute list with operators.\n *\n * @return Error code\n * @retval       0  Success\n * @retval       !0 Failure\n *\n * Note:\n *  The syntax of a qmgr directive is:\n *\n *      operation type [namelist] [attributelist]\n *\n *  where\n *      operation       create, delete, set, unset, list or print\n *      type            server, queue, node, or resource\n *      namelist        comma-delimited list of names with no white space,\n *                      can only be defaulted if the type is server\n *      attributelist   comma-delimited list of name or name-value pairs\n *\n *  If the operation is quit or exit, the program exits\n *  immediately.\n *\n */\nint\nparse(char *request, int *oper, int *type, char **names, struct attropl **attr)\n{\n\tint error;\n\tint lp;\t /* Length of current string */\n\tint len; /* amount parsed by parse_request */\n\tchar **req = NULL;\n\tint names_len = 0;\n\tchar *p;\n\n\t/* skip over leading whitespace */\n\tp = request;\n\twhile (White(*p))\n\t\tp++;\n\tif (*p == '\\0')\n\t\treturn -1;\n\trequest = p;\n\n\t/* request was all right, if history enabled, add to history */\n#ifdef QMGR_HAVE_HIST\n\tif (qmgr_hist_enabled == 1)\n\t\tqmgr_add_history(p);\n#endif\n\n\t/* 
parse the request into parts */\n\tlen = parse_request(request, &req);\n\n\tif (len != 0) { /* parse_request succeeded; it returns 0 on error */\n\t\tlp = strlen(req[IND_CMD]);\n\n\t\tif (strncmp(req[0], \"create\", lp) == 0)\n\t\t\t*oper = MGR_CMD_CREATE;\n\t\telse if (strncmp(req[0], \"delete\", lp) == 0)\n\t\t\t*oper = MGR_CMD_DELETE;\n\t\telse if (strncmp(req[0], \"set\", lp) == 0)\n\t\t\t*oper = MGR_CMD_SET;\n\t\telse if (strncmp(req[0], \"unset\", lp) == 0)\n\t\t\t*oper = MGR_CMD_UNSET;\n\t\telse if (strncmp(req[0], \"list\", lp) == 0)\n\t\t\t*oper = MGR_CMD_LIST;\n\t\telse if (strncmp(req[0], \"print\", lp) == 0)\n\t\t\t*oper = MGR_CMD_PRINT;\n\t\telse if (strncmp(req[0], \"active\", lp) == 0)\n\t\t\t*oper = MGR_CMD_ACTIVE;\n\t\telse if (strncmp(req[0], \"import\", lp) == 0)\n\t\t\t*oper = MGR_CMD_IMPORT;\n\t\telse if (strncmp(req[0], \"export\", lp) == 0)\n\t\t\t*oper = MGR_CMD_EXPORT;\n\t\telse if (strncmp(req[0], \"help\", lp) == 0) {\n\t\t\tshow_help(req[1]);\n\t\t\tCLEAN_UP_REQ(req)\n\t\t\treturn -1;\n\t\t} else if (strncmp(req[0], \"?\", lp) == 0) {\n\t\t\tshow_help(req[1]);\n\t\t\tCLEAN_UP_REQ(req)\n\t\t\treturn -1;\n\t\t} else if (strncmp(req[0], \"quit\", lp) == 0) {\n\t\t\tCLEAN_UP_REQ(req)\n\t\t\tclean_up_and_exit(0);\n\t\t} else if (strncmp(req[0], \"exit\", lp) == 0) {\n\t\t\tCLEAN_UP_REQ(req)\n\t\t\tclean_up_and_exit(0);\n\t\t}\n#ifdef QMGR_HAVE_HIST\n\t\telse if (strncmp(req[0], \"history\", lp) == 0) {\n\t\t\tqmgr_list_history(req[1] ? 
atol(req[1]) : QMGR_HIST_SIZE);\n\t\t\tfree(request);\n\t\t\treturn -1;\n\t\t}\n#endif\n\t\telse {\n\t\t\tPSTDERR1(\"qmgr: Illegal operation: %s\\n\"\n\t\t\t\t \"Try 'help' if you are having trouble.\\n\",\n\t\t\t\t req[IND_CMD])\n\t\t\tCLEAN_UP_REQ(req)\n\t\t\treturn 1;\n\t\t}\n\n\t\tif (EOL(req[IND_OBJ])) {\n\t\t\tpstderr(\"qmgr: No object type given\\n\");\n\t\t\tCLEAN_UP_REQ(req)\n\t\t\treturn 2;\n\t\t}\n\n\t\tlp = strlen(req[IND_OBJ]);\n\t\tif (strncmp(req[1], \"server\", lp) == 0)\n\t\t\t*type = MGR_OBJ_SERVER;\n\t\telse if ((strncmp(req[1], \"queue\", lp) == 0) ||\n\t\t\t (strncmp(req[1], \"queues\", lp) == 0))\n\t\t\t*type = MGR_OBJ_QUEUE;\n\t\telse if ((strncmp(req[1], \"node\", lp) == 0) ||\n\t\t\t (strncmp(req[1], \"nodes\", lp) == 0))\n\t\t\t*type = MGR_OBJ_NODE;\n\t\telse if (strncmp(req[1], \"resource\", lp) == 0)\n\t\t\t*type = MGR_OBJ_RSC;\n\t\telse if (strncmp(req[1], \"sched\", lp) == 0)\n\t\t\t*type = MGR_OBJ_SCHED;\n\t\telse if (strncmp(req[1], SITE_HOOK, lp) == 0)\n\t\t\t*type = MGR_OBJ_SITE_HOOK;\n\t\telse if (strncmp(req[1], PBS_HOOK, lp) == 0)\n\t\t\t*type = MGR_OBJ_PBS_HOOK;\n\t\telse {\n\t\t\tPSTDERR1(\"qmgr: Illegal object type: %s.\\n\", req[IND_OBJ])\n\t\t\tCLEAN_UP_REQ(req)\n\t\t\treturn 2;\n\t\t}\n\n\t\tif (!EOL(req[IND_NAME])) {\n\t\t\tif ((*type != MGR_OBJ_SITE_HOOK) && (*type != MGR_OBJ_PBS_HOOK) && (*type != MGR_OBJ_RSC) &&\n\t\t\t    is_attr(*type, req[IND_NAME], TYPE_ATTR_ALL)) {\n\t\t\t\tlen -= strlen(req[IND_NAME]);\n\t\t\t\treq[IND_NAME][0] = '\\0';\n\t\t\t} else if ((error = check_list(req[IND_NAME], *type))) {\n\t\t\t\tpstderr(syntaxerr);\n\t\t\t\tCaretErr(request, len - (int) strlen(req[IND_NAME]) + error - 1);\n\t\t\t\tCLEAN_UP_REQ(req)\n\t\t\t\treturn 3;\n\t\t\t} else {\n\t\t\t\tnames_len = strlen(req[IND_NAME]);\n\t\t\t\t*names = (char *) malloc(names_len + 1);\n\t\t\t\tif (*names == NULL) {\n\t\t\t\t\tfprintf(stderr, \"malloc failure (errno %d)\\n\", errno);\n\t\t\t\t\texit(1);\n\t\t\t\t}\n\t\t\t\tpbs_strncpy(*names, 
req[IND_NAME], names_len + 1);\n\t\t\t}\n\t\t}\n\n\t\t/* Get attribute list; remaining part of the request */\n\t\tif ((*oper != MGR_CMD_IMPORT) && (*oper != MGR_CMD_EXPORT) &&\n\t\t    ((error = attributes(request + len, attr, *oper)) != 0)) {\n\t\t\tpstderr(syntaxerr);\n\t\t\tCaretErr(request, len + error);\n\t\t\tCLEAN_UP_REQ(req)\n\t\t\treturn 4;\n\t\t} else if ((*oper == MGR_CMD_IMPORT) &&\n\t\t\t   ((error = params_import(request + len, attr, *oper)) != 0)) {\n\t\t\tpstderr(syntaxerr);\n\t\t\tCaretErr(request, len + error);\n\t\t\tCLEAN_UP_REQ(req)\n\t\t\treturn 4;\n\t\t} else if ((*oper == MGR_CMD_EXPORT) &&\n\t\t\t   ((error = params_export(request + len, attr, *oper)) != 0)) {\n\t\t\tpstderr(syntaxerr);\n\t\t\tCaretErr(request, len + error);\n\t\t\tCLEAN_UP_REQ(req)\n\t\t\treturn 4;\n\t\t} else if ((*oper == MGR_CMD_SET || *oper == MGR_CMD_UNSET) && *attr == NULL) {\n\t\t\tpstderr(syntaxerr);\n\t\t\tCaretErr(request, len + error);\n\t\t\tCLEAN_UP_REQ(req)\n\t\t\treturn 4;\n\t\t} else if (*oper == MGR_CMD_ACTIVE && *attr != NULL) {\n\t\t\tpstderr(syntaxerr);\n\t\t\tCaretErr(request, len);\n\t\t\tCLEAN_UP_REQ(req)\n\t\t\treturn 4;\n\t\t}\n\t} else {\n\t\tpstderr(syntaxerr);\n\t\tCaretErr(request, len);\n\t\tCLEAN_UP_REQ(req)\n\t\treturn 4;\n\t}\n\tCLEAN_UP_REQ(req)\n\treturn 0;\n}\n\n/**\n * @brief\n *\tpstderr - prints error message to standard error.  It will not be\n *\t\t  printed if the \"-z\" option was given on the command line\n *\n * @param[in] string       The error message to print.\n *\n * @return Void\n * \tGlobal Variable: zopt\n *\n */\nvoid\npstderr(const char *string)\n{\n\tif (!zopt)\n\t\tfprintf(stderr, \"%s\", string);\n}\n\n/**\n * @brief\n *\tpstderr_big - prints error message to standard error.  It handles\n *                    the highly unusual case where the error message\n *                    that's to be generated is too big to be placed in\n *                    the buffer that was allocated.  
In this case the\n *                    message is put out in pieces to stderr.  Kind of\n *                    ugly, but one doesn't really expect this code to be\n *                    called except in the oddest of cases.\n *\n * @param[in]   svrname       name of the server\n * @param[in]\tobjname       name of the object\n * @param[in]\terrmesg       actual error message\n *\n * @return Void\n * \tGlobal Variable: zopt\n *\n */\nvoid\npstderr_big(char *svrname, char *objname, char *errmesg)\n{\n\tpstderr(\"qmgr obj=\");\n\tpstderr(objname);\n\tpstderr(\" svr=\");\n\tpstderr(svrname);\n\tpstderr(\": \");\n\tpstderr(errmesg);\n\tpstderr(\"\\n\");\n}\n\n/**\n * @brief\n *\tfree_objname_list - frees an objname list\n *\n * @param[in]  list - objname list to free\n *\n * @return Void\n *\n */\nvoid\nfree_objname_list(struct objname *list)\n{\n\tstruct objname *cur, *tmp;\n\n\tcur = list;\n\n\twhile (cur != NULL) {\n\t\ttmp = cur->next;\n\t\tfree_objname(cur);\n\t\tcur = tmp;\n\t}\n}\n\n/**\n * @brief\n *\tfind_server - find a server in the server list\n *\n * @param[in] name - the name of the server\n *\n * @return pointer to structure\n * @retval return a pointer to the specified server struct or NULL if not found\n *\n */\nstruct server *\nfind_server(char *name)\n{\n\tstruct server *s = NULL;\n\n\tif (name != NULL) {\n\t\ts = servers;\n\n\t\twhile (s != NULL && strcmp(s->s_name, name))\n\t\t\ts = s->next;\n\t}\n\n\treturn s;\n}\n\n/**\n * @brief\n *\tnew_server - allocate new server object and initialize it\n *\n * @return structure\n * @retval new server object\n *\n */\nstruct server *\nnew_server()\n{\n\tstruct server *new;\n\n\tMstruct(new, struct server);\n\tnew->s_connect = -1;\n\tnew->s_name = NULL;\n\tnew->ref = 0;\n\tnew->s_rsc = NULL;\n\tnew->next = NULL;\n\treturn new;\n}\n\n/**\n * @brief\n *\tfree_server - remove server from servers list and free it up\n *\n * @param[in] svr - the server to free\n *\n * @return Void\n *\n */\nvoid\nfree_server(struct 
server *svr)\n{\n\tstruct server *cur_svr, *prev_svr = NULL;\n\n\t/* remove server from servers list */\n\tcur_svr = servers;\n\twhile (cur_svr != svr && cur_svr != NULL) {\n\t\tprev_svr = cur_svr;\n\t\tcur_svr = cur_svr->next;\n\t}\n\n\tif (cur_svr != NULL) {\n\t\tif (prev_svr == NULL)\n\t\t\tservers = servers->next;\n\t\telse\n\t\t\tprev_svr->next = cur_svr->next;\n\n\t\tif (svr->s_name != NULL)\n\t\t\tfree(svr->s_name);\n\n\t\tif (svr->s_rsc != NULL)\n\t\t\tpbs_statfree(svr->s_rsc);\n\n\t\tsvr->s_name = NULL;\n\t\tsvr->s_connect = -1;\n\t\tsvr->next = NULL;\n\t\tfree(svr);\n\t}\n}\n\n/**\n * @brief\n *\tnew_objname - allocate new object and initialize it\n *\n * @return structure\n * @retval newly allocated object\n *\n */\nstruct objname *\nnew_objname()\n{\n\tstruct objname *new;\n\tMstruct(new, struct objname);\n\tnew->obj_type = MGR_OBJ_NONE;\n\tnew->obj_name = NULL;\n\tnew->svr_name = NULL;\n\tnew->svr = NULL;\n\tnew->next = NULL;\n\n\treturn new;\n}\n\n/**\n * @brief\n *\tfree_objname - frees space used by an objname\n *\n * @param[in] obj - objname to free\n *\n * @return Void\n *\n */\nvoid\nfree_objname(struct objname *obj)\n{\n\tif (obj->obj_name != NULL)\n\t\tfree(obj->obj_name);\n\n\tif (obj->obj_type != MGR_OBJ_SERVER && obj->svr_name != NULL &&\n\t    obj->obj_name != obj->svr_name)\n\t\tfree(obj->svr_name);\n\n\tif (obj->svr != NULL)\n\t\tobj->svr->ref--;\n\n\tobj->svr = NULL;\n\tobj->obj_name = NULL;\n\tobj->svr_name = NULL;\n\tobj->next = NULL;\n\tfree(obj);\n}\n\n/**\n * @brief\n *\tstrings2objname - convert an array of strings to a list of objnames\n *\n * @param[in]  str - array of strings\n * @param[in]  num - number of strings\n * @param[in]  type - type of objects\n *\n * @return structure\n * @retval newly allocated list of objnames\n *\n */\nstruct objname *\nstrings2objname(char **str, int num, int type)\n{\n\tstruct objname *objs = NULL;\t /* head of objname list */\n\tstruct objname *cur_obj;\t /* current object in objname list 
*/\n\tstruct objname *prev_obj = NULL; /* previous object in objname list */\n\tint i;\n\tint len;\n\n\tif (str != NULL) {\n\t\tfor (i = 0; i < num; i++) {\n\t\t\tcur_obj = new_objname();\n\n\t\t\tlen = strlen(str[i]);\n\t\t\tMstring(cur_obj->obj_name, len + 1);\n\t\t\tstrcpy(cur_obj->obj_name, str[i]);\n\t\t\tcur_obj->obj_type = type;\n\t\t\tif (type == MGR_OBJ_SERVER || type == MGR_OBJ_SCHED || type == MGR_OBJ_SITE_HOOK || type == MGR_OBJ_PBS_HOOK)\n\t\t\t\tcur_obj->svr_name = cur_obj->obj_name;\n\n\t\t\tif (prev_obj != NULL)\n\t\t\t\tprev_obj->next = cur_obj;\n\n\t\t\tif (objs == NULL)\n\t\t\t\tobjs = cur_obj;\n\n\t\t\tprev_obj = cur_obj;\n\t\t}\n\t}\n\treturn objs;\n}\n\n/**\n * @brief\n *\tis_valid_object - connects to the server to check if the object exists\n *\t\t\t  on the server it is connected to.\n *\n * @param[in] obj - object to check\n * @param[in] type - type of object\n *\n * @returns Error code\n * @retval  1  Success  valid\n * @retval  0  Failure  not valid\n *\n */\nint\nis_valid_object(struct objname *obj, int type)\n{\n\tstruct batch_status *batch_obj = NULL;\n\t/* we need something to make the pbs_stat* call.\n\t * Even if we only want the object name\n\t */\n\tstatic struct attrl attrq = {NULL, ATTR_qtype, \"\", \"\"};\n\tstatic struct attrl attrn = {NULL, ATTR_NODE_state, \"\", \"\"};\n\tint valid = 1;\n\tchar *errmsg;\n\n\tif (obj != NULL && obj->svr != NULL) {\n\t\tswitch (type) {\n\t\t\tcase MGR_OBJ_QUEUE:\n\t\t\t\tbatch_obj = pbs_statque(obj->svr->s_connect, obj->obj_name, &attrq, NULL);\n\t\t\t\tbreak;\n\n\t\t\tcase MGR_OBJ_NODE:\n\t\t\t\tbatch_obj = pbs_statvnode(obj->svr->s_connect, obj->obj_name, &attrn, NULL);\n\t\t\t\tbreak;\n\n\t\t\tdefault:\n\t\t\t\tvalid = 0;\n\t\t}\n\n\t\tif (batch_obj == NULL) {\n\t\t\terrmsg = pbs_geterrmsg(obj->svr->s_connect);\n\t\t\tPSTDERR1(\"qmgr: %s.\\n\", errmsg)\n\t\t\tvalid = 0;\n\t\t} else {\n\t\t\t/* if pbs_stat*() returned something, then the object exists */\n\t\t\tvalid = 
1;\n\t\t\tpbs_statfree(batch_obj);\n\t\t}\n\t} else\n\t\tvalid = 1; /* NULL server means all active servers */\n\n\treturn valid;\n}\n\n/**\n * @brief\n *\tdefault_server_name - create an objname struct for the default server\n *\n * @return   structure\n * @retval   newly allocated objname with the default server assigned\n *\n */\nstruct objname *\ndefault_server_name()\n{\n\tstruct objname *obj;\n\n\tobj = new_objname();\n\t/* default server name is the NULL string */\n\tMstring(obj->obj_name, 1);\n\tobj->obj_name[0] = '\\0';\n\tobj->svr_name = obj->obj_name;\n\tobj->obj_type = MGR_OBJ_SERVER;\n\n\treturn obj;\n}\n\n/**\n * @brief\n *\ttemp_objname - set up a temporary objname struct.  This is meant for\n *\t\t       a one time use.  The memory is static.\n *\n * @param[in]  obj_name - name for temp struct\n * @param[in]  svr_name - name of the server for the temp struct\n * @param[in]  svr  - server for temp struct\n *\n * @returns structure\n * @retval  temporary objname\n *\n */\nstruct objname *\ntemp_objname(char *obj_name, char *svr_name, struct server *svr)\n{\n\tstatic struct objname temp = {0, NULL, NULL, NULL, NULL};\n\n\tif (temp.svr != NULL)\n\t\ttemp.svr->ref--;\n\n\ttemp.obj_name = NULL;\n\ttemp.svr_name = NULL;\n\ttemp.svr = NULL;\n\n\ttemp.obj_name = obj_name;\n\ttemp.svr_name = svr_name;\n\ttemp.svr = svr;\n\n\tif (temp.svr != NULL)\n\t\ttemp.svr->ref++;\n\n\treturn &temp;\n}\n\n/**\n * @brief\n *\tclose_non_ref_servers - close all nonreferenced servers\n *\n * @returns Void\n *\n */\nvoid\nclose_non_ref_servers()\n{\n\tstruct server *svr, *tmp_svr;\n\n\tsvr = servers;\n\n\twhile (svr != NULL) {\n\t\ttmp_svr = svr->next;\n\n\t\tif (svr->ref == 0)\n\t\t\tdisconnect_from_server(svr);\n\n\t\tsvr = tmp_svr;\n\t}\n}\n\n/**\n * @brief\n *\tparse_request - parse out the command, object, and possible name\n *\n * @remarks\n *\tFULL: command object name ...\n *\t      command object ...\n *\tNOTE: there does not need to be whitespace around the 
operator\n *\n * @param[in]  request - the request to be processed\n * @param[out] req - array to return data in\n *\t       indices:\n *\t       IND_CMD   : command\n *\t       IND_OBJ   : object\n *\t       IND_NAME  : name\n *\n * If any field is not present, it is left blank.\n * Returns the number of characters parsed; parsing 0 characters indicates an error.\n *\t\tData is passed back via the req variable.\n * @return int\n * @retval 0  Failure\n * @retval The number of characters parsed  Success\n *\n */\nint\nparse_request(char *request, char ***req)\n{\n\tchar *foreptr, *backptr;\n\tint len;\n\tint i = 0;\n\tint chars_parsed = 0;\n\tint error = 0;\n\n\tforeptr = request;\n\t*req = (char **) malloc(MAX_REQ_WORDS * sizeof(char *));\n\tif (*req == NULL) {\n\t\tfprintf(stderr, \"malloc failure (errno %d)\\n\", errno);\n\t\texit(1);\n\t}\n\tfor (i = IND_FIRST; i <= IND_LAST; i++)\n\t\t(*req)[i] = NULL;\n\n\tfor (i = 0; !EOL(*foreptr) && i < MAX_REQ_WORDS && error == 0;) {\n\t\twhile (White(*foreptr))\n\t\t\tforeptr++;\n\n\t\tbackptr = foreptr;\n\t\twhile (!White(*foreptr) && !Oper(foreptr) && !EOL(*foreptr))\n\t\t\tforeptr++;\n\n\t\tlen = foreptr - backptr;\n\n\t\tif (len > strlen(request)) {\n\t\t\terror = 1;\n\t\t\tchars_parsed = (int) (foreptr - request);\n\t\t\tpstderr(\"qmgr: max word length exceeded\\n\");\n\t\t\tCaretErr(request, chars_parsed);\n\t\t}\n\t\t(*req)[i] = (char *) malloc(len + 1);\n\t\tif ((*req)[i] == NULL) {\n\t\t\tfprintf(stderr, \"malloc failure (errno %d)\\n\", errno);\n\t\t\texit(1);\n\t\t}\n\t\t((*req)[i])[len] = '\\0';\n\t\tif (len > 0)\n\t\t\tpbs_strncpy((*req)[i], backptr, len + 1);\n\t\ti++;\n\t}\n\tchars_parsed = foreptr - request;\n\n\treturn error ? 0 : chars_parsed;\n}\n"
  },
  {
    "path": "src/cmds/qmgr_sup.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h>\n\n#include <string.h>\n#include <stdlib.h>\n#include <stdio.h>\n#include <pwd.h>\n\n#include \"pbs_ifl.h\"\n#include \"cmds.h\"\n#include \"qmgr.h\"\n\n#include \"histedit.h\"\n\n#ifdef QMGR_HAVE_HIST\n\nEditLine *el;\nHistEvent ev;\nHistory *qmgrhist;\n\nextern char prompt[];\nextern char contin[];\nextern char *cur_prompt;\nextern const char hist_init_err[];\nextern const char histfile_access_err[];\nextern char qmgr_hist_file[MAXPATHLEN + 1]; /* history file for this user */\n\n/**\n * @brief\n *\tTo print out the prompt you need to use a function.  
This could be\n *\tmade to do something special, but I opt to just have a static prompt.\n *\n * @param[in] e - prompt printing function\n *\n * @return string\n * @retval string containing prompt\n *\n */\nstatic char *\nel_prompt(EditLine *e)\n{\n\treturn cur_prompt;\n}\n\n/**\n * @brief\n *\tTo handle end-of-file when Ctrl-D is pressed.\n *\n * @param[in] Editline pointer\n * @param[in] int - key which caused the invocation\n *\n * @return CC_EOF\n *\n * @par Side Effects: None\n *\n */\nstatic unsigned char\nEOF_handler(EditLine *e, int ch)\n{\n\treturn CC_EOF;\n}\n\n/**\n * @brief\n *\tList the commands stored in qmgr history\n *\n * @param[in] len - Length of history from recent to list\n *\n * @par Side Effects: None\n *\n */\nvoid\nqmgr_list_history(int len)\n{\n\tint i = 0;\n\tint tot;\n\n\tif (len <= 0) {\n\t\tif (len != 0)\n\t\t\tprintf(\"Invalid option\\n\");\n\t\treturn;\n\t}\n\n\tif (history(qmgrhist, &ev, H_GETSIZE) == -1)\n\t\treturn;\n\ttot = ev.num;\n\n\tif (history(qmgrhist, &ev, H_LAST) == -1)\n\t\treturn;\n\n\twhile (1) {\n\t\ti++;\n\t\tif ((ev.str != NULL) && ((i + len) > tot))\n\t\t\tprintf(\"%d\\t%s\\n\", ev.num, ev.str);\n\n\t\tif (history(qmgrhist, &ev, H_PREV) == -1)\n\t\t\treturn;\n\t}\n}\n\n/**\n * @brief\n *\tGet the num-th event from the history\n *\n * @param[in] num - the num-th element to get\n * @param[out] request - return history in newly allocated address\n *\n * @par Side Effects: None\n *\n * @return      Error code\n * @retval  0 - success\n * @retval -1 - Failure\n */\nstatic int\nqmgr_get_history(int num, char **request)\n{\n\tif (history(qmgrhist, &ev, H_LAST) == -1)\n\t\treturn -1;\n\n\twhile (1) {\n\t\tif (ev.num == num) {\n\t\t\tif (ev.str == NULL || (*request = strdup(ev.str)) == NULL)\n\t\t\t\treturn -1;\n\t\t\treturn 0;\n\t\t}\n\t\tif (history(qmgrhist, &ev, H_PREV) == -1)\n\t\t\treturn -1;\n\t}\n\treturn -1;\n}\n\n/**\n * @brief\n *\tInitialize the qmgr history capability\n *\n * @param[in]\tprog - Name of the 
program (qmgr) so that\n * editline can use editrc for any custom settings.\n *\n * @return      Error code\n * @retval  0 - Success\n * @retval -1 - Failure\n *\n * @par Side Effects: None\n *\n */\nint\ninit_qmgr_hist(char *prog)\n{\n\tstruct passwd *pw;\n\tint rc;\n\n\tel = el_init(prog, stdin, stdout, stderr);\n\tel_set(el, EL_PROMPT, &el_prompt);\n\tel_set(el, EL_EDITOR, \"emacs\");\n\tel_set(el, EL_ADDFN, \"EOF_handler\", \"EOF_handler\", &EOF_handler);\n\tel_set(el, EL_BIND, \"^D\", \"EOF_handler\", NULL);\n\n\t/* Initialize the history */\n\tqmgrhist = history_init();\n\tif (qmgrhist == NULL) {\n\t\tfprintf(stderr, \"%s\", hist_init_err);\n\t\treturn -1;\n\t}\n\n\t/* Set the size of the history */\n\tif (history(qmgrhist, &ev, H_SETSIZE, QMGR_HIST_SIZE) == -1) {\n\t\tfprintf(stderr, \"%s\", hist_init_err);\n\t\treturn -1;\n\t}\n\n\t/* set adjacent unique */\n\tif (history(qmgrhist, &ev, H_SETUNIQUE, 1) == -1) {\n\t\tfprintf(stderr, \"%s\", hist_init_err);\n\t\treturn -1;\n\t}\n\n\t/* This sets up the call back functions for history functionality */\n\tel_set(el, EL_HIST, history, qmgrhist);\n\n\tqmgr_hist_file[0] = '\\0';\n\trc = 1;\n\tif ((pw = getpwuid(getuid()))) {\n\t\tsnprintf(qmgr_hist_file, MAXPATHLEN, \"%s/.pbs_qmgr_history\", pw->pw_dir);\n\t\thistory(qmgrhist, &ev, H_LOAD, qmgr_hist_file);\n\t\tif (history(qmgrhist, &ev, H_SAVE, qmgr_hist_file) == -1)\n\t\t\thistory(qmgrhist, &ev, H_CLEAR);\n\t\telse\n\t\t\trc = 0;\n\n\t\tif (rc == 1) {\n\t\t\tsnprintf(qmgr_hist_file, MAXPATHLEN, \"%s/spool/.pbs_qmgr_history_%s\",\n\t\t\t\t pbs_conf.pbs_home_path, pw->pw_name);\n\t\t\thistory(qmgrhist, &ev, H_LOAD, qmgr_hist_file);\n\t\t\tif (history(qmgrhist, &ev, H_SAVE, qmgr_hist_file) == -1)\n\t\t\t\thistory(qmgrhist, &ev, H_CLEAR);\n\t\t\telse\n\t\t\t\trc = 0;\n\t\t}\n\t}\n\n\tif (rc == 1) {\n\t\tfprintf(stderr, histfile_access_err, qmgr_hist_file);\n\t\tqmgr_hist_file[0] = '\\0';\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n * Add a line to history\n *\n * 
@param[in] req - line to be added to history\n *\n * @return - Error code\n * @retval -1 Failure\n * @retval  0 Success\n */\nint\nqmgr_add_history(char *req)\n{\n\tif (history(qmgrhist, &ev, H_ENTER, req) == -1) {\n\t\tfprintf(stderr, \"Failed to set history\\n\");\n\t\treturn -1;\n\t} else if (qmgr_hist_file[0] != '\\0') {\n\t\tif (history(qmgrhist, &ev, H_SAVE, qmgr_hist_file) == -1) {\n\t\t\tfprintf(stderr, \"Failed to save history\\n\");\n\t\t\treturn -1;\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tGet a request from the command prompt with support for history\n *\n * @par Functionality:\n *\tGets a line of input from the user. The user can use up and down arrows\n *\t(emacs style) to recall history.\n *\n * @param[out]\trequest - The buffer into which the user's input is returned\n *\n * @return\t   int\n * @retval 0 - Success\n * @retval 1 - Failure\n *\n * @par Side Effects: None\n *\n */\n\nint\nget_request_hist(char **request)\n{\n\tint count;\n\tchar *line;\n\tchar *p;\n\tchar *req;\n\tint req_size;\n\tint cont_char;\n\n\t*request = NULL;\n\treq = NULL;\n\n\t/* loop until we get some data */\n\twhile (1) {\n\t\tcur_prompt = prompt;\n\t\tcont_char = 1;\n\n\t\twhile (cont_char) {\n\t\t\t/* count is the number of characters read.\n\t\t\t line is a const char* of our command line with the trailing \\n */\n\t\t\tif ((line = (char *) el_gets(el, &count)) == NULL) {\n\t\t\t\treturn EOF;\n\t\t\t}\n\n\t\t\tcount--; /* don't count the last \\n */\n\t\t\tif (count <= 0) {\n\t\t\t\tcont_char = 0;\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tline[count] = '\\0'; /* remove the trailing \\n */\n\n\t\t\tp = line;\n\t\t\t/* skip over initial white space */\n\t\t\twhile (White(*p))\n\t\t\t\tp++;\n\n\t\t\tif (*p == '#')\n\t\t\t\tcontinue; /* ignore comments */\n\n\t\t\tcount = strlen(p);\n\t\t\tif (count <= 0) {\n\t\t\t\tcont_char = 0;\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tif (p[count - 1] == '\\\\') {\n\t\t\t\tp[count - 1] = ' ';\n\t\t\t} else\n\t\t\t\tcont_char = 
0;\n\n\t\t\tif (*request == NULL) {\n\t\t\t\t*request = strdup(p);\n\t\t\t\tif (*request == NULL)\n\t\t\t\t\treturn 1;\n\t\t\t} else {\n\t\t\t\treq_size = strlen(*request) + count + 1;\n\t\t\t\t*request = realloc(*request, req_size);\n\t\t\t\tif (*request == NULL)\n\t\t\t\t\treturn 1;\n\t\t\t\tstrcat(*request, p);\n\t\t\t}\n\t\t\tcur_prompt = contin;\n\t\t}\n\n\t\tif (*request == NULL)\n\t\t\tcontinue; /* we did not get a good input, continue */\n\n\t\treq = *request;\n\n\t\t/* immediately check if this was a recall of a command from history */\n\t\tif (req[0] == '!') {\n\t\t\tp = &req[1];\n\t\t\tif (qmgr_get_history(atol(p), request) != 0) {\n\t\t\t\tfprintf(stderr, \"No item %s in history\\n\", p);\n\t\t\t\tfree(req);\n\t\t\t\t*request = NULL;\n\t\t\t\treq = NULL;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tfree(req); /* free the old one */\n\t\t}\n\t\treturn 0;\n\t}\n\treturn 1;\n}\n\n#endif\n"
  },
  {
    "path": "src/cmds/qmove.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tqmove.c\n * @brief\n *  qmove - (PBS) move batch job\n *\n * @author\tTerry Heidelberg\n * \t\t\tLivermore Computing\n *\n * @author\tBruce Kelly\n * \t\t\tNational Energy Research Supercomputer Center\n *\n * @author\tLawrence Livermore National Laboratory\n * \t\t\tUniversity of California\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"cmds.h\"\n#include \"pbs_ifl.h\"\n#include <pbs_version.h>\n\nint\nmain(int argc, char **argv, char **envp) /* qmove */\n{\n\tint any_failed = 0;\n\n\tchar job_id[PBS_MAXCLTJOBID];\t     /* from the command line */\n\tchar destination[PBS_MAXSERVERNAME]; /* from the command line */\n\tchar *q_n_out, *s_n_out;\n\n\tchar job_id_out[PBS_MAXCLTJOBID];\n\tchar server_out[MAXSERVERNAME];\n\tchar rmt_server[MAXSERVERNAME];\n\n\t/*test for real deal or just version and exit*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\tif (argc < 3) {\n\t\tstatic char usage[] = \"usage: qmove destination job_identifier...\\n\";\n\t\tstatic char usag2[] = \"       qmove --version\\n\";\n\t\tfprintf(stderr, \"%s\", usage);\n\t\tfprintf(stderr, \"%s\", usag2);\n\t\texit(2);\n\t}\n\n\tpbs_strncpy(destination, argv[1], sizeof(destination));\n\tif (parse_destination_id(destination, &q_n_out, &s_n_out)) {\n\t\tfprintf(stderr, \"qmove: 
illegally formed destination: %s\\n\", destination);\n\t\texit(2);\n\t}\n\n\t/*perform needed security library initializations (including none)*/\n\n\tif (CS_client_init() != CS_SUCCESS) {\n\t\tfprintf(stderr, \"qmove: unable to initialize security library.\\n\");\n\t\texit(2);\n\t}\n\n\tfor (optind = 2; optind < argc; optind++) {\n\t\tint connect;\n\t\tint stat = 0;\n\t\tint located = FALSE;\n\n\t\tpbs_strncpy(job_id, argv[optind], sizeof(job_id));\n\t\tif (get_server(job_id, job_id_out, server_out)) {\n\t\t\tfprintf(stderr, \"qmove: illegally formed job identifier: %s\\n\", job_id);\n\t\t\tany_failed = 1;\n\t\t\tcontinue;\n\t\t}\n\tcnt:\n\t\tconnect = cnt2server(server_out);\n\t\tif (connect <= 0) {\n\t\t\tfprintf(stderr, \"qmove: cannot connect to server %s (errno=%d)\\n\",\n\t\t\t\tpbs_server, pbs_errno);\n\t\t\tany_failed = pbs_errno;\n\t\t\tcontinue;\n\t\t}\n\n\t\tstat = pbs_movejob(connect, job_id_out, destination, NULL);\n\t\tif (stat && (pbs_errno != PBSE_UNKJOBID)) {\n\t\t\tif (stat != PBSE_NEEDQUET) {\n\t\t\t\tprt_job_err(\"qmove\", connect, job_id_out);\n\t\t\t\tany_failed = pbs_errno;\n\t\t\t} else {\n\t\t\t\tfprintf(stderr, \"qmove: Queue type not set for queue \\'%s\\'\\n\", destination);\n\t\t\t}\n\t\t} else if (stat && (pbs_errno == PBSE_UNKJOBID) && !located) {\n\t\t\tlocated = TRUE;\n\t\t\tif (locate_job(job_id_out, server_out, rmt_server)) {\n\t\t\t\tpbs_disconnect(connect);\n\t\t\t\tpbs_strncpy(server_out, rmt_server, sizeof(server_out));\n\t\t\t\tgoto cnt;\n\t\t\t}\n\t\t\tprt_job_err(\"qmove\", connect, job_id_out);\n\t\t\tany_failed = pbs_errno;\n\t\t}\n\n\t\tpbs_disconnect(connect);\n\t}\n\n\t/*cleanup security library initializations before exiting*/\n\tCS_close_app();\n\n\texit(any_failed);\n}\n"
  },
  {
    "path": "src/cmds/qmsg.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tqmsg.c\n * @brief\n * \tqmsg - (PBS) send message to batch job\n *\n * @author\tTerry Heidelberg\n * \t\t\tLivermore Computing\n *\n * @author\tBruce Kelly\n * \t\t\tNational Energy Research Supercomputer Center\n *\n * @author\tLawrence Livermore National Laboratory\n * \t\t\tUniversity of California\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"cmds.h\"\n#include \"pbs_ifl.h\"\n#include <pbs_version.h>\n\nint\nmain(int argc, char **argv, char **envp) /* qmsg */\n{\n\tint c;\n\tint to_file;\n\tint errflg = 0;\n\tint any_failed = 0;\n\n\tchar job_id[PBS_MAXCLTJOBID]; /* from the command line */\n\n\tchar job_id_out[PBS_MAXCLTJOBID];\n\tchar server_out[MAXSERVERNAME];\n\tchar rmt_server[MAXSERVERNAME];\n\n#define MAX_MSG_STRING_LEN 256\n\tchar msg_string[MAX_MSG_STRING_LEN + 1];\n\n#define GETOPT_ARGS \"EO\"\n\n\t/*test for real deal or just version and exit*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\tmsg_string[0] = '\\0';\n\tto_file = 0;\n\n\twhile ((c = getopt(argc, argv, GETOPT_ARGS)) != EOF)\n\t\tswitch (c) {\n\t\t\tcase 'E':\n\t\t\t\tto_file |= MSG_ERR;\n\t\t\t\tbreak;\n\t\t\tcase 'O':\n\t\t\t\tto_file |= MSG_OUT;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\terrflg++;\n\t\t}\n\tif (to_file == 0)\n\t\tto_file = MSG_ERR; /* default 
*/\n\n\tif (errflg || ((optind + 1) >= argc)) {\n\t\tstatic char usage[] =\n\t\t\t"usage: qmsg [-O] [-E] msg_string job_identifier...\\n";\n\t\tstatic char usag2[] =\n\t\t\t"       qmsg --version\\n";\n\t\tfprintf(stderr, \"%s\", usage);\n\t\tfprintf(stderr, \"%s\", usag2);\n\t\texit(2);\n\t}\n\n\tpbs_strncpy(msg_string, argv[optind], sizeof(msg_string));\n\n\t/*perform needed security library initializations (including none)*/\n\n\tif (CS_client_init() != CS_SUCCESS) {\n\t\tfprintf(stderr, \"qmsg: unable to initialize security library.\\n\");\n\t\texit(2);\n\t}\n\n\tfor (optind++; optind < argc; optind++) {\n\t\tint connect;\n\t\tint stat = 0;\n\t\tint located = FALSE;\n\n\t\tpbs_strncpy(job_id, argv[optind], sizeof(job_id));\n\t\tif (get_server(job_id, job_id_out, server_out)) {\n\t\t\tfprintf(stderr, \"qmsg: illegally formed job identifier: %s\\n\", job_id);\n\t\t\tany_failed = 1;\n\t\t\tcontinue;\n\t\t}\n\tcnt:\n\t\tconnect = cnt2server(server_out);\n\t\tif (connect <= 0) {\n\t\t\tfprintf(stderr, \"qmsg: cannot connect to server %s (errno=%d)\\n\",\n\t\t\t\tpbs_server, pbs_errno);\n\t\t\tany_failed = pbs_errno;\n\t\t\tcontinue;\n\t\t}\n\n\t\tstat = pbs_msgjob(connect, job_id_out, to_file, msg_string, NULL);\n\t\tif (stat && (pbs_errno != PBSE_UNKJOBID)) {\n\t\t\tprt_job_err(\"qmsg\", connect, job_id_out);\n\t\t\tany_failed = pbs_errno;\n\t\t} else if (stat && (pbs_errno == PBSE_UNKJOBID) && !located) {\n\t\t\tlocated = TRUE;\n\t\t\tif (locate_job(job_id_out, server_out, rmt_server)) {\n\t\t\t\tpbs_disconnect(connect);\n\t\t\t\tpbs_strncpy(server_out, rmt_server, sizeof(server_out));\n\t\t\t\tgoto cnt;\n\t\t\t}\n\t\t\tprt_job_err(\"qmsg\", connect, job_id_out);\n\t\t\tany_failed = pbs_errno;\n\t\t}\n\n\t\tpbs_disconnect(connect);\n\t}\n\n\t/*cleanup security library initializations before exiting*/\n\tCS_close_app();\n\n\texit(any_failed);\n}\n"
  },
  {
    "path": "src/cmds/qorder.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tqorder.c\n * @brief\n * \tqorder - change the order of two batch jobs in a queue\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include \"cmds.h\"\n#include <pbs_version.h>\n#include \"pbs_ifl.h\"\n#include \"net_connect.h\"\n\nint\nmain(int argc, char **argv, char **envp)\n{\n\tchar job_id1[PBS_MAXCLTJOBID + 1]; /* from the command line */\n\tchar job_id2[PBS_MAXCLTJOBID + 1]; /* from the command line */\n\tchar job_id1_out[PBS_MAXCLTJOBID + 1];\n\tchar job_id2_out[PBS_MAXCLTJOBID + 1];\n\tchar *pn;\n\tint port1 = 0;\n\tint port2 = 0;\n\tchar server_out1[MAXSERVERNAME + 1];\n\tchar server_out2[MAXSERVERNAME + 1];\n\tchar svrtmp[MAXSERVERNAME + 1];\n\tint connect;\n\tint stat = 0;\n\tint rc = 0;\n\n\t/*test for real deal or just version and exit*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\tif (argc != 3) {\n\t\tstatic char usage[] = \"usage: qorder job_identifier job_identifier\\n\";\n\t\tstatic char usag2[] = \"       qorder --version\\n\";\n\t\tfprintf(stderr, \"%s\", usage);\n\t\tfprintf(stderr, \"%s\", usag2);\n\t\texit(2);\n\t}\n\n\tpbs_strncpy(job_id1, argv[1], sizeof(job_id1));\n\tpbs_strncpy(job_id2, argv[2], sizeof(job_id2));\n\tsvrtmp[0] = '\\0';\n\tif (get_server(job_id1, job_id1_out, svrtmp)) {\n\t\tfprintf(stderr, 
\"qorder: illegally formed job identifier: %s\\n\", job_id1);\n\t\texit(1);\n\t}\n\tif (*svrtmp == '\\0') {\n\t\tif ((pn = pbs_default()) != NULL) {\n\t\t\tpbs_strncpy(svrtmp, pn, sizeof(svrtmp));\n\t\t} else {\n\t\t\tfprintf(stderr, \"qorder: could not get default server: %s\\n\", job_id1);\n\t\t\texit(1);\n\t\t}\n\t}\n\n\tif ((pn = strchr(svrtmp, (int) ':')) != 0) {\n\t\t*pn = '\\0';\n\t\tport1 = atoi(pn + 1);\n\t}\n\tif (get_fullhostname(svrtmp, server_out1, MAXSERVERNAME) != 0) {\n\t\tfprintf(stderr, \"qorder: invalid server name: %s\\n\", job_id1);\n\t\texit(1);\n\t}\n\n\tsvrtmp[0] = '\\0';\n\tif (get_server(job_id2, job_id2_out, svrtmp)) {\n\t\tfprintf(stderr, \"qorder: illegally formed job identifier: %s\\n\", job_id2);\n\t\texit(1);\n\t}\n\tif (*svrtmp == '\\0') {\n\t\tif ((pn = pbs_default()) != NULL) {\n\t\t\tpbs_strncpy(svrtmp, pn, sizeof(svrtmp));\n\t\t} else {\n\t\t\tfprintf(stderr, \"qorder: could not get default server: %s\\n\", job_id1);\n\t\t\texit(1);\n\t\t}\n\t}\n\tif ((pn = strchr(svrtmp, (int) ':')) != 0) {\n\t\t*pn = '\\0';\n\t\tport2 = atoi(pn + 1);\n\t}\n\tif (get_fullhostname(svrtmp, server_out2, MAXSERVERNAME) != 0) {\n\t\tfprintf(stderr, \"qorder: invalid server name: %s\\n\", job_id2);\n\t\texit(1);\n\t}\n\tif ((strcmp(server_out1, server_out2) != 0) || (port1 != port2)) {\n\t\tfprintf(stderr, \"qorder: both jobs ids must specify the same server\\n\");\n\t\texit(1);\n\t}\n\tif (pn)\n\t\t*pn = ':'; /* restore : if it was present */\n\n\t/*perform needed security library initializations (including none)*/\n\n\tif (CS_client_init() != CS_SUCCESS) {\n\t\tfprintf(stderr, \"qorder: unable to initialize security library.\\n\");\n\t\texit(1);\n\t}\n\n\tconnect = cnt2server(svrtmp);\n\tif (connect <= 0) {\n\t\tfprintf(stderr, \"qorder: cannot connect to server %s (errno=%d)\\n\",\n\t\t\tpbs_server, pbs_errno);\n\t\texit(1);\n\t\t;\n\t}\n\n\tstat = pbs_orderjob(connect, job_id1_out, job_id2_out, NULL);\n\tif (stat) {\n\n\t\tchar 
job_id_both[PBS_MAXCLTJOBID + PBS_MAXCLTJOBID + 3];\n\n\t\tstrcpy(job_id_both, job_id1_out);\n\t\tstrcat(job_id_both, \" or \");\n\t\tstrcat(job_id_both, job_id2_out);\n\t\tprt_job_err(\"qorder\", connect, job_id_both);\n\t\trc = pbs_errno;\n\t}\n\n\tpbs_disconnect(connect);\n\n\t/*cleanup security library initializations before exiting*/\n\tCS_close_app();\n\n\texit(rc);\n}\n"
  },
  {
    "path": "src/cmds/qrerun.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tqrerun.c\n * @brief\n * \tqrerun - (PBS) rerun batch job\n *\n * @author\t   \tTerry Heidelberg\n * \t\t\t\tLivermore Computing\n *\n * @author     \tBruce Kelly\n * \t\t\t\tNational Energy Research Supercomputer Center\n *\n * @author     \tLawrence Livermore National Laboratory\n * \t\t\t\tUniversity of California\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"cmds.h\"\n#include \"pbs_ifl.h\"\n#include <pbs_version.h>\n\nint\nmain(int argc, char **argv, char **envp) /* qrerun */\n{\n\tint any_failed = 0;\n\n\tchar job_id[PBS_MAXCLTJOBID]; /* from the command line */\n\n\tchar job_id_out[PBS_MAXCLTJOBID];\n\tchar server_out[PBS_MAXSERVERNAME];\n\tchar rmt_server[MAXSERVERNAME];\n\tchar *extra = NULL;\n\tchar *force = \"force\";\n\tint i;\n\tstatic char usage[] = \"usage: qrerun [-W force] job_identifier...\\n\";\n\tstatic char usag2[] = \"       qrerun --version\\n\";\n\n\t/*test for real deal or just version and exit*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\twhile ((i = getopt(argc, argv, \"W:\")) != EOF) {\n\t\tswitch (i) {\n\t\t\tcase 'W':\n\t\t\t\tif (strcmp(optarg, force) == 0)\n\t\t\t\t\textra = force;\n\t\t\t\telse {\n\t\t\t\t\tfprintf(stderr, \"%s\", usage);\n\t\t\t\t\tfprintf(stderr, \"%s\", 
usag2);\n\t\t\t\t\texit(2);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tfprintf(stderr, \"%s\", usage);\n\t\t\t\tfprintf(stderr, \"%s\", usag2);\n\t\t\t\texit(2);\n\t\t}\n\t}\n\n\tif (optind == argc) {\n\t\tfprintf(stderr, \"%s\", usage);\n\t\tfprintf(stderr, \"%s\", usag2);\n\t\texit(2);\n\t}\n\n\t/*perform needed security library initializations (including none)*/\n\n\tif (CS_client_init() != CS_SUCCESS) {\n\t\tfprintf(stderr, \"qrerun: unable to initialize security library.\\n\");\n\t\texit(1);\n\t}\n\n\tfor (; optind < argc; optind++) {\n\t\tint connect;\n\t\tint stat = 0;\n\t\tint located = FALSE;\n\n\t\tpbs_strncpy(job_id, argv[optind], sizeof(job_id));\n\t\tif (get_server(job_id, job_id_out, server_out)) {\n\t\t\tfprintf(stderr, \"qrerun: illegally formed job identifier: %s\\n\", job_id);\n\t\t\tany_failed = 1;\n\t\t\tcontinue;\n\t\t}\n\tcnt:\n\t\tconnect = cnt2server(server_out);\n\t\tif (connect <= 0) {\n\t\t\tfprintf(stderr, \"qrerun: cannot connect to server %s (errno=%d)\\n\",\n\t\t\t\tpbs_server, pbs_errno);\n\t\t\tany_failed = pbs_errno;\n\t\t\tcontinue;\n\t\t}\n\n\t\tstat = pbs_rerunjob(connect, job_id_out, extra);\n\t\tif (stat && (pbs_errno != PBSE_UNKJOBID)) {\n\t\t\tprt_job_err(\"qrerun\", connect, job_id_out);\n\t\t\tany_failed = pbs_errno;\n\t\t} else if (stat && (pbs_errno == PBSE_UNKJOBID) && !located) {\n\t\t\tlocated = TRUE;\n\t\t\tif (locate_job(job_id_out, server_out, rmt_server)) {\n\t\t\t\tpbs_disconnect(connect);\n\t\t\t\tpbs_strncpy(server_out, rmt_server, sizeof(server_out));\n\t\t\t\tgoto cnt;\n\t\t\t}\n\t\t\tprt_job_err(\"qrerun\", connect, job_id_out);\n\t\t\tany_failed = pbs_errno;\n\t\t}\n\n\t\tpbs_disconnect(connect);\n\t}\n\n\t/*cleanup security library initializations before exiting*/\n\tCS_close_app();\n\n\texit(any_failed);\n}\n"
  },
  {
    "path": "src/cmds/qrls.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tqrls.c\n * qrls - (PBS) release a hold on a batch job\n *\n * @author\t   \tTerry Heidelberg\n * \t\t\t\tLivermore Computing\n *\n * @author     \tBruce Kelly\n * \t\t\t\tNational Energy Research Supercomputer Center\n *\n * @author     \tLawrence Livermore National Laboratory\n * \t\t\t\tUniversity of California\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"cmds.h\"\n#include \"pbs_ifl.h\"\n#include <pbs_version.h>\n\nint\nmain(int argc, char **argv, char **envp) /* qrls */\n{\n\tint c;\n\tint errflg = 0;\n\tint any_failed = 0;\n\tint u_cnt, o_cnt, s_cnt, n_cnt, p_cnt;\n\tchar *pc;\n\n\tchar job_id[PBS_MAXCLTJOBID]; /* from the command line */\n\n\tchar job_id_out[PBS_MAXCLTJOBID];\n\tchar server_out[MAXSERVERNAME];\n\tchar rmt_server[MAXSERVERNAME];\n\n#define MAX_HOLD_TYPE_LEN 32\n\tchar hold_type[MAX_HOLD_TYPE_LEN + 1];\n\n#define GETOPT_ARGS \"h:\"\n\n\t/*test for real deal or just version and exit*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\thold_type[0] = '\\0';\n\n\twhile ((c = getopt(argc, argv, GETOPT_ARGS)) != EOF)\n\t\tswitch (c) {\n\t\t\tcase 'h':\n\t\t\t\twhile (isspace((int) *optarg))\n\t\t\t\t\toptarg++;\n\t\t\t\tif (strlen(optarg) == 0) {\n\t\t\t\t\tfprintf(stderr, \"qrls: illegal -h 
value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tpc = optarg;\n\t\t\t\tu_cnt = o_cnt = s_cnt = n_cnt = p_cnt = 0;\n\t\t\t\twhile (*pc) {\n\t\t\t\t\tif (*pc == 'u')\n\t\t\t\t\t\tu_cnt++;\n\t\t\t\t\telse if (*pc == 'o')\n\t\t\t\t\t\to_cnt++;\n\t\t\t\t\telse if (*pc == 's')\n\t\t\t\t\t\ts_cnt++;\n\t\t\t\t\telse if (*pc == 'p')\n\t\t\t\t\t\tp_cnt++;\n\t\t\t\t\telse if (*pc == 'n')\n\t\t\t\t\t\tn_cnt++;\n\t\t\t\t\telse {\n\t\t\t\t\t\tfprintf(stderr, \"qrls: illegal -h value\\n\");\n\t\t\t\t\t\terrflg++;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\tpc++;\n\t\t\t\t}\n\t\t\t\tif (n_cnt && (u_cnt + o_cnt + s_cnt + p_cnt)) {\n\t\t\t\t\tfprintf(stderr, \"qrls: illegal -h value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tpbs_strncpy(hold_type, optarg, sizeof(hold_type));\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\terrflg++;\n\t\t}\n\n\tif (errflg || optind >= argc) {\n\t\tstatic char usage[] = \"usage: qrls [-h hold_list] job_identifier...\\n\";\n\t\tstatic char usag2[] = \"       qrls --version\\n\";\n\t\tfprintf(stderr, \"%s\", usage);\n\t\tfprintf(stderr, \"%s\", usag2);\n\t\texit(2);\n\t}\n\n\t/*perform needed security library initializations (including none)*/\n\n\tif (CS_client_init() != CS_SUCCESS) {\n\t\tfprintf(stderr, \"qrls: unable to initialize security library.\\n\");\n\t\texit(1);\n\t}\n\n\tfor (; optind < argc; optind++) {\n\t\tint connect;\n\t\tint stat = 0;\n\t\tint located = FALSE;\n\n\t\tpbs_strncpy(job_id, argv[optind], sizeof(job_id));\n\t\tif (get_server(job_id, job_id_out, server_out)) {\n\t\t\tfprintf(stderr, \"qrls: illegally formed job identifier: %s\\n\", job_id);\n\t\t\tany_failed = 1;\n\t\t\tcontinue;\n\t\t}\n\tcnt:\n\t\tconnect = cnt2server(server_out);\n\t\tif (connect <= 0) {\n\t\t\tfprintf(stderr, \"qrls: cannot connect to server %s (errno=%d)\\n\",\n\t\t\t\tpbs_server, pbs_errno);\n\t\t\tany_failed = pbs_errno;\n\t\t\tcontinue;\n\t\t}\n\n\t\tstat = pbs_rlsjob(connect, job_id_out, hold_type, 
NULL);\n\t\tif (stat && (pbs_errno != PBSE_UNKJOBID)) {\n\t\t\tprt_job_err(\"qrls\", connect, job_id_out);\n\t\t\tany_failed = pbs_errno;\n\t\t} else if (stat && (pbs_errno == PBSE_UNKJOBID) && !located) {\n\t\t\tlocated = TRUE;\n\t\t\tif (locate_job(job_id_out, server_out, rmt_server)) {\n\t\t\t\tpbs_disconnect(connect);\n\t\t\t\tpbs_strncpy(server_out, rmt_server, sizeof(server_out));\n\t\t\t\tgoto cnt;\n\t\t\t}\n\t\t\tprt_job_err(\"qrls\", connect, job_id_out);\n\t\t\tany_failed = pbs_errno;\n\t\t}\n\n\t\tpbs_disconnect(connect);\n\t}\n\n\t/*cleanup security library initializations before exiting*/\n\tCS_close_app();\n\n\texit(any_failed);\n}\n"
  },
  {
    "path": "src/cmds/qrun.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tqrun.c\n * @brief\n *  \tThe qrun command forces a batch job to run.\n *\n * @par\tSynopsis:\n *  \tqrun [-H host][-a] job_identifier ...\n *\n * @par\tArguments:\n *  host\n *      The host to run the job at.\n *  job_identifier ...\n *      A list of job_identifiers.  A job_identifier has the following form:\n *          sequence_number[.server_name][@server]\n *\n *  @author\tBruce Kelly\n *  \t\tNational Energy Research Supercomputer Center\n *  \t\tLivermore, CA\n *  \t\tMay, 1993\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"cmds.h\"\n#include \"pbs_ifl.h\"\n#include <pbs_version.h>\n#include <pbs_share.h> /* the comment buffer size declaration*/\n#include \"grunt.h\"\n\nint exitstatus = 0; /* Exit Status */\nint async = 0;\nstatic void execute(char *, char *, char *);\n\nint\nmain(int argc, char **argv)\n{\n\t/*\n\t *  This routine sends a Run Job request to the batch server.  
If the\n\t * batch request is accepted, the server will have started the execution\n\t * of the job.\n\t */\n\n\tchar job[PBS_MAXCLTJOBID];  /* Job Id */\n\tchar server[MAXSERVERNAME]; /* Server name */\n\tchar *location = NULL;\t    /* Where to run the job */\n\n\tstatic char opts[] = \"H:a\"; /* See man getopt */\n\tstatic char *usage = \"Usage: qrun [-a] [-H vnode_specification ] job_identifier_list\\n\"\n\t\t\t     \"       qrun [-a] [-H - ] job_identifier_list\\n\"\n\t\t\t     \"       qrun --version\\n\";\n\tint s;\n\tint errflg = 0;\n\n\t/*test for real deal or just version and exit*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\t/* Command line options */\n\twhile ((s = getopt(argc, argv, opts)) != EOF)\n\t\tswitch (s) {\n\n\t\t\tcase 'H':\n\t\t\t\tif (strlen(optarg) == 0) {\n\t\t\t\t\tfprintf(stderr, \"qrun: illegal -H value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tlocation = optarg;\n\t\t\t\tbreak;\n\n\t\t\tcase 'a':\n\t\t\t\tasync = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase '?':\n\t\t\tdefault:\n\t\t\t\terrflg++;\n\t\t\t\tbreak;\n\t\t}\n\n\tif (errflg || (optind >= argc)) {\n\t\tfprintf(stderr, \"%s\", usage);\n\t\texit(1);\n\t}\n\n\t/*perform needed security library initializations (including none)*/\n\n\tif (CS_client_init() != CS_SUCCESS) {\n\t\tfprintf(stderr, \"qrun: unable to initialize security library.\\n\");\n\t\texit(2);\n\t}\n\n\tfor (; optind < argc; optind++) {\n\t\tif (get_server(argv[optind], job, server)) {\n\t\t\tfprintf(stderr,\n\t\t\t\t\"qrun: illegally formed job identifier: %s\\n\", argv[optind]);\n\t\t\texitstatus = 1;\n\t\t\tcontinue;\n\t\t}\n\t\texecute(job, server, location);\n\t}\n\n\t/*cleanup security library initializations before exiting*/\n\tCS_close_app();\n\n\texit(exitstatus);\n}\n\n/**\n * @brief\n * \texecutes a job\n *\n * @param[in] job - The fully qualified job id.\n * @param[in] server - The name of the server that manages the job.\n * @param[in] location -  
location indicating where to run job\n *\n * @return - Void\n *\n * @File Variables:\n *  exitstatus  Set to two if an error occurs.\n *\n */\nstatic void\nexecute(char *job, char *server, char *location)\n{\n\tint ct;\t /* Connection to the server */\n\tint err; /* Error return from pbs_run */\n\tint out; /* Stores the size of err_msg_buf*/\n\tint located = FALSE;\n\tchar *errmsg;\n\tchar err_msg_buf[COMMENT_BUF_SIZE] = {'\\0'}; /* generic buffer - comments & logging*/\n\tchar rmt_server[MAXSERVERNAME];\n\ncnt:\n\tif ((ct = cnt2server(server)) > 0) {\n\t\tif (async)\n\t\t\terr = pbs_asyrunjob(ct, job, location, NULL);\n\t\telse\n\t\t\terr = pbs_runjob(ct, job, location, NULL);\n\n\t\tif (err && (pbs_errno != PBSE_UNKJOBID)) {\n\t\t\terrmsg = pbs_geterrmsg(ct);\n\t\t\tif (errmsg != NULL) {\n\t\t\t\tif (pbs_errno == PBSE_UNKNODE) {\n\t\t\t\t\tout = snprintf(err_msg_buf, sizeof(err_msg_buf), \"qrun: %s %s\", errmsg, location);\n\t\t\t\t\tif (out >= sizeof(err_msg_buf)) {\n\t\t\t\t\t\tfprintf(stderr, \"%s...\\n\", err_msg_buf);\n\t\t\t\t\t} else {\n\t\t\t\t\t\tfprintf(stderr, \"%s\\n\", err_msg_buf);\n\t\t\t\t\t}\n\n\t\t\t\t} else {\n\t\t\t\t\tprt_job_err(\"qrun\", ct, job);\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tfprintf(stderr, \"qrun: Server returned error %d for job\\n\", pbs_errno);\n\t\t\t}\n\t\t\texitstatus = 2;\n\t\t} else if (err && (pbs_errno == PBSE_UNKJOBID) && !located) {\n\t\t\tlocated = TRUE;\n\t\t\tif (locate_job(job, server, rmt_server)) {\n\t\t\t\tpbs_disconnect(ct);\n\t\t\t\tpbs_strncpy(server, rmt_server, MAXSERVERNAME);\n\t\t\t\tgoto cnt;\n\t\t\t}\n\t\t\tprt_job_err(\"qrun\", ct, job);\n\t\t\texitstatus = 2;\n\t\t}\n\t\tpbs_disconnect(ct);\n\t} else {\n\t\tfprintf(stderr,\n\t\t\t\"qrun: could not connect to server %s (%d)\\n\", server, pbs_errno);\n\t\texitstatus = 2;\n\t}\n}\n"
  },
  {
    "path": "src/cmds/qselect.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tqselect.c\n * @brief\n * \tqselect - (PBS) select batch job\n *\n * @author     \tTerry Heidelberg\n *\t\t\t\tLivermore Computing\n *\n * @author     \tBruce Kelly\n * \t\t\t\tNational Energy Research Supercomputer Center\n *\n * @author     \tLawrence Livermore National Laboratory\n * \t\t\t\tUniversity of California\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"cmds.h\"\n#include <pbs_version.h>\n\n#define OPSTRING_LEN 4\n#define OP_LEN 2\n#define OP_ENUM_LEN 6\n#define MAX_OPTARG_LEN 256\n#define MAX_RESOURCE_NAME_LEN 256\n#define GETOPT_ARGS \"a:A:c:h:HJl:N:p:q:r:s:t:Tu:xP:\"\nstatic char *opstring_vals[] = {\"eq\", \"ne\", \"ge\", \"gt\", \"le\", \"lt\"};\nstatic enum batch_op opstring_enums[] = {EQ, NE, GE, GT, LE, LT};\n\n/**\n * @brief\n *\tsets the attribute details\n *\n * @param[out] list - attribute list\n * @param[in] a_name - attribute name\n * @param[in] r_name - resource name\n * @param[in] v_name - value for the attribute\n *\n * @return Void\n *\n */\nvoid\nset_attrop(struct attropl **list, char *a_name, char *r_name, char *v_name, enum batch_op op)\n{\n\tstruct attropl *attr;\n\n\tattr = (struct attropl *) malloc(sizeof(struct attropl));\n\tif (attr == NULL) {\n\t\tfprintf(stderr, \"qselect: out of memory\\n\");\n\t\texit(2);\n\t}\n\n\tif (a_name == NULL) 
{\n\t\tattr->name = NULL;\n\t} else {\n\t\tattr->name = strdup(a_name);\n\t\tif (attr->name == NULL) {\n\t\t\tfprintf(stderr, \"qselect: out of memory\\n\");\n\t\t\texit(2);\n\t\t}\n\t}\n\n\tif (r_name == NULL) {\n\t\tattr->resource = NULL;\n\t} else {\n\t\tattr->resource = strdup(r_name);\n\t\tif (attr->resource == NULL) {\n\t\t\tfprintf(stderr, \"qselect: out of memory\\n\");\n\t\t\texit(2);\n\t\t}\n\t}\n\n\tif (v_name == NULL) {\n\t\tattr->value = NULL;\n\t} else {\n\t\tattr->value = strdup(v_name);\n\t\tif (attr->value == NULL) {\n\t\t\tfprintf(stderr, \"qselect: out of memory\\n\");\n\t\t\texit(2);\n\t\t}\n\t}\n\tattr->op = op;\n\tattr->next = *list;\n\t*list = attr;\n\treturn;\n}\n\n/**\n * @brief\n *\tprocesses the argument string and checks the operation\n *\tto be performed\n *\n * @param[in] optarg -  argument string\n * @param[out] op    -  enum value\n * @param[out] optargout  -  char pointer to hold output\n *\n */\nvoid\ncheck_op(char *optarg, enum batch_op *op, char *optargout)\n{\n\tchar opstring[OP_LEN + 1];\n\tint i;\n\tint cp_pos;\n\n\t*op = EQ; /* default */\n\tcp_pos = 0;\n\n\tif (optarg[0] == '.') {\n\t\tpbs_strncpy(opstring, &optarg[1], OP_LEN + 1);\n\t\tcp_pos = OPSTRING_LEN;\n\t\tfor (i = 0; i < OP_ENUM_LEN; i++) {\n\t\t\tif (strncmp(opstring, opstring_vals[i], OP_LEN) == 0) {\n\t\t\t\t*op = opstring_enums[i];\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\tpbs_strncpy(optargout, &optarg[cp_pos], MAX_OPTARG_LEN + 1);\n\treturn;\n}\n\n/**\n * @brief\n *\tFunction name: get_tsubopt()\n * \tTo parse the optarg and get the sub-option to the -t option\n * \tand populate the appropriate time attribute in attr_t.\n *\n * @param[in]  opt the first character of the \"optarg\" string.\n * @param[out] attr_t string pointer to be populated with appropriate\n * \t\t job level time attribute.\n * @param[out] resc_t string pointer to be populated with appropriate\n *\t\t resource associated with attr_t if necessary (NULL if not).\n *\n * @retval 0 for SUCCESS\n * @retval 
-1 for FAILURE\n *\n * @par Scope of Linkage: local\n *\n */\nstatic int\nget_tsubopt(char opt, char **attr_t, char **resc_t)\n{\n\t*resc_t = NULL;\n\tswitch (opt) {\n\t\tcase 'a':\n\t\t\t*attr_t = ATTR_a;\n\t\t\tbreak;\n\t\tcase 'c':\n\t\t\t*attr_t = ATTR_ctime;\n\t\t\tbreak;\n\t\tcase 'e':\n\t\t\t*attr_t = ATTR_etime;\n\t\t\tbreak;\n\t\tcase 'g':\n\t\t\t*attr_t = ATTR_eligible_time;\n\t\t\tbreak;\n\t\tcase 'm':\n\t\t\t*attr_t = ATTR_mtime;\n\t\t\tbreak;\n\t\tcase 'q':\n\t\t\t*attr_t = ATTR_qtime;\n\t\t\tbreak;\n\t\tcase 's':\n\t\t\t*attr_t = ATTR_stime;\n\t\t\tbreak;\n\t\tcase 't':\n\t\t\t*attr_t = ATTR_estimated;\n\t\t\t*resc_t = \"start_time\";\n\t\t\tbreak;\n\t\tdefault:\n\t\t\treturn -1; /* failure */\n\t}\n\treturn 0; /* success */\n}\n\n/**\n * @brief\n *      To parse the optarg of the -l option, check the comparison\n *      operator, and populate the appropriate resource name and value.\n *\n * @param[in]  optarg -  string containing resource info\n * @param[out] resource_name - string to hold resource name\n * @param[out] op - enum value to hold option\n * @param[out] resource_value - string to hold resource value\n * @param[out] res_pos - string holding the resource info\n *\n * @retval 0 for SUCCESS\n * @retval 1 for FAILURE\n *\n */\nint\ncheck_res_op(char *optarg, char *resource_name, enum batch_op *op, char *resource_value, char **res_pos)\n{\n\tchar opstring[OPSTRING_LEN];\n\tint i;\n\tint hit;\n\tchar *p;\n\n\tp = strchr(optarg, '.');\n\tif (p == NULL || *p == '\\0') {\n\t\tfprintf(stderr, \"qselect: illegal -l value\\n\");\n\t\tfprintf(stderr, \"resource_list: %s\\n\", optarg);\n\t\treturn (1);\n\t} else {\n\t\tpbs_strncpy(resource_name, optarg, p - optarg + 1);\n\t\t*res_pos = p + OPSTRING_LEN;\n\t}\n\tif (p[0] == '.') {\n\t\tpbs_strncpy(opstring, &p[1], OP_LEN + 1);\n\t\thit = 0;\n\t\tfor (i = 0; i < OP_ENUM_LEN; i++) {\n\t\t\tif (strncmp(opstring, opstring_vals[i], OP_LEN) == 0) {\n\t\t\t\t*op = opstring_enums[i];\n\t\t\t\thit = 
1;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tif (!hit) {\n\t\t\tfprintf(stderr, \"qselect: illegal -l value\\n\");\n\t\t\tfprintf(stderr, \"resource_list: %s\\n\", optarg);\n\t\t\treturn (1);\n\t\t}\n\t}\n\tp = strchr(*res_pos, ',');\n\tif (p == NULL) {\n\t\tp = strchr(*res_pos, '\\0');\n\t}\n\tpbs_strncpy(resource_value, *res_pos, p - (*res_pos) + 1);\n\tif (strlen(resource_value) == 0) {\n\t\tfprintf(stderr, \"qselect: illegal -l value\\n\");\n\t\tfprintf(stderr, \"resource_list: %s\\n\", optarg);\n\t\treturn (1);\n\t}\n\t*res_pos = (*p == '\\0') ? p : (p += 1);\n\tif (**res_pos == '\\0' && *(p - 1) == ',') {\n\t\tfprintf(stderr, \"qselect: illegal -l value\\n\");\n\t\tfprintf(stderr, \"resource_list: %s\\n\", optarg);\n\t\treturn (1);\n\t}\n\n\treturn (0); /* ok */\n}\n\n/**\n * @brief\n * \tprints usage format for qselect command\n *\n * @return - Void\n *\n */\nstatic void\nprint_usage()\n{\n\tstatic char usag2[] = \"       qselect --version\\n\";\n\tstatic char usage[] =\n\t\t\"usage: qselect [-a [op]date_time] [-A account_string] [-c [op]interval]\\n\"\n\t\t\"\\t[-h hold_list] [-H] [-J] [-l resource_list] [-N name] [-p [op]priority]\\n\"\n\t\t\"\\t[-q destination] [-r y|n] [-s states] [-t subopt[op]date_time] [-T] [-P project_name]\\n\"\n\t\t\"\\t[-x] [-u user_name]\\n\";\n\tfprintf(stderr, \"%s\", usage);\n\tfprintf(stderr, \"%s\", usag2);\n}\n\n/**\n * @brief\n * \thandles attribute errors and prints appropriate errmsg\n *\n * @param[in] err_list - list of possible attribute errors\n *\n * @return - Void\n *\n */\nstatic void\nhandle_attribute_errors(struct ecl_attribute_errors *err_list)\n{\n\tstruct attropl *attribute;\n\tchar *opt;\n\tint i;\n\n\tfor (i = 0; i < err_list->ecl_numerrors; i++) {\n\t\tattribute = err_list->ecl_attrerr[i].ecl_attribute;\n\t\tif (strcmp(attribute->name, ATTR_a) == 0)\n\t\t\topt = \"a\";\n\t\telse if (strcmp(attribute->name, ATTR_project) == 0)\n\t\t\topt = \"P\";\n\t\telse if (strcmp(attribute->name, ATTR_A) == 0)\n\t\t\topt = 
\"A\";\n\t\telse if (strcmp(attribute->name, ATTR_c) == 0)\n\t\t\topt = \"c\";\n\t\telse if (strcmp(attribute->name, ATTR_h) == 0)\n\t\t\topt = \"h\";\n\t\telse if (strcmp(attribute->name, ATTR_array) == 0)\n\t\t\topt = \"J\";\n\t\telse if (strcmp(attribute->name, ATTR_l) == 0) {\n\t\t\topt = \"l\";\n\t\t\tfprintf(stderr, \"qselect: %s\\n\",\n\t\t\t\terr_list->ecl_attrerr[i].ecl_errmsg);\n\t\t\texit(err_list->ecl_attrerr[i].ecl_errcode);\n\t\t} else if (strcmp(attribute->name, ATTR_N) == 0)\n\t\t\topt = \"N\";\n\t\telse if (strcmp(attribute->name, ATTR_p) == 0) {\n\t\t\topt = \"p\";\n\t\t\tfprintf(stderr, \"qselect: %s\\n\",\n\t\t\t\terr_list->ecl_attrerr[i].ecl_errmsg);\n\t\t\texit(err_list->ecl_attrerr[i].ecl_errcode);\n\t\t} else if (strcmp(attribute->name, ATTR_q) == 0)\n\t\t\topt = \"q\";\n\t\telse if (strcmp(attribute->name, ATTR_r) == 0)\n\t\t\topt = \"r\";\n\t\telse if (strcmp(attribute->name, ATTR_state) == 0)\n\t\t\topt = \"s\";\n\t\telse if (strcmp(attribute->name, ATTR_ctime) == 0)\n\t\t\topt = \"t\";\n\t\telse if (strcmp(attribute->name, ATTR_etime) == 0)\n\t\t\topt = \"t\";\n\t\telse if (strcmp(attribute->name, ATTR_eligible_time) == 0)\n\t\t\topt = \"t\";\n\t\telse if (strcmp(attribute->name, ATTR_mtime) == 0)\n\t\t\topt = \"t\";\n\t\telse if (strcmp(attribute->name, ATTR_qtime) == 0)\n\t\t\topt = \"t\";\n\t\telse if (strcmp(attribute->name, ATTR_stime) == 0)\n\t\t\topt = \"t\";\n\t\telse if (strcmp(attribute->name, ATTR_u) == 0)\n\t\t\topt = \"u\";\n\t\telse\n\t\t\treturn;\n\n\t\tfprintf(stderr, \"qselect: illegal -%s value\\n\", opt);\n\t\tprint_usage();\n\t\t/*cleanup security library initializations before exiting*/\n\t\tCS_close_app();\n\t\texit(2);\n\t}\n}\n\nint\nmain(int argc, char **argv, char **envp) /* qselect */\n{\n\tint c;\n\tint errflg = 0;\n\tchar *errmsg;\n\n\tchar optargout[MAX_OPTARG_LEN + 1];\n\tchar resource_name[MAX_RESOURCE_NAME_LEN + 1];\n\n\tenum batch_op op;\n\tenum batch_op *pop = &op;\n\n\tstruct attropl *select_list = 
0;\n\n\t/* two extra spaces because of \\0 and '@' */\n\tchar destination[MAXSERVERNAME + PBS_MAXQUEUENAME + 2] = \"\";\n\tchar server_out[MAXSERVERNAME] = \"\";\n\n\tchar *queue_name_out;\n\tchar *server_name_out;\n\n\tint connect;\n\tchar **selectjob_list;\n\tchar *res_pos;\n\tchar *pc;\n\ttime_t after;\n\tchar a_value[80];\n\tchar extendopts[4] = \"\";\n\tchar *attr_time = NULL;\n\tstruct ecl_attribute_errors *err_list;\n\tchar *resc_time = NULL;\n\n\t/*test for real deal or just version and exit*/\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\twhile ((c = getopt(argc, argv, GETOPT_ARGS)) != EOF)\n\t\tswitch (c) {\n\t\t\tcase 'a':\n\t\t\t\tcheck_op(optarg, pop, optargout);\n\t\t\t\tif ((after = cvtdate(optargout)) < 0) {\n\t\t\t\t\tfprintf(stderr, \"qselect: illegal -a value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tsprintf(a_value, \"%ld\", (long) after);\n\t\t\t\tset_attrop(&select_list, ATTR_a, NULL, a_value, op);\n\t\t\t\tbreak;\n\t\t\tcase 'c':\n\t\t\t\tcheck_op(optarg, pop, optargout);\n\t\t\t\tpc = optargout;\n\t\t\t\twhile (isspace((int) *pc))\n\t\t\t\t\tpc++;\n\t\t\t\tif (strlen(pc) == 0) {\n\t\t\t\t\tfprintf(stderr, \"qselect: illegal -c value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tset_attrop(&select_list, ATTR_c, NULL, optargout, op);\n\t\t\t\tbreak;\n\t\t\tcase 'h':\n\t\t\t\tcheck_op(optarg, pop, optargout);\n\t\t\t\tpc = optargout;\n\t\t\t\twhile (isspace((int) *pc))\n\t\t\t\t\tpc++;\n\t\t\t\tset_attrop(&select_list, ATTR_h, NULL, optargout, op);\n\t\t\t\tbreak;\n\t\t\tcase 'J':\n\t\t\t\top = EQ;\n\t\t\t\tset_attrop(&select_list, ATTR_array, NULL, \"True\", op);\n\t\t\t\tbreak;\n\t\t\tcase 'l':\n\t\t\t\tres_pos = optarg;\n\t\t\t\twhile (*res_pos != '\\0') {\n\t\t\t\t\tif (check_res_op(res_pos, resource_name, pop, optargout, &res_pos) != 0) {\n\t\t\t\t\t\terrflg++;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\tset_attrop(&select_list, ATTR_l, resource_name, 
optargout, op);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'p':\n\t\t\t\tcheck_op(optarg, pop, optargout);\n\t\t\t\tset_attrop(&select_list, ATTR_p, NULL, optargout, op);\n\t\t\t\tbreak;\n\t\t\tcase 'q':\n\t\t\t\tpbs_strncpy(destination, optarg, sizeof(destination));\n\t\t\t\tcheck_op(optarg, pop, optargout);\n\t\t\t\tset_attrop(&select_list, ATTR_q, NULL, optargout, op);\n\t\t\t\tbreak;\n\t\t\tcase 'r':\n\t\t\t\top = EQ;\n\t\t\t\tpc = optarg;\n\t\t\t\twhile (isspace((int) (*pc)))\n\t\t\t\t\tpc++;\n\t\t\t\tif (*pc != 'y' && *pc != 'n') { /* qselect specific check - stays */\n\t\t\t\t\tfprintf(stderr, \"qselect: illegal -r value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tset_attrop(&select_list, ATTR_r, NULL, pc, op);\n\t\t\t\tbreak;\n\t\t\tcase 's':\n\t\t\t\tcheck_op(optarg, pop, optargout);\n\t\t\t\tpc = optargout;\n\t\t\t\twhile (isspace((int) (*pc)))\n\t\t\t\t\tpc++;\n\t\t\t\tset_attrop(&select_list, ATTR_state, NULL, optargout, op);\n\t\t\t\tbreak;\n\t\t\tcase 't':\n\t\t\t\tif (get_tsubopt(*optarg, &attr_time, &resc_time)) {\n\t\t\t\t\tfprintf(stderr, \"qselect: illegal -t value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\t/* 1st character holds the subopt, so advance past it with optarg++ */\n\t\t\t\toptarg++;\n\t\t\t\tcheck_op(optarg, pop, optargout);\n\t\t\t\tif ((after = cvtdate(optargout)) < 0) {\n\t\t\t\t\tfprintf(stderr, \"qselect: illegal -t value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tsprintf(a_value, \"%ld\", (long) after);\n\t\t\t\tset_attrop(&select_list, attr_time, resc_time, a_value, op);\n\t\t\t\tbreak;\n\t\t\tcase 'T':\n\t\t\t\tif (strchr(extendopts, (int) 'T') == NULL)\n\t\t\t\t\t(void) strcat(extendopts, \"T\");\n\t\t\t\tbreak;\n\t\t\tcase 'x':\n\t\t\t\tif (strchr(extendopts, (int) 'x') == NULL)\n\t\t\t\t\t(void) strcat(extendopts, \"x\");\n\t\t\t\tbreak;\n\t\t\tcase 'H':\n\t\t\t\top = EQ;\n\t\t\t\tif (strchr(extendopts, (int) 'x') == NULL)\n\t\t\t\t\t(void) strcat(extendopts, 
\"x\");\n\t\t\t\tset_attrop(&select_list, ATTR_state, NULL, \"FM\", op);\n\t\t\t\tbreak;\n\t\t\tcase 'u':\n\t\t\t\top = EQ;\n\t\t\t\tset_attrop(&select_list, ATTR_u, NULL, optarg, op);\n\t\t\t\tbreak;\n\t\t\tcase 'A':\n\t\t\t\top = EQ;\n\t\t\t\tset_attrop(&select_list, ATTR_A, NULL, optarg, op);\n\t\t\t\tbreak;\n\t\t\tcase 'P':\n\t\t\t\top = EQ;\n\t\t\t\tset_attrop(&select_list, ATTR_project, NULL, optarg, op);\n\t\t\t\tbreak;\n\t\t\tcase 'N':\n\t\t\t\top = EQ;\n\t\t\t\tset_attrop(&select_list, ATTR_N, NULL, optarg, op);\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\terrflg++;\n\t\t}\n\n\tif (errflg || (optind < argc)) {\n\t\tprint_usage();\n\t\texit(2);\n\t}\n\n\tif (notNULL(destination)) {\n\t\tif (parse_destination_id(destination, &queue_name_out, &server_name_out)) {\n\t\t\tfprintf(stderr, \"qselect: illegally formed destination: %s\\n\", destination);\n\t\t\texit(2);\n\t\t}\n\t\tif (notNULL(server_name_out))\n\t\t\tpbs_strncpy(server_out, server_name_out, sizeof(server_out));\n\t}\n\n\t/*perform needed security library initializations (including none)*/\n\tif (CS_client_init() != CS_SUCCESS) {\n\t\tfprintf(stderr, \"qselect: unable to initialize security library.\\n\");\n\t\texit(2);\n\t}\n\n\tconnect = cnt2server(server_out);\n\tif (connect <= 0) {\n\t\tfprintf(stderr, \"qselect: cannot connect to server %s (errno=%d)\\n\",\n\t\t\tpbs_server, pbs_errno);\n\n\t\t/*cleanup security library initializations before exiting*/\n\t\tCS_close_app();\n\n\t\texit(pbs_errno);\n\t}\n\n\tif (extendopts[0] == '\\0')\n\t\tselectjob_list = pbs_selectjob(connect, select_list, NULL);\n\telse\n\t\tselectjob_list = pbs_selectjob(connect, select_list, extendopts);\n\tif (selectjob_list == NULL) {\n\t\tif ((err_list = pbs_get_attributes_in_error(connect)))\n\t\t\thandle_attribute_errors(err_list);\n\n\t\tif (pbs_errno != PBSE_NONE) {\n\t\t\terrmsg = pbs_geterrmsg(connect);\n\t\t\tif (errmsg != NULL) {\n\t\t\t\tfprintf(stderr, \"qselect: %s\\n\", errmsg);\n\t\t\t} else 
{\n\t\t\t\tfprintf(stderr, \"qselect: Error (%d) selecting jobs\\n\", pbs_errno);\n\t\t\t}\n\n\t\t\t/*\n\t\t\t * If the server is not configured for history jobs (i.e. the\n\t\t\t * job_history_enable server attribute is unset or set to FALSE)\n\t\t\t * and the qselect command is used with the -x/-H option, then\n\t\t\t * pbs_selectjob() will return the PBSE_JOBHISTNOTSET error code.\n\t\t\t * The command will then exit with exit code '0' after printing\n\t\t\t * the corresponding error message, i.e. \"job_history_enable is\n\t\t\t * set to false\".\n\t\t\t */\n\t\t\tif (pbs_errno == PBSE_JOBHISTNOTSET)\n\t\t\t\tpbs_errno = 0;\n\n\t\t\t/*cleanup security library initializations before exiting*/\n\t\t\tCS_close_app();\n\t\t\texit(pbs_errno);\n\t\t}\n\t} else { /* got some job ids */\n\t\tint i = 0;\n\t\twhile (selectjob_list[i] != NULL) {\n\t\t\tprintf(\"%s\\n\", selectjob_list[i++]);\n\t\t}\n\t\tfree_string_array(selectjob_list);\n\t}\n\tpbs_disconnect(connect);\n\n\t/*cleanup security library initializations before exiting*/\n\tCS_close_app();\n\n\texit(0);\n}\n"
  },
  {
    "path": "src/cmds/qsig.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tqsig.c\n * @brief\n * \tqsig - (PBS) signal a batch job\n *\n * @author\t    Terry Heidelberg\n * \t\t\t\tLivermore Computing\n *\n * @author      Bruce Kelly\n * \t\t\t\tNational Energy Research Supercomputer Center\n *\n * @author     \tLawrence Livermore National Laboratory\n * \t\t\t\tUniversity of California\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"cmds.h\"\n#include \"pbs_ifl.h\"\n#include <pbs_version.h>\n\nint\nmain(int argc, char **argv, char **envp) /* qsig */\n{\n\tint c;\n\tint errflg = 0;\n\tint any_failed = 0;\n\n\tchar job_id[PBS_MAXCLTJOBID]; /* from the command line */\n\n\tchar job_id_out[PBS_MAXCLTJOBID];\n\tchar server_out[MAXSERVERNAME];\n\tchar rmt_server[MAXSERVERNAME];\n\n#define MAX_SIGNAL_TYPE_LEN 32\n\tstatic char sig_string[MAX_SIGNAL_TYPE_LEN + 1] = \"SIGTERM\";\n\n#define GETOPT_ARGS \"s:\"\n\n\t/*test for real deal or just version and exit*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\twhile ((c = getopt(argc, argv, GETOPT_ARGS)) != EOF)\n\t\tswitch (c) {\n\t\t\tcase 's':\n\t\t\t\tpbs_strncpy(sig_string, optarg, sizeof(sig_string));\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\terrflg++;\n\t\t}\n\n\tif (errflg || optind >= argc) {\n\t\tstatic char usage[] = \"usage: qsig [-s signal] 
job_identifier...\\n\";\n\t\tstatic char usag2[] = \"       qsig --version\\n\";\n\t\tfprintf(stderr, \"%s\", usage);\n\t\tfprintf(stderr, \"%s\", usag2);\n\t\texit(2);\n\t}\n\n\t/*perform needed security library initializations (including none)*/\n\n\tif (CS_client_init() != CS_SUCCESS) {\n\t\tfprintf(stderr, \"qsig: unable to initialize security library.\\n\");\n\t\texit(2);\n\t}\n\n\tfor (; optind < argc; optind++) {\n\t\tint connect;\n\t\tint stat = 0;\n\t\tint located = FALSE;\n\n\t\tpbs_strncpy(job_id, argv[optind], sizeof(job_id));\n\t\tif (get_server(job_id, job_id_out, server_out)) {\n\t\t\tfprintf(stderr, \"qsig: illegally formed job identifier: %s\\n\", job_id);\n\t\t\tany_failed = 1;\n\t\t\tcontinue;\n\t\t}\n\tcnt:\n\t\tconnect = cnt2server(server_out);\n\t\tif (connect <= 0) {\n\t\t\tfprintf(stderr, \"qsig: cannot connect to server %s (errno=%d)\\n\",\n\t\t\t\tpbs_server, pbs_errno);\n\t\t\tany_failed = pbs_errno;\n\t\t\tcontinue;\n\t\t}\n\n\t\tstat = pbs_sigjob(connect, job_id_out, sig_string, NULL);\n\t\tif (stat && (pbs_errno != PBSE_UNKJOBID)) {\n\t\t\tprt_job_err(\"qsig\", connect, job_id_out);\n\t\t\tany_failed = pbs_errno;\n\t\t} else if (stat && (pbs_errno == PBSE_UNKJOBID) && !located) {\n\t\t\tlocated = TRUE;\n\t\t\tif (locate_job(job_id_out, server_out, rmt_server)) {\n\t\t\t\tpbs_disconnect(connect);\n\t\t\t\tpbs_strncpy(server_out, rmt_server, sizeof(server_out));\n\t\t\t\tgoto cnt;\n\t\t\t}\n\t\t\tprt_job_err(\"qsig\", connect, job_id_out);\n\t\t\tany_failed = pbs_errno;\n\t\t}\n\n\t\tpbs_disconnect(connect);\n\t}\n\n\t/*cleanup security library initializations before exiting*/\n\tCS_close_app();\n\n\texit(any_failed);\n}\n"
  },
  {
    "path": "src/cmds/qstart.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tqstart.c\n * @brief\n *  The qstart command directs that a destination should start scheduling\n *  or routing batch jobs.\n *\n * @par\tSynopsis:\n *  qstart destination ...\n *\n * @par\tArguments:\n *  destination ...\n *      A list of destinations.  A destination has one of the following\n *      three forms:\n *          queue\n *          @server\n *          queue@server\n *      If queue is specified, the request is to start the queue at\n *      the default server.  If @server is given, the request is to\n *      start all queues at the server.  If queue@server is used,\n *      the request is to start the named queue at the named server.\n *\n * @author\tBruce Kelly\n *\t\t\tNational Energy Research Supercomputer Center\n *\t\t\tLivermore, CA\n * \t\t\tMay, 1993\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"cmds.h\"\n#include <pbs_version.h>\n\nint exitstatus = 0; /* Exit Status */\nstatic void execute(char *, char *);\n\nint\nmain(int argc, char **argv)\n{\n\t/*\n\t *  This routine sends a Manage request to the batch server specified by\n\t * the destination.  The STARTED queue attribute is set to {True}.  
If the\n\t * batch request is accepted, the server will start scheduling or routing\n\t * requests in the specified queue.\n\t */\n\n\tint dest;     /* Index into the destination array (argv) */\n\tchar *queue;  /* Queue name part of destination */\n\tchar *server; /* Server name part of destination */\n\n\t/*test for real deal or just version and exit*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\tif (argc == 1) {\n\t\tfprintf(stderr, \"Usage: qstart [queue][@server] ...\\n\");\n\t\tfprintf(stderr, \"       qstart --version\\n\");\n\t\texit(1);\n\t}\n\n\t/*perform needed security library initializations (including none)*/\n\n\tif (CS_client_init() != CS_SUCCESS) {\n\t\tfprintf(stderr, \"qstart: unable to initialize security library.\\n\");\n\t\texit(1);\n\t}\n\n\tfor (dest = 1; dest < argc; dest++)\n\t\tif (parse_destination_id(argv[dest], &queue, &server) == 0)\n\t\t\texecute(queue, server);\n\t\telse {\n\t\t\tfprintf(stderr, \"qstart: illegally formed destination: %s\\n\",\n\t\t\t\targv[dest]);\n\t\t\texitstatus = 1;\n\t\t}\n\n\t/*cleanup security library initializations before exiting*/\n\tCS_close_app();\n\n\texit(exitstatus);\n}\n\n/**\n * @brief\n *\texecutes to start a queue\n *\n * @param[in] queue - The name of the queue to start.\n * @param[in] server - The name of the server that manages the queue.\n *\n * @return - Void\n *\n * File Variables:\n * exitstatus  Set to two if an error occurs.\n *\n */\nstatic void\nexecute(char *queue, char *server)\n{\n\tint ct;\t      /* Connection to the server */\n\tint merr;     /* Error return from pbs_manager */\n\tchar *errmsg; /* Error message from pbs_manager */\n\t/* The disable request */\n\tstatic struct attropl attr = {NULL, \"started\", NULL, \"TRUE\", SET};\n\n\tif ((ct = cnt2server(server)) > 0) {\n\t\tmerr = pbs_manager(ct, MGR_CMD_SET, MGR_OBJ_QUEUE, queue, &attr, NULL);\n\t\tif (merr != 0) {\n\t\t\terrmsg = pbs_geterrmsg(ct);\n\t\t\tif (errmsg != NULL) 
{\n\t\t\t\tfprintf(stderr, \"qstart: %s \", errmsg);\n\t\t\t} else {\n\t\t\t\tfprintf(stderr, \"qstart: Error (%d) starting queue \", pbs_errno);\n\t\t\t}\n\t\t\tif (notNULL(queue))\n\t\t\t\tfprintf(stderr, \"%s\", queue);\n\t\t\tif (notNULL(server))\n\t\t\t\tfprintf(stderr, \"@%s\", server);\n\t\t\tfprintf(stderr, \"\\n\");\n\t\t\texitstatus = 2;\n\t\t}\n\t\tpbs_disconnect(ct);\n\t} else {\n\t\tfprintf(stderr, \"qstart: could not connect to server %s (%d)\\n\", server, pbs_errno);\n\t\texitstatus = 2;\n\t}\n}\n"
  },
  {
    "path": "src/cmds/qstat.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tqstat.c\n * @brief\n * \tqstat - (PBS) show stats of batch jobs, queues, or servers\n *\n * @author\tTerry Heidelberg\n * \t\t\tLivermore Computing\n *\n * @author  Bruce Kelly\n * \t\t\tNational Energy Research Supercomputer Center\n *\n * @author  Lawrence Livermore National Laboratory\n * \t\t\tUniversity of California\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include <pbs_version.h>\n\n#include \"pbs_ifl.h\"\n#include \"cmds.h\"\n#include \"pbs_share.h\"\n#include <pwd.h>\n#include <stdlib.h>\n#include \"pbs_internal.h\"\n#include \"libutil.h\"\n#include <arpa/inet.h>\n#include \"pbs_json.h\"\n\n#if TCL_QSTAT\n#include <sys/stat.h>\n#include <tcl.h>\n#ifdef NAS /* localmod 071 */\nextern char *tcl_atrsep;\n#endif /* localmod 071 */\n#endif\n\n/* default server */\nchar *def_server;\n\nstatic char *cvtResvstate(char *);\nstatic int cmp_est_time(struct batch_status *a, struct batch_status *b);\nchar *cnvt_est_start_time(char *start_time, int shortform);\n\n#if !defined(PBS_NO_POSIX_VIOLATION)\n/* defines for alternative display formats */\n#define ALT_DISPLAY_a 1\t\t      /* -a option - show all jobs */\n#define ALT_DISPLAY_i 2\t\t      /* -i option - show not running */\n#define ALT_DISPLAY_r 4\t\t      /* -r option - show only running */\n#define ALT_DISPLAY_u 8\t\t      /* -u option 
- list user's jobs */\n#define ALT_DISPLAY_n 0x10\t      /* -n option - add node list */\n#define ALT_DISPLAY_s 0x20\t      /* -s option - add scheduler comment */\n#define ALT_DISPLAY_H 0x40\t      /* -H option - show history(F/M) jobs */\n#define ALT_DISPLAY_q 0x80\t      /* -q option - alt queue display */\n#define ALT_DISPLAY_Mb 0x100\t      /* show sizes in MB */\n#define ALT_DISPLAY_Mw 0x200\t      /* -M option - show sizes in MW */\n#define ALT_DISPLAY_G 0x400\t      /* -G option - show sizes in GB */\n#define ALT_DISPLAY_1l 0x800\t      /* -n -s -f on one line */\n#define ALT_DISPLAY_w 0x1000\t      /* -w - wide output */\n#define ALT_DISPLAY_T 0x2000\t      /* -T option - estimated start times */\n#define ALT_DISPLAY_p 0x4000\t      /* -p option - percentage completed for the job */\n#define ALT_DISPLAY_INCR_WIDTH 0x8000 /* increases qstat header width */\n\n#endif /* not PBS_NO_POSIX_VIOLATION */\n#define HPCBP_EXEC_TAG \"jsdl-hpcpa:Executable\"\n#define CHAR_LINE_LIMIT 78 /* maximum number of characters which can be printed on a line */\n\n#define DISPLAY_TRUNC_CHAR '*'\n\n#define NUML 5\n#define MAX_ATTRS 64\n#define MAX_RES 64\n\nstatic struct attrl basic_attribs[] = {\n\t{&basic_attribs[1],\n\t ATTR_N,\n\t NULL,\n\t \"\",\n\t SET},\n\t{&basic_attribs[2],\n\t ATTR_owner,\n\t NULL,\n\t \"\",\n\t SET},\n\t{&basic_attribs[3],\n\t ATTR_used,\n\t NULL,\n\t \"\",\n\t SET},\n\t{&basic_attribs[4],\n\t ATTR_state,\n\t NULL,\n\t \"\",\n\t SET},\n\t{NULL,\n\t ATTR_queue,\n\t NULL,\n\t \"\",\n\t SET}};\n\nstatic struct attrl alt_attribs[] = {\n\t{&alt_attribs[1],\n\t ATTR_session,\n\t NULL,\n\t \"\",\n\t SET},\n\t{&basic_attribs[0],\n\t ATTR_l,\n\t NULL,\n\t \"\",\n\t SET}};\n\nenum output_format_enum {\n\tFORMAT_DEFAULT = 0,\n\tFORMAT_DSV,\n\tFORMAT_JSON,\n\tFORMAT_MAX\n};\n\nenum batch_status_item_type {\n\tBATCH_ITEM_DEFAULT = 0,\n\tBATCH_ITEM_IS_STRING,\n\tBATCH_ITEM_IS_NUMBER,\n\tBATCH_ITEM_IS_ATTR_V,\n\tBATCH_ITEM_IS_RESOURCE\n};\n\n/**\n * @brief\n *\ttriage 
the json_type of a batch_item and identify its type\n *\n * @param[in] name - ( batch_item->name )\n * @param[in] resource - ( batch_item->resource )\n *\n * @return enum BATCH_ITEM_*\n *\n */\n\nint\nbatch_item_json_type_triage(const char *name, const char *resource)\n{\n\tint i;\n\tstruct name_type_mappings {\n\t\tchar *name;\n\t\tint type;\n\t};\n\ttypedef struct name_type_mappings name_map_t;\n\t/* keep it local */\n\tstatic const name_map_t name_type_map[] = {\n\t\t{ATTR_v, BATCH_ITEM_IS_ATTR_V},\n\t\t{ATTR_N, BATCH_ITEM_IS_STRING},\n\t\t{ATTR_project, BATCH_ITEM_IS_STRING},\n\t\t{ATTR_A, BATCH_ITEM_IS_STRING}};\n\n\tif (resource)\n\t\treturn BATCH_ITEM_IS_RESOURCE;\n\n\tfor (i = 0; i < (int) (sizeof(name_type_map) / sizeof(name_type_map[0])); ++i) {\n\t\tif (strcmp(name, name_type_map[i].name) == 0)\n\t\t\treturn name_type_map[i].type;\n\t}\n\n\treturn BATCH_ITEM_DEFAULT;\n}\n\n/* This array contains the names users may specify for output format. */\nstatic char *output_format_names[] = {\"default\", \"dsv\", \"json\", NULL};\n\nstatic int output_format = FORMAT_DEFAULT;\nstatic char *dsv_delim = \"|\";\nstatic char *delimiter = \"\\n\";\nstatic char *prev_resc_name = NULL;\nstatic int first_stat = 1;\nstatic int conn;\nstatic json_data *json_root = NULL; /* root of json structure */\nstatic json_data *json_prev_resc = NULL;\n\nstatic struct attrl *display_attribs = &basic_attribs[0];\n\nint\ncmp_jobs(const void *j1, const void *j2)\n{\n\tchar *job1, *job2;\n\tchar *seq_num1 = NULL;\n\tchar *seq_num2 = NULL;\n\tchar *pserver_one = NULL;\n\tchar *pserver_two = NULL;\n\tint ret = 0;\n\tint ret_val = 0;\n\tchar *eptr = NULL;\n\tlong jid1, jid2;\n\tjob1 = *(char **) j1;\n\tjob2 = *(char **) j2;\n\tif (pbs_isjobid(job1) == 0) {\n\t\tret_val = 1;\n\t\tgoto return_here;\n\t} else if (pbs_isjobid(job2) == 0) {\n\t\tret_val = 1;\n\t\tgoto return_here;\n\t} else {\n\t\tparse_jobid(job1, &seq_num1, &pserver_one, NULL);\n\t\t/* default server */\n\t\tif 
(pserver_one == NULL)\n\t\t\tpserver_one = strdup(def_server);\n\t\telse if (pserver_one[0] == '\\0')\n\t\t\tstrcpy(pserver_one, def_server);\n\t\tparse_jobid(job2, &seq_num2, &pserver_two, NULL);\n\t\t/* default server */\n\t\tif (pserver_two == NULL)\n\t\t\tpserver_two = strdup(def_server);\n\t\telse if (pserver_two[0] == '\\0')\n\t\t\tstrcpy(pserver_two, def_server);\n\n\t\tret = strcmp(pserver_one, pserver_two);\n\t\tif (ret < 0) {\n\t\t\tret_val = -1;\n\t\t\tgoto return_here;\n\t\t} else if (ret > 0) {\n\t\t\tret_val = 1;\n\t\t\tgoto return_here;\n\t\t}\n\t}\n\t/* Server names are the same, now sort on the job id */\n\tjid1 = strtoll(job1, &eptr, 10);\n\tjid2 = strtoll(job2, &eptr, 10);\n\tif (jid1 < jid2) {\n\t\tret_val = -1;\n\t\tgoto return_here;\n\t} else if (jid1 > jid2) {\n\t\tret_val = 1;\n\t\tgoto return_here;\n\t} else if (seq_num1 != NULL && seq_num2 != NULL) {\n\t\t/* Array sub jobs, sort on the basis of index */\n\t\tjid1 = strtoll(seq_num1, &eptr, 10);\n\t\tjid2 = strtoll(seq_num2, &eptr, 10);\n\t\tif (jid1 < jid2) {\n\t\t\tret_val = -1;\n\t\t\tgoto return_here;\n\t\t} else if (jid1 > jid2) {\n\t\t\tret_val = 1;\n\t\t\tgoto return_here;\n\t\t}\n\t}\n\t/* Same job id getting repeated */\n\tret_val = 0;\nreturn_here:\n\tfree(seq_num1);\n\tfree(seq_num2);\n\tfree(pserver_one);\n\tfree(pserver_two);\n\treturn ret_val;\n}\n\n/**\n * @brief\n *\ttests whether the string value is true or false\n *\n * @param[in] string - string to be tested\n *\n * @return Boolean value\n * @retval 0  Failure\n * @retval 1  Success\n *\n */\nint\nistrue(char *string)\n{\n\tif (strcmp(string, \"TRUE\") == 0)\n\t\treturn TRUE;\n\tif (strcmp(string, \"True\") == 0)\n\t\treturn TRUE;\n\tif (strcmp(string, \"true\") == 0)\n\t\treturn TRUE;\n\tif (strcmp(string, \"1\") == 0)\n\t\treturn TRUE;\n\treturn FALSE;\n}\n\n/**\n * @brief\n *\tsets single character for each state of job\n *\n * @param[in] string - string holding state of job\n * @param[in] q      - string holding string for 
que\n * @param[in] r      - string holding string for run\n * @param[in] h      - string holding string for hld\n * @param[in] w      - string holding string for wait\n * @param[in] t      - string holding string for transit\n * @param[in] e      - string holding string for end\n * @param[in] len    - length of string\n *\n */\nstatic void\nstates(char *string, char *q, char *r, char *h, char *w, char *t, char *e, int len)\n{\n\tchar *c, *d, *f, *s, l;\n\n\tc = string;\n\twhile (isspace(*c) && *c != '\\0')\n\t\tc++;\n\twhile (*c != '\\0') {\n\t\ts = c;\n\t\twhile ((*c != ':') && (*c != '\\0'))\n\t\t\tc++;\n\t\tif (*c == '\\0')\n\t\t\tbreak;\n\t\t*c = '\\0';\n\t\td = NULL;\n\t\tif (strcmp(s, \"Queued\") == 0)\n\t\t\td = q;\n\t\telse if (strcmp(s, \"Running\") == 0)\n\t\t\td = r;\n\t\telse if (strcmp(s, \"Held\") == 0)\n\t\t\td = h;\n\t\telse if (strcmp(s, \"Waiting\") == 0)\n\t\t\td = w;\n\t\telse if (strcmp(s, \"Transit\") == 0)\n\t\t\td = t;\n\t\telse if (strcmp(s, \"Exiting\") == 0)\n\t\t\td = e;\n\t\tc++;\n\t\tif (d != NULL) {\n\t\t\ts = c;\n\t\t\twhile (*c != ' ' && *c != '\\0')\n\t\t\t\tc++;\n\t\t\tl = *c;\n\t\t\t*c = '\\0';\n\t\t\tif (strlen(s) > (size_t) len) {\n\t\t\t\tf = s + len;\n\t\t\t\tif (f > s)\n\t\t\t\t\t*(f - 1) = DISPLAY_TRUNC_CHAR;\n\t\t\t\t*f = '\\0';\n\t\t\t}\n\t\t\tpbs_strncpy(d, s, NUML + 1);\n\t\t\tif (l != '\\0')\n\t\t\t\tc++;\n\t\t} else {\n\t\t\twhile (*c != ' ' && *c != '\\0')\n\t\t\t\tc++;\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\tConvert special characters, line feed and form feed into '\\\\'+n and '\\\\'+f respectively.\n *\n * @param[in]   val - input string\n *\n * @return string\n * @retval NULL     - if memory allocation failed.\n * @retval string   - string with feed characters escaped.\n *\n * @note\n *\tThe string returned should be freed by the caller.\n */\nchar *\nconvert_feed_chars(char *val)\n{\n\tint i = 0;\n\tint len = MAXBUFLEN;\n\tchar *temp = NULL;\n\tchar *buf = (char *) malloc(MAXBUFLEN);\n\tif (buf == NULL)\n\t\treturn 
NULL;\n\twhile (*val != '\\0') {\n\t\tif (*val == '\\n') {\n\t\t\tbuf[i++] = '\\\\';\n\t\t\tbuf[i++] = 'n';\n\t\t\tval++;\n\t\t} else if (*val == '\\f') {\n\t\t\tbuf[i++] = '\\\\';\n\t\t\tbuf[i++] = 'f';\n\t\t\tval++;\n\t\t} else\n\t\t\tbuf[i++] = *val++;\n\t\tif (i >= len - 2) {\n\t\t\tlen *= BUFFER_GROWTH_RATE;\n\t\t\ttemp = (char *) realloc(buf, len);\n\t\t\tif (temp == NULL) {\n\t\t\t\tfree(buf);\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tbuf = temp;\n\t\t}\n\t}\n\tbuf[i] = '\\0';\n\treturn buf;\n}\n\n/**\n * @brief\n *\tprint the exit message and die.\n */\nvoid\nexit_qstat(char *msg)\n{\n\tfprintf(stderr, \"qstat: %s\\n\", msg);\n\texit(1);\n}\n\n/**\n * @brief\n *\tprint an attribute value string, formatting to break at a comma if possible\n *\n * @param[in] name - attribute name\n * @param[in] resource - resource name\n * @param[in] value - value for the attribute\n * @param[in] one_line - one line per attribute\n * @param[in] json_obj - json object the attribute is added to when JSON output is selected\n *\n */\nvoid\nprt_attr(char *name, char *resource, char *value, int one_line, json_data *json_obj)\n{\n\tint first = 1;\n\tint len = 0;\n\tint start = 0;\n\tchar *comma = \",\";\n\tchar *key = NULL;\n\tchar *val = NULL;\n\tchar *buf = NULL;\n\tchar *temp = NULL;\n\tjson_data *json_attr = NULL;\n\tint item_type = 0;\n\n\tif (value == NULL)\n\t\treturn;\n\tswitch (output_format) {\n\tcase FORMAT_JSON:\n\t\titem_type = batch_item_json_type_triage(name, resource);\n\t\tswitch (item_type) {\n\t\tcase BATCH_ITEM_IS_ATTR_V:\n\t\t\tif ((json_attr = pbs_json_create_object()) == NULL)\n\t\t\t\texit_qstat(\"json error\");\n\t\t\tpbs_json_insert_item(json_obj, name, json_attr);\n\t\t\tbuf = strdup(value);\n\t\t\tif (buf == NULL)\n\t\t\t\texit_qstat(\"out of memory\");\n\t\t\ttemp = buf;\n\t\t\twhile (*value) {\n\t\t\t\t/* value is split based on comma and each key-value is stored in keyvalpair.\n\t\t\t\t * If the value contains an escaped comma, only the comma is copied to keyvalpair.\n\t\t\t\t * Then separated into key and value based on the first '=' 
*/\n\t\t\t\tkey = buf;\n\t\t\t\tval = NULL;\n\t\t\t\twhile (!(*value == *comma || *value == '\\0')) {\n\t\t\t\t\tif (*value == ESC_CHAR && *(value + 1) == *comma)\n\t\t\t\t\t\tvalue++;\n\t\t\t\t\tif (!val && *value == '=') {\n\t\t\t\t\t\t*buf++ = '\\0';\n\t\t\t\t\t\tval = buf;\n\t\t\t\t\t\tvalue++;\n\t\t\t\t\t} else {\n\t\t\t\t\t\t*buf++ = *value++;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\t*buf = '\\0';\n\t\t\t\tif (pbs_json_insert_parsed(json_attr, key, val, 0))\n\t\t\t\t\texit_qstat(\"json error\");\n\t\t\t\tif (*value != '\\0')\n\t\t\t\t\tvalue++;\n\t\t\t}\n\t\t\tfree(temp);\n\t\t\tbreak;\n\t\tcase BATCH_ITEM_IS_RESOURCE:\n\t\t\tif (prev_resc_name && strcmp(prev_resc_name, name) != 0) {\n\t\t\t\tprev_resc_name = NULL;\n\t\t\t}\n\t\t\tif (prev_resc_name == NULL || strcmp(prev_resc_name, name) != 0) {\n\t\t\t\tif ((json_prev_resc = pbs_json_create_object()) == NULL)\n\t\t\t\t\texit_qstat(\"json error\");\n\t\t\t\tpbs_json_insert_item(json_obj, name, json_prev_resc);\n\t\t\t\tprev_resc_name = name;\n\t\t\t}\n\t\t\tif (pbs_json_insert_parsed(json_prev_resc, resource, value, 0))\n\t\t\t\texit_qstat(\"json error\");\n\t\t\tbreak;\n\t\tcase BATCH_ITEM_IS_STRING:\n\t\t\tif (prev_resc_name) {\n\t\t\t\tjson_prev_resc = NULL;\n\t\t\t\tprev_resc_name = NULL;\n\t\t\t}\n\t\t\tif (pbs_json_insert_string(json_obj, name, value))\n\t\t\t\texit_qstat(\"json error\");\n\t\t\tbreak;\n\t\tcase BATCH_ITEM_IS_NUMBER:\n\t\t\t/* skip to default parser */ \n\t\tdefault:\n\t\t\tif (prev_resc_name) {\n\t\t\t\tjson_prev_resc = NULL;\n\t\t\t\tprev_resc_name = NULL;\n\t\t\t}\n\t\t\tif (pbs_json_insert_parsed(json_obj, name, value, 0))\n\t\t\t\texit_qstat(\"json error\");\n\t\t}\n\t\tbreak;\n\n\t\tcase FORMAT_DSV:\n\t\t\tbuf = escape_delimiter(value, delimiter, ESC_CHAR);\n\t\t\tif (buf == NULL)\n\t\t\t\texit_qstat(\"out of memory\");\n\t\t\tbuf = convert_feed_chars(buf);\n\t\t\tif (buf == NULL)\n\t\t\t\texit_qstat(\"out of memory\");\n\t\t\tif (resource)\n\t\t\t\tprintf(\"%s.%s=%s\", name, resource, 
show_nonprint_chars(buf));\n\t\t\telse\n\t\t\t\tprintf(\"%s=%s\", name, show_nonprint_chars(buf));\n\t\t\tfree(buf);\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\tif (one_line) {\n\t\t\t\tbuf = convert_feed_chars(value);\n\t\t\t\tif (buf == NULL)\n\t\t\t\t\texit_qstat(\"out of memory\");\n\t\t\t\tif (resource)\n\t\t\t\t\tprintf(\"    %s.%s = %s\", name, resource, buf);\n\t\t\t\telse\n\t\t\t\t\tprintf(\"    %s = %s\", name, show_nonprint_chars(buf));\n\t\t\t\tfree(buf);\n\t\t\t} else {\n\t\t\t\tstart = strlen(name) + 7; /* 4 spaces + ' = ' is 7 */\n\t\t\t\tprintf(\"    %s\", name);\n\t\t\t\tif (resource) {\n\t\t\t\t\tstart += strlen(resource) + 1; /* 1 for dot */\n\t\t\t\t\tprintf(\".%s\", resource);\n\t\t\t\t}\n\t\t\t\tprintf(\" = \");\n\t\t\t\tif ((temp = strdup(value)) == NULL)\n\t\t\t\t\texit_qstat(\"out of memory\");\n\t\t\t\tbuf = strtok(temp, comma);\n\t\t\t\twhile (buf) {\n\t\t\t\t\tif ((len = strlen(buf)) + start < CHAR_LINE_LIMIT) {\n\t\t\t\t\t\tprintf(\"%s\", show_nonprint_chars(buf));\n\t\t\t\t\t\tstart += len;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tif (!first) {\n\t\t\t\t\t\t\tprintf(\"\\n\\t\");\n\t\t\t\t\t\t\tstart = 9; /* tab + 1 */\n\t\t\t\t\t\t}\n\t\t\t\t\t\twhile (*buf) {\n\t\t\t\t\t\t\tchar *sbuf;\n\t\t\t\t\t\t\tint ch;\n\t\t\t\t\t\t\tchar char_buf[2];\n\n\t\t\t\t\t\t\tch = *buf++;\n\t\t\t\t\t\t\tsprintf(char_buf, \"%c\", ch);\n\t\t\t\t\t\t\tsbuf = show_nonprint_chars(char_buf);\n\t\t\t\t\t\t\tif (sbuf != NULL) {\n\t\t\t\t\t\t\t\tint c;\n\t\t\t\t\t\t\t\tfor (c = 0; c < strlen(sbuf); c++)\n\t\t\t\t\t\t\t\t\tputchar(sbuf[c]);\n\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\tputchar(ch);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif (++start > CHAR_LINE_LIMIT) {\n\t\t\t\t\t\t\t\tstart = 8; /* tab */\n\t\t\t\t\t\t\t\tprintf(\"\\n\\t\");\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tif ((buf = strtok(NULL, comma)) != NULL) {\n\t\t\t\t\t\tfirst = 0;\n\t\t\t\t\t\tputchar(',');\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tfree(temp);\n\t\t\t}\n\t}\n}\n\n#define NAMEL 16  /* printf of jobs, 
queues, and servers */\n#define OWNERL 16 /* printf of jobs */\n#define TIMEUL 8  /* printf of jobs */\n#define STATEL 1  /* printf of jobs */\n#define LOCL 16\t  /* printf of jobs */\n#define SIZEL 6\t  /* length of \"SIZE\" fields in printf */\n\n/* format widths defined for display in normal format output */\n#define SIZEJOBID 15\t  /* length of JobId field in printf */\n#define SIZEJOBID_INCR 20 /* length of JobId field in printf with incr_width */\n#define SIZEJOBNAME 10\t  /* length of JobName field in printf */\n#define SIZEQUEUENAME 8\t  /* length of Queue field in printf */\n#define SIZESESSID 6\t  /* length of session id in printf */\n#define SIZENDS 3\t  /* length of nds in printf */\n#define SIZETSK 3\t  /* length of tsk in printf */\n#define SIZEUSER 8\t  /* length of user name in printf */\n\n/* format widths defined for display in wide format output */\n#define SIZEJOBID_W 30\t /* length of JobId field in printf */\n#define SIZEJOBNAME_W 15 /* length of JobName field in printf */\n#define SIZESESSID_W 8\t /* length of session id in printf */\n#define SIZENDS_W 4\t /* length of nds in printf */\n#define SIZETSK_W 5\t /* length of tsk in printf */\n#define SIZEUSER_W 15\t /* length of user name in printf */\n\n/**\n * @brief\n *\tCheck if the value is too long; truncate and append DISPLAY_TRUNC_CHAR if needed\n *\n * @param[in] value - value that may be truncated\n * @param[in] len - non-wide length that must be met\n * @param[in] wide_len - wide length that must be met\n * @param[in] wide - whether or not wide format is used\n *\n */\n\nstatic void\ntrunc_value(char *value, int len, int wide_len, int wide)\n{\n\tif (wide) {\n\t\tif (wide_len > 0 && strlen(value) > (size_t) wide_len) {\n\t\t\t*(value + wide_len - 1) = DISPLAY_TRUNC_CHAR;\n\t\t}\n\t} else {\n\t\tif (len > 0 && strlen(value) > (size_t) len) {\n\t\t\t*(value + len - 1) = DISPLAY_TRUNC_CHAR;\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\tFormat and display string of assigned nodes, (1) strip off domain name\n *\tif 
present and (2) break line at '+' sign.\n *\n * @param[in] nodes - name of exechosts\n * @param[in] no_newl - int value to decide which comment to be set\n *\n */\nstatic void\nprt_nodes(char *nodes, int no_newl)\n{\n\tint i, len;\n\tchar linebuf[CHAR_LINE_LIMIT];\n\tchar *rest = NULL;\n\tchar *saveptr = NULL;\n\tchar *token = NULL;\n\tchar *token_cp = NULL;\n\tchar *subtoken = NULL;\n\tchar *node_name = NULL;\n\tchar *node_name_bkp = NULL;\n\tchar *chunk = NULL;\n\tstruct sockaddr_in check_ip;\n\tint ret = 0;\n\n\tif ((nodes == NULL) || (*nodes == '\\0'))\n\t\treturn;\n\n\ti = 0;\n\trest = strdup(nodes);\n\tif (rest == NULL)\n\t\texit_qstat(\"out of memory\");\n\t/* The exec_host string has the format <host1>/<T1>*<P1>[+<host2>/<T2>*<P2>+... ].\n\t * We are using '+' delimiter to find each <host1>/<T1>*<P1> string.\n\t */\n\ttoken = strtok_r(rest, \"+\", &saveptr);\n\twhile (token != NULL) {\n\t\ttoken_cp = strdup(token);\n\t\tif (token_cp == NULL)\n\t\t\texit_qstat(\"out of memory\");\n\t\t/* We are using '/' delimiter to extract the <host1> value\n\t\t * from <host1>/<T1>*<P1> string. We use the <host1> to identify if\n\t\t * the node is created using IP address.\n\t\t */\n\t\tsubtoken = strtok(token, \"/\");\n\t\tchunk = token_cp + strlen(subtoken);\n\t\tret = inet_pton(AF_INET, subtoken, &(check_ip.sin_addr));\n\t\tif (ret == 1) {\n\t\t\t/* node name is an IP address */\n\t\t\tpbs_asprintf(&node_name, \"%s%s\", subtoken, chunk);\n\t\t} else {\n\t\t\t/* Node name is not an IP address */\n\t\t\tpbs_asprintf(&node_name, \"%s%s\", strtok(subtoken, \".\"), chunk);\n\t\t}\n\t\t/* Backing up node_name as we are modifying the pointer further in the code */\n\t\tnode_name_bkp = node_name;\n\t\tlen = strlen(node_name);\n\t\tif (i + len < (CHAR_LINE_LIMIT - 1)) {\n\t\t\tfor (; len > 0; i++, len--) {\n\t\t\t\tlinebuf[i] = *node_name++;\n\t\t\t}\n\t\t\t/* Appending a  '+' here because we want to maintain the\n\t\t\t * exec_host format i.e. 
<host1>/<T1>*<P1>[+<host2>/<T2>*<P2>+.\n\t\t\t */\n\t\t\tlinebuf[i++] = '+';\n\t\t} else {\n\t\t\t/* flush line and start next */\n\t\t\tlinebuf[i] = '\\0';\n\t\t\tprintf((no_newl ? \"%s\" : \"   %s\\n\"), show_nonprint_chars(linebuf));\n\t\t\tfor (i = 0; len > 0; i++, len--) {\n\t\t\t\tlinebuf[i] = *node_name++;\n\t\t\t}\n\t\t\tlinebuf[i++] = '+';\n\t\t}\n\t\ttoken = strtok_r(NULL, \"+\", &saveptr);\n\t}\n\tif (i > 0) {\n\t\tlinebuf[--i] = '\\0';\n\t\tprintf((no_newl ? \"%s\\n\" : \"   %s\\n\"), show_nonprint_chars(linebuf));\n\t} else if (no_newl)\n\t\tprintf(\"\\n\");\n\tfree(token_cp);\n\tfree(rest);\n\tfree(node_name_bkp);\n\ttoken_cp = NULL;\n\trest = NULL;\n\tnode_name_bkp = NULL;\n}\n\n/**\n * @brief\n *\tconvert size from suffix string (nnnn[ kmgt][ bw]) to string of\n *\tk[bw] for neither -M or -G (magnitude may be adjusted to fit print)\n *\tmw    for\t  -M\n *\tgb    for\t  -G\n * @param[in] value - string holding info about size which processed to get long int\n * @param[in] opt   - option indicating which conversion\n *\n * @return string\n * @retval string holding magnitude\n *\n */\n\nstatic char *\ncnv_size(char *value, int opt)\n{\n\tstatic int sift_factor[5][5] = {\n\t\t{0, 10, 20, 30, 40},\t/*  b conversion */\n\t\t{-10, 0, 10, 20, 30},\t/* kb conversion */\n\t\t{-20, -10, 0, 10, 20},\t/* mb conversion */\n\t\t{-30, -20, -10, 0, 10}, /* gb conversion */\n\t\t{-40, -30, -20, -10, 0} /* tb conversion */\n\t};\n\n\tstatic char suffixletter[] = \" kmgtp?\";\n\tint in;\t /* magnitude of value from server  */\n\tint out; /* magnitude of value when printed */\n\tint sft;\n\tunsigned long nval;\n\tchar *pc;\n\tchar suffix1;\t    /* magnitude key letter [ kmgt] */\n\tchar suffix2 = 'b'; /* suffix letter, 'b' or 'w'    */\n\tstatic char outbuf[25];\n\n\tnval = strtol(value, &pc, 10);\n\tif (*pc == 'k')\n\t\tin = 1;\n\telse if (*pc == 'm')\n\t\tin = 2;\n\telse if (*pc == 'g')\n\t\tin = 3;\n\telse if (*pc == 't')\n\t\tin = 4;\n\telse\n\t\tin = 0;\n\n\tif 
((*pc == 'w') || (*(pc + 1) == 'w')) {\n\t\tnval = nval << 3; /* convert to bytes */\n\t\tsuffix2 = 'w';\n\t}\n\n\tif (opt & (ALT_DISPLAY_Mb | ALT_DISPLAY_Mw)) {\n\t\tout = 2;\n\t\tsuffix1 = 'm';\n\t\tif (opt & ALT_DISPLAY_Mw)\n\t\t\tsuffix2 = 'w';\n\t\telse\n\t\t\tsuffix2 = 'b';\n\t} else if (opt & ALT_DISPLAY_G) {\n\t\tout = 3;\n\t\tsuffix2 = 'b';\n\t} else {\n\t\tout = in;\n\t}\n\n\tsft = sift_factor[out][in];\n\n\tif (sft < 0) {\n\t\tnval = nval + ((1 << -sft) - 1); /* round up (ceiling) */\n\t\tnval = nval >> -sft;\n\t} else if (sft > 0) {\n\t\tnval = nval << sft;\n\t}\n\n\tif (suffix2 == 'w')\n\t\tnval = (nval + 7) >> 3; /* round and convert (back) to words */\n\n\t/* if the value will overflow the field size, up the magnitude */\n\twhile (nval > 9999) {\n\t\tnval = (nval + 1023) >> 10;\n\t\tout++;\n\t}\n\tsuffix1 = suffixletter[out];\n\n\t(void) sprintf(outbuf, \"%lu%c%c\", nval, suffix1, suffix2);\n\treturn outbuf;\n}\n\n/**\n * @brief\n *\tFormat and display status of job in alternate and alternate wide form (not POSIX standard)\n *\n *\n */\n\nstatic void\naltdsp_statjob(struct batch_status *pstat, struct batch_status *prtheader, int alt_opt, int wide, int how_opt)\n{\n\tchar *comment;\n\tchar *pc;\n\tstruct attrl *pat;\n\tchar *exechost;\n\tchar *usern;\n\tchar *queuen;\n\tchar *jobn;\n\tchar *sess;\n\tchar *tasks;\n\tchar *nodect;\n\tchar *rqtimecpu;\n\tchar *rqtimewal;\n\tchar *jstate;\n\tchar *eltimecpu;\n\tchar *eltimewal;\n\tchar *est_time;\n\tchar *timeval;\n\tint usecput;\n\tstatic char pfs[SIZEL];\n\tstatic char rqmem[SIZEL];\n\tstatic char srfsbig[SIZEL];\n\tstatic char srfsfast[SIZEL];\n\tstatic char *blank = \" -- \";\n\tchar buf[COMMENT_BUF_SIZE] = {'\\0'};\n\tint id_len;\n\n\tif (alt_opt & ALT_DISPLAY_T)\n\t\tpstat = bs_isort(pstat, cmp_est_time);\n\n\tif (prtheader) {\n\t\tprintf(\"\\n%s: \", prtheader->name);\n\n\t\tpc = get_attr(prtheader->attribs, ATTR_comment, NULL);\n\t\tif (pc)\n\t\t\tprintf(\"%s\", show_nonprint_chars(pc));\n\t\tif 
(wide) {\n\n\t\t\t/* Used for for displaying spaces and dashes dynamically for wide formatted output */\n#define STR_DASH \"--------------------------------------------------------------------------------\"\n#define STR_SPACE \"                                                                                \"\n\n\t\t\t/* dynamic formatting to display spaces */\n\t\t\tprintf(\"\\n\");\n\t\t\tif (alt_opt & ALT_DISPLAY_T) {\n\t\t\t\tprintf(\"%*.*s %*.*s %*.*s %*.*s %*.*s %*.*s %*.*s \", SIZEJOBID_W, SIZEJOBID_W, STR_SPACE,\n\t\t\t\t       SIZEUSER_W, SIZEUSER_W, STR_SPACE,\n\t\t\t\t       PBS_MAXQUEUENAME, PBS_MAXQUEUENAME, STR_SPACE,\n\t\t\t\t       SIZEJOBNAME_W, SIZEJOBNAME_W, STR_SPACE,\n\t\t\t\t       SIZESESSID_W, SIZESESSID_W, STR_SPACE,\n\t\t\t\t       SIZENDS_W, SIZENDS_W, STR_SPACE,\n\t\t\t\t       SIZETSK_W, SIZETSK_W, STR_SPACE);\n\t\t\t\tprintf(\"               Est\\n\");\n\t\t\t}\n\n\t\t\tprintf(\"%*.*s %*.*s %*.*s %*.*s %*.*s %*.*s %*.*s \", SIZEJOBID_W, SIZEJOBID_W, STR_SPACE,\n\t\t\t       SIZEUSER_W, SIZEUSER_W, STR_SPACE,\n\t\t\t       PBS_MAXQUEUENAME, PBS_MAXQUEUENAME, STR_SPACE,\n\t\t\t       SIZEJOBNAME_W, SIZEJOBNAME_W, STR_SPACE,\n\t\t\t       SIZESESSID_W, SIZESESSID_W, STR_SPACE,\n\t\t\t       SIZENDS_W, SIZENDS_W, STR_SPACE,\n\t\t\t       SIZETSK_W, SIZETSK_W, STR_SPACE);\n\t\t\tif (alt_opt & ALT_DISPLAY_T)\n\t\t\t\tprintf(\"Req'd  Req'd   Start\\n\");\n\t\t\telse\n\t\t\t\tprintf(\"Req'd  Req'd   Elap\\n\");\n\n\t\t\t/* dynamic formatting to display header */\n\t\t\tprintf(\"%-*.*s %-*.*s %-*.*s %-*.*s %-*.*s %-*.*s %-*.*s \", SIZEJOBID_W, SIZEJOBID_W, \"Job ID\",\n\t\t\t       SIZEUSER_W, SIZEUSER_W, \"Username\",\n\t\t\t       PBS_MAXQUEUENAME, PBS_MAXQUEUENAME, \"Queue\",\n\t\t\t       SIZEJOBNAME_W, SIZEJOBNAME_W, \"Jobname\",\n\t\t\t       SIZESESSID_W, SIZESESSID_W, \"SessID\",\n\t\t\t       SIZENDS_W, SIZENDS_W, \"NDS\",\n\t\t\t       SIZETSK_W, SIZETSK_W, \"TSK\");\n\t\t\tprintf(\"Memory Time  S Time\\n\");\n\n\t\t\t/* dynamic 
formatting to display dashes */\n\t\t\tprintf(\"%-*.*s %-*.*s %-*.*s %-*.*s %-*.*s %-*.*s %-*.*s \", SIZEJOBID_W, SIZEJOBID_W, STR_DASH,\n\t\t\t       SIZEUSER_W, SIZEUSER_W, STR_DASH,\n\t\t\t       PBS_MAXQUEUENAME, PBS_MAXQUEUENAME, STR_DASH,\n\t\t\t       SIZEJOBNAME_W, SIZEJOBNAME_W, STR_DASH,\n\t\t\t       SIZESESSID_W, SIZESESSID_W, STR_DASH,\n\t\t\t       SIZENDS_W, SIZENDS_W, STR_DASH,\n\t\t\t       SIZETSK_W, SIZETSK_W, STR_DASH);\n\t\t\tprintf(\"------ ----- - -----\\n\");\n\t\t} else {\n\t\t\tif (alt_opt & ALT_DISPLAY_T) {\n\t\t\t\tif (how_opt & ALT_DISPLAY_INCR_WIDTH)\n\t\t\t\t\tprintf(\"\\n%80s%s\\n%65s%-7s%-8s%s\\n\", \" \", \"Est\", \" \", \"Req'd\", \"Req'd\", \"Start\");\n\t\t\t\telse\n\t\t\t\t\tprintf(\"\\n%75s%s\\n%60s%-7s%-8s%s\\n\", \" \", \"Est\", \" \", \"Req'd\", \"Req'd\", \"Start\");\n\t\t\t} else {\n\t\t\t\tif (how_opt & ALT_DISPLAY_INCR_WIDTH)\n\t\t\t\t\tprintf(\"\\n%65s%-7s%-8s%s\\n\", \" \", \"Req'd\", \"Req'd\", \"Elap\");\n\t\t\t\telse\n\t\t\t\t\tprintf(\"\\n%60s%-7s%-8s%s\\n\", \" \", \"Req'd\", \"Req'd\", \"Elap\");\n\t\t\t}\n\t\t\tif (how_opt & ALT_DISPLAY_INCR_WIDTH) {\n\t\t\t\tprintf(\"Job ID               Username Queue    Jobname    SessID NDS TSK Memory Time  S Time\\n\");\n\t\t\t\tprintf(\"-------------------- -------- -------- ---------- ------ --- --- ------ ----- - -----\\n\");\n\t\t\t} else {\n\t\t\t\tprintf(\"Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time\\n\");\n\t\t\t\tprintf(\"--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----\\n\");\n\t\t\t}\n\t\t}\n\t}\n\twhile (pstat) {\n\t\texechost = blank;\n\t\tsess = blank;\n\t\tnodect = blank;\n\t\ttasks = blank;\n\t\trqtimecpu = blank;\n\t\trqtimewal = blank;\n\t\teltimecpu = blank;\n\t\teltimewal = blank;\n\t\tjstate = blank;\n\t\tcomment = blank;\n\t\tusern = blank;\n\t\tjobn = blank;\n\t\tqueuen = blank;\n\t\test_time = NULL;\n\t\t/*\t\t*pfs      = *blank;  */\n\t\tstrcpy(pfs, blank);\n\t\t/*\t\t*rqmem    = 
*blank;  */\n\t\tstrcpy(rqmem, blank);\n\t\t/*\t\t*srfsbig  = *blank;  */\n\t\tstrcpy(srfsbig, blank);\n\t\t/*\t\t*srfsfast = *blank;  */\n\t\tstrcpy(srfsfast, blank);\n\t\tusecput = 0;\n\n\t\tpat = pstat->attribs;\n\n\t\twhile (pat) {\n\t\t\tif (strcmp(pat->name, ATTR_N) == 0) {\n\t\t\t\tjobn = pat->value;\n\t\t\t\ttrunc_value(jobn, SIZEJOBNAME, SIZEJOBNAME_W, wide);\n\t\t\t} else if (strcmp(pat->name, ATTR_owner) == 0) {\n\t\t\t\tusern = pat->value;\n\t\t\t\tif ((pc = strchr(usern, (int) '@')) != NULL)\n\t\t\t\t\t*pc = '\\0';\n\t\t\t\ttrunc_value(usern, SIZEUSER, SIZEUSER_W, wide);\n\t\t\t} else if (strcmp(pat->name, ATTR_state) == 0) {\n\t\t\t\tjstate = pat->value;\n\t\t\t} else if (strcmp(pat->name, ATTR_queue) == 0) {\n\t\t\t\tqueuen = pat->value;\n\t\t\t\ttrunc_value(queuen, SIZEQUEUENAME, PBS_MAXQUEUENAME, wide);\n\t\t\t} else if (strcmp(pat->name, ATTR_session) == 0) {\n\t\t\t\tsess = pat->value;\n\t\t\t\ttrunc_value(sess, SIZESESSID, SIZESESSID_W, wide);\n\t\t\t} else if (strcmp(pat->name, ATTR_l) == 0) {\n\t\t\t\tif (strcmp(pat->resource, \"nodect\") == 0) {\n\t\t\t\t\tnodect = pat->value;\n\t\t\t\t\ttrunc_value(nodect, SIZENDS, SIZENDS_W, wide);\n\t\t\t\t} else if (strcmp(pat->resource, \"ncpus\") == 0) {\n\t\t\t\t\tif (strcmp(pat->value, \"0\") != 0) {\n\t\t\t\t\t\ttasks = pat->value;\n\t\t\t\t\t\ttrunc_value(tasks, SIZETSK, SIZETSK_W, wide);\n\t\t\t\t\t}\n\t\t\t\t} else if (strcmp(pat->resource, \"mppe\") == 0) {\n\t\t\t\t\tif (strcmp(pat->value, \"0\") != 0) {\n\t\t\t\t\t\ttasks = pat->value;\n\t\t\t\t\t\ttrunc_value(tasks, SIZETSK, SIZETSK_W, wide);\n\t\t\t\t\t}\n\t\t\t\t} else if (strcmp(pat->resource, \"mem\") == 0) {\n\t\t\t\t\tpbs_strncpy(rqmem,\n\t\t\t\t\t\t    cnv_size(pat->value, alt_opt), sizeof(rqmem));\n\t\t\t\t} else if (strcmp(pat->resource, \"walltime\") == 0) {\n\t\t\t\t\trqtimewal = pat->value;\n\t\t\t\t} else if (strcmp(pat->resource, \"cput\") == 0) {\n\t\t\t\t\trqtimecpu = pat->value;\n\t\t\t\t\tusecput = 1;\n\t\t\t\t} else if 
(strcmp(pat->resource, \"srfs_big\") == 0) {\n\t\t\t\t\tpbs_strncpy(srfsbig,\n\t\t\t\t\t\t    cnv_size(pat->value, alt_opt), sizeof(srfsbig));\n\t\t\t\t} else if (strcmp(pat->resource, \"srfs_fast\") == 0) {\n\t\t\t\t\tpbs_strncpy(srfsfast,\n\t\t\t\t\t\t    cnv_size(pat->value, alt_opt), sizeof(srfsfast));\n\t\t\t\t} else if (strcmp(pat->resource, \"piofs\") == 0) {\n\t\t\t\t\tpbs_strncpy(pfs,\n\t\t\t\t\t\t    cnv_size(pat->value, alt_opt), sizeof(pfs));\n\t\t\t\t}\n\n\t\t\t} else if (strcmp(pat->name, ATTR_exechost) == 0) {\n\t\t\t\texechost = pat->value;\n\t\t\t} else if (strcmp(pat->name, ATTR_estimated) == 0) {\n\t\t\t\tif (strcmp(pat->resource, \"start_time\") == 0) {\n\t\t\t\t\test_time = pat->value;\n\t\t\t\t}\n\t\t\t} else if (strcmp(pat->name, ATTR_used) == 0) {\n\t\t\t\tif (strcmp(pat->resource, \"walltime\") == 0) {\n\t\t\t\t\teltimewal = pat->value;\n\t\t\t\t} else if (strcmp(pat->resource, \"cput\") == 0) {\n\t\t\t\t\teltimecpu = pat->value;\n\t\t\t\t}\n\t\t\t} else if (strcmp(pat->name, ATTR_comment) == 0) {\n\t\t\t\t/* there are 3 blank spaces after & before comment string */\n\t\t\t\t/* hence, for 80char line - 74 chars are displayed and for */\n\t\t\t\t/* 120 char line - 114 chars are displayed */\n\t\t\t\tif (strlen(pat->value) > COMMENTLENSCOPE_SHORT) {\n\t\t\t\t\tif (wide) {\n\t\t\t\t\t\tif (strlen(pat->value) > COMMENTLENSCOPE_WIDE) {\n\t\t\t\t\t\t\tpbs_strncpy(buf, pat->value, COMMENTLEN_WIDE);\n\t\t\t\t\t\t\tstrcat(buf, \"...\");\n\t\t\t\t\t\t\tcomment = buf;\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tcomment = pat->value;\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tpbs_strncpy(buf, pat->value, COMMENTLEN_SHORT);\n\t\t\t\t\t\tstrcat(buf, \"...\");\n\t\t\t\t\t\tcomment = buf;\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tcomment = pat->value;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tpat = pat->next;\n\t\t}\n\n\t\tif (alt_opt & ALT_DISPLAY_T) {\n\t\t\tpc = get_attr(pstat->attribs, ATTR_state, NULL);\n\t\t\tif (pc != NULL && (*pc == 'Q' || *pc == 'S' || *pc == 
'B'))\n\t\t\t\ttimeval = cnvt_est_start_time(est_time, 0);\n\t\t\telse\n\t\t\t\ttimeval = \"--\";\n\t\t} else\n\t\t\ttimeval = usecput ? eltimecpu : eltimewal;\n\n\t\tid_len = how_opt & ALT_DISPLAY_INCR_WIDTH ? SIZEJOBID_INCR : SIZEJOBID;\n\t\ttrunc_value(pstat->name, id_len, SIZEJOBID_W, wide);\n\n\t\tif (wide) {\n\t\t\t/* dynamic formatting of values as defined by constants */\n\t\t\tprintf(\"%-*.*s %-*.*s %-*.*s %-*.*s %*.*s %*.*s %*.*s \", SIZEJOBID_W, SIZEJOBID_W, pstat->name,\n\t\t\t       SIZEUSER_W, SIZEUSER_W, usern,\n\t\t\t       PBS_MAXQUEUENAME, PBS_MAXQUEUENAME, queuen,\n\t\t\t       SIZEJOBNAME_W, SIZEJOBNAME_W, jobn,\n\t\t\t       SIZESESSID_W, SIZESESSID_W, sess,\n\t\t\t       SIZENDS_W, SIZENDS_W, nodect,\n\t\t\t       SIZETSK_W, SIZETSK_W, tasks);\n\t\t\t/* static formatting of fixed size values */\n\t\t\tprintf(\"%6.6s %5.5s %1s %5.5s\",\n\t\t\t       rqmem,\n\t\t\t       usecput ? rqtimecpu : rqtimewal,\n\t\t\t       jstate,\n\t\t\t       timeval);\n\t\t} else {\n\t\t\tif (how_opt & ALT_DISPLAY_INCR_WIDTH) {\n\t\t\t\tprintf(\"%-*.*s %-*.*s %-*.*s \",\n\t\t\t\t       SIZEJOBID_INCR, SIZEJOBID_INCR, pstat->name,\n\t\t\t\t       SIZEUSER, SIZEUSER, usern,\n\t\t\t\t       SIZEQUEUENAME, SIZEQUEUENAME, queuen);\n\t\t\t} else {\n\t\t\t\tprintf(\"%-*.*s %-*.*s %-*.*s \",\n\t\t\t\t       SIZEJOBID, SIZEJOBID, pstat->name,\n\t\t\t\t       SIZEUSER, SIZEUSER, usern,\n\t\t\t\t       SIZEQUEUENAME, SIZEQUEUENAME, queuen);\n\t\t\t}\n\t\t\t/* dynamic formatting of values as defined by constants */\n\t\t\tprintf(\"%-*.*s %*.*s %*.*s %*.*s \",\n\t\t\t       SIZEJOBNAME, SIZEJOBNAME, jobn,\n\t\t\t       SIZESESSID, SIZESESSID, sess,\n\t\t\t       SIZENDS, SIZENDS, nodect,\n\t\t\t       SIZETSK, SIZETSK, tasks);\n\t\t\t/* static formatting of fixed size values */\n\t\t\tprintf(\"%6.6s %5.5s %1.1s %5.5s\",\n\t\t\t       rqmem,\n\t\t\t       usecput ? 
rqtimecpu : rqtimewal,\n\t\t\t       jstate,\n\t\t\t       timeval);\n\t\t}\n\t\tif (!(alt_opt & ALT_DISPLAY_1l))\n\t\t\tprintf(\"\\n\");\n\t\telse\n\t\t\tprintf(\" \");\n\n\t\tif (alt_opt & ALT_DISPLAY_n) {\n\t\t\t/* print assigned nodes */\n\t\t\tprt_nodes(exechost, alt_opt & ALT_DISPLAY_1l);\n\t\t}\n\t\tif (alt_opt & ALT_DISPLAY_s) {\n\t\t\t/* print (scheduler) comment */\n\t\t\tif (*comment != '\\0')\n\t\t\t\tprintf(\"   %s\\n\", show_nonprint_chars(comment));\n\t\t}\n\n\t\tpstat = pstat->next;\n\t}\n}\n\n/**\n * @brief\n * \tget_ct - get count of jobs in queue/run state\n *\tsupport function for altdsp_statque()\n */\nstatic void\nget_ct(char *str, int *jque, int *jrun)\n{\n\tchar *ps;\n\tint colon = (int) ':';\n\n\tps = strchr(str, colon);    /* Transit - skip */\n\tps = strchr(ps + 1, colon); /* Queued  - add to jque */\n\t*jque += atoi(ps + 1);\n\tps = strchr(ps + 1, colon); /* Held    - add to jque  */\n\t*jque += atoi(ps + 1);\n\tps = strchr(ps + 1, colon); /* Waiting - add to jque  */\n\t*jque += atoi(ps + 1);\n\tps = strchr(ps + 1, colon); /* Running - add to jrun  */\n\t*jrun += atoi(ps + 1);\n}\n\n/**\n * @brief\n * \taltdsp_statque - alternative display for queue information, -q option\n */\n\nstatic void\naltdsp_statque(char *serv, struct batch_status *pstat, int opt)\n{\n\tchar rmem[SIZEL];\n\tchar *cput;\n\tchar *wallt;\n\tchar *jmax;\n\tchar *nodect;\n\tchar *blank = \"  --   \";\n\tint jrun;\n\tint jque;\n\tchar qenabled;\n\tchar qstarted;\n\tint tot_jrun = 0;\n\tint tot_jque = 0;\n\tstruct attrl *pat;\n\n\tprintf(\"\\nserver: %s\\n\\n\", serv);\n\tprintf(\"Queue            Memory CPU Time Walltime Node   Run   Que   Lm  State\\n\");\n\tprintf(\"---------------- ------ -------- -------- ---- ----- ----- ----  -----\\n\");\n\n\twhile (pstat) {\n\t\t/* *rmem = '\\0'; */\n\t\tstrcpy(rmem, \"--  \");\n\t\tcput = blank;\n\t\twallt = blank;\n\t\tnodect = \"-- \";\n\t\tjrun = 0;\n\t\tjque = 0;\n\t\tjmax = blank;\n\n\t\tqenabled = 'D';\n\t\tqstarted = 
'S';\n\t\tpat = pstat->attribs;\n\n\t\twhile (pat) {\n\t\t\tif (strcmp(pat->name, ATTR_maxrun) == 0) {\n\t\t\t\tjmax = pat->value;\n\t\t\t} else if (strcmp(pat->name, ATTR_enable) == 0) {\n\t\t\t\tif (*pat->value == 'T')\n\t\t\t\t\tqenabled = 'E';\n\t\t\t} else if (strcmp(pat->name, ATTR_start) == 0) {\n\t\t\t\tif (*pat->value == 'T')\n\t\t\t\t\tqstarted = 'R';\n\t\t\t} else if (strcmp(pat->name, ATTR_count) == 0) {\n\t\t\t\tget_ct(pat->value, &jque, &jrun);\n\t\t\t\ttot_jque += jque;\n\t\t\t\ttot_jrun += jrun;\n\t\t\t} else if (strcmp(pat->name, ATTR_rescmax) == 0) {\n\t\t\t\tif (strcmp(pat->resource, \"mem\") == 0) {\n\t\t\t\t\tpbs_strncpy(rmem,\n\t\t\t\t\t\t    cnv_size(pat->value, opt), sizeof(rmem));\n\t\t\t\t} else if (strcmp(pat->resource, \"cput\") == 0) {\n\t\t\t\t\tcput = pat->value;\n\t\t\t\t} else if (strcmp(pat->resource, \"walltime\") == 0) {\n\t\t\t\t\twallt = pat->value;\n\t\t\t\t} else if (strcmp(pat->resource, \"nodect\") == 0) {\n\t\t\t\t\tnodect = pat->value;\n\t\t\t\t}\n\t\t\t}\n\t\t\tpat = pat->next;\n\t\t}\n\n\t\tprintf(\"%-16.16s %6.6s %8.8s %8.8s %4.4s \",\n\t\t       pstat->name, rmem, cput, wallt, nodect);\n\t\tprintf(\"%5d %5d %4.4s   %c %c\\n\",\n\t\t       jrun, jque, jmax, qenabled, qstarted);\n\n\t\tpstat = pstat->next;\n\t}\n\tprintf(\"                                               ----- -----\\n\");\n\tprintf(\"                                               %5d %5d\\n\", tot_jrun, tot_jque);\n}\n\n/* build and add an attropl struct to the list */\n\nstatic void\nadd_atropl(struct attropl **list, char *name, char *resc, char *value, enum batch_op op)\n{\n\tstruct attropl *patro;\n\n\tpatro = (struct attropl *) malloc(sizeof(struct attropl));\n\tif (patro == 0)\n\t\texit_qstat(\"out of memory\");\n\tpatro->next = *list;\n\tpatro->name = name;\n\tpatro->resource = resc;\n\tpatro->value = value;\n\tpatro->op = op;\n\t*list = patro;\n}\n\nstatic long\ncvt_time_to_seconds(char *ts)\n{\n\tchar *workval;\n\tchar *pc;\n\tchar *pv;\n\tlong 
rv = 0;\n\n\tif ((workval = strdup(ts)) == NULL)\n\t\texit_qstat(\"out of memory\");\n\tfor (pc = workval, pv = workval; *pc; ++pc) {\n\t\tif (*pc == ':') {\n\t\t\t*pc = '\\0';\n\t\t\trv = (rv * 60) + atol(pv);\n\t\t\tpv = pc + 1;\n\t\t}\n\t}\n\trv = rv * 60 + atol(pv);\n\tfree(workval);\n\treturn rv;\n}\n\n/**\n * @brief\n * \tpercent_cal - calculate the percent done for the -p option\n *\tCalculation is either:\n *\t1.  expired / total subjobs for an array,\n *\t2.  cput_used / cput_requested, if cput specified, or\n *\t3.  walltime_used / walltime_requested if walltime specified, or\n *\t4.  \"--\" if none of the above apply\n */\nchar *\npercent_cal(char *state, char *timeu, char *timer, char *wtimu, char *wtimr, char *arsct)\n{\n\tchar *rtn = NULL;\n\tlong bot = 0;\n\tlong top = 0;\n\tint qu = 0, ru = 0, ex = 0, ep = 0; /* initialized in case sscanf matches fewer fields */\n\n\tswitch (*state) {\n\n\t\tcase 'Q':\n\t\tcase 'T':\n\t\tcase 'W':\n\t\t\tpbs_asprintf(&rtn, \"%3s\", \"-- \");\n\t\t\treturn (rtn);\n\n\t\tcase 'X':\n\t\t\tpbs_asprintf(&rtn, \"%3s\", \"100\");\n\t\t\treturn (rtn);\n\t}\n\n\tif (arsct) { /* array job: percent expired */\n\t\tlong percexp = -1;\n\t\tsscanf(arsct, \"Queued:%d Running:%d Exiting:%d Expired:%d\", &qu, &ru, &ex, &ep);\n\t\tbot = qu + ru + ex + ep;\n\t\ttop = ep;\n\t\tif (bot != 0)\n\t\t\tpercexp = (top * 100) / bot;\n\t\tif ((percexp >= 0) && (percexp < 1000)) {\n\t\t\tpbs_asprintf(&rtn, \"%3ld \", percexp);\n\t\t}\n\t} else {\n\t\tlong perccpu = -1;\n\t\tlong percwal = -1;\n\t\tif (timer && timeu) { /* if cput specified */\n\t\t\ttop = cvt_time_to_seconds(timeu);\n\t\t\tbot = cvt_time_to_seconds(timer);\n\t\t\tif (bot != 0)\n\t\t\t\tperccpu = (top * 100) / bot;\n\t\t}\n\t\tif (wtimr && wtimu) { /* if walltime specified */\n\t\t\ttop = cvt_time_to_seconds(wtimu);\n\t\t\tbot = cvt_time_to_seconds(wtimr);\n\t\t\tif (bot != 0)\n\t\t\t\tpercwal = (top * 100) / bot;\n\t\t}\n\t\tif ((perccpu != -1) || (percwal != -1)) {\n\t\t\tpbs_asprintf(&rtn, \"%3ld \",\n\t\t\t\t     perccpu > percwal ? 
perccpu : percwal);\n\t\t}\n\t}\n\tif (rtn == NULL) {\n\t\tpbs_asprintf(&rtn, \"%3s\", \"-- \");\n\t}\n\treturn (rtn);\n}\n\n/** @fn display_statjob\n * @brief\tdisplay job status in specific format.\n *\n * @return int\n * @retval\t0\t- success\n * @retval\t1\t- failure\n *\n */\n\nint\ndisplay_statjob(struct batch_status *status, struct batch_status *prtheader, int full, int how_opt, int alt_opt, int wide)\n{\n\tstruct batch_status *p;\n\tstruct attrl *a;\n\tint l;\n\tchar *c;\n\tchar *jid;\n\tchar *name;\n\tchar *owner;\n\tchar *timeu;\n\tchar *timer;\n\tchar *wtimu;\n\tchar *wtimr;\n\tchar *arsct;\n\tchar *state;\n\tchar *location;\n\tchar format[80];\n\tchar long_name[NAMEL + 1] = {'\\0'};\n\tchar *cmdargs = NULL;\n\tchar *hpcbp_executable;\n\tjson_data *json_jobs = NULL;\n\tjson_data *json_job = NULL;\n\n\tif (wide) {\n\t\tsprintf(format, \"%%-%ds %%-%ds %%-%ds  %%%ds %%%ds %%-%ds\\n\",\n\t\t\tSIZEJOBID_W, SIZEJOBNAME_W, SIZEUSER_W, TIMEUL, STATEL, PBS_MAXQUEUENAME);\n\t} else if (how_opt & ALT_DISPLAY_INCR_WIDTH) {\n\t\tsprintf(format, \"%%-%ds %%-%ds %%-%ds  %%%ds %%%ds %%-%ds\\n\",\n\t\t\tPBS_MAXSEQNUM + 10, NAMEL, OWNERL, TIMEUL, STATEL, LOCL);\n\t} else {\n\t\tsprintf(format, \"%%-%ds %%-%ds %%-%ds  %%%ds %%%ds %%-%ds\\n\",\n\t\t\tPBS_MAXSEQNUM + 5, NAMEL, OWNERL, TIMEUL, STATEL, LOCL);\n\t}\n\n\tif (!full && prtheader && output_format == FORMAT_DEFAULT) {\n\t\tc = get_attr(prtheader->attribs, ATTR_comment, NULL);\n\t\tif (c)\n\t\t\tprintf(\"%s\\n\", show_nonprint_chars(c));\n\t\tif (how_opt & ALT_DISPLAY_p) {\n\t\t\tif (wide) {\n\t\t\t\tprintf(\"Job id                         Name            User              %% done  S Queue\\n\");\n\t\t\t\tprintf(\"-----------------------------  --------------- ---------------  -------- - ---------------\\n\");\n\t\t\t} else if (how_opt & ALT_DISPLAY_INCR_WIDTH) {\n\t\t\t\tprintf(\"Job id                 Name             User               %% done  S Queue\\n\");\n\t\t\t\tprintf(\"---------------------  
---------------- ----------------  -------- - -----\\n\");\n\t\t\t} else {\n\t\t\t\tprintf(\"Job id            Name             User               %% done  S Queue\\n\");\n\t\t\t\tprintf(\"----------------  ---------------- ----------------  -------- - -----\\n\");\n\t\t\t}\n\t\t} else {\n\t\t\tif (wide) {\n\t\t\t\tprintf(\"Job id                         Name            User             Time Use S Queue\\n\");\n\t\t\t\tprintf(\"-----------------------------  --------------- ---------------  -------- - ---------------\\n\");\n\t\t\t} else if (how_opt & ALT_DISPLAY_INCR_WIDTH) {\n\t\t\t\tprintf(\"Job id                 Name             User              Time Use S Queue\\n\");\n\t\t\t\tprintf(\"---------------------  ---------------- ----------------  -------- - -----\\n\");\n\t\t\t} else {\n\t\t\t\tprintf(\"Job id            Name             User              Time Use S Queue\\n\");\n\t\t\t\tprintf(\"----------------  ---------------- ----------------  -------- - -----\\n\");\n\t\t\t}\n\t\t}\n\t}\n\n\tif (output_format == FORMAT_JSON && first_stat) {\n\t\tif ((json_jobs = pbs_json_create_object()) == NULL)\n\t\t\treturn 1;\n\t\tpbs_json_insert_item(json_root, \"Jobs\", json_jobs);\n\t\tfirst_stat = 0;\n\t}\n\tp = status;\n\twhile (p != NULL) {\n\t\tjid = NULL;\n\t\tname = NULL;\n\t\towner = NULL;\n\t\ttimeu = NULL;\n\t\ttimer = NULL;\n\t\twtimu = NULL;\n\t\twtimr = NULL;\n\t\tarsct = NULL;\n\t\tstate = NULL;\n\t\tlocation = NULL;\n\t\thpcbp_executable = NULL;\n\t\tprev_resc_name = NULL;\n\t\tjson_job = NULL;\n\t\tif (full) {\n\t\t\tif (output_format == FORMAT_DSV || output_format == FORMAT_DEFAULT)\n\t\t\t\tprintf(\"Job Id: %s%s\", p->name, delimiter);\n\t\t\telse if (output_format == FORMAT_JSON) {\n\t\t\t\tif ((json_job = pbs_json_create_object()) == NULL)\n\t\t\t\t\treturn 1;\n\t\t\t\tpbs_json_insert_item(json_jobs, p->name, json_job);\n\t\t\t}\n\t\t\ta = p->attribs;\n\t\t\twhile (a != NULL) {\n\t\t\t\tif (a->name != NULL) {\n\t\t\t\t\ttime_t 
epoch;\n\n\t\t\t\t\tif (strcmp(a->name, ATTR_ctime) == 0 ||\n\t\t\t\t\t    strcmp(a->name, ATTR_etime) == 0 ||\n\t\t\t\t\t    strcmp(a->name, ATTR_stime) == 0 ||\n\t\t\t\t\t    strcmp(a->name, ATTR_obittime) == 0 ||\n\t\t\t\t\t    strcmp(a->name, ATTR_mtime) == 0 ||\n\t\t\t\t\t    strcmp(a->name, ATTR_qtime) == 0 ||\n\t\t\t\t\t    strcmp(a->name, ATTR_resv_start) == 0 ||\n\t\t\t\t\t    strcmp(a->name, ATTR_resv_end) == 0 ||\n\t\t\t\t\t    strcmp(a->name, ATTR_cred_validity) == 0 ||\n\t\t\t\t\t    (strcmp(a->name, ATTR_estimated) == 0 &&\n\t\t\t\t\t     strcmp(a->resource, \"start_time\") == 0) ||\n\t\t\t\t\t    strcmp(a->name, ATTR_a) == 0) {\n\t\t\t\t\t\tepoch = (time_t) atol(a->value);\n\t\t\t\t\t\tif (epoch == 0 &&\n\t\t\t\t\t\t    strcmp(a->name, ATTR_estimated) == 0 &&\n\t\t\t\t\t\t    strcmp(a->resource, \"start_time\") == 0) {\n\t\t\t\t\t\t\t/*\n\t\t\t\t\t\t\t * Must not pass constant string to\n\t\t\t\t\t\t\t * ptr_attr due to strtok bug in Linux.\n\t\t\t\t\t\t\t * Use a stack variable instead.\n\t\t\t\t\t\t\t */\n\t\t\t\t\t\t\tchar noval[] = \"UNKNOWN\";\n\t\t\t\t\t\t\tprt_attr(a->name, a->resource, noval, alt_opt & ALT_DISPLAY_w, json_job);\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tchar time_buffer[32];\n\t\t\t\t\t\t\tpbs_strncpy(time_buffer, ctime(&epoch), sizeof(time_buffer));\n\t\t\t\t\t\t\ttime_buffer[strlen(time_buffer) - 1] = '\\0';\n\t\t\t\t\t\t\tprt_attr(a->name, a->resource, time_buffer, alt_opt & ALT_DISPLAY_w, json_job);\n\t\t\t\t\t\t}\n\t\t\t\t\t} else if (strcmp(a->name, ATTR_resv_state) == 0) {\n\t\t\t\t\t\tprt_attr(a->name, a->resource, cvtResvstate(a->value), alt_opt & ALT_DISPLAY_w, json_job);\n\t\t\t\t\t} else if (strcmp(a->name, ATTR_submit_arguments) == 0) {\n\t\t\t\t\t\tif (decode_xml_arg_list_str((a->value), &cmdargs) == -1)\n\t\t\t\t\t\t\texit_qstat(\"out of memory\");\n\t\t\t\t\t\tprt_attr(a->name, a->resource, cmdargs, alt_opt & ALT_DISPLAY_w, json_job);\n\t\t\t\t\t\tfree(cmdargs);\n\t\t\t\t\t} else if (strcmp(a->name, ATTR_executable) 
== 0) {\n\t\t\t\t\t\t/*\n\t\t\t\t\t\t * Prefix and suffix attribute value with\n\t\t\t\t\t\t * HPCBP_EXEC_TAG value.\n\t\t\t\t\t\t */\n\t\t\t\t\t\thpcbp_executable =\n\t\t\t\t\t\t\tmalloc((strlen(HPCBP_EXEC_TAG) * 2) +\n\t\t\t\t\t\t\t       sizeof(\"<></>\") + strlen(a->value) + 1);\n\t\t\t\t\t\tif (hpcbp_executable == NULL)\n\t\t\t\t\t\t\texit_qstat(\"out of memory\");\n\t\t\t\t\t\t(void) sprintf(hpcbp_executable, \"<%s>%s</%s>\",\n\t\t\t\t\t\t\t       HPCBP_EXEC_TAG, a->value, HPCBP_EXEC_TAG);\n\t\t\t\t\t\tprt_attr(a->name, a->resource, hpcbp_executable, alt_opt & ALT_DISPLAY_w, json_job);\n\t\t\t\t\t\tfree(hpcbp_executable);\n\t\t\t\t\t} else {\n\t\t\t\t\t\tprt_attr(a->name, a->resource, a->value, alt_opt & ALT_DISPLAY_w, json_job);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\ta = a->next;\n\t\t\t\tif (a)\n\t\t\t\t\tprintf(\"%s\", delimiter);\n\t\t\t}\n\t\t\tif (output_format == FORMAT_DEFAULT)\n\t\t\t\tprintf(\"%s\", delimiter);\n\t\t} else {\n\t\t\tif (p->name != NULL) {\n\t\t\t\tc = p->name;\n\t\t\t\twhile (*c != '.' && *c != '\\0')\n\t\t\t\t\tc++;\n\t\t\t\tc++; /* List the first part of the server name, too. */\n\t\t\t\twhile (*c != '.' 
&& *c != '\\0')\n\t\t\t\t\tc++;\n\t\t\t\t*c = '\\0';\n\t\t\t\tl = strlen(p->name);\n\t\t\t\tif (wide) {\n\t\t\t\t\tif (l > SIZEJOBID_W) {\n\t\t\t\t\t\tc = p->name + SIZEJOBID_W;\n\t\t\t\t\t\t*(c - 1) = DISPLAY_TRUNC_CHAR;\n\t\t\t\t\t\t*c = '\\0';\n\t\t\t\t\t}\n\t\t\t\t} else if (how_opt & ALT_DISPLAY_INCR_WIDTH) {\n\t\t\t\t\tif (l > (PBS_MAXSEQNUM + 10)) {\n\t\t\t\t\t\tc = p->name + PBS_MAXSEQNUM + 10;\n\t\t\t\t\t\t*(c - 1) = DISPLAY_TRUNC_CHAR;\n\t\t\t\t\t\t*c = '\\0';\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tif (l > (PBS_MAXSEQNUM + 5)) {\n\t\t\t\t\t\tc = p->name + PBS_MAXSEQNUM + 5;\n\t\t\t\t\t\t*(c - 1) = DISPLAY_TRUNC_CHAR;\n\t\t\t\t\t\t*c = '\\0';\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tjid = p->name;\n\t\t\t}\n\t\t\ta = p->attribs;\n\t\t\twhile (a != NULL) {\n\t\t\t\tif (a->name != NULL) {\n\t\t\t\t\tif (strcmp(a->name, ATTR_name) == 0) {\n\t\t\t\t\t\tl = strlen(a->value);\n\t\t\t\t\t\tif (wide) {\n\t\t\t\t\t\t\tif (l >= SIZEJOBNAME_W) {\n\t\t\t\t\t\t\t\tsnprintf(long_name, SIZEJOBNAME_W + 1, \"%.*s%c\", (SIZEJOBNAME_W - 1), a->value, DISPLAY_TRUNC_CHAR);\n\t\t\t\t\t\t\t\tc = long_name;\n\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\tc = a->value;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tif (l >= NAMEL) {\n\t\t\t\t\t\t\t\tsnprintf(long_name, NAMEL + 1, \"%.*s%c\", (NAMEL - 1), a->value, DISPLAY_TRUNC_CHAR);\n\t\t\t\t\t\t\t\tc = long_name;\n\t\t\t\t\t\t\t} else\n\t\t\t\t\t\t\t\tc = a->value;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tname = c;\n\t\t\t\t\t} else if (strcmp(a->name, ATTR_owner) == 0) {\n\t\t\t\t\t\tc = a->value;\n\t\t\t\t\t\twhile (*c != '@' && *c != '\\0')\n\t\t\t\t\t\t\tc++;\n\t\t\t\t\t\t*c = '\\0';\n\t\t\t\t\t\tl = strlen(a->value);\n\t\t\t\t\t\tif (wide) {\n\t\t\t\t\t\t\tif (l > SIZEUSER_W) {\n\t\t\t\t\t\t\t\tc = a->value + SIZEUSER_W;\n\t\t\t\t\t\t\t\t*(c - 1) = DISPLAY_TRUNC_CHAR;\n\t\t\t\t\t\t\t\t*c = '\\0';\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tif (l > OWNERL) {\n\t\t\t\t\t\t\t\tc = a->value + OWNERL;\n\t\t\t\t\t\t\t\t*(c - 1) = 
DISPLAY_TRUNC_CHAR;\n\t\t\t\t\t\t\t\t*c = '\\0';\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\towner = a->value;\n\t\t\t\t\t} else if (strcmp(a->name, ATTR_used) == 0) {\n\t\t\t\t\t\tif (strcmp(a->resource, \"cput\") == 0) {\n\t\t\t\t\t\t\tl = strlen(a->value);\n\t\t\t\t\t\t\tif (l > TIMEUL) {\n\t\t\t\t\t\t\t\tc = a->value + TIMEUL;\n\t\t\t\t\t\t\t\t*(c - 1) = DISPLAY_TRUNC_CHAR;\n\t\t\t\t\t\t\t\t*c = '\\0';\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\ttimeu = a->value;\n\t\t\t\t\t\t} else if (strcmp(a->resource, \"walltime\") == 0) {\n\t\t\t\t\t\t\tl = strlen(a->value);\n\t\t\t\t\t\t\tif (l > TIMEUL) {\n\t\t\t\t\t\t\t\tc = a->value + TIMEUL;\n\t\t\t\t\t\t\t\t*(c - 1) = DISPLAY_TRUNC_CHAR;\n\t\t\t\t\t\t\t\t*c = '\\0';\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\twtimu = a->value;\n\t\t\t\t\t\t}\n\t\t\t\t\t} else if (strcmp(a->name, ATTR_l) == 0) {\n\t\t\t\t\t\tif (strcmp(a->resource, \"cput\") == 0) {\n\t\t\t\t\t\t\tl = strlen(a->value);\n\t\t\t\t\t\t\tif (l > TIMEUL) {\n\t\t\t\t\t\t\t\tc = a->value + TIMEUL;\n\t\t\t\t\t\t\t\t*(c - 1) = DISPLAY_TRUNC_CHAR;\n\t\t\t\t\t\t\t\t*c = '\\0';\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\ttimer = a->value;\n\t\t\t\t\t\t} else if (strcmp(a->resource, \"walltime\") == 0) {\n\t\t\t\t\t\t\tl = strlen(a->value);\n\t\t\t\t\t\t\tif (l > TIMEUL) {\n\t\t\t\t\t\t\t\tc = a->value + TIMEUL;\n\t\t\t\t\t\t\t\t*(c - 1) = DISPLAY_TRUNC_CHAR;\n\t\t\t\t\t\t\t\t*c = '\\0';\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\twtimr = a->value;\n\t\t\t\t\t\t}\n\t\t\t\t\t} else if (strcmp(a->name, ATTR_state) == 0) {\n\t\t\t\t\t\tl = strlen(a->value);\n\t\t\t\t\t\tif (l > STATEL) {\n\t\t\t\t\t\t\tc = a->value + STATEL;\n\t\t\t\t\t\t\t*(c - 1) = DISPLAY_TRUNC_CHAR;\n\t\t\t\t\t\t\t*c = '\\0';\n\t\t\t\t\t\t}\n\t\t\t\t\t\tstate = a->value;\n\t\t\t\t\t} else if (strcmp(a->name, ATTR_queue) == 0) {\n\t\t\t\t\t\tc = a->value;\n\t\t\t\t\t\twhile (*c != '@' && *c != '\\0')\n\t\t\t\t\t\t\tc++;\n\t\t\t\t\t\t*c = '\\0';\n\t\t\t\t\t\tl = strlen(a->value);\n\t\t\t\t\t\tif (wide) {\n\t\t\t\t\t\t\tif (l > 
PBS_MAXQUEUENAME) {\n\t\t\t\t\t\t\t\tc = a->value + PBS_MAXQUEUENAME;\n\t\t\t\t\t\t\t\t*(c - 1) = DISPLAY_TRUNC_CHAR;\n\t\t\t\t\t\t\t\t*c = '\\0';\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tif (l > LOCL) {\n\t\t\t\t\t\t\t\tc = a->value + LOCL;\n\t\t\t\t\t\t\t\t*(c - 1) = DISPLAY_TRUNC_CHAR;\n\t\t\t\t\t\t\t\t*c = '\\0';\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\tlocation = a->value;\n\t\t\t\t\t} else if (strcmp(a->name, ATTR_array_state_count) == 0) {\n\t\t\t\t\t\tarsct = a->value;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\ta = a->next;\n\t\t\t}\n\t\t\tif (timeu == NULL)\n\t\t\t\ttimeu = \"0\";\n\t\t\tif (how_opt & ALT_DISPLAY_p) {\n\t\t\t\tchar *pc = percent_cal(state, timeu, timer, wtimu, wtimr, arsct);\n\t\t\t\tprintf(format, jid, name, owner, pc, state, location);\n\t\t\t\tfree(pc);\n\t\t\t} else\n\t\t\t\tprintf(format, jid, name, owner, timeu, state, location);\n\t\t}\n\t\tif (full && output_format != FORMAT_JSON)\n\t\t\tprintf(\"\\n\");\n\t\tp = p->next;\n\t}\n\treturn 0;\n}\n\n#define TYPEL 4\n/**\n * @brief\n *      Helper function to accumulate a new-type restriction into a running string\n *\n * @param[in] keys - array of keys\n * @param[in] values - array of values corresponding to each key\n * @param[in] count - pointer to current number of unique keys stored in keys/values\n * @param[in] max - maximum allowed number of unique keys\n * @param[in] key - new key to process\n * @param[in] val - value associated with the key to accumulate\n *\n */\nstatic void accumulate_restriction(char **keys, char **values, int *count, int max, const char *key, const char *val) {\n\t/* check NULL-ness of parameters */\n\tif (!keys || !values || !count || !key || !val) {\n\t\texit_qstat(\"accumulate_restriction: NULL argument\");\n\t}\n\tfor (int i = 0; i < *count; i++) {\n\t\t/* check if key already exists */\n\t\tif (strcmp(keys[i], key) == 0) {\n\t\t\tsize_t new_len = strlen(values[i]) + strlen(val) + 2; /* \",\\0\" */\n\t\t\tchar *new_buf = 
malloc(new_len);\n\t\t\tif (!new_buf) {\n\t\t\t\texit_qstat(\"out of memory\");\n\t\t\t}\n\t\t\tsnprintf(new_buf, new_len, \"%s,%s\", values[i], val); /* append to running output */\n\t\t\tfree(values[i]);\n\t\t\tvalues[i] = new_buf;\n\t\t\treturn;\n\t\t}\n\t}\n\n\t/* first time restriction seen, create new key */\n\tif (*count < max) {\n\t\tchar *new_key = strdup(key);\n\t\tif (!new_key) {\n\t\t\texit_qstat(\"out of memory\");\n\t\t}\n\n\t\tchar *new_val = strdup(val);\n\t\tif (!new_val) {\n\t\t\tfree(new_key);\n\t\t\texit_qstat(\"out of memory\");\n\t\t}\n\n\t\t/* assign values */\n\t\tkeys[*count] = new_key;\n\t\tvalues[*count] = new_val;\n\t\t(*count)++;\n\t}\n}\n\n/**\n * @brief\n *\tDisplays the status of queues.\n *\n * @param[in] status - batch request for queue status\n * @param[in] prtheader - nonzero to print the column header\n * @param[in] full - nonzero for full display\n * @param[in] alt_opt - alternate display options\n *\n * @return\tint\n * @retval\t0\t- success\n * @retval\t1\t- failure\n *\n */\nint\ndisplay_statque(struct batch_status *status, int prtheader, int full, int alt_opt)\n{\n\tstruct batch_status *p;\n\tstruct attrl *a;\n\tint l;\n\tchar *c;\n\tchar *name;\n\tchar *max;\n\tchar *tot;\n\tchar ena[3 + 1];\n\tchar str[3 + 1];\n\tchar que[NUML + 1];\n\tchar run[NUML + 1];\n\tchar hld[NUML + 1];\n\tchar wat[NUML + 1];\n\tchar trn[NUML + 1];\n\tchar ext[NUML + 1];\n\tchar *type;\n\tchar format[80];\n\tjson_data *json_queues = NULL;\n\tjson_data *json_queue = NULL;\n\n\tsprintf(format, \"%%-%ds %%%ds %%%ds %%%ds %%%ds %%%ds %%%ds %%%ds %%%ds %%%ds %%%ds %%-%ds\\n\",\n\t\tNAMEL, NUML, NUML, 3, 3, NUML,\n\t\tNUML, NUML, NUML, NUML, NUML, TYPEL);\n\n\tif (!full && prtheader && output_format == FORMAT_DEFAULT) {\n\t\tprintf(\"Queue              Max   Tot Ena Str   Que   Run   Hld   Wat   Trn   Ext Type\\n\");\n\t\tprintf(\"---------------- ----- ----- --- --- ----- ----- ----- ----- ----- ----- ----\\n\");\n\t}\n\n\tif (output_format == FORMAT_JSON && first_stat) {\n\t\tif ((json_queues = 
pbs_json_create_object()) == NULL)\n\t\t\treturn 1;\n\t\tpbs_json_insert_item(json_root, \"Queue\", json_queues);\n\t\tfirst_stat = 0;\n\t}\n\tp = status;\n\twhile (p != NULL) {\n\t\tname = NULL;\n\t\tmax = \"0\";\n\t\ttot = \"0\";\n\t\tstrcpy(ena, \"no\");\n\t\tstrcpy(str, \"no\");\n\t\tstrcpy(que, \"0\");\n\t\tstrcpy(run, \"0\");\n\t\tstrcpy(hld, \"0\");\n\t\tstrcpy(wat, \"0\");\n\t\tstrcpy(trn, \"0\");\n\t\tstrcpy(ext, \"0\");\n\t\ttype = \"not defined\";\n\t\tprev_resc_name = NULL;\n\n\t\tchar *attr_names[MAX_ATTRS] = {0};\n\t\tchar *attr_values[MAX_ATTRS] = {0};\n\t\tint attr_count = 0;\n\n\t\tchar *res_new_type_name = NULL;\n\t\tchar *res_names[MAX_RES] = {0};\n\t\tchar *res_values[MAX_RES] = {0};\n\t\tint res_count = 0;\n\n\t\tif (full) {\n\t\t\tif (output_format == FORMAT_DSV || output_format == FORMAT_DEFAULT)\n\t\t\t\tprintf(\"Queue: %s%s\", p->name, delimiter);\n\t\t\telse if (output_format == FORMAT_JSON) {\n\t\t\t\tif ((json_queue = pbs_json_create_object()) == NULL)\n\t\t\t\t\treturn 1;\n\t\t\t\tpbs_json_insert_item(json_queues, p->name, json_queue);\n\t\t\t}\n\t\t\ta = p->attribs;\n\t\t\twhile (a != NULL) {\n\t\t\t\tif (a->name != NULL) {\n\t\t\t\t\tif (output_format == FORMAT_JSON && a->value[0] == '[' && strchr(a->value, '=') != NULL && a->value[strlen(a->value) - 1] == ']') { /* new type queue restriction */\n\t\t\t\t\t\tif (a->resource) { /* resource + new type queue restriction + json */\n\t\t\t\t\t\t\taccumulate_restriction(res_names, res_values, &res_count, MAX_RES, a->resource, a->value);\n\t\t\t\t\t\t\tres_new_type_name = a->name;\n\t\t\t\t\t\t} else { /* new type but not a sub resource */\n\t\t\t\t\t\t\taccumulate_restriction(attr_names, attr_values, &attr_count, MAX_ATTRS, a->name, a->value);\n\t\t\t\t\t\t}\n\t\t\t\t\t} else { /* not new-type queue restriction */\n\t\t\t\t\t\tprt_attr(a->name, a->resource, a->value,\n\t\t\t\t\t\t\talt_opt & ALT_DISPLAY_w, json_queue);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\ta = a->next;\n\t\t\t\tif 
(a)\n\t\t\t\t\tprintf(\"%s\", delimiter);\n\t\t\t}\n\t\t\tif (attr_count > 0) { /* new type restriction */\n\t\t\t\tfor (int i = 0; i < attr_count; i++) {\n\t\t\t\t\tpbs_json_insert_string(json_queue, attr_names[i], attr_values[i]);\n\t\t\t\t\tfree(attr_names[i]);\n\t\t\t\t\tfree(attr_values[i]);\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (res_count > 0) { /* new type restriction + resource type */\n\t\t\t\tjson_data *json_obj = pbs_json_create_object();\n\t\t\t\tfor (int i = 0; i < res_count; i++) {\n\t\t\t\t\tpbs_json_insert_string(json_obj, res_names[i], res_values[i]);\n\t\t\t\t\tfree(res_names[i]);\n\t\t\t\t\tfree(res_values[i]);\n\t\t\t\t}\n\t\t\t\tpbs_json_insert_item(json_queue, res_new_type_name, json_obj);\n\t\t\t}\n\t\t\tif (output_format == FORMAT_DEFAULT)\n\t\t\t\tprintf(\"%s\", delimiter);\n\t\t} else {\n\t\t\tif (p->name != NULL) {\n\t\t\t\tl = strlen(p->name);\n\t\t\t\tif (l > NAMEL) {\n\t\t\t\t\tc = p->name + NAMEL;\n\t\t\t\t\t*(c - 1) = DISPLAY_TRUNC_CHAR;\n\t\t\t\t\t*c = '\\0';\n\t\t\t\t}\n\t\t\t\tname = p->name;\n\t\t\t}\n\t\t\ta = p->attribs;\n\t\t\twhile (a != NULL) {\n\t\t\t\tif (a->name != NULL) {\n\t\t\t\t\tif (strcmp(a->name, ATTR_maxrun) == 0) {\n\t\t\t\t\t\tl = strlen(a->value);\n\t\t\t\t\t\tif (l > NUML) {\n\t\t\t\t\t\t\tc = a->value + NUML;\n\t\t\t\t\t\t\t*(c - 1) = DISPLAY_TRUNC_CHAR;\n\t\t\t\t\t\t\t*c = '\\0';\n\t\t\t\t\t\t}\n\t\t\t\t\t\tmax = a->value;\n\t\t\t\t\t} else if (strcmp(a->name, ATTR_total) == 0) {\n\t\t\t\t\t\tl = strlen(a->value);\n\t\t\t\t\t\tif (l > NUML) {\n\t\t\t\t\t\t\tc = a->value + NUML;\n\t\t\t\t\t\t\t*(c - 1) = DISPLAY_TRUNC_CHAR;\n\t\t\t\t\t\t\t*c = '\\0';\n\t\t\t\t\t\t}\n\t\t\t\t\t\ttot = a->value;\n\t\t\t\t\t} else if (strcmp(a->name, ATTR_enable) == 0) {\n\t\t\t\t\t\tif (istrue(a->value))\n\t\t\t\t\t\t\tstrcpy(ena, \"yes\");\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\tstrcpy(ena, \"no\");\n\t\t\t\t\t} else if (strcmp(a->name, ATTR_start) == 0) {\n\t\t\t\t\t\tif (istrue(a->value))\n\t\t\t\t\t\t\tstrcpy(str, 
\"yes\");\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\tstrcpy(str, \"no\");\n\t\t\t\t\t} else if (strcmp(a->name, ATTR_count) == 0) {\n\t\t\t\t\t\tstates(a->value, que, run, hld, wat, trn, ext, NUML);\n\t\t\t\t\t} else if (strcmp(a->name, ATTR_qtype) == 0) {\n\t\t\t\t\t\tl = strlen(a->value);\n\t\t\t\t\t\tif (l > TYPEL) {\n\t\t\t\t\t\t\tc = a->value + TYPEL;\n\t\t\t\t\t\t\t*(c - 1) = DISPLAY_TRUNC_CHAR;\n\t\t\t\t\t\t\t*c = '\\0';\n\t\t\t\t\t\t}\n\t\t\t\t\t\ttype = a->value;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\ta = a->next;\n\t\t\t}\n\t\t\tprintf(format, name, max, tot, ena, str, que, run, hld, wat, trn, ext, type);\n\t\t}\n\t\tif (full && output_format != FORMAT_JSON)\n\t\t\tprintf(\"\\n\");\n\t\tp = p->next;\n\t}\n\treturn 0;\n}\n\n#define STATUSL 10\n\n/**\n * @brief\n *      Displays the status of the server.\n *\n * @param[in] status - batch request for server status\n * @param[in] prtheader - nonzero to print the column header\n * @param[in] full - nonzero for full display\n * @param[in] alt_opt - alternate display options\n *\n * @return  int\n * @retval\t0\t- success\n * @retval\t1\t- failure\n *\n */\n\nint\ndisplay_statserver(struct batch_status *status, int prtheader, int full, int alt_opt)\n{\n\tstruct batch_status *p;\n\tstruct attrl *a;\n\tint l;\n\tchar *c;\n\tchar *name;\n\tchar *max;\n\tchar *tot;\n\tchar que[NUML + 1];\n\tchar run[NUML + 1];\n\tchar hld[NUML + 1];\n\tchar wat[NUML + 1];\n\tchar trn[NUML + 1];\n\tchar ext[NUML + 1];\n\tchar *stats;\n\tchar format[80];\n\tjson_data *json_servers = NULL;\n\tjson_data *json_server = NULL;\n\n\tsprintf(format, \"%%-%ds %%%ds %%%ds %%%ds %%%ds %%%ds %%%ds %%%ds %%%ds %%-%ds\\n\",\n\t\tNAMEL, NUML, NUML, NUML, NUML, NUML, NUML, NUML, NUML, STATUSL);\n\n\tif (!full && prtheader && output_format == FORMAT_DEFAULT) {\n\t\tprintf(\"Server             Max   Tot   Que   Run   Hld   Wat   Trn   Ext Status\\n\");\n\t\tprintf(\"---------------- ----- ----- ----- ----- ----- ----- ----- ----- -----------\\n\");\n\t}\n\n\tif (output_format == FORMAT_JSON && first_stat) {\n\t\tif ((json_servers = 
pbs_json_create_object()) == NULL)\n\t\t\treturn 1;\n\t\tpbs_json_insert_item(json_root, \"Server\", json_servers);\n\t\tfirst_stat = 0;\n\t}\n\tp = status;\n\twhile (p != NULL) {\n\t\tname = NULL;\n\t\tmax = \"0\";\n\t\ttot = \"0\";\n\t\tstrcpy(que, \"0\");\n\t\tstrcpy(run, \"0\");\n\t\tstrcpy(hld, \"0\");\n\t\tstrcpy(wat, \"0\");\n\t\tstrcpy(trn, \"0\");\n\t\tstrcpy(ext, \"0\");\n\t\tstats = \"\";\n\t\tif (full) {\n\t\t\tif (output_format == FORMAT_DSV || output_format == FORMAT_DEFAULT)\n\t\t\t\tprintf(\"Server: %s%s\", p->name, delimiter);\n\t\t\telse if (output_format == FORMAT_JSON) {\n\t\t\t\tif ((json_server = pbs_json_create_object()) == NULL)\n\t\t\t\t\treturn 1;\n\t\t\t\tpbs_json_insert_item(json_servers, p->name, json_server);\n\t\t\t}\n\t\t\ta = p->attribs;\n\t\t\twhile (a != NULL) {\n\t\t\t\tif (a->name != NULL) {\n\t\t\t\t\tprt_attr(a->name, a->resource, a->value, alt_opt & ALT_DISPLAY_w, json_server);\n\t\t\t\t}\n\t\t\t\ta = a->next;\n\t\t\t\tif ((a || output_format == FORMAT_DEFAULT))\n\t\t\t\t\tprintf(\"%s\", delimiter);\n\t\t\t}\n\t\t} else {\n\t\t\tif (p->name != NULL) {\n\t\t\t\tl = strlen(p->name);\n\t\t\t\tif (l > NAMEL) {\n\t\t\t\t\tc = p->name + NAMEL;\n\t\t\t\t\t*(c - 1) = DISPLAY_TRUNC_CHAR;\n\t\t\t\t\t*c = '\\0';\n\t\t\t\t}\n\t\t\t\tname = p->name;\n\t\t\t}\n\t\t\ta = p->attribs;\n\t\t\twhile (a != NULL) {\n\t\t\t\tif (a->name != NULL) {\n\t\t\t\t\tif (strcmp(a->name, ATTR_maxrun) == 0) {\n\t\t\t\t\t\tl = strlen(a->value);\n\t\t\t\t\t\tif (l > NUML) {\n\t\t\t\t\t\t\tc = a->value + NUML;\n\t\t\t\t\t\t\t*(c - 1) = DISPLAY_TRUNC_CHAR;\n\t\t\t\t\t\t\t*c = '\\0';\n\t\t\t\t\t\t}\n\t\t\t\t\t\tmax = a->value;\n\t\t\t\t\t} else if (strcmp(a->name, ATTR_total) == 0) {\n\t\t\t\t\t\tl = strlen(a->value);\n\t\t\t\t\t\tif (l > NUML) {\n\t\t\t\t\t\t\tc = a->value + NUML;\n\t\t\t\t\t\t\t*(c - 1) = DISPLAY_TRUNC_CHAR;\n\t\t\t\t\t\t\t*c = '\\0';\n\t\t\t\t\t\t}\n\t\t\t\t\t\ttot = a->value;\n\t\t\t\t\t} else if (strcmp(a->name, ATTR_count) == 0) 
{\n\t\t\t\t\t\tstates(a->value, que, run, hld, wat, trn, ext, NUML);\n\t\t\t\t\t} else if (strcmp(a->name, ATTR_status) == 0) {\n\t\t\t\t\t\tl = strlen(a->value);\n\t\t\t\t\t\tif (l > STATUSL) {\n\t\t\t\t\t\t\tc = a->value + STATUSL;\n\t\t\t\t\t\t\t*(c - 1) = DISPLAY_TRUNC_CHAR;\n\t\t\t\t\t\t\t*c = '\\0';\n\t\t\t\t\t\t}\n\t\t\t\t\t\tstats = a->value;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\ta = a->next;\n\t\t\t}\n\t\t\tprintf(format, name, max, tot, que, run, hld, wat, trn, ext, stats);\n\t\t}\n\t\tif (full && output_format != FORMAT_JSON)\n\t\t\tprintf(\"\\n\");\n\t\tp = p->next;\n\t}\n\treturn 0;\n}\n\n#if TCL_QSTAT\n#ifdef NAS /* localmod 071 */\nstatic Tcl_Obj *\nattrlist(struct attrl *ap)\n{\n\tchar nameres[256];\n\tint rc;\n\tTcl_Obj *ret;\n\tTcl_Obj *sublist;\n\n\t/*\n\t * Build a list out of sublists made from attribute name / value\n\t * pairs\n\t */\n\tret = Tcl_NewListObj(0, NULL);\n\tif (ret == NULL)\n\t\treturn ret;\n\twhile (ap) {\n\t\tTcl_Obj *twol[2];\n\n\t\tif (ap->resource) {\n\t\t\tsprintf(nameres, \"%s%s%s\",\n\t\t\t\tap->name, tcl_atrsep, ap->resource);\n\t\t\ttwol[0] = Tcl_NewStringObj(nameres, -1);\n\t\t} else\n\t\t\ttwol[0] = Tcl_NewStringObj(ap->name, -1);\n\t\ttwol[1] = Tcl_NewStringObj(ap->value, -1);\n\t\tsublist = Tcl_NewListObj(2, twol);\n\t\tif (sublist == NULL) {\n\t\t\tif (twol[0])\n\t\t\t\tTcl_DecrRefCount(twol[0]);\n\t\t\tif (twol[1])\n\t\t\t\tTcl_DecrRefCount(twol[1]);\n\t\t\tbreak;\n\t\t}\n\t\trc = Tcl_ListObjAppendElement(NULL, ret, sublist);\n\t\tif (rc != TCL_OK) {\n\t\t\tTcl_DecrRefCount(sublist);\n\t\t\tbreak;\n\t\t}\n\t\tap = ap->next;\n\t}\n\treturn (ret);\n}\n#else\n#define ARGNUM 1024\n\nchar *\nattrlist(struct attrl *ap)\n{\n\tchar nameres[256];\n\tchar *argv[ARGNUM];\n\tchar *ret;\n\tint i, num = 0;\n\n\twhile (ap) {\n\t\tchar *twol[2];\n\n\t\tif (ap->resource) {\n\t\t\tsprintf(nameres, \"%s%s%s\",\n\t\t\t\tap->name, TCL_ATRSEP, ap->resource);\n\t\t\ttwol[0] = nameres;\n\t\t} else\n\t\t\ttwol[0] = ap->name;\n\t\ttwol[1] = 
ap->value;\n\t\targv[num++] = Tcl_Merge(2, twol);\n\t\tif (num == ARGNUM)\n\t\t\tbreak;\n\t\tap = ap->next;\n\t}\n\tret = Tcl_Merge(num, argv);\n\tfor (i = 0; i < num; i++)\n\t\tfree(argv[i]);\n\treturn (ret);\n}\n#endif /* localmod 071 */\n\nTcl_Interp *interp = NULL;\nchar script[200];\nchar flags[] = \"flags\";\nchar ops[] = \"operands\";\nchar error[] = \"error\";\n\n#ifdef NAS /* localmod 071 */\nchar log_buffer[4096];\nextern int quiet;\n\nextern void add_cmds(Tcl_Interp *interp);\n\n/**\n * @brief\n *\tlog error msg on error\n *\n * @param[in] errnum - error number\n * @param[in] func - function name where error occurred\n * @param[in] text - error msg\n *\n * @return\tVoid\n *\n */\nvoid\nlog_err(int errnum, char *func, char *text)\n{\n\tif (quiet)\n\t\treturn;\n\tfprintf(stderr, \"%s: %s: %s\\n\",\n\t\t(errnum < 0) ? \"Internal error\" : strerror(errnum),\n\t\tfunc, text);\n}\n\n/**\n * @brief\n *\tinitialise the Tcl interpreter.\n */\nvoid\ntcl_init()\n{\n\tstruct passwd *pw;\n\tuid_t uid;\n\tstruct stat sb;\n\tstruct batch_status *bp;\n\tint i, ret = 1;\n\tchar *home;\n\n\tif ((home = getenv(\"QSTATRCHOME\")) == NULL &&\n\t    (home = getenv(\"HOME\")) == NULL) {\n\t\tuid = getuid();\n\t\tpw = getpwuid(uid);\n\t\tif (pw == NULL)\n\t\t\treturn;\n\t\thome = pw->pw_dir;\n\t}\n\n\tsnprintf(script, sizeof(script), \"%s/.qstatrc\", home);\n\tif (stat(script, &sb) == -1) {\n\t\tpbs_strncpy(script, QSTATRC_PATH, sizeof(script));\n\t\tif (stat(script, &sb) == -1)\n\t\t\treturn;\n\t}\n\n\tinterp = Tcl_CreateInterp();\n\tif (Tcl_Init(interp) == TCL_ERROR) {\n\t\tfprintf(stderr, \"Tcl_Init error: %s\",\n\t\t\tTcl_GetStringResult(interp));\n\t}\n#if TCLX\n#if TCL_MINOR_VERSION < 5 && TCL_MAJOR_VERSION < 8\n\tif (TclX_Init(interp) == TCL_ERROR)\n#else\n\tif (Tclx_Init(interp) == TCL_ERROR)\n#endif\n\t{\n\t\tfprintf(stderr, \"Tclx_Init error: %s\",\n\t\t\tinterp->result);\n\t}\n#endif /* TCLX */\n\tadd_cmds(interp);\n\treturn;\n}\n#else\n\n/**\n * @brief\n *      initialise the tcl 
lib.\n */\n\nvoid\ntcl_init()\n{\n\tstruct passwd *pw;\n\tuid_t uid;\n\tstruct stat sb;\n\tstruct batch_status *bp;\n\tint i, ret = 1;\n\n\tuid = getuid();\n\tpw = getpwuid(uid);\n\tif (pw == NULL)\n\t\treturn;\n\n\tsnprintf(script, sizeof(script), \"%s/.qstatrc\", pw->pw_dir);\n\tif (stat(script, &sb) == -1) {\n\t\tpbs_strncpy(script, QSTATRC_PATH, sizeof(script));\n\t\tif (stat(script, &sb) == -1)\n\t\t\treturn;\n\t}\n\n\tinterp = Tcl_CreateInterp();\n\tif (Tcl_Init(interp) == TCL_ERROR) {\n\t\tfprintf(stderr, \"Tcl_Init error: %s\",\n\t\t\tTcl_GetStringResult(interp));\n\t}\n#if TCLX\n#if TCL_MINOR_VERSION < 5 && TCL_MAJOR_VERSION < 8\n\tif (TclX_Init(interp) == TCL_ERROR)\n#else\n\tif (Tclx_Init(interp) == TCL_ERROR)\n#endif\n\t{\n\t\tfprintf(stderr, \"Tclx_Init error: %s\",\n\t\t\tinterp->result);\n\t}\n#endif /* TCLX */\n\treturn;\n}\n#endif /* localmod 071 */\n\n/**\n * @brief\n *\tadd argument to tcl list.\n *\n * @param[in] name - flag\n * @param[in] arg - argument\n *\n * @return\tVoid\n *\n */\n\nvoid\ntcl_addarg(char *name, char *arg)\n{\n\tif (interp == NULL)\n\t\treturn;\n\n\tif (arg == NULL || *arg == '\\0')\n\t\treturn;\n\n\tTcl_SetVar(interp, name, arg,\n\t\t   TCL_GLOBAL_ONLY |\n\t\t\t   TCL_LIST_ELEMENT |\n\t\t\t   TCL_APPEND_VALUE);\n}\n\n/**\n * @brief\n *      set tcl status .\n *\n * @param[in] type - type\n * @param[in] bs - batch request for tcl status\n * @param[in] f_opt - file option\n *\n * @return      int\n * @retval      0       success\n * @retval      1       error\n *\n */\n\n#ifdef NAS /* localmod 071 */\nint tcl_stat(char *, struct batch_status *, int);\nint\ntcl_stat(char *type, struct batch_status *bs, int tcl_opt)\n{\n\tstruct batch_status *bp;\n\tTcl_Obj *twol[2];\n\tTcl_Obj *value;\n\tTcl_Obj *result;\n\tTcl_Obj *name;\n\tint rc;\n\tint i;\n\tint errs = 0;\n\n\tif (interp == NULL)\n\t\treturn 1;\n\n\tif (tcl_opt == 0)\n\t\treturn 1;\n\n\tvalue = Tcl_NewListObj(0, NULL);\n\tif (value == NULL)\n\t\treturn 1;\n\n\tfor (bp = 
bs; bp; bp = bp->next) {\n\t\tTcl_Obj *threel[3];\n\t\tTcl_Obj *sublist;\n\n\t\tthreel[0] = Tcl_NewStringObj(bp->name, -1);\n\t\tthreel[1] = attrlist(bp->attribs);\n\t\tthreel[2] = Tcl_NewStringObj(bp->text, -1);\n\n\t\tsublist = Tcl_NewListObj(3, threel);\n\t\tif (sublist == NULL) {\n\t\t\tfor (i = 0; i < 3; ++i) {\n\t\t\t\tif (threel[i] != NULL) {\n\t\t\t\t\tTcl_DecrRefCount(threel[i]);\n\t\t\t\t}\n\t\t\t}\n\t\t\t++errs;\n\t\t\tbreak;\n\t\t}\n\t\trc = Tcl_ListObjAppendElement(interp, value, sublist);\n\t\tif (rc != TCL_OK) {\n\t\t\tTcl_DecrRefCount(sublist);\n\t\t\t++errs;\n\t\t\tbreak;\n\t\t}\n\t}\n\tif (errs) {\n\t\tTcl_DecrRefCount(value);\n\t\treturn 1;\n\t}\n\ttwol[0] = Tcl_NewStringObj(type, -1);\n\ttwol[1] = value;\n\n\tresult = Tcl_NewListObj(2, twol);\n\tif (result == NULL) {\n\t\tfor (i = 0; i < 2; ++i) {\n\t\t\tif (twol[i] != NULL) {\n\t\t\t\tTcl_DecrRefCount(twol[i]);\n\t\t\t}\n\t\t}\n\t\treturn 1;\n\t}\n\tname = Tcl_NewStringObj(\"objects\", -1);\n\tif (name == NULL) {\n\t\tTcl_DecrRefCount(result);\n\t\treturn 1;\n\t}\n\n\tTcl_ObjSetVar2(interp, name, NULL, result,\n\t\t       TCL_GLOBAL_ONLY |\n\t\t\t       TCL_LIST_ELEMENT |\n\t\t\t       TCL_APPEND_VALUE);\n\treturn 0;\n}\n#else\n/**\n * @brief\n *\tset tcl status .\n *\n * @param[in] type - type\n * @param[in] bs - batch request for tcl status\n * @param[in] f_opt - file option\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t1\terror\n *\n */\nint\ntcl_stat(char *type, struct batch_status *bs, int f_opt)\n{\n\tstruct batch_status *bp;\n\tchar *twol[2];\n\tchar *argv[ARGNUM];\n\tint i, num = 0;\n\tchar *result;\n\n\tif (interp == NULL)\n\t\treturn 1;\n\n\tif (f_opt == 0)\n\t\treturn 1;\n\n\ttwol[0] = type;\n\tfor (bp = bs; bp; bp = bp->next) {\n\t\tchar *threel[3];\n\n\t\tthreel[0] = bp->name;\n\t\tthreel[1] = attrlist(bp->attribs);\n\t\tthreel[2] = bp->text;\n\n\t\targv[num++] = Tcl_Merge(3, threel);\n\t\tfree(threel[1]); /* malloc'ed in attrlist() */\n\t\tif (num == 
ARGNUM)\n\t\t\tbreak;\n\t}\n\ttwol[1] = Tcl_Merge(num, argv);\n\tfor (i = 0; i < num; i++)\n\t\tfree(argv[i]);\n\n\tresult = Tcl_Merge(2, twol);\n\tTcl_SetVar(interp, \"objects\", result,\n\t\t   TCL_GLOBAL_ONLY |\n\t\t\t   TCL_LIST_ELEMENT |\n\t\t\t   TCL_APPEND_VALUE);\n\tfree(twol[1]);\n\tfree(result);\n\treturn 0;\n}\n#endif /* localmod 071 */\n\nvoid\n#ifdef NAS /* localmod 071 */\ntcl_run(int tcl_opt)\n#else\ntcl_run(int f_opt)\n#endif /* localmod 071 */\n{\n\tif (interp == NULL)\n\t\treturn;\n\n#ifdef NAS /* localmod 071 */\n\tif (tcl_opt &&\n#else\n\tif (f_opt &&\n#endif /* localmod 071 */\n\t    Tcl_EvalFile(interp, script) != TCL_OK) {\n\t\tchar *trace;\n\n\t\ttrace = (char *) Tcl_GetVar(interp, \"errorInfo\", 0);\n\t\tif (trace == NULL)\n\t\t\ttrace = Tcl_GetStringResult(interp);\n\n\t\tfprintf(stderr, \"%s: TCL error @ line %d: %s\\n\",\n\t\t\tscript, Tcl_GetErrorLine(interp), trace);\n\t}\n\tTcl_DeleteInterp(interp);\n}\n\n#else\n#define tcl_init()\n#define tcl_addarg(name, arg)\n#ifdef NAS /* localmod 071 */\n#define tcl_stat(type, bs, tcl_opt) 1\n#define tcl_run(tcl_opt)\n#else\n#define tcl_stat(type, bs, f_opt) 1\n#define tcl_run(f_opt)\n#endif /* localmod 071 */\n#endif /* TCL_QSTAT */\n\nint\nmain(int argc, char **argv, char **envp) /* qstat */\n{\n\tint added_queue;\n\tint c;\n\tint errflg = 0;\n\tint any_failed = 0;\n\textern char *optarg;\n\tchar *conflict = \"qstat: conflicting options.\\n\";\n\tchar *pc;\n\tint located = FALSE;\n\tchar extend[4];\n\tint wide = 0;\n\tint format = 0;\n\ttime_t timenow;\n\n#if TCL_QSTAT\n\tchar option[3];\n#endif\n\n\tchar job_id[PBS_MAXCLTJOBID];\n\n\tchar job_id_out[PBS_MAXCLTJOBID];\n\tchar server_out[MAXSERVERNAME] = {0};\n\tchar prev_server[MAXSERVERNAME] = {0};\n\tchar server_old[MAXSERVERNAME] = \"\";\n\tchar rmt_server[MAXSERVERNAME];\n\tchar destination[PBS_MAXDEST + 1];\n\n\tchar *queue_name_out;\n\tchar *server_name_out;\n\n\tchar operand[PBS_MAXCLTJOBID + 1];\n\tint alt_opt;\n\tint f_opt, B_opt, 
Q_opt, how_opt, E_opt;\n\tint p_header = TRUE;\n\tint stat_single_job = 0;\n\tint new_remote_server = 0;\n\tenum { JOBS,\n\t       QUEUES,\n\t       SERVERS } mode;\n\tstruct batch_status *p_status;\n\tstruct batch_status *p_server = NULL;\n\tstruct attropl *p_atropl = 0;\n\tstruct attropl *new_atropl;\n#ifdef NAS /* localmod 071 */\n\tint tcl_opt;\n\tstruct batch_status *p_rsvstat;\n#endif /* localmod 071 */\n\n\tchar *errmsg;\n\tchar *job_list = NULL;\n\tsize_t job_list_size = 0;\n\tchar *query_job_list = NULL;\n\n#if !defined(PBS_NO_POSIX_VIOLATION)\n#ifdef NAS /* localmod 071 */\n#define GETOPT_ARGS \"aeinpqrstwxu:fGHJMQEBW:T1\"\n#else\n#define GETOPT_ARGS \"ainpqrstwxu:fGHJMQEBW:T1F:D:\"\n#endif /* localmod 071 */\n#else\n#define GETOPT_ARGS \"fQBW:\"\n#endif /* PBS_NO_POSIX_VIOLATION */\n\n\t/*test for real deal or just version and exit*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\tdelay_query();\n\tif (initsocketlib())\n\t\treturn 1;\n\n\tmode = JOBS; /* default */\n\talt_opt = 0;\n\tf_opt = 0;\n\tB_opt = 0;\n\tQ_opt = 0;\n\tE_opt = 0;\n\thow_opt = 0;\n#ifdef NAS /* localmod 071 */\n\ttcl_opt = -2;\n#endif /* localmod 071 */\n\textend[0] = '\\0';\n\n#if TCL_QSTAT\n\n\ttcl_init();\n\ttcl_addarg(flags, argv[0]);\n\toption[0] = '-';\n\toption[2] = '\\0';\n#endif /* TCL_QSTAT */\n\n\twhile ((c = getopt(argc, argv, GETOPT_ARGS)) != EOF) {\n#if TCL_QSTAT\n\t\toption[1] = (char) c;\n\t\ttcl_addarg(flags, option);\n\t\ttcl_addarg(flags, optarg);\n#endif /* TCL_QSTAT */\n\t\tswitch (c) {\n\n#if !defined(PBS_NO_POSIX_VIOLATION)\n\n\t\t\tcase 'a':\n\t\t\t\talt_opt |= ALT_DISPLAY_a;\n\t\t\t\tdisplay_attribs = &alt_attribs[0];\n\t\t\t\tbreak;\n\n#ifdef NAS /* localmod 071 */\n\t\t\tcase 'e':\n\t\t\t\ttcl_opt += 1;\n\t\t\t\tbreak;\n#endif /* localmod 071 */\n\n\t\t\tcase 'T':\n\t\t\t\talt_opt |= ALT_DISPLAY_T;\n\t\t\t\tdisplay_attribs = &alt_attribs[0];\n\t\t\t\tadd_atropl((struct attropl **) &display_attribs, ATTR_estimated, \"start_time\", \"\", 
SET);\n\t\t\t\tadd_atropl((struct attropl **) &display_attribs, ATTR_stime, NULL, \"\", SET);\n\t\t\t\tbreak;\n\n\t\t\tcase 'i':\n\t\t\t\talt_opt |= ALT_DISPLAY_i;\n\t\t\t\tdisplay_attribs = &alt_attribs[0];\n#ifdef NAS /* localmod 071 */\n\t\t\t\tadd_atropl(&p_atropl, ATTR_state, NULL, \"HQTW\", EQ);\n#else\n\t\t\t\tadd_atropl(&p_atropl, ATTR_state, NULL, \"EHQTW\", EQ);\n#endif /* localmod 071 */\n\t\t\t\tbreak;\n\n\t\t\tcase 'r':\n\t\t\t\talt_opt |= ALT_DISPLAY_r;\n\t\t\t\tdisplay_attribs = &alt_attribs[0];\n#ifdef NAS /* localmod 071 */\n\t\t\t\tadd_atropl(&p_atropl, ATTR_state, NULL, \"BERS\", EQ);\n#else\n\t\t\t\tadd_atropl(&p_atropl, ATTR_state, NULL, \"RS\", EQ);\n#endif /* localmod 071 */\n\t\t\t\tbreak;\n\n\t\t\tcase 'H':\n\t\t\t\talt_opt |= ALT_DISPLAY_H;\n\t\t\t\tdisplay_attribs = &alt_attribs[0];\n\t\t\t\tif (strchr(extend, (int) 'x') == NULL)\n\t\t\t\t\tstrcat(extend, \"x\");\n\t\t\t\tadd_atropl(&p_atropl, ATTR_state, NULL, \"MFX\", EQ);\n\t\t\t\tbreak;\n\n\t\t\tcase 't':\n\t\t\t\t/* send 't' in extend field to include sub jobs */\n\t\t\t\tif (strchr(extend, (int) 't') == NULL)\n\t\t\t\t\tstrcat(extend, \"t\");\n\t\t\t\tbreak;\n\n\t\t\tcase 'x':\n\t\t\t\t/* send 'x' in extend field to include history jobs */\n\t\t\t\tif (strchr(extend, (int) 'x') == NULL)\n\t\t\t\t\tstrcat(extend, \"x\");\n\t\t\t\tbreak;\n\n\t\t\tcase 'u':\n\t\t\t\talt_opt |= ALT_DISPLAY_u;\n\t\t\t\tdisplay_attribs = &alt_attribs[0];\n\t\t\t\tadd_atropl(&p_atropl, ATTR_u, NULL, optarg, EQ);\n\t\t\t\tbreak;\n\n\t\t\tcase 'n':\n\t\t\t\talt_opt |= ALT_DISPLAY_n;\n\t\t\t\tif (display_attribs == &basic_attribs[0] || f_opt == 1)\n\t\t\t\t\tdisplay_attribs = &alt_attribs[0];\n\t\t\t\tadd_atropl((struct attropl **) &display_attribs, ATTR_exechost,\n\t\t\t\t\t   NULL, \"\", SET);\n\t\t\t\tf_opt = 0;\n\t\t\t\tbreak;\n\n\t\t\tcase 'p':\n\t\t\t\thow_opt |= ALT_DISPLAY_p;\n\t\t\t\tadd_atropl((struct attropl **) &display_attribs, ATTR_l, NULL, \"\", EQ);\n\t\t\t\tadd_atropl((struct attropl **) 
&display_attribs, ATTR_array_state_count, NULL, \"\", EQ);\n\t\t\t\tbreak;\n\n\t\t\tcase 's':\n\t\t\t\talt_opt |= ALT_DISPLAY_s;\n\t\t\t\tif (display_attribs == &basic_attribs[0] || f_opt == 1)\n\t\t\t\t\tdisplay_attribs = &alt_attribs[0];\n\t\t\t\tadd_atropl((struct attropl **) &display_attribs, ATTR_comment,\n\t\t\t\t\t   NULL, \"\", SET);\n\t\t\t\tf_opt = 0;\n\t\t\t\tbreak;\n\n\t\t\tcase 'q':\n\t\t\t\talt_opt |= ALT_DISPLAY_q;\n\t\t\t\tmode = QUEUES;\n\t\t\t\tbreak;\n\n\t\t\tcase 'G':\n\t\t\t\talt_opt |= ALT_DISPLAY_G;\n\t\t\t\tdisplay_attribs = &alt_attribs[0];\n\t\t\t\tbreak;\n\n\t\t\tcase 'J':\n\t\t\t\tadd_atropl(&p_atropl, ATTR_array, NULL, \"True\", EQ);\n\t\t\t\tbreak;\n\n\t\t\tcase 'M':\n\t\t\t\talt_opt |= ALT_DISPLAY_Mw;\n\t\t\t\tdisplay_attribs = &alt_attribs[0];\n\t\t\t\tbreak;\n\n\t\t\tcase '1':\n\t\t\t\talt_opt |= ALT_DISPLAY_1l;\n\t\t\t\tbreak;\n\n\t\t\tcase 'w':\n\t\t\t\talt_opt |= ALT_DISPLAY_w;\n\t\t\t\twide = 1;\n\t\t\t\tbreak;\n#endif /* PBS_NO_POSIX_VIOLATION */\n\n\t\t\tcase 'f':\n\t\t\t\tf_opt = 1;\n\t\t\t\tdisplay_attribs = NULL; /* get all attributes */\n\t\t\t\tbreak;\n\n\t\t\tcase 'B':\n\t\t\t\tB_opt = 1;\n\t\t\t\tmode = SERVERS;\n\t\t\t\tif (Q_opt || (alt_opt && !wide)) {\n\t\t\t\t\tfprintf(stderr, \"%s\", conflict);\n\t\t\t\t\terrflg++;\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase 'Q':\n\t\t\t\tQ_opt = 1;\n\t\t\t\tmode = QUEUES;\n\t\t\t\tif (B_opt || (alt_opt && !wide)) {\n\t\t\t\t\tfprintf(stderr, \"%s\", conflict);\n\t\t\t\t\terrflg++;\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'E':\n\t\t\t\tE_opt = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 'D':\n\t\t\t\tif (output_format == FORMAT_DSV)\n\t\t\t\t\tdsv_delim = optarg;\n\t\t\t\telse\n\t\t\t\t\terrflg++;\n\t\t\t\tbreak;\n\n\t\t\tcase 'F':\n\t\t\t\tfor (format = FORMAT_DEFAULT; format < FORMAT_MAX; format++) {\n\t\t\t\t\tif (strcasecmp(optarg, output_format_names[format]) == 0) {\n\t\t\t\t\t\toutput_format = format;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (format >= 
FORMAT_MAX)\n\t\t\t\t\terrflg++;\n\t\t\t\tbreak;\n\n\t\t\tcase 'W':\n#if (TCL_QSTAT == 0)\n\t\t\t\tpc = optarg;\n\t\t\t\twhile (*pc) {\n\t\t\t\t\tswitch (*pc) {\n\t\t\t\t\t\tcase 'a':\n\t\t\t\t\t\t\talt_opt |= ALT_DISPLAY_a;\n\t\t\t\t\t\t\tbreak;\n\n\t\t\t\t\t\tcase 'i':\n\t\t\t\t\t\t\talt_opt |= ALT_DISPLAY_i;\n\t\t\t\t\t\t\tadd_atropl(&p_atropl, ATTR_state, NULL, \"EHQTW\", EQ);\n\t\t\t\t\t\t\tbreak;\n\n\t\t\t\t\t\tcase 'r':\n\t\t\t\t\t\t\talt_opt |= ALT_DISPLAY_r;\n#ifdef NAS /* localmod 071 */\n\t\t\t\t\t\t\tadd_atropl(&p_atropl, ATTR_state, NULL, \"BRS\", EQ);\n#else\n\t\t\t\t\t\t\tadd_atropl(&p_atropl, ATTR_state, NULL, \"RS\", EQ);\n#endif /* localmod 071 */\n\t\t\t\t\t\t\tbreak;\n\n\t\t\t\t\t\tcase 'H':\n\t\t\t\t\t\t\talt_opt |= ALT_DISPLAY_H;\n\t\t\t\t\t\t\tif (strchr(extend, (int) 'x') == NULL)\n\t\t\t\t\t\t\t\tstrcat(extend, \"x\");\n\t\t\t\t\t\t\tadd_atropl(&p_atropl, ATTR_state, NULL, \"MF\", EQ);\n\t\t\t\t\t\t\tbreak;\n\n\t\t\t\t\t\tcase 'u':\n\t\t\t\t\t\t\t/* note - u option is assumed to be last in  */\n\t\t\t\t\t\t\t/* string and all remaining is the name list */\n\t\t\t\t\t\t\talt_opt |= ALT_DISPLAY_u;\n\t\t\t\t\t\t\twhile (*++pc == ' ')\n\t\t\t\t\t\t\t\t;\n\t\t\t\t\t\t\tadd_atropl(&p_atropl, ATTR_u, NULL, pc, EQ);\n\t\t\t\t\t\t\tpc = pc + strlen(pc) - 1; /* for the later incr */\n\t\t\t\t\t\t\tbreak;\n\n\t\t\t\t\t\tcase 'n':\n\t\t\t\t\t\t\talt_opt |= ALT_DISPLAY_n;\n\t\t\t\t\t\t\tbreak;\n\n\t\t\t\t\t\tcase 's':\n\t\t\t\t\t\t\talt_opt |= ALT_DISPLAY_s;\n\t\t\t\t\t\t\tbreak;\n\n\t\t\t\t\t\tcase 'q':\n\t\t\t\t\t\t\talt_opt |= ALT_DISPLAY_q;\n\t\t\t\t\t\t\tmode = QUEUES;\n\t\t\t\t\t\t\tbreak;\n\n\t\t\t\t\t\tcase 'G':\n\t\t\t\t\t\t\talt_opt |= ALT_DISPLAY_G;\n\t\t\t\t\t\t\tbreak;\n\n\t\t\t\t\t\tcase 'M':\n\t\t\t\t\t\t\talt_opt |= ALT_DISPLAY_Mw;\n\t\t\t\t\t\t\tbreak;\n\n\t\t\t\t\t\tcase '1':\n\t\t\t\t\t\t\talt_opt |= ALT_DISPLAY_1l;\n\t\t\t\t\t\t\tbreak;\n\n\t\t\t\t\t\tcase ' ':\n\t\t\t\t\t\t\tbreak; /* ignore blanks 
*/\n\n\t\t\t\t\t\tdefault:\n\t\t\t\t\t\t\terrflg++;\n\t\t\t\t\t}\n\t\t\t\t\t++pc;\n\t\t\t\t}\n#endif /* (TCL_QSTAT == 0) */\n\t\t\t\tbreak;\n\n\t\t\tcase '?':\n\t\t\tdefault:\n\t\t\t\terrflg++;\n\t\t}\n\t}\n\n#if !defined(PBS_NO_POSIX_VIOLATION)\n#ifdef NAS /* localmod 071 */\n#if TCL_QSTAT\n\tif (tcl_opt) {\n\t\tdisplay_attribs = NULL; /* get all attributes */\n\t}\n#endif\n#endif /* localmod 071 */\n\n\t/* certain combinations are not allowed */\n\n#ifdef NAS /* localmod 071 */\n\tif (f_opt == 1 && alt_opt != 0) {\n\t\tfprintf(stderr, conflict);\n\t\terrflg++;\n\t}\n#endif /* localmod 071 */\n\tc = alt_opt & (ALT_DISPLAY_a | ALT_DISPLAY_i | ALT_DISPLAY_r | ALT_DISPLAY_q | ALT_DISPLAY_H);\n\tif ((c != 0) && ((c != ALT_DISPLAY_a) && (c != ALT_DISPLAY_i) &&\n\t\t\t (c != ALT_DISPLAY_r) && (c != ALT_DISPLAY_q) &&\n\t\t\t (c != ALT_DISPLAY_H))) {\n\t\tfprintf(stderr, \"%s\", conflict);\n\t\terrflg++;\n\t}\n\tc = alt_opt & (ALT_DISPLAY_Mw | ALT_DISPLAY_G);\n\tif (c == (ALT_DISPLAY_Mw | ALT_DISPLAY_G)) {\n\t\tfprintf(stderr, \"%s\", conflict);\n\t\terrflg++;\n\t}\n\tif (!(output_format == FORMAT_DEFAULT || f_opt) || (output_format != FORMAT_DEFAULT && alt_opt)) {\n\t\tfprintf(stderr, \"%s\", conflict);\n\t\terrflg++;\n\t}\n#ifndef NAS /* localmod 071 */\n\tif ((alt_opt & ALT_DISPLAY_q) && (f_opt == 1)) {\n\t\tfprintf(stderr, \"%s\", conflict);\n\t\terrflg++;\n\t}\n#endif /* localmod 071 */\n\tif ((alt_opt & ALT_DISPLAY_1l) && !(alt_opt & (ALT_DISPLAY_n | ALT_DISPLAY_s))) {\n\t\tfprintf(stderr, \"%s\", conflict);\n\t\terrflg++;\n\t}\n\tif (wide) {\n\t\tif (output_format != FORMAT_DEFAULT) {\n\t\t\tfprintf(stderr, \"qstat: option w cannot be used with -F\\n\");\n\t\t\terrflg++;\n\t\t}\n\t}\n#endif /* PBS_NO_POSIX_VIOLATION */\n\n\tif (errflg) {\n\t\tstatic char usag2[] = \"qstat --version\\n\";\n\t\tstatic char usage[] = \"usage: \\n\\\nqstat [-f] [-J] [-p] [-t] [-x] [-E] [-F format | -w] [-D delim] [ job_identifier... | destination... 
]\\n\\\nqstat [-a|-i|-r|-H|-T] [-J] [-t] [-u user] [-n] [-s] [-G|-M] [-1] [-w]\\n\\\n\\t[ job_identifier... | destination... ]\\n\\\nqstat -Q [-f] [-F format] [-D delim] [ destination... ]\\n\\\nqstat -q [-G|-M] [ destination... ]\\n\\\nqstat -B [-f] [-F format] [-D delim] [ server_name... ]\\n\";\n\t\tfprintf(stderr, \"%s\", usage);\n\t\tfprintf(stderr, \"%s\", usag2);\n\t\texit(2);\n\t}\n\n\tdef_server = pbs_default();\n\tif (def_server == NULL)\n\t\tdef_server = \"\";\n\n\t/*perform needed security library initializations (including none)*/\n\n\tif (CS_client_init() != CS_SUCCESS)\n\t\texit_qstat(\"unable to initialize security library.\");\n\n\t/* keep original list for reuse with next operand */\n\t/* in case a queue_name is added to front of list */\n\tadded_queue = 0;\n\tnew_atropl = p_atropl;\n\n\tif (output_format == FORMAT_DSV)\n\t\tdelimiter = dsv_delim;\n\telse if (output_format == FORMAT_JSON) {\n\t\tdelimiter = \"\";\n\t\t/* adding prologue to json output. */\n\t\ttimenow = time(0);\n\t\tif ((json_root = pbs_json_create_object()) == NULL)\n\t\t\texit_qstat(\"json error\");\n\t\tif (pbs_json_insert_number(json_root, \"timestamp\", (double) timenow))\n\t\t\texit_qstat(\"json error\");\n\t\tif (pbs_json_insert_string(json_root, \"pbs_version\", PBS_VERSION))\n\t\t\texit_qstat(\"json error\");\n\t\tif (pbs_json_insert_string(json_root, \"pbs_server\", def_server))\n\t\t\texit_qstat(\"json error\");\n\t}\n\n\tif (optind >= argc) { /* If no arguments, then set defaults */\n\t\tswitch (mode) {\n\n\t\t\tcase JOBS:\n\t\t\t\tserver_out[0] = '@';\n\t\t\t\tpbs_strncpy(&server_out[1], def_server, sizeof(server_out) - 1);\n\t\t\t\ttcl_addarg(ops, server_out);\n\n\t\t\t\tjob_id_out[0] = '\\0';\n\t\t\t\tserver_out[0] = '\\0';\n\t\t\t\tgoto job_no_args;\n\t\t\tcase QUEUES:\n\t\t\t\tserver_out[0] = '@';\n\t\t\t\tpbs_strncpy(&server_out[1], def_server, sizeof(server_out) - 1);\n\t\t\t\ttcl_addarg(ops, server_out);\n\n\t\t\t\tqueue_name_out = NULL;\n\t\t\t\tserver_out[0] 
= '\\0';\n\t\t\t\tgoto que_no_args;\n\t\t\tcase SERVERS:\n\t\t\t\ttcl_addarg(ops, def_server);\n\n\t\t\t\tserver_out[0] = '\\0';\n\t\t\t\tgoto svr_no_args;\n\t\t}\n\t}\n\tif (E_opt == 1 && mode == JOBS) {\n\t\t/* allocate enough memory to store list of job ids */\n\t\tjob_list_size = ((argc - 1) * (PBS_MAXCLTJOBID + 1));\n\t\tjob_list = calloc(argc - 1, PBS_MAXCLTJOBID + 1);\n\t\tif (job_list == NULL)\n\t\t\texit_qstat(\"out of memory\");\n\t\t/* sort all jobs */\n\t\tqsort(&argv[optind], (argc - optind), sizeof(char *), cmp_jobs);\n\t}\n\tfor (; optind < argc; optind++) {\n\n\t\tlocated = FALSE;\n\n\t\tpbs_strncpy(operand, argv[optind], sizeof(operand));\n\t\ttcl_addarg(ops, operand);\n\n\t\tswitch (mode) {\n\n\t\t\tcase JOBS:\t\t\t    /* get status of batch jobs */\n\t\t\t\tif (pbs_isjobid(operand)) { /* must be a job-id */\n\t\t\t\t\tstat_single_job = 1;\n\t\t\t\t\tpbs_strncpy(job_id, operand, sizeof(job_id));\n\t\t\t\t\tif (get_server(job_id, job_id_out, server_out)) {\n\t\t\t\t\t\tfprintf(stderr, \"qstat: illegally formed job identifier: %s\\n\", job_id);\n#ifdef NAS /* localmod 071 */\n\t\t\t\t\t\t(void) tcl_stat(error, NULL, tcl_opt);\n#else\n\t\t\t\t\t\t(void) tcl_stat(error, NULL, f_opt);\n#endif /* localmod 071 */\n\t\t\t\t\t\tany_failed = 1;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\tif (E_opt == 1) {\n\t\t\t\t\t\t/* Local Server */\n\t\t\t\t\t\tif (server_out[0] == '\\0' || (strcmp(server_out, def_server) == 0)) {\n\t\t\t\t\t\t\t/* This is probably the first job id requested from primary server */\n\t\t\t\t\t\t\tif (prev_server[0] == '\\0')\n\t\t\t\t\t\t\t\tpbs_strncpy(prev_server, def_server, sizeof(prev_server));\n\t\t\t\t\t\t\tstrncat(job_list, job_id_out, job_list_size - strlen(job_list));\n\t\t\t\t\t\t\tstrncat(job_list, \",\", job_list_size - strlen(job_list));\n\t\t\t\t\t\t\tif (optind != argc - 1)\n\t\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t\telse {\n\t\t\t\t\t\t\t\tfree(query_job_list);\n\t\t\t\t\t\t\t\tquery_job_list = 
strdup(job_list);\n\t\t\t\t\t\t\t\tjob_list[0] = '\\0';\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t/* Remote server but jobs in continuation */\n\t\t\t\t\t\t\tif ((prev_server[0] == '\\0') || (strcmp(server_out, prev_server) == 0)) {\n\t\t\t\t\t\t\t\t/* This is probably the first job id requested and not from primary server */\n\t\t\t\t\t\t\t\tif (prev_server[0] == '\\0')\n\t\t\t\t\t\t\t\t\tpbs_strncpy(prev_server, server_out, sizeof(prev_server));\n\t\t\t\t\t\t\t\tstrncat(job_list, job_id_out, job_list_size - strlen(job_list));\n\t\t\t\t\t\t\t\tstrncat(job_list, \",\", job_list_size - strlen(job_list));\n\t\t\t\t\t\t\t\tif (optind != argc - 1)\n\t\t\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t\t\telse {\n\t\t\t\t\t\t\t\t\t/* It's a new remote server and the only job */\n\t\t\t\t\t\t\t\t\tnew_remote_server = 1;\n\t\t\t\t\t\t\t\t\tfree(query_job_list);\n\t\t\t\t\t\t\t\t\tquery_job_list = strdup(job_list);\n\t\t\t\t\t\t\t\t\tjob_list[0] = '\\0';\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\t/* A new remote server */\n\t\t\t\t\t\t\t\tnew_remote_server = 1;\n\t\t\t\t\t\t\t\tfree(query_job_list);\n\t\t\t\t\t\t\t\tquery_job_list = strdup(job_list);\n\t\t\t\t\t\t\t\tsnprintf(job_list, job_list_size, \"%s,\", job_id_out);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t} else { /* must be a destination-id */\n\t\t\t\t\tif (E_opt == 1) {\n\t\t\t\t\t\tfprintf(stderr, \"qstat: Express option can only be used with job ids\\n\");\n\t\t\t\t\t\treturn 1;\n\t\t\t\t\t}\n\t\t\t\t\tstat_single_job = 0;\n\t\t\t\t\tpbs_strncpy(destination, operand, sizeof(destination));\n\t\t\t\t\tif (parse_destination_id(destination,\n\t\t\t\t\t\t\t\t &queue_name_out,\n\t\t\t\t\t\t\t\t &server_name_out)) {\n\t\t\t\t\t\tfprintf(stderr, \"qstat: illegally formed destination: %s\\n\", destination);\n#ifdef NAS /* localmod 071 */\n\t\t\t\t\t\t(void) tcl_stat(error, NULL, tcl_opt);\n#else\n\t\t\t\t\t\t(void) tcl_stat(error, NULL, f_opt);\n#endif /* localmod 071 
*/\n\t\t\t\t\t\tany_failed = 1;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tif (notNULL(server_name_out)) {\n\t\t\t\t\t\t\tpbs_strncpy(server_out, server_name_out, sizeof(server_out));\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tserver_out[0] = '\\0';\n\t\t\t\t\t\t}\n\t\t\t\t\t\tpbs_strncpy(job_id_out, queue_name_out, sizeof(job_id_out));\n\t\t\t\t\t\tif (*queue_name_out != '\\0') {\n\t\t\t\t\t\t\t/* add \"destination\" to front of list */\n\t\t\t\t\t\t\tadd_atropl(&new_atropl, ATTR_q, NULL, queue_name_out, EQ);\n\t\t\t\t\t\t\tadded_queue = 1;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\tjob_no_args:\n\t\t\t\t/* We could have been sent here after p_server was set. Free it. */\n\t\t\t\tpbs_statfree(p_server);\n\t\t\t\tp_server = NULL;\n\t\t\t\tif (E_opt == 1)\n\t\t\t\t\tconn = cnt2server(prev_server);\n\t\t\t\telse\n\t\t\t\t\tconn = cnt2server(server_out);\n\n\t\t\t\tif (conn <= 0) {\n\t\t\t\t\tfprintf(stderr, \"qstat: cannot connect to server %s (errno=%d)\\n\",\n\t\t\t\t\t\tdef_server, pbs_errno);\n#ifdef NAS /* localmod 071 */\n\t\t\t\t\t(void) tcl_stat(error, NULL, tcl_opt);\n#else\n\t\t\t\t\t(void) tcl_stat(error, NULL, f_opt);\n#endif /* localmod 071 */\n\t\t\t\t\tany_failed = conn;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\n\t\t\t\tif (strcmp(pbs_server, server_old) != 0) {\n\t\t\t\t\t/* changing to a different server */\n\t\t\t\t\tp_server = pbs_statserver(conn, NULL, NULL);\n#ifdef NAS /* localmod 071 */\n\t\t\t\t\tp_rsvstat = pbs_statresv(conn, NULL, NULL, NULL);\n#endif /* localmod 071 */\n\t\t\t\t\tpbs_strncpy(server_old, pbs_server, sizeof(server_old));\n\t\t\t\t} else {\n\t\t\t\t\tp_server = NULL;\n\t\t\t\t}\n\n\t\t\t\tif (p_server == NULL && pbs_errno != PBSE_NONE) {\n\t\t\t\t\tany_failed = pbs_errno;\n\t\t\t\t\tif ((errmsg = pbs_geterrmsg(conn)) != NULL)\n\t\t\t\t\t\tfprintf(stderr, \"qstat: %s\\n\", errmsg);\n\t\t\t\t\telse\n\t\t\t\t\t\tfprintf(stderr, \"qstat: Error %d\\n\", pbs_errno);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\n\t\t\t\t/* check the server attribute 
max_job_sequence_id value */\n\t\t\t\tif (p_server != NULL) {\n\t\t\t\t\tint check_seqid_len; /* for dynamic qstat width */\n\t\t\t\t\tcheck_seqid_len = check_max_job_sequence_id(p_server);\n\t\t\t\t\tif (check_seqid_len == 1) {\n\t\t\t\t\t\thow_opt |= ALT_DISPLAY_INCR_WIDTH; /* increase column width */\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif ((stat_single_job == 1) || (new_atropl == 0)) {\n\t\t\t\t\tif (E_opt == 1)\n\t\t\t\t\t\tp_status = pbs_statjob(conn, query_job_list, display_attribs, extend);\n\t\t\t\t\telse\n\t\t\t\t\t\tp_status = pbs_statjob(conn, job_id_out, display_attribs, extend);\n\t\t\t\t} else {\n\t\t\t\t\tp_status = pbs_selstat(conn, new_atropl, NULL, extend);\n\t\t\t\t}\n\n\t\t\t\tif (added_queue) {\n\t\t\t\t\t/* added queue name as first entry in atropl list,  */\n\t\t\t\t\t/* remove it and reset list pointer to base list    */\n\t\t\t\t\tfree(new_atropl);\n\t\t\t\t\tnew_atropl = p_atropl;\n\t\t\t\t\tadded_queue = 0;\n\t\t\t\t}\n\t\t\t\tif (p_status == NULL) {\n\t\t\t\t\tif ((pbs_errno == PBSE_UNKJOBID) && !located) {\n\t\t\t\t\t\tlocated = TRUE;\n\t\t\t\t\t\tif (locate_job(job_id_out, server_out, rmt_server)) {\n\t\t\t\t\t\t\tpbs_disconnect(conn);\n\t\t\t\t\t\t\tstrcpy(server_out, rmt_server);\n\t\t\t\t\t\t\tgoto job_no_args;\n\t\t\t\t\t\t}\n#ifdef NAS /* localmod 071 */\n\t\t\t\t\t\t(void) tcl_stat(\"job\", NULL, tcl_opt);\n#else\n\t\t\t\t\t\t(void) tcl_stat(\"job\", NULL, f_opt);\n#endif /* localmod 071 */\n\t\t\t\t\t\tif (pbs_errno != PBSE_HISTJOBID) {\n\t\t\t\t\t\t\tprt_job_err(\"qstat\", conn, job_id_out);\n\t\t\t\t\t\t\tany_failed = pbs_errno;\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n#ifdef NAS /* localmod 071 */\n\t\t\t\t\t\tif (p_server) {\n\t\t\t\t\t\t\ttcl_stat(\"serverhdr\", p_server, tcl_opt);\n\t\t\t\t\t\t\ttcl_stat(\"resv\", p_rsvstat, tcl_opt);\n\t\t\t\t\t\t}\n\t\t\t\t\t\t(void) tcl_stat(\"job\", NULL, tcl_opt);\n#else\n\t\t\t\t\t\t(void) tcl_stat(\"job\", NULL, f_opt);\n#endif /* localmod 071 */\n\t\t\t\t\t\tif (pbs_errno != PBSE_NONE && 
pbs_errno != PBSE_HISTJOBID) {\n\t\t\t\t\t\t\tif (pbs_errno == PBSE_ATTRRO && alt_opt & ALT_DISPLAY_T)\n\t\t\t\t\t\t\t\tfprintf(stderr, \"qstat: -T option is unavailable.\\n\");\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\tprt_job_err(\"qstat\", conn, job_id_out);\n\t\t\t\t\t\t\tany_failed = pbs_errno;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\t/*\n\t\t\t\t\t * If it is qstat command and error is PBSE_HISTJOBID, then\n\t\t\t\t\t * handle it separately without using prt_job_err() API as we\n\t\t\t\t\t * are adding some extra message.\n\t\t\t\t\t */\n\t\t\t\t\tif (pbs_errno == PBSE_HISTJOBID) {\n\t\t\t\t\t\terrmsg = pbs_geterrmsg(conn);\n\t\t\t\t\t\tif (errmsg) {\n\t\t\t\t\t\t\tfprintf(stderr,\n\t\t\t\t\t\t\t\t\"qstat: %s %s, use -x or -H to obtain historical job information\\n\",\n\t\t\t\t\t\t\t\tjob_id_out, errmsg);\n\t\t\t\t\t\t}\n\t\t\t\t\t\tany_failed = pbs_errno;\n\t\t\t\t\t}\n\t\t\t\t} else {\n\n#ifdef NAS /* localmod 071 */\n\t\t\t\t\tif (p_server) {\n\t\t\t\t\t\ttcl_stat(\"serverhdr\", p_server, tcl_opt);\n\t\t\t\t\t\ttcl_stat(\"resv\", p_rsvstat, tcl_opt);\n\t\t\t\t\t}\n\t\t\t\t\tif (tcl_stat(\"job\", p_status, tcl_opt)) {\n\t\t\t\t\t\tif (alt_opt != 0) {\n\t\t\t\t\t\t\taltdsp_statjob(p_status, p_server, alt_opt, wide, how_opt);\n\t\t\t\t\t\t} else if (display_statjob(p_status, p_server, f_opt, how_opt))\n\t\t\t\t\t\t\texit_qstat(\"out of memory\");\n\t\t\t\t\t}\n#else\n\n\t\t\t\t\tif ((alt_opt & ~ALT_DISPLAY_w) != 0 && !(wide && f_opt)) {\n\t\t\t\t\t\taltdsp_statjob(p_status, p_server, alt_opt, wide, how_opt);\n\t\t\t\t\t} else if (f_opt == 0 || tcl_stat(\"job\", p_status, f_opt))\n\t\t\t\t\t\tif (display_statjob(p_status, p_server, f_opt, how_opt, alt_opt, wide))\n\t\t\t\t\t\t\texit_qstat(\"out of memory\");\n#endif /* localmod 071 */\n\t\t\t\t\tp_header = FALSE;\n\t\t\t\t\tpbs_statfree(p_status);\n\t\t\t\t}\n\t\t\t\tpbs_statfree(p_server);\n\t\t\t\tp_server = NULL;\n\t\t\t\tpbs_disconnect(conn);\n\t\t\t\tif (E_opt == 1) 
{\n\t\t\t\t\tfree(query_job_list);\n\t\t\t\t\tquery_job_list = NULL;\n\t\t\t\t\tif (new_remote_server == 1) {\n\t\t\t\t\t\t/* If there is a new remote server\n\t\t\t\t\t\t* then update the prev_server to new server\n\t\t\t\t\t\t*/\n\t\t\t\t\t\tstrcpy(prev_server, server_out);\n\t\t\t\t\t\tnew_remote_server = 0;\n\t\t\t\t\t\t/* If we are at the end of the loop then\n\t\t\t\t\t\t* query jobs one more time if and only if\n\t\t\t\t\t\t* there are jobs present in job_list\n\t\t\t\t\t\t*/\n\t\t\t\t\t\tif (optind == argc - 1) {\n\t\t\t\t\t\t\tif (query_job_list != NULL) {\n\t\t\t\t\t\t\t\tfree(query_job_list);\n\t\t\t\t\t\t\t\tquery_job_list = NULL;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif (job_list[0] != '\\0') {\n\t\t\t\t\t\t\t\tquery_job_list = strdup(job_list);\n\t\t\t\t\t\t\t\toptind++;\n\t\t\t\t\t\t\t\tgoto job_no_args;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase QUEUES: /* get status of batch queues */\n\t\t\t\tpbs_strncpy(destination, operand, sizeof(destination));\n\t\t\t\tif (parse_destination_id(destination,\n\t\t\t\t\t\t\t &queue_name_out,\n\t\t\t\t\t\t\t &server_name_out)) {\n\t\t\t\t\tfprintf(stderr, \"qstat: illegal 'destination' value\\n\");\n#ifdef NAS /* localmod 071 */\n\t\t\t\t\t(void) tcl_stat(error, NULL, tcl_opt);\n#else\n\t\t\t\t\t(void) tcl_stat(error, NULL, f_opt);\n#endif /* localmod 071 */\n\t\t\t\t\tany_failed = 1;\n\t\t\t\t\tbreak;\n\t\t\t\t} else {\n\t\t\t\t\tif (notNULL(server_name_out)) {\n\t\t\t\t\t\tstrcpy(server_out, server_name_out);\n\t\t\t\t\t} else\n\t\t\t\t\t\tserver_out[0] = '\\0';\n\t\t\t\t}\n\t\t\tque_no_args:\n\t\t\t\tconn = cnt2server(server_out);\n\t\t\t\tif (conn <= 0) {\n\t\t\t\t\tfprintf(stderr, \"qstat: cannot connect to server %s (errno=%d)\\n\", def_server, pbs_errno);\n#ifdef NAS /* localmod 071 */\n\t\t\t\t\t(void) tcl_stat(error, NULL, tcl_opt);\n#else\n\t\t\t\t\t(void) tcl_stat(error, NULL, f_opt);\n#endif /* localmod 071 */\n\t\t\t\t\tany_failed = 
conn;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\n\t\t\t\tp_status = pbs_statque(conn, queue_name_out, NULL, NULL);\n\t\t\t\tif (p_status == NULL) {\n\t\t\t\t\tif (pbs_errno) {\n\t\t\t\t\t\terrmsg = pbs_geterrmsg(conn);\n\t\t\t\t\t\tif (errmsg != NULL) {\n\t\t\t\t\t\t\tfprintf(stderr, \"qstat: %s \", errmsg);\n\t\t\t\t\t\t} else\n\t\t\t\t\t\t\tfprintf(stderr, \"qstat: Error (%d) getting status of queue \", pbs_errno);\n\t\t\t\t\t\tfprintf(stderr, \"%s\\n\", queue_name_out);\n#ifdef NAS /* localmod 071 */\n\t\t\t\t\t\t(void) tcl_stat(error, NULL, tcl_opt);\n#else\n\t\t\t\t\t\t(void) tcl_stat(error, NULL, f_opt);\n#endif /* localmod 071 */\n\t\t\t\t\t\tany_failed = pbs_errno;\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tif (alt_opt & ALT_DISPLAY_q) {\n\t\t\t\t\t\taltdsp_statque(pbs_server, p_status, alt_opt);\n#ifdef NAS /* localmod 071 */\n\t\t\t\t\t} else if (tcl_stat(\"queue\", p_status, tcl_opt)) {\n#else\n\t\t\t\t\t} else if (tcl_stat(\"queue\", p_status, f_opt)) {\n#endif /* localmod 071 */\n\t\t\t\t\t\tif (display_statque(p_status, p_header, f_opt, alt_opt))\n\t\t\t\t\t\t\texit_qstat(\"out of memory\");\n\t\t\t\t\t}\n\t\t\t\t\tp_header = FALSE;\n\t\t\t\t\tpbs_statfree(p_status);\n\t\t\t\t}\n\t\t\t\tpbs_disconnect(conn);\n\t\t\t\tbreak;\n\n\t\t\tcase SERVERS: /* get status of batch servers */\n\t\t\t\tpbs_strncpy(server_out, operand, sizeof(server_out));\n\t\t\tsvr_no_args:\n\t\t\t\tconn = cnt2server(server_out);\n\t\t\t\tif (conn <= 0) {\n\t\t\t\t\tfprintf(stderr, \"qstat: cannot connect to server %s (errno=%d)\\n\",\n\t\t\t\t\t\tdef_server, pbs_errno);\n#ifdef NAS /* localmod 071 */\n\t\t\t\t\t(void) tcl_stat(error, NULL, tcl_opt);\n#else\n\t\t\t\t\t(void) tcl_stat(error, NULL, f_opt);\n#endif /* localmod 071 */\n\t\t\t\t\tany_failed = conn;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\n\t\t\t\tp_status = pbs_statserver(conn, NULL, NULL);\n\t\t\t\tif (p_status == NULL) {\n\t\t\t\t\tif (pbs_errno) {\n\t\t\t\t\t\terrmsg = pbs_geterrmsg(conn);\n\t\t\t\t\t\tif (errmsg != NULL) 
{\n\t\t\t\t\t\t\tfprintf(stderr, \"qstat: %s \", errmsg);\n\t\t\t\t\t\t} else\n\t\t\t\t\t\t\tfprintf(stderr, \"qstat: Error (%d) getting status of server \", pbs_errno);\n\t\t\t\t\t\tfprintf(stderr, \"%s\\n\", server_out);\n#ifdef NAS /* localmod 071 */\n\t\t\t\t\t\t(void) tcl_stat(error, NULL, tcl_opt);\n#else\n\t\t\t\t\t\t(void) tcl_stat(error, NULL, f_opt);\n#endif /* localmod 071 */\n\t\t\t\t\t\tany_failed = pbs_errno;\n\t\t\t\t\t}\n\t\t\t\t} else {\n#ifdef NAS /* localmod 071 */\n\t\t\t\t\tif (tcl_stat(\"server\", p_status, tcl_opt))\n#else\n\t\t\t\t\tif (tcl_stat(\"server\", p_status, f_opt))\n#endif /* localmod 071 */\n\t\t\t\t\t\tif (display_statserver(p_status, p_header, f_opt, alt_opt))\n\t\t\t\t\t\t\texit_qstat(\"out of memory\");\n\t\t\t\t\tp_header = FALSE;\n\t\t\t\t\tpbs_statfree(p_status);\n\t\t\t\t}\n\t\t\t\tpbs_disconnect(conn);\n\t\t\t\tbreak;\n\n\t\t} /* switch */\n\n\t\tif (any_failed == PBSE_PERM)\n\t\t\tbreak;\n\t}\n\tif (output_format == FORMAT_JSON) {\n\t\tif (pbs_json_print(json_root, stdout))\n\t\t\tfprintf(stderr, \"json error\\n\");\n\t\tpbs_json_delete(json_root);\n\t}\n#ifdef NAS /* localmod 071 */\n\ttcl_run(tcl_opt);\n#else\n\ttcl_run(f_opt);\n#endif /* localmod 071 */\n\tif (E_opt == 1) {\n\t\tif (query_job_list != NULL)\n\t\t\tfree(query_job_list);\n\t\tfree(job_list);\n\t}\n\t/*cleanup security library initializations before exiting*/\n\tCS_close_app();\n\n\t/*\n\t * If the server is not configured for history jobs, i.e. the job_history_enable\n\t * server attribute is unset or set to FALSE, and the qstat command is being\n\t * used with the -x/-H option, then pbs_selstat()/pbs_statjob() will return the\n\t * PBSE_JOBHISTNOTSET error code. The command will still exit with exit code '0'\n\t * after printing the corresponding error message, i.e. 
\"job_history_enable is set to false\".\n\t */\n\tif (any_failed == PBSE_JOBHISTNOTSET)\n\t\tany_failed = 0;\n\texit(any_failed);\n}\n\n/**\n * @brief\n *\tcvtResvstate - converts a job reservation \"state code\" to\n *\tdescriptive text string\n *\n * @param[out] pcode - descriptive text string\n *\n * @return\tstring\n * @retval\tPointer to the descriptive text string\n * @retval\tThe input pointer if conversion fails\n */\n\nstatic char *\ncvtResvstate(char *pcode)\n{\n\tint i;\n\tstatic char *resvStrings[] = {\"RESV_NONE\", \"RESV_UNCONFIRMED\",\n\t\t\t\t      \"RESV_CONFIRMED\", \"RESV_WAIT\",\n\t\t\t\t      \"RESV_TIME_TO_RUN\", \"RESV_RUNNING\",\n\t\t\t\t      \"RESV_FINISHED\", \"RESV_BEING_DELETED\",\n\t\t\t\t      \"RESV_DELETED\", \"RESV_DELETING_JOBS\",\n\t\t\t\t      \"RESV_BEING_ALTERED\"};\n\n\t/*Remark: the static buffer below is used to get around a problem with\n\t *\tthe linux strtok() library function.  There is a \"bugs\" comment\n\t *\ton the man page for strtok() that mentions that the function can't\n\t *\tbe called on constant strings.  I presume that this is what is\n\t *\tcausing strtok in prt_attr() to fail when fed the return value\n\t *\tresvStrings[i], as we had been doing.  So, we return acopy instead\n\t */\n\tstatic char acopy[25];\n\n\tswitch (i = atoi(pcode)) {\n\t\tcase RESV_NONE:\n\t\tcase RESV_UNCONFIRMED:\n\t\tcase RESV_CONFIRMED:\n\t\tcase RESV_WAIT:\n\t\tcase RESV_TIME_TO_RUN:\n\t\tcase RESV_RUNNING:\n\t\tcase RESV_FINISHED:\n\t\tcase RESV_BEING_DELETED:\n\t\tcase RESV_DELETED:\n\t\tcase RESV_DELETING_JOBS:\n\t\tcase RESV_BEING_ALTERED:\n\t\t\tpbs_strncpy(acopy, resvStrings[i], sizeof(acopy));\n\t\t\treturn acopy;\n\n\t\tdefault:\n\t\t\treturn pcode;\n\t}\n}\n\n/*!\n *\tcmp_est_time -  compare function used with bs_isort\n *\t\t        compares based on:\n *\t\t\t1. stime\n *\t\t\t2. if estimated.start_time < now sort to bottom\n *\t\t\t3. 
estimated.start_time\n *\n *\t\\param a\n *\t\\param b\n *\n *\t\\return -1: a < b\n *\t\\return  0: a == b\n *\t\\return  1: a > b\n *\n */\nstatic int\ncmp_est_time(struct batch_status *a, struct batch_status *b)\n{\n\tchar *attrval;\n\ttime_t est_a = -1;\n\ttime_t est_b = -1;\n\ttime_t stime_a = -1;\n\ttime_t stime_b = -1;\n\ttime_t now;\n\n\tattrval = get_attr(a->attribs, ATTR_estimated, \"start_time\");\n\tif (attrval != NULL)\n\t\test_a = atol(attrval);\n\n\tattrval = get_attr(b->attribs, ATTR_estimated, \"start_time\");\n\tif (attrval != NULL)\n\t\test_b = atol(attrval);\n\n\tattrval = get_attr(a->attribs, ATTR_stime, NULL);\n\tif (attrval != NULL)\n\t\tstime_a = atol(attrval);\n\n\tattrval = get_attr(b->attribs, ATTR_stime, NULL);\n\tif (attrval != NULL)\n\t\tstime_b = atol(attrval);\n\n\t/* sort running jobs first by stime */\n\tif (stime_a >= 0 || stime_b >= 0) {\n\t\tif (stime_a == -1 && stime_b >= 0)\n\t\t\treturn 1;\n\t\telse if (stime_a >= 0 && stime_b == -1)\n\t\t\treturn -1;\n\t\telse if (stime_a < stime_b)\n\t\t\treturn -1;\n\t\telse if (stime_a > stime_b)\n\t\t\treturn 1;\n\t\telse\n\t\t\treturn 0;\n\t}\n\n\tif (est_a == est_b)\n\t\treturn 0;\n\n\ttime(&now);\n\t/* if estimated start time is before now, sort to bottom */\n\tif (est_a == -1 || est_a < now)\n\t\treturn 1;\n\n\tif (est_b == -1 || est_b < now)\n\t\treturn -1;\n\n\tif (est_a < est_b)\n\t\treturn -1;\n\n\tif (est_a > est_b)\n\t\treturn 1;\n\n\treturn 0;\n}\n\n#define SEC_IN_WEEK 604800\n/*!\n *\tcnvt_est_start_time - convert estimated start time to a time form\n *\t\t\t      either a short form of 5 characters or the\n *\t\t\t      wider form of convert_time().\n *\n *\t\\param est_time - the string value of estimated.start time\n *\t\t\t\tex: \"1247683654\"\n *\t\\param wide\t- 1 if we should use the wide format 0 if not\n *\n *\t\\return converted time string\n *\t\\return \"--\" if estimated.start_time  < now or est_time == NULL\n *\t\\return \"?\" if estimated.start_time == 0\n *\n 
*/\nchar *\ncnvt_est_start_time(char *est_time, int wide)\n{\n\ttime_t t;\n\ttime_t start_time;\n\tchar buf[16];\n\tchar buf2[16];\n\tstatic char timebuf[32];\n\tstruct tm *tmptr;\n\tstruct tm nowtm;\n\tstruct tm esttm;\n\n\tif (est_time == NULL)\n\t\treturn \"--\";\n\n\tstart_time = atol(est_time);\n\n\t/* special case: unknown estimated start time: return \"?\" */\n\tif (start_time == 0)\n\t\treturn \"?\";\n\n\ttime(&t);\n\ttmptr = localtime(&t);\n\tif (tmptr != NULL) {\n\t\tnowtm = *tmptr;\n\t\ttmptr = localtime(&start_time);\n\t\tif (tmptr != NULL)\n\t\t\testtm = *tmptr;\n\t\telse\n\t\t\treturn \"--\";\n\t} else\n\t\treturn \"--\";\n\n\t/* special case: estimated start time before now: return \"--\" */\n\tif (start_time < t)\n\t\treturn \"--\";\n\n\tif (wide)\n\t\treturn convert_time(est_time);\n\n\t/* within the current day: HH:MM */\n\tif (nowtm.tm_year == esttm.tm_year && nowtm.tm_yday == esttm.tm_yday) {\n\t\tstrftime(timebuf, 32, \"%H:%M\", &esttm);\n\t} /* within 7 days of now */\n\telse if ((start_time - t) < SEC_IN_WEEK) {\n\t\tstrftime(buf, 16, \"%a\", &esttm);\n\t\tstrftime(buf2, 16, \"%H\", &esttm);\n\t\tsnprintf(timebuf, 32, \"%2.2s %s\", buf, buf2);\n\t} /* within the current year: short form of the month */\n\telse if (nowtm.tm_year == esttm.tm_year) {\n\t\tstrftime(timebuf, 32, \"%b\", &esttm);\n\t\t/* after the current year: the 4 digit year */\n\t} else if (esttm.tm_year - nowtm.tm_year < 5) {\n\t\tstrftime(timebuf, 32, \"%Y\", &esttm);\n\t} else { /* after 5 years, print \">5yrs\" */\n\t\tstrcpy(timebuf, \">5yrs\");\n\t}\n\n\treturn timebuf;\n}\n"
  },
  {
    "path": "src/cmds/qstop.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tqstop.c\n * @brief\n *  The qstop command directs that a destination should stop scheduling\n *  or routing batch jobs.\n *\n * @par\tSynopsis:\n *  qstop destination ...\n *\n * @par Arguments:\n *  destination ...\n *      A list of destinations.  A destination has one of the following\n *      three forms:\n *          queue\n *          @server\n *          queue@server\n *      If queue is specified, the request is to stop the queue at\n *      the default server.  If @server is given, the request is to\n *      stop all queues at the server.  If queue@server is used,\n *      the request is to stop the named queue at the named server.\n *\n *  @author\tBruce Kelly\n *  \t\tNational Energy Research Supercomputer Center, Livermore, CA\n *  \t\tMay, 1993\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"cmds.h\"\n#include <pbs_version.h>\n\nint exitstatus = 0; /* Exit Status */\nstatic void execute(char *, char *);\n\nint\nmain(int argc, char **argv)\n{\n\t/*\n\t *  This routine sends a Manage request to the batch server specified by\n\t * the destination.  The STARTED queue attribute is set to {False}.  
If the\n\t * batch request is accepted, the server will stop scheduling or routing\n\t * requests for the specified queue.\n\t */\n\n\tint dest;     /* Index into the destination array (argv) */\n\tchar *queue;  /* Queue name part of destination */\n\tchar *server; /* Server name part of destination */\n\n\t/*test for real deal or just version and exit*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\tif (argc == 1) {\n\t\tfprintf(stderr, \"Usage: qstop [queue][@server] ...\\n\");\n\t\tfprintf(stderr, \"       qstop --version\\n\");\n\t\texit(1);\n\t}\n\n\t/*perform needed security library initializations (including none)*/\n\n\tif (CS_client_init() != CS_SUCCESS) {\n\t\tfprintf(stderr, \"qstop: unable to initialize security library.\\n\");\n\t\texit(1);\n\t}\n\n\tfor (dest = 1; dest < argc; dest++)\n\t\tif (parse_destination_id(argv[dest], &queue, &server) == 0)\n\t\t\texecute(queue, server);\n\t\telse {\n\t\t\tfprintf(stderr, \"qstop: illegally formed destination: %s\\n\",\n\t\t\t\targv[dest]);\n\t\t\texitstatus = 1;\n\t\t}\n\n\t/*cleanup security library initializations before exiting*/\n\tCS_close_app();\n\n\texit(exitstatus);\n}\n\n/**\n * @brief\n *\tdisables a destination (queue)\n *\n * @param queue - The name of the queue to disable.\n * @param server - The name of the server that manages the queue.\n *\n * @return - Void\n *\n * @File Variables:\n * exitstatus  Set to two if an error occurs.\n *\n */\nstatic void\nexecute(char *queue, char *server)\n{\n\tint ct;\t      /* Connection to the server */\n\tint merr;     /* Error return from pbs_manager */\n\tchar *errmsg; /* Error message from pbs_manager */\n\t/* The disable request */\n\tstatic struct attropl attr = {NULL, \"started\", NULL, \"FALSE\", SET};\n\n\tif ((ct = cnt2server(server)) > 0) {\n\t\tmerr = pbs_manager(ct, MGR_CMD_SET, MGR_OBJ_QUEUE, queue, &attr, NULL);\n\t\tif (merr != 0) {\n\t\t\terrmsg = pbs_geterrmsg(ct);\n\t\t\tif (errmsg != NULL) 
{\n\t\t\t\tfprintf(stderr, \"qstop: %s \", errmsg);\n\t\t\t} else {\n\t\t\t\tfprintf(stderr, \"qstop: Error (%d) disabling queue \", pbs_errno);\n\t\t\t}\n\t\t\tif (notNULL(queue))\n\t\t\t\tfprintf(stderr, \"%s\", queue);\n\t\t\tif (notNULL(server))\n\t\t\t\tfprintf(stderr, \"@%s\", server);\n\t\t\tfprintf(stderr, \"\\n\");\n\t\t\texitstatus = 2;\n\t\t}\n\t\tpbs_disconnect(ct);\n\t} else {\n\t\tfprintf(stderr, \"qstop: could not connect to server %s (%d)\\n\", server, pbs_errno);\n\t\texitstatus = 2;\n\t}\n}\n"
  },
  {
    "path": "src/cmds/qsub.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tqsub.c\n * @brief\n *\tqsub - (PBS) submit batch job\n *\n * @author Terry Heidelberg\n *         Livermore Computing\n *\n * @author Bruce Kelly\n *         National Energy Research Supercomputer Center\n *\n * @author Lawrence Livermore National Laboratory\n *         University of California\n */\n\n/**\n * @file    qsub.c\n *\n * @brief\n * qsub now has two components:\n * A foreground process and a background process.\n * - The background process is loaded initially (per user, per target server)\n *   by the foreground.\n * - The background process reuses an authenticated server connection.\n * - The foreground process sends job information to the background process\n *   which in turn communicates over the already established connection to the\n *   server. 
It returns back any jobid or error string (and code) to the\n *   foreground process.\n * - The background process quits silently if:\n *    a) The connection to the server is lost\n *    b) There are no requests sent to it for the last 1 minute.\n *\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include <pbs_version.h>\n\n#include <sys/types.h>\n#include <sys/socket.h>\n#include <sys/stat.h>\n#include <sys/time.h>\n#include <sys/utsname.h>\n#include <sys/wait.h>\n#include <netinet/in.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <netdb.h>\n#include <signal.h>\n#include <termios.h>\n#include <assert.h>\n#include <sys/un.h>\n#include <syslog.h>\n#include <unistd.h>\n#include \"pbs_ifl.h\"\n#include \"cmds.h\"\n#include \"libpbs.h\"\n#include \"net_connect.h\"\n#include \"dis.h\"\n#include \"port_forwarding.h\"\n#include \"credential.h\"\n#include \"ticket.h\"\n#include \"portability.h\"\n\n#ifdef LOG_BUF_SIZE\n/* Also defined in port_forwarding.h */\n#undef LOG_BUF_SIZE\n#endif\n\n#define LOG_BUF_SIZE 1024\n#define ENV_PBS_JOBID \"PBS_JOBID\"\n#define CMDLINE 3\n\n#undef DEBUG\n#undef DBPRT\n#ifdef DEBUG\n#define DBPRT(x) printf x;\n#else\n#define DBPRT(x)\n#endif\n\n#if defined(HAVE_SYS_IOCTL_H)\n#include <sys/ioctl.h>\n#endif /* HAVE_SYS_IOCTL_H */\n\n#if defined(FD_SET_IN_SYS_SELECT_H)\n#include <sys/select.h>\n#endif\n\n/*\n * For the purpose of unit testing, qsub is capable of printing a backtrace\n * when it exits with a non-zero status. This behavior is limited to systems\n * that support the backtrace(3) library function. To enable this behavior,\n * define the BACKTRACE_SIZE macro immediately following this comment. The\n * value assigned to BACKTRACE_SIZE is an integer that defines the maximum\n * depth of the backtrace. 
For example:\n * #define BACKTRACE_SIZE 100\n */\n#ifdef BACKTRACE_SIZE\n#include <execinfo.h>\n#endif\n\n#define MAX_QSUB_PREFIX_LEN 32\n#define DMN_REFUSE_EXIT 7 /* return code when daemon can't serve a job and exits */\n\nextern char *msg_force_qsub_update;\n\n#define PBS_DPREFIX_DEFAULT \"#PBS\"\n#define PBS_O_ENV \"PBS_O_\" /* prefix for environment variables created by qsub */\n\n/* Warning/Error messages */\n#define INTER_GUI_WARN \"qsub: only interactive jobs can have GUI\\n\"\n#define INTER_BLOCK_WARN \"qsub (Warning) : setting \\\"block\\\" attribute as \\\"true\\\"\" \\\n\t\t\t \" for an interactive job will not return job's exit status\\n\"\n#define INTER_ARRAY \"qsub: interactive and array job submission cannot be used together\\n\"\n#define NO_RERUN_ARRAY \"qsub:  cannot submit non-rerunable Array Job\\n\"\n#define INTER_RERUN_WARN \"qsub (Warning): Interactive jobs will be treated as not rerunnable\\n\"\n#define BAD_W \"qsub: illegal -W value\\n\"\n#define MULTIPLE_MAX_RUN \"qsub: multiple max_run_subjobs values found\\n\"\n\n/* Security library variables */\nstatic int cs_init = 0; /*1 == security library initialized, 0 == not initialized*/\nstatic int cred_type = -1;\nsize_t cred_len = 0;\nchar *cred_buf = NULL;\nchar cred_name[32];  /* space to hold small credential name */\nchar *tmpdir = NULL; /* Path of temp directory in which to put the job script */\n\n/* variables for Interactive mode */\nint comm_sock; /* Socket for interactive and block job */\n\n#define X11_MSG_OFFSET sizeof(XAUTH_ERR_REDIRECTION) /* offset of the redirection clause */\n\nextern char fl[];\nchar retmsg[MAXPATHLEN];\t\t\t\t /* holds the message that background qsub process will send */\nchar qsub_cwd[MAXPATHLEN + 1];\t\t\t\t /* buffer to pass cwd to background qsub */\nchar *new_jobname = NULL;\t\t\t\t /* return from submit request */\nchar destination[PBS_MAXDEST];\t\t\t\t /* Destination of the batch job, specified by q opt */\nchar server_out[PBS_MAXSERVERNAME + 
PBS_MAXPORTNUM + 2]; /* Destination server, parsed from destination[] */\nchar script_tmp[MAXPATHLEN + 1] = {'\\0'};\t\t /* name of script file copy */\nint sd_svr;\t\t\t\t\t\t /* return from pbs_connect */\nchar *display;\t\t\t\t\t\t /* environment variable DISPLAY */\nstruct attrl *attrib = NULL;\t\t\t\t /* Attribute list */\nstatic struct attrl *attrib_o = NULL;\t\t\t /* Original attribute list, before applying default_qsub_arguments */\nstatic char dir_prefix[MAX_QSUB_PREFIX_LEN + 1];\t /* Directive Prefix, specified by C opt */\nstatic struct batch_status *ss = NULL;\nstatic char *dfltqsubargs = NULL;\t\t\t/* Default qsub arguments */\nstatic char *pbs_hostvar = NULL;\t\t\t/* buffer containing \",PBS_O_HOST=\" and host name */\nstatic int pbs_o_hostsize = sizeof(\",PBS_O_HOST=\") + 1; /* size of prefix for hostvar */\n\nint pid = -1;\n\n/*\n * Flag to check if current process is the background process.\n * This variable is set only once and is read-only afterwards.\n */\nint is_background = 0;\nchar *basic_envlist = NULL;    /* basic comma-separated environment variables list string */\nchar *qsub_envlist = NULL;     /* comma-separated variables list string */\nchar *v_value = NULL;\t       /* expanded variable list from v opt */\nstatic int no_background = 0;  /* flag to disable backgrounding */\nstatic char roptarg = 'y';     /* whether the job is rerunnable */\nstatic char *v_value_o = NULL; /* copy of v_value before set_job_env() */\nstatic int x11_disp = FALSE;   /* whether DISPLAY environment variable is available */\n\n/* state booleans for protecting already-set options */\nstatic int a_opt = FALSE;\nstatic int c_opt = FALSE;\nstatic int e_opt = FALSE;\nstatic int h_opt = FALSE;\nstatic int j_opt = FALSE;\nstatic int k_opt = FALSE;\nstatic int l_opt = FALSE;\nstatic int m_opt = FALSE;\nstatic int o_opt = FALSE;\nstatic int p_opt = FALSE;\nstatic int q_opt = FALSE;\nstatic int r_opt = FALSE;\nstatic int u_opt = FALSE;\nstatic int v_opt = FALSE;\nstatic 
int z_opt = FALSE;\nstatic int A_opt = FALSE;\nstatic int C_opt = FALSE;\nstatic int J_opt = FALSE;\nstatic int M_opt = FALSE;\nstatic int N_opt = FALSE;\nstatic int P_opt = FALSE;\nstatic int R_opt = FALSE;\nstatic int S_opt = FALSE;\nstatic int V_opt = FALSE;\nstatic int Depend_opt = FALSE;\nstatic int Stagein_opt = FALSE;\nstatic int Stageout_opt = FALSE;\nstatic int Sandbox_opt = FALSE;\nstatic int Grouplist_opt = FALSE;\nstatic int Resvstart_opt = FALSE;\nstatic int Resvend_opt = FALSE;\nstatic int pwd_opt = FALSE;\nstatic int cred_opt = FALSE;\nstatic int block_opt = FALSE;\nstatic int relnodes_on_stageout_opt = FALSE;\nstatic int tolerate_node_failures_opt = FALSE;\nstatic int roptarg_inter = FALSE;\nint Interact_opt = FALSE;\nint Forwardx11_opt = FALSE;\nint gui_opt = FALSE;\n\n/* for saving option booleans */\nstatic int a_opt_o = FALSE;\nstatic int c_opt_o = FALSE;\nstatic int e_opt_o = FALSE;\nstatic int h_opt_o = FALSE;\nstatic int j_opt_o = FALSE;\nstatic int k_opt_o = FALSE;\nstatic int l_opt_o = FALSE;\nstatic int m_opt_o = FALSE;\nstatic int o_opt_o = FALSE;\nstatic int p_opt_o = FALSE;\nstatic int q_opt_o = FALSE;\nstatic int r_opt_o = FALSE;\nstatic int u_opt_o = FALSE;\nstatic int v_opt_o = FALSE;\nstatic int z_opt_o = FALSE;\nstatic int A_opt_o = FALSE;\nstatic int C_opt_o = FALSE;\nstatic int J_opt_o = FALSE;\nstatic int M_opt_o = FALSE;\nstatic int N_opt_o = FALSE;\nstatic int P_opt_o = FALSE;\nstatic int S_opt_o = FALSE;\nstatic int V_opt_o = FALSE;\nstatic int Depend_opt_o = FALSE;\nstatic int Interact_opt_o = FALSE;\nstatic int Stagein_opt_o = FALSE;\nstatic int Stageout_opt_o = FALSE;\nstatic int Sandbox_opt_o = FALSE;\nstatic int Grouplist_opt_o = FALSE;\nstatic int gui_opt_o = FALSE;\nstatic int Resvstart_opt_o = FALSE;\nstatic int Resvend_opt_o = FALSE;\nstatic int pwd_opt_o = FALSE;\nstatic int cred_opt_o = FALSE;\nstatic int block_opt_o = FALSE;\nstatic int relnodes_on_stageout_opt_o = FALSE;\nstatic int tolerate_node_failures_opt_o = 
FALSE;\nstatic int max_run_opt = FALSE;\n\nextern char **environ;\n\nextern void blockint(int sig);\nextern void do_daemon_stuff();\nextern void enable_gui(void);\nextern void set_sig_handlers(void);\nextern void interactive(void);\nextern int dorecv(void *, char *, int);\nextern int dosend(void *, char *, int);\nextern int daemon_submit(int *, int *);\nextern int get_script(FILE *, char *, char *);\nextern int check_for_background(int, char **);\n\nvoid exit_qsub(int exitstatus);\n\n/* The following are \"Utility\" functions. */\n\n/**\n * @brief\n * \tProcess comma separated tokens with consideration for quotes.\n *\n * @param[in]\tstr\tsource string to scan for tokens\n *\n * @retval\tNULL\tno more tokens\n *\n */\nstatic char *\ncomma_token(char *str)\n{\n\tstatic char *p = NULL;\n\tchar quote = 0;\n\tchar *tok;\n\n\tif (str != NULL)\n\t\tp = str;\n\n\t/* check for no more tokens */\n\tif ((p == NULL) || (*p == 0))\n\t\treturn NULL;\n\n\ttok = p;\n\tfor (; *p != '\\0'; p++) {\n\t\tswitch (*p) {\n\n\t\t\tcase '\\'':\n\t\t\tcase '\"':\n\t\t\t\tif (*p == quote) /* ending quote */\n\t\t\t\t\tquote = 0;\n\t\t\t\telse /* starting quote */\n\t\t\t\t\tquote = *p;\n\t\t\t\tbreak;\n\n\t\t\tcase ',':\n\t\t\t\tif (quote == 0) { /* normal comma */\n\t\t\t\t\t*p++ = 0; /* terminate token */\n\t\t\t\t\treturn tok;\n\t\t\t\t}\n\t\t\t\tbreak; /* comma inside quotes, keep scanning */\n\n\t\t\tcase ESC_CHAR:\t\t   /* pass over next char */\n\t\t\t\tif (*(p + 1) != 0) /* check '\\' is not last */\n\t\t\t\t\tp++;\n\t\t\t\tbreak;\n\t\t}\n\t}\n\treturn tok;\n}\n\n/**\n * @brief\n *      Copy an environment variable to a specified location\n *\n * @param[in]\tdest\t- The destination address\n * @param[in]   pv\t- The source address\n * @param[in]   quote_flg - Whether quote characters should be escaped\n *\n * @return\tchar*\n * @retval\tNULL - Failure\n * @retval\t!NULL - Success - Pointer to pv parameter\n */\nstatic char *\ncopy_env_value(char *dest, char *pv, int 
quote_flg)\n{\n\tint go = 1;\n\tint q_ch = 0;\n\tint is_func = 0;\n\tchar *dest_full = dest;\n\n\twhile (*dest)\n\t\t++dest;\n\n\tis_func = ((*pv == '(') && (*(pv + 1) == ')') && (*(pv + 2) == ' ') && (*(pv + 3) == '{'));\n\n\t/*\n\t * Keep the list of special characters consistent with encode_arst_bs()\n\t * and parse_comma_string_bs().\n\t */\n\n\twhile (go && *pv) {\n\t\tswitch (*pv) {\n\t\t\tcase '\"':\n\t\t\tcase '\\'':\n\t\t\t\tif (q_ch) { /* local quoting is in progress */\n\t\t\t\t\tif (q_ch == (int) *pv) {\n\t\t\t\t\t\tq_ch = 0; /* end quote */\n\t\t\t\t\t} else {\n\t\t\t\t\t\t*dest++ = ESC_CHAR; /* escape quote */\n\t\t\t\t\t\t*dest++ = *pv;\n\t\t\t\t\t}\n\t\t\t\t} else if (quote_flg) {\t    /* global quoting is on */\n\t\t\t\t\t*dest++ = ESC_CHAR; /* escape quote */\n\t\t\t\t\t*dest++ = *pv;\n\t\t\t\t} else {\n\t\t\t\t\tq_ch = (int) *pv; /* turn local quoting on */\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase ESC_CHAR: /* backslash in value, escape it */\n\t\t\t\t*dest++ = *pv;\n\t\t\t\tif (*(pv + 1) != ',') /* do not escape if ESC_CHAR already escapes */\n\t\t\t\t\t*dest++ = *pv;\n\t\t\t\tbreak;\n\n\t\t\tcase ',':\n\t\t\t\tif (q_ch || quote_flg) {\n\t\t\t\t\t*dest++ = ESC_CHAR;\n\t\t\t\t\t*dest++ = *pv;\n\t\t\t\t} else if (dest_full != dest && *(dest - 1) == ESC_CHAR) { /* the comma is escaped, not finished yet */\n\t\t\t\t\t*dest++ = *pv;\n\t\t\t\t} else {\n\t\t\t\t\tgo = 0; /* end of value string */\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase ';':\n\t\t\t\t*dest++ = *pv;\n\t\t\t\tif (is_func && (*(pv + 1) == '\\n'))\n\t\t\t\t\tpv++;\n\t\t\t\tbreak;\n\n\t\t\tdefault:\n\t\t\t\t*dest++ = *pv;\n\t\t\t\tbreak;\n\t\t}\n\t\tpv++;\n\t}\n\n\t*dest = '\\0';\n\tif (q_ch)\n\t\treturn NULL; /* error-unterminated quote */\n\telse\n\t\treturn (pv);\n}\n\n/**\n * @brief\n *\tGiven a comma-separated list of \"variable\" or \"variable=value\"\n *\tentries, return a new variable list with those \"variable\" entries\n *\texpanded to contain their values obtained from the 
current\n *\tenvironment.\n *\n * @param[in] varlist - variable list\n *\n * @return char *\n *\tThe malloced list of expanded variables list.\n *\tNULL if any error encountered.\n *\n */\nstatic char *\nexpand_varlist(char *varlist)\n{\n\tchar *v_value1 = NULL;\n\tchar *v_value2 = NULL;\n\tchar *vn = NULL;\n\tchar *vv = NULL;\n\tchar *p1, *p2, *p;\n\tchar *ev;\n\tchar *ev2;\n\tint v_value1_sz = 0;\n\tchar *pc;\n\tint special_char_cnt = 0;\n\tint len = 0;\n\tint esc_char_cnt = 0;\n\n\t/*\n\t * count special characters as they are escaped with '\\' in copy_env_value function\n\t * so that this is useful while calculating the accurate size of the destination string.\n\t * Also calculating the length of the string.\n\t */\n\tpc = varlist;\n\tfor (; *pc; pc++) {\n\t\tif ((*pc == '\"') || (*pc == '\\'') || (*pc == ',') || (*pc == '\\\\'))\n\t\t\tspecial_char_cnt++;\n\t\tlen++;\n\t}\n\n\tv_value1_sz = len + special_char_cnt + 1;\n\t/* final copy */\n\tv_value1 = malloc(v_value1_sz);\n\tif (v_value1 == NULL) {\n\t\tfprintf(stderr, \"qsub: out of memory\\n\");\n\t\treturn NULL;\n\t}\n\tv_value1[0] = '\\0';\n\n\t/* working copy */\n\tv_value2 = strdup(varlist);\n\tif (v_value2 == NULL) {\n\t\tfprintf(stderr, \"qsub: out of memory\\n\");\n\t\tgoto expand_varlist_err;\n\t}\n\n\tp1 = comma_token(v_value2);\n\twhile (p1 != NULL) {\n\t\tvn = p1;\n\t\tvv = NULL;\n\t\tif ((p2 = strchr(p1, '=')) != NULL) {\n\t\t\t*p2 = '\\0';\n\t\t\tvv = p2 + 1;\n\t\t}\n\t\tif ((vv == NULL) && (strncmp(vn, PBS_O_ENV, sizeof(PBS_O_ENV) - 1) != 0)\n\t\t\t\t && (strncmp(vn, PBS_JOBCOOKIE, sizeof(PBS_JOBCOOKIE) - 1) != 0)\n\t\t\t\t && (strncmp(vn, PBS_INTERACTIVE_COOKIE, sizeof(PBS_INTERACTIVE_COOKIE) - 1) != 0)) {\n\t\t\t/* do not add PBS_O_* env variables, as these are set by qsub\n\t\t\t * Job related cookies should not be sent and are excluded to\n\t\t\t * prevent exposing internal PBS interactive session information. 
*/\n\n\t\t\tev = getenv(vn);\n\t\t\tif (ev == NULL) {\n\t\t\t\tfprintf(stderr, \"qsub: cannot send environment with the job\\n\");\n\t\t\t\tgoto expand_varlist_err;\n\t\t\t}\n\t\t\t/* count escape characters as they are escaped with '\\'*/\n\t\t\tev2 = ev;\n\t\t\tlen = 0;\n\t\t\tfor (; *ev2; ev2++) {\n\t\t\t\tif ((*ev2 == ESC_CHAR))\n\t\t\t\t\tesc_char_cnt++;\n\n\t\t\t\tlen++;\n\t\t\t}\n\t\t\tv_value1_sz = v_value1_sz + len + esc_char_cnt + 1; /* include '=' */\n\t\t\tp = realloc(v_value1, v_value1_sz);\n\t\t\tif (p == NULL) {\n\t\t\t\tfprintf(stderr, \"qsub: out of memory\\n\");\n\t\t\t\tgoto expand_varlist_err;\n\t\t\t}\n\t\t\tv_value1 = p;\n\t\t\tif (v_value1[0] != '\\0')\n\t\t\t\tstrcat(v_value1, \",\");\n\t\t\tstrcat(v_value1, vn);\n\t\t\tstrcat(v_value1, \"=\");\n\n\t\t\tif (copy_env_value(v_value1, ev, 1) == NULL) {\n\t\t\t\tfprintf(stderr, \"qsub: cannot send environment with the job\\n\");\n\t\t\t\tgoto expand_varlist_err;\n\t\t\t}\n\t\t} else if (vv != NULL) {\n\t\t\t/* no need to adjust */\n\t\t\tif (v_value1[0] != '\\0')\n\t\t\t\tstrcat(v_value1, \",\");\n\t\t\tstrcat(v_value1, vn);\n\t\t\tstrcat(v_value1, \"=\");\n\t\t\tif (copy_env_value(v_value1, vv, 0) == NULL) {\n\t\t\t\tfprintf(stderr, \"qsub: cannot send environment with the job\\n\");\n\t\t\t\tgoto expand_varlist_err;\n\t\t\t}\n\t\t}\n\n\t\tp1 = comma_token(NULL);\n\t}\n\tfree(v_value2);\n\treturn (v_value1);\n\nexpand_varlist_err:\n\tfree(v_value1);\n\tfree(v_value2);\n\treturn NULL;\n}\n\n/**\n * @brief\n *\tQuery the server for a new value to \"default_qsub_arguments\".\n * @note\n *\tReferences the global variables 'sd_svr', 'ss', and \"dfltqsubargs'.\n *\t'dfltqsubargs' is updated with the new value.\n *\n */\nstatic void\nrefresh_dfltqsubargs(void)\n{\n\tstruct attrl *attr;\n\tstruct batch_status *ss_save = NULL;\n\tchar *errmsg;\n\n\tif (sd_svr == -1)\n\t\treturn;\n\n\tfree(dfltqsubargs);\n\n\tdfltqsubargs = NULL;\n\tss = pbs_statserver(sd_svr, NULL, NULL);\n\n\tif (ss == NULL && 
pbs_errno != PBSE_NONE) {\n\t\tif ((errmsg = pbs_geterrmsg(sd_svr)) != NULL)\n\t\t\tfprintf(stderr, \"qsub: %s\\n\", errmsg);\n\t\telse\n\t\t\tfprintf(stderr, \"qsub: Error %d\\n\", pbs_errno);\n\t\treturn;\n\t}\n\n\tss_save = ss;\n\twhile (ss != NULL) {\n\t\tfor (attr = ss->attribs; attr != NULL; attr = attr->next) {\n\t\t\tif (strcmp(attr->name, ATTR_dfltqsubargs) == 0) {\n\t\t\t\tdfltqsubargs = strdup(attr->value);\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tss = ss->next;\n\t}\n\tpbs_statfree(ss_save);\n}\n\n/**\n * @brief\n *\texit_qsub - issues the exit system call with the \"exit\" argument after\n * \tdoing any needed library shutdown.\n *\n * @param[in] exitstatus integer value indicating exit status\n *\n * @return None\n *\n */\nvoid\nexit_qsub(int exitstatus)\n{\n\t/* A thread that makes qsub exit should try to acquire the Critical Section. */\n\tcritical_section();\n\n\tif (cs_init == 1)\n\t\t/* Cleanup security library initializations before exiting */\n\t\tCS_close_app();\n\n#ifdef BACKTRACE_SIZE\n\tif (exitstatus != 0) {\n\t\tint i, frames;\n\t\tvoid *bt_buf[BACKTRACE_SIZE];\n\t\tchar **bt_strings;\n\n\t\tframes = backtrace(bt_buf, BACKTRACE_SIZE);\n\t\tprintf(\"Backtrace has %d frames.\\n\", frames);\n\t\tbt_strings = backtrace_symbols(bt_buf, frames);\n\t\tif (bt_strings == NULL) {\n\t\t\tprintf(\"No backtrace symbols present!\\n\");\n\t\t} else {\n\t\t\tfor (i = 0; i < frames; i++) {\n\t\t\t\tprintf(\"%s\\n\", bt_strings[i]);\n\t\t\t}\n\t\t\tfree(bt_strings);\n\t\t}\n\t}\n#endif\n\n\texit(exitstatus);\n}\n\n/**\n * @brief\n *\tstrdup_esc_commas - duplicate a string escaping commas\n *\tThe string is duplicated with all commas in the original string\n *\tescaped by a preceding escape character.\n *\n * @param[in] str_to_dup - string to be duplicated\n *\n * @return\n * @retval string Success\n * @retval NULL   Failure\n */\nchar *\nstrdup_esc_commas(char *str_to_dup)\n{\n\tchar *roaming = str_to_dup;\n\tchar *endstr, *returnstr;\n\n\tif (str_to_dup == 
NULL)\n\t\treturn NULL;\n\n\treturnstr = endstr = malloc(strlen(str_to_dup) * 2 + 2);\n\t/* even for an all-comma string, this should suffice */\n\tif (returnstr == NULL)\n\t\treturn NULL; /* just return null on malloc failure */\n\twhile (*roaming != '\\0') {\n\t\twhile (*roaming != '\\0' && *roaming != ',')\n\t\t\t*(endstr++) = *(roaming++);\n\t\tif (*roaming == ',') {\n\t\t\t*(endstr++) = ESC_CHAR;\n\t\t\t*(endstr++) = ',';\n\t\t\troaming++;\n\t\t}\n\t}\n\t*endstr = '\\0';\n\treturn (returnstr);\n}\n\n/**\n * @brief\n *\tprints the usage format for qsub\n *\n */\nstatic void\nprint_usage(void)\n{\n\tstatic char usage2[] = \"       qsub --version\\n\";\n\textern char usage[];\n\tfprintf(stderr, \"%s\", usage);\n\tfprintf(stderr, \"%s\", usage2);\n}\n\n/* End of \"Utility\" functions. */\n\n/* The following functions support the \"Interactive Job\" capability of PBS. */\n\n/**\n * @brief\n * \tinteractive_port - get a socket to listen to for \"interactive\" job\n *\tWhen the \"interactive\" job is run, its standard in, out, and error\n *\twill be connected to this socket.\n *\n * @return string\n * @retval portstring holding port info\n * @note exits from program on failure\n *\n */\nstatic char *\ninteractive_port(void)\n{\n\tpbs_socklen_t namelen;\n\tstatic char portstring[8];\n\tstruct sockaddr_in myaddr;\n\tunsigned short port;\n\n\tif ((isatty(0) == 0) || (isatty(1) == 0)) {\n\t\tfprintf(stderr, \"qsub:\\tstandard input and output must be a terminal for\\n\"\n\t\t\t\t\"\\tinteractive job submission\\n\");\n\t\texit_qsub(1);\n\t}\n\tcomm_sock = socket(AF_INET, SOCK_STREAM, 0);\n\tif (comm_sock < 0) {\n\t\tperror(\"qsub: unable to obtain socket\");\n\t\texit_qsub(1);\n\t}\n\tmyaddr.sin_family = AF_INET;\n\tmyaddr.sin_addr.s_addr = INADDR_ANY;\n\tmyaddr.sin_port = 0;\n\tif (bind(comm_sock, (struct sockaddr *) &myaddr, sizeof(myaddr)) < 0) {\n\t\tperror(\"qsub: unable to bind to socket\");\n\t\texit_qsub(1);\n\t}\n\n\t/* get port number assigned */\n\n\tnamelen = 
sizeof(myaddr);\n\tif (getsockname(comm_sock, (struct sockaddr *) &myaddr, &namelen) < 0) {\n\t\tperror(\"qsub: unable to get port number\");\n\t\texit_qsub(1);\n\t}\n\tport = ntohs(myaddr.sin_port);\n\t(void) sprintf(portstring, \"%u\", (unsigned int) port);\n\tif (listen(comm_sock, 1) < 0) {\n\t\tperror(\"qsub: listen on interactive socket\");\n\t\texit_qsub(1);\n\t}\n\n\treturn (portstring);\n}\n\n/**\n * @brief\n *\tShut and Close a socket\n *\n * @param\tsock\tfile descriptor\n *\n * @return Void\n *\n */\nstatic void\nshut_close_sock(int sock)\n{\n\tshutdown(sock, 2);\n\tclosesocket(sock);\n}\n\n/**\n * @brief\n * \tsend delete job request, disconnect with server and exit qsub\n *\n * @param[in]\tret\tqsub exit code\n *\n * @return      void\n *\n */\nvoid\nbailout(int ret)\n{\n\tint c;\n\n\tshut_close_sock(comm_sock);\n\tprintf(\"Job %s is being deleted\\n\", new_jobname);\n\tc = cnt2server(server_out);\n\tif (c <= 0) {\n\t\tfprintf(stderr,\n\t\t\t\"qsub: cannot connect to server %s (errno=%d)\\n\",\n\t\t\tpbs_server, pbs_errno);\n\t\texit_qsub(1);\n\t}\n\t(void) pbs_deljob(c, new_jobname, NULL);\n\tpbs_disconnect(c);\n\texit_qsub(ret);\n}\n\n/* The following functions support the \"Block Job\" capability of PBS. 
*/\n\n/**\n * @brief\n *\tcreates a socket and blocks the port\n *\n * @return char *\n * @retval portstring string holding port info\n *\n */\nstatic char *\nblock_port(void)\n{\n\tpbs_socklen_t namelen;\n\tstatic char portstring[8];\n\tstruct sockaddr_in myaddr;\n\tunsigned short port;\n\n\tcomm_sock = socket(AF_INET, SOCK_STREAM, 0);\n\tif (comm_sock < 0) {\n\t\tperror(\"qsub: unable to obtain socket\");\n\t\texit_qsub(1);\n\t}\n\tmyaddr.sin_family = AF_INET;\n\tmyaddr.sin_addr.s_addr = INADDR_ANY;\n\tmyaddr.sin_port = 0;\n\tif (bind(comm_sock, (struct sockaddr *) &myaddr, sizeof(myaddr)) < 0) {\n\t\tperror(\"qsub: unable to bind to socket\");\n\t\texit_qsub(1);\n\t}\n\n\t/* get port number assigned */\n\n\tnamelen = sizeof(myaddr);\n\tif (getsockname(comm_sock, (struct sockaddr *) &myaddr, &namelen) < 0) {\n\t\tperror(\"qsub: unable to get port number\");\n\t\texit_qsub(1);\n\t}\n\tport = ntohs(myaddr.sin_port);\n\t(void) sprintf(portstring, \"%u\", (unsigned int) port);\n\tDBPRT((\"block_port: %s\\n\", portstring))\n\n\tif (listen(comm_sock, 1) < 0) {\n\t\tperror(\"qsub: listen on block socket\");\n\t\texit_qsub(1);\n\t}\n\n\treturn (portstring);\n}\n\nint sig_happened = 0;\n\n#define BAIL(message)             \\\n\tif (ret != DIS_SUCCESS) { \\\n\t\tfail = message;   \\\n\t\tgoto err;         \\\n\t}\n\n/**\n * @brief\n *\tblock - set up to wait for a job to end.\n *\n * @return Void\n * Exits on failure\n *\n */\nstatic void\nblock(void)\n{\n\tstruct sockaddr_in from;\n\tpbs_socklen_t fromlen;\n\tchar *jobid = \"none\";\n\tchar *message = NULL;\n\tchar *fail = NULL;\n\tint news;\n\tint ret;\n\tint version;\n\tint exitval;\n\n#ifndef WIN32\n\tstruct sigaction act;\n\n\t/* Catch SIGHUP, SIGINT, SIGQUIT and SIGTERM */\n\n\tsigemptyset(&act.sa_mask);\n\tact.sa_handler = blockint;\n\tact.sa_flags = 0;\n\tif ((sigaction(SIGHUP, &act, NULL) < 0) ||\n\t    (sigaction(SIGINT, &act, NULL) < 0) ||\n\t    (sigaction(SIGQUIT, &act, NULL) < 0) ||\n\t    (sigaction(SIGTERM, 
&act, NULL) < 0)) {\n\t\tperror(\"qsub: unable to catch signals\");\n\t\texit_qsub(1);\n\t}\n#endif\n\nretry:\n\tfromlen = sizeof(from);\n\tif ((news = accept(comm_sock, (struct sockaddr *) &from,\n\t\t\t   &fromlen)) < 0) {\n#ifdef WIN32\n\t\tif (errno == WSAEINTR)\n#else\n\t\tif (errno == EINTR)\n#endif\n\t\t{\n\t\t\tfprintf(stderr, \"qsub: wait for job %s \"\n\t\t\t\t\t\"interrupted by signal %d\\n\",\n\t\t\t\tnew_jobname, sig_happened);\n\t\t\tbailout(2);\n\t\t}\n\t\tperror(\"qsub: accept error\");\n\t\texit_qsub(1);\n\t}\n\tDBPRT((\"got connection from %s:%d\\n\", inet_ntoa(from.sin_addr), (int) ntohs(from.sin_port)))\n\n\t/*\n\t * if SIGINT or SIGBREAK interrupt is raised, then child thread win_blockint()\n\t * does job deletion and other related stuff. So main thread can exit now.\n\t */\n\n#ifdef WIN32\n\tif ((sig_happened == SIGINT) || (sig_happened == SIGBREAK))\n\t\texit_qsub(3);\n#endif\n\n\t/* When Mom connects back, the first thing that needs\n\t * to happen is to engage in an authentication activity.\n\t * Any return value other than CS_SUCCESS or CS_AUTH_USE_IFF\n\t * means the authentication failed.\n\t */\n\tret = CS_client_auth(news);\n\n\tif ((ret != CS_SUCCESS) && (ret != CS_AUTH_USE_IFF)) {\n\t\tfprintf(stderr, \"qsub: failed authentication with execution host\\n\");\n\t\tshut_close_sock(news);\n\t\tgoto retry;\n\t}\n\n\tDIS_tcp_funcs();\n\tversion = disrsi(news, &ret);\n\tif (ret != DIS_SUCCESS) {\n\t\t/*\n\t\t * We couldn't read data so try again if it is a port scan.\n\t\t */\n\t\tshut_close_sock(news);\n\t\tgoto retry;\n\t}\n\tif (version != 1) {\n\t\tfprintf(stderr, \"qsub: unknown protocol version %d\\n\", version);\n\t\tshut_close_sock(news);\n\t\tgoto retry;\n\t}\n\n\tjobid = disrst(news, &ret);\n\tif ((ret != DIS_SUCCESS) || (strcmp(jobid, new_jobname) != 0)) {\n\t\tfprintf(stderr, \"qsub: Unknown Job Identifier %s\\n\", jobid);\n\t\tshut_close_sock(news);\n\t\tgoto retry;\n\t}\n\n\t/* after getting the correct jobid, give up on error 
*/\n\tmessage = disrst(news, &ret);\n\tBAIL(\"message\")\n\tif (message != NULL && *message != '\\0') { /* non-null message */\n\t\tfprintf(stderr, \"qsub: %s %s\\n\", jobid, message);\n\t\texit_qsub(3);\n\t}\n\texitval = disrsi(news, &ret);\n\tBAIL(\"exitval\");\n\texit_qsub(exitval);\n\nerr:\n\tfprintf(stderr, \"qsub: Bad Request Protocol, %s\\n\",\n\t\t((fail != NULL) && (*fail != '\\0')) ? fail : \"unknown error\");\n\texit_qsub(3);\n}\n\n/* End of \"Block Job\" functions. */\n\n/* End of \"Authentication\" functions. */\n\n/*\n * The following functions support the \"Options Processing\"\n * functionality of qsub.\n */\n\n/**\n * @brief\n *  This function processes all the options specified while submitting a job. It\n *  validates all these options and sets their corresponding flags.\n *\n * @param[in] argc  Number of options present in argv.\n * @param[in] argv  An array containing all the options and their values.\n * @param[in] passet The value that will be used to set the options. It can have\n *                   value as CMDLINE (for command line options), CMDLINE-1 (for\n *                   job script options), CMDLINE-2 (server default options).\n *\n * @return int - It returns the number of erroneous options processed.\n *\n */\nstatic int\nprocess_opts(int argc, char **argv, int passet)\n{\n\tint i;\n\tint c;\n\tchar *erp;\n\tint errflg = 0;\n\ttime_t after;\n\tchar a_value[512];\n\tchar *keyword;\n\tchar *valuewd;\n\tchar *pc;\n\tstruct attrl *pattr = NULL;\n\tsize_t N_len = 0;\n\tint ddash_index = -1;\n\n\textern char GETOPT_ARGS[];\n\n/*\n * The following macro, together with the value of passet, is used\n * to enforce the following rules:\n * 1. Options on the command line take precedence over those in script directives.\n * 2. Within the command line or within the script, the last occurrence of an option takes\n *    precedence over the earlier occurrence.\n */\n\n/*\n * The passet value is saved in the opt register. 
The option will\n * only be set if the value of passet is greater than or equal to the\n * opt register.\n */\n#define if_cmd_line(x) if (x <= passet)\n\n\tif (passet != CMDLINE) {\n#if defined(linux) || defined(WIN32)\n\t\toptind = 0; /* prime getopt's starting point */\n#else\n\t\toptind = 1; /* prime getopt's starting point */\n#endif\n\t}\n\twhile ((c = getopt(argc, argv, GETOPT_ARGS)) != EOF) {\n\t\t/*\n\t\t * qsub uses \"--\" to specify the executable to run for a job,\n\t\t * so, if \"--\" is used as a value, we need to make sure that\n\t\t * there is another \"--\" for providing the executable name (if any).\n\t\t */\n\t\tif (optarg && (strcmp(optarg, \"--\") == 0))\n\t\t\tddash_index = optind - 1;\n\n\t\tswitch (c) {\n\t\t\tcase 'a':\n\t\t\t\tif_cmd_line(a_opt)\n\t\t\t\t{\n\t\t\t\t\ta_opt = passet;\n\t\t\t\t\tif ((after = cvtdate(optarg)) < 0) {\n\t\t\t\t\t\tfprintf(stderr, \"qsub: illegal -a value\\n\");\n\t\t\t\t\t\terrflg++;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\tsprintf(a_value, \"%ld\", (long) after);\n\t\t\t\t\t(void) set_attr_error_exit(&attrib, ATTR_a, a_value);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'A':\n\t\t\t\tif_cmd_line(A_opt)\n\t\t\t\t{\n\t\t\t\t\tA_opt = passet;\n\t\t\t\t\t(void) set_attr_error_exit(&attrib, ATTR_A, optarg);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'P':\n\t\t\t\tif_cmd_line(P_opt)\n\t\t\t\t{\n\t\t\t\t\tP_opt = passet;\n\t\t\t\t\t(void) set_attr_error_exit(&attrib, ATTR_project, optarg);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'c':\n\t\t\t\tif_cmd_line(c_opt)\n\t\t\t\t{\n\t\t\t\t\tc_opt = passet;\n\t\t\t\t\twhile (isspace((int) *optarg))\n\t\t\t\t\t\toptarg++;\n\t\t\t\t\tpc = optarg;\n\t\t\t\t\tif (strlen(optarg) == 1) {\n\t\t\t\t\t\tif (*pc == 'u') {\n\t\t\t\t\t\t\tfprintf(stderr, \"qsub: illegal -c value\\n\");\n\t\t\t\t\t\t\terrflg++;\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_c, optarg);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 
'C':\n\t\t\t\tif_cmd_line(C_opt)\n\t\t\t\t{\n\t\t\t\t\tC_opt = passet;\n\t\t\t\t\tsnprintf(dir_prefix, sizeof(dir_prefix), \"%s\", optarg);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'e':\n\t\t\t\tif_cmd_line(e_opt)\n\t\t\t\t{\n\t\t\t\t\te_opt = passet;\n\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_e, optarg);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'h':\n\t\t\t\tif_cmd_line(h_opt)\n\t\t\t\t{\n\t\t\t\t\th_opt = passet;\n\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_h, \"u\");\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'f':\n\t\t\t\tno_background = 1;\n\t\t\t\tbreak;\n#if !defined(PBS_NO_POSIX_VIOLATION)\n\t\t\tcase 'I':\n\t\t\t\tif (J_opt != 0) {\n\t\t\t\t\tfprintf(stderr, \"%s\", INTER_ARRAY);\n\t\t\t\t\terrflg++;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tif_cmd_line(Interact_opt)\n\t\t\t\t{\n\t\t\t\t\tInteract_opt = passet;\n\t\t\t\t\tif (block_opt != FALSE) {\n\t\t\t\t\t\tfprintf(stderr, \"%s\", INTER_BLOCK_WARN);\n\t\t\t\t\t\tblock_opt = FALSE;\n\t\t\t\t\t}\n\t\t\t\t\tif (roptarg_inter == TRUE) {\n\t\t\t\t\t\tfprintf(stderr, \"%s\", INTER_RERUN_WARN);\n\t\t\t\t\t}\n\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_inter, interactive_port());\n\t\t\t\t}\n\t\t\t\tbreak;\n#endif /* PBS_NO_POSIX_VIOLATION */\n\t\t\tcase 'j':\n\t\t\t\tif_cmd_line(j_opt)\n\t\t\t\t{\n\t\t\t\t\tj_opt = passet;\n\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_j, optarg);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'J':\n\t\t\t\tif (Interact_opt != FALSE) {\n\t\t\t\t\tfprintf(stderr, \"%s\", INTER_ARRAY);\n\t\t\t\t\terrflg++;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tif (roptarg != 'y') {\n\t\t\t\t\tfprintf(stderr, \"%s\", NO_RERUN_ARRAY);\n\t\t\t\t\terrflg++;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tif_cmd_line(J_opt)\n\t\t\t\t{\n\t\t\t\t\tchar *p;\n\t\t\t\t\tJ_opt = passet;\n\t\t\t\t\tp = strpbrk(optarg, \"%\");\n\t\t\t\t\tif (p != NULL)\n\t\t\t\t\t\t*p = '\\0';\n\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_J, optarg);\n\t\t\t\t\tif (p != NULL) {\n\t\t\t\t\t\tif (max_run_opt == FALSE) {\n\t\t\t\t\t\t\tmax_run_opt = 
TRUE;\n\t\t\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_max_run_subjobs, ++p);\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tfprintf(stderr, \"%s\", MULTIPLE_MAX_RUN);\n\t\t\t\t\t\t\terrflg++;\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'k':\n\t\t\t\tif_cmd_line(k_opt)\n\t\t\t\t{\n\t\t\t\t\tk_opt = passet;\n\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_k, optarg);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'l':\n\t\t\t\tl_opt = passet;\n\t\t\t\tif ((i = set_resources(&attrib, optarg, (passet == CMDLINE), &erp))) {\n\t\t\t\t\tif (i > 1) {\n\t\t\t\t\t\tpbs_prt_parse_err(\"qsub: illegal -l value\\n\", optarg,\n\t\t\t\t\t\t\t\t  (int) (erp - optarg), i);\n\t\t\t\t\t} else\n\t\t\t\t\t\tfprintf(stderr, \"qsub: illegal -l value\\n\");\n\t\t\t\t\terrflg++;\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'm':\n\t\t\t\tif_cmd_line(m_opt)\n\t\t\t\t{\n\t\t\t\t\tm_opt = passet;\n\t\t\t\t\twhile (isspace((int) *optarg))\n\t\t\t\t\t\toptarg++;\n\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_m, optarg);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'M':\n\t\t\t\tif_cmd_line(M_opt)\n\t\t\t\t{\n\t\t\t\t\tM_opt = passet;\n\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_M, optarg);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'N':\n\t\t\t\tif_cmd_line(N_opt)\n\t\t\t\t{\n\t\t\t\t\tN_opt = passet;\n\t\t\t\t\t/* If ATTR_N is not set previously */\n\t\t\t\t\tif (get_attr(attrib, ATTR_N, NULL) == NULL) {\n\t\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_N, optarg);\n\t\t\t\t\t}\n\t\t\t\t\t/* If N_opt is not set previously but if ATTR_N is set\n\t\t\t\t\t * earlier directly without verification based on the\n\t\t\t\t\t * job script name and if there is a value for ATTR_N\n\t\t\t\t\t * after parsing the job script for PBS directives\n\t\t\t\t\t * replace the earlier value with the current value\n\t\t\t\t\t * for this attribute\n\t\t\t\t\t */\n\t\t\t\t\telse {\n\t\t\t\t\t\tfor (pattr = attrib; pattr; pattr = pattr->next) {\n\t\t\t\t\t\t\tif (strcmp(pattr->name, ATTR_N) == 0) 
{\n\t\t\t\t\t\t\t\tN_len = strlen(optarg);\n\t\t\t\t\t\t\t\tif (strlen(pattr->value) < N_len) {\n\t\t\t\t\t\t\t\t\tpattr->value = (char *) realloc(pattr->value, N_len + 1);\n\t\t\t\t\t\t\t\t\tif (pattr->value == NULL) {\n\t\t\t\t\t\t\t\t\t\tfprintf(stderr, \"Out of memory\\n\");\n\t\t\t\t\t\t\t\t\t\texit(2);\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tstrcpy(pattr->value, optarg); /* safe because we just allocated enough space */\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'o':\n\t\t\t\tif_cmd_line(o_opt)\n\t\t\t\t{\n\t\t\t\t\to_opt = passet;\n\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_o, optarg);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'p':\n\t\t\t\tif_cmd_line(p_opt)\n\t\t\t\t{\n\t\t\t\t\tp_opt = passet;\n\t\t\t\t\twhile (isspace((int) *optarg))\n\t\t\t\t\t\toptarg++;\n\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_p, optarg);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'q':\n\t\t\t\tif_cmd_line(q_opt)\n\t\t\t\t{\n\t\t\t\t\tq_opt = passet;\n\t\t\t\t\tsnprintf(destination, sizeof(destination), \"%s\", optarg);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'r':\n\t\t\t\tif_cmd_line(r_opt)\n\t\t\t\t{\n\t\t\t\t\tr_opt = passet;\n\t\t\t\t\tif (strlen(optarg) != 1) {\n\t\t\t\t\t\tfprintf(stderr, \"qsub: illegal -r value\\n\");\n\t\t\t\t\t\terrflg++;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\tif (*optarg != 'y' && *optarg != 'n') {\n\t\t\t\t\t\tfprintf(stderr, \"qsub: illegal -r value\\n\");\n\t\t\t\t\t\terrflg++;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t} else if ((*optarg == 'n') && (J_opt != 0)) {\n\t\t\t\t\t\tfprintf(stderr, \"%s\", NO_RERUN_ARRAY);\n\t\t\t\t\t\terrflg++;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\tif (*optarg == 'y') {\n\t\t\t\t\t\troptarg_inter = TRUE;\n\t\t\t\t\t\tif (Interact_opt)\n\t\t\t\t\t\t\tfprintf(stderr, \"%s\", INTER_RERUN_WARN);\n\t\t\t\t\t}\n\t\t\t\t\troptarg = *optarg;\n\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_r, optarg);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 
'R':\n\t\t\t\tif_cmd_line(R_opt)\n\t\t\t\t{\n\t\t\t\t\tR_opt = passet;\n\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_R, optarg);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'S':\n\t\t\t\tif_cmd_line(S_opt)\n\t\t\t\t{\n\t\t\t\t\tS_opt = passet;\n\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_S, optarg);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'u':\n\t\t\t\tif_cmd_line(u_opt)\n\t\t\t\t{\n\t\t\t\t\tu_opt = passet;\n\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_u, optarg);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'v':\n\t\t\t\tif_cmd_line(v_opt)\n\t\t\t\t{\n\t\t\t\t\tv_opt = passet;\n\t\t\t\t\tfree(v_value);\n\t\t\t\t\t/*\n\t\t\t\t\t * Need to change '\\' to '/' before expanding the\n\t\t\t\t\t * environment because '\\' is used to protect commas\n\t\t\t\t\t * inside quoted values.\n\t\t\t\t\t */\n\t\t\t\t\tfix_path(optarg, 1);\n\t\t\t\t\tv_value = expand_varlist(optarg);\n\t\t\t\t\tif (v_value == NULL)\n\t\t\t\t\t\texit(1);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'V':\n\t\t\t\tif_cmd_line(V_opt)\n\t\t\t\t{\n\t\t\t\t\tV_opt = passet;\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'W':\n\t\t\t\twhile (isspace((int) *optarg))\n\t\t\t\t\toptarg++;\n\t\t\t\tif (strlen(optarg) == 0) {\n\t\t\t\t\tfprintf(stderr, \"%s\", BAD_W);\n\t\t\t\t\terrflg++;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tfix_path(optarg, 2);\n\t\t\t\ti = parse_equal_string(optarg, &keyword, &valuewd);\n\n\t\t\t\t/*\n\t\t\t\t * All the arguments to option 'W' are\n\t\t\t\t * accepted in the format of -Wattrname=value.\n\t\t\t\t */\n\n\t\t\t\twhile (i == 1) {\n\t\t\t\t\tif (strcmp(keyword, ATTR_depend) == 0) {\n\t\t\t\t\t\tif_cmd_line(Depend_opt)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tDepend_opt = passet;\n\t\t\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_depend, valuewd);\n\t\t\t\t\t\t}\n\t\t\t\t\t} else if (strcmp(keyword, ATTR_stagein) == 0) {\n\t\t\t\t\t\tif_cmd_line(Stagein_opt)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tStagein_opt = passet;\n\t\t\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_stagein, valuewd);\n\t\t\t\t\t\t}\n\t\t\t\t\t} else if 
(strcmp(keyword, ATTR_stageout) == 0) {\n\t\t\t\t\t\tif_cmd_line(Stageout_opt)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tStageout_opt = passet;\n\t\t\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_stageout, valuewd);\n\t\t\t\t\t\t}\n\t\t\t\t\t} else if (strcmp(keyword, ATTR_sandbox) == 0) {\n\t\t\t\t\t\tif_cmd_line(Sandbox_opt)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tSandbox_opt = passet;\n\t\t\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_sandbox, valuewd);\n\t\t\t\t\t\t}\n\t\t\t\t\t} else if (strcmp(keyword, ATTR_g) == 0) {\n\t\t\t\t\t\tif_cmd_line(Grouplist_opt)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tGrouplist_opt = passet;\n\t\t\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_g, valuewd);\n\t\t\t\t\t\t}\n\t\t\t\t\t} else if (strcmp(keyword, ATTR_inter) == 0) {\n\t\t\t\t\t\tif_cmd_line(Interact_opt)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (J_opt != 0) {\n\t\t\t\t\t\t\t\tfprintf(stderr, \"%s\", INTER_ARRAY);\n\t\t\t\t\t\t\t\terrflg++;\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t/*\n\t\t\t\t\t\t\t * SPID 232472: can't set interactive attribute to false\n\t\t\t\t\t\t\t * Problem: \"qsub -W interactive=false\" throws an error\n\t\t\t\t\t\t\t * Cause: There should be check to compare the user value\n\t\t\t\t\t\t\t *   with \"false\" string and accordingly decide whether it\n\t\t\t\t\t\t\t *   is an interactive job or not.\n\t\t\t\t\t\t\t * Solution: Added additional checks which will not set\n\t\t\t\t\t\t\t *   Interact_opt and will not call set_attr_error_exit() to create\n\t\t\t\t\t\t\t *   interactive port if user gives a value \"false\"\n\t\t\t\t\t\t\t */\n\t\t\t\t\t\t\tif (!(strcasecmp(valuewd, \"true\"))) {\n\t\t\t\t\t\t\t\tInteract_opt = passet;\n\t\t\t\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_inter, interactive_port());\n\t\t\t\t\t\t\t} else if (!(strcasecmp(valuewd, \"false\"))) {\n\t\t\t\t\t\t\t\t/* Do Nothing, let it run as a non-interactive job */\n\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\t/* Any value other than true/false is not acceptable */\n\t\t\t\t\t\t\t\tfprintf(stderr, \"%s\", 
BAD_W);\n\t\t\t\t\t\t\t\terrflg++;\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif (roptarg_inter == TRUE) {\n\t\t\t\t\t\t\t\tfprintf(stderr, \"%s\", INTER_RERUN_WARN);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t/* check if both block and interactive are true */\n\t\t\t\t\t\t\tif ((block_opt != FALSE) && (Interact_opt)) {\n\t\t\t\t\t\t\t\tfprintf(stderr, \"%s\", INTER_BLOCK_WARN);\n\t\t\t\t\t\t\t\tblock_opt = FALSE;\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t} else if (strcmp(keyword, ATTR_block) == 0) {\n\t\t\t\t\t\tif_cmd_line(block_opt)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tif (!(strcasecmp(valuewd, \"true\"))) {\n\t\t\t\t\t\t\t\tblock_opt = passet;\n\t\t\t\t\t\t\t} else if (!(strcasecmp(valuewd, \"false\"))) {\n\t\t\t\t\t\t\t\t/* Do Nothing, Let it run as a non-blocking job */\n\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\t/* Any value other than true/false is not acceptable */\n\t\t\t\t\t\t\t\tfprintf(stderr, \"%s\", BAD_W);\n\t\t\t\t\t\t\t\terrflg++;\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif ((Interact_opt != FALSE) && (block_opt == passet)) {\n\t\t\t\t\t\t\t\tfprintf(stderr, \"%s\", INTER_BLOCK_WARN);\n\t\t\t\t\t\t\t\tblock_opt = FALSE;\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t} else if (strcmp(keyword, ATTR_resv_start) == 0) {\n\t\t\t\t\t\tif_cmd_line(Resvstart_opt)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tResvstart_opt = passet;\n\t\t\t\t\t\t\tif ((after = cvtdate(valuewd)) < 0) {\n\t\t\t\t\t\t\t\tfprintf(stderr, \"%s\", BAD_W);\n\t\t\t\t\t\t\t\terrflg++;\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tsprintf(a_value, \"%ld\", (long) after);\n\t\t\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_resv_start, a_value);\n\t\t\t\t\t\t}\n\t\t\t\t\t} else if (strcmp(keyword, ATTR_resv_end) == 0) {\n\t\t\t\t\t\tif_cmd_line(Resvend_opt)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tResvend_opt = passet;\n\t\t\t\t\t\t\tif ((after = cvtdate(valuewd)) < 0) {\n\t\t\t\t\t\t\t\tfprintf(stderr, \"%s\", 
BAD_W);\n\t\t\t\t\t\t\t\terrflg++;\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tsprintf(a_value, \"%ld\", (long) after);\n\t\t\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_resv_end, a_value);\n\t\t\t\t\t\t}\n\t\t\t\t\t} else if (strcmp(keyword, ATTR_cred) == 0) {\n\t\t\t\t\t\tif_cmd_line(cred_opt)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\tcred_opt = passet;\n\t\t\t\t\t\t\tsnprintf(cred_name, sizeof(cred_name), \"%s\", valuewd);\n\t\t\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_cred, valuewd);\n\t\t\t\t\t\t}\n\t\t\t\t\t} else if (strcmp(keyword, ATTR_tolerate_node_failures) == 0) {\n\t\t\t\t\t\tif_cmd_line(tolerate_node_failures_opt)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\ttolerate_node_failures_opt = passet;\n\t\t\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_tolerate_node_failures, valuewd);\n\t\t\t\t\t\t}\n\t\t\t\t\t} else if (strcmp(keyword, ATTR_max_run_subjobs) == 0) {\n\t\t\t\t\t\tif (max_run_opt == FALSE) {\n\t\t\t\t\t\t\tmax_run_opt = TRUE;\n\t\t\t\t\t\t\tset_attr_error_exit(&attrib, keyword, valuewd);\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tfprintf(stderr, \"%s\", MULTIPLE_MAX_RUN);\n\t\t\t\t\t\t\terrflg++;\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tset_attr_error_exit(&attrib, keyword, valuewd);\n\t\t\t\t\t}\n\t\t\t\t\ti = parse_equal_string(NULL, &keyword, &valuewd);\n\t\t\t\t} /* bottom of long while loop */\n\t\t\t\tif (i == -1) {\n\t\t\t\t\tfprintf(stderr, \"%s\", BAD_W);\n\t\t\t\t\terrflg++;\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase 'X':\n\t\t\t\tif_cmd_line(Forwardx11_opt)\n\t\t\t\t{\n\t\t\t\t\tForwardx11_opt = passet;\n#if !defined(PBS_NO_POSIX_VIOLATION) && !defined(WIN32)\n\t\t\t\t\tif (!(display = getenv(\"DISPLAY\"))) {\n\t\t\t\t\t\tfprintf(stderr, \"qsub: DISPLAY not set\\n\");\n\t\t\t\t\t\terrflg++;\n\t\t\t\t\t}\n#endif\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'G':\n\t\t\t\tif_cmd_line(gui_opt)\n\t\t\t\t{\n\t\t\t\t\tgui_opt = passet;\n\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_GUI, \"TRUE\");\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 
'z':\n\t\t\t\tif_cmd_line(z_opt) z_opt = passet;\n\t\t\t\tbreak;\n\t\t\tcase '?':\n\t\t\tdefault:\n\t\t\t\terrflg++;\n\t\t}\n\t}\n\tif ((block_opt == passet) && (Interact_opt == FALSE))\n\t\tset_attr_error_exit(&attrib, ATTR_block, block_port());\n\tif ((Forwardx11_opt == CMDLINE) && (Interact_opt == FALSE) && (errflg == 0)) {\n\t\tfprintf(stderr, \"qsub: X11 Forwarding possible only for \"\n\t\t\t\t\"interactive jobs\\n\");\n\t\texit_qsub(1);\n\t}\n\tif ((gui_opt == CMDLINE) && (Interact_opt == FALSE)) {\n\t\tfprintf(stderr, INTER_GUI_WARN);\n\t\tgui_opt = FALSE;\n\t\texit_qsub(1);\n\t}\n\n\tif (errflg == 0 && J_opt == 0 && get_attr(attrib, ATTR_m, NULL) != NULL &&\n\t    strchr(get_attr(attrib, ATTR_m, NULL), 'j') != NULL) {\n\t\tfprintf(stderr, \"qsub: mail option 'j' can not be used without array job\\n\");\n\t\texit_qsub(1);\n\t}\n\n\t/*\n\t * If argv[optind] points to '--' string, then\n\t * decrement optind, so that it would always point\n\t * to first non-command line option.\n\t * And also confirm if \"--\" was consumed by getopt\n\t * and not used as an argument value.\n\t * If used as an argument value, we cannot use it as\n\t * an indicator that an executable name follows the \"--\".\n\t */\n\tif (strcmp(argv[optind - 1], \"--\") == 0) {\n\t\tif (ddash_index != optind - 1)\n\t\t\toptind--;\n\t\telse\n\t\t\terrflg++;\n\t}\n\n\tif ((optind != 0) && (argc > 1) && (argv[optind] != NULL)) {\n\t\t/* Now, optind is pointing to first non-command line option */\n\t\tchar *s = argv[optind];\n\t\tif ((s[0] == '-') && (s[1] == '-') && (s[2] == '\\0')) {\n\t\t\t/* optind points to '--', it should not be last character */\n\t\t\tif (optind == (argc - 1))\n\t\t\t\terrflg++;\n\t\t} else {\n\t\t\t/* optind points to 'script-file path' */\n\t\t\t/* It should be a last argument in command-line options */\n\t\t\tif (optind != (argc - 1))\n\t\t\t\terrflg++;\n\t\t}\n\t}\n\tif (!errflg && passet != CMDLINE) {\n\t\terrflg = (optind != argc);\n\t}\n\t/* use PBS_SHELL if 
specified only if -S was not specified */\n\tif (S_opt == FALSE) {\n\t\tchar *c = getenv(\"PBS_SHELL\");\n\t\tif (c)\n\t\t\tset_attr_error_exit(&attrib, ATTR_S, c);\n\t}\n\n\tif (u_opt && cred_name[0]) {\n\t\tfprintf(stderr, \"qsub: credential incompatible with -u\\n\");\n\t\terrflg++;\n\t}\n\treturn (errflg);\n}\n\n/**\n * @brief\n *  Process special arguments.\n *  The \"--\" argument indicates an executable and possible arguments to that\n *  executable. qsub will treat that executable and its arguments as the job\n *  rather than reading from a job script.\n *\n * @param[in]  argc         - argument count\n * @param[in]  argv         - pointer to array of argument variables\n * @param[out] script       - path of job script\n * @return     command_flag - indicates whether an executable was specified instead of a job script\n */\nstatic int\nprocess_special_args(int argc, char **argv, char *script)\n{\n\tint command_flag = 0;\n\tchar *arg_list = NULL;\n\tif (optind < argc) {\n\t\tif (strcmp(argv[optind], \"--\") == 0) {\n\t\t\tcommand_flag = 1;\n\t\t\t/* set executable */\n\t\t\tset_attr_error_exit(&attrib, ATTR_executable, argv[optind + 1]);\n\t\t\tif (argc > (optind + 2)) {\n\t\t\t\t/* user has specified arguments to the executable as well. 
*/\n\t\t\t\targ_list = encode_xml_arg_list(optind + 2, argc, argv);\n\t\t\t\tif (arg_list == NULL) {\n\t\t\t\t\tfprintf(stderr, \"qsub: out of memory\\n\");\n\t\t\t\t\texit_qsub(2);\n\t\t\t\t} else {\n\t\t\t\t\t/* set argument list */\n\t\t\t\t\tset_attr_error_exit(&attrib, ATTR_Arglist, arg_list);\n\t\t\t\t\tfree(arg_list);\n\t\t\t\t\targ_list = NULL;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (!N_opt) /* '-N' is not set */\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_N, \"STDIN\");\n\t\t} else {\n\t\t\tif (optind + 1 != argc) {\n\t\t\t\t/* argument is a job script, it should be last */\n\t\t\t\tprint_usage();\n\t\t\t\texit_qsub(2);\n\t\t\t}\n\t\t\tsnprintf(script, MAXPATHLEN, \"%s\", argv[optind]);\n\t\t}\n\t}\n\treturn command_flag;\n}\n\n/**\n * @brief\n * \tprocesses and creates arguments passed for qsub\n *\n * @param[in] argc - argument count\n * @param[in] argv - pointer to array of argument variables\n * @param[in] line - character pointer for whole line\n *\n */\nstatic void\nmake_argv(int *argc, char *argv[], char *line)\n{\n\tchar *l, *b, *c;\n\tchar static_buffer[MAX_LINE_LEN + 1];\n\tchar *buffer;\n\tint line_len = 0;\n\tint len;\n\tchar quote;\n\tint i;\n\n\t*argc = 0;\n\targv[(*argc)++] = \"qsub\";\n\tl = line;\n\tline_len = strlen(line);\n\tif (line_len > MAX_LINE_LEN) {\n\t\tbuffer = malloc(line_len + 1);\n\t\tif (buffer == NULL) {\n\t\t\tfprintf(stderr, \"qsub: out of memory\\n\");\n\t\t\texit_qsub(2);\n\t\t}\n\t} else\n\t\tbuffer = static_buffer;\n\tb = buffer;\n\twhile (isspace(*l))\n\t\tl++;\n\tc = l;\n\twhile (*c != '\\0') {\n\t\tif ((*c == '\"') || (*c == '\\'')) {\n\t\t\tquote = *c;\n\t\t\tc++;\n\t\t\twhile ((*c != quote) && *c)\n\t\t\t\t*b++ = *c++;\n\t\t\tif (*c == '\\0') {\n\t\t\t\tfprintf(stderr, \"qsub: unmatched %c\\n\", *c);\n\t\t\t\texit_qsub(1);\n\t\t\t}\n\t\t\tc++;\n\t\t} else if (*c == ESC_CHAR) {\n\t\t\tc++;\n\t\t\t*b++ = *c++;\n\t\t} else if (isspace(*c)) {\n\t\t\tlen = c - l;\n\t\t\tfree(argv[*argc]);\n\t\t\targv[*argc] = (char *) malloc(len 
+ 1);\n\t\t\tif (argv[*argc] == NULL) {\n\t\t\t\tfprintf(stderr, \"qsub: out of memory\\n\");\n\t\t\t\texit_qsub(2);\n\t\t\t}\n\t\t\t*b = '\\0';\n\t\t\tstrcpy(argv[(*argc)++], buffer);\n\t\t\twhile (isspace(*c))\n\t\t\t\tc++;\n\t\t\tl = c;\n\t\t\tb = buffer;\n\t\t} else\n\t\t\t*b++ = *c++;\n\t}\n\tif (c != l) {\n\t\tlen = c - l;\n\t\tfree(argv[*argc]);\n\t\targv[*argc] = (char *) malloc(len + 1);\n\t\tif (argv[*argc] == NULL) {\n\t\t\tfprintf(stderr, \"qsub: out of memory\\n\");\n\t\t\texit_qsub(2);\n\t\t}\n\t\t*b = '\\0';\n\t\tstrcpy(argv[(*argc)++], buffer);\n\t}\n\ti = *argc;\n\t/*\n\t * free and null any pointers used for the prior call that are not used\n\t * for this line. Otherwise the argv array would not be null terminated\n\t */\n\twhile (argv[i] != NULL) {\n\t\tfree(argv[i]);\n\t\targv[i++] = NULL;\n\t}\n\tif (buffer != static_buffer)\n\t\tfree(buffer);\n}\n\n/**\n * @brief\n *      Create and process qsub argument list from the string 'opts'\n *\n * @param[in]\topts     - The qsub options as single parameter.\n * @param[in]   opt_pass - priority set based on precedence.\n *\n * @return      int\n * @retval\t>0 - Failure - Other than PBS directive error.\n * @retval      -1 - Failure - PBS directive error.\n * @retval\t 0 - Success\n *\n */\nint\ndo_dir(char *opts, int opt_pass, char *retmsg, size_t ret_size)\n{\n\tint argc;\n\tint ret = -1;\n\tint index = 0;\n\tint len = 0;\n\tint nxt_pos = 0;\n\tsize_t max_size = ret_size - 2 /* 2 deducted for adding newline at end */;\n#define MAX_ARGV_LEN 128\n\tstatic char *vect[MAX_ARGV_LEN + 1];\n\n\tmake_argv(&argc, vect, opts);\n\tret = process_opts(argc, vect, opt_pass);\n\tif ((ret != 0) && (opt_pass != CMDLINE)) {\n\t\tnxt_pos = snprintf(retmsg, max_size, \"qsub: directive error: \");\n\t\tif (nxt_pos < 0)\n\t\t\treturn (ret);\n\t\tmax_size = max_size - nxt_pos;\n\t\tfor (index = 1; index < argc; index++) {\n\t\t\t/* +1 is added to strlen(vect[index]) to reserve space */\n\t\t\tif ((max_size > 0) && (max_size 
> strlen(vect[index]) + 1)) {\n\t\t\t\tlen = snprintf(retmsg + nxt_pos, max_size, \"%s \", vect[index]);\n\t\t\t\tif (len < 0)\n\t\t\t\t\tbreak;\n\t\t\t\tnxt_pos = nxt_pos + len;\n\t\t\t\tmax_size = max_size - len;\n\t\t\t} else {\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tsnprintf(retmsg + nxt_pos, 2, \"\\n\");\n\t\treturn (-1);\n\t}\n\treturn (ret);\n}\n\n/*\n * @brief\n *\tset_opt_defaults - if not already set, set certain job attributes to\n *\ttheir default value\n *\n */\nstatic void\nset_opt_defaults(void)\n{\n\tif (c_opt == FALSE)\n\t\tset_attr_error_exit(&attrib, ATTR_c, CHECKPOINT_UNSPECIFIED);\n\tif (h_opt == FALSE)\n\t\tset_attr_error_exit(&attrib, ATTR_h, NO_HOLD);\n\tif (j_opt == FALSE)\n\t\tset_attr_error_exit(&attrib, ATTR_j, NO_JOIN);\n\tif (k_opt == FALSE)\n\t\tset_attr_error_exit(&attrib, ATTR_k, NO_KEEP);\n\tif (m_opt == FALSE)\n\t\tset_attr_error_exit(&attrib, ATTR_m, MAIL_AT_ABORT);\n\tif (p_opt == FALSE)\n\t\tset_attr_error_exit(&attrib, ATTR_p, \"0\");\n\tif (r_opt == FALSE)\n\t\tset_attr_error_exit(&attrib, ATTR_r, \"TRUE\");\n}\n\n/* End of \"Options Processing\" functions. */\n\n/**\n * @brief\n *\tReturns the directory prefix string, which is chosen from the following possibilities:\n *\t1. the prefix parameter, if not empty\n *\t2. an empty string\n *\t3. the PBS_DPREFIX environment variable\n *\t4. 
the PBS_DPREFIX_DEFAULT constant\n *\n * @param[in] prefix - string to be prefixed\n * @param[in] diropt - boolean value indicating directory prefix to be set or not\n *\n * @return String\n * @retval Success - pbs directory prefix\n * @retval Failure - NULL\n *\n */\nstatic char *\nset_dir_prefix(char *prefix, int diropt)\n{\n\tchar *s;\n\n\tif (notNULL(prefix))\n\t\treturn (prefix);\n\telse if (diropt != FALSE)\n\t\treturn (\"\");\n\telse if ((s = getenv(\"PBS_DPREFIX\")) != NULL)\n\t\treturn (s);\n\telse\n\t\treturn (PBS_DPREFIX_DEFAULT);\n}\n\n/**\n * @brief\n * Read the job script from a file or stdin.\n *\n * @param[in] script - path of job script to read from\n */\nstatic void\nread_job_script(char *script)\n{\n\textern char read_script_msg[];\n\tint errflg; /* error code from get_script() */\n\tstruct stat statbuf;\n\tchar *bnp;\n\tchar basename[MAXPATHLEN + 1]; /* base name of script for job name*/\n\tFILE *f;\t\t       /* FILE pointer to the script */\n\n\t/* if script is empty, get standard input */\n\tif ((strcmp(script, \"\") == 0) || (strcmp(script, \"-\") == 0)) {\n\t\t/* if this is a terminal, print a short info */\n\t\tif (isatty(STDIN_FILENO) && Interact_opt == FALSE) {\n\t\t\tprintf(\"%s\", read_script_msg);\n\t\t}\n\n\t\tif (!N_opt)\n\t\t\tset_attr_error_exit(&attrib, ATTR_N, \"STDIN\");\n\t\tif (Interact_opt == FALSE) {\n\t\t\terrflg = get_script(stdin, script_tmp, set_dir_prefix(dir_prefix, C_opt));\n\t\t\tif (errflg > 0) {\n\t\t\t\t(void) unlink(script_tmp);\n\t\t\t\texit_qsub(1);\n\t\t\t} else if (errflg < 0) {\n\t\t\t\texit_qsub(1);\n\t\t\t}\n\t\t}\n\t} else { /* non-empty script, read it for directives */\n\t\tif (stat(script, &statbuf) < 0) {\n\t\t\tperror(\"qsub: script file:\");\n\t\t\texit_qsub(1);\n\t\t}\n\t\tif (!S_ISREG(statbuf.st_mode)) {\n\t\t\tfprintf(stderr, \"qsub: script not a file\\n\");\n\t\t\texit_qsub(1);\n\t\t}\n\t\tif ((f = fopen(script, \"r\")) != NULL) {\n\t\t\tif (!N_opt) {\n\t\t\t\tif ((bnp = strrchr(script, (int) 
'/')) != NULL)\n\t\t\t\t\tbnp++;\n\t\t\t\telse\n\t\t\t\t\tbnp = script;\n\n\t\t\t\tsnprintf(basename, sizeof(basename), \"%s\", bnp);\n\t\t\t\t/*\n\t\t\t\t * set ATTR_N directly - verification would be done\n\t\t\t\t * by IFL later\n\t\t\t\t */\n\t\t\t\tset_attr_error_exit(&attrib, ATTR_N, basename);\n\t\t\t}\n\t\t\terrflg = get_script(f, script_tmp, set_dir_prefix(dir_prefix, C_opt));\n\t\t\tif (errflg > 0) {\n\t\t\t\t(void) unlink(script_tmp);\n\t\t\t\texit_qsub(1);\n\t\t\t} else if (errflg < 0) {\n\t\t\t\texit_qsub(1);\n\t\t\t}\n\t\t\t(void) fclose(f);\n\t\t\tf = NULL;\n\t\t} else {\n\t\t\tperror(\"qsub: opening script file:\");\n\t\t\texit_qsub(8);\n\t\t}\n\t}\n}\n\n/* End of \"Job Script\" functions. */\n\n/* The following functions support the \"Environment Variables\" feature of qsub. */\n\n/**\n * @brief\n *\tConstructs the basic comma-separated environment variables\n *\tlist string for a PBS job.\n *\n * @return\tchar *\n * @retval\tNULL for failure.\n * @retval\tA comma-separated list of environment variable=value entries.\n *\n */\nstatic char *\njob_env_basic(void)\n{\n\tchar *job_env = NULL;\n\tchar *s = NULL;\n\tchar *c = NULL;\n\tchar *p = NULL;\n\tchar *env = NULL;\n#ifdef WIN32\n\tOSVERSIONINFO os_info;\n#else\n\tstruct utsname uns;\n#endif\n\tint len = 0;\n\n\t/* Calculate how big to make the variable string. 
*/\n\tlen = 0;\n\tenv = strdup_esc_commas(getenv(\"HOME\"));\n\tif (env != NULL) {\n\t\tlen += strlen(env);\n\t\tfree(env);\n\t}\n\tenv = strdup_esc_commas(getenv(\"LANG\"));\n\tif (env != NULL) {\n\t\tlen += strlen(env);\n\t\tfree(env);\n\t}\n\tenv = strdup_esc_commas(getenv(\"LOGNAME\"));\n\tif (env != NULL) {\n\t\tlen += strlen(env);\n\t\tfree(env);\n\t}\n\tenv = strdup_esc_commas(getenv(\"PATH\"));\n\tif (env != NULL) {\n\t\tlen += strlen(env);\n\t\tfree(env);\n\t}\n\tenv = strdup_esc_commas(getenv(\"MAIL\"));\n\tif (env != NULL) {\n\t\tlen += strlen(env);\n\t\tfree(env);\n\t}\n\tenv = strdup_esc_commas(getenv(\"SHELL\"));\n\tif (env != NULL) {\n\t\tlen += strlen(env);\n\t\tfree(env);\n\t}\n\tenv = strdup_esc_commas(getenv(\"TZ\"));\n\tif (env != NULL) {\n\t\tlen += strlen(env);\n\t\tfree(env);\n\t}\n\tenv = strdup_esc_commas(pbs_conf.interactive_auth_method);\n\tif (env != NULL) {\n\t\tlen += strlen(env);\n\t\tfree(env);\n\t}\n\tenv = strdup_esc_commas(pbs_conf.interactive_encrypt_method);\n\tif (env != NULL) {\n\t\tlen += strlen(env);\n\t\tfree(env);\n\t}\n\tlen += PBS_MAXHOSTNAME;\n\tlen += MAXPATHLEN;\n\tlen *= 2; /* Double it for all the commas, etc. */\n\n\tif ((job_env = (char *) malloc(len)) == NULL) {\n\t\tfprintf(stderr, \"malloc failure (errno %d)\\n\", errno);\n\t\treturn NULL;\n\t}\n\tmemset(job_env, '\\0', len);\n\n\t/* Send the required variables with the job. 
*/\n\tc = strdup_esc_commas(getenv(\"HOME\"));\n\tstrcat(job_env, \"PBS_O_HOME=\");\n\tif (c != NULL) {\n\t\tfix_path(c, 1);\n\t\tstrcat(job_env, c);\n\t\tfree(c);\n\t} else\n\t\tstrcat(job_env, \"/\");\n\tc = strdup_esc_commas(getenv(\"LANG\"));\n\tif (c != NULL) {\n\t\tstrcat(job_env, \",PBS_O_LANG=\");\n\t\tstrcat(job_env, c);\n\t\tfree(c);\n\t}\n\tc = strdup_esc_commas(getenv(\"LOGNAME\"));\n\tif (c != NULL) {\n\t\tstrcat(job_env, \",PBS_O_LOGNAME=\");\n\t\tstrcat(job_env, c);\n\t\tfree(c);\n\t}\n\tc = strdup_esc_commas(getenv(\"PATH\"));\n\tif (c != NULL) {\n\t\tfix_path(c, 1);\n\t\tstrcat(job_env, \",PBS_O_PATH=\");\n\t\tstrcat(job_env, c);\n\t\tfree(c);\n\t}\n\tc = strdup_esc_commas(getenv(\"MAIL\"));\n\tif (c != NULL) {\n\t\tfix_path(c, 1);\n\t\tstrcat(job_env, \",PBS_O_MAIL=\");\n\t\tstrcat(job_env, c);\n\t\tfree(c);\n\t}\n\tc = strdup_esc_commas(getenv(\"SHELL\"));\n\tif (c != NULL) {\n\t\tfix_path(c, 1);\n\t\tstrcat(job_env, \",PBS_O_SHELL=\");\n\t\tstrcat(job_env, c);\n\t\tfree(c);\n\t}\n\n\tc = strdup_esc_commas(getenv(\"TZ\"));\n\tif (c != NULL) {\n\t\tstrcat(job_env, \",PBS_O_TZ=\");\n\t\tstrcat(job_env, c);\n\t\tfree(c);\n\t}\n\tc = strdup_esc_commas(pbs_conf.interactive_auth_method);\n\tif (c != NULL) {\n\t\tif (*job_env)\n\t\t\tstrcat(job_env, \",PBS_O_INTERACTIVE_AUTH_METHOD=\");\n\t\telse\n\t\t\tstrcat(job_env, \"PBS_O_INTERACTIVE_AUTH_METHOD=\");\n\t\tstrcat(job_env, c);\n\t\tfree(c);\n\t}\n\tc = strdup_esc_commas(pbs_conf.interactive_encrypt_method);\n\tif (c != NULL) {\n\t\tif (*job_env)\n\t\t\tstrcat(job_env, \",PBS_O_INTERACTIVE_ENCRYPT_METHOD=\");\n\t\telse\n\t\t\tstrcat(job_env, \"PBS_O_INTERACTIVE_ENCRYPT_METHOD=\");\n\t\tstrcat(job_env, c);\n\t\tfree(c);\n\t}\n\t/*\n\t * Don't detect the hostname here because it utilizes network services\n\t * that slow everything down. PBS_O_HOST is set in the daemon later on.\n\t */\n\n\t/* get current working directory, use $PWD if available, it is more\n\t * NFS automounter \"friendly\". 
But must double check that is right\n\t */\n\ts = job_env + strlen(job_env);\n\tstrcat(job_env, \",PBS_O_WORKDIR=\");\n\tc = getenv(\"PWD\");\n\tif (c != NULL) {\n\t\tstruct stat statbuf;\n\t\tdev_t dev;\n\t\tino_t ino;\n\n\t\tif (stat(c, &statbuf) < 0) {\n\t\t\t/* cannot stat, cannot trust it */\n\t\t\tc = NULL;\n\t\t} else {\n\t\t\tdev = statbuf.st_dev;\n\t\t\tino = statbuf.st_ino;\n\t\t\tif (stat(\".\", &statbuf) < 0) {\n\t\t\t\tperror(\"qsub: cannot stat current directory: \");\n\t\t\t\tfree(job_env);\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\t/* compare against \".\" */\n\t\t\tif ((dev != statbuf.st_dev) || (ino != statbuf.st_ino))\n\t\t\t\t/* \".\" and $PWD is different, cannot trust it */\n\t\t\t\tc = NULL;\n\t\t}\n\t}\n\n\tif (c == NULL) {\n\t\tp = c = job_env + strlen(job_env);\n\t\tif (getcwd(c, MAXPATHLEN) == NULL)\n\t\t\tc = NULL;\n\t} else\n\t\tp = job_env + strlen(job_env);\n\n\tif (c != NULL) {\n\t\tchar *c_escaped = NULL;\n\n\t\t/* save current working dir for daemon */\n\t\tsnprintf(qsub_cwd, sizeof(qsub_cwd), \"%s\", c);\n\t\t/* get UNC path (if available) if it is mapped drive */\n\t\tget_uncpath(c);\n\t\tc_escaped = strdup_esc_commas(c);\n\t\tif (c_escaped != NULL) {\n\t\t\tfix_path(c_escaped, 1);\n\t\t\tpbs_strncpy(p, c_escaped, len - (p - job_env));\n\t\t\tfree(c_escaped);\n\t\t\tc_escaped = NULL;\n\t\t} else\n\t\t\t*s = '\\0';\n\t} else\n\t\t*s = '\\0';\n\n#ifdef WIN32 /* Windows */\n\tos_info.dwOSVersionInfoSize = sizeof(OSVERSIONINFO);\n\tif (GetVersionEx(&os_info)) {\n\t\tswitch (os_info.dwPlatformId) {\n\t\t\tcase 0:\n\t\t\t\tstrcat(job_env, \",PBS_O_SYSTEM=VER_PLATFORM_WIN32s\");\n\t\t\t\tbreak;\n\t\t\tcase 1:\n\t\t\t\tstrcat(job_env, \",PBS_O_SYSTEM=VER_PLATFORM_WIN32_WINDOWS\");\n\t\t\t\tbreak;\n\t\t\tcase 2:\n\t\t\t\tstrcat(job_env, \",PBS_O_SYSTEM=VER_PLATFORM_WIN32_NT\");\n\t\t\t\tbreak;\n\t\t}\n\t}\n#else /* Unix */\n\tif (uname(&uns) != -1) {\n\t\tstrcat(job_env, \",PBS_O_SYSTEM=\");\n\t\tstrcat(job_env, 
uns.sysname);\n\t}\n#endif\n\telse {\n\t\tperror(\"qsub: cannot get uname info:\");\n\t\tfree(job_env);\n\t\treturn NULL;\n\t}\n\n\treturn (job_env);\n}\n\n/**\n * @brief\n *\tConverts an array of environment variable=value strings,\n *\tinto a comma-separated variables list string that can be\n *\texported to a job.\n *\n * @par\t NOTE: Variables in the list beginning with \"PBS_O\" are ignored\n *\t as these will be preconstructed somewhere else.\n *\t PBS_JOBCOOKIE and PBS_INTERACTIVE_COOKIE are also ignored\n *\n * @param[in]\tenvp - array of strings making up the current environment.\n *\n * @return      char *\n * @retval      NULL - Failure\n * @retval      A comma-separated list of environment variables and values.\n *\t\tThe returned string is malloc-ed so it must be freed later.\n */\nstatic char *\nenv_array_to_varlist(char **envp)\n{\n\tchar **evp;\n\tint len;\n\tchar *job_env = NULL;\n\tchar *s;\n\n\tif (envp == NULL) {\n\t\tfprintf(stderr, \"env_array_to_varlist: no envp array!\\n\");\n\t\treturn NULL;\n\t}\n\n\tevp = envp;\n\tlen = 0;\n\twhile (notNULL(*evp)) {\n\t\tlen += strlen(*evp);\n\t\tevp++;\n\t}\n\tlen += len; /* Double it for all the commas, etc. */\n\n\t/* allocate len + 1 so an empty environment still leaves room for the terminating NUL */\n\tif ((job_env = (char *) malloc(len + 1)) == NULL) {\n\t\tfprintf(stderr, \"env_array_to_varlist: malloc failure errno=%d\\n\", errno);\n\t\treturn NULL;\n\t}\n\n\t*job_env = '\\0';\n\n\tevp = envp;\n\twhile (notNULL(*evp)) {\n\t\ts = *evp;\n\t\twhile ((*s != '=') && *s)\n\t\t\t++s;\n\t\t*s = '\\0';\n\t\t/* Check for PBS_O_, PBS_JOBCOOKIE, and PBS_INTERACTIVE_COOKIE\n\t\t * PBS_O_* env variables are set by qsub\n\t\t * Job related cookies should not be sent and are excluded to\n\t\t * prevent exposing internal PBS interactive session information. 
*/\n\t\tif ((strncmp(*evp, PBS_O_ENV, sizeof(PBS_O_ENV) - 1) != 0) &&\n\t\t\t(strcmp(*evp, PBS_JOBCOOKIE) != 0) &&\n\t\t\t(strcmp(*evp, PBS_INTERACTIVE_COOKIE) != 0)) {\n\t\t\tstrcat(job_env, \",\");\n\t\t\tstrcat(job_env, *evp);\n\t\t\tstrcat(job_env, \"=\");\n\t\t\tfix_path(s + 1, 1);\n\t\t\t(void) copy_env_value(job_env, s + 1, 1);\n\t\t}\n\t\t*s = '=';\n\t\tevp++;\n\t}\n\n\treturn (job_env);\n}\n\n/**\n * @brief\n *\tAdds to the global 'attrib' structure an entry:\n *\n *\t\"-v <basic_vlist>,<v_value>,<current_vlist>\"\n *\tand this 'attrib' is something that will be passed onto a\n *\tPBS job before submission.\n *\n * @param[in]\tbasic_vlist - the basic variables list string of the job.\n * @param[in]\tcurrent_vlist - the variables list\n *\t\tstring representing the environment where qsub was\n *\t\tinvoked.\n *\n * @return\tboolean (int)\n * @retval\tTRUE for success.\n * @retval\tFALSE for failure.\n *\n */\nstatic int\nset_job_env(char *basic_vlist, char *current_vlist)\n{\n\tchar *job_env;\n\tint len;\n\n\tchar *s, *c, *env, l, *pc;\n\n\t/* Calculate how big to make the variable string. */\n\tlen = 0;\n\tif (v_opt)\n\t\tlen += strlen(v_value);\n\n\tif ((basic_vlist == NULL) || (basic_vlist[0] == '\\0'))\n\t\treturn FALSE;\n\n\tlen += strlen(basic_vlist);\n\n\tif (V_opt && (current_vlist != NULL) && (current_vlist[0] != '\\0'))\n\t\tlen += strlen(current_vlist);\n\n\tlen += len; /* Double it for all the commas, etc. */\n\tif ((job_env = (char *) malloc(len)) == NULL)\n\t\treturn FALSE;\n\t*job_env = '\\0';\n\n\tpbs_strncpy(job_env, basic_vlist, len);\n\n\t/* Send these variables with the job. */\n\t/* POSIX requirement: If a variable is given without a value, supply the\n\t value from the environment. */\n\t/* MY requirement: There can be no white space in -v value. 
*/\n\tif (v_opt) {\n\t\tc = v_value;\n\tstate1: /* Initial state comes here */\n\t\tswitch (*c) {\n\t\t\tcase ',':\n\t\t\tcase '=':\n\t\t\t\tfree(job_env);\n\t\t\t\treturn FALSE;\n\t\t\tcase '\\0':\n\t\t\t\tgoto final;\n\t\t}\n\t\ts = c;\n\tstate2: /* Variable name */\n\t\tswitch (*c) {\n\t\t\tcase ',':\n\t\t\tcase '\\0':\n\t\t\t\tgoto state3;\n\t\t\tcase '=':\n\t\t\t\tgoto state4;\n\t\t\tdefault:\n\t\t\t\tc++;\n\t\t\t\tgoto state2;\n\t\t}\n\tstate3: /* No value - get it from qsub environment */\n\n\t\t/* From state3, goes back to state1, using 'c' as input */\n\t\tl = *c;\n\t\t*c = '\\0';\n\t\tif (strncmp(s, PBS_O_ENV, sizeof(PBS_O_ENV) - 1) != 0) {\n\t\t\t/* do not add PBS_O_* env variables, as these are set by qsub */\n\n\t\t\tenv = getenv(s);\n\t\t\tif (env == NULL) {\n\t\t\t\tfree(job_env);\n\t\t\t\treturn FALSE;\n\t\t\t}\n\n\t\t\tstrcat(job_env, \",\");\n\t\t\tstrcat(job_env, s);\n\t\t\tstrcat(job_env, \"=\");\n\t\t\tfix_path(env, 1);\n\t\t\tif (copy_env_value(job_env, env, 1) == NULL) {\n\t\t\t\tfree(job_env);\n\t\t\t\treturn FALSE;\n\t\t\t}\n\t\t}\n\n\t\tif (l == ',')\n\t\t\tc++;\n\t\tgoto state1;\n\tstate4: /* Value specified */\n\n\t\t/* From state4, goes back to state1, using 'c' as input */\n\t\t*c++ = '\\0';\n\t\tif (v_opt && Forwardx11_opt) {\n\t\t\tif (strcmp(s, \"DISPLAY\") == 0) {\n\t\t\t\tx11_disp = TRUE;\n\t\t\t\tfree(job_env);\n\t\t\t\treturn FALSE;\n\t\t\t}\n\t\t}\n\t\tpc = job_env + strlen(job_env);\n\t\t(void) strcat(job_env, \",\");\n\t\t(void) strcat(job_env, s);\n\t\t(void) strcat(job_env, \"=\");\n\t\tfix_path(c, 1);\n\t\tif ((c = copy_env_value(job_env, c, 0)) == NULL) {\n\t\t\tfree(job_env);\n\t\t\treturn FALSE;\n\t\t}\n\n\t\t/* Have to undo here, since 'c' was incremented by copy_env_value */\n\t\tif (strncmp(s, PBS_O_ENV, sizeof(PBS_O_ENV) - 1) == 0)\n\t\t\t/* ignore PBS_O_ env variables as these are created by qsub */\n\t\t\t*pc = '\\0';\n\n\t\tgoto state1;\n\t}\n\nfinal:\n\n\tif (V_opt && (current_vlist != NULL) && 
(current_vlist[0] != '\\0'))\n\t\t/* Send every environment variable with the job. */\n\t\tstrcat(job_env, current_vlist);\n\n\tset_attr_error_exit(&attrib, ATTR_v, job_env);\n\tfree(job_env);\n\n\treturn TRUE;\n}\n\n/* End of \"Environment Variables\" functions. */\n\n/* The following functions support the \"Daemon\" capability of qsub. */\n\n/*\n * static buffer and length used by various messages for communication\n * between the qsub foreground and background process\n */\nstatic char *daemon_buf = NULL;\nstatic int daemon_buflen = 0;\n\n/**\n * @brief\n *  Resize the static variable daemon_buf.\n *\n * @param bufused - Amount of the buffer already used\n * @param lenreq - Amount of length required by new data\n *\n * @return - Error code\n * @retval - 0 - Success\n * @retval - -1 - Error\n *\n */\nstatic int\nresize_daemon_buf(int bufused, int lenreq)\n{\n\tchar *p;\n\tint new_buflen = lenreq + bufused;\n\n\tif (daemon_buflen < new_buflen) {\n\t\tnew_buflen += 1000; /* adding 1000 so that we realloc fewer times */\n\t\tp = realloc(daemon_buf, new_buflen);\n\t\tif (p == NULL) {\n\t\t\tfree(daemon_buf);\n\t\t\tdaemon_buf = NULL;\n\t\t\tdaemon_buflen = 0;\n\t\t\treturn -1;\n\t\t}\n\t\tdaemon_buf = p;\n\t\tdaemon_buflen = new_buflen;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tSend the attrl list to the background qsub process. 
This is the\n * \tattribute list that was created by the foreground process based on\n *\tthe options that the user has provided to qsub.\n *\n * @param[in]\ts - pointer to the windows PIPE or Unix domain socket\n * @param[in]\tattrib - List of attributes created by foreground qsub process\n *\n * @return int\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\nsend_attrl(void *s, struct attrl *attrib)\n{\n\tint bufused = 0;\n\tint len_n = 0, len_r = 0, len_v = 0;\n\tchar *p;\n\tint lenreq = 0;\n\n\twhile (attrib) {\n\t\tlen_n = strlen(attrib->name) + 1;\n\t\tif (attrib->resource)\n\t\t\tlen_r = strlen(attrib->resource) + 1;\n\t\telse\n\t\t\tlen_r = 0;\n\t\tlen_v = strlen(attrib->value) + 1;\n\n\t\tlenreq = len_n + len_r + len_v + 3 * sizeof(int);\n\t\tif (resize_daemon_buf(bufused, lenreq) != 0)\n\t\t\treturn -1;\n\n\t\t/* write the lengths */\n\t\tp = daemon_buf + bufused;\n\t\tmemmove(p, &len_n, sizeof(int));\n\t\tp += sizeof(int);\n\t\tmemmove(p, &len_r, sizeof(int));\n\t\tp += sizeof(int);\n\t\tmemmove(p, &len_v, sizeof(int));\n\t\tp += sizeof(int);\n\n\t\t/* now add the strings */\n\t\tmemmove(p, attrib->name, len_n);\n\t\tp += len_n;\n\t\tif (len_r > 0) {\n\t\t\tmemmove(p, attrib->resource, len_r);\n\t\t\tp += len_r;\n\t\t}\n\t\tmemmove(p, attrib->value, len_v);\n\t\tp += len_v;\n\n\t\tbufused += lenreq;\n\n\t\tattrib = attrib->next;\n\t}\n\tif ((dosend(s, (char *) &bufused, sizeof(int)) != 0) ||\n\t    (dosend(s, daemon_buf, bufused) != 0))\n\t\treturn -1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \tSend a null terminated string to the peer process. 
Used by background and\n * \tforeground qsub processes to communicate error-strings, job-ids etc.\n *\n * @param[in]\ts - pointer to the windows PIPE or Unix domain socket\n * @param[in]\tstr - null terminated string to send\n *\n * @return int\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\nsend_string(void *s, char *str)\n{\n\tint len = strlen(str) + 1;\n\n\tif ((dosend(s, (char *) &len, sizeof(int)) != 0) ||\n\t    (dosend(s, str, len) != 0))\n\t\treturn -1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tRecv the attrl list from the foreground qsub process. This is the\n * \tattribute list that was created by the foreground process based on\n * \tthe options that the user has provided to qsub.\n *\n * @param[in]\ts - pointer to the windows PIPE or Unix domain socket\n * @param[out]\tattrib - List of attributes created by foreground qsub process\n *\n * @return int\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\nrecv_attrl(void *s, struct attrl **attrib)\n{\n\tint recvlen = 0;\n\tstruct attrl *attr = NULL;\n\tchar *p;\n\tint len_n = 0, len_r = 0, len_v = 0;\n\tchar *attr_v_val = NULL;\n\n\tif (dorecv(s, (char *) &recvlen, sizeof(int)) != 0)\n\t\treturn -1;\n\tif (resize_daemon_buf(0, recvlen) != 0)\n\t\treturn -1;\n\n\tif (dorecv(s, daemon_buf, recvlen) != 0)\n\t\treturn -1;\n\n\tp = daemon_buf;\n\twhile (p - daemon_buf < recvlen) {\n\t\tmemmove(&len_n, p, sizeof(int));\n\t\tp += sizeof(int);\n\t\tmemmove(&len_r, p, sizeof(int));\n\t\tp += sizeof(int);\n\t\tmemmove(&len_v, p, sizeof(int));\n\t\tp += sizeof(int);\n\n\t\tif (len_r > 0) {\n\t\t\t/* strings have null character also in daemon_buf */\n\t\t\tset_attr_resc_error_exit(&attr, p,\n\t\t\t\t\t\t p + len_n,\n\t\t\t\t\t\t p + len_n + len_r);\n\t\t} else {\n\t\t\t/*\n\t\t\t * if value is ATTR_v, we need to add PBS_O_HOSTNAME to it\n\t\t\t * Since determining PBS_O_HOSTNAME is expensive, we do it\n\t\t\t * once in the background qsub, and add it to the list that comes\n\t\t\t * 
from the front end qsub\n\t\t\t */\n\t\t\tif (strcmp(p, ATTR_v) == 0 && pbs_hostvar != NULL) {\n\t\t\t\tint attr_v_len = len_v + strlen(pbs_hostvar) + 1;\n\t\t\t\tattr_v_val = malloc(attr_v_len);\n\t\t\t\tif (!attr_v_val)\n\t\t\t\t\treturn -1;\n\t\t\t\tstrcpy(attr_v_val, p + len_n);\n\t\t\t\tstrcat(attr_v_val, pbs_hostvar);\n\t\t\t\tset_attr_error_exit(&attr, p, attr_v_val);\n\t\t\t\tfree(attr_v_val);\n\t\t\t} else {\n\t\t\t\tset_attr_error_exit(&attr, p, p + len_n);\n\t\t\t}\n\t\t}\n\t\tp += len_n + len_r + len_v;\n\t}\n\t*attrib = attr;\n\treturn 0;\n}\n\n/**\n * @brief\n *  Recv a null terminated string from the peer process. Used by background and\n * \tforeground qsub processes to communicate error-strings, job-ids etc.\n *\n * @param[in]\ts - pointer to the windows PIPE or Unix domain socket\n * @param[out]\tstr - buffer that receives the null terminated string\n *\n * @return int\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\nrecv_string(void *s, char *str)\n{\n\tint len = 0;\n\n\tif ((dorecv(s, (char *) &len, sizeof(int)) != 0) ||\n\t    (dorecv(s, str, len) != 0))\n\t\treturn -1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *  Recv a null terminated string from the peer process. 
Used by background and\n * \tforeground qsub processes to communicate error-strings, job-ids etc.\n * \tThis is like recv_string() except the 'strp' parameter will hold a pointer\n * \tto a newly-malloced string holding the resultant string.\n *\n * @param[in]\ts - pointer to the windows PIPE or Unix domain socket\n * @param[out]\tstrp - holds a pointer to the newly-malloced string.\n *\n * @return int\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\nrecv_dyn_string(void *s, char **strp)\n{\n\tint recvlen = 0;\n\n\tif (dorecv(s, (char *) &recvlen, sizeof(int)) != 0)\n\t\treturn -1;\n\t/* resizes the global 'daemon_buf' array */\n\tif (resize_daemon_buf(0, recvlen) != 0)\n\t\treturn -1;\n\tif (dorecv(s, daemon_buf, recvlen) != 0)\n\t\treturn -1;\n\n\t*strp = strdup(daemon_buf);\n\tif (*strp == NULL)\n\t\treturn -1;\n\treturn 0;\n}\n\n/**\n * @brief\n *\tSend the cmd opt values for each parameter supported by qsub to the\n *\tbackground qsub process.\n *\n * @param[in]\ts - pointer to the windows PIPE or Unix domain socket\n *\n * @return int\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\nsend_opts(void *s)\n{\n\t/*\n\t * we are allocating a fixed size of 100. This is because we know that\n\t * the list of opts to send is going to fit within 100. 
Specifically, for each\n\t * opt we need 2 characters, and currently we have 36 opts.\n\t * If a new set of opts are added, the buffer space of 100 allocated here\n\t * needs to be double checked.\n\t */\n\tif (resize_daemon_buf(0, 100) != 0)\n\t\treturn -1;\n\n\tsprintf(daemon_buf,\n\t\t\"%d %d %d %d %d %d %d %d %d %d \"\n\t\t\"%d %d %d %d %d %d %d %d %d %d \"\n\t\t\"%d %d %d %d %d %d %d %d %d %d \"\n\t\t\"%d %d %d %d %d %d \",\n\t\ta_opt, c_opt, e_opt, h_opt, j_opt,\n\t\tk_opt, l_opt, m_opt, o_opt, p_opt,\n\t\tq_opt, r_opt, u_opt, v_opt, z_opt,\n\t\tA_opt, C_opt, J_opt, M_opt, N_opt,\n\t\tS_opt, V_opt, Depend_opt, Interact_opt, Stagein_opt,\n\t\tStageout_opt, Sandbox_opt, Grouplist_opt, Resvstart_opt,\n\t\tResvend_opt, pwd_opt, cred_opt, block_opt, P_opt,\n\t\trelnodes_on_stageout_opt, tolerate_node_failures_opt);\n\n\treturn (send_string(s, daemon_buf));\n}\n\n/**\n * @brief\n *\tRecv the cmd opt values for each parameter supported by qsub from the\n *\tforeground qsub process.\n *\n * @param[in]\ts - pointer to the windows PIPE or Unix domain socket\n *\n * @return int\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\nrecv_opts(void *s)\n{\n\t/*\n\t * we are allocating a fixed size of 100. This is because we know that\n\t * the list of opts to send is going to fit within 100. 
Specifically, for each\n\t * opt we need 2 characters, and currently we have 36 opts.\n\t * If a new set of opts are added, the buffer space of 100 allocated here\n\t * needs to be double checked.\n\t */\n\tif (resize_daemon_buf(0, 100) != 0)\n\t\treturn -1;\n\n\tif (recv_string(s, daemon_buf) != 0)\n\t\treturn -1;\n\n\tsscanf(daemon_buf,\n\t       \"%d %d %d %d %d %d %d %d %d %d \"\n\t       \"%d %d %d %d %d %d %d %d %d %d \"\n\t       \"%d %d %d %d %d %d %d %d %d %d \"\n\t       \"%d %d %d %d %d %d \",\n\t       &a_opt, &c_opt, &e_opt, &h_opt, &j_opt,\n\t       &k_opt, &l_opt, &m_opt, &o_opt, &p_opt,\n\t       &q_opt, &r_opt, &u_opt, &v_opt, &z_opt,\n\t       &A_opt, &C_opt, &J_opt, &M_opt, &N_opt,\n\t       &S_opt, &V_opt, &Depend_opt, &Interact_opt, &Stagein_opt,\n\t       &Stageout_opt, &Sandbox_opt, &Grouplist_opt, &Resvstart_opt,\n\t       &Resvend_opt, &pwd_opt, &cred_opt, &block_opt, &P_opt,\n\t       &relnodes_on_stageout_opt, &tolerate_node_failures_opt);\n\treturn 0;\n}\n\n/**\n * @brief\n *\tHandles the attribute errors listed from the ECL layer\n *\tby iterating through the err_list parameter. 
It then\n *\tcompares the attribute name and sets an appropriate\n *\terror message in retmsg to be shown to the user.\n *\n * @param[in]\terr_list - The list of attribute errors returned from\n *\t\t\tthe ECL verification layer\n * @param[out] retmsg - The return error message to the caller\n *\t\t\tto be shown to the user\n *\n * @return int\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nstatic int\nhandle_attribute_errors(struct ecl_attribute_errors *err_list, char *retmsg)\n{\n\tstruct attropl *attribute;\n\tchar *opt;\n\tint i;\n\n\tfor (i = 0; i < err_list->ecl_numerrors; i++) {\n\t\tattribute = err_list->ecl_attrerr[i].ecl_attribute;\n\t\tif (strcmp(attribute->name, ATTR_a) == 0)\n\t\t\topt = \"a\";\n\t\telse if (strcmp(attribute->name, ATTR_A) == 0)\n\t\t\topt = \"A\";\n\t\telse if (strcmp(attribute->name, ATTR_project) == 0)\n\t\t\topt = \"P\";\n\t\telse if (strcmp(attribute->name, ATTR_c) == 0)\n\t\t\topt = \"c\";\n\t\telse if (strcmp(attribute->name, ATTR_e) == 0)\n\t\t\topt = \"e\";\n\t\telse if (strcmp(attribute->name, ATTR_h) == 0)\n\t\t\topt = \"h\";\n\t\telse if (strcmp(attribute->name, ATTR_inter) == 0)\n\t\t\topt = \"I\";\n\t\telse if (strcmp(attribute->name, ATTR_j) == 0)\n\t\t\topt = \"j\";\n\t\telse if (strcmp(attribute->name, ATTR_J) == 0)\n\t\t\topt = \"J\";\n\t\telse if (strcmp(attribute->name, ATTR_k) == 0)\n\t\t\topt = \"k\";\n\t\telse if (strcmp(attribute->name, ATTR_l) == 0)\n\t\t\topt = \"l\";\n\t\telse if (strcmp(attribute->name, ATTR_m) == 0)\n\t\t\topt = \"m\";\n\t\telse if (strcmp(attribute->name, ATTR_M) == 0)\n\t\t\topt = \"M\";\n\t\telse if (strcmp(attribute->name, ATTR_N) == 0)\n\t\t\topt = \"N\";\n\t\telse if (strcmp(attribute->name, ATTR_o) == 0)\n\t\t\topt = \"o\";\n\t\telse if (strcmp(attribute->name, ATTR_p) == 0)\n\t\t\topt = \"p\";\n\t\telse if (strcmp(attribute->name, ATTR_r) == 0)\n\t\t\topt = \"r\";\n\t\telse if (strcmp(attribute->name, ATTR_R) == 0)\n\t\t\topt = \"R\";\n\t\telse if 
(strcmp(attribute->name, ATTR_S) == 0)\n\t\t\topt = \"S\";\n\t\telse if (strcmp(attribute->name, ATTR_u) == 0)\n\t\t\topt = \"u\";\n\t\telse if ((strcmp(attribute->name, ATTR_depend) == 0) ||\n\t\t\t (strcmp(attribute->name, ATTR_stagein) == 0) ||\n\t\t\t (strcmp(attribute->name, ATTR_stageout) == 0) ||\n\t\t\t (strcmp(attribute->name, ATTR_sandbox) == 0) ||\n\t\t\t (strcmp(attribute->name, ATTR_g) == 0) ||\n\t\t\t (strcmp(attribute->name, ATTR_inter) == 0) ||\n\t\t\t (strcmp(attribute->name, ATTR_block) == 0) ||\n\t\t\t (strcmp(attribute->name, ATTR_relnodes_on_stageout) == 0) ||\n\t\t\t (strcmp(attribute->name, ATTR_tolerate_node_failures) == 0) ||\n\t\t\t (strcmp(attribute->name, ATTR_resv_start) == 0) ||\n\t\t\t (strcmp(attribute->name, ATTR_resv_end) == 0) ||\n\t\t\t (strcmp(attribute->name, ATTR_umask) == 0) ||\n\t\t\t (strcmp(attribute->name, ATTR_runcount) == 0) ||\n\t\t\t (strcmp(attribute->name, ATTR_cred) == 0))\n\t\t\topt = \"W\";\n\t\telse\n\t\t\treturn 0;\n\n\t\tif (*opt == 'l') {\n\t\t\tsprintf(retmsg, \"qsub: %s\\n\",\n\t\t\t\terr_list->ecl_attrerr[i].ecl_errmsg);\n\t\t\treturn (err_list->ecl_attrerr[i].ecl_errcode);\n\t\t} else if (err_list->ecl_attrerr[i].ecl_errcode == PBSE_JOBNBIG) {\n\t\t\tsprintf(retmsg, \"qsub: Job %s\\n\", err_list->ecl_attrerr[i].ecl_errmsg);\n\t\t\treturn (2);\n\t\t} else {\n\t\t\tsprintf(retmsg, \"qsub: illegal -%s value\\n\", opt);\n\t\t\treturn (2);\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tThis function connects to the pbs_server.\n *\n * @param[in] server_out - The target server name, if any, else NULL\n * @param[out] retmsg\t - Any error string is returned in this parameter\n *\n * @return int\n * @retval 0 - Success\n * @retval 1/pbs_errno - Failure, retmsg parameter is set\n *\n */\nint\ndo_connect(char *server_out, char *retmsg)\n{\n\tint rc = 0;\n\tchar host[PBS_MAXHOSTNAME + 1];\n\n\t/* Set single threaded mode */\n\tpbs_client_thread_set_single_threaded_mode();\n\n\t/* Perform needed security library 
initializations (including none) */\n\tif (CS_client_init() == CS_SUCCESS) {\n\t\tcs_init = 1;\n\t} else {\n\t\tsprintf(retmsg, \"qsub: unable to initialize security library.\\n\");\n\t\treturn 1;\n\t}\n\n\t/* Connect to the server */\n\tif ((Interact_opt == FALSE) && (block_opt == FALSE))\n\t\tsd_svr = cnt2server_extend(server_out, QSUB_DAEMON);\n\telse\n\t\tsd_svr = cnt2server(server_out);\n\n\tif (sd_svr <= 0) {\n\t\tsprintf(retmsg, \"qsub: cannot connect to server %s (errno=%d)\\n\",\n\t\t\tpbs_default() == NULL ? \"\" : pbs_default(), pbs_errno);\n\t\treturn (pbs_errno);\n\t}\n\n\trefresh_dfltqsubargs();\n\n\tpbs_hostvar = malloc(pbs_o_hostsize + PBS_MAXHOSTNAME + 1);\n\tif (!pbs_hostvar) {\n\t\tsprintf(retmsg, \"qsub: out of memory\\n\");\n\t\treturn (2);\n\t}\n\tif ((rc = gethostname(host, (sizeof(host) - 1))) == 0) {\n\t\tif ((rc = get_fullhostname(host, host, (sizeof(host) - 1))) == 0) {\n\t\t\tsnprintf(pbs_hostvar, pbs_o_hostsize + PBS_MAXHOSTNAME + 1, \",PBS_O_HOST=%s\", host);\n\t\t}\n\t}\n\tif (rc != 0) {\n\t\tsprintf(retmsg, \"qsub: cannot get full local host name\\n\");\n\t\treturn (3);\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tThis function does a job submission to the server using the global\n *\tconnected server socket sd_svr.\n *\n * @param[out] retmsg\t - Any error string is returned in this parameter\n *\n * @return int\n * @retval 0 - Success\n * @retval 1/-1/pbs_errno - Failure, retmsg parameter is set\n * @retval DMN_REFUSE_EXIT - If daemon can't submit the job\n *\n */\nstatic int\ndo_submit(char *retmsg)\n{\n\tstruct ecl_attribute_errors *err_list;\n\tchar *new_jobname = NULL;\n\tint rc;\n\tchar *errmsg;\n\tint retries;\n\n\tif (dfltqsubargs != NULL) {\n\t\t/*\n\t\t * Setting options from the server defaults will not overwrite\n\t\t * options set from the job script. 
CMDLINE-2 means\n\t\t * \"one less than job script priority\"\n\t\t */\n\t\tfor (retries = 2; retries > 0; retries--) {\n\t\t\trc = do_dir(dfltqsubargs, CMDLINE - 2, retmsg, MAXPATHLEN);\n\t\t\tif (rc >= 0)\n\t\t\t\tbreak;\n\t\t\tif (retries == 2) {\n\t\t\t\trefresh_dfltqsubargs();\n\t\t\t\tif (pbs_errno != PBSE_NONE)\n\t\t\t\t\treturn (pbs_errno);\n\t\t\t}\n\t\t}\n\t\tif (rc != 0)\n\t\t\treturn (rc);\n\t}\n\n\t/*\n\t * get environment variable if -V option is set. Return the code\n\t * DMN_REFUSE_EXIT if -V option is detected in background qsub.\n\t */\n\tif (V_opt) {\n\t\tif (is_background)\n\t\t\treturn DMN_REFUSE_EXIT;\n\t\tqsub_envlist = env_array_to_varlist(environ);\n\t}\n\n\t/* set_job_env must be done here to pick up -v, -V options passed by default_qsub_arguments */\n\tif (!set_job_env(basic_envlist, qsub_envlist)) {\n\t\tif (x11_disp)\n\t\t\tsnprintf(retmsg, MAXPATHLEN, \"qsub: invalid usage of incompatible option -X with -v DISPLAY\\n\");\n\t\telse\n\t\t\tsnprintf(retmsg, MAXPATHLEN, \"qsub: cannot send environment with the job\\n\");\n\t\treturn 1;\n\t}\n\n\t/* Send submit request to the server. 
*/\n\tpbs_errno = 0;\n\tif (cred_buf) {\n\t\t/* A credential was obtained, call the credential version of submit */\n\t\tnew_jobname = pbs_submit_with_cred(sd_svr, (struct attropl *) attrib,\n\t\t\t\t\t\t   script_tmp, destination, NULL, cred_type,\n\t\t\t\t\t\t   cred_len, cred_buf);\n\t} else {\n\t\tnew_jobname = pbs_submit(sd_svr, (struct attropl *) attrib,\n\t\t\t\t\t script_tmp, destination, NULL);\n\t}\n\tif (new_jobname == NULL) {\n\n\t\tif ((err_list = pbs_get_attributes_in_error(sd_svr))) {\n\t\t\trc = handle_attribute_errors(err_list, retmsg);\n\t\t\tif (rc != 0)\n\t\t\t\treturn rc;\n\t\t}\n\n\t\terrmsg = pbs_geterrmsg(sd_svr);\n\t\tif (errmsg != NULL) {\n\t\t\tif (strcmp(errmsg, msg_force_qsub_update) == 0)\n\t\t\t\treturn PBSE_FORCE_QSUB_UPDATE;\n\t\t\tsprintf(retmsg, \"qsub: %s\\n\", errmsg);\n\t\t} else {\n\t\t\tsprintf(retmsg, \"qsub: Error (%d) submitting job\\n\", pbs_errno);\n\t\t}\n\t\treturn (pbs_errno);\n\t} else {\n\t\tsprintf(retmsg, \"%s\", new_jobname);\n\t\tfree(new_jobname);\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tSave original values of qsub option variables.\n *\n */\nstatic void\nsave_opts(void)\n{\n\t/* save the values */\n\ta_opt_o = a_opt;\n\tc_opt_o = c_opt;\n\te_opt_o = e_opt;\n\th_opt_o = h_opt;\n\tj_opt_o = j_opt;\n\tk_opt_o = k_opt;\n\tl_opt_o = l_opt;\n\tm_opt_o = m_opt;\n\to_opt_o = o_opt;\n\tp_opt_o = p_opt;\n\tq_opt_o = q_opt;\n\tr_opt_o = r_opt;\n\tu_opt_o = u_opt;\n\tv_opt_o = v_opt;\n\tz_opt_o = z_opt;\n\tA_opt_o = A_opt;\n\tC_opt_o = C_opt;\n\tJ_opt_o = J_opt;\n\tM_opt_o = M_opt;\n\tN_opt_o = N_opt;\n\tP_opt_o = P_opt;\n\tS_opt_o = S_opt;\n\tV_opt_o = V_opt;\n\tDepend_opt_o = Depend_opt;\n\tInteract_opt_o = Interact_opt;\n\tStagein_opt_o = Stagein_opt;\n\tStageout_opt_o = Stageout_opt;\n\tSandbox_opt_o = Sandbox_opt;\n\tGrouplist_opt_o = Grouplist_opt;\n\tgui_opt_o = gui_opt;\n\tResvstart_opt_o = Resvstart_opt;\n\tResvend_opt_o = Resvend_opt;\n\tpwd_opt_o = pwd_opt;\n\tcred_opt_o = cred_opt;\n\tblock_opt_o = 
block_opt;\n\trelnodes_on_stageout_opt_o = relnodes_on_stageout_opt;\n\ttolerate_node_failures_opt_o = tolerate_node_failures_opt;\n}\n\n/**\n * @brief\n *\tInitialize qsub option variables to their original values.\n *\n */\nstatic void\nrestore_opts(void)\n{\n\t/* restore the values */\n\ta_opt = a_opt_o;\n\tc_opt = c_opt_o;\n\te_opt = e_opt_o;\n\th_opt = h_opt_o;\n\tj_opt = j_opt_o;\n\tk_opt = k_opt_o;\n\tl_opt = l_opt_o;\n\tm_opt = m_opt_o;\n\to_opt = o_opt_o;\n\tp_opt = p_opt_o;\n\tq_opt = q_opt_o;\n\tr_opt = r_opt_o;\n\tu_opt = u_opt_o;\n\tv_opt = v_opt_o;\n\tz_opt = z_opt_o;\n\tA_opt = A_opt_o;\n\tC_opt = C_opt_o;\n\tJ_opt = J_opt_o;\n\tM_opt = M_opt_o;\n\tN_opt = N_opt_o;\n\tP_opt = P_opt_o;\n\tS_opt = S_opt_o;\n\tV_opt = V_opt_o;\n\tDepend_opt = Depend_opt_o;\n\tInteract_opt = Interact_opt_o;\n\tStagein_opt = Stagein_opt_o;\n\tStageout_opt = Stageout_opt_o;\n\tSandbox_opt = Sandbox_opt_o;\n\tGrouplist_opt = Grouplist_opt_o;\n\tgui_opt = gui_opt_o;\n\tResvstart_opt = Resvstart_opt_o;\n\tResvend_opt = Resvend_opt_o;\n\tpwd_opt = pwd_opt_o;\n\tcred_opt = cred_opt_o;\n\tblock_opt = block_opt_o;\n\trelnodes_on_stageout_opt = relnodes_on_stageout_opt_o;\n\ttolerate_node_failures_opt = tolerate_node_failures_opt_o;\n}\n\n/**\n * @brief\n *\tHelper function to free a list of attributes. 
This is called from\n *\tdo_daemon_stuff, since that function loops over each client request.\n *\tNot freeing the list of attributes created would result in a\n *\tsignificant memory leak.\n *\n * @param[in]\tattrib - The list of attributes to free\n *\n */\nvoid\nqsub_free_attrl(struct attrl *attrib)\n{\n\tstruct attrl *attr;\n\n\twhile (attrib) {\n\t\tfree(attrib->name);\n\t\tfree(attrib->resource);\n\t\tfree(attrib->value);\n\n\t\tattr = attrib;\n\t\tattrib = attrib->next;\n\t\tfree(attr);\n\t}\n}\n\n/**\n * @brief\n *\tHelper function to duplicate the passed attrl structure.\n *\n * @param[in]\tattrib - The list of attributes to copy\n *\n * @return\tstruct attrl * - the duplicated 'attrib' (malloced).\n * @retval\tnon-NULL - success.\n * @retval\tNULL - failure.\n *\n */\nstatic struct attrl *\ndup_attrl(struct attrl *attrib)\n{\n\tstruct attrl *attr = NULL;\n\tstruct attrl *attr_new = NULL;\n\n\tfor (attr = attrib; attr != NULL; attr = attr->next) {\n\t\tif (attr->resource != NULL) {\n\t\t\t/* strings have null character also in buf */\n\t\t\tset_attr_resc_error_exit(&attr_new, attr->name,\n\t\t\t\t\t\t attr->resource,\n\t\t\t\t\t\t attr->value);\n\t\t} else {\n\t\t\tset_attr_error_exit(&attr_new, attr->name, attr->value);\n\t\t}\n\t}\n\n\treturn attr_new;\n}\n\n/**\n *\n * @brief\n *\tThe wrapper program to \"do_submit()\".\n * @par\n *\tThis attempts do_submit() up to 'retry' times, whenever it\n *\treturns PBSE_FORCE_QSUB_UPDATE.\n *\n * @param[out]\trmsg - gets filled with the error message.\n *\n * @return \tint\n * @retval\tthe return code of do_submit().\n * @retval\tif retry time exhausted or any unexpected failure,\n * \t\treturn PBSE_PROTOCOL\n *\n */\nint\ndo_submit2(char *rmsg)\n{\n\tint retry; /* do a retry count to prevent infinite loop */\n\tint rc;\n\n\trmsg[0] = '\\0';\n\t/*\n\t * Save the original job attributes/resources (attrib)\n\t * before \"default_qsub_arguments\" was applied.\n\t */\n\tif (attrib != NULL) {\n\t\tif (attrib_o 
!= NULL)\n\t\t\tqsub_free_attrl(attrib_o);\n\t\tattrib_o = dup_attrl(attrib); /* save attributes list */\n\t\tif (attrib_o == NULL) {\n\t\t\tsnprintf(rmsg, MAXPATHLEN, \"Failed to duplicate attributes list.\\n\");\n\t\t\treturn PBSE_PROTOCOL;\n\t\t}\n\t}\n\n\t/* original v_value also needs to be saved as it gets mangled inside set_job_env() */\n\tif (v_value != NULL) {\n\t\tfree(v_value_o);\n\t\tv_value_o = strdup(v_value);\n\t\tif (v_value_o == NULL) {\n\t\t\tsnprintf(rmsg, MAXPATHLEN, \"Failed to duplicate original -v value\\n\");\n\t\t\treturn PBSE_PROTOCOL;\n\t\t}\n\t}\n\n\t/*\n\t * Need to save original values of qsub option variables,\n\t * as refresh_dfltqsubargs() below could \"lose\" memory\n\t * of the option variable values. The values are\n\t * needed in case new \"default_qsub_arguments\" come and\n\t * get reparsed.\n\t */\n\tsave_opts();\n\n\trc = do_submit(rmsg);\n\tfor (retry = 5; (rc == PBSE_FORCE_QSUB_UPDATE) && (retry > 0); retry--) {\n\t\t/* Let's retry with the new \"default_qsub_arguments\" */\n\t\trefresh_dfltqsubargs();\n\t\tif (pbs_errno != PBSE_NONE)\n\t\t\treturn (pbs_errno);\n\n\t\t/* Use the original attrib value before the previous \"default_qsub_arguments\" was applied. 
*/\n\t\tif (attrib_o != NULL) {\n\t\t\tif (attrib != NULL)\n\t\t\t\tqsub_free_attrl(attrib);\n\t\t\tattrib = dup_attrl(attrib_o);\n\t\t\tif (attrib == NULL) {\n\t\t\t\tsnprintf(rmsg, MAXPATHLEN, \"Failed to duplicate attributes list\\n\");\n\t\t\t\treturn PBSE_PROTOCOL;\n\t\t\t}\n\t\t}\n\n\t\t/* use original -v value */\n\t\tif (v_value_o != NULL) {\n\t\t\tfree(v_value);\n\t\t\tv_value = strdup(v_value_o);\n\t\t\tif (v_value == NULL) {\n\t\t\t\tsnprintf(rmsg, MAXPATHLEN, \"Failed to duplicate -v value\\n\");\n\t\t\t\treturn PBSE_PROTOCOL;\n\t\t\t}\n\t\t}\n\n\t\trestore_opts();\n\t\trc = do_submit(rmsg);\n\t}\n\t/* only report exhaustion if the final retry still asked for an update */\n\tif ((retry == 0) && (rc == PBSE_FORCE_QSUB_UPDATE)) {\n\t\tsnprintf(rmsg, MAXPATHLEN, \"Retries to submit the job exhausted.\\n\");\n\t\trc = PBSE_PROTOCOL;\n\t}\n\treturn (rc);\n}\n\n/*\n * @brief\n *  Perform a regular submit, without the daemon.\n *\n * @param[in] daemon_up - Indicates whether daemon is running\n * @return    rc        - Error code\n */\nstatic int\nregular_submit(int daemon_up)\n{\n\tint rc = 0;\n\trc = do_connect(server_out, retmsg);\n\tif (rc == 0) {\n\t\tif (sd_svr != -1) {\n\t\t\trc = do_submit2(retmsg);\n\t\t} else\n\t\t\trc = -1;\n\t}\n\tif ((rc == 0) && !(Interact_opt != FALSE || block_opt) && (daemon_up == 0) && (no_background == 0) && !V_opt)\n\t\tdo_daemon_stuff();\n\treturn rc;\n}\n\n/* End of \"Daemon\" functions. 
*/\n\nint\nmain(int argc, char **argv, char **envp) /* qsub */\n{\n\tint errflg;\t\t\t\t /* option error */\n\tstatic char script[MAXPATHLEN + 1] = \"\"; /* name of script file */\n\tchar *q_n_out;\t\t\t\t /* queue part of destination */\n\tchar *s_n_out;\t\t\t\t /* server part of destination */\n\t/* server:port to send request to */\n\tchar *cmdargs = NULL;\n\tint command_flag = 0;\n\tint rc = 0;\t\t   /* error code for submit */\n\tint do_regular_submit = 1; /* used if daemon based submit fails */\n\tint daemon_up = 0;\n\tchar **argv_cpy; /* copy argv for getopt */\n\tint i;\n\n\t/* Set signal handlers */\n\t(void) set_sig_handlers();\n\n\t/*\n\t * Print version info and exit, if specified with --version option.\n\t * Otherwise, proceed normally.\n\t */\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\t/*\n\t * Identify the configured tmpdir without calling pbs_loadconf().\n\t * We do not want to incur the cost of parsing the services DB.\n\t */\n\ttmpdir = pbs_get_tmpdir();\n\tif (tmpdir == NULL) {\n\t\tfprintf(stderr, \"qsub: Failed to load configuration parameters!\\n\");\n\t\texit_qsub(2);\n\t}\n\n\t/*\n\t * If qsub command is submitted with arguments, then capture them and\n\t * encode in XML format using encode_xml_arg_list() and set the\n\t * \"Submit_arguments\" job attribute.\n\t */\n\tif ((argc >= 2) && (cmdargs = encode_xml_arg_list(1, argc, argv))) {\n\t\tset_attr_error_exit(&attrib, ATTR_submit_arguments, cmdargs);\n\t\tfree(cmdargs);\n\t\tcmdargs = NULL;\n\t}\n\n\t/* Process options */\n\targv_cpy = calloc(argc + 1, sizeof(char *));\n\tif (argv_cpy == NULL) {\n\t\tfprintf(stderr, \"qsub: out of memory\\n\");\n\t\texit_qsub(2);\n\t}\n\tfor (i = 0; i < argc; i++) {\n\t\targv_cpy[i] = argv[i];\n\t}\n\targv_cpy[argc] = NULL;\n\n\terrflg = process_opts(argc, argv_cpy, CMDLINE); /* get cmd-line options */\n\tif (errflg || ((optind < argc) && (strcmp(argv[optind], argv_cpy[optind]) != 0))) {\n\t\t/*\n\t\t * The arguments changed, the script and \"--\" must have 
been present.\n\t\t * getopt will move all non-options to the end of the array. In qsub's\n\t\t * case, it will only happen if both the \"script\" and \"-- executable\"\n\t\t * were present in the qsub command. This is unsupported usage and\n\t\t * should exit.\n\t\t */\n\t\tprint_usage();\n\t\texit_qsub(2);\n\t}\n\tfree(argv_cpy);\n\t/* Process special arguments */\n\tcommand_flag = process_special_args(argc, argv, script);\n\tfix_path(script, 1);\n\n\tif (command_flag == 0)\n\t\t/* Read the job script from a file or stdin */\n\t\tread_job_script(script);\n\n\t/* Needs to be done before job_env_basic(), so that it gets the correct interactive auth method */\n\tif (Interact_opt)\n\t\tpbs_loadconf(0);\n\n\t/* Enable X11 Forwarding or GUI if specified */\n\tenable_gui();\n\n\t/* Set option default values */\n\tset_opt_defaults();\n\n\t/* Parse destination string */\n\tserver_out[0] = '\\0';\n\tif (parse_destination_id(destination, &q_n_out, &s_n_out)) {\n\t\tfprintf(stderr, \"qsub: illegally formed destination: %s\\n\", destination);\n\t\t(void) unlink(script_tmp);\n\t\texit_qsub(2);\n\t} else if (notNULL(s_n_out)) {\n\t\tsnprintf(server_out, sizeof(server_out), \"%s\", s_n_out);\n\t}\n\n\t/*\n\t * Get required environment variables to be sent to the server.\n\t * Must be done early here, as basic_envlist and qsub_envlist will\n\t * be sent to the qsub daemon if needed.\n\t */\n\tbasic_envlist = job_env_basic();\n\tif (basic_envlist == NULL)\n\t\texit_qsub(3);\n\tif (V_opt)\n\t\tqsub_envlist = env_array_to_varlist(envp);\n\n\t/*\n\t * Disable backgrounding if we are inside another qsub\n\t */\n\tif (getenv(ENV_PBS_JOBID) != NULL)\n\t\tno_background = 1;\n\n\t/*\n\t * In case of interactive jobs, jobs with block=true, or no_background == 1,\n\t * qsub should fully execute from the foreground, so daemon_submit() is not called.\n\t * It should not fork, neither should it send the data to the background qsub.\n\t *\n\t * If all 3 of these options are zero, then try to 
submit via daemon.\n\t */\n\tif ((Interact_opt || block_opt || no_background) == 0) {\n\t\t/* Try to submit jobs using a daemon */\n\t\trc = daemon_submit(&daemon_up, &do_regular_submit);\n\t}\n\n\tif (do_regular_submit == 1)\n\t\t/* submission via daemon was not successful, so do regular submit */\n\t\trc = regular_submit(daemon_up);\n\n\t/* remove temporary job script file */\n\t(void) unlink(script_tmp);\n\n\tif (rc == 0) { /* submit was successful */\n\t\tnew_jobname = retmsg;\n\t\tif (!z_opt && Interact_opt == FALSE)\n\t\t\tprintf(\"%s\\n\", retmsg); /* print jobid with a \\n */\n\t} else {\n\t\t/* error, print whatever our daemon gave us back */\n\t\tfprintf(stderr, \"%s\", retmsg);\n\t\t/* check if the retmsg has \"qsub: illegal -\" string, if so print usage */\n\t\tif (strstr(retmsg, \"qsub: illegal -\"))\n\t\t\tprint_usage();\n\t\texit_qsub(rc);\n\t}\n\n\t/* is this an interactive job ??? */\n\tif (Interact_opt != FALSE)\n\t\tinteractive();\n\telse if (block_opt) { /* block until job completes? */\n\t\tfflush(stdout);\n\t\tblock();\n\t}\n\n\texit_qsub(0);\n\treturn (0);\n} /* end of main() */\n"
  },
  {
    "path": "src/cmds/qsub_sup.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * qsub_sup.c\n *\n *  Created on: Jul 3, 2020\n *      Author: bhagatr\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include <pbs_version.h>\n#include <sys/types.h>\n#include <sys/socket.h>\n#include <sys/stat.h>\n#include <sys/time.h>\n#include <sys/utsname.h>\n#include <sys/wait.h>\n#include <netinet/in.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <netdb.h>\n#include <signal.h>\n#include <termios.h>\n#include <assert.h>\n#include <sys/un.h>\n#include <syslog.h>\n#include <openssl/sha.h>\n#include \"pbs_ifl.h\"\n#include \"ifl_internal.h\"\n#include \"cmds.h\"\n#include \"libpbs.h\"\n#include \"net_connect.h\"\n#include \"dis.h\"\n#include \"port_forwarding.h\"\n#include \"credential.h\"\n#include \"libutil.h\"\n\n#if defined(HAVE_SYS_IOCTL_H)\n#include <sys/ioctl.h>\n#endif /* HAVE_SYS_IOCTL_H */\n\n#define MAXPIPENAME sizeof(((struct sockaddr_un *) 0)->sun_path)\n#define QSUB_DMN_TIMEOUT_SHORT 5\n#define QSUB_DMN_TIMEOUT_LONG 60 /* timeout for qsub background process */\n#define DMN_REFUSE_EXIT 7\t /* return code when daemon can't serve a job and exits */\n#define CMDLINE 3\n#define XAUTH_LEN 512\t\t     /* Max size of buffer to store Xauth cookie length */\n#define XAUTH_ERR_REDIRECTION \"2>&1\" /* redirection string used for xauth command */\n#define X11_PORT_LEN 8\t\t     /* Max size of 
buffer to store port information as string */\n\n#if !defined(PBS_NO_POSIX_VIOLATION)\nchar GETOPT_ARGS[] = \"a:A:c:C:e:fhIj:J:k:l:m:M:N:o:p:q:r:R:S:u:v:VW:XzP:\";\n#else\nchar GETOPT_ARGS[] = \"a:A:c:C:e:fhj:J:k:l:m:M:N:o:p:q:r:R:S:u:v:VW:zP:\";\n#endif /* PBS_NO_POSIX_VIOLATION */\n\nchar usage[] =\n\t\"usage: qsub [-a date_time] [-A account_string] [-c interval]\\n\"\n\t\"\\t[-C directive_prefix] [-e path] [-f ] [-h ] [-I [-X]] [-j oe|eo] [-J X-Y[:Z]]\\n\"\n\t\"\\t[-k keep] [-l resource_list] [-m mail_options] [-M user_list]\\n\"\n\t\"\\t[-N jobname] [-o path] [-p priority] [-P project] [-q queue] [-r y|n]\\n\"\n\t\"\\t[-R o|e|oe] [-S path] [-u user_list] [-W otherattributes=value...]\\n\"\n\t\"\\t[-v variable_list] [-V ] [-z] [script | -- command [arg1 ...]]\\n\";\n\nchar read_script_msg[] =\n\t\"Job script will be read from standard input. Submit with CTRL+D.\\n\";\n\nstatic struct termios oldtio;\t\t\t\t\t/* Terminal info */\nstatic struct winsize wsz;\t\t\t\t\t/* Window size */\nextern struct attrl *attrib;\t\t\t\t\t/* Attribute list */\nchar fl[MAXPIPENAME];\t\t\t\t\t\t/* the filename used as the pipe name */\nextern char *new_jobname;\t\t\t\t\t/* return from submit request */\nextern char server_out[PBS_MAXSERVERNAME + PBS_MAXPORTNUM + 2]; /* Destination server, parsed from destination[] */\nextern char *tmpdir;\t\t\t\t\t\t/* Path of temp directory in which to put the job script */\nextern char cred_name[32];\t\t\t\t\t/* space to hold small credential name */\nextern char destination[PBS_MAXDEST];\t\t\t\t/* Destination of the batch job, specified by q opt */\nextern char script_tmp[MAXPATHLEN + 1];\t\t\t\t/* name of script file copy */\nextern char retmsg[MAXPATHLEN];\t\t\t\t\t/* holds the message that background qsub process will send */\nextern char qsub_cwd[MAXPATHLEN + 1];\t\t\t\t/* buffer to pass cwd to background qsub */\nextern char *basic_envlist;\t\t\t\t\t/* basic comma-separated environment variables list string */\nextern char 
*qsub_envlist;\t\t\t\t\t/* comma-separated variables list string */\nextern char *v_value;\t\t\t\t\t\t/* expanded variable list from v opt */\nextern char *cred_buf;\nextern char *display; /* environment variable DISPLAY */\nextern int comm_sock; /* Socket for interactive and block job */\nint X11_comm_sock;    /* Socket for x11 communication */\nextern int Forwardx11_opt;\nextern int sd_svr; /* return from pbs_connect */\nextern int is_background;\nextern size_t cred_len;\n\nextern int recv_attrl(void *s, struct attrl **attrib);\nextern int recv_string(void *s, char *str);\nextern int recv_dyn_string(void *s, char **strp);\nextern int send_string(void *s, char *str);\nextern int send_attrl(void *s, struct attrl *attrib);\nextern int send_opts(void *s);\nextern int recv_opts(void *s);\nextern int do_submit2(char *rmsg);\nextern int dosend(void *s, char *buf, int bufsize);\nextern int dorecv(void *s, char *buf, int bufsize);\nextern int do_dir(char *, int, char *, size_t);\nextern int check_qsub_daemon(char *);\nextern int Interact_opt;\nextern int sig_happened;\nextern void log_syslog(char *msg);\nextern void exit_qsub(int exitstatus);\nextern void qsub_free_attrl(struct attrl *attrib);\nextern void bailout(int ret);\nextern char *strdup_esc_commas(char *str_to_dup);\nstatic char *X11_get_authstring(void);\nextern void set_attr_error_exit(struct attrl **attrib, char *attrib_name, char *attrib_value);\nstatic char *port_X11(void);\nstatic void daemon_stuff(void);\n\n/**\n * @brief\n * \tLog a simple message to syslog\n * \tTo be used from the qsub background daemon\n *\n * @param[in]\tmsg - string to be logged\n *\n */\nvoid\nlog_syslog(char *msg)\n{\n\topenlog(\"qsub\", LOG_PID | LOG_CONS | LOG_NOWAIT, LOG_USER);\n\tsyslog(LOG_ERR, \"%s\", msg);\n\tcloselog();\n}\n\n/**\n * @brief\n *\tHelper function to get the pbs conf file path, and\n *\tconvert it to a string, later added to the filename\n *\t(pipe or unix domain socket filename) that is used\n *\tfor 
communications between the front-end and\n *\tbackground qsub processes.\n *\tThe path to the pbs conf file is converted to a string\n *\tby replacing slashes, spaces, and dots with underscores.\n *\tIf PBS_CONF_FILE is not set, then an empty string is returned.\n *\n * @return - The string representing the path to the pbs conf file\n *\n */\nstatic char *\nget_conf_path(void)\n{\n\tchar *cnf = getenv(\"PBS_CONF_FILE\");\n\t/* static pointer so we can free heap memory from previous invocation of this function */\n\tstatic char *dup_cnf_path = NULL;\n\tchar *p;\n\n\tif (cnf) {\n\t\tp = strdup(cnf);\n\t\tif (p) {\n\t\t\tfree(dup_cnf_path);\n\t\t\tdup_cnf_path = p;\n\t\t\twhile (*p) {\n\t\t\t\tif (*p == '/' || *p == ' ' || *p == '.')\n\t\t\t\t\t*p = '_';\n\t\t\t\tp++;\n\t\t\t}\n\t\t}\n\t\treturn dup_cnf_path;\n\t} else if (dup_cnf_path) {\n\t\treturn dup_cnf_path;\n\t} else {\n\t\treturn \"\";\n\t}\n}\n\n/**\n * @brief\n *      Check whether the line ends with a Windows CR/LF pair.\n *\n * @param[in]\ts\t- input line\n *\n * @return      int\n * @retval -1 - input error\n * @retval 0 - Unix line ending\n * @retval 1 - Windows (CR, LF) line ending\n */\nint\ncheck_crlf(char *s)\n{\n\tif (s == NULL) {\n\t\treturn -1;\n\t}\n\tint len = strlen(s);\n\tif (len == 0) {\n\t\treturn 0;\n\t}\n\tif (len > 1 && s[len - 2] == '\\r' && s[len - 1] == '\\n') {\n\t\treturn 1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *      Create a temporary file that will house the job script\n *\n * @param[in]\tfile\t- Input file pointer\n * @param[out]  script\t- Temp file location\n * @param[in]   prefix\t- Prefix for PBS directives\n *\n * @return      int\n * @retval\t-1 - Error processing qsub parameters\n * @retval      3 - Error writing script file\n * @retval      4 - Temp file creation failure\n * @retval      5 - Error reading input file\n * @retval      6 - Unexpected EOF on read\n */\nint\nget_script(FILE *file, char *script, char *prefix)\n{\n\tchar *sopt;\n\tint err = 0;\n\tint exec = FALSE;\n\tchar tmp_name[MAXPATHLEN + 1];\n\tFILE 
*TMP_FILE;\n\tchar *in;\n\tchar *s_in = NULL;\n\tint s_len = 0;\n\tchar *extend;\n\tchar *extend_in = NULL;\n\tint extend_len = 0;\n\tint extend_loc;\n\tstatic char tmp_template[] = \"pbsscrptXXXXXX\";\n\tint fds;\n\n\t/*\n\t * Note: Need to populate script variable as soon as temp file is created so it\n\t * gets cleaned up in case of an error.\n\t */\n\tsnprintf(tmp_name, sizeof(tmp_name), \"%s/%s\", tmpdir, tmp_template);\n\tfds = mkstemp(tmp_name); /* returns file descriptor */\n\tif (fds != -1) {\n\t\tsnprintf(script, MAXPATHLEN + 1, \"%s\", tmp_name);\n\t\tif ((TMP_FILE = fdopen(fds, \"w+\")) == NULL)\n\t\t\terr = 1;\n\t} else {\n\t\terr = 1;\n\t}\n\n\tif (err != 0) {\n\t\tperror(\"mkstemp\");\n\t\tfprintf(stderr, \"qsub: could not create/open tmp file %s for script\\n\", tmp_name);\n\t\treturn (4);\n\t}\n\n\twhile ((in = pbs_fgets(&s_in, &s_len, file)) != NULL) {\n\t\tif (!exec && ((sopt = pbs_ispbsdir(s_in, prefix)) != NULL)) {\n\t\t\t/* Check if this is a directive line that should be extended */\n\t\t\textend_loc = pbs_extendable_line(in);\n\t\t\tif (extend_loc >= 0) {\n\t\t\t\tin[extend_loc] = '\\0'; /* remove the backslash (\\) */\n\t\t\t\textend = pbs_fgets_extend(&extend_in, &extend_len, file);\n\t\t\t\tif (extend != NULL) {\n\t\t\t\t\tif (pbs_strcat(&s_in, &s_len, extend) == NULL)\n\t\t\t\t\t\treturn (5);\n\t\t\t\t\tin = s_in;\n\t\t\t\t\tsopt = pbs_ispbsdir(s_in, prefix);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/*\n\t\t\t * Setting options from the job script will not overwrite\n\t\t\t * options set on the command line. 
CMDLINE-1 means\n\t\t\t * \"one less than CMDLINE priority\"\n\t\t\t */\n\t\t\tif (do_dir(sopt, CMDLINE - 1, retmsg, MAXPATHLEN) != 0) {\n\t\t\t\tfprintf(stderr, \"%s\", retmsg);\n\t\t\t\tfree(extend_in);\n\t\t\t\tfree(s_in);\n\t\t\t\treturn (-1);\n\t\t\t}\n\t\t} else if (!exec && pbs_isexecutable(s_in)) {\n\t\t\texec = TRUE;\n\t\t}\n#ifndef WIN32\n\t\tif (check_crlf(in)) {\n\t\t\tfprintf(stderr, \"qsub: script contains cr, lf\\n\");\n\t\t\tfclose(TMP_FILE);\n\t\t\tfree(extend_in);\n\t\t\tfree(s_in);\n\t\t\treturn (5);\n\t\t}\n#endif\n\t\tif (fputs(in, TMP_FILE) < 0) {\n\t\t\tperror(\"fputs\");\n\t\t\tfprintf(stderr, \"qsub: error writing copy of script, %s\\n\",\n\t\t\t\ttmp_name);\n\t\t\tfclose(TMP_FILE);\n\t\t\tfree(extend_in);\n\t\t\tfree(s_in);\n\t\t\treturn (3);\n\t\t}\n\t}\n\n\tfree(extend_in);\n\tfree(s_in);\n\tif (fclose(TMP_FILE) != 0) {\n\t\tperror(\"qsub: copy of script to tmp failed on close\");\n\t\treturn (5);\n\t}\n\tif (ferror(file)) {\n\t\tfprintf(stderr, \"qsub: error reading script file\\n\");\n\t\treturn (5);\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n *\tsignal handler to avoid race condition\n *\n * @param[in] sig - signal number\n *\n * @return Void\n *\n */\nvoid\nblockint(int sig)\n{\n\tsig_happened = sig;\n}\n\n/**\n * @brief\n *  Enable X11 Forwarding (on Unix) if specified.\n */\nvoid\nenable_gui(void)\n{\n\tchar *x11authstr = NULL;\n\tif (Forwardx11_opt) {\n\t\tif (!Interact_opt) {\n\t\t\tfprintf(stderr, \"qsub: X11 Forwarding possible only for interactive jobs\\n\");\n\t\t\texit_qsub(1);\n\t\t}\n\t\t/* get the DISPLAY's auth protocol, hexdata, and screen number */\n\t\tif ((x11authstr = X11_get_authstring()) != NULL) {\n\t\t\tset_attr_error_exit(&attrib, ATTR_X11_cookie, x11authstr);\n\t\t\tset_attr_error_exit(&attrib, ATTR_X11_port, port_X11());\n#ifdef DEBUG\n\t\t\tfprintf(stderr, \"x11auth string: %s\\n\", x11authstr);\n#endif\n\t\t} else {\n\t\t\texit_qsub(1);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *      This function returns a 
string consisting of the protocol in use,\n *      the hex data, and the screen number. This string forms the\n *      basis of X authentication. It is passed as a job attribute to\n *      MOM.\n * @return char*\n * @retval authstring Success\n * @retval NULL Failure\n *\n */\nstatic char *\nX11_get_authstring(void)\n{\n\tchar line[XAUTH_LEN] = {'\\0'};\n\tchar command[XAUTH_LEN] = {'\\0'};\n\tchar protocol[XAUTH_LEN];\n\tchar hexdata[XAUTH_LEN];\n\tchar screen[XAUTH_LEN];\n\tchar format[XAUTH_LEN];\n\tchar *authstring = NULL;\n\tFILE *f;\n\tint got_data = 0, ret = 0;\n\tchar *p;\n\n\tprotocol[0] = '\\0';\n\thexdata[0] = '\\0';\n\tscreen[0] = '\\0';\n\n\tsprintf(format, \" %%*s %%%ds %%%ds \", XAUTH_LEN - 1, XAUTH_LEN - 1);\n\n\tp = strchr(display, ':');\n\tif (p == NULL) {\n\t\tfprintf(stderr, \"qsub: Failed to get xauth data \"\n\t\t\t\t\"(check $DISPLAY variable)\\n\");\n\t\treturn NULL;\n\t}\n\n\t/* Try to get Xauthority information for the display. */\n\tif (strncmp(display, \"localhost\", sizeof(\"localhost\") - 1) == 0) {\n\t\t/*\n\t\t * Handle the FamilyLocal case where $DISPLAY does\n\t\t * not match an authorization entry. 
For this we\n\t\t * just try \"xauth list unix:displaynum.screennum\".\n\t\t * Matching on \"localhost\" to determine FamilyLocal\n\t\t * is not perfect.\n\t\t */\n\t\tret = snprintf(line, sizeof(line), \"%s list unix:%s %s\",\n\t\t\t       XAUTH_BINARY,\n\t\t\t       p + 1,\n\t\t\t       XAUTH_ERR_REDIRECTION);\n\t\tif (ret >= sizeof(line)) {\n\t\t\tfprintf(stderr, \"qsub: line overflow\\n\");\n\t\t\treturn NULL;\n\t\t}\n\t} else {\n\t\tret = snprintf(line, sizeof(line), \"%s list %.255s %s\",\n\t\t\t       XAUTH_BINARY,\n\t\t\t       display,\n\t\t\t       XAUTH_ERR_REDIRECTION);\n\t\tif (ret >= sizeof(line)) {\n\t\t\tfprintf(stderr, \"qsub: line overflow\\n\");\n\t\t\treturn NULL;\n\t\t}\n\t}\n\tsnprintf(command, sizeof(command), \"%s\", line);\n\n\tif (p != NULL)\n\t\tp = strchr(p, '.');\n\n\tif (p != NULL)\n\t\tsnprintf(screen, sizeof(screen), \"%s\", p + 1);\n\telse\n\t\tstrcpy(screen, \"0\"); /* Safe because sizeof(screen) = XAUTH_LEN, which is >= 2 */\n\n#ifdef DEBUG\n\tfprintf(stderr, \"X11_get_authstring: %s\\n\", line);\n#endif\n\tf = popen(line, \"r\");\n\tif (f == NULL) {\n\t\tfprintf(stderr, \"execution of '%s' failed, errno=%d \\n\", command, errno);\n\t} else if (fgets(line, sizeof(line), f) == NULL) {\n\t\tfprintf(stderr, \"cannot read data from '%s', errno=%d \\n\", command, errno);\n\t} else if (sscanf(line, format,\n\t\t\t  protocol,\n\t\t\t  hexdata) != 2) {\n\t\tfprintf(stderr, \"cannot parse output from '%s'\\n\", command);\n\t} else {\n\t\t/* SUCCESS */\n\t\tgot_data = 1;\n\t}\n\n\tif (f != NULL) {\n\t\t/*\n\t\t * Check the return value of pclose to see if the command failed.\n\t\t * In that case, the \"line\" read from stdout is probably an\n\t\t * error message (since stderr is redirected to stdout) from the shell or xauth,\n\t\t * so display that to the user.\n\t\t */\n\t\tif (pclose(f) != 0) {\n\t\t\tfprintf(stderr, \"execution of xauth failed: %s\", line);\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\tif (!got_data)\n\t\t/* FAILURE 
*/\n\t\treturn NULL;\n\n\t/**\n\t * Allocate 4 additional bytes: two ':' separators, the\n\t * terminating NUL character, and one spare byte\n\t */\n\tauthstring = malloc(strlen(protocol) + strlen(hexdata) +\n\t\t\t    strlen(screen) + 4);\n\tif (authstring == NULL) {\n\t\t/* FAILURE */\n\t\tfprintf(stderr, \"qsub: malloc failed\\n\");\n\t\treturn NULL;\n\t}\n\tsprintf(authstring, \"%s:%s:%s\",\n\t\tprotocol,\n\t\thexdata,\n\t\tscreen);\n\n\treturn (authstring);\n}\n\n/**\n * @brief\n *\tThis function creates a socket to listen for \"X11\" data\n *\tand returns the port number on which it is listening for X data.\n *\n * @return\tchar*\n * @retval\tportstring\tsuccess\n *\n * @par Side Effects\n *\t\tIf this function fails, it will exit the qsub process.\n *\n */\nstatic char *\nport_X11(void)\n{\n\tpbs_socklen_t namelen;\n\tstruct sockaddr_in myaddr;\n\tstatic char X11_port_str[X11_PORT_LEN];\n\tunsigned short X11_port;\n\n\tX11_comm_sock = socket(AF_INET, SOCK_STREAM, 0);\n\tif (X11_comm_sock < 0) {\n\t\tperror(\"qsub: unable to create socket\");\n\t\texit_qsub(1);\n\t}\n\tmyaddr.sin_family = AF_INET;\n\tmyaddr.sin_addr.s_addr = INADDR_ANY;\n\tmyaddr.sin_port = 0;\n\n\tif (bind(X11_comm_sock, (struct sockaddr *) &myaddr,\n\t\t sizeof(myaddr)) < 0) {\n\t\tperror(\"qsub: unable to bind to socket\");\n\t\texit_qsub(1);\n\t}\n\t/* get port number assigned */\n\tnamelen = sizeof(myaddr);\n\tif (getsockname(X11_comm_sock, (struct sockaddr *) &myaddr,\n\t\t\t&namelen) < 0) {\n\t\tperror(\"qsub: unable to get port number\");\n\t\texit_qsub(1);\n\t}\n\tX11_port = ntohs(myaddr.sin_port);\n\t(void) sprintf(X11_port_str, \"%u\", (unsigned int) X11_port);\n\tif (listen(X11_comm_sock, 1) < 0) {\n\t\tperror(\"qsub: listening on X11 socket failed\");\n\t\texit_qsub(1);\n\t}\n\treturn (X11_port_str);\n}\n\n/**\n * @brief\n *\tFork the current process. 
Call the daemon_stuff function in the\n *\tchild process, which starts listening on the unix domain socket.\n *\tThe parent process continues out of this function and eventually\n *\treturns control to the calling shell.\n *\n * @return void\n *\n * @par Side Effects\n *\tExits the program on failure.\n *\n */\nvoid\ndo_daemon_stuff(void)\n{\n\tint pid;\n\n\tpid = fork();\n\tif (pid == 0) {\n\t\t/*\n\t\t * Try to become the session leader.\n\t\t * If that fails, exit with a syslog message\n\t\t */\n\t\tif (setsid() == -1) {\n\t\t\tlog_syslog(\"setsid failed\");\n\t\t\texit(1);\n\t\t}\n\n\t\t/*\n\t\t * Just close the standard streams. We don't want to\n\t\t * close all other files.\n\t\t */\n\t\t(void) fclose(stdin);\n\t\t(void) fclose(stdout);\n\t\t(void) fclose(stderr);\n\n\t\t/* clear off all the attributes */\n\t\tqsub_free_attrl(attrib);\n\t\tattrib = NULL;\n\t\tfree(v_value);\n\t\tv_value = NULL;\n\t\tfree(basic_envlist);\n\t\tbasic_envlist = NULL;\n\t\tfree(qsub_envlist);\n\t\tqsub_envlist = NULL;\n\n\t\t/* set single threaded mode */\n\t\tpbs_client_thread_set_single_threaded_mode();\n\n\t\t/* set when background qsub is running */\n\t\tis_background = 1;\n\t\tdaemon_stuff();\n\t\t/*\n\t\t * Control should never reach here.\n\t\t * Still adding an exit, so it does not traverse parent code.\n\t\t */\n\t\texit(1);\n\t}\n}\n\n/**\n * @brief\n *\tSignal handler for SIGPIPE\n * @param[in]\tsig - signal number\n * @return\tvoid\n *\n */\nvoid\nexit_on_sigpipe(int sig)\n{\n\tperror(\"qsub: SIGPIPE received, job submission interrupted.\");\n\texit_qsub(1);\n}\n\n/**\n * @brief\n *  Set the signal handlers.\n
*/\nvoid\nset_sig_handlers(void)\n{\n\t/* Catch SIGPIPE on write() failures. */\n\tstruct sigaction act;\n\tsigemptyset(&act.sa_mask);\n\tact.sa_handler = exit_on_sigpipe;\n\tact.sa_flags = 0;\n\tif (sigaction(SIGPIPE, &act, NULL) < 0) {\n\t\tperror(\"qsub: unable to catch SIGPIPE\");\n\t\texit_qsub(1);\n\t}\n}\n\n/**\n * @brief\n *\tSend data of bufsize length to the peer. Used for communications\n * \tbetween the foreground and background qsub processes.\n *\n * @param[in]\ts - pointer to the windows PIPE or Unix domain socket\n * @param[in]\tbuf - The buf to send data from\n * @param[in]\tbufsize - The amount of data to send\n *\n * @return int\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\ndosend(void *s, char *buf, int bufsize)\n{\n\tint bytes = 0;\n\tint sock = (int) *((int *) s);\n\tint rc;\n\tchar *p = buf;\n\tint remaining = bufsize;\n\tdo {\n\t\t/*\n\t\t * For systems with MSG_NOSIGNAL defined (e.g. Linux 2.2 and later),\n\t\t * we use send() rather than write() in order to block the SIGPIPE\n\t\t * that qsub would receive if the remote side closes the stream. For\n\t\t * other systems, the exit_on_sigpipe() handler gets called.\n\t\t */\n\t\terrno = 0;\n#ifdef MSG_NOSIGNAL\n\t\trc = send(sock, p, remaining, MSG_NOSIGNAL);\n#else\n\t\trc = write(sock, p, remaining);\n#endif\n\t\tif (rc == -1)\n\t\t\treturn -1;\n\t\tif (rc == 0)\n\t\t\tbreak;\n\t\tbytes += rc;\n\t\tp += rc;\n\t\tremaining -= rc;\n\t} while (bytes < bufsize);\n\n\tif (bytes != bufsize)\n\t\treturn -1;\n\treturn 0;\n}\n\n/**\n * @brief\n *\tReceive data of bufsize length from the peer. 
Used for communications\n * \tbetween the foreground and background qsub processes.\n *\n * @param[in]\ts - pointer to the windows PIPE or Unix domain socket\n * @param[in]\tbuf - The buf to receive data into\n * @param[in]\tbufsize - The amount of data to read\n *\n * @return int\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\ndorecv(void *s, char *buf, int bufsize)\n{\n\tint bytes = 0;\n\tchar *p = buf;\n\tint remaining = bufsize;\n\tint sock = *((int *) s);\n\tint rc;\n\n\tdo {\n\t\terrno = 0;\n\t\trc = read(sock, p, remaining);\n\t\tif (rc == -1)\n\t\t\treturn -1;\n\t\tif (rc == 0)\n\t\t\tbreak;\n\t\tbytes += rc;\n\t\tp += rc;\n\t\tremaining -= rc;\n\t} while (bytes < bufsize);\n\n\tif (bytes != bufsize)\n\t\treturn -1;\n\treturn 0;\n}\n\n/**\n * @brief\n *\tInteractive Reader process: reads from the remote socket\n *\tand writes it out to stdout\n *\n * @param[in] s - socket (file descriptor)\n *\n * @return   Error code\n * @retval  -1  Failure\n * @retval   0   Success\n *\n */\nint\nreader(int s)\n{\n\tchar buf[4096];\n\tint c;\n\tchar *p;\n\tint wc;\n\n\t/* read from the socket, and write to stdout */\n\twhile (1) {\n\t\tc = CS_read(s, buf, sizeof(buf));\n\t\tif (c > 0) {\n\t\t\tp = buf;\n\t\t\twhile (c) {\n\t\t\t\tif ((wc = write(1, p, c)) < 0) {\n\t\t\t\t\tif (errno == EINTR) {\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tperror(\"qsub: write error\");\n\t\t\t\t\t\treturn (-1);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tc -= wc;\n\t\t\t\tp += wc;\n\t\t\t}\n\t\t} else if (c == 0) {\n\t\t\treturn (0); /* EOF - all done */\n\t\t} else {\n\t\t\tif (errno == EINTR)\n\t\t\t\tcontinue;\n\t\t\telse {\n\t\t\t\tperror(\"qsub: read error\");\n\t\t\t\treturn (-1);\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\tInteractive Packet Reader process: reads from the remote socket\n *\tand writes it out to stdout\n *\n * @param[in] s - socket (file descriptor)\n *\n * @return   Error code\n * @retval  -1  Failure\n * @retval   0   Success\n
*\n */\nint\npkt_reader(int s)\n{\n\tint c;\n\tchar *p;\n\tint wc;\n\tvoid *data_in = NULL;\n\tsize_t len_in = 0;\n\tint type = 0;\n\n\t/* read from the socket, and write to stdout */\n\twhile (1) {\n\t\tpbs_tcp_timeout = -1;\n\t\tc = transport_recv_pkt(s, &type, &data_in, &len_in);\n\t\tif (c > 0) {\n\t\t\tp = data_in;\n\t\t\twhile (c) {\n\t\t\t\tif ((wc = write(1, p, c)) < 0) {\n\t\t\t\t\tif (errno == EINTR) {\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tperror(\"qsub: write error\");\n\t\t\t\t\t\treturn (-1);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tc -= wc;\n\t\t\t\tp += wc;\n\t\t\t}\n\t\t} else if (c == -2) { /* tcp_recv returns -2 on EOF */\n\t\t\treturn (0); /* EOF - all done */\n\t\t} else {\n\t\t\tif (errno == EINTR)\n\t\t\t\tcontinue;\n\t\t\telse {\n\t\t\t\tperror(\"qsub: read error\");\n\t\t\t\treturn (-1);\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief       This is a reader function which reads from the remote socket\n *              when X forwarding is enabled and writes it back to stdout.\n *\n * @param[in] s - socket descriptor from where data is to be read.\n *\n * @return\tint\n * @retval\t 0\tSuccess\n * @retval\t-1\tFailure\n * @retval      -2      Peer Closed connection\n *\n */\nint\nreader_Xjob(int s)\n{\n\tstatic char buf[PF_BUF_SIZE];\n\tint c = 0;\n\tchar *p;\n\tint wc;\n\tint d = fileno(stdout);\n\n\t/* read from the socket and write to stdout */\n\tc = CS_read(s, buf, sizeof(buf));\n\tif (c > 0) {\n\t\tp = buf;\n\t\twhile (c) {\n\t\t\t/*write data back to stdout*/\n\t\t\tif ((wc = write(d, p, c)) < 0) {\n\t\t\t\tif (errno == EINTR) {\n\t\t\t\t\tcontinue;\n\t\t\t\t} else {\n\t\t\t\t\tperror(\"qsub: write error\");\n\t\t\t\t\treturn (-1);\n\t\t\t\t}\n\t\t\t}\n\t\t\tc -= wc;\n\t\t\tp += wc;\n\t\t}\n\t} else if (c == 0) {\n\t\t/*\n\t\t * If control reaches here, then it means peer has closed the\n\t\t * connection.\n\t\t */\n\t\treturn (-2);\n\t} else if (errno == EINTR) {\n\t\treturn (0);\n\t} else {\n\t\tperror(\"qsub: read 
error\");\n\t\treturn (-1);\n\t}\n\n\treturn (0);\n}\n\n/**\n * @brief       This is a packet reader function which reads from the remote socket\n *              when X forwarding is enabled and writes it back to stdout.\n *\n * @param[in] s - socket descriptor from where data is to be read.\n *\n * @return\tint\n * @retval\t 0\tSuccess\n * @retval\t-1\tFailure\n * @retval      -2      Peer Closed connection\n *\n */\nint\npkt_reader_Xjob(int s)\n{\n\tint c = 0;\n\tchar *p;\n\tint wc;\n\tint d = fileno(stdout);\n\tvoid *data_in = NULL;\n\tsize_t len_in = 0;\n\tint type = 0;\n\n\t/* read from the socket and write to stdout */\n\tc = transport_recv_pkt(s, &type, &data_in, &len_in);\n\tif (c > 0) {\n\t\tp = data_in;\n\t\twhile (c) {\n\t\t\t/*write data back to stdout*/\n\t\t\tif ((wc = write(d, p, c)) < 0) {\n\t\t\t\tif (errno == EINTR) {\n\t\t\t\t\tcontinue;\n\t\t\t\t} else {\n\t\t\t\t\tperror(\"qsub: write error\");\n\t\t\t\t\treturn (-1);\n\t\t\t\t}\n\t\t\t}\n\t\t\tc -= wc;\n\t\t\tp += wc;\n\t\t}\n\t} else if (c == -2) { /* tcp_recv returns -2 on EOF */\n\t\t/*\n\t\t * If control reaches here, then it means peer has closed the\n\t\t * connection.\n\t\t */\n\t\treturn (-2);\n\t} else if (errno == EINTR) {\n\t\treturn (0);\n\t} else {\n\t\tperror(\"qsub: read error\");\n\t\treturn (-1);\n\t}\n\n\treturn (0);\n}\n\n/**\n * @brief       This is a reader wrapper function which selects reader for remote socket\n *              when X forwarding is enabled and writes it back to stdout.\n *\n * @param[in] s - socket descriptor from where data is to be read.\n *\n * @return\tint\n * @retval\t 0\tSuccess\n * @retval\t-1\tFailure\n * @retval      -2      Peer Closed connection\n *\n */\nint\nget_reader_Xjob(int s)\n{\n\tif (transport_chan_get_ctx_status(s, FOR_ENCRYPT) == (int) AUTH_STATUS_CTX_READY) {\n\t\treturn pkt_reader_Xjob(s);\n\t} else {\n\t\treturn reader_Xjob(s);\n\t}\n}\n\n/**\n * @brief\n * \tsettermraw - set terminal into \"raw\" mode\n *\n * @param[in] ptio - 
pointer to termios structure\n *\n * @return None\n * @retval Void\n *\n */\nvoid\nsettermraw(struct termios *ptio)\n{\n\tstruct termios tio;\n\n\ttio = *ptio;\n\n\ttio.c_lflag &= ~(ICANON | ISIG | ECHO | ECHOE | ECHOK);\n\ttio.c_iflag &= ~(IGNBRK | INLCR | ICRNL | IXON | IXOFF);\n\ttio.c_oflag = 0;\n\ttio.c_oflag |= (OPOST); /* TAB3 */\n\ttio.c_cc[VMIN] = 1;\n\ttio.c_cc[VTIME] = 0;\n\n#if defined(TABDLY) && defined(TAB3)\n\tif ((tio.c_oflag & TABDLY) == TAB3)\n\t\ttio.c_oflag &= ~TABDLY;\n#endif\n\ttio.c_cc[VKILL] = -1;\n\ttio.c_cc[VERASE] = -1;\n\n\tif (tcsetattr(0, TCSANOW, &tio) < 0)\n\t\tperror(\"qsub: set terminal mode\");\n}\n\n/**\n * @brief\n * \tstopme - suspend process on ~^Z or ~^Y\n *\ton suspend, reset terminal to normal \"cooked\" mode;\n *\twhen resumed, again set terminal to raw.\n *\n * @param[in] p - process id\n *\n * @return None\n * @retval Void\n *\n */\nvoid\nstopme(pid_t p)\n{\n\t(void) tcsetattr(0, TCSANOW, &oldtio); /* reset terminal */\n\tkill(p, SIGTSTP);\n\tsettermraw(&oldtio); /* back to raw when we resume */\n}\n\n/**\n * @brief\n * \tWriter process: reads from stdin, and writes\n * \tdata out to the rem socket\n *\n * @param[in] s - file descriptor\n *\n * @return Void\n *\n */\nvoid\nwriter(int s)\n{\n\tchar c;\n\tint i;\n\tint newline = 1;\n\tchar tilde = '~';\n\tint wi;\n\n\tbool as_pkt = transport_chan_get_ctx_status(s, FOR_ENCRYPT) == (int) AUTH_STATUS_CTX_READY;\n\n\t/* read from stdin, and write to the socket */\n\n\twhile (1) {\n\t\ti = read(0, &c, 1);\n\t\tif (i > 0) { /* read data */\n\t\t\tif (newline) {\n\t\t\t\tif (c == tilde) { /* maybe escape character */\n\n\t\t\t\t\t/* read next character to check */\n\n\t\t\t\t\twhile ((i = read(0, &c, 1)) != 1) {\n\t\t\t\t\t\tif ((i == -1) && (errno == EINTR))\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\tif (i != 1)\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tif (c == '.') /* termination character */\n\t\t\t\t\t\tbreak;\n\t\t\t\t\telse if (c == 
oldtio.c_cc[VSUSP]) {\n\t\t\t\t\t\tstopme(0); /* ^Z suspend all */\n\t\t\t\t\t\tcontinue;\n#ifdef VDSUSP\n\t\t\t\t\t} else if (c == oldtio.c_cc[VDSUSP]) {\n\t\t\t\t\t\tstopme(getpid());\n\t\t\t\t\t\tcontinue;\n#endif\t\t\t\t\t\t /* VDSUSP */\n\t\t\t\t\t} else { /* not escape, write out tilde */\n\t\t\t\t\t\tif (as_pkt) {\n\t\t\t\t\t\t\twi = transport_send_pkt(s, AUTH_ENCRYPTED_DATA, &tilde, 1);\n\t\t\t\t\t\t\tif (wi < 1)\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\twhile ((wi = CS_write(s, &tilde, 1)) != 1) {\n\t\t\t\t\t\t\t\tif ((wi == -1) && (errno == EINTR))\n\t\t\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif (wi != 1)\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tnewline = 0; /* no longer at start of line */\n\t\t\t} else {\n\t\t\t\t/* reset to newline if \\n \\r kill or interrupt */\n\t\t\t\tnewline = (c == '\\n') ||\n\t\t\t\t\t  (c == oldtio.c_cc[VKILL]) ||\n\t\t\t\t\t  (c == oldtio.c_cc[VINTR]) ||\n\t\t\t\t\t  (c == '\\r');\n\t\t\t}\n\t\t\tif (as_pkt) {\n\t\t\t\twi = transport_send_pkt(s, AUTH_ENCRYPTED_DATA, &c, 1);\n\t\t\t\tif (wi < 1)\n\t\t\t\t\tbreak;\n\t\t\t} else {\n\t\t\t\twhile ((wi = CS_write(s, &c, 1)) != 1) { /* write out character */\n\t\t\t\t\tif ((wi == -1) && (errno == EINTR))\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\telse\n\t\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tif (wi != 1)\n\t\t\t\t\tbreak;\n\t\t\t}\n\n\t\t} else if (i == 0) { /* EOF */\n\t\t\tbreak;\n\t\t} else if (i < 0) { /* error */\n\t\t\tif (errno == EINTR)\n\t\t\t\tcontinue;\n\t\t\telse {\n\t\t\t\tperror(\"qsub: read error\");\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \tsend_term - send the current TERM type and certain control characters\n *\n * @param[in] sock - file descriptor\n *\n * @return Void\n *\n */\nvoid\nsend_term(int sock)\n{\n\tchar buf[PBS_TERM_BUF_SZ];\n\tchar *term;\n\tchar cc_array[PBS_TERM_CCA];\n\n\tterm = getenv(\"TERM\");\n\tterm = 
strdup_esc_commas(term);\n\tif (term == NULL)\n\t\tsnprintf(buf, sizeof(buf), \"TERM=unknown\");\n\telse {\n\t\tsnprintf(buf, sizeof(buf), \"TERM=%s\", term);\n\t\tfree(term);\n\t}\n\n\tif (transport_chan_get_ctx_status(sock, FOR_ENCRYPT) == (int) AUTH_STATUS_CTX_READY) {\n\t\ttransport_send_pkt(sock, AUTH_ENCRYPTED_DATA, buf, PBS_TERM_BUF_SZ);\n\t} else {\n\t\t(void) CS_write(sock, buf, PBS_TERM_BUF_SZ);\n\t}\n\n\tcc_array[0] = oldtio.c_cc[VINTR];\n\tcc_array[1] = oldtio.c_cc[VQUIT];\n\tcc_array[2] = oldtio.c_cc[VERASE];\n\tcc_array[3] = oldtio.c_cc[VKILL];\n\tcc_array[4] = oldtio.c_cc[VEOF];\n\tcc_array[5] = oldtio.c_cc[VSUSP];\n\n\tif (transport_chan_get_ctx_status(sock, FOR_ENCRYPT) == (int) AUTH_STATUS_CTX_READY) {\n\t\ttransport_send_pkt(sock, AUTH_ENCRYPTED_DATA, cc_array, PBS_TERM_CCA);\n\t} else {\n\t\t(void) CS_write(sock, cc_array, PBS_TERM_CCA);\n\t}\n}\n\n/**\n * @brief\n *\tsend_winsize = send the current tty's window size\n *\n * @param[in] sock - file descriptor\n *\n * @return Void\n *\n */\nvoid\nsend_winsize(int sock)\n{\n\tchar buf[PBS_TERM_BUF_SZ];\n\n\t(void) sprintf(buf, \"WINSIZE %hu,%hu,%hu,%hu\", wsz.ws_row, wsz.ws_col, wsz.ws_xpixel, wsz.ws_ypixel);\n\n\tif (transport_chan_get_ctx_status(sock, FOR_ENCRYPT) == (int) AUTH_STATUS_CTX_READY) {\n\t\ttransport_send_pkt(sock, AUTH_ENCRYPTED_DATA, buf, PBS_TERM_BUF_SZ);\n\t} else {\n\t\t(void) CS_write(sock, buf, PBS_TERM_BUF_SZ);\n\t}\n\treturn;\n}\n\n/**\n * @brief\n *\tgetwinsize - get the current window size\n *\n * @param[in] pwsz - pointer to winsize structure\n *\n * @return   Error code\n * @retval  -1    Failure\n * @retval   0    Success\n *\n */\nint\ngetwinsize(struct winsize *pwsz)\n{\n\tif (ioctl(0, TIOCGWINSZ, &wsz) < 0) {\n\t\tperror(\"qsub: unable to get window size\");\n\t\treturn (-1);\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n *\tcatchchild = signal handler for Death of Child\n *\n * @param[in] sig - signal number\n *\n * @return Void\n *\n */\nvoid\ncatchchild(int 
sig)\n{\n\tint status;\n\tint pid;\n\n\twhile (1) {\n\t\tpid = waitpid(-1, &status, WNOHANG | WUNTRACED);\n\t\tif (pid == 0)\n\t\t\treturn;\n\t\tif ((pid > 0) && (WIFSTOPPED(status) == 0))\n\t\t\tbreak;\n\t\tif ((pid == -1) && (errno != EINTR)) {\n\t\t\tperror(\"qsub: bad status in catchchild: \");\n\t\t\treturn;\n\t\t}\n\t}\n\n\t/* reset terminal to cooked mode */\n\n\t(void) tcsetattr(0, TCSANOW, &oldtio);\n\texit_qsub(0);\n}\n\n/**\n * @brief\n *\tprints can't suspend qsub process on arrival of signal causing suspension\n *\n * @param[in] sig - signal number\n *\n * @return Void\n *\n */\nvoid\nno_suspend(int sig)\n{\n\tprintf(\"Sorry, you cannot suspend qsub until the job is started\\n\");\n\tfflush(stdout);\n}\n\n/**\n * @brief\n *\tsignal handler function for interrupt signal\n *\n * @param[in] sig - signal number\n *\n * @return Void\n *\n */\nvoid\ncatchint(int sig)\n{\n\tint c;\n\n\tprintf(\"Do you wish to terminate the job and exit (y|[n])? \");\n\tfflush(stdout);\n\twhile (1) {\n\t\talarm(60); /* give a minute to think about it */\n\t\tc = getchar();\n\n\t\tif ((c == 'n') || (c == 'N') || (c == '\\n'))\n\t\t\tbreak;\n\t\telse if ((c == 'y') || (c == 'Y') || (c == EOF)) {\n\t\t\tbailout(0);\n\t\t} else {\n\t\t\tprintf(\"yes or no please\\n\");\n\t\t\twhile ((c != '\\n') && (c != EOF))\n\t\t\t\tc = getchar();\n\t\t}\n\t}\n\talarm(0); /* reset alarm */\n\twhile ((c != '\\n') && (c != EOF))\n\t\tc = getchar();\n\treturn;\n}\n\n/**\n * @brief\n *\tsignal handler for timeout scenario\n *\n * @param[in] sig - signal number\n *\n * @return Void\n *\n */\nvoid\ntoolong(int sig)\n{\n\tprintf(\"Timeout -- deleting job\\n\");\n\tbailout(0);\n}\n\n/**\n * @brief\n *  Function used to log port forwarding messages.\n *\n * @param[in] msg - error message to be logged\n *\n * @return Void\n *\n */\nstatic void\nlog_cmds_portfw_msg(char *msg)\n{\n\tfprintf(stderr, \"%s\\n\", msg);\n\t(void) fflush(stderr);\n\t(void) fflush(stdout);\n}\n\n/**\n * @brief\n *\tThis function 
initializes pfwdsock structure and eventually\n *\tcalls port_forwarder.\n *\n * @param[in]\tX_data_socket - socket descriptor used to read X data from mom\n *\t\t\t\tport forwarders.\n * @param[in]\tinteractive_reader_socket - socket descriptor used to read\n *\t\t\t\tinteractive job data coming from mom writer.\n * @return\tvoid\n *\n * @par Side Effects\n * \tOn failure, the function will cause the qsub process to exit.\n *\n */\nstatic void\nx11handler(int X_data_socket, int interactive_reader_socket)\n{\n\tint n;\n\tstruct pfwdsock *socks;\n\tsocks = calloc(NUM_SOCKS, sizeof(struct pfwdsock));\n\tif (!socks) {\n\t\tfprintf(stderr, \"Calloc failed : out of memory\\n\");\n\t\texit_qsub(1);\n\t}\n\tfor (n = 0; n < NUM_SOCKS; n++) {\n\t\t(socks + n)->active = 0;\n\t}\n\tsocks->sock = X_data_socket;\n\tsocks->active = 1;\n\tsocks->listening = 1;\n\n\t/* Try to open a socket for the local X server. */\n\n\tport_forwarder(socks, x11_connect_display, display, 0,\n\t\t       interactive_reader_socket, get_reader_Xjob, log_cmds_portfw_msg,\n\t\t\t   QSUB_SIDE, pbs_conf.interactive_auth_method,\n\t\t\t   pbs_conf.interactive_encrypt_method, new_jobname);\n}\n\n/**\n * @brief\n *\tinteractive - set up for interactive communication with job\n *\n * @return      void\n *\n * @par Side Effects\n *\tOn failure, the function will cause the qsub process to exit.\n *\n */\nvoid\ninteractive(void)\n{\n\tint amt;\n\tchar cur_server[PBS_MAXSERVERNAME + PBS_MAXPORTNUM + 2];\n\tpbs_socklen_t fromlen;\n\tchar momjobid[PBS_MAXSVRJOBID + 1];\n\tint news;\n\tint nsel;\n\tchar *pc;\n\tfd_set selset;\n\n\tstruct sigaction act;\n\tstruct sockaddr_in from;\n\tstruct timeval timeout;\n\tint child;\n\tint ret;\n\tint type;\n\tvoid *data_in = NULL;\n\tsize_t len_in = 0;\n\n\t/* disallow ^Z which hangs up MOM starting an interactive job */\n\tsigemptyset(&act.sa_mask);\n\tact.sa_handler = no_suspend;\n\tact.sa_flags = 0;\n\tif (sigaction(SIGTSTP, &act, NULL) < 0) 
{\n\t\tperror(\"sigaction(SIGTSTP)\");\n\t\texit_qsub(1);\n\t}\n\n\t/* Catch SIGINT and SIGTERM, and setup to catch Death of child */\n\tact.sa_handler = catchint;\n\tif ((sigaction(SIGINT, &act, NULL) < 0) ||\n\t    (sigaction(SIGTERM, &act, NULL) < 0)) {\n\t\tperror(\"unable to catch signals\");\n\t\texit_qsub(1);\n\t}\n\tact.sa_handler = toolong;\n\tif ((sigaction(SIGALRM, &act, NULL) < 0)) {\n\t\tperror(\"cannot catch alarm\");\n\t\texit_qsub(2);\n\t}\n\n\t/* save the old terminal setting */\n\tif (tcgetattr(0, &oldtio) < 0) {\n\t\tperror(\"qsub: unable to get terminal settings\");\n\t\texit_qsub(1);\n\t}\n\n\t/* Get the current window size, to be sent to MOM later */\n\tif (getwinsize(&wsz)) {\n\t\t/* unable to get actual values, set defaults */\n\t\twsz.ws_row = 20;\n\t\twsz.ws_col = 80;\n\t\twsz.ws_xpixel = 0;\n\t\twsz.ws_ypixel = 0;\n\t}\n\n\tprintf(\"qsub: waiting for job %s to start\\n\", new_jobname);\n\n\t/* Accept connection on socket set up earlier */\n\tnsel = 0;\n\twhile (nsel == 0) {\n\t\tFD_ZERO(&selset);\n\t\tFD_SET(comm_sock, &selset);\n\t\ttimeout.tv_usec = 0;\n\t\ttimeout.tv_sec = 30;\n\t\tnsel = select(FD_SETSIZE, &selset, NULL, NULL, &timeout);\n\t\tif (nsel == -1) {\n\t\t\tif (errno == EINTR)\n\t\t\t\tnsel = 0;\n\t\t\telse {\n\t\t\t\tperror(\"qsub: select failed\");\n\t\t\t\texit_qsub(1);\n\t\t\t}\n\t\t}\n\t\tif (nsel == 0) {\n\t\t\t/* connect to server, status job to see if still there */\n\t\t\tif (!locate_job(new_jobname, server_out, cur_server)) {\n\t\t\t\tfprintf(stderr, \"qsub: job %s apparently deleted\\n\", new_jobname);\n\t\t\t\texit_qsub(1);\n\t\t\t}\n\t\t}\n\t}\n\n\t/* apparently someone is attempting to connect to us */\n\nretry:\n\tfromlen = sizeof(from);\n\tif ((news = accept(comm_sock, (struct sockaddr *) &from, &fromlen)) < 0) {\n\t\tperror(\"qsub: accept error from Interactive socket \");\n\t\texit_qsub(1);\n\t}\n\n\t/*\n\t * When Mom connects we expect:\n\t * first, to engage in an authentication activity\n\t * second, mom 
sends the job id for us to verify\n\t */\n\n\tret = auth_exec_socket(news, &from, pbs_conf.interactive_auth_method, pbs_conf.interactive_encrypt_method, new_jobname);\n\tif (ret != INTERACTIVE_AUTH_SUCCESS) {\n\t\tfprintf(stderr, \"qsub: failed authentication with execution host\\n\");\n\t\tshutdown(news, SHUT_RDWR);\n\t\tclose(news);\n\t\tdis_destroy_chan(news);\n\t\tif (ret == INTERACTIVE_AUTH_RETRY)\n\t\t\tgoto retry;\n\t\telse\n\t\t\texit_qsub(1);\n\t}\n\n\t/* now verify the value of job id */\n\n\tif (transport_chan_get_ctx_status(news, FOR_ENCRYPT) == (int) AUTH_STATUS_CTX_READY) {\n\t\tint len = transport_recv_pkt(news, &type, &data_in, &len_in);\n\t\tif (len <= 0) { /* no data read */\n\t\t\tshutdown(news, SHUT_RDWR);\n\t\t\tclose(news);\n\t\t\tdis_destroy_chan(news);\n\t\t\tgoto retry;\n\t\t}\n\t\tstrncpy(momjobid, (char *) data_in, len_in);\n\t} else {\n\t\tamt = PBS_MAXSVRJOBID + 1;\n\t\tpc = momjobid;\n\t\twhile (amt > 0) {\n\t\t\tint len = CS_read(news, pc, amt);\n\t\t\tif (len <= 0)\n\t\t\t\tbreak;\n\t\t\tpc += len;\n\t\t\tif (*(pc - 1) == '\\0')\n\t\t\t\tbreak;\n\t\t\tamt -= len;\n\t\t}\n\t\tif (pc == momjobid) { /* no data read */\n\t\t\tshutdown(news, SHUT_RDWR);\n\t\t\tclose(news);\n\t\t\tdis_destroy_chan(news);\n\t\t\tgoto retry;\n\t\t}\n\t}\n\n\tif (strncmp(momjobid, new_jobname, PBS_MAXSVRJOBID) != 0) {\n\t\tfprintf(stderr, \"qsub: invalid job name from execution server\\n\");\n\t\tshutdown(news, SHUT_RDWR);\n\t\tdis_destroy_chan(news);\n\t\texit_qsub(1);\n\t}\n\n\t/*\n\t * got the right job, send:\n\t *\t\tterminal type as \"TERM=xxxx\"\n\t *\t\twindow size as   \"WINSIZE=r,c,x,y\"\n\t */\n\tsend_term(news);\n\tsend_winsize(news);\n\n\tprintf(\"qsub: job %s ready\\n\\n\", new_jobname);\n\n\t/* set SIGINT, SIGTERM processing to default */\n\n\tact.sa_handler = SIG_DFL;\n\tif ((sigaction(SIGINT, &act, NULL) < 0) ||\n\t    (sigaction(SIGTERM, &act, NULL) < 0) ||\n\t    (sigaction(SIGALRM, &act, NULL) < 0) ||\n\t    (sigaction(SIGTSTP, &act, NULL) < 0)) 
{\n\t\tperror(\"unable to reset signals\");\n\t\texit_qsub(1);\n\t}\n\n\tchild = fork();\n\tif (child == 0) {\n\t\t/* child process - start the reader function set terminal into raw mode */\n\t\tsettermraw(&oldtio);\n\n\t\tif (Forwardx11_opt) {\n\t\t\t/*\n\t\t\t * if forwardx11_opt is set call x11handler which\n\t\t\t * will act as a reader as well as a port forwarder\n\t\t\t */\n\t\t\tx11handler(X11_comm_sock, news);\n\t\t} else {\n\t\t\t/* call interactive job's reader */\n\t\t\tif (transport_chan_get_ctx_status(news, FOR_ENCRYPT) == (int) AUTH_STATUS_CTX_READY) {\n\t\t\t\t(void) pkt_reader(news);\n\t\t\t} else {\n\t\t\t\t(void) reader(news);\n\t\t\t}\n\t\t}\n\t\t/* reset terminal */\n\t\ttcsetattr(0, TCSANOW, &oldtio);\n\t\tprintf(\"\\nqsub: job %s completed\\n\", new_jobname);\n\t\texit_qsub(0);\n\n\t} else if (child > 0) {\n\t\t/* parent - start the writer function */\n\t\tact.sa_handler = catchchild;\n\t\tif (sigaction(SIGCHLD, &act, NULL) < 0)\n\t\t\texit_qsub(1);\n\n\t\twriter(news);\n\n\t\t/* all done - make sure the child is gone and reset the terminal */\n\t\tkill(child, SIGTERM);\n\t\tshutdown(comm_sock, SHUT_RDWR);\n\t\tclose(comm_sock);\n\n\t\ttcsetattr(0, TCSANOW, &oldtio);\n\t\tprintf(\"\\nqsub: job %s completed\\n\", new_jobname);\n\t\texit_qsub(0);\n\t} else {\n\t\tperror(\"qsub: unable to fork\");\n\t\texit_qsub(1);\n\t}\n}\n\n/**\n * @brief\n *\tSets the filename to be used for the unix domain socket based comm.\n *\tThis is formed by appending the UID and the target server name to the\n *\tfilename. The length of the string is restricted to the length of the\n *\tglobal variable fl. 
This is fairly small (108 characters) for Linux.\n *\n * @param[out] fname - The filename in tmpdir that is used as the unix domain socket\n *\t\t\tfile.\n *\n */\nvoid\nget_comm_filename(char *fname)\n{\n\tchar *env_svr = getenv(PBS_CONF_SERVER_NAME);\n\tchar *env_port = getenv(PBS_CONF_BATCH_SERVICE_PORT);\n\tint count = 0;\n\tchar buf[LARGE_BUF_LEN];\n\tint len;\n\tunsigned char hash[SHA_DIGEST_LENGTH];\n\tint i;\n\n\tcount = snprintf(fname, MAXPIPENAME, \"%s/pbs_%.16s_%lu_%.8s_%.32s_%.16s_%.5s\",\n\t\t\t tmpdir,\n\t\t\t server_out[0] == '\\0' ? \"default\" : server_out,\n\t\t\t (unsigned long int) getuid(),\n\t\t\t cred_name,\n\t\t\t get_conf_path(),\n\t\t\t env_svr ? env_svr : \"\",\n\t\t\t env_port ? env_port : \"\");\n\n\tif (count >= MAXPIPENAME) {\n\t\tcount = snprintf(fname, MAXPIPENAME, \"%s/pbs_\", TMP_DIR);\n\t\tlen = snprintf(buf, MAXPIPENAME, \"%.16s_%lu_%.8s_%.32s_%.16s_%.5s\",\n\t\t\t       server_out[0] == '\\0' ? \"default\" : server_out,\n\t\t\t       (unsigned long int) getuid(),\n\t\t\t       cred_name,\n\t\t\t       get_conf_path(),\n\t\t\t       (env_svr == NULL) ? \"\" : env_svr,\n\t\t\t       (env_port == NULL) ? 
\"\" : env_port);\n\n\t\tif (len + count < MAXPIPENAME) {\n\t\t\tpbs_strncpy(fname + count, buf, MAXPIPENAME - count);\n\t\t} else {\n\t\t\tif (SHA1((const unsigned char *) buf, strlen(buf), (unsigned char *) &hash)) {\n\t\t\t\tfor (i = 0; i < SHA_DIGEST_LENGTH; i++)\n\t\t\t\t\tsprintf(buf + (i * 2), \"%02x\", hash[i]);\n\n\t\t\t\tbuf[SHA_DIGEST_LENGTH * 2] = 0;\n\t\t\t}\n\t\t\tpbs_strncpy(fname + count, buf, MAXPIPENAME - count);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\tCheck whether a unix domain socket file is available.\n *\tThat is an indication that a background qsub might already be running.\n *\n * @param[out]\tfname - The filename used for the communication pipe/socket for\n *\t\t\tthe communication between background and foreground\n *\t\t\tqsub processes.\n *\n * @return int\n * @retval\t0 - Not available\n * @retval\t1 - available\n *\n */\nint\ncheck_qsub_daemon(char *fname)\n{\n\tget_comm_filename(fname);\n\tif (access(fname, F_OK) == 0) {\n\t\t/* check if file is usable */\n\t\treturn 1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tThe daemon_stuff Unix counterpart.\n *\tIt creates a unix domain socket server and starts listening on it.\n *\tThe umask is set to 077 so that the domain socket file is owned and\n *\taccessible by the user executing qsub only. Once a client (foreground\n *\tqsub) connects, it receives all the data from the foreground qsub and\n *\texecutes do_submit, on the pre-established connection to pbs_server.\n *\tThe connection to server was established by the caller of this function\n *\tby calling do_connect().\n *\tThis function also does a \"select\" wait on input of data from foreground\n *\tqsub processes, and a close notification on the socket with pbs_server.\n *\tThe select breaks if foreground qsubs connect, the pbs_server dies, or\n *\tthe timeout of 1 minute expires. 
For the latter two cases, this function\n *\tdoes a silent exit of the background qsub daemon.\n *\n */\nstatic void\ndaemon_stuff(void)\n{\n\tint sock, bindfd;\n\tstruct sockaddr_un s_un;\n\tstruct sockaddr from;\n\tsocklen_t fromlen;\n\tint rc;\n\tfd_set readset;\n\tfd_set workset;\n\tstruct timeval timeout;\n\tint n, maxfd;\n\tmode_t cmask = 0077;\n\ttime_t connect_time = time(0);\n\tsigset_t newsigmask, oldsigmask;\n\tchar *err_op = \"\";\n\tchar log_buf[LOG_BUF_SIZE];\n\tint cred_timeout = 0;\n\n\t/* set umask so socket file created is only accessible by same user */\n\tumask(cmask);\n\tsigemptyset(&newsigmask);\n\tsigaddset(&newsigmask, SIGPIPE);\n\tsigprocmask(SIG_BLOCK, &newsigmask, NULL);\n\n\t/* start up a unix domain socket to listen */\n\tif ((bindfd = socket(AF_UNIX, SOCK_STREAM, 0)) == -1) {\n\t\terr_op = \"socket\";\n\t\tgoto error;\n\t}\n\n\ts_un.sun_family = AF_UNIX;\n\tsnprintf(s_un.sun_path, sizeof(s_un.sun_path), \"%s\", fl);\n\n\tif (bind(bindfd, (const struct sockaddr *) &s_un, sizeof(s_un)) == -1)\n\t\texit(1); /* dont go to error */\n\n\tFD_ZERO(&readset);\n\tif (listen(bindfd, 1) != 0) {\n\t\terr_op = \"listen\";\n\t\tgoto error;\n\t}\n\n\tFD_SET(bindfd, &readset);\n\tFD_SET(sd_svr, &readset);\n\tmaxfd = (bindfd > sd_svr) ? 
bindfd : sd_svr;\n\twhile (1) {\n\n\t\terr_op = \"\";\n\n\t\tmemcpy(&workset, &readset, sizeof(readset));\n\n\t\ttimeout.tv_usec = 0;\n\t\t/* since timeout gets reset on Linux */\n\t\tif (cred_timeout == 1)\n\t\t\ttimeout.tv_sec = QSUB_DMN_TIMEOUT_SHORT; /* Short timeout to allow any foreground process to finish before exiting */\n\t\telse\n\t\t\ttimeout.tv_sec = QSUB_DMN_TIMEOUT_LONG;\n\t\tn = select(maxfd + 1, &workset, NULL, NULL, &timeout);\n\t\tif (n == 0)\n\t\t\tgoto out; /* daemon timed out waiting for connect from foreground */\n\t\telse if (n == -1) {\n\t\t\terr_op = \"select failed\";\n\t\t\tgoto error;\n\t\t}\n\n\t\t/*\n\t\t * check if we are past the credential timeout\n\t\t * Error out even if it is close to CREDENTIAL_LIFETIME, as\n\t\t * request could take a while to reach server and get processed\n\t\t * Qsub then does a regular submit (new connection)\n\t\t */\n\t\tif (cred_timeout == 0 && ((time(0) - connect_time) > (CREDENTIAL_LIFETIME - QSUB_DMN_TIMEOUT_LONG))) {\n\t\t\tunlink(fl);\n\t\t\tcred_timeout = 1;\n\t\t}\n\n\t\t/* Shut down the qsub daemon if the server has closed the connection */\n\t\tif (FD_ISSET(sd_svr, &workset)) {\n\t\t\tif (recv(sd_svr, &rc, 1, MSG_OOB) < 1)\n\t\t\t\tgoto out;\n\t\t}\n\n\t\t/* accept the connection */\n\t\tfromlen = sizeof(from);\n\t\tif ((sock = accept(bindfd, &from, &fromlen)) == -1) {\n\t\t\terr_op = \"accept\";\n\t\t\tgoto error;\n\t\t}\n\n\t\tif ((recv_attrl(&sock, &attrib) != 0) ||\n\t\t    (recv_string(&sock, destination) != 0) ||\n\t\t    (recv_string(&sock, script_tmp) != 0) ||\n\t\t    (recv_string(&sock, cred_name) != 0) ||\n\t\t    (recv_dyn_string(&sock, &v_value) != 0) ||\n\t\t    (recv_dyn_string(&sock, &basic_envlist) != 0) ||\n\t\t    (recv_dyn_string(&sock, &qsub_envlist) != 0) ||\n\t\t    (recv_string(&sock, qsub_cwd) != 0) ||\n\t\t    (recv_opts(&sock) != 0)) {\n\t\t\terr_op = \"recv data from foreground\";\n\t\t\tgoto error;\n\t\t}\n\n\t\t/*\n\t\t * At this point the background qsub daemon has 
received all the data from the\n\t\t * foreground. Let's tell the foreground that we have received the data, so that\n\t\t * if we crash at any point after this the foreground should not end up\n\t\t * submitting a duplicate job. However, if the foreground did not get this acknowledgment,\n\t\t * then it could go ahead and do a regular job submit.\n\t\t */\n\t\trc = 0;\n\t\tif (dosend(&sock, (char *) &rc, sizeof(int)) != 0) {\n\t\t\terr_op = \"send data to foreground\";\n\t\t\tgoto error;\n\t\t}\n\n\t\t/* set the current work directory by doing a chdir */\n\t\tif (chdir(qsub_cwd) != 0) {\n\t\t\terr_op = \"chdir\";\n\t\t\tgoto error;\n\t\t}\n\n\t\tif (setenv(\"PWD\", qsub_cwd, 1) != 0) {\n\t\t\terr_op = \"setenv\";\n\t\t\tgoto error;\n\t\t}\n\n\t\tsigemptyset(&newsigmask);\n\t\tsigaddset(&newsigmask, SIGXCPU);\n\t\tsigaddset(&newsigmask, SIGXFSZ);\n\t\tsigaddset(&newsigmask, SIGTSTP);\n\t\tsigaddset(&newsigmask, SIGINT);\n\t\tsigaddset(&newsigmask, SIGSTOP);\n\t\tsigaddset(&newsigmask, SIGTERM);\n\t\tsigaddset(&newsigmask, SIGALRM);\n\t\tsigaddset(&newsigmask, SIGQUIT);\n\t\tsigaddset(&newsigmask, SIGUSR1);\n\t\tsigaddset(&newsigmask, SIGUSR2);\n\t\tsigprocmask(SIG_BLOCK, &newsigmask, &oldsigmask);\n\n\t\trc = do_submit2(retmsg);\n\n\t\tif (send_string(&sock, retmsg) != 0) {\n\t\t\terr_op = \"send data to foreground\";\n\t\t\tgoto error;\n\t\t}\n\t\tif (dosend(&sock, (char *) &rc, sizeof(int)) != 0) {\n\t\t\terr_op = \"send data to foreground\";\n\t\t\tgoto error;\n\t\t}\n\n\t\tclose(sock);\n\t\tsigprocmask(SIG_SETMASK, &oldsigmask, NULL);\n\n\t\tqsub_free_attrl(attrib);\n\t\tattrib = NULL;\n\t\tfree(v_value);\n\t\tv_value = NULL;\n\t\tfree(basic_envlist);\n\t\tbasic_envlist = NULL;\n\t\tfree(qsub_envlist);\n\t\tqsub_envlist = NULL;\n\n\t\tif (cred_buf != NULL) {\n\t\t\tmemset(cred_buf, 0, cred_len);\n\t\t\tfree(cred_buf);\n\t\t\tcred_buf = NULL;\n\t\t}\n\n\t\t/* Exit the daemon if it can't submit the job */\n\t\tif (rc == 
DMN_REFUSE_EXIT)\n\t\t\tgoto out;\n\t}\n\nout:\n\tclose(bindfd);\n\tif (cred_timeout != 1)\n\t\tunlink(fl);\n\texit(0);\n\nerror:\n\tsprintf(log_buf, \"Background qsub: Failed at %s, errno=%d\", err_op, errno);\n\tlog_syslog(log_buf);\n\tunlink(fl);\n\tclose(bindfd);\n\texit(1);\n}\n\n/**\n * @brief\n *  Try to submit job through daemon. On Unix, the daemon would be created by\n *  forking during a prior invocation of the qsub command. The foreground qsub\n *  process tries to send the job to the daemon using Unix domain sockets.\n *\n * @param[out] daemon_up         - Indicate whether daemon is running\n * @param[out] do_regular_submit - Indicate whether to do regular submit\n *\n * @return     rc                - Error code\n */\nint\ndaemon_submit(int *daemon_up, int *do_regular_submit)\n{\n\tint sock; /* UNIX domain socket for talking to daemon */\n\tstruct sockaddr_un s_un;\n\tsigset_t newsigmask;\n\tint rc = 0;\nagain:\n\t/*\n\t * In case of Unix, use fork. Foreground checks if connection is\n\t * possible with background daemon. The communication used is unix\n\t * domain sockets. Only the specified user can connect to this socket\n\t * since the domain socket is created with a 0600 permission.\n\t *\n\t * If connection fails, proceed with qsub in the normal flow, and at\n\t * the end fork and stay in the background, while the foreground\n\t * process returns control to the shell. 
Subsequent qsubs will be able\n\t * to connect to this forked background qsub.\n\t *\n\t */\n\t*daemon_up = check_qsub_daemon(fl);\n\tif (*daemon_up == 1) {\n\t\t/* pass information to daemon */\n\t\t/* wait for job-id or error string */\n\t\tif ((sock = socket(AF_UNIX, SOCK_STREAM, 0)) == -1)\n\t\t\treturn rc;\n\n\t\ts_un.sun_family = AF_UNIX;\n\t\tsnprintf(s_un.sun_path, sizeof(s_un.sun_path), \"%s\", fl);\n\n\t\tif (connect(sock, (const struct sockaddr *) &s_un, sizeof(s_un)) == -1) {\n\t\t\tint refused = (errno == ECONNREFUSED);\n\n\t\t\tclose(sock);\n\t\t\tif (refused) {\n\t\t\t\t/* daemon unavailable, del temp file, restart */\n\t\t\t\tif (unlink(fl) != 0)\n\t\t\t\t\treturn rc;\n\n\t\t\t\tgoto again;\n\t\t\t}\n\t\t\treturn rc;\n\t\t}\n\n\t\t/* block SIGPIPE on write() failures. */\n\t\tsigemptyset(&newsigmask);\n\t\tsigaddset(&newsigmask, SIGPIPE);\n\t\tsigprocmask(SIG_BLOCK, &newsigmask, NULL);\n\n\t\tif ((send_attrl(&sock, attrib) == 0) &&\n\t\t    (send_string(&sock, destination) == 0) &&\n\t\t    (send_string(&sock, script_tmp) == 0) &&\n\t\t    (send_string(&sock, cred_name) == 0) &&\n\t\t    (send_string(&sock, v_value ? v_value : \"\") == 0) &&\n\t\t    (send_string(&sock, basic_envlist) == 0) &&\n\t\t    (send_string(&sock, qsub_envlist ? 
qsub_envlist : \"\") == 0) &&\n\t\t    (send_string(&sock, qsub_cwd) == 0) &&\n\t\t    (send_opts(&sock) == 0)) {\n\n\t\t\t/*\n\t\t\t * Read back the first error code from the background,\n\t\t\t * which confirms whether the background received our data.\n\t\t\t */\n\t\t\tif (dorecv(&sock, (char *) &rc, sizeof(int)) == 0) {\n\t\t\t\t/*\n\t\t\t\t * We were able to send data to the background daemon.\n\t\t\t\t * Now, even if we fail to read back response from\n\t\t\t\t * background, we do not want to submit again.\n\t\t\t\t */\n\t\t\t\t*do_regular_submit = 0;\n\t\t\t}\n\n\t\t\t/* read back response from background daemon */\n\t\t\tif ((recv_string(&sock, retmsg) != 0) ||\n\t\t\t    (dorecv(&sock, (char *) &rc, sizeof(int)) != 0) ||\n\t\t\t    rc == DMN_REFUSE_EXIT) {\n\t\t\t\t/*\n\t\t\t\t * Something bad happened, either background submitted\n\t\t\t\t * and failed to send us response, or it failed before\n\t\t\t\t * submitting. If background qsub detects -V option, then\n\t\t\t\t * submit the job through foreground.\n\t\t\t\t */\n\t\t\t\tif (rc != DMN_REFUSE_EXIT) {\n\t\t\t\t\trc = -1;\n\t\t\t\t\tsprintf(retmsg, \"Failed to recv data from background qsub\\n\");\n\t\t\t\t\t/* Error message will be printed in caller */\n\t\t\t\t} else\n\t\t\t\t\t*do_regular_submit = 1;\n\t\t\t}\n\t\t}\n\t\t/* going down, no need to free stuff */\n\t\tclose(sock);\n\t}\n\n\treturn rc;\n}\n"
  },
  {
    "path": "src/cmds/qterm.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tqterm.c\n * @brief\n *  The qterm command terminates the batch server.\n *\n * @par\tSynopsis:\n *  qterm [-t type] [-F|-f|-i] [-s] [-m] [server ...]\n *\n * @par\tOptions:\n *  -t  delay   Jobs are (1) checkpointed if possible; otherwise, (2) jobs are\n *\t\trerun (requeued) if possible; otherwise, (3) jobs are left to\n *              run.\n *\n *      immediate\n *              Jobs are (1) checkpointed if possible; otherwise, (2) jobs are\n *\t\trerun if possible; otherwise, (3) jobs are aborted.\n *\n *\tquick (the new default)\n *\t\tThe server will save state and exit leaving running jobs\n *\t\tstill running.  
Good for shutting down when you wish to\n *\t\tquickly restart the server.\n *\n *  -F\tshutdown the Secondary Server only (Primary stays up),\n *  -f  shutdown Secondary Servers as well, or\n *  -i\tidle the Secondary Server\n *\n *  -s\tshutdown scheduler as well\n *\n *  -m  shutdown Moms also\n *\n * @par\tArguments:\n *  server ...\n *      A list of servers to terminate.\n *\n * @author\tBruce Kelly\n *  National Energy Research Supercomputer Center\n *  Livermore, CA\n *  May, 1993\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"cmds.h\"\n#include <pbs_version.h>\n\nint exitstatus = 0; /* Exit Status */\n\nstatic void execute(int, char *);\n\nint\nmain(int argc, char **argv)\n{\n\t/*\n\t *  This routine sends a Server Shutdown request to the batch server.  If the\n\t * batch request is accepted, and the type is IMMEDIATE, then no more jobs\n\t * are accepted and all jobs are checkpointed or killed.  If the type is\n\t * DELAY, then only privileged users can submit jobs, and jobs will be\n\t * checkpointed if available.\n\t */\n\n\tstatic char opts[] = \"t:smfFi\"; /* See man getopt */\n\tint s;\t\t\t\t/* The execute line option */\n\tstatic char usage[] = \"Usage: qterm [-t immediate|delay|[quick]] [-m] [-s] [-f|-i] [server ...]\\n\";\n\tstatic char usag2[] = \"       qterm --version\\n\";\n\tchar *type = NULL; /* Pointer to the type of termination */\n\tint downsched = 0;\n\tint downmom = 0;\n\tint downsecd = 0;\n\tint idlesecd = 0;\n\tint manner;\t/* The type of termination */\n\tint errflg = 0; /* Error flag */\n\n\t/*test for real deal or just version and exit*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\t/* Command line options */\n\twhile ((s = getopt(argc, argv, opts)) != EOF)\n\t\tswitch (s) {\n\t\t\tcase 't':\n\t\t\t\ttype = optarg;\n\t\t\t\tbreak;\n\t\t\tcase 's':\n\t\t\t\tdownsched = 1;\n\t\t\t\tbreak;\n\t\t\tcase 'm':\n\t\t\t\tif (idlesecd == 
2)\n\t\t\t\t\terrflg++;\n\t\t\t\tdownmom = 1;\n\t\t\t\tbreak;\n\t\t\tcase 'f':\n\t\t\t\tif ((idlesecd != 0) | (downsecd == 2))\n\t\t\t\t\terrflg++;\n\t\t\t\tdownsecd = 1;\n\t\t\t\tbreak;\n\t\t\tcase 'F':\n\t\t\t\tif ((idlesecd != 0) | (downsecd == 1) | (downmom != 0))\n\t\t\t\t\terrflg++;\n\t\t\t\tdownsecd = 2;\n\t\t\t\tbreak;\n\t\t\tcase 'i':\n\t\t\t\tif (downsecd)\n\t\t\t\t\terrflg++;\n\t\t\t\tidlesecd = 1;\n\t\t\t\tbreak;\n\t\t\tcase '?':\n\t\t\tdefault:\n\t\t\t\terrflg++;\n\t\t\t\tbreak;\n\t\t}\n\tif (errflg) {\n\t\tfprintf(stderr, \"%s\", usage);\n\t\tfprintf(stderr, \"%s\", usag2);\n\t\texit(1);\n\t} else if (type != NULL) {\n\t\tif (strcmp(type, \"delay\") == 0)\n\t\t\tmanner = SHUT_DELAY;\n\t\telse if (strcmp(type, \"immediate\") == 0)\n\t\t\tmanner = SHUT_IMMEDIATE;\n\t\telse if (strcmp(type, \"quick\") == 0)\n\t\t\tmanner = SHUT_QUICK;\n\t\telse {\n\t\t\tfprintf(stderr, \"%s\", usage);\n\t\t\tfprintf(stderr, \"%s\", usag2);\n\t\t\texit(1);\n\t\t}\n\t} else\n\t\tmanner = SHUT_QUICK;\n\n\tif (downsched)\n\t\tmanner |= SHUT_WHO_SCHED;\n\tif (downmom)\n\t\tmanner |= SHUT_WHO_MOM;\n\tif (downsecd == 1)\n\t\tmanner |= SHUT_WHO_SECDRY;\n\tif (downsecd == 2)\n\t\tmanner |= SHUT_WHO_SECDONLY;\n\telse if (idlesecd)\n\t\tmanner |= SHUT_WHO_IDLESECDRY;\n\n\t/*perform needed security library initializations (including none)*/\n\n\tif (CS_client_init() != CS_SUCCESS) {\n\t\tfprintf(stderr, \"qterm: unable to initialize security library.\\n\");\n\t\texit(1);\n\t}\n\n\tif (optind < argc)\n\t\tfor (; optind < argc; optind++)\n\t\t\texecute(manner, argv[optind]);\n\telse\n\t\texecute(manner, \"\");\n\n\t/*cleanup security library initializations before exiting*/\n\tCS_close_app();\n\n\texit(exitstatus);\n}\n\n/**\n * @brief\n *\texecutes to terminate server\n *\n * @param[in] manner - The manner in which to terminate the server.\n * @param[in] server - The name of the server to terminate.\n *\n * @return - Void\n *\n * @File Variables:\n * exitstatus  Set to two if an 
error occurs.\n *\n */\nstatic void\nexecute(int manner, char *server)\n{\n\tint ct;\t      /* Connection to the server */\n\tint err;      /* Error return from pbs_terminate */\n\tchar *errmsg; /* Error message from pbs_terminate */\n\n\tif ((ct = cnt2server(server)) > 0) {\n\t\terr = pbs_terminate(ct, manner, NULL);\n\t\tif (err != 0) {\n\t\t\terrmsg = pbs_geterrmsg(ct);\n\t\t\tif (errmsg != NULL) {\n\t\t\t\tfprintf(stderr, \"qterm: %s \", errmsg);\n\t\t\t} else {\n\t\t\t\tfprintf(stderr, \"qterm: Error (%d) terminating server \", pbs_errno);\n\t\t\t}\n\t\t\tfprintf(stderr, \"%s\\n\", server);\n\t\t\texitstatus = 2;\n\t\t}\n\t\tpbs_disconnect(ct);\n\t} else {\n\t\tfprintf(stderr, \"qterm: could not connect to server %s (%d)\\n\", server, pbs_errno);\n\t\texitstatus = 2;\n\t}\n}\n"
  },
  {
    "path": "src/cmds/sample.qstatrc",
    "content": "#\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n#\nproc niceprint {level name} {\n\tupvar $name str\n\n\tif {$level == 0} {\n\t\tset colone \"\"\n\t\tset limit 78\n\t} else {\n\t\tset colone \"\\t\"\n\t\tset limit 70\n\t}\n\n\tset comma [string first \",\" $str]\n\tif {$comma != -1} {\n\t\tif {$comma < $limit} {\n\t\t\tset line \"\"\n\t\t\twhile {[string length $line] + $comma < $limit} {\n\t\t\t\tset line \"$line[string range $str 0 $comma]\"\n\t\t\t\tset str [string range $str [expr $comma+1] end]\n\n\t\t\t\tset comma [string first \",\" $str]\n\t\t\t\tif {$comma == -1} {\n\t\t\t\t\tbreak\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\tset line [string range $str 0 $comma]\n\t\t\tset str [string range $str [expr $comma+1] end]\n\t\t}\n\t} else {\n\t\tset line $str\n\t\tset str \"\"\n\t}\n\n\tputs \"$colone[string range $line 0 $limit]\"\n\tset line [string range $line [expr $limit+1] end]\n\tset len [string length $line]\n\twhile {$len > 0} {\n\t\tputs \"\\t[string range $line 0 70]\"\n\t\tset line [string range $line 71 end]\n\t\tset len [string length $line]\n\t}\n}\n\nif {[lsearch [info vars] objects] == -1} return\n\nforeach object $objects {\n\tforeach obj [lindex $object 1] {\n\t\tputs \"[lindex $object 0]: [lindex $obj 0]\"\n\t\tforeach attr [lindex $obj 1] {\n\t\t\tset name [lindex $attr 0]\n\t\t\tset value [lindex $attr 1]\n\n\t\t\tset line \"   $name = $value\"\n\t\t\tset len [string length 
$line]\n\t\t\tif {$len < 80} {\n\t\t\t\tputs $line\n\t\t\t\tcontinue\n\t\t\t}\n\n\t\t\tset comma [string first \",\" $line]\n\t\t\tif {$comma == -1} {\n\t\t\t\tputs [string range $line 0 78]\n\t\t\t\tset line [string range $line 79 end]\n\t\t\t\tset len [string length $line]\n\t\t\t\twhile {$len > 0} {\n\t\t\t\t\tputs \"\\t[string range $line 0 70]\"\n\t\t\t\t\tset line [string range $line 71 end]\n\t\t\t\t\tset len [string length $line]\n\t\t\t\t}\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tniceprint 0 line\n\t\t\tset len [string length $line]\n\t\t\twhile {$len > 0} {\n\t\t\t\tniceprint 1 line\n\t\t\t\tset len [string length $line]\n\t\t\t}\n\t\t}\n\t\tputs [lindex $obj 3]\n\t}\n}\n"
  },
  {
    "path": "src/cmds/scripts/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nlibinitdir = $(libdir)/init.d\n\ndist_libinit_DATA = \\\n\tlimits.pbs_mom \\\n\tlimits.pbs_mom.compat \\\n\tlimits.post_services \\\n\tlimits.post_services.compat \n\nlibmpidir = $(libdir)/MPI\n\ndist_libmpi_DATA = \\\n\tpbsrun.ch_gm.init.in \\\n\tpbsrun.ch_mx.init.in \\\n\tpbsrun.gm_mpd.init.in \\\n\tpbsrun.intelmpi.init.in \\\n\tpbsrun.mpich2.init.in \\\n\tpbsrun.mvapich1.init.in \\\n\tpbsrun.mvapich2.init.in \\\n\tpbsrun.mx_mpd.init.in \\\n\tsgiMPI.awk\n\npythonlibdir = $(libdir)/python\n\ndist_pythonlib_PYTHON = \\\n\tpbs_bootcheck.py \\\n\tpbs_topologyinfo.py\n\nsysprofiledir = /etc/profile.d\n\ndist_sysprofile_DATA = \\\n\tpbs.csh \\\n\tpbs.sh\n\nunitfiledir = @_unitdir@\n\ndist_unitfile_DATA = \\\n\tpbs.service\n\ndist_libexec_SCRIPTS = \\\n\tpbs_habitat \\\n\tpbs_init.d \\\n\tpbs_postinstall \\\n\tpbs_preuninstall \\\n\tpbs_posttrans \\\n\tpbs_reload\n\ndist_bin_SCRIPTS = \\\n\tpbs_topologyinfo \\\n\tprintjob\n\ndist_sbin_SCRIPTS = \\\n\tpbs_dataservice \\\n\tpbs_ds_password \\\n\tpbs_server \\\n\tpbs_snapshot\n\ndist_sysconf_DATA = \\\n\tmodulefile \\\n\tpbs.csh \\\n\tpbs.sh\n\nCLEANFILES = \\\n\tpbs_init.d \\\n\tlimits.pbs_mom \\\n\tlimits.post_services\n\nlimits.pbs_mom: $(srcdir)/limits.pbs_mom.compat\n\tcp $? $@\n\nlimits.post_services: $(srcdir)/limits.post_services.compat\n\tcp $? $@\n"
  },
  {
    "path": "src/cmds/scripts/limits.pbs_mom.compat",
    "content": "# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\tThis file will be sourced by the PBS startup script, pbs_init.d.\n#\tIt is here only for binary compatibility with previous releases.\n#\tFeel free to replace its contents.\nMEMLOCKLIM=`ulimit -l`\nNOFILESLIM=`ulimit -n`\nSTACKLIM=`ulimit -s`\nulimit -l unlimited\nulimit -n 16384\nulimit -s unlimited\n"
  },
  {
    "path": "src/cmds/scripts/limits.post_services.compat",
    "content": "# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\tThis file will be sourced by the PBS startup script, pbs_init.d.\n#\tIt is here only for binary compatibility with previous releases.\n#\tFeel free to replace its contents.\nif [ -n \"${MEMLOCKLIM}\" ] ; then\n    ulimit -l ${MEMLOCKLIM}\n    ulimit -n ${NOFILESLIM}\n    ulimit -s ${STACKLIM}\nfi\n"
  },
  {
    "path": "src/cmds/scripts/modulefile.in",
    "content": "# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#%Module1.0\nproc ModulesHelp { } {\nputs stderr \"The PBS module defines the default system paths and\"\nputs stderr \"environment variables needed to utilize the PBS\"\nputs stderr \"workload management system.\"\nputs stderr \"\"\nputs stderr \"Use the command \\\"module list\\\" to determine whether the\"\nputs stderr \"pbs modulefile has been loaded in your environment.\"\nputs stderr \"\"\nputs stderr \"Use the command \\\"module show pbs\\\" to display the\"\nputs stderr \"actions carried out by this module.\"\nputs stderr \"\"\n}\nset _module_name [module-info name]\nset is_module_rm [module-info mode remove]\nset package_root @prefix@\nprepend-path MANPATH [file join ${package_root} share/man]\nprepend-path PATH [file join ${package_root} bin]\n"
  },
  {
    "path": "src/cmds/scripts/pbs.csh",
    "content": "\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n# Source in the /etc/pbs.conf file\nif ( $?PBS_CONF_FILE ) then\n   set conf = \"$PBS_CONF_FILE\"\nelse\n   set conf = /etc/pbs.conf\nendif\n\nif ( -r \"$conf\" ) then\n   setenv __PBS_EXEC `grep '^[[:space:]]*PBS_EXEC=' \"$conf\" | tail -1 | sed 's/^[[:space:]]*PBS_EXEC=\\([^[:space:]]*\\)[[:space:]]*/\\1/'`\n   if ( $?__PBS_EXEC ) then\n      # Define the PATH and MANPATH for the users\n      if ( $?PATH && -d ${__PBS_EXEC}/bin ) then\n         setenv PATH ${PATH}:${__PBS_EXEC}/bin\n      endif\n      if ( $?MANPATH && -d ${__PBS_EXEC}/man ) then\n         setenv MANPATH ${MANPATH}:${__PBS_EXEC}/man\n      endif\n      if ( $?MANPATH && -d ${__PBS_EXEC}/share/man ) then\n         setenv MANPATH ${MANPATH}:${__PBS_EXEC}/share/man\n      endif\n      if ( `whoami` == \"root\" ) then\n         if ( $?PATH && -d ${__PBS_EXEC}/sbin ) then\n            setenv PATH ${PATH}:${__PBS_EXEC}/sbin\n         endif\n      endif\n      unsetenv __PBS_EXEC\n   endif\nendif\n"
  },
  {
    "path": "src/cmds/scripts/pbs.service.in",
    "content": "# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n\n\n# It's not recommended to modify this file in-place, because it will be\n# overwritten during package upgrades.  
If you want to customize, the\n# best way is to create a file \"/etc/systemd/system/pbs.service\",\n# containing\n#       .include /lib/systemd/system/pbs.service\n#       ...make your changes here...\n# For more info about custom unit files, see -\n# http://fedoraproject.org/wiki/Systemd#How_do_I_customize_a_unit_file.2F_add_a_custom_unit_file.3F\n\n\n[Unit]\nDocumentation=man:pbs(8)\nSourcePath=@prefix@/libexec/pbs_init.d\nDescription=Portable Batch System\nAfter=network-online.target remote-fs.target nss-lookup.target\nWants=network-online.target\nDefaultDependencies=true\n\n[Service]\nType=forking\nRestart=no\nTimeoutStartSec=0\nTimeoutStopSec=5min\nDelegate=yes\nIgnoreSIGPIPE=no\nGuessMainPID=no\nExecStart=@prefix@/libexec/pbs_init.d start\nExecStop=@prefix@/libexec/pbs_init.d stop\nExecReload=@prefix@/libexec/pbs_reload @prefix@/libexec/pbs_init.d start\nTasksMax=infinity\n\n[Install]\nWantedBy=multi-user.target\n"
  },
  {
    "path": "src/cmds/scripts/pbs.sh",
    "content": "\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n# Source in the /etc/pbs.conf file\nconf=\"${PBS_CONF_FILE:-/etc/pbs.conf}\"\nif [ -r \"${conf}\" ]; then\n   __PBS_EXEC=`grep '^[[:space:]]*PBS_EXEC=' \"$conf\" | tail -1 | sed 's/^[[:space:]]*PBS_EXEC=\\([^[:space:]]*\\)[[:space:]]*/\\1/'`\n   if [ \"X${__PBS_EXEC}\" != \"X\" ]; then\n      # Define PATH and MANPATH for the users\n      [ -d \"${__PBS_EXEC}/bin\" ] && export PATH=\"${PATH}:${__PBS_EXEC}/bin\"\n      [ -d \"${__PBS_EXEC}/man\" ] && export MANPATH=\"${MANPATH}:${__PBS_EXEC}/man\"\n      [ -d \"${__PBS_EXEC}/share/man\" ] && export MANPATH=\"${MANPATH}:${__PBS_EXEC}/share/man\"\n      if [ `whoami` = \"root\" ]; then\n         [ -d \"${__PBS_EXEC}/sbin\" ] && export PATH=\"${PATH}:${__PBS_EXEC}/sbin\"\n      fi\n   fi\n   unset __PBS_EXEC\nfi\n"
  },
  {
    "path": "src/cmds/scripts/pbs_bootcheck.py",
    "content": "# coding: utf-8\n#\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\ntry:\n    import os\n    import time\n    import sys\n    from operator import itemgetter\n    import socket\n    from fcntl import flock, LOCK_EX, LOCK_UN\n\n    def __get_uptime():\n        \"\"\"\n        Return the system boot time as an epoch timestamp.\n        The boot time is computed from '/proc/uptime' when that file\n        exists, otherwise from the output of the 'uptime' command.\n        \"\"\"\n        _boot_time = 0\n        if os.path.exists('/proc/uptime'):\n            # '/proc/uptime' exists; get the system uptime from it\n            _uptime = open('/proc/uptime', 'r')\n            _boot_time = int(\n                time.time() - float(_uptime.readline().split()[0]))\n            _uptime.close()\n        else:\n            # '/proc/uptime' does not exist; get the system uptime from the\n            # 'uptime' command, whose output format is as follows:\n            # <current time> <uptime>, <number of users logged into system>,\n            # <load average>\n            # Example: 11:14pm  up 150 days  5:39,  5 users,  load average:\n            # 0.07, 0.25, 0.22\n            # In the above format, <uptime> will be in one of the following\n            # formats:\n            #\n            # 1. up MM min,\n            # \tExample 1: up 1 min,\n            # \tExample 2: up 12 min,\n            #\n            # 2. 
up HH:MM,\n            # \tExample 2: up 12:45,\n            #\n            # 3. up <number of days> day<(s)>,\n            # \tExample 3.1: up 23 day(s),\n            # \tExample 3.2: up 23 days\n            #\n            # 4. up <number of days> day<(s)> HH:MM,\n            # \tExample 4.1: up 23 day(s) 12:45,\n            # \tExample 4.2: up 23 days 12:45,\n            # \tExample 4.3: up 23 days, 12:45,\n\n            _uptime = os.popen('uptime').readline()\n            _days = _hours = _min = 0\n            if 'day' in _uptime:\n                (_days, _hm) = itemgetter(2, 4)(_uptime.split())\n            else:\n                _hm = itemgetter(2)(_uptime.split())\n            if ':' in _hm:\n                (_hours, _min) = _hm.split(':')\n                _min = _min.strip(',')\n            else:\n                _min = _hm\n            _boot_time = int(time.time()) - \\\n                (int(_days) * 86400 + int(\n                    _hours) * 3600 + int(_min) * 60)\n        return _boot_time\n\n    boot_time = __get_uptime()\n    boot_check_file = sys.argv[1]\n    prev_pbs_start_time = int(time.time())\n    hostname = socket.gethostname()\n    new_lines = ['###################################',\n                 '#      DO NOT EDIT THIS FILE      #',\n                 '#   THIS FILE IS MANAGED BY PBS   #',\n                 '###################################',\n                 hostname + '==' + str(prev_pbs_start_time)]\n    if os.path.exists(boot_check_file):\n        f = open(boot_check_file, 'a+')\n    else:\n        f = open(boot_check_file, 'w+')\n    flock(f.fileno(), LOCK_EX)\nexcept Exception:\n    sys.exit(255)\ntry:\n    f.seek(0)\n    lines = f.readlines()\n    for (line_no, line_content) in enumerate(lines):\n        if line_content[0] != '#' and line_content.strip() != '':\n            (host, start_time) = line_content.split('==')\n            if host == hostname:\n                prev_pbs_start_time = int(start_time)\n            else:\n   
             new_lines.append(line_content)\n    f.seek(0)\n    f.truncate()\n    f.writelines('\\n'.join(new_lines))\n    flock(f.fileno(), LOCK_UN)\nexcept Exception:\n    flock(f.fileno(), LOCK_UN)\n    f.close()\n    sys.exit(255)\nf.close()\nos.chmod(boot_check_file, 0o644)\n# if system being booted then exit with 0 otherwise exit with 1\nif boot_time >= prev_pbs_start_time:\n    sys.exit(0)\nelse:\n    sys.exit(1)\n"
  },
  {
    "path": "src/cmds/scripts/pbs_dataservice",
    "content": "#!/bin/sh\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n. ${PBS_CONF_FILE:-/etc/pbs.conf}\n\nPBS_TMPDIR=\"${PBS_TMPDIR:-${TMPDIR:-/var/tmp}}\"\nexport PBS_TMPDIR\n\nversion_mismatch=3\n\nget_db_user() {\n\tdbusr_file=\"${PBS_HOME}/server_priv/db_user\"\n\tif [ ! 
-f \"${dbusr_file}\" ]; then\n\t\techo \"pbsdata\"\n\t\treturn 0\n\telse\n\t\tcat \"${dbusr_file}\"\n\t\treturn $?\n\tfi\n}\n\n\n# The get_port function does the following:\n# Setting PBS_DATA_SERVICE_PORT based on availability  in following order:\n# 1. Set PBS_DATA_SERVICE_PORT to port provided by pbs\n# 2. Set PBS_DATA_SERVICE_PORT to port provided by pbs.conf\n# 3. Set PBS_DATA_SERVICE_PORT to port provided by /etc/services\n# 4. Set PBS_DATA_SERVICE_PORT to default port\n\nget_port() {\n\tif [ \"$1\" ]; then\n\t\tPBS_DATA_SERVICE_PORT=\"$1\"\n\telse\n\t\tif [ -z \"${PBS_DATA_SERVICE_PORT}\" ]; then\n\t\t\tPBS_DATA_SERVICE_PORT=`awk '{if($1==\"pbs_dataservice\") {x=$2}} END {{split(x,a,\"/\")} {if ( a[2] == \"tcp\" ) print a[1]}}' /etc/services`\n\t\tfi\n\tfi\n\n\tif [ -z \"${PBS_DATA_SERVICE_PORT}\" ]; then\n\t\tPBS_DATA_SERVICE_PORT=\"15007\"\n\tfi\n}\n\n#Set PBS_DATA_SERVICE_PORT\nexport PBS_DATA_SERVICE_PORT\nget_port $3\n\nDBPORT=${PBS_DATA_SERVICE_PORT}\nexport DBPORT\n\n. ${PBS_EXEC}/libexec/pbs_db_env\n\nDBUSER=`get_db_user`\nif [ $? -ne 0 ]; then\n\techo \"Could not retrieve PBS Data Service User\"\n\texit 1\nfi\nexport DBUSER\n\nif [ -z \"${PBS_DATA_SERVICE_PORT}\" ]; then\n\tPBS_DATA_SERVICE_PORT=\"15007\"\nfi\n\nCWD=`pwd`\n\nif [ -z \"$(ls -A ${PBS_HOME}/datastore)\" ]; then\n\techo \"PBS Data Service not initialized\"\n\texit 1\nfi\n\n# Check if DBUSER is defined and is non-root\nid=`id ${DBUSER} 2>&1`\nif [ $? -ne 0 ]; then\n\techo \"User ${DBUSER} does not exist\"\n\texit 1\nfi\n\n# Check that id is not 0\nid=`echo ${id} | cut -c5- | cut -d \"(\" -f1`\nif [ \"${id}\" = \"0\" ]; then\n\techo \"PBS Data Service User should not be root\"\n\texit 1\nfi\n\n# check I am root\nmyid=`id | cut -c5- | cut -d \"(\" -f1 2>&1`\nif [ \"${myid}\" != \"0\" ]; then\n\techo \"Please run as root\"\n\texit 1\nfi\n\n# Check if DBUSER is enabled, try to cd to user home\nsu - ${DBUSER} -c \"cd\" >/dev/null 2>&1\nif [ $? 
-ne 0 ]; then\n\techo \"Unable to login as user ${DBUSER}. Is the user enabled/home directory accessible?\"\n\texit 1\nfi\n\n# check that the user has not tampered with the data user manually\nif [ -d \"${PBS_HOME}/datastore\" ]; then\n\tdbstore_owner=`ls -ld ${PBS_HOME}/datastore | awk '{print $3}'`\n\tif [ \"${dbstore_owner}\" != \"${DBUSER}\" ]; then\n\t\techo \"PBS DataService user value has been changed manually. Please revert back to the original value \\\"${dbstore_owner}\\\"\"\n\t\texit 1\n\tfi\nelse\n\techo \"Path ${PBS_HOME}/datastore is not accessible by Data Service User ${DBUSER}\"\n\texit 1\nfi\n\n#\n# The check_status function does the following:\n# Checks what LIBDB thinks about the database status. If LIBDB says db is up,\n# just return 0 (running locally) else check if the datastore directory can be\n# locked.\n# If we cannot obtain exclusive lock, then return 2\n#\n# Return code values:\n#\t-1 - failed to execute\n#\t0  - Data service running on local host\n#\t1  - Data service definitely NOT running\n#\t2  - Failed to obtain exclusive lock\n#\n\ncheck_status() {\n\tcd ${PBS_HOME}/datastore\n\tif [ $? -ne 0 ]; then\n\t\techo \"Could not change directory to ${PBS_HOME}/datastore\"\n\t\texit -1\n\tfi\n\n\tres=`${data_srv_bin} -s status 2>&1`\n\tstatus=$?\n\n\tif [ ${status} -eq 0 ]; then\n\t\techo \"${res}\" | grep 'no server running'\n\t\tif [ $? -eq 0 ]; then\n\t\t\tstatus=1\n\t\telse\n\t\t\tmsg=\"PBS data service running locally\"\n\t\tfi\n\telse\n\t\t# check further only if LIBDB thinks no DATABASE is running\n\t\tstatus=1\n\t\tmsg=\"PBS data service not running\"\n\t\tout=`${data_srv_mon} check 2>&1`\n\t\tif [ $? 
-ne 0 ]; then\n\t\t\tstatus=2\n\t\t\tmsg=\"${out}\"\n\t\tfi\n\tfi\n\n\tcd ${CWD}\n\texport msg\n\treturn ${status}\n}\n\ndata_srv_bin=\"${PBS_EXEC}/sbin/pbs_dataservice.bin\"\ndata_srv_mon=\"${PBS_EXEC}/sbin/pbs_ds_monitor\"\n\ncase \"$1\" in\n\tstart)\n\t\tcheck_status\n\t\tret=$?\n\t\tif [ ${ret} -eq 1 ]; then\n\t\t\techo \"Starting PBS Data Service..\"\n\t\t\t${data_srv_bin} -s start\n\t\t\tret=$?\n\n\t\t\tif [ ${ret} -ne 0 ]; then\n\t\t\t\techo \"Failed to start PBS Data Service\"\n\t\t\t\tif [ ${ret} -eq ${version_mismatch} ]; then\n\t\t\t\t\techo \"PBS database needs to be upgraded.\"\n\t\t\t\t\techo \"Consult the documentation/release notes for details.\"\n\t\t\t\tfi\n\t\t\tfi\n\t\telse\n\t\t\techo \"${msg} - cannot start\"\n\t\tfi\n\n\t\texit ${ret}\n\t\t;;\n\n\tstop)\n\t\tcheck_status\n\t\tret=$?\n\t\tif [ ${ret} -ne 0 ]; then\n\t\t\techo \"${msg} - cannot stop\"\n\t\t\texit 0\n\t\tfi\n\n\t\t# Check if PBS is running\n\t\t${PBS_EXEC}/bin/qstat -Bf >/dev/null 2>&1\n\t\tif [ $? -eq 0 ]; then\n\t\t\techo \"PBS server is running. Cannot stop PBS Data Service now.\"\n\t\t\tserver_pid=\"`ps -ef | grep \"pbs_server.bi[n]\" | awk '{print $2}'`\"\n\t\t\tif [ -z \"${server_pid}\" ]; then\n\t\t\t\tserver_host=\"`${PBS_EXEC}/bin/qstat -Bf | grep 'server_host' | awk '{print $NF}'`\"\n\t\t\t\tif [ ! -z \"${server_host}\" ]; then\n\t\t\t\t\techo \"PBS server is running on host ${server_host}.\"\n\t\t\t\tfi\n\t\t\t\techo \"Please check pbs.conf file to verify PBS is configured with correct server host value. Exiting.\"\n\t\t\tfi\n\t\t\texit 1\n\t\tfi\n\t\techo \"Stopping PBS Data Service..\"\n\n\t\tcd ${PBS_HOME}/datastore\n\t\tif [ $? 
-ne 0 ]; then\n\t\t\techo \"Could not change directory to ${PBS_HOME}/datastore\"\n\t\t\texit 1\n\t\tfi\n\t\t${data_srv_bin} -s stop\n\t\tret=$?\n\t\tcd ${CWD}\n\t\tif [ ${ret} -ne 0 ]; then\n\t\t\techo \"Failed to stop PBS Data Service\"\n\t\t\techo \"(Check if there are active connections to the data service)\"\n\t\telse\n\t\t\t# check that we are able to acquire locks again, ie, monitor has died\n\t\t\ti=0\n\t\t\tcheck_status\n\t\t\tsret=$?\n\t\t\twhile [ $i -lt 10 -a ${sret} -ne 1 ]\n\t\t\tdo\n\t\t\t\tsleep 1\n\t\t\t\tcheck_status\n\t\t\t\tsret=$?\n\t\t\t\ti=`expr $i + 1`\n\t\t\tdone\n\t\t\tif [ ${sret} -ne 1 ]; then\n\t\t\t\techo \"Failed to stop PBS Data Service\"\n\t\t\t\texit 1\n\t\t\tfi\n\t\tfi\n\t\texit ${ret}\n\t\t;;\n\n\tstatus)\n\t\tcheck_status\n\t\tret=$?\n\t\techo \"${msg}\"\n\t\texit ${ret}\n\t\t;;\n\n\t*) echo \"Usage: `basename $0` {start|stop|status}\"\n\t\texit 1\n\t\t;;\nesac\nexit 1\n"
  },
  {
    "path": "src/cmds/scripts/pbs_ds_password",
    "content": "#!/bin/sh\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n# Source the PBS configuration file\n. ${PBS_CONF_FILE:-/etc/pbs.conf}\n\n# Source the file that sets PGSQL_LIBSTR\n. \"$PBS_EXEC\"/libexec/pbs_db_env\n\nexec $PBS_EXEC/sbin/pbs_ds_password.bin ${1+\"$@\"}\n"
  },
  {
    "path": "src/cmds/scripts/pbs_habitat.in",
    "content": "#!/bin/sh\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nif [ $# -eq 1 -a \"$1\" = \"--version\" ]; then\n   echo pbs_version = @PBS_VERSION@\n   exit 0\nfi\n\nPBS_VERSION=@PBS_VERSION@\nINSTALL_DB=\"install_db\"\nUPGRADE_DB=\"upgrade_db\"\n\nget_db_user() {\n\t[ -f \"${dbuser_fl}\" ] && dbuser_name=`cat \"${dbuser_fl}\" | tr -d '[:space:]'`\n\t[ -z \"${dbuser_name}\" ] && dbuser_name=\"${PBS_DATA_SERVICE_USER:-@database_user@}\"\n\tif [ ! -f \"${dbuser_fl}\" ]; then\n\t\tprintf \"%s\" \"$dbuser_name\" >\"${dbuser_fl}\"\n\t\tchmod 0600 \"${dbuser_fl}\"\n\tfi\n\tcat \"${dbuser_fl}\"\n\treturn $?\n}\n\nchk_dataservice_user() {\n\tchk_usr=\"$1\"\n\n\t# do user-id related stuff first\n\tid=`id ${chk_usr} 2>&1`\n\tif [ $? -ne 0 ]; then\n\t\techo \"PBS Data Service user ${chk_usr} does not exist\"\n\t\treturn 1;\n\tfi\n\n\tid=`echo ${id} | cut -c5- | cut -d \"(\" -f1`\n\tif [ \"$id\" = \"0\" ]; then\n\t\techo \"User ${chk_usr} should not have root privileges\"\n\t\treturn 1;\n\tfi\n\n\t# login as ${chk_usr} and try to cd to user home dir\n\tsu - ${chk_usr} -c \"cd\" > /dev/null 2>&1\n\n\tif [ $? -ne 0 ]; then\n\t\techo \"Unable to login as user ${chk_usr}. 
Is the user enabled/home directory accessible?\"\n\t\treturn 1\n\tfi\n\treturn 0\n}\n\nis_cray_xt() {\n\tif [ -f /proc/cray_xt/cname ] ; then\n\t\treturn 0\n\tfi\n\treturn 1\n}\n\nchkenv() {\n\tline=`grep -s \"^${1}=\" $envfile`\n\tif [ -z \"$line\" ]; then\n\t\techo \"*** setting ${1}=$2\"\n\t\techo \"${1}=$2\" >> $envfile\n\telse\n\t\techo \"*** leave existing $line\"\n\tfi\n}\n\ncreatedir() {\n\tif [ -n \"$1\" -a ! -d \"$1\" ]; then\n\t\tif ! mkdir -p \"$1\"; then\n\t\t\techo \"*** Could not create $1\"\n\t\t\texit 1\n\t\tfi\n\tfi\n\tif [ -n \"$1\" -a -n \"$2\" ]; then\n\t\tchmod \"$2\" \"$1\"\n\tfi\n}\n\ncreatepath() {\n\twhile read mode dir ;do\n\t\tcreatedir \"${PBS_HOME}/${dir}\" $mode\n\tdone\n}\n\n# Return the name of the PBS server host\nget_server_hostname() {\n\tshn=\"\"\n\tif [ -z \"${PBS_PRIMARY}\" -o -z \"${PBS_SECONDARY}\" ] ; then\n\t\tif [ -z \"${PBS_SERVER_HOST_NAME}\" ]; then\n\t\t\tshn=\"${PBS_SERVER}\"\n\t\telse\n\t\t\tshn=\"${PBS_SERVER_HOST_NAME}\"\n\t\tfi\n\telse\n\t\tshn=\"${PBS_PRIMARY}\"\n\tfi\n\techo ${shn} | awk '{print tolower($0)}'\n}\n\n# Ensure the supplied hostname is valid\ncheck_hostname() {\n\t# Check the hosts file\n\tgetent hosts \"${1}\" >/dev/null 2>&1 && return 0\n\t# Check DNS\n\thost \"${1}\" >/dev/null 2>&1 && return 0\n\treturn 1\n}\n\n# Backup pgsql binaries and libraries to PBS_EXEC\nbackup_pgsql() {\n\tpsql_ver=`psql --version | cut -d ' ' -f3 | cut -d '.' -f 1,2`\n\tmkdir -p \"$PBS_HOME/pgsql.forupgrade/bin\"\n\tmkdir -p \"$PBS_HOME/pgsql.forupgrade/lib\"\n\tmkdir -p \"$PBS_HOME/pgsql.forupgrade/lib64/pgsql\"\n\tmkdir -p \"$PBS_HOME/pgsql.forupgrade/share/pgsql/timezonesets\"\n\tmkdir -p \"$PBS_HOME/pgsql.forupgrade/share/timezonesets\"\n\tpgsql_lib_dir=`dirname $PGSQL_BIN`\"/lib64/pgsql\"\n\tif [ ! 
-d $pgsql_lib_dir ];then\n\t\tpgsql_lib_dir=\"/usr/pgsql-${psql_ver}/lib\"\n\tfi\n\tcp -r $pgsql_lib_dir/* \"$PBS_HOME/pgsql.forupgrade/lib64/pgsql\"\n\tcp -r $pgsql_lib_dir/* \"$PBS_HOME/pgsql.forupgrade/lib/\"\n\tpgsql_share_dir=`dirname $PGSQL_BIN`\"/share/pgsql\"\n\tif [ ! -d $pgsql_share_dir ];then\n\t\tpgsql_share_dir=\"/usr/pgsql-${psql_ver}/share\"\n\tfi\n\tcp -r $pgsql_share_dir/timezonesets/* \"$PBS_HOME/pgsql.forupgrade/share/pgsql/timezonesets\"\n\tcp -r $pgsql_share_dir/timezonesets/* \"$PBS_HOME/pgsql.forupgrade/share/timezonesets\"\n\tcp \"$PGSQL_BIN/pg_ctl\" \"$PBS_HOME/pgsql.forupgrade/bin\"\n\tcp \"$PGSQL_BIN/postgres\" \"$PBS_HOME/pgsql.forupgrade/bin\"\n\tcp \"$PGSQL_BIN/pg_controldata\" \"$PBS_HOME/pgsql.forupgrade/bin\"\n\tcp \"$PGSQL_BIN/pg_resetxlog\" \"$PBS_HOME/pgsql.forupgrade/bin\"\n\tcp \"$PGSQL_BIN/psql\" \"$PBS_HOME/pgsql.forupgrade/bin\"\n\tcp \"$PGSQL_BIN/pg_dump\" \"$PBS_HOME/pgsql.forupgrade/bin\"\n\tcp \"$PGSQL_BIN/pg_restore\" \"$PBS_HOME/pgsql.forupgrade/bin\"\n\tpsql_ver_min=9.3\n\tif [ ${psql_ver_min%.*} -eq ${psql_ver%.*} ] && [ ${psql_ver_min#*.} \\> ${psql_ver#*.} ] || [ ${psql_ver_min%.*} -gt ${psql_ver%.*} ]; then\n\t\ttouch \"$PBS_HOME/pgsql.forupgrade/oss\"\n\tfi\n}\n\n#\n# Start of the pbs_habitat script\n#\nconf=${PBS_CONF_FILE:-@PBS_CONF_FILE@}\nostype=`uname 2>/dev/null`\numask 022\n\necho \"***\"\n\n# Source pbs.conf to get paths: PBS_EXEC and PBS_HOME and PBS_START_*\n. $conf\n\n# Ensure certain variables are set\nif [ -z \"$PBS_EXEC\" ]; then\n\techo \"*** PBS_EXEC is not set\"\n\texit 1\nfi\nif [ ! -d \"$PBS_EXEC\" ]; then\n\techo \"*** $PBS_EXEC directory does not exist\"\n\texit 1\nfi\nif [ -z \"$PBS_HOME\" ]; then\n\techo \"*** PBS_HOME is not set\"\n\texit 1\nfi\nif [ ! -d \"$PBS_HOME\" -o ! 
\"$(/bin/ls -A $PBS_HOME)\" ]; then\n\techo \"*** WARNING: PBS_HOME not found in $PBS_HOME\"\n\tif [ -x \"$PBS_EXEC/sbin/pbs_server\" ]; then\n\t\tcomponent=\"server\"\n\telif [ -x \"$PBS_EXEC/sbin/pbs_mom\" ]; then\n\t\tcomponent=\"execution\"\n\telse\n\t\techo \"*** No need to execute pbs_habitat in client installation.\"\n\t\texit 0\n\tfi\n\t${PBS_EXEC}/libexec/pbs_postinstall $component $PBS_VERSION $PBS_EXEC $PBS_HOME \"\" \"sameconf\"\nfi\n\n\n# Get the current PBS version from qstat\nif [ -x \"$PBS_EXEC/bin/qstat\" ]; then\n\tpbs_version=`\"${PBS_EXEC}/bin/qstat\" --version | sed -e 's/^.* = //'`\n\tif [ -z \"$pbs_version\" ]; then\n\t\techo \"*** Could not obtain PBS version from qstat\"\n\t\texit 1\n\tfi\n\tif [ \"$pbs_version\" != \"$PBS_VERSION\" ]; then\n\t\techo \"*** Version mismatch.\"\n\t\techo \"*** Build version is $PBS_VERSION\"\n\t\techo \"*** qstat version is $pbs_version\"\n\t\texit 1\n\tfi\nelse\n\techo \"*** File not found: $PBS_EXEC/bin/qstat\"\n\texit 1\nfi\n\n# Perform sanity check on server name in pbs.conf\nserver_hostname=`get_server_hostname`\n[ \"$server_hostname\" = 'change_this_to_pbs_server_hostname' ] && server_hostname=''\nif [ -z \"${server_hostname}\" ] ; then\n\techo \"***\" >&2\n\techo \"*** The hostname of the PBS server in ${conf} is invalid.\" >&2\n\techo \"*** Update the configuration file before starting PBS.\" >&2\n\techo \"***\" >&2\n\texit 1\nfi\ncheck_hostname \"${server_hostname}\"\nif [ $? -ne 0 ]; then\n\techo \"***\" >&2\n\techo \"*** The PBS server could not be found: $server_hostname\" >&2\n\techo \"*** This value must resolve to a valid IP address.\" >&2\n\techo \"***\" >&2\n\texit 1\nfi\nserver=`echo ${server_hostname} | awk -F. '{print $1}'`\n\nif [ \"${PBS_START_SERVER:-0}\" != 0 ] ; then\n\t# Check for the db install script\n\tif [ ! 
-x \"${PBS_EXEC}/libexec/pbs_db_utility\" ]; then\n\t\techo \"${PBS_EXEC}/libexec/pbs_db_utility not found\"\n\t\texit 1\n\tfi\n\n\t# Source the file that sets DB env variables\n\t. \"$PBS_EXEC\"/libexec/pbs_db_env\n\tif [ $? -ne 0 ]; then\n\t\techo \"Could not setup PBS Data Service environment\"\n\t\texit 1\n\tfi\n\n\tPBS_licensing_loc_file=PBS_licensing_loc\n\n\tdbuser_fl=\"${PBS_HOME}/server_priv/db_user\"\n\tPBS_DATA_SERVICE_USER=`get_db_user`\n\tif [ $? -ne 0 ]; then\n\t\techo \"Could not retrieve PBS Data Service User\"\n\t\texit 1\n\tfi\n\n\t# Do not export the PBS_DATA_SERVICE_USER as a env var\n\t# since that would cause a false warning message\n\t# that \"deprecated\" variable PBS_DATA_SERVICE_USER is\n\t# being ignored.\n\n\tchk_dataservice_user ${PBS_DATA_SERVICE_USER}\n\tif [ $? -ne 0 ]; then\n\t\texit 1\n\tfi\n\texport PBS_DATA_SERVICE_USER\n\n\tserver_started=0\n\tPBS_DATA_SERVICE_PORT=${PBS_DATA_SERVICE_PORT:-\"@database_port@\"}\n\texport PBS_DATA_SERVICE_PORT\n\n\tcreate_new_svr_data=1\n\n\t# invoke the dataservice creation script for pbs\n\tresp=`${PBS_EXEC}/libexec/pbs_db_utility ${INSTALL_DB} 2>&1`\n\tret=$?\n\tif [ $ret -eq 2 ]; then\n\t\tcreate_new_svr_data=0\n\telif [ $ret -ne 0 ]; then\n\t\techo \"*** Error initializing the PBS dataservice\"\n\t\techo \"Error details:\"\n\t\techo \"$resp\"\n\t\texit $ret\n\tfi\n\n\texport PBS_HOME\n\texport PBS_EXEC\n\texport PBS_SERVER\n\texport PBS_ENVIRONMENT\n\n\tif [ $create_new_svr_data -eq 0 ]; then\n\t\t# datastore directory already exists\n\t\t# do the database upgrade\n\t\t${PBS_EXEC}/libexec/pbs_db_utility ${UPGRADE_DB}\n\tfi\n\n\tif [ $create_new_svr_data -eq 1 ] ; then\n\t\techo \"*** Setting default queue and resource limits.\"\n\t\techo \"***\"\n\n\t\t${PBS_EXEC}/sbin/pbs_server -t create > /dev/null\n\t\tret=$?\n\t\tif [ $ret -ne 0 ]; then\n\t\t\techo \"*** Error starting pbs server\"\n\t\t\texit $ret\n\t\tfi\n\t\tserver_started=1\n\n\t\tif is_cray_xt; then\n\t\t\t${PBS_EXEC}/bin/qmgr 
<<-EOF > /dev/null\n\t\t\t\tset server restrict_res_to_release_on_suspend = ncpus\n\t\t\tEOF\n\t\tfi\n\t\ttries=3\n\t\twhile [ $tries -ge 0 ]\n\t\tdo\n\t\t\t${PBS_EXEC}/bin/qmgr <<-EOF > /dev/null\n\t\t\t\tcreate queue workq\n\t\t\tEOF\n\t\t\tret=$?\n\t\t\tif [ $ret -eq 0 ]; then\n\t\t\t\tbreak\n\t\t\tfi\n\t\t\ttries=$((tries-1))\n\t\t\tsleep 2\n\t\tdone\n\t\t${PBS_EXEC}/bin/qmgr <<-EOF > /dev/null\n\t\t\tset queue workq queue_type = Execution\n\t\t\tset queue workq enabled = True\n\t\t\tset queue workq started = True\n\t\t\tset server default_queue = workq\n\t\tEOF\n\t\tif [ -f ${PBS_HOME}/server_priv/$PBS_licensing_loc_file ]; then\n\t\t\tread ans < ${PBS_HOME}/server_priv/$PBS_licensing_loc_file\n\t\t\techo \"*** Setting license file location(s).\"\n\t\t\techo \"***\"\n\t\t\t${PBS_EXEC}/bin/qmgr <<-EOF > /dev/null\n\t\t\t\tset server pbs_license_info = $ans\n\t\t\tEOF\n\t\t\tif ! is_cray_xt; then\n\t\t\t\trm -f ${PBS_HOME}/server_priv/$PBS_licensing_loc_file\t# clean up after INSTALL\n\t\t\tfi\n\t\tfi\n\telse\n\t\t# the upgrade case:  serverdb already exists, but license file\n\t\t# information is new\n\t\tif [ -f ${PBS_HOME}/server_priv/$PBS_licensing_loc_file ]; then\n\t\t\tread ans < ${PBS_HOME}/server_priv/$PBS_licensing_loc_file\n\t\t\techo \"*** Setting license file location(s).\"\n\t\t\techo \"***\"\n\t\t\t${PBS_EXEC}/sbin/pbs_server > /dev/null\n\t\t\t${PBS_EXEC}/bin/qmgr <<-EOF > /dev/null\n\t\t\t\tset server pbs_license_info = $ans\n\t\t\tEOF\n\t\t\tif ! 
is_cray_xt; then\n\t\t\t\trm -f ${PBS_HOME}/server_priv/$PBS_licensing_loc_file\t# clean up after INSTALL\n\t\t\telse\n\t\t\t\t${PBS_EXEC}/bin/qmgr <<-EOF > /dev/null\n\t\t\t\t\tset server restrict_res_to_release_on_suspend += ncpus\n\t\t\t\tEOF\n\t\t\tfi\n\t\t\tserver_started=1\n\t\tfi\n\tfi\n\n\tif [ $PBS_START_MOM != 0 ]; then\n\t\tif [ $server_started -eq 0 ]; then\n\t\t\t${PBS_EXEC}/sbin/pbs_server > /dev/null\n\t\t\tserver_started=1\n\t\tfi\n\n\t\tif ${PBS_EXEC}/bin/pbsnodes localhost > /dev/null 2>&1 ||\n\t\t   ${PBS_EXEC}/bin/pbsnodes $server > /dev/null 2>&1; then\n\t\t\t:\n\t\telse\n\t\t\t# node $server is not already available, create\n\t\t\t${PBS_EXEC}/bin/qmgr <<-EOF > /dev/null\n\t\t\t\tcreate node $server\n\t\t\tEOF\n\t\tfi\n\tfi\n\n\tif [ $server_started -eq 1 ]; then\n\t\t${PBS_EXEC}/bin/qterm\n\tfi\n\n\t# Take a backup of pgsql binaries and lib to the PBS_EXEC folder\n\tbackup_pgsql\n\nfi\n\n#\n# For overlay upgrades PBS_START_MOM will be disabled per the install\n# instructions. There may still be job and task files present that\n# require updating.\n#\nif [ -d ${PBS_HOME}/mom_priv/jobs ]; then\n\tupgrade_cmd=\"${PBS_EXEC}/sbin/pbs_upgrade_job\"\n\tif [ -x ${upgrade_cmd} ]; then\n\t\ttotal=0\n\t\tupgraded=0\n\t\tfor file in ${PBS_HOME}/mom_priv/jobs/*.JB; do\n\t\t\tif [ -f ${file} ]; then\n\t\t\t\t${upgrade_cmd} -f ${file}\n\t\t\t\tif [ $? 
-ne 0 ]; then\n\t\t\t\t\techo \"Failed to upgrade ${file}\"\n\t\t\t\telse\n\t\t\t\t\tupgraded=`expr ${upgraded} + 1`\n\t\t\t\tfi\n\t\t\t\ttotal=`expr ${total} + 1`\n\t\t\tfi\n\t\tdone\n\t\tif [ ${total} -gt 0 ]; then\n\t\t\techo \"Upgraded ${upgraded} of ${total} job files.\"\n\t\tfi\n\telse\n\t\techo \"WARNING: $upgrade_cmd not found!\"\n\tfi\nfi\n\n# Update the version file at the very end, after everything else succeeds.\n# This allows it to be re-run, in case a previous update attempt failed.\n# Clobber the existing pbs_version file and populate it with current version.\n# This will prevent updating PBS_HOME if this same version is re-installed\n#\necho \"${pbs_version}\" >\"$PBS_HOME/pbs_version\"\n\necho \"*** End of ${0}\"\nexit 0\n"
  },
  {
    "path": "src/cmds/scripts/pbs_init.d.in",
    "content": "#!/bin/bash\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\n#\n#    PBS init script\n#\n#        Recognized arguments:\n#            start   - start PBS\n#            stop    - terminate PBS\n#            restart - terminate and start PBS\n#            status  - report PBS daemon pids\n#\n#\n# chkconfig: 35 90 10\n# description: The Portable Batch System (PBS) is a flexible workload\n# management system. It operates on networked, multi-platform UNIX\n# environments, including heterogeneous clusters of workstations,\n# supercomputers, and massively parallel systems.\n#\n### BEGIN INIT INFO\n# Provides:       pbs\n# Required-Start: $network $local_fs $remote_fs $named\n# Should-Start: $sshd\n# Required-Stop:  $network $local_fs $remote_fs $named\n# Default-Start:  3 5\n# Default-Stop:   0 1 2 4 6\n# Description:    Portable Batch System\n### END INIT INFO\n\nif [ $# -eq 1 ] && [ $1 = \"--version\" ]; then\n\techo pbs_version = @PBS_VERSION@\n\texit 0\nfi\n\ntmpdir=${PBS_TMPDIR:-${TMPDIR:-\"/var/tmp\"}}\nostype=`uname 2>/dev/null`\n\ngetpid() {\n\tif [ -f $1 ]; then\n\t\tcat $1\n\telse\n\t\techo -1\n\tfi\n}\n\n# update_pids may cause an I/O read to block if PBS_HOME is on a shared\n# mount; it is called selectively after sanity checks are performed\nupdate_pids() {\n\tpbs_server_pid=`getpid ${PBS_HOME}/server_priv/server.lock`\n\tpbs_secondary_server_pid=`getpid 
${PBS_HOME}/server_priv/server.lock.secondary`\n\tpbs_mom_pid=`getpid ${PBS_MOM_HOME}/mom_priv/mom.lock`\n\tpbs_sched_pid=`getpid ${PBS_HOME}/sched_priv/sched.lock`\n\tpbs_secondary_sched_pid=`getpid ${PBS_HOME}/sched_priv/sched.lock.secondary`\n\tpbs_comm_pid=`getpid ${PBS_HOME}/server_priv/comm.lock`\n\tpbs_secondary_comm_pid=`getpid ${PBS_HOME}/server_priv/comm.lock.secondary`\n}\n\n# lc_host_name - convert host name into lower case short host name\n# also handle multiple names in PBS_LEAF_NAME\n# PBS_LEAF_NAME is now of the format: host:port,host:port so the following code is to cut on , (to get the first host) and then : to parse out the port\nlc_host_name()\n{\n  echo $1 | cut -d, -f1 | cut -d: -f1 | cut -d. -f1 | sed -e \"y/ABCDEFGHIJKLMNOPQRSTUVWXYZ/abcdefghijklmnopqrstuvwxyz/\"\n}\n\n# check_started - check if a particular pid is the program which is expected.\n#                 pbs stores the pid of the currently running incarnation of\n#                 itself.  This function is used to see if that pid is the correct\n#                 program.\n#         $1 - the pid\n#         $2 - the program name (pbs_server pbs_mom pbs_sched)\n#\n# return value: 0 - program is already running\n#                1 - program is not running\n#\n: check_started\ncheck_started() {\n\tps_out=`ps -p $1 -o args 2> /dev/null | tail -1`\n\tif [ -z \"${ps_out}\" -o \"`echo ${ps_out} | cut -c1`\" = \"[\" ]; then\n\t\tps_out=`ps -p $1 -o command 2> /dev/null | tail -1`\n\t\tif [ -z \"${ps_out}\" ]; then\n\t\t\treturn 1\n\t\tfi\n\tfi\n\n\t# strip out everything except executable name\n\tprog_name=`echo ${ps_out} | grep -how \"$2\"`\n\tif [ \"x${prog_name}\" = \"x$2\" ]; then\n\t\treturn 0;\n\tfi\n\n\treturn 1;\n}\n\n# check_prog - this function checks to see if a prog is still running.  
It will\n#              get the pid out of the prog.lock file and run check_started\n#               on that pid.\n#\n#        $1 is either \"server\" \"pbs_comm\" \"mom\" or \"sched\"\n#\n# return value: 0 - program is still running\n#                1 - program is not running\n#\n: check_prog\ncheck_prog() {\n\n\tcase $1 in\n\t\tmom)\n\t\t\tdaemon_name=\"pbs_mom\"\n\t\t\tpid=${pbs_mom_pid} ;;\n\t\tserver)\n\t\t\tdaemon_name=\"pbs_server.bin\"\n\t\t\tif [ ${is_secondary} -eq 0 ] ; then\n\t\t\t\tpid=${pbs_server_pid}\n\t\t\telse\n\t\t\t\tpid=${pbs_secondary_server_pid}\n\t\t\tfi ;;\n\t\tsched)\n\t\t\tdaemon_name=\"pbs_sched\"\n\t\t\tif [ ${is_secondary} -eq 0 ] ; then\n\t\t\t\tpid=${pbs_sched_pid} ;\n\t\t\telse\n\t\t\t\tpid=${pbs_secondary_sched_pid} ;\n\t\t\tfi ;;\n\t\tpbs_comm)\n\t\t\tdaemon_name=\"pbs_comm\"\n\t\t\tif [ ${is_secondary} -eq 0 ] ; then\n\t\t\t\tpid=${pbs_comm_pid} ;\n\t\t\telse\n\t\t\t\tpid=${pbs_secondary_comm_pid} ;\n\t\t\tfi ;;\n\n\t\t*)\techo Invalid PBS daemon name: $1 >&2;\n\t\t\treturn 1;;\n\tesac\n\n\tif [ -n \"${pid}\" ]; then\n\t\tif [ \"${pid}\" -ne -1 ] ; then\n\t\t\tif check_started \"${pid}\" \"${daemon_name}\" ; then\n\t\t\t\treturn 0\n\t\t\tfi\n\t\tfi\n\tfi\n\n\t# Since the pid file does not exist, PBS has never been run\n\treturn 1\n}\n\n# Look to see if restrict_user_maxsysid exists in MOM config file\n# If it is there, nothing else needs to be done.\n# Otherwise, see if a default can be determined from login.defs.\ncheck_maxsys()\n{\n\tif grep '^$restrict_user_maxsysid' ${PBS_MOM_HOME}/mom_priv/config > /dev/null ; then\n\t\treturn\n\tfi\n\n\tfile=/etc/login.defs\n\tif [ -f ${file} ]; then\n\t\tval=`awk '$1 == \"SYSTEM_UID_MAX\" {print $2}' ${file}`\n\t\tif [ -z \"${val}\" ]; then\n\t\t\tval=`awk '$1 == \"UID_MIN\" {print $2 - 1}' ${file}`\n\t\tfi\n\t\tif [ -n \"${val}\" ]; then\n\t\t\techo '$restrict_user_maxsysid' ${val} \\\n\t\t\t\t>> ${PBS_MOM_HOME}/mom_priv/config\n\t\tfi\n\tfi\n}\n\n# Look if core or core.pid file exists 
in given _priv directory\n# If exists format the name to core_<next_sequence_number>\n# When core file found set core flag\ncheck_core() {\n\tcore_dir=\"$1\"\n\t[ -n \"${core_dir}\" ] || core_dir=\".\"\n\n\tcore_list=`/bin/ls -1 \"${core_dir}\"/core* 2> /dev/null`\n\tif [ -n \"${core_list}\" ]; then\n\t\tseq_files=`echo \"${core_list}\" | grep \"core_\"`\n\t\tif [ $? -eq 0 ]; then\n\t\t\tmax_seq=`echo \"${seq_files}\" | sed -e 's/[^0-9 ]*//g' | sort -n | tail -1`\n\t\telse\n\t\t\tmax_seq=0\n\t\tfi\n\n\t\tfor core_name in `/bin/ls \"${core_dir}\"/core* | grep -v \"core_\" 2> /dev/null`\n\t\tdo\n\t\t\tmax_seq=`expr ${max_seq} + 1`\n\t\t\tnew_seq=`printf \"%04d\" ${max_seq}`\n\t\t\tmv ${core_name} \"${core_dir}\"/\"core_${new_seq}\"\n\t\tdone\n\n\t\tcore_flag=1\n\tfi\n}\n\n# Return the name of the PBS server host\nget_server_hostname() {\n\tshn=\"\"\n\tif [ -z \"${PBS_PRIMARY}\" -o -z \"${PBS_SECONDARY}\" ] ; then\n\t\tif [ -z \"${PBS_SERVER_HOST_NAME}\" ]; then\n\t\t\tshn=\"${PBS_SERVER}\"\n\t\telse\n\t\t\tshn=\"${PBS_SERVER_HOST_NAME}\"\n\t\tfi\n\telse\n\t\tshn=\"${PBS_PRIMARY}\"\n\tfi\n\techo `lc_host_name \"${shn}\"`\n}\n\n# Ensure the supplied hostname is valid\ncheck_hostname() {\n\t# Check the hosts file\n\tgetent hosts \"$1\" >/dev/null 2>&1 && return 0\n\t# Check DNS\n\thost \"$1\" >/dev/null 2>&1 && return 0\n\treturn 1\n}\n\nstart_pbs() {\n\techo \"Starting PBS\"\n\n\t# Perform sanity checks\n\tserver_hostname=`get_server_hostname`\n\tif [ -z \"${server_hostname}\" -o \"${server_hostname}\" = 'CHANGE_THIS_TO_PBS_SERVER_HOSTNAME' ] ; then\n\t\techo \"***\" >&2\n\t\techo \"*** The hostname of the PBS server in ${conf} is invalid.\" >&2\n\t\techo \"*** Update the configuration file before starting PBS.\" >&2\n\t\techo \"***\" >&2\n\t\texit 1\n\tfi\n\tcheck_hostname \"${server_hostname}\"\n\tif [ $? 
-ne 0 ]; then\n\t\techo \"***\" >&2\n\t\techo \"*** The PBS server could not be found: ${server}\" >&2\n\t\techo \"*** This value must resolve to a valid IP address.\" >&2\n\t\techo \"***\" >&2\n\t\texit 1\n\tfi\n\n\tupdate_pids\n\n\t# See if we need to populate PBS_HOME. We do if...\n\t# 1) PBS_HOME doesn't exist  (needmakehome=1 -> create PBS_HOME)\n\t# 2) PBS_HOME/pbs_version doesn't exist\n\t# 3) if the version number in PBS_HOME/pbs_version does not match\n\t#    the version of the commands  (2 and 3 needmakehome=2 -> update)\n\t# 4) PBS_HOME/datastore does not exist and this is a server\n\tneedmakehome=0\n\t[ ! -d \"${PBS_HOME}\" ] && needmakehome=1\n\t[ ${needmakehome} -eq 0 -a ! -f \"${PBS_HOME}/pbs_version\" ] && needmakehome=2\n\tif [ ${needmakehome} -eq 0 ]; then\n\t\tqstatver=`${PBS_EXEC}/bin/qstat --version | sed -e \"s/^.* = //\"`\n\t\thomever=`cat ${PBS_HOME}/pbs_version`\n\t\t[ \"${qstatver}\" != \"${homever}\" ] && needmakehome=3\n\tfi\n\t[ ${needmakehome} -eq 0 -a \"${PBS_START_SERVER}\" != \"0\" -a ! -d \"${PBS_HOME}/datastore\" ] && needmakehome=4\n\n\tif [ ${needmakehome} -ne 0 -a ! 
-x ${PBS_EXEC}/libexec/pbs_habitat ]; then\n\t\techo \"***\" >&2\n\t\techo \"*** ${PBS_EXEC}/libexec/pbs_habitat is missing.\" >&2\n\t\techo \"***\" >&2\n\t\texit 1\n\tfi\n\n\tcase ${needmakehome} in\n\t\t1)\n\t\t\techo PBS Home directory ${PBS_HOME} does not exist.\n\t\t\techo Running ${PBS_EXEC}/libexec/pbs_habitat to create it.\n\t\t\t${PBS_EXEC}/libexec/pbs_habitat || return 1\n\t\t\techo Home directory ${PBS_HOME} created.\n\t\t\t;;\n\t\t2|3)\n\t\t\techo PBS Home directory ${PBS_HOME} needs updating.\n\t\t\techo Running ${PBS_EXEC}/libexec/pbs_habitat to update it.\n\t\t\t${PBS_EXEC}/libexec/pbs_habitat || return 1\n\t\t\techo Home directory ${PBS_HOME} updated.\n\t\t\t;;\n\t\t4)\n\t\t\techo PBS Home directory ${PBS_HOME} needs datastore.\n\t\t\techo Running ${PBS_EXEC}/libexec/pbs_habitat to initialize it.\n\t\t\t${PBS_EXEC}/libexec/pbs_habitat || return 1\n\t\t\techo Datastore directory ${PBS_HOME}/datastore initialized.\n\t\t\t;;\n\tesac\n\n\tcore_flag=0\n\tif [ -d ${PBS_HOME}/server_priv ]; then\n\t\tcheck_core ${PBS_HOME}/server_priv\n\tfi\n\tif [ -d ${PBS_HOME}/sched_priv ]; then\n\t\tcheck_core ${PBS_HOME}/sched_priv\n\tfi\n\tif [ -d ${PBS_HOME}/mom_priv ]; then\n\t\tcheck_core ${PBS_HOME}/mom_priv\n\tfi\n\n\tif [ ${core_flag} -eq 1 ];then\n\t\techo \"Warning: PBS has detected core file(s) in PBS_HOME that require attention!!!\"\n\t\techo \"Warning: Please inform your administrator immediately or contact Altair customer support\"\n\tfi\n\n\tif [ \"${PBS_START_COMM}\" -gt 0 ]; then\n\t\tif check_prog \"pbs_comm\" ; then\n\t\t\techo \"PBS comm already running.\"\n\t\telse\n\t\t\tif ${PBS_EXEC}/sbin/pbs_comm\n\t\t\tthen\n\t\t\t\techo \"PBS comm\"\n\t\t\telse\n\t\t\t\tret_val=$?\n\t\t\t\techo \"pbs_comm startup failed, exit ${ret_val} aborting.\" >&2\n\t\t\t\texit 1\n\t\t\tfi\n\t\tfi\n\tfi\n\n\tif [ \"${PBS_START_MOM}\" -gt 0 ]; then\n\t\tif check_prog \"mom\" ; then\n\t\t\techo \"PBS mom already running.\"\n\t\telse\n\t\t\tif [ -f 
${pbslibdir}/init.d/limits.pbs_mom ]; then\n\t\t\t\t. ${pbslibdir}/init.d/limits.pbs_mom\n\t\t\tfi\n\t\t\tcheck_maxsys\n\n\t\t\tif  ${PBS_EXEC}/sbin/pbs_mom\n\t\t\tthen\n\t\t\t\techo \"PBS mom\"\n\t\t\telse\n\t\t\t\tret_val=$?\n\t\t\t\techo \"pbs_mom startup failed, exit ${ret_val} aborting.\" >&2\n\t\t\t\treturn 1\n\t\t\tfi\n\t\tfi\n\tfi\n\n\tif [ \"${PBS_START_SCHED}\" -gt 0 ]; then\n\t\tif check_prog \"sched\" ; then\n\t\t\techo \"PBS scheduler already running.\"\n\t\telse\n\t\t\tif [ -f ${pbslibdir}/init.d/limits.pbs_sched ]; then\n\t\t\t\t. ${pbslibdir}/init.d/limits.pbs_sched\n\t\t\tfi\n\t\t\tif ${PBS_EXEC}/sbin/pbs_sched\n\t\t\tthen\n\t\t\t\techo \"PBS sched\"\n\t\t\telse\n\t\t\t\tret_val=$?\n\t\t\t\techo \"pbs_sched startup failed, exit ${ret_val} aborting.\" >&2\n\t\t\t\treturn 1\n\t\t\tfi\n\t\tfi\n\tfi\n\n\tif [ \"${PBS_START_SERVER}\" -gt 0 ] ; then\n\t\tif check_prog \"server\" ; then\n\t\t\techo \"PBS Server already running.\"\n\t\telse\n\t\t\tif [ -f ${pbslibdir}/init.d/limits.pbs_server ] ; then\n\t\t\t\t. ${pbslibdir}/init.d/limits.pbs_server\n\t\t\tfi\n\t\t\tif ${PBS_EXEC}/sbin/pbs_server ; then\n\t\t\t\techo \"PBS server\"\n\t\t\telse\n\t\t\t\tret_val=$?\n\t\t\t\tif [ ${ret_val} -eq 4 ] ; then\n\t\t\t\t\techo \"pbs_server failed to start, will retry once in 30 seconds\" >&2\n\t\t\t\t\tsleep 30\n\t\t\t\t\tif ${PBS_EXEC}/sbin/pbs_server ; then\n\t\t\t\t\t\techo \"PBS server\"\n\t\t\t\t\telse\n\t\t\t\t\t\tret_val=$?\n\t\t\t\t\t\techo \"pbs_server startup failed, exit ${ret_val} aborting.\" >&2\n\t\t\t\t\t\treturn 1\n\t\t\t\t\tfi\n\t\t\t\telse\n\t\t\t\t\techo \"pbs_server startup failed, exit ${ret_val} aborting.\" >&2\n\t\t\t\t\treturn 1\n\t\t\t\tfi\n\t\t\tfi\n\t\tfi\n\tfi\n\n\tif [ -f ${pbslibdir}/init.d/limits.post_services ] ; then\n\t\t. 
${pbslibdir}/init.d/limits.post_services\n\tfi\n\tif [ -f /etc/redhat-release ]; then\n\t\ttouch ${redhat_subsys_filepath}\n\tfi\n\n\treturn 0\n}\n\nstop_pbs() {\n\techo \"Stopping PBS\"\n\tupdate_pids\n\tif [ \"${PBS_START_SERVER}\" -gt 0 ] ; then\n\t\tactive_server=`lc_host_name \\`${PBS_EXEC}/bin/qstat -Bf 2>/dev/null | \\\n\t\tgrep \"server_host = \" | \\\n\t\tsed -e \"s/.*server_host = //\"\\``\n\t\tif check_prog \"server\" ; then\n\t\t\tif [ \"${my_hostname}\" = \"${active_server}\" ]; then\n\t\t\t\tif [ -z \"${PBS_SECONDARY}\" -o ${is_secondary} -eq 1 ]; then\n\t\t\t\t\techo \"Shutting server down with qterm.\"\n\t\t\t\telse\n\t\t\t\t\techo \"This is the active server, shutting down with qterm; the secondary will take over.\"\n\t\t\t\tfi\n\t\t\t\tif [ -r \"${PBS_HOME}/server_priv/qmgr_shutdown\" ]; then\n\t\t\t\t\techo \"pbs_server evaluating ${PBS_HOME}/server_priv/qmgr_shutdown\"\n\t\t\t\t\t${PBS_EXEC}/bin/qmgr <\"${PBS_HOME}/server_priv/qmgr_shutdown\"\n\t\t\t\tfi\n\t\t\t\t${PBS_EXEC}/bin/qterm -t quick\n\t\t\t\techo \"PBS server - was pid: ${pbs_server_pid}\"\n\t\t\telif [ ${is_secondary} -eq 1 ]; then\n\t\t\t\techo \"This is the secondary server, killing process.\"\n\t\t\t\tkill ${pbs_secondary_server_pid}\n\t\t\t\techo \"PBS server - was pid: ${pbs_secondary_server_pid}\"\n\t\t\t\trm -f ${PBS_HOME}/server_priv/server.lock.secondary\n\t\t\telse\n\t\t\t\techo \"Killing Server.\"\n\t\t\t\tkill ${pbs_server_pid}\n\t\t\t\techo \"PBS server - was pid: ${pbs_server_pid}\"\n\t\t\tfi\n\t\tfi\n\t\t${PBS_EXEC}/sbin/pbs_dataservice status > /dev/null 2>&1\n\t\tif [ $? 
-eq 0 ]; then\n\t\t\t${PBS_EXEC}/sbin/pbs_dataservice stop > /dev/null 2>&1\n\t\tfi\n\tfi\n\tif [ \"${PBS_START_MOM}\" -gt 0 ] ; then\n\t\tif check_prog \"mom\" ; then\n\t\t\tkill ${pbs_mom_pid}\n\t\t\techo \"PBS mom - was pid: ${pbs_mom_pid}\"\n\t\tfi\n\tfi\n\tif [ \"${PBS_START_SCHED}\" -gt 0 ] ; then\n\t\tif check_prog \"sched\" ; then\n\t\t\tif [ ${is_secondary} -eq 0 ] ; then\n\t\t\t\tkill ${pbs_sched_pid}\n\t\t\t\techo \"PBS sched - was pid: ${pbs_sched_pid}\"\n\t\t\telse\n\t\t\t\tkill ${pbs_secondary_sched_pid}\n\t\t\t\techo \"PBS sched - was pid: ${pbs_secondary_sched_pid}\"\n\t\t\t\trm -f ${PBS_HOME}/sched_priv/sched.lock.secondary\n\t\t\tfi\n\t\tfi\n\tfi\n\tif [ \"${PBS_START_COMM}\" -gt 0 ] ; then\n\t\tif check_prog \"pbs_comm\" ; then\n\t\t\tif [ ${is_secondary} -eq 0 ] ; then\n\t\t\t\tkill -TERM ${pbs_comm_pid}\n\t\t\t\techo \"PBS comm - was pid: ${pbs_comm_pid}\"\n\t\t\telse\n\t\t\t\tkill -TERM ${pbs_secondary_comm_pid}\n\t\t\t\techo \"PBS comm - was pid: ${pbs_secondary_comm_pid}\"\n\t\t\t\trm -f ${PBS_HOME}/server_priv/comm.lock.secondary\n\t\t\tfi\n\t\tfi\n\tfi\n\tif [ -f ${redhat_subsys_filepath} ] ; then\n\t\trm -f ${redhat_subsys_filepath}\n\tfi\n\t# Wait up to 180 seconds for the daemons to exit.\n\t# If any are still running, exit with a message and an error.\n\twaitloop=1\n\techo \"Waiting for shutdown to complete\"\n\twhile [ ${waitloop} -lt 180 ]\n\tdo\n\t\tsleep 1\n\t\tsomething_running=\"\"\n\t\tif [ \"${PBS_START_SERVER}\" -gt 0 ] ; then\n\t\t\tif check_prog \"server\" ; then\n\t\t\t\tsomething_running=\" pbs_server\"\n\t\t\tfi\n\t\tfi\n\t\tif [ \"${PBS_START_MOM}\" -gt 0 ] ; then\n\t\t\tif check_prog \"mom\" ; then\n\t\t\t\tsomething_running=\"${something_running} pbs_mom\"\n\t\t\tfi\n\t\tfi\n\t\tif [ \"${PBS_START_SCHED}\" -gt 0 ] ; then\n\t\t\tif check_prog \"sched\" ; then\n\t\t\t\tsomething_running=\"${something_running} pbs_sched\"\n\t\t\tfi\n\t\tfi\n\t\tif [ \"${PBS_START_COMM}\" -gt 0 ] ; then\n\t\t\tif check_prog \"pbs_comm\" ; 
then\n\t\t\t\tsomething_running=\"${something_running} pbs_comm\"\n\t\t\tfi\n\t\tfi\n\t\tif [ \"${something_running}\" = \"\" ]; then\n\t\t\treturn\n\t\tfi\n\t\twaitloop=`expr ${waitloop} + 1`\n\tdone\n\techo \"Unable to stop PBS,${something_running} still active\"\n\texit 1\n}\n\nstatus_pbs() {\n\tupdate_pids\n\tif [ \"${PBS_START_SERVER}\" -gt 0 ]; then\n\t\tif check_prog \"server\" ; then\n\t\t\techo \"pbs_server is pid ${pbs_server_pid}\"\n\t\telse\n\t\t\techo \"pbs_server is not running\"\n\t\tfi\n\tfi\n\tif [ \"${PBS_START_MOM}\" -gt 0 ]; then\n\t\tif check_prog \"mom\" ; then\n\t\t\techo \"pbs_mom is pid ${pbs_mom_pid}\"\n\t\telse\n\t\t\techo \"pbs_mom is not running\"\n\t\tfi\n\tfi\n\tif [ \"${PBS_START_SCHED}\" -gt 0 ]; then\n\t\tif check_prog \"sched\" ; then\n\t\t\techo \"pbs_sched is pid ${pbs_sched_pid}\"\n\t\telse\n\t\t\techo \"pbs_sched is not running\"\n\t\tfi\n\tfi\n\tif [ \"${PBS_START_COMM}\" -gt 0 ]; then\n\t\tif check_prog \"pbs_comm\" ; then\n\t\t\techo \"pbs_comm is pid ${pbs_comm_pid}\"\n\t\telse\n\t\t\techo \"pbs_comm is not running\"\n\t\tfi\n\tfi\n}\n\n# Check whether PBS is registered to start at boot time\nis_registered()\n{\n\tif command -v systemctl >/dev/null 2>&1; then\n\t\tsystemctl is-enabled pbs > /dev/null 2>&1\n\t\treturn $?\n\telif command -v chkconfig >/dev/null 2>&1; then\n\t\tchkconfig pbs > /dev/null 2>&1\n\t\treturn $?\n\tfi\n\treturn 0\n}\n\n# Check whether system is being booted or not\n# and also update the time in /var/tmp/pbs_boot_check file\n# return 0 if system is being booted otherwise return 1\nis_boottime()\n{\n\tis_registered\n\t[ $? -ne 0 ] && return 1\n\n\tPYTHON_EXE=${PBS_EXEC}/python/bin/python\n\tif [ -z \"${PYTHON_EXE}\" -o ! -x \"${PYTHON_EXE}\" ] ; then\n\t\tPYTHON_EXE=`type python3 2>/dev/null | cut -d' ' -f3`\n\t\tif [ -z \"${PYTHON_EXE}\" -o ! -x \"${PYTHON_EXE}\" ] ; then\n\t\t\treturn 1\n\t\tfi\n\tfi\n\n\tBOOTPYFILE=\"${pbslibdir}/python/pbs_bootcheck.py\"\n\tBOOTCHECKFILE=\"/var/tmp/pbs_boot_check\"\n\n\tif [ ! 
-r \"${BOOTPYFILE}\" ] ; then\n\t\treturn 1\n\tfi\n\n\t${PYTHON_EXE} ${BOOTPYFILE} ${BOOTCHECKFILE} > /dev/null 2>&1\n\tret=$?\n\treturn ${ret}\n}\n\npre_start_pbs()\n{\n\tif is_boottime\n\tthen\n\t\tcase \"${ostype}\" in\n\t\t\tLinux) echo -e \"\\nStarting PBS in background\\c\" ;;\n\t\t\t*)  echo \"\\nStarting PBS in background\\c\" ;;\n\t\tesac\n\t\t(\n\t\t\tTEMP_DIR=${PBS_TMPDIR:-${TMPDIR:-\"/var/tmp\"}}\n\t\t\tTEMPFILE=${TEMP_DIR}/start_pbs_logs_tmp_$$\n\t\t\tstart_pbs > ${TEMPFILE} 2>&1\n\t\t\tret=$?\n\t\t\tlogger -i -t PBS -f ${TEMPFILE}\n\t\t\trm -f ${TEMPFILE}\n\t\t\texit ${ret}\n\t\t) &\n\telse\n\t\tstart_pbs\n\t\texit $?\n\tfi\n}\n\n: main code\n# save env variables in a temp file\nenv_save=\"/tmp/$$_$(date +'%s')_env_save\"\ndeclare -x > \"${env_save}\"\n\nconf=${PBS_CONF_FILE:-@PBS_CONF_FILE@}\n[ -r \"${conf}\" ] && . \"${conf}\"\n\n# re-apply saved env variables\n. \"${env_save}\"\n\nrm -f \"${env_save}\"\n\nif [ -z \"${PBS_EXEC}\" ]; then\n\techo \"PBS_EXEC is undefined.\" >&2\n\texit 1\nfi\nif [ ! -d \"${PBS_EXEC}\" ]; then\n\techo \"${PBS_EXEC} is not a directory.\" >&2\n\techo \"PBS_EXEC directory does not exist: ${PBS_EXEC}\" >&2\n\texit 1\nfi\n\npbslibdir=\"${PBS_EXEC}/lib\"\n[ ! 
-d \"${pbslibdir}\" -a -d \"${PBS_EXEC}/lib64\" ] && pbslibdir=\"${PBS_EXEC}/lib64\"\n\nif [ -z \"${PBS_HOME}\" ]; then\n\techo \"PBS_HOME is undefined.\" >&2\n\texit 1\nfi\n\n[ -z \"${PBS_START_SERVER}\" ] && PBS_START_SERVER=0\n[ -z \"${PBS_START_MOM}\" ] && PBS_START_MOM=0\n[ -z \"${PBS_START_SCHED}\" ] && PBS_START_SCHED=0\n[ -z \"${PBS_START_COMM}\" ] && PBS_START_COMM=0\n\nUNIX95=1\nexport UNIX95\nPBS_MOM_HOME=${PBS_MOM_HOME:-$PBS_HOME}\n\nredhat_subsys_filepath=\"/var/lock/subsys/pbs\"\n\n# Determine the hostname that the local system should use\nif [ -n \"${PBS_LEAF_NAME}\" ]; then\n\tmy_hostname=`lc_host_name \"${PBS_LEAF_NAME}\"`\nelse\n\tmy_hostname=`lc_host_name \\`hostname\\``\nfi\n\n# Check whether the hostname has an IP address\ncheck_hostname \"${my_hostname}\"\nif [ $? -ne 0 ]; then\n\techo \"***\" >&2\n\techo \"*** Invalid local hostname: $my_hostname\" >&2\n\techo \"*** This value must resolve to a valid IP address.\" >&2\n\techo \"***\" >&2\n\texit 1\nfi\n\nis_secondary=0\nif [ -n \"${PBS_SECONDARY}\" ]; then\n\tsec_host=`lc_host_name ${PBS_SECONDARY}`\n\tif [ ${sec_host} = ${my_hostname} ]; then\n\t\tis_secondary=1\n\tfi\nfi\n\n# lets see how we were called\ncase \"`basename $0`\" in\n\tpbs_start)\n\t\tpre_start_pbs\n\t\t;;\n\tpbs_stop)\n\t\tstop_pbs\n\t\t;;\n\t*)\n\t\tcase \"$1\" in\n\t\t\tstart_msg)\n\t\t\t\techo \"Starting PBS\"\n\t\t\t\t;;\n\t\t\tstop_msg)\n\t\t\t\techo \"Stopping PBS\"\n\t\t\t\t;;\n\t\t\tstatus)\n\t\t\t\tstatus_pbs\n\t\t\t\t;;\n\t\t\tstart)\n\t\t\t\tpre_start_pbs\n\t\t\t\t;;\n\t\t\tstop)\n\t\t\t\tstop_pbs\n\t\t\t\t;;\n\t\t\trestart)\n\t\t\t\techo \"Restarting PBS\"\n\t\t\t\tstop_pbs\n\t\t\t\tpre_start_pbs\n\t\t\t\t;;\n\t\t\t*)\n\t\t\t\techo \"Usage: `basename $0` --version\"\n\t\t\t\techo \"Usage: `basename $0` {start|stop|restart|status}\"\n\t\t\t\texit 1\n\t\t\t\t;;\n\t\tesac\n\t\t;;\nesac\n"
  },
  {
    "path": "src/cmds/scripts/pbs_poerun.in",
    "content": "#!/usr/bin/env ksh93\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n#\n# This script is run on each node of an MPI job by poe.  It uses the\n# MP_CHILD value to figure out what adapter information to put into\n# MP_MPI_NETWORK or MP_LAPI_NETWORK.  
If the value of PBS_EUILIB is not\n# \"ip\" and $PBS_HPS_JOBKEY has a value, the setup is done to use the HPS.\n# Similarly, if the value of PBS_EUILIB is not \"ip\" and $PBS_AIXIB_JOBKEY\n# has a value, the setup is done to use the InfiniBand.  The function\n# extract_wins takes the window list and extracts the window information\n# for the child being run.  The PBS_EUILIB variable is used instead of\n# MP_EUILIB because poe may change MP_EUILIB.\n#\n# All arguments are passed to pbs_attach to run the MPI program.\n#\n\nif [ $# -eq 1 ] && [ $1 = \"--version\" ]; then\n   echo pbs_version = @PBS_VERSION@\n   exit 0\nfi\n\nextract_wins() {\n\t# Extract our list of switch windows from full list\n\t# arguments:\n\t#\tnumber of windows per process\n\t#\twindow list\n\t#\tsplit windows between MPI and LAPI\n\n\twinsperproc=$1\n\twinlist=$2\n\tsplit=${3:-0}\n\n\t(( start = MP_CHILD * winsperproc ))\n\toldifs=\"$IFS\"\n\tIFS=\":\"\n\tset -A all_windows $winlist\n\tset -A windows1\n\tset -A windows2\n\n\ti=0\n\twhile (( i < winsperproc )) ;do\n\t\t# use windows2 to save the second half of the windows list\n\t\t# if it is being split\n\t\tif (( $split && $i >= $split )) ;then\n\t\t\twindows2[$i]=\"${all_windows[(( start + i ))]}\"\n\t\telse\n\t\t\twindows1[$i]=\"${all_windows[(( start + i ))]}\"\n\t\tfi\n\t\t(( i = i + 1 ))\n\tdone\n\tdata1=\"${windows1[*]}\"\n\tdata2=\"${windows2[*]}\"\n\tIFS=\"$oldifs\"\n}\n\nif [ \"XX$PBS_EUILIB\" != \"XXip\" -a \"${PBS_HPS_JOBKEY:-XX}\" != \"XX\" ]; then\n\texport MP_PARTITION=$PBS_HPS_JOBKEY\n\n\textract_wins $PBS_HPS_ADAPTERS $PBS_HPS_WINDOWS\n\tif [ \"$MP_MSG_API\" = \"MPI\" ]; then\n\t\tunset MP_CHILD_INET_ADDR\n\t\texport MP_MPI_NETWORK=@$PBS_HPS_ADAPTERS:$data1\n\tfi\n\n\tif [ \"$MP_MSG_API\" = \"LAPI\" ]; then\n\t\tunset MP_LAPI_INET_ADDR\n\t\texport MP_LAPI_NETWORK=@$PBS_HPS_ADAPTERS:$data1\n\tfi\nelif [ \"XX$PBS_EUILIB\" != \"XXip\" -a \"${PBS_AIXIB_JOBKEY:-XX}\" != \"XX\" ]; then\n\texport MP_PARTITION=$PBS_AIXIB_JOBKEY\n\n\t# 
The lapi keywords are only generated in lower case by poe.\n\t# The mpi keyword will be \"MPI\" if the user doesn't set it and\n\t# all lower case if the user sets it.\n\tif [ \"$MP_MSG_API\" = \"MPI\" -o \"$MP_MSG_API\" = \"mpi\" ]; then\n\t\textract_wins $PBS_AIXIB_NETWORKS $PBS_AIXIB_WINDOWS\n\t\tunset MP_CHILD_INET_ADDR\n\t\texport MP_MPI_NETWORK=@$PBS_AIXIB_NETWORKS:$data1\n\telif [ \"$MP_MSG_API\" = \"lapi\" -o \"$MP_MSG_API\" = \"mpi_lapi\" ]; then\n\t\textract_wins $PBS_AIXIB_NETWORKS $PBS_AIXIB_WINDOWS\n\t\tunset MP_LAPI_INET_ADDR\n\t\texport MP_LAPI_NETWORK=@$PBS_AIXIB_NETWORKS:$data1\n\telif [ \"$MP_MSG_API\" = \"mpi,lapi\" ]; then\n\t\t(( wins = PBS_AIXIB_NETWORKS / 2 ))\n\t\textract_wins $PBS_AIXIB_NETWORKS $PBS_AIXIB_WINDOWS $wins\n\t\tunset MP_CHILD_INET_ADDR\n\t\tunset MP_LAPI_INET_ADDR\n\t\texport MP_MPI_NETWORK=@$wins:$data1\n\t\texport MP_LAPI_NETWORK=@$wins:$data2\n\tfi\nfi\n\nexec pbs_attach -s -j $PBS_JOBID \"$@\"\n"
  },
  {
    "path": "src/cmds/scripts/pbs_postinstall.in",
    "content": "#!/bin/bash\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\n#\n# This script is responsible for creating and/or updating the PBS\n# configuration file and the PBS_HOME directory. It does not start\n# any PBS services. 
Additional configuration steps are performed\n# by pbs_habitat.\n#\n\nif [ $# -eq 1 -a \"$1\" = \"--version\" ]; then\n\techo pbs_version = @PBS_VERSION@\n\texit 0\nfi\n\n# Used to determine if this is Cray XT system\nis_cray_xt() {\n\t[ -f /proc/cray_xt/cname ] && return 0\n\treturn 1\n}\n\npoe_interactive() {\n\t#Check if poe will allow interactive jobs\n\tif [ -f /etc/poe.limits ]; then\n\t\t. /etc/poe.limits\n\t\tif [ -z \"$MP_POE_LAUNCH\" ]; then\n\t\t\techo \"*** WARNING: MP_POE_LAUNCH unset may not allow jobs to run correctly\"\n\t\telif [ \"$MP_POE_LAUNCH\" = \"none\" -o \\\n\t\t\t\t\"$MP_POE_LAUNCH\" = \"ip\" ]; then\n\t\t\techo \"*** WARNING: MP_POE_LAUNCH=$MP_POE_LAUNCH may not allow jobs to run correctly\"\n\t\tfi\n\tfi\n\treturn 0\n}\n\ncreatedir() {\n\tif [ -n \"$1\" -a ! -d \"$1\" ]; then\n\t\tif ! mkdir -p \"$1\"; then\n\t\t\techo \"*** Could not create $1\"\n\t\t\texit 1\n\t\tfi\n\tfi\n\tif [ -n \"$1\" -a -n \"$2\" ]; then\n\t\tchmod \"$2\" \"$1\"\n\tfi\n}\n\ncreatepath() {\n\twhile read mode dir; do\n\t\tcreatedir \"${PBS_HOME}/${dir}\" $mode\n\tdone\n}\n\n\n\ncreate_conf() {\n\t# If we have an existing @PBS_CONF_FILE@, save the old PBS_EXEC from\n\t# the existing pbs.conf and make a backup. 
It may be a directory or\n\t# a symbolic link so use cp rather than mv.\n\tif [ -f \"$conf\" ] ; then\n\t\techo \"*** Existing configuration file found: $conf\"\n\t\toldpbs_exec=`grep '^[[:space:]]*PBS_EXEC=' \"$conf\" | tail -1 | sed 's/^[[:space:]]*PBS_EXEC=\\([^[:space:]]*\\)[[:space:]]*/\\1/'`\n\t\toldpbs_home=`grep '^[[:space:]]*PBS_HOME=' \"$conf\" | tail -1 | sed 's/^[[:space:]]*PBS_HOME=\\([^[:space:]]*\\)[[:space:]]*/\\1/'`\n\t\tconforig=\"${conf}.pre.${PBS_VERSION}\"\n\t\t[ -f \"$conforig\" ] && conforig=\"${conforig}.`date +%Y%m%d%H%M%S`\"\n\t\techo \"***\"\n\t\techo \"*** Saving $conf as $conforig\"\n\t\tcp \"$conf\" \"$conforig\"\n\telse\n\t\techo \"*** No configuration file found.\"\n\t\techo \"*** Creating new configuration file: $conf\"\n\t\toldpbs_exec=''\n\t\toldpbs_home=''\n\tfi\n\toldpbs_exec=`readlink -f \"$oldpbs_exec\"`\n\n\tcase $INSTALL_METHOD in\n\trpm)\n\t\t[ -f \"$newconf\" ] && newconf=\"${newconf}.`date +%Y%m%d%H%M%S`\"\n\t\t# If an existing configuration file is present, adapt it.\n\t\tdeclare -a env_array=(\"PBS_HOME\" \"PBS_SERVER\" \"PBS_MOM_HOME\" \"PBS_PRIMARY\" \"PBS_SECONDARY\" \"PBS_LEAF_ROUTERS\" \"PBS_DAEMON_SERVICE_USER\")\n\t\tif [ -f \"$conf\" ]; then\n\t\t\teval \"sed 's;\\(^[[:space:]]*PBS_EXEC=\\)[^[:space:]]*;\\1$newpbs_exec;' \\\"$conf\\\" >$newconf\"\n\t\t\tupdate_pbs_conf() {\n\t\t\t\tunset env_var env_value\n\t\t\t\tenv_var=$1;\n\t\t\t\tenv_value=$(eval echo \\$$env_var)\n\t\t\t\tgrep -q \"^[[:space:]]*$env_var=[^[:space:]]*\" \"$newconf\" \\\n\t\t\t\t\t&& sed -i \"s;\\(^[[:space:]]*${env_var}=\\)[^[:space:]]*;\\1${env_value};\" \"$newconf\" \\\n\t\t\t\t\t|| echo \"$env_var=${env_value}\" >>\"$newconf\"\n\t\t\t}\n\t\t\tfor var in \"${env_array[@]}\"\n\t\t\tdo\n\t\t\t\t[ \"${!var:+set}\" ] && update_pbs_conf ${var}\n\t\t\tdone\n\t\telse\n\t\t\t[ ${newpbs_exec:+set} ] && echo \"PBS_EXEC=$newpbs_exec\" >\"$newconf\"\n\t\t\tfor var in \"${env_array[@]}\"\n\t\t\tdo\n\t\t\t\t[ \"${!var:+set}\" ] && echo 
\"${var}=${!var}\" >>\"$newconf\"\n\t\t\tdone\n\t\tfi\n\t\t;;\n\n\tscript)\n\t\t# Need to set INSTALL_PACKAGE for script method.\n\t\tif [ -f \"$newpbs_exec/sbin/pbs_server.bin\" ]; then\n\t\t\tINSTALL_PACKAGE=server\n\t\telif [ -f \"$newpbs_exec/sbin/pbs_mom\" ]; then\n\t\t\tINSTALL_PACKAGE=execution\n\t\telif [ -f \"$newpbs_exec/bin/qstat\" ]; then\n\t\t\tINSTALL_PACKAGE=client\n\t\t\tnewpbs_home=''\n\t\telse\n\t\t\techo \"***\"\n\t\t\techo \"*** Unable to locate PBS executables!\"\n\t\t\techo \"***\"\n\t\t\texit 1\n\t\tfi\n\t\t# if both conf files are present merge the files but precedence should be given to newconf\n\t\tif [ -f \"$newconf\" -a -f \"$conf\" ]; then\n\t\t\twhile IFS='=' read -r key value; do\n\t\t\t\tif [ -z `grep -q \"$key\" \"$newconf\" && echo $?` ]; then\n\t\t\t\t\techo \"$key=$value\" >> \"$newconf\"\n\t\t\t\tfi\n\t\t\tdone < \"$conf\"\n\t\tfi\n\t\t# The INSTALL script may have already created newconf. If it\n\t\t# did, leave it alone. If not, and an existing configuration\n\t\t# file is present, adapt it by substituting the new value\n\t\t# of PBS_EXEC.\n\t\tif [ ! -f \"$newconf\" -a -f \"$conf\" ]; then\n\t\t\t# If an existing configuration file is present, adapt it by\n\t\t\t# substituting the new value of PBS_EXEC.\n\t\t\teval \"sed 's;\\(^[[:space:]]*PBS_EXEC=\\)[^[:space:]]*;\\1$newpbs_exec;' \\\"$conf\\\"\" >\"$newconf\"\n\t\tfi\n\t\t;;\n\tesac\n\n\t# Ensure newconf exists.\n\ttouch \"$newconf\"\n\tchmod 644 \"$newconf\"\n\n\t# Source the new configuration file.\n\t. 
\"$newconf\"\n\n\t# Add some additional required fields if not present.\n\tif is_cray_xt ; then\n\t\tif [ -z \"$PBS_SERVER\" ]; then\n\t\t\tPBS_SERVER='CHANGE_THIS_TO_PBS_SERVER_HOSTNAME'\n\t\t\techo \"PBS_SERVER=$PBS_SERVER\" >>\"$newconf\"\n\t\tfi\n\t\t[ -z \"$PBS_START_SERVER\" ] && echo \"PBS_START_SERVER=0\" >>\"$newconf\"\n\t\t[ -z \"$PBS_START_SCHED\" ] && echo \"PBS_START_SCHED=0\" >>\"$newconf\"\n\t\t[ -z \"$PBS_START_COMM\" ] && echo \"PBS_START_COMM=0\" >>\"$newconf\"\n\t\t[ -z \"$PBS_START_MOM\" ] && echo \"PBS_START_MOM=0\" >>\"$newconf\"\n\telse\n\t\tcase $INSTALL_PACKAGE in\n\t\tserver)\n\t\t\tif [ -z \"$PBS_SERVER\" ]; then\n\t\t\t\tPBS_SERVER=`hostname | awk -F. '{print $1}'`\n\t\t\t\techo \"PBS_SERVER=$PBS_SERVER\" >>\"$newconf\"\n\t\t\tfi\n\t\t\t[ -z \"$PBS_START_SERVER\" ] && echo \"PBS_START_SERVER=1\" >>\"$newconf\"\n\t\t\t[ -z \"$PBS_START_SCHED\" ] && echo \"PBS_START_SCHED=1\" >>\"$newconf\"\n\t\t\t[ -z \"$PBS_START_COMM\" ] && echo \"PBS_START_COMM=1\" >>\"$newconf\"\n\t\t\t[ -z \"$PBS_START_MOM\" ] && echo \"PBS_START_MOM=0\" >>\"$newconf\"\n\t\t\t;;\n\t\texecution)\n\t\t\tif [ -z \"$PBS_SERVER\" ]; then\n\t\t\t\tPBS_SERVER='CHANGE_THIS_TO_PBS_SERVER_HOSTNAME'\n\t\t\t\techo \"PBS_SERVER=$PBS_SERVER\" >>\"$newconf\"\n\t\t\tfi\n\t\t\t[ -z \"$PBS_START_SERVER\" ] && echo \"PBS_START_SERVER=0\" >>\"$newconf\"\n\t\t\t[ -z \"$PBS_START_SCHED\" ] && echo \"PBS_START_SCHED=0\" >>\"$newconf\"\n\t\t\t[ -z \"$PBS_START_COMM\" ] && echo \"PBS_START_COMM=0\" >>\"$newconf\"\n\t\t\t[ -z \"$PBS_START_MOM\" ] && echo \"PBS_START_MOM=1\" >>\"$newconf\"\n\t\t\t;;\n\t\tclient)\n\t\t\tif [ -z \"$PBS_SERVER\" ]; then\n\t\t\t\tPBS_SERVER='CHANGE_THIS_TO_PBS_SERVER_HOSTNAME'\n\t\t\t\techo \"PBS_SERVER=$PBS_SERVER\" >>\"$newconf\"\n\t\t\tfi\n\t\t\t[ -z \"$PBS_START_SERVER\" ] && echo \"PBS_START_SERVER=0\" >>\"$newconf\"\n\t\t\t[ -z \"$PBS_START_SCHED\" ] && echo \"PBS_START_SCHED=0\" >>\"$newconf\"\n\t\t\t[ -z \"$PBS_START_COMM\" ] && echo \"PBS_START_COMM=0\" 
>>\"$newconf\"\n\t\t\t[ -z \"$PBS_START_MOM\" ] && echo \"PBS_START_MOM=0\" >>\"$newconf\"\n\t\t\t;;\n\t\tesac\n\tfi\n\n\t[ -z \"$PBS_EXEC\" ] && echo \"PBS_EXEC=$newpbs_exec\" >>\"$newconf\"\n\t[ -z \"$PBS_HOME\" -a -n \"$newpbs_home\" ] && echo \"PBS_HOME=$newpbs_home\" >>\"$newconf\"\n\t[ -z \"${PBS_CORE_LIMIT}\" ] && echo \"PBS_CORE_LIMIT=@PBS_CORE_LIMIT@\" >>\"$newconf\"\n\tif ! grep \"^PBS_SCP=\" $newconf>/dev/null 2>&1; then\n\t\tPBS_SCP=`type -P scp`\n\t\t[ -n \"$PBS_SCP\" ] && echo \"PBS_SCP=$PBS_SCP\" >>\"$newconf\"\n\tfi\n\n\t# Source the new configuation file again to pick up any changes\n\t. \"$newconf\"\n}\n\n\n\nperform_checks() {\n\tfail=0\n\tif [ ${PBS_START_SERVER} != 0 -a ! -x ${PBS_EXEC}/sbin/pbs_server ] ;then\n\t\techo \"*** Server does not exist!\"\n\t\tfail=1\n\tfi\n\tif [ ${PBS_START_SCHED} != 0 -a ! -x ${PBS_EXEC}/sbin/pbs_sched ] ;then\n\t\techo \"*** Scheduler does not exist!\"\n\t\tfail=1\n\tfi\n\tif [ ${PBS_START_COMM} != 0 -a ! -x ${PBS_EXEC}/sbin/pbs_comm ] ;then\n\t\techo \"*** Communication agent does not exist!\"\n\t\tfail=1\n\tfi\n\tif [ ${PBS_START_MOM} != 0 -a ! -x ${PBS_EXEC}/sbin/pbs_mom ] ;then\n\t\techo \"*** MOM does not exist!\"\n\t\tfail=1\n\tfi\n\tif [ $fail -ne 0 ] ;then\n\t\techo \"***\"\n\t\techo \"*** A required PBS executable is missing. 
This could be\"\n\t\techo \"*** due to values defined in $conf\"\n\t\techo \"*** Please edit or remove $conf and run the following command:\"\n\t\techo \"*** $0 $*\"\n\t\techo \"***\"\n\t\texit 1\n\tfi\n\n\t# Issue a warning if PBS_EXEC has changed.\n\tif [ -n \"$oldpbs_exec\" -a \"$PBS_EXEC\" != \"$oldpbs_exec\" ]; then\n\t\techo \"***\"\n\t\techo \"*** =======\"\n\t\techo \"*** NOTICE:\"\n\t\techo \"*** =======\"\n\t\techo \"*** PBS commands have moved.\"\n\t\techo \"*** Old location: $oldpbs_exec\"\n\t\techo \"*** New location: $PBS_EXEC\"\n\t\techo \"*** Users will need to ensure their PATH and MANPATH are set correctly.\"\n\t\techo \"*** In most cases, users must simply logout and log back in to source\"\n\t\techo \"*** the new files in /etc/profile.d.\"\n\t\techo \"***\"\n\tfi\n\n\t# Issue warning if PBS_HOME has changed.\n\tif [ -n \"$oldpbs_home\" -a \"$PBS_HOME\" != \"$oldpbs_home\" ]; then\n\t\techo \"***\"\n\t\techo \"*** =======\"\n\t\techo \"*** NOTICE:\"\n\t\techo \"*** =======\"\n\t\techo \"*** PBS_HOME has moved.\"\n\t\techo \"*** Old location: $oldpbs_home\"\n\t\techo \"*** New location: $PBS_HOME\"\n\t\techo \"*** To utilize PBS_HOME from the prior installation, you must perform\"\n\t\techo \"*** one of the following actions:\"\n\t\techo \"*** 1. Update PBS_HOME in $conf\"\n\t\techo \"*** 2. mv $oldpbs_home $PBS_HOME\"\n\t\techo \"*** 3. 
ln -s $oldpbs_home $PBS_HOME\"\n\t\techo \"***\"\n\tfi\n\n\t# Issue a warning if PBS_SERVER is invalid\n\tif [ -z \"$PBS_SERVER\" -o \"$PBS_SERVER\" = 'CHANGE_THIS_TO_PBS_SERVER_HOSTNAME' ]; then\n\t\techo \"*** =======\"\n\t\techo \"*** NOTICE:\"\n\t\techo \"*** =======\"\n\t\techo \"*** The value of PBS_SERVER in ${conf} is invalid.\"\n\t\techo \"*** PBS_SERVER should be set to the PBS server hostname.\"\n\t\techo \"*** Update this value before starting PBS.\"\n\t\techo \"***\"\n\tfi\n}\n\n\n\ninstall_pbsinitd() {\n\tif is_cray_xt ; then\n\t\tif [ -d /etc/init.d ]; then\n\t\t\tinitscript=\"/etc/init.d/pbs\"\n\t\telse\n\t\t\tinitscript=\"/etc/rc.d/init.d/pbs\"\n\t\tfi\n\t\tcp ${PBS_EXEC}/libexec/pbs_init.d $initscript\n\t\trm -f /etc/rc.d/rc?.d/*pbs\n\t\t# For now, for Cray XT only, install \"modulefile\" in hard location\n\t\tif [ -d /opt/modulefiles -a -f ${PBS_EXEC}/etc/modulefile ]; then\n\t\t\tif [ ! -d /opt/modulefiles/pbs ] ; then\n\t\t\t\tcreatedir /opt/modulefiles/pbs 0755\n\t\t\tfi\n\t\t\tcp ${PBS_EXEC}/etc/modulefile /opt/modulefiles/pbs/${PBS_VERSION}\n\t\t\tchmod 0644 /opt/modulefiles/pbs/${PBS_VERSION}\n\t\tfi\n\telif [ $INSTALL_PACKAGE != client ] ; then\n\t\techo \"*** Registering PBS as a service.\"\n\t\tcase \"$ostype\" in\n\t\tLinux)\n\t\t\tif [ -d /etc/init.d ]; then\n\t\t\t\tinitscript=\"/etc/init.d/pbs\"\n\t\t\telse\n\t\t\t\tinitscript=\"/etc/rc.d/init.d/pbs\"\n\t\t\tfi\n\t\t\tcp ${PBS_EXEC}/libexec/pbs_init.d $initscript\n\t\t\trm -f /etc/rc.d/rc?.d/*pbs\n\t\t\tif [ -x /sbin/chkconfig ] ; then\n\t\t\t\t/sbin/chkconfig --add pbs\n\t\t\telif [ -x /usr/sbin/update-rc.d ] ; then\n\t\t\t\t/usr/sbin/update-rc.d pbs enable\n\t\t\telse\n\t\t\t\tln -sf $initscript /etc/rc.d/rc0.d/K10pbs\n\t\t\t\tln -sf $initscript /etc/rc.d/rc1.d/K10pbs\n\t\t\t\tln -sf $initscript /etc/rc.d/rc2.d/K10pbs\n\t\t\t\tln -sf $initscript /etc/rc.d/rc3.d/S90pbs\n\t\t\t\tln -sf $initscript /etc/rc.d/rc4.d/K10pbs\n\t\t\t\tln -sf $initscript /etc/rc.d/rc5.d/S90pbs\n\t\t\t\tln 
-sf $initscript /etc/rc.d/rc6.d/K10pbs\n\t\t\tfi\n\t\t\tif [ -d /etc/profile.d ]; then\n\t\t\t\t[ -f /etc/profile.d/pbs.csh ] || cp ${PBS_EXEC}/etc/pbs.csh /etc/profile.d\n\t\t\t\t[ -f /etc/profile.d/pbs.sh ] || cp ${PBS_EXEC}/etc/pbs.sh /etc/profile.d\n\t\t\tfi\n\n\t\t\tpbs_unitfile=\"@_unitdir@/pbs.service\"\n\t\t\tif [ -f \"${pbs_unitfile}\" ]; then\n\t\t\t\tpresetdir=\"@_unitdir@-preset\"\n\t\t\t\teval \"sed -i 's;\\(^[[:space:]]*SourcePath=\\)[^[:space:]]*;\\1${PBS_EXEC}/libexec/pbs_init.d;' \\\"$pbs_unitfile\\\"\"\n\t\t\t\teval \"sed -i 's;\\(^[[:space:]]*ExecStart=\\)[^[:space:]]*;\\1${PBS_EXEC}/libexec/pbs_init.d;' \\\"$pbs_unitfile\\\"\"\n\t\t\t\teval \"sed -i 's;\\(^[[:space:]]*ExecStop=\\)[^[:space:]]*;\\1${PBS_EXEC}/libexec/pbs_init.d;' \\\"$pbs_unitfile\\\"\"\n\t\t\t\tif command -v systemctl >/dev/null 2>&1; then\n\t\t\t\t\tsystemctl enable pbs && systemctl daemon-reload\n\t\t\t\t\tif [ $? != 0 -a -d \"${presetdir}\" ]; then\n\t\t\t\t\t\techo \"*** Creating preset file ${presetdir}/95-pbs.preset\"\n\t\t\t\t\t\techo \"enable pbs.service\" > \"${presetdir}/95-pbs.preset\"\n\t\t\t\t\tfi\n\t\t\t\telse\n\t\t\t\t\techo \"*** Systemctl binary is not available; Failed to register PBS as a service\"\n\t\t\t\tfi\n\t\t\tfi\n\t\t\t;;\n\t\tesac\n\t\tpbslibdir=\"${PBS_EXEC}/lib64\"\n\t\t[ -d \"${pbslibdir}\" ] || pbslibdir=\"${PBS_EXEC}/lib\"\n\t\tif [ -f /var/tmp/pbs_boot_check ] ; then\n\t\t\trm -f /var/tmp/pbs_boot_check\n\t\tfi\n\tfi\n\techo \"***\"\n\n\tif [ \"$conf\" != \"@PBS_CONF_FILE@\" ]; then\n\t\techo \"*** =======\"\n\t\techo \"*** NOTICE:\"\n\t\techo \"*** =======\"\n\t\techo \"*** PBS configuration information has been saved to a location\"\n\t\techo \"*** other than the default. 
In order to make this the default\"\n\t\techo \"*** installation, a symbolic link must be created to the new\"\n\t\techo \"*** configuration file by manually issuing a command similar\"\n\t\techo \"*** to the following:\"\n\t\techo \"*** ln -s $conf @PBS_CONF_FILE@\"\n\t\techo \"***\"\n\tfi\n}\n\n\n\ncreate_home() {\n\tif [ $INSTALL_PACKAGE = client ]; then\n\t\tif [ -x ${PBS_EXEC}/bin/qstat ] ;then\n\t\t\techo \"*** The PBS commands have been installed in ${PBS_EXEC}/bin.\"\n\t\t\techo \"***\"\n\t\tfi\n\t\techo \"*** End of ${0}\"\n\t\texit 0\n\tfi\n\n\t# This is not a client install. Create PBS_HOME.\n\techo \"*** PBS_HOME is $PBS_HOME\"\n\tcreatedir \"$PBS_HOME\" 0755\n\n\t# Create the pbs_environment file if it does not exist\n\tenvfile=\"${PBS_HOME}/pbs_environment\"\n\tif [ ! -f \"$envfile\" ]; then\n\t\tnewtz=\"\"\n\t\tif [ -f /etc/TIMEZONE ]; then\n\t\t\techo \"*** Setting TZ from /etc/TIMEZONE\"\n\t\t\tnewtz=`grep '^TZ' /etc/TIMEZONE`\n\t\telif [ -f /etc/sysconfig/clock ]; then\n\t\t\techo \"*** Setting TZ from /etc/sysconfig/clock\"\n\t\t\t. 
/etc/sysconfig/clock\n\t\t\tif [ -f /etc/redhat-release ]; then\n\t\t\t\tif [ -n \"$ZONE\" ]; then\n\t\t\t\t\tnewtz=\"`echo TZ=${ZONE} | sed 's/ /_/g'`\"\n\t\t\t\tfi\n\t\t\telse\n\t\t\t\tif [ -n \"$TIMEZONE\" ]; then\n\t\t\t\t\tnewtz=\"`echo TZ=${TIMEZONE} | sed 's/ /_/g'`\"\n\t\t\t\tfi\n\t\t\tfi\n\t\telif [ -n \"$TZ\" ]; then\n\t\t\techo \"*** Setting TZ from \\$TZ\"\n\t\t\tnewtz=\"TZ=${TZ}\"\n\t\tfi\n\n\t\teuilibus=\"us\"\n\t\tif [ -f $envfile ] ; then\n\t\t\techo \"*** Found existing $envfile\"\n\t\t\tif [ -n \"$newtz\" ]; then\n\t\t\t\techo \"*** Replacing TZ with $newtz\"\n\t\t\t\tgrep -v '^TZ' $envfile > ${envfile}.new\n\t\t\t\techo $newtz >> ${envfile}.new\n\t\t\t\tmv -f $envfile ${envfile}.old\n\t\t\t\tmv -f ${envfile}.new $envfile\n\t\t\tfi\n\t\telse\n\t\t\techo \"*** Creating new file $envfile\"\n\t\t\ttouch $envfile\n\t\t\tchmod 644 $envfile\n\t\t\tif [ -n \"$newtz\" ]; then\n\t\t\t\techo $newtz >> $envfile\n\t\t\telse\n\t\t\t\techo \"*** WARNING: TZ not set in $envfile\"\n\t\t\tfi\n\n\t\t\techo PATH=\"/bin:/usr/bin\" >> $envfile\n\t\tfi\n\telse\n\t\techo \"*** Existing environment file left unmodified: $envfile\"\n\tfi\n\techo \"***\"\n\n\t# Configure PBS_HOME for server\n\tif [ -x \"$PBS_EXEC/sbin/pbs_server\" ]; then\n\t\techo \"*** The PBS server has been installed in ${PBS_EXEC}/sbin.\"\n\t\tcreatepath <<-EOF\n\t\t\t0755 server_logs\n\t\t\t1777 spool\n\t\t\t0750 server_priv\n\t\t\t0755 server_priv/accounting\n\t\t\t0750 server_priv/jobs\n\t\t\t0750 server_priv/users\n\t\t\t0750 server_priv/hooks\n\t\t\t0750 server_priv/hooks/tmp\n\t\tEOF\n\t\t# copy PBS hooks into place\n\t\tpbslibdir=\"${PBS_EXEC}/lib64\"\n\t\t[ -d \"${pbslibdir}\" ] || pbslibdir=\"${PBS_EXEC}/lib\"\n\t\tif [ -d ${pbslibdir}/python/altair/pbs_hooks ]; then\n\t\t\tcp -p ${pbslibdir}/python/altair/pbs_hooks/* \\\n\t\t\t\t\t${PBS_HOME}/server_priv/hooks\n\t\tfi\n\t\t# special for Cray\n\t\tif is_cray_xt; then\n\t\t\tsed --in-place \"s/enabled=false/enabled=true/\" 
$PBS_HOME/server_priv/hooks/PBS_alps_inventory_check.HK\n\t\telse\n\t\t\trm -f ${PBS_HOME}/server_priv/hooks/PBS_xeon_phi_provision.{HK,PY}\n\t\tfi\n\t\t# create the database user file if it does not exist\n\t\tdbuser_fl=\"${PBS_HOME}/server_priv/db_user\"\n\t\tif [ ! -f \"${dbuser_fl}\" ]; then\n\t\t\tprintf \"${dbuser:-@database_user@}\" >\"${dbuser_fl}\"\n\t\t\tchmod 0600 \"${dbuser_fl}\"\n\t\tfi\n\tfi\n\n\t# Configure PBS_HOME for scheduler\n\tif [ -x \"$PBS_EXEC/sbin/pbs_sched\" ]; then\n\t\techo \"*** The PBS scheduler has been installed in ${PBS_EXEC}/sbin.\"\n\t\tcreatepath <<-EOF\n\t\t\t0755 sched_logs\n\t\t\t0750 sched_priv\n\t\tEOF\n\t\t[ -f \"${PBS_HOME}/sched_priv/dedicated_time\" ] || cp \"${PBS_EXEC}/etc/pbs_dedicated\" \"${PBS_HOME}/sched_priv/dedicated_time\"\n\t\t[ -f \"${PBS_HOME}/sched_priv/holidays\" ] || cp \"${PBS_EXEC}/etc/pbs_holidays\" \"${PBS_HOME}/sched_priv/holidays\"\n\t\t[ -f \"${PBS_HOME}/sched_priv/resource_group\" ] || cp \"${PBS_EXEC}/etc/pbs_resource_group\" \"${PBS_HOME}/sched_priv/resource_group\"\n\t\tif [ ! -f ${PBS_HOME}/sched_priv/sched_config ]; then\n\t\t\tcp ${PBS_EXEC}/etc/pbs_sched_config ${PBS_HOME}/sched_priv/sched_config\n\t\t\tchmod 644 ${PBS_HOME}/sched_priv/sched_config\n\t\tfi\n\t\t# special for cray... add vntype and hbmem to sched_config if it isn't already there\n\t\tif is_cray_xt; then\n\t\t\tsconfig=\"${PBS_HOME}/sched_priv/sched_config\"\n\t\t\tgrep '^[[:space:]]*resources:.*vntype' $sconfig > /dev/null\n\t\t\tif [ $? -ne 0 ]; then\n\t\t\t\techo \"*** Added vntype to sched_config resources\"\n\t\t\t\tsed --in-place '/^[[:space:]]*resources:/ s/\\\"$/, vntype\\\"/' $sconfig\n\t\t\tfi\n\n\t\t\tgrep '^[[:space:]]*resources:.*hbmem' $sconfig > /dev/null\n\t\t\tif [ $? 
-ne 0 ]; then\n\t\t\t\techo \"*** Added hbmem to sched_config resources\"\n\t\t\t\tsed --in-place '/^[[:space:]]*resources:/ s/\\\"$/, hbmem\\\"/' $sconfig\n\t\t\tfi\n\t\tfi\n\n\t\tif [ \"${PBS_DAEMON_SERVICE_USER}x\" != \"x\" ]; then\n\t\t\tid=`id ${PBS_DAEMON_SERVICE_USER} 2>&1`\n\t\t\tif [ $? -ne 0 ]; then\n\t\t\t\techo \"*** PBS_DAEMON_SERVICE_USER ${PBS_DAEMON_SERVICE_USER} does not exist\"\n\t\t\telse\n\t\t\t\tchown -R \"${PBS_DAEMON_SERVICE_USER}\" \"${PBS_HOME}/sched_priv\"\n\t\t\t\tchown -R \"${PBS_DAEMON_SERVICE_USER}\" \"${PBS_HOME}/sched_logs\"\n\t\t\tfi\n\t\tfi\n\n\t\techo \"***\"\n\n\tfi\n\n\t# Configure PBS_HOME for pbs_comm\n\tif [ -x \"$PBS_EXEC/sbin/pbs_comm\" ]; then\n\t\techo \"*** The PBS communication agent has been installed in ${PBS_EXEC}/sbin.\"\n\t\tcreatepath <<-EOF\n\t\t\t0755 comm_logs\n\t\t\t0750 server_priv\n\t\t\t1777 spool\n\t\tEOF\n\t\techo \"***\"\n\tfi\n\n\t# Configure PBS_HOME for MOM\n\tif [ -x \"$PBS_EXEC/sbin/pbs_mom\" ] ;then\n\t\techo \"*** The PBS MOM has been installed in ${PBS_EXEC}/sbin.\"\n\t\tcreatepath <<-EOF\n\t\t\t0755 aux\n\t\t\t0700 checkpoint\n\t\t\t0755 mom_logs\n\t\t\t0751 mom_priv\n\t\t\t0751 mom_priv/jobs\n\t\t\t0750 mom_priv/hooks\n\t\t\t0750 mom_priv/hooks/tmp\n\t\t\t1777 spool\n\t\t\t1777 undelivered\n\t\tEOF\n\t\tmompriv=\"${PBS_HOME}/mom_priv\"\n\t\tmomconfig=\"${mompriv}/config\"\n\t\tif [ ! -f \"$momconfig\" ]; then\n\t\t\ttouch \"$momconfig\"\n\t\t\tchmod 0644 \"$momconfig\"\n\t\tfi\n\n\t\tif is_cray_xt; then\n\t\t\tgrep \"^\\$vnodedef_additive\" \"$momconfig\" >/dev/null\n\t\t\tif [ $? -ne 0 ] ; then\n\t\t\t\techo \"\\$vnodedef_additive 0\" >> $momconfig\n\t\t\tfi\n\t\t\tgrep \"^\\$alps_client\" \"$momconfig\" >/dev/null\n\t\t\tif [ $? 
-ne 0 ] ; then\n\t\t\t\techo \"\\$alps_client /opt/cray/alps/default/bin/apbasil\" >> $momconfig\n\t\t\tfi\n\t\tfi\n\t\techo \"***\"\n\tfi\n\n\tif [ -x \"${PBS_EXEC}/bin/qstat\" ] ;then\n\t\techo \"*** The PBS commands have been installed in ${PBS_EXEC}/bin.\"\n\t\techo \"***\"\n\telse\n\t\techo \"*** The PBS commands are missing in ${PBS_EXEC}/bin.\"\n\t\techo \"***\"\n\t\texit 1\n\tfi\n\n\t# Do not update PBS_HOME/pbs_version here, pbs_habitat will do that.\n\n}\n\necho \"*** PBS Installation Summary\"\necho \"***\"\necho \"*** Postinstall script called as follows:\"\nprintf \"*** $0 \"; printf \"%q \" \"$@\"; printf \"\\n\"\necho \"***\"\nPBS_VERSION='@PBS_VERSION@'\nconf=\"${PBS_CONF_FILE:-@PBS_CONF_FILE@}\"\noldconfdir=`dirname \"${conf}\"`\nostype=`uname 2>/dev/null`\nunset PBS_EXEC\nunset preset_dbuser\nunset preset_serviceuser\numask 022\nif [ \"${PBS_DATA_SERVICE_USER:+set}\" ]; then\n\tpreset_dbuser=\"${PBS_DATA_SERVICE_USER}\"\nfi\nif [ \"${PBS_DAEMON_SERVICE_USER:+set}\" ]; then\n\tpreset_serviceuser=\"${PBS_DAEMON_SERVICE_USER}\"\nfi\n\n# Define the location of a file that the INSTALL script may have created.\n# This file will be used regardless of installation method.\nnewconf=${oldconfdir}/pbs.conf.${PBS_VERSION}\n\nINSTALL_METHOD=\"rpm\"\ncase \"$1\" in\nserver)\n\tINSTALL_PACKAGE=$1\n\tPBS_VERSION=\"${2:-@PBS_VERSION@}\"\n\tnewpbs_exec=\"${3:-@prefix@}\"\n\tnewpbs_home=\"${4:-@PBS_SERVER_HOME@}\"\n\tdbuser=\"${preset_dbuser:-${5:-@database_user@}}\"\n\tif [ \"$6\" = \"sameconf\" ]; then\n\t\tsameconf=\"true\"\n\telse\n\t\tsameconf=\"false\"\n\tfi\n\tserviceuser=\"${preset_serviceuser:-${7:-@service_user@}}\"\n\tif [ ! 
-x \"$newpbs_exec/sbin/pbs_server\" ]; then\n\t\techo \"***\"\n\t\techo \"*** Unable to locate PBS executables!\"\n\t\techo \"***\"\n\t\texit 1\n\tfi\n\t;;\nexecution)\n\tINSTALL_PACKAGE=$1\n\tPBS_VERSION=\"${2:-@PBS_VERSION@}\"\n\tnewpbs_exec=\"${3:-@prefix@}\"\n\tnewpbs_home=\"${4:-@PBS_SERVER_HOME@}\"\n\tif [ \"$5\" = \"sameconf\" ]; then\n\t\tsameconf=\"true\"\n\telse\n\t\tsameconf=\"false\"\n\tfi\n\tif [ ! -x \"$newpbs_exec/sbin/pbs_mom\" ]; then\n\t\techo \"***\"\n\t\techo \"*** Unable to locate PBS executables!\"\n\t\techo \"***\"\n\t\texit 1\n\tfi\n\t;;\nclient)\n\tINSTALL_PACKAGE=$1\n\tPBS_VERSION=\"${2:-@PBS_VERSION@}\"\n\tnewpbs_exec=\"${3:-@prefix@}\"\n\tnewpbs_home='@PBS_SERVER_HOME@'\n\tif [ \"$4\" = \"sameconf\" ]; then\n\t\tsameconf=\"true\"\n\telse\n\t\tsameconf=\"false\"\n\tfi\n\tif [ ! -x \"$newpbs_exec/bin/qstat\" ]; then\n\t\techo \"***\"\n\t\techo \"*** Unable to locate PBS executables!\"\n\t\techo \"***\"\n\t\texit 1\n\tfi\n\t;;\n*)\n\tINSTALL_METHOD=\"script\"\n\tsameconf=\"false\"\n\tif [ -f \"$newconf\" ]; then\n\t\tnewpbs_exec=`grep '^[[:space:]]*PBS_EXEC=' \"$newconf\" | tail -1 | sed 's/^[[:space:]]*PBS_EXEC=\\([^[:space:]]*\\)[[:space:]]*/\\1/'`\n\t\tnewpbs_home=`grep '^[[:space:]]*PBS_HOME=' \"$newconf\" | tail -1 | sed 's/^[[:space:]]*PBS_HOME=\\([^[:space:]]*\\)[[:space:]]*/\\1/'`\n\telse\n\t\tnewpbs_exec=@prefix@\n\t\tnewpbs_home=@PBS_SERVER_HOME@\n\tfi\n\t;;\nesac\n\n# Ensure newpbs_exec exists.\nif [ ! -d \"$newpbs_exec\" ]; then\n\techo \"***\"\n\techo \"*** Directory does not exist: $newpbs_exec\"\n\techo \"***\"\n\texit 1\nfi\n\nif [ \"$sameconf\" != \"true\" ]; then\n\t# Edit the new configuration file based on the install method.\n\t# Create newconf for the rpm install method.\n\tcreate_conf\n\n\t# Set defaultdir based on the installed location of PBS. 
It controls whether\n\t# a symbolic link named \"default\" will be created or updated.\n\tdefaultdir=0\n\t[ `basename \"$PBS_EXEC\"` = \"$PBS_VERSION\" ] && defaultdir=1\n\n\t# Adjust PBS_EXEC if defaultdir is enabled\n\tif [ $defaultdir -ne 0 ]; then\n\t\trealexec=\"${PBS_EXEC}\"\n\t\tPBS_EXEC=`dirname ${PBS_EXEC}`/default\n\t\teval \"sed -i 's;\\(^[[:space:]]*PBS_EXEC=\\)[^[:space:]]*;\\1$PBS_EXEC;' \\\"$newconf\\\"\"\n\t\tif [ -h \"${PBS_EXEC}\" ] ; then\n\t\t\techo \"*** Removing old symbolic link ${PBS_EXEC}\"\n\t\t\trm -f ${PBS_EXEC}\n\t\tfi\n\t\techo \"*** Creating new symbolic link ${PBS_EXEC} pointing to ${realexec}\"\n\t\tln -s \"${realexec}\" \"${PBS_EXEC}\"\n\tfi\n\n\t# Perform some sanity checks.\n\tperform_checks\n\n\techo \"*** Replacing $conf with $newconf\"\n\tmv -f \"$newconf\" \"$conf\"\n\n\tif [ -n \"$conforig\" ]; then\n\t\techo \"*** $conf has been modified.\"\n\t\techo \"*** The original contents have been saved to $conforig\"\n\telse\n\t\techo \"*** $conf has been created.\"\n\tfi\n\techo \"***\"\n\n\t# If any daemon is to be started, we need to install the init.d script\n\t# also if installing on Cray XT; but for Cray don't do chkconfig, see AG\n\tinstall_pbsinitd\nfi\n\n. \"$conf\"\n# The remainder of the script deals with creating and configuring PBS_HOME.\n# This is not necessary for a client installation.\ncreate_home\n\n# Now need to save the license information into PBS_HOME for pbs_habitat\n\tif [ ${PBS_LICENSE_INFO:+set} ] ; then\n\t\tif is_cray_xt ; then\n\t\t\txtopview -e \"[ -d ${PBS_HOME}/server_priv/ ] && echo ${PBS_LICENSE_INFO} > ${PBS_HOME}/server_priv/PBS_licensing_loc\"\n\t\telse\n\t\t\t[ -d ${PBS_HOME}/server_priv/ ] && echo ${PBS_LICENSE_INFO} > ${PBS_HOME}/server_priv/PBS_licensing_loc\n\t\tfi\n\tfi\necho \"*** End of ${0}\"\nexit 0\n"
  },
  {
    "path": "src/cmds/scripts/pbs_posttrans",
    "content": "#!/bin/bash\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\n# The %preun section of 14.x unconditially removes /etc/init.d/pbs\n# because it does not check whether the package is being removed\n# or upgraded. Make sure it exists here.\nif [ -r $1/libexec/pbs_init.d ]; then\n\tinstall -D $1/libexec/pbs_init.d /etc/init.d/pbs\nfi\n"
  },
  {
    "path": "src/cmds/scripts/pbs_preuninstall",
    "content": "#!/bin/bash\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\npbs_version=\"$2\"\npbs_exec=\"$3\"\npbs_home=\"${4:-/var/spool/pbs}\"\nhave_systemd=\"${5:-0}\"\n\nif [ `basename $pbs_exec` = $pbs_version ]; then\n    top_level=`dirname $pbs_exec`\n    if [ -h $top_level/default ]; then\n        link_target=`readlink $top_level/default`\n        [ `basename \"$link_target\"` = $pbs_version ] && rm -f $top_level/default\n    fi\nfi\nrm -f /opt/modulefiles/pbs/$pbs_version\n\ncase \"$1\" in\nserver|execution)\n    [ -x /etc/init.d/pbs ] && /etc/init.d/pbs stop\n    [ -x /sbin/chkconfig ] && /sbin/chkconfig --del pbs >/dev/null 2>&1\n    rm -f /etc/rc.d/rc?.d/[KS]??pbs\n    rm -f /var/tmp/pbs_boot_check\n    if [ $have_systemd = 1 ]; then\n        echo \"have systemd\"\n        systemctl disable pbs\n        rm -f /usr/lib/systemd/system-preset/95-pbs.preset\n        if command -v python &> /dev/null && [ -f ${pbs_home}/mom_priv/hooks/pbs_cgroups.CF ]; then\n            service_name=`cat ${pbs_home}/mom_priv/hooks/pbs_cgroups.CF | python -c \"import sys, json; print(json.load(sys.stdin)['cgroup_prefix'])\"`\n            systemctl stop ${service_name}.service\n            [ -d /sys/fs/cgroup/cpuset/${service_name}.service/jobid ] && rmdir /sys/fs/cgroup/cpuset/${service_name}.service/jobid\n            [ -d /sys/fs/cgroup/cpuset/${service_name}.service ] && rmdir /sys/fs/cgroup/cpuset/${service_name}.service\n        
fi\n        [ -d /sys/fs/cgroup/cpuset/pbspro.service/jobid ] && rmdir /sys/fs/cgroup/cpuset/pbspro.service/jobid\n        [ -d /sys/fs/cgroup/cpuset/pbspro.service ] && rmdir /sys/fs/cgroup/cpuset/pbspro.service\n        exit 0\n    fi\n    ;;\nesac\n"
  },
  {
    "path": "src/cmds/scripts/pbs_reload.in",
    "content": "#!/bin/sh\n#\n# Copyright (C) 1994-2020 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\ncheck_cgroup=$$\npath=`awk '/cgroup/ {print $2; exit}' /proc/mounts`\ncgpath=$path/systemd/system.slice/pbs.service\nif [ -e $cgpath/cgroup.procs ]; then\n    echo $check_cgroup > $cgpath/cgroup.procs\nfi\nexec \"$@\"\n"
  },
  {
    "path": "src/cmds/scripts/pbs_server",
    "content": "#!/bin/bash\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n# save env variables in a temp file\nenv_save=\"/tmp/$$_$(date +'%s')_env_save\"\ndeclare -x > \"${env_save}\"\n\n. ${PBS_CONF_FILE:-/etc/pbs.conf}\n\n# re-apply saved env variables\n. 
\"${env_save}\"\n\nrm -f \"${env_save}\"\n\n# Source the file that sets DB ENV variables\n. ${PBS_EXEC}/libexec/pbs_db_env\n\n\n# Loading dynamic library at PBS_EXEC/lib\nexport LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$PBS_EXEC/lib/:$PBS_EXEC/lib64/\n\n\nexec $PBS_EXEC/sbin/pbs_server.bin ${1+\"$@\"}\n"
  },
  {
    "path": "src/cmds/scripts/pbs_snapshot",
    "content": "#!/bin/bash\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nptllibpath=\nptlbinpath=\nptl_prefix_lib=\n\n# Try package install paths\nif [ -f /etc/debian_version ]; then\n    __ptlpkgname=$(dpkg-query -W -f='${binary:Package}\\n' 2>/dev/null | grep -E '*-ptl$')\n    if [ \"x${__ptlpkgname}\" != \"x\" ]; then\n        ptl_prefix_lib=$(dpkg -L ${__ptlpkgname} 2>/dev/null | grep -m 1 lib$ 2>/dev/null)\n    fi\nelse\n    __ptlpkgname=$(rpm -qa 2>/dev/null | grep -E '*-ptl-[[:digit:]]')\n    if [ \"x${__ptlpkgname}\" != \"x\" ]; then\n        ptl_prefix_lib=$(rpm -ql ${__ptlpkgname} 2>/dev/null | grep -m 1 lib$ 2>/dev/null)\n    fi\nfi\n\n# Try system paths\nif [ \"x${ptl_prefix_lib}\" != \"x\" ]; then\n    if [ ! -d ${ptl_prefix_lib} ]; then\n        ptl_prefix_lib=/usr/local/lib/python3.[[:digit:]]/site-packages\n        if [ ! 
-d ${ptl_prefix_lib} ]; then\n            ptl_prefix_lib=/usr/lib/python3.[[:digit:]]/site-packages\n        fi\n    fi\nfi\n\nconf=\"${PBS_CONF_FILE:-/etc/pbs.conf}\"\nif [ -r \"${conf}\" ]; then\n    # we only need PBS_EXEC from pbs.conf\n    __PBS_EXEC=$( grep '^[[:space:]]*PBS_EXEC=' \"$conf\" | tail -1 | sed 's/^[[:space:]]*PBS_EXEC=\\([^[:space:]]*\\)[[:space:]]*/\\1/' )\nelse\n    echo \"Error: Unable to read PBS conf file\" >&2\n    exit 1\nfi\nunset conf\n\n__pbs_snapshot=pbs_snapshot\n\nif [ \"x${ptl_prefix_lib}\" != \"x\" ]; then\n    python_dir=$(/bin/ls -1 ${ptl_prefix_lib})\n    prefix=$(dirname ${ptl_prefix_lib})\n\n    ptllibpath=${prefix}/lib/${python_dir}/site-packages\n    ptlbinpath=${prefix}/bin\nelse\n    if [ \"X${__PBS_EXEC}\" != \"X\" ]; then\n        # Define PATH and PYTHONPATH for the users\n        PTL_PREFIX=$(dirname ${__PBS_EXEC})/ptl\n        if [ ! -d \"${PTL_PREFIX}/lib/site-packages\" ]; then\n            ptlbinpath=${__PBS_EXEC}/unsupported/fw/bin\n            ptllibpath=${__PBS_EXEC}/unsupported/fw\n            __pbs_snapshot=${__pbs_snapshot}.py\n        else\n            python_dir=$(/bin/ls -1 ${PTL_PREFIX}/lib)/site-packages\n            ptllibpath=${PTL_PREFIX}/lib/${python_dir}\n            ptlbinpath=${PTL_PREFIX}/bin\n        fi\n       unset PTL_PREFIX\n       unset python_dir\n    fi\nfi\n\nexport PYTHONPATH=${ptllibpath}:${PYTHONPATH}\n\nif [ -d $ptlbinpath ] && [ -d $ptllibpath ];then\n    if [ -x \"${__PBS_EXEC}/python/bin/python\" ]; then\n        ${__PBS_EXEC}/python/bin/python ${ptlbinpath}/${__pbs_snapshot} \"${@}\"\n    else\n        python3 ${ptlbinpath}/${__pbs_snapshot} \"${@}\"\n    fi\nelse\n    echo \"***\" >&2\n    echo \"*** Ptllib/Ptlbin Path Not found\" >&2\n    echo \"***\" >&2\n    exit 1\nfi\n\nunset __PBS_EXEC\nunset __pbs_snapshot\n"
  },
  {
    "path": "src/cmds/scripts/pbs_topologyinfo",
    "content": "#!/bin/sh -\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n. ${PBS_CONF_FILE:-/etc/pbs.conf}\nexport PBS_HOME\n\npbslibdir=\"${PBS_EXEC}/lib64\"\n[ -d \"${pbslibdir}\" ] || pbslibdir=\"${PBS_EXEC}/lib\"\nexec $PBS_EXEC/bin/pbs_python $pbslibdir/python/pbs_topologyinfo.py ${1+\"$@\"}\n"
  },
  {
    "path": "src/cmds/scripts/pbs_topologyinfo.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport errno\nfrom optparse import OptionParser\nimport os\nimport re\nimport sys\nimport math\nimport platform\n\n\nclass Inventory(object):\n    \"\"\"\n    This class is used to parse the inventory details\n    and hold the device information\n    \"\"\"\n\n    def reset(self):\n        self.nsockets = 0\n        self.nnodes = 0\n        self.hwloclatest = 0\n        self.CrayVersion = \"0.0\"\n        self.ndevices = 0\n        self.gpudevices = 0\n        self.cardflag = False\n        self.renderflag = False\n\n    def __init__(self):\n        self.reset()\n\n    def reportsockets_win(self, topo_file):\n        \"\"\"\n        counting devices by parsing topo_file\n        \"\"\"\n        temp = topo_file.read().decode('utf-8').split(',')\n        for item in temp:\n            if item.find('sockets:') != -1:\n                self.nsockets = int(item[8:])  # len('sockets:') = 8\n                self.ndevices += int(item[8:])\n            if item.find('gpus:') != -1:\n                self.ndevices += int(item[5:])  # len('gpus:') = 5\n            if item.find('mics:') != -1:\n                self.ndevices += int(item[5:])  # len('mics:') = 5\n\n    def latest_hwloc(self, hwlocVersion):\n        \"\"\"\n        socket tag is different on versions above 1.11\n        turning hwloclatest flag on if the version is above 1.11\n        \"\"\"\n        
hwlocVersion = hwlocVersion.split('.')\n        major = int(hwlocVersion[0])\n        minor = int(hwlocVersion[1]) if len(hwlocVersion) > 1 else 0\n        if ((major == 1) and (minor >= 11)) or (major > 1):\n            self.hwloclatest = 1\n\n    def calculate(self):\n        \"\"\"\n        Returns the number of licenses required based on specific formula\n        \"\"\"\n        self.ndevices += self.gpudevices\n        return(int(math.ceil(self.ndevices / 4.0)))\n\n    def reportsockets(self, dirs, files, options):\n        \"\"\"\n        Look for and report the number of socket/node licenses\n        required by the cluster. Uses expat to parse the XML.\n        dirs - directory to look for topology files.\n        files - files for which inventory needs to be parsed\n        options - node / socket.\n        \"\"\"\n\n        if files is None:\n            compute_socket_nodelist = True\n            try:\n                files = os.listdir(dirs)\n                if not files:\n                    return\n            except (IOError, OSError) as err:\n                (e, strerror) = err.args\n                print(\"%s:  %s (%s)\" % (dirs, strerror, e))\n                return\n        else:\n            compute_socket_nodelist = False\n        try:\n            maxwidth = max(list(map(len, files)))\n        except Exception as e:\n            print('max/map failed: %s' % e)\n            return\n\n        try:\n            import xml.parsers.expat\n            from xml.parsers.expat import ExpatError\n            ExpatParser = True\n        except ImportError:\n            ExpatParser = False\n\n        for name in files:\n            pathname = os.sep.join((dirs, name))\n            self.reset()\n            try:\n                with open(pathname, \"rb\") as topo_file:\n                    temp_buf = topo_file.readline().decode('utf-8')\n                    topo_file.seek(0)\n                    # Windows topology file are not XML files. 
So if\n                    # a file does not start with '<', it is a Windows\n                    # topology file\n                    if not temp_buf.startswith('<'):\n                        self.reportsockets_win(topo_file)\n                    elif ExpatParser:\n                        try:\n                            p = xml.parsers.expat.ParserCreate()\n                            p.StartElementHandler = socketXMLstart\n                            p.ParseFile(topo_file)\n                        except ExpatError as e:\n                            print(\"%s:  parsing error at line %d, column %d\"\n                                  % (name, e.lineno, e.offset))\n                    else:\n                        self.countsockets(topo_file)\n                    if options.sockets:\n                        print(\"%-*s%d\" % (maxwidth + 1, name, self.nsockets))\n                    else:\n                        self.nnodes += self.calculate()\n                        print(\"%-*s%d\" % (maxwidth + 1, name,\n                                          inventory.nnodes))\n\n            except IOError as err:\n                (e, strerror) = err.args\n                if e == errno.ENOENT:\n                    if not compute_socket_nodelist:\n                        print(\"no socket information available for node %s\"\n                              % name)\n                    continue\n                else:\n                    print(\"%s:  %s (%s)\" % (pathname, strerror, e))\n                    raise\n\n    def countsockets(self, topo_file):\n        \"\"\"\n        Used when an import of the xml.parsers.expat module fails.\n        This version makes use of regex expressions.\n        \"\"\"\n        socketpattern = r'<\\s*object\\s+type=\"Socket\"'\n        packagepattern = r'<\\s*object\\s+type=\"Package\"'\n        gpupattern = r'<\\s*object\\s+type=\"OSDev\"\\s+name=\"card\\d+\"\\s+' \\\n            'osdev_type=\"1\"'\n        renderpattern = 
r'<\\s*object\\s+type=\"OSDev\"\\s+name=\"renderD\\d+\"\\s+' \\\n            'osdev_type=\"1\"'\n        micpattern = r'<\\s*object\\s+type=\"OSDev\"\\s+name=\"mic\\d+\"\\s+' \\\n            'osdev_type=\"5\"'\n        craypattern = r'<\\s*BasilResponse\\s+'\n        craynodepattern = r'<\\s*Node\\s+node_id='\n        craysocketpattern = r'<\\s*Socket\\s+ordinal='\n        craygpupattern = r'<\\s*Accelerator\\s+.*type=\"GPU\"'\n        hwloclatestpattern = r'<\\s*info\\s+name=\"hwlocVersion\"\\s+'\n\n        for line in topo_file:\n            line = line.decode('utf-8')\n            if re.search(craypattern, line):\n                start_index = line.find('protocol=\"') + len('protocol=\"')\n                self.CrayVersion = line[start_index:\n                                        line.find('\"', start_index)]\n                continue\n            if re.search(hwloclatestpattern, line):\n                hwlocVer = line[line.find('value=\"') + len('value=\"'):\n                                line.rfind('\"/>')]\n                self.latest_hwloc(hwlocVer)\n                continue\n\n            if self.CrayVersion != \"0.0\":\n                if re.search(craynodepattern, line):\n                    self.nnodes += self.calculate()\n                    self.ndevices = 0\n                    if float(self.CrayVersion) <= 1.2:\n                        self.nsockets += 2\n                        self.ndevices += 2\n                elif re.search(craysocketpattern, line):\n                    self.nsockets += 1\n                    self.ndevices += 1\n                if re.search(craygpupattern, line):\n                    self.ndevices += 1\n            else:\n                if ((self.hwloclatest and re.search(packagepattern, line)) or\n                        (not self.hwloclatest and re.search(socketpattern,\n                                                            line))):\n                    self.nsockets += 1\n                    self.ndevices += 1\n    
            self.cardflag += 1 if re.search(gpupattern, line) else 0\n                self.renderflag += 1 if re.search(renderpattern, line) else 0\n                self.ndevices += 1 if re.search(micpattern, line) else 0\n        self.gpudevices = min(self.cardflag, self.renderflag)\n\n\ndef socketXMLstart(name, attrs):\n    \"\"\"\n    StartElementHandler for expat parser\n    \"\"\"\n    global inventory\n    if name == \"BasilResponse\":\n        inventory.CrayVersion = attrs.get(\"protocol\")\n        return\n    if (name == \"info\" and attrs.get(\"name\") == \"hwlocVersion\"):\n        inventory.latest_hwloc(attrs.get(\"value\"))\n        return\n    if inventory.CrayVersion != \"0.0\":\n        if name == \"Node\":\n            inventory.nnodes += inventory.calculate()\n            inventory.ndevices = 0\n            if float(inventory.CrayVersion) <= 1.2:\n                inventory.nsockets += 2\n                inventory.ndevices += 2\n        elif name == \"Socket\":\n            inventory.nsockets += 1\n            inventory.ndevices += 1\n        if name == \"Accelerator\" and attrs.get(\"type\") == \"GPU\":\n            inventory.ndevices += 1\n    else:\n        if (name == \"object\" and ((inventory.hwloclatest == 1 and\n            attrs.get(\"type\") == \"Package\") or\n            (inventory.hwloclatest == 0 and attrs.get(\"type\") ==\n                \"Socket\"))):\n            inventory.nsockets += 1\n            inventory.ndevices += 1\n        if (name == \"object\" and attrs.get(\"type\") == \"OSDev\" and\n            attrs.get(\"osdev_type\") == \"1\" and\n                attrs.get(\"name\").startswith(\"card\")):\n            inventory.cardflag = True\n        elif (name == \"object\" and attrs.get(\"type\") == \"OSDev\" and\n              attrs.get(\"osdev_type\") == \"1\" and\n                attrs.get(\"name\").startswith(\"renderD\")):\n            if inventory.cardflag is True:\n                inventory.gpudevices += 1\n              
  inventory.cardflag = False\n        else:\n            inventory.cardflag = False\n        if (name == \"object\" and attrs.get(\"type\") == \"OSDev\" and\n            attrs.get(\"osdev_type\") == \"5\" and\n                attrs.get(\"name\").startswith(\"mic\")):\n            inventory.ndevices += 1\n\n\nif __name__ == \"__main__\":\n    usagestr = \"usage:  %prog [ -a -s ]\\n\\t%prog -s node1 [ node2 ... ]\"\n    parser = OptionParser(usage=usagestr)\n    parser.add_option(\"-a\", \"--all\", action=\"store_true\", dest=\"allnodes\",\n                      help=\"report on all nodes\")\n    parser.add_option(\"-s\", \"--sockets\", action=\"store_true\", dest=\"sockets\",\n                      help=\"report node socket count\")\n    parser.add_option(\"-l\", \"--license\", action=\"store_true\", dest=\"license\",\n                      help=\"report license count\")\n    (options, progargs) = parser.parse_args()\n\n    try:\n        topology_dir = os.sep.join((os.environ[\"PBS_HOME\"], \"server_priv\",\n                                    \"topology\"))\n    except KeyError:\n        print(\"PBS_HOME must be present in the caller's environment\")\n        sys.exit(1)\n    if not (options.sockets or options.license):\n        sys.exit(1)\n    inventory = Inventory()\n    if options.allnodes:\n        inventory.reportsockets(topology_dir, None, options)\n    else:\n        inventory.reportsockets(topology_dir, progargs, options)\n"
  },
  {
    "path": "src/cmds/scripts/pbsrun.ch_gm.init.in",
    "content": "#!/bin/sh\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\n### FILE: pbsrun_ch_gm.init ###\n\n####################################################################\n# strict_pbs: set to 1 if you want the wrapper script to be executed\n#             only if in a PBS environment\n#\n#####################################################################\nstrict_pbs=0\n\n#####################################################################\n# options_to_retain:  space-separated list of options and values that\n# \"pbsrun\" will pass on to the actual mpirun call. options must begin\n# with \"-\" or \"--\", option name can contain a wildcard (*) to match a\n# number of characters (e.g. 
--totalnum=*), and option arguments must\n# be specified by some arbitrary name bounded by angle brackets,\n# as in \"<val1>\".\n#####################################################################\noptions_to_retain=\"\\\n-v \\\n-t \\\n-s \\\n-r \\\n-h \\\n--gm-no-shmem \\\n-gm-no-shmem \\\n--gm-shmem-prefix \\\n-gm-shmem-prefix \\\n--gm-numa-shmem \\\n--gm-label \\\n-gm-label \\\n--gm-no-sigcatch \\\n-gm-no-sigcatch \\\n--gm-copy-env \\\n-gm-copy-env \\\n-totalview \\\n-tv \\\n-ddt \\\n-usage \\\n-help \\\n--help \\\n-mvback \\\n-mvhome \\\n--INIT \\\n--gm-tree-spawn \\\n-gm-tree-spawn \\\n--gm-fixed-alloc \\\n-gm-fixed-alloc \\\n--gm-spawn-servers \\\n-gm-spawn-servers \\\n-np <n> \\\n-wd <path> \\\n--gm-wait <n> \\\n-gm-wait <n> \\\n--gm-kill <n> \\\n-gm-kill <n> \\\n--gm-eager <n> \\\n-gm-eager <n> \\\n--gm-recv <m> \\\n-gm-recv <m> \\\n--gm-lock-mbytes <n> \\\n-gm-lock-mbytes <n> \\\n--gm-bounce-buffers <n> \\\n-gm-bounce-buffers <n> \\\n-pg <file> \\\n\"\n\n#####################################################################\n# options_to_ignore:  space-separated list of options and values that\n# \"pbsrun\" will NOT pass on to the actual mpirun call. options must begin\n# with \"-\" or \"--\", option name can contain a wildcard (*) to match a\n# number of characters (e.g. --totalnum=*), and option arguments must\n# be specified by some arbitrary name bounded by angle brackets,\n# as in \"<val1>\".\n#####################################################################\noptions_to_ignore=\"\\\n-machinefile <file> \\\n\"\n\n#####################################################################\n# options_to_transform:  space-separated list of options and values that\n# \"pbsrun\" will modify before passing on to the actual mpirun call.\n# options must begin with \"-\" or \"--\", option name can contain a\n# wildcard (*) to match a number of characters (e.g. 
--totalnum=*),\n# and option arguments must be specified by some arbitrary name\n# bounded by angle brackets, as in \"<val1>\".\n# NOTE: Adding values here requires code to be added to the\n# transform_action() function appearing later in this file.\n#####################################################################\noptions_to_transform=\"\\\n\"\n\n#####################################################################\n# options_to_fail: space-separated list of options that will cause \"pbsrun\"\n# to exit upon encountering a match.\n# options must begin with \"-\" or \"--\", and option name can contain a\n# wildcard (*) to match a number of characters (e.g. --totalnum=*).\n#\n#####################################################################\noptions_to_fail=\"\\\n\"\n\n#####################################################################\n# option_to_configfile: the SINGLE option and value that refers to the\n# name of the \"configfile\" containing command line segments, found\n# in certain versions of mpirun.\n# option must begin with \"-\" or \"--\".\n#\n#####################################################################\noption_to_configfile=\"\\\n\"\n\n#####################################################################\n# options_with_another_form: space-separated list of options and values\n# that can be found in options_to_retain, options_to_ignore, or\n# options_to_transform, whose syntax has an alternate, unsupported\n# form.\n# This usually occurs if a version of mpirun has different forms for\n# the same option.\n# For instance,\n#       MPICH2's mpirun provides:\n#               mpirun -localonly <x>\n#               mpirun -localonly\n# If options_to_retain lists \"-localonly\" as a supported value, then\n# set options_with_another_form=\"-localonly\" as well.\n# This would cause \"pbsrun\" to issue a\n# warning about alternate forms upon encountering the option.\n#\n# options must begin with \"-\" or \"--\", and option name can contain a\n# 
wildcard (*) to match a number of characters (e.g. --totalnum=*).\n#\n#####################################################################\noptions_with_another_form=\"\\\n\"\n\n####################################################################\n# pbs_attach: full path or relative path to the pbs_attach executable.\n#\n#####################################################################\npbs_attach=\"pbs_attach\"\n\n#####################################################################\n# options_to_pbs_attach:  are the special options to pass to the\n# pbs_attach call. You may pass variable references (e.g. $PBS_JOBID)\n# and they will be substituted by pbsrun to actual values.\n#####################################################################\noptions_to_pbs_attach=\"-j $PBS_JOBID\"\n\n################## transform_action() ###################################\n# The action to be performed for each actual item and value matched in\n# options_to_transform.\n# RETURN: echo the replacement item and value for the matched arguments.\n# NOTES:\n# (1) \"echo\" produces the return value of this function;\n#     do not arbitrarily invoke echo statements.\n# (2) Please use 'printf \"%s\" \"<value>\"' (must quote the <value>) in place\n#     of \"echo\" command here since we're returning an option, and 'echo' is\n#     notorious for the following behavior:\n#\t\techo \"-n\"      \t\t--> prints empty (-n is an echo opt)\n#     Desired behavior:\n#\t\tprintf \"%s\" \"-n\" \t--> prints \"-n\"\n#\n########################################################################\ntransform_action () {\n\targs=$*\n\n\tprintf \"%s\" \"$*\"\n}\n\n################## boot_action() ###################################\n# The action to be performed BEFORE calling mpirun.\n# The location of actual mpirun is passed as first argument.\n# RETURN: 0 for success; non-zero otherwise.\n# NOTES:\n# (1) 'return' produces the exit status of this function:\n#     0 for success; non-zero for 
failure.\n####################################################################\nboot_action () {\n\n\tmpirun_location=$1\n\treturn 0\n}\n\n################## evaluate_options_action() #######################\n# The action to be performed on the actual options and values matched\n# in options_to_retain, not including those in options_to_ignore, and\n# those changed arguments in options_to_transform, as well as any\n# other transformation needed on the program name and program arguments.\n#\n# RETURN: echo the list of final arguments and program arguments\n# to be passed on to mpirun command line.\n\n# NOTES:\n# (1) \"echo\" produces the return value of this function;\n#     do not arbitrarily invoke echo statements.\n# (2) Please use 'printf \"%s\" \"<value>\"' (must quote <value>) in place of\n#     \"echo\" command here since we're returning option string, and\n#     'echo' is notorious for the following behavior:\n#\t\techo \"-n\"               --> prints empty (-n is an echo opt)\n#     Desired behavior:\n# \t\tprintf \"%s\" \"-n\"        --> prints \"-n\"\n#\n########################################################################\nevaluate_options_action () {\n\targs=$*\n\n\tfound_np=0\n\twhile [ $# -gt 0 ]; do\n          if [ \"XX$1\" = \"XX-np\" ]; then\n\t\tfound_np=1\n\t\tbreak\n\t  fi\n\t  shift\n\tdone\n\n\tif [ $found_np -eq 0 ] ; then\n\t\tusernp=`cat ${PBS_NODEFILE} | wc -l | tr -d ' '`\n\t\targs=\"-np $usernp $args\"\n\tfi\n\n        pbs_machinefile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_machfile$$\"\n        cat -n ${PBS_NODEFILE} | \\\n        sort -k2 | uniq -f1 -c | \\\n        awk '{if ($1 == 1) print $2, $3; else print $2, $3 \":\" $1}'|\\\n        sort -n | awk '{print $2}' > $pbs_machinefile\n\n        args=\"-machinefile $pbs_machinefile $args\"\n\n\n\tprintf \"%s\" \"$args\"\n\n}\n\n################## configfile_cmdline_action() #######################\n# If the option_to_configfile (e.g. 
-configfile) is specified in the\n# mpirun command line, then this function gets passed any leading options\n# and values found before option_to_configfile.\n#\n# RETURN: return the actual options and values to be put in before the\n# option_to_configfile parameter. For instance,\n#\treturning \"--totalnum=N --file=Y\" would result in\n# an mpirun command line of:\n#\tmpirun --totalnum=N --file=Y -configfile pbs_config\n#\n########################################################################\nconfigfile_cmdline_action () {\n\targs=$*\n\n        printf \"\"\n\n}\n\n################## configfile_firstline_action () #######################\n# If the option_to_configfile (e.g. -configfile) is specified in the\n# mpirun command line, return here the item that will be put in the\n# FIRST line of the configuration file.\n# This is the place to put the \"-machinefile <filename>\" parameter\n# which determines the process-to-host mappings. Some versions\n# of mpirun (MPICH2, Intel MPI) require that the -machinefile parameter\n# must appear inside the config file and not on the command line.\n########################################################################\nconfigfile_firstline_action () {\n\tprintf \"\"\n}\n\n################## end_action() ########################################\n# The action to be performed AFTER calling mpirun, and also when\n# mpirun wrap script is prematurely interrupted.\n# INPUT: The location of actual mpirun is passed as first argument.\n# RETURN: none\n########################################################################\nend_action () {\n\tmpirun_location=$1\n\tpbs_machinefile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_machfile$$\"\n\trm -f $pbs_machinefile 2>/dev/null\n}\n"
  },
  {
    "path": "src/cmds/scripts/pbsrun.ch_mx.init.in",
    "content": "#!/bin/sh\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\n### FILE: pbsrun.ch_mx.init ###\n\n####################################################################\n# strict_pbs: set to 1 if you want the wrapper script to be executed\n#             only if in a PBS environment\n#\n#####################################################################\nstrict_pbs=0\n\n#####################################################################\n# options_to_retain:  space-separated list of options and values that\n# \"pbsrun\" will pass on to the actual mpirun call. options must begin\n# with \"-\" or \"--\", option name can contain a wildcard (*) to match a\n# number of characters (e.g. 
--totalnum=*), and option arguments must\n# be specified by some arbitrary name bounded by angle brackets,\n# as in \"<val1>\".\n#####################################################################\noptions_to_retain=\"\\\n-v \\\n-t \\\n-s \\\n-r \\\n-totalview \\\n-tv \\\n-usage \\\n-help \\\n--help \\\n-h \\\n--INIT \\\n-mvback \\\n-mvhome \\\n-totalview \\\n-ddt \\\n-wd <path> \\\n-np <n> \\\n--mx-wait <n> \\\n-mx-wait <n> \\\n--mx-kill <n> \\\n-mx-kill <n> \\\n--mx-recv <m> \\\n-mx-recv <m> \\\n--mx-noshmem \\\n-mx-noshmem \\\n--mx-copy-env \\\n-mx-copy-env \\\n--mx-tree-spawn \\\n-mx-tree-spawn \\\n--mx-label \\\n-mx-label \\\n--mx-no-sigcatch \\\n-mx-no-sigcatch \\\n-pg <file> \\\n\"\n\n#####################################################################\n# options_to_ignore:  space-separated list of options and values that\n# \"pbsrun\" will NOT pass on to the actual mpirun call. options must begin\n# with \"-\" or \"--\", option name can contain a wildcard (*) to match a\n# number of characters (e.g. --totalnum=*), and option arguments must\n# be specified by some arbitrary name bounded by angle brackets,\n# as in \"<val1>\".\n#####################################################################\noptions_to_ignore=\"\\\n-machinefile <file> \\\n\"\n\n#####################################################################\n# options_to_transform:  space-separated list of options and values that\n# \"pbsrun\" will modify before passing on to the actual mpirun call.\n# options must begin with \"-\" or \"--\", option name can contain a\n# wildcard (*) to match a number of characters (e.g. 
--totalnum=*),\n# and option arguments must be specified by some arbitrary name\n# bounded by angle brackets, as in \"<val1>\".\n# NOTE: Adding values here requires code to be added to the\n# transform_action() function appearing later in this file.\n#####################################################################\noptions_to_transform=\"\\\n\"\n\n#####################################################################\n# options_to_fail: space-separated list of options that will cause \"pbsrun\"\n# to exit upon encountering a match.\n# options must begin with \"-\" or \"--\", and option name can contain a\n# wildcard (*) to match a number of characters (e.g. --totalnum=*).\n#\n#####################################################################\noptions_to_fail=\"\\\n\"\n\n#####################################################################\n# option_to_configfile: the SINGLE option and value that refers to the\n# name of the \"configfile\" containing command line segments, found\n# in certain versions of mpirun.\n# option must begin with \"-\" or \"--\".\n#\n#####################################################################\noption_to_configfile=\"\\\n\"\n\n#####################################################################\n# options_with_another_form: space-separated list of options and values\n# that can be found in options_to_retain, options_to_ignore, or\n# options_to_transform, whose syntax has an alternate, unsupported\n# form.\n# This usually occurs if a version of mpirun has different forms for\n# the same option.\n# For instance,\n#       MPICH2's mpirun provides:\n#               mpirun -localonly <x>\n#               mpirun -localonly\n# If options_to_retain lists \"-localonly\" as a supported value, then\n# set options_with_another_form=\"-localonly\" as well.\n# This would cause \"pbsrun\" to issue a\n# warning about alternate forms upon encountering the option.\n#\n# options must begin with \"-\" or \"--\", and option name can contain a\n# 
wildcard (*) to match a number of characters (e.g. --totalnum=*).\n#\n#####################################################################\noptions_with_another_form=\"\\\n\"\n\n####################################################################\n# pbs_attach: full path or relative path to the pbs_attach executable.\n#\n#####################################################################\npbs_attach=\"pbs_attach\"\n\n#####################################################################\n# options_to_pbs_attach:  are the special options to pass to the\n# pbs_attach call. You may pass variable references (e.g. $PBS_JOBID)\n# and they will be substituted by pbsrun to actual values.\n#####################################################################\noptions_to_pbs_attach=\"-j $PBS_JOBID\"\n\n################## transform_action() ###################################\n# The action to be performed for each actual item and value matched in\n# options_to_transform.\n# RETURN: echo the replacement item and value for the matched arguments.\n# NOTES:\n# (1) \"echo\" produces the return value of this function;\n#     do not arbitrarily invoke echo statements.\n# (2) Please use 'printf \"%s\" \"<value>\"' (must quote the <value>) in place\n#     of \"echo\" command here since we're returning an option, and 'echo' is\n#     notorious for the following behavior:\n#               echo \"-n\"               --> prints empty (-n is an echo opt)\n#     Desired behavior:\n#               printf \"%s\" \"-n\"        --> prints \"-n\"\n#\n########################################################################\ntransform_action () {\n\targs=$*\n\n\tprintf \"%s\" \"$*\"\n}\n\n################## boot_action() ###################################\n# The action to be performed BEFORE calling mpirun.\n# The location of actual mpirun is passed as first argument.\n# RETURN: 0 for success; non-zero otherwise.\n# NOTES:\n# (1) 'return' produces the exit status of this function:\n#     
0 for success; non-zero for failure.\n####################################################################\nboot_action () {\n \tmpirun_location=$1\n\treturn 0\n}\n\n################## evaluate_options_action() #######################\n# The action to be performed on the actual options and values matched\n# in options_to_retain, not including those in options_to_ignore, and\n# those changed arguments in options_to_transform, as well as any\n# other transformation needed on the program name and program arguments.\n#\n# RETURN: echo the list of final arguments and program arguments\n# to be passed on to mpirun command line.\n\n# NOTES:\n# (1) \"echo\" produces the return value of this function;\n#     do not arbitrarily invoke echo statements.\n# (2) Please use 'printf \"%s\" \"<value>\"' (must quote <value>) in place of\n#     \"echo\" command here since we're returning option string, and\n#     'echo' is notorious for the following behavior:\n#               echo \"-n\"               --> prints empty (-n is an echo opt)\n#     Desired behavior:\n#               printf \"%s\" \"-n\"        --> prints \"-n\"\n#\n########################################################################\nevaluate_options_action () {\n\n\targs=$*\n\n\tfound_np=0\n\twhile [ $# -gt 0 ]; do\n          if [ \"XX$1\" = \"XX-np\" ]; then\n\t\tfound_np=1\n\t\tbreak\n\t  fi\n\t  shift\n\tdone\n\n\tif [ $found_np -eq 0 ] ; then\n\t\tusernp=`cat ${PBS_NODEFILE} | wc -l | tr -d ' '`\n\t\targs=\"-np $usernp $args\"\n\tfi\n\n        pbs_machinefile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_machfile$$\"\n        cat -n ${PBS_NODEFILE} | \\\n        sort -k2 | uniq -f1 -c | \\\n        awk '{if ($1 == 1) print $2, $3; else print $2, $3 \":\" $1}'|\\\n        sort -n | awk '{print $2}' > $pbs_machinefile\n\n        args=\"-machinefile $pbs_machinefile $args\"\n\n\tprintf \"%s\" \"$args\"\n}\n\n################## configfile_cmdline_action() #######################\n# If the option_to_configfile (e.g. 
-configfile) is specified in the\n# mpirun command line, then this function gets passed any leading options\n# and values found before option_to_configfile.\n#\n# RETURN: return the actual options and values to be put in before the\n# option_to_configfile parameter. For instance,\n#\treturning \"--totalnum=N --file=Y\" would result in\n# an mpirun command line of:\n#\tmpirun --totalnum=N --file=Y -configfile pbs_config\n#\n########################################################################\nconfigfile_cmdline_action () {\n\targs=$*\n\n        printf \"\"\n\n}\n\n################## configfile_firstline_action () #######################\n# If the option_to_configfile (e.g. -configfile) is specified in the\n# mpirun command line, return here the item that will be put in the\n# FIRST line of the configuration file.\n# This is the place to put the \"-machinefile <filename>\" parameter\n# which determines the process-to-host mappings. Some versions\n# of mpirun (MPICH2, Intel MPI) require that the -machinefile parameter\n# must appear inside the config file and not on the command line.\n########################################################################\nconfigfile_firstline_action () {\n\tprintf \"\"\n}\n\n################## end_action() ########################################\n# The action to be performed AFTER calling mpirun, and also when\n# mpirun wrap script is prematurely interrupted.\n# INPUT: The location of actual mpirun is passed as first argument.\n# RETURN: none\n########################################################################\nend_action () {\n\tmpirun_location=$1\n\tpbs_machinefile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_machfile$$\"\n\trm -f $pbs_machinefile 2>/dev/null\n}\n"
  },
  {
    "path": "src/cmds/scripts/pbsrun.gm_mpd.init.in",
    "content": "#!/bin/sh\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\n### FILE: pbsrun.gm_mpd.init ###\n\n####################################################################\n# strict_pbs: set to 1 if you want the wrapper script to be executed\n#             only if in a PBS environment\n#\n#####################################################################\nstrict_pbs=0\n\n#####################################################################\n# options_to_retain:  space-separated list of options and values that\n# \"pbsrun\" will pass on to the actual mpirun call. options must begin\n# with \"-\" or \"--\", option name can contain a wildcard (*) to match a\n# number of characters (e.g. --totalnum=*), and option arguments must\n# be specified by some arbitrary name bounded by angle brackets,\n# as in \"<val1>\".\n#####################################################################\noptions_to_retain=\"\\\n-np <n> \\\n-s \\\n-h \\\n-g <group_size> \\\n-iI \\\n-l \\\n-1 \\\n-y \\\n-whole \\\n-wdir <dirname> \\\n-jid <jobid> \\\n-jidfile <file> \\\n\"\n\n#####################################################################\n# options_to_ignore:  space-separated list of options and values that\n# \"pbsrun\" will NOT pass on to the actual mpirun call. options must begin\n# with \"-\" or \"--\", option name can contain a wildcard (*) to match a\n# number of characters (e.g. 
--totalnum=*), and option arguments must\n# be specified by some arbitrary name bounded by angle brackets,\n# as in \"<val1>\".\n#####################################################################\noptions_to_ignore=\"\\\n-m <file> \\\n\"\n\n####################################################################\n# options_to_transform:  space-separated list of options and values that\n# \"pbsrun\" will modify before passing on to the actual mpirun call.\n# options must begin with \"-\" or \"--\", option name can contain a\n# wildcard (*) to match a number of characters (e.g. --totalnum=*),\n# and option arguments must be specified by some arbitrary name\n# bounded by angle brackets, as in \"<val1>\".\n# NOTE: Adding values here requires code to be added to the\n# transform_action() function appearing later in this file.\n#####################################################################\noptions_to_transform=\"\\\n\"\n\n#####################################################################\n# options_to_fail: space-separated list of options that will cause \"pbsrun\"\n# to exit upon encountering a match.\n# options must begin with \"-\" or \"--\", and option name can contain a\n# wildcard (*) to match a number of characters (e.g. 
--totalnum=*).\n#\n#####################################################################\noptions_to_fail=\"\\\n\"\n\n#####################################################################\n# option_to_configfile: the SINGLE option and value that refers to the\n# name of the \"configfile\" containing command line segments, found\n# in certain versions of mpirun.\n# option must begin with \"-\" or \"--\".\n#\n#####################################################################\noption_to_configfile=\"\\\n\"\n\n#####################################################################\n# options_with_another_form: space-separated list of options and values\n# that can be found in options_to_retain, options_to_ignore, or\n# options_to_transform, whose syntax has an alternate, unsupported\n# form.\n# This usually occurs if a version of mpirun has different forms for\n# the same option.\n# For instance,\n#       MPICH2's mpirun provides:\n#               mpirun -localonly <x>\n#               mpirun -localonly\n# If options_to_retain lists \"-localonly\" as a supported value, then\n# set options_with_another_form=\"-localonly\" as well.\n# This would cause \"pbsrun\" to issue a\n# warning about alternate forms upon encountering the option.\n#\n# options must begin with \"-\" or \"--\", and option name can contain a\n# wildcard (*) to match a number of characters (e.g. --totalnum=*).\n#\n#####################################################################\noptions_with_another_form=\"\\\n\"\n\n####################################################################\n# pbs_attach: full path or relative path to the pbs_attach executable.\n#\n#####################################################################\npbs_attach=\"pbs_attach\"\n\n#####################################################################\n# options_to_pbs_attach:  are the special options to pass to the\n# pbs_attach call. You may pass variable references (e.g. 
$PBS_JOBID)\n# and they will be substituted by pbsrun to actual values.\n#####################################################################\noptions_to_pbs_attach=\"-j $PBS_JOBID\"\n\n################## transform_action() ###################################\n# The action to be performed for each actual item and value matched in\n# options_to_transform.\n# RETURN: echo the replacement item and value for the matched arguments.\n# NOTES:\n# (1) \"echo\" produces the return value of this function;\n#     do not arbitrarily invoke echo statements.\n# (2) Please use 'printf \"%s\" \"<value>\"' (must quote the <value>) in place\n#     of \"echo\" command here since we're returning an option, and 'echo' is\n#     notorious for the following behavior:\n#               echo \"-n\"               --> prints empty (-n is an echo opt)\n#     Desired behavior:\n#               printf \"%s\" \"-n\"        --> prints \"-n\"\n#\n########################################################################\ntransform_action () {\n\targs=$*\n\n\tprintf \"%s\" \"$args\"\n}\n\n################## boot_action() ###################################\n# The action to be performed BEFORE calling mpirun.\n# The location of actual mpirun is passed as first argument.\n# RETURN: 0 for success; non-zero otherwise.\n# NOTES:\n# (1) 'return' produces the exit status of this function:\n#     0 for success; non-zero for failure.\n####################################################################\nboot_action () {\n\tmpirun_location=$1\n\tpbs_hostsfile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_hosts$$\"\n\n\tif [ \"$RSHCOMMAND\" = \"\" ] ; then\n\t\tRSHCOMMAND=rsh\n\tfi\n\n\tMPD=${mpirun_location}/mpd\n\tMPDKILL=${mpirun_location}/mpdallexit\n\tMPDTRACE=${mpirun_location}/mpdtrace\n\n\t# Get the host.list from PBS and uniq it.\n\tsort -u $PBS_NODEFILE > $pbs_hostsfile\n\n\t# Kill a previous ring\n\trm -f /tmp/mpd.console_${USER}\n\t$MPDKILL >/dev/null 2>/dev/null\n\n\t# Start a ring here : 
(assuming the first host in pbs_hostsfile is here)\n\t# check that the port is valid : 12345\n\n\tport=$($MPD -b -t|tail -1)\n\tif [ $? -ne 0 ] || [ \"x${port}\" = \"x\" ]\n\tthen\n\t\trm -f $pbs_hostsfile\n\t\treturn 1\n\tfi\n\n\t# For all remaining hosts in the pbs_hostsfile, join the ring\n\thostname=`hostname | awk -F. '{print $1}'`\n\tmaster=$(hostname)\n\tfor h in `cat $pbs_hostsfile |grep -vw $hostname`\n\tdo\n\t\t$RSHCOMMAND $h \"$MPDKILL;rm -f /tmp/mpd.console_${USER};$MPD -h $master -p $port -b\" >/dev/null 2>/dev/null\n\tdone\n\n\t# Ring is ready\n\t$MPDTRACE > /dev/null 2>/dev/null\n\tif [ $? -ne 0 ]\n\tthen\n\t\techo \"mpd ring is not ready\"\n\t\trm -f $pbs_hostsfile\n\t\treturn 2\n\tfi\n\n\trm -f $pbs_hostsfile\n\treturn 0\n}\n\n################## evaluate_options_action() #######################\n# The action to be performed on the actual options and values matched\n# in options_to_retain, not including those in options_to_ignore, and\n# those changed arguments in options_to_transform, as well as any\n# other transformation needed on the program name and program arguments.\n#\n# RETURN: echo the list of final arguments and program arguments\n# to be passed on to the mpirun command line.\n\n# NOTES:\n# (1) \"echo\" produces the return value of this function;\n#     do not arbitrarily invoke echo statements.\n# (2) Please use 'printf \"%s\" \"<value>\"' (must quote <value>) in place of\n#     \"echo\" command here since we're returning an option string, and\n#     'echo' is notorious for the following behavior:\n#               echo \"-n\"               --> prints empty (-n is an echo opt)\n#     Desired behavior:\n#               printf \"%s\" \"-n\"        --> prints \"-n\"\n#\n########################################################################\nevaluate_options_action () {\n\targs=$*\n\n\tfound_np=0\n\twhile [ $# -gt 0 ]; do\n\t\tif [ \"XX$1\" = \"XX-np\" ]; then\n\t\t\tfound_np=1\n\t\t\tbreak\n\t\tfi\n\t\tshift\n\tdone\n\n\tif [ 
$found_np -eq 0 ] ; then\n\t\tusernp=`cat ${PBS_NODEFILE} | wc -l | tr -d ' '`\n\t\targs=\"-np $usernp $args\"\n\tfi\n\n        pbs_machinefile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_machfile$$\"\n        cat -n ${PBS_NODEFILE} | \\\n        sort -k2 | uniq -f1 -c | \\\n        awk '{if ($1 == 1) print $2, $3; else print $2, $3 \":\" $1}'|\\\n        sort -n | awk '{print $2}' > $pbs_machinefile\n\n        args=\"-m $pbs_machinefile $args\"\n\n\tprintf \"%s\" \"$args\"\n\n}\n\n################## configfile_cmdline_action() #######################\n# If the option_to_configfile (e.g. -configfile) is specified in the\n# mpirun command line, then this function gets passed any leading options\n# and values found before option_to_configfile.\n#\n# RETURN: return the actual options and values to be put in before the\n# option_to_configfile parameter. For instance,\n#\treturning \"--totalnum=N --file=Y\" would result in\n# an mpirun command line of:\n#\tmpirun --totalnum=N --file=Y -configfile pbs_config\n#\n########################################################################\nconfigfile_cmdline_action () {\n\targs=$*\n\n        printf \"\"\n\n}\n\n################## configfile_firstline_action () #######################\n# If the option_to_configfile (e.g. -configfile) is specified in the\n# mpirun command line, return here the item that will be put in the\n# FIRST line of the configuration file.\n# This is the place to put the \"-machinefile <filename>\" parameter\n# which determines the process-to-host mappings. 
Some versions\n# of mpirun (MPICH2, Intel MPI) require that the -machinefile parameter\n# must appear inside the config file and not on the command line.\n########################################################################\nconfigfile_firstline_action () {\n\tprintf \"\"\n}\n\n################## end_action() ########################################\n# The action to be performed AFTER calling mpirun, and also when the\n# mpirun wrap script is prematurely interrupted.\n# INPUT: The location of actual mpirun is passed as first argument.\n# RETURN: none\n########################################################################\nend_action () {\n\tmpirun_location=$1\n\n\tpbs_hostsfile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_hosts$$\"\n\tpbs_machinefile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_machfile$$\"\n\n\trm -f $pbs_hostsfile\n\trm -f $pbs_machinefile 2>/dev/null\n\n\t${mpirun_location}/mpdallexit >/dev/null 2>/dev/null\n\n}\n"
  },
  {
    "path": "src/cmds/scripts/pbsrun.intelmpi.init.in",
    "content": "#!/bin/sh\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\n### FILE: pbsrun.intelmpi.init ###\n\n####################################################################\n# strict_pbs: set to 1 if you want the wrapper script to be executed\n#             only if in a PBS environment\n#\n#####################################################################\nstrict_pbs=0\n\n#####################################################################\n# options_to_retain:  space-separted list of options and values that\n# \"pbsrun\" will pass on to the actual mpirun call. options must begin\n# with \"-\" or \"--\", option name can contain a wildcard (*) to match a\n# number of characters (e.g. 
--totalnum=*), and option arguments must\n# be specified by some arbitrary name bounded by angle brackets,\n# as in \"<val1>\".\n#####################################################################\noptions_to_retain=\"\\\n-h \\\n--help \\\n-help \\\n-nolocal \\\n-perhost <n> \\\n-tv \\\n-n <n> \\\n-np <n> \\\n-wdir <directory> \\\n-path <directory> \\\n-soft <spec> \\\n-arch <arch> \\\n-env <envvar> <value> \\\n-envall \\\n-envnone \\\n-envlist <list_of_env_var_names> \\\n-version \\\n-V \\\n-genv <envvar> <value> \\\n-genvnone \\\n-gn <n> \\\n-gnp <n> \\\n-gwdir <directory> \\\n-gpath <directory> \\\n-gsoft <spec> \\\n-garch <arch> \\\n-genv <envvar> <value> \\\n-genvall \\\n-genvnone \\\n-genvlist <list_of_env_var_names> \\\n-gversion \\\n-gV \\\n-l \\\n-if \\\n-kx \\\n-bnr \\\n-d \\\n-v \\\n--help \\\n--rsh=* \\\n--user=* \\\n--mpd=* \\\n--loccons \\\n--remcons \\\n--shell \\\n--verbose \\\n-1 \\\n--ncpus=* \\\n-r <rshcmd> \\\n-u <user> \\\n-m <mpdcmd> \\\n-file <XML_job_description> \\\n-s <spec> \\\n\"\n\n#####################################################################\n# options_to_ignore:  space-separted list of options and values that\n# \"pbsrun\" will NOT pass on to the actual mpirun call. options must begin\n# with \"-\" or \"--\", option name can contain a wildcard (*) to match a\n# number of characters (e.g. 
--totalnum=*), and option arguments must\n# be specified by some arbitrary name bounded by angle brackets,\n# as in \"<val1>\".\n#####################################################################\noptions_to_ignore=\"\\\n-host <hostname> \\\n-ghost <hostname> \\\n--totalnum=* \\\n--file=* \\\n-f <mpd_hosts_file> \\\n-machinefile <file> \\\n\"\n\n#####################################################################\n# options_to_transform:  space-separated list of options and values that\n# \"pbsrun\" will modify before passing on to the actual mpirun call.\n# options must begin with \"-\" or \"--\", option name can contain a\n# wildcard (*) to match a number of characters (e.g. --totalnum=*),\n# and option arguments must be specified by some arbitrary name\n# bounded by angle brackets, as in \"<val1>\".\n# NOTE: Adding values here requires code to be added to the\n# transform_action() function appearing later in this file.\n#####################################################################\noptions_to_transform=\"\\\n\"\n\n#####################################################################\n# options_to_fail: space-separated list of options that will cause \"pbsrun\"\n# to exit upon encountering a match.\n# options must begin with \"-\" or \"--\", and option name can contain a\n# wildcard (*) to match a number of characters (e.g. 
--totalnum=*).\n#\n#####################################################################\noptions_to_fail=\"\\\n\"\n\n#####################################################################\n# option_to_configfile: the SINGLE option and value that refers to the\n# name of the \"configfile\" containing command line segments, found\n# in certain versions of mpirun.\n# option must begin with \"-\" or \"--\".\n#\n#####################################################################\noption_to_configfile=\"\\\n-configfile <configfile> \\\n\"\n#####################################################################\n# options_with_another_form: space-separated list of options and values\n# that can be found in options_to_retain, options_to_ignore, or\n# options_to_transform, whose syntax has an alternate, unsupported\n# form.\n# This usually occurs if a version of mpirun has different forms for\n# the same option.\n# For instance,\n#       MPICH2's mpirun provides:\n#               mpirun -localonly <x>\n#               mpirun -localonly\n# If options_to_retain lists \"-localonly\" as supported value, then\n# set options_with_another_form=\"-localonly\" as well.\n# This would cause \"pbsrun\" to issue a\n# warning about alternate forms upon encountering the  option.\n#\n# options must begin with \"-\" or \"--\", and option name can contain a\n# wildcard (*) to match a number of characters (e.g. --totalnum=*).\n#\n#####################################################################\n\noptions_with_another_form=\"\\\n-s <spec> \\\n\"\n\n#####################################################################\n# pbs_attach: full path or relative path to the pbs_attach executable.\n#\n#####################################################################\npbs_attach=$PBS_EXEC/bin/pbs_attach\n\n#####################################################################\n# options_to_pbs_attach:  are the special options to pass to the\n# pbs_attach call. 
You may pass variable references (e.g. $PBS_JOBID)\n# and they will be substituted by pbsrun to actual values.\n#####################################################################\noptions_to_pbs_attach=\"-j $PBS_JOBID -P\"\n\n\n################## transform_action() ###################################\n# The action to be performed for each actual item and value matched in\n# options_to_transform.\n# RETURN: echo the replacement item and value for the matched arguments.\n# NOTES:\n# (1) \"echo\" produces the return value of this function;\n#     do not arbitrarily invoke echo statements.\n# (2) Please use 'printf \"%s\" \"<value>\"' (must quote the <value>) in place\n#     of \"echo\" command here since we're returning an option, and 'echo' is\n#     notorious for the following behavior:\n#               echo \"-n\"               --> prints empty (-n is an echo opt)\n#     Desired behavior:\n#               printf \"%s\" \"-n\"        --> prints \"-n\"\n#\n########################################################################\ntransform_action () {\n    args=$*\n    printf \"%s\" \"$args\"\n}\n\n################## boot_action() ###################################\n# The action to be performed BEFORE calling mpirun.\n# The location of actual mpirun is passed as first argument.\n# RETURN: 0 for success; non-zero otherwise.\n# NOTES:\n# (1) 'return' produces the exit status of this function:\n#     0 for success; non-zero for failure.\n####################################################################\nboot_action () {\n\n\tmpirun_location=$1\n\n\treturn 0\n}\n\n# Check if version is older than \"4.0.3\"\n# awk exit 0 means true (version is older)\n# awk exit 1 means false (version is equal or newer)\nver_older () {\n\tfile=\"/tmp/ckver$$\"\n\tcat <<-EOF > $file\n\t{\n\t\tif (NF < 3)\n\t\t\texit 0;\n\t\tif (\\$1 < 4)\n\t\t\texit 0;\n\t\tif (\\$1 > 4)\n\t\t\texit 1;\n\t\tif (\\$2 > 0)\n\t\t\texit 1;\n\t\tif (\\$3 < 3)\n\t\t\texit 0;\n\t\texit 
1;\n\t}\n\tEOF\n\techo $1 | awk -F. -f $file\n\tret=$?\n\trm $file\n\treturn $ret\n}\n\n# get mpirun version and see if it is at least 4.0.3\nhydra_supported () {\n\tfile=\"/tmp/getver$$\"\n\tcat <<-EOF > $file\n\tBEGIN {\n\t\tver = \"0.0\";\n\t\tupdate = \"0\";\n\t}\n\n\t{\n\t\tfor(i=1;i<=NF;i++) {\n\t\t\tif (\\$i == \"Version\" && i<NF)\n\t\t\t\tver = \\$(i+1);\n\t\t\tif (\\$i == \"Update\" && i<NF)\n\t\t\t\tupdate = \\$(i+1);\n\t\t}\n\t}\n\n\tEND {\n\t\tprintf(\"%s.%s\\n\", ver, update);\n\t}\n\tEOF\n\n\tver=`$mpirun -V |awk -f $file`\n\trm $file\n\n\tif ver_older $ver ;then\n\t\treturn 1\n\telse\n\t\treturn 0\t# hydra supported\n\tfi\n}\n\n################## evaluate_options_action() #######################\n# The action to be performed on the actual options and values matched\n# in options_to_retain, not including those in options_to_ignore, and\n# those changed arguments in options_to_transform, as well as any\n# other transformation needed on the program name and program arguments.\n#\n# RETURN: echo the list of final arguments and program arguments\n# to be passed on to mpirun command line.\n\n# NOTES:\n# (1) \"echo\" produces the return value of this function;\n#     do not arbitrarily invoke echo statements.\n# (2) Please use 'printf \"%s\" \"<value>\"' (must quote <value>) in place of\n#     \"echo\" command here since we're returning option string, and\n#     'echo' is notorius for the following behavior:\n#               echo \"-n\"               --> prints empty (-n is an echo opt)\n#     Desired behavior:\n#               printf \"%s\" \"-n\"        --> prints \"-n\"\n#\n########################################################################\nevaluate_options_action () {\n\n# \tSpecial handling of --rsh and -r because they have to\n#\tbe added before other options.\n\tfirst=\"\"\n\targs=\"\"\n\n\twhile [ $# -gt 0 ] ; do\n\t\tcase \"$1\" in\n\t\t$pbs_attach)\n\t\t\targs=\"$args $*\"\n\t\t\tbreak\n\t\t\t;;\n\n\t\t-r|-u|-m)\n\t\t\tfirst=\"$first $1 
$2\"\n\t\t\tshift\n\t\t\t;;\n\n\t\t--rsh=*|--user=*|--mpd=*)\n\t\t\tfirst=\"$first $1\"\n\t\t\t;;\n\n\t\t*)\n\t\t\targs=\"$args $1\"\n\t\t\t;;\n\t\tesac\n\t\tshift\n\tdone\n\n\tpbs_hostsfile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_hosts$$\"\n\tpbs_machinefile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_machfile$$\"\n\n\tsort -u $PBS_NODEFILE > $pbs_hostsfile\n\tnum_mpds=`wc -l $pbs_hostsfile | awk '{print $1}'`\n\n\tcat -n ${PBS_NODEFILE} | \\\n\t\tsort -k2 | uniq -f1 -c | \\\n\t\tawk '{if ($1 == 1) print $2, $3; else print $2, $3 \":\" $1}'|\\\n\t\tsort -n | awk '{print $2}' > $pbs_machinefile\n\n\tif [ \"x$I_MPI_PROCESS_MANAGER\" != \"xmpd\" -a \\\n\t\t\t\t\"x$I_MPI_PROCESS_MANAGER\" != \"xMPD\" ] && hydra_supported ;then\n\t\tprintf \"%s\" \"-hostfile=$pbs_hostsfile $first -machinefile $pbs_machinefile $args\"\n\telse\n\t\tprintf \"%s\" \"--totalnum=$num_mpds --file=$pbs_hostsfile $first -machinefile $pbs_machinefile $args\"\n\tfi\n}\n\n\n################## configfile_cmdline_action() #######################\n# If the option_to_configfile (e.g. -configfile) is specified in the\n# mpirun command line, then this function gets passed any leading options\n# and values found before option_to_configfile.\n#\n# RETURN: return the actual options and values to be put in before the\n# option_to_configfile parameter. 
For instance,\n#\treturning \"--totalnum=N --file=Y\" would result in\n# an mpirun command line of:\n#\tmpirun --totalnum=N --file=Y -configfile pbs_config\n#\n########################################################################\nconfigfile_cmdline_action () {\n\n# \tSpecial handling of --rsh and -r because they have to\n#\tbe added before other options.\n\tfirst=\"\"\n\targs=\"\"\n\n\twhile [ $# -gt 0 ] ; do\n\t\tcase \"$1\" in\n\t\t$pbs_attach)\n\t\t\targs=\"$args $*\"\n\t\t\tbreak\n\t\t\t;;\n\n\t\t-r|-u|-m)\n\t\t\tfirst=\"$first $1 $2\"\n\t\t\tshift\n\t\t\t;;\n\n\t\t--rsh=*|--user=*|--mpd=*)\n\t\t\tfirst=\"$first $1\"\n\t\t\t;;\n\n\t\t*)\n\t\t\targs=\"$args $1\"\n\t\t\t;;\n\t\tesac\n\t\tshift\n\tdone\n\n\tpbs_hostsfile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_hosts$$\"\n\n\tsort -u $PBS_NODEFILE > $pbs_hostsfile\n\tnum_mpds=`wc -l $pbs_hostsfile | awk '{print $1}'`\n\n\tif [ \"x$I_MPI_PROCESS_MANAGER\" != \"xmpd\" -a \\\n\t\t\t\t\"x$I_MPI_PROCESS_MANAGER\" != \"xMPD\" ] && hydra_supported ;then\n\t\tprintf \"%s\" \"-hostfile=$pbs_hostsfile $first $args\"\n\telse\n\t\tprintf \"%s\" \"--totalnum=$num_mpds --file=$pbs_hostsfile $first $args\"\n\tfi\n}\n\n################## configfile_firstline_action () #######################\n# If the option_to_configfile (e.g. -configfile) is specified in the\n# mpirun command line, return here the item that will be put in the\n# FIRST line of the configuration file.\n# This is the place to put the \"-machinefile <filename>\" parameter\n# which determines the process-to-host mappings. 
Some versions\n# of mpirun (MPICH2, Intel MPI) require that the -machinefile parameter\n# must appear inside the config file and not on the command line.\n########################################################################\nconfigfile_firstline_action () {\n\tpbs_machinefile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_machfile$$\"\n\n        cat -n ${PBS_NODEFILE} | \\\n        sort -k2 | uniq -f1 -c | \\\n        awk '{if ($1 == 1) print $2, $3; else print $2, $3 \":\" $1}'|\\\n        sort -n | awk '{print $2}' > $pbs_machinefile\n\n        printf \"%s\" \"-machinefile $pbs_machinefile\"\n}\n\n################## end_action() ########################################\n# The action to be performed AFTER calling mpirun, and also when the\n# mpirun wrap script is prematurely interrupted.\n# INPUT: The location of actual mpirun is passed as first argument.\n# RETURN: none\n########################################################################\nend_action () {\n\n    mpirun_location=$1\n\n    pbs_hostsfile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_hosts$$\"\n    pbs_machinefile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_machfile$$\"\n\n    rm -f $pbs_hostsfile\n    rm -f $pbs_machinefile\n}\n"
  },
  {
    "path": "src/cmds/scripts/pbsrun.mpich2.init.in",
    "content": "#!/bin/sh\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\n### FILE: pbsrun.mpich2.init ###\n\n####################################################################\n# strict_pbs: set to 1 if you want the wrapper script to be executed\n#             only if in a PBS environment\n#\n#####################################################################\nstrict_pbs=0\n\n#####################################################################\n# options_to_retain:  space-separted list of options and values that\n# \"pbsrun\" will pass on to the actual mpirun call. options must begin\n# with \"-\" or \"--\", option name can contain a wildcard (*) to match a\n# number of characters (e.g. 
--totalnum=*), and option arguments must\n# be specified by some arbitrary name bounded by angle brackets,\n# as in \"<val1>\".\n#####################################################################\noptions_to_retain=\"\\\n-h \\\n-help \\\n--help \\\n-n <x> \\\n-np <x> \\\n-path <search_path> \\\n-wdir <dirname> \\\n-file <filename> \\\n-soft <spec> \\\n-arch <arch> \\\n-env <var> <val> \\\n-envall \\\n-envlist <env_var_names> \\\n-envnone \\\n-gn <x> \\\n-gnp <x> \\\n-gwdir <dirname> \\\n-gsoft <spec> \\\n-garch <arch> \\\n-genv <var> <val> \\\n-genvall \\\n-genvlist <env_var_names> \\\n-genvnone \\\n-gpath <search_path> \\\n-gumask <umask> \\\n-l \\\n-1 \\\n-ifhn \\\n-tv \\\n-usize \\\n-bnr \\\n-s <spec> \\\n-m \\\n-a \\\n-ecfn \\\n-kx \\\n-help2 \\\n-dir <drive_path> \\\n-exitcodes \\\n-noprompt \\\n-localroot \\\n-port <port> \\\n-p <port> \\\n-phrase <passphrase> \\\n-smpdfile <filename> \\\n-timeout <seconds>\n-map <drive_path> \\\n-logon \\\n-pwdfile <filename> \\\n-nopopup_debug  \\\n-priority <class_level> \\\n-register \\\n-remove \\\n-validate \\\n-delegate \\\n-impersonate \\\n-plaintext \\\n-maxtime <seconds> \\\n-gdb \\\n-umask <umask> \\\n-gdba <jobid> \\\n-localonly \\\n\"\n\n#####################################################################\n# options_to_ignore:  space-separted list of options and values that\n# \"pbsrun\" will NOT pass on to the actual mpirun call. options must begin\n# with \"-\" or \"--\", option name can contain a wildcard (*) to match a\n# number of characters (e.g. 
--totalnum=*), and option arguments must\n# be specified by some arbitrary name bounded by angle brackets,\n# as in \"<val1>\".\n#####################################################################\noptions_to_ignore=\"\\\n-host <hostname> \\\n-ghost <hostname> \\\n-machinefile <file> \\\n\"\n\n#####################################################################\n# options_to_transform:  space-separated list of options and values that\n# \"pbsrun\" will modify before passing on to the actual mpirun call.\n# options must begin with \"-\" or \"--\", option name can contain a\n# wildcard (*) to match a number of characters (e.g. --totalnum=*),\n# and option arguments must be specified by some arbitrary name\n# bounded by angle brackets, as in \"<val1>\".\n# NOTE: Adding values here requires code to be added to the\n# transform_action() function appearing later in this file.\n#####################################################################\noptions_to_transform=\"\\\n\"\n\n#####################################################################\n# options_to_fail: space-separated list of options that will cause \"pbsrun\"\n# to exit upon encountering a match.\n# options must begin with \"-\" or \"--\", and option name can contain a\n# wildcard (*) to match a number of characters (e.g. 
--totalnum=*).\n#\n#####################################################################\noptions_to_fail=\"\\\n-hosts \\\n-ghosts \\\n\"\n\n#####################################################################\n# option_to_configfile: the SINGLE option and value that refers to the\n# name of the \"configfile\" containing command line segments, found\n# in certain versions of mpirun.\n# option must begin with \"-\" or \"--\".\n#\n#####################################################################\noption_to_configfile=\"\\\n-configfile <configfile> \\\n\"\n\n#####################################################################\n# options_with_another_form: space-separated list of options and values\n# that can be found in options_to_retain, options_to_ignore, or\n# options_to_transform, whose syntax has an alternate, unsupported\n# form.\n# This usually occurs if a version of mpirun has different forms for\n# the same option.\n# For instance,\n#       MPICH2's mpirun provides:\n#               mpirun -localonly <x>\n#               mpirun -localonly\n# If options_to_retain lists \"-localonly\" as supported value, then\n# set options_with_another_form=\"-localonly\" as well.\n# This would cause \"pbsrun\" to issue a\n# warning about alternate forms upon encountering the  option.\n#\n# options must begin with \"-\" or \"--\", and option name can contain a\n# wildcard (*) to match a number of characters (e.g. --totalnum=*).\n#\n#####################################################################\noptions_with_another_form=\"\\\n-localonly \\\n\"\n\n#####################################################################\n# pbs_attach: full path or relative path to the pbs_attach executable.\n#\n#####################################################################\npbs_attach=pbs_attach\n\n#####################################################################\n# options_to_pbs_attach:  are the special options to pass to the\n# pbs_attach call. 
You may pass variable references (e.g. $PBS_JOBID)\n# and they will be substituted by pbsrun to actual values.\n#####################################################################\noptions_to_pbs_attach=\"-j $PBS_JOBID -s -P\"\n\n\n################## transform_action() ###################################\n# The action to be performed for each actual item and value matched in\n# options_to_transform.\n# RETURN: echo the replacement item and value for the matched arguments.\n# NOTES:\n# (1) \"echo\" produces the return value of this function;\n#     do not arbitrarily invoke echo statements.\n# (2) Please use 'printf \"%s\" \"<value>\"' (must quote the <value>) in place\n#     of \"echo\" command here since we're returning an option, and 'echo' is\n#     notorious for the following behavior:\n#               echo \"-n\"               --> prints empty (-n is an echo opt)\n#     Desired behavior:\n#               printf \"%s\" \"-n\"        --> prints \"-n\"\n#\n########################################################################\ntransform_action () {\n    args=$*\n    printf \"%s\" \"$args\"\n}\n\n################## boot_action() ###################################\n# The action to be performed BEFORE calling mpirun.\n# The location of actual mpirun is passed as first argument.\n# RETURN: 0 for success; non-zero otherwise.\n# NOTES:\n# (1) 'return' produces the exit status of this function:\n#     0 for success; non-zero for failure.\n####################################################################\nboot_action () {\n\n\tmpirun_location=$1\n\n\t# Get the host.list from PBS and uniq it.\n\tpbs_hostsfile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_hosts$$\"\n\tsort -u $PBS_NODEFILE > $pbs_hostsfile\n\tnum_mpds=`wc -l $pbs_hostsfile | awk '{print $1}'`\n\t$mpirun_location/mpdboot -n $num_mpds --file=$pbs_hostsfile\n\treturn 0\n}\n\n\n################## evaluate_options_action() #######################\n# The action to be performed on the actual options and 
values matched\n# in options_to_retain, not including those in options_to_ignore, and\n# those changed arguments in options_to_transform, as well as any\n# other transformation needed on the program name and program arguments.\n#\n# RETURN: echo the list of final arguments and program arguments\n# to be passed on to the mpirun command line.\n\n# NOTES:\n# (1) \"echo\" produces the return value of this function;\n#     do not arbitrarily invoke echo statements.\n# (2) Please use 'printf \"%s\" \"<value>\"' (must quote <value>) in place of\n#     \"echo\" command here since we're returning an option string, and\n#     'echo' is notorious for the following behavior:\n#               echo \"-n\"               --> prints empty (-n is an echo opt)\n#     Desired behavior:\n#               printf \"%s\" \"-n\"        --> prints \"-n\"\n#\n########################################################################\nevaluate_options_action () {\n\targs=$*\n\n\tpbs_machinefile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_machfile$$\"\n        cat -n ${PBS_NODEFILE} | \\\n        sort -k2 | uniq -f1 -c | \\\n        awk '{if ($1 == 1) print $2, $3; else print $2, $3 \":\" $1}'|\\\n        sort -n | awk '{print $2}' > $pbs_machinefile\n\n        printf \"%s\" \"-machinefile $pbs_machinefile $args\"\n}\n\n\n################## configfile_cmdline_action() #######################\n# If the option_to_configfile (e.g. -configfile) is specified in the\n# mpirun command line, then this function gets passed any leading options\n# and values found before option_to_configfile.\n#\n# RETURN: return the actual options and values to be put in before the\n# option_to_configfile parameter. 
For instance,\n#\treturning \"--totalnum=N --file=Y\" would result in\n# an mpirun command line of:\n#\tmpirun --totalnum=N --file=Y -configfile pbs_config\n#\n########################################################################\nconfigfile_cmdline_action () {\n\targs=$*\n\n        printf \"%s\" \"$args\"\n}\n\n################## configfile_firstline_action () #######################\n# If the option_to_configfile (e.g. -configfile) is specified in the\n# mpirun command line, return here the item that will be put in the\n# FIRST line of the configuration file.\n# This is the place to put the \"-machinefile <filename>\" parameter\n# which determines the process-to-host mappings. Some versions\n# of mpirun (MPICH2, Intel MPI) require that the -machinefile parameter\n# appear inside the config file and not on the command line.\n########################################################################\nconfigfile_firstline_action () {\n\tpbs_machinefile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_machfile$$\"\n\n        cat -n ${PBS_NODEFILE} | \\\n        sort -k2 | uniq -f1 -c | \\\n        awk '{if ($1 == 1) print $2, $3; else print $2, $3 \":\" $1}' | \\\n        sort -n | awk '{print $2}' > $pbs_machinefile\n\n        printf \"%s\" \"-machinefile $pbs_machinefile\"\n}\n\n################## end_action() ########################################\n# The action to be performed AFTER calling mpirun, and also when\n# the mpirun wrapper script is prematurely interrupted.\n# INPUT: The location of actual mpirun is passed as first argument.\n# RETURN: none\n########################################################################\nend_action () {\n\n    mpirun_location=$1\n\n    pbs_hostsfile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_hosts$$\"\n    pbs_machinefile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_machfile$$\"\n    $mpirun_location/mpdallexit >/dev/null 2>/dev/null\n\n    rm -f $pbs_hostsfile\n    rm -f $pbs_machinefile\n}\n"
  },
  {
    "path": "src/cmds/scripts/pbsrun.mvapich1.init.in",
    "content": "#!/bin/sh\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\n### FILE: pbsrun_mvapich1.init ###\n\n####################################################################\n# strict_pbs: set to 1 if you want the wrapper script to be executed\n#             only if in a PBS environment\n#\n#####################################################################\nstrict_pbs=0\n\nif [ \"${PBS_NODEFILE:-XX}\" != \"XX\" ]; then\n#\tsave any value of P4_RSHCOMMAND\n\tif [ -n \"$P4_RSHCOMMAND\" -a -z \"$PBS_RSHCOMMAND\" ]; then\n\t\texport PBS_RSHCOMMAND=$P4_RSHCOMMAND\n\tfi\n\n#\tset P4_RSHCOMMAND to pbs_remsh\n\texport P4_RSHCOMMAND=\"$PBS_EXEC/bin/pbs_remsh\"\nfi\n\n#####################################################################\n# options_to_retain:  space-separted list of options and values that\n# \"pbsrun\" will pass on to the actual mpirun call. options must begin\n# with \"-\" or \"--\", option name can contain a wildcard (*) to match a\n# number of characters (e.g. 
--totalnum=*), and option arguments must\n# be specified by some arbitrary name bounded by angle brackets,\n# as in \"<val1>\".\n#####################################################################\noptions_to_retain=\"\\\n-h \\\n-v \\\n-t \\\n-ksq \\\n-dbg=<debugger> \\\n-stdin <file> \\\n-all-local \\\n-machine <machinename> \\\n-machinedir <directory> \\\n-np <n> \\\n\"\n\n#####################################################################\n# options_to_ignore:  space-separated list of options and values that\n# \"pbsrun\" will NOT pass on to the actual mpirun call. options must begin\n# with \"-\" or \"--\", option name can contain a wildcard (*) to match a\n# number of characters (e.g. --totalnum=*), and option arguments must\n# be specified by some arbitrary name bounded by angle brackets,\n# as in \"<val1>\".\n#####################################################################\noptions_to_ignore=\"\\\n-machinefile <file> \\\n-exclude <list> \\\n-map <list> \\\n\"\n\n#####################################################################\n# options_to_transform:  space-separated list of options and values that\n# \"pbsrun\" will modify before passing on to the actual mpirun call.\n# options must begin with \"-\" or \"--\", option name can contain a\n# wildcard (*) to match a number of characters (e.g. --totalnum=*),\n# and option arguments must be specified by some arbitrary name\n# bounded by angle brackets, as in \"<val1>\".\n# NOTE: Adding values here requires code to be added to\n# the transform_action() function appearing later in this file.\n#####################################################################\noptions_to_transform=\"\\\n\"\n\n#####################################################################\n# options_to_fail: space-separated list of options that will cause \"pbsrun\"\n# to exit upon encountering a match.\n# options must begin with \"-\" or \"--\", and option name can contain a\n# wildcard (*) to match a number of characters (e.g. 
--totalnum=*).\n#\n#####################################################################\noptions_to_fail=\"\\\n\"\n\n#####################################################################\n# option_to_configfile: the SINGLE option and value that refers to the\n# name of the \"configfile\" containing command line segments, found\n# in certain versions of mpirun.\n# option must begin with \"-\" or \"--\".\n#\n#####################################################################\noption_to_configfile=\"\\\n\"\n\n#####################################################################\n# options_with_another_form: space-separated list of options and values\n# that can be found in options_to_retain, options_to_ignore, or\n# options_to_transform, whose syntax has an alternate, unsupported\n# form.\n# This usually occurs if a version of mpirun has different forms for\n# the same option.\n# For instance,\n#       MPICH2's mpirun provides:\n#               mpirun -localonly <x>\n#               mpirun -localonly\n# If options_to_retain lists \"-localonly\" as supported value, then\n# set options_with_another_form=\"-localonly\" as well.\n# This would cause \"pbsrun\" to issue a\n# warning about alternate forms upon encountering the  option.\n#\n# options must begin with \"-\" or \"--\", and option name can contain a\n# wildcard (*) to match a number of characters (e.g. --totalnum=*).\n#\n#####################################################################\noptions_with_another_form=\"\\\n\"\n\n####################################################################\n# pbs_attach: full path or relative path to the pbs_attach executable.\n#\n# In mvapich1, the full path seems to be needed, otherwise the\n# parse of the command line fails.\n#####################################################################\npbs_attach=\"\"\n\n#####################################################################\n# options_to_pbs_attach:  are the special options to pass to the\n# pbs_attach call. 
You may pass variable references (e.g. $PBS_JOBID)\n# and they will be substituted by pbsrun to actual values.\n#####################################################################\noptions_to_pbs_attach=\"\"\n\n################## transform_action() ###################################\n# The action to be performed for each actual item and value matched in\n# options_to_transform.\n# RETURN: echo the replacement item and value for the matched arguments.\n# NOTES:\n# (1) \"echo\" produces the return value of this function;\n#     do not arbitrarily invoke echo statements.\n# (2) Please use 'printf \"%s\" \"<value>\"' (must quote the <value>) in place\n#     of \"echo\" command here since we're returning an option, and 'echo' is\n#     notorious for the following behavior:\n#\t\techo \"-n\"      \t\t--> prints empty (-n is an echo opt)\n#     Desired behavior:\n#\t\tprintf \"%s\" \"-n\" \t--> prints \"-n\"\n#\n########################################################################\ntransform_action () {\n\targs=$*\n\n\tprintf \"%s\" \"$*\"\n}\n\n################## boot_action() ###################################\n# The action to be performed BEFORE calling mpirun.\n# The location of actual mpirun is passed as first argument.\n# RETURN: 0 for success; non-zero otherwise.\n# NOTES:\n# (1) 'return' produces the exit status of this function:\n#     0 for success; non-zero for failure.\n####################################################################\nboot_action () {\n\tmpirun_location=$1\n\treturn 0\n}\n\n################## evaluate_options_action() #######################\n# The action to be performed on the actual options and values matched\n# in options_to_retain, not including those in options_to_ignore, and\n# those changed arguments in options_to_transform, as well as any\n# other transformation needed on the program name and program arguments.\n#\n# RETURN: echo the list of final arguments and program arguments\n# to be passed on to mpirun command 
line.\n\n# NOTES:\n# (1) \"echo\" produces the return value of this function;\n#     do not arbitrarily invoke echo statements.\n# (2) Please use 'printf \"%s\" \"<value>\"' (must quote <value>) in place of\n#     \"echo\" command here since we're returning option string, and\n#     'echo' is notorious for the following behavior:\n#\t\techo \"-n\"               --> prints empty (-n is an echo opt)\n#     Desired behavior:\n# \t\tprintf \"%s\" \"-n\"        --> prints \"-n\"\n#\n########################################################################\nevaluate_options_action () {\n\targs=$*\n\n\tfound_np=0\n\twhile [ $# -gt 0 ]; do\n          if [ \"XX$1\" = \"XX-np\" ]; then\n\t\tfound_np=1\n\t\tbreak\n\t  fi\n\t  shift\n\tdone\n\n\tif [ $found_np -eq 0 ] ; then\n\t\tusernp=`cat ${PBS_NODEFILE} | wc -l | tr -d ' '`\n\t\targs=\"-np $usernp $args\"\n\tfi\n\n        pbs_machinefile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_machfile$$\"\n        cat ${PBS_NODEFILE} > $pbs_machinefile\n\n        args=\"-machinefile $pbs_machinefile $args\"\n\n\n\tprintf \"%s\" \"$args\"\n\n}\n\n################## configfile_cmdline_action() #######################\n# If the option_to_configfile (e.g. -configfile) is specified in the\n# mpirun command line, then this function gets passed any leading options\n# and values found before option_to_configfile.\n#\n# RETURN: return the actual options and values to be put in before the\n# option_to_configfile parameter. For instance,\n#\treturning \"--totalnum=N --file=Y\" would result in\n# an mpirun command line of:\n#\tmpirun --totalnum=N --file=Y -configfile pbs_config\n#\n########################################################################\nconfigfile_cmdline_action () {\n\targs=$*\n\n        printf \"\"\n\n}\n\n################## configfile_firstline_action () #######################\n# If the option_to_configfile (e.g. 
-configfile) is specified in the\n# mpirun command line, return here the item that will be put in the\n# FIRST line of the configuration file.\n# This is the place to put the \"-machinefile <filename>\" parameter\n# which determines the process-to-host mappings. Some versions\n# of mpirun (MPICH2, Intel MPI) require that the -machinefile parameter\n# appear inside the config file and not on the command line.\n########################################################################\nconfigfile_firstline_action () {\n\tprintf \"\"\n}\n\n################## end_action() ########################################\n# The action to be performed AFTER calling mpirun, and also when\n# the mpirun wrapper script is prematurely interrupted.\n# INPUT: The location of actual mpirun is passed as first argument.\n# RETURN: none\n########################################################################\nend_action () {\n\tmpirun_location=$1\n\tpbs_machinefile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_machfile$$\"\n\trm -f $pbs_machinefile 2>/dev/null\n}\n"
  },
  {
    "path": "src/cmds/scripts/pbsrun.mvapich2.init.in",
    "content": "#!/bin/sh\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\n### FILE: pbsrun.mvapich2.init ###\n\n####################################################################\n# strict_pbs: set to 1 if you want the wrapper script to be executed\n#             only if in a PBS environment\n#\n#####################################################################\nstrict_pbs=0\n\n#####################################################################\n# options_to_retain:  space-separted list of options and values that\n# \"pbsrun\" will pass on to the actual mpirun call. options must begin\n# with \"-\" or \"--\", option name can contain a wildcard (*) to match a\n# number of characters (e.g. 
--totalnum=*), and option arguments must\n# be specified by some arbitrary name bounded by angle brackets,\n# as in \"<val1>\".\n#####################################################################\noptions_to_retain=\"\\\n-l \\\n-bnr \\\n-s <spec> \\\n-1 \\\n-ifhn \\\n-tv \\\n-gdb \\\n-m \\\n-a \\\n-ecfn \\\n-n <n> \\\n-np <n> \\\n-wdir <dirname> \\\n-umask <umask> \\\n-path <dirname> \\\n-soft <spec> \\\n-arch <arch> \\\n-envall \\\n-envnone \\\n-envlist <list> \\\n-env <var> <val> \\\n-gn <x> \\\n-gnp <x> \\\n-gwdir <dirname> \\\n-gsoft <spec> \\\n-garch <arch> \\\n-genv <var> <val> \\\n-genvall \\\n-genvlist <env_var_names> \\\n-genvnone \\\n-gpath <search_path> \\\n-gumask <umask> \\\n\"\n\n#####################################################################\n# options_to_ignore:  space-separated list of options and values that\n# \"pbsrun\" will NOT pass on to the actual mpirun call. options must begin\n# with \"-\" or \"--\", option name can contain a wildcard (*) to match a\n# number of characters (e.g. --totalnum=*), and option arguments must\n# be specified by some arbitrary name bounded by angle brackets,\n# as in \"<val1>\".\n#####################################################################\noptions_to_ignore=\"\\\n-host <hostname> \\\n-machinefile <file> \\\n\"\n\n#####################################################################\n# options_to_transform:  space-separated list of options and values that\n# \"pbsrun\" will modify before passing on to the actual mpirun call.\n# options must begin with \"-\" or \"--\", option name can contain a\n# wildcard (*) to match a number of characters (e.g. 
--totalnum=*),\n# and option arguments must be specified by some arbitrary name\n# bounded by angle brackets, as in \"<val1>\".\n# NOTE: Adding values here require code to be added to\n# transform_action() function appearing later in this file.\n#####################################################################\noptions_to_transform=\"\\\n\"\n\n#####################################################################\n# options_to_fail: space-separated list of options that will cause \"pbsrun\"\n# to exit upon encountering a match.\n# options must begin with \"-\" or \"--\", and option name can contain a\n# wildcard (*) to match a number of characters (e.g. --totalnum=*).\n#\n#####################################################################\noptions_to_fail=\"\\\n\"\n\n#####################################################################\n# option_to_configfile: the SINGLE option and value that refers to the\n# name of the \"configfile\" containing command line segments, found\n# in certain versions of mpirun.\n# option must begin with \"-\" or \"--\".\n#\n#####################################################################\noption_to_configfile=\"\\\n-configfile <configfile> \\\n\"\n\n#####################################################################\n# options_with_another_form: space-separated list of options and values\n# that can be found in options_to_retain, options_to_ignore, or\n# options_to_transform, whose syntax has an alternate, unsupported\n# form.\n# This usually occurs if a version of mpirun has different forms for\n# the same option.\n# For instance,\n#       MPICH2's mpirun provides:\n#               mpirun -localonly <x>\n#               mpirun -localonly\n# If options_to_retain lists \"-localonly\" as supported value, then\n# set options_with_another_form=\"-localonly\" as well.\n# This would cause \"pbsrun\" to issue a\n# warning about alternate forms upon encountering the  option.\n#\n# options must begin with \"-\" or \"--\", and 
option name can contain a\n# wildcard (*) to match a number of characters (e.g. --totalnum=*).\n#\n#####################################################################\noptions_with_another_form=\"\\\n\"\n\n#####################################################################\n# pbs_attach: full path or relative path to the pbs_attach executable.\n#\n#####################################################################\npbs_attach=pbs_attach\n\n#####################################################################\n# options_to_pbs_attach:  are the special options to pass to the\n# pbs_attach call. You may pass variable references (e.g. $PBS_JOBID)\n# and they will be substituted by pbsrun to actual values.\n#####################################################################\noptions_to_pbs_attach=\"-j $PBS_JOBID\"\n\n\n################## transform_action() ###################################\n# The action to be performed for each actual item and value matched in\n# options_to_transform.\n# RETURN: echo the replacement item and value for the matched arguments.\n# NOTES:\n# (1) \"echo\" produces the return value of this function;\n#     do not arbitrarily invoke echo statements.\n# (2) Please use 'printf \"%s\" \"<value>\"' (must quote the <value>) in place\n#     of \"echo\" command here since we're returning an option, and 'echo' is\n#     notorious for the following behavior:\n#               echo \"-n\"               --> prints empty (-n is an echo opt)\n#     Desired behavior:\n#               printf \"%s\" \"-n\"        --> prints \"-n\"\n#\n########################################################################\ntransform_action () {\n    args=$*\n    printf \"%s\" \"$args\"\n}\n\n################## boot_action() ###################################\n# The action to be performed BEFORE calling mpirun.\n# The location of actual mpirun is passed as first argument.\n# RETURN: 0 for success; non-zero otherwise.\n# NOTES:\n# (1) 'return' produces the exit 
status of this function:\n#     0 for success; non-zero for failure.\n####################################################################\nboot_action () {\n\tmpirun_location=$1\n\n#\tIf the user has already run mpdboot, we don't need to do it.\n\t$mpirun_location/mpdtrace >/dev/null 2>/dev/null\n\tif [ $? -eq 0 ]; then\n\t\treturn 0\n\tfi\n\n\t# Get the host.list from PBS and uniq it.\n\tpbs_hostsfile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_hosts$$\"\n        sort -u $PBS_NODEFILE > $pbs_hostsfile\n\tnum_mpds=`wc -l $pbs_hostsfile | awk '{print $1}'`\n\t$mpirun_location/mpdboot -n $num_mpds --file=$pbs_hostsfile\n\treturn 0\n}\n\n\n################## evaluate_options_action() #######################\n# The action to be performed on the actual options and values matched\n# in options_to_retain, not including those in options_to_ignore, and\n# those changed arguments in options_to_transform, as well as any\n# other transformation needed on the program name and program arguments.\n#\n# RETURN: echo the list of final arguments and program arguments\n# to be passed on to mpirun command line.\n#\n# NOTES:\n# (1) \"echo\" produces the return value of this function;\n#     do not arbitrarily invoke echo statements.\n# (2) Please use 'printf \"%s\" \"<value>\"' (must quote <value>) in place of\n#     \"echo\" command here since we're returning an option string, and\n#     'echo' is notorious for the following behavior:\n#               echo \"-n\"               --> prints empty (-n is an echo opt)\n#     Desired behavior:\n#               printf \"%s\" \"-n\"        --> prints \"-n\"\n#\n########################################################################\nevaluate_options_action () {\n\targs=$*\n\n\tpbs_machinefile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_machfile$$\"\n        cat -n ${PBS_NODEFILE} | \\\n        sort -k2 | uniq -f1 -c | \\\n        awk '{if ($1 == 1) print $2, $3; else print $2, $3 \":\" $1}' | \\\n        sort -n | awk '{print $2}' > $pbs_machinefile\n\n        printf \"%s\" 
\"-machinefile $pbs_machinefile $args\"\n}\n\n\n################## configfile_cmdline_action() #######################\n# If the option_to_configfile (e.g. -configfile) is specified in the\n# mpirun command line, then this function gets passed any leading options\n# and values found before option_to_configfile.\n#\n# RETURN: return the actual options and values to be put in before the\n# option_to_configfile parameter. For instance,\n#\treturning \"--totalnum=N --file=Y\" would result in\n# an mpirun command line of:\n#\tmpirun --totalnum=N --file=Y -configfile pbs_config\n#\n########################################################################\nconfigfile_cmdline_action () {\n\targs=$*\n\n        printf \"%s\" \"$args\"\n}\n\n################## configfile_firstline_action () #######################\n# If the option_to_configfile (e.g. -configfile) is specified in the\n# mpirun command line, return here the item that will be put in the\n# FIRST line of the configuration file.\n# This is the place to put  the \"-machinefile <filename>\" parameter\n# which determines the processes to hosts mappings. 
Some versions\n# of mpirun (MPICH2, Intel MPI) require that the -machinefile parameter\n# appear inside the config file and not on the command line.\n########################################################################\nconfigfile_firstline_action () {\n\tpbs_machinefile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_machfile$$\"\n\n        cat -n ${PBS_NODEFILE} | \\\n        sort -k2 | uniq -f1 -c | \\\n        awk '{if ($1 == 1) print $2, $3; else print $2, $3 \":\" $1}' | \\\n        sort -n | awk '{print $2}' > $pbs_machinefile\n\n        printf \"%s\" \"-machinefile $pbs_machinefile\"\n}\n\n################## end_action() ########################################\n# The action to be performed AFTER calling mpirun, and also when\n# the mpirun wrapper script is prematurely interrupted.\n# INPUT: The location of actual mpirun is passed as first argument.\n# RETURN: none\n########################################################################\nend_action () {\n\n\tmpirun_location=$1\n\n\tpbs_hostsfile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_hosts$$\"\n\tpbs_machinefile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_machfile$$\"\n\n\tif [ -f $pbs_hostsfile ]; then\n\t\t$mpirun_location/mpdallexit >/dev/null 2>/dev/null\n\t\trm -f $pbs_hostsfile\n\tfi\n\trm -f $pbs_machinefile\n}\n"
  },
  {
    "path": "src/cmds/scripts/pbsrun.mx_mpd.init.in",
    "content": "#!/bin/sh\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\n### FILE: pbsrun.mx_mpd.init ###\n\n####################################################################\n# strict_pbs: set to 1 if you want the wrapper script to be executed\n#             only if in a PBS environment\n#\n#####################################################################\nstrict_pbs=0\n\n#####################################################################\n# options_to_retain:  space-separted list of options and values that\n# \"pbsrun\" will pass on to the actual mpirun call. options must begin\n# with \"-\" or \"--\", option name can contain a wildcard (*) to match a\n# number of characters (e.g. --totalnum=*), and option arguments must\n# be specified by some arbitrary name bounded by angle brackets,\n# as in \"<val1>\".\n#####################################################################\noptions_to_retain=\"\\\n-np <n> \\\n-s \\\n-h \\\n-g <group_size> \\\n-iI \\\n-l \\\n-1 \\\n-y \\\n-whole \\\n-wdir <dirname> \\\n-jid <jobid> \\\n-jidfile <file> \\\n\"\n\n#####################################################################\n# options_to_ignore:  space-separted list of options and values that\n# \"pbsrun\" will NOT pass on to the actual mpirun call. options must begin\n# with \"-\" or \"--\", option name can contain a wildcard (*) to match a\n# number of characters (e.g. 
--totalnum=*), and option arguments must\n# be specified by some arbitrary name bounded by angle brackets,\n# as in \"<val1>\".\n#####################################################################\noptions_to_ignore=\"\\\n-m <file> \\\n\"\n\n####################################################################\n# options_to_transform:  space-separated list of options and values that\n# \"pbsrun\" will modify before passing on to the actual mpirun call.\n# options must begin with \"-\" or \"--\", option name can contain a\n# wildcard (*) to match a number of characters (e.g. --totalnum=*),\n# and option arguments must be specified by some arbitrary name\n# bounded by angle brackets, as in \"<val1>\".\n# NOTE: Adding values here requires code to be added to\n# the transform_action() function appearing later in this file.\n#####################################################################\noptions_to_transform=\"\\\n\"\n\n#####################################################################\n# options_to_fail: space-separated list of options that will cause \"pbsrun\"\n# to exit upon encountering a match.\n# options must begin with \"-\" or \"--\", and option name can contain a\n# wildcard (*) to match a number of characters (e.g. 
--totalnum=*).\n#\n#####################################################################\noptions_to_fail=\"\\\n\"\n\n#####################################################################\n# option_to_configfile: the SINGLE option and value that refers to the\n# name of the \"configfile\" containing command line segments, found\n# in certain versions of mpirun.\n# option must begin with \"-\" or \"--\".\n#\n#####################################################################\noption_to_configfile=\"\\\n\"\n\n#####################################################################\n# options_with_another_form: space-separated list of options and values\n# that can be found in options_to_retain, options_to_ignore, or\n# options_to_transform, whose syntax has an alternate, unsupported\n# form.\n# This usually occurs if a version of mpirun has different forms for\n# the same option.\n# For instance,\n#       MPICH2's mpirun provides:\n#               mpirun -localonly <x>\n#               mpirun -localonly\n# If options_to_retain lists \"-localonly\" as supported value, then\n# set options_with_another_form=\"-localonly\" as well.\n# This would cause \"pbsrun\" to issue a\n# warning about alternate forms upon encountering the  option.\n#\n# options must begin with \"-\" or \"--\", and option name can contain a\n# wildcard (*) to match a number of characters (e.g. --totalnum=*).\n#\n#####################################################################\noptions_with_another_form=\"\\\n\"\n\n####################################################################\n# pbs_attach: full path or relative path to the pbs_attach executable.\n#\n#####################################################################\npbs_attach=\"pbs_attach\"\n\n#####################################################################\n# options_to_pbs_attach:  are the special options to pass to the\n# pbs_attach call. You may pass variable references (e.g. 
$PBS_JOBID)\n# and they will be substituted by pbsrun with actual values.\n#####################################################################\noptions_to_pbs_attach=\"-j $PBS_JOBID\"\n\n################## transform_action() ###################################\n# The action to be performed for each actual item and value matched in\n# options_to_transform.\n# RETURN: echo the replacement item and value for the matched arguments.\n# NOTES:\n# (1) \"echo\" produces the return value of this function;\n#     do not arbitrarily invoke echo statements.\n# (2) Please use 'printf \"%s\" \"<value>\"' (must quote the <value>) in place\n#     of \"echo\" command here since we're returning an option, and 'echo' is\n#     notorious for the following behavior:\n#               echo \"-n\"               --> prints empty (-n is an echo opt)\n#     Desired behavior:\n#               printf \"%s\" \"-n\"        --> prints \"-n\"\n#\n########################################################################\ntransform_action () {\n\targs=$*\n\n\tprintf \"%s\" \"$*\"\n}\n\n################## boot_action() ###################################\n# The action to be performed BEFORE calling mpirun.\n# The location of the actual mpirun is passed as the first argument.\n# RETURN: 0 for success; non-zero otherwise.\n# NOTES:\n# (1) 'return' produces the exit status of this function:\n#     0 for success; non-zero for failure.\n####################################################################\nboot_action () {\n\tmpirun_location=$1\n\tpbs_hostsfile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_hosts$$\"\n\n\tif [ \"$RSHCOMMAND\" = \"\" ] ; then\n\t\tRSHCOMMAND=rsh\n\tfi\n\n\tMPD=${mpirun_location}/mpd\n\tMPDKILL=${mpirun_location}/mpdallexit\n\tMPDTRACE=${mpirun_location}/mpdtrace\n\n\t# Get the host.list from PBS and uniq it.\n\tsort -u $PBS_NODEFILE > $pbs_hostsfile\n\n\t# Kill a previous ring\n\trm -f /tmp/mpd.console_${USER}\n\t$MPDKILL >/dev/null 2>/dev/null\n\n\t# Start a ring here : 
(assuming the first host in pbs_hostsfile is here)\n\t# check that the port is valid : 12345\n\n\tport=$($MPD -b  -t|tail -1)\n\tif [ $? -ne 0 ] || [ \"x${port}\" = \"x\" ]\n\tthen\n\t\trm -f $pbs_hostsfile\n\t\treturn 1\n\tfi\n\n\t# For all remaining hosts in the pbs_hostsfile, join the ring\n\thostname=`hostname | awk -F. '{print $1}'`\n\tmaster=$(hostname)\n\tfor h in `cat $pbs_hostsfile |grep -vw $hostname`\n\tdo\n\t\t$RSHCOMMAND $h \"$MPDKILL;rm -f /tmp/mpd.console_${USER};$MPD -h $master -p $port -b\" >/dev/null 2>/dev/null\n\tdone\n\n\t# Verify the ring is up\n\t$MPDTRACE > /dev/null 2>/dev/null\n\tif [ $? -ne 0 ]\n\tthen\n\t\techo \"mpd ring is not ready\"\n\t\trm -f $pbs_hostsfile\n\t\treturn 2\n\tfi\n\n\trm -f $pbs_hostsfile\n\treturn 0\n}\n\n################## evaluate_options_action() #######################\n# The action to be performed on the actual options and values matched\n# in options_to_retain, not including those in options_to_ignore, and\n# those changed arguments in options_to_transform, as well as any\n# other transformation needed on the program name and program arguments.\n#\n# RETURN: echo the list of final arguments and program arguments\n# to be passed on to the mpirun command line.\n#\n# NOTES:\n# (1) \"echo\" produces the return value of this function;\n#     do not arbitrarily invoke echo statements.\n# (2) Please use 'printf \"%s\" \"<value>\"' (must quote <value>) in place of\n#     \"echo\" command here since we're returning an option string, and\n#     'echo' is notorious for the following behavior:\n#               echo \"-n\"               --> prints empty (-n is an echo opt)\n#     Desired behavior:\n#               printf \"%s\" \"-n\"        --> prints \"-n\"\n#\n########################################################################\nevaluate_options_action () {\n\targs=$*\n\n\tfound_np=0\n\twhile [ $# -gt 0 ]; do\n\t\tif [ \"XX$1\" = \"XX-np\" ]; then\n\t\t\tfound_np=1\n\t\t\tbreak\n\t\tfi\n\t\tshift\n\tdone\n\n\tif [ 
$found_np -eq 0 ] ; then\n\t\tusernp=`cat ${PBS_NODEFILE} | wc -l | tr -d ' '`\n\t\targs=\"-np $usernp $args\"\n\tfi\n\n\tpbs_machinefile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_machfile$$\"\n\tcat -n ${PBS_NODEFILE} | \\\n\tsort -k2 | uniq -f1 -c | \\\n\tawk '{if ($1 == 1) print $2, $3; else print $2, $3 \":\" $1}'|\\\n\tsort -n | awk '{print $2}' > $pbs_machinefile\n\n\targs=\"-m $pbs_machinefile $args\"\n\n\tprintf \"%s\" \"$args\"\n}\n\n################## configfile_cmdline_action() #######################\n# If the option_to_configfile (e.g. -configfile) is specified in the\n# mpirun command line, then this function gets passed any leading options\n# and values found before option_to_configfile.\n#\n# RETURN: return the actual options and values to be put in before the\n# option_to_configfile parameter. For instance,\n#\treturning \"--totalnum=N --file=Y\" would result in\n# an mpirun command line of:\n#\tmpirun --totalnum=N --file=Y -configfile pbs_config\n#\n########################################################################\nconfigfile_cmdline_action () {\n\targs=$*\n\n\tprintf \"\"\n}\n\n################## configfile_firstline_action () #######################\n# If the option_to_configfile (e.g. -configfile) is specified in the\n# mpirun command line, return here the item that will be put in the\n# FIRST line of the configuration file.\n# This is the place to put the \"-machinefile <filename>\" parameter\n# which determines the process-to-host mappings. 
Some versions\n# of mpirun (MPICH2, Intel MPI) require that the -machinefile parameter\n# appear inside the config file and not on the command line.\n########################################################################\nconfigfile_firstline_action () {\n\tprintf \"\"\n}\n\n################## end_action() ########################################\n# The action to be performed AFTER calling mpirun, and also when the\n# mpirun wrap script is prematurely interrupted.\n# INPUT: The location of the actual mpirun is passed as the first argument.\n# RETURN: none\n########################################################################\nend_action () {\n\tmpirun_location=$1\n\n\tpbs_hostsfile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_hosts$$\"\n\tpbs_machinefile=\"${PBS_TMPDIR:-/var/tmp}/pbsrun_machfile$$\"\n\n\trm -f $pbs_hostsfile\n\trm -f $pbs_machinefile 2>/dev/null\n\n\t${mpirun_location}/mpdallexit >/dev/null 2>/dev/null\n}\n"
  },
  {
    "path": "src/cmds/scripts/pbsrun.poe.in",
    "content": "#!/usr/bin/env ksh93\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nif [ $# -eq 1 ] && [ $1 = \"--version\" ]; then\n   echo pbs_version = @PBS_VERSION@\n   exit 0\nfi\n\n# We need to get name of the actual binary, and not some link\n# that resulted from wrapping\nif [ -h $0 ] ; then\n   realpath=`ls -l $0 | awk -F \"->\" '{print $2}'| tr -d ' '`\n   name=`basename $realpath`\nelse\n   name=`basename $0`\nfi\n\n. ${PBS_CONF_FILE:-@PBS_CONF_FILE@}\nPBS_LIB_PATH=\"${PBS_EXEC}/lib\"\nif [ ! -d ${PBS_LIB_PATH} -a -d ${PBS_EXEC}/lib64 ] ; then\n\tPBS_LIB_PATH=${PBS_EXEC}/lib64\nfi\n\nPBS_TMPDIR=\"${PBS_TMPDIR:-${TMPDIR:-/var/tmp}}\"\n\nif [ -h ${PBS_LIB_PATH}/MPI/${name}.link ] ; then\n   ibmpoe=`ls -l ${PBS_LIB_PATH}/MPI/${name}.link | \\\n\t\tawk -F \"->\" '{print $2}'| tr -d ' '`\n   if [ ! -x \"$ibmpoe\" ] ; then\n\techo \"poe=$ibmpoe is not executable!\"\n\texit 127\n\n   fi\nelse\n   echo \"No poe link found under ${PBS_LIB_PATH}/MPI/$name.link !\"\n   echo \"Please run pbsrun_wrap to create the link\"\n   exit 127\nfi\n\n# let's source the initialization script\nif [ -s ${PBS_LIB_PATH}/MPI/${name}.init ] ; then\n   . 
${PBS_LIB_PATH}/MPI/${name}.init\nelse\n   echo \"No ${PBS_LIB_PATH}/MPI/${name}.init file exists!\"\n   exit 127\nfi\n\nif [ \"${PBS_NODEFILE:-XX}\" = \"XX\" ]; then\n   echo \"$name: Warning, not running under PBS\"\n\n   if [ ${strict_pbs:=0} -eq 1 ] ; then\n         echo \"$name: exiting since strict_pbs is enabled; execute only in PBS\"\n         exit 1\n   fi\n   exec $ibmpoe \"$@\"\nfi\n\n# invoke man page\nif [ $# -eq 1 -a \"XX$1\" = \"XX-h\" ]; then\n\texec $ibmpoe -h\nelif [ $# -eq 0 ]; then\n\techo \"$name: Error, interactive program name entry not supported under PBS\"\n\texit 1\nfi\n\n# count number of entries in nodefile\n(( pbsnp=`wc -l < $PBS_NODEFILE` ))\n\n# make \"list\" an array\nset -A list\nwhile [ $# -gt 0 ]; do\n\tif [ \"XX$1\" = \"XX-procs\" ]; then\n\t\tshift\n\t\texport MP_PROCS=\"$1\"\n\telif [ \"XX$1\" = \"XX-hostfile\" -o \"XX$1\" = \"XX-hfile\" ]; then\n\t\techo \"$name: Warning, $1 value replaced by PBS\"\n\t\tshift\n\telif [ \"XX$1\" = \"XX-euilib\" ]; then\n\t\tshift\n\t\texport MP_EUILIB=\"$1\"\n\telif [ \"XX$1\" = \"XX-msg_api\" ]; then\n\t\tshift\n\t\texport MP_MSG_API=\"$1\"\n\telif [ \"XX$1\" = \"XX-devtype\" ]; then\n\t\tshift\n\t\texport MP_DEVTYPE=\"$1\"\n\telif [ \"XX$1\" = \"XX-instances\" ]; then\n\t\tshift\n\t\techo \"$name: Warning, -instances cmd line option removed by PBS\"\n\telif [ \"XX$1\" = \"XX-cmdfile\" ]; then\n\t\tshift\n\t\texport MP_CMDFILE=\"$1\"\n\telse\n#\t\tAppend an element to list\n\t\tlist[${#list[@]}]=\"$1\"\n\tfi\n\tshift\ndone\n\n# PBS is HPS enabled, only MPI and LAPI are supported\nif [ -n \"$PBS_HPS_JOBKEY\" ]; then\n\tif [ \"XX$MP_MSG_API\" = \"XX\" ]; then\n\t\texport MP_MSG_API=\"MPI\"\n\telif [ \"XX$MP_MSG_API\" != \"XXMPI\" -a \"XX$MP_MSG_API\" != \"XXLAPI\" \\\n\t\t\t-a \"XX$MP_EUILIB\" = \"XXus\" ]; then\n\t\texport MP_EUILIB=ip\n\tfi\nfi\n\nif [ -n \"$MP_HOSTFILE\" ]; then\n\techo \"$name: Warning, MP_HOSTFILE value replaced by PBS\"\nfi\n\nexport MP_HOSTFILE=$PBS_NODEFILE\nif [ -n 
\"$MP_PROCS\" ]; then\t# user has set a value for procs\n\tif [ $MP_PROCS -lt $pbsnp -a \"$MP_EUILIB\" = \"us\" ]; then\n\t\techo \"$name: Warning, usermode disabled due to MP_PROCS setting\"\n\t\texport MP_EUILIB=\"ip\"\n\t\ttest \"$MP_RESD\" = \"$PBS_MP_RESD\" && unset MP_RESD\n\t\ttest \"$MP_EUIDEVICE\" = \"$PBS_MP_EUIDEVICE\" && unset MP_EUIDEVICE\n\t\ttest \"$MP_DEVTYPE\" = \"$PBS_MP_DEVTYPE\" && unset MP_DEVTYPE\n\tfi\nelse\n\texport MP_PROCS=\"$pbsnp\"\t# set procs to PBS value\nfi\n\n# Duplicate the value of MP_EUILIB in an internal PBS variable so\n# we can depend on it being available to pbs_poerun.  Some versions\n# of poe seem to set MP_EUILIB=\"ip\" if it has no other value set.\nexport PBS_EUILIB=\"$MP_EUILIB\"\n\n# Don't change MP_RESD if we are doing \"mixed mode\" i.e. IB is requested but\n# PBS doesn't support it.  Otherwise, LoadLeveler shouldn't allocate nodes.\nif [ \"$MP_DEVTYPE\" != \"ib\" -o -n \"$PBS_AIXIB_JOBKEY\" ]; then\n\texport MP_RESD=no\n\tunset MP_INSTANCES\nfi\n\n# If user mode is set and PBS will provide it, unset MP_EUILIB so poe\n# doesn't contact LoadLeveler to do switch setup.\nif [ \"$MP_EUILIB\" = \"us\" ]; then\n\tif [ \"$MP_DEVTYPE\" = \"ib\" -a -n \"$PBS_AIXIB_JOBKEY\" ]; then\n\t\tunset MP_EUILIB\n\t\texport NRT_WINDOW_COUNT=$PBS_AIXIB_NETWORKS\n\telif [ \"$MP_DEVTYPE\" != \"ib\" -a -n \"$PBS_HPS_JOBKEY\" ]; then\n\t\tunset MP_EUILIB\n\tfi\nfi\n\nif [ \"$MP_CMDFILE\" ]; then\n\tif [ -s \"$MP_CMDFILE\" ]; then\n\t\tcmd=\"${PBS_TMPDIR}/PBS_cmd$$\"\n\t\tsed \"s;^;${PBS_LIB_PATH}/MPI/pbs_poerun ;\" \"$MP_CMDFILE\" > $cmd\n\t\texport MP_CMDFILE=\"$cmd\"\n\t\t$ibmpoe \"${list[@]}\"\n\t\tret=$?\n\t\t/bin/rm -f $cmd\n\t\texit $ret\n\tfi\nelse\n\texec $ibmpoe ${PBS_LIB_PATH}/MPI/pbs_poerun \"${list[@]}\"\nfi\n"
  },
  {
    "path": "src/cmds/scripts/pbsrun.poe.init.in",
    "content": "#!/bin/sh\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\n### FILE: pbsrun_poe.init ###\n\n####################################################################\n# strict_pbs: set to 1 if you want the wrapper script to be executed\n#             only if in a PBS environment\n#\n#####################################################################\nstrict_pbs=0\n\n# Reasonable defaults for MP_ environment variables\nMP_EUILIB=${MP_EUILIB:-$PBS_MP_EUILIB}\nMP_DEVTYPE=${MP_DEVTYPE:-$PBS_MP_DEVTYPE}\nMP_EUIDEVICE=${MP_EUIDEVICE:-$PBS_MP_EUIDEVICE}\nMP_RESD=${MP_RESD:-$PBS_MP_RESD}\nexport MP_EUILIB MP_DEVTYPE MP_EUIDEVICE MP_RESD\n"
  },
  {
    "path": "src/cmds/scripts/printjob",
    "content": "#!/bin/sh\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n. ${PBS_CONF_FILE:-/etc/pbs.conf}\n\nif [ -n \"$PBS_START_SERVER\" -a \"$PBS_START_SERVER\" != \"0\" ]; then\n\t# Source the file that sets PGSQL_LIBSTR\n\t. 
\"$PBS_EXEC\"/libexec/pbs_db_env\n\texec \"$PBS_EXEC\"/bin/printjob_svr.bin ${1+\"$@\"}\nelse\n\texec \"$PBS_EXEC\"/bin/printjob.bin ${1+\"$@\"}\nfi\n"
  },
  {
    "path": "src/cmds/scripts/sgiMPI.awk",
    "content": "# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n#\tWhen an MPI job executes within the PBS environment, MPI resource\n#\tspecification and PBS resource selections may conflict (for example,\n#\ton choice of execution host).  
This script works in conjunction with\n#\tthe PBS mpiexec script to transform those MPI resource specifications\n#\tto instead use the resources provided by PBS.\n#\n#\tIt does this by consulting the PBS nodes file (whose name appears in\n#\tthe \"PBS_NODEFILE\" environment variable) to determine the host names\n#\tand available number of CPUs per host (the latter by counting the\n#\tnumber of times a given host name appears in the nodes file), then\n#\tassigning hosts and CPUs to MPI resource specifications in a round-\n#\trobin fashion.\n#\n#\tWe expect to be passed values for these variables by the PBS mpiexec\n#\tthat invokes us,\n#\n#\t\tconfigfile\tis a file either supplied as an argument to\n#\t\t\t\tmpiexec, or constructed by the PBS mpiexec\n#\t\t\t\tscript to represent the concatenation of rank\n#\t\t\t\tspecifications on the mpiexec command line,\n#\n#\t\trunfile\t\tis the output file into which this program\n#\t\t\t\tshould put the vendor-specific invocations\n#\t\t\t\tthat implement the various mpiexec directives\n#\n#\t\tpbs_exec\tis the value of $PBS_EXEC from the pbs.conf file\n#\n#\t\tdebug\t\tmay be set to enable this program to report\n#\t\t\t\tsome of its internal workings\n#\n#\tTo customize this program for a different vendor, change the functions\n#\n#\t\tvendor_init\twhich does one-time per-vendor initializations,\n#\t\t\t\tand\n#\n#\t\tvendor_dorank\twhich formats and returns a rank specification\n#\t\t\t\tfor output to the supplied runfile\n\nfunction init()\n{\n\tif ((jobid = ENVIRON[\"PBS_JOBID\"]) == \"\") {\n\t\tprintf(\"cannot find \\\"PBS_JOBID\\\" in environment\\n\");\n\t\texit 1;\n\t}\n\n\tif (configfile == \"\") {\n\t\tprintf(\"no mpiexec configfile specified\\n\");\n\t\texit 1;\n\t}\n\n\tif (runfile == \"\") {\n\t\tprintf(\"no output run file specified\\n\");\n\t\texit 1;\n\t}\n\n\tvendor_init();\n}\n\nfunction vendor_init()\n{\n\tSGIMPI_init();\n}\n\nfunction vendor_dorank()\n{\n\treturn SGIMPI_dorank();\n}\n\nfunction 
SGIMPI_init()\n{\n\tSGIMPI_cmdfmt = sprintf(\"%%s%%s -np %%d %%s -j %s  %%s %%s\",\n\t    jobid);\n}\n\nfunction SGIMPI_dorank(ret)\n{\n\tret = sprintf(SGIMPI_cmdfmt, ranksep, rank[HOST], rank[NCPUS],\n\t    pbs_exec \"/bin/pbs_attach\", prog, args);\n\tranksep = \": \";\t\t# for all but the first line of the config file\n\n\treturn (ret);\n}\n\n#\tBreak a line from the supplied configuration file into ranks, validate\n#\tthe syntax, assign nodes to the ranks, and emit the vendor-specific\n#\tinvocations for later use by the calling PBS mpiexec script.\nfunction doline(i, type, val)\n{\n\tlinenum++;\n\tranknum = 0;\n\treset_rank();\n\tfor (i = 1; i <= NF; i++) {\n\t\ttype = $i;\n\t\tif (type ~ argpat) {\n\t\t\t# mpiexec directive\n\t\t\tin_rankdef = 1;\n\t\t\tsub(/^-/, \"\", type);\n\t\t\tif (i == NF) {\n\t\t\t\tprintf(missingvalfmt,\n\t\t\t\t       configfile, linenum, type);\n\t\t\t\treturn;\n\t\t\t\texit 1;\n\t\t\t} else {\n\t\t\t\ti++;\n\t\t\t\tval = $i;\n\t\t\t}\n\t\t\trank[type] = val;\n\t\t} else if (type == \":\") {\n\t\t\t# rank separator\n\t\t\tin_rankdef = 0;\n\t\t\tbreak;\n\t\t} else {\n\t\t\t# program and its arguments\n\t\t\tprog = $i;\n\t\t\ti++;\n\t\t\tfor (; i <= NF; i++) {\n\t\t\t\tif ($i == \":\")\n\t\t\t\t\tbreak;\n\t\t\t\telse\n\t\t\t\t\targs = args \" \" $i;\n\t\t\t}\n\t\t\tassign_nodes();\n\t\t\tranknum++;\n\t\t\treset_rank();\n\t\t\tin_rankdef = 0;\n\t\t}\n\t}\n\n\tif ((in_rankdef == 1) && (prog == \"\")) {\n\t\tprintf(missingcmdfmt, configfile, linenum, ranknum);\n\t\treturn;\n\t\texit 1;\n\t}\n\n}\n\n#\t(re)initializations to be done when beginning a new rank\nfunction reset_rank(r)\n{\n\tfor (r in rank)\n\t\tdelete rank[r];\n\n\tprog = \"\";\n\targs = \"\";\n\n\trank[NCPUS] = num_nodes;\t# default is implementation-dependent\n}\n\n#\tEmit a rank specification into the runfile.\nfunction dorank(r)\n{\n\tif (debug) {\n\t\tfor (r in rank)\n\t\t\tprintf(\"line %d, rank #%d:  %s=%s\\n\",\n\t\t\t    linenum, ranknum, r, 
rank[r]);\n\t\tprintf(\"line %d, rank #%d:  prog:  %s, args %s\\n\",\n\t\t    linenum, ranknum, prog, args);\n\t}\n\n\tprintf(\"%s\\n\", vendor_dorank()) >> runfile;\n}\n\n#\tRead the PBS nodes file, remembering the node names and CPUs per node.\nfunction read_nodefile(nf, pbs_nodefile)\n{\n\tpbs_nodefile = \"PBS_NODEFILE\";\n\n\tif ((nf = ENVIRON[pbs_nodefile]) == \"\") {\n\t\tprintf(\"cannot find \\\"%s\\\" in environment\\n\", pbs_nodefile);\n\t\texit 1;\n\t}\n\n\tnodeindex = 0;\n\twhile ((getline < nf) > 0) {\n\t\tif ($0 in cpus_per_node)\n\t\t\tcpus_per_node[$0]++;\n\t\telse {\n\t\t\tnodelist[nodeindex] = $0;\n\t\t\tcpus_per_node[$0] = 1;\n\t\t\tnodeindex++;\n\t\t}\n\t}\n\tclose(nf);\n\n\tif (nodeindex == 0) {\n\t\tif ((getline < nf) == -1)\n\t\t\tprintf(badnodefilefmt, nf);\n\t\telse\n\t\t\tprintf(nonodesfmt, nf);\n\t\texit 1;\n\t}\n\tnum_nodes = nodeindex;\n\treinit_avail();\n}\n\n#\tfor debugging\nfunction report_nodefile(node)\n{\n\tprint \"report_nodefile:  num_nodes \" num_nodes;\n\tfor (node = 0; node < num_nodes; node++)\n\t\tprintf(\"\\tnode %s:  CPUs available:  %d\\n\", nodelist[node],\n\t\t    cpus_per_node[nodelist[node]]);\n}\n\n#\tThis does a virtual rewind of the PBS nodes list, resetting the\n#\tnumber of CPUs available to that determined in read_nodefile().\nfunction reinit_avail(node)\n{\n\tfor (node = 0; node < num_nodes; node++)\n\t\tcur_avail[node] = cpus_per_node[nodelist[node]];\n\tnodeindex = 0;\n}\n\n#\tStep through the nodelist[] array, consuming CPU resources.  
When doing\n#\tso, either we satisfy a rank specification or must go on to the next\n#\thost;  in either case we emit the current rank spec before proceeding.\nfunction assign_nodes(cpus_needed, node)\n{\n\tcpus_needed = rank[NCPUS];\n\n\twhile (cpus_needed > 0) {\n\t\tfor (node = nodeindex; node < num_nodes; node++) {\n\t\t\trank[HOST] = nodelist[node];\n\t\t\tif (cur_avail[node] >= cpus_needed) {\n\t\t\t\tcur_avail[node] -= cpus_needed;\n\t\t\t\trank[NCPUS] = cpus_needed;\n\t\t\t\tdorank();\n\t\t\t\tif (cur_avail[node] == 0)\n\t\t\t\t\tnodeindex++;\n\t\t\t\treturn;\n\t\t\t} else {\n\t\t\t\trank[NCPUS] = cur_avail[node];\n\t\t\t\tcpus_needed -= cur_avail[node];\n\t\t\t\tcur_avail[node] = 0;\n\t\t\t\tdorank();\n\t\t\t\tnodeindex++;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t}\n\t\treinit_avail();\n\t}\n}\n\nBEGIN\t{\n\t\tlinenum = 0;\n\t\targpat = \"^-(arch|host|file|n|path|soft|wdir)$\";\n\t\tbadnodefilefmt = \"could not read PBS_NODEFILE \\\"%s\\\"\\n\";\n\t\tnonodesfmt = \"no nodes found in PBS_NODEFILE \\\"%s\\\"\\n\";\n\t\tmissingcmdfmt = \"%s line %d rank %d has no executable\\n\";\n\t\tmissingvalfmt = \"%s line %d:  argument \\\"-%s\\\":  missing val\\n\";\n\t\t# symbolic names for readability\n\t\tHOST = \"host\";\n\t\tNCPUS = \"n\";\n\n\t\tinit();\n\n\t\tread_nodefile();\n\t\tif (debug)\n\t\t\treport_nodefile();\n\n\t\twhile ((getline < configfile) > 0)\n\t\t\tdoline();\n\t\tclose(configfile);\n\t}\n"
  },
  {
    "path": "src/hooks/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\npbshooksdir = $(libdir)/python/altair/pbs_hooks\n\ndist_pbshooks_DATA = \\\n\tcgroups/pbs_cgroups.HK \\\n\tcgroups/pbs_cgroups.PY \\\n\tcgroups/pbs_cgroups.CF\n"
  },
  {
    "path": "src/hooks/cgroups/pbs_cgroups.CF",
    "content": "{\n    \"cgroup_prefix\"         : \"pbs_jobs\",\n    \"exclude_hosts\"         : [],\n    \"exclude_vntypes\"       : [\"no_cgroups\"],\n    \"run_only_on_hosts\"     : [],\n    \"periodic_resc_update\"  : true,\n    \"vnode_per_numa_node\"   : false,\n    \"online_offlined_nodes\" : true,\n    \"use_hyperthreads\"      : false,\n    \"ncpus_are_cores\"       : false,\n    \"discover_gpus\"         : true,\n    \"manage_rlimit_as\"      : true,\n    \"cgroup\" : {\n        \"cpuacct\" : {\n            \"enabled\"            : true,\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : []\n        },\n        \"cpuset\" : {\n            \"enabled\"            : true,\n            \"exclude_cpus\"       : [],\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : [],\n            \"mem_fences\"         : false,\n            \"mem_hardwall\"       : false,\n            \"memory_spread_page\" : false\n        },\n        \"devices\" : {\n            \"enabled\"            : false,\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : [],\n            \"allow\"              : [\n                \"b *:* rwm\",\n                \"c *:* rwm\"\n            ]\n        },\n        \"memory\" : {\n            \"enabled\"            : true,\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : [],\n            \"soft_limit\"         : false,\n            \"enforce_default\"    : true,\n            \"exclhost_ignore_default\" : false,\n            \"default\"            : \"256MB\",\n            \"reserve_percent\"    : 0,\n            \"reserve_amount\"     : \"1GB\"\n        },\n        \"memsw\" : {\n            \"enabled\"            : false,\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : [],\n            \"enforce_default\"    : true,\n            \"exclhost_ignore_default\" : false,\n            \"default\"            : 
\"0B\",\n            \"reserve_percent\"    : 0,\n            \"reserve_amount\"     : \"64MB\",\n            \"manage_cgswap\"      : false\n        },\n        \"hugetlb\" : {\n            \"enabled\"            : false,\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : [],\n            \"enforce_default\"    : true,\n            \"exclhost_ignore_default\" : false,\n            \"default\"            : \"0B\",\n            \"reserve_percent\"    : 0,\n            \"reserve_amount\"     : \"0B\"\n        }\n    }\n}\n"
  },
  {
    "path": "src/hooks/cgroups/pbs_cgroups.HK",
    "content": "type=site\nenabled=false\ndebug=false\nuser=pbsadmin\nevent=exechost_periodic,exechost_startup,execjob_attach,execjob_begin,execjob_end,execjob_epilogue,execjob_launch,execjob_resize,execjob_abort,execjob_postsuspend,execjob_preresume\nfail_action=offline_vnodes\norder=100\nalarm=90\nfreq=120\n"
  },
  {
    "path": "src/hooks/cgroups/pbs_cgroups.PY",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n\"\"\"\nPBS hook for managing cgroups on Linux execution hosts.\nThis hook contains the handlers required for PBS to support\ncgroups on Linux hosts that support them (kernel 2.6.28 and higher)\n\nThis hook services the following events:\n- exechost_periodic\n- exechost_startup\n- execjob_attach\n- execjob_begin\n- execjob_end\n- execjob_epilogue\n- execjob_launch\n- execjob_resize\n- execjob_abort\n- execjob_postsuspend\n- execjob_preresume\n\"\"\"\n\n# NOTES:\n#\n# When soft_limit is true for memory, memsw represents the hard limit.\n#\n# The resources value in sched_config must contain entries for mem and\n# vmem if those subsystems are enabled in the hook configuration file. 
The\n# amount of resource requested will not be available to the hook if they\n# are not present.\n\n# if we are not on a Linux system then the hook should always do nothing\n# and accept, and certainly not try Linux-only module imports\nimport platform\n\n# Module imports\n#\n# This one is needed to log messages\nimport pbs\n\nif platform.system() != 'Linux':\n    pbs.logmsg(pbs.EVENT_DEBUG,\n               'Cgroup hook not on supported OS '\n               '-- just accepting event')\n    pbs.event().accept()\n\n# we're on Linux, but does the kernel support cgroups?\nrel = list(map(int, (platform.release().split('-')[0].split('.'))))\npbs.logmsg(pbs.EVENT_DEBUG4,\n           'Cgroup hook: detected Linux kernel version %d.%d.%d' %\n           (rel[0], rel[1], rel[2]))\n\nsupported = False\nif rel[0] > 2:\n    supported = True\nelif rel[0] == 2:\n    if rel[1] > 6:\n        supported = True\n    elif rel[1] == 6:\n        if rel[2] >= 28:\n            supported = True\n\nif not supported:\n    pbs.logmsg(pbs.EVENT_DEBUG,\n               'Cgroup hook: kernel %s.%s.%s < 2.6.28; not supported'\n               % (rel[0], rel[1], rel[2]))\n    pbs.event().accept()\nelse:\n    # Now we know that at least the hook might do something useful\n    # so import other modules (some of them Linux-specific)\n    # Note the else is needed to be PEP8-compliant:  the imports must\n    # not be unindented since they are not top-of-file\n    import sys\n    import os\n    import stat\n    import errno\n    import signal\n    import subprocess\n    import re\n    import glob\n    import time\n    import string\n    import traceback\n    import copy\n    import operator\n    import fnmatch\n    import math\n    import types\n    try:\n        import json\n    except Exception:\n        import simplejson as json\n    multiprocessing = None\n    try:\n        # will fail in Python 2.5\n        import multiprocessing\n    except Exception:\n        # but we can use isinstance(multiprocessing, 
types.ModuleType) later\n        # to find out if it worked\n        pass\n    import fcntl\n    import pwd\n\n    PYTHON2 = sys.version_info[0] < 3\n\n    try:\n        bytearray\n        BYTEARRAY_EXISTS = True\n    except NameError:\n        BYTEARRAY_EXISTS = False\n\n# Define some globals that get set in main\nPBS_EXEC = ''\nPBS_HOME = ''\nPBS_MOM_HOME = ''\nPBS_MOM_JOBS = ''\n\n# ============================================================================\n# Derived error classes\n# ============================================================================\n\n\nclass AdminError(Exception):\n    \"\"\"\n    Base class for errors fixable only by administrative action.\n    \"\"\"\n    pass\n\n\nclass ProcessingError(Exception):\n    \"\"\"\n    Base class for errors in processing, unknown cause.\n    \"\"\"\n    pass\n\n\nclass UserError(Exception):\n    \"\"\"\n    Base class for errors fixable by the user.\n    \"\"\"\n    pass\n\n\nclass JobValueError(Exception):\n    \"\"\"\n    Errors in PBS job resource values.\n    \"\"\"\n    pass\n\n\nclass CgroupBusyError(ProcessingError):\n    \"\"\"\n    Errors when the cgroup is busy.\n    \"\"\"\n    pass\n\n\nclass CgroupConfigError(AdminError):\n    \"\"\"\n    Errors in configuring cgroup.\n    \"\"\"\n    pass\n\n\nclass CgroupLimitError(AdminError):\n    \"\"\"\n    Errors in setting cgroup limits.\n    \"\"\"\n    pass\n\n\nclass CgroupProcessingError(ProcessingError):\n    \"\"\"\n    Errors processing cgroup.\n    \"\"\"\n    pass\n\n\nclass TimeoutError(ProcessingError):\n    \"\"\"\n    Timeout encountered.\n    \"\"\"\n    pass\n\n\n# ============================================================================\n# Utility functions\n# ============================================================================\n\n#\n# FUNCTION stringified_output\n#\ndef stringified_output(out):\n    if PYTHON2:\n        if isinstance(out, str):\n            return(out)\n        elif isinstance(out, unicode):\n            
return(out.encode('utf-8'))\n        else:\n            return(str(out))\n    else:\n        if isinstance(out, str):\n            return(out)\n        elif isinstance(out, (bytes, bytearray)):\n            return(out.decode('utf-8'))\n        else:\n            return(str(out))\n\n\n#\n# FUNCTION caller_name\n#\ndef caller_name():\n    \"\"\"\n    Return the name of the calling function or method.\n    \"\"\"\n    return str(sys._getframe(1).f_code.co_name)\n\n\n#\n# FUNCTION systemd_escape\n#\ndef systemd_escape(buf):\n    \"\"\"\n    Escape strings for usage in system unit names\n    Some distros don't provide the systemd-escape command\n    \"\"\"\n    pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n    if not isinstance(buf, str):\n        raise ValueError('Not a basetype string')\n    ret = ''\n    for i, char in enumerate(buf):\n        if i < 1 and char == '.':\n            if PYTHON2:\n                ret += '\\\\x' + '.'.encode('hex')\n            else:\n                ret += '\\\\x' + b'.'.hex()\n        elif char.isalnum() or char in '_.':\n            ret += char\n        elif char == '/':\n            ret += '-'\n        else:\n            # Will turn non-ASCII into UTF-8 hex sequence on both Py2/3\n            if PYTHON2:\n                hexval = char.encode('hex')\n            else:\n                hexval = char.encode('utf-8').hex()\n            for j in range(0, len(hexval), 2):\n                ret += '\\\\x' + hexval[j:j + 2]\n    return ret\n\n\n#\n# FUNCTION convert_size\n#\ndef convert_size(value, units='b'):\n    \"\"\"\n    Convert a string containing a size specification (e.g. \"1m\") to a\n    string using different units (e.g. \"1024k\").\n\n    This function only interprets a decimal number at the start of the string,\n    stopping at any unrecognized character and ignoring the rest of the string.\n\n    When down-converting (e.g. MB to KB), all calculations involve integers and\n    the result returned is exact. 
When up-converting (e.g. KB to MB) floating\n    point numbers are involved. The result is rounded up. For example:\n\n    1023MB -> GB yields 1g\n    1024MB -> GB yields 1g\n    1025MB -> GB yields 2g  <-- This value was rounded up\n\n    Pattern matching or conversion may result in exceptions.\n    \"\"\"\n    logs = {'b': 0, 'k': 10, 'm': 20, 'g': 30,\n            't': 40, 'p': 50, 'e': 60, 'z': 70, 'y': 80}\n    try:\n        new = units[0].lower()\n        if new not in logs:\n            raise ValueError('Invalid unit value')\n        result = re.match(r'([-+]?\\d+)([bkmgtpezy]?)',\n                          str(value).lower())\n        if not result:\n            raise ValueError('Unrecognized value')\n        val, old = result.groups()\n        if int(val) < 0:\n            raise ValueError('Value may not be negative')\n        if old not in logs:\n            old = 'b'\n        factor = logs[old] - logs[new]\n        val = float(val)\n        val *= 2 ** factor\n        if (val - int(val)) > 0.0:\n            val += 1.0\n        val = int(val)\n        # pbs.size() does not like units following zero\n        if val <= 0:\n            return '0'\n        return str(val) + new\n    except Exception:\n        return None\n\n\n#\n# FUNCTION size_as_int\n#\ndef size_as_int(value):\n    \"\"\"\n    Convert a size string to an integer representation of size in bytes\n    \"\"\"\n    in_bytes = convert_size(value, units='b')\n    if in_bytes is not None:\n        return int(in_bytes.rstrip(string.ascii_lowercase))\n    else:\n        pbs.logmsg(pbs.EVENT_ERROR, \"size_as_int: Value %s \"\n                   \"does not convert to int, returning None\" % value)\n        return None\n\n\n#\n# FUNCTION convert_time\n#\ndef convert_time(value, units='s'):\n    \"\"\"\n    Converts an integer value for time into the value of the return unit\n\n    A valid decimal number, with optional sign, may be followed by a character\n    representing a scaling factor.  
Scaling factors may be either upper or\n    lower case. Examples include:\n    250ms\n    40s\n    +15min\n\n    Valid scaling factors are:\n    ns  = 10**-9\n    us  = 10**-6\n    ms  = 10**-3\n    s   =      1\n    min =     60\n    hr  =   3600\n\n    Pattern matching or conversion may result in exceptions.\n    \"\"\"\n    multipliers = {'': 1, 'ns': 10 ** -9, 'us': 10 ** -6,\n                   'ms': 10 ** -3, 's': 1, 'min': 60, 'hr': 3600}\n    new = units.lower()\n    if new not in multipliers:\n        raise ValueError('Invalid unit value')\n    result = re.match(r'([-+]?\\d+)\\s*([a-zA-Z]*)',\n                      str(value).lower())\n    if not result:\n        raise ValueError('Unrecognized value')\n    num, factor = result.groups()\n    # Check to see if no unit of time was specified\n    if factor is None:\n        factor = ''\n    # Check to see if the unit is valid\n    if str.lower(factor) not in multipliers:\n        raise ValueError('Time unit not recognized')\n    # Convert the value to seconds\n    value = float(num) * float(multipliers[str.lower(factor)])\n    if units != 's':\n        value = value / multipliers[new]\n    # _pbs_v1.validate_input breaks with very small time values\n    # because Python converts them to values like 1e-05\n    if value < 0.001:\n        value = 0.0\n    return value\n\n\ndef decode_list(data):\n    \"\"\"\n    json hook to convert lists from non string type to str\n    \"\"\"\n    ret = []\n    for item in data:\n        if isinstance(item, str):\n            pass\n        elif isinstance(item, list):\n            item = decode_list(item)\n        elif isinstance(item, dict):\n            item = decode_dict(item)\n        elif PYTHON2:\n            if isinstance(item, unicode):\n                item = item.encode('utf-8')\n            elif BYTEARRAY_EXISTS and isinstance(item, bytearray):\n                item = str(item)\n        elif isinstance(item, (bytes, bytearray)):\n            item = str(item, 
'utf-8')\n        ret.append(item)\n    return ret\n\n\ndef decode_dict(data):\n    \"\"\"\n    json hook to convert dictionaries from non string type to str\n    \"\"\"\n    ret = {}\n    for key, value in list(data.items()):\n        # first the key\n        if isinstance(key, str):\n            pass\n        elif PYTHON2:\n            if isinstance(key, unicode):\n                key = key.encode('utf-8')\n            elif BYTEARRAY_EXISTS and isinstance(key, bytearray):\n                key = str(key)\n        elif isinstance(key, (bytes, bytearray)):\n            key = str(key, 'utf-8')\n\n        # now the value\n        if isinstance(value, str):\n            pass\n        elif isinstance(value, list):\n            value = decode_list(value)\n        elif isinstance(value, dict):\n            value = decode_dict(value)\n        elif PYTHON2:\n            if isinstance(value, unicode):\n                value = value.encode('utf-8')\n            elif BYTEARRAY_EXISTS and isinstance(value, bytearray):\n                value = str(value)\n        elif isinstance(value, (bytes, bytearray)):\n            value = str(value, 'utf-8')\n\n        # add stringified (key, value) pair to result\n        ret[key] = value\n    return ret\n\n\ndef merge_dict(base, new):\n    \"\"\"\n    Merge together two multilevel dictionaries where new\n    takes precedence over base\n    \"\"\"\n    if not isinstance(base, dict):\n        raise ValueError('base must be type dict')\n    if not isinstance(new, dict):\n        raise ValueError('new must be type dict')\n    newkeys = list(new.keys())\n    merged = {}\n    for key in base:\n        if key in newkeys and isinstance(base[key], dict):\n            # Take it off the list of keys to copy\n            newkeys.remove(key)\n            merged[key] = merge_dict(base[key], new[key])\n        else:\n            merged[key] = copy.deepcopy(base[key])\n    # Copy the remaining unique keys from new\n    for key in newkeys:\n        
merged[key] = copy.deepcopy(new[key])\n    return merged\n\n\ndef expand_list(old):\n    \"\"\"\n    Convert condensed list format (with ranges) to an expanded Python list.\n    The input string is a comma separated list of digits and ranges.\n    Examples include:\n    0-3,8-11\n    0,2,4,6\n    2,5-7,10\n    \"\"\"\n    new = []\n    if isinstance(old, list):\n        old = \",\".join(list(map(str, old)))\n    stripped = old.strip()\n    if not stripped:\n        return new\n    for entry in stripped.split(','):\n        if '-' in entry[1:]:\n            start, end = entry.split('-', 1)\n            for i in range(int(start), int(end) + 1):\n                new.append(i)\n        else:\n            new.append(int(entry))\n    return new\n\n\ndef find_files(path, pattern='*', kind='',\n               follow_links=False, follow_mounts=True):\n    \"\"\"\n    Return a list of files similar to the find command\n    \"\"\"\n    if isinstance(pattern, str):\n        pattern = [pattern]\n    if isinstance(kind, str):\n        if not kind:\n            kind = []\n        else:\n            kind = [kind]\n    if not isinstance(pattern, list):\n        raise TypeError('Pattern must be a string or list')\n    if not isinstance(kind, list):\n        raise TypeError('Kind must be a string or list')\n    # Top level not excluded if it is a mount point\n    mounts = []\n    for root, dirs, files in os.walk(path, followlinks=follow_links):\n        for name in [os.path.join(root, x) for x in dirs + files]:\n            if not follow_mounts:\n                if os.path.isdir(name) and os.path.ismount(name):\n                    mounts.append(os.path.join(name, ''))\n                    continue\n                undermount = False\n                for mountpoint in mounts:\n                    if name.startswith(mountpoint):\n                        undermount = True\n                        break\n                if undermount:\n                    continue\n            
pattern_matched = False\n            for pat in pattern:\n                if fnmatch.fnmatchcase(os.path.basename(name), pat):\n                    pattern_matched = True\n                    break\n            if not pattern_matched:\n                continue\n            if not kind:\n                yield name\n                continue\n            statinfo = os.lstat(name).st_mode\n            for entry in kind:\n                if not entry:\n                    yield name\n                    break\n                for letter in entry:\n                    if letter == 'f' and stat.S_ISREG(statinfo):\n                        yield name\n                        break\n                    elif letter == 'l' and stat.S_ISLNK(statinfo):\n                        yield name\n                        break\n                    elif letter == 'c' and stat.S_ISCHR(statinfo):\n                        yield name\n                        break\n                    elif letter == 'b' and stat.S_ISBLK(statinfo):\n                        yield name\n                        break\n                    elif letter == 'p' and stat.S_ISFIFO(statinfo):\n                        yield name\n                        break\n                    elif letter == 's' and stat.S_ISSOCK(statinfo):\n                        yield name\n                        break\n                    elif letter == 'd' and stat.S_ISDIR(statinfo):\n                        yield name\n                        break\n\n\ndef initialize_resource(resc):\n    \"\"\"\n    Return a properly cast zero value\n    \"\"\"\n    if isinstance(resc, pbs.pbs_int):\n        ret = pbs.pbs_int(0)\n    elif isinstance(resc, pbs.pbs_float):\n        ret = pbs.pbs_float(0)\n    elif isinstance(resc, pbs.size):\n        ret = pbs.size('0')\n    elif isinstance(resc, int):\n        ret = 0\n    elif isinstance(resc, float):\n        ret = 0.0\n    elif isinstance(resc, list):\n        ret = []\n    elif isinstance(resc, dict):\n      
  ret = {}\n    elif isinstance(resc, tuple):\n        ret = ()\n    elif isinstance(resc, str):\n        ret = ''\n    else:\n        raise ValueError('Unable to initialize unknown resource type')\n    return ret\n\n\ndef printjob_info(jobid, include_attributes=False):\n    \"\"\"\n    Use printjob to acquire the job information\n    \"\"\"\n    info = {}\n    jobfile = os.path.join(PBS_MOM_JOBS, '%s.JB' % jobid)\n    if not os.path.isfile(jobfile):\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'File not found: %s' % (jobfile))\n        return info\n    cmd = [os.path.join(PBS_EXEC, 'bin', 'printjob')]\n    if not include_attributes:\n        cmd.append('-a')\n    cmd.append(jobfile)\n    try:\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'Running: %s' % cmd)\n        process = subprocess.Popen(cmd, shell=False,\n                                   stdout=subprocess.PIPE,\n                                   stderr=subprocess.PIPE,\n                                   universal_newlines=True)\n        out, err = process.communicate()\n        if process.returncode != 0:\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       'command return code non-zero: %s'\n                       % str(process.returncode))\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       'command stderr: %s'\n                       % stringified_output(err))\n    except Exception as exc:\n        pbs.logmsg(pbs.EVENT_DEBUG2, 'Error running command: %s' % cmd)\n        pbs.logmsg(pbs.EVENT_DEBUG2, 'Exception: %s' % exc)\n        return {}\n    # if we get a non-str type then convert before calling splitlines\n    # should not happen since we pass universal_newlines True\n    out_split = stringified_output(out).splitlines()\n    pattern = re.compile(r'^(\\w.*):\\s*(\\S+)')\n    for line in out_split:\n        result = re.match(pattern, line)\n        if not result:\n            continue\n        key, val = result.groups()\n        if not key or not val:\n            continue\n        if 
val.startswith('0x'):\n            info[key] = int(val, 16)\n        elif val.isdigit():\n            info[key] = int(val)\n        else:\n            info[key] = val\n    pbs.logmsg(pbs.EVENT_DEBUG4, 'JB file info returned: %s' % repr(info))\n    return info\n\n\ndef job_is_suspended(jobid):\n    \"\"\"\n    Returns True if job is in a suspended or unknown substate\n    \"\"\"\n    jobinfo = printjob_info(jobid)\n    if 'substate' in jobinfo:\n        return jobinfo['substate'] in [43, 45, 'unknown']\n    return False\n\n\ndef job_is_running(jobid):\n    \"\"\"\n    Returns True if job shows a running state and substate\n    \"\"\"\n    jobinfo = printjob_info(jobid)\n    if 'substate' in jobinfo:\n        return jobinfo['substate'] == 42\n    return False\n\n\ndef fetch_vnode_comments_nomp(vnode_list, timeout=10):\n    comment_dict = {}\n    failure = False\n    pbs.logmsg(pbs.EVENT_DEBUG4,\n               \"vnode list in fetch_vnode_comment is %s\"\n               % vnode_list)\n    try:\n        with Timeout(timeout, 'Timed out contacting server'):\n            for vn in vnode_list:\n                comment_dict[vn] = pbs.server().vnode(vn).comment\n                pbs.logmsg(pbs.EVENT_DEBUG4,\n                           \"comment for vnode %s fetched from server is %s\"\n                           % (str(vn), comment_dict[vn]))\n    except TimeoutError:\n        # pbs.server().vnode(xx).comment got stuck, or the timeout\n        # was too short for the number of nodes supplied\n        pbs.logmsg(pbs.EVENT_ERROR,\n                   'Timed out while fetching comments from server, '\n                   'timeout was %s' % str(timeout))\n        failure = True\n    except Exception as exc:\n        # other exception, like e.g. 
wrong vnode name\n        pbs.logmsg(pbs.EVENT_ERROR,\n                   'Unexpected error in fetch_vnode_comments: %s'\n                   % repr(exc))\n        failure = True\n    # only return a full dictionary if you got the comment for all vnodes\n    if not failure:\n        return (comment_dict, failure)\n    else:\n        return ({}, failure)\n\n\ndef fetch_vnode_comments_queue(vnode_list, commq):\n    comment_dict = {}\n    failure = False\n    pbs.logmsg(pbs.EVENT_DEBUG4,\n               \"vnode list in fetch_vnode_comment is %s\"\n               % vnode_list)\n    try:\n        for vn in vnode_list:\n            comment_dict[vn] = pbs.server().vnode(vn).comment\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       \"comment for vnode %s fetched from server is %s\"\n                       % (str(vn), comment_dict[vn]))\n\n    except Exception as exc:\n        # other exception, like e.g. wrong vnode name\n        pbs.logmsg(pbs.EVENT_ERROR,\n                   'Unexpected error in fetch_vnode_comments: %s'\n                   % repr(exc))\n        failure = True\n    # only return a full dictionary if you got the comment for all vnodes\n    if not failure:\n        commq.put(comment_dict)\n    else:\n        commq.put({})\n    return True\n\n\ndef fetch_vnode_comments_mp(vnode_list, timeout=10):\n    commq = multiprocessing.Queue()\n    worker = multiprocessing.Process(target=fetch_vnode_comments_queue,\n                                     args=(vnode_list, commq))\n    worker.start()\n    worker.join(timeout)\n    if worker.is_alive():\n        pbs.logmsg(pbs.EVENT_ERROR,\n                   \"comment fetcher for %s timed out after %s seconds\"\n                   % (str(vnode_list), timeout))\n        # will mess up the commq Queue but we don't care\n        # will send SIGTERM, but it's possible that is masked\n        # while in a SWIG function\n        # we don't really care since unlike in threading,\n    
    # in multiprocessing we can exit and orphan the worker\n        worker.terminate()\n        pbs.logmsg(pbs.EVENT_ERROR,\n                   'Timed out while fetching comments from server, '\n                   'timeout was %s' % str(timeout))\n        return ({}, True)\n    else:\n        pbs.logmsg(pbs.EVENT_DEBUG4,\n                   \"comments fetched from server without timeout\")\n        comment_dict = {}\n        try:\n            comment_dict = commq.get()\n        except Exception:\n            # Treat failure to get comments dictionary from queue\n            # as a timeout\n            return ({}, True)\n        pbs.logmsg(pbs.EVENT_DEBUG4,\n                   \"worker comment_dict is %r\" % comment_dict)\n\n        return (comment_dict, False)\n\n\ndef fetch_vnode_comments(vnode_list, timeout=10):\n    if not isinstance(multiprocessing, types.ModuleType):\n        pbs.logmsg(pbs.EVENT_DEBUG4, \"multiprocessing not available, \"\n                   \"fetch_vnode_comment will use SIGALRM timeout\")\n        return fetch_vnode_comments_nomp(vnode_list, timeout)\n    else:\n        pbs.logmsg(pbs.EVENT_DEBUG4, \"multiprocessing available, \"\n                   \"fetch_vnode_comment will use mp for timeout\")\n        return fetch_vnode_comments_mp(vnode_list, timeout)\n\n\n# ============================================================================\n# Utility classes\n# ============================================================================\n\n#\n# CLASS Lock\n#\nclass Lock(object):\n    \"\"\"\n    Implement a simple locking mechanism using a file lock\n    \"\"\"\n\n    def __init__(self, path):\n        self.path = path\n        self.lockfd = None\n\n    def getpath(self):\n        \"\"\"\n        Return the path of the lock file.\n        \"\"\"\n        return self.path\n\n    def getlockfd(self):\n        \"\"\"\n        Return the file descriptor of the lock file.\n        \"\"\"\n        return self.lockfd\n\n    def __enter__(self):\n   
     self.lockfd = open(self.path, 'w')\n        fcntl.flock(self.lockfd, fcntl.LOCK_EX)\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s file lock acquired by %s' %\n                   (self.path, str(sys._getframe(1).f_code.co_name)))\n\n    def __exit__(self, exc, val, trace):\n        if self.lockfd:\n            fcntl.flock(self.lockfd, fcntl.LOCK_UN)\n            self.lockfd.close()\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s file lock released by %s' %\n                   (self.path, str(sys._getframe(1).f_code.co_name)))\n\n\n#\n# CLASS Timeout\n#\nclass Timeout(object):\n    \"\"\"\n    Implement a timeout mechanism via SIGALRM\n    \"\"\"\n\n    def __init__(self, duration=1, message='Operation timed out'):\n        self.duration = duration\n        self.message = message\n\n    def handler(self, sig, frame):\n        \"\"\"\n        Throw a timeout error when SIGALRM is received\n        \"\"\"\n        raise TimeoutError(self.message)\n\n    def getduration(self):\n        \"\"\"\n        Return the timeout duration\n        \"\"\"\n        return self.duration\n\n    def getmessage(self):\n        \"\"\"\n        Return the timeout message\n        \"\"\"\n        return self.message\n\n    def __enter__(self):\n        if signal.getsignal(signal.SIGALRM):\n            raise RuntimeError('Alarm handler already registered')\n        signal.signal(signal.SIGALRM, self.handler)\n        signal.alarm(self.duration)\n\n    def __exit__(self, exc, val, trace):\n        signal.alarm(0)\n        signal.signal(signal.SIGALRM, signal.SIG_DFL)\n\n\n#\n# CLASS HookUtils\n#\nclass HookUtils(object):\n    \"\"\"\n    Hook utility methods\n    \"\"\"\n\n    def __init__(self, hook_events=None):\n        if hook_events is not None:\n            self.hook_events = hook_events\n        else:\n            # Defined in the order they appear in module_pbs_v1.c\n            # if adding new events that may not exist in all PBS versions,\n            # use hasattr() to prevent 
exceptions when the hook is used\n            # on older PBS versions - see e.g. EXECJOB_ABORT\n            self.hook_events = {}\n            self.hook_events[pbs.QUEUEJOB] = {\n                'name': 'queuejob',\n                'handler': None\n            }\n            self.hook_events[pbs.MODIFYJOB] = {\n                'name': 'modifyjob',\n                'handler': None\n            }\n            self.hook_events[pbs.RESVSUB] = {\n                'name': 'resvsub',\n                'handler': None\n            }\n            if hasattr(pbs, \"MODIFYRESV\"):\n                self.hook_events[pbs.MODIFYRESV] = {\n                    'name': 'modifyresv',\n                    'handler': None\n                }\n            self.hook_events[pbs.MOVEJOB] = {\n                'name': 'movejob',\n                'handler': None\n            }\n            self.hook_events[pbs.RUNJOB] = {\n                'name': 'runjob',\n                'handler': None\n            }\n            if hasattr(pbs, \"MANAGEMENT\"):\n                self.hook_events[pbs.MANAGEMENT] = {\n                    'name': 'management',\n                    'handler': None\n                }\n            if hasattr(pbs, \"MODIFYVNODE\"):\n                self.hook_events[pbs.MODIFYVNODE] = {\n                    'name': 'modifyvnode',\n                    'handler': None\n                }\n            self.hook_events[pbs.PROVISION] = {\n                'name': 'provision',\n                'handler': None\n            }\n            if hasattr(pbs, \"RESV_END\"):\n                self.hook_events[pbs.RESV_END] = {\n                'name': 'resv_end',\n                'handler': None\n                }\n            if hasattr(pbs, \"RESV_BEGIN\"):\n                self.hook_events[pbs.RESV_BEGIN] = {\n                'name': 'resv_begin',\n                'handler': None\n                }\n            if hasattr(pbs, \"RESV_CONFIRM\"):\n                self.hook_events[pbs.RESV_CONFIRM] 
= {\n                'name': 'resv_confirm',\n                'handler': None\n                }\n            self.hook_events[pbs.EXECJOB_BEGIN] = {\n                'name': 'execjob_begin',\n                'handler': self._execjob_begin_handler\n            }\n            self.hook_events[pbs.EXECJOB_PROLOGUE] = {\n                'name': 'execjob_prologue',\n                'handler': None\n            }\n            self.hook_events[pbs.EXECJOB_EPILOGUE] = {\n                'name': 'execjob_epilogue',\n                'handler': self._execjob_epilogue_handler\n            }\n            self.hook_events[pbs.EXECJOB_PRETERM] = {\n                'name': 'execjob_preterm',\n                'handler': None\n            }\n            self.hook_events[pbs.EXECJOB_END] = {\n                'name': 'execjob_end',\n                'handler': self._execjob_end_handler\n            }\n            self.hook_events[pbs.EXECJOB_LAUNCH] = {\n                'name': 'execjob_launch',\n                'handler': self._execjob_launch_handler\n            }\n            self.hook_events[pbs.EXECHOST_PERIODIC] = {\n                'name': 'exechost_periodic',\n                'handler': self._exechost_periodic_handler\n            }\n            self.hook_events[pbs.EXECHOST_STARTUP] = {\n                'name': 'exechost_startup',\n                'handler': self._exechost_startup_handler\n            }\n            self.hook_events[pbs.EXECJOB_ATTACH] = {\n                'name': 'execjob_attach',\n                'handler': self._execjob_attach_handler\n            }\n            if hasattr(pbs, \"EXECJOB_RESIZE\"):\n                self.hook_events[pbs.EXECJOB_RESIZE] = {\n                    'name': 'execjob_resize',\n                    'handler': self._execjob_resize_handler\n                }\n            if hasattr(pbs, \"EXECJOB_ABORT\"):\n                self.hook_events[pbs.EXECJOB_ABORT] = {\n                    'name': 'execjob_abort',\n                    
'handler': self._execjob_end_handler\n                }\n            if hasattr(pbs, \"EXECJOB_POSTSUSPEND\"):\n                self.hook_events[pbs.EXECJOB_POSTSUSPEND] = {\n                    'name': 'execjob_postsuspend',\n                    'handler': self._execjob_postsuspend_handler\n                }\n            if hasattr(pbs, \"EXECJOB_PRERESUME\"):\n                self.hook_events[pbs.EXECJOB_PRERESUME] = {\n                    'name': 'execjob_preresume',\n                    'handler': self._execjob_preresume_handler\n                }\n            self.hook_events[pbs.MOM_EVENTS] = {\n                'name': 'mom_events',\n                'handler': None\n            }\n\n    def __repr__(self):\n        return 'HookUtils(%s)' % (repr(self.hook_events))\n\n    def event_name(self, hooktype):\n        \"\"\"\n        Return the event name for the supplied hook type.\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        if hooktype in self.hook_events:\n            return self.hook_events[hooktype]['name']\n        pbs.logmsg(pbs.EVENT_DEBUG4,\n                   '%s: Type: %s not found' % (caller_name(), type))\n        return None\n\n    def hashandler(self, hooktype):\n        \"\"\"\n        Return the handler for the supplied hook type.\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        if hooktype in self.hook_events:\n            return self.hook_events[hooktype]['handler'] is not None\n        return None\n\n    def invoke_handler(self, event, cgroup, jobutil, *args):\n        \"\"\"\n        Call the appropriate handler for the supplied event.\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: UID: real=%d, effective=%d' %\n                   (caller_name(), os.getuid(), os.geteuid()))\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: GID: real=%d, effective=%d' %\n         
          (caller_name(), os.getgid(), os.getegid()))\n        if self.hashandler(event.type):\n            return self.hook_events[event.type]['handler'](event, cgroup,\n                                                           jobutil, *args)\n        pbs.logmsg(pbs.EVENT_DEBUG2,\n                   '%s: %s event not handled by this hook' %\n                   (caller_name(), self.event_name(event.type)))\n        return False\n\n    def _execjob_begin_handler(self, event, cgroup, jobutil):\n        \"\"\"\n        Handler for execjob_begin events.\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        # Instantiate the NodeUtils class for get_memory_on_node and\n        # get_vmem_on_node\n        node = NodeUtils(cgroup.cfg)\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: NodeUtils class instantiated' %\n                   caller_name())\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Host assigned job resources: %s' %\n                   (caller_name(), jobutil.assigned_resources))\n        # Make sure the parent cgroup directories exist\n        cgroup.create_paths()\n        # Make sure the cgroup does not already exist\n        # from a failed run\n        cgroup.delete(event.job.id, False)\n        # Now that we have a lock, determine the current cgroup tree assigned\n        # resources\n        cgroup.assigned_resources = cgroup._get_assigned_cgroup_resources()\n        # Create the cgroup(s) for the job\n        cgroup.create_job(event.job.id, node)\n        if (cgroup.cfg['cgroup']['cpuset']['enabled']\n                and not cgroup.cfg['cgroup']['cpuset']['allow_zero_cpus']):\n            if ('ncpus' not in jobutil.assigned_resources\n                    or jobutil.assigned_resources['ncpus'] <= 0):\n                if event.job.in_ms_mom():\n                    pbs.logmsg(pbs.EVENT_ERROR,\n                               'cpuset enabled with mandatory ncpus >= 1, '\n                               'but job does not 
request ncpus on '\n                               'mother superior: rejecting job')\n                    event.reject('cpuset enabled with mandatory ncpus >= 1, '\n                                 'but job does not request ncpus on '\n                                 'mother superior: rejecting job')\n                else:\n                    # will be done in configure_job\n                    pbs.logmsg(pbs.EVENT_JOB_USAGE,\n                               'cpuset enabled with mandatory ncpus >= 1, '\n                               'but job does not request ncpus on host: '\n                               'deleting cpuset cgroup')\n\n        # Configure the new cgroup\n        cgroup.configure_job(event.job, jobutil.assigned_resources,\n                             node, cgroup, event.type)\n\n        # Write out the assigned resources\n        cgroup.write_cgroup_assigned_resources(event.job.id)\n        # Write out the environment variable for the host (pbs_attach)\n        if 'device_names' in cgroup.assigned_resources:\n            pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Devices: %s' %\n                       (caller_name(),\n                        cgroup.assigned_resources['device_names']))\n            env_list = []\n            if cgroup.assigned_resources['device_names']:\n                mics = []\n                gpus = []\n                for key in cgroup.assigned_resources['device_names']:\n                    if key.startswith('mic'):\n                        mics.append(key[3:])\n                    elif (key.startswith('nvidia')\n                            and \"gpu\" in node.devices\n                            and key in node.devices['gpu']):\n                        if 'uuid' in node.devices['gpu'][key]:\n                            gpus.append(node.devices['gpu'][key]['uuid'])\n                        if 'uuids' in node.devices['gpu'][key]:\n                            gpus.extend(node.devices['gpu'][key]['uuids'])\n                if mics:\n     
               env_list.append('OFFLOAD_DEVICES=%s' %\n                                    \",\".join(mics))\n                if gpus:\n                    # Don't put quotes around the values. ex \"0\" or \"0,1\".\n                    # This will cause it to fail.\n                    env_list.append('CUDA_VISIBLE_DEVICES=%s' %\n                                    \",\".join(gpus))\n                    env_list.append('CUDA_DEVICE_ORDER=PCI_BUS_ID')\n            pbs.logmsg(pbs.EVENT_DEBUG4, 'ENV_LIST: %s' % env_list)\n            cgroup.write_job_env_file(event.job.id, env_list)\n        # Add jobid to cgroup_jobs file to tell periodic handler that this\n        # job is new and its cgroup should not be cleaned up\n        cgroup.add_jobid_to_cgroup_jobs(event.job.id)\n\n        # Initialize resources_used values that the hook will update\n        # so that they are not updated through MoM polling.\n        # Particularly important for resources_used.vmem which has\n        # different semantics when the cgroup hook manages mem+swap.\n        # Add 'force' flag because we haven't reached substate 42 yet\n        cgroup.update_job_usage(event.job.id, event.job.resources_used,\n                                force=True)\n        return True\n\n    def _execjob_epilogue_handler(self, event, cgroup, jobutil):\n        \"\"\"\n        Handler for execjob_epilogue events.\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        # delete this jobid from cgroup_jobs in case hook events before me\n        # failed to do that\n        cgroup.remove_jobid_from_cgroup_jobs(event.job.id)\n        # The resources_used information has a base type of pbs_resource.\n        # Update the usage data\n        cgroup.update_job_usage(event.job.id, event.job.resources_used)\n        # The job script has completed, but the obit has not been sent.\n        # Delete the cgroups for this job so that they don't interfere\n        # with incoming 
jobs assigned to this node.\n        cgroup.delete(event.job.id)\n        return True\n\n    def _execjob_end_handler(self, event, cgroup, jobutil):\n        \"\"\"\n        Handler for execjob_end events.\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        # delete this jobid from cgroup_jobs in case hook events before me\n        # failed to do that\n        cgroup.remove_jobid_from_cgroup_jobs(event.job.id)\n        # The cgroup is usually deleted in the execjob_epilogue event\n        # There are certain corner cases where epilogue can fail or skip\n        # Delete files again here to make sure we catch those\n        # cgroup.delete() does nothing if files are already deleted\n        cgroup.delete(event.job.id)\n        # Remove the assigned_resources and job_env files.\n        filelist = []\n        filelist.append(os.path.join(cgroup.hook_storage_dir, event.job.id))\n        filelist.append(cgroup.host_job_env_filename % event.job.id)\n        for filename in filelist:\n            try:\n                os.remove(filename)\n            except OSError:\n                pbs.logmsg(pbs.EVENT_DEBUG4, 'File: %s not found' % (filename))\n            except Exception:\n                pbs.logmsg(pbs.EVENT_DEBUG4,\n                           'Error removing file: %s' % (filename))\n        return True\n\n    def manage_rlimit_as(self, job):\n        \"\"\"\n        Sets rlimit_as according to pvmem (if present)\n        if pvmem is not present, sets it to unlimited\n        (since we know vmem specifies memory+swap usage limit,\n         not address space limit)\n        \"\"\"\n        import resource\n        rlimit_as = None\n        parent_pid = os.getppid()\n        if 'pvmem' in job.Resource_List and job.Resource_List['pvmem']:\n            rlimit_as = size_as_int(job.Resource_List['pvmem'])\n        else:\n            rlimit_as = resource.RLIM_INFINITY\n\n        prlimit_command = None\n        if 
hasattr(resource, 'prlimit'):\n            # No need to call command -- Python has direct support\n            prlimit_command = ''\n            pbs.logmsg(pbs.EVENT_DEBUG3,\n                       'Calling resource.prlimit(%s, resource.RLIMIT_AS, %s)'\n                       % (parent_pid, str((rlimit_as, rlimit_as))))\n            resource.prlimit(parent_pid, resource.RLIMIT_AS,\n                             (rlimit_as, rlimit_as))\n        elif os.path.isfile('/usr/bin/prlimit'):\n            prlimit_command = '/usr/bin/prlimit'\n        elif os.path.isfile('/bin/prlimit'):\n            prlimit_command = '/bin/prlimit'\n\n        if prlimit_command is None:\n            pbs.logmsg(pbs.EVENT_DEBUG, \"cannot set rlimit_as for task: \"\n                       \"prlimit command not found\")\n        elif prlimit_command != '':\n            cmd = [prlimit_command,\n                   '--as=' + str(rlimit_as) + ':' + str(rlimit_as),\n                   '--pid=' + str(parent_pid)]\n            pbs.logmsg(pbs.EVENT_DEBUG3, 'Running: ' + ' '.join(cmd))\n            try:\n                # Try running the prlimit command\n                process = subprocess.Popen(cmd, shell=False,\n                                           stdout=subprocess.PIPE,\n                                           stderr=subprocess.PIPE)\n                out = process.communicate()[0]\n            except Exception:\n                pbs.logmsg(pbs.EVENT_ERROR,\n                           'Found but failed to execute: %s' %\n                           ' '.join(cmd))\n\n    def _execjob_launch_handler(self, event, cgroup, jobutil):\n        \"\"\"\n        Handler for execjob_launch events.\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        node = NodeUtils(cgroup.cfg)\n        # delete this jobid from cgroup_jobs in case hook events before me\n        # failed to do that\n        cgroup.remove_jobid_from_cgroup_jobs(event.job.id)\n        # Add 
the parent process id to the appropriate cgroups.\n        cgroup.add_pids(os.getppid(), jobutil.job.id)\n        # FUTURE: Add environment variable to the job environment\n        # if job requested mic or gpu\n        cgroup.read_cgroup_assigned_resources(event.job.id)\n        if cgroup.assigned_resources is not None:\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       'assigned_resources: %s' %\n                       (cgroup.assigned_resources))\n            if \"gpu\" in node.devices:\n                cgroup.setup_job_devices_env(node.devices['gpu'])\n\n        # If vmem was requested, set per process address space limit\n        # or clear the one MoM has set,\n        # if possible (requires prlimit support)\n        if (cgroup.cfg['manage_rlimit_as']\n                and cgroup.cfg['cgroup']['memsw']['enabled']):\n            self.manage_rlimit_as(pbs.event().job)\n        return True\n\n    def _exechost_periodic_handler(self, event, cgroup, jobutil):\n        \"\"\"\n        Handler for exechost_periodic events.\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        # Instantiate the NodeUtils class for gather_jobs_on_node\n        node = NodeUtils(cgroup.cfg)\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: NodeUtils class instantiated' %\n                   caller_name())\n        # Cleanup cgroups for jobs not present on this node\n        jobdict = node.gather_jobs_on_node(cgroup)\n        for jobid in event.job_list:\n            if jobid not in jobdict:\n                jobdict[jobid] = float()\n        remaining = cgroup.cleanup_orphans(jobdict)\n        # Offline the node if there are remaining orphans\n        if remaining > 0:\n            try:\n                node.take_node_offline()\n            except Exception as exc:\n                pbs.logmsg(pbs.EVENT_DEBUG, '%s: Failed to offline node: %s' %\n                           (caller_name(), exc))\n        # Online nodes that were offlined 
due to a cgroup not cleaning up\n        if remaining == 0 and cgroup.cfg['online_offlined_nodes']:\n            node.bring_node_online()\n        # Update the resource usage information for each job\n        if cgroup.cfg['periodic_resc_update']:\n            # Using event.job_list, without the parenthesis, will\n            # make the dictionary iterable.\n            for jobid in event.job_list:\n                pbs.logmsg(pbs.EVENT_DEBUG4,\n                           '%s: Updating resource usage for %s' %\n                           (caller_name(), jobid))\n                try:\n                    cgroup.update_job_usage(jobid, (event.job_list[jobid]\n                                                    .resources_used))\n                except Exception:\n                    pbs.logmsg(pbs.EVENT_DEBUG, '%s: Failed to update %s' %\n                               (caller_name(), jobid))\n        return True\n\n    def _exechost_startup_handler(self, event, cgroup, jobutil):\n        \"\"\"\n        Handler for exechost_startup events.\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        cgroup.create_paths()\n        node = NodeUtils(cgroup.cfg)\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: NodeUtils class instantiated' %\n                   caller_name())\n        node.create_vnodes(cgroup.vntype)\n        host = node.hostname\n        # The memory limits are interdependent and might fail when set.\n        # There are three limits. 
Worst case scenario is to loop three\n        # times in order to set them all.\n        mem_on_node = 0\n        for _ in range(3):\n            result = True\n            if 'memory' in cgroup.subsystems:\n                val = node.get_memory_on_node()\n                if val is not None and val > 0:\n                    mem_on_node = val\n                    try:\n                        cgroup.set_limit('mem', val)\n                    except Exception:\n                        result = False\n                try:\n                    val = int(cgroup.cfg['cgroup']['memory']['swappiness'])\n                    cgroup.set_swappiness(val)\n                except Exception:\n                    # error is not fatal, do not set result to False\n                    pbs.logmsg(pbs.EVENT_DEBUG,\n                               '%s: Failed to set swappiness' % caller_name())\n            if 'memsw' in cgroup.subsystems:\n                val = node.get_vmem_on_node()\n                if val is not None and val < mem_on_node:\n                    val = mem_on_node\n                if val is not None and val > 0:\n                    try:\n                        cgroup.set_limit('vmem', val)\n                    except Exception:\n                        result = False\n            if 'hugetlb' in cgroup.subsystems:\n                val = node.get_hpmem_on_node(ignore_reserved=False)\n                if val is not None and val > 0:\n                    try:\n                        cgroup.set_limit('hpmem', val)\n                    except Exception:\n                        result = False\n            if result:\n                return True\n        return False\n\n    def _execjob_attach_handler(self, event, cgroup, jobutil):\n        \"\"\"\n        Handler for execjob_attach events.\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        # Ensure the job ID has been removed from cgroup_jobs\n        
cgroup.remove_jobid_from_cgroup_jobs(event.job.id)\n        pbs.logjobmsg(jobutil.job.id, '%s: Attaching PID %s' %\n                      (caller_name(), event.pid))\n        # Add all processes in the job session to the appropriate cgroups\n        cgroup.add_pids(event.pid, jobutil.job.id)\n        return True\n\n    def _execjob_resize_handler(self, event, cgroup, jobutil):\n        \"\"\"\n        Handler for execjob_resize events.\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        # Instantiate the NodeUtils class for get_memory_on_node and\n        # get_vmem_on_node\n        node = NodeUtils(cgroup.cfg)\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: NodeUtils class instantiated' %\n                   caller_name())\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Host assigned job resources: %s' %\n                   (caller_name(), jobutil.assigned_resources))\n        if (cgroup.cfg['cgroup']['cpuset']['enabled']\n                and not cgroup.cfg['cgroup']['cpuset']['allow_zero_cpus']):\n            if ('ncpus' not in jobutil.assigned_resources\n                    or jobutil.assigned_resources['ncpus'] <= 0):\n                if event.job.in_ms_mom():\n                    pbs.logmsg(pbs.EVENT_ERROR,\n                               'cpuset enabled with mandatory ncpus >= 1, '\n                               'but job does not request ncpus on '\n                               'mother superior: rejecting resize')\n                    event.reject('cpuset enabled with mandatory ncpus >= 1, '\n                                 'but job does not request ncpus on '\n                                 'mother superior: rejecting resize')\n                else:\n                    # will be done in configure_job\n                    pbs.logmsg(pbs.EVENT_ERROR,\n                               'cpuset enabled with mandatory ncpus >= 1, '\n                               'but job no longer requests ncpus on host: '\n              
                 'deleting cpuset cgroup')\n\n        # Configure the cgroup\n        cgroup.configure_job(event.job, jobutil.assigned_resources,\n                             node, cgroup, event.type)\n        # Write out the assigned resources\n        cgroup.write_cgroup_assigned_resources(event.job.id)\n        # Write out the environment variable for the host (pbs_attach)\n        if 'device_names' in cgroup.assigned_resources:\n            pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Devices: %s' %\n                       (caller_name(),\n                        cgroup.assigned_resources['device_names']))\n            env_list = []\n            if cgroup.assigned_resources['device_names']:\n                mics = []\n                gpus = []\n                for key in cgroup.assigned_resources['device_names']:\n                    if key.startswith('mic'):\n                        mics.append(key[3:])\n                    elif (key.startswith('nvidia')\n                            and \"gpu\" in node.devices\n                            and key in node.devices['gpu']):\n                        if 'uuid' in node.devices['gpu'][key]:\n                            gpus.append(node.devices['gpu'][key]['uuid'])\n                        if 'uuids' in node.devices['gpu'][key]:\n                            gpus.extend(node.devices['gpu'][key]['uuids'])\n                if mics:\n                    env_list.append('OFFLOAD_DEVICES=%s' %\n                                    \",\".join(mics))\n                if gpus:\n                    # Don't put quotes around the values. 
ex \"0\" or \"0,1\".\n                    # This will cause it to fail.\n                    env_list.append('CUDA_VISIBLE_DEVICES=%s' %\n                                    \",\".join(gpus))\n                    env_list.append('CUDA_DEVICE_ORDER=PCI_BUS_ID')\n            pbs.logmsg(pbs.EVENT_DEBUG4, 'ENV_LIST: %s' % env_list)\n            cgroup.write_job_env_file(event.job.id, env_list)\n        return True\n\n    def _execjob_postsuspend_handler(self, event, cgroup, jobutil):\n        \"\"\"\n        Handler for execjob_postsuspend events.\n        \"\"\"\n        return True\n\n    def _execjob_preresume_handler(self, event, cgroup, jobutil):\n        \"\"\"\n        Handler for execjob_preresume events.\n        \"\"\"\n        return True\n\n#\n# CLASS JobUtils\n#\n\n\nclass JobUtils(object):\n    \"\"\"\n    Job utility methods\n    \"\"\"\n\n    def __init__(self, job, hostname=None, assigned_resources=None):\n        self.job = job\n        if hostname is not None:\n            self.hostname = hostname\n        else:\n            self.hostname = pbs.get_local_nodename()\n        if assigned_resources is not None:\n            self.assigned_resources = assigned_resources\n        else:\n            self.assigned_resources = self._get_assigned_job_resources()\n\n    def __repr__(self):\n        return ('JobUtils(%s, %s, %s)' %\n                (repr(self.job),\n                 repr(self.hostname),\n                 repr(self.assigned_resources)))\n\n    def _get_assigned_job_resources(self, hostname=None):\n        \"\"\"\n        Return a dictionary of assigned resources on the local node\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        # Bail out if no hostname was provided\n        if not hostname:\n            hostname = self.hostname\n        if not hostname:\n            raise CgroupProcessingError('No hostname available')\n        # Bail out if no job information is present\n        if self.job is 
None:\n            raise CgroupProcessingError('No job information available')\n        # Create a list of local vnodes\n        vnodes = []\n        vnhost_pattern = r'%s\\[[\\d]+\\]' % hostname\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: vnhost pattern: %s' %\n                   (caller_name(), vnhost_pattern))\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Job exec_vnode list: %s' %\n                   (caller_name(), self.job.exec_vnode))\n        for match in re.findall(vnhost_pattern, str(self.job.exec_vnode)):\n            vnodes.append(match)\n        if vnodes:\n            pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Vnodes on %s: %s' %\n                       (caller_name(), hostname, vnodes))\n        # Collect host assigned resources\n        resources = {}\n        for chunk in self.job.exec_vnode.chunks:\n            if vnodes:\n                # Vnodes list is not empty\n                if chunk.vnode_name not in vnodes:\n                    continue\n                if 'vnodes' not in resources:\n                    resources['vnodes'] = {}\n                if chunk.vnode_name not in resources['vnodes']:\n                    resources['vnodes'][chunk.vnode_name] = {}\n                # Initialize any missing resources for the vnode.\n                # This check is needed because some resources might\n                # not be present in each chunk of a job. 
For example:\n                # exec_vnodes =\n                # (node1[0]:ncpus=4:mem=4gb+node1[1]:mem=2gb) +\n                # (node1[1]:ncpus=3+node[0]:ncpus=1:mem=4gb)\n                for resc in list(chunk.chunk_resources.keys()):\n                    vnresc = resources['vnodes'][chunk.vnode_name]\n                    if resc in list(vnresc.keys()):\n                        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: %s:%s defined' %\n                                   (caller_name(), chunk.vnode_name, resc))\n                    else:\n                        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: %s:%s missing' %\n                                   (caller_name(), chunk.vnode_name, resc))\n                        vnresc[resc] = \\\n                            initialize_resource(chunk.chunk_resources[resc])\n                pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Chunk %s resources: %s' %\n                           (caller_name(), chunk.vnode_name, resources))\n            else:\n                # Vnodes list is empty\n                if chunk.vnode_name != hostname:\n                    continue\n            for resc in list(chunk.chunk_resources.keys()):\n                if resc not in list(resources.keys()):\n                    resources[resc] = \\\n                        initialize_resource(chunk.chunk_resources[resc])\n                # Add resource value to total\n                if isinstance(chunk.chunk_resources[resc],\n                              (pbs.pbs_int, pbs.pbs_float, pbs.size)):\n                    resources[resc] += chunk.chunk_resources[resc]\n                    pbs.logmsg(pbs.EVENT_DEBUG4,\n                               '%s: resources[%s][%s] is now %s' %\n                               (caller_name(), hostname, resc,\n                                resources[resc]))\n                    if vnodes:\n                        resources['vnodes'][chunk.vnode_name][resc] += \\\n                            chunk.chunk_resources[resc]\n                
else:\n                    pbs.logmsg(pbs.EVENT_DEBUG4,\n                               '%s: Setting resource %s to string %s' %\n                               (caller_name(), resc,\n                                str(chunk.chunk_resources[resc])))\n                    resources[resc] = str(chunk.chunk_resources[resc])\n                    if vnodes:\n                        resources['vnodes'][chunk.vnode_name][resc] = \\\n                            str(chunk.chunk_resources[resc])\n        if resources:\n            pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Resources for %s: %s' %\n                       (caller_name(), hostname, repr(resources)))\n            # Return assigned resources for specified host\n            return resources\n        else:\n            pbs.logmsg(pbs.EVENT_JOB_USAGE, \"WARNING: job seems \"\n                       + \"to have no resources assigned to this host.\")\n            pbs.logmsg(pbs.EVENT_JOB_USAGE,\n                       \"Server and MoM vnode names may not be consistent.\")\n            pbs.logmsg(pbs.EVENT_JOB_USAGE,\n                       \"Pattern for expected vnode name(s) is %s\"\n                       % vnhost_pattern)\n            pbs.logmsg(pbs.EVENT_JOB_USAGE,\n                       \"Job exec_vnode is %s\" % str(self.job.exec_vnode))\n            pbs.logmsg(pbs.EVENT_JOB_USAGE,\n                       \"You may have forgotten to set PBS_MOM_NODE_NAME to \"\n                       \"the desired matching entry in the exec_vnode string\")\n            pbs.logmsg(pbs.EVENT_JOB_USAGE,\n                       \"Job will fail or be configured with default ncpus/mem\")\n            return {}\n\n\n#\n# CLASS NodeUtils\n#\nclass NodeUtils(object):\n    \"\"\"\n    Node utility methods\n    NOTE: Multiple log messages pertaining to devices have been commented\n          out due to the size of the messages. 
They may be uncommented for\n          additional debugging if necessary.\n    \"\"\"\n\n    def __init__(self, cfg, hostname=None, cpuinfo=None, meminfo=None,\n                 numa_nodes=None, devices=None):\n        self.cfg = cfg\n        if hostname is not None:\n            self.hostname = hostname\n        else:\n            self.hostname = pbs.get_local_nodename()\n        if cpuinfo is not None:\n            self.cpuinfo = cpuinfo\n        else:\n            self.cpuinfo = self._discover_cpuinfo()\n        if meminfo is not None:\n            self.meminfo = meminfo\n        else:\n            self.meminfo = self._discover_meminfo()\n        if numa_nodes is not None:\n            self.numa_nodes = numa_nodes\n        else:\n            self.numa_nodes = dict()\n            self.numa_nodes = self._discover_numa_nodes()\n        if devices is not None:\n            self.devices = devices\n        elif self.cfg['cgroup']['devices']['enabled']:\n            self.devices = self._discover_devices()\n        else:\n            self.devices = {}\n        # Add the devices count i.e. 
nmics and ngpus to the numa nodes\n        self._add_device_counts_to_numa_nodes()\n        # Information for offlining nodes\n        self.offline_file = os.path.join(PBS_MOM_HOME, 'mom_priv', 'hooks',\n                                         ('%s.offline' %\n                                          pbs.event().hook_name))\n        self.offline_msg = 'Hook %s: ' % pbs.event().hook_name\n        self.offline_msg += 'Unable to clean up one or more cgroups'\n\n    def __repr__(self):\n        return ('NodeUtils(%s, %s, %s, %s, %s, %s)' %\n                (repr(self.cfg),\n                 repr(self.hostname),\n                 repr(self.cpuinfo),\n                 repr(self.meminfo),\n                 repr(self.numa_nodes),\n                 repr(self.devices)))\n\n    def _add_device_counts_to_numa_nodes(self):\n        \"\"\"\n        Update the device counts per numa node\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        for dclass in self.devices:\n            pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Device class: %s' %\n                       (caller_name(), dclass))\n            if dclass == 'mic' or dclass == 'gpu':\n                for inst in self.devices[dclass]:\n                    numa_node = self.devices[dclass][inst]['numa_node']\n                    if dclass == 'mic' and inst.find('mic') != -1:\n                        if 'nmics' not in self.numa_nodes[numa_node]:\n                            self.numa_nodes[numa_node]['nmics'] = 1\n                        else:\n                            self.numa_nodes[numa_node]['nmics'] += 1\n                    elif dclass == 'gpu':\n                        if 'ngpus' not in self.numa_nodes[numa_node]:\n                            self.numa_nodes[numa_node]['ngpus'] = 1\n                        else:\n                            self.numa_nodes[numa_node]['ngpus'] += 1\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'NUMA nodes: %s' % (self.numa_nodes))\n        return\n\n    
def _discover_numa_nodes(self):\n        \"\"\"\n        Discover what type of hardware is on this node and how it\n        is partitioned\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        numa_nodes = {}\n        for node in glob.glob(os.path.join(os.sep, 'sys', 'devices',\n                                           'system', 'node', 'node*')):\n            # The basename will be node0, node1, etc.\n            # Capture the numeric portion as the identifier/ordinal.\n            num = int(os.path.basename(node)[4:])\n            if num not in numa_nodes:\n                numa_nodes[num] = {}\n                numa_nodes[num]['devices'] = []\n            exclude = expand_list(self.cfg['cgroup']['cpuset']['exclude_cpus'])\n            with open(os.path.join(node, 'cpulist'), 'r') as desc:\n                avail = expand_list(desc.readline())\n                numa_nodes[num]['cpus'] = [\n                    x for x in avail if x not in exclude]\n            with open(os.path.join(node, 'meminfo'), 'r') as desc:\n                for line in desc:\n                    # Each line will contain four or five fields. 
Examples:\n                    # Node 0 MemTotal:       32995028 kB\n                    # Node 0 HugePages_Total:     0\n                    entries = line.split()\n                    if len(entries) < 4:\n                        continue\n                    if entries[2] == 'MemTotal:':\n                        numa_nodes[num]['MemTotal'] = \\\n                            convert_size(entries[3] + entries[4], 'kb')\n                    elif entries[2] == 'HugePages_Total:':\n                        numa_nodes[num]['HugePages_Total'] = int(entries[3])\n        # Adjust NUMA nodes wrt reserved memory resource\n        # note that the get_* routines only work because this routine\n        # is called in the constructor; that constructor first\n        # sets self.numa_nodes to the empty dict, which triggers\n        # the get_* routines to use the non-NUMA section\n        num_numa_nodes = len(numa_nodes)\n        if self.cfg['vnode_per_numa_node'] and num_numa_nodes > 0:\n            # Huge page memory\n            host_hpmem = self.get_hpmem_on_node(ignore_reserved=True)\n            host_resv_hpmem = host_hpmem - self.get_hpmem_on_node()\n            if host_resv_hpmem < 0:\n                host_resv_hpmem = 0\n            node_resv_hpmem = int(math.ceil(host_resv_hpmem / num_numa_nodes))\n            if 'Hugepagesize' in self.meminfo:\n                node_resv_hpmem -= \\\n                    (node_resv_hpmem\n                     % size_as_int(self.meminfo['Hugepagesize']))\n            # Physical memory\n            host_mem = self.get_memory_on_node(ignore_reserved=True)\n            host_mem_net = self.get_memory_on_node(ignore_reserved=False)\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       '%s: gross mem = %s, net mem = %s'\n                       % (caller_name(), host_mem, host_mem_net))\n            host_resv_mem = host_mem - host_mem_net\n            if host_resv_mem < 0:\n                host_resv_mem = 0\n            node_resv_mem 
= int(math.ceil(host_resv_mem / num_numa_nodes))\n            node_resv_mem -= node_resv_mem % (1024 * 1024)\n            # Swap to be added to virtual memory\n            # net values, taking into account reserved memory\n            host_vmem_net = self.get_vmem_on_node(ignore_reserved=False)\n            node_swapmem = int(math.floor((host_vmem_net - host_mem_net)\n                                          / num_numa_nodes))\n            if node_swapmem < 0:\n                node_swapmem = 0\n            node_swapmem -= node_swapmem % (1024 * 1024)\n            # Set the NUMA node values\n            for num in numa_nodes:\n                val = 0\n                if 'HugePages_Total' in numa_nodes[num]:\n                    val = numa_nodes[num]['HugePages_Total']\n                    if 'Hugepagesize' in self.meminfo:\n                        val *= size_as_int(self.meminfo['Hugepagesize'])\n                    val -= node_resv_hpmem\n                numa_nodes[num]['hpmem'] = val\n                val = size_as_int(numa_nodes[num]['MemTotal'])\n                # round down only svr-reported values, not internal values\n                val -= node_resv_mem\n                numa_nodes[num]['mem'] = val\n                val += node_swapmem\n                # round down only svr-reported values, not internal values\n                numa_nodes[num]['vmem'] = val\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: %s' % (caller_name(), numa_nodes))\n        return numa_nodes\n\n    def _devinfo(self, path):\n        \"\"\"\n        Returns major minor and type from device\n        \"\"\"\n        # If the stat fails, log it and continue.\n        try:\n            statinfo = os.stat(path)\n        except OSError:\n            pbs.logmsg(pbs.EVENT_DEBUG2, '%s: Stat error on %s' %\n                       (caller_name(), path))\n            return None\n        major = os.major(statinfo.st_rdev)\n        minor = os.minor(statinfo.st_rdev)\n        if 
stat.S_ISCHR(statinfo.st_mode):\n            dtype = 'c'\n        else:\n            dtype = 'b'\n        pbs.logmsg(pbs.EVENT_DEBUG4,\n                   'Path: %s, Major: %d, Minor: %d, Type: %s' %\n                   (path, major, minor, dtype))\n        return {'major': major, 'minor': minor, 'type': dtype}\n\n    def _discover_devices(self):\n        \"\"\"\n        Identify devices and to which numa nodes they are attached\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        devices = {}\n        # First loop identifies all devices and determines their true path,\n        # major/minor device IDs, and NUMA node affiliation (if any).\n        paths = glob.glob(os.path.join(os.sep, 'sys', 'class', '*', '*'))\n        paths.extend(glob.glob(os.path.join(\n            os.sep, 'sys', 'bus', 'pci', 'devices', '*')))\n        for path in paths:\n            # Skip this path if it is not a directory\n            if not os.path.isdir(path):\n                continue\n            dirs = path.split(os.sep)   # Path components\n            dclass = dirs[-2]   # Device class\n            inst = dirs[-1]   # Device instance\n            if dclass not in devices:\n                devices[dclass] = {}\n            devices[dclass][inst] = {}\n            devices[dclass][inst]['realpath'] = os.path.realpath(path)\n            # Determine the PCI bus ID of the device\n            devices[dclass][inst]['bus_id'] = ''\n            if dirs[-3] == 'pci' and dirs[-2] == 'devices':\n                devices[dclass][inst]['bus_id'] = dirs[-1]\n            # Determine the major and minor device numbers\n            filename = os.path.join(devices[dclass][inst]['realpath'], 'dev')\n            devices[dclass][inst]['major'] = None\n            devices[dclass][inst]['minor'] = None\n            if os.path.isfile(filename):\n                with open(filename, 'r') as desc:\n                    major, minor = list(\n                        
map(int, desc.readline().strip().split(':')))\n                    devices[dclass][inst]['major'] = major\n                    devices[dclass][inst]['minor'] = minor\n            numa_node = -1\n            subdir = os.path.join(devices[dclass][inst]['realpath'], 'device')\n            # The numa_node file is not always in the same place\n            # so work our way up the path trying to find it.\n            while len(subdir.split(os.sep)) > 2:\n                filename = os.path.join(subdir, 'numa_node')\n                if os.path.isfile(filename):\n                    # The file should contain a single integer\n                    with open(filename, 'r') as desc:\n                        numa_node = int(desc.readline().strip())\n                    break\n                subdir = os.path.dirname(subdir)\n            if numa_node < 0:\n                numa_node = 0\n            devices[dclass][inst]['numa_node'] = numa_node\n        # Second loop determines device types and their location\n        # under /dev. 
Only look for block and character devices.\n        for path in find_files(os.path.join(os.sep, 'dev'), kind='bc',\n                               follow_mounts=False):\n            # If the stat fails, log it and continue.\n            devinfo = self._devinfo(path)\n            if not devinfo:\n                continue\n\n            for dclass in devices:\n                for inst in devices[dclass]:\n                    if 'type' not in devices[dclass][inst]:\n                        devices[dclass][inst]['type'] = None\n                    if 'device' not in devices[dclass][inst]:\n                        devices[dclass][inst]['device'] = None\n                    if devices[dclass][inst]['major'] == devinfo['major']:\n                        if devices[dclass][inst]['minor'] == devinfo['minor']:\n                            devices[dclass][inst]['type'] = devinfo['type']\n                            devices[dclass][inst]['device'] = path\n        # Check to see if there are gpus on the node and copy them\n        # into their own dictionary.\n        devices['gpu'] = {}\n        gpus = self._discover_gpus()\n        if gpus:\n            for dclass in devices:\n                for inst in devices[dclass]:\n                    for gpuid in gpus:\n                        bus_id = devices[dclass][inst]['bus_id'].lower()\n                        if bus_id == gpus[gpuid]['pci_bus_id']:\n                            devices['gpu'][gpuid] = devices[dclass][inst]\n                            # For NVIDIA devices, sysfs doesn't contain a dev\n                            # file, so we must get the major, minor and device\n                            # type from the matching /dev/nvidia[0-9]*\n                            if gpuid.startswith('nvidia'):\n                                path = os.path.join(os.sep, 'dev', gpuid)\n                                # If the stat fails, continue.\n                                devinfo = self._devinfo(path)\n                      
          if not devinfo:\n                                    continue\n                                devices[dclass][inst]['major'] = \\\n                                    devinfo['major']\n                                devices[dclass][inst]['minor'] = \\\n                                    devinfo['minor']\n                                devices[dclass][inst]['type'] = \\\n                                    devinfo['type']\n                                devices[dclass][inst]['device'] = path\n                                devices[dclass][inst]['uuid'] = \\\n                                    gpus[gpuid]['uuid']\n        # if any gpu has mig enabled, let's use them\n        if gpus and any(gpus[gpu]['mig'] for gpu in gpus):\n            nvidia_cap_major = None\n            with open(os.path.join(os.sep, 'proc', 'devices')) as f:\n                for line in f:\n                    device = line.split()\n                    if len(device) != 2:\n                        continue\n                    if device[1] == 'nvidia-caps':\n                        nvidia_cap_major = int(device[0])\n                        break\n            if nvidia_cap_major is None:\n                pbs.logmsg(pbs.EVENT_SYSTEM, '%s: A GPU has MIG enabled, but '\n                           'nvidia-caps is not found in /proc/devices. 
'\n                           'Skipping MIG configuration'\n                           % caller_name())\n            else:\n                gis = {}\n                # we need major, minor, type, uuids, numa_node\n                for gpuid in gpus:\n                    if 'gis' not in gpus[gpuid]:\n                        continue\n                    for giid in gpus[gpuid]['gis']:\n                        gi = gpus[gpuid]['gis'][giid]\n                        if 'cis' not in gi:\n                            # no cis found for this gi, just skip\n                            continue\n                        name = 'nvidia-cap%d' % gi['minor']\n                        numa = devices['gpu'][gpuid]['numa_node']\n                        major = devices['gpu'][gpuid]['major']\n                        minor = devices['gpu'][gpuid]['minor']\n                        # extra_devs are the device numbers of the physical\n                        # gpu, as well as the nvidia controller\n                        new_gpu = {'major': nvidia_cap_major,\n                                   'minor': gi['minor'],\n                                   'is_gi': True,\n                                   'type': 'c',\n                                   'numa_node': numa,\n                                   'extra_devs': ['%d:%d' % (major, minor),\n                                                  '%d:255' % (major)],\n                                   'uuids': []\n                                   }\n                        for ciid in gi['cis']:\n                            ci = gi['cis'][ciid]\n                            new_gpu['uuids'].append(ci['uuid'])\n                            major = nvidia_cap_major\n                            minor = ci['minor']\n                            new_gpu['extra_devs'].append(\n                                '%d:%d' % (major, minor))\n                        devices['gpu'][name] = new_gpu\n                    del devices['gpu'][gpuid]\n\n        
pbs.logmsg(pbs.EVENT_DEBUG4, 'Processed GPUs: %s' % devices['gpu'])\n        if gpus and not devices['gpu']:\n            pbs.logmsg(pbs.EVENT_SYSTEM, '%s: GPUs discovered but could not '\n                       'be successfully mapped to devices.' % (caller_name()))\n        return devices\n\n    def _discover_gpus(self):\n        \"\"\"\n        Return a dictionary where the keys are the name of the GPU devices\n        and the values are the PCI bus IDs.\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        gpus = {}\n        if self.cfg['discover_gpus'] and self.cfg['nvidia-smi']:\n            cmd = [self.cfg['nvidia-smi'], '-q', '-x']\n        else:\n            return gpus\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'NVIDIA SMI command: %s' % cmd)\n        time_start = time.time()\n        mig_found = False\n        try:\n            # Try running the nvidia-smi command\n            process = subprocess.Popen(cmd, shell=False,\n                                       stdout=subprocess.PIPE,\n                                       stderr=subprocess.PIPE,\n                                       universal_newlines=True)\n            out = process.communicate()[0]\n        except Exception:\n            pbs.logmsg(pbs.EVENT_DEBUG4, 'Failed to execute: %s' %\n                       \" \".join(cmd))\n            pbs.logmsg(pbs.EVENT_DEBUG3, '%s: No GPUs found' % caller_name())\n            return gpus\n        elapsed_time = time.time() - time_start\n        if elapsed_time > 2.0:\n            pbs.logmsg(pbs.EVENT_DEBUG,\n                       '%s: nvidia-smi call took %f seconds' %\n                       (caller_name(), elapsed_time))\n        # if we get a non-str type then convert before calling xmlet\n        # should not happen since we passed universal_newlines True\n        try:\n            # Try parsing the output\n            import xml.etree.ElementTree as xmlet\n            root = 
xmlet.fromstring(stringified_output(out))\n            pbs.logmsg(pbs.EVENT_DEBUG4, 'root.tag: %s' % root.tag)\n            for child in root:\n                if child.tag == 'gpu':\n                    bus_id = child.get('id')\n                    result = re.match(r'([^:]+):(.*)', bus_id)\n                    if not result:\n                        raise ValueError('GPU ID not recognized: ' + bus_id)\n                    domain, instance = result.groups()\n                    # Make sure the PCI domain is 16 bits (4 hex digits)\n                    if len(domain) == 8:\n                        domain = domain[-4:]\n                    if len(domain) != 4:\n                        raise ValueError('GPU ID not recognized: ' + bus_id)\n                    name = 'nvidia%s' % child.find('minor_number').text\n                    mig_enabled = False\n                    mig_mode = child.find('mig_mode')\n                    if mig_mode is not None:\n                        current_mig = mig_mode.find('current_mig')\n                        if current_mig is not None:\n                            if current_mig.text == 'Enabled':\n                                mig_found = True\n                                mig_enabled = True\n                    gpus[name] = {\n                        'pci_bus_id': (domain + ':' + instance).lower(),\n                        'uuid': child.find('uuid').text,\n                        'mig': mig_enabled\n                    }\n        except Exception as exc:\n            pbs.logmsg(pbs.EVENT_DEBUG, 'Unexpected error: %s' % exc)\n\n        # at least one gpu had mig enabled, let's add the gis\n        if mig_found:\n            self._discover_migs(gpus)\n\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'GPUs: %s' % gpus)\n        return gpus\n\n    def _discover_migs(self, gpus):\n        \"\"\"\n        Mutate the gpus dictionary with mig info, GIs and CIs\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % 
caller_name())\n        # find GIs\n        cmd = [self.cfg['nvidia-smi'], 'mig', '-lgi']\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'NVIDIA SMI command: %s' % cmd)\n        time_start = time.time()\n        out = []\n        try:\n            # Try running the nvidia-smi command\n            process = subprocess.Popen(cmd, shell=False,\n                                       stdout=subprocess.PIPE,\n                                       stderr=subprocess.PIPE,\n                                       universal_newlines=True)\n            out = process.communicate()[0].split('\\n')\n        except Exception:\n            pbs.logmsg(pbs.EVENT_DEBUG4, 'Failed to execute: %s' %\n                       \" \".join(cmd))\n            pbs.logmsg(pbs.EVENT_DEBUG3, '%s: No MIGs found' % caller_name())\n            return\n        elapsed_time = time.time() - time_start\n        if elapsed_time > 2.0:\n            pbs.logmsg(pbs.EVENT_DEBUG,\n                       '%s: nvidia-smi call took %f seconds' %\n                       (caller_name(), elapsed_time))\n\n        # format is\n        # | GPUID   NAME       ProfileID  GIID       PLACE     |\n        # |====================================================|\n        # |   0  MIG 1g.5gb       19        7          4:1     |\n        # +----------------------------------------------------+\n        # |   0  MIG 1g.5gb       19        8          5:1     |\n        r = re.compile(r'^\\|\\s+(\\d+)\\s+MIG\\s+\\S+\\s+\\d+\\s+(\\d+)\\s+\\S+\\s+\\|$')\n        for out_line in out:\n            match = r.match(out_line)\n            if not match:\n                continue\n            gpu_num = int(match.group(1))\n            gpuid = 'nvidia%d' % gpu_num\n            giid = int(match.group(2))\n            minor = self._discover_mig_minor(gpu_num, giid, ci=None)\n            gi = {'minor': minor, 'gi': giid}\n            pbs.logmsg(pbs.EVENT_DEBUG4, 'GI found %s' % str(gi))\n            if 'gis' not in gpus[gpuid]:\n                
gpus[gpuid]['gis'] = {}\n            gpus[gpuid]['gis'][giid] = gi\n\n        # Find all MIG UUIDs using nvidia-smi -L\n        # and update it later while finding the CIs\n        cmd = [self.cfg['nvidia-smi'], '-L']\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'NVIDIA SMI command: %s' % cmd)\n        time_start = time.time()\n        out = []\n\n        try:\n            # Try running the nvidia-smi command\n            process = subprocess.Popen(cmd, shell=False,\n                                       stdout=subprocess.PIPE,\n                                       stderr=subprocess.PIPE,\n                                       universal_newlines=True)\n            out = process.communicate()[0].split('\\n')\n        except Exception:\n            pbs.logmsg(pbs.EVENT_DEBUG4, 'Failed to execute: %s' %\n                       \" \".join(cmd))\n            pbs.logmsg(pbs.EVENT_DEBUG3, '%s: No MIGs found' % caller_name())\n            return\n        elapsed_time = time.time() - time_start\n        if elapsed_time > 2.0:\n            pbs.logmsg(pbs.EVENT_DEBUG,\n                       '%s: nvidia-smi call took %f seconds' %\n                       (caller_name(), elapsed_time))\n\n        # format is:\n        # GPU gpuID: gpu_name (UUID: GPU UUID)\n        # MIG xg.ygb      Device  0: (UUID: MIG-UUID)\n        # MIG xg.ygb      Device  1: (UUID: MIG-UUID)\n\n        # Map (gpu_id, instance_id) to a MIG UUID\n        uuid_map = dict()\n        r_gpu = re.compile(r'\\s*GPU\\s(\\d+)')\n        r_mig = re.compile(r'\\s*MIG.*Device\\s*(\\d+)')\n        current_gpu = 0\n        for out_line in out:\n            gpu_match = r_gpu.match(out_line)\n            mig_match = r_mig.match(out_line)\n            if gpu_match:\n                current_gpu = int(gpu_match.group(1))\n                uuid_map[current_gpu] = dict()\n            if mig_match:\n                mig_uuid = out_line.split()[-1].rstrip(\")\")\n                uuid_map[current_gpu][int(mig_match.group(1))] = 
mig_uuid\n\n        pbs.logmsg(pbs.EVENT_DEBUG4, \"uuid map: %s\" % str(uuid_map))\n\n        # now find all CIs\n        cmd = [self.cfg['nvidia-smi'], 'mig', '-lci']\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'NVIDIA SMI command: %s' % cmd)\n        time_start = time.time()\n        out = []\n        try:\n            # Try running the nvidia-smi command\n            process = subprocess.Popen(cmd, shell=False,\n                                       stdout=subprocess.PIPE,\n                                       stderr=subprocess.PIPE,\n                                       universal_newlines=True)\n            out = process.communicate()[0].split('\\n')\n        except Exception:\n            pbs.logmsg(pbs.EVENT_DEBUG4, 'Failed to execute: %s' %\n                       \" \".join(cmd))\n            pbs.logmsg(pbs.EVENT_DEBUG3, '%s: No MIGs found' % caller_name())\n            return\n        elapsed_time = time.time() - time_start\n        if elapsed_time > 2.0:\n            pbs.logmsg(pbs.EVENT_DEBUG,\n                       '%s: nvidia-smi call took %f seconds' %\n                       (caller_name(), elapsed_time))\n\n        # format is either\n        # | Compute instances:                                    |\n        # | GPU     GPU       Name             Profile   Instance |\n        # |       Instance                       ID        ID     |\n        # |         ID                                            |\n        # |   0      7       MIG 1g.5gb           0         0     |\n        # or\n        # | Compute instances:                                             |\n        # | GPU     GPU       Name         Profile   Instance   Placement  |\n        # |       Instance                   ID        ID       Start:Size |\n        # |         ID                                                     |\n        # |   0      7       MIG 1g.5gb       0         0          0:1     |\n        r = re.compile(\n            
r'^\\|\\s+(\\d+)\\s+(\\d+)\\s+MIG\\s+\\S+\\s+\\d+\\s+(\\d+)\\s+(\\S+\\s+)?\\|$')\n\n        uuid_count = 0\n        gpu_num = 0\n        for out_line in out:\n            match = r.match(out_line)\n            if not match:\n                continue\n            # reset count if a new gpu is parsed\n            if gpu_num != int(match.group(1)):\n                uuid_count = 0\n            gpu_num = int(match.group(1))\n            gpuid = 'nvidia%d' % gpu_num\n            giid = int(match.group(2))\n            ciid = int(match.group(3))\n            minor = self._discover_mig_minor(gpu_num, giid, ciid)\n            if minor == -1:\n                continue\n            uuid = uuid_map[gpu_num][uuid_count]\n            uuid_count += 1\n            ci = {'minor': minor, 'gi': giid, 'ci': ciid, 'uuid': uuid}\n            pbs.logmsg(pbs.EVENT_DEBUG4, 'CI found %s' % str(ci))\n            if 'cis' not in gpus[gpuid]['gis'][giid]:\n                gpus[gpuid]['gis'][giid]['cis'] = {}\n            gpus[gpuid]['gis'][giid]['cis'][ciid] = ci\n\n    def _discover_mig_minor(self, gpu, gi, ci=None):\n        \"\"\"\n        Return the nvidia-caps minor number for a GPU instance (or\n        compute instance, if ci is given) access node, or -1 if not found\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        path = os.path.join(os.sep, 'proc', 'driver',\n                            'nvidia-caps', 'mig-minors')\n        if ci is None:\n            capname = 'gpu%d/gi%d/access' % (gpu, gi)\n        else:\n            capname = 'gpu%d/gi%d/ci%d/access' % (gpu, gi, ci)\n        with open(path, 'r') as minors:\n            for line in minors:\n                cap, minor = line.split(' ', 1)\n                if cap == capname:\n                    return int(minor)\n        pbs.logmsg(pbs.EVENT_DEBUG, 'Cannot find minor number for %s'\n                   % capname)\n        return -1\n\n    def _discover_meminfo(self):\n        \"\"\"\n        Return a dictionary keyed by field name with the various\n        memory sizes parsed from /proc/meminfo\n        \"\"\"\n        
pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        meminfo = {}\n        with open(os.path.join(os.sep, 'proc', 'meminfo'), 'r') as desc:\n            for line in desc:\n                entries = line.split()\n                if entries[0] == 'MemTotal:':\n                    meminfo[entries[0].rstrip(':')] = \\\n                        convert_size(entries[1] + entries[2], 'kb')\n                elif entries[0] == 'SwapTotal:':\n                    meminfo[entries[0].rstrip(':')] = \\\n                        convert_size(entries[1] + entries[2], 'kb')\n                elif entries[0] == 'Hugepagesize:':\n                    meminfo[entries[0].rstrip(':')] = \\\n                        convert_size(entries[1] + entries[2], 'kb')\n                elif entries[0] == 'HugePages_Total:':\n                    meminfo[entries[0].rstrip(':')] = int(entries[1])\n                elif entries[0] == 'HugePages_Rsvd:':\n                    meminfo[entries[0].rstrip(':')] = int(entries[1])\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'Discover meminfo: %s' % meminfo)\n        return meminfo\n\n    def _discover_cpuinfo(self):\n        \"\"\"\n        Return a dictionary where the keys include both global settings\n        and individual CPU characteristics\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        cpuinfo = {}\n        cpuinfo['cpu'] = {}\n        proc = None\n        with open(os.path.join(os.sep, 'proc', 'cpuinfo'), 'r') as desc:\n            for line in desc:\n                entries = line.strip().split(':')\n                if len(entries) < 2:\n                    # Blank line indicates end of processor\n                    proc = None\n                    continue\n                key = entries[0].strip()\n                val = entries[1].strip()\n                if proc is None and key != 'processor':\n                    raise ProcessingError('Failed to parse /proc/cpuinfo')\n                if 
key == 'processor':\n                    proc = int(val)\n                    if proc in cpuinfo['cpu']:\n                        raise ProcessingError('Duplicate CPU ID found')\n                    cpuinfo['cpu'][proc] = {}\n                    cpuinfo['cpu'][proc]['threads'] = []\n                elif key == 'flags':\n                    cpuinfo['cpu'][proc][key] = val.split()\n                elif val.isdigit():\n                    cpuinfo['cpu'][proc][key] = int(val)\n                else:\n                    cpuinfo['cpu'][proc][key] = val\n        if not cpuinfo['cpu']:\n            raise ProcessingError('No CPU information found')\n        cpuinfo['logical_cpus'] = len(cpuinfo['cpu'])\n        cpuinfo['hyperthreads_per_core'] = 1\n        cpuinfo['hyperthreads'] = []\n        # Now try to construct a dictionary with hyperthread information\n        # if this is an Intel or AMD based processor\n        try:\n            if ('Intel' in cpuinfo['cpu'][0]['vendor_id']\n                    or 'AuthenticAMD' in cpuinfo['cpu'][0]['vendor_id']):\n                if 'ht' in cpuinfo['cpu'][0]['flags']:\n                    cpuinfo['hyperthreads_per_core'] = \\\n                        int(cpuinfo['cpu'][0]['siblings']\n                            // cpuinfo['cpu'][0]['cpu cores'])\n                    # Map hyperthreads to physical cores\n                    if cpuinfo['hyperthreads_per_core'] > 1:\n                        pbs.logmsg(pbs.EVENT_DEBUG4,\n                                   'Mapping hyperthreads to cores')\n                        cores = list(cpuinfo['cpu'].keys())\n                        threads = set()\n                        # CPUs with matching core IDs are hyperthreads\n                        # sharing the same physical core. 
Loop through\n                        # the cores to construct a list of threads.\n                        for xid in cores:\n                            xcore = cpuinfo['cpu'][xid]\n                            for yid in cores:\n                                if yid < xid:\n                                    continue\n                                if yid == xid:\n                                    cpuinfo['cpu'][xid]['threads'].append(yid)\n                                    continue\n                                ycore = cpuinfo['cpu'][yid]\n                                if xcore['physical id'] != \\\n                                        ycore['physical id']:\n                                    continue\n                                if xcore['core id'] == ycore['core id']:\n                                    cpuinfo['cpu'][xid]['threads'].append(yid)\n                                    cpuinfo['cpu'][yid]['threads'].append(xid)\n                                    threads.add(yid)\n                        pbs.logmsg(pbs.EVENT_DEBUG4, 'HT cores: %s' % threads)\n                        cpuinfo['hyperthreads'] = sorted(threads)\n                    else:\n                        cores = cpuinfo['cpu'].keys()\n                        for xid in cores:\n                            cpuinfo['cpu'][xid]['threads'].append(xid)\n        except Exception:\n            pbs.logmsg(pbs.EVENT_DEBUG, '%s: Hyperthreading check failed' %\n                       caller_name())\n        cpuinfo['physical_cpus'] = int(cpuinfo['logical_cpus']\n                                       // cpuinfo['hyperthreads_per_core'])\n        wanted_keys = ['physical_cpus', 'logical_cpus',\n                       'hyperthreads_per_core', 'hyperthreads']\n        cpuinfo_short = dict((k, cpuinfo[k])\n                             for k in wanted_keys if k in cpuinfo)\n        pbs.logmsg(pbs.EVENT_DEBUG4, \"%s returning: %s\"\n                   % (caller_name(), cpuinfo_short))\n     
   if 0 in cpuinfo['cpu']:\n            pbs.logmsg(pbs.EVENT_DEBUG4, \"%s For CPU 0: cpuinfo['cpu'][0]: %s\"\n                       % (caller_name(), cpuinfo['cpu'][0]))\n        return cpuinfo\n\n    def gather_jobs_on_node(self, cgroup):\n        \"\"\"\n        Gather the jobs assigned to this node and local vnodes\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        # Construct a dictionary where the keys are job IDs and the values\n        # are timestamps. The job IDs are collected from the cgroup jobs\n        # file and by inspecting MoM's job directory. Both are needed to\n        # ensure orphans are properly identified.\n        jobdict = cgroup.read_cgroup_jobs()\n        pbs.logmsg(pbs.EVENT_DEBUG4,\n                   'cgroup_jobs file content: %s' % str(jobdict))\n        try:\n            for jobfile in glob.glob(os.path.join(PBS_MOM_JOBS, '*.JB')):\n                (jobid, dot_jb) = os.path.splitext(os.path.basename(jobfile))\n                if jobid not in jobdict:\n                    # os.path.getmtime() already returns a float in\n                    # Python 3; os.stat_float_times() was removed in 3.12\n                    jobdict[jobid] = os.path.getmtime(jobfile)\n        except Exception:\n            pbs.logmsg(pbs.EVENT_DEBUG, 'Could not get job list for %s' %\n                       self.hostname)\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'Local job dictionary: %s' % str(jobdict))\n        return jobdict\n\n    def get_memory_on_node(self, memtotal=None, ignore_reserved=False):\n        \"\"\"\n        Get the memory resource on this mom\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        total = 0\n        if self.numa_nodes and self.cfg['vnode_per_numa_node']:\n            # Caller wants the sum of all NUMA nodes\n            for nnid in self.numa_nodes:\n                if 'mem' in self.numa_nodes[nnid]:\n                    total += 
self.numa_nodes[nnid]['mem']\n                else:\n                    # NUMA node unreliable, make sure other method is used\n                    total = 0\n                    break\n            # only round down svr-reported values, not internal values\n            if total > 0:\n                return total\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       '%s: Failed to obtain memory using NUMA node method' %\n                       caller_name())\n        # Calculate total memory\n        try:\n            if memtotal is None:\n                total = size_as_int(self.meminfo['MemTotal'])\n            else:\n                total = size_as_int(memtotal)\n        except Exception:\n            pbs.logmsg(pbs.EVENT_DEBUG,\n                       '%s: Could not determine total node memory' %\n                       caller_name())\n            raise\n        if total <= 0:\n            raise ValueError('Total node memory value invalid')\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'total visible mem: %d' % total)\n        # Calculate reserved memory\n        reserved = 0\n        if not ignore_reserved:\n            reserve_pct = int(self.cfg['cgroup']['memory']['reserve_percent'])\n            reserved += int(total * (reserve_pct / 100.0))\n            reserve_amount = self.cfg['cgroup']['memory']['reserve_amount']\n            reserved += size_as_int(reserve_amount)\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'reserved mem: %d' % reserved)\n        # Calculate remaining memory\n        remaining = total - reserved\n        # only round down svr-reported values, not internal values\n        if remaining <= 0:\n            raise ValueError('Too much reserved memory')\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'remaining mem: %d' % remaining)\n        amount = convert_size(str(remaining), 'kb')\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Returning: %s' %\n                   (caller_name(), amount))\n        return remaining\n\n    def get_vmem_on_node(self, 
vmemtotal=None, ignore_reserved=False):\n        \"\"\"\n        Get the virtual memory resource on this mom\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        total = 0\n        # If NUMA nodes were not yet discovered then get totals\n        # using non-NUMA methods\n        if self.numa_nodes and self.cfg['vnode_per_numa_node']:\n            # Caller wants the sum of all NUMA nodes, and they were\n            # computed earlier\n            for nnid in self.numa_nodes:\n                if 'vmem' in self.numa_nodes[nnid]:\n                    total += self.numa_nodes[nnid]['vmem']\n                else:\n                    total = 0\n                    break\n            # only round down svr-reported values, not internal values\n            if total > 0:\n                return total\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       '%s: Failed to obtain vmem using NUMA node method' %\n                       caller_name())\n        # Calculate total vmem; start with visible or usable physical memory\n        total = self.get_memory_on_node(None, ignore_reserved)\n        if ignore_reserved:\n            pbs.logmsg(pbs.EVENT_DEBUG4, 'total visible mem: %d' % total)\n        else:\n            pbs.logmsg(pbs.EVENT_DEBUG4, 'total usable mem: %d' % total)\n        # Calculate total swap\n        try:\n            if vmemtotal is None:\n                swap = size_as_int(self.meminfo['SwapTotal'])\n            else:\n                swap = size_as_int(vmemtotal)\n        except Exception:\n            pbs.logmsg(pbs.EVENT_DEBUG,\n                       '%s: Could not determine total node swap' %\n                       caller_name())\n            raise\n        if swap <= 0:\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       '%s: No swap space detected' %\n                       caller_name())\n            swap = 0\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'total swap: %d' % swap)\n        
# Calculate reserved swap\n        reserved = 0\n        if not ignore_reserved:\n            reserve_pct = int(self.cfg['cgroup']['memsw']['reserve_percent'])\n            reserved += int(swap * (reserve_pct / 100.0))\n            reserve_amount = self.cfg['cgroup']['memsw']['reserve_amount']\n            reserved += size_as_int(reserve_amount)\n            pbs.logmsg(pbs.EVENT_DEBUG4, 'reserved swap: %d' % reserved)\n            if reserved > swap:\n                reserved = swap\n        # Calculate remaining vmem\n        remaining = total + swap - reserved\n        # only round down svr-reported values, not internal values\n        if remaining <= 0:\n            raise ValueError('Too much reserved vmem')\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'remaining vmem: %d' % remaining)\n        amount = convert_size(str(remaining), 'kb')\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Returning: %s' %\n                   (caller_name(), amount))\n        return remaining\n\n    def get_hpmem_on_node(self, hpmemtotal=None, ignore_reserved=False):\n        \"\"\"\n        Get the huge page memory resource on this mom\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        total = 0\n        if self.numa_nodes and self.cfg['vnode_per_numa_node']:\n            # Caller wants the sum of all NUMA nodes\n            for nnid in self.numa_nodes:\n                if 'hpmem' in self.numa_nodes[nnid]:\n                    total += self.numa_nodes[nnid]['hpmem']\n                else:\n                    total = 0\n                    break\n            # only round down svr-reported values, not internal values\n            if total > 0:\n                return total\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       '%s: Failed to obtain memory using NUMA node method' %\n                       caller_name())\n        # Calculate hpmem\n        try:\n            if hpmemtotal is None:\n                total = 
size_as_int(self.meminfo['Hugepagesize'])\n                total *= (self.meminfo['HugePages_Total'] -\n                          self.meminfo['HugePages_Rsvd'])\n            else:\n                total = size_as_int(hpmemtotal)\n        except Exception:\n            pbs.logmsg(pbs.EVENT_DEBUG3,\n                       '%s: Could not determine huge page availability' %\n                       caller_name())\n            total = 0\n        if total <= 0:\n            total = 0\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       '%s: No huge page memory detected' %\n                       caller_name())\n            return 0\n        # Calculate reserved hpmem\n        reserved = 0\n        if not ignore_reserved:\n            reserve_pct = int(self.cfg['cgroup']['hugetlb']['reserve_percent'])\n            reserved += int(total * (reserve_pct / 100.0))\n            reserve_amount = self.cfg['cgroup']['hugetlb']['reserve_amount']\n            reserved += size_as_int(reserve_amount)\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'reserved hpmem: %d' % reserved)\n        # Calculate remaining hpmem\n        remaining = total - reserved\n        # Round down to nearest huge page\n        if 'Hugepagesize' in self.meminfo:\n            remaining -= (remaining\n                          % (size_as_int(self.meminfo['Hugepagesize'])))\n        if remaining <= 0:\n            raise ValueError('Too much reserved hpmem')\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'remaining hpmem: %d' % remaining)\n        amount = convert_size(str(remaining), 'kb')\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Returning: %s' %\n                   (caller_name(), amount))\n        return remaining\n\n    def create_vnodes(self, vntype=None):\n        \"\"\"\n        Create individual vnodes per socket\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        vnode_list = pbs.event().vnode_list\n        if 
self.cfg['vnode_per_numa_node']:\n            vnodes = True\n            pbs.logmsg(pbs.EVENT_DEBUG4, '%s: vnode_per_numa_node is enabled' %\n                       caller_name())\n        else:\n            vnodes = False\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       '%s: vnode_per_numa_node is disabled' % caller_name())\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: numa nodes: %s' %\n                   (caller_name(), self.numa_nodes))\n        vnode_name = self.hostname\n        # In some cases the hostname and vnode name do not match\n        # admin should fix this!\n        # Give hints\n        if vnode_name not in vnode_list:\n            pbs.logmsg(pbs.EVENT_ERROR,\n                       \"Could not find hostname %s in vnode_list %s\"\n                       % (vnode_name, str(vnode_list.keys())))\n            pbs.logmsg(pbs.EVENT_ERROR,\n                       \"This error is FATAL. Possible causes:\")\n            pbs.logmsg(pbs.EVENT_ERROR, \"a) the server's name for \"\n                       \"the natural node created on the server \"\n                       \"does not match the output of 'hostname' on the host.\")\n            pbs.logmsg(pbs.EVENT_ERROR, \"   Please use PBS_MOM_NODE_NAME \"\n                       \"in /etc/pbs.conf to tell MoM the correct vnode name.\")\n            pbs.logmsg(pbs.EVENT_ERROR, \"b) v2 config files are used \"\n                       \"but none mention the natural vnode. 
Add a line for \"\n                       \"the natural vnode in one of the v2 config files.\")\n            raise ProcessingError('Could not identify local vnode')\n        vnode_list[vnode_name] = pbs.vnode(vnode_name)\n        host_resc_avail = vnode_list[vnode_name].resources_available\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: host_resc_avail: %s' %\n                   (caller_name(), host_resc_avail))\n        # Set resources_available.vntype of natural node according\n        # to what's in local file if it needs to be propagated to server\n        if (vntype and self.cfg['propagate_vntype_to_server']):\n            host_resc_avail['vntype'] = vntype\n            pbs.logmsg(pbs.EVENT_DEBUG4, '%s: vnode type set to %s'\n                       % (caller_name(), vntype))\n\n        vnode_msg_cpu = '%s: vnode_list[%s].resources_available[ncpus] = %d'\n        vnode_msg_mem = '%s: vnode_list[%s].resources_available[mem] = %s'\n\n        host_resc_avail['diag_messages'] = ''\n\n        if not vnodes:\n            # memory (global for host)\n            mem = self.get_memory_on_node(ignore_reserved=False)\n            # remove X MB - handle jitter in MemTotal observed in field\n            mem -= (1024 * 1024\n                    * self.cfg['cgroup']['memory']['vnode_hidden_mb'])\n            if mem < 0:\n                mem = 0\n            mem -= mem % (1024 * 1024)\n            mem = pbs.size(convert_size(mem, 'mb'))\n            host_resc_avail['mem'] = mem\n            pbs.logmsg(pbs.EVENT_DEBUG4, vnode_msg_mem %\n                       (caller_name(), vnode_name,\n                        str(host_resc_avail['mem'])))\n            # memory+swap ('vmem') (global for host)\n            vmem = self.get_vmem_on_node(ignore_reserved=False)\n            # remove X MB - handle jitter in MemTotal observed in field\n            vmem -= (1024 * 1024\n                     * self.cfg['cgroup']['memory']['vnode_hidden_mb'])\n            if vmem < 0:\n                vmem = 
0\n            vmem -= vmem % (1024 * 1024)\n            # if there is no swap, trying to subtract vnode_hidden_mb\n            # will lower vmem under mem, do not allow\n            if vmem < mem:\n                vmem = mem\n            vmem = pbs.size(convert_size(vmem, 'mb'))\n            host_resc_avail['vmem'] = vmem\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       '%s: vnode_list[%s].resources_available[vmem] = %s' %\n                       (caller_name(), vnode_name,\n                        str(host_resc_avail['vmem'])))\n            # huge page mem (global for host)\n            val = self.get_hpmem_on_node(ignore_reserved=False)\n            # remove X MB - handle jitter in mem reported by OS\n            val -= (1024 * 1024\n                    * self.cfg['cgroup']['hugetlb']\n                              ['vnode_hidden_mb'])\n            if val < 0:\n                val = 0\n            val -= val % (1024 * 1024)  # round down to MB\n            host_resc_avail['hpmem'] = \\\n                pbs.size(convert_size(val, 'mb'))\n            if (self.cfg['cgroup']['memsw']['enabled']\n                    and self.cfg['cgroup']['memsw']['manage_cgswap']):\n                # special case 0 cgswap because assigning a \"0mb\"\n                # pbs.size difference causes a bug\n                if vmem == mem:\n                    host_resc_avail['cgswap'] = pbs.size('0')\n                else:\n                    host_resc_avail['cgswap'] = (host_resc_avail['vmem']\n                                                 - host_resc_avail['mem'])\n            else:\n                # This unsets it if it was defined earlier and needs to\n                # disappear\n                host_resc_avail['cgswap'] = None\n        else:\n            # set the value on the host to 0\n            host_resc_avail['mem'] = pbs.size('0')\n            host_resc_avail['vmem'] = pbs.size('0')\n            host_resc_avail['hpmem'] = pbs.size('0')\n            if (self.cfg['cgroup']['memsw']['enabled']\n                    and 
self.cfg['cgroup']['memsw']['manage_cgswap']):\n                host_resc_avail['cgswap'] = pbs.size('0')\n            # Some kernel drivers \"reserve\" memory on /proc/meminfo\n            # and this is not reflected in the node meminfo; if necessary,\n            # adjust the values published as resources_available.mem\n            # on the per-socket vnodes by the difference spread over nodes\n            adjust_bytes_per_node = 0\n            num_numa_nodes = len(self.numa_nodes)\n            total_nodemem = 0\n            for num in self.numa_nodes:\n                total_nodemem += size_as_int(self.numa_nodes[num]['MemTotal'])\n            total_hostmem = size_as_int(self.meminfo['MemTotal'])\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       '%s: host memtotal %s; node memtotal %s'\n                       % (caller_name(),\n                          str(total_hostmem), str(total_nodemem)))\n            if total_hostmem < total_nodemem:\n                adjust_bytes_per_node = \\\n                    int(math.ceil((total_nodemem - total_hostmem)\n                                  / float(num_numa_nodes)))\n                pbs.logmsg(pbs.EVENT_DEBUG,\n                           '%s: host memtotal %s < node memtotal %s, '\n                           'adjusting each vnode mem down by %s bytes'\n                           % (caller_name(),\n                              str(total_hostmem), str(total_nodemem),\n                              str(adjust_bytes_per_node)))\n        for nnid in self.numa_nodes:\n            if vnodes:\n                vnode_key = vnode_name + '[%d]' % nnid\n                vnode_list[vnode_key] = pbs.vnode(vnode_name)\n                vnode_resc_avail = vnode_list[vnode_key].resources_available\n                # ensure that if no devices were discovered that\n                # ngpus is set to 0; otherwise MoM may still use stale value\n                vnode_resc_avail['ngpus'] = 0\n                if (vntype and 
self.cfg['propagate_vntype_to_server']):\n                    vnode_resc_avail['vntype'] = vntype\n            for key, val in sorted(self.numa_nodes[nnid].items()):\n                if key is None:\n                    pbs.logmsg(pbs.EVENT_DEBUG4, '%s: key is None'\n                               % caller_name())\n                    continue\n                if val is None:\n                    pbs.logmsg(pbs.EVENT_DEBUG4, '%s: val is None'\n                               % caller_name())\n                    continue\n                pbs.logmsg(pbs.EVENT_DEBUG4, '%s: %s = %s'\n                           % (caller_name(), key, val))\n                if key in ['MemTotal', 'HugePages_Total']:\n                    # Irrelevant: transformed to other keys if vnodes is True\n                    # done outside of loop if vnodes is False\n                    pbs.logmsg(pbs.EVENT_DEBUG4, '%s: key %s skipped'\n                               % (caller_name(), key))\n                elif key == 'cpus':\n                    threads = len(val)\n                    if not self.cfg['use_hyperthreads']:\n                        # Do not treat a hyperthread as a core when\n                        # use_hyperthreads is false.\n                        threads = int(threads\n                                      // self.cpuinfo['hyperthreads_per_core'])\n                    elif self.cfg['ncpus_are_cores']:\n                        # use_hyperthreads and ncpus_are_cores are both true,\n                        # advertise only one thread per core\n                        threads = int(threads\n                                      // self.cpuinfo['hyperthreads_per_core'])\n                    if vnodes:\n                        # set the value on the host to 0\n                        host_resc_avail['ncpus'] = 0\n                        pbs.logmsg(pbs.EVENT_DEBUG4, vnode_msg_cpu %\n                                   (caller_name(), vnode_name,\n                                    
host_resc_avail['ncpus']))\n                        # set the vnode value\n                        vnode_resc_avail['ncpus'] = threads\n                        pbs.logmsg(pbs.EVENT_DEBUG4, vnode_msg_cpu %\n                                   (caller_name(), vnode_key,\n                                    vnode_resc_avail['ncpus']))\n                    else:\n                        if 'ncpus' not in host_resc_avail:\n                            host_resc_avail['ncpus'] = 0\n                        if not isinstance(host_resc_avail['ncpus'],\n                                          (int, pbs.pbs_int)):\n                            host_resc_avail['ncpus'] = 0\n                        # update the cumulative value\n                        host_resc_avail['ncpus'] += threads\n                        pbs.logmsg(pbs.EVENT_DEBUG4, vnode_msg_cpu %\n                                   (caller_name(), vnode_name,\n                                    host_resc_avail['ncpus']))\n                elif key in ['mem', 'vmem', 'hpmem']:\n                    # Used for vnodes per NUMA socket\n                    if vnodes:\n                        mem_val = val\n                        if isinstance(val, float):\n                            mem_val = int(val)\n                        # remove X MB - handle jitter in mem reported by OS\n                        key_to_subsys = ({'mem': 'memory',\n                                          'vmem': 'memsw',\n                                          'hpmem': 'hugetlb'})\n                        if key != 'hpmem':\n                            mem_val -= adjust_bytes_per_node\n                        mem_val -= (1024 * 1024\n                                    * self.cfg['cgroup'][key_to_subsys[key]]\n                                                        ['vnode_hidden_mb'])\n                        if mem_val < 0:\n                            mem_val = 0\n                        mem_val -= mem_val % (1024 * 1024)\n                     
   vnode_resc_avail[key] = \\\n                            pbs.size(convert_size(mem_val, 'mb'))\n                elif isinstance(val, list):\n                    pass\n                elif isinstance(val, dict):\n                    pass\n                else:\n                    pbs.logmsg(pbs.EVENT_DEBUG4, '%s: key = %s (%s)' %\n                               (caller_name(), key, type(key)))\n                    pbs.logmsg(pbs.EVENT_DEBUG4, '%s: val = %s (%s)' %\n                               (caller_name(), val, type(val)))\n                    if vnodes:\n                        vnode_resc_avail[key] = val\n                        host_resc_avail[key] = initialize_resource(val)\n                    else:\n                        if key not in host_resc_avail:\n                            host_resc_avail[key] = initialize_resource(val)\n                        else:\n                            if not host_resc_avail[key]:\n                                host_resc_avail[key] = initialize_resource(val)\n                        host_resc_avail[key] += val\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: vnode list: %s' %\n                   (caller_name(), str(vnode_list)))\n        if vnodes:\n            for nnid in self.numa_nodes:\n                vnode_key = vnode_name + '[%d]' % nnid\n                vnode_resc_avail = vnode_list[vnode_key].resources_available\n                # vmem can be smaller than mem if there is no swap, since\n                # we try to hide 1MB of swap from the server\n                if vnode_resc_avail['vmem'] < vnode_resc_avail['mem']:\n                    vnode_resc_avail['vmem'] = vnode_resc_avail['mem']\n                if (self.cfg['cgroup']['memsw']['enabled']\n                        and self.cfg['cgroup']['memsw']['manage_cgswap']):\n                    # Special case 0 cgswap since a bug creates a crash when\n                    # the \"0mb\" pbs.size difference is assigned to the resource\n                    if 
vnode_resc_avail['vmem'] == vnode_resc_avail['mem']:\n                        vnode_resc_avail['cgswap'] = pbs.size('0')\n                    else:\n                        vnode_resc_avail['cgswap'] = \\\n                            vnode_resc_avail['vmem'] - vnode_resc_avail['mem']\n                else:\n                    # This unsets it if it was defined earlier and needs to\n                    # disappear\n                    host_resc_avail['cgswap'] = None\n\n                pbs.logmsg(pbs.EVENT_DEBUG4, '%s: %s vnode_resc_avail: %s' %\n                           (caller_name(), vnode_key, vnode_resc_avail))\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: host_resc_avail: %s' %\n                   (caller_name(), host_resc_avail))\n        return True\n\n    def take_node_offline(self):\n        \"\"\"\n        Take the local node and associated vnodes offline\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        pbs.logmsg(pbs.EVENT_DEBUG2, '%s: Taking vnode(s) offline' %\n                   caller_name())\n        # Check the offline file is present and skip if the node\n        # is already offline (to reduce server traffic)\n        if os.path.isfile(self.offline_file):\n            pbs.logmsg(pbs.EVENT_DEBUG2,\n                       '%s: Offline file already exists, skipping' %\n                       caller_name())\n            return\n        # Attempt to take vnodes that match this host offline\n        # Assume vnode names resemble self.hostname[#]\n        (vnode_comments, failure) = \\\n            fetch_vnode_comments(pbs.event().vnode_list.keys(),\n                                 timeout=self.cfg['server_timeout'])\n        if failure:\n            pbs.logmsg(pbs.EVENT_ERROR,\n                       '%s: Failed contacting server for vnode comments'\n                       % caller_name())\n            pbs.logmsg(pbs.EVENT_ERROR, '%s: Not bringing vnodes offline'\n                       % 
caller_name())\n            return\n        match_found = False\n        for vnode_name in pbs.event().vnode_list:\n            if (vnode_name == self.hostname or\n                    re.match(self.hostname + r'\\[.*\\]', vnode_name)):\n                pbs.event().vnode_list[vnode_name].state = pbs.ND_OFFLINE\n                vnode_comment = vnode_comments[vnode_name]\n                if vnode_comment:\n                    if self.offline_msg not in vnode_comment:\n                        vnode_comment += \" \" + self.offline_msg\n                else:\n                    vnode_comment = self.offline_msg\n                pbs.event().vnode_list[vnode_name].comment = vnode_comment\n                pbs.logmsg(pbs.EVENT_DEBUG2, '%s: %s; offlining %s' %\n                           (caller_name(), self.offline_msg, vnode_name))\n                match_found = True\n        if not match_found:\n            pbs.logmsg(pbs.EVENT_DEBUG2, '%s: No vnodes match %s' %\n                       (caller_name(), self.hostname))\n            return\n        # Write a file locally to reduce server traffic when the node\n        # is brought back online\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Offline file: %s' %\n                   (caller_name(), self.offline_file))\n        try:\n            # Write a timestamp so that exechost_periodic can avoid\n            # cleaning up before this event has sent updates to server\n            with open(self.offline_file, 'w') as fd:\n                fd.write(str(time.time()))\n        except Exception as exc:\n            pbs.logmsg(pbs.EVENT_DEBUG, '%s: Failed to write to %s: %s' %\n                       (caller_name(), self.offline_file, exc))\n        if not os.path.isfile(self.offline_file):\n            pbs.logmsg(pbs.EVENT_DEBUG2, '%s: Offline file not present: %s' %\n                       (caller_name(), self.offline_file))\n        pbs.logmsg(pbs.EVENT_DEBUG2, '%s: Node taken offline' %\n                   
caller_name())\n\n    def bring_node_online(self):\n        \"\"\"\n        Bring the local node and associated vnodes online\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        if not os.path.isfile(self.offline_file):\n            pbs.logmsg(pbs.EVENT_DEBUG3, '%s: Offline file not present: %s' %\n                       (caller_name(), self.offline_file))\n            return\n        # Read timestamp from offline file\n        timestamp = float()\n        try:\n            with open(self.offline_file, 'r') as fd:\n                timestamp = float(fd.read())\n        except Exception as exc:\n            pbs.logmsg(pbs.EVENT_DEBUG, '%s: Failed to read from %s: %s' %\n                       (caller_name(), self.offline_file, exc))\n            return\n        # Only bring node online after minimum delay has passed\n        delta = time.time() - timestamp\n        if delta < float(self.cfg['online_nodes_min_delay']):\n            pbs.logmsg(pbs.EVENT_DEBUG2,\n                       '%s: Too soon since node was offlined' % caller_name())\n            return\n        # Get comments for vnodes associated with this event\n        vnl = pbs.event().vnode_list.keys()\n        (vnode_comments, failure) = \\\n            fetch_vnode_comments(vnl, timeout=self.cfg['server_timeout'])\n        if failure:\n            pbs.logmsg(pbs.EVENT_ERROR,\n                       '%s: Failed contacting server for vnode comments'\n                       % caller_name())\n            pbs.logmsg(pbs.EVENT_ERROR, '%s: Not bringing vnodes online'\n                       % caller_name())\n            return\n        # Bring vnodes online that this hook has taken offline\n        for vnode_name in vnode_comments:\n            if self.offline_msg not in vnode_comments[vnode_name]:\n                pbs.logmsg(pbs.EVENT_DEBUG, ('%s: Comment for vnode %s '\n                                             + 'was not set by this hook')\n                    
       % (caller_name(), vnode_name))\n                continue\n            vnode = pbs.event().vnode_list[vnode_name]\n            vnode.state = pbs.ND_FREE\n            vnode_comment = vnode_comments[vnode_name]\\\n                .replace(self.offline_msg, \"\").strip()\n            if len(vnode_comment) == 0:\n                vnode_comment = None\n            vnode.comment = vnode_comment\n            pbs.logmsg(pbs.EVENT_DEBUG,\n                       '%s: Vnode %s will be brought back online' %\n                       (caller_name(), vnode_name))\n        # Remove the offline file\n        try:\n            os.remove(self.offline_file)\n        except Exception as exc:\n            pbs.logmsg(pbs.EVENT_DEBUG,\n                       '%s: Failed to remove offline file: %s' %\n                       (caller_name(), exc))\n\n\n#\n# CLASS CgroupUtils\n#\nclass CgroupUtils(object):\n    \"\"\"\n    Cgroup utility methods\n    \"\"\"\n\n    def __init__(self, hostname, vnode, cfg=None, subsystems=None,\n                 paths=None, vntype=None, assigned_resources=None,\n                 systemd_version=None):\n        self.hostname = hostname\n        self.vnode = vnode\n\n        # Read in the config file\n        if cfg is not None:\n            self.cfg = cfg\n        else:\n            self.cfg = self.parse_config_file()\n        # Determine the systemd version (zero for no systemd)\n        if systemd_version:\n            self.systemd_version = systemd_version\n        else:\n            self.systemd_version = self._get_systemd_version()\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: systemd version seems to be %d'\n                   % (caller_name(), self.systemd_version))\n        # Collect the cgroup mount points\n        if paths is not None:\n            self.paths = paths\n        else:\n            self.paths = self._get_paths()\n\n        # Define the local vnode type\n        if vntype is not None:\n            self.vntype = vntype\n        else:\n     
       self.vntype = self._get_vnode_type()\n\n        # morph the strings that should become booleans\n        # into booleans depending on host/vntype\n        self.morph_config_dict_bools(self.cfg)\n        pbs.logmsg(pbs.EVENT_DEBUG4, \"Final cgroup cfg: %s\" % repr(self.cfg))\n\n        # Remove the added \"enabled\" in the cgroup section\n        # it's added at all levels in the cfg because of the recursion,\n        # but some of the code assumes only iterables are\n        # in the \"cgroup\" dictionary,\n        # without any guards to protect against extras!!\n        if \"enabled\" in self.cfg['cgroup']:\n            del self.cfg['cgroup']['enabled']\n\n        if not self.cfg['cgroup']['devices']['enabled']:\n            self.cfg['discover_gpus'] = False\n            pbs.logmsg(pbs.EVENT_DEBUG, 'discover_gpus set to False because '\n                       'devices subsystem is disabled')\n\n        # Determine which subsystems we care about\n        if subsystems is not None:\n            self.subsystems = subsystems\n        else:\n            self.subsystems = self._target_subsystems()\n\n        # Check if memsw is enabled despite lack of kernel support\n        # if so disable memsw\n        if 'memsw' in self.subsystems:\n            memsw_limit_file = (self.paths['memsw']\n                                + 'limit_in_bytes')\n            if not os.path.isfile(memsw_limit_file):\n                pbs.logmsg(pbs.EVENT_ERROR,\n                           'CONFIG ERROR: CF enables memsw '\n                           'but %s is missing'\n                           % memsw_limit_file)\n                pbs.logmsg(pbs.EVENT_ERROR,\n                           'CONFIG ERROR: kernel swapaccount parameter is 0, '\n                           'disabling memsw subsystem')\n                pbs.logmsg(pbs.EVENT_ERROR,\n                           'CONFIG ERROR: to fix this, either add '\n                           'swapaccount=1 to kernel command line '\n             
              'or disable memsw')\n                self.cfg['cgroup']['memsw']['enabled'] = False\n                self.subsystems.remove('memsw')\n                pbs.logmsg(pbs.EVENT_DEBUG4,\n                           'remaining enabled subsystems: '\n                           + repr(self.subsystems))\n\n        # Return now if nothing is enabled\n        if not self.subsystems:\n            pbs.logmsg(pbs.EVENT_DEBUG2, '%s: No cgroups enabled' %\n                       caller_name())\n            self.assigned_resources = {}\n            return\n\n        # Collect the cgroup resources\n        if assigned_resources:\n            self.assigned_resources = assigned_resources\n        else:\n            self.assigned_resources = self._get_assigned_cgroup_resources()\n\n        # location to store information for the different hook events\n        self.hook_storage_dir = os.path.join(PBS_MOM_HOME, 'mom_priv',\n                                             'hooks', 'hook_data')\n\n        if not os.path.isdir(self.hook_storage_dir):\n            try:\n                os.makedirs(self.hook_storage_dir, 0o700)\n            except OSError:\n                pbs.logmsg(pbs.EVENT_DEBUG, 'Failed to create %s' %\n                           self.hook_storage_dir)\n        self.host_job_env_dir = os.path.join(PBS_MOM_HOME, 'aux')\n        self.host_job_env_filename = os.path.join(self.host_job_env_dir,\n                                                  '%s.env')\n        # Temporarily stores list of new jobs that came after job_list was\n        # written to mom hook input file (work around for periodic\n        # and begin race condition)\n        self.cgroup_jobs_file = os.path.join(self.hook_storage_dir,\n                                             'cgroup_jobs')\n        if not os.path.isfile(self.cgroup_jobs_file):\n            self.empty_cgroup_jobs_file()\n\n    def __repr__(self):\n        return ('CgroupUtils(%s, %s, %s, %s, %s, %s, %s, %s)' %\n                
(repr(self.hostname),\n                 repr(self.vnode),\n                 repr(self.cfg),\n                 repr(self.subsystems),\n                 repr(self.paths),\n                 repr(self.vntype),\n                 repr(self.assigned_resources),\n                 repr(self.systemd_version)))\n\n    def write_to_stderr(self, job, msg):\n        \"\"\"\n        Write a message to the job stderr file\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        filename = None\n        try:\n            if str(job.Join_Path) == 'oe':\n                # If we write to stderr we might do it after the join,\n                # so write to stdout instead\n                filename = job.stdout_file()\n            else:\n                filename = job.stderr_file()\n            pbs.logmsg(pbs.EVENT_DEBUG4, \"Message to be written to %s: %s\"\n                       % (str(filename), msg.strip()))\n            if filename is None:\n                return\n            # sticky bit requires leaving out os.O_CREAT\n            fd = os.open(filename, os.O_APPEND | os.O_WRONLY)\n            with os.fdopen(fd, 'a') as desc:\n                desc.write(msg)\n        except Exception:\n            # tough luck for the user, but not fatal to the system\n            # Note: sometimes normal (e.g. interactive jobs)\n            pass\n\n    def set_diag_messages(self, resc_used, msg, concate):\n        \"\"\"\n        Sets the 'diag_messages' resource to msg. 
A JSON-formatted msg\n        is required so that messages from sister MoMs and the primary\n        MoM can be merged.\n        Setting concate=True concatenates multiple string values of the\n        same dict key into one.\n        Example: set_diag_messages(resc_used, '{\"mem\": \"limit hit\"}', True)\n        \"\"\"\n        try:\n            json_msg = json.loads(msg)\n        except Exception:\n            json_msg = {}\n        if 'diag_messages' not in resc_used:\n            resc_used['diag_messages'] = json.dumps(json_msg)\n        elif concate:\n            try:\n                json_resc = json.loads(resc_used['diag_messages'])\n            except Exception:\n                json_resc = {}\n            for k in json_msg:\n                if k in json_resc and json_msg[k] not in json_resc[k]:\n                    json_resc[k] += f\", {json_msg[k]}\"\n                else:\n                    json_resc[k] = json_msg[k]\n            resc_used['diag_messages'] = json.dumps(json_resc)\n        else:\n            resc_used['diag_messages'] = json.dumps(json_msg)\n\n    def _target_subsystems(self):\n        \"\"\"\n        Determine which subsystems are being requested.\n        Note that config file parsing has already set \"enabled\" to\n        True/False where necessary, so there is no need to delve into\n        the config dictionary; self.enabled will discover it\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        # Check to see if this node is in the approved hosts list\n        subsystems = []\n        for key in self.cfg['cgroup']:\n            if self.enabled(key):\n                subsystems.append(key)\n        # Add an entry for systemd if anything else is enabled. 
        # This allows the hook to clean up any directories systemd
        # leaves behind. Add at start since we want this processed first.
        if subsystems and self.systemd_version >= 205:
            subsystems.insert(0, 'systemd')
        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Enabled subsystems: %s' %
                   (caller_name(), subsystems))
        # It is not an error for all subsystems to be disabled.
        # This host or vnode type may be in the excluded list.
        return subsystems

    def _copy_from_parent(self, dest):
        """
        Copy a setting from the parent cgroup
        """
        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())
        filename = os.path.basename(dest)
        subdir = os.path.dirname(dest)
        parent = os.path.dirname(subdir)
        source = os.path.join(parent, filename)
        pbs.logmsg(pbs.EVENT_DEBUG4,
                   'Copying value from %s to %s'
                   % (source, dest))
        if not os.path.isfile(source):
            raise CgroupConfigError('Failed to read %s' % (source))
        with open(source, 'r') as desc:
            self.write_value(dest, desc.read().strip())

    def _assemble_path(self, subsys, mnt_point, flags):
        """
        Determine the path for a cgroup directory given the subsystem, mount
        point, and mount flags
        """
        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())
        if 'noprefix' in flags:
            prefix = ''
        else:
            if subsys == 'hugetlb':
                # hugetlb includes size in prefix
                # TODO: make size component configurable
                prefix = subsys + '.2MB.'
            elif subsys == 'memsw':
                prefix = 'memory.' + subsys + '.'
            elif subsys in ['systemd', 'perf_event']:
                prefix = ''
            else:
                prefix = subsys + '.'
        return os.path.join(mnt_point,
                            str(self.cfg['cgroup_prefix'])
                            + '.service/jobid',
                            prefix)

    def _get_paths(self):
        """
        Create a dictionary of the cgroup subsystems and their corresponding
        directories, taking mount options (noprefix) into account
        """
        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())
        paths = {}
        subsys_to_discover = {'blkio', 'cpu', 'cpuacct', 'cpuset', 'devices',
                              'freezer', 'hugetlb', 'memory', 'net_cls',
                              'net_prio', 'perf_event', 'pids', 'rdma',
                              'systemd'}
        subsys_to_skip = set()

        # First deal with the ones with paths set in the configuration file
        for subsys in subsys_to_discover:
            if (subsys in self.cfg['cgroup']
                    and 'mount_path' in self.cfg['cgroup'][subsys]
                    and self.cfg['cgroup'][subsys]['mount_path']):
                flags = []
                if (subsys == 'cpuset'
                        and os.path.exists(os.path.join(self.cfg['cgroup']
                                                                ['cpuset']
                                                                ['mount_path'],
                                                        'cpus'))):
                    flags = ['noprefix']
                paths[subsys] = \
                    self._assemble_path(subsys,
                                        self.cfg['cgroup'][subsys]
                                                ['mount_path'],
                                        flags)
                # memory -> memsw, different prefix
                if subsys == 'memory':
                    paths['memsw'] = \
                        self._assemble_path(subsys,
                                            self.cfg['cgroup']['memory']
                                                    ['mount_path'],
                                            flags)
                subsys_to_skip.add(subsys)
        subsys_to_discover -= subsys_to_skip

        # Loop through the mounts and collect the ones for cgroups
        with open(os.path.join(os.sep, 'proc', 'mounts'), 'r') as desc:
            for line in desc:
                entries = line.split()
                if entries[2] != 'cgroup':
                    continue
                # It is possible to have more than one cgroup mounted in
                # the same place, so check them all for each mount.
                flags = entries[3].split(',')
                for subsys in subsys_to_discover:
                    if (subsys in flags
                            or (subsys == 'systemd'
                                and ('name=systemd' in flags))):
                        subsys_path_candidate = \
                            self._assemble_path(subsys, entries[1], flags)
                        # If there is more than one option, prefer the shortest
                        if (subsys not in paths
                            or (len(paths[subsys])
                                > len(subsys_path_candidate))):
                            if subsys in paths:
                                pbs.logmsg(pbs.EVENT_DEBUG3,
                                           '_get_paths: shorter path+prefix '
                                           '%s used for subsystem %s'
                                           % (subsys_path_candidate, subsys))
                            if subsys == 'memory':
                                # memory and memsw share a common mount point,
                                # but use a different prefix
                                paths['memsw'] = \
                                    self._assemble_path('memsw', entries[1],
                                                        flags)
                            paths[subsys] = subsys_path_candidate

        # If a host does not have any cgroup controllers mounted,
        # don't panic here; let the main code handle it by just
        # accepting the event.

        return paths

    def _cgroup_path(self, subsys, cgfile='', jobid=''):
        """
        Return the path to a cgroup file or directory
        """
        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())
        # Note: The tasks file never uses a prefix (e.g. use tasks and not
        # cpuset.tasks).
        # Note: The os.path.join() method is smart enough to ignore
        # empty strings unless they occur as the last parameter.
        if not subsys or subsys not in self.paths:
            return None
        try:
            subdir, prefix = os.path.split(self.paths[subsys])
        except Exception:
            return None
        if not cgfile:
            if jobid:
                # Caller wants parent directory of job
                return os.path.join(subdir, jobid, '')
            # Caller wants job parent directory for subsystem
            return os.path.join(subdir, '')
        # Caller wants full path to file
        if cgfile == 'tasks':
            # tasks file never uses a prefix
            return os.path.join(subdir, jobid, cgfile)
        if jobid:
            return os.path.join(subdir, jobid, prefix + cgfile)
        return os.path.join(subdir, prefix + cgfile)

    def morph_config_dict_bools(self, config_dict, subsection=None):
        """
        Morph 'vntype in: vntype1,vntype2,...' and 'vntype not in: ...'
        values into booleans depending on the vntype in the CgroupConfig
        object. Ditto for 'host in:' and 'host not in:'.
        Convert old run_only_on_hosts and exclude_vntypes to derive the
        'enabled' boolean (which will now _always_ be defined).
        """
        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())
        vntype = self.vntype
        # avoid crashes in fnmatch if no vntype was specified on the host
        if not vntype:
            vntype = "__UNKNOWN__"
        # get a 'basestring' type that identifies all string types;
        # already exists in Python 2 (has both str and unicode strings)
        # but not in Python 3, which only has "str" (always unicode)
        try:
            basestring
        except NameError:
            basestring = str
        # We want a view/iterator object, not something that creates copied
        # lists. iteritems()/viewitems() no longer exist in Py3,
        # since items() itself is now implemented as a view object.
        dict_iter_object = (config_dict.iteritems() if PYTHON2
                            else config_dict.items())
        for key_found, value_found in dict_iter_object:
            if isinstance(value_found, dict):
                # subsection found
                pbs.logmsg(pbs.EVENT_DEBUG4,
                           "%s: Cfg file parsing, subsection for %s"
                           % (caller_name(), key_found))
                self.morph_config_dict_bools(value_found, key_found)
                pbs.logmsg(pbs.EVENT_DEBUG4,
                           "%s: Cfg file parsing, finished subsection for %s"
                           % (caller_name(), key_found))
            elif isinstance(value_found, basestring):
                # string value -- could be a morphable description
                value_split = value_found.strip().split(':', 1)
                if (len(value_split) > 1
                        and value_split[0].lower().strip() == 'vntype in'):
                    vntypes_t_unstripped = value_split[1].split(',')
                    vntypes_t = [item.strip() for item in vntypes_t_unstripped]
                    if any([fnmatch.fnmatch(vntype, p) for p in vntypes_t]):
                        config_dict[key_found] = True
                    else:
                        config_dict[key_found] = False
                    pbs.logmsg(pbs.EVENT_DEBUG4,
                               "%s: Config file parsing,"
                               " set %s to %s based on vntype inclusion"
                               % (caller_name(), key_found,
                                  str(config_dict[key_found])))
                elif (len(value_split) > 1
                        and value_split[0].lower().strip() == 'vntype not in'):
                    vntypes_f_unstripped = value_split[1].split(',')
                    vntypes_f = [item.strip() for item in vntypes_f_unstripped]
                    if any([fnmatch.fnmatch(vntype, p) for p in vntypes_f]):
                        config_dict[key_found] = False
                    else:
                        config_dict[key_found] = True
                    pbs.logmsg(pbs.EVENT_DEBUG4,
                               "%s: Config file parsing,"
                               " set %s to %s based on vntype exclusion"
                               % (caller_name(), key_found,
                                  str(config_dict[key_found])))
                elif (len(value_split) > 1
                        and value_split[0].lower().strip() == 'host in'):
                    hosts_t_unstripped = value_split[1].split(',')
                    hosts_t = [item.strip() for item in hosts_t_unstripped]
                    if any([fnmatch.fnmatch(self.hostname, p)
                            for p in hosts_t]):
                        config_dict[key_found] = True
                    else:
                        config_dict[key_found] = False
                    pbs.logmsg(pbs.EVENT_DEBUG4,
                               "%s: Config file parsing,"
                               " set %s to %s based on host inclusion"
                               % (caller_name(), key_found,
                                  str(config_dict[key_found])))
                elif (len(value_split) > 1
                        and value_split[0].lower().strip() == 'host not in'):
                    hosts_f_unstripped = value_split[1].split(',')
                    hosts_f = [item.strip() for item in hosts_f_unstripped]
                    if any([fnmatch.fnmatch(self.hostname, p)
                            for p in hosts_f]):
                        config_dict[key_found] = False
                    else:
                        config_dict[key_found] = True
                    pbs.logmsg(pbs.EVENT_DEBUG4,
                               "%s: Config file parsing,"
                               " set %s to %s based on host exclusion"
                               % (caller_name(), key_found,
                                  str(config_dict[key_found])))

        # Old "exclude_vntypes" now modulates current "enabled"
        # (if present) or creates it if non-empty
        if ("exclude_vntypes" in config_dict
                and config_dict['exclude_vntypes']):
            if ("enabled" not in config_dict
                    or not isinstance(config_dict['enabled'], bool)):
                config_dict['enabled'] = False
            if (config_dict['enabled'] and
                    any([fnmatch.fnmatch(vntype, p)
                         for p in config_dict['exclude_vntypes']])):
                config_dict['enabled'] = False
                if subsection is None:
                    subname = 'all subsystems'
                else:
                    subname = 'subsystem ' + subsection
                pbs.logmsg(pbs.EVENT_DEBUG,
                           '%s: cgroup excluded for '
                           '%s on vnode type %s' %
                           (caller_name(), subname, vntype))
                pbs.logmsg(pbs.EVENT_DEBUG4,
                           '%s is in the excluded vnode type list: %s' %
                           (vntype, config_dict['exclude_vntypes']))
            pbs.logmsg(pbs.EVENT_DEBUG4,
                       "%s: Config file parsing, "
                       "set %s to %s based on exclude_vntypes"
                       % (caller_name(), 'enabled',
                          str(config_dict['enabled'])))

        # Old "exclude_hosts" now modulates current "enabled" (if present)
        # or creates enabled if it is non-empty
        if ("exclude_hosts" in config_dict
                and config_dict['exclude_hosts']):
            if ("enabled" not in config_dict
                    or not isinstance(config_dict['enabled'], bool)):
                config_dict['enabled'] = False
            if (config_dict['enabled'] and
                any([fnmatch.fnmatch(self.hostname, p)
                     for p in config_dict['exclude_hosts']])):
                config_dict['enabled'] = False
                if subsection is None:
                    subname = 'all subsystems'
                else:
                    subname = 'subsystem ' + subsection
                pbs.logmsg(pbs.EVENT_DEBUG,
                           '%s: cgroup excluded for '
                           '%s on host %s' %
                           (caller_name(), subname, self.hostname))
                pbs.logmsg(pbs.EVENT_DEBUG4,
                           '%s is in the excluded host list: %s' %
                           (self.hostname, config_dict['exclude_hosts']))
            pbs.logmsg(pbs.EVENT_DEBUG4,
                       "%s: Config file parsing, "
                       "set %s to %s based on exclude_hosts"
                       % (caller_name(), 'enabled',
                          str(config_dict['enabled'])))
        # The mirror image "include_hosts" of old "exclude_hosts"
        # now modulates current "enabled" (if present)
        # or creates enabled if it is non-empty
        if ("include_hosts" in config_dict
                and config_dict['include_hosts']):
            if ("enabled" not in config_dict
                    or not isinstance(config_dict['enabled'], bool)):
                config_dict['enabled'] = \
                    any([fnmatch.fnmatch(self.hostname, p)
                         for p in config_dict['include_hosts']])
            else:
                config_dict['enabled'] = \
                    (config_dict['enabled']
                     or any([fnmatch.fnmatch(self.hostname, p)
                             for p in config_dict['include_hosts']]))
            pbs.logmsg(pbs.EVENT_DEBUG4,
                       "%s: Config file parsing, "
                       "set %s to %s based on include_hosts"
                       % (caller_name(), 'enabled',
                          str(config_dict['enabled'])))

        # Add "disabled" if unspecified
        if "enabled" not in config_dict:
            config_dict['enabled'] = False
            pbs.logmsg(pbs.EVENT_DEBUG4,
                       "%s: Config file parsing, "
                       "section disabled by default"
                       % caller_name())

        # Old "run_only_on_hosts" limits enabled hosts if present
        if ("run_only_on_hosts" in config_dict
                and config_dict['run_only_on_hosts']):
            config_dict['enabled'] = \
                (config_dict['enabled']
                 and any([fnmatch.fnmatch(self.hostname, p)
                          for p in config_dict['run_only_on_hosts']]))
            pbs.logmsg(pbs.EVENT_DEBUG4,
                       "%s: Config file parsing,"
                       " set %s to %s based on run_only_on_hosts"
                       % (caller_name(), 'enabled',
                          str(config_dict['enabled'])))

    @staticmethod
    def parse_config_file():
        """
        Read the config file in JSON format
        """
        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())
        # Turn everything off by default. These settings may be modified
        # when the configuration file is read. Keep the keys in sync
        # with the default cgroup configuration files.
        defaults = {}
        defaults['enabled'] = True
        defaults['cgroup_prefix'] = 'pbs_jobs'
        defaults['cgroup_lock_file'] = os.path.join(PBS_MOM_HOME, 'mom_priv',
                                                    'cgroups.lock')
        defaults['discover_gpus'] = True
        defaults['nvidia-smi'] = os.path.join(os.sep, 'usr', 'bin',
                                              'nvidia-smi')
        defaults['exclude_hosts'] = []
        defaults['exclude_vntypes'] = []
        defaults['run_only_on_hosts'] = []
        defaults['periodic_resc_update'] = False
        defaults['vnode_per_numa_node'] = False
        defaults['online_offlined_nodes'] = False
        defaults['online_nodes_min_delay'] = 30
        defaults['use_hyperthreads'] = False
        defaults['ncpus_are_cores'] = False
        defaults['kill_timeout'] = 10
        defaults['server_timeout'] = 15
        defaults['job_setup_timeout'] = 30
        defaults['placement_type'] = 'load_balanced'
        defaults['propagate_vntype_to_server'] = True
        defaults['manage_rlimit_as'] = True
        defaults['cgroup'] = {}
        defaults['cgroup']['cpu'] = {}
        defaults['cgroup']['cpu']['enabled'] = False
        defaults['cgroup']['cpu']['exclude_hosts'] = []
        defaults['cgroup']['cpu']['exclude_vntypes'] = []
        defaults['cgroup']['cpu']['cfs_period_us'] = 100000
        defaults['cgroup']['cpu']['cfs_quota_fudge_factor'] = 1.03
        defaults['cgroup']['cpu']['enforce_per_period_quota'] = False
        # For zero_cpus_shares_fraction, the default value of 0.002 * 1000 = 2
        # is the smallest value allowed by the kernel as cpu.shares
        defaults['cgroup']['cpu']['zero_cpus_shares_fraction'] = 0.002
        defaults['cgroup']['cpu']['zero_cpus_quota_fraction'] = 0.2
        defaults['cgroup']['blkio'] = {}
        defaults['cgroup']['blkio']['enabled'] = False
        defaults['cgroup']['cpuacct'] = {}
        defaults['cgroup']['cpuacct']['enabled'] = False
        defaults['cgroup']['cpuacct']['exclude_hosts'] = []
        defaults['cgroup']['cpuacct']['exclude_vntypes'] = []
        defaults['cgroup']['cpuset'] = {}
        defaults['cgroup']['cpuset']['enabled'] = False
        defaults['cgroup']['cpuset']['exclude_cpus'] = []
        defaults['cgroup']['cpuset']['exclude_hosts'] = []
        defaults['cgroup']['cpuset']['exclude_vntypes'] = []
        defaults['cgroup']['cpuset']['mem_fences'] = True
        defaults['cgroup']['cpuset']['mem_hardwall'] = False
        defaults['cgroup']['cpuset']['memory_spread_page'] = False
        defaults['cgroup']['cpuset']['allow_zero_cpus'] = True
        defaults['cgroup']['devices'] = {}
        defaults['cgroup']['devices']['enabled'] = False
        defaults['cgroup']['devices']['exclude_hosts'] = []
        defaults['cgroup']['devices']['exclude_vntypes'] = []
        defaults['cgroup']['devices']['allow'] = []
        defaults['cgroup']['memory'] = {}
        defaults['cgroup']['memory']['enabled'] = False
        defaults['cgroup']['memory']['exclude_hosts'] = []
        defaults['cgroup']['memory']['exclude_vntypes'] = []
        defaults['cgroup']['memory']['soft_limit'] = False
        defaults['cgroup']['memory']['default'] = '0MB'
        defaults['cgroup']['memory']['enforce_default'] = True
        defaults['cgroup']['memory']['exclhost_ignore_default'] = False
        defaults['cgroup']['memory']['reserve_percent'] = 0
        defaults['cgroup']['memory']['reserve_amount'] = '0MB'
        defaults['cgroup']['memory']['vnode_hidden_mb'] = 1
        defaults['cgroup']['memory']['swappiness'] = 1
        defaults['cgroup']['memsw'] = {}
        defaults['cgroup']['memsw']['enabled'] = False
        defaults['cgroup']['memsw']['exclude_hosts'] = []
        defaults['cgroup']['memsw']['exclude_vntypes'] = []
        defaults['cgroup']['memsw']['default'] = '0MB'
        defaults['cgroup']['memsw']['enforce_default'] = True
        defaults['cgroup']['memsw']['exclhost_ignore_default'] = False
        defaults['cgroup']['memsw']['reserve_percent'] = 0
        defaults['cgroup']['memsw']['reserve_amount'] = '0MB'
        defaults['cgroup']['memsw']['vnode_hidden_mb'] = 1
        defaults['cgroup']['memsw']['manage_cgswap'] = False
        defaults['cgroup']['hugetlb'] = {}
        defaults['cgroup']['hugetlb']['enabled'] = False
        defaults['cgroup']['hugetlb']['exclude_hosts'] = []
        defaults['cgroup']['hugetlb']['exclude_vntypes'] = []
        defaults['cgroup']['hugetlb']['default'] = '0MB'
        defaults['cgroup']['hugetlb']['enforce_defaults'] = True
        defaults['cgroup']['hugetlb']['exclhost_ignore_default'] = False
        defaults['cgroup']['hugetlb']['reserve_percent'] = 0
        defaults['cgroup']['hugetlb']['reserve_amount'] = '0MB'
        defaults['cgroup']['hugetlb']['vnode_hidden_mb'] = 1

        # These are unmanaged -- if enabled, only creation/removal
        # and filling in 'tasks' are currently performed
        defaults['cgroup']['blkio'] = {}
        defaults['cgroup']['blkio']['enabled'] = False
        defaults['cgroup']['freezer'] = {}
        defaults['cgroup']['freezer']['enabled'] = False
        defaults['cgroup']['net_cls'] = {}
        defaults['cgroup']['net_cls']['enabled'] = False
        defaults['cgroup']['net_prio'] = {}
        defaults['cgroup']['net_prio']['enabled'] = False
        defaults['cgroup']['perf_event'] = {}
        defaults['cgroup']['perf_event']['enabled'] = False
        defaults['cgroup']['pids'] = {}
        defaults['cgroup']['pids']['enabled'] = False
        defaults['cgroup']['rdma'] = {}
        defaults['cgroup']['rdma']['enabled'] = False

        # Identify the config file and read in the data
        config_file = ''
        if 'PBS_HOOK_CONFIG_FILE' in os.environ:
            config_file = os.environ['PBS_HOOK_CONFIG_FILE']
        if not config_file:
            tmpcfg = os.path.join(PBS_MOM_HOME, 'mom_priv', 'hooks',
                                  'pbs_cgroups.CF')
            if os.path.isfile(tmpcfg):
                config_file = tmpcfg
        if not config_file:
            tmpcfg = os.path.join(PBS_HOME, 'server_priv', 'hooks',
                                  'pbs_cgroups.CF')
            if os.path.isfile(tmpcfg):
                config_file = tmpcfg
        if not config_file:
            tmpcfg = os.path.join(PBS_MOM_HOME, 'mom_priv', 'hooks',
                                  'pbs_cgroups.json')
            if os.path.isfile(tmpcfg):
                config_file = tmpcfg
        if not config_file:
            tmpcfg = os.path.join(PBS_HOME, 'server_priv', 'hooks',
                                  'pbs_cgroups.json')
            if os.path.isfile(tmpcfg):
                config_file = tmpcfg
        if not config_file:
            raise CgroupConfigError('Config file not found')
        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Config file is %s' %
                   (caller_name(), config_file))
        try:
            with open(config_file, 'r') as desc:
                config = merge_dict(defaults,
                                    json.load(desc,
                                              object_hook=decode_dict))
            # The config file entry denotes reserved _swap_,
            # but vnode_hidden_mb as used in the code for vmem relies
            # on the total for physical plus swap (i.e. memsw)
            config['cgroup']['memsw']['vnode_hidden_mb'] += \
                config['cgroup']['memory']['vnode_hidden_mb']

        except IOError:
            raise CgroupConfigError('I/O error reading config file')
        pbs.logmsg(pbs.EVENT_DEBUG4,
                   '%s: cgroup hook configuration: %s' %
                   (caller_name(), config))
        config['cgroup_prefix'] = systemd_escape(config['cgroup_prefix'])
        return config

    def _create_service(self):
        """
        Create the pbs_jobs.service systemd service
        """
        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())
        if self.systemd_version < 205:
            return
        if pbs.event().type == pbs.EXECJOB_BEGIN:
            # normally the service is there for this event -- if so, return
            try:
                cmd = ['systemctl', 'is-active', self.cfg['cgroup_prefix']]
                process = subprocess.Popen(cmd, shell=False,
                                           stdout=subprocess.PIPE,
                                           stderr=subprocess.PIPE,
                                           universal_newlines=True)
                out = process.communicate()[0]
                if process.returncode == 0:
                    # service is already active -- no need to create it
                    return
                else:
                    pbs.logmsg(pbs.EVENT_DEBUG,
                               '%s: systemctl is-active <svc> return code %s'
                               % (caller_name(), process.returncode))
            except Exception:
                pbs.logmsg(pbs.EVENT_DEBUG4,
                           '%s: Failed to call systemctl is-active'
                           % caller_name())
                # was worth a try -- try to create the service now
                # and see if that fails
                pass
        description = 'PBS Pro job parent service'
        servicefile = os.path.join(os.sep, 'run', 'systemd', 'system',
                                   self.cfg['cgroup_prefix'] + '.service')
        try:
            with open(servicefile, 'w') as desc:
                desc.write('[Unit]\n'
                           'Description=%s\n'
                           '[Service]\n'
                           'Type=simple\n'
                           'Slice=-.slice\n'
                           'TasksMax=infinity\n'
                           'RemainAfterExit=yes\n'
                           'ExecStart=/bin/sleep infinity\n'
                           'ExecStop=/bin/true\n'
                           'CPUAccounting=off\n'
                           'MemoryAccounting=off\n'
                           'Delegate=yes'
                           % description)
                desc.truncate()
        except Exception:
            pbs.logmsg(pbs.EVENT_DEBUG, '%s: Failed to write service file: %s'
                       % (caller_name(), servicefile))
            raise
        try:
            cmd = ['systemctl', 'start', os.path.basename(servicefile)]
            process = subprocess.Popen(cmd, shell=False,
                                       stdout=subprocess.PIPE,
                                       stderr=subprocess.PIPE,
                                       universal_newlines=True)
            out, err = process.communicate()
            if process.returncode == 0:
                pbs.logmsg(pbs.EVENT_DEBUG,
                           '%s: Started systemd service with file %s'
                           % (caller_name(), servicefile))
            else:
                pbs.logmsg(pbs.EVENT_ERROR,
                           '%s: systemctl start <svc> return code %s'
                           % (caller_name(), process.returncode))
                pbs.logmsg(pbs.EVENT_ERROR, '%s: stderr of systemctl was: %s'
                           % (caller_name(),
                              stringified_output(err)))
        except Exception:
            pbs.logmsg(pbs.EVENT_DEBUG,
                       '%s: Failed to start systemd service: %s'
                       % (caller_name(), os.path.basename(servicefile)))
            raise

    def create_paths(self):
        """
        Create the cgroup parent directories that will contain the jobs
        """
        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())
        old_umask = os.umask(0o022)
        try:
            # Create a systemd service for PBS jobs (if necessary)
            self._create_service()
            # Create the directories that PBS will use to house the jobs,
            # now under the controller mount at <prefix>.service/jobid
            for subsys in self.subsystems:
                subdir = self._cgroup_path(subsys)
                if not subdir:
                    raise CgroupConfigError('No path for subsystem: %s'
                                            % (subsys))
                created = False
                if not os.path.exists(subdir):
                    os.makedirs(subdir, 0o755)
                    created = True
                    pbs.logmsg(pbs.EVENT_DEBUG2, '%s: Created directory %s' %
                               (caller_name(), subdir))
                if not created and pbs.event().type == pbs.EXECJOB_BEGIN:
                    # only exechost_startup configures values in the cgroups
                    # if they already exist
                    continue
                if subsys in ('memory', 'memsw'):
                    # Enable 'use_hierarchy' for memory when either memory
                    # or memsw is in use.
                    filename = self._cgroup_path('memory', 'use_hierarchy')
                    if not os.path.isfile(filename):
                        raise CgroupConfigError('Failed to configure %s' %
                                                (filename))
                    try:
                        self.write_value(filename, 1)
                    except CgroupBusyError:
                        # Some kernels do not like the value written when
                        # other jobs are running, or if the parent already
                        # enabled use_hierarchy.
                        # But then things were already set up anyway
                        pass
                elif subsys == 'cpuset':
                    # Copy the <cgroup_prefix>.service cpuset values
                    # from its parent (root), then copy those to the
                    # directory under it; necessary if systemd is not there
                    (cpuset_dir, cpus_file) = \
                        os.path.split(self._cgroup_path(subsys, 'cpus'))
                    (memset_dir, mems_file) = \
                        os.path.split(self._cgroup_path(subsys, 'mems'))
                    cpuset_pardir = os.path.dirname(cpuset_dir)
                    self._copy_from_parent(os.path.join(cpuset_pardir,
                                                        cpus_file))
                    self._copy_from_parent(os.path.join(cpuset_pardir,
                                                        mems_file))
                    self._copy_from_parent(self._cgroup_path(subsys, 'cpus'))
                    self._copy_from_parent(self._cgroup_path(subsys, 'mems'))
        except Exception as exc:
            raise CgroupConfigError('Failed to create cgroup paths: %s' % exc)
        finally:
            os.umask(old_umask)

    def _get_vnode_type(self):
        """
        Return the vnode type of the local node
        """
pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        # self.vnode is not defined for pbs_attach events so the vnode\n        # type gets cached in the mom_priv/vntype file. First, check\n        # to see if it is defined.\n        resc_vntype = ''\n        if self.vnode is not None:\n            if 'vntype' in self.vnode.resources_available:\n                if self.vnode.resources_available['vntype']:\n                    resc_vntype = self.vnode.resources_available['vntype']\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'resc_vntype: %s' % resc_vntype)\n        # Next, read it from the cache file.\n        file_vntype = ''\n        filename = os.path.join(PBS_MOM_HOME, 'mom_priv', 'vntype')\n        try:\n            with open(filename, 'r') as desc:\n                file_vntype = desc.readline().strip()\n        except Exception:\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       '%s: Failed to read vntype file %s' %\n                       (caller_name(), filename))\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'file_vntype: %s' % file_vntype)\n        # If vntype was not set then log a message. 
It is too expensive\n        # to have all moms query the server for large jobs.\n        if not resc_vntype and not file_vntype:\n            pbs.logmsg(pbs.EVENT_DEBUG3,\n                       '%s: Could not determine vntype' % caller_name())\n            return None\n        # Return file_vntype if it is set and resc_vntype is not.\n        if not resc_vntype and file_vntype:\n            pbs.logmsg(pbs.EVENT_DEBUG4, 'vntype: %s' % file_vntype)\n            return file_vntype\n        # Make sure the cache file is up to date.\n        if resc_vntype and resc_vntype != file_vntype:\n            pbs.logmsg(pbs.EVENT_DEBUG4, 'Updating vntype file')\n            try:\n                with open(filename, 'w') as desc:\n                    desc.write(resc_vntype)\n            except Exception:\n                pbs.logmsg(pbs.EVENT_DEBUG2,\n                           '%s: Failed to update vntype file %s' %\n                           (caller_name(), filename))\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'vntype: %s' % resc_vntype)\n        return resc_vntype\n\n    def _get_assigned_cgroup_resources(self):\n        \"\"\"\n        Return a dictionary of currently assigned cgroup resources per job\n        \"\"\"\n        # if jobs have a limit *exactly* equal to the total limit available\n        # for all jobs, then they have no explicit memory req and the\n        # limit is the result of an 'unlimited' default;\n        # in that case, the scheduler assigned 0, so be consistent with it\n\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        assigned = {}\n\n        # Get totals available for all jobs to compare with later\n        mem_avail = None\n        filename = self._cgroup_path('memory', 'limit_in_bytes')\n        if filename and os.path.isfile(filename):\n            with open(filename) as desc:\n                mem_avail = int(desc.readline())\n        vmem_avail = None\n        filename = self._cgroup_path('memsw', 
'limit_in_bytes')\n        if filename and os.path.isfile(filename):\n            with open(filename) as desc:\n                vmem_avail = int(desc.readline())\n        hpmem_avail = None\n        filename = self._cgroup_path('hugetlb', 'limit_in_bytes')\n        if filename and os.path.isfile(filename):\n            with open(filename) as desc:\n                hpmem_avail = int(desc.readline())\n        pbs.logmsg(pbs.EVENT_DEBUG4, \"get_assigned_cgroup_resources:\"\n                   \" total host avail mem=%s vmem=%s hpmem=%s\"\n                   % (mem_avail, vmem_avail, hpmem_avail))\n\n        for key in self.paths:\n            if key in ('blkio', 'cpu', 'cpuacct', 'freezer',\n                       'net_cls', 'net_prio', 'perf_event',\n                       'pids', 'rdma', 'systemd'):\n                continue\n            if not self.enabled(key):\n                continue\n            path = os.path.dirname(self._cgroup_path(key))\n            # do not exclude orphans\n            pattern = self._glob_subdir_wildcard()\n            pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Examining %s' %\n                       (caller_name(), os.path.join(path, pattern)))\n            for subdir in glob.glob(os.path.join(path, pattern)):\n                jobid = os.path.basename(subdir)\n                if not jobid:\n                    continue\n                pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Job ID is %s' %\n                           (caller_name(), jobid))\n                if jobid not in assigned:\n                    assigned[jobid] = {}\n                if key not in assigned[jobid]:\n                    assigned[jobid][key] = {}\n                if key == 'cpuset':\n                    with open(self._cgroup_path(key, 'cpus', jobid)) as desc:\n                        assigned[jobid][key]['cpus'] = \\\n                            expand_list(desc.readline())\n                    with open(self._cgroup_path(key, 'mems', jobid)) as desc:\n                        
assigned[jobid][key]['mems'] = \\\n                            expand_list(desc.readline())\n                elif key == 'memory':\n                    filename = self._cgroup_path(key, 'limit_in_bytes', jobid)\n                    if filename and os.path.isfile(filename):\n                        with open(filename) as desc:\n                            assigned[jobid][key]['limit_in_bytes'] = \\\n                                int(desc.readline())\n                        if assigned[jobid][key]['limit_in_bytes'] == mem_avail:\n                            # not explicitly assigned, 'unlimited' raised dflt\n                            assigned[jobid][key]['limit_in_bytes'] = 0\n                    filename = self._cgroup_path(key,\n                                                 'soft_limit_in_bytes', jobid)\n                    if filename and os.path.isfile(filename):\n                        with open(filename) as desc:\n                            assigned[jobid][key]['soft_limit_in_bytes'] = \\\n                                int(desc.readline())\n                        if (assigned[jobid][key]['soft_limit_in_bytes']\n                                == mem_avail):\n                            # not explicitly assigned, 'unlimited' raised dflt\n                            assigned[jobid][key]['soft_limit_in_bytes'] = 0\n                elif key == 'memsw':\n                    filename = self._cgroup_path(key, 'limit_in_bytes', jobid)\n                    if filename and os.path.isfile(filename):\n                        with open(filename) as desc:\n                            assigned[jobid][key]['limit_in_bytes'] = \\\n                                int(desc.readline())\n                        if (assigned[jobid][key]['limit_in_bytes']\n                                == vmem_avail):\n                            # not explicitly assigned, 'unlimited' raised dflt\n                            assigned[jobid][key]['limit_in_bytes'] = 0\n                  
  else:\n                        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: No such file: %s' %\n                                   (caller_name(), filename))\n                elif key == 'hugetlb':\n                    filename = self._cgroup_path(key, 'limit_in_bytes', jobid)\n                    if filename and os.path.isfile(filename):\n                        with open(filename) as desc:\n                            assigned[jobid][key]['limit_in_bytes'] = \\\n                                int(desc.readline())\n                        if (assigned[jobid][key]['limit_in_bytes']\n                                == hpmem_avail):\n                            # not explicitly assigned, 'unlimited' raised dflt\n                            assigned[jobid][key]['limit_in_bytes'] = 0\n                elif key == 'devices':\n                    path = self._cgroup_path(key, 'list', jobid)\n                    pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Devices path is %s' %\n                               (caller_name(), path))\n                    if path and os.path.isfile(path):\n                        with open(path) as desc:\n                            assigned[jobid][key]['list'] = []\n                            for line in desc:\n                                pbs.logmsg(pbs.EVENT_DEBUG4, '%s: '\n                                           'Appending %s'\n                                           % (caller_name(), line))\n                                assigned[jobid][key]['list'].append(line)\n                                pbs.logmsg(pbs.EVENT_DEBUG4,\n                                           '%s: assigned[%s][%s][list] '\n                                           '= %s'\n                                           % (caller_name(), jobid, key,\n                                              assigned[jobid][key]['list']))\n                # Note: unmanaged but known subsystems should be filtered\n                # at the top of the loop body\n                else:\n      
              pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Unknown subsystem %s' %\n                               (caller_name(), key))\n                    raise CgroupConfigError('Unknown subsystem: %s' % key)\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Returning %s' %\n                   (caller_name(), str(assigned)))\n        return assigned\n\n    def _get_systemd_version(self):\n        \"\"\"\n        Return an integer reflecting the systemd version, zero for no systemd\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        ver = 0\n        try:\n            process = subprocess.Popen(['systemctl', '--system', 'show',\n                                        '--property=Version'],\n                                       shell=False,\n                                       stdout=subprocess.PIPE,\n                                       stderr=subprocess.PIPE,\n                                       universal_newlines=True)\n            out = process.communicate()[0]\n            # if we get a non-str type then convert\n            # before calling splitlines\n            # should not happen since we pass universal_newlines True\n            # Note: some versions prepend \"Version=\", other versions\n            # add a more precise version in parentheses after a space\n            out_split = stringified_output(out).splitlines()\n            ver = int(re.sub(r'Version=\\D*(\\d+).*', r'\\1',\n                             out_split[0].split()[0]))\n        except Exception:\n            # Note we also get here if we can't decode an int version\n            return 0\n        if not os.path.exists(\n                os.path.join(os.sep, 'run', 'systemd', 'system')):\n            # no place to drop unit files? 
systemd not running in system mode?\n            return 0\n        return ver\n\n    def _glob_subdir_wildcard(self, extension=''):\n        \"\"\"\n        Return a string that may be used as a pattern with glob.glob\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        buf = '[0-9]*'\n        if extension:\n            buf += '.' + extension\n        return buf\n\n    def enabled(self, subsystem):\n        \"\"\"\n        Return whether a subsystem is enabled.\n        If we get here, the OS is supported: if a controller is enabled\n        in the cfg file but the controller is not mounted,\n        it is a configuration error that should be fixed\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        # Check whether the subsystem is enabled in the configuration file\n        if subsystem not in self.cfg['cgroup']:\n            return False\n        if ('enabled' in self.cfg\n                and isinstance(self.cfg['enabled'], bool)\n                and not self.cfg['enabled']):\n            return False\n        if ('enabled' not in self.cfg['cgroup'][subsystem]\n                or not self.cfg['cgroup'][subsystem]['enabled']):\n            return False\n        # Check whether the cgroup is mounted for this subsystem\n        if subsystem not in self.paths:\n            raise CgroupConfigError('%s: cgroups enabled '\n                                    'but not mounted for subsystem %s'\n                                    % (caller_name(), subsystem))\n        return True\n\n    def default(self, subsystem):\n        \"\"\"\n        Return the default value for a subsystem\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        if subsystem in self.cfg['cgroup']:\n            if 'default' in self.cfg['cgroup'][subsystem]:\n                return self.cfg['cgroup'][subsystem]['default']\n        return 
None\n\n    def _is_pid_owner(self, pid, job_uid):\n        \"\"\"\n        Check to see if the pid's owner matches the job's owner\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        try:\n            proc_uid = os.stat('/proc/%d' % pid).st_uid\n        except OSError:\n            pbs.logmsg(pbs.EVENT_DEBUG, 'Unknown pid: %d' % pid)\n            return False\n        except Exception as exc:\n            pbs.logmsg(pbs.EVENT_DEBUG, 'Unexpected error: %s' % exc)\n            return False\n        pbs.logmsg(pbs.EVENT_DEBUG4, '/proc/%d uid:%d' % (pid, proc_uid))\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'Job uid: %d' % job_uid)\n        if proc_uid != job_uid:\n            pbs.logmsg(pbs.EVENT_DEBUG4, 'Proc uid: %d != Job owner: %d' %\n                       (proc_uid, job_uid))\n            return False\n        return True\n\n    def _get_pids_in_sid(self, sid=None):\n        \"\"\"\n        Return a list of all PIDs associated with a session ID\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        pids = []\n        if not sid:\n            return pids\n        # Older kernels will not have a task directory\n        if os.path.isdir(os.path.join(os.sep, 'proc', 'self', 'task')):\n            check_tasks = True\n        else:\n            check_tasks = False\n        pattern = os.path.join(os.sep, 'proc', '[0-9]*', 'stat')\n        for filename in glob.glob(pattern):\n            try:\n                with open(filename, 'r') as desc:\n                    line = desc.readline()\n                    # valid columns are \\(.+\\) or [^\\s]+\n                    entries = re.split(r'\\s+(\\(.+\\)|[^\\s]+)\\s+', line)\n                    if int(entries[5]) != sid:\n                        continue\n                    if check_tasks:\n                        # Thread group leader will have a task entry.\n                        # No need to append entry from stat file.\n         
               taskdir = os.path.join(os.path.dirname(filename),\n                                               'task')\n                        for task in glob.glob(os.path.join(taskdir, '[0-9]*')):\n                            pid = int(os.path.basename(task))\n                            if pid not in pids:\n                                pids.append(pid)\n                    else:\n                        # Append entry from stat file\n                        if int(entries[0]) not in pids:\n                            pids.append(int(entries[0]))\n            except (OSError, IOError):\n                # PIDs may come and go as we read /proc so the glob data can\n                # become stale. Tolerate failures in this case.\n                pass\n        return pids\n\n    def add_pids(self, pidarg, jobid):\n        \"\"\"\n        Add some number of PIDs to the cgroup tasks files for each subsystem\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        # make pids a list\n        pids = []\n        if isinstance(pidarg, int):\n            sid = 0\n            try:\n                sid = os.getsid(pidarg)\n            except OSError as exc:\n                sid = -1\n                pbs.logmsg(pbs.EVENT_DEBUG2,\n                           '%s: Request to attach session of non-existing'\n                           ' PID %s to jobid %s'\n                           % (caller_name(), pidarg, jobid))\n            if (sid > 1):\n                pids = self._get_pids_in_sid(sid)\n        elif isinstance(pidarg, list):\n            for pid in pidarg:\n                if not isinstance(pid, int):\n                    raise ValueError('PID list must contain integers')\n            pids = pidarg\n        else:\n            raise ValueError('PID argument must be integer or list')\n        if not pids:\n            return\n        if pbs.event().type == pbs.EXECJOB_LAUNCH:\n            if 1 in pids:\n                
pbs.logmsg(pbs.EVENT_DEBUG2,\n                           '%s: Job %s contains defunct process' %\n                           (caller_name(), jobid))\n                # Use a list comprehension to remove all instances of the\n                # number 1\n                pids = [x for x in pids if x != 1]\n        if not pids:\n            return\n        # check pids to make sure that they are owned by the job owner\n        if pbs.event().type == pbs.EXECJOB_ATTACH:\n            pbs.logmsg(pbs.EVENT_DEBUG4, 'event type: attach')\n            try:\n                uid = pwd.getpwnam(pbs.event().job.euser).pw_uid\n            except Exception:\n                pbs.logmsg(pbs.EVENT_DEBUG2,\n                           'Failed to lookup UID by name')\n                raise\n            tmp_pids = []\n            for process in pids:\n                if self._is_pid_owner(process, uid):\n                    tmp_pids.append(process)\n                else:\n                    pbs.logmsg(pbs.EVENT_DEBUG2,\n                               'process %d not owned by %s' %\n                               (process, uid))\n            pids = tmp_pids\n        if not pids:\n            return\n        # Determine which subsystems will be used\n        for subsys in self.subsystems:\n            pbs.logmsg(pbs.EVENT_DEBUG4, '%s: subsys = %s' %\n                       (caller_name(), subsys))\n            # memsw and memory use the same tasks file\n            if subsys == 'memsw' and 'memory' in self.subsystems:\n                continue\n            tasks_file = self._cgroup_path(subsys, 'tasks', jobid)\n            if ((not os.path.exists(tasks_file))\n                    and (subsys == 'cpuset')):\n                if (self.cfg['cgroup']['cpuset']['enabled']\n                        and not self.cfg['cgroup']['cpuset']\n                                        ['allow_zero_cpus']):\n                    pbs.logmsg(pbs.EVENT_JOB_USAGE,\n                               'No cpuset cgroup 
present '\n                               'and ncpus==0 disallowed')\n                    pbs.event().reject('No cpuset cgroup present '\n                                       'and ncpus==0 disallowed')\n                else:\n                    # zero CPU job: should be attached to root pbs cpuset\n                    tasks_file = self._cgroup_path(subsys, 'tasks')\n            pbs.logmsg(pbs.EVENT_DEBUG4, '%s: tasks file = %s' %\n                       (caller_name(), tasks_file))\n            try:\n                for process in pids:\n                    self.write_value(tasks_file, process, 'a')\n            except IOError as exc:\n                raise CgroupLimitError('Failed to add PIDs %s to %s (%s)' %\n                                       (str(pids), tasks_file,\n                                        errno.errorcode[exc.errno]))\n            except Exception:\n                raise\n\n    def setup_job_devices_env(self, gpus):\n        \"\"\"\n        Set up the job environment for the devices assigned to the job for an\n        execjob_launch hook\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        if 'devices' in self.subsystems:\n            # prevent using GPUs without user awareness\n            pbs.event().env['CUDA_VISIBLE_DEVICES'] = ''\n        if 'device_names' in self.assigned_resources:\n            names = self.assigned_resources['device_names']\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       'devices: %s' % (names))\n            offload_devices = []\n            cuda_visible_devices = []\n            for name in names:\n                if name.startswith('mic'):\n                    offload_devices.append(name[3:])\n                elif name.startswith('nvidia'):\n                    if 'uuid' in gpus[name]:\n                        cuda_visible_devices.append(gpus[name]['uuid'])\n                    if 'uuids' in gpus[name]:\n                        
cuda_visible_devices.extend(gpus[name]['uuids'])\n            if offload_devices:\n                value = \"\\\\,\".join(offload_devices)\n                pbs.event().env['OFFLOAD_DEVICES'] = '%s' % value\n                pbs.logmsg(pbs.EVENT_DEBUG4,\n                           'offload_devices: %s' % offload_devices)\n            if cuda_visible_devices:\n                value = \"\\\\,\".join(cuda_visible_devices)\n                pbs.event().env['CUDA_VISIBLE_DEVICES'] = '%s' % value\n                pbs.event().env['CUDA_DEVICE_ORDER'] = 'PCI_BUS_ID'\n                pbs.logmsg(pbs.EVENT_DEBUG4,\n                           'cuda_visible_devices: %s' % cuda_visible_devices)\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       'Environment: %s' % pbs.event().env)\n            return [offload_devices, cuda_visible_devices]\n        else:\n            return False\n\n    def _setup_subsys_devices(self, jobid, node):\n        \"\"\"\n        Configure access to devices given the job ID and node resources\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        if 'devices' not in self.subsystems:\n            return\n        devices_list_file = self._cgroup_path('devices', 'list', jobid)\n        devices_deny_file = self._cgroup_path('devices', 'deny', jobid)\n        devices_allow_file = self._cgroup_path('devices', 'allow', jobid)\n        # Add devices the user is granted access to\n        with open(devices_list_file, 'r') as desc:\n            devices_allowed = desc.read().splitlines()\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'Initial devices.list: %s' %\n                   devices_allowed)\n        # Deny access to mic and gpu devices\n        accelerators = []\n        devices = node.devices\n        for devclass in devices:\n            if devclass == 'mic' or devclass == 'gpu':\n                for instance in devices[devclass]:\n                    dev = devices[devclass][instance]\n                    
if 'extra_devs' in dev:\n                        accelerators.extend(dev['extra_devs'])\n                    accelerators.append('%d:%d' % (dev['major'], dev['minor']))\n        # For CentOS 7 we need to remove a *:* rwm from devices.list\n        # before we can add anything to devices.allow. Otherwise our\n        # changes are ignored. Check to see if a *:* rwm is in devices.list.\n        # If so, remove it.\n        value = 'a *:* rwm'\n        if value in devices_allowed:\n            self.write_value(devices_deny_file, value)\n        # Remove access to the following devices\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'Removing access to the following: %s' %\n                   accelerators)\n        for entry in accelerators:\n            value = 'c %s rwm' % entry\n            self.write_value(devices_deny_file, value)\n        # Add devices back to the list\n        devices_allow = self.cfg['cgroup']['devices']['allow']\n        pbs.logmsg(pbs.EVENT_DEBUG4,\n                   'Allowing access to the following: %s' %\n                   devices_allow)\n        for item in devices_allow:\n            if isinstance(item, str):\n                pbs.logmsg(pbs.EVENT_DEBUG4, 'string item: %s' % item)\n                self.write_value(devices_allow_file, item)\n                pbs.logmsg(pbs.EVENT_DEBUG4, 'write_value: %s' % item)\n                continue\n            if not isinstance(item, list):\n                pbs.logmsg(pbs.EVENT_DEBUG2,\n                           '%s: Entry is not a string or list: %s' %\n                           (caller_name(), item))\n                continue\n            pbs.logmsg(pbs.EVENT_DEBUG4, 'Device allow: %s' % item)\n            stat_filename = os.path.join(os.sep, 'dev', item[0])\n            pbs.logmsg(pbs.EVENT_DEBUG4, 'Stat file: %s' % stat_filename)\n            try:\n                statinfo = os.stat(stat_filename)\n            except OSError:\n                pbs.logmsg(pbs.EVENT_DEBUG,\n               
            '%s: Entry not added to devices.allow: %s' %\n                           (caller_name(), item))\n                pbs.logmsg(pbs.EVENT_DEBUG4, '%s: File not found: %s' %\n                           (caller_name(), stat_filename))\n                continue\n            except Exception as exc:\n                pbs.logmsg(pbs.EVENT_DEBUG, 'Unexpected error: %s' % exc)\n                continue\n            device_type = None\n            if stat.S_ISBLK(statinfo.st_mode):\n                device_type = 'b'\n            elif stat.S_ISCHR(statinfo.st_mode):\n                device_type = 'c'\n            if not device_type:\n                pbs.logmsg(pbs.EVENT_DEBUG2, '%s: Unknown device type: %s' %\n                           (caller_name(), stat_filename))\n                continue\n            if len(item) == 3 and isinstance(item[2], str):\n                value = '%s %s:%s %s' % (device_type,\n                                         os.major(statinfo.st_rdev),\n                                         item[2], item[1])\n            else:\n                value = '%s %s:%s %s' % (device_type,\n                                         os.major(statinfo.st_rdev),\n                                         os.minor(statinfo.st_rdev),\n                                         item[1])\n            self.write_value(devices_allow_file, value)\n            pbs.logmsg(pbs.EVENT_DEBUG4, 'write_value: %s' % value)\n        with open(devices_list_file, 'r') as desc:\n            devices_allowed = desc.read().splitlines()\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'Updated devices.list: %s' %\n                   devices_allowed)\n\n    def _assign_devices(self, device_kind, device_list, device_count, node):\n        \"\"\"\n        Select devices to assign to the job\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        devices = device_list[:device_count]\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'Device List: %s' % devices)\n  
      device_names = []\n        device_allowed = []\n        for dev in devices:\n            # Skip device if already present in names\n            if dev in device_names:\n                continue\n            # Skip device if already present in allowed\n            device_info = node.devices[device_kind][dev]\n            dev_entry = '%s %d:%d rwm' % (device_info['type'],\n                                          device_info['major'],\n                                          device_info['minor'])\n            if dev_entry in device_allowed:\n                continue\n            # Device controllers must also be added for certain devices\n            if device_kind == 'mic':\n                # Requires the ctrl (0) and the scif (1) to be added\n                dev_entry = '%s %d:0 rwm' % (device_info['type'],\n                                             device_info['major'])\n                if dev_entry not in device_allowed:\n                    device_allowed.append(dev_entry)\n                dev_entry = '%s %d:1 rwm' % (device_info['type'],\n                                             device_info['major'])\n                if dev_entry not in device_allowed:\n                    device_allowed.append(dev_entry)\n            elif device_kind == 'gpu' and not device_info.get('is_gi', False):\n                # Requires the ctrl (255) to be added\n                # unless GPU is a GI from a MIG GPU\n                dev_entry = '%s %d:255 rwm' % (device_info['type'],\n                                               device_info['major'])\n                if dev_entry not in device_allowed:\n                    device_allowed.append(dev_entry)\n            # Now append the device name and entry\n            device_names.append(dev)\n            dev_entry = '%s %d:%d rwm' % (device_info['type'],\n                                          device_info['major'],\n                                          device_info['minor'])\n            if dev_entry not in 
device_allowed:\n                device_allowed.append(dev_entry)\n            # if there are extra majors/minors, add them too\n            if 'extra_devs' in device_info:\n                for extra in device_info['extra_devs']:\n                    dev_entry = '%s %s rwm' % (device_info['type'],\n                                               extra)\n                    if dev_entry not in device_allowed:\n                        device_allowed.append(dev_entry)\n\n        return device_names, device_allowed\n\n    def get_device_name(self, node, available, socket, major, minor):\n        \"\"\"\n        Find the device name\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        pbs.logmsg(pbs.EVENT_DEBUG4,\n                   'Get device name: major: %s, minor: %s' % (major, minor))\n        if not isinstance(major, int):\n            return None\n        if not isinstance(minor, int):\n            return None\n        pbs.logmsg(pbs.EVENT_DEBUG4,\n                   'Possible devices: %s' % (available[socket]['devices']))\n        for avail_device in available[socket]['devices']:\n            avail_major = None\n            avail_minor = None\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       'Checking device: %s' % (avail_device))\n            if avail_device.find('mic') != -1:\n                pbs.logmsg(pbs.EVENT_DEBUG4,\n                           'Check mic device: %s' % (avail_device))\n                avail_major = node.devices['mic'][avail_device]['major']\n                avail_minor = node.devices['mic'][avail_device]['minor']\n                pbs.logmsg(pbs.EVENT_DEBUG4,\n                           'Device major: %s, minor: %s' % (major, minor))\n            elif avail_device.find('nvidia') != -1:\n                pbs.logmsg(pbs.EVENT_DEBUG4,\n                           'Check gpu device: %s' % (avail_device))\n                avail_major = node.devices['gpu'][avail_device]['major']\n        
        avail_minor = node.devices['gpu'][avail_device]['minor']\n                pbs.logmsg(pbs.EVENT_DEBUG4,\n                           'Device major: %s, minor: %s' % (major, minor))\n            if avail_major == major and avail_minor == minor:\n                pbs.logmsg(pbs.EVENT_DEBUG4,\n                           'Device match: name: %s, major: %s, minor: %s' %\n                           (avail_device, major, minor))\n                return avail_device\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'No match found')\n        return None\n\n    def _combine_resources(self, dict1, dict2):\n        \"\"\"\n        Take two dictionaries containing known types and combine them together\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        dest = {}\n        for src in [dict1, dict2]:\n            for key in src:\n                val = src[key]\n                vtype = type(val)\n                if key not in dest:\n                    if vtype is int:\n                        dest[key] = 0\n                    elif vtype is float:\n                        dest[key] = 0.0\n                    elif vtype is str:\n                        dest[key] = ''\n                    elif vtype is list:\n                        dest[key] = []\n                    elif vtype is dict:\n                        dest[key] = {}\n                    elif vtype is tuple:\n                        dest[key] = ()\n                    elif vtype is pbs.size:\n                        dest[key] = pbs.size(0)\n                    elif vtype is pbs.int:\n                        dest[key] = pbs.int(0)\n                    elif vtype is pbs.float:\n                        dest[key] = pbs.float(0.0)\n                    else:\n                        raise ValueError('Unrecognized resource type')\n                dest[key] += val\n        return dest\n\n    def _assign_resources(self, requested, available, socketlist, node):\n        \"\"\"\n        
Determine whether a job fits within resources\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        assigned = {'cpuset.cpus': [], 'cpuset.mems': []}\n        if 'ncpus' in requested and int(requested['ncpus']) > 0:\n            cores = set(available['cpus'])\n            if not self.cfg['use_hyperthreads']:\n                # Hyperthreads are excluded from core list\n                cores -= set(node.cpuinfo['hyperthreads'])\n            avail = len(cores)\n            needed = int(requested['ncpus'])\n            if self.cfg['use_hyperthreads'] and self.cfg['ncpus_are_cores']:\n                needed *= node.cpuinfo['hyperthreads_per_core']\n            if needed > avail:\n                pbs.logmsg(pbs.EVENT_DEBUG4,\n                           '%s: insufficient ncpus: %s needed, %s available'\n                           % (caller_name(), needed, avail))\n                return {}\n            if self.cfg['use_hyperthreads']:\n                # Find cores that are fully available\n                empty_cores = set()\n                for corenum in sorted(cores):\n                    if (corenum not in node.cpuinfo['hyperthreads']\n                            and set(node.cpuinfo['cpu'][corenum]['threads'])\n                            .issubset(set(available['cpus']))):\n                        # All hyperthreads available for this core\n                        empty_cores.add(corenum)\n                # Assign threads from the empty cores\n                for corenum in sorted(empty_cores):\n                    for thread in node.cpuinfo['cpu'][corenum]['threads']:\n                        if thread in assigned['cpuset.cpus']:\n                            # this thread is already assigned\n                            continue\n                        assigned['cpuset.cpus'].append(thread)\n                        needed -= 1\n                        if thread in cores:\n                            
cores.remove(thread)\n                        if needed <= 0:\n                            break\n                    if needed <= 0:\n                        break\n            # When use_hyperthreads is enabled, the above block already\n            # assigned all of the fully available cores. There still may\n            # be cores to assign. When use_hyperthreads is disabled, we\n            # assign all the cores here.\n            if needed <= 0:\n                # we're done (will always hold if use_hyperthreads\n                # and ncpus_are_cores are both True)\n                pass\n            elif needed <= len(cores):\n                corelist = sorted(cores)\n                assigned['cpuset.cpus'] += corelist[:needed]\n            else:\n                pbs.logmsg(pbs.EVENT_DEBUG4,\n                           '%s: in final pass, '\n                           '%d more ncpus needed than available'\n                           % (caller_name(), needed - len(cores)))\n                return {}\n\n            # Set cpuset.mems to the socketlist for now even though\n            # there may not be sufficient memory. 
Memory gets\n            # checked later in this method.\n            assigned['cpuset.mems'] = socketlist\n        if 'mem' in requested or 'ngpus' in requested or 'nmics' in requested:\n            # for multivnode systems asking memory/ngpus/nmics without asking\n            # cpus is valid, need access to memory\n            assigned['cpuset.mems'] = socketlist\n        if 'nmics' in requested and int(requested['nmics']) > 0:\n            assigned['device_names'] = []\n            assigned['devices'] = []\n            regex = re.compile('.*(mic).*')\n            nmics = int(requested['nmics'])\n            # Use a list comprehension to construct the mics list\n            mics = [m.group(0)\n                    for d in available['devices']\n                    for m in [regex.search(d)] if m]\n            if nmics > len(mics):\n                pbs.logmsg(pbs.EVENT_DEBUG4, 'Insufficient nmics: %s/%s' %\n                           (nmics, mics))\n                return {}\n            names, devices = self._assign_devices('mic', mics[:nmics],\n                                                  nmics, node)\n            for val in names:\n                assigned['device_names'].append(val)\n            for val in devices:\n                assigned['devices'].append(val)\n        if 'ngpus' in requested and int(requested['ngpus']) > 0:\n            if 'device_names' not in assigned:\n                assigned['device_names'] = []\n                assigned['devices'] = []\n            regex = re.compile('.*(nvidia).*')\n            ngpus = int(requested['ngpus'])\n            # Use a list comprehension to construct the gpus list\n            gpus = [m.group(0)\n                    for d in available['devices']\n                    for m in [regex.search(d)] if m]\n            if ngpus > len(gpus):\n                pbs.logmsg(pbs.EVENT_DEBUG4, 'Insufficient ngpus: %s/%s' %\n                           (ngpus, gpus))\n                return {}\n            names, 
devices = self._assign_devices('gpu', gpus[:ngpus],\n                                                  ngpus, node)\n            for val in names:\n                assigned['device_names'].append(val)\n            for val in devices:\n                assigned['devices'].append(val)\n        if 'mem' in requested:\n            req_mem = size_as_int(requested['mem'])\n            avail_mem = available['memory']\n            if req_mem > avail_mem:\n                if self.cfg['vnode_per_numa_node']:\n                    # scheduler is not supposed to let this happen\n                    debug_level = pbs.EVENT_ERROR\n                else:\n                    # can happen if chunk needs to be split over sockets\n                    debug_level = pbs.EVENT_DEBUG4\n                pbs.logmsg(debug_level,\n                           ('Insufficient memory on socket(s) '\n                            '%s: requested:%s, available:%s') %\n                           (socketlist, req_mem, available['memory']))\n                return {}\n            if 'mem' not in assigned:\n                assigned['mem'] = 0\n            assigned['mem'] += req_mem\n            if 'cpuset.mems' not in assigned:\n                assigned['cpuset.mems'] = socketlist\n        return assigned\n\n    def assign_job(self, requested, available, node):\n        \"\"\"\n        Assign resources to the job. There are two scenarios that need to\n        be handled:\n        1. If vnodes are present in the requested resources, then the\n           scheduler has already decided where the job is to run. Check\n           the available resources to ensure an orphaned cgroup is not\n           consuming them.\n        2. 
If no vnodes are present in the requested resources, try to\n           span the fewest number of sockets when creating the assignment.\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        pbs.logmsg(pbs.EVENT_DEBUG4,\n                   'Requested: %s, Available: %s, Numa Nodes: %s' %\n                   (requested, available, node.numa_nodes))\n        # Create a list of memory-only NUMA nodes (for KNL). These get assigned\n        # in addition to NUMA nodes with assigned devices or cpus.\n        memory_only_nodes = []\n        for nnid in node.numa_nodes:\n            if not node.numa_nodes[nnid]['cpus'] and \\\n                    not node.numa_nodes[nnid]['devices']:\n                pbs.logmsg(pbs.EVENT_DEBUG4,\n                           'Found memory only NUMA node: %s' %\n                           (node.numa_nodes[nnid]))\n                memory_only_nodes.append(nnid)\n        # Create a list of vnode/socket pairs\n        if 'vnodes' in requested:\n            regex = re.compile(r'(.*)\\[(\\d+)\\].*')\n            pairlist = []\n            for vnode in requested['vnodes']:\n                pairlist.append([regex.search(vnode).group(1),\n                                 int(regex.search(vnode).group(2))])\n        else:\n            sockets = list(available.keys())\n            # If placement type is job_balanced, reorder the sockets\n            if self.cfg['placement_type'] == 'job_balanced':\n                pbs.logmsg(pbs.EVENT_DEBUG4,\n                           'Requested job_balanced placement')\n                # Look at assigned_resources and determine which socket\n                # to start with\n                jobcount = {}\n                for sock in sockets:\n                    jobcount[sock] = 0\n                for job in self.assigned_resources:\n                    jobresc = self.assigned_resources[job]\n                    if 'cpuset' in jobresc and 'mems' in jobresc['cpuset']:\n     
                   for sock in jobresc['cpuset']['mems']:\n                            jobcount[sock] += 1\n                sorted_jobcounts = sorted(list(jobcount.items()),\n                                          key=operator.itemgetter(1))\n                reordered = []\n                for count in sorted_jobcounts:\n                    reordered.append(count[0])\n                sockets = reordered\n            elif self.cfg['placement_type'] == 'load_balanced':\n                cpucounts = dict()\n                for sock in sockets:\n                    cpucounts[sock] = len(available[sock]['cpus'])\n                sorted_cpucounts = sorted(list(cpucounts.items()),\n                                          key=operator.itemgetter(1),\n                                          reverse=True)\n                reordered = list()\n                for count in sorted_cpucounts:\n                    reordered.append(count[0])\n                sockets = reordered\n            elif self.cfg['placement_type'] == 'load_packed':\n                cpucounts = dict()\n                for sock in sockets:\n                    cpucounts[sock] = len(available[sock]['cpus'])\n                sorted_cpucounts = sorted(list(cpucounts.items()),\n                                          key=operator.itemgetter(1),\n                                          reverse=False)\n                reordered = list()\n                for count in sorted_cpucounts:\n                    reordered.append(count[0])\n                sockets = reordered\n            pairlist = []\n            for sock in sockets:\n                pairlist.append([None, int(sock)])\n        # Loop through the sockets or vnodes and assign resources\n        assigned = {}\n        for pair in pairlist:\n            vnode = pair[0]\n            socket = pair[1]\n            if vnode:\n                myname = 'vnode %s[%d]' % (vnode, socket)\n                req = requested['vnodes']['%s[%d]' % (vnode, 
socket)]\n            else:\n                myname = 'socket %d' % socket\n                req = requested\n            pbs.logmsg(pbs.EVENT_DEBUG4, 'Current target is %s' % myname)\n            new = None\n            if socket in available:\n                new = self._assign_resources(req, available[socket],\n                                             [socket], node)\n            else:\n                pbs.logmsg(pbs.EVENT_ERROR, 'Attempt to allocate resources '\n                           'on non-existing socket #' + str(socket))\n            if new:\n                new['cpuset.mems'].append(socket)\n                # Add the memory-only NUMA nodes\n                for nnid in memory_only_nodes:\n                    if nnid not in new['cpuset.mems']:\n                        new['cpuset.mems'].append(nnid)\n                pbs.logmsg(pbs.EVENT_DEBUG4, 'Resources assigned to %s' %\n                           myname)\n                if vnode:\n                    assigned = self._combine_resources(assigned, new)\n                else:\n                    # Requested resources fit on this socket\n                    return new\n            else:\n                pbs.logmsg(pbs.EVENT_DEBUG4, 'Resources not assigned to %s' %\n                           myname)\n                # This is fatal in the case of vnodes\n                if vnode:\n                    return {}\n        if vnode:\n            if 'cpuset.cpus' in assigned:\n                assigned['cpuset.cpus'].sort()\n            if 'cpuset.mems' in assigned:\n                assigned['cpuset.mems'].sort()\n            if 'devices' in assigned:\n                assigned['devices'].sort()\n            if 'device_names' in assigned:\n                assigned['device_names'].sort()\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       'Assigned Resources: %s' % (assigned))\n            return assigned\n        # Not using vnodes so try spanning sockets\n        pbs.logmsg(pbs.EVENT_DEBUG4, 
'Attempting to span sockets')\n        total = {}\n        socketlist = []\n        for pair in pairlist:\n            socket = pair[1]\n            socketlist.append(socket)\n            total = self._combine_resources(total, available[socket])\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'Combined available resources: %s' %\n                   (total))\n        return self._assign_resources(requested, total, socketlist, node)\n\n    def available_node_resources(self, node, exclude_jobid=None):\n        \"\"\"\n        Determine which resources are available from the supplied node\n        dictionary (i.e. the local node) by removing resources already\n        assigned to jobs.\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        available = copy.deepcopy(node.numa_nodes)\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'Available Keys: %s' % (available[0]))\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'Available: %s' % (available))\n        for socket in available:\n            if 'mem' in available[socket]:\n                available[socket]['memory'] = \\\n                    size_as_int(str(available[socket]['mem']))\n            elif 'MemTotal' in available[socket]:\n                # Find the memory on the socket in bytes.\n                # Remove the 'b' to simplify the math\n                available[socket]['memory'] = size_as_int(\n                    available[socket]['MemTotal'])\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'Available prior to device add: %s' %\n                   (available))\n        for device in node.devices:\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       '%s: Device Names: %s' %\n                       (caller_name(), device))\n            if device == 'mic' or device == 'gpu':\n                pbs.logmsg(pbs.EVENT_DEBUG4, 'Devices: %s' %\n                           node.devices[device])\n                for device_name in node.devices[device]:\n                    device_socket = \\\n             
           node.devices[device][device_name]['numa_node']\n                    if 'devices' not in available[device_socket]:\n                        available[device_socket]['devices'] = []\n                    pbs.logmsg(pbs.EVENT_DEBUG4,\n                               'Device: %s, Socket: %s' %\n                               (device, device_socket))\n                    available[device_socket]['devices'].append(device_name)\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'Available: %s' % (available))\n        pbs.logmsg(pbs.EVENT_DEBUG4,\n                   'Assigned: %s' % (self.assigned_resources))\n        # Remove all of the resources that are assigned to other jobs\n        for jobid in self.assigned_resources:\n            if exclude_jobid and (jobid == exclude_jobid):\n                pbs.logmsg(pbs.EVENT_DEBUG4,\n                           ('Job %s res not removed from host '\n                            'available res: excluded job') % jobid)\n                continue\n\n            # Support suspended jobs on nodes\n            if job_is_suspended(jobid):\n                pbs.logmsg(pbs.EVENT_DEBUG4,\n                           ('Job %s res not removed from host '\n                            'available res: suspended job') % jobid)\n                continue\n            cpus = []\n            sockets = []\n            devices = []\n            memory = 0\n            jra = self.assigned_resources[jobid]\n            if 'cpuset' in jra:\n                if 'cpus' in jra['cpuset']:\n                    cpus = jra['cpuset']['cpus']\n                if 'mems' in jra['cpuset']:\n                    sockets = jra['cpuset']['mems']\n            if 'devices' in jra:\n                if 'list' in jra['devices']:\n                    devices = jra['devices']['list']\n            if 'memory' in jra:\n                if 'limit_in_bytes' in jra['memory']:\n                    memory = size_as_int(jra['memory']['limit_in_bytes'])\n            
pbs.logmsg(pbs.EVENT_DEBUG4,\n                       'cpus: %s, sockets: %s, memory limit: %s' %\n                       (cpus, sockets, memory))\n            pbs.logmsg(pbs.EVENT_DEBUG4, 'devices: %s' % devices)\n            # Loop through the sockets and remove cpus that are\n            # assigned to other cgroups\n            for socket in sockets:\n                for cpu in cpus:\n                    try:\n                        available[socket]['cpus'].remove(cpu)\n                    except ValueError:\n                        pass\n                    except Exception:\n                        pbs.logmsg(pbs.EVENT_DEBUG4,\n                                   'Error removing %d from %s' %\n                                   (cpu, available[socket]['cpus']))\n            if len(sockets) == 1:\n                avail_mem = available[sockets[0]]['memory']\n                pbs.logmsg(pbs.EVENT_DEBUG4,\n                           'Sockets: %s\\tAvailable: %s' %\n                           (sockets, available))\n                pbs.logmsg(pbs.EVENT_DEBUG4,\n                           'Decrementing memory: %d by %d' %\n                           (size_as_int(avail_mem), memory))\n                if memory <= available[sockets[0]]['memory']:\n                    available[sockets[0]]['memory'] -= memory\n            # Loop through the available sockets\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       'Assigned device to %s: %s' % (jobid, devices))\n            for socket in available:\n                for device in devices:\n                    try:\n                        # loop through known devices and see if they match\n                        if available[socket]['devices']:\n                            pbs.logmsg(pbs.EVENT_DEBUG4,\n                                       'Check device: %s' % (device))\n                            pbs.logmsg(pbs.EVENT_DEBUG4,\n                                       'Available device: %s' %\n                   
                    (available[socket]['devices']))\n                            major, minor = device.split()[1].split(':')\n                            avail_device = self.get_device_name(node,\n                                                                available,\n                                                                socket,\n                                                                int(major),\n                                                                int(minor))\n                            pbs.logmsg(pbs.EVENT_DEBUG4,\n                                       'Returned device: %s' %\n                                       (avail_device))\n                            if avail_device is not None:\n                                pbs.logmsg(pbs.EVENT_DEBUG4,\n                                           ('socket: %d,\\t'\n                                            'devices: %s,\\t'\n                                            'device to remove: %s') %\n                                           (socket,\n                                            available[socket]['devices'],\n                                            avail_device))\n                                available[socket]['devices'].remove(\n                                    avail_device)\n                    except ValueError:\n                        pass\n                    except Exception as exc:\n                        pbs.logmsg(pbs.EVENT_DEBUG2,\n                                   'Unexpected error: %s' % exc)\n                        pbs.logmsg(pbs.EVENT_DEBUG2,\n                                   'Error removing %s from %s' %\n                                   (device, available[socket]['devices']))\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'Available resources: %s' % (available))\n        return available\n\n    def set_swappiness(self, value, jobid=''):\n        \"\"\"\n        Set the swappiness for a memory cgroup\n        \"\"\"\n        
pbs.logmsg(pbs.EVENT_DEBUG3, \"%s: Method called\" % (caller_name()))\n        path = self._cgroup_path('memory', 'swappiness', jobid)\n        try:\n            self.write_value(path, value)\n        except Exception as exc:\n            pbs.logmsg(pbs.EVENT_DEBUG2, '%s: Failed to adjust %s: %s' %\n                       (caller_name(), path, exc))\n\n    def set_limit(self, resource, value, jobid=''):\n        \"\"\"\n        Set a cgroup limit on a node or a job\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        if jobid:\n            pbs.logmsg(pbs.EVENT_DEBUG4, '%s: %s = %s for job %s' %\n                       (caller_name(), resource, value, jobid))\n        else:\n            pbs.logmsg(pbs.EVENT_DEBUG4, '%s: %s = %s for node' %\n                       (caller_name(), resource, value))\n        if resource == 'mem':\n            if 'memory' in self.subsystems:\n                path = self._cgroup_path('memory', 'limit_in_bytes', jobid)\n                self.write_value(path, size_as_int(value))\n        elif resource == 'softmem':\n            if 'memory' in self.subsystems:\n                path = self._cgroup_path('memory', 'soft_limit_in_bytes',\n                                         jobid)\n                self.write_value(path, size_as_int(value))\n        elif resource == 'vmem':\n            if 'memsw' in self.subsystems:\n                if 'memory' not in self.subsystems:\n                    path = self._cgroup_path('memory', 'limit_in_bytes',\n                                             jobid)\n                    self.write_value(path, size_as_int(value))\n                path = self._cgroup_path('memsw', 'limit_in_bytes', jobid)\n                self.write_value(path, size_as_int(value))\n        elif resource == 'hpmem':\n            if 'hugetlb' in self.subsystems:\n                path = self._cgroup_path('hugetlb', 'limit_in_bytes', jobid)\n                self.write_value(path, 
size_as_int(value))\n        elif resource == 'ncpus':\n            if 'cpu' in self.subsystems:\n                # Note the value is already multiplied by threads/core\n                # if ncpus_are_cores and use_hyperthreads are True\n                path = self._cgroup_path('cpu', 'shares', jobid)\n                weightless = False\n                # 'zero' cpu jobs have the minimum shares, but value for\n                # clamping to quota needs to be realistic\n                if (value <= 0):\n                    weightless = True\n                    weightless_shares = (self.cfg['cgroup']\n                                                 ['cpu']\n                                                 ['zero_cpus_shares_fraction'])\n                    # Note that the minimum in the kernel is 2\n                    self.write_value(\n                        path, int(max(2, weightless_shares * 1000)))\n                    value = (self.cfg['cgroup']\n                                     ['cpu']\n                                     ['zero_cpus_quota_fraction'])\n                else:\n                    self.write_value(path, int(value * 1000))\n                if (self.cfg['cgroup']['cpu']['enforce_per_period_quota']\n                        or weightless):\n                    # zero cpu jobs ALWAYS get a quota -- keep them honest\n                    cfs_period_us = self.cfg['cgroup']['cpu']['cfs_period_us']\n                    path = self._cgroup_path('cpu', 'cfs_period_us', jobid)\n                    self.write_value(path, cfs_period_us)\n\n                    cfs_quota_fudge_factor = \\\n                        self.cfg['cgroup']['cpu']['cfs_quota_fudge_factor']\n                    path = self._cgroup_path('cpu', 'cfs_quota_us', jobid)\n                    # zero cpu jobs are clamped to use less than one cpu\n                    # during each period\n                    cfs_quota_us_calculated = (cfs_period_us\n                                          
     * value\n                                               * cfs_quota_fudge_factor)\n\n                    # Get the parent cgroup's maximum cfs_quota_us\n                    # we cannot set the child cgroup value to anything more\n                    # Note that some kernels display \"unlimited\" as signed -1\n                    # So we should ignore non-positive values\n                    max_cfs_quota_us = self._get_cfs_quota_us()\n                    if ((max_cfs_quota_us is not None)\n                            and (max_cfs_quota_us > 0)\n                            and (cfs_quota_us_calculated > max_cfs_quota_us)):\n                        cfs_quota_us_calculated = max_cfs_quota_us\n                    self.write_value(path, int(cfs_quota_us_calculated))\n        elif resource == 'cpuset.cpus':\n            if 'cpuset' in self.subsystems:\n                path = self._cgroup_path('cpuset', 'cpus', jobid)\n                cpus = value\n                if not cpus:\n                    raise CgroupLimitError('Failed to configure cpuset cpus')\n                cpus = \",\".join(list(map(str, cpus)))\n                self.write_value(path, cpus)\n        elif resource == 'cpuset.mems':\n            if 'cpuset' in self.subsystems:\n                path = self._cgroup_path('cpuset', 'mems', jobid)\n                if not os.path.exists(path):\n                    # probably zero cpu job now in root cpuset -- do nothing\n                    return\n                if self.cfg['cgroup']['cpuset']['mem_fences']:\n                    mems = value\n                    if not mems:\n                        raise CgroupLimitError(\n                            'Failed to configure cpuset mems')\n                    mems = ','.join(list(map(str, mems)))\n                    self.write_value(path, mems)\n                else:\n                    pbs.logmsg(pbs.EVENT_DEBUG4, ('Memory fences disabled, '\n                                                  'copying 
cpuset.mems from '\n                                                  ' parent for %s') % jobid)\n                    self._copy_from_parent(path)\n        elif resource == 'devices':\n            if 'devices' in self.subsystems:\n                path = self._cgroup_path('devices', 'allow', jobid)\n                devices = value\n                if not devices:\n                    raise CgroupLimitError('Failed to configure devices')\n                pbs.logmsg(pbs.EVENT_DEBUG4,\n                           'Setting devices: %s for %s' % (devices, jobid))\n                for dev in devices:\n                    self.write_value(path, dev)\n                path = self._cgroup_path('devices', 'list', jobid)\n                with open(path, 'r') as desc:\n                    output = desc.readlines()\n                pbs.logmsg(pbs.EVENT_DEBUG4, 'devices.list: %s' % output)\n        else:\n            pbs.logmsg(pbs.EVENT_DEBUG2, '%s: Resource %s not handled' %\n                       (caller_name(), resource))\n\n    def update_job_usage(self, jobid, resc_used, force=False):\n        \"\"\"\n        Update resource usage for a job\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: resc_used = %s' %\n                   (caller_name(), str(resc_used)))\n        if not job_is_running(jobid) and not force:\n            pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Job %s is not running' %\n                       (caller_name(), jobid))\n            return\n        if pbs.event().type in [pbs.EXECHOST_PERIODIC, pbs.EXECJOB_EPILOGUE]:\n            # Initialize diag_messages in order to be able\n            # to merge msgs only from sisters\n            self.set_diag_messages(resc_used, '{}', True)\n        # Sort the subsystems so that we consistently look at the subsystems\n        # in the same order every time\n        self.subsystems.sort()\n        for subsys in self.subsystems:\n            if 
subsys == 'memory':\n                max_mem = self._get_max_mem_usage(jobid)\n                if max_mem is None:\n                    pbs.logjobmsg(jobid, '%s: No max mem data' % caller_name())\n                else:\n                    resc_used['mem'] = pbs.size(convert_size(max_mem, 'kb'))\n                    pbs.logjobmsg(jobid, '%s: Memory usage: mem=%s' %\n                                  (caller_name(), resc_used['mem']))\n                mem_failcnt = self._get_mem_failcnt(jobid)\n                if mem_failcnt is None:\n                    pbs.logjobmsg(jobid, '%s: No mem fail count data' %\n                                  caller_name())\n                else:\n                    # Check to see if the job exceeded its resource limits\n                    if mem_failcnt > 0:\n                        err_msg = self._get_error_msg(jobid)\n                        pbs.logjobmsg(jobid,\n                                      'Cgroup memory limit exceeded: %s' %\n                                      (err_msg))\n                        if (pbs.event().type == pbs.EXECJOB_EPILOGUE\n                                and pbs.event().job.in_ms_mom()):\n                            self.write_to_stderr(pbs.event().job,\n                                                 \"Cgroup mem limit \"\n                                                 \"exceeded: %s\\n\" % (err_msg))\n                        if pbs.event().type in \\\n                           [pbs.EXECHOST_PERIODIC, pbs.EXECJOB_EPILOGUE]:\n                            self.set_diag_messages(resc_used, '{\"%s\":'\n                                                   '\"Cgroup mem limit '\n                                                   'exceeded\"}'\n                                                   % pbs.get_local_nodename(),\n                                                   True)\n            elif subsys == 'memsw':\n                max_vmem = self._get_max_memsw_usage(jobid)\n                if max_vmem 
is None:\n                    pbs.logjobmsg(jobid, '%s: No max vmem data' %\n                                  caller_name())\n                else:\n                    resc_used['vmem'] = pbs.size(convert_size(max_vmem, 'kb'))\n                    pbs.logjobmsg(jobid, '%s: Memory usage: vmem=%s' %\n                                  (caller_name(), resc_used['vmem']))\n                vmem_failcnt = self._get_memsw_failcnt(jobid)\n                if vmem_failcnt is None:\n                    pbs.logjobmsg(jobid, '%s: No vmem fail count data' %\n                                  caller_name())\n                else:\n                    pbs.logjobmsg(jobid, '%s: vmem fail count: %d ' %\n                                  (caller_name(), vmem_failcnt))\n                    if vmem_failcnt > 0:\n                        err_msg = self._get_error_msg(jobid)\n                        pbs.logjobmsg(jobid,\n                                      'Cgroup memsw limit exceeded: %s\\n' %\n                                      (err_msg))\n                        if (pbs.event().type == pbs.EXECJOB_EPILOGUE\n                                and pbs.event().job.in_ms_mom()):\n                            self.write_to_stderr(pbs.event().job,\n                                                 \"Cgroup memsw limit \"\n                                                 \"exceeded: %s\" % (err_msg))\n                        if pbs.event().type in \\\n                           [pbs.EXECHOST_PERIODIC, pbs.EXECJOB_EPILOGUE]:\n                            self.set_diag_messages(resc_used, '{\"%s\":'\n                                                   '\"Cgroup memsw limit '\n                                                   'exceeded\"}'\n                                                   % pbs.get_local_nodename(),\n                                                   True)\n            elif subsys == 'hugetlb':\n                max_hpmem = self._get_max_hugetlb_usage(jobid)\n                if 
max_hpmem is None:\n                    pbs.logjobmsg(jobid, '%s: No max hpmem data' %\n                                  caller_name())\n                    return\n                hpmem_failcnt = self._get_hugetlb_failcnt(jobid)\n                if hpmem_failcnt is None:\n                    pbs.logjobmsg(jobid, '%s: No hpmem fail count data' %\n                                  caller_name())\n                    return\n                if hpmem_failcnt > 0:\n                    err_msg = self._get_error_msg(jobid)\n                    pbs.logjobmsg(jobid, 'Cgroup hugetlb limit exceeded: %s' %\n                                  (err_msg))\n                    if (pbs.event().type == pbs.EXECJOB_EPILOGUE\n                            and pbs.event().job.in_ms_mom()):\n                        self.write_to_stderr(pbs.event().job,\n                                             \"Cgroup hugetlb limit \"\n                                             \"exceeded: %s\" % (err_msg))\n                    if pbs.event().type in \\\n                       [pbs.EXECHOST_PERIODIC, pbs.EXECJOB_EPILOGUE]:\n                        self.set_diag_messages(resc_used, '{\"%s\":'\n                                               '\"Cgroup hugetlb limit '\n                                               'exceeded\"}'\n                                               % pbs.get_local_nodename(),\n                                               True)\n                resc_used['hpmem'] = pbs.size(convert_size(max_hpmem, 'kb'))\n                pbs.logjobmsg(jobid, '%s: Hugepage usage: %s' %\n                              (caller_name(), resc_used['hpmem']))\n            elif subsys == 'cpuacct':\n                if 'walltime' not in resc_used:\n                    walltime = 0\n                else:\n                    if resc_used['walltime']:\n                        walltime = int(resc_used['walltime'])\n                    else:\n                        walltime = 0\n                if 
'cput' not in resc_used:\n                    cput = 0\n                else:\n                    if resc_used['cput']:\n                        cput = int(resc_used['cput'])\n                    else:\n                        cput = 0\n                # Calculate cpupercent based on the reported values\n                if walltime > 0:\n                    cpupercent = (100.0 * cput) / walltime\n                else:\n                    cpupercent = 0\n                resc_used['cpupercent'] = pbs.pbs_int(int(cpupercent))\n                pbs.logjobmsg(jobid, '%s: CPU percent: %d' %\n                              (caller_name(), cpupercent))\n                # Now update cput\n                cput = self._get_cpu_usage(jobid)\n                if cput is None:\n                    pbs.logjobmsg(jobid, '%s: No CPU usage data' %\n                                  caller_name())\n                    return\n                cput = convert_time(str(cput) + 'ns')\n                resc_used['cput'] = pbs.duration(cput)\n                pbs.logjobmsg(jobid, '%s: CPU usage: %.3lf secs' %\n                              (caller_name(), cput))\n\n    def create_job(self, jobid, node):\n        \"\"\"\n        Creates the cgroup if it doesn't exist\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        # Iterate over the enabled subsystems\n        for subsys in self.subsystems:\n            # Create a directory for the job\n            old_umask = os.umask(0o022)\n            try:\n                path = self._cgroup_path(subsys, jobid=jobid)\n                if not os.path.exists(path):\n                    pbs.logmsg(pbs.EVENT_DEBUG2, '%s: Creating directory %s' %\n                               (caller_name(), path))\n                    os.makedirs(path, 0o755)\n                else:\n                    pbs.logmsg(pbs.EVENT_DEBUG3,\n                               '%s: Directory %s already exists' %\n                          
     (caller_name(), path))\n                if subsys == 'devices':\n                    self._setup_subsys_devices(jobid, node)\n            except OSError as exc:\n                raise CgroupConfigError('Failed to create directory: %s (%s)' %\n                                        (path, errno.errorcode[exc.errno]))\n            except Exception:\n                raise\n            finally:\n                os.umask(old_umask)\n\n    def configure_job(self, job, hostresc, node, cgroup, event_type):\n        \"\"\"\n        Determine the cgroup limits and configure the cgroups\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        jobid = job.id\n\n        # Get available totals for mem, vmem and hpmem from parent cgroup\n        # Calling node.get_memory_on_node etc. may give slightly different\n        # results if kernel drivers have freshly reserved memory since\n        # exechost_startup was called\n        mem_avail = 0\n        filename = self._cgroup_path('memory', 'limit_in_bytes')\n        if filename and os.path.isfile(filename):\n            with open(filename) as desc:\n                mem_avail = int(desc.readline())\n        vmem_avail = 0\n        filename = self._cgroup_path('memsw', 'limit_in_bytes')\n        if filename and os.path.isfile(filename):\n            with open(filename) as desc:\n                vmem_avail = int(desc.readline())\n        hpmem_avail = 0\n        filename = self._cgroup_path('hugetlb', 'limit_in_bytes')\n        if filename and os.path.isfile(filename):\n            with open(filename) as desc:\n                hpmem_avail = int(desc.readline())\n\n        # Sanity check vmem_avail >= mem_avail\n        # -- unnecessary by construction in exechost_startup\n        # but memsw may be disabled or swap accounting disabled\n        if vmem_avail < mem_avail:\n            vmem_avail = mem_avail\n\n        pbs.logmsg(pbs.EVENT_DEBUG4, \"configure_job:\"\n                   
\" total host avail mem=%s vmem=%s hpmem=%s\"\n                   % (mem_avail, vmem_avail, hpmem_avail))\n\n        mem_enabled = 'memory' in self.subsystems\n        vmem_enabled = 'memsw' in self.subsystems\n        if mem_enabled or vmem_enabled:\n            # Initialize mem variables\n            mem_requested = None\n            if 'mem' in hostresc:\n                mem_requested = convert_size(hostresc['mem'], 'kb')\n            mem_default = None\n            if mem_enabled and self.default('memory') is not None:\n                mem_default = size_as_int(self.default('memory'))\n            # Initialize vmem variables\n            vmem_requested = None\n            if 'vmem' in hostresc:\n                vmem_requested = convert_size(hostresc['vmem'], 'kb')\n            vmem_default = 0\n            if (vmem_enabled and (self.default('memsw') is not None)\n                    and (vmem_avail > mem_avail)):\n                vmem_default = size_as_int(self.default('memsw'))\n                if vmem_default > (vmem_avail - mem_avail):\n                    vmem_default = vmem_avail - mem_avail\n            # Initialize softmem variables\n            if 'soft_limit' in self.cfg['cgroup']['memory']:\n                softmem_enabled = self.cfg['cgroup']['memory']['soft_limit']\n            else:\n                softmem_enabled = False\n            # Determine the mem limit\n            if mem_requested is not None:\n                # mem requested may not exceed available\n                if size_as_int(mem_requested) > mem_avail:\n                    raise JobValueError(\n                        'mem requested (%s) exceeds mem available (%s)' %\n                        (mem_requested, str(mem_avail)))\n                mem_limit = mem_requested\n            else:\n                # mem was not requested\n                if (mem_default is None\n                        or not self.cfg['cgroup']['memory']\n                                       
['enforce_default']\n                        or (self.cfg['cgroup']['memory']\n                                    ['exclhost_ignore_default']\n                            and \"place\" in job.Resource_List\n                            and 'exclhost'\n                                in repr(job.Resource_List['place']))):\n                    mem_limit = str(mem_avail)\n                else:\n                    mem_limit = str(mem_default)\n            # Determine the vmem limit\n            if not vmem_enabled:\n                # set as equal, but will not be used to set cgroup limit\n                vmem_limit = mem_limit\n            elif vmem_requested is not None:\n                # vmem requested may not exceed available\n                if size_as_int(vmem_requested) > vmem_avail:\n                    raise JobValueError(\n                        'vmem requested (%s) exceeds vmem available (%s)' %\n                        (vmem_requested, str(vmem_avail)))\n                vmem_limit = vmem_requested\n            else:\n                # vmem was not requested\n                if (vmem_default is None\n                        or not self.cfg['cgroup']['memsw']\n                                       ['enforce_default']\n                        or (self.cfg['cgroup']['memsw']\n                                    ['exclhost_ignore_default']\n                            and 'place' in job.Resource_List\n                            and 'exclhost'\n                                in repr(job.Resource_List['place']))):\n                    vmem_limit = str(vmem_avail)\n                else:\n                    vmem_limit = str(min(vmem_avail,\n                                         size_as_int(mem_limit)\n                                         + vmem_default))\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       \"Limits computed from requests/defaults: \"\n                       \"mem: %s vmem: %s\" % (mem_limit, vmem_limit))\n            # 
Ensure vmem is at least as large as mem\n            if size_as_int(vmem_limit) < size_as_int(mem_limit):\n                vmem_limit = mem_limit\n            # Adjust for soft limits if enabled\n            if mem_enabled and softmem_enabled:\n                softmem_limit = mem_limit\n                # The hard memory limit is assigned the lesser of the vmem\n                # limit and available memory\n                if size_as_int(vmem_limit) < mem_avail:\n                    mem_limit = vmem_limit\n                else:\n                    mem_limit = str(mem_avail)\n            # Again, ensure vmem is at least as large as mem\n            if size_as_int(vmem_limit) < size_as_int(mem_limit):\n                vmem_limit = mem_limit\n            # Sanity checks when both memory and memsw are enabled\n            if mem_enabled and vmem_enabled:\n                if vmem_requested is not None:\n                    if size_as_int(vmem_requested) < size_as_int(vmem_limit):\n                        # The user requested an invalid limit\n                        raise JobValueError(\n                            'vmem requested (%s) under minimum possible (%s)'\n                            % (vmem_requested, vmem_limit))\n                if size_as_int(vmem_limit) > size_as_int(mem_limit):\n                    # This job may try to utilize swap\n                    if vmem_avail <= mem_avail:\n                        # No swap available\n                        if vmem_requested is not None:\n                            raise CgroupLimitError('Job might utilize swap'\n                                                   ' and no swap on host')\n                        else:\n                            # It got an impossible vmem>mem\n                            # undo that since there is no swap\n                            vmem_limit = mem_limit\n            # Assign mem and vmem\n            if mem_enabled:\n                if mem_requested is None:\n                
    pbs.logmsg(pbs.EVENT_DEBUG2,\n                               '%s: mem not requested, '\n                               'assigning %s to cgroup'\n                               % (caller_name(), mem_limit))\n                    hostresc['mem'] = pbs.size(mem_limit)\n                if softmem_enabled:\n                    hostresc['softmem'] = pbs.size(softmem_limit)\n            if vmem_enabled:\n                if vmem_requested is None:\n                    pbs.logmsg(pbs.EVENT_DEBUG2,\n                               '%s: vmem not requested, '\n                               'assigning %s to cgroup'\n                               % (caller_name(), vmem_limit))\n                    pbs.logmsg(pbs.EVENT_DEBUG4,\n                               '%s: INFO: vmem is enabled in the hook '\n                               'configuration file and should also be '\n                               'listed in the resources line of the '\n                               'scheduler configuration file' %\n                               caller_name())\n                    hostresc['vmem'] = pbs.size(vmem_limit)\n\n        # Initialize hpmem variables\n        hpmem_default = None\n        hpmem_limit = None\n        hpmem_enabled = 'hugetlb' in self.subsystems\n        if hpmem_enabled:\n            if self.default('hugetlb') is not None:\n                hpmem_default = size_as_int(self.default('hugetlb'))\n            if (hpmem_default is None\n                    or not self.cfg['cgroup']['hugetlb']\n                                   ['enforce_default']\n                    or (self.cfg['cgroup']['hugetlb']\n                                ['exclhost_ignore_default']\n                        and \"place\" in job.Resource_List\n                        and 'exclhost' in repr(job.Resource_List['place']))):\n                hpmem_default = hpmem_avail\n            if 'hpmem' in hostresc:\n                hpmem_limit = convert_size(hostresc['hpmem'], 'kb')\n            else:\n   
             hpmem_limit = str(hpmem_default)\n            # Assign hpmem\n            if size_as_int(hpmem_limit) > hpmem_avail:\n                raise JobValueError('hpmem limit (%s) exceeds available (%s)' %\n                                    (hpmem_limit, str(hpmem_avail)))\n            hostresc['hpmem'] = pbs.size(hpmem_limit)\n        # Initialize cpuset variables\n        cpuset_enabled = 'cpuset' in self.subsystems\n        if cpuset_enabled:\n            cpu_limit = 0\n            if 'ncpus' in hostresc:\n                cpu_limit = hostresc['ncpus']\n            # support weightless jobs in root cpuset\n            # do not clamp cpu_limit to min 1\n            hostresc['ncpus'] = pbs.pbs_int(cpu_limit)\n        # Find the available resources and assign the right ones to the job\n        assigned = dict()\n        jobdict = dict()\n        # Make two attempts since self.cleanup_orphans may actually fix the\n        # problem we see in a first attempt\n        for attempt in range(2):\n            if (hasattr(pbs, \"EXECJOB_RESIZE\")\n                    and event_type == pbs.EXECJOB_RESIZE):\n                # consider current job's resources as being available,\n                # where a subset of them would be re-assigned to the\n                # same job.\n                avail_resc = self.available_node_resources(node, jobid)\n            else:\n                avail_resc = self.available_node_resources(node)\n            assigned = self.assign_job(hostresc, avail_resc, node)\n            # If this was not the first attempt, do not bother trying to\n            # clean up again. 
This is handled immediately after the loop.\n            if attempt != 0:\n                break\n            if not assigned:\n                # No resources were assigned to the job, most likely because\n                # a cgroup has not been cleaned up yet\n                pbs.logmsg(pbs.EVENT_DEBUG2,\n                           '%s: Failed to assign job resources' %\n                           caller_name())\n                pbs.logmsg(pbs.EVENT_DEBUG2, '%s: Resyncing local job data' %\n                           caller_name())\n                # Collect the jobs on the node (try reading mom_priv/jobs)\n                try:\n                    jobdict = node.gather_jobs_on_node(cgroup)\n                except Exception:\n                    jobdict = dict()\n                    pbs.logmsg(pbs.EVENT_DEBUG2,\n                               '%s: Failed to gather local job data' %\n                               caller_name())\n                # There may not be a .JB file present for this job yet\n                if jobid not in jobdict:\n                    jobdict[jobid] = time.time()\n                self.cleanup_orphans(jobdict)\n                # Resynchronize after cleanup\n                self.assigned_resources = self._get_assigned_cgroup_resources()\n        if not assigned:\n            pbs.logmsg(pbs.EVENT_DEBUG2, '%s: Assignment of resources failed '\n                       'for %s, attempting cleanup' % (caller_name(), jobid))\n            # Cleanup cgroups for jobs not present on this node\n            jobdict = node.gather_jobs_on_node(cgroup)\n            if jobdict and jobid in jobdict:\n                del jobdict[jobid]\n            self.cleanup_orphans(jobdict)\n            # Log a message and rerun the job\n            pbs.logmsg(pbs.EVENT_DEBUG2, '%s: Requeuing job %s' %\n                       (caller_name(), jobid))\n            pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Run count for job %s: %d' %\n                       (caller_name(), jobid, 
pbs.event().job.run_count))\n            pbs.event().job.rerun()\n            raise CgroupProcessingError('Failed to assign resources')\n        # Print out the assigned resources\n        pbs.logmsg(pbs.EVENT_DEBUG2,\n                   'Assigned resources: %s' % (assigned))\n        self.assigned_resources = assigned\n        if cpuset_enabled:\n            # Do not remove the ncpus key if it exists:\n            # used by the \"cpu\" controller now.\n            for key in ['cpuset.cpus', 'cpuset.mems']:\n                if key in assigned:\n                    hostresc[key] = assigned[key]\n                else:\n                    pbs.logmsg(pbs.EVENT_DEBUG2,\n                               'Key: %s not found in assigned' % key)\n        # Initialize devices variables\n        key = 'devices'\n        if key in self.subsystems:\n            if key in assigned:\n                hostresc[key] = assigned[key]\n            else:\n                pbs.logmsg(pbs.EVENT_DEBUG2,\n                           'Key: %s not found in assigned' % key)\n        # Apply the resource limits to the cgroups\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Setting cgroup limits for: %s' %\n                   (caller_name(), hostresc))\n        # The vmem limit must be set after the mem limit, so sort the keys;\n        # sorting also ensures we get to cpuset.cpus before cpuset.mems,\n        # important for zero-cpu jobs migrated to the root cpuset\n        for resc in sorted(hostresc):\n            if resc == 'ncpus':\n                # In case of HT, may need to multiply the ncpus;\n                # done here since \"node\" is not passed down to set_limit\n                # This only sets the cpu controller limits/shares,\n                # since the cpuset stuff is done through\n                # cpuset.cpus and cpuset.mems\n                # Note the set_limit in the final 'else' should be\n                # skipped -- its replacement is at the end here\n                htpc = 1\n                
if (self.cfg['cgroup']['cpu']['enabled']\n                        and self.cfg['use_hyperthreads']\n                        and self.cfg['ncpus_are_cores']):\n                    if 'hyperthreads_per_core' in node.cpuinfo:\n                        htpc = node.cpuinfo['hyperthreads_per_core']\n                self.set_limit(resc, hostresc[resc] * htpc, jobid)\n            elif ((resc == \"cpuset.cpus\")\n                  and not hostresc['cpuset.cpus']):\n                # zero cpus jobs -- delete the cpuset made for the job;\n                # later allows job processes to be placed in the root cpuset.\n                # Note the set_limit in the final else should likewise\n                # be skipped.\n                # Since the advent of \"resize\" we may need to\n                # clean up processes in a pre-existing cpuset\n                finished = False\n                giveup_time = time.time() + self.cfg['kill_timeout']\n                path = self._cgroup_path('cpuset', '', jobid)\n                while not finished:\n                    success = self._remove_cgroup(path)\n                    if success:\n                        finished = True\n                    elif time.time() > giveup_time:\n                        finished = True\n                    else:\n                        time.sleep(0.2)\n            else:\n                # For all the rest just pass hostresc[resc] down to set_limit\n                self.set_limit(resc, hostresc[resc], jobid)\n        # Set additional parameters\n        # Note some kernels will surprisingly not let you set things\n        # to the value that is already there in some corner cases\n        # hence some of the reads to see \"if we need to write\"\n        if cpuset_enabled:\n            path = self._cgroup_path('cpuset', 'mem_hardwall', jobid)\n            lines = self.read_value(path)\n            curval = 0\n            if lines and lines[0] == '1':\n                curval = 1\n            if 
self.cfg['cgroup']['cpuset']['mem_hardwall']:\n                if curval == 0:\n                    self.write_value(path, '1')\n            else:\n                if curval == 1:\n                    self.write_value(path, '0')\n            path = self._cgroup_path('cpuset', 'memory_spread_page', jobid)\n            lines = self.read_value(path)\n            curval = 0\n            if lines and lines[0] == '1':\n                curval = 1\n            if self.cfg['cgroup']['cpuset']['memory_spread_page']:\n                if curval == 0:\n                    self.write_value(path, '1')\n            else:\n                if curval == 1:\n                    self.write_value(path, '0')\n\n    def _kill_tasks(self, tasks_file):\n        \"\"\"\n        Kill any processes contained within a tasks file\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        if not os.path.isfile(tasks_file):\n            return 0\n        count = 0\n        with open(tasks_file, 'r') as tasks_desc:\n            for line in tasks_desc:\n                count += 1\n                try:\n                    os.kill(int(line.strip()), signal.SIGKILL)\n                except Exception:\n                    pass\n        # Give the OS a moment to update the tasks file\n        time.sleep(0.1)\n        count = 0\n        try:\n            with open(tasks_file, 'r') as tasks_desc:\n                for line in tasks_desc:\n                    count += 1\n                    pid = line.strip()\n                    filename = os.path.join(os.sep, 'proc', pid, 'status')\n                    statlist = []\n                    try:\n                        with open(filename, 'r') as status_desc:\n                            for line2 in status_desc:\n                                if line2.find('Name:') != -1:\n                                    statlist.append(line2.strip())\n                                if line2.find('State:') != -1:\n             
                       statlist.append(line2.strip())\n                                if line2.find('Uid:') != -1:\n                                    statlist.append(line2.strip())\n                    except Exception:\n                        pass\n                    pbs.logmsg(pbs.EVENT_DEBUG2, '%s: PID %s survived: %s' %\n                               (caller_name(), pid, statlist))\n        except OSError as exc:\n            # Only OSError carries errno; a vanished tasks file is expected\n            if exc.errno != errno.ENOENT:\n                raise\n        return count\n\n    def _delete_cgroup_children(self, path):\n        \"\"\"\n        Recursively delete all children within a cgroup, but not the parent\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        if not os.path.isdir(path):\n            pbs.logmsg(pbs.EVENT_DEBUG4, '%s: No such directory: %s' %\n                       (caller_name(), path))\n            return 0\n        remaining_children = 0\n        for filename in os.listdir(path):\n            subdir = os.path.join(path, filename)\n            if not os.path.isdir(subdir):\n                continue\n            remaining_children += self._delete_cgroup_children(subdir)\n            tasks_file = os.path.join(subdir, 'tasks')\n            remaining_tasks = self._kill_tasks(tasks_file)\n            if remaining_tasks > 0:\n                remaining_children += 1\n                continue\n            pbs.logmsg(pbs.EVENT_DEBUG2, '%s: Removing directory %s' %\n                       (caller_name(), subdir))\n            try:\n                os.rmdir(subdir)\n            except Exception as exc:\n                pbs.logmsg(pbs.EVENT_SYSTEM,\n                           'Error removing cgroup path %s: %s' %\n                           (subdir, str(exc)))\n        return remaining_children\n\n    def _remove_cgroup(self, path, jobid=None):\n        \"\"\"\n        Perform the actual removal of the cgroup directory.\n        Make only one attempt at killing tasks in 
cgroup,\n        since this method could be called many times (for N\n        directories times M jobs).\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        if not os.path.isdir(path):\n            pbs.logmsg(pbs.EVENT_DEBUG4, '%s: No such directory: %s' %\n                       (caller_name(), path))\n            return True\n        if not jobid:\n            parent = path\n        else:\n            parent = os.path.join(path, jobid)\n        # Recursively delete children\n        self._delete_cgroup_children(parent)\n        # Delete the parent\n        tasks_file = os.path.join(parent, 'tasks')\n        remaining = 0\n        if not os.path.isfile(tasks_file):\n            pbs.logmsg(pbs.EVENT_DEBUG2, '%s: No such file: %s' %\n                       (caller_name(), tasks_file))\n        else:\n            try:\n                remaining = self._kill_tasks(tasks_file)\n            except Exception:\n                pass\n        if remaining == 0:\n            pbs.logmsg(pbs.EVENT_DEBUG2, '%s: Removing directory %s' %\n                       (caller_name(), parent))\n            for _ in range(2):\n                try:\n                    os.rmdir(parent)\n                except OSError as exc:\n                    pbs.logmsg(pbs.EVENT_SYSTEM,\n                               'OS error removing cgroup path %s: %s' %\n                               (parent, errno.errorcode[exc.errno]))\n                except Exception as exc:\n                    pbs.logmsg(pbs.EVENT_SYSTEM,\n                               'Failed to remove cgroup path %s: %s' %\n                               (parent, exc))\n                    raise\n                if not os.path.isdir(parent):\n                    break\n                time.sleep(0.5)\n            return True\n        if not os.path.isdir(parent):\n            return True\n        # Cgroup removal has failed\n        pbs.logmsg(pbs.EVENT_SYSTEM, 'cgroup still has %d tasks: 
%s' %\n                   (remaining, parent))\n        # Nodes are taken offline in the delete() method\n        return False\n\n    def cleanup_hook_data(self, local_jobs=None):\n        \"\"\"\n        Remove hook storage files for jobs no longer present on this host\n        \"\"\"\n        if local_jobs is None:\n            local_jobs = []\n        pattern = os.path.join(self.hook_storage_dir, '[0-9]*.*')\n        for filename in glob.glob(pattern):\n            if os.path.basename(filename) in local_jobs:\n                continue\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       'Stale file %s to be removed' % filename)\n            try:\n                os.remove(filename)\n            except Exception as exc:\n                pbs.logmsg(pbs.EVENT_ERROR, 'Error removing file: %s' % exc)\n\n    def cleanup_env_files(self, local_jobs=None):\n        \"\"\"\n        Remove job environment files for jobs no longer present on this host\n        \"\"\"\n        if local_jobs is None:\n            local_jobs = []\n        pattern = os.path.join(self.host_job_env_dir, '[0-9]*.env')\n        for filename in glob.glob(pattern):\n            (jobid, extension) = os.path.splitext(os.path.basename(filename))\n            if jobid in local_jobs:\n                continue\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       'Stale file %s to be removed' % filename)\n            try:\n                os.remove(filename)\n            except Exception as exc:\n                pbs.logmsg(pbs.EVENT_ERROR, 'Error removing file: %s' % exc)\n\n    def cleanup_orphans(self, local_jobs):\n        \"\"\"\n        Remove cgroup directories that are not associated with a local job\n        and clean up any environment and assigned_resources files\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'Local jobs: %s' % local_jobs)\n        self.cleanup_hook_data(local_jobs)\n        self.cleanup_env_files(local_jobs)\n        remaining = 0\n        # Always do systemd first, to prevent it from re-\"mirroring\"\n        # that directory into other hierarchies behind our back\n        if 'systemd' in self.paths:\n            keys_to_process = \\\n                (['systemd']\n                 + [x for x in self.paths if 
x != 'systemd'])\n        else:\n            keys_to_process = [x for x in self.paths]\n        for key in keys_to_process:\n            path = os.path.dirname(self._cgroup_path(key))\n            # Identify any orphans and append an orphan suffix\n            pattern = self._glob_subdir_wildcard()\n            pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Searching for orphans: %s' %\n                       (caller_name(), os.path.join(path, pattern)))\n            for subdir in glob.glob(os.path.join(path, pattern)):\n                jobid = os.path.basename(subdir)\n                if jobid in local_jobs or jobid.endswith('.orphan'):\n                    continue\n                # Now rename the directory.\n                filename = jobid + '.orphan'\n                new_subdir = os.path.join(path, filename)\n                pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Renaming %s to %s' %\n                           (caller_name(), subdir, new_subdir))\n                # Make sure the directory still exists before it is renamed\n                # or the logs could contain extraneous messages\n                if os.path.exists(subdir):\n                    try:\n                        os.rename(subdir, new_subdir)\n                    except Exception:\n                        pbs.logmsg(pbs.EVENT_DEBUG2,\n                                   '%s: Failed to rename %s to %s' %\n                                   (caller_name(), subdir, new_subdir))\n            # Attempt to remove the orphans\n            pattern = self._glob_subdir_wildcard(extension='orphan')\n            pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Cleaning up orphans: %s' %\n                       (caller_name(), os.path.join(path, pattern)))\n            for subdir in glob.glob(os.path.join(path, pattern)):\n                pbs.logmsg(pbs.EVENT_DEBUG2,\n                           '%s: Removing orphaned cgroup: %s' %\n                           (caller_name(), subdir))\n                if not self._remove_cgroup(subdir):\n   
                 pbs.logmsg(pbs.EVENT_DEBUG,\n                               '%s: Removing orphaned cgroup %s failed ' %\n                               (caller_name(), subdir))\n                    remaining += 1\n        return remaining\n\n    def delete(self, jobid, offline_node=True):\n        \"\"\"\n        Removes the cgroup directories for a job\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        # Make multiple attempts to kill tasks in the cgroup. Keep\n        # trying for kill_timeout seconds.\n        if not jobid:\n            raise ValueError('Invalid job ID')\n        finished = False\n        giveup_time = time.time() + self.cfg['kill_timeout']\n        try:\n            while not finished:\n                failure = False\n                # Always do systemd first\n                if 'systemd' in self.paths:\n                    keys_to_process = \\\n                        (['systemd']\n                         + [x for x in self.paths if x != 'systemd'])\n                else:\n                    keys_to_process = [x for x in self.paths]\n                for key in keys_to_process:\n                    cgroup_path = self._cgroup_path(key)\n                    path = os.path.dirname(self._cgroup_path(key))\n                    subdir_basename = jobid\n                    subdir = os.path.join(path, subdir_basename)\n                    subdir_parent = path\n                    # Make sure it still exists\n                    if not os.path.isdir(subdir):\n                        pbs.logmsg(pbs.EVENT_DEBUG4,\n                                   '%s: Skipping because %s is gone' %\n                                   (caller_name(), subdir))\n                        continue\n                    # Remove it\n                    pbs.logmsg(pbs.EVENT_DEBUG4,\n                               '%s: Attempting to delete %s' %\n                               (caller_name(), subdir))\n                    if 
not self._remove_cgroup(subdir_parent, jobid):\n                        pbs.logmsg(pbs.EVENT_DEBUG2, '%s: Unable to '\n                                   'delete cgroup for job %s' %\n                                   (caller_name(), jobid))\n                    # Check again\n                    if os.path.isdir(subdir):\n                        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Deletion '\n                                   'failed: %s still exists' %\n                                   (caller_name(), subdir))\n                        failure = True\n                    else:\n                        pbs.logmsg(pbs.EVENT_DEBUG4,\n                                   '%s: Deleted %s' %\n                                   (caller_name(), subdir))\n                if not failure:\n                    finished = True\n                elif time.time() > giveup_time:\n                    finished = True\n                else:\n                    time.sleep(0.2)\n        except Exception as exc:\n            failure = True\n            pbs.logmsg(pbs.EVENT_DEBUG, '%s: Error removing cgroup '\n                       'for %s: %s' % (caller_name(), jobid, exc))\n\n        if finished and not failure:\n            return True\n\n        # Handle deletion failure\n        if not offline_node:\n            pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Offline not requested' %\n                       caller_name())\n            return False\n        node = NodeUtils(self.cfg)\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: NodeUtils class instantiated' %\n                   caller_name())\n        try:\n            node.take_node_offline()\n        except Exception as exc:\n            pbs.logmsg(pbs.EVENT_DEBUG, '%s: Failed to offline node: %s' %\n                       (caller_name(), exc))\n        return False\n\n    def read_value(self, filename):\n        \"\"\"\n        Read value(s) from a limit file\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % 
caller_name())\n        lines = []\n        try:\n            with open(filename, 'r') as desc:\n                lines = desc.readlines()\n        except IOError:\n            pbs.logmsg(pbs.EVENT_SYSTEM, '%s: Failed to read file: %s' %\n                       (caller_name(), filename))\n        return [x.strip() for x in lines]\n\n    def write_value(self, filename, value, mode='w'):\n        \"\"\"\n        Write a value to a limit file\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: writing %s to %s' %\n                   (caller_name(), value, filename))\n        try:\n            with open(filename, mode) as desc:\n                desc.write(str(value) + '\\n')\n        except IOError as exc:\n            if exc.errno == errno.ENOENT:\n                pbs.logmsg(pbs.EVENT_SYSTEM, '%s: No such file: %s' %\n                           (caller_name(), filename))\n            elif exc.errno in [errno.EACCES, errno.EPERM]:\n                pbs.logmsg(pbs.EVENT_SYSTEM, '%s: Permission denied: %s' %\n                           (caller_name(), filename))\n            elif exc.errno == errno.EBUSY:\n                raise CgroupBusyError('Limit %s rejected: %s' %\n                                      (value, filename))\n            elif exc.errno == errno.ENOSPC:\n                raise CgroupLimitError('Limit %s too small: %s' %\n                                       (value, filename))\n            elif exc.errno == errno.EINVAL:\n                raise CgroupLimitError('Invalid limit value: %s, file: %s' %\n                                       (value, filename))\n            else:\n                pbs.logmsg(pbs.EVENT_SYSTEM,\n                           '%s: Uncaught exception writing %s to %s' %\n                           (caller_name(), value, filename))\n                raise\n\n    def _get_cfs_quota_us(self, jobid=''):\n        
pbs.logmsg(pbs.EVENT_DEBUG3, \"%s: Method called\" % (caller_name()))\n        # default for _cgroup_path for parent is empty string\n        try:\n            with open(self._cgroup_path('cpu', 'cfs_quota_us',\n                                        jobid), 'r') as fd:\n                return int(fd.readline().strip())\n        except Exception:\n            return None\n\n    def _get_mem_failcnt(self, jobid):\n        \"\"\"\n        Return memory failcount\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        try:\n            with open(self._cgroup_path('memory', 'failcnt', jobid),\n                      'r') as desc:\n                return int(desc.readline().strip())\n        except Exception:\n            return None\n\n    def _get_memsw_failcnt(self, jobid):\n        \"\"\"\n        Return vmem failcount\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        try:\n            with open(self._cgroup_path('memsw', 'failcnt', jobid),\n                      'r') as desc:\n                return int(desc.readline().strip())\n        except Exception:\n            return None\n\n    def _get_hugetlb_failcnt(self, jobid):\n        \"\"\"\n        Return hpmem failcount\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        try:\n            with open(self._cgroup_path('hugetlb', 'failcnt', jobid),\n                      'r') as desc:\n                return int(desc.readline().strip())\n        except Exception:\n            return None\n\n    def _get_max_mem_usage(self, jobid):\n        \"\"\"\n        Return the max usage of memory in bytes\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        try:\n            with open(self._cgroup_path('memory', 'max_usage_in_bytes',\n                                        jobid), 'r') as desc:\n                return 
int(desc.readline().strip())\n        except Exception:\n            return None\n\n    def _get_max_memsw_usage(self, jobid):\n        \"\"\"\n        Return the max usage of memsw in bytes\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        try:\n            with open(self._cgroup_path('memsw', 'max_usage_in_bytes', jobid),\n                      'r') as desc:\n                return int(desc.readline().strip())\n        except Exception:\n            return None\n\n    def _get_max_hugetlb_usage(self, jobid):\n        \"\"\"\n        Return the max usage of hugetlb in bytes\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        try:\n            with open(self._cgroup_path('hugetlb', 'max_usage_in_bytes',\n                                        jobid),\n                      'r') as desc:\n                return int(desc.readline().strip())\n        except Exception:\n            return None\n\n    def _get_cpu_usage(self, jobid):\n        \"\"\"\n        Return the cpuacct.usage in cpu seconds\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        path = self._cgroup_path('cpuacct', 'usage', jobid)\n        try:\n            with open(path, 'r') as desc:\n                return int(desc.readline().strip())\n        except Exception:\n            return None\n\n    def select_cpus(self, path, ncpus):\n        \"\"\"\n        Assign CPUs to the cpuset\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: path is %s' % (caller_name(), path))\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: ncpus is %s' %\n                   (caller_name(), ncpus))\n        if ncpus < 1:\n            ncpus = 1\n        # Must select from those currently available\n        cpufile = os.path.basename(path)\n        base = os.path.dirname(path)\n        parent = 
os.path.dirname(base)\n        with open(os.path.join(parent, cpufile), 'r') as desc:\n            avail = expand_list(desc.read().strip())\n        if len(avail) < 1:\n            raise CgroupProcessingError('No CPUs available in cgroup')\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Available CPUs: %s' %\n                   (caller_name(), avail))\n        for filename in glob.glob(os.path.join(parent, '[0-9]*', cpufile)):\n            # The '.orphan' suffix is on the cgroup directory, not the file\n            if os.path.dirname(filename).endswith('.orphan'):\n                continue\n            with open(filename, 'r') as desc:\n                cpus = expand_list(desc.read().strip())\n            for entry in cpus:\n                if entry in avail:\n                    avail.remove(entry)\n        if len(avail) < ncpus:\n            raise CgroupProcessingError('Insufficient CPUs in cgroup')\n        if len(avail) == ncpus:\n            return avail\n        # TODO: Try to minimize NUMA nodes based on memory requirement\n        return avail[:ncpus]\n\n    def _get_error_msg(self, jobid):\n        \"\"\"\n        Return the error message in system message file\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        try:\n            proc = subprocess.Popen(['dmesg'], shell=False,\n                                    stdout=subprocess.PIPE,\n                                    universal_newlines=True)\n            out = proc.communicate()[0]\n            # if we get a non-str type then convert before calling splitlines\n            # should not happen since we pass universal_newlines True\n            out_split = stringified_output(out).splitlines()\n        except Exception:\n            return ''\n        out_split.reverse()\n        # Check to see if the job id is found in dmesg output\n        for line in out_split:\n            start = line.find('Killed process ')\n            if start < 0:\n                start = line.find('Task in /%s' % self.cfg['cgroup_prefix'])\n            if start < 0:\n            
    continue\n            kill_line = line[start:]\n            job_start = line.find(jobid)\n            if job_start < 0:\n                continue\n            return kill_line\n\n        # Found nothing -- more recent kernels have different messages\n        for line in out_split:\n            start = line.find('oom-kill')\n            if start < 0:\n                continue\n            kill_line = line[start:]\n            job_start = line.find(jobid)\n            if job_start < 0:\n                continue\n            return kill_line\n\n        return ''\n\n    def write_job_env_file(self, jobid, env_list):\n        \"\"\"\n        Write out host cgroup environment for this job\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        jobid = str(jobid)\n        if not os.path.exists(self.host_job_env_dir):\n            os.makedirs(self.host_job_env_dir, 0o755)\n        # Write out assigned_resources\n        try:\n            lines = \"\\n\".join(env_list)\n            filename = self.host_job_env_filename % jobid\n            with open(filename, 'w') as desc:\n                desc.write(lines)\n            pbs.logmsg(pbs.EVENT_DEBUG4, 'Wrote out file: %s' % (filename))\n            pbs.logmsg(pbs.EVENT_DEBUG4, 'Data: %s' % (lines))\n            return True\n        except Exception:\n            return False\n\n    def write_cgroup_assigned_resources(self, jobid):\n        \"\"\"\n        Write out host cgroup assigned resources for this job\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        jobid = str(jobid)\n        if not os.path.exists(self.hook_storage_dir):\n            os.makedirs(self.hook_storage_dir, 0o700)\n        # Write out assigned_resources\n        try:\n            json_str = json.dumps(self.assigned_resources)\n            filename = os.path.join(self.hook_storage_dir, jobid)\n            with open(filename, 'w') as desc:\n                desc.write(json_str)\n            
pbs.logmsg(pbs.EVENT_DEBUG4, 'Wrote out file: %s' %\n                       (os.path.join(self.hook_storage_dir, jobid)))\n            pbs.logmsg(pbs.EVENT_DEBUG4, 'Data: %s' % (json_str))\n            return True\n        except Exception:\n            return False\n\n    def read_cgroup_assigned_resources(self, jobid):\n        \"\"\"\n        Read assigned resources from job file stored in hook storage area\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Method called' % caller_name())\n        jobid = str(jobid)\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'Host assigned resources: %s' %\n                   (self.assigned_resources))\n        hrfile = os.path.join(self.hook_storage_dir, jobid)\n        if os.path.isfile(hrfile):\n            # Read in assigned_resources\n            try:\n                with open(hrfile, 'r') as desc:\n                    json_data = json.load(desc, object_hook=decode_dict)\n                self.assigned_resources = json_data\n                pbs.logmsg(pbs.EVENT_DEBUG4,\n                           'Host assigned resources: %s' %\n                           (self.assigned_resources))\n            except IOError:\n                raise CgroupConfigError('I/O error reading config file')\n            except json.JSONDecodeError:\n                raise CgroupConfigError(\n                    'JSON parsing error reading config file')\n        return self.assigned_resources is not None\n\n    def add_jobid_to_cgroup_jobs(self, jobid):\n        \"\"\"\n        Add a job ID to the file where local jobs are maintained\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'Adding jobid %s to cgroup_jobs' % jobid)\n        try:\n            with open(self.cgroup_jobs_file, 'r+') as fd:\n                jobdict = eval(fd.read())\n                if not isinstance(jobdict, dict):\n                    pbs.logmsg(pbs.EVENT_ERROR, 'Incompatibly formatted '\n                               'cgroup_jobs; emptying it first')\n                 
   jobdict = dict()\n                if jobid not in jobdict:\n                    jobdict[jobid] = time.time()\n                    fd.seek(0)\n                    fd.write(str(jobdict))\n                    fd.truncate()\n        except IOError:\n            pbs.logmsg(pbs.EVENT_DEBUG, 'Failed to open cgroup_jobs file')\n            raise\n        except SyntaxError:\n            pbs.logmsg(pbs.EVENT_ERROR, 'Incompatibly formatted cgroup_jobs; '\n                                        'emptying it first')\n            jobdict = dict()\n            jobdict[jobid] = time.time()\n            try:\n                with open(self.cgroup_jobs_file, 'w') as fd:\n                    fd.write(str(jobdict))\n            except Exception:\n                pbs.logmsg(pbs.EVENT_DEBUG,\n                           'Error adding jobid %s to cgroup_jobs' % jobid)\n                self.empty_cgroup_jobs_file()\n\n    def remove_jobid_from_cgroup_jobs(self, jobid):\n        \"\"\"\n        Remove a job ID from the file where local jobs are maintained\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4,\n                   'Removing jobid %s from cgroup_jobs' % jobid)\n        try:\n            with open(self.cgroup_jobs_file, 'r+') as fd:\n                jobdict = eval(fd.read())\n                if not isinstance(jobdict, dict):\n                    pbs.logmsg(pbs.EVENT_ERROR, 'Incompatibly formatted '\n                               'cgroup_jobs; emptying it')\n                    jobdict = dict()\n                if jobid in jobdict:\n                    del jobdict[jobid]\n                    fd.seek(0)\n                    fd.write(str(jobdict))\n                    fd.truncate()\n        except IOError:\n            pbs.logmsg(pbs.EVENT_DEBUG, 'Failed to open cgroup_jobs file')\n            raise\n        except SyntaxError:\n            pbs.logmsg(pbs.EVENT_ERROR, 'Incompatibly formatted cgroup_jobs; '\n                                        'emptying it')\n            self.empty_cgroup_jobs_file()\n\n    def read_cgroup_jobs(self):\n        
\"\"\"\n        Read the file where local jobs are maintained\n        \"\"\"\n        jobdict = dict()\n        try:\n            with open(self.cgroup_jobs_file, 'r') as fd:\n                jobdict = eval(fd.read())\n                if not isinstance(jobdict, dict):\n                    pbs.logmsg(pbs.EVENT_ERROR, 'Incompatibly formatted '\n                               'cgroup_jobs; emptying it')\n                    jobdict = dict()\n                    self.empty_cgroup_jobs_file()\n        except IOError:\n            pbs.logmsg(pbs.EVENT_DEBUG, 'Failed to open cgroup_jobs file')\n            raise\n        except SyntaxError:\n            pbs.logmsg(pbs.EVENT_ERROR, 'Incompatibly formatted cgroup_jobs; '\n                                        'emptying it')\n            self.empty_cgroup_jobs_file()\n            jobdict = dict()\n        cutoff = time.time() - float(self.cfg['job_setup_timeout'])\n        result = {key: val for key, val in jobdict.items() if val >= cutoff}\n        if len(result) != len(jobdict):\n            pbs.logmsg(pbs.EVENT_DEBUG, 'Removing stale jobs from cgroup_jobs')\n            try:\n                with open(self.cgroup_jobs_file, 'w') as fd:\n                    fd.write(str(result))\n            except Exception:\n                # we tolerate even bad files\n                pass\n        return result\n\n    def delete_cgroup_jobs_file(self, jobid):\n        \"\"\"\n        Delete the file where local jobs are maintained\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'Deleting file: %s' %\n                   self.cgroup_jobs_file)\n        if os.path.isfile(self.cgroup_jobs_file):\n            os.remove(self.cgroup_jobs_file)\n\n    def empty_cgroup_jobs_file(self):\n        \"\"\"\n        Remove all keys from the file where local jobs are maintained\n        \"\"\"\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'Emptying file: %s' %\n                   self.cgroup_jobs_file)\n        try:\n            with 
open(self.cgroup_jobs_file, 'w') as fd:\n                fd.write(str(dict()))\n        except IOError:\n            pbs.logmsg(pbs.EVENT_DEBUG, 'Failed to open cgroup_jobs file: %s' %\n                       self.cgroup_jobs_file)\n            raise\n\n\ndef set_global_vars():\n    \"\"\"\n    Define some global variables that the hook may use\n    \"\"\"\n    global PBS_EXEC\n    global PBS_HOME\n    global PBS_MOM_HOME\n    global PBS_MOM_JOBS\n    # Determine location of PBS_HOME, PBS_MOM_HOME, and PBS_EXEC. Each of\n    # these should have been initialized to an empty string near the\n    # beginning of this hook.\n    # Try the environment first\n    if not PBS_EXEC and 'PBS_EXEC' in os.environ:\n        PBS_EXEC = os.environ['PBS_EXEC']\n    if not PBS_HOME and 'PBS_HOME' in os.environ:\n        PBS_HOME = os.environ['PBS_HOME']\n    if not PBS_MOM_HOME and 'PBS_MOM_HOME' in os.environ:\n        PBS_MOM_HOME = os.environ['PBS_MOM_HOME']\n    # Try the built-in config values next\n    pbs_conf = pbs.get_pbs_conf()\n    if pbs_conf:\n        if not PBS_EXEC and 'PBS_EXEC' in pbs_conf:\n            PBS_EXEC = pbs_conf['PBS_EXEC']\n        if not PBS_HOME and 'PBS_HOME' in pbs_conf:\n            PBS_HOME = pbs_conf['PBS_HOME']\n        if not PBS_MOM_HOME and 'PBS_MOM_HOME' in pbs_conf:\n            PBS_MOM_HOME = pbs_conf['PBS_MOM_HOME']\n    # Try reading the config file directly\n    if not PBS_EXEC or not PBS_HOME or not PBS_MOM_HOME:\n        if 'PBS_CONF_FILE' in os.environ:\n            pbs_conf_file = os.environ['PBS_CONF_FILE']\n        else:\n            pbs_conf_file = os.path.join(os.sep, 'etc', 'pbs.conf')\n        regex = re.compile(r'\\s*([^\\s]+)\\s*=\\s*([^\\s]+)\\s*')\n        try:\n            with open(pbs_conf_file, 'r') as desc:\n                for line in desc:\n                    match = regex.match(line)\n                    if match:\n                        if not PBS_EXEC and match.group(1) == 'PBS_EXEC':\n                            
PBS_EXEC = match.group(2)\n                        if not PBS_HOME and match.group(1) == 'PBS_HOME':\n                            PBS_HOME = match.group(2)\n                        if not PBS_MOM_HOME and (match.group(1) ==\n                                                 'PBS_MOM_HOME'):\n                            PBS_MOM_HOME = match.group(2)\n        except Exception:\n            pass\n    # If PBS_MOM_HOME is not set, use the PBS_HOME value\n    if not PBS_MOM_HOME:\n        PBS_MOM_HOME = PBS_HOME\n    PBS_MOM_JOBS = os.path.join(PBS_MOM_HOME, 'mom_priv', 'jobs')\n    # Sanity check to make sure each global path is set\n    if not PBS_EXEC:\n        raise CgroupConfigError('Unable to determine PBS_EXEC')\n    if not PBS_HOME:\n        raise CgroupConfigError('Unable to determine PBS_HOME')\n    if not PBS_MOM_HOME:\n        raise CgroupConfigError('Unable to determine PBS_MOM_HOME')\n\n\ndef missing_str(memspecs):\n    \"\"\"\n    Given two of mem/vmem/cgswap, return a 'resource=value' string for\n    the missing third one, or an empty string if nothing must be added\n    \"\"\"\n    event = pbs.event()\n    if 'vmem' in memspecs and 'mem' in memspecs:\n        try:\n            if size_as_int(memspecs['vmem']) > 0:\n                vmem_size = pbs.size(memspecs['vmem'])\n            else:\n                vmem_size = pbs.size(\"0B\")\n            if size_as_int(memspecs['mem']) > 0:\n                mem_size = pbs.size(memspecs['mem'])\n            else:\n                mem_size = pbs.size(\"0B\")\n            if mem_size > vmem_size:\n                event.reject(\"invalid specification: mem>vmem\")\n            elif mem_size == vmem_size:\n                # Avoid creating a \"0mb\" pbs.size\n                return \"cgswap=0B\"\n            else:\n                cgswap_size = vmem_size - mem_size\n                return (\"cgswap=\"\n                        + str(cgswap_size))\n        except Exception:\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       str(traceback.format_exc().strip().splitlines()))\n            event.reject(\"Invalid (v)mem size requested\")\n\n    if 'mem' in memspecs and 'cgswap' in memspecs:\n        
try:\n            if size_as_int(memspecs['mem']) > 0:\n                mem_size = pbs.size(memspecs['mem'])\n            else:\n                mem_size = pbs.size(\"0B\")\n            if size_as_int(memspecs['cgswap']) > 0:\n                cgswap_size = pbs.size(memspecs['cgswap'])\n                vmem_size = mem_size + cgswap_size\n            else:\n                vmem_size = mem_size\n            return (\"vmem=\"\n                    + str(vmem_size))\n        except Exception:\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       str(traceback.format_exc().strip().splitlines()))\n            event.reject(\"Invalid mem or cgswap size requested\")\n\n    if 'vmem' in memspecs and 'cgswap' in memspecs:\n        try:\n            if size_as_int(memspecs['vmem']) > 0:\n                vmem_size = pbs.size(memspecs['vmem'])\n            else:\n                vmem_size = pbs.size(\"0B\")\n            # Workaround for bug: no arithmetic on zero pbs.size\n            # since user may have specified e.g. 
\"0mb\" cgswap\n            if size_as_int(memspecs['cgswap']) == 0:\n                mem_size = vmem_size\n                return (\"mem=\" + str(mem_size))\n            else:\n                cgswap_size = pbs.size(memspecs['cgswap'])\n                if cgswap_size >= vmem_size:\n                    event.reject(\"invalid specification: mem<=0\")\n                else:\n                    mem_size = vmem_size - cgswap_size\n                    return (\"mem=\" + str(mem_size))\n        except Exception:\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       str(traceback.format_exc().strip().splitlines()))\n            event.reject(\"Invalid (v)mem size requested\")\n    return \"\"\n\n\ndef fill_cgswap():\n    selspec = None\n    event = pbs.event()\n\n    job = event.job\n    job_o = None\n    if event.type == pbs.MODIFYJOB:\n        job_o = event.job_o\n\n    if (hasattr(job, 'Resource_List') and 'select' in job.Resource_List\n            and job.Resource_List.select):\n        # either a submission using -lselect or a qalter specifying -lselect\n        selspec = repr(job.Resource_List[\"select\"])\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"fill_cgswap: original selspec is: %s\"\n                   % selspec)\n        newspec = []\n        for chunkspec in selspec.split('+'):\n            chunkspec_split = chunkspec.split(':')\n            try:\n                chunkspec_quantity = int(chunkspec_split[0])\n            except Exception:\n                event.reject(\"Invalid number of chunks in -lselect\")\n            newchunk = chunkspec\n            if len(chunkspec_split) == 1:\n                newspec.append(newchunk)\n            else:\n                # more than just a number of chunks\n                # -- get resources specified\n                memspecs = {}\n                for t in chunkspec_split[1:]:\n                    if t.startswith('mem='):\n                        memstr = t.split('=')[1]\n                        memspecs['mem'] 
= memstr\n                    if t.startswith('vmem='):\n                        vmemstr = t.split('=')[1]\n                        memspecs['vmem'] = vmemstr\n                    if t.startswith('cgswap='):\n                        cgswapstr = t.split('=')[1]\n                        memspecs['cgswap'] = cgswapstr\n                if len(memspecs) > 2:\n                    event.reject(\"chunk specification overconstrained, \"\n                                 \"specify only 2 of mem/vmem/cgswap\")\n                elif len(memspecs) == 2:\n                    to_add = missing_str(memspecs)\n                    if to_add:\n                        newchunk += \":\" + to_add\n                    newspec.append(newchunk)\n                else:\n                    newspec.append(newchunk)\n        newselspec = '+'.join(newspec)\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"fill_cgswap: old selspec was: %s, \"\n                   \"new selspec is %s\"\n                   % (selspec, newselspec))\n        job.Resource_List['select'] = pbs.select(newselspec)\n        # need to remove possibly stale \"global\" cgswap\n        # if there is a select now but if the job was originally submitted\n        # with \"old style\" per-job resources, as cgswap chunk values are\n        # not summed into Resource_List as for vmem and mem\n        try:\n            job.Resource_List['cgswap'] = None\n        except Exception:\n            pass\n    else:\n        # Old style submission without -lselect\n        # We set 'global' per-job resources; will fail (by design)\n        # if original submission used -lselect.\n        to_add = None\n        memspecs_o = {}\n        memspecs_n = {}\n        memspecs = {}\n        for res in ['vmem', 'mem', 'cgswap']:\n            if (hasattr(job_o, 'Resource_List')\n                    and res in job_o.Resource_List\n                    and job_o.Resource_List[res] is not None):\n                memspecs_o[res] = job_o.Resource_List[res]\n           
 if (hasattr(job, 'Resource_List')\n                    and res in job.Resource_List\n                    and job.Resource_List[res] is not None):\n                memspecs_n[res] = job.Resource_List[res]\n        memspecs = {**memspecs_o, **memspecs_n}\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"memspecs read is: %s\"\n                   % str(memspecs))\n\n        if len(memspecs) == 3:\n            # overconstrained -- we have 3 values for mem, vmem and cgswap\n            if (event.type == pbs.QUEUEJOB or len(memspecs_n) > 2):\n                event.reject(\"chunk specification overconstrained, \"\n                             \"specify only 2 of mem/vmem/cgswap\")\n            else:\n                # modifyjob -- values in the new dictionary are controlling\n                if len(memspecs_n) == 2:\n                    # the two new values determine the third\n                    to_add = missing_str(memspecs_n)\n                elif len(memspecs_n) == 1:\n                    # specified one new value in qalter\n                    # -- pick which old one to keep\n                    # if new value is cgswap, keep old mem and recompute vmem\n                    # if new value is vmem, keep old mem and recompute cgswap\n                    # if new value is mem, keep old vmem and recompute cgswap\n                    if 'cgswap' in memspecs_n:\n                        del memspecs['vmem']\n                    else:\n                        del memspecs['cgswap']\n        if len(memspecs) == 2 and not to_add:\n            to_add = missing_str(memspecs)\n        if to_add:\n            to_add_split = to_add.split('=')\n            job.Resource_List[to_add_split[0]] = pbs.size(to_add_split[1])\n\n\n#\n# FUNCTION main\n#\ndef main():\n    \"\"\"\n    Main function for execution\n    \"\"\"\n    pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Function called' % caller_name())\n    # If an exception occurs, jobutil must be set to something\n    jobutil = None\n    hostname = 
pbs.get_local_nodename()\n    pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Host is %s' % (caller_name(), hostname))\n    # Log the hook event type\n    event = pbs.event()\n    pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Hook name is %s' %\n               (caller_name(), event.hook_name))\n\n    if event.type in [pbs.MODIFYJOB, pbs.QUEUEJOB]:\n        fill_cgswap()\n        event.accept()\n\n    try:\n        set_global_vars()\n    except Exception:\n        pbs.logmsg(pbs.EVENT_DEBUG,\n                   '%s: Hook failed to initialize configuration properly' %\n                   caller_name())\n        pbs.logmsg(pbs.EVENT_DEBUG,\n                   str(traceback.format_exc().strip().splitlines()))\n        event.accept()\n    # Instantiate the hook utility class\n    try:\n        hooks = HookUtils()\n        pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Hook utility class instantiated' %\n                   caller_name())\n    except Exception:\n        pbs.logmsg(pbs.EVENT_DEBUG,\n                   '%s: Failed to instantiate hook utility class' %\n                   caller_name())\n        pbs.logmsg(pbs.EVENT_DEBUG,\n                   str(traceback.format_exc().strip().splitlines()))\n        event.accept()\n    # Bail out if there is no handler for this event\n    if not hooks.hashandler(event.type):\n        pbs.logmsg(pbs.EVENT_DEBUG, '%s: %s event not handled by this hook' %\n                   (caller_name(), hooks.event_name(event.type)))\n        event.accept()\n    try:\n        # Instantiate the job utility class first so jobutil can be accessed\n        # by the exception handlers.\n        if hasattr(event, 'job'):\n            jobutil = JobUtils(event.job)\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       '%s: Job information class instantiated' %\n                       caller_name())\n        else:\n            pbs.logmsg(pbs.EVENT_DEBUG4, '%s: Event does not include a job' %\n                       caller_name())\n        # Parse the cgroup configuration file 
here so we can use the file lock\n        cfg = CgroupUtils.parse_config_file()\n        # Instantiate the cgroup utility class\n        vnode = None\n        if hasattr(event, 'vnode_list'):\n            if hostname in event.vnode_list:\n                vnode = event.vnode_list[hostname]\n        with Lock(cfg['cgroup_lock_file']):\n            # Only write this once we grabbed the lock,\n            # otherwise *another* event could actually win the lock\n            # even though *this* event printed this message last,\n            # and we'd be confused about the event that the winner services\n            if event.type == pbs.EXECHOST_PERIODIC:\n                loglevel = pbs.EVENT_DEBUG3\n            else:\n                loglevel = pbs.EVENT_DEBUG2\n            if hasattr(event, 'job') and hasattr(event.job, 'id'):\n                pbs.logmsg(loglevel, '%s: Event type is %s, job ID is %s'\n                           % (caller_name(), hooks.event_name(event.type),\n                              event.job.id))\n            else:\n                pbs.logmsg(loglevel, '%s: Event type is %s'\n                           % (caller_name(), hooks.event_name(event.type)))\n\n            cgroup = CgroupUtils(hostname, vnode, cfg=cfg)\n            pbs.logmsg(pbs.EVENT_DEBUG4,\n                       '%s: Cgroup utility class instantiated' % caller_name())\n\n            # Bail out if there is nothing to do\n            if not cgroup.subsystems:\n                pbs.logmsg(pbs.EVENT_DEBUG,\n                           '%s: Cgroups disabled or none to manage' %\n                           caller_name())\n                event.accept()\n\n            # Call the appropriate handler\n            if hooks.invoke_handler(event, cgroup, jobutil):\n                pbs.logmsg(pbs.EVENT_DEBUG4,\n                           '%s: Hook handler returned success for %s event' %\n                           (caller_name(), hooks.event_name(event.type)))\n                event.accept()\n   
         else:\n                pbs.logmsg(pbs.EVENT_DEBUG,\n                           '%s: Hook handler returned failure for %s event' %\n                           (caller_name(), hooks.event_name(event.type)))\n                event.reject()\n    except SystemExit:\n        # The event.accept() and event.reject() methods generate a SystemExit\n        # exception.\n        pass\n    except UserError as exc:\n        # User must correct problem and resubmit job, job gets deleted\n        msg = ('User error in %s handling %s event' %\n               (event.hook_name, hooks.event_name(event.type)))\n        if jobutil is not None:\n            msg += (' for job %s' % (event.job.id))\n            try:\n                event.job.delete()\n                msg += ' (deleted)'\n            except Exception:\n                msg += ' (deletion failed)'\n        msg += (': %s %s' % (exc.__class__.__name__, str(exc.args)))\n        pbs.logmsg(pbs.EVENT_ERROR, msg)\n        event.reject(msg)\n    except CgroupProcessingError as exc:\n        # Something went wrong manipulating the cgroups\n        pbs.logmsg(pbs.EVENT_DEBUG,\n                   str(traceback.format_exc().strip().splitlines()))\n        msg = ('Processing error in %s handling %s event' %\n               (event.hook_name, hooks.event_name(event.type)))\n        if jobutil is not None:\n            msg += (' for job %s' % (event.job.id))\n        msg += (': %s %s' % (exc.__class__.__name__, str(exc.args)))\n        pbs.logmsg(pbs.EVENT_ERROR, msg)\n        event.reject(msg)\n    except Exception as exc:\n        # Catch all other exceptions and report them, job gets held\n        # and a stack trace is logged\n        pbs.logmsg(pbs.EVENT_DEBUG,\n                   str(traceback.format_exc().strip().splitlines()))\n        msg = ('Unexpected error in %s handling %s event' %\n               (event.hook_name, hooks.event_name(event.type)))\n        if jobutil is not None:\n            msg += (' for job %s' % 
(event.job.id))\n            try:\n                event.job.Hold_Types = pbs.hold_types('s')\n                event.job.rerun()\n                msg += ' (system hold set)'\n            except Exception:\n                msg += ' (system hold failed)'\n        msg += (': %s %s' % (exc.__class__.__name__, str(exc.args)))\n        pbs.logmsg(pbs.EVENT_ERROR, msg)\n        event.reject(msg)\n\n\n# The following block is skipped if this is a unit testing environment.\nif (__name__ == 'builtins') or (__name__ == '__builtin__'):\n    START = time.time()\n    try:\n        main()\n    except SystemExit:\n        # The event.accept() and event.reject() methods generate a\n        # SystemExit exception.\n        pass\n    except Exception:\n        # \"Should never happen\" since main() is supposed to catch these\n        pbs.logmsg(pbs.EVENT_DEBUG,\n                   str(traceback.format_exc().strip().splitlines()))\n        pbs.event().reject(str(traceback.format_exc().strip().splitlines()))\n    finally:\n        event = pbs.event()\n        if event.type == pbs.EXECHOST_PERIODIC:\n            loglevel = pbs.EVENT_DEBUG3\n        else:\n            loglevel = pbs.EVENT_DEBUG2\n        if hasattr(event, 'job') and hasattr(event.job, 'id'):\n            pbs.logmsg(loglevel, 'Hook ended: %s, job ID %s, '\n                       'event_type %s (elapsed time: %0.4lf)' %\n                       (pbs.event().hook_name,\n                        event.job.id,\n                        str(pbs.event().type),\n                        (time.time() - START)))\n        else:\n            pbs.logmsg(loglevel, 'Hook ended: %s, '\n                       'event_type %s (elapsed time: %0.4lf)' %\n                       (pbs.event().hook_name,\n                        str(pbs.event().type),\n                        (time.time() - START)))\n"
  },
  {
    "path": "src/iff/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nsbin_PROGRAMS = pbs_iff\n\npbs_iff_CPPFLAGS = \\\n\t-I$(top_srcdir)/src/include \\\n\t@KRB5_CFLAGS@\n\npbs_iff_LDADD = \\\n\t$(top_builddir)/src/lib/Libpbs/libpbs.la \\\n\t$(top_builddir)/src/lib/Libutil/libutil.a \\\n\t$(top_builddir)/src/lib/Libnet/libnet.a \\\n\t$(top_builddir)/src/lib/Libsec/libsec.a \\\n\t@KRB5_LIBS@ \\\n\t-lpthread \\\n\t@socket_lib@ \\\n\t@libz_lib@\n\npbs_iff_SOURCES = iff2.c $(top_srcdir)/src/lib/Libcmds/cmds_common.c\n"
  },
  {
    "path": "src/iff/iff2.c",
    "content": "#include <pbs_config.h> /* the master config generated by configure */\n\n#include <errno.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <string.h>\n#include <assert.h>\n#include <netdb.h>\n#include <pwd.h>\n#include <sys/types.h>\n#include <sys/param.h>\n#include <sys/socket.h>\n#include <sys/stat.h>\n#include <netinet/in.h>\n#include \"libpbs.h\"\n#include \"dis.h\"\n#include \"server_limits.h\"\n#include \"net_connect.h\"\n#include \"credential.h\"\n#include \"pbs_version.h\"\n#include \"pbs_ecl.h\"\n\n#define PBS_IFF_MAX_CONN_RETRIES 6\n\n/**\n * @file\tiff2.c\n * @brief\n * \tpbs_iff - authenticates the user to the PBS server.\n *\n * @par\tUsage: call via pbs_connect() with\n *\t\tpbs_iff [-t] hostname port [parent_connection_port]\n *\t\tpbs_iff --version\n *\n *\t\tThe parent_connection_port is required unless -t (for test) is given.\n */\n/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  
If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\nint\nmain(int argc, char *argv[], char *envp[])\n{\n\tint err = 0;\n\tpbs_net_t hostaddr = 0;\n\tint i;\n\tunsigned int parentport;\n\tint parentsock = -1;\n\tint parentsock_port = -1;\n\tshort legacy = -1;\n\tuid_t myrealuid;\n\tstruct passwd *pwent;\n\tint servport = -1;\n\tint sock;\n\tstruct sockaddr_in sockname;\n\tpbs_socklen_t socknamelen;\n\tint testmode = 0;\n\textern int optind;\n\tchar *cln_hostaddr = NULL;\n\n\t/*the real deal or output pbs_version and exit?*/\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tcln_hostaddr = getenv(PBS_IFF_CLIENT_ADDR);\n\n\t/* Need to unset LOCALDOMAIN if set, want local host name */\n\n\tfor (i = 0; envp[i]; ++i) {\n\t\tif (!strncmp(envp[i], \"LOCALDOMAIN=\", 12)) {\n\t\t\tenvp[i] = \"\";\n\t\t\tbreak;\n\t\t}\n\t}\n\n\twhile ((i = getopt(argc, argv, \"ti:\")) != EOF) {\n\t\tswitch (i) {\n\t\t\tcase 't':\n\t\t\t\ttestmode = 1;\n\t\t\t\tbreak;\n\t\t\tcase 'i':\n\t\t\t\tcln_hostaddr = optarg;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\terr = 1;\n\t\t}\n\t}\n\tif ((cln_hostaddr != NULL) && (testmode == 1)) {\n\t\terr = 1;\n\t}\n\n\t/* Keep the backward compatibility of pbs_iff.\n\t * If the invoker component is older 
version,\n\t * it will pass one fewer argument to pbs_iff.\n\t * In test mode, the (argc - optind) should always be 2.\n\t * Setting legacy true for testmode and the correct number of args,\n\t * because getsockname() should be called on LOCAL socket\n\t * to get the port number.\n\t */\n\tif ((testmode && (argc - optind) == 2) ||\n\t    (!testmode && (argc - optind) == 3)) {\n\t\tlegacy = 1;\n\t} else if ((!testmode && (argc - optind) == 4)) {\n\t\tlegacy = 0;\n\t}\n\n\tif ((err == 1) || (legacy == -1)) {\n\t\tfprintf(stderr,\n\t\t\t\"Usage: %s [-t] host port [parent_sock] [parent_port]\\n\",\n\t\t\targv[0]);\n\t\tfprintf(stderr, \"       %s --version\\n\", argv[0]);\n\t\treturn (1);\n\t}\n\n\tif (!testmode && isatty(fileno(stdout))) {\n\t\tfprintf(stderr, \"pbs_iff: output is a tty & not test mode\\n\");\n\t\treturn (1);\n\t}\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\t/* first, make sure we have a valid server (host), and ports */\n\n\tif ((hostaddr = get_hostaddr(argv[optind])) == (pbs_net_t) 0) {\n\t\tfprintf(stderr, \"pbs_iff: unknown host %s\\n\", argv[optind]);\n\t\treturn (1);\n\t}\n\tif ((servport = atoi(argv[++optind])) <= 0)\n\t\treturn (1);\n\n\t/* set single threaded mode */\n\tpbs_client_thread_set_single_threaded_mode();\n\t/* disable attribute verification */\n\tset_no_attribute_verification();\n\n\t/* initialize the thread context */\n\tif (pbs_client_thread_init_thread_context() != 0) {\n\t\tfprintf(stderr, \"pbs_iff: thread initialization failed\\n\");\n\t\treturn (1);\n\t}\n\n\tfor (i = 0; i < PBS_IFF_MAX_CONN_RETRIES; i++) {\n\t\tsock = client_to_svr_extend(hostaddr, (unsigned int) servport, 1, cln_hostaddr);\n\t\tif (sock != PBS_NET_RC_RETRY)\n\t\t\tbreak;\n\t\tsleep(i * i + 1); /* quadratic backoff between retries */\n\t}\n\tif (sock < 0) {\n\t\tfprintf(stderr, \"pbs_iff: cannot connect to host\\n\");\n\t\tif (i == PBS_IFF_MAX_CONN_RETRIES)\n\t\t\tfprintf(stderr, \"pbs_iff: all reserved ports in use\\n\");\n\t\treturn 
(4);\n\t}\n\n\tDIS_tcp_funcs();\n\n\t/* setup connection level thread context */\n\tif (pbs_client_thread_init_connect_context(sock) != 0) {\n\t\tfprintf(stderr, \"pbs_iff: connect initialization failed\\n\");\n\t\treturn (1);\n\t}\n\n\tif (testmode == 0) {\n\t\t/* a legacy component will still take one argument less and will still have a getsockname() call\n\t\t * to get the parent port */\n\t\tif ((parentsock = atoi(argv[++optind])) < 0)\n\t\t\treturn (1);\n\t\tif (legacy == 0) {\n\t\t\tif ((parentsock_port = atoi(argv[++optind])) < 0)\n\t\t\t\treturn (1);\n\t\t}\n\t} else {\n\t\t/* for test mode, use my own port rather than the parent's */\n\t\tparentsock = sock;\n\t}\n\n\t/* next, get the real user name */\n\n\tmyrealuid = getuid();\n\tpwent = getpwuid(myrealuid);\n\tif (pwent == NULL)\n\t\treturn (3);\n\n\t/* now get the parent's client-side port */\n\n\tsocknamelen = sizeof(sockname);\n\n\t/* getsockname() should be called in case of legacy\n\t * or testmode.\n\t */\n\tif (legacy == 1 || testmode) {\n\t\tif (getsockname(parentsock, (struct sockaddr *) &sockname, &socknamelen) < 0)\n\t\t\treturn (3);\n\t\tparentport = ntohs(sockname.sin_port);\n\t} else\n\t\tparentport = ntohs(parentsock_port);\n\n\tpbs_errno = 0;\n\terr = tcp_send_auth_req(sock, parentport, pwent->pw_name, AUTH_RESVPORT_NAME, getenv(PBS_CONF_ENCRYPT_METHOD));\n\tif (err != 0 && pbs_errno != PBSE_BADCRED)\n\t\treturn 2;\n\n\terr = pbs_errno;\n\twhile (write(fileno(stdout), &err, sizeof(int)) == -1) {\n\t\tif (errno != EINTR)\n\t\t\tbreak;\n\t}\n\tif (pbs_errno != 0) {\n\t\tchar *msg = get_conn_errtxt(sock);\n\t\tint len = 0;\n\t\tif (msg != NULL)\n\t\t\tlen = strlen(msg);\n\t\twhile (write(fileno(stdout), (char *) &len, sizeof(int)) == -1) {\n\t\t\tif (errno != EINTR)\n\t\t\t\tbreak;\n\t\t}\n\t\tif (len > 0) {\n\t\t\twhile (write(fileno(stdout), msg, strlen(msg)) == -1) {\n\t\t\t\tif (errno != EINTR)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\treturn (1);\n\t}\n\n\t(void) close(sock);\n\t(void) 
fclose(stdout);\n\treturn (0);\n}\n"
  },
  {
    "path": "src/include/Long.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _LONG_H\n#define _LONG_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include <limits.h>\n\n/*\n * Define Long and u_Long to be the largest integer types supported by the\n * native compiler.  
They need not be supported by printf or scanf or their\n * ilk.  Ltostr, uLtostr, strtoL, strtouL, and atoL provide conversion to and\n * from character string form.\n *\n * The following sections are listed in decreasing order of brain damage.\n */\n\n/****************************************************************************/\n#if defined(__GNUC__)\n\n/* On these systems, the compiler supports 64-bit integers as long longs but */\n/* there seems to be neither defined constant support nor library support. */\n\ntypedef long long Long;\ntypedef unsigned long long u_Long;\n\n#define lONG_MIN (-0x7FFFFFFFFFFFFFFFLL - 1)\n#define lONG_MAX 0x7FFFFFFFFFFFFFFFLL\n#define UlONG_MAX 0xFFFFFFFFFFFFFFFFULL\n\nLong strToL(const char *nptr, char **endptr, int base);\nu_Long strTouL(const char *nptr, char **endptr, int base);\n#define atoL(nptr) strToL((nptr), NULL, 10)\n\n/****************************************************************************/\n#elif defined(WIN32) /* Windows */\n\n/* long long and unsigned long long are 64 bit signed and unsigned */\n/* integers on Windows platforms. */\n/* C compilers under Windows have built-in functions for conversion */\n/* from string to 64 bit signed and unsigned integers. */\n\ntypedef long long Long;\ntypedef unsigned long long u_Long;\n\n#define lONG_MIN LLONG_MIN\n#define lONG_MAX LLONG_MAX\n#define UlONG_MAX ULLONG_MAX\n\n#define strToL(n, e, b) _strtoi64(n, e, (b))\n#define strTouL(n, e, b) _strtoui64(n, e, (b))\n#define aToL(nptr) _atoi64((nptr))\n#define atoL(nptr) aToL((nptr))\n\n/****************************************************************************/\n\n#endif\n\nconst char *uLTostr(u_Long value, int base);\n#ifdef __cplusplus\n}\n#endif\n#endif /* _LONG_H */\n"
  },
  {
    "path": "src/include/Long_.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#define LONG_DIG_VALUE \"0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\nextern int Long_neg;\n"
  },
  {
    "path": "src/include/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\ninclude_HEADERS = \\\n\tpbs_error.h \\\n\tpbs_ifl.h \\\n\trm.h \\\n\ttm_.h \\\n\ttm.h\n\nnoinst_HEADERS = \\\n\tacct.h \\\n\tlibauth.h \\\n\tauth.h \\\n\tattribute.h \\\n\tavltree.h \\\n\tbasil.h \\\n\tbatch_request.h \\\n\tbitfield.h \\\n\tcmds.h \\\n\tcredential.h \\\n\tdedup_jobids.h \\\n\tdis.h \\\n\tgrunt.h \\\n\thook_func.h \\\n\thook.h \\\n\tifl_internal.h \\\n\tjob.h \\\n\tlibpbs.h \\\n\tlibsec.h \\\n\tlibutil.h \\\n\tlist_link.h \\\n\tlog.h \\\n\tLong_.h \\\n\tLong.h \\\n\tMakefile.in \\\n\tmom_func.h \\\n\tmom_hook_func.h \\\n\tmom_server.h \\\n\tmom_vnode.h \\\n\tnet_connect.h \\\n\tpbs_array_list.h \\\n\tpbs_assert.h \\\n\tpbs_client_thread.h \\\n\tpbs_db.h \\\n\tpbs_ecl.h \\\n\tpbs_entlim.h \\\n\tpbs_idx.h \\\n\tpbs_internal.h \\\n\tpbs_reliable.h \\\n\tpbs_license.h \\\n\tpbs_mpp.h \\\n\tpbs_nodes.h \\\n\tpbs_python.h \\\n\tpbs_python_private.h \\\n\tpbs_share.h \\\n\tpbs_v1_module_common.i \\\n\tpbs_version.h \\\n\tpbs_json.h \\\n\tplacementsets.h \\\n\tportability.h \\\n\tport_forwarding.h \\\n\tprovision.h \\\n\tqmgr.h \\\n\tqueue.h \\\n\trange.h \\\n\treservation.h \\\n\tresmon.h \\\n\tresource.h \\\n\tresv_node.h \\\n\ttpp.h \\\n\tsched_cmds.h \\\n\tpbs_sched.h \\\n\tserver.h \\\n\tserver_limits.h \\\n\tsite_queue.h \\\n\tsite_job_attr_def.h \\\n\tsite_job_attr_enum.h \\\n\tsite_qmgr_node_print.h \\\n\tsite_qmgr_que_print.h 
\\\n\tsite_qmgr_sched_print.h \\\n\tsite_qmgr_svr_print.h \\\n\tsite_que_attr_def.h \\\n\tsite_que_attr_enum.h \\\n\tsite_resc_attr_def.h \\\n\tsite_resv_attr_def.h \\\n\tsite_resv_attr_enum.h \\\n\tsite_sched_attr_def.h \\\n\tsite_sched_attr_enum.h \\\n\tsite_svr_attr_def.h \\\n\tsite_svr_attr_enum.h \\\n\tjob_attr_enum.h \\\n\tnode_attr_enum.h \\\n\tqueue_attr_enum.h \\\n\tresc_def_enum.h \\\n\tresv_attr_enum.h \\\n\tsched_attr_enum.h \\\n\tsvr_attr_enum.h \\\n\tsvrfunc.h \\\n\tticket.h \\\n\ttracking.h \\\n\tuser.h \\\n\twork_task.h\n\n\nCLEANFILES = \\\n\tjob_attr_enum.h \\\n\tnode_attr_enum.h \\\n\tqueue_attr_enum.h \\\n\tresc_def_enum.h \\\n\tresv_attr_enum.h \\\n\tsched_attr_enum.h \\\n\tsvr_attr_enum.h\n\njob_attr_enum.h: $(top_srcdir)/src/lib/Libattr/master_job_attr_def.xml $(top_srcdir)/buildutils/attr_parser.py\n\t@echo Generating $@ from $< ; \\\n\t@PYTHON@ $(top_srcdir)/buildutils/attr_parser.py -m $(top_srcdir)/src/lib/Libattr/master_job_attr_def.xml -d $@\n\nnode_attr_enum.h: $(top_srcdir)/src/lib/Libattr/master_node_attr_def.xml $(top_srcdir)/buildutils/attr_parser.py\n\t@echo Generating $@ from $< ; \\\n\t@PYTHON@ $(top_srcdir)/buildutils/attr_parser.py -m $(top_srcdir)/src/lib/Libattr/master_node_attr_def.xml -d $@\n\nqueue_attr_enum.h: $(top_srcdir)/src/lib/Libattr/master_queue_attr_def.xml $(top_srcdir)/buildutils/attr_parser.py\n\t@echo Generating $@ from $< ; \\\n\t@PYTHON@ $(top_srcdir)/buildutils/attr_parser.py -m $(top_srcdir)/src/lib/Libattr/master_queue_attr_def.xml -d $@\n\nresc_def_enum.h: $(top_srcdir)/src/lib/Libattr/master_resc_def_all.xml $(top_srcdir)/buildutils/attr_parser.py\n\t@echo Generating $@ from $< ; \\\n\t@PYTHON@ $(top_srcdir)/buildutils/attr_parser.py -m $(top_srcdir)/src/lib/Libattr/master_resc_def_all.xml -d $@\n\nresv_attr_enum.h: $(top_srcdir)/src/lib/Libattr/master_resv_attr_def.xml $(top_srcdir)/buildutils/attr_parser.py\n\t@echo Generating $@ from $< ; \\\n\t@PYTHON@ $(top_srcdir)/buildutils/attr_parser.py -m 
$(top_srcdir)/src/lib/Libattr/master_resv_attr_def.xml -d $@\n\nsched_attr_enum.h: $(top_srcdir)/src/lib/Libattr/master_sched_attr_def.xml $(top_srcdir)/buildutils/attr_parser.py\n\t@echo Generating $@ from $< ; \\\n\t@PYTHON@ $(top_srcdir)/buildutils/attr_parser.py -m $(top_srcdir)/src/lib/Libattr/master_sched_attr_def.xml -d $@\n\nsvr_attr_enum.h: $(top_srcdir)/src/lib/Libattr/master_svr_attr_def.xml $(top_srcdir)/buildutils/attr_parser.py\n\t@echo Generating $@ from $< ; \\\n\t@PYTHON@ $(top_srcdir)/buildutils/attr_parser.py -m $(top_srcdir)/src/lib/Libattr/master_svr_attr_def.xml -d $@\n"
  },
  {
    "path": "src/include/acct.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _ACCT_H\n#define _ACCT_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n/*\n * header file supporting PBS accounting information\n */\n\n#define PBS_ACCT_MAX_RCD 4095\n#define PBS_ACCT_LEAVE_EXTRA 500\n\n/* for JOB accounting */\n\n#define PBS_ACCT_QUEUE (int) 'Q'   /* Job Queued record */\n#define PBS_ACCT_RUN (int) 'S'\t   /* Job run (Started) */\n#define PBS_ACCT_PRUNE (int) 's'   /* Job run (Reliably-Started, assigned resources pruned) */\n#define PBS_ACCT_RERUN (int) 'R'   /* Job Rerun record */\n#define PBS_ACCT_CHKPNT (int) 'C'  /* Job Checkpointed and held */\n#define PBS_ACCT_RESTRT (int) 'T'  /* Job resTart (from chkpnt) record */\n#define PBS_ACCT_END (int) 'E'\t   /* Job Ended/usage record */\n#define PBS_ACCT_DEL (int) 'D'\t   /* Job Deleted by request */\n#define PBS_ACCT_ABT (int) 'A'\t   /* Job Abort by server */\n#define PBS_ACCT_LIC (int) 'L'\t   /* Floating License Usage */\n#define PBS_ACCT_MOVED (int) 'M'   /* Job moved to other server */\n#define PBS_ACCT_UPDATE (int) 'u'  /* phased job update record */\n#define PBS_ACCT_NEXT (int) 'c'\t   /* phased job next record */\n#define PBS_ACCT_LAST (int) 'e'\t   /* phased job last usage record */\n#define PBS_ACCT_ALTER (int) 'a'   /* Job attribute is being altered */\n#define PBS_ACCT_SUSPEND (int) 'z' /* Job is suspended */\n#define PBS_ACCT_RESUME (int) 'r'  /* 
Suspended Job is resumed */\n\n/* for RESERVATION accounting */\n\n#define PBS_ACCT_UR (int) 'U'\t    /* Unconfirmed reservation enters system */\n#define PBS_ACCT_CR (int) 'Y'\t    /* Unconfirmed to a Confirmed reservation */\n#define PBS_ACCT_BR (int) 'B'\t    /* Beginning of the reservation period */\n#define PBS_ACCT_FR (int) 'F'\t    /* Reservation period Finished */\n#define PBS_ACCT_DRss (int) 'K'\t    /* sched/server requests reservation's removal */\n#define PBS_ACCT_DRclient (int) 'k' /* client requests reservation's removal */\n\n/* for PROVISIONING accounting */\n#define PBS_ACCT_PROV_START (int) 'P' /* Provisioning start record */\n#define PBS_ACCT_PROV_END (int) 'p'   /* Provisioning end record */\n\nextern int acct_open(char *filename);\nextern void acct_close(void);\nextern void account_record(int acctype, const job *pjob, char *text);\nextern void write_account_record(int acctype, const char *jobid, char *text);\n\n#ifdef _RESERVATION_H\nextern void account_recordResv(int acctype, resc_resv *presv, char *text);\nextern void account_resvstart(resc_resv *presv);\n#endif\n\nextern void account_jobstr(const job *pjob, int type);\nextern void account_job_update(job *pjob, int type);\nextern void account_jobend(job *pjob, char *used, int type);\nextern void log_alter_records_for_attrs(job *pjob, svrattrl *plist);\nextern void log_suspend_resume_record(job *pjob, int acct_type);\nextern void set_job_ProvAcctRcd(job *pjob, long time_se, int type);\n\nextern int concat_rescused_to_buffer(char **buffer, int *buffer_size, svrattrl *patlist, char *delim, const job *pjob);\n\n#define PROVISIONING_STARTED 1\n#define PROVISIONING_SUCCESS 2\n#define PROVISIONING_FAILURE 3\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _ACCT_H */\n"
  },
  {
    "path": "src/include/attribute.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _ATTRIBUTE_H\n#define _ATTRIBUTE_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include \"pbs_ifl.h\"\n#include \"pbs_internal.h\"\n#include \"Long.h\"\n#include \"grunt.h\"\n#include \"list_link.h\"\n#include \"pbs_db.h\"\n\n#ifndef _TIME_H\n#include <sys/types.h>\n#endif\n/*\n * This header file contains the definitions for attributes\n *\n * Other required header files:\n *\t\"list_link.h\"\n *\t\"portability.h\"\n *\n * Attributes are represented in one or both of two forms, external and\n * internal.  When an attribute is moving external to the server, either\n * to the network or to disk (for saving), it is represented in the\n * external form, a \"svrattropl\" structure.  This structure holds the\n * attribute name as a string.  If the attribute is a resource type, the\n * resource name and resource value are encoded as strings, a total of three\n * strings.  If the attribute is not a resource, then the attribute value is\n * coded into a string for a total of two strings.\n *\n * Internally, attributes exist in two separate structures.  The\n * attribute type is defined by a \"definition structure\" which contains\n * the name of the attribute, flags, and pointers to the functions used\n * to access the value.  This info is \"hard coded\".  
There is one\n * \"attribute definition\" per (attribute name, parent object type) pair.\n *\n * The attribute value is contained in another structure which contains\n * the value and flags.  Both the attribute value and definition are in\n * arrays and share the same index.  When an\n * object is created, the attributes associated with that object\n * are created with default values.\n */\n\n/* define the size of fields in the structures */\n\n#define ATRDFLAG 24\n#define ATRVFLAG 16\n\n#define ATRDTYPE 4\n#define ATRVTYPE 8\n\n#define ATRPART 4\n\n#define BUF_SIZE 512\n#define RESC_USED_BUF_SIZE 2048\n\n#define MAX_STR_INT 40\n#define ENDATTRIBUTES -711\n\n/*\n * The following structure, svrattrl, is used to hold the external form of\n * attributes.\n *\n */\n\nstruct svrattrl {\n\tpbs_list_link al_link;\n\tstruct svrattrl *al_sister;\t  /* co-resource svrattrl\t\t     */\n\tstruct attropl al_atopl;\t  /* name,resource,value, see pbs_ifl.h   */\n\tint al_tsize;\t\t\t  /* size of this structure (variable)    */\n\tint al_nameln;\t\t\t  /* len of name string (including null)  */\n\tint al_rescln;\t\t\t  /* len of resource name string (+ null) */\n\tint al_valln;\t\t\t  /* len of value, may contain many nulls */\n\tunsigned int al_flags : ATRVFLAG; /* copy of attribute value flags */\n\tint al_refct : 16;\t\t  /* reference count */\n\t\t\t\t\t  /*\n\t * data follows directly after\n\t */\n};\ntypedef struct svrattrl svrattrl;\n\n#define al_name al_atopl.name\n#define al_resc al_atopl.resource\n#define al_value al_atopl.value\n#define al_op al_atopl.op\n\n/*\n * The value of an attribute is contained in the following structure.\n *\n * The length field specifies the amount of memory that has been\n * malloc-ed for the value (used for at_str, at_array, and at_resrc).\n * If zero, no space has been malloc-ed.\n *\n * The union member is selected based on the value type given in the\n * flag field of the definition.\n */\n\nstruct size_value {\n\tu_Long atsv_num;\t     /* 
numeric part of a size value */\n\tunsigned int atsv_shift : 8; /* binary shift count, K=10, g=20 */\n\tunsigned int atsv_units : 1; /* units (size in words or bytes) */\n};\n#define ATR_SV_BYTESZ 0 /* size is in bytes */\n#define ATR_SV_WORDSZ 1 /* size is in words */\n\n/* used for Finer Granularity Control */\nstruct attr_entity {\n\tvoid *ae_tree;\t      /* root of tree */\n\ttime_t ae_newlimittm; /* time last limit added */\n};\n\nunion attrval {\n\tint type_int;\n\tlong type_long;\n\tchar *type_str;\n};\ntypedef union attrval attrval_t;\n\nenum attr_type {\n\tATTR_TYPE_LONG,\n\tATTR_TYPE_INT,\n\tATTR_TYPE_STR,\n};\n\nunion attr_val {\t\t       /* the attribute value\t*/\n\tlong at_long;\t\t       /* long integer */\n\tlong long at_ll;\t       /* largest long integer */\n\tchar at_char;\t\t       /* single character */\n\tchar *at_str;\t\t       /* char string  */\n\tstruct array_strings *at_arst; /* array of strings */\n\tstruct size_value at_size;     /* size value */\n\tpbs_list_head at_list;\t       /* list of resources,  ... */\n\tstruct pbsnode *at_jinfo;      /* ptr to node's job info  */\n\tshort at_short;\t\t       /* short int; node's state */\n\tfloat at_float;\t\t       /* floating point value */\n\tstruct attr_entity at_enty;    /* FGC entity tree head */\n};\n\nstruct attribute {\n\tunsigned int at_flags : ATRVFLAG; /* attribute flags\t*/\n\tunsigned int at_type : ATRVTYPE;  /* type of attribute    */\n\tsvrattrl *at_user_encoded;\t  /* encoded svrattrl form for users*/\n\tsvrattrl *at_priv_encoded;\t  /* encoded svrattrl form for mgr/op*/\n\tunion attr_val at_val;\t\t  /* the attribute value\t*/\n};\ntypedef struct attribute attribute;\n\n/*\n * The following structure is used to define an attribute for any parent\n * object.  The structure declares the attribute's name, value type, and\n * access methods.  This information is \"built into\" the server in an array\n * of attribute_def structures.  
The definition occurs once for a given name.\n */\n\nstruct attribute_def {\n\tchar *at_name;\n\tint (*at_decode)(attribute *patr, char *name, char *rn, char *val);\n\tint (*at_encode)(const attribute *pattr, pbs_list_head *phead, char *aname, char *rsname, int mode, svrattrl **rtnl);\n\tint (*at_set)(attribute *pattr, attribute *nattr, enum batch_op);\n\tint (*at_comp)(attribute *pattr, attribute *with);\n\tvoid (*at_free)(attribute *pattr);\n\tint (*at_action)(attribute *pattr, void *pobject, int actmode);\n\tunsigned int at_flags : ATRDFLAG; /* flags: perms, ...\t\t*/\n\tunsigned int at_type : ATRDTYPE;  /* type of attribute\t\t*/\n\tunsigned int at_parent : ATRPART; /* type of parent object\t*/\n};\ntypedef struct attribute_def attribute_def;\n\n/**\n * This structure is used by IFL verification mechanism to associate\n * specific verification routines to specific attributes. New attributes added\n * to the above attribute_def array should also be added to this array to enable\n * attribute verification.\n */\nstruct ecl_attribute_def {\n\tchar *at_name;\n\tunsigned int at_flags; /* flags: perms, ...\t\t*/\n\tunsigned int at_type;  /* type of attribute\t\t*/\n\t/** function pointer to the datatype verification routine */\n\tint (*at_verify_datatype)(struct attropl *, char **);\n\t/** function pointer to the value verification routine */\n\tint (*at_verify_value)(int, int, int, struct attropl *, char **);\n};\ntypedef struct ecl_attribute_def ecl_attribute_def;\n\n/* the following is a special flag used in granting permission to create      */\n/* indirect references to resources in vnodes.  This bit does not actually    */\n/* appear within the at_flags field of an attribute definition.               
*/\n#define ATR_PERM_ALLOW_INDIRECT 0x1000000\n\n/* combination defines for permission field */\n\n#define READ_ONLY (ATR_DFLAG_USRD | ATR_DFLAG_OPRD | ATR_DFLAG_MGRD)\n#define READ_WRITE (ATR_DFLAG_USRD | ATR_DFLAG_OPRD | ATR_DFLAG_MGRD | ATR_DFLAG_USWR | ATR_DFLAG_OPWR | ATR_DFLAG_MGWR)\n#define NO_USER_SET (ATR_DFLAG_USRD | ATR_DFLAG_OPRD | ATR_DFLAG_MGRD | ATR_DFLAG_OPWR | ATR_DFLAG_MGWR)\n#define MGR_ONLY_SET (ATR_DFLAG_USRD | ATR_DFLAG_OPRD | ATR_DFLAG_MGRD | ATR_DFLAG_MGWR)\n#define PRIV_READ (ATR_DFLAG_OPRD | ATR_DFLAG_MGRD)\n#define ATR_DFLAG_SSET (ATR_DFLAG_SvWR | ATR_DFLAG_SvRD)\n\n/* What permissions are needed for an attribute to be settable in a hook script */\n#define ATR_DFLAG_HOOK_SET (ATR_DFLAG_USWR | ATR_DFLAG_OPWR | ATR_DFLAG_MGWR)\n\n/* Defines for Flag field in attribute (value) \t\t*/\n\n#define ATR_VFLAG_SET 0x01\t\t /* has specified value (is set)\t*/\n#define ATR_VFLAG_MODIFY 0x02\t\t /* value has been modified\t*/\n#define ATR_VFLAG_DEFLT 0x04\t\t /* value is default value\t*/\n#define ATR_VFLAG_MODCACHE 0x08\t\t /* value modified since cache \t*/\n#define ATR_VFLAG_INDIRECT 0x10\t\t /* indirect pointer to resource */\n#define ATR_VFLAG_TARGET 0x20\t\t /* target of indirect resource  */\n#define ATR_VFLAG_HOOK 0x40\t\t /* value set by a hook script   */\n#define ATR_VFLAG_IN_EXECVNODE_FLAG 0x80 /* resource key value pair was found in execvnode */\n\n#define ATR_MOD_MCACHE (ATR_VFLAG_MODIFY | ATR_VFLAG_MODCACHE)\n#define ATR_SET_MOD_MCACHE (ATR_VFLAG_SET | ATR_MOD_MCACHE)\n#define ATR_UNSET(X) (X)->at_flags = (((X)->at_flags & ~ATR_VFLAG_SET) | ATR_MOD_MCACHE)\n\n/* Defines for Parent Object type field in the attribute definition\t*/\n/* really only used for telling queue types apart\t\t\t*/\n\n#define PARENT_TYPE_JOB 1\n#define PARENT_TYPE_QUE_ALL 2\n#define PARENT_TYPE_QUE_EXC 3\n#define PARENT_TYPE_QUE_RTE 4\n#define PARENT_TYPE_QUE_PULL 5\n#define PARENT_TYPE_SERVER 6\n#define PARENT_TYPE_NODE 7\n#define PARENT_TYPE_RESV 8\n#define PARENT_TYPE_SCHED 
9\n\n/*\n * values for the \"actmode\" parameter to at_action()\n */\n#define ATR_ACTION_NOOP 0\n#define ATR_ACTION_NEW 1\n#define ATR_ACTION_ALTER 2\n#define ATR_ACTION_RECOV 3\n#define ATR_ACTION_FREE 4\n\n/*\n * values for the mode parameter to at_encode(), determines:\n *\t- list separator character for encode_arst()\n *\t- which resources are encoded (see attr_fn_resc.c[encode_resc])\n */\n#define ATR_ENCODE_CLIENT 0 /* encode for sending to client\t+ sched\t*/\n#define ATR_ENCODE_SVR 1    /* encode for sending to another server */\n#define ATR_ENCODE_MOM 2    /* encode for sending to MOM\t\t*/\n#define ATR_ENCODE_SAVE 3   /* encode for saving to disk\t        */\n#define ATR_ENCODE_HOOK 4   /* encode for sending to hook \t\t*/\n#define ATR_ENCODE_DB 5\t    /* encode for saving to database        */\n\n/*\n * structure to hold array of pointers to character strings\n */\n\nstruct array_strings {\n\tint as_npointers;   /* number of pointer slots in this block */\n\tint as_usedptr;\t    /* number of used pointer slots */\n\tint as_bufsize;\t    /* size of buffer holding strings */\n\tchar *as_buf;\t    /* address of buffer */\n\tchar *as_next;\t    /* first available byte in buffer */\n\tchar *as_string[1]; /* first string pointer */\n};\n\n/*\n * specific attribute value function prototypes\n */\nextern struct attrl *attropl2attrl(struct attropl *from);\nstruct attrl *new_attrl(void);\nstruct attrl *dup_attrl(struct attrl *oattr);\nstruct attrl *dup_attrl_list(struct attrl *oattr_list);\nvoid free_attrl(struct attrl *at);\nvoid free_attrl_list(struct attrl *at_list);\nextern void clear_attr(attribute *pattr, attribute_def *pdef);\nextern int find_attr(void *attrdef_idx, attribute_def *attr_def, char *name);\nextern int recov_attr_fs(int fd, void *parent, void *padef_idx, attribute_def *padef,\n\t\t\t attribute *pattr, int limit, int unknown);\nextern void free_null(attribute *attr);\nextern void free_none(attribute *attr);\nextern svrattrl *attrlist_alloc(int 
szname, int szresc, int szval);\nextern svrattrl *attrlist_create(char *aname, char *rname, int szval);\nsvrattrl *dup_svrattrl(svrattrl *osvrat);\nextern void free_svrattrl(svrattrl *pal);\nextern void free_attrlist(pbs_list_head *attrhead);\nextern void free_svrcache(attribute *attr);\nextern int attr_atomic_set(svrattrl *plist, attribute *old,\n\t\t\t   attribute *nattr, void *adef_idx, attribute_def *pdef, int limit,\n\t\t\t   int unkn, int privil, int *badattr);\nextern int attr_atomic_node_set(svrattrl *plist, attribute *old,\n\t\t\t\tattribute *nattr, attribute_def *pdef, int limit,\n\t\t\t\tint unkn, int privil, int *badattr);\nextern void attr_atomic_kill(attribute *temp, attribute_def *pdef, int);\nextern void attr_atomic_copy(attribute *old, attribute *nattr, attribute_def *pdef, int limit);\n\nextern int copy_svrattrl_list(pbs_list_head *from_phead, pbs_list_head *to_head);\nextern int convert_attrl_to_svrattrl(struct attrl *from_list, pbs_list_head *to_head);\nextern int compare_svrattrl_list(pbs_list_head *list1, pbs_list_head *list2);\nextern svrattrl *find_svrattrl_list_entry(pbs_list_head *phead, char *name,\n\t\t\t\t\t  char *resc);\nextern int add_to_svrattrl_list(pbs_list_head *phead, char *name_str, char *resc_str,\n\t\t\t\tchar *val_str, unsigned int flag, char *name_prefix);\nextern int add_to_svrattrl_list_sorted(pbs_list_head *phead, char *name_str, char *resc_str,\n\t\t\t\t       char *val_str, unsigned int flag, char *name_prefix);\nextern unsigned int get_svrattrl_flag(char *name, char *resc, char *val,\n\t\t\t\t      pbs_list_head *svrattrl_list, int hook_set_flag);\nextern int compare_svrattrl_list(pbs_list_head *l1, pbs_list_head *l2);\n\nextern void free_str_array(char **);\nextern char **svrattrl_to_str_array(pbs_list_head *);\nextern int str_array_to_svrattrl(char **str_array, pbs_list_head *to_head, char *header_str);\nextern char *str_array_to_str(char **str_array, char delimiter);\nextern char *env_array_to_str(char **env_array, 
char delimiter);\nextern char **str_to_str_array(char *str, char delimiter);\nextern char *strtok_quoted(char *source, char delimiter);\n\nextern int decode_b(attribute *patr, char *name, char *rn, char *val);\nextern int decode_c(attribute *patr, char *name, char *rn, char *val);\nextern int decode_entlim(attribute *patr, char *name, char *rn, char *val);\nextern int decode_entlim_res(attribute *patr, char *name, char *rn, char *val);\nextern int decode_f(attribute *patr, char *name, char *rn, char *val);\nextern int decode_l(attribute *patr, char *name, char *rn, char *val);\nextern int decode_ll(attribute *patr, char *name, char *rn, char *val);\nextern int decode_size(attribute *patr, char *name, char *rn, char *val);\nextern int decode_str(attribute *patr, char *name, char *rn, char *val);\nextern int decode_jobname(attribute *patr, char *name, char *rn, char *val);\nextern int decode_time(attribute *patr, char *name, char *rn, char *val);\nextern int decode_arst(attribute *patr, char *name, char *rn, char *val);\nextern int decode_arst_bs(attribute *patr, char *name, char *rn, char *val);\nextern int decode_resc(attribute *patr, char *name, char *rn, char *val);\nextern int decode_depend(attribute *patr, char *name, char *rn, char *val);\nextern int decode_hold(attribute *patr, char *name, char *rn, char *val);\nextern int decode_sandbox(attribute *patr, char *name, char *rn, char *val);\nextern int decode_project(attribute *patr, char *name, char *rn, char *val);\nextern int decode_uacl(attribute *patr, char *name, char *rn, char *val);\nextern int decode_unkn(attribute *patr, char *name, char *rn, char *val);\nextern int decode_nodes(attribute *, char *, char *, char *);\nextern int decode_select(attribute *, char *, char *, char *);\nextern int decode_Mom_list(attribute *, char *, char *, char *);\n\nextern int encode_b(const attribute *attr, pbs_list_head *phead, char *atname,\n\t\t    char *rsname, int mode, svrattrl **rtnl);\nextern int encode_c(const 
attribute *attr, pbs_list_head *phead, char *atname,\n\t\t    char *rsname, int mode, svrattrl **rtnl);\nextern int encode_entlim(const attribute *attr, pbs_list_head *phead, char *atname,\n\t\t\t char *rsname, int mode, svrattrl **rtnl);\nextern int encode_f(const attribute *attr, pbs_list_head *phead, char *atname,\n\t\t    char *rsname, int mode, svrattrl **rtnl);\nextern int encode_l(const attribute *attr, pbs_list_head *phead, char *atname,\n\t\t    char *rsname, int mode, svrattrl **rtnl);\nextern int encode_ll(const attribute *attr, pbs_list_head *phead, char *atname,\n\t\t     char *rsname, int mode, svrattrl **rtnl);\nextern int encode_size(const attribute *attr, pbs_list_head *phead, char *atname,\n\t\t       char *rsname, int mode, svrattrl **rtnl);\nextern int encode_str(const attribute *attr, pbs_list_head *phead, char *atname,\n\t\t      char *rsname, int mode, svrattrl **rtnl);\nextern int encode_time(const attribute *attr, pbs_list_head *phead, char *atname,\n\t\t       char *rsname, int mode, svrattrl **rtnl);\nextern int encode_arst(const attribute *attr, pbs_list_head *phead, char *atname,\n\t\t       char *rsname, int mode, svrattrl **rtnl);\nextern int encode_arst_bs(const attribute *attr, pbs_list_head *phead, char *atname,\n\t\t\t  char *rsname, int mode, svrattrl **rtnl);\nextern int encode_resc(const attribute *attr, pbs_list_head *phead, char *atname,\n\t\t       char *rsname, int mode, svrattrl **rtnl);\nextern int encode_inter(const attribute *attr, pbs_list_head *phead, char *atname,\n\t\t\tchar *rsname, int mode, svrattrl **rtnl);\nextern int encode_unkn(const attribute *attr, pbs_list_head *phead, char *atname,\n\t\t       char *rsname, int mode, svrattrl **rtnl);\nextern int encode_depend(const attribute *attr, pbs_list_head *phead, char *atname,\n\t\t\t char *rsname, int mode, svrattrl **rtnl);\nextern int encode_hold(const attribute *attr, pbs_list_head *phead, char *atname,\n\t\t       char *rsname, int mode, svrattrl 
**rtnl);\n\nextern int set_b(attribute *attr, attribute *nattr, enum batch_op);\nextern int set_c(attribute *attr, attribute *nattr, enum batch_op);\nextern int set_entlim(attribute *attr, attribute *nattr, enum batch_op);\nextern int set_entlim_res(attribute *attr, attribute *nattr, enum batch_op);\nextern int set_f(attribute *attr, attribute *nattr, enum batch_op);\nextern int set_l(attribute *attr, attribute *nattr, enum batch_op);\nextern int set_ll(attribute *attr, attribute *nattr, enum batch_op);\nextern int set_size(attribute *attr, attribute *nattr, enum batch_op);\nextern int set_str(attribute *attr, attribute *nattr, enum batch_op);\nextern int set_arst(attribute *attr, attribute *nattr, enum batch_op);\nextern int set_arst_uniq(attribute *attr, attribute *nattr, enum batch_op);\nextern int set_resc(attribute *attr, attribute *nattr, enum batch_op);\nextern int set_hostacl(attribute *attr, attribute *nattr, enum batch_op);\nextern int set_uacl(attribute *attr, attribute *nattr, enum batch_op);\nextern int set_gacl(attribute *attr, attribute *nattr, enum batch_op);\nextern int set_unkn(attribute *attr, attribute *nattr, enum batch_op);\nextern int set_depend(attribute *attr, attribute *nattr, enum batch_op);\nextern u_Long get_kilobytes_from_attr(attribute *);\nextern u_Long get_bytes_from_attr(attribute *);\n\nextern int comp_b(attribute *attr, attribute *with);\nextern int comp_c(attribute *attr, attribute *with);\nextern int comp_f(attribute *attr, attribute *with);\nextern int comp_l(attribute *attr, attribute *with);\nextern int comp_ll(attribute *attr, attribute *with);\nextern int comp_size(attribute *attr, attribute *with);\nextern void from_size(const struct size_value *, char *);\nextern int comp_str(attribute *attr, attribute *with);\nextern int comp_arst(attribute *attr, attribute *with);\nextern int comp_resc(attribute *attr, attribute *with);\nextern int comp_unkn(attribute *attr, attribute *with);\nextern int comp_depend(attribute *attr, 
attribute *with);\nextern int comp_hold(attribute *attr, attribute *with);\n\nextern int action_depend(attribute *attr, void *pobj, int mode);\nextern int check_no_entlim(attribute *attr, void *pobj, int mode);\nextern int action_entlim_chk(attribute *attr, void *pobj, int mode);\nextern int action_entlim_ct(attribute *attr, void *pobj, int mode);\nextern int action_entlim_res(attribute *attr, void *pobj, int mode);\nextern int at_non_zero_time(attribute *attr, void *pobj, int mode);\nextern int set_log_events(attribute *pattr, void *pobject, int actmode);\n\nextern void free_str(attribute *attr);\nextern void free_arst(attribute *attr);\nextern void free_entlim(attribute *attr);\nextern void free_resc(attribute *attr);\nextern void free_depend(attribute *attr);\nextern void free_unkn(attribute *attr);\nextern int parse_equal_string(char *start, char **name, char **value);\nextern char *parse_comma_string(char *start);\nextern char *return_external_value(char *name, char *val);\nextern char *return_internal_value(char *name, char *val);\n\n#define NULL_FUNC_CMP (int (*)(attribute *, attribute *)) 0\n#define NULL_FUNC_RESC (int (*)(resource *, attribute *, void *, int, int)) 0\n#define NULL_FUNC (int (*)(attribute *, void *, int)) 0\n#define NULL_VERIFY_DATATYPE_FUNC (int (*)(struct attropl *, char **)) 0\n#define NULL_VERIFY_VALUE_FUNC (int (*)(int, int, int, struct attropl *, char **)) 0\n\n/* other associated functions */\n\nextern int acl_check(attribute *, char *candidate, int type);\nextern int check_duplicates(struct array_strings *strarr);\n\nextern char *arst_string(char *str, attribute *pattr);\nextern void attrl_fixlink(pbs_list_head *svrattrl);\nextern int save_attr_fs(attribute_def *, attribute *, int);\n\nextern int encode_state(const attribute *, pbs_list_head *, char *,\n\t\t\tchar *, int, svrattrl **rtnl);\nextern int encode_props(const attribute *, pbs_list_head *, char *,\n\t\t\tchar *, int, svrattrl **rtnl);\nextern int encode_jobs(const attribute 
*, pbs_list_head *, char *,\n\t\t       char *, int, svrattrl **rtnl);\nextern int encode_resvs(const attribute *, pbs_list_head *, char *,\n\t\t\tchar *, int, svrattrl **rtnl);\nextern int encode_ntype(const attribute *, pbs_list_head *, char *,\n\t\t\tchar *, int, svrattrl **rtnl);\nextern int encode_sharing(const attribute *, pbs_list_head *, char *,\n\t\t\t  char *, int, svrattrl **rtnl);\nextern int decode_state(attribute *, char *, char *, char *);\nextern int decode_props(attribute *, char *, char *, char *);\nextern int decode_ntype(attribute *, char *, char *, char *);\nextern int decode_sharing(attribute *, char *, char *, char *);\nextern int decode_null(attribute *, char *, char *, char *);\nextern int comp_null(attribute *, attribute *);\nextern int count_substrings(char *, int *);\nextern int set_resources_min_max(attribute *, attribute *, enum batch_op);\nextern int set_node_state(attribute *, attribute *, enum batch_op);\nextern int set_node_ntype(attribute *, attribute *, enum batch_op);\nextern int set_node_props(attribute *, attribute *, enum batch_op);\nextern int set_null(attribute *, attribute *, enum batch_op);\nextern int node_state(attribute *, void *, int);\nextern int node_np_action(attribute *, void *, int);\nextern int node_ntype(attribute *, void *, int);\nextern int node_prop_list(attribute *, void *, int);\nextern int node_comment(attribute *, void *, int);\nextern int is_true_or_false(char *val);\nextern void unset_entlim_resc(attribute *, char *);\nextern int action_node_partition(attribute *, void *, int);\n\n/* Action routines for OS provisioning */\nextern int node_prov_enable_action(attribute *, void *, int);\nextern int node_current_aoe_action(attribute *, void *, int);\nextern int svr_max_conc_prov_action(attribute *, void *, int);\n\n/* Manager functions */\nextern void mgr_log_attr(char *, struct svrattrl *, int, char *, char *);\nextern int mgr_set_attr(attribute *, void *, attribute_def *, int, svrattrl *, int, int *, 
void *, int);\n/* Extern functions (at_action) called  from job_attr_def*/\n\nextern int job_set_wait(attribute *, void *, int);\nextern int setup_arrayjob_attrs(attribute *pattr, void *pobject, int actmode);\nextern int fixup_arrayindicies(attribute *pattr, void *pobject, int actmode);\nextern int action_resc_job(attribute *pattr, void *pobject, int actmode);\nextern int ck_chkpnt(attribute *pattr, void *pobject, int actmode);\nextern int keepfiles_action(attribute *pattr, void *pobject, int actmode);\nextern int removefiles_action(attribute *pattr, void *pobject, int actmode);\n/*extern int depend_on_que(attribute *, void *, int);*/\nextern int comp_chkpnt(attribute *, attribute *);\nextern int alter_eligibletime(attribute *, void *, int);\nextern int action_max_run_subjobs(attribute *, void *, int);\n/* Extern functions from svr_attr_def */\nextern int manager_oper_chk(attribute *pattr, void *pobject, int actmode);\nextern int poke_scheduler(attribute *pattr, void *pobject, int actmode);\nextern int cred_name_okay(attribute *pattr, void *pobject, int actmode);\nextern int action_reserve_retry_time(attribute *pattr, void *pobject, int actmode);\nextern int action_reserve_retry_init(attribute *pattr, void *pobject, int actmode);\nextern int set_rpp_retry(attribute *pattr, void *pobject, int actmode);\nextern int set_node_fail_requeue(attribute *pattr, void *pobject, int actmode);\nextern int set_resend_term_delay(attribute *pattr, void *pobject, int actmode);\nextern int set_rpp_highwater(attribute *pattr, void *pobject, int actmode);\nextern int set_license_location(attribute *pattr, void *pobject, int actmode);\nextern int set_license_min(attribute *pattr, void *pobject, int actmode);\nextern int set_license_max(attribute *pattr, void *pobject, int actmode);\nextern int set_license_linger(attribute *pattr, void *pobject, int actmode);\nextern int set_job_history_enable(attribute *pattr, void *pobject, int actmode);\nextern int set_job_history_duration(attribute 
*pattr, void *pobject, int actmode);\nextern int default_queue_chk(attribute *pattr, void *pobject, int actmode);\nextern int force_qsub_daemons_update_action(attribute *pattr, void *pobject, int actmode);\nextern int action_resc_dflt_svr(attribute *pattr, void *pobj, int actmode);\nextern int action_jobscript_max_size(attribute *pattr, void *pobj, int actmode);\nextern int action_check_res_to_release(attribute *pattr, void *pobj, int actmode);\nextern int set_max_job_sequence_id(attribute *pattr, void *pobj, int actmode);\nextern int set_cred_renew_enable(attribute *pattr, void *pobject, int actmode);\nextern int set_cred_renew_period(attribute *pattr, void *pobject, int actmode);\nextern int set_cred_renew_cache_period(attribute *pattr, void *pobject, int actmode);\nextern int action_clear_topjob_estimates(attribute *pattr, void *pobj, int actmode);\n\n/* Extern functions from sched_attr_def*/\nextern int action_opt_bf_fuzzy(attribute *pattr, void *pobj, int actmode);\n\nextern int encode_svrstate(const attribute *pattr, pbs_list_head *phead, char *aname,\n\t\t\t   char *rsname, int mode, svrattrl **rtnl);\n\nextern int decode_rcost(attribute *patr, char *name, char *rn, char *val);\nextern int encode_rcost(const attribute *attr, pbs_list_head *phead, char *atname,\n\t\t\tchar *rsname, int mode, svrattrl **rtnl);\nextern int set_rcost(attribute *attr, attribute *nattr, enum batch_op);\nextern void free_rcost(attribute *attr);\nextern int decode_null(attribute *patr, char *name, char *rn, char *val);\nextern int set_null(attribute *patr, attribute *nattr, enum batch_op op);\nextern int eligibletime_action(attribute *pattr, void *pobject, int actmode);\nextern int decode_formula(attribute *patr, char *name, char *rn, char *val);\nextern int action_backfill_depth(attribute *pattr, void *pobj, int actmode);\nextern int action_est_start_time_freq(attribute *pattr, void *pobj, int actmode);\nextern int action_sched_iteration(attribute *pattr, void *pobj, int 
actmode);\nextern int action_sched_priv(attribute *pattr, void *pobj, int actmode);\nextern int action_sched_log(attribute *pattr, void *pobj, int actmode);\nextern int action_sched_user(attribute *pattr, void *pobj, int actmode);\nextern int action_sched_host(attribute *pattr, void *pobj, int actmode);\nextern int action_sched_partition(attribute *pattr, void *pobj, int actmode);\nextern int action_sched_preempt_order(attribute *pattr, void *pobj, int actmode);\nextern int action_sched_preempt_common(attribute *pattr, void *pobj, int actmode);\nextern int action_job_run_wait(attribute *pattr, void *pobj, int actmode);\nextern int action_throughput_mode(attribute *pattr, void *pobj, int actmode);\n\n/* Extern functions from queue_attr_def */\nextern int decode_null(attribute *patr, char *name, char *rn, char *val);\nextern int set_null(attribute *patr, attribute *nattr, enum batch_op op);\nextern int cred_name_okay(attribute *pattr, void *pobject, int actmode);\nextern int action_resc_dflt_queue(attribute *pattr, void *pobj, int actmode);\nextern int action_queue_partition(attribute *pattr, void *pobj, int actmode);\n/* Extern functions (at_action) called  from resv_attr_def */\nextern int action_resc_resv(attribute *pattr, void *pobject, int actmode);\n\n/* Functions used to save and recover the attributes from the database */\nextern int encode_single_attr_db(attribute_def *padef, attribute *pattr, pbs_db_attr_list_t *db_attr_list);\nextern int encode_attr_db(attribute_def *padef, attribute *pattr, int numattr, pbs_db_attr_list_t *db_attr_list, int all);\nextern int decode_attr_db(void *parent, pbs_list_head *attr_list,\n\t\t\t  void *padef_idx, attribute_def *padef, attribute *pattr, int limit, int unknown);\n\nextern int is_attr(int, char *, int);\n\nextern int set_attr(struct attrl **attrib, const char *attrib_name, const char *attrib_value);\nextern int set_attr_resc(struct attrl **attrib, const char *attrib_name, const char *attrib_resc, const char 
*attrib_value);\n\nextern svrattrl *make_attr(char *attr_name, char *attr_resc, char *attr_value, int attr_flags);\nextern void *cr_attrdef_idx(attribute_def *adef, int limit);\n\n/* Attr setters */\nint set_attr_generic(attribute *pattr, attribute_def *pdef, char *value, char *rescn, enum batch_op op);\nint set_attr_with_attr(attribute_def *pdef, attribute *oattr, attribute *nattr, enum batch_op op);\nvoid set_attr_l(attribute *pattr, long value, enum batch_op op);\nvoid set_attr_ll(attribute *pattr, long long value, enum batch_op op);\nvoid set_attr_c(attribute *pattr, char value, enum batch_op op);\nvoid set_attr_b(attribute *pattr, long val, enum batch_op op);\nvoid set_attr_short(attribute *pattr, short value, enum batch_op op);\nvoid mark_attr_not_set(attribute *attr);\nvoid mark_attr_set(attribute *attr);\nvoid post_attr_set(attribute *attr);\n\n/* Attr getters */\nchar get_attr_c(const attribute *pattr);\nlong get_attr_l(const attribute *pattr);\nlong long get_attr_ll(const attribute *pattr);\nchar *get_attr_str(const attribute *pattr);\nstruct array_strings *get_attr_arst(const attribute *pattr);\nint is_attr_set(const attribute *pattr);\nattribute *_get_attr_by_idx(attribute *list, int attr_idx);\npbs_list_head get_attr_list(const attribute *pattr);\nvoid free_attr(attribute_def *attr_def, attribute *pattr, int attr_idx);\n\n/* \"type\" to pass to acl_check() */\n#define ACL_Host 1\n#define ACL_User 2\n#define ACL_Group 3\n#define ACL_Subnet 4\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _ATTRIBUTE_H */\n"
  },
  {
    "path": "src/include/auth.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _AUTH_H\n#define _AUTH_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include <netdb.h>\n\n#include \"libauth.h\"\n\n#define AUTH_RESVPORT_NAME \"resvport\"\n#define AUTH_MUNGE_NAME \"munge\"\n#define AUTH_GSS_NAME \"gss\"\n#ifndef MAXPATHLEN\n#define MAXPATHLEN 1024\n#endif\n\n#define FOR_AUTH 0\n#define FOR_ENCRYPT 1\n\nenum AUTH_CTX_STATUS {\n\tAUTH_STATUS_UNKNOWN = 0,\n\tAUTH_STATUS_CTX_ESTABLISHING,\n\tAUTH_STATUS_CTX_READY\n};\n\ntypedef struct auth_def auth_def_t;\nstruct auth_def {\n\t/* name of the authentication method */\n\tchar name[MAXAUTHNAME + 1];\n\n\t/* pointer to store handle from loaded auth library */\n\tvoid *lib_handle;\n\n\t/*\n\t * the function pointer to set logger method for auth lib\n\t */\n\tvoid (*set_config)(const pbs_auth_config_t *auth_config);\n\n\t/*\n\t * the function pointer to create new auth context used by auth lib\n\t */\n\tint (*create_ctx)(void **ctx, int mode, int conn_type, const char *hostname);\n\n\t/*\n\t * the function pointer to free auth context used by auth lib\n\t */\n\tvoid (*destroy_ctx)(void *ctx);\n\n\t/*\n\t * the function pointer to get user, host and realm information from authentication context\n\t */\n\tint (*get_userinfo)(void *ctx, char **user, char **host, char **realm);\n\n\t/*\n\t * the function pointer to process auth handshake data and authenticate 
user/connection\n\t */\n\tint (*process_handshake_data)(void *ctx, void *data_in, size_t len_in, void **data_out, size_t *len_out, int *is_handshake_done);\n\n\t/*\n\t * the function pointer to encrypt data\n\t */\n\tint (*encrypt_data)(void *ctx, void *data_in, size_t len_in, void **data_out, size_t *len_out);\n\n\t/*\n\t * the function pointer to decrypt data\n\t */\n\tint (*decrypt_data)(void *ctx, void *data_in, size_t len_in, void **data_out, size_t *len_out);\n\n\t/*\n\t * pointer to next authdef structure\n\t */\n\tauth_def_t *next;\n};\n\nenum AUTH_MSG_TYPES {\n\tAUTH_CTX_DATA = 1, /* starts from 1, zero means EOF */\n\tAUTH_ERR_DATA,\n\tAUTH_CTX_OK,\n\tAUTH_ENCRYPTED_DATA,\n\tAUTH_LAST_MSG\n};\n\nextern auth_def_t *get_auth(char *);\nextern int load_auths(int mode);\nextern void unload_auths(void);\nint is_valid_encrypt_method(char *);\npbs_auth_config_t *make_auth_config(char *, char *, char *, char *, void *);\nvoid free_auth_config(pbs_auth_config_t *);\n\nextern int engage_client_auth(int, const char *, int, char *, size_t);\nextern int engage_server_auth(int, char *, int, int, char *, size_t);\nint handle_client_handshake(int fd, const char *hostname, char *method, int for_encrypt, pbs_auth_config_t *config, char *ebuf, size_t ebufsz);\n\n/* For qsub interactive - execution host authentication */\nenum INTERACTIVE_AUTH_STATUS {\n\tINTERACTIVE_AUTH_SUCCESS = 0,\n\tINTERACTIVE_AUTH_FAILED,\n\tINTERACTIVE_AUTH_RETRY\n};\nint auth_exec_socket(int sock, struct sockaddr_in *from, char *auth_method, char *encrypt_method, char *jobid);\nint auth_with_qsub(int sock, unsigned short port, char* hostname, char *auth_method, char *encrypt_method, char *jobid);\nint client_cipher_auth(int fd, char *text, char *ebuf, size_t ebufsz);\nint server_cipher_auth(int fd, char *text, char *ebuf, size_t ebufsz);\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _AUTH_H */\n"
  },
  {
    "path": "src/include/avltree.h",
    "content": "/*\n *\tThe  first  version  of  this  code  was written in Algol 68 by Gregory\n *\tTseytin  (tseyting@acm.org)  who  later  translated  it  to   C.    The\n *\tAVL_COUNT_DUPS  option  was  added  at  the  suggestion  of  Bill  Ross\n *\t(bross@nas.nasa.gov), who also packaged the code for distribution.\n *\n *\tTaken from NetBSD avltree-1.1.tar.gz.\n */\n/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _AVLTREE_H\n#define _AVLTREE_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#define AVL_DEFAULTKEYLEN (4 * sizeof(int)) /* size of default key */\n\ntypedef void *AVL_RECPOS;\n\ntypedef struct {\n\tAVL_RECPOS recptr;\n\tunsigned int count;\n\tchar key[AVL_DEFAULTKEYLEN];\n\t/* actually can be of any length */\n} rectype;\n\ntypedef rectype AVL_IX_REC;\n\ntypedef struct {\n\tvoid *root;\n\tint keylength; /* zero for null-terminated strings */\n\tint flags;\n} AVL_IX_DESC;\n\n/*  return codes  */\n#define AVL_IX_OK 1\n#define AVL_IX_FAIL 0\n#define AVL_EOIX -2\n\n/* default behavior is no-dup-keys and case-sensitive search */\n#define AVL_DUP_KEYS_OK 0x01 /* repeated key & rec cause an error message */\n#define AVL_CASE_CMP 0x02    /* case insensitive search */\n\nextern void avl_set_maxthreads(int n);\nextern void *get_avl_tls(void);\nextern void free_avl_tls(void);\nextern int avl_create_index(AVL_IX_DESC *pix, int flags, int keylength);\nextern void avl_destroy_index(AVL_IX_DESC *pix);\nextern int avl_find_key(AVL_IX_REC *pe, AVL_IX_DESC *pix);\nextern int avl_add_key(AVL_IX_REC *pe, AVL_IX_DESC *pix);\nextern int avl_delete_key(AVL_IX_REC *pe, AVL_IX_DESC *pix);\nextern void avl_first_key(AVL_IX_DESC *pix);\nextern int avl_next_key(AVL_IX_REC *pe, AVL_IX_DESC *pix);\n\n/* Added by Altair */\nAVL_IX_REC *avlkey_create(AVL_IX_DESC 
*tree, void *key);\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _AVLTREE_H */\n"
  },
  {
    "path": "src/include/basil.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * This file is provided as a convenience to anyone wishing to utilize\n * the Batch and Application Scheduler Interface Layer (BASIL) for the\n * Application Level Placement Scheduler (ALPS). 
It contains macro and\n * structure definitions that identify the elements and attributes\n * found in BASIL.\n *\n * BASIL was originally designed and coded by Michael Karo (mek@cray.com).\n *\n * BASIL has been improved and updated by contributors including:\n *  - Jason Coverston (jcovers@cray.com)\n *  - Benjamin Landsteiner (ben@cray.com)\n *\n */\n/* WARNING - this file has been modified by Altair from the original\n * file provided by Cray. Please merge this file with any new basil.h\n * copies that Cray may provide.\n */\n\n// clang-format off\n\n#ifndef _BASIL_H\n#define _BASIL_H\n\n\n#ifndef __GNUC__\n#define __attribute__(x) /* nothing */\n#endif\n\n#define BASIL_STRING_SHORT (16)\n#define BASIL_STRING_MEDIUM (32)\n#define BASIL_STRING_LONG (64)\n#define BASIL_ERROR_BUFFER_SIZE (256)\n\n#define BASIL_STRSET_SHORT(dst, src) \tsnprintf(dst, BASIL_STRING_SHORT, \"%s\", src)\n#define BASIL_BZERO_SHORT(p)\t\tmemset(p, 0, BASIL_STRING_SHORT)\n#define BASIL_STRSET_MEDIUM(dst, src)\tsnprintf(dst, BASIL_STRING_MEDIUM, \"%s\", src)\n#define BASIL_BZERO_MEDIUM(p) \t\tmemset(p, 0, BASIL_STRING_MEDIUM)\n#define BASIL_STRSET_LONG(dst, src)\tsnprintf(dst, BASIL_STRING_LONG, \"%s\", src)\n#define BASIL_BZERO_LONG(p)\t\tmemset(p, 0, BASIL_STRING_LONG)\n\n/*\n *\tMacro Name\t\tText\t\t\tMay Appear Within\n *\t==========\t\t====\t\t\t=================\n */\n\n/* XML element names */\n\n#define BASIL_ELM_MESSAGE\t\"Message\"\t\t/* All elements */\n#define BASIL_ELM_REQUEST\t\"BasilRequest\"\t\t/* Top level */\n#define BASIL_ELM_RESVPARAMARRAY \"ReserveParamArray\"\t/* BASIL_ELM_REQUEST */\n#define BASIL_ELM_RESERVEPARAM\t\"ReserveParam\"\t\t/* BASIL_ELM_RESVPARAMARRAY */\n#define BASIL_ELM_NODEPARMARRAY \"NodeParamArray\"\t/* BASIL_ELM_RESERVEPARAM */\n#define BASIL_ELM_NODEPARAM\t\"NodeParam\"\t\t/* BASIL_ELM_NODEPARMARRAY */\n#define BASIL_ELM_MEMPARAMARRAY \"MemoryParamArray\"\t/* BASIL_ELM_RESERVEPARAM */\n#define BASIL_ELM_MEMPARAM\t\"MemoryParam\"\t\t/* 
BASIL_ELM_MEMPARAMARRAY */\n#define BASIL_ELM_LABELPARAMARRAY \"LabelParamArray\"\t/* BASIL_ELM_RESERVEPARAM */\n#define BASIL_ELM_LABELPARAM\t\"LabelParam\"\t\t/* BASIL_ELM_LABELPARAMARRAY */\n\n#define BASIL_ELM_RESPONSE\t\"BasilResponse\"\t\t/* Top level */\n#define BASIL_ELM_RESPONSEDATA\t\"ResponseData\"\t\t/* BASIL_ELM_RESPONSE */\n#define BASIL_ELM_RESERVED\t\"Reserved\"\t\t/* BASIL_ELM_RESPONSEDATA */\n#define BASIL_ELM_CONFIRMED\t\"Confirmed\"\t\t/* BASIL_ELM_RESPONSEDATA */\n#define BASIL_ELM_RELEASED\t\"Released\"\t\t/* BASIL_ELM_RESPONSEDATA */\n#define BASIL_ELM_ENGINE\t\"Engine\"\t\t/* BASIL_ELM_RESPONSEDATA */\n#define BASIL_ELM_INVENTORY\t\"Inventory\"\t\t/* BASIL_ELM_RESPONSEDATA */\n#define BASIL_ELM_NETWORK\t\"Network\"\t\t/* BASIL_ELM_RESPONSEDATA */\n#define BASIL_ELM_TOPOLOGY\t\"Topology\"\t\t/* BASIL_ELM_RESPONSEDATA */\n#define BASIL_ELM_FILTARRAY\t\"FilterArray\"\t\t/* BASIL_ELM_TOPOLOGY */\n#define BASIL_ELM_FILTER\t\"Filter\"\t\t/* BASIL_ELM_FILTARRAY */\n#define BASIL_ELM_NODEARRAY\t\"NodeArray\"\t\t/* BASIL_ELM_INVENTORY */\n#define BASIL_ELM_NODE\t\t\"Node\"\t\t\t/* BASIL_ELM_NODEARRAY */\n#define BASIL_ELM_ACCELPARAMARRAY \"AccelParamArray\"\t/* BASIL_ELM_RESERVEPARAM */\n#define BASIL_ELM_ACCELPARAM\t\"AccelParam\"\t\t/* BASIL_ELM_ACCELPARAMARRAY */\n#define BASIL_ELM_ACCELERATORARRAY \"AcceleratorArray\"\t/* BASIL_ELM_NODE */\n#define BASIL_ELM_ACCELERATOR\t\"Accelerator\"\t\t/* BASIL_ELM_ACCELERATORARRAY */\n\t\t\t\t\t\t\t/* BASIL_ELM_ACCELSUM */\n#define BASIL_ELM_ACCELERATORALLOC \"AcceleratorAllocation\" /* BASIL_ELM_ACCELERATOR */\n#define BASIL_ELM_SOCKETARRAY\t\"SocketArray\"\t\t/* BASIL_ELM_NODE */\n#define BASIL_ELM_SOCKET\t\"Socket\"\t\t/* BASIL_ELM_SOCKETARRAY */\n#define BASIL_ELM_SEGMENTARRAY\t\"SegmentArray\"\t\t/* BASIL_ELM_SOCKET */\n#define BASIL_ELM_SEGMENT\t\"Segment\"\t\t/* BASIL_ELM_SEGMENTARRAY */\n#define BASIL_ELM_CUARRAY\t\"ComputeUnitArray\"\t/* BASIL_ELM_SEGMENT */\n#define 
BASIL_ELM_COMPUTEUNIT\t\"ComputeUnit\"\t\t/* BASIL_ELM_CUARRAY */\n#define BASIL_ELM_PROCESSORARRAY \"ProcessorArray\"\t/* BASIL_ELM_SEGMENT */\n\t\t\t\t\t\t\t/* BASIL_ELM_COMPUTEUNIT */\n#define BASIL_ELM_PROCESSOR\t\"Processor\"\t\t/* BASIL_ELM_PROCESSORARRAY */\n#define BASIL_ELM_PROCESSORALLOC \"ProcessorAllocation\"\t/* BASIL_ELM_PROCESSOR */\n#define BASIL_ELM_MEMORYARRAY\t\"MemoryArray\"\t\t/* BASIL_ELM_SEGMENT */\n#define BASIL_ELM_MEMORY\t\"Memory\"\t\t/* BASIL_ELM_MEMORYARRAY */\n#define BASIL_ELM_MEMORYALLOC\t\"MemoryAllocation\"\t/* BASIL_ELM_MEMORY */\n#define BASIL_ELM_LABELARRAY\t\"LabelArray\"\t\t/* BASIL_ELM_SEGMENT */\n#define BASIL_ELM_LABEL\t\t\"Label\"\t\t\t/* BASIL_ELM_LABELARRAY */\n#define BASIL_ELM_RSVNARRAY\t\"ReservationArray\"\t/* BASIL_ELM_INVENTORY */\n\t\t\t\t\t\t\t/* BASIL_ELM_RESPONSEDATA */\n#define BASIL_ELM_RESERVATION\t\"Reservation\"\t\t/* BASIL_ELM_RSVNARRAY */\n#define BASIL_ELM_APPARRAY\t\"ApplicationArray\"\t/* BASIL_ELM_RESERVATION */\n\t\t\t\t\t\t\t/* BASIL_ELM_RESPONSEDATA */\n#define BASIL_ELM_APPLICATION\t\"Application\"\t\t/* BASIL_ELM_APPARRAY */\n#define BASIL_ELM_CMDARRAY\t\"CommandArray\"\t\t/* BASIL_ELM_APPLICATION */\n#define BASIL_ELM_COMMAND\t\"Command\"\t\t/* BASIL_ELM_CMDARRAY */\n#define BASIL_ELM_RSVD_NODEARRAY \"ReservedNodeArray\"\t/* BASIL_ELM_RESERVED */\n#define BASIL_ELM_RSVD_NODE\t\"ReservedNode\"\t\t/* BASIL_ELM_RSVD_NODEARRAY */\n#define BASIL_ELM_RSVD_SGMTARRAY \"ReservedSegmentArray\" /* BASIL_ELM_RSVD_NODE */\n#define BASIL_ELM_RSVD_SGMT\t\"ReservedSegment\"\t/* BASIL_ELM_RSVD_SGMTARRAY */\n#define BASIL_ELM_RSVD_PROCARRAY \"ReservedProcessorArray\"/* BASIL_ELM_RSVD_SGMT */\n#define BASIL_ELM_RSVD_PROCESSOR \"ReservedProcessor\"\t/* BASIL_ELM_RSVD_PROCARRAY */\n#define BASIL_ELM_RSVD_MEMARRAY \"ReservedMemoryArray\"\t/* BASIL_ELM_RSVD_SGMT */\n#define BASIL_ELM_RSVD_MEMORY\t\"ReservedMemory\"\t/* BASIL_ELM_RSVD_MEMARRAY */\n#define BASIL_ELM_SUMMARY\t\"Summary\"\t\t/* BASIL_ELM_RESPONSEDATA 
*/\n#define BASIL_ELM_NODESUM\t\"NodeSummary\"\t\t/* BASIL_ELM_SUMMARY */\n#define BASIL_ELM_ACCELSUM\t\"AccelSummary\"\t\t/* BASIL_ELM_SUMMARY */\n#define BASIL_ELM_UP\t\t\"Up\"\t\t\t/* BASIL_ELM_NODESUM */\n\t\t\t\t\t\t\t/* BASIL_ELM_ACCELSUM */\n#define BASIL_ELM_DOWN\t\t\"Down\"\t\t\t/* BASIL_ELM_NODESUM */\n\t\t\t\t\t\t\t/* BASIL_ELM_ACCELSUM */\n/* XML attribute names */\n#define BASIL_ATR_PROTOCOL\t\"protocol\"\t\t/* BASIL_ELM_REQUEST */\n\t\t\t\t\t\t\t/* BASIL_ELM_RESPONSE */\n#define BASIL_ATR_METHOD\t\"method\"\t\t/* BASIL_ELM_REQUEST */\n\t\t\t\t\t\t\t/* BASIL_ELM_RESPONSEDATA */\n#define BASIL_ATR_STATUS\t\"status\"\t\t/* BASIL_ELM_RESPONSEDATA */\n\t\t\t\t\t\t\t/* BASIL_ELM_APPLICATION */\n\t\t\t\t\t\t\t/* BASIL_ELM_RESERVATION */\n#define BASIL_ATR_ERROR_CLASS\t\"error_class\"\t\t/* BASIL_ELM_RESPONSEDATA */\n#define BASIL_ATR_ERROR_SOURCE\t\"error_source\"\t\t/* BASIL_ELM_RESPONSEDATA */\n#define BASIL_ATR_SEVERITY\t\"severity\"\t\t/* BASIL_ELM_MSG */\n#define BASIL_ATR_TYPE\t\t\"type\"\t\t\t/* BASIL_ELM_REQUEST:query */\n\t\t\t\t\t\t\t/* BASIL_ELM_MEMORY */\n\t\t\t\t\t\t\t/* BASIL_ELM_LABEL */\n\t\t\t\t\t\t\t/* BASIL_ELM_ACCELERATOR */\n#define BASIL_ATR_USER_NAME\t\"user_name\"\t\t/* BASIL_ELM_RESERVEARRAY */\n\t\t\t\t\t\t\t/* BASIL_ELM_RESERVATION */\n#define BASIL_ATR_ACCOUNT_NAME\t\"account_name\"\t\t/* BASIL_ELM_RESERVEARRAY */\n\t\t\t\t\t\t\t/* BASIL_ELM_RESERVATION */\n#define BASIL_ATR_BATCH_ID\t\"batch_id\"\t\t/* BASIL_ELM_RESERVEARRAY */\n\t\t\t\t\t\t\t/* BASIL_ELM_RESERVATION */\n#define BASIL_ATR_PAGG_ID\t\"pagg_id\"\t\t/* BASIL_ELM_RESERVATION */\n\t\t\t\t\t\t\t/* BASIL_ELM_REQUEST:confirm */\n\t\t\t\t\t\t\t/* BASIL_ELM_REQUEST:cancel */\n#define BASIL_ATR_ADMIN_COOKIE\t\"admin_cookie\"\t\t/* synonymous with pagg_id */\n#define BASIL_ATR_ALLOC_COOKIE\t\"alloc_cookie\"\t\t/* deprecated as of 1.1 */\n#define BASIL_ATR_CHANGECOUNT\t\"changecount\"\t\t/* BASIL_ELM_NODEARRAY */\n#define BASIL_ATR_SCHEDCOUNT\t\"schedchangecount\"\t/* 
BASIL_ELM_SUMMARY */\n\t\t\t\t\t\t\t/* BASIL_ELM_NODEARRAY */\n#define BASIL_ATR_CLAIMS\t\"claims\"\t\t/* BASIL_ELM_RELEASED */\n#define BASIL_ATR_RSVN_ID\t\"reservation_id\"\t/* BASIL_ELM_RESERVATION */\n\t\t\t\t\t\t\t/* BASIL_ELM_REQUEST:confirm */\n\t\t\t\t\t\t\t/* BASIL_ELM_REQUEST:release */\n#define BASIL_ATR_JOB_NAME\t\"job_name\"\t\t/* BASIL_ELM_REQUEST:confirm */\n#define BASIL_ATR_NODE_ID\t\"node_id\"\t\t/* BASIL_ELM_NODE */\n#define BASIL_ATR_ROUTER_ID\t\"router_id\"\t\t/* BASIL_ELM_NODE */\n#define BASIL_ATR_ARCH\t\t\"architecture\"\t\t/* BASIL_ELM_RESERVE */\n\t\t\t\t\t\t\t/* BASIL_ELM_NODEARRAY */\n\t\t\t\t\t\t\t/* BASIL_ELM_NODE */\n\t\t\t\t\t\t\t/* BASIL_ELM_PROCESSOR */\n\t\t\t\t\t\t\t/* BASIL_ELM_COMMAND */\n#define BASIL_ATR_ROLE\t\t\"role\"\t\t\t/* BASIL_ELM_NODE */\n\t\t\t\t\t\t\t/* BASIL_ELM_NODES */\n#define BASIL_ATR_WIDTH\t\t\"width\"\t\t\t/* BASIL_ELM_RESERVEPARAM */\n\t\t\t\t\t\t\t/* BASIL_ELM_COMMAND */\n#define BASIL_ATR_DEPTH\t\t\"depth\"\t\t\t/* BASIL_ELM_RESERVEPARAM */\n\t\t\t\t\t\t\t/* BASIL_ELM_COMMAND */\n#define BASIL_ATR_RSVN_MODE\t\"reservation_mode\"\t/* BASIL_ELM_RESERVEPARAM */\n\t\t\t\t\t\t\t/* BASIL_ELM_RESERVATION */\n#define BASIL_ATR_GPC_MODE\t\"gpc_mode\"\t\t/* BASIL_ELM_RESERVEPARAM */\n\t\t\t\t\t\t\t/* BASIL_ELM_RESERVATION */\n#define BASIL_ATR_OSCPN\t\t\"oscpn\"\t\t\t/* BASIL_ELM_RESERVEPARAM */\n#define BASIL_ATR_NPPN\t\t\"nppn\"\t\t\t/* BASIL_ELM_RESERVEPARAM */\n\t\t\t\t\t\t\t/* BASIL_ELM_COMMAND */\n#define BASIL_ATR_NPPS\t\t\"npps\"\t\t\t/* BASIL_ELM_RESERVEPARAM */\n#define BASIL_ATR_NSPN\t\t\"nspn\"\t\t\t/* BASIL_ELM_RESERVEPARAM */\n#define BASIL_ATR_NPPCU\t\t\"nppcu\"\t\t\t/* BASIL_ELM_RESERVEPARAM */\n#define BASIL_ATR_SEGMENTS\t\"segments\"\t\t/* BASIL_ELM_RESERVEPARAM */\n#define BASIL_ATR_SIZE_MB\t\"size_mb\"\t\t/* BASIL_ELM_MEMORY */\n#define BASIL_ATR_NAME\t\t\"name\"\t\t\t/* BASIL_ELM_LABEL */\n\t\t\t\t\t\t\t/* BASIL_ELM_NODE */\n\t\t\t\t\t\t\t/* BASIL_ELM_ENGINE */\n\t\t\t\t\t\t\t/* 
BASIL_ELM_FILTER */\n#define BASIL_ATR_DISPOSITION\t\"disposition\"\t\t/* BASIL_ELM_LABEL */\n#define BASIL_ATR_STATE\t\t\"state\"\t\t\t/* BASIL_ELM_NODE */\n\t\t\t\t\t\t\t/* BASIL_ELM_ACCELERATOR */\n\t\t\t\t\t\t\t/* BASIL_ELM_NODES */\n\n#define BASIL_ATR_ORDINAL\t\"ordinal\"\t\t/* BASIL_ELM_NODE */\n\t\t\t\t\t\t\t/* BASIL_ELM_SOCKET */\n\t\t\t\t\t\t\t/* BASIL_ELM_SEGMENT */\n\t\t\t\t\t\t\t/* BASIL_ELM_PROCESSOR */\n\t\t\t\t\t\t\t/* BASIL_ELM_ACCELERATOR */\n#define BASIL_ATR_CLOCK_MHZ\t\"clock_mhz\"\t\t/* BASIL_ELM_SOCKET */\n\t\t\t\t\t\t\t/* BASIL_ELM_PROCESSOR */\n\t\t\t\t\t\t\t/* BASIL_ELM_ACCELERATOR */\n#define BASIL_ATR_PAGE_SIZE_KB\t\"page_size_kb\"\t\t/* BASIL_ELM_MEMORY */\n\t\t\t\t\t\t\t/* BASIL_ELM_NODES */\n#define BASIL_ATR_PAGE_COUNT\t\"page_count\"\t\t/* BASIL_ELM_MEMORY */\n\t\t\t\t\t\t\t/* BASIL_ELM_MEMORYALLOC */\n\t\t\t\t\t\t\t/* BASIL_ELM_NODES */\n#define BASIL_ATR_PAGES_RSVD\t\"pages_rsvd\"\t\t/* BASIL_ELM_MEMORY */\n#define BASIL_ATR_VERSION\t\"version\"\t\t/* BASIL_ELM_ENGINE */\n#define BASIL_ATR_SUPPORTED\t\"basil_support\"\t\t/* BASIL_ELM_ENGINE */\n#define BASIL_ATR_MPPHOST\t\"mpp_host\"\t\t/* BASIL_ELM_INVENTORY */\n#define BASIL_ATR_TIMESTAMP\t\"timestamp\"\t\t/* BASIL_ELM_INVENTORY */\n#define BASIL_ATR_APPLICATION_ID \"application_id\"\t/* BASIL_ELM_APPLICATION */\n#define BASIL_ATR_USER_ID\t\"user_id\"\t\t/* BASIL_ELM_APPLICATION */\n#define BASIL_ATR_GROUP_ID\t\"group_id\"\t\t/* BASIL_ELM_APPLICATION */\n#define BASIL_ATR_TIME_STAMP\t\"time_stamp\"\t\t/* BASIL_ELM_APPLICATION */\n#define BASIL_ATR_MEMORY\t\"memory\"\t\t/* BASIL_ELM_COMMAND */\n#define BASIL_ATR_MEMORY_MB\t\"memory_mb\"\t\t/* BASIL_ELM_ACCELERATOR */\n#define BASIL_ATR_NODE_COUNT\t\"node_count\"\t\t/* BASIL_ELM_COMMAND */\n#define BASIL_ATR_COMMAND\t\"cmd\"\t\t\t/* BASIL_ELM_COMMAND */\n#define BASIL_ATR_SEGMENT_ID\t\"segment_id\"\t\t/* BASIL_ELM_RSVD_SGMT */\n#define BASIL_ATR_FAMILY\t\"family\"\t\t/* BASIL_ELM_ACCELERATOR */\n#define 
BASIL_ATR_ACTION\t\"action\"\t\t/* BASIL_ELM_APPLICATION */\n\t\t\t\t\t\t\t/* BASIL_ELM_RESERVATION */\n#define BASIL_ATR_PGOVERNOR\t\"p-governor\"\t\t/* BASIL_ELM_RESERVEPARAM */\n#define BASIL_ATR_PSTATE\t\"p-state\"\t\t/* BASIL_ELM_RESERVEPARAM */\n\n/* XML attribute values */\n\n#define BASIL_VAL_VERSION_1_0\t\"1.0\"\t\t\t/* BASIL_ATR_PROTOCOL 1.0 */\n#define BASIL_VAL_VERSION_1_1\t\"1.1\"\t\t\t/* BASIL_ATR_PROTOCOL 1.1 */\n#define BASIL_VAL_VERSION_1_2   \"1.2\"\t\t\t/* BASIL_ATR_PROTOCOL 1.2 */\n#define BASIL_VAL_VERSION_1_3   \"1.3\"\t\t\t/* BASIL_ATR_PROTOCOL 1.3 */\n#define BASIL_VAL_VERSION_1_4   \"1.4\"\t\t\t/* BASIL_ATR_PROTOCOL 1.4 */\n\n\n#define BASIL_VAL_UNDEFINED\t\"UNDEFINED\"\t/* All attributes */\n#define BASIL_VAL_SUCCESS\t\"SUCCESS\"\t/* BASIL_ATR_STATUS */\n#define BASIL_VAL_FAILURE\t\"FAILURE\"\t/* BASIL_ATR_STATUS */\n#define BASIL_VAL_PERMANENT\t\"PERMANENT\"\t/* BASIL_ATR_ERROR_CLASS */\n#define BASIL_VAL_TRANSIENT\t\"TRANSIENT\"\t/* BASIL_ATR_ERROR_CLASS */\n#define BASIL_VAL_INTERNAL\t\"INTERNAL\"\t/* BASIL_ATR_ERROR_SOURCE */\n#define BASIL_VAL_SYSTEM\t\"SYSTEM\"\t/* BASIL_ATR_ERROR_SOURCE */\n\t\t\t\t\t\t/* BASIL_ATR_QRY_TYPE */\n#define BASIL_VAL_PARSER\t\"PARSER\"\t/* BASIL_ATR_ERROR_SOURCE */\n#define BASIL_VAL_SYNTAX\t\"SYNTAX\"\t/* BASIL_ATR_ERROR_SOURCE */\n#define BASIL_VAL_BACKEND\t\"BACKEND\"\t/* BASIL_ATR_ERROR_SOURCE */\n#define BASIL_VAL_ERROR\t\t\"ERROR\"\t\t/* BASIL_ATR_SEVERITY */\n#define BASIL_VAL_WARNING\t\"WARNING\"\t/* BASIL_ATR_SEVERITY */\n#define BASIL_VAL_DEBUG\t\t\"DEBUG\"\t\t/* BASIL_ATR_SEVERITY */\n#define BASIL_VAL_RESERVE\t\"RESERVE\"\t/* BASIL_ATR_METHOD */\n#define BASIL_VAL_CONFIRM\t\"CONFIRM\"\t/* BASIL_ATR_METHOD */\n#define BASIL_VAL_RELEASE\t\"RELEASE\"\t/* BASIL_ATR_METHOD */\n#define BASIL_VAL_QUERY\t\t\"QUERY\"\t\t/* BASIL_ATR_METHOD */\n#define BASIL_VAL_SWITCH\t\"SWITCH\"\t/* BASIL_ATR_METHOD */\n#define BASIL_VAL_STATUS\t\"STATUS\"\t/* BASIL_ATR_QRY_TYPE */\n#define 
BASIL_VAL_SUMMARY\t\"SUMMARY\"\t/* BASIL_ATR_QRY_TYPE */\n#define BASIL_VAL_ENGINE\t\"ENGINE\"\t/* BASIL_ATR_QRY_TYPE */\n#define BASIL_VAL_INVENTORY\t\"INVENTORY\"\t/* BASIL_ATR_QRY_TYPE */\n#define BASIL_VAL_NETWORK\t\"NETWORK\"\t/* BASIL_ATR_QRY_TYPE */\n#define BASIL_VAL_TOPOLOGY\t\"TOPOLOGY\"\t/* BASIL_ATR_QRY_TYPE */\n#define BASIL_VAL_SHARED\t\"SHARED\"\t/* BASIL_ATR_MODE */\n#define BASIL_VAL_EXCLUSIVE\t\"EXCLUSIVE\"\t/* BASIL_ATR_MODE */\n#define BASIL_VAL_CATAMOUNT\t\"CATAMOUNT\"\t/* BASIL_ATR_OS */\n#define BASIL_VAL_LINUX\t\t\"LINUX\"\t\t/* BASIL_ATR_OS */\n#define BASIL_VAL_XT\t\t\"XT\"\t\t/* BASIL_ATR_ARCH:node */\n#define BASIL_VAL_X2\t\t\"X2\"\t\t/* BASIL_ATR_ARCH:node */\n#define BASIL_VAL_X86_64\t\"x86_64\"\t/* BASIL_ATR_ARCH:proc */\n#define BASIL_VAL_AARCH64\t\"aarch64\"\t/* BASIL_ATR_ARCH:proc */\n#define BASIL_VAL_CRAY_X2\t\"cray_x2\"\t/* BASIL_ATR_ARCH:proc */\n#define BASIL_VAL_OS\t\t\"OS\"\t\t/* BASIL_ATR_MEM_TYPE */\n#define BASIL_VAL_HUGEPAGE\t\"HUGEPAGE\"\t/* BASIL_ATR_MEM_TYPE */\n#define BASIL_VAL_VIRTUAL\t\"VIRTUAL\"\t/* BASIL_ATR_MEM_TYPE */\n#define BASIL_VAL_HARD\t\t\"HARD\"\t\t/* BASIL_ATR_LABEL_TYPE */\n#define BASIL_VAL_SOFT\t\t\"SOFT\"\t\t/* BASIL_ATR_LABEL_TYPE */\n#define BASIL_VAL_ATTRACT\t\"ATTRACT\"\t/* BASIL_ATR_LABEL_DSPN */\n#define BASIL_VAL_REPEL\t\t\"REPEL\"\t\t/* BASIL_ATR_LABEL_DSPN */\n#define BASIL_VAL_INTERACTIVE\t\"INTERACTIVE\"\t/* BASIL_ATR_ROLE */\n#define BASIL_VAL_BATCH\t\t\"BATCH\"\t\t/* BASIL_ATR_ROLE */\n#define BASIL_VAL_UP\t\t\"UP\"\t\t/* BASIL_ATR_STATE */\n#define BASIL_VAL_DOWN\t\t\"DOWN\"\t\t/* BASIL_ATR_STATE */\n#define BASIL_VAL_UNAVAILABLE\t\"UNAVAILABLE\"\t/* BASIL_ATR_STATE */\n#define BASIL_VAL_ROUTING\t\"ROUTING\"\t/* BASIL_ATR_STATE */\n#define BASIL_VAL_SUSPECT\t\"SUSPECT\"\t/* BASIL_ATR_STATE */\n#define BASIL_VAL_ADMIN\t\t\"ADMIN\"\t\t/* BASIL_ATR_STATE */\n#define BASIL_VAL_UNKNOWN\t\"UNKNOWN\"\t/* BASIL_ATR_STATE */\n\t\t\t\t\t\t/* BASIL_ATR_ARCH:node */\n\t\t\t\t\t\t/* 
BASIL_ATR_ARCH:proc */\n#define BASIL_VAL_NONE\t\t\"NONE\"\t\t/* BASIL_ATR_GPC */\n#define BASIL_VAL_PROCESSOR\t\"PROCESSOR\"\t/* BASIL_ATR_GPC */\n#define BASIL_VAL_LOCAL\t\t\"LOCAL\"\t\t/* BASIL_ATR_GPC */\n#define BASIL_VAL_GLOBAL\t\"GLOBAL\"\t/* BASIL_ATR_GPC */\n#define BASIL_VAL_GPU\t\t\"GPU\"\t\t/* BASIL_ATR_TYPE */\n#define BASIL_VAL_INVALID\t\"INVALID\"\t/* BASIL_ATR_STATUS */\n#define BASIL_VAL_RUN\t\t\"RUN\"\t\t/* BASIL_ATR_STATUS */\n#define BASIL_VAL_SUSPEND\t\"SUSPEND\"\t/* BASIL_ATR_STATUS */\n#define BASIL_VAL_SWITCH\t\"SWITCH\"\t/* BASIL_ATR_STATUS */\n#define BASIL_VAL_UNKNOWN\t\"UNKNOWN\"\t/* BASIL_ATR_STATUS */\n#define BASIL_VAL_EMPTY\t\t\"EMPTY\"\t\t/* BASIL_ATR_STATUS */\n#define BASIL_VAL_MIX\t\t\"MIX\"\t\t/* BASIL_ATR_STATUS */\n#define BASIL_VAL_IN\t\t\"IN\"\t\t/* BASIL_ATR_ACTION */\n#define BASIL_VAL_OUT\t\t\"OUT\"\t\t/* BASIL_ATR_ACTION */\n\n/*\n * The following SYSTEM Query (and BASIL 1.7) specific Macro definitions have\n * been copied from the Cray-supplied basil.h header file.\n * ('Role', 'State', 'Page_Size' & 'Page_Count' related Macros already exist.\n * These attributes are common across XML Elements such as Inventory & System.)\n */\n#define BASIL_ELM_SYSTEM        \"System\"        /* BASIL_ELM_RESPONSEDATA */\n#define BASIL_ELM_NODES         \"Nodes\"         /* BASIL_ELM_SYSTEM */\n#define BASIL_ATR_CPCU          \"cpcu\"          /* BASIL_ELM_SYSTEM */\n#define BASIL_ATR_SPEED         \"speed\"         /* BASIL_ELM_NODES */\n#define BASIL_ATR_NUMA_NODES    \"numa_nodes\"    /* BASIL_ELM_NODES */\n#define BASIL_ATR_DIES          \"dies\"          /* BASIL_ELM_NODES */\n#define BASIL_ATR_COMPUTE_UNITS \"compute_units\" /* BASIL_ELM_NODES */\n#define BASIL_ATR_CPUS_PER_CU   \"cpus_per_cu\"   /* BASIL_ELM_NODES */\n#define BASIL_ATR_ACCELS        \"accels\"        /* BASIL_ELM_NODES */\n#define BASIL_ATR_ACCEL_STATE   \"accel_state\"   /* BASIL_ELM_NODES */\n#define BASIL_ATR_NUMA_CFG      \"numa_cfg\"      /* BASIL_ELM_NODES 
*/\n#define BASIL_ATR_HBMSIZE       \"hbm_size_mb\"   /* BASIL_ELM_NODES */\n#define BASIL_ATR_HBM_CFG       \"hbm_cache_pct\" /* BASIL_ELM_NODES */\n#define BASIL_VAL_VERSION_1_7   \"1.7\"           /* BASIL_ATR_PROTOCOL 1.7 */\n#define BASIL_VAL_VERSION       BASIL_VAL_VERSION_1_7\n\n/*\n * The following Macro definitions have been created by Altair to support\n * SYSTEM Query processing.\n */\n#define BASIL_VAL_INTERACTIVE_SYS \"interactive\" /* BASIL_ATR_ROLE */\n#define BASIL_VAL_BATCH_SYS     \"batch\"\t\t/* BASIL_ATR_ROLE */\n#define BASIL_VAL_UP_SYS        \"up\"          \t/* BASIL_ATR_STATE */\n#define BASIL_VAL_DOWN_SYS      \"down\"        \t/* BASIL_ATR_STATE */\n#define BASIL_VAL_UNAVAILABLE_SYS \"unavailable\" /* BASIL_ATR_STATE */\n#define BASIL_VAL_ROUTING_SYS   \"routing\"     \t/* BASIL_ATR_STATE */\n#define BASIL_VAL_SUSPECT_SYS   \"suspect\"     \t/* BASIL_ATR_STATE */\n#define BASIL_VAL_ADMIN_SYS     \"admin\"       \t/* BASIL_ATR_STATE */\n#define BASIL_VAL_EMPTY_SYS\t\"\"              /* BASIL_ATR_NUMA_CFG */\n#define BASIL_VAL_A2A_SYS     \t\"a2a\"           /* BASIL_ATR_NUMA_CFG */\n#define BASIL_VAL_SNC2_SYS    \t\"snc2\"          /* BASIL_ATR_NUMA_CFG */\n#define BASIL_VAL_SNC4_SYS    \t\"snc4\"          /* BASIL_ATR_NUMA_CFG */\n#define BASIL_VAL_HEMI_SYS    \t\"hemi\"          /* BASIL_ATR_NUMA_CFG */\n#define BASIL_VAL_QUAD_SYS    \t\"quad\"          /* BASIL_ATR_NUMA_CFG */\n#define BASIL_VAL_0_SYS       \t\"0\"             /* BASIL_ATR_HBM_CFG */\n#define BASIL_VAL_25_SYS      \t\"25\"            /* BASIL_ATR_HBM_CFG */\n#define BASIL_VAL_50_SYS      \t\"50\"            /* BASIL_ATR_HBM_CFG */\n#define BASIL_VAL_100_SYS     \t\"100\"           /* BASIL_ATR_HBM_CFG */\n\n/* if set, the specified env var is the href (i.e., url) of an xslt file */\n#define BASIL_XSLT_HREF_ENV\t\"BASIL_XSLT_HREF\"\n\n/*\n * BASIL versions.\n * To add a new version, define the BASIL_VAL_VERSION_#_# string, above,\n * then place it at the top of the 
supported_versions array.\n * Define a numeric version value in the enum, below.\n * Add the strcmp to match the string to the enum value in\n * request_start() in handlers.c\n */\n\n/* BASIL supported versions string array */\n\n/* The first version listed is considered the current version. */\nstatic const char *basil_supported_versions[] __attribute__((unused)) = {\n\tBASIL_VAL_VERSION_1_7,\n\tBASIL_VAL_VERSION_1_4,\n\tBASIL_VAL_VERSION_1_3,\n\tBASIL_VAL_VERSION_1_2,\n\tBASIL_VAL_VERSION_1_1,\n\tBASIL_VAL_VERSION_1_0,\n\tNULL\n};\n\n/*\n * BASIL versions -- numerical\n */\ntypedef enum {\n\tbasil_1_0 = 10,\n\tbasil_1_1,\n\tbasil_1_2,\n\tbasil_1_3,\n\tbasil_1_4,\n\t/* basil_1_5 and basil_1_6 are not supported */\n\tbasil_1_7 = 17\n} basil_version_t;\n\n/*\n * For conversion from enum to string array index\n * always make BASIL_VERSION_MAX the current, largest version\n * number from the basil_version_t enum.\n */\n#define BASIL_VERSION_MAX basil_1_7\n#define BASIL_VERSION_MIN basil_1_0\n\n/* BASIL enumerated types */\n\ntypedef enum {\n\tbasil_method_none = 0,\n\tbasil_method_reserve,\n\tbasil_method_confirm,\n\tbasil_method_release,\n\tbasil_method_query,\n\tbasil_method_switch\n} basil_method_t;\n\ntypedef enum {\n\tbasil_query_none = 0,\n\tbasil_query_engine,\n\tbasil_query_inventory,\n\tbasil_query_network,\n\tbasil_query_status,\n\tbasil_query_summary,\n\tbasil_query_system,\n\tbasil_query_topology\n} basil_query_t;\n\ntypedef enum {\n\tbasil_node_arch_none = 0,\n\tbasil_node_arch_x2,\n\tbasil_node_arch_xt,\n\tbasil_node_arch_unknown\n} basil_node_arch_t;\n\ntypedef enum {\n\tbasil_node_state_none = 0,\n\tbasil_node_state_up,\n\tbasil_node_state_down,\n\tbasil_node_state_unavail,\n\tbasil_node_state_route,\n\tbasil_node_state_suspect,\n\tbasil_node_state_admindown,\n\tbasil_node_state_unknown\n} basil_node_state_t;\n\ntypedef enum {\n\tbasil_node_role_none = 0,\n\tbasil_node_role_interactive,\n\tbasil_node_role_batch,\n\tbasil_node_role_unknown\n} 
basil_node_role_t;\n\ntypedef enum {\n\tbasil_accel_none = 0,\n\tbasil_accel_gpu\n} basil_accel_t;\n\ntypedef enum {\n\tbasil_accel_state_none = 0,\n\tbasil_accel_state_up,\n\tbasil_accel_state_down,\n\tbasil_accel_state_unknown\n} basil_accel_state_t;\n\ntypedef enum {\n\tbasil_processor_arch_none = 0,\n\tbasil_processor_cray_x2,\n\tbasil_processor_x86_64,\n\tbasil_processor_aarch64,\n\tbasil_processor_arch_unknown\n} basil_processor_arch_t;\n\ntypedef enum {\n\tbasil_memory_type_none = 0,\n\tbasil_memory_type_os,\n\tbasil_memory_type_hugepage,\n\tbasil_memory_type_virtual\n} basil_memory_type_t;\n\ntypedef enum {\n\tbasil_label_type_none = 0,\n\tbasil_label_type_hard,\n\tbasil_label_type_soft\n} basil_label_type_t;\n\ntypedef enum {\n\tbasil_label_disposition_none = 0,\n\tbasil_label_disposition_attract,\n\tbasil_label_disposition_repel\n} basil_label_disposition_t;\n\ntypedef enum {\n\tbasil_component_state_none = 0,\n\tbasil_component_state_available,\n\tbasil_component_state_unavailable\n} basil_component_state_t;\n\ntypedef enum {\n\tbasil_rsvn_mode_none = 0,\n\tbasil_rsvn_mode_exclusive,\n\tbasil_rsvn_mode_shared\n} basil_rsvn_mode_t;\n\ntypedef enum {\n\tbasil_gpc_mode_none = 0,\n\tbasil_gpc_mode_processor,\n\tbasil_gpc_mode_local,\n\tbasil_gpc_mode_global\n} basil_gpc_mode_t;\n\ntypedef enum {\n\tbasil_application_status_none = 0,\n\tbasil_application_status_invalid,\n\tbasil_application_status_run,\n\tbasil_application_status_suspend,\n\tbasil_application_status_switch,\n\tbasil_application_status_unknown\n} basil_application_status_t;\n\ntypedef enum {\n\tbasil_reservation_status_none = 0,\n\tbasil_reservation_status_empty,\n\tbasil_reservation_status_invalid,\n\tbasil_reservation_status_mix,\n\tbasil_reservation_status_run,\n\tbasil_reservation_status_suspend,\n\tbasil_reservation_status_switch,\n\tbasil_reservation_status_unknown\n} basil_reservation_status_t;\n\ntypedef enum {\n\tbasil_switch_action_none = 
0,\n\tbasil_switch_action_in,\n\tbasil_switch_action_out,\n\tbasil_switch_action_unknown\n} basil_switch_action_t;\n\ntypedef enum {\n\tbasil_switch_status_none = 0,\n\tbasil_switch_status_success,\n\tbasil_switch_status_failure,\n\tbasil_switch_status_invalid,\n\tbasil_switch_status_unknown\n} basil_switch_status_t;\n\n/* Basil data structures common to requests and responses */\n\ntypedef struct basil_label {\n\tchar name[BASIL_STRING_MEDIUM];\n\tbasil_label_type_t type;\n\tbasil_label_disposition_t disposition;\n\tstruct basil_label *next;\n} basil_label_t;\n\ntypedef basil_label_t basil_label_param_t;\n\ntypedef struct basil_accelerator_gpu {\n\tchar *family;\n\tunsigned int memory;\n\tunsigned int clock_mhz;\n} basil_accelerator_gpu_t;\n\n/* BASIL request data structures */\n\ntypedef struct basil_accelerator_param {\n\tbasil_accel_t type;\n\tbasil_accel_state_t state;\n\tunion {\n\t\tbasil_accelerator_gpu_t *gpu;\n\t} data;\n\tstruct basil_accelerator_param *next;\n} basil_accelerator_param_t;\n\ntypedef struct basil_memory_param {\n\tlong size_mb;\n\tbasil_memory_type_t type;\n\tstruct basil_memory_param *next;\n} basil_memory_param_t;\n\ntypedef struct basil_nodelist_param {\n\tchar *nodelist;\n\tstruct basil_nodelist_param *next;\n} basil_nodelist_param_t;\n\ntypedef struct basil_reserve_param {\n\tbasil_node_arch_t arch;\n\tlong width;\n\tlong depth;\n\tlong oscpn;\n\tlong nppn;\n\tlong npps;\n\tlong nspn;\n\tlong nppcu;\n\tlong pstate;\n\tchar pgovernor[BASIL_STRING_SHORT];\n\tbasil_rsvn_mode_t rsvn_mode;\n\tbasil_gpc_mode_t gpc_mode;\n\tchar segments[BASIL_STRING_MEDIUM];\n\tbasil_memory_param_t *memory;\n\tbasil_label_param_t *labels;\n\tbasil_nodelist_param_t *nodelists;\n\tbasil_accelerator_param_t *accelerators;\n\tstruct basil_reserve_param *next;\n} basil_reserve_param_t;\n\ntypedef struct basil_request_reserve {\n\tchar user_name[BASIL_STRING_MEDIUM];\n\tchar account_name[BASIL_STRING_MEDIUM];\n\tchar batch_id[BASIL_STRING_LONG];\n\tlong 
rsvn_id;\t/* debug only */\n\tbasil_reserve_param_t *params;\n} basil_request_reserve_t;\n\ntypedef struct basil_request_confirm {\n\tlong rsvn_id;\n\tunsigned long long pagg_id;\n\tchar job_name[BASIL_STRING_LONG];\n} basil_request_confirm_t;\n\ntypedef struct basil_request_release {\n\tlong rsvn_id;\n\tunsigned long long pagg_id;\n} basil_request_release_t;\n\ntypedef struct basil_request_query_inventory {\n\tunsigned long long changecount;\n\tint doNodeArray;\n\tint doResvArray;\n} basil_request_query_inventory_t;\n\ntypedef struct basil_request_query_status_app {\n\tunsigned long long apid;\n\tstruct basil_request_query_status_app *next;\n} basil_request_query_status_app_t;\n\ntypedef struct basil_request_query_status_res {\n\tlong rsvn_id;\n\tstruct basil_request_query_status_res *next;\n} basil_request_query_status_res_t;\n\ntypedef struct basil_request_query_status {\n\tint doAppArray;\n\tbasil_request_query_status_app_t *application;\n\tint doResvArray;\n\tbasil_request_query_status_res_t *reservation;\n} basil_request_query_status_t;\n\n/*\n * Copied this System Query specific (BASIL 1.7) structure definition\n * (basil_request_query_system_t) from the Cray-supplied basil.h file.\n */\ntypedef struct basil_request_query_system {\n    unsigned long long changecount;\n} basil_request_query_system_t;\n\ntypedef struct basil_topology_filter {\n\tchar name[BASIL_STRING_LONG];\n\tstruct basil_topology_filter *next;\n} basil_topology_filter_t;\n\ntypedef struct basil_request_query_topology {\n\tint executeFilters;\n\tbasil_topology_filter_t *filters;\n} basil_request_query_topology_t;\n\ntypedef struct basil_request_query {\n\tbasil_query_t type;\n\tunion {\n\t\tbasil_request_query_inventory_t *inv;\n\t\tbasil_request_query_status_t *status;\n\t\tbasil_request_query_system_t *system;\n\t\tbasil_request_query_topology_t *topology;\n\t} data;\n} basil_request_query_t;\n\ntypedef struct basil_request_switch_app {\n\tunsigned long long apid;\n\tbasil_switch_action_t 
action;\n\tstruct basil_request_switch_app *next;\n} basil_request_switch_app_t;\n\ntypedef struct basil_request_switch_res {\n\tlong rsvn_id;\n\tbasil_switch_action_t action;\n\tstruct basil_request_switch_res *next;\n} basil_request_switch_res_t;\n\ntypedef struct basil_request_switch {\n\tbasil_request_switch_app_t *application;\n\tbasil_request_switch_res_t *reservation;\n} basil_request_switch_t;\n\ntypedef struct basil_request {\n\tbasil_version_t protocol;\n\tbasil_method_t method;\n\tunion {\n\t\tbasil_request_reserve_t reserve;\n\t\tbasil_request_confirm_t confirm;\n\t\tbasil_request_release_t release;\n\t\tbasil_request_switch_t swtch;\n\t\tbasil_request_query_t query;\n\t} data;\n} basil_request_t;\n\n/* BASIL response data structures */\n\ntypedef struct basil_rsvn_application_cmd {\n\tint width;\n\tint depth;\n\tint nppn;\n\tint memory;\n\tbasil_node_arch_t arch;\n\tchar cmd[BASIL_STRING_MEDIUM];\n\tstruct basil_rsvn_application_cmd *next;\n} basil_rsvn_application_cmd_t;\n\ntypedef struct basil_rsvn_application {\n\tunsigned long long application_id;\n\tunsigned int user_id;\n\tunsigned int group_id;\n\tchar time_stamp[BASIL_STRING_MEDIUM];\n\tbasil_rsvn_application_cmd_t *cmds;\n\tstruct basil_rsvn_application *next;\n} basil_rsvn_application_t;\n\ntypedef struct basil_rsvn {\n\tlong rsvn_id;\n\tchar user_name[BASIL_STRING_MEDIUM];\n\tchar account_name[BASIL_STRING_MEDIUM];\n\tchar batch_id[BASIL_STRING_LONG];\n\tchar time_stamp[BASIL_STRING_MEDIUM];\n\tchar rsvn_mode[BASIL_STRING_MEDIUM];\n\tchar gpc_mode[BASIL_STRING_MEDIUM];\n\tbasil_rsvn_application_t *applications;\n\tstruct basil_rsvn *next;\n} basil_rsvn_t;\n\ntypedef struct basil_memory_allocation {\n\tlong rsvn_id;\n\tlong page_count;\n\tstruct basil_memory_allocation *next;\n} basil_memory_allocation_t;\n\ntypedef struct basil_node_memory {\n\tbasil_memory_type_t type;\n\tlong page_size_kb;\n\tlong page_count;\n\tbasil_memory_allocation_t *allocations;\n\tstruct basil_node_memory *next;\n} 
basil_node_memory_t;\n\ntypedef struct basil_node_computeunit {\n\tint ordinal;\n\tint proc_per_cu_count;\n\tstruct basil_node_computeunit *next;\n} basil_node_computeunit_t;\n\ntypedef struct basil_processor_allocation {\n\tlong rsvn_id;\n\tstruct basil_processor_allocation *next;\n} basil_processor_allocation_t;\n\ntypedef struct basil_node_processor {\n\tint ordinal;\n\tbasil_processor_arch_t arch;\n\tint clock_mhz;\n\tbasil_processor_allocation_t *allocations;\n\tstruct basil_node_processor *next;\n} basil_node_processor_t;\n\ntypedef struct basil_node_segment {\n\tint ordinal;\n\tbasil_node_processor_t *processors;\n\tbasil_node_memory_t *memory;\n\tbasil_label_t *labels;\n\tbasil_node_computeunit_t *computeunits;\n\tstruct basil_node_segment *next;\n} basil_node_segment_t;\n\ntypedef struct basil_node_socket {\n\tint ordinal;\n\tbasil_processor_arch_t arch;\n\tint clock_mhz;\n\tbasil_node_segment_t *segments;\n\tstruct basil_node_socket *next;\n} basil_node_socket_t;\n\ntypedef struct basil_accelerator_allocation {\n\tlong rsvn_id;\n\tstruct basil_accelerator_allocation *next;\n} basil_accelerator_allocation_t;\n\ntypedef struct basil_node_accelerator {\n\tbasil_accel_t type;\n\tbasil_accel_state_t state;\n\tunion {\n\t\tbasil_accelerator_gpu_t *gpu;\n\t} data;\n\tbasil_accelerator_allocation_t *allocations;\n\tstruct basil_node_accelerator *next;\n} basil_node_accelerator_t;\n\ntypedef struct basil_node {\n\tlong node_id;\n\tlong router_id;\n\tbasil_node_arch_t arch;\n\tbasil_node_state_t state;\n\tbasil_node_role_t role;\n\tunsigned int numcpus;\t/* numcores */\n\tlong clock_mhz;\n\tchar name[BASIL_STRING_SHORT];\n\tbasil_node_socket_t *sockets;\n\tbasil_node_segment_t *segments;\n\tbasil_node_accelerator_t *accelerators;\n\tstruct basil_node *next;\n} basil_node_t;\n\ntypedef struct basil_response_query_inventory {\n\tlong long timestamp;\n\tchar mpp_host[BASIL_STRING_LONG];\n\tint node_count;\n\tint node_maxid;\n\tunsigned long long int 
changecount;\n\tunsigned long long int schedcount;\n\tbasil_node_t *nodes;\n\tint rsvn_count;\n\tbasil_rsvn_t *rsvns;\n} basil_response_query_inventory_t;\n\ntypedef struct basil_response_query_engine {\n\tchar *name;\n\tchar *version;\n\tchar *basil_support;\n} basil_response_query_engine_t;\n\ntypedef struct basil_response_query_network {\n} basil_response_query_network_t;\n\ntypedef struct basil_response_query_status_app {\n\tunsigned long long apid;\n\tbasil_application_status_t status;\n\tstruct basil_response_query_status_app *next;\n} basil_response_query_status_app_t;\n\ntypedef struct basil_response_query_status_res {\n\tlong rsvn_id;\n\tbasil_reservation_status_t status;\n\tstruct basil_response_query_status_res *next;\n} basil_response_query_status_res_t;\n\ntypedef struct basil_response_query_status {\n\tbasil_response_query_status_app_t *application;\n\tbasil_response_query_status_res_t *reservation;\n} basil_response_query_status_t;\n\n/*\n * Selectively copied System Query specific (BASIL 1.7) structure definitions\n * (basil_system_element_t, basil_response_query_system_t) from the\n * Cray-supplied basil.h file.\n */\ntypedef struct basil_system_element {\n    char role[BASIL_STRING_SHORT];\n    char state[BASIL_STRING_SHORT];\n    char speed[BASIL_STRING_SHORT];\n    char numa_nodes[BASIL_STRING_SHORT];\n    char n_dies[BASIL_STRING_SHORT];\n    char compute_units[BASIL_STRING_SHORT];\n    char cpus_per_cu[BASIL_STRING_SHORT];\n    char pgszl2[BASIL_STRING_SHORT];\n    char avlmem[BASIL_STRING_SHORT];\n    char accel_name[BASIL_STRING_SHORT];\n    char accel_state[BASIL_STRING_SHORT];\n    char numa_cfg[BASIL_STRING_SHORT];\n    char hbmsize[BASIL_STRING_SHORT];\n    char hbm_cfg[BASIL_STRING_SHORT];\n    char *nidlist;\n    struct basil_system_element *next;\n} basil_system_element_t;\n\ntypedef struct basil_response_query_system {\n    long long timestamp;\n    char mpp_host[BASIL_STRING_LONG];\n    int cpcu_val;\n    basil_system_element_t 
*elements;\n} basil_response_query_system_t;\n\ntypedef struct basil_response_query_topology {\n\tint executeFilters;\n\tbasil_topology_filter_t *filters;\n} basil_response_query_topology_t;\n\ntypedef struct basil_response_query {\n\tbasil_query_t type;\n\tunion {\n\t\tbasil_response_query_inventory_t inventory;\n\t\tbasil_response_query_engine_t engine;\n\t\tbasil_response_query_network_t network;\n\t\tbasil_response_query_status_t status;\n\t\tbasil_response_query_system_t system;\n\t\tbasil_response_query_topology_t topology;\n\t} data;\n} basil_response_query_t;\n\ntypedef struct basil_response_reserve {\n\tlong rsvn_id;\n\tbasil_node_t **nodes;\n\tint *nids;\n\tsize_t nidslen;\n\t/* CPA admin_cookie deprecated as of 1.1 */\n\t/* CPA alloc_cookie deprecated as of 1.1 */\n} basil_response_reserve_t;\n\ntypedef struct basil_response_confirm {\n\tlong rsvn_id;\n\tunsigned long long pagg_id;\n} basil_response_confirm_t;\n\ntypedef struct basil_response_release {\n\tlong rsvn_id;\n\tunsigned int claims;\n} basil_response_release_t;\n\ntypedef struct basil_response_switch_app {\n\tunsigned long long apid;\n\tbasil_switch_status_t status;\n\tstruct basil_response_switch_app *next;\n} basil_response_switch_app_t;\n\ntypedef struct basil_response_switch_res {\n\tlong rsvn_id;\n\tbasil_switch_status_t status;\n\tstruct basil_response_switch_res *next;\n} basil_response_switch_res_t;\n\ntypedef struct basil_response_switch {\n\tbasil_response_switch_app_t *application;\n\tbasil_response_switch_res_t *reservation;\n} basil_response_switch_t;\n\ntypedef struct basil_response {\n\tbasil_version_t protocol;\n\tbasil_method_t method;\n\tunsigned long error_flags;\n\tchar error[BASIL_ERROR_BUFFER_SIZE];\n\tunion {\n\t\tbasil_response_reserve_t reserve;\n\t\tbasil_response_confirm_t confirm;\n\t\tbasil_response_release_t release;\n\t\tbasil_response_query_t query;\n\t\tbasil_response_switch_t swtch;\n\t} data;\n} basil_response_t;\n\n/*\n * Bit assignments for error_flags 
defined in basil_response_t for\n * use in callback functions.\n */\n#define BASIL_ERR_TRANSIENT\t0x00000001UL\n\n#endif /* _BASIL_H */\n\n// clang-format on\n\n"
  },
  {
    "path": "src/include/batch_request.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _BATCH_REQUEST_H\n#define _BATCH_REQUEST_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include \"pbs_share.h\"\n#include \"attribute.h\"\n#include \"libpbs.h\"\n#include \"net_connect.h\"\n\n#define PBS_SIGNAMESZ 16\n#define MAX_JOBS_PER_REPLY 500\n\n/* QueueJob */\nstruct rq_queuejob {\n\tchar rq_destin[PBS_MAXSVRRESVID + 1];\n\tchar rq_jid[PBS_MAXSVRJOBID + 1];\n\tpbs_list_head rq_attr; /* svrattrlist */\n};\n\n/* PostQueueJob */\nstruct rq_postqueuejob {\n\tstruct job *rq_pjob;\n\tchar rq_destin[PBS_MAXSVRRESVID + 1];\n\tchar rq_jid[PBS_MAXSVRJOBID + 1];\n\tpbs_list_head rq_attr; /* svrattrlist */\n};\n\n/* JobCredential */\nstruct rq_jobcred {\n\tint rq_type;\n\tlong rq_size;\n\tchar *rq_data;\n};\n\n/* UserCredential */\nstruct rq_usercred {\n\tchar rq_user[PBS_MAXUSER + 1];\n\tint rq_type;\n\tlong rq_size;\n\tchar *rq_data;\n};\n\n/* Job File */\nstruct rq_jobfile {\n\tint rq_sequence;\n\tint rq_type;\n\tlong rq_size;\n\tchar rq_jobid[PBS_MAXSVRJOBID + 1];\n\tchar *rq_data;\n};\n\n/* Hook File */\nstruct rq_hookfile {\n\tint rq_sequence;\n\tlong rq_size;\n\tchar rq_filename[MAXPATHLEN + 1];\n\tchar *rq_data;\n};\n\n/*\n * job or destination id - used by RdyToCommit, Commit, RerunJob,\n * status ..., and locate job - is just a char *\n *\n * Manage - used by Manager, DeleteJob, ReleaseJob, ModifyJob\n */\nstruct rq_manage 
{\n\tint rq_cmd;\n\tint rq_objtype;\n\tchar rq_objname[PBS_MAXSVRJOBID + 1];\n\tpbs_list_head rq_attr; /* svrattrlist */\n};\n\n/* DeleteJobList */\nstruct rq_deletejoblist {\n\tint rq_count;\n\tint mails;\n\tchar **rq_jobslist;\n\tbool rq_resume;\n\tint jobid_to_resume;\n\tint subjobid_to_resume;\n};\n\n/* Management - used by PBS_BATCH_Manager requests */\nstruct rq_management {\n\tstruct rq_manage rq_manager;\n\tstruct batch_reply *rq_reply;\n\ttime_t rq_time;\n};\n\n/* ModifyVnode - used for node state changes */\nstruct rq_modifyvnode {\n\tstruct pbsnode *rq_vnode_o; /* old/previous vnode state */\n\tstruct pbsnode *rq_vnode;   /* new/current vnode state */\n};\n\n/* HoldJob -  plus preference flag */\nstruct rq_hold {\n\tstruct rq_manage rq_orig;\n\tint rq_hpref;\n};\n\n/* MessageJob */\nstruct rq_message {\n\tint rq_file;\n\tchar rq_jid[PBS_MAXSVRJOBID + 1];\n\tchar *rq_text;\n};\n\n/* RelnodesJob */\nstruct rq_relnodes {\n\tchar rq_jid[PBS_MAXSVRJOBID + 1];\n\tchar *rq_node_list;\n};\n\n/* PySpawn */\nstruct rq_py_spawn {\n\tchar rq_jid[PBS_MAXSVRJOBID + 1];\n\tchar **rq_argv;\n\tchar **rq_envp;\n};\n\n/* MoveJob */\nstruct rq_move {\n\tchar rq_jid[PBS_MAXSVRJOBID + 1];\n\tchar rq_destin[(PBS_MAXSVRRESVID > PBS_MAXDEST ? 
PBS_MAXSVRRESVID : PBS_MAXDEST) + 1];\n};\n\n/* Resource Query/Reserve/Free */\nstruct rq_rescq {\n\tint rq_rhandle;\n\tint rq_num;\n\tchar **rq_list;\n};\n\n/* RunJob */\nstruct rq_runjob {\n\tchar rq_jid[PBS_MAXSVRJOBID + 1];\n\tchar *rq_destin;\n\tunsigned long rq_resch;\n};\n\n/* JobObit */\nstruct rq_jobobit {\n\tstruct job *rq_pjob;\n\tchar rq_jid[PBS_MAXSVRJOBID + 1];\n\tchar *rq_destin;\n};\n\n/* SignalJob */\nstruct rq_signal {\n\tchar rq_jid[PBS_MAXSVRJOBID + 1];\n\tchar rq_signame[PBS_SIGNAMESZ + 1];\n};\n\n/* Status (job, queue, server, hook) */\nstruct rq_status {\n\tchar *rq_id; /* allow multiple (job) ids */\n\tpbs_list_head rq_attr;\n};\n\n/* Select Job  and selstat */\nstruct rq_selstat {\n\tpbs_list_head rq_selattr;\n\tpbs_list_head rq_rtnattr;\n};\n\n/* TrackJob */\nstruct rq_track {\n\tint rq_hopcount;\n\tchar rq_jid[PBS_MAXSVRJOBID + 1];\n\tchar rq_location[PBS_MAXDEST + 1];\n\tchar rq_state[2];\n};\n\n/* RegisterDependentJob */\nstruct rq_register {\n\tchar rq_owner[PBS_MAXUSER + 1];\n\tchar rq_svr[PBS_MAXSERVERNAME + 1];\n\tchar rq_parent[PBS_MAXSVRJOBID + 1];\n\tchar rq_child[PBS_MAXCLTJOBID + 1]; /* need separate entry for */\n\tint rq_dependtype;\t\t    /* from server_name:port   */\n\tint rq_op;\n\tlong rq_cost;\n};\n\n/* Authenticate request */\nstruct rq_auth {\n\tchar rq_auth_method[MAXAUTHNAME + 1];\n\tchar rq_encrypt_method[MAXAUTHNAME + 1];\n\tunsigned int rq_port;\n};\n\n/* Deferred Scheduler Reply */\nstruct rq_defschrpy {\n\tint rq_cmd;\n\tchar *rq_id;\n\tint rq_err;\n\tchar *rq_txt;\n};\n\n/* Copy/Delete Files (Server -> MOM Only) */\n\n#define STDJOBFILE 1\n#define JOBCKPFILE 2\n#define STAGEFILE 3\n\n#define STAGE_DIR_IN 0\n#define STAGE_DIR_OUT 1\n\n#define STAGE_DIRECTION 1 /* mask for setting/extracting direction of file copy from rq_dir */\n#define STAGE_JOBDIR 2\t  /* mask for setting/extracting \"sandbox\" mode flag from rq_dir */\n\nstruct rq_cpyfile {\n\tchar rq_jobid[PBS_MAXSVRJOBID + 1]; /* used in Copy & Delete 
*/\n\tchar rq_owner[PBS_MAXUSER + 1];\t    /* used in Copy only\t   */\n\tchar rq_user[PBS_MAXUSER + 1];\t    /* used in Copy & Delete */\n\tchar rq_group[PBS_MAXGRPN + 1];\t    /* used in Copy only     */\n\tint rq_dir;\t\t\t    /* direction and sandbox flags: used in Copy & Delete */\n\tpbs_list_head rq_pair;\t\t    /* list of rqfpair,  used in Copy & Delete */\n};\n\nstruct rq_cpyfile_cred {\n\tstruct rq_cpyfile rq_copyfile; /* copy/delete info */\n\tint rq_credtype;\t       /* cred type */\n\tsize_t rq_credlen;\t       /* credential length bytes */\n\tchar *rq_pcred;\t\t       /* encrypted credential */\n};\n\nstruct rq_cred {\n\tchar rq_jobid[PBS_MAXSVRJOBID + 1];\n\tchar rq_credid[PBS_MAXUSER + 1]; /* contains id specific for the used security mechanism */\n\tlong rq_cred_validity;\t\t /* validity of provided credentials */\n\tint rq_cred_type;\t\t /* type of credentials like CRED_KRB5, CRED_TLS ... */\n\tchar *rq_cred_data;\t\t /* credentials in base64 */\n\tsize_t rq_cred_size;\t\t /* size of credentials */\n};\n\nstruct rqfpair {\n\tpbs_list_link fp_link;\n\tint fp_flag;\t/* 1 for std[out|err] 2 for stageout */\n\tchar *fp_local; /* used in Copy & Delete */\n\tchar *fp_rmt;\t/* used in Copy only     */\n};\n\nstruct rq_register_sched {\n\tchar *rq_name;\n};\n\n/*\n * ok we now have all the individual request structures defined,\n * so here is the union ...\n */\nstruct batch_request {\n\tpbs_list_link rq_link;\t\t   /* linkage of all requests */\n\tstruct batch_request *rq_parentbr; /* parent request for job array request */\n\tint rq_refct;\t\t\t   /* reference count - child requests */\n\tint rq_type;\t\t\t   /* type of request */\n\tint rq_perm;\t\t\t   /* access permissions for the user */\n\tint rq_fromsvr;\t\t\t   /* true if request from another server */\n\tint rq_conn;\t\t\t   /* socket connection to client/server */\n\tint rq_orgconn;\t\t\t   /* original socket if relayed to MOM */\n\tint rq_extsz;\t\t\t   /* size of \"extension\" data */\n\tlong 
rq_time;\t\t\t   /* time batch request created */\n\tchar rq_user[PBS_MAXUSER + 1];\t   /* user name request is from */\n\tchar rq_host[PBS_MAXHOSTNAME + 1]; /* name of host sending request */\n\tvoid *rq_extra;\t\t\t   /* optional ptr to extra info */\n\tchar *rq_extend;\t\t   /* request \"extension\" data */\n\tint prot;\t\t\t   /* PROT_TCP or PROT_TPP */\n\tint tpp_ack;\t\t\t   /* send acks for this tpp stream? */\n\tchar *tppcmd_msgid;\t\t   /* msg id for tpp commands */\n\tstruct batch_reply rq_reply;\t   /* the reply area for this request */\n\tunion indep_request {\n\t\tstruct rq_register_sched rq_register_sched;\n\t\tstruct rq_auth rq_auth;\n\t\tint rq_connect;\n\t\tstruct rq_queuejob rq_queuejob;\n\t\tstruct rq_postqueuejob rq_postqueuejob;\n\t\tstruct rq_jobcred rq_jobcred;\n\t\tstruct rq_jobfile rq_jobfile;\n\t\tchar rq_rdytocommit[PBS_MAXSVRJOBID + 1];\n\t\tchar rq_commit[PBS_MAXSVRJOBID + 1];\n\t\tstruct rq_manage rq_delete;\n\t\tstruct rq_manage rq_resresvbegin;\n\t\tstruct rq_deletejoblist rq_deletejoblist;\n\t\tstruct rq_hold rq_hold;\n\t\tchar rq_locate[PBS_MAXSVRJOBID + 1];\n\t\tstruct rq_manage rq_manager;\n\t\tstruct rq_management rq_management;\n\t\tstruct rq_modifyvnode rq_modifyvnode;\n\t\tstruct rq_message rq_message;\n\t\tstruct rq_relnodes rq_relnodes;\n\t\tstruct rq_py_spawn rq_py_spawn;\n\t\tstruct rq_manage rq_modify;\n\t\tstruct rq_move rq_move;\n\t\tstruct rq_register rq_register;\n\t\tstruct rq_manage rq_release;\n\t\tchar rq_rerun[PBS_MAXSVRJOBID + 1];\n\t\tstruct rq_rescq rq_rescq;\n\t\tstruct rq_runjob rq_run;\n\t\tstruct rq_jobobit rq_obit;\n\t\tstruct rq_selstat rq_select;\n\t\tint rq_shutdown;\n\t\tstruct rq_signal rq_signal;\n\t\tstruct rq_status rq_status;\n\t\tstruct rq_track rq_track;\n\t\tstruct rq_cpyfile rq_cpyfile;\n\t\tstruct rq_cpyfile_cred rq_cpyfile_cred;\n\t\tint rq_failover;\n\t\tstruct rq_usercred rq_usercred;\n\t\tstruct rq_defschrpy rq_defrpy;\n\t\tstruct rq_hookfile rq_hookfile;\n\t\tstruct rq_preempt 
rq_preempt;\n\t\tstruct rq_cred rq_cred;\n\t} rq_ind;\n};\n\nextern struct batch_request *alloc_br(int);\nextern struct batch_request *copy_br(struct batch_request *);\nextern void reply_ack(struct batch_request *);\nextern void req_reject(int, int, struct batch_request *);\nextern void req_reject_msg(int, int, struct batch_request *, int);\nextern void reply_badattr(int, int, svrattrl *, struct batch_request *);\nextern void reply_badattr_msg(int, int, svrattrl *, struct batch_request *, int);\nextern int reply_text(struct batch_request *, int, char *);\nextern int reply_send(struct batch_request *);\nextern int reply_send_status_part(struct batch_request *);\nextern int reply_jobid(struct batch_request *, char *, int);\nextern int reply_jobid_msg(struct batch_request *, char *, int, int);\nextern void reply_free(struct batch_reply *);\nextern void dispatch_request(int, struct batch_request *);\nextern void free_br(struct batch_request *);\nextern int isode_request_read(int, struct batch_request *);\nextern void req_stat_job(struct batch_request *);\nextern void req_stat_resv(struct batch_request *);\nextern void req_stat_resc(struct batch_request *);\nextern void req_rerunjob(struct batch_request *);\nextern void arrayfree(char **);\n\n#ifdef PBS_NET_H\nextern int authenticate_user(struct batch_request *, conn_t *);\n#endif\n\n#ifndef PBS_MOM\nextern void req_confirmresv(struct batch_request *);\nextern void req_connect(struct batch_request *);\nextern void req_defschedreply(struct batch_request *);\nextern void req_locatejob(struct batch_request *);\nextern void req_manager(struct batch_request *);\nextern void req_movejob(struct batch_request *);\nextern void req_register(struct batch_request *);\nextern void req_releasejob(struct batch_request *);\nextern void req_rescq(struct batch_request *);\nextern void req_runjob(struct batch_request *);\nextern void req_selectjobs(struct batch_request *);\nextern void req_stat_que(struct batch_request *);\nextern void 
req_stat_svr(struct batch_request *);\nextern void req_stat_sched(struct batch_request *);\nextern void req_trackjob(struct batch_request *);\nextern void req_stat_rsc(struct batch_request *);\nextern void req_preemptjobs(struct batch_request *);\n#else\nextern void req_cpyfile(struct batch_request *);\nextern void req_delfile(struct batch_request *);\nextern void req_copy_hookfile(struct batch_request *);\nextern void req_del_hookfile(struct batch_request *);\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\nextern void req_cred(struct batch_request *);\n#endif\n#endif\n\n/* PBS Batch Request Decode/Encode routines */\nextern int decode_DIS_Authenticate(int, struct batch_request *);\nextern int decode_DIS_CopyFiles(int, struct batch_request *);\nextern int decode_DIS_CopyFiles_Cred(int, struct batch_request *);\nextern int decode_DIS_JobCred(int, struct batch_request *);\nextern int decode_DIS_UserCred(int, struct batch_request *);\nextern int decode_DIS_JobFile(int, struct batch_request *);\nextern int decode_DIS_CopyHookFile(int, struct batch_request *);\nextern int decode_DIS_DelHookFile(int, struct batch_request *);\nextern int decode_DIS_Manage(int, struct batch_request *);\nextern int decode_DIS_DelJobList(int, struct batch_request *);\nextern int decode_DIS_MoveJob(int, struct batch_request *);\nextern int decode_DIS_MessageJob(int, struct batch_request *);\nextern int decode_DIS_ModifyResv(int, struct batch_request *);\nextern int decode_DIS_PySpawn(int, struct batch_request *);\nextern int decode_DIS_QueueJob(int, struct batch_request *);\nextern int decode_DIS_Register(int, struct batch_request *);\nextern int decode_DIS_RelnodesJob(int, struct batch_request *);\nextern int decode_DIS_ReqExtend(int, struct batch_request *);\nextern int decode_DIS_ReqHdr(int, struct batch_request *, int *, int *);\nextern int decode_DIS_Rescl(int, struct batch_request *);\nextern int decode_DIS_Rescq(int, struct batch_request *);\nextern int decode_DIS_Run(int, struct 
batch_request *);\nextern int decode_DIS_ShutDown(int, struct batch_request *);\nextern int decode_DIS_SignalJob(int, struct batch_request *);\nextern int decode_DIS_Status(int, struct batch_request *);\nextern int decode_DIS_TrackJob(int, struct batch_request *);\nextern int decode_DIS_replySvr(int, struct batch_reply *);\nextern int decode_DIS_svrattrl(int, pbs_list_head *);\nextern int decode_DIS_Cred(int, struct batch_request *);\nextern int encode_DIS_failover(int, struct batch_request *);\nextern int encode_DIS_CopyFiles(int, struct batch_request *);\nextern int encode_DIS_CopyFiles_Cred(int, struct batch_request *);\nextern int encode_DIS_Register(int, struct batch_request *);\nextern int encode_DIS_TrackJob(int, struct batch_request *);\nextern int encode_DIS_reply(int, struct batch_reply *);\nextern int encode_DIS_replyTPP(int, char *, struct batch_reply *);\nextern int encode_DIS_svrattrl(int, svrattrl *);\nextern int encode_DIS_Cred(int, char *, char *, int, char *, size_t, long);\nextern int dis_request_read(int, struct batch_request *);\nextern int dis_reply_read(int, struct batch_reply *, int);\nextern int decode_DIS_PreemptJobs(int, struct batch_request *);\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _BATCH_REQUEST_H */\n"
  },
  {
    "path": "src/include/bitfield.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _BITFIELD_H\n#define _BITFIELD_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n/*\n * Definition of interface for dealing with arbitrarily large numbers of\n * contiguous bits.  
Size of the bitfield is declared at compile time with\n * the BITFIELD_SIZE #define (default is 256 bits).\n *\n * Macros/inlines all take pointers to a Bitfield, and provide :\n *\n * Macro BITFIELD_WORD(Bitfield *p,int ndx)\n * Macro BITFIELD_SET_WORD(Bitfield *p, int ndx, unsigned long long word)\n *\n * Macro BITFIELD_CLRALL(Bitfield *p)\n * Macro BITFIELD_SETALL(Bitfield *p)\n * \tClear or set all bits in a Bitfield.\n *\n * Macro BITFIELD_SET_LSB(Bitfield *p)\n * Macro BITFIELD_CLR_LSB(Bitfield *p)\n * Macro BITFIELD_SET_MSB(Bitfield *p)\n * Macro BITFIELD_CLR_MSB(Bitfield *p)\n *\tSet the least or most significant bit of a Bitfield.\n *\n * Macro BITFIELD_LSB_ISONE(Bitfield *p)\n * Macro BITFIELD_MSB_ISONE(Bitfield *p)\n * \tEquals non-zero if the least or most significant bit of the Bitfield\n * \t    is set, or zero otherwise.\n *\n * Macro BITFIELD_SETB(Bitfield *p, int bit)\n * Macro BITFIELD_CLRB(Bitfield *p, int bit)\n * Macro BITFIELD_TSTB(Bitfield *p, int bit)\n * \tSet, clear, or test the bit at position 'bit' in the Bitfield '*p'.\n * \tBITFIELD_TSTB() is non-zero if the bit at position 'bit' is set, or\n *\t     zero if it is clear.\n *\n * Inline BITFIELD_IS_ZERO(Bitfield *p)\n * Inline BITFIELD_IS_ONES(Bitfield *p)\n * \tReturn non-zero if the bitfield is composed of all zeros or ones,\n *\t     or zero if the bitfield is non-homogeneous.\n *\n * Inline BITFIELD_IS_NONZERO(Bitfield *p)\n *\tReturns non-zero if the bitfield contains at least one set bit.\n *\n * Inline BITFIELD_NUM_ONES(Bitfield *p)\n *\tReturns number of '1' bits in the bitfield.\n *\n * Inline BITFIELD_MS_ONE(Bitfield *p)\n * Inline BITFIELD_LS_ONE(Bitfield *p)\n *\tReturns bit position number of least or most significant 1-bit in\n *\tthe Bitfield.\n *\n * Inline BITFIELD_EQ(Bitfield *p, Bitfield *q)\n * Inline BITFIELD_NOTEQ(Bitfield *p, Bitfield *q)\n *\tReturn non-zero if Bitfields '*p' and '*q' are (not) equal.\n *\n * Inline BITFIELD_SETM(Bitfield *p, Bitfield *mask)\n * 
Inline BITFIELD_CLRM(Bitfield *p, Bitfield *mask)\n * Inline BITFIELD_ANDM(Bitfield *p, Bitfield *mask)\n * Inline BITFIELD_TSTM(Bitfield *p, Bitfield *mask)\n * Inline BITFIELD_TSTALLM(Bitfield *p, Bitfield *mask)\n *\tApply the specified 'mask' to the given bitfield 'p':\n *\tSETM() sets bits in 'p' for any bits set in 'mask' ('p |= mask').\n *\tCLRM() clears bits in 'p' for any bits set in 'mask' ('p &= ~mask').\n *\tANDM() logical-and's 'mask' into 'p' ('p &= mask');\n *\tTSTM() returns non-zero if *any* bits set in 'mask' are set in 'p'.\n *\tTSTALLM() returns non-zero if *all* bits set in 'mask' are also set\n *\t\tin 'p'.\n *\n * Inline BITFIELD_CPY(Bitfield *p, Bitfield *q)\n * Inline BITFIELD_CPYNOTM(Bitfield *p, Bitfield *q)\n *\tCopy the (inverse of) bitfield 'q' into 'p'.\n *\n * Inline BITFIELD_ORNOTM(Bitfield *p, Bitfield *q)\n * \tSet any bits in 'p' where the corresponding bit in 'q' is clear.\n * \t    (p |= ~q)\n *\n * Inline BITFIELD_SHIFTL(Bitfield *p)\n * Inline BITFIELD_SHIFTR(Bitfield *p)\n * \tShift the bits in Bitfield 'p' one bit to the left or right.\n */\n\n/* The size of bitfields being used.  Default to 256 bits. */\n#ifndef BITFIELD_SIZE\n#define BITFIELD_SIZE 256\n#endif /* !BITFIELD_SIZE */\n\n#include <assert.h>\n#define BITFIELD_BPW ((int) (sizeof(unsigned long long) * 8))\n\n#define BITFIELD_SHIFT(bit) ((bit) / BITFIELD_BPW)\n#define BITFIELD_OFFSET(bit) ((bit) & (BITFIELD_BPW - 1))\n#define BITFIELD_WORDS (BITFIELD_SHIFT(BITFIELD_SIZE))\n\ntypedef struct bitfield {\n\tunsigned long long _bits[BITFIELD_WORDS];\n} Bitfield;\n\n#define INLINE __inline\n\n/* Word-oriented operations on bitfields */\n#define BITFIELD_WORD(p, ndx) \\\n\t(((ndx) >= 0 && (ndx) < BITFIELD_WORDS) ? 
(p)->_bits[ndx] : 0ULL)\n\n#define BITFIELD_SET_WORD(p, ndx, word)                   \\\n\t{                                                 \\\n\t\tif ((ndx) >= 0 && (ndx) < BITFIELD_WORDS) \\\n\t\t\t(p)->_bits[ndx] = word;           \\\n\t}\n\n/* Operate on least significant bit of a bitfield. */\n\n#define BITFIELD_LSB_ISONE(p) \\\n\t((p)->_bits[0] & 1ULL)\n\n#define BITFIELD_SET_LSB(p) \\\n\t((p)->_bits[0] |= 1ULL)\n\n#define BITFIELD_CLR_LSB(p) \\\n\t((p)->_bits[0] &= ~(1ULL))\n\n/* Operate on most significant bit of a bitfield. */\n\n#define BITFIELD_MSB_ISONE(p) \\\n\t((p)->_bits[BITFIELD_SHIFT(BITFIELD_SIZE - 1)] & (1ULL << (BITFIELD_BPW - 1)))\n\n#define BITFIELD_SET_MSB(p) \\\n\t((p)->_bits[BITFIELD_SHIFT(BITFIELD_SIZE - 1)] |= (1ULL << (BITFIELD_BPW - 1)))\n\n#define BITFIELD_CLR_MSB(p) \\\n\t((p)->_bits[BITFIELD_SHIFT(BITFIELD_SIZE - 1)] &= ~(1ULL << (BITFIELD_BPW - 1)))\n\n/* Operate on arbitrary bits within the bitfield. */\n\n#define BITFIELD_SETB(p, bit) (((bit) >= 0 && (bit) < BITFIELD_SIZE) ? (p)->_bits[BITFIELD_SHIFT(bit)] |= (1ULL << BITFIELD_OFFSET(bit)) : 0)\n\n#define BITFIELD_CLRB(p, bit) (((bit) >= 0 && (bit) < BITFIELD_SIZE) ? (p)->_bits[BITFIELD_SHIFT(bit)] &= ~(1ULL << BITFIELD_OFFSET(bit)) : 0)\n\n#define BITFIELD_TSTB(p, bit) (((bit) >= 0 && (bit) < BITFIELD_SIZE) ? ((p)->_bits[BITFIELD_SHIFT(bit)] & (1ULL << BITFIELD_OFFSET(bit))) : 0)\n\n/* Clear or set all the bits in the bitfield. 
*/\n\n#define BITFIELD_CLRALL(p)                           \\\n\t{                                            \\\n\t\tint w;                               \\\n\t\tassert(p != NULL);                   \\\n\t\tfor (w = 0; w < BITFIELD_WORDS; w++) \\\n\t\t\t(p)->_bits[w] = 0ULL;        \\\n\t}\n\n#define BITFIELD_SETALL(p)                           \\\n\t{                                            \\\n\t\tint w;                               \\\n\t\tassert(p != NULL);                   \\\n\t\tfor (w = 0; w < BITFIELD_WORDS; w++) \\\n\t\t\t(p)->_bits[w] = ~(0ULL);     \\\n\t}\n\n/* Comparison functions for two bitfields. */\n\nINLINE int\nBITFIELD_IS_ZERO(Bitfield *p)\n{\n\tint w;\n\tassert(p != NULL);\n\tfor (w = 0; w < BITFIELD_WORDS; w++)\n\t\tif ((p)->_bits[w])\n\t\t\treturn 0;\n\treturn 1;\n}\n\nINLINE int\nBITFIELD_IS_ONES(Bitfield *p)\n{\n\tint w;\n\tassert(p != NULL);\n\tfor (w = 0; w < BITFIELD_WORDS; w++)\n\t\tif ((p)->_bits[w] != ~(0ULL))\n\t\t\treturn 0;\n\treturn 1;\n}\n\nINLINE int\nBITFIELD_IS_NONZERO(Bitfield *p)\n{\n\tint w;\n\tassert(p != NULL);\n\tfor (w = 0; w < BITFIELD_WORDS; w++)\n\t\tif ((p)->_bits[w])\n\t\t\treturn 1;\n\treturn 0;\n}\n\nINLINE int\nBITFIELD_NUM_ONES(Bitfield *p)\n{\n\tint w, cnt;\n\tunsigned long long n;\n\tassert(p != NULL);\n\n\tcnt = 0;\n\tfor (w = 0; w < BITFIELD_WORDS; w++)\n\t\tfor (n = (p)->_bits[w]; n != 0ULL; cnt++)\n\t\t\tn &= (n - 1);\n\n\treturn (cnt);\n}\n\nINLINE int\nBITFIELD_LS_ONE(Bitfield *p)\n{\n\tint w, bit;\n\tunsigned long long n, x;\n\tassert(p != NULL);\n\n\tbit = 0;\n\tfor (w = 0; w < BITFIELD_WORDS; w++) {\n\t\tn = (p)->_bits[w];\n\n\t\t/* Look for the first non-zero word. */\n\t\tif (n != 0ULL)\n\t\t\tbreak;\n\n\t\tbit += BITFIELD_BPW;\n\t}\n\n\t/* No non-zero words found in the bitfield. */\n\tif (w == BITFIELD_WORDS)\n\t\treturn (-1);\n\n\t/* Slide a single bit left, looking for the non-zero bit. 
*/\n\tfor (x = 1ULL; !(n & x); bit++)\n\t\tx <<= 1;\n\n\treturn (bit);\n}\n\nINLINE int\nBITFIELD_MS_ONE(Bitfield *p)\n{\n\tint w, bit;\n\tunsigned long long n, x;\n\tassert(p != NULL);\n\n\tbit = BITFIELD_SIZE - 1;\n\tfor (w = BITFIELD_WORDS - 1; w >= 0; w--) {\n\t\tn = (p)->_bits[w];\n\n\t\t/* Look for the first non-zero word. */\n\t\tif (n != 0ULL)\n\t\t\tbreak;\n\n\t\tbit -= BITFIELD_BPW;\n\t}\n\n\t/* No non-zero words found in the bitfield. */\n\tif (w < 0)\n\t\treturn (-1);\n\n\t/* Slide a single bit right, looking for the non-zero bit. */\n\tfor (x = 1ULL << (BITFIELD_BPW - 1); !(n & x); bit--)\n\t\tx >>= 1;\n\n\treturn (bit);\n}\n\nINLINE int\nBITFIELD_EQ(Bitfield *p, Bitfield *q)\n{\n\tint w;\n\tassert(p != NULL && q != NULL);\n\tfor (w = 0; w < BITFIELD_WORDS; w++)\n\t\tif ((p)->_bits[w] != (q)->_bits[w])\n\t\t\treturn 0;\n\treturn 1;\n}\n\nINLINE int\nBITFIELD_NOTEQ(Bitfield *p, Bitfield *q)\n{\n\tint w;\n\tassert(p != NULL && q != NULL);\n\tfor (w = 0; w < BITFIELD_WORDS; w++)\n\t\tif ((p)->_bits[w] != (q)->_bits[w])\n\t\t\treturn 1;\n\treturn 0;\n}\n\n/* Logical manipulation functions for applying one bitfield to another. 
*/\n\nINLINE int\nBITFIELD_SETM(Bitfield *p, Bitfield *mask)\n{\n\tint w;\n\tassert(p != NULL && mask != NULL);\n\tfor (w = 0; w < BITFIELD_WORDS; w++)\n\t\t(p)->_bits[w] |= (mask)->_bits[w];\n\treturn 0;\n}\n\nINLINE int\nBITFIELD_CLRM(Bitfield *p, Bitfield *mask)\n{\n\tint w;\n\tassert(p != NULL && mask != NULL);\n\tfor (w = 0; w < BITFIELD_WORDS; w++)\n\t\t(p)->_bits[w] &= ~((mask)->_bits[w]);\n\treturn 0;\n}\n\nINLINE int\nBITFIELD_ANDM(Bitfield *p, Bitfield *mask)\n{\n\tint w;\n\tassert(p != NULL && mask != NULL);\n\tfor (w = 0; w < BITFIELD_WORDS; w++)\n\t\t(p)->_bits[w] &= (mask)->_bits[w];\n\treturn 0;\n}\n\nINLINE int\nBITFIELD_TSTM(Bitfield *p, Bitfield *mask)\n{\n\tint w;\n\tassert(p != NULL && mask != NULL);\n\tfor (w = 0; w < BITFIELD_WORDS; w++)\n\t\tif ((p)->_bits[w] & (mask)->_bits[w])\n\t\t\treturn 1;\n\treturn 0;\n}\n\nINLINE int\nBITFIELD_TSTALLM(Bitfield *p, Bitfield *mask)\n{\n\tint w;\n\tassert(p != NULL && mask != NULL);\n\tfor (w = 0; w < BITFIELD_WORDS; w++)\n\t\tif (((p)->_bits[w] & (mask)->_bits[w]) != (mask)->_bits[w])\n\t\t\treturn 0;\n\treturn 1;\n}\n\nINLINE int\nBITFIELD_CPY(Bitfield *p, Bitfield *q)\n{\n\tint w;\n\tassert(p != NULL && q != NULL);\n\tfor (w = 0; w < BITFIELD_WORDS; w++)\n\t\t(p)->_bits[w] = (q)->_bits[w];\n\treturn 0;\n}\n\nINLINE int\nBITFIELD_CPYNOTM(Bitfield *p, Bitfield *q)\n{\n\tint w;\n\tassert(p != NULL && q != NULL);\n\tfor (w = 0; w < BITFIELD_WORDS; w++)\n\t\t(p)->_bits[w] = ~((q)->_bits[w]);\n\treturn 0;\n}\n\nINLINE int\nBITFIELD_ORNOTM(Bitfield *p, Bitfield *q)\n{\n\tint w;\n\tassert(p != NULL && q != NULL);\n\tfor (w = 0; w < BITFIELD_WORDS; w++)\n\t\t(p)->_bits[w] |= ~((q)->_bits[w]);\n\treturn 0;\n}\n\n/* Logical shift left and shift right for bitfield. */\n\nINLINE int\nBITFIELD_SHIFTL(Bitfield *p)\n{\n\tint w, upper;\n\tassert(p != NULL);\n\n\t/* Work downward so each word's carry is read before that word is shifted. */\n\tfor (w = BITFIELD_WORDS - 1; w > 0; w--) {\n\t\tupper = (p->_bits[w - 1] & (1ULL << (BITFIELD_BPW - 1))) ? 1 : 0;\n\t\tp->_bits[w] = (p->_bits[w] << 1) | upper;\n\t}\n\tp->_bits[0] <<= 1;\n\treturn 0;\n}\n\nINLINE int\nBITFIELD_SHIFTR(Bitfield *p)\n{\n\tint w, lower;\n\tassert(p != NULL);\n\n\t/* Work upward so each word's carry is read before that word is shifted. */\n\tfor (w = 0; w < BITFIELD_WORDS - 1; w++) {\n\t\tlower = (int) (p->_bits[w + 1] & 1ULL);\n\t\tp->_bits[w] = (p->_bits[w] >> 1) | (lower ? (1ULL << (BITFIELD_BPW - 1)) : 0);\n\t}\n\tp->_bits[BITFIELD_WORDS - 1] >>= 1;\n\treturn 0;\n}\n#ifdef __cplusplus\n}\n#endif\n#endif /* _BITFIELD_H */\n"
  },
  {
    "path": "src/include/cmds.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * cmds.h\n *\n *\tHeader file for the PBS utilities.\n */\n\n#ifndef _CMDS_H\n#define _CMDS_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <ctype.h>\n#include <string.h>\n#include <time.h>\n\n#include \"pbs_error.h\"\n#include \"libpbs.h\"\n#include \"libsec.h\"\n\n/* Needed for qdel and pbs_deljoblist */\n#define DELJOB_DFLT_NUMIDS 1000\n\ntypedef struct svr_jobid_list svr_jobid_list_t;\nstruct svr_jobid_list {\n\tint max_sz;\n\tint total_jobs;\n\tint svr_fd;\n\tchar svrname[PBS_MAXSERVERNAME + 1];\n\tchar **jobids;\n\tsvr_jobid_list_t *next;\n};\n\n#ifndef TRUE\n#define TRUE 1\n#define FALSE 0\n#endif\n\n#define notNULL(x) (((x) != NULL) && (strlen(x) > (size_t) 0))\n#define NULLstr(x) (((x) == NULL) || (strlen(x) == 0))\n\n#define MAX_LINE_LEN 4095\n#define LARGE_BUF_LEN 4096\n#define MAXSERVERNAME (PBS_MAXSERVERNAME + PBS_MAXPORTNUM + 2)\n#define PBS_DEPEND_LEN 2040\n\n/* for calling pbs_parse_quote:  to accept whitespace as data or separators */\n#define QMGR_ALLOW_WHITE_IN_VALUE 1\n#define QMGR_NO_WHITE_IN_VALUE 0\n\n#define QDEL_MAIL_SUPPRESS 1000\n\n#define PBS_JOBCOOKIE \"PBS_JOBCOOKIE\"\n#define PBS_INTERACTIVE_COOKIE \"PBS_INTERACTIVE_COOKIE\"\n\nextern int optind, opterr;\nextern char *optarg;\n\nextern int parse_at_item(char *, char *, char *);\nextern int parse_jobid(char *, char **, char **, char **);\nextern int parse_stage_name(char *, char *, char *, char *);\nextern void prt_error(char *, char *, int);\nextern int check_max_job_sequence_id(struct batch_status *);\nextern void set_attr_error_exit(struct attrl **, char *, char *);\nextern void set_attr_resc_error_exit(struct attrl **, char *, char *, char *);\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _CMDS_H */\n"
  },
  {
    "path": "src/include/credential.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _CREDENTIAL_H\n#define _CREDENTIAL_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n/*\n * credential.h - header file for default authentication system provided\n *\twith PBS.\n *\n * Other Required Header Files:\n *\t\"portability.h\"\n *\t\"libpbs.h\"\n *\n */\n\n/*\n * a full ticket (credential) as passed from the client to the server\n * is of the following size: 8 for the pbs_iff key + 8 for the timestamp +\n * space for user and host name rounded up to multiple of 8 which is the\n * sub-credential size\n */\n#define PBS_KEY_SIZE 8\n#define PBS_TIMESTAMP_SZ 8\n#define PBS_SUBCRED_SIZE ((PBS_MAXUSER + PBS_MAXHOSTNAME + 7) / 8 * 8)\n#define PBS_SEALED_SIZE (PBS_SUBCRED_SIZE + PBS_TIMESTAMP_SZ)\n#define PBS_TICKET_SIZE (PBS_KEY_SIZE + PBS_SEALED_SIZE)\n\n#define CREDENTIAL_LIFETIME 1800\n#define CREDENTIAL_TIME_DELTA 300\n#define ENV_AUTH_KEY \"PBS_AUTH_KEY\"\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _CREDENTIAL_H */\n"
  },
  {
    "path": "src/include/dedup_jobids.h",
    "content": "/*\n * Copyright (C) 1994-2023 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _DEDUP_JOBIDS_H\n#define _DEDUP_JOBIDS_H\n\n#include <ctype.h>\n#include \"list_link.h\"\n\nstruct array_job_range_list {\n\tchar *range;\n\tstruct array_job_range_list *next;\n};\ntypedef struct array_job_range_list array_job_range_list;\n\nint is_array_job(char *id);\narray_job_range_list *new_job_range(void);\nvoid free_array_job_range_list(array_job_range_list *head);\nint dedup_jobids(char **jobids, int *numjids, char *malloc_track);\n\n#endif /* _DEDUP_JOBIDS_H */\n"
  },
  {
    "path": "src/include/dis.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _DIS_H\n#define _DIS_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include <string.h>\n#include <limits.h>\n#include <float.h>\n#include \"Long.h\"\n#include \"auth.h\"\n\n#ifndef TRUE\n#define TRUE 1\n#define FALSE 0\n#endif\n\n/*\n * Integer function return values from Data-is-Strings reading calls\n */\n\n#define DIS_SUCCESS 0\t/* No error */\n#define DIS_OVERFLOW 1\t/* Value too large to convert */\n#define DIS_HUGEVAL 2\t/* Tried to write floating point infinity */\n#define DIS_BADSIGN 3\t/* Negative sign on an unsigned datum */\n#define DIS_LEADZRO 4\t/* Input count or value has leading zero */\n#define DIS_NONDIGIT 5\t/* Non-digit found where a digit was expected */\n#define DIS_NULLSTR 6\t/* String read has an embedded ASCII NUL */\n#define DIS_EOD 7\t/* Premature end of message */\n#define DIS_NOMALLOC 8\t/* Unable to malloc space for string */\n#define DIS_PROTO 9\t/* Supporting protocol failure */\n#define DIS_NOCOMMIT 10 /* Protocol failure in commit */\n#define DIS_EOF 11\t/* End of File */\n\nunsigned long disrul(int stream, int *retval);\n\n/*#if UINT_MAX == ULONG_MAX*/\n#if SIZEOF_UNSIGNED == SIZEOF_LONG\n#define disrui(stream, retval) (unsigned) disrul(stream, (retval))\n#else\nunsigned disrui(int stream, int *retval);\n#endif\n\n/*#if USHRT_MAX == UINT_MAX*/\n#if SIZEOF_UNSIGNED_SHORT == 
SIZEOF_UNSIGNED_INT\n#define disrus(stream, retval) (unsigned short) disrui(stream, (retval))\n#else\nunsigned short disrus(int stream, int *retval);\n#endif\n\n/*#if UCHAR_MAX == USHRT_MAX*/\n#if SIZEOF_UNSIGNED_CHAR == SIZEOF_UNSIGNED_SHORT\n#define disruc(stream, retval) (unsigned char) disrus(stream, (retval))\n#else\nunsigned char disruc(int stream, int *retval);\n#endif\n\nlong disrsl(int stream, int *retval);\n/*#if INT_MIN == LONG_MIN && INT_MAX == LONG_MAX*/\n#if SIZEOF_INT == SIZEOF_LONG\n#define disrsi(stream, retval) (int) disrsl(stream, (retval))\n#else\nint disrsi(int stream, int *retval);\n#endif\n\n/*#if SHRT_MIN == INT_MIN && SHRT_MAX == INT_MAX*/\n#if SIZEOF_SHORT == SIZEOF_INT\n#define disrss(stream, retval) (short) disrsi(stream, (retval))\n#else\nshort disrss(int stream, int *retval);\n#endif\n\n/*#if CHAR_MIN == SHRT_MIN && CHAR_MAX == SHRT_MAX*/\n#if SIZEOF_SIGNED_CHAR == SIZEOF_SHORT\n#define disrsc(stream, retval) (signed char) disrss(stream, (retval))\n#else\nsigned char disrsc(int stream, int *retval);\n#endif\n\n/*#if CHAR_MIN, i.e. 
if chars are signed*/\n/* also, flip the order of statements */\n#if __CHAR_UNSIGNED__\n#define disrc(stream, retval) (char) disruc(stream, (retval))\n#else\n#define disrc(stream, retval) (char) disrsc(stream, (retval))\n#endif\n\nchar *disrcs(int stream, size_t *nchars, int *retval);\nint disrfcs(int stream, size_t *nchars, size_t achars, char *value);\nchar *disrst(int stream, int *retval);\nint disrfst(int stream, size_t achars, char *value);\n\n/*\n * some compilers do not like long doubles, if long double is the same\n * as a double, just use a double.\n */\n#if SIZEOF_DOUBLE == SIZEOF_LONG_DOUBLE\ntypedef double dis_long_double_t;\n#else\ntypedef long double dis_long_double_t;\n#endif\n\ndis_long_double_t disrl(int stream, int *retval);\n/*#if DBL_MANT_DIG == LDBL_MANT_DIG && DBL_MAX_EXP == LDBL_MAX_EXP*/\n#if SIZEOF_DOUBLE == SIZEOF_LONG_DOUBLE\n#define disrd(stream, retval) (double) disrl(stream, (retval))\n#else\ndouble disrd(int stream, int *retval);\n#endif\n\n/*#if FLT_MANT_DIG == DBL_MANT_DIG && FLT_MAX_EXP == DBL_MAX_EXP*/\n#if SIZEOF_FLOAT == SIZEOF_DOUBLE\n#define disrf(stream, retval) (float) disrd(stream, (retval))\n#else\nfloat disrf(int stream, int *retval);\n#endif\n\nint diswul(int stream, unsigned long value);\n/*#if UINT_MAX == ULONG_MAX*/\n#if SIZEOF_UNSIGNED_INT == SIZEOF_UNSIGNED_LONG\n#define diswui(stream, value) diswul(stream, (unsigned long) (value))\n#else\nint diswui(int stream, unsigned value);\n#endif\n#define diswus(stream, value) diswui(stream, (unsigned) (value))\n#define diswuc(stream, value) diswui(stream, (unsigned) (value))\n\nint diswsl(int stream, long value);\n/*#if INT_MIN == LONG_MIN && INT_MAX == LONG_MAX*/\n#if SIZEOF_INT == SIZEOF_LONG\n#define diswsi(stream, value) diswsl(stream, (long) (value))\n#else\nint diswsi(int stream, int value);\n#endif\n#define diswss(stream, value) diswsi(stream, (int) (value))\n#define diswsc(stream, value) diswsi(stream, (int) (value))\n\n/*#if CHAR_MIN*/\n#if __CHAR_UNSIGNED__\n#define diswc(stream, value) diswui(stream, (unsigned) (value))\n#else\n#define diswc(stream, value) diswsi(stream, (int) (value))\n#endif\n\nint diswcs(int stream, const char *value, size_t nchars);\n#define diswst(stream, value) diswcs(stream, value, strlen(value))\n\nint diswl_(int stream, dis_long_double_t value, unsigned int ndigs);\n#define diswl(stream, value) diswl_(stream, (value), LDBL_DIG)\n#define diswd(stream, value) diswl_(stream, (dis_long_double_t) (value), DBL_DIG)\n/*#if FLT_MANT_DIG == DBL_MANT_DIG || DBL_MANT_DIG == LDBL_MANT_DIG*/\n#if SIZEOF_FLOAT == SIZEOF_DOUBLE\n#define diswf(stream, value) diswl_(stream, (dis_long_double_t) (value), FLT_DIG)\n#else\nint diswf(int stream, double value);\n#endif\n\nint diswull(int stream, u_Long value);\nu_Long disrull(int stream, int *retval);\n\nextern const char *dis_emsg[];\n\n/* the following routines set/control DIS over tcp */\nextern void DIS_tcp_funcs();\n\n#define PBS_DIS_BUFSZ 8192\n\n#define DIS_WRITE_BUF 0\n#define DIS_READ_BUF 1\n\ntypedef struct pbs_dis_buf {\n\tsize_t tdis_bufsize;\n\tsize_t tdis_len;\n\tchar *tdis_pos;\n\tchar *tdis_data;\n} pbs_dis_buf_t;\n\ntypedef struct pbs_tcp_auth_data {\n\tint ctx_status;\n\tvoid *ctx;\n\tauth_def_t *def;\n} pbs_tcp_auth_data_t;\n\ntypedef struct pbs_tcp_chan {\n\tpbs_dis_buf_t readbuf;\n\tpbs_dis_buf_t writebuf;\n\tint is_old_client; /* This is just for backward compatibility */\n\tpbs_tcp_auth_data_t auths[2];\n} pbs_tcp_chan_t;\n\nvoid dis_clear_buf(pbs_dis_buf_t *);\nvoid dis_reset_buf(int, int);\nint disr_skip(int, size_t);\nint dis_getc(int);\nint dis_gets(int, char *, size_t);\nint dis_puts(int, const char *, size_t);\nint dis_flush(int);\nvoid dis_setup_chan(int, pbs_tcp_chan_t *(*) (int) );\nvoid dis_destroy_chan(int);\n\nvoid transport_chan_set_ctx_status(int, int, int);\nint transport_chan_get_ctx_status(int, int);\nvoid transport_chan_set_authctx(int, void *, int);\nvoid *transport_chan_get_authctx(int, int);\nvoid transport_chan_set_authdef(int, auth_def_t *, int);\nauth_def_t *transport_chan_get_authdef(int, int);\nint transport_send_pkt(int, int, void *, size_t);\nint transport_recv_pkt(int, int *, void **, size_t *);\n\nextern pbs_tcp_chan_t *(*pfn_transport_get_chan)(int);\nextern int (*pfn_transport_set_chan)(int, pbs_tcp_chan_t *);\nextern int (*pfn_transport_recv)(int, void *, int);\nextern int (*pfn_transport_send)(int, void *, int);\n\n#define transport_recv(x, y, z) (*pfn_transport_recv)(x, y, z)\n#define transport_send(x, y, z) (*pfn_transport_send)(x, y, z)\n#define transport_get_chan(x) (*pfn_transport_get_chan)(x)\n#define transport_set_chan(x, y) (*pfn_transport_set_chan)(x, y)\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _DIS_H */\n"
  },
  {
    "path": "src/include/grunt.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _GRUNT_H\n#define _GRUNT_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n/* structure used by grunt syntax parser located in libpbs.h */\ntypedef struct key_value_pair {\n\tchar *kv_keyw;\n\tchar *kv_val;\n} key_value_pair;\n\n#define KVP_SIZE 50\n#define MPIPROCS \"mpiprocs\"\n#define OMPTHREADS \"ompthreads\"\n\nextern struct resc_sum *svr_resc_sum;\nextern int parse_chunk(char *str, int *nchk, int *nl, struct key_value_pair **kv, int *dflt);\nextern int parse_chunk_r(char *str, int *nchk, int *pnelem, int *nkve, struct key_value_pair **pkv, int *dflt);\nextern int parse_chunk_make_room(int inuse, int extra, struct key_value_pair **rtn);\nextern int parse_chunk_make_room_r(int inuse, int extra, int *pnkve, struct key_value_pair **ppkve);\nextern int parse_node_resc(char *str, char **nodep, int *nl, struct key_value_pair **kv);\nextern int parse_node_resc_r(char *str, char **nodep, int *pnelem, int *nlkv, struct key_value_pair **kv);\nextern char *parse_plus_spec(char *selstr, int *rc);\nextern char *parse_plus_spec_r(char *selstr, char **last, int *hp);\nextern int parse_resc_equal_string(char *start, char **name, char **value, char **last);\nchar *get_first_vnode(char *execvnode);\n#ifdef __cplusplus\n}\n#endif\n#endif /* _GRUNT_H */\n"
  },
  {
    "path": "src/include/hook.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _HOOK_H\n#define _HOOK_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n/*\n * hook.h - structure definitions for hook objects\n *\n * Include Files Required:\n *\t<sys/types.h>\n *\t\"list_link.h\"\n *\t\"batch_request.h\"\n *\t\"pbs_ifl.h\"\n */\n#ifndef TRUE\n#define TRUE 1\n#endif\n\n#ifndef FALSE\n#define FALSE 0\n#endif\n\n#include \"pbs_python.h\"\n\nenum hook_type {\n\tHOOK_SITE,\n\tHOOK_PBS\n};\ntypedef enum hook_type hook_type;\n\nenum hook_user {\n\tHOOK_PBSADMIN,\n\tHOOK_PBSUSER\n};\ntypedef enum hook_user hook_user;\n\n#define HOOK_FAIL_ACTION_NONE 0x01\n#define HOOK_FAIL_ACTION_OFFLINE_VNODES 0x02\n#define HOOK_FAIL_ACTION_CLEAR_VNODES 0x04\n#define HOOK_FAIL_ACTION_SCHEDULER_RESTART_CYCLE 0x08\n\n/* server hooks */\n\n#define HOOK_EVENT_QUEUEJOB 0x01\n#define HOOK_EVENT_MODIFYJOB 0x02\n#define HOOK_EVENT_RESVSUB 0x04\n#define HOOK_EVENT_MOVEJOB 0x08\n#define HOOK_EVENT_RUNJOB 0x10\n#define HOOK_EVENT_JOBOBIT 0x800000\n#define HOOK_EVENT_PROVISION 0x20\n#define HOOK_EVENT_PERIODIC 0x8000\n#define HOOK_EVENT_RESV_END 0x10000\n#define HOOK_EVENT_MANAGEMENT 0x200000\n#define HOOK_EVENT_MODIFYVNODE 0x400000\n#define HOOK_EVENT_RESV_BEGIN 0x1000000\n#define HOOK_EVENT_RESV_CONFIRM 0x2000000\n#define HOOK_EVENT_MODIFYRESV 0x4000000\n#define HOOK_EVENT_POSTQUEUEJOB 0x8000000\n\n/* mom hooks */\n#define HOOK_EVENT_EXECJOB_BEGIN 
0x40\n#define HOOK_EVENT_EXECJOB_PROLOGUE 0x80\n#define HOOK_EVENT_EXECJOB_EPILOGUE 0x100\n#define HOOK_EVENT_EXECJOB_END 0x200\n#define HOOK_EVENT_EXECJOB_PRETERM 0x400\n#define HOOK_EVENT_EXECJOB_LAUNCH 0x800\n#define HOOK_EVENT_EXECHOST_PERIODIC 0x1000\n#define HOOK_EVENT_EXECHOST_STARTUP 0x2000\n#define HOOK_EVENT_EXECJOB_ATTACH 0x4000\n#define HOOK_EVENT_EXECJOB_RESIZE 0x20000\n#define HOOK_EVENT_EXECJOB_ABORT 0x40000\n#define HOOK_EVENT_EXECJOB_POSTSUSPEND 0x80000\n#define HOOK_EVENT_EXECJOB_PRERESUME 0x100000\n\n#define MOM_EVENTS (HOOK_EVENT_EXECJOB_BEGIN | HOOK_EVENT_EXECJOB_PROLOGUE | HOOK_EVENT_EXECJOB_EPILOGUE | HOOK_EVENT_EXECJOB_END | HOOK_EVENT_EXECJOB_PRETERM | HOOK_EVENT_EXECHOST_PERIODIC | HOOK_EVENT_EXECJOB_LAUNCH | HOOK_EVENT_EXECHOST_STARTUP | HOOK_EVENT_EXECJOB_ATTACH | HOOK_EVENT_EXECJOB_RESIZE | HOOK_EVENT_EXECJOB_ABORT | HOOK_EVENT_EXECJOB_POSTSUSPEND | HOOK_EVENT_EXECJOB_PRERESUME)\n#define USER_MOM_EVENTS (HOOK_EVENT_EXECJOB_PROLOGUE | HOOK_EVENT_EXECJOB_EPILOGUE | HOOK_EVENT_EXECJOB_PRETERM)\n#define FAIL_ACTION_EVENTS (HOOK_EVENT_EXECJOB_BEGIN | HOOK_EVENT_EXECHOST_STARTUP | HOOK_EVENT_EXECJOB_PROLOGUE)\nstruct hook {\n\tchar *hook_name;\t  /* unique name of the hook */\n\thook_type type;\t\t  /* site-defined or pbs builtin */\n\tint enabled;\t\t  /* TRUE or FALSE */\n\tint debug;\t\t  /* TRUE or FALSE */\n\thook_user user;\t\t  /* who executes the hook */\n\tunsigned int fail_action; /* what to do when hook fails unexpectedly */\n\tunsigned int event;\t  /* event  flag */\n\tshort order;\t\t  /* -1000..1000 */\n\t/* -1000..0 for pbs hooks */\n\t/* 1..1000 for site hooks */\n\tint alarm;    /* number of seconds */\n\tvoid *script; /* actual script content in some fmt */\n\n\tint freq; /* # of seconds in between calls */\n\t/* install hook */\n\tint pending_delete;\t\t     /* set to 1 if a mom hook and pending */\n\tunsigned long hook_control_checksum; /* checksum for .HK file */\n\tunsigned long hook_script_checksum;  /* checksum for 
.PY file */\n\tunsigned long hook_config_checksum;  /* checksum for .CF file */\n\t/* deletion */\n\tpbs_list_link hi_allhooks;\n\tpbs_list_link hi_queuejob_hooks;\n\tpbs_list_link hi_postqueuejob_hooks;\n\tpbs_list_link hi_modifyjob_hooks;\n\tpbs_list_link hi_resvsub_hooks;\n\tpbs_list_link hi_modifyresv_hooks;\n\tpbs_list_link hi_movejob_hooks;\n\tpbs_list_link hi_runjob_hooks;\n\tpbs_list_link hi_jobobit_hooks;\n\tpbs_list_link hi_management_hooks;\n\tpbs_list_link hi_modifyvnode_hooks;\n\tpbs_list_link hi_provision_hooks;\n\tpbs_list_link hi_periodic_hooks;\n\tpbs_list_link hi_resv_confirm_hooks;\n\tpbs_list_link hi_resv_begin_hooks;\n\tpbs_list_link hi_resv_end_hooks;\n\tpbs_list_link hi_execjob_begin_hooks;\n\tpbs_list_link hi_execjob_prologue_hooks;\n\tpbs_list_link hi_execjob_epilogue_hooks;\n\tpbs_list_link hi_execjob_end_hooks;\n\tpbs_list_link hi_execjob_preterm_hooks;\n\tpbs_list_link hi_execjob_launch_hooks;\n\tpbs_list_link hi_exechost_periodic_hooks;\n\tpbs_list_link hi_exechost_startup_hooks;\n\tpbs_list_link hi_execjob_attach_hooks;\n\tpbs_list_link hi_execjob_resize_hooks;\n\tpbs_list_link hi_execjob_abort_hooks;\n\tpbs_list_link hi_execjob_postsuspend_hooks;\n\tpbs_list_link hi_execjob_preresume_hooks;\n\tstruct work_task *ptask; /* work task pointer, used in periodic hooks */\n};\n\ntypedef struct hook hook;\n\n/* Hook-related files and directories */\n#define HOOK_FILE_SUFFIX \".HK\"\t   /* hook control file */\n#define HOOK_SCRIPT_SUFFIX \".PY\"   /* hook script file */\n#define HOOK_REJECT_SUFFIX \".RJ\"   /* hook error reject message */\n#define HOOK_TRACKING_SUFFIX \".TR\" /* hook pending action tracking file */\n#define HOOK_BAD_SUFFIX \".BD\"\t   /* a bad (moved out of the way) hook file */\n#define HOOK_CONFIG_SUFFIX \".CF\"\n#define PBS_HOOKDIR \"hooks\"\n#define PBS_HOOK_WORKDIR PBS_HOOKDIR \"/tmp\"\n#define PBS_HOOK_TRACKING PBS_HOOKDIR \"/tracking\"\n#define PBS_HOOK_NAME_SIZE 512\n\n/* Some hook-related buffer sizes */\n#define 
HOOK_BUF_SIZE 512\n#define HOOK_MSG_SIZE 3172\n\n/* parameters to import and export qmgr command */\n#define CONTENT_TYPE_PARAM \"content-type\"\n#define CONTENT_ENCODING_PARAM \"content-encoding\"\n#define INPUT_FILE_PARAM \"input-file\"\n#define OUTPUT_FILE_PARAM \"output-file\"\n\n/* attribute default values */\n/* Save only the non-defaults */\n#define HOOK_TYPE_DEFAULT HOOK_SITE\n#define HOOK_USER_DEFAULT HOOK_PBSADMIN\n#define HOOK_FAIL_ACTION_DEFAULT HOOK_FAIL_ACTION_NONE\n#define HOOK_ENABLED_DEFAULT TRUE\n#define HOOK_DEBUG_DEFAULT FALSE\n#define HOOK_EVENT_DEFAULT 0\n#define HOOK_ORDER_DEFAULT 1\n#define HOOK_ALARM_DEFAULT 30\n#define HOOK_FREQ_DEFAULT 120\n#define HOOK_PENDING_DELETE_DEFAULT 0\n\n/* Various attribute names in string format */\n#define HOOKATT_NAME \"hook_name\"\n#define HOOKATT_TYPE \"type\"\n#define HOOKATT_USER \"user\"\n#define HOOKATT_ENABLED \"enabled\"\n#define HOOKATT_DEBUG \"debug\"\n#define HOOKATT_EVENT \"event\"\n#define HOOKATT_ORDER \"order\"\n#define HOOKATT_ALARM \"alarm\"\n#define HOOKATT_FREQ \"freq\"\n#define HOOKATT_FAIL_ACTION \"fail_action\"\n#define HOOKATT_PENDING_DELETE \"pending_delete\"\n\n#define HOOK_PBS_PREFIX \"PBS\" /* valid Hook name prefix for PBS hook */\n\n/* Valid Hook type values */\n#define HOOKSTR_SITE \"site\"\n#define HOOKSTR_PBS \"pbs\"\n#define HOOKSTR_UNKNOWN \"\" /* empty string is the value for */\n/* unknown Hook type and Hook user */\n\n/*  Valid Hook user values */\n#define HOOKSTR_ADMIN \"pbsadmin\"\n#define HOOKSTR_USER \"pbsuser\"\n\n/*  Valid Hook fail_action values */\n#define HOOKSTR_FAIL_ACTION_NONE \"none\"\n#define HOOKSTR_FAIL_ACTION_OFFLINE_VNODES \"offline_vnodes\"\n#define HOOKSTR_FAIL_ACTION_CLEAR_VNODES \"clear_vnodes_upon_recovery\"\n#define HOOKSTR_FAIL_ACTION_SCHEDULER_RESTART_CYCLE \"scheduler_restart_cycle\"\n\n/* Valid hook enabled or debug values */\n#define HOOKSTR_TRUE \"true\"\n#define HOOKSTR_FALSE \"false\"\n\n/* Valid Hook event values */\n#define 
HOOKSTR_QUEUEJOB \"queuejob\"\n#define HOOKSTR_POSTQUEUEJOB \"postqueuejob\"\n#define HOOKSTR_MODIFYJOB \"modifyjob\"\n#define HOOKSTR_RESVSUB \"resvsub\"\n#define HOOKSTR_MODIFYRESV \"modifyresv\"\n#define HOOKSTR_MOVEJOB \"movejob\"\n#define HOOKSTR_RUNJOB \"runjob\"\n#define HOOKSTR_PROVISION \"provision\"\n#define HOOKSTR_PERIODIC \"periodic\"\n#define HOOKSTR_RESV_CONFIRM \"resv_confirm\"\n#define HOOKSTR_RESV_BEGIN \"resv_begin\"\n#define HOOKSTR_RESV_END \"resv_end\"\n#define HOOKSTR_MANAGEMENT \"management\"\n#define HOOKSTR_JOBOBIT \"jobobit\"\n#define HOOKSTR_MODIFYVNODE \"modifyvnode\"\n#define HOOKSTR_EXECJOB_BEGIN \"execjob_begin\"\n#define HOOKSTR_EXECJOB_PROLOGUE \"execjob_prologue\"\n#define HOOKSTR_EXECJOB_EPILOGUE \"execjob_epilogue\"\n#define HOOKSTR_EXECJOB_END \"execjob_end\"\n#define HOOKSTR_EXECJOB_PRETERM \"execjob_preterm\"\n#define HOOKSTR_EXECJOB_LAUNCH \"execjob_launch\"\n#define HOOKSTR_EXECJOB_ATTACH \"execjob_attach\"\n#define HOOKSTR_EXECJOB_RESIZE \"execjob_resize\"\n#define HOOKSTR_EXECJOB_ABORT \"execjob_abort\"\n#define HOOKSTR_EXECJOB_POSTSUSPEND \"execjob_postsuspend\"\n#define HOOKSTR_EXECJOB_PRERESUME \"execjob_preresume\"\n#define HOOKSTR_EXECHOST_PERIODIC \"exechost_periodic\"\n#define HOOKSTR_EXECHOST_STARTUP \"exechost_startup\"\n#define HOOKSTR_NONE \"\\\"\\\"\" /* double quote val for event == 0 */\n\n#define HOOKSTR_FAIL_ACTION_EVENTS HOOKSTR_EXECJOB_BEGIN \", \" HOOKSTR_EXECHOST_STARTUP \", \" HOOKSTR_EXECJOB_PROLOGUE\n\n/* Valid Hook order valid ranges */\n/* for SITE hooks */\n#define HOOK_SITE_ORDER_MIN 1\n#define HOOK_SITE_ORDER_MAX 1000\n\n/* for PBS hooks */\n#define HOOK_PBS_ORDER_MIN -1000\n#define HOOK_PBS_ORDER_MAX 2000\n\n/* For cleanup_hooks_workdir() parameters */\n#define HOOKS_TMPFILE_MAX_AGE 1200 /* a temp hooks file's maximum age (in  */\n/* in secs) before getting removed      */\n#define HOOKS_TMPFILE_NEXT_CLEANUP_PERIOD 600 /* from this time, when (secs) */\n/* cleanup_hooks_workdir()     */\n/* 
gets called. \t\t     */\n\n/* for import/export actions */\n#define HOOKSTR_CONTENT \"application/x-python\"\n#define HOOKSTR_CONFIG \"application/x-config\"\n#define HOOKSTR_BASE64 \"base64\"\n#define HOOKSTR_DEFAULT \"default\"\n\n#define PBS_HOOK_CONFIG_FILE \"PBS_HOOK_CONFIG_FILE\"\n\n/* default import statement printed out on a \"print hook\" request */\n#define PRINT_HOOK_IMPORT_CALL \"import hook %s application/x-python base64 -\\n\"\n#define PRINT_HOOK_IMPORT_CONFIG \"import hook %s application/x-config base64 -\\n\"\n\n/* Format: The first %s is the directory location */\n#define FMT_HOOK_PREFIX \"hook_\"\n#define FMT_HOOK_JOB_OUTFILE \"%s\" FMT_HOOK_PREFIX \"%s.out\"\n#define FMT_HOOK_INFILE \"%s\" FMT_HOOK_PREFIX \"%s_%s_%d.in\"\n#define FMT_HOOK_OUTFILE \"%s\" FMT_HOOK_PREFIX \"%s_%s_%d.out\"\n#define FMT_HOOK_DATAFILE \"%s\" FMT_HOOK_PREFIX \"%s_%s_%d.data\"\n#define FMT_HOOK_SCRIPT \"%s\" FMT_HOOK_PREFIX \"script%d\"\n#define FMT_HOOK_SCRIPT_COPY \"%s\" FMT_HOOK_PREFIX \"script_%s.%s\"\n#define FMT_HOOK_CONFIG \"%s\" FMT_HOOK_PREFIX \"config%d\"\n#define FMT_HOOK_CONFIG_COPY \"%s\" FMT_HOOK_PREFIX \"config_%s.%s\"\n#define FMT_HOOK_RESCDEF \"%s\" FMT_HOOK_PREFIX \"resourcedef%d\"\n#define FMT_HOOK_RESCDEF_COPY \"%s\" FMT_HOOK_PREFIX \"resourcedef.%s\"\n#define FMT_HOOK_LOG \"%s\" FMT_HOOK_PREFIX \"log%d\"\n\n/* Special log levels  - values must not intersect PBS_EVENT* values in log.h */\n\n#define SEVERITY_LOG_DEBUG 0x0005   /* syslog DEBUG */\n#define SEVERITY_LOG_WARNING 0x0006 /* syslog WARNING */\n#define SEVERITY_LOG_ERR 0x0007\t    /* syslog ERR */\n\n/* Power hook name */\n#define PBS_POWER \"PBS_power\"\n\n/* External functions */\nextern int\nset_hook_name(hook *, char *, char *, size_t);\nextern int\nset_hook_enabled(hook *, char *, char *, size_t);\nextern int\nset_hook_debug(hook *, char *, char *, size_t);\nextern int\nset_hook_type(hook *, char *, char *, size_t, int);\nextern int\nset_hook_user(hook *, char *, char *, size_t, 
int);\nextern int\nset_hook_event(hook *, char *, char *, size_t);\nextern int\nadd_hook_event(hook *, char *, char *, size_t);\nextern int\ndel_hook_event(hook *, char *, char *, size_t);\nextern int\nset_hook_fail_action(hook *, char *, char *, size_t, int);\nextern int\nadd_hook_fail_action(hook *, char *, char *, size_t, int);\nextern int\ndel_hook_fail_action(hook *, char *, char *, size_t);\nextern int\nset_hook_order(hook *, char *, char *, size_t);\nextern int\nset_hook_alarm(hook *, char *, char *, size_t);\nextern int\nset_hook_freq(hook *, char *, char *, size_t);\n\nextern int\nunset_hook_enabled(hook *, char *, size_t);\nextern int\nunset_hook_debug(hook *, char *, size_t);\nextern int\nunset_hook_type(hook *, char *, size_t);\nextern int\nunset_hook_user(hook *, char *, size_t);\nextern int\nunset_hook_fail_action(hook *, char *, size_t);\nextern int\nunset_hook_event(hook *, char *, size_t);\nextern int\nunset_hook_order(hook *, char *, size_t);\nextern int\nunset_hook_alarm(hook *, char *, size_t);\nextern int\nunset_hook_freq(hook *, char *, size_t);\nextern hook *hook_alloc(void);\nextern void hook_free(hook *, void (*)(struct python_script *));\nextern void hook_purge(hook *, void (*)(struct python_script *));\nextern int hook_save(hook *);\nextern hook *hook_recov(char *, FILE *, char *, size_t,\n\t\t\tint (*)(const char *, struct python_script **),\n\t\t\tvoid (*)(struct python_script *));\n\nextern hook *find_hook(char *);\nextern hook *find_hookbyevent(int);\nextern int encode_hook_content(char *, char *, char *, char *, size_t);\nextern int decode_hook_content(char *, char *, char *, char *, size_t);\nextern void print_hooks(unsigned int);\nextern void mark_hook_file_bad(char *);\n\nextern char *hook_event_as_string(unsigned int);\nextern unsigned int hookstr_event_toint(char *);\nextern char *hook_enabled_as_string(int);\nextern char *hook_debug_as_string(int);\nextern char *hook_type_as_string(hook_type);\nextern char 
*hook_alarm_as_string(int);\nextern char *hook_freq_as_string(int);\nextern char *hook_order_as_string(short);\nextern char *hook_user_as_string(hook_user);\nextern char *hook_fail_action_as_string(unsigned int);\nextern int num_eligible_hooks(unsigned int);\n\n#ifdef _WORK_TASK_H\nextern void cleanup_hooks_workdir(struct work_task *);\n#endif\n\n#ifdef WIN32\n#define ALARM_HANDLER_ARG void\n#else\n#define ALARM_HANDLER_ARG int sig\n#endif\n\nextern void catch_hook_alarm(ALARM_HANDLER_ARG);\nextern int set_alarm(int sec, void (*)(void));\n\nextern void hook_perf_stat_start(char *label, char *action, int);\nextern void hook_perf_stat_stop(char *label, char *action, int);\n#define HOOK_PERF_POPULATE \"populate\"\n#define HOOK_PERF_FUNC \"hook_func\"\n#define HOOK_PERF_RUN_CODE \"run_code\"\n#define HOOK_PERF_START_PYTHON \"start_interpreter\"\n#define HOOK_PERF_LOAD_INPUT \"load_hook_input_file\"\n#define HOOK_PERF_HOOK_OUTPUT \"hook_output\"\n#define HOOK_PERF_POPULATE_VNODE \"populate:pbs.event().vnode\"\n#define HOOK_PERF_POPULATE_VNODE_O \"populate:pbs.event().vnode_o\"\n#define HOOK_PERF_POPULATE_VNODELIST \"populate:pbs.event().vnode_list\"\n#define HOOK_PERF_POPULATE_VNODELIST_FAIL \"populate:pbs.event().vnode_list_fail\"\n#define HOOK_PERF_POPULATE_RESVLIST \"populate:pbs.event().resv_list\"\n#define HOOK_PERF_POPULATE_JOBLIST \"populate:pbs.event().job_list\"\n#define HOOK_PERF_LOAD_DATA \"load_hook_data\"\n#ifdef __cplusplus\n}\n#endif\n#endif /* _HOOK_H */\n"
  },
  {
    "path": "src/include/hook_func.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _HOOK_FUNC_H\n#define _HOOK_FUNC_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include \"work_task.h\"\n#include \"job.h\"\n#include \"hook.h\"\n\n/*\n * hook_func.h - structure definitions for hook objects\n *\n * Include Files Required:\n *\t<sys/types.h>\n *\t\"list_link.h\"\n *\t\"batch_request.h\"\n *\t\"pbs_ifl.h\"\n */\n\n#define MOM_HOOK_ACTION_NONE 0\n#define MOM_HOOK_ACTION_SEND_ATTRS 0x01\n#define MOM_HOOK_ACTION_SEND_SCRIPT 0x02\n#define MOM_HOOK_ACTION_DELETE 0x04\n#define MOM_HOOK_ACTION_SEND_RESCDEF 0x08\n#define MOM_HOOK_ACTION_DELETE_RESCDEF 0x10\n#define MOM_HOOK_ACTION_SEND_CONFIG 0x20\n\n/* MOM_HOOK_ACTION_SEND_RESCDEF is really not part of this */\n#define MOM_HOOK_SEND_ACTIONS (MOM_HOOK_ACTION_SEND_ATTRS | MOM_HOOK_ACTION_SEND_SCRIPT | MOM_HOOK_ACTION_SEND_CONFIG)\n\nstruct mom_hook_action {\n\tchar hookname[PBS_HOOK_NAME_SIZE];\n\tunsigned int action;\n\tunsigned int reply_expected; /* reply expected from mom for sent out actions */\n\tint do_delete_action_first;  /* force order between delete and send actions */\n\tlong long int tid;\t     /* transaction id to group actions under */\n};\n\n/* Return values to sync_mom_hookfilesTPP() function */\nenum sync_hookfiles_result {\n\tSYNC_HOOKFILES_NONE,\n\tSYNC_HOOKFILES_SUCCESS_ALL,\n\tSYNC_HOOKFILES_SUCCESS_PARTIAL,\n\tSYNC_HOOKFILES_FAIL\n};\n\ntypedef struct 
mom_hook_action mom_hook_action_t;\n\nextern int add_mom_hook_action(mom_hook_action_t ***,\n\t\t\t       int *, char *, unsigned int, int, long long int);\n\nextern int delete_mom_hook_action(mom_hook_action_t **, int,\n\t\t\t\t  char *, unsigned int);\n\nextern mom_hook_action_t *find_mom_hook_action(mom_hook_action_t **,\n\t\t\t\t\t       int, char *);\n\nextern void add_pending_mom_hook_action(void *minfo, char *, unsigned int);\n\nextern void delete_pending_mom_hook_action(void *minfo, char *, unsigned int);\n\nextern void add_pending_mom_allhooks_action(void *minfo, unsigned int);\n\nextern int has_pending_mom_action_delete(char *);\n\nextern void hook_track_save(void *, int);\nextern void hook_track_recov(void);\nextern int mc_sync_mom_hookfiles(void);\nextern void uc_delete_mom_hooks(void *);\nextern int sync_mom_hookfiles_count(void *);\nextern void next_sync_mom_hookfiles(void);\nextern void send_rescdef(int);\nextern unsigned long get_hook_rescdef_checksum(void);\nextern void mark_mom_hooks_seen(void);\nextern int mom_hooks_seen_count(void);\nextern void hook_action_tid_set(long long int);\nextern long long int hook_action_tid_get(void);\nextern void set_srv_pwr_prov_attribute(void);\nextern void fprint_svrattrl_list(FILE *, char *, pbs_list_head *);\n\n#ifdef _BATCH_REQUEST_H\nextern int status_hook(hook *, struct batch_request *, pbs_list_head *, char *, size_t);\nextern void mgr_hook_import(struct batch_request *);\nextern void mgr_hook_export(struct batch_request *);\nextern void mgr_hook_set(struct batch_request *);\nextern void mgr_hook_unset(struct batch_request *);\nextern void mgr_hook_create(struct batch_request *);\nextern void mgr_hook_delete(struct batch_request *);\nextern void req_stat_hook(struct batch_request *);\n\n/* Hook script processing */\nextern int server_process_hooks(int rq_type, char *rq_user, char *rq_host, hook *phook,\n\t\t\t\tint hook_event, job *pjob, hook_input_param_t *req_ptr,\n\t\t\t\tchar *hook_msg, int msg_len, void 
(*pyinter_func)(void),\n\t\t\t\tint *num_run, int *event_initialized);\nextern int process_hooks(struct batch_request *, char *, size_t, void (*)(void));\nextern int recreate_request(struct batch_request *);\n\n/* Server periodic hook call-back */\nextern void run_periodic_hook(struct work_task *ptask);\n\nextern int get_server_hook_results(char *input_file, int *accept_flag, int *reject_flag,\n\t\t\t\t   char *reject_msg, int reject_msg_size, job *pjob, hook *phook, hook_output_param_t *hook_output);\n#endif\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _HOOK_FUNC_H */\n"
  },
  {
    "path": "src/include/ifl_internal.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _IFL_INTERNAL_H\n#define _IFL_INTERNAL_H\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include \"pbs_ifl.h\"\n#include \"pbs_internal.h\"\n\n/* Used for non blocking connect */\n#define NOBLK_FLAG \"NOBLK\"\n#define NOBLK_TOUT 2\n\n/* IFL functions */\nint __pbs_asyrunjob(int, const char *, const char *, const char *);\n\nint __pbs_asyrunjob_ack(int c, const char *jobid, const char *location, const char *extend);\n\nint __pbs_alterjob(int, const char *, struct attrl *, const char *);\n\nint __pbs_asyalterjob(int, const char *, struct attrl *, const char *);\n\nint __pbs_confirmresv(int, const char *, const char *, unsigned long, const char *);\n\nint __pbs_connect(const char *);\n\nint __pbs_connect_extend(const char *, const char *);\n\nchar *__pbs_default(void);\n\nint __pbs_deljob(int, const char *, const char *);\n\nstruct batch_deljob_status *__pbs_deljoblist(int, char **, int, const char *);\n\nint __pbs_disconnect(int);\n\nchar *__pbs_geterrmsg(int);\n\nint __pbs_holdjob(int, const char *, const char *, const char *);\n\nint __pbs_loadconf(int);\n\nchar *__pbs_locjob(int, const char *, const char *);\n\nint __pbs_manager(int, int, int, const char *, struct attropl *, const char *);\n\nint __pbs_movejob(int, const char *, const char *, const char *);\n\nint __pbs_msgjob(int, const char *, int, const char *, const char 
*);\n\nint __pbs_orderjob(int, const char *, const char *, const char *);\n\nint __pbs_rerunjob(int, const char *, const char *);\n\nint __pbs_rlsjob(int, const char *, const char *, const char *);\n\nint __pbs_runjob(int, const char *, const char *, const char *);\n\nchar **__pbs_selectjob(int, struct attropl *, const char *);\n\nint __pbs_sigjob(int, const char *, const char *, const char *);\n\nvoid __pbs_statfree(struct batch_status *);\n\nvoid __pbs_delstatfree(struct batch_deljob_status *);\n\nstruct batch_status *__pbs_statrsc(int, const char *, struct attrl *, const char *);\n\nstruct batch_status *__pbs_statjob(int, const char *, struct attrl *, const char *);\n\nstruct batch_status *__pbs_selstat(int, struct attropl *, struct attrl *, const char *);\n\nstruct batch_status *__pbs_statque(int, const char *, struct attrl *, const char *);\n\nstruct batch_status *__pbs_statserver(int, struct attrl *, const char *);\n\nstruct batch_status *__pbs_statsched(int, struct attrl *, const char *);\n\nstruct batch_status *__pbs_stathost(int, const char *, struct attrl *, const char *);\n\nstruct batch_status *__pbs_statnode(int, const char *, struct attrl *, const char *);\n\nstruct batch_status *__pbs_statvnode(int, const char *, struct attrl *, const char *);\n\nstruct batch_status *__pbs_statresv(int, const char *, struct attrl *, const char *);\n\nstruct batch_status *__pbs_stathook(int, const char *, struct attrl *, const char *);\n\nstruct ecl_attribute_errors *__pbs_get_attributes_in_error(int);\n\nchar *__pbs_submit(int, struct attropl *, const char *, const char *, const char *);\n\nchar *__pbs_submit_resv(int, struct attropl *, const char *);\n\nchar *__pbs_modify_resv(int c, const char *resv_id, struct attropl *attrib, const char *extend);\n\nint __pbs_delresv(int, const char *, const char *);\n\nint __pbs_relnodesjob(int c, const char *jobid, const char *node_list, const char *extend);\n\nint __pbs_terminate(int, int, const char *);\n\npreempt_job_info 
*__pbs_preempt_jobs(int, char **);\n\nint __pbs_register_sched(const char *sched_id, int primary_conn_id, int secondary_conn_id);\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif /* _IFL_INTERNAL_H */\n"
  },
  {
    "path": "src/include/job.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _PBS_JOB_H\n#define _PBS_JOB_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"range.h\"\n#include \"Long.h\"\n\n/*\n * job.h - structure definitions for job objects\n *\n * Include Files Required:\n *\t<sys/types.h>\n *\t\"list_link.h\"\n *\t\"attribute.h\"\n *\t\"server_limits.h\"\n *\t\"reservation.h\"\n */\n\n#ifndef _SERVER_LIMITS_H\n#include \"server_limits.h\"\n#endif\n#include \"work_task.h\"\n\n#ifdef PBS_MOM /* For the var_table used in env funcs */\n/* struct var_table = used to hold environment variables for the job */\n\nstruct var_table {\n\tchar **v_envp;\n\tint v_ensize;\n\tint v_used;\n};\n#endif\n\n/*\n * Dependent Job Structures\n *\n * This set of structures is used by the server to track job\n * dependency.  
It also heads the list of depend_job related via this type.\n * For a type of \"syncct\", the number of jobs expected, registered and\n * ready are also recorded.\n */\n\nstruct depend {\n\tpbs_list_link dp_link; /* link to next dependency, if any       */\n\tshort dp_type;\t       /* type of dependency (all) \t         */\n\tshort dp_numexp;       /* num jobs expected (on or syncct only) */\n\tshort dp_numreg;       /* num jobs registered (syncct only)     */\n\tshort dp_released;     /* This job released to run (syncwith)   */\n\tshort dp_numrun;       /* num jobs supposed to run\t\t */\n\tpbs_list_head dp_jobs; /* list of related jobs  (all)           */\n};\n\n/*\n * The depend_job structure is used to record the name and location\n * of each job which is involved with the dependency\n */\n\nstruct depend_job {\n\tpbs_list_link dc_link;\n\tshort dc_state;\t\t\t    /* released / ready to run (syncct)\t */\n\tlong dc_cost;\t\t\t    /* cost of this child (syncct)\t\t */\n\tchar dc_child[PBS_MAXSVRJOBID + 1]; /* child (dependent) job\t */\n\tchar dc_svr[PBS_MAXSERVERNAME + 1]; /* server owning job\t */\n};\n\n/*\n * Warning: the relation between the numbers assigned to after* and before*\n * is critical.\n */\n#define JOB_DEPEND_TYPE_AFTERSTART 0\n#define JOB_DEPEND_TYPE_AFTEROK 1\n#define JOB_DEPEND_TYPE_AFTERNOTOK 2\n#define JOB_DEPEND_TYPE_AFTERANY 3\n#define JOB_DEPEND_TYPE_BEFORESTART 4\n#define JOB_DEPEND_TYPE_BEFOREOK 5\n#define JOB_DEPEND_TYPE_BEFORENOTOK 6\n#define JOB_DEPEND_TYPE_BEFOREANY 7\n#define JOB_DEPEND_TYPE_ON 8\n#define JOB_DEPEND_TYPE_RUNONE 9\n#define JOB_DEPEND_NUMBER_TYPES 11\n\n#define JOB_DEPEND_OP_REGISTER 1\n#define JOB_DEPEND_OP_RELEASE 2\n#define JOB_DEPEND_OP_READY 3\n#define JOB_DEPEND_OP_DELETE 4\n#define JOB_DEPEND_OP_UNREG 5\n\n/*\n * The badplace structure is used to keep track of destinations\n * which have been tried by a route queue and given a \"reject\"\n * status back, see svr_movejob.c.\n */\ntypedef struct badplace 
{\n\tpbs_list_link bp_link;\n\tchar bp_dest[PBS_MAXROUTEDEST + 1];\n} badplace;\n\n/*\n * The grpcache structure defined here is used by MOM to maintain the\n * home directory, uid and gid of the user name under\n * which the job is running.\n * The information is kept here rather than making repeated hits on the\n * password and group files.\n */\nstruct grpcache {\n\tuid_t gc_uid;\t    /* uid job will execute under */\n\tgid_t gc_gid;\t    /* gid job will execute under */\n\tgid_t gc_rgid;\t    /* login gid of user uid      */\n\tchar gc_homedir[1]; /* more space allocated as part of this\t */\n\t\t\t    /* structure following here\t\t */\n};\n\n/*\n * Job attributes/resources are maintained in one of two ways.\n * Most of the attributes are maintained in a decoded or parsed form.\n * This allows quick access to the attribute and resource values\n * when making decisions about the job (scheduling, routing, ...).\n *\n * Any attribute or resource which is not recognized on this server\n * is kept in an \"attrlist\", a linked list of the \"external\"\n * form (attr_extern, see attribute.h).  These are maintained because\n * the job may be passed on to another server (route or qmove) that\n * does recognize them.\n * See the job structure entry ji_attrlist and the attrlist structure.\n */\n\n/*\n * The following job_atr enum provides an index into the array of\n * decoded job attributes, for quick access.\n * Most of the attributes here are \"public\", but some are Read Only,\n * Private, or even Internal data items; maintained here because of\n * their variable size.\n *\n * \"JOB_ATR_LAST\" must be the last value as its number is used to\n * define the size of the array.\n */\n\nenum job_atr {\n#include \"job_attr_enum.h\"\n#include \"site_job_attr_enum.h\"\n\tJOB_ATR_UNKN, /* the special \"unknown\" type */\n\tJOB_ATR_LAST  /* This MUST be LAST\t*/\n};\n\n/* the following enum defines the type of checkpoint to be done */\n/* none, based on cputime or walltime.   
Used only by Mom       */\nenum PBS_Chkpt_By {\n\tPBS_CHECKPOINT_NONE, /* no checkpoint                   */\n\tPBS_CHECKPOINT_CPUT, /* checkpoint by cputime interval  */\n\tPBS_CHECKPOINT_WALLT /* checkpoint by walltime interval */\n};\n\ntypedef struct string_and_number_t {\n\tchar *str;\n\tint num;\n} string_and_number_t;\n\ntypedef struct resc_limit {\t\t   /* per node limits for Mom\t*/\n\tint rl_ncpus;\t\t\t   /* number of cpus\t\t*/\n\tint rl_ssi;\t\t\t   /* ssinodes (for irix cpusets)\t*/\n\tlong long rl_mem;\t\t   /* working set size (real mem)\t*/\n\tlong long rl_vmem;\t\t   /* total mem space (virtual)\t*/\n\tint rl_naccels;\t\t\t   /* number of accelerators\t*/\n\tlong long rl_accel_mem;\t\t   /* accelerator mem (real mem)\t*/\n\tpbs_list_head rl_other_res;\t   /* list of all other resources found in execvnode and sched select*/\n\tunsigned int rl_res_count;\t   /* total count of resources */\n\tchar *chunkstr;\t\t\t   /* chunk represented */\n\tint chunkstr_sz;\t\t   /* size of chunkstr */\n\tchar *chunkspec;\t\t   /* the spec in select string representing the chunk */\n\tstring_and_number_t host_chunk[2]; /* chunks representing exec_host/exec_host2  */\n} resc_limit_t;\n\n/*\n * The \"definitions\" for the job attributes are in the following array;\n * it is also indexed by the JOB_ATR_... enums.\n */\n\nextern attribute_def job_attr_def[];\nextern void *job_attr_idx;\n\n#ifndef PBS_MOM\n\ntypedef enum histjob_type {\n\tT_FIN_JOB, /* Job finished execution or terminated */\n\tT_MOV_JOB, /* Job moved to different destination */\n\tT_MOM_DOWN /* Non-rerunnable Job failed because of\n\t\t\t MOM failure.*/\n} histjob_type;\n\n#endif /* SERVER only! */\n\n#ifdef PBS_MOM\n#include \"tm_.h\"\n\n/*\n * host_vlist - an array of these is hung off of the hnodent for this host,\n *\ti.e. 
if the hnodent index equals that in ji_nodeid;\n *\tThe array contains one entry for each vnode allocated for this job\n *\tfrom this host.\n *\n *\tWARNING: This array only exists for cpuset machines !\n *\n *\t\t The mom hooks code (src/resmom/mom_hook_func.c:run_hook())\n *\t\t depends on this structure. If newer resource members\n *\t\t are added (besides hv_ncpus, hv_mem), then please update\n *\t\t mom hooks code, as well as the appropriate RFE.\n *\n */\ntypedef struct host_vlist {\n\tchar hv_vname[PBS_MAXNODENAME + 1]; /* vnode name     */\n\tint hv_ncpus;\t\t\t    /* ncpus assigned */\n\tsize_t hv_mem;\t\t\t    /* mem assigned   */\n} host_vlist_t;\n\n/*\n **\tTrack nodes with an array of structures which each\n **\tpoint to a list of events\n */\ntypedef struct hnodent {\n\ttm_host_id hn_node;\t /* host (node) identifier (index) */\n\tchar *hn_host;\t\t /* hostname of node */\n\tint hn_port;\t\t /* port of Mom */\n\tint hn_stream;\t\t /* stream to MOM on node */\n\ttime_t hn_eof_ts;\t /* timestamp of when the stream went down */\n\tint hn_sister;\t\t /* save error for KILL_JOB event */\n\tint hn_nprocs;\t\t /* num procs allocated to this node */\n\tint hn_vlnum;\t\t /* num entries in vlist */\n\thost_vlist_t *hn_vlist;\t /* list of vnodes allocated */\n\tresc_limit_t hn_nrlimit; /* resc limits per node */\n\tvoid *hn_setup;\t\t /* save any setup info here */\n\tpbs_list_head hn_events; /* pointer to list of events */\n} hnodent;\n\ntypedef struct vmpiprocs {\n\ttm_node_id vn_node; /* user's vnode identifier */\n\thnodent *vn_host;   /* parent (host) nodeent entry */\n\tchar *vn_hname;\t    /* host name for MPI, if null value */\n\t/* use vn_host->hn_host             */\n\t/* host name for MPI */\n\tchar *vn_vname;\t\t/* vnode name */\n\tint vn_cpus;\t\t/* number of cpus allocated to proc */\n\tint vn_mpiprocs;\t/* number for mpiprocs  */\n\tint vn_threads;\t\t/* number for OMP_NUM_THREADS */\n\tlong long vn_mem;\t/* working set size (real mem)\t*/\n\tlong 
long vn_vmem;\t/* total mem space (virtual)\t*/\n\tint vn_naccels;\t\t/* number of accelerators */\n\tint vn_need_accel;\t/* should we reserve the accelerators */\n\tchar *vn_accel_model;\t/* model of the desired accelerator */\n\tlong long vn_accel_mem; /* amt of accelerator memory wanted */\n} vmpiprocs;\n\n/* the following enum defines if a node resource is to be reported by Mom */\nenum PBS_NodeRes_Status {\n\tPBS_NODERES_ACTIVE, /* resource reported from a non-released node */\n\tPBS_NODERES_DELETE  /* resource reported from a released node */\n};\n\n/*\n * Mother Superior gets to hold an array of information from each\n * of the other nodes for resource usage.\n */\ntypedef struct noderes {\n\tchar *nodehost;\t    /* corresponding node name */\n\tlong nr_cput;\t    /* cpu time */\n\tlong nr_mem;\t    /* memory */\n\tlong nr_cpupercent; /* cpu percent */\n\tattribute nr_used;  /* node resources used */\n\tenum PBS_NodeRes_Status nr_status;\n} noderes;\n\n/* State for a sister */\n\n#define SISTER_OKAY 0\n#define SISTER_KILLDONE 1000\n#define SISTER_BADPOLL 1001\n#define SISTER_EOF 1099\n\n/* job flags for ji_flags (mom only) */\n\n#define MOM_CHKPT_ACTIVE 0x0001\t  /* checkpoint in progress */\n#define MOM_CHKPT_POST 0x0002\t  /* post checkpoint call returned */\n#define MOM_SISTER_ERR 0x0004\t  /* a sisterhood operation failed */\n#define MOM_NO_PROC 0x0008\t  /* no procs found for job */\n#define MOM_RESTART_ACTIVE 0x0010 /* restart in progress */\n\n#define PBS_MAX_POLL_DOWNTIME 300 /* 5 minutes by default */\n#endif\t\t\t\t  /* MOM */\n\n/*\n * specific structures for Job Array attributes\n */\n\n/* subjob index table */\ntypedef struct ajinfo {\n\tint tkm_ct;\t\t\t  /* count of original entries in table */\n\tint tkm_start;\t\t\t  /* start of range (x in x-y:z) */\n\tint tkm_end;\t\t\t  /* end of range (y in x-y:z) */\n\tint tkm_step;\t\t\t  /* stepping factor for range (z in x-y:z) */\n\tint tkm_flags;\t\t\t  /* special flags for array job */\n\tint 
tkm_subjsct[PBS_NUMJOBSTATE]; /* count of subjobs in various states */\n\tint tkm_dsubjsct;\t\t  /* count of deleted subjobs */\n\trange *trm_quelist;\t\t  /* pointer to range list */\n} ajinfo_t;\n\n/*\n * Discard Job Structure,  see Server's discard_job function\n *\tUsed to record which Moms have responded when we need to tell them\n *\tto discard a running job because of problems with a Mom\n */\n\nstruct jbdscrd {\n\tstruct machine_info *jdcd_mom; /* ptr to Mom */\n\tint jdcd_state;\t\t       /* 0 - waiting on her */\n};\n#define JDCD_WAITING 0 /* still waiting to hear from this Mom */\n#define JDCD_REPLIED 1 /* this Mom has replied to the discard job */\n#define JDCD_DOWN -1   /* this Mom is down */\n\n/* Special array job flags in tkm_flags */\n#define TKMFLG_NO_DELETE 0x01 /* delete subjobs in progress */\n#define TKMFLG_CHK_ARRAY 0x02 /* chk_array_doneness() already in call stack */\n\n/* Structure for block job reply processing */\nstruct block_job_reply {\n\tchar jobid[PBS_MAXSVRJOBID + 1];\n\tchar client[PBS_MAXHOSTNAME + 1];\n\tint port;\n\tint exitstat;\n\ttime_t reply_time; /* The timestamp at which the block job made its first attempt to reply */\n\tchar *msg;\t   /* Abort message to be sent to client */\n\tint fd;\n};\n#define BLOCK_JOB_REPLY_TIMEOUT 60\n\n/*\n * THE JOB\n *\n * This structure is used by the server to maintain internal\n * quick access to the state and status of each job.\n * There is one instance of this structure per job known by the server.\n *\n * This information must be PRESERVED and is done so by updating the\n * job file in the jobs subdirectory which corresponds to this job.\n *\n * ji_state is the state of the job.  It is kept up front to provide for a\n * \"quick\" update of the job state with minimal rewriting of the job file.\n * That is why the sub-struct ji_qs exists; it is the part which is\n * written on the \"quick\" save.  
Update the symbolic constants JSVERSION_**\n * if any changes to the format of the \"quick-save\" area are made.\n *\n * The unparsed string set forms of the attributes (including resources)\n * are maintained in the struct attrlist as discussed above.\n */\n\n#define JSVERSION_18 800  /* 18 denotes the PBS version and it covers the job structure from >= 13.x to <= 18.x */\n#define JSVERSION_19 1900 /* 1900 denotes the 19.x.x version */\n#define JSVERSION 2200\t  /* denotes 22.x and newer */\n#define ji_taskid ji_extended.ji_ext.ji_taskidx\n#define ji_nodeid ji_extended.ji_ext.ji_nodeidx\n\nenum bg_hook_request {\n\tBG_NONE,\n\tBG_IS_DISCARD_JOB,\n\tBG_PBS_BATCH_DeleteJob,\n\tBG_PBSE_SISCOMM,\n\tBG_IM_DELETE_JOB_REPLY,\n\tBG_IM_DELETE_JOB,\n\tBG_IM_DELETE_JOB2,\n\tBG_CHECKPOINT_ABORT\n};\n\nstruct job {\n\n\t/*\n\t * Note: these members, up to ji_qs, are not saved to disk\n\t * IMPORTANT: if adding to this area, see create_subjob()\n\t * in array_func.c; add a copy of the required elements\n\t */\n\n\tpbs_list_link ji_alljobs;\t     /* links to all jobs in server */\n\tpbs_list_link ji_jobque;\t     /* SVR: links to jobs in same queue, MOM: links to polled jobs */\n\tpbs_list_link ji_unlicjobs;\t     /* links to unlicensed jobs */\n\tint ji_momhandle;\t\t     /* open connection handle to MOM */\n\tint ji_mom_prot;\t\t     /* PROT_TCP or PROT_TPP */\n\tstruct batch_request *ji_rerun_preq; /* outstanding rerun request */\n#ifdef PBS_MOM\n\tvoid *ji_pending_ruu;\t\t\t    /* pending last update */\n\tstruct batch_request *ji_preq;\t\t    /* outstanding request */\n\tstruct grpcache *ji_grpcache;\t\t    /* cache of user's groups */\n\tenum PBS_Chkpt_By ji_chkpttype;\t\t    /* checkpoint type  */\n\ttime_t ji_chkpttime;\t\t\t    /* periodic checkpoint time */\n\ttime_t ji_chkptnext;\t\t\t    /* next checkpoint time */\n\ttime_t ji_sampletim;\t\t\t    /* last usage sample time, irix only */\n\ttime_t ji_polltime;\t\t\t    /* last poll from mom superior */\n\ttime_t 
ji_actalarm;\t\t\t    /* time of site callout alarm */\n\ttime_t ji_joinalarm;\t\t\t    /* time of job's sister join job alarm, also, time obit sent, all */\n\ttime_t ji_overlmt_timestamp;\t\t    /*time the job exceeded limit*/\n\tint ji_jsmpipe;\t\t\t\t    /* pipe from child starter process */\n\tint ji_mjspipe;\t\t\t\t    /* pipe to   child starter for ack */\n\tint ji_jsmpipe2;\t\t\t    /* pipe for child starter process to send special requests to parent mom */\n\tint ji_mjspipe2;\t\t\t    /* pipe for parent mom to ack special request from child starter process */\n\tint ji_child2parent_job_update_pipe;\t    /* read pipe to receive special request from child starter process */\n\tint ji_parent2child_job_update_pipe;\t    /* write pipe for parent mom to send info to child starter process */\n\tint ji_parent2child_job_update_status_pipe; /* write pipe for parent mom to send job update status to child starter process */\n\tint ji_parent2child_moms_status_pipe;\t    /* write pipe for parent mom to send sister moms status to child starter process */\n\tint ji_updated;\t\t\t\t    /* set to 1 if job's node assignment was updated */\n\ttime_t ji_walltime_stamp;\t\t    /* time stamp for accumulating walltime */\n\tstruct work_task *ji_bg_hook_task;\n\tstruct work_task *ji_report_task;\n#ifdef WIN32\n\tHANDLE ji_momsubt;\t /* process HANDLE to mom subtask */\n#else\t\t\t\t /* not WIN32 */\n\tpid_t ji_momsubt; /* pid of mom subtask   */\n#endif\t\t\t\t /* WIN32 */\n\tstruct var_table ji_env; /* environment for the job */\n\t/* ptr to post processing func  */\n\tvoid (*ji_mompost)(struct job *, int);\n\ttm_event_t ji_postevent;\t   /* event waiting on mompost */\n\tint ji_numnodes;\t\t   /* number of nodes (at least 1) */\n\tint ji_numrescs;\t\t   /* number of entries in ji_resources*/\n\tint ji_numvnod;\t\t\t   /* number of virtual nodes */\n\tint ji_num_assn_vnodes;\t\t   /* number of virtual nodes (full count) */\n\ttm_event_t ji_obit;\t\t   /* event for end-of-job 
*/\n\thnodent *ji_hosts;\t\t   /* ptr to job host management stuff */\n\tvmpiprocs *ji_vnods;\t\t   /* ptr to job vnode management stuff */\n\tnoderes *ji_resources;\t\t   /* ptr to array of node resources */\n\tvmpiprocs *ji_assn_vnodes;\t   /* ptr to actual assigned vnodes (for hooks) */\n\tpbs_list_head ji_tasks;\t\t   /* list of task structs */\n\tpbs_list_head ji_failed_node_list; /* list of mom nodes which fail to join job */\n\tpbs_list_head ji_node_list;\t   /* list of functional mom nodes with vnodes assigned to the job */\n\ttm_node_id ji_nodekill;\t\t   /* set to nodeid requesting job die */\n\tint ji_flags;\t\t\t   /* mom only flags */\n\tvoid *ji_setup;\t\t\t   /* save setup info */\n\n#ifdef WIN32\n\tHANDLE ji_hJob;\t\t\t\t    /* handle for job */\n\tstruct passwd *ji_user;\t\t\t    /* user info */\n#endif\t\t\t\t\t\t    /* WIN32 */\n\tint ji_stdout;\t\t\t\t    /* socket for stdout */\n\tint ji_stderr;\t\t\t\t    /* socket for stderr */\n\tint ji_ports[2];\t\t\t    /* ports for stdout/err */\n\tenum bg_hook_request ji_hook_running_bg_on; /* set when hook starts in the background*/\n\tint ji_msconnected;\t\t\t    /* 0 - not connected, 1 - connected */\n\tpbs_list_head ji_multinodejobs;\t\t    /* links to recovered multinode jobs */\n#else\t\t\t\t\t\t    /* END Mom ONLY -  start Server ONLY */\n\tstruct batch_request *ji_pmt_preq; /* outstanding preempt job request for deleting jobs */\n\tint ji_discarding;\t\t   /* discarding job */\n\tstruct batch_request *ji_prunreq;  /* outstanding runjob request */\n\tpbs_list_head ji_svrtask;\t   /* links to svr work_task list */\n\tstruct pbs_queue *ji_qhdr;\t   /* current queue header */\n\tstruct resc_resv *ji_myResv;\t   /* !=0 job belongs to a reservation, see also, attribute JOB_ATR_myResv */\n\n\tint ji_lastdest;\t     /* last destin tried by route */\n\tint ji_retryok;\t\t     /* ok to retry, some reject was temp */\n\tint ji_terminated;\t     /* job terminated by deljob batch req */\n\tint 
ji_deletehistory;\t     /* job history should not be saved */\n\tpbs_list_head ji_rejectdest; /* list of rejected destinations */\n\tstruct job *ji_parentaj;     /* subjob: parent Array Job */\n\tajinfo_t *ji_ajinfo;\t     /* ArrayJob: information about subjobs and its state counts */\n\tstruct jbdscrd *ji_discard;  /* see discard_job() */\n\tint ji_jdcd_waiting;\t     /* set if waiting on a mom for a response to discard job request */\n\tchar *ji_acctrec;\t     /* holder for accounting info */\n\tchar *ji_clterrmsg;\t     /* error message to return to client */\n\n\t/*\n\t * This variable is used to temporarily hold the script for a new job\n\t * in memory instead of immediately saving it to the database in the\n\t * req_jobscript function. The script is eventually saved to the\n\t * database along with the job structure as part of req_commit\n\t * under one single transaction. After this the memory is freed.\n\t */\n\tchar *ji_script;\n\n\t/*\n\t * This flag indicates whether queued entity limit attribute usage\n\t * is decremented when the job is run\n\t */\n\tint ji_etlimit_decr_queued;\n\n\tstruct preempt_ordering *preempt_order;\n\tint preempt_order_index;\n\tstruct work_task *ji_prov_startjob_task;\n\n#endif /* END SERVER ONLY */\n\n\t/*\n\t * fixed size internal data - maintained via \"quick save\"\n\t * some of the items are copies of attributes; if so, this\n\t * internal version takes precedence\n\t *\n\t * This area CANNOT contain any pointers!\n\t */\n#ifndef PBS_MOM\n\tchar qs_hash[DIGEST_LENGTH];\n#endif\n\tstruct jobfix {\n\t\tint ji_jsversion;\t\t      /* job structure version - JSVERSION */\n\t\tint ji_svrflags;\t\t      /* server flags */\n\t\ttime_t ji_stime;\t\t      /* time job started execution */\n\t\ttime_t ji_obittime;\t\t      /* time job has ended execution */\n\t\tchar ji_jobid[PBS_MAXSVRJOBID + 1];   /* job identifier */\n\t\tchar ji_fileprefix[PBS_JOBBASE + 1];  /* no longer used */\n\t\tchar ji_queue[PBS_MAXQUEUENAME + 1];  
/* name of current queue */\n\t\tchar ji_destin[PBS_MAXROUTEDEST + 1]; /* dest from qmove/route, MomS for execution */\n\n\t\tint ji_un_type;\t\t\t\t /* type of ji_un union */\n\t\tunion {\t\t\t\t\t /* depends on type of queue currently in */\n\t\t\tstruct {\t\t\t /* if in execution queue .. */\n\t\t\t\tpbs_net_t ji_momaddr;\t /* host addr of Server */\n\t\t\t\tunsigned int ji_momport; /* port # */\n\t\t\t\tint ji_exitstat;\t /* job exit status from MOM */\n\t\t\t} ji_exect;\n\t\t\tstruct {\n\t\t\t\ttime_t ji_quetime;  /* time entered queue */\n\t\t\t\ttime_t ji_rteretry; /* route retry time */\n\t\t\t} ji_routet;\n\t\t\tstruct {\n\t\t\t\tint ji_fromsock;\t  /* socket job coming over */\n\t\t\t\tpbs_net_t ji_fromaddr;\t  /* host job coming from   */\n\t\t\t\tunsigned int ji_scriptsz; /* script size */\n\t\t\t} ji_newt;\n\t\t\tstruct {\n\t\t\t\tpbs_net_t ji_svraddr; /* host addr of Server */\n\t\t\t\tint ji_exitstat;      /* job exit status from MOM */\n\t\t\t\tuid_t ji_exuid;\t      /* execution uid */\n\t\t\t\tgid_t ji_exgid;\t      /* execution gid */\n\t\t\t} ji_momt;\n\t\t} ji_un;\n\t} ji_qs;\n\t/*\n\t * Extended job save area\n\t *\n\t * This area CANNOT contain any pointers!\n\t */\n\tunion jobextend {\n\t\tchar fill[256]; /* fill to keep same size */\n\t\tstruct {\n\t\t\tchar ji_jid[8];\t /* extended job save data for ALPS */\n\t\t\tint ji_credtype; /* credential type */\n#ifdef PBS_MOM\n\t\t\ttm_host_id ji_nodeidx; /* my node id */\n\t\t\ttm_task_id ji_taskidx; /* generate task id's for job */\n\t\t\tint ji_stdout;\n\t\t\tint ji_stderr;\n#if MOM_ALPS\n\t\t\tlong ji_reservation;\n\t\t\t/* ALPS reservation identifier */\n\t\t\tunsigned long long ji_pagg;\n\t\t\t/* ALPS process aggregate ID */\n#endif /* MOM_ALPS */\n#endif /* PBS_MOM */\n\t\t} ji_ext;\n\t} ji_extended;\n\n\t/*\n\t * The following array holds the decoded format of the attributes.\n\t * Its presence allows rapid access to the attributes.\n\t */\n\n\tattribute ji_wattr[JOB_ATR_LAST]; /* decoded 
attributes  */\n\n\tshort newobj; /* newly created job? */\n};\n\ntypedef struct job job;\n\n#ifdef PBS_MOM\n/*\n **\tTasks are sessions belonging to a job, running on one of the\n **\tnodes assigned to the job.\n */\ntypedef struct pbs_task {\n\tjob *ti_job;\t\t  /* pointer to owning job */\n\tunsigned long ti_cput;\t  /* track cput by task */\n\tpbs_list_link ti_jobtask; /* links to tasks for this job */\n\tint *ti_tmfd;\t\t  /* DIS file descriptors to tasks */\n\tint ti_tmnum;\t\t  /* next avail entry in ti_tmfd */\n\tint ti_tmmax;\t\t  /* size of ti_tmfd */\n\tint ti_protover;\t  /* protocol version number */\n\tint ti_flags;\t\t  /* task internal flags */\n\n#ifdef WIN32\n\tHANDLE ti_hProc; /* keep proc handle */\n#endif\n\n\ttm_event_t ti_register; /* event if task registers */\n\tpbs_list_head ti_obits; /* list of obit events */\n\tpbs_list_head ti_info;\t/* list of named info */\n\tstruct taskfix {\n\t\tchar ti_parentjobid[PBS_MAXSVRJOBID + 1];\n\t\ttm_node_id ti_parentnode; /* parent vnode */\n\t\ttm_node_id ti_myvnode;\t  /* my vnode */\n\t\ttm_task_id ti_parenttask; /* parent task */\n\t\ttm_task_id ti_task;\t  /* task's taskid */\n\t\tint ti_status;\t\t  /* status of task */\n\t\tpid_t ti_sid;\t\t  /* session id */\n\t\tint ti_exitstat;\t  /* exit status */\n\t\tunion {\n\t\t\tint ti_hold[16]; /* reserved space */\n\t\t} ti_u;\n\t} ti_qs;\n} pbs_task;\n\n/*\n **\tA linked list of eventent structures is maintained for all events\n **\tfor which we are waiting for another MOM to report back.\n */\ntypedef struct eventent {\n\tint ee_command;\t       /* command event is for */\n\tint ee_fd;\t       /* TM stream */\n\tint ee_retry;\t       /* event message retry attempt number */\n\ttm_event_t ee_client;  /* client event number */\n\ttm_event_t ee_event;   /* MOM event number */\n\ttm_task_id ee_taskid;  /* which task id */\n\tchar **ee_argv;\t       /* save args for spawn */\n\tchar **ee_envp;\t       /* save env for spawn */\n\tpbs_list_link ee_next; /* 
link to next one */\n} eventent;\n\n/*\n **\tThe information needed for a task manager obit request\n **\tis indicated with OBIT_TYPE_TMEVENT.  The information needed\n **\tfor a batch request is indicated with OBIT_TYPE_BREVENT.\n */\n#define OBIT_TYPE_TMEVENT 0\n#define OBIT_TYPE_BREVENT 1\n\n/*\n **\tA task can have events which are triggered when it exits.\n **\tThese are tracked by obitent structures linked to the task.\n */\ntypedef struct obitent {\n\tint oe_type; /* what kind of obit */\n\tunion oe_u {\n\t\tstruct oe_tm {\n\t\t\tint oe_fd;\t      /* TM reply fd */\n\t\t\ttm_node_id oe_node;   /* where does notification go */\n\t\t\ttm_event_t oe_event;  /* event number */\n\t\t\ttm_task_id oe_taskid; /* which task id */\n\t\t} oe_tm;\n\t\tstruct batch_request *oe_preq;\n\t} oe_u;\n\tpbs_list_link oe_next; /* link to next one */\n} obitent;\n\n/*\n **\tA task can have a list of named infomation which it makes\n **\tavailable to other tasks in the job.\n */\ntypedef struct infoent {\n\tchar *ie_name;\t       /* published name */\n\tvoid *ie_info;\t       /* the glop */\n\tsize_t ie_len;\t       /* how much glop */\n\tpbs_list_link ie_next; /* link to next one */\n} infoent;\n\n#define TI_FLAGS_INIT 1\t   /* task has called tm_init */\n#define TI_FLAGS_CHKPT 2   /* task has checkpointed */\n#define TI_FLAGS_ORPHAN 4  /* MOM not parent of task */\n#define TI_FLAGS_SAVECKP 8 /* save value of CHKPT flag during checkpoint op */\n\n#define TI_STATE_EMBRYO 0\n#define TI_STATE_RUNNING 1\n#define TI_STATE_EXITED 2 /* ti_exitstat valid */\n#define TI_STATE_DEAD 3\n\n/*\n **      Here is the set of commands for InterMOM (IM) requests.\n */\n#define IM_ALL_OKAY 0\n#define IM_JOIN_JOB 1\n#define IM_KILL_JOB 2\n#define IM_SPAWN_TASK 3\n#define IM_GET_TASKS 4\n#define IM_SIGNAL_TASK 5\n#define IM_OBIT_TASK 6\n#define IM_POLL_JOB 7\n#define IM_GET_INFO 8\n#define IM_GET_RESC 9\n#define IM_ABORT_JOB 10\n#define IM_GET_TID 11 /* no longer used */\n#define IM_SUSPEND 
12\n#define IM_RESUME 13\n#define IM_CHECKPOINT 14\n#define IM_CHECKPOINT_ABORT 15\n#define IM_RESTART 16\n#define IM_DELETE_JOB 17\n#define IM_REQUEUE 18\n#define IM_DELETE_JOB_REPLY 19\n#define IM_SETUP_JOB 20\n#define IM_DELETE_JOB2 21 /* sent by sister mom to delete job early */\n#define IM_SEND_RESC 22\n#define IM_UPDATE_JOB 23\n#define IM_EXEC_PROLOGUE 24\n#define IM_CRED 25\n#define IM_PMIX 26\n#define IM_RECONNECT_TO_MS 27\n#define IM_JOIN_RECOV_JOB 28\n\n#define IM_ERROR 99\n#define IM_ERROR2 100\n\neventent *\nevent_alloc(job *pjob,\n\t    int command,\n\t    int fd,\n\t    hnodent *pnode,\n\t    tm_event_t event,\n\t    tm_task_id taskid);\n\npbs_task *momtask_create(job *pjob);\n\npbs_task *\ntask_find(job *pjob,\n\t  tm_task_id taskid);\n\n#endif /* MOM */\n\n/*\n * server flags (in ji_svrflags)\n */\n#define JOB_SVFLG_HERE 0x01 /* SERVER: job created here */\n/* MOM: set for Mother Superior */\n#define JOB_SVFLG_HASWAIT 0x02\t   /* job has timed task entry for wait time */\n#define JOB_SVFLG_HASRUN 0x04\t   /* job has been run before (being rerun) */\n#define JOB_SVFLG_HOTSTART 0x08\t   /* job was running, if hot init, restart */\n#define JOB_SVFLG_CHKPT 0x10\t   /* job has checkpoint file for restart */\n#define JOB_SVFLG_SCRIPT 0x20\t   /* job has a Script file */\n#define JOB_SVFLG_OVERLMT1 0x40\t   /* job over limit first time, MOM only */\n#define JOB_SVFLG_OVERLMT2 0x80\t   /* job over limit second time, MOM only */\n#define JOB_SVFLG_ChkptMig 0x100   /* job has migratable checkpoint */\n#define JOB_SVFLG_Suspend 0x200\t   /* job suspended (signal suspend) */\n#define JOB_SVFLG_StagedIn 0x400   /* job has files that have been staged in */\n#define JOB_SVFLG_HASHOLD 0x800\t   /* job has a hold request sent to MoM */\n#define JOB_SVFLG_HasNodes 0x1000  /* job has nodes allocated to it */\n#define JOB_SVFLG_RescAssn 0x2000  /* job resources accumulated in server/que */\n#define JOB_SVFLG_SPSwitch 0x2000  /* SP switch loaded for job, SP MOM only 
*/\n#define JOB_SVFLG_Actsuspd 0x4000  /* job suspend because workstation active */\n#define JOB_SVFLG_cpuperc 0x8000   /* cpupercent violation logged, MOM only */\n#define JOB_SVFLG_ArrayJob 0x10000 /* Job is an Array Job */\n#define JOB_SVFLG_SubJob 0x20000   /* Job is a subjob of an Array */\n#define JOB_SVFLG_StgoFal 0x40000  /* Stageout failed, del jobdir, MOM only */\n#define JOB_SVFLG_TERMJOB 0x80000  /* terminate in progress by TERM, MOM only */\n/* 0x100000 is UNUSED, previously called JOB_SVFLG_StgoDel for stageout succ */\n/* If you intend to use it, make sure jobs to be recovered do not have\n * 0x100000 bit set. Refer SPM229744\n */\n#define JOB_SVFLG_AdmSuspd 0x200000 /* Job is suspended for maintenance */\n\n#define MAIL_NONE (int) 'n'\n#define MAIL_ABORT (int) 'a'\n#define MAIL_BEGIN (int) 'b'\n#define MAIL_END (int) 'e'\n#define MAIL_OTHER (int) 'o'\n#define MAIL_STAGEIN (int) 's'\n#define MAIL_CONFIRM (int) 'c' /*scheduler requested reservation be confirmed*/\n#define MAIL_SUBJOB (int) 'j'\n#define MAIL_NORMAL 0\n#define MAIL_FORCE 1\n\n#define JOB_FILE_COPY \".JC\"\t /* tmp copy while updating */\n#define JOB_FILE_SUFFIX \".JB\"\t /* job control file */\n#define JOB_CRED_SUFFIX \".CR\"\t /* job credential file */\n#define JOB_EXPORT_SUFFIX \".XP\"\t /* job export security context */\n#define JOB_SCRIPT_SUFFIX \".SC\"\t /* job script file  */\n#define JOB_STDOUT_SUFFIX \".OU\"\t /* job standard out */\n#define JOB_STDERR_SUFFIX \".ER\"\t /* job standard error */\n#define JOB_CKPT_SUFFIX \".CK\"\t /* job checkpoint file */\n#define JOB_TASKDIR_SUFFIX \".TK\" /* job task directory */\n#define JOB_BAD_SUFFIX \".BD\"\t /* save bad job file */\n#define JOB_DEL_SUFFIX \".RM\"\t /* file pending to be removed */\n\n/*\n * Job states are defined by POSIX as:\n */\n#define JOB_STATE_TRANSIT 0\n#define JOB_STATE_QUEUED 1\n#define JOB_STATE_HELD 2\n#define JOB_STATE_WAITING 3\n#define JOB_STATE_RUNNING 4\n#define JOB_STATE_EXITING 5\n#define JOB_STATE_EXPIRED 
6\n#define JOB_STATE_BEGUN 7\n/* Job states defined for history jobs and OGF-BES model */\n#define JOB_STATE_MOVED 8\n#define JOB_STATE_FINISHED 9\n\n#define JOB_STATE_LTR_UNKNOWN '0'\n#define JOB_STATE_LTR_BEGUN 'B'\n#define JOB_STATE_LTR_EXITING 'E'\n#define JOB_STATE_LTR_FINISHED 'F'\n#define JOB_STATE_LTR_HELD 'H'\n#define JOB_STATE_LTR_MOVED 'M'\n#define JOB_STATE_LTR_QUEUED 'Q'\n#define JOB_STATE_LTR_RUNNING 'R'\n#define JOB_STATE_LTR_SUSPENDED 'S'\n#define JOB_STATE_LTR_TRANSIT 'T'\n#define JOB_STATE_LTR_USUSPENDED 'U'\n#define JOB_STATE_LTR_WAITING 'W'\n#define JOB_STATE_LTR_EXPIRED 'X'\n\n/*\n * job sub-states are defined by PBS (more detailed) as:\n */\n#define JOB_SUBSTATE_UNKNOWN -1\n#define JOB_SUBSTATE_TRANSIN 00\t /* Transit in, wait for commit, commit not yet called */\n#define JOB_SUBSTATE_TRANSICM 01 /* Transit in, job is being commited */\n#define JOB_SUBSTATE_TRNOUT 02\t /* transiting job outbound */\n#define JOB_SUBSTATE_TRNOUTCM 03 /* transiting outbound, rdy to commit */\n\n#define JOB_SUBSTATE_QUEUED 10\t   /* job queued and ready for selection */\n#define JOB_SUBSTATE_PRESTAGEIN 11 /* job queued, has files to stage in */\n#define JOB_SUBSTATE_SYNCRES 13\t   /* job waiting on sync start ready */\n#define JOB_SUBSTATE_STAGEIN 14\t   /* job staging in files then wait */\n#define JOB_SUBSTATE_STAGEGO 15\t   /* job staging in files and then run */\n#define JOB_SUBSTATE_STAGECMP 16   /* job stage in complete */\n\n#define JOB_SUBSTATE_HELD 20\t /* job held - user or operator */\n#define JOB_SUBSTATE_SYNCHOLD 21 /* job held - waiting on sync regist */\n#define JOB_SUBSTATE_DEPNHOLD 22 /* job held - waiting on dependency */\n\n#define JOB_SUBSTATE_WAITING 30\t  /* job waiting on execution time */\n#define JOB_SUBSTATE_STAGEFAIL 37 /* job held - file stage in failed */\n\n#define JOB_SUBSTATE_PRERUN 41\t/* job set to MOM to run */\n#define JOB_SUBSTATE_RUNNING 42 /* job running */\n#define JOB_SUBSTATE_SUSPEND 43 /* job suspended by client 
*/\n#define JOB_SUBSTATE_SCHSUSP 45 /* job suspended by scheduler */\n\n#define JOB_SUBSTATE_EXITING 50\t  /* Start of job exiting processing */\n#define JOB_SUBSTATE_STAGEOUT 51  /* job staging out (other) files   */\n#define JOB_SUBSTATE_STAGEDEL 52  /* job deleting staged out files  */\n#define JOB_SUBSTATE_EXITED 53\t  /* job exit processing completed   */\n#define JOB_SUBSTATE_ABORT 54\t  /* job is being aborted by server  */\n#define JOB_SUBSTATE_KILLSIS 56\t  /* (MOM) job kill IM to sisters    */\n#define JOB_SUBSTATE_RUNEPILOG 57 /* (MOM) job epilogue running      */\n#define JOB_SUBSTATE_OBIT 58\t  /* (MOM) job obit notice sent\t   */\n#define JOB_SUBSTATE_TERM 59\t  /* Job is in site termination stage */\n#define JOB_SUBSTATE_DELJOB 153\t  /* (MOM) Job del_job_wait to sisters  */\n\n#define JOB_SUBSTATE_RERUN 60\t/* job is rerun, recover output stage */\n#define JOB_SUBSTATE_RERUN1 61\t/* job is rerun, stageout phase */\n#define JOB_SUBSTATE_RERUN2 62\t/* job is rerun, delete files stage */\n#define JOB_SUBSTATE_RERUN3 63\t/* job is rerun, mom delete job */\n#define JOB_SUBSTATE_EXPIRED 69 /* subjob (of an array) is gone */\n\n#define JOB_SUBSTATE_BEGUN 70\t\t /* Array job has begun */\n#define JOB_SUBSTATE_PROVISION 71\t /* job is waiting for provisioning to complete */\n#define JOB_SUBSTATE_WAITING_JOIN_JOB 72 /* job waiting on IM_JOIN_JOB completion */\n\n/*\n * Job sub-states defined in PBS to support history jobs and OGF-BES model:\n */\n#define JOB_SUBSTATE_TERMINATED 91\n#define JOB_SUBSTATE_FINISHED 92\n#define JOB_SUBSTATE_FAILED 93\n#define JOB_SUBSTATE_MOVED 94\n\n/* discriminator for ji_un union type */\n\n#define JOB_UNION_TYPE_NEW 0\n#define JOB_UNION_TYPE_EXEC 1\n#define JOB_UNION_TYPE_ROUTE 2\n#define JOB_UNION_TYPE_MOM 3\n\n/* job hold (internal) types */\n\n#define HOLD_n 0\n#define HOLD_u 1\n#define HOLD_o 2\n#define HOLD_s 4\n#define HOLD_bad_password 8\n\n/* Array Job related Defines */\n\n/* See is_job_array() in array_func.c 
*/\n#define IS_ARRAY_NO 0\t    /* Not an array job nor subjob */\n#define IS_ARRAY_ArrayJob 1 /* Is an Array Job    */\n#define IS_ARRAY_Single 2   /* A single Sub Job   */\n#define IS_ARRAY_Range 3    /* A range of Subjobs */\n#define PBS_FILE_ARRAY_INDEX_TAG \"^array_index^\"\n\n/* Special Job Exit Values,  Set by the job starter (child of MOM)   */\n/* see server/req_jobobit() & mom/start_exec.c\t\t\t     */\n\n#define JOB_EXEC_OK 0\t\t       /* job exec successful */\n#define JOB_EXEC_FAIL1 -1\t       /* Job exec failed, before files, no retry */\n#define JOB_EXEC_FAIL2 -2\t       /* Job exec failed, after files, no retry  */\n#define JOB_EXEC_RETRY -3\t       /* Job execution failed, do retry    */\n#define JOB_EXEC_INITABT -4\t       /* Job aborted on MOM initialization */\n#define JOB_EXEC_INITRST -5\t       /* Job aborted on MOM init, chkpt, no migrate */\n#define JOB_EXEC_INITRMG -6\t       /* Job aborted on MOM init, chkpt, ok migrate */\n#define JOB_EXEC_BADRESRT -7\t       /* Job restart failed */\n#define JOB_EXEC_FAILUID -10\t       /* invalid uid/gid for job */\n#define JOB_EXEC_RERUN -11\t       /* Job rerun */\n#define JOB_EXEC_CHKP -12\t       /* Job was checkpointed and killed */\n#define JOB_EXEC_FAIL_PASSWORD -13     /* Job failed due to a bad password */\n#define JOB_EXEC_RERUN_SIS_FAIL -14    /* Job rerun */\n#define JOB_EXEC_QUERST -15\t       /* requeue job for restart from checkpoint */\n#define JOB_EXEC_FAILHOOK_RERUN -16    /* job exec failed due to a hook rejection, requeue job for later retry (usually returned by the \"early\" hooks) */\n#define JOB_EXEC_FAILHOOK_DELETE -17   /* job exec failed due to a hook rejection, delete the job at end */\n#define JOB_EXEC_HOOK_RERUN -18\t       /* a hook requested for job to be requeued */\n#define JOB_EXEC_HOOK_DELETE -19       /* a hook requested for job to be deleted */\n#define JOB_EXEC_RERUN_MS_FAIL -20     /* Mother superior connection failed */\n#define JOB_EXEC_FAIL_SECURITY -21     /* 
Security breach in PBS directory */\n#define JOB_EXEC_HOOKERROR -22\t       /* job exec failed due to\n\t\t\t\t     * unexpected exception or\n\t\t\t\t     * hook execution timed out\n\t\t\t\t     */\n#define JOB_EXEC_FAIL_KRB5 -23\t       /* Error no kerberos credentials supplied */\n#define JOB_EXEC_UPDATE_ALPS_RESV_ID 1 /* Update ALPS reservation ID to parent mom as soon\n\t\t\t\t\t* as it is available.\n\t\t\t\t\t* This is neither a success nor a failure exit code,\n\t\t\t\t\t* so we are using a positive value\n\t\t\t\t\t*/\n#define JOB_EXEC_KILL_NCPUS_BURST -24  /* job exec failed due to exceeding ncpus (burst) */\n#define JOB_EXEC_KILL_NCPUS_SUM -25    /* job exec failed due to exceeding ncpus (sum) */\n#define JOB_EXEC_KILL_VMEM -26\t       /* job exec failed due to exceeding vmem */\n#define JOB_EXEC_KILL_MEM -27\t       /* job exec failed due to exceeding mem */\n#define JOB_EXEC_KILL_CPUT -28\t       /* job exec failed due to exceeding cput */\n#define JOB_EXEC_KILL_WALLTIME -29     /* job exec failed due to exceeding walltime */\n#define JOB_EXEC_JOINJOB -30\t       /* Job exec failed due to join job error */\n\n/*\n * Fake \"random\" number added onto the end of the staging\n * and execution directory when sandbox=private\n * used in jobdirname()\n */\n#define FAKE_RANDOM \"x8z\"\n\n/* The default project assigned to jobs when project attribute is unset */\n#define PBS_DEFAULT_PROJECT \"_pbs_project_default\"\n\nextern void add_dest(job *);\nextern int depend_on_que(attribute *, void *, int);\nextern int depend_on_exec(job *);\nextern int depend_runone_remove_dependency(job *);\nextern int depend_runone_hold_all(job *);\nextern int depend_runone_release_all(job *);\nextern int depend_on_term(job *);\nextern struct depend *find_depend(int type, attribute *pattr);\nextern struct depend_job *find_dependjob(struct depend *pdep, char *name);\nextern int send_depend_req(job *pjob, struct depend_job *pparent, int type, int op, int schedhint, void 
(*postfunc)(struct work_task *));\nextern void post_runone(struct work_task *pwt);\nextern job *find_job(char *);\nextern char *get_variable(job *, char *);\nextern void check_block(job *, char *);\nextern char *lookup_variable(void *, int, char *);\nextern void issue_track(job *);\nextern void issue_delete(job *);\nextern int job_abt(job *, char *);\nextern int job_delete_attr(job *, int);\nextern job *job_alloc(void);\nextern void job_free(job *);\nextern int modify_job_attr(job *, svrattrl *, int, int *);\nextern char *prefix_std_file(job *, int);\nextern void cat_default_std(job *, int, char *, char **);\nextern int set_objexid(void *, int, attribute *);\nextern bool update_deljob_rply(struct batch_request *, char *, int);\n#if 0\nextern int   site_check_user_map(job *, char *);\n#endif\nextern int site_check_user_map(void *, int, char *);\nextern int site_allow_u(char *user, char *host);\nextern void svr_dequejob(job *);\nextern int svr_enquejob(job *, char *);\nextern void svr_evaljobstate(job *, char *, int *, int);\nextern int svr_setjobstate(job *, char, int);\nextern int state_char2int(char);\nextern char state_int2char(int);\nextern int uniq_nameANDfile(char *, char *, char *);\nextern long determine_accruetype(job *);\nextern int update_eligible_time(long, job *);\n\n#define TOLERATE_NODE_FAILURES_ALL \"all\"\n#define TOLERATE_NODE_FAILURES_JOB_START \"job_start\"\n#define TOLERATE_NODE_FAILURES_NONE \"none\"\nextern int do_tolerate_node_failures(job *);\nint check_job_state(const job *pjob, char state);\nint check_job_substate(const job *pjob, int substate);\nchar get_job_state(const job *pjob);\nint get_job_state_num(const job *pjob);\nlong get_job_substate(const job *pjob);\nchar *get_jattr_str(const job *pjob, int attr_idx);\nstruct array_strings *get_jattr_arst(const job *pjob, int attr_idx);\npbs_list_head get_jattr_list(const job *pjob, int attr_idx);\nlong get_jattr_long(const job *pjob, int attr_idx);\nlong long get_jattr_ll(const job *pjob, 
int attr_idx);\nsvrattrl *get_jattr_usr_encoded(const job *pjob, int attr_idx);\nsvrattrl *get_jattr_priv_encoded(const job *pjob, int attr_idx);\nvoid set_job_state(job *pjob, char val);\nvoid set_job_substate(job *pjob, long val);\nint set_jattr_str_slim(job *pjob, int attr_idx, char *val, char *rscn);\nint set_jattr_l_slim(job *pjob, int attr_idx, long val, enum batch_op op);\nint set_jattr_ll_slim(job *pjob, int attr_idx, long long val, enum batch_op op);\nint set_jattr_b_slim(job *pjob, int attr_idx, long val, enum batch_op op);\nint set_jattr_c_slim(job *pjob, int attr_idx, char val, enum batch_op op);\nint set_jattr_generic(job *pjob, int attr_idx, char *val, char *rscn, enum batch_op op);\nint is_jattr_set(const job *pjob, int attr_idx);\nvoid free_jattr(job *pjob, int attr_idx);\nvoid mark_jattr_not_set(job *pjob, int attr_idx);\nvoid mark_jattr_set(job *pjob, int attr_idx);\nattribute *get_jattr(const job *pjob, int attr_idx);\nvoid clear_jattr(job *pjob, int attr_idx);\n\n/*\n *\tThe filesystem related recovery/save routines are renamed\n *\twith the suffix \"_fs\", and the database versions of them\n *\tare suffixed \"_db\". This distinguishes between the two\n *\tversions. The \"_fs\" versions will continue to be used by\n *\tthe migration routine \"svr_migrate_data\" and by \"mom\". 
The rest of\n *\tthe server code will typically use only the \"_db\" routines.\n *\tSince mom uses only the \"_fs\" versions, the generic names are\n *\tmapped to the \"_fs\" versions, so that the mom code\n *\tremains unchanged and continues to use the \"_fs\" versions.\n */\n#ifdef PBS_MOM\n\nextern job *job_recov_fs(char *);\nextern int job_save_fs(job *);\n\n#define job_save job_save_fs\n#define job_recov job_recov_fs\n\n#else\n\nextern job *job_recov_db(char *, job *pjob);\nextern int job_save_db(job *);\n\n#define job_save job_save_db\n#define job_recov job_recov_db\n\nextern char *get_job_credid(char *);\n#endif\n\n#ifdef _BATCH_REQUEST_H\nextern job *chk_job_request(char *, struct batch_request *, int *, int *);\nextern int net_move(job *, struct batch_request *);\nextern int svr_chk_owner(struct batch_request *, job *);\nextern int svr_movejob(job *, char *, struct batch_request *);\nextern struct batch_request *cpy_stage(struct batch_request *, job *, enum job_atr, int);\n\n#ifdef _RESERVATION_H\nextern int svr_chk_ownerResv(struct batch_request *, resc_resv *);\n#endif /* _RESERVATION_H */\n#endif /* _BATCH_REQUEST_H */\n\n#ifdef _QUEUE_H\nextern int svr_chkque(job *, pbs_queue *, char *, char *, int mtype);\nextern int default_router(job *, pbs_queue *, long);\nextern int site_alt_router(job *, pbs_queue *, long);\nextern int site_acl_check(job *, pbs_queue *);\n#endif /* _QUEUE_H */\n\n#ifdef _WORK_TASK_H\nextern int issue_signal(job *, char *, void (*)(struct work_task *), void *);\nextern int delayed_issue_signal(job *pjob, char *signame, void (*func)(struct work_task *), void *extra, int delay);\nextern void on_job_exit(struct work_task *);\n#endif /* _WORK_TASK_H */\n\n#ifdef _PBS_IFL_H\nextern int update_resources_list(job *, char *, int, char *, enum batch_op op, int, int);\n#endif\n\nextern int Mystart_end_dur_wall(void *, int);\nextern int get_wall(job *);\nextern int get_softwall(job *);\nextern int get_used_wall(job *);\nextern int 
get_used_cput(job *);\nextern int get_cput(job *);\nextern void remove_deleted_resvs(void);\nextern void degrade_corrupted_confirmed_resvs(void);\nextern int pbsd_init_job(job *pjob, int type);\n\nextern void del_job_related_file(job *pjob, char *fsuffix);\n#ifdef PBS_MOM\nextern void del_job_dirs(job *pjob, char *taskdir);\nextern void del_chkpt_files(job *pjob);\n#endif\n\nextern void get_jobowner(char *, char *);\nextern struct batch_request *cpy_stage(struct batch_request *, job *, enum job_atr, int);\nextern struct batch_request *cpy_stdfile(struct batch_request *, job *, enum job_atr);\nextern int has_stage(job *);\n\nvoid svr_evalsetjobstate(job *jobp);\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _PBS_JOB_H */\n"
  },
  {
    "path": "src/include/libauth.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n#ifndef _LIBAUTH_H\n#define _LIBAUTH_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include \"log.h\"\n#include \"portability.h\"\n\n/* Max length of auth method name */\n#define MAXAUTHNAME 100\n\n/* Type of roles */\nenum AUTH_ROLE {\n\t/* Unknown role, mostly used as initial value */\n\tAUTH_ROLE_UNKNOWN = 0,\n\t/* Client role, aka who is initiating authentication */\n\tAUTH_CLIENT,\n\t/* Server role, aka who is authenticating incoming user/connection */\n\tAUTH_SERVER,\n\t/* qsub side, when authenticating an interactive connection (i.e. qsub -I) from an execution host */\n\tAUTH_INTERACTIVE,\n\t/* last role, mostly used while error checking for role value */\n\tAUTH_ROLE_LAST\n};\n\n/* Type of connections */\nenum AUTH_CONN_TYPE {\n\t/* user-oriented connection (aka like PBS client is connecting to PBS Server) */\n\tAUTH_USER_CONN = 0,\n\t/* service-oriented connection (aka like PBS Mom is connecting to PBS Server via PBS Comm) */\n\tAUTH_SERVICE_CONN\n};\n\ntypedef struct pbs_auth_config {\n\t/* Path to PBS_HOME directory (aka PBS_HOME in pbs.conf). This should be a null-terminated string. */\n\tchar *pbs_home_path;\n\n\t/* Path to PBS_EXEC directory (aka PBS_EXEC in pbs.conf). This should be a null-terminated string. */\n\tchar *pbs_exec_path;\n\n\t/* Name of authentication method (aka PBS_AUTH_METHOD in pbs.conf). 
This should be a null-terminated string. */\n\tchar *auth_method;\n\n\t/* Name of encryption method (aka PBS_ENCRYPT_METHOD in pbs.conf). This should be a null-terminated string. */\n\tchar *encrypt_method;\n\n\t/*\n\t * Function pointer to the logging method with the same signature as log_event from Liblog.\n\t * With this, the user of the authentication library can redirect logs from the authentication\n\t * library into the respective log files, or to stderr if there are no log files.\n\t * If func is set to NULL then logs will be written to stderr (if available, else no logging at all).\n\t */\n\tvoid (*logfunc)(int type, int objclass, int severity, const char *objname, const char *text);\n} pbs_auth_config_t;\n\n/** @brief\n *\tpbs_auth_set_config - Set auth config\n *\n * @param[in] config - auth config structure\n *\n * @return void\n *\n */\nextern DLLEXPORT void pbs_auth_set_config(const pbs_auth_config_t *config);\n\n/** @brief\n *\tpbs_auth_create_ctx - allocates auth context structure\n *\n * @param[in] ctx - pointer in which the auth context is to be allocated\n * @param[in] mode - AUTH_SERVER or AUTH_CLIENT\n * @param[in] conn_type - AUTH_USER_CONN or AUTH_SERVICE_CONN\n * @param[in] hostname - hostname of the other authenticating party in case of AUTH_CLIENT, else not used\n *\n * @return\tint\n * @retval\t0 - success\n * @retval\t1 - error\n */\nextern DLLEXPORT int pbs_auth_create_ctx(void **ctx, int mode, int conn_type, const char *hostname);\n\n/** @brief\n *\tpbs_auth_destroy_ctx - destroy given auth context structure\n *\n * @param[in] ctx - pointer to auth context\n *\n * @return void\n */\nextern DLLEXPORT void pbs_auth_destroy_ctx(void *ctx);\n\n/** @brief\n *\tpbs_auth_get_userinfo - get user, host and realm from authentication context\n *\n * @param[in] ctx - pointer to auth context\n * @param[out] user - username associated with ctx\n * @param[out] host - hostname/realm associated with ctx\n * @param[out] realm - realm associated with ctx\n *\n * @return\tint\n * 
@retval\t0 on success\n * @retval\t1 on error\n */\nextern DLLEXPORT int pbs_auth_get_userinfo(void *ctx, char **user, char **host, char **realm);\n\n/** @brief\n *\tpbs_auth_process_handshake_data - process incoming auth handshake data or start auth handshake if no incoming data\n *\n * @param[in] ctx - pointer to auth context\n * @param[in] data_in - received auth token data (if any, else NULL)\n * @param[in] len_in - length of received auth token data (if any, else 0)\n * @param[out] data_out - auth token data to send (if any, else NULL)\n * @param[out] len_out - length of auth token data to send (if any, else 0)\n * @param[out] is_handshake_done - indicates whether handshake is done (1) or not (0)\n *\n * @return\tint\n * @retval\t0 on success\n * @retval\t!0 on error\n */\nextern DLLEXPORT int pbs_auth_process_handshake_data(void *ctx, void *data_in, size_t len_in, void **data_out, size_t *len_out, int *is_handshake_done);\n\n/** @brief\n *\tpbs_auth_encrypt_data - encrypt data based on given auth context.\n *\n * @param[in] ctx - pointer to auth context\n * @param[in] data_in - clear text data\n * @param[in] len_in - length of clear text data\n * @param[out] data_out - encrypted data\n * @param[out] len_out - length of encrypted data\n *\n * @return\tint\n * @retval\t0 on success\n * @retval\t1 on error\n */\nextern DLLEXPORT int pbs_auth_encrypt_data(void *ctx, void *data_in, size_t len_in, void **data_out, size_t *len_out);\n\n/** @brief\n *\tpbs_auth_decrypt_data - decrypt data based on given auth context.\n *\n * @param[in] ctx - pointer to auth context\n * @param[in] data_in - encrypted data\n * @param[in] len_in - length of encrypted data\n * @param[out] data_out - clear text data\n * @param[out] len_out - length of clear text data\n *\n * @return\tint\n * @retval\t0 on success\n * @retval\t1 on error\n */\nextern DLLEXPORT int pbs_auth_decrypt_data(void *ctx, void *data_in, size_t len_in, void **data_out, size_t *len_out);\n\n#ifdef 
__cplusplus\n}\n#endif\n#endif /* _LIBAUTH_H */\n"
  },
  {
    "path": "src/include/libpbs.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _LIBPBS_H\n#define _LIBPBS_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include <stdlib.h>\n#include <string.h>\n#include <memory.h>\n#include <limits.h>\n#include \"pbs_ifl.h\"\n#include \"list_link.h\"\n#include \"pbs_error.h\"\n#include \"pbs_internal.h\"\n#include \"pbs_client_thread.h\"\n#include \"net_connect.h\"\n#include \"dis.h\"\n\n#define VALUE(str) #str\n#define TOSTR(str) VALUE(str)\n\n/* Protocol types when connecting to another server (eg mom) */\n#define PROT_INVALID -1\n#define PROT_TCP 0 /* For TCP based connection */\n#define PROT_TPP 1 /* For TPP based connection */\n\n#define PBS_BATCH_PROT_TYPE 2\n#define PBS_BATCH_PROT_VER_OLD 1\n#define PBS_BATCH_PROT_VER 2\n#define SCRIPT_CHUNK_Z (65536)\n#ifndef TRUE\n#define TRUE 1\n#define FALSE 0\n#endif\n#ifndef EOF\n#define EOF (-1)\n#endif\n\n/* enums for standard job files */\nenum job_file {\n\tJScript,\n\tStdIn,\n\tStdOut,\n\tStdErr,\n\tChkpt\n};\n\n/*\n * This variable has been moved to Thread local storage\n * The define points to a function pointer which locates\n * the actual variable from the TLS of the calling thread\n */\n#ifndef __PBS_CURRENT_USER\n#define __PBS_CURRENT_USER\nextern char *__pbs_current_user_location(void);\n#define pbs_current_user (__pbs_current_user_location())\n#endif\n\n#ifndef __PBS_TCP_TIMEOUT\n#define __PBS_TCP_TIMEOUT\nextern 
time_t *__pbs_tcptimeout_location(void);\n#define pbs_tcp_timeout (*__pbs_tcptimeout_location())\n#endif\n\n#ifndef __PBS_TCP_INTERRUPT\n#define __PBS_TCP_INTERRUPT\nextern int *__pbs_tcpinterrupt_location(void);\n#define pbs_tcp_interrupt (*__pbs_tcpinterrupt_location())\n#endif\n\n#ifndef __PBS_TCP_ERRNO\n#define __PBS_TCP_ERRNO\nextern int *__pbs_tcperrno_location(void);\n#define pbs_tcp_errno (*__pbs_tcperrno_location())\n#endif\n\nextern char pbs_current_group[];\n\n#define NCONNECTS 50\t\t /* max connections per client */\n#define PBS_MAX_CONNECTIONS 5000 /* Max connections in the connections array */\n#define PBS_LOCAL_CONNECTION INT_MAX\n\ntypedef struct pbs_conn {\n\tint ch_errno;\t\t  /* last error on this connection */\n\tchar *ch_errtxt;\t  /* pointer to last server error text\t*/\n\tpthread_mutex_t ch_mutex; /* serialize connection between threads */\n\tpbs_tcp_chan_t *ch_chan;  /* pointer to tcp chan structure for this connection */\n} pbs_conn_t;\n\nint destroy_connection(int);\nint set_conn_errtxt(int, const char *);\nchar *get_conn_errtxt(int);\nint set_conn_errno(int, int);\nint get_conn_errno(int);\npbs_tcp_chan_t *get_conn_chan(int);\nint set_conn_chan(int, pbs_tcp_chan_t *);\npthread_mutex_t *get_conn_mutex(int);\n\n#define SVR_CONN_STATE_DOWN 0\n#define SVR_CONN_STATE_UP 1\n\n/* max number of preempt orderings */\n#define PREEMPT_ORDER_MAX 20\n\n/* PBS Batch Reply Structure */\n\n/* reply to Select Job Request */\nstruct brp_select {\n\tstruct brp_select *brp_next;\n\tchar brp_jobid[PBS_MAXSVRJOBID + 1];\n};\n\n/* reply to Status Job/Queue/Server Request */\nstruct brp_status {\n\tpbs_list_link brp_stlink;\n\tint brp_objtype;\n\tchar brp_objname[(PBS_MAXSVRJOBID > PBS_MAXDEST ? 
PBS_MAXSVRJOBID : PBS_MAXDEST) + 1];\n\tpbs_list_head brp_attr; /* head of svrattrlist */\n};\n\n/* reply to Resource Query Request */\nstruct brp_rescq {\n\tint brq_number; /* number of items in following arrays */\n\tint *brq_avail;\n\tint *brq_alloc;\n\tint *brq_resvd;\n\tint *brq_down;\n};\n\nstruct rq_preempt {\n\tint count;\n\tpreempt_job_info *ppj_list;\n};\n\ntypedef struct rq_preempt brp_preempt_jobs;\n\n#define BATCH_REPLY_CHOICE_NULL 1\t  /* no reply choice, just code */\n#define BATCH_REPLY_CHOICE_Queue 2\t  /* Job ID, see brp_jid */\n#define BATCH_REPLY_CHOICE_RdytoCom 3\t  /* select, see brp_jid */\n#define BATCH_REPLY_CHOICE_Commit 4\t  /* commit, see brp_jid */\n#define BATCH_REPLY_CHOICE_Select 5\t  /* select, see brp_select */\n#define BATCH_REPLY_CHOICE_Status 6\t  /* status, see brp_status */\n#define BATCH_REPLY_CHOICE_Text 7\t  /* text, see brp_txt */\n#define BATCH_REPLY_CHOICE_Locate 8\t  /* locate, see brp_locate */\n#define BATCH_REPLY_CHOICE_RescQuery 9\t  /* Resource Query */\n#define BATCH_REPLY_CHOICE_PreemptJobs 10 /* Preempt Job */\n#define BATCH_REPLY_CHOICE_Delete 11\t  /* Delete Job status */\n\n/*\n * the following is the basic Batch Reply structure\n */\nstruct batch_reply {\n\tint brp_code;\n\tint brp_auxcode;\n\tint brp_choice; /* the union discriminator */\n\tint brp_is_part;\n\tint brp_count;\n\tint brp_type;\n\tstruct batch_status *last;\n\tunion {\n\t\tchar brp_jid[PBS_MAXSVRJOBID + 1];\n\t\tstruct brp_select *brp_select;\t/* select replies */\n\t\tpbs_list_head brp_status;\t/* status (svr) replies */\n\t\tstruct batch_status *brp_statc; /* status (cmd) replies */\n\t\tstruct {\n\t\t\tvoid *undeleted_job_idx;\t\t  /* tracking undeleted jobs */\n\t\t\tstruct batch_deljob_status *brp_delstatc; /* list of failed jobs with errcode */\n\t\t} brp_deletejoblist;\n\t\tstruct {\n\t\t\tint brp_txtlen;\n\t\t\tchar *brp_str;\n\t\t} brp_txt; /* text and credential reply */\n\t\tchar brp_locate[PBS_MAXDEST + 1];\n\t\tstruct brp_rescq 
brp_rescq;\t   /* query resource reply */\n\t\tbrp_preempt_jobs brp_preempt_jobs; /* preempt jobs reply */\n\t} brp_un;\n};\n\n/*\n * The Batch Request ID numbers\n */\n#define PBS_BATCH_Connect 0\n#define PBS_BATCH_QueueJob 1\n#define PBS_BATCH_PostQueueJob 2\n/* Unused -- #define PBS_BATCH_JobCred 2 */\n#define PBS_BATCH_jobscript 3\n#define PBS_BATCH_RdytoCommit 4\n#define PBS_BATCH_Commit 5\n#define PBS_BATCH_DeleteJob 6\n#define PBS_BATCH_HoldJob 7\n#define PBS_BATCH_LocateJob 8\n#define PBS_BATCH_Manager 9\n#define PBS_BATCH_MessJob 10\n#define PBS_BATCH_ModifyJob 11\n#define PBS_BATCH_MoveJob 12\n#define PBS_BATCH_ReleaseJob 13\n#define PBS_BATCH_Rerun 14\n#define PBS_BATCH_RunJob 15\n#define PBS_BATCH_SelectJobs 16\n#define PBS_BATCH_Shutdown 17\n#define PBS_BATCH_SignalJob 18\n#define PBS_BATCH_StatusJob 19\n#define PBS_BATCH_StatusQue 20\n#define PBS_BATCH_StatusSvr 21\n#define PBS_BATCH_TrackJob 22\n#define PBS_BATCH_AsyrunJob 23\n#define PBS_BATCH_Rescq 24\n#define PBS_BATCH_ReserveResc 25\n#define PBS_BATCH_ReleaseResc 26\n#define PBS_BATCH_FailOver 27\n#define PBS_BATCH_JobObit 28\n#define PBS_BATCH_StageIn 48\n/* Unused -- #define PBS_BATCH_AuthenResvPort 49 */\n#define PBS_BATCH_OrderJob 50\n#define PBS_BATCH_SelStat 51\n#define PBS_BATCH_RegistDep 52\n#define PBS_BATCH_CopyFiles 54\n#define PBS_BATCH_DelFiles 55\n/* Unused -- #define PBS_BATCH_JobObit 56 */\n#define PBS_BATCH_MvJobFile 57\n#define PBS_BATCH_StatusNode 58\n#define PBS_BATCH_Disconnect 59\n/* Unused -- #define PBS_BATCH_CopyFiles_Cred 60 */\n/* Unused -- #define PBS_BATCH_DelFiles_Cred 61 */\n#define PBS_BATCH_JobCred 62\n#define PBS_BATCH_CopyFiles_Cred 63\n#define PBS_BATCH_DelFiles_Cred 64\n/* Unused -- #define PBS_BATCH_GSS_Context 65 */\n#define PBS_BATCH_SubmitResv 70\n#define PBS_BATCH_StatusResv 71\n#define PBS_BATCH_DeleteResv 72\n#define PBS_BATCH_BeginResv 76\n#define PBS_BATCH_UserCred 73\n/* Unused -- #define PBS_BATCH_UserMigrate\t\t74 */\n#define PBS_BATCH_ConfirmResv 
75\n#define PBS_BATCH_DefSchReply 80\n#define PBS_BATCH_StatusSched 81\n#define PBS_BATCH_StatusRsc 82\n#define PBS_BATCH_StatusHook 83\n#define PBS_BATCH_PySpawn 84\n#define PBS_BATCH_CopyHookFile 85\n#define PBS_BATCH_DelHookFile 86\n/* Unused -- #define PBS_BATCH_MomRestart 87 */\n/* Unused -- #define PBS_BATCH_AuthExternal 88 */\n#define PBS_BATCH_HookPeriodic 89\n#define PBS_BATCH_RelnodesJob 90\n#define PBS_BATCH_ModifyResv 91\n#define PBS_BATCH_ResvOccurEnd 92\n#define PBS_BATCH_PreemptJobs 93\n#define PBS_BATCH_Cred 94\n#define PBS_BATCH_Authenticate 95\n#define PBS_BATCH_ModifyJob_Async 96\n#define PBS_BATCH_AsyrunJob_ack 97\n#define PBS_BATCH_RegisterSched 98\n#define PBS_BATCH_ModifyVnode 99\n#define PBS_BATCH_DeleteJobList 100\n\n#define PBS_BATCH_FileOpt_Default 0\n#define PBS_BATCH_FileOpt_OFlg 1\n#define PBS_BATCH_FileOpt_EFlg 2\n\n#define PBS_IFF_CLIENT_ADDR \"PBS_IFF_CLIENT_ADDR\"\n\n/* time out values for tcp_dis read/write */\n#define PBS_DIS_TCP_TIMEOUT_CONNECT 10\n#define PBS_DIS_TCP_TIMEOUT_REPLY 10\n#define PBS_DIS_TCP_TIMEOUT_SHORT 30\n#define PBS_DIS_TCP_TIMEOUT_RERUN 45 /* timeout used in pbs_rerunjob() */\n#define PBS_DIS_TCP_TIMEOUT_LONG 600\n#define PBS_DIS_TCP_TIMEOUT_VLONG 10800\n\n#define FAILOVER_Register 0\t  /* secondary server register with primary */\n#define FAILOVER_HandShake 1\t  /* handshake from secondary to primary */\n#define FAILOVER_PrimIsBack 2\t  /* Primary is taking control again */\n#define FAILOVER_SecdShutdown 3\t  /* Primary going down, secondary go down */\n#define FAILOVER_SecdGoInactive 4 /* Primary down, secondary go inactive */\n#define FAILOVER_SecdTakeOver 5\t  /* Primary down, secondary take over */\n\n#define EXTEND_OPT_IMPLICIT_COMMIT \":C:\" /* option added to pbs_submit() extend parameter to request implicit commit */\n#define EXTEND_OPT_NEXT_MSG_TYPE \"next_msg_type\"\n#define EXTEND_OPT_NEXT_MSG_PARAM \"next_msg_param\"\n\nint is_compose(int, int);\nint is_compose_cmd(int, int, char **);\nvoid 
PBS_free_aopl(struct attropl *);\nvoid advise(char *, ...);\nint PBSD_commit(int, char *, int, char **, char *);\nint PBSD_jcred(int, int, char *, int, int, char **);\nint PBSD_jscript(int, const char *, int, char **);\nint PBSD_jscript_direct(int, char *, int, char **);\nint PBSD_copyhookfile(int, char *, int, char **);\nint PBSD_delhookfile(int, char *, int, char **);\nint PBSD_mgr_put(int, int, int, int, const char *, struct attropl *, const char *, int, char **);\nint PBSD_manager(int, int, int, int, const char *, struct attropl *, const char *);\nint PBSD_msg_put(int, const char *, int, const char *, const char *, int, char **);\nint PBSD_relnodes_put(int, const char *, const char *, const char *, int, char **);\nint PBSD_py_spawn_put(int, char *, char **, char **, int, char **);\nint PBSD_sig_put(int, const char *, const char *, const char *, int, char **);\nint PBSD_jobfile(int, int, char *, char *, enum job_file, int, char **);\nint PBSD_status_put(int, int, const char *, struct attrl *, const char *, int, char **);\nint PBSD_select_put(int, int, struct attropl *, struct attrl *, const char *);\nchar **PBSD_select_get(int);\nstruct batch_reply *PBSD_rdrpy(int);\nstruct batch_reply *PBSD_rdrpy_sock(int, int *, int prot);\nvoid PBSD_FreeReply(struct batch_reply *);\nstruct batch_status *PBSD_status(int, int, const char *, struct attrl *, const char *);\nstruct batch_status *PBSD_status_get(int c);\nchar *PBSD_queuejob(int, char *, const char *, struct attropl *, const char *, int, char **, int *);\nint decode_DIS_svrattrl(int, pbs_list_head *);\nint decode_DIS_attrl(int, struct attrl **);\nint decode_DIS_JobId(int, char *);\nint decode_DIS_replyCmd(int, struct batch_reply *, int);\nint encode_DIS_JobCred(int, int, const char *, int);\nint encode_DIS_UserCred(int, const char *, int, const char *, int);\nint encode_DIS_JobFile(int, int, const char *, int, const char *, int);\nint encode_DIS_JobId(int, const char *);\nint encode_DIS_Manage(int, int, int, const 
char *, struct attropl *);\nint encode_DIS_MessageJob(int, const char *, int, const char *);\nint encode_DIS_MoveJob(int, const char *, const char *);\nint encode_DIS_ModifyResv(int, const char *, struct attropl *);\nint encode_DIS_RelnodesJob(int, const char *, const char *);\nint encode_DIS_PySpawn(int, const char *, char **, char **);\nint encode_DIS_QueueJob(int, char *, const char *, struct attropl *);\nint encode_DIS_SubmitResv(int, const char *, struct attropl *);\nint encode_DIS_ReqExtend(int, const char *);\nint encode_DIS_ReqHdr(int, int, const char *);\nint encode_DIS_Run(int, const char *, const char *, unsigned long);\nint encode_DIS_ShutDown(int, int);\nint encode_DIS_SignalJob(int, const char *, const char *);\nint encode_DIS_Status(int, const char *, struct attrl *);\nint encode_DIS_attrl(int, struct attrl *);\nint encode_DIS_attropl(int, struct attropl *);\nint encode_DIS_CopyHookFile(int, int, const char *, int, const char *);\nint encode_DIS_DelHookFile(int, const char *);\nint encode_DIS_JobsList(int, char **, int);\nchar *PBSD_submit_resv(int, const char *, struct attropl *, const char *);\nint DIS_reply_read(int, struct batch_reply *, int);\nint tcp_pre_process(conn_t *);\nchar *PBSD_modify_resv(int, const char *, struct attropl *, const char *);\nint PBSD_cred(int, char *, char *, int, char *, long, int, char **);\nint tcp_send_auth_req(int, unsigned int, const char *, const char *, const char *);\nint pbs_register_sched(const char *sched_id, int primary_conn_id, int secondary_conn_id);\nchar *PBS_get_server(const char *, char *, uint *);\n\nvoid pbs_statfree_single(struct batch_status *bsp);\n#ifdef __cplusplus\n}\n#endif\n#endif /* _LIBPBS_H */\n"
  },
  {
    "path": "src/include/libsec.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*------------------------------------------------------------------------\n * Possible return values for the various external functions in the library\n *------------------------------------------------------------------------\n */\n\n#ifndef _LIBSEC_H\n#define _LIBSEC_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#define STD 0  /* standard PBS security (pbs_iff program) */\n#define KRB5 1 /* krb5/gssapi based authentication and encryption */\n\n#define CS_SUCCESS 0\t     /* success\t\t\t*/\n#define CS_FATAL_NOMEM 1     /* memory allocation failure\t*/\n#define CS_FATAL_NOAUTH 2    /* authentication failure\t*/\n#define CS_FATAL 3\t     /* non-specific failure\t\t*/\n#define CS_NOTIMPLEMENTED 4  /* function not implemented\t*/\n#define CS_AUTH_CHECK_PORT 6 /* STD CS_server_auth return code */\n#define CS_AUTH_USE_IFF 7    /* STD CS_client_auth return code */\n#define CS_REMAP_CTX_FAIL 8\n\n#define CS_IO_FAIL -1\t     /* error in CS_read, CS_write   */\n#define CS_CTX_TRAK_FATAL -2 /* error with context tracking  */\n\n#define CS_MODE_CLIENT 0\n#define CS_MODE_SERVER 1\n\nextern int CS_read(int sd, char *buf, size_t len);\nextern int CS_write(int sd, char *buf, size_t len);\nextern int CS_client_auth(int sd);\nextern int CS_server_auth(int sd);\nextern int CS_close_socket(int sd);\nextern int CS_close_app(void);\nextern int 
CS_client_init(void);\nextern int CS_server_init(void);\nextern int CS_verify(void);\nextern int CS_reset_vector(int sd);\nextern int CS_remap_ctx(int sd, int newsd);\nextern void (*p_cslog)(int ecode, const char *caller, const char *txtmsg);\n\n#define cs_logerr(a, b, c) ((*p_cslog)((a), (b), (c)))\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _LIBSEC_H */\n"
  },
  {
    "path": "src/include/libutil.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _HAVE_LIB_UTIL_H\n#define _HAVE_LIB_UTIL_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include <time.h>\n#include <stdio.h>\n#include <stdbool.h>\n#include <netinet/in.h>\n\n#include \"pbs_ifl.h\"\n\n/* misc_utils specific */\n\n#define IS_EMPTY(str) (!str || str[0] == '\\0')\n\n/* replace - Replace sub-string with new pattern in string */\nvoid replace(char *, char *, char *, char *);\n\n/* show_nonprint_chars - show non-printable characters in string */\nchar *show_nonprint_chars(char *);\n\n/*\tchar_in_set - is the char c in the tokenset */\nint char_in_set(char c, const char *tokset);\n\n/* string_token - strtok() without an internal state pointer */\nchar *string_token(char *str, const char *tokset, char **ret_str);\nint in_string_list(char *str, char sep, char *string_list);\n\nint copy_file_internal(char *src, char *dst);\n\nint is_full_path(char *path);\nint file_exists(char *path);\nint is_same_host(char *, char *);\n\n/* Determine if a placement directive is present in a placement string. 
*/\nint place_sharing_check(char *, char *);\n\n/* execvnode_seq_util specific */\n#define TOKEN_SEPARATOR \"~\"\n#define MAX_INT_LENGTH 10\n\n#define WORD_TOK \"{\"\n#define MAP_TOK \",\"\n#define WORD_MAP_TOK \"}\"\n#define RANGE_TOK \"-\"\n#define COUNT_TOK \"#\"\n\n/* Memory Allocation Error Message */\n#define MALLOC_ERR_MSG \"No memory available\"\n\n/* Dictionary is a list of words */\ntypedef struct dict {\n\tstruct word *first;\n\tstruct word *last;\n\tint count;\n\tint length;\n\tint max_idx;\n} dictionary;\n\n/* Each word maps to an associated set (map) in the dictionary */\nstruct word {\n\tchar *name;\n\tstruct word *next;\n\tstruct map *map;\n\tint count;\n};\n\n/* A map is the associated set of a word */\nstruct map {\n\tint val;\n\tstruct map *next;\n};\n\n/* Compress a delimited string into a dictionary-compressed representation */\nchar *condense_execvnode_seq(const char *);\n\n/* Decompress a compressed string into an array of words (strings) indexed by\n * their associated indices */\nchar **unroll_execvnode_seq(char *, char ***);\n\n/*\n * Get the total number of indices represented in the condensed string\n * which corresponds to the total number of occurrences in the execvnode string\n */\nint get_execvnodes_count(char *);\n\n/* Free the memory allocated to an unrolled string */\nvoid free_execvnode_seq(char **ptr);\n\n/* pbs_ical specific */\n\n/* Define the location of ical zoneinfo directory\n * this will be relative to PBS_EXEC (see pbsd_init and pbs_sched) */\n#define ICAL_ZONEINFO_DIR \"/lib/ical/zoneinfo\"\n\n/* Returns the number of occurrences defined by a recurrence rule */\nint get_num_occurrences(char *rrule, time_t dtstart, char *tz);\n\n/* Get the occurrence as defined by the given recurrence rule,\n * start time, and index. 
This function assumes that the\n * time dtstart passed in is the one to start the occurrence from.\n */\ntime_t get_occurrence(char *, time_t, char *, int);\n\n/*\n * Check if a recurrence rule is valid and consistent.\n * The recurrence rule is verified against a start date and checks\n * that the frequency of the recurrence matches the duration of the\n * submitted reservation. If the duration of a reservation exceeds the\n * granularity of the frequency then an error message is displayed.\n *\n * The recurrence rule is checked to contain a COUNT or an UNTIL.\n *\n * Note that the TZ environment variable HAS to be set for the occurrence's\n * dates to be correctly computed.\n */\nint check_rrule(char *, time_t, time_t, char *, int *);\n\n/*\n * Displays the occurrences in a two-column format:\n * the first column corresponds to the occurrence date and time\n * the second column corresponds to the reserved execvnode\n */\nvoid display_occurrences(char *, time_t, char *, char *, int, int);\n\n/*\n * Set the zoneinfo directory\n */\nvoid set_ical_zoneinfo(char *path);\n\n/*\n * values for the vnode 'sharing' attribute\n */\nenum vnode_sharing {\n\tVNS_UNSET,\n\tVNS_DFLT_SHARED,\n\tVNS_DFLT_EXCL,\n\tVNS_IGNORE_EXCL,\n\tVNS_FORCE_EXCL,\n\tVNS_DFLT_EXCLHOST,\n\tVNS_FORCE_EXCLHOST,\n\tVNS_FORCE_SHARED,\n};\n\n/*\n * convert vnode sharing enum into string form\n */\nchar *vnode_sharing_to_str(enum vnode_sharing vns);\n/*\n * convert string form of vnode sharing to enum\n */\nenum vnode_sharing str_to_vnode_sharing(char *vn_str);\n\n/*\n * concatenate two strings by expanding target string as needed.\n * \t  Operation: strbuf += str\n */\nchar *pbs_strcat(char **strbuf, int *ssize, const char *str);\n\n/*\n * like strcpy, but returns pointer to end of copied data\n * useful for chain copies instead of sprintf which is very\n * slow\n *\n */\nchar *pbs_strcpy(char *dest, const char *src);\n\n/*\n * general purpose strncpy function that will make sure to\n * copy '\\0' at 
the end of the buffer.\n */\nchar *pbs_strncpy(char *dest, const char *src, size_t n);\n\nint pbs_extendable_line(char *buf);\nchar *pbs_fgets(char **pbuf, int *pbuf_size, FILE *fp);\nchar *pbs_fgets_extend(char **pbuf, int *pbuf_size, FILE *fp);\n\n/*\n * Internal asprintf() implementation for use on all platforms\n */\nextern int pbs_asprintf(char **dest, const char *fmt, ...);\nextern char *pbs_asprintf_format(int len, const char *fmt, va_list args);\n\n/*\n * calculate the number of digits to the right of the decimal point in\n *        a floating point number.  This can be used in conjunction with\n *        printf() to not print trailing zeros.\n *\n * Use: int x = float_digits(fl, 8);\n * printf(\"%0.*f\\n\", x, fl);\n *\n */\nint float_digits(double fl, int digits);\n\n/* Various helper functions in hooks processing */\nint starts_with_triple_quotes(char *str);\nint ends_with_triple_quotes(char *str, int strip_quotes);\n\n/* Special symbols for copy_file_internal() */\n\n#define COPY_FILE_BAD_INPUT 1\n#define COPY_FILE_BAD_SOURCE 2\n#define COPY_FILE_BAD_DEST 3\n#define COPY_FILE_BAD_WRITE 4\n\n#define LOCK_RETRY_DEFAULT 2\nint\nlock_file(int fd, int op, char *filename, int lock_retry,\n\t  char *err_msg, size_t err_msg_len);\n\n/* RSHD/RCP related */\n/* Size of the buffer used in communication with the rshd daemon */\n#define RCP_BUFFER_SIZE 65536\n\n#define MAXBUFLEN 1024\n#define BUFFER_GROWTH_RATE 2\n\n/*\n *      break_comma_list - break apart a comma delimited string into an array\n *                         of strings\n */\nchar **break_comma_list(char *list);\n\n/*\n *      break_delimited_str - break apart a delimited string into an array\n *                         of strings\n */\nchar **break_delimited_str(char *list, char delim);\n\n/*\n * find index of str in strarr\n */\nint find_string_idx(char **strarr, const char *str);\n\n/*\n *\tis_string_in_arr - Does a string exist in the given array?\n */\nint is_string_in_arr(char **strarr, const char 
*str);\n\n/*\n * Make copy of string array\n */\nchar **dup_string_arr(char **strarr);\n\n/*\n *      free_string_array - free an array of strings with NULL as sentinel\n */\nvoid free_string_array(char **arr);\n\n/*\n * ensure_string_not_null - if string is NULL, allocate an empty string\n */\nvoid ensure_string_not_null(char **str);\n\n/*\n * convert_string_to_lowercase - convert string to lower case\n */\nchar *convert_string_to_lowercase(char *str);\n\n/*\n * Escape every occurrence of 'delim' in 'str' with 'esc'\n */\nchar *escape_delimiter(char *str, char *delim, char esc);\n\n#ifdef HAVE_MALLOC_INFO\nchar *get_mem_info(void);\n#endif\n\n/* Size of time buffer */\n#define TIMEBUF_SIZE 128\n\n/**\n *\n * \tconvert_duration_to_str - Convert a duration to HH:MM:SS format string\n *\n */\nvoid convert_duration_to_str(time_t duration, char *buf, int bufsize);\n\n/**\n * deduce the preemption ordering to be used for a job\n */\nstruct preempt_ordering *get_preemption_order(struct preempt_ordering *porder, int req, int used);\n\n/**\n * Begin collecting performance stats (e.g. walltime)\n */\nvoid perf_stat_start(char *instance);\n\n/**\n * Remove a performance stats entry.\n */\nvoid perf_stat_remove(char *instance);\n\n/**\n * check delay in client commands\n */\nvoid create_query_file(void);\nvoid delay_query(void);\n\n/**\n * End collecting performance stats (e.g. 
walltime)\n */\nchar *perf_stat_stop(char *instance);\n\nextern char *netaddr(struct sockaddr_in *);\nextern unsigned long crc_file(char *fname);\nextern int get_fullhostname(char *, char *, int);\nextern char *get_hostname_from_addr(struct in_addr addr);\nextern char *parse_servername(char *, unsigned int *);\nextern int rand_num(void);\n\nextern char *gen_hostkey(char *cluster_key, char *salt, size_t *len);\nextern int validate_hostkey(char *host_key, size_t host_keylen, char **cluster_key);\nvoid set_rand_str(char *str, int len);\n\n/* thread utils */\nextern int init_mutex_attr_recursive(void *attr);\n\n#ifdef _USRDLL\n#ifdef DLL_EXPORT\n#define DECLDIR __declspec(dllexport)\n#else\n#define DECLDIR __declspec(dllimport)\n#endif\nDECLDIR void encode_SHA(char *, size_t, char **);\n#else\nvoid encode_SHA(char *, size_t, char **);\n#endif\n\nvoid set_proc_limits(char *, int);\nint get_index_from_jid(char *jid);\nchar *get_range_from_jid(char *jid);\nchar *create_subjob_id(char *parent_jid, int sjidx);\n\n#define GET_IP_PORT(x) ((struct sockaddr_in *) (x))->sin_port\n#define IS_VALID_IP(x) (((struct sockaddr_in *)(x))->sin_family == AF_INET)\n\n#ifdef __cplusplus\n}\n#endif\n#endif\n"
  },
  {
    "path": "src/include/list_link.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _LIST_LINK_H\n#define _LIST_LINK_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n/*\n * list_link.h - header file for general linked list routines\n *\t\tsee list_link.c\n *\n *\tA user defined linked list can be managed by these routines if\n *\tthe first element of the user structure is the pbs_list_link struct\n *\tdefined below.\n */\n\n/* list entry list sub-structure */\n\ntypedef struct pbs_list_link {\n\tstruct pbs_list_link *ll_prior;\n\tstruct pbs_list_link *ll_next;\n\tvoid *ll_struct;\n} pbs_list_link;\ntypedef pbs_list_link pbs_list_head;\n\n/* macros to clear list head or link */\n\n#define CLEAR_HEAD(e) e.ll_next = &e, e.ll_prior = &e, e.ll_struct = NULL\n#define CLEAR_LINK(e) e.ll_next = &e, e.ll_prior = &e\n\n#define LINK_INSET_BEFORE 0\n#define LINK_INSET_AFTER 1\n\n#if defined(DEBUG) && !defined(NDEBUG)\n#define GET_NEXT(pe) get_next((pe), __FILE__, __LINE__)\n#define GET_PRIOR(pe) get_prior((pe), __FILE__, __LINE__)\n#else\n#define GET_NEXT(pe) (pe).ll_next->ll_struct\n#define GET_PRIOR(pe) (pe).ll_prior->ll_struct\n#endif\n\n/* function prototypes */\n\nextern void insert_link(pbs_list_link *oldp, pbs_list_link *newp, void *pobj, int pos);\nextern void append_link(pbs_list_head *head, pbs_list_link *newp, void *pnewobj);\nextern void delete_link(pbs_list_link *oldp);\nextern void delete_clear_link(pbs_list_link 
*oldp);\nextern void swap_link(pbs_list_link *, pbs_list_link *);\nextern int is_linked(pbs_list_link *head, pbs_list_link *oldp);\nextern void list_move(pbs_list_head *oldp, pbs_list_head *newp);\n\n#ifndef NDEBUG\nextern void *get_next(pbs_list_link, char *file, int line);\nextern void *get_prior(pbs_list_link, char *file, int line);\n#endif /* NDEBUG */\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _LIST_LINK_H */\n"
  },
  {
    "path": "src/include/log.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _LOG_H\n#define _LOG_H\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include <stdio.h>\n#if SYSLOG\n#include <syslog.h>\n#else\n/* normally found in syslog.h, need to be defined for calls, but */\n/* will be ignored in pbs_log.c\t\t\t\t */\n#define LOG_EMERG 0\n#define LOG_ALERT 1\n#define LOG_CRIT 2\n#define LOG_ERR 3\n#define LOG_WARNING 4\n#define LOG_NOTICE 5\n#define LOG_INFO 6\n#define LOG_DEBUG 7\n#define LOG_AUTH 8\n#endif /* SYSLOG */\n\n#include <sys/stat.h>\n\n/*\n * include file for error/event logging\n */\n\n/*\n * The default log buffer size should be large enough to hold a short message\n * together with the full pathname of a file. A full pathname may be up to 4096\n * characters. Add to this an extra 256 characters. This helps to avoid format\n * truncation warnings from certain compilers.\n */\n#define LOG_BUF_SIZE 4352\n\n/* The following macro assists in sharing code between the Server and Mom */\n#define LOG_EVENT log_event\n\n/*\n ** Set up a debug print macro.\n */\n#define sys_printf(...) syslog(LOG_NOTICE, __VA_ARGS__);\n#ifdef DEBUG\n#define DBPRT(x) printf x;\n#endif\n#ifdef DBPRT_LOG\n#include <stdlib.h>\n#include <errno.h>\n#include \"libutil.h\"\n#define STRIP_PARENS(...) 
__VA_ARGS__\n#undef DBPRT\n#define DBPRT(x)                                                                     \\\n\tif (will_log_event(PBSEVENT_DEBUGPRT)) {                                     \\\n\t\tchar *msg_;                                                          \\\n\t\tint msg_len_;                                                        \\\n\t\tint save_errno_ = errno;                                             \\\n\t\tmsg_len_ = pbs_asprintf(&msg_, STRIP_PARENS x);                      \\\n\t\tif (msg_len_ >= 0) {                                                 \\\n\t\t\tif (msg_len_ > 0 && msg_[msg_len_ - 1] == '\\n') {            \\\n\t\t\t\tmsg_[msg_len_ - 1] = '\\0';                           \\\n\t\t\t}                                                            \\\n\t\t\tlog_record(PBSEVENT_DEBUGPRT, 0, LOG_DEBUG, __func__, msg_); \\\n\t\t\tfree(msg_);                                                  \\\n\t\t}                                                                    \\\n\t\terrno = save_errno_;                                                 \\\n\t}\n#endif\n#ifndef DBPRT\n#define DBPRT(x)\n#endif\n\n#define IFNAME_MAX 256\n#define IFFAMILY_MAX 16\n\nstruct log_net_info { /* interface info for logging */\n\tstruct log_net_info *next;\n\tchar ifname[IFNAME_MAX];\n\tchar iffamily[IFFAMILY_MAX];\n\tchar **ifhostnames;\n};\n\nextern char *msg_daemonname;\n\nextern long *log_event_mask;\n\nextern void set_logfile(FILE *fp);\nextern int set_msgdaemonname(const char *ch);\nvoid set_log_conf(char *leafname, char *nodename,\n\t\t  unsigned int islocallog, unsigned int sl_fac, unsigned int sl_svr,\n\t\t  unsigned int log_highres);\n\nextern struct log_net_info *get_if_info(char *msg);\nextern void free_if_info(struct log_net_info *ni);\n\nextern void log_close(int close_msg);\nextern void log_err(int err, const char *func, const char *text);\nextern void log_errf(int errnum, const char *routine, const char *fmt, ...);\nextern void log_joberr(int 
err, const char *func, const char *text, const char *pjid);\nextern void log_event(int type, int objclass, int severity, const char *objname, const char *text);\nextern void do_log_eventf(int eventtype, int objclass, int sev, const char *objname, const char *fmt, va_list args);\nextern void log_eventf(int eventtype, int objclass, int sev, const char *objname, const char *fmt, ...);\nextern int will_log_event(int type);\nextern void log_suspect_file(const char *func, const char *text, const char *file, struct stat *sb);\nextern int log_open(char *name, char *directory);\nextern int log_open_main(char *name, char *directory, int silent);\nextern void log_record(int type, int objclass, int severity, const char *objname, const char *text);\nextern char log_buffer[LOG_BUF_SIZE];\nextern int log_level_2_etype(int level);\n\nextern int chk_path_sec(char *path, int dir, int sticky, int bad, int);\nextern int chk_file_sec(char *path, int isdir, int sticky, int disallow, int fullpath);\nextern int chk_file_sec_user(char *path, int isdir, int sticky, int disallow, int fullpath, int uid);\nextern int tmp_file_sec(char *path, int isdir, int sticky, int disallow, int fullpath);\nextern int tmp_file_sec_user(char *path, int isdir, int sticky, int disallow, int fullpath, int uid);\n\n#ifdef WIN32\nextern int chk_file_sec2(char *path, int isdir, int sticky,\n\t\t\t int disallow, int fullpath, char *owner);\n#endif\nextern char *get_script_name(char *input);\n\nextern int setup_env(char *filename);\nextern void log_supported_auth_methods(char **supported_auth_methods);\n\n/* Event types */\n\n#define PBSEVENT_ERROR 0x0001\t  /* internal errors */\n#define PBSEVENT_SYSTEM 0x0002\t  /* system (server) events */\n#define PBSEVENT_ADMIN 0x0004\t  /* admin events */\n#define PBSEVENT_JOB 0x0008\t  /* job related events */\n#define PBSEVENT_JOB_USAGE 0x0010 /* End of Job accounting */\n#define PBSEVENT_SECURITY 0x0020  /* security violation events */\n#define PBSEVENT_SCHED 0x0040\t  /* 
scheduler events */\n#define PBSEVENT_DEBUG 0x0080\t  /* common debug messages */\n#define PBSEVENT_DEBUG2 0x0100\t  /* less needed debug messages */\n#define PBSEVENT_RESV 0x0200\t  /* reservation related msgs */\n#define PBSEVENT_DEBUG3 0x0400\t  /* less needed debug messages */\n#define PBSEVENT_DEBUG4 0x0800\t  /* rarely needed debugging */\n#ifndef PBSEVENT_DEBUGPRT\n#define PBSEVENT_DEBUGPRT 0x1000 /* messages from the DBPRT macro */\n#endif\n#define PBSEVENT_FORCE 0x8000 /* set to force a message */\n#define SVR_LOG_DFLT PBSEVENT_ERROR | PBSEVENT_SYSTEM | PBSEVENT_ADMIN | PBSEVENT_JOB | PBSEVENT_JOB_USAGE | PBSEVENT_SECURITY | PBSEVENT_SCHED | PBSEVENT_DEBUG | PBSEVENT_DEBUG2\n#define SCHED_LOG_DFLT PBSEVENT_ERROR | PBSEVENT_SYSTEM | PBSEVENT_ADMIN | PBSEVENT_JOB | PBSEVENT_JOB_USAGE | PBSEVENT_SECURITY | PBSEVENT_SCHED | PBSEVENT_DEBUG | PBSEVENT_RESV\n\n/* Event Object Classes, see array class_names[] in ../lib/Liblog/pbs_log.c */\n\n#define PBS_EVENTCLASS_SERVER 1\t /* The server itself */\n#define PBS_EVENTCLASS_QUEUE 2\t /* Queues */\n#define PBS_EVENTCLASS_JOB 3\t /* Jobs\t */\n#define PBS_EVENTCLASS_REQUEST 4 /* Batch Requests */\n#define PBS_EVENTCLASS_FILE 5\t /* A Job related File */\n#define PBS_EVENTCLASS_ACCT 6\t /* Accounting info */\n#define PBS_EVENTCLASS_NODE 7\t /* Nodes */\n#define PBS_EVENTCLASS_RESV 8\t /* Reservations */\n#define PBS_EVENTCLASS_SCHED 9\t /* Scheduler */\n#define PBS_EVENTCLASS_HOOK 10\t /* Hook\t */\n#define PBS_EVENTCLASS_RESC 11\t /* Resource */\n#define PBS_EVENTCLASS_TPP 12\t /* TPP */\n\n/* Logging Masks */\n\n#define PBSEVENT_MASK 0x01ff\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif /* _LOG_H */\n"
  },
  {
    "path": "src/include/mom_func.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _MOM_FUNC_H\n#define _MOM_FUNC_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include <stdio.h>\n\n#ifndef MOM_MACH\n#include \"mom_mach.h\"\n#endif /* MOM_MACH */\n\n#include \"port_forwarding.h\"\n#include \"batch_request.h\"\n#include \"pbs_internal.h\"\n\n/* struct sig_tbl = used to hold map of local signal names to values */\n\nstruct sig_tbl {\n\tchar *sig_name;\n\tint sig_val;\n};\n\n#define NUM_LCL_ENV_VAR 10\n\n/* used by mom_main.c and requests.c for $usecp */\n\nstruct cphosts {\n\tchar *cph_hosts;\n\tchar *cph_from;\n\tchar *cph_to;\n#ifdef NAS /* localmod 009 */\n\t/* support $usecp rules that exclude a pattern */\n\tint cph_exclude;\n#endif /* localmod 009 */\n};\nextern int cphosts_num;\nextern struct cphosts *pcphosts;\n\n/* used by mom_main.c and start_exec.c for TMPDIR */\n\nextern char pbs_tmpdir[];\n\n/* used by mom_main.c and start_exec.c for PBS_JOBDIR */\nextern char pbs_jobdir_root[];\nextern int pbs_jobdir_root_shared;\n#define JOBDIR_DEFAULT \"PBS_USER_HOME\"\n\n/* test bits */\n#define PBSQA_DELJOB_SLEEP 1\n#define PBSQA_DELJOB_CRASH 2\n#define PBSQA_POLLJOB_CRASH 4\n#define PBSQA_POLLJOB_SLEEP 8\n#define PBSQA_NTBL_STATUS 16\n#define PBSQA_NTBL_ADAPTER 32\n#define PBSQA_NTBL_LOAD 64\n#define PBSQA_NTBL_UNLOAD 128\n#define PBSQA_NTBL_CLEAN 256\n#define PBSQA_DELJOB_SLEEPLONG 512\n#define 
PBSQA_NTBL_NOPORTS 1024\n\nextern unsigned long QA_testing;\n\n/* used by Mom for external actions */\n\nenum Action_Event {\t\t     /* enum should start with zero\t*/\n\t\t    TerminateAction, /* On Job Termination\t\t*/\n\t\t    ChkptAction,     /* On Checkpoint\t\t*/\n\t\t    ChkptAbtAction,  /* On Checkpoint with abort\t*/\n\t\t    RestartAction,   /* On Restart (chkpt)\t\t*/\n\t\t    MultiNodeBusy,   /* when desktop goes keyboard busy */\n\t\t    LastAction\t     /* Must be last entry\t\t*/\n};\n\nenum Action_Verb {\n\tDefault, /* default action  */\n\tScript,\t /* external script */\n\tRequeue\t /* requeue job     */\n};\n\nextern struct mom_action {\n\tchar *ma_name;\t\t  /* action name (noun)\t     */\n\tint ma_timeout;\t\t  /* allowable time for action */\n\tenum Action_Verb ma_verb; /* action verb\t\t     */\n\tchar *ma_script;\t  /* absolute script path      */\n\tchar **ma_args;\t\t  /* args to pass to script    */\n} mom_action[(int) LastAction];\n\n/**\n * values for call_hup\n */\nenum hup_action {\n\tHUP_CLEAR = 0, /* No HUP processing needed */\n\tHUP_REAL,      /* a HUP signal was received */\n\tHUP_INIT       /* a job failure requires init processing */\n};\n\n/**\n * Flag used to indicate that HUP processing should take place.\n */\nextern enum hup_action call_hup;\n\nextern int mock_run;\n\n/* public functions within MOM */\n\n#ifdef _PBS_JOB_H\n\n#define COMM_MATURITY_TIME 60 /* time when we consider a pbs_comm connection as mature */\n#define MOM_DELTA_NORMAL 1    /* Normal mode of operation for time_delta_hellosvr function */\n#define MOM_DELTA_RESET 0     /* Reset the values of time_delta_hellosvr function back to 1 */\n\ntypedef int (*pbs_jobfunc_t)(job *);\ntypedef int (*pbs_jobnode_t)(job *, hnodent *);\ntypedef int (*pbs_jobstream_t)(job *, int);\ntypedef int (*pbs_jobndstm_t)(job *, hnodent *, int);\ntypedef void (*pbs_jobvoid_t)(job *);\ntypedef void (*pbs_jobnodevoid_t)(job *, hnodent *);\n\nextern pbs_jobnode_t 
job_join_extra;\nextern pbs_jobndstm_t job_join_ack;\nextern pbs_jobndstm_t job_join_read;\nextern pbs_jobndstm_t job_setup_send;\nextern pbs_jobstream_t job_setup_final;\nextern pbs_jobvoid_t job_end_final;\nextern pbs_jobfunc_t job_clean_extra;\nextern pbs_jobvoid_t job_free_extra;\nextern pbs_jobnodevoid_t job_free_node;\n\nextern int local_supres(job *, int, struct batch_request *);\nextern void post_suspend(job *, int);\nextern void post_resume(job *, int);\nextern void post_restart(job *, int);\nextern void post_chkpt(job *, int);\nextern int start_checkpoint(job *, int, struct batch_request *);\nextern int local_checkpoint(job *, int, struct batch_request *);\nextern int start_restart(job *, struct batch_request *);\nextern int local_restart(job *, struct batch_request *);\nextern int time_delta_hellosvr(int);\n\n#ifdef WIN32\nextern void wait_action(void);\n#endif\n\ntypedef enum {\n\tHANDLER_FAIL = 0,\n\tHANDLER_SUCCESS = 1,\n\tHANDLER_REPARSE = 2\n} handler_ret_t;\nextern handler_ret_t set_boolean(const char *id, char *value, int *flag);\nextern int do_susres(job *pjob, int which);\nextern int error(char *string, int value);\nextern int kill_job(job *, int sig);\nextern int kill_task(pbs_task *, int sig, int dir);\nextern void del_job_hw(job *);\nextern void mom_deljob(job *);\nextern void mom_deljob_wait2(job *);\nextern int send_sisters_deljob_wait(job *);\nextern void del_job_resc(job *);\nextern int do_mom_action_script(int, job *, pbs_task *, char *,\n\t\t\t\tvoid (*)(job *, int));\nextern enum Action_Verb chk_mom_action(enum Action_Event);\nextern void mom_freenodes(job *);\nextern void unset_job(job *, int);\nstruct passwd *check_pwd(job *);\nextern char *set_shell(job *, struct passwd *);\nextern void start_exec(job *);\nextern void send_obit(job *, int);\nextern void send_hellosvr(int);\nextern void send_wk_job_idle(char *, int);\nextern int site_job_setup(job *);\nextern int site_mom_chkuser(job *);\nextern int site_mom_postchk(job *, 
int);\nextern int site_mom_prerst(job *);\nextern int terminate_job(job *, int);\nextern int mom_deljob_wait(job *);\nextern int run_pelog(int which, char *file, job *pjob, int pe_io_type);\nextern int is_joined(job *);\nextern void update_jobs_status(void);\nextern void calc_cpupercent(job *, unsigned long, unsigned long, time_t);\nextern void dorestrict_user(void);\nextern int task_save(pbs_task *ptask);\nextern void send_join_job_restart(int, eventent *, int, job *, pbs_list_head *);\nextern int send_resc_used_to_ms(int stream, job *pjob);\nextern int recv_resc_used_from_sister(int stream, job *pjob, int nodeidx);\nextern int is_comm_up(int);\n\n/* Defines for pe_io_type, see run_pelog() */\n\n#define PE_IO_TYPE_NULL -1\n#define PE_IO_TYPE_ASIS 0\n#define PE_IO_TYPE_STD 1\n#define PE_PROLOGUE 1\n#define PE_EPILOGUE 2\n\ntypedef enum {\n\tPRE_FINISH_SUCCESS,\n\tPRE_FINISH_SUCCESS_JOB_SETUP_SEND,\n\tPRE_FINISH_FAIL,\n\tPRE_FINISH_FAIL_JOB_SETUP_SEND,\n\tPRE_FINISH_FAIL_JOIN_EXTRA\n} pre_finish_results_t;\n\n#ifdef _LIBPBS_H\nextern int open_std_file(job *, enum job_file, int, gid_t);\nextern char *std_file_name(job *, enum job_file, int *keeping);\nextern int task_recov(job *pjob);\nextern int send_sisters(job *pjob, int com, pbs_jobndstm_t);\nextern int send_sisters_inner(job *pjob, int com, pbs_jobndstm_t, char *);\nextern int send_sisters_job_update(job *pjob);\nextern int im_compose(int stream, char *jobid, char *cookie, int command, tm_event_t event, tm_task_id taskid, int version);\nextern int message_job(job *pjob, enum job_file jft, char *text);\nextern void term_job(job *pjob);\nextern int start_process(pbs_task *pt, char **argv, char **envp, bool nodemux);\nextern pre_finish_results_t pre_finish_exec(job *pjob, int do_job_setup_send);\nextern void finish_exec(job *pjob);\nextern void exec_bail(job *pjob, int code, char *txt);\nextern int generate_pbs_nodefile(job *pjob, char *nodefile, int nodefile_sz, char *err_msg, int err_msg_sz);\nextern int 
job_nodes_inner(struct job *pjob, hnodent **mynp);\nextern int job_nodes(job *pjob);\nextern int tm_reply(int stream, int version, int com, tm_event_t event);\n#ifdef WIN32\nextern void end_proc(void);\nextern int dep_procinfo(pid_t pid, pid_t *psid, uid_t *puid, char *puname, size_t uname_len, char *comm, size_t comm_len);\n#else\nextern int dep_procinfo(pid_t, pid_t *, uid_t *, char *, size_t);\n#endif\n#ifdef NAS_UNKILL /* localmod 011 */\nextern int kill_procinfo(pid_t, pid_t *, u_Long *);\n#endif /* localmod 011 */\nextern int dep_attach(pbs_task *ptask);\n#endif /* _LIBPBS_H */\n\n#ifdef _RESOURCE_H\nextern u_long gettime(resource *pres);\nextern u_long getsize(resource *pres);\nextern int local_gettime(resource *, unsigned long *ret);\nextern int local_getsize(resource *, unsigned long *ret);\n#endif /* _RESOURCE_H */\n\n#ifdef _BATCH_REQUEST_H\nextern int start_checkpoint(job *, int abt, struct batch_request *pq);\n\n#endif /* _BATCH_REQUEST_H */\n\n#endif /* _PBS_JOB_H */\n\nstruct cpy_files {\n\tint stageout_failed; /* for stageout failed */\n\tint bad_files;\t     /* for failed to stageout file */\n\tint from_spool;\t     /* copy from spool */\n\tint file_num;\t     /* no. of file names in file_list */\n\tint file_max;\t     /* no. 
of file names that can reside in file_list */\n\tchar **file_list;    /* list of file names to be deleted later */\n\tint sandbox_private; /* for stageout with PRIVATE sandbox */\n\tchar *bad_list;\t     /* list of failed stageout filenames */\n\tint direct_write;    /* whether direct write has been requested by the job */\n};\ntypedef struct cpy_files cpy_files;\n\n#ifdef WIN32\nenum stagefile_errcode {\n\tSTAGEFILE_OK = 0,\n\tSTAGEFILE_NOCOPYFILE,\n\tSTAGEFILE_FATAL,\n\tSTAGEFILE_BADUSER,\n\tSTAGEFILE_LAST\n};\n\nstruct copy_info {\n\tpbs_list_link al_link;\t    /* link to all copy info list */\n\tchar *jobid;\t\t    /* job id to which this info belongs */\n\tjob *pjob;\t\t    /* pointer to job structure */\n\tstruct work_task *ptask;    /* pointer to work task */\n\tstruct batch_request *preq; /* pointer to batch request */\n\tpio_handles pio;\t    /* process info struct */\n};\ntypedef struct copy_info copy_info;\n\n#define CPY_PIPE_BUFSIZE 4096 /* buffer size for pipe */\nextern pbs_list_head mom_copyreqs_list;\nextern void post_cpyfile(struct work_task *);\nextern copy_info *get_copyinfo_from_list(char *);\n#endif\nextern char *tmpdirname(char *);\n#ifdef NAS /* localmod 010 */\nextern char *NAS_tmpdirname(job *);\n#endif /* localmod 010 */\nextern char *jobdirname(char *, char *);\nextern void rmtmpdir(char *);\nextern int local_or_remote(char **);\nextern void add_bad_list(char **, char *, int);\nextern int is_child_path(char *, char *);\nextern int pbs_glob(char *, char *);\nextern void rmjobdir(char *, char *, uid_t, gid_t, int);\nextern int stage_file(int, int, char *, struct rqfpair *, int, cpy_files *, char *, char *);\n#ifdef WIN32\nextern int mktmpdir(char *, char *);\nextern int mkjobdir(char *, char *, char *, HANDLE login_handle);\nextern int isdriveletter(int);\nextern void send_pcphosts(pio_handles *, struct cphosts *);\nextern int send_rq_cpyfile_cred(pio_handles *, struct rq_cpyfile *);\nextern int recv_pcphosts(void);\nextern int 
recv_rq_cpyfile_cred(struct rq_cpyfile *);\nextern int remdir(char *);\nextern void check_err(const char *func_name, char *buf, int len);\n#else\nextern int mktmpdir(char *, uid_t, gid_t, struct var_table *);\nextern int mkjobdir(char *, char *, uid_t, gid_t);\nextern int impersonate_user(uid_t, gid_t);\nextern void revert_from_user(void);\nextern int open_file_as_user(char *path, int oflag, mode_t mode,\n\t\t\t     uid_t exuid, gid_t exgid);\n#endif\nextern int find_env_slot(struct var_table *, char *);\nextern void bld_env_variables(struct var_table *, char *, char *);\nextern void add_envp(char **, struct var_table *);\nextern pid_t fork_me(int sock);\n\nextern ssize_t readpipe(int pfd, void *vptr, size_t nbytes);\nextern ssize_t writepipe(int pfd, void *vptr, size_t nbytes);\nextern int get_la(double *);\nextern void init_abort_jobs(int, pbs_list_head *);\nextern void checkret(char **spot, int len);\nextern void mom_nice(void);\nextern void mom_unnice(void);\nextern int mom_reader(int, int, char *);\nextern int mom_reader_pkt(int, int, char *);\nextern int mom_reader_Xjob(int);\nextern int mom_reader_pkt_Xjob(int);\nextern int mom_get_reader_Xjob(int);\nextern int mom_writer(int, int);\nextern int mom_writer_pkt(int, int);\nextern void log_mom_portfw_msg(char *msg);\nextern void nodes_free(job *);\nextern int open_demux(u_long, int);\nextern int open_master(char **);\nextern int open_slave(void);\nextern char *rcvttype(int);\nextern int rcvwinsize(int);\nextern int remtree(char *);\nextern void scan_for_exiting(void);\nextern void scan_for_terminated(void);\nextern int setwinsize(int);\nextern void set_termcc(int);\nextern int conn_qsub(char *host, long port);\nextern int conn_qsub_resvport(char *host, long port);\nextern int state_to_server(int, int);\nextern int send_hook_vnl(void *vnl);\nextern int hook_requests_to_server(pbs_list_head *);\nextern int init_x11_display(struct pfwdsock *, int, char *, char *, char *);\nextern int setcurrentworkdir(char 
*);\nextern int becomeuser(job *);\nextern int becomeuser_args(char *, uid_t, gid_t, gid_t);\nextern void close_update_pipes(job *);\nextern void mom_set_use_all(void);\nvoid job_purge_mom(job *pjob);\n\n/* From popen.c */\nextern FILE *pbs_popen(const char *, const char *);\nextern int pbs_pkill(FILE *, int);\nextern int pbs_pclose(FILE *);\n\n/* from mom_walltime.c */\nextern void start_walltime(job *);\nextern void update_walltime(job *);\nextern void stop_walltime(job *);\nextern void recover_walltime(job *);\n\n/* Define for max xauth data*/\n#define X_DISPLAY_LEN 512\n\n/* Defines for when resource usage is polled by Mom */\n#define MAX_CHECK_POLL_TIME 120\n#define MIN_CHECK_POLL_TIME 10\n\n/* For windows only, define the window station to use */\n/* for launching processes. */\n#define PBS_DESKTOP_NAME \"PBSWS\\\\default\"\n\n/* max # of users that will be exempted from dorestrict_user process killing */\n#ifdef NAS /* localmod 008 */\n#define NUM_RESTRICT_USER_EXEMPT_UIDS 99\n#else\n#define NUM_RESTRICT_USER_EXEMPT_UIDS 10\n#endif /* localmod 008 */\n\n/* max length of the error message generated due to database issues */\n#define PBS_MAX_DB_ERR 500\n\n/* Defines for state_to_server */\n#define UPDATE_VNODES 0\n#define UPDATE_MOM_ONLY 1\n#ifdef __cplusplus\n}\n#endif\n#endif /* _MOM_FUNC_H */\n"
  },
  {
    "path": "src/include/mom_hook_func.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _MOM_HOOK_FUNC_H\n#define _MOM_HOOK_FUNC_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#ifdef linux\n#define REBOOT_CMD \"/sbin/reboot\"\n#elif WIN32\n#define REBOOT_CMD \"\\\\windows\\\\system32\\\\shutdown.exe /g /f /t 5\"\n#else\n#define REBOOT_CMD \"/usr/sbin/reboot\"\n#endif\n\n/* These are attribute names whose value, if set from a hook, will be */\n/* merged with the mom's vnlp list, which is sent to the server upon */\n/* receiving the IS_HELLO sequence.  */\n#define HOOK_VNL_PERSISTENT_ATTRIBS \"resources_available sharing pcpus resources_assigned\"\n\n#define HOOK_RUNNING_IN_BACKGROUND (3)\n\n/* used to send hook's job delete/requeue request to server */\nstruct hook_job_action {\n\tpbs_list_link hja_link;\n\tchar hja_jid[PBS_MAXSVRJOBID + 1]; /* job id */\n\tunsigned long hja_actid;\t   /* action id number */\n\tint hja_runct;\t\t\t   /* job's run count */\n\tenum hook_user hja_huser;\t   /* admin or user */\n\tint hja_action;\t\t\t   /* delete or requeue */\n};\n\n#ifdef VNL_NODENUM\nstruct hook_vnl_action {\n\tpbs_list_link hva_link;\n\tunsigned long hva_actid;\t /* action id number */\n\tchar hva_euser[PBS_MAXUSER + 1]; /* effective hook user */\n\tvnl_t *hva_vnl;\t\t\t /* vnl updates */\n\tint hva_update_cmd;\t\t /* e.g. 
IS_UPDATE_FROM_HOOK */\n};\n#endif\n\n/**\n * @brief\n * \tThe mom_hook_input_t holds the input request parameters\n * \tto the mom_process_hooks() function.\n * @param[in]\tpjob - a job on whose behalf a hook is executing.\n * @param[in]\tprogname - used by execjob_launch hook as the\n * \t\t\tpbs.event().progname value.\n * @param[in]\targv - used by execjob_launch hook as the\n * \t\t\tpbs.event().argv value.\n * @param[in]\tenv - used by execjob_launch hook as the\n * \t\t\tpbs.event().env value.\n * @param[in]\tvnl - a vnl_t * structure used by various hooks\n * \t\t\tthat enumerate the list of vnodes and\n * \t\t\ttheir attributes/resources assigned to a\n * \t\t\tjob, or for exechost_periodic and exechost_startup\n * \t\t\thooks, the vnodes managed by the system.\n * @param[in]\tvnl_fail - a vnl_t * structure used by various hooks\n * \t\t\tthat enumerate the list of vnodes and\n * \t\t\ttheir attributes/resources assigned to a\n * \t\t\tjob, whose parent moms are non-functional.\n * @param[in]\tfailed_mom_list - a svattrl structure enumerating the\n *\t\t\tsister mom hosts that have been seen as down.\n * @param[in]\tsucceeded_mom_list - a svattrl structure enumerating the\n *\t\t\tsister mom hosts that have been seen as up.\n * @param[in]\tpid - used by execjob_attach hook as the\n * \t\t\tpbs.event().pid value.\n * @param[in]\tjobs_list - list of jobs and their attributes/resources\n * \t\t\t    used by the exechost_periodic hook.\n *\n */\ntypedef struct mom_hook_input {\n\tjob *pjob;\n\tchar *progname;\n\tchar **argv;\n\tchar **env;\n\tvoid *vnl;\n\tvoid *vnl_fail;\n\tvoid *failed_mom_list;\n\tvoid *succeeded_mom_list;\n\tpid_t pid;\n\tpbs_list_head *jobs_list;\n} mom_hook_input_t;\n\n/**\n * @brief\n * \tThe mom_hook_output_t holds the output request parameters\n * \tthat are filled in, after calling mom_process_hooks().\n * @param[out]\treject_errcode - the resultant errorcode\n * \t\t(e.g. 
PBSE_HOOKERROR) when job is rejected due\n * \t\tto hook.\n * @param[out]\tlast_phook - the most recent hook that executed.\n * @param[out]\tfail_action - the accumulation of fail_action\n * \t\t\tvalues seen for the hooks that\n * \t\t\texecuted; mom_process_hooks() will\n * \t\t\texecute all hooks responding to a particular\n * \t\t\tevent until reject is encountered.\n * @param[out]\tprogname - the resultant pbs.event().progname value\n * \t\t\t after executing the execjob_launch hooks\n * \t\t\t responding to a particular event.\n * @param[out]\targv - the resultant pbs.event().argv value after\n * \t\t\texecuting the execjob_launch hooks\n * \t\t\tresponding to a particular event.\n * @param[out]\tenv - the resultant pbs.event().env value after\n * \t\t\texecuting the execjob_launch hooks\n * \t\t\tresponding to a particular event.\n * @param[out]\tvnl - a vnl_t * structure holding the vnode changes\n * \t\t\tmade after executing mom_process_hooks().\n * @param[out]\tvnl_fail - a vnl_t * structure holding the changes to\n * \t\t\tfailed vnodes made after executing\n *\t\t\tmom_process_hooks().\n */\ntypedef struct mom_hook_output {\n\tint *reject_errcode;\n\thook **last_phook;\n\tunsigned int *fail_action;\n\tchar **progname;\n\tpbs_list_head *argv;\n\tchar **env;\n\tvoid *vnl;\n\tvoid *vnl_fail;\n} mom_hook_output_t;\n\n/**\n * @brief\n * \tThe mom_process_hooks_params_t holds the arguments of\n *  the mom_process_hooks function, which will be processed in\n *  the post_run_hook function.\n */\ntypedef struct mom_process_hooks_params {\n\tchar *hook_msg;\n\tchar *req_user;\n\tchar *req_host;\n\tint update_svr;\n\tint parent_wait;\n\tunsigned int hook_event;\n\tpid_t child;\n\tsize_t msg_len;\n\tmom_hook_input_t *hook_input;\n\tmom_hook_output_t *hook_output;\n} mom_process_hooks_params_t;\n\nvoid post_reply(job *, int);\nextern int mom_process_hooks(unsigned int 
hook_event, char *req_user, char *req_host,\n\t\t\t     mom_hook_input_t *hook_input,\n\t\t\t     mom_hook_output_t *hook_output,\n\t\t\t     char *hook_msg, size_t msg_len, int update_svr);\nextern void cleanup_hooks_in_path_spool(struct work_task *ptask);\nextern int python_script_alloc(const char *script_path, struct python_script **py_script);\nextern void python_script_free(struct python_script *py_script);\nextern void run_periodic_hook_bg(hook *phook);\nextern int get_hook_results(char *input_file, int *accept_flag, int *reject_flag,\n\t\t\t    char *reject_msg, int reject_msg_size, int *reject_rerunjob,\n\t\t\t    int *reject_deletejob, int *reboot_flag, char *reboot_cmd,\n\t\t\t    int reboot_cmd_size, pbs_list_head *p_obj, job *pjob, hook *phook,\n\t\t\t    int copy_file, mom_hook_output_t *hook_output);\nextern void send_hook_job_action(struct hook_job_action *phja);\nextern void attach_hook_requestor_merge_vnl(hook *phook, void *pnv, job *pjob);\nextern void new_job_action_req(job *pjob, enum hook_user huser, int action);\nextern void send_hook_fail_action(hook *);\nextern void vna_list_free(pbs_list_head);\nextern void mom_hook_input_init(mom_hook_input_t *hook_input);\nextern void mom_hook_output_init(mom_hook_output_t *hook_output);\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _MOM_HOOK_FUNC_H */\n"
  },
  {
    "path": "src/include/mom_server.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _MOM_SERVER_H\n#define _MOM_SERVER_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include \"list_link.h\"\n\n/*\n * Definition of basic structures and functions used for Mom -> Server\n * TPP communication.\n *\n * Job Obituary/Resource Usage requests...\n *\n * These are from Mom to Server only and only via TPP\n */\n\ntypedef struct resc_used_update ruu;\nstruct resc_used_update {\n\truu *ru_next;\n\tchar *ru_pjobid;       /* pointer to job id */\n\tchar *ru_comment;      /* a general message */\n\tint ru_status;\t       /* job exit status (or zero) */\n\tint ru_hop;\t       /* hop/run count of job\t*/\n\tpbs_list_head ru_attr; /* list of svrattrl */\n#ifdef PBS_MOM\n\ttime_t ru_created_at;\t  /* time in epoch at which this ruu was created */\n\tjob *ru_pjob;\t\t  /* pointer to job structure for this ruu */\n\tint ru_cmd;\t\t  /* cmd for this ruu */\n\tpbs_list_link ru_pending; /* link to mom_pending_ruu list */\n#endif\n};\n\n#ifdef PBS_MOM\n#define FREE_RUU(x)                                        \\\n\tdo {                                               \\\n\t\tif (x->ru_pjob) {                          \\\n\t\t\tx->ru_pjob->ji_pending_ruu = NULL; \\\n\t\t\tx->ru_pjob = NULL;                 \\\n\t\t}                                          \\\n\t\tdelete_link(&x->ru_pending);               
\\\n\t\tfree_attrlist(&x->ru_attr);                \\\n\t\tif (x->ru_pjobid)                          \\\n\t\t\tfree(x->ru_pjobid);                \\\n\t\tif (x->ru_comment)                         \\\n\t\t\tfree(x->ru_comment);               \\\n\t\tfree(x);                                   \\\n\t} while (0)\n#else\n#define FREE_RUU(x)                          \\\n\tdo {                                 \\\n\t\tfree_attrlist(&x->ru_attr);  \\\n\t\tif (x->ru_pjobid)            \\\n\t\t\tfree(x->ru_pjobid);  \\\n\t\tif (x->ru_comment)           \\\n\t\t\tfree(x->ru_comment); \\\n\t\tfree(x);                     \\\n\t} while (0)\n#endif\n\nextern int job_obit(ruu *, int);\nextern int enqueue_update_for_send(job *, int);\nextern void send_resc_used(int cmd, int count, ruu *rud);\nextern void send_pending_updates(void);\nextern char mom_short_name[];\n\n#ifdef _PBS_JOB_H\nextern u_long resc_used(job *, char *, u_long (*func)(resource *pres));\n#endif /* _PBS_JOB_H */\n#ifdef __cplusplus\n}\n#endif\n#endif /* _MOM_SERVER_H */\n"
  },
  {
    "path": "src/include/mom_vnode.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include \"job.h\"\n#include \"server_limits.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"pbs_nodes.h\"\n\n/*\n *\tA mom that manages its own lists of CPUs needs to provide a function\n *\tthat frees these CPUs when the job terminates.  If non-NULL, this\n *\tfunction pointer will be called from mom_deljob().\n */\nextern void (*free_job_CPUs)(job *);\n\n/*\n *\tThese are interfaces to functions that manipulate CPU states for moms\n *\tthat manage their own CPU lists.  The cpuindex_*() functions are used\n *\twhen referring to a CPU by its relative position on a given mom_vninfo_t\n *\tCPU list, while the cpunum_*() functions deal with physical CPU numbers.\n *\n *\tget_cpubits() and get_membits() initialize memory bitmasks used to\n *\trepresent the CPUs (resp. memory boards) discovered while parsing\n *\tvnode definitions files.\n */\nextern void cpuindex_free(mom_vninfo_t *, unsigned int);\nextern void cpuindex_inuse(mom_vninfo_t *, unsigned int, job *);\nextern void cpunum_outofservice(unsigned int);\nextern void cpu_raresync(void);\n"
  },
  {
    "path": "src/include/net_connect.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _NET_CONNECT_H\n#define _NET_CONNECT_H\n\n/*\n * Other Include Files Required\n *\t<sys/types.h>\n *       \"pbs_ifl.h\"\n */\n#include <sys/types.h>\n#include <unistd.h>\n#include \"list_link.h\"\n#include \"auth.h\"\n#define PBS_NET_H\n#ifndef PBS_NET_TYPE\ntypedef unsigned long pbs_net_t; /* for holding host addresses */\n#define PBS_NET_TYPE\n#endif\n\n#ifndef INADDR_NONE\n#define INADDR_NONE (unsigned int) 0xFFFFFFFF\n#endif\n\n#define PBS_NET_MAXCONNECTIDLE 900\n\n/* flag bits for cn_authen field */\n#define PBS_NET_CONN_AUTHENTICATED 0x01\n#define PBS_NET_CONN_FROM_PRIVIL 0x02\n#define PBS_NET_CONN_NOTIMEOUT 0x04\n#define PBS_NET_CONN_FROM_QSUB_DAEMON 0x08\n#define PBS_NET_CONN_FORCE_QSUB_UPDATE 0x10\n#define PBS_NET_CONN_PREVENT_IP_SPOOFING 0x20\n\n#define QSUB_DAEMON \"qsub-daemon\"\n\n/*\n **\tProtocol numbers and versions for PBS communications.\n */\n\n#define RM_PROTOCOL 1 /* resource monitor protocol number */\n#define TM_PROTOCOL 2 /* task manager protocol number */\n#define IM_PROTOCOL 3 /* inter-mom protocol number */\n#define IS_PROTOCOL 4 /* inter-server protocol number */\n\n/* When the protocol changes, increment the version.\n* Not to be changed lightly, as it makes everything incompatible.\n*/\n#define RM_PROTOCOL_VER 1     /* resmon protocol version number */\n#define TM_PROTOCOL_VER 2     /* task manager protocol 
version number */\n#define TM_PROTOCOL_OLD 1     /* old task manager protocol version number */\n#define IM_PROTOCOL_VER 6     /* inter-mom protocol version number */\n#define IM_OLD_PROTOCOL_VER 5 /* inter-mom old protocol version number */\n#define IS_PROTOCOL_VER 4     /* inter-server protocol version number */\n\n/*\tTypes of Inter Server messages (between Server and Mom). */\n#define IS_NULL 0\n#define IS_CMD 1\n#define IS_CMD_REPLY 2\n#define IS_CLUSTER_ADDRS 3\n#define IS_UPDATE 4\n#define IS_RESCUSED 5\n#define IS_JOBOBIT 6\n#define IS_OBITREPLY 7\n#define IS_REPLYHELLO 8\n#define IS_SHUTDOWN 9\n#define IS_IDLE 10\n#define IS_REGISTERMOM 11\n#define IS_UPDATE2 12\n#define IS_DISCARD_JOB 13\n#define IS_DISCARD_DONE 14\n#define IS_UPDATE_FROM_HOOK 15\t\t   /* request to update vnodes from a hook running on parent mom host */\n#define IS_RESCUSED_FROM_HOOK 16\t   /* request from child mom for a hook */\n#define IS_HOOK_JOB_ACTION 17\t\t   /* request from hook to delete/requeue job */\n#define IS_HOOK_ACTION_ACK 18\t\t   /* acknowledge a request of the above 2    */\n#define IS_HOOK_SCHEDULER_RESTART_CYCLE 19 /* hook wishes the scheduler to recycle */\n#define IS_HOOK_CHECKSUMS 20\t\t   /* mom reports about hooks seen */\n#define IS_UPDATE_FROM_HOOK2 21\t\t   /* request to update vnodes from a hook running on a parent mom host or an allowed non-parent mom host */\n#define IS_HELLOSVR 22\t\t\t   /* hello sent to server from mom to initiate a hello sequence */\n\n/* return codes for client_to_svr() */\n\n#define PBS_NET_RC_FATAL -1\n#define PBS_NET_RC_RETRY -2\n\n/* bit flags: authentication method (resv ports/external) and authentication mode (svr/client) */\n\n#define B_RESERVED 0x1 /* need reserved port */\n#define B_SVR 0x2      /* generate server type auth message */\n\n/**\n * @brief\n * enum conn_type is used to (1) indicate that a connection table entry is in\n * use or is free (Idle).  
Additional meanings for the entries are:\n *\n * @verbatim\n * \tPrimary\n * \t\tthe primary entry (port) on which the daemon is listening for\n *\t\tconnections from a client\n * \tSecondary\n * \t\tanother connection on which the daemon is listening, for a different\n *\t\tservice such as the \"resource monitor\" part of Mom.\n *\t\tIf init_network() is called twice, the second port/entry is\n *\t\tmarked as the Secondary\n *\tFromClientDIS\n *\t\ta client initiated connection\n *\tTppComm\n *\t\tTPP based connection\n *\tChildPipe\n *\t\tUsed by Mom for a \"unix\" pipe between herself and a child;\n *\t\tthis is not an IP connection.\n *\n * @endverbatim\n *\n * @note\n *\tThe entries marked as Primary, Secondary, and TppComm do not require\n *\tadditional authentication of the user making the request.\n */\ntypedef struct connection conn_t;\nenum conn_type {\n\tPrimary = 0,\n\tSecondary,\n\tFromClientDIS,\n\tToServerDIS,\n\tTppComm,\n\tChildPipe,\n\tIdle\n};\n\n/*\n * This is used to know where the connection originated from.\n * This can be extended to have MOM and other clients of the Server in the future.\n */\ntypedef enum conn_origin {\n\tCONN_UNKNOWN = 0,\n\tCONN_SCHED_PRIMARY,\n\tCONN_SCHED_SECONDARY,\n\tCONN_SCHED_ANY\n} conn_origin_t;\n\n/* functions available in libnet.a */\n\nconn_t *add_conn(int sock, enum conn_type, pbs_net_t, unsigned int port, int (*ready_func)(conn_t *), void (*func)(int));\nint set_conn_as_priority(conn_t *);\nint add_conn_data(int sock, void *data); /* Adds the data to the connection */\nvoid *get_conn_data(int sock);\t\t /* Gets the pointer to the data present with the connection */\nint client_to_svr(pbs_net_t, unsigned int port, int);\nint client_to_svr_extend(pbs_net_t, unsigned int port, int, char *);\nvoid close_conn(int socket);\npbs_net_t get_connectaddr(int sock);\nint get_connecthost(int sock, char *namebuf, int size);\npbs_net_t get_hostaddr(char *hostname);\nint comp_svraddr(pbs_net_t, char *, pbs_net_t *);\nint 
compare_short_hostname(char *shost, char *lhost);\nunsigned int get_svrport(char *servicename, char *proto, unsigned int df);\nint init_network(unsigned int port);\nint init_network_add(int sock, int (*readyreadfunc)(conn_t *), void (*readfunc)(int));\nvoid net_close(int);\nint wait_request(float waittime, void *priority_context);\nextern void *priority_context;\nvoid net_add_close_func(int, void (*)(int));\nextern pbs_net_t get_addr_of_nodebyname(char *name, unsigned int *port);\nextern int make_host_addresses_list(char *phost, u_long **pul);\n\nconn_t *get_conn(int sock); /* gets the connection, for a given socket id */\nvoid connection_idlecheck(void);\nvoid connection_init(void);\nchar *build_addr_string(pbs_net_t);\nint set_nodelay(int fd);\nextern void process_IS_CMD(int);\n\nstruct connection {\n\tint cn_sock;\t\t\t/* socket descriptor */\n\tpbs_net_t cn_addr;\t\t/* internet address of client */\n\tint cn_sockflgs;\t\t/* file status flags - fcntl(F_SETFL) */\n\tunsigned int cn_port;\t\t/* internet port number of client */\n\tunsigned short cn_authen;\t/* authentication flags */\n\tenum conn_type cn_active;\t/* idle or type if active */\n\ttime_t cn_lasttime;\t\t/* time last active */\n\tint (*cn_ready_func)(conn_t *); /* true if data rdy for cn_func */\n\tvoid (*cn_func)(int);\t\t/* read function when data rdy */\n\tvoid (*cn_oncl)(int);\t\t/* func to call on close */\n\tunsigned short cn_prio_flag;\t/* flag for a priority socket */\n\tpbs_list_link cn_link;\t\t/* link to the next connection in the linked list */\n\t/* following attributes are for */\n\t/* credential checking */\n\ttime_t cn_timestamp;\n\tvoid *cn_data; /* pointer to some data for cn_func */\n\tchar cn_username[PBS_MAXUSER + 1];\n\tchar cn_hostname[PBS_MAXHOSTNAME + 1];\n\tchar *cn_credid;\n\tchar cn_physhost[PBS_MAXHOSTNAME + 1];\n\tpbs_auth_config_t *cn_auth_config;\n\tconn_origin_t cn_origin; /* used to know the origin of the connection i.e. Scheduler, MOM etc. 
*/\n};\n#endif /* _NET_CONNECT_H */\n"
  },
  {
    "path": "src/include/pbs_array_list.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef PBS_ARRAY_LIST_H__\n#define PBS_ARRAY_LIST_H__\n\n/*\n * The data structures, macros and functions in this header file are used for\n * compressing the list of IP addresses sent across from the\n * server to the MOM(s) as part of the IS_CLUSTER_ADDRS message.\n *\n * The high-level algorithm is to reduce a given set of IP addresses to range(s)\n * E.g.: Given: 1,2,3,4,5,8,9,10,11 => {1-5},{8-11}\n * Each range is stored as an ordered pair: (a,b). The first element 'a' refers\n * to the first IP address in the range and the second element 'b' refers to\n * the number of additional continuous IP addresses.\n * e.g. 
(1,5) => {1,2,3,4,5,6}  from 1  to (1+5)\n *      (5,3) => {5,6,7,8}      from 5  to (5+3)\n *     (11,0) => {11}           from 11 to (11+0)\n *\n * Each ordered pair is represented by a 'PBS_IP_RANGE' data structure.\n * For a given ordered pair (a,b), the first element 'a' is referred\n * to as 'ra_low' and the second element 'b' is referred to as 'ra_high' in the\n * code/documentation.\n *\n */\n\n/**\n * 'T' is used to store 'ra_low' and 'ra_high'.\n * This was 'typedef-ed to allow for inclusion of IP v6 addresses possibly\n * in the future\n */\ntypedef long unsigned int T;\n\n/**\n * 'PBS_IP_RANGE' is used to store the ordered pair (ra_low,ra_high) where 'ra_low' is the\n * starting IP address in a range and 'ra_high' gives the number of IP address\n * in the range\n */\ntypedef struct pbs_ip_range {\n\tT ra_low;\n\tT ra_high;\n} PBS_IP_RANGE; /* ra_high  is the number of addresses in the range 'in addition' to the starting address */\n\ntypedef PBS_IP_RANGE *pntPBS_IP_RANGE;\n\n/**\n * The PBS_IP_LIST data structure contains an array of ordered pairs (PBS_IP_RANGE)\n * Carries meta-data about the range: the number of slots used and number of\n * slots available\n */\ntypedef struct pbs_ip_list {\n\tpntPBS_IP_RANGE li_range;\n\tint li_nrowsused;\n\tint li_totalsize;\n} PBS_IP_LIST;\n\ntypedef PBS_IP_LIST *pntPBS_IP_LIST;\n#define CHUNK 5 /* The number of slots by which PBS_IP_LIST is resized */\n#define INIT_VALUE 0\n\n/* Various macros to retrieve or set 'ra_low' or 'ra_high' for a given PBS_IP_RANGE */\n#define IPLIST_GET_LOW(X, Y) (X)->li_range[(Y)].ra_low\n#define IPLIST_GET_HIGH(X, Y) (X)->li_range[(Y)].ra_high\n#define IPLIST_SET_LOW(X, Y, Z) (X)->li_range[(Y)].ra_low = (Z)\n#define IPLIST_SET_HIGH(X, Y, Z) (X)->li_range[(Y)].ra_high = (Z)\n#define IPLIST_IS_CONTINUOUS(X, Y) ((X) + 1 == (Y))\n\n#define IPLIST_IS_CONTINUOUS_ROW(X, Y, Z) (IPLIST_IS_CONTINUOUS((IPLIST_GET_LOW(X, Y) + IPLIST_GET_HIGH(X, Y)), (Z)))\n#define IPLIST_IS_ROW_SAME(X, Y, Z) 
((IPLIST_GET_LOW(X, Y) + IPLIST_GET_HIGH(X, Y)) == (Z))\n#define IPLIST_MOVE_DOWN(X, Y) (((X) - (Y)) * sizeof(PBS_IP_RANGE))\n#define IPLIST_MOVE_UP(X, Y) (((X) - ((Y) + 1)) * sizeof(PBS_IP_RANGE))\n#define IPLIST_SHIFT_ALL_DOWN_BY_ONE(X, Y, Z) memmove((X)->li_range + (Y) + 1, (X)->li_range + (Y), (Z) * sizeof(PBS_IP_RANGE))\n#define IPLIST_SHIFT_ALL_UP_BY_ONE(X, Y, Z) memmove((X)->li_range + (Y), (X)->li_range + (Y) + 1, (Z) * sizeof(PBS_IP_RANGE))\n\n#define IPLIST_INSERT_SUCCESS 0\n#define IPLIST_INSERT_FAILURE -1\n#define IPLIST_DELETE_SUCCESS 0\n#define IPLIST_DELETE_FAILURE -1\n\n/**\n * @brief\n *\tCreates an array of size CHUNK of type PBS_IP_RANGE\n *\n * @par Functionality:\n *      This function is invoked by create_pbs_iplist().\n *      It results in an array of PBS_IP_RANGE type of size CHUNK,\n *      which is an array of ordered pairs (a,b)\n *\n * @param[in]\tvoid\n *\n * @return\tpntPBS_IP_RANGE a pointer to PBS_IP_RANGE\n */\npntPBS_IP_RANGE create_pbs_range(void);\n\n/**\n * @brief\n *\tReallocates the array of PBS_IP_RANGE by CHUNK\n *\n * @par Functionality:\n *      Since the PBS_IP_LIST is built dynamically at run-time,\n *      more slots are created by invoking this function as required.\n *\n * @param[in]\tpntPBS_IP_LIST pointer to the PBS_IP_LIST to be resized\n *\n * @return\tpntPBS_IP_LIST a pointer to the newly reallocated PBS_IP_LIST\n */\npntPBS_IP_LIST resize_pbs_iplist(pntPBS_IP_LIST);\n\n/**\n * @brief\n *\tCreates an instance of PBS_IP_LIST\n *\n * @par Functionality:\n *      Invokes create_pbs_range() to\n *      create a PBS_IP_RANGE to store ordered pairs and sets 'totalsize' to CHUNK\n *\n * @param[in]\tvoid\n *\n * @return\tpntPBS_IP_LIST a pointer to PBS_IP_LIST or NULL if memory allocation fails\n */\npntPBS_IP_LIST create_pbs_iplist(void);\n\n/**\n * @brief\n *\tFrees memory associated with PBS_IP_LIST and PBS_IP_RANGE\n *\n * @param[in]\tpntPBS_IP_LIST, pointer to PBS_IP_LIST to be freed\n *\n * @return\tvoid\n */\nvoid 
delete_pbs_iplist(pntPBS_IP_LIST);\n\n/**\n * @brief\n *\tIdentifies location of slot in which to insert new incoming element.\n *\n * @par Functionality:\n *      This function is invoked by both insert_iplist_element() and delete_iplist_element().\n *      The function takes a pointer to the PBS_IP_LIST in which to search for key 'T'.\n *      The function performs a binary search over only the 'ra_low' elements of\n *      all the ordered pairs in the PBS_IP_LIST. If the element is found, the\n *      function returns the index at which the key is found. Otherwise, the function\n *      returns the index at which the element should be inserted. This is set\n *      in the third variable which is passed by reference to the function.\n *\n * @param[in]\tpntPBS_IP_LIST pointer to the PBS_IP_LIST in which search is done\n * @param[in]  T the key for which to search in PBS_IP_LIST\n * @param[in]  int* The variable in which location of insertion/deletion is set\n *\n * @return\tNon-negative if location found, -1 if location not found\n */\nint search_iplist_location(pntPBS_IP_LIST, T, int *);\n\n/**\n * @brief\n *\tInserts provided key into provided PBS_IP_LIST\n *\n * @par Functionality:\n *      The function first calls search_iplist_location to determine the location at\n *      which to insert the new element.\n *      The function can determine when the insertion of a new key may cause\n *      two distinct ranges to merge and does so.\n *      The function invokes resize_pbs_iplist to resize the PBS_IP_LIST as required.\n *      Builds the PBS_IP_LIST dynamically at run-time.\n *\n * @param[in]\tpntPBS_IP_LIST pointer to the PBS_IP_LIST in which to insert key.\n * @param[in]  T the key to insert in PBS_IP_LIST\n *\n * @return\n * 0 - SUCCESS\n * -1 - FAILURE\n */\nint insert_iplist_element(pntPBS_IP_LIST, T);\n\n/**\n * @brief\n *\tDeletes provided key from given PBS_IP_LIST\n *\n * @par Functionality:\n *      The function takes the provided key and removes it from the given\n *      
PBS_IP_LIST. If the key matches an element inside a range, then the\n *      range needs to be split into two ranges. The function takes care of\n *      the splitting of the ranges.\n *\n * @param[in]\tpntPBS_IP_LIST pointer to the PBS_IP_LIST from which to delete the key.\n * @param[in]  T the key to delete from PBS_IP_LIST\n *\n * @return\n * 0 - SUCCESS\n * -1 - FAILURE\n */\nint delete_iplist_element(pntPBS_IP_LIST, T);\n\n#endif\n"
  },
  {
    "path": "src/include/pbs_assert.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef NDEBUG\n#define assert(as)                                                                                              \\\n\t{                                                                                                       \\\n\t\tif (!(as)) {                                                                                    \\\n\t\t\t(void) fprintf(stderr, \"Assertion failed: file \\\"%s\\\", line %d\\n\", __FILE__, __LINE__); \\\n\t\t\texit(1);                                                                                \\\n\t\t}                                                                                               \\\n\t}\n#else\n#define assert(ex)\n#endif\n"
  },
  {
    "path": "src/include/pbs_client_thread.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    pbs_client_thread.h\n *\n * @brief\n *\tPbs threading related functions declarations and structures\n */\n\n#ifndef _PBS_CLIENT_THREAD_H\n#define _PBS_CLIENT_THREAD_H\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include <pthread.h>\n\n/**\n * @brief\n *  Structure used for storing the connection related context data\n *\n *  Since each thread can open multiple connections, these connection specific\n *  data that might be accessed by the thread after an API call needs to be\n *  saved. An example of this is a thread calling pbs_submit(c1) and\n *  pbs_submit(c2) and then calling pbs_geterrmsg(c1) and pbs_geterrmsg(c2).\n *  The connection level errtxt and errno cannot be left at a global connection\n *  table level, since multiple threads will overwrite it when they share a\n *  connection (since the locking is done at an API level, and the errmsg might\n *  be requested past lock boundaries).\n *\n *  The structure connect_context captures ch_errno, and ch_errtxt from the\n *  connection handle. 
For each connection associated with a thread, a node of\n *  type struct connect_context is stored into the linked list headed by member\n *  th_ch_conn_context in structure thread_context\n */\nstruct pbs_client_thread_connect_context {\n\t/** connection handle */\n\tint th_ch;\n\t/** last error number that occurred on this connection handle */\n\tint th_ch_errno;\n\t/** last server error text on this connection handle */\n\tchar *th_ch_errtxt;\n\t/** link to the next node in the linked list */\n\tstruct pbs_client_thread_connect_context\n\t\t*th_ch_next;\n};\n\n/**\n * @brief\n *  Structure used to store thread level context data (TLS)\n *\n *  struct thread_context is the consolidated data that is required by each\n *  thread during its flow through the IFL API and communication with the server.\n *  The structure is allocated and stored in the TLS area during thread_init\n */\nstruct pbs_client_thread_context {\n\t/** stores the global pbs errno */\n\tint th_pbs_errno;\n\t/** head pointer to the linked list of connection contexts */\n\tstruct pbs_client_thread_connect_context\n\t\t*th_conn_context;\n\t/** pointer to the array of attribute error structures */\n\tstruct ecl_attribute_errors\n\t\t*th_errlist;\n\t/** pointer to the location for the dis_buffer for each thread */\n\tchar *th_dis_buffer;\n\t/** pointer to the cred_info structure used by pbs_submit_with_cred */\n\tvoid *th_cred_info;\n\t/** used by totpool and usepool functions */\n\tvoid *th_node_pool;\n\tchar th_pbs_server[PBS_MAXSERVERNAME + 1];\n\tchar th_pbs_defserver[PBS_MAXSERVERNAME + 1];\n\tchar th_pbs_current_user[PBS_MAXUSER + 1];\n\ttime_t th_pbs_tcp_timeout;\n\tint th_pbs_tcp_interrupt;\n\tint th_pbs_tcp_errno;\n\tint th_pbs_mode;\n};\n\n/* corresponding function pointers for the externally used functions */\nextern int (*pfn_pbs_client_thread_lock_connection)(int connect);\nextern int (*pfn_pbs_client_thread_unlock_connection)(int connect);\nextern struct pbs_client_thread_context 
*(*pfn_pbs_client_thread_get_context_data)(void);\nextern int (*pfn_pbs_client_thread_lock_conntable)(void);\nextern int (*pfn_pbs_client_thread_unlock_conntable)(void);\nextern int (*pfn_pbs_client_thread_lock_conf)(void);\nextern int (*pfn_pbs_client_thread_unlock_conf)(void);\nextern int (*pfn_pbs_client_thread_init_thread_context)(void);\nextern int (*pfn_pbs_client_thread_init_connect_context)(int connect);\nextern int (*pfn_pbs_client_thread_destroy_connect_context)(int connect);\n\n/* #defines for functions called by other code */\n#define pbs_client_thread_lock_connection(connect) \\\n\t(*pfn_pbs_client_thread_lock_connection)(connect)\n#define pbs_client_thread_unlock_connection(connect) \\\n\t(*pfn_pbs_client_thread_unlock_connection)(connect)\n#define pbs_client_thread_get_context_data() \\\n\t(*pfn_pbs_client_thread_get_context_data)()\n#define pbs_client_thread_lock_conntable() \\\n\t(*pfn_pbs_client_thread_lock_conntable)()\n#define pbs_client_thread_unlock_conntable() \\\n\t(*pfn_pbs_client_thread_unlock_conntable)()\n#define pbs_client_thread_lock_conf() \\\n\t(*pfn_pbs_client_thread_lock_conf)()\n#define pbs_client_thread_unlock_conf() \\\n\t(*pfn_pbs_client_thread_unlock_conf)()\n#define pbs_client_thread_init_thread_context() \\\n\t(*pfn_pbs_client_thread_init_thread_context)()\n#define pbs_client_thread_init_connect_context(connect) \\\n\t(*pfn_pbs_client_thread_init_connect_context)(connect)\n#define pbs_client_thread_destroy_connect_context(connect) \\\n\t(*pfn_pbs_client_thread_destroy_connect_context)(connect)\n\n/* functions to add/remove/find connection context to the thread context */\nstruct pbs_client_thread_connect_context *\npbs_client_thread_add_connect_context(int connect);\nint pbs_client_thread_remove_connect_context(int connect);\nstruct pbs_client_thread_connect_context *\npbs_client_thread_find_connect_context(int connect);\nvoid free_errlist(struct ecl_attribute_errors *errlist);\n\n/* function called by daemons to set them to 
use the unthreaded functions */\nvoid pbs_client_thread_set_single_threaded_mode(void);\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif /* _PBS_CLIENT_THREAD_H */\n"
  },
  {
    "path": "src/include/pbs_db.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    pbs_db.h\n *\n * @brief\n * PBS database interface. (Functions declarations and structures)\n *\n * This header file contains functions to access the PBS data store\n * Only these functions should be used by PBS code. 
Actual implementations\n * of these functions are database specific and are implemented in Libdb.\n *\n * In most cases, the sizes of the fields in the structures correspond\n * one to one with the column sizes of the respective tables in the database.\n * The functions/interfaces in this header are PBS Private.\n */\n\n#ifndef _PBS_DB_H\n#define _PBS_DB_H\n\n#include <pbs_ifl.h>\n#include <sys/types.h>\n#include <stdlib.h>\n#include <string.h>\n#include <time.h>\n#include \"list_link.h\"\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#ifndef MIN\n#define MIN(x, y) (((x) < (y)) ? (x) : (y))\n#endif\n#ifndef MAX\n#define MAX(x, y) (((x) > (y)) ? (x) : (y))\n#endif\n\n#define PBS_MAX_DB_CONN_INIT_ERR (MAXPATHLEN * 2)\n\n/* type of saves bit wise flags - see savetype */\n#define OBJ_SAVE_NEW 1 /* object is new, so whole object should be saved */\n#define OBJ_SAVE_QS 2  /* quick save area modified, it should be saved */\n\n/**\n * @brief\n * Following is a set of mappings of DATABASE vs C data types. 
These are\n * typedefed here to allow mapping the database data types easily.\n */\ntypedef short SMALLINT;\ntypedef int INTEGER;\ntypedef long long BIGINT;\ntypedef char *TEXT;\n\nstruct pbs_db_attr_list {\n\tint attr_count;\n\tpbs_list_head attrs;\n};\n\ntypedef struct pbs_db_attr_list pbs_db_attr_list_t;\n\n/**\n * @brief\n *  Structure used to map database server structure to C\n *\n */\nstruct pbs_db_svr_info {\n\tBIGINT sv_jobidnumber;\n\tpbs_db_attr_list_t db_attr_list; /* list of attributes */\n};\ntypedef struct pbs_db_svr_info pbs_db_svr_info_t;\n\n/**\n * @brief\n *  Structure used to map database scheduler structure to C\n *\n */\nstruct pbs_db_sched_info {\n\tchar sched_name[PBS_MAXSCHEDNAME + 1]; /* sched name */\n\tpbs_db_attr_list_t db_attr_list;       /* list of attributes */\n};\ntypedef struct pbs_db_sched_info pbs_db_sched_info_t;\n\n/**\n * @brief\n *  Structure used to map database queue structure to C\n *\n */\nstruct pbs_db_que_info {\n\tchar qu_name[PBS_MAXQUEUENAME + 1]; /* queue name */\n\tINTEGER qu_type;\t\t    /* queue type: exec, route */\n\tpbs_db_attr_list_t db_attr_list;    /* list of attributes */\n};\ntypedef struct pbs_db_que_info pbs_db_que_info_t;\n\n/**\n * @brief\n *  Structure used to map database node structure to C\n *\n */\nstruct pbs_db_node_info {\n\tchar nd_name[PBS_MAXSERVERNAME + 1];\t /* vnode's name */\n\tINTEGER nd_index;\t\t\t /* global node index */\n\tBIGINT mom_modtime;\t\t\t /* node config update time */\n\tchar nd_hostname[PBS_MAXSERVERNAME + 1]; /* node hostname */\n\tINTEGER nd_state;\t\t\t /* state of node */\n\tINTEGER nd_ntype;\t\t\t /* node type */\n\tchar nd_pque[PBS_MAXSERVERNAME + 1];\t /* queue to which it belongs */\n\tpbs_db_attr_list_t db_attr_list;\t /* list of attributes */\n};\ntypedef struct pbs_db_node_info pbs_db_node_info_t;\n\n/**\n * @brief\n *  Structure used to map database mominfo_time structure to C\n *\n */\nstruct pbs_db_mominfo_time {\n\tBIGINT mit_time; /* time of the host to 
vnode map */\n\tINTEGER mit_gen; /* generation of the host to vnode map */\n};\ntypedef struct pbs_db_mominfo_time pbs_db_mominfo_time_t;\n\n/**\n * @brief\n *  Structure used to map database job structure to C\n *\n */\nstruct pbs_db_job_info {\n\tchar ji_jobid[PBS_MAXSVRJOBID + 1];   /* job identifier */\n\tINTEGER ji_state;\t\t      /* Internal copy of state */\n\tINTEGER ji_substate;\t\t      /* job sub-state */\n\tINTEGER ji_svrflags;\t\t      /* server flags */\n\tBIGINT ji_stime;\t\t      /* time job started execution */\n\tchar ji_queue[PBS_MAXQUEUENAME + 1];  /* name of current queue */\n\tchar ji_destin[PBS_MAXROUTEDEST + 1]; /* dest from qmove/route */\n\tINTEGER ji_un_type;\t\t      /* job's queue type */\n\tINTEGER ji_exitstat;\t\t      /* job exit status from MOM */\n\tBIGINT ji_quetime;\t\t      /* time entered queue */\n\tBIGINT ji_rteretry;\t\t      /* route retry time */\n\tINTEGER ji_fromsock;\t\t      /* socket job coming over */\n\tBIGINT ji_fromaddr;\t\t      /* host job coming from   */\n\tchar ji_jid[8];\t\t\t      /* extended job save data */\n\tINTEGER ji_credtype;\t\t      /* credential type */\n\tBIGINT ji_qrank;\t\t      /* sort key for db query */\n\tpbs_db_attr_list_t db_attr_list;      /* list of attributes for database */\n};\ntypedef struct pbs_db_job_info pbs_db_job_info_t;\n\n/**\n * @brief\n *  Structure used to map database job script to C\n *\n */\nstruct pbs_db_jobscr_info {\n\tchar ji_jobid[PBS_MAXSVRJOBID + 1]; /* job identifier */\n\tTEXT script;\t\t\t    /* job script */\n};\ntypedef struct pbs_db_jobscr_info pbs_db_jobscr_info_t;\n\n/**\n * @brief\n *  Structure used to map database resv structure to C\n *\n */\nstruct pbs_db_resv_info {\n\tchar ri_resvid[PBS_MAXSVRJOBID + 1]; /* reservation identifier */\n\tchar ri_queue[PBS_MAXQUEUENAME + 1]; /* queue used by reservation */\n\tINTEGER ri_state;\t\t     /* internal copy of state */\n\tINTEGER ri_substate;\t\t     /* substate of resv state */\n\tBIGINT ri_stime;\t\t     
/* left window boundary  */\n\tBIGINT ri_etime;\t\t     /* right window boundary */\n\tBIGINT ri_duration;\t\t     /* reservation duration */\n\tINTEGER ri_tactive;\t\t     /* time reservation became active */\n\tINTEGER ri_svrflags;\t\t     /* server flags */\n\tpbs_db_attr_list_t db_attr_list;     /* list of attributes */\n};\ntypedef struct pbs_db_resv_info pbs_db_resv_info_t;\n\n/**\n * @brief\n *  Structure used to pass database query options to database functions\n *\n *  Flags field can be used to pass any flags to a query function.\n *  Timestamp field can be used to pass a timestamp, to return rows that have\n *  a modification timestamp newer (more recent) than the timestamp passed.\n *  (Basically, to return rows that have been modified since a point in time)\n *\n */\nstruct pbs_db_query_options {\n\tint flags;\n\ttime_t timestamp;\n};\ntypedef struct pbs_db_query_options pbs_db_query_options_t;\n\n#define PBS_DB_SVR 0\n#define PBS_DB_SCHED 1\n#define PBS_DB_QUEUE 2\n#define PBS_DB_NODE 3\n#define PBS_DB_MOMINFO_TIME 4\n#define PBS_DB_JOB 5\n#define PBS_DB_JOBSCR 6\n#define PBS_DB_RESV 7\n#define PBS_DB_NUM_TYPES 8\n\n/* connection error code */\n#define PBS_DB_SUCCESS 0\n#define PBS_DB_CONNREFUSED 1\n#define PBS_DB_AUTH_FAILED 2\n#define PBS_DB_CONNFAILED 3\n#define PBS_DB_NOMEM 4\n#define PBS_DB_STILL_STARTING 5\n#define PBS_DB_ERR 6\n#define PBS_DB_OOM_ERR 7\n\n/* Database connection states */\n#define PBS_DB_CONNECT_STATE_NOT_CONNECTED 1\n#define PBS_DB_CONNECT_STATE_CONNECTING 2\n#define PBS_DB_CONNECT_STATE_CONNECTED 3\n#define PBS_DB_CONNECT_STATE_FAILED 4\n\n/* Database states */\n#define PBS_DB_DOWN 1\n#define PBS_DB_STARTING 2\n#define PBS_DB_STARTED 3\n\n/**\n * @brief\n *  Wrapper object structure. Contains a pointer to one of the several database\n *  structures.\n *\n *  Most of the database manipulation/query functions take this structure as a\n *  parameter. 
Depending on the contained structure type, an appropriate internal\n *  database manipulation/query function is eventually called. This keeps the\n *  interface simpler and more generic.\n *\n */\nstruct pbs_db_obj_info {\n\tint pbs_db_obj_type; /* identifies the contained object type */\n\tunion {\n\t\tpbs_db_svr_info_t *pbs_db_svr;\t\t  /* map database server structure to C */\n\t\tpbs_db_sched_info_t *pbs_db_sched;\t  /* map database scheduler structure to C */\n\t\tpbs_db_que_info_t *pbs_db_que;\t\t  /* map database queue structure to C */\n\t\tpbs_db_node_info_t *pbs_db_node;\t  /* map database node structure to C */\n\t\tpbs_db_mominfo_time_t *pbs_db_mominfo_tm; /* map database mominfo_time structure to C */\n\t\tpbs_db_job_info_t *pbs_db_job;\t\t  /* map database job structure to C */\n\t\tpbs_db_jobscr_info_t *pbs_db_jobscr;\t  /* map database job script to C */\n\t\tpbs_db_resv_info_t *pbs_db_resv;\t  /* map database resv structure to C */\n\t} pbs_db_un;\n};\ntypedef struct pbs_db_obj_info pbs_db_obj_info_t;\ntypedef void (*query_cb_t)(pbs_db_obj_info_t *, int *);\n\n#define PBS_DB_CNT_TIMEOUT_NORMAL 30\n#define PBS_DB_CNT_TIMEOUT_INFINITE 0\n\n/* Database start stop control commands */\n#define PBS_DB_CONTROL_STATUS \"status\"\n#define PBS_DB_CONTROL_START \"start\"\n#define PBS_DB_CONTROL_STOP \"stop\"\n\n/**\n * @brief\n *\tInitialize a database connection handle\n *      - creates a database connection handle\n *      - Initializes various fields of the connection structure\n *      - Retrieves connection password and sets the database\n *        connect string\n *\n * @param[out]  conn\t\t- Initialized connection handle\n * @param[in]   host\t\t- The name of the host on which the database resides\n * @param[in]\tport\t\t- The port number where the database is running\n * @param[in]   timeout\t\t- The timeout value in seconds to attempt the connection\n *\n * @return      int\n * @retval      !0  - Failure\n * @retval      0 - Success\n *\n */\nint 
pbs_db_connect(void **conn, char *host, int port, int timeout);\n\n/**\n * @brief\n *\tDisconnect from the database and frees all allocated memory.\n *\n * @param[in]   conn - Connected database handle\n *  \n * @return      Failure error code\n * @retval      Non-zero  - Failure\n * @retval      0 - Success\n *\n */\nint pbs_db_disconnect(void *conn);\n\n/**\n * @brief\n *\tInsert a new object into the database\n *\n * @param[in]\tconn - Connected database handle\n * @param[in]\tpbs_db_obj_info_t - Wrapper object that describes the object\n *              (and data) to insert\n * @param[in]   savetype - Update or Insert\n *\n * @return      int\n * @retval      -1  - Failure\n * @retval       0  - success\n *\n */\nint pbs_db_save_obj(void *conn, pbs_db_obj_info_t *obj, int savetype);\n\n/**\n * @brief\n *\tDelete an existing object from the database\n *\n * @param[in]\tconn - Connected database handle\n * @param[in]\tpbs_db_obj_info_t - Wrapper object that describes the object\n *              (and data) to delete\n *\n * @return      int\n * @retval      -1  - Failure\n * @retval       0  - success\n * @retval       1 -  Success but no rows deleted\n *\n */\nint pbs_db_delete_obj(void *conn, pbs_db_obj_info_t *obj);\n\n/**\n * @brief\n *\tDelete attributes of an existing object from the database\n *\n * @param[in]\tconn - Connected database handle\n * @param[in]\tpbs_db_obj_info_t - The pointer to the wrapper object which\n *\t\t\t\tdescribes the PBS object (job/resv/node etc) that is wrapped\n *\t\t\t\tinside it.\n * @param[in]\tobj_id - The object id of the parent (jobid, node-name etc)\n * @param[in]\tdb_attr_list - List of attributes to remove from DB\n *\n * @return      int\n * @retval      -1  - Failure\n * @retval       0  - success\n * @retval       1 -  Success but no rows deleted\n *\n */\nint pbs_db_delete_attr_obj(void *conn, pbs_db_obj_info_t *obj, void *obj_id, pbs_db_attr_list_t *db_attr_list);\n\n/**\n * @brief\n *\tSearch the database for 
existing objects and load the server structures.\n *\n * @param[in]\tconn - Connected database handle\n * @param[in]\tpbs_db_obj_info_t - The pointer to the wrapper object which\n *\t\t\t\tdescribes the PBS object (job/resv/node etc) that is wrapped\n *\t\t\t\tinside it.\n * @param[in]\tpbs_db_query_options_t - Pointer to the options object that can\n *\t\t\t\tcontain the flags or timestamp which will affect the query.\n * @param[in]\tquery_cb - Callback function which will process the result from the database\n * \t\t\t\tand update the server structures.\n *\n * @return\tint\n * @retval\t0\t- Success but no rows found\n * @retval\t-1\t- Failure\n * @retval\t>0\t- Success and number of rows found\n *\n */\nint pbs_db_search(void *conn, pbs_db_obj_info_t *obj, pbs_db_query_options_t *opts, query_cb_t query_cb);\n\n/**\n * @brief\n *\tLoad a single existing object from the database\n *\n * @param[in]     conn - Connected database handle\n * @param[in/out] pbs_db_obj_info_t - Wrapper object that describes the object\n *                (and data) to load. 
This parameter is used to return the data about\n *                the object loaded\n *\n * @return      int\n * @retval      -1  - Failure\n * @retval       0  - success\n * @retval       1 -  Success but no rows loaded\n *\n */\nint pbs_db_load_obj(void *conn, pbs_db_obj_info_t *obj);\n\n/**\n * @brief\n *\tFunction to check whether data-service is running\n *\n * @return      Error code\n * @retval\t-1 - Error in routine\n * @retval\t0  - Data service running\n * @retval\t1  - Data service not running\n *\n */\nint pbs_status_db(char *pbs_ds_host, int pbs_ds_port);\n\n/**\n * @brief\n *\tStart the database daemons/service.\n *\n * @param[in]\tpbs_ds_host - Host on which the data service runs\n * @param[in]\tpbs_ds_port - Port on which the data service listens\n *\n * @return\t    int\n * @retval       0     - success\n * @retval       !=0   - Failure\n *\n */\nint pbs_start_db(char *pbs_ds_host, int pbs_ds_port);\n\n/**\n * @brief\n *\tStop the database daemons/service\n *\n * @param[in]\tpbs_ds_host - Host on which the data service runs\n * @param[in]\tpbs_ds_port - Port on which the data service listens\n *\n * @return      Error code\n * @retval      !=0 - Failure\n * @retval       0  - Success\n *\n */\nint pbs_stop_db(char *pbs_ds_host, int pbs_ds_port);\n\n/**\n * @brief\n *\tTranslates the error code to an error message\n *\n * @param[in]   err_code - Error code to translate\n * @param[out]  err_msg  - The translated error message (newly allocated memory)\n *\n */\nvoid pbs_db_get_errmsg(int err_code, char **err_msg);\n\n/**\n * @brief\n *\tFunction to create a new database user or change the password of the current user.\n *\n * @param[in] conn: The database connection handle which was created by pbs_db_connect.\n * @param[in] userid: Database user name.\n * @param[in] password: New password for the database user.\n * @param[in] olduser: Old database user name.\n *\n * @retval       -1 - Failure\n * @retval        0  - Success\n *\n */\nint pbs_db_password(void *conn, char *userid, char *password, char *olduser);\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif /* _PBS_DB_H */\n"
  },
  {
    "path": "src/include/pbs_ecl.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _PBS_ECL_H\n#define _PBS_ECL_H\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include \"portability.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n\n#define SLOT_INCR_SIZE 10\n\nextern ecl_attribute_def ecl_svr_attr_def[];\nextern ecl_attribute_def ecl_node_attr_def[];\nextern ecl_attribute_def ecl_que_attr_def[];\nextern ecl_attribute_def ecl_job_attr_def[];\nextern ecl_attribute_def ecl_svr_resc_def[];\nextern ecl_attribute_def ecl_resv_attr_def[];\nextern ecl_attribute_def ecl_sched_attr_def[];\n\nextern int ecl_svr_resc_size;\nextern int ecl_job_attr_size;\nextern int ecl_que_attr_size;\nextern int ecl_node_attr_size;\nextern int ecl_resv_attr_size;\nextern int ecl_svr_attr_size;\nextern int ecl_sched_attr_size;\n\nvoid set_no_attribute_verification(void);\n\nextern int (*pfn_pbs_verify_attributes)(int connect, int batch_request,\n\t\t\t\t\tint parent_object, int command, struct attropl *attribute_list);\n\n#define pbs_verify_attributes(connect, batch_request, parent_object, \\\n\t\t\t      cmd, attribute_list)                   \\\n\t(*pfn_pbs_verify_attributes)(connect, batch_request,         \\\n\t\t\t\t     parent_object, cmd, attribute_list)\n\nint verify_an_attribute(int, int, int, struct attropl *, int *, char **);\nint verify_attributes(int, int, int, struct attropl *, struct 
ecl_attribute_errors **);\n\necl_attribute_def *ecl_find_resc_def(ecl_attribute_def *, char *, int);\nstruct ecl_attribute_errors *ecl_get_attr_err_list(int);\nvoid ecl_free_attr_err_list(int);\n\n/* verify datatype functions */\nint verify_datatype_bool(struct attropl *, char **);\nint verify_datatype_short(struct attropl *, char **);\nint verify_datatype_long(struct attropl *, char **);\nint verify_datatype_size(struct attropl *, char **);\nint verify_datatype_float(struct attropl *, char **);\nint verify_datatype_time(struct attropl *, char **);\nint verify_datatype_nodes(struct attropl *, char **);\nint verify_datatype_select(struct attropl *, char **);\nint verify_datatype_long_long(struct attropl *, char **);\n\n/* verify value functions */\nint verify_value_resc(int, int, int, struct attropl *, char **);\nint verify_value_select(int, int, int, struct attropl *, char **);\nint verify_value_preempt_targets(int, int, int, struct attropl *, char **);\nint verify_value_preempt_queue_prio(int, int, int, struct attropl *, char **);\nint verify_value_preempt_prio(int, int, int, struct attropl *, char **);\nint verify_value_preempt_order(int, int, int, struct attropl *, char **);\nint verify_value_preempt_sort(int, int, int, struct attropl *, char **);\nint verify_value_dependlist(int, int, int, struct attropl *, char **);\nint verify_value_user_list(int, int, int, struct attropl *, char **);\nint verify_value_authorized_users(int, int, int, struct attropl *, char **);\nint verify_value_authorized_groups(int, int, int, struct attropl *, char **);\nint verify_value_path(int, int, int, struct attropl *, char **);\nint verify_value_jobname(int, int, int, struct attropl *, char **);\nint verify_value_checkpoint(int, int, int, struct attropl *, char **);\nint verify_value_hold(int, int, int, struct attropl *, char **);\nint verify_value_credname(int, int, int, struct attropl *, char **);\nint verify_value_zero_or_positive(int, int, int, struct attropl *, char **);\nint 
verify_value_non_zero_positive(int, int, int, struct attropl *, char **);\nint verify_value_non_zero_positive_long_long(int, int, int, struct attropl *, char **);\nint verify_value_maxlicenses(int, int, int, struct attropl *, char **);\nint verify_value_minlicenses(int, int, int, struct attropl *, char **);\nint verify_value_licenselinger(int, int, int, struct attropl *, char **);\nint verify_value_mgr_opr_acl_check(int, int, int, struct attropl *, char **);\nint verify_value_queue_type(int, int, int, struct attropl *, char **);\nint verify_value_joinpath(int, int, int, struct attropl *, char **);\nint verify_value_keepfiles(int, int, int, struct attropl *, char **);\nint verify_keepfiles_common(char *value);\nint verify_value_mailpoints(int, int, int, struct attropl *, char **);\nint verify_value_mailusers(int, int, int, struct attropl *, char **);\nint verify_value_removefiles(int, int, int, struct attropl *, char **);\nint verify_removefiles_common(char *value);\nint verify_value_priority(int, int, int, struct attropl *, char **);\nint verify_value_shellpathlist(int, int, int, struct attropl *, char **);\nint verify_value_sandbox(int, int, int, struct attropl *, char **);\nint verify_value_stagelist(int, int, int, struct attropl *, char **);\nint verify_value_jrange(int, int, int, struct attropl *, char **);\nint verify_value_state(int, int, int, struct attropl *, char **);\nint verify_value_tolerate_node_failures(int, int, int, struct attropl *, char **);\n\n/* verify object name function */\nint pbs_verify_object_name(int, const char *);\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif /* _PBS_ECL_H */\n"
  },
  {
    "path": "src/include/pbs_entlim.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _PBS_ENTLIM_H\n#define _PBS_ENTLIM_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n/* PBS Limits on Entities */\n\n#include \"pbs_idx.h\"\n\n#define PBS_MAX_RESC_NAME 1024\n\n#define ENCODE_ENTITY_MAX 100\n\nenum lim_keytypes {\n\tLIM_USER,\n\tLIM_GROUP,\n\tLIM_PROJECT,\n\tLIM_OVERALL\n};\n\n#define PBS_GENERIC_ENTITY \"PBS_GENERIC\"\n#define PBS_ALL_ENTITY \"PBS_ALL\"\n#define ETLIM_INVALIDCHAR \"/[]\\\";:|<>+,?*\"\n\n/* Flags used for account_entity_limit_usages() */\n#define ETLIM_ACC_CT 1 << 0\t/* flag for set_entity_ct_sum_ */\n#define ETLIM_ACC_RES 1 << 1\t/* flag for set_entity_resc_sum_ */\n#define ETLIM_ACC_QUEUED 1 << 2 /* flag for set_entity_-_sum_max */\n#define ETLIM_ACC_MAX 1 << 3\t/* flag for set_entity_-_sum_queued */\n\n#define ETLIM_ACC_CT_QUEUED (ETLIM_ACC_CT | ETLIM_ACC_QUEUED)\t/* set_entity_ct_sum_queued */\n#define ETLIM_ACC_CT_MAX (ETLIM_ACC_CT | ETLIM_ACC_MAX)\t\t/* set_entity_ct_sum_max */\n#define ETLIM_ACC_RES_QUEUED (ETLIM_ACC_RES | ETLIM_ACC_QUEUED) /* set_entity_resc_sum_queued */\n#define ETLIM_ACC_RES_MAX (ETLIM_ACC_RES | ETLIM_ACC_MAX)\t/* set_entity_resc_sum_max */\n\n#define ETLIM_ACC_ALL_RES (ETLIM_ACC_QUEUED | ETLIM_ACC_MAX | ETLIM_ACC_RES)\t\t/* set_entity_resc_sum_* */\n#define ETLIM_ACC_ALL_CT (ETLIM_ACC_QUEUED | ETLIM_ACC_MAX | ETLIM_ACC_CT)\t\t/* set_entity_ct_sum_* */\n#define 
ETLIM_ACC_ALL_MAX (ETLIM_ACC_CT | ETLIM_ACC_RES | ETLIM_ACC_MAX)\t\t/* set_entity_*_sum_max */\n#define ETLIM_ACC_ALL_QUEUED (ETLIM_ACC_CT | ETLIM_ACC_RES | ETLIM_ACC_QUEUED)\t\t/* set_entity_*_sum_queued */\n#define ETLIM_ACC_ALL (ETLIM_ACC_CT | ETLIM_ACC_RES | ETLIM_ACC_QUEUED | ETLIM_ACC_MAX) /* for all 4 set_entity_* */\n\nvoid *entlim_initialize_ctx(void);\n\n/* get data record from an entry based on a key string */\nvoid *entlim_get(const char *keystr, void *ctx);\n\n/* add a record including key and data, based on a key string */\nint entlim_add(const char *entity, const void *recptr, void *ctx);\n\n/* replace a record including key and data, based on a key string */\nint entlim_replace(const char *entity, void *recptr, void *ctx, void free_leaf(void *));\n\n/* delete a record based on a key string */\nint entlim_delete(const char *entity, void *ctx, void free_leaf(void *));\n\n/* free the entire data context and all associated data and keys */\n/* the function \"free_leaf\" is used to free the data record      */\nint entlim_free_ctx(void *ctx, void free_leaf(void *));\n\n/* walk the records returning a key object for the next entry found */\nvoid *entlim_get_next(void *ctx, void **key);\n\n/* entlim_parse - parse a comma separated set of \"entity limit strings */\nint entlim_parse(char *str, char *resc, void *ctx,\n\t\t int (*addfunc)(void *ctx, enum lim_keytypes kt, char *fulent,\n\t\t\t\tchar *entname, char *resc, char *value));\nchar *parse_comma_string_r(char **start);\nchar *entlim_mk_runkey(enum lim_keytypes kt, const char *entity);\nchar *entlim_mk_reskey(enum lim_keytypes kt, const char *entity, const char *resc);\nint entlim_resc_from_key(char *key, char *rtnresc, size_t ln);\nint entlim_entity_from_key(char *key, char *rtnname, size_t ln);\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _PBS_ENTLIM_H */\n"
  },
  {
    "path": "src/include/pbs_error.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _PBS_ERROR_H\n#define _PBS_ERROR_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n/*\n * The error returns possible to a Batch Request\n *\n * Each error is prefixed with the string PBSE_ for Portable (Posix)\n * Batch System Error.  
The numeric values start with 15000 since the\n * POSIX Batch Extensions Working group is 1003.15\n */\n\n/*\n * The following error numbers should not be used while adding new PBS errors.\n * As a general guideline, do not use a number if number%256=0.\n * If PBS code erroneously uses these error numbers for future errors\n * and use them as command exit code, the behavior\n * will be erroneous on many standards-compliant systems.\n */\n#define PBSE_DONOTUSE1 15360\n#define PBSE_DONOTUSE2 15616\n#define PBSE_DONOTUSE3 15872\n#define PBSE_DONOTUSE4 16128\n#define PBSE_DONOTUSE5 16384\n\n#define PBSE_ 15000\n\n#define PBSE_NONE 0\t\t    /* no error */\n#define PBSE_UNKJOBID 15001\t    /* Unknown Job Identifier */\n#define PBSE_NOATTR 15002\t    /* Undefined Attribute */\n#define PBSE_ATTRRO 15003\t    /* attempt to set READ ONLY attribute */\n#define PBSE_IVALREQ 15004\t    /* Invalid request */\n#define PBSE_UNKREQ 15005\t    /* Unknown batch request */\n#define PBSE_TOOMANY 15006\t    /* Too many submit retries */\n#define PBSE_PERM 15007\t\t    /* No permission */\n#define PBSE_BADHOST 15008\t    /* access from host not allowed */\n#define PBSE_JOBEXIST 15009\t    /* job already exists */\n#define PBSE_SYSTEM 15010\t    /* system error occurred */\n#define PBSE_INTERNAL 15011\t    /* internal server error occurred */\n#define PBSE_REGROUTE 15012\t    /* parent job of dependent in rte que */\n#define PBSE_UNKSIG 15013\t    /* unknown signal name */\n#define PBSE_BADATVAL 15014\t    /* bad attribute value */\n#define PBSE_MODATRRUN 15015\t    /* Cannot modify attrib in run state */\n#define PBSE_BADSTATE 15016\t    /* request invalid for job state */\n#define PBSE_UNKQUE 15018\t    /* Unknown queue name */\n#define PBSE_BADCRED 15019\t    /* Invalid Credential in request */\n#define PBSE_EXPIRED 15020\t    /* Expired Credential in request */\n#define PBSE_QUNOENB 15021\t    /* Queue not enabled */\n#define PBSE_QACESS 15022\t    /* No access permission for queue 
*/\n#define PBSE_BADUSER 15023\t    /* Bad user - no password entry */\n#define PBSE_HOPCOUNT 15024\t    /* Max hop count exceeded */\n#define PBSE_QUEEXIST 15025\t    /* Queue already exists */\n#define PBSE_ATTRTYPE 15026\t    /* incompatible queue attribute type */\n#define PBSE_OBJBUSY 15027\t    /* Object Busy (not empty) */\n#define PBSE_QUENBIG 15028\t    /* Queue name too long */\n#define PBSE_NOSUP 15029\t    /* Feature/function not supported */\n#define PBSE_QUENOEN 15030\t    /* Cannot enable queue, needs add def */\n#define PBSE_PROTOCOL 15031\t    /* Batch Protocol error */\n#define PBSE_BADATLST 15032\t    /* Bad attribute list structure */\n#define PBSE_NOCONNECTS 15033\t    /* No free connections */\n#define PBSE_NOSERVER 15034\t    /* No server to connect to */\n#define PBSE_UNKRESC 15035\t    /* Unknown resource */\n#define PBSE_EXCQRESC 15036\t    /* Job exceeds Queue resource limits */\n#define PBSE_QUENODFLT 15037\t    /* No Default Queue Defined */\n#define PBSE_NORERUN 15038\t    /* Job Not Rerunnable */\n#define PBSE_ROUTEREJ 15039\t    /* Route rejected by all destinations */\n#define PBSE_ROUTEEXPD 15040\t    /* Time in Route Queue Expired */\n#define PBSE_MOMREJECT 15041\t    /* Request to MOM failed */\n#define PBSE_BADSCRIPT 15042\t    /* (qsub) cannot access script file */\n#define PBSE_STAGEIN 15043\t    /* Stage In of files failed */\n#define PBSE_RESCUNAV 15044\t    /* Resources temporarily unavailable */\n#define PBSE_BADGRP 15045\t    /* Bad Group specified */\n#define PBSE_MAXQUED 15046\t    /* Max number of jobs in queue */\n#define PBSE_CKPBSY 15047\t    /* Checkpoint Busy, may be retries */\n#define PBSE_EXLIMIT 15048\t    /* Limit exceeds allowable */\n#define PBSE_BADACCT 15049\t    /* Bad Account attribute value */\n#define PBSE_ALRDYEXIT 15050\t    /* Job already in exit state */\n#define PBSE_NOCOPYFILE 15051\t    /* Job files not copied */\n#define PBSE_CLEANEDOUT 15052\t    /* unknown job id after clean init */\n
#define PBSE_NOSYNCMSTR 15053\t    /* No Master in Sync Set */\n#define PBSE_BADDEPEND 15054\t    /* Invalid dependency */\n#define PBSE_DUPLIST 15055\t    /* Duplicate entry in List */\n#define PBSE_DISPROTO 15056\t    /* Bad DIS based Request Protocol */\n#define PBSE_EXECTHERE 15057\t    /* cannot execute there */\n#define PBSE_SISREJECT 15058\t    /* sister rejected */\n#define PBSE_SISCOMM 15059\t    /* sister could not communicate */\n#define PBSE_SVRDOWN 15060\t    /* req rejected -server shutting down */\n#define PBSE_CKPSHORT 15061\t    /* not all tasks could checkpoint */\n#define PBSE_UNKNODE 15062\t    /* Named node is not in the list */\n#define PBSE_UNKNODEATR 15063\t    /* node-attribute not recognized */\n#define PBSE_NONODES 15064\t    /* Server has no node list */\n#define PBSE_NODENBIG 15065\t    /* Node name is too big */\n#define PBSE_NODEEXIST 15066\t    /* Node name already exists */\n#define PBSE_BADNDATVAL 15067\t    /* Bad node-attribute value */\n#define PBSE_MUTUALEX 15068\t    /* State values are mutually exclusive */\n#define PBSE_GMODERR 15069\t    /* Error(s) during global modification of nodes */\n#define PBSE_NORELYMOM 15070\t    /* could not contact Mom */\n#define PBSE_NOTSNODE 15071\t    /* No time-share node available */\n#define PBSE_RESV_NO_WALLTIME 15075 /* job reserv lacking walltime */\n#define PBSE_JOBNOTRESV 15076\t    /* not a reservation job       */\n#define PBSE_TOOLATE 15077\t    /* too late for job reservation*/\n#define PBSE_IRESVE 15078\t    /* internal reservation-system error */\n/* 15079 unused */\n#define PBSE_RESVEXIST 15080   /* reservation already exists */\n#define PBSE_resvFail 15081    /* reservation failed */\n#define PBSE_genBatchReq 15082 /* batch request generation failed */\n#define PBSE_mgrBatchReq 15083 /* qmgr batch request failed */\n#define PBSE_UNKRESVID 15084   /* unknown reservation ID */\n#define PBSE_delProgress 15085 /* delete already in progress */\n#define PBSE_BADTSPEC 15086    /* bad time 
specification(s) */\n#define PBSE_RESVMSG 15087     /* so reply_text can send back a msg */\n#define PBSE_NOTRESV 15088     /* not a reservation */\n#define PBSE_BADNODESPEC 15089 /* node(s) specification error */\n#define PBSE_UNUSED1 15090     /* Licensed CPUs exceeded */\n#define PBSE_LICENSEINV 15091  /* License is invalid     */\n#define PBSE_RESVAUTH_H 15092  /* Host machine not authorized to */\n/* submit reservations            */\n#define PBSE_RESVAUTH_G 15093 /* Requestor's group not authorized */\n/* to submit reservations           */\n#define PBSE_RESVAUTH_U 15094 /* Requestor not authorized to make */\n/* reservations                     */\n#define PBSE_R_UID 15095       /* Bad effective UID for reservation */\n#define PBSE_R_GID 15096       /* Bad effective GID for reservation */\n#define PBSE_IBMSPSWITCH 15097 /* IBM SP Switch error */\n#define PBSE_UNUSED2 15098     /* Floating License unavailable  */\n#define PBSE_NOSCHEDULER 15099 /* Unable to contact Scheduler */\n#define PBSE_RESCNOTSTR 15100  /* resource is not of type string */\n\n#define PBSE_MaxArraySize 15107\t   /* max array size exceeded */\n#define PBSE_INVALSELECTRESC 15108 /* resc invalid in select spec */\n#define PBSE_INVALJOBRESC 15109\t   /* invalid job resource */\n#define PBSE_INVALNODEPLACE 15110  /* node invalid w/ place|select */\n#define PBSE_PLACENOSELECT 15111   /* cannot have place w/o select */\n#define PBSE_INDIRECTHOP 15112\t   /* too many indirect resc levels */\n#define PBSE_INDIRECTBT 15113\t   /* target resc undefined */\n/* Error number 15114 not used */\n#define PBSE_NODESTALE 15115\t\t /* Cannot change state of stale nd */\n#define PBSE_DUPRESC 15116\t\t /* cannot dup resc within a chunk */\n#define PBSE_CONNFULL 15117\t\t /* server connection table full */\n#define PBSE_LICENSE_MIN_BADVAL 15118\t /* bad value for pbs_license_min */\n#define PBSE_LICENSE_MAX_BADVAL 15119\t /* bad value for pbs_license_max */\n#define PBSE_LICENSE_LINGER_BADVAL 15120 /* bad 
value for pbs_license_linger_time*/\n#define PBSE_UNUSED3 15121\t\t /* License server is down */\n#define PBSE_UNUSED4 15122\t\t /* Not allowed action with FLEX licensing */\n#define PBSE_BAD_FORMULA 15123\t\t /* invalid sort formula */\n#define PBSE_BAD_FORMULA_KW 15124\t /* invalid keyword in formula */\n#define PBSE_BAD_FORMULA_TYPE 15125\t /* invalid resource type in formula */\n#define PBSE_BAD_RRULE_YEARLY 15126\t /* reservation duration exceeds 1 year */\n#define PBSE_BAD_RRULE_MONTHLY 15127\t /* reservation duration exceeds 1 month */\n#define PBSE_BAD_RRULE_WEEKLY 15128\t /* reservation duration exceeds 1 week */\n#define PBSE_BAD_RRULE_DAILY 15129\t /* reservation duration exceeds 1 day */\n#define PBSE_BAD_RRULE_HOURLY 15130\t /* reservation duration exceeds 1 hour */\n#define PBSE_BAD_RRULE_MINUTELY 15131\t /* reservation duration exceeds 1 minute */\n#define PBSE_BAD_RRULE_SECONDLY 15132\t /* reservation duration exceeds 1 second */\n#define PBSE_BAD_RRULE_SYNTAX 15133\t /* invalid recurrence rule syntax */\n#define PBSE_BAD_RRULE_SYNTAX2 15134\t /* invalid recurrence rule syntax. 
COUNT/UNTIL required */\n#define PBSE_BAD_ICAL_TZ 15135\t\t /* Undefined timezone info directory */\n#define PBSE_HOOKERROR 15136\t\t /* error encountered related to hooks */\n#define PBSE_NEEDQUET 15137\t\t /* need queue type set */\n#define PBSE_ETEERROR 15138\t\t /* not allowed to alter attribute when eligible_time_enable is off */\n#define PBSE_HISTJOBID 15139\t\t /* History job ID */\n#define PBSE_JOBHISTNOTSET 15140\t /* job_history_enable not SET */\n#define PBSE_MIXENTLIMS 15141\t\t /* mixing old and new limit enforcement */\n#define PBSE_ENTLIMCT 15142\t\t /* entity count limit exceeded */\n#define PBSE_ENTLIMRESC 15143\t\t /* entity resource limit exceeded */\n#define PBSE_ATVALERANGE 15144\t\t /* attribute value out of range */\n#define PBSE_PROV_HEADERROR 15145\t /* not allowed to set provisioning attributes on head node */\n#define PBSE_NODEPROV_NOACTION 15146\t /* cannot modify attribute while node is provisioning */\n#define PBSE_NODEPROV 15147\t\t /* Cannot change state of provisioning node */\n#define PBSE_NODEPROV_NODEL 15148\t /* Cannot del node if provisioning */\n#define PBSE_NODE_BAD_CURRENT_AOE 15149\t /* current aoe is not one of resources_available.aoe */\n#define PBSE_NOLOOPBACKIF 15153\t\t /* Local host does not have loopback interface configured. */\n#define PBSE_IVAL_AOECHUNK 15155\t /* aoe not following chunk rules */\n#define PBSE_JOBINRESV_CONFLICT 15156\t /* job and reservation conflict */\n\n#define PBSE_NORUNALTEREDJOB 15157\t   /* cannot run altered/moved job */\n#define PBSE_HISTJOBDELETED 15158\t   /* Job was in F or M state. Its history deleted upon request. 
*/\n#define PBSE_NOHISTARRAYSUBJOB 15159\t   /* Request invalid for finished array subjob */\n#define PBSE_FORCE_QSUB_UPDATE 15160\t   /* a qsub action needs to be redone */\n#define PBSE_SAVE_ERR 15161\t\t   /* failed to save job or resv to database */\n#define PBSE_MAX_NO_MINWT 15162\t\t   /* no max walltime w/o min walltime */\n#define PBSE_MIN_GT_MAXWT 15163\t\t   /* min_walltime can not be > max_walltime */\n#define PBSE_NOSTF_RESV 15164\t\t   /* There can not be a shrink-to-fit reservation */\n#define PBSE_NOSTF_JOBARRAY 15165\t   /* There can not be a shrink-to-fit job array */\n#define PBSE_NOLIMIT_RESOURCE 15166\t   /* Resource limits can not be set for the resource */\n#define PBSE_MOM_INCOMPLETE_HOOK 15167\t   /* mom hook not fully transferred */\n#define PBSE_MOM_REJECT_ROOT_SCRIPTS 15168 /* no hook, root job scripts */\n#define PBSE_HOOK_REJECT 15169\t\t   /* mom receives a hook rejection */\n#define PBSE_HOOK_REJECT_RERUNJOB 15170\t   /* hook rejection requiring a job rerun */\n#define PBSE_HOOK_REJECT_DELETEJOB 15171   /* hook rejection requiring a job delete */\n#define PBSE_IVAL_OBJ_NAME 15172\t   /* invalid object name */\n\n#define PBSE_JOBNBIG 15173 /* Job name is too long */\n\n#define PBSE_RESCBUSY 15174\t       /* Resource is set on an object */\n#define PBSE_JOBSCRIPTMAXSIZE 15175    /* job script max size exceeded */\n#define PBSE_BADJOBSCRIPTMAXSIZE 15176 /* user set size more than 2GB */\n#define PBSE_WRONG_RESUME 15177\t       /* user tried to resume job with wrong resume signal*/\n\n/* Error code specific to altering reservation start and end times */\n#define PBSE_RESV_NOT_EMPTY 15178\t   /* cannot change start time of a non-empty reservation */\n#define PBSE_STDG_RESV_OCCR_CONFLICT 15179 /* cannot change start time of a non-empty reservation */\n\n#define PBSE_SOFTWT_STF 15180 /* soft_walltime is incompatible with STF jobs */\n\n#define PBSE_RESV_FROM_RESVJOB 15181 /* Job already in a reservation used to create a reservation 
*/\n#define PBSE_RESV_FROM_ARRJOB 15182  /* Array job used to create a reservation */\n#define PBSE_SELECT_NOT_SUBSET 15183 /* ralter select spec is not a smaller subset of the original */\n/*\n ** \tResource monitor specific\n */\n#define PBSE_RMUNKNOWN 15201  /* resource unknown */\n#define PBSE_RMBADPARAM 15202 /* parameter could not be used */\n#define PBSE_RMNOPARAM 15203  /* a parameter needed did not exist */\n#define PBSE_RMEXIST 15204    /* something specified didn't exist */\n#define PBSE_RMSYSTEM 15205   /* a system error occurred */\n#define PBSE_RMPART 15206     /* only part of reservation made */\n#define RM_ERR_UNKNOWN PBSE_RMUNKNOWN\n#define RM_ERR_BADPARAM PBSE_RMBADPARAM\n#define RM_ERR_NOPARAM PBSE_RMNOPARAM\n#define RM_ERR_EXIST PBSE_RMEXIST\n#define RM_ERR_SYSTEM PBSE_RMSYSTEM\n\n#define PBSE_TRYAGAIN 15208   /* Try the request again later */\n#define PBSE_ALPSRELERR 15209 /* ALPS failed to release the resv */\n\n#define PBSE_JOB_MOVED 15210\t\t\t  /* Job moved to another server */\n#define PBSE_SCHEDEXIST 15211\t\t\t  /* Scheduler already exists */\n#define PBSE_SCHED_NAME_BIG 15212\t\t  /* Scheduler name too long */\n#define PBSE_UNKSCHED 15213\t\t\t  /* sched not in the list */\n#define PBSE_SCHED_NO_DEL 15214\t\t\t  /* can not delete scheduler */\n#define PBSE_SCHED_PRIV_EXIST 15215\t\t  /* Scheduler sched_priv directory already exists */\n#define PBSE_SCHED_LOG_EXIST 15216\t\t  /* Scheduler sched_log directory already exists */\n#define PBSE_ROUTE_QUE_NO_PARTITION 15217\t  /* Partition can not be assigned to route queue */\n#define PBSE_CANNOT_SET_ROUTE_QUE 15218\t\t  /* Can not set queue type to route */\n#define PBSE_QUE_NOT_IN_PARTITION 15219\t\t  /* Queue does not belong to the partition */\n#define PBSE_PARTITION_NOT_IN_QUE 15220\t\t  /* Partition does not belong to the queue */\n#define PBSE_INVALID_PARTITION_QUE 15221\t  /* Invalid partition to the queue */\n#define PBSE_ALPS_SWITCH_ERR 15222\t\t  /* ALPS failed to do the 
suspend/resume */\n#define PBSE_SCHED_OP_NOT_PERMITTED 15223\t  /* Operation not permitted on default scheduler */\n#define PBSE_SCHED_PARTITION_ALREADY_EXISTS 15224 /* Partition already exists */\n#define PBSE_INVALID_MAX_JOB_SEQUENCE_ID 15225\t  /* Invalid max_job_sequence_id < 9999999, or > 999999999999 */\n#define PBSE_SVR_SCHED_JSF_INCOMPAT 15226\t  /* Server's job_sort_formula is incompatible with sched's */\n#define PBSE_NODE_BUSY 15227\t\t\t  /* Node is busy */\n#define PBSE_DEFAULT_PARTITION 15228\t\t  /* Default partition name is not allowed */\n#define PBSE_HISTDEPEND 15229\t\t\t  /* Finished job did not satisfy dependency */\n#define PBSE_SCHEDCONNECTED 15230\n#define PBSE_NOTARRAY_ATTR 15231 /* Not an array job */\n\n/* the following structure is used to tie error number      */\n/* with text to be returned to a client, see svr_messages.c */\n\nstruct pbs_err_to_txt {\n\tint err_no;\n\tchar **err_txt;\n};\n\nextern char *pbse_to_txt(int);\n\n/* This variable has been moved to Thread local storage\n * The define points to a function pointer which locates\n * the actual variable from the TLS of the calling thread\n */\n#ifndef __PBS_ERRNO\n#define __PBS_ERRNO\nextern int *__pbs_errno_location(void);\n#define pbs_errno (*__pbs_errno_location())\n#endif\n#ifdef __cplusplus\n}\n#endif\n#endif /* _PBS_ERROR_H */\n"
  },
  {
    "path": "src/include/pbs_idx.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _PBS_IDX_H\n#define _PBS_IDX_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include <stdbool.h>\n\n#define PBS_IDX_DUPS_OK 0x01   /* duplicate key allowed in index */\n#define PBS_IDX_ICASE_CMP 0x02 /* set case-insensitive compare */\n\n#define PBS_IDX_RET_OK 0    /* index op succeed */\n#define PBS_IDX_RET_FAIL -1 /* index op failed */\n\n/**\n * @brief\n *\tCreate an empty index\n *\n * @param[in] - dups   - Whether duplicates are allowed or not in index\n * @param[in] - keylen - length of key in index (can be 0 for default size)\n *\n * @return void *\n * @retval !NULL - success\n * @retval NULL  - failure\n *\n */\nextern void *pbs_idx_create(int dups, int keylen);\n\n/**\n * @brief\n *\tdestroy index\n *\n * @param[in] - idx - pointer to index\n *\n * @return void\n *\n */\nextern void pbs_idx_destroy(void *idx);\n\n/**\n * @brief\n *\tadd entry in index\n *\n * @param[in] - idx  - pointer to index\n * @param[in] - key  - key of entry\n * @param[in] - data - data of entry\n *\n * @return int\n * @retval PBS_IDX_RET_OK   - success\n * @retval PBS_IDX_RET_FAIL - failure\n *\n */\nextern int pbs_idx_insert(void *idx, void *key, void *data);\n\n/**\n * @brief\n *\tdelete entry from index\n *\n * @param[in] - idx - pointer to index\n * @param[in] - key - key of entry\n *\n * @return int\n * @retval PBS_IDX_RET_OK   - success\n * 
@retval PBS_IDX_RET_FAIL - failure\n *\n */\nextern int pbs_idx_delete(void *idx, void *key);\n\n/**\n * @brief\n *\tdelete exact entry from index using given context\n *\n * @param[in] - ctx - pointer to context used while\n *                    deleting exact entry in index\n *\n * @return int\n * @retval PBS_IDX_RET_OK   - success\n * @retval PBS_IDX_RET_FAIL - failure\n *\n */\nextern int pbs_idx_delete_byctx(void *ctx);\n\n/**\n * @brief\n *\tfind or iterate entry in index\n *\n * @param[in]     - idx  - pointer to index\n * @param[in/out] - key  - key of the entry\n *                         if *key is NULL then this routine will\n *                         return the first entry in index\n * @param[in/out] - data - data of the entry\n * @param[in/out] - ctx  - context to be set for iteration\n *                         can be NULL, if caller doesn't want\n *                         iteration context\n *                         if *ctx is not NULL, then this routine\n *                         will return next entry in index\n *\n * @return int\n * @retval PBS_IDX_RET_OK   - success\n * @retval PBS_IDX_RET_FAIL - failure\n *\n * @note\n * \tctx should be free'd after use, using pbs_idx_free_ctx()\n *\n */\nextern int pbs_idx_find(void *idx, void **key, void **data, void **ctx);\n\n/**\n * @brief\n *\tfree given iteration context\n *\n * @param[in] - ctx - pointer to context for iteration\n *\n * @return void\n *\n */\nextern void pbs_idx_free_ctx(void *ctx);\n\n/**\n * @brief check whether idx is empty and has no key associated with it\n * \n * @param[in] idx - pointer to avl index\n * \n * @return bool\n * @retval true - idx is empty\n * @retval false - idx is not empty\n */\nextern bool pbs_idx_is_empty(void *idx);\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _PBS_IDX_H */\n"
  },
  {
    "path": "src/include/pbs_ifl.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _PBS_IFL_H\n#define _PBS_IFL_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n/*\n *\n *  pbs_ifl.h\n *\n */\n\n#include <stdio.h>\n#include <time.h>\n\n/* types of attributes: read only, public, all */\n#define TYPE_ATTR_READONLY 1\n#define TYPE_ATTR_PUBLIC 2\n#define TYPE_ATTR_INVISIBLE 4\n#define TYPE_ATTR_ALL TYPE_ATTR_READONLY | TYPE_ATTR_PUBLIC | TYPE_ATTR_INVISIBLE\n\n/* Attribute Names used by user commands */\n\n#define ATTR_a \"Execution_Time\"\n#define ATTR_c \"Checkpoint\"\n#define ATTR_e \"Error_Path\"\n#define ATTR_g \"group_list\"\n#define ATTR_h \"Hold_Types\"\n#define ATTR_j \"Join_Path\"\n#define ATTR_J \"array_indices_submitted\"\n#define ATTR_k \"Keep_Files\"\n#define ATTR_l \"Resource_List\"\n#define ATTR_l_orig \"Resource_List_orig\"\n#define ATTR_l_acct \"Resource_List_acct\"\n#define ATTR_m \"Mail_Points\"\n#define ATTR_o \"Output_Path\"\n#define ATTR_p \"Priority\"\n#define ATTR_q \"destination\"\n#define ATTR_R \"Remove_Files\"\n#define ATTR_r \"Rerunable\"\n#define ATTR_u \"User_List\"\n#define ATTR_v \"Variable_List\"\n#define ATTR_A \"Account_Name\"\n#define ATTR_M \"Mail_Users\"\n#define ATTR_N \"Job_Name\"\n#define ATTR_S \"Shell_Path_List\"\n#define ATTR_array_indices_submitted ATTR_J\n#define ATTR_depend \"depend\"\n#define ATTR_inter \"interactive\"\n#define ATTR_sandbox \"sandbox\"\n#define 
ATTR_stagein \"stagein\"\n#define ATTR_stageout \"stageout\"\n#define ATTR_resvTag \"reserve_Tag\"\n#define ATTR_resv_start \"reserve_start\"\n#define ATTR_resv_end \"reserve_end\"\n#define ATTR_resv_duration \"reserve_duration\"\n#define ATTR_resv_state \"reserve_state\"\n#define ATTR_resv_substate \"reserve_substate\"\n#define ATTR_resv_job \"reserve_job\"\n#define ATTR_auth_u \"Authorized_Users\"\n#define ATTR_auth_g \"Authorized_Groups\"\n#define ATTR_auth_h \"Authorized_Hosts\"\n#define ATTR_cred \"cred\"\n#define ATTR_nodemux \"no_stdio_sockets\"\n#define ATTR_umask \"umask\"\n#define ATTR_block \"block\"\n#define ATTR_convert \"qmove\"\n#define ATTR_DefaultChunk \"default_chunk\"\n#define ATTR_X11_cookie \"forward_x11_cookie\"\n#define ATTR_X11_port \"forward_x11_port\"\n#define ATTR_GUI \"gui\"\n#define ATTR_max_run_subjobs \"max_run_subjobs\"\n\n/* Begin Standing Reservation Attributes */\n#define ATTR_resv_standing \"reserve_standing\"\n#define ATTR_resv_count \"reserve_count\"\n#define ATTR_resv_idx \"reserve_index\"\n#define ATTR_resv_rrule \"reserve_rrule\"\n#define ATTR_resv_execvnodes \"reserve_execvnodes\"\n#define ATTR_resv_timezone \"reserve_timezone\"\n/* End of standing reservation specific */\n\n/* additional job and general attribute names */\n#define ATTR_ctime \"ctime\"\n#define ATTR_estimated \"estimated\"\n#define ATTR_exechost \"exec_host\"\n#define ATTR_exechost_acct \"exec_host_acct\"\n#define ATTR_exechost_orig \"exec_host_orig\"\n#define ATTR_exechost2 \"exec_host2\"\n#define ATTR_execvnode \"exec_vnode\"\n#define ATTR_execvnode_acct \"exec_vnode_acct\"\n#define ATTR_execvnode_deallocated \"exec_vnode_deallocated\"\n#define ATTR_execvnode_orig \"exec_vnode_orig\"\n#define ATTR_resv_nodes \"resv_nodes\"\n#define ATTR_mtime \"mtime\"\n#define ATTR_qtime \"qtime\"\n#define ATTR_session \"session_id\"\n#define ATTR_jobdir \"jobdir\"\n#define ATTR_euser \"euser\"\n#define ATTR_egroup \"egroup\"\n#define ATTR_project \"project\"\n#define 
ATTR_hashname \"hashname\"\n#define ATTR_hopcount \"hop_count\"\n#define ATTR_security \"security\"\n#define ATTR_sched_hint \"sched_hint\"\n#define ATTR_SchedSelect \"schedselect\"\n#define ATTR_SchedSelect_orig \"schedselect_orig\"\n#define ATTR_substate \"substate\"\n#define ATTR_name \"Job_Name\"\n#define ATTR_owner \"Job_Owner\"\n#define ATTR_used \"resources_used\"\n#define ATTR_used_acct \"resources_used_acct\"\n#define ATTR_used_update \"resources_used_update\"\n#define ATTR_relnodes_on_stageout \"release_nodes_on_stageout\"\n#define ATTR_tolerate_node_failures \"tolerate_node_failures\"\n#define ATTR_released \"resources_released\"\n#define ATTR_rel_list \"resource_released_list\"\n#define ATTR_state \"job_state\"\n#define ATTR_queue \"queue\"\n#define ATTR_resv \"resv\"\n#define ATTR_server \"server\"\n#define ATTR_maxrun \"max_running\"\n#define ATTR_max_run \"max_run\"\n#define ATTR_max_run_res \"max_run_res\"\n#define ATTR_max_run_soft \"max_run_soft\"\n#define ATTR_max_run_res_soft \"max_run_res_soft\"\n#define ATTR_total \"total_jobs\"\n#define ATTR_comment \"comment\"\n#define ATTR_cookie \"cookie\"\n#define ATTR_qrank \"queue_rank\"\n#define ATTR_altid \"alt_id\"\n#define ATTR_altid2 \"alt_id2\"\n#define ATTR_acct_id \"accounting_id\"\n#define ATTR_array \"array\"\n#define ATTR_array_id \"array_id\"\n#define ATTR_array_index \"array_index\"\n#define ATTR_array_state_count \"array_state_count\"\n#define ATTR_array_indices_remaining \"array_indices_remaining\"\n#define ATTR_etime \"etime\"\n#define ATTR_gridname \"gridname\"\n#define ATTR_refresh \"last_context_refresh\"\n#define ATTR_ReqCredEnable \"require_cred_enable\"\n#define ATTR_ReqCred \"require_cred\"\n#define ATTR_runcount \"run_count\"\n#define ATTR_run_version \"run_version\"\n#define ATTR_stime \"stime\"\n#define ATTR_obittime \"obittime\"\n#define ATTR_executable \"executable\"\n#define ATTR_Arglist \"argument_list\"\n#define ATTR_version \"pbs_version\"\n#define ATTR_eligible_time 
\"eligible_time\"\n#define ATTR_accrue_type \"accrue_type\"\n#define ATTR_sample_starttime \"sample_starttime\"\n#define ATTR_job_kill_delay \"job_kill_delay\"\n#define ATTR_topjob \"topjob\"\n#define ATTR_topjob_ineligible \"topjob_ineligible\"\n#define ATTR_submit_host \"Submit_Host\"\n#define ATTR_cred_id \"credential_id\"\n#define ATTR_cred_validity \"credential_validity\"\n#define ATTR_history_timestamp \"history_timestamp\"\n#define ATTR_create_resv_from_job \"create_resv_from_job\"\n/* Added for finished jobs RFE */\n#define ATTR_stageout_status \"Stageout_status\"\n#define ATTR_exit_status \"Exit_status\"\n#define ATTR_submit_arguments \"Submit_arguments\"\n/* additional Reservation attribute names */\n\n#define ATTR_resv_name \"Reserve_Name\"\n#define ATTR_resv_owner \"Reserve_Owner\"\n#define ATTR_resv_Tag \"reservation_Tag\"\n#define ATTR_resv_ID \"reserve_ID\"\n#define ATTR_resv_retry \"reserve_retry\"\n#define ATTR_del_idle_time \"delete_idle_time\"\n\n/* additional queue attributes names */\n\n#define ATTR_aclgren \"acl_group_enable\"\n#define ATTR_aclgroup \"acl_groups\"\n#define ATTR_aclhten \"acl_host_enable\"\n#define ATTR_aclhost \"acl_hosts\"\n#define ATTR_aclhostmomsen \"acl_host_moms_enable\"\n#define ATTR_acluren \"acl_user_enable\"\n#define ATTR_acluser \"acl_users\"\n#define ATTR_altrouter \"alt_router\"\n#define ATTR_chkptmin \"checkpoint_min\"\n#define ATTR_enable \"enabled\"\n#define ATTR_fromroute \"from_route_only\"\n#define ATTR_HasNodes \"hasnodes\"\n#define ATTR_killdelay \"kill_delay\"\n#define ATTR_maxgrprun \"max_group_run\"\n#define ATTR_maxgrprunsoft \"max_group_run_soft\"\n#define ATTR_maxque \"max_queuable\"\n#define ATTR_max_queued \"max_queued\"\n#define ATTR_max_queued_res \"max_queued_res\"\n#define ATTR_queued_jobs_threshold \"queued_jobs_threshold\"\n#define ATTR_queued_jobs_threshold_res \"queued_jobs_threshold_res\"\n#define ATTR_maxuserrun \"max_user_run\"\n#define ATTR_maxuserrunsoft \"max_user_run_soft\"\n#define 
ATTR_qtype \"queue_type\"\n#define ATTR_rescassn \"resources_assigned\"\n#define ATTR_rescdflt \"resources_default\"\n#define ATTR_rescmax \"resources_max\"\n#define ATTR_rescmin \"resources_min\"\n#define ATTR_rndzretry \"rendezvous_retry\"\n#define ATTR_routedest \"route_destinations\"\n#define ATTR_routeheld \"route_held_jobs\"\n#define ATTR_routewait \"route_waiting_jobs\"\n#define ATTR_routeretry \"route_retry_time\"\n#define ATTR_routelife \"route_lifetime\"\n#define ATTR_rsvexpdt \"reserved_expedite\"\n#define ATTR_rsvsync \"reserved_sync\"\n#define ATTR_start \"started\"\n#define ATTR_count \"state_count\"\n#define ATTR_number \"number_jobs\"\n#define ATTR_jobscript_max_size \"jobscript_max_size\"\n#ifdef NAS\n/* localmod 046 */\n#define ATTR_maxstarve \"max_starve\"\n/* localmod 034 */\n#define ATTR_maxborrow \"max_borrow\"\n#endif\n\n/* additional server attributes names */\n\n#define ATTR_SvrHost \"server_host\"\n#define ATTR_aclroot \"acl_roots\"\n#define ATTR_managers \"managers\"\n#define ATTR_dfltque \"default_queue\"\n#define ATTR_defnode \"default_node\"\n#define ATTR_locsvrs \"location_servers\"\n#define ATTR_logevents \"log_events\"\n#define ATTR_logfile \"log_file\"\n#define ATTR_mailer \"mailer\"\n#define ATTR_mailfrom \"mail_from\"\n#define ATTR_nodepack \"node_pack\"\n#define ATTR_nodefailrq \"node_fail_requeue\"\n#define ATTR_resendtermdelay \"resend_term_delay\"\n#define ATTR_operators \"operators\"\n#define ATTR_queryother \"query_other_jobs\"\n#define ATTR_resccost \"resources_cost\"\n#define ATTR_rescavail \"resources_available\"\n#define ATTR_maxuserres \"max_user_res\"\n#define ATTR_maxuserressoft \"max_user_res_soft\"\n#define ATTR_maxgroupres \"max_group_res\"\n#define ATTR_maxgroupressoft \"max_group_res_soft\"\n#define ATTR_maxarraysize \"max_array_size\"\n#define ATTR_PNames \"pnames\"\n#define ATTR_schediteration \"scheduler_iteration\"\n#define ATTR_scheduling \"scheduling\"\n#define ATTR_status \"server_state\"\n#define 
ATTR_syscost \"system_cost\"\n#define ATTR_FlatUID \"flatuid\"\n#define ATTR_ResvEnable \"resv_enable\"\n#define ATTR_aclResvgren \"acl_resv_group_enable\"\n#define ATTR_aclResvgroup \"acl_resv_groups\"\n#define ATTR_aclResvhten \"acl_resv_host_enable\"\n#define ATTR_aclResvhost \"acl_resv_hosts\"\n#define ATTR_aclResvuren \"acl_resv_user_enable\"\n#define ATTR_aclResvuser \"acl_resv_users\"\n#define ATTR_NodeGroupEnable \"node_group_enable\"\n#define ATTR_NodeGroupKey \"node_group_key\"\n#define ATTR_dfltqdelargs \"default_qdel_arguments\"\n#define ATTR_dfltqsubargs \"default_qsub_arguments\"\n#define ATTR_rpp_retry \"rpp_retry\"\n#define ATTR_rpp_highwater \"rpp_highwater\"\n#define ATTR_pbs_license_info \"pbs_license_info\"\n#define ATTR_license_min \"pbs_license_min\"\n#define ATTR_license_max \"pbs_license_max\"\n#define ATTR_license_linger \"pbs_license_linger_time\"\n#define ATTR_license_count \"license_count\"\n#define ATTR_job_sort_formula \"job_sort_formula\"\n#define ATTR_EligibleTimeEnable \"eligible_time_enable\"\n#define ATTR_resv_retry_time \"reserve_retry_time\"\n#define ATTR_resv_retry_init \"reserve_retry_init\"\n#define ATTR_JobHistoryEnable \"job_history_enable\"\n#define ATTR_JobHistoryDuration \"job_history_duration\"\n#define ATTR_max_concurrent_prov \"max_concurrent_provision\"\n#define ATTR_resv_post_processing \"resv_post_processing_time\"\n#define ATTR_backfill_depth \"backfill_depth\"\n#define ATTR_clearesten \"clear_topjob_estimates_enable\"\n#define ATTR_job_requeue_timeout \"job_requeue_timeout\"\n#define ATTR_show_hidden_attribs \"show_hidden_attribs\"\n#define ATTR_python_restart_max_hooks \"python_restart_max_hooks\"\n#define ATTR_python_restart_max_objects \"python_restart_max_objects\"\n#define ATTR_python_restart_min_interval \"python_restart_min_interval\"\n#define ATTR_power_provisioning \"power_provisioning\"\n#define ATTR_sync_mom_hookfiles_timeout \"sync_mom_hookfiles_timeout\"\n#define ATTR_max_job_sequence_id 
\"max_job_sequence_id\"\n#define ATTR_has_runjob_hook \"has_runjob_hook\"\n#define ATTR_acl_krb_realm_enable \"acl_krb_realm_enable\"\n#define ATTR_acl_krb_realms \"acl_krb_realms\"\n#define ATTR_acl_krb_submit_realms \"acl_krb_submit_realms\"\n#define ATTR_cred_renew_enable \"cred_renew_enable\"\n#define ATTR_cred_renew_tool \"cred_renew_tool\"\n#define ATTR_cred_renew_period \"cred_renew_period\"\n#define ATTR_cred_renew_cache_period \"cred_renew_cache_period\"\n#define ATTR_attr_update_period \"attr_update_period\"\n\n/**\n * RPP_MAX_PKT_CHECK_DEFAULT controls the number of loops used to process\n * backend data before servicing frontend requests. Smaller values can\n * starve the amount of time spent on backend processing.\n * Larger values can have a marginal impact on latency of frontend requests.\n */\n#define ATTR_rpp_max_pkt_check \"rpp_max_pkt_check\"\n\n/* additional scheduler \"attribute\" names */\n\n#define ATTR_SchedHost \"sched_host\"\n#define ATTR_sched_cycle_len \"sched_cycle_length\"\n#define ATTR_do_not_span_psets \"do_not_span_psets\"\n#define ATTR_only_explicit_psets \"only_explicit_psets\"\n#define ATTR_sched_preempt_enforce_resumption \"sched_preempt_enforce_resumption\"\n#define ATTR_preempt_targets_enable \"preempt_targets_enable\"\n#define ATTR_job_sort_formula_threshold \"job_sort_formula_threshold\"\n#define ATTR_throughput_mode \"throughput_mode\"\n#define ATTR_opt_backfill_fuzzy \"opt_backfill_fuzzy\"\n#define ATTR_partition \"partition\"\n#define ATTR_sched_priv \"sched_priv\"\n#define ATTR_sched_log \"sched_log\"\n#define ATTR_sched_user \"sched_user\"\n#define ATTR_sched_state \"state\"\n#define ATTR_sched_preempt_queue_prio \"preempt_queue_prio\"\n#define ATTR_sched_preempt_prio \"preempt_prio\"\n#define ATTR_sched_preempt_order \"preempt_order\"\n#define ATTR_sched_preempt_sort \"preempt_sort\"\n#define ATTR_sched_server_dyn_res_alarm \"server_dyn_res_alarm\"\n#define ATTR_job_run_wait \"job_run_wait\"\n\n/* additional node 
\"attributes\" names */\n\n#define ATTR_NODE_Host \"Host\" /* in 8.0, replaced with ATTR_NODE_Mom */\n#define ATTR_NODE_Mom \"Mom\"\n#define ATTR_NODE_Port \"Port\"\n#define ATTR_NODE_state \"state\"\n#define ATTR_NODE_ntype \"ntype\"\n#define ATTR_NODE_jobs \"jobs\"\n#define ATTR_NODE_resvs \"resv\"\n#define ATTR_NODE_resv_enable \"resv_enable\"\n#define ATTR_NODE_np \"np\"\n#define ATTR_NODE_pcpus \"pcpus\"\n#define ATTR_NODE_properties \"properties\"\n#define ATTR_NODE_NoMultiNode \"no_multinode_jobs\"\n#define ATTR_NODE_No_Tasks \"no_tasks\"\n#define ATTR_NODE_Sharing \"sharing\"\n#define ATTR_NODE_ProvisionEnable \"provision_enable\"\n#define ATTR_NODE_current_aoe \"current_aoe\"\n#define ATTR_NODE_in_multivnode_host \"in_multivnode_host\"\n#define ATTR_NODE_License \"license\"\n#define ATTR_NODE_LicenseInfo \"license_info\"\n#define ATTR_NODE_TopologyInfo \"topology_info\"\n#define ATTR_NODE_MaintJobs \"maintenance_jobs\"\n#define ATTR_NODE_VnodePool \"vnode_pool\"\n#define ATTR_NODE_current_eoe \"current_eoe\"\n#define ATTR_NODE_power_provisioning \"power_provisioning\"\n#define ATTR_NODE_poweroff_eligible \"poweroff_eligible\"\n#define ATTR_NODE_last_state_change_time \"last_state_change_time\"\n#define ATTR_NODE_last_used_time \"last_used_time\"\n\n#define ND_RESC_LicSignature \"lic_signature\" /* custom resource used for licensing */\n\n/* Resource \"attribute\" names */\n#define ATTR_RESC_TYPE \"type\"\n#define ATTR_RESC_FLAG \"flag\"\n\n/* various attribute values */\n\n#define CHECKPOINT_UNSPECIFIED \"u\"\n#define NO_HOLD \"n\"\n#define NO_JOIN \"n\"\n#define NO_KEEP \"n\"\n#define MAIL_AT_ABORT \"a\"\n\n#define USER_HOLD \"u\"\n#define OTHER_HOLD \"o\"\n#define SYSTEM_HOLD \"s\"\n#define BAD_PASSWORD_HOLD \"p\"\n\n/* Add new MGR_CMDs before MGR_CMD_LAST */\nenum mgr_cmd {\n\tMGR_CMD_NONE = 
-1,\n\tMGR_CMD_CREATE,\n\tMGR_CMD_DELETE,\n\tMGR_CMD_SET,\n\tMGR_CMD_UNSET,\n\tMGR_CMD_LIST,\n\tMGR_CMD_PRINT,\n\tMGR_CMD_ACTIVE,\n\tMGR_CMD_IMPORT,\n\tMGR_CMD_EXPORT,\n\tMGR_CMD_LAST\n};\n\n/* Add new MGR_OBJs before MGR_OBJ_LAST */\nenum mgr_obj {\n\tMGR_OBJ_NONE = -1,\n\tMGR_OBJ_SERVER,\t\t /* Server\t*/\n\tMGR_OBJ_QUEUE,\t\t /* Queue\t*/\n\tMGR_OBJ_JOB,\t\t /* Job\t\t*/\n\tMGR_OBJ_NODE,\t\t /* Vnode  \t*/\n\tMGR_OBJ_RESV,\t\t /* Reservation\t*/\n\tMGR_OBJ_RSC,\t\t /* Resource\t*/\n\tMGR_OBJ_SCHED,\t\t /* Scheduler\t*/\n\tMGR_OBJ_HOST,\t\t /* Host  \t*/\n\tMGR_OBJ_HOOK,\t\t /* Hook         */\n\tMGR_OBJ_PBS_HOOK,\t /* PBS Hook     */\n\tMGR_OBJ_JOBARRAY_PARENT, /* Job array parent */\n\tMGR_OBJ_SUBJOB,\t\t /* Sub Job */\n\tMGR_OBJ_LAST\t\t /* Last entry\t*/\n};\n\n#define MGR_OBJ_SITE_HOOK MGR_OBJ_HOOK\n#define SITE_HOOK \"hook\"\n#define PBS_HOOK \"pbshook\"\n\n/* Misc defines for various requests */\n#define MSG_OUT 1\n#define MSG_ERR 2\n\n/* SUSv2 guarantees that host names are limited to 255 bytes */\n#define PBS_MAXHOSTNAME 255 /* max host name length */\n#ifndef MAXPATHLEN\n#define MAXPATHLEN 1024 /* max path name length */\n#endif\n#ifndef MAXNAMLEN\n#define MAXNAMLEN 255\n#endif\n#define MSVR_JID_NCHARS_SVR 2 /* No. 
of chars reserved for svr instance in job ids for multi-server */\n#define PBS_MAXSCHEDNAME 15\n#define PBS_MAXUSER 256\t\t\t\t\t\t\t\t\t\t\t   /* max user name length */\n#define PBS_MAXPWLEN 256\t\t\t\t\t\t\t\t\t\t   /* max password length */\n#define PBS_MAXGRPN 256\t\t\t\t\t\t\t\t\t\t\t   /* max group name length */\n#define PBS_MAXQUEUENAME 15\t\t\t\t\t\t\t\t\t\t   /* max queue name length */\n#define PBS_MAXJOBNAME 230\t\t\t\t\t\t\t\t\t\t   /* max job name length */\n#define PBS_MAXSERVERNAME PBS_MAXHOSTNAME\t\t\t\t\t\t\t\t   /* max server name length */\n#define MAX_SVR_ID (PBS_MAXSERVERNAME + PBS_MAXPORTNUM + 1)\t\t\t\t\t\t   /* svr_id is of the form server_name:port */\n#define PBS_MAXSEQNUM 12\t\t\t\t\t\t\t\t\t\t   /* max sequence number length */\n#define PBS_DFLT_MAX_JOB_SEQUENCE_ID 9999999\t\t\t\t\t\t\t\t   /* default value of max_job_sequence_id server attribute */\n#define PBS_MAXPORTNUM 5\t\t\t\t\t\t\t\t\t\t   /* udp/tcp port numbers max=16 bits */\n#define PBS_MAXSVRJOBID (PBS_MAXSEQNUM + MSVR_JID_NCHARS_SVR - 1 + PBS_MAXSERVERNAME + PBS_MAXPORTNUM + 2) /* server job id size, -1 to keep same length when made SEQ 7 */\n#define PBS_MAXSVRRESVID (PBS_MAXSVRJOBID + 1)\n#define PBS_MAXQRESVNAME (PBS_MAXQUEUENAME)\n#define PBS_MAXCLTJOBID (PBS_MAXSVRJOBID + PBS_MAXSERVERNAME + PBS_MAXPORTNUM + 2)   /* client job id size */\n#define PBS_MAXDEST 256\t\t\t\t\t\t\t\t     /* destination size */\n#define PBS_MAXROUTEDEST (PBS_MAXQUEUENAME + PBS_MAXSERVERNAME + PBS_MAXPORTNUM + 2) /* destination size */\n#define PBS_INTERACTIVE 1\t\t\t\t\t\t\t     /* Support of Interactive jobs */\n#define PBS_TERM_BUF_SZ 80\t\t\t\t\t\t\t     /* Interactive term buffer size */\n#define PBS_TERM_CCA 6\t\t\t\t\t\t\t\t     /* Interactive term cntl char array */\n#define PBS_RESV_ID_CHAR 'R'\t\t\t\t\t\t\t     /* Character in front of a resv ID */\n#define PBS_STDNG_RESV_ID_CHAR 'S'\t\t\t\t\t\t     /* Character in front of a resv ID */\n#define PBS_MNTNC_RESV_ID_CHAR 'M'\t\t\t\t\t\t  
   /* Character in front of a resv ID */\n#define PBS_AUTH_KEY_LEN (129)\n#define PBS_MAXIP_LEN 15\t\t\t\t\t\t\t     /* max ip address length */\n\n/* the pair to this list is in module_pbs_v1.c and must be updated to reflect any changes */\nenum batch_op { SET,\n\t\tUNSET,\n\t\tINCR,\n\t\tDECR,\n\t\tEQ,\n\t\tNE,\n\t\tGE,\n\t\tGT,\n\t\tLE,\n\t\tLT,\n\t\tDFLT,\n\t\tINTERNAL\n};\n\n/* shutdown manners externally visible */\n#define SHUT_IMMEDIATE 0\n#define SHUT_DELAY 1\n#define SHUT_QUICK 2\n\n/* messages that may be passed by the pbs_deljob() api to the server via its extend parameter */\n\n#define FORCE \"force\"\n#define NOMAIL \"nomail\"\n#define SUPPRESS_EMAIL \"suppress_email\"\n#define DELETEHISTORY \"deletehist\"\n\n/*\n ** This structure is identical to attropl so they can be used\n ** interchangeably.  The op field is not used.\n */\nstruct attrl {\n\tstruct attrl *next;\n\tchar *name;\n\tchar *resource;\n\tchar *value;\n\tenum batch_op op; /* not used */\n};\n\nstruct attropl {\n\tstruct attropl *next;\n\tchar *name;\n\tchar *resource;\n\tchar *value;\n\tenum batch_op op;\n};\n\nstruct batch_status {\n\tstruct batch_status *next;\n\tchar *name;\n\tstruct attrl *attribs;\n\tchar *text;\n};\n\nstruct batch_deljob_status {\n\tstruct batch_deljob_status *next;\n\tchar *name;\n\tint code;\n};\n\n/* structure to hold an attribute that failed verification at ECL\n * and the associated errcode and errmsg\n */\nstruct ecl_attrerr {\n\tstruct attropl *ecl_attribute;\n\tint ecl_errcode;\n\tchar *ecl_errmsg;\n};\n\n/* structure to hold a number of attributes that failed verification */\nstruct ecl_attribute_errors {\n\tint ecl_numerrors;\t\t /* num of attributes that failed verification */\n\tstruct ecl_attrerr *ecl_attrerr; /* ecl_attrerr array of structs */\n};\n\nenum preempt_method {\n\tPREEMPT_METHOD_LOW,\n\tPREEMPT_METHOD_SUSPEND,\n\tPREEMPT_METHOD_CHECKPOINT,\n\tPREEMPT_METHOD_REQUEUE,\n\tPREEMPT_METHOD_DELETE,\n\tPREEMPT_METHOD_HIGH\n};\n\ntypedef struct 
preempt_job_info {\n\tchar job_id[PBS_MAXSVRJOBID + 1];\n\tchar order[PREEMPT_METHOD_HIGH + 1];\n} preempt_job_info;\n\n/* Resource Reservation Information */\ntypedef int pbs_resource_t; /* resource reservation handle */\n\n#define RESOURCE_T_NULL (pbs_resource_t) 0\n#define RESOURCE_T_ALL (pbs_resource_t) - 1\n\nenum resv_states { RESV_NONE,\n\t\t   RESV_UNCONFIRMED,\n\t\t   RESV_CONFIRMED,\n\t\t   RESV_WAIT,\n\t\t   RESV_TIME_TO_RUN,\n\t\t   RESV_RUNNING,\n\t\t   RESV_FINISHED,\n\t\t   RESV_BEING_DELETED,\n\t\t   RESV_DELETED,\n\t\t   RESV_DELETING_JOBS,\n\t\t   RESV_DEGRADED,\n\t\t   RESV_BEING_ALTERED,\n\t\t   RESV_IN_CONFLICT };\n\n#ifdef _USRDLL /* This is only for building Windows DLLs\n\t\t\t * and not their static libraries\n\t\t\t */\n\n#ifdef DLL_EXPORT\n#define DECLDIR __declspec(dllexport)\n#else\n#define DECLDIR __declspec(dllimport)\n#endif\n\n#ifndef __PBS_ERRNO\n#define __PBS_ERRNO\nDECLDIR int *__pbs_errno_location(void);\n#define pbs_errno (*__pbs_errno_location())\n#endif\n\n/* server attempted to connect | connected to */\n/* see pbs_connect(3B)\t\t\t      */\n#ifndef __PBS_SERVER\n#define __PBS_SERVER\nDECLDIR char *__pbs_server_location(void);\n#define pbs_server (__pbs_server_location())\n#endif\n\nDECLDIR int pbs_asyrunjob(int, char *, char *, char *);\n\nDECLDIR int pbs_alterjob(int, char *, struct attrl *, char *);\n\nDECLDIR int pbs_connect(char *);\n\nDECLDIR int pbs_connect_extend(char *, char *);\n\nDECLDIR char *pbs_default(void);\n\nDECLDIR int pbs_deljob(int, char *, char *);\n\nDECLDIR struct batch_deljob_status *pbs_deljoblist(int, char **, int, char *);\n\nDECLDIR int pbs_disconnect(int);\n\nDECLDIR char *pbs_geterrmsg(int);\n\nDECLDIR int pbs_holdjob(int, char *, char *, char *);\n\nDECLDIR char *pbs_locjob(int, char *, char *);\n\nDECLDIR int pbs_manager(int, int, int, char *, struct attropl *, char *);\n\nDECLDIR int pbs_movejob(int, char *, char *, char *);\n\nDECLDIR int pbs_msgjob(int, char *, int, char *, char 
*);\n\nDECLDIR int pbs_relnodesjob(int, char *, char *, char *);\n\nDECLDIR int pbs_orderjob(int, char *, char *, char *);\n\nDECLDIR int pbs_rerunjob(int, char *, char *);\n\nDECLDIR int pbs_rlsjob(int, char *, char *, char *);\n\nDECLDIR int pbs_runjob(int, char *, char *, char *);\n\nDECLDIR char **pbs_selectjob(int, struct attropl *, char *);\n\nDECLDIR int pbs_sigjob(int, char *, char *, char *);\n\nDECLDIR void pbs_statfree(struct batch_status *);\n\nDECLDIR struct batch_status *pbs_statrsc(int, char *, struct attrl *, char *);\n\nDECLDIR struct batch_status *pbs_statjob(int, char *, struct attrl *, char *);\n\nDECLDIR struct batch_status *pbs_selstat(int, struct attropl *, struct attrl *, char *);\n\nDECLDIR struct batch_status *pbs_statque(int, char *, struct attrl *, char *);\n\nDECLDIR struct batch_status *pbs_statserver(int, struct attrl *, char *);\n\nDECLDIR struct batch_status *pbs_statsched(int, struct attrl *, char *);\n\nDECLDIR struct batch_status *pbs_stathost(int, char *, struct attrl *, char *);\n\nDECLDIR struct batch_status *pbs_statnode(int, char *, struct attrl *, char *);\n\nDECLDIR struct batch_status *pbs_statvnode(int, char *, struct attrl *, char *);\n\nDECLDIR struct batch_status *pbs_statresv(int, char *, struct attrl *, char *);\n\nDECLDIR struct batch_status *pbs_stathook(int, char *, struct attrl *, char *);\n\nDECLDIR struct ecl_attribute_errors *pbs_get_attributes_in_error(int);\n\nDECLDIR char *pbs_submit(int, struct attropl *, char *, char *, char *);\n\nDECLDIR char *pbs_submit_resv(int, struct attropl *, char *);\n\nDECLDIR int pbs_delresv(int, char *, char *);\n\nDECLDIR int pbs_terminate(int, int, char *);\n\nDECLDIR char *pbs_modify_resv(int, char *, struct attropl *, char *);\n\nDECLDIR preempt_job_info *pbs_preempt_jobs(int, char **);\n#else\n\n#ifndef __PBS_ERRNO\n#define __PBS_ERRNO\nextern int *__pbs_errno_location(void);\n#define pbs_errno (*__pbs_errno_location())\n#endif\n\n/* see pbs_connect(3B)\t\t\t      
*/\n#ifndef __PBS_SERVER\n#define __PBS_SERVER\nextern char *__pbs_server_location(void);\n#define pbs_server (__pbs_server_location())\n#endif\n\nextern int pbs_asyrunjob(int, const char *, const char *, const char *);\n\nextern int pbs_asyrunjob_ack(int, const char *, const char *, const char *);\n\nextern int pbs_alterjob(int, const char *, struct attrl *, const char *);\n\nextern int pbs_asyalterjob(int c, const char *jobid, struct attrl *attrib, const char *extend);\n\nextern int pbs_confirmresv(int, const char *, const char *, unsigned long, const char *);\n\nextern int pbs_connect(const char *);\n\nextern int pbs_connect_extend(const char *, const char *);\n\nextern int pbs_disconnect(int);\n\nextern int pbs_manager(int, int, int, const char *, struct attropl *, const char *);\n\nextern char *pbs_default(void);\n\nextern int pbs_deljob(int, const char *, const char *);\n\nextern struct batch_deljob_status *pbs_deljoblist(int, char **, int, const char *);\n\nextern char *pbs_geterrmsg(int);\n\nextern int pbs_holdjob(int, const char *, const char *, const char *);\n\nextern int pbs_loadconf(int);\n\nextern char *pbs_locjob(int, const char *, const char *);\n\nextern int pbs_movejob(int, const char *, const char *, const char *);\n\nextern int pbs_msgjob(int, const char *, int, const char *, const char *);\n\nextern int pbs_relnodesjob(int, const char *, const char *, const char *);\n\nextern int pbs_orderjob(int, const char *, const char *, const char *);\n\nextern int pbs_rerunjob(int, const char *, const char *);\n\nextern int pbs_rlsjob(int, const char *, const char *, const char *);\n\nextern int pbs_runjob(int, const char *, const char *, const char *);\n\nextern char **pbs_selectjob(int, struct attropl *, const char *);\n\nextern int pbs_sigjob(int, const char *, const char *, const char *);\n\nextern void pbs_statfree(struct batch_status *);\n\nextern void pbs_delstatfree(struct batch_deljob_status *);\n\nextern struct batch_status *pbs_statrsc(int, 
const char *, struct attrl *, const char *);\n\nextern struct batch_status *pbs_statjob(int, const char *, struct attrl *, const char *);\n\nextern struct batch_status *pbs_selstat(int, struct attropl *, struct attrl *, const char *);\n\nextern struct batch_status *pbs_statque(int, const char *, struct attrl *, const char *);\n\nextern struct batch_status *pbs_statserver(int, struct attrl *, const char *);\n\nextern struct batch_status *pbs_statsched(int, struct attrl *, const char *);\n\nextern struct batch_status *pbs_stathost(int, const char *, struct attrl *, const char *);\n\nextern struct batch_status *pbs_statnode(int, const char *, struct attrl *, const char *);\n\nextern struct batch_status *pbs_statvnode(int, const char *, struct attrl *, const char *);\n\nextern struct batch_status *pbs_statresv(int, const char *, struct attrl *, const char *);\n\nextern struct batch_status *pbs_stathook(int, const char *, struct attrl *, const char *);\n\nextern struct ecl_attribute_errors *pbs_get_attributes_in_error(int);\n\nextern char *pbs_submit(int, struct attropl *, const char *, const char *, const char *);\n\nextern char *pbs_submit_resv(int, struct attropl *, const char *);\n\nextern int pbs_delresv(int, const char *, const char *);\n\nextern int pbs_terminate(int, int, const char *);\n\nextern char *pbs_modify_resv(int, const char *, struct attropl *, const char *);\n\nextern preempt_job_info *pbs_preempt_jobs(int, char **);\n#endif /* _USRDLL */\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _PBS_IFL_H */\n"
  },
  {
    "path": "src/include/pbs_internal.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _PBS_INTERNAL_H\n#define _PBS_INTERNAL_H\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include <stdbool.h>\n#include \"pbs_ifl.h\"\n#include \"portability.h\"\n#include \"libutil.h\"\n#include \"auth.h\"\n\n/*\n *\n *  pbs_internal.h\n *  This file contains all the definitions that are used by internal tools/cmds\n *  by pbs/pbs suite products like Aif.\n *\n */\n\n/* node-attribute values (state,ntype) */\n\n#define ND_free \"free\"\n#define ND_offline \"offline\"\n#define ND_offline_by_mom \"offline_by_mom\"\n#define ND_down \"down\"\n#define ND_Stale \"Stale\"\n#define ND_jobbusy \"job-busy\"\n#define ND_job_exclusive \"job-exclusive\"\n#define ND_resv_exclusive \"resv-exclusive\"\n#define ND_job_sharing \"job-sharing\"\n#define ND_busy \"busy\"\n#define ND_state_unknown \"state-unknown\"\n#define ND_prov \"provisioning\"\n#define ND_wait_prov \"wait-provisioning\"\n#define ND_maintenance \"maintenance\"\n#define ND_pbs \"PBS\"\n#define ND_Default_Shared \"default_shared\"\n#define ND_Default_Excl \"default_excl\"\n#define ND_Default_Exclhost \"default_exclhost\"\n#define ND_Ignore_Excl \"ignore_excl\"\n#define ND_Force_Excl \"force_excl\"\n#define ND_Force_Exclhost \"force_exclhost\"\n#define ND_Initializing \"initializing\"\n#define ND_unresolvable \"unresolvable\"\n#define ND_sleep \"sleep\"\n\n/* Defines for type of 
Attribute based on data type \t\t\t*/\n/* currently limited to 4 bits (max number 15)\t\t\t\t*/\n\n#define ATR_TYPE_NONE 0\t   /* Not to be used */\n#define ATR_TYPE_LONG 1\t   /* Long integer, also Boolean */\n#define ATR_TYPE_CHAR 2\t   /* single character */\n#define ATR_TYPE_STR 3\t   /* string, null terminated */\n#define ATR_TYPE_ARST 4\t   /* Array of strings (char **) */\n#define ATR_TYPE_SIZE 5\t   /* size (integer + suffix) */\n#define ATR_TYPE_RESC 6\t   /* list type: resources only */\n#define ATR_TYPE_LIST 7\t   /* list type:  dependencies, unkn, etc */\n#define ATR_TYPE_ACL 8\t   /* Access Control Lists */\n#define ATR_TYPE_LL 9\t   /* Long (64 bit) integer */\n#define ATR_TYPE_SHORT 10  /* short integer    */\n#define ATR_TYPE_BOOL 11   /* boolean\t    */\n#define ATR_TYPE_JINFOP 13 /* struct jobinfo*  */\n#define ATR_TYPE_FLOAT 14  /* Float  */\n#define ATR_TYPE_ENTITY 15 /* FGC Entity Limit */\n/* WARNING: adding another WILL overflow the type field in the attribute_def */\n\n/* Defines for Flag field in attribute_def */\n\n#define ATR_DFLAG_USRD 0x01    /* User client can read (status) attribute */\n#define ATR_DFLAG_USWR 0x02    /* User client can write (set)   attribute */\n#define ATR_DFLAG_OPRD 0x04    /* Operator client can read   attribute */\n#define ATR_DFLAG_OPWR 0x08    /* Operator client can write  attribute */\n#define ATR_DFLAG_MGRD 0x10    /* Manager client can read  attribute */\n#define ATR_DFLAG_MGWR 0x20    /* Manager client can write attribute */\n#define ATR_DFLAG_OTHRD 0x40   /* Reserved */\n#define ATR_DFLAG_Creat 0x80   /* Can be set on create only */\n#define ATR_DFLAG_SvRD 0x100   /* job attribute is sent to server on move */\n#define ATR_DFLAG_SvWR 0x200   /* job attribute is settable by server/Sch */\n#define ATR_DFLAG_MOM 0x400    /* attr/resc sent to MOM \"iff\" set\t   */\n#define ATR_DFLAG_RDACC 0x515  /* Read access mask  */\n#define ATR_DFLAG_WRACC 0x6AA  /* Write access mask */\n#define ATR_DFLAG_ACCESS 0x7ff /* 
Mask access flags */\n\n#define ATR_DFLAG_ALTRUN 0x0800\t /* (job) attr/resc is alterable in Run state  */\n#define ATR_DFLAG_NOSAVM 0x1000\t /* object not saved on attribute modify       */\n#define ATR_DFLAG_SELEQ 0x2000\t /* attribute is only selectable eq/ne\t      */\n#define ATR_DFLAG_RASSN 0x4000\t /* resc in server/queue resources_assigned    */\n#define ATR_DFLAG_ANASSN 0x8000\t /* resource in all node resources_assigned  */\n#define ATR_DFLAG_FNASSN 0x10000 /* resource in 1st node resources_assigned  */\n#define ATR_DFLAG_CVTSLT 0x20000 /* used in or converted to select directive */\n#define ATR_DFLAG_SCGALT 0x40000 /* if altered during sched cycle dont run job*/\n#define ATR_DFLAG_HIDDEN 0x80000 /* if set, keep attribute hidden to client */\n\n#define SHUT_MASK 0xf\n#define SHUT_WHO_MASK 0x1f0\n#define SHUT_SIG 8\n#define SHUT_WHO_SCHED 0x10\t /* also shutdown Scheduler\t  */\n#define SHUT_WHO_MOM 0x20\t /* also shutdown Moms\t\t  */\n#define SHUT_WHO_SECDRY 0x40\t /* also shutdown Secondary Server */\n#define SHUT_WHO_IDLESECDRY 0x80 /* idle the Secondary Server    */\n#define SHUT_WHO_SECDONLY 0x100\t /* shut down the Secondary only */\n\n#define SIG_RESUME \"resume\"\n#define SIG_SUSPEND \"suspend\"\n#define SIG_TermJob \"TermJob\"\n#define SIG_RERUN \"Rerun\"\n#define SIG_ADMIN_SUSPEND \"admin-suspend\"\n#define SIG_ADMIN_RESUME \"admin-resume\"\n\n#define PLACE_Group \"group\"\n#define PLACE_Excl \"excl\"\n#define PLACE_ExclHost \"exclhost\"\n#define PLACE_Shared \"shared\"\n#define PLACE_Free \"free\"\n#define PLACE_Pack \"pack\"\n#define PLACE_Scatter \"scatter\"\n#define PLACE_VScatter \"vscatter\"\n\n#define ATR_TRUE \"True\"\n#define ATR_FALSE \"False\"\n\n#ifdef WIN32\n#define ESC_CHAR '^' /* commonly used in windows cmd shell */\n#else\n#define ESC_CHAR '\\\\'\n#endif\n\n/* set of characters that are not allowed in a queue name */\n#define INVALID_QUEUE_NAME_CHARS \"`~!$%^&*()+=<>?;'\\\"|\"\n\n/*constant related to sum of string lengths for 
above strings*/\n#define MAX_ENCODE_BFR 100\n\n/* Default value of Node fail requeue (ATTR_nodefailrq)*/\n#define PBS_NODE_FAIL_REQUEUE_DEFAULT 310\n\n/* Default value of resend_term_delay (ATTR_resendtermdelay)*/\n#define PBS_RESEND_TERM_DELAY_DEFAULT 5\n\n/* Default value of preempt_queue_prio */\n#define PBS_PREEMPT_QUEUE_PRIO_DEFAULT 150\n\n/* Default value of server_dyn_res_alarm */\n#define PBS_SERVER_DYN_RES_ALARM_DEFAULT 30\n\n/* Default value of preempt_prio */\n#define PBS_PREEMPT_PRIO_DEFAULT \"express_queue, normal_jobs\"\n\n/* Default value of preempt_order */\n#define PBS_PREEMPT_ORDER_DEFAULT \"SCR\"\n\n/* Default value of preempt_sort */\n#define PBS_PREEMPT_SORT_DEFAULT \"min_time_since_start\"\n\n// clang-format off\nstruct pbs_config\n{\n\tunsigned loaded:1;\t\t\t/* has the conf file been loaded? */\n\tunsigned load_failed:1;\t\t/* previously loaded and failed */\n\tunsigned start_server:1;\t\t/* should the server be started */\n\tunsigned start_mom:1;\t\t\t/* should the mom be started */\n\tunsigned start_sched:1;\t\t/* should the scheduler be started */\n\tunsigned start_comm:1; \t\t/* should the comm daemon be started */\n\tunsigned locallog:1;\t\t\t/* do local logging */\n\tchar **supported_auth_methods;\t\t/* supported auth methods on server */\n\tchar **auth_service_users;\t\t/* recognised service users */\n\tchar encrypt_method[MAXAUTHNAME + 1];\t/* auth method used to encrypt/decrypt data */\n\tchar auth_method[MAXAUTHNAME + 1];\t/* default auth_method used by client */\n\tchar interactive_auth_method[MAXAUTHNAME + 1];\t/* auth_method used in interactive qsub sessions */\n\tchar interactive_encrypt_method[MAXAUTHNAME + 1];\t/* encrypt_method used in interactive qsub sessions */\n\tunsigned int sched_modify_event:1;\t/* whether to trigger modifyjob hook event or not */\n\tunsigned syslogfac;\t\t        /* syslog facility */\n\tunsigned syslogsvr;\t\t\t/* min priority to log to syslog */\n\tunsigned int batch_service_port;\t/* PBS 
batch_service_port */\n\tunsigned int batch_service_port_dis;\t/* PBS batch_service_port_dis */\n\tunsigned int mom_service_port;\t/* PBS mom_service_port */\n\tunsigned int manager_service_port;\t/* PBS manager_service_port */\n\tunsigned int pbs_data_service_port;    /* PBS data_service port */\n\tchar *pbs_conf_file;\t\t\t/* full path of the pbs.conf file */\n\tchar *pbs_home_path;\t\t\t/* path to the pbs home dir */\n\tchar *pbs_exec_path;\t\t\t/* path to the pbs exec dir */\n\tchar *pbs_server_name;\t\t/* name of PBS Server, usually hostname of host on which PBS server is executing */\n\tchar *cp_path;\t\t\t/* path to local copy function */\n\tchar *scp_path;\t\t\t/* path to scp, overriding the default (OS-dependent) one */\n\tchar *scp_args;\t\t\t/* arguments for scp, overriding the default (OS-dependent) ones */\n\tchar *rcp_path;\t\t\t/* path to pbs_rsh */\n\tchar *pbs_demux_path;\t\t\t/* path to pbs demux */\n\tchar *pbs_environment;\t\t/* path to pbs_environment file */\n\tchar *iff_path;\t\t\t/* path to pbs_iff */\n\tchar *pbs_primary;\t\t\t/* FQDN of host with primary server */\n\tchar *pbs_secondary;\t\t\t/* FQDN of host with secondary server */\n\tchar *pbs_mom_home;\t\t\t/* path to alternate home for Mom */\n\tchar *pbs_core_limit;\t\t\t/* RLIMIT_CORE setting */\n\tchar *pbs_data_service_host;\t\t/* dataservice host */\n\tchar *pbs_tmpdir;\t\t\t/* temporary file directory */\n\tchar *pbs_server_host_name;\t/* name of host on which Server is running */\n\tchar *pbs_public_host_name;\t/* name of the local host for outgoing connections */\n\tchar *pbs_mail_host_name;\t/* name of host to which to address mail */\n\tchar *pbs_smtp_server_name;   /* name of SMTP host to which to send mail */\n\tchar *pbs_output_host_name;\t/* name of host to which to stage std out/err */\n\tunsigned pbs_use_compression:1;\t/* whether pbs should compress communication data */\n\tunsigned pbs_use_mcast:1;\t\t/* whether pbs should multicast communication */\n\tchar 
*pbs_leaf_name;\t\t\t/* non-default name of this leaf in the communication network */\n\tchar *pbs_leaf_routers;\t\t/* for this leaf, the optional list of routers to talk to */\n\tchar *pbs_comm_name;\t\t\t/* non-default name of this router in the communication network */\n\tchar *pbs_comm_routers;\t\t/* for this router, the optional list of other routers to talk to */\n\tlong  pbs_comm_log_events;      /* log_events for pbs_comm process, default 0 */\n\tunsigned int pbs_comm_threads;\t/* number of threads for router, default 4 */\n\tchar *pbs_mom_node_name;\t/* mom short name used for natural node, default NULL */\n\tunsigned int pbs_log_highres_timestamp; /* high resolution logging */\n\tunsigned int pbs_sched_threads;\t/* number of threads for scheduler */\n\tchar *pbs_daemon_service_user; /* user the scheduler runs as */\n\tchar *pbs_daemon_service_auth_user; /* auth user the scheduler runs as */\n\tchar *pbs_privileged_auth_user; /* auth user with admin access */\n\tchar *pbs_gss_user_creds_bin; /* path to user credentials program */\n\tchar current_user[PBS_MAXUSER+1]; /* current running user */\n#ifdef WIN32\n\tchar *pbs_conf_remote_viewer; /* Remote viewer client executable for PBS GUI jobs, along with launch options */\n#endif\n};\n\nextern struct pbs_config pbs_conf;\n\n// clang-format on\n\n/*\n * NOTE: PBS_CONF_PATH is no longer defined here. It has moved into\n * the pbs_config.h file generated by configure and has been renamed\n * to PBS_CONF_FILE to reflect the environment variable that can\n * now override the value defined at compile time.\n */\n\n/* names in the pbs.conf file */\n#define PBS_CONF_START_SERVER\t\"PBS_START_SERVER\"\t/* start the server? */\n#define PBS_CONF_START_MOM\t\"PBS_START_MOM\"\t\t   /* start the mom? */\n#define PBS_CONF_START_SCHED\t\"PBS_START_SCHED\"    /* start the scheduler? */\n#define PBS_CONF_START_COMM \t\"PBS_START_COMM\"    /* start the comm? 
*/\n#define PBS_CONF_LOCALLOG\t\"PBS_LOCALLOG\"\t/* non-zero to force logging */\n#define PBS_CONF_SYSLOG\t\t\"PBS_SYSLOG\"\t  /* non-zero for syslogging */\n#define PBS_CONF_SYSLOGSEVR\t\"PBS_SYSLOGSEVR\"  /* severity lvl for syslog */\n#define PBS_CONF_BATCH_SERVICE_PORT\t     \"PBS_BATCH_SERVICE_PORT\"\n#define PBS_CONF_BATCH_SERVICE_PORT_DIS\t     \"PBS_BATCH_SERVICE_PORT_DIS\"\n#define PBS_CONF_MOM_SERVICE_PORT\t     \"PBS_MOM_SERVICE_PORT\"\n#define PBS_CONF_MANAGER_SERVICE_PORT\t     \"PBS_MANAGER_SERVICE_PORT\"\n#define PBS_CONF_DATA_SERVICE_PORT           \"PBS_DATA_SERVICE_PORT\"\n#define PBS_CONF_DATA_SERVICE_HOST           \"PBS_DATA_SERVICE_HOST\"\n#define PBS_CONF_USE_COMPRESSION     \t     \"PBS_USE_COMPRESSION\"\n#define PBS_CONF_USE_MCAST\t\t     \"PBS_USE_MCAST\"\n#define PBS_CONF_LEAF_NAME\t\t     \"PBS_LEAF_NAME\"\n#define PBS_CONF_LEAF_ROUTERS\t\t     \"PBS_LEAF_ROUTERS\"\n#define PBS_CONF_COMM_NAME\t\t     \"PBS_COMM_NAME\"\n#define PBS_CONF_COMM_ROUTERS\t\t     \"PBS_COMM_ROUTERS\"\n#define PBS_CONF_COMM_THREADS\t\t     \"PBS_COMM_THREADS\"\n#define PBS_CONF_COMM_LOG_EVENTS\t     \"PBS_COMM_LOG_EVENTS\"\n#define PBS_CONF_HOME\t\t\"PBS_HOME\"\t \t /* path to pbs home */\n#define PBS_CONF_EXEC\t\t\"PBS_EXEC\"\t\t /* path to pbs exec */\n#define PBS_CONF_DEFAULT_NAME\t\"PBS_DEFAULT\"\t  /* old name for PBS_SERVER */\n#define PBS_CONF_SERVER_NAME\t\"PBS_SERVER\"\t   /* name of the pbs server */\n#define PBS_CONF_INSTALL_MODE    \"PBS_INSTALL_MODE\" /* PBS installation mode */\n#define PBS_CONF_RCP\t\t\"PBS_RCP\"\n#define PBS_CONF_CP\t\t\"PBS_CP\"\n#define PBS_CONF_SCP\t\t\"PBS_SCP\"\t\t      /* path to scp */\n#define PBS_CONF_SCP_ARGS\t\"PBS_SCP_ARGS\"\t\t      /* args for scp */\n#define PBS_CONF_ENVIRONMENT    \"PBS_ENVIRONMENT\" /* path to pbs_environment */\n#define PBS_CONF_PRIMARY\t\"PBS_PRIMARY\"  /* Primary Server in failover */\n#define PBS_CONF_SECONDARY\t\"PBS_SECONDARY\"\t/* Secondary Server failover */\n#define 
PBS_CONF_MOM_HOME\t\"PBS_MOM_HOME\"  /* alt Mom home for failover */\n#define PBS_CONF_CORE_LIMIT\t\"PBS_CORE_LIMIT\"      /* RLIMIT_CORE setting */\n#define PBS_CONF_SERVER_HOST_NAME \"PBS_SERVER_HOST_NAME\"\n#define PBS_CONF_PUBLIC_HOST_NAME \"PBS_PUBLIC_HOST_NAME\"\n#define PBS_CONF_MAIL_HOST_NAME \"PBS_MAIL_HOST_NAME\"\n#define PBS_CONF_OUTPUT_HOST_NAME \"PBS_OUTPUT_HOST_NAME\"\n#define PBS_CONF_SMTP_SERVER_NAME \"PBS_SMTP_SERVER_NAME\" /* Name of SMTP Host to send mail to */\n#define PBS_CONF_TMPDIR\t\t\"PBS_TMPDIR\"     /* temporary file directory */\n#define PBS_CONF_INTERACTIVE_AUTH_METHOD\t\"PBS_INTERACTIVE_AUTH_METHOD\"\t/* Authentication method used in qsub interactive */\n#define PBS_CONF_INTERACTIVE_ENCRYPT_METHOD\t\"PBS_INTERACTIVE_ENCRYPT_METHOD\"\t/* Encryption method used in qsub interactive */\n#define PBS_CONF_AUTH\t\t\"PBS_AUTH_METHOD\"\n#define PBS_CONF_ENCRYPT_METHOD\t\"PBS_ENCRYPT_METHOD\"\n#define PBS_CONF_SUPPORTED_AUTH_METHODS\t\"PBS_SUPPORTED_AUTH_METHODS\"\n#define PBS_CONF_AUTH_SERVICE_USERS\t\"PBS_AUTH_SERVICE_USERS\"\n#define PBS_CONF_SCHEDULER_MODIFY_EVENT\t\"PBS_SCHEDULER_MODIFY_EVENT\"\n#define PBS_CONF_MOM_NODE_NAME\t\"PBS_MOM_NODE_NAME\"\n#define PBS_CONF_LOG_HIGHRES_TIMESTAMP\t\"PBS_LOG_HIGHRES_TIMESTAMP\"\n#define PBS_CONF_SCHED_THREADS\t\"PBS_SCHED_THREADS\"\n#define PBS_CONF_DAEMON_SERVICE_USER \"PBS_DAEMON_SERVICE_USER\"\n#define PBS_CONF_DAEMON_SERVICE_AUTH_USER \"PBS_DAEMON_SERVICE_AUTH_USER\"\n#define PBS_CONF_PRIVILEGED_AUTH_USER \"PBS_PRIVILEGED_AUTH_USER\" /* e.g.: used for gss/krb and krb host principal (host/<fqdn>@<REALM>) is expected */\n#define PBS_CONF_GSS_USER_CREDENTIALS_BIN \"PBS_GSS_USER_CREDENTIALS_BIN\"\n#ifdef WIN32\n#define PBS_CONF_REMOTE_VIEWER \"PBS_REMOTE_VIEWER\"\t/* Executable for remote viewer application alongwith its launch options, for PBS GUI jobs */\n#endif\n#define LOCALHOST_FULLNAME \"localhost.localdomain\"\n#define LOCALHOST_SHORTNAME \"localhost\"\n\n/* someday the PBS_*_PORT definition 
will go away and only the\t*/\n/* PBS_*_SERVICE_NAME form will be used, maybe\t\t\t*/\n\n#define PBS_BATCH_SERVICE_NAME\t\t\"pbs\"\n#define PBS_BATCH_SERVICE_PORT\t\t15001\n#define PBS_BATCH_SERVICE_NAME_DIS\t\"pbs_dis\"\t/* new DIS port   */\n#define PBS_BATCH_SERVICE_PORT_DIS\t15001\t\t/* new DIS port   */\n#define PBS_MOM_SERVICE_NAME\t\t\"pbs_mom\"\n#define PBS_MOM_SERVICE_PORT\t\t15002\n#define PBS_MANAGER_SERVICE_NAME\t\"pbs_resmon\"\n#define PBS_MANAGER_SERVICE_PORT\t15003\n#define PBS_SCHEDULER_SERVICE_NAME\t\"pbs_sched\"\n#define PBS_SCHEDULER_SERVICE_PORT\t15004\n#define PBS_DATA_SERVICE_NAME           \"pbs_dataservice\"\n#define PBS_DATA_SERVICE_STORE_NAME     \"pbs_datastore\"\n\n/* Values for Job's ATTR_accrue_type */\nenum accrue_types {\n\tJOB_INITIAL = 0,\n\tJOB_INELIGIBLE,\n\tJOB_ELIGIBLE,\n\tJOB_RUNNING,\n\tJOB_EXIT\n};\n\n#define\tACCRUE_NEW\t\"0\"\n#define\tACCRUE_INEL\t\"1\"\n#define\tACCRUE_ELIG\t\"2\"\n#define\tACCRUE_RUNN\t\"3\"\n#define\tACCRUE_EXIT\t\"4\"\n\n\n/* Default values for degraded reservation retry times boundary. 
7200 seconds\n * is 2hrs and is considered to be a reasonable amount of time to wait before\n * confirming that a reservation is indeed degraded, and that an attempt to\n * reconfirm won't be made if the reservation is to start within the cutoff\n * time.\n */\n\n#define RESV_RETRY_TIME_DEFAULT 600\n\n#define PBS_RESV_CONFIRM_FAIL \"PBS_RESV_CONFIRM_FAIL\"   /* Used to inform server that a reservation could not be confirmed */\n#define PBS_RESV_CONFIRM_SUCCESS \"PBS_RESV_CONFIRM_SUCCESS\"   /* Used to inform server that a reservation could be confirmed */\n#define DEFAULT_PARTITION \"pbs-default\" /* Default partition name set on the reservation queue when the reservation is confirmed by default scheduler */\n\n#define PBS_USE_IFF\t\t1\t/* pbs_connect() to call pbs_iff */\n\n\n/* time flag 2030-01-01 01:01:00 for ASAP reservation */\n#define PBS_RESV_FUTURE_SCH 1893488460L\n\n\n/* this is the PBS default max_concurrent_provision value */\n#define PBS_MAX_CONCURRENT_PROV 5\n\n/* this is the PBS max length of quote parse error messages */\n#define PBS_PARSE_ERR_MSG_LEN_MAX 50\n\n/* this is the PBS default jobscript_max_size value (100MB) */\n#define DFLT_JOBSCRIPT_MAX_SIZE \"100mb\"\n\n/* internal attributes */\n#define ATTR_prov_vnode\t\"prov_vnode\"\t/* job attribute */\n#define ATTR_ProvisionEnable\t\"provision_enable\"  /* server attribute */\n#define ATTR_provision_timeout\t\"provision_timeout\" /* server attribute */\n#define ATTR_node_set\t\t\"node_set\"\t    /* job attribute */\n#define ATTR_sched_preempted    \"ptime\"   /* job attribute */\n#define ATTR_restrict_res_to_release_on_suspend \"restrict_res_to_release_on_suspend\"\t    /* server attr */\n#define ATTR_resv_alter_revert\t\t\"reserve_alter_revert\"\n#define ATTR_resv_standing_revert\t\"reserve_standing_revert\"\n\n#ifndef IN_LOOPBACKNET\n#define IN_LOOPBACKNET\t127\n#endif\n#define LOCALHOST_SHORTNAME \"localhost\"\n\n#ifdef _USRDLL\t\t/* This is only for building Windows DLLs\n\t\t\t * and 
not their static libraries\n\t\t\t */\n\n#ifdef DLL_EXPORT\n#define DECLDIR __declspec(dllexport)\n#else\n#define DECLDIR __declspec(dllimport)\n#endif\n\nDECLDIR int pbs_connect_noblk(char *, int);\n\nDECLDIR int pbs_query_max_connections(void);\n\nDECLDIR int pbs_connection_set_nodelay(int);\n\nDECLDIR int pbs_geterrno(void);\n\nDECLDIR int pbs_py_spawn(int, char *, char **, char **);\n\nDECLDIR int pbs_encrypt_pwd(unsigned char *, int *, unsigned char **, size_t *, const unsigned char *, const unsigned char *);\n\nDECLDIR int pbs_decrypt_pwd(unsigned char *, int, size_t, unsigned char **, const unsigned char * , const unsigned char *);\n\nDECLDIR char *\npbs_submit_with_cred(int, struct attropl *, char *,\n\tchar *, char *, int, size_t, char *);\n\nDECLDIR char *pbs_get_tmpdir(void);\n\nDECLDIR char *pbs_strsep(char **, const char *);\n\nDECLDIR int pbs_defschreply(int, int, char *, int, char *, char *);\n\nDECLDIR int pbs_quote_parse(char *, char **, char **, int);\n\nDECLDIR const char *pbs_parse_err_msg(int);\n\nDECLDIR void pbs_prt_parse_err(char *, char *, int, int);\n\n/* This was added to pbs_ifl.h for use by AIF */\nDECLDIR int      pbs_isexecutable(char *);\nDECLDIR char    *pbs_ispbsdir(char *, char *);\nDECLDIR int      pbs_isjobid(char *);\nDECLDIR int      check_job_name(char *, int);\nDECLDIR int      chk_Jrange(char *);\nDECLDIR time_t   cvtdate(char *);\nDECLDIR int      locate_job(char *, char *, char *);\nDECLDIR int      parse_destination_id(char *, char **, char **);\nDECLDIR int      parse_at_list(char *, int, int);\nDECLDIR int      parse_equal_string(char *, char **, char **);\nDECLDIR int      parse_depend_list(char *, char **, int);\nDECLDIR int      parse_stage_list(char *);\nDECLDIR int      prepare_path(char *, char*);\nDECLDIR void     prt_job_err(char *, int, char *);\nDECLDIR int\t\t set_attr(struct attrl **, const char *, const char *);\nDECLDIR int      set_attr_resc(struct attrl **, char *, char *, char *);\nDECLDIR int      
set_resources(struct attrl **, char *, int, char **);\nDECLDIR int      cnt2server(char *);\nDECLDIR int      cnt2server_extend(char *, char *);\nDECLDIR int      get_server(char *, char *, char *);\nDECLDIR int      PBSD_ucred(int, char *, int, char *, int);\n\n#else\n\nextern int pbs_connect_noblk(const char *);\n\nextern int pbs_connection_set_nodelay(int);\n\nextern int pbs_geterrno(void);\n\nextern int pbs_py_spawn(int, char *, char **, char **);\n\nextern int pbs_encrypt_pwd(char *, int *, char **, size_t *, const unsigned char *, const unsigned char *);\n\nextern int pbs_decrypt_pwd(char *, int, size_t, char **, const unsigned char *, const unsigned char *);\n\nextern char *pbs_submit_with_cred(int, struct attropl *, char *,\n\tchar *, char *, int, size_t, char *);\n\nextern int pbs_query_max_connections(void);\n\nextern char *pbs_get_tmpdir(void);\n\nextern FILE *pbs_popen(const char *, const char *);\n\nextern int pbs_pkill(FILE *, int);\n\nextern int pbs_pclose(FILE *);\n\nextern char* pbs_strsep(char **, const char *);\n\nextern int pbs_defschreply(int, int, char *, int, char *, char *);\n\nextern int pbs_quote_parse(char *, char **, char **, int);\n\nextern const char *pbs_parse_err_msg(int);\n\nextern void pbs_prt_parse_err(char *, char *, int, int);\n\nextern int pbs_rescquery(int, char **, int, int *, int *, int *, int *);\n\nextern int pbs_rescreserve(int, char **, int, pbs_resource_t *);\n\nextern int pbs_rescrelease(int, pbs_resource_t);\n\nextern char *avail(int, char *);\n\nextern int totpool(int, int);\n\nextern int usepool(int, int);\n\nextern enum vnode_sharing place_sharing_type(char *, enum vnode_sharing);\n\n/* This was added to pbs_ifl.h for use by AIF */\nextern int \tpbs_isexecutable(char *);\nextern char \t*pbs_ispbsdir(char *, char *);\nextern int \tpbs_isjobid(char *);\nextern int      check_job_name(char *, int);\nextern int      chk_Jrange(char *);\nextern time_t   cvtdate(char 
*);\nextern int      locate_job(char *, char *, char *);\nextern int      parse_destination_id(char *, char **, char **);\nextern int      parse_at_list(char *, int, int);\nextern int      parse_equal_string(char *, char **, char **);\nextern int      parse_depend_list(char *, char **, int);\nextern int      parse_stage_list(char *);\nextern int      prepare_path(char *, char*);\nextern void     prt_job_err(char *, int, char *);\nextern int      set_attr(struct attrl **, const char *, const char *);\n#ifndef pbs_get_dataservice_usr\nextern char*    pbs_get_dataservice_usr(char *, int);\n#endif\nextern char*\tget_attr(struct attrl *, const char *, const char *);\nextern int      set_resources(struct attrl **, const char *, int, char **);\nextern int      cnt2server(char *server);\nextern int      cnt2server_extend(char *server, char *);\nextern int      get_server(char *, char *, char *);\nextern int      PBSD_ucred(int, char *, int, char *, int);\nextern char\t*encode_xml_arg_list(int, int, char **);\nextern int\tdecode_xml_arg_list(char *, char *, char **, char ***);\nextern int\tdecode_xml_arg_list_str(char *, char **);\nextern char *convert_time(char *);\nextern struct batch_status *bs_isort(struct batch_status *bs,\n\tint (*cmp_func)(struct batch_status*, struct batch_status *));\nextern struct batch_status *bs_find(struct batch_status *, const char *);\nextern void init_bstat(struct batch_status *);\n\n/* IFL function pointers */\nextern int (*pfn_pbs_asyrunjob)(int, const char *, const char *, const char *);\nextern int (*pfn_pbs_asyrunjob_ack)(int, const char *, const char *, const char *);\nextern int (*pfn_pbs_alterjob)(int, const char *, struct attrl *, const char *);\nextern int (*pfn_pbs_asyalterjob)(int, const char *, struct attrl *, const char *);\nextern int (*pfn_pbs_confirmresv)(int, const char *, const char *, unsigned long, const char *);\nextern int (*pfn_pbs_connect)(const char 
*);\nextern int (*pfn_pbs_connect_extend)(const char *, const char *);\nextern char *(*pfn_pbs_default)(void);\nextern int (*pfn_pbs_deljob)(int, const char *, const char *);\nextern struct batch_deljob_status *(*pfn_pbs_deljoblist)(int, char **, int, const char *);\nextern int (*pfn_pbs_disconnect)(int);\nextern char *(*pfn_pbs_geterrmsg)(int);\nextern int (*pfn_pbs_holdjob)(int, const char *, const char *, const char *);\nextern int (*pfn_pbs_loadconf)(int);\nextern char *(*pfn_pbs_locjob)(int, const char *, const char *);\nextern int (*pfn_pbs_manager)(int, int, int, const char *, struct attropl *, const char *);\nextern int (*pfn_pbs_movejob)(int, const char *, const char *, const char *);\nextern int (*pfn_pbs_msgjob)(int, const char *, int, const char *, const char *);\nextern int (*pfn_pbs_orderjob)(int, const char *, const char *, const char *);\nextern int (*pfn_pbs_rerunjob)(int, const char *, const char *);\nextern int (*pfn_pbs_rlsjob)(int, const char *, const char *, const char *);\nextern int (*pfn_pbs_runjob)(int, const char *, const char *, const char *);\nextern char **(*pfn_pbs_selectjob)(int, struct attropl *, const char *);\nextern int (*pfn_pbs_sigjob)(int, const char *, const char *, const char *);\nextern void (*pfn_pbs_statfree)(struct batch_status *);\nextern void (*pfn_pbs_delstatfree)(struct batch_deljob_status *);\nextern struct batch_status *(*pfn_pbs_statrsc)(int, const char *, struct attrl *, const char *);\nextern struct batch_status *(*pfn_pbs_statjob)(int, const char *, struct attrl *, const char *);\nextern struct batch_status *(*pfn_pbs_selstat)(int, struct attropl *, struct attrl *, const char *);\nextern struct batch_status *(*pfn_pbs_statque)(int, const char *, struct attrl *, const char *);\nextern struct batch_status *(*pfn_pbs_statserver)(int, struct attrl *, const char *);\nextern struct batch_status *(*pfn_pbs_statsched)(int, struct attrl *, const char *);\nextern struct batch_status *(*pfn_pbs_stathost)(int, const char 
*, struct attrl *, const char *);\nextern struct batch_status *(*pfn_pbs_statnode)(int, const char *, struct attrl *, const char *);\nextern struct batch_status *(*pfn_pbs_statvnode)(int, const char *, struct attrl *, const char *);\nextern struct batch_status *(*pfn_pbs_statresv)(int, const char *, struct attrl *, const char *);\nextern struct batch_status *(*pfn_pbs_stathook)(int, const char *, struct attrl *, const char *);\nextern struct ecl_attribute_errors * (*pfn_pbs_get_attributes_in_error)(int);\nextern char *(*pfn_pbs_submit)(int, struct attropl *, const char *, const char *, const char *);\nextern char *(*pfn_pbs_submit_resv)(int, struct attropl *, const char *);\nextern int (*pfn_pbs_delresv)(int, const char *, const char *);\nextern char *(*pfn_pbs_modify_resv)(int, const char *, struct attropl *, const char *);\nextern int (*pfn_pbs_relnodesjob)(int, const char *, const char *, const char *);\nextern int (*pfn_pbs_terminate)(int, int, const char *);\nextern preempt_job_info *(*pfn_pbs_preempt_jobs)(int, char**);\nextern int (*pfn_pbs_register_sched)(const char *, int, int);\n\n#endif /* _USRDLL */\n\nextern const char pbs_parse_err_msges[][PBS_PARSE_ERR_MSG_LEN_MAX + 1];\n\n#ifdef\t__cplusplus\n}\n#endif\n\n#endif\t/* _PBS_INTERNAL_H */\n"
  },
  {
    "path": "src/include/pbs_json.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _PBS_JSON_H\n#define _PBS_JSON_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include <stdlib.h>\n#include <stdio.h>\n\ntypedef void json_data;\n\njson_data *pbs_json_create_object();\njson_data *pbs_json_create_array();\n\nvoid pbs_json_insert_item(json_data *parent, char *key, json_data *value);\nint pbs_json_insert_string(json_data *parent, char *key, char *value);\nint pbs_json_insert_number(json_data *parent, char *key, double value);\nint pbs_json_insert_parsed(json_data *parent, char *key, char *value, int ignore_empty);\n\nint pbs_json_print(json_data *data, FILE *stream);\nvoid pbs_json_delete(json_data *data);\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _PBS_JSON_H */\n"
  },
  {
    "path": "src/include/pbs_license.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _PBS_LICENSE_H\n#define _PBS_LICENSE_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include <limits.h>\n#include \"work_task.h\"\n/* Node license types */\n\n#define ND_LIC_TYPE_locked 'l'\n#define ND_LIC_TYPE_cloud 'c'\n#define ND_LIC_locked_str \"l\"\n#define ND_LIC_cloud_str \"c\"\n\ntypedef struct {\n\tlong licenses_min;\t       /* minimum no.of licenses to be kept handy \t\t*/\n\tlong licenses_max;\t       /* maximum licenses that can be used\t\t\t*/\n\tlong licenses_linger_time;     /* time for which unused licenses can be kept\t\t*/\n\tlong licenses_checked_out;     /* licenses that are  checked out\t\t\t*/\n\tlong licenses_checkout_time;   /* time at which licenses were checked out\t\t*/\n\tlong licenses_total_needed;    /* licenses needed to license all nodes in the complex\t*/\n\tint expiry_warning_email_yday; /* expiry warning email sent on this day of the year\t*/\n} pbs_licensing_control;\n\ntypedef struct {\n\tint lu_max_hr;\t    /* max number of licenses used in the hour  */\n\tint lu_max_day;\t    /* max number of licenses used in the day   */\n\tint lu_max_month;   /* max number of licenses used in the month */\n\tint lu_max_forever; /* max number of licenses used so far\t    */\n\tint lu_day;\t    /* which day of month\t\t\t    */\n\tint lu_month;\t    /* which month\t\t\t\t    */\n} 
pbs_licenses_high_use;\n\ntypedef struct {\n\tlong licenses_global; /* licenses available at pbs_license_info   */\n\tlong licenses_local;  /* licenses that are checked out but unused */\n\tlong licenses_used;   /* licenses in use\t\t\t    */\n\tpbs_licenses_high_use licenses_high_use;\n\n} pbs_license_counts;\n\nenum node_topology_type {\n\ttt_hwloc,\n\ttt_Cray,\n\ttt_Win\n};\ntypedef enum node_topology_type ntt_t;\n\nextern pbs_list_head unlicensed_nodes_list;\n\n#define PBS_MIN_LICENSING_LICENSES 0\n#define PBS_MAX_LICENSING_LICENSES INT_MAX\n#define PBS_LIC_LINGER_TIME 31536000 /* keep extra licenses 1 year by default */\n#define PBS_LICENSE_LOCATION \\\n\t(pbs_licensing_location ? pbs_licensing_location : \"null\")\n\nextern void unset_signature(void *, char *);\nextern int release_node_lic(void *);\n\nextern void license_nodes();\nextern void init_licensing(struct work_task *ptask);\nextern void reset_license_counters(pbs_license_counts *);\nextern void remove_from_unlicensed_node_list(struct pbsnode *pnode);\n\n/* Licensing-related variables */\nextern char *pbs_licensing_location;\nextern pbs_licensing_control licensing_control;\nextern pbs_license_counts license_counts;\n#ifdef __cplusplus\n}\n#endif\n#endif /* _PBS_LICENSE_H */\n"
  },
  {
    "path": "src/include/pbs_mpp.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _PBS_MPP_H\n#define _PBS_MPP_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n/*\n *\tHeader file for defining MPP CPU states and scheduling restrictions.\n */\n\n#define MPP_MAX_CPUS_PER_NODE 16\n#define MPP_MAX_APPS_PER_CPU 16\n\ntypedef enum {\n\tmpp_node_arch_none = 0,\n\tmpp_node_arch_Cray_XT3,\n\tmpp_node_arch_Cray_X1,\n\tmpp_node_arch_Cray_X2,\n\tmpp_node_arch_max\n} mpp_node_arch_t;\n\nstatic char *mpp_node_arch_name[] = {\n\t\"NONE\",\n\t\"XT3\",\n\t\"X1\",\n\t\"X2\",\n\t\"UNKNOWN\"};\n\ntypedef enum {\n\tmpp_node_state_none = 0,\n\tmpp_node_state_avail,\n\tmpp_node_state_unavail,\n\tmpp_node_state_down,\n\tmpp_node_state_max\n} mpp_node_state_t;\n\nstatic char *mpp_node_state_name[] = {\n\t\"NONE\",\n\t\"AVAILABLE\",\n\t\"UNAVAILABLE\",\n\t\"DOWN\",\n\t\"UNKNOWN\"};\n\ntypedef enum {\n\tmpp_cpu_type_none = 0,\n\tmpp_cpu_type_x86_64,\n\tmpp_cpu_type_Cray_X1,\n\tmpp_cpu_type_Cray_X2,\n\tmpp_cpu_type_max\n} mpp_cpu_type_t;\n\nstatic char *mpp_cpu_type_name[] = {\n\t\"NONE\",\n\t\"x86_64\",\n\t\"craynv1\",\n\t\"Cray-BlackWidow\",\n\t\"UNKNOWN\"};\n\ntypedef enum {\n\tmpp_cpu_state_none = 0,\n\tmpp_cpu_state_up,\n\tmpp_cpu_state_down,\n\tmpp_cpu_state_max\n} mpp_cpu_state_t;\n\nstatic char *mpp_cpu_state_name[] = {\n\t\"NONE\",\n\t\"UP\",\n\t\"DOWN\",\n\t\"UNKNOWN\"};\n\ntypedef enum {\n\tmpp_label_type_hard = 
0,\n\tmpp_label_type_soft\n} mpp_label_type_t;\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _PBS_MPP_H */\n"
  },
  {
    "path": "src/include/pbs_nodes.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _PBS_NODES_H\n#define _PBS_NODES_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n/*\n *\tHeader file used for the node tracking routines.\n */\n\n#include \"resv_node.h\"\n#include \"resource.h\"\n#include \"job.h\"\n\n#include \"libutil.h\"\n#ifndef PBS_MOM\n#include \"pbs_db.h\"\nextern void *svr_db_conn;\n#endif\n\n#include \"pbs_array_list.h\"\n#include \"hook.h\"\n#include \"hook_func.h\"\n\n/* Attributes in the Server's vnode (old node) object */\nenum nodeattr {\n#include \"node_attr_enum.h\"\n\tND_ATR_LAST /* WARNING: Must be the highest valued enum */\n};\n\n#ifndef PBS_MAXNODENAME\n#define PBS_MAXNODENAME 79\n#endif\n\n/* Daemon info structure which are common for both mom and peer server */\nstruct daemon_info {\n\tunsigned long dmn_state;\t /* Daemon's state */\n\tint dmn_stream;\t\t\t /* TPP stream to service */\n\tunsigned long *dmn_addrs;\t /* IP addresses of host */\n\tpbs_list_head dmn_deferred_cmds; /* links to svr work_task list for TPP replies */\n};\ntypedef struct daemon_info dmn_info_t;\n\n/*\n * mominfo structure - used by both the Server and Mom\n *\tto hold contact information for an instance of a pbs_mom on a host\n */\n\nstruct machine_info {\n\tchar mi_host[PBS_MAXHOSTNAME + 1]; /* hostname where service is */\n\tunsigned int mi_port;\t\t   /* port to which service is listening */\n\tunsigned int 
mi_rmport;\t\t   /* port for service RM */\n\ttime_t mi_modtime;\t\t   /* time configuration changed */\n\tdmn_info_t *mi_dmn_info;\t   /* daemon specific data which is common for all */\n\tvoid *mi_data;\t\t\t   /* daemon dependent substructure */\n\tpbs_list_link mi_link;\t\t   /* forward/backward links */\n};\ntypedef struct machine_info mominfo_t;\n\n/*\n * The following structure is used by the Server for each Mom.\n * It is pointed to by the mi_data element in mominfo_t\n */\n\nstruct mom_svrinfo {\n\tlong msr_pcpus;\t\t\t/* number of physical cpus reported by Mom */\n\tlong msr_acpus;\t\t\t/* number of avail    cpus reported by Mom */\n\tu_Long msr_pmem;\t\t/* amount of physical mem  reported by Mom */\n\tint msr_numjobs;\t\t/* number of jobs on this node */\n\tchar *msr_arch;\t\t\t/* reported \"arch\" */\n\tchar *msr_pbs_ver;\t\t/* mom's reported \"pbs_version\" */\n\ttime_t msr_timedown;\t\t/* time Mom marked down */\n\tstruct work_task *msr_wktask;\t/* work task for requeuing jobs */\n\tint msr_numvnds;\t\t/* number of vnodes */\n\tint msr_numvslots;\t\t/* number of slots in msr_children */\n\tstruct pbsnode **msr_children;\t/* array of vnodes supported by Mom */\n\tint msr_jbinxsz;\t\t/* size of job index array */\n\tstruct job **msr_jobindx;\t/* index array of jobs on this Mom */\n\tlong msr_vnode_pool;\t\t/* the pool of vnodes that belong to this Mom */\n\tint msr_has_inventory;\t\t/* Tells whether mom is an inventory reporting mom */\n\tmom_hook_action_t **msr_action; /* pending hook copy/delete on mom */\n\tint msr_num_action;\t\t/* # of hook actions in msr_action */\n};\ntypedef struct mom_svrinfo mom_svrinfo_t;\n\nstruct vnpool_mom {\n\tlong vnpm_vnode_pool;\n\tint vnpm_nummoms;\n\tmominfo_t *vnpm_inventory_mom;\n\tmominfo_t **vnpm_moms;\n\tstruct vnpool_mom *vnpm_next;\n};\ntypedef struct vnpool_mom vnpool_mom_t;\n\n#ifdef PBS_MOM\n\nenum vnode_sharing_state { isshared = 0,\n\t\t\t   isexcl = 1 };\nenum rlplace_value { rlplace_unset = 0,\n\t\t     
rlplace_share = 1,\n\t\t     rlplace_excl = 2 };\n\nextern enum vnode_sharing_state vnss[][rlplace_excl - rlplace_unset + 1];\n\n/*\n *\tThe following information is used by pbs_mom to track per-Mom\n *\tinformation.  The mi_data member of a mominfo_t structure points to it.\n */\nstruct mom_vnodeinfo {\n\tchar *mvi_id;\t\t\t/* vnode ID */\n\tenum vnode_sharing mvi_sharing; /* declared \"sharing\" value */\n\tunsigned int mvi_memnum;\t/* memory board node ID */\n\tunsigned int mvi_ncpus;\t\t/* number of CPUs in mvi_cpulist[] */\n\tunsigned int mvi_acpus;\t\t/* of those, number of CPUs available */\n\tstruct mvi_cpus {\n\t\tunsigned int mvic_cpunum;\n#define MVIC_FREE 0x1\n#define MVIC_ASSIGNED 0x2\n#define MVIC_CPUISFREE(m, j) (((m)->mvi_cpulist[j].mvic_flags) & MVIC_FREE)\n\t\tunsigned int mvic_flags;\n\t\tjob *mvic_job; /* job this CPU is assigned */\n\t} * mvi_cpulist;       /* CPUs owned by this vnode */\n};\ntypedef struct mvi_cpus mom_mvic_t;\ntypedef struct mom_vnodeinfo mom_vninfo_t;\n\nextern enum rlplace_value getplacesharing(job *pjob);\n\n#endif /* PBS_MOM */\n\n/* The following are used by Mom to map vnodes to the parent host */\n\nstruct mom_vnode_map {\n\tchar mvm_name[PBS_MAXNODENAME + 1];\n\tchar *mvm_hostn; /* host name for MPI via PBS_NODEFILE */\n\tint mvm_notask;\n\tmominfo_t *mvm_mom;\n};\ntypedef struct mom_vnode_map momvmap_t;\n\n/* used for generation control on the Host to Vnode mapping */\nstruct mominfo_time {\n\ttime_t mit_time;\n\tint mit_gen;\n};\ntypedef struct mominfo_time mominfo_time_t;\n\nextern momvmap_t **mommap_array;\nextern int mommap_array_size;\nextern mominfo_time_t mominfo_time;\nextern vnpool_mom_t *vnode_pool_mom_list;\n\nstruct prop {\n\tchar *name;\n\tshort mark;\n\tstruct prop *next;\n};\n\nstruct jobinfo {\n\tchar *jobid;\n\tint has_cpu;\n\tsize_t mem;\n\tstruct jobinfo *next;\n};\n\nstruct resvinfo {\n\tresc_resv *resvp;\n\tstruct resvinfo *next;\n};\n\nstruct node_req {\n\tint nr_ppn; /* processes (tasks) per 
node */\n\tint nr_cpp; /* cpus per process           */\n\tint nr_np;  /* nr_np = nr_ppn * nr_cpp    */\n};\n\n/* virtual cpus - one for each resource_available.ncpus on a vnode */\nstruct pbssubn {\n\tstruct pbssubn *next;\n\tstruct jobinfo *jobs;\n\tunsigned long inuse;\n\tlong index;\n};\n\nunion ndu_ninfo {\n\tstruct {\n\t\tunsigned int __nd_lic_info : 24; /* OEM license information */\n\t\tunsigned int __nd_spare : 8;\t /* unused bits in this integer */\n\t} __ndu_bitfields;\n\tunsigned int __nd_int;\n};\n\n/*\n * Vnode structure\n */\nstruct pbsnode {\n\tchar *nd_name;\t\t/* vnode's name */\n\tmominfo_t **nd_moms;\t/* array of parent Moms */\n\tint nd_nummoms;\t\t/* number of Moms */\n\tint nd_nummslots;\t/* number of slots in nd_moms */\n\tint nd_index;\t\t/* global node index */\n\tint nd_arr_index;\t/* index of myself in the svr node array, only in mem, not db */\n\tchar *nd_hostname;\t/* ptr to hostname */\n\tstruct pbssubn *nd_psn; /* ptr to list of virt cpus */\n\tstruct resvinfo *nd_resvp;\n\tlong nd_nsn;\t\t\t /* number of VPs  */\n\tlong nd_nsnfree;\t\t /* number of VPs free */\n\tlong nd_ncpus;\t\t\t /* number of phy cpus on node */\n\tunsigned long nd_state;\t\t /* state of node */\n\tunsigned short nd_ntype;\t /* node type */\n\tstruct pbs_queue *nd_pque;\t /* queue to which it belongs */\n\tvoid *nd_lic_info;\t\t /* information set and used for licensing */\n\tint nd_added_to_unlicensed_list; /* To record if the node is added to the list of unlicensed nodes */\n\tpbs_list_link un_lic_link;\t /* Link to unlicensed list */\n\tint nd_svrflags;\t\t /* server flags */\n\tint nd_modified;\n\tattribute nd_attr[ND_ATR_LAST];\n};\ntypedef struct pbsnode pbs_node;\n\nenum warn_codes { WARN_none,\n\t\t  WARN_ngrp_init,\n\t\t  WARN_ngrp_ck,\n\t\t  WARN_ngrp };\nenum nix_flags { NIX_none,\n\t\t NIX_qnodes,\n\t\t NIX_nonconsume };\nenum part_flags { PART_refig,\n\t\t  PART_add,\n\t\t  PART_rmv };\n\n#define NDPTRBLK 50 /* extend a node ptr array by this amt 
*/\n\n/*\n * The following INUSE_* flags are used for several structures\n * (subnode.inuse, node.nd_state, and dmn_info.dmn_state).\n * The database schema stores node.nd_state as a 4 byte integer.\n * If more than 32 flag bits need to be added, the database schema will\n * need to be updated; otherwise, the excess flag bits will be lost upon server restart\n */\n#define INUSE_FREE 0x00\t\t/* Node has one or more avail VPs\t*/\n#define INUSE_OFFLINE 0x01\t/* Node was removed by administrator\t*/\n#define INUSE_DOWN 0x02\t\t/* Node is down/unresponsive \t\t*/\n#define INUSE_DELETED 0x04\t/* Node is \"deleted\"\t\t\t*/\n#define INUSE_UNRESOLVABLE 0x08 /* Node not reachable */\n#define INUSE_JOB 0x10\t\t/* VP   in use by a job (normal use)\t*/\n/* Node all VPs in use by jobs\t\t*/\n#define INUSE_STALE 0x20\t   /* Vnode not reported by Mom            */\n#define INUSE_JOBEXCL 0x40\t   /* Node is used by one job (exclusive)\t*/\n#define INUSE_BUSY 0x80\t\t   /* Node is busy (high loadave)\t\t*/\n#define INUSE_UNKNOWN 0x100\t   /* Node has not been heard from yet\t*/\n#define INUSE_NEEDS_HELLOSVR 0x200 /* Fresh hello sequence needs to be initiated */\n#define INUSE_INIT 0x400\t   /* Node getting vnode map info\t\t*/\n#define INUSE_PROV 0x800\t   /* Node is being provisioned\t\t*/\n#define INUSE_WAIT_PROV 0x1000\t   /* Node is waiting to be provisioned\t*/\n/* INUSE_WAIT_PROV is 0x1000 - this should not clash with MOM_STATE_BUSYKB\n * since INUSE_WAIT_PROV is used as part of the node_state and MOM_STATE_BUSYKB\n * is used inside mom for variable internal_state\n */\n#define INUSE_RESVEXCL 0x2000\t       /* Node is exclusive to a reservation\t*/\n#define INUSE_OFFLINE_BY_MOM 0x4000    /* Node is offlined by mom */\n#define INUSE_MARKEDDOWN 0x8000\t       /* TPP layer marked node down */\n#define INUSE_NEED_ADDRS 0x10000       /* Needs to be sent IP addrs */\n#define INUSE_MAINTENANCE 0x20000      /* Node has a job in the admin suspended state */\n#define INUSE_SLEEP 0x40000\t       
/* Node is sleeping */\n#define INUSE_NEED_CREDENTIALS 0x80000 /* Needs to be sent credentials */\n\n#define VNODE_UNAVAILABLE (INUSE_STALE | INUSE_OFFLINE | INUSE_DOWN | \\\n\t\t\t   INUSE_DELETED | INUSE_UNKNOWN | INUSE_UNRESOLVABLE | INUSE_OFFLINE_BY_MOM | INUSE_MAINTENANCE | INUSE_SLEEP)\n\n/* states that are set by the admin OR cannot be determined on the fly */\n#define INUSE_NOAUTO_MASK (INUSE_OFFLINE | INUSE_OFFLINE_BY_MOM | INUSE_MAINTENANCE | INUSE_SLEEP | INUSE_PROV | INUSE_WAIT_PROV)\n\n/* the following are used in Mom's internal state\t\t\t*/\n#define MOM_STATE_DOWN INUSE_DOWN\n#define MOM_STATE_BUSY INUSE_BUSY\n#define MOM_STATE_BUSYKB 0x1000\t      /* keyboard is busy \t\t   */\n#define MOM_STATE_INBYKB 0x2000\t      /* initial period of keyboard busy */\n#define MOM_STATE_CONF_HARVEST 0x4000 /* MOM configured to cycle-harvest */\n#define MOM_STATE_MASK 0x0fff\t      /* to mask what is sent to server  */\n\n#define FLAG_OKAY 0x01\t   /* \"ok\" to consider this node in the search */\n#define FLAG_THINKING 0x02 /* \"thinking\" to use node to satisfy specif */\n#define FLAG_CONFLICT 0x04 /* \"conflict\" temporarily  ~\"thinking\"      */\n#define FLAG_IGNORE 0x08   /* \"no use\"; reality, can't use node in spec*/\n\n/* bits both in nd_state and inuse\t*/\n#define INUSE_SUBNODE_MASK (INUSE_OFFLINE | INUSE_OFFLINE_BY_MOM | INUSE_DOWN | INUSE_JOB | INUSE_STALE |            \\\n\t\t\t    INUSE_JOBEXCL | INUSE_BUSY | INUSE_UNKNOWN | INUSE_INIT | INUSE_PROV | INUSE_WAIT_PROV | \\\n\t\t\t    INUSE_RESVEXCL | INUSE_UNRESOLVABLE | INUSE_MAINTENANCE | INUSE_SLEEP)\n\n#define INUSE_COMMON_MASK (INUSE_OFFLINE | INUSE_DOWN)\n/* state bits that go from node to subn */\n#define CONFLICT 1   /*search process must consider conflicts*/\n#define NOCONFLICT 0 /*be oblivious to conflicts in search*/\n\n/*\n * server flags (in nd_svrflags)\n */\n#define NODE_UNLICENSED 0x01 /* To record if the node is added to the list of unlicensed nodes */\n#define NODE_NEWOBJ 0x02     /* 
new node ? */\n#define NODE_ACCTED 0x04     /* resc recorded in job acct */\n\n/* operators to set the state of a vnode. Nd_State_Set is \"=\",\n * Nd_State_Or is \"|=\" and Nd_State_And is \"&=\". This is used in set_vnode_state\n */\nenum vnode_state_op {\n\tNd_State_Set,\n\tNd_State_Or,\n\tNd_State_And\n};\n\n/* To indicate whether a degraded time should be set on a reservation */\nenum vnode_degraded_op {\n\tSkip_Degraded_Time,\n\tSet_Degraded_Time,\n};\n\n/*\n * NTYPE_* values are used in \"node.nd_ntype\"\n */\n#define NTYPE_PBS 0x00 /* Node is normal node\t*/\n\n#define PBSNODE_NTYPE_MASK 0xf /* relevant ntype bits */\n\n/* tree for mapping contact info to node structure */\nstruct tree {\n\tunsigned long key1;\n\tunsigned long key2;\n\tmominfo_t *momp;\n\tstruct tree *left, *right;\n};\n\nextern void *node_attr_idx;\nextern attribute_def node_attr_def[]; /* node attributes defs */\nextern struct pbsnode **pbsndlist;    /* array of ptr to nodes  */\nextern int svr_totnodes;\t      /* number of nodes (hosts) */\nextern struct tree *ipaddrs;\nextern struct tree *streams;\nextern mominfo_t **mominfo_array;\nextern pntPBS_IP_LIST pbs_iplist;\nextern int mominfo_array_size;\nextern int mom_send_vnode_map;\nextern int svr_num_moms;\n\n/* Handlers for vnode state changes for degraded reservations */\nextern void vnode_unavailable(struct pbsnode *, int);\nextern void vnode_available(struct pbsnode *);\nextern int find_degraded_occurrence(resc_resv *, struct pbsnode *, enum vnode_degraded_op);\nextern int find_vnode_in_execvnode(char *, char *);\nextern void set_vnode_state(struct pbsnode *, unsigned long, enum vnode_state_op);\nextern struct resvinfo *find_vnode_in_resvs(struct pbsnode *, enum vnode_degraded_op);\nextern void free_rinf_list(struct resvinfo *);\nextern void degrade_offlined_nodes_reservations(void);\nextern void degrade_downed_nodes_reservations(struct work_task *);\n\nextern int mod_node_ncpus(struct pbsnode *pnode, long ncpus, int actmode);\nextern 
int initialize_pbsnode(struct pbsnode *, char *, int);\nextern void initialize_pbssubn(struct pbsnode *, struct pbssubn *, struct prop *);\nextern struct pbssubn *create_subnode(struct pbsnode *, struct pbssubn *lstsn);\nextern void effective_node_delete(struct pbsnode *);\nextern void setup_notification(void);\nextern struct pbssubn *find_subnodebyname(char *);\nextern struct pbsnode *find_nodebyname(char *);\nextern struct pbsnode *find_nodebyaddr(pbs_net_t);\nextern void free_prop_list(struct prop *);\nextern void recompute_ntype_cnts(void);\nextern int process_host_name_part(char *, svrattrl *, char **, int *);\nextern int create_pbs_node(char *, svrattrl *, int, int *, struct pbsnode **, int);\nextern int create_pbs_node2(char *, svrattrl *, int, int *, struct pbsnode **, int, int);\nextern int mgr_set_node_attr(struct pbsnode *, attribute_def *, int, svrattrl *, int, int *, void *, int);\nextern int node_queue_action(attribute *, void *, int);\nextern int node_pcpu_action(attribute *, void *, int);\nstruct prop *init_prop(char *pname);\nextern void set_node_license(void);\nextern int set_node_topology(attribute *, void *, int);\nextern void unset_node_license(struct pbsnode *);\nextern mominfo_t *tfind2(const unsigned long, const unsigned long, struct tree **);\nextern int set_node_host_name(attribute *, void *, int);\nextern int set_node_hook_action(attribute *, void *, int);\nextern int set_node_mom_port(attribute *, void *, int);\nextern mominfo_t *create_mom_entry(char *, unsigned int);\nextern mominfo_t *find_mom_entry(char *, unsigned int);\nextern void momptr_down(mominfo_t *, char *);\nextern void momptr_offline_by_mom(mominfo_t *, char *);\nextern void momptr_clear_offline_by_mom(mominfo_t *, char *);\nextern void delete_mom_entry(mominfo_t *);\nextern mominfo_t *create_svrmom_entry(char *, unsigned int, unsigned long *);\nextern void delete_svrmom_entry(mominfo_t *);\nextern int legal_vnode_char(char, int);\nextern char *parse_node_token(char *, 
int, int *, char *);\nextern int cross_link_mom_vnode(struct pbsnode *, mominfo_t *);\nextern int fix_indirectness(resource *, struct pbsnode *, int);\nextern int chk_vnode_pool(attribute *, void *, int);\nextern void free_pnode(struct pbsnode *);\nextern int save_nodes_db(int, void *);\nextern void propagate_socket_licensing(mominfo_t *);\nextern void update_jobs_on_node(char *, char *, int, int);\nextern int mcast_add(mominfo_t *, int *, bool);\nvoid stream_eof(int, int, char *);\n\nextern char *msg_daemonname;\n\n#define NODE_TOPOLOGY_TYPE_HWLOC \"hwloc\"\n#define NODE_TOPOLOGY_TYPE_CRAY \"Cray-v1:\"\n#define NODE_TOPOLOGY_TYPE_WIN \"Windows:\"\n\n#define CRAY_COMPUTE \"cray_compute\" /* vntype for a Cray compute node */\n#define CRAY_LOGIN \"cray_login\"\t    /* vntype for a Cray login node */\n\n/* Mom Job defines */\n#define JOB_ACT_REQ_REQUEUE 0\n#define JOB_ACT_REQ_DELETE 1\n#define JOB_ACT_REQ_DEALLOCATE 2\n\nextern void remove_mom_from_pool(mominfo_t *);\nextern void mcast_moms();\n\n#ifndef PBS_MOM\nextern int node_save_db(struct pbsnode *pnode);\nstruct pbsnode *node_recov_db(char *nd_name, struct pbsnode *pnode);\nextern int add_mom_to_pool(mominfo_t *);\nextern void reset_pool_inventory_mom(mominfo_t *);\nextern vnpool_mom_t *find_vnode_pool(mominfo_t *pmom);\nextern void mcast_msg(struct work_task *);\nint get_job_share_type(struct job *pjob);\n#endif\n\nextern int recover_vmap(void);\nextern void delete_momvmap_entry(momvmap_t *);\nextern momvmap_t *create_mommap_entry(char *, char *hostn, mominfo_t *, int);\nextern mominfo_t *find_mom_by_vnodename(const char *);\nextern momvmap_t *find_vmap_entry(const char *);\nextern mominfo_t *add_mom_data(const char *, void *);\nextern mominfo_t *find_mominfo(const char *);\nextern int create_vmap(void **);\nextern void destroy_vmap(void *);\nextern mominfo_t *find_vmapent_byID(void *, const char *);\nextern int add_vmapent_byID(void *, const char *, void *);\nextern int open_conn_stream(mominfo_t *);\nextern 
void close_streams(int stm, int ret);\nextern void delete_daemon_info(struct machine_info *pmi);\nextern dmn_info_t *init_daemon_info(unsigned long *pul, unsigned int port, struct machine_info *pmi);\n\nattribute *get_nattr(const struct pbsnode *pnode, int attr_idx);\nchar *get_nattr_str(const struct pbsnode *pnode, int attr_idx);\nstruct array_strings *get_nattr_arst(const struct pbsnode *pnode, int attr_idx);\npbs_list_head get_nattr_list(const struct pbsnode *pnode, int attr_idx);\nlong get_nattr_long(const struct pbsnode *pnode, int attr_idx);\nchar get_nattr_c(const struct pbsnode *pnode, int attr_idx);\nint set_nattr_generic(struct pbsnode *pnode, int attr_idx, char *val, char *rscn, enum batch_op op);\nint set_nattr_str_slim(struct pbsnode *pnode, int attr_idx, char *val, char *rscn);\nint set_nattr_l_slim(struct pbsnode *pnode, int attr_idx, long val, enum batch_op op);\nint set_nattr_b_slim(struct pbsnode *pnode, int attr_idx, long val, enum batch_op op);\nint set_nattr_c_slim(struct pbsnode *pnode, int attr_idx, char val, enum batch_op op);\nint set_nattr_short_slim(struct pbsnode *pnode, int attr_idx, short val, enum batch_op op);\nint is_nattr_set(const struct pbsnode *pnode, int attr_idx);\nvoid free_nattr(struct pbsnode *pnode, int attr_idx);\nvoid clear_nattr(struct pbsnode *pnode, int attr_idx);\nvoid set_nattr_jinfo(struct pbsnode *pnode, int attr_idx, struct pbsnode *val);\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _PBS_NODES_H */\n"
  },
  {
    "path": "src/include/pbs_python.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _PBS_PYTHON_DEF\n#define _PBS_PYTHON_DEF\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <unistd.h>\n#include <errno.h>\n#include <stdlib.h>\n#include <string.h>\n#include <pbs_ifl.h>\n#include <pbs_config.h>\n#include \"pbs_internal.h\"\n#include <log.h>\n#include \"list_link.h\"\n#include \"attribute.h\"\n\n#ifdef WIN32\n#define DIRSEP '\\\\'\n#define DIRSEP_STR \"\\\\\"\n#else\n#define DIRSEP '/'\n#define DIRSEP_STR \"/\"\n#endif\n\n#ifdef LOG_BUF_SIZE /* from log.h */\n#define STRBUF LOG_BUF_SIZE\n#else\n#define STRBUF 4096\n#endif\n\n/*\n * Header file providing externally callable routines regardless\n * of whether it is linked with the python interpreter.\n *\n * NOTE: This is the external interface to basically start/stop/run commands\n *       associated with the embedded python interpreter.\n * ====================== IMPORTANT =======================\n * This file should *NOT* depend on the Python header files;\n * use generic pointers!\n * ====================== IMPORTANT =======================\n */\n\n#define PBS_PYTHON_PROGRAM \"pbs_python\"\nstruct python_interpreter_data {\n\tint data_initialized;\t\t\t   /* data initialized */\n\tint interp_started;\t\t\t   /* status flag */\n\tchar *daemon_name;\t\t\t   /* useful for logging */\n\tchar 
local_host_name[PBS_MAXHOSTNAME + 1]; /* short host name */\n\tint pbs_python_types_loaded;\t\t   /* The PBS python types */\n\tvoid (*init_interpreter_data)(struct python_interpreter_data *);\n\tvoid (*destroy_interpreter_data)(struct python_interpreter_data *);\n};\n\nstruct python_script {\n\tint check_for_recompile;\n\tchar *path;\t      /* FULL pathname of script */\n\tvoid *py_code_obj;    /* the actual compiled code string\n\t\t\t\t\t      * type is PyCodeObject *\n\t\t\t\t\t      */\n\tvoid *global_dict;    /* this is the globals() dictionary\n\t\t\t\t\t      * type is PyObject *\n\t\t\t\t\t      */\n\tstruct stat cur_sbuf; /* last modification time */\n};\n\n/**\n *\n * @brief\n * \tThe hook_input_param_t structure contains the input request\n * \tparameters to the pbs_python_event_set() function.\n *\n * @param[in]\trq_job - maps to a struct rq_quejob batch request.\n * @param[in]\trq_postqueuejob - resultant struct rq_postqueuejob batch request values.\n * @param[in]\trq_manage - maps to a struct rq_manage batch request.\n * @param[in]\trq_modifyvnode - maps to a struct rq_modifyvnode.\n * @param[in]\trq_move - maps to a struct rq_move batch request.\n * @param[in]\trq_prov - maps to a struct prov_vnode_info.\n * @param[in]\trq_run - maps to a struct rq_runjob batch request.\n * @param[in]\trq_obit - maps to a struct rq_jobobit batch request.\n * @param[in]\tprogname - value to pbs.event().progname in an execjob_launch\n * \t\t\t\thook.\n * @param[in]\targv_list - pbs.event().argv but in list form, used by\n * \t\t\t\texecjob_launch hook.\n * @param[in]\tenv - value to pbs.event().env in an execjob_launch hook.\n * @param[in]\tjobs_list - list of jobs and their attributes/resources\n * \t\t\t\tpassed to an exechost_periodic hook.\n * @param[in]\tvns_list - list of vnodes and their attributes/resources\n * \t\t\t\tpassed to various hooks.\n * @param[in]\tvns_list_fail - list of failed vnodes and their\n *\t\t\t\tattributes/resources passed to various 
hooks.\n * @param[in]\tfailed_mom_list - list of parent moms that have been\n *\t\t\t\tseen as down.\n * @param[in]\tsucceeded_mom_list - list of parent moms that have been\n *\t\t\t\tseen as healthy.\n * @param[in]\tpid - value to pbs.event().pid in an execjob_attach hook.\n *\n */\ntypedef struct hook_input_param {\n\tvoid *rq_job;\n\tvoid *rq_postqueuejob;\n\tvoid *rq_manage;\n\tvoid *rq_modifyvnode;\n\tvoid *rq_move;\n\tvoid *rq_prov;\n\tvoid *rq_run;\n\tvoid *rq_obit;\n\tchar *progname;\n\tpbs_list_head *argv_list;\n\tchar *env;\n\tpbs_list_head *jobs_list;\n\tpbs_list_head *vns_list;\n\tpbs_list_head *resv_list;\n\tpbs_list_head *vns_list_fail;\n\tpbs_list_head *failed_mom_list;\n\tpbs_list_head *succeeded_mom_list;\n\tpid_t pid;\n} hook_input_param_t;\n\n/**\n *\n * @brief\n * \tThe hook_output_param_t structure contains the output request\n * \tparameters to be filled in by the pbs_python_event_to_request() function.\n *\n * @param[out]\trq_job - resultant struct rq_quejob batch request values.\n * @param[out]\trq_postqueuejob - resultant struct rq_postqueuejob batch request values.\n * @param[out]\trq_manage - resultant struct rq_manage batch request values.\n * @param[out]\trq_move - resultant struct rq_move batch request values.\n * @param[out]\trq_prov - resultant struct prov_vnode_info values.\n * @param[out]\trq_run - resultant struct rq_runjob batch request values.\n * @param[in]\trq_obit - maps to a struct rq_jobobit batch request.\n * @param[out]\tprogname - resultant value to pbs.event().progname\n * \t\t\t   after executing execjob_launch hook.\n * @param[out]\targv_list - resultant pbs.event().argv value after\n * \t\t\t   executing execjob_launch hook.\n * @param[out]\tenv - resultant value to pbs.event().env after executing\n * \t\t\texecjob_launch hook.\n * @param[out]\tjobs_list - list of modifications done to jobs after\n * \t\t\texecuting exechost_periodic hook.\n * @param[out]\tvns_list - list of modifications done to vnodes\n * \t\t\tafter 
executing various hooks.\n * @param[out]\tvns_list_fail - list of modifications done to failed\n * \t\t\tvnodes after executing various hooks.\n *\n */\ntypedef struct hook_output_param {\n\tvoid *rq_job;\n\tvoid *rq_postqueuejob;\n\tvoid *rq_manage;\n\tvoid *rq_move;\n\tvoid *rq_prov;\n\tvoid *rq_run;\n\tvoid *rq_obit;\n\tchar **progname;\n\tpbs_list_head *argv_list;\n\tchar **env;\n\tpbs_list_head *jobs_list;\n\tpbs_list_head *vns_list;\n\tpbs_list_head *resv_list;\n\tpbs_list_head *vns_list_fail;\n} hook_output_param_t;\n\n/* global constants */\n\n/* this is a pointer to interp data -> pbs_python_daemon_name.\n * Since some of the routines could be shared by all three daemons, this saves\n * passing the struct python_interpreter_data all over the place just to get the\n * daemon name\n */\n\nextern char *pbs_python_daemon_name; /* pbs_python_external.c */\n\n/* -- BEGIN pbs_python_external.c implementations -- */\nextern int pbs_python_ext_start_interpreter(\n\tstruct python_interpreter_data *interp_data);\nextern void pbs_python_ext_shutdown_interpreter(\n\tstruct python_interpreter_data *interp_data);\n\nextern int pbs_python_load_python_types(\n\tstruct python_interpreter_data *interp_data);\nextern void pbs_python_unload_python_types(\n\tstruct python_interpreter_data *interp_data);\n\nextern void *pbs_python_ext_namespace_init(\n\tstruct python_interpreter_data *interp_data);\n\nextern int pbs_python_check_and_compile_script(\n\tstruct python_interpreter_data *interp_data,\n\tstruct python_script *py_script);\n\nextern int pbs_python_run_code_in_namespace(\n\tstruct python_interpreter_data *interp_data,\n\tstruct python_script *script_file,\n\tint *exit_code);\n\nextern void pbs_python_ext_free_python_script(\n\tstruct python_script *py_script);\nextern void\tpbs_python_ext_free_code_obj(\n\tstruct python_script *py_script);\nextern void\tpbs_python_ext_free_global_dict(\n\tstruct python_script *py_script);\nextern int 
pbs_python_ext_alloc_python_script(\n\tconst char *script_path,\n\tstruct python_script **py_script);\n\nextern void pbs_python_ext_quick_start_interpreter(void);\nextern void pbs_python_ext_quick_shutdown_interpreter(void);\nextern int set_py_progname(void);\nextern int get_py_progname(char **);\nextern void pbs_python_clear_attributes();\n\n/* -- END pbs_python_external.c implementations -- */\n\n/* -- BEGIN PBS Server/Python implementations -- */\n\n/* For the symbolic constants below, cross-reference src/modules files */\n\n#define PY_ATTRIBUTES \"attributes\" /* list of valid PBS attributes */\n#define PY_ATTRIBUTES_READONLY \"attributes_readonly\"\n/* valid PBS attributes not */\n/* settable in a hook script */\n#define PY_ATTRIBUTES_HOOK_SET \"_attributes_hook_set\"\n/* attributes that got set in */\n/* a hook script */\n#define PY_READONLY_FLAG \"_readonly\" /* an object is read-only */\n#define PY_RERUNJOB_FLAG \"_rerun\"    /* flag some job to rerun */\n#define PY_DELETEJOB_FLAG \"_delete\"  /* flag some job to be deleted*/\n\n/* List of attributes appearing in a Python job, resv, server, queue,\t*/\n/* resource, and other PBS-related objects,  that are only defined in\t*/\n/* Python but not in PBS. 
\t \t\t\t\t\t*/\n/* This means the attributes are not settable inside a hook script.\t*/\n#define PY_PYTHON_DEFINED_ATTRIBUTES \"id resvid _name _has_value\"\n\n/* special event object attributes - in modules/pbs/v1.1 _svr_types.py */\n#define PY_EVENT_TYPE \"type\"\n#define PY_EVENT_HOOK_NAME \"hook_name\"\n#define PY_EVENT_HOOK_TYPE \"hook_type\"\n#define PY_EVENT_REQUESTOR \"requestor\"\n#define PY_EVENT_REQUESTOR_HOST \"requestor_host\"\n#define PY_EVENT_PARAM \"_param\"\n#define PY_EVENT_FREQ \"freq\"\n\n/* The event parameter keys */\n#define PY_EVENT_PARAM_JOB \"job\"\n#define PY_EVENT_PARAM_JOB_O \"job_o\"\n#define PY_EVENT_PARAM_RESV \"resv\"\n#define PY_EVENT_PARAM_RESV_O \"resv_o\"\n#define PY_EVENT_PARAM_SRC_QUEUE \"src_queue\"\n#define PY_EVENT_PARAM_VNODE \"vnode\"\n#define PY_EVENT_PARAM_VNODE_O \"vnode_o\"\n#define PY_EVENT_PARAM_VNODELIST \"vnode_list\"\n#define PY_EVENT_PARAM_VNODELIST_FAIL \"vnode_list_fail\"\n#define PY_EVENT_PARAM_JOBLIST \"job_list\"\n#define PY_EVENT_PARAM_RESVLIST \"resv_list\"\n#define PY_EVENT_PARAM_AOE \"aoe\"\n#define PY_EVENT_PARAM_PROGNAME \"progname\"\n#define PY_EVENT_PARAM_ARGLIST \"argv\"\n#define PY_EVENT_PARAM_ENV \"env\"\n#define PY_EVENT_PARAM_PID \"pid\"\n#define PY_EVENT_PARAM_MANAGEMENT \"management\"\n\n/* special job object attributes */\n#define PY_JOB_FAILED_MOM_LIST \"failed_mom_list\"\n#define PY_JOB_SUCCEEDED_MOM_LIST \"succeeded_mom_list\"\n\n/* special resource object attributes - in modules/pbs/v1.1/_base_types.py */\n\n#define PY_RESOURCE \"resc\"\n#define PY_RESOURCE_NAME \"_name\"\n#define PY_RESOURCE_HAS_VALUE \"_has_value\"\n#define PY_RESOURCE_GENERIC_VALUE \"<generic resource>\"\n\n/* descriptor-related symbols - in modules/pbs/v1.1/_base_types.py */\n#define PY_DESCRIPTOR_NAME \"_name\"\n#define PY_DESCRIPTOR_VALUE \"_value\"\n#define PY_DESCRIPTOR_VALUE_TYPE \"_value_type\"\n#define PY_DESCRIPTOR_CLASS_NAME \"_class_name\"\n#define PY_DESCRIPTOR_IS_RESOURCE \"_is_resource\"\n#define 
PY_DESCRIPTOR_RESC_ATTRIBUTE \"_resc_attribute\"\n\n/* optional value attrib of a pbs.hold_types instance */\n#define PY_OPVAL \"opval\"\n#define PY_DELVAL \"delval\"\n\n/* refers to the __sub__ method of pbs.hold_types */\n#define PY_OPVAL_SUB \"__sub__\"\n\n/* class-related - in modules/pbs/v1.1/_base_types.py */\n#define PY_CLASS_DERIVED_TYPES \"_derived_types\"\n\n/* PBS Python types - in modules/pbs/v1.1 files */\n\n#define PY_TYPE_ATTR_DESCRIPTOR \"attr_descriptor\"\n#define PY_TYPE_GENERIC \"generic_type\"\n#define PY_TYPE_SIZE \"size\"\n#define PY_TYPE_TIME \"generic_time\"\n#define PY_TYPE_ACL \"generic_acl\"\n#define PY_TYPE_BOOL \"pbs_bool\"\n#define PY_TYPE_JOB \"job\"\n#define PY_TYPE_QUEUE \"queue\"\n#define PY_TYPE_SERVER \"server\"\n#define PY_TYPE_RESV \"resv\"\n#define PY_TYPE_VNODE \"vnode\"\n#define PY_TYPE_EVENT \"event\"\n#define PY_TYPE_RESOURCE \"pbs_resource\"\n#define PY_TYPE_LIST \"pbs_list\"\n#define PY_TYPE_INT \"pbs_int\"\n#define PY_TYPE_STR \"pbs_str\"\n#define PY_TYPE_FLOAT \"pbs_float\"\n#define PY_TYPE_FLOAT2 \"float\"\n#define PY_TYPE_ENTITY \"pbs_entity\"\n#define PY_TYPE_ENV \"pbs_env\"\n#define PY_TYPE_MANAGEMENT \"management\"\n#define PY_TYPE_SERVER_ATTRIBUTE \"server_attribute\"\n\n/* PBS Python Exception errors - in modules/pbs/v1.1 files */\n#define PY_ERROR_EVENT_INCOMPATIBLE \"EventIncompatibleError\"\n#define PY_ERROR_EVENT_UNSET_ATTRIBUTE \"UnsetAttributeNameError\"\n#define PY_ERROR_BAD_ATTRIBUTE_VALUE_TYPE \"BadAttributeValueTypeError\"\n#define PY_ERROR_BAD_ATTRIBUTE_VALUE \"BadAttributeValueError\"\n#define PY_ERROR_UNSET_RESOURCE \"UnsetResourceNameError\"\n#define PY_ERROR_BAD_RESOURCE_VALUE_TYPE \"BadResourceValueTypeError\"\n#define PY_ERROR_BAD_RESOURCE_VALUE \"BadResourceValueError\"\n\n/* Special values */\n\n/* Value of an unset Job_Name attribute */\n#define JOB_NAME_UNSET_VALUE \"none\"\n#define WALLTIME_RESC \"walltime\" /* correlates with resv_duration */\n\n/* Determines if attribute being set in C or 
in a Python script */\n#define PY_MODE 1\n#define C_MODE 2\n\n/* Special Method names */\n#define PY_GETRESV_METHOD \"get_resv\"\n#define PY_GETVNODE_METHOD \"get_vnode\"\n#define PY_ITER_NEXTFUNC_METHOD \"iter_nextfunc\"\n#define PY_SIZE_TO_KBYTES_METHOD \"size_to_kbytes\"\n#define PY_MARK_VNODE_SET_METHOD \"mark_vnode_set\"\n#define PY_LOAD_RESOURCE_VALUE_METHOD \"load_resource_value\"\n#define PY_RESOURCE_STR_VALUE_METHOD \"resource_str_value\"\n#define PY_SET_C_MODE_METHOD \"set_c_mode\"\n#define PY_SET_PYTHON_MODE_METHOD \"set_python_mode\"\n#define PY_STR_TO_VNODE_STATE_METHOD \"str_to_vnode_state\"\n#define PY_STR_TO_VNODE_NTYPE_METHOD \"str_to_vnode_ntype\"\n#define PY_STR_TO_VNODE_SHARING_METHOD \"str_to_vnode_sharing\"\n#define PY_VNODE_STATE_TO_STR_METHOD \"vnode_state_to_str\"\n#define PY_VNODE_SHARING_TO_STR_METHOD \"vnode_sharing_to_str\"\n#define PY_VNODE_NTYPE_TO_STR_METHOD \"vnode_ntype_to_str\"\n#define PY_GET_PYTHON_DAEMON_NAME_METHOD \"get_python_daemon_name\"\n#define PY_GET_PBS_SERVER_NAME_METHOD \"get_pbs_server_name\"\n#define PY_GET_LOCAL_HOST_NAME_METHOD \"get_local_host_name\"\n#define PY_GET_PBS_CONF_METHOD \"get_pbs_conf\"\n#define PY_TYPE_PBS_ITER \"pbs_iter\"\n#define ITER_QUEUES \"queues\"\n#define ITER_JOBS \"jobs\"\n#define ITER_RESERVATIONS \"resvs\"\n#define ITER_VNODES \"vnodes\"\n#define PY_LOGJOBMSG_METHOD \"logjobmsg\"\n#define PY_REBOOT_HOST_METHOD \"reboot\"\n#define PY_SCHEDULER_RESTART_CYCLE_METHOD \"scheduler_restart_cycle\"\n#define PY_SET_PBS_STATOBJ_METHOD \"set_pbs_statobj\"\n#define PY_GET_SERVER_STATIC_METHOD \"get_server_static\"\n#define PY_GET_JOB_STATIC_METHOD \"get_job_static\"\n#define PY_GET_RESV_STATIC_METHOD \"get_resv_static\"\n#define PY_GET_VNODE_STATIC_METHOD \"get_vnode_static\"\n#define PY_GET_QUEUE_STATIC_METHOD \"get_queue_static\"\n#define PY_GET_SERVER_DATA_FP_METHOD \"get_server_data_fp\"\n#define PY_GET_SERVER_DATA_FILE_METHOD \"get_server_data_file\"\n#define PY_USE_STATIC_DATA_METHOD 
\"use_static_data\"\n\n/* Event parameter names */\n#define PBS_OBJ \"pbs\"\n#define PBS_REBOOT_OBJECT \"reboot\"\n#define PBS_REBOOT_CMD_OBJECT \"reboot_cmd\"\n#define GET_NODE_NAME_FUNC \"get_local_nodename()\"\n#define EVENT_OBJECT \"pbs.event()\"\n#define EVENT_JOB_OBJECT EVENT_OBJECT \".job\"\n#define EVENT_JOB_O_OBJECT EVENT_OBJECT \".job_o\"\n#define EVENT_RESV_OBJECT EVENT_OBJECT \".resv\"\n#define EVENT_SRC_QUEUE_OBJECT EVENT_OBJECT \".src_queue\"\n#define EVENT_VNODE_OBJECT EVENT_OBJECT \".vnode\"\n#define EVENT_VNODE_O_OBJECT EVENT_OBJECT \".vnode_o\"\n#define EVENT_VNODELIST_OBJECT EVENT_OBJECT \".vnode_list\"\n#define EVENT_VNODELIST_FAIL_OBJECT EVENT_OBJECT \".vnode_list_fail\"\n#define EVENT_JOBLIST_OBJECT EVENT_OBJECT \".job_list\"\n#define EVENT_AOE_OBJECT EVENT_OBJECT \".aoe\"\n#define EVENT_ACCEPT_OBJECT EVENT_OBJECT \".accept\"\n#define EVENT_REJECT_OBJECT EVENT_OBJECT \".reject\"\n#define EVENT_REJECT_MSG_OBJECT EVENT_OBJECT \".reject_msg\"\n#define EVENT_HOOK_EUSER EVENT_OBJECT \".hook_euser\"\n#define EVENT_JOB_RERUNFLAG_OBJECT EVENT_OBJECT \".job._rerun\"\n#define EVENT_JOB_DELETEFLAG_OBJECT EVENT_OBJECT \".job._delete\"\n#define EVENT_PROGNAME_OBJECT EVENT_OBJECT \".progname\"\n#define EVENT_ARGV_OBJECT EVENT_OBJECT \".argv\"\n#define EVENT_ENV_OBJECT EVENT_OBJECT \".env\"\n#define EVENT_PID_OBJECT EVENT_OBJECT \".pid\"\n#define EVENT_MANAGEMENT_OBJECT EVENT_OBJECT \".management\"\n\n/* Special Job parameters */\n#define JOB_FAILED_MOM_LIST_OBJECT EVENT_JOB_OBJECT \".\" PY_JOB_FAILED_MOM_LIST\n#define JOB_SUCCEEDED_MOM_LIST_OBJECT EVENT_JOB_OBJECT \".\" PY_JOB_SUCCEEDED_MOM_LIST\n\n/* Server parameter names */\n#define SERVER_OBJECT \"pbs.server()\"\n#define SERVER_JOB_OBJECT SERVER_OBJECT \".job\"\n#define SERVER_QUEUE_OBJECT SERVER_OBJECT \".queue\"\n#define SERVER_RESV_OBJECT SERVER_OBJECT \".resv\"\n#define SERVER_VNODE_OBJECT SERVER_OBJECT \".vnode\"\n\nextern void pbs_python_set_mode(int mode);\n\nextern int 
pbs_python_event_mark_readonly(void);\n\nextern int pbs_python_event_set(\n\tunsigned int hook_event,\n\tchar *req_user,\n\tchar *req_host,\n\thook_input_param_t *req_params,\n\tchar *perf_label);\n\nextern int pbs_python_event_to_request(unsigned int hook_event,\n\t\t\t\t       hook_output_param_t *req_params, char *perf_label, char *perf_action);\n\nextern int pbs_python_event_set_attrval(char *name, char *value);\n\nextern char *\npbs_python_event_get_attrval(char *name);\n\nextern void *pbs_python_event_get(void);\n\nextern void\npbs_python_event_accept(void);\n\nextern void\npbs_python_event_reject(char *msg);\n\nextern char *\npbs_python_event_get_reject_msg(void);\n\nextern int\npbs_python_event_get_accept_flag(void);\n\nextern void\npbs_python_reboot_host(char *cmd);\n\nextern void\npbs_python_scheduler_restart_cycle(void);\n\nextern void\npbs_python_no_scheduler_restart_cycle(void);\n\nint\npbs_python_get_scheduler_restart_cycle_flag(void);\n\nextern char *\npbs_python_get_reboot_host_cmd(void);\n\nextern int\npbs_python_get_reboot_host_flag(void);\n\nextern void\npbs_python_event_param_mod_allow(void);\n\nextern void\npbs_python_event_param_mod_disallow(void);\n\nextern int\npbs_python_event_param_get_mod_flag(void);\n\nextern void\npbs_python_set_interrupt(void);\n\nextern char *\npbs_python_event_job_getval_hookset(char *attrib_name, char *opval,\n\t\t\t\t    int opval_len, char *delval, int delval_len);\n\nextern char *\npbs_python_event_job_getval(char *attrib_name);\n\nextern char *\npbs_python_event_jobresc_getval_hookset(char *attrib_name, char *resc_name);\n\nextern int\npbs_python_event_jobresc_clear_hookset(char *attrib_name);\n\nextern char *\npbs_python_event_jobresc_getval(char *attrib_name, char *resc_name);\n\nextern int\npbs_python_has_vnode_set(void);\n\nextern void\npbs_python_do_vnode_set(void);\n\nextern void\npbs_python_set_hook_debug_input_fp(FILE *);\n\nextern FILE *\npbs_python_get_hook_debug_input_fp(void);\n\nextern 
void\npbs_python_set_hook_debug_input_file(char *);\n\nextern char *\npbs_python_get_hook_debug_input_file(void);\n\nextern void\npbs_python_set_hook_debug_output_file(char *);\n\nextern char *\npbs_python_get_hook_debug_output_file(void);\n\nextern void\npbs_python_set_hook_debug_output_fp(FILE *fp);\n\nFILE *\npbs_python_get_hook_debug_output_fp(void);\n\nextern void\npbs_python_set_hook_debug_data_fp(FILE *);\n\nextern FILE *\npbs_python_get_hook_debug_data_fp(void);\n\nextern void\npbs_python_set_hook_debug_data_file(char *);\n\nextern char *\npbs_python_get_hook_debug_data_file(void);\n\nextern void\npbs_python_set_use_static_data_value(int);\n\nextern void\npbs_python_set_server_info(pbs_list_head *);\n\nextern void\npbs_python_unset_server_info(void);\n\nextern void\npbs_python_set_server_jobs_info(pbs_list_head *, pbs_list_head *);\n\nextern void\npbs_python_unset_server_jobs_info(void);\n\nextern void\npbs_python_set_server_queues_info(pbs_list_head *, pbs_list_head *);\n\nextern void\npbs_python_unset_server_queues_info(void);\n\nextern void\npbs_python_set_server_resvs_info(pbs_list_head *, pbs_list_head *);\n\nextern void\npbs_python_unset_server_resvs_info(void);\n\nextern void\npbs_python_set_server_vnodes_info(pbs_list_head *, pbs_list_head *);\n\nextern void\npbs_python_unset_server_vnodes_info(void);\n\nextern int\nvarlist_same(char *varl1, char *varl2);\n\nextern int\npbs_python_set_os_environ(char *env_var, char *env_val);\n\nextern int\npbs_python_set_pbs_hook_config_filename(char *conf_file);\n\nextern void\nhook_input_param_init(hook_input_param_t *hook_input);\n\nextern void\nhook_output_param_init(hook_output_param_t *hook_output);\n\n/* -- END PBS Server/Python implementations -- */\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif /* _PBS_PYTHON_DEF */\n"
  },
  {
    "path": "src/include/pbs_python_private.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _PBS_PYTHON_PRIVATE_DEF\n#define _PBS_PYTHON_PRIVATE_DEF\n\n/*\n * This header file contains dependencies to be *ONLY* used by the embedded or\n * extension Python/C routines. 
These are typically found in src/lib/Libpython.\n *\n * Always include this header file within #ifdef PYTHON\n *\n * IMPORTANT:\n *   Under no circumstances should this header file be included by sources\n *   outside the src/lib/Libpython directory. Externalize any needed functionality\n *   provided here using generic pointers and place it in <pbs_python.h>.\n *\n *   The motivation for this: care has been taken so that the actual Python\n *   build environment CPPFLAGS and CFLAGS are passed only to the compiler\n *   invocations that act on the source files including this header file. There\n *   are situations where these compiler flags could break the build or emit\n *   warnings if passed to the whole build environment.\n *\n */\n\n#ifndef PY_SSIZE_T_CLEAN\n#define PY_SSIZE_T_CLEAN\n#endif\n#include <Python.h>\n#include <pbs_python.h> /* the pbs python external header file, used\n                             with or without built-in python */\n\n/* The below is the C pbs python extension module */\n#ifndef PBS_PYTHON_V1_MODULE_EXTENSION_NAME\n#define PBS_PYTHON_V1_MODULE_EXTENSION_NAME \"_pbs_v1\"\n#endif\n\n/* The below is the pure pbs.v1 package */\n#ifndef PBS_PYTHON_V1_MODULE\n#define PBS_PYTHON_V1_MODULE \"pbs.v1\"\n#endif\n\n/* this is the dictionary containing all the types for the embedded interp */\n#define PBS_PYTHON_V1_TYPES_DICTIONARY \"EXPORTED_TYPES_DICT\"\n\n/*             BEGIN CONVENIENCE LOGGING MACROS                              */\n\n/* Assumptions:\n *    log_buffer\n *    pbs_python_daemon_name\n */\n\n#define LOG_EVENT_DEBUG_MACRO(_evtype)                                              \\\n\tif (_evtype & PBSEVENT_DEBUG3)                                              \\\n\t\tlog_event(_evtype,                                                  \\\n\t\t\t  PBS_EVENTCLASS_SERVER, LOG_DEBUG, pbs_python_daemon_name, \\\n\t\t\t  log_buffer);                                              \\\n\telse                         
                                               \\\n\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_ADMIN | _evtype,               \\\n\t\t\t  PBS_EVENTCLASS_SERVER, LOG_DEBUG, pbs_python_daemon_name, \\\n\t\t\t  log_buffer);\n\n#define DEBUG_ARG1_WRAP(_evtype, fmt, a)                        \\\n\tdo {                                                    \\\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, fmt, a); \\\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';            \\\n\t\tLOG_EVENT_DEBUG_MACRO(_evtype);                 \\\n\t} while (0)\n\n#define DEBUG_ARG2_WRAP(_evtype, fmt, a, b)                        \\\n\tdo {                                                       \\\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, fmt, a, b); \\\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';               \\\n\t\tLOG_EVENT_DEBUG_MACRO(_evtype);                    \\\n\t} while (0)\n\n#define DEBUG1_ARG1(fmt, a) DEBUG_ARG1_WRAP(PBSEVENT_DEBUG, fmt, a)\n#define DEBUG2_ARG1(fmt, a) DEBUG_ARG1_WRAP(PBSEVENT_DEBUG2, fmt, a)\n#define DEBUG3_ARG1(fmt, a) DEBUG_ARG1_WRAP(PBSEVENT_DEBUG3, fmt, a)\n#define DEBUG1_ARG2(fmt, a, b) DEBUG_ARG2_WRAP(PBSEVENT_DEBUG, fmt, a, b)\n#define DEBUG2_ARG2(fmt, a, b) DEBUG_ARG2_WRAP(PBSEVENT_DEBUG2, fmt, a, b)\n#define DEBUG3_ARG2(fmt, a, b) DEBUG_ARG2_WRAP(PBSEVENT_DEBUG3, fmt, a, b)\n\n#define LOG_ERROR_ARG2(fmt, a, b)                                  \\\n\tdo {                                                       \\\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, fmt, a, b); \\\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';               \\\n\t\t(void) log_record(PBSEVENT_ERROR | PBSEVENT_FORCE, \\\n\t\t\t\t  PBS_EVENTCLASS_SERVER, LOG_ERR,  \\\n\t\t\t\t  pbs_python_daemon_name,          \\\n\t\t\t\t  log_buffer);                     \\\n\t} while (0)\n\n#define IS_PBS_PYTHON_CMD(a) (((a) != NULL) && (strcmp((a), \"pbs_python\") == 0))\n\n/*             END CONVENIENCE LOGGING MACROS                              */\n\n/*\n * All Python Types from the pbs.v1 module\n 
*/\n\n/* declarations from common_python_utils.c */\n\nextern void pbs_python_write_object_to_log(PyObject *, char *, int);\n\nextern void pbs_python_write_error_to_log(const char *);\nextern int pbs_python_modify_syspath(const char *, int);\nextern int pbs_python_dict_set_item_integral_value(PyObject *,\n\t\t\t\t\t\t   const char *,\n\t\t\t\t\t\t   const Py_ssize_t);\nextern int pbs_python_dict_set_item_string_value(PyObject *,\n\t\t\t\t\t\t const char *,\n\t\t\t\t\t\t const char *);\nextern int pbs_python_object_set_attr_string_value(PyObject *,\n\t\t\t\t\t\t   const char *,\n\t\t\t\t\t\t   const char *);\n\nextern int pbs_python_object_set_attr_integral_value(PyObject *,\n\t\t\t\t\t\t     const char *,\n\t\t\t\t\t\t     int);\n\nextern int pbs_python_object_get_attr_integral_value(PyObject *,\n\t\t\t\t\t\t     const char *);\n\nextern char *pbs_python_object_get_attr_string_value(PyObject *,\n\t\t\t\t\t\t     const char *);\n\nextern char *pbs_python_object_str(PyObject *);\n\nextern char *pbs_python_list_get_item_string_value(PyObject *, int);\n\nextern PyObject *pbs_python_import_name(const char *, const char *);\n\n/* declarations from module_pbs_v1.c */\n\nextern PyObject *pbs_v1_module_init(void);\nextern PyObject *pbs_v1_module_inittab(void);\n\n/* declarations from pbs_python_svr_internal.c */\n\nextern PyObject *\n_pbs_python_event_get_param(char *name);\n\n#endif /* _PBS_PYTHON_PRIVATE_DEF */\n"
  },
  {
    "path": "src/include/pbs_reliable.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _PBS_RELIABLE_H\n#define _PBS_RELIABLE_H\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include \"pbs_ifl.h\"\n#include \"libutil.h\"\n#include \"placementsets.h\"\n#include \"list_link.h\"\n\n#define DEFAULT_JOINJOB_ALARM 30\n#define DEFAULT_JOB_LAUNCH_DELAY 30\n/*\n *\n *  pbs_reliable.h\n *  This file contains all the definitions that are used by\n *  server/mom/hooks for supporting reliable job startup.\n *\n */\n\n/*\n * The reliable_job_node structure is used to keep track of nodes\n * representing mom hosts for reliable job startup.\n */\ntypedef struct reliable_job_node {\n\tpbs_list_link rjn_link;\n\tint prologue_hook_success;\t    /* execjob_prologue hook execution succeeded */\n\tchar rjn_host[PBS_MAXHOSTNAME + 1]; /* mom host name */\n} reliable_job_node;\n\nextern reliable_job_node *reliable_job_node_find(pbs_list_head *, char *);\nextern int reliable_job_node_add(pbs_list_head *, char *);\nextern void reliable_job_node_delete(pbs_list_head *, char *);\nextern reliable_job_node *reliable_job_node_set_prologue_hook_success(pbs_list_head *, char *);\nextern void reliable_job_node_free(pbs_list_head *);\nextern void reliable_job_node_print(char *, pbs_list_head *, int);\n\n/**\n *\n * @brief\n * \tThe relnodes_input_t structure contains the input request\n * \tparameters to the pbs_release_nodes_*() function.\n *\n * 
@param[in]\tjobid - pointer to id of the job being released\n * @param[in]\tvnodes_data - list of vnodes and their data in the system\n * @param[in]\texecvnode - job's exec_vnode value\n * @param[in]\texechost - job's exec_host value\n * @param[in]\texechost2 - job's exec_host2 value\n * @param[in]\tschedselect - job's schedselect value\n * @param[out]\tp_new_exec_vnode - holds the new exec_vnode value after release\n * @param[out]\tp_new_exec_host - holds the new exec_host value after release\n * @param[out]\tp_new_exec_host2 - holds the new exec_host2 value after release\n * @param[out]\tp_new_schedselect - holds the new schedselect value after release\n *\n */\ntypedef struct relnodes_input {\n\tchar *jobid;\n\tvoid *vnodes_data;\n\tchar *execvnode;\n\tchar *exechost;\n\tchar *exechost2;\n\tchar *schedselect;\n\tchar **p_new_exec_vnode;\n\tchar **p_new_exec_host[2];\n\tchar **p_new_schedselect;\n} relnodes_input_t;\n\n/**\n *\n * @brief\n * \tThe relnodes_input_vnodelist_t structure contains the additional\n * \tinput parameters to the pbs_release_nodes_given_nodelist() function\n *\twhen called to release a set of vnodes.\n *\n * @param[in]\tvnodelist - list of vnodes to release\n * @param[in]\tdeallocated_nodes_orig - job's current deallocated_exec_vnode value\n * @param[out]\tp_new_deallocated_execvnode - holds the new deallocated_exec_vnode after release\n */\ntypedef struct relnodes_input_vnodelist {\n\tchar *vnodelist;\n\tchar *deallocated_nodes_orig;\n\tchar **p_new_deallocated_execvnode;\n} relnodes_input_vnodelist_t;\n\n/**\n *\n * @brief\n * \tThe relnodes_input_select_t structure contains the input\n * \tparameters to the pbs_release_nodes_given_select() function\n *\twhen called to satisfy the select_str parameter.\n *\n * @param[in]\tselect_str - job's select value after nodes are released\n * @param[in]\tfailed_mom_list - list of unhealthy moms\n * @param[in]\tsucceeded_mom_list - list of healthy moms\n * @param[in]\tfailed_vnodes - list of vnodes 
assigned to the job managed by unhealthy moms\n * @param[in]\tgood_vnodes - list of vnodes assigned to the job managed by healthy moms\n */\ntypedef struct relnodes_input_select {\n\tchar *select_str;\n\tpbs_list_head *failed_mom_list;\n\tpbs_list_head *succeeded_mom_list;\n\tvnl_t **failed_vnodes;\n\tvnl_t **good_vnodes;\n} relnodes_input_select_t;\n\nextern void relnodes_input_init(relnodes_input_t *r_input);\nextern void relnodes_input_vnodelist_init(relnodes_input_vnodelist_t *r_input);\nextern void relnodes_input_select_init(relnodes_input_select_t *r_input);\nextern int pbs_release_nodes_given_select(relnodes_input_t *r_input, relnodes_input_select_t *r_input2, char *err_msg, int err_msg_sz);\n\nextern int pbs_release_nodes_given_nodelist(relnodes_input_t *r_input, relnodes_input_vnodelist_t *r_input2, char *err_msg, int err_msg_sz);\n\nextern int do_schedselect(char *, void *, void *, char **, char **);\n\nextern int prune_exec_vnode(job *pjob, char *select_str, vnl_t **failed_vnodes, vnl_t **good_vnodes, char *err_msg, int err_msg_sz);\n\n// clang-format off\n#define FREE_VNLS(vnf, vng) { \\\nvnl_free(vnf); \\\nvnf = NULL; \\\nvnl_free(vng); \\\nvng = NULL; \\\n}\n\n// clang-format on\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif /* _PBS_RELIABLE_H */\n"
  },
  {
    "path": "src/include/pbs_sched.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _PBS_SCHED_H\n#define _PBS_SCHED_H\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include \"pbs_config.h\"\n#include \"pbs_ifl.h\"\n#include \"libpbs.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"server_limits.h\"\n#include \"sched_cmds.h\"\n#include \"work_task.h\"\n#include \"net_connect.h\"\n#include \"resv_node.h\"\n#include \"queue.h\"\n#include \"batch_request.h\"\n#include \"job.h\"\n#include \"reservation.h\"\n\n#define PBS_SCHED_CYCLE_LEN_DEFAULT 1200\n\n/* Default value of preempt_queue_prio */\n#define PBS_PREEMPT_QUEUE_PRIO_DEFAULT 150\n\n#define SC_STATUS_LEN 10\n\n/*\n * Attributes for the server's sched object\n * Must be the same order as listed in sched_attr_def (master_sched_attr_def.xml)\n */\nenum sched_atr {\n#include \"sched_attr_enum.h\"\n#include \"site_sched_attr_enum.h\"\n\t/* This must be last */\n\tSCHED_ATR_LAST\n};\n\nextern void *sched_attr_idx;\nextern attribute_def sched_attr_def[];\n\ntypedef struct pbs_sched {\n\tpbs_list_link sc_link;\t\t\t\t\t      /* link to all scheds known to server */\n\tint sc_primary_conn;\t\t\t\t\t      /* primary connection to sched */\n\tint sc_secondary_conn;\t\t\t\t\t      /* secondary connection to sched */\n\tint svr_do_schedule;\t\t\t\t\t      /* next sched command which will be sent to sched 
*/\n\tint svr_do_sched_high;\t\t\t\t\t      /* next high prio sched command which will be sent to sched */\n\tpbs_net_t sc_conn_addr;\t\t\t\t\t      /* sched host address */\n\ttime_t sch_next_schedule;\t\t\t\t      /* time when to next run scheduler cycle */\n\tchar sc_name[PBS_MAXSCHEDNAME + 1];\t\t\t      /* name of this sched */\n\tstruct preempt_ordering preempt_order[PREEMPT_ORDER_MAX + 1]; /* preempt order for this sched */\n\tint sc_cycle_started;\t\t\t\t\t      /* indicates whether sched cycle is started or not, 0 - not started, 1 - started */\n\tattribute sch_attr[SCHED_ATR_LAST];\t\t\t      /* sched object's attributes  */\n\tshort newobj;\t\t\t\t\t\t      /* is this new sched obj? */\n} pbs_sched;\n\nextern pbs_sched *dflt_scheduler;\nextern pbs_list_head svr_allscheds;\nextern void set_scheduler_flag(int flag, pbs_sched *psched);\nextern int find_assoc_sched_jid(char *jid, pbs_sched **target_sched);\nextern int find_assoc_sched_pque(pbs_queue *pq, pbs_sched **target_sched);\nextern pbs_sched *find_sched_from_sock(int sock, conn_origin_t which);\nextern pbs_sched *find_sched(char *sched_name);\nextern int validate_job_formula(attribute *pattr, void *pobject, int actmode);\nextern pbs_sched *find_sched_from_partition(char *partition);\nextern int recv_sched_cycle_end(int sock);\nextern void handle_deferred_cycle_close(pbs_sched *psched);\n\nattribute *get_sched_attr(const pbs_sched *psched, int attr_idx);\nchar *get_sched_attr_str(const pbs_sched *psched, int attr_idx);\nstruct array_strings *get_sched_attr_arst(const pbs_sched *psched, int attr_idx);\npbs_list_head get_sched_attr_list(const pbs_sched *psched, int attr_idx);\nlong get_sched_attr_long(const pbs_sched *psched, int attr_idx);\nint set_sched_attr_generic(pbs_sched *psched, int attr_idx, char *val, char *rscn, enum batch_op op);\nint set_sched_attr_str_slim(pbs_sched *psched, int attr_idx, char *val, char *rscn);\nint set_sched_attr_l_slim(pbs_sched *psched, int attr_idx, long val, enum 
batch_op op);\nint set_sched_attr_b_slim(pbs_sched *psched, int attr_idx, long val, enum batch_op op);\nint set_sched_attr_c_slim(pbs_sched *psched, int attr_idx, char val, enum batch_op op);\nint is_sched_attr_set(const pbs_sched *psched, int attr_idx);\nvoid free_sched_attr(pbs_sched *psched, int attr_idx);\nvoid clear_sched_attr(pbs_sched *psched, int attr_idx);\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _PBS_SCHED_H */\n"
  },
  {
    "path": "src/include/pbs_share.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * The purpose of this file is to share information between different parts\n * of PBS.\n * An example would be to share a constant between the server and the scheduler\n */\n\n#ifndef PBS_SHARE\n#define PBS_SHARE\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include \"pbs_ifl.h\"\n\n/* Formula special case constants */\n\n#define FORMULA_FSPERC \"fairshare_perc\"\n#define FORMULA_FSPERC_DEP \"fair_share_perc\"\n#define FORMULA_TREE_USAGE \"fairshare_tree_usage\"\n#define FORMULA_FSFACTOR \"fairshare_factor\"\n#define FORMULA_QUEUE_PRIO \"queue_priority\"\n#define FORMULA_JOB_PRIO \"job_priority\"\n#define FORMULA_ELIGIBLE_TIME \"eligible_time\"\n#define FORMULA_ACCRUE_TYPE \"accrue_type\"\n\n/* Well known file to store job sorting formula */\n#define FORMULA_FILENAME \"sched_formula\"\n#define FORMULA_ATTR_PATH \"server_priv/\" FORMULA_FILENAME\n\n/* Constant to check preempt_targets for NONE */\n#define TARGET_NONE \"NONE\"\n\n/* Constants for qstat's comment field */\n\n/* comment buffer can hold max 255 chars */\n#define COMMENT_BUF_SIZE 256\n/* buffer takes only 255 chars, minus 34 chars for timespec, hence the remaining.\n * sacrificing 3 chars for ...\n */\n#define MAXCOMMENTSCOPE COMMENT_BUF_SIZE - 1 - 34\n#define MAXCOMMENTLEN COMMENT_BUF_SIZE - 1 - 37\n/* comment line has 3 spaces each at beginning and at end. 
hence, for qstat -s\n * 74 chars can be displayed and for qstat -sw 114 chars can be displayed.\n */\n#define COMMENTLENSCOPE_SHORT 74\n#define COMMENTLENSCOPE_WIDE 114\n/* if comment string is longer than length of the scope then ... is appended.\n * this reduces the actual display length by 3\n */\n#define COMMENTLEN_SHORT 71\n#define COMMENTLEN_WIDE 111\n\n/* number of digits to print after the decimal point for floats */\n#define FLOAT_NUM_DIGITS 4\n\n/* the size (in bytes) of a word.  All resources are kept in kilobytes\n * internally in the server.  If any specification is in words, it will be\n * converted into kilobytes with this constant\n */\n#define SIZEOF_WORD 8\n\n/* Default scheduler name */\n#define PBS_DFLT_SCHED_NAME \"default\"\n\n/* scheduler-attribute values (state) */\n#define SC_DOWN \"down\"\n#define SC_IDLE \"idle\"\n#define SC_SCHEDULING \"scheduling\"\n\n#define MAX_INT_LEN 10\n\n/* Values of sched attribute 'job_run_wait' */\n#define RUN_WAIT_NONE \"none\"\n#define RUN_WAIT_RUNJOB_HOOK \"runjob_hook\"\n#define RUN_WAIT_EXECJOB_HOOK \"execjob_hook\"\n\nstruct preempt_ordering {\n\tunsigned high_range; /* high end of the walltime range */\n\tunsigned low_range;  /* low end of the walltime range */\n\n\tenum preempt_method order[PREEMPT_METHOD_HIGH]; /* the order to preempt jobs */\n};\n\n#ifdef __cplusplus\n}\n#endif\n#endif\n"
  },
  {
    "path": "src/include/pbs_v1_module_common.i",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#if !defined(PBS_v1_COMMON_I_INCLUDED)\n#define PBS_v1_COMMON_I_INCLUDED 1\n\ntime_t time_now = 0;\n\n/*\n * the names of the Server:\n *    pbs_server_name - from PBS_SERVER_HOST_NAME\n *\t  server_name - from PBS_SERVER\n *\t  server_host - Set as follows:\n *\t  \t\t1. FQDN of pbs_server_name if set\n *\t  \t\t2. FQDN of server_name if set\n *\t  \t\t3. Call gethostname()\n *\n * The following is an excerpt from the EDD for SPID 4534 that explains\n * how PBS_SERVER_HOST_NAME is used:\n *\n * I.1.2.3\tSynopsis:\n * Add new optional entry in PBS Configuration whose value is the fully\n * qualified domain name (FQDN) of the host on which the PBS Server is\n * running.\n *\tI.1.2.3.1\tThis name is used by clients to contact the Server.\n *\tI.1.2.3.2\tIf PBS Failover is configured (PBS_PRIMARY and\n *\t\t\tPBS_SECONDARY in the PBS Configuration), this symbol\n *\t\t\tand its value will be ignored and the values of\n *\t\t\tPBS_PRIMARY and PBS_SECONDARY will be used as per\n *\t\t\tsection I.1.1.1.\n *\tI.1.2.3.3\tWhen PBS failover is not configured and\n *\t\t\tPBS_SERVER_HOST_NAME is specified, if the server_name\n *\t\t\tis not specified by the client or is specified and\n *\t\t\tmatches the value of PBS_SERVER, then the value of\n *\t\t\tPBS_SERVER_HOST_NAME is used as the name of the Server\n *\t\t\tto contact.\n *\tI.1.2.3.4\tNote: When 
PBS_SERVER_HOST_NAME is not specified,\n *\t\t\tthe current behavior for determining the name of the\n *\t\t\tServer to contact will still apply.\n *\tI.1.2.3.5\tThe value of the configuration variable should be a\n *\t\t\tfully qualified host name to avoid the possibility of\n *\t\t\thost name collisions (e.g. master.foo.domain.name and\n *\t\t\tmaster.bar.domain.name).\n */\nchar *pbs_server_name = NULL;\nchar server_name[PBS_MAXSERVERNAME+1] = \"\";  /* host_name[:service|port] */\nchar server_host[PBS_MAXHOSTNAME+1] = \"\";  /* host_name of this svr */\nstruct server server = {{0}};  /* the server structure */\nstruct pbsnode **pbsndlist = NULL;  /* array of ptr to nodes */\nint svr_totnodes = 0;  /* number of nodes (hosts) */\nstruct python_interpreter_data  svr_interp_data;\nint svr_delay_entry = 0;\npbs_list_head svr_queues;  /* list of queues */\npbs_list_head svr_alljobs;  /* list of all jobs in server */\npbs_list_head svr_allresvs;  /* all reservations in server */\npbs_list_head svr_allhooks;\npbs_list_head svr_queuejob_hooks;\npbs_list_head svr_postqueuejob_hooks;\npbs_list_head svr_modifyjob_hooks;\npbs_list_head svr_resvsub_hooks;\npbs_list_head svr_modifyresv_hooks;\npbs_list_head svr_movejob_hooks;\npbs_list_head svr_runjob_hooks;\npbs_list_head svr_jobobit_hooks;\npbs_list_head svr_management_hooks;\npbs_list_head svr_modifyvnode_hooks;\npbs_list_head svr_provision_hooks;\npbs_list_head svr_periodic_hooks;\npbs_list_head svr_resv_confirm_hooks;\npbs_list_head svr_resv_begin_hooks;\npbs_list_head svr_resv_end_hooks;\npbs_list_head svr_execjob_begin_hooks;\npbs_list_head svr_execjob_prologue_hooks;\npbs_list_head svr_execjob_epilogue_hooks;\npbs_list_head svr_execjob_preterm_hooks;\npbs_list_head svr_execjob_launch_hooks;\npbs_list_head svr_execjob_end_hooks;\npbs_list_head svr_exechost_periodic_hooks;\npbs_list_head svr_exechost_startup_hooks;\npbs_list_head 
svr_execjob_attach_hooks;\npbs_list_head svr_execjob_resize_hooks;\npbs_list_head svr_execjob_abort_hooks;\npbs_list_head svr_execjob_postsuspend_hooks;\npbs_list_head svr_execjob_preresume_hooks;\n\npbs_list_head task_list_immed;\npbs_list_head task_list_interleave;\npbs_list_head task_list_timed;\npbs_list_head task_list_event;\n\nchar *path_hooks = NULL;\nchar *path_hooks_workdir = NULL;\nchar *path_rescdef = NULL;\nchar *resc_in_err = NULL;\n\nvoid *job_attr_idx = NULL;\nvoid *resv_attr_idx = NULL;\nvoid *node_attr_idx = NULL;\nvoid *que_attr_idx = NULL;\nvoid *svr_attr_idx = NULL;\nvoid *sched_attr_idx = NULL;\n\n#if defined(PBS_V1_COMMON_MODULE_DEFINE_STUB_FUNCS)\n/*\n *\tThe following are a set of unused stub functions needed so that pbs_python\n *\tand the loadable pbs_v1 Python module can be linked to svr_attr_def.o,\n *\tjob_attr_def.o, node_attr_def.o, queue_attr_def.o, resv_attr_def.o\n *\n */\n#ifndef PBS_PYTHON\nPyObject *\nPyInit__pbs_ifl(void) {\n\treturn NULL;\n}\n#endif\n\nint\nnode_state(attribute *new, void *pnode, int actmode) {\n\treturn 0;\n}\n\nint\nset_resources_min_max(attribute *old, attribute *new, enum batch_op op) {\n\treturn (0);\n}\n\nvoid\nset_scheduler_flag(int flag, pbs_sched *psched) {\n\treturn;\n}\n\njob\t*\nfind_job(char *jobid) {\n\treturn NULL;\n}\n\nresc_resv *\nfind_resv(char *resvid) {\n\treturn NULL;\n}\n\npbs_queue *\nfind_queuebyname(char *qname) {\n\treturn NULL;\n}\n\nstruct pbsnode *\nfind_nodebyname(char *nname) {\n\treturn NULL;\n}\n\nvoid\nwrite_node_state(void) {\n\treturn;\n}\n\nvoid\nmgr_log_attr(char *msg, struct svrattrl *plist, int logclass,\n\t\tchar *objname, char *hookname) {\n\treturn;\n}\n\nint\nmgr_set_attr(attribute *pattr, void *pidx, attribute_def *pdef, int limit,\n\t\tsvrattrl *plist, int privil, int *bad, void *parent, int mode) {\n\treturn (0);\n}\n\nint\nsvr_chk_history_conf(void) {\n\treturn (0);\n}\n\nint\nsave_nodes_db(int flag, void *pmom) {\n\treturn 
(0);\n}\n\nvoid\nupdate_state_ct(attribute *pattr, int *ct_array, attribute_def *attr_def) {\n\treturn;\n}\n\nvoid\nupdate_license_ct() {\n\treturn;\n}\n\nint\nis_job_array(char *jobid) {\n\treturn (0);\n}\n\njob *\nfind_arrayparent(char *subjobid) {\n\treturn NULL;\n}\n\nint\nck_chkpnt(attribute *pattr, void *pobject, int mode) {\n\treturn (0);\n}\n\nint\ncred_name_okay(attribute *pattr, void *pobj, int actmode) {\n\treturn PBSE_NONE;\n}\n\nint\npoke_scheduler(attribute *attr, void *pobj, int actmode) {\n\treturn PBSE_NONE;\n}\n\nint\naction_sched_priv(attribute *pattr, void *pobj, int actmode) {\n\treturn 0;\n}\n\nint\naction_sched_log(attribute *pattr, void *pobj, int actmode) {\n\treturn 0;\n}\n\nint\naction_sched_iteration(attribute *pattr, void *pobj, int actmode) {\n\treturn 0;\n}\n\nint\naction_sched_user(attribute *pattr, void *pobj, int actmode) {\n\treturn 0;\n}\n\nint\naction_queue_partition(attribute *pattr, void *pobj, int actmode) {\n\treturn 0;\n}\n\nint\naction_sched_preempt_order(attribute *pattr, void *pobj, int actmode) {\n\treturn 0;\n}\n\nint\naction_sched_preempt_common(attribute *pattr, void *pobj, int actmode) {\n\treturn 0;\n}\n\nint\naction_reserve_retry_time(attribute *pattr, void *pobj, int actmode) {\n\treturn PBSE_NONE;\n}\n\nint\naction_reserve_retry_init(attribute *pattr, void *pobj, int actmode) {\n\treturn PBSE_NONE;\n}\n\nint\nset_rpp_retry(attribute *pattr, void *pobj, int actmode) {\n\treturn PBSE_NONE;\n}\n\nint\nset_rpp_highwater(attribute *pattr, void *pobj, int actmode) {\n\treturn PBSE_NONE;\n}\n\nint\nis_valid_resource(attribute *pattr, void *pobject, int actmode) {\n\n\treturn PBSE_NONE;\n}\n\nint\ndeflt_chunk_action(attribute *pattr, void *pobj, int mode) {\n\n\treturn 0;\n}\n\nint\naction_svr_iteration(attribute *pattr, void *pobj, int mode) {\n\treturn 0;\n}\n\nint\nset_license_location(attribute *pattr, void *pobject, int actmode) {\n\treturn (PBSE_NONE);\n}\n\nvoid\nunset_license_location(void) 
{\n\treturn;\n}\n\nint\nset_node_fail_requeue(attribute *pattr, void *pobject, int actmode) {\n\treturn (PBSE_NONE);\n}\n\nvoid\nunset_node_fail_requeue(void) {\n\treturn;\n}\n\nint\nset_resend_term_delay(attribute *pattr, void *pobject, int actmode) {\n\treturn (PBSE_NONE);\n}\n\nvoid\nunset_resend_term_delay(void) {\n\treturn;\n}\n\nint\naction_node_partition(attribute *pattr, void *pobject, int actmode) {\n\treturn (PBSE_NONE);\n}\n\nint\nset_license_min(attribute *pattr, void *pobject, int actmode) {\n\treturn (PBSE_NONE);\n}\n\nvoid\nunset_license_min(void) {\n\treturn;\n}\n\nint\nset_license_max(attribute *pattr, void *pobject, int actmode) {\n\treturn (PBSE_NONE);\n}\n\nvoid\nunset_license_max(void) {\n\treturn;\n}\n\nint\nset_license_linger(attribute *pattr, void *pobject, int actmode) {\n\n\treturn (PBSE_NONE);\n}\n\nvoid\nunset_license_linger(void) {\n\treturn;\n}\n\nvoid\nunset_job_history_enable(void) {\n\treturn;\n}\n\nint\nset_job_history_enable(attribute *pattr, void *pobject, int actmode) {\n\treturn (PBSE_NONE);\n}\n\nint\nset_job_history_duration(attribute *pattr, void *pobject, int actmode) {\n\n\treturn (PBSE_NONE);\n}\n\nvoid\nunset_job_history_duration(void) {\n\treturn;\n}\n\nint\nset_max_job_sequence_id(attribute *pattr, void *pobject, int actmode) {\n\treturn (PBSE_NONE);\n}\n\nvoid\nunset_max_job_sequence_id(void) {\n\treturn;\n}\n\nint\neligibletime_action(attribute *pattr, void *pobject, int actmode) {\n\treturn 0;\n}\n\nint\ndecode_formula(attribute *patr, char *name, char *rescn, char *val) {\n\treturn PBSE_NONE;\n}\n\nint\naction_entlim_chk(attribute *pattr, void *pobject, int actmode) {\n\treturn PBSE_NONE;\n}\n\nint\naction_entlim_ct(attribute *pattr, void *pobject, int actmode) {\n\treturn PBSE_NONE;\n}\n\nint\naction_entlim_res(attribute *pattr, void *pobject, int actmode) {\n\treturn PBSE_NONE;\n}\n\nint\ncheck_no_entlim(attribute *pattr, void *pobject, int actmode) {\n\treturn 0;\n}\n\nint\ndefault_queue_chk(attribute *pattr, 
void *pobj, int actmode) {\n\treturn (PBSE_NONE);\n}\n\nvoid\nset_vnode_state(struct pbsnode *pnode, unsigned long state_bits,\n\t\t enum vnode_state_op type) {\n\treturn;\n}\n\nint\nctcpus(char *buf, int *hascpp) {\n\treturn 0;\n}\n\nint\nvalidate_nodespec(char *str) {\n\treturn 0;\n}\n\nint\ncheck_que_enable(attribute *pattr, void *pque, int mode) {\n\treturn (0);\n}\n\nint\nset_queue_type(attribute *pattr, void *pque, int mode) {\n\treturn (0);\n}\n\nint\nmanager_oper_chk(attribute *pattr, void *pobject, int actmode) {\n\treturn (0);\n}\n\nint\nnode_comment(attribute *pattr, void *pobj, int act) {\n\treturn 0;\n}\n\nint\nnode_prov_enable_action(attribute *new, void *pobj, int act) {\n\treturn PBSE_NONE;\n}\n\nint\nset_log_events(attribute *new, void *pobj, int act) {\n\treturn PBSE_NONE;\n}\n\nint\nnode_current_aoe_action(attribute *new, void *pobj, int act) {\n\treturn PBSE_NONE;\n}\n\nint\naction_sched_host(attribute *new, void *pobj, int act)\n{\n\treturn PBSE_NONE;\n}\n\nint\naction_throughput_mode(attribute *new, void *pobj, int act)\n{\n\treturn PBSE_NONE;\n}\n\nint\naction_job_run_wait(attribute *new, void *pobj, int act)\n{\n\treturn PBSE_NONE;\n}\n\nint\naction_opt_bf_fuzzy(attribute *new, void *pobj, int act)\n{\n\treturn PBSE_NONE;\n}\n\nint\naction_sched_partition(attribute *new, void *pobj, int act)\n{\n\treturn PBSE_NONE;\n}\n\nint\naction_max_run_subjobs(attribute *pattr, void *pobject, int actmode)\n{\n\treturn 0;\n}\n\nint\ndecode_rcost(attribute *patr, char *name, char *rescn, char *val) {\n\treturn 0;\n}\n\nint\nencode_rcost(const attribute *attr, pbs_list_head *phead, char *atname,\n\t\tchar *rsname, int mode, svrattrl **rtnl) {\n\treturn (1);\n}\n\nint\nset_rcost(attribute *old, attribute *new, enum batch_op op) {\n\treturn (0);\n}\n\nvoid\nfree_rcost(attribute *pattr) {\n\treturn;\n}\n\nint\nsvr_max_conc_prov_action(attribute *new, void *pobj, int act) {\n\treturn 0;\n}\n\nint\naction_backfill_depth(attribute *pattr, void *pobj, int 
actmode) {\n\treturn PBSE_NONE;\n}\n\nint\naction_jobscript_max_size(attribute *pattr, void *pobj, int actmode) {\n\treturn PBSE_NONE;\n}\n\nint\naction_check_res_to_release(attribute *pattr, void *pobj, int actmode) {\n\treturn PBSE_NONE;\n}\n\nint\nqueuestart_action(attribute *pattr, void *pobject, int actmode) {\n\treturn 0;\n}\n\nint\nset_cred_renew_enable(attribute *pattr, void *pobj, int actmode) {\n\treturn PBSE_NONE;\n}\n\nint\nset_cred_renew_period(attribute *pattr, void *pobj, int actmode) {\n\treturn PBSE_NONE;\n}\n\nint\nset_cred_renew_cache_period(attribute *pattr, void *pobj, int actmode) {\n\treturn PBSE_NONE;\n}\n\nint\nencode_svrstate(const attribute *pattr, pbs_list_head *phead, char *atname,\n\t\tchar *rsname, int mode, svrattrl **rtnl) {\n\treturn (1);\n}\n\nint\ncomp_chkpnt(attribute *attr, attribute *with) {\n\treturn 0;\n}\n\nint\ndecode_depend(attribute *patr, char *name, char *rescn, char *val) {\n\treturn (0);\n}\n\nint\nencode_depend(const attribute *attr, pbs_list_head *phead, char *atname,\n\t\tchar *rsname, int mode, svrattrl **rtnl) {\n\treturn 0;\n}\n\nint\nset_depend(attribute *attr, attribute *new, enum batch_op op) {\n\treturn (0);\n}\n\nint\ncomp_depend(attribute *attr, attribute *with) {\n\treturn (-1);\n}\n\nvoid\nfree_depend(attribute *attr) {\n\treturn;\n}\n\nint\ndepend_on_que(attribute *pattr, void *pobj, int mode) {\n\treturn 0;\n}\n\nint\njob_set_wait(attribute *pattr, void *pjob, int mode) {\n\treturn (0);\n}\n\nint\nalter_eligibletime(attribute *pattr, void *pobject, int actmode) {\n\treturn PBSE_NONE;\n}\n\nint\nkeepfiles_action(attribute *pattr, void *pobject, int actmode) {\n\treturn PBSE_NONE;\n}\n\nint\nremovefiles_action(attribute *pattr, void *pobject, int actmode) {\n\treturn PBSE_NONE;\n}\n\nint\naction_est_start_time_freq(attribute *pattr, void *pobj, int actmode) {\n\treturn PBSE_NONE;\n}\n\nint\nsetup_arrayjob_attrs(attribute *pattr, void *pobj, int mode) {\n\treturn 
(PBSE_NONE);\n}\n\nint\nfixup_arrayindicies(attribute *pattr, void *pobj, int mode) {\n\treturn (PBSE_NONE);\n}\n\nint\ndecode_Mom_list(attribute *patr, char *name, char *rescn, char *val) {\n\treturn (0);\n}\n\nint\nnode_queue_action(attribute *pattr, void *pobj, int actmode) {\n\treturn 0;\n}\n\nint\nset_node_host_name(attribute *pattr, void *pobj, int actmode) {\n\treturn 0;\n}\n\nint\nset_node_mom_port(attribute *pattr, void *pobj, int actmode) {\n\treturn 0;\n}\n\nint\nnode_np_action(attribute *new, void *pobj, int actmode) {\n\treturn PBSE_NONE;\n}\n\nint\nnode_pcpu_action(attribute *new, void *pobj, int actmode) {\n\treturn (0);\n}\n\nchar*\nfind_aoe_from_request(resc_resv *presv) {\n\treturn NULL;\n}\n\nint\nforce_qsub_daemons_update_action(attribute *pattr, void *pobject,\n\tint actmode) {\n\treturn (PBSE_NONE);\n}\n\nint\nset_node_topology(attribute *pattr, void *pobject, int actmode) {\n\n\treturn (PBSE_NONE);\n}\n\nint\nchk_vnode_pool(attribute *pattr, void *pobject, int actmode) {\n\treturn (PBSE_NONE);\n}\n\nint\nvalidate_job_formula(attribute *pattr, void *pobject, int actmode) {\n\treturn (PBSE_NONE);\n}\n\nint\naction_clear_topjob_estimates(attribute *pattr, void *pobj, int actmode) {\n\treturn (PBSE_NONE);\n}\n#endif /* defined(PBS_V1_COMMON_MODULE_DEFINE_STUB_FUNCS) */\n\n#endif /* defined(PBS_v1_COMMON_I_INCLUDED) */\n"
  },
  {
    "path": "src/include/pbs_version.h.in",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _PBS_VERSION_H\n#define _PBS_VERSION_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n/*\n * The version of this file and the PBS version have no simple correlation.\n *\n */\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\n#ifdef WIN32\n#define PBS_BUILD \"mach=WIN32:security=:configure_args=\"\n#define PBS_VERSION \"@PBS_WIN_VERSION@\"\n#else\n#include <pbs_config.h>\n\n#ifndef PBS_BUILD\n#define PBS_BUILD \"mach=N/A:security=N/A:configure_args=N/A\"\n#endif\n\n#ifndef PBS_VERSION\n#define PBS_VERSION \"@PBS_VERSION@\"\n#endif /* PBS_VERSION */\n\n#endif\n\n#define PRINT_VERSION_AND_EXIT(argc, argv) if (argc == 2 && strcasecmp(argv[1], \"--version\") == 0) { fprintf(stdout, \"pbs_version = %s\\n\", PBS_VERSION); exit(0); }\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _PBS_VERSION_H */\n"
  },
  {
    "path": "src/include/placementsets.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tplacementsets.h\n *\n * @brief\n *\tManage vnodes and their associated attributes\n */\n\n#ifndef _PBS_PLACEMENTSETS_H\n#define _PBS_PLACEMENTSETS_H\n\n#include <sys/types.h>\n#include <stdio.h>\n#include \"pbs_idx.h\"\n\n/*\n *\tThis structure is used to describe a dynamically-sized list, one which\n *\tgrows when needed.\n */\ntypedef struct dynlist {\n\tunsigned long dl_nelem; /* number of elements in dl_list[] */\n\tunsigned long dl_used;\t/* of which this many are used */\n\tunsigned long dl_cur;\t/* the one currently being filled in */\n\tvoid *dl_list;\n} dl_t;\n\n/**\n * @brief\n * @verbatim\n *\tThe list of vnodes and their associated attributes is tracked and\n *\tmaintained using a list that looks like this:\n *\n *\t +------------------------------+\t\t\tvnl_t\n *\t |  \tfile mod time\t        |\n *\t +------------------------------+\n *\t |\tindex tree\t\t|\n *\t +------------------------------+\n *\t |\tsize of vnode list  \t|\n *\t |\tnumber of used entries\t|\n *\t |\tcurrent entry index \t|\n *\t +------------------------------+\n *\t |\tpointer to list head |\t|\n *\t +---------------------------|--+\n *\t\t\t\t     |\n *\t\t\t\t    \\ /\n *\t   +---------------------------------------+ \t \tvnal_t\n *\t   |\tvnode ID\t\t     | ... 
|\n *\t   +---------------------------------------+\n *\t   |\tsize of vnode attribute list | \t   |\n *\t   |\tnumber of used entries\t     | ... |\n *\t   |\tcurrent entry index\t     | \t   |\n *\t   +---------------------------------------+\n *\t   |\tpointer to list head | \t     | ... |\n *\t   +-------------------------|-------------+\n *\t\t\t\t     |\n *\t\t\t\t    \\ /\n *\t      \t\t     +-------------------------+\tvna_t\n *\t\t\t     | \t attribute name  | ... |\n *\t\t\t     |------------------ |-----+\n *\t\t\t     | \t attribute value | ... |\n *\t\t\t     |------------------ |-----+\n *\t\t\t     | \t attribute type  | ... |    in V4 of message\n *\t\t\t     |------------------ |-----+\n *\t\t\t     | 0 (will be flags) | ... |    in V4 of message\n *\t\t\t     +-------------------------+\n * @endverbatim\n */\ntypedef struct vnode_list {\n\ttime_t vnl_modtime; /* last mod time for these data */\n\tvoid *vnl_ix;\t    /* index with vnode name as key */\n\tdl_t vnl_dl;\t    /* current state of vnal_t list */\n#define vnl_nelem vnl_dl.dl_nelem\n#define vnl_used vnl_dl.dl_used\n#define vnl_cur vnl_dl.dl_cur\n\t/* vnl_list is a list of vnal_t structures */\n#define vnl_list vnl_dl.dl_list\n} vnl_t;\n#define VNL_NODENUM(vnlp, n) (&((vnal_t *) ((vnlp)->vnl_list))[n])\n#define CURVNLNODE(vnlp) VNL_NODENUM(vnlp, (vnlp)->vnl_cur)\n\ntypedef struct vnode_attrlist {\n\tchar *vnal_id; /* unique ID for this vnode */\n\tdl_t vnal_dl;  /* current state of vna_t list */\n#define vnal_nelem vnal_dl.dl_nelem\n#define vnal_used vnal_dl.dl_used\n#define vnal_cur vnal_dl.dl_cur\n\t/* vnal_list is a list of vna_t structures */\n#define vnal_list vnal_dl.dl_list\n} vnal_t;\n#define VNAL_NODENUM(vnrlp, n) (&((vna_t *) ((vnrlp)->vnal_list))[n])\n#define CURVNRLNODE(vnrlp) VNAL_NODENUM(vnrlp, (vnrlp)->vnal_cur)\n\ntypedef struct vnode_attr {\n\tchar *vna_name; /* attribute[.resource] name */\n\tchar *vna_val;\t/* attribute/resource  value */\n\tint vna_type;\t/* 
attribute/resource  data type */\n\tint vna_flag;\t/* attribute/resource  flags */\n} vna_t;\n\n#define PS_DIS_V1 1\n#define PS_DIS_V2 2\n#define PS_DIS_V3 3\n#define PS_DIS_V4 4\n#define PS_DIS_CURVERSION PS_DIS_V4\n\n/**\n * @brief\n *\tAn attribute named VNATTR_PNAMES attached to a ``special'' vnode\n *\twill have as its value the list of placement set types.\n */\n#define VNATTR_PNAMES \"pnames\"\n\n/**\n * @brief\n *\tAn attribute named VNATTR_HOOK_REQUESTOR attached to a ``special'' vnode\n *\twill have as its value the requestor (user@host) who is\n *\tmaking a hook request to update vnode information.\n */\n#define VNATTR_HOOK_REQUESTOR \"requestor\"\n\n/**\n * @brief\n *\tAn attribute named VNATTR_OFFLINE_VNODES attached to a `special' vnode\n *\twill have a \"1.<hook_name>\" value to mean: a hook named <hook_name>\n *\tinstructed the server to 'offline_by_mom' all the vnodes managed by the mom owning\n *\tthis special vnode.\n *\tA value of \"0.<hook_name>\" means a hook named <hook_name> instructed\n *\tthe server to 'clear offline_by_mom' states of all the vnodes managed by the mom\n *\towning the special vnode.\n */\n#define VNATTR_HOOK_OFFLINE_VNODES \"offline_vnodes\"\n\n/**\n * @brief\n *\tAn attribute named VNATTR_SCHEDULER_RESTART_CYCLE\n *\tattached to a `special' vnode\n *\twill have a \"1,<hook_name>\" value to mean a hook named\n *\t<hook_name> has requested that a message be sent to the\n *\tscheduler to restart its scheduling cycle.\n */\n#define VNATTR_HOOK_SCHEDULER_RESTART_CYCLE \"scheduler_restart_cycle\"\n\ntypedef int(callfunc_t)(char *, char *, char *);\n\n/**\n * @brief\tadd attribute to vnode\n *\n * @retval\t0\tsuccess\n *\n * @retval\t-1\tfailure\n */\nextern int vn_addvnr(vnl_t *, char *, char *, char *, int, int, callfunc_t);\n\n/**\n * @return\n *\tan attribute in a vnal_t\n *\n * @retval\tNULL\tattribute does not exist\n */\nextern char *attr_exist(vnal_t *, char *);\n\n/**\n * @return\tvnal_t\tpointer to vnode\n * 
@retval\tNULL\tnode does not exist\n */\nextern vnal_t *vn_vnode(vnl_t *, char *);\n\n/**\n * @return\n * the value of the attribute\n *\n * @retval\tNULL\tattribute does not exist\n */\nextern char *vn_exist(vnl_t *, char *, char *);\n\n/**\n * @brief\tallocate new vnode list\n *\n * @return\n *\ta pointer to an empty vnode list\n *\n * @retval\tNULL\terror\n *\n * @par Side-effects\n *\tSpace allocated for vnode list should be freed with vnl_free().\n */\nextern vnl_t *vnl_alloc(vnl_t **);\n\n/**\n * @brief\tfree vnode list\n */\nextern void vnl_free(vnl_t *);\n\n/**\n * @brief\tmerge new vnode list into existing list\n *\n * @return\n *\tthe existing vnode list\n *\n * @retval\tNULL\terror\n */\nextern vnl_t *vn_merge(vnl_t *, vnl_t *, callfunc_t);\n\n/**\n * @brief\tmerge new vnode list into existing list\n * \t\tfor those vnodes with certain attribute names.\n *\n * @return\n *\tthe existing vnode list\n *\n * @retval\tNULL\terror\n */\nextern vnl_t *vn_merge2(vnl_t *, vnl_t *, char **, callfunc_t);\n\n/**\n * @brief\tparse a file containing vnode information into a vnode list\n *\n * @return\n *\ta pointer to the resulting vnode list\n *\n * @retval\tNULL\terror\n *\n * @par Side-effects\n *\tSpace allocated by the parse functions should be freed with vnl_free().\n */\nextern vnl_t *vn_parse(const char *, callfunc_t);\n\n/**\n * @brief\tparse an already opened stream containing vnode information\n *\n * @return\n *\ta pointer to the resulting vnode list\n *\n * @retval\tNULL\terror\n *\n * @par Side-effects\n *\tSpace allocated by the parse functions should be freed with vnl_free().\n */\nextern vnl_t *vn_parse_stream(FILE *, callfunc_t);\n\n/**\n * @brief\tread a vnode list off the wire\n *\n * @return\n *\ta pointer to the resulting vnode list\n *\n * @retval\tNULL\terror\n *\n * @par Side-effects\n *\tSpace allocated for vnode list should be freed with vnl_free().\n */\nextern vnl_t *vn_decode_DIS(int, int *);\n\n/**\n * @brief\tsend a vnode list over the 
network\n *\n * @return\n *\ta DIS error code\n *\n * @retval\tDIS_SUCCESS\tsuccess\n */\nextern int vn_encode_DIS(int, vnl_t *);\n#endif /* _PBS_PLACEMENTSETS_H */\n"
  },
  {
    "path": "src/include/port_forwarding.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _PORT_FORWARDING_H\n#define _PORT_FORWARDING_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n/*Defines used by port_forwarding.c*/\n/* Max size of buffer to store data*/\n#define PF_BUF_SIZE 8192\n\n/* Limits the number of simultaneous X applications that a single job\n can run in the background to 24. One socket fd is used for storing\n the X11 listening socket fd and 2 socket fds are used whenever an\n X application is started. 
Hence (24*2+1)=49 fds will be used when\n an attempt is made to start 24 X applications in the background. */\n#define NUM_SOCKS 50\n\n/* Attempt to bind to an available port in the range of 6000+X11OFFSET to\n 6000+X11OFFSET+MAX_DISPLAYS */\n#define MAX_DISPLAYS 500\n#define X11OFFSET 50\n\n#define X_PORT 6000\n\n/* derived from XF4/xc/lib/dps/Xlibnet.h */\n#ifndef X_UNIX_PATH\n#define X_UNIX_PATH \"/tmp/.X11-unix/X%u\"\n#endif /* X_UNIX_PATH */\n\n#ifndef NI_MAXSERV\n#define NI_MAXSERV 32\n#endif /* !NI_MAXSERV */\n\n#define QSUB_SIDE 1\n#define EXEC_HOST_SIDE 0\n\n/*\n * Structure which maintains the relationship between the producer/consumer\n * sockets and the length of the data read/written.\n */\nstruct pfwdsock {\n\tint sock;\n\tint listening;\n\tint remotesock;\n\tint bufavail;\n\tint bufwritten;\n\tint active;\n\tint peer;\n\tchar buff[PF_BUF_SIZE];\n};\n/*Functions available in port_forwarding.h*/\nvoid port_forwarder(struct pfwdsock *, int (*connfunc)(char *phost, long pport),\n\t\t    char *, int, int inter_read_sock, int (*readfunc)(int), void (*logfunc)(char *),\n\t\t    int is_qsub_side, char *auth_method, char *encrypt_method, char *jobid);\nint connect_local_xsocket(u_int);\nint x11_connect_display(char *, long);\nint set_nonblocking(int);\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _PORT_FORWARDING_H */\n
  },
  {
    "path": "src/include/portability.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _PORTABILITY_H\n#define _PORTABILITY_H\n\n#define closesocket(X) close(X)\n#define initsocketlib() 0\n#define SOCK_ERRNO errno\n\n#define NULL_DEVICE \"/dev/null\"\n\n#undef DLLEXPORT\n#define DLLEXPORT\n\n#define dlerror_reset() dlerror()\n#define SHAREDLIB_EXT \"so\"\n#define fix_path(char, int)\n#define get_uncpath(char)\n#define critical_section()\n\n#ifdef PBS_MOM\n#define TRAILING_CHAR '/'\n#define verify_dir(dir_val, isdir, sticky, disallow, fullpath) tmp_file_sec(dir_val, isdir, sticky, disallow, fullpath)\n#define FULLPATH 1\n#define process_string(str, tok, len) wtokcpy(str, tok, len)\n\n/* Check and skip if there are any special trailing character */\n#define skip_trailing_spcl_char(line, char_to_skip) \\\n\t{                                           \\\n\t}\n\n/* Check whether character is special allowed character */\n#define check_spl_ch(check_char) 1\n#endif\n\n#endif\n"
  },
  {
    "path": "src/include/provision.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _PROVISION_H\n#define _PROVISION_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n/*\n * provision.h - header file for maintaining provisioning related definitions\n *\n * These are linked into the server structure.  
Entries are added or\n * updated upon the receipt of Track Provision Requests and are used to\n * satisfy Locate Provision requests.\n *\n * The main data is kept in the form of the track batch request so\n * that copying is easy.\n *\n * Other required header files:\n *\t\"server_limits.h\"\n */\n\n#ifdef WIN32\ntypedef HANDLE prov_pid;\n#else\ntypedef pid_t prov_pid;\n#endif /* WIN32 */\n\nextern void prov_track_save(void);\n\n/* Provisioning functions and structures */\n/**\n * @brief\n *\tProvisioning information for a vnode that is being provisioned.\n */\n\nstruct prov_vnode_info {\n\tpbs_list_link al_link;\n\tchar *pvnfo_vnode;\n\tchar *pvnfo_aoe_req;\n\tchar pvnfo_jobid[PBS_MAXSVRJOBID + 1]; /* job id */\n\tstruct work_task *ptask_defer;\n\tstruct work_task *ptask_timed;\n};\n\n/**\n * @brief\n *\tTracking entry for a provisioning operation in progress.\n */\n\nstruct prov_tracking {\n\ttime_t pvtk_mtime; /* time this entry modified */\n\tprov_pid pvtk_pid;\n\tchar *pvtk_vnode;\n\tchar *pvtk_aoe_req;\n\tstruct prov_vnode_info *prov_vnode_info;\n};\n\ntypedef char (*exec_vnode_listtype)[PBS_MAXHOSTNAME + 1]; /* typedef to pointer to an array */\n\nextern int check_and_enqueue_provisioning(job *, int *);\n\nextern void do_provisioning(struct work_task *wtask);\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _PROVISION_H */\n"
  },
  {
    "path": "src/include/qmgr.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/* $I$ */\n\n/* symbolic constants */\n#define ALL_SERVERS -1 /* all servers for connect_servers() */\n/* server name used for the default PBS server (\"\") */\n#define DEFAULT_SERVER \"default\"\n/* server name used for all the active servers */\n#define ACTIVE_SERVER \"active\"\n/* max word length in the request */\n#define MAX_REQ_WORD_LEN 10240\n\n/* there can be three words before the attribute list\n * command object name <attribute list> */\n#define MAX_REQ_WORDS 3\n#define IND_CMD 0\n#define IND_OBJ 1\n#define IND_NAME 2\n#define IND_FIRST IND_CMD\n#define IND_LAST IND_NAME\n\n/* Macros */\n\n/* This macro will determine if the char it is passed is a qmgr operator. */\n#define Oper(x) ((*(x) == '=') ||                      \\\n\t\t (*(x) == '+' && *((x) + 1) == '=') || \\\n\t\t (*(x) == '-' && *((x) + 1) == '='))\n\n/* This macro will determine if the char it is passed is white space. */\n#define White(x) (isspace((int) (x)))\n\n/* This macro will determine if the char is the end of a line. 
*/\n#define EOL(x) ((unsigned long) (x) == (unsigned long) '\\0')\n\n/* This macro will allocate memory for a character string */\n#define Mstring(x, y)                             \\\n\tif ((x = (char *) malloc(y)) == NULL) {   \\\n\t\tpstderr(\"qmgr: Out of memory\\n\"); \\\n\t\tclean_up_and_exit(5);             \\\n\t}\n/* This macro will duplicate a string */\n#define Mstrdup(x, y)                             \\\n\tif ((x = strdup(y)) == NULL) {            \\\n\t\tpstderr(\"qmgr: Out of memory\\n\"); \\\n\t\tclean_up_and_exit(5);             \\\n\t}\n/* This macro will allocate memory for some fixed size object */\n#define Mstruct(x, y)                                \\\n\tif ((x = (y *) malloc(sizeof(y))) == NULL) { \\\n\t\tpstderr(\"qmgr: Out of memory\\n\");    \\\n\t\tclean_up_and_exit(5);                \\\n\t}\n/* server name: \"\" is the default server and NULL is all active servers */\n#define Svrname(x) (((x) == NULL) ? ACTIVE_SERVER : ((strlen((x)->s_name)) ? (x)->s_name : DEFAULT_SERVER))\n/*\n *\n *\tPSTDERR1 - print error message to standard error with one argument.\n *\t\t   Message will not be printed if \"-z\" option was specified\n *\n *\t  string - format string to fprintf\n *\t  arg    - argument to be printed\n *\n */\n#define PSTDERR1(fmt, parm) \\\n\tif (!zopt)          \\\n\t\tfprintf(stderr, fmt, parm);\n/* print an input line and then a caret under where the error has occurred */\n#define CaretErr(x, y)         \\\n\tPSTDERR1(\"%s\\n\", (x)); \\\n\tblanks((y));           \\\n\tpstderr(\"^\\n\");\n#define CLEAN_UP_REQ(x)                               \\\n\t{                                             \\\n\t\tint i;                                \\\n\t\tfor (i = 0; i < MAX_REQ_WORDS; i++) { \\\n\t\t\tfree(x[i]);                   \\\n\t\t}                                     \\\n\t\tfree(x);                              \\\n\t}\n\n#define QMGR_HIST_SIZE 500 /* size of the qmgr history area */\n\n/* structures */\n\n/* this struct is for the 
open servers */\nstruct server {\n\tchar *s_name;\t\t    /* name of server */\n\tint s_connect;\t\t    /* PBS connection descriptor to server */\n\tint ref;\t\t    /* number of references to server */\n\tstruct batch_status *s_rsc; /* ptr to status of resources on server */\n\tstruct server *next;\t    /* next server in list */\n};\n\n/* objname - name of an object with a possible server associated with it\n * i.e. batch@server1   -> queue batch at server server1\n */\nstruct objname {\n\tint obj_type;\t      /* type of object */\n\tchar *obj_name;\t      /* name of object */\n\tchar *svr_name;\t      /* name of server associated with object */\n\tstruct server *svr;   /* short cut to server associated with object */\n\tstruct objname *next; /* next object in list */\n};\n\n/* prototypes */\nstruct objname *commalist2objname(char *, int);\nstruct server *find_server(char *);\nstruct server *make_connection(char *);\nstruct server *new_server();\nstruct objname *new_objname();\nstruct objname *strings2objname(char **, int, int);\nstruct objname *default_server_name();\nstruct objname *temp_objname(char *, char *, struct server *);\nint parse_request(char *, char ***);\nvoid clean_up_and_exit(int);\nvoid freeattropl(struct attropl *);\nvoid pstderr(const char *);\nvoid pstderr_big(char *, char *, char *);\nvoid free_objname_list(struct objname *);\nvoid free_server(struct server *);\nvoid free_objname(struct objname *);\nvoid close_non_ref_servers();\nint connect_servers(struct objname *, int);\nint set_active(int, struct objname *);\nint get_request(char **);\nint parse(char *, int *, int *, char **, struct attropl **);\nint execute(int, int, int, char *, struct attropl *);\nint is_valid_object(struct objname *, int);\n\n/* help messages */\n\n#define HELP_DEFAULT                                                                                           \\\n\t\"General syntax: command [object][@server] [name attribute[.resource] OP value]\\n\"                     
\\\n\t\"To get help on any topic or subtopic, type help <topic>\\n\"                                            \\\n\t\"Help is available on all commands and topics.\\n\"                                                      \\\n\t\"Available commands: \\n\"                                                                               \\\n\t\"active                 The active command will set the active objects.\\n\"                             \\\n\t\"create                 The create command will create the specified object on the PBS server(s).\\n\"   \\\n\t\"delete                 The delete command will delete the specified object from the PBS server(s).\\n\" \\\n\t\"set                    The set command sets the value for an attribute on the specified object.\\n\"    \\\n\t\"unset                  The unset command will unset an attribute on the specified object.\\n\"          \\\n\t\"list                   The list command will list out all the attributes for the specified object.\\n\" \\\n\t\"print                  The print command's output can be fed back into qmgr as input.\\n\"              \\\n\t\"import                 This takes hook script contents.\\n\"                                            \\\n\t\"export                 Dumps output of hook script into.\\n\"                                           \\\n\t\"quit                   The quit command will exit from qmgr.\\n\"                                       \\\n\t\"history                The history command will show qmgr command history.\\n\"                         \\\n\t\"Other topics: \\n\"                                                                                     \\\n\t\"attributes             type help or ? <attributes>.\\n\"                                                \\\n\t\"operators              type help or ? <operators>.\\n\"                                                 \\\n\t\"names                  type help or ? 
<names>.\\n\"                                                     \\\n\t\"values                 type help or ? <values>.\\n\"\n\n#define HELP_ACTIVE                                                                      \\\n\t\"Syntax active object [name [,name...]]\\n\"                                       \\\n\t\"Objects can be \\\"server\\\" \\\"queue\\\" \\\"resource\\\" or \\\"node\\\"\\n\"                 \\\n\t\"The active command will set the active objects.  The active objects are used\\n\" \\\n\t\"when no name is specified for different commands.\\n\"                            \\\n\t\"If no server is specified for nodes or queues, the command will be sent\\n\"      \\\n\t\"to all active servers.\\n\"                                                       \\\n\t\"Examples:\\n\"                                                                    \\\n\t\"active queue q1,batch@server1\\n\"                                                \\\n\t\"active server server2,server3\\n\"                                                \\\n\t\"Now if the following command is typed:\\n\"                                       \\\n\t\"set queue max_running = 10\\n\"                                                   \\\n\t\"The attribute max_running will be set to ten on the batch queue on server1\\n\"   \\\n\t\"and the q1 queue on server2 and server3.\\n\\n\"                                   \\\n\t\"active server s1, s2\\n\"                                                         \\\n\t\"active node @active\\n\"                                                          \\\n\t\"This would specify all nodes at all servers.\\n\\n\"                               \\\n\t\"active queue @s2\\n\"                                                             \\\n\t\"This would specify all queues at server s2\\n\"\n\n#define HELP_CREATE                                                                        \\\n\t\"Syntax: create object name[,name...] 
\\n\"                                          \\\n\t\"Objects can be \\\"queue\\\", \\\"node\\\", \\\"resource\\\" or \\\"hook\\\"\\n\"                   \\\n\t\"The create command will create the specified object on the PBS server(s).\\n\"      \\\n\t\"For multiple names, use a comma seperated list with no intervening whitespace.\\n\" \\\n\t\"A hook object can only be created by the Administrator, and only on the \\n\"       \\\n\t\"host on which the server runs.\\n\"                                                 \\\n\t\"\\nExamples:\\n\"                                                                    \\\n\t\"create queue q1,q2,q3\\n\"                                                          \\\n\t\"create resource r1,r2,r3 type=long,flag=nh\\n\"\n\n#define HELP_DELETE                                                                     \\\n\t\"Syntax: delete object name[,name...]\\n\"                                        \\\n\t\"Objects can be \\\"queue\\\", \\\"node\\\", \\\"resource\\\" or \\\"hook\\\"\\n\"                \\\n\t\"The delete command will delete the specified object from the PBS server(s).\\n\" \\\n\t\"A hook object can only be deleted by the Administrator, and only on the \\n\"    \\\n\t\"host on which the server runs.\\n\"                                              \\\n\t\"\\nExamples:\\n\"                                                                 \\\n\t\"delete queue q1\\n\"\n\n#define HELP_SET                                                                                  \\\n\t\"Syntax: set object [name,][,name...] 
attribute[.resource] OP value\\n\"                    \\\n\t\"Objects can be \\\"server\\\", \\\"queue\\\", \\\"node\\\", \\\"hook\\\", \\\"resource\\\" or \\\"pbshook\\\"\\n\" \\\n\t\"The \\\"set\\\" command sets the value for an attribute on the specified object.\\n\"          \\\n\t\"If the object is \\\"server\\\" and name is not specified, the attribute will be\\n\"          \\\n\t\"set on all the servers specified on the command line.\\n\"                                 \\\n\t\"For multiple names, use a comma separated list with no intervening whitespace.\\n\"        \\\n\t\"A hook object can only be set by the Administrator, and only on the \\n\"                  \\\n\t\"host on which the server runs.\\n\"                                                        \\\n\t\"Examples:\\n\"                                                                             \\\n\t\"set server s1 max_running = 5\\n\"                                                         \\\n\t\"set server managers = root@host.domain.com\\n\"                                            \\\n\t\"set server managers += susan@*.domain.com\\n\"                                             \\\n\t\"set node n1,n2 state=offline\\n\"                                                          \\\n\t\"set queue q1@s3 resources_max.mem += 5mb\\n\"                                              \\\n\t\"set queue @s3 default_queue = batch\\n\"                                                   \\\n\t\"set server default_qdel_arguments = \\\"-Wsuppress_email = 1000\\\"\\n\"                       \\\n\t\"set server default_qsub_arguments = \\\"-m n -r n\\\"\\n\"                                     \\\n\t\"set resource r1 type=long\\n\"\n\n#define HELP_UNSET                                                                                  \\\n\t\"Syntax: unset object [name][,name...]\\n\"                                                   \\\n\t\"Objects can be \\\"server\\\", \\\"queue\\\", \\\"node\\\", 
\\\"hook\\\", \\\"resource\\\" or \\\"pbshook\\\"\\n\"   \\\n\t\"The unset command will unset an attribute on the specified object except resource type.\\n\" \\\n\t\"If the object is \\\"server\\\" and name is not specified, the attribute will be\\n\"            \\\n\t\"unset on all the servers specified on the command line.\\n\"                                 \\\n\t\"For multiple names, use a comma seperated list with no intervening whitespace.\\n\"          \\\n\t\"A hook object can only be unset by the Administrator, and only on the \\n\"                  \\\n\t\"host on which the server runs.\\n\"                                                          \\\n\t\"Examples:\\n\"                                                                               \\\n\t\"unset server s1 max_running\\n\"                                                             \\\n\t\"unset server managers\\n\"                                                                   \\\n\t\"unset queue enabled\\n\"                                                                     \\\n\t\"unset resource r1 flag\\n\"\n\n#define HELP_LIST                                                                                 \\\n\t\"Syntax: list object [name][,name...]\\n\"                                                  \\\n\t\"Object can be \\\"server\\\", \\\"queue\\\", \\\"node\\\", \\\"resource\\\", \\\"hook\\\", or \\\"pbshook\\\"\\n\" \\\n\t\"The list command will list out all the attributes for the specified object.\\n\"           \\\n\t\"If the object is \\\"server\\\" and name is not specified, all the servers\\n\"                \\\n\t\"specified on the command line will be listed.\\n\"                                         \\\n\t\"For multiple names, use a comma seperated list with no intervening whitespace.\\n\"        \\\n\t\"Hooks can only be listed by the Administrator, and only on the \\n\"                       \\\n\t\"host on which the server runs.\\n\"                            
                            \\\n\t\"Examples:\\n\"                                                                             \\\n\t\"list server\\n\"                                                                           \\\n\t\"list queue q1\\n\"                                                                         \\\n\t\"list node n1,n2,n3\\n\"\n\n#define HELP_PRINT                                                                         \\\n\t\"Syntax: print object [name][,...]\\n\"                                              \\\n\t\"Object can be \\\"server\\\", \\\"queue\\\", \\\"node\\\", \\\"resource\\\" or \\\"hook\\\"\\n\"        \\\n\t\"The print command's output can be fed back into qmgr as input.\\n\"                 \\\n\t\"If the object is \\\"server\\\", all the queues and nodes associated \\n\"              \\\n\t\"with the server are printed as well as the server information.\\n\"                 \\\n\t\"For multiple names, use a comma seperated list with no intervening whitespace.\\n\" \\\n\t\"Hooks can only be printed via \\\"print hook [name][,...]\\\" \\n\"                     \\\n\t\"and by the Administrator, and only on the host on which the server runs.\\n\"       \\\n\t\"Examples:\\n\"                                                                      \\\n\t\"print server\\n\"                                                                   \\\n\t\"print node n1\\n\"                                                                  \\\n\t\"print queue q3\\n\"\n\n#define HELP_IMPORT                                                                    \\\n\t\"Syntax: import hook hook_name content-type content-encoding {input_file|-}\\n\" \\\n\t\"This takes hook script contents from \\\"input_file\\\" or STDIN (-)\\n\"           \\\n\t\"\\\"content-type\\\" is currently \\\"application/x-python\\\" only. 
\\n\"              \\\n\t\"\\\"content-encoding\\\" is currently \\\"default\\\" (7bit/ASCII), or \\\"base64\\\".\\n\" \\\n\t\"Hooks can only be imported by the Administrator, and only on the \\n\"          \\\n\t\"host on which the server runs.\\n\"\n\n#define HELP_EXPORT                                                                      \\\n\t\"Syntax: export hook hook_name content-type content-encoding [output_file]\\n\"    \\\n\t\"Dumps output of hook script into \\\"output_file\\\" if specified, or to STDOUT.\\n\" \\\n\t\"\\\"content-type\\\" is currently \\\"application/x-python\\\" only.\\n\"                 \\\n\t\"\\\"content-encoding\\\" is currently \\\"default\\\" (7bit/ASCII), or \\\"base64\\\".\\n\"   \\\n\t\"Hooks can only be exported by the Administrator, and only on the \\n\"            \\\n\t\"host on which the server runs.\\n\"\n\n/* HELP_QUIT macro name changed to HELP_QUIT0 here, as it clashes with one */\n/* defined under Windows' winuser.h */\n#define HELP_QUIT0       \\\n\t\"Syntax: quit\\n\" \\\n\t\"The quit command will exit from qmgr.\\n\"\n\n#define HELP_EXIT        \\\n\t\"Syntax: exit\\n\" \\\n\t\"The exit command will exit from qmgr.\\n\"\n\n#define HELP_OPERATOR                                                             \\\n\t\"Syntax: ... attribute OP new value\\n\"                                    \\\n\t\"Qmgr accepts three different operators for its commands.\\n\"              \\\n\t\"\\t=\\tAssign value into attribute.\\n\"                                     \\\n\t\"\\t+=\\tAdd new value and old value together and assign into attribute.\\n\" \\\n\t\"\\t-=\\tSubtract new value from old value and assign into attribute.\\n\"    \\\n\t\"These operators are used in the \\\"set\\\" and the \\\"unset\\\" commands\\n\"\n\n#define HELP_VALUE                                                                        \\\n\t\"Syntax ... 
OP value[multiplier]\\n\"                                               \\\n\t\"A multiplier can be added to the end of a size in bytes or words.\\n\"              \\\n\t\"The multipliers are: tb, gb, mb, kb, b, tw, gw, mw, kw, w.  The second letter\\n\" \\\n\t\"stands for bytes or words.  b is the default multiplier.\\n\"                      \\\n\t\"The multipliers are case insensitive i.e. gw is the same as GW.\\n\"               \\\n\t\"Examples:\\n\"                                                                     \\\n\t\"100mb\\n\"                                                                         \\\n\t\"2gw\\n\"                                                                           \\\n\t\"10\\n\"\n\n#define HELP_NAME                                                                     \\\n\t\"Syntax: [name][@server]\\n\"                                                   \\\n\t\"Names can be in several parts.  There can be the name of an object, \\n\"      \\\n\t\"the name of an object at a server, or just at a server.\\n\"                   \\\n\t\"The name of an object specifies a name.  A name of an object at a server\\n\"   \\\n\t\"specifies the name of an object at a specific server.  
Lastly, at a server\\n\" \\\n\t\"specifies all objects of a type at a server\\n\"                                \\\n\t\"Examples:\\n\"                                                                 \\\n\t\"batch     - An object called batch\\n\"                                        \\\n\t\"batch@s1  - An object called batch at the server s1\\n\"                       \\\n\t\"@s1       - All the objects of a certain type at the server s1\\n\"\n\n#define HELP_ATTRIBUTE                                                               \\\n\t\"The help for attributes is broken up into the following help subtopics:\\n\" \\\n\t\"\\tserverpublic\\t- Public server attributes\\n\"                               \\\n\t\"\\tserverro\\t- Read only server attributes\\n\"                                \\\n\t\"\\tqueuepublic\\t- Public queue attributes\\n\"                                 \\\n\t\"\\tqueueexec\\t- Attributes specific to execution queues\\n\"                   \\\n\t\"\\tqueueroute\\t- Attributes specific to routing queues\\n\"                   \\\n\t\"\\tqueuero \\t- Read only queue attributes\\n\"                                 \\\n\t\"\\tnodeattr\\t- Node Attributes\\n\"\n\n#define HELP_SERVERPUBLIC                                                                                   \\\n\t\"Server Public Attributes:\\n\"                                                                       \\\n\t\"acl_host_enable - enables host level access control\\n\"                                             \\\n\t\"acl_user_enable - enables user level access control\\n\"                                             \\\n\t\"acl_users - list of users allowed/denied access to server\\n\"                                       \\\n\t\"comment - informational text string about the server\\n\"                                            \\\n\t\"default_queue - default queue used when a queue is not specified\\n\"                                \\\n\t\"log_events - a bit string 
which specifies what is logged\\n\"                                        \\\n\t\"mail_uid - uid of sender of mail which is sent by the server\\n\"                                    \\\n\t\"managers - list of users granted administrator privileges\\n\"                                       \\\n\t\"max_running - maximum number of jobs that can run on the server\\n\"                                 \\\n\t\"max_user_run - maximum number of jobs that a user can run on the server\\n\"                         \\\n\t\"max_group_run - maximum number of jobs a UNIX group can run on the server\\n\"                       \\\n\t\"max_queued - set of enqueued-count based limits to control further job enqueueing\\n\"                \\\n\t\"max_queued_res - set of resource count based limits to control further job enqueueing\\n\"            \\\n\t\"queued_jobs_threshold - set of resource count based limits to control further job enqueueing\\n\"     \\\n\t\"queued_jobs_threshold_res - set of resource count based limits to control further job enqueueing\\n\" \\\n\t\"max_run - set of running-count based limits to control job scheduling\\n\"                           \\\n\t\"max_run_soft - set of soft running-count based limits to control job scheduling\\n\"                 \\\n\t\"max_run_res - set of resource based limits to control job scheduling\\n\"                            \\\n\t\"max_run_soft_res - set of soft resource based limits to control job scheduling\\n\"                  \\\n\t\"operators - list of users granted operator privileges\\n\"                                           \\\n\t\"query_other_jobs - when true users can query jobs owned by other users\\n\"                          \\\n\t\"resources_available - amount of resources which are available to the server\\n\"                    \\\n\t\"resources_cost - the cost factors of resources.  Used for sync. 
job starting\\n\"                    \\\n\t\"resources_default - the default resource value when the job does not specify\\n\"                    \\\n\t\"resource_max - the maximum ammount of resources that are on the system\\n\"                          \\\n\t\"scheduler_iteration - the amount of seconds between timed scheduler iterations\\n\"                  \\\n\t\"scheduling - when true the server should tell the scheduler to run\\n\"                              \\\n\t\"system_cost - arbitirary value factored into resource costs\\n\"                                     \\\n\t\"default_qdel_arguments - default arguments for qdel command\\n\"                                     \\\n\t\"default_qsub_arguments - default arguments for qsub command\\n\"\n\n#define HELP_SERVERRO                                                                 \\\n\t\"Server Read Only Attributes:\\n\"                                              \\\n\t\"resources_assigned - total ammount of resources allocated to running jobs\\n\" \\\n\t\"server_name - the name of the server and possibly a port number\\n\"           \\\n\t\"server_state - the current state of the server\\n\"                            \\\n\t\"state_count - total number of jobs in each state\\n\"                          \\\n\t\"total_jobs - total number of jobs managed by the server\\n\"                   \\\n\t\"PBS_version - the release version of PBS\\n\"\n\n#define HELP_QUEUEPUBLIC                                                                         \\\n\t\"Queue Public Attributes:\\n\"                                                             \\\n\t\"acl_group_enable - enables group level access control on the queue\\n\"                   \\\n\t\"acl_groups - list of groups which have been allowed or denied access\\n\"                 \\\n\t\"acl_host_enable - enables host level access control on the queue\\n\"                     \\\n\t\"acl_hosts - list of hosts which have been allowed or denied 
access\\n\"                   \\\n\t\"acl_user_enable - enables user level access control on the queue\\n\"                     \\\n\t\"acl_users - list of users which have been allowed or denied access\\n\"                   \\\n\t\"enabled - when true users can enqueue jobs\\n\"                                           \\\n\t\"from_route_only - when true queue only accepts jobs when routed by servers\\n\"           \\\n\t\"max_queuable - maximum number of jobs allowed to reside in the queue\\n\"                 \\\n\t\"max_running - maximum number of jobs in the queue that can be routed or running\\n\"      \\\n\t\"max_queued - set of enqueued-count based limits to control futher job enqueueing\\n\"     \\\n\t\"max_queued_res - set of resource count based limits to control futher job enqueueing\\n\" \\\n\t\"max_run - set of running-count based limits to control job scheduling\\n\"                \\\n\t\"max_run_soft - set of soft running-count based limits to control job scheduling\\n\"      \\\n\t\"max_run_res - set of resource based limits to control job scheduling\\n\"                 \\\n\t\"max_run_soft_res - set of soft resource based limits to control job scheduling\\n\"       \\\n\t\"priority - the priority of the queue\\n\"                                                 \\\n\t\"queue_type - type of queue: execution or routing\\n\"                                     \\\n\t\"resources_max - maximum ammount of a resource which can be requested by a job\\n\"        \\\n\t\"resources_min - minimum ammount of a resource which can be requested by a job\\n\"        \\\n\t\"resources_default - the default resource value when the job does not specify\\n\"         \\\n\t\"started - when true jobs can be scheduled for execution\\n\"\n\n#define HELP_QUEUEEXEC                                                                      \\\n\t\"Attributes for Execution queues only:\\n\"                                           \\\n\t\"checkpoint_min - min. number of mins. 
of CPU time allowed between checkpointing\\n\" \\\n\t\"resources_available - amount of resources which are available to the queue\\n\"     \\\n\t\"kill_delay - amount of time between SIGTERM and SIGKILL when deleting a job\\n\"    \\\n\t\"max_user_run - maximum number of jobs a user can run in the queue\\n\"               \\\n\t\"max_group_run - maximum number of jobs a UNIX group can run in a queue\\n\"\n\n#define HELP_QUEUEROUTE                                                               \\\n\t\"Attributes for Routing queues only:\\n\"                                       \\\n\t\"route_destinations - list of destinations which jobs may be routed to\\n\"     \\\n\t\"alt_router - when true an alternate routing function is used to route jobs\\n\" \\\n\t\"route_held_jobs - when true held jobs may be routed from this queue\\n\"       \\\n\t\"route_waiting_jobs - when true waiting jobs may be routed from this queue\\n\" \\\n\t\"route_retry_time - time delay between route retries.\\n\"                      \\\n\t\"route_lifetime - maximum amount of time a job can be in this routing queue\\n\"\n\n#define HELP_QUEUERO                                                      \\\n\t\"Queue read only attributes:\\n\"                                   \\\n\t\"total_jobs - total number of jobs in queue\\n\"                    \\\n\t\"state_count - total number of jobs in each state in the queue\\n\" \\\n\t\"resources_assigned - amount of resources allocated to jobs running in queue\\n\"\n\n#define HELP_NODEATTR                           \\\n\t\"Node attributes:\\n\"                    \\\n\t\"state - the current state of a node\\n\" \\\n\t\"properties - the properties the node has\\n\"\n"
  },
  {
    "path": "src/include/queue.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _QUEUE_H\n#define _QUEUE_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include \"attribute.h\"\n#include \"server_limits.h\"\n\n#define QTYPE_Unset 0\n#define QTYPE_Execution 1\n#define QTYPE_RoutePush 2\n#define QTYPE_RoutePull 3\n\n/*\n * Attributes, including the various resource-lists are maintained in an\n * array in a \"decoded or parsed\" form for quick access to the value.\n *\n * The following enum defines the index into the array.\n */\n\nenum queueattr {\n#include \"queue_attr_enum.h\"\n#include \"site_que_attr_enum.h\"\n\tQA_ATR_LAST /* WARNING: Must be the highest valued enum */\n};\n\nextern void *que_attr_idx;\nextern attribute_def que_attr_def[];\n\n/* at last we come to the queue definition itself\t*/\n\nstruct pbs_queue {\n\tpbs_list_link qu_link; /* forward/backward links */\n\tpbs_list_head qu_jobs; /* jobs in this queue */\n\tresc_resv *qu_resvp;   /* != NULL if que established */\n\t/* to support a reservation */\n\tint qu_nseldft;\t\t   /* number of elm in qu_seldft */\n\tkey_value_pair *qu_seldft; /* defaults for job -l select */\n\n\tchar qs_hash[DIGEST_LENGTH];\n\tstruct queuefix {\n\t\tint qu_type;\t\t\t    /* queue type: exec, route */\n\t\tchar qu_name[PBS_MAXQUEUENAME + 1]; /* queue name */\n\t} qu_qs;\n\n\tint qu_numjobs;\t\t\t /* current numb jobs in queue */\n\tint qu_njstate[PBS_NUMJOBSTATE]; /* 
# of jobs per state */\n\n\t/* the queue attributes */\n\n\tattribute qu_attr[QA_ATR_LAST];\n\tshort newobj;\n};\ntypedef struct pbs_queue pbs_queue;\n\nextern void *queues_idx;\n\nextern pbs_queue *find_queuebyname(char *);\n#ifdef NAS /* localmod 075 */\nextern pbs_queue *find_resvqueuebyname(char *);\n#endif /* localmod 075 */\nextern pbs_queue *get_dfltque(void);\nextern pbs_queue *que_alloc(char *);\nextern pbs_queue *que_recov_db(char *, pbs_queue *);\nextern void que_free(pbs_queue *);\nextern int que_save_db(pbs_queue *);\n\n#define QUE_SAVE_FULL 0\n#define QUE_SAVE_NEW 1\n\nattribute *get_qattr(const pbs_queue *pq, int attr_idx);\nchar *get_qattr_str(const pbs_queue *pq, int attr_idx);\nstruct array_strings *get_qattr_arst(const pbs_queue *pq, int attr_idx);\npbs_list_head get_qattr_list(const pbs_queue *pq, int attr_idx);\nlong get_qattr_long(const pbs_queue *pq, int attr_idx);\nint set_qattr_generic(pbs_queue *pq, int attr_idx, char *val, char *rscn, enum batch_op op);\nint set_qattr_str_slim(pbs_queue *pq, int attr_idx, char *val, char *rscn);\nint set_qattr_l_slim(pbs_queue *pq, int attr_idx, long val, enum batch_op op);\nint set_qattr_b_slim(pbs_queue *pq, int attr_idx, long val, enum batch_op op);\nint set_qattr_c_slim(pbs_queue *pq, int attr_idx, char val, enum batch_op op);\nint is_qattr_set(const pbs_queue *pq, int attr_idx);\nvoid free_qattr(pbs_queue *pq, int attr_idx);\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _QUEUE_H */\n"
  },
  {
    "path": "src/include/range.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _RANGE_H\n#define _RANGE_H\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n/**\n * Control whether to consider stepping or not\n */\nenum range_step_type {\n\tDISABLE_SUBRANGE_STEPPING,\n\tENABLE_SUBRANGE_STEPPING\n};\n\ntypedef struct range {\n\tint start;\n\tint end;\n\tint step;\n\tint count;\n\tstruct range *next;\n} range;\n\n/* Error message when we fail to allocate memory */\n#define RANGE_MEM_ERR_MSG \"Unable to allocate memory (malloc error)\"\n\n#define INIT_RANGE_ARR_SIZE 2048\n\n/*\n *\tnew_range - allocate and initialize a range structure\n */\n\nrange *new_range(int start, int end, int step, int count, range *next);\n\n/*\n *\tfree_range_list - free a list of ranges\n */\nvoid free_range_list(range *r);\n\n/*\n *\tfree_range - free a range structure\n */\nvoid free_range(range *r);\n\n/*\n *\tdup_range_list - duplicate a range list\n */\nrange *dup_range_list(range *old_r);\n\n/*\n *\tdup_range - duplicate a range structure\n */\nrange *dup_range(range *old_r);\n\n/**\n * @brief\n *\trange_count - count number of elements in a given range structure\n *\n * @param[in]\tr - range structure to count\n *\n * @return int\n * @retval # - number of elements in range\n *\n */\nint range_count(range *r);\n\n/*\n *\trange_parse - parse string of ranges delimited by comma\n */\nrange *range_parse(char *str);\n\n/*\n *\n 
*\trange_next_value - get the next value in a range\n *\t\t\t   if a current value is given, return the next\n *\t\t\t   if no current value is given, return the first\n *\n */\nint range_next_value(range *r, int cur_value);\n\n/*\n *\trange_contains - find if a range contains a value\n */\nint range_contains(range *r, int val);\n\n/*\n *\trange_contains_single - is a value contained in a single range\n *\t\t\t\t  structure\n */\nint range_contains_single(range *r, int val);\n\n/*\n *\trange_remove_value - remove a value from a range list\n *\n */\nint range_remove_value(range **r, int val);\n\n/*\n *\trange_add_value - add a value to a range list\n *\n */\nint range_add_value(range **r, int val, int range_step);\n\n/*\n *\trange_intersection - create an intersection between two ranges\n */\nrange *range_intersection(range *r1, range *r2);\n\nextern int parse_subjob_index(char *, char **, int *, int *, int *, int *);\n\n/*\n * Return a string representation of a range structure\n */\nchar *range_to_str(range *r);\n\nrange * range_join(range *r1, range *r2);\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif /* _RANGE_H */\n"
  },
  {
    "path": "src/include/reservation.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _RESERVATION_H\n#define _RESERVATION_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n/*\n * reservation.h - structure definitions for reservation objects\n *\n * Include Files Required:\n *\t<sys/types.h>\n *\t\"list_link.h\"\n *\t\"attribute.h\"\n *\t\"server_limits.h\"\n *\t\"batch_request.h\"\n *\t\"pbs_nodes.h\"\n *\t\"job.h\"\n */\n\n#ifndef _RESV_NODE_H\n#include \"resv_node.h\"\n#endif\n\n#define JOB_OBJECT 1\n#define RESC_RESV_OBJECT 2\n\n#define RESV_START_TIME_MODIFIED 0x1\n#define RESV_END_TIME_MODIFIED 0x2\n#define RESV_DURATION_MODIFIED 0x4\n#define RESV_SELECT_MODIFIED 0x8\n#define RESV_ALTER_FORCED 0x10\n\n/*\n * The following resv_atr enum provides an index into the array of\n * decoded reservation attributes, for quick access.\n * Most of the attributes here are \"public\", but some are Read Only,\n * Private, or even Internal data items; maintained here because of\n * their variable size.\n *\n * \"RESV_ATR_LAST\" must be the last value as its number is used to\n * define the size of the array.\n */\n\nenum resv_atr {\n#include \"resv_attr_enum.h\"\n\tRESV_ATR_UNKN,\n\tRESV_ATR_LAST\n};\n\nenum resvState_discrim 
{\n\tRESVSTATE_gen_task_Time4resv,\n\tRESVSTATE_Time4resv,\n\tRESVSTATE_req_deleteReservation,\n\tRESVSTATE_add_resc_resv_to_job,\n\tRESVSTATE_is_resv_window_in_future,\n\tRESVSTATE_req_resvSub,\n\tRESVSTATE_alter_failed\n};\n\n/*\n * The \"definitions\" for the reservation attributes are in the following array;\n * it is also indexed by the RESV_ATR_... enums.\n */\n\nextern void *resv_attr_idx;\nextern attribute_def resv_attr_def[];\nextern int index_atrJob_to_atrResv[][2];\n\n/* linked list of vnodes associated to the soonest reservation */\ntypedef struct pbsnode_list_ {\n\tstruct pbsnode *vnode;\n\tstruct pbsnode_list_ *next;\n} pbsnode_list_t;\n\n/* Structure used to revert reservation back if the ralter failed */\nstruct resv_alter {\n\tlong ra_state;\n\tunsigned long ra_flags;\n};\n\n/*\n * THE RESERVATION\n *\n * This structure is used by the server to maintain internal\n * quick access to the state and status of each reservation.\n * There is one instance of this structure per reservation known by the server.\n *\n * This information must be PRESERVED and is done so by updating the\n * reservation file in the reservation subdirectory which corresponds to this\n * reservation.\n *\n * ri_state is the state of the reservation.  It is kept up front to provide\n * for a \"quick\" update of the reservation state with minimum rewriting of the\n * reservation file.\n * That is why the sub-struct ri_qs exists; it is the part which is\n * written on the \"quick\" save.  
If in the future the format of this area\n * is modified, the value of RSVERSION needs to be bumped.\n *\n * The unparsed string set forms of the attributes (including resources)\n * are maintained in the struct attrlist as discussed above.\n */\n\n#define RSVERSION 500\nstruct resc_resv {\n\n\t/* Note: these members, up to ri_qs, are not saved to disk */\n\n\tpbs_list_link ri_allresvs; /* links this resc_resv into the\n\t\t\t\t\t\t\t * server's global list\n\t\t\t\t\t\t\t */\n\n\tstruct pbs_queue *ri_qp; /* pbs_queue that got created\n\t\t\t\t\t\t\t * to support this \"reservation\"\n\t\t\t\t\t\t\t * note: for a \"reservation job\"\n\t\t\t\t\t\t\t * this value is NULL\n\t\t\t\t\t\t\t */\n\n\tint ri_futuredr; /* non-zero if future delete resv\n\t\t\t\t\t\t\t * task placed on \"task_list_timed\"\n\t\t\t\t\t\t\t */\n\n\tjob *ri_jbp;\t      /* for a \"reservation job\" this\n\t\t\t\t\t\t\t * points to the associated job\n\t\t\t\t\t\t\t */\n\tresc_resv *ri_parent; /* reservation in a reservation */\n\n\tint ri_giveback; /* flag, return resources to parent */\n\n\tint ri_vnodes_down; /* the number of vnodes that are unavailable */\n\tint ri_vnodect;\t    /* the number of vnodes associated to an advance\n\t\t\t\t\t\t\t * reservation or a standing reservation occurrence\n\t\t\t\t\t\t\t */\n\n\tpbs_list_head ri_svrtask; /* place to keep work_task structs that\n\t\t\t\t\t\t\t * are \"attached\" to this reservation\n\t\t\t\t\t\t\t */\n\n\tpbs_list_head ri_rejectdest; /* place to keep badplace structs that\n\t\t\t\t\t\t\t * are \"attached\" to this reservation\n\t\t\t\t\t\t\t * Will only be useful if we later make\n\t\t\t\t\t\t\t */\n\n\tstruct batch_request *ri_brp; /* NZ if choose interactive (I) mode */\n\n\t/* resource reservations routeable objs */\n\tint ri_downcnt; /* used when deleting the reservation */\n\n\tlong ri_resv_retry; /* time at which the reservation will be reconfirmed */\n\n\tlong ri_degraded_time; /* a tentative time to reconfirm the reservation 
*/\n\n\tpbsnode_list_t *ri_pbsnode_list; /* vnode list associated to the reservation */\n\n\t/* objects used while altering a reservation. */\n\tstruct resv_alter ri_alter; /* object used to alter a reservation */\n\n\t/* Reservation start and end tasks */\n\tstruct work_task *resv_start_task;\n\tstruct work_task *resv_end_task;\n\tint resv_from_job;\n\n\t/* A count to keep track of how many schedulers have been requested and\n\t * responded to this reservation request\n\t */\n\tint req_sched_count;\n\tint rep_sched_count;\n\n\t/*\n\t * fixed size internal data - maintained via \"quick save\"\n\t * some of the items are copies of attributes; if so, this\n\t * internal version takes precedence\n\t */\n#ifndef PBS_MOM\n\tchar qs_hash[DIGEST_LENGTH];\n#endif\n\tstruct resvfix {\n\t\tint ri_rsversion;\t\t\t   /* reservation struct version#, see RSVERSION */\n\t\tint ri_state; /* internal copy of state */ // FIXME: can we remove this like we did for job?\n\t\tint ri_substate;\t\t\t   /* substate of resv state */\n\t\ttime_t ri_stime;\t\t\t   /* left window boundary */\n\t\ttime_t ri_etime;\t\t\t   /* right window boundary */\n\t\ttime_t ri_duration;\t\t\t   /* reservation duration */\n\t\ttime_t ri_tactive;\t\t\t   /* time reservation became active */\n\t\tint ri_svrflags;\t\t\t   /* server flags */\n\t\tchar ri_resvID[PBS_MAXSVRRESVID];\t   /* reservation identifier */\n\t\tchar ri_fileprefix[PBS_RESVBASE + 1];\t   /* reservation file prefix */\n\t\tchar ri_queue[PBS_MAXQRESVNAME + 1];\t   /* queue used by reservation */\n\t} ri_qs;\n\n\t/*\n\t * The following array holds the decoded format of the attributes.\n\t * Its presence is for rapid access to the attributes.\n\t */\n\tattribute ri_wattr[RESV_ATR_LAST]; /* reservation's attributes */\n\tshort newobj;\n};\n\n/*\n * server flags (in ri_svrflags)\n */\n#define RESV_SVFLG_HERE 0x01\t   /* SERVER: job created here */\n#define RESV_SVFLG_HASWAIT 0x02\t   /* job has timed task entry for wait time */\n#define 
RESV_SVFLG_HASRUN 0x04\t   /* job has been run before (being rerun) */\n#define RESV_SVFLG_Suspend 0x200   /* job suspended (signal suspend) */\n#define RESV_SVFLG_HasNodes 0x1000 /* job has nodes allocated to it */\n\n#define RESV_FILE_COPY \".RC\"   /* tmp copy while updating */\n#define RESV_FILE_SUFFIX \".RB\" /* reservation control file */\n#define RESV_BAD_SUFFIX \".RBD\" /* save bad reservation file */\n\n#define RESV_UNION_TYPE_NEW 0\n\n#define RESV_RETRY_DELAY 10\t/* for degraded standing reservation retries */\n#define RESV_ASAP_IDLE_TIME 600 /* default delete_idle_time for ASAP reservations */\n\n/* reservation hold (internal) types */\n\n#define RHOLD_n 0\n#define RHOLD_u 1\n#define RHOLD_o 2\n#define RHOLD_s 4\n#define RHOLD_bad_password 8\n\n/* other symbolic constants */\n#define Q_CHNG_ENABLE 0\n#define Q_CHNG_START 1\n\nextern void *resvs_idx;\nextern resc_resv *find_resv(char *);\nextern resc_resv *resv_alloc(char *);\nextern resc_resv *resv_recov(char *);\nextern void resv_purge(resc_resv *);\nextern int start_end_dur_wall(resc_resv *);\n\n#ifdef _BATCH_REQUEST_H\nextern resc_resv *chk_rescResv_request(char *, struct batch_request *);\nextern void resv_mailAction(resc_resv *, struct batch_request *);\nextern int chk_resvReq_viable(resc_resv *);\n#endif /* _BATCH_REQUEST_H */\n\n#ifdef _WORK_TASK_H\nextern int gen_task_Time4resv(resc_resv *);\nextern int gen_task_EndresvWindow(resc_resv *);\nextern int gen_future_deleteResv(resc_resv *, long);\nextern int gen_future_reply(resc_resv *, long);\nextern int gen_negI_deleteResv(resc_resv *, long);\nextern void time4resvFinish(struct work_task *);\nextern void Time4resvFinish(struct work_task *);\nextern void Time4_I_term(struct work_task *);\nextern void tickle_for_reply(void);\nextern void remove_deleted_resvs();\nextern void add_resv_beginEnd_tasks();\nextern void resv_retry_handler(struct work_task *);\nextern void set_idle_delete_task(resc_resv *presv);\n#endif /* _WORK_TASK_H */\n\nextern int 
change_enableORstart(resc_resv *, int, char *);\nextern void unset_resv_retry(resc_resv *);\nextern void set_resv_retry(resc_resv *, long);\nextern void force_resv_retry(resc_resv *, long);\nextern void eval_resvState(resc_resv *, enum resvState_discrim, int, int *, int *);\nextern void free_resvNodes(resc_resv *);\nextern int act_resv_add_owner(attribute *, void *, int);\nextern void svr_mailownerResv(resc_resv *, int, int, char *);\nextern void resv_free(resc_resv *);\nextern void set_old_subUniverse(resc_resv *);\nextern int assign_resv_resc(resc_resv *, char *, int);\nextern void resv_exclusive_handler(resc_resv *);\nextern void resv_exclusive_handler_forced(resc_resv *);\nextern long determine_resv_retry(resc_resv *presv);\n\nextern resc_resv *resv_recov_db(char *resvid, resc_resv *presv);\nextern int resv_save_db(resc_resv *presv);\nextern void pbsd_init_resv(resc_resv *presv, int type);\n\nattribute *get_rattr(const resc_resv *presv, int attr_idx);\nchar *get_rattr_str(const resc_resv *presv, int attr_idx);\nstruct array_strings *get_rattr_arst(const resc_resv *presv, int attr_idx);\npbs_list_head get_rattr_list(const resc_resv *presv, int attr_idx);\nlong get_rattr_long(const resc_resv *presv, int attr_idx);\nint set_rattr_generic(resc_resv *presv, int attr_idx, char *val, char *rscn, enum batch_op op);\nint set_rattr_str_slim(resc_resv *presv, int attr_idx, char *val, char *rscn);\nint set_rattr_l_slim(resc_resv *presv, int attr_idx, long val, enum batch_op op);\nint set_rattr_b_slim(resc_resv *presv, int attr_idx, long val, enum batch_op op);\nint set_rattr_c_slim(resc_resv *presv, int attr_idx, char val, enum batch_op op);\nint is_rattr_set(const resc_resv *presv, int attr_idx);\nvoid free_rattr(resc_resv *presv, int attr_idx);\nvoid clear_rattr(resc_resv *presv, int attr_idx);\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _RESERVATION_H */\n"
  },
  {
    "path": "src/include/resmon.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\nstruct rm_attribute {\n\tchar *a_qualifier;\n\tchar *a_value;\n};\n\n/*\n ** The config structure is used to save a name to be used as a key\n ** for searching and a value or function call to provide an \"answer\"\n ** for the name in question.\n */\ntypedef char *(*confunc)(struct rm_attribute *);\nstruct config {\n\tchar *c_name;\n\tunion {\n\t\tconfunc c_func;\n\t\tchar *c_value;\n\t} c_u;\n};\n\n#define RM_NPARM 20 /* max number of parameters for child */\n\n#define RM_CMD_CLOSE 1\n#define RM_CMD_REQUEST 2\n#define RM_CMD_CONFIG 3\n#define RM_CMD_SHUTDOWN 4\n\n#define RM_RSP_OK 100\n#define RM_RSP_ERROR 999\n\n#define UPDATE_MOM_STATE 1\n\n/*\n ** Macros for fast min/max.\n */\n#ifndef MIN\n#define MIN(a, b) (((a) < (b)) ? (a) : (b))\n#endif\n#ifndef MAX\n#define MAX(a, b) (((a) > (b)) ? (a) : (b))\n#endif\n\nextern char *arch(struct rm_attribute *);\nextern char *physmem(struct rm_attribute *);\n"
  },
  {
    "path": "src/include/resource.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _RESOURCE_H\n#define _RESOURCE_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n#include \"attribute.h\"\n#include \"list_link.h\"\n\n/*\n * This header file contains the definitions for resources.\n *\n * Other required header files:\n *\t\"portability.h\"\n *\t\"attribute.h\"\n *\t\"list_link.h\"\n *\n * Resources are \"a special case\" of attributes.  Resources use similar\n * structures as attributes.  
Certain types, type related functions,\n * and flags may differ between the two.\n *\n * Within the resource structure, the value is contained in an attribute\n * substructure; this is done so the various attribute decode and encode\n * routines can be \"reused\".\n *\n * For any server, queue or job attribute which is a set of resources,\n * the attribute points to a list of \"resource\" structures.\n * The value of the resource is contained in these structures.\n *\n * Unlike \"attributes\" which are typically identical between servers\n * within an administrative domain, resources vary between systems.\n * Hence, the resource instance has a pointer to the resource definition\n * rather than depending on a predefined index.\n */\n\n#define RESOURCE_UNKNOWN \"|unknown|\"\n\nenum resc_enum {\n#include \"resc_def_enum.h\"\n\tRESC_UNKN,\n\tRESC_LAST\n};\n\n#define RESC_NOOP_DEF \"noop\"\n\ntypedef enum resdef_op {\n\tRESDEF_CREATE,\n\tRESDEF_UPDATE,\n\tRESDEF_DELETE\n} resdef_op_t;\n\ntypedef struct resource {\n\tpbs_list_link rs_link;\t       /* link to other resources in list */\n\tstruct resource_def *rs_defin; /* pointer to definition entry */\n\tattribute rs_value;\t       /* attribute struct holding value */\n} resource;\n\ntypedef struct resource_def {\n\tchar *rs_name;\n\tint (*rs_decode)(attribute *prsc, char *name, char *rn, char *val);\n\tint (*rs_encode)(const attribute *prsv, pbs_list_head *phead, char *atname, char *rsname, int mode, svrattrl **rtnl);\n\tint (*rs_set)(attribute *old, attribute *nattr, enum batch_op op);\n\tint (*rs_comp)(attribute *prsc, attribute *with);\n\tvoid (*rs_free)(attribute *prsc);\n\tint (*rs_action)(resource *presc, attribute *pat, void *pobj, int type, int actmode);\n\tunsigned int rs_flags : ATRDFLAG; /* flags: R/O, ..., see attribute.h */\n\tunsigned int rs_type : ATRDTYPE;  /* type of resource, see attribute.h */\n\tunsigned int rs_entlimflg;\t  /* tracking entity limits for this  */\n\tstruct resource_def 
*rs_next;\n\tunsigned int rs_custom; /* bit flag to indicate custom resource or builtin */\n} resource_def;\n\nstruct resc_sum {\n\tstruct resource_def *rs_def; /* ptr to this resource's def    */\n\tstruct resource *rs_prs;     /* ptr resource in Resource_List */\n\tattribute rs_attr;\t     /* used for summation of values  */\n\tint rs_set;\t\t     /* set if value is set here      */\n};\n\n/* following used for Entity Limits for Finer Granularity Control */\ntypedef struct svr_entlim_leaf {\n\tresource_def *slf_rescd;\n\tattribute slf_limit;\n\tattribute slf_sum;\n} svr_entlim_leaf_t;\n\nextern struct resc_sum *svr_resc_sum;\nextern void *resc_attrdef_idx;\nextern resource_def *svr_resc_def; /* the resource definition array */\nextern int svr_resc_size;\t   /* size (num elements) in above  */\nextern int svr_resc_unk;\t   /* index to \"unknown\" resource   */\n\nextern resource *add_resource_entry(attribute *, resource_def *);\nextern int cr_rescdef_idx(resource_def *resc_def, int limit);\nextern resource_def *find_resc_def(resource_def *, char *);\nextern resource *find_resc_entry(const attribute *, resource_def *);\nextern int update_resource_def_file(char *name, resdef_op_t op, int type, int perms);\nextern int add_resource_def(char *name, int type, int perms);\nextern int restart_python_interpreter(const char *);\nextern long long to_kbsize(char *val);\nextern int alloc_svrleaf(char *resc_name, svr_entlim_leaf_t **pplf);\nextern int parse_resc_type(char *val, int *resc_type_p);\nextern int parse_resc_flags(char *val, int *flag_ir_p, int *resc_flag_p);\nextern int verify_resc_name(char *name);\nextern int verify_resc_type_and_flags(int resc_type, int *pflag_ir, int *presc_flag, const char *rescname, char *buf, int buflen, int autocorrect);\nextern void update_resc_sum(void);\n\n/* Defines for entity limit tracking */\n#define PBS_ENTLIM_NOLIMIT 0  /* No entity limit has been set for this resc */\n#define PBS_ENTLIM_LIMITSET 1 /* this set in rs_entlim if limit 
exists */\n\n/*\n * struct for providing mapping between resource type name or a\n * resource type value and the corresponding functions.\n * See lib/Libattr/resc_map.c\n */\nstruct resc_type_map {\n\tchar *rtm_rname;\n\tint rtm_type;\n\tint (*rtm_decode)(attribute *prsc, char *name, char *rn, char *val);\n\tint (*rtm_encode)(const attribute *prsv, pbs_list_head *phead, char *atname,\n\t\t\t  char *rsname, int mode, svrattrl **rtnl);\n\tint (*rtm_set)(attribute *old, attribute *nattr, enum batch_op op);\n\tint (*rtm_comp)(attribute *prsc, attribute *with);\n\tvoid (*rtm_free)(attribute *prsc);\n};\nextern struct resc_type_map *find_resc_type_map_by_typev(int);\nextern struct resc_type_map *find_resc_type_map_by_typest(char *);\nextern char *find_resc_flag_map(int);\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _RESOURCE_H */\n"
  },
  {
    "path": "src/include/resv_node.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _RESV_NODE_H\n#define _RESV_NODE_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\ntypedef struct subUniverse subUniverse;\ntypedef struct spec_and_context spec_and_context;\ntypedef struct spec_and_context spec_ctx;\ntypedef struct resc_resv resc_resv;\ntypedef struct pbsnode pbsnode;\ntypedef unsigned reservationTag;\n\n/*\"specification and solving context\"*/\n\n/*pointer to an instantiation of \"spec_and_context\" is passed to the node\n *solving routine, \"node_spec\".  
It finds, if possible, a set of nodes in the\n *specified subUniverse that satisfies the node specification stored in field\n *\"nspec\"\n */\n\nstruct subUniverse {\n\n\tstruct pbsnode **univ; /*solve relative to this \"universe\",\n\t\t\t\t\t\t which is just an array of pbsnode\n\t\t\t\t\t\t pointers\n\t\t\t\t\t\t */\n\tint usize;\t       /*number of entries in \"universe\" array*/\n\tint inheap;\t       /*set non-zero if univ is in heap and\n\t\t\t\t\t should be freed */\n};\n\nstruct spec_and_context {\n\tchar *nspec; /*specification of a node set*/\n\n\tsubUniverse subUniv;\n\n\tunsigned int when : 4; /*NEEDNOW or NEEDFUTURE*/\n\tunsigned int type : 4; /*SPECTYPE_JOB; SPECTYPE_RESV*/\n\n\tresc_resv *belong_to;\t/*0==no parent else, ptr to parent*/\n\treservationTag resvTag; /*if trying to find nodes for a */\n\t/*reservation or reservation job*/\n\t/*this is the resv's \"handle\"   */\n\t/*currently not being used      */\n\n\tlong stime; /*job or reservation \"start\" time*/\n\tlong etime; /*best estimate of \"end\" time*/\n};\n\nextern spec_and_context *create_context(void *, int, char *);\nextern void free_context(spec_and_context *);\n\n#ifdef __cplusplus\n}\n#endif\n#endif /*_RESV_NODE_H*/\n"
  },
  {
    "path": "src/include/rm.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n **\tHeader file defining the library calls and message formats for\n **\tconnecting to and communicating with the resource monitor.\n */\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\nint openrm(char *, unsigned int);\nint closerm(int);\nint downrm(int);\nint configrm(int, char *);\nint addreq(int, char *);\nint allreq(char *);\nchar *getreq(int);\nint flushreq(void);\nint activereq(void);\nvoid fullresp(int);\n\n#ifdef __cplusplus\n}\n#endif\n"
  },
  {
    "path": "src/include/sched_cmds.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _SCHED_CMDS_H\n#define _SCHED_CMDS_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include \"pbs_ifl.h\"\ntypedef struct sched_cmd sched_cmd;\n\nstruct sched_cmd {\n\t/* sched command */\n\tint cmd;\n\n\t/* jobid associated with cmd if any, else NULL */\n\tchar *jid;\n};\n\n/* server to scheduler commands: */\nenum svr_sched_cmd {\n\tSCH_SCHEDULE_NULL,\n\tSCH_SCHEDULE_NEW,   /* New job queued or eligible\t*/\n\tSCH_SCHEDULE_TERM,  /* Running job terminated\t*/\n\tSCH_SCHEDULE_TIME,  /* Scheduler interval reached\t*/\n\tSCH_SCHEDULE_RECYC, /* Not currently used\t\t*/\n\tSCH_SCHEDULE_CMD,   /* Schedule on command \t\t*/\n\tSCH_CONFIGURE,\n\tSCH_QUIT,\n\tSCH_RULESET,\n\tSCH_SCHEDULE_FIRST,\t     /* First schedule after server starts */\n\tSCH_SCHEDULE_JOBRESV,\t     /* Arrival of an existing reservation time */\n\tSCH_SCHEDULE_AJOB,\t     /* run one, named job */\n\tSCH_SCHEDULE_STARTQ,\t     /* Stopped queue started */\n\tSCH_SCHEDULE_MVLOCAL,\t     /* Job moved to local queue */\n\tSCH_SCHEDULE_ETE_ON,\t     /* eligible_time_enable is turned ON */\n\tSCH_SCHEDULE_RESV_RECONFIRM, /* Reconfirm a reservation */\n\tSCH_SCHEDULE_RESTART_CYCLE,  /* Restart a scheduling cycle */\n\tSCH_CMD_HIGH\t\t     /* This has to be the last command always. 
Any new command can be inserted above if required */\n};\n\nint schedule(int sd, const sched_cmd *cmd);\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _SCHED_CMDS_H */\n"
  },
  {
    "path": "src/include/server.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _SERVER_H\n#define _SERVER_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n/*\n * server.h - definitions for the server object (structure)\n *\n * Other include files required:\n *\t<sys/types.h>\n *\t\"attribute.h\"\n *\t\"list_link.h\"\n *\t\"server_limits.h\"\n *\n * The server object (structure) contains the parameters which\n * control the operation of the server itself.  
This includes\n * the server attributes and resource (limits).\n */\n#include <stdbool.h>\n#ifndef _GRUNT_H\n#include \"grunt.h\"\n#endif\n#include \"pbs_sched.h\"\n#include \"server_limits.h\"\n\n#define SYNC_SCHED_HINT_NULL 0\n#define SYNC_SCHED_HINT_FIRST 1\n#define SYNC_SCHED_HINT_OTHER 2\n\nenum srv_atr {\n#include \"svr_attr_enum.h\"\n#include \"site_svr_attr_enum.h\"\n\t/* This must be last */\n\tSVR_ATR_LAST\n};\n\nextern char *pbs_server_name;\nextern char server_host[];\nextern uint pbs_server_port_dis;\nextern void *svr_attr_idx;\nextern attribute_def svr_attr_def[];\n/* for trillion job id */\nextern long long svr_max_job_sequence_id;\n\n/* for history jobs */\nextern long svr_history_enable;\nextern long svr_history_duration;\n\nstruct server {\n\tstruct server_qs {\n\t\tint sv_numjobs;\t\t  /* number of jobs owned by server   */\n\t\tint sv_numque;\t\t  /* number of queues managed  */\n\t\tlong long sv_jobidnumber; /* next number to use in new jobid  */\n\t\tlong long sv_lastid;\t  /* block increment to avoid many saves */\n\t} sv_qs;\n\tattribute sv_attr[SVR_ATR_LAST]; /* the server attributes */\n\tshort newobj;\n\ttime_t sv_started;\t\t   /* time server started */\n\ttime_t sv_hotcycle;\t\t   /* if RECOV_HOT, time of last restart */\n\ttime_t sv_next_schedule;\t   /* when to next run scheduler cycle */\n\tint sv_jobstates[PBS_NUMJOBSTATE]; /* # of jobs per state */\n\tint sv_nseldft;\t\t\t   /* num of elems in sv_seldft */\n\tkey_value_pair *sv_seldft;\t   /* defaults for job's -l select\t*/\n\n\tint sv_trackmodifed;\t\t     /* 1 if tracking list modified\t    */\n\tint sv_tracksize;\t\t     /* total number of sv_track entries */\n\tstruct tracking *sv_track;\t     /* array of track job records\t    */\n\tint sv_provtrackmodifed;\t     /* 1 if prov_tracking list modified */\n\tint sv_provtracksize;\t\t     /* total number of sv_prov_track entries */\n\tstruct prov_tracking *sv_prov_track; /* array of provision records */\n\tint sv_cur_prov_records;\t   
  /* number of provisioning requests currently running */\n};\n\nextern struct server server;\nextern pbs_list_head svr_alljobs;\nextern pbs_list_head svr_allresvs; /* all reservations in server */\n\n/* degraded reservations globals */\nextern long resv_retry_time;\n\n/*\n * server state values\n */\n#define SV_STATE_DOWN 0\n#define SV_STATE_INIT 1\n#define SV_STATE_HOT 2\n#define SV_STATE_RUN 3\n#define SV_STATE_SHUTDEL 4\n#define SV_STATE_SHUTIMM 5\n#define SV_STATE_SHUTSIG 6\n#define SV_STATE_SECIDLE 7\n#define SV_STATE_PRIMDLY 0x10\n\n/*\n * Other misc defines\n */\n#define SVR_HOSTACL \"svr_hostacl\"\n#define PBS_DEFAULT_NODE \"1\"\n\n#define SVR_SAVE_QUICK 0\n#define SVR_SAVE_FULL 1\n#define SVR_SAVE_NEW 2\n\n#define SVR_HOT_CYCLE 15  /* retry mom every n sec on hot start     */\n#define SVR_HOT_LIMIT 300 /* after n seconds, drop out of hot start */\n\n#define PBS_SCHED_DAEMON_NAME \"Scheduler\"\n#define WALLTIME \"walltime\"\n#define MIN_WALLTIME \"min_walltime\"\n#define MAX_WALLTIME \"max_walltime\"\n#define SOFT_WALLTIME \"soft_walltime\"\n#define MCAST_WAIT_TM 2\n\n\n#define ESTIMATED_DELAY_NODES_UP 60 /* delay reservation reconf at boot until nodes expected up */\n\n/*\n * Server failover role\n */\nenum failover_state {\n\tFAILOVER_NONE,\t       /* Only Server, no failover */\n\tFAILOVER_PRIMARY,      /* Primary in failover configuration */\n\tFAILOVER_SECONDARY,    /* Secondary in failover */\n\tFAILOVER_CONFIG_ERROR, /* error in configuration */\n};\n\n/*\n * Server job history defines & globals\n */\n#define SVR_CLEAN_JOBHIST_TM 120\t    /* after 2 minutes, reschedule the work task */\n#define SVR_CLEAN_JOBHIST_SECS 5\t    /* never spend more than 5 seconds in one sweep to clean hist */\n#define SVR_JOBHIST_DEFAULT 1209600\t    /* default time period to keep job history: 2 weeks */\n#define SVR_MAX_JOB_SEQ_NUM_DEFAULT 9999999 /* default max job id is 9999999 */\n\n/* function prototypes */\n\nextern int svr_recov_db();\nextern int svr_save_db(struct 
server *);\nextern pbs_sched *sched_recov_db(char *, pbs_sched *ps);\nextern int sched_save_db(pbs_sched *);\nextern enum failover_state are_we_primary(void);\nextern int have_licensed_nodes(void);\nextern void unlicense_nodes(void);\nextern void set_sched_default(pbs_sched *, int from_scheduler);\nextern void memory_debug_log(struct work_task *ptask);\n\nextern pbs_list_head *fetch_sched_deferred_request(pbs_sched *psched, bool create);\nextern void clear_sched_deferred_request(pbs_sched *psched);\n\nattribute *get_sattr(int attr_idx);\nchar *get_sattr_str(int attr_idx);\nstruct array_strings *get_sattr_arst(int attr_idx);\npbs_list_head get_sattr_list(int attr_idx);\nlong get_sattr_long(int attr_idx);\nint set_sattr_generic(int attr_idx, char *val, char *rscn, enum batch_op op);\nint set_sattr_str_slim(int attr_idx, char *val, char *rscn);\nint set_sattr_l_slim(int attr_idx, long val, enum batch_op op);\nint set_sattr_b_slim(int attr_idx, long val, enum batch_op op);\nint set_sattr_c_slim(int attr_idx, char val, enum batch_op op);\nint is_sattr_set(int attr_idx);\nvoid free_sattr(int attr_idx);\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _SERVER_H */\n"
  },
  {
    "path": "src/include/server_limits.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _SERVER_LIMITS_H\n#define _SERVER_LIMITS_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n#include <pbs_config.h>\n\n/*\n * This section contains size limit definitions\n *\n * BEWARE OF CHANGING THESE\n */\n#ifndef PBS_MAXNODENAME\n#define PBS_MAXNODENAME 79 /* max length of a vnode name\t\t    */\n#endif\n#define PBS_JOBBASE 11 /* basename size for job file, 11 = 14 -3   */\n/* where 14 is min file name, 3 for suffix  */\n\n#define PBS_RESVBASE 11 /* basename size for resv file, 11 = 14 -3   */\n/* where 14 is max file name, 3 for suffix - \".RF\" */\n#define PBS_NUMJOBSTATE 10 /* TQHWREXBMF */\n\n#ifdef NAS\t\t   /* localmod 083 */\n#define PBS_MAX_HOPCOUNT 3 /* limit on number of routing hops per job */\n#else\n#define PBS_MAX_HOPCOUNT 10 /* limit on number of routing hops per job */\n#endif\t\t\t    /* localmod 083 */\n\n#define PBS_SEQNUMTOP 999999999999 /* top number for job sequence number, reset */\n/* to zero when reached, see req_quejob.c    */\n\n#define PBS_NET_RETRY_TIME 30\t    /* Retry time between re-sending requests  */\n#define PBS_NET_RETRY_LIMIT 14400   /* Max retry time */\n#define PBS_SCHEDULE_CYCLE 600\t    /* re-schedule even if no change, 10 min   */\n#define PBS_RESTAT_JOB 30\t    /* ask mom for status only once in 30 sec  */\n#define PBS_STAGEFAIL_WAIT 1800\t    /* retry time after stage in failure 
*/\n#define PBS_MAX_ARRAY_JOB_DFL 10000 /* default max size of an array job */\n\n/* Server Database information - path names */\n\n#define PBS_SVR_PRIVATE \"server_priv\"\n#define PBS_ACCT \"accounting\"\n#define PBS_JOBDIR \"jobs\"\n#define PBS_USERDIR \"users\"\n#define PBS_RESCDEF \"resourcedef\"\n#define PBS_RESVDIR \"resvs\"\n#define PBS_SPOOLDIR \"spool\"\n#define PBS_QUEDIR \"queues\"\n#define PBS_LOGFILES \"server_logs\"\n#define PBS_ACTFILES \"accounting\"\n#define PBS_SERVERDB \"serverdb\"\n#define PBS_SVRACL \"acl_svr\"\n#define PBS_TRACKING \"tracking\"\n#define NODE_DESCRIP \"nodes\"\n#define NODE_STATUS \"node_status\"\n#define VNODE_MAP \"vnodemap\"\n#define PBS_PROV_TRACKING \"prov_tracking\"\n#define PBS_SCHEDDB \"scheddb\"\n#define PBS_SCHED_PRIVATE \"sched_priv\"\n#define PBS_SVRLIVE \"svrlive\"\n#define DIGEST_LENGTH 20 /* for now making this equal to SHA_DIGEST_LENGTH  which is 20 */\n\n/*\n * Security, Authentication, Authorization Control:\n *\n *\t- What account is PBS mail from\n *\t- Who is the default administrator (when none defined)\n *\t- Is \"root\" always a batch administrator (manager) (YES/no)\n */\n\n#define PBS_DEFAULT_MAIL \"adm\"\n#define PBS_DEFAULT_ADMIN \"root\"\n#define PBS_ROOT_ALWAYS_ADMIN 1\n\n/* #define NO_SPOOL_OUTPUT 1\tUser output in home directory, not spool */\n\n/* \"simplified\" network address structure */\n\n#ifndef PBS_NET_TYPE\ntypedef unsigned long pbs_net_t; /* for holding host addresses */\n#define PBS_NET_TYPE\n#endif /* PBS_NET_TYPE */\n\n/*\n * the following funny business is due to the fact that O_SYNC\n * is not currently POSIX\n */\n#if defined(O_SYNC)\n#define O_Sync O_SYNC\n#elif defined(_FSYNC)\n#define O_Sync _FSYNC\n#elif defined(O_FSYNC)\n#define O_Sync O_FSYNC\n#else\n#define O_Sync 0\n#endif\n\n/* defines for job moving (see net_move() ) */\n\n#define MOVE_TYPE_Move 1  /* Move by user request */\n#define MOVE_TYPE_Route 2 /* Route from routing queue */\n#define MOVE_TYPE_Exec 3  /* Execution 
(move to MOM) */\n#define MOVE_TYPE_MgrMv 4 /* Move by privileged user, a manager */\n#define MOVE_TYPE_Order 5 /* qorder command by user */\n\n#define SEND_JOB_OK 0\t\t\t /* send_job sent successfully\t  */\n#define SEND_JOB_FATAL 1\t\t /* send_job permanent fatal error */\n#define SEND_JOB_RETRY 2\t\t /* send_job failed, retry later\t  */\n#define SEND_JOB_NODEDW 3\t\t /* send_job node down, mark down  */\n#define SEND_JOB_HOOKERR 4\t\t /* send_job hook error */\n#define SEND_JOB_HOOK_REJECT 5\t\t /* send_job hook reject */\n#define SEND_JOB_HOOK_REJECT_RERUNJOB 6\t /* send_job hook reject, requeue job */\n#define SEND_JOB_HOOK_REJECT_DELETEJOB 7 /* send_job hook reject, delete job */\n#define SEND_JOB_SIGNAL 8\t\t /* send_job response for signal received  */\n\n/*\n * server initialization modes\n */\n#define RECOV_HOT 0\t /* restart prior running jobs   */\n#define RECOV_WARM 1\t /* requeue/reschedule  all jobs */\n#define RECOV_COLD 2\t /* discard all jobs\t\t*/\n#define RECOV_CREATE 3\t /* discard all info\t\t*/\n#define RECOV_UPDATEDB 4 /* migrate data from fs to database */\n#define RECOV_Invalid 5\n\n/*\n * for protecting the daemons from kernel killers\n */\nenum PBS_Daemon_Protect {\n\tPBS_DAEMON_PROTECT_OFF,\n\tPBS_DAEMON_PROTECT_ON\n};\nvoid daemon_protect(pid_t, enum PBS_Daemon_Protect);\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _SERVER_LIMITS_H */\n"
  },
  {
    "path": "src/include/site_job_attr_def.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * Place holder for site supplied additions to the array of\n * site job attribute definitions, see job_attr_def.c\n *\n * Array elements must be of the form:\n *\t    {\t\"name\",\n *\t\tdecode_Func,\n *\t\tencode_Func,\n *\t\tset_Func,\n *\t\tcomp_Func,\n *\t\tfree_Func,\n *\t\taction_routine,\n *\t\tpermissions,\n *\t\tATR_TYPE_*,\n *\t\tPARENT_TYPE_JOB\n *\t    },\n *\n * Matching entry must be added in site_job_attr_enum.h\n */\n"
  },
  {
    "path": "src/include/site_job_attr_enum.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * Place holder for site supplied additions to the job's enumerated\n * list of attributes,  see job.h.\n *\n * List should be of the form:\n *\tJOB_SITE_ATR_name,\n *\n * Matching entry must be added in site_job_attr_def.h\n */\n"
  },
  {
    "path": "src/include/site_qmgr_node_print.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * list of site defined queue attribute names which qmgr should\n * include in its \"print node\" output\n *\n * format is (do include the quote marks):\n *\t\"attribute_name\",\n */\n"
  },
  {
    "path": "src/include/site_qmgr_que_print.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * list of site defined queue attribute names which qmgr should\n * include in its \"print queue\" output\n *\n * format is (do include the quote marks):\n *\t\"attribute_name\",\n */\n"
  },
  {
    "path": "src/include/site_qmgr_sched_print.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * list of site defined scheduler attribute names which qmgr should\n * include in its \"print sched\" output\n *\n * format is (do include the quote marks):\n *\t\"attribute_name\",\n */\n"
  },
  {
    "path": "src/include/site_qmgr_svr_print.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * list of site defined server attribute names which qmgr should\n * include in its \"print server\" output\n *\n * format is (do include the quote marks):\n *\t\"attribute_name\",\n */\n"
  },
  {
    "path": "src/include/site_que_attr_def.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * Place holder for site supplied additions to the array of\n * site queue attribute definitions, see server/queue_attr_def.c\n *\n * Array elements must be of the form:\n *\t    {\t\"name\",\n *\t\tdecode_Func,\n *\t\tencode_Func,\n *\t\tset_Func,\n *\t\tcomp_Func,\n *\t\tfree_Func,\n *\t\taction_routine,\n *\t\tpermissions,\n *\t\tATR_TYPE_*,\n *\t\tPARENT_TYPE_QUE_[ALL|EXC|RTE]\n *\t    },\n *\n * Matching entry must be added in site_que_attr_enum.h\n */\n#ifdef NAS\n/* localmod 046 */\n{ATTR_maxstarve,\n decode_time,\n encode_time,\n set_l,\n comp_l,\n free_null,\n NULL_FUNC,\n NO_USER_SET,\n ATR_TYPE_LONG,\n PARENT_TYPE_QUE_EXC},\n\t/* localmod 034 */\n\t{ATTR_maxborrow,\n\t decode_time,\n\t encode_time,\n\t set_l,\n\t comp_l,\n\t free_null,\n\t NULL_FUNC,\n\t NO_USER_SET,\n\t ATR_TYPE_LONG,\n\t PARENT_TYPE_QUE_EXC},\n#endif\n"
  },
  {
    "path": "src/include/site_que_attr_enum.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * Place holder for site supplied additions to the queue's enumerated\n * list of attributes,  see queue.h.\n *\n * List should be of the form:\n *\tQ_SITE_ATR_name,\n *\n * Matching entry must be added in site_que_attr_def.h\n */\n#ifdef NAS\n/* localmod 046 */\nQ_SITE_ATR_maxstarve,\n\t/* localmod 034 */\n\tQ_SITE_ATR_maxborrow,\n#endif\n"
  },
  {
    "path": "src/include/site_queue.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * site_queue.h - Site additions to queue information\n */\n\n#ifdef NAS\n\n/*\n * Magic cookie value for max_starve indicating jobs in queue never starve\n */\n#define Q_SITE_STARVE_NEVER (9999 * 3600)\n\n#endif\n"
  },
  {
    "path": "src/include/site_resc_attr_def.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * Place holder for site supplied additions to the array of\n * resource definitions, see server/resc_def_all.c\n *\n * Array elements must be of the form:\n *\t    {\t\"name\",\n *\t\tdecode_Func,\n *\t\tencode_Func,\n *\t\tset_Func,\n *\t\tcomp_Func,\n *\t\tfree_Func,\n *\t\taction_routine,\n *\t\tpermissions,\n *\t\tATR_TYPE_*\n *\t    },\n */\n"
  },
  {
    "path": "src/include/site_resv_attr_def.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * Place holder for site supplied additions to the array of\n * site resc_resv attribute definitions, see server/resv_attr_def.c\n *\n * Array elements must be of the form:\n *\t    {\t\"name\",\n *\t\tdecode_Func,\n *\t\tencode_Func,\n *\t\tset_Func,\n *\t\tcomp_Func,\n *\t\tfree_Func,\n *\t\taction_routine,\n *\t\tpermissions,\n *\t\tATR_TYPE_*,\n *\t\tPARENT_TYPE_RESV\n *\t    },\n *\n * Matching entry must be added in site_resv_attr_enum.h\n */\n"
  },
  {
    "path": "src/include/site_resv_attr_enum.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * Place holder for site supplied additions to the reservation's enumerated\n * list of attributes,  see reservation.h.\n *\n * List should be of the form:\n *\tRESV_SITE_ATR_name,\n *\n * Matching entry must be added in site_resv_attr_def.h\n */\n"
  },
  {
    "path": "src/include/site_sched_attr_def.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * Place holder for site supplied additions to the array of site\n * scheduler attribute definitions, see server/svr_attr_def.c\n *\n * Array elements must be of the form:\n *\t    {\t\"name\",\n *\t\tdecode_Func,\n *\t\tencode_Func,\n *\t\tset_Func,\n *\t\tcomp_Func,\n *\t\tfree_Func,\n *\t\taction_routine,\n *\t\tpermissions,\n *\t\tATR_TYPE_*,\n *\t\tPARENT_TYPE_SERVER\n *\t    },\n *\n * Matching entry must be added in site_sched_attr_enum.h\n */\n"
  },
  {
    "path": "src/include/site_sched_attr_enum.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * Place holder for site supplied additions to the scheduler's enumerated\n * list of attributes.\n *\n * List should be of the form:\n *\tSCHED_SITE_ATR_name,\n *\n * Matching entry must be added in site_sched_attr_def.h\n */\n"
  },
  {
    "path": "src/include/site_svr_attr_def.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * Place holder for site supplied additions to the array of site\n * server attribute definitions, see server/svr_attr_def.c\n *\n * Array elements must be of the form:\n *\t    {\t\"name\",\n *\t\tdecode_Func,\n *\t\tencode_Func,\n *\t\tset_Func,\n *\t\tcomp_Func,\n *\t\tfree_Func,\n *\t\taction_routine,\n *\t\tpermissions,\n *\t\tATR_TYPE_*,\n *\t\tPARENT_TYPE_SERVER\n *\t    },\n *\n * Matching entry must be added in site_sv_attr_enum.h\n */\n"
  },
  {
    "path": "src/include/site_svr_attr_enum.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * Place holder for site supplied additions to the server's enumerated\n * list of attributes,  see server.h.\n *\n * List should be of the form:\n *\tSVR_SITE_ATR_name,\n *\n * Matching entry must be added in site_sv_attr_def.h\n */\n"
  },
  {
    "path": "src/include/svrfunc.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _SVRFUNC_H\n#define _SVRFUNC_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n/*\n * misc server function prototypes\n */\n\n#include \"net_connect.h\"\n#include \"pbs_db.h\"\n#include \"reservation.h\"\n#include \"resource.h\"\n#include \"pbs_sched.h\"\n#include \"pbs_entlim.h\"\n\nextern int check_num_cpus(void);\nextern int chk_hold_priv(long, int);\nextern void close_client(int);\nextern void scheduler_close(int);\nextern int send_sched_cmd(pbs_sched *, int, char *);\nextern void count_node_cpus(void);\nextern int ctcpus(char *, int *);\nextern void cvrt_fqn_to_name(char *, char *);\nextern int failover_send_shutdown(int);\nextern char *get_hostPart(char *);\nextern int is_compose(int, int);\nextern int is_compose_cmd(int, int, char **);\nextern char *get_servername(unsigned int *);\nextern void process_Areply(int);\nextern void process_Dreply(int);\nextern void process_DreplyTPP(int);\nextern void process_request(int);\nextern void process_dis_request(int);\nextern int save_flush(void);\nextern void save_setup(int);\nextern int save_struct(char *, unsigned int);\nextern int schedule_jobs(pbs_sched *);\nextern int schedule_high(pbs_sched *);\nextern void shutdown_nodes(void);\nextern char *site_map_user(char *, char *);\nextern char *site_map_resvuser(char *, char *);\nextern void svr_disconnect(int);\nextern void 
svr_disconnect_with_wait_option(int, int);\nextern int svr_connect(pbs_net_t, unsigned int, void (*)(int), enum conn_type, int);\nextern void svr_force_disconnect(int);\nextern void svr_shutdown(int);\nextern int svr_get_privilege(char *, char *);\nextern int setup_nodes(void);\nextern int setup_resc(int);\nextern void update_job_node_rassn(job *, attribute *, enum batch_op);\nextern void mark_node_down(char *, char *);\nextern void mark_node_offline_by_mom(char *, char *);\nextern void clear_node_offline_by_mom(char *, char *);\nextern void mark_which_queues_have_nodes(void);\n#ifndef DEBUG\nextern void pbs_close_stdfiles(void);\n#endif\nextern int is_job_array(char *);\nextern int get_queued_subjobs_ct(job *);\nextern int parse_subjob_index(char *, char **, int *, int *, int *, int *);\nextern int expand_resc_array(char *, int, int);\nextern void resv_timer_init(void);\nextern int validate_nodespec(char *);\nextern long longto_kbsize(char *);\nextern int is_vnode_up(char *);\nextern char *convert_long_to_time(long);\nextern int svr_chk_history_conf(void);\nextern int update_svrlive(void);\nextern void init_socket_licenses(char *);\nextern void update_job_finish_comment(job *, int, char *);\nextern void svr_saveorpurge_finjobhist(job *);\nextern int recreate_exec_vnode(job *, char *, char *, char *, int);\nextern void unset_extra_attributes(job *);\nextern int node_delete_db(struct pbsnode *);\nextern int pbsd_init(int);\nextern int svr_chk_histjob(job *);\nextern int chk_and_update_db_svrhost(void);\nextern int apply_aoe_inchunk_rules(resource *, attribute *, void *, int);\nextern int apply_select_inchunk_rules(resource *, attribute *, void *, int, int);\nextern int svr_create_tmp_jobscript(job *, char *);\nextern void unset_jobscript_max_size(void);\nextern char *svr_load_jobscript(job *);\nextern int direct_write_requested(job *);\nextern void spool_filename(job *, char *, char *);\nextern enum failover_state are_we_primary(void);\nextern void 
license_more_nodes(void);\nextern void reset_svr_sequence_window(void);\nextern void reply_preempt_jobs_request(int, int, struct job *);\nextern int copy_params_from_job(char *, resc_resv *);\nextern int confirm_resv_locally(resc_resv *, struct batch_request *, char *);\nextern int set_select_and_place(int, void *, attribute *);\nextern int make_schedselect(attribute *, resource *, pbs_queue *, attribute *);\nextern long long get_next_svr_sequence_id(void);\nextern int compare_obj_hash(void *, int, void *);\nextern void panic_stop_db();\nextern void free_db_attr_list(pbs_db_attr_list_t *);\nextern bool delete_pending_arrayjobs(struct batch_request *);\n\n#ifdef _PROVISION_H\nextern int find_prov_vnode_list(job *, exec_vnode_listtype *, char **);\n#endif /* _PROVISION_H */\n\nextern void *jobs_idx;\n\n#ifdef _RESERVATION_H\nextern int set_nodes(void *, int, char *, char **, char **, char **, int, int);\n#endif /* _RESERVATION_H */\n\n#ifdef _PBS_NODES_H\nextern void tinsert2(const u_long, const u_long, mominfo_t *, struct tree **);\nextern void *tdelete2(const u_long, const u_long, struct tree **);\nextern void tfree2(struct tree **);\n#ifdef _RESOURCE_H\nextern int fix_indirect_resc_targets(struct pbsnode *, resource *, int, int);\n#endif /* _RESOURCE_H */\n#endif /* _PBS_NODES_H */\n\n#ifdef _PBS_JOB_H\nextern int assign_hosts(job *, char *, int);\nextern void clear_exec_on_run_fail(job *);\nextern void discard_job(job *, char *, int);\nextern void post_rerun(struct work_task *);\nextern void force_reque(job *);\nextern void set_resc_assigned(void *, int, enum batch_op);\nextern int is_ts_node(char *);\nextern char *cnv_eh(job *);\nextern char *find_ts_node(void);\nextern void job_purge(job *);\nextern void check_block(job *, char *);\nextern void free_nodes(job *);\nextern int job_route(job *);\nextern void rel_resc(job *);\nextern int remove_stagein(job *);\nextern size_t check_for_cred(job *, char **);\nextern void svr_mailowner(job *, int, int, char 
*);\nextern void svr_mailowner_id(char *, job *, int, int, char *);\nextern char *lastname(char *);\nextern void chk_array_doneness(job *);\nextern job *create_subjob(job *, char *, int *);\nextern job *find_arrayparent(char *);\nextern job *get_subjob_and_state(job *, int, char *, int *);\nextern void update_sj_parent(job *, job *, char *, char, char);\nextern void update_subjob_state_ct(job *);\nextern char *subst_array_index(job *, char *);\n#ifndef PBS_MOM\nextern void svr_setjob_histinfo(job *, histjob_type);\nextern void svr_histjob_update(job *, char, int);\nextern char *form_attr_comment(const char *, const char *);\nextern void complete_running(job *);\nextern void am_jobs_add(job *);\nextern int was_job_alteredmoved(job *);\nextern void check_failed_attempts(job *);\n#endif\n#ifdef _QUEUE_H\nextern int check_entity_ct_limit_max(job *, pbs_queue *);\nextern int check_entity_ct_limit_queued(job *, pbs_queue *);\nextern int check_entity_resc_limit_max(job *, pbs_queue *, attribute *);\nextern int check_entity_resc_limit_queued(job *, pbs_queue *, attribute *);\nextern int set_entity_ct_sum_max(job *, pbs_queue *, enum batch_op);\nextern int set_entity_ct_sum_queued(job *, pbs_queue *, enum batch_op);\nextern int set_entity_resc_sum_max(job *, pbs_queue *, attribute *, enum batch_op);\nextern int set_entity_resc_sum_queued(job *, pbs_queue *, attribute *, enum batch_op);\nextern int account_entity_limit_usages(job *, pbs_queue *, attribute *, enum batch_op, int);\nextern void eval_chkpnt(job *pjob, attribute *queckp);\n#endif /* _QUEUE_H */\n\n#ifdef _BATCH_REQUEST_H\nextern int svr_startjob(job *, struct batch_request *);\nextern int svr_authorize_jobreq(struct batch_request *, job *);\nextern int dup_br_for_subjob(struct batch_request *, job *, int (*)(struct batch_request *, job *));\nextern void set_old_nodes(job *);\nextern int send_job_exec_update_to_mom(job *, char *, int, struct batch_request *);\nextern int free_sister_vnodes(job *, char *, char *, 
char *, int, struct batch_request *);\n#ifdef _WORK_TASK_H\nextern int send_job(job *, pbs_net_t, int, int, void (*)(struct work_task *), struct batch_request *);\nextern int relay_to_mom(job *, struct batch_request *, void (*)(struct work_task *));\nextern int relay_to_mom2(job *, struct batch_request *, void (*)(struct work_task *), struct work_task **);\nextern int recreate_exec_vnode(job *, char *, char *, char *, int);\nextern void indirect_target_check(struct work_task *);\nextern void primary_handshake(struct work_task *);\nextern void secondary_handshake(struct work_task *);\n#endif /* _WORK_TASK_H */\n#endif /* _BATCH_REQUEST_H */\n#ifdef _TICKET_H\nextern int write_cred(job *, char *, size_t);\nextern int read_cred(job *, char **, size_t *);\nextern int get_credential(char *, job *, int, char **, size_t *);\n#endif /* _TICKET_H */\nextern int local_move(job *, struct batch_request *);\nextern int user_read_password(char *, char **, size_t *);\n\n#endif /* _PBS_JOB_H */\n\n#ifdef _BATCH_REQUEST_H\nextern void req_quejob(struct batch_request *);\nextern void req_jobcredential(struct batch_request *);\nextern void req_usercredential(struct batch_request *);\nextern void req_jobscript(struct batch_request *);\nextern void req_commit(struct batch_request *);\nextern void req_commit_now(struct batch_request *, job *);\nextern void req_deletejob(struct batch_request *);\nextern void req_holdjob(struct batch_request *);\nextern void req_messagejob(struct batch_request *);\nextern void req_py_spawn(struct batch_request *);\nextern void req_relnodesjob(struct batch_request *);\nextern void req_modifyjob(struct batch_request *);\nextern void req_modifyReservation(struct batch_request *);\nextern void req_orderjob(struct batch_request *);\nextern void req_rescreserve(struct batch_request 
*);\nextern void req_rescfree(struct batch_request *);\nextern void req_shutdown(struct batch_request *);\nextern void req_signaljob(struct batch_request *);\nextern void req_mvjobfile(struct batch_request *);\nextern void req_stat_node(struct batch_request *);\nextern void req_track(struct batch_request *);\nextern void req_stagein(struct batch_request *);\nextern void req_resvSub(struct batch_request *);\nextern void req_deleteReservation(struct batch_request *);\nextern void req_reservationOccurrenceEnd(struct batch_request *);\nextern void req_failover(struct batch_request *);\nextern int put_failover(int, struct batch_request *);\nextern void set_last_used_time_node(void *, int);\n\n#endif /* _BATCH_REQUEST_H */\n\n#ifdef _ATTRIBUTE_H\nextern int check_que_enable(attribute *, void *, int);\nextern int set_queue_type(attribute *, void *, int);\nextern int chk_characteristic(struct pbsnode *pnode, int *pneed_todo);\nextern int is_valid_str_resource(attribute *pattr, void *pobject, int actmode);\nextern int setup_arrayjob_attrs(attribute *, void *, int);\nextern int deflt_chunk_action(attribute *pattr, void *pobj, int mode);\nextern int action_svr_iteration(attribute *pattr, void *pobj, int mode);\nextern void update_node_rassn(attribute *, enum batch_op);\nextern void update_job_node_rassn(job *, attribute *, enum batch_op);\nextern int cvt_nodespec_to_select(char *, char **, size_t *, attribute *);\nextern int is_valid_resource(attribute *pattr, void *pobject, int actmode);\nextern int queuestart_action(attribute *pattr, void *pobject, int actmode);\nextern int alter_eligibletime(attribute *pattr, void *pobject, int actmode);\nextern int set_chunk_sum(attribute *pselectattr, attribute *pattr);\nextern int update_resources_rel(job *, attribute *, enum batch_op);\nextern int keepfiles_action(attribute *pattr, void *pobject, int actmode);\nextern int removefiles_action(attribute *pattr, void *pobject, int actmode);\n\n/* Functions below exposed as they are now 
accessed by the Python hooks */\nextern void update_state_ct(attribute *, int *, attribute_def *attr_def);\nextern void update_license_ct();\n\n#ifdef _PBS_JOB_H\nextern int job_set_wait(attribute *, void *, int);\n#endif /* _PBS_JOB_H */\n#ifdef _QUEUE_H\nextern int chk_resc_limits(attribute *, pbs_queue *);\nextern int set_resc_deflt(void *, int, pbs_queue *);\nextern void queue_route(pbs_queue *);\nextern int que_purge(pbs_queue *);\n#endif /* _QUEUE_H */\n#endif /* _ATTRIBUTE_H */\n\n#ifdef PBS_MOM\nextern void addrinsert(const unsigned long);\nextern int addrfind(const unsigned long);\n#endif /* PBS_MOM */\n\n#ifdef PBS_NET_H\nextern int svr_connect(pbs_net_t, unsigned int, void (*)(int), enum conn_type, int);\n#endif /* PBS_NET_H */\n#ifdef _WORK_TASK_H\nextern void release_req(struct work_task *);\n#ifdef _BATCH_REQUEST_H\nextern int issue_Drequest(int, struct batch_request *, void (*)(struct work_task *), struct work_task **, int);\n#endif /* _BATCH_REQUEST_H */\n#endif /* _WORK_TASK_H */\n\n#ifdef _RESERVATION_H\nextern void is_resv_window_in_future(resc_resv *);\nextern void resv_setResvState(resc_resv *, int, int);\nextern int gen_task_EndResvWindow(resc_resv *);\nextern int gen_future_deleteResv(resc_resv *, long);\nextern int gen_deleteResv(resc_resv *, long);\nextern int node_avail(spec_and_context *, int *, int *, int *, int *);\nextern int node_avail_complex(spec_and_context *, int *, int *, int *, int *);\nextern int node_reserve(spec_and_context *, pbs_resource_t);\nextern void node_unreserve(pbs_resource_t);\nextern int node_spec(struct spec_and_context *, int);\nextern int notify_scheds_about_resv(int, resc_resv *);\nextern char *create_resv_destination(resc_resv *presv);\n#endif /* _RESERVATION_H */\n\n#ifdef _LIST_LINK_H\n/*\n * This structure is used to hold information for a runjob batch request\n * from a client (that is not the Scheduler) which is being forwarded to\n * the Scheduler for 
consideration.   Since the Scheduler will make many\n * requests to the Server before replying to this request, the normal\n * request/reply mechanism breaks down.\n *\n * The request currently may be in the following states:\n *\tPending - waiting for the next scheduling cycle\n *\tSent    - sent to the Scheduler\n * When the Scheduler deals with the request, it will use the Deferred\n * Scheduler Reply request;  the Server will look in the list for one with\n * a matching Job ID and on finding it, reply to the original runjob request\n * and remove the structure from the list.\n */\nstruct deferred_request {\n\tpbs_list_link dr_link;\n\tchar dr_id[PBS_MAXSVRJOBID + 1];\n\tstruct batch_request *dr_preq;\n\tint dr_sent; /* sent to Scheduler */\n};\n\nstruct sched_deferred_request {\n\tpbs_list_link sdr_link;\n\tpbs_list_head sdr_deferred_req; /* list of deferred requests of the scheduler */\n\tpbs_sched *sdr_psched; /* Scheduler */\n};\n\n#endif /* _LIST_LINK_H */\n\n/*\n * The following is used in req_stat.c and req_select.c\n * Also defined in status_job.c\n */\n#ifdef STAT_CNTL\nstruct select_list {\n\tstruct select_list *sl_next; /* ptr to next in list   */\n\tenum batch_op sl_op;\t     /* comparison operator   */\n\tattribute_def *sl_def;\t     /* ptr to attr definition, for at_comp */\n\tint sl_atindx;\t\t     /* index into attribute_def, for type */\n\tattribute sl_attr;\t     /* the attribute (value) */\n};\n\n/* used in req_stat_job */\nstruct stat_cntl {\n\tint sc_XXXX;\n\tint sc_type;\n\tint sc_XXXY;\n\tint sc_conn;\n\tpbs_queue *sc_pque;\n\tstruct batch_request *sc_origrq;\n\tstruct select_list *sc_select;\n\tvoid (*sc_post)(struct stat_cntl *);\n\tchar sc_jobid[PBS_MAXSVRJOBID + 1];\n};\n\nextern int status_job(job *, struct batch_request *, svrattrl *, pbs_list_head *, int *, int);\nextern int status_subjob(job *, struct batch_request *, svrattrl *, int, pbs_list_head *, int *, int);\nextern int stat_to_mom(job *, struct stat_cntl *);\n\n#endif /* 
STAT_CNTL */\n#ifdef __cplusplus\n}\n#endif\n#endif /* _SVRFUNC_H */\n"
  },
  {
    "path": "src/include/ticket.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _TICKET_H\n#define _TICKET_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n/*\n * ticket.h - header file for dealing with security systems such as kerberos.\n */\n\n#include <sys/types.h>\n\n#define PBS_CREDVER 1\n#define PBS_CREDTYPE_NONE 0\n#define PBS_CREDTYPE_GRIDPROXY 2 /* Deprecated */\n#define PBS_CREDTYPE_AES 3\n\n#define PBS_GC_BATREQ 100\n#define PBS_GC_CPYFILE 101\n#define PBS_GC_EXEC 102\n\n#define PBS_CREDNAME_AES \"aes\"\n\nextern int encode_to_base64(const unsigned char *buffer, size_t buffer_len, char **ret_encoded_data);\nextern int decode_from_base64(char *buffer, unsigned char **ret_decoded_data, size_t *ret_decoded_len);\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _TICKET_H */\n"
  },
  {
    "path": "src/include/tm.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n **\tHeader file defining the datatypes and library-visible\n **\tvariables for parallel awareness.\n */\n\n#ifndef _TM_H\n#define _TM_H\n\n#include \"tm_.h\"\n\n/*\n **\tThe tm_roots structure contains data for the last\n **\ttm_init call whose event has been polled.  <tm_me> is the\n **\tcaller's identity.  <tm_parent> is the identity of the task that\n **\tspawned the caller.  If <tm_parent> is TM_NULL_TASK, the caller\n **\tis the initial task of the job, running on job-relative\n **\tnode 0.\n */\nstruct tm_roots {\n\ttm_task_id tm_me;\n\ttm_task_id tm_parent;\n\tint tm_nnodes;\n\tint tm_ntasks;\n\tint tm_taskpoolid;\n\ttm_task_id *tm_tasklist;\n};\n\n/*\n **\tThe tm_whattodo structure contains data for the last\n **\ttm_register event polled.  
This is not implemented yet.\n */\ntypedef struct tm_whattodo {\n\tint tm_todo;\n\ttm_task_id tm_what;\n\ttm_node_id tm_where;\n} tm_whattodo_t;\n\n/*\n **\tPrototypes for all the TM API calls.\n */\nint\ntm_init(void *info,\n\tstruct tm_roots *roots);\n\nint\ntm_poll(tm_event_t poll_event,\n\ttm_event_t *result_event,\n\tint wait,\n\tint *tm_errno);\n\nint tm_notify(int tm_signal);\n\nint\ntm_spawn(int argc,\n\t char *argv[],\n\t char *envp[],\n\t tm_node_id where,\n\t tm_task_id *tid,\n\t tm_event_t *event);\n\nint\ntm_kill(tm_task_id tid,\n\tint sig,\n\ttm_event_t *event);\n\nint\ntm_obit(tm_task_id tid,\n\tint *obitval,\n\ttm_event_t *event);\n\nint\ntm_nodeinfo(tm_node_id **list,\n\t    int *nnodes);\n\nint\ntm_taskinfo(tm_node_id node,\n\t    tm_task_id *list,\n\t    int lsize,\n\t    int *ntasks,\n\t    tm_event_t *event);\n\nint\ntm_atnode(tm_task_id tid,\n\t  tm_node_id *node);\n\nint\ntm_rescinfo(tm_node_id node,\n\t    char *resource,\n\t    int len,\n\t    tm_event_t *event);\n\nint\ntm_publish(char *name,\n\t   void *info,\n\t   int nbytes,\n\t   tm_event_t *event);\n\nint\ntm_subscribe(tm_task_id tid,\n\t     char *name,\n\t     void *info,\n\t     int len,\n\t     int *amount,\n\t     tm_event_t *event);\n\nint tm_finalize(void);\n\nint\ntm_alloc(char *resources,\n\t tm_event_t *event);\n\nint\ntm_dealloc(tm_node_id node,\n\t   tm_event_t *event);\n\nint tm_create_event(tm_event_t *event);\n\nint tm_destroy_event(tm_event_t *event);\n\nint\ntm_register(tm_whattodo_t *what,\n\t    tm_event_t *event);\n\nint\ntm_attach(char *jobid,\n\t  char *cookie,\n\t  pid_t pid,\n\t  tm_task_id *tid,\n\t  char *host,\n\t  int port);\n\n#endif /* _TM_H */\n"
  },
  {
    "path": "src/include/tm_.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n **\tHeader file defining the datatypes and library-visible\n **\tvariables for parallel awareness.\n */\n\n#ifndef _TM__H\n#define _TM__H\n\n#include <sys/types.h>\n\ntypedef int tm_host_id; /* physical node index  */\ntypedef int tm_node_id; /* job-relative node id */\n#define TM_ERROR_NODE ((tm_node_id) -1)\n\ntypedef int tm_event_t; /* event handle, > 0 for real events */\n#define TM_NULL_EVENT ((tm_event_t) 0)\n#define TM_ERROR_EVENT ((tm_event_t) -1)\n\ntypedef unsigned int tm_task_id;\n#define TM_NULL_TASK (tm_task_id) 0\n#define TM_INIT_TASK (tm_task_id) 1\n\n/*\n **\tProtocol message type defines\n */\n#define TM_INIT 100\t /* tm_init request\t*/\n#define TM_TASKS 101\t /* tm_taskinfo request\t*/\n#define TM_SPAWN 102\t /* tm_spawn request\t*/\n#define TM_SIGNAL 103\t /* tm_signal request\t*/\n#define TM_OBIT 104\t /* tm_obit request\t*/\n#define TM_RESOURCES 105 /* tm_rescinfo request\t*/\n#define TM_POSTINFO 106\t /* tm_publish request\t*/\n#define TM_GETINFO 107\t /* tm_subscribe request\t*/\n#define TM_GETTID 108\t /* tm_gettasks request */\n#define TM_REGISTER 109\t /* tm_register request\t*/\n#define TM_RECONFIG 110\t /* tm_register deferred reply */\n#define TM_ACK 111\t /* tm_register event acknowledge */\n#define TM_FINALIZE 112\t /* tm_finalize request, there is no reply */\n#define TM_ATTACH 113\t /* tm_attach 
request */\n#define TM_OKAY 0\n\n#define TM_ERROR 999\n\n/*\n **\tError numbers returned from library\n */\n#define TM_SUCCESS 0\n#define TM_ESYSTEM 17000\n#define TM_ENOEVENT 17001\n#define TM_ENOTCONNECTED 17002\n#define TM_EUNKNOWNCMD 17003\n#define TM_ENOTIMPLEMENTED 17004\n#define TM_EBADENVIRONMENT 17005\n#define TM_ENOTFOUND 17006\n#define TM_BADINIT 17007\n#define TM_ESESSION 17008\n#define TM_EUSER 17009\n#define TM_EOWNER 17010\n#define TM_ENOPROC 17011\n#define TM_EHOOK 17012\n\n#define TM_TODO_NOP 5000  /* Do nothing (the nodes value may be new) */\n#define TM_TODO_CKPT 5001 /* Checkpoint <what> and continue it */\n#define TM_TODO_MOVE 5002 /* Move <what> to <where> */\n#define TM_TODO_QUIT 5003 /* Terminate <what> */\n#define TM_TODO_STOP 5004 /* Suspend execution of <what> */\n\n#endif /* _TM__H */\n"
  },
  {
    "path": "src/include/tpp.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef __TPP_H\n#define __TPP_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include <pbs_config.h>\n#include <errno.h>\n#include \"pbs_internal.h\"\n#include \"auth.h\"\n\n#if defined(PBS_HAVE_DEVPOLL)\n#define PBS_USE_DEVPOLL\n#elif defined(PBS_HAVE_EPOLL)\n#define PBS_USE_EPOLL\n#elif defined(PBS_HAVE_POLLSET)\n#define PBS_USE_POLLSET\n#elif defined(HAVE_POLL)\n#define PBS_USE_POLL\n#elif defined(HAVE_SELECT)\n#define PBS_USE_SELECT\n#endif\n\n#if defined(PBS_USE_EPOLL)\n\n#include <sys/epoll.h>\n\n#elif defined(PBS_USE_POLL)\n\n#include <poll.h>\n\n#elif defined(PBS_USE_SELECT)\n\n#if defined(FD_SET_IN_SYS_SELECT_H)\n#include <sys/select.h>\n#endif\n\n#elif defined(PBS_USE_DEVPOLL)\n\n#include <sys/devpoll.h>\n\n#elif defined(PBS_USE_POLLSET)\n\n#include <sys/poll.h>\n#include <sys/pollset.h>\n#include <fcntl.h>\n\n#endif\n\n/*\n * Default number of RPP packets to check every server iteration\n */\n#define RPP_MAX_PKT_CHECK_DEFAULT 64\n\n/* TPP specific definitions and structures */\n#define TPP_DEF_ROUTER_PORT 17001\n#define TPP_MAXOPENFD 8192 /* limit for pbs_comm max open files */\n\n/* tpp node types, leaf and router */\n#define TPP_LEAF_NODE 1\t       /* leaf node that does not care about TPP_CTL_LEAVE messages from other leaves */\n#define TPP_LEAF_NODE_LISTEN 2 /* leaf node that wants to be notified of TPP_CTL_LEAVE messages 
from other leaves */\n#define TPP_ROUTER_NODE 3      /* router */\n#define TPP_AUTH_NODE 4\t       /* authenticated, but yet unknown node type till a join happens */\n\nextern int tpp_fd;\nstruct tpp_config {\n\tint node_type;\t/* leaf, proxy */\n\tchar **routers; /* other proxy names (and backups) to connect to */\n\tint numthreads;\n\tchar *node_name; /* list of comma separated node names */\n\tint compress;\n\tint tcp_keepalive; /* use keepalive? */\n\tint tcp_keep_idle;\n\tint tcp_keep_intvl;\n\tint tcp_keep_probes;\n\tint tcp_user_timeout;\n\tint buf_limit_per_conn; /* buffer limit per physical connection */\n\tpbs_auth_config_t *auth_config;\n\tchar **supported_auth_methods;\n};\n\n/* TPP specific functions */\nextern int tpp_init(struct tpp_config *);\nextern void tpp_set_app_net_handler(void (*app_net_down_handler)(void *), void (*app_net_restore_handler)(void *));\nextern void tpp_set_logmask(long);\nextern int set_tpp_config(struct pbs_config *, struct tpp_config *, char *, int, char *);\nextern void free_tpp_config(struct tpp_config *);\nextern void DIS_tpp_funcs();\nextern int tpp_open(char *, unsigned int);\nextern int tpp_close(int);\nextern int tpp_eom(int);\nextern int tpp_bind(unsigned int);\nextern int tpp_poll(void);\nextern void tpp_terminate(void);\nextern void tpp_shutdown(void);\nextern struct sockaddr_in *tpp_getaddr(int);\nextern void tpp_add_close_func(int, void (*func)(int));\nextern char *tpp_parse_hostname(char *, int *);\nextern int tpp_init_router(struct tpp_config *);\nextern void tpp_router_shutdown(void);\n\n/* special tpp only multicast function prototypes */\nextern int tpp_mcast_open(void);\nextern int tpp_mcast_add_strm(int, int, bool);\nextern int *tpp_mcast_members(int, int *);\nextern int tpp_mcast_send(int, void *, unsigned int, unsigned int);\nextern int tpp_mcast_close(int);\n\n/**********************************************************************/\n/* em related definitions (external version) 
*/\n/**********************************************************************/\n#if defined(PBS_USE_POLL)\n\ntypedef struct {\n\tint fd;\n\tint events;\n} em_event_t;\n\n#define EM_GET_FD(ev, i) ev[i].fd\n#define EM_GET_EVENT(ev, i) ev[i].events\n\n#define EM_IN POLLIN\n#define EM_OUT POLLOUT\n#define EM_HUP POLLHUP\n#define EM_ERR POLLERR\n\n#elif defined(PBS_USE_EPOLL)\n\ntypedef struct epoll_event em_event_t;\n\n#define EM_GET_FD(ev, i) ev[i].data.fd\n#define EM_GET_EVENT(ev, i) ev[i].events\n\n#define EM_IN EPOLLIN\n#define EM_OUT EPOLLOUT\n#define EM_HUP EPOLLHUP\n#define EM_ERR EPOLLERR\n\n#elif defined(PBS_USE_POLLSET)\n\ntypedef struct pollfd em_event_t;\n\n#define EM_GET_FD(ev, i) ev[i].fd\n#define EM_GET_EVENT(ev, i) ev[i].revents\n\n#define EM_IN POLLIN\n#define EM_OUT POLLOUT\n#define EM_HUP POLLHUP\n#define EM_ERR POLLERR\n\n#elif defined(PBS_USE_SELECT)\n\ntypedef struct {\n\tint fd;\n\tint events;\n} em_event_t;\n\n#define EM_GET_FD(ev, i) ev[i].fd\n#define EM_GET_EVENT(ev, i) ev[i].events\n\n#define EM_IN 0x001\n#define EM_OUT 0x002\n#define EM_HUP 0x004\n#define EM_ERR 0x008\n\n#elif defined(PBS_USE_DEVPOLL)\n\ntypedef struct pollfd em_event_t;\n\n#define EM_GET_FD(ev, i) ev[i].fd\n#define EM_GET_EVENT(ev, i) ev[i].revents\n\n#define EM_IN POLLIN\n#define EM_OUT POLLOUT\n#define EM_HUP POLLHUP\n#define EM_ERR POLLERR\n\n#endif\n\n/* platform independent functions that handle the underlying platform specific event\n * handling mechanism. 
Internally it could use epoll, poll, select, etc., depending on the\n * platform.\n */\nvoid *tpp_em_init(int);\nvoid tpp_em_destroy(void *);\nint tpp_em_add_fd(void *, int, int);\nint tpp_em_mod_fd(void *, int, int);\nint tpp_em_del_fd(void *, int);\nint tpp_em_wait(void *, em_event_t **, int);\n#ifndef WIN32\nint tpp_em_pwait(void *, em_event_t **, int, const sigset_t *);\n#else\nint tpp_em_wait_win(void *, em_event_t **, int);\n#endif\n\nextern char *get_all_ips(char *, char *, size_t);\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* __TPP_H */\n"
  },
  {
    "path": "src/include/tracking.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * tracking.h - header file for maintaining job tracking records\n *\n * These are linked into the server structure.  
Entries are added or\n * updated upon the receipt of Track Job Requests and are used to\n * satisfy Locate Job requests.\n *\n * The main data is kept in the form of the track batch request so\n * that copying is easy.\n *\n * Other required header files:\n *\t\"server_limits.h\"\n */\n\n#define PBS_TRACK_MINSIZE 100 /* minimum size of buffer in records */\n#define PBS_SAVE_TRACK_TM 300 /* time interval between saves of track data */\n\nstruct tracking {\n\ttime_t tk_mtime; /* time this entry modified */\n\tint tk_hopcount;\n\tchar tk_jobid[PBS_MAXSVRJOBID + 1];\n\tchar tk_location[PBS_MAXDEST + 1];\n\tchar tk_state;\n};\n\nextern void track_save(struct work_task *);\n"
  },
  {
    "path": "src/include/user.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _USER_H\n#define _USER_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n/*\n * user.h - structure definitions for the user concept\n * Other required header files:\n *     \"list_link.h\"\n *     \"server_limits.h\"\n *     \"attribute.h\"\n *     \"credential.h\"\n *     \"batch_request.h\"\n *\n */\n\n#define USER_PASSWORD_SUFFIX \".CR\" /* per user/per server password */\n\nextern int user_write_password(char *user, char *cred, size_t len);\nextern int user_read_password(char *user, char **cred, size_t *len);\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _USER_H */\n"
  },
  {
    "path": "src/include/work_task.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _WORK_TASK_H\n#define _WORK_TASK_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n/*\n * Server Work Tasks\n *\n * This structure is used by the server to track deferred work tasks.\n *\n * This information need not be preserved.\n *\n * Other Required Header Files\n *\t\"list_link.h\"\n */\n\nenum work_type {\n\tWORK_Immed,\t     /* immediate action: see state */\n\tWORK_Interleave,     /* immediate action: but allow other work to interleave */\n\tWORK_Timed,\t     /* action at certain time */\n\tWORK_Deferred_Child, /* On Death of a Child */\n\tWORK_Deferred_Reply, /* On reply to an outgoing service request */\n\tWORK_Deferred_Local, /* On reply to a local service request */\n\tWORK_Deferred_Other, /* various other events */\n\n\tWORK_Deferred_Cmp, /* Never set directly, used to indicate that */\n\t/* a WORK_Deferred_Child is ready            */\n\tWORK_Deferred_cmd /* used by TPP for deferred\n\t                        * reply but without a preq attached\n\t                        */\n};\n\nenum wtask_delete_option {\n\tDELETE_ONE,\n\tDELETE_ALL\n};\n\nstruct work_task {\n\tpbs_list_link wt_linkevent;\t     /* link to event type work list */\n\tpbs_list_link wt_linkobj;\t     /* link to others of same object */\n\tpbs_list_link wt_linkobj2;\t     /* link to another set of similarity */\n\tlong wt_event;\t\t\t     /* event id: 
time, pid, socket, ... */\n\tchar *wt_event2;\t\t     /* if replies on the same handle, then additional distinction */\n\tenum work_type wt_type;\t\t     /* type of event */\n\tvoid (*wt_func)(struct work_task *); /* function to perform task */\n\tvoid *wt_parm1;\t\t\t     /* obj pointer for use by func */\n\tvoid *wt_parm2;\t\t\t     /* optional pointer for use by func */\n\tvoid *wt_parm3;\t\t\t     /* used to store reply for deferred cmds TPP */\n\tint wt_aux;\t\t\t     /* optional info: e.g. child status */\n\tint wt_aux2;\t\t\t     /* optional info 2: e.g. *real* child pid (windows), tpp msgid etc */\n};\n\nextern struct work_task *set_task(enum work_type, long event, void (*func)(struct work_task *), void *param);\nextern int convert_work_task(struct work_task *ptask, enum work_type);\nextern void clear_task(struct work_task *ptask);\nextern void dispatch_task(struct work_task *);\nextern void delete_task(struct work_task *);\nextern void delete_task_by_parm1_func(void *parm1, void (*func)(struct work_task *), enum wtask_delete_option option);\nextern int has_task_by_parm1(void *parm1);\nextern time_t default_next_task(void);\nextern struct work_task *find_work_task(enum work_type, void *, void *);\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _WORK_TASK_H */\n"
  },
  {
    "path": "src/lib/Libattr/Long_.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\nint Long_neg = 0;\n"
  },
  {
    "path": "src/lib/Libattr/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nlib_LIBRARIES = libattr.a\n\nlibattr_a_CPPFLAGS = \\\n\t-I$(top_srcdir)/src/include \\\n\t@KRB5_CFLAGS@\n\nlibattr_a_SOURCES = \\\n\tattr_atomic.c \\\n\tattr_fn_acl.c \\\n\tattr_fn_arst.c \\\n\tattr_fn_b.c \\\n\tattr_fn_c.c \\\n\tattr_fn_entlim.c \\\n\tattr_fn_f.c \\\n\tattr_fn_hold.c \\\n\tattr_fn_intr.c \\\n\tattr_fn_l.c \\\n\tattr_fn_ll.c \\\n\tattr_fn_resc.c \\\n\tattr_fn_size.c \\\n\tattr_fn_str.c \\\n\tattr_fn_time.c \\\n\tattr_fn_unkn.c \\\n\tattr_func.c \\\n\tattr_node_func.c \\\n\tattr_resc_func.c \\\n\tjob_attr_def.c \\\n\tLong_.c \\\n\tnode_attr_def.c \\\n\tqueue_attr_def.c \\\n\tresc_def_all.c \\\n\tresc_map.c \\\n\tresv_attr_def.c \\\n\tsched_attr_def.c \\\n\tstrToL.c \\\n\tstrTouL.c \\\n\tsvr_attr_def.c \\\n\tuLTostr.c\n\nEXTRA_DIST = \\\n\tmaster_job_attr_def.xml \\\n\tmaster_node_attr_def.xml \\\n\tmaster_queue_attr_def.xml \\\n\tmaster_resc_def_all.xml \\\n\tmaster_resv_attr_def.xml \\\n\tmaster_sched_attr_def.xml \\\n\tmaster_svr_attr_def.xml\n\nCLEANFILES = \\\n\tjob_attr_def.c \\\n\tnode_attr_def.c \\\n\tqueue_attr_def.c \\\n\tresc_def_all.c \\\n\tresv_attr_def.c \\\n\tsched_attr_def.c \\\n\tsvr_attr_def.c\n\njob_attr_def.c: $(top_srcdir)/src/lib/Libattr/master_job_attr_def.xml $(top_srcdir)/buildutils/attr_parser.py\n\t@echo Generating $@ from $< ; \\\n\t@PYTHON@ $(top_srcdir)/buildutils/attr_parser.py -m 
$(top_srcdir)/src/lib/Libattr/master_job_attr_def.xml -s $@\n\nsvr_attr_def.c: $(top_srcdir)/src/lib/Libattr/master_svr_attr_def.xml $(top_srcdir)/buildutils/attr_parser.py\n\t@echo Generating $@ from $< ; \\\n\t@PYTHON@ $(top_srcdir)/buildutils/attr_parser.py -m $(top_srcdir)/src/lib/Libattr/master_svr_attr_def.xml -s $@\n\nqueue_attr_def.c: $(top_srcdir)/src/lib/Libattr/master_queue_attr_def.xml $(top_srcdir)/buildutils/attr_parser.py\n\t@echo Generating $@ from $< ; \\\n\t@PYTHON@ $(top_srcdir)/buildutils/attr_parser.py -m $(top_srcdir)/src/lib/Libattr/master_queue_attr_def.xml -s $@\n\nnode_attr_def.c: $(top_srcdir)/src/lib/Libattr/master_node_attr_def.xml $(top_srcdir)/buildutils/attr_parser.py\n\t@echo Generating $@ from $< ; \\\n\t@PYTHON@ $(top_srcdir)/buildutils/attr_parser.py -m $(top_srcdir)/src/lib/Libattr/master_node_attr_def.xml -s $@\n\nsched_attr_def.c: $(top_srcdir)/src/lib/Libattr/master_sched_attr_def.xml $(top_srcdir)/buildutils/attr_parser.py\n\t@echo Generating $@ from $< ; \\\n\t@PYTHON@ $(top_srcdir)/buildutils/attr_parser.py -m $(top_srcdir)/src/lib/Libattr/master_sched_attr_def.xml -s $@\n\nresv_attr_def.c: $(top_srcdir)/src/lib/Libattr/master_resv_attr_def.xml $(top_srcdir)/buildutils/attr_parser.py\n\t@echo Generating $@ from $< ; \\\n\t@PYTHON@ $(top_srcdir)/buildutils/attr_parser.py -m $(top_srcdir)/src/lib/Libattr/master_resv_attr_def.xml -s $@\n\nresc_def_all.c: $(top_srcdir)/src/lib/Libattr/master_resc_def_all.xml $(top_srcdir)/buildutils/attr_parser.py\n\t@echo Generating $@ from $< ; \\\n\t@PYTHON@ $(top_srcdir)/buildutils/attr_parser.py -m $(top_srcdir)/src/lib/Libattr/master_resc_def_all.xml -s $@\n"
  },
  {
    "path": "src/lib/Libattr/attr_atomic.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <ctype.h>\n#include <memory.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include \"portability.h\"\n#include \"pbs_ifl.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"pbs_error.h\"\n\n/**\n * @file\tattr_atomic.c\n * @brief\n * \tThis file contains general functions for manipulating an attribute array.\n *\n * @par\tIncluded is:\n *\tattr_atomic_set()\n *\tattr_atomic_kill()\n *\n * \tThe prototypes are declared in \"attr_func.h\"\n */\n\n/* Global Variables */\n\nextern int resc_access_perm; /* see lib/Libattr/attr_fn_resc.c */\n\n/**\n * @brief\n *\tatomically set an attribute array with values from a svrattrl\n *\n * @param[in] plist - Pointer to list of attributes to set\n * @param[in] old - Pointer to the original/old attribute\n * @param[in] new - Pointer to the new/updated attribute\n * @param[in] pdef_idx - Search index for the attribute def array\n * @param[in] pdef - Pointer to the attribute definition\n * @param[in] limit - The index up to which to search for a definition\n * @param[in] unkn - Whether to allow unknown resources or not\n * @param[in] privil - Permissions of the caller requesting the operation\n * @param[out] badattr - Pointer to the attribute index in case of a 
failed\n * \t\t\toperation\n *\n * @return \tint\n * @retval \tPBSE_NONE \ton success\n * @retval \tPBS error code \totherwise\n *\n */\n\nint\nattr_atomic_set(struct svrattrl *plist, attribute *old, attribute *new, void *pdef_idx, attribute_def *pdef, int limit, int unkn, int privil, int *badattr)\n{\n\tint acc;\n\tint index;\n\tint listidx;\n\tresource *prc;\n\tint rc;\n\tattribute temp;\n\tint privil2;\n\n\tfor (index = 0; index < limit; index++)\n\t\tclear_attr(new + index, pdef + index);\n\n\tresc_access_perm = privil; /* set privilege for decode_resc()  */\n\n\tfor (listidx = 1, rc = PBSE_NONE; (rc == PBSE_NONE) && (plist != NULL); plist = (struct svrattrl *) GET_NEXT(plist->al_link), listidx++) {\n\n\t\tif ((index = find_attr(pdef_idx, pdef, plist->al_name)) < 0) {\n\t\t\tif (unkn < 0) { /*unknown attr isn't allowed*/\n\t\t\t\trc = PBSE_NOATTR;\n\t\t\t\tbreak;\n\t\t\t} else\n\t\t\t\tindex = unkn; /* if unknown attr are allowed */\n\t\t}\n\n\t\t/* have we privilege to set the attribute ? */\n\t\tprivil2 = privil;\n\t\tif ((plist->al_flags & ATR_VFLAG_HOOK)) {\n\t\t\tprivil2 = ATR_DFLAG_USRD | ATR_DFLAG_USWR |\n\t\t\t\t  ATR_DFLAG_OPRD | ATR_DFLAG_OPWR |\n\t\t\t\t  ATR_DFLAG_MGRD | ATR_DFLAG_MGWR |\n\t\t\t\t  ATR_DFLAG_SvWR;\n\t\t}\n\t\tresc_access_perm = privil2; /* set privilege for decode_resc() */\n\n\t\tacc = (pdef + index)->at_flags & ATR_DFLAG_ACCESS;\n\t\tif ((acc & privil2 & ATR_DFLAG_WRACC) == 0) {\n\t\t\tif (privil2 & ATR_DFLAG_SvWR) {\n\t\t\t\t/*from a daemon, just ignore this attribute*/\n\t\t\t\tcontinue;\n\t\t\t} else {\n\t\t\t\t/*from user, error if can't write attribute*/\n\t\t\t\trc = PBSE_ATTRRO;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\n\t\t/* decode new value */\n\n\t\tclear_attr(&temp, pdef + index);\n\t\tif ((rc = (pdef + index)->at_decode(&temp, plist->al_name, plist->al_resc, plist->al_value)) != 0) {\n\t\t\t/* Even if the decode failed, it is possible for list types to\n\t\t\t * have allocated some memory.  
Call at_free() to free that.\n\t\t\t */\n\t\t\t(pdef + index)->at_free(&temp);\n\t\t\tif ((rc == PBSE_UNKRESC) && (unkn > 0)) {\n\t\t\t\trc = PBSE_NONE; /* ignore the \"error\" */\n\t\t\t\tcontinue;\n\t\t\t} else\n\t\t\t\tbreak;\n\t\t}\n\n\t\t/* duplicate current value, if set AND not already dup-ed */\n\n\t\tif (((old + index)->at_flags & ATR_VFLAG_SET) &&\n\t\t    !((new + index)->at_flags & ATR_VFLAG_SET)) {\n\t\t\tif ((rc = (pdef + index)->at_set(new + index, old + index, SET)) != 0)\n\t\t\t\tbreak;\n\t\t\t/*\n\t\t\t * we need to know if the value is changed during\n\t\t\t * the next step, so clear MODIFY here; including\n\t\t\t * within resources.\n\t\t\t */\n\t\t\t(new + index)->at_flags &= ~ATR_MOD_MCACHE;\n\t\t\tif ((new + index)->at_type == ATR_TYPE_RESC) {\n\t\t\t\tprc = (resource *) GET_NEXT((new + index)->at_val.at_list);\n\t\t\t\twhile (prc) {\n\t\t\t\t\tprc->rs_value.at_flags &= ~ATR_MOD_MCACHE;\n\t\t\t\t\tprc = (resource *) GET_NEXT(prc->rs_link);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t/* update new copy with temp, MODIFY is set on ones changed */\n\n\t\tif ((plist->al_op != INCR) && (plist->al_op != DECR) &&\n\t\t    (plist->al_op != SET))\n\t\t\tplist->al_op = SET;\n\n\t\tif (temp.at_flags & ATR_VFLAG_SET) {\n\t\t\trc = (pdef + index)->at_set(new + index, &temp, plist->al_op);\n\t\t\tif (rc) {\n\t\t\t\t(pdef + index)->at_free(&temp);\n\t\t\t\tbreak;\n\t\t\t}\n\t\t} else if (temp.at_flags & ATR_VFLAG_MODIFY) {\n\t\t\t(pdef + index)->at_free(new + index);\n\t\t\t(new + index)->at_flags |= ATR_MOD_MCACHE; /* SET was removed by at_free */\n\t\t}\n\n\t\t(pdef + index)->at_free(&temp);\n\t}\n\n\tif (rc != PBSE_NONE) {\n\t\t*badattr = listidx;\n\t\tfor (index = 0; index < limit; index++)\n\t\t\t(pdef + index)->at_free(new + index);\n\t}\n\n\treturn rc;\n}\n\n/**\n * @brief\n * \tattr_atomic_kill - kill (free) a temporary attribute array which\n *\twas set up by attr_atomic_set().\n *\n *\tat_free() is called on each element on the array, then\n *\tthe array 
itself is freed.\n *\n * @param[in] temp - pointer  to attribute structure\n * @param[in] pdef - pointer to attribute_def structure\n * @param[in] limit -  Last attribute in the list\n *\n * @return \tVoid\n *\n */\n\nvoid\nattr_atomic_kill(attribute *temp, attribute_def *pdef, int limit)\n{\n\tint i;\n\n\tfor (i = 0; i < limit; i++)\n\t\t(pdef + i)->at_free(temp + i);\n\tfree(temp);\n}\n\n/**\n * @brief\n * \tattr_atomic_copy - make a copy of the attributes in attribute array 'from' to the attribute array 'to'.\n * \t\t\t\t'to' must be preallocated.  Attributes that use set_null() as their set function\n * \t\t\t\tare not meant to be set via the attribute framework.  Leave these attributes\n * \t\t\t\talone in 'to'\n *\n * \t@param[out] to - attribute array copied into\n * \t@param[in] from - attribute array copied from\n * \t@param[in] pdef - pointer to attribute_def structure\n * \t@param[in] limit - Last attribute in the list\n */\n\nvoid\nattr_atomic_copy(attribute *to, attribute *from, attribute_def *pdef, int limit)\n{\n\tint i;\n\tfor (i = 0; i < limit; i++) {\n\t\tif (((to + i)->at_flags & ATR_VFLAG_SET) && (pdef + i)->at_set != set_null)\n\t\t\t(pdef + i)->at_free(to + i);\n\n\t\tif ((pdef + i)->at_set != set_null)\n\t\t\tclear_attr(to + i, pdef + i);\n\t\tif ((from + i)->at_flags & ATR_VFLAG_SET) {\n\t\t\t(pdef + i)->at_set((to + i), (from + i), SET);\n\t\t\t(to + i)->at_flags = (from + i)->at_flags;\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/lib/Libattr/attr_fn_acl.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <ctype.h>\n#include <memory.h>\n#ifndef NDEBUG\n#include <stdio.h>\n#endif\n#include <stdlib.h>\n#include <string.h>\n#include <pwd.h>\n#include <grp.h>\n#include <unistd.h>\n#include <arpa/inet.h>\n#include \"pbs_ifl.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"pbs_error.h\"\n\n/**\n * @file\tattr_fn_acl.c\n * @brief\n * \tThis file contains general functions for attributes of type\n *      User/Group/Hosts Access Control List.\n * @details\n * \tThe following functions should be used for the 3 types of ACLs:\n *\n *\t User ACL\t Group ACL\t Host ACL\n *\t(+ mgrs + ops)\n *\t---------------------------------------------\n * \tdecode_arst\tdecode_arst\tdecode_arst\n * \tencode_arst\tencode_arst\tencode_arst\n *\tset_uacl\tset_gacl\tset_hostacl\n *\tcomp_arst\tcomp_arst\tcomp_arst\n *\tfree_arst\tfree_arst\tfree_arst\n *\n * \tThe \"encoded\" or external form of the value is a string with the original\n * \tstrings separated by commas (or new-lines) and terminated by a null.\n *\n * \tThe \"decoded\" form is a set of strings pointed to by an array_strings struct\n *\n * \tThese forms are identical to ATR_TYPE_ARST, and in fact encode_arst(),\n * \tcomp_arst(), and free_arst() are used for those functions.\n *\n * 
\tset_ugacl() is different because of the special sorting required.\n */\n\n/* External Functions called */\n\n/* Private Functions */\n\nstatic int hacl_match(const char *can, const char *master);\nstatic int gacl_match(const char *can, const char *master);\nstatic int user_match(const char *can, const char *master);\nstatic int sacl_match(const char *can, const char *master);\nstatic int host_order(char *old, char *new);\nstatic int user_order(char *old, char *new);\nstatic int group_order(char *old, char *new);\nstatic int\nset_allacl(attribute *, attribute *, enum batch_op,\n\t   int (*order_func)(char *, char *));\n\n/* for all decode_*acl() - use decode_arst() */\n/* for all encode_*acl() - use encode_arst() */\n\n/**\n * @brief\n * \tset_uacl - set value of one User ACL attribute to another\n *\twith special sorting.\n *\n *\tA=B --> set of strings in A replaced by set of strings in B\n *\tA+B --> set of strings in B appended to set of strings in A\n *\tA-B --> any string in B found in A is removed from A\n * @param[in]   attr - pointer to new attribute to be set (A)\n * @param[in]   new  - pointer to attribute (B)\n * @param[in]   op   - operator\n *\n * @return\tint\n * @retval\t0\tif ok\n * @retval     >0\tif error\n *\n */\n\nint\nset_uacl(attribute *attr, attribute *new, enum batch_op op)\n{\n\n\treturn (set_allacl(attr, new, op, user_order));\n}\n\n/**\n * @brief\n * \tset_gacl - set value of one Group ACL attribute to another\n *\twith special sorting.\n *\n *\tA=B --> set of strings in A replaced by set of strings in B\n *\tA+B --> set of strings in B appended to set of strings in A\n *\tA-B --> any string in B found in A is removed from A\n *\n * @param[in]   attr - pointer to new attribute to be set (A)\n * @param[in]   new  - pointer to attribute (B)\n * @param[in]   op   - operator\n *\n * @return      int\n * @retval      0       if ok\n * @retval     >0       if error\n *\n */\n\nint\nset_gacl(attribute *attr, attribute *new, enum batch_op 
op)\n{\n\n\treturn (set_allacl(attr, new, op, group_order));\n}\n\n/**\n * @brief\n * \tset_hostacl - set value of one Host ACL attribute to another\n *\twith special sorting.\n *\n *\tA=B --> set of strings in A replaced by set of strings in B\n *\tA+B --> set of strings in B appended to set of strings in A\n *\tA-B --> any string in B found in A is removed from A\n *\n * @param[in]   attr - pointer to new attribute to be set (A)\n * @param[in]   new  - pointer to attribute (B)\n * @param[in]   op   - operator\n *\n * @return      int\n * @retval      0       if ok\n * @retval     >0       if error\n *\n */\n\nint\nset_hostacl(attribute *attr, attribute *new, enum batch_op op)\n{\n\n\treturn (set_allacl(attr, new, op, host_order));\n}\n\n/**\n * @brief\n * \tacl_check - check a name:\n *\t\tuser or [user@]full_host_name\n *\t\tgroup_name\n *\t\tfull_host_name\n *\tagainst the entries in an access control list.\n *\tMatch is done by calling the appropriate comparison function\n *\twith the name and each string from the list in turn.\n *\n * @param[in] pattr - pointer to attribute list\n * @param[in] name - acl name to be checked\n * @param[in] type - type of acl\n *\n * @return\tint\n * @retval\t1\tif access allowed\n * @retval\t0\tif not allowed\n *\n */\n\nint\nacl_check(attribute *pattr, char *name, int type)\n{\n\tint i;\n#ifdef HOST_ACL_DEFAULT_ALL\n\tint default_rtn = 1;\n#else  /* HOST_ACL_DEFAULT_ALL */\n\tint default_rtn = 0;\n#endif /* HOST_ACL_DEFAULT_ALL */\n\tstruct array_strings *pas;\n\tchar *pstr;\n\tint (*match_func)(const char *name, const char *master);\n\n\textern char server_host[];\n\n\tswitch (type) {\n\t\tcase ACL_Host:\n\t\t\tmatch_func = hacl_match;\n\t\t\tbreak;\n\t\tcase ACL_User:\n\t\t\tmatch_func = user_match;\n\t\t\tbreak;\n\t\tcase ACL_Group:\n\t\t\tmatch_func = gacl_match;\n\t\t\tbreak;\n\t\tcase ACL_Subnet:\n\t\t\tmatch_func = sacl_match;\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tmatch_func = (int (*)(const char *, const char *)) 
strcmp;\n\t\t\tbreak;\n\t}\n\n\tif (name == NULL)\n\t\treturn (default_rtn);\n\n\tif (!(pattr->at_flags & ATR_VFLAG_SET) ||\n\t    ((pas = pattr->at_val.at_arst) == NULL) ||\n\t    (pas->as_usedptr == 0)) {\n\n#ifdef HOST_ACL_DEFAULT_ALL\n\t\t/* no list, default to everybody is allowed */\n\t\treturn (1);\n#else\n\t\tif (type == ACL_Host) {\n\t\t\t/* if there is no list set, allow only from my host */\n\t\t\treturn (!hacl_match(name, server_host));\n\t\t} else\n\t\t\treturn (0);\n#endif\n\t}\n\n\tfor (i = 0; i < pas->as_usedptr; i++) {\n\t\tpstr = pas->as_string[i];\n\t\tif ((*pstr == '+') || (*pstr == '-')) {\n\t\t\tif (*(pstr + 1) == '\\0') { /* \"+\" or \"-\" sets default */\n\t\t\t\tif (*pstr == '+')\n\t\t\t\t\tdefault_rtn = 1;\n\t\t\t\telse\n\t\t\t\t\tdefault_rtn = 0;\n\t\t\t}\n\t\t\tpstr++; /* skip over +/- if present */\n\t\t}\n\t\tif (!match_func(name, pstr)) {\n\t\t\tif (*pas->as_string[i] == '-')\n\t\t\t\treturn (0); /* deny */\n\t\t\telse\n\t\t\t\treturn (1); /* allow */\n\t\t}\n\t}\n\treturn (default_rtn);\n}\n\n/**\n * @brief\n * \tchk_dup_acl - check for duplicate in list (array_strings)\n *\n * @param[in] old - old list of acl\n * @param[in] new - new list of acl\n *\n * @return \tint\n * @retval\t0\tif no duplicate\n * @retval\t1\tif duplicate within the new list or\n *      \t\tbetween the new and old list.\n *\n */\n\nstatic int\nchk_dup_acl(struct array_strings *old, struct array_strings *new)\n{\n\tint i;\n\tint j;\n\n\tfor (i = 0; i < new->as_usedptr; ++i) {\n\n\t\t/* first check against self */\n\n\t\tfor (j = 0; j < new->as_usedptr; ++j) {\n\n\t\t\tif (i != j) {\n\t\t\t\tif (strcmp(new->as_string[i], new->as_string[j]) == 0)\n\t\t\t\t\treturn 1;\n\t\t\t}\n\t\t}\n\n\t\t/* next check new against existing (old) strings */\n\n\t\tfor (j = 0; j < old->as_usedptr; ++j) {\n\n\t\t\tif (strcmp(new->as_string[i], old->as_string[j]) == 0)\n\t\t\t\treturn 1;\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \tset_allacl - general set function for all 
types of acls\n *\tThis function is private to this file.  It is called\n *\tby the public set function which is specific to the\n *\tACL type.  The public function passes an extra\n *\tparameter which indicates the ACL type.\n *\n * @param[in]   attr - pointer to new attribute to be set (A)\n * @param[in]   new  - pointer to attribute (B)\n * @param[in]   op   - operator\n * @param[in] order_func - function pointer to indicate action depending on acl type\n *\n * @return\tint\n * @retval\tPBSE error number\tFailure\n * @retval\t0\t\t\tSuccess\n *\n */\n\nstatic int\nset_allacl(attribute *attr, attribute *new, enum batch_op op, int (*order_func)(char *, char *))\n{\n\tint i;\n\tint j;\n\tint k;\n\tunsigned long nsize;\n\tunsigned long need;\n\tlong offset;\n\tchar *pc;\n\tchar *where;\n\tint used;\n\tstruct array_strings *tmppas;\n\tstruct array_strings *pas;\n\tstruct array_strings *newpas;\n\textern void free_arst(attribute *);\n\n\tassert(attr && new && (new->at_flags &ATR_VFLAG_SET));\n\n\tpas = attr->at_val.at_arst;   /* array of strings control struct */\n\tnewpas = new->at_val.at_arst; /* array of strings control struct */\n\tif (!newpas)\n\t\treturn (PBSE_INTERNAL);\n\n\tif (!pas) {\n\n\t\t/* no array_strings control structure, make one */\n\n\t\ti = newpas->as_npointers;\n\t\tpas = (struct array_strings *) malloc((i - 1) * sizeof(char *) +\n\t\t\t\t\t\t      sizeof(struct array_strings));\n\t\tif (!pas)\n\t\t\treturn (PBSE_SYSTEM);\n\t\tpas->as_npointers = i;\n\t\tpas->as_usedptr = 0;\n\t\tpas->as_bufsize = 0;\n\t\tpas->as_buf = NULL;\n\t\tpas->as_next = NULL;\n\t\tattr->at_val.at_arst = pas;\n\t}\n\n\t/*\n\t * At this point we know we have a array_strings struct initialized\n\t */\n\n\tswitch (op) {\n\t\tcase SET:\n\n\t\t\t/*\n\t\t\t * Replace old array of strings with new array, this is\n\t\t\t * simply done by deleting old strings and adding the\n\t\t\t * new strings one at a time via Incr\n\t\t\t */\n\n\t\t\tfor (i = 0; i < pas->as_usedptr; 
i++)\n\t\t\t\tpas->as_string[i] = NULL; /* clear all pointers */\n\t\t\tpas->as_usedptr = 0;\n\t\t\tpas->as_next = pas->as_buf;\n\n\t\t\tif (newpas->as_usedptr == 0)\n\t\t\t\tbreak; /* none to set */\n\n\t\t\tnsize = newpas->as_next - newpas->as_buf; /* space needed */\n\t\t\tif (nsize > pas->as_bufsize) {\t\t  /* new won't fit */\n\t\t\t\tif (pas->as_buf)\n\t\t\t\t\tfree(pas->as_buf);\n\t\t\t\tnsize += nsize / 2; /* alloc extra space */\n\t\t\t\tif (!(pas->as_buf = malloc(nsize))) {\n\t\t\t\t\tpas->as_bufsize = 0;\n\t\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t\t}\n\t\t\t\tpas->as_bufsize = nsize;\n\n\t\t\t} else { /* str does fit, clear buf */\n\t\t\t\t(void) memset(pas->as_buf, 0, pas->as_bufsize);\n\t\t\t}\n\n\t\t\tpas->as_next = pas->as_buf;\n\n\t\t\t/* No break, \"Set\" falls into \"Incr\" to add strings */\n\n\t\tcase INCR:\n\n\t\t\t/* check for duplicates within new and between new and old  */\n\n\t\t\tif (chk_dup_acl(pas, newpas))\n\t\t\t\treturn (PBSE_DUPLIST);\n\n\t\t\tnsize = newpas->as_next - newpas->as_buf; /* space needed */\n\t\t\tused = pas->as_next - pas->as_buf;\t  /* space used   */\n\n\t\t\tif (nsize > (pas->as_bufsize - used)) {\n\n\t\t\t\t/* need to make more room for sub-strings */\n\n\t\t\t\tneed = pas->as_bufsize + 2 * nsize; /* alloc new buf */\n\t\t\t\tif (pas->as_buf)\n\t\t\t\t\tpc = realloc(pas->as_buf, need);\n\t\t\t\telse\n\t\t\t\t\tpc = malloc(need);\n\t\t\t\tif (pc == NULL)\n\t\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t\toffset = pc - pas->as_buf;\n\t\t\t\tpas->as_buf = pc;\n\t\t\t\tpas->as_next = pc + used;\n\t\t\t\tpas->as_bufsize = need;\n\n\t\t\t\tfor (j = 0; j < pas->as_usedptr; j++) /* adjust pointers */\n\t\t\t\t\tpas->as_string[j] += offset;\n\t\t\t}\n\n\t\t\tj = pas->as_usedptr + newpas->as_usedptr;\n\t\t\tif (j > pas->as_npointers) {\n\n\t\t\t\t/* need more pointers */\n\n\t\t\t\tj = 3 * j / 2; /* allocate extra     */\n\t\t\t\tneed = sizeof(struct array_strings) + (j - 1) * sizeof(char *);\n\t\t\t\ttmppas = (struct array_strings *) 
realloc((char *) pas, need);\n\t\t\t\tif (tmppas == NULL)\n\t\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t\ttmppas->as_npointers = j;\n\t\t\t\tpas = tmppas;\n\t\t\t\tattr->at_val.at_arst = pas;\n\t\t\t}\n\n\t\t\t/* now put in new strings in special ugacl sorted order */\n\n\t\t\tfor (i = 0; i < newpas->as_usedptr; i++) {\n\t\t\t\tfor (j = 0; j < pas->as_usedptr; j++) {\n\t\t\t\t\tif (order_func(pas->as_string[j], newpas->as_string[i]) > 0)\n\t\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\t/* push up rest of old strings to make room for new */\n\n\t\t\t\toffset = strlen(newpas->as_string[i]) + 1;\n\t\t\t\tif (j < pas->as_usedptr) {\n\t\t\t\t\twhere = pas->as_string[j]; /* where to put in new */\n\n\t\t\t\t\tpc = pas->as_next - 1;\n\t\t\t\t\twhile (pc >= pas->as_string[j]) { /* shift data up */\n\t\t\t\t\t\t*(pc + offset) = *pc;\n\t\t\t\t\t\tpc--;\n\t\t\t\t\t}\n\t\t\t\t\tfor (k = pas->as_usedptr; k > j; k--)\n\t\t\t\t\t\t/* readjust pointers */\n\t\t\t\t\t\tpas->as_string[k] = pas->as_string[k - 1] + offset;\n\t\t\t\t} else {\n\t\t\t\t\twhere = pas->as_next;\n\t\t\t\t}\n\t\t\t\t(void) strcpy(where, newpas->as_string[i]);\n\t\t\t\tpas->as_string[j] = where;\n\t\t\t\tpas->as_usedptr++;\n\t\t\t\tpas->as_next += offset;\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase DECR: /* decrement (remove) string from array */\n\t\t\tfor (j = 0; j < newpas->as_usedptr; j++) {\n\t\t\t\tfor (i = 0; i < pas->as_usedptr; i++) {\n\t\t\t\t\tif (!strcmp(pas->as_string[i], newpas->as_string[j])) {\n\t\t\t\t\t\t/* compact buffer */\n\t\t\t\t\t\tnsize = strlen(pas->as_string[i]) + 1;\n\t\t\t\t\t\tpc = pas->as_string[i] + nsize;\n\t\t\t\t\t\tneed = pas->as_next - pc;\n\t\t\t\t\t\t(void) memmove(pas->as_string[i], pc, (size_t) need);\n\t\t\t\t\t\tpas->as_next -= nsize;\n\t\t\t\t\t\t/* compact pointers */\n\t\t\t\t\t\tfor (++i; i < pas->as_npointers; i++)\n\t\t\t\t\t\t\tpas->as_string[i - 1] = pas->as_string[i] - nsize;\n\t\t\t\t\t\tpas->as_string[i - 1] = 
NULL;\n\t\t\t\t\t\tpas->as_usedptr--;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\treturn (PBSE_INTERNAL);\n\t}\n\tpost_attr_set(attr);\n\treturn (0);\n}\n\n/**\n * @brief\n *\tuser_match - User order match\n *\tMatch two strings by user, then from the tail end first\n *\n * @param[in] can - Candidate string (first parameter) is a single user@host string.\n * @param[in] master - Master string (2nd parameter) is an entry from a user/group acl.\n * It should have a leading + or - which is ignored.  Next is the user name\n * which is compared first.  If the user name matches, then the host name is\n * checked.  The host name may be wild carded or null (including no '@').\n * If the hostname is null, it is treated the same as \"@*\", a fully wild\n * carded hostname that matches anything.\n *\n * @return\tint\n * @retval\t0\tif strings match\n * @retval\t1\tif not\n *\n */\n\nstatic int\nuser_match(const char *can, const char *master)\n{\n\n\t/* match user name first */\n\n\tdo {\n\t\tif (*master != *can)\n\t\t\treturn (1); /* doesn't match */\n\t\tmaster++;\n\t\tcan++;\n\t} while ((*master != '@') && (*master != '\\0'));\n\n\tif (*master == '\\0') {\n\t\t/* if full match or if master has no host (=wildcard) */\n\t\tif ((*can == '\\0') || (*can == '@'))\n\t\t\treturn (0);\n\t\telse\n\t\t\treturn (1);\n\t} else if (*can != '@')\n\t\treturn (1);\n\n\t/* ok, now compare host/domain name working backwards     */\n\t/* if hit wild card in master ok to stop and report match */\n\n\treturn (hacl_match(can + 1, master + 1));\n}\n\n/**\n * @brief\n * \tuser_order - user order compare\n *\n * @param[in] s1 - user name\n * @param[in] s2 - user name\n *\n * @return\tint\n * @retval\t-1 \tif entry s1 sorts before s2\n * @retval\t0 \tif equal\n * @retval\t1 \tif s1 sorts after s2\n *\n */\n\nstatic int\nuser_order(char *s1, char *s2)\n{\n\tint d;\n\n\t/* skip over the + or - prefix */\n\n\tif ((*s1 == '+') || (*s1 == 
'-'))\n\t\ts1++;\n\tif ((*s2 == '+') || (*s2 == '-'))\n\t\ts2++;\n\n\t/* compare user names first, stop with '@' */\n\n\twhile (1) {\n\t\td = (int) *s1 - (int) *s2;\n\t\tif (d)\n\t\t\treturn (d);\n\t\tif ((*s1 == '@') || (*s1 == '\\0'))\n\t\t\treturn (host_order(s1 + 1, s2 + 1)); /* order host names */\n\t\ts1++;\n\t\ts2++;\n\t}\n}\n\n/**\n * @brief\n *\tcompare the group names\n *\n * @param[in]   s1 - group name\n * @param[in]   s2 - group name\n *\n * @return \tint\n * @retval  \t0\tif equal\n * @retval \t-1\tif entry s1 sorts before s2\n * @retval  \t1   \tif s1 sorts after s2\n *\n */\nstatic int\ngroup_order(char *s1, char *s2)\n{\n\n\t/* skip over the + or - prefix */\n\n\tif ((*s1 == '+') || (*s1 == '-'))\n\t\ts1++;\n\tif ((*s2 == '+') || (*s2 == '-'))\n\t\ts2++;\n\n\treturn strcmp(s1, s2);\n}\n\n/**\n * @brief\n * \thost acl order match - match two strings from the tail end first\n *\n * @param[in] can - Candidate string (first parameter) is a single user@host string.\n * @param[in] master -  Master string (2nd parameter) is an entry from a host acl.  It may have a\n * leading + or - which is ignored.  
It may also have an '*' as a leading\n * name segment to be a wild card - match anything.\n *\n * Strings match if identical, or if match up to leading '*' on master which,\n * like a wild card, matches any prefix string on the candidate domain name\n *\n * @return\tint\n * @retval\t0\tif strings match\n * @retval\t1\tif not\n *\n */\n\nstatic int\nhacl_match(const char *can, const char *master)\n{\n\tconst char *pc;\n\tconst char *pm;\n\n\tpc = can + strlen(can) - 1;\n\tpm = master + strlen(master) - 1;\n\twhile ((pc > can) && (pm > master)) {\n\t\tif (tolower((int) *pc) != tolower((int) *pm))\n\t\t\treturn (1);\n\t\tpc--;\n\t\tpm--;\n\t}\n\n\t/* comparison of one or both reached the start of the string */\n\n\tif (pm == master) {\n\t\tif (*pm == '*')\n\t\t\treturn (0);\n\t\telse if ((pc == can) && (tolower(*pc) == tolower(*pm)))\n\t\t\treturn (0);\n\t}\n\treturn (1);\n}\n\n/**\n * @brief\n * \tgroup acl order match - match two strings when user is in group\n *\n * @param[in] can - Candidate string (first parameter) is a euser string (egroup on Windows).\n * @param[in] master -  Master string (2nd parameter) is an entry from a group acl.\n *\n * Strings match if can is a member of master (strings are equal on windows).\n *\n * @return\tint\n * @retval\t0\tif strings match\n * @retval\t1\tif not\n *\n */\n\nstatic int\ngacl_match(const char *can, const char *master)\n{\n#ifdef WIN32\n\treturn (strcmp(can, master));\n#else\n\tint i, ng = 0;\n\tstruct passwd *pw;\n\tstruct group *gr;\n\tgid_t *groups = NULL;\n\n\tif ((pw = getpwnam(can)) == NULL)\n\t\treturn (1);\n\n\tif (getgrouplist(can, pw->pw_gid, NULL, &ng) < 0) {\n\t\tif ((groups = (gid_t *) malloc(ng * sizeof(gid_t))) == NULL)\n\t\t\treturn (1);\n\t\tgetgrouplist(can, pw->pw_gid, groups, &ng);\n\t}\n\n\tfor (i = 0; i < ng; i++) {\n\t\tif ((gr = getgrgid(groups[i])) != NULL) {\n\t\t\tif (!strcmp(gr->gr_name, master)) {\n\t\t\t\tfree(groups);\n\t\t\t\treturn (0);\n\t\t\t}\n\t\t}\n\t}\n\n\tfree(groups);\n\n\treturn 
(1);\n#endif\n}\n\n/**\n * @brief\n * \tsubnet acl order match - match two strings: ip and subnet with mask\n *  in short or long version\n *\n * @param[in] can - Candidate string (first parameter) is an IP string.\n * @param[in] master -  Master string (2nd parameter) is an entry from a host acl.\n *\n * Strings match if ip is in subnet.\n *\n * @return\tint\n * @retval\t0\tif strings match\n * @retval\t1\tif not\n *\n */\n\nstatic int\nsacl_match(const char *can, const char *master)\n{\n\tstruct in_addr addr;\n\tuint32_t ip;\n\tuint32_t subnet;\n\tuint32_t mask;\n\tchar tmpsubnet[PBS_MAXIP_LEN + 1];\n\tchar *delimiter;\n\tint len;\n\tint short_mask;\n\n\t/* check and convert candidate to numeric IP */\n\tif (inet_pton(AF_INET, can, &addr) == 0)\n\t\treturn 1;\n\tip = ntohl(addr.s_addr);\n\n\t/* split master to subnet and mask */\n\tif ((delimiter = strchr(master, '/')) == NULL)\n\t\treturn 1;\n\n\tif (*(delimiter + 1) == '\\0')\n\t\treturn 1;\n\n\tlen = delimiter - master;\n\tif (len > PBS_MAXIP_LEN)\n\t\treturn 1;\n\n\t/* get subnet */\n\tstrncpy(tmpsubnet, master, len);\n\ttmpsubnet[len] = '\\0';\n\tif (inet_pton(AF_INET, tmpsubnet, &addr) == 0)\n\t\treturn 1;\n\tsubnet = ntohl(addr.s_addr);\n\n\t/* get mask */\n\tif (strchr(delimiter + 1, '.')) {\n\t\t/* long mask */\n\t\tif (inet_pton(AF_INET, delimiter + 1, &addr) == 0)\n\t\t\treturn 1;\n\t\tmask = ntohl(addr.s_addr);\n\t} else {\n\t\t/* short mask; shift an unsigned constant to avoid shifting a negative value */\n\t\tshort_mask = atoi(delimiter + 1);\n\t\tif (short_mask < 0 || short_mask > 32)\n\t\t\treturn 1;\n\t\tmask = short_mask ? ~0U << (32 - short_mask) : 0;\n\t}\n\n\tif (mask == 0)\n\t\treturn 1;\n\n\treturn ! 
((ip & mask) == (subnet & mask));\n}\n\n/**\n * @brief\n *\thost reverse order compare - compare two host entries from the tail end first,\n *\tone domain name segment at a time.\n *\n * @param[in] s1 - hostname\n * @param[in] s2 - hostname\n *\n * @return\tint\n * @retval\t-1 \tif entry s1 sorts before s2\n * @retval\t0 \tif equal\n * @retval\t1 \tif s1 sorts after s2\n *\n */\n\nstatic int\nhost_order(char *s1, char *s2)\n{\n\tint d;\n\tchar *p1;\n\tchar *p2;\n\n\tif ((*s1 == '+') || (*s1 == '-'))\n\t\ts1++;\n\tif ((*s2 == '+') || (*s2 == '-'))\n\t\ts2++;\n\n\tp1 = s1 + strlen(s1) - 1;\n\tp2 = s2 + strlen(s2) - 1;\n\twhile (1) {\n\t\td = (int) *p2 - (int) *p1;\n\t\tif ((p1 > s1) && (p2 > s2)) {\n\t\t\tif (d != 0)\n\t\t\t\treturn (d);\n\t\t\telse {\n\t\t\t\tp1--;\n\t\t\t\tp2--;\n\t\t\t}\n\t\t} else if ((p1 == s1) && (p2 == s2)) {\n\t\t\tif (*p1 == '*')\n\t\t\t\treturn (1);\n\t\t\telse if (*p2 == '*')\n\t\t\t\treturn (-1);\n\t\t\telse\n\t\t\t\treturn (d);\n\t\t} else if (p1 == s1) {\n\t\t\treturn (1);\n\t\t} else {\n\t\t\treturn (-1);\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/lib/Libattr/attr_fn_arst.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <ctype.h>\n#include <memory.h>\n#ifndef NDEBUG\n#include <stdio.h>\n#endif\n#include <stdlib.h>\n#include <string.h>\n#include \"pbs_ifl.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"pbs_error.h\"\n\n/**\n * @file\tattr_fn_arst.c\n * @brief\n * \tThis file contains general function for attributes of type\n * \tarray of (pointers to) strings\n * @details\n * \tEach set has functions for:\n *\t\tDecoding the value string to the machine representation.\n *\t\tEncoding the internal representation of the attribute to external\n *\t\tSetting the value by =, + or - operators.\n *\t\tComparing a (decoded) value with the attribute value.\n *\t\tFreeing the space malloc-ed to the attribute value.\n *\n * \tSome or all of the functions for an attribute type may be shared with\n * \tother attribute types.\n *\n * \tThe prototypes are declared in \"attribute.h\"\n *\n * \tAttribute functions for attributes with value type \"array of strings\":\n *\n * \tThe \"encoded\" or external form of the value is a string with the orginial\n * \tstrings separated by commas (or new-lines) and terminated by a null.\n * \tAny embedded commas or back-slashes must be escaped by a prefixed back-\n * \tslash.\n *\n * \tThe \"decoded\" form is a set 
of strings pointed to by an array_strings\n * \tstruct\n */\n\n/**\n * @brief\n *\tdecode a comma string into an attribute of type ATR_TYPE_ARST\n *\n * @par Functionality:\n *\t1. Call count_substrings to find out the number of sub strings separated\n *\t   by comma\n *\t2. Call parse_comma_string function to parse the value of the attribute\n *\n * @see\n *\n * @param[in,out] patr\t- Pointer to attribute structure\n * @param[in]\t  val\t- Value of the attribute as comma separated string. This\n *                        parameter's value cannot be modified by any of the\n *                        functions that are called inside this function.\n *\n * @return\tint\n * @retval\t0  -  Success\n * @retval\t>0 -  Failure\n *\n *\n */\nstatic int\ndecode_arst_direct(attribute *patr, char *val)\n{\n\tunsigned long bksize;\n\tint j;\n\tint ns;\n\tchar *pbuf = NULL;\n\tchar *pc;\n\tchar *pstr;\n\tstruct array_strings *stp = NULL;\n\tint rc;\n\tchar strbuf[BUF_SIZE]; /* Should handle most values */\n\tchar *sbufp = NULL;\n\tsize_t slen;\n\n\tif (!patr || !val)\n\t\treturn (PBSE_INTERNAL);\n\n\t/*\n\t * determine number of sub strings, each sub string is terminated\n\t * by a non-escaped comma or a new-line, the whole string is terminated\n\t * by a null\n\t */\n\n\tif ((rc = count_substrings(val, &ns)) != 0)\n\t\treturn (rc);\n\n\tslen = strlen(val);\n\n\tpbuf = calloc(slen + 1, sizeof(char));\n\tif (pbuf == NULL)\n\t\treturn (PBSE_SYSTEM);\n\tbksize = ((ns - 1) * sizeof(char *)) + sizeof(struct array_strings);\n\tstp = (struct array_strings *) malloc(bksize);\n\tif (!stp) {\n\t\tfree(pbuf);\n\t\treturn (PBSE_SYSTEM);\n\t}\n\n\t/* number of slots (sub strings) */\n\tstp->as_npointers = ns;\n\tstp->as_usedptr = 0;\n\t/* for the strings themselves */\n\tstp->as_buf = pbuf;\n\tstp->as_next = pbuf;\n\tstp->as_bufsize = slen + 1;\n\n\t/*\n\t * determine string storage requirement and copy the string \"val\"\n\t * to a work buffer area\n\t */\n\tif (slen < BUF_SIZE) {\n\t\t/* 
buffer on stack */\n\t\tsnprintf(strbuf, sizeof(strbuf), \"%s\", val);\n\t\tsbufp = strbuf;\n\t} else {\n\t\t/* buffer on heap */\n\t\tsbufp = strdup(val);\n\t\tif (sbufp == NULL) {\n\t\t\tfree(pbuf);\n\t\t\tfree(stp);\n\t\t\treturn (PBSE_SYSTEM);\n\t\t}\n\t}\n\n\t/* now copy in substrings and set pointers */\n\tpc = pbuf;\n\tj = 0;\n\tpstr = parse_comma_string(sbufp);\n\twhile ((pstr != NULL) && (j < ns)) {\n\t\tstp->as_string[j] = pc;\n\t\twhile (*pstr) {\n\t\t\t*pc++ = *pstr++;\n\t\t}\n\t\t*pc++ = '\\0';\n\t\tpstr = parse_comma_string(NULL);\n\t\tj++;\n\t}\n\n\tstp->as_usedptr = j;\n\tstp->as_next = pc;\n\tpost_attr_set(patr);\n\tpatr->at_val.at_arst = stp;\n\n\tif (sbufp != strbuf) /* buffer on heap, not stack */\n\t\tfree(sbufp);\n\treturn (0);\n}\n\n/**\n * @brief\n * \tdecode_arst - decode a comma string into an attr of type ATR_TYPE_ARST\n *\n * @param[in] patr - pointer to attribute structure\n * @param[in] name - attribute name\n * @param[in] rescn - resource name\n * @param[in] val - attribute value\n *\n * @return\tint\n * @retval\t0\tif ok\n * @retval\t>0\terror number1 if error\n * @retval\tpatr\tmembers set\n *\n */\n\nint\ndecode_arst(attribute *patr, char *name, char *rescn, char *val)\n{\n\tint rc;\n\tattribute temp;\n\n\tif ((val == NULL) || (strlen(val) == 0)) {\n\t\tfree_arst(patr);\n\t\t/* _SET cleared in free_arst */\n\t\tpatr->at_flags |= ATR_MOD_MCACHE;\n\n\t\treturn (0);\n\t}\n\n\tif ((patr->at_flags & ATR_VFLAG_SET) && (patr->at_val.at_arst)) {\n\t\t/* already have values, decode new into temp\t*/\n\t\t/* then use set(incr) to add new to existing\t*/\n\n\t\ttemp.at_flags = 0;\n\t\ttemp.at_type = ATR_TYPE_ARST;\n\t\ttemp.at_user_encoded = NULL;\n\t\ttemp.at_priv_encoded = NULL;\n\t\ttemp.at_val.at_arst = 0;\n\t\tif ((rc = decode_arst_direct(&temp, val)) != 0)\n\t\t\treturn (rc);\n\t\trc = set_arst(patr, &temp, SET);\n\t\tfree_arst(&temp);\n\t\treturn (rc);\n\n\t} else {\n\t\t/* decode directly into real attribute */\n\t\treturn 
(decode_arst_direct(patr, val));\n\t}\n}\n\n/**\n * @brief\n * \tencode_arst - encode attr of type ATR_TYPE_ARST into attrlist entry\n *\n * Mode ATR_ENCODE_CLIENT - encode strings into single super string\n *\t\t\t    separated by ','\n *\n * Mode ATR_ENCODE_SVR    - treated as above\n *\n * Mode ATR_ENCODE_MOM    - treated as above\n *\n * Mode ATR_ENCODE_HOOK   - treated as above\n *\n * Mode ATR_ENCODE_SAVE - encode strings into single super string\n *\t\t\t  separated by '\\n'\n *\n * @param[in] attr - ptr to attribute to encode\n * @param[in] phead - ptr to head of attrlist list\n * @param[in] atname - attribute name\n * @param[in] rsname - resource name or null\n * @param[in] mode - encode mode\n * @param[out] rtnl - ptr to svrattrl\n *\n * @retval\tint\n * @retval\t>0\tif ok, entry created and linked into list\n * @retval\t=0\tno value to encode, entry not created\n * @retval\t-1\tif error\n *\n */\n\n/*ARGSUSED*/\n\nint\nencode_arst(const attribute *attr, pbs_list_head *phead, char *atname, char *rsname, int mode, svrattrl **rtnl)\n{\n\tchar *end;\n\tint i;\n\tint j;\n\tsvrattrl *pal;\n\tchar *pc;\n\tchar *pfrom;\n\tchar separator;\n\n\tif (!attr)\n\t\treturn (-2);\n\tif (((attr->at_flags & ATR_VFLAG_SET) == 0) || !attr->at_val.at_arst ||\n\t    !attr->at_val.at_arst->as_usedptr)\n\t\treturn (0); /* no values */\n\n\ti = (int) (attr->at_val.at_arst->as_next - attr->at_val.at_arst->as_buf);\n\tif (mode == ATR_ENCODE_SAVE) {\n\t\tseparator = '\\n'; /* new-line for encode_acl  */\n\t\t/* allow one extra byte for final new-line */\n\t\tj = i + 1;\n\t} else {\n\t\tseparator = ','; /* normally a comma is the separator */\n\t\tj = i;\n\t}\n\n\t/* in the future will need to expand the size of value\t*/\n\t/* if we are escaping special characters\t\t*/\n\n\tpal = attrlist_create(atname, rsname, j);\n\tif (pal == NULL)\n\t\treturn (-1);\n\n\tpal->al_flags = attr->at_flags;\n\n\tpc = pal->al_value;\n\tpfrom = attr->at_val.at_arst->as_buf;\n\n\t/* replace nulls 
between sub-strings with separator characters */\n\t/* in the future we need to escape any embedded special character */\n\n\tend = attr->at_val.at_arst->as_next;\n\twhile (pfrom < end) {\n\t\tif (*pfrom == '\\0') {\n\t\t\t*pc = separator;\n\t\t} else {\n\t\t\t*pc = *pfrom;\n\t\t}\n\t\tpc++;\n\t\tpfrom++;\n\t}\n\n\t/* convert the last null to separator only if going to new-lines */\n\n\tif (mode == ATR_ENCODE_SAVE)\n\t\t*pc = '\\0'; /* ensure string terminator */\n\telse\n\t\t*(pc - 1) = '\\0';\n\tif (phead)\n\t\tappend_link(phead, &pal->al_link, pal);\n\tif (rtnl)\n\t\t*rtnl = pal;\n\n\treturn (1);\n}\n\n/**\n * @brief\n * \tset_arst - set value of attribute of type ATR_TYPE_ARST to another\n *\n *\tA=B --> set of strings in A replaced by set of strings in B\n *\tA+B --> set of strings in B appended to set of strings in A\n *\tA-B --> any string in B found in A is removed from A\n *\n * @param[in]   attr - pointer to new attribute to be set (A)\n * @param[in]   new  - pointer to attribute (B)\n * @param[in]   op   - operator\n *\n * @return      int\n * @retval      0       if ok\n * @retval     >0       if error\n *\n */\n\nint\nset_arst(attribute *attr, attribute *new, enum batch_op op)\n{\n\tint i;\n\tint j;\n\tunsigned long nsize;\n\tunsigned long need;\n\tlong offset;\n\tchar *pc;\n\tlong used;\n\tstruct array_strings *newpas;\n\tstruct array_strings *pas;\n\tstruct array_strings *xpasx;\n\tvoid free_arst(attribute *);\n\n\tassert(attr && new && (new->at_flags &ATR_VFLAG_SET));\n\n\tpas = attr->at_val.at_arst;\n\txpasx = new->at_val.at_arst;\n\tif (!xpasx)\n\t\treturn (PBSE_INTERNAL);\n\n\tif (!pas) {\n\n\t\t/* no array_strings control structure, make one */\n\n\t\tj = xpasx->as_npointers;\n\t\tif (j < 1)\n\t\t\treturn (PBSE_INTERNAL);\n\t\tneed = sizeof(struct array_strings) + (j - 1) * sizeof(char *);\n\t\tpas = (struct array_strings *) malloc(need);\n\t\tif (!pas)\n\t\t\treturn (PBSE_SYSTEM);\n\t\tpas->as_npointers = j;\n\t\tpas->as_usedptr = 
0;\n\t\tpas->as_bufsize = 0;\n\t\tpas->as_buf = NULL;\n\t\tpas->as_next = NULL;\n\t\tattr->at_val.at_arst = pas;\n\t}\n\tif ((op == INCR) && !pas->as_buf)\n\t\top = SET; /* no current strings, change op to SET */\n\n\t/*\n\t * At this point we know we have a array_strings struct initialized\n\t */\n\n\tswitch (op) {\n\t\tcase SET:\n\n\t\t\t/*\n\t\t\t * Replace old array of strings with new array, this is\n\t\t\t * simply done by deleting old strings and appending the\n\t\t\t * new (to the null set).\n\t\t\t */\n\n\t\t\tfor (i = 0; i < pas->as_usedptr; i++)\n\t\t\t\tpas->as_string[i] = NULL; /* clear all pointers */\n\t\t\tpas->as_usedptr = 0;\n\t\t\tpas->as_next = pas->as_buf;\n\n\t\t\tif (new->at_val.at_arst == NULL)\n\t\t\t\tbreak; /* none to set */\n\n\t\t\tnsize = xpasx->as_next - xpasx->as_buf; /* space needed */\n\t\t\tif (nsize > pas->as_bufsize) {\t\t/* new wont fit */\n\t\t\t\tif (pas->as_buf)\n\t\t\t\t\tfree(pas->as_buf);\n\t\t\t\tnsize += nsize / 2; /* alloc extra space */\n\t\t\t\tif (!(pas->as_buf = malloc(nsize))) {\n\t\t\t\t\tpas->as_bufsize = 0;\n\t\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t\t}\n\t\t\t\tpas->as_bufsize = nsize;\n\n\t\t\t} else { /* str does fit, clear buf */\n\t\t\t\t(void) memset(pas->as_buf, 0, pas->as_bufsize);\n\t\t\t}\n\n\t\t\tpas->as_next = pas->as_buf;\n\n\t\t\t/* No break, \"SET\" falls into \"INCR\" to add strings */\n\n\t\tcase INCR:\n\n\t\t\tnsize = xpasx->as_next - xpasx->as_buf; /* space needed */\n\t\t\tused = pas->as_next - pas->as_buf;\n\n\t\t\tif (nsize > (pas->as_bufsize - used)) {\n\t\t\t\tneed = pas->as_bufsize + 2 * nsize; /* alloc new buf */\n\t\t\t\tif (pas->as_buf)\n\t\t\t\t\tpc = realloc(pas->as_buf, need);\n\t\t\t\telse\n\t\t\t\t\tpc = malloc(need);\n\t\t\t\tif (pc == NULL)\n\t\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t\toffset = pc - pas->as_buf;\n\t\t\t\tpas->as_buf = pc;\n\t\t\t\tpas->as_next = pc + used;\n\t\t\t\tpas->as_bufsize = need;\n\n\t\t\t\tfor (j = 0; j < pas->as_usedptr; j++) /* adjust points 
*/\n\t\t\t\t\tpas->as_string[j] += offset;\n\t\t\t}\n\n\t\t\tj = pas->as_usedptr + xpasx->as_usedptr;\n\t\t\tif (j > pas->as_npointers) { /* need more pointers */\n\t\t\t\tj = 3 * j / 2;\t     /* allocate extra     */\n\t\t\t\tneed = sizeof(struct array_strings) + (j - 1) * sizeof(char *);\n\t\t\t\tnewpas = (struct array_strings *) realloc((char *) pas, need);\n\t\t\t\tif (newpas == NULL)\n\t\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t\tnewpas->as_npointers = j;\n\t\t\t\tpas = newpas;\n\t\t\t\tattr->at_val.at_arst = pas;\n\t\t\t}\n\n\t\t\t/* now append new strings */\n\n\t\t\tfor (i = 0; i < xpasx->as_usedptr; i++) {\n\t\t\t\t(void) strcpy(pas->as_next, xpasx->as_string[i]);\n\t\t\t\tpas->as_string[pas->as_usedptr++] = pas->as_next;\n\t\t\t\tpas->as_next += strlen(pas->as_next) + 1;\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase DECR: /* decrement (remove) string from array */\n\t\t\tfor (j = 0; j < xpasx->as_usedptr; j++) {\n\t\t\t\tfor (i = 0; i < pas->as_usedptr; i++) {\n\t\t\t\t\tif (!strcmp(pas->as_string[i], xpasx->as_string[j])) {\n\t\t\t\t\t\t/* compact buffer */\n\t\t\t\t\t\tnsize = strlen(pas->as_string[i]) + 1;\n\t\t\t\t\t\tpc = pas->as_string[i] + nsize;\n\t\t\t\t\t\tneed = pas->as_next - pc;\n\t\t\t\t\t\t(void) memmove(pas->as_string[i], pc, (size_t) need);\n\t\t\t\t\t\tpas->as_next -= nsize;\n\t\t\t\t\t\t/* compact pointers */\n\t\t\t\t\t\tfor (++i; i < pas->as_npointers; i++)\n\t\t\t\t\t\t\tpas->as_string[i - 1] = pas->as_string[i] - nsize;\n\t\t\t\t\t\tpas->as_string[i - 1] = NULL;\n\t\t\t\t\t\tpas->as_usedptr--;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\treturn (PBSE_INTERNAL);\n\t}\n\tpost_attr_set(attr);\n\treturn (0);\n}\n\n/**\n * @brief\n * \tcomp_arst - compare two attributes of type ATR_TYPE_ARST\n *\n * @param[in] attr - pointer to attribute structure\n * @param[in] with - pointer to attribute structure\n *\n * @return\tint\n * @retval\t0\tif the set of strings in \"with\" is a subset of \"attr\"\n * 
@retval\t1\totherwise\n *\n */\n\nint\ncomp_arst(attribute *attr, attribute *with)\n{\n\tint i;\n\tint j;\n\tint match = 0;\n\tstruct array_strings *apa;\n\tstruct array_strings *bpb;\n\n\tif (!attr || !with || !attr->at_val.at_arst || !with->at_val.at_arst)\n\t\treturn (1);\n\tif ((attr->at_type != ATR_TYPE_ARST) ||\n\t    (with->at_type != ATR_TYPE_ARST))\n\t\treturn (1);\n\tapa = attr->at_val.at_arst;\n\tbpb = with->at_val.at_arst;\n\n\tfor (j = 0; j < bpb->as_usedptr; j++) {\n\t\tfor (i = 0; i < apa->as_usedptr; i++) {\n\t\t\tif (strcmp(bpb->as_string[j], apa->as_string[i]) == 0) {\n\t\t\t\tmatch++;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\n\tif (match == bpb->as_usedptr)\n\t\treturn (0); /* all in \"with\" are in \"attr\" */\n\telse\n\t\treturn (1);\n}\n\n/**\n * @brief\n *\tfrees arst attribute.\n *\n * @param[in] attr - pointer to attribute\n *\n * @return\tVoid\n *\n */\n\nvoid\nfree_arst(attribute *attr)\n{\n\tif ((attr->at_flags & ATR_VFLAG_SET) && (attr->at_val.at_arst)) {\n\t\t(void) free(attr->at_val.at_arst->as_buf);\n\t\t(void) free((char *) attr->at_val.at_arst);\n\t}\n\tfree_null(attr);\n}\n\n/**\n * @brief\n * \tarst_string - see if a string occurs as a prefix in an arst attribute entry\n *\tSearch each entry in the value of an arst attribute for a sub-string\n *\tthat begins with the passed string\n *\n * @param[in] pattr - pointer to attribute structure\n * @param[in] str - string which is prefix to be searched in an attribute\n *\n * @return\tstring\n * @retval\tarst attribute\t\tSuccess\n * @retval\tNULL\t\t\tFailure\n *\n */\n\nchar *\narst_string(char *str, attribute *pattr)\n{\n\tint i;\n\tsize_t len;\n\tstruct array_strings *parst;\n\n\tif ((pattr->at_type != ATR_TYPE_ARST) || !(pattr->at_flags & ATR_VFLAG_SET))\n\t\treturn NULL;\n\n\tlen = strlen(str);\n\tparst = pattr->at_val.at_arst;\n\tfor (i = 0; i < parst->as_usedptr; i++) {\n\t\tif (strncmp(str, parst->as_string[i], len) == 0)\n\t\t\treturn (parst->as_string[i]);\n\t}\n\treturn 
NULL;\n}\n\n/**\n * @brief\n * \tparse_comma_string_bs() - parse a string of the form:\n *\t\tvalue1 [, value2 ...]\n *\n *\tFor use by decode_arst_direct_bs(), the old 8.0 version.\n *\tOn the first call, start is non null, a pointer to the first value\n *\telement upto a comma, new-line, or end of string is returned.\n *\n *\tCommas escaped by a back-slash '\\' are ignored.\n *\n *\tNewlines (\\n) are allowed because they could be present in\n *\tenvironment variables.\n *\n *\tOn any following calls with start set to a null pointer NULL,\n *\tthe next value element is returned...\n *\n * @param[in] start - string to be parsed\n *\n * @return \tstring\n * @retval\tstart address for string\tSuccess\n * @retval\tNULL\t\t\t\tFailure\n *\n */\n\nstatic char *\nparse_comma_string_bs(char *start)\n{\n\tstatic char *pc = NULL; /* if start is null, restart from here */\n\tchar *dest;\n\tchar *back;\n\tchar *rv;\n\n\tif (start != NULL)\n\t\tpc = start;\n\n\t/* skip over leading white space */\n\twhile (pc && *pc && isspace((int) *pc))\n\t\tpc++;\n\n\tif (!pc || !*pc)\n\t\treturn NULL; /* already at end, no strings */\n\n\trv = dest = pc; /* the start point which will be returned */\n\n\t/* find comma */\n\twhile (*pc) {\n\t\tif (*pc == ESC_CHAR) {\n\t\t\t/*\n\t\t\t * Both copy_env_value() and encode_arst_bs() escape certain\n\t\t\t * characters. 
Unescape them here.\n\t\t\t */\n\t\t\tpc++;\n\t\t\tif (*pc == '\\0') {\n\t\t\t\t/* should not happen, but handle it */\n\t\t\t\tbreak;\n\t\t\t} else if ((*pc == '\"') || (*pc == '\\'') || (*pc == ',') || (*pc == ESC_CHAR)) {\n\t\t\t\t/* omit the ESC_CHAR preceding these characters */\n\t\t\t\t*dest = *pc;\n\t\t\t} else {\n\t\t\t\t/* unrecognized escape sequence, just copy it */\n\t\t\t\t*dest++ = ESC_CHAR;\n\t\t\t\t*dest = *pc;\n\t\t\t}\n\t\t} else if (*pc == ',') {\n\t\t\tbreak;\n\t\t} else {\n\t\t\t*dest = *pc;\n\t\t}\n\t\t++pc;\n\t\t++dest;\n\t}\n\n\tif (*pc)\n\t\t*pc++ = '\\0'; /* if not end, terminate this and adv past */\n\n\t*dest = '\\0';\n\tback = dest;\n\twhile (isspace((int) *--back)) /* strip trailing spaces */\n\t\t*back = '\\0';\n\n\treturn (rv);\n}\n\n/**\n * @brief\n * \tcount_substrings_bs - counts number of substrings in a comma separated\n * \tstring.\n *\n * \tNewlines (\\n) are allowed because they could be present in environment\n * \tvariables.\n *\n *\tSee also count_substrings\n *\n * @param val comma separated string of substrings\n * @param pcnt where to return the value\n */\nint\ncount_substrings_bs(char *val, int *pcnt)\n{\n\tint rc = 0;\n\tint ns;\n\tchar *pc;\n\n\tif (val == NULL)\n\t\treturn (PBSE_INTERNAL);\n\n\t/*\n\t * determine number of substrings, each sub string is terminated\n\t * by a non-escaped comma or a new-line, the whole string is terminated\n\t * by a null\n\t */\n\n\tns = 1;\n\tfor (pc = val; *pc; pc++) {\n\t\tif (*pc == ESC_CHAR) {\n\t\t\tif (*(pc + 1))\n\t\t\t\tpc++;\n\t\t} else {\n\t\t\tif (*pc == ',')\n\t\t\t\t++ns;\n\t\t}\n\t}\n\tif (pc > val)\n\t\tpc--;\n\tif (*pc == ',') {\n\t\tif ((pc > val) && (*(pc - 1) != ESC_CHAR)) {\n\t\t\t/* strip trailing empty string */\n\t\t\tns--;\n\t\t\t*pc = '\\0';\n\t\t}\n\t}\n\n\t*pcnt = ns;\n\treturn rc;\n}\n\n/**\n * @brief\n *\tdecode_arst_direct_bs - this version of the decode routine treats\n *\tback-slashes (hence the \"_bs\" on the name) as escape characters\n *\tIt is 
needed to deal with environment variables that contain a\n\tcomma and was taken from the 8.0 version.\n *\n * @param[in] patr - pointer to attribute structure\n * @param[in] val - string holding value for attribute structure\n *\n * @return\tint\n * @retval\t0\tSuccess\n * @retval\t>0\tPBSE error code\n *\n */\nstatic int\ndecode_arst_direct_bs(attribute *patr, char *val)\n{\n\tunsigned long bksize;\n\tint j;\n\tint ns;\n\tint rc;\n\tsize_t slen;\n\tchar *pbuf = NULL;\n\tchar *pc;\n\tchar *pstr;\n\tchar *sbufp = NULL;\n\tstruct array_strings *stp = NULL;\n\tchar strbuf[BUF_SIZE]; /* Should handle most values */\n\n\tif (!patr || !val)\n\t\treturn (PBSE_INTERNAL);\n\n\t/*\n\t * determine number of sub strings, each sub string is terminated\n\t * by a non-escaped comma, the whole string is terminated by a null\n\t */\n\n\tif ((rc = count_substrings_bs(val, &ns)) != 0)\n\t\treturn (rc);\n\n\tslen = strlen(val);\n\n\tpbuf = calloc(slen + 1, sizeof(char));\n\tif (pbuf == NULL)\n\t\treturn (PBSE_SYSTEM);\n\tbksize = ((ns - 1) * sizeof(char *)) + sizeof(struct array_strings);\n\tstp = (struct array_strings *) malloc(bksize);\n\tif (!stp) {\n\t\tfree(pbuf);\n\t\treturn (PBSE_SYSTEM);\n\t}\n\n\t/* number of slots (sub strings) */\n\tstp->as_npointers = ns;\n\tstp->as_usedptr = 0;\n\t/* for the strings themselves */\n\tstp->as_buf = pbuf;\n\tstp->as_next = pbuf;\n\tstp->as_bufsize = slen + 1;\n\n\t/*\n\t * determine string storage requirement and copy the string \"val\"\n\t * to a work buffer area\n\t */\n\tif (slen < BUF_SIZE) {\n\t\t/* buffer on stack */\n\t\tsnprintf(strbuf, sizeof(strbuf), \"%s\", val);\n\t\tsbufp = strbuf;\n\t} else {\n\t\t/* buffer on heap */\n\t\tsbufp = strdup(val);\n\t\tif (sbufp == NULL) {\n\t\t\tfree(pbuf);\n\t\t\tfree(stp);\n\t\t\treturn (PBSE_SYSTEM);\n\t\t}\n\t}\n\n\t/* now copy in substrings and set pointers */\n\tpc = pbuf;\n\tj = 0;\n\tpstr = parse_comma_string_bs(sbufp);\n\twhile ((pstr != NULL) && (j < ns)) {\n\t\tstp->as_string[j] = 
pc;\n\t\twhile (*pstr) {\n\t\t\t*pc++ = *pstr++;\n\t\t}\n\t\t*pc++ = '\\0';\n\t\tpstr = parse_comma_string_bs(NULL);\n\t\tj++;\n\t}\n\n\tstp->as_usedptr = j;\n\tstp->as_next = pc;\n\tpost_attr_set(patr);\n\tpatr->at_val.at_arst = stp;\n\n\tif (sbufp != strbuf) /* buffer on heap, not stack */\n\t\tfree(sbufp);\n\treturn (0);\n}\n\n/**\n * @brief\n * \tdecode_arst_bs - decode a comma string into an attr of type ATR_TYPE_ARST\n *\tCalls decode_arst_direct_bs() instead of decode_arst_direct() for\n *\tenvironment variables that may contain commas\n *\n *\n * @param[in] patr - ptr to attribute to decode\n * @param[in] name - attribute name\n * @param[in] rescn - resource name or null\n * @param[out] val - string holding values for attribute structure\n *\n * @retval      int\n * @retval      0\tif ok\n * @retval      >0\terror number1 if error,\n * @retval      *patr \tmembers set\n *\n */\n\nint\ndecode_arst_bs(attribute *patr, char *name, char *rescn, char *val)\n{\n\tint rc;\n\tattribute temp;\n\n\tif ((val == NULL) || (strlen(val) == 0)) {\n\t\tfree_arst(patr);\n\t\t/* _SET cleared in free_arst */\n\t\tpatr->at_flags |= ATR_MOD_MCACHE;\n\n\t\treturn (0);\n\t}\n\n\tif ((patr->at_flags & ATR_VFLAG_SET) && (patr->at_val.at_arst)) {\n\t\t/* already have values, decode new into temp\t*/\n\t\t/* then use set(incr) to add new to existing\t*/\n\n\t\ttemp.at_flags = 0;\n\t\ttemp.at_type = ATR_TYPE_ARST;\n\t\ttemp.at_user_encoded = NULL;\n\t\ttemp.at_priv_encoded = NULL;\n\t\ttemp.at_val.at_arst = 0;\n\t\tif ((rc = decode_arst_direct_bs(&temp, val)) != 0)\n\t\t\treturn (rc);\n\t\trc = set_arst(patr, &temp, SET);\n\t\tfree_arst(&temp);\n\t\treturn (rc);\n\n\t} else {\n\t\t/* decode directly into real attribute */\n\t\treturn (decode_arst_direct_bs(patr, val));\n\t}\n}\n\n/**\n * @brief\n * \tencode_arst_bs - encode attr of type ATR_TYPE_ARST into attrlist entry\n *\tUsed in conjunction with decode_arst_bs() and decode_arst_direct_bs()\n *\tfor environment variables that may 
contain commas.\n *\n * Mode ATR_ENCODE_CLIENT - encode strings into single super string\n *\t\t\t    separated by ','\n *\n * Mode ATR_ENCODE_SVR    - treated as above\n *\n * Mode ATR_ENCODE_MOM    - treated as above\n *\n * Mode ATR_ENCODE_HOOK   - treated as above\n *\n * Mode ATR_ENCODE_SAVE - encode strings into single super string\n *\t\t\t  separated by '\\n'\n *\n * @param[in] attr - ptr to attribute to encode\n * @param[in] phead - ptr to head of attrlist list\n * @param[in] atname - attribute name\n * @param[in] rsname - resource name or null\n * @param[in] mode - encode mode\n * @param[out] rtnl - ptr to svrattrl\n *\n * @retval      int\n * @retval      >0      if ok, entry created and linked into list\n * @retval      =0      no value to encode, entry not created\n * @retval      -1      if error\n *\n */\n/*ARGSUSED*/\n\nint\nencode_arst_bs(const attribute *attr, pbs_list_head *phead, char *atname, char *rsname, int mode, svrattrl **rtnl)\n{\n\tchar *end;\n\tint i;\n\tint j;\n\tsvrattrl *pal;\n\tchar *pc;\n\tchar *pfrom;\n\tchar separator;\n\n\tif (!attr)\n\t\treturn (-2);\n\tif (((attr->at_flags & ATR_VFLAG_SET) == 0) || !attr->at_val.at_arst ||\n\t    !attr->at_val.at_arst->as_usedptr)\n\t\treturn (0); /* no values */\n\n\ti = (int) (attr->at_val.at_arst->as_next - attr->at_val.at_arst->as_buf);\n\tseparator = ','; /* normally a comma is the separator */\n\tj = i;\n\n\t/* how many back-slashes are required */\n\n\tfor (pc = attr->at_val.at_arst->as_buf; pc < attr->at_val.at_arst->as_next; ++pc) {\n\t\tif ((*pc == '\"') || (*pc == '\\'') || (*pc == ',') || (*pc == ESC_CHAR))\n\t\t\t++j;\n\t}\n\tpal = attrlist_create(atname, rsname, j);\n\tif (pal == NULL)\n\t\treturn (-1);\n\n\tpal->al_flags = attr->at_flags;\n\n\tpc = pal->al_value;\n\tpfrom = attr->at_val.at_arst->as_buf;\n\n\t/*\n\t * replace nulls between sub-strings with separator characters\n\t * escape any embedded special character\n\t *\n\t * Keep the list of special characters consistent 
with copy_env_value()\n\t * and parse_comma_string_bs().\n\t */\n\n\tend = attr->at_val.at_arst->as_next;\n\twhile (pfrom < end) {\n\t\tif ((*pfrom == '\"') || (*pfrom == '\\'') || (*pfrom == ',') || (*pfrom == ESC_CHAR)) {\n\t\t\t*pc++ = ESC_CHAR;\n\t\t\t*pc = *pfrom;\n\t\t} else if (*pfrom == '\\0') {\n\t\t\t*pc = separator;\n\t\t} else {\n\t\t\t*pc = *pfrom;\n\t\t}\n\t\tpc++;\n\t\tpfrom++;\n\t}\n\n\t/* convert the last null to separator only if going to new-lines */\n\n\t*(pc - 1) = '\\0';\n\tappend_link(phead, &pal->al_link, pal);\n\tif (rtnl)\n\t\t*rtnl = pal;\n\n\treturn (1);\n}\n\n/**\n * @brief\n * set_arst_uniq - set value of attribute of type ATR_TYPE_ARST to another\n * discarding duplicate entries on the INCR operation.\n *\n * @par Functionality:\n * Set value of one attribute of type ATR_TYPE_ARST to another discarding\n * duplicate entries on the INCR operator. For example:\n *\t(A B C) + (D B E) = (A B C D E)\n *\n * The operators are:\n *   SET   A=B --> set of strings in A replaced by set of strings in B\n *\tDone by clearing A and then setting A = A + B\n *   INCR  A+B --> set of strings in B appended to set of strings in A\n *\texcept no duplicates are appended\n *   DECR  A-B --> any string in B found in A is removed from A\n *\tDone via basic set_arst() function\n *\n * @param[in]\tattr - pointer to new attribute to be set (A)\n * @param[in]\tnew  - pointer to attribute (B)\n * @param[in]\top   - operator\n *\n * @return\tint\n * @retval\t PBSE_NONE on success\n * @retval\t PBSE_* on error\n *\n * @par Side Effects: None\n *\n * @par MT-safe: unknown\n *\n */\n\nint\nset_arst_uniq(attribute *attr, attribute *new, enum batch_op op)\n{\n\tint i;\n\tint j;\n\tunsigned long nsize;\n\tunsigned long need;\n\tlong offset;\n\tchar *pc;\n\tlong used;\n\tstruct array_strings *newpas;\n\tstruct array_strings *pas;\n\tstruct array_strings *xpasx;\n\tvoid free_arst(attribute *);\n\n\tassert(attr && new && (new->at_flags &ATR_VFLAG_SET));\n\n\t/* if the 
operation is DECR, just use the normal set_arst() function */\n\tif (op == DECR)\n\t\treturn (set_arst(attr, new, op));\n\n\tpas = attr->at_val.at_arst;  /* old attribute, A */\n\txpasx = new->at_val.at_arst; /* new attribute, B */\n\tif (!xpasx)\n\t\treturn (PBSE_INTERNAL);\n\n\t/* if the operation is SET, free the existing and then INCR (add) new */\n\tif (op == SET) {\n\t\tfree_arst(attr); /* clear old and use INCR to set w/o dups */\n\t\tpas = NULL;\t /* just freed what it was pointing to */\n\t\top = INCR;\n\t}\n\n\tif (!pas) {\n\n\t\t/* no array_strings control structure, make one */\n\n\t\tj = xpasx->as_npointers;\n\t\tif (j < 1)\n\t\t\treturn (PBSE_INTERNAL);\n\t\tneed = sizeof(struct array_strings) + (j - 1) * sizeof(char *);\n\t\tpas = (struct array_strings *) malloc(need);\n\t\tif (!pas)\n\t\t\treturn (PBSE_SYSTEM);\n\t\tpas->as_npointers = j;\n\t\tpas->as_usedptr = 0;\n\t\tpas->as_bufsize = 0;\n\t\tpas->as_buf = NULL;\n\t\tpas->as_next = NULL;\n\t\tattr->at_val.at_arst = pas;\n\t}\n\n\t/*\n\t * At this point we know we have an array_strings struct initialized\n\t * and we are doing the equivalent of the INCR operation\n\t */\n\n\tnsize = xpasx->as_next - xpasx->as_buf; /* space needed */\n\tused = pas->as_next - pas->as_buf;\n\n\tif (nsize > (pas->as_bufsize - used)) {\n\t\tneed = pas->as_bufsize + 2 * nsize; /* alloc new buf */\n\t\tif (pas->as_buf)\n\t\t\tpc = realloc(pas->as_buf, need);\n\t\telse\n\t\t\tpc = malloc(need);\n\t\tif (pc == NULL)\n\t\t\treturn (PBSE_SYSTEM);\n\t\toffset = pc - pas->as_buf;\n\t\tpas->as_buf = pc;\n\t\tpas->as_next = pc + used;\n\t\tpas->as_bufsize = need;\n\n\t\tif (offset != 0) {\n\t\t\tfor (j = 0; j < pas->as_usedptr; j++) /* adjust pointers */\n\t\t\t\tpas->as_string[j] += offset;\n\t\t}\n\t}\n\n\tj = pas->as_usedptr + xpasx->as_usedptr;\n\tif (j > pas->as_npointers) { /* need more pointers */\n\t\tj = 3 * j / 2;\t     /* allocate extra     */\n\t\tneed = sizeof(struct array_strings) + (j - 1) * sizeof(char *);\n\t\tnewpas 
= (struct array_strings *) realloc((char *) pas, need);\n\t\tif (newpas == NULL)\n\t\t\treturn (PBSE_SYSTEM);\n\t\tnewpas->as_npointers = j;\n\t\tpas = newpas;\n\t\tattr->at_val.at_arst = pas;\n\t}\n\n\t/* now append new strings ignoring entries already present */\n\n\tfor (i = 0; i < xpasx->as_usedptr; i++) {\n\t\tfor (j = 0; j < pas->as_usedptr; ++j) {\n\t\t\tif (strcasecmp(xpasx->as_string[i], pas->as_string[j]) == 0)\n\t\t\t\tbreak;\n\t\t}\n\t\tif (j == pas->as_usedptr) {\n\t\t\t/* didn't find this there already, so copy it in */\n\n\t\t\t(void) strcpy(pas->as_next, xpasx->as_string[i]);\n\t\t\tpas->as_string[pas->as_usedptr++] = pas->as_next;\n\t\t\tpas->as_next += strlen(pas->as_next) + 1;\n\t\t}\n\t}\n\n\tpost_attr_set(attr);\n\treturn (0);\n}\n\n/**\n * @brief\n *\tcheck for duplicate entries in a string array\n *\n * @param[in] strarr - the string array to check for duplicates\n *\n * @retval 0 - no duplicate entries found\n * @retval 1 - duplicate entries found\n *\n */\n\nint\ncheck_duplicates(struct array_strings *strarr)\n{\n\tint i, j;\n\n\tif (strarr == NULL)\n\t\treturn 0;\n\n\tfor (i = 0; i < strarr->as_usedptr; i++) {\n\t\tfor (j = i + 1; j < strarr->as_usedptr; j++) {\n\t\t\tif (strcmp(strarr->as_string[i],\n\t\t\t\t   strarr->as_string[j]) == 0)\n\t\t\t\treturn 1;\n\t\t}\n\t}\n\treturn 0;\n}\n\nstruct array_strings *\nget_attr_arst(const attribute *pattr)\n{\n\treturn pattr->at_val.at_arst;\n}\n"
  },
  {
    "path": "src/lib/Libattr/attr_fn_b.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <ctype.h>\n#include <memory.h>\n#ifndef NDEBUG\n#include <stdio.h>\n#endif\n#include <stdlib.h>\n#include <string.h>\n#include \"pbs_ifl.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"pbs_error.h\"\n\n/**\n * @file\tattr_fn_b.c\n * @brief\n * \tThis file contains functions for manipulating attributes of type\n *\tboolean\n * @details\n * Each set has functions for:\n *\tDecoding the value string to the machine representation.\n *\tEncoding the machine representation of the value to a string\n *\tSetting the value by =, + or - operators.\n *\tComparing a (decoded) value with the attribute value.\n *\tFreeing the space malloc-ed to the attribute value.\n *\n * Some or all of the functions for an attribute type may be shared with\n * other attribute types.\n *\n * The prototypes are declared in \"attribute.h\"\n *\n * -------------------------------------------------------\n * Set of General functions for attributes of type boolean\n * -------------------------------------------------------\n */\n\nstatic char *true_val = ATR_TRUE;\nstatic char *false_val = ATR_FALSE;\n\n/**\n * @brief\n * \tis_true_or_false - examine input for possible true/false values\n *\n * @param[in] val - value string\n *\n * @return\tint\n * 
@retval\t1 \tfor true\n * @retval\t0 \tfor false\n * @retval\t-1 \tfor error\n *\n */\n\nint\nis_true_or_false(char *val)\n{\n\tif ((strcmp(val, true_val) == 0) ||\n\t    (strcmp(val, \"TRUE\") == 0) ||\n\t    (strcmp(val, \"true\") == 0) ||\n\t    (strcmp(val, \"t\") == 0) ||\n\t    (strcmp(val, \"T\") == 0) ||\n\t    (strcmp(val, \"1\") == 0) ||\n\t    (strcmp(val, \"y\") == 0) ||\n\t    (strcmp(val, \"Y\") == 0))\n\t\treturn 1; /* true */\n\telse if ((strcmp(val, false_val) == 0) ||\n\t\t (strcmp(val, \"FALSE\") == 0) ||\n\t\t (strcmp(val, \"false\") == 0) ||\n\t\t (strcmp(val, \"f\") == 0) ||\n\t\t (strcmp(val, \"F\") == 0) ||\n\t\t (strcmp(val, \"0\") == 0) ||\n\t\t (strcmp(val, \"n\") == 0) ||\n\t\t (strcmp(val, \"N\") == 0))\n\t\treturn 0; /* false */\n\telse\n\t\treturn (-1);\n}\n/**\n * @brief\n * \tdecode_b - decode string into boolean attribute\n *\n *\tString of \"1\" decodes to true, all else to false\n *\n * @param[in] patr - ptr to attribute to decode\n * @param[in] name - attribute name\n * @param[in] rescn - resource name or null\n * @param[out] val - string holding values for attribute structure\n *\n * @retval      int\n * @retval      0       if ok\n * @retval      >0      error number1 if error,\n * @retval      *patr   members set\n *\n */\n\nint\ndecode_b(attribute *patr, char *name, char *rescn, char *val)\n{\n\tint i;\n\n\tif ((val == NULL) || (strlen(val) == 0)) {\n\t\tATR_UNSET(patr);\n\t\tpatr->at_val.at_long = 0; /* default to false */\n\t} else {\n\t\tif ((i = is_true_or_false(val)) != -1)\n\t\t\tpatr->at_val.at_long = i;\n\t\telse\n\t\t\treturn (PBSE_BADATVAL);\n\t\tpost_attr_set(patr);\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n * \tencode_b - encode attribute of type ATR_TYPE_BOOL to attr_extern\n *\n * @param[in] patr - ptr to attribute to decode\n * @param[in] name - attribute name\n * @param[in] rescn - resource name or null\n * @param[out] val - string holding values for attribute structure\n *\n * @retval      int\n * @retval    
  0       if ok\n * @retval      >0      error number1 if error,\n * @retval      *patr   members set\n *\n */\n\n/*ARGSUSED*/\n\nint\nencode_b(const attribute *attr, pbs_list_head *phead, char *atname, char *rsname, int mode, svrattrl **rtnl)\n{\n\tsize_t ct;\n\tsvrattrl *pal;\n\tchar *value;\n\n\tif (!attr)\n\t\treturn (-1);\n\tif (!(attr->at_flags & ATR_VFLAG_SET))\n\t\treturn (0);\n\n\tif (attr->at_val.at_long) {\n\t\tvalue = true_val;\n\t} else {\n\t\tvalue = false_val;\n\t}\n\tct = strlen(value) + 1;\n\n\tpal = attrlist_create(atname, rsname, ct);\n\tif (pal == NULL)\n\t\treturn (-1);\n\tstrcpy(pal->al_value, value);\n\tpal->al_flags = attr->at_flags;\n\n\tif (phead)\n\t\tappend_link(phead, &pal->al_link, pal);\n\tif (rtnl)\n\t\t*rtnl = pal;\n\treturn (1);\n}\n\n/**\n * @brief\n * \tset_b - set attribute of type ATR_TYPE_BOOL\n *\n *\tA=B --> A set to value of B\n *\tA+B --> A = A | B  (inclusive or, turn on)\n *\tA-B --> A = A & ~B  (and not, clear)\n *\n * @param[in]   attr - pointer to new attribute to be set (A)\n * @param[in]   new  - pointer to attribute (B)\n * @param[in]   op   - operator\n *\n * @return      int\n * @retval      0       if ok\n * @retval     >0       if error\n *\n */\n\nint\nset_b(attribute *attr, attribute *new, enum batch_op op)\n{\n\tassert(attr && new && (new->at_flags &ATR_VFLAG_SET));\n\n\tswitch (op) {\n\t\tcase SET:\n\t\t\tattr->at_val.at_long = new->at_val.at_long;\n\t\t\tbreak;\n\n\t\tcase INCR:\n\t\t\tattr->at_val.at_long =\n\t\t\t\tattr->at_val.at_long | new->at_val.at_long; /* \"or\" */\n\t\t\tbreak;\n\n\t\tcase DECR:\n\t\t\tattr->at_val.at_long = attr->at_val.at_long &\n\t\t\t\t\t       ~new->at_val.at_long;\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\treturn (PBSE_INTERNAL);\n\t}\n\tpost_attr_set(attr);\n\treturn (0);\n}\n\n/**\n * @brief\n *\tcomp_b - compare two attributes of type ATR_TYPE_BOOL\n *\n * @param[in] attr - pointer to attribute structure\n * @param[in] with - pointer to attribute structure\n *\n * @return      
int\n * @retval      0       if \"attr\" and \"with\" have the same boolean value\n * @retval      1       otherwise\n *\n */\n\nint\ncomp_b(attribute *attr, attribute *with)\n{\n\tif (!attr || !with)\n\t\treturn (1);\n\tif (((attr->at_val.at_long == 0) && (with->at_val.at_long == 0)) ||\n\t    ((attr->at_val.at_long != 0) && (with->at_val.at_long != 0)))\n\t\treturn (0);\n\telse\n\t\treturn (1);\n}\n\n/*\n * free_b - use free_null() to (not) free space\n */\n\n/**\n * @brief\tAttribute setter function for boolean type values\n *\n * @param[in]\tpattr\t-\tpointer to attribute being set\n * @param[in]\tval\t-\tvalue to be set\n * @param[in]\top\t\t-\toperation to do\n *\n * @return\tvoid\n *\n * @par MT-Safe: No\n * @par Side Effects: None\n *\n */\nvoid\nset_attr_b(attribute *pattr, long val, enum batch_op op)\n{\n\tswitch (op) {\n\t\tcase SET:\n\t\t\tpattr->at_val.at_long = val;\n\t\t\tbreak;\n\n\t\tcase INCR:\n\t\t\tpattr->at_val.at_long = pattr->at_val.at_long | val; /* \"or\" */\n\t\t\tbreak;\n\n\t\tcase DECR:\n\t\t\tpattr->at_val.at_long = pattr->at_val.at_long & ~val;\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\treturn;\n\t}\n\tpost_attr_set(pattr);\n}\n"
  },
  {
    "path": "src/lib/Libattr/attr_fn_c.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <ctype.h>\n#include <memory.h>\n#ifndef NDEBUG\n#include <stdio.h>\n#endif\n#include <stdlib.h>\n#include <string.h>\n#include \"pbs_ifl.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"pbs_error.h\"\n\n/**\n * @file\tattr_fn_c.c\n * @brief\n * \tThis file contains functions for manipulating attributes\n *\tcharacter, a single\n * @details\n * Each set has functions for:\n *\tDecoding the value string to the machine representation.\n *\tEncoding the machine representation of the attribute to external form\n *\tSetting the value by =, + or - operators.\n *\tComparing a (decoded) value with the attribute value.\n *\n * Some or all of the functions for an attribute type may be shared with\n * other attribute types.\n *\n * The prototypes are declared in \"attribute.h\"\n *\n * --------------------------------------------------\n * The Set of Attribute Functions for attributes with\n * value type \"char\"\n * --------------------------------------------------\n */\n\n/**\n * @brief\n * \tdecode_c - decode first character of string into attribute structure\n *\n * @param[in] patr - ptr to attribute to decode\n * @param[in] name - attribute name\n * @param[in] rescn - resource name or null\n * @param[out] val - string holding 
values for attribute structure\n *\n * @retval      int\n * @retval      0       if ok\n * @retval      >0      error number1 if error,\n * @retval      *patr   members set\n *\n */\n\nint\ndecode_c(attribute *patr, char *name, char *rescn, char *val)\n{\n\tif ((val == NULL) || (strlen(val) == 0)) {\n\t\tATR_UNSET(patr);\n\t\tpatr->at_val.at_char = '\\0';\n\t} else {\n\t\tpost_attr_set(patr);\n\t\tpatr->at_val.at_char = *val;\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n * \tencode_c - encode attribute of type character into attr_extern\n *\n * @param[in] attr - ptr to attribute to encode\n * @param[in] phead - ptr to head of attrlist list\n * @param[in] atname - attribute name\n * @param[in] rsname - resource name or null\n * @param[in] mode - encode mode\n * @param[out] rtnl - ptr to svrattrl\n *\n * @retval      int\n * @retval      >0      if ok, entry created and linked into list\n * @retval      =0      no value to encode, entry not created\n * @retval      -1      if error\n *\n */\n/*ARGSUSED*/\n\nint\nencode_c(const attribute *attr, pbs_list_head *phead, char *atname, char *rsname, int mode, svrattrl **rtnl)\n\n{\n\n\tsvrattrl *pal;\n\n\tif (!attr)\n\t\treturn (-1);\n\tif (!(attr->at_flags & ATR_VFLAG_SET))\n\t\treturn (0);\n\n\tpal = attrlist_create(atname, rsname, 2);\n\tif (pal == NULL)\n\t\treturn (-1);\n\n\t*pal->al_value = attr->at_val.at_char;\n\t*(pal->al_value + 1) = '\\0';\n\tpal->al_flags = attr->at_flags;\n\tif (phead)\n\t\tappend_link(phead, &pal->al_link, pal);\n\tif (rtnl)\n\t\t*rtnl = pal;\n\n\treturn (1);\n}\n\n/**\n * @brief\n * \tset_c - set attribute A to attribute B,\n *\teither A=B, A += B, or A -= B\n *\n * @param[in]   attr - pointer to new attribute to be set (A)\n * @param[in]   new  - pointer to attribute (B)\n * @param[in]   op   - operator\n *\n * @return      int\n * @retval      0       if ok\n * @retval     >0       if error\n *\n */\n\nint\nset_c(attribute *attr, attribute *new, enum batch_op op)\n{\n\tassert(attr && new && 
(new->at_flags &ATR_VFLAG_SET));\n\n\tswitch (op) {\n\t\tcase SET:\n\t\t\tattr->at_val.at_char = new->at_val.at_char;\n\t\t\tbreak;\n\n\t\tcase INCR:\n\t\t\tattr->at_val.at_char =\n\t\t\t\t(char) ((int) attr->at_val.at_char +\n\t\t\t\t\t(int) new->at_val.at_char);\n\t\t\tbreak;\n\n\t\tcase DECR:\n\t\t\tattr->at_val.at_char =\n\t\t\t\t(char) ((int) attr->at_val.at_char -\n\t\t\t\t\t(int) new->at_val.at_char);\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\treturn (PBSE_INTERNAL);\n\t}\n\tpost_attr_set(attr);\n\treturn (0);\n}\n\n/**\n * @brief\n * \tcomp_c - compare two attributes of type character\n *\n * @param[in] attr - pointer to attribute structure\n * @param[in] with - pointer to attribute structure\n *\n * @return      int\n * @retval      -1      if \"attr\" is less than \"with\", or on null input\n * @retval      1       if \"attr\" is greater than \"with\"\n * @retval      0       if equal\n *\n */\n\nint\ncomp_c(attribute *attr, attribute *with)\n{\n\tif (!attr || !with)\n\t\treturn (-1);\n\tif (attr->at_val.at_char < with->at_val.at_char)\n\t\treturn (-1);\n\telse if (attr->at_val.at_char > with->at_val.at_char)\n\t\treturn (1);\n\telse\n\t\treturn (0);\n}\n\n/*\n * free_c - use free_null() to (not) free space\n */\n\n/**\n * @brief\tAttribute setter function for char type values\n *\n * @param[in]\tpattr\t-\tpointer to attribute being set\n * @param[in]\tvalue\t-\tvalue to be set\n * @param[in]\top\t\t-\toperation to do\n *\n * @return\tvoid\n *\n * @par MT-Safe: No\n * @par Side Effects: None\n *\n */\nvoid\nset_attr_c(attribute *pattr, char value, enum batch_op op)\n{\n\tif (pattr == NULL) {\n\t\tlog_err(-1, __func__, \"Invalid pointer to attribute\");\n\t\treturn;\n\t}\n\n\tswitch (op) {\n\t\tcase SET:\n\t\t\tpattr->at_val.at_char = value;\n\t\t\tbreak;\n\t\tcase INCR:\n\t\t\tpattr->at_val.at_char += value;\n\t\t\tbreak;\n\t\tcase DECR:\n\t\t\tpattr->at_val.at_char -= value;\n\t\t\tbreak;\n\t\tdefault:\n\t\t\treturn;\n\t}\n\n\tpost_attr_set(pattr);\n}\n\nvoid\nset_attr_short(attribute *pattr, short value, enum 
batch_op op)\n{\n\tif (pattr == NULL) {\n\t\tlog_err(-1, __func__, \"Invalid pointer to attribute\");\n\t\treturn;\n\t}\n\n\tswitch (op) {\n\t\tcase SET:\n\t\t\tpattr->at_val.at_short = value;\n\t\t\tbreak;\n\t\tcase INCR:\n\t\t\tpattr->at_val.at_short += value;\n\t\t\tbreak;\n\t\tcase DECR:\n\t\t\tpattr->at_val.at_short -= value;\n\t\t\tbreak;\n\t\tdefault:\n\t\t\treturn;\n\t}\n\n\tpost_attr_set(pattr);\n}\n\n/**\n * @brief\tAttribute getter function for char type values\n *\n * @param[in]\tpattr\t-\tpointer to the attribute\n *\n * @return\tchar\n * @retval\tchar value of the attribute\n *\n * @par MT-Safe: No\n * @par Side Effects: None\n */\nchar\nget_attr_c(const attribute *pattr)\n{\n\treturn pattr->at_val.at_char;\n}\n"
  },
  {
    "path": "src/lib/Libattr/attr_fn_entlim.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <ctype.h>\n#include <memory.h>\n#ifndef NDEBUG\n#include <stdio.h>\n#endif\n#include <stdlib.h>\n#include <string.h>\n#include <strings.h>\n#include <sys/types.h>\n#include <pbs_ifl.h>\n#include \"log.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"pbs_error.h\"\n#include \"pbs_entlim.h\"\n\n/**\n * @file\tattr_fn_entlim.c\n * @brief\n * \tThis file contains functions for manipulating attributes of type \"entlim\"\n * \tentity limits for Finer Granularity Control (FGC)\n * @details\n * The entities are maintained in an index tree for fast searching,\n *\tsee attr_entity in attribute.h.\n * The \"key\" is the entity+resource and the corresponding data is\n *\tan \"fgc union\", see resource.h.\n */\n\nvoid free_entlim(attribute *); /* found in lib/Libattr/attr_fn_entlim.c */\n\n/**\n * @brief\n *\tFree a server style entity-limit leaf from the tree;  does not free\n *\tthe key associated with the leaf.\n *\n * @param[in] pvdlf - pointer to the leaf; void type to allow for indirect\n *\t\tcalls to this or similar functions.\n *\n * @return\tVoid\n *\n */\n\nstatic void\nsvr_freeleaf(void *pvdlf)\n{\n\tsvr_entlim_leaf_t *plf = pvdlf;\n\n\tif (plf) 
{\n\t\tplf->slf_rescd->rs_free(&plf->slf_limit);\n\t\tplf->slf_rescd->rs_free(&plf->slf_sum);\n\t\tfree(plf);\n\t}\n}\n\n/**\n * @brief\n * \tdup_svr_entlim_leaf - duplicate the leaf data (a svr_entlim_leaf struct)\n *\tUsed when adding an entry from one context (tree) to another, i.e.\n *\tin set_entlim():INCR\n *\n *\tWARNING: this simple code works only because we are allowing only\n *\tdata such as integers, floats, sizes; that is self contained within\n *\tthe attribute structure (no external data as needed for strings, ...)\n *\tI.e. it is doing a structure to structure shallow copy.\n *\n * @param[in] orig - pointer to the original svr_entlim_leaf structure\n *\n * @return pointer to new svr_entlim_leaf structure\n *\n */\n\nsvr_entlim_leaf_t *\ndup_svr_entlim_leaf(svr_entlim_leaf_t *orig)\n{\n\tsvr_entlim_leaf_t *newlf;\n\n\tnewlf = malloc(sizeof(svr_entlim_leaf_t));\n\tif (newlf)\n\t\t*newlf = *orig;\n\treturn (newlf);\n}\n\n/**\n * @brief\n * \talloc_svrleaf - allocate memory for Server entity leaf and do basic\n *\tinitialization\n *\n * @param[in]\tresc_name - either (1) the name of the limited resource for\n *\tmax_queued_resc and such, or (2) is NULL for the job count attributes\n *\tsuch as max_queued.\n * @param[out] pplf - address of a pointer to a svr_entlim_leaf, set to\n *\tnewly allocated memory\n *\n * @return  int\n * @retval 0 - success\n * @retval PBSE_UNKRESC - resource name is unknown\n * @retval PBSE_SYSTEM  - unable to allocate memory\n *\n */\n\nint\nalloc_svrleaf(char *resc_name, svr_entlim_leaf_t **pplf)\n{\n\tresource_def *prdef;\n\tsvr_entlim_leaf_t *plf;\n\n\tif (resc_name == NULL) /* use \"ncpus\" resource_def for the various functions\t*/\n\t\tprdef = &svr_resc_def[RESC_NCPUS];\n\telse\n\t\tprdef = find_resc_def(svr_resc_def, resc_name);\n\n\tif (prdef == NULL)\n\t\treturn PBSE_UNKRESC;\n\n\tplf = malloc(sizeof(svr_entlim_leaf_t));\n\tif (plf == NULL)\n\t\treturn PBSE_SYSTEM;\n\n\tmemset((void *) plf, 0, 
sizeof(svr_entlim_leaf_t));\n\tplf->slf_rescd = prdef;\n\t*pplf = plf;\n\treturn (PBSE_NONE);\n}\n\n/**\n * @brief\n * \tsvr_addleaf - add an entity limit leaf to the specified context (tree)\n *\tand set the slf_limit (server leaf) member.  Also sets\n *\tPBS_ELNTLIM_LIMITSET flag in the resource_def structure for the\n *\tresource (if one).  Used only by the Server.\n *\n * @param[in] ctx - pointer to \"context\" - i.e. the tree\n * @param[in] kt  - the entity type enum value\n * @param[in] fulent - the letter associated with the entity type\n * @param[in] entity - the entity name\n * @param[in] rescn  - the resource name, may be null for simple counts\n * @param[in] value  - the resource or count value\n *\n * @return int\n * @retval 0 on success\n * @retval PBSE_ error number on error\n *\n */\n\nint\nsvr_addleaf(void *ctx, enum lim_keytypes kt, char *fulent, char *entity,\n\t    char *rescn, char *value)\n{\n\tchar *kstr;\n\tsvr_entlim_leaf_t *plf = NULL;\n\tint rc;\n\n\tif (rescn == NULL) {\n\t\t/* use \"ncpus\" resource_def for the various functions\t*/\n\t\t/* as it is simple integer type needed here\t\t*/\n\t\tkstr = entlim_mk_runkey(kt, entity);\n\t} else {\n\t\tkstr = entlim_mk_reskey(kt, entity, rescn);\n\t}\n\n\tif (kstr == NULL)\n\t\treturn (PBSE_UNKRESC);\n\n\tif ((rc = alloc_svrleaf(rescn, &plf)) != PBSE_NONE) {\n\t\tfree(kstr);\n\t\treturn (rc);\n\t}\n\n\trc = plf->slf_rescd->rs_decode(&plf->slf_limit, NULL, rescn, value);\n\tif (rc != 0) {\n\t\tfree(kstr);\n\t\tfree(plf);\n\t\treturn rc;\n\t}\n\n\t/* flag that limits are set for this resource name */\n\tif (rescn != NULL)\n\t\tplf->slf_rescd->rs_entlimflg |= PBS_ENTLIM_LIMITSET;\n\n\t/* add key+record */\n\trc = entlim_add(kstr, (void *) plf, ctx);\n\tif (rc != 0) {\n\t\tsvr_freeleaf(plf);\n\t}\n\tfree(kstr); /* all cases, free the key string */\n\treturn (rc);\n}\n\n/**\n * @brief\n * \tinternal_decode_entlim - decode a \"attribute name/optional resource/value\"\n *\t\tset into a entity type 
attribute\n *\tUsed by decode_entlim() and decode_entlim_res() to do the real work\n *\n * @param[in]  patr - pointer to attribute value into which we are decoding\n *\t\t\tattribute structure is modified/set\n * @param[in]  name - attribute name, not used\n * @param[in]  rn   - resource name\n * @param[in]  prdef - pointer to resource definition\n * @param[in]  val - string to be decoded as the attribute value\n *\n * @return int\n * @retval  0 on success\n * @retval  PBSE_* on error\n *\n */\n\nstatic int\ninternal_decode_entlim(attribute *patr, char *name, char *rn,\n\t\t       struct resource_def *prdef, char *val)\n{\n\tvoid *petree;\n\tint rc = 0;\n\tchar *valcopy;\n\n\tif ((patr->at_flags & ATR_VFLAG_SET) ||\n\t    (patr->at_val.at_enty.ae_tree != NULL))\n\t\tfree_entlim(patr);\n\n\t/* create header for tree,  no duplicate keys and variable length key */\n\n\tpetree = entlim_initialize_ctx();\n\tif (petree == NULL)\n\t\treturn PBSE_SYSTEM;\n\n\t/* entlim_parse munges the input string, so give it a copy */\n\tvalcopy = strdup(val);\n\tif (valcopy == NULL) {\n\t\t(void) entlim_free_ctx(petree, svr_freeleaf);\n\t\treturn PBSE_SYSTEM;\n\t}\n\trc = entlim_parse(valcopy, rn, petree, svr_addleaf);\n\tfree(valcopy);\n\tif (rc != 0) {\n\t\t(void) entlim_free_ctx(petree, svr_freeleaf);\n\t\treturn (PBSE_BADATVAL);\n\t}\n\tpatr->at_val.at_enty.ae_tree = petree;\n\tpost_attr_set(patr);\n\n\treturn (0);\n}\n\n/**\n * @brief\n * \tdecode_entlim - decode an \"attribute name/value\" pair into\n *\tan entity count type attribute (without resource)\n *\tThe value is of the form \"[L:Ename=Rvalue],...\"\n *\twhere L is 'u' (user), 'g' (group), or 'o' (overall)\n *\tEname is a user or group name or \"PBS_ALL\"\n *\tRvalue is an integer value such as \"10\"\n *\n * @param[in] \tpatr\tpointer to attribute to be set\n * @param[in] \tname\tattribute name (not used)\n * @param[in] \trescn\tresource name - should be null\n * @param[in] \tval\tstring to decode as the entity value\n 
*\n * @return int\n * @retval 0 - success\n * @retval non zero - PBSE_* error number\n *\n */\n\nint\ndecode_entlim(attribute *patr, char *name, char *rescn, char *val)\n{\n\n\tif (patr == NULL)\n\t\treturn (PBSE_INTERNAL);\n\tif (rescn != NULL)\n\t\treturn (PBSE_INTERNAL);\n\n\treturn (internal_decode_entlim(patr, name, NULL, NULL, val));\n}\n\n/**\n * @brief\n * \tdecode_entlim_res - decode an \"attribute name/resource name/value\" triplet\n *\tinto an entity type attribute (with resource)\n *\tThe value is of the form \"[L:Ename=Rvalue],...\"\n *\twhere L is 'u' (user), 'g' (group), or 'o' (overall)\n *\tEname is a user or group name or \"PBS_ALL\"\n *\tRvalue is a resource value such as \"10\" or \"4gb\"\n *\n * @param[in] \tpatr\tpointer to attribute to be set\n * @param[in] \tname\tattribute name (not used)\n * @param[in] \trescn\tresource name - must not be null\n * @param[in] \tval\tstring to decode as the entity value\n *\n * @return int\n * @retval 0 - success\n * @retval non zero - PBSE_* error number\n *\n */\n\nint\ndecode_entlim_res(attribute *patr, char *name, char *rescn, char *val)\n{\n\tresource_def *prdef;\n\n\tif (patr == NULL)\n\t\treturn (PBSE_INTERNAL);\n\tif (rescn == NULL)\n\t\treturn (PBSE_UNKRESC);\n\tprdef = find_resc_def(svr_resc_def, rescn);\n\tif (prdef == NULL) {\n\t\t/*\n\t\t * didn't find resource with matching name\n\t\t * return PBSE_UNKRESC\n\t\t */\n\t\treturn (PBSE_UNKRESC);\n\t}\n\tif ((prdef->rs_type != ATR_TYPE_LONG) &&\n\t    (prdef->rs_type != ATR_TYPE_SIZE) &&\n\t    (prdef->rs_type != ATR_TYPE_LL) &&\n\t    (prdef->rs_type != ATR_TYPE_SHORT) &&\n\t    (prdef->rs_type != ATR_TYPE_FLOAT))\n\t\treturn (PBSE_INVALJOBRESC);\n\n\treturn (internal_decode_entlim(patr, name, rescn, NULL, val));\n}\n\n/**\n * @brief\n * \tencode_entlim_db - encode attr of type ATR_TYPE_ENTITY into a form suitable\n * \tto be stored as a single record into the database.\n *\n * Here we are a little different from the typical attribute.  
Most have a\n * single value to be encoded.  But an entity attribute may have a whole bunch.\n * First get the name of the parent attribute.\n * Then for each entry in the tree, call the individual resource encode\n * routine with \"aname\" set to the parent attribute name and with a null\n * pbs_list_head .  The encoded resource value is then prepended with the \"entity\n * string\" and \"=\" character which is then concatenated together to create a\n * single value string for the entire attribute value. As we find a new pair of\n * \"attribute_name+resc_name\", we add to a list where we continue to assemble\n * the value strings.\n *\n * Note: entities with an \"unset\" value will not be encoded.\n *\n *\n *\tReturns: >0 if ok\n *\t\t =0 if no value to encode, no entries added to list\n *\t\t <0 if some resource entry had an encode error.\n *\n * @param[in] \tattr\tpointer to attribute to encode\n * @param[in]\tphead\thead of attrlist list onto which the encoded is appended\n * @param[in] \tatname\tattribute name\n * @param[in] \trsname\tresource name, null on call\n * @param[in] \tmode\tencode mode\n * @param[out]  rtnl\taddress of pointer to encoded svrattrl entry which\n *\t\t\tis also appended to list\n *\n * @return int\n * @retval >0 - success\n * @retval =0  - no value to encode, nothing added to list (phead)\n * @retval <0 - if some entry had an encode error\n *\n */\n\nint\nencode_entlim_db(const attribute *attr, pbs_list_head *phead, char *atname, char *rsname, int mode, svrattrl **rtnl)\n{\n\tvoid *ctx;\n\tchar *key = NULL;\n\tchar rescn[PBS_MAX_RESC_NAME + 1];\n\tchar etname[PBS_MAX_RESC_NAME + 1];\n\tchar *pc;\n\tint needquotes;\n\tsvrattrl *pal;\n\tsvrattrl *tmpsvl;\n\tint len = 0;\n\tsvr_entlim_leaf_t *plf;\n\tchar *pos = NULL, *p;\n\tint oldlen = 0;\n\tsvrattrl *xprior = NULL;\n\n\t/*\n\t * structure to hold the various entity attributes along with their\n\t * concatenated values, as we walk the tree\n\t */\n\tstruct db_attrib 
{\n\t\tchar atname[PBS_MAX_RESC_NAME];\n\t\tchar rescn[PBS_MAX_RESC_NAME];\n\t\tchar *val;\n\t};\n\tstruct db_attrib *db_attrlist = NULL;\n\tint cursize = 0;\n\tint index = 0;\n\n\tif (!attr)\n\t\treturn (-1);\n\tif (!(attr->at_flags & ATR_VFLAG_SET))\n\t\treturn (0); /* nothing up the tree */\n\n\tctx = attr->at_val.at_enty.ae_tree;\n\n\t/* ok, now process each separate entry in the tree */\n\twhile ((plf = entlim_get_next(ctx, (void **) &key)) != NULL) {\n\n\t\trescn[0] = '\\0';\n\t\tneedquotes = 0;\n\n\t\tif ((entlim_entity_from_key(key, etname, PBS_MAX_RESC_NAME) == 0) &&\n\t\t    (entlim_resc_from_key(key, rescn, PBS_MAX_RESC_NAME) >= 0)) {\n\n\t\t\t/* decode leaf value into a local svrattrl structure in */\n\t\t\t/* order to obtain a string representation of the value */\n\n\t\t\tif (plf->slf_rescd->rs_encode(&plf->slf_limit, NULL, atname, rescn, mode, &tmpsvl) > 0) {\n\n\t\t\t\t/* find out if this etname + rescn pair is created already, if not create an attribute */\n\t\t\t\tfor (index = 0; index < cursize; index++) {\n\t\t\t\t\tif ((strcmp(db_attrlist[index].atname, atname) == 0) &&\n\t\t\t\t\t    (strcmp(db_attrlist[index].rescn, rescn) == 0)) {\n\t\t\t\t\t\t/* found the resource or NULL resource */\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (index == cursize) {\n\t\t\t\t\tcursize++;\n\t\t\t\t\tif (!(p = realloc(db_attrlist, sizeof(struct db_attrib) * cursize)))\n\t\t\t\t\t\tgoto err;\n\t\t\t\t\tdb_attrlist = (struct db_attrib *) p;\n\t\t\t\t\tstrcpy(db_attrlist[index].atname, atname);\n\t\t\t\t\tstrcpy(db_attrlist[index].rescn, rescn);\n\t\t\t\t\tdb_attrlist[index].val = NULL;\n\t\t\t\t}\n\n\t\t\t\t/* Allocate the \"real\" svrattrl sufficiently large to     */\n\t\t\t\t/* hold the form \"[l:entity;rname=value_string]\" plus one */\n\t\t\t\t/* and assemble the real value into the real svrattrl     */\n\n\t\t\t\t/* [u:=]  null = 6 extra characters */\n\t\t\t\tlen = tmpsvl->al_valln + strlen(etname) + 6;\n\n\t\t\t\t/* is there whitespace in the 
entity name ? */\n\t\t\t\t/* if so, then we quote the whole thing.    */\n\t\t\t\tpc = etname;\n\t\t\t\twhile (*pc) {\n\t\t\t\t\tif (isspace((int) *pc++)) {\n\t\t\t\t\t\tneedquotes = 1;\n\t\t\t\t\t\tlen += 2;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif (!db_attrlist[index].val) {\n\t\t\t\t\tif (!(db_attrlist[index].val = malloc(len)))\n\t\t\t\t\t\tgoto err;\n\t\t\t\t\tpos = db_attrlist[index].val;\n\t\t\t\t} else {\n\t\t\t\t\toldlen = strlen(db_attrlist[index].val);\n\t\t\t\t\t/* add old length + space for comma to total len */\n\t\t\t\t\tlen = len + oldlen + 1;\n\t\t\t\t\tif (!(p = realloc(db_attrlist[index].val, len)))\n\t\t\t\t\t\tgoto err;\n\t\t\t\t\tdb_attrlist[index].val = p;\n\t\t\t\t\tstrcat(db_attrlist[index].val, \",\");\n\t\t\t\t\tpos = db_attrlist[index].val + oldlen + 1;\n\t\t\t\t}\n\n\t\t\t\tif (needquotes) {\n\t\t\t\t\tsprintf(pos, \"[%c:\\\"%s\\\"=%s]\", *key, etname, tmpsvl->al_atopl.value);\n\t\t\t\t} else {\n\t\t\t\t\tsprintf(pos, \"[%c:%s=%s]\", *key, etname, tmpsvl->al_atopl.value);\n\t\t\t\t}\n\t\t\t\tfree(tmpsvl);\n\t\t\t}\n\t\t}\n\t}\n\n\t/*\n\t * now we are done with the tree and should have assembled the strings\n\t * for the various attributes. 
Walk this array and create the real\n\t * attribute list\n\t */\n\tfor (index = 0; index < cursize; index++) {\n\t\tlen = strlen(db_attrlist[index].val) + 1;\n\t\tif (db_attrlist[index].rescn[0] == '\\0')\n\t\t\tpal = attrlist_create(db_attrlist[index].atname, NULL, len);\n\t\telse\n\t\t\tpal = attrlist_create(db_attrlist[index].atname, db_attrlist[index].rescn, len);\n\n\t\tstrcpy(pal->al_atopl.value, db_attrlist[index].val);\n\t\tfree(db_attrlist[index].val);\n\t\tpal->al_flags = attr->at_flags;\n\t\t/* op is not stored in db, so no need to set it */\n\n\t\tif (phead)\n\t\t\tappend_link(phead, &pal->al_link, pal);\n\n\t\tif (index == 0) {\n\t\t\tif (rtnl)\n\t\t\t\t*rtnl = pal;\n\t\t} else {\n\t\t\txprior->al_sister = pal;\n\t\t}\n\t\txprior = pal;\n\t}\n\t/* finally free the whole db_attrlist */\n\tif (db_attrlist)\n\t\tfree(db_attrlist);\n\n\treturn (cursize);\n\nerr:\n\t/* walk the array and free every set index */\n\tif (db_attrlist) {\n\t\tfor (index = 0; index < cursize; index++) {\n\t\t\tif (db_attrlist[index].val)\n\t\t\t\tfree(db_attrlist[index].val);\n\t\t}\n\t\tfree(db_attrlist);\n\t}\n\n\treturn (-1);\n}\n\n/**\n * @brief\n * \tencode_entlim - encode attr of type ATR_TYPE_ENTITY into attr_extern form\n *\n * Here we are a little different from the typical attribute.  Most have a\n * single value to be encoded.  But an entity attribute may have a whole bunch.\n * First get the name of the parent attribute.\n * Then for each entry in the tree, call the individual resource encode\n * routine with \"aname\" set to the parent attribute name and with a null\n * pbs_list_head .  
The encoded resource value is then prepended with the \"entity\n * string\" and \"=\" character which is then placed in a new svrattrl entry\n * which is then added to the real list head.\n *\n * Note: entities with an \"unset\" value will not be encoded.\n *\n *\n * @param[in] \tattr\tpointer to attribute to encode\n * @param[in]\tphead\thead of attrlist list onto which the encoded is appended\n * @param[in] \tatname\tattribute name\n * @param[in] \trsname\tresource name, null on call\n * @param[in] \tmode\tencode mode\n * @param[out]  rtnl\taddress of pointer to encoded svrattrl entry which\n *\t\t\tis also appended to list\n * @return int\n * @retval  >0 if ok\n * @retval  =0 if no value to encode, no entries added to list\n * @retval  <0 if some resource entry had an encode error.\n *\n */\nint\nencode_entlim(const attribute *attr, pbs_list_head *phead, char *atname, char *rsname, int mode, svrattrl **rtnl)\n{\n\tvoid *ctx;\n\tint grandtotal = 0;\n\tint first = 1;\n\tsvrattrl *xprior = NULL;\n\tchar *key = NULL;\n\tchar rescn[PBS_MAX_RESC_NAME + 1];\n\tchar etname[PBS_MAX_RESC_NAME + 1];\n\tchar *pc;\n\tint needquotes;\n\tsvrattrl *pal;\n\tsvrattrl *tmpsvl;\n\tint len;\n\tenum batch_op op = SET;\n\tsvr_entlim_leaf_t *plf;\n\tchar **rescn_array;\n\tchar **temp_rescn_array;\n\tint index = 0;\n\tint i = 0;\n\tint array_size = ENCODE_ENTITY_MAX;\n\n\tif (mode == ATR_ENCODE_DB)\n\t\treturn (encode_entlim_db(attr, phead, atname, rsname, mode, rtnl));\n\n\tif (!attr)\n\t\treturn (-1);\n\tif (!(attr->at_flags & ATR_VFLAG_SET))\n\t\treturn (0); /* nothing up the tree */\n\n\tctx = attr->at_val.at_enty.ae_tree;\n\n\trescn_array = malloc(array_size * sizeof(char *));\n\tif (rescn_array == NULL)\n\t\treturn (PBSE_SYSTEM);\n\n\t/* ok, now process each separate entry in the tree */\n\twhile ((plf = entlim_get_next(ctx, (void **) &key)) != NULL) {\n\n\t\trescn[0] = '\\0';\n\t\tneedquotes = 0;\n\n\t\tif ((entlim_entity_from_key(key, etname, PBS_MAX_RESC_NAME) == 0) &&\n\t\t   
 (entlim_resc_from_key(key, rescn, PBS_MAX_RESC_NAME) >= 0)) {\n\n\t\t\t/* decode leaf value into a local svrattrl structure in */\n\t\t\t/* order to obtain a string representation of the value */\n\n\t\t\tif (plf->slf_rescd->rs_encode(&plf->slf_limit, NULL, atname, rescn, mode, &tmpsvl) > 0) {\n\n\t\t\t\t/* Allocate the \"real\" svrattrl sufficiently large to     */\n\t\t\t\t/* hold the form \"[l:entity;rname=value_string]\" plus one */\n\t\t\t\t/* and assemble the real value into the real svrattrl     */\n\n\t\t\t\t/* [u:=]  null = 6 extra characters */\n\t\t\t\tlen = tmpsvl->al_valln + strlen(etname) + 6;\n\n\t\t\t\t/* is there whitespace in the entity name ? */\n\t\t\t\t/* if so, then we quote the whole thing.    */\n\t\t\t\tpc = etname;\n\t\t\t\twhile (*pc) {\n\t\t\t\t\tif (isspace((int) *pc++)) {\n\t\t\t\t\t\tneedquotes = 1;\n\t\t\t\t\t\tlen += 2;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif (rescn[0] == '\\0')\n\t\t\t\t\tpal = attrlist_create(atname, NULL, len);\n\t\t\t\telse\n\t\t\t\t\tpal = attrlist_create(atname, rescn, len);\n\n\t\t\t\tif (needquotes) {\n\t\t\t\t\tsprintf(pal->al_atopl.value, \"[%c:\\\"%s\\\"=%s]\", *key, etname, tmpsvl->al_atopl.value);\n\t\t\t\t} else {\n\t\t\t\t\tsprintf(pal->al_atopl.value, \"[%c:%s=%s]\", *key, etname, tmpsvl->al_atopl.value);\n\t\t\t\t}\n\t\t\t\tfree(tmpsvl);\n\t\t\t\tpal->al_flags = attr->at_flags;\n\t\t\t\top = SET;\n\n\t\t\t\t/* check whether the resource appears for the first time or is repeated */\n\t\t\t\t/* After the check, set the op accordingly */\n\t\t\t\tif (rescn[0]) {\n\t\t\t\t\tfor (i = 0; i < index; i++) {\n\t\t\t\t\t\tif (strcmp(rescn, rescn_array[i]) == 0) {\n\t\t\t\t\t\t\top = INCR;\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tif (op == SET) {\n\t\t\t\t\t\t/* Doubling the size of array */\n\t\t\t\t\t\tif (index == array_size) {\n\t\t\t\t\t\t\tarray_size = array_size * 2;\n\t\t\t\t\t\t\ttemp_rescn_array = realloc(rescn_array, array_size * sizeof(char *));\n\t\t\t\t\t\t\tif 
(temp_rescn_array != NULL) {\n\t\t\t\t\t\t\t\trescn_array = temp_rescn_array;\n\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\tfor (i = 0; i < index; i++)\n\t\t\t\t\t\t\t\t\tfree(rescn_array[i]);\n\t\t\t\t\t\t\t\tfree(rescn_array);\n\t\t\t\t\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\trescn_array[index] = strdup(rescn);\n\t\t\t\t\t\tif (rescn_array[index] == NULL) {\n\t\t\t\t\t\t\tfor (i = 0; i < index; i++)\n\t\t\t\t\t\t\t\tfree(rescn_array[i]);\n\t\t\t\t\t\t\tfree(rescn_array);\n\t\t\t\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t\t\t\t}\n\t\t\t\t\t\tindex++;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tpal->al_atopl.op = op;\n\t\t\t\tif (phead)\n\t\t\t\t\tappend_link(phead, &pal->al_link, pal);\n\t\t\t\tif (first) {\n\t\t\t\t\tif (rtnl)\n\t\t\t\t\t\t*rtnl = pal;\n\t\t\t\t\tfirst = 0;\n\t\t\t\t} else {\n\t\t\t\t\txprior->al_sister = pal;\n\t\t\t\t}\n\t\t\t\txprior = pal;\n\n\t\t\t\t++grandtotal;\n\t\t\t}\n\t\t}\n\t}\n\tfor (i = 0; i < index; i++)\n\t\tfree(rescn_array[i]);\n\tfree(rescn_array);\n\treturn (grandtotal);\n}\n\n/**\n * @brief\n * \tset_entlim - set value of an attribute of type ATR_TYPE_ENTITY to the\n *\tvalue of another attribute of type ATR_TYPE_ENTITY.\n *\n * @par Functionality:\n *\tThis function is used for all operations on the entlim attributes\n *\tEXCEPT for \"SET\" when the entlim involves a resource, see\n *\tset_entlim_res() below.\n *\n *\tFor each entity in the list headed by the \"new\" attribute,\n *\tthe corresponding entity in the list headed by \"old\"\n *\tis modified.\n *\n *\tThe mapping of the operations is:\n *\t\tSET:  all of the old entries are replaced by the new entries\n *\t\tINCR: if existing old key (matching new key),\n *\t\t      it is replaced by new (old removed, then set)\n *\t\t      if no existing old key (matching new key), then\n *\t\t      same as set\n *\t\tDECR: old is removed if (a) new has no Rvalue following the\n *\t\t      entity's name or (b) new's Rvalue matches Old's 
Rvalue\n *\n * @param[in] old pointer to attribute with existing values to be modified\n * @param[in] new pointer to (temp) attribute with new values to be set\n * @param[in] op  set operator: SET, INCR, DECR\n *\n * @return \tint\n * @retval\t0 \tif ok\n * @retval\t>0 \tif error\n *\n */\n\nint\nset_entlim(attribute *old, attribute *new, enum batch_op op)\n{\n\tchar *key = NULL;\n\tvoid *newctx;\n\tvoid *oldctx;\n\tsvr_entlim_leaf_t *newptr;\n\tsvr_entlim_leaf_t *exptr;\n\tattribute save_old;\n\n\tassert(old && new && (new->at_flags &ATR_VFLAG_SET));\n\n\tswitch (op) {\n\t\tcase SET:\n\t\t\t/* free the old, reinitialize it and then set old  */\n\t\t\t/* to new by falling into the \"INCR\" case          */\n\t\t\tsave_old = *old;\n\t\t\told->at_val.at_enty.ae_tree = entlim_initialize_ctx();\n\t\t\tif (old->at_val.at_enty.ae_tree == NULL) {\n\t\t\t\t*old = save_old;\n\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t}\n\t\t\tfree_entlim(&save_old); /* have new alloc, discard the saved */\n\t\t\t\t\t\t/* fall into INCR case */\n\n\t\tcase INCR:\n\t\t\t/* walk \"new\" and for each leaf, add it to \"old\" */\n\t\t\tnewctx = new->at_val.at_enty.ae_tree;\n\t\t\tif (old->at_val.at_enty.ae_tree == NULL) {\n\t\t\t\t/* likely the += without any prior values */\n\t\t\t\told->at_val.at_enty.ae_tree = entlim_initialize_ctx();\n\t\t\t}\n\t\t\toldctx = old->at_val.at_enty.ae_tree;\n\t\t\twhile ((newptr = entlim_get_next(newctx, (void **) &key)) != NULL) {\n\t\t\t\t/* duplicate the record to be added */\n\t\t\t\tnewptr = dup_svr_entlim_leaf(newptr);\n\t\t\t\tif (newptr) {\n\t\t\t\t\tif (entlim_replace(key, newptr, oldctx, svr_freeleaf) != 0) {\n\t\t\t\t\t\t/* failed to add */\n\t\t\t\t\t\tsvr_freeleaf(newptr);\n\t\t\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\told->at_val.at_enty.ae_newlimittm = time(0);\n\t\t\tbreak;\n\n\t\tcase DECR:\n\n\t\t\tif ((old->at_flags & ATR_VFLAG_SET) == 0) {\n\t\t\t\t/* nothing to unset, just return as done */\n\t\t\t\treturn 
0;\n\t\t\t}\n\n\t\t\t/* walk \"new\" and for each leaf, remove matching from \"old\" */\n\t\t\t/* if no \"value\" for new leaf, then remove if keys match    */\n\t\t\t/* if new leaf has a value, remove old only if values match */\n\n\t\t\tnewctx = new->at_val.at_enty.ae_tree;\n\t\t\toldctx = old->at_val.at_enty.ae_tree;\n\n\t\t\twhile ((newptr = entlim_get_next(newctx, (void **) &key)) != NULL) {\n\t\t\t\t/* \"exptr\" points to record in \"old\" attribute */\n\t\t\t\tif ((exptr = entlim_get(key, oldctx)) != NULL) {\n\n\t\t\t\t\t/* found existing (\"old\") record with matching key */\n\t\t\t\t\tif (newptr->slf_limit.at_flags & ATR_VFLAG_SET) {\n\n\t\t\t\t\t\tint (*compf)(attribute * pattr, attribute * with);\n\n\t\t\t\t\t\t/* user specified a value that must match current */\n\t\t\t\t\t\t/* if the current one is to be deleted            */\n\t\t\t\t\t\tchar rsbuf[PBS_MAX_RESC_NAME + 1];\n\t\t\t\t\t\tresource_def *prdef;\n\n\t\t\t\t\t\tif (entlim_resc_from_key(key, rsbuf, PBS_MAX_RESC_NAME) == 0) {\n\n\t\t\t\t\t\t\t/* find compare function for this resource */\n\t\t\t\t\t\t\tprdef = find_resc_def(svr_resc_def, rsbuf);\n\t\t\t\t\t\t\tif (prdef)\n\t\t\t\t\t\t\t\tcompf = prdef->rs_comp;\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\tcompf = comp_l; /* default unknown resc to long */\n\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tcompf = comp_l; /* no resource, use long type */\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (compf(&newptr->slf_limit, &exptr->slf_limit) == 0) {\n\t\t\t\t\t\t\t/* value matches, delete \"old\" */\n\t\t\t\t\t\t\t(void) entlim_delete(key, oldctx, svr_freeleaf);\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\t/* DECR (a) case in function block comment, */\n\t\t\t\t\t\t/* no value supplied which must match, just */\n\t\t\t\t\t\t/* delete \"old\"\t\t\t\t*/\n\t\t\t\t\t\t(void) entlim_delete(key, oldctx, svr_freeleaf);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\t/* having removed one or more elements from the value tree */\n\t\t\t/* see if any entries are left or if the value is now 
null */\n\t\t\tkey = NULL;\n\t\t\tif (entlim_get_next(oldctx, (void **) &key) == NULL) {\n\t\t\t\t/* no entries left set, clear the entire attribute */\n\t\t\t\tfree_entlim(old);\n\t\t\t\t/* set _MODIFY flag so up level functions */\n\t\t\t\t/* know the attribute has been changed    */\n\t\t\t\told->at_flags |= ATR_VFLAG_MODIFY;\n\t\t\t\treturn (0);\n\t\t\t}\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\treturn (PBSE_INTERNAL);\n\t}\n\n\tpost_attr_set(old);\n\treturn (0);\n}\n\n/**\n * @brief\n *\tset_entlim_res - set value of attribute of type ATR_TYPE_ENTITY to another\n *\tThis function is used for all operations on the entlim attributes which\n *\tinvolve a resource.  However, except for the \"SET\" operation, the\n *\toperations are identical to set_entlim() above and that function does\n *\tthe real work.\n *\n *\tFor each entity in the list headed by the \"new\" attribute,\n *\tthe corresponding entity in the list headed by \"old\"\n *\tis modified.\n *\n *\tThe mapping of the operations is:\n *\t   ****\tSET:  all of old with the same resource are replaced by new ****\n *\t\tINCR: if existing old key (matching new key),\n *\t\t      it is replaced by new (old removed, then set)\n *\t\t      if no existing old key (matching new key), then\n *\t\t      same as set\n *\t\tDECR: old is removed if (a) new has no Rvalue following the\n *\t\t      entity's name or (b) new's Rvalue matches Old's Rvalue\n * @param old pointer to attribute with existing values to be modified\n * @param new pointer to (temp) attribute with new values to be set\n * @param op  set operator: SET, INCR, DECR\n *\n * @return\tint\n * @retval\t0 \tif ok\n * @retval\t>0 \tif error\n *\n */\n\nint\nset_entlim_res(attribute *old, attribute *new, enum batch_op op)\n{\n\tchar *keynew = NULL;\n\tchar *keyold = NULL;\n\tvoid *valnew;\n\tvoid *valold;\n\tvoid *newctx;\n\tvoid *oldctx;\n\tchar newresc[PBS_MAX_RESC_NAME + 1];\n\tchar oldresc[PBS_MAX_RESC_NAME + 1];\n\n\tassert(old 
&& new && (new->at_flags &ATR_VFLAG_SET));\n\n\tif (op == SET) {\n\n\t\tif (old->at_val.at_enty.ae_tree == NULL) {\n\t\t\t/* nothing in old, change op to INCR and use */\n\t\t\t/* other set_entlim function\t\t     */\n\t\t\top = INCR;\n\t\t\treturn (set_entlim(old, new, op));\n\t\t}\n\n\t\tnewctx = new->at_val.at_enty.ae_tree;\n\t\toldctx = old->at_val.at_enty.ae_tree;\n\n\t\t/* walk the new tree identifying which resources are */\n\t\t/* being changed,  walk the old tree and remove any  */\n\t\t/* record with the same resource in its key\t     */\n\t\twhile ((valnew = entlim_get_next(newctx, (void **) &keynew)) != NULL) {\n\t\t\t/* get the resource name from the \"new\" key */\n\t\t\tif (entlim_resc_from_key(keynew, newresc, PBS_MAX_RESC_NAME) != 0)\n\t\t\t\tcontinue; /* no resc, go to next */\n\n\t\t\tkeyold = NULL;\n\t\t\twhile ((valold = entlim_get_next(oldctx, (void **) &keyold)) != NULL) {\n\t\t\t\t/* get the resource name from the \"old\" key */\n\t\t\t\tif (entlim_resc_from_key(keyold, oldresc, PBS_MAX_RESC_NAME) != 0)\n\t\t\t\t\tcontinue; /* no resc, go to next */\n\n\t\t\t\t/* if old and new resource names match, */\n\t\t\t\t/* delete old record\t\t\t*/\n\t\t\t\tif (strcasecmp(oldresc, newresc) == 0)\n\t\t\t\t\t(void) entlim_delete(keyold, oldctx, svr_freeleaf);\n\t\t\t}\n\t\t}\n\n\t\t/* now the operation is the same as an INCR, adding\t*/\n\t\t/* new values and thus we change the operator and use\t*/\n\t\t/* the set_entlim() code above\t\t\t\t*/\n\t\top = INCR;\n\t}\n\n\t/* The other operators (and the SET turned into INCR)\t*/\n\t/* use the set_entlim() code above\t\t\t*/\n\n\treturn (set_entlim(old, new, op));\n}\n\n/**\n * @brief\n * \tfree_entlim - free space associated with attribute value\n *\n *\tFor each leaf in the tree, the associated structure is freed,\n *\tand the key is deleted until the tree is completely pruned.  
Then\n *\tthe tree itself is uprooted and placed in the compost pile.\n *\n * @param[in] pattr - pointer to attribute\n *\n * @return\tVoid\n *\n */\n\nvoid\nfree_entlim(attribute *pattr)\n{\n\t/* entlim_free_ctx walks tree and for each leaf,  */\n\t/* prunes it and then uproots the tree (frees it) */\n\n\tif (pattr->at_val.at_enty.ae_tree)\n\t\t(void) entlim_free_ctx(pattr->at_val.at_enty.ae_tree, svr_freeleaf);\n\n\t/* now clear the basic attribute */\n\tpattr->at_val.at_enty.ae_newlimittm = 0;\n\tfree_null(pattr);\n\treturn;\n}\n\n/**\n * @brief\n *\tUnset the entity limits for a specific resource (rather than the\n *\tentire attribute).  For example,  unset the limits on \"ncpus\" while\n *\tleaving the \"mem\" limits set.\n *\n * @param[in] pattr - pointer to the attribute\n * @param[in] rescname - name of resource for which limits are to be unset\n *\n * @return\tVoid\n *\n */\n\nvoid\nunset_entlim_resc(attribute *pattr, char *rescname)\n{\n\tvoid *oldctx;\n\tchar *key = NULL;\n\tvoid *value = NULL;\n\tchar rsbuf[PBS_MAX_RESC_NAME + 1];\n\tint modified = 0;\n\tint hasentries = 0;\n\n\tif (((pattr->at_flags & ATR_VFLAG_SET) == 0) ||\n\t    (rescname == NULL) ||\n\t    (*rescname == '\\0'))\n\t\treturn; /* nothing to unset */\n\n\t/* walk \"old\" and for each leaf, remove */\n\t/* entry with matching  resource name   */\n\toldctx = pattr->at_val.at_enty.ae_tree;\n\twhile ((value = entlim_get_next(oldctx, (void **) &key)) != NULL) {\n\n\t\thasentries = 1; /* found at least one (remaining) entry */\n\n\t\tif (entlim_resc_from_key(key, rsbuf, PBS_MAX_RESC_NAME) == 0) {\n\t\t\tif (strcasecmp(rsbuf, rescname) == 0) {\n\t\t\t\t(void) entlim_delete(key, oldctx, svr_freeleaf);\n\t\t\t\tmodified = 1;\n\t\t\t\thasentries = 0; /* will see any in next pass */\n\t\t\t\t/*\n\t\t\t\t * now restart search from beginning as we are\n\t\t\t\t * not sure what the deletion did to the order\n\t\t\t\t */\n\t\t\t\tkey = NULL;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t}\n\t}\n\tif 
(modified)\n\t\tpattr->at_flags |= ATR_MOD_MCACHE;\n\tif (hasentries == 0)\n\t\tfree_entlim(pattr); /* no entries left, clear attribute */\n\treturn;\n}\n"
  },
  {
    "path": "src/lib/Libattr/attr_fn_f.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/types.h>\n#include <assert.h>\n#include <ctype.h>\n#include <memory.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <errno.h>\n#include <pbs_ifl.h>\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"server_limits.h\"\n#include \"job.h\"\n#include \"pbs_error.h\"\n#include \"libutil.h\"\n#include \"pbs_share.h\"\n\n/**\n * @file\tattr_fn_f.c\n * @brief\n * This file contains functions for manipulating attributes of type float\n * @details\n * Each set has functions for:\n *\tDecoding the value string to the machine representation.\n *\tEncoding the internal attribute to external form\n *\tSetting the value by =, + or - operators.\n *\tComparing a (decoded) value with the attribute value.\n *\n * Some or all of the functions for an attribute type may be shared with\n * other attribute types.\n *\n * The prototypes are declared in \"attribute.h\"\n *\n * --------------------------------------------------\n * The Set of Attribute Functions for attributes with\n * value type \"float\"\n * --------------------------------------------------\n */\n\n/**\n * @brief\n * \tdecode_f - decode float into attribute structure\n *\n *\n * @param[in] patr - ptr to attribute to decode\n * @param[in] name - 
attribute name\n * @param[in] rescn - resource name or null\n * @param[in] val - string holding values for attribute structure\n *\n * @retval      int\n * @retval      0       if ok\n * @retval      >0      error number if error,\n * @retval      *patr   members set\n *\n */\n\nint\ndecode_f(attribute *patr, char *name, char *rescn, char *val)\n{\n\tsize_t len;\n\tif ((val != NULL) && ((len = strlen(val)) != 0)) {\n\t\tchar *end;\n\t\tfloat fval;\n\n\t\terrno = 0;\n\t\t/*\n\t\t * The function strtof cannot be used because on some machines\n\t\t * it is only available in C99 mode.  Use strtod instead.\n\t\t * @see https://lists.debian.org/debian-glibc/2004/02/msg00176.html\n\t\t */\n\t\tfval = (float) strtod(val, &end);\n\t\t/* if any part of val is not converted or errno set, error */\n\t\tif (&val[len] != end || errno != 0)\n\t\t\treturn (PBSE_BADATVAL); /* invalid string */\n\t\tpost_attr_set(patr);\n\t\tpatr->at_val.at_float = fval;\n\t} else {\n\t\tATR_UNSET(patr);\n\t\tpatr->at_val.at_float = 0.0;\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n *\tencode attribute of type float into attr_extern\n *\n * @param[in] attr - ptr to attribute\n * @param[in] phead - head of attrlist list\n * @param[in] atname - attribute name\n * @param[in] rsname - resource name or null\n * @param[in] mode - encode mode, unused here\n * @param[out] rtnl - Return: ptr to svrattrl\n *\n * @return int\n * @retval >0 if ok\n * @retval =0 if no value, no attrlist link added\n * @retval <0 if error\n *\n */\n\n/*ARGSUSED*/\n\n#define CVNBUFSZ 32\n\nint\nencode_f(const attribute *attr, pbs_list_head *phead, char *atname, char *rsname, int mode, svrattrl **rtnl)\n{\n\tsize_t ct;\n\tchar cvnbuf[CVNBUFSZ];\n\tsvrattrl *pal;\n\n\tif (!attr)\n\t\treturn (-1);\n\tif (!(attr->at_flags & ATR_VFLAG_SET))\n\t\treturn (0);\n\n\t(void) snprintf(cvnbuf, CVNBUFSZ, \"%-.*f\",\n\t\t\tfloat_digits(attr->at_val.at_float, FLOAT_NUM_DIGITS),\n\t\t\tattr->at_val.at_float);\n\tct = strlen(cvnbuf) + 1;\n\n\tpal = 
attrlist_create(atname, rsname, ct);\n\tif (pal == NULL)\n\t\treturn (-1);\n\n\t(void) memcpy(pal->al_value, cvnbuf, ct);\n\tpal->al_flags = attr->at_flags;\n\tif (phead)\n\t\tappend_link(phead, &pal->al_link, pal);\n\tif (rtnl)\n\t\t*rtnl = pal;\n\n\treturn (1);\n}\n\n/**\n * @brief\n * \tset_f - set attribute A to attribute B,\n *\teither A=B, A += B, or A -= B\n *\n * @param[in]   attr - pointer to attribute to be set (A)\n * @param[in]   new  - pointer to attribute (B)\n * @param[in]   op   - operator\n *\n * @return      int\n * @retval      0       if ok\n * @retval     >0       if error\n *\n */\n\nint\nset_f(attribute *attr, attribute *new, enum batch_op op)\n{\n\tassert(attr && new && (new->at_flags &ATR_VFLAG_SET));\n\n\tswitch (op) {\n\t\tcase SET:\n\t\t\tattr->at_val.at_float = new->at_val.at_float;\n\t\t\tbreak;\n\n\t\tcase INCR:\n\t\t\tattr->at_val.at_float += new->at_val.at_float;\n\t\t\tbreak;\n\n\t\tcase DECR:\n\t\t\tattr->at_val.at_float -= new->at_val.at_float;\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\treturn (PBSE_INTERNAL);\n\t}\n\tpost_attr_set(attr);\n\treturn (0);\n}\n\n/**\n * @brief\n * \tcomp_f - compare two attributes of type float\n *\n * @param[in] attr - pointer to attribute structure\n * @param[in] with - pointer to attribute structure\n *\n * @return      int\n * @retval      -1      if attr is less than with, or either pointer is null\n * @retval      0       if attr equals with\n * @retval      1       if attr is greater than with\n *\n */\n\nint\ncomp_f(attribute *attr, attribute *with)\n{\n\tif (!attr || !with)\n\t\treturn (-1);\n\tif (attr->at_val.at_float < with->at_val.at_float)\n\t\treturn (-1);\n\telse if (attr->at_val.at_float > with->at_val.at_float)\n\t\treturn (1);\n\telse\n\t\treturn (0);\n}\n\n/*\n * free_f - use free_null to (not) free space\n */\n"
  },
  {
    "path": "src/lib/Libattr/attr_fn_hold.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/types.h>\n#include <ctype.h>\n#include <memory.h>\n#ifndef NDEBUG\n#include <stdio.h>\n#endif\n#include <stdlib.h>\n#include <string.h>\n#include \"pbs_ifl.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"server_limits.h\"\n#include \"job.h\"\n#include \"pbs_error.h\"\n\n#define HOLD_ENCODE_SIZE 4\n\n/**\n * @file\tattr_fn_hold.c\n * @brief\n * \tThis file contains special decode and encode functions for the hold-types\n * \tattribute.  
All other functions for this attribute are the standard\n * \t_b (boolean) routines.\n *\n */\n\n/**\n * @brief\n *\tdecode_hold - decode string into hold attribute\n *\n * @param[in] patr - ptr to attribute to decode\n * @param[in] name - attribute name\n * @param[in] rescn - resource name or null\n * @param[out] val - string holding values for attribute structure\n *\n * @retval      int\n * @retval      0       if ok\n * @retval      >0      error number if error,\n * @retval      *patr   members set\n *\n */\n\nint\ndecode_hold(attribute *patr, char *name, char *rescn, char *val)\n{\n\tchar *pc;\n\n\tpatr->at_val.at_long = 0;\n\tif ((val != NULL) && (strlen(val) > (size_t) 0)) {\n\t\tfor (pc = val; *pc != '\\0'; pc++) {\n\t\t\tswitch (*pc) {\n\t\t\t\tcase 'n':\n\t\t\t\t\tpatr->at_val.at_long = HOLD_n;\n\t\t\t\t\tbreak;\n\t\t\t\tcase 'u':\n\t\t\t\t\tpatr->at_val.at_long |= HOLD_u;\n\t\t\t\t\tbreak;\n\t\t\t\tcase 'o':\n\t\t\t\t\tpatr->at_val.at_long |= HOLD_o;\n\t\t\t\t\tbreak;\n\t\t\t\tcase 's':\n\t\t\t\t\tpatr->at_val.at_long |= HOLD_s;\n\t\t\t\t\tbreak;\n\t\t\t\tcase 'p':\n\t\t\t\t\tpatr->at_val.at_long |= HOLD_bad_password;\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\treturn (PBSE_BADATVAL);\n\t\t\t}\n\t\t}\n\t\tpost_attr_set(patr);\n\t} else\n\t\tATR_UNSET(patr);\n\n\treturn (0);\n}\n\n/**\n * @brief\n * \tencode_hold - encode the hold-type attribute into attr_extern\n *\n * @param[in] attr - ptr to attribute to encode\n * @param[in] phead - ptr to head of attrlist list\n * @param[in] atname - attribute name\n * @param[in] rsname - resource name or null\n * @param[in] mode - encode mode\n * @param[out] rtnl - ptr to svrattrl\n *\n * @retval      int\n * @retval      >0      if ok, entry created and linked into list\n * @retval      =0      no value to encode, entry not created\n * @retval      -1      if error\n *\n */\n/*ARGSUSED*/\n\nint\nencode_hold(const attribute *attr, pbs_list_head *phead, char *atname, char *rsname, int mode, svrattrl 
**rtnl)\n{\n\tint i;\n\tsvrattrl *pal;\n\n\tif (!attr)\n\t\treturn (-1);\n\tif (!(attr->at_flags & ATR_VFLAG_SET))\n\t\treturn (0);\n\n\tpal = attrlist_create(atname, rsname, HOLD_ENCODE_SIZE + 1);\n\tif (pal == NULL)\n\t\treturn (-1);\n\n\ti = 0;\n\tif (attr->at_val.at_long == 0)\n\t\t*(pal->al_value + i++) = 'n';\n\telse {\n\t\tif (attr->at_val.at_long & HOLD_s)\n\t\t\t*(pal->al_value + i++) = 's';\n\t\tif (attr->at_val.at_long & HOLD_o)\n\t\t\t*(pal->al_value + i++) = 'o';\n\t\tif (attr->at_val.at_long & HOLD_u)\n\t\t\t*(pal->al_value + i++) = 'u';\n\t\tif (attr->at_val.at_long & HOLD_bad_password)\n\t\t\t*(pal->al_value + i++) = 'p';\n\t}\n\twhile (i < HOLD_ENCODE_SIZE + 1)\n\t\t*(pal->al_value + i++) = '\\0';\n\n\tpal->al_flags = attr->at_flags;\n\tif (phead)\n\t\tappend_link(phead, &pal->al_link, pal);\n\tif (rtnl)\n\t\t*rtnl = pal;\n\n\treturn (1);\n}\n\n/**\n * @brief\n * \tcomp_hold - compare two attributes of type hold\n *\n * @param[in] attr - pointer to attribute structure\n * @param[in] with - pointer to attribute structure\n *\n * @return      int\n * @retval      -1      on NULL input\n * @retval      0       if the hold values are equal\n * @retval      1       otherwise\n *\n */\n\nint\ncomp_hold(attribute *attr, attribute *with)\n{\n\tif (!attr || !with)\n\t\treturn -1;\n\tif (attr->at_val.at_long == with->at_val.at_long)\n\t\treturn 0;\n\telse\n\t\treturn 1;\n}\n"
  },
  {
    "path": "src/lib/Libattr/attr_fn_intr.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <ctype.h>\n#include <memory.h>\n#ifndef NDEBUG\n#include <stdio.h>\n#endif\n#include <stdlib.h>\n#include <string.h>\n#include \"pbs_ifl.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"pbs_error.h\"\n\n/**\n * @file\tattr_fn_intr.c\n * @brief\n * \tThis file contains functions for manipulating attributes of type\n *\t\tinteractive\n * @details\n * Each set has functions for:\n *\tDecoding the value string to the machine representation.\n *\tEncoding the machine representation of the value to a string\n *\tSetting the value by =, + or - operators.\n *\tComparing a (decoded) value with the attribute value.\n *\tFreeing the space malloc-ed to the attribute value.\n *\n * Some or all of the functions for an attribute type may be shared with\n * other attribute types.\n *\n * The prototypes are declared in \"attribute.h\"\n *\n * -----------------------------------------------------------\n * Set of General functions for attributes of type interactive\n * -----------------------------------------------------------\n *\n * This attribute contains the port number to which an Interactive qsub is\n * listening.\n */\n\n/* decode_interactive - use decode_l() */\n\n/**\n * @brief\n * \tencode_inter - encode attribute of type 
ATR_TYPE_LONG (interactive) to attr_extern\n *\n *\tSpecial case for the \"interactive\" attribute: encode into TRUE/FALSE\n *\tfor the client, encode into the port number for all others.\n *\n * @param[in] attr - ptr to attribute to encode\n * @param[in] phead - ptr to head of attrlist list\n * @param[in] atname - attribute name\n * @param[in] rsname - resource name or null\n * @param[in] mode - encode mode\n * @param[out] rtnl - ptr to svrattrl\n *\n * @retval      int\n * @retval      >0      if ok, entry created and linked into list\n * @retval      =0      no value to encode, entry not created\n * @retval      -1      if error\n *\n */\n/*ARGSUSED*/\n\nint\nencode_inter(const attribute *attr, pbs_list_head *phead, char *atname, char *rsname, int mode, svrattrl **rtnl)\n{\n\tif ((mode == ATR_ENCODE_CLIENT) || (mode == ATR_ENCODE_HOOK))\n\t\treturn (encode_b(attr, phead, atname, rsname, mode, rtnl));\n\telse\n\t\treturn (encode_l(attr, phead, atname, rsname, mode, rtnl));\n}\n"
  },
  {
    "path": "src/lib/Libattr/attr_fn_l.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/types.h>\n#include <assert.h>\n#include <ctype.h>\n#include <memory.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <pbs_ifl.h>\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"server_limits.h\"\n#include \"job.h\"\n#include \"pbs_error.h\"\n\n/**\n * @file\tattr_fn_l.c\n * @brief\n * \tThis file contains functions for manipulating attributes of type\n *\t\tlong integer\n * @details\n * \tEach set has functions for:\n *\tDecoding the value string to the machine representation.\n *\tEncoding the internal attribute to external form\n *\tSetting the value by =, + or - operators.\n *\tComparing a (decoded) value with the attribute value.\n *\n * \tSome or all of the functions for an attribute type may be shared with\n * \tother attribute types.\n *\n * \tThe prototypes are declared in \"attribute.h\"\n *\n * --------------------------------------------------\n * The Set of Attribute Functions for attributes with\n * value type \"long\"\n * --------------------------------------------------\n */\n\n/**\n * @brief\n * \tdecode_l - decode long integer into attribute structure\n *\n * @param[in] patr - pointer to attribute structure\n * @param[in] name - attribute name\n * @param[in] rescn - 
resource name\n * @param[in] val - attribute value\n *\n * @return      int\n * @retval      0       if ok\n * @retval      >0      error number if error\n * @retval      patr    members set\n *\n */\n\nint\ndecode_l(attribute *patr, char *name, char *rescn, char *val)\n{\n\tchar *pc;\n\tchar *endp;\n\n\tif ((val != NULL) && (strlen(val) != 0)) {\n\n\t\tpc = val;\n\t\tif ((*pc == '+') || (*pc == '-'))\n\t\t\tpc++;\n\t\twhile (*pc != '\\0') {\n\t\t\tif (isdigit((int) *pc) == 0)\n\t\t\t\treturn (PBSE_BADATVAL); /* invalid string */\n\t\t\tpc++;\n\t\t}\n\t\tpost_attr_set(patr);\n\t\tpatr->at_val.at_long = strtol(val, &endp, 10);\n\t} else if ((val != NULL) && (strlen(val) == 0)) {\n\t\tpatr->at_val.at_long = 0;\n\t\tpost_attr_set(patr);\n\t} else {\n\t\tATR_UNSET(patr);\n\t\tpatr->at_val.at_long = 0;\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n * \tencode_l - encode attribute of type long into attr_extern\n *\n * @param[in] attr - ptr to attribute to encode\n * @param[in] phead - ptr to head of attrlist list\n * @param[in] atname - attribute name\n * @param[in] rsname - resource name or null\n * @param[in] mode - encode mode\n * @param[out] rtnl - ptr to svrattrl\n *\n * @retval      int\n * @retval      >0      if ok, entry created and linked into list\n * @retval      =0      no value to encode, entry not created\n * @retval      -1      if error\n *\n */\n/*ARGSUSED*/\n\n#define CVNBUFSZ 21\n\nint\nencode_l(const attribute *attr, pbs_list_head *phead, char *atname, char *rsname, int mode, svrattrl **rtnl)\n{\n\tsize_t ct;\n\tchar cvnbuf[CVNBUFSZ];\n\tsvrattrl *pal;\n\n\tif (!attr)\n\t\treturn (-1);\n\tif (!(attr->at_flags & ATR_VFLAG_SET))\n\t\treturn (0);\n\n\t(void) sprintf(cvnbuf, \"%ld\", attr->at_val.at_long);\n\tct = strlen(cvnbuf) + 1;\n\n\tpal = attrlist_create(atname, rsname, ct);\n\tif (pal == NULL)\n\t\treturn (-1);\n\n\t(void) memcpy(pal->al_value, cvnbuf, ct);\n\tpal->al_flags = attr->at_flags;\n\tif (phead)\n\t\tappend_link(phead, &pal->al_link, 
pal);\n\tif (rtnl)\n\t\t*rtnl = pal;\n\n\tif ((phead == NULL) && (rtnl == NULL))\n\t\tfree(pal);\n\n\treturn (1);\n}\n\n/**\n * @brief\n * \tset_l - set attribute A to attribute B,\n *\teither A=B, A += B, or A -= B\n *\n * @param[in]   attr - pointer to new attribute to be set (A)\n * @param[in]   new  - pointer to attribute (B)\n * @param[in]   op   - operator\n *\n * @return      int\n * @retval      0       if ok\n * @retval     >0       if error\n *\n */\n\nint\nset_l(attribute *attr, attribute *new, enum batch_op op)\n{\n\tassert(attr && new && (new->at_flags &ATR_VFLAG_SET));\n\n\tswitch (op) {\n\t\tcase SET:\n\t\t\tattr->at_val.at_long = new->at_val.at_long;\n\t\t\tbreak;\n\n\t\tcase INCR:\n\t\t\tattr->at_val.at_long += new->at_val.at_long;\n\t\t\tbreak;\n\n\t\tcase DECR:\n\t\t\tattr->at_val.at_long -= new->at_val.at_long;\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\treturn (PBSE_INTERNAL);\n\t}\n\tpost_attr_set(attr);\n\treturn (0);\n}\n\n/**\n * @brief\n *\tcomp_l - compare two attributes of type long\n *\n * @param[in] attr - pointer to attribute structure\n * @param[in] with - pointer to attribute structure\n *\n * @return      int\n * @retval      -1      if \"attr\" is less than \"with\" (or on NULL input)\n * @retval      0       if the two values are equal\n * @retval      1       if \"attr\" is greater than \"with\"\n *\n */\n\nint\ncomp_l(attribute *attr, attribute *with)\n{\n\tif (!attr || !with)\n\t\treturn (-1);\n\tif (attr->at_val.at_long < with->at_val.at_long)\n\t\treturn (-1);\n\telse if (attr->at_val.at_long > with->at_val.at_long)\n\t\treturn (1);\n\telse\n\t\treturn (0);\n}\n\n/*\n * free_l - use free_null to (not) free space\n */\n\n/**\n * @brief\tAttribute setter function for long type values\n *\n * @param[in]\tpattr\t-\tpointer to attribute being set\n * @param[in]\tvalue\t-\tvalue to be set\n * @param[in]\top\t\t-\toperation to do\n *\n * @return\tvoid\n *\n * @par MT-Safe: No\n * @par Side Effects: None\n *\n */\nvoid\nset_attr_l(attribute *pattr, long value, enum batch_op op)\n{\n\tif (pattr == NULL) 
{\n\t\tlog_err(-1, __func__, \"Invalid pointer to attribute\");\n\t\treturn;\n\t}\n\n\tswitch (op) {\n\t\tcase SET:\n\t\t\tpattr->at_val.at_long = value;\n\t\t\tbreak;\n\t\tcase INCR:\n\t\t\tpattr->at_val.at_long += value;\n\t\t\tbreak;\n\t\tcase DECR:\n\t\t\tpattr->at_val.at_long -= value;\n\t\t\tbreak;\n\t\tdefault:\n\t\t\treturn;\n\t}\n\n\tpost_attr_set(pattr);\n}\n\n/**\n * @brief\tAttribute getter function for long type values\n *\n * @param[in]\tpattr\t-\tpointer to the attribute\n *\n * @return\tlong\n * @retval\tlong value of the attribute\n *\n * @par MT-Safe: No\n * @par Side Effects: None\n */\nlong\nget_attr_l(const attribute *pattr)\n{\n\treturn pattr->at_val.at_long;\n}\n"
  },
  {
    "path": "src/lib/Libattr/attr_fn_ll.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <ctype.h>\n#include <memory.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <pbs_ifl.h>\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"pbs_error.h\"\n\n/**\n * @file\tattr_fn_ll.c\n * @brief\n * \tThis file contains functions for manipulating attributes of type\n *\tLong integer, where \"Long\" is defined as the largest integer\n *\tavailable.\n * @details\n * Each set has functions for:\n *\tDecoding the value string to the machine representation.\n *\tEncoding the internal attribute to external form\n *\tSetting the value by =, + or - operators.\n *\tComparing a (decoded) value with the attribute value.\n *\n * Some or all of the functions for an attribute type may be shared with\n * other attribute types.\n *\n * The prototypes are declared in \"attribute.h\"\n *\n * --------------------------------------------------\n * The Set of Attribute Functions for attributes with\n * value type \"Long\" (_ll)\n * --------------------------------------------------\n */\n\n#define CVNBUFSZ 23\n\n/**\n * @brief\n * \tdecode_ll - decode Long integer into attribute structure\n *\tUnlike decode_long, this function will decode octal (leading zero) and\n *\thex (leading 0x or 0X) data as well as decimal\n *\n 
* @param[in] patr - ptr to attribute to decode\n * @param[in] name - attribute name\n * @param[in] rescn - resource name or null\n * @param[out] val - string holding values for attribute structure\n *\n * @retval      int\n * @retval      0       if ok\n * @retval      >0      error number if error,\n * @retval      *patr   members set\n *\n */\n\nint\ndecode_ll(attribute *patr, char *name, char *rescn, char *val)\n{\n\tchar *pc;\n\n\tif ((val != NULL) && (strlen(val) != 0)) {\n\n\t\tpatr->at_val.at_ll = strtoll(val, &pc, 0);\n\t\tif (*pc != '\\0')\n\t\t\treturn (PBSE_BADATVAL); /* invalid string */\n\t\tpost_attr_set(patr);\n\t} else {\n\t\tATR_UNSET(patr);\n\t\tpatr->at_val.at_ll = 0;\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n * \tencode_ll - encode attribute of type Long into attr_extern\n *\n * @param[in] attr - ptr to attribute to encode\n * @param[in] phead - ptr to head of attrlist list\n * @param[in] atname - attribute name\n * @param[in] rsname - resource name or null\n * @param[in] mode - encode mode\n * @param[out] rtnl - ptr to svrattrl\n *\n * @retval      int\n * @retval      >0      if ok, entry created and linked into list\n * @retval      =0      no value to encode, entry not created\n * @retval      -1      if error\n *\n */\n/*ARGSUSED*/\n\nint\nencode_ll(const attribute *attr, pbs_list_head *phead, char *atname, char *rsname, int mode, svrattrl **rtnl)\n{\n\tsize_t ct;\n\tconst char *cvn;\n\tsvrattrl *pal;\n\n\tif (!attr)\n\t\treturn (-1);\n\tif (!(attr->at_flags & ATR_VFLAG_SET))\n\t\treturn (0);\n\n\tcvn = uLTostr(attr->at_val.at_ll, 10);\n\tct = strlen(cvn) + 1;\n\n\tpal = attrlist_create(atname, rsname, ct);\n\tif (pal == NULL)\n\t\treturn (-1);\n\n\t(void) memcpy(pal->al_value, cvn, ct);\n\tpal->al_flags = attr->at_flags;\n\tif (phead)\n\t\tappend_link(phead, &pal->al_link, pal);\n\tif (rtnl)\n\t\t*rtnl = pal;\n\n\treturn (1);\n}\n\n/**\n * @brief\n * \tset_ll - set attribute A to attribute B,\n *\teither A=B, A += B, or A -= B\n *\n * 
@param[in]   attr - pointer to new attribute to be set (A)\n * @param[in]   new  - pointer to attribute (B)\n * @param[in]   op   - operator\n *\n * @return      int\n * @retval      0       if ok\n * @retval     >0       if error\n *\n */\n\nint\nset_ll(attribute *attr, attribute *new, enum batch_op op)\n{\n\tassert(attr && new && (new->at_flags &ATR_VFLAG_SET));\n\n\tswitch (op) {\n\t\tcase SET:\n\t\t\tattr->at_val.at_ll = new->at_val.at_ll;\n\t\t\tbreak;\n\n\t\tcase INCR:\n\t\t\tattr->at_val.at_ll += new->at_val.at_ll;\n\t\t\tbreak;\n\n\t\tcase DECR:\n\t\t\tattr->at_val.at_ll -= new->at_val.at_ll;\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\treturn (PBSE_INTERNAL);\n\t}\n\tpost_attr_set(attr);\n\treturn (0);\n}\n\n/**\n * @brief\n * \tcomp_ll - compare two attributes of type Long\n *\n * @param[in] attr - pointer to attribute structure\n * @param[in] with - pointer to attribute structure\n *\n * @return      int\n * @retval      -1      if \"attr\" is less than \"with\" (or on NULL input)\n * @retval      0       if the two values are equal\n * @retval      1       if \"attr\" is greater than \"with\"\n *\n */\n\nint\ncomp_ll(attribute *attr, attribute *with)\n{\n\tif (!attr || !with)\n\t\treturn (-1);\n\tif (attr->at_val.at_ll < with->at_val.at_ll)\n\t\treturn (-1);\n\telse if (attr->at_val.at_ll > with->at_val.at_ll)\n\t\treturn (1);\n\telse\n\t\treturn (0);\n}\n\n/**\n * @brief\tAttribute setter function for long long type values\n *\n * @param[in]\tpattr\t-\tpointer to attribute being set\n * @param[in]\tvalue\t-\tvalue to be set\n * @param[in]\top\t\t-\toperation to do\n *\n * @return\tvoid\n *\n * @par MT-Safe: No\n * @par Side Effects: None\n *\n */\nvoid\nset_attr_ll(attribute *pattr, long long value, enum batch_op op)\n{\n\tif (pattr == NULL) {\n\t\tlog_err(-1, __func__, \"Invalid pointer to attribute\");\n\t\treturn;\n\t}\n\n\tswitch (op) {\n\t\tcase SET:\n\t\t\tpattr->at_val.at_ll = value;\n\t\t\tbreak;\n\t\tcase INCR:\n\t\t\tpattr->at_val.at_ll += value;\n\t\t\tbreak;\n\t\tcase DECR:\n\t\t\tpattr->at_val.at_ll -= 
value;\n\t\t\tbreak;\n\t\tdefault:\n\t\t\treturn;\n\t}\n\n\tpost_attr_set(pattr);\n}\n\n/**\n * @brief\tAttribute getter function for long long type values\n *\n * @param[in]\tpattr\t-\tpointer to the attribute\n *\n * @return\tlong long\n * @retval\tlong long value of the attribute\n *\n * @par MT-Safe: No\n * @par Side Effects: None\n */\nlong long\nget_attr_ll(const attribute *pattr)\n{\n\treturn pattr->at_val.at_ll;\n}\n\n/*\n * free_ll - use free_null to (not) free space\n */\n"
  },
  {
    "path": "src/lib/Libattr/attr_fn_resc.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <ctype.h>\n#include <memory.h>\n#ifndef NDEBUG\n#include <stdio.h>\n#endif\n#include <stdlib.h>\n#include <string.h>\n#include <pbs_ifl.h>\n#include \"log.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"pbs_error.h\"\n#include \"pbs_idx.h\"\n\n/**\n * @file\tattr_fn_resc.c\n * @brief\n * \tThis file contains functions for manipulating attributes of type\n *\tresource\n *\n *  A \"resource\" is similar to an attribute but with two levels of\n *  names.  The first name is the attribute name, e.g. \"resource-list\",\n *  the second name is the resource name, e.g. 
\"mem\".\n * @details\n * Each resource_def has functions for:\n *\tDecoding the value string to the internal representation.\n *\tEncoding the internal attribute to external form\n *\tSetting the value by =, + or - operators.\n *\tComparing a (decoded) value with the attribute value.\n *\tFreeing the resource value space (if extra memory is allocated)\n *\n * Some or all of the functions for a resource type may be shared with\n * other resource types or even attributes.\n *\n * The prototypes are declared in \"attribute.h\", also see resource.h\n *\n * ----------------------------------------------------------------------------\n * Attribute functions for attributes with value type resource\n * ----------------------------------------------------------------------------\n */\n\n/* Global Variables */\n\nint resc_access_perm;\n\n/* External Global Items */\n\nint comp_resc_gt; /* count of resources compared > */\nint comp_resc_eq; /* count of resources compared = */\nint comp_resc_lt; /* count of resources compared < */\nint comp_resc_nc; /* count of resources not compared  */\nvoid *resc_attrdef_idx = NULL;\n\n/**\n * @brief\n * \tdecode_resc - decode an \"attribute name/resource name/value\" triplet into\n *\t         a resource type attribute\n *\n * @param[in] patr - ptr to attribute to decode\n * @param[in] name - attribute name\n * @param[in] rescn - resource name or null\n * @param[out] val - string holding values for attribute structure\n *\n * @retval      int\n * @retval      0       if ok\n * @retval      >0      error number if error,\n * @retval      *patr   members set\n *\n */\n\nint\ndecode_resc(attribute *patr, char *name, char *rescn, char *val)\n{\n\tresource *prsc;\n\tresource_def *prdef;\n\tint rc = 0;\n\tint rv;\n\n\tif (patr == NULL)\n\t\treturn (PBSE_INTERNAL);\n\tif (rescn == NULL)\n\t\treturn (PBSE_UNKRESC);\n\tif (!(patr->at_flags & ATR_VFLAG_SET))\n\t\tCLEAR_HEAD(patr->at_val.at_list);\n\n\t/* check the resource name is not nasty e.g.: 
from user input */\n\tif (verify_resc_name(rescn)) {\n\t\treturn (PBSE_BADATVAL);\n\t}\n\n\tprdef = find_resc_def(svr_resc_def, rescn);\n\tif (prdef == NULL) {\n\t\t/*\n\t\t * didn't find resource with matching name, use unknown;\n\t\t * but return PBSE_UNKRESC in case the caller doesn't wish to\n\t\t * accept unknown resources\n\t\t */\n\t\trc = PBSE_UNKRESC;\n\t\tprdef = &svr_resc_def[RESC_UNKN];\n\t}\n\n\tprsc = find_resc_entry(patr, prdef);\n\tif (prsc == NULL) /* no current resource entry, add it */\n\t\tif ((prsc = add_resource_entry(patr, prdef)) == NULL) {\n\t\t\treturn (PBSE_SYSTEM);\n\t\t}\n\n\t/* note special use of ATR_DFLAG_ACCESS, see server/attr_recov() */\n\n\tif (((prsc->rs_defin->rs_flags & resc_access_perm & ATR_DFLAG_WRACC) == 0) &&\n\t    ((resc_access_perm & ATR_DFLAG_ACCESS) != ATR_DFLAG_ACCESS))\n\t\treturn (PBSE_ATTRRO);\n\n\tpost_attr_set(patr);\n\n\tif ((resc_access_perm & ATR_PERM_ALLOW_INDIRECT) && (*val == '@')) {\n\t\tif (strcmp(rescn, \"ncpus\") != 0)\n\t\t\trv = decode_str(&prsc->rs_value, name, rescn, val);\n\t\telse\n\t\t\trv = PBSE_BADNDATVAL;\n\t\tif (rv == 0)\n\t\t\tprsc->rs_value.at_flags |= ATR_VFLAG_INDIRECT;\n\t} else {\n\t\trv = prdef->rs_decode(&prsc->rs_value, name, rescn, val);\n\t}\n\tif (rv)\n\t\treturn (rv);\n\telse\n\t\treturn (rc);\n}\n\n/**\n * @brief\n * \tEncode attr of type ATR_TYPE_RESR into attr_extern form\n *\n * Here we are a little different from the typical attribute.  Most have a\n * single value to be encoded.  
But a resource attribute may have a whole list of them.\n * First get the name of the parent attribute (typically \"resource-list\").\n * Then for each resource in the list, call the individual resource encode\n * routine with \"atname\" set to the parent attribute name.\n *\n * @param[in] attr -  ptr to attribute to encode\n * @param[in] phead - head of attrlist list\n * @param[in] atname - attribute name\n * @param[in] rsname - resource name, null on call\n * @param[in] mode - encode mode\n * @param[out] rtnl - ptr to svrattrl\n *\n * If mode is either ATR_ENCODE_SAVE or ATR_ENCODE_SVR, then any resource\n * currently set to the default value is not encoded.  This allows it to be\n * reset if the default changes or it is moved.\n *\n * If the mode is ATR_ENCODE_CLIENT or ATR_ENCODE_MOM, the client permission\n * passed in the global variable resc_access_perm is checked against each\n * definition.  This allows a resource-by-resource access setting, not just\n * on the attribute.\n *\n * If the mode is ATR_ENCODE_HOOK, resource permission checking is bypassed.\n *\n * @return - Error code\n * @retval  =0 if no value to encode, no entries added to list\n * @retval  <0 if some resource entry had an encode error.\n *\n */\nint\nencode_resc(const attribute *attr, pbs_list_head *phead, char *atname, char *rsname, int mode, svrattrl **rtnl)\n{\n\tint dflt;\n\tresource *prsc;\n\tint rc;\n\tint grandtotal = 0;\n\tint perm;\n\tint first = 1;\n\tsvrattrl *xrtnl;\n\tsvrattrl *xprior = NULL;\n\n\tif (!attr)\n\t\treturn (-1);\n\tif (!(attr->at_flags & ATR_VFLAG_SET))\n\t\treturn (0); /* no resources at all */\n\n\t/* ok now do each separate resource */\n\n\tfor (prsc = (resource *) GET_NEXT(attr->at_val.at_list);\n\t     prsc != NULL;\n\t     prsc = (resource *) GET_NEXT(prsc->rs_link)) {\n\n\t\t/*\n\t\t * encode if sending to client or MOM with permission\n\t\t * encode if saving and ( not default value or save on deflt set)\n\t\t * encode if sending to server and not default and 
have permission\n\t\t */\n\n\t\tperm = prsc->rs_defin->rs_flags & resc_access_perm;\n\t\tdflt = prsc->rs_value.at_flags & ATR_VFLAG_DEFLT;\n\t\tif (((mode == ATR_ENCODE_CLIENT) && perm) ||\n\t\t    (mode == ATR_ENCODE_HOOK) ||\n\t\t    (mode == ATR_ENCODE_DB) ||\n\t\t    ((mode == ATR_ENCODE_MOM) && perm) ||\n\t\t    (mode == ATR_ENCODE_SAVE) ||\n\t\t    ((mode == ATR_ENCODE_SVR) && (dflt == 0) && perm)) {\n\n\t\t\trsname = prsc->rs_defin->rs_name;\n\t\t\txrtnl = NULL;\n\t\t\tif (prsc->rs_value.at_flags & ATR_VFLAG_INDIRECT)\n\t\t\t\trc = encode_str(&prsc->rs_value, phead,\n\t\t\t\t\t\tatname, rsname, mode, &xrtnl);\n\t\t\telse\n\t\t\t\trc = prsc->rs_defin->rs_encode(&prsc->rs_value, phead,\n\t\t\t\t\t\t\t       atname, rsname, mode, &xrtnl);\n\n\t\t\tif (rc < 0)\n\t\t\t\treturn (rc);\n\t\t\tif (xrtnl == NULL)\n\t\t\t\tcontinue;\n\t\t\tif (first) {\n\t\t\t\tif (rtnl)\n\t\t\t\t\t*rtnl = xrtnl;\n\t\t\t\tfirst = 0;\n\t\t\t} else {\n\t\t\t\tif (xprior)\n\t\t\t\t\txprior->al_sister = xrtnl;\n\t\t\t}\n\t\t\txprior = xrtnl;\n\n\t\t\tgrandtotal += rc;\n\t\t}\n\t}\n\treturn (grandtotal);\n}\n\n/**\n * @brief\n * \tset_resc - set value of attribute of type ATR_TYPE_RESR to another\n *\n *\tFor each resource in the list headed by the \"new\" attribute,\n *\tthe correspondingly named resource in the list headed by \"old\"\n *\tis modified.\n *\n *\tThe mapping of the operations incr and decr depends on the type\n *\tof each individual resource.\n *\n * @param[in]   old - pointer to old attribute to be set (A)\n * @param[in]   new  - pointer to attribute (B)\n * @param[in]   op   - operator\n *\n * @return      int\n * @retval      0       if ok\n * @retval     >0       if error\n *\n */\n\nint\nset_resc(attribute *old, attribute *new, enum batch_op op)\n{\n\tenum batch_op local_op;\n\tresource *newresc;\n\tresource *oldresc;\n\tint rc;\n\n\tassert(old && new);\n\n\tnewresc = (resource *) GET_NEXT(new->at_val.at_list);\n\twhile (newresc != NULL) {\n\n\t\tlocal_op = op;\n\n\t\t/* 
search for old that has same definition as new */\n\n\t\toldresc = find_resc_entry(old, newresc->rs_defin);\n\t\tif (oldresc == NULL) {\n\t\t\t/* add new resource to list */\n\t\t\toldresc = add_resource_entry(old, newresc->rs_defin);\n\t\t\tif (oldresc == NULL) {\n\t\t\t\tlog_err(-1, \"set_resc\", \"Unable to malloc space\");\n\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t}\n\t\t}\n\n\t\t/*\n\t\t * unlike other attributes, resources can be \"unset\"\n\t\t * if new is \"set\" to a value, the old one is set to that\n\t\t * value; if the new resource is unset (no value), then the\n\t\t * old resource is unset by freeing it.\n\t\t */\n\n\t\tif (newresc->rs_value.at_flags & ATR_VFLAG_SET) {\n\n\t\t\t/*\n\t\t\t * An indirect resource is a string of the form\n\t\t\t * \"@<node>\", it may be of a different type than the\n\t\t\t * resource definition itself. free_str() must be called\n\t\t\t * explicitly to clear away indirectness before the\n\t\t\t * value can be set again.\n\t\t\t */\n\t\t\tif (oldresc->rs_value.at_flags & ATR_VFLAG_INDIRECT) {\n\t\t\t\tfree_str(&oldresc->rs_value);\n\t\t\t}\n\t\t\tif (newresc->rs_value.at_flags & ATR_VFLAG_INDIRECT) {\n\t\t\t\toldresc->rs_defin->rs_free(&oldresc->rs_value);\n\t\t\t\trc = set_str(&oldresc->rs_value,\n\t\t\t\t\t     &newresc->rs_value, local_op);\n\t\t\t\toldresc->rs_value.at_flags |= ATR_VFLAG_INDIRECT;\n\t\t\t} else {\n\t\t\t\trc = oldresc->rs_defin->rs_set(&oldresc->rs_value,\n\t\t\t\t\t\t\t       &newresc->rs_value, local_op);\n\t\t\t\toldresc->rs_value.at_flags &= ~ATR_VFLAG_INDIRECT;\n\t\t\t}\n\t\t\tif (rc != 0)\n\t\t\t\treturn (rc);\n\t\t\toldresc->rs_value.at_flags |=\n\t\t\t\t(newresc->rs_value.at_flags & ATR_VFLAG_DEFLT);\n\t\t} else {\n\t\t\toldresc->rs_defin->rs_free(&oldresc->rs_value);\n\t\t}\n\n\t\tnewresc = (resource *) GET_NEXT(newresc->rs_link);\n\t}\n\tpost_attr_set(old);\n\treturn (0);\n}\n\n/**\n * @brief\n * \tcomp_resc - compare two attributes of type ATR_TYPE_RESR\n *\n *\tDANGER Will Robinson, DANGER\n *\n 
*\tAs you can see from the returns, this is different from the\n *\tat_comp model...  PLEASE read the Internal Design Spec\n *\n * @param[in] attr - pointer to attribute structure\n * @param[in] with - pointer to attribute structure\n *\n * @return      int\n * @retval      0       if the set of strings in \"with\" is a subset of \"attr\"\n * @retval      -1       otherwise\n *\n */\n\nint\ncomp_resc(attribute *attr, attribute *with)\n{\n\tresource *atresc;\n\tresource *wiresc;\n\tint rc;\n\n\tcomp_resc_gt = 0;\n\tcomp_resc_eq = 0;\n\tcomp_resc_lt = 0;\n\tcomp_resc_nc = 0;\n\n\tif ((attr == NULL) || (with == NULL))\n\t\treturn (-1);\n\n\twiresc = (resource *) GET_NEXT(with->at_val.at_list);\n\twhile (wiresc != NULL) {\n\t\tif (wiresc->rs_value.at_flags & ATR_VFLAG_SET) {\n\t\t\tatresc = find_resc_entry(attr, wiresc->rs_defin);\n\t\t\tif (atresc != NULL) {\n\t\t\t\tif (atresc->rs_value.at_flags & ATR_VFLAG_SET) {\n\t\t\t\t\tif ((rc = atresc->rs_defin->rs_comp(&atresc->rs_value, &wiresc->rs_value)) > 0)\n\t\t\t\t\t\tcomp_resc_gt++;\n\t\t\t\t\telse if (rc < 0)\n\t\t\t\t\t\tcomp_resc_lt++;\n\t\t\t\t\telse\n\t\t\t\t\t\tcomp_resc_eq++;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tcomp_resc_nc++;\n\t\t\t}\n\t\t}\n\t\twiresc = (resource *) GET_NEXT(wiresc->rs_link);\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n * \tfree_resc - free space associated with attribute value\n *\n *\tFor each entry in the resource list, the entry is delinked,\n *\tthe resource entry value space freed (by calling the resource\n *\tfree routine), and then the resource structure is freed.\n *\n * @param[in] pattr - pointer to attribute structure\n *\n * @return\tVoid\n *\n */\nvoid\nfree_resc(attribute *pattr)\n{\n\tresource *next;\n\tresource *pr;\n\n\tif (!pattr)\n\t\treturn;\n\n\tpr = (resource *) GET_NEXT(pattr->at_val.at_list);\n\twhile (pr != NULL) {\n\t\tnext = (resource *) GET_NEXT(pr->rs_link);\n\t\tdelete_link(&pr->rs_link);\n\t\tif (pr->rs_value.at_flags & 
ATR_VFLAG_INDIRECT)\n\t\t\tfree_str(&pr->rs_value);\n\t\telse\n\t\t\tpr->rs_defin->rs_free(&pr->rs_value);\n\t\tfree(pr);\n\t\tpr = next;\n\t}\n\tfree_null(pattr);\n\tCLEAR_HEAD(pattr->at_val.at_list);\n}\n\n/**\n * @brief\n * \tcreate the search index for resource definitions\n *\n * @param[in] resc_def - address of array of resource_def structs\n * @param[in] limit - number of members in resource_def array\n *\n * @return\terror code\n * @retval\t0  - Success\n * @retval\t-1 - Failure\n *\n */\nint\ncr_rescdef_idx(resource_def *resc_def, int limit)\n{\n\tint i;\n\n\tif (!resc_def)\n\t\treturn -1;\n\n\t/* create the attribute index */\n\tif ((resc_attrdef_idx = pbs_idx_create(PBS_IDX_ICASE_CMP, 0)) == NULL)\n\t\treturn -1;\n\n\t/* add all attributes to the tree with key as the attr name */\n\tfor (i = 0; i < limit; i++) {\n\t\tif (strcmp(resc_def->rs_name, RESC_NOOP_DEF) != 0) {\n\t\t\tif (pbs_idx_insert(resc_attrdef_idx, resc_def->rs_name, resc_def) != PBS_IDX_RET_OK)\n\t\t\t\treturn -1;\n\t\t}\n\t\tresc_def++;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \tfind the resource_def structure for a resource with a given name\n *\n * @param[in] resc_def - address of array of resource_def structs\n * @param[in] name - name of resource\n *\n * @return\tpointer to structure\n * @retval\tpointer to resource_def structure - Success\n * @retval\tNULL - Error\n *\n */\nresource_def *\nfind_resc_def(resource_def *resc_def, char *name)\n{\n\tresource_def *found_def = NULL, *def = NULL;\n\n\tif (pbs_idx_find(resc_attrdef_idx, (void **) &name, (void **) &found_def, NULL) == PBS_IDX_RET_OK)\n\t\tdef = &resc_def[found_def - resc_def];\n\n\treturn def;\n}\n\n/**\n * @brief\n * \tfind_resc_entry - find a resource (value) entry in a list headed in\n * \tan attribute that points to the specified resource_def structure\n *\n * @param[in] pattr - pointer to attribute structure\n * @param[in] rscdf - pointer to resource_def structure\n *\n * @return\tstructure handler\n * @retval\tpointer to 
struct resource \tSuccess\n * @retval\tNULL\t\t\t\tError\n *\n */\n\nresource *\nfind_resc_entry(const attribute *pattr, resource_def *rscdf)\n{\n\tresource *pr;\n\n\tpr = (resource *) GET_NEXT(pattr->at_val.at_list);\n\twhile (pr != NULL) {\n\t\tif (pr->rs_defin == rscdf)\n\t\t\tbreak;\n\t\tpr = (resource *) GET_NEXT(pr->rs_link);\n\t}\n\treturn (pr);\n}\n\n/**\n * @brief\n * \tadd_resource_entry - add an \"unset\" entry for a resource type to a\n *\tlist headed in an attribute.  Purely for later display, the\n *\tresource list is maintained in alphabetical order.\n *\tThe parent attribute is marked with ATR_VFLAG_SET and ATR_VFLAG_MODIFY\n *\n * @param[in] pattr - pointer to attribute structure\n * @param[in] prdef -  pointer to resource_def structure\n *\n * @return\tstructure handler\n * @retval      pointer to struct resource      Success\n * @retval      NULL                            Error\n *\n */\n\nresource *\nadd_resource_entry(attribute *pattr, resource_def *prdef)\n{\n\tint i;\n\tresource *new;\n\tresource *pr;\n\n\tpr = (resource *) GET_NEXT(pattr->at_val.at_list);\n\twhile (pr != NULL) {\n\t\ti = strcasecmp(pr->rs_defin->rs_name, prdef->rs_name);\n\t\tif (i == 0) /* found a matching entry */\n\t\t\treturn (pr);\n\t\telse if (i > 0)\n\t\t\tbreak;\n\t\tpr = (resource *) GET_NEXT(pr->rs_link);\n\t}\n\tnew = (resource *) malloc(sizeof(resource));\n\tif (new == NULL) {\n\t\tlog_err(-1, \"add_resource_entry\", \"unable to malloc space\");\n\t\treturn NULL;\n\t}\n\tCLEAR_LINK(new->rs_link);\n\tnew->rs_defin = prdef;\n\tnew->rs_value.at_type = prdef->rs_type;\n\tnew->rs_value.at_flags = 0;\n\tnew->rs_value.at_user_encoded = 0;\n\tnew->rs_value.at_priv_encoded = 0;\n\tprdef->rs_free(&new->rs_value);\n\n\tif (pr != NULL) {\n\t\tinsert_link(&pr->rs_link, &new->rs_link, new, LINK_INSET_BEFORE);\n\t} else {\n\t\tappend_link(&pattr->at_val.at_list, &new->rs_link, new);\n\t}\n\tpost_attr_set(pattr);\n\treturn (new);\n}\n\n/**\n * @brief\n *      This function 
is called by the action routine of the resource_list attribute\n *\tof a job or reservation. For each resource in the list that has its\n *\town action routine, that routine is called.\n *\n * @see\n *\taction_resc_job\n *\taction_resc_resv\n *\n * @param[in]   pattr   -     pointer to new attribute value\n * @param[in]   pobject -     pointer to object\n * @param[in]   type    -     object is job or reservation\n * @param[in]   actmode -     action mode\n *\n * @return      int\n * @retval       PBSE_NONE : success\n * @retval       Error code returned by resource action routine\n *\n * @par Side Effects: None\n *\n * @par MT-safe: Yes\n *\n */\n\nint\naction_resc(attribute *pattr, void *pobject, int type, int actmode)\n{\n\tresource *pr;\n\tint rc;\n\n\tpr = (resource *) GET_NEXT(pattr->at_val.at_list);\n\twhile (pr) {\n\t\tif ((pr->rs_value.at_flags & ATR_VFLAG_MODIFY) &&\n\t\t    (pr->rs_defin->rs_action)) {\n\t\t\tif ((rc = pr->rs_defin->rs_action(pr, pattr, pobject,\n\t\t\t\t\t\t\t  type, actmode)) != 0)\n\t\t\t\treturn (rc);\n\t\t}\n\n\t\tpr->rs_value.at_flags &= ~ATR_VFLAG_MODIFY;\n\t\tpr = (resource *) GET_NEXT(pr->rs_link);\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n *      the at_action for the resource_list attribute of a job\n *\n * @see\n *\taction_resc\n *\n * @param[in]   pattr    -     pointer to new attribute value\n * @param[in]   pobject  -     pointer to job\n * @param[in]   actmode  -     action mode\n *\n * @return      int\n * @retval       PBSE_NONE : success\n * @retval       Error code returned by resource action routine\n *\n * @par Side Effects: None\n *\n * @par MT-safe: Yes\n *\n */\n\nint\naction_resc_job(attribute *pattr, void *pobject, int actmode)\n{\n\treturn (action_resc(pattr, pobject, PARENT_TYPE_JOB, actmode));\n}\n\n/**\n * @brief\n *      the at_action for the resource_list attribute of a reservation\n *\n * @see\n *\taction_resc\n *\n * @param[in]   pattr    -     pointer to new attribute value\n * @param[in]   pobject  -     pointer to 
reservation\n * @param[in]   actmode  -     action mode\n *\n * @return      int\n * @retval       PBSE_NONE : success\n * @retval       Error code returned by resource action routine\n *\n * @par Side Effects: None\n *\n * @par MT-safe: Yes\n *\n */\n\nint\naction_resc_resv(attribute *pattr, void *pobject, int actmode)\n{\n\treturn (action_resc(pattr, pobject, PARENT_TYPE_RESV, actmode));\n}\n\n/**\n * @brief\n *      the at_action for the resource_default attribute of a server\n *\n * @param[in]   pattr    -     pointer to new attribute value\n * @param[in]   pobject  -     pointer to reservation\n * @param[in]   actmode  -     action mode\n *\n * @return      int\n * @retval       PBSE_NONE : success\n * @retval       Error code returned by action_resc_dflt routine\n *\n * @par Side Effects: None\n *\n * @par MT-safe: Yes\n *\n */\nint\naction_resc_dflt_svr(attribute *pattr, void *pobj, int actmode)\n{\n\treturn (action_resc(pattr, pobj, PARENT_TYPE_SERVER, actmode));\n}\n\n/**\n * @brief\n *      the at_action for the resource_default attribute of a queue\n *\n * @param[in]   pattr    -     pointer to new attribute value\n * @param[in]   pobject  -     pointer to reservation\n * @param[in]   actmode  -     action mode\n *\n * @return      int\n * @retval       PBSE_NONE : success\n * @retval       Error code returned by action_resc_dflt routine\n *\n * @par Side Effects: None\n *\n * @par MT-safe: Yes\n *\n */\nint\naction_resc_dflt_queue(attribute *pattr, void *pobj, int actmode)\n{\n\treturn (action_resc(pattr, pobj, PARENT_TYPE_QUE_ALL, actmode));\n}\n"
  },
  {
    "path": "src/lib/Libattr/attr_fn_size.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <ctype.h>\n#include <memory.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <pbs_ifl.h>\n#include <errno.h>\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"pbs_error.h\"\n#include \"pbs_share.h\"\n\n/**\n * @file\tattr_fn_size.c\n * @brief\n * \tThis file contains functions for manipulating attributes of type\n *\tsize, which is an integer optionally followed by k,K,m,M,g,\n *\tG,t, or T, optionally followed by w,W,b,B.\n *\tIf 'w' or 'W' is not specified, b for bytes is assumed.\n * @details\n * The attribute has functions for:\n *\tDecoding the value string to the machine representation.\n *\tEncoding the internal attribute to external form\n *\tSetting the value by =, + or - operators.\n *\tComparing a (decoded) value with the attribute value.\n *\n * Some or all of the functions for an attribute type may be shared with\n * other attribute types.\n *\n * The prototypes are declared in \"attribute.h\"\n *\n * --------------------------------------------------\n * The Set of Attribute Functions for attributes with\n * value type \"size\"\n * --------------------------------------------------\n */\n\n/**\n * @brief\n * \tdecode_size - decode size into attribute structure\n *\n * @param[in] patr - 
ptr to attribute to decode\n * @param[in] name - attribute name\n * @param[in] rescn - resource name or null\n * @param[in] val - string holding values for attribute structure\n *\n * @retval      int\n * @retval      0       if ok\n * @retval      >0      error number if error\n * @retval      *patr   members set\n *\n */\n\nint\ndecode_size(attribute *patr, char *name, char *rescn, char *val)\n{\n\tint to_size(char *, struct size_value *);\n\n\tpatr->at_val.at_size.atsv_num = 0;\n\tpatr->at_val.at_size.atsv_shift = 0;\n\tif ((val != NULL) && (strlen(val) != 0)) {\n\t\terrno = 0;\n\t\tif (to_size(val, &patr->at_val.at_size) != 0)\n\t\t\treturn (PBSE_BADATVAL);\n\t\tif (errno != 0)\n\t\t\treturn (PBSE_BADATVAL);\n\t\tpost_attr_set(patr);\n\t} else\n\t\tATR_UNSET(patr);\n\n\treturn (0);\n}\n\n/**\n * @brief\n * \tencode_size - encode attribute of type size into attr_extern\n *\n * @param[in] attr - ptr to attribute to encode\n * @param[in] phead - ptr to head of attrlist list\n * @param[in] atname - attribute name\n * @param[in] rsname - resource name or null\n * @param[in] mode - encode mode\n * @param[out] rtnl - ptr to svrattrl\n *\n * @retval      int\n * @retval      >0      if ok, entry created and linked into list\n * @retval      =0      no value to encode, entry not created\n * @retval      -1      if error\n *\n */\n\n/*ARGSUSED*/\n\n#define CVNBUFSZ 23\n\nint\nencode_size(const attribute *attr, pbs_list_head *phead, char *atname, char *rsname, int mode, svrattrl **rtnl)\n{\n\tsize_t ct;\n\tchar cvnbuf[CVNBUFSZ];\n\tsvrattrl *pal;\n\n\tif (!attr)\n\t\treturn (-1);\n\tif (!(attr->at_flags & ATR_VFLAG_SET))\n\t\treturn (0);\n\n\tfrom_size(&attr->at_val.at_size, cvnbuf);\n\tct = strlen(cvnbuf) + 1;\n\n\tpal = attrlist_create(atname, rsname, ct);\n\tif (pal == NULL)\n\t\treturn (-1);\n\n\t(void) memcpy(pal->al_value, cvnbuf, ct);\n\tpal->al_flags = attr->at_flags;\n\tif (phead)\n\t\tappend_link(phead, &pal->al_link, pal);\n\tif (rtnl)\n\t\t*rtnl = 
pal;\n\tif ((phead == NULL) && (rtnl == NULL))\n\t\tfree(pal);\n\n\treturn (1);\n}\n\n/*\n * set_size - set attribute A to attribute B,\n *\teither A=B, A += B, or A -= B\n *\n * @param[in]   attr - pointer to new attribute to be set (A)\n * @param[in]   new  - pointer to attribute (B)\n * @param[in]   op   - operator\n *\n * @return      int\n * @retval      0       if ok\n * @retval     >0       if error\n *\n */\n\nint\nset_size(attribute *attr, attribute *new, enum batch_op op)\n{\n\tu_Long old;\n\tstruct size_value tmpa; /* the two temps are used to insure that the */\n\tstruct size_value tmpn; /* real attributes are not changed if error  */\n\tint normalize_size(struct size_value * a, struct size_value * b,\n\t\t\t   struct size_value * c, struct size_value * d);\n\n\tassert(attr && new && (new->at_flags &ATR_VFLAG_SET));\n\n\tif (op == INCR) {\n\t\tif (((attr->at_flags & ATR_VFLAG_SET) == 0) ||\n\t\t    ((attr->at_val.at_size.atsv_num == 0)))\n\t\t\top = SET; /* if adding to null, just set instead */\n\t}\n\n\tswitch (op) {\n\t\tcase SET:\n\t\t\tattr->at_val.at_size.atsv_num = new->at_val.at_size.atsv_num;\n\t\t\tattr->at_val.at_size.atsv_shift = new->at_val.at_size.atsv_shift;\n\t\t\tattr->at_val.at_size.atsv_units = new->at_val.at_size.atsv_units;\n\t\t\tbreak;\n\n\t\tcase INCR:\n\t\t\tif (normalize_size(&attr->at_val.at_size,\n\t\t\t\t\t   &new->at_val.at_size, &tmpa, &tmpn) < 0)\n\t\t\t\treturn (PBSE_BADATVAL);\n\t\t\told = tmpa.atsv_num;\n\t\t\ttmpa.atsv_num += tmpn.atsv_num;\n\t\t\tif (tmpa.atsv_num < old)\n\t\t\t\treturn (PBSE_BADATVAL);\n\t\t\tattr->at_val.at_size = tmpa;\n\t\t\tbreak;\n\n\t\tcase DECR:\n\t\t\tif (normalize_size(&attr->at_val.at_size,\n\t\t\t\t\t   &new->at_val.at_size, &tmpa, &tmpn) < 0)\n\t\t\t\treturn (PBSE_BADATVAL);\n\t\t\told = tmpa.atsv_num;\n\t\t\ttmpa.atsv_num -= tmpn.atsv_num;\n\t\t\tif (tmpa.atsv_num > old)\n\t\t\t\treturn (PBSE_BADATVAL);\n\t\t\tattr->at_val.at_size = tmpa;\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\treturn 
(PBSE_INTERNAL);\n\t}\n\tpost_attr_set(attr);\n\n\treturn (0);\n}\n\n/*\n * comp_size - compare two attributes of type size\n *\n * @param[in] attr - pointer to attribute structure\n * @param[in] with - pointer to attribute structure\n *\n * @return      int\n * @retval      0\tif 1st == 2nd\n * @retval      1\tif 1st > 2nd\n * @retval\t-1 \tif 1st < 2nd\n *\n */\n\nint\ncomp_size(attribute *attr, attribute *with)\n{\n\tstruct size_value tmpa;\n\tstruct size_value tmpw;\n\tint normalize_size(struct size_value * a, struct size_value * b,\n\t\t\t   struct size_value * c, struct size_value * d);\n\n\tif (normalize_size(&attr->at_val.at_size, &with->at_val.at_size,\n\t\t\t   &tmpa, &tmpw) != 0) {\n\t\tif (tmpa.atsv_shift >\n\t\t    tmpw.atsv_shift)\n\t\t\treturn (1);\n\t\telse if (tmpa.atsv_shift <\n\t\t\t tmpw.atsv_shift)\n\t\t\treturn (-1);\n\t\telse\n\t\t\treturn (0);\n\t} else if (tmpa.atsv_num > tmpw.atsv_num)\n\t\treturn (1);\n\telse if (tmpa.atsv_num < tmpw.atsv_num)\n\t\treturn (-1);\n\telse\n\t\treturn (0);\n}\n\n/*\n * free_size - use free_null to (not) free space\n */\n\n/**\n * @brief\n * \tnormalize_size - normalize two size values, adjust so the shift\n *\tcounts are the same, but not less than 10 (KB) otherwise a\n *\tchance for overflow.\n *\n * param[in] a - pointer to size_value structure\n * param[in] b -  pointer to size_value structure\n * param[in] ta -  pointer to size_value structure\n * param[in] tb -  pointer to size_value structure\n *\n */\n\nint\nnormalize_size(struct size_value *a, struct size_value *b, struct size_value *ta, struct size_value *tb)\n{\n\tint adj;\n\tu_Long temp;\n\n\t/*\n\t * we do the work in copies of the original attributes\n\t * to preserve the original (in case of error)\n\t */\n\t*ta = *a;\n\t*tb = *b;\n\n\t/* if either unit is in bytes (vs words), then both must be */\n\n\tif ((ta->atsv_units == ATR_SV_WORDSZ) &&\n\t    (tb->atsv_units != ATR_SV_WORDSZ)) {\n\t\tta->atsv_num *= SIZEOF_WORD;\n\t\tta->atsv_units = 
ATR_SV_BYTESZ;\n\t} else if ((ta->atsv_units != ATR_SV_WORDSZ) &&\n\t\t   (tb->atsv_units == ATR_SV_WORDSZ)) {\n\t\ttb->atsv_num *= SIZEOF_WORD;\n\t\ttb->atsv_units = ATR_SV_BYTESZ;\n\t}\n\n\t/* if either value is in units, round it up to kilos */\n\tif (ta->atsv_shift == 0) {\n\t\tta->atsv_num = (ta->atsv_num + 1023) >> 10;\n\t\tta->atsv_shift = 10;\n\t}\n\tif (tb->atsv_shift == 0) {\n\t\ttb->atsv_num = (tb->atsv_num + 1023) >> 10;\n\t\ttb->atsv_shift = 10;\n\t}\n\n\tadj = ta->atsv_shift - tb->atsv_shift;\n\n\tif (adj > 0) {\n\t\ttemp = ta->atsv_num;\n\t\tif ((adj > sizeof(u_Long) * 8) ||\n\t\t    (((temp << adj) >> adj) != ta->atsv_num))\n\t\t\treturn (-1); /* would overflow */\n\t\tta->atsv_shift = tb->atsv_shift;\n\t\tta->atsv_num = ta->atsv_num << adj;\n\t} else if (adj < 0) {\n\t\tadj = -adj;\n\t\ttemp = tb->atsv_num;\n\t\tif ((adj > sizeof(u_Long) * 8) ||\n\t\t    (((temp << adj) >> adj) != tb->atsv_num))\n\t\t\treturn (-1); /* would overflow */\n\t\ttb->atsv_shift = ta->atsv_shift;\n\t\ttb->atsv_num = tb->atsv_num << adj;\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n *\tDecode the value string into a size_value structure.\n *\n * @param[in] val - String containing the text to convert.\n * @param[out] psize - The size_value structure for the decoded value.\n *\n * @return - int\n * @retval - 0 - Success\n * @retval - !=0 - Failure\n *\n */\n\nint\nto_size(char *val, struct size_value *psize)\n{\n\tint havebw = 0;\n\tchar *pc;\n\n\tif ((val == NULL) || (psize == NULL))\n\t\treturn (PBSE_BADATVAL);\n\n\tpsize->atsv_units = ATR_SV_BYTESZ;\n\tpsize->atsv_num = strTouL(val, &pc, 10);\n\tpsize->atsv_shift = 0;\n\tif (pc == val) /* no numeric part */\n\t\treturn (PBSE_BADATVAL);\n\n\tswitch (*pc) {\n\t\tcase '\\0':\n\t\t\tbreak;\n\t\tcase 'k':\n\t\tcase 'K':\n\t\t\tpsize->atsv_shift = 10;\n\t\t\tbreak;\n\t\tcase 'm':\n\t\tcase 'M':\n\t\t\tpsize->atsv_shift = 20;\n\t\t\tbreak;\n\t\tcase 'g':\n\t\tcase 'G':\n\t\t\tpsize->atsv_shift = 30;\n\t\t\tbreak;\n\t\tcase 
't':\n\t\tcase 'T':\n\t\t\tpsize->atsv_shift = 40;\n\t\t\tbreak;\n\t\tcase 'p':\n\t\tcase 'P':\n\t\t\tpsize->atsv_shift = 50;\n\t\t\tbreak;\n\t\tcase 'b':\n\t\tcase 'B':\n\t\t\thavebw = 1;\n\t\t\tbreak;\n\t\tcase 'w':\n\t\tcase 'W':\n\t\t\thavebw = 1;\n\t\t\tpsize->atsv_units = ATR_SV_WORDSZ;\n\t\t\tbreak;\n\t\tdefault:\n\t\t\treturn (PBSE_BADATVAL); /* invalid string */\n\t}\n\tif (*pc != '\\0')\n\t\tpc++;\n\tif (*pc != '\\0') {\n\t\tif (havebw)\n\t\t\treturn (PBSE_BADATVAL); /* invalid string */\n\t\tswitch (*pc) {\n\t\t\tcase 'b':\n\t\t\tcase 'B':\n\t\t\t\tbreak;\n\t\t\tcase 'w':\n\t\t\tcase 'W':\n\t\t\t\tpsize->atsv_units = ATR_SV_WORDSZ;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\treturn (PBSE_BADATVAL);\n\t\t}\n\t\tpc++;\n\t}\n\t/* Make sure we reached the end of the size specification. */\n\tif (*pc != '\\0')\n\t\treturn (PBSE_BADATVAL); /* invalid string */\n\treturn (0);\n}\n\n/**\n * @brief\n * \tfrom_size - encode a string FROM a size_value structure\n *\n * @param[in] psize - pointer to size_value structure\n * @param[out] cvnbuf - buffer to hold size_value info\n *\n * @return\tVoid\n *\n */\n\nvoid\nfrom_size(const struct size_value *psize, char *cvnbuf)\n{\n\n#ifdef WIN32\n\t(void) sprintf(cvnbuf, \"%I64u\", psize->atsv_num);\n#else\n\t(void) sprintf(cvnbuf, \"%llu\", psize->atsv_num);\n#endif\n\n\tswitch (psize->atsv_shift) {\n\t\tcase 0:\n\t\t\tbreak;\n\t\tcase 10:\n\t\t\tstrcat(cvnbuf, \"k\");\n\t\t\tbreak;\n\t\tcase 20:\n\t\t\tstrcat(cvnbuf, \"m\");\n\t\t\tbreak;\n\t\tcase 30:\n\t\t\tstrcat(cvnbuf, \"g\");\n\t\t\tbreak;\n\t\tcase 40:\n\t\t\tstrcat(cvnbuf, \"t\");\n\t\t\tbreak;\n\t\tcase 50:\n\t\t\tstrcat(cvnbuf, \"p\");\n\t}\n\tif (psize->atsv_units & ATR_SV_WORDSZ)\n\t\tstrcat(cvnbuf, \"w\");\n\telse\n\t\tstrcat(cvnbuf, \"b\");\n}\n\n/**\n * @brief\n * \tget_kilobytes - return the size in the number of kilobytes from\n *\ta \"size\" type attribute.  
A value saved in bytes/words is rounded up.\n *\tIf the value is not set, or the attribute is not type \"size\", then\n *\tzero is returned.\n *\n * @param[in] attr - pointer to attribute structure\n *\n * @return\tu_Long\n * @retval\t0\tError\n * @retval\tval \tkilobytes\n *\n */\nu_Long\nget_kilobytes_from_attr(attribute *attr)\n{\n\tu_Long val;\n\n\tif (!attr || !(attr->at_flags & ATR_VFLAG_SET) ||\n\t    attr->at_type != ATR_TYPE_SIZE)\n\t\treturn (0);\n\n\tval = attr->at_val.at_size.atsv_num;\n\tif (attr->at_val.at_size.atsv_units == ATR_SV_WORDSZ)\n\t\tval *= SIZEOF_WORD;\n\tif (attr->at_val.at_size.atsv_shift == 0)\n\t\tval = (val + 1023) >> 10;\n\telse\n\t\tval = val << (attr->at_val.at_size.atsv_shift - 10);\n\treturn val;\n}\n\n/**\n * @brief  Return the size in the number of bytes from\n *\ta \"size\" type attribute.  A value saved in words is converted to bytes.\n *\tIf the value is not set, or the attribute is not type \"size\", then\n *\tzero is returned.\n *\n * @param[in] attr - pointer to attribute structure\n *\n * @return - size in bytes\n *\n */\nu_Long\nget_bytes_from_attr(attribute *attr)\n{\n\tu_Long val;\n\n\tif (!attr || !(attr->at_flags & ATR_VFLAG_SET) ||\n\t    attr->at_type != ATR_TYPE_SIZE)\n\t\treturn (0);\n\n\tval = attr->at_val.at_size.atsv_num;\n\tif (attr->at_val.at_size.atsv_units == ATR_SV_WORDSZ)\n\t\tval *= SIZEOF_WORD;\n\tif (attr->at_val.at_size.atsv_shift != 0)\n\t\tval = val << (attr->at_val.at_size.atsv_shift);\n\treturn val;\n}\n"
  },
  {
    "path": "src/lib/Libattr/attr_fn_str.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <ctype.h>\n#include <memory.h>\n#ifndef NDEBUG\n#include <stdio.h>\n#endif\n#include <stdlib.h>\n#include <string.h>\n#include \"pbs_ifl.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"pbs_error.h\"\n\n/**\n * @file\tattr_fn_str.c\n * @brief\n * \tThis file contains functions for manipulating attributes of type string\n *\n * \tThen there are a set of functions for each type of attribute:\n *\tstring\n * @details\n * Each set has functions for:\n *\tDecoding the value string to the internal representation.\n *\tEncoding the internal attribute form to external form\n *\tSetting the value by =, + or - operators.\n *\tComparing a (decoded) value with the attribute value.\n *\n * Some or all of the functions for an attribute type may be shared with\n * other attribute types.\n *\n * The prototypes are declared in \"attribute.h\"\n *\n * -------------------------------------------------\n * Set of general attribute functions for attributes\n * with value type \"string\"\n * -------------------------------------------------\n */\n\n/**\n * @brief\n * \tdecode_str - decode string into string attribute\n *\n * @param[in] patr - ptr to attribute to decode\n * @param[in] name - attribute name\n * @param[in] rescn - resource name 
or null\n * @param[in] val - string holding values for attribute structure\n *\n * @retval      int\n * @retval      0       if ok\n * @retval      >0      error number if error\n * @retval      *patr   members set\n *\n */\n\nint\ndecode_str(attribute *patr, char *name, char *rescn, char *val)\n{\n\tsize_t len;\n\n\tif ((patr->at_flags & ATR_VFLAG_SET) && (patr->at_val.at_str))\n\t\t(void) free(patr->at_val.at_str);\n\n\tif ((val != NULL) && ((len = strlen(val) + 1) > 1)) {\n\t\tpatr->at_val.at_str = malloc((unsigned) len);\n\t\tif (patr->at_val.at_str == NULL)\n\t\t\treturn (PBSE_SYSTEM);\n\t\t(void) strcpy(patr->at_val.at_str, val);\n\t\tpost_attr_set(patr);\n\t} else {\n\t\tATR_UNSET(patr);\n\t\tpatr->at_val.at_str = NULL;\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n * \tencode_str - encode attribute of type ATR_TYPE_STR into attr_extern\n *\n * @param[in] attr - ptr to attribute to encode\n * @param[in] phead - ptr to head of attrlist list\n * @param[in] atname - attribute name\n * @param[in] rsname - resource name or null\n * @param[in] mode - encode mode\n * @param[out] rtnl - ptr to svrattrl\n *\n * @retval      int\n * @retval      >0      if ok, entry created and linked into list\n * @retval      =0      no value to encode, entry not created\n * @retval      -1      if error\n *\n */\n\n/*ARGSUSED*/\n\nint\nencode_str(const attribute *attr, pbs_list_head *phead, char *atname, char *rsname, int mode, svrattrl **rtnl)\n\n{\n\tsvrattrl *pal;\n\n\tif (!attr)\n\t\treturn (-1);\n\tif (!(attr->at_flags & ATR_VFLAG_SET) || !attr->at_val.at_str ||\n\t    (*attr->at_val.at_str == '\\0'))\n\t\treturn (0);\n\n\tpal = attrlist_create(atname, rsname, (int) strlen(attr->at_val.at_str) + 1);\n\tif (pal == NULL)\n\t\treturn (-1);\n\n\t(void) strcpy(pal->al_value, attr->at_val.at_str);\n\tpal->al_flags = attr->at_flags;\n\tif (phead)\n\t\tappend_link(phead, &pal->al_link, pal);\n\tif (rtnl)\n\t\t*rtnl = pal;\n\n\tif ((phead == NULL) && (rtnl == 
NULL))\n\t\tfree(pal);\n\n\treturn (1);\n}\n\n/**\n * @brief\n * \tset_str - set attribute value based upon another\n *\n *\tA+B --> B is concatenated to end of A\n *\tA=B --> A is replaced with B\n *\tA-B --> If B is a substring at the end of A, it is stripped off\n *\n * @param[in]   attr - pointer to new attribute to be set (A)\n * @param[in]   new  - pointer to attribute (B)\n * @param[in]   op   - operator\n *\n * @return      int\n * @retval      0       if ok\n * @retval     >0       if error\n *\n */\n\nint\nset_str(attribute *attr, attribute *new, enum batch_op op)\n{\n\tchar *new_value;\n\tchar *p;\n\tsize_t nsize;\n\n\tassert(attr && new &&new->at_val.at_str && (new->at_flags &ATR_VFLAG_SET));\n\tnsize = strlen(new->at_val.at_str) + 1; /* length of new string */\n\tif ((op == INCR) && !attr->at_val.at_str)\n\t\top = SET; /* no current string, change INCR to SET */\n\n\tswitch (op) {\n\n\t\tcase SET: /* set is replace old string with new */\n\n\t\t\tif (attr->at_val.at_str)\n\t\t\t\t(void) free(attr->at_val.at_str);\n\t\t\tif ((attr->at_val.at_str = malloc(nsize)) == NULL)\n\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t(void) strcpy(attr->at_val.at_str, new->at_val.at_str);\n\t\t\tbreak;\n\n\t\tcase INCR: /* INCR is concatenate new to old string */\n\n\t\t\tnsize += strlen(attr->at_val.at_str);\n\t\t\tif (attr->at_val.at_str)\n\t\t\t\tnew_value = realloc(attr->at_val.at_str, nsize);\n\t\t\telse\n\t\t\t\tnew_value = malloc(nsize);\n\t\t\tif (new_value == NULL)\n\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\tattr->at_val.at_str = new_value;\n\t\t\t(void) strcat(attr->at_val.at_str, new->at_val.at_str);\n\t\t\tbreak;\n\n\t\tcase DECR: /* DECR is remove substring if match, start at end */\n\n\t\t\tif (!attr->at_val.at_str)\n\t\t\t\tbreak;\n\n\t\t\tif (--nsize == 0)\n\t\t\t\tbreak;\n\t\t\tp = attr->at_val.at_str + strlen(attr->at_val.at_str) - nsize;\n\t\t\twhile (p >= attr->at_val.at_str) {\n\t\t\t\tif (strncmp(p, new->at_val.at_str, (int) nsize) == 0) {\n\t\t\t\t\tdo 
{\n\t\t\t\t\t\t*p = *(p + nsize);\n\t\t\t\t\t} while (*p++);\n\t\t\t\t}\n\t\t\t\tp--;\n\t\t\t}\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\treturn (PBSE_INTERNAL);\n\t}\n\tif ((attr->at_val.at_str != NULL) && (*attr->at_val.at_str != '\\0'))\n\t\tpost_attr_set(attr);\n\telse\n\t\tattr->at_flags &= ~ATR_VFLAG_SET;\n\n\treturn (0);\n}\n\n/**\n * @brief\n * \tcomp_str - compare two attributes of type ATR_TYPE_STR\n *\n * @param[in] attr - pointer to attribute structure\n * @param[in] with - pointer to attribute structure\n *\n * @return      int\n * @retval      0       if the two string values are equal\n * @retval      !0      otherwise, per strcmp(); -1 if \"attr\" has no value\n *\n */\n\nint\ncomp_str(attribute *attr, attribute *with)\n{\n\tif (!attr || !attr->at_val.at_str)\n\t\treturn (-1);\n\treturn (strcmp(attr->at_val.at_str, with->at_val.at_str));\n}\n\n/**\n * @brief\n * \tfree_str - free space malloc-ed for string attribute value\n *\n * @param[in] attr - pointer to attribute structure\n *\n * @return\tVoid\n *\n */\n\nvoid\nfree_str(attribute *attr)\n{\n\tif ((attr->at_flags & ATR_VFLAG_SET) && (attr->at_val.at_str)) {\n\t\t(void) free(attr->at_val.at_str);\n\t}\n\tfree_null(attr);\n\tattr->at_val.at_str = NULL;\n}\n\n/**\n * @brief\n *\tSpecial function that verifies the size of the input\n * \tfor jobname before calling decode_str\n *\n * @param[in] patr - attribute structure\n * @param[in] name - attribute name\n * @param[in] rescn - resource name - unused here\n * @param[in] val - attribute value\n *\n * @return  int\n * @retval  0 if ok\n * @retval  >0 error number if error\n *\n */\n\nint\ndecode_jobname(attribute *patr, char *name, char *rescn, char *val)\n{\n\n\tif (val != NULL) {\n\t\tif (strlen(val) > (size_t) PBS_MAXJOBNAME)\n\t\t\treturn (PBSE_BADATVAL);\n\t}\n\treturn (decode_str(patr, name, rescn, val));\n}\n\n/**\n * set_attr_str: use set_attr_generic() instead\n */\n\n/**\n * @brief\tAttribute getter function for string type values\n *\n * 
@param[in]\tpattr\t-\tpointer to the attribute\n *\n * @return\tchar *\n * @retval\tstring value of the attribute\n * @retval\tNULL if attribute is NULL\n *\n * @par MT-Safe: No\n * @par Side Effects: None\n */\nchar *\nget_attr_str(const attribute *pattr)\n{\n\tif (pattr != NULL)\n\t\treturn pattr->at_val.at_str;\n\treturn NULL;\n}\n"
  },
  {
    "path": "src/lib/Libattr/attr_fn_time.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <limits.h>\n#include <assert.h>\n#include <ctype.h>\n#include <memory.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <errno.h>\n#include <pbs_ifl.h>\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"pbs_error.h\"\n\n/**\n * @file\tattr_fn_time.c\n * @brief\n * \tThis file contains functions for manipulating attributes of type\n *\ttime:\t[[hh:]mm:]ss[.sss]\n * @details\n * \tEach set has functions for:\n *\tDecoding the value string to the machine representation.\n *\tEncoding the internal attribute to external form\n *\tSetting the value by =, + or - operators.\n *\tComparing a (decoded) value with the attribute value.\n *\n * Some or all of the functions for an attribute type may be shared with\n * other attribute types.\n *\n * The prototypes are declared in \"attribute.h\"\n *\n * --------------------------------------------------\n * The Set of Attribute Functions for attributes with\n * value type \"long\"\n * --------------------------------------------------\n */\n\n/**\n * @brief\n *\tdecode time into attribute structure of type ATR_TYPE_LONG\n *\n * @param[in/out]   patr - pointer to attribute structure\n * @param[in]   name - attribute name\n * @param[in]   rescn - resource name\n * @param[in]   val - 
attribute value\n *\n * @return int\n * @retval  0\tif ok\n * @retval >0   error\n * @retval *patr elements set\n *\n */\n\n#define PBS_MAX_TIME (LONG_MAX - 1)\nint\ndecode_time(attribute *patr, char *name, char *rescn, char *val)\n{\n\tint i;\n\tchar msec[4] = {'\\0'};\n\tint ncolon = 0;\n\tchar *pc;\n\tlong rv = 0;\n\tchar *workval;\n\tchar *workvalsv;\n\tlong strtol_ret = 0;\n\tint index = -1;\n\n\tif ((val == NULL) || (strlen(val) == 0)) {\n\t\tATR_UNSET(patr);\n\t\tpatr->at_val.at_long = 0;\n\t\treturn (0);\n\t}\n\n\tworkval = strdup(val);\n\tif (workval == NULL)\n\t\treturn (PBSE_SYSTEM);\n\tworkvalsv = workval;\n\n\tfor (i = 0; i < 3; ++i)\n\t\tmsec[i] = '0';\n\n\tfor (pc = workval; *pc; ++pc) {\n\t\tindex++;\n\t\tif (*pc == ':') {\n\t\t\tif ((++ncolon > 2) || (index == 0) || (!isdigit(val[index - 1])))\n\t\t\t\tgoto badval;\n\t\t\t*pc = '\\0';\n\t\t\terrno = 0;\n\t\t\tstrtol_ret = strtol(workval, NULL, 10);\n\t\t\tif ((strtol_ret < 0) || (errno != 0))\n\t\t\t\tgoto badval;\n\t\t\trv = (rv * 60) + strtol_ret;\n\t\t\tworkval = pc + 1;\n\n\t\t} else if (*pc == '.') {\n\t\t\t*pc++ = '\\0';\n\t\t\tif ((index == 0) || (!isdigit(val[index - 1])))\n\t\t\t\tgoto badval;\n\t\t\tfor (i = 0; *pc; ++pc) {\n\t\t\t\tif (!isdigit((int) *pc)) {\n\t\t\t\t\tgoto badval;\n\t\t\t\t}\n\t\t\t\tif (i < 3) {\n\t\t\t\t\tmsec[i++] = *pc;\n\t\t\t\t}\n\t\t\t}\n\t\t\tbreak;\n\t\t} else if (!isdigit((int) *pc)) {\n\t\t\tgoto badval; /* bad value */\n\t\t}\n\t}\n\terrno = 0;\n\tstrtol_ret = strtol(workval, NULL, 10);\n\tif ((strtol_ret < 0) || (errno != 0))\n\t\tgoto badval;\n\trv = (rv * 60) + strtol_ret;\n\tif ((rv > PBS_MAX_TIME) || (rv < 0))\n\t\tgoto badval;\n\tif (atoi(msec) >= 500)\n\t\trv++;\n\tpatr->at_val.at_long = rv;\n\tpost_attr_set(patr);\n\t(void) free(workvalsv);\n\treturn (0);\n\nbadval:\n\t(void) free(workvalsv);\n\treturn (PBSE_BADATVAL);\n}\n\n/**\n * @brief\n *\tencode_time - encode attribute of type long into attr_extern\n *\twith value in form of hh:mm:ss\n *\n * 
@param[in] attr - ptr to attribute to encode\n * @param[in] phead - ptr to head of attrlist list\n * @param[in] atname - attribute name\n * @param[in] rsname - resource name or null\n * @param[in] mode - encode mode\n * @param[out] rtnl - ptr to svrattrl\n *\n * @retval      int\n * @retval      >0      if ok, entry created and linked into list\n * @retval      =0      no value to encode, entry not created\n * @retval      -1      if error\n *\n */\n/*ARGSUSED*/\n\n#define CVNBUFSZ 24\n\nint\nencode_time(const attribute *attr, pbs_list_head *phead, char *atname, char *rsname, int mode, svrattrl **rtnl)\n{\n\tsize_t ct;\n\tunsigned long n;\n\tunsigned long hr;\n\tunsigned int min;\n\tunsigned int sec;\n\tsvrattrl *pal;\n\tchar *pv;\n\tchar cvnbuf[CVNBUFSZ] = {'\\0'};\n\n\tif (!attr)\n\t\treturn (-1);\n\tif (!(attr->at_flags & ATR_VFLAG_SET))\n\t\treturn (0);\n\tif (attr->at_val.at_long < 0)\n\t\treturn (-1);\n\n\tn = attr->at_val.at_long;\n\thr = n / 3600;\n\tn %= 3600;\n\tmin = n / 60;\n\tsec = n % 60;\n\n\tpv = cvnbuf;\n\t(void) sprintf(pv, \"%02lu:%02u:%02u\", hr, min, sec);\n\tpv += strlen(pv);\n\n\tct = strlen(cvnbuf) + 1;\n\n\tpal = attrlist_create(atname, rsname, ct);\n\tif (pal == NULL)\n\t\treturn (-1);\n\n\t(void) memcpy(pal->al_value, cvnbuf, ct);\n\tpal->al_flags = attr->at_flags;\n\tif (phead)\n\t\tappend_link(phead, &pal->al_link, pal);\n\tif (rtnl)\n\t\t*rtnl = pal;\n\n\treturn (1);\n}\n\n/*\n * set_time  - use the function set_l()\n *\n * comp_time - use the function comp_l()\n *\n * free_l - use free_null to (not) free space\n */\n\n/**\n * @brief\n *\tAction routine for attributes of type time (or long) where a zero\n *\tvalue is to be disallowed.\n *\n * @param[in]   pattr - pointer to the changed attribute\n * @param[in]   pobject - pointer to parent object of the attribute - unused\n * @param[in]   actmode - if being set/altered - unused\n *\n * @return      int - a PBSE_ defined error\n * @retval\tPBSE_NONE - no error\n * @retval\tPBSE_BADATVAL 
- if being set to zero\n *\n */\n\nint\nat_non_zero_time(attribute *pattr, void *pobject, int actmode)\n{\n\tif ((pattr->at_flags & ATR_VFLAG_SET) == 0)\n\t\treturn PBSE_NONE;\n\tif (pattr->at_val.at_long == 0)\n\t\treturn PBSE_BADATVAL;\n\telse\n\t\treturn PBSE_NONE;\n}\n"
  },
  {
    "path": "src/lib/Libattr/attr_fn_unkn.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <ctype.h>\n#include <memory.h>\n#ifndef NDEBUG\n#include <stdio.h>\n#endif\n#include <stdlib.h>\n#include <string.h>\n#include <pbs_ifl.h>\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"pbs_error.h\"\n\n/**\n * @file\tattr_fn_unkn.c\n * @brief\n * This file contains functions for manipulating attributes of an\n * unknown (unrecognized) name (and therefore unknown type).\n * It is a collection point for all \"other\" attributes, other than\n * the types with specific definition and meaning.\n *\n * Because the type is unknown, it cannot be decoded into a native\n * form.  
Thus the attribute is maintained in the attrlist form.\n * Any attribute/value located here will be sent to the Scheduler, which\n * may do with them as it chooses within its own rules.\n *\n * The prototypes are declared in \"attribute.h\"\n *\n * ----------------------------------------------------------------------------\n * Attribute functions for attributes with value type \"unknown\"\n * ----------------------------------------------------------------------------\n */\n\n/* External Global Items */\n\n/* private functions */\n\n/**\n * @brief\n * \tdecode_unkn - decode a pair of strings (name and value) into the Unknown\n *\ttype attribute/resource which is maintained as a \"svrattrl\", a\n *\tlinked list of structures containing strings.\n *\n * @param[in] patr - ptr to attribute to decode\n * @param[in] name - attribute name\n * @param[in] rescn - resource name or null\n * @param[in] value - string holding the value for the attribute\n *\n * @retval      int\n * @retval      0       if ok\n * @retval      >0      error number if error,\n * @retval      *patr   members set\n *\n */\n\nint\ndecode_unkn(attribute *patr, char *name, char *rescn, char *value)\n{\n\tsvrattrl *entry;\n\tsize_t valln;\n\n\tif (patr == NULL)\n\t\treturn (PBSE_INTERNAL);\n\n\tif (!(patr->at_flags & ATR_VFLAG_SET))\n\t\tCLEAR_HEAD(patr->at_val.at_list);\n\n\tif (name == NULL)\n\t\treturn (PBSE_INTERNAL);\n\n\tif (value == NULL)\n\t\tvalln = 0;\n\telse\n\t\tvalln = strlen(value) + 1;\n\n\tentry = attrlist_create(name, rescn, valln);\n\tif (entry == NULL)\n\t\treturn (PBSE_SYSTEM);\n\n\tif (valln)\n\t\tmemcpy(entry->al_value, value, valln);\n\n\tappend_link(&patr->at_val.at_list, &entry->al_link, entry);\n\tpost_attr_set(patr);\n\treturn (0);\n}\n\n/**\n * @brief\n * \tencode_unkn - encode attr of unknown type into attrlist form\n *\n * Here things are different from the typical attribute.  Most have a\n * single value to be encoded.  
But \"the unknown\" attribute may have a whole\n * list.\n *\n * This function does not use the parent attribute name, after all \"_other_\"\n * is rather meaningless.  In addition, each unknown already is in an\n * attrlist form.\n *\n * Thus for each entry in the list, encode_unkn duplicates the existing\n * attrlist struct and links the copy into the list.\n *\n * @param[in] attr - ptr to attribute to encode\n * @param[in] phead - ptr to head of attrlist list\n * @param[in] atname - attribute name\n * @param[in] rsname - resource name or null\n * @param[in] mode - encode mode\n * @param[out] rtnl - ptr to svrattrl\n *\n * @retval      int\n * @retval      >0      if ok, entry created and linked into list\n * @retval      =0      no value to encode, entry not created\n * @retval      -1      if error\n *\n */\n/*ARGSUSED*/\n\nint\nencode_unkn(const attribute *attr, pbs_list_head *phead, char *atname, char *rsname, int mode, svrattrl **rtnl)\n{\n\tsvrattrl *plist;\n\tsvrattrl *pnew;\n\tsvrattrl *xprior = NULL;\n\tint first = 1;\n\n\tif (!attr)\n\t\treturn (-2);\n\n\tplist = (svrattrl *) GET_NEXT(attr->at_val.at_list);\n\tif (plist == NULL)\n\t\treturn (0);\n\n\twhile (plist != NULL) {\n\t\tpnew = (svrattrl *) malloc(plist->al_tsize);\n\t\tif (pnew == NULL)\n\t\t\treturn (-1);\n\t\tCLEAR_LINK(pnew->al_link);\n\t\tpnew->al_sister = NULL;\n\t\tpnew->al_tsize = plist->al_tsize;\n\t\tpnew->al_nameln = plist->al_nameln;\n\t\tpnew->al_rescln = plist->al_rescln;\n\t\tpnew->al_valln = plist->al_valln;\n\t\tpnew->al_flags = plist->al_flags;\n\t\tpnew->al_refct = 1;\n\n\t\tpnew->al_name = (char *) pnew + sizeof(svrattrl);\n\t\t(void) memcpy(pnew->al_name, plist->al_name, plist->al_nameln);\n\t\tif (plist->al_rescln) {\n\t\t\tpnew->al_resc = pnew->al_name + pnew->al_nameln;\n\t\t\t(void) memcpy(pnew->al_resc, plist->al_resc,\n\t\t\t\t      plist->al_rescln);\n\t\t} else {\n\t\t\tpnew->al_resc = NULL;\n\t\t}\n\t\tif (plist->al_valln) {\n\t\t\tpnew->al_value = pnew->al_name + 
pnew->al_nameln +\n\t\t\t\t\t pnew->al_rescln;\n\t\t\t(void) memcpy(pnew->al_value, plist->al_value,\n\t\t\t\t      pnew->al_valln);\n\t\t}\n\t\tif (phead)\n\t\t\tappend_link(phead, &pnew->al_link, pnew);\n\t\tif (first) {\n\t\t\tif (rtnl)\n\t\t\t\t*rtnl = pnew;\n\t\t\tfirst = 0;\n\t\t} else {\n\t\t\tif (xprior)\n\t\t\t\txprior->al_sister = pnew;\n\t\t}\n\t\txprior = pnew;\n\n\t\tplist = (svrattrl *) GET_NEXT(plist->al_link);\n\t}\n\treturn (1);\n}\n\n/**\n * @brief\n * \tset_unkn - set value of attribute of unknown type to another\n *\n *\tEach entry in the list headed by the \"new\" attribute is appended\n *\tto the list headed by \"old\".\n *\n *\tAll operations, set, incr, and decr, map to append.\n * @param[in]   old  - pointer to attribute to be set (A)\n * @param[in]   new  - pointer to attribute (B)\n * @param[in]   op   - operator\n *\n * @return      int\n * @retval      0       if ok\n * @retval     >0       if error\n *\n */\n\n/*ARGSUSED*/\nint\nset_unkn(attribute *old, attribute *new, enum batch_op op)\n{\n\tsvrattrl *plist;\n\tsvrattrl *pnext;\n\n\tassert(old && new && (new->at_flags &ATR_VFLAG_SET));\n\n\tplist = (svrattrl *) GET_NEXT(new->at_val.at_list);\n\twhile (plist != NULL) {\n\t\tpnext = (svrattrl *) GET_NEXT(plist->al_link);\n\t\tdelete_link(&plist->al_link);\n\t\tappend_link(&old->at_val.at_list, &plist->al_link, plist);\n\t\tplist = pnext;\n\t}\n\tpost_attr_set(old);\n\treturn (0);\n}\n\n/**\n * @brief\n * \tcomp_unkn - compare two attributes of unknown type\n *\n *\tHow do you compare something when you don't know what it is...\n *\tSo, always returns +1\n *\n * @param[in] attr - pointer to attribute structure\n * @param[in] with - pointer to attribute structure\n *\n * @return      int\n * @retval      1       always; unknown values cannot be meaningfully compared\n *\n */\n\nint\ncomp_unkn(attribute *attr, attribute *with)\n{\n\treturn (1);\n}\n\n/**\n * @brief\n * \tfree_unkn - free 
space associated with attribute value\n *\n *\tFor each entry in the list, it is delinked, and freed.\n *\n * @param[in] pattr - pointer to attribute structure\n *\n * @return\tVoid\n *\n */\n\nvoid\nfree_unkn(attribute *pattr)\n{\n\tsvrattrl *plist;\n\n\tif (pattr->at_flags & ATR_VFLAG_SET) {\n\t\twhile ((plist = (svrattrl *) GET_NEXT(pattr->at_val.at_list)) !=\n\t\t       NULL) {\n\t\t\tdelete_link(&plist->al_link);\n\t\t\t(void) free(plist);\n\t\t}\n\t}\n\tfree_null(pattr);\n\tCLEAR_HEAD(pattr->at_val.at_list);\n}\n"
  },
  {
    "path": "src/lib/Libattr/attr_func.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <ctype.h>\n#include <memory.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <errno.h>\n#include <ctype.h>\n#include \"pbs_ifl.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"pbs_error.h\"\n#include \"libpbs.h\"\n#include \"pbs_idx.h\"\n#include \"pbs_entlim.h\"\n#include \"job.h\"\n\n/**\n *\n * @brief\n * \tThis file contains general functions for manipulating attributes and attribute lists.\n *\n */\n\n/**\n * @brief\n * \tclear_attr - clear an attribute value structure and clear ATR_VFLAG_SET\n *\n * @param[in] pattr - pointer to attribute structure\n * @param[in] pdef - pointer to attribute_def structure\n *\n * @return\tVoid\n *\n */\n\nvoid\nclear_attr(attribute *pattr, attribute_def *pdef)\n{\n#ifndef NDEBUG\n\tif (pdef == 0) {\n\t\t(void) fprintf(stderr, \"Assertion failed, bad pdef in clear_attr\\n\");\n\t\tabort();\n\t}\n#endif\n\t(void) memset((char *) pattr, 0, sizeof(struct attribute));\n\tpattr->at_type = pdef->at_type;\n\tif ((pattr->at_type == ATR_TYPE_RESC) ||\n\t    (pattr->at_type == ATR_TYPE_LIST))\n\t\tCLEAR_HEAD(pattr->at_val.at_list);\n}\n\n/**\n * @brief\n * \tCreate the search index for the provided attribute def array\n *\n * @param[in] attr_def - ptr to attribute 
definitions\n * @param[in] limit - limit on size of def array\n *\n * @return\tvoid *\n * @retval\t NULL\tFailure\n * @retval\t!NULL\tThe address of the search index created\n *\n */\nvoid *\ncr_attrdef_idx(attribute_def *adef, int limit)\n{\n\tint i;\n\tvoid *attrdef_idx = NULL;\n\n\tif (!adef)\n\t\treturn NULL;\n\n\t/* create the attribute index */\n\tif ((attrdef_idx = pbs_idx_create(PBS_IDX_ICASE_CMP, 0)) == NULL)\n\t\treturn NULL;\n\n\t/* add all attributes to the tree with key as the attr name */\n\tfor (i = 0; i < limit; i++) {\n\t\tif (pbs_idx_insert(attrdef_idx, adef->at_name, adef) != PBS_IDX_RET_OK)\n\t\t\treturn NULL;\n\n\t\tadef++;\n\t}\n\treturn attrdef_idx;\n}\n\n/**\n * @brief\n * \tfind attribute definition by name\n *\n *\tSearches array of attribute definition structures to find one\n *\twhose name matches the requested name.\n *\n * @param[in] attr_idx - search index of the attribute def array\n * @param[in] attr_def - ptr to attribute definitions\n * @param[in] name - attribute name to find\n *\n * @return\tint\n * @retval\t>=0\tindex into definition structure array\n * @retval\t-1\tif no matching name was found\n *\n */\nint\nfind_attr(void *attrdef_idx, attribute_def *attr_def, char *name)\n{\n\tint index = -1;\n\tattribute_def *found_def = NULL;\n\n\tif (pbs_idx_find(attrdef_idx, (void **) &name, (void **) &found_def, NULL) == PBS_IDX_RET_OK)\n\t\tindex = (found_def - attr_def);\n\n\treturn index;\n}\n\n/**\n * @brief\n * \tfree_svrcache - free the cached svrattrl entries associated with an attribute\n *\n * @param[in] attr - pointer to attribute structure\n *\n * @return\tVoid\n *\n */\n\nvoid\nfree_svrcache(attribute *attr)\n{\n\tstruct svrattrl *working;\n\tstruct svrattrl *sister;\n\n\tworking = attr->at_user_encoded;\n\tif ((working != NULL) && (--working->al_refct <= 0)) {\n\t\twhile (working) {\n\t\t\tsister = working->al_sister;\n\t\t\tdelete_link(&working->al_link);\n\t\t\t(void) free(working);\n\t\t\tworking = 
sister;\n\t\t}\n\t}\n\tattr->at_user_encoded = NULL;\n\n\tworking = attr->at_priv_encoded;\n\tif ((working != NULL) && (--working->al_refct <= 0)) {\n\t\twhile (working) {\n\t\t\tsister = working->al_sister;\n\t\t\tdelete_link(&working->al_link);\n\t\t\t(void) free(working);\n\t\t\tworking = sister;\n\t\t}\n\t}\n\tattr->at_priv_encoded = NULL;\n}\n\n/**\n * @brief\n *\tfree_null - A free routine for attributes which do not\n *\thave malloc-ed space ( boolean, char, long ).\n *\n * @param[in] attr - pointer to attribute structure\n *\n * @return\tVoid\n *\n */\n/*ARGSUSED*/\nvoid\nfree_null(attribute *attr)\n{\n\tmemset(&attr->at_val, 0, sizeof(attr->at_val));\n\tif (attr->at_type == ATR_TYPE_SIZE)\n\t\tattr->at_val.at_size.atsv_shift = 10;\n\tattr->at_flags &= ~(ATR_VFLAG_SET | ATR_VFLAG_INDIRECT | ATR_VFLAG_TARGET);\n\tif (attr->at_user_encoded != NULL || attr->at_priv_encoded != NULL)\n\t\tfree_svrcache(attr);\n}\n\n/**\n * @brief\n * \t\tdecode_null - Null attribute decode routine for Read Only (server\n *\t\tand queue ) attributes.  
It just returns 0.\n *\n * @param[in]\tpatr\t-\tnot used\n * @param[in]\tname\t-\tnot used\n * @param[in]\trn\t-\tnot used\n * @param[in]\tval\t-\tnot used\n *\n * @return\tzero\n */\n\nint\ndecode_null(attribute *patr, char *name, char *rn, char *val)\n{\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tset_null - Null set routine for Read Only attributes.\n *\n * @param[in]\tpattr\t-\tnot used\n * @param[in]\tnew\t-\tnot used\n * @param[in]\top\t-\tnot used\n *\n * @return\tzero\n */\n\nint\nset_null(attribute *pattr, attribute *new, enum batch_op op)\n{\n\treturn 0;\n}\n\n/**\n * @brief\n * \tcomp_null - A do nothing, except return 0, attribute comparison\n *\tfunction.\n *\n * @param[in] attr - pointer to attribute structure\n * @param[in] with - pointer to attribute structure\n *\n * @return\tint\n * @retval\t0\n *\n */\n\nint\ncomp_null(attribute *attr, attribute *with)\n{\n\treturn 0;\n}\n\n/**\n * @brief\n * \tattrlist_alloc - allocate space for an svrattrl structure entry\n *\n *\tThe space required for the entry is calculated and allocated.\n *\tThe total size and three string lengths are set in the entry,\n *\tbut no values are placed in it.\n *\n * @param[in] szname - string size for name\n * @param[in] szresc - string size for resource\n * @param[in] szval - string size for value\n *\n * @return \tsvrattrl *\n * @retval\tptr to entry \ton success\n * @retval\tNULL \t\tif error\n *\n */\n\nsvrattrl *\nattrlist_alloc(int szname, int szresc, int szval)\n{\n\tregister size_t tsize;\n\tsvrattrl *pal;\n\n\tif (szname < 0 || szresc < 0 || szval < 0)\n\t\treturn NULL;\n\ttsize = sizeof(svrattrl) + szname + szresc + szval;\n\tpal = (svrattrl *) malloc(tsize);\n\tif (pal == NULL)\n\t\treturn NULL;\n#ifdef DEBUG\n\tmemset(pal, 0, sizeof(svrattrl));\n#endif\n\n\tCLEAR_LINK(pal->al_link); /* clear link */\n\tpal->al_sister = NULL;\n\tpal->al_atopl.next = 0;\n\tpal->al_tsize = tsize; /* set various string sizes */\n\tpal->al_nameln = szname;\n\tpal->al_rescln = 
szresc;\n\tpal->al_valln = szval;\n\tpal->al_flags = 0;\n\tpal->al_op = SET;\n\tpal->al_name = (char *) pal + sizeof(svrattrl);\n\tif (szresc)\n\t\tpal->al_resc = pal->al_name + szname;\n\telse\n\t\tpal->al_resc = NULL;\n\tpal->al_value = pal->al_name + szname + szresc;\n\tpal->al_refct = 0;\n\treturn (pal);\n}\n\n/**\n * @brief\n * \tattrlist_create - create an svrattrl structure entry\n *\n *\tThe space required for the entry is calculated and allocated.\n * \tThe attribute and resource name is copied into the entry.\n * \tNote, the value string should be inserted by the caller after this returns.\n *\n * @param[in] aname - attribute name\n * @param[in] rname - resource name if needed or null\n * @param[in] vsize - size of resource value\n *\n * @return      svrattrl *\n * @retval      ptr to entry    on success\n * @retval      NULL            if error\n *\n */\n\nsvrattrl *\nattrlist_create(char *aname, char *rname, int vsize)\n{\n\tsvrattrl *pal;\n\tsize_t asz;\n\tsize_t rsz;\n\n\tasz = strlen(aname) + 1; /* attribute name,allow for null term */\n\n\tif (rname == NULL) /* resource name only if type resource */\n\t\trsz = 0;\n\telse\n\t\trsz = strlen(rname) + 1;\n\n\tpal = attrlist_alloc(asz, rsz, vsize + 1);\n\tif (pal != NULL) {\n\t\tstrcpy(pal->al_name, aname); /* copy name right after struct */\n\t\tif (rsz)\n\t\t\tstrcpy(pal->al_resc, rname);\n\t\tpal->al_refct++;\n\t}\n\treturn (pal);\n}\n\n/**\n * @brief\n *\tfree_attrlist - free the space allocated to a list of svrattrl\n *\tstructures\n *\n * @param[in] pattrlisthead - Pointer to the head of the linked list to free\n *\n * @return\tVoid\n *\n */\nvoid\nfree_attrlist(pbs_list_head *pattrlisthead)\n{\n\tfree_svrattrl((svrattrl *) GET_NEXT(*pattrlisthead));\n}\n\n/**\n * @brief\n *\tfree an attribute list\n *\n * @param[in] pal - Pointer to the attribute list\n *\n * @return void\n *\n */\nvoid\nfree_svrattrl(svrattrl *pal)\n{\n\tsvrattrl *nxpal;\n\tsvrattrl *sister;\n\n\twhile (pal != NULL) {\n\t\tif 
(--pal->al_refct <= 0) {\n\t\t\t/* if we have any sisters, need to delete them now */\n\t\t\t/* just in case we end up deleting them later and  */\n\t\t\t/* still pointing to them\t\t\t   */\n\t\t\tsister = pal->al_sister;\n\t\t\twhile (sister) {\n\t\t\t\tnxpal = sister->al_sister;\n\t\t\t\tdelete_link(&sister->al_link);\n\t\t\t\t(void) free(sister);\n\t\t\t\tsister = nxpal;\n\t\t\t}\n\t\t}\n\t\tnxpal = (struct svrattrl *) GET_NEXT(pal->al_link);\n\t\tdelete_link(&pal->al_link);\n\t\tif (pal->al_refct <= 0)\n\t\t\t(void) free(pal);\n\t\tpal = nxpal;\n\t}\n}\n\n/**\n * @brief\n * \tparse_comma_string() - parse a string of the form:\n *\t\tvalue1 [, value2 ...]\n *\n *\tOn the first call, start is non-null; a pointer to the first value\n *\telement, up to a comma, new-line, or end of string, is returned.\n *\n *\tOn any following calls with start set to NULL,\n *\tthe next value element is returned...\n *\n *\tA null pointer is returned when there are no (more) value elements.\n */\n\nchar *\nparse_comma_string(char *start)\n{\n\tstatic char *pc; /* if start is null, restart from here */\n\n\tchar *back;\n\tchar *rv;\n\n\tif (start != NULL)\n\t\tpc = start;\n\n\tif (*pc == '\\0')\n\t\treturn NULL; /* already at end, no strings */\n\n\t/* skip over leading white space */\n\n\twhile ((*pc != '\\n') && isspace((int) *pc) && *pc)\n\t\tpc++;\n\n\trv = pc; /* the start point which will be returned */\n\n\t/* go find comma or end of line */\n\n\twhile (*pc) {\n\t\tif (((*pc == ',') && ((rv == pc) || (*(pc - 1) != ESC_CHAR))) || (*pc == '\\n'))\n\t\t\tbreak;\n\t\t++pc;\n\t}\n\tback = pc;\n\twhile (isspace((int) *--back)) /* strip trailing spaces */\n\t\t*back = '\\0';\n\n\tif (*pc)\n\t\t*pc++ = '\\0'; /* if not end, terminate this and adv past */\n\n\treturn (rv);\n}\n\n/**\n * @brief\n * \tcount_substrings - counts number of substrings in a comma separated string\n *\n * @see parse_comma_string\n *\n * @param[in] val - comma separated string of substrings\n * 
@param[in] pcnt - where to return the value\n *\n * @return\tint\n * @retval\t0\t\t\tsuccess\n * @retval\tPBSE error code\t\terror\n */\nint\ncount_substrings(char *val, int *pcnt)\n{\n\tint rc = 0;\n\tint ns;\n\tchar *pc;\n\n\tif (val == NULL)\n\t\treturn (PBSE_INTERNAL);\n\t/*\n\t * determine number of substrings, each sub string is terminated\n\t * by a non-escaped comma or a new-line, the whole string is terminated\n\t * by a null\n\t */\n\n\tns = 1;\n\tfor (pc = val; *pc; pc++) {\n\t\tif (*pc == ESC_CHAR) {\n\t\t\tif (*(pc + 1))\n\t\t\t\tpc++;\n\t\t} else {\n\t\t\tif (*pc == ',' || *pc == '\\n')\n\t\t\t\t++ns;\n\t\t}\n\t}\n\tif (pc > val)\n\t\tpc--;\n\tif ((*pc == '\\n') || (*pc == ',')) {\n\t\tif ((pc > val) && (*(pc - 1) != ESC_CHAR)) {\n\t\t\t/* strip trailing empty string */\n\t\t\tns--;\n\t\t\t*pc = '\\0';\n\t\t}\n\t}\n\n\t*pcnt = ns;\n\treturn rc;\n}\n\n/**\n * @brief\n * \tattrl_fixlink - fix up the next pointer within the attropl substructure\n *\twithin a svrattrl list.\n *\n * @param[in] phead - pointer to head of svrattrl list\n *\n * @return\tVoid\n *\n */\n\nvoid\nattrl_fixlink(pbs_list_head *phead)\n{\n\tsvrattrl *pal;\n\tsvrattrl *pnxt;\n\n\tpal = (svrattrl *) GET_NEXT(*phead);\n\twhile (pal) {\n\t\tpnxt = (svrattrl *) GET_NEXT(pal->al_link);\n\t\tif (pal->al_flags & ATR_VFLAG_DEFLT) {\n\t\t\tpal->al_atopl.op = DFLT;\n\t\t} else {\n\t\t\tpal->al_atopl.op = SET;\n\t\t}\n\t\tif (pnxt)\n\t\t\tpal->al_atopl.next = &pnxt->al_atopl;\n\t\telse\n\t\t\tpal->al_atopl.next = NULL;\n\t\tpal = pnxt;\n\t}\n}\n\n/**\n * @brief\n * \tfree_none - when scheduler modifies accrue_type, we don't\n *            want to delete previous value.\n *\n * @param[in] attr - pointer to attribute structure\n *\n * @return\tVoid\n *\n */\n\nvoid\nfree_none(attribute *attr)\n{\n\t/* do nothing */\n\t/* to be used for accrue_type attribute of job */\n\tif (attr->at_user_encoded != NULL || attr->at_priv_encoded != NULL) {\n\t\tfree_svrcache(attr);\n\t}\n}\n\n/**\n *  @brief 
duplicate svrattrl structure\n *\n *  @param[in] osvrat - svrattrl to dup\n *\n *  @return dup'd svrattrl or NULL (failure)\n */\n\nsvrattrl *\ndup_svrattrl(svrattrl *osvrat)\n{\n\tsvrattrl *psvrat;\n\tsize_t tsize;\n\n\tif (osvrat == NULL)\n\t\treturn NULL;\n\n\ttsize = sizeof(svrattrl) + osvrat->al_nameln + osvrat->al_valln + 2;\n\tif (osvrat->al_rescln > 0)\n\t\ttsize += osvrat->al_rescln + 1;\n\n\tif ((psvrat = (svrattrl *) malloc(tsize)) == 0)\n\t\treturn NULL;\n\n\tCLEAR_LINK(psvrat->al_link);\n\tpsvrat->al_sister = NULL;\n\tpsvrat->al_atopl.next = 0;\n\tpsvrat->al_tsize = tsize;\n\n\tpsvrat->al_name = (char *) psvrat + sizeof(svrattrl);\n\tstrcpy(psvrat->al_name, osvrat->al_name);\n\tpsvrat->al_nameln = osvrat->al_nameln;\n\n\tif (osvrat->al_rescln > 0) {\n\t\tpsvrat->al_resc = psvrat->al_name + psvrat->al_nameln + 1;\n\t\tstrcpy(psvrat->al_resc, osvrat->al_resc);\n\t\tpsvrat->al_rescln = osvrat->al_rescln;\n\t\tpsvrat->al_value = psvrat->al_resc + psvrat->al_rescln + 1;\n\t} else {\n\t\tpsvrat->al_resc = NULL;\n\t\tpsvrat->al_rescln = 0;\n\t\tpsvrat->al_value = psvrat->al_name + psvrat->al_nameln + 1;\n\t}\n\n\tstrcpy(psvrat->al_value, osvrat->al_value);\n\tpsvrat->al_valln = osvrat->al_valln;\n\n\tpsvrat->al_flags = osvrat->al_flags;\n\tpsvrat->al_refct = 1;\n\tpsvrat->al_op = osvrat->al_op;\n\n\treturn psvrat;\n}\n\n/**\n * @brief\n * \tAdds a new entry (name_str, resc_str, val_str, flag) to the 'phead'\n *\tsvrattrl list.\n *\tIf 'name_prefix' is not NULL, then instead of adding 'name_str',\n *\tadd 'name_prefix.name_str'.\n *\n * @param[in,out]\tphead - head of the svrattrl list to be populated.\n * @param[in]\t\tname_str - the name field\n * @param[in]\t\tresc_str - the resource name field\n * @param[in]\t\tval_str - the value field.\n * @param[in]\t\tflag - the flag entry\n * @param[in]\t\tname_prefix - string to prefix the 'name_str'\n *\n * @return int\n * @retval 0 for success\n * @retval -1 for error\n *\n 
*/\nint\nadd_to_svrattrl_list(pbs_list_head *phead, char *name_str, char *resc_str,\n\t\t     char *val_str, unsigned int flag, char *name_prefix)\n{\n\tsvrattrl *psvrat = NULL;\n\tint valln = 0;\n\tchar *tmp_str = NULL;\n\tsize_t sz;\n\tchar *the_str;\n\n\tif (name_str == NULL)\n\t\treturn -1;\n\n\tthe_str = name_str;\n\n\tif (name_prefix != NULL) {\n\n\t\t/* for <name_prefix>.<name_str>\\0 */\n\t\tsz = strlen(name_prefix) + strlen(name_str) + 2;\n\t\ttmp_str = (char *) malloc(sz);\n\t\tif (tmp_str == NULL) {\n\t\t\treturn -1;\n\t\t} else {\n\t\t\tsnprintf(tmp_str, sz, \"%s.%s\", name_prefix, name_str);\n\t\t\tthe_str = tmp_str;\n\t\t}\n\t}\n\n\tif (val_str) {\n\t\tvalln = (int) strlen(val_str) + 1;\n\t}\n\tpsvrat = attrlist_create(the_str, resc_str, valln);\n\n\tfree(tmp_str);\n\n\tif (!psvrat) {\n\t\treturn -1;\n\t}\n\tif (val_str) {\n\t\tstrcpy(psvrat->al_value, val_str);\n\t}\n\tpsvrat->al_flags = flag;\n\tappend_link(phead, &psvrat->al_link, psvrat);\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \tAdds a new entry (name_str, resc_str, val_str, flag) to the 'phead'\n *\tsvrattrl list in a sorted (by [name_prefix.]name_str) way.\n *\n * @param[in]\tphead\t- pointer to the targeted list.\n * @param[in]\tname_str - fills in the svrattrl al_name field\n * @param[in]\tresc_str - fills in the svrattrl al_resc field\n * @param[in]\tval_str - fills in the svrattrl al_value field\n * @param[in]\tflag - fills in the svrattrl al_flags field\n * @param[in]\tname_prefix - string to prefix the 'name_str'\n *\n * @return int\n * @retval 0\tsuccess\n * @retval -1\terror\n */\nint\nadd_to_svrattrl_list_sorted(pbs_list_head *phead, char *name_str, char *resc_str,\n\t\t\t    char *val_str, unsigned int flag, char *name_prefix)\n{\n\tsvrattrl *psvrat = NULL;\n\tint valln = 0;\n\tpbs_list_link *plink_cur;\n\tsvrattrl *psvr_cur;\n\tchar *tmp_str = NULL;\n\tsize_t sz;\n\tchar *the_str;\n\n\tthe_str = name_str;\n\n\tif (name_prefix != NULL) {\n\n\t\t/* for <name_prefix>.<name_str>\\0 
*/\n\t\tsz = strlen(name_prefix) + strlen(name_str) + 2;\n\t\ttmp_str = (char *) malloc(sz);\n\t\tif (tmp_str == NULL) {\n\t\t\treturn -1;\n\t\t} else {\n\t\t\tsnprintf(tmp_str, sz, \"%s.%s\", name_prefix, name_str);\n\t\t\tthe_str = tmp_str;\n\t\t}\n\t}\n\n\tif (val_str) {\n\t\tvalln = (int) strlen(val_str) + 1;\n\t}\n\tpsvrat = attrlist_create(the_str, resc_str, valln);\n\n\tif (tmp_str != NULL)\n\t\tfree(tmp_str);\n\n\tif (!psvrat) {\n\t\treturn -1;\n\t}\n\tif (val_str) {\n\t\tstrcpy(psvrat->al_value, val_str);\n\t}\n\tpsvrat->al_flags = flag;\n\n\tplink_cur = phead;\n\tpsvr_cur = (svrattrl *) GET_NEXT(*phead);\n\n\twhile (psvr_cur) {\n\t\tplink_cur = &psvr_cur->al_link;\n\n\t\tif (strcmp(psvr_cur->al_name, psvrat->al_name) > 0) {\n\t\t\tbreak;\n\t\t}\n\t\tpsvr_cur = (svrattrl *) GET_NEXT(*plink_cur);\n\t}\n\n\tif (psvr_cur) {\n\t\t/* link before 'current' svrattrl in list */\n\t\tinsert_link(plink_cur, &psvrat->al_link, psvrat, LINK_INSET_BEFORE);\n\t} else {\n\t\t/* attach either at the beginning or the last of the list */\n\t\tinsert_link(plink_cur, &psvrat->al_link, psvrat, LINK_INSET_AFTER);\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \tCopies contents of list headed by 'from_head' into 'to_head'\n *\n * @param[in]\t\tfrom_head\t- source list\n * @param[in,out]\tto_head\t\t- destination list\n *\n * @return int\n * @retval 0\t- success\n * @retval -1\t- failure\n *\n */\nint\ncopy_svrattrl_list(pbs_list_head *from_head, pbs_list_head *to_head)\n{\n\tsvrattrl *plist = NULL;\n\n\tif ((from_head == NULL) || (to_head == NULL))\n\t\treturn -1;\n\n\tCLEAR_HEAD((*to_head));\n\tplist = (svrattrl *) GET_NEXT((*from_head));\n\twhile (plist) {\n\n\t\tif (add_to_svrattrl_list(to_head, plist->al_name, plist->al_resc,\n\t\t\t\t\t plist->al_value, plist->al_op, NULL) == -1) {\n\t\t\tfree_attrlist(to_head);\n\t\t\tCLEAR_HEAD((*to_head));\n\t\t\treturn -1;\n\t\t}\n\n\t\tplist = (svrattrl *) GET_NEXT(plist->al_link);\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \tCopies 
contents of attr list headed by 'from_list' into 'to_head'\n * \tIt does not free the original list\n *\n * @param[in]\t\tfrom_list\t- source list\n * @param[in,out]\tto_head\t\t- destination list\n *\n * @return int\n * @retval 0\t- success\n * @retval -1\t- failure\n *\n */\nint\nconvert_attrl_to_svrattrl(struct attrl *from_list, pbs_list_head *to_head)\n{\n\tstruct attrl *plist = NULL;\n\n\tif ((from_list == NULL) || (to_head == NULL))\n\t\treturn -1;\n\n\tCLEAR_HEAD((*to_head));\n\n\tfor (plist = from_list; plist; plist = plist->next) {\n\n\t\tif (add_to_svrattrl_list(to_head, plist->name, plist->resource,\n\t\t\t\t\t plist->value, plist->op, NULL) == -1) {\n\t\t\tfree_attrlist(to_head);\n\t\t\tCLEAR_HEAD((*to_head));\n\t\t\treturn -1;\n\t\t}\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \treturn the svrattrl list entry matching 'name' and 'resc' (if resc is non-NULL)\n *  @param[in]\tphead\t- list being searched\n *  @param[in]\tname\t- search name\n *  @param[in]\tresc\t- search resource\n *\n *  @return\tsvrattrl *\n *\n *  @retval\t<pointer to the matching svrattrl entry>\n *  @retval\tNULL - none found\n */\nsvrattrl *\nfind_svrattrl_list_entry(pbs_list_head *phead, char *name, char *resc)\n{\n\tsvrattrl *plist = NULL;\n\n\tif (!name)\n\t\treturn NULL;\n\n\tplist = (svrattrl *) GET_NEXT(*phead);\n\twhile (plist) {\n\n\t\t/* al_resc may be NULL for entries without a resource */\n\t\tif ((strcmp(plist->al_name, name) == 0) &&\n\t\t    (!resc || (plist->al_resc && (strcmp(plist->al_resc, resc) == 0)))) {\n\t\t\treturn plist;\n\t\t}\n\n\t\tplist = (svrattrl *) GET_NEXT(plist->al_link);\n\t}\n\treturn NULL;\n}\n\n/**\n * @brief\n * \t Checks svrattrl_list to see if 'name' and 'resc' (if set) appear\n * as al_name and al_resc values. 
if so, return that entry's al_flags value.\n *\n * @param[in]\tname - al_name value to match\n * @param[in]\tresc - al_resc value to match\n * @param[in]\tval - value field (unused here)\n * @param[in]\tsvrattrl_list - the svrattrl list to search\n * @param[in]\thook_set_flag -  if set to 1, then add the ATR_VFLAG_HOOK flag\n * \t\t\tto the return value of al_flags\n * @return\tint\n * @retval\tATR_VFLAG_HOOK\tif there's no matching entry found for 'name' and\n *\t\t\t\t'resc', but 'hook_set_flag' is set to 1.\n * @retval\t0\t\tif there's no matching entry found for 'name' and\n * \t\t\t\t'resc', and 'hook_set_flag' is 0.\n * @retval\t<value>\t\tal_flags of the matching entry for 'name' and 'resc',\n * \t\t\t\tOR-ed with ATR_VFLAG_HOOK if 'hook_set_flag' is\n * \t\t\t\tset to 1.\n *\n */\nunsigned int\nget_svrattrl_flag(char *name, char *resc, char *val,\n\t\t  pbs_list_head *svrattrl_list, int hook_set_flag)\n{\n\tsvrattrl *svrattrl_e;\n\tunsigned int flag = 0;\n\n\t/* get the flag to set */\n\tif ((svrattrl_e = find_svrattrl_list_entry(svrattrl_list, name, resc)) != NULL)\n\t\tflag = svrattrl_e->al_flags;\n\n\tif (hook_set_flag == 1)\n\t\tflag |= ATR_VFLAG_HOOK;\n\n\treturn (flag);\n}\n\n/**\n * @brief\n *\tCompares 2 svrattrl linked lists.\n *\n * @param[in]\tl1 - svrattrl list #1\n * @param[in]\tl2 - svrattrl list #2\n *\n * @return int\n * @retval 1\tif the 2 lists are the same\n * @retval 0\totherwise\n */\nint\ncompare_svrattrl_list(pbs_list_head *l1, pbs_list_head *l2)\n{\n\tpbs_list_head list1;\n\tpbs_list_head list2;\n\tsvrattrl *pal1 = NULL;\n\tsvrattrl *pal2 = NULL;\n\tsvrattrl *nxpal1 = NULL;\n\tsvrattrl *nxpal2 = NULL;\n\tint rc;\n\tint found_match = 0;\n\n\t/* initialize both heads so the exit path can safely free them */\n\tCLEAR_HEAD(list1);\n\tCLEAR_HEAD(list2);\n\n\tif (copy_svrattrl_list(l1, &list1) == -1) {\n\t\trc = 0;\n\t\tgoto compare_svrattrl_list_exit;\n\t}\n\tif (copy_svrattrl_list(l2, &list2) == -1) {\n\t\trc = 0;\n\t\tgoto compare_svrattrl_list_exit;\n\t}\n\n\t/* now compare the 2 lists */\n\tpal1 = (svrattrl *) GET_NEXT(list1);\n\twhile (pal1 != NULL) {\n\n\t\tnxpal1 = (svrattrl *) GET_NEXT(pal1->al_link);\n\n\t\tpal2 = (svrattrl *) 
GET_NEXT(list2);\n\t\tfound_match = 0;\n\t\twhile (pal2 != NULL) {\n\t\t\tnxpal2 = (struct svrattrl *) GET_NEXT(pal2->al_link);\n\t\t\tif ((strcmp(pal1->al_name, pal2->al_name) == 0) &&\n\t\t\t    (strcmp(pal1->al_value, pal2->al_value) == 0)) {\n\t\t\t\tfound_match = 1;\n\t\t\t\tdelete_link(&pal2->al_link);\n\t\t\t\tfree(pal2);\n\n\t\t\t\tdelete_link(&pal1->al_link);\n\t\t\t\tfree(pal1);\n\t\t\t\tbreak;\n\t\t\t}\n\n\t\t\tpal2 = nxpal2;\n\t\t}\n\t\tif (!found_match) {\n\t\t\trc = 0;\n\t\t\tgoto compare_svrattrl_list_exit;\n\t\t}\n\t\tpal1 = nxpal1;\n\t}\n\tpal1 = (svrattrl *) GET_NEXT(list1);\n\tpal2 = (svrattrl *) GET_NEXT(list2);\n\n\tif ((pal1 == NULL) && (pal2 == NULL)) {\n\t\trc = 1;\n\t} else {\n\t\trc = 0;\n\t}\n\ncompare_svrattrl_list_exit:\n\tfree_attrlist(&list1);\n\tfree_attrlist(&list2);\n\n\treturn (rc);\n}\n\n/**\n * @brief\n *\tFree up malloc-ed entries of a 'str_array' and the array itself.\n *\n * @param[in]\tstr_array\t- array of strings terminated by a NULL entry\n *\n * @return void\n *\n */\nvoid\nfree_str_array(char **str_array)\n{\n\tint i;\n\n\tif (str_array == NULL)\n\t\treturn;\n\n\ti = 0;\n\twhile (str_array[i]) {\n\t\tfree(str_array[i]);\n\t\ti++;\n\t}\n\tfree(str_array);\n}\n\n/**\n * @brief\n * \tGiven a 'pbs_list', store the al_value field values into\n * \ta string array, and return that array.\n *\n * @param[in]\tpbs_list\t- the source list\n *\n * @return\tchar **\n * @retval\tpointer to the string array\n * @retval\tNULL\t- could not allocate memory or input invalid.\n */\nchar **\nsvrattrl_to_str_array(pbs_list_head *pbs_list)\n{\n\tint i;\n\tint len;\n\tchar **str_array = NULL;\n\tsvrattrl *plist = NULL;\n\n\tif (pbs_list == NULL)\n\t\treturn NULL;\n\n\t/* calculate the list size */\n\tlen = 0;\n\tplist = (svrattrl *) GET_NEXT(*pbs_list);\n\twhile (plist) {\n\t\tif (plist->al_value == NULL) {\n\t\t\treturn NULL;\n\t\t}\n\n\t\tlen++;\n\t\tplist = (svrattrl *) GET_NEXT(plist->al_link);\n\t}\n\n\t/* add one more entry to calloc for 
the terminating NULL entry */\n\tstr_array = (char **) calloc(len + 1, sizeof(char *));\n\tif (str_array == NULL) {\n\t\treturn NULL;\n\t}\n\n\tplist = (svrattrl *) GET_NEXT(*pbs_list);\n\ti = 0;\n\twhile (plist) {\n\t\tif (plist->al_value != NULL) {\n\t\t\tstr_array[i] = strdup(plist->al_value);\n\t\t\tif (str_array[i] == NULL) {\n\t\t\t\tfree_str_array(str_array);\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t}\n\t\tplist = (svrattrl *) GET_NEXT(plist->al_link);\n\t\ti++;\n\t}\n\treturn (str_array);\n}\n/**\n * @brief\n * \tGiven a string array 'str_array', dumps its contents\n * \tinto the 'to_head' list, in the same order as indexed\n * \tin the array.\n *\n * @param[in]\t\tstr_array - the array of strings to dump\n * @param[in,out]\tto_head - the destination list\n * @param[in]\t\tname_str - name to associate the values with\n *\n * @return\tint\n * @retval\t0\t- success\n * @retval\t-1\t- error\n *\n */\nint\nstr_array_to_svrattrl(char **str_array, pbs_list_head *to_head, char *name_str)\n{\n\tint i;\n\n\tif ((str_array == NULL) || (to_head == NULL))\n\t\treturn -1;\n\n\tCLEAR_HEAD((*to_head));\n\ti = 0;\n\twhile (str_array[i]) {\n\t\tif (add_to_svrattrl_list(to_head, name_str, NULL, str_array[i], 0, NULL) == -1) {\n\t\t\t/* clear what we've accumulated so far*/\n\t\t\tfree_attrlist(to_head);\n\t\t\tCLEAR_HEAD((*to_head));\n\t\t\treturn -1;\n\t\t}\n\t\ti++;\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n * \tGiven a string array 'str_array', return a malloc-ed\n * \tstring, containing the entries of 'str_array' separated\n *\tby 'delimiter'.\n * @note\n *\tNeed to free() returned value.\n *\n * @param[in]\tstr_array - the array of strings to dump\n * @param[in]\tdelimiter  - the separator character used in the resultant\n *\t\t\t\tstring.\n *\n * @return\tchar *\n * @retval\t<string>\t- pointer to a malloced area holding\n *\t\t\t\t  the contents of 'str_array'.\n * @retval\tNULL\t\t- error or the input <string> passed\n * \t\t\t\t  is empty.\n *\n */\nchar 
*\nstr_array_to_str(char **str_array, char delimiter)\n{\n\tint i, j, len;\n\tchar *ret_string = NULL;\n\n\tif (str_array == NULL)\n\t\treturn NULL;\n\n\tlen = 0;\n\ti = 0;\n\n\twhile (str_array[i]) {\n\n\t\tlen += strlen(str_array[i]);\n\t\tlen++; /* for 'delimiter' */\n\t\ti++;\n\t}\n\tlen++; /* for trailing '\\0' */\n\n\tif (len > 1) { /* not just an empty string */\n\n\t\tret_string = (char *) malloc(len);\n\n\t\tif (ret_string == NULL)\n\t\t\treturn NULL;\n\t\ti = 0;\n\t\twhile (str_array[i]) {\n\n\t\t\tif (i == 0) {\n\t\t\t\tstrcpy(ret_string, str_array[i]);\n\t\t\t} else {\n\t\t\t\tj = strlen(ret_string);\n\t\t\t\tret_string[j] = delimiter;\n\t\t\t\tret_string[j + 1] = '\\0';\n\t\t\t\tstrcat(ret_string, str_array[i]);\n\t\t\t}\n\t\t\ti++;\n\t\t}\n\t}\n\treturn (ret_string);\n}\n\n/**\n * @brief\n * \tGiven a 'delimiter'-separated string 'str', store the\n * \tstring entities into a string array,\n * \tand return that array.\n * @note\n *\tNeed to free() the returned value with free_str_array().\n *\n * @param[in]\tstr - the 'delimiter'-separated input string\n * @param[in]\tdelimiter  - the delimiter to match.\n *\n * @return\tchar **\n * @retval\t<string array>\t- pointer to a malloc-ed, NULL-terminated\n *\t\t\t\t  string array holding the entities of 'str'.\n * @retval\tNULL\t\t- error\n *\n */\nchar **\nstr_to_str_array(char *str, char delimiter)\n{\n\tint i;\n\tint len;\n\tchar **str_array = NULL;\n\tchar *str1;\n\tchar *p;\n\n\tif (str == NULL)\n\t\treturn NULL;\n\n\t/* calculate the list size */\n\tlen = 0;\n\n\tstr1 = strdup(str);\n\tif (str1 == NULL)\n\t\treturn NULL;\n\n\tp = strtok_quoted(str1, delimiter);\n\twhile (p) {\n\t\tlen++;\n\t\tp = strtok_quoted(NULL, delimiter);\n\t}\n\t(void) free(str1);\n\n\t/* add one more entry to calloc for the terminating NULL entry */\n\tstr_array = (char **) calloc(len + 1, sizeof(char *));\n\tif (str_array == NULL) {\n\t\treturn NULL;\n\t}\n\tstr1 = strdup(str);\n\tif (str1 == NULL) {\n\t\tfree_str_array(str_array);\n\t\treturn NULL;\n\t}\n\tp = 
strtok_quoted(str1, delimiter);\n\ti = 0;\n\twhile (p) {\n\t\tstr_array[i] = strdup(p);\n\t\tif (str_array[i] == NULL) {\n\t\t\tfree_str_array(str_array);\n\t\t\tfree(str1);\n\t\t\treturn NULL;\n\t\t}\n\t\ti++;\n\t\tp = strtok_quoted(NULL, delimiter);\n\t}\n\tfree(str1);\n\n\treturn (str_array);\n}\n\n/**\n * @brief\n * \tGiven a environment string array 'env_array' where there are\n *\t<var>=<value> entries, return a malloc-ed\n * \tstring, containing the entries of 'env_array' separated\n *\tby 'delimiter'.\n *\n * @note\n *\tNeed to free() returned value.\n *\tIf 'env_array' has a <value> entry containing the 'delimiter' character,\n *\tthen it is escaped (using ESC_CHAR). Similarly, if <value> contains the escape\n *\tcharacter, then that is also escaped.\n *\tEx:  env_array_to_str(envstr, ',')\n *\t\twhere   envstr[0]='HOME=/home/somebody'\n *\t\t\tenvstr[1]='G_FILENAME_ENCODING=@locale,UTF-8,ISO-8859-15,CP1252'\n *\tthen string returned will be:\n *\t\t'HOME=/home/somebody,G_FILENAME_ENCODING=\"@locale\\,UTF-8\\,ISO-8859-15\\,CP1252\"'\n *\n * @param[in]\tenv_array - the environment array of strings to dump\n * @param[in]\tdelimiter  - the separator character\n *\n * @return\tchar *\n * @retval\t<string>\t- pointer to a malloced area holding\n *\t\t\t\t  the contents of 'env_array'.\n * @retval\tNULL\t\t- error or the input <string> passed\n * \t\t\t\t  is empty.\n *\n */\nchar *\nenv_array_to_str(char **env_array, char delimiter)\n{\n\tint i, j, len;\n\tchar *ret_string = NULL;\n\tint escape = 0;\n\tchar *var = NULL;\n\tchar *val = NULL;\n\tchar *pc = NULL;\n\tchar *pc2 = NULL;\n\n\tif (env_array == NULL)\n\t\treturn NULL;\n\n\tlen = 0;\n\ti = 0;\n\n\twhile (env_array[i]) {\n\t\tval = strchr(env_array[i], '=');\n\t\tif (val != NULL) {\n\t\t\tval++;\n\t\t\tescape = 0;\n\t\t\tfor (pc2 = val; *pc2 != 0; pc2++) {\n\t\t\t\tif ((*pc2 == delimiter) || (*pc2 == ESC_CHAR)) {\n\t\t\t\t\tescape++;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tlen += strlen(env_array[i]);\n\t\tif 
(escape > 0) {\n\t\t\tlen += escape; /* the ESC_CHAR */\n\t\t}\n\t\tlen++; /* for delimiter */\n\t\ti++;\n\t}\n\tlen++; /* for trailing '\\0' */\n\n\tif (len > 1) { /* not just an empty string */\n\n\t\tret_string = (char *) malloc(len);\n\n\t\tif (ret_string == NULL)\n\t\t\treturn NULL;\n\t\ti = 0;\n\t\twhile (env_array[i]) {\n\t\t\tvar = env_array[i];\n\t\t\tpc = strchr(env_array[i], '=');\n\t\t\tval = NULL;\n\t\t\tif (pc != NULL) {\n\t\t\t\t*pc = '\\0';\n\t\t\t\tval = pc + 1;\n\t\t\t}\n\n\t\t\tif (i == 0) {\n\t\t\t\tsprintf(ret_string, \"%s=\", var);\n\t\t\t} else {\n\t\t\t\tj = strlen(ret_string);\n\t\t\t\tret_string[j] = delimiter;\n\t\t\t\tret_string[j + 1] = '\\0';\n\t\t\t\tstrcat(ret_string, var);\n\t\t\t\tstrcat(ret_string, \"=\");\n\t\t\t}\n\t\t\tif (val != NULL) {\n\t\t\t\tpc2 = ret_string + strlen(ret_string);\n\t\t\t\twhile (*val != '\\0') {\n\t\t\t\t\tif ((*val == delimiter) ||\n\t\t\t\t\t    (*val == ESC_CHAR)) {\n\t\t\t\t\t\t*pc2 = ESC_CHAR;\n\t\t\t\t\t\tpc2++;\n\t\t\t\t\t}\n\t\t\t\t\t*pc2++ = *val++;\n\t\t\t\t}\n\t\t\t\t*pc2 = '\\0';\n\t\t\t}\n\n\t\t\tif (pc != NULL)\n\t\t\t\t*pc = '='; /* restore */\n\t\t\ti++;\n\t\t}\n\t}\n\treturn (ret_string);\n}\n\n/* @brief\n * \tFunction that takes a string 'str', and modifies it \"in-place\",\n *\tremoving each escape backslash preceding the character being\n *\tescaped.\n *\n * @param[in,out]\tstr\t- input string.\n *\n * @return void\n *\n */\nstatic void\nprune_esc_backslash(char *str)\n{\n\n\tint s, d, skip_idx;\n\n\tif (str == NULL)\n\t\treturn;\n\n\ts = 0; /* source */\n\td = 0; /* dest */\n\n\t/* initialize to an index that cannot be matched at the start */\n\tskip_idx = -2;\n\n\tdo {\n\t\twhile ((str[s] == ESC_CHAR) && (skip_idx != (s - 1))) {\n\t\t\tskip_idx = s;\n\t\t\ts++;\n\t\t}\n\t\tstr[d++] = str[s++];\n\t} while (str[s - 1] != '\\0');\n}\n\n/**\n * @brief\n * \tLike strtok, except this understands quoted (unescaped) substrings\n * \t(single quotes, or double quotes) and include the value as 
is.\n *\t For instance, given_str: 'foo_float=1.5,foo_stra=\"glad,elated\"some,squote=',foo_size=10mb,dquote=\"'\n *\tstring, this would return tokens:\n *\t\tstrtok_quoted(given_str, ',')=foo_float=1.5\n *\t\tstrtok_quoted(NULL,',')=foo_stra=\"glad,elated\"some\n *\t\tstrtok_quoted(NULL,',')=squote='\n * \t\tstrtok_quoted(NULL,',')=foo_size=10mb\n *\t\tstrtok_quoted(NULL,',')=dquote=\"\n *\n * @param[in]\tsource - input string\n * @param[in]\tdelimiter  - each element in this string represents the delimeter to match.\n *\n * @return\tchar *\n * @retval\t<string token>\n * @retval\tNULL if end of processing, or problem found with quoted string.\n */\nchar *\nstrtok_quoted(char *source, char delimiter)\n{\n\tstatic char *pc = NULL; /* save pointer position */\n\tchar *stok = NULL;\t/* token to return */\n\tchar *quoted = NULL;\n\n\tif (source != NULL) {\n\t\tpc = source;\n\t}\n\n\tif ((pc == NULL) || (*pc == '\\0'))\n\t\treturn NULL;\n\n\tfor (stok = pc; *pc != 0; pc++) {\n\n\t\t/* must not match <ESC_CHAR><delim> or <ESC_CHAR><ESC_CHAR><delim>\n\t\t * the latter means <ESC_CHAR> is the one escaped not <delim>\n\t\t */\n\t\tif ((*pc == delimiter) &&\n\t\t    (((pc - 1) < stok) || (*(pc - 1) != ESC_CHAR) ||\n\t\t     ((pc - 2) < stok) || (*(pc - 2) == ESC_CHAR))) {\n\t\t\t*pc = '\\0';\n\t\t\tpc++;\n\t\t\tprune_esc_backslash(stok);\n\t\t\treturn (stok);\n\t\t}\n\n\t\t/* check for a quoted value and advance\n\t\t * pointer up to the closing quote. If a non-escaped\n\t\t * delimiter appears first, ex. \"apple,bee\", this will\n\t\t * return the token string just before the delimiter\n\t\t * (ex. 
\"apple).\n\t\t */\n\t\tif ((*pc == '\\'') || (*pc == '\"')) {\n\n\t\t\t/* if immediately following the quote, try to match\n\t\t\t * one of:\n\t\t\t * \t'<null>\n\t\t\t *\t'<delimiter>\n\t\t\t * \t\"<null>\n\t\t\t *\t\"<delimiter>\n\t\t\t */\n\t\t\tif ((*(pc + 1) == '\\0') || (*(pc + 1) == delimiter)) {\n\t\t\t\tpc++;\n\t\t\t\tif (*pc != '\\0') {\n\t\t\t\t\t*pc = '\\0';\n\t\t\t\t\tpc++;\n\t\t\t\t}\n\t\t\t\tprune_esc_backslash(stok);\n\t\t\t\treturn (stok);\n\t\t\t}\n\t\t\t/* Otherwise, look for the matching endquote<delimiter>\n\t\t\t * or <endquote<null>\n\t\t\t * if not, just use the value as is, up to but not\n\t\t\t * including the non-escaped <delimiter>.\n\t\t\t */\n\t\t\tquoted = pc;\n\t\t\twhile (*++pc) {\n\t\t\t\tif (*pc == *quoted) {\n\t\t\t\t\tif ((*(pc + 1) == '\\0') ||\n\t\t\t\t\t    (*(pc + 1) == delimiter)) {\n\t\t\t\t\t\tquoted = NULL;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t} else if ((*pc == delimiter) &&\n\t\t\t\t\t   (((pc - 1) < stok) || (*(pc - 1) != ESC_CHAR) ||\n\t\t\t\t\t    ((pc - 2) < stok) || (*(pc - 2) == ESC_CHAR))) {\n\t\t\t\t\t*pc = '\\0';\n\t\t\t\t\tpc++;\n\t\t\t\t\tprune_esc_backslash(stok);\n\t\t\t\t\treturn (stok);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (quoted != NULL) { /* didn't find a close quote */\n\t\t\t\tpc = NULL;    /* use quote value as is */\n\t\t\t\tprune_esc_backslash(stok);\n\t\t\t\treturn (stok);\n\t\t\t}\n\t\t}\n\t}\n\n\tprune_esc_backslash(stok);\n\treturn (stok);\n}\n\n/**\n * @brief\tConvert an attropl struct to an attrl struct\n *\n * @param[in]\tfrom - the attropl struct to convert\n *\n * @return struct attrl*\n * @retval a newly converted attrl struct\n * @retval NULL on error\n */\nstruct attrl *\nattropl2attrl(struct attropl *from)\n{\n\tstruct attrl *ap = NULL, *rattrl = NULL;\n\n\twhile (from != NULL) {\n\t\tif (ap == NULL) {\n\t\t\tif ((ap = new_attrl()) == NULL) {\n\t\t\t\tperror(\"Out of memory\");\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\trattrl = ap;\n\t\t} else {\n\t\t\tif ((ap->next = new_attrl()) == 
NULL) {\n\t\t\t\tperror(\"Out of memory\");\n\t\t\t\tfree_attrl_list(rattrl); /* free the partially built list */\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tap = ap->next;\n\t\t}\n\n\t\tif (from->name != NULL) {\n\t\t\tif ((ap->name = (char *) malloc(strlen(from->name) + 1)) == NULL) {\n\t\t\t\tperror(\"Out of memory\");\n\t\t\t\tfree_attrl_list(rattrl);\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tstrcpy(ap->name, from->name);\n\t\t}\n\t\tif (from->resource != NULL) {\n\t\t\tif ((ap->resource = (char *) malloc(strlen(from->resource) + 1)) == NULL) {\n\t\t\t\tperror(\"Out of memory\");\n\t\t\t\tfree_attrl_list(rattrl);\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tstrcpy(ap->resource, from->resource);\n\t\t}\n\t\tif (from->value != NULL) {\n\t\t\tif ((ap->value = (char *) malloc(strlen(from->value) + 1)) == NULL) {\n\t\t\t\tperror(\"Out of memory\");\n\t\t\t\tfree_attrl_list(rattrl);\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tstrcpy(ap->value, from->value);\n\t\t}\n\t\tfrom = from->next;\n\t}\n\n\treturn rattrl;\n}\n\n/**\n *  @brief attrl copy constructor\n *\n *  @param[in] oattr - attrl to dup\n *\n *  @return dup'd attrl\n */\n\nstruct attrl *\ndup_attrl(struct attrl *oattr)\n{\n\tstruct attrl *nattr;\n\n\tif (oattr == NULL)\n\t\treturn NULL;\n\n\tnattr = new_attrl();\n\tif (nattr == NULL)\n\t\treturn NULL;\n\tif (oattr->name != NULL)\n\t\tnattr->name = strdup(oattr->name);\n\tif (oattr->resource != NULL)\n\t\tnattr->resource = strdup(oattr->resource);\n\tif (oattr->value != NULL)\n\t\tnattr->value = strdup(oattr->value);\n\n\tnattr->op = oattr->op;\n\treturn nattr;\n}\n\n/**\n * @brief copy constructor for attrl list\n * @param oattr_list - list to dup\n * @return dup'd attrl list\n */\n\nstruct attrl *\ndup_attrl_list(struct attrl *oattr_list)\n{\n\tstruct attrl *nattr_head = NULL;\n\tstruct attrl *nattr;\n\tstruct attrl *nattr_prev = NULL;\n\tstruct attrl *oattr;\n\n\tif (oattr_list == NULL)\n\t\treturn NULL;\n\n\tfor (oattr = oattr_list; oattr != NULL; oattr = oattr->next) {\n\t\tnattr = dup_attrl(oattr);\n\t\tif (nattr_prev == NULL) 
{\n\t\t\tnattr_head = nattr;\n\t\t\tnattr_prev = nattr_head;\n\t\t} else {\n\t\t\tnattr_prev->next = nattr;\n\t\t\tnattr_prev = nattr;\n\t\t}\n\t}\n\treturn nattr_head;\n}\n\n/**\n *\t@brief create a new attrl structure and initialize it\n */\nstruct attrl *\nnew_attrl()\n{\n\tstruct attrl *at;\n\n\tif ((at = malloc(sizeof(struct attrl))) == NULL)\n\t\treturn NULL;\n\n\tat->next = NULL;\n\tat->name = NULL;\n\tat->resource = NULL;\n\tat->value = NULL;\n\tat->op = SET;\n\n\treturn at;\n}\n\n/**\n * @brief frees attrl structure\n *\n * @param [in] at - attrl to free\n * @return nothing\n */\nvoid\nfree_attrl(struct attrl *at)\n{\n\tif (at == NULL)\n\t\treturn;\n\n\tfree(at->name);\n\tfree(at->resource);\n\tfree(at->value);\n\n\tfree(at);\n}\n\n/**\n * @brief frees attrl list\n *\n * @param[in] at_list - attrl list to free\n * @return nothing\n */\nvoid\nfree_attrl_list(struct attrl *at_list)\n{\n\tstruct attrl *cur, *tmp;\n\tif (at_list == NULL)\n\t\treturn;\n\n\tfor (cur = at_list; cur != NULL; cur = tmp) {\n\t\ttmp = cur->next;\n\t\tfree_attrl(cur);\n\t}\n}\n\n/**\n * @brief\tGeneric attribute setter function, accepts all values as string regardless of the type\n * \t\t\tTip: use this when you want at_set() and at_decode() to be invoked, otherwise use the\n * \t\t\ttype based setters below\n *\n * @param[in]\tpattr\t-\tpointer to attribute being set\n * @param[in]\tpdef \t-\tattribute definition\n * @param[in]\tvalue\t-\tvalue to be set\n * @param[in]\tresc\t-\tvalue of resource, if applicable\n * @param[in]\top\t-\tthe batch_op op to perform (SET, INCR, etc.)\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t!0 for failure\n *\n * @par MT-Safe: No\n * @par Side Effects: None\n *\n */\nint\nset_attr_generic(attribute *pattr, attribute_def *pdef, char *value, char *rescn, enum batch_op op)\n{\n\tint rc;\n\tattribute tempat;\n\n\tif (pattr == NULL || pdef == NULL) {\n\t\tlog_err(-1, __func__, \"Invalid pointer to attribute or its definition\");\n\t\treturn 
1;\n\t}\n\n\tif (!pdef->at_decode)\n\t\treturn 1;\n\n\t/* Just call decode and set the value of attribute directly */\n\tif (op == INTERNAL) {\n\t\tif ((rc = pdef->at_decode(pattr, pdef->at_name, rescn, value)) != 0) {\n\t\t\tlog_errf(rc, __func__, \"decode of %s failed\", pdef->at_name);\n\t\t\treturn rc;\n\t\t}\n\t\treturn 0;\n\t}\n\n\tclear_attr(&tempat, pdef);\n\tif ((rc = pdef->at_decode(&tempat, pdef->at_name, rescn, value)) != 0) {\n\t\tlog_errf(rc, __func__, \"decode of %s failed\", pdef->at_name);\n\t\treturn rc;\n\t}\n\n\trc = set_attr_with_attr(pdef, pattr, &tempat, op);\n\n\tpdef->at_free(&tempat);\n\n\treturn rc;\n}\n\n/**\n * @brief\tSet attribute using another attribute\n *\n * @param[in]\tpdef \t-\tattribute definition\n * @param[in]\toattr\t-\tpointer to attribute being set\n * @param[in]\tnattr\t-\tpointer to attribute to set with\n * @param[in]\top\t\t-\toperation to do\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t1 for failure\n *\n * @par MT-Safe: No\n * @par Side Effects: None\n *\n */\nint\nset_attr_with_attr(attribute_def *pdef, attribute *oattr, attribute *nattr, enum batch_op op)\n{\n\tint rc;\n\n\tif ((rc = pdef->at_set(oattr, nattr, op)) != 0)\n\t\tlog_errf(rc, __func__, \"set of %s failed\", pdef->at_name);\n\n\treturn rc;\n}\n\n/**\n * @brief\tMark an attribute as \"not set\"\n *\n * @param[in]\tattr\t-\tpointer to attribute being modified\n *\n * @return\tvoid\n *\n * @par MT-Safe: No\n * @par Side Effects: None\n *\n */\nvoid\nmark_attr_not_set(attribute *attr)\n{\n\tif (attr != NULL)\n\t\tattr->at_flags &= ~ATR_VFLAG_SET;\n}\n\n/**\n * @brief\tMark an attribute as \"set\"\n *\n * @param[in]\tattr\t-\tpointer to attribute being modified\n *\n * @return\tvoid\n *\n * @par MT-Safe: No\n * @par Side Effects: None\n *\n */\nvoid\nmark_attr_set(attribute *attr)\n{\n\tif (attr != NULL)\n\t\tattr->at_flags |= ATR_VFLAG_SET;\n}\n\n/**\n * @brief\tCheck if an attribute is set\n *\n * @param[in]\tpattr\t-\tpointer to the 
attribute\n *\n * @return\tint\n * @retval\t1 if the attribute is set\n * @retval\t0 otherwise\n *\n * @par MT-Safe: No\n * @par Side Effects: None\n */\nint\nis_attr_set(const attribute *pattr)\n{\n\tif (pattr != NULL)\n\t\treturn pattr->at_flags & ATR_VFLAG_SET;\n\treturn 0;\n}\n\n/**\n * @brief\tCommon function to update attribute after set action performed.\n *\n * @param[in]\tattr\t-\tpointer to the attribute\n *\n * @return\tvoid\n *\n * @par MT-Safe: No\n * @par Side Effects: None\n */\nvoid\npost_attr_set(attribute *attr)\n{\n\tattr->at_flags |= ATR_SET_MOD_MCACHE;\n}\n\n/**\n * @brief\n *\t\tdecode_sandbox - decode sandbox into string attribute\n *\n * @param[in,out]\tpatr - the string attribute that holds the decoded value\n * @param[in]\t\tname - sandbox attribute name\n * @param[in]\t\trescn - resource name (unused here)\n * @param[in]\t\tval - sandbox attribute value\n *\n * @return int\n * @retval\t0\t- success\n * @retval\t>0\t- error number if error.\n *\n * @note\n *\t\targument rescn is unused here.\n */\n\nint\ndecode_sandbox(attribute *patr, char *name, char *rescn, char *val)\n{\n\tchar *pc;\n\n\tpc = val;\n\twhile (isspace((int) *pc))\n\t\t++pc;\n\tif (*pc == '\\0' || !isalpha((int) *pc))\n\t\treturn PBSE_BADATVAL;\n\n\t/* compare to valid values of sandbox */\n\tif ((strcasecmp(pc, \"HOME\") != 0) &&\n\t    (strcasecmp(pc, \"O_WORKDIR\") != 0) &&\n\t    (strcasecmp(pc, \"PRIVATE\") != 0)) {\n\t\treturn PBSE_BADATVAL;\n\t}\n\n\treturn (decode_str(patr, name, rescn, val));\n}\n\n/**\n * @brief\n *\tDecode project into string attribute.\n *\n * @param[in,out]\tpatr - the string attribute that holds the decoded value\n * @param[in]\t\tname - project attribute name\n * @param[in]\t\trescn - resource name (unused here)\n * @param[in]\t\tval - project attribute value\n *\n * @return\tint\n * @retval      0 if success\n * @retval      > 0 error number if error\n * @retval      *patr members set\n */\n\nint\ndecode_project(attribute *patr, char *name, 
char *rescn, char *val)\n{\n\tchar *pc;\n\n\tpc = val;\n\twhile (isspace((int) *pc))\n\t\t++pc;\n\n\tif (strpbrk(pc, ETLIM_INVALIDCHAR) != NULL)\n\t\treturn PBSE_BADATVAL;\n\n\treturn (decode_str(patr, name, rescn,\n\t\t\t   (*val == '\\0') ? PBS_DEFAULT_PROJECT : val));\n}\n\n/**\n * @brief\n *\tGeneric function to return the pointer to the attribute.\n *\tUse this function only within object getter functions.\n *\n * @param[in]\tlist\t\t- Pointer to object's attribute list\n * @param[in]\tattr_idx\t- Index of the attribute to return.\n *\n */\nattribute *\n_get_attr_by_idx(attribute *list, int attr_idx)\n{\n\treturn &(list[attr_idx]);\n}\n\n/**\n * @brief\n *\tGeneric function to free the attribute with corresponding free routine.\n *\n * @param[in]\tattr_def\t- Pointer to object's attribute definition\n * @param[in]\tpattr \t\t- Pointer to attribute list\n * @param[in]\tattr_idx\t- Index of the attribute to be freed.\n *\n */\nvoid\nfree_attr(attribute_def *attr_def, attribute *pattr, int attr_idx)\n{\n\tif (attr_def != NULL && pattr != NULL && attr_def[attr_idx].at_free != NULL)\n\t\tattr_def[attr_idx].at_free(pattr);\n}\n\n/**\n * @brief\tGeneric getter for attribute's list value\n *\n * @param[in]\tpattr - pointer to the object\n *\n * @return\tpbs_list_head\n * @retval\tvalue of attribute\n * @retval\tdummy pbs_list_head if pattr is NULL. This is to avoid GET_NEXT failing.\n */\npbs_list_head\nget_attr_list(const attribute *pattr)\n{\n\tconst pbs_list_head dummy = {(pbs_list_link *) &dummy, (pbs_list_link *) &dummy, NULL};\n\tif (pattr)\n\t\treturn pattr->at_val.at_list;\n\telse\n\t\treturn dummy;\n}\n"
  },
  {
    "path": "src/lib/Libattr/attr_node_func.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/types.h>\n#include <ctype.h>\n#include <memory.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <assert.h>\n#include \"pbs_ifl.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"server_limits.h\"\n#include \"net_connect.h\"\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"pbs_nodes.h\"\n#include \"pbs_error.h\"\n#include \"pbs_internal.h\"\n\nstatic struct node_state {\n\tunsigned long bit;\n\tchar *name;\n} ns[] = {{INUSE_UNKNOWN, ND_state_unknown},\n\t  {INUSE_DOWN, ND_down},\n\t  {INUSE_STALE, ND_Stale},\n\t  {INUSE_OFFLINE, ND_offline},\n\t  {INUSE_JOB, ND_jobbusy},\n\t  {INUSE_JOBEXCL, ND_job_exclusive},\n\t  {INUSE_BUSY, ND_busy},\n\t  {INUSE_INIT, ND_Initializing},\n\t  {INUSE_PROV, ND_prov},\n\t  {INUSE_WAIT_PROV, ND_wait_prov},\n\t  {INUSE_RESVEXCL, ND_resv_exclusive},\n\t  {INUSE_UNRESOLVABLE, ND_unresolvable},\n\t  {INUSE_OFFLINE_BY_MOM, ND_offline_by_mom},\n\t  {INUSE_MAINTENANCE, ND_maintenance},\n\t  {INUSE_SLEEP, ND_sleep},\n\t  {0, NULL}};\n\nstatic struct node_type {\n\tshort bit;\n\tchar *name;\n} nt[] = {{NTYPE_PBS, ND_pbs},\n\t  {0, NULL}};\n\n/**\n * @file\tattr_node_func.c\n * @brief\n * \tThis file contains functions for deriving attribute values from a pbsnode\n * \tand for 
updating the \"state\" (inuse), \"node type\" (ntype) or \"properties\"\n * \tlist using the \"value\" carried in an attribute.\n *\n * @par Included are:\n *\n * global:\n * decode_state()\t\t\"functions for at_decode func pointer\"\n * decode_ntype()\n * decode_props()\n * decode_sharing()\n *\n * encode_state()\t\t\"functions for at_encode func pointer\"\n * encode_ntype()\n * encode_props()\n * encode_jobs()\n * encode_sharing()\n *\n * set_node_state()\t\t\"functions for at_set func pointer\"\n * set_node_ntype()\n *\n * node_state()\t\t\t\"functions for at_action func pointer\"\n * node_ntype()\n *\n * get_vnode_state_str()\t\"helper functions\"\n * vnode_state_to_str()\n * vnode_ntype_to_str()\n * str_to_vnode_state()\n * str_to_vnode_ntype()\n *\n * local:\n * load_prop()\n * set_nodeflag()\n *\n * The prototypes are declared in \"attr_func.h\"\n */\n\n/*\n * Set of forward declarations for functions used before defined\n * keeps the compiler happy\n */\nstatic int set_nodeflag(char *, unsigned long *);\n\n/**\n * @brief\n *\tGiven a 'state_bit' value of a vnode, returns the human-readable\n *\tform.\n *\n * @par Example:\n *\tIf <state_bit> == 3, this returns string \"offline,down\"\n *\tsince bit 1 is INUSE_OFFLINE and bit 3 is INUSE_DOWN.\n *\n * @par Note:\n * \tDo not free the return value - it's a statically allocated string.\n *\n * @param[in]\tstate_bit - the numeric state bit value\n *\n * @return\tchar * (i.e. string)\n * @retval\t\"<state1>,<state2>,...\" - a comma-separated list of states.\n * @retval\t\"\"\t\t\t- corresponding state not found.\n *\n */\nchar *\nvnode_state_to_str(int state_bit)\n{\n\tstatic char *state_str = NULL;\n\tint state_bit_tmp;\n\tint i;\n\n\t/* Ensure that the state_bit_str value contains only valid */\n\t/* vnode state values. 
*/\n\n\tstate_bit_tmp = state_bit;\n\tfor (i = 0; ns[i].name && (state_bit_tmp != 0); i++) {\n\t\t/* clear all the valid states in the value */\n\t\tstate_bit_tmp &= ~ns[i].bit;\n\t}\n\n\t/* Now clear any internal states */\n\tif (state_bit_tmp != 0)\n\t\tstate_bit_tmp &= ~(INUSE_DELETED | INUSE_NEEDS_HELLOSVR | INUSE_INIT);\n\n\tif (state_bit_tmp != 0)\n\t\treturn (\"\"); /* found an unknown state bit set! */\n\n\tif (state_str == NULL) {\n\n\t\tint alloc_sz = 0;\n\n\t\talloc_sz = strlen(ND_free) + 1;\n\n\t\tfor (i = 0; ns[i].name; i++) {\n\t\t\talloc_sz += strlen(ns[i].name) + 1; /* +1 for comma */\n\t\t}\n\t\talloc_sz += 1; /* for null character */\n\n\t\tstate_str = malloc(alloc_sz);\n\n\t\tif (state_str == NULL)\n\t\t\treturn (\"\"); /* malloc failure, just return empty */\n\t}\n\n\tif (state_bit == 0) {\n\t\tstrcpy(state_str, ND_free);\n\t} else {\n\t\tstate_str[0] = '\\0';\n\t\tfor (i = 0; ns[i].name; i++) {\n\t\t\tif (state_bit & ns[i].bit) {\n\t\t\t\tif (state_str[0] != '\\0')\n\t\t\t\t\t(void) strcat(state_str, \",\");\n\t\t\t\t(void) strcat(state_str, ns[i].name);\n\t\t\t}\n\t\t}\n\t}\n\n\treturn (state_str);\n}\n\n/**\n *\n * @brief\n *\tSame as vnode_state_to_str() except the argument is a string\n *\tinstead of an int.\n *\n * @param[in]\tstate_bit_str - the numeric state bit value in string format.\n *\n * @return\tchar * (i.e. 
string)\n * @retval\t\"<state1>,<state2>,...\" - a comma-separated list of states.\n * @retval\t\"\"\t\t\t- corresponding state not found.\n *\n */\nchar *\nget_vnode_state_str(char *state_bit_str)\n{\n\tint state_bit;\n\n\tif ((state_bit_str == NULL) || (state_bit_str[0] == '\\0'))\n\t\treturn (\"\");\n\n\tstate_bit = atoi(state_bit_str);\n\n\treturn (vnode_state_to_str(state_bit));\n}\n\n/**\n *\n * @brief\n *\tGiven a vnode state string 'vnstate', containing the list of descriptive\n *\tstates, comma separated, return the int bit mask equivalent.\n *\n * @param[in]\tvnstate - the vnode state attribute: \"<state1>,<state2>,...\"\n *\n * @return int\n * @retval <n>\tthe bitmask value\n *\n */\nint\nstr_to_vnode_state(char *vnstate)\n{\n\tint statebit = 0;\n\tchar *pc = NULL;\n\tchar *vnstate_dup = NULL;\n\tint i;\n\n\tif (vnstate == NULL) {\n\t\treturn 0;\n\t}\n\n\tvnstate_dup = strdup(vnstate);\n\tif (vnstate_dup == NULL)\n\t\treturn 0;\n\n\tpc = strtok(vnstate_dup, \",\");\n\twhile (pc) {\n\t\tfor (i = 0; ns[i].name; i++) {\n\t\t\tif (strcmp(ns[i].name, pc) == 0) {\n\t\t\t\tstatebit |= ns[i].bit;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tpc = strtok(NULL, \",\");\n\t}\n\tfree(vnstate_dup);\n\n\treturn (statebit);\n}\n\n/**\n * @brief\n * \tEncodes the node attribute pattr's state into an external\n * \trepresentation, via the head of a svrattrl structure (ph).\n *\n * @param[in]\tpattr\t- input attribute\n * @param[in] \tph \t- head of a list of \"svrattrl\" structs.\n * @param[out] \taname\t- attribute name\n * @param[out] \trname\t- resource's name (null if none)\n * @param[out] \tmode\t- action mode code, unused here\n * @param[out] \trtnl\t- pointer to the actual svrattrl entry.\n *\n * @return   int\n * @retval    <0  an error encountered; value is negative of an error code\n * @retval    ==1 ok, encode succeeded and returning one item\n */\n\nint\nencode_state(const attribute *pattr, pbs_list_head *ph, char *aname, char *rname, int mode, svrattrl 
**rtnl)\n{\n\tint i;\n\tsvrattrl *pal;\n\tunsigned long state;\n\tstatic char state_str[MAX_ENCODE_BFR];\n\tint offline_str_seen;\n\tchar *ns_name;\n\n\tif (!pattr)\n\t\treturn -(PBSE_INTERNAL);\n\n\tif (!(pattr->at_flags & ATR_VFLAG_SET))\n\t\treturn (0); /*nothing to report back*/\n\n\tstate = pattr->at_val.at_long & INUSE_SUBNODE_MASK;\n\tif (!state)\n\t\tstrcpy(state_str, ND_free);\n\n\telse {\n\t\tstate_str[0] = '\\0';\n\t\toffline_str_seen = 0;\n\t\tfor (i = 0; ns[i].name; i++) {\n\t\t\tif (state & ns[i].bit) {\n\t\t\t\tns_name = ns[i].name;\n\t\t\t\tif (strcmp(ns_name, ND_offline) == 0) {\n\t\t\t\t\toffline_str_seen = 1;\n\t\t\t\t} else if (strcmp(ns_name,\n\t\t\t\t\t\t  ND_offline_by_mom) == 0) {\n\t\t\t\t\tif (offline_str_seen)\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t/* ND_offline_by_mom will always be */\n\t\t\t\t\t/* shown externally as ND_offline */\n\t\t\t\t\tns_name = ND_offline;\n\t\t\t\t}\n\n\t\t\t\tif (state_str[0] != '\\0') {\n\t\t\t\t\t(void) strcat(state_str, \",\");\n\t\t\t\t}\n\t\t\t\t(void) strcat(state_str, ns_name);\n\t\t\t}\n\t\t}\n\t}\n\n\tpal = attrlist_create(aname, rname, (int) strlen(state_str) + 1);\n\tif (pal == NULL)\n\t\treturn -(PBSE_SYSTEM);\n\n\t(void) strcpy(pal->al_value, state_str);\n\tpal->al_flags = ATR_VFLAG_SET;\n\tif (ph)\n\t\tappend_link(ph, &pal->al_link, pal);\n\tif (rtnl)\n\t\t*rtnl = pal;\n\n\treturn (1); /*success*/\n}\n\n/**\n *\n * @brief\n *\tGiven a vnode type string 'vntype', return the int equivalent.\n *\n * @param[in]\tvntype - the vnode type attribute as a string.\n *\n * @return \tint\n * @retval  \t<n> \tmapped value from nt[] array.\n * @retval  \t-1  \tif failed to find a mapping.\n *\n */\nint\nstr_to_vnode_ntype(char *vntype)\n{\n\tint i;\n\n\tif (vntype == NULL)\n\t\treturn (-1);\n\n\tfor (i = 0; nt[i].name; i++) {\n\t\tif (strcmp(vntype, nt[i].name) == 0)\n\t\t\treturn nt[i].bit;\n\t}\n\n\treturn (-1);\n}\n\n/**\n *\n * @brief\n *\tGiven a vnode type 'vntype' in int form, return the string equivalent.\n 
*\n * @par Note:\n * \tDo not free the return value - it's a statically allocated string.\n *\n * @param[in]\tvntype - the vnode type value in int.\n *\n * @return \tstr\n * @retval \tmapped value in the nt[] array from int to string.\n * @retval \t\"\"\tempty string if not found in nt[] array.\n *\n */\nchar *\nvnode_ntype_to_str(int vntype)\n{\n\tint i;\n\n\tfor (i = 0; nt[i].name; i++) {\n\t\tif (vntype == nt[i].bit)\n\t\t\treturn nt[i].name;\n\t}\n\n\treturn (\"\");\n}\n\n/**\n *\n * @brief\n *\tEncodes a node type attribute into a svrattrl structure\n *\n * @param[in]\tpattr - attribute being encoded\n * @param[in]\tph - head of a list of 'svrattrl' structs which are to be\n *\t\t     returned.\n * @param[out]  aname - attribute's name\n * @param[out]  rname - resource's name (null if none)\n * @param[out]\tmode - mode code, unused here\n * @param[out]\trtnl - the return value, a pointer to svrattrl\n *\n * @note\n * \tOnce the node's \"ntype\" field is converted to an attribute,\n * \tthe attribute can be passed to this function for encoding into\n * \tan svrattrl structure\n *\n * @return \tint\n * @retval    \t< 0\tan error encountered; value is negative of an error code\n * @retval    \t0\tok, encode happened and svrattrl created and linked in,\n *\t\t     \tor nothing to encode\n *\n */\nint\nencode_ntype(const attribute *pattr, pbs_list_head *ph, char *aname, char *rname, int mode, svrattrl **rtnl)\n{\n\tsvrattrl *pal;\n\tshort ntype;\n\n\tstatic char ntype_str[MAX_ENCODE_BFR];\n\tint i;\n\n\tif (!pattr)\n\t\treturn -(PBSE_INTERNAL);\n\n\tif (!(pattr->at_flags & ATR_VFLAG_SET))\n\t\treturn (0); /*nothing to report back*/\n\n\tntype = pattr->at_val.at_short & PBSNODE_NTYPE_MASK;\n\tntype_str[0] = '\\0';\n\tfor (i = 0; nt[i].name; i++) {\n\t\tif (ntype == nt[i].bit) {\n\t\t\tstrcpy(ntype_str, nt[i].name);\n\t\t\tbreak;\n\t\t}\n\t}\n\n\tif (ntype_str[0] == '\\0') {\n\t\treturn -(PBSE_ATVALERANGE);\n\t}\n\n\tpal = attrlist_create(aname, rname, (int) 
strlen(ntype_str) + 1);\n\tif (pal == NULL)\n\t\treturn -(PBSE_SYSTEM);\n\n\t(void) strcpy(pal->al_value, ntype_str);\n\tpal->al_flags = ATR_VFLAG_SET;\n\tif (ph)\n\t\tappend_link(ph, &pal->al_link, pal);\n\tif (rtnl)\n\t\t*rtnl = pal;\n\n\treturn (0); /*success*/\n}\n\n/**\n * @brief\n * \tencode_jobs\n * \tOnce the node's struct jobinfo pointer is put in the data area of\n *\ttemporary attribute containing a pointer to the parent node, this\n * \tfunction will walk the list of jobs and generate the comma separated\n * \tlist to send back via an svrattrl structure.\n *\n * @param[in]   pattr - attribute being encoded\n * @param[in]   ph - head of a  list of \"svrattrl\"\n * @param[out]  aname - attribute's name\n * @param[out]  rname - resource's name (null if none)\n * @param[out]  mode - mode code, unused here\n * @param[out]  rtnl - the return value, a pointer to svrattrl\n *\n * @return\tint\n * @retval\t<0\tan error encountered; value is negative of an error code\n * @retval\t 0\tok, encode happened and svrattrl created and linked in,\n *\t\t\tor nothing to encode\n *\n */\n\nint\nencode_jobs(const attribute *pattr, pbs_list_head *ph, char *aname, char *rname, int mode, svrattrl **rtnl)\n\n{\n\tsvrattrl *pal;\n\tstruct jobinfo *jip;\n\tstruct pbsnode *pnode;\n\tstruct pbssubn *psubn;\n\tint i;\n\tint j;\n\tint offset;\n\tint jobcnt;    /*number of jobs using the node     */\n\tint strsize;   /*computed string size\t\t    */\n\tchar *job_str; /*holds comma separated list of jobs*/\n\n\tif (!pattr)\n\t\treturn (-1);\n\tif (!(pattr->at_flags & ATR_VFLAG_SET) || !pattr->at_val.at_jinfo)\n\t\treturn (0); /*nothing to report back   */\n\n\t/*cnt number of jobs and estimate size of string buffer required*/\n\tjobcnt = 0;\n\tstrsize = 1; /*allow for terminating null char*/\n\tpnode = pattr->at_val.at_jinfo;\n\tfor (psubn = pnode->nd_psn; psubn; psubn = psubn->next) {\n\t\tfor (jip = psubn->jobs; jip; jip = jip->next) {\n\t\t\tjobcnt++;\n\t\t\t/* add 3 to length of 
node name for slash, comma, and space */\n\t\t\t/* plus one for the cpu index\t\t\t\t   */\n\t\t\tstrsize += strlen(jip->jobid) + 4;\n\t\t\ti = psubn->index;\n\t\t\t/* now add additional space needed for the cpu index */\n\t\t\twhile ((i = i / 10) != 0)\n\t\t\t\tstrsize++;\n\t\t}\n\t}\n\n\tif (jobcnt == 0)\n\t\treturn (0); /*no jobs currently on this node*/\n\n\telse if (!(job_str = (char *) malloc(strsize + 1)))\n\t\treturn -(PBSE_SYSTEM);\n\n\tjob_str[0] = '\\0';\n\ti = 0;\n\tj = 0;\n\toffset = 0;\n\tfor (psubn = pnode->nd_psn; psubn; psubn = psubn->next) {\n\t\tfor (jip = psubn->jobs; jip; jip = jip->next) {\n\t\t\tif (i != 0) {\n\t\t\t\tsprintf(job_str + offset, \", \");\n\t\t\t\toffset += 2; /* accounting for comma and space */\n\t\t\t} else\n\t\t\t\ti++;\n\n\t\t\tsprintf(job_str + offset, \"%s/%ld\",\n\t\t\t\tjip->jobid, psubn->index);\n\t\t\toffset += strlen(jip->jobid) + 1;\n\t\t\tj = psubn->index;\n\t\t\twhile ((j = j / 10) != 0)\n\t\t\t\toffset++;\n\t\t\toffset++;\n\t\t}\n\t}\n\n\tpal = attrlist_create(aname, rname, (int) strlen(job_str) + 1);\n\tif (pal == NULL) {\n\t\tfree(job_str);\n\t\treturn -(PBSE_SYSTEM);\n\t}\n\n\t(void) strcpy(pal->al_value, job_str);\n\tpal->al_flags = ATR_VFLAG_SET;\n\tfree(job_str);\n\n\tif (ph)\n\t\tappend_link(ph, &pal->al_link, pal);\n\tif (rtnl)\n\t\t*rtnl = pal;\n\n\treturn (0); /*success*/\n}\n\n/**\n * @brief\n * \tencode_resvs\n * \tOnce the node's struct resvinfo pointer is put in the data area of\n * \ttemporary attribute containing a pointer to the parent node, this\n * \tfunction will walk the list of reservations and generate the comma\n * \tseparated list to send back via an svrattrl structure.\n *\n * @param[in]    pattr - attribute being encoded\n * @param[in]    ph - head of a  list of \"svrattrl\"\n * @param[out]   aname - attribute's name\n * @param[out]   rname - resource's name (null if none)\n * @param[out]   mode - mode code, unused here\n * @param[out]   rtnl - the return value, a pointer to svrattrl\n 
*\n * @return      int\n * @retval      <0      an error encountered; value is negative of an error code\n * @retval       0      ok, encode happened and svrattrl created and linked in,\n *                      or nothing to encode\n */\n\nint\nencode_resvs(const attribute *pattr, pbs_list_head *ph, char *aname, char *rname, int mode, svrattrl **rtnl)\n{\n\tsvrattrl *pal;\n\tstruct resvinfo *rip;\n\tstruct pbsnode *pnode;\n\tint i;\n\tint resvcnt;\t/*number of reservations on the node*/\n\tint strsize;\t/*computed string size\t\t    */\n\tchar *resv_str; /*comma separated reservations list */\n\n\tif (!pattr)\n\t\treturn (-1);\n\tif (!(pattr->at_flags & ATR_VFLAG_SET) || !pattr->at_val.at_jinfo)\n\t\treturn (0); /*nothing to report back   */\n\n\t/*cnt number of reservations and estimate size of string buffer required*/\n\tresvcnt = 0;\n\tstrsize = 1; /*allow for terminating null char*/\n\tpnode = pattr->at_val.at_jinfo;\n\tfor (rip = pnode->nd_resvp; rip; rip = rip->next) {\n\t\tresvcnt++;\n\t\tstrsize += strlen(rip->resvp->ri_qs.ri_resvID) + 9; /*4digit*/\n\t}\n\n\tif (resvcnt == 0)\n\t\treturn (0); /*no reservations currently on this node*/\n\n\telse if (!(resv_str = (char *) malloc(strsize)))\n\t\treturn -(PBSE_SYSTEM);\n\n\tresv_str[0] = '\\0';\n\ti = 0;\n\tfor (rip = pnode->nd_resvp; rip; rip = rip->next) {\n\t\tif (i != 0)\n\t\t\tstrcat(resv_str, \", \");\n\t\telse\n\t\t\ti++;\n\n\t\tsprintf(resv_str + strlen(resv_str), \"%s\",\n\t\t\trip->resvp->ri_qs.ri_resvID);\n\t}\n\n\tpal = attrlist_create(aname, rname, (int) strlen(resv_str) + 1);\n\tif (pal == NULL) {\n\t\tfree(resv_str);\n\t\treturn -(PBSE_SYSTEM);\n\t}\n\n\t(void) strcpy(pal->al_value, resv_str);\n\tpal->al_flags = ATR_VFLAG_SET;\n\tfree(resv_str);\n\n\tif (ph)\n\t\tappend_link(ph, &pal->al_link, pal);\n\tif (rtnl)\n\t\t*rtnl = pal;\n\n\treturn (0); /*success*/\n}\n\n/**\n * @brief\n * \tencode_sharing\n * \tEncode the sharing attribute value into one of its possible values,\n *\tsee 
\"share_words\" above\n *\n * @param[in]    pattr - attribute being encoded\n * @param[in]    ph - head of a list of \"svrattrl\"\n * @param[out]   aname - attribute's name\n * @param[out]   rname - resource's name (null if none)\n * @param[out]   mode - mode code, unused here\n * @param[out]   rtnl - the return value, a pointer to svrattrl\n *\n * @return      int\n * @retval      <0      an error encountered; value is negative of an error code\n * @retval     ==1      ok, encode succeeded and returning one item\n *\n */\n\nint\nencode_sharing(const attribute *pattr, pbs_list_head *ph, char *aname, char *rname, int mode, svrattrl **rtnl)\n{\n\tint n;\n\tsvrattrl *pal;\n\tchar *vn_str;\n\n\tif (!pattr)\n\t\treturn -(PBSE_INTERNAL);\n\n\tif (!(pattr->at_flags & ATR_VFLAG_SET))\n\t\treturn (0); /*nothing to report back*/\n\n\tn = (int) pattr->at_val.at_long;\n\tvn_str = vnode_sharing_to_str((enum vnode_sharing) n);\n\tif (vn_str == NULL)\n\t\treturn -(PBSE_INTERNAL);\n\n\tpal = attrlist_create(aname, rname, (int) strlen(vn_str) + 1);\n\tif (pal == NULL)\n\t\treturn -(PBSE_SYSTEM);\n\n\t(void) strcpy(pal->al_value, vn_str);\n\tpal->al_flags = ATR_VFLAG_SET;\n\tif (ph)\n\t\tappend_link(ph, &pal->al_link, pal);\n\tif (rtnl)\n\t\t*rtnl = pal;\n\n\treturn (1); /*success*/\n}\n\n/**\n * @brief\n * \tdecode_state\n * \tOnce the \"value\" argument, val, is decoded from its form\n * \tas a string of comma separated substrings, the component\n * \tvalues are used to set the appropriate bits in the attribute's\n * \tvalue field.\n *\n * @param[in]    pattr - pointer to the attribute whose state value is being set\n * @param[in]    name - attribute's name\n * @param[in]    rescn - resource's name (null if none)\n * @param[in]    val - comma-separated list of state strings\n *\n * @return\tint\n * @retval\tPBSE_*\terror code\n * @retval\t0\tSuccess\n
*\n */\n\nint\ndecode_state(attribute *pattr, char *name, char *rescn, char *val)\n{\n\tint rc = 0; /*return code; 0==success*/\n\tunsigned long flag, currflag;\n\tchar *str;\n\n\tchar strbuf[512]; /*should handle most vals*/\n\tchar *sbufp;\n\tint slen;\n\n\tif (val == NULL)\n\t\treturn (PBSE_BADNDATVAL);\n\n\t/*\n\t * determine string storage requirement and copy the string \"val\"\n\t * to a work buffer area\n\t */\n\n\tslen = strlen(val); /*bufr either on stack or heap*/\n\tif (slen - 512 < 0)\n\t\tsbufp = strbuf;\n\telse {\n\t\tif (!(sbufp = (char *) malloc(slen + 1)))\n\t\t\treturn (PBSE_SYSTEM);\n\t}\n\n\tstrcpy(sbufp, val);\n\n\tif ((str = parse_comma_string(sbufp)) == NULL) {\n\t\tif (slen >= 512)\n\t\t\tfree(sbufp);\n\t\treturn rc;\n\t}\n\n\tflag = 0;\n\tif ((rc = set_nodeflag(str, &flag)) != 0) {\n\t\tif (slen >= 512)\n\t\t\tfree(sbufp);\n\t\treturn rc;\n\t}\n\tcurrflag = flag;\n\n\t/*calling parse_comma_string with a null ptr continues where*/\n\t/*last call left off.  The initial comma separated string   */\n\t/*copy pointed to by sbufp is modified with each func call  */\n\n\twhile ((str = parse_comma_string(NULL)) != 0) {\n\t\tif ((rc = set_nodeflag(str, &flag)) != 0)\n\t\t\tbreak;\n\n\t\tif ((currflag == 0 && flag) || (currflag && flag == 0)) {\n\t\t\trc = PBSE_MUTUALEX; /*free is mutually exclusive*/\n\t\t\tbreak;\n\t\t}\n\t\tcurrflag = flag;\n\t}\n\n\tif (!rc) {\n\t\tpattr->at_val.at_long = flag;\n\t\tpost_attr_set(pattr);\n\t}\n\n\tif (slen >= 512) /*buffer on heap, not stack*/\n\t\tfree(sbufp);\n\n\treturn rc;\n}\n\n/**\n * @brief\n * \tdecode_ntype\n * \tWe no longer decode the node type.  Instead, we simply pretend to do so\n * \tand return success.\n *\n * \tHistorical information from the previous version of this function:\n *\tIn this case, the two arguments that get used are\n *\tpattr-- it points to an attribute whose value is a short,\n *\tand the argument \"val\". We once had \"time-shared\" and \"cluster\"\n *\tnode types. 
There may come a time when other ntype values are\n *\tneeded. The one thing that is assumed is that the types are\n *\tgoing to be mutually exclusive.\n *\n * @param[in] pattr - pointer to attribute structure\n * @param[in] name - attribute name\n * @param[in] rescn - resource name, unused here\n * @param[in] val - attribute value\n *\n * @return\tint\n * @retval\t0\tSuccess\n */\n\nint\ndecode_ntype(attribute *pattr, char *name, char *rescn, char *val)\n{\n\tpattr->at_val.at_short = NTYPE_PBS;\n\tpost_attr_set(pattr);\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \tdecode_sharing - decode one of the acceptable share value strings into\n *\tthe array index which is stored as the attribute value;\n *\n * @param[in] pattr - pointer to attribute structure\n * @param[in] name - attribute name\n * @param[in] rescn - resource name, unused here\n * @param[in] val - attribute value\n *\n * @return      int\n * @retval      0       \tSuccess\n * @retval\tPBSE code\terror\n */\n\nint\ndecode_sharing(attribute *pattr, char *name, char *rescn, char *val)\n{\n\tint vns;\n\tint rc = 0; /*return code; 0==success*/\n\n\tif (val == NULL)\n\t\trc = (PBSE_BADNDATVAL);\n\telse {\n\t\tvns = (int) str_to_vnode_sharing(val);\n\t\tif (vns == VNS_UNSET)\n\t\t\trc = (PBSE_BADNDATVAL);\n\t}\n\n\tif (!rc) {\n\t\tpattr->at_val.at_long = vns;\n\t\tpost_attr_set(pattr);\n\t}\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\n *\tUpdate the state information  of 'pattr' using the info from 'new'.\n *\n * @param[out]\tpattr - attribute whose  node state is to be updated.\n * @param[in]\tnew - input information.\n * @param[in]\top - update mode (SET, INCR, DECR).\n *\n * @return int\n * @retval 0 \t- for success\n * @retval != 0 - for any failure\n */\nint\nset_node_state(attribute *pattr, attribute *new, enum batch_op op)\n{\n\tint rc = 0;\n\n\tassert(pattr && new && (new->at_flags &ATR_VFLAG_SET));\n\n\tswitch (op) {\n\n\t\tcase SET:\n\t\t\tpattr->at_val.at_long = new->at_val.at_long;\n\t\t\tbreak;\n\n\t\tcase 
INCR:\n\t\t\tif (pattr->at_val.at_long && new->at_val.at_long == 0) {\n\t\t\t\trc = PBSE_BADNDATVAL; /*\"free\" mutually exclusive*/\n\t\t\t\tbreak;\n\t\t\t}\n\n\t\t\tpattr->at_val.at_long |= new->at_val.at_long;\n\t\t\tbreak;\n\n\t\tcase DECR:\n\t\t\tif (pattr->at_val.at_long && new->at_val.at_long == 0) {\n\t\t\t\trc = PBSE_BADNDATVAL; /*\"free\" mutually exclusive*/\n\t\t\t\tbreak;\n\t\t\t}\n\n\t\t\tpattr->at_val.at_long &= ~new->at_val.at_long;\n\t\t\tif (new->at_val.at_long & INUSE_OFFLINE) {\n\t\t\t\t/* if INUSE_OFFLINE is being cleared, must also */\n\t\t\t\t/* clear INUSE_OFFLINE_BY_MOM. */\n\t\t\t\tpattr->at_val.at_long &= ~INUSE_OFFLINE_BY_MOM;\n\t\t\t}\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\trc = PBSE_INTERNAL;\n\t\t\tbreak;\n\t}\n\n\tif (!rc)\n\t\tpost_attr_set(pattr);\n\n\treturn rc;\n}\n\n/**\n * @brief\n * \tset_node_ntype - the value entry in attribute \"new\" is a short.  It was\n *\tgenerated by the decode routine and is used to update the\n *\tvalue portion of the attribute *pattr.\n *\tThe mode of the update is governed by the argument \"op\"\n *\t(SET, INCR, DECR).\n *\n * @param[out]\tpattr - attribute whose node type is to be updated.\n * @param[in]\tnew - input information.\n * @param[in]\top - update mode (SET, INCR, DECR).\n *\n * @return int\n * @retval 0 \t- for success\n * @retval != 0 - for any failure\n */\n\nint\nset_node_ntype(attribute *pattr, attribute *new, enum batch_op op)\n{\n\tint rc = 0;\n\n\tassert(pattr && new && (new->at_flags &ATR_VFLAG_SET));\n\n\tswitch (op) {\n\n\t\tcase SET:\n\t\t\tpattr->at_val.at_short = new->at_val.at_short;\n\t\t\tbreak;\n\n\t\tcase INCR:\n\t\t\tif (pattr->at_val.at_short != new->at_val.at_short) {\n\n\t\t\t\trc = PBSE_MUTUALEX; /*types are mutually exclusive*/\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase DECR:\n\t\t\tif (pattr->at_val.at_short != new->at_val.at_short)\n\t\t\t\trc = PBSE_MUTUALEX; /*types are mutually exclusive*/\n\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\trc = PBSE_INTERNAL;\n\t}\n\n\tif 
(!rc)\n\t\tpost_attr_set(pattr);\n\treturn rc;\n}\n\n/**\n * @brief\n *\tSets the node flag.\n *\n *\tUse the input 'str's value to set a bit in\n *\tthe \"flags\" variable pointed to by 'pflag'.\n *\n * @note\n *\tEach call sets one more bit in the flags\n *\tvariable or it clears the flags variable\n *\tin the case where *str is the value \"free\".\n *\n * @param[in]\tstr - input string state value.\n * @param[out]\tpflag - pointer to the variable holding the result.\n *\n * @return\tint\n * @retval\t0\t\t\tsuccess\n * @retval\tPBSE_BADNDATVAL\t\terror\n *\n */\n\nstatic int\nset_nodeflag(char *str, unsigned long *pflag)\n{\n\tint rc = 0;\n\n\tif (*str == '\\0')\n\t\treturn (PBSE_BADNDATVAL);\n\n\tif (!strcmp(str, ND_free))\n\t\t*pflag = 0;\n\telse if (!strcmp(str, ND_offline))\n\t\t*pflag = *pflag | INUSE_OFFLINE;\n\telse if (!strcmp(str, ND_offline_by_mom))\n\t\t*pflag = *pflag | INUSE_OFFLINE_BY_MOM;\n\telse if (!strcmp(str, ND_down))\n\t\t*pflag = *pflag | INUSE_DOWN;\n\telse if (!strcmp(str, ND_sleep))\n\t\t*pflag = *pflag | INUSE_SLEEP;\n\telse {\n\t\trc = PBSE_BADNDATVAL;\n\t}\n\n\treturn rc;\n}\n\n/**\n * @brief\n * \tnode_ntype - Either derive an \"ntype\" attribute from the node\n *\tor update node's \"ntype\" field using the\n *\tattribute's data\n *\n * @param[out] new - derive ntype into this attribute\n * @param[in] pnode - pointer to a pbsnode struct\n * @param[in] actmode - action mode; \"NEW\" or \"ALTER\"\n *\n * @return      int\n * @retval      0                       success\n * @retval      PBSE_INTERNAL\t        error\n *\n */\n\nint\nnode_ntype(attribute *new, void *pnode, int actmode)\n{\n\tint rc = 0;\n\tstruct pbsnode *np;\n\n\tnp = (struct pbsnode *) pnode; /*because of def of at_action  args*/\n\tswitch (actmode) {\n\n\t\tcase ATR_ACTION_NOOP:\n\t\t\tbreak;\n\n\t\tcase ATR_ACTION_NEW:\n\t\tcase ATR_ACTION_ALTER:\n\t\t\tnp->nd_ntype = new->at_val.at_short;\n\t\t\tbreak;\n\n\t\tcase ATR_ACTION_RECOV:\n\t\tcase 
ATR_ACTION_FREE:\n\t\tdefault:\n\t\t\trc = PBSE_INTERNAL;\n\t}\n\treturn rc;\n}\n\n/**\n *\n * @brief\n *\n *\tReturns the \"external\" form of the attribute 'val' given 'name'.\n *\n * @param[in] \tname - attribute name\n * @param[in] \tval - attribute value\n *\n * @return char * - the external form for name=state: \"3\" -> \"down,offline\"\n * @Note\n *     \tReturns a static value that can potentially get cleaned up on next call.\n * \tMust use return value immediately!\n */\nchar *\nreturn_external_value(char *name, char *val)\n{\n\tchar *vns;\n\n\tif ((name == NULL) || (val == NULL))\n\t\treturn (\"\");\n\n\tif (strcmp(name, ATTR_NODE_state) == 0) {\n\t\treturn vnode_state_to_str(atoi(val));\n\t} else if (strcmp(name, ATTR_NODE_Sharing) == 0) {\n\t\tvns = vnode_sharing_to_str((enum vnode_sharing) atoi(val));\n\t\treturn (vns ? vns : \"\");\n\t} else if (strcmp(name, ATTR_NODE_ntype) == 0) {\n\t\treturn vnode_ntype_to_str(atoi(val));\n\t} else {\n\t\treturn val;\n\t}\n}\n\n/**\n * @brief\n *\t\tReturns the \"internal\" form of the attribute 'val' given 'name'.\n *\n * @param[in]\tname\t-\tattribute name\n * @param[in]\tval\t-\tattribute value\n *\n * @return char *\t: the external form for name=state: \"down,offline\" -> \"3\"\n * @Note\n *     \tReturns a static value that can potentially get cleaned up on next call.\n * \t\tMust use return value immediately!\n *\n * @par MT-safe: No\n */\nchar *\nreturn_internal_value(char *name, char *val)\n{\n\tstatic char ret_str[MAX_STR_INT];\n\tenum vnode_sharing share;\n\tint v;\n\n\tif ((name == NULL) || (val == NULL))\n\t\treturn (\"\");\n\n\tif (strcmp(name, ATTR_NODE_state) == 0) {\n\t\tv = str_to_vnode_state(val);\n\t\tsprintf(ret_str, \"%d\", v);\n\t\treturn (ret_str);\n\t} else if (strcmp(name, ATTR_NODE_Sharing) == 0) {\n\t\tshare = str_to_vnode_sharing(val);\n\t\tif (share == VNS_UNSET)\n\t\t\treturn val;\n\t\tsprintf(ret_str, \"%d\", share);\n\t\treturn (ret_str);\n\t} else if (strcmp(name, ATTR_NODE_ntype) 
== 0) {\n\t\tv = str_to_vnode_ntype(val);\n\t\tif (v == -1)\n\t\t\treturn val;\n\t\tsprintf(ret_str, \"%d\", v);\n\t\treturn (ret_str);\n\t} else {\n\t\treturn (val);\n\t}\n}\n\n/**\n *\n * @brief\n *\tPrints out the file on opened stream 'fp', the attribute names or\n *\tresources and their values as in:\n *\t\t<attribute_name>=<attribute_value>\n *\t\t<attribute_name>[<resource_name>]=<resource_value>\n *\t\t<vnode_name>.<attribute_name>=<attribute value>\n *\t\t<vnode_name>.<attribute_name>[<resource_name>]=<attribute value>\n *\t\t<head_str>[<attribute_name>].p[<resource_name>]=<resource_value>\n * @Note\n *\tOnly prints out values that were set in a hook script.\n *\n * @param[in]\tfp \t- the stream pointer of the file to write output into\n * @param[in]\thead_str- some string to print out the beginning.\n * @param[in]\tphead\t- pointer to the head of the list containing data.\n *\n * @return none\n */\nvoid\nfprint_svrattrl_list(FILE *fp, char *head_str, pbs_list_head *phead)\n{\n\tsvrattrl *plist = NULL;\n\tchar *p, *p0;\n\n\tif ((fp == NULL) || (head_str == NULL) || (phead == NULL)) {\n\t\tlog_err(errno, __func__, \"NULL input parameters!\");\n\t\treturn;\n\t}\n\n\tfor (plist = (svrattrl *) GET_NEXT(*phead); plist != NULL;\n\t     plist = (svrattrl *) GET_NEXT(plist->al_link)) {\n\t\tif (plist->al_flags & ATR_VFLAG_HOOK) {\n\t\t\tp = strrchr(plist->al_name, '.');\n\t\t\tp0 = p;\n\t\t\tif (p != NULL) {\n\t\t\t\t*p = '\\0';\n\t\t\t\tp++; /* this is the actual attribute name */\n\t\t\t}\n\n\t\t\tif (plist->al_resc != NULL) {\n\t\t\t\tif (p != NULL)\n\t\t\t\t\tfprintf(fp, \"%s[\\\"%s\\\"].%s[%s]=%s\\n\", head_str,\n\t\t\t\t\t\tplist->al_name, p,\n\t\t\t\t\t\tplist->al_resc,\n\t\t\t\t\t\treturn_external_value(p, plist->al_value));\n\t\t\t\telse\n\t\t\t\t\tfprintf(fp, \"%s.%s[%s]=%s\\n\", head_str,\n\t\t\t\t\t\tplist->al_name, plist->al_resc,\n\t\t\t\t\t\treturn_external_value(plist->al_name,\n\t\t\t\t\t\t\t\t      plist->al_value));\n\t\t\t} else {\n\t\t\t\tif 
(p != NULL) {\n\t\t\t\t\tfprintf(fp, \"%s[\\\"%s\\\"].%s=%s\\n\", head_str,\n\t\t\t\t\t\tplist->al_name, p,\n\t\t\t\t\t\treturn_external_value(p, plist->al_value));\n\t\t\t\t} else {\n\t\t\t\t\tif (strcmp(plist->al_name, ATTR_v) == 0) {\n\t\t\t\t\t\tfprintf(fp, \"%s.%s=\\\"\\\"\\\"%s\\\"\\\"\\\"\\n\",\n\t\t\t\t\t\t\thead_str,\n\t\t\t\t\t\t\tplist->al_name,\n\t\t\t\t\t\t\treturn_external_value(\n\t\t\t\t\t\t\t\tplist->al_name,\n\t\t\t\t\t\t\t\tplist->al_value));\n\t\t\t\t\t} else {\n\t\t\t\t\t\tfprintf(fp, \"%s.%s=%s\\n\", head_str,\n\t\t\t\t\t\t\tplist->al_name,\n\t\t\t\t\t\t\treturn_external_value(\n\t\t\t\t\t\t\t\tplist->al_name,\n\t\t\t\t\t\t\t\tplist->al_value));\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (p0 != NULL)\n\t\t\t\t*p0 = '.';\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/lib/Libattr/attr_resc_func.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <ctype.h>\n#include <memory.h>\n#ifndef NDEBUG\n#include <stdio.h>\n#endif\n#include <stdlib.h>\n#include <string.h>\n#include <pbs_ifl.h>\n#include \"pbs_internal.h\"\n#include \"log.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"pbs_error.h\"\n\n/**\n * @file\tattr_resc_func.c\n * @brief\n * This file contains functions for decoding \"nodes\" and \"select\" resources\n *\n * The prototypes are declared in \"attribute.h\", also see resource.h\n *\n * ----------------------------------------------------------------------------\n * Attribute functions for attributes with value type resource\n * ----------------------------------------------------------------------------\n */\n\n/**\n * @brief\n * \tdecode_nodes - decode a node requirement specification,\n *\tCheck if node requirement specification is syntactically ok,\n *\tthen call decode_str()\n *\n *\tval is of the form:\tnode_spec[+node_spec...]\n *\twhere node_spec is:\tnumber | property | number:property\n *\n * @param[out] patr - pointer to attribute structure\n * @param[in] name - attribute name\n * @param[in] rescn - resource name\n * @param[in] val - attribute value\n *\n * @return\tint\n * @retval\t0\tsuccess\n * 
@retval\t>0\terror\n *\n */\n\nint\ndecode_nodes(attribute *patr, char *name, char *rescn, char *val)\n{\n\tchar *pc;\n\n\tpc = val;\n\n\tif ((pc == NULL) || (*pc == '\\0')) /* effectively unsetting value */\n\t\treturn (decode_str(patr, name, rescn, val));\n\n\twhile (1) {\n\t\twhile (isspace((int) *pc))\n\t\t\t++pc;\n\n\t\tif (!isalnum((int) *pc))\n\t\t\treturn (PBSE_BADATVAL);\n\t\tif (isdigit((int) *pc)) {\n\t\t\twhile (isalnum((int) *++pc))\n\t\t\t\t;\n\t\t\tif (*pc == '\\0')\n\t\t\t\tbreak;\n\t\t\telse if ((*pc != '+') && (*pc != ':') && (*pc != '#'))\n\t\t\t\treturn (PBSE_BADATVAL);\n\t\t} else if (isalpha((int) *pc)) {\n\t\t\twhile (isalnum((int) *++pc) || *pc == '-' || *pc == '.' || *pc == '=' || *pc == '_')\n\t\t\t\t;\n\t\t\tif (*pc == '\\0')\n\t\t\t\tbreak;\n\t\t\telse if ((*pc != '+') && (*pc != ':') && (*pc != '#'))\n\t\t\t\treturn (PBSE_BADATVAL);\n\t\t}\n\t\t++pc;\n\t}\n\treturn (decode_str(patr, name, rescn, val));\n}\n\n/**\n * @brief\n * \tdecode_select - decode a selection specification,\n *\tCheck if the specification is syntactically ok, then call decode_str()\n *\n *\tSpec is of the form:\n *\t\t[N][:resource=value[:resource=value ...]][+...]\n *\n * @param[out] patr - pointer to attribute structure\n * @param[in] name - attribute name\n * @param[in] rescn - resource name\n * @param[in] val - attribute value\n *\n * @return      int\n * @retval      0       success\n * @retval      >0      error\n *\n */\n\nint\ndecode_select(attribute *patr, char *name, char *rescn, char *val)\n{\n\tint new_chunk = 1;\n\tchar *pc;\n\tchar *quoted = NULL;\n\n\tif (val == NULL)\n\t\treturn (PBSE_BADATVAL);\n\tpc = val;\n\t/* skip leading white space */\n\twhile (isspace((int) *pc))\n\t\t++pc;\n\n\tif (*pc == '\\0')\n\t\treturn (PBSE_BADATVAL);\n\n\twhile (*pc) {\n\n\t\t/* each chunk must start with number or letter */\n\t\tif (!isalnum((int) *pc))\n\t\t\treturn (PBSE_BADATVAL);\n\n\t\tif (new_chunk && isdigit((int) *pc)) {\n\t\t\t/* if digit, it is the chunk multiplier */\n\t\t\twhile (isdigit((int) 
*++pc))\n\t\t\t\t;\n\t\t\tif (*pc == '\\0') /* just number is ok */\n\t\t\t\treturn (decode_str(patr, name, rescn, val));\n\t\t\telse if (*pc == '+') {\n\t\t\t\t++pc;\n\t\t\t\tif (*pc == '\\0')\n\t\t\t\t\treturn (PBSE_BADATVAL);\n\t\t\t\tcontinue;\n\t\t\t} else if (*pc != ':')\n\t\t\t\treturn (PBSE_BADATVAL);\n\t\t\t++pc;\n\t\t\t/* a colon must be followed by a resource=value */\n\t\t}\n\n\t\t/* resource=value pairs */\n\t\tnew_chunk = 0;\n\n\t\t/* resource first and must start with alpha */\n\t\tif (!isalpha((int) *pc))\n\t\t\treturn (PBSE_BADATVAL);\n\n\t\twhile (isalnum((int) *pc) || *pc == '-' || *pc == '_')\n\t\t\t++pc;\n\t\tif (*pc != '=')\n\t\t\treturn (PBSE_BADATVAL);\n\n\t\t++pc; /* what follows the '=' */\n\t\tif (*pc == '\\0')\n\t\t\treturn (PBSE_BADATVAL);\n\n\t\t/* next comes the value substring */\n\n\t\twhile (*pc) {\n\n\t\t\t/* is it a quoted substring? */\n\t\t\tif (*pc == '\\'' || *pc == '\"') {\n\t\t\t\t/* quoted substring, scan to the closing quote */\n\t\t\t\tquoted = pc;\n\t\t\t\twhile (*++pc) {\n\t\t\t\t\tif (*pc == *quoted) {\n\t\t\t\t\t\tquoted = NULL;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (quoted != NULL) /* didn't find close */\n\t\t\t\t\treturn (PBSE_BADATVAL);\n\t\t\t\t++pc;\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tif (*pc == '\\0') {\n\t\t\t\t/* valid end of string */\n\t\t\t\treturn (decode_str(patr, name, rescn, val));\n\n\t\t\t} else if (*pc == ':') {\n\t\t\t\t/* should start new resource=value */\n\t\t\t\t++pc;\n\t\t\t\tif (*pc)\n\t\t\t\t\tbreak;\n\t\t\t\telse\n\t\t\t\t\treturn (PBSE_BADATVAL);\n\t\t\t} else if (*pc == '+') {\n\t\t\t\t/* should start new chunk */\n\t\t\t\t++pc;\n\t\t\t\tnew_chunk = 1;\n\t\t\t\tif (*pc)\n\t\t\t\t\tbreak; /* end of chunk, next */\n\t\t\t\telse\n\t\t\t\t\treturn (PBSE_BADATVAL);\n\t\t\t}\n\t\t\tif (isprint((int) *pc)) {\n\t\t\t\t++pc; /* legal character */\n\n\t\t\t} else\n\t\t\t\treturn (PBSE_BADATVAL);\n\t\t}\n\t}\n\treturn (decode_str(patr, name, rescn, val));\n}\n\n/**\n * @brief 
Verification of resource name\n *\n * A custom resource must start with an alpha character,\n * which may be followed by alphanumeric characters, '_', or '-'\n *\n * @param[in] name - name of the resource\n *\n * @retval -1 if resource name does not start with an alpha character\n * @retval -2 if the resource name, past the first character, does not follow the\n * required format\n * @retval 0 if resource name matches required format\n */\nint\nverify_resc_name(char *name)\n{\n\n\tchar *val;\n\n\tif (!isalpha((int) *name)) {\n\t\treturn -1;\n\t}\n\n\tval = name;\n\n\twhile (*++val) {\n\t\tif (!isalnum((int) *val) && (*val != '_') &&\n\t\t    (*val != '-')) {\n\t\t\treturn -2;\n\t\t}\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief Verification of type and flag values\n *\n * @param[in] resc_type - The resource type\n * @param[in] pflag_ir - The invisible and read-only flags\n * @param[in][out] presc_flag - Pointer to the resource flags\n * @param[in] rescname - The name of the resource\n * @param[in][out] buf - A buffer to hold error message if any\n * @param[in] autocorrect - If possible, fix inconsistencies in types and flags\n * @retval 0 on success\n * @retval -1 on error\n * @retval -2 when errors were autocorrected\n */\nint\nverify_resc_type_and_flags(int resc_type, int *pflag_ir, int *presc_flag, const char *rescname, char *buf, int buflen, int autocorrect)\n{\n\tchar fchar;\n\tint correction = 0;\n\n\tif (*pflag_ir == 2) { /* both flag i and r are set */\n\t\tif (autocorrect) {\n\t\t\tsnprintf(buf, buflen, \"Erroneous to have flag \"\n\t\t\t\t\t      \"'i' and 'r' on resource \\\"%s\\\"; ignoring 'r' flag.\",\n\t\t\t\t rescname);\n\t\t\tcorrection = 1;\n\t\t} else {\n\t\t\tsnprintf(buf, buflen, \"Erroneous to have flag \"\n\t\t\t\t\t      \"'i' and 'r' on resource \\\"%s\\\".\",\n\t\t\t\t rescname);\n\t\t\treturn -1;\n\t\t}\n\t}\n\t*pflag_ir = 0;\n\tif ((*presc_flag & (ATR_DFLAG_FNASSN | ATR_DFLAG_ANASSN)) &&\n\t    ((*presc_flag & ATR_DFLAG_CVTSLT) == 0)) {\n\t\tif 
(*presc_flag & ATR_DFLAG_ANASSN)\n\t\t\tfchar = 'n';\n\t\telse\n\t\t\tfchar = 'f';\n\t\tif (autocorrect) {\n\t\t\tsnprintf(buf, buflen, \"Erroneous to have flag '%c' without \"\n\t\t\t\t\t      \"'h' on resource \\\"%s\\\"; adding 'h' flag.\",\n\t\t\t\t fchar, rescname);\n\t\t\tcorrection = 1;\n\t\t} else {\n\t\t\tsnprintf(buf, buflen, \"Erroneous to have flag '%c' without \"\n\t\t\t\t\t      \"'h' on resource \\\"%s\\\".\",\n\t\t\t\t fchar, rescname);\n\t\t\treturn -1;\n\t\t}\n\t}\n\n\tif ((*presc_flag & (ATR_DFLAG_FNASSN | ATR_DFLAG_ANASSN)) ==\n\t    (ATR_DFLAG_FNASSN | ATR_DFLAG_ANASSN)) {\n\t\t*presc_flag &= ~ATR_DFLAG_FNASSN;\n\t\tif (autocorrect) {\n\t\t\tsnprintf(buf, buflen, \"Erroneous to have flag 'n' and 'f' \"\n\t\t\t\t\t      \"on resource \\\"%s\\\"; ignoring 'f' flag.\",\n\t\t\t\t rescname);\n\t\t\tcorrection = 1;\n\t\t} else {\n\t\t\tsnprintf(buf, buflen, \"Erroneous to have flag 'n' and 'f' \"\n\t\t\t\t\t      \"on resource \\\"%s\\\".\",\n\t\t\t\t rescname);\n\t\t\treturn -1;\n\t\t}\n\t}\n\n\tif (((resc_type == ATR_TYPE_BOOL) || (resc_type == ATR_TYPE_STR) || (resc_type == ATR_TYPE_ARST)) &&\n\t    ((*presc_flag & (ATR_DFLAG_RASSN | ATR_DFLAG_FNASSN | ATR_DFLAG_ANASSN)) != 0)) {\n\t\t*presc_flag &= ~(ATR_DFLAG_RASSN | ATR_DFLAG_FNASSN | ATR_DFLAG_ANASSN);\n\t\tif (autocorrect) {\n\t\t\tsnprintf(buf, buflen, \"Erroneous to have flag 'n', 'f', \"\n\t\t\t\t\t      \"or 'q' on resource \\\"%s\\\" which is type string, \"\n\t\t\t\t\t      \"string_array, or boolean; ignoring those flags.\",\n\t\t\t\t rescname);\n\t\t\tcorrection = 1;\n\t\t} else {\n\t\t\tsnprintf(buf, buflen, \"Erroneous to have flag 'n', 'f', \"\n\t\t\t\t\t      \"or 'q' on resource \\\"%s\\\" which is type string, \"\n\t\t\t\t\t      \"string_array, or boolean.\",\n\t\t\t\t rescname);\n\t\t\treturn -1;\n\t\t}\n\t}\n\n\tif (autocorrect && correction)\n\t\treturn -2;\n\n\treturn 0;\n}\n\n/**\n * @brief parse type expression associated to the definition of a new resource\n *\n * 
@param[in] val - The value associated with the resource\n * @param[out] resc_type_p - resource type\n *\n * @retval 0 on success\n * @retval -1 on error\n */\nint\nparse_resc_type(char *val, int *resc_type_p)\n{\n\tstruct resc_type_map *p_resc_type_map;\n\n\tp_resc_type_map = find_resc_type_map_by_typest(val);\n\tif (p_resc_type_map == NULL)\n\t\treturn -1;\n\t*resc_type_p = p_resc_type_map->rtm_type;\n\n\treturn 0;\n}\n\n/**\n * @brief parse flags expression associated with the definition of a new resource\n *\n * @param[in] val - The value associated with the resource\n * @param[out] flag_ir_p - invisible and read-only flags\n * @param[out] resc_flag_p - resource flags\n *\n * @retval 0 on success\n * @retval -1 on error\n */\nint\nparse_resc_flags(char *val, int *flag_ir_p, int *resc_flag_p)\n{\n\tint resc_flag = READ_WRITE;\n\tint flag_ir = 0;\n\n\tif ((val == NULL) || (flag_ir_p == NULL) || (resc_flag_p == NULL))\n\t\treturn -1;\n\n\twhile (*val) {\n\t\tif (*val == 'q')\n\t\t\tresc_flag |= ATR_DFLAG_RASSN;\n\t\telse if (*val == 'f')\n\t\t\tresc_flag |= ATR_DFLAG_FNASSN;\n\t\telse if (*val == 'n')\n\t\t\tresc_flag |= ATR_DFLAG_ANASSN;\n\t\telse if (*val == 'h')\n\t\t\tresc_flag |= ATR_DFLAG_CVTSLT;\n\t\telse if (*val == 'm')\n\t\t\tresc_flag |= ATR_DFLAG_MOM;\n\t\telse if (*val == 'r') {\n\t\t\tif (flag_ir == 0) {\n\t\t\t\tresc_flag &= ~READ_WRITE;\n\t\t\t\tresc_flag |= NO_USER_SET;\n\t\t\t}\n\t\t\tflag_ir++;\n\t\t} else if (*val == 'i') {\n\t\t\tresc_flag &= ~READ_WRITE;\n\t\t\tresc_flag |= ATR_DFLAG_OPRD |\n\t\t\t\t     ATR_DFLAG_OPWR |\n\t\t\t\t     ATR_DFLAG_MGRD | ATR_DFLAG_MGWR;\n\t\t\tflag_ir++;\n\t\t} else\n\t\t\treturn -1;\n\t\tval++;\n\t}\n\t*flag_ir_p = flag_ir;\n\t*resc_flag_p = resc_flag;\n\treturn 0;\n}\n"
  },
  {
    "path": "src/lib/Libattr/master_job_attr_def.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<data>\n   <!--\n/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n   NOTE (Server File)\n\n   job_attr_def is the array of attribute definitions for jobs.\n   Each legal job attribute is defined here.\n   The entries for each attribute are (see attribute.h):\n       name,\n       decode function,\n       encode function,\n       set function,\n       compare function,\n       free value space function,\n       action function,\n       access permission flags,\n       value type,\n       parent object type\n\n   NOTE (ECL File)\n\n   The entries for each attribute are (see attribute.h):\\n\n       name,\n       type,\n       flag,\n       verify datatype function,\n       verify value function\n\n   NOTE NOTE NOTE NOTE NOTE\n      For array jobs, if you want the job attribute to be\n      included in a subjob, you must also modify\n      \"attrs_to_copy\" in array_func.c\n\n      Also status of subjobs are dependent on order of\n      this list, please check status_subjob()->status_attrib()\n      (with limit argument) and expand_remaining_subjob()\n      before changing order\n      -->\n   <head>\n      <SVR>\n      #include &lt;pbs_config.h&gt;\n      #include &lt;sys/types.h&gt;\n      #include \"pbs_ifl.h\"\n      #include \"list_link.h\"\n      #include \"attribute.h\"\n      #include \"job.h\"\n      #include \"server_limits.h\"\n\n      attribute_def job_attr_def[] = {\n      </SVR>\n      <ECL>\n   
   #include &lt;pbs_config.h&gt;\n      #include &lt;sys/types.h&gt;\n      #include \"pbs_ifl.h\"\n      #include \"pbs_ecl.h\"\n      ecl_attribute_def ecl_job_attr_def[] = {\n      </ECL>\n   </head>\n   <attributes>\n      <member_index>JOB_ATR_jobname</member_index>\n      <member_name>ATTR_N</member_name>\n      <member_at_decode>decode_jobname</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>\n         <SVR>READ_WRITE | ATR_DFLAG_ALTRUN | ATR_DFLAG_SELEQ | ATR_DFLAG_MOM</SVR>\n         <ECL>READ_ONLY</ECL>\n      </member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_jobname</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_job_owner</member_index>\n      <member_name>ATTR_owner</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_SSET | ATR_DFLAG_SELEQ | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_resc_used</member_index>\n      
<member_name>ATTR_used</member_name>\n      <member_at_decode>decode_resc</member_at_decode>\n      <member_at_encode>encode_resc</member_at_encode>\n      <member_at_set>set_resc</member_at_set>\n      <member_at_comp>comp_resc</member_at_comp>\n      <member_at_free>free_resc</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_SvWR | ATR_DFLAG_NOSAVM</member_at_flags>\n      <member_at_type>ATR_TYPE_RESC</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_resc_used_acct</member_index>\n      <member_name>ATTR_used_acct</member_name>\n      <member_at_decode>decode_resc</member_at_decode>\n      <member_at_encode>encode_resc</member_at_encode>\n      <member_at_set>set_resc</member_at_set>\n      <member_at_comp>comp_resc</member_at_comp>\n      <member_at_free>free_resc</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_SvWR | ATR_DFLAG_NOSAVM | ATR_DFLAG_HIDDEN</member_at_flags>\n      <member_at_type>ATR_TYPE_RESC</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_resc_used_update</member_index>\n      <member_name>ATTR_used_update</member_name>\n      <member_at_decode>decode_resc</member_at_decode>\n      <member_at_encode>encode_resc</member_at_encode>\n      <member_at_set>set_resc</member_at_set>\n      <member_at_comp>comp_resc</member_at_comp>\n      <member_at_free>free_resc</member_at_free>\n      
<member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_SvWR | ATR_DFLAG_NOSAVM | ATR_DFLAG_HIDDEN</member_at_flags>\n      <member_at_type>ATR_TYPE_RESC</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_state</member_index>\n      <member_name>ATTR_state</member_name>\n      <member_at_decode>decode_c</member_at_decode>\n      <member_at_encode>encode_c</member_at_encode>\n      <member_at_set>set_c</member_at_set>\n      <member_at_comp>comp_c</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_SvWR</member_at_flags>\n      <member_at_type>ATR_TYPE_CHAR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_state</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_resv</member_index>\n      <member_name>ATTR_resv</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      
<member_index>JOB_ATR_in_queue</member_index>\n      <member_name>ATTR_queue</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_at_server</member_index>\n      <member_name>ATTR_server</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_account</member_index>\n      <member_name>ATTR_A</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | 
ATR_DFLAG_SELEQ | ATR_DFLAG_MOM | ATR_DFLAG_SCGALT</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_chkpnt</member_index>\n      <member_name>ATTR_c</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>\n#ifdef PBS_MOM\n\t\tcomp_str\n#else\n\t\tcomp_chkpnt\n#endif\n      </member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>\n#ifdef PBS_MOM\n\t\tNULL_FUNC\n#else\n\t\tck_chkpnt\n#endif\n      </member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM | ATR_DFLAG_ALTRUN</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_checkpoint</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_ctime</member_index>\n      <member_name>ATTR_ctime</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_SSET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      
</member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_depend</member_index>\n      <member_name>ATTR_depend</member_name>\n      <member_at_decode>\n#ifndef PBS_MOM\n\t\tdecode_depend\n#else\n\t\tdecode_str\n#endif\n      </member_at_decode>\n      <member_at_encode>\n#ifndef PBS_MOM\n\t\tencode_depend\n#else\n\t\tencode_str\n#endif\n      </member_at_encode>\n      <member_at_set>\n#ifndef PBS_MOM\n\t\tset_depend\n#else\n      set_str\n#endif\n      </member_at_set>\n      <member_at_comp>\n#ifndef PBS_MOM\n\t\tcomp_depend\n#else\n\t\tcomp_str\n#endif\n      </member_at_comp>\n      <member_at_free>\n#ifndef PBS_MOM\n\t\tfree_depend\n#else\n\t\tfree_str\n#endif\n      </member_at_free>\n      <member_at_action>\n#ifndef PBS_MOM\n\t\tdepend_on_que\n#else\n\t\tNULL_FUNC\n#endif\n      </member_at_action>\n      <member_at_flags>READ_WRITE</member_at_flags>\n      <member_at_type>ATR_TYPE_LIST</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_dependlist</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_errpath</member_index>\n      <member_name>ATTR_e</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_ALTRUN | ATR_DFLAG_SELEQ | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_path</ECL>\n      </member_verify_function>\n   
</attributes>\n   <attributes>\n      <member_index>JOB_ATR_exec_host</member_index>\n      <member_name>ATTR_exechost</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>\n         <SVR>\n#ifdef PBS_MOM\n\t\tREAD_ONLY | ATR_DFLAG_MOM\n#else\n\t\tREAD_ONLY\n#endif\n      </SVR>\n         <ECL>READ_ONLY</ECL>\n      </member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_exec_host2</member_index>\n      <member_name>ATTR_exechost2</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_exec_host_acct</member_index>\n      <member_name>ATTR_exechost_acct</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      
<member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>\n         <SVR>\n#ifdef PBS_MOM\n\t\tREAD_ONLY | ATR_DFLAG_MOM\n#else\n\t\tREAD_ONLY | ATR_DFLAG_HIDDEN\n#endif\n      </SVR>\n         <ECL>READ_ONLY</ECL>\n      </member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_exec_host_orig</member_index>\n      <member_name>ATTR_exechost_orig</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>\n         <SVR>\n#ifdef PBS_MOM\n\t\tREAD_ONLY | ATR_DFLAG_MOM\n#else\n\t\tREAD_ONLY | ATR_DFLAG_HIDDEN\n#endif\n      </SVR>\n         <ECL>READ_ONLY</ECL>\n      </member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_exec_vnode</member_index>\n      <member_name>ATTR_execvnode</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      
<member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_exec_vnode_acct</member_index>\n      <member_name>ATTR_execvnode_acct</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_MOM | ATR_DFLAG_HIDDEN</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_exec_vnode_deallocated</member_index>\n      <member_name>ATTR_execvnode_deallocated</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_MOM | ATR_DFLAG_HIDDEN</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      
</member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_exec_vnode_orig</member_index>\n      <member_name>ATTR_execvnode_orig</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_MOM | ATR_DFLAG_SvWR | ATR_DFLAG_HIDDEN</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_exectime</member_index>\n      <member_name>ATTR_a</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>\n#ifndef PBS_MOM\n\t\tjob_set_wait\n#else\n\t\tNULL_FUNC\n#endif\n      </member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_ALTRUN</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_grouplst</member_index>\n      <member_name>ATTR_g</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_arst</member_at_set>\n      
<member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_SELEQ | ATR_DFLAG_MOM | ATR_DFLAG_SCGALT</member_at_flags>\n      <member_at_type>ATR_TYPE_ARST</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_user_list</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_hold</member_index>\n      <member_name>ATTR_h</member_name>\n      <member_at_decode>decode_hold</member_at_decode>\n      <member_at_encode>encode_hold</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_hold</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_ALTRUN | ATR_DFLAG_SELEQ</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_hold</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_interactive</member_index>\n      <member_name>ATTR_inter</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_inter</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_SvRD | ATR_DFLAG_Creat | ATR_DFLAG_SELEQ | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      
<member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_join</member_index>\n      <member_name>ATTR_j</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_SELEQ | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_joinpath</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_keep</member_index>\n      <member_name>ATTR_k</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>\n#ifdef PBS_MOM\n\t\tNULL_FUNC\n#else\n\t\tkeepfiles_action\n#endif\n      </member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_SELEQ | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_keepfiles</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_mailpnts</member_index>\n      <member_name>ATTR_m</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      
<member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_ALTRUN | ATR_DFLAG_SELEQ</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_mailpoints</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_mailuser</member_index>\n      <member_name>ATTR_M</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_arst</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_ALTRUN | ATR_DFLAG_SELEQ</member_at_flags>\n      <member_at_type>ATR_TYPE_ARST</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_mailusers</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_mtime</member_index>\n      <member_name>ATTR_mtime</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_SSET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      
<member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_nodemux</member_index>\n      <member_name>ATTR_nodemux</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM | ATR_DFLAG_SELEQ</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_outpath</member_index>\n      <member_name>ATTR_o</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_ALTRUN | ATR_DFLAG_SELEQ | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_path</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_priority</member_index>\n      <member_name>ATTR_p</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      
<member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_ALTRUN</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>verify_value_priority</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_qtime</member_index>\n      <member_name>ATTR_qtime</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_remove</member_index>\n      <member_name>ATTR_R</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>\n#ifdef PBS_MOM\n\t\tNULL_FUNC\n#else\n\t\tremovefiles_action\n#endif\n      </member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_SELEQ | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      
<member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_removefiles</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_rerunable</member_index>\n      <member_name>ATTR_r</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_ALTRUN | ATR_DFLAG_SELEQ</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_resource</member_index>\n      <member_name>ATTR_l</member_name>\n      <member_at_decode>decode_resc</member_at_decode>\n      <member_at_encode>encode_resc</member_at_encode>\n      <member_at_set>set_resc</member_at_set>\n      <member_at_comp>comp_resc</member_at_comp>\n      <member_at_free>free_resc</member_at_free>\n      <member_at_action>action_resc_job</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_ALTRUN | ATR_DFLAG_MOM | ATR_DFLAG_SCGALT</member_at_flags>\n      <member_at_type>ATR_TYPE_RESC</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_resc</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_resource_orig</member_index>\n      <member_name>ATTR_l_orig</member_name>\n      
<member_at_decode>decode_resc</member_at_decode>\n      <member_at_encode>encode_resc</member_at_encode>\n      <member_at_set>set_resc</member_at_set>\n      <member_at_comp>comp_resc</member_at_comp>\n      <member_at_free>free_resc</member_at_free>\n      <member_at_action>action_resc_job</member_at_action>\n      <member_at_flags>ATR_DFLAG_SvWR | ATR_DFLAG_ALTRUN | ATR_DFLAG_MOM | ATR_DFLAG_HIDDEN</member_at_flags>\n      <member_at_type>ATR_TYPE_RESC</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_resc</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_resource_acct</member_index>\n      <member_name>ATTR_l_acct</member_name>\n      <member_at_decode>decode_resc</member_at_decode>\n      <member_at_encode>encode_resc</member_at_encode>\n      <member_at_set>set_resc</member_at_set>\n      <member_at_comp>comp_resc</member_at_comp>\n      <member_at_free>free_resc</member_at_free>\n      <member_at_action>action_resc_job</member_at_action>\n      <member_at_flags>ATR_DFLAG_SvWR | ATR_DFLAG_ALTRUN | ATR_DFLAG_MOM | ATR_DFLAG_HIDDEN</member_at_flags>\n      <member_at_type>ATR_TYPE_RESC</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_resc</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_SchedSelect</member_index>\n      <member_name>ATTR_SchedSelect</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      
<member_at_flags>ATR_DFLAG_MGRD | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_SchedSelect_orig</member_index>\n      <member_name>ATTR_SchedSelect_orig</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_MGRD | ATR_DFLAG_MOM | ATR_DFLAG_HIDDEN</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_stime</member_index>\n      <member_name>ATTR_stime</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_obittime</member_index>\n     
 <member_name>ATTR_obittime</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_session_id</member_index>\n      <member_name>ATTR_session</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_SvWR</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_shell</member_index>\n      <member_name>ATTR_S</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_arst</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_ALTRUN | ATR_DFLAG_SELEQ | ATR_DFLAG_MOM</member_at_flags>\n      
<member_at_type>ATR_TYPE_ARST</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_shellpathlist</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_sandbox</member_index>\n      <member_name>ATTR_sandbox</member_name>\n      <member_at_decode>decode_sandbox</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_SELEQ | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_sandbox</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_jobdir</member_index>\n      <member_name>ATTR_jobdir</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_SvWR | READ_ONLY</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_stagein</member_index>\n      <member_name>ATTR_stagein</member_name>\n      
<member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_arst</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_ARST</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_stagelist</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_stageout</member_index>\n      <member_name>ATTR_stageout</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_arst</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_ARST</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_stagelist</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_substate</member_index>\n      <member_name>ATTR_substate</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_USRD | ATR_DFLAG_OPRD | ATR_DFLAG_MGRD | ATR_DFLAG_SvWR</member_at_flags>\n      
<member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_userlst</member_index>\n      <member_name>ATTR_u</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>\n#ifndef PBS_MOM\n\t\tset_uacl\n#else\n\t\tset_arst\n#endif\n      </member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_SELEQ | ATR_DFLAG_SCGALT</member_at_flags>\n      <member_at_type>ATR_TYPE_ARST</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_user_list</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_variables</member_index>\n      <member_name>ATTR_v</member_name>\n      <member_at_decode>decode_arst_bs</member_at_decode>\n      <member_at_encode>encode_arst_bs</member_at_encode>\n      <member_at_set>set_arst</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_SELEQ | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_ARST</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      
<member_index>JOB_ATR_euser</member_index>\n      <member_name>ATTR_euser</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_MGRD | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_egroup</member_index>\n      <member_name>ATTR_egroup</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_MGRD | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_hashname</member_index>\n      <member_name>ATTR_hashname</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      
<member_at_flags>ATR_DFLAG_MGRD | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_hopcount</member_index>\n      <member_name>ATTR_hopcount</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_SSET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_qrank</member_index>\n      <member_name>ATTR_qrank</member_name>\n      <member_at_decode>decode_ll</member_at_decode>\n      <member_at_encode>encode_ll</member_at_encode>\n      <member_at_set>set_ll</member_at_set>\n      <member_at_comp>comp_ll</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_MGRD</member_at_flags>\n      <member_at_type>ATR_TYPE_LL</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_queuetype</member_index>\n      <member_name>ATTR_qtype</member_name>\n      
<member_at_decode>decode_c</member_at_decode>\n      <member_at_encode>encode_c</member_at_encode>\n      <member_at_set>set_c</member_at_set>\n      <member_at_comp>comp_c</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_MGRD | ATR_DFLAG_SELEQ</member_at_flags>\n      <member_at_type>ATR_TYPE_CHAR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_sched_hint</member_index>\n      <member_name>ATTR_sched_hint</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_MGRD | ATR_DFLAG_MGWR</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_security</member_index>\n      <member_name>ATTR_security</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_SSET</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      
<member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_Comment</member_index>\n      <member_name>ATTR_comment</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>NO_USER_SET | ATR_DFLAG_SvWR | ATR_DFLAG_ALTRUN | ATR_DFLAG_NOSAVM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_Cookie</member_index>\n      <member_name>ATTR_cookie</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_SvRD | ATR_DFLAG_SvWR | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_altid</member_index>\n      <member_name>ATTR_altid</member_name>\n      
<member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_SvWR</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_altid2</member_index>\n      <member_name>ATTR_altid2</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_SvWR</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_etime</member_index>\n      <member_name>ATTR_etime</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_SSET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      
<member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_reserve_ID</member_index>\n      <member_name>ATTR_resv_ID</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_Creat | ATR_DFLAG_SvWR | READ_ONLY</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_refresh</member_index>\n      <member_name>ATTR_refresh</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_SvRD | ATR_DFLAG_SvWR | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_gridname</member_index>\n      <member_name>ATTR_gridname</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      
<member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_SELEQ | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_umask</member_index>\n      <member_name>ATTR_umask</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_block</member_index>\n      <member_name>ATTR_block</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_inter</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_SvRD | ATR_DFLAG_Creat | ATR_DFLAG_SELEQ</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      
<member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_cred</member_index>\n      <member_name>ATTR_cred</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_SvRD | ATR_DFLAG_Creat | ATR_DFLAG_SELEQ</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_runcount</member_index>\n      <member_name>ATTR_runcount</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_ALTRUN | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>verify_value_zero_or_positive</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_acct_id</member_index>\n      <member_name>ATTR_acct_id</member_name>\n      
<member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_SvWR | READ_ONLY</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_eligible_time</member_index>\n      <member_name>ATTR_eligible_time</member_name>\n      <member_at_decode>decode_time</member_at_decode>\n      <member_at_encode>encode_time</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>\n#ifndef PBS_MOM\n\t\talter_eligibletime\n#else\n\t\tNULL_FUNC\n#endif\n      </member_at_action>\n      <member_at_flags>NO_USER_SET | ATR_DFLAG_SSET | ATR_DFLAG_ALTRUN</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_time</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_accrue_type</member_index>\n      <member_name>ATTR_accrue_type</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_none</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_MGRD 
| ATR_DFLAG_ALTRUN | ATR_DFLAG_SvWR</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_sample_starttime</member_index>\n      <member_name>ATTR_sample_starttime</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_SvWR</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_job_kill_delay</member_index>\n      <member_name>ATTR_job_kill_delay</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_stageout_status</member_index>\n      
<member_name>ATTR_stageout_status</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_SvWR | READ_ONLY</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_exit_status</member_index>\n      <member_name>ATTR_exit_status</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_SvWR | READ_ONLY</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_submit_arguments</member_index>\n      <member_name>ATTR_submit_arguments</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_SvWR | READ_WRITE</member_at_flags>\n      
<member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <!--\n       Attributes to store command and its argument list specified\n       in qsub command-line options list.\n    -->\n   <attributes>\n      <member_index>JOB_ATR_executable</member_index>\n      <member_name>ATTR_executable</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_SvWR | READ_WRITE | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_Arglist</member_index>\n      <member_name>ATTR_Arglist</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_SvWR | READ_WRITE | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   
</attributes>\n   <attributes>\n      <member_index>JOB_ATR_prov_vnode</member_index>\n      <member_name>ATTR_prov_vnode</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_SvRD | ATR_DFLAG_SvWR</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <!--\n\tThe following array-job-related attributes must remain at the end of the\n\tjob attributes; see status_subjob() to understand why. When statusing\n\ta subjob, these attributes are not copied from the parent array job,\n\tand keeping them at the end makes that possible.\n    -->\n   <attributes>\n      <member_index>JOB_ATR_array</member_index>\n      <member_name>ATTR_array</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_SvWR | ATR_DFLAG_Creat | READ_ONLY | ATR_DFLAG_NOSAVM</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_array_id</member_index>\n      
<member_name>ATTR_array_id</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_array_index</member_index>\n      <member_name>ATTR_array_index</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_MOM | READ_ONLY</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_array_state_count</member_index>\n      <member_name>ATTR_array_state_count</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_SvWR | READ_ONLY | 
ATR_DFLAG_NOSAVM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_array_indices_submitted</member_index>\n      <member_name>ATTR_array_indices_submitted</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>\n#ifndef PBS_MOM\n\t\tsetup_arrayjob_attrs\n#else\n\t\tNULL_FUNC\n#endif\n      </member_at_action>\n      <member_at_flags>ATR_DFLAG_SvWR | ATR_DFLAG_SvRD | ATR_DFLAG_Creat | READ_ONLY</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_jrange</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_array_indices_remaining</member_index>\n      <member_name>ATTR_array_indices_remaining</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>\n#ifndef PBS_MOM\n\t\tfixup_arrayindicies\n#else\n\t\tNULL_FUNC\n#endif\n      </member_at_action>\n      <member_at_flags>ATR_DFLAG_SvWR | ATR_DFLAG_SvRD | READ_ONLY</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n   
      <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_estimated</member_index>\n      <member_name>ATTR_estimated</member_name>\n      <member_at_decode>decode_resc</member_at_decode>\n      <member_at_encode>encode_resc</member_at_encode>\n      <member_at_set>set_resc</member_at_set>\n      <member_at_comp>comp_resc</member_at_comp>\n      <member_at_free>free_resc</member_at_free>\n      <member_at_action>action_resc_job</member_at_action>\n      <member_at_flags>MGR_ONLY_SET | ATR_DFLAG_ALTRUN</member_at_flags>\n      <member_at_type>ATR_TYPE_RESC</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_resc</ECL>\n      </member_verify_function>\n   </attributes>\n   <!--\n\tThe following is not really a job attribute, but it is used by\n\tselectjobs, so it is listed here at the end.\n    -->\n   <attributes flag=\"ECL\">\n      <member_name>\n         <ECL>ATTR_q</ECL>\n      </member_name>\n      <member_at_flags>ATR_DFLAG_SvWR|ATR_DFLAG_MGWR</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_node_set</member_index>\n      <member_name>ATTR_node_set</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_arst</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_SvWR|ATR_DFLAG_MGWR</member_at_flags>\n      
<member_at_type>ATR_TYPE_ARST</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_history_timestamp</member_index>\n      <member_name>ATTR_history_timestamp</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY|ATR_DFLAG_SvWR</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_project</member_index>\n      <member_name>ATTR_project</member_name>\n      <member_at_decode>decode_project</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_SELEQ | ATR_DFLAG_MOM | ATR_DFLAG_SCGALT</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_X11_cookie</member_index>\n      
<member_name>ATTR_X11_cookie</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_USWR | ATR_DFLAG_MGRD | ATR_DFLAG_SELEQ | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_X11_port</member_index>\n      <member_name>ATTR_X11_port</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_inter</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_SvRD | ATR_DFLAG_Creat | ATR_DFLAG_SELEQ | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_sched_preempted</member_index>\n      <member_name>ATTR_sched_preempted</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      
<member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_SvWR | ATR_DFLAG_MGWR | ATR_DFLAG_ALTRUN</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>verify_value_zero_or_positive</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_run_version</member_index>\n      <member_name>ATTR_run_version</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_MGRD | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_GUI</member_index>\n      <member_name>ATTR_GUI</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_SvRD | ATR_DFLAG_Creat | ATR_DFLAG_SELEQ | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      
</member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_topjob</member_index>\n      <member_name>ATTR_topjob</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>PRIV_READ | ATR_DFLAG_SvWR | ATR_DFLAG_ALTRUN</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_topjob_ineligible</member_index>\n      <member_name>ATTR_topjob_ineligible</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_MGRD | ATR_DFLAG_MGWR | ATR_DFLAG_ALTRUN</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_resc_released</member_index>\n      <member_name>ATTR_released</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      
<member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>PRIV_READ | ATR_DFLAG_SvWR | ATR_DFLAG_ALTRUN</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_resc_released_list</member_index>\n      <member_name>ATTR_rel_list</member_name>\n      <member_at_decode>decode_resc</member_at_decode>\n      <member_at_encode>encode_resc</member_at_encode>\n      <member_at_set>set_resc</member_at_set>\n      <member_at_comp>comp_resc</member_at_comp>\n      <member_at_free>free_resc</member_at_free>\n      <member_at_action>action_resc_job</member_at_action>\n      <member_at_flags>PRIV_READ</member_at_flags>\n      <member_at_type>ATR_TYPE_RESC</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_resc</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_relnodes_on_stageout</member_index>\n      <member_name>ATTR_relnodes_on_stageout</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_SvRD | ATR_DFLAG_Creat | ATR_DFLAG_ALTRUN | ATR_DFLAG_SELEQ | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         
<ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_tolerate_node_failures</member_index>\n      <member_name>ATTR_tolerate_node_failures</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_SvRD | ATR_DFLAG_Creat | ATR_DFLAG_ALTRUN | ATR_DFLAG_SELEQ | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_tolerate_node_failures</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_submit_host</member_index>\n      <member_name>ATTR_submit_host</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM | ATR_DFLAG_SvRD | ATR_DFLAG_SvWR</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_cred_id</member_index>\n      <member_name>ATTR_cred_id</member_name>\n      
<member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM | ATR_DFLAG_SvRD | ATR_DFLAG_SvWR</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_cred_validity</member_index>\n      <member_name>ATTR_cred_validity</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY|ATR_DFLAG_SvWR</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_create_resv_from_job</member_index>\n      <member_name>ATTR_create_resv_from_job</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE</member_at_flags>\n      
<member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>JOB_ATR_max_run_subjobs</member_index>\n      <member_name>ATTR_max_run_subjobs</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>\n#ifdef PBS_MOM\n\t  NULL_FUNC\n#else\n\t  action_max_run_subjobs\n#endif\n      </member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_ALTRUN</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>verify_value_zero_or_positive</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes flag=\"SVR\">\n      #include \"site_job_attr_def.h\"\n      /* THIS MUST BE THE LAST ENTRY */\n      <member_name>\"_other_\"</member_name>\n      <member_at_decode>decode_unkn</member_at_decode>\n      <member_at_encode>encode_unkn</member_at_encode>\n      <member_at_set>set_unkn</member_at_set>\n      <member_at_comp>comp_unkn</member_at_comp>\n      <member_at_free>free_unkn</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_SELEQ</member_at_flags>\n      <member_at_type>ATR_TYPE_LIST</member_at_type>\n      <member_at_parent>PARENT_TYPE_JOB</member_at_parent>\n   </attributes>\n   <tail>\n      <SVR>};</SVR>\n      <ECL>};\nint ecl_job_attr_size = sizeof(ecl_job_attr_def) / sizeof(ecl_attribute_def);</ECL>\n   </tail>\n</data>\n"
  },
  {
    "path": "src/lib/Libattr/master_node_attr_def.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<data>\n   <!--\n/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n    NOTE (Server File)\n    node_attr_def is the array of attribute definitions for nodes.\n    Each legal node attribute is defined here.\n    The entries for each attribute are (see attribute.h):\n\tname,\n\tdecode function,\n\tencode function,\n\tset function,\n\tcompare function,\n\tfree value space function,\n\taction function,\n\taccess permission flags,\n\tvalue type,\n\tparent object type\n\n    NOTE (ECL File)\n    The entries for each attribute are (see attribute.h):\n\tname,\n\ttype,\n\tflag,\n\tverify datatype function,\n\tverify value function\n    -->\n   <head>\n   <SVR>\n   #include &lt;pbs_config.h&gt;\n\t#include &lt;sys/types.h&gt;\n\t#include &lt;stdlib.h&gt;\n\t#include &lt;ctype.h&gt;\n\t#include \"server_limits.h\"\n\t#include \"pbs_ifl.h\"\n\t#include &lt;string.h&gt;\n\t#include \"list_link.h\"\n\t#include \"attribute.h\"\n\t#include \"resource.h\"\n\t#include \"pbs_error.h\"\n\t#include \"pbs_nodes.h\"\n\n\tattribute_def node_attr_def[] = {\n   </SVR>\n   <ECL>#include &lt;pbs_config.h&gt;\n\t#include &lt;sys/types.h&gt;\n\t#include \"pbs_ifl.h\"\n\t#include \"pbs_ecl.h\"\n\n\tecl_attribute_def ecl_node_attr_def[] = {\n   </ECL>\n   </head>\n   <attributes>\n      <member_index>ND_ATR_Mom</member_index>\n      <member_name>ATTR_NODE_Mom</member_name>\n      <member_at_decode>\n#ifndef PBS_MOM\n      
decode_Mom_list\n#else\n      decode_null\n#endif\n      </member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_arst_uniq</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>\n#ifndef PBS_MOM\n      set_node_host_name\n#else\n      NULL_FUNC\n#endif\n      </member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_ARST</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_Port</member_index>\n      <member_name>ATTR_NODE_Port</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>\n#ifndef PBS_MOM\n      set_node_mom_port\n#else\n      NULL_FUNC\n#endif\n      </member_at_action>\n      <member_at_flags>ATR_DFLAG_OPRD | ATR_DFLAG_MGRD | ATR_DFLAG_OPWR | ATR_DFLAG_MGWR</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_version</member_index>\n      <member_name>ATTR_version</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      
<member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_OPRD | ATR_DFLAG_MGRD | ATR_DFLAG_SvWR</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_ntype</member_index>\n      <member_name>ATTR_NODE_ntype</member_name>\n      <member_at_decode>decode_ntype</member_at_decode>\n      <member_at_encode>encode_ntype</member_at_encode>\n      <member_at_set>set_node_ntype</member_at_set>\n      <member_at_comp>comp_null</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>node_ntype</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_NOSAVM</member_at_flags>\n      <member_at_type>ATR_TYPE_SHORT</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_state</member_index>\n      <member_name>ATTR_NODE_state</member_name>\n      <member_at_decode>decode_state</member_at_decode>\n      <member_at_encode>encode_state</member_at_encode>\n      <member_at_set>set_node_state</member_at_set>\n      <member_at_comp>comp_null</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>\n#ifndef PBS_MOM\n#ifndef PBS_PYTHON\n      node_state\n#else\n      NULL_FUNC\n#endif\n#else\n      NULL_FUNC\n#endif\n      </member_at_action>\n      <member_at_flags>NO_USER_SET | ATR_DFLAG_NOSAVM</member_at_flags>\n      <member_at_type>ATR_TYPE_SHORT</member_at_type>\n      
<member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_pcpus</member_index>\n      <member_name>ATTR_NODE_pcpus</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>\n#ifndef PBS_MOM\n      node_pcpu_action\n#else\n      NULL_FUNC\n#endif\n      </member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_SvWR</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_priority</member_index>\n      <member_name>ATTR_p</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>verify_value_priority</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_jobs</member_index>\n      <member_name>ATTR_NODE_jobs</member_name>\n      <member_at_decode>decode_null</member_at_decode>\n      
<member_at_encode>encode_jobs</member_at_encode>\n      <member_at_set>set_null</member_at_set>\n      <member_at_comp>comp_null</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_RDACC | ATR_DFLAG_NOSAVM</member_at_flags>\n      <member_at_type>ATR_TYPE_JINFOP</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_MaxRun</member_index>\n      <member_name>ATTR_maxrun</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_MaxUserRun</member_index>\n      <member_name>ATTR_maxuserrun</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n      
   <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_MaxGrpRun</member_index>\n      <member_name>ATTR_maxgrprun</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_No_Tasks</member_index>\n      <member_name>ATTR_NODE_No_Tasks</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_PNames</member_index>\n      <member_name>ATTR_PNames</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_arst</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      
<member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_ARST</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_resvs</member_index>\n      <member_name>ATTR_NODE_resvs</member_name>\n      <member_at_decode>decode_null</member_at_decode>\n      <member_at_encode>encode_resvs</member_at_encode>\n      <member_at_set>set_null</member_at_set>\n      <member_at_comp>comp_null</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_RDACC | ATR_DFLAG_NOSAVM</member_at_flags>\n      <member_at_type>ATR_TYPE_JINFOP</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_ResourceAvail</member_index>\n      <member_name>ATTR_rescavail</member_name>\n      <member_at_decode>decode_resc</member_at_decode>\n      <member_at_encode>encode_resc</member_at_encode>\n      <member_at_set>set_resc</member_at_set>\n      <member_at_comp>comp_resc</member_at_comp>\n      <member_at_free>free_resc</member_at_free>\n      <member_at_action>\n#ifndef PBS_MOM\n      node_np_action\n#else\n      NULL_FUNC\n#endif\n      </member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_RESC</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n      
   <ECL>verify_value_resc</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_ResourceAssn</member_index>\n      <member_name>ATTR_rescassn</member_name>\n      <member_at_decode>decode_resc</member_at_decode>\n      <member_at_encode>encode_resc</member_at_encode>\n      <member_at_set>set_resc</member_at_set>\n      <member_at_comp>comp_resc</member_at_comp>\n      <member_at_free>free_resc</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_NOSAVM</member_at_flags>\n      <member_at_type>ATR_TYPE_RESC</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_Queue</member_index>\n      <member_name>ATTR_queue</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>\n#ifndef PBS_MOM\n      node_queue_action\n#else\n      NULL_FUNC\n#endif\n      </member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_Comment</member_index>\n      <member_name>ATTR_comment</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      
<member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>\n#ifndef PBS_MOM\n      node_comment\n#else\n      NULL_FUNC\n#endif\n      </member_at_action>\n      <member_at_flags>NO_USER_SET | ATR_DFLAG_NOSAVM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_ResvEnable</member_index>\n      <member_name>ATTR_NODE_resv_enable</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET | ATR_DFLAG_SSET</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_NoMultiNode</member_index>\n      <member_name>ATTR_NODE_NoMultiNode</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         
<ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_Sharing</member_index>\n      <member_name>ATTR_NODE_Sharing</member_name>\n      <member_at_decode>decode_sharing</member_at_decode>\n      <member_at_encode>encode_sharing</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_SSET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_ProvisionEnable</member_index>\n      <member_name>ATTR_NODE_ProvisionEnable</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>\n#ifndef PBS_MOM\n      node_prov_enable_action\n#else\n      NULL_FUNC\n#endif\n      </member_at_action>\n      <member_at_flags>MGR_ONLY_SET | ATR_DFLAG_SSET</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_current_aoe</member_index>\n      <member_name>ATTR_NODE_current_aoe</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      
<member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>\n#ifndef PBS_MOM\n      node_current_aoe_action\n#else\n      NULL_FUNC\n#endif\n      </member_at_action>\n      <member_at_flags>MGR_ONLY_SET | ATR_DFLAG_SSET</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_in_multivnode_host</member_index>\n      <member_name>ATTR_NODE_in_multivnode_host</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_MGRD | ATR_DFLAG_MGWR</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_MaintJobs</member_index>\n      <member_name>ATTR_NODE_MaintJobs</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_arst</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_SvWR | ATR_DFLAG_MGRD</member_at_flags>\n  
    <member_at_type>ATR_TYPE_ARST</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes flag=\"SVR\">\n      <member_index>ND_ATR_License</member_index>\n      <member_name>ATTR_NODE_License</member_name>\n      <member_at_decode>decode_c</member_at_decode>\n      <member_at_encode>encode_c</member_at_encode>\n      <member_at_set>set_c</member_at_set>\n      <member_at_comp>comp_c</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_SSET</member_at_flags>\n      <member_at_type>ATR_TYPE_CHAR</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n   </attributes>\n   <attributes flag=\"SVR\">\n      <member_index>ND_ATR_LicenseInfo</member_index>\n      <member_name>ATTR_NODE_LicenseInfo</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_SSET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n   </attributes>\n   <attributes flag=\"SVR\">\n      <member_index>ND_ATR_TopologyInfo</member_index>\n      <member_name>ATTR_NODE_TopologyInfo</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>\n#ifndef PBS_MOM\n     
 set_node_topology\n#else\n      NULL_FUNC\n#endif\n      </member_at_action>\n      <member_at_flags>ATR_DFLAG_SSET | ATR_DFLAG_NOSAVM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_vnode_pool</member_index>\n      <member_name>ATTR_NODE_VnodePool</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>\n#ifndef PBS_MOM\n      chk_vnode_pool\n#else\n      NULL_FUNC\n#endif\n      </member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_Power_Provisioning</member_index>\n      <member_name>ATTR_NODE_power_provisioning</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>\n#ifndef PBS_MOM\n      node_prov_enable_action\n#else\n      NULL_FUNC\n#endif\n      </member_at_action>\n      <member_at_flags>MGR_ONLY_SET | ATR_DFLAG_SSET</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   
</attributes>\n   <attributes>\n      <member_index>ND_ATR_current_eoe</member_index>\n      <member_name>ATTR_NODE_current_eoe</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET | ATR_DFLAG_SSET</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_partition</member_index>\n      <member_name>ATTR_partition</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>\n#ifndef PBS_MOM\n      action_node_partition\n#else\n      NULL_FUNC\n#endif\n      </member_at_action>\n      <member_at_flags>ATR_DFLAG_OPRD | ATR_DFLAG_MGRD | ATR_DFLAG_OPWR | ATR_DFLAG_MGWR | ATR_DFLAG_USRD</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_poweroff_eligible</member_index>\n      <member_name>ATTR_NODE_poweroff_eligible</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      
<member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET | ATR_DFLAG_SSET</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_last_state_change_time</member_index>\n      <member_name>ATTR_NODE_last_state_change_time</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_SSET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>ND_ATR_last_used_time</member_index>\n      <member_name>ATTR_NODE_last_used_time</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_SSET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_NODE</member_at_parent>\n      
<member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <tail>\n   <SVR>};</SVR>\n   <ECL>};\n\tint ecl_node_attr_size = sizeof(ecl_node_attr_def)/sizeof(ecl_attribute_def);</ECL>\n   </tail>\n</data>\n"
  },
  {
    "path": "src/lib/Libattr/master_queue_attr_def.xml",
    "content": "<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<data>\n\t<!--\n/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n\tNOTE (Server File)\n\n\tque_attr_def is the array of attribute definitions for queues.\n    Each legal queue attribute is defined here.\n    The entries for each attribute are (see attribute.h):\n       name,\n       decode function,\n       encode function,\n       set function,\n       compare function,\n       free value space function,\n       action function,\n       access permission flags,\n       value type,\n       parent object type\n\n\t NOTE (ECL File)\n\n\t The entries for each attribute are (see attribute.h):\n       name,\n       type,\n       flag,\n       verify datatype function,\n       verify value function\n\t-->\n    <head>\n      <SVR>\n      #include &lt;pbs_config.h&gt;\n      #include &lt;sys/types.h&gt;\n      #include \"pbs_ifl.h\"\n      #include \"list_link.h\"\n      #include \"attribute.h\"\n      #include \"pbs_nodes.h\"\n      #include \"svrfunc.h\"\n\n      attribute_def que_attr_def[] = {\n      </SVR>\n      <ECL>\n      #include &lt;pbs_config.h&gt;\n      #include &lt;sys/types.h&gt;\n      #include \"pbs_ifl.h\"\n      #include \"pbs_ecl.h\"\n\n      ecl_attribute_def ecl_que_attr_def[] = {\n      </ECL>\n    </head>\n    <attributes>\n\t<member_index>QA_ATR_QType</member_index>\n\t<member_name>ATTR_qtype</member_name>\t\t<!-- \"queue_type\" - type of queue 
-->\n\t<member_at_decode>decode_str</member_at_decode>\n\t<member_at_encode>encode_str</member_at_encode>\n\t<member_at_set>set_str</member_at_set>\n\t<member_at_comp>comp_str</member_at_comp>\n\t<member_at_free>free_str</member_at_free>\n\t<member_at_action>set_queue_type</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_STR</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_ALL</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>verify_value_queue_type</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QA_ATR_Priority</member_index>\n\t<member_name>ATTR_p</member_name>\t\t\t<!-- \"priority\" -->\n\t<member_at_decode>decode_l</member_at_decode>\n\t<member_at_encode>encode_l</member_at_encode>\n\t<member_at_set>set_l</member_at_set>\n\t<member_at_comp>comp_l</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_LONG</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_ALL</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_long </ECL>\n\t<ECL>verify_value_priority</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QA_ATR_MaxJobs</member_index>\n\t<member_name>ATTR_maxque</member_name>\t\t<!-- \"max_queuable\" -->\n\t<member_at_decode>decode_l</member_at_decode>\n\t<member_at_encode>encode_l</member_at_encode>\n\t<member_at_set>set_l</member_at_set>\n\t<member_at_comp>comp_l</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>check_no_entlim</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_LONG</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_ALL</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_long 
</ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QA_ATR_TotalJobs</member_index>\n\t<member_name>ATTR_total</member_name>\t\t<!-- \"total_jobs\" -->\n\t<member_at_decode>decode_null</member_at_decode>\n\t<member_at_encode>encode_l</member_at_encode>\n\t<member_at_set>set_null</member_at_set>\n\t<member_at_comp>comp_l</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>READ_ONLY</member_at_flags>\n\t<member_at_type>ATR_TYPE_LONG</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_ALL</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QA_ATR_JobsByState</member_index>\n\t<member_name>ATTR_count</member_name>\t\t<!-- \"state_count\" -->\n\t<member_at_decode>decode_str</member_at_decode>\n\t<member_at_encode>encode_str</member_at_encode>\n\t<member_at_set>set_null</member_at_set>\n\t<member_at_comp>comp_str</member_at_comp>\n\t<member_at_free>free_str</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>READ_ONLY</member_at_flags>\n\t<member_at_type>ATR_TYPE_STR</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_ALL</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QA_ATR_MaxRun</member_index>\n\t<member_name>ATTR_maxrun</member_name>\t\t<!-- \"max_running\" 
-->\n\t<member_at_decode>decode_l</member_at_decode>\n\t<member_at_encode>encode_l</member_at_encode>\n\t<member_at_set>set_l</member_at_set>\n\t<member_at_comp>comp_l</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>check_no_entlim</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_LONG</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_ALL</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_long </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QA_ATR_max_queued</member_index>\n\t<member_name>ATTR_max_queued</member_name>\t<!-- \"max_queued\" -->\n\t<member_at_decode>decode_entlim</member_at_decode>\n\t<member_at_encode>encode_entlim</member_at_encode>\n\t<member_at_set>set_entlim</member_at_set>\n\t<member_at_comp>comp_str</member_at_comp>\n\t<member_at_free>free_entlim</member_at_free>\n\t<member_at_action>action_entlim_ct</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_ENTITY</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_ALL</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QA_ATR_max_queued_res</member_index>\n\t<member_name>ATTR_max_queued_res</member_name>\t<!-- \"max_queued_res\" 
-->\n\t<member_at_decode>decode_entlim_res</member_at_decode>\n\t<member_at_encode>encode_entlim</member_at_encode>\n\t<member_at_set>set_entlim_res</member_at_set>\n\t<member_at_comp>comp_str</member_at_comp>\n\t<member_at_free>free_entlim</member_at_free>\n\t<member_at_action>action_entlim_res</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_ENTITY</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_ALL</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QA_ATR_AclHostEnabled</member_index>\n\t<member_name>ATTR_aclhten</member_name>\t\t <!-- \"acl_host_enable\" -->\n\t<member_at_decode>decode_b</member_at_decode>\n\t<member_at_encode>encode_b</member_at_encode>\n\t<member_at_set>set_b</member_at_set>\n\t<member_at_comp>comp_b</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_BOOL</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_ALL</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_bool </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QA_ATR_AclHost</member_index>\n\t<member_name>ATTR_aclhost</member_name>\t\t<!-- \"acl_hosts\" 
-->\n\t<member_at_decode>decode_arst</member_at_decode>\n\t<member_at_encode>encode_arst</member_at_encode>\n\t<member_at_set>set_hostacl</member_at_set>\n\t<member_at_comp>comp_arst</member_at_comp>\n\t<member_at_free>free_arst</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_ACL</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_ALL</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QA_ATR_AclUserEnabled</member_index>\n\t<member_name>ATTR_acluren</member_name>\t\t<!-- \"acl_user_enable\" -->\n\t<member_at_decode>decode_b</member_at_decode>\n\t<member_at_encode>encode_b</member_at_encode>\n\t<member_at_set>set_b</member_at_set>\n\t<member_at_comp>comp_b</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_BOOL</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_ALL</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_bool </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QA_ATR_AclUsers</member_index>\n\t<member_name>ATTR_acluser</member_name>\t\t<!-- \"acl_users\" 
-->\n\t<member_at_decode>decode_arst</member_at_decode>\n\t<member_at_encode>encode_arst</member_at_encode>\n\t<member_at_set>set_uacl</member_at_set>\n\t<member_at_comp>comp_arst</member_at_comp>\n\t<member_at_free>free_arst</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_ACL</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_ALL</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QA_ATR_FromRouteOnly</member_index>\n\t<member_name>ATTR_fromroute</member_name>\t\t <!-- \"from_route_only\" -->\n\t<member_at_decode>decode_b</member_at_decode>\n\t<member_at_encode>encode_b</member_at_encode>\n\t<member_at_set>set_b</member_at_set>\n\t<member_at_comp>comp_b</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>MGR_ONLY_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_BOOL</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_ALL</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_bool</ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QA_ATR_ResourceMax</member_index>\n\t<member_name>ATTR_rescmax</member_name>\t\t<!-- \"resources_max\" 
-->\n\t<member_at_decode>decode_resc</member_at_decode>\n\t<member_at_encode>encode_resc</member_at_encode>\n\t<member_at_set>set_resources_min_max</member_at_set>\n\t<member_at_comp>comp_resc</member_at_comp>\n\t<member_at_free>free_resc</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_RESC</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_ALL</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>verify_value_resc</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QA_ATR_ResourceMin</member_index>\n\t<member_name>ATTR_rescmin</member_name>\t\t <!-- \"resources_min\" -->\n\t<member_at_decode>decode_resc</member_at_decode>\n\t<member_at_encode>encode_resc</member_at_encode>\n\t<member_at_set>set_resources_min_max</member_at_set>\n\t<member_at_comp>comp_resc</member_at_comp>\n\t<member_at_free>free_resc</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_RESC</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_ALL</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>verify_value_resc</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QA_ATR_ResourceDefault</member_index>\n\t<member_name>ATTR_rescdflt</member_name>\t\t <!-- \"resources_default\" 
-->\n\t<member_at_decode>decode_resc</member_at_decode>\n\t<member_at_encode>encode_resc</member_at_encode>\n\t<member_at_set>set_resc</member_at_set>\n\t<member_at_comp>comp_resc</member_at_comp>\n\t<member_at_free>free_resc</member_at_free>\n\t<member_at_action>action_resc_dflt_queue</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_RESC</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_ALL</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>verify_value_resc</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QA_ATR_ReqCredEnable</member_index>\n\t<member_name>ATTR_ReqCredEnable</member_name>\t<!-- \"require_cred_enable\" -->\n\t<member_at_decode>decode_b</member_at_decode>\n\t<member_at_encode>encode_b</member_at_encode>\n\t<member_at_set>set_b</member_at_set>\n\t<member_at_comp>comp_b</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>MGR_ONLY_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_BOOL</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_ALL</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_bool </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QA_ATR_ReqCred</member_index>\n\t<member_name>ATTR_ReqCred</member_name>\t\t <!-- \"require_cred\" 
-->\n\t<member_at_decode>decode_str</member_at_decode>\n\t<member_at_encode>encode_str</member_at_encode>\n\t<member_at_set>set_str</member_at_set>\n\t<member_at_comp>comp_str</member_at_comp>\n\t<member_at_free>free_str</member_at_free>\n\t<member_at_action>cred_name_okay</member_at_action>\n\t<member_at_flags>MGR_ONLY_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_STR</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_ALL</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>verify_value_credname</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QA_ATR_maxarraysize</member_index>\n\t<member_name>ATTR_maxarraysize</member_name>\t <!-- max_array_size -->\n\t<member_at_decode>decode_l</member_at_decode>\n\t<member_at_encode>encode_l</member_at_encode>\n\t<member_at_set>set_l</member_at_set>\n\t<member_at_comp>comp_l</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_LONG</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_ALL</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_long </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QA_ATR_Comment</member_index>\n\t<member_name>ATTR_comment</member_name>\t\t\t<!-- \"comment\" -->\n\t<member_at_decode>decode_str</member_at_decode>\n\t<member_at_encode>encode_str</member_at_encode>\n\t<member_at_set>set_str</member_at_set>\n\t<member_at_comp>comp_str</member_at_comp>\n\t<member_at_free>free_str</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>NO_USER_SET | ATR_DFLAG_NOSAVM</member_at_flags>\n\t<member_at_type>ATR_TYPE_STR</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_ALL</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC 
</ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QE_ATR_AclGroupEnabled</member_index>\n\t<member_name>ATTR_aclgren</member_name>\t\t <!-- \"acl_group_enable\" -->\n\t<member_at_decode>decode_b</member_at_decode>\n\t<member_at_encode>encode_b</member_at_encode>\n\t<member_at_set>set_b</member_at_set>\n\t<member_at_comp>comp_b</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_BOOL</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_ALL</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_bool </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QE_ATR_AclGroup</member_index>\n\t<member_name>ATTR_aclgroup</member_name>\t\t <!-- \"acl_groups\" -->\n\t<member_at_decode>decode_arst</member_at_decode>\n\t<member_at_encode>encode_arst</member_at_encode>\n\t<member_at_set>set_gacl</member_at_set>\n\t<member_at_comp>comp_arst</member_at_comp>\n\t<member_at_free>free_arst</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_ACL</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_ALL</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QE_ATR_ChkptMin</member_index>\n\t<member_name>ATTR_chkptmin</member_name>\t\t<!-- \"checkpoint_min\" 
-->\n\t<member_at_decode>decode_l</member_at_decode>\n\t<member_at_encode>encode_l</member_at_encode>\n\t<member_at_set>set_l</member_at_set>\n\t<member_at_comp>comp_l</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_LONG</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_EXC</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_long </ECL>\n    <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QE_ATR_RendezvousRetry</member_index>\n\t<member_name>\"rendezvous_retry\"</member_name>\n\t<member_at_decode>decode_l</member_at_decode>\n\t<member_at_encode>encode_l</member_at_encode>\n\t<member_at_set>set_l</member_at_set>\n\t<member_at_comp>comp_l</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_LONG</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_EXC</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_long </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QE_ATR_ReservedExpedite</member_index>\n\t<member_name>\"reserved_expedite\"</member_name>\n\t<member_at_decode>decode_l</member_at_decode>\n\t<member_at_encode>encode_l</member_at_encode>\n\t<member_at_set>set_l</member_at_set>\n\t<member_at_comp>comp_l</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_LONG</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_EXC</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_long </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    
</attributes>\n    <attributes>\n\t<member_index>QE_ATR_ReservedSync</member_index>\n\t<member_name>\"reserved_sync\"</member_name>\n\t<member_at_decode>decode_l</member_at_decode>\n\t<member_at_encode>encode_l</member_at_encode>\n\t<member_at_set>set_l</member_at_set>\n\t<member_at_comp>comp_l</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_LONG</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_EXC</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_long </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QE_ATR_DefaultChunk</member_index>\n\t<member_name>ATTR_DefaultChunk</member_name>\t<!-- default_chunk -->\n\t<member_at_decode>decode_resc</member_at_decode>\n\t<member_at_encode>encode_resc</member_at_encode>\n\t<member_at_set>set_resc</member_at_set>\n\t<member_at_comp>comp_resc</member_at_comp>\n\t<member_at_free>free_resc</member_at_free>\n\t<member_at_action>deflt_chunk_action</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_RESC</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_EXC</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>verify_value_resc</ECL>\n\t</member_verify_function>\n    </attributes>\n    
<attributes>\n\t<member_index>QE_ATR_ResourceAvail</member_index>\n\t<member_name>\"resources_available\"</member_name>\n\t<member_at_decode>decode_resc</member_at_decode>\n\t<member_at_encode>encode_resc</member_at_encode>\n\t<member_at_set>set_resc</member_at_set>\n\t<member_at_comp>comp_resc</member_at_comp>\n\t<member_at_free>free_resc</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_RESC</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_EXC</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>verify_value_resc</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QE_ATR_ResourceAssn</member_index>\n\t<member_name>ATTR_rescassn</member_name>\t\t<!-- \"resources_assigned\" -->\n\t<member_at_decode>decode_resc</member_at_decode>\n\t<member_at_encode>encode_resc</member_at_encode>\n\t<member_at_set>set_resc</member_at_set>\n\t<member_at_comp>comp_resc</member_at_comp>\n\t<member_at_free>free_resc</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>READ_ONLY | ATR_DFLAG_NOSAVM</member_at_flags>\n\t<member_at_type>ATR_TYPE_RESC</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_EXC</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QE_ATR_KillDelay</member_index>\n\t<member_name>ATTR_killdelay</member_name>\t\t<!-- \"kill_delay\" 
-->\n\t<member_at_decode>decode_l</member_at_decode>\n\t<member_at_encode>encode_l</member_at_encode>\n\t<member_at_set>set_l</member_at_set>\n\t<member_at_comp>comp_l</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_LONG</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_EXC</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_long </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QE_ATR_MaxUserRun</member_index>\n\t<member_name>ATTR_maxuserrun</member_name>\t<!-- \"max_user_run\" -->\n\t<member_at_decode>decode_l</member_at_decode>\n\t<member_at_encode>encode_l</member_at_encode>\n\t<member_at_set>set_l</member_at_set>\n\t<member_at_comp>comp_l</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>check_no_entlim</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_LONG</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_EXC</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_long </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QE_ATR_MaxGrpRun</member_index>\n\t<member_name>ATTR_maxgrprun</member_name>\t\t<!-- \"max_group_run\" -->\n\t<member_at_decode>decode_l</member_at_decode>\n\t<member_at_encode>encode_l</member_at_encode>\n\t<member_at_set>set_l</member_at_set>\n\t<member_at_comp>comp_l</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>check_no_entlim</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_LONG</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_EXC</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_long 
</ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QE_ATR_max_run</member_index>\n\t<member_name>ATTR_max_run</member_name>\t\t<!-- \"max_run\" -->\n\t<member_at_decode>decode_entlim</member_at_decode>\n\t<member_at_encode>encode_entlim</member_at_encode>\n\t<member_at_set>set_entlim</member_at_set>\n\t<member_at_comp>comp_str</member_at_comp>\n\t<member_at_free>free_entlim</member_at_free>\n\t<member_at_action>action_entlim_chk</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_ENTITY</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_EXC</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QE_ATR_max_run_res</member_index>\n\t<member_name>ATTR_max_run_res</member_name>\t\t<!-- \"max_run_res\" -->\n\t<member_at_decode>decode_entlim_res</member_at_decode>\n\t<member_at_encode>encode_entlim</member_at_encode>\n\t<member_at_set>set_entlim_res</member_at_set>\n\t<member_at_comp>comp_str</member_at_comp>\n\t<member_at_free>free_entlim</member_at_free>\n\t<member_at_action>action_entlim_chk</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_ENTITY</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_EXC</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QE_ATR_max_run_soft</member_index>\n\t<member_name>ATTR_max_run_soft</member_name>\t\t<!-- \"max_run_soft\" 
-->\n\t<member_at_decode>decode_entlim</member_at_decode>\n\t<member_at_encode>encode_entlim</member_at_encode>\n\t<member_at_set>set_entlim</member_at_set>\n\t<member_at_comp>comp_str</member_at_comp>\n\t<member_at_free>free_entlim</member_at_free>\n\t<member_at_action>action_entlim_chk</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_ENTITY</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_EXC</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QE_ATR_max_run_res_soft</member_index>\n\t<member_name>ATTR_max_run_res_soft</member_name>\t\t<!-- \"max_run_res_soft\" -->\n\t<member_at_decode>decode_entlim_res</member_at_decode>\n\t<member_at_encode>encode_entlim</member_at_encode>\n\t<member_at_set>set_entlim_res</member_at_set>\n\t<member_at_comp>comp_str</member_at_comp>\n\t<member_at_free>free_entlim</member_at_free>\n\t<member_at_action>action_entlim_chk</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_ENTITY</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_EXC</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QE_ATR_HasNodes</member_index>\n\t<member_name>ATTR_HasNodes</member_name>\t\t<!-- \"hasnodes\" -->\n\t<member_at_decode>decode_b</member_at_decode>\n\t<member_at_encode>encode_b</member_at_encode>\n\t<member_at_set>set_b</member_at_set>\n\t<member_at_comp>comp_b</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>READ_ONLY | 
ATR_DFLAG_NOSAVM</member_at_flags>\n\t<member_at_type>ATR_TYPE_BOOL</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_EXC</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QE_ATR_MaxUserRes</member_index>\n\t<member_name>ATTR_maxuserres</member_name> \t<!-- max_user_res -->\n\t<member_at_decode>decode_resc</member_at_decode>\n\t<member_at_encode>encode_resc</member_at_encode>\n\t<member_at_set>set_resc</member_at_set>\n\t<member_at_comp>comp_resc</member_at_comp>\n\t<member_at_free>free_resc</member_at_free>\n\t<member_at_action>check_no_entlim</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_RESC</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_EXC</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>verify_value_resc</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QE_ATR_MaxGroupRes</member_index>\n\t<member_name>ATTR_maxgroupres</member_name> \t<!-- max_group_res -->\n\t<member_at_decode>decode_resc</member_at_decode>\n\t<member_at_encode>encode_resc</member_at_encode>\n\t<member_at_set>set_resc</member_at_set>\n\t<member_at_comp>comp_resc</member_at_comp>\n\t<member_at_free>free_resc</member_at_free>\n\t<member_at_action>check_no_entlim</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_RESC</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_EXC</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>verify_value_resc</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QE_ATR_MaxUserRunSoft</member_index>\n\t<member_name>ATTR_maxuserrunsoft</member_name>\t<!-- max_user_run_soft 
-->\n\t<member_at_decode>decode_l</member_at_decode>\n\t<member_at_encode>encode_l</member_at_encode>\n\t<member_at_set>set_l</member_at_set>\n\t<member_at_comp>comp_l</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>check_no_entlim</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_LONG</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_EXC</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_long </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QE_ATR_MaxGrpRunSoft</member_index>\n\t<member_name>ATTR_maxgrprunsoft</member_name>\t<!-- max_group_run_soft -->\n\t<member_at_decode>decode_l</member_at_decode>\n\t<member_at_encode>encode_l</member_at_encode>\n\t<member_at_set>set_l</member_at_set>\n\t<member_at_comp>comp_l</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>check_no_entlim</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_LONG</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_EXC</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_long </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QE_ATR_MaxUserResSoft</member_index>\n\t<member_name>ATTR_maxuserressoft</member_name>\t<!-- max_user_res_soft 
-->\n\t<member_at_decode>decode_resc</member_at_decode>\n\t<member_at_encode>encode_resc</member_at_encode>\n\t<member_at_set>set_resc</member_at_set>\n\t<member_at_comp>comp_resc</member_at_comp>\n\t<member_at_free>free_resc</member_at_free>\n\t<member_at_action>check_no_entlim</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_RESC</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_EXC</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>verify_value_resc</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QE_ATR_MaxGroupResSoft</member_index>\n\t<member_name>ATTR_maxgroupressoft</member_name>\t<!-- max_group_res_soft -->\n\t<member_at_decode>decode_resc</member_at_decode>\n\t<member_at_encode>encode_resc</member_at_encode>\n\t<member_at_set>set_resc</member_at_set>\n\t<member_at_comp>comp_resc</member_at_comp>\n\t<member_at_free>free_resc</member_at_free>\n\t<member_at_action>check_no_entlim</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_RESC</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_EXC</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>verify_value_resc</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QE_ATR_NodeGroupKey</member_index>\n\t<member_name>ATTR_NodeGroupKey</member_name>\t<!-- node_group_key 
-->\n\t<member_at_decode>decode_arst</member_at_decode>\n\t<member_at_encode>encode_arst</member_at_encode>\n\t<member_at_set>set_arst</member_at_set>\n\t<member_at_comp>comp_arst</member_at_comp>\n\t<member_at_free>free_arst</member_at_free>\n\t<member_at_action>is_valid_resource</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_ARST</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_EXC</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n   <attributes>\n   <member_index>QE_ATR_BackfillDepth</member_index>\n        <member_name>ATTR_backfill_depth</member_name>             <!-- \"backfill_depth\" -->\n        <member_at_decode>decode_l</member_at_decode>\n        <member_at_encode>encode_l</member_at_encode>\n        <member_at_set>set_l</member_at_set>\n        <member_at_comp>comp_l</member_at_comp>\n        <member_at_free>free_null</member_at_free>\n        <member_at_action>action_backfill_depth</member_at_action>\n        <member_at_flags>NO_USER_SET</member_at_flags>\n        <member_at_type>ATR_TYPE_LONG</member_at_type>\n        <member_at_parent>PARENT_TYPE_QUE_EXC</member_at_parent>\n        <member_verify_function>\n        <ECL>verify_datatype_long</ECL>\n        <ECL>verify_value_zero_or_positive</ECL>\n       </member_verify_function>\n   </attributes>\n\n    <attributes>\n\t<member_index>QR_ATR_RouteDestin</member_index>\n\t<member_name>ATTR_routedest</member_name>\t\t<!-- \"route_destinations\" 
-->\n\t<member_at_decode>decode_arst</member_at_decode>\n\t<member_at_encode>encode_arst</member_at_encode>\n\t<member_at_set>set_arst</member_at_set>\n\t<member_at_comp>comp_arst</member_at_comp>\n\t<member_at_free>free_arst</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>MGR_ONLY_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_ARST</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_RTE</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QR_ATR_AltRouter</member_index>\n\t<member_name>ATTR_altrouter</member_name>\t\t<!-- \"alt_router\" -->\n\t<member_at_decode>decode_b</member_at_decode>\n\t<member_at_encode>encode_b</member_at_encode>\n\t<member_at_set>set_b</member_at_set>\n\t<member_at_comp>comp_b</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>MGR_ONLY_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_BOOL</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_RTE</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_bool </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QR_ATR_RouteHeld</member_index>\n\t<member_name>ATTR_routeheld</member_name>\t\t<!-- \"route_held_jobs\" -->\n\t<member_at_decode>decode_b</member_at_decode>\n\t<member_at_encode>encode_b</member_at_encode>\n\t<member_at_set>set_b</member_at_set>\n\t<member_at_comp>comp_b</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_BOOL</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_RTE</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_bool 
</ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QR_ATR_RouteWaiting</member_index>\n\t<member_name>ATTR_routewait</member_name>\t\t<!-- \"route_waiting_jobs\" -->\n\t<member_at_decode>decode_b</member_at_decode>\n\t<member_at_encode>encode_b</member_at_encode>\n\t<member_at_set>set_b</member_at_set>\n\t<member_at_comp>comp_b</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_BOOL</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_RTE</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_bool </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QR_ATR_RouteRetryTime</member_index>\n\t<member_name>ATTR_routeretry</member_name>\t<!-- \"route_retry_time\" -->\n\t<member_at_decode>decode_l</member_at_decode>\n\t<member_at_encode>encode_l</member_at_encode>\n\t<member_at_set>set_l</member_at_set>\n\t<member_at_comp>comp_l</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_LONG</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_RTE</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_long </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QR_ATR_RouteLifeTime</member_index>\n\t<member_name>ATTR_routelife</member_name>\t\t<!-- \"route_lifetime\" 
-->\n\t<member_at_decode>decode_l</member_at_decode>\n\t<member_at_encode>encode_l</member_at_encode>\n\t<member_at_set>set_l</member_at_set>\n\t<member_at_comp>comp_l</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_LONG</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_RTE</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_long </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QA_ATR_Enabled</member_index>\n\t<member_name>ATTR_enable</member_name>\t\t<!-- \"enabled\" -->\n\t<member_at_decode>decode_b</member_at_decode>\n\t<member_at_encode>encode_b</member_at_encode>\n\t<member_at_set>set_b</member_at_set>\n\t<member_at_comp>comp_b</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>check_que_enable</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_BOOL</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_ALL</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_bool </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QA_ATR_Started</member_index>\n\t<member_name>ATTR_start</member_name>\t\t<!-- \"started\" -->\n\t<member_at_decode>decode_b</member_at_decode>\n\t<member_at_encode>encode_b</member_at_encode>\n\t<member_at_set>set_b</member_at_set>\n\t<member_at_comp>comp_b</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>queuestart_action</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_BOOL</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_ALL</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_bool 
</ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n \t<member_index>QA_ATR_queued_jobs_threshold</member_index>\n\t<member_name>ATTR_queued_jobs_threshold</member_name>\t<!-- \"queued_jobs_threshold\" -->\n\t<member_at_decode>decode_entlim</member_at_decode>\n\t<member_at_encode>encode_entlim</member_at_encode>\n\t<member_at_set>set_entlim</member_at_set>\n\t<member_at_comp>comp_str</member_at_comp>\n\t<member_at_free>free_entlim</member_at_free>\n\t<member_at_action>action_entlim_ct</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_ENTITY</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_ALL</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QA_ATR_queued_jobs_threshold_res</member_index>\n\t<member_name>ATTR_queued_jobs_threshold_res</member_name>\t<!-- \"queued_jobs_threshold_res\" -->\n\t<member_at_decode>decode_entlim_res</member_at_decode>\n\t<member_at_encode>encode_entlim</member_at_encode>\n\t<member_at_set>set_entlim_res</member_at_set>\n\t<member_at_comp>comp_str</member_at_comp>\n\t<member_at_free>free_entlim</member_at_free>\n\t<member_at_action>action_entlim_res</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_ENTITY</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_ALL</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>QA_ATR_partition</member_index>\n\t<member_name>ATTR_partition</member_name>\t<!-- partition 
-->\n\t<member_at_decode>decode_str</member_at_decode>\n\t<member_at_encode>encode_str</member_at_encode>\n\t<member_at_set>set_str</member_at_set>\n\t<member_at_comp>comp_str</member_at_comp>\n\t<member_at_free>free_str</member_at_free>\n\t<member_at_action>action_queue_partition</member_at_action>\n\t<member_at_flags>MGR_ONLY_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_STR</member_at_type>\n\t<member_at_parent>PARENT_TYPE_QUE_ALL</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC </ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n    </attributes>\n    <tail>\n      <SVR>\n\t  #include \"site_que_attr_def.h\"\n\t};\n      </SVR>\n      <ECL>\n\t};\n\tint   ecl_que_attr_size = sizeof(ecl_que_attr_def)/sizeof(ecl_attribute_def);\n      </ECL>\n    </tail>\n</data>\n"
  },
  {
    "path": "src/lib/Libattr/master_resc_def_all.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<data>\n   <!--\n/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n\tNOTE (Server File)\n\n\tresc_def_all is the array of attribute definitions for resc.\n    Each legal job attribute is defined here.\n    The entries for each attribute are (see attribute.h):\n       name,\n       decode function,\n       encode function,\n       set function,\n       compare function,\n       free value space function,\n       action function,\n       access permission flags,\n       value type,\n\t   member_at_entlim,\n\t   member_at_struct,\n\n\t NOTE (ECL File)\n\n\t The entries for each attribute are (see attribute.h):\\n\n       name,\n       type,\n       flag,\n       verify datatype function,\n       verify value function\n    -->\n   <head>\n      <SVR>\n      #include &lt;pbs_config.h&gt;\n      #include &lt;sys/types.h&gt;\n      #include &lt;stdlib.h&gt;\n      #include &lt;stdio.h&gt;\n      #include &lt;ctype.h&gt;\n      #include \"pbs_ifl.h\"\n      #include \"server_limits.h\"\n      #include &lt;string.h&gt;\n      #include \"list_link.h\"\n      #include \"attribute.h\"\n      #include \"resource.h\"\n      #include \"pbs_error.h\"\n      #include \"pbs_nodes.h\"\n      #include \"svrfunc.h\"\n      #include \"grunt.h\"\n\n      extern int set_node_ct(resource *, attribute *, void *, int, int actmode);\n      extern int decode_place(attribute *, char *, char *, char *);\n      extern int 
preempt_targets_action(resource *presc, attribute *pattr, void *pobject, int type, int actmode);\n      extern int action_soft_walltime(resource *presc, attribute *pattr, void *pobject, int type, int actmode);\n      extern int action_walltime(resource *presc, attribute *pattr, void *pobject, int type, int actmode);\n      extern int action_min_walltime(resource *presc, attribute *pattr, void *pobject, int type, int actmode);\n      extern int action_max_walltime(resource *presc, attribute *pattr, void *pobject, int type, int actmode);\n      extern int zero_or_positive_action  (resource *, attribute *, void *, int, int actmode);\n      #ifndef PBS_MOM\n      extern int host_action(resource *, attribute *, void *, int, int actmode);\n      extern int resc_select_action(resource *, attribute *, void *, int, int);\n      #endif /* PBS_MOM */\n      /* ordered by guess to put ones most often used at front */\n\n      static resource_def svr_resc_defm[] = {\n      </SVR>\n      <ECL>\n      #include &lt;pbs_config.h&gt;\n      #include &lt;sys/types.h&gt;\n      #include \"pbs_ifl.h\"\n      #include \"pbs_ecl.h\"\n\n      ecl_attribute_def ecl_svr_resc_def[] = {\n      </ECL>\n   </head>\n   <attributes>\n      <member_index>RESC_CPUT</member_index>\n      <member_name>\"cput\"</member_name>\n      <member_at_decode>decode_time</member_at_decode>\n      <member_at_encode>encode_time</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM | ATR_DFLAG_ALTRUN</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>verify_datatype_time</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n     
 </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_MEM</member_index>\n      <member_name>\"mem\"</member_name>\n      <member_at_decode>decode_size</member_at_decode>\n      <member_at_encode>encode_size</member_at_encode>\n      <member_at_set>set_size</member_at_set>\n      <member_at_comp>comp_size</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM | ATR_DFLAG_RASSN | ATR_DFLAG_ANASSN |ATR_DFLAG_CVTSLT</member_at_flags>\n      <member_at_type>ATR_TYPE_SIZE</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>verify_datatype_size</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_WALLTIME</member_index>\n      <member_name>\"walltime\"</member_name>\n      <member_at_decode>decode_time</member_at_decode>\n      <member_at_encode>encode_time</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>action_walltime</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM | ATR_DFLAG_ALTRUN</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>verify_datatype_time</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_SOFT_WALLTIME</member_index>\n      <member_name>\"soft_walltime\"</member_name>\n      <member_at_decode>decode_time</member_at_decode>\n      
<member_at_encode>encode_time</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>action_soft_walltime</member_at_action>\n      <member_at_flags>MGR_ONLY_SET | ATR_DFLAG_ALTRUN</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>verify_datatype_time</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_MIN_WALLTIME</member_index>\n      <member_name>\"min_walltime\"</member_name>\n      <member_at_decode>decode_time</member_at_decode>\n      <member_at_encode>encode_time</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>action_min_walltime</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_ALTRUN</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>verify_datatype_time</ECL>\n         <ECL>verify_value_zero_or_positive</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_MAX_WALLTIME</member_index>\n      <member_name>\"max_walltime\"</member_name>\n      <member_at_decode>decode_time</member_at_decode>\n      <member_at_encode>encode_time</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>action_max_walltime</member_at_action>\n      <member_at_flags>READ_WRITE | 
ATR_DFLAG_ALTRUN</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>verify_datatype_time</ECL>\n         <ECL>verify_value_zero_or_positive</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_NCPUS</member_index>\n      <member_name>\"ncpus\"</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>zero_or_positive_action</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM | ATR_DFLAG_RASSN | ATR_DFLAG_ANASSN | ATR_DFLAG_CVTSLT</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>verify_value_zero_or_positive</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes flag=\"SVR\">\n      <member_index>RESC_NACCELERATORS</member_index>\n      <member_name>\"naccelerators\"</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM | ATR_DFLAG_RASSN | ATR_DFLAG_ANASSN | ATR_DFLAG_CVTSLT</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      
<member_at_struct>NULL</member_at_struct>\n   </attributes>\n   <attributes>\n      <member_index>RESC_SELECT</member_index>\n      <member_name>\"select\"</member_name>\n      <member_at_decode>decode_select</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>\n#ifdef PBS_MOM\n\t\tNULL_FUNC_RESC\n#else\n\t\tresc_select_action\n#endif\n      </member_at_action>\n      <member_at_flags>READ_WRITE</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>verify_datatype_select</ECL>\n         <ECL>verify_value_select</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_PLACE</member_index>\n      <member_name>\"place\"</member_name>\n      <member_at_decode>decode_place</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_NODES</member_index>\n      <member_name>\"nodes\"</member_name>\n      <member_at_decode>decode_nodes</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      
<member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>set_node_ct</member_at_action>\n      <member_at_flags>READ_WRITE</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>verify_datatype_nodes</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_NODECT</member_index>\n      <member_name>\"nodect\"</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_MGWR | ATR_DFLAG_RASSN</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>verify_value_zero_or_positive</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_ARCH</member_index>\n      <member_name>\"arch\"</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_CVTSLT | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      
<member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes flag=\"SVR\">\n      <member_index>RESC_NCHUNK</member_index>\n      <member_name>\"nchunk\"</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>NO_USER_SET | ATR_DFLAG_CVTSLT</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n   </attributes>\n   <attributes flag=\"SVR\">\n      <member_index>RESC_VNTYPE</member_index>\n      <member_name>\"vntype\"</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_arst</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_CVTSLT</member_at_flags>\n      <member_at_type>ATR_TYPE_ARST</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n   </attributes>\n   <attributes>\n      <member_index>RESC_MPIPROCS</member_index>\n      <member_name>MPIPROCS</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      
<member_at_free>free_null</member_at_free>\n      <member_at_action>zero_or_positive_action</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_RASSN | ATR_DFLAG_CVTSLT</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>verify_value_zero_or_positive</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_OMPTHREADS</member_index>\n      <member_name>OMPTHREADS</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>zero_or_positive_action</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_CVTSLT</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>verify_value_zero_or_positive</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_CPUPERCENT</member_index>\n      <member_name>\"cpupercent\"</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      
<member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>verify_value_zero_or_positive</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_ENERGY</member_index>\n      <member_name>\"energy\"</member_name>\n      <member_at_decode>decode_f</member_at_decode>\n      <member_at_encode>encode_f</member_at_encode>\n      <member_at_set>set_f</member_at_set>\n      <member_at_comp>comp_f</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_FLOAT</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>verify_datatype_float</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_FILE</member_index>\n      <member_name>\"file\"</member_name>\n      <member_at_decode>decode_size</member_at_decode>\n      <member_at_encode>encode_size</member_at_encode>\n      <member_at_set>set_size</member_at_set>\n      <member_at_comp>comp_size</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_SIZE</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>verify_datatype_size</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_PMEM</member_index>\n      <member_name>\"pmem\"</member_name>\n      
<member_at_decode>decode_size</member_at_decode>\n      <member_at_encode>encode_size</member_at_encode>\n      <member_at_set>set_size</member_at_set>\n      <member_at_comp>comp_size</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_SIZE</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>verify_datatype_size</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_VMEM</member_index>\n      <member_name>\"vmem\"</member_name>\n      <member_at_decode>decode_size</member_at_decode>\n      <member_at_encode>encode_size</member_at_encode>\n      <member_at_set>set_size</member_at_set>\n      <member_at_comp>comp_size</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM | ATR_DFLAG_RASSN | ATR_DFLAG_ANASSN | ATR_DFLAG_CVTSLT</member_at_flags>\n      <member_at_type>ATR_TYPE_SIZE</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>verify_datatype_size</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_PVMEM</member_index>\n      <member_name>\"pvmem\"</member_name>\n      <member_at_decode>decode_size</member_at_decode>\n      <member_at_encode>encode_size</member_at_encode>\n      <member_at_set>set_size</member_at_set>\n      <member_at_comp>comp_size</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      
<member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_SIZE</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>verify_datatype_size</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_NICE</member_index>\n      <member_name>\"nice\"</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_PCPUT</member_index>\n      <member_name>\"pcput\"</member_name>\n      <member_at_decode>decode_time</member_at_decode>\n      <member_at_encode>encode_time</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         
<ECL>verify_datatype_time</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_NODEMASK</member_index>\n      <member_name>\"nodemask\"</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>NO_USER_SET | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_HPM</member_index>\n      <member_name>\"hpm\"</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM | ATR_DFLAG_RASSN</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_SSINODES</member_index>\n      <member_name>\"ssinodes\"</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      
<member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_HOST</member_index>\n      <member_name>\"host\"</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>\n#ifdef PBS_MOM\n\t\tNULL_FUNC_RESC\n#else\n\t\thost_action\n#endif\n      </member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_CVTSLT</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_VNODE</member_index>\n      <member_name>\"vnode\"</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE | 
ATR_DFLAG_CVTSLT</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_RESC</member_index>\n      <member_name>\"resc\"</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_arst</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE</member_at_flags>\n      <member_at_type>ATR_TYPE_ARST</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_SOFTWARE</member_index>\n      <member_name>\"software\"</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   
</attributes>\n   <attributes>\n      <member_index>RESC_SITE</member_index>\n      <member_name>\"site\"</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes flag=\"SVR\">\n      <member_index>RESC_EXEC_VNODE</member_index>\n      <member_name>\"exec_vnode\"</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n   </attributes>\n   <attributes flag=\"SVR\">\n      <member_index>RESC_START_TIME</member_index>\n      <member_name>\"start_time\"</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      
<member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n   </attributes>\n   <attributes macro=\"#if PE_MASK != 0\">\n      <member_index>RESC_PE_MASK</member_index>\n      <member_name>\"pe_mask\"</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>NO_USER_SET | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_PARTITION</member_index>\n      <member_name>\"partition\"</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>NO_USER_SET | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes macro=\"#ifndef PBS_MOM\" mflag=\"SVR\">\n      
<member_index>RESC_AOE</member_index>\n      <member_name>\"aoe\"</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_arst</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_CVTSLT</member_at_flags>\n      <member_at_type>ATR_TYPE_ARST</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes macro=\"#ifndef PBS_MOM\" mflag=\"SVR\">\n      <member_index>RESC_EOE</member_index>\n      <member_name>\"eoe\"</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_arst</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_CVTSLT</member_at_flags>\n      <member_at_type>ATR_TYPE_ARST</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESC_PREEMPT_TARGETS</member_index>\n      <member_name>\"preempt_targets\"</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_arst</member_at_set>\n      
<member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>preempt_targets_action</member_at_action>\n      <member_at_flags>READ_WRITE</member_at_flags>\n      <member_at_type>ATR_TYPE_ARST</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_preempt_targets</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes flag=\"SVR\">\n      <member_index>RESC_ACCELERATOR</member_index>\n      <member_name>\"accelerator\"</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM | ATR_DFLAG_CVTSLT</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n   </attributes>\n   <attributes flag=\"SVR\">\n      <member_index>RESC_ACCELERATOR_MODEL</member_index>\n      <member_name>\"accelerator_model\"</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM | ATR_DFLAG_CVTSLT</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n   
</attributes>\n   <attributes flag=\"SVR\">\n      <member_index>RESC_ACCELERATOR_MEMORY</member_index>\n      <member_name>\"accelerator_memory\"</member_name>\n      <member_at_decode>decode_size</member_at_decode>\n      <member_at_encode>encode_size</member_at_encode>\n      <member_at_set>set_size</member_at_set>\n      <member_at_comp>comp_size</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM | ATR_DFLAG_RASSN | ATR_DFLAG_ANASSN | ATR_DFLAG_CVTSLT</member_at_flags>\n      <member_at_type>ATR_TYPE_SIZE</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n   </attributes>\n   <attributes flag=\"SVR\">\n      <member_index>RESC_ACCELERATOR_GROUP</member_index>\n      <member_name>\"accelerator_group\"</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_arst</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_CVTSLT</member_at_flags>\n      <member_at_type>ATR_TYPE_ARST</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n   </attributes>\n   <attributes flag=\"SVR\">\n      <member_index>RESC_PSTATE</member_index>\n      <member_name>\"pstate\"</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE | 
ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n   </attributes>\n   <attributes>\n      <member_index>RESC_HBMEM</member_index>\n      <member_name>\"hbmem\"</member_name>\n      <member_at_decode>decode_size</member_at_decode>\n      <member_at_encode>encode_size</member_at_encode>\n      <member_at_set>set_size</member_at_set>\n      <member_at_comp>comp_size</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM | ATR_DFLAG_RASSN | ATR_DFLAG_ANASSN | ATR_DFLAG_CVTSLT</member_at_flags>\n      <member_at_type>ATR_TYPE_SIZE</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n      <member_verify_function>\n         <ECL>verify_datatype_size</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes flag=\"SVR\">\n      <member_index>RESC_PGOV</member_index>\n      <member_name>\"pgov\"</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n   </attributes>\n   <attributes flag=\"SVR\">\n      <member_index>RESC_PCAP_NODE</member_index>\n      <member_name>\"pcap_node\"</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      
<member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>zero_or_positive_action</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n   </attributes>\n   <attributes flag=\"SVR\">\n      <member_index>RESC_PCAP_ACCELERATOR</member_index>\n      <member_name>\"pcap_accelerator\"</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>zero_or_positive_action</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n   </attributes>\n   <attributes flag=\"SVR\">\n      <member_name>\"|unknown|\"</member_name>\n      <member_at_decode>decode_unkn</member_at_decode>\n      <member_at_encode>encode_unkn</member_at_encode>\n      <member_at_set>set_unkn</member_at_set>\n      <member_at_comp>comp_unkn</member_at_comp>\n      <member_at_free>free_unkn</member_at_free>\n      <member_at_action>NULL_FUNC_RESC</member_at_action>\n      <member_at_flags>READ_WRITE</member_at_flags>\n      <member_at_type>ATR_TYPE_LIST</member_at_type>\n      <member_at_entlim>PBS_ENTLIM_NOLIMIT</member_at_entlim>\n      <member_at_struct>NULL</member_at_struct>\n   </attributes>\n   <tail>\n   <SVR>};\nint svr_resc_size = sizeof(svr_resc_defm) / sizeof(resource_def);\nresource_def *svr_resc_def = svr_resc_defm;\nint 
svr_resc_unk = sizeof(svr_resc_defm) / sizeof(resource_def) - 1;</SVR>\n   <ECL>};\nint ecl_svr_resc_size = sizeof(ecl_svr_resc_def)/sizeof(ecl_attribute_def);</ECL>\n   </tail>\n</data>\n"
  },
  {
    "path": "src/lib/Libattr/master_resv_attr_def.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<data>\n   <!--\n/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n\n   The entries for each attribute are (see attribute.h):\n       name,\n       decode function,\n       encode function,\n       set function,\n       compare function,\n       free value space function,\n       action function,\n       access permission flags,\n       value type,\n       parent object type\n\n     NOTE (ECL File)\n\n     The entries for each attribute are (see attribute.h):\n       name,\n       type,\n       flag,\n       verify datatype function,\n       verify value function\n    -->\n   <head>\n      <SVR>#include &lt;pbs_config.h&gt;\n     #include &lt;fcntl.h&gt;\n     #include &lt;sys/types.h&gt;\n     #include \"pbs_ifl.h\"\n     #include \"list_link.h\"\n     #include \"attribute.h\"\n     #include \"server_limits.h\"\n     #include \"job.h\"\n     #include \"reservation.h\"\n\n\n     attribute_def resv_attr_def[] = {</SVR>\n      <ECL>#include &lt;pbs_config.h&gt;\n     #include \"pbs_ifl.h\"\n     #include \"pbs_ecl.h\"\n\n    /* ordered by guess to put ones most often used at front */\n\n     ecl_attribute_def ecl_resv_attr_def[] = {</ECL>\n   </head>\n   <attributes>\n      <member_index>RESV_ATR_resv_name</member_index>\n      <member_name>ATTR_resv_name</member_name>\n      <member_at_decode>decode_jobname</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      
<member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_ALTRUN | ATR_DFLAG_SELEQ</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_jobname</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_resv_owner</member_index>\n      <member_name>ATTR_resv_owner</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_SSET | ATR_DFLAG_SELEQ | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_state</member_index>\n      <member_name>ATTR_resv_state</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_RDACC | ATR_DFLAG_SvWR</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n 
     <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_substate</member_index>\n      <member_name>ATTR_resv_substate</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_RDACC | ATR_DFLAG_SvWR</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_reserve_Tag</member_index>\n      <member_name>ATTR_resv_Tag</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_Creat | READ_ONLY</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_reserveID</member_index>\n      <member_name>ATTR_resv_ID</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      
<member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_Creat | ATR_DFLAG_SvWR | READ_ONLY</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_start</member_index>\n      <member_name>ATTR_resv_start</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_ALTRUN</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_end</member_index>\n      <member_name>ATTR_resv_end</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_ALTRUN</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         
<ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_duration</member_index>\n      <member_name>ATTR_resv_duration</member_name>\n      <member_at_decode>decode_time</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_ALTRUN</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_time</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_queue</member_index>\n      <member_name>ATTR_queue</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_resource</member_index>\n      <member_name>ATTR_l</member_name>\n      <member_at_decode>decode_resc</member_at_decode>\n      <member_at_encode>encode_resc</member_at_encode>\n      <member_at_set>set_resc</member_at_set>\n      
<member_at_comp>comp_resc</member_at_comp>\n      <member_at_free>free_resc</member_at_free>\n      <member_at_action>action_resc_resv</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_ALTRUN | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_RESC</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_resc</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_SchedSelect</member_index>\n      <member_name>ATTR_SchedSelect</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_MGRD</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_resc_used</member_index>\n      <member_name>ATTR_used</member_name>\n      <member_at_decode>decode_resc</member_at_decode>\n      <member_at_encode>encode_resc</member_at_encode>\n      <member_at_set>set_resc</member_at_set>\n      <member_at_comp>comp_resc</member_at_comp>\n      <member_at_free>free_resc</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_SvWR</member_at_flags>\n      <member_at_type>ATR_TYPE_RESC</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n   
      <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_resv_nodes</member_index>\n      <member_name>ATTR_resv_nodes</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_userlst</member_index>\n      <member_name>ATTR_u</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_uacl</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_SELEQ</member_at_flags>\n      <member_at_type>ATR_TYPE_ARST</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_user_list</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_grouplst</member_index>\n      <member_name>ATTR_g</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_arst</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      
<member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_SELEQ</member_at_flags>\n      <member_at_type>ATR_TYPE_ARST</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_user_list</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_auth_u</member_index>\n      <member_name>ATTR_auth_u</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_uacl</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_SELEQ</member_at_flags>\n      <member_at_type>ATR_TYPE_ARST</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_authorized_users</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_auth_g</member_index>\n      <member_name>ATTR_auth_g</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_arst</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_SELEQ</member_at_flags>\n      <member_at_type>ATR_TYPE_ARST</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_authorized_groups</ECL>\n  
    </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_auth_h</member_index>\n      <member_name>ATTR_auth_h</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_arst</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_SELEQ</member_at_flags>\n      <member_at_type>ATR_TYPE_ARST</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_at_server</member_index>\n      <member_name>ATTR_server</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_account</member_index>\n      <member_name>ATTR_A</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      
<member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_SELEQ | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_ctime</member_index>\n      <member_name>ATTR_ctime</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_SSET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_mailpnts</member_index>\n      <member_name>ATTR_m</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_ALTRUN | ATR_DFLAG_SELEQ</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_mailpoints</ECL>\n      </member_verify_function>\n   </attributes>\n   
<attributes>\n      <member_index>RESV_ATR_mailuser</member_index>\n      <member_name>ATTR_M</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_arst</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_ALTRUN | ATR_DFLAG_SELEQ</member_at_flags>\n      <member_at_type>ATR_TYPE_ARST</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_mailusers</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_mtime</member_index>\n      <member_name>ATTR_mtime</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_SSET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_hashname</member_index>\n      <member_name>ATTR_hashname</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      
<member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_MGRD | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_hopcount</member_index>\n      <member_name>ATTR_hopcount</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_SSET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_priority</member_index>\n      <member_name>ATTR_p</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_ALTRUN</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>verify_value_priority</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      
<member_index>RESV_ATR_interactive</member_index>\n      <member_name>ATTR_inter</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_variables</member_index>\n      <member_name>ATTR_v</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_arst</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_SELEQ | ATR_DFLAG_MOM</member_at_flags>\n      <member_at_type>ATR_TYPE_ARST</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_euser</member_index>\n      <member_name>ATTR_euser</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      
<member_at_flags>ATR_DFLAG_MGRD</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_egroup</member_index>\n      <member_name>ATTR_egroup</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_MGRD</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_convert</member_index>\n      <member_name>ATTR_convert</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_resv_standing</member_index>\n      
<member_name>ATTR_resv_standing</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_ALTRUN</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_resv_rrule</member_index>\n      <member_name>ATTR_resv_rrule</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_resv_idx</member_index>\n      <member_name>ATTR_resv_idx</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_ALTRUN</member_at_flags>\n      
<member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_resv_count</member_index>\n      <member_name>ATTR_resv_count</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_ALTRUN</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_resv_execvnodes</member_index>\n      <member_name>ATTR_resv_execvnodes</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_resv_timezone</member_index>\n      <member_name>ATTR_resv_timezone</member_name>\n      
<member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_retry</member_index>\n      <member_name>ATTR_resv_retry</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_del_idle_time</member_index>\n      <member_name>ATTR_del_idle_time</member_name>\n      <member_at_decode>decode_time</member_at_decode>\n      <member_at_encode>encode_time</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      
<member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_time</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_job</member_index>\n      <member_name>ATTR_resv_job</member_name>\n      <member_at_decode>decode_jobname</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_SchedSelect_orig</member_index>\n      <member_name>ATTR_SchedSelect_orig</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_MGRD</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_submit_host</member_index>\n      <member_name>ATTR_submit_host</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      
<member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM | ATR_DFLAG_SvRD | ATR_DFLAG_SvWR</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>RESV_ATR_cred_id</member_index>\n      <member_name>ATTR_cred_id</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_WRITE | ATR_DFLAG_MOM | ATR_DFLAG_SvRD | ATR_DFLAG_SvWR</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes flag=\"SVR\">\n      <member_index>RESV_ATR_node_set</member_index>\n      <member_name>\n         <SVR>ATTR_node_set</SVR>\n      </member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_arst</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>\n         
<SVR>ATR_DFLAG_SvWR|ATR_DFLAG_MGWR</SVR>\n      </member_at_flags>\n      <member_at_type>\n         <SVR>ATR_TYPE_ARST</SVR>\n      </member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n   </attributes>\n   <attributes flag=\"SVR\">\n      <member_index>RESV_ATR_partition</member_index>\n      <member_name>\n         <SVR>ATTR_partition</SVR>\n      </member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>\n         <SVR>ATR_DFLAG_SSET | READ_ONLY</SVR>\n      </member_at_flags>\n      <member_at_type>\n         <SVR>ATR_TYPE_STR</SVR>\n      </member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n   </attributes>\n   <attributes flag=\"SVR\">\n      <member_index>RESV_ATR_alter_revert</member_index>\n      <member_name>ATTR_resv_alter_revert</member_name>\n      <member_at_decode>decode_resc</member_at_decode>\n      <member_at_encode>encode_resc</member_at_encode>\n      <member_at_set>set_resc</member_at_set>\n      <member_at_comp>comp_resc</member_at_comp>\n      <member_at_free>free_resc</member_at_free>\n      <member_at_action>action_resc_resv</member_at_action>\n      <member_at_flags>ATR_DFLAG_MGRD</member_at_flags>\n      <member_at_type>ATR_TYPE_RESC</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes flag=\"SVR\">\n      <member_index>RESV_ATR_standing_revert</member_index>\n      <member_name>ATTR_resv_standing_revert</member_name>\n      <member_at_decode>decode_resc</member_at_decode>\n      
<member_at_encode>encode_resc</member_at_encode>\n      <member_at_set>set_resc</member_at_set>\n      <member_at_comp>comp_resc</member_at_comp>\n      <member_at_free>free_resc</member_at_free>\n      <member_at_action>action_resc_resv</member_at_action>\n      <member_at_flags>ATR_DFLAG_MGRD</member_at_flags>\n      <member_at_type>ATR_TYPE_RESC</member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes flag=\"SVR\">\n      #include \"site_resv_attr_def.h\"\n      <member_name>\n         <SVR>\"_other_\"</SVR>\n      </member_name>\n      <member_at_decode>decode_unkn</member_at_decode>\n      <member_at_encode>encode_unkn</member_at_encode>\n      <member_at_set>set_unkn</member_at_set>\n      <member_at_comp>comp_unkn</member_at_comp>\n      <member_at_free>free_unkn</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>\n         <SVR>READ_WRITE | ATR_DFLAG_SELEQ</SVR>\n      </member_at_flags>\n      <member_at_type>\n         <SVR>ATR_TYPE_LIST</SVR>\n      </member_at_type>\n      <member_at_parent>PARENT_TYPE_RESV</member_at_parent>\n   </attributes>\n   <tail>\n      <SVR>};</SVR>\n      <ECL>};\n\tint   ecl_resv_attr_size = sizeof(ecl_resv_attr_def) / sizeof(ecl_attribute_def);</ECL>\n   </tail>\n</data>\n"
  },
  {
    "path": "src/lib/Libattr/master_sched_attr_def.xml",
    "content": "<?xml version=\"1.0\"?>\n\n<data>\n   <!--\n/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n\tNOTE (Server File)\n\n\tsched_attr_def is the array of attribute definitions for the sched object.\n\tEach legal sched attribute is defined here.\n\tThe entries for each attribute are (see attribute.h):\n       name,\n       decode function,\n       encode function,\n       set function,\n       compare function,\n       free value space function,\n       action function,\n       access permission flags,\n       value type,\n       parent object type\n\n\tNOTE (ECL File)\n\n\tecl_sched_attr_def is the array of attribute and resource\n \tdefinitions for the scheduler.\n \tThe structure is used by the ECL verification functionality\n \tto determine which verification function is to be called for each\n \tattribute.\n\n\tThe entries for each attribute are (see attribute.h):\n\t\tname,\n\t\ttype,\n        flag,\n\t\tverify datatype function,\n\t\tverify value function\n   -->\n   <head>\n     <SVR>\n     #include &lt;pbs_config.h&gt;\n     #include &lt;sys/types.h&gt;\n     #include \"pbs_ifl.h\"\n     #include \"list_link.h\"\n     #include \"attribute.h\"\n     #include \"pbs_nodes.h\"\n     #include \"svrfunc.h\"\n     #include \"pbs_error.h\"\n     #include \"pbs_python.h\"\n\n     attribute_def sched_attr_def[] = {\n     </SVR>\n     <ECL>\n     #include &lt;pbs_config.h&gt;\n     #include &lt;sys/types.h&gt;\n     #include \"pbs_ifl.h\"\n     #include 
\"pbs_ecl.h\"\n\n     ecl_attribute_def ecl_sched_attr_def[] = {\n     </ECL>\n   </head>\n   <attributes>\n\t<member_index>SCHED_ATR_SchedHost</member_index>\n\t<member_name>ATTR_SchedHost</member_name>\t\t<!-- \"Sched_Host\" -->\n\t<member_at_decode>decode_str</member_at_decode>\n\t<member_at_encode>encode_str</member_at_encode>\n\t<member_at_set>set_str</member_at_set>\n\t<member_at_comp>comp_str</member_at_comp>\n\t<member_at_free>free_str</member_at_free>\n\t<member_at_action>action_sched_host</member_at_action>\n\t<member_at_flags>MGR_ONLY_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_STR</member_at_type>\n\t<member_at_parent>PARENT_TYPE_SCHED</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n   </attributes>\n   <attributes>\n\t<member_index>SCHED_ATR_sched_cycle_len</member_index>\n\t<member_name>ATTR_sched_cycle_len</member_name>\t<!-- \"sched_cycle_length\" -->\n\t<member_at_decode>decode_time</member_at_decode>\n\t<member_at_encode>encode_time</member_at_encode>\n\t<member_at_set>set_l</member_at_set>\n\t<member_at_comp>comp_l</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_LONG</member_at_type>\n\t<member_at_parent>PARENT_TYPE_SCHED</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_time</ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n   </attributes>\n   <attributes>  <member_index>SCHED_ATR_dont_span_psets</member_index>\n\t<member_name>ATTR_do_not_span_psets</member_name> \t<!-- do_not_span_psets 
-->\n\t<member_at_decode>decode_b</member_at_decode>\n\t<member_at_encode>encode_b</member_at_encode>\n\t<member_at_set>set_b</member_at_set>\n\t<member_at_comp>comp_b</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_LONG</member_at_type>\n\t<member_at_parent>PARENT_TYPE_SCHED</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_bool</ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n   </attributes>\n   <attributes>  <member_index>SCHED_ATR_only_explicit_psets</member_index>\n\t<member_name>ATTR_only_explicit_psets</member_name> \t<!-- only_explicit_psets -->\n\t<member_at_decode>decode_b</member_at_decode>\n\t<member_at_encode>encode_b</member_at_encode>\n\t<member_at_set>set_b</member_at_set>\n\t<member_at_comp>comp_b</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>NO_USER_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_BOOL</member_at_type>\n\t<member_at_parent>PARENT_TYPE_SCHED</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_bool</ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n   </attributes>\n    <attributes>  <member_index>SCHED_ATR_sched_preempt_enforce_resumption</member_index>\n\t<member_name>ATTR_sched_preempt_enforce_resumption</member_name> \t<!-- sched_preempt_enforce_resumption 
-->\n\t<member_at_decode>decode_b</member_at_decode>\n\t<member_at_encode>encode_b</member_at_encode>\n\t<member_at_set>set_b</member_at_set>\n\t<member_at_comp>comp_b</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>MGR_ONLY_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_BOOL</member_at_type>\n\t<member_at_parent>PARENT_TYPE_SCHED</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_bool</ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n   </attributes>\n   <attributes>  <member_index>SCHED_ATR_preempt_targets_enable</member_index>\n\t<member_name>ATTR_preempt_targets_enable</member_name> \t<!-- preempt_targets_enable -->\n\t<member_at_decode>decode_b</member_at_decode>\n\t<member_at_encode>encode_b</member_at_encode>\n\t<member_at_set>set_b</member_at_set>\n\t<member_at_comp>comp_b</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>MGR_ONLY_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_BOOL</member_at_type>\n\t<member_at_parent>PARENT_TYPE_SCHED</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_bool</ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n   </attributes>\n   <attributes>\n\t<member_index>SCHED_ATR_job_sort_formula_threshold</member_index>\n\t<member_name>ATTR_job_sort_formula_threshold</member_name>\t<!-- \"job_sort_formula_threshold\" -->\n\t<member_at_decode>decode_f</member_at_decode>\n\t<member_at_encode>encode_f</member_at_encode>\n\t<member_at_set>set_f</member_at_set>\n\t<member_at_comp>comp_f</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>PRIV_READ | 
ATR_DFLAG_MGWR</member_at_flags>\n\t<member_at_type>ATR_TYPE_FLOAT</member_at_type>\n\t<member_at_parent>PARENT_TYPE_SCHED</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_float</ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n   </attributes>\n   <attributes>  <member_index>SCHED_ATR_throughput_mode</member_index>\n\t<member_name>ATTR_throughput_mode</member_name> \t<!-- throughput_mode -->\n\t<member_at_decode>decode_b</member_at_decode>\n\t<member_at_encode>encode_b</member_at_encode>\n\t<member_at_set>set_b</member_at_set>\n\t<member_at_comp>comp_b</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>action_throughput_mode</member_at_action>\n\t<member_at_flags>MGR_ONLY_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_LONG</member_at_type>\n\t<member_at_parent>PARENT_TYPE_SCHED</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_bool</ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n   </attributes>\n   <attributes>\n\t<member_index>SCHED_ATR_job_run_wait</member_index>\n        <member_name>ATTR_job_run_wait</member_name>    <!-- job_run_wait -->\n        <member_at_decode>decode_str</member_at_decode>\n        <member_at_encode>encode_str</member_at_encode>\n        <member_at_set>set_str</member_at_set>\n        <member_at_comp>comp_str</member_at_comp>\n        <member_at_free>free_str</member_at_free>\n        <member_at_action>action_job_run_wait</member_at_action>\n        <member_at_flags>MGR_ONLY_SET</member_at_flags>\n        <member_at_type>ATR_TYPE_STR</member_at_type>\n        <member_at_parent>PARENT_TYPE_SCHED</member_at_parent>\n        <member_verify_function>\n        <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n        <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n        </member_verify_function>\n   </attributes>\n   <attributes>\n\t<member_index>SCHED_ATR_opt_backfill_fuzzy</member_index>\n        
<member_name>ATTR_opt_backfill_fuzzy</member_name>\n        <member_at_decode>decode_str</member_at_decode>\n        <member_at_encode>encode_str</member_at_encode>\n        <member_at_set>set_str</member_at_set>\n        <member_at_comp>comp_str</member_at_comp>\n        <member_at_free>free_str</member_at_free>\n        <member_at_action>action_opt_bf_fuzzy</member_at_action>\n        <member_at_flags>MGR_ONLY_SET</member_at_flags>\n        <member_at_type>ATR_TYPE_STR</member_at_type>\n        <member_at_parent>PARENT_TYPE_SCHED</member_at_parent>\n        <member_verify_function>\n        <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n        <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n        </member_verify_function>\n   </attributes>\n   <attributes>\n\t<member_index>SCHED_ATR_partition</member_index>\n\t<member_name>ATTR_partition</member_name>\t\t<!-- \"partition\" -->\n\t<member_at_decode>decode_str</member_at_decode>\n\t<member_at_encode>encode_str</member_at_encode>\n\t<member_at_set>set_str</member_at_set>\n\t<member_at_comp>comp_str</member_at_comp>\n\t<member_at_free>free_str</member_at_free>\n\t<member_at_action>action_sched_partition</member_at_action>\n\t<member_at_flags>MGR_ONLY_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_STR</member_at_type>\n\t<member_at_parent>PARENT_TYPE_SCHED</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n   </attributes>\n   <attributes>\n<member_index>SCHED_ATR_sched_priv</member_index>\n\t<member_name>ATTR_sched_priv</member_name>\t\t<!-- \"sched_priv\" 
-->\n\t<member_at_decode>decode_str</member_at_decode>\n\t<member_at_encode>encode_str</member_at_encode>\n\t<member_at_set>set_str</member_at_set>\n\t<member_at_comp>comp_str</member_at_comp>\n\t<member_at_free>free_str</member_at_free>\n\t<member_at_action>action_sched_priv</member_at_action>\n\t<member_at_flags>MGR_ONLY_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_STR</member_at_type>\n\t<member_at_parent>PARENT_TYPE_SCHED</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n   </attributes>\n   <attributes>\n<member_index>SCHED_ATR_sched_log</member_index>\n\t<member_name>ATTR_sched_log</member_name>\t\t<!-- \"sched_log\" -->\n\t<member_at_decode>decode_str</member_at_decode>\n\t<member_at_encode>encode_str</member_at_encode>\n\t<member_at_set>set_str</member_at_set>\n\t<member_at_comp>comp_str</member_at_comp>\n\t<member_at_free>free_str</member_at_free>\n\t<member_at_action>action_sched_log</member_at_action>\n\t<member_at_flags>MGR_ONLY_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_STR</member_at_type>\n\t<member_at_parent>PARENT_TYPE_SCHED</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n   </attributes>\n   <attributes>\n\t<member_index>SCHED_ATR_scheduling</member_index>\n        <member_name>ATTR_scheduling</member_name>\n        <member_at_decode>decode_b</member_at_decode>\n        <member_at_encode>encode_b</member_at_encode>\n        <member_at_set>set_b</member_at_set>\n        <member_at_comp>comp_b</member_at_comp>\n        <member_at_free>free_null</member_at_free>\n        <member_at_action>poke_scheduler</member_at_action>\n        <member_at_flags>MGR_ONLY_SET</member_at_flags>\n        <member_at_type>ATR_TYPE_BOOL</member_at_type>\n        <member_at_parent>PARENT_TYPE_SCHED</member_at_parent>\n        <member_verify_function>\n   
     <ECL>verify_datatype_bool</ECL>\n        <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n        </member_verify_function>\n   </attributes>\n   <attributes>\n\t<member_index>SCHED_ATR_schediteration</member_index>\n        <member_name>ATTR_schediteration</member_name>            <!-- \"scheduler_iteration\" -->\n        <member_at_decode>decode_l</member_at_decode>\n        <member_at_encode>encode_l</member_at_encode>\n        <member_at_set>set_l</member_at_set>\n        <member_at_comp>comp_l</member_at_comp>\n        <member_at_free>free_null</member_at_free>\n        <member_at_action>action_sched_iteration</member_at_action>\n        <member_at_flags>MGR_ONLY_SET</member_at_flags>\n        <member_at_type>ATR_TYPE_LONG</member_at_type>\n        <member_at_parent>PARENT_TYPE_SCHED</member_at_parent>\n        <member_verify_function>\n        <ECL>verify_datatype_long</ECL>\n        <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n        </member_verify_function>\n   </attributes>\n   <attributes>\n\t<member_index>SCHED_ATR_sched_user</member_index>\n\t<member_name>ATTR_sched_user</member_name>\t\t<!-- \"sched_user\" -->\n\t<member_at_decode>decode_str</member_at_decode>\n\t<member_at_encode>encode_str</member_at_encode>\n\t<member_at_set>set_str</member_at_set>\n\t<member_at_comp>comp_str</member_at_comp>\n\t<member_at_free>free_str</member_at_free>\n\t<member_at_action>action_sched_user</member_at_action>\n\t<member_at_flags>MGR_ONLY_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_STR</member_at_type>\n\t<member_at_parent>PARENT_TYPE_SCHED</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n   </attributes>\n   <attributes>\n\t<member_index>SCHED_ATR_sched_comment</member_index>\n        <member_name>ATTR_comment</member_name>                   <!-- \"comment\" -->\n        <member_at_decode>decode_str</member_at_decode>\n        
<member_at_encode>encode_str</member_at_encode>\n        <member_at_set>set_str</member_at_set>\n        <member_at_comp>comp_str</member_at_comp>\n        <member_at_free>free_str</member_at_free>\n        <member_at_action>NULL_FUNC</member_at_action>\n        <member_at_flags>MGR_ONLY_SET</member_at_flags>\n        <member_at_type>ATR_TYPE_STR</member_at_type>\n        <member_at_parent>PARENT_TYPE_SCHED</member_at_parent>\n        <member_verify_function>\n        <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n        <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n        </member_verify_function>\n    </attributes>\n    <attributes>\n\t<member_index>SCHED_ATR_sched_state</member_index>\n\t<member_name>ATTR_sched_state</member_name>\t<!-- \"state\" -->\n\t<member_at_decode>decode_str</member_at_decode>\n\t<member_at_encode>encode_str</member_at_encode>\n\t<member_at_set>set_str</member_at_set>\n\t<member_at_comp>comp_str</member_at_comp>\n\t<member_at_free>free_str</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>READ_ONLY | ATR_DFLAG_SSET</member_at_flags>\n\t<member_at_type>ATR_TYPE_STR</member_at_type>\n\t<member_at_parent>PARENT_TYPE_SCHED</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n   </attributes>\n   <attributes>\n\t<member_index>SCHED_ATR_preempt_queue_prio</member_index>\n\t<member_name>ATTR_sched_preempt_queue_prio</member_name>\t<!-- \"preempt_queue_prio\" 
-->\n\t<member_at_decode>decode_l</member_at_decode>\n\t<member_at_encode>encode_l</member_at_encode>\n\t<member_at_set>set_l</member_at_set>\n\t<member_at_comp>comp_l</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>MGR_ONLY_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_LONG</member_at_type>\n\t<member_at_parent>PARENT_TYPE_SCHED</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n   </attributes>\n   <attributes>\n\t<member_index>SCHED_ATR_preempt_prio</member_index>\n\t<member_name>ATTR_sched_preempt_prio</member_name>\t<!-- \"preempt_prio\" -->\n\t<member_at_decode>decode_str</member_at_decode>\n\t<member_at_encode>encode_str</member_at_encode>\n\t<member_at_set>set_str</member_at_set>\n\t<member_at_comp>comp_str</member_at_comp>\n\t<member_at_free>free_str</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>MGR_ONLY_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_STR</member_at_type>\n\t<member_at_parent>PARENT_TYPE_SCHED</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n\t<ECL>verify_value_preempt_prio</ECL>\n\t</member_verify_function>\n   </attributes>\n   <attributes>\n\t<member_index>SCHED_ATR_preempt_order</member_index>\n\t<member_name>ATTR_sched_preempt_order</member_name>\t<!-- \"preempt_order\" 
-->\n\t<member_at_decode>decode_str</member_at_decode>\n\t<member_at_encode>encode_str</member_at_encode>\n\t<member_at_set>set_str</member_at_set>\n\t<member_at_comp>comp_str</member_at_comp>\n\t<member_at_free>free_str</member_at_free>\n\t<member_at_action>action_sched_preempt_order</member_at_action>\n\t<member_at_flags>MGR_ONLY_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_STR</member_at_type>\n\t<member_at_parent>PARENT_TYPE_SCHED</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n\t<ECL>verify_value_preempt_order</ECL>\n\t</member_verify_function>\n   </attributes>\n   <attributes>\n\t<member_index>SCHED_ATR_preempt_sort</member_index>\n\t<member_name>ATTR_sched_preempt_sort</member_name>\t<!-- \"preempt_sort\" -->\n\t<member_at_decode>decode_str</member_at_decode>\n\t<member_at_encode>encode_str</member_at_encode>\n\t<member_at_set>set_str</member_at_set>\n\t<member_at_comp>comp_str</member_at_comp>\n\t<member_at_free>free_str</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>MGR_ONLY_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_STR</member_at_type>\n\t<member_at_parent>PARENT_TYPE_SCHED</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n\t<ECL>verify_value_preempt_sort</ECL>\n\t</member_verify_function>\n   </attributes>\n   <attributes>\n   <member_index>SCHED_ATR_log_events</member_index>\n        <member_name>ATTR_logevents</member_name>          <!-- \"log_events\" -->\n        <member_at_decode>decode_l</member_at_decode>\n        <member_at_encode>encode_l</member_at_encode>\n        <member_at_set>set_l</member_at_set>\n        <member_at_comp>comp_l</member_at_comp>\n        <member_at_free>free_null</member_at_free>\n        <member_at_action>NULL_FUNC</member_at_action>\n        <member_at_flags>NO_USER_SET</member_at_flags>\n        <member_at_type>ATR_TYPE_LONG</member_at_type>\n        
<member_at_parent>PARENT_TYPE_SCHED</member_at_parent>\n        <member_verify_function>\n        <ECL>verify_datatype_long</ECL>\n        <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n        </member_verify_function>\n   </attributes>\n   <attributes>\n\t<member_index>SCHED_ATR_job_sort_formula</member_index>\n\t<member_name>ATTR_job_sort_formula</member_name>     <!-- \"job_sort_formula\" -->\n\t<member_at_decode>decode_formula</member_at_decode>\n\t<member_at_encode>encode_str</member_at_encode>\n\t<member_at_set>set_str</member_at_set>\n\t<member_at_comp>comp_str</member_at_comp>\n\t<member_at_free>free_str</member_at_free>\n\t<member_at_action>validate_job_formula</member_at_action>\n\t<member_at_flags>MGR_ONLY_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_STR</member_at_type>\n\t<member_at_parent>PARENT_TYPE_SCHED</member_at_parent>\n\t<member_verify_function>\n\t<ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n\t<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n\t</member_verify_function>\n   </attributes>\n   <attributes>\n\t<member_index>SCHED_ATR_server_dyn_res_alarm</member_index>\n\t<member_name>ATTR_sched_server_dyn_res_alarm</member_name>\t<!-- \"server_dyn_res_alarm\" -->\n\t<member_at_decode>decode_l</member_at_decode>\n\t<member_at_encode>encode_l</member_at_encode>\n\t<member_at_set>set_l</member_at_set>\n\t<member_at_comp>comp_l</member_at_comp>\n\t<member_at_free>free_null</member_at_free>\n\t<member_at_action>NULL_FUNC</member_at_action>\n\t<member_at_flags>MGR_ONLY_SET</member_at_flags>\n\t<member_at_type>ATR_TYPE_LONG</member_at_type>\n\t<member_at_parent>PARENT_TYPE_SCHED</member_at_parent>\n\t<member_verify_function>\n\t<ECL>verify_datatype_long</ECL>\n\t<ECL>verify_value_zero_or_positive</ECL>\n\t</member_verify_function>\n   </attributes>\n   <attributes>\n\t<member_index>SCHED_ATR_attr_update_period</member_index>\n    <member_name>ATTR_attr_update_period</member_name> <!-- \"attr_update_period\" -->\n    <member_at_decode>decode_l</member_at_decode>\n    
<member_at_encode>encode_l</member_at_encode>\n    <member_at_set>set_l</member_at_set>\n    <member_at_comp>comp_l</member_at_comp>\n    <member_at_free>free_null</member_at_free>\n    <member_at_action>NULL_FUNC</member_at_action>\n    <member_at_flags>MGR_ONLY_SET</member_at_flags>\n    <member_at_type>ATR_TYPE_LONG</member_at_type>\n    <member_at_parent>PARENT_TYPE_SCHED</member_at_parent>\n    <member_verify_function>\n    <ECL>verify_datatype_long</ECL>\n    <ECL>verify_value_zero_or_positive</ECL>\n    </member_verify_function>\n   </attributes>\n\n    <tail>\n     <SVR>\n         #include \"site_sched_attr_def.h\"\n\t};\n     </SVR>\n     <ECL>\n\t};\n\tint ecl_sched_attr_size=sizeof(ecl_sched_attr_def)/sizeof(ecl_attribute_def);\n     </ECL>\n   </tail>\n</data>\n"
  },
  {
    "path": "src/lib/Libattr/master_svr_attr_def.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<data>\n   <!--\n/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n  svr_attr_def is the array of attribute definitions for the server.\n  The entries for each attribute are (see attribute.h):\n       name,\n       decode function,\n       encode function,\n       set function,\n       compare function,\n       free value space function,\n       action function,\n       access permission flags,\n       value type\n\n  Each legal server attribute is defined here.\n  ecl_svr_attr_def is the array of attribute definitions\n  for the server.\n  The structure is used by the ECL verification functionality\n  to determine which verification function is to be called for each\n  attribute/resource.\n  The entries for each attribute are (see attribute.h):\n       name,\n       member_flag,\n       member_type,\n       verify datatype function,\n       verify value function\n\n  -->\n   <head>\n      <SVR>#include &lt;pbs_config.h&gt;\n      #include &lt;sys/types.h&gt;\n      #include \"pbs_ifl.h\"\n      #include \"list_link.h\"\n      #include \"attribute.h\"\n      #include \"pbs_nodes.h\"\n      #include \"svrfunc.h\"\n      #include \"pbs_error.h\"\n      #include \"pbs_python.h\"\n\n      long resv_retry_time = RESV_RETRY_TIME_DEFAULT;\n\n      attribute_def svr_attr_def[] = {</SVR>\n      <ECL>#include &lt;pbs_config.h&gt;\n      #include &lt;sys/types.h&gt;\n      #include \"pbs_ifl.h\"\n      #include 
\"pbs_ecl.h\"\n\n      ecl_attribute_def ecl_svr_attr_def[] = {</ECL>\n   </head>\n   <attributes>\n      <member_index>SVR_ATR_State</member_index>\n      <member_name>ATTR_status</member_name>\n      <member_at_decode>decode_null</member_at_decode>\n      <member_at_encode>encode_svrstate</member_at_encode>\n      <member_at_set>set_null</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY | ATR_DFLAG_NOSAVM</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_SvrHost</member_index>\n      <member_name>ATTR_SvrHost</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_null</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_scheduling</member_index>\n      <member_name>ATTR_scheduling</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      
<member_at_free>free_null</member_at_free>\n      <member_at_action>poke_scheduler</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_max_running</member_index>\n      <member_name>ATTR_maxrun</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>check_no_entlim</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_max_queued</member_index>\n      <member_name>ATTR_max_queued</member_name>\n      <member_at_decode>decode_entlim</member_at_decode>\n      <member_at_encode>encode_entlim</member_at_encode>\n      <member_at_set>set_entlim</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_entlim</member_at_free>\n      <member_at_action>action_entlim_ct</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_ENTITY</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   
</attributes>\n   <attributes>\n      <member_index>SVR_ATR_max_queued_res</member_index>\n      <member_name>ATTR_max_queued_res</member_name>\n      <member_at_decode>decode_entlim_res</member_at_decode>\n      <member_at_encode>encode_entlim</member_at_encode>\n      <member_at_set>set_entlim_res</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_entlim</member_at_free>\n      <member_at_action>action_entlim_res</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_ENTITY</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_max_run</member_index>\n      <member_name>ATTR_max_run</member_name>\n      <member_at_decode>decode_entlim</member_at_decode>\n      <member_at_encode>encode_entlim</member_at_encode>\n      <member_at_set>set_entlim</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_entlim</member_at_free>\n      <member_at_action>action_entlim_chk</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_ENTITY</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_max_run_res</member_index>\n      <member_name>ATTR_max_run_res</member_name>\n      <member_at_decode>decode_entlim_res</member_at_decode>\n      <member_at_encode>encode_entlim</member_at_encode>\n      <member_at_set>set_entlim_res</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      
<member_at_free>free_entlim</member_at_free>\n      <member_at_action>action_entlim_chk</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_ENTITY</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_max_run_soft</member_index>\n      <member_name>ATTR_max_run_soft</member_name>\n      <member_at_decode>decode_entlim</member_at_decode>\n      <member_at_encode>encode_entlim</member_at_encode>\n      <member_at_set>set_entlim</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_entlim</member_at_free>\n      <member_at_action>action_entlim_chk</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_ENTITY</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_max_run_res_soft</member_index>\n      <member_name>ATTR_max_run_res_soft</member_name>\n      <member_at_decode>decode_entlim_res</member_at_decode>\n      <member_at_encode>encode_entlim</member_at_encode>\n      <member_at_set>set_entlim_res</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_entlim</member_at_free>\n      <member_at_action>action_entlim_chk</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_ENTITY</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         
<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_MaxUserRun</member_index>\n      <member_name>ATTR_maxuserrun</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>check_no_entlim</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_MaxGrpRun</member_index>\n      <member_name>ATTR_maxgrprun</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>check_no_entlim</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_MaxUserRes</member_index>\n      <member_name>ATTR_maxuserres</member_name>\n      <member_at_decode>decode_resc</member_at_decode>\n      <member_at_encode>encode_resc</member_at_encode>\n      <member_at_set>set_resc</member_at_set>\n      <member_at_comp>comp_resc</member_at_comp>\n      
<member_at_free>free_resc</member_at_free>\n      <member_at_action>check_no_entlim</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_RESC</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_resc</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_MaxGroupRes</member_index>\n      <member_name>ATTR_maxgroupres</member_name>\n      <member_at_decode>decode_resc</member_at_decode>\n      <member_at_encode>encode_resc</member_at_encode>\n      <member_at_set>set_resc</member_at_set>\n      <member_at_comp>comp_resc</member_at_comp>\n      <member_at_free>free_resc</member_at_free>\n      <member_at_action>check_no_entlim</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_RESC</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_resc</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_MaxUserRunSoft</member_index>\n      <member_name>ATTR_maxuserrunsoft</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>check_no_entlim</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   
</attributes>\n   <attributes>\n      <member_index>SVR_ATR_MaxGrpRunSoft</member_index>\n      <member_name>ATTR_maxgrprunsoft</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>check_no_entlim</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_MaxUserResSoft</member_index>\n      <member_name>ATTR_maxuserressoft</member_name>\n      <member_at_decode>decode_resc</member_at_decode>\n      <member_at_encode>encode_resc</member_at_encode>\n      <member_at_set>set_resc</member_at_set>\n      <member_at_comp>comp_resc</member_at_comp>\n      <member_at_free>free_resc</member_at_free>\n      <member_at_action>check_no_entlim</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_RESC</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_resc</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_MaxGroupResSoft</member_index>\n      <member_name>ATTR_maxgroupressoft</member_name>\n      <member_at_decode>decode_resc</member_at_decode>\n      <member_at_encode>encode_resc</member_at_encode>\n      <member_at_set>set_resc</member_at_set>\n      <member_at_comp>comp_resc</member_at_comp>\n      <member_at_free>free_resc</member_at_free>\n      
<member_at_action>check_no_entlim</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_RESC</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_resc</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_PNames</member_index>\n      <member_name>ATTR_PNames</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_arst</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_ARST</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_TotalJobs</member_index>\n      <member_name>ATTR_total</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_null</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      
<member_index>SVR_ATR_JobsByState</member_index>\n      <member_name>ATTR_count</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_null</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_acl_host_enable</member_index>\n      <member_name>ATTR_aclhten</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_acl_hosts</member_index>\n      <member_name>ATTR_aclhost</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_hostacl</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      
<member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_ACL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_acl_host_moms_enable</member_index>\n      <member_name>ATTR_aclhostmomsen</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_acl_Resvhost_enable</member_index>\n      <member_name>ATTR_aclResvhten</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_acl_Resvhosts</member_index>\n      
<member_name>ATTR_aclResvhost</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_hostacl</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_ACL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_acl_ResvGroup_enable</member_index>\n      <member_name>ATTR_aclResvgren</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_acl_ResvGroups</member_index>\n      <member_name>ATTR_aclResvgroup</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_arst</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      
<member_at_type>ATR_TYPE_ACL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_AclUserEnabled</member_index>\n      <member_name>ATTR_acluren</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_AclUsers</member_index>\n      <member_name>ATTR_acluser</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_uacl</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_ACL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_AclResvUserEnabled</member_index>\n      <member_name>ATTR_aclResvuren</member_name>\n      
<member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_AclResvUsers</member_index>\n      <member_name>ATTR_aclResvuser</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_uacl</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_ACL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_AclRoot</member_index>\n      <member_name>ATTR_aclroot</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_uacl</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_ACL</member_at_type>\n      
<member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_managers</member_index>\n      <member_name>ATTR_managers</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_uacl</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>manager_oper_chk</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_ACL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_mgr_opr_acl_check</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_operators</member_index>\n      <member_name>ATTR_operators</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_uacl</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>manager_oper_chk</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_ACL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_mgr_opr_acl_check</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_dflt_que</member_index>\n      <member_name>ATTR_dfltque</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      
<member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>default_queue_chk</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_log_events</member_index>\n      <member_name>ATTR_logevents</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>set_log_events</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_mailer</member_index>\n      <member_name>ATTR_mailer</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         
<ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_mailfrom</member_index>\n      <member_name>ATTR_mailfrom</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_query_others</member_index>\n      <member_name>ATTR_queryother</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_resource_avail</member_index>\n      <member_name>ATTR_rescavail</member_name>\n      <member_at_decode>decode_resc</member_at_decode>\n      <member_at_encode>encode_resc</member_at_encode>\n      <member_at_set>set_resc</member_at_set>\n      
<member_at_comp>comp_resc</member_at_comp>\n      <member_at_free>free_resc</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_RESC</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_resc</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_resource_deflt</member_index>\n      <member_name>ATTR_rescdflt</member_name>\n      <member_at_decode>decode_resc</member_at_decode>\n      <member_at_encode>encode_resc</member_at_encode>\n      <member_at_set>set_resc</member_at_set>\n      <member_at_comp>comp_resc</member_at_comp>\n      <member_at_free>free_resc</member_at_free>\n      <member_at_action>action_resc_dflt_svr</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_RESC</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_resc</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_DefaultChunk</member_index>\n      <member_name>ATTR_DefaultChunk</member_name>\n      <member_at_decode>decode_resc</member_at_decode>\n      <member_at_encode>encode_resc</member_at_encode>\n      <member_at_set>set_resc</member_at_set>\n      <member_at_comp>comp_resc</member_at_comp>\n      <member_at_free>free_resc</member_at_free>\n      <member_at_action>deflt_chunk_action</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_RESC</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         
<ECL>verify_value_resc</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_ResourceMax</member_index>\n      <member_name>ATTR_rescmax</member_name>\n      <member_at_decode>decode_resc</member_at_decode>\n      <member_at_encode>encode_resc</member_at_encode>\n      <member_at_set>set_resources_min_max</member_at_set>\n      <member_at_comp>comp_resc</member_at_comp>\n      <member_at_free>free_resc</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_RESC</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_resc</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_resource_assn</member_index>\n      <member_name>ATTR_rescassn</member_name>\n      <member_at_decode>decode_resc</member_at_decode>\n      <member_at_encode>encode_resc</member_at_encode>\n      <member_at_set>set_resc</member_at_set>\n      <member_at_comp>comp_resc</member_at_comp>\n      <member_at_free>free_resc</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY</member_at_flags>\n      <member_at_type>ATR_TYPE_RESC</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_resc</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_resource_cost</member_index>\n      <member_name>ATTR_resccost</member_name>\n      <member_at_decode>decode_rcost</member_at_decode>\n      <member_at_encode>encode_rcost</member_at_encode>\n      <member_at_set>set_rcost</member_at_set>\n      <member_at_comp>NULL_FUNC_CMP</member_at_comp>\n      
<member_at_free>free_rcost</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_RESC</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_resc</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_sys_cost</member_index>\n      <member_name>ATTR_syscost</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>NULL_FUNC_CMP</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_scheduler_iteration</member_index>\n      <member_name>ATTR_schediteration</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>action_svr_iteration</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   
<attributes>\n      <member_index>SVR_ATR_Comment</member_index>\n      <member_name>ATTR_comment</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_DefNode</member_index>\n      <member_name>ATTR_defnode</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_NodePack</member_index>\n      <member_name>ATTR_nodepack</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      
<member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_FlatUID</member_index>\n      <member_name>ATTR_FlatUID</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_ResvEnable</member_index>\n      <member_name>ATTR_ResvEnable</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_NodeFailReq</member_index>\n      <member_name>ATTR_nodefailrq</member_name>\n      
<member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>set_node_fail_requeue</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_maxarraysize</member_index>\n      <member_name>ATTR_maxarraysize</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_ResendTermDelay</member_index>\n      <member_name>ATTR_resendtermdelay</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>set_resend_term_delay</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      
<member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_ReqCredEnable</member_index>\n      <member_name>ATTR_ReqCredEnable</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_ReqCred</member_index>\n      <member_name>ATTR_ReqCred</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>cred_name_okay</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>verify_value_credname</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_NodeGroupEnable</member_index>\n      <member_name>ATTR_NodeGroupEnable</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      
<member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_NodeGroupKey</member_index>\n      <member_name>ATTR_NodeGroupKey</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_arst</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>is_valid_resource</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_ARST</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_dfltqdelargs</member_index>\n      <member_name>ATTR_dfltqdelargs</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      
<member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_dfltqsubargs</member_index>\n      <member_name>ATTR_dfltqsubargs</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>force_qsub_daemons_update_action</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_rpp_retry</member_index>\n      <member_name>ATTR_rpp_retry</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>set_rpp_retry</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>verify_value_zero_or_positive</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_rpp_highwater</member_index>\n      <member_name>ATTR_rpp_highwater</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      
<member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>set_rpp_highwater</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>verify_value_non_zero_positive</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_pbs_license_info</member_index>\n      <member_name>ATTR_pbs_license_info</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>set_license_location</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_license_min</member_index>\n      <member_name>ATTR_license_min</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>set_license_min</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         
<ECL>verify_datatype_long</ECL>\n         <ECL>verify_value_minlicenses</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_license_max</member_index>\n      <member_name>ATTR_license_max</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>set_license_max</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>verify_value_maxlicenses</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_license_linger</member_index>\n      <member_name>ATTR_license_linger</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>set_license_linger</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>verify_value_licenselinger</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_license_count</member_index>\n      <member_name>ATTR_license_count</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_null</member_at_set>\n      
<member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_version</member_index>\n      <member_name>\"pbs_version\"</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>READ_ONLY</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_job_sort_formula</member_index>\n      <member_name>ATTR_job_sort_formula</member_name>\n      <member_at_decode>decode_formula</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      <member_at_action>validate_job_formula</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         
<ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_EligibleTimeEnable</member_index>\n      <member_name>ATTR_EligibleTimeEnable</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>eligibletime_action</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_resv_retry_time</member_index>\n      <member_name>ATTR_resv_retry_time</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>action_reserve_retry_time</member_at_action>\n      <member_at_flags>ATR_DFLAG_MGWR | ATR_DFLAG_MGRD</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>verify_value_zero_or_positive</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_resv_retry_init</member_index>\n      <member_name>ATTR_resv_retry_init</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      
<member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>action_reserve_retry_init</member_at_action>\n      <member_at_flags>ATR_DFLAG_MGWR | ATR_DFLAG_MGRD</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>verify_value_zero_or_positive</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_JobHistoryEnable</member_index>\n      <member_name>ATTR_JobHistoryEnable</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>set_job_history_enable</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_JobHistoryDuration</member_index>\n      <member_name>ATTR_JobHistoryDuration</member_name>\n      <member_at_decode>decode_time</member_at_decode>\n      <member_at_encode>encode_time</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>set_job_history_duration</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         
<ECL>verify_datatype_time</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_ProvisionEnable</member_index>\n      <member_name>ATTR_ProvisionEnable</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_SvWR|ATR_DFLAG_MGRD</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_max_concurrent_prov</member_index>\n      <member_name>ATTR_max_concurrent_prov</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>svr_max_conc_prov_action</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_provision_timeout</member_index>\n      <member_name>ATTR_provision_timeout</member_name>\n      <member_at_decode>decode_time</member_at_decode>\n      <member_at_encode>encode_time</member_at_encode>\n      
<member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_SvWR</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_time</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_resv_post_processing</member_index>\n      <member_name>ATTR_resv_post_processing</member_name>\n      <member_at_decode>decode_time</member_at_decode>\n      <member_at_encode>encode_time</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_time</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_BackfillDepth</member_index>\n      <member_name>ATTR_backfill_depth</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>action_backfill_depth</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         
<ECL>verify_datatype_long</ECL>\n         <ECL>verify_value_zero_or_positive</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_clear_est_enable</member_index>\n      <member_name>ATTR_clearesten</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>action_clear_topjob_estimates</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_JobRequeTimeout</member_index>\n      <member_name>ATTR_job_requeue_timeout</member_name>\n      <member_at_decode>decode_time</member_at_decode>\n      <member_at_encode>encode_time</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>at_non_zero_time</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_time</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_PythonRestartMaxHooks</member_index>\n      <member_name>ATTR_python_restart_max_hooks</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      
<member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>verify_value_non_zero_positive</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_PythonRestartMaxObjects</member_index>\n      <member_name>ATTR_python_restart_max_objects</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>verify_value_non_zero_positive</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_PythonRestartMinInterval</member_index>\n      <member_name>ATTR_python_restart_min_interval</member_name>\n      <member_at_decode>decode_time</member_at_decode>\n      <member_at_encode>encode_time</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>at_non_zero_time</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      
<member_verify_function>\n         <ECL>verify_datatype_time</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      #include \"site_svr_attr_def.h\"\n      <member_index>SVR_ATR_queued_jobs_threshold</member_index>\n      <member_name>ATTR_queued_jobs_threshold</member_name>\n      <member_at_decode>decode_entlim</member_at_decode>\n      <member_at_encode>encode_entlim</member_at_encode>\n      <member_at_set>set_entlim</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_entlim</member_at_free>\n      <member_at_action>action_entlim_ct</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_ENTITY</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_queued_jobs_threshold_res</member_index>\n      <member_name>ATTR_queued_jobs_threshold_res</member_name>\n      <member_at_decode>decode_entlim_res</member_at_decode>\n      <member_at_encode>encode_entlim</member_at_encode>\n      <member_at_set>set_entlim_res</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_entlim</member_at_free>\n      <member_at_action>action_entlim_res</member_at_action>\n      <member_at_flags>NO_USER_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_ENTITY</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_jobscript_max_size</member_index>\n      <member_name>ATTR_jobscript_max_size</member_name>\n      
<member_at_decode>decode_size</member_at_decode>\n      <member_at_encode>encode_size</member_at_encode>\n      <member_at_set>set_size</member_at_set>\n      <member_at_comp>comp_size</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>action_jobscript_max_size</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_SIZE</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_size</ECL>\n         <ECL>verify_value_non_zero_positive</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_restrict_res_to_release_on_suspend</member_index>\n      <member_name>ATTR_restrict_res_to_release_on_suspend</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_arst</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>action_check_res_to_release</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_ARST</member_at_type>\n      <member_at_parent>PARENT_TYPE_SCHED</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_PowerProvisioning</member_index>\n      <member_name>ATTR_power_provisioning</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      
<member_at_flags>ATR_DFLAG_SvWR|ATR_DFLAG_MGRD</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_show_hidden_attribs</member_index>\n      <member_name>ATTR_show_hidden_attribs</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_sync_mom_hookfiles_timeout</member_index>\n      <member_name>ATTR_sync_mom_hookfiles_timeout</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>verify_value_non_zero_positive</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      
<member_index>SVR_ATR_rpp_max_pkt_check</member_index>\n      <member_name>ATTR_rpp_max_pkt_check</member_name>\n      <member_at_decode>decode_l</member_at_decode>\n      <member_at_encode>encode_l</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long</ECL>\n         <ECL>verify_value_non_zero_positive</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_max_job_sequence_id</member_index>\n      <member_name>ATTR_max_job_sequence_id</member_name>\n      <member_at_decode>decode_ll</member_at_decode>\n      <member_at_encode>encode_ll</member_at_encode>\n      <member_at_set>set_ll</member_at_set>\n      <member_at_comp>comp_ll</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>set_max_job_sequence_id</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_long_long</ECL>\n         <ECL>verify_value_non_zero_positive_long_long</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_has_runjob_hook</member_index>\n      <member_name>ATTR_has_runjob_hook</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      
<member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>ATR_DFLAG_SvWR</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_acl_krb_realm_enable</member_index>\n      <member_name>ATTR_acl_krb_realm_enable</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_acl_krb_realms</member_index>\n      <member_name>ATTR_acl_krb_realms</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_hostacl</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_ACL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      
<member_index>SVR_ATR_acl_krb_submit_realms</member_index>\n      <member_name>ATTR_acl_krb_submit_realms</member_name>\n      <member_at_decode>decode_arst</member_at_decode>\n      <member_at_encode>encode_arst</member_at_encode>\n      <member_at_set>set_hostacl</member_at_set>\n      <member_at_comp>comp_arst</member_at_comp>\n      <member_at_free>free_arst</member_at_free>\n      <member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_ACL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_cred_renew_enable</member_index>\n      <member_name>ATTR_cred_renew_enable</member_name>\n      <member_at_decode>decode_b</member_at_decode>\n      <member_at_encode>encode_b</member_at_encode>\n      <member_at_set>set_b</member_at_set>\n      <member_at_comp>comp_b</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>set_cred_renew_enable</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_BOOL</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_bool</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_cred_renew_tool</member_index>\n      <member_name>ATTR_cred_renew_tool</member_name>\n      <member_at_decode>decode_str</member_at_decode>\n      <member_at_encode>encode_str</member_at_encode>\n      <member_at_set>set_str</member_at_set>\n      <member_at_comp>comp_str</member_at_comp>\n      <member_at_free>free_str</member_at_free>\n      
<member_at_action>NULL_FUNC</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_STR</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>NULL_VERIFY_DATATYPE_FUNC</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_cred_renew_period</member_index>\n      <member_name>ATTR_cred_renew_period</member_name>\n      <member_at_decode>decode_time</member_at_decode>\n      <member_at_encode>encode_time</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>set_cred_renew_period</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_time</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   </attributes>\n   <attributes>\n      <member_index>SVR_ATR_cred_renew_cache_period</member_index>\n      <member_name>ATTR_cred_renew_cache_period</member_name>\n      <member_at_decode>decode_time</member_at_decode>\n      <member_at_encode>encode_time</member_at_encode>\n      <member_at_set>set_l</member_at_set>\n      <member_at_comp>comp_l</member_at_comp>\n      <member_at_free>free_null</member_at_free>\n      <member_at_action>set_cred_renew_cache_period</member_at_action>\n      <member_at_flags>MGR_ONLY_SET</member_at_flags>\n      <member_at_type>ATR_TYPE_LONG</member_at_type>\n      <member_at_parent>PARENT_TYPE_SERVER</member_at_parent>\n      <member_verify_function>\n         <ECL>verify_datatype_time</ECL>\n         <ECL>NULL_VERIFY_VALUE_FUNC</ECL>\n      </member_verify_function>\n   
</attributes>\n   <tail>\n      <SVR>};</SVR>\n      <ECL>};\n\tint   ecl_svr_attr_size = sizeof(ecl_svr_attr_def) / sizeof(ecl_attribute_def);</ECL>\n   </tail>\n</data>\n"
  },
  {
    "path": "src/lib/Libattr/resc_map.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <ctype.h>\n#include <memory.h>\n#ifndef NDEBUG\n#include <stdio.h>\n#endif\n#include <stdlib.h>\n#include <string.h>\n#include \"pbs_ifl.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"pbs_error.h\"\n\n/**\n * @file\tresc_map.c\n * @brief\n * This file contains functions for mapping a known resource type into the\n * corresponding functions.  
They are used for adding custom resources where\n * the type is specified as a string such as \"boolean\" or \"string\".\n *\n * The mapping is accomplished by the resc_type_map structure which has an\n * entry for the various types of resources.\n *\n * At some point, it might make sense to merge this with the resource_def\n * used by the Server.\n */\n\nstatic struct resc_type_map resc_type_map_arr[] = {\n\t{\"boolean\",\n\t ATR_TYPE_BOOL,\n\t decode_b,\n\t encode_b,\n\t set_b,\n\t comp_b,\n\t free_null},\n\n\t{\"long\",\n\t ATR_TYPE_LONG,\n\t decode_l,\n\t encode_l,\n\t set_l,\n\t comp_l,\n\t free_null},\n\n\t{\"string\",\n\t ATR_TYPE_STR,\n\t decode_str,\n\t encode_str,\n\t set_str,\n\t comp_str,\n\t free_str},\n\n\t{\"size\",\n\t ATR_TYPE_SIZE,\n\t decode_size,\n\t encode_size,\n\t set_size,\n\t comp_size,\n\t free_null},\n\n\t{\"float\",\n\t ATR_TYPE_FLOAT,\n\t decode_f,\n\t encode_f,\n\t set_f,\n\t comp_f,\n\t free_null},\n\n\t{\"string_array\",\n\t ATR_TYPE_ARST,\n\t decode_arst,\n\t encode_arst,\n\t set_arst,\n\t comp_arst,\n\t free_arst}\n\n};\n\n/**\n * @brief\n *\tReturn a pointer to the resc_type_map entry corresponding to the\n *\tnumerical resource type as defined by ATR_TYPE_*.\n *\n * @par Functionality:\n *\tIndexes through the resc_type_map array until a match is found or the\n *\tend of the table is reached.\n *\n * @param[in]\ttypenum - Resource type value\n *\n * @return\tresc_type_map *\n * @retval\tpointer to map entry on success.\n * @retval\tNULL if no matching entry found.\n *\n * @par Side Effects: None\n *\n * @par MT-safe: yes\n *\n */\nstruct resc_type_map *\nfind_resc_type_map_by_typev(int typenum)\n{\n\tint i;\n\tint s = sizeof(resc_type_map_arr) / sizeof(struct resc_type_map);\n\n\tfor (i = 0; i < s; ++i) {\n\t\tif (resc_type_map_arr[i].rtm_type == typenum)\n\t\t\treturn (&resc_type_map_arr[i]);\n\t}\n\treturn NULL; /* didn't find the matching type */\n}\n\n/**\n * @brief\n *\tReturn a pointer to the resc_type_map entry 
corresponding to the\n *\tresource type as specified as a string.\n *\n * @par Functionality:\n *\tIndexes through the resc_type_map array until a match is found or the\n *\tend of the table is reached.\n *\n * @param[in]\ttypestr - Resource type as a string\n *\n * @return\tresc_type_map *\n * @retval\tpointer to map entry on success.\n * @retval\tNULL if no matching entry found.\n *\n * @par Side Effects: None\n *\n * @par MT-safe: yes\n *\n */\nstruct resc_type_map *\nfind_resc_type_map_by_typest(char *typestr)\n{\n\tint i;\n\tint s = sizeof(resc_type_map_arr) / sizeof(struct resc_type_map);\n\n\tif (typestr == NULL)\n\t\treturn NULL;\n\n\tfor (i = 0; i < s; ++i) {\n\t\tif (strcmp(typestr, resc_type_map_arr[i].rtm_rname) == 0)\n\t\t\treturn (&resc_type_map_arr[i]);\n\t}\n\treturn NULL; /* didn't find the matching type */\n}\n\n/**\n * @brief\n * \tReturns a string representation of numeric permission flags associated\n * \twith a resource\n *\n * @param[in] perms - The permission flags of the resource\n *\n * @par\n * \tCaller is responsible for freeing the returned string\n *\n * @return\tstring\n * @retval\tflag val as string\tsuccess\n * @retval\tNULL\t\t\terror\n */\nchar *\nfind_resc_flag_map(int perms)\n{\n\tchar *flags;\n\tint i = 0;\n\n\t/* 10 is a bit over the max number of flags that could be set */\n\tflags = malloc(10 * sizeof(char));\n\tif (flags == NULL) {\n\t\treturn NULL;\n\t}\n\tif (perms & ATR_DFLAG_CVTSLT)\n\t\tflags[i++] = 'h';\n\tif (perms & ATR_DFLAG_RASSN)\n\t\tflags[i++] = 'q';\n\tif (perms & ATR_DFLAG_ANASSN)\n\t\tflags[i++] = 'n';\n\telse if (perms & ATR_DFLAG_FNASSN)\n\t\tflags[i++] = 'f';\n\tif ((perms & (ATR_DFLAG_USRD | ATR_DFLAG_USWR)) == 0)\n\t\tflags[i++] = 'i';\n\telse if ((perms & ATR_DFLAG_USWR) == 0)\n\t\tflags[i++] = 'r';\n\tif (perms & ATR_DFLAG_MOM)\n\t\tflags[i++] = 'm';\n\n\tflags[i] = '\\0';\n\treturn flags;\n}\n"
  },
  {
    "path": "src/lib/Libattr/strToL.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <errno.h>\n#include \"Long.h\"\n#include \"Long_.h\"\n\n#undef strToL\n#undef strTouL\nu_Long strTouL(const char *nptr, char **endptr, int base);\n/**\n * @file\tstrToL.c\n *\n * @brief\n * \tstrToL - returns the Long value representing the string whose first\n *\tcharacter is *nptr, when interpreted as an integer in base, base.\n */\n\n/**\n * @brief\n * \tstrToL - returns the Long value representing the string whose first\n *\tcharacter is *nptr, when interpreted as an integer in base, base.\n *\n * @param[in]   \tnptr - pointer to string to convert to u_Long\n * @param[in/out]   \tendptr -  If endptr is not NULL, the function stores\n *              \t\tthe address of the first invalid character in *endptr.\n * @param[in] \t\tbase - If base is zero, the base of the integer is determined by the way the\n * \t\t\tstring starts.  The string is interpreted as decimal if the first\n * \t\t\tcharacter after leading white space and an optional sign is a digit\n * \t\t\tbetween 1 and 9, inclusive.  The string is interpreted as octal if the\n * \t\t\tfirst character after leading white space and an optional sign is the\n * \t\t\tdigit \"0\" and the next character is not an \"X\" (either upper or lower\n * \t\t\tcase).  
The string is interpreted as hexadecimal if the first character\n * \t\t\tafter leading white space and an optional sign is the digit \"0\",\n * \t\t\tfollowed by an \"X\" (either upper or lower case).\n *\n * \tIf base is greater than 1 and less than the number of characters in the\n *\tLong_dig array, it represents the base in which the number will be\n *\tinterpreted.  Characters for digits beyond 9 are represented by the\n *\tletters of the alphabet, either upper case or lower case.\n *\n * @return Long Returns the result of the conversion\n * @retval >= 0 The result of the conversion\n * @retval 0    FAILURE\n *\n */\n\nLong\nstrToL(const char *nptr, char **endptr, int base)\n{\n\tLong value;\n\n\tvalue = (Long) strTouL(nptr, endptr, base);\n\tif (Long_neg) {\n\t\tif (value >= 0) {\n\t\t\tvalue = lONG_MIN;\n\t\t\terrno = ERANGE;\n\t\t}\n\t} else if (value < 0) {\n\t\tvalue = lONG_MAX;\n\t\terrno = ERANGE;\n\t}\n\treturn (value);\n}\n"
  },
  {
    "path": "src/lib/Libattr/strTouL.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tstrTouL.c\n *\n * @brief\n *\treturns the unsigned Long value representing the string whose\n * \tfirst character is *nptr, when interpreted as an integer in base,\n *\tbase.\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <ctype.h>\n#include <stddef.h>\n#include <string.h>\n#include <errno.h>\n\n#include \"Long.h\"\n#include \"Long_.h\"\n\n#undef strTouL\n\n#ifndef TRUE\n#define TRUE 1\n#define FALSE 0\n#endif\n\nstatic unsigned x_val;\nstatic const char letters[] = \"abcdefghijklmnopqrstuvwxyz\";\nstatic const char Long_dig[] = LONG_DIG_VALUE;\nstatic char table[UCHAR_MAX + 1];\n\n/**\n * @brief\n *\treturns the unsigned Long value representing the string whose\n * \tfirst character is *nptr, when interpreted as an integer in base,\n *\tbase.\n *\n * @param[in]   nptr - pointer to string to convert to u_Long\n * @param[in/out]   endptr -  If endptr is not NULL, the function stores\n *              the address of the first invalid character in *endptr.\n * @param[in]   base - If base is zero, the base of the integer is determined by the way the\n * \t        string starts.  The string is interpreted as decimal if the first\n * \t        character after leading white space and an optional sign is a digit\n * \t        between 1 and 9, inclusive.  
The string is interpreted as octal if the\n * \t        first character after leading white space and an optional sign is the\n * \t        digit \"0\" and the next character is not an \"X\" (either upper or lower\n * \t        case).  The string is interpreted as hexadecimal if the first character\n * \t        after leading white space and an optional sign is the digit \"0\",\n * \t        followed by an \"X\" (either upper or lower case).\n *              If base is greater than 1 and less than the number of characters in the\n *\t        Long_dig array, it represents the base in which the number will be\n *\t        interpreted.  Characters for digits beyond 9 are represented by the\n *\t        letters of the alphabet, either upper case or lower case.\n *\n * @return u_Long Returns the result of the conversion\n * @retval >= 0\tThe result of the conversion\n * @retval 0\tFAILURE\n */\nu_Long\nstrTouL(const char *nptr, char **endptr, int base)\n{\n\tunsigned digit;\n\tu_Long limit = 0, value;\n\tenum {\n\t\tunknown1,\n\t\tunknown2,\n\t\thex1,\n\t\thex2,\n\t\thex3,\n\t\tknown,\n\t\tworking,\n\t\toverflow\n\t} state;\n\n\tif (table[(unsigned char) '1'] != 1) {\n\t\tint i; /* Initialize conversion table */\n\n\t\t(void) memset(table, CHAR_MAX, sizeof(table));\n\t\tfor (i = (int) strlen(Long_dig) - 1; i >= 0; i--)\n\t\t\ttable[(unsigned char) Long_dig[i]] = i;\n\t\tfor (i = (int) strlen(letters) - 1; i >= 0; i--)\n\t\t\ttable[(unsigned char) letters[i]] = i + 10;\n\t\tx_val = table[(unsigned char) 'x'];\n\t}\n\tif (nptr == NULL) {\n\t\tif (endptr != NULL)\n\t\t\t*endptr = (char *) nptr;\n\t\treturn (0);\n\t}\n\tif (base < 0 || base == 1 || (size_t) base > strlen(Long_dig)) {\n\t\terrno = EDOM;\n\t\tif (endptr != NULL)\n\t\t\t*endptr = (char *) nptr;\n\t\treturn (0);\n\t}\n\tswitch (base) {\n\t\tcase 0:\n\t\t\tstate = unknown1;\n\t\t\tbreak;\n\t\tcase 16:\n\t\t\tstate = hex1;\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tstate = known;\n\t}\n\twhile 
(isspace(*nptr++))\n\t\t;\n\tLong_neg = FALSE;\n\tswitch (*--nptr) {\n\t\tcase '-':\n\t\t\tLong_neg = TRUE;\n\t\tcase '+':\n\t\t\tnptr++;\n\t}\n\tvalue = 0;\n\twhile ((digit = table[(unsigned char) *nptr++]) != CHAR_MAX) {\n\t\tswitch (state) {\n\t\t\tcase unknown1:\n\t\t\t\tif (digit >= 10)\n\t\t\t\t\tgoto done;\n\t\t\t\tif (digit == 0) {\n\t\t\t\t\tstate = unknown2;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tbase = 10;\n\t\t\t\tstate = working;\n\t\t\t\tlimit = UlONG_MAX / 10;\n\t\t\t\tvalue = digit;\n\t\t\t\tbreak;\n\t\t\tcase unknown2:\n\t\t\t\tif (digit >= 8) {\n\t\t\t\t\tif (digit != x_val)\n\t\t\t\t\t\tgoto done;\n\t\t\t\t\tbase = 16;\n\t\t\t\t\tstate = hex3;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tbase = 8;\n\t\t\t\tstate = working;\n\t\t\t\tlimit = UlONG_MAX / 8;\n\t\t\t\tvalue = digit;\n\t\t\t\tbreak;\n\t\t\tcase hex1:\n\t\t\t\tif (digit >= base)\n\t\t\t\t\tgoto done;\n\t\t\t\tif (digit == 0) {\n\t\t\t\t\tstate = hex2;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tstate = working;\n\t\t\t\tlimit = UlONG_MAX / 16;\n\t\t\t\tvalue = digit;\n\t\t\t\tbreak;\n\t\t\tcase hex2:\n\t\t\t\tif (digit == x_val) {\n\t\t\t\t\tstate = hex3;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\tcase hex3:\n\t\t\tcase known:\n\t\t\t\tif (digit >= base)\n\t\t\t\t\tgoto done;\n\t\t\t\tstate = working;\n\t\t\t\tlimit = UlONG_MAX / base;\n\t\t\t\tvalue = digit;\n\t\t\t\tbreak;\n\t\t\tcase working:\n\t\t\t\tif (digit >= base)\n\t\t\t\t\tgoto done;\n\t\t\t\tif (value < limit) {\n\t\t\t\t\tvalue = value * base + digit;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tif (value > limit ||\n\t\t\t\t    UlONG_MAX - (value *= base) < digit) {\n\t\t\t\t\tstate = overflow;\n\t\t\t\t\tvalue = UlONG_MAX;\n\t\t\t\t\terrno = ERANGE;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tvalue += digit;\n\t\t\t\tbreak;\n\t\t\tcase overflow:\n\t\t\t\tif (digit >= base)\n\t\t\t\t\tgoto done;\n\t\t}\n\t}\ndone:\n\tif (endptr != NULL) {\n\t\tif (state == hex3)\n\t\t\tnptr--;\n\t\t*endptr = (char *) --nptr;\n\t}\n\tif (Long_neg)\n\t\terrno = 
ERANGE;\n\treturn value;\n}\n"
  },
  {
    "path": "src/lib/Libattr/uLTostr.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tuLTostr.c\n *\n * @brief\n * \tuLTostr -  returns a pointer to the character string representation of the\n *\tu_Long, value, represented in base, base.\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include <errno.h>\n#include \"Long.h\"\n#include \"Long_.h\"\n\n#define LBUFSIZ (CHAR_BIT * sizeof(u_Long) + 2)\n\nstatic char buffer[LBUFSIZ];\nstatic const char Long_dig[] = LONG_DIG_VALUE;\n\n/**\n * @brief\n * \tuLTostr -  returns a pointer to the character string representation of the\n *\tu_Long, value, represented in base, base.\n *\n *\tIf base is outside its domain of 2 through the number of characters in\n *\tthe Long_dig array, uLTostr returns a zero-length string and sets errno\n *\tto EDOM.\n *\n *\tThe string is stored in a static array and will be clobbered the next\n *\ttime either LTostr or uLTostr is called.  
The price of eliminating the\n *\tpossibility of memory leaks is the necessity to copy the string\n *\timmediately to a safe place if it must last.\n *\n * @param[in] value - u_Long value\n * @param[in] base - base representation of val\n *\n * @return      string\n * @retval      string representation of u_Long       Success\n * @retval      zero-length string (errno = EDOM)     error\n *\n */\n\nconst char *\nuLTostr(u_Long value, int base)\n{\n\tchar *bp = &buffer[LBUFSIZ];\n\n\t*--bp = '\\0';\n\tif (base < 2 || base > strlen(Long_dig)) {\n\t\terrno = EDOM;\n\t\treturn (bp);\n\t}\n\tdo {\n\t\t*--bp = Long_dig[value % base];\n\t\tvalue /= base;\n\t} while (value);\n\tswitch (base) {\n\t\tcase 16:\n\t\t\t*--bp = 'x';\n\t\t\t/* FALLTHRU - hex also gets the leading zero */\n\t\tcase 8:\n\t\t\t*--bp = '0';\n\t}\n\treturn (bp);\n}\n"
  },
  {
    "path": "src/lib/Libauth/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nSUBDIRS = \\\n\tgss \\\n\tmunge\n"
  },
  {
    "path": "src/lib/Libauth/README.md",
    "content": "***This file describes the LibAuth API interface and its design.***\n\n# pbs_auth_set_config\n - **Synopsis:** void pbs_auth_set_config(const pbs_auth_config_t *auth_config)\n - **Description:** This API sets the configuration for the authentication library, such as the logging method and where it can find required credentials. It must be called before any other LibAuth API.\n - **Arguments:**\n\n\t- const pbs_auth_config_t *auth_config\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Pointer to a configuration structure (shown below) for the authentication library.\n\n\t\t```c\n\t\ttypedef struct pbs_auth_config {\n\t\t\t/* Path to PBS_HOME directory (aka same value as PBS_HOME in pbs.conf). This must be a null-terminated string. */\n\t\t\tchar *pbs_home_path;\n\n\t\t\t/* Path to PBS_EXEC directory (aka same value as PBS_EXEC in pbs.conf). This must be a null-terminated string. */\n\t\t\tchar *pbs_exec_path;\n\n\t\t\t/* Name of authentication method (aka same value as PBS_AUTH_METHOD in pbs.conf). This must be a null-terminated string. */\n\t\t\tchar *auth_method;\n\n\t\t\t/* Name of encryption method (aka same value as PBS_ENCRYPT_METHOD in pbs.conf). This must be a null-terminated string. */\n\t\t\tchar *encrypt_method;\n\n\t\t\t/*\n\t\t\t * Function pointer to the logging method with the same signature as log_event from Liblog.\n\t\t\t * With this, the user of the authentication library can redirect logs from the authentication\n\t\t\t * library into the respective log files, or to stderr when there are no log files.\n\t\t\t * If func is set to NULL then logs will be written to stderr (if available, else no logging at all).\n\t\t\t */\n\t\t\tvoid (*logfunc)(int type, int objclass, int severity, const char *objname, const char *text);\n\t\t} pbs_auth_config_t;\n\t\t```\n\n - **Return Value:** None, void\n\n# pbs_auth_create_ctx\n - **Synopsis:** int pbs_auth_create_ctx(void **ctx, int mode, int conn_type, char *hostname)\n - **Description:** This API creates an authentication context for a given mode and conn_type; the context is used by the other LibAuth APIs to authenticate, encrypt, and decrypt data.\n - **Arguments:**\n\n\t- void **ctx\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Pointer to auth context to be created\n\n\t- int mode\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Specifies which type of context is to be created; must be one of AUTH_CLIENT or AUTH_SERVER.\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Use AUTH_CLIENT for the client-side context (aka the party initiating authentication)\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Use AUTH_SERVER for the server-side context (aka the party authenticating the incoming user/connection)\n\n\t\t```c\n\t\tenum AUTH_ROLE {\n\t\t\tAUTH_ROLE_UNKNOWN = 0,\n\t\t\tAUTH_CLIENT,\n\t\t\tAUTH_SERVER,\n\t\t\tAUTH_ROLE_LAST\n\t\t};\n\t\t```\n\n\t- int conn_type\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Specifies the type of connection for which the context is to be created; must be one of AUTH_USER_CONN or AUTH_SERVICE_CONN\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Use AUTH_USER_CONN for a user-oriented connection (aka a PBS client connecting to the PBS Server)\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Use AUTH_SERVICE_CONN for a service-oriented connection (aka PBS Mom connecting to the PBS Server via PBS Comm)\n\n\t\t```c\n\t\tenum AUTH_CONN_TYPE {\n\t\t\tAUTH_USER_CONN = 0,\n\t\t\tAUTH_SERVICE_CONN\n\t\t};\n\t\t```\n\n\t- char *hostname\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The null-terminated hostname of the other authenticating party\n\n - **Return Value:** Integer\n\n\t- 0 - On Success\n\n\t- 1 - On Failure\n\n - **Cleanup:** A context created by this API should be destroyed with pbs_auth_destroy_ctx when it is no longer required\n\n# pbs_auth_destroy_ctx\n - **Synopsis:** void pbs_auth_destroy_ctx(void *ctx)\n - **Description:** This API destroys the authentication context created by pbs_auth_create_ctx\n - **Arguments:**\n\n\t- void *ctx\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Pointer to auth context to be destroyed\n\n - **Return Value:** None, void\n\n# pbs_auth_get_userinfo\n - **Synopsis:** int pbs_auth_get_userinfo(void *ctx, char **user, char **host, char **realm)\n - **Description:** Extract the username, realm, and hostname of the connecting party from the given authentication context. Extracted user, host, and realm values will be null-terminated strings. This API is mostly useful on the authenticating server side to obtain the other party's (aka the auth client's) information; the auth server can use this information from the auth library to match against the actual username/realm/hostname provided by the connecting party.\n - **Arguments:**\n\n\t- void *ctx\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Pointer to auth context from which information will be extracted\n\n\t- char **user\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Pointer to a buffer in which this API will write the user name\n\n\t- char **host\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Pointer to a buffer in which this API will write the hostname\n\n\t- char **realm\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Pointer to a buffer in which this API will write the realm\n\n - **Return Value:** Integer\n\n\t- 0 - On Success\n\n\t- 1 - On Failure\n\n - **Cleanup:** The returned user, host, and realm should be freed using free() when no longer required, as they are allocated on the heap.\n\n - **Example:** This example shows what the values of user, host, and realm will be. Take GSS/Kerberos authentication, where the auth client hostname is \"xyz.abc.com\", the username is \"test\", and the domain realm in the Kerberos configuration is \"PBS\". When this auth client authenticates to the server using the Kerberos authentication method, it is authenticated as \"test@PBS\", and this API returns user = test, host = xyz.abc.com, and realm = PBS.\n\n# pbs_auth_process_handshake_data\n - **Synopsis:** int pbs_auth_process_handshake_data(void *ctx, void *data_in, size_t len_in, void **data_out, size_t *len_out, int *is_handshake_done)\n - **Description:** Process incoming handshake data, advance the handshake, and, if required, generate handshake data to be sent to the other party. If there is no incoming data, initiate a handshake and generate the initial handshake data to be sent to the authentication server.\n - **Arguments:**\n\n\t- void *ctx\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Pointer to auth context for which the handshake is happening\n\n\t- void *data_in\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Incoming handshake data to process, if any. This can be NULL, which indicates that a handshake should be initiated and initial handshake data generated to be sent to the authentication server.\n\n\t- size_t len_in\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Length of incoming handshake data if any, else 0\n\n\t- void **data_out\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Outgoing handshake data to be sent to the other authenticating party; this can be NULL if the handshake is completed and no further data needs to be sent.\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;On failure (aka return 1 by this API), the data in data_out is considered error data/message, which will be sent to the other authenticating party as auth error data.\n\n\t- size_t *len_out\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Length of outgoing handshake/auth error data if any, else 0\n\n\t- int *is_handshake_done\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Indicates whether the handshake is completed: 0 means the handshake is not completed, 1 means it is completed\n\n - **Return Value:** Integer\n\n\t- 0 - On Success\n\n\t- 1 - On Failure\n\n - **Cleanup:** The returned data_out (if any) should be freed using free() when no longer required, as it is allocated on the heap.\n\n# pbs_auth_encrypt_data\n - **Synopsis:** int pbs_auth_encrypt_data(void *ctx, void *data_in, size_t len_in, void **data_out, size_t *len_out)\n - **Description:** Encrypt the given clear text data with the given authentication context\n - **Arguments:**\n\n\t- void *ctx\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Pointer to auth context which will be used while encrypting the given clear text data\n\n\t- void *data_in\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Clear text data to encrypt\n\n\t- size_t len_in\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Length of clear text data\n\n\t- void **data_out\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Encrypted data\n\n\t- size_t *len_out\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Length of encrypted data\n\n - **Return Value:** Integer\n\n\t- 0 - On Success\n\n\t- 1 - On Failure\n\n - **Cleanup:** The returned data_out should be freed using free() when no longer required, as it is allocated on the heap.\n\n# pbs_auth_decrypt_data\n - **Synopsis:** int pbs_auth_decrypt_data(void *ctx, void *data_in, size_t len_in, void **data_out, size_t *len_out)\n - **Description:** Decrypt the given encrypted data with the given authentication context\n - **Arguments:**\n\n\t- void *ctx\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Pointer to auth context which will be used while decrypting the given encrypted data\n\n\t- void *data_in\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Encrypted data to decrypt\n\n\t- size_t len_in\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Length of encrypted data\n\n\t- void **data_out\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Clear text data\n\n\t- size_t *len_out\n\n\t\t&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Length of clear text data\n\n - **Return Value:** Integer\n\n\t- 0 - On Success\n\n\t- 1 - On Failure\n\n - **Cleanup:** The returned data_out should be freed using free() when no longer required, as it is allocated on the heap.\n"
  },
  {
    "path": "src/lib/Libauth/gss/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\nif KRB5_ENABLED\n\nlib_LTLIBRARIES = libauth_gss.la\n\nlibauth_gss_la_CPPFLAGS = \\\n\t-I$(top_srcdir)/src/include \\\n\t@KRB5_CFLAGS@\n\nlibauth_gss_la_LDFLAGS = -version-info 0:0:0 -shared -fPIC\n\nlibauth_gss_la_LIBADD= \\\n\t$(top_builddir)/src/lib/Libpbs/libpbs.la \\\n\t@KRB5_LIBS@ \\\n\t-lpthread\n\nlibauth_gss_la_SOURCES = \\\n\tpbs_gss.c\n\nendif\n"
  },
  {
    "path": "src/lib/Libauth/gss/pbs_gss.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <time.h>\n#include <unistd.h>\n#include <pthread.h>\n#include <gssapi.h>\n#include <krb5.h>\n#include <errno.h>\n#include <termios.h>\n#include <pty.h>\n#include <sys/wait.h>\n#include \"pbs_ifl.h\"\n#include \"libauth.h\"\n#include \"pbs_internal.h\"\n\n#if defined(KRB5_HEIMDAL)\n#define PBS_GSS_MECH_OID GSS_KRB5_MECHANISM\n#else\n#include <gssapi/gssapi_krb5.h>\n#define PBS_GSS_MECH_OID (gss_OID) gss_mech_krb5\n#endif\n\nstatic pthread_mutex_t gss_mutex;\nstatic pthread_once_t gss_init_once = PTHREAD_ONCE_INIT;\nstatic char gss_log_buffer[LOG_BUF_SIZE];\nstatic void (*logger)(int type, int objclass, int severity, const char *objname, const char *text);\n#define DEFAULT_CREDENTIAL_LIFETIME 7200\n\n#define __GSS_LOGGER(e, c, s, m)                                          \\\n\tdo {                                                              \\\n\t\tif (logger == NULL) {                                     \\\n\t\t\tif (s != LOG_DEBUG)                               \\\n\t\t\t\tfprintf(stderr, \"%s: %s\\n\", __func__, m); \\\n\t\t} else {                                                  \\\n\t\t\tlogger(e, c, s, \"\", m);     
                      \\\n\t\t}                                                         \\\n\t} while (0)\n#define GSS_LOG_ERR(m) __GSS_LOGGER(PBSEVENT_ERROR | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER, LOG_ERR, m)\n#define GSS_LOG_DBG(m) __GSS_LOGGER(PBSEVENT_DEBUG | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER, LOG_DEBUG, m)\n#define __GSS_LOGGER_STS(m, s, c)                                                                                  \\\n\tdo {                                                                                                       \\\n\t\tOM_uint32 _mstat;                                                                                  \\\n\t\tgss_buffer_desc _msg;                                                                              \\\n\t\tOM_uint32 _msg_ctx;                                                                                \\\n\t\t_msg_ctx = 0;                                                                                      \\\n\t\tchar buf[LOG_BUF_SIZE];                                                                            \\\n\t\tdo {                                                                                               \\\n\t\t\tbuf[0] = '\\0';                                                                             \\\n\t\t\tgss_display_status(&_mstat, s, c, GSS_C_NULL_OID, &_msg_ctx, &_msg);                       \\\n\t\t\tsnprintf(buf, LOG_BUF_SIZE, \"GSS - %s : %.*s\", m, (int) _msg.length, (char *) _msg.value); \\\n\t\t\tGSS_LOG_ERR(buf);                                                                          \\\n\t\t\t(void) gss_release_buffer(&_mstat, &_msg);                                                 \\\n\t\t} while (_msg_ctx != 0);                                                                           \\\n\t} while (0)\n#define GSS_LOG_STS(m, mjs, mis)                  \\\n\t__GSS_LOGGER_STS(m, mjs, GSS_C_GSS_CODE); \\\n\t__GSS_LOGGER_STS(m, mis, GSS_C_MECH_CODE)\n\n#define 
PBS_KRB5_SERVICE_NAME \"host\"\n#define PBS_KRB5_CLIENT_CCNAME \"FILE:/tmp/krb5cc_pbs_client\"\n\n#define GSS_NT_SERVICE_NAME GSS_C_NT_HOSTBASED_SERVICE\n\ntypedef struct {\n\tgss_ctx_id_t gssctx;\t/* gss security context */\n\tint gssctx_established; /* true if gss context has been established */\n\tint is_secure;\t\t/* wrapping includes encryption */\n\tenum AUTH_ROLE role;\t/* value is client or server */\n\tint conn_type;\t\t/* type of connection one of user-oriented or service-oriented */\n\tchar *hostname;\t\t/* server name */\n\tchar *clientname;\t/* client name in string */\n} pbs_gss_extra_t;\n\nenum PBS_GSS_ERRORS {\n\tPBS_GSS_OK = 0,\n\tPBS_GSS_CONTINUE_NEEDED,\n\tPBS_GSS_ERR_INTERNAL,\n\tPBS_GSS_ERR_IMPORT_NAME,\n\tPBS_GSS_ERR_ACQUIRE_CREDS,\n\tPBS_GSS_ERR_CONTEXT_INIT,\n\tPBS_GSS_ERR_CONTEXT_ACCEPT,\n\tPBS_GSS_ERR_CONTEXT_DELETE,\n\tPBS_GSS_ERR_CONTEXT_ESTABLISH,\n\tPBS_GSS_ERR_NAME_CONVERT,\n\tPBS_GSS_ERR_WRAP,\n\tPBS_GSS_ERR_UNWRAP,\n\tPBS_GSS_ERR_OID,\n\tPBS_GSS_ERR_LAST\n};\n\nstatic int pbs_gss_can_get_creds(const gss_OID_set oidset);\nstatic int pbs_gss_ask_user_creds();\nstatic int init_pbs_client_ccache_from_keytab(char *err_buf, int err_buf_size);\nstatic void init_gss_mutex(void);\n\n/**\n * @brief\n *\tAcquire lock on a mutex\n *\n * @param[in] - lock - ptr to a mutex variable\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n * @return\terror code\n * @retval\t1\tfailure\n * @retval\t0\tsuccess\n */\nstatic int\ngss_lock(pthread_mutex_t *lock)\n{\n\tif (pthread_mutex_lock(lock) != 0) {\n\t\tGSS_LOG_ERR(\"Failed to lock mutex\");\n\t\treturn 1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tRelease lock on a mutex\n *\n * @param[in] - lock - ptr to a mutex variable\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n * @return\terror code\n * @retval\t1\tfailure\n * @retval\t0\tsuccess\n */\nstatic int\ngss_unlock(pthread_mutex_t *lock)\n{\n\tif (pthread_mutex_unlock(lock) != 0) {\n\t\tGSS_LOG_ERR(\"Failed to 
unlock mutex\");\n\t\treturn 1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\twrapper function for gss_lock(); used as the pthread_atfork prepare handler.\n *\n */\nvoid\ngss_atfork_prepare()\n{\n\tgss_lock(&gss_mutex);\n}\n\n/**\n * @brief\n *\twrapper function for gss_unlock(); used as the pthread_atfork parent handler.\n *\n */\nvoid\ngss_atfork_parent()\n{\n\tgss_unlock(&gss_mutex);\n}\n\n/**\n * @brief\n *\twrapper function for init_gss_mutex().\n *\n */\nvoid\ngss_atfork_child()\n{\n\tinit_gss_mutex();\n}\n\n/**\n * @brief\n *\tInitialize gss mutex.\n *\n */\nstatic void\ninit_gss_mutex(void)\n{\n\tpthread_mutexattr_t attr;\n\n\tif (pthread_mutexattr_init(&attr) != 0) {\n\t\tGSS_LOG_ERR(\"Failed to initialize mutex attr\");\n\t\treturn;\n\t}\n\n\tif (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE_NP)) {\n\t\tGSS_LOG_ERR(\"Failed to set mutex type\");\n\t\treturn;\n\t}\n\n\tif (pthread_mutex_init(&gss_mutex, &attr) != 0) {\n\t\tGSS_LOG_ERR(\"Failed to initialize gss mutex\");\n\t\treturn;\n\t}\n\n\treturn;\n}\n\n/**\n * @brief\n *\tInitialize gss at fork.\n *\n */\nstatic void\ninit_gss_atfork(void)\n{\n\tinit_gss_mutex();\n\n\tif (pthread_atfork(gss_atfork_prepare, gss_atfork_parent, gss_atfork_child) != 0) {\n\t\tGSS_LOG_ERR(\"gss atfork handler failed\");\n\t\treturn;\n\t}\n\n\treturn;\n}\n\n/** @brief\n *\tIf oid set is null then create oid set. Once we have the oid set,\n *\tthe appropriate gss mechanism is added (e.g. 
kerberos).\n *\n * @param[in/out] oidset - oid set for change\n *\n * @return\tint\n * @retval\tPBS_GSS_OK on success\n * @retval\t!= PBS_GSS_OK on error\n */\nstatic int\npbs_gss_oidset_mech(gss_OID_set *oidset)\n{\n\tOM_uint32 maj_stat;\n\tOM_uint32 min_stat;\n\tif (*oidset == GSS_C_NULL_OID_SET) {\n\t\tmaj_stat = gss_create_empty_oid_set(&min_stat, oidset);\n\t\tif (maj_stat != GSS_S_COMPLETE) {\n\t\t\tGSS_LOG_STS(\"gss_create_empty_oid_set\", maj_stat, min_stat);\n\t\t\treturn PBS_GSS_ERR_OID;\n\t\t}\n\t}\n\n\tmaj_stat = gss_add_oid_set_member(&min_stat, PBS_GSS_MECH_OID, oidset);\n\tif (maj_stat != GSS_S_COMPLETE) {\n\t\tGSS_LOG_STS(\"gss_add_oid_set_member\", maj_stat, min_stat);\n\t\treturn PBS_GSS_ERR_OID;\n\t}\n\n\treturn PBS_GSS_OK;\n}\n\n/** @brief\n *\tRelease oid set\n *\n * @param[in] oidset - oid set for releasing\n *\n * @return void\n *\n */\nstatic void\npbs_gss_release_oidset(gss_OID_set *oidset)\n{\n\tOM_uint32 maj_stat;\n\tOM_uint32 min_stat;\n\n\tmaj_stat = gss_release_oid_set(&min_stat, oidset);\n\tif (maj_stat != GSS_S_COMPLETE) {\n\t\tGSS_LOG_STS(\"gss_release_oid_set\", maj_stat, min_stat);\n\t}\n}\n\n/** @brief\n *\tCopy data from gss buffer into string and provides the length of the data.\n *\n * @param[in] tok - token with source data\n * @param[out] data - data to be filled\n * @param[out] len - length of data\n *\n * @return\tint\n * @retval\tPBS_GSS_OK on success\n * @retval\t!= PBS_GSS_OK on error\n */\nstatic int\npbs_gss_fill_data(gss_buffer_t tok, void **data, size_t *len)\n{\n\t*data = malloc(tok->length);\n\tif (*data == NULL) {\n\t\tGSS_LOG_ERR(\"malloc failure\");\n\t\treturn PBS_GSS_ERR_INTERNAL;\n\t}\n\n\tmemcpy(*data, tok->value, tok->length);\n\t*len = tok->length;\n\treturn PBS_GSS_OK;\n}\n\n/** @brief\n *\tImports a service name and acquires credentials for it. 
The service name\n *\tis imported with gss_import_name, and service credentials are acquired\n *\twith gss_acquire_cred.\n *\n * @param[in] service_name - the service name\n * @param[out] server_creds - the GSS-API service credentials\n *\n * @return\tint\n * @retval\tPBS_GSS_OK on success\n * @retval\t!= PBS_GSS_OK on error\n */\nstatic int\npbs_gss_server_acquire_creds(char *service_name, gss_cred_id_t *server_creds)\n{\n\tgss_name_t server_name;\n\tOM_uint32 maj_stat;\n\tOM_uint32 min_stat = 0;\n\tgss_OID_set oidset = GSS_C_NO_OID_SET;\n\tgss_buffer_desc name_buf;\n\n\tname_buf.value = service_name;\n\tname_buf.length = strlen(service_name) + 1;\n\n\tmaj_stat = gss_import_name(&min_stat, &name_buf, GSS_NT_SERVICE_NAME, &server_name);\n\n\tif (maj_stat != GSS_S_COMPLETE) {\n\t\tGSS_LOG_STS(\"gss_import_name\", maj_stat, min_stat);\n\t\treturn PBS_GSS_ERR_IMPORT_NAME;\n\t}\n\n\tif (pbs_gss_oidset_mech(&oidset) != PBS_GSS_OK)\n\t\treturn PBS_GSS_ERR_OID;\n\n\tmaj_stat = gss_acquire_cred(&min_stat, server_name, 0, oidset, GSS_C_ACCEPT, server_creds, NULL, NULL);\n\n\tpbs_gss_release_oidset(&oidset);\n\n\tif (maj_stat != GSS_S_COMPLETE) {\n\t\tGSS_LOG_STS(\"gss_acquire_cred\", maj_stat, min_stat);\n\n\t\tif (gss_release_name(&min_stat, &server_name) != GSS_S_COMPLETE) {\n\t\t\tGSS_LOG_STS(\"gss_release_name\", maj_stat, min_stat);\n\t\t\treturn PBS_GSS_ERR_INTERNAL;\n\t\t}\n\n\t\treturn PBS_GSS_ERR_ACQUIRE_CREDS;\n\t}\n\n\tmaj_stat = gss_release_name(&min_stat, &server_name);\n\tif (maj_stat != GSS_S_COMPLETE) {\n\t\tGSS_LOG_STS(\"gss_release_name\", maj_stat, min_stat);\n\t\treturn PBS_GSS_ERR_INTERNAL;\n\t}\n\n\treturn PBS_GSS_OK;\n}\n\n/** @brief\n *\tClient part of the GSS handshake\n *\n * @param[in] service_name - GSS service name\n * @param[in] creds - client credentials\n * @param[in] oid - The security mechanism to use. 
GSS_C_NULL_OID for default\n * @param[in] gss_flags - Flags indicating additional services or parameters requested for the context.\n * @param[in/out] gss_context - this context is being established here\n * @param[out] ret_flags - Flags indicating additional services or parameters requested for the context.\n * @param[in] data_in - received GSS token data\n * @param[in] len_in - length of data_in\n * @param[out] data_out - GSS token data for transmitting\n * @param[out] len_out - length of data_out\n *\n * @return\tint\n * @retval\tPBS_GSS_OK on success\n * @retval\t!= PBS_GSS_OK on error\n */\nstatic int\npbs_gss_client_establish_context(char *service_name, gss_cred_id_t creds, gss_OID oid, OM_uint32 gss_flags, gss_ctx_id_t *gss_context, OM_uint32 *ret_flags, void *data_in, size_t len_in, void **data_out, size_t *len_out)\n{\n\tgss_buffer_desc send_tok;\n\tgss_buffer_desc recv_tok;\n\tgss_buffer_desc *token_ptr;\n\tgss_name_t target_name;\n\tOM_uint32 maj_stat;\n\tOM_uint32 min_stat = 0;\n\tOM_uint32 init_sec_maj_stat;\n\tOM_uint32 init_sec_min_stat = 0;\n\n\tsend_tok.value = service_name;\n\tsend_tok.length = strlen(service_name);\n\tmaj_stat = gss_import_name(&min_stat, &send_tok, GSS_NT_SERVICE_NAME, &target_name);\n\tif (maj_stat != GSS_S_COMPLETE) {\n\t\tGSS_LOG_STS(\"gss_import_name\", maj_stat, min_stat);\n\t\treturn PBS_GSS_ERR_IMPORT_NAME;\n\t}\n\n\tsend_tok.value = NULL;\n\tsend_tok.length = 0;\n\n\trecv_tok.value = (void *) data_in;\n\trecv_tok.length = len_in;\n\n\tif (recv_tok.length > 0)\n\t\ttoken_ptr = &recv_tok;\n\telse\n\t\ttoken_ptr = GSS_C_NO_BUFFER;\n\n\tinit_sec_maj_stat = gss_init_sec_context(&init_sec_min_stat, creds ? 
creds : GSS_C_NO_CREDENTIAL, gss_context, target_name, oid, gss_flags, 0, NULL, token_ptr, NULL, &send_tok, ret_flags, NULL);\n\n\tif (send_tok.length != 0) {\n\t\tpbs_gss_fill_data(&send_tok, data_out, len_out);\n\n\t\tmaj_stat = gss_release_buffer(&min_stat, &send_tok);\n\t\tif (maj_stat != GSS_S_COMPLETE) {\n\t\t\tGSS_LOG_STS(\"gss_release_buffer\", maj_stat, min_stat);\n\t\t\treturn PBS_GSS_ERR_INTERNAL;\n\t\t}\n\t}\n\n\tmaj_stat = gss_release_name(&min_stat, &target_name);\n\tif (maj_stat != GSS_S_COMPLETE) {\n\t\tGSS_LOG_STS(\"gss_release_name\", maj_stat, min_stat);\n\t\treturn PBS_GSS_ERR_INTERNAL;\n\t}\n\n\tif (init_sec_maj_stat != GSS_S_COMPLETE && init_sec_maj_stat != GSS_S_CONTINUE_NEEDED) {\n\t\tGSS_LOG_STS(\"gss_init_sec_context\", init_sec_maj_stat, init_sec_min_stat);\n\n\t\tif (*gss_context != GSS_C_NO_CONTEXT) {\n\t\t\tmaj_stat = gss_delete_sec_context(&min_stat, gss_context, GSS_C_NO_BUFFER);\n\t\t\tif (maj_stat != GSS_S_COMPLETE) {\n\t\t\t\tGSS_LOG_STS(\"gss_delete_sec_context\", maj_stat, min_stat);\n\t\t\t\treturn PBS_GSS_ERR_CONTEXT_DELETE;\n\t\t\t}\n\t\t}\n\n\t\treturn PBS_GSS_ERR_CONTEXT_INIT;\n\t}\n\n\tif (init_sec_maj_stat == GSS_S_CONTINUE_NEEDED)\n\t\treturn PBS_GSS_CONTINUE_NEEDED;\n\n\treturn PBS_GSS_OK;\n}\n\n/** @brief\n *\tServer part of the GSS handshake\n *\n * @param[in] server_creds - server credentials\n * @param[in] client_creds - optional credentials, can be NULL\n * @param[in/out] gss_context - this context is being established here\n * @param[out] client_name - GSS client name\n * @param[out] ret_flags - Flags indicating additional services or parameters requested for the context.\n * @param[in] data_in - received GSS token data\n * @param[in] len_in - length of data_in\n * @param[out] data_out - GSS token data for transmitting\n * @param[out] len_out - length of data_out\n *\n * @return\tint\n * @retval\tPBS_GSS_OK on success\n * @retval\t!= PBS_GSS_OK on error\n */\nstatic int\npbs_gss_server_establish_context(gss_cred_id_t 
server_creds, gss_cred_id_t *client_creds, gss_ctx_id_t *gss_context, gss_buffer_t client_name, OM_uint32 *ret_flags, void *data_in, size_t len_in, void **data_out, size_t *len_out)\n{\n\tgss_buffer_desc send_tok;\n\tgss_buffer_desc recv_tok;\n\tgss_name_t client;\n\tgss_OID doid;\n\tOM_uint32 maj_stat;\n\tOM_uint32 min_stat = 0;\n\tOM_uint32 acc_sec_maj_stat;\n\tOM_uint32 acc_sec_min_stat = 0;\n\n\trecv_tok.value = data_in;\n\trecv_tok.length = len_in;\n\n\tif (recv_tok.length == 0) {\n\t\tGSS_LOG_ERR(\"Invalid input data\");\n\t\treturn PBS_GSS_ERR_INTERNAL;\n\t}\n\n\tsend_tok.value = NULL;\n\tsend_tok.length = 0;\n\n\tacc_sec_maj_stat = gss_accept_sec_context(&acc_sec_min_stat, gss_context, server_creds, &recv_tok, GSS_C_NO_CHANNEL_BINDINGS, &client, &doid, &send_tok, ret_flags, NULL, client_creds);\n\n\tif (send_tok.length != 0) {\n\t\tpbs_gss_fill_data(&send_tok, data_out, len_out);\n\n\t\tmaj_stat = gss_release_buffer(&min_stat, &send_tok);\n\t\tif (maj_stat != GSS_S_COMPLETE) {\n\t\t\tGSS_LOG_STS(\"gss_release_buffer\", maj_stat, min_stat);\n\t\t\treturn PBS_GSS_ERR_INTERNAL;\n\t\t}\n\t}\n\n\tif (acc_sec_maj_stat != GSS_S_COMPLETE && acc_sec_maj_stat != GSS_S_CONTINUE_NEEDED) {\n\t\tGSS_LOG_STS(\"gss_accept_sec_context\", acc_sec_maj_stat, acc_sec_min_stat);\n\n\t\tif (*gss_context != GSS_C_NO_CONTEXT) {\n\t\t\tif ((maj_stat = gss_delete_sec_context(&min_stat, gss_context, GSS_C_NO_BUFFER)) != GSS_S_COMPLETE) {\n\t\t\t\tGSS_LOG_STS(\"gss_delete_sec_context\", maj_stat, min_stat);\n\t\t\t\treturn PBS_GSS_ERR_CONTEXT_DELETE;\n\t\t\t}\n\t\t}\n\n\t\treturn PBS_GSS_ERR_CONTEXT_ACCEPT;\n\t}\n\n\tmaj_stat = gss_display_name(&min_stat, client, client_name, &doid);\n\tif (maj_stat != GSS_S_COMPLETE) {\n\t\tGSS_LOG_STS(\"gss_display_name\", maj_stat, min_stat);\n\t\treturn PBS_GSS_ERR_NAME_CONVERT;\n\t}\n\n\tmaj_stat = gss_release_name(&min_stat, &client);\n\tif (maj_stat != GSS_S_COMPLETE) {\n\t\tGSS_LOG_STS(\"gss_release_name\", maj_stat, min_stat);\n\t\treturn 
PBS_GSS_ERR_INTERNAL;\n\t}\n\n\tif (acc_sec_maj_stat == GSS_S_CONTINUE_NEEDED)\n\t\treturn PBS_GSS_CONTINUE_NEEDED;\n\n\treturn PBS_GSS_OK;\n}\n\n/**\n * @brief\n *\tDetermines whether GSS credentials can be acquired\n *\n * @return\tint\n * @retval\t!= 0 if creds can be acquired\n * @retval\t0 if creds cannot be acquired\n */\nstatic int\npbs_gss_can_get_creds(const gss_OID_set oidset)\n{\n\tOM_uint32 maj_stat;\n\tOM_uint32 min_stat;\n\tOM_uint32 valid_sec = 0;\n\tgss_cred_id_t creds = GSS_C_NO_CREDENTIAL;\n\n\tmaj_stat = gss_acquire_cred(&min_stat, GSS_C_NO_NAME, GSS_C_INDEFINITE, oidset, GSS_C_INITIATE, &creds, NULL, &valid_sec);\n\tif (maj_stat == GSS_S_COMPLETE && creds != GSS_C_NO_CREDENTIAL)\n\t\tgss_release_cred(&min_stat, &creds);\n\n\t/*\n\t * There is a bug in old MIT implementations\n\t * that causes valid_sec to always be 0.\n\t * The problem is fixed in versions >= 1.14.\n\t */\n\treturn (maj_stat == GSS_S_COMPLETE && valid_sec > 10);\n}\n\n/**\n * @brief\n *\tIf on a tty, ask the user for credentials. The custom binary is run in a new session.\n *\tA pseudo-terminal is created for passing stdin and stdout to the new session.\n *\tThe user can therefore insert a password using tools like 'kinit' and get credentials.\n *\n * @return\tint\n * @retval\t!= 0 if creds could not be acquired\n * @retval\t0 if creds acquired\n */\nstatic int\npbs_gss_ask_user_creds(void)\n{\n\tint master_fd, slave_fd; /* PTY file descriptors */\n\tstatic struct termios original_term;\n\tstruct termios pty_tios, raw_tios;\n\tpid_t pid;\n\tint status;\n\n\tchar *user_creds_bin = pbs_conf.pbs_gss_user_creds_bin ? pbs_conf.pbs_gss_user_creds_bin : NULL;\n\n\tif (!user_creds_bin) {\n\t\treturn -1;\n\t}\n\n\tchar *cmd[] = { user_creds_bin, 0 };\n\n\tif (!isatty(fileno(stdout)) || !isatty(fileno(stdin)))\n\t\treturn -1; /* not a terminal, cannot ask user for creds */\n\n\t/* save current terminal settings */\n\tif (tcgetattr(STDIN_FILENO, &original_term) == -1) {\n\t\treturn -1;\n\t}\n\n\t/* create pseudo terminal for user cred bin to run */\n\tif (openpty(&master_fd, &slave_fd, NULL, NULL, NULL) == -1) {\n\t\treturn -1;\n\t}\n\n\t/* configure new terminal - disable echo for inserting password */\n\traw_tios = original_term;\n\traw_tios.c_lflag &= ~(ICANON | ECHO | ECHOCTL | ECHONL);\n\traw_tios.c_cc[VMIN] = 1;\n\traw_tios.c_cc[VTIME] = 0;\n\n\tif (tcsetattr(STDIN_FILENO, TCSAFLUSH, &raw_tios) == -1) {\n\t\tclose(master_fd);\n\t\tclose(slave_fd);\n\t\treturn -1;\n\t}\n\n\tif (tcgetattr(slave_fd, &pty_tios) == 0) {\n\t\tpty_tios.c_cc[VERASE] = 127; /* fix backspace key */\n\t\ttcsetattr(slave_fd, TCSANOW, &pty_tios);\n\t}\n\n\tpid = fork();\n\tif (pid < 0) {\n\t\t/* fork failed; restore terminal settings */\n\t\ttcsetattr(STDIN_FILENO, TCSAFLUSH, &original_term);\n\t\tclose(master_fd);\n\t\tclose(slave_fd);\n\t\treturn -1;\n\t}\n\n\tif (pid == 0) {\n\t\t/* child */\n\t\tclose(master_fd);\n\t\tif (setsid() == -1) {\n\t\t\texit(1);\n\t\t}\n\n\t\tif (dup2(slave_fd, STDIN_FILENO) == -1 ||\n\t\t\t\tdup2(slave_fd, STDOUT_FILENO) == -1 ||\n\t\t\t\tdup2(slave_fd, STDERR_FILENO) == -1) {\n\t\t\texit(1);\n\t\t}\n\n\t\tclose(slave_fd);\n\n\t\tif (execvp(cmd[0], cmd) < 0) {\n\t\t\texit(1);\n\t\t}\n\t} else {\n\t\t/* parent */\n\n\t\tconst int size = 1024;\n\t\tchar buffer[size];\n\t\tssize_t bytes_read;\n\n\t\tclose(slave_fd);\n\n\t\twhile (1) {\n\t\t\tfd_set read_fds;\n\t\t\tint max_fd = master_fd;\n\n\t\t\tFD_ZERO(&read_fds);\n\t\t\tFD_SET(master_fd, &read_fds);\n\t\t\tFD_SET(STDIN_FILENO, &read_fds);\n\n\t\t\tif (select(max_fd + 1, &read_fds, NULL, NULL, NULL) < 0) {\n\t\t\t\tif (errno == EINTR)\n\t\t\t\t\tcontinue;\n\t\t\t\tbreak;\n\t\t\t}\n\n\t\t\tif (FD_ISSET(master_fd, &read_fds)) {\n\t\t\t\tbytes_read = read(master_fd, buffer, size - 1);\n\t\t\t\tif (bytes_read > 0) {\n\t\t\t\t\t/* Write child's output directly to the parent's terminal STDOUT */\n\t\t\t\t\twrite(STDOUT_FILENO, buffer, bytes_read);\n\t\t\t\t} else {\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (FD_ISSET(STDIN_FILENO, &read_fds)) {\n\t\t\t\tbytes_read = read(STDIN_FILENO, buffer, size - 1);\n\t\t\t\tif (bytes_read > 0) {\n\t\t\t\t\t/* Write user input to the PTY master (to the child's STDIN) */\n\t\t\t\t\twrite(master_fd, buffer, bytes_read);\n\t\t\t\t} else if (bytes_read == 0) {\n\t\t\t\t\t/* stdin closed; close the master so the child sees EOF */\n\t\t\t\t\tclose(master_fd);\n\t\t\t\t\tmaster_fd = -1;\n\t\t\t\t\tbreak;\n\t\t\t\t} else {\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (waitpid(pid, &status, WNOHANG) > 0) {\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\n\t\twaitpid(pid, &status, 0);\n\t}\n\n\tif (master_fd >= 0)\n\t\tclose(master_fd);\n\ttcsetattr(STDIN_FILENO, TCSAFLUSH, &original_term);\n\treturn WEXITSTATUS(status);\n}\n\n/**\n * @brief\n *\tCreate or renew ccache from keytab for the gss client side.\n *\n * @param[in] err_buf - buffer to put error log\n * @param[in] err_buf_size - err_buf size\n *\n * @return \tint\n * @retval\t0 on success\n * @retval\t!= 0 otherwise\n */\nstatic int\ninit_pbs_client_ccache_from_keytab(char *err_buf, int err_buf_size)\n{\n\tkrb5_error_code ret = KRB5KRB_ERR_GENERIC;\n\tkrb5_context context = NULL;\n\tkrb5_principal pbs_service = NULL;\n\tkrb5_keytab keytab = NULL;\n\tkrb5_creds *creds = NULL;\n\tkrb5_get_init_creds_opt *opt = NULL;\n\tkrb5_ccache ccache = NULL;\n\tkrb5_creds *mcreds = NULL;\n\tchar *realm;\n\tchar **realms = NULL;\n\tchar hostname[PBS_MAXHOSTNAME + 1];\n\tint endtime = 0;\n\n\tcreds = malloc(sizeof(krb5_creds));\n\tif (creds == NULL) {\n\t\tsnprintf(err_buf, err_buf_size, \"malloc failure\");\n\t\tgoto out;\n\t}\n\tmemset(creds, 0, sizeof(krb5_creds));\n\n\tmcreds = malloc(sizeof(krb5_creds));\n\tif (mcreds == 
NULL) {\n\t\tsnprintf(err_buf, err_buf_size, \"malloc failure\");\n\t\tgoto out;\n\t}\n\tmemset(mcreds, 0, sizeof(krb5_creds));\n\n\tsetenv(\"KRB5CCNAME\", PBS_KRB5_CLIENT_CCNAME, 1);\n\n\tret = krb5_init_context(&context);\n\tif (ret) {\n\t\tsnprintf(err_buf, err_buf_size, \"Cannot initialize Kerberos context.\");\n\t\tgoto out;\n\t}\n\n\tret = krb5_sname_to_principal(context, NULL, PBS_KRB5_SERVICE_NAME, KRB5_NT_SRV_HST, &pbs_service);\n\tif (ret) {\n\t\tsnprintf(err_buf, err_buf_size, \"Preparing principal failed (%s)\", krb5_get_error_message(context, ret));\n\t\tgoto out;\n\t}\n\n\tret = krb5_cc_resolve(context, PBS_KRB5_CLIENT_CCNAME, &ccache);\n\tif (ret) /* for ret = true it is not a real error, we will just create new ccache */\n\t\tsnprintf(err_buf, err_buf_size, \"Couldn't resolve ccache name (%s) New ccache will be created.\", krb5_get_error_message(context, ret));\n\n\tret = gethostname(hostname, PBS_MAXHOSTNAME + 1);\n\tif (ret) {\n\t\tsnprintf(err_buf, err_buf_size, \"Failed to get host name\");\n\t\tgoto out;\n\t}\n\n\tret = krb5_get_host_realm(context, hostname, &realms);\n\tif (ret) {\n\t\tsnprintf(err_buf, err_buf_size, \"Failed to get host realms (%s)\", krb5_get_error_message(context, ret));\n\t\tgoto out;\n\t}\n\n\trealm = realms[0];\n\tret = krb5_build_principal(context, &mcreds->server, strlen(realm), realm, KRB5_TGS_NAME, realm, NULL);\n\tif (ret) {\n\t\tsnprintf(err_buf, err_buf_size, \"Couldn't build server principal (%s)\", krb5_get_error_message(context, ret));\n\t\tgoto out;\n\t}\n\n\tret = krb5_copy_principal(context, pbs_service, &mcreds->client);\n\tif (ret) {\n\t\tsnprintf(err_buf, err_buf_size, \"Couldn't copy client principal (%s)\", krb5_get_error_message(context, ret));\n\t\tgoto out;\n\t}\n\n\tret = krb5_cc_retrieve_cred(context, ccache, 0, mcreds, creds);\n\tif (ret) /* for ret = true it is not a real error, we will just create new ccache */\n\t\tsnprintf(err_buf, err_buf_size, \"Couldn't retrieve credentials from cache (%s) 
New ccache will be created.\", krb5_get_error_message(context, ret));\n\telse\n\t\tendtime = creds->times.endtime;\n\n\t/* if we have valid credentials in ccache goto out\n\t * if the credentials are about to expire soon (60 * 30 = 30 minutes)\n\t * then try to renew from keytab.\n\t */\n\tif (endtime - (60 * 30) >= time(NULL)) {\n\t\tret = 0;\n\t\tgoto out;\n\t}\n\n\tret = krb5_cc_resolve(context, PBS_KRB5_CLIENT_CCNAME, &ccache);\n\tif (ret) {\n\t\tsnprintf(err_buf, err_buf_size, \"Couldn't resolve cache name (%s)\", krb5_get_error_message(context, ret));\n\t\tgoto out;\n\t}\n\n\tret = krb5_kt_default(context, &keytab);\n\tif (ret) {\n\t\tsnprintf(err_buf, err_buf_size, \"Couldn't open keytab (%s)\", krb5_get_error_message(context, ret));\n\t\tgoto out;\n\t}\n\tret = krb5_get_init_creds_opt_alloc(context, &opt);\n\tif (ret) {\n\t\tsnprintf(err_buf, err_buf_size, \"Couldn't allocate a new initial credential options structure (%s)\", krb5_get_error_message(context, ret));\n\t\tgoto out;\n\t}\n\n\tkrb5_get_init_creds_opt_set_forwardable(opt, 1);\n\n\tret = krb5_get_init_creds_keytab(context, creds, pbs_service, keytab, 0, NULL, opt);\n\tif (ret) {\n\t\tsnprintf(err_buf, err_buf_size, \"Couldn't get initial credentials using a key table (%s)\", krb5_get_error_message(context, ret));\n\t\tgoto out;\n\t}\n\n\tret = krb5_cc_initialize(context, ccache, creds->client);\n\tif (ret) {\n\t\tsnprintf(err_buf, err_buf_size, \"Credentials cache initializing failed (%s)\", krb5_get_error_message(context, ret));\n\t\tgoto out;\n\t}\n\n\tret = krb5_cc_store_cred(context, ccache, creds);\n\tif (ret) {\n\t\tsnprintf(err_buf, err_buf_size, \"Couldn't store ccache (%s)\", krb5_get_error_message(context, ret));\n\t\tgoto out;\n\t}\n\nout:\n\tif (creds)\n\t\tkrb5_free_creds(context, creds);\n\tif (mcreds)\n\t\tkrb5_free_creds(context, mcreds);\n\tif (opt)\n\t\tkrb5_get_init_creds_opt_free(context, opt);\n\tif (pbs_service)\n\t\tkrb5_free_principal(context, pbs_service);\n\tif 
(ccache)\n\t\tkrb5_cc_close(context, ccache);\n\tif (realms)\n\t\tkrb5_free_host_realm(context, realms);\n\tif (keytab)\n\t\tkrb5_kt_close(context, keytab);\n\tif (context)\n\t\tkrb5_free_context(context);\n\treturn (ret);\n}\n\n/** @brief\n *\tThis is the main gss handshake function for asynchronous handshake.\n *\tIt has two branches: client and server. Once the handshake is finished\n *\tthe GSS structure is set to ready for un/wrapping.\n *\n *\n * @param[in] gss_extra - gss structure\n * @param[in] data_in - received GSS token data\n * @param[in] len_in - length of data_in\n * @param[out] data_out - GSS token data for transmitting\n * @param[out] len_out - length of data_out\n *\n * @return int\n * @retval PBS_GSS_OK - success\n * @retval !PBS_GSS_OK - failure\n *\n */\nint\npbs_gss_establish_context(pbs_gss_extra_t *gss_extra, void *data_in, size_t len_in, void **data_out, size_t *len_out)\n{\n\tOM_uint32 maj_stat;\n\tOM_uint32 min_stat = 0;\n\tgss_ctx_id_t gss_context = GSS_C_NO_CONTEXT;\n\tstatic gss_cred_id_t server_creds = GSS_C_NO_CREDENTIAL;\n\tgss_cred_id_t creds = GSS_C_NO_CREDENTIAL;\n\tchar *service_name = NULL;\n\ttime_t now = time((time_t *) NULL);\n\tstatic time_t lastcredstime = 0;\n\tstatic time_t credlifetime = 0;\n\tOM_uint32 lifetime;\n\tOM_uint32 gss_flags;\n\tOM_uint32 ret_flags;\n\tgss_OID oid;\n\tgss_OID_set oidset = GSS_C_NO_OID_SET;\n\tint ret;\n\tgss_buffer_desc client_name = {0};\n\tint ccache_from_keytab = 0;\n\n\tif (gss_extra == NULL)\n\t\treturn PBS_GSS_ERR_INTERNAL;\n\n\tif (gss_extra->role == AUTH_ROLE_UNKNOWN)\n\t\treturn PBS_GSS_ERR_INTERNAL;\n\n\tif (gss_extra->hostname == NULL)\n\t\treturn PBS_GSS_ERR_INTERNAL;\n\n\tgss_context = gss_extra->gssctx;\n\n\tif (service_name == NULL) {\n\t\tservice_name = (char *) malloc(strlen(PBS_KRB5_SERVICE_NAME) + 1 + strlen(gss_extra->hostname) + 1);\n\t\tif (service_name == NULL) {\n\t\t\tGSS_LOG_ERR(\"malloc failure\");\n\t\t\treturn 
PBS_GSS_ERR_INTERNAL;\n\t\t}\n\t\tsprintf(service_name, \"%s@%s\", PBS_KRB5_SERVICE_NAME, gss_extra->hostname);\n\t}\n\n\tswitch (gss_extra->role) {\n\n\t\tcase AUTH_CLIENT:\n\t\t\tif (pbs_gss_oidset_mech(&oidset) != PBS_GSS_OK)\n\t\t\t\treturn PBS_GSS_ERR_OID;\n\n\t\t\tif (gss_extra->conn_type == AUTH_USER_CONN) {\n\t\t\t\tchar *ccname = getenv(\"KRB5CCNAME\");\n\t\t\t\tint can_get_creds = 0;\n\n\t\t\t\tif (pbs_gss_can_get_creds(oidset)) {\n\t\t\t\t\tcan_get_creds = 1;\n\t\t\t\t}\n\n\t\t\t\tif (!can_get_creds && ccname) {\n\t\t\t\t\tunsetenv(\"KRB5CCNAME\");\n\n\t\t\t\t\t/* try get credentials with default ccache */\n\n\t\t\t\t\tif (pbs_gss_can_get_creds(oidset)) {\n\t\t\t\t\t\tcan_get_creds = 1;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tif (ccname) {\n\t\t\t\t\t\t\tsetenv(\"KRB5CCNAME\", ccname, 1);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif (!can_get_creds) {\n\t\t\t\t\tif (init_pbs_client_ccache_from_keytab(gss_log_buffer, LOG_BUF_SIZE)) {\n\t\t\t\t\t\tGSS_LOG_DBG(gss_log_buffer);\n\t\t\t\t\t\tunsetenv(\"KRB5CCNAME\");\n\t\t\t\t\t} else {\n\t\t\t\t\t\tccache_from_keytab = 1;\n\t\t\t\t\t\tcan_get_creds = 1;\n\t\t\t\t\t}\n\n\t\t\t\t\tif (!ccache_from_keytab && ccname) {\n\t\t\t\t\t\tsetenv(\"KRB5CCNAME\", ccname, 1);\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif (!can_get_creds) {\n\t\t\t\t\tif (ccname) {\n\t\t\t\t\t\tsetenv(\"KRB5CCNAME\", ccname, 1);\n\t\t\t\t\t}\n\n\t\t\t\t\t/* no credentials at all, ask user for creds */\n\t\t\t\t\tif (pbs_gss_ask_user_creds()) {\n\t\t\t\t\t\tunsetenv(\"KRB5CCNAME\");\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif (init_pbs_client_ccache_from_keytab(gss_log_buffer, LOG_BUF_SIZE)) {\n\t\t\t\t\tGSS_LOG_DBG(gss_log_buffer);\n\t\t\t\t\tunsetenv(\"KRB5CCNAME\");\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tmaj_stat = gss_acquire_cred(&min_stat, GSS_C_NO_NAME, GSS_C_INDEFINITE, oidset, GSS_C_INITIATE, &creds, NULL, NULL);\n\n\t\t\tpbs_gss_release_oidset(&oidset);\n\n\t\t\tif (maj_stat != GSS_S_COMPLETE) {\n\t\t\t\tGSS_LOG_STS(\"gss_acquire_cred\", 
maj_stat, min_stat);\n\t\t\t\tif (ccache_from_keytab || gss_extra->conn_type == AUTH_SERVICE_CONN)\n\t\t\t\t\tunsetenv(\"KRB5CCNAME\");\n\t\t\t\treturn PBS_GSS_ERR_ACQUIRE_CREDS;\n\t\t\t}\n\n\t\t\tgss_flags = GSS_C_MUTUAL_FLAG | GSS_C_DELEG_FLAG | GSS_C_INTEG_FLAG | GSS_C_CONF_FLAG;\n\t\t\toid = PBS_GSS_MECH_OID;\n\n\t\t\tret = pbs_gss_client_establish_context(service_name, creds, oid, gss_flags, &gss_context, &ret_flags, data_in, len_in, data_out, len_out);\n\t\t\tgss_extra->gssctx = gss_context;\n\n\t\t\tif (ccache_from_keytab || gss_extra->conn_type == AUTH_SERVICE_CONN)\n\t\t\t\tunsetenv(\"KRB5CCNAME\");\n\n\t\t\tif (creds != GSS_C_NO_CREDENTIAL) {\n\t\t\t\tmaj_stat = gss_release_cred(&min_stat, &creds);\n\t\t\t\tif (maj_stat != GSS_S_COMPLETE) {\n\t\t\t\t\tGSS_LOG_STS(\"gss_release_cred\", maj_stat, min_stat);\n\t\t\t\t\treturn PBS_GSS_ERR_INTERNAL;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tbreak;\n\n\t\tcase AUTH_INTERACTIVE:\n\t\tcase AUTH_SERVER:\n\t\t\t/*\n\t\t\t * if credentials are old, try to get new ones. 
If we can't, keep the old\n\t\t\t * ones since they're probably still valid and hope that\n\t\t\t * we can get new credentials next time\n\t\t\t */\n\t\t\tif (now - lastcredstime > credlifetime) {\n\t\t\t\tgss_cred_id_t new_server_creds = GSS_C_NO_CREDENTIAL;\n\n\t\t\t\tif (pbs_gss_server_acquire_creds(service_name, &new_server_creds) != PBS_GSS_OK) {\n\t\t\t\t\tsnprintf(gss_log_buffer, LOG_BUF_SIZE, \"Failed to acquire server credentials for %s\", service_name);\n\t\t\t\t\tGSS_LOG_ERR(gss_log_buffer);\n\n\t\t\t\t\t/* try again in 2 minutes */\n\t\t\t\t\tlastcredstime = now + 120;\n\t\t\t\t} else {\n\t\t\t\t\tlastcredstime = now;\n\t\t\t\t\tsnprintf(gss_log_buffer, LOG_BUF_SIZE, \"Refreshing server credentials at %ld\", (long) now);\n\t\t\t\t\tGSS_LOG_DBG(gss_log_buffer);\n\n\t\t\t\t\tif (server_creds != GSS_C_NO_CREDENTIAL) {\n\t\t\t\t\t\tmaj_stat = gss_release_cred(&min_stat, &server_creds);\n\t\t\t\t\t\tif (maj_stat != GSS_S_COMPLETE) {\n\t\t\t\t\t\t\tGSS_LOG_STS(\"gss_release_cred\", maj_stat, min_stat);\n\t\t\t\t\t\t\treturn PBS_GSS_ERR_INTERNAL;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tserver_creds = new_server_creds;\n\n\t\t\t\t\t/* fetch information about the fresh credentials */\n\t\t\t\t\tif (gss_inquire_cred(&min_stat, server_creds, NULL, &lifetime, NULL, NULL) == GSS_S_COMPLETE) {\n\t\t\t\t\t\tif (lifetime == GSS_C_INDEFINITE) {\n\t\t\t\t\t\t\tcredlifetime = DEFAULT_CREDENTIAL_LIFETIME;\n\t\t\t\t\t\t\tsnprintf(gss_log_buffer, LOG_BUF_SIZE, \"Server credentials renewed with indefinite lifetime, using %d.\", DEFAULT_CREDENTIAL_LIFETIME);\n\t\t\t\t\t\t\tGSS_LOG_DBG(gss_log_buffer);\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tsnprintf(gss_log_buffer, LOG_BUF_SIZE, \"Server credentials renewed with lifetime as %u.\", lifetime);\n\t\t\t\t\t\t\tGSS_LOG_DBG(gss_log_buffer);\n\t\t\t\t\t\t\tcredlifetime = lifetime;\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\t/* could not read information from credential */\n\t\t\t\t\t\tcredlifetime = 
0;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tret = pbs_gss_server_establish_context(server_creds, NULL, &gss_context, &client_name, &ret_flags, data_in, len_in, data_out, len_out);\n\t\t\tgss_extra->gssctx = gss_context;\n\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\tfree(service_name);\n\t\t\treturn PBS_GSS_ERR_INTERNAL;\n\t}\n\n\tif (service_name != NULL)\n\t\tfree(service_name);\n\n\tif (gss_context == GSS_C_NO_CONTEXT) {\n\t\tGSS_LOG_ERR(\"Failed to establish gss context\");\n\t\treturn PBS_GSS_ERR_CONTEXT_ESTABLISH;\n\t}\n\n\tif (ret == PBS_GSS_CONTINUE_NEEDED) {\n\t\treturn PBS_GSS_OK;\n\t}\n\n\tif (client_name.length) {\n\t\tgss_extra->clientname = malloc(client_name.length + 1);\n\t\tif (gss_extra->clientname == NULL) {\n\t\t\tGSS_LOG_ERR(\"malloc failure\");\n\t\t\t(void) gss_release_buffer(&min_stat, &client_name);\n\t\t\treturn PBS_GSS_ERR_INTERNAL;\n\t\t}\n\n\t\tmemcpy(gss_extra->clientname, client_name.value, client_name.length);\n\t\tgss_extra->clientname[client_name.length] = '\\0';\n\t\t(void) gss_release_buffer(&min_stat, &client_name);\n\t}\n\n\tif (ret == PBS_GSS_OK) {\n\t\tgss_extra->gssctx_established = 1;\n\t\tgss_extra->is_secure = (ret_flags & GSS_C_CONF_FLAG);\n\t\tif (gss_extra->role == AUTH_SERVER || gss_extra->role == AUTH_INTERACTIVE) {\n\t\t\tsnprintf(gss_log_buffer, LOG_BUF_SIZE, \"GSS context established with client %s\", gss_extra->clientname);\n\t\t} else {\n\t\t\tsnprintf(gss_log_buffer, LOG_BUF_SIZE, \"GSS context established with server %s\", gss_extra->hostname);\n\t\t}\n\t\tGSS_LOG_DBG(gss_log_buffer);\n\t} else {\n\t\tif (gss_extra->role == AUTH_SERVER || gss_extra->role == AUTH_INTERACTIVE) {\n\t\t\tif (gss_extra->clientname)\n\t\t\t\tsnprintf(gss_log_buffer, LOG_BUF_SIZE, \"Failed to establish GSS context with client %s\", gss_extra->clientname);\n\t\t\telse\n\t\t\t\tsnprintf(gss_log_buffer, LOG_BUF_SIZE, \"Failed to establish GSS context with client\");\n\t\t} else {\n\t\t\tsnprintf(gss_log_buffer, LOG_BUF_SIZE, \"Failed to establish GSS context with server %s\", gss_extra->hostname);\n\t\t}\n\t\tGSS_LOG_ERR(gss_log_buffer);\n\t\treturn PBS_GSS_ERR_CONTEXT_ESTABLISH;\n\t}\n\n\treturn 
PBS_GSS_OK;\n}\n\n/********* START OF EXPORTED FUNCS *********/\n\n/** @brief\n *\tpbs_auth_set_config - Set config for this lib\n *\n * @param[in] config - auth config structure\n *\n * @return void\n *\n */\nvoid\npbs_auth_set_config(const pbs_auth_config_t *config)\n{\n\tlogger = config->logfunc;\n}\n\n/** @brief\n *\tpbs_auth_create_ctx - allocates external auth context structure for GSS authentication\n *\n * @param[in] ctx - pointer to external auth context to be allocated\n * @param[in] mode - AUTH_SERVER or AUTH_CLIENT\n * @param[in] conn_type - AUTH_USER_CONN or AUTH_SERVICE_CONN\n * @param[in] hostname - hostname of other authenticating party in case of AUTH_CLIENT else not used\n *\n * @return\tint\n * @retval\t0 - success\n * @retval\t1 - error\n */\nint\npbs_auth_create_ctx(void **ctx, int mode, int conn_type, const char *hostname)\n{\n\tpbs_gss_extra_t *gss_extra = NULL;\n\n\t*ctx = NULL;\n\n\tgss_extra = (pbs_gss_extra_t *) calloc(1, sizeof(pbs_gss_extra_t));\n\tif (gss_extra == NULL) {\n\t\treturn 1;\n\t}\n\n\tgss_extra->gssctx = GSS_C_NO_CONTEXT;\n\tgss_extra->role = mode;\n\tgss_extra->conn_type = conn_type;\n\tif (gss_extra->role == AUTH_SERVER || gss_extra->role == AUTH_INTERACTIVE) {\n\t\tchar *hn = NULL;\n\t\tif ((hn = malloc(PBS_MAXHOSTNAME + 1)) == NULL) {\n\t\t\tfree(gss_extra);\n\t\t\treturn 1;\n\t\t}\n\t\tgethostname(hn, PBS_MAXHOSTNAME + 1);\n\t\tgss_extra->hostname = hn;\n\t} else {\n\t\tgss_extra->hostname = strdup(hostname);\n\t\tif (gss_extra->hostname == NULL) {\n\t\t\tfree(gss_extra);\n\t\t\treturn 1;\n\t\t}\n\t}\n\n\t*ctx = gss_extra;\n\treturn 0;\n}\n\n/** @brief\n *\tpbs_auth_destroy_ctx - destroy external auth context structure for GSS authentication\n *\n * @param[in] ctx - pointer to external auth context\n *\n * @return void\n */\nvoid\npbs_auth_destroy_ctx(void *ctx)\n{\n\tpbs_gss_extra_t *gss_extra = (pbs_gss_extra_t *) ctx;\n\tOM_uint32 min_stat = 0;\n\n\tif (gss_extra == 
NULL)\n\t\treturn;\n\n\tfree(gss_extra->hostname);\n\tfree(gss_extra->clientname);\n\n\tif (gss_extra->gssctx != GSS_C_NO_CONTEXT)\n\t\t(void) gss_delete_sec_context(&min_stat, &gss_extra->gssctx, GSS_C_NO_BUFFER);\n\n\tmemset(gss_extra, 0, sizeof(pbs_gss_extra_t));\n\tfree(gss_extra);\n}\n\n/** @brief\n *\tpbs_auth_get_userinfo - get user, host and realm from authentication context\n *\n * @param[in] ctx - pointer to external auth context\n * @param[out] user - username associated with ctx\n * @param[out] host - hostname/realm associated with ctx\n * @param[out] realm - realm associated with ctx\n *\n * @return\tint\n * @retval\t0 on success\n * @retval\t1 on error\n */\nint\npbs_auth_get_userinfo(void *ctx, char **user, char **host, char **realm)\n{\n\tpbs_gss_extra_t *gss_extra = (pbs_gss_extra_t *) ctx;\n\n\t*user = NULL;\n\t*host = NULL;\n\t*realm = NULL;\n\n\tif (gss_extra != NULL && gss_extra->clientname != NULL) {\n\t\tchar *cn = NULL;\n\t\tchar *p = NULL;\n\n\t\tcn = strdup(gss_extra->clientname);\n\t\tif (cn == NULL) {\n\t\t\tGSS_LOG_ERR(\"malloc failure\");\n\t\t\treturn 1;\n\t\t}\n\t\tp = strchr(cn, '@');\n\t\tif (p == NULL) {\n\t\t\tfree(cn);\n\t\t\tGSS_LOG_ERR(\"Invalid clientname in auth context\");\n\t\t\treturn 1;\n\t\t}\n\t\t*p = '\\0';\n\t\tif (strlen(cn) > PBS_MAXUSER || strlen(p + 1) > PBS_MAXHOSTNAME) {\n\t\t\tfree(cn);\n\t\t\tGSS_LOG_ERR(\"Invalid clientname in auth context\");\n\t\t\treturn 1;\n\t\t}\n\t\t*user = strdup(cn);\n\t\tif (*user == NULL) {\n\t\t\tGSS_LOG_ERR(\"malloc failure\");\n\t\t\tfree(cn);\n\t\t\treturn 1;\n\t\t}\n\t\t*realm = strdup(p + 1);\n\t\tif (*realm == NULL) {\n\t\t\tGSS_LOG_ERR(\"malloc failure\");\n\t\t\tfree(*user);\n\t\t\t*user = NULL;\n\t\t\tfree(cn);\n\t\t\treturn 1;\n\t\t}\n\t\t*host = strdup(*realm);\n\t\tif (*host == NULL) {\n\t\t\tGSS_LOG_ERR(\"malloc failure\");\n\t\t\tfree(*user);\n\t\t\t*user = NULL;\n\t\t\tfree(*realm);\n\t\t\t*realm = NULL;\n\t\t\tfree(cn);\n\t\t\treturn 1;\n\t\t}\n\t\tfree(cn);\n\t}\n\n\treturn 0;\n}\n\n/** @brief\n *\tpbs_auth_process_handshake_data - do GSS auth handshake\n *\n * @param[in] ctx - pointer to external auth context\n * @param[in] data_in - received auth token data (if any)\n * @param[in] len_in - length of received auth token data (if any)\n * @param[out] data_out - auth token data to send (if any)\n * @param[out] len_out - length of auth token data to send (if any)\n * @param[out] is_handshake_done - indicates whether handshake is done (1) or not (0)\n *\n * @return\tint\n * @retval\t0 on success\n * @retval\t!0 on error\n */\nint\npbs_auth_process_handshake_data(void *ctx, void *data_in, size_t len_in, void **data_out, size_t *len_out, int *is_handshake_done)\n{\n\tpbs_gss_extra_t *gss_extra = (pbs_gss_extra_t *) ctx;\n\tint rc = 0;\n\n\tif (gss_extra == NULL) {\n\t\tGSS_LOG_ERR(\"No auth context available\");\n\t\treturn 1;\n\t}\n\n\tif (gss_extra->gssctx_established) {\n\t\tGSS_LOG_ERR(\"GSS context already established\");\n\t\treturn 1;\n\t}\n\n\t*is_handshake_done = 0;\n\n\tpthread_once(&gss_init_once, init_gss_atfork);\n\n\tif (gss_lock(&gss_mutex)) {\n\t\treturn PBS_GSS_ERR_INTERNAL;\n\t}\n\n\trc = pbs_gss_establish_context(gss_extra, data_in, len_in, data_out, len_out);\n\n\tif (gss_unlock(&gss_mutex)) {\n\t\treturn PBS_GSS_ERR_INTERNAL;\n\t}\n\n\tif (gss_extra->gssctx_established) {\n\t\t*is_handshake_done = 1;\n\n\t\tif (gss_extra->role == AUTH_SERVER || gss_extra->role == AUTH_INTERACTIVE) {\n\t\t\tsnprintf(gss_log_buffer, LOG_BUF_SIZE, \"Entered encrypted communication with client %s\", gss_extra->clientname);\n\t\t\tGSS_LOG_DBG(gss_log_buffer);\n\t\t} else {\n\t\t\tsnprintf(gss_log_buffer, LOG_BUF_SIZE, \"Entered encrypted communication with server %s\", gss_extra->hostname);\n\t\t\tGSS_LOG_DBG(gss_log_buffer);\n\t\t}\n\t}\n\n\treturn rc;\n}\n\n/** @brief\n *\tpbs_auth_encrypt_data - encrypt data based on given GSS context.\n *\n * @param[in] ctx - pointer to external auth context\n * @param[in] 
data_in - clear text data\n * @param[in] len_in - length of clear text data\n * @param[out] data_out - encrypted data\n * @param[out] len_out - length of encrypted data\n *\n * @return\tint\n * @retval\t0 on success\n * @retval\t1 on error\n */\nint\npbs_auth_encrypt_data(void *ctx, void *data_in, size_t len_in, void **data_out, size_t *len_out)\n{\n\tpbs_gss_extra_t *gss_extra = (pbs_gss_extra_t *) ctx;\n\tOM_uint32 maj_stat;\n\tOM_uint32 min_stat = 0;\n\tgss_buffer_desc unwrapped;\n\tgss_buffer_desc wrapped;\n\tint conf_state = 0;\n\n\tif (gss_extra == NULL) {\n\t\tGSS_LOG_ERR(\"No auth context available\");\n\t\treturn PBS_GSS_ERR_INTERNAL;\n\t}\n\n\tif (len_in == 0) {\n\t\tGSS_LOG_ERR(\"No data available to encrypt\");\n\t\treturn PBS_GSS_ERR_INTERNAL;\n\t}\n\n\twrapped.length = 0;\n\twrapped.value = NULL;\n\n\tunwrapped.length = len_in;\n\tunwrapped.value = data_in;\n\n\tpthread_once(&gss_init_once, init_gss_atfork);\n\n\tif (gss_lock(&gss_mutex)) {\n\t\treturn PBS_GSS_ERR_INTERNAL;\n\t}\n\n\tmaj_stat = gss_wrap(&min_stat, gss_extra->gssctx, gss_extra->is_secure, GSS_C_QOP_DEFAULT, &unwrapped, &conf_state, &wrapped);\n\n\tif (gss_unlock(&gss_mutex)) {\n\t\treturn PBS_GSS_ERR_INTERNAL;\n\t}\n\n\tif (maj_stat != GSS_S_COMPLETE) {\n\t\tGSS_LOG_STS(\"gss_wrap\", maj_stat, min_stat);\n\n\t\tmaj_stat = gss_release_buffer(&min_stat, &wrapped);\n\t\tif (maj_stat != GSS_S_COMPLETE) {\n\t\t\tGSS_LOG_STS(\"gss_release_buffer\", maj_stat, min_stat);\n\t\t\treturn PBS_GSS_ERR_INTERNAL;\n\t\t}\n\n\t\treturn PBS_GSS_ERR_WRAP;\n\t}\n\n\t*len_out = wrapped.length;\n\t*data_out = malloc(wrapped.length);\n\tif (*data_out == NULL) {\n\t\tGSS_LOG_ERR(\"malloc failure\");\n\t\treturn PBS_GSS_ERR_INTERNAL;\n\t}\n\tmemcpy(*data_out, wrapped.value, wrapped.length);\n\n\tmaj_stat = gss_release_buffer(&min_stat, &wrapped);\n\tif (maj_stat != GSS_S_COMPLETE) {\n\t\tGSS_LOG_STS(\"gss_release_buffer\", maj_stat, min_stat);\n\t\treturn PBS_GSS_ERR_INTERNAL;\n\t}\n\n\treturn 
PBS_GSS_OK;\n}\n\n/** @brief\n *\tpbs_auth_decrypt_data - decrypt data based on given GSS context.\n *\n * @param[in] ctx - pointer to external auth context\n * @param[in] data_in - encrypted data\n * @param[in] len_in - length of encrypted data\n * @param[out] data_out - clear text data\n * @param[out] len_out - length of clear text data\n *\n * @return\tint\n * @retval\t0 on success\n * @retval\t1 on error\n */\nint\npbs_auth_decrypt_data(void *ctx, void *data_in, size_t len_in, void **data_out, size_t *len_out)\n{\n\tpbs_gss_extra_t *gss_extra = (pbs_gss_extra_t *) ctx;\n\tOM_uint32 maj_stat;\n\tOM_uint32 min_stat = 0;\n\tgss_buffer_desc wrapped;\n\tgss_buffer_desc unwrapped;\n\n\tif (gss_extra == NULL) {\n\t\tGSS_LOG_ERR(\"No auth context available\");\n\t\treturn PBS_GSS_ERR_INTERNAL;\n\t}\n\n\tif (len_in == 0) {\n\t\tGSS_LOG_ERR(\"No data available to decrypt\");\n\t\treturn PBS_GSS_ERR_INTERNAL;\n\t}\n\n\tif (gss_extra->is_secure == 0) {\n\t\tGSS_LOG_ERR(\"wrapped data ready but auth context is not secure\");\n\t\treturn PBS_GSS_ERR_INTERNAL;\n\t}\n\n\tunwrapped.length = 0;\n\tunwrapped.value = NULL;\n\n\twrapped.length = len_in;\n\twrapped.value = data_in;\n\n\tpthread_once(&gss_init_once, init_gss_atfork);\n\n\tif (gss_lock(&gss_mutex)) {\n\t\treturn PBS_GSS_ERR_INTERNAL;\n\t}\n\n\tmaj_stat = gss_unwrap(&min_stat, gss_extra->gssctx, &wrapped, &unwrapped, NULL, NULL);\n\n\tif (gss_unlock(&gss_mutex)) {\n\t\treturn PBS_GSS_ERR_INTERNAL;\n\t}\n\n\tif (maj_stat != GSS_S_COMPLETE) {\n\t\tGSS_LOG_STS(\"gss_unwrap\", maj_stat, min_stat);\n\n\t\tmaj_stat = gss_release_buffer(&min_stat, &unwrapped);\n\t\tif (maj_stat != GSS_S_COMPLETE) {\n\t\t\tGSS_LOG_STS(\"gss_release_buffer\", maj_stat, min_stat);\n\t\t\treturn PBS_GSS_ERR_INTERNAL;\n\t\t}\n\n\t\treturn PBS_GSS_ERR_UNWRAP;\n\t}\n\n\tif (unwrapped.length == 0)\n\t\treturn PBS_GSS_ERR_UNWRAP;\n\n\t*len_out = unwrapped.length;\n\t*data_out = malloc(unwrapped.length);\n\tif (*data_out == NULL) 
{\n\t\tGSS_LOG_ERR(\"malloc failure\");\n\t\treturn PBS_GSS_ERR_INTERNAL;\n\t}\n\tmemcpy(*data_out, unwrapped.value, unwrapped.length);\n\n\tmaj_stat = gss_release_buffer(&min_stat, &unwrapped);\n\tif (maj_stat != GSS_S_COMPLETE) {\n\t\tGSS_LOG_STS(\"gss_release_buffer\", maj_stat, min_stat);\n\t\treturn PBS_GSS_ERR_INTERNAL;\n\t}\n\n\treturn PBS_GSS_OK;\n}\n\n/********* END OF EXPORTED FUNCS *********/\n\n#endif /* PBS_SECURITY */\n"
  },
  {
    "path": "src/lib/Libauth/munge/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nlib_LTLIBRARIES = libauth_munge.la\n\nlibauth_munge_la_CPPFLAGS = \\\n\t-I$(top_srcdir)/src/include\n\nlibauth_munge_la_LDFLAGS = -version-info 0:0:0\n\nlibauth_munge_la_SOURCES = \\\n\tmunge_supp.c\n"
  },
  {
    "path": "src/lib/Libauth/munge/munge_supp.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <stdio.h>\n#include <string.h>\n#include <stdlib.h>\n#include <errno.h>\n#include <unistd.h>\n#include <pwd.h>\n#include <pthread.h>\n#include <dlfcn.h>\n#include <grp.h>\n#include \"libauth.h\"\n#include \"libutil.h\"\n#include \"pbs_ifl.h\"\n\nstatic pthread_once_t munge_init_once = PTHREAD_ONCE_INIT;\n\nstatic void *munge_dlhandle = NULL;\t\t\t\t\t\t\t       /* MUNGE dynamic loader handle */\nstatic int (*munge_encode)(char **, void *, const void *, int) = NULL;\t\t\t       /* MUNGE munge_encode() function pointer */\nstatic int (*munge_decode)(const char *cred, void *, void **, int *, uid_t *, gid_t *) = NULL; /* MUNGE munge_decode() function pointer */\nstatic char *(*munge_strerror)(int) = NULL;\t\t\t\t\t\t       /* MUNGE munge_stderror() function pointer */\nstatic void (*logger)(int type, int objclass, int severity, const char *objname, const char *text);\n\n#define __MUNGE_LOGGER(e, c, s, m)                                        \\\n\tdo {                                                              \\\n\t\tif (logger == NULL) {                                     \\\n\t\t\tif (s != LOG_DEBUG)                               \\\n\t\t\t\tfprintf(stderr, \"%s: %s\\n\", __func__, m); \\\n\t\t} else {                                                  \\\n\t\t\tlogger(e, c, s, __func__, m);                     \\\n\t\t}            
                                             \\\n\t} while (0)\n#define MUNGE_LOG_ERR(m) __MUNGE_LOGGER(PBSEVENT_ERROR | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER, LOG_ERR, m)\n#define MUNGE_LOG_DBG(m) __MUNGE_LOGGER(PBSEVENT_DEBUG | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER, LOG_DEBUG, m)\n\ntypedef struct {\n\t/* If set to non-zero, munge_validate_auth_data will also check that\n\t * the token received originated from root user (uid == 0)\n\t */\n\tint check_root;\n\tchar user[PBS_MAXUSER + 1];\n} munge_extra_t;\n\nstatic void init_munge(void);\nstatic char *munge_get_auth_data(char *, size_t);\nstatic int munge_validate_auth_data(munge_extra_t *, void *, int, char *, size_t);\n\n/**\n * @brief\n *\tinit_munge Check if libmunge.so shared library is present in the system\n *\tand assign specific function pointers to be used at the time\n *\tof decode or encode.\n *\n * @note\n *\tThis function should get invoked only once. Using pthread_once for this purpose.\n *\tThis function is not expecting any arguments. 
So storing error messages in a static\n *\tvariable in case of error.\n *\n * @return void\n *\n */\nstatic void\ninit_munge(void)\n{\n\tstatic const char libmunge[] = \"libmunge.so\";\n\tchar ebuf[LOG_BUF_SIZE];\n\n\tebuf[0] = '\\0';\n\tmunge_dlhandle = dlopen(libmunge, RTLD_LAZY);\n\tif (munge_dlhandle == NULL) {\n\t\tsnprintf(ebuf, sizeof(ebuf), \"%s not found\", libmunge);\n\t\tMUNGE_LOG_ERR(ebuf);\n\t\tgoto err;\n\t}\n\n\tmunge_encode = dlsym(munge_dlhandle, \"munge_encode\");\n\tif (munge_encode == NULL) {\n\t\tsnprintf(ebuf, sizeof(ebuf), \"symbol munge_encode not found in %s\", libmunge);\n\t\tMUNGE_LOG_ERR(ebuf);\n\t\tgoto err;\n\t}\n\n\tmunge_decode = dlsym(munge_dlhandle, \"munge_decode\");\n\tif (munge_decode == NULL) {\n\t\tsnprintf(ebuf, sizeof(ebuf), \"symbol munge_decode not found in %s\", libmunge);\n\t\tMUNGE_LOG_ERR(ebuf);\n\t\tgoto err;\n\t}\n\n\tmunge_strerror = dlsym(munge_dlhandle, \"munge_strerror\");\n\tif (munge_strerror == NULL) {\n\t\tsnprintf(ebuf, sizeof(ebuf), \"symbol munge_strerror not found in %s\", libmunge);\n\t\tMUNGE_LOG_ERR(ebuf);\n\t\tgoto err;\n\t}\n\n\treturn;\n\nerr:\n\tif (munge_dlhandle)\n\t\tdlclose(munge_dlhandle);\n\n\tmunge_dlhandle = NULL;\n\tmunge_encode = NULL;\n\tmunge_decode = NULL;\n\tmunge_strerror = NULL;\n\treturn;\n}\n\n/**\n * @brief\n *\tmunge_get_auth_data - Call Munge encode API's to get the authentication data for the current user\n *\n * @param[in] ebuf - buffer to hold error msg if any\n * @param[in] ebufsz - size of ebuf\n *\n * @return char *\n * @retval !NULL - success\n * @retval  NULL - failure\n *\n */\nstatic char *\nmunge_get_auth_data(char *ebuf, size_t ebufsz)\n{\n\tchar *cred = NULL;\n\tuid_t myrealuid;\n\tstruct passwd *pwent;\n\tstruct group *grp;\n\tchar payload[PBS_MAXUSER + PBS_MAXGRPN + 1] = {'\\0'};\n\tint munge_err = 0;\n\n\t/*\n\t * ebuf passed to this function is initialized with nulls all through\n\t * and ebufsz value passed is sizeof(ebuf) - 1\n\t * So, we don't need to null 
terminate the last byte in the below\n\t * all snprintf\n\t *\n\t * see pbs_auth_process_handshake_data()\n\t */\n\n\tif (munge_dlhandle == NULL) {\n\t\tpthread_once(&munge_init_once, init_munge);\n\t\tif (munge_encode == NULL) {\n\t\t\tsnprintf(ebuf, ebufsz, \"Failed to load munge lib\");\n\t\t\tMUNGE_LOG_ERR(ebuf);\n\t\t\tgoto err;\n\t\t}\n\t}\n\n\tmyrealuid = getuid();\n\tpwent = getpwuid(myrealuid);\n\tif (pwent == NULL) {\n\t\tsnprintf(ebuf, ebufsz, \"Failed to obtain user-info for uid = %d\", myrealuid);\n\t\tMUNGE_LOG_ERR(ebuf);\n\t\tgoto err;\n\t}\n\n\tgrp = getgrgid(pwent->pw_gid);\n\tif (grp == NULL) {\n\t\tsnprintf(ebuf, ebufsz, \"Failed to obtain group-info for gid=%d\", pwent->pw_gid);\n\t\tMUNGE_LOG_ERR(ebuf);\n\t\tgoto err;\n\t}\n\n\tsnprintf(payload, PBS_MAXUSER + PBS_MAXGRPN, \"%s:%s\", pwent->pw_name, grp->gr_name);\n\n\tmunge_err = munge_encode(&cred, NULL, payload, strlen(payload));\n\tif (munge_err != 0) {\n\t\tsnprintf(ebuf, ebufsz, \"MUNGE user-authentication on encode failed with `%s`\", munge_strerror(munge_err));\n\t\tMUNGE_LOG_ERR(ebuf);\n\t\tgoto err;\n\t}\n\treturn cred;\n\nerr:\n\tfree(cred);\n\treturn NULL;\n}\n\n/**\n * @brief\n *\tmunge_validate_auth_data - validate given munge authentication data\n *\n * @param[in] ctx - pointer to external auth context\n * @param[in] auth_data - auth data to be verified\n * @param[in] with_root - If set to non-zero, verify that token received matches root uid (0)\n * @param[in] ebuf - buffer to hold error msg if any\n * @param[in] ebufsz - size of ebuf\n *\n * @return int\n * @retval 0 - Success\n * @retval -1 - Failure\n *\n */\nstatic int\nmunge_validate_auth_data(munge_extra_t *ctx, void *auth_data, int with_root, char *ebuf, size_t ebufsz)\n{\n\tuid_t uid;\n\tgid_t gid;\n\tint recv_len = 0;\n\tstruct passwd *pwent = NULL;\n\tstruct group *grp = NULL;\n\tvoid *recv_payload = NULL;\n\tint munge_err = 0;\n\tchar *p;\n\tint rc = -1;\n\n\t/*\n\t * ebuf passed to this function is initialized with 
nulls all through\n\t * and ebufsz value passed is sizeof(ebuf) - 1\n\t * So, we don't need to null terminate the last byte in the below\n\t * all snprintf\n\t *\n\t * see pbs_auth_process_handshake_data()\n\t */\n\n\tif (munge_dlhandle == NULL) {\n\t\tpthread_once(&munge_init_once, init_munge);\n\t\tif (munge_decode == NULL) {\n\t\t\tsnprintf(ebuf, ebufsz, \"Failed to load munge lib\");\n\t\t\tMUNGE_LOG_ERR(ebuf);\n\t\t\tgoto err;\n\t\t}\n\t}\n\n\tmunge_err = munge_decode(auth_data, NULL, &recv_payload, &recv_len, &uid, &gid);\n\tif (munge_err != 0) {\n\t\tsnprintf(ebuf, ebufsz, \"MUNGE user-authentication on decode failed with `%s`\", munge_strerror(munge_err));\n\t\tMUNGE_LOG_ERR(ebuf);\n\t\tgoto err;\n\t}\n\n\tif ((pwent = getpwuid(uid)) == NULL) {\n\t\tsnprintf(ebuf, ebufsz, \"Failed to obtain user-info for uid = %d\", uid);\n\t\tMUNGE_LOG_ERR(ebuf);\n\t\tgoto err;\n\t}\n\n\tif ((grp = getgrgid(pwent->pw_gid)) == NULL) {\n\t\tsnprintf(ebuf, ebufsz, \"Failed to obtain group-info for gid=%d\", pwent->pw_gid);\n\t\tMUNGE_LOG_ERR(ebuf);\n\t\tgoto err;\n\t}\n\t/* Keep the username for verification */\n\tpbs_strncpy(ctx->user, pwent->pw_name, PBS_MAXUSER);\n\n\tp = strtok((char *) recv_payload, \":\");\n\n\tif (p && (strncmp(pwent->pw_name, p, PBS_MAXUSER) == 0) && /* in line with current pbs_iff we compare the username only */\n\t    (with_root == 0 || pwent->pw_uid == 0))\n\t\trc = 0;\n\telse {\n\t\tsnprintf(ebuf, ebufsz, \"User credentials do not match\");\n\t\tMUNGE_LOG_ERR(ebuf);\n\t}\n\nerr:\n\tif (recv_payload)\n\t\tfree(recv_payload);\n\treturn rc;\n}\n\n/********* START OF EXPORTED FUNCS *********/\n\n/** @brief\n *\tpbs_auth_set_config - Set config for this lib\n *\n * @param[in] config - auth config structure\n *\n * @return void\n *\n */\nvoid\npbs_auth_set_config(const pbs_auth_config_t *config)\n{\n\tlogger = config->logfunc;\n}\n\n/** @brief\n *\tpbs_auth_create_ctx - allocates external auth context structure for MUNGE authentication\n *\n * @param[in] ctx - 
pointer to external auth context to be allocated\n * @param[in] mode - AUTH_SERVER, AUTH_CLIENT, or AUTH_INTERACTIVE\n * @param[in] conn_type - AUTH_USER_CONN or AUTH_SERVICE_CONN\n * @param[in] hostname - hostname of other authenticating party\n *\n * @return\tint\n * @retval\t0 - success\n * @retval\t1 - error\n */\nint\npbs_auth_create_ctx(void **ctx, int mode, int conn_type, const char *hostname)\n{\n\tmunge_extra_t *munge_extra = NULL;\n\n\t*ctx = NULL;\n\n\tmunge_extra = calloc(1, sizeof(munge_extra_t));\n\tif (munge_extra == NULL) {\n\t\tMUNGE_LOG_ERR(\"Out of memory!\");\n\t\treturn 1;\n\t}\n\n\t/* AUTH_INTERACTIVE used by qsub -I when authenticating an execution host connection */\n\tif (mode == AUTH_INTERACTIVE || conn_type == AUTH_SERVICE_CONN)\n\t\tmunge_extra->check_root = 1;\n\telse\n\t\tmunge_extra->check_root = 0;\n\n\t*ctx = munge_extra;\n\treturn 0;\n}\n\n/** @brief\n *\tpbs_auth_destroy_ctx - destroy external auth context structure for MUNGE authentication\n *\n * @param[in] ctx - pointer to external auth context\n *\n * @return void\n */\nvoid\npbs_auth_destroy_ctx(void *ctx)\n{\n\tmunge_extra_t *munge_extra = (munge_extra_t *) ctx;\n\tif (munge_extra)\n\t\tfree(munge_extra);\n\tctx = NULL;\n}\n\n/** @brief\n *\tpbs_auth_get_userinfo - get user, host and realm from authentication context\n *\n * @param[in] ctx - pointer to external auth context\n * @param[out] user - username associated with ctx\n * @param[out] host - hostname/realm associated with ctx\n * @param[out] realm - realm associated with ctx\n *\n * @return\tint\n * @retval\t0 on success\n * @retval\t1 on error\n *\n */\nint\npbs_auth_get_userinfo(void *ctx, char **user, char **host, char **realm)\n{\n\tmunge_extra_t *munge_extra = (munge_extra_t *) ctx;\n\tif (munge_extra == NULL) {\n\t\tMUNGE_LOG_ERR(\"Munge context not initialized\");\n\t\treturn 1;\n\t}\n\n\t*user = strdup(munge_extra->user);\n\tif ((*user) == NULL) {\n\t\tMUNGE_LOG_ERR(\"Failed to allocate memory for 
username\");\n\t\treturn 1;\n\t}\n \t*host = NULL;\n \t*realm = NULL;\n \treturn 0;\n}\n\n/** @brief\n *\tpbs_auth_process_handshake_data - do Munge auth handshake\n *\n * @param[in] ctx - pointer to external auth context\n * @param[in] data_in - received auth token data (if any)\n * @param[in] len_in - length of received auth token data (if any)\n * @param[out] data_out - auth token data to send (if any)\n * @param[out] len_out - lenght of auth token data to send (if any)\n * @param[out] is_handshake_done - indicates whether handshake is done (1) or not (0)\n *\n * @return\tint\n * @retval\t0 on success\n * @retval\t!0 on error\n */\nint\npbs_auth_process_handshake_data(void *ctx, void *data_in, size_t len_in, void **data_out, size_t *len_out, int *is_handshake_done)\n{\n\tint rc = -1;\n\tchar ebuf[LOG_BUF_SIZE] = {'\\0'};\n\n\tmunge_extra_t *munge_extra = (munge_extra_t *) ctx;\n\tif (munge_extra == NULL) {\n\t\tMUNGE_LOG_ERR(\"Munge context not initialized\");\n\t\treturn 1;\n\t}\n\n\t*len_out = 0;\n\t*data_out = NULL;\n\t*is_handshake_done = 0;\n\n\tpthread_once(&munge_init_once, init_munge);\n\n\tif (munge_dlhandle == NULL) {\n\t\t*data_out = strdup(\"Munge lib is not loaded\");\n\t\tif (*data_out != NULL)\n\t\t\t*len_out = strlen(*data_out);\n\t\treturn 1;\n\t}\n\n\tif (len_in > 0) {\n\t\tchar *data = (char *) data_in;\n\t\t/* enforce null char at given length of data */\n\t\tdata[len_in - 1] = '\\0';\n\t\trc = munge_validate_auth_data(ctx, data, munge_extra->check_root, ebuf, sizeof(ebuf) - 1);\n\t\tif (rc == 0) {\n\t\t\t*is_handshake_done = 1;\n\t\t\treturn 0;\n\t\t} else if (ebuf[0] != '\\0') {\n\t\t\t*data_out = strdup(ebuf);\n\t\t\tif (*data_out != NULL)\n\t\t\t\t*len_out = strlen(ebuf);\n\t\t}\n\t} else {\n\t\t*data_out = (void *) munge_get_auth_data(ebuf, sizeof(ebuf) - 1);\n\t\tif (*data_out) {\n\t\t\t*len_out = strlen((char *) *data_out) + 1; /* +1 to include null char also in data_out */\n\t\t\t*is_handshake_done = 1;\n\t\t\treturn 0;\n\t\t} else if 
(ebuf[0] != '\\0') {\n\t\t\t*data_out = strdup(ebuf);\n\t\t\tif (*data_out != NULL)\n\t\t\t\t*len_out = strlen(ebuf);\n\t\t}\n\t}\n\n\treturn 1;\n}\n\n/********* END OF EXPORTED FUNCS *********/\n"
  },
  {
    "path": "src/lib/Libcmds/batch_status.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <stdio.h>\n#include <string.h>\n\n#include \"pbs_ifl.h\"\n\n/**\n * @file\tbatch_status.c\n *\n * @brief\n *\tbatch_status.c - batch_status structures utilities\n */\n\n/**\n * @brief\n *\tbs_isort - insertion sort for batch_status structures\n *\n * @param[in] bs - batch_status linked list\n * @param[in] cmp_func - compare function to compare two batch_status\n *\n *\n * @return \tstructure handle\n * @retval\thead of sorted batch status list\n *\n */\nstruct batch_status *\nbs_isort(struct batch_status *bs,\n\t int (*cmp_func)(struct batch_status *, struct batch_status *))\n{\n\tstruct batch_status *new_head = NULL; /* new list head */\n\tstruct batch_status *cur_old;\t      /*where we are in the old list*/\n\tstruct batch_status *cur_new;\t      /* where we are in the new list */\n\tstruct batch_status *prev_new = NULL;\n\tstruct batch_status *tmp; /* tmp ptr to hold next */\n\n\tcur_old = bs;\n\tnew_head = NULL;\n\n\twhile (cur_old != NULL) {\n\t\ttmp = cur_old->next;\n\n\t\tif (new_head == NULL) {\n\t\t\tcur_old->next = NULL;\n\t\t\tnew_head = cur_old;\n\t\t} else {\n\t\t\t/* find where our node goes in the new list */\n\t\t\tfor (cur_new = new_head, prev_new = NULL;\n\t\t\t     cur_new != NULL && cmp_func(cur_new, cur_old) <= 0;\n\t\t\t     prev_new = cur_new, cur_new = cur_new->next)\n\t\t\t\t;\n\t\t\tif (prev_new == NULL) 
{\n\t\t\t\tcur_old->next = new_head;\n\t\t\t\tnew_head = cur_old;\n\t\t\t} else {\n\t\t\t\tcur_old->next = cur_new;\n\t\t\t\tprev_new->next = cur_old;\n\t\t\t}\n\t\t}\n\t\tcur_old = tmp;\n\t}\n\treturn new_head;\n}\n\n/**\n * @brief\n *\tbs_find - find a batch_status with given name in a batch_status structures list.\n *\n * @param[in] bs - batch_status linked list\n * @param[in] name - name of the batch_status structure to be searched\n *\n *\n * @return \tbatch_status structure handle\n * @retval\tbatch_status structure pointer with given name or NULL\n *\n */\nstruct batch_status *\nbs_find(struct batch_status *bs, const char *name)\n{\n\n\tif (name == NULL)\n\t\treturn NULL;\n\n\tfor (; ((bs != NULL) && strcmp(name, bs->name)); bs = bs->next)\n\t\t; /* empty for loop */\n\n\treturn bs;\n}\n\n/**\n * @brief\n *\tinit_bstat - Initialize batch status\n *\n * @param[in] bstat - batch_status struct\n *\n * @return \tvoid\n */\nvoid\ninit_bstat(struct batch_status *bstat)\n{\n\tbstat->next = NULL;\n\tbstat->text = NULL;\n\tbstat->attribs = NULL;\n}\n"
  },
  {
    "path": "src/lib/Libcmds/check_job_script.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tcheck_job_script.c\n * @brief\n * \tThese were moved from qsub so that AIF could access them.\n */\n#include <pbs_config.h>\n\n#include <ctype.h>\n#include <string.h>\n\n#include \"cmds.h\"\n#include \"libpbs.h\"\n\n/**\n * @brief\n *\tcheck whether the script content in buf s is executable or not\n *\n * @param[in] s - buf with script content (job)\n *\n * @return\tint\n * @retval\tTRUE\texecutable\n * @retval\tFALSE\tnot executable\n *\n */\nint\npbs_isexecutable(char *s)\n{\n\tchar *c;\n\n\tc = s;\n\tif ((*c == ':') || ((*c == '#') && (*(c + 1) == '!')))\n\t\treturn FALSE;\n\twhile (isspace(*c))\n\t\tc++;\n\tif (notNULL(c))\n\t\treturn (*c != '#');\n\treturn FALSE;\n}\n\n/**\n * @brief\n *\treturns the pbs directive\n *\n * @param[in] s - copy of script file\n * @param[in] prefix - prefix for pbs directives\n *\n * @return\tstring\n * @retval\t!NULL\t\tpointer just past the directive prefix\n * @retval\tNULL\t\tno directive found\n *\n */\nchar *\npbs_ispbsdir(char *s, char *prefix)\n{\n\tchar *it;\n\tint l;\n\n\tit = s;\n\twhile (isspace(*it))\n\t\tit++;\n\tl = strlen(prefix);\n\tif (l > 0 && strncmp(it, prefix, l) == 0)\n\t\treturn (it + l);\n\telse\n\t\treturn NULL;\n}\n"
  },
  {
    "path": "src/lib/Libcmds/chk_Jrange.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tchk_Jrange.c\n *\n * @brief\n * \tchk_Jrange - validate the subjob index range for the J option to qsub/qalter\n */\n#include <pbs_config.h>\n#include <ctype.h>\n#include <limits.h>\n#include <stdlib.h>\n#include \"pbs_ifl.h\"\n#include \"pbs_internal.h\"\n\n/**\n * @brief\n * \tchk_Jrange - validate the subjob index range for the J option to qsub/qalter\n *\n * @param[in] arg - index range string of the form start-end[:step]\n *\n * @return\tint\n * @retval\t0 if ok\n * @retval\t1 if invalid form\n * @retval\t2 if any of the individual numbers is too large\n */\nint\nchk_Jrange(char *arg)\n{\n\tchar *pc;\n\tchar *s;\n\tlong start;\n\tlong end;\n\tlong step;\n\n\tpc = arg;\n\tif (!isdigit((int) *pc))\n\t\treturn (1); /* not a positive number */\n\ts = arg;\n\twhile (*pc && isdigit((int) *pc))\n\t\t++pc;\n\tif (*pc != '-') {\n\t\treturn (1);\n\t}\n\tstart = strtol(s, NULL, 10);\n\tif (start < 0)\n\t\treturn 1;\n\tif (start == LONG_MAX)\n\t\treturn 2;\n\ts = ++pc;\n\tif (!isdigit((int) *pc)) {\n\t\treturn (1);\n\t}\n\twhile (*pc && isdigit((int) *pc))\n\t\t++pc;\n\tif ((*pc != '\\0') && (*pc != ':')) {\n\t\treturn (1);\n\t}\n\tend = strtol(s, NULL, 10);\n\tif (start >= end)\n\t\treturn 1;\n\tif (end == LONG_MAX)\n\t\treturn 2;\n\n\tif (*pc++ == ':') {\n\t\ts = pc;\n\t\twhile (*pc && isdigit((int) *pc))\n\t\t\t++pc;\n\t\tif (*pc != '\\0') {\n\t\t\treturn 
(1);\n\t\t}\n\t\tstep = strtol(s, NULL, 10);\n\t\tif (step < 1)\n\t\t\treturn (1);\n\t\tif (step == LONG_MAX)\n\t\t\treturn (2);\n\t}\n\treturn 0;\n}\n"
  },
  {
    "path": "src/lib/Libcmds/ck_job_name.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include \"pbs_ifl.h\"\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"cmds.h\"\n/**\n * @file\tck_job_name.c\n */\n\n/**\n * @brief\n * \tisalnumspch = if char is alpha numeric or from allowed set of special char\n *\n * @param[in] c - input\n *\n * @return\tint\n */\nstatic int\nisalnumspch(int c)\n{\n\tif (isalnum(c) != 0)\n\t\treturn c;\n\n\tif (c == '-' || c == '_' || c == '+' || c == '.')\n\t\treturn c;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tvalidates the job name\n * \tcheck_job_name = job name must be <= PBS_MAXJOBNAME \"printable\" characters with\n *\tfirst alphabetic, maybe.  
The POSIX Batch standard calls for only\n *\talphanumeric, but then conflicts with itself to default to the\n *\tscript base-name which may have non-alphanumeric characters and\n *\tthe first character not alphabetic.\n *\n *\tWe check for visible, printable characters and the first being\n *\talphabetic if coming from a -N option (chk_alpha = 1).\n *\n * @param[in]       name        - job name\n * @param[in]\t    chk_alpha   - flag to require an alphabetic first char\n *\n * @return int\n * @retval 0    validation of job name was successful.\n * @retval -1   illegal character in job name.\n * @retval -2\tjob name length is too long.\n */\nint\ncheck_job_name(char *name, int chk_alpha)\n{\n\tchar *p;\n\tif (!name)\n\t\treturn (-1);\n\n\tif (strlen(name) > (size_t) PBS_MAXJOBNAME)\n\t\treturn (-2);\n\telse if ((chk_alpha == 1) && (isalpha((int) *name) == 0))\n\t\treturn (-1);\n\n\tfor (p = name; *p; p++)\n\t\tif (isalnumspch((int) *p) == 0)\n\t\t\treturn (-1);\n\treturn (0);\n}\n"
  },
  {
    "path": "src/lib/Libcmds/cmds_common.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tcmds_common.c\n * @brief\n *\tFunctions shared by all pbs commands C files\n */\n\n#include <stdlib.h>\n#include <pbs_config.h> /* the master config generated by configure */\n#include \"attribute.h\"\n\n/**\n * @brief\n *\tAdd an entry to an attribute list. First, create the entry and set\n * \tthe fields. If the attribute list is empty, then just point it at the\n * \tnew entry. Otherwise, append the new entry to the list.\n *\n *  This function is a wrapper of set_attr function in libpbs. It exits when\n *  a non-zero error code is returned by set_attr.\n *\n * @param[in/out] attrib - pointer to attribute list\n * @param[in]     attrib_name - attribute name\n * @param[in]     attrib_value - attribute value\n *\n * @return\tVoid\n */\nvoid\nset_attr_error_exit(struct attrl **attrib, char *attrib_name, char *attrib_value)\n{\n\tif (set_attr(attrib, attrib_name, attrib_value))\n\t\texit(2);\n}\n\n/**\n * @brief\n *\twrapper function for set_attr_resc in libpbs. 
Exits if a non-zero error\n *  code is returned by set_attr_resc.\n *\n * @param[in/out] attrib - pointer to attribute list\n * @param[in]     attrib_name - attribute name\n * @param[in]     attrib_resc - resource name within the attribute\n * @param[in]     attrib_value - attribute value\n *\n * @return      Void\n */\nvoid\nset_attr_resc_error_exit(struct attrl **attrib, char *attrib_name, char *attrib_resc, char *attrib_value)\n{\n\tif (set_attr_resc(attrib, attrib_name, attrib_resc, attrib_value))\n\t\texit(2);\n}\n\n/*\n * Stub of the actual DIS_tpp_funcs for PBS client commands\n * that do not use TPP\n */\nvoid\nDIS_tpp_funcs()\n{\n}\n"
  },
  {
    "path": "src/lib/Libcmds/cnt2server.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tcnt2server\n *\tConnect to the server, and if there is an error, print a more\n * \tdescriptive message.\n *\n * @par\tSynopsis:\n *\tint cnt2server( char *server )\n *\n *\tserver\tThe name of the server to connect to. 
A NULL or null string\n *\t\tfor the default server.\n *\n * @par\tReturns:\n *\tThe connection returned by pbs_connect().\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <errno.h>\n#include \"cmds.h\"\n#include \"auth.h\"\n\n/**\n * @brief\n *\tMakes a connection to the server, returning the pbs_connect() result.\n *\n * @param[in]\tserver\t- hostname of the pbs server to connect to\n * @param[in]\textend  - extend data to send along with the connection.\n *\n * @return\tint\n * @retval\tconnection\tsuccess\n * @retval\t0\t\tfail\n */\nint\ncnt2server_extend(char *server, char *extend)\n{\n\tint connect;\n\n\tconnect = pbs_connect_extend(server, extend);\n\tif (connect <= 0) {\n\t\tif (pbs_errno > PBSE_) {\n\t\t\tswitch (pbs_errno) {\n\n\t\t\t\tcase PBSE_BADHOST:\n\t\t\t\t\tfprintf(stderr, \"Unknown Host.\\n\");\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase PBSE_NOCONNECTS:\n\t\t\t\t\tfprintf(stderr, \"Too many open connections.\\n\");\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase PBSE_NOSERVER:\n\t\t\t\t\tfprintf(stderr, \"No default server name.\\n\");\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase PBSE_SYSTEM:\n\t\t\t\t\tif (errno != 0)\n\t\t\t\t\t\tperror(NULL);\n\t\t\t\t\telse\n\t\t\t\t\t\tfprintf(stderr, \"System call failure.\\n\");\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase PBSE_PERM:\n\t\t\t\t\tfprintf(stderr, \"No Permission.\\n\");\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase PBSE_PROTOCOL:\n\t\t\t\t\tfprintf(stderr, \"Communication failure.\\n\");\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase PBSE_NOSUP:\n\t\t\t\t\tfprintf(stderr, \"No support for requested service.\\n\");\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t} else if (errno != 0) {\n\t\t\tperror(NULL);\n\t\t}\n\t}\n\n\treturn (connect);\n}\n\n/**\n * @brief\n *\tA wrapper function to the cnt2server_extend() call where there's\n *\tno 'extend' parameter passed.\n *\n * @param[in]\tserver\t- hostname of the pbs server to connect to\n *\n * @return      int\n * @retval      connection      success\n * 
@retval\t0\t\tfail\n */\nint\ncnt2server(char *server)\n{\n\treturn (cnt2server_extend(server, NULL));\n}\n"
  },
  {
    "path": "src/lib/Libcmds/cs_error.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tcs_error.c\n * @brief\n *\tThis function is meant to be called by the \"CS\" library code in the\n *\tcase where the CS library is being used in a command executable.\n *\n * @note\n *\t  A function by the same name but with a different definition\n *\tis also part of PBS' Liblog library.  We can do this because the PBS\n *\tcommands are not linked against the Liblog library.  That function\n *\twill be the one used by the CS library when the executable is a PBS\n *\tdaemon.\n */\n\n#include <stdio.h>\n\n/**\n * @brief\n *\tprints an error message when the cs library is\n *\tbeing used in a command executable.\n *\n * @param[in] ecode - error code\n * @param[in] caller - calling function name\n * @param[in] txtmsg - error message\n *\n * @return\tVoid\n *\n */\n\nvoid\ncs_logerr(int ecode, char *caller, char *txtmsg)\n{\n\tif (caller != NULL && txtmsg != NULL) {\n\n\t\tif (ecode != -1)\n\t\t\tfprintf(stderr, \"%s: %s (%d)\\n\", caller, txtmsg, ecode);\n\t\telse\n\t\t\tfprintf(stderr, \"%s: %s\\n\", caller, txtmsg);\n\t}\n}\n"
  },
  {
    "path": "src/lib/Libcmds/cvtdate.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tcvtdate.c\n * @brief\n * \tcvtdate - convert POSIX touch date/time to seconds since epoch time\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"cmds.h\"\n\n/**\n * @brief\n * \tcvtdate - convert POSIX touch date/time to seconds since epoch time\n *\n * @param[in]\tdatestr - date/time string in the form: [[[[CC]YY]MM]DD]hhmm[.SS]\n *\t\t\t  as defined by POSIX.\n *\n *\t\tCC = century, i.e. 19 or 20\n *\t\tYY = year, if CC is not provided and YY is < 69, then\n *\t\t     CC is assumed to be 20, else 19.\n *\t\tMM = Month, [1,12], if YY is not provided and MM is less than\n *\t\t     the current month, YY is next year, else it is the\n *\t\t     current year.\n *\t\tDD = Day of month, [1,31], if MM is not provided and DD is less\n *\t\t     than the current day, MM is next month, else it is the\n *\t\t     current month.\n *\t\thh = hour, [00, 23], if DD is not provided and hh is less than\n *\t\t     the current hour, DD is tomorrow, else it is today.\n *\t\tmm = minute, [00, 59]\n *\t\tSS = seconds, [00, 59]\n *\n * @return\ttime_t\n * @retval\tnumber of seconds since epoch (Coordinated Univ. 
Time)\n * @retval\t-1 if error.\n */\n\ntime_t\ncvtdate(char *datestr)\n{\n\tchar buf[3];\n\ttime_t clock;\n\tint i;\n\tchar *pc;\n\tstruct tm tm;\n\tint year = 0;\n\tint month = -1;\n\tint day = 0;\n#ifdef WIN32\n\tSYSTEMTIME win_ltm;\n#endif /* WIN32 */\n\tstruct tm ltm;\n\tstruct tm *ptm;\n\n\tif ((pc = strchr(datestr, (int) '.')) != 0) {\n\t\t*pc++ = '\\0';\n\t\tif ((strlen(pc) != 2) ||\n\t\t    (isdigit((int) *pc) == 0) ||\n\t\t    (isdigit((int) *(pc + 1)) == 0))\n\t\t\treturn (-1);\n\t\ttm.tm_sec = atoi(pc);\n\t\tif (tm.tm_sec > 59)\n\t\t\treturn (-1);\n\t} else\n\t\ttm.tm_sec = 0;\n\n\tfor (pc = datestr; *pc; ++pc)\n\t\tif (isdigit((int) *pc) == 0)\n\t\t\treturn (-1);\n\n\tbuf[2] = '\\0';\n\tclock = time(NULL);\n#ifdef WIN32\n\tGetLocalTime(&win_ltm);\n\tltm.tm_year = win_ltm.wYear - 1900; /* unix is counted from 1900 */\n\tltm.tm_mon = win_ltm.wMonth - 1;    /* unix starts from 0 */\n\tltm.tm_mday = win_ltm.wDay;\n\tltm.tm_hour = win_ltm.wHour;\n\tltm.tm_min = win_ltm.wMinute;\n\tltm.tm_sec = win_ltm.wSecond;\n\tltm.tm_isdst = -1;\n#else\n\tlocaltime_r(&clock, &ltm);\n#endif /* WIN32 */\n\tptm = &ltm;\n\ttm.tm_year = ptm->tm_year; /* default year to current */\n\ttm.tm_mon = ptm->tm_mon;   /* default month to current */\n\ttm.tm_mday = ptm->tm_mday; /* default day to current */\n\n\tswitch (strlen(datestr)) {\n\n\t\tcase 12: /* CCYYMMDDhhmm */\n\t\t\tbuf[0] = datestr[0];\n\t\t\tbuf[1] = datestr[1];\n\t\t\tyear = atoi(buf) * 100;\n\t\t\tdatestr += 2;\n\n\t\t\t/* no break, fall into next case */\n\n\t\tcase 10: /* YYMMDDhhmm */\n\t\t\tbuf[0] = datestr[0];\n\t\t\tbuf[1] = datestr[1];\n\t\t\ti = atoi(buf);\n\t\t\tif (year == 0)\n\t\t\t\tif (i > 68)\n\t\t\t\t\tyear = 1900 + i;\n\t\t\t\telse\n\t\t\t\t\tyear = 2000 + i;\n\n\t\t\telse\n\t\t\t\tyear += i;\n\t\t\ttm.tm_year = year - 1900;\n\t\t\tdatestr += 2;\n\n\t\t\t/* no break, fall into next case */\n\n\t\tcase 8: /* MMDDhhmm */\n\t\t\tbuf[0] = datestr[0];\n\t\t\tbuf[1] = datestr[1];\n\t\t\ti = 
atoi(buf);\n\t\t\tif (i < 1 || i > 12)\n\t\t\t\treturn (-1);\n\t\t\tif (year == 0)\n\t\t\t\tif (i <= ptm->tm_mon)\n\t\t\t\t\ttm.tm_year++;\n\t\t\tmonth = i - 1;\n\t\t\ttm.tm_mon = month;\n\t\t\tdatestr += 2;\n\n\t\t\t/* no break, fall into next case */\n\n\t\tcase 6: /* DDhhmm */\n\t\t\tbuf[0] = datestr[0];\n\t\t\tbuf[1] = datestr[1];\n\t\t\tday = atoi(buf);\n\t\t\tif (day < 1 || day > 31)\n\t\t\t\treturn (-1);\n\t\t\tif (month == -1)\n\t\t\t\tif (day < ptm->tm_mday)\n\t\t\t\t\ttm.tm_mon++;\n\t\t\ttm.tm_mday = day;\n\t\t\tdatestr += 2;\n\n\t\t\t/* no break, fall into next case */\n\n\t\tcase 4: /* hhmm */\n\t\t\tbuf[0] = datestr[0];\n\t\t\tbuf[1] = datestr[1];\n\t\t\ttm.tm_hour = atoi(buf);\n\t\t\tif (tm.tm_hour > 23)\n\t\t\t\treturn (-1);\n\n\t\t\ttm.tm_min = atoi(&datestr[2]); /* mm -  minute portion */\n\t\t\tif (tm.tm_min > 59)\n\t\t\t\treturn (-1);\n\t\t\tif (day == 0) /* day not specified */\n\t\t\t\tif ((tm.tm_hour < ptm->tm_hour) ||\n\t\t\t\t    ((tm.tm_hour == ptm->tm_hour) &&\n\t\t\t\t     (tm.tm_min <= ptm->tm_min)))\n\t\t\t\t\ttm.tm_mday++; /* time for tomorrow */\n\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\treturn (-1);\n\t}\n\n\ttm.tm_isdst = -1;\n\n\treturn (mktime(&tm));\n}\n\n/**\n * @brief\n *\tconvert_time - convert a string time_t into\n *\t\t       a short human readable string\n *\t\t       today     : time of resv (i.e. 
15:30)\n *\t\t       this year : day of month & time of resv (Mar 24 15:30)\n *\t\t       else      : day of month and year (Mar 24 2000)\n *\n * @param[in]  ptime - the string time_t\n *\n *\n * @return \ta pointer to a static string\n * @retval\tconverted time\t\t\tsuccess\n *\n */\nchar *\nconvert_time(char *ptime)\n{\n\tstatic char buf[64];\n\tstruct tm *ptm;\t   /* used to get a struct tm from localtime() */\n\tstruct tm now_tm;  /* current time */\n\tstruct tm then_tm; /* time to print */\n\ttime_t then;\n\ttime_t now;\n\n\ttime(&now);\n\tthen = atol(ptime);\n\n\tptm = localtime(&now);\n\tnow_tm = *ptm;\n\n\tptm = localtime(&then);\n\tthen_tm = *ptm;\n\n\tif (then_tm.tm_year == now_tm.tm_year) {\n\t\t/* time input is a time within this current year */\n\n\t\tif (then_tm.tm_yday == now_tm.tm_yday)\n\n\t\t\t/* time input is a time within the current day */\n\t\t\tstrftime(buf, 64, \"Today %H:%M\", &then_tm);\n\n\t\telse if ((then_tm.tm_yday >= now_tm.tm_yday - now_tm.tm_wday) &&\n\t\t\t (then_tm.tm_yday <= now_tm.tm_yday + 6 - now_tm.tm_wday))\n\n\t\t\t/* time input is a time within the current week */\n\t\t\tstrftime(buf, 64, \"%a %H:%M\", &then_tm);\n\t\telse\n\n\t\t\t/* time input is in the current year and outside the current week */\n\t\t\tstrftime(buf, 64, \"%a %b %d %H:%M\", &then_tm);\n\n\t} else {\n\n\t\t/* time input outside the current year */\n\t\tstrftime(buf, 64, \"%a %b %d %Y %H:%M\", &then_tm);\n\t}\n\n\treturn buf;\n}\n"
  },
  {
    "path": "src/lib/Libcmds/err_handling.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\terr_handling.c\n * @brief\n *\tThis file is meant for common error handling functions\n *      within commands\n */\n\n#include <pbs_config.h>\n\n#include <errno.h>\n#include <string.h>\n\n#include \"libutil.h\"\n#include \"libpbs.h\"\n\n/**\n * @brief\n *\tPrint the error message returned by the server, if supplied. Otherwise,\n * \tprint a default error message.\n *\n * @param[in] cmd - command name to prefix the message with\n * @param[in] connect - connection descriptor\n * @param[in] id - job id\n *\n * @return\tVoid\n *\n */\n\nvoid\nprt_job_err(char *cmd, int connect, char *id)\n{\n\tchar *errmsg;\n\tchar *histerrmsg = NULL;\n\n\terrmsg = pbs_geterrmsg(connect);\n\tif (errmsg) {\n\t\tif (pbs_geterrno() == PBSE_HISTJOBID) {\n\t\t\tpbs_asprintf(&histerrmsg, errmsg, id);\n\t\t\tif (histerrmsg) {\n\t\t\t\tfprintf(stderr, \"%s: %s\\n\", cmd, histerrmsg);\n\t\t\t\tfree(histerrmsg);\n\t\t\t} else {\n\t\t\t\tfprintf(stderr,\n\t\t\t\t\t\"%s: Server returned error %d for job %s\\n\",\n\t\t\t\t\tcmd, pbs_errno, id);\n\t\t\t}\n\t\t\treturn;\n\t\t}\n\t\tfprintf(stderr, \"%s: %s %s\\n\", cmd, errmsg, id);\n\t} else {\n\t\tfprintf(stderr, \"%s: Server returned error %d for job %s\\n\", cmd, pbs_errno, id);\n\t}\n}\n"
  },
  {
    "path": "src/lib/Libcmds/get_attr.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tget_attr.c\n * @brief\n *      Locate an attribute (attrl) by name (and resource).\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include \"cmds.h\"\n#include \"pbs_ifl.h\"\n\n/**\n * @brief\n *      Locate an attribute (attrl) by name (and resource).\n *\n * @param[in] pattrl    - Attribute list.\n * @param[in] name      - name to find in attribute list.\n * @param[in] resc      - resource to find in attribute list.\n *\n * @return\tpointer to string\n * @retval      value of the located name and resource from attribute list,\n * @retval \totherwise NULL.\n */\n\nchar *\nget_attr(struct attrl *pattrl, const char *name, const char *resc)\n{\n\twhile (pattrl) {\n\t\tif (strcmp(name, pattrl->name) == 0) {\n\t\t\tif (resc) {\n\t\t\t\tif (strcmp(resc, pattrl->resource) == 0) {\n\t\t\t\t\treturn (pattrl->value);\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\treturn (pattrl->value);\n\t\t\t}\n\t\t}\n\t\tpattrl = pattrl->next;\n\t}\n\treturn NULL;\n}\n\n/**\n * @brief\n *\tcheck_max_job_sequence_id - retrieve the max_job_sequence_id attribute value\n *\n *\t@param[in] server_attrs - Batch status\n *\n *\t@retval  1\tsuccess\n *\t@retval  0\terror/attribute is not set\n *\n */\nint\ncheck_max_job_sequence_id(struct batch_status *server_attrs)\n{\n\tchar *value;\n\tvalue = get_attr(server_attrs->attribs, 
ATTR_max_job_sequence_id, NULL);\n\tif (value == NULL) {\n\t\t/* server is not configured for max_job_sequence_id\n\t\t * or the attribute is unset */\n\t\treturn 0;\n\t} else {\n\t\t/* attribute is set */\n\t\tlong long seq_id = 0;\n\t\tseq_id = strtoll(value, NULL, 10);\n\t\tif (seq_id > PBS_DFLT_MAX_JOB_SEQUENCE_ID) {\n\t\t\treturn 1;\n\t\t}\n\t\treturn 0;\n\t}\n}\n"
  },
  {
    "path": "src/lib/Libcmds/get_dataservice_usr.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tget_dataservice_usr.c\n * @brief\n *\tRetrieves the database user. 
The database user-id is retrieved from\n *\tthe file PBS_HOME/server_priv/db_user.\n */\n#include <pbs_config.h> /* the master config generated by configure */\n#include \"cmds.h\"\n#include <sys/stat.h>\n#include <fcntl.h>\n#include <errno.h>\n\n/**\n * @brief\n *\tRetrieves the database user. The database user-id is retrieved from\n *\tthe file PBS_HOME/server_priv/db_user.\n *\tIf this file is not found, then the default username\n *\t\"pbsdata\" is returned as the default db user.\n *\n *      NOTE: pbs_get_dataservice_usr() was put into a separate file because\n *      the other database functions are only used by the server\n *\n * @param[out]  errmsg - Details of the error\n * @param[in]   len    - length of error message variable\n *\n * @return      username string\n * @retval\t NULL - Failed to retrieve user-id\n * @retval\t!NULL - Pointer to allocated memory with user-id string.\n *\t\t\tCaller should free this memory after usage.\n *\n */\nchar *\npbs_get_dataservice_usr(char *errmsg, int len)\n{\n\tchar usr_file[MAXPATHLEN + 1];\n\tint fd = 0;\n\tstruct stat st = {0};\n\tchar buf[MAXPATHLEN + 1];\n\n\tsnprintf(usr_file, MAXPATHLEN + 1, \"%s/server_priv/db_user\", pbs_conf.pbs_home_path);\n\tif ((fd = open(usr_file, O_RDONLY)) == -1) {\n\t\tif (access(usr_file, F_OK) == 0) {\n\t\t\tsnprintf(errmsg, len, \"%s: open failed, errno=%d\", usr_file, errno);\n\t\t\treturn NULL; /* file exists but open failed */\n\t\t} else {\n\t\t\treturn strdup(PBS_DATA_SERVICE_USER); /* return default */\n\t\t}\n\t} else {\n\t\tif (fstat(fd, &st) == -1) {\n\t\t\tclose(fd);\n\t\t\tsnprintf(errmsg, len, \"%s: stat failed, errno=%d\", usr_file, errno);\n\t\t\treturn NULL;\n\t\t}\n\t\tif (st.st_size >= sizeof(buf)) {\n\t\t\tclose(fd);\n\t\t\tsnprintf(errmsg, len, \"%s: file too large\", usr_file);\n\t\t\treturn NULL;\n\t\t}\n\n\t\tif (read(fd, buf, st.st_size) != st.st_size) {\n\t\t\tclose(fd);\n\t\t\tsnprintf(errmsg, len, \"%s: read failed, errno=%d\", usr_file, 
errno);\n\t\t\treturn NULL;\n\t\t}\n\t\tbuf[st.st_size] = 0;\n\t\tclose(fd);\n\n\t\treturn (strdup(buf));\n\t}\n}\n"
  },
  {
    "path": "src/lib/Libcmds/get_server.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tget_server.c\n * @brief\n * ------------------------------------\n * As specified in section 5 of the ERS:\n *\n *  5.1.2.  
Directing Requests to Correct Server\n *\n *  A  command  shall  perform  its  function  by  sending   the\n *  corresponding  request  for  service  to the a batch server.\n *  The choice of batch servers to which to send the request  is\n *  governed by the following ordered set of rules:\n *\n *  1. For those commands which require or accept a job identif-\n *     ier  operand, if the server is specified in the job iden-\n *     tifier operand as @server, then the batch  requests  will\n *     be sent to the server named by server.\n *\n *  2. For those commands which require or accept a job identif-\n *     ier  operand  and  the @server is not specified, then the\n *     command will attempt to determine the current location of\n *     the  job  by  sending  a  Locate Job batch request to the\n *     server which created the job.\n *\n *  3. If a server component of a destination  is  supplied  via\n *     the  -q  option,  such  as  on  qsub and qselect, but not\n *     qalter, then the server request is sent to that server.\n *\n *  4. The server request is sent to the  server  identified  as\n *     the default server, see section 2.6.3.\n *     [pbs_connect() implements this]\n *\n *  2.6.3.  Default Server\n *\n *  When a server is not specified to a client, the client  will\n *  send  batch requests to the server identified as the default\n *  server.  
A client identifies the default server by  (a)  the\n *  setting  of  the environment variable PBS_DEFAULT which con-\n *  tains a destination, or (b) the  destination  in  the  batch\n *  administrator established file {PBS_DIR}/default_destn.\n * ------------------------------------\n *\n * Takes a job_id_in string as input, calls parse_jobid to separate\n * the pieces, then applies the above rules in order\n * If things go OK, the function value is set to 0,\n * if errors, it is set to 1.\n *\n * @par Full legal syntax is:\n *  seq_number[.parent_server[:port]][@current_server[:port]]\n *\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/types.h>\n#include <netdb.h>\n#include <sys/param.h>\n#include \"cmds.h\"\n#include \"pbs_ifl.h\"\n#include \"net_connect.h\"\n\n/**\n * @brief\n *\tprocesses input jobid according to above mentioned rules\n *\n * @param[in] job_id_in - input job id\n * @param[out] job_id_out - processed job id\n * @param[out] server_out - server name\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t1\terror\n *\n */\nint\nget_server(char *job_id_in, char *job_id_out, char *server_out)\n{\n\tchar *seq_number = NULL;\n\tchar *parent_server = NULL;\n\tchar *current_server = NULL;\n\tchar host_server[PBS_MAXSERVERNAME + 1];\n\n\tif (!job_id_in || !job_id_out || !server_out)\n\t\treturn 1;\n\n\tif (pbs_loadconf(0) != 1)\n\t\treturn 1;\n\n\t/* parse the job_id_in into components */\n\n\tif (parse_jobid(job_id_in, &seq_number, &parent_server,\n\t\t\t&current_server)) {\n\t\tfree(seq_number);\n\t\tfree(parent_server);\n\t\tfree(current_server);\n\t\treturn 1;\n\t}\n\n\t/* Apply the above rules, in order, except for the locate job request.\n\t That request is only sent if the job is not found on the local server.\n\t */\n\n\tserver_out[0] = '\\0';\n\tif (notNULL(current_server)) /* @server found */\n\t\tstrcpy(server_out, current_server);\n\tfree(current_server);\n\n\tstrcpy(job_id_out, 
seq_number);\n\tfree(seq_number);\n\n\tif (notNULL(parent_server)) {\n\n\t\t/* If parent_server matches PBS_SERVER then use it */\n\t\tif (pbs_conf.pbs_server_name) {\n\t\t\tif (strcasecmp(parent_server, pbs_conf.pbs_server_name) == 0) {\n\t\t\t\tstrcat(job_id_out, \".\");\n\t\t\t\tstrcat(job_id_out, pbs_conf.pbs_server_name);\n\t\t\t\tfree(parent_server);\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t}\n\n\t\tif (get_fullhostname(parent_server, host_server,\n\t\t\t\t     PBS_MAXSERVERNAME) != 0) {\n\t\t\tfree(parent_server);\n\t\t\treturn 1;\n\t\t}\n\n\t\tstrcat(job_id_out, \".\");\n\n\t\tstrcat(job_id_out, parent_server);\n\t\tif (server_out[0] == '\\0')\n\t\t\tstrcpy(server_out, parent_server);\n\t\tfree(parent_server);\n\t\treturn 0;\n\t}\n\n\tfree(parent_server);\n\n\tif (pbs_conf.pbs_server_name) {\n\t\tstrcat(job_id_out, \".\");\n\t\tstrcat(job_id_out, pbs_conf.pbs_server_name);\n\t} else {\n\t\treturn 1;\n\t}\n\n\treturn 0;\n}\n"
  },
  {
    "path": "src/lib/Libcmds/isjobid.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h>\n\n#include <ctype.h>\n#include <string.h>\n#include \"pbs_ifl.h\"\n#include \"pbs_internal.h\"\n\n/**\n * @file\tisjobid.c\n */\n/**\n * @brief\n *\tvalidates whether the input string is jobid\n *\n * @param[in] string - jobid\n *\n * @return\tint\n * @retval\t1\tif jobid\n * @retval\t0\tnot jobid/error\n *\n */\n\nint\npbs_isjobid(char *string)\n{\n\tint i;\n\tint result;\n\n\ti = strspn(string, \" \"); /* locate first non-blank */\n\tif (isdigit(string[i]))\n\t\tresult = 1; /* job_id */\n\telse if (isalpha(string[i]))\n\t\tresult = 0; /* not a job_id */\n\telse\n\t\tresult = 0; /* who knows - probably a syntax error */\n\n\treturn (result);\n}\n"
  },
  {
    "path": "src/lib/Libcmds/locate_job.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tlocate_job.c\n * @brief\n *\tConnect to the server the job was submitted to, and issue a\n *  Locate Job command. 
The result should be the server that the job\n *  is currently at.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"cmds.h\"\n#include \"pbs_ifl.h\"\n\n/**\n * @brief\n *\treturns the server at which the job is currently located.\n *\n * @param[in] job_id - job id\n * @param[in] parent_server - server the job was submitted to\n * @param[out] located_server - server where the job currently resides\n *\n * @return\tint\n * @retval\tTRUE\tsuccess\n * @retval\tFALSE\tjob could not be located\n * @retval\t-1\tconnection error\n *\n */\n\nint\nlocate_job(char *job_id, char *parent_server, char *located_server)\n{\n\tint connect;\n\tchar jid_server[PBS_MAXCLTJOBID + 1];\n\tchar *location;\n\n\tif ((connect = pbs_connect(parent_server)) > 0) {\n\t\tstrcpy(jid_server, job_id);\n\t\tif (notNULL(parent_server)) {\n\t\t\tstrcat(jid_server, \"@\");\n\t\t\tstrcat(jid_server, parent_server);\n\t\t}\n\t\tlocation = pbs_locjob(connect, jid_server, NULL);\n\t\tif (location == NULL) {\n\t\t\tpbs_disconnect(connect);\n\t\t\treturn FALSE;\n\t\t}\n\t\tstrcpy(located_server, location);\n\t\tfree(location);\n\t\tpbs_disconnect(connect);\n\t\treturn TRUE;\n\t} else\n\t\treturn -1;\n}\n"
  },
  {
    "path": "src/lib/Libcmds/parse_at.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tparse_at.c\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"cmds.h\"\n#include \"pbs_ifl.h\"\n#include \"portability.h\"\n\n#ifdef WIN32 /* we're including the space character under windows */\n#define ISNAMECHAR(x) ((isprint(x)) && ((x) != '#') && ((x) != '@'))\n#else\n#define ISNAMECHAR(x) ((isgraph(x)) && ((x) != '#') && ((x) != '@'))\n#endif\n\nstruct hostlist {\n\tchar host[PBS_MAXHOSTNAME + 1];\n\tstruct hostlist *next;\n};\n\n/** @fn int parse_at_item(char *at_item, char *at_name, char *host_name)\n * @brief\tparse a single name[@host] item and return name and host\n *\n * @param[in]\tat_item\n * @param[out]\tat_name\n * @param[out]\thost_name\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t1\tparsing failure\n\n * @par MT-Safe:\tyes\n * @par Side Effects:\n *\tNone\n *\n * @par Note:\n *\tThis function requires the caller to provide output parameters with\n *\trequired memory allocated.  
Checks in this function are removed\n *\tfor speed.\n */\nint\nparse_at_item(char *at_item, char *at_name, char *host_name)\n{\n\tchar *c;\n\tint a_pos = 0;\n\tint h_pos = 0;\n\n\t/* Begin the parse */\n\tc = at_item;\n\twhile (isspace(*c))\n\t\tc++;\n\n\t/* Looking for something before the @ sign */\n\twhile (*c != '\\0') {\n\t\tif (ISNAMECHAR(*c)) {\n\t\t\tif (a_pos >= MAXPATHLEN)\n\t\t\t\treturn 1;\n\t\t\tat_name[a_pos++] = *c;\n\t\t} else\n\t\t\tbreak;\n\t\tc++;\n\t}\n\tif (a_pos == 0)\n\t\treturn 1;\n\n\t/* Looking for a server */\n\tif (*c == '@') {\n\t\tc++;\n\t\twhile (*c != '\\0') {\n\t\t\tif (ISNAMECHAR(*c)) {\n\t\t\t\tif (h_pos >= PBS_MAXSERVERNAME)\n\t\t\t\t\treturn 1;\n\t\t\t\thost_name[h_pos++] = *c;\n\t\t\t} else\n\t\t\t\tbreak;\n\t\t\tc++;\n\t\t}\n\t\tif (h_pos == 0)\n\t\t\treturn 1;\n\t}\n\n\tif (*c != '\\0')\n\t\treturn 1;\n\n\t/* set null chars at the end of the string */\n\tat_name[a_pos] = '\\0';\n\thost_name[h_pos] = '\\0';\n\n\treturn (0);\n}\n\n/** @fn int parse_at_list(char *list, int use_count, int abs_path)\n * @brief\tparse a comma-separated list of name[@host] items\n *\n * @param[in]\tlist\n * @param[in]\tuse_count\tif true, make sure no host is repeated\n *\t\t\t\tin the list, and host is defaulted only\n *\t\t\t\tonce\n * @param[in]\tabs_path\tif true, make sure the item appears to\n *\t\t\t\tbegin with an absolute path name\n *\n * @return\tint\n * @retval\t1\tparsing failure\n * @retval\t0\tsuccess\n\n * @par MT-Safe:\tno\n * @par Side Effects:\n *\treturns 1 on memory allocation failure\n */\nint\nparse_at_list(char *list, int use_count, int abs_path)\n{\n\tchar *b, *c, *s, *list_dup;\n\tint rc = 0;\n\tchar user[MAXPATHLEN + 1];\n\tchar host[PBS_MAXSERVERNAME + 1];\n\tstruct hostlist *ph, *nh, *hostlist = NULL;\n\n\tif ((list == NULL) || (*list == '\\0'))\n\t\treturn 1;\n\n\tfix_path(list, 1);\n\n\tif ((list_dup = strdup(list)) == NULL) {\n\t\tfprintf(stderr, \"Out of memory.\\n\");\n\t\treturn 1;\n\t}\n\n\tfor (c 
= list_dup; *c != '\\0'; rc = 0) {\n\t\trc = 1;\n\n\t\t/* Drop leading white space */\n\t\twhile (isspace(*c))\n\t\t\tc++;\n\n\t\t/* If requested, is this an absolute path */\n\t\tif (abs_path && !is_full_path(c))\n\t\t\tbreak;\n\n\t\t/* Find the next comma */\n\t\tfor (s = c; *c && *c != ','; c++)\n\t\t\t;\n\n\t\t/* Drop any trailing blanks */\n\t\tfor (b = c - 1; (b >= list_dup) && isspace(*b); b--)\n\t\t\t*b = '\\0';\n\n\t\t/* Make sure the list does not end with a comma */\n\t\tif (*c == ',') {\n\t\t\t*c++ = '\\0';\n\t\t\tif (*c == '\\0')\n\t\t\t\tbreak;\n\t\t}\n\n\t\t/* Parse the individual list item */\n\t\tif (parse_at_item(s, user, host))\n\t\t\tbreak;\n\n\t\t/* The user part must be given */\n\t\tif (*user == '\\0')\n\t\t\tbreak;\n\n\t\t/* If requested, make sure the host name is not repeated */\n\t\tif (use_count) {\n\t\t\tph = hostlist;\n\t\t\twhile (ph) {\n\t\t\t\tif (strcmp(ph->host, host) == 0)\n\t\t\t\t\tgoto duplicate;\n\t\t\t\tph = ph->next;\n\t\t\t}\n\t\t\tnh = (struct hostlist *) malloc(sizeof(struct hostlist));\n\t\t\tif (nh == NULL) {\n\t\t\t\tfprintf(stderr, \"Out of memory\\n\");\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t\tnh->next = hostlist;\n\t\t\tstrcpy(nh->host, host);\n\t\t\thostlist = nh;\n\t\t}\n\t}\nduplicate:\n\n\t/* Release memory for hostlist and argument list */\n\tph = hostlist;\n\twhile (ph) {\n\t\tnh = ph->next;\n\t\tfree(ph);\n\t\tph = nh;\n\t}\n\tfree(list_dup);\n\n\treturn rc;\n}\n"
  },
  {
    "path": "src/lib/Libcmds/parse_depend.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tparse_depend.c\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"pbs_ifl.h\"\n#include \"cmds.h\"\n\nstatic char *deptypes[] = {\n\t\"on\", /* \"on\" and \"synccount\" must be first two */\n\t\"synccount\",\n\t\"after\",\n\t\"afterok\",\n\t\"afternotok\",\n\t\"afterany\",\n\t\"before\",\n\t\"beforeok\",\n\t\"beforenotok\",\n\t\"beforeany\",\n\t\"syncwith\",\n\t\"runone\",\n\tNULL};\n\n/**\n * @brief\n *\tAppend to an allocated string which will be expanded as needed.\n *\n * @param[in/out]\tdest\tdestination location (malloc'ed)\n * @param[in]\t\tstr\tsource string\n * @param[in/out]\tsize\tlength of destination allocation\n *\n * @retval 0\tsuccess\n * @retval 1\tfailure\n */\nstatic int\nappend_string(char **dest, char *str, int *size)\n{\n\tsize_t used, add;\n\n\tif (dest == NULL || *dest == NULL || str == NULL ||\n\t    size == NULL || *size == 0)\n\t\treturn 1;\n\n\tused = strlen(*dest);\n\tadd = strlen(str);\n\tif (used + add + 1 > (size_t) *size) { /* +1 for the NUL strcat appends */\n\t\tchar *temp;\n\t\tint newsize = 2 * (used + add + 1);\n\n\t\ttemp = (char *) realloc(*dest, newsize);\n\t\tif (temp == NULL)\n\t\t\treturn 1;\n\t\t*dest = temp;\n\t\t*size = newsize;\n\t}\n\tstrcat(*dest, str);\n\treturn 0;\n}\n\n/**\n * @brief\n * \tParse a string of depend jobs.\n *\n * @param[in]\t\tdepend_list\tdepend jobs syntax: 
\"jobid[:jobid...]\"\n * @param[in/out]\trtn_list\texpanded jobids appended here\n * @param[in/out]\trtn_size\tsize of rtn_list\n *\n * @return      int\n * @retval      0       success\n * @retval      1       failure\n *\n */\nstatic int\nparse_depend_item(char *depend_list, char **rtn_list, int *rtn_size)\n{\n\tchar *at;\n\tint i = 0;\n\tint first = 1;\n\tchar *b1, *b2;\n\tchar *s = NULL;\n\tchar *c;\n\tchar full_job_id[PBS_MAXCLTJOBID + 1];\n\tchar server_out[PBS_MAXSERVERNAME + PBS_MAXPORTNUM + 2];\n\n\t/* Begin the parse */\n\tc = depend_list;\n\n\t/* Loop on strings between colons */\n\twhile ((c != NULL) && (*c != '\\0')) {\n\t\ts = c;\n\t\twhile (((*c != ':') || ((c != depend_list) && (*(c - 1) == '\\\\'))) && (*c != '\\0'))\n\t\t\tc++;\n\t\tif (s == c)\n\t\t\treturn 1;\n\n\t\tif (*c == ':') {\n\t\t\t*c++ = '\\0';\n\t\t}\n\n\t\tif (first) {\n\t\t\tfirst = 0;\n\t\t\tfor (i = 0; deptypes[i]; ++i) {\n\t\t\t\tif (strcmp(s, deptypes[i]) == 0)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\tif (deptypes[i] == NULL)\n\t\t\t\treturn 1;\n\t\t\tif (append_string(rtn_list, deptypes[i], rtn_size))\n\t\t\t\treturn 1;\n\n\t\t\t/* It's an error if there are no values after ':' */\n\t\t\tif (*c == '\\0')\n\t\t\t\treturn 1;\n\n\t\t} else {\n\n\t\t\tif (i < 2) { /* for \"on\" and \"synccount\", number */\n\t\t\t\tif (append_string(rtn_list, s, rtn_size))\n\t\t\t\t\treturn 1;\n\t\t\t} else { /* for others, job id */\n\t\t\t\tat = strchr(s, (int) '@');\n\t\t\t\tif (get_server(s, full_job_id, server_out) != 0)\n\t\t\t\t\treturn 1;\n\t\t\t\t/* disallow subjob or range of subjobs, [] ok */\n\t\t\t\tif ((b1 = strchr(full_job_id, (int) '[')) != NULL) {\n\t\t\t\t\tif ((b2 = strchr(full_job_id, (int) ']')) != NULL)\n\t\t\t\t\t\tif (b2 != b1 + 1) {\n\t\t\t\t\t\t\tfprintf(stderr,\n\t\t\t\t\t\t\t\t\"cannot have \"\n\t\t\t\t\t\t\t\t\"dependency on subjob \"\n\t\t\t\t\t\t\t\t\"or range\\n\");\n\t\t\t\t\t\t\treturn 1;\n\t\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (append_string(rtn_list, full_job_id, 
rtn_size))\n\t\t\t\t\treturn 1;\n\t\t\t\tif (at) {\n\t\t\t\t\tif (append_string(rtn_list, \"@\", rtn_size))\n\t\t\t\t\t\treturn 1;\n\t\t\t\t\tif (append_string(rtn_list, server_out,\n\t\t\t\t\t\t\t  rtn_size))\n\t\t\t\t\t\treturn 1;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif (*c) {\n\t\t\tif (append_string(rtn_list, \":\", rtn_size))\n\t\t\t\treturn 1;\n\t\t}\n\t}\n\tif (s == c)\n\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tParse dependency lists with\n * \tsyntax depend_list[,depend_list...]\n *\n * @param[in]\t\tlist\t\tdependency list\n * @param[in/out]\trtn_list\taddress of allocated string for parsed result\n * @param[in]\t\trtn_size\tsize of rtn_list buffer\n *\n * @return\tint\n * @retval \t0 \tsuccess\n * @retval \t1 \tfailure\n *\n * @par Side Effects:\n * May exit if malloc fails.\n */\nint\nparse_depend_list(char *list, char **rtn_list, int rtn_size)\n\n{\n\tchar *b, *c, *s, *lc;\n\tint comma = 0;\n\n\tif (list == NULL || rtn_list == NULL || *rtn_list == NULL || rtn_size == 0)\n\t\treturn 1;\n\n\tif (strlen(list) == 0)\n\t\treturn (1);\n\n\tif ((lc = (char *) malloc(strlen(list) + 1)) == NULL) {\n\t\tfprintf(stderr, \"Out of memory.\\n\");\n\t\treturn 1;\n\t}\n\tstrcpy(lc, list);\n\tc = lc;\n\t**rtn_list = '\\0';\n\n\twhile (*c != '\\0') {\n\t\t/* Drop leading white space */\n\t\twhile (isspace(*c))\n\t\t\tc++;\n\n\t\t/* Find the next comma */\n\t\ts = c;\n\t\twhile (*c != ',' && *c)\n\t\t\tc++;\n\n\t\t/* Drop any trailing blanks */\n\t\tcomma = (*c == ',');\n\t\t*c = '\\0';\n\t\tb = c - 1;\n\t\twhile (isspace((int) *b))\n\t\t\t*b-- = '\\0';\n\n\t\t/* Parse the individual list item */\n\n\t\tif (parse_depend_item(s, rtn_list, &rtn_size)) {\n\t\t\tfree(lc);\n\t\t\treturn 1;\n\t\t}\n\n\t\tif (comma) {\n\t\t\tc++;\n\t\t\tappend_string(rtn_list, \",\", &rtn_size);\n\t\t}\n\t}\n\tfree(lc);\n\n\tif (comma)\n\t\treturn 1;\n\n\treturn 0;\n}\n"
  },
  {
    "path": "src/lib/Libcmds/parse_destid.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tparse_destid\n * @brief\n * full syntax permitted;\n *\n * queue_name[@server_name[:port_number]]\n * @server_name[:port_number]\n *\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include \"cmds.h\"\n\n#define ISNAMECHAR(x) ((isgraph(x)) && ((x) != '#') && ((x) != '@'))\n\n/*\n * note the queue_name_out and server_name_out is now allocated on heap\n * so caller should free them after using\n */\n\n/**\n * @brief\n *\tparse destination id\n *\n * @param[in] destination_in - destination string\n * @param[out] queue_name_out - queue name\n * @param[out] server_name_out - server name\n *\n * @return      int\n * @retval      0       success\n * @retval      1       failure\n *\n * NOTE: the queue_name_out and server_name_out is now allocated on heap\n * so caller should free them after using\n *\n */\n\nint\nparse_destination_id(char *destination_in, char **queue_name_out, char **server_name_out)\n{\n\tchar *c;\n\t/* moved following static vars to stack */\n\tchar *queue_name = NULL;\n\tint q_pos = 0;\n\tchar *server_name = NULL;\n\tint c_pos = 0;\n\n\tqueue_name = calloc(PBS_MAXQUEUENAME + 1, 1);\n\tif (queue_name == NULL)\n\t\tgoto err;\n\n\tserver_name = calloc(MAXSERVERNAME, 1);\n\tif (server_name == NULL)\n\t\tgoto err;\n\n\t/* Begin the parse */\n\tc = destination_in;\n\twhile 
(isspace(*c))\n\t\tc++;\n\n\t/* Looking for a queue */\n\twhile (*c != '\\0') {\n\t\tif (ISNAMECHAR(*c)) {\n\t\t\tif (q_pos >= PBS_MAXQUEUENAME)\n\t\t\t\tgoto err;\n\t\t\tqueue_name[q_pos++] = *c;\n\t\t} else\n\t\t\tbreak;\n\t\tc++;\n\t}\n\n\t/* Looking for a server */\n\tif (*c == '@') {\n\t\tc++;\n\t\twhile (*c != '\\0') {\n\t\t\tif (ISNAMECHAR(*c)) {\n\t\t\t\tif (c_pos >= MAXSERVERNAME)\n\t\t\t\t\tgoto err;\n\t\t\t\tserver_name[c_pos++] = *c;\n\t\t\t} else\n\t\t\t\tbreak;\n\t\t\tc++;\n\t\t}\n\t\tif (c_pos == 0)\n\t\t\tgoto err;\n\t}\n\n\tif (*c != '\\0')\n\t\tgoto err;\n\n\t/* set char * pointers to static data, to arguments */\n\tif (queue_name_out != NULL)\n\t\t*queue_name_out = queue_name;\n\telse\n\t\tfree(queue_name);\n\n\tif (server_name_out != NULL)\n\t\t*server_name_out = server_name;\n\telse\n\t\tfree(server_name);\n\n\treturn 0;\n\nerr:\n\tif (queue_name)\n\t\tfree(queue_name);\n\tif (server_name)\n\t\tfree(server_name);\n\treturn 1;\n}\n"
  },
  {
    "path": "src/lib/Libcmds/parse_equal.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include \"cmds.h\"\n\n/**\n * @file\tparse_equal.c\n */\n/**\n * @brief\n * \tparse_equal_string - parse a string of the form:\n *\t\tname1 = value1[, value2 ...][, name2 = value3 [, value4 ...]]\n *\tinto <name1> <value1[, value2 ...>\n *\t     <name2> <value3 [, value4 ...>\n *\n *\tOn the first call,\n *\t\t*name will point to \"name1\"\n *\t\t*value will point to \"value1 ...\" up to but not\n *\t\t\tincluding the comma before \"name2\".\n *\tOn a second call, with start = NULL,\n *\t\t*name will point to \"name2\"\n *\t\t*value will point to \"value3 ...\"\n *\n * @param[in]\tstart\tthe start of the string to parse\n * @param[out]\tname    set to point to the name string\n * @param[out]\tvalue\tset to point to the value string\n *\n *      start is the start of the string to parse.  
If called again with\n *      start being a null pointer, it will resume parsing where it stopped\n *      on the prior call.\n\n * @return\tint\n * @retval\t1 \tif name and value are found\n * @retval\t0 \tif nothing (more) is parsed (null input)\n * @retval\t-1 \tif a syntax error was detected.\n *\n */\n\nint\nparse_equal_string(char *start, char **name, char **value)\n{\n\tstatic char *pc; /* where prior call left off */\n\tchar *backup;\n\tint quoting = 0;\n\n\tif (start != NULL)\n\t\tpc = start;\n\n\tif (*pc == '\\0') {\n\t\t*name = NULL;\n\t\treturn (0); /* already at end, return no strings */\n\t}\n\n\t/* strip leading spaces */\n\n\twhile (isspace((int) *pc) && *pc)\n\t\tpc++;\n\n\tif (*pc == '\\0') {\n\t\t*name = NULL; /* null name */\n\t\treturn (0);\n\t} else if ((*pc == '=') || (*pc == ','))\n\t\treturn (-1); /* no name, return error */\n\n\t*name = pc;\n\n\t/* have found start of name, look for end of it */\n\n\twhile (!isspace((int) *pc) && (*pc != '=') && *pc)\n\t\tpc++;\n\n\t/* now look for =, while stripping blanks between end of name and = */\n\n\twhile (isspace((int) *pc) && *pc)\n\t\t*pc++ = '\\0';\n\tif (*pc != '=')\n\t\treturn (-1); /* should have found a = as first non blank */\n\t*pc++ = '\\0';\n\n\t/* what follows is the value string, skip leading white space */\n\n\twhile (isspace((int) *pc) && *pc)\n\t\tpc++;\n\n\t/* is the value string to be quoted ? 
*/\n\n\tif ((*pc == '\"') || (*pc == '\\''))\n\t\tquoting = (int) *pc++;\n\t*value = pc;\n\n\t/*\n\t * now go to first equal sign, or if quoted, the first equal sign\n\t * after the close quote\n\t */\n\n\tif (quoting) {\n\t\twhile ((*pc != (char) quoting) && *pc) /* look for matching */\n\t\t\tpc++;\n\t\tif (*pc)\n\t\t\t*pc = ' '; /* change close quote to space */\n\t\telse\n\t\t\treturn (-1);\n\t}\n\twhile ((*pc != '=') && *pc)\n\t\tpc++;\n\n\tif (*pc == '\\0') {\n\t\twhile (isspace((int) *--pc))\n\t\t\t;\n\t\tif (*pc == ',') /* trailing comma is a no no */\n\t\t\treturn (-1);\n\t\tpc++;\n\t\treturn (1); /* no equal, just end of line, stop here */\n\t}\n\n\t/* back up to the first comma found prior to the equal sign */\n\n\twhile (*--pc != ',')\n\t\tif (pc <= *value) /* gone back too far, no comma, error */\n\t\t\treturn (-1);\n\tbackup = pc++;\n\t*backup = '\\0'; /* null the comma */\n\n\t/* strip off any trailing white space */\n\n\twhile (isspace((int) *--backup))\n\t\t*backup = '\\0';\n\treturn (1);\n}\n"
  },
  {
    "path": "src/lib/Libcmds/parse_jobid.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tparse_jobid.c\n * @brief\n * takes a job_id string as input, parses it to separate the\n * 'current_server[:port]' part, and returns this in the return\n * argument 'server'; if things go OK, the function value is\n * set to 0, if errors, it is set to 1.   The 'current_server[:port]'\n * part of 'job_id' is removed, if it was present.\n *\n * Full legal syntax is:\n *  seq_number[.parent_server[:port]][@current_server[:port]]\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include \"cmds.h\"\n\n#define ISNAMECHAR(x) ((isgraph(x)) && ((x) != '#') && ((x) != '@'))\n\n/* static vars removed to make this a thread_safe function */\n\n/**\n * @brief\n *\tparses the job id for current server[:port] part\n *\n * @param[in] job_id - job id\n * @param[out] arg_seq_number - sequence number\n * @param[out] arg_parent_server - parent server\n * @param[out] arg_current_server - current server\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t1\terror\n *\n */\nint\nparse_jobid(char *job_id, char **arg_seq_number, char **arg_parent_server, char **arg_current_server)\n{\n\tint is_resv = 0;\n\tchar *c;\n\tchar *seq_number = NULL;\n\tint s_pos = 0;\n\tchar *parent_server = NULL;\n\tint p_pos = 0;\n\tchar *current_server = NULL;\n\tint c_pos = 0;\n\tint ret = 0;\n\n\tseq_number = calloc(PBS_MAXCLTJOBID + 
1, 1);\n\tif (seq_number == NULL) {\n\t\tret = 1;\n\t\tgoto err;\n\t}\n\tparent_server = calloc(MAXSERVERNAME, 1);\n\tif (parent_server == NULL) {\n\t\tret = 1;\n\t\tgoto err;\n\t}\n\tcurrent_server = calloc(MAXSERVERNAME, 1);\n\tif (current_server == NULL) {\n\t\tret = 1;\n\t\tgoto err;\n\t}\n\n\t/* Begin the parse */\n\tc = job_id;\n\twhile (isspace(*c))\n\t\tc++;\n\n\t/* skip past initial char if reservation */\n\tif (*c == PBS_RESV_ID_CHAR || *c == PBS_STDNG_RESV_ID_CHAR || *c == PBS_MNTNC_RESV_ID_CHAR) {\n\t\tis_resv = 1;\n\t\tseq_number[s_pos++] = *c;\n\t\tc++;\n\t}\n\n\t/* Looking for a seq_number */\n\twhile (*c != '\\0') {\n\t\tif (isdigit(*c)) {\n\t\t\tif ((s_pos >= PBS_MAXSEQNUM && !is_resv) ||\n\t\t\t    (s_pos >= PBS_MAXSEQNUM + 1 && is_resv)) {\n\t\t\t\tret = 3;\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t\tseq_number[s_pos++] = *c;\n\t\t} else\n\t\t\tbreak;\n\t\tc++;\n\t}\n\tif (s_pos == 0) {\n\t\tret = 1;\n\t\tgoto err;\n\t}\n\n\t/* Is this an ArrayJob identifier or a Array subjob id? */\n\tif (*c == '[') {\n\t\tif (is_resv) {\n\t\t\tret = 1;\n\t\t\tgoto err; /* cannot be both Array... 
and reservation */\n\t\t}\n\n\t\tif (s_pos >= PBS_MAXCLTJOBID) {\n\t\t\tret = 3;\n\t\t\tgoto err;\n\t\t}\n\t\tseq_number[s_pos++] = *c++; /* copy over opening brace */\n\t\twhile (*c != ']') {\n\t\t\twhile (isdigit((int) *c)) {\n\t\t\t\tif (s_pos >= PBS_MAXCLTJOBID) {\n\t\t\t\t\tret = 3;\n\t\t\t\t\tgoto err;\n\t\t\t\t}\n\t\t\t\tseq_number[s_pos++] = *c++;\n\t\t\t}\n\t\t\tif (*c == '-') {\n\t\t\t\tif (s_pos >= PBS_MAXCLTJOBID) {\n\t\t\t\t\tret = 3;\n\t\t\t\t\tgoto err;\n\t\t\t\t}\n\t\t\t\tseq_number[s_pos++] = *c++;\n\t\t\t} else if (*c == ':') {\n\t\t\t\tif (s_pos >= PBS_MAXCLTJOBID) {\n\t\t\t\t\tret = 3;\n\t\t\t\t\tgoto err;\n\t\t\t\t}\n\t\t\t\tseq_number[s_pos++] = *c++;\n\t\t\t} else if (*c != ']') {\n\t\t\t\tret = 1;\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t}\n\t\tif (s_pos >= PBS_MAXCLTJOBID) {\n\t\t\tret = 3;\n\t\t\tgoto err;\n\t\t}\n\t\tseq_number[s_pos++] = *c++; /* copy in closing brace */\n\t}\n\n\t/* Looking for a parent_server */\n\tif (*c == '.') {\n\t\tc++;\n\t\twhile (*c != '\\0') {\n\t\t\tif (ISNAMECHAR(*c)) {\n\t\t\t\tif (p_pos >= MAXSERVERNAME) {\n\t\t\t\t\tret = 3;\n\t\t\t\t\tgoto err;\n\t\t\t\t}\n\t\t\t\tparent_server[p_pos++] = *c;\n\t\t\t} else\n\t\t\t\tbreak;\n\t\t\tc++;\n\t\t}\n\t\tif (p_pos == 0) {\n\t\t\tret = 1;\n\t\t\tgoto err;\n\t\t}\n\t}\n\n\t/* Looking for a current_server */\n\tif (*c == '@') {\n\t\tc++;\n\t\twhile (*c != '\\0') {\n\t\t\tif (ISNAMECHAR(*c)) {\n\t\t\t\tif (c_pos >= MAXSERVERNAME) {\n\t\t\t\t\tret = 3;\n\t\t\t\t\tgoto err;\n\t\t\t\t}\n\t\t\t\tcurrent_server[c_pos++] = *c;\n\t\t\t} else\n\t\t\t\tbreak;\n\t\t\tc++;\n\t\t}\n\t\tif (c_pos == 0) {\n\t\t\tret = 1;\n\t\t\tgoto err;\n\t\t}\n\t}\n\n\tif (*c != '\\0') {\n\t\tret = 2;\n\t\tgoto err;\n\t}\n\n\tif ((s_pos + p_pos + 2) > PBS_MAXCLTJOBID) {\n\t\tret = 3;\n\t\tgoto err;\n\t}\n\n\t/* set char * pointers to static data, to arguments */\n\tif (arg_seq_number != NULL)\n\t\t*arg_seq_number = seq_number;\n\telse\n\t\tfree(seq_number);\n\n\tif (arg_parent_server != 
NULL)\n\t\t*arg_parent_server = parent_server;\n\telse\n\t\tfree(parent_server);\n\n\tif (arg_current_server != NULL)\n\t\t*arg_current_server = current_server;\n\telse\n\t\tfree(current_server);\n\n\treturn 0;\n\nerr:\n\tif (seq_number)\n\t\tfree(seq_number);\n\tif (current_server)\n\t\tfree(current_server);\n\tif (parent_server)\n\t\tfree(parent_server);\n\treturn ret;\n}\n"
  },
  {
    "path": "src/lib/Libcmds/parse_stage.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"cmds.h\"\n\n/**\n * @file\tparse_stage.c\n */\n\n#define ISNAMECHAR(x) (((isprint(x)) || (isspace(x))) && ((x) != '@'))\n#define ISNAMECHAR2(x) ((isprint(x)) && (!isspace(x)) && ((x) != '@') && ((x) != ':'))\n\n/**\n * @brief\n *\tparses the staging file name\n *      syntax:\tlocal_file@hostname:remote_file\n *\t\ton Windows, if remote_file is a UNC path, then\n *\t\thostname is optional, so the syntax can be\n *\t\tlocal_file@remote_unc_file\n *      Note: The arguments local_name, host_name, remote_name are mandatory and\n *\tmust be allocated with the required memory by the caller.\n *\n * @param[in]       pair        - a staged file name pair\n * @param[in/out]   local_name  - local file name\n * @param[in/out]   host_name   - remote host\n * @param[in/out]   remote_name - remote file name\n *\n * @return int\n * @retval 0    parsing was successful\n * @retval 1\terror in parsing\n */\nint\nparse_stage_name(char *pair, char *local_name, char *host_name, char *remote_name)\n{\n\tchar *c = NULL;\n\tint l_pos = 0;\n\tint h_pos = 0;\n\tint r_pos = 0;\n\n\t/* Begin the parse */\n\tc = pair;\n\twhile (isspace(*c))\n\t\tc++;\n\n\t/* Looking for something before the @ sign */\n\twhile (*c != '\\0') {\n\t\tif (ISNAMECHAR(*c)) { /* allow whitespace and stop on '@' 
*/\n\t\t\tif (l_pos >= MAXPATHLEN)\n\t\t\t\treturn 1;\n\t\t\tlocal_name[l_pos++] = *c;\n\t\t} else\n\t\t\tbreak;\n\t\tc++;\n\t}\n\tif (l_pos == 0)\n\t\treturn 1;\n\n#ifdef WIN32\n\tif ((*c == '@') && (c + 1 != NULL) && (IS_UNCPATH(c + 1))) {\n\t\tc++;\n\t\t/*\n\t\t * remote_name is UNC path without host part\n\t\t * so skip parsing of host_name and parse\n\t\t * remote_name\n\t\t */\n\t\twhile (*c != '\\0') {\n\t\t\tif (ISNAMECHAR(*c)) { /* allow whitespace */\n\t\t\t\tif (r_pos >= MAXPATHLEN)\n\t\t\t\t\treturn 1;\n\t\t\t\tremote_name[r_pos++] = *c;\n\t\t\t} else\n\t\t\t\tbreak;\n\t\t\tc++;\n\t\t}\n\t}\n#endif\n\t/* Looking for something between the @ and the : */\n\tif (*c == '@') {\n\t\tc++;\n\t\twhile (*c != '\\0') {\n\t\t\tif (ISNAMECHAR2(*c)) { /* no whitespace allowed in host */\n\t\t\t\tif (h_pos >= PBS_MAXSERVERNAME)\n\t\t\t\t\treturn 1;\n\t\t\t\thost_name[h_pos++] = *c;\n\t\t\t} else\n\t\t\t\tbreak;\n\t\t\tc++;\n\t\t}\n\t\tif (h_pos == 0)\n\t\t\treturn 1;\n\t}\n\n#ifdef WIN32\n\t/*\n\t * h_pos may be 1 if non-UNC path is given\n\t * without host part which is not allowed\n\t * so return parsing error\n\t * example: -Wstagein=C:\\testdir@D:\\testdir1\n\t */\n\tif (h_pos == 1)\n\t\treturn 1;\n#endif\n\n\t/* Looking for something after the : */\n\tif (*c == ':') {\n\t\tc++;\n\t\twhile (*c != '\\0') {\n\t\t\tif (ISNAMECHAR(*c)) { /* allow whitespace */\n\t\t\t\tif (r_pos >= MAXPATHLEN)\n\t\t\t\t\treturn 1;\n\t\t\t\tremote_name[r_pos++] = *c;\n\t\t\t} else\n\t\t\t\tbreak;\n\t\t\tc++;\n\t\t}\n\t}\n\tif (r_pos == 0)\n\t\treturn 1;\n\n\tif (*c != '\\0')\n\t\treturn 1;\n\n\t/* set null chars at end of string */\n\tlocal_name[l_pos] = '\\0';\n\tremote_name[r_pos] = '\\0';\n\thost_name[h_pos] = '\\0';\n\n\treturn (0);\n}\n\n/**\n * @brief\n * \tparse_stage_list\n *\n * syntax:\n *\tlocal_file@hostname:remote_file [,...]\n *\n * @param[in]\tlist\tList of staged file name pairs.\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t1\terror\n *\n 
*/\n\nint\nparse_stage_list(char *list)\n{\n\tchar *b = NULL;\n\tchar *c = NULL;\n\tchar *s = NULL;\n\tchar *l = NULL;\n\tint comma = 0;\n\tchar local[MAXPATHLEN + 1] = {'\\0'};\n\tchar host[PBS_MAXSERVERNAME] = {'\\0'};\n\tchar remote[MAXPATHLEN + 1] = {'\\0'};\n\n\tif (strlen(list) == 0)\n\t\treturn (1);\n\n\tif ((l = (char *) malloc(strlen(list) + 1)) == NULL) {\n\t\tfprintf(stderr, \"Out of memory.\\n\");\n\t\treturn 1;\n\t}\n\tmemset(l, 0, strlen(list) + 1);\n\tstrcpy(l, list);\n\tc = l;\n\twhile (*c != '\\0') {\n\t\t/* Drop leading white space */\n\t\twhile (isspace((int) *c))\n\t\t\tc++;\n\n\t\t/* Find the next comma */\n\t\ts = c;\n\t\twhile (*c != '\\0') {\n\t\t\tif (*c == ',' && (s != c) && *(c - 1) != '\\\\')\n\t\t\t\tbreak;\n\t\t\tc++;\n\t\t}\n\n\t\t/* Drop any trailing blanks */\n\t\tcomma = (*c == ',');\n\t\t*c = '\\0';\n\t\tb = c - 1;\n\t\twhile (isspace((int) *b))\n\t\t\t*b-- = '\\0';\n\n\t\t/* Parse the individual list item */\n\t\tif (parse_stage_name(s, local, host, remote)) {\n\t\t\t(void) free(l);\n\t\t\treturn 1;\n\t\t}\n\n\t\t/* Make sure all parts of the item are present */\n\t\tif (strlen(local) == 0) {\n\t\t\t(void) free(l);\n\t\t\treturn 1;\n\t\t}\n#ifdef WIN32\n\t\tif ((strlen(host) == 0) && (strlen(remote) > 0) && (!IS_UNCPATH(remote)))\n#else\n\t\tif (strlen(host) == 0)\n#endif\n\t\t{\n\t\t\t(void) free(l);\n\t\t\treturn 1;\n\t\t}\n\t\tif (strlen(remote) == 0) {\n\t\t\t(void) free(l);\n\t\t\treturn 1;\n\t\t}\n\n\t\tif (comma) {\n\t\t\tc++;\n\t\t}\n\t}\n\tif (comma) {\n\t\t(void) free(l);\n\t\treturn 1;\n\t}\n\n\t(void) free(l);\n\n\treturn 0;\n}\n"
  },
  {
    "path": "src/lib/Libcmds/prepare_path.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tprepare_path.c\n * @brief\n *\tPrepare a full path name to give to the server.\n *\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <errno.h>\n#include <netdb.h>\n#include <sys/stat.h>\n#include \"cmds.h\"\n#include \"pbs_ifl.h\"\n#include \"net_connect.h\"\n\n/**\n * @brief\n *\tvalidate whether hname is the local host\n *\n * @param[in] hname - host name\n *\n * @return\tint\n * @retval\t1\thname is the local host\n * @retval\t0\totherwise\n *\n */\nint\nis_local_host(char *hname)\n{\n\tchar hname_full[PBS_MAXSERVERNAME + 1];\n\tchar cname_short[PBS_MAXSERVERNAME + 1];\n\tchar cname_full[PBS_MAXSERVERNAME + 1];\n\n\tif (gethostname(cname_short, PBS_MAXSERVERNAME) != 0)\n\t\treturn (0);\n\t/*\n\t * Compare with \"localhost\" and \"localhost.localdomain\".\n\t */\n#ifdef WIN32\n\tif (stricmp(hname, cname_short) == 0 ||\n\t    stricmp(hname, LOCALHOST_SHORTNAME) == 0 ||\n\t    stricmp(hname, LOCALHOST_FULLNAME) == 0)\n\t\treturn (1);\n#else\n\tif (strcmp(hname, cname_short) == 0 ||\n\t    strcmp(hname, LOCALHOST_SHORTNAME) == 0 ||\n\t    strcmp(hname, LOCALHOST_FULLNAME) == 0)\n\t\treturn (1);\n#endif\n\tif (get_fullhostname(cname_short, cname_full, PBS_MAXSERVERNAME) != 0)\n\t\treturn (0);\n\n\tif (get_fullhostname(hname, hname_full, PBS_MAXSERVERNAME) != 0)\n\t\treturn (0);\n\n\tif (strcmp(hname_full, 
cname_full) == 0)\n\t\treturn (1);\n\n\treturn (0);\n}\n\n/**\n * @brief\n *\tparses path and prepares complete path name\n *\n * @param[in]      path_in - the path name provided as input to be parsed\n * @param[out]     path_out - contains final parsed and prepared path, must\n *                            be at least MAXPATHLEN+1 bytes.\n *\n * @return int\n * @retval  0 - success in parsing\n * @retval  nonzero - error encountered in parsing\n */\nint\nprepare_path(char *path_in, char *path_out)\n{\n\tchar *c = NULL;\n\tint have_fqdn = 0;\n\t/* Initialization with {'\\0'} populates entire array */\n\tchar host_name[PBS_MAXSERVERNAME + 1] = {'\\0'}; /* short host name */\n\tint h_pos = 0;\n\tchar path_name[MAXPATHLEN + 1] = {'\\0'};\n\tsize_t path_len;\n\tint p_pos = 0;\n\tchar *host_given = NULL;\n\tstruct stat statbuf = {0};\n\tdev_t dev = 0;\n\tino_t ino = 0;\n\n\tif (!path_out)\n\t\treturn 1;\n\t*path_out = '\\0';\n\tif (!path_in)\n\t\treturn 1;\n\n\t/* Begin the parse */\n\tfor (c = path_in; *c; c++) {\n\t\tif (isspace(*c) == 0)\n\t\t\tbreak;\n\t}\n\tif (*c == '\\0')\n\t\treturn 1;\n\n#ifdef WIN32\n\t/* Check for drive letter in Windows */\n\tif (!(isalpha(*c) && (*(c + 1) == ':')))\n#endif\n\t{\n\t\t/* Looking for a hostname */\n\t\tif ((host_given = strchr(c, ':')) != NULL) {\n\t\t\t/* Capture the hostname portion */\n\t\t\tfor (h_pos = 0; (h_pos < sizeof(host_name)); h_pos++, c++) {\n\t\t\t\tif (isalnum(*c) || (*c == '.') || (*c == '-')\n#ifdef WIN32\n\t\t\t\t    /* Underscores are legal in Windows */\n\t\t\t\t    || (*c == '_')\n#endif\n\t\t\t\t) {\n\t\t\t\t\thost_name[h_pos] = *c;\n\t\t\t\t} else {\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (*c != ':') {\n\t\t\t\tif (*c == '/') {\n\t\t\t\t\t/* There's a colon in the path */\n\t\t\t\t\thost_given = NULL;\n\t\t\t\t\thost_name[0] = '\\0';\n\t\t\t\t\tfor (c = path_in; *c; c++) {\n\t\t\t\t\t\tif (isspace(*c) == 0)\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t} else\n\t\t\t\t\treturn 1;\n\t\t\t} else 
{\n\t\t\t\t/* Advance past the colon */\n\t\t\t\tc++;\n\t\t\t}\n\t\t}\n\t}\n\n\t/* Looking for a posix path */\n\tfor (p_pos = 0; p_pos < sizeof(path_name); p_pos++, c++) {\n\t\tif (!isprint(*c))\n\t\t\tbreak;\n\t\tpath_name[p_pos] = *c;\n\t}\n\t/* Should be at end of string */\n\tif (*c != '\\0')\n\t\treturn 1;\n\n\tpath_len = strlen(path_name);\n\tif (path_len == 0 && strlen(host_name) == 0)\n\t\treturn 1;\n\n\t/* append a slash at the end to indicate that it is a directory */\n\tif ((path_len > 0) &&\n\t    (path_name[path_len - 1] != '/') &&\n\t    (path_name[path_len - 1] != '\\\\') &&\n\t    (stat(path_name, &statbuf) == 0) &&\n\t    S_ISDIR(statbuf.st_mode)) {\n\t\tif ((path_len + 1) < sizeof(path_name)) {\n\t\t\tstrcat(path_name, \"/\");\n\t\t\tpath_len++;\n\t\t}\n\t}\n\n#ifdef WIN32\n\tif (IS_UNCPATH(path_name)) {\n\t\t/*\n\t\t * the given path is a UNC path,\n\t\t * so just skip the hostname,\n\t\t * as a UNC path does not require it\n\t\t */\n\t\thost_given = NULL;\n\t\thost_name[0] = '\\0';\n\t} else\n#endif\n\t{\n\t\t/* get full host name */\n\t\tif (host_name[0] == '\\0') {\n\t\t\tif (pbs_conf.pbs_output_host_name) {\n\t\t\t\t/* use the specified host for returning the file */\n\t\t\t\tsnprintf(host_name, sizeof(host_name), \"%s\", pbs_conf.pbs_output_host_name);\n\t\t\t\thave_fqdn = 1;\n\t\t\t} else {\n\t\t\t\tif (gethostname(host_name, sizeof(host_name)) != 0)\n\t\t\t\t\treturn 2;\n\t\t\t\thost_name[sizeof(host_name) - 1] = '\\0';\n\t\t\t}\n\t\t}\n\t\tif (have_fqdn == 0) {\n\t\t\tchar host_fqdn[PBS_MAXSERVERNAME + 1] = {'\\0'};\n\t\t\t/* need to fully qualify the host name */\n\t\t\tif (get_fullhostname(host_name, host_fqdn, PBS_MAXSERVERNAME) != 0)\n\t\t\t\treturn 2;\n\t\t\tstrncpy(path_out, host_fqdn, MAXPATHLEN); /* FQ host name */\n\t\t} else {\n\t\t\tstrncpy(path_out, host_name, MAXPATHLEN); /* \"localhost\" or pbs_output_host_name */\n\t\t}\n\t\tpath_out[MAXPATHLEN - 1] = '\\0';\n\n\t\t/* finish preparing complete host name */\n\t\tif (strlen(path_out) < 
MAXPATHLEN)\n\t\t\tstrcat(path_out, \":\");\n\t}\n\n#ifdef WIN32\n\tif (path_name[0] != '/' && path_name[0] != '\\\\' &&\n\t    host_given == NULL && strchr(path_name, ':') == NULL)\n#else\n\tif (path_name[0] != '/' && host_given == NULL)\n#endif\n\t{\n\t\tchar cwd[MAXPATHLEN + 1] = {'\\0'};\n\n\t\tc = getenv(\"PWD\"); /* PWD carries a name that will cause */\n\t\tif (c != NULL) {   /* the NFS to mount */\n\n\t\t\tif (stat(c, &statbuf) < 0) { /* can't stat PWD */\n\t\t\t\tc = NULL;\n\t\t\t} else {\n\t\t\t\tdev = statbuf.st_dev;\n\t\t\t\tino = statbuf.st_ino;\n\t\t\t\tif (stat(\".\", &statbuf) < 0) {\n\t\t\t\t\tperror(\"prepare_path: cannot stat current directory:\");\n\t\t\t\t\t*path_out = '\\0';\n\t\t\t\t\treturn (1);\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (dev == statbuf.st_dev && ino == statbuf.st_ino) {\n\t\t\t\tsnprintf(cwd, sizeof(cwd), \"%s\", c);\n\t\t\t} else {\n\t\t\t\tc = NULL;\n\t\t\t}\n\t\t}\n\t\tif (c == NULL) {\n\t\t\tc = getcwd(cwd, MAXPATHLEN);\n\t\t\tif (c == NULL) {\n\t\t\t\tperror(\"prepare_path: getcwd failed : \");\n\t\t\t\t*path_out = '\\0';\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t}\n#ifdef WIN32\n\t\t/* get UNC path (if available) if it is mapped drive */\n\t\tget_uncpath(cwd);\n\t\tif (IS_UNCPATH(cwd)) {\n\t\t\tstrcpy(path_out, cwd);\n\t\t\tif (cwd[strlen(cwd) - 1] != '\\\\')\n\t\t\t\tstrcat(path_out, \"\\\\\");\n\t\t} else\n#endif\n\t\t{\n\t\t\tstrncat(path_out, cwd, (MAXPATHLEN + 1) - strlen(path_out));\n\t\t\tif (strlen(path_out) < MAXPATHLEN)\n\t\t\t\tstrcat(path_out, \"/\");\n\t\t}\n\t}\n\n#ifdef WIN32\n\t/* get UNC path (if available) if it is mapped drive */\n\tget_uncpath(path_name);\n\tif (IS_UNCPATH(path_name))\n\t\tstrcpy(path_out, path_name);\n\telse {\n\t\t/*\n\t\t * check whether given <path_name> is relative path\n\t\t * without drive on localhost and <cwd> is not UNC path?\n\t\t * if yes then do not append drive into <path_out>\n\t\t * otherwise append drive into <path_out>\n\t\t */\n\t\tif (is_local_host(host_name) &&\n\t\t    
(strchr(path_name, ':') == NULL) &&\n\t\t    (path_out[strlen(path_out) - 1] != '/') &&\n\t\t    (!IS_UNCPATH(path_out))) {\n\n\t\t\tchar drivestr[3] = {'\\0'};\n\t\t\tchar drivestr_unc[MAXPATHLEN + 1] = {'\\0'};\n\n\t\t\tdrivestr[0] = _getdrive() + 'A' - 1;\n\t\t\tdrivestr[1] = ':';\n\t\t\tdrivestr[2] = '\\0';\n\t\t\t/*\n\t\t\t * check whether <drivestr> is mapped drive?\n\t\t\t * by calling get_uncpath()\n\t\t\t * if yes then remove <hostname> part from <path_out>\n\t\t\t *\n\t\t\t * This is the case when user submit job\n\t\t\t * from mapped drive with relative path without drive\n\t\t\t * in path ex. localhost:err or localhost:out\n\t\t\t */\n\t\t\tsnprintf(drivestr_unc, sizeof(drivestr_unc), \"%s\\\\\", drivestr);\n\t\t\tget_uncpath(drivestr_unc);\n\t\t\tif (IS_UNCPATH(drivestr_unc)) {\n\t\t\t\tstrncpy(path_out, drivestr_unc, MAXPATHLEN);\n\t\t\t} else {\n\t\t\t\tstrncat(path_out, drivestr, MAXPATHLEN - strlen(path_out));\n\t\t\t}\n\t\t}\n\t\tstrncat(path_out, path_name, MAXPATHLEN - strlen(path_out));\n\t}\n\tfix_path(path_out, 1);\n\tstrcpy(path_out, replace_space(path_out, \"\\\\ \"));\n\tpath_out[MAXPATHLEN - 1] = '\\0';\n#else\n\tstrncat(path_out, path_name, (MAXPATHLEN + 1) - strlen(path_out));\n#endif\n\n\treturn (0);\n}\n"
  },
  {
    "path": "src/lib/Libcmds/set_attr.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tset_attr.c\n * @brief\n *\tAdd an entry to an attribute list. First, create the entry and set\n *  the fields. If the attribute list is empty, then just point it at the\n *  new entry. 
Otherwise, append the new entry to the list.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include \"cmds.h\"\n#include \"attribute.h\"\n\n/* static pointer to be used by set_attr_resc */\nstatic struct attrl *new_attr;\n\n/**\n * @brief\n *\tAdd an entry to an attribute list. First, create the entry and set\n * \tthe fields. If the attribute list is empty, then just point it at the\n * \tnew entry. Otherwise, append the new entry to the list.\n *\n * @param[in/out] attrib - pointer to attribute list\n * @param[in]     attrib_name - attribute name\n * @param[in]     attrib_value - attribute value\n *\n * @return\terror code\n * @retval\t0\tsuccess\n * @retval\t1\terror\n *\n */\n\nint\nset_attr(struct attrl **attrib, const char *attrib_name, const char *attrib_value)\n{\n\tstruct attrl *attr, *ap;\n\n\tattr = new_attrl();\n\tif (attr == NULL) {\n\t\tfprintf(stderr, \"Out of memory\\n\");\n\t\treturn 1;\n\t}\n\tif (attrib_name == NULL)\n\t\tattr->name = NULL;\n\telse {\n\t\tattr->name = (char *) malloc(strlen(attrib_name) + 1);\n\t\tif (attr->name == NULL) {\n\t\t\tfprintf(stderr, \"Out of memory\\n\");\n\t\t\treturn 1;\n\t\t}\n\t\tstrcpy(attr->name, attrib_name);\n\t}\n\tif (attrib_value == NULL)\n\t\tattr->value = NULL;\n\telse {\n\t\tattr->value = (char *) malloc(strlen(attrib_value) + 1);\n\t\tif (attr->value == NULL) {\n\t\t\tfprintf(stderr, \"Out of memory\\n\");\n\t\t\treturn 1;\n\t\t}\n\t\tstrcpy(attr->value, attrib_value);\n\t}\n\tnew_attr = attr; /* set global var new_attr in case set_attr_resc wants to add a resource to it */\n\tif (*attrib == NULL) {\n\t\t*attrib = attr;\n\t} else {\n\t\tap = *attrib;\n\t\twhile (ap->next != NULL)\n\t\t\tap = ap->next;\n\t\tap->next = attr;\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\twrapper function for set_attr.\n *\n * @param[in/out] attrib - pointer to attribute list\n * @param[in]     attrib_name - attribute name\n * @param[in]     attrib_resc - resource name\n * @param[in]     attrib_value - attribute value\n *\n * 
@return\terror code\n * @retval\t0\tsuccess\n * @retval\t1\tfailure\n */\n\nint\nset_attr_resc(struct attrl **attrib, const char *attrib_name, const char *attrib_resc, const char *attrib_value)\n{\n\tif (set_attr(attrib, attrib_name, attrib_value))\n\t\treturn 1;\n\n\tif (attrib_resc != NULL) {\n\t\tnew_attr->resource = (char *) malloc(strlen(attrib_resc) + 1);\n\t\tif (new_attr->resource == NULL) {\n\t\t\tfprintf(stderr, \"Out of memory\\n\");\n\t\t\treturn 1;\n\t\t}\n\t\tstrcpy(new_attr->resource, attrib_resc);\n\t}\n\treturn 0;\n}\n"
  },
  {
    "path": "src/lib/Libcmds/set_resource.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tset_resources\n * @brief\n *\tAppend entries to the attribute list that are from the resource list.\n * \tIf the add flag is set, append the resource regardless. 
Otherwise, append\n * \tit only if it is not already on the list.\n *\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include \"cmds.h\"\n#include \"attribute.h\"\n\nstatic int allowresc = 1;\n\n/**\n * @brief\n *\t Append entries to the attribute list that are from the resource list.\n * \tIf the add flag is set, append the resource regardless. Otherwise, append\n * \tit only if it is not already on the list.\n *\n * @param[in] attrib - pointer to attribute list\n * @param[in] resources - resources\n * @param[in] add - flag to indicate appending resource\n * @param[in] erptr - resource\n *\n * @return\tint\n * @retval\t0\tresource set\n * @retval\t1\tresource not set / error\n *\n */\n\nint\nset_resources(struct attrl **attrib, const char *resources, int add, char **erptr)\n{\n\tchar *eq, *v, *e;\n\tchar *r = (char *) resources;\n\tchar *str;\n\tstruct attrl *attr, *ap, *priorap;\n\tint i, found, len;\n\tint haveresc = 0;\n\n\twhile (*r != '\\0') {\n\n\t\t/* Skip any leading whitespace */\n\t\twhile (isspace((int) *r))\n\t\t\tr++;\n\n\t\t/* Get the resource name */\n\t\teq = r;\n\t\twhile (*eq != '=' && *eq != ',' && *eq != '\\0')\n\t\t\teq++;\n\n\t\t/* Make sure there is a resource name */\n\t\tif (r == eq) {\n\t\t\t*erptr = r;\n\t\t\treturn (1);\n\t\t}\n\n\t\t/*\n\t\t * Count the number of non-space character that make up the\n\t\t * resource name.  
Count only up to the last character before the\n\t\t * separator ('\\0', ',' or '=').\n\t\t */\n\t\tfor (str = r, len = 0; str < eq && !isspace((int) *str); str++)\n\t\t\tlen++;\n\n\t\t/* If separated by an equal sign, get the value */\n\t\tif (*eq == '=') {\n\t\t\tchar *t;\n\n\t\t\tt = eq + 1;\n\t\t\twhile (isspace((int) *t))\n\t\t\t\t++t;\n\n\t\t\t/* Added a special case for preempt_targets as this resource is of\n\t\t\t * type array string and can have comma separated resources and queues as its value.\n\t\t\t */\n\t\t\tif ((r != NULL) && (strncmp(r, \"preempt_targets\", 15) == 0) &&\n\t\t\t    (t != NULL)) {\n\t\t\t\te = t;\n\t\t\t\twhile (*e != '\\0') {\n\t\t\t\t\te++;\n\t\t\t\t}\n\t\t\t\tv = malloc(e - t + 1);\n\t\t\t\tif (v == NULL)\n\t\t\t\t\treturn (-1);\n\t\t\t\tstrncpy(v, t, e - t);\n\t\t\t\tv[e - t] = '\\0';\n\t\t\t} else {\n\t\t\t\t/* Normal resource: if no error, v will be on the heap */\n\t\t\t\tif ((i = pbs_quote_parse(t, &v, &e, QMGR_NO_WHITE_IN_VALUE)) != 0) {\n\t\t\t\t\t*erptr = e;\n\t\t\t\t\treturn (i);\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\tv = NULL;\n\t\t}\n\n\t\t/* Allocate memory for the attrl structure */\n\t\tattr = new_attrl();\n\t\tif (attr == NULL) {\n\t\t\tfree(v);\n\t\t\tfprintf(stderr, \"Out of memory\\n\");\n\t\t\treturn 2;\n\t\t}\n\n\t\t/* Allocate memory for the attribute name and copy */\n\t\tstr = (char *) malloc(strlen(ATTR_l) + 1);\n\t\tif (str == NULL) {\n\t\t\tfree(v);\n\t\t\tfree_attrl(attr);\n\t\t\tfprintf(stderr, \"Out of memory\\n\");\n\t\t\treturn 2;\n\t\t}\n\t\tstrcpy(str, ATTR_l);\n\t\tattr->name = str;\n\n\t\t/* Allocate memory for the resource name and copy */\n\t\tstr = (char *) malloc(len + 1);\n\t\tif (str == NULL) {\n\t\t\tfree(v);\n\t\t\tfree_attrl(attr);\n\t\t\tfprintf(stderr, \"Out of memory\\n\");\n\t\t\treturn 2;\n\t\t}\n\t\tstrncpy(str, r, len);\n\t\tstr[len] = '\\0';\n\t\tattr->resource = str;\n\n\t\t/* insert value */\n\t\tif (v != NULL) {\n\t\t\tattr->value = v;\n\t\t} else {\n\t\t\tstr = (char *) 
malloc(1);\n\t\t\tif (str == NULL) {\n\t\t\t\tfree_attrl(attr);\n\t\t\t\tfprintf(stderr, \"Out of memory\\n\");\n\t\t\t\treturn 2;\n\t\t\t}\n\t\t\tstr[0] = '\\0';\n\t\t\tattr->value = str;\n\t\t}\n\n\t\t/* if we find \"resc\" in the command lines, disallow it in directives */\n\t\tif (strcasecmp(attr->resource, \"resc\") == 0) {\n\t\t\thaveresc = 1;\n\t\t\tif (add)\n\t\t\t\tallowresc = 0;\n\t\t}\n\n\t\t/* Put it on the attribute list */\n\t\t/* If the argument add is true, add to the list regardless.\n\t\t * Otherwise, add it to the list only if the resource name\n\t\t * is not already on the list.\n\t\t */\n\t\tfound = FALSE;\n\t\tattr->next = NULL;\n\t\tif (*attrib == NULL) {\n\t\t\t*attrib = attr;\n\t\t} else {\n\t\t\tap = *attrib;\n\t\t\twhile (ap != NULL) {\n\t\t\t\tpriorap = ap;\n\t\t\t\tif (strcmp(ap->name, ATTR_l) == 0 &&\n\t\t\t\t    strcmp(ap->resource, attr->resource) == 0)\n\t\t\t\t\tfound = TRUE;\n\t\t\t\tap = ap->next;\n\t\t\t}\n\t\t\t/* have to special case \"resc\" since it can appear multiple times */\n\t\t\tif (add || !found || (haveresc && allowresc))\n\t\t\t\tpriorap->next = attr;\n\t\t}\n\n\t\t/* Get ready for next resource/value pair */\n\t\tif (v != NULL)\n\t\t\tr = e;\n\t\telse\n\t\t\tr = eq;\n\t\tif (*r == ',') {\n\t\t\tr++;\n\t\t\tif (*r == '\\0') {\n\t\t\t\t*erptr = r;\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t}\n\t}\n\treturn (0);\n}\n"
  },
  {
    "path": "src/lib/Libdb/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nSUBDIRS = \\\n\tpgsql\n\nlib_LTLIBRARIES = libpbsdb.la\nlibpbsdb_la_LDFLAGS = -version-info 0:0:0\nlibpbsdb_la_LIBADD = pgsql/libpbsdbpg.la @database_lib@\nlibpbsdb_la_SOURCES =\n"
  },
  {
    "path": "src/lib/Libdb/pgsql/Makefile.am",
    "content": "#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nnoinst_LTLIBRARIES = libpbsdbpg.la\n\nlibpbsdbpg_la_CPPFLAGS = \\\n\t-I$(top_srcdir)/src/include \\\n\t@database_inc@\n\nlibpbsdbpg_la_LIBADD = \\\n\t@database_lib@\n\nlibpbsdbpg_la_SOURCES = \\\n\tdb_postgres.h \\\n\tdb_common.c \\\n\tdb_attr.c \\\n\tdb_job.c \\\n\tdb_resv.c \\\n\tdb_svr.c \\\n\tdb_que.c \\\n\tdb_node.c \\\n\tdb_sched.c\n\ndist_libexec_SCRIPTS = \\\n\tpbs_db_utility \\\n\tpbs_db_env \\\n\tpbs_db_schema.sql \\\n\tpbs_schema_upgrade\n\ndist_sbin_SCRIPTS = \\\n\tpbs_ds_systemd\n"
  },
  {
    "path": "src/lib/Libdb/pgsql/db_attr.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n *\n * @brief\n *\tImplementation of the attribute related functions for postgres\n *\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include \"pbs_db.h\"\n#include \"db_postgres.h\"\n#include \"assert.h\"\n\n/*\n * initially allocate some space to the buffer; anything more will be\n * allocated later as required. Just allocate 1000 chars, hoping that\n * most common SQL statements might fit within it without needing a resize\n */\n#define INIT_BUF_SIZE 1000\n\n#define TEXTOID 25\n#define DBARRAY_BUF_LEN 4096\n#define DBARRAY_BUF_INC 1024\n\nstruct str_data {\n\tint32_t len;\n\tchar str[0];\n};\n\n/* Structure of array header to determine array type */\nstruct pg_array {\n\tint32_t ndim; /* Number of dimensions */\n\tint32_t off;  /* offset for data, removed by libpq */\n\tOid elemtype; /* type of element in the array */\n\n\t/* First dimension */\n\tint32_t size;  /* Number of elements */\n\tint32_t index; /* Index of first element */\n\t\t       /* data follows this portion */\n};\n\n/**\n * @brief\n *\tCreate a svrattrl structure from the attr_name and values\n *\n * @param[in]\tattr_name - name of the attribute\n * @param[in]\tattr_resc - name of the resource, if any\n * @param[in]\tattr_value - value of the attribute\n * @param[in]\tattr_flags - Flags associated with the attribute\n *\n * @retval - Pointer to 
the newly created attribute\n * @retval - NULL - Failure\n * @retval - Not NULL - Success\n *\n */\nsvrattrl *\nmake_attr(char *attr_name, char *attr_resc, char *attr_value, int attr_flags)\n{\n\tint tsize;\n\tsvrattrl *psvrat = NULL;\n\tint nlen = 0, rlen = 0, vlen = 0;\n\tchar *p = NULL;\n\n\ttsize = sizeof(svrattrl);\n\tif (!attr_name)\n\t\treturn NULL;\n\n\tnlen = strlen(attr_name);\n\ttsize += nlen + 1;\n\n\tif (attr_resc) {\n\t\trlen = strlen(attr_resc);\n\t\ttsize += rlen + 1;\n\t}\n\n\tif (attr_value) {\n\t\tvlen = strlen(attr_value);\n\t\ttsize += vlen + 1;\n\t}\n\n\tif ((psvrat = (svrattrl *) malloc(tsize)) == 0)\n\t\treturn NULL;\n\n\tCLEAR_LINK(psvrat->al_link);\n\tpsvrat->al_sister = NULL;\n\tpsvrat->al_atopl.next = 0;\n\tpsvrat->al_tsize = tsize;\n\tpsvrat->al_name = (char *) psvrat + sizeof(svrattrl);\n\tpsvrat->al_resc = 0;\n\tpsvrat->al_value = 0;\n\tpsvrat->al_nameln = nlen;\n\tpsvrat->al_rescln = 0;\n\tpsvrat->al_valln = 0;\n\tpsvrat->al_refct = 1;\n\n\tstrcpy(psvrat->al_name, attr_name);\n\tp = psvrat->al_name + psvrat->al_nameln + 1;\n\n\tif (attr_resc && attr_resc[0] != '\\0') {\n\t\tpsvrat->al_resc = p;\n\t\tstrcpy(psvrat->al_resc, attr_resc);\n\t\tpsvrat->al_rescln = rlen;\n\t\tp = p + psvrat->al_rescln + 1;\n\t}\n\n\tpsvrat->al_value = p;\n\tif (attr_value && attr_value[0] != '\\0') {\n\t\tstrcpy(psvrat->al_value, attr_value);\n\t\tpsvrat->al_valln = vlen;\n\t}\n\n\tpsvrat->al_flags = attr_flags;\n\tpsvrat->al_op = SET;\n\n\treturn (psvrat);\n}\n/**\n * @brief\n *\tConverts a postgres hstore(which is in the form of array) to attribute linked list\n *\n * @param[in]\traw_array - Array string which is in the form of postgres hstore\n * @param[out]  attr_list - List of pbs_db_attr_list_t objects\n *\n * @return      Error code\n * @retval\t-1 - On Error\n * @retval\t 0 - On Success\n * @retval\t>1 - Number of attributes\n *\n */\nint\ndbarray_to_attrlist(char *raw_array, pbs_db_attr_list_t *attr_list)\n{\n\tint i;\n\tint j;\n\tint rows;\n\tint 
flags;\n\tchar *endp;\n\tchar *attr_name;\n\tchar *attr_value;\n\tchar *attr_flags;\n\tchar *attr_resc;\n\tsvrattrl *pal;\n\tstruct pg_array *array = (struct pg_array *) raw_array;\n\tstruct str_data *val = (struct str_data *) (raw_array + sizeof(struct pg_array));\n\n\tCLEAR_HEAD(attr_list->attrs);\n\tattr_list->attr_count = 0;\n\n\tif (ntohl(array->ndim) == 0)\n\t\treturn 0;\n\n\tif (ntohl(array->ndim) > 1 || ntohl(array->elemtype) != TEXTOID)\n\t\treturn -1;\n\n\trows = ntohl(array->size);\n\n\tfor (i = 0, j = 0; j < rows; i++, j += 2) {\n\n\t\tattr_resc = NULL;\n\t\tattr_value = NULL;\n\n\t\tattr_name = val->str;\n\t\tval = (struct str_data *) ((char *) val->str + ntohl(val->len));\n\n\t\tattr_flags = val->str;\n\t\tval = (struct str_data *) ((char *) val->str + ntohl(val->len));\n\n\t\tif ((attr_resc = strchr(attr_name, '.'))) {\n\t\t\t*attr_resc = '\\0';\n\t\t\tattr_resc++;\n\t\t}\n\n\t\tif ((attr_value = strchr(attr_flags, '.'))) {\n\t\t\t*attr_value = '\\0';\n\t\t\tattr_value++;\n\t\t}\n\n\t\tflags = strtol(attr_flags, &endp, 10);\n\t\tif (*endp != '\\0')\n\t\t\treturn -1;\n\n\t\tif (!(pal = make_attr(attr_name, attr_resc, attr_value, flags)))\n\t\t\treturn -1;\n\n\t\tappend_link(&(attr_list->attrs), &pal->al_link, pal);\n\t}\n\tattr_list->attr_count = i;\n\treturn 0;\n}\n\n/**\n * @brief\n *\tConverts an PBS link list of attributes to DB hstore(array) format\n *\n * @param[out]  raw_array - Array string which is in the form of postgres hstore\n * @param[in]\tattr_list - List of pbs_db_attr_list_t objects\n * @param[in]\tkeys_only - if true, convert only the keys, not values also\n *\n * @return      Error code\n * @retval\t-1 - On Error\n * @retval\t length of array - On Success\n *\n */\nint\nattrlist_to_dbarray_ex(char **raw_array, pbs_db_attr_list_t *attr_list, int keys_only)\n{\n\t/* use static variables to improve performance by not allocating memory for each object save */\n\tstatic struct pg_array *array = NULL, *tmp;\n\tstatic int len = 
sizeof(struct pg_array) + DBARRAY_BUF_LEN;\n\tstruct str_data *val = NULL;\n\tsvrattrl *pal;\n\tchar *p;\n\tint spc_avl, spc_req, len_to_val;\n\t/* (len_field * 2) + PBS_MAXATTRNAME + PBS_MAXATTRRESC + max 3 digits flags +  2 dots + 1 null terminator */\n\tstatic int fixed_part_req = (sizeof(int32_t) * 2) + PBS_MAXATTRNAME + PBS_MAXATTRRESC + 3 + 2 + 1;\n\n\tif (!array) {\n\t\tarray = malloc(len);\n\t\tif (!array)\n\t\t\treturn -1;\n\t}\n\n\tarray->ndim = htonl(1);\n\tarray->off = 0;\n\tarray->elemtype = htonl(TEXTOID);\n\tif (keys_only)\n\t\tarray->size = htonl(attr_list->attr_count);\n\telse\n\t\tarray->size = htonl(attr_list->attr_count * 2);\n\tarray->index = htonl(1);\n\n\t/* point to data area */\n\tval = (struct str_data *) ((char *) array + sizeof(struct pg_array));\n\n\tfor (pal = (svrattrl *) GET_NEXT(attr_list->attrs); pal != NULL; pal = (svrattrl *) GET_NEXT(pal->al_link)) {\n\t\tlen_to_val = (char *) val - (char *) array;\n\t\tspc_avl = len - len_to_val;\n\t\tspc_req = fixed_part_req + (pal->al_atopl.value ? strlen(pal->al_atopl.value) : 0); /* value can have arbitrary length */\n\t\tif (spc_avl <= spc_req) {\n\t\t\tlen += (spc_req > DBARRAY_BUF_LEN) ? 
spc_req : DBARRAY_BUF_LEN;\n\t\t\ttmp = realloc(array, len);\n\t\t\tif (!tmp)\n\t\t\t\treturn -1;\n\n\t\t\tval = (struct str_data *) ((char *) tmp + len_to_val); /* move val since array moved */\n\t\t\tarray = tmp;\n\t\t}\n\t\tp = pbs_strcpy(val->str, pal->al_atopl.name);\n\t\tif (pal->al_atopl.resource && pal->al_atopl.resource[0] != '\\0') {\n\t\t\t*p++ = '.';\n\t\t\tp = pbs_strcpy(p, pal->al_atopl.resource);\n\t\t}\n\t\tval->len = htonl(p - val->str);\n\t\tval = (struct str_data *) p; /* p is already pointing to the end */\n\n\t\tif (keys_only == 0) {\n\t\t\tp = pbs_strcpy(val->str, uLTostr(pal->al_flags, 10)); /* can't fail; uLTostr has a buffer sized for a long, and we pass a very small value */\n\t\t\tif (pal->al_atopl.value && pal->al_atopl.value[0] != '\\0') {\n\t\t\t\t*p++ = '.';\n\t\t\t\tp = pbs_strcpy(p, pal->al_atopl.value);\n\t\t\t}\n\t\t\tval->len = htonl(p - val->str);\n\t\t\tval = (struct str_data *) p; /* p is already pointing to the end */\n\t\t}\n\t}\n\t*raw_array = (char *) array;\n\n\treturn ((char *) val - (char *) array);\n}\n\n/**\n * @brief\n *\tConverts a PBS linked list of attributes to DB hstore(array) format\n *\n * @param[out]  raw_array - Array string which is in the form of postgres hstore\n * @param[in]\tattr_list - List of pbs_db_attr_list_t objects\n *\n * @return      Error code\n * @retval\t-1 - On Error\n * @retval\t length of array - On Success\n *\n */\nint\nattrlist_to_dbarray(char **raw_array, pbs_db_attr_list_t *attr_list)\n{\n\treturn attrlist_to_dbarray_ex(raw_array, attr_list, 0);\n}\n"
  },
  {
    "path": "src/lib/Libdb/pgsql/db_common.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n *\n * @brief\n *\tThis file contains Postgres specific implementation of functions\n *\tto access the PBS postgres database.\n *\tThis is postgres specific data store implementation, and should not be\n *\tused directly by the rest of the PBS code.\n *\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include \"pbs_db.h\"\n#include \"db_postgres.h\"\n#include <sys/wait.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <unistd.h>\n#include <fcntl.h>\n#include <errno.h>\n#include <arpa/inet.h>\n#include \"ticket.h\"\n#include \"log.h\"\n#include \"server_limits.h\"\n\n#define IPV4_STR_LEN 15\n\nchar *errmsg_cache = NULL;\npg_conn_data_t *conn_data = NULL;\npg_conn_trx_t *conn_trx = NULL;\nstatic char pg_ctl[MAXPATHLEN + 1] = \"\";\nstatic char *pg_user = NULL;\n\nstatic int is_conn_error(void *conn, int *failcode);\nstatic char *get_dataservice_password(char *user, char *errmsg, int len);\nstatic char *db_escape_str(void *conn, char *str);\nstatic char *get_db_connect_string(char *host, int timeout, int *err_code, char *errmsg, int len);\nstatic int db_prepare_sqls(void *conn);\nstatic int db_cursor_next(void *conn, void *state, pbs_db_obj_info_t *obj);\n\nextern char *pbs_get_dataservice_usr(char *, int);\nextern int pbs_decrypt_pwd(char *, int, size_t, char **, const unsigned char *, const 
unsigned char *);\nextern unsigned char pbs_aes_key[][16];\nextern unsigned char pbs_aes_iv[][16];\n\n// clang-format off\n/**\n * An array of structures(of function pointers) for each of the database object\n */\npg_db_fn_t db_fn_arr[PBS_DB_NUM_TYPES] = {\n\t{\t/* PBS_DB_SVR */\n\t\tpbs_db_save_svr,\n\t\tNULL,\n\t\tpbs_db_load_svr,\n\t\tNULL,\n\t\tNULL,\n\t\tpbs_db_del_attr_svr\n\t},\n\t{\t/* PBS_DB_SCHED */\n\t\tpbs_db_save_sched,\n\t\tpbs_db_delete_sched,\n\t\tpbs_db_load_sched,\n\t\tpbs_db_find_sched,\n\t\tpbs_db_next_sched,\n\t\tpbs_db_del_attr_sched\n\t},\n\t{\t/* PBS_DB_QUE */\n\t\tpbs_db_save_que,\n\t\tpbs_db_delete_que,\n\t\tpbs_db_load_que,\n\t\tpbs_db_find_que,\n\t\tpbs_db_next_que,\n\t\tpbs_db_del_attr_que\n\t},\n\t{\t/* PBS_DB_NODE */\n\t\tpbs_db_save_node,\n\t\tpbs_db_delete_node,\n\t\tpbs_db_load_node,\n\t\tpbs_db_find_node,\n\t\tpbs_db_next_node,\n\t\tpbs_db_del_attr_node\n\t},\n\t{\t/* PBS_DB_MOMINFO_TIME */\n\t\tpbs_db_save_mominfo_tm,\n\t\tNULL,\n\t\tpbs_db_load_mominfo_tm,\n\t\tNULL,\n\t\tNULL,\n\t\tNULL\n\t},\n\t{\t/* PBS_DB_JOB */\n\t\tpbs_db_save_job,\n\t\tpbs_db_delete_job,\n\t\tpbs_db_load_job,\n\t\tpbs_db_find_job,\n\t\tpbs_db_next_job,\n\t\tpbs_db_del_attr_job\n\t},\n\t{\t/* PBS_DB_JOBSCR */\n\t\tpbs_db_save_jobscr,\n\t\tNULL,\n\t\tpbs_db_load_jobscr,\n\t\tNULL,\n\t\tNULL,\n\t\tNULL\n\t},\n\t{\t/* PBS_DB_RESV */\n\t\tpbs_db_save_resv,\n\t\tpbs_db_delete_resv,\n\t\tpbs_db_load_resv,\n\t\tpbs_db_find_resv,\n\t\tpbs_db_next_resv,\n\t\tpbs_db_del_attr_resv\n\t}\n};\n\n// clang-format on\n\n/**\n * @brief\n *\tInitialize a query state variable, before being used in a cursor\n *\n * @param[in]\tconn - Database connection handle\n * @param[in]\tquery_cb - Object handler query back function\n *\n * @return\tvoid *\n * @retval\tNULL - Failure to allocate memory\n * @retval\t!NULL - Success - returns the new state variable\n *\n */\nstatic void *\ndb_initialize_state(void *conn, query_cb_t query_cb)\n{\n\tdb_query_state_t *state = 
malloc(sizeof(db_query_state_t));\n\tif (!state)\n\t\treturn NULL;\n\tstate->count = -1;\n\tstate->res = NULL;\n\tstate->row = -1;\n\tstate->query_cb = query_cb;\n\treturn state;\n}\n\n/**\n * @brief\n *\tDestroy a query state variable.\n *\tClears the database resultset and frees the memory allocated to\n *\tthe state variable\n *\n * @param[in]\tst - Pointer to the state variable\n *\n * @return void\n */\nstatic void\ndb_destroy_state(void *st)\n{\n\tdb_query_state_t *state = st;\n\tif (state) {\n\t\tif (state->res)\n\t\t\tPQclear(state->res);\n\t\tfree(state);\n\t}\n}\n\n/**\n * @brief\n *\tSearch the database for existing objects and load the server structures.\n *\n * @param[in]\tconn - Connected database handle\n * @param[in]\tpbs_db_obj_info_t - The pointer to the wrapper object which\n *\t\tdescribes the PBS object (job/resv/node etc) that is wrapped\n *\t\tinside it.\n * @param[in/out]\tpbs_db_query_options_t - Pointer to the options object that can\n *\t\tcontain the flags or timestamp which will affect the query.\n * @param[in]\tquery_cb - callback function which will process the result from the database\n * \t\tand update the server structures.\n *\n * @return\tint\n * @retval\t0\t- Success but no rows found\n * @retval\t-1\t- Failure\n * @retval\t>0\t- Success and number of rows found\n *\n */\nint\npbs_db_search(void *conn, pbs_db_obj_info_t *obj, pbs_db_query_options_t *opts, query_cb_t query_cb)\n{\n\tvoid *st;\n\tint ret;\n\tint totcount;\n\tint refreshed;\n\tint rc;\n\n\tst = db_initialize_state(conn, query_cb);\n\tif (!st)\n\t\treturn -1;\n\n\tret = db_fn_arr[obj->pbs_db_obj_type].pbs_db_find_obj(conn, st, obj, opts);\n\tif (ret == -1) {\n\t\t/* error in executing the sql */\n\t\tdb_destroy_state(st);\n\t\treturn -1;\n\t}\n\ttotcount = 0;\n\twhile ((rc = db_cursor_next(conn, st, obj)) == 0) {\n\t\tquery_cb(obj, &refreshed);\n\t\tif (refreshed)\n\t\t\ttotcount++;\n\t}\n\n\tdb_destroy_state(st);\n\treturn totcount;\n}\n\n/**\n * @brief\n *\tGet the next row from the cursor. It is also used to get the first row\n *\tfrom the cursor.\n *\n * @param[in]\tconn - Connected database handle\n * @param[in]\tstate - The cursor state handle.\n * @param[out]\tpbs_db_obj_info_t - The pointer to the wrapper object which\n *\t\tdescribes the PBS object (job/resv/node etc) that is wrapped\n *\t\tinside it. The row data is loaded into this parameter.\n *\n * @return\tError code\n * @retval\t-1  - Failure\n * @retval\t0  - success\n * @retval\t1  - Success but no more rows\n *\n */\nstatic int\ndb_cursor_next(void *conn, void *st, pbs_db_obj_info_t *obj)\n{\n\tdb_query_state_t *state = (db_query_state_t *) st;\n\tint ret;\n\n\tif (state->row < state->count) {\n\t\tret = db_fn_arr[obj->pbs_db_obj_type].pbs_db_next_obj(conn, st, obj);\n\t\tstate->row++;\n\t\treturn ret;\n\t}\n\treturn 1; /* no more rows */\n}\n\n/**\n * @brief\n *\tDelete an existing object from the database\n *\n * @param[in]\tconn - Connected database handle\n * @param[in]\tpbs_db_obj_info_t - Wrapper object that describes the object\n *\t\t(and data) to delete\n *\n * @return\tint\n * @retval\t-1  - Failure\n * @retval\t0   - success\n * @retval\t1   -  Success but no rows deleted\n *\n */\nint\npbs_db_delete_obj(void *conn, pbs_db_obj_info_t *obj)\n{\n\treturn (db_fn_arr[obj->pbs_db_obj_type].pbs_db_delete_obj(conn, obj));\n}\n\n/**\n * @brief\n *\tLoad a single existing object from the database\n *\n * @param[in]\tconn - Connected database handle\n * @param[in/out]pbs_db_obj_info_t - Wrapper object that describes the object\n *\t\t(and data) to load. 
This parameter used to return the data about\n *\t\tthe object loaded\n *\n * @return      Error code\n * @retval       0  - success\n * @retval\t-1  - Failure\n * @retval\t 1 -  Success but no rows loaded\n *\n */\nint\npbs_db_load_obj(void *conn, pbs_db_obj_info_t *obj)\n{\n\treturn (db_fn_arr[obj->pbs_db_obj_type].pbs_db_load_obj(conn, obj));\n}\n\n/**\n * @brief\n *\tInitializes all the sqls before they can be used\n *\n * @param[in]   conn - Connected database handle\n *\n * @return      Error code\n * @retval       0  - success\n * @retval\t-1  - Failure\n *\n *\n */\nstatic int\ndb_prepare_sqls(void *conn)\n{\n\tif (db_prepare_job_sqls(conn) != 0)\n\t\treturn -1;\n\tif (db_prepare_svr_sqls(conn) != 0)\n\t\treturn -1;\n\tif (db_prepare_que_sqls(conn) != 0)\n\t\treturn -1;\n\tif (db_prepare_resv_sqls(conn) != 0)\n\t\treturn -1;\n\tif (db_prepare_node_sqls(conn) != 0)\n\t\treturn -1;\n\tif (db_prepare_sched_sqls(conn) != 0)\n\t\treturn -1;\n\treturn 0;\n}\n\n/**\n * @brief\n *\tExecute a direct sql string on the open database connection\n *\n * @param[in]\tconn - Connected database handle\n * @param[in]\tsql  - A string describing the sql to execute.\n *\n * @return      Error code\n * @retval\t-1  - Error\n * @retval       0  - success\n * @retval\t 1  - Execution succeeded but statement did not return any rows\n *\n */\nint\ndb_execute_str(void *conn, char *sql)\n{\n\tPGresult *res;\n\tchar *rows_affected = NULL;\n\tint status;\n\n\tres = PQexec((PGconn *) conn, sql);\n\tstatus = PQresultStatus(res);\n\tif (status != PGRES_COMMAND_OK && status != PGRES_TUPLES_OK) {\n\t\tchar *sql_error = PQresultErrorField(res, PG_DIAG_SQLSTATE);\n\t\tdb_set_error(conn, &errmsg_cache, \"Execution of string statement\\n\", sql, sql_error);\n\t\tPQclear(res);\n\t\treturn -1;\n\t}\n\trows_affected = PQcmdTuples(res);\n\tif ((rows_affected == NULL || strtol(rows_affected, NULL, 10) <= 0) && (PQntuples(res) <= 0)) {\n\t\tPQclear(res);\n\t\treturn 
1;\n\t}\n\n\tPQclear(res);\n\treturn 0;\n}\n\n/**\n * @brief\n *\tFunction to start/stop the database service/daemons\n *\tBasically calls the psql command with the specified command.\n *\n * @return      Error code\n * @retval       !=0 - Failure\n * @retval         0 - Success\n *\n */\nint\npbs_dataservice_control(char *cmd, char *pbs_ds_host, int pbs_ds_port)\n{\n\tchar dbcmd[4 * MAXPATHLEN + 1];\n\tint rc = 0;\n\tint ret = 0;\n\tchar errfile[MAXPATHLEN + 1];\n\tchar log_file[MAXPATHLEN + 1];\n\tchar oom_file[MAXPATHLEN + 1];\n\tchar *oom_score_adj = \"/proc/self/oom_score_adj\";\n\tchar *oom_adj = \"/proc/self/oom_adj\";\n\tchar *oom_val = NULL;\n\tstruct stat stbuf;\n\tint fd = 0;\n\tchar *p = NULL;\n\tchar *pg_bin = NULL;\n\tchar *pg_libstr = NULL;\n\tchar *errmsg = NULL;\n\n\tif (pg_ctl[0] == '\\0') {\n\t\tpg_libstr = getenv(\"PGSQL_LIBSTR\");\n\t\tif ((pg_bin = getenv(\"PGSQL_BIN\")) == NULL) {\n\t\t\tif (errmsg_cache)\n\t\t\t\tfree(errmsg_cache);\n\t\t\terrmsg_cache = strdup(\"PGSQL_BIN not found in the environment. Please run PBS_EXEC/libexec/pbs_db_env and try again.\");\n\t\t\treturn -1;\n\t\t}\n\t\tsprintf(pg_ctl, \"%s %s/pg_ctl -D %s/datastore\", pg_libstr ? 
pg_libstr : \"\", pg_bin, pbs_conf.pbs_home_path);\n\t}\n\tif (pg_user == NULL) {\n\t\tif (errmsg_cache) {\n\t\t\tfree(errmsg_cache);\n\t\t\terrmsg_cache = NULL;\n\t\t}\n\t\terrmsg = (char *) malloc(PBS_MAX_DB_CONN_INIT_ERR + 1);\n\t\tif (errmsg == NULL) {\n\t\t\terrmsg_cache = strdup(\"Out of memory\\n\");\n\t\t\treturn -1;\n\t\t}\n\t\tif ((pg_user = pbs_get_dataservice_usr(errmsg, PBS_MAX_DB_CONN_INIT_ERR)) == NULL) {\n\t\t\terrmsg_cache = strdup(errmsg);\n\t\t\tfree(errmsg);\n\t\t\treturn -1;\n\t\t}\n\t\tfree(errmsg);\n\t}\n\n\tif (!(strcmp(cmd, PBS_DB_CONTROL_START))) {\n\t\t/*\n\t\t * try to protect ourselves from the Linux OOM killer,\n\t\t * but don't fail if the OOM score can't be updated\n\t\t */\n\t\tif (access(oom_score_adj, F_OK) != -1) {\n\t\t\tstrcpy(oom_file, oom_score_adj);\n\t\t\toom_val = strdup(\"-1000\");\n\t\t} else if (access(oom_adj, F_OK) != -1) {\n\t\t\tstrcpy(oom_file, oom_adj);\n\t\t\toom_val = strdup(\"-17\");\n\t\t}\n\t\tif (oom_val != NULL) {\n\t\t\tif ((fd = open(oom_file, O_TRUNC | O_WRONLY, 0600)) != -1) {\n\t\t\t\tif (write(fd, oom_val, strlen(oom_val)) == -1)\n\t\t\t\t\tret = PBS_DB_OOM_ERR;\n\t\t\t\tclose(fd);\n\t\t\t} else\n\t\t\t\tret = PBS_DB_OOM_ERR;\n\t\t\tfree(oom_val);\n\t\t}\n\t\tsprintf(errfile, \"%s/spool/pbs_ds_monitor_errfile\", pbs_conf.pbs_home_path);\n\t\t/* launch monitoring program which will fork to background */\n\t\tsprintf(dbcmd, \"%s/sbin/pbs_ds_monitor monitor > %s 2>&1\", pbs_conf.pbs_exec_path, errfile);\n\t\trc = system(dbcmd);\n\t\tif (WIFEXITED(rc))\n\t\t\trc = WEXITSTATUS(rc);\n\t\tif (rc != 0) {\n\t\t\t/* read the contents of errfile and see */\n\t\t\t/* if pbs_ds_monitor is already running */\n\t\t\tif ((fd = open(errfile, 0)) != -1) {\n\t\t\t\tif (fstat(fd, &stbuf) != -1) {\n\t\t\t\t\terrmsg = (char *) malloc(stbuf.st_size + 1);\n\t\t\t\t\tif (errmsg == NULL) {\n\t\t\t\t\t\tclose(fd);\n\t\t\t\t\t\tunlink(errfile);\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\trc = read(fd, errmsg, stbuf.st_size);\n\t\t\t\t\tif 
(rc == -1)\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t*(errmsg + stbuf.st_size) = 0;\n\t\t\t\t\tp = errmsg + strlen(errmsg) - 1;\n\t\t\t\t\twhile ((p >= errmsg) && (*p == '\\r' || *p == '\\n'))\n\t\t\t\t\t\t*p-- = 0; /* suppress the last newline */\n\t\t\t\t\tif (strstr((char *) errmsg, \"Lock seems to be held by pid\")) {\n\t\t\t\t\t\t/* pbs_ds_monitor is already running */\n\t\t\t\t\t\trc = 0;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tif (errmsg_cache)\n\t\t\t\t\t\t\tfree(errmsg_cache);\n\t\t\t\t\t\terrmsg_cache = strdup(errmsg);\n\t\t\t\t\t}\n\t\t\t\t\tfree(errmsg);\n\t\t\t\t}\n\t\t\t\tclose(fd);\n\t\t\t}\n\t\t\tif (rc)\n\t\t\t\treturn -1;\n\t\t}\n\t\tunlink(errfile);\n\t}\n\n\t/* create unique filename by appending pid */\n\tsprintf(errfile, \"%s/spool/db_errfile_%s_%d\", pbs_conf.pbs_home_path, cmd, getpid());\n\tsprintf(log_file, \"%s/spool/db_start.log\", pbs_conf.pbs_home_path);\n\n\tif (!(strcmp(cmd, PBS_DB_CONTROL_START))) {\n\t\tsprintf(dbcmd,\n\t\t\t\"su - %s -c \\\"/bin/sh -c '%s -o \\\\\\\"-p %d \\\\\\\" -W start -l %s > %s 2>&1'\\\"\",\n\t\t\tpg_user,\n\t\t\tpg_ctl,\n\t\t\tpbs_ds_port,\n\t\t\tlog_file,\n\t\t\terrfile);\n\t} else if (!(strcmp(cmd, PBS_DB_CONTROL_STATUS))) {\n\t\tsprintf(dbcmd,\n\t\t\t\"su - %s -c \\\"/bin/sh -c '%s -o \\\\\\\"-p %d \\\\\\\" -w status > %s 2>&1'\\\"\",\n\t\t\tpg_user,\n\t\t\tpg_ctl,\n\t\t\tpbs_ds_port,\n\t\t\terrfile);\n\t} else if (!(strcmp(cmd, PBS_DB_CONTROL_STOP))) {\n\t\tsprintf(dbcmd,\n\t\t\t\"su - %s -c \\\"/bin/sh -c '%s -w stop -m fast > %s 2>&1'\\\"\",\n\t\t\tpg_user,\n\t\t\tpg_ctl,\n\t\t\terrfile);\n\t}\n\n\trc = system(dbcmd);\n\tif (WIFEXITED(rc))\n\t\trc = WEXITSTATUS(rc);\n\n\tif (rc != 0) {\n\t\tret = 1;\n\t\tif (!(strcmp(cmd, PBS_DB_CONTROL_STATUS))) {\n\t\t\tsprintf(errfile, \"%s/spool/pbs_ds_monitor_errfile\", pbs_conf.pbs_home_path);\n\t\t\t/* check further only if pg_ctl thinks no DATABASE is running */\n\t\t\tsprintf(dbcmd, \"%s/sbin/pbs_ds_monitor check > %s 2>&1\", pbs_conf.pbs_exec_path, errfile);\n\t\t\trc = 
system(dbcmd);\n\t\t\tif (WIFEXITED(rc))\n\t\t\t\trc = WEXITSTATUS(rc);\n\t\t\tif (rc != 0)\n\t\t\t\tret = 2;\n\t\t} else if (!(strcmp(cmd, PBS_DB_CONTROL_START))) {\n\t\t\t/* read the contents of logfile */\n\t\t\tif ((fd = open(log_file, 0)) != -1) {\n\t\t\t\tif (fstat(fd, &stbuf) != -1) {\n\t\t\t\t\terrmsg = (char *) malloc(stbuf.st_size + 1);\n\t\t\t\t\tif (errmsg == NULL) {\n\t\t\t\t\t\tclose(fd);\n\t\t\t\t\t\tunlink(log_file);\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\trc = read(fd, errmsg, stbuf.st_size);\n\t\t\t\t\tif (rc == -1) {\n\t\t\t\t\t\tclose(fd);\n\t\t\t\t\t\tunlink(log_file);\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\t*(errmsg + stbuf.st_size) = 0;\n\t\t\t\t\tp = errmsg + strlen(errmsg) - 1;\n\t\t\t\t\twhile ((p >= errmsg) && (*p == '\\r' || *p == '\\n'))\n\t\t\t\t\t\t*p-- = 0; /* suppress the last newline */\n\t\t\t\t\tif (strstr((char *) errmsg, \"database files are incompatible with server\"))\n\t\t\t\t\t\tret = 3; /* DB version mismatch */\n\t\t\t\t\tfree(errmsg);\n\t\t\t\t}\n\t\t\t\tclose(fd);\n\t\t\t}\n\t\t}\n\t\tif (rc != 0) {\n\t\t\t/* read the contents of errfile and load to errmsg */\n\t\t\tif ((fd = open(errfile, 0)) != -1) {\n\t\t\t\tif (fstat(fd, &stbuf) != -1) {\n\t\t\t\t\terrmsg = (char *) malloc(stbuf.st_size + 1);\n\t\t\t\t\tif (errmsg == NULL) {\n\t\t\t\t\t\tclose(fd);\n\t\t\t\t\t\tunlink(errfile);\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\trc = read(fd, errmsg, stbuf.st_size);\n\t\t\t\t\tif (rc == -1)\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t*(errmsg + stbuf.st_size) = 0;\n\t\t\t\t\tp = errmsg + strlen(errmsg) - 1;\n\t\t\t\t\twhile ((p >= errmsg) && (*p == '\\r' || *p == '\\n'))\n\t\t\t\t\t\t*p-- = 0; /* suppress the last newline */\n\t\t\t\t\tif (errmsg_cache)\n\t\t\t\t\t\tfree(errmsg_cache);\n\t\t\t\t\terrmsg_cache = strdup(errmsg);\n\t\t\t\t\tfree(errmsg);\n\t\t\t\t}\n\t\t\t\tclose(fd);\n\t\t\t}\n\t\t}\n\t} else if (rc == 0 && !(strcmp(cmd, PBS_DB_CONTROL_START))) {\n\t\t/* launch systemd setup script 
*/\n\t\tsprintf(dbcmd, \"%s/sbin/pbs_ds_systemd\", pbs_conf.pbs_exec_path);\n\t\trc = system(dbcmd);\n\t\tif (WIFEXITED(rc))\n\t\t\trc = WEXITSTATUS(rc);\n\t\tif (rc != 0) {\n\t\t\tif (errmsg_cache)\n\t\t\t\tfree(errmsg_cache);\n\t\t\terrmsg_cache = strdup(\"systemd service setup for pbs failed\");\n\t\t\treturn -1;\n\t\t}\n\t}\n\tunlink(log_file);\n\tunlink(errfile);\n\treturn ret;\n}\n\n/**\n * @brief\n *\tFunction to check whether data-service is running\n *\n * @return      Error code\n * @retval      -1  - Error in routine\n * @retval       0  - Data service running on local host\n * @retval       1  - Data service not running\n * @retval       2  - Data service running on another host\n *\n */\nint\npbs_status_db(char *pbs_ds_host, int pbs_ds_port)\n{\n\treturn (pbs_dataservice_control(PBS_DB_CONTROL_STATUS, pbs_ds_host, pbs_ds_port));\n}\n\n/**\n * @brief\n *\tStart the database daemons/service in synchronous mode.\n *  This function waits for the database to complete startup.\n *\n * @param[out]\terrmsg - returns the startup error message if any\n *\n * @return       int\n * @retval       0     - success\n * @retval       !=0   - Failure\n *\n */\nint\npbs_start_db(char *pbs_ds_host, int pbs_ds_port)\n{\n\treturn (pbs_dataservice_control(PBS_DB_CONTROL_START, pbs_ds_host, pbs_ds_port));\n}\n\n/**\n * @brief\n *\tFunction to stop the database service/daemons\n *\tThis passes the parameter STOP to the\n *\tpbs_dataservice script.\n *\n * @param[out]\terrmsg - returns the db error message if any\n *\n * @return      Error code\n * @retval       !=0 - Failure\n * @retval        0  - Success\n *\n */\nint\npbs_stop_db(char *pbs_ds_host, int pbs_ds_port)\n{\n\treturn (pbs_dataservice_control(PBS_DB_CONTROL_STOP, pbs_ds_host, pbs_ds_port));\n}\n\n/**\n * @brief\n *\tFunction to create new databse user or\n *  change password of current user.\n *\n * @param[in] conn[in]: The database connection handle which was created by pbs_db_connection.\n * @param[in] 
user_name[in]: Database user name.\n * @param[in] password[in]:  New password for the database.\n * @param[in] olduser[in]: old database user name.\n *\n * @return      Error code\n * @retval       -1 - Failure\n * @retval        0  - Success\n *\n */\nint\npbs_db_password(void *conn, char *userid, char *password, char *olduser)\n{\n\tchar sqlbuff[1024];\n\tchar *pquoted = NULL;\n\tchar prog[] = \"pbs_db_password\";\n\tint change_user = 0;\n\n\tif (userid[0] != 0) {\n\t\tif (strcmp(olduser, userid) != 0) {\n\t\t\tchange_user = 1;\n\t\t}\n\t}\n\n\t/* escape password to use in sql strings later */\n\tif ((pquoted = db_escape_str(conn, password)) == NULL) {\n\t\tfprintf(stderr, \"%s: Out of memory\\n\", prog);\n\t\treturn -1;\n\t}\n\n\tif (change_user == 1) {\n\t\t/* check whether user exists */\n\t\tsnprintf(sqlbuff, sizeof(sqlbuff), \"select usename from pg_user where usename = '%s'\", userid);\n\t\tif (db_execute_str(conn, sqlbuff) == 1) {\n\t\t\t/* now attempt to create the new user & set the database passwd to the un-encrypted password */\n\t\t\tsnprintf(sqlbuff, sizeof(sqlbuff), \"create user \\\"%s\\\" SUPERUSER ENCRYPTED PASSWORD '%s'\", userid, pquoted);\n\t\t} else {\n\t\t\t/* attempt to alter the existing user & set the database passwd to the un-encrypted password */\n\t\t\tsnprintf(sqlbuff, sizeof(sqlbuff), \"alter user \\\"%s\\\" SUPERUSER ENCRYPTED PASSWORD '%s'\", userid, pquoted);\n\t\t}\n\t} else {\n\t\t/* now attempt to set the database passwd to the un-encrypted password */\n\t\t/* alter user ${user} SUPERUSER ENCRYPTED PASSWORD '${passwd}' */\n\t\tsprintf(sqlbuff, \"alter user \\\"%s\\\" SUPERUSER ENCRYPTED PASSWORD '%s'\", olduser, pquoted);\n\t}\n\tfree(pquoted);\n\n\tif (db_execute_str(conn, sqlbuff) == -1)\n\t\treturn -1;\n\tif (change_user) {\n\t\t/* delete the old user from the database */\n\t\tsprintf(sqlbuff, \"drop user \\\"%s\\\"\", olduser);\n\t\tif (db_execute_str(conn, sqlbuff) == -1)\n\t\t\treturn -1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n 
*\tStatic helper function to retrieve postgres error string,\n *\tanalyze it and find out what kind of error it is. Based on\n *\tthat, a PBS DB layer specific error code is generated.\n *\n * @param[in]\tconn - Connected database handle\n *\n * @return      Connection status\n * @retval      -1 - Connection down\n * @retval       0 - Connection fine\n *\n */\nstatic int\nis_conn_error(void *conn, int *failcode)\n{\n\t/* Check to see that the backend connection was successfully made */\n\tif (conn == NULL || PQstatus(conn) == CONNECTION_BAD) {\n\t\tif (conn) {\n\t\t\tdb_set_error(conn, &errmsg_cache, \"Connection:\", \"\", \"\");\n\t\t\tif (strstr((char *) errmsg_cache, \"Connection refused\") || strstr((char *) errmsg_cache, \"No such file or directory\"))\n\t\t\t\t*failcode = PBS_DB_CONNREFUSED;\n\t\t\telse if (strstr((char *) errmsg_cache, \"authentication\"))\n\t\t\t\t*failcode = PBS_DB_AUTH_FAILED;\n\t\t\telse if (strstr((char *) errmsg_cache, \"database system is starting up\"))\n\t\t\t\t*failcode = PBS_DB_STILL_STARTING;\n\t\t\telse\n\t\t\t\t*failcode = PBS_DB_CONNFAILED; /* default failure code */\n\t\t} else\n\t\t\t*failcode = PBS_DB_CONNFAILED; /* default failure code */\n\t\treturn 1;\t\t\t       /* true - connection error */\n\t}\n\treturn 0; /* no connection error */\n}\n\n/**\n * @brief\n *\tCreate a new connection structure and initialize the fields\n *\n * @param[out]  conn - Pointer to database connection handler.\n * @param[in]   host - The hostname to connect to\n * @param[in]\tport - The port to connect to\n * @param[in]   timeout - The connection attempt timeout\n *\n * @return      int - failcode\n * @retval      non-zero  - Failure\n * @retval      0 - Success\n *\n */\nint\npbs_db_connect(void **db_conn, char *host, int port, int timeout)\n{\n\tint failcode = PBS_DB_SUCCESS;\n\tint len = PBS_MAX_DB_CONN_INIT_ERR;\n\tchar db_sys_msg[PBS_MAX_DB_CONN_INIT_ERR + 1] = {0};\n\tchar *conn_info = NULL;\n\n\tconn_data = 
malloc(sizeof(pg_conn_data_t));\n\tif (!conn_data) {\n\t\tfailcode = PBS_DB_NOMEM;\n\t\treturn failcode;\n\t}\n\n\t/*\n\t * calloc ensures that everything is initialized to zeros\n\t * so no need to explicitly set fields to 0.\n\t */\n\tconn_trx = calloc(1, sizeof(pg_conn_trx_t));\n\tif (!conn_trx) {\n\t\tfree(conn_data);\n\t\tfailcode = PBS_DB_NOMEM;\n\t\treturn failcode;\n\t}\n\n\tconn_info = get_db_connect_string(host, timeout, &failcode, db_sys_msg, len);\n\tif (!conn_info) {\n\t\terrmsg_cache = strdup(db_sys_msg);\n\t\tgoto db_cnerr;\n\t}\n\n\t/* Make a connection to the database */\n\t*db_conn = (PGconn *) PQconnectdb(conn_info);\n\n\t/*\n\t * For security remove the connection info from the memory.\n\t */\n\tmemset(conn_info, 0, strlen(conn_info));\n\tfree(conn_info);\n\n\t/* Check to see that the backend connection was successfully made */\n\tif (!(is_conn_error(*db_conn, &failcode))) {\n\t\tif (db_prepare_sqls(*db_conn) != 0) {\n\t\t\t/* this means there is programmatic/unrecoverable error, so we quit */\n\t\t\tfailcode = PBS_DB_ERR;\n\t\t\tpbs_stop_db(host, port);\n\t\t}\n\t}\n\ndb_cnerr:\n\tif (failcode != PBS_DB_SUCCESS) {\n\t\tfree(conn_data);\n\t\tfree(conn_trx);\n\t\t*db_conn = NULL;\n\t}\n\treturn failcode;\n}\n\n/**\n * @brief\n *\tDisconnect from the database and free all allocated memory.\n *\n * @param[in]   conn - Connected database handle\n *\n * @return      Error code\n * @retval       0  - success\n * @retval      -1  - Failure\n *\n */\nint\npbs_db_disconnect(void *conn)\n{\n\tif (!conn)\n\t\treturn -1;\n\n\tPQfinish(conn);\n\n\tfree(conn_data);\n\tfree(conn_trx);\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tSaves a new object into the database\n *\n * @param[in]\tconn - Connected database handle\n * @param[in]\tobj - Wrapper object that describes the object (and data) to insert\n * @param[in]\tsavetype - quick or full save\n *\n * @return      Error code\n * @retval\t-1  - Failure\n * @retval\t 0  - Success\n * 
@retval\t 1  - Success but no rows inserted\n *\n */\nint\npbs_db_save_obj(void *conn, pbs_db_obj_info_t *obj, int savetype)\n{\n\treturn (db_fn_arr[obj->pbs_db_obj_type].pbs_db_save_obj(conn, obj, savetype));\n}\n\n/**\n * @brief\n *\tDelete attributes of an object from the database\n *\n * @param[in]\tconn - Connected database handle\n * @param[in]\tobj - Wrapper object that describes the object\n * @param[in]\tobj_id - Object id\n * @param[in]\tdb_attr_list - Pointer to the structure of type pbs_db_attr_list_t for deleting from DB\n * @return      Error code\n *\n * @retval      0  - success\n * @retval     -1  - Failure\n *\n */\nint\npbs_db_delete_attr_obj(void *conn, pbs_db_obj_info_t *obj, void *obj_id, pbs_db_attr_list_t *db_attr_list)\n{\n\treturn (db_fn_arr[obj->pbs_db_obj_type].pbs_db_del_attr_obj(conn, obj_id, db_attr_list));\n}\n\n/**\n * @brief\n *\tFunction to set the database error into the db_err field of the\n *\tconnection object\n *\n * @param[in]\tconn - Pointer to db connection handle.\n * @param[out]\tconn_db_err - Pointer to cached db error.\n * @param[in]\tfnc - Custom string added to the error message\n *\t\t\tThis can be used to provide the name of the functionality.\n * @param[in]\tmsg - Custom string added to the error message. 
This can be\n *\t\t\tused to provide a failure message.\n * @param[in]\tdiag_msg - Additional diagnostic message from the resultset, if any\n */\nvoid\ndb_set_error(void *conn, char **conn_db_err, char *fnc, char *msg, char *diag_msg)\n{\n\tchar *str;\n\tchar *p;\n\tchar fmt[] = \"%s %s failed: %s %s\";\n\n\tif (*conn_db_err) {\n\t\tfree(*conn_db_err);\n\t\t*conn_db_err = NULL;\n\t}\n\n\tstr = PQerrorMessage((PGconn *) conn);\n\tif (!str)\n\t\treturn;\n\n\tp = str + strlen(str) - 1;\n\twhile ((p >= str) && (*p == '\\r' || *p == '\\n'))\n\t\t*p-- = 0; /* suppress the last newline */\n\n\tif (!diag_msg)\n\t\tdiag_msg = \"\";\n\n\tpbs_asprintf(conn_db_err, fmt, fnc, msg, str, diag_msg);\n\n#ifdef DEBUG\n\tprintf(\"%s\\n\", *conn_db_err);\n\tfflush(stdout);\n#endif\n}\n\n/**\n * @brief\n *\tFunction to prepare a database statement\n *\n * @param[in]\tconn - The connection handle\n * @param[in]\tstmt - Name of the statement\n * @param[in]\tsql  - The string sql that has to be prepared\n * @param[in]\tnum_vars - The number of parameters in the sql ($1, $2 etc)\n *\n * @return      Error code\n * @retval\t-1 Failure\n * @retval\t 0 Success\n *\n */\nint\ndb_prepare_stmt(void *conn, char *stmt, char *sql, int num_vars)\n{\n\tPGresult *res;\n\tres = PQprepare((PGconn *) conn, stmt, sql, num_vars, NULL);\n\tif (PQresultStatus(res) != PGRES_COMMAND_OK) {\n\t\tchar *sql_error = PQresultErrorField(res, PG_DIAG_SQLSTATE);\n\t\tdb_set_error(conn, &errmsg_cache, \"Prepare of statement\", stmt, sql_error);\n\t\tPQclear(res);\n\t\treturn -1;\n\t}\n\tPQclear(res);\n\treturn 0;\n}\n\n/**\n * @brief\n *\tExecute a prepared DML (insert or update) statement\n *\n * @param[in]\tconn - The connection handle\n * @param[in]\tstmt - Name of the statement (prepared previously)\n * @param[in]\tnum_vars - The number of parameters in the sql ($1, $2 etc)\n *\n * @return      Error code\n * @retval\t-1 - Execution of prepared statement failed\n * @retval\t 0 - Success and > 0 rows were affected\n 
* @retval\t 1 - Execution succeeded but statement did not affect any rows\n *\n */\nint\ndb_cmd(void *conn, char *stmt, int num_vars)\n{\n\tPGresult *res;\n\tchar *rows_affected = NULL;\n\n\tres = PQexecPrepared((PGconn *) conn, stmt, num_vars,\n\t\t\t     conn_data->paramValues,\n\t\t\t     conn_data->paramLengths,\n\t\t\t     conn_data->paramFormats, 0);\n\tif (PQresultStatus(res) != PGRES_COMMAND_OK) {\n\t\tchar *sql_error = PQresultErrorField(res, PG_DIAG_SQLSTATE);\n\t\tdb_set_error(conn, &errmsg_cache, \"Execution of Prepared statement\", stmt, sql_error);\n\t\tPQclear(res);\n\t\treturn -1;\n\t}\n\trows_affected = PQcmdTuples(res);\n\n\t/*\n\t * we can't call PQclear(res) yet, since rows_affected\n\t * (used below) is a pointer to a field inside res (PGresult)\n\t */\n\tif (rows_affected == NULL || strtol(rows_affected, NULL, 10) <= 0) {\n\t\tPQclear(res);\n\t\treturn 1;\n\t}\n\tPQclear(res);\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tExecute a prepared query (select) statement\n *\n * @param[in]\tconn - The connection handle\n * @param[in]\tstmt - Name of the statement (prepared previously)\n * @param[in]\tnum_vars - The number of parameters in the sql ($1, $2 etc)\n * @param[out]  res - The result set of the query\n *\n * @return      Error code\n * @retval\t-1 - Execution of prepared statement failed\n * @retval\t 0 - Success and > 0 rows were returned\n * @retval\t 1 - Execution succeeded but statement did not return any rows\n *\n */\nint\ndb_query(void *conn, char *stmt, int num_vars, PGresult **res)\n{\n\tint conn_result_format = 1;\n\t*res = PQexecPrepared((PGconn *) conn, stmt, num_vars,\n\t\t\t      conn_data->paramValues, conn_data->paramLengths,\n\t\t\t      conn_data->paramFormats, conn_result_format);\n\n\tif (PQresultStatus(*res) != PGRES_TUPLES_OK) {\n\t\tchar *sql_error = PQresultErrorField(*res, PG_DIAG_SQLSTATE);\n\t\tdb_set_error(conn, &errmsg_cache, \"Execution of Prepared statement\", stmt, sql_error);\n\t\tPQclear(*res);\n\t\treturn 
-1;\n\t}\n\n\tif (PQntuples(*res) <= 0) {\n\t\tPQclear(*res);\n\t\treturn 1;\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tRetrieves the database password for a user. Currently, the database\n *\tpassword is retrieved from the file under server_priv, called db_password.\n *\tIf a password file is not found under server_priv, this function returns\n *\tthe username itself as the password. However, if a password file is found\n *\tbut is not readable, an error (indicated by returning NULL) is returned.\n *\n * @param[in]\tuser - Name of the user\n * @param[out]  errmsg - Details of the error\n * @param[in]   len    - length of error message variable\n *\n * @return      Password String\n * @retval\t NULL - Failed to retrieve password\n * @retval\t!NULL - Pointer to allocated memory with password string.\n *\t\t\tCaller should free this memory after usage.\n *\n */\nstatic char *\nget_dataservice_password(char *user, char *errmsg, int len)\n{\n\tchar pwd_file[MAXPATHLEN + 1];\n\tint fd;\n\tstruct stat st;\n\tchar buf[MAXPATHLEN + 1];\n\tchar *str;\n\n\tsprintf(pwd_file, \"%s/server_priv/db_password\", pbs_conf.pbs_home_path);\n\tif ((fd = open(pwd_file, O_RDONLY)) == -1) {\n\t\treturn strdup(user);\n\t} else {\n\t\tif (fstat(fd, &st) == -1) {\n\t\t\tclose(fd);\n\t\t\tsnprintf(errmsg, len, \"%s: stat failed, errno=%d\", pwd_file, errno);\n\t\t\treturn NULL;\n\t\t}\n\t\tif (st.st_size >= sizeof(buf)) {\n\t\t\tclose(fd);\n\t\t\tsnprintf(errmsg, len, \"%s: file too large\", pwd_file);\n\t\t\treturn NULL;\n\t\t}\n\n\t\tif (read(fd, buf, st.st_size) != st.st_size) {\n\t\t\tclose(fd);\n\t\t\tsnprintf(errmsg, len, \"%s: read failed, errno=%d\", pwd_file, errno);\n\t\t\treturn NULL;\n\t\t}\n\t\tbuf[st.st_size] = 0;\n\t\tclose(fd);\n\n\t\tif (pbs_decrypt_pwd(buf, PBS_CREDTYPE_AES, st.st_size, &str, (const unsigned char *) pbs_aes_key, (const unsigned char *) pbs_aes_iv) != 0)\n\t\t\treturn NULL;\n\n\t\treturn (str);\n\t}\n}\n\n/**\n * @brief\n *\tEscape any 
special characters contained in a database password.\n *\tThe list of such characters is found in the description of PQconnectdb\n *\tat http://www.postgresql.org/docs/<current_pgsql_version>/static/libpq-connect.html.\n *\n * @param[out]\tdest - destination string, which will hold the escaped password\n * @param[in]\tsrc - the original password string, which may contain characters\n *\t\t      that must be escaped\n * @param[in]   len - amount of space in the destination string; to ensure\n *\t\t      successful conversion, this value should be at least one\n *\t\t      more than twice the length of the original password string\n *\n * @return      void\n *\n */\nvoid\nescape_passwd(char *dest, char *src, int len)\n{\n\tchar *p = dest;\n\n\twhile (*src && ((p - dest) < len)) {\n\t\tif (*src == '\\'' || *src == '\\\\') {\n\t\t\t*p = '\\\\';\n\t\t\tp++;\n\t\t}\n\t\t*p = *src;\n\t\tp++;\n\t\tsrc++;\n\t}\n\t*p = '\\0';\n}\n\n/**\n * @brief\n *\tCreates the database connect string by retrieving the\n *      database password and appending the other connection\n *      parameters.\n *\tIf parameter host is passed as NULL, then the \"host =\" portion\n *\tof the connection info is not set, allowing the database to\n *\tconnect to the default host (which is local).\n *\n * @param[in]   host - The hostname to connect to; if NULL, it is not used\n * @param[in]   timeout - The timeout parameter of the connection\n * @param[in]   err_code - The error code in case of failure\n * @param[out]  errmsg - Details of the error\n * @param[in]   len    - length of error message variable\n *\n * @return      The newly allocated and populated connection string\n * @retval       NULL  - Failure\n * @retval       !NULL - Success\n *\n */\nstatic char *\nget_db_connect_string(char *host, int timeout, int *err_code, char *errmsg, int len)\n{\n\tchar *svr_conn_info;\n\tint pquoted_len = 0;\n\tchar *p = NULL, *pquoted = NULL;\n\tchar *usr = NULL;\n\tpbs_net_t hostaddr;\n\tstruct in_addr 
in;\n\tchar hostaddr_str[IPV4_STR_LEN + 1];\n\tchar *q;\n\tchar template1[] = \"hostaddr = '%s' port = %d dbname = '%s' user = '%s' password = '%s' connect_timeout = %d\";\n\tchar template2[] = \"port = %d dbname = '%s' user = '%s' password = '%s' connect_timeout = %d\";\n\n\tusr = pbs_get_dataservice_usr(errmsg, len);\n\tif (usr == NULL) {\n\t\t*err_code = PBS_DB_AUTH_FAILED;\n\t\treturn NULL;\n\t}\n\n\tp = get_dataservice_password(usr, errmsg, len);\n\tif (p == NULL) {\n\t\tfree(usr);\n\t\t*err_code = PBS_DB_AUTH_FAILED;\n\t\treturn NULL;\n\t}\n\n\tpquoted_len = strlen(p) * 2 + 1;\n\tpquoted = malloc(pquoted_len);\n\tif (!pquoted) {\n\t\tfree(p);\n\t\tfree(usr);\n\t\t*err_code = PBS_DB_NOMEM;\n\t\treturn NULL;\n\t}\n\n\tescape_passwd(pquoted, p, pquoted_len);\n\n\tsvr_conn_info = malloc(MAX(sizeof(template1), sizeof(template2)) +\n\t\t\t       ((host) ? IPV4_STR_LEN : 0) + /* length of IPv4 only if host is not NULL */\n\t\t\t       5 +\t\t\t     /* possible length of port */\n\t\t\t       strlen(PBS_DATA_SERVICE_STORE_NAME) +\n\t\t\t       strlen(usr) +     /* NULL checked earlier */\n\t\t\t       strlen(pquoted) + /* escaped password; may be up to twice the original length */\n\t\t\t       10);\t     /* max 9 char timeout + null char */\n\tif (svr_conn_info == NULL) {\n\t\tfree(pquoted);\n\t\tfree(p);\n\t\tfree(usr);\n\t\t*err_code = PBS_DB_NOMEM;\n\t\treturn NULL;\n\t}\n\n\tif (host == NULL) {\n\t\tsprintf(svr_conn_info,\n\t\t\ttemplate2,\n\t\t\tpbs_conf.pbs_data_service_port,\n\t\t\tPBS_DATA_SERVICE_STORE_NAME,\n\t\t\tusr,\n\t\t\tpquoted,\n\t\t\ttimeout);\n\t} else {\n\t\tif ((hostaddr = get_hostaddr(host)) == (pbs_net_t) 0) {\n\t\t\tfree(pquoted);\n\t\t\tfree(svr_conn_info);\n\t\t\tfree(p);\n\t\t\tfree(usr);\n\t\t\tsnprintf(errmsg, len, \"Could not resolve dataservice host %s\", host);\n\t\t\t*err_code = PBS_DB_CONNFAILED;\n\t\t\treturn NULL;\n\t\t}\n\t\tin.s_addr = htonl(hostaddr);\n\t\tq = inet_ntoa(in);\n\t\tif (!q) 
{\n\t\t\tfree(pquoted);\n\t\t\tfree(svr_conn_info);\n\t\t\tfree(p);\n\t\t\tfree(usr);\n\t\t\tsnprintf(errmsg, len, \"inet_ntoa failed, errno=%d\", errno);\n\t\t\t*err_code = PBS_DB_CONNFAILED;\n\t\t\treturn NULL;\n\t\t}\n\t\tstrncpy(hostaddr_str, q, IPV4_STR_LEN);\n\t\thostaddr_str[IPV4_STR_LEN] = '\\0';\n\n\t\tsprintf(svr_conn_info,\n\t\t\ttemplate1,\n\t\t\thostaddr_str,\n\t\t\tpbs_conf.pbs_data_service_port,\n\t\t\tPBS_DATA_SERVICE_STORE_NAME,\n\t\t\tusr,\n\t\t\tpquoted,\n\t\t\ttimeout);\n\t}\n\tmemset(p, 0, strlen(p));\t     /* clear password from memory */\n\tmemset(pquoted, 0, strlen(pquoted)); /* clear password from memory */\n\tfree(pquoted);\n\tfree(p);\n\tfree(usr);\n\n\treturn svr_conn_info;\n}\n\n/**\n * @brief\n *\tFunction to escape special characters in a string\n *\tbefore using as a column value in the database\n *\n * @param[in]\tconn - Handle to the database connection\n * @param[in]\tstr - the string to escape\n *\n * @return      Escaped string\n * @retval        NULL - Failure to escape string\n * @retval       !NULL - Newly allocated area holding escaped string,\n *                       caller needs to free\n *\n */\nstatic char *\ndb_escape_str(void *conn, char *str)\n{\n\tchar *val_escaped;\n\tint error;\n\tint val_len;\n\n\tif (str == NULL)\n\t\treturn NULL;\n\n\tval_len = strlen(str);\n\t/* Use calloc() to ensure the character array is initialized. 
*/\n\tval_escaped = calloc(((2 * val_len) + 1), sizeof(char)); /* 2*orig + 1 as per Postgres API documentation */\n\tif (val_escaped == NULL)\n\t\treturn NULL;\n\n\tPQescapeStringConn((PGconn *) conn, val_escaped, str, val_len, &error);\n\tif (error != 0) {\n\t\tfree(val_escaped);\n\t\treturn NULL;\n\t}\n\n\treturn val_escaped;\n}\n\n/**\n * @brief\n *\tTranslates the error code to an error message\n *\n * @param[in]   err_code - Error code to translate\n * @param[out]  err_msg - The translated error message (newly allocated memory)\n *\n */\nvoid\npbs_db_get_errmsg(int err_code, char **err_msg)\n{\n\tif (*err_msg) {\n\t\tfree(*err_msg);\n\t\t*err_msg = NULL;\n\t}\n\n\tswitch (err_code) {\n\t\tcase PBS_DB_STILL_STARTING:\n\t\t\t*err_msg = strdup(\"PBS dataservice is still starting up\");\n\t\t\tbreak;\n\n\t\tcase PBS_DB_AUTH_FAILED:\n\t\t\t*err_msg = strdup(\"PBS dataservice authentication failed\");\n\t\t\tbreak;\n\n\t\tcase PBS_DB_NOMEM:\n\t\t\t*err_msg = strdup(\"PBS out of memory in connect\");\n\t\t\tbreak;\n\n\t\tcase PBS_DB_CONNREFUSED:\n\t\t\t*err_msg = strdup(\"PBS dataservice not running\");\n\t\t\tbreak;\n\n\t\tcase PBS_DB_CONNFAILED:\n\t\t\t*err_msg = strdup(\"Failed to connect to PBS dataservice\");\n\t\t\tbreak;\n\n\t\tcase PBS_DB_OOM_ERR:\n\t\t\t*err_msg = strdup(\"Failed to protect PBS from Linux OOM killer. No access to OOM score file.\");\n\t\t\tbreak;\n\n\t\tcase PBS_DB_ERR:\n\t\t\t*err_msg = NULL;\n\t\t\tif (errmsg_cache)\n\t\t\t\t*err_msg = strdup(errmsg_cache);\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\t*err_msg = strdup(\"PBS dataservice error\");\n\t\t\tbreak;\n\t}\n}\n\n/**\n * @brief Convert an unsigned long long value from network to host byte order\n *\n * @param[in]   x - Value to convert\n *\n * @return Value converted from network to host byte order. 
Return the original\n * value if network and host byte order are identical.\n */\nunsigned long long\ndb_ntohll(unsigned long long x)\n{\n\tif (ntohl(1) == 1)\n\t\treturn x;\n\n\t/*\n\t * htonl and ntohl always work on 32 bits, even on a 64 bit platform,\n\t * so there is no clash.\n\t */\n\treturn (unsigned long long) (((unsigned long long) ntohl((x) & 0xffffffff)) << 32) | ntohl(((unsigned long long) (x)) >> 32);\n}\n
  },
  {
    "path": "src/lib/Libdb/pgsql/db_job.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n *\n * @brief\n *      Implementation of the job data access functions for postgres\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include \"pbs_db.h\"\n#include \"db_postgres.h\"\n\n/**\n * @brief\n *\tPrepare all the job related sqls. Typically called after connect\n *\tand before any other sql execution\n *\n * @param[in]\tconn - Database connection handle\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\ndb_prepare_job_sqls(void *conn)\n{\n\tchar conn_sql[MAX_SQL_LENGTH];\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"insert into pbs.job (\"\n\t\t\t\t\t   \"ji_jobid,\"\n\t\t\t\t\t   \"ji_state,\"\n\t\t\t\t\t   \"ji_substate,\"\n\t\t\t\t\t   \"ji_svrflags,\"\n\t\t\t\t\t   \"ji_stime,\"\n\t\t\t\t\t   \"ji_queue,\"\n\t\t\t\t\t   \"ji_destin,\"\n\t\t\t\t\t   \"ji_un_type,\"\n\t\t\t\t\t   \"ji_exitstat,\"\n\t\t\t\t\t   \"ji_quetime,\"\n\t\t\t\t\t   \"ji_rteretry,\"\n\t\t\t\t\t   \"ji_fromsock,\"\n\t\t\t\t\t   \"ji_fromaddr,\"\n\t\t\t\t\t   \"ji_jid,\"\n\t\t\t\t\t   \"ji_credtype,\"\n\t\t\t\t\t   \"ji_qrank,\"\n\t\t\t\t\t   \"ji_savetm,\"\n\t\t\t\t\t   \"ji_creattm,\"\n\t\t\t\t\t   \"attributes\"\n\t\t\t\t\t   \") \"\n\t\t\t\t\t   \"values ($1, $2, $3, $4, $5, $6, $7, $8, $9, \"\n\t\t\t\t\t   \"$10, $11, $12, $13, $14, $15, $16, \"\n\t\t\t\t\t   \"localtimestamp, 
localtimestamp, hstore($17::text[]))\");\n\tif (db_prepare_stmt(conn, STMT_INSERT_JOB, conn_sql, 17) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"update pbs.job set \"\n\t\t\t\t\t   \"ji_state = $2,\"\n\t\t\t\t\t   \"ji_substate = $3,\"\n\t\t\t\t\t   \"ji_svrflags = $4,\"\n\t\t\t\t\t   \"ji_stime = $5,\"\n\t\t\t\t\t   \"ji_queue  = $6,\"\n\t\t\t\t\t   \"ji_destin = $7,\"\n\t\t\t\t\t   \"ji_un_type = $8,\"\n\t\t\t\t\t   \"ji_exitstat = $9,\"\n\t\t\t\t\t   \"ji_quetime = $10,\"\n\t\t\t\t\t   \"ji_rteretry = $11,\"\n\t\t\t\t\t   \"ji_fromsock = $12,\"\n\t\t\t\t\t   \"ji_fromaddr = $13,\"\n\t\t\t\t\t   \"ji_jid = $14,\"\n\t\t\t\t\t   \"ji_credtype = $15,\"\n\t\t\t\t\t   \"ji_qrank = $16,\"\n\t\t\t\t\t   \"ji_savetm = localtimestamp,\"\n\t\t\t\t\t   \"attributes = attributes || hstore($17::text[]) \"\n\t\t\t\t\t   \"where ji_jobid = $1\");\n\tif (db_prepare_stmt(conn, STMT_UPDATE_JOB, conn_sql, 17) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"update pbs.job set \"\n\t\t\t\t\t   \"ji_savetm = localtimestamp,\"\n\t\t\t\t\t   \"attributes = attributes || hstore($2::text[]) \"\n\t\t\t\t\t   \"where ji_jobid = $1\");\n\tif (db_prepare_stmt(conn, STMT_UPDATE_JOB_ATTRSONLY, conn_sql, 2) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"update pbs.job set \"\n\t\t\t\t\t   \"ji_savetm = localtimestamp,\"\n\t\t\t\t\t   \"attributes = attributes - $2::text[] \"\n\t\t\t\t\t   \"where ji_jobid = $1\");\n\tif (db_prepare_stmt(conn, STMT_REMOVE_JOBATTRS, conn_sql, 2) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"update pbs.job set \"\n\t\t\t\t\t   \"ji_state = $2,\"\n\t\t\t\t\t   \"ji_substate = $3,\"\n\t\t\t\t\t   \"ji_svrflags = $4,\"\n\t\t\t\t\t   \"ji_stime = $5,\"\n\t\t\t\t\t   \"ji_queue  = $6,\"\n\t\t\t\t\t   \"ji_destin = $7,\"\n\t\t\t\t\t   \"ji_un_type = $8,\"\n\t\t\t\t\t   \"ji_exitstat = $9,\"\n\t\t\t\t\t   \"ji_quetime = $10,\"\n\t\t\t\t\t   \"ji_rteretry = $11,\"\n\t\t\t\t\t   \"ji_fromsock = 
$12,\"\n\t\t\t\t\t   \"ji_fromaddr = $13,\"\n\t\t\t\t\t   \"ji_jid = $14,\"\n\t\t\t\t\t   \"ji_credtype = $15,\"\n\t\t\t\t\t   \"ji_qrank = $16,\"\n\t\t\t\t\t   \"ji_savetm = localtimestamp \"\n\t\t\t\t\t   \"where ji_jobid = $1\");\n\tif (db_prepare_stmt(conn, STMT_UPDATE_JOB_QUICK, conn_sql, 16) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"select \"\n\t\t\t\t\t   \"ji_jobid,\"\n\t\t\t\t\t   \"ji_state,\"\n\t\t\t\t\t   \"ji_substate,\"\n\t\t\t\t\t   \"ji_svrflags,\"\n\t\t\t\t\t   \"ji_stime,\"\n\t\t\t\t\t   \"ji_queue,\"\n\t\t\t\t\t   \"ji_destin,\"\n\t\t\t\t\t   \"ji_un_type,\"\n\t\t\t\t\t   \"ji_exitstat,\"\n\t\t\t\t\t   \"ji_quetime,\"\n\t\t\t\t\t   \"ji_rteretry,\"\n\t\t\t\t\t   \"ji_fromsock,\"\n\t\t\t\t\t   \"ji_fromaddr,\"\n\t\t\t\t\t   \"ji_jid,\"\n\t\t\t\t\t   \"ji_credtype,\"\n\t\t\t\t\t   \"ji_qrank,\"\n\t\t\t\t\t   \"hstore_to_array(attributes) as attributes \"\n\t\t\t\t\t   \"from pbs.job where ji_jobid = $1\");\n\tif (db_prepare_stmt(conn, STMT_SELECT_JOB, conn_sql, 1) != 0)\n\t\treturn -1;\n\n\t/*\n\t * Use the sql encode function to encode the $2 parameter. Encode using\n\t * 'escape' mode. Encode considers $2 as a bytea and returns a escaped\n\t * string using 'escape' syntax. Refer to the following postgres link\n\t * for details:\n\t * http://www.postgresql.org/docs/8.3/static/functions-string.html\n\t */\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"insert into \"\n\t\t\t\t\t   \"pbs.job_scr (ji_jobid, script) \"\n\t\t\t\t\t   \"values \"\n\t\t\t\t\t   \"($1, encode($2, 'escape'))\");\n\tif (db_prepare_stmt(conn, STMT_INSERT_JOBSCR, conn_sql, 2) != 0)\n\t\treturn -1;\n\n\t/*\n\t * Use the sql decode function to decode the script parameter. Decode\n\t * using 'escape' mode. Decode considers script as encoded TEXT and\n\t * decodes it using 'escape' syntax, returning a bytea. 
The :: is used\n\t * to \"typecast\" the output to a bytea.\n\t * Refer to the following postgres link for details:\n\t * http://www.postgresql.org/docs/8.3/static/functions-string.html\n\t */\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"select decode(script, 'escape')::bytea as script \"\n\t\t\t\t\t   \"from pbs.job_scr \"\n\t\t\t\t\t   \"where ji_jobid = $1\");\n\tif (db_prepare_stmt(conn, STMT_SELECT_JOBSCR, conn_sql, 1) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"select \"\n\t\t\t\t\t   \"ji_jobid,\"\n\t\t\t\t\t   \"ji_state,\"\n\t\t\t\t\t   \"ji_substate,\"\n\t\t\t\t\t   \"ji_svrflags,\"\n\t\t\t\t\t   \"ji_stime,\"\n\t\t\t\t\t   \"ji_queue,\"\n\t\t\t\t\t   \"ji_destin,\"\n\t\t\t\t\t   \"ji_un_type,\"\n\t\t\t\t\t   \"ji_exitstat,\"\n\t\t\t\t\t   \"ji_quetime,\"\n\t\t\t\t\t   \"ji_rteretry,\"\n\t\t\t\t\t   \"ji_fromsock,\"\n\t\t\t\t\t   \"ji_fromaddr,\"\n\t\t\t\t\t   \"ji_jid,\"\n\t\t\t\t\t   \"ji_credtype,\"\n\t\t\t\t\t   \"ji_qrank,\"\n\t\t\t\t\t   \"hstore_to_array(attributes) as attributes \"\n\t\t\t\t\t   \"from pbs.job order by ji_qrank\");\n\tif (db_prepare_stmt(conn, STMT_FINDJOBS_ORDBY_QRANK, conn_sql, 0) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"select \"\n\t\t\t\t\t   \"ji_jobid,\"\n\t\t\t\t\t   \"ji_state,\"\n\t\t\t\t\t   \"ji_substate,\"\n\t\t\t\t\t   \"ji_svrflags,\"\n\t\t\t\t\t   \"ji_stime,\"\n\t\t\t\t\t   \"ji_queue,\"\n\t\t\t\t\t   \"ji_destin,\"\n\t\t\t\t\t   \"ji_un_type,\"\n\t\t\t\t\t   \"ji_exitstat,\"\n\t\t\t\t\t   \"ji_quetime,\"\n\t\t\t\t\t   \"ji_rteretry,\"\n\t\t\t\t\t   \"ji_fromsock,\"\n\t\t\t\t\t   \"ji_fromaddr,\"\n\t\t\t\t\t   \"ji_jid,\"\n\t\t\t\t\t   \"ji_credtype,\"\n\t\t\t\t\t   \"ji_qrank,\"\n\t\t\t\t\t   \"hstore_to_array(attributes) as attributes \"\n\t\t\t\t\t   \"from pbs.job where ji_queue = $1\"\n\t\t\t\t\t   \" order by ji_qrank\");\n\tif (db_prepare_stmt(conn, STMT_FINDJOBS_BYQUE_ORDBY_QRANK,\n\t\t\t    conn_sql, 1) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, 
\"delete from pbs.job where ji_jobid = $1\");\n\tif (db_prepare_stmt(conn, STMT_DELETE_JOB, conn_sql, 1) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"delete from pbs.job_scr where ji_jobid = $1\");\n\tif (db_prepare_stmt(conn, STMT_DELETE_JOBSCR, conn_sql, 1) != 0)\n\t\treturn -1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tLoad job data from the row into the job object\n *\n * @param[in]\tres - Resultset from an earlier query\n * @param[out]  pj  - Job object to load data into\n * @param[in]\trow - The current row to load within the resultset\n *\n * @return error code\n * @retval 0 Success\n * @retval -1 Error\n *\n */\nstatic int\nload_job(const PGresult *res, pbs_db_job_info_t *pj, int row)\n{\n\tchar *raw_array;\n\tstatic int ji_jobid_fnum;\n\tstatic int ji_state_fnum;\n\tstatic int ji_substate_fnum;\n\tstatic int ji_svrflags_fnum;\n\tstatic int ji_stime_fnum;\n\tstatic int ji_queue_fnum;\n\tstatic int ji_destin_fnum;\n\tstatic int ji_un_type_fnum;\n\tstatic int ji_exitstat_fnum;\n\tstatic int ji_quetime_fnum;\n\tstatic int ji_rteretry_fnum;\n\tstatic int ji_fromsock_fnum;\n\tstatic int ji_fromaddr_fnum;\n\tstatic int ji_jid_fnum;\n\tstatic int ji_credtype_fnum;\n\tstatic int ji_qrank_fnum;\n\tstatic int attributes_fnum;\n\tstatic int fnums_inited = 0;\n\n\tif (fnums_inited == 0) {\n\t\t/* cache the column numbers of various job table fields */\n\t\tji_jobid_fnum = PQfnumber(res, \"ji_jobid\");\n\t\tji_state_fnum = PQfnumber(res, \"ji_state\");\n\t\tji_substate_fnum = PQfnumber(res, \"ji_substate\");\n\t\tji_svrflags_fnum = PQfnumber(res, \"ji_svrflags\");\n\t\tji_stime_fnum = PQfnumber(res, \"ji_stime\");\n\t\tji_queue_fnum = PQfnumber(res, \"ji_queue\");\n\t\tji_destin_fnum = PQfnumber(res, \"ji_destin\");\n\t\tji_un_type_fnum = PQfnumber(res, \"ji_un_type\");\n\t\tji_exitstat_fnum = PQfnumber(res, \"ji_exitstat\");\n\t\tji_quetime_fnum = PQfnumber(res, \"ji_quetime\");\n\t\tji_rteretry_fnum = PQfnumber(res, 
\"ji_rteretry\");\n\t\tji_fromsock_fnum = PQfnumber(res, \"ji_fromsock\");\n\t\tji_fromaddr_fnum = PQfnumber(res, \"ji_fromaddr\");\n\t\tji_jid_fnum = PQfnumber(res, \"ji_jid\");\n\t\tji_qrank_fnum = PQfnumber(res, \"ji_qrank\");\n\t\tji_credtype_fnum = PQfnumber(res, \"ji_credtype\");\n\t\tattributes_fnum = PQfnumber(res, \"attributes\");\n\t\tfnums_inited = 1;\n\t}\n\n\tGET_PARAM_STR(res, row, pj->ji_jobid, ji_jobid_fnum);\n\tGET_PARAM_INTEGER(res, row, pj->ji_state, ji_state_fnum);\n\tGET_PARAM_INTEGER(res, row, pj->ji_substate, ji_substate_fnum);\n\tGET_PARAM_INTEGER(res, row, pj->ji_svrflags, ji_svrflags_fnum);\n\tGET_PARAM_BIGINT(res, row, pj->ji_stime, ji_stime_fnum);\n\tGET_PARAM_STR(res, row, pj->ji_queue, ji_queue_fnum);\n\tGET_PARAM_STR(res, row, pj->ji_destin, ji_destin_fnum);\n\tGET_PARAM_INTEGER(res, row, pj->ji_un_type, ji_un_type_fnum);\n\tGET_PARAM_INTEGER(res, row, pj->ji_exitstat, ji_exitstat_fnum);\n\tGET_PARAM_BIGINT(res, row, pj->ji_quetime, ji_quetime_fnum);\n\tGET_PARAM_BIGINT(res, row, pj->ji_rteretry, ji_rteretry_fnum);\n\tGET_PARAM_INTEGER(res, row, pj->ji_fromsock, ji_fromsock_fnum);\n\tGET_PARAM_BIGINT(res, row, pj->ji_fromaddr, ji_fromaddr_fnum);\n\tGET_PARAM_STR(res, row, pj->ji_jid, ji_jid_fnum);\n\tGET_PARAM_INTEGER(res, row, pj->ji_credtype, ji_credtype_fnum);\n\tGET_PARAM_BIGINT(res, row, pj->ji_qrank, ji_qrank_fnum);\n\tGET_PARAM_BIN(res, row, raw_array, attributes_fnum);\n\n\t/* convert attributes from postgres raw array format */\n\treturn (dbarray_to_attrlist(raw_array, &pj->db_attr_list));\n}\n\n/**\n *@brief\n *\tSave (insert/update) a new/existing job\n *\n * @param[in]\tconn - The connnection handle\n * @param[in]\tobj  - The job object to save\n * @param[in]\tsavetype - The kind of save \n *                         (insert, update full, or full qs area only)\n *\n * @return      Error code\n * @retval\t-1 - Execution of prepared statement failed\n * @retval\t 0 - Success and > 0 rows were affected\n *\n 
*/\nint\npbs_db_save_job(void *conn, pbs_db_obj_info_t *obj, int savetype)\n{\n\tchar *stmt = NULL;\n\tpbs_db_job_info_t *pjob = obj->pbs_db_un.pbs_db_job;\n\tint params;\n\tint rc = 0;\n\tchar *raw_array = NULL;\n\n\tSET_PARAM_STR(conn_data, pjob->ji_jobid, 0);\n\n\tif (savetype & OBJ_SAVE_QS) {\n\t\tSET_PARAM_INTEGER(conn_data, pjob->ji_state, 1);\n\t\tSET_PARAM_INTEGER(conn_data, pjob->ji_substate, 2);\n\t\tSET_PARAM_INTEGER(conn_data, pjob->ji_svrflags, 3);\n\t\tSET_PARAM_BIGINT(conn_data, pjob->ji_stime, 4);\n\t\tSET_PARAM_STR(conn_data, pjob->ji_queue, 5);\n\t\tSET_PARAM_STR(conn_data, pjob->ji_destin, 6);\n\t\tSET_PARAM_INTEGER(conn_data, pjob->ji_un_type, 7);\n\t\tSET_PARAM_INTEGER(conn_data, pjob->ji_exitstat, 8);\n\t\tSET_PARAM_BIGINT(conn_data, pjob->ji_quetime, 9);\n\t\tSET_PARAM_BIGINT(conn_data, pjob->ji_rteretry, 10);\n\t\tSET_PARAM_INTEGER(conn_data, pjob->ji_fromsock, 11);\n\t\tSET_PARAM_BIGINT(conn_data, pjob->ji_fromaddr, 12);\n\t\tSET_PARAM_STR(conn_data, pjob->ji_jid, 13);\n\t\tSET_PARAM_INTEGER(conn_data, pjob->ji_credtype, 14);\n\t\tSET_PARAM_BIGINT(conn_data, pjob->ji_qrank, 15);\n\n\t\tstmt = STMT_UPDATE_JOB_QUICK;\n\t\tparams = 16;\n\t}\n\n\tif ((pjob->db_attr_list.attr_count > 0) || (savetype & OBJ_SAVE_NEW)) {\n\t\tint len = 0;\n\t\t/* convert attributes to postgres raw array format */\n\n\t\tif ((len = attrlist_to_dbarray(&raw_array, &pjob->db_attr_list)) <= 0)\n\t\t\treturn -1;\n\n\t\tif (savetype & OBJ_SAVE_QS) {\n\t\t\tSET_PARAM_BIN(conn_data, raw_array, len, 16);\n\t\t\tparams = 17;\n\t\t\tstmt = STMT_UPDATE_JOB;\n\t\t} else {\n\t\t\tSET_PARAM_BIN(conn_data, raw_array, len, 1);\n\t\t\tparams = 2;\n\t\t\tstmt = STMT_UPDATE_JOB_ATTRSONLY;\n\t\t}\n\t}\n\n\tif (savetype & OBJ_SAVE_NEW)\n\t\tstmt = STMT_INSERT_JOB;\n\n\tif (stmt)\n\t\trc = db_cmd(conn, stmt, params);\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\tLoad job data from the database\n *\n * @param[in]\tconn - Connection handle\n * @param[in/out]obj  - Load job information into 
this object where\n *\t\t\tjobid = obj->pbs_db_un.pbs_db_job->ji_jobid\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n * @retval\t 1 -  Success but no rows loaded\n *\n */\nint\npbs_db_load_job(void *conn, pbs_db_obj_info_t *obj)\n{\n\tPGresult *res;\n\tint rc;\n\tpbs_db_job_info_t *pj = obj->pbs_db_un.pbs_db_job;\n\n\tSET_PARAM_STR(conn_data, pj->ji_jobid, 0);\n\n\tif ((rc = db_query(conn, STMT_SELECT_JOB, 1, &res)) != 0)\n\t\treturn rc;\n\n\trc = load_job(res, pj, 0);\n\n\tPQclear(res);\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\tFind jobs\n *\n * @param[in]\tconn - Connection handle\n * @param[out]  st   - The cursor state variable updated by this query\n * @param[in]\tobj  - Information of job to be found\n * @param[in]\topts - Any other options (like flags, timestamp)\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n * @retval\t 1 -  Success but no rows found\n *\n */\nint\npbs_db_find_job(void *conn, void *st, pbs_db_obj_info_t *obj,\n\t\tpbs_db_query_options_t *opts)\n{\n\tPGresult *res;\n\tchar conn_sql[MAX_SQL_LENGTH];\n\tdb_query_state_t *state = (db_query_state_t *) st;\n\tpbs_db_job_info_t *pdjob = obj->pbs_db_un.pbs_db_job;\n\tint rc;\n\tint params;\n\n\tif (!state)\n\t\treturn -1;\n\n\tif (opts != NULL && opts->flags == FIND_JOBS_BY_QUE) {\n\t\tSET_PARAM_STR(conn_data, pdjob->ji_queue, 0);\n\t\tparams = 1;\n\t\tstrcpy(conn_sql, STMT_FINDJOBS_BYQUE_ORDBY_QRANK);\n\t} else {\n\t\tstrcpy(conn_sql, STMT_FINDJOBS_ORDBY_QRANK);\n\t\tparams = 0;\n\t}\n\n\tif ((rc = db_query(conn, conn_sql, params, &res)) != 0)\n\t\treturn rc;\n\n\tstate->row = 0;\n\tstate->res = res;\n\tstate->count = PQntuples(res);\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tGet the next job from the cursor\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tst   - The cursor state\n * @param[out]  obj  - Job information is loaded into this object\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * 
@retval\t 0 - Success\n *\n */\nint\npbs_db_next_job(void *conn, void *st, pbs_db_obj_info_t *obj)\n{\n\tdb_query_state_t *state = (db_query_state_t *) st;\n\n\treturn load_job(state->res, obj->pbs_db_un.pbs_db_job, state->row);\n}\n\n/**\n * @brief\n *\tDelete the job from the database\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tobj  - Job information\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\npbs_db_delete_job(void *conn, pbs_db_obj_info_t *obj)\n{\n\tpbs_db_job_info_t *pj = obj->pbs_db_un.pbs_db_job;\n\tint rc = 0;\n\n\tSET_PARAM_STR(conn_data, pj->ji_jobid, 0);\n\n\tif ((rc = db_cmd(conn, STMT_DELETE_JOB, 1)) == -1)\n\t\tgoto err;\n\n\tif (db_cmd(conn, STMT_DELETE_JOBSCR, 1) == -1)\n\t\tgoto err;\n\n\treturn rc;\nerr:\n\treturn -1;\n}\n\n/**\n * @brief\n *\tInsert job script\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tobj  - Job script object\n * @param[in]\tsavetype - Unused placeholder. Kept to match the prototype of\n * \t\t           the other database save functions, since this is called through a function pointer.\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\npbs_db_save_jobscr(void *conn, pbs_db_obj_info_t *obj, int savetype)\n{\n\tpbs_db_jobscr_info_t *pscr = obj->pbs_db_un.pbs_db_jobscr;\n\n\tSET_PARAM_STR(conn_data, pscr->ji_jobid, 0);\n\n\t/*\n\t * The script data could contain non-UTF8 characters. We therefore\n\t * consider it binary and encode it into TEXT by using the \"encode\"\n\t * sql function. The input data to load, therefore, is binary data,\n\t * and so we use the macro \"SET_PARAM_BIN\" to load the parameter to\n\t * the prepared statement.\n\t */\n\tSET_PARAM_BIN(conn_data, pscr->script, (pscr->script) ? 
strlen(pscr->script) : 0, 1);\n\n\treturn (db_cmd(conn, STMT_INSERT_JOBSCR, 2));\n}\n\n/**\n * @brief\n *\tLoad job script\n *\n * @param[in]\t  conn - Connection handle\n * @param[in/out] obj  - Job script is loaded into this object\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\npbs_db_load_jobscr(void *conn, pbs_db_obj_info_t *obj)\n{\n\tPGresult *res;\n\tpbs_db_jobscr_info_t *pscr = obj->pbs_db_un.pbs_db_jobscr;\n\tchar *script = NULL;\n\tstatic int script_fnum = -1;\n\n\tSET_PARAM_STR(conn_data, pscr->ji_jobid, 0);\n\n\t/*\n\t * The data (script) we stored was an \"encoded\" binary. We \"decode\" it\n\t * back while reading, giving us \"binary\" data. Since we want the\n\t * result data to be returned in binary, we set conn_result_format\n\t * to 1 to indicate a binary result. This setting is a one-time,\n\t * auto-reset switch which resets to 0 (TEXT) mode after each execution\n\t * of db_query.\n\t */\n\tif (db_query(conn, STMT_SELECT_JOBSCR, 1, &res) != 0)\n\t\treturn -1;\n\n\tif (script_fnum == -1)\n\t\tscript_fnum = PQfnumber(res, \"script\");\n\n\tGET_PARAM_BIN(res, 0, script, script_fnum);\n\tpscr->script = strdup(script);\n\n\t/* Clean up memory associated with the resultset */\n\tPQclear(res);\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tDeletes attributes of a job\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tobj_id  - Job id\n * @param[in]\tattr_list - List of attributes\n *\n * @return      Error code\n * @retval\t 0 - Success\n * @retval\t-1 - On Failure\n *\n */\nint\npbs_db_del_attr_job(void *conn, void *obj_id, pbs_db_attr_list_t *attr_list)\n{\n\tchar *raw_array = NULL;\n\tint len = 0;\n\tint rc = 0;\n\n\tif ((len = attrlist_to_dbarray_ex(&raw_array, attr_list, 1)) <= 0)\n\t\treturn -1;\n\n\tSET_PARAM_STR(conn_data, obj_id, 0);\n\tSET_PARAM_BIN(conn_data, raw_array, len, 1);\n\n\trc = db_cmd(conn, STMT_REMOVE_JOBATTRS, 2);\n\n\treturn rc;\n}\n"
  },
  {
    "path": "src/lib/Libdb/pgsql/db_node.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n *\n * @brief\n *      Implementation of the node data access functions for postgres\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include \"pbs_db.h\"\n#include \"db_postgres.h\"\n\n/**\n * @brief\n *\tPrepare all the node related sqls. Typically called after connect\n *\tand before any other sql execution\n *\n * @param[in]\tconn - Database connection handle\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\ndb_prepare_node_sqls(void *conn)\n{\n\tchar conn_sql[MAX_SQL_LENGTH];\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"insert into pbs.node(\"\n\t\t\t\t\t   \"nd_name, \"\n\t\t\t\t\t   \"nd_index, \"\n\t\t\t\t\t   \"mom_modtime, \"\n\t\t\t\t\t   \"nd_hostname, \"\n\t\t\t\t\t   \"nd_state, \"\n\t\t\t\t\t   \"nd_ntype, \"\n\t\t\t\t\t   \"nd_pque, \"\n\t\t\t\t\t   \"nd_savetm, \"\n\t\t\t\t\t   \"nd_creattm, \"\n\t\t\t\t\t   \"attributes \"\n\t\t\t\t\t   \") \"\n\t\t\t\t\t   \"values \"\n\t\t\t\t\t   \"($1, $2, $3, $4, $5, $6, $7, localtimestamp, localtimestamp, hstore($8::text[]))\");\n\tif (db_prepare_stmt(conn, STMT_INSERT_NODE, conn_sql, 8) != 0)\n\t\treturn -1;\n\n\t/* in the case of nodes, do not use || with existing attributes, since we re-write all attributes */\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"update pbs.node set \"\n\t\t\t\t\t   \"nd_index = $2, 
\"\n\t\t\t\t\t   \"mom_modtime = $3, \"\n\t\t\t\t\t   \"nd_hostname = $4, \"\n\t\t\t\t\t   \"nd_state = $5, \"\n\t\t\t\t\t   \"nd_ntype = $6, \"\n\t\t\t\t\t   \"nd_pque = $7, \"\n\t\t\t\t\t   \"nd_savetm = localtimestamp, \"\n\t\t\t\t\t   \"attributes = attributes || hstore($8::text[]) \"\n\t\t\t\t\t   \"where nd_name = $1\");\n\tif (db_prepare_stmt(conn, STMT_UPDATE_NODE, conn_sql, 8) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"update pbs.node set \"\n\t\t\t\t\t   \"nd_index = $2, \"\n\t\t\t\t\t   \"mom_modtime = $3, \"\n\t\t\t\t\t   \"nd_hostname = $4, \"\n\t\t\t\t\t   \"nd_state = $5, \"\n\t\t\t\t\t   \"nd_ntype = $6, \"\n\t\t\t\t\t   \"nd_pque = $7, \"\n\t\t\t\t\t   \"nd_savetm = localtimestamp \"\n\t\t\t\t\t   \"where nd_name = $1\");\n\tif (db_prepare_stmt(conn, STMT_UPDATE_NODE_QUICK, conn_sql, 7) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"update pbs.node set \"\n\t\t\t\t\t   \"nd_savetm = localtimestamp,\"\n\t\t\t\t\t   \"attributes = attributes || hstore($2::text[]) \"\n\t\t\t\t\t   \"where nd_name = $1\");\n\tif (db_prepare_stmt(conn, STMT_UPDATE_NODE_ATTRSONLY, conn_sql, 2) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"update pbs.node set \"\n\t\t\t\t\t   \"nd_savetm = localtimestamp,\"\n\t\t\t\t\t   \"attributes = attributes - $2::text[] \"\n\t\t\t\t\t   \"where nd_name = $1\");\n\tif (db_prepare_stmt(conn, STMT_REMOVE_NODEATTRS, conn_sql, 2) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"select \"\n\t\t\t\t\t   \"nd_name, \"\n\t\t\t\t\t   \"nd_index, \"\n\t\t\t\t\t   \"mom_modtime, \"\n\t\t\t\t\t   \"nd_hostname, \"\n\t\t\t\t\t   \"nd_state, \"\n\t\t\t\t\t   \"nd_ntype, \"\n\t\t\t\t\t   \"nd_pque, \"\n\t\t\t\t\t   \"hstore_to_array(attributes) as attributes \"\n\t\t\t\t\t   \"from pbs.node \"\n\t\t\t\t\t   \"where nd_name = $1\");\n\tif (db_prepare_stmt(conn, STMT_SELECT_NODE, conn_sql, 1) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"select \"\n\t\t\t\t\t   
\"nd_name, \"\n\t\t\t\t\t   \"nd_index, \"\n\t\t\t\t\t   \"mom_modtime, \"\n\t\t\t\t\t   \"nd_hostname, \"\n\t\t\t\t\t   \"nd_state, \"\n\t\t\t\t\t   \"nd_ntype, \"\n\t\t\t\t\t   \"nd_pque, \"\n\t\t\t\t\t   \"hstore_to_array(attributes) as attributes \"\n\t\t\t\t\t   \"from pbs.node order by nd_creattm\");\n\tif (db_prepare_stmt(conn, STMT_FIND_NODES_ORDBY_CREATTM, conn_sql, 0) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"select \"\n#ifdef NAS /* localmod 079 */\n\t\t\t\t\t   \"n.nd_name, \"\n\t\t\t\t\t   \"n.mom_modtime, \"\n\t\t\t\t\t   \"n.nd_hostname, \"\n\t\t\t\t\t   \"n.nd_state, \"\n\t\t\t\t\t   \"n.nd_ntype, \"\n\t\t\t\t\t   \"n.nd_pque \"\n\t\t\t\t\t   \"from pbs.node n left outer join pbs.nas_node i on \"\n\t\t\t\t\t   \"n.nd_name=i.nd_name order by i.nd_nasindex\");\n#else\n\t\t\t\t\t   \"nd_name, \"\n\t\t\t\t\t   \"mom_modtime, \"\n\t\t\t\t\t   \"nd_hostname, \"\n\t\t\t\t\t   \"nd_state, \"\n\t\t\t\t\t   \"nd_ntype, \"\n\t\t\t\t\t   \"nd_pque, \"\n\t\t\t\t\t   \"hstore_to_array(attributes) as attributes \"\n\t\t\t\t\t   \"from pbs.node order by nd_index, nd_creattm\");\n#endif /* localmod 079 */\n\tif (db_prepare_stmt(conn, STMT_FIND_NODES_ORDBY_INDEX, conn_sql, 0) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"delete from pbs.node where nd_name = $1\");\n\tif (db_prepare_stmt(conn, STMT_DELETE_NODE, conn_sql, 1) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"select \"\n\t\t\t\t\t   \"mit_time, \"\n\t\t\t\t\t   \"mit_gen \"\n\t\t\t\t\t   \"from pbs.mominfo_time \");\n\tif (db_prepare_stmt(conn, STMT_SELECT_MOMINFO_TIME, conn_sql, 0) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"insert into pbs.mominfo_time(\"\n\t\t\t\t\t   \"mit_time, \"\n\t\t\t\t\t   \"mit_gen) \"\n\t\t\t\t\t   \"values \"\n\t\t\t\t\t   \"($1, $2)\");\n\tif (db_prepare_stmt(conn, STMT_INSERT_MOMINFO_TIME, conn_sql, 2) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"update pbs.mominfo_time set 
\"\n\t\t\t\t\t   \"mit_time = $1, \"\n\t\t\t\t\t   \"mit_gen = $2 \");\n\tif (db_prepare_stmt(conn, STMT_UPDATE_MOMINFO_TIME, conn_sql, 2) != 0)\n\t\treturn -1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tLoad node data from the row into the node object\n *\n * @param[in]\tres - Resultset from an earlier query\n * @param[in]\tpnd  - Node object to load data into\n * @param[in]\trow - The current row to load within the resultset\n *\n * @return      Error code\n * @retval\t-1 - On Error\n * @retval\t 0 - On Success\n * @retval\t>1 - Number of attributes\n *\n */\nstatic int\nload_node(PGresult *res, pbs_db_node_info_t *pnd, int row)\n{\n\tchar *raw_array;\n\tstatic int nd_name_fnum;\n\tstatic int mom_modtime_fnum;\n\tstatic int nd_hostname_fnum;\n\tstatic int nd_state_fnum;\n\tstatic int nd_ntype_fnum;\n\tstatic int nd_pque_fnum;\n\tstatic int attributes_fnum;\n\tstatic int fnums_inited = 0;\n\n\tif (fnums_inited == 0) {\n\t\tnd_name_fnum = PQfnumber(res, \"nd_name\");\n\t\tmom_modtime_fnum = PQfnumber(res, \"mom_modtime\");\n\t\tnd_hostname_fnum = PQfnumber(res, \"nd_hostname\");\n\t\tnd_state_fnum = PQfnumber(res, \"nd_state\");\n\t\tnd_ntype_fnum = PQfnumber(res, \"nd_ntype\");\n\t\tnd_pque_fnum = PQfnumber(res, \"nd_pque\");\n\t\tattributes_fnum = PQfnumber(res, \"attributes\");\n\t\tfnums_inited = 1;\n\t}\n\n\tGET_PARAM_STR(res, row, pnd->nd_name, nd_name_fnum);\n\tGET_PARAM_BIGINT(res, row, pnd->mom_modtime, mom_modtime_fnum);\n\tGET_PARAM_STR(res, row, pnd->nd_hostname, nd_hostname_fnum);\n\tGET_PARAM_INTEGER(res, row, pnd->nd_state, nd_state_fnum);\n\tGET_PARAM_INTEGER(res, row, pnd->nd_ntype, nd_ntype_fnum);\n\tGET_PARAM_STR(res, row, pnd->nd_pque, nd_pque_fnum);\n\tGET_PARAM_BIN(res, row, raw_array, attributes_fnum);\n\n\t/* convert attributes from postgres raw array format */\n\treturn (dbarray_to_attrlist(raw_array, &pnd->db_attr_list));\n}\n\n/**\n * @brief\n *\tInsert node data into the database\n *\n * @param[in]\tconn - Connection handle\n * 
@param[in]\tobj  - Information of node to be inserted\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\npbs_db_save_node(void *conn, pbs_db_obj_info_t *obj, int savetype)\n{\n\tpbs_db_node_info_t *pnd = obj->pbs_db_un.pbs_db_node;\n\tchar *stmt = NULL;\n\tint params;\n\tint rc = 0;\n\tchar *raw_array = NULL;\n\n\tSET_PARAM_STR(conn_data, pnd->nd_name, 0);\n\n\tif (savetype & OBJ_SAVE_QS) {\n\t\tSET_PARAM_INTEGER(conn_data, pnd->nd_index, 1);\n\t\tSET_PARAM_BIGINT(conn_data, pnd->mom_modtime, 2);\n\t\tSET_PARAM_STR(conn_data, pnd->nd_hostname, 3);\n\t\tSET_PARAM_INTEGER(conn_data, pnd->nd_state, 4);\n\t\tSET_PARAM_INTEGER(conn_data, pnd->nd_ntype, 5);\n\t\tSET_PARAM_STR(conn_data, pnd->nd_pque, 6);\n\t\tparams = 7;\n\t\tstmt = STMT_UPDATE_NODE_QUICK;\n\t}\n\n\tif ((pnd->db_attr_list.attr_count > 0) || (savetype & OBJ_SAVE_NEW)) {\n\t\tint len = 0;\n\t\t/* convert attributes to postgres raw array format */\n\t\tif ((len = attrlist_to_dbarray(&raw_array, &pnd->db_attr_list)) <= 0)\n\t\t\treturn -1;\n\n\t\tif (savetype & OBJ_SAVE_QS) {\n\t\t\tSET_PARAM_BIN(conn_data, raw_array, len, 7);\n\t\t\tparams = 8;\n\t\t\tstmt = STMT_UPDATE_NODE;\n\t\t} else {\n\t\t\tSET_PARAM_BIN(conn_data, raw_array, len, 1);\n\t\t\tparams = 2;\n\t\t\tstmt = STMT_UPDATE_NODE_ATTRSONLY;\n\t\t}\n\t}\n\n\tif (savetype & OBJ_SAVE_NEW)\n\t\tstmt = STMT_INSERT_NODE;\n\n\tif (stmt)\n\t\trc = db_cmd(conn, stmt, params);\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\tLoad node data from the database\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tobj  - Load node information into this object\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n * @retval\t 1 -  Success but no rows loaded\n *\n */\nint\npbs_db_load_node(void *conn, pbs_db_obj_info_t *obj)\n{\n\tPGresult *res;\n\tint rc;\n\tpbs_db_node_info_t *pnd = obj->pbs_db_un.pbs_db_node;\n\n\tSET_PARAM_STR(conn_data, pnd->nd_name, 0);\n\n\tif ((rc = 
db_query(conn, STMT_SELECT_NODE, 1, &res)) != 0)\n\t\treturn rc;\n\n\trc = load_node(res, pnd, 0);\n\n\tPQclear(res);\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\tFind nodes\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tst   - The cursor state variable updated by this query\n * @param[in]\tobj  - Information of node to be found\n * @param[in]\topts - Any other options (like flags, timestamp)\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n * @retval\t 1 - Success, but no rows found\n *\n */\nint\npbs_db_find_node(void *conn, void *st, pbs_db_obj_info_t *obj,\n\t\t pbs_db_query_options_t *opts)\n{\n\tPGresult *res;\n\tint rc;\n\tdb_query_state_t *state = (db_query_state_t *) st;\n\n\tif (!state)\n\t\treturn -1;\n\n\tif ((rc = db_query(conn, STMT_FIND_NODES_ORDBY_INDEX, 0, &res)) != 0)\n\t\treturn rc;\n\n\tstate->row = 0;\n\tstate->res = res;\n\tstate->count = PQntuples(res);\n\treturn 0;\n}\n\n/**\n * @brief\n *\tGet the next node from the cursor\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tst   - The cursor state\n * @param[in]\tobj  - Node information is loaded into this object\n *\n * @return      Error code\n *\t\t(Even though this returns only 0 now, keeping it as int\n *\t\t\tto support future change to return a failure)\n * @retval\t 0 - Success\n *\n */\nint\npbs_db_next_node(void *conn, void *st, pbs_db_obj_info_t *obj)\n{\n\tPGresult *res = ((db_query_state_t *) st)->res;\n\tdb_query_state_t *state = (db_query_state_t *) st;\n\n\treturn (load_node(res, obj->pbs_db_un.pbs_db_node, state->row));\n}\n\n/**\n * @brief\n *\tDelete the node from the database\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tobj  - Node information\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\npbs_db_delete_node(void *conn, pbs_db_obj_info_t *obj)\n{\n\tpbs_db_node_info_t *pnd = obj->pbs_db_un.pbs_db_node;\n\tSET_PARAM_STR(conn_data, pnd->nd_name, 
0);\n\treturn (db_cmd(conn, STMT_DELETE_NODE, 1));\n}\n\n/**\n * @brief\n *\tDeletes attributes of a node\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tobj_id  - Node id\n * @param[in]\tattr_list - List of attributes\n *\n * @return      Error code\n * @retval\t 0 - Success\n * @retval\t-1 - On Failure\n *\n */\nint\npbs_db_del_attr_node(void *conn, void *obj_id, pbs_db_attr_list_t *attr_list)\n{\n\tchar *raw_array = NULL;\n\tint len = 0;\n\tint rc = 0;\n\n\tif ((len = attrlist_to_dbarray_ex(&raw_array, attr_list, 1)) <= 0)\n\t\treturn -1;\n\n\tSET_PARAM_STR(conn_data, obj_id, 0);\n\tSET_PARAM_BIN(conn_data, raw_array, len, 1);\n\n\trc = db_cmd(conn, STMT_REMOVE_NODEATTRS, 2);\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\tInsert mominfo_time into the database\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tobj  - Information of node to be inserted\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\npbs_db_save_mominfo_tm(void *conn, pbs_db_obj_info_t *obj, int savetype)\n{\n\tchar *stmt;\n\tpbs_db_mominfo_time_t *pmi = obj->pbs_db_un.pbs_db_mominfo_tm;\n\n\tSET_PARAM_BIGINT(conn_data, pmi->mit_time, 0);\n\tSET_PARAM_INTEGER(conn_data, pmi->mit_gen, 1);\n\n\tif (savetype & OBJ_SAVE_NEW)\n\t\tstmt = STMT_INSERT_MOMINFO_TIME;\n\telse\n\t\tstmt = STMT_UPDATE_MOMINFO_TIME;\n\n\treturn (db_cmd(conn, stmt, 2));\n}\n\n/**\n * @brief\n *\tLoad node mominfo_time from the database\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tobj  - Load node information into this object\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n * @retval\t 1 -  Success but no rows loaded\n *\n */\nint\npbs_db_load_mominfo_tm(void *conn, pbs_db_obj_info_t *obj)\n{\n\tPGresult *res;\n\tint rc;\n\tpbs_db_mominfo_time_t *pmi = obj->pbs_db_un.pbs_db_mominfo_tm;\n\tstatic int mit_time_fnum = -1;\n\tstatic int mit_gen_fnum = -1;\n\n\tif ((rc = db_query(conn, STMT_SELECT_MOMINFO_TIME, 0, 
&res)) != 0)\n\t\treturn rc;\n\n\tif (mit_time_fnum == -1 || mit_gen_fnum == -1) {\n\t\tmit_time_fnum = PQfnumber(res, \"mit_time\");\n\t\tmit_gen_fnum = PQfnumber(res, \"mit_gen\");\n\t}\n\n\tGET_PARAM_BIGINT(res, 0, pmi->mit_time, mit_time_fnum);\n\tGET_PARAM_INTEGER(res, 0, pmi->mit_gen, mit_gen_fnum);\n\n\tPQclear(res);\n\treturn 0;\n}\n"
  },
  {
    "path": "src/lib/Libdb/pgsql/db_postgres.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n *\n * @brief\n *  Postgres specific implementation\n *\n * This header file contains Postgres specific data structures and functions\n * to access the PBS postgres database. 
These structures are used only by the\n * postgres specific data store implementation, and should not be used directly\n * by the rest of the PBS code.\n *\n * The functions/interfaces in this header are PBS Private.\n */\n\n#ifndef _DB_POSTGRES_H\n#define _DB_POSTGRES_H\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include <libpq-fe.h>\n#include <netinet/in.h>\n#include <sys/types.h>\n#include <inttypes.h>\n#include \"net_connect.h\"\n#include \"list_link.h\"\n#include \"portability.h\"\n#include \"attribute.h\"\n\n/*\n * Conversion macros for long long type\n */\n#if !defined(ntohll)\n#define ntohll(x) db_ntohll(x)\n#endif\n#if !defined(htonll)\n#define htonll(x) ntohll(x)\n#endif\n\n#define PBS_MAXATTRNAME 64\n#define PBS_MAXATTRRESC 64\n#define MAX_SQL_LENGTH 8192\n\n/* job sql statement names */\n#define STMT_SELECT_JOB \"select_job\"\n#define STMT_INSERT_JOB \"insert_job\"\n#define STMT_UPDATE_JOB \"update_job\"\n#define STMT_UPDATE_JOB_ATTRSONLY \"update_job_attrsonly\"\n#define STMT_UPDATE_JOB_QUICK \"update_job_quick\"\n#define STMT_FINDJOBS_ORDBY_QRANK \"findjobs_ordby_qrank\"\n#define STMT_FINDJOBS_BYQUE_ORDBY_QRANK \"findjobs_byque_ordby_qrank\"\n#define STMT_DELETE_JOB \"delete_job\"\n#define STMT_REMOVE_JOBATTRS \"remove_jobattrs\"\n\n/* JOBSCR stands for job script */\n#define STMT_INSERT_JOBSCR \"insert_jobscr\"\n#define STMT_SELECT_JOBSCR \"select_jobscr\"\n#define STMT_DELETE_JOBSCR \"delete_jobscr\"\n\n/* reservation statement names */\n#define STMT_INSERT_RESV \"insert_resv\"\n#define STMT_UPDATE_RESV \"update_resv\"\n#define STMT_UPDATE_RESV_QUICK \"update_resv_quick\"\n#define STMT_UPDATE_RESV_ATTRSONLY \"update_resv_attrsonly\"\n#define STMT_SELECT_RESV \"select_resv\"\n#define STMT_DELETE_RESV \"delete_resv\"\n#define STMT_REMOVE_RESVATTRS \"remove_resvattrs\"\n\n/* creattm is the table field that holds the creation time */\n#define STMT_FINDRESVS_ORDBY_CREATTM \"findresvs_ordby_creattm\"\n\n/* server & seq statement names */\n#define 
STMT_INSERT_SVR \"insert_svr\"\n#define STMT_UPDATE_SVR \"update_svr\"\n#define STMT_SELECT_SVR \"select_svr\"\n#define STMT_SELECT_DBVER \"select_dbver\"\n#define STMT_SELECT_NEXT_SEQID \"select_nextseqid\"\n#define STMT_REMOVE_SVRATTRS \"remove_svrattrs\"\n#define STMT_INSERT_SVRINST \"stmt_insert_svrinst\"\n#define STMT_UPDATE_SVRINST \"stmt_update_svrinst\"\n#define STMT_SELECT_SVRINST \"stmt_select_svrinst\"\n\n/* queue statement names */\n#define STMT_INSERT_QUE \"insert_que\"\n#define STMT_UPDATE_QUE \"update_que\"\n#define STMT_UPDATE_QUE_QUICK \"update_que_quick\"\n#define STMT_UPDATE_QUE_ATTRSONLY \"update_que_attrsonly\"\n#define STMT_SELECT_QUE \"select_que\"\n#define STMT_DELETE_QUE \"delete_que\"\n#define STMT_FIND_QUES_ORDBY_CREATTM \"find_ques_ordby_creattm\"\n#define STMT_REMOVE_QUEATTRS \"remove_queattrs\"\n\n/* node statement names */\n#define STMT_INSERT_NODE \"insert_node\"\n#define STMT_UPDATE_NODE \"update_node\"\n#define STMT_UPDATE_NODE_QUICK \"update_node_quick\"\n#define STMT_UPDATE_NODE_ATTRSONLY \"update_node_attrsonly\"\n#define STMT_SELECT_NODE \"select_node\"\n#define STMT_DELETE_NODE \"delete_node\"\n#define STMT_REMOVE_NODEATTRS \"remove_nodeattrs\"\n#define STMT_UPDATE_NODEATTRS \"update_nodeattrs\"\n#define STMT_FIND_NODES_ORDBY_CREATTM \"find_nodes_ordby_creattm\"\n#define STMT_FIND_NODES_ORDBY_INDEX \"find_nodes_ordby_index\"\n#define STMT_SELECT_MOMINFO_TIME \"select_mominfo_time\"\n#define STMT_INSERT_MOMINFO_TIME \"insert_mominfo_time\"\n#define STMT_UPDATE_MOMINFO_TIME \"update_mominfo_time\"\n\n/* node job statements */\n#define STMT_SELECT_NODEJOB \"select_nodejob\"\n#define STMT_FIND_NODEJOB_USING_NODEID \"select_nodejob_with_nodeid\"\n#define STMT_INSERT_NODEJOB \"insert_nodejob\"\n#define STMT_UPDATE_NODEJOB \"update_nodejob\"\n#define STMT_UPDATE_NODEJOB_QUICK \"update_nodejob_quick\"\n#define STMT_UPDATE_NODEJOB_ATTRSONLY \"update_nodejob_attrsonly\"\n#define STMT_DELETE_NODEJOB \"delete_nodejob\"\n\n/* scheduler 
statement names */\n#define STMT_INSERT_SCHED \"insert_sched\"\n#define STMT_UPDATE_SCHED \"update_sched\"\n#define STMT_SELECT_SCHED \"select_sched\"\n#define STMT_SELECT_SCHED_ALL \"select_sched_all\"\n#define STMT_DELETE_SCHED \"sched_delete\"\n#define STMT_REMOVE_SCHEDATTRS \"remove_schedattrs\"\n\n#define POSTGRES_QUERY_MAX_PARAMS 30\n\n/**\n * @brief\n *  Prepared statements require parameter positions, formats and values to be\n *  supplied to the query. This structure is stored as part of the connection\n *  object and re-used for every prepared statement.\n *\n */\nstruct postgres_conn_data {\n\tconst char *paramValues[POSTGRES_QUERY_MAX_PARAMS];\n\tint paramLengths[POSTGRES_QUERY_MAX_PARAMS];\n\tint paramFormats[POSTGRES_QUERY_MAX_PARAMS];\n\n\t/* following are two temporary arrays used for conversion of binary data */\n\tINTEGER temp_int[POSTGRES_QUERY_MAX_PARAMS];\n\tBIGINT temp_long[POSTGRES_QUERY_MAX_PARAMS];\n};\ntypedef struct postgres_conn_data pg_conn_data_t;\n\n/**\n * @brief\n * Postgres transaction management helper structure.\n */\nstruct pg_conn_trx {\n\tint conn_trx_nest;     /* incr/decr with each begin/end trx */\n\tint conn_trx_rollback; /* rollback flag in case of nested trx */\n\tint conn_trx_async;    /* 1 - async, 0 - sync, one-shot reset */\n};\ntypedef struct pg_conn_trx pg_conn_trx_t;\n\nextern pg_conn_data_t *conn_data;\nextern pg_conn_trx_t *conn_trx;\n\n/**\n * @brief\n *  This structure is used to represent the cursor state for a multirow query\n *  result. The row field keeps track of which row is the current row (or was\n *  last returned to the caller). 
The count field contains the total number of\n *  rows that are available in the resultset.\n *\n */\nstruct db_query_state {\n\tPGresult *res;\n\tint row;\n\tint count;\n\tquery_cb_t query_cb;\n};\ntypedef struct db_query_state db_query_state_t;\n\n/**\n * @brief\n * Each database object type supports most of the following 6 operations:\n *\t- insertion\n *\t- update\n *\t- deletion\n *\t- loading\n *\t- find rows matching a criterion\n *\t- get next row from a cursor (created in a find command)\n *\n * The following structure has function pointers to all the above described\n * operations.\n *\n */\nstruct postgres_db_fn {\n\tint (*pbs_db_save_obj)(void *conn, pbs_db_obj_info_t *obj, int savetype);\n\tint (*pbs_db_delete_obj)(void *conn, pbs_db_obj_info_t *obj);\n\tint (*pbs_db_load_obj)(void *conn, pbs_db_obj_info_t *obj);\n\tint (*pbs_db_find_obj)(void *conn, void *state, pbs_db_obj_info_t *obj, pbs_db_query_options_t *opts);\n\tint (*pbs_db_next_obj)(void *conn, void *state, pbs_db_obj_info_t *obj);\n\tint (*pbs_db_del_attr_obj)(void *conn, void *obj_id, pbs_db_attr_list_t *attr_list);\n};\n\ntypedef struct postgres_db_fn pg_db_fn_t;\n\n/*\n * The following are defined as macros as they are used very frequently.\n * Making them functions would affect performance.\n *\n * SET_PARAM_STR     - Loads a null-terminated string to postgres parameter at index \"i\"\n * SET_PARAM_STRSZ   - Same as SET_PARAM_STR, only size of string is provided\n * SET_PARAM_INTEGER - Loads integer to postgres parameter at index \"i\"\n * SET_PARAM_BIGINT  - Loads BIGINT value to postgres parameter at index \"i\"\n * SET_PARAM_BIN     - Loads a BINARY value to postgres parameter at index \"i\"\n *\n * Basically there are 3 values that need to be supplied for every parameter\n * of any prepared sql statement. They are:\n *\t1) The value - The value to be \"bound/loaded\" to the parameter. 
This\n *\t\t\tis the address of the variable which holds the value.\n *\t\t\tThe variable paramValues[i] is used to hold that address.\n *\t\t\tFor strings, it's the address of the string; for integers\n *\t\t\tetc., we need to convert the integer value to network\n *\t\t\tbyte order (htonl) and store it in temp_int/long[i],\n *\t\t\tand pass the address of temp_int/long[i]\n *\n *\t2) The length - This is the length of the value that is loaded. It is\n *\t\t\tloaded to variable paramLengths[i]. For strings, this\n *\t\t\tis the string length or passed length value (SET_PARAM_STRSZ)\n *\t\t\tFor integers (& bigints), it's sizeof(int) or\n *\t\t\tsizeof(BIGINT). In case of BINARY data, the len is set\n *\t\t\tto the length supplied as a parameter.\n *\n *\t3) The format - This is the format of the datatype that is being passed.\n *\t\t\tFor strings, the value is \"0\"; for binary, the value is \"1\".\n *\t\t\tThis is loaded into paramFormats[i].\n *\n * The Postgres specific connection structure pg_conn_data_t has the following\n * arrays defined, so that they don't have to be created every time needed.\n * - paramValues - array to hold values of each parameter\n * - paramLengths - Lengths of each of these values (corresponding index)\n * - paramFormats - Formats of the datatype passed for each value (corr index)\n * - temp_int\t  - array to use to convert int to network byte order\n * - temp_long\t  - array to use to convert BIGINT to network byte order\n */\n#define SET_PARAM_STR(conn_data, itm, i)                                       \\\n\t((pg_conn_data_t *) conn_data)->paramValues[i] = (itm);                \\\n\tif (itm)                                                               \\\n\t\t((pg_conn_data_t *) conn_data)->paramLengths[i] = strlen(itm); \\\n\telse                                                                   \\\n\t\t((pg_conn_data_t *) conn_data)->paramLengths[i] = 0;           \\\n\t((pg_conn_data_t *) conn_data)->paramFormats[i] = 0;\n\n#define 
SET_PARAM_STRSZ(conn_data, itm, size, i)                  \\\n\t((pg_conn_data_t *) conn_data)->paramValues[i] = (itm);   \\\n\t((pg_conn_data_t *) conn_data)->paramLengths[i] = (size); \\\n\t((pg_conn_data_t *) conn_data)->paramFormats[i] = 0;\n\n#define SET_PARAM_INTEGER(conn_data, itm, i)                                \\\n\t((pg_conn_data_t *) conn_data)->temp_int[i] = (INTEGER) htonl(itm); \\\n\t((pg_conn_data_t *) conn_data)->paramValues[i] =                    \\\n\t\t(char *) &(((pg_conn_data_t *) conn_data)->temp_int[i]);    \\\n\t((pg_conn_data_t *) conn_data)->paramLengths[i] = sizeof(int);      \\\n\t((pg_conn_data_t *) conn_data)->paramFormats[i] = 1;\n\n#define SET_PARAM_BIGINT(conn_data, itm, i)                                  \\\n\t((pg_conn_data_t *) conn_data)->temp_long[i] = (BIGINT) htonll(itm); \\\n\t((pg_conn_data_t *) conn_data)->paramValues[i] =                     \\\n\t\t(char *) &(((pg_conn_data_t *) conn_data)->temp_long[i]);    \\\n\t((pg_conn_data_t *) conn_data)->paramLengths[i] = sizeof(BIGINT);    \\\n\t((pg_conn_data_t *) conn_data)->paramFormats[i] = 1;\n\n#define SET_PARAM_BIN(conn_data, itm, len, i)                    \\\n\t((pg_conn_data_t *) conn_data)->paramValues[i] = (itm);  \\\n\t((pg_conn_data_t *) conn_data)->paramLengths[i] = (len); \\\n\t((pg_conn_data_t *) conn_data)->paramFormats[i] = 1;\n\n#define GET_PARAM_STR(res, row, itm, fnum) \\\n\tstrcpy((itm), PQgetvalue((res), (row), (fnum)));\n\n#define GET_PARAM_INTEGER(res, row, itm, fnum) \\\n\t(itm) = ntohl(*((uint32_t *) PQgetvalue((res), (row), (fnum))));\n\n#define GET_PARAM_BIGINT(res, row, itm, fnum) \\\n\t(itm) = ntohll(*((uint64_t *) PQgetvalue((res), (row), (fnum))));\n\n#define GET_PARAM_BIN(res, row, itm, fnum) \\\n\t(itm) = PQgetvalue((res), (row), (fnum));\n\n#define FIND_JOBS_BY_QUE 1\n\n/* common functions */\nint db_prepare_job_sqls(void *conn);\nint db_prepare_resv_sqls(void *conn);\nint db_prepare_svr_sqls(void *conn);\nint db_prepare_node_sqls(void 
*conn);\nint db_prepare_sched_sqls(void *conn);\nint db_prepare_que_sqls(void *conn);\n\nvoid db_set_error(void *conn, char **conn_db_err, char *fnc, char *msg, char *msg2);\nint db_prepare_stmt(void *conn, char *stmt, char *sql, int num_vars);\nint db_cmd(void *conn, char *stmt, int num_vars);\nint db_query(void *conn, char *stmt, int num_vars, PGresult **res);\nunsigned long long db_ntohll(unsigned long long);\nint dbarray_to_attrlist(char *raw_array, pbs_db_attr_list_t *attr_list);\nint attrlist_to_dbarray(char **raw_array, pbs_db_attr_list_t *attr_list);\nint attrlist_to_dbarray_ex(char **raw_array, pbs_db_attr_list_t *attr_list, int keys_only);\n\n/* job functions */\nint pbs_db_save_job(void *conn, pbs_db_obj_info_t *obj, int savetype);\nint pbs_db_load_job(void *conn, pbs_db_obj_info_t *obj);\nint pbs_db_find_job(void *conn, void *st, pbs_db_obj_info_t *obj, pbs_db_query_options_t *opts);\nint pbs_db_next_job(void *conn, void *st, pbs_db_obj_info_t *obj);\nint pbs_db_delete_job(void *conn, pbs_db_obj_info_t *obj);\n\nint pbs_db_save_jobscr(void *conn, pbs_db_obj_info_t *obj, int savetype);\nint pbs_db_load_jobscr(void *conn, pbs_db_obj_info_t *obj);\n\n/* resv functions */\nint pbs_db_save_resv(void *conn, pbs_db_obj_info_t *obj, int savetype);\nint pbs_db_load_resv(void *conn, pbs_db_obj_info_t *obj);\nint pbs_db_find_resv(void *conn, void *st, pbs_db_obj_info_t *obj, pbs_db_query_options_t *opts);\nint pbs_db_next_resv(void *conn, void *st, pbs_db_obj_info_t *obj);\nint pbs_db_delete_resv(void *conn, pbs_db_obj_info_t *obj);\n\n/* svr functions */\nint pbs_db_save_svr(void *conn, pbs_db_obj_info_t *obj, int savetype);\nint pbs_db_load_svr(void *conn, pbs_db_obj_info_t *obj);\n\n/* node functions */\nint pbs_db_save_node(void *conn, pbs_db_obj_info_t *obj, int savetype);\nint pbs_db_load_node(void *conn, pbs_db_obj_info_t *obj);\nint pbs_db_find_node(void *conn, void *st, pbs_db_obj_info_t *obj, pbs_db_query_options_t *opts);\nint pbs_db_next_node(void 
*conn, void *st, pbs_db_obj_info_t *obj);\nint pbs_db_delete_node(void *conn, pbs_db_obj_info_t *obj);\n\n/* mominfo_time functions */\nint pbs_db_save_mominfo_tm(void *conn, pbs_db_obj_info_t *obj, int savetype);\nint pbs_db_load_mominfo_tm(void *conn, pbs_db_obj_info_t *obj);\n\n/* queue functions */\nint pbs_db_save_que(void *conn, pbs_db_obj_info_t *obj, int savetype);\nint pbs_db_load_que(void *conn, pbs_db_obj_info_t *obj);\nint pbs_db_find_que(void *conn, void *st, pbs_db_obj_info_t *obj, pbs_db_query_options_t *opts);\nint pbs_db_next_que(void *conn, void *st, pbs_db_obj_info_t *obj);\nint pbs_db_delete_que(void *conn, pbs_db_obj_info_t *obj);\n\n/* scheduler functions */\nint pbs_db_save_sched(void *conn, pbs_db_obj_info_t *obj, int savetype);\nint pbs_db_load_sched(void *conn, pbs_db_obj_info_t *obj);\n\nint pbs_db_find_sched(void *conn, void *st, pbs_db_obj_info_t *obj, pbs_db_query_options_t *opts);\nint pbs_db_next_sched(void *conn, void *st, pbs_db_obj_info_t *obj);\nint pbs_db_delete_sched(void *conn, pbs_db_obj_info_t *obj);\n\nint pbs_db_del_attr_job(void *conn, void *obj_id, pbs_db_attr_list_t *attr_list);\nint pbs_db_del_attr_sched(void *conn, void *obj_id, pbs_db_attr_list_t *attr_list);\nint pbs_db_del_attr_resv(void *conn, void *obj_id, pbs_db_attr_list_t *attr_list);\nint pbs_db_del_attr_svr(void *conn, void *obj_id, pbs_db_attr_list_t *attr_list);\nint pbs_db_del_attr_que(void *conn, void *obj_id, pbs_db_attr_list_t *attr_list);\nint pbs_db_del_attr_node(void *conn, void *obj_id, pbs_db_attr_list_t *attr_list);\n\n/**\n * @brief\n *\tExecute a direct sql string on the open database connection\n *\n * @param[in]\tconn - Connected database handle\n * @param[in]\tsql  - A string describing the sql to execute.\n *\n * @return      int\n * @retval      -1  - Error\n * @retval       0  - success\n * @retval       1  - Execution succeeded but statement did not return any rows\n *\n */\nint db_execute_str(void *conn, char *sql);\n\n#ifdef 
__cplusplus\n}\n#endif\n\n#endif /* _DB_POSTGRES_H */\n"
  },
  {
    "path": "src/lib/Libdb/pgsql/db_que.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n *\n * @brief\n *      Implementation of the queue data access functions for postgres\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include \"pbs_db.h\"\n#include \"db_postgres.h\"\n\n/**\n * @brief\n *\tPrepare all the queue related sqls. Typically called after connect\n *\tand before any other sql execution\n *\n * @param[in]\tconn - Database connection handle\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\ndb_prepare_que_sqls(void *conn)\n{\n\tchar conn_sql[MAX_SQL_LENGTH];\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"insert into pbs.queue(\"\n\t\t\t\t\t   \"qu_name, \"\n\t\t\t\t\t   \"qu_type, \"\n\t\t\t\t\t   \"qu_creattm, \"\n\t\t\t\t\t   \"qu_savetm, \"\n\t\t\t\t\t   \"attributes \"\n\t\t\t\t\t   \") \"\n\t\t\t\t\t   \"values \"\n\t\t\t\t\t   \"($1, $2,  localtimestamp, localtimestamp, hstore($3::text[]))\");\n\tif (db_prepare_stmt(conn, STMT_INSERT_QUE, conn_sql, 3) != 0)\n\t\treturn -1;\n\n\t/* rewrite all attributes for FULL update */\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"update pbs.queue set \"\n\t\t\t\t\t   \"qu_type = $2, \"\n\t\t\t\t\t   \"qu_savetm = localtimestamp, \"\n\t\t\t\t\t   \"attributes = attributes || hstore($3::text[]) \"\n\t\t\t\t\t   \"where qu_name = $1\");\n\tif (db_prepare_stmt(conn, STMT_UPDATE_QUE, conn_sql, 3) != 
0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"update pbs.queue set \"\n\t\t\t\t\t   \"qu_type = $2, \"\n\t\t\t\t\t   \"qu_savetm = localtimestamp \"\n\t\t\t\t\t   \"where qu_name = $1\");\n\tif (db_prepare_stmt(conn, STMT_UPDATE_QUE_QUICK, conn_sql, 2) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"update pbs.queue set \"\n\t\t\t\t\t   \"qu_savetm = localtimestamp, \"\n\t\t\t\t\t   \"attributes = attributes || hstore($2::text[]) \"\n\t\t\t\t\t   \"where qu_name = $1\");\n\tif (db_prepare_stmt(conn, STMT_UPDATE_QUE_ATTRSONLY, conn_sql, 2) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"update pbs.queue set \"\n\t\t\t\t\t   \"qu_savetm = localtimestamp,\"\n\t\t\t\t\t   \"attributes = attributes - $2::text[] \"\n\t\t\t\t\t   \"where qu_name = $1\");\n\tif (db_prepare_stmt(conn, STMT_REMOVE_QUEATTRS, conn_sql, 2) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"select qu_name, \"\n\t\t\t\t\t   \"qu_type, \"\n\t\t\t\t\t   \"hstore_to_array(attributes) as attributes \"\n\t\t\t\t\t   \"from pbs.queue \"\n\t\t\t\t\t   \"where qu_name = $1\");\n\tif (db_prepare_stmt(conn, STMT_SELECT_QUE, conn_sql, 1) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"select \"\n\t\t\t\t\t   \"qu_name, \"\n\t\t\t\t\t   \"qu_type, \"\n\t\t\t\t\t   \"hstore_to_array(attributes) as attributes \"\n\t\t\t\t\t   \"from pbs.queue order by qu_creattm\");\n\tif (db_prepare_stmt(conn, STMT_FIND_QUES_ORDBY_CREATTM, conn_sql, 0) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"delete from pbs.queue where qu_name = $1\");\n\tif (db_prepare_stmt(conn, STMT_DELETE_QUE, conn_sql, 1) != 0)\n\t\treturn -1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tLoad queue data from the row into the queue object\n *\n * @param[in]\tres - Resultset from an earlier query\n * @param[out]\tpq  - Queue object to load data into\n * @param[in]\trow - The current row to load within the resultset\n *\n * @return      Error code\n * @retval\t-1 - 
On Error\n * @retval\t 0 - On Success\n * @retval\t>1 - Number of attributes\n */\nstatic int\nload_que(PGresult *res, pbs_db_que_info_t *pq, int row)\n{\n\tchar *raw_array;\n\tstatic int qu_name_fnum, qu_type_fnum, attributes_fnum;\n\tstatic int fnums_inited = 0;\n\n\tif (fnums_inited == 0) {\n\t\tqu_name_fnum = PQfnumber(res, \"qu_name\");\n\t\tqu_type_fnum = PQfnumber(res, \"qu_type\");\n\t\tattributes_fnum = PQfnumber(res, \"attributes\");\n\t\tfnums_inited = 1;\n\t}\n\n\tGET_PARAM_STR(res, row, pq->qu_name, qu_name_fnum);\n\tGET_PARAM_INTEGER(res, row, pq->qu_type, qu_type_fnum);\n\tGET_PARAM_BIN(res, row, raw_array, attributes_fnum);\n\n\t/* convert attributes from postgres raw array format */\n\treturn (dbarray_to_attrlist(raw_array, &pq->db_attr_list));\n}\n\n/**\n * @brief\n *\tInsert queue data into the database\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tobj  - Information of queue to be inserted\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\npbs_db_save_que(void *conn, pbs_db_obj_info_t *obj, int savetype)\n{\n\tpbs_db_que_info_t *pq = obj->pbs_db_un.pbs_db_que;\n\tchar *stmt = NULL;\n\tint params;\n\tint rc = 0;\n\tchar *raw_array = NULL;\n\n\tSET_PARAM_STR(conn_data, pq->qu_name, 0);\n\n\tif (savetype & OBJ_SAVE_QS) {\n\t\tSET_PARAM_INTEGER(conn_data, pq->qu_type, 1);\n\t\tparams = 2;\n\t\tstmt = STMT_UPDATE_QUE_QUICK;\n\t}\n\n\tif ((pq->db_attr_list.attr_count > 0) || (savetype & OBJ_SAVE_NEW)) {\n\t\tint len = 0;\n\t\t/* convert attributes to postgres raw array format */\n\t\tif ((len = attrlist_to_dbarray(&raw_array, &pq->db_attr_list)) <= 0)\n\t\t\treturn -1;\n\n\t\tif (savetype & OBJ_SAVE_QS) {\n\t\t\tSET_PARAM_BIN(conn_data, raw_array, len, 2);\n\t\t\tparams = 3;\n\t\t\tstmt = STMT_UPDATE_QUE;\n\t\t} else {\n\t\t\tSET_PARAM_BIN(conn_data, raw_array, len, 1);\n\t\t\tparams = 2;\n\t\t\tstmt = STMT_UPDATE_QUE_ATTRSONLY;\n\t\t}\n\t}\n\n\tif (savetype & OBJ_SAVE_NEW)\n\t\tstmt = 
STMT_INSERT_QUE;\n\n\tif (stmt)\n\t\trc = db_cmd(conn, stmt, params);\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\tLoad queue data from the database\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tobj  - Load queue information into this object\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n * @retval\t 1 -  Success but no rows loaded\n *\n */\nint\npbs_db_load_que(void *conn, pbs_db_obj_info_t *obj)\n{\n\tPGresult *res;\n\tint rc;\n\tpbs_db_que_info_t *pq = obj->pbs_db_un.pbs_db_que;\n\n\tSET_PARAM_STR(conn_data, pq->qu_name, 0);\n\n\tif ((rc = db_query(conn, STMT_SELECT_QUE, 1, &res)) != 0)\n\t\treturn rc;\n\n\trc = load_que(res, pq, 0);\n\n\tPQclear(res);\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\tFind queues\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tst   - The cursor state variable updated by this query\n * @param[in]\tobj  - Information of queue to be found\n * @param[in]\topts - Any other options (like flags, timestamp)\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n * @retval\t 1 - Success, but no rows found\n *\n */\nint\npbs_db_find_que(void *conn, void *st, pbs_db_obj_info_t *obj, pbs_db_query_options_t *opts)\n{\n\tPGresult *res;\n\tchar conn_sql[MAX_SQL_LENGTH];\n\tint rc;\n\tdb_query_state_t *state = (db_query_state_t *) st;\n\n\tif (!state)\n\t\treturn -1;\n\n\tstrcpy(conn_sql, STMT_FIND_QUES_ORDBY_CREATTM);\n\tif ((rc = db_query(conn, conn_sql, 0, &res)) != 0)\n\t\treturn rc;\n\n\tstate->row = 0;\n\tstate->res = res;\n\tstate->count = PQntuples(res);\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tGet the next queue from the cursor\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tst   - The cursor state\n * @param[in]\tobj  - queue information is loaded into this object\n *\n * @return      Error code\n *\t\t(Even though this returns only 0 now, keeping it as int\n *\t\t\tto support future change to return a failure)\n * @retval\t 0 - Success\n 
*\n */\nint\npbs_db_next_que(void *conn, void *st, pbs_db_obj_info_t *obj)\n{\n\tdb_query_state_t *state = (db_query_state_t *) st;\n\n\treturn (load_que(state->res, obj->pbs_db_un.pbs_db_que, state->row));\n}\n\n/**\n * @brief\n *\tDelete the queue from the database\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tobj  - queue information\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\npbs_db_delete_que(void *conn, pbs_db_obj_info_t *obj)\n{\n\tpbs_db_que_info_t *pq = obj->pbs_db_un.pbs_db_que;\n\tSET_PARAM_STR(conn_data, pq->qu_name, 0);\n\treturn (db_cmd(conn, STMT_DELETE_QUE, 1));\n}\n\n/**\n * @brief\n *\tDeletes attributes of a queue\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tobj_id  - queue id\n * @param[in]\tattr_list - List of attributes\n *\n * @return      Error code\n * @retval\t 0 - Success\n * @retval\t-1 - On Failure\n *\n */\nint\npbs_db_del_attr_que(void *conn, void *obj_id, pbs_db_attr_list_t *attr_list)\n{\n\tchar *raw_array = NULL;\n\tint len = 0;\n\tint rc = 0;\n\n\tif ((len = attrlist_to_dbarray_ex(&raw_array, attr_list, 1)) <= 0)\n\t\treturn -1;\n\n\tSET_PARAM_STR(conn_data, obj_id, 0);\n\tSET_PARAM_BIN(conn_data, raw_array, len, 1);\n\n\trc = db_cmd(conn, STMT_REMOVE_QUEATTRS, 2);\n\n\treturn rc;\n}\n"
  },
  {
    "path": "src/lib/Libdb/pgsql/db_resv.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n *\n * @brief\n *      Implementation of the resv data access functions for postgres\n */\n#include <pbs_config.h> /* the master config generated by configure */\n#include \"pbs_db.h\"\n#include \"db_postgres.h\"\n\n/**\n * @brief\n *\tPrepare all the reservation related sqls. Typically called after connect\n *\tand before any other sql execution\n *\n * @param[in]\tconn - Database connection handle\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\ndb_prepare_resv_sqls(void *conn)\n{\n\tchar conn_sql[MAX_SQL_LENGTH];\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"insert into pbs.resv (\"\n\t\t\t\t\t   \"ri_resvID, \"\n\t\t\t\t\t   \"ri_queue, \"\n\t\t\t\t\t   \"ri_state, \"\n\t\t\t\t\t   \"ri_substate, \"\n\t\t\t\t\t   \"ri_stime, \"\n\t\t\t\t\t   \"ri_etime, \"\n\t\t\t\t\t   \"ri_duration, \"\n\t\t\t\t\t   \"ri_tactive, \"\n\t\t\t\t\t   \"ri_svrflags, \"\n\t\t\t\t\t   \"ri_savetm, \"\n\t\t\t\t\t   \"ri_creattm, \"\n\t\t\t\t\t   \"attributes \"\n\t\t\t\t\t   \") \"\n\t\t\t\t\t   \"values \"\n\t\t\t\t\t   \"($1, $2, $3, $4, $5, $6, $7, $8, $9, \"\n\t\t\t\t\t   \"localtimestamp, localtimestamp, hstore($10::text[]))\");\n\tif (db_prepare_stmt(conn, STMT_INSERT_RESV, conn_sql, 10) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"update pbs.resv set \"\n\t\t\t\t\t   \"ri_queue = $2, 
\"\n\t\t\t\t\t   \"ri_state = $3, \"\n\t\t\t\t\t   \"ri_substate = $4, \"\n\t\t\t\t\t   \"ri_stime = $5, \"\n\t\t\t\t\t   \"ri_etime = $6, \"\n\t\t\t\t\t   \"ri_duration = $7, \"\n\t\t\t\t\t   \"ri_tactive = $8, \"\n\t\t\t\t\t   \"ri_svrflags = $9, \"\n\t\t\t\t\t   \"ri_savetm = localtimestamp, \"\n\t\t\t\t\t   \"attributes = attributes || hstore($10::text[]) \"\n\t\t\t\t\t   \"where ri_resvID = $1\");\n\tif (db_prepare_stmt(conn, STMT_UPDATE_RESV, conn_sql, 10) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"update pbs.resv set \"\n\t\t\t\t\t   \"ri_queue = $2, \"\n\t\t\t\t\t   \"ri_state = $3, \"\n\t\t\t\t\t   \"ri_substate = $4, \"\n\t\t\t\t\t   \"ri_stime = $5, \"\n\t\t\t\t\t   \"ri_etime = $6, \"\n\t\t\t\t\t   \"ri_duration = $7, \"\n\t\t\t\t\t   \"ri_tactive = $8, \"\n\t\t\t\t\t   \"ri_svrflags = $9, \"\n\t\t\t\t\t   \"ri_savetm = localtimestamp \"\n\t\t\t\t\t   \"where ri_resvID = $1\");\n\tif (db_prepare_stmt(conn, STMT_UPDATE_RESV_QUICK, conn_sql, 9) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"update pbs.resv set \"\n\t\t\t\t\t   \"ri_savetm = localtimestamp, \"\n\t\t\t\t\t   \"attributes = attributes || hstore($2::text[]) \"\n\t\t\t\t\t   \"where ri_resvID = $1\");\n\tif (db_prepare_stmt(conn, STMT_UPDATE_RESV_ATTRSONLY, conn_sql, 2) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"update pbs.resv set \"\n\t\t\t\t\t   \"ri_savetm = localtimestamp,\"\n\t\t\t\t\t   \"attributes = attributes - $2::text[] \"\n\t\t\t\t\t   \"where ri_resvID = $1\");\n\tif (db_prepare_stmt(conn, STMT_REMOVE_RESVATTRS, conn_sql, 2) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"select \"\n\t\t\t\t\t   \"ri_resvID, \"\n\t\t\t\t\t   \"ri_queue, \"\n\t\t\t\t\t   \"ri_state, \"\n\t\t\t\t\t   \"ri_substate, \"\n\t\t\t\t\t   \"ri_stime, \"\n\t\t\t\t\t   \"ri_etime, \"\n\t\t\t\t\t   \"ri_duration, \"\n\t\t\t\t\t   \"ri_tactive, \"\n\t\t\t\t\t   \"ri_svrflags, \"\n\t\t\t\t\t   \"hstore_to_array(attributes) as attributes 
\"\n\t\t\t\t\t   \"from pbs.resv where ri_resvid = $1\");\n\tif (db_prepare_stmt(conn, STMT_SELECT_RESV, conn_sql, 1) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"select \"\n\t\t\t\t\t   \"ri_resvID, \"\n\t\t\t\t\t   \"ri_queue, \"\n\t\t\t\t\t   \"ri_state, \"\n\t\t\t\t\t   \"ri_substate, \"\n\t\t\t\t\t   \"ri_stime, \"\n\t\t\t\t\t   \"ri_etime, \"\n\t\t\t\t\t   \"ri_duration, \"\n\t\t\t\t\t   \"ri_tactive, \"\n\t\t\t\t\t   \"ri_svrflags, \"\n\t\t\t\t\t   \"hstore_to_array(attributes) as attributes \"\n\t\t\t\t\t   \"from pbs.resv \"\n\t\t\t\t\t   \"order by ri_creattm\");\n\tif (db_prepare_stmt(conn, STMT_FINDRESVS_ORDBY_CREATTM,\n\t\t\t    conn_sql, 0) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"delete from pbs.resv where ri_resvid = $1\");\n\tif (db_prepare_stmt(conn, STMT_DELETE_RESV, conn_sql, 1) != 0)\n\t\treturn -1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tLoad resv data from the row into the resv object\n *\n * @param[in]\tres - Resultset from an earlier query\n * @param[out]\tpresv  - resv object to load data into\n * @param[in]\trow - The current row to load within the resultset\n *\n * @return      Error code\n * @retval\t-1 - On Error\n * @retval\t 0 - On Success\n * @retval\t>1 - Number of attributes\n */\nstatic int\nload_resv(PGresult *res, pbs_db_resv_info_t *presv, int row)\n{\n\tchar *raw_array;\n\tstatic int ri_resvid_fnum;\n\tstatic int ri_queue_fnum;\n\tstatic int ri_state_fnum;\n\tstatic int ri_substate_fnum;\n\tstatic int ri_stime_fnum;\n\tstatic int ri_etime_fnum;\n\tstatic int ri_duration_fnum;\n\tstatic int ri_tactive_fnum;\n\tstatic int ri_svrflags_fnum;\n\tstatic int attributes_fnum;\n\tstatic int fnums_inited = 0;\n\n\tif (fnums_inited == 0) {\n\t\tri_resvid_fnum = PQfnumber(res, \"ri_resvID\");\n\t\tri_queue_fnum = PQfnumber(res, \"ri_queue\");\n\t\tri_state_fnum = PQfnumber(res, \"ri_state\");\n\t\tri_substate_fnum = PQfnumber(res, \"ri_substate\");\n\t\tri_stime_fnum = PQfnumber(res, 
\"ri_stime\");\n\t\tri_etime_fnum = PQfnumber(res, \"ri_etime\");\n\t\tri_duration_fnum = PQfnumber(res, \"ri_duration\");\n\t\tri_tactive_fnum = PQfnumber(res, \"ri_tactive\");\n\t\tri_svrflags_fnum = PQfnumber(res, \"ri_svrflags\");\n\t\tattributes_fnum = PQfnumber(res, \"attributes\");\n\t\tfnums_inited = 1;\n\t}\n\n\tGET_PARAM_STR(res, row, presv->ri_resvid, ri_resvid_fnum);\n\tGET_PARAM_STR(res, row, presv->ri_queue, ri_queue_fnum);\n\tGET_PARAM_INTEGER(res, row, presv->ri_state, ri_state_fnum);\n\tGET_PARAM_INTEGER(res, row, presv->ri_substate, ri_substate_fnum);\n\tGET_PARAM_BIGINT(res, row, presv->ri_stime, ri_stime_fnum);\n\tGET_PARAM_BIGINT(res, row, presv->ri_etime, ri_etime_fnum);\n\tGET_PARAM_BIGINT(res, row, presv->ri_duration, ri_duration_fnum);\n\tGET_PARAM_INTEGER(res, row, presv->ri_tactive, ri_tactive_fnum);\n\tGET_PARAM_INTEGER(res, row, presv->ri_svrflags, ri_svrflags_fnum);\n\tGET_PARAM_BIN(res, row, raw_array, attributes_fnum);\n\n\t/* convert attributes from postgres raw array format */\n\treturn (dbarray_to_attrlist(raw_array, &presv->db_attr_list));\n}\n\n/**\n * @brief\n *\tInsert resv data into the database\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tobj  - Information of resv to be inserted\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\npbs_db_save_resv(void *conn, pbs_db_obj_info_t *obj, int savetype)\n{\n\tpbs_db_resv_info_t *presv = obj->pbs_db_un.pbs_db_resv;\n\tchar *stmt = NULL;\n\tint params;\n\tint rc = 0;\n\tchar *raw_array = NULL;\n\n\tSET_PARAM_STR(conn_data, presv->ri_resvid, 0);\n\n\tif (savetype & OBJ_SAVE_QS) {\n\t\tSET_PARAM_STR(conn_data, presv->ri_queue, 1);\n\t\tSET_PARAM_INTEGER(conn_data, presv->ri_state, 2);\n\t\tSET_PARAM_INTEGER(conn_data, presv->ri_substate, 3);\n\t\tSET_PARAM_BIGINT(conn_data, presv->ri_stime, 4);\n\t\tSET_PARAM_BIGINT(conn_data, presv->ri_etime, 5);\n\t\tSET_PARAM_BIGINT(conn_data, presv->ri_duration, 
6);\n\t\tSET_PARAM_INTEGER(conn_data, presv->ri_tactive, 7);\n\t\tSET_PARAM_INTEGER(conn_data, presv->ri_svrflags, 8);\n\t\tstmt = STMT_UPDATE_RESV_QUICK;\n\t\tparams = 9;\n\t}\n\n\tif ((presv->db_attr_list.attr_count > 0) || (savetype & OBJ_SAVE_NEW)) {\n\t\tint len = 0;\n\t\t/* convert attributes to postgres raw array format */\n\t\tif ((len = attrlist_to_dbarray(&raw_array, &presv->db_attr_list)) <= 0)\n\t\t\treturn -1;\n\n\t\tif (savetype & OBJ_SAVE_QS) {\n\t\t\tSET_PARAM_BIN(conn_data, raw_array, len, 9);\n\t\t\tstmt = STMT_UPDATE_RESV;\n\t\t\tparams = 10;\n\t\t} else {\n\t\t\tSET_PARAM_BIN(conn_data, raw_array, len, 1);\n\t\t\tparams = 2;\n\t\t\tstmt = STMT_UPDATE_RESV_ATTRSONLY;\n\t\t}\n\t}\n\n\tif (savetype & OBJ_SAVE_NEW)\n\t\tstmt = STMT_INSERT_RESV;\n\n\tif (stmt)\n\t\trc = db_cmd(conn, stmt, params);\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\tLoad resv data from the database\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tobj  - Load resv information into this object\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n * @retval\t 1 -  Success but no rows loaded\n *\n */\nint\npbs_db_load_resv(void *conn, pbs_db_obj_info_t *obj)\n{\n\tpbs_db_resv_info_t *presv = obj->pbs_db_un.pbs_db_resv;\n\tPGresult *res;\n\tint rc;\n\n\tSET_PARAM_STR(conn_data, presv->ri_resvid, 0);\n\n\tif ((rc = db_query(conn, STMT_SELECT_RESV, 1, &res)) != 0)\n\t\treturn rc;\n\n\trc = load_resv(res, presv, 0);\n\n\tPQclear(res);\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\tFind resv\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tst   - The cursor state variable updated by this query\n * @param[in]\tobj  - Information of resv to be found\n * @param[in]\topts - Any other options (like flags, timestamp)\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n * @retval\t 1 - Success, but no rows found\n *\n */\nint\npbs_db_find_resv(void *conn, void *st, pbs_db_obj_info_t *obj,\n\t\t 
pbs_db_query_options_t *opts)\n{\n\tPGresult *res;\n\tint rc;\n\tint params;\n\tchar conn_sql[MAX_SQL_LENGTH];\n\tdb_query_state_t *state = (db_query_state_t *) st;\n\n\tif (!state)\n\t\treturn -1;\n\n\tparams = 0;\n\tstrcpy(conn_sql, STMT_FINDRESVS_ORDBY_CREATTM);\n\n\tif ((rc = db_query(conn, conn_sql, params, &res)) != 0)\n\t\treturn rc;\n\n\tstate->row = 0;\n\tstate->res = res;\n\tstate->count = PQntuples(res);\n\treturn 0;\n}\n\n/**\n * @brief\n *\tGet the next resv from the cursor\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tst   - The cursor state\n * @param[in]\tobj  - Resv information is loaded into this object\n *\n * @return      Error code\n *\t\t(Even though this returns only 0 now, keeping it as int\n *\t\t\tto support future change to return a failure)\n * @retval\t 0 - Success\n *\n */\nint\npbs_db_next_resv(void *conn, void *st, pbs_db_obj_info_t *obj)\n{\n\tdb_query_state_t *state = (db_query_state_t *) st;\n\treturn (load_resv(state->res, obj->pbs_db_un.pbs_db_resv, state->row));\n}\n\n/**\n * @brief\n *\tDelete the resv from the database\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tobj  - Resv information\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\npbs_db_delete_resv(void *conn, pbs_db_obj_info_t *obj)\n{\n\tpbs_db_resv_info_t *presv = obj->pbs_db_un.pbs_db_resv;\n\tSET_PARAM_STR(conn_data, presv->ri_resvid, 0);\n\treturn (db_cmd(conn, STMT_DELETE_RESV, 1));\n}\n\n/**\n * @brief\n *\tDeletes attributes of a Resv\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tobj_id  - Resv id\n * @param[in]\tattr_list - List of attributes\n *\n * @return      Error code\n * @retval\t 0 - Success\n * @retval\t-1 - On Failure\n *\n */\nint\npbs_db_del_attr_resv(void *conn, void *obj_id, pbs_db_attr_list_t *attr_list)\n{\n\tchar *raw_array = NULL;\n\tint len = 0;\n\tint rc = 0;\n\n\tif ((len = attrlist_to_dbarray_ex(&raw_array, attr_list, 1)) <= 0)\n\t\treturn 
-1;\n\n\tSET_PARAM_STR(conn_data, obj_id, 0);\n\tSET_PARAM_BIN(conn_data, raw_array, len, 1);\n\n\trc = db_cmd(conn, STMT_REMOVE_RESVATTRS, 2);\n\n\treturn rc;\n}\n"
  },
  {
    "path": "src/lib/Libdb/pgsql/db_sched.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n *\n * @brief\n *      Implementation of the scheduler data access functions for postgres\n */\n#include <pbs_config.h> /* the master config generated by configure */\n#include \"pbs_db.h\"\n#include \"db_postgres.h\"\n\n/**\n * @brief\n *\tPrepare all the scheduler related sqls. Typically called after connect\n *\tand before any other sql execution\n *\n * @param[in]\tconn - Database connection handle\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\ndb_prepare_sched_sqls(void *conn)\n{\n\tchar conn_sql[MAX_SQL_LENGTH];\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"insert into \"\n\t\t\t\t\t   \"pbs.scheduler( \"\n\t\t\t\t\t   \"sched_name, \"\n\t\t\t\t\t   \"sched_savetm, \"\n\t\t\t\t\t   \"sched_creattm, \"\n\t\t\t\t\t   \"attributes \"\n\t\t\t\t\t   \") \"\n\t\t\t\t\t   \"values ($1, localtimestamp, localtimestamp, hstore($2::text[]))\");\n\tif (db_prepare_stmt(conn, STMT_INSERT_SCHED, conn_sql, 2) != 0)\n\t\treturn -1;\n\n\t/* rewrite all attributes for a FULL update */\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"update pbs.scheduler set \"\n\t\t\t\t\t   \"sched_savetm = localtimestamp, \"\n\t\t\t\t\t   \"attributes = attributes || hstore($2::text[]) \"\n\t\t\t\t\t   \"where sched_name = $1\");\n\tif (db_prepare_stmt(conn, STMT_UPDATE_SCHED, conn_sql, 2) != 0)\n\t\treturn 
-1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"update pbs.scheduler set \"\n\t\t\t\t\t   \"sched_savetm = localtimestamp,\"\n\t\t\t\t\t   \"attributes = attributes - $2::text[] \"\n\t\t\t\t\t   \"where sched_name = $1\");\n\tif (db_prepare_stmt(conn, STMT_REMOVE_SCHEDATTRS, conn_sql, 2) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"select \"\n\t\t\t\t\t   \"sched_name, \"\n\t\t\t\t\t   \"hstore_to_array(attributes) as attributes \"\n\t\t\t\t\t   \"from \"\n\t\t\t\t\t   \"pbs.scheduler \"\n\t\t\t\t\t   \"where sched_name = $1\");\n\tif (db_prepare_stmt(conn, STMT_SELECT_SCHED, conn_sql, 1) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"select \"\n\t\t\t\t\t   \"sched_name, \"\n\t\t\t\t\t   \"hstore_to_array(attributes) as attributes \"\n\t\t\t\t\t   \"from \"\n\t\t\t\t\t   \"pbs.scheduler \");\n\tif (db_prepare_stmt(conn, STMT_SELECT_SCHED_ALL, conn_sql, 0) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"delete from pbs.scheduler where sched_name = $1\");\n\tif (db_prepare_stmt(conn, STMT_DELETE_SCHED, conn_sql, 1) != 0)\n\t\treturn -1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tInsert scheduler data into the database\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tobj  - Information of scheduler to be inserted\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\npbs_db_save_sched(void *conn, pbs_db_obj_info_t *obj, int savetype)\n{\n\tpbs_db_sched_info_t *psch = obj->pbs_db_un.pbs_db_sched;\n\tchar *stmt = NULL;\n\tint params;\n\tint rc = 0;\n\tchar *raw_array = NULL;\n\n\tSET_PARAM_STR(conn_data, psch->sched_name, 0);\n\n\t/* sched does not have a QS area, so ignoring that */\n\n\tif ((psch->db_attr_list.attr_count > 0) || (savetype & OBJ_SAVE_NEW)) {\n\t\tint len = 0;\n\t\t/* convert attributes to postgres raw array format */\n\t\tif ((len = attrlist_to_dbarray(&raw_array, &psch->db_attr_list)) <= 0)\n\t\t\treturn -1;\n\n\t\tSET_PARAM_BIN(conn_data, 
raw_array, len, 1);\n\t\tstmt = STMT_UPDATE_SCHED;\n\t\tparams = 2;\n\t}\n\n\tif (savetype & OBJ_SAVE_NEW)\n\t\tstmt = STMT_INSERT_SCHED;\n\n\tif (stmt)\n\t\trc = db_cmd(conn, stmt, params);\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\tLoad scheduler data from the row into the scheduler object\n *\n * @param[in]\tres - Resultset from an earlier query\n * @param[out]\tpsch  - Scheduler object to load data into\n * @param[in]\trow - The current row to load within the resultset\n *\n * @return      Error code\n * @retval\t-1 - On Error\n * @retval\t 0 - On Success\n * @retval\t>1 - Number of attributes\n */\nstatic int\nload_sched(PGresult *res, pbs_db_sched_info_t *psch, int row)\n{\n\tchar *raw_array;\n\tstatic int sched_name_fnum;\n\tstatic int attributes_fnum;\n\tstatic int fnums_inited = 0;\n\n\tif (fnums_inited == 0) {\n\t\tsched_name_fnum = PQfnumber(res, \"sched_name\");\n\t\tattributes_fnum = PQfnumber(res, \"attributes\");\n\t\tfnums_inited = 1;\n\t}\n\n\tGET_PARAM_STR(res, row, psch->sched_name, sched_name_fnum);\n\tGET_PARAM_BIN(res, row, raw_array, attributes_fnum);\n\n\t/* convert attributes from postgres raw array format */\n\treturn (dbarray_to_attrlist(raw_array, &psch->db_attr_list));\n}\n\n/**\n * @brief\n *\tLoad scheduler data from the database\n *\n * @param[in]\tconn - Connection handle\n * @param[out]\tobj  - Load scheduler information into this object\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n * @retval\t 1 -  Success but no rows loaded\n *\n */\nint\npbs_db_load_sched(void *conn, pbs_db_obj_info_t *obj)\n{\n\tPGresult *res;\n\tint rc;\n\tpbs_db_sched_info_t *psch = obj->pbs_db_un.pbs_db_sched;\n\n\tSET_PARAM_STR(conn_data, psch->sched_name, 0);\n\n\tif ((rc = db_query(conn, STMT_SELECT_SCHED, 1, &res)) != 0)\n\t\treturn rc;\n\n\trc = load_sched(res, psch, 0);\n\n\tPQclear(res);\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\tFind scheduler\n *\n * @param[in]\tconn - Connection handle\n * @param[out]\tst   - 
The cursor state variable updated by this query\n * @param[in]\tobj  - Information of sched to be found\n * @param[in]\topts - Any other options (like flags, timestamp)\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n * @retval\t 1 -  Success but no rows found\n *\n */\nint\npbs_db_find_sched(void *conn, void *st, pbs_db_obj_info_t *obj,\n\t\t  pbs_db_query_options_t *opts)\n{\n\tPGresult *res;\n\tdb_query_state_t *state = (db_query_state_t *) st;\n\tint rc;\n\tint params;\n\n\tif (!state)\n\t\treturn -1;\n\n\tparams = 0;\n\tif ((rc = db_query(conn, STMT_SELECT_SCHED_ALL, params, &res)) != 0)\n\t\treturn rc;\n\n\tstate->row = 0;\n\tstate->res = res;\n\tstate->count = PQntuples(res);\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tDeletes attributes of a Scheduler\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tobj_id  - Scheduler id\n * @param[in]\tattr_list - List of attributes\n *\n * @return      Error code\n * @retval\t 0 - Success\n * @retval\t-1 - On Failure\n *\n */\nint\npbs_db_del_attr_sched(void *conn, void *obj_id, pbs_db_attr_list_t *attr_list)\n{\n\tchar *raw_array = NULL;\n\tint len = 0;\n\tint rc = 0;\n\n\tif ((len = attrlist_to_dbarray_ex(&raw_array, attr_list, 1)) <= 0)\n\t\treturn -1;\n\n\tSET_PARAM_STR(conn_data, obj_id, 0);\n\tSET_PARAM_BIN(conn_data, raw_array, len, 1);\n\n\trc = db_cmd(conn, STMT_REMOVE_SCHEDATTRS, 2);\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\tGet the next scheduler from the cursor\n *\n * @param[in]\tconn - Connection handle\n * @param[out]\tst   - The cursor state\n * @param[in]\tobj  - Scheduler information is loaded into this object\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\npbs_db_next_sched(void *conn, void *st, pbs_db_obj_info_t *obj)\n{\n\tdb_query_state_t *state = (db_query_state_t *) st;\n\n\treturn (load_sched(state->res, obj->pbs_db_un.pbs_db_sched, state->row));\n}\n\n/**\n * @brief\n *\tDelete the scheduler from the 
database\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tobj  - scheduler information\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\npbs_db_delete_sched(void *conn, pbs_db_obj_info_t *obj)\n{\n\tpbs_db_sched_info_t *sc = obj->pbs_db_un.pbs_db_sched;\n\tSET_PARAM_STR(conn_data, sc->sched_name, 0);\n\treturn (db_cmd(conn, STMT_DELETE_SCHED, 1));\n}\n"
  },
  {
    "path": "src/lib/Libdb/pgsql/db_svr.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n *\n * @brief\n *      Implementation of the svr data access functions for postgres\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include \"pbs_db.h\"\n#include <errno.h>\n#include \"db_postgres.h\"\n\nextern char *errmsg_cache;\nstatic int pbs_db_truncate_all(void *conn);\n\n/**\n * @brief\n *\tPrepare all the server related sqls. 
Typically called after connect\n *\tand before any other sql execution\n *\n * @param[in]\tconn - Database connection handle\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\ndb_prepare_svr_sqls(void *conn)\n{\n\tchar conn_sql[MAX_SQL_LENGTH];\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"insert into pbs.server( \"\n\t\t\t\t\t   \"sv_jobidnumber, \"\n\t\t\t\t\t   \"sv_savetm, \"\n\t\t\t\t\t   \"sv_creattm, \"\n\t\t\t\t\t   \"attributes \"\n\t\t\t\t\t   \") \"\n\t\t\t\t\t   \"values \"\n\t\t\t\t\t   \"($1, localtimestamp, localtimestamp, hstore($2::text[]))\");\n\tif (db_prepare_stmt(conn, STMT_INSERT_SVR, conn_sql, 2) != 0)\n\t\treturn -1;\n\n\t/* replace all attributes for a FULL update */\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"update pbs.server set \"\n\t\t\t\t\t   \"sv_jobidnumber = $1, \"\n\t\t\t\t\t   \"sv_savetm = localtimestamp, \"\n\t\t\t\t\t   \"attributes = attributes || hstore($2::text[])\");\n\tif (db_prepare_stmt(conn, STMT_UPDATE_SVR, conn_sql, 2) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"update pbs.server set \"\n\t\t\t\t\t   \"sv_savetm = localtimestamp,\"\n\t\t\t\t\t   \"attributes = attributes - $1::text[]\");\n\tif (db_prepare_stmt(conn, STMT_REMOVE_SVRATTRS, conn_sql, 1) != 0)\n\t\treturn -1;\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"select \"\n\t\t\t\t\t   \"sv_jobidnumber, \"\n\t\t\t\t\t   \"hstore_to_array(attributes) as attributes \"\n\t\t\t\t\t   \"from \"\n\t\t\t\t\t   \"pbs.server \");\n\tif (db_prepare_stmt(conn, STMT_SELECT_SVR, conn_sql, 0) != 0)\n\t\treturn -1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tTruncate all data from ALL tables from the database\n *\n * @param[in]\tconn - The database connection handle\n *\n * @return      Error code\n * @retval\t-1 - Failure\n *\t\t 0 - Success\n *\n */\nstatic int\npbs_db_truncate_all(void *conn)\n{\n\tchar conn_sql[MAX_SQL_LENGTH]; /* sql buffer */\n\n\tsnprintf(conn_sql, MAX_SQL_LENGTH, \"truncate table \t\"\n\t\t\t\t\t   
\"pbs.scheduler, \"\n\t\t\t\t\t   \"pbs.node, \"\n\t\t\t\t\t   \"pbs.queue, \"\n\t\t\t\t\t   \"pbs.resv, \"\n\t\t\t\t\t   \"pbs.job_scr, \"\n\t\t\t\t\t   \"pbs.job, \"\n\t\t\t\t\t   \"pbs.server\");\n\n\tif (db_execute_str(conn, conn_sql) == -1)\n\t\treturn -1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tInsert server data into the database\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tobj  - Information of server to be inserted\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\npbs_db_save_svr(void *conn, pbs_db_obj_info_t *obj, int savetype)\n{\n\tpbs_db_svr_info_t *ps = obj->pbs_db_un.pbs_db_svr;\n\tchar *stmt = NULL;\n\tint params;\n\tchar *raw_array = NULL;\n\tint len = 0;\n\tint rc = 0;\n\n\t/* Svr does not have a QS area, so ignoring that */\n\tSET_PARAM_BIGINT(conn_data, ps->sv_jobidnumber, 0);\n\n\tif ((ps->db_attr_list.attr_count > 0) || (savetype & OBJ_SAVE_NEW)) {\n\t\t/* convert attributes to postgres raw array format */\n\t\tif ((len = attrlist_to_dbarray(&raw_array, &ps->db_attr_list)) <= 0)\n\t\t\treturn -1;\n\n\t\tSET_PARAM_BIN(conn_data, raw_array, len, 1);\n\t\tparams = 2;\n\t\tstmt = STMT_UPDATE_SVR;\n\t}\n\n\tif (savetype & OBJ_SAVE_NEW) {\n\t\tstmt = STMT_INSERT_SVR;\n\t\t/* reinitialize data by truncating all PBS tables */\n\t\tif (pbs_db_truncate_all(conn) == -1) {\n\t\t\tdb_set_error(conn, &errmsg_cache, \"Could not truncate PBS data\", stmt, \"\");\n\t\t\treturn -1;\n\t\t}\n\t}\n\n\tif (stmt)\n\t\trc = db_cmd(conn, stmt, params);\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\tLoad server data from the database\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tobj  - Load server information into this object\n *\n * @return      Error code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n * @retval\t 1 -  Success but no rows loaded\n *\n */\nint\npbs_db_load_svr(void *conn, pbs_db_obj_info_t *obj)\n{\n\tPGresult *res;\n\tint rc;\n\tchar *raw_array;\n\tpbs_db_svr_info_t *ps = 
obj->pbs_db_un.pbs_db_svr;\n\tstatic int sv_jobidnumber_fnum;\n\tstatic int attributes_fnum;\n\tstatic int fnums_inited = 0;\n\n\tif ((rc = db_query(conn, STMT_SELECT_SVR, 0, &res)) != 0)\n\t\treturn rc;\n\n\tif (fnums_inited == 0) {\n\t\tsv_jobidnumber_fnum = PQfnumber(res, \"sv_jobidnumber\");\n\t\tattributes_fnum = PQfnumber(res, \"attributes\");\n\t\tfnums_inited = 1;\n\t}\n\n\tGET_PARAM_BIGINT(res, 0, ps->sv_jobidnumber, sv_jobidnumber_fnum);\n\tGET_PARAM_BIN(res, 0, raw_array, attributes_fnum);\n\n\t/* convert attributes from postgres raw array format */\n\trc = dbarray_to_attrlist(raw_array, &ps->db_attr_list);\n\n\tPQclear(res);\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\tDeletes attributes of a server\n *\n * @param[in]\tconn - Connection handle\n * @param[in]\tobj_id  - server id\n * @param[in]\tattr_list - List of attributes\n *\n * @return      Error code\n * @retval\t 0 - Success\n * @retval\t-1 - On Failure\n *\n */\nint\npbs_db_del_attr_svr(void *conn, void *obj_id, pbs_db_attr_list_t *attr_list)\n{\n\tchar *raw_array = NULL;\n\tint len = 0;\n\tint rc;\n\n\tif ((len = attrlist_to_dbarray_ex(&raw_array, attr_list, 1)) <= 0)\n\t\treturn -1;\n\n\tSET_PARAM_BIN(conn_data, raw_array, len, 0);\n\n\trc = db_cmd(conn, STMT_REMOVE_SVRATTRS, 1);\n\n\treturn rc;\n}\n"
  },
  {
    "path": "src/lib/Libdb/pgsql/pbs_db_env",
    "content": "#!/bin/false\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n# For legacy installations, PostgreSQL was packaged together with PBS.\n# If this is a legacy installation, set up the appropriate dynamic\n# library paths. Otherwise, set some variables that are used later.\n#\n\nPGSQL_LIBSTR=\"\"\nif [ -z \"$PBS_EXEC\" ]; then\n\t. 
${PBS_CONF_FILE:-/etc/pbs.conf}\nfi\nif [ -d \"$PBS_EXEC/pgsql\" ]; then\n\t# Using PostgreSQL packaged with PBS\n\tif [ -n \"$PGSQL_INST_DIR\" ]; then\n\t\tPGSQL_DIR=\"$PGSQL_INST_DIR\"\n\t\tPGSQL_BIN=\"$PGSQL_INST_DIR/bin\"\n\telse\n\t\tPGSQL_DIR=\"$PBS_EXEC/pgsql\"\n\t\tPGSQL_BIN=\"$PBS_EXEC/pgsql/bin\"\n\tfi\n\tif [ ! -d \"$PGSQL_BIN\" ]; then\n\t\techo \"\\*\\*\\* $PGSQL_BIN directory does not exist\"\n\t\texit 1\n\tfi\n\tPGSQL_CMD=\"$PGSQL_BIN/psql\"\n\tif [ ! -x \"$PGSQL_CMD\" ]; then\n\t\techo \"\\*\\*\\* $PGSQL_BIN/psql not executable\"\n\t\texit 1\n\tfi\n\t[ -d \"$PGSQL_DIR/lib\" ] && LD_LIBRARY_PATH=\"$PGSQL_DIR/lib:$LD_LIBRARY_PATH\"\n\t[ -d \"$PGSQL_DIR/lib64\" ] && LD_LIBRARY_PATH=\"$PGSQL_DIR/lib64:$LD_LIBRARY_PATH\"\n\tPGSQL_LIBSTR=\"LD_LIBRARY_PATH=$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; \"\n\texport PGSQL_LIBSTR\nelse\n\t# Using system installed PostgreSQL package\n\tPGSQL_CMD=`type psql 2>/dev/null | cut -d' ' -f3`\n\tif [ -z \"$PGSQL_CMD\" ]; then\n\t\techo \"\\*\\*\\* psql command is not in PATH\"\n\t\texit 1\n\tfi\n\tPGSQL_CONF=`type pg_config 2>/dev/null | cut -d' ' -f3`\n\tif [ -z \"$PGSQL_CONF\" ]; then\n\t\tPGSQL_BIN=`dirname ${PGSQL_CMD}`\n\telse\n\t\tPGSQL_BIN=`${PGSQL_CONF} | awk '/BINDIR/{ print $3 }'`\n\tfi\n\tPGSQL_DIR=`dirname $PGSQL_BIN`\n\t[ \"$PGSQL_DIR\" = \"/\" ] && PGSQL_DIR=\"\"\nfi\nexport PGSQL_BIN=$PGSQL_BIN\n[ -d \"$PBS_EXEC/lib\" ] && LD_LIBRARY_PATH=\"$PBS_EXEC/lib:$LD_LIBRARY_PATH\"\nexport LD_LIBRARY_PATH\n"
  },
  {
    "path": "src/lib/Libdb/pgsql/pbs_db_schema.sql",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n\n/*\n * pbs_schema.sql - contains sql code to re-create the PBS database schema\n *\n */\n\ndrop schema pbs cascade; -- Drop any existing schema called pbs\ncreate schema pbs;\t -- Create a new schema called pbs\ncreate extension hstore; -- Create the hstore extension if it does not exist\n---------------------- VERSION -----------------------------\n\n/*\n * Table pbs.info holds information about the schema\n * - The schema version, used for migrating and updating PBS\n */\nCREATE TABLE pbs.info (\n    pbs_schema_version TEXT    NOT NULL\n);\n\nINSERT INTO pbs.info values('1.5.0'); /* schema version */\n\n---------------------- SERVER ------------------------------\n\n/*\n * Table pbs.server holds server object information\n */\nCREATE TABLE pbs.server (\n    sv_jobidnumber  BIGINT      NOT NULL,\n    sv_savetm       TIMESTAMP   NOT NULL,\n    sv_creattm      TIMESTAMP   NOT NULL,\n    attributes      hstore      NOT NULL DEFAULT ''\t\n);\n\n---------------------- SCHED -------------------------------\n\n/*\n * Table pbs.scheduler holds scheduler instance information\n */\nCREATE TABLE pbs.scheduler (\n    sched_name      TEXT        NOT NULL,\n    sched_savetm    TIMESTAMP   NOT NULL,\n    sched_creattm   TIMESTAMP   NOT NULL,\n    attributes      hstore      NOT NULL default '',\t\n    CONSTRAINT scheduler_pk PRIMARY KEY 
(sched_name)\n);\n\n---------------------- NODE --------------------------------\n\n/*\n * Table pbs.mominfo_time holds information about the generation and time of\n * the host to vnode map\n */\nCREATE TABLE pbs.mominfo_time (\n    mit_time    BIGINT,\n    mit_gen     INTEGER\n);\n\n/*\n * Table pbs.node holds information about PBS nodes\n */\nCREATE TABLE pbs.node (\n    nd_name         TEXT        NOT NULL,\n    mom_modtime     BIGINT,\n    nd_hostname     TEXT        NOT NULL,\n    nd_state        INTEGER     NOT NULL,\n    nd_ntype        INTEGER     NOT NULL,\n    nd_pque         TEXT,\n    nd_index        INTEGER     NOT NULL,\n    nd_savetm       TIMESTAMP   NOT NULL,\n    nd_creattm      TIMESTAMP   NOT NULL,\n    attributes      hstore      NOT NULL default '',\n    CONSTRAINT pbsnode_pk PRIMARY KEY (nd_name)\n);\nCREATE INDEX nd_idx_cr\nON pbs.node\n( nd_creattm );\n\n---------------------- QUEUE -------------------------------\n\n/*\n * Table pbs.queue holds queue information\n */\nCREATE TABLE pbs.queue (\n    qu_name     TEXT        NOT NULL,\n    qu_type     INTEGER     NOT NULL,\n    qu_creattm  TIMESTAMP   NOT NULL,\n    qu_savetm   TIMESTAMP   NOT NULL,\n    attributes  hstore      NOT NULL default '',\n    CONSTRAINT queue_pk PRIMARY KEY (qu_name)\n);\nCREATE INDEX que_idx_cr\nON pbs.queue\n( qu_creattm );\n\n\n---------------------- RESERVATION -------------------------\n\n/*\n * Table pbs.resv holds reservation information\n */\nCREATE TABLE pbs.resv (\n    ri_resvID       TEXT        NOT NULL,\n    ri_queue        TEXT        NOT NULL,\n    ri_state        INTEGER     NOT NULL,\n    ri_substate     INTEGER     NOT NULL,\n    ri_stime        BIGINT      NOT NULL,\n    ri_etime        BIGINT      NOT NULL,\n    ri_duration     BIGINT      NOT NULL,\n    ri_tactive      INTEGER     NOT NULL,\n    ri_svrflags     INTEGER     NOT NULL,\n    ri_savetm       TIMESTAMP   NOT NULL,\n    ri_creattm      TIMESTAMP   NOT NULL,\n    attributes      hstore 
     NOT NULL default '',\n    CONSTRAINT resv_pk PRIMARY KEY (ri_resvID)\n);\n\n\n---------------------- JOB ---------------------------------\n\n/*\n * Table pbs.job holds job information\n */\nCREATE TABLE pbs.job (\n    ji_jobid        TEXT        NOT NULL,\n    ji_state        INTEGER     NOT NULL,\n    ji_substate     INTEGER     NOT NULL,\n    ji_svrflags     INTEGER     NOT NULL,\n    ji_stime        BIGINT,\n    ji_queue        TEXT        NOT NULL,\n    ji_destin       TEXT,\n    ji_un_type      INTEGER     NOT NULL,\n    ji_exitstat     INTEGER,\n    ji_quetime      BIGINT,\n    ji_rteretry     BIGINT,\n    ji_fromsock     INTEGER,\n    ji_fromaddr     BIGINT,\n    ji_jid          TEXT,\n    ji_credtype     INTEGER,\n    ji_qrank        BIGINT      NOT NULL,\n    ji_savetm       TIMESTAMP   NOT NULL,\n    ji_creattm      TIMESTAMP   NOT NULL,\n    attributes      hstore      NOT NULL default '',\n    CONSTRAINT jobid_pk PRIMARY KEY (ji_jobid)\n);\n\nCREATE INDEX job_rank_idx\nON pbs.job\n( ji_qrank );\n\n\n/*\n * Table pbs.job_scr holds the job script\n */\nCREATE TABLE pbs.job_scr (\n    ji_jobid    TEXT       NOT NULL,\n    script      TEXT\n);\nCREATE INDEX job_scr_idx ON pbs.job_scr (ji_jobid);\n\n---------------------- END OF SCHEMA -----------------------\n"
  },
  {
    "path": "src/lib/Libdb/pgsql/pbs_db_utility",
    "content": "#!/bin/sh\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n. 
${PBS_CONF_FILE:-/etc/pbs.conf}\n\n\ntrap cleanup 1 2 3 15\n\ndir=`dirname $0`\nCWD=`pwd`\nupgrade=0\nPBS_AES_SWITCH_VER='14.0'\nchange_locale=0\nopt_err=1\nopt=\"$1\"\n\n#---------------------------------------------------------------------------------------------------\n# Helper functions.\ncleanup() {\n\tcd ${CWD}\n\trm -rf ${data_dir}\n\trm -f ${schema}\n\trm -f ${tmp_file}\n}\n\ncleanup_on_finish () {\n\t# change back to our dir and quit\n\tcd ${CWD}\n\terr=`rm -f ${schema}`\n\tif [ $? -ne 0 ]; then\n\t\techo \"${err}\"\n\tfi\n}\n\nset_db_trust_login() {\n\tdatastore_dir=$1\n\terr=`cp -p ${datastore_dir}/pg_hba.conf ${datastore_dir}/pg_hba.conf.orig 2>&1`\n\tif [ $? -ne 0 ]; then\n\t\techo \"${err}\"\n\t\treturn 1\n\tfi\n\terr=`chown ${PBS_DATA_SERVICE_USER} ${datastore_dir}/pg_hba.conf.orig`\n\tif [ $? -ne 0 ]; then\n\t\techo \"${err}\"\n\t\treturn 1\n\tfi\n\terr=`sed 's/md5/trust/g' ${datastore_dir}/pg_hba.conf > ${datastore_dir}/pg_hba.conf.new 2>&1`\n\tif [ $? -ne 0 ]; then\n\t\techo \"${err}\"\n\t\treturn 1\n\tfi\n\terr=`chown ${PBS_DATA_SERVICE_USER} ${datastore_dir}/pg_hba.conf.new`\n\tif [ $? -ne 0 ]; then\n\t\techo \"${err}\"\n\t\treturn 1\n\tfi\n\terr=`mv ${datastore_dir}/pg_hba.conf.new ${datastore_dir}/pg_hba.conf 2>&1`\n\tif [ $? -ne 0 ]; then\n\t\techo \"${err}\"\n\t\treturn 1\n\tfi\n}\n\nrevoke_db_trust_login() {\n\tdatastore_dir=$1\n\terr=`cp -p ${datastore_dir}/pg_hba.conf.orig ${datastore_dir}/pg_hba.conf 2>&1`\n\tif [ $? 
-eq 0 ]; then\n\t\trm -f ${datastore_dir}/pg_hba.conf.orig\n\telse\n\t\techo \"${err}\"\n\t\treturn 1\n\tfi\n}\n\nbackupdir() {\n\tif [ -d \"$1\" -a -d \"$2\" ]; then\n\t\tbackupdir=\"$(basename $1).pre.${PBS_VERSION}\"\n\t\techo \"*** Backing up $1 to ${2}/${backupdir}\"\n\t\tmv \"$1\" \"${2}/${backupdir}\"\n\tfi\n}\n\n# DB Upgrade functions\nupgrade_pbs_database() {\n\tsys_pgsql_ver=$1\n\told_pgsql_ver=$2\n\tuser=\"${PBS_DATA_SERVICE_USER}\"\n\tinst_dir=\"${PGSQL_DIR}\"\n\tdata_dir=\"${PBS_HOME}/datastore\"\n\n\t# Check for existence of old data service directory.\n\tif [ -d \"${PBS_HOME}/pgsql.old\" ]; then\n\t\told_inst_dir=\"${PBS_HOME}/pgsql.old\"\n\t\told_data_dir=\"${PBS_HOME}/datastore.old\"\n\telif [ -d \"${PBS_HOME}/pgsql.forupgrade\" ]; then\n\t\told_inst_dir=\"${PBS_HOME}/pgsql.forupgrade\"\n\t\told_data_dir=\"${PBS_HOME}/datastore.forupgrade\"\n\telse\n\t\techo \"Data service directory from previous PBS installation not found,\"\n\t\techo \"Datastore upgrade cannot continue\"\n\t\treturn 1\n\tfi\n\n\t# strip the minor version from sys_pgsql_ver if old_pgsql_ver does not have minor version (for comparison).\n\t[[ ! $old_pgsql_ver =~ \".\" ]] && sys_pgsql_ver=$(echo $sys_pgsql_ver | cut -d '.' 
-f 1)\n\n\t[ ${sys_pgsql_ver%.*} -eq ${old_pgsql_ver%.*} ] && [ ${sys_pgsql_ver#*.} \\> ${old_pgsql_ver#*.} ] || [ ${sys_pgsql_ver%.*} -gt ${old_pgsql_ver%.*} ];\n\tresult=$?\n\tif [ ${result} -eq 0 ]; then\n\t\tif [ -d \"$PBS_EXEC/pgsql\" ]; then\n\t\t\t#Start upgrade process of datastore\n\t\t\tupgrade_db\n\t\t\treturn $?\n\t\telse\n\t\t\treturn 2\n\t\tfi\n\telif [ \"${old_pgsql_ver}\" = \"${sys_pgsql_ver}\" ]; then\n\t\treturn 0\n\tfi\n\n\techo \"Upgrade from version ${old_pgsql_ver} unsupported\"\n\treturn 1\n}\n\nupgrade_db() {\n\t#\n\t# This routine will install a postgres database cluster and\n\t# perform the pre-upgrade checks for the datastore\n\t# with appropriate authentication management.\n\t#\n\n\tserver_ctl=\"${PBS_EXEC}/sbin/pbs_dataservice\"\n\tif [ ! -x \"${server_ctl}\" ]; then\n\t\techo \"${server_ctl} not found\"\n\t\treturn 1\n\tfi\n\n\tif [ ! -x \"${PBS_EXEC}/sbin/pbs_ds_password\" ]; then\n\t\techo \"${PBS_EXEC}/sbin/pbs_ds_password not found\"\n\t\treturn 1\n\tfi\n\n\tif [ ! -x \"${inst_dir}/bin/pg_upgrade\" ]; then\n\t\techo \"${inst_dir}/bin/pg_upgrade not found\"\n\t\treturn 1\n\tfi\n\n\t# Backup datastore directory, if backup directory already\n\t# present then exit.\n\tif [ -d \"${old_data_dir}\" ]; then\n\t\techo \"Files from previous datastore upgrade found,\"\n\t\techo \"Datastore upgrade cannot continue\"\n\t\treturn 1\n\telse\n\t\terr=`mv ${data_dir} ${old_data_dir} 2>&1`\n\t\tif [ $? -ne 0 ]; then\n\t\t\techo \"${err}\"\n\t\t\treturn 1\n\t\tfi\n\tfi\n\n\t# Invoke the dataservice creation script for pbs\n\n\tupgrade=1\n\tpbs_install_db\n\tret=$?\n\n\tif [ ${ret} -ne 0 ]; then\n\t\techo \"*** Error initializing the PBS dataservice\"\n\t\techo \"Error details:\"\n\t\techo \"$resp\"\n\t\treturn ${ret}\n\tfi\n\n\t# Copy the pg_hba.conf from old cluster.\n\terr=`cp -p ${old_data_dir}/pg_hba.conf ${data_dir}/pg_hba.conf`\n\tif [ $? 
-ne 0 ]; then\n\t\techo \"${err}\"\n\t\treturn 1\n\tfi\n\n\tset_db_trust_login \"${data_dir}\"\n\tret=$?\n\tif [ $ret -ne 0 ]; then\n\t\treturn $ret\n\tfi\n\tset_db_trust_login \"${old_data_dir}\"\n\tret=$?\n\tif [ $ret -ne 0 ]; then\n\t\treturn $ret\n\tfi\n\n\tCWD=`pwd`\n\tcd \"${data_dir}\"\n\t#Perform pg_upgrade -c to check if we can upgrade the cluster or not\n\terr=`su ${user} -c \"/bin/sh -c '${PGSQL_LIBSTR} ${inst_dir}/bin/pg_upgrade -b ${old_inst_dir}/bin -B ${inst_dir}/bin -d ${old_data_dir} -D ${data_dir} -c'\" 2>&1`\n\tif [ $? -ne 0 ]; then\n\t\techo \"Refer to the pg_upgrade log files at $PBS_HOME/datastore/pg_upgrade_internal.log,\"\n\t\techo \"$PBS_HOME/datastore/pg_upgrade_server.log and\"\n\t\techo \"$PBS_HOME/datastore/pg_upgrade_utility.log for more information\"\n\t\trevoke_db_trust_login \"${data_dir}\"\n\t\tret=$?\n\t\tif [ $ret -ne 0 ]; then\n\t\t\treturn $ret\n\t\tfi\n\n\t\trevoke_db_trust_login \"${old_data_dir}\"\n\t\tret=$?\n\t\tif [ $ret -ne 0 ]; then\n\t\t\treturn $ret\n\t\tfi\n\n\t\treturn 1\n\tfi\n\n\t#Perform pg_upgrade for database upgrade\n\terr=`su ${user} -c \"/bin/sh -c '${PGSQL_LIBSTR} ${inst_dir}/bin/pg_upgrade -b ${old_inst_dir}/bin -B ${inst_dir}/bin -d ${old_data_dir} -D ${data_dir}'\" 2>&1`\n\tif [ $? -ne 0 ]; then\n\t\techo \"Refer to the pg_upgrade log files at $PBS_HOME/datastore/pg_upgrade_internal.log,\"\n\t\techo \"$PBS_HOME/datastore/pg_upgrade_server.log and\"\n\t\techo \"$PBS_HOME/datastore/pg_upgrade_utility.log for more information\"\n\t\trevoke_db_trust_login \"${data_dir}\"\n\t\tret=$?\n\t\tif [ $ret -ne 0 ]; then\n\t\t\treturn $ret\n\t\tfi\n\n\t\trevoke_db_trust_login \"${old_data_dir}\"\n\t\tret=$?\n\t\tif [ $ret -ne 0 ]; then\n\t\t\treturn $ret\n\t\tfi\n\n\t\treturn 1\n\tfi\n\n\t# start the dataservice\n\t${server_ctl} start > /dev/null\n\tif [ $? 
-ne 0 ]; then\n\t\techo \"Error starting PBS Data Service\"\n\t\treturn 1\n\tfi\n\n\t# Optimizer statistics are not transferred by pg_upgrade, so do it manually.\n\tENVSTR=\"PGPORT=${PBS_DATA_SERVICE_PORT}; export PGPORT; PGHOST=${PBS_SERVER}; export PGHOST; PGUSER=${user}; export PGUSER; \"\n\terr=`su ${user} -c \"/bin/sh -c '${PGSQL_LIBSTR} ${ENVSTR} ${data_dir}/analyze_new_cluster.sh'\"`\n\n\t# Update locale of pbs database to C\n\tif [ ${change_locale} -eq 1 ]; then\n\t\t${inst_dir}/bin/psql -A -t -p ${PBS_DATA_SERVICE_PORT} -d pbs_datastore -U ${user} -c \"update pg_database set datcollate='C', datctype='C'\" > /dev/null\n\t\tret=$?\n\t\tif [ $ret -ne 0 ]; then\n\t\t\treturn $ret\n\t\tfi\n\tfi\n\n\t# stop the dataservice\n\t${server_ctl} stop > /dev/null\n\tif [ $? -ne 0 ]; then\n\t\techo \"Error stopping PBS Data Service\"\n\t\tkill -s SIGTERM `ps -ef | grep \"${inst_dir}/bin/postgres\" | grep -v grep | awk '{if ($3 == 1) print $2}'`\n\t\treturn 1\n\tfi\n\trevoke_db_trust_login \"${data_dir}\"\n\tret=$?\n\tif [ $ret -ne 0 ]; then\n\t\treturn $ret\n\tfi\n\n\tcd \"${CWD}\"\n\t# Delete old cluster\n\terr=`${data_dir}/delete_old_cluster.sh`\n\tret=$?\n\tif [ $ret -ne 0 ]; then\n\t\treturn $ret\n\tfi\n}\n\n\npbs_install_db () {\n\tlocale=\"\"\n\tif [ \"${change_locale}\" = \"0\" ]; then\n\t\tlocale=\"--locale=C\"\n\tfi\n\n\tif [ ! 
-z \"${PBS_DATA_SERVICE_HOST}\" ]; then\n\t\techo \"Custom data service host used...configure manually\"\n\t\texit 0\n\tfi\n\n\tif [ -z \"${PBS_DATA_SERVICE_PORT}\" ]; then\n\t\tPBS_DATA_SERVICE_PORT=\"15007\"\n\tfi\n\texport PBS_DATA_SERVICE_PORT\n\n\tbin_dir=\"${PGSQL_BIN}\"\n\tdata_dir=\"${PBS_HOME}/datastore\"\n\tserver_ctl=\"${PBS_EXEC}/sbin/pbs_dataservice\"\n\ttmp_file=\"${PBS_HOME}/spool/tmp_inst_$$\"\n\tdb_user=\"${PBS_HOME}/server_priv/db_user\"\n\n\t# Get non symbolic absolute path of pgsql directory\n\treal_inst_dir=\"`/bin/ls -l $PBS_EXEC | awk '{print $NF \"/pgsql\"}'`\"\n\n\tschema_in=\"${PBS_EXEC}/libexec/pbs_db_schema.sql\"\n\tif [ ! -f \"${schema_in}\" ]; then\n\t\techo \"PBS datastore schema file not found\"\n\t\texit 1\n\tfi\n\tschema=\"${PBS_HOME}/spool/pbs_install_db_schema\"\n\tcat ${schema_in} > ${schema}\n\tchmod 600 ${schema}\n\tif [ $? -ne 0 ]; then\n\t\techo \"chmod of ${schema} failed\"\n\t\trm -f ${schema}\n\t\texit 1\n\tfi\n\n\tlwd=`pwd`\n\n\tif [ ! -d \"${bin_dir}\" ]; then\n\t\t# Using the system installed Postgres instead\n\t\tinitdb_loc=`type initdb 2>/dev/null | cut -d' ' -f3`\n\t\tif [ -z \"$initdb_loc\" ]; then\n\t\t\techo \"PBS Data Service directory ${bin_dir}\"\n\t\t\techo \"not present and postgresql-server not installed.\"\n\t\t\trm -f ${schema}\n\t\t\texit 1\n\t\tfi\n\t\tbin_dir=`dirname $initdb_loc`\n\tfi\n\n\tuser=\"${PBS_DATA_SERVICE_USER}\"\n\tport=\"${PBS_DATA_SERVICE_PORT}\"\n\n\tchown ${user} ${schema}\n\tif [ $? -ne 0 ]; then\n\t\techo \"chown of ${schema} to user ${user} failed\"\n\t\trm -f ${schema}\n\t\texit 1\n\tfi\n\n\tif [ ! -x \"${bin_dir}/initdb\" ]; then\n\t\techo \"${bin_dir} exists, binaries missing...exiting\"\n\t\trm -f ${schema}\n\t\texit 1\n\tfi\n\n\tif [ -d \"${data_dir}/base\" ]; then\n\t\tolduser=`ls -ld ${data_dir} | awk '{print $3}'`\n\t\tif [ $? 
-ne 0 ]; then\n\t\t\techo \"Failed to stat directory ${data_dir}\"\n\t\t\trm -f ${schema}\n\t\t\texit 1\n\t\tfi\n\t\tif [ \"$olduser\" != \"$user\" ]; then\n\t\t\techo \"Existing PBS Data Store ${data_dir} owned by different user ${olduser}\"\n\t\t\techo \"Use the same user name or install in a different location\"\n\t\t\trm -f ${schema}\n\t\t\texit 1\n\t\tfi\n\t\trm -f ${schema}\n\t\texit 2\n\tfi\n\n\tif [ ! -d \"${data_dir}\" ]; then\n\t\tmkdir -p \"${data_dir}\"\n\t\tif [ $? -ne 0 ]; then\n\t\t\techo \"Error creating dir ${data_dir}\"\n\t\t\trm -f ${schema}\n\t\t\texit 1\n\t\tfi\n\tfi\n\n\t# delete the password file, if any, since we are creating new db\n\t[ ${upgrade} -eq 0 ] && rm -f \"${PBS_HOME}/server_priv/db_password\"\n\tpasswd=\"${user}\"\n\n\tchown ${user} ${data_dir}\n\tif [ $? -ne 0 ]; then\n\t\techo \"Chown of ${data_dir} to user ${user} failed\"\n\t\trm -f ${schema}\n\t\texit 1\n\tfi\n\n\tchmod 700 ${data_dir}\n\tif [ $? -ne 0 ]; then\n\t\techo \"chmod of ${data_dir} failed\"\n\t\trm -f ${schema}\n\t\texit 1\n\tfi\n\n\techo \"Creating the PBS Data Service...\"\n\n\t# change directory to data_dir to ensure that we don't get cd errors from postgres later\n\tcd ${data_dir}\n\n\terr=`su ${user} -c \"/bin/sh -c '${PGSQL_LIBSTR} ${bin_dir}/initdb -D ${data_dir} -U \\\"${user}\\\" -E SQL_ASCII ${locale}'\" 2>&1`\n\n\tif [ $? -ne 0 ]; then\n\t\techo \"$err\"\n\t\techo \"Error creating PBS datastore\"\n\t\tcleanup\n\t\texit 1\n\tfi\n\n\t# check for postgres config files existence\n\tif [ ! -f \"${data_dir}/postgresql.conf\" ]; then\n\t\techo \"PBS Data Service Config files not found\"\n\t\tcleanup\n\t\texit 1\n\tfi\n\n\tif [ ! 
-f \"${data_dir}/pg_hba.conf\" ]; then\n\t\techo \"PBS Data Sevice Config files not found\"\n\t\tcleanup\n\t\texit 1\n\tfi\n\n\t# update postgresql.conf\n\tsed \"{\n\t\ts/#checkpoint_segments = 3/checkpoint_segments = 20/g\n\t\ts/#port = 5432/port = ${port}/g\n\t\ts/#listen_addresses = 'localhost'/listen_addresses = '*'/g\n\t\ts/#standard_conforming_strings = off/standard_conforming_strings = on/g\n\t\ts/#logging_collector = off/logging_collector = on/g\n\t\ts/#log_directory = 'pg_log'/log_directory = 'pg_log'/g\n\t\ts/#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'/log_filename = 'pbs_dataservice_log.%a'/g\n\t\ts/#log_truncate_on_rotation = off/log_truncate_on_rotation = on/g\n\t\ts/#log_rotation_age = 1d/log_rotation_age = 1440/g\n\t\ts/#log_line_prefix = ''/log_line_prefix = '%t'/g\n\t\t}\" ${data_dir}/postgresql.conf > ${tmp_file}\n\tif [ $? -ne 0 ]; then\n\t\techo \"Error creating PBS datastore\"\n\t\tcleanup\n\t\texit 1\n\tfi\n\tmv ${tmp_file} ${data_dir}/postgresql.conf\n\tif [ $? -ne 0 ]; then\n\t\techo \"Error moving ${tmp_file} to ${data_dir}/postgresql.conf\"\n\t\tcleanup\n\t\texit 1\n\tfi\n\n\tchown ${user} ${data_dir}/postgresql.conf\n\tif [ $? -ne 0 ]; then\n\t\techo \"Error setting ownership to file ${data_dir}/postgresql.conf\"\n\t\tcleanup\n\t\texit 1\n\tfi\n\n\tchmod 600 ${data_dir}/postgresql.conf\n\tif [ $? -ne 0 ]; then\n\t\techo \"Error setting permissions to file ${data_dir}/postgresql.conf\"\n\t\tcleanup\n\t\texit 1\n\tfi\n\n\t# Copy pgsql directory to PBS_HOME (as pgsql.forupgrade) for it's future upgrade\n\t[ ! 
-d \"${PBS_HOME}/pgsql.forupgrade\" -a -d \"${PBS_EXEC}/pgsql\" -a -d \"${PBS_HOME}\" ] && cp -pr --no-preserve=timestamps \"${PBS_EXEC}/pgsql\" \"${PBS_HOME}/pgsql.forupgrade\" 2>&1\n\n\tif [ $upgrade -eq 1 ]; then\n\t\tcleanup_on_finish\n\t\texit 0\n\tfi\n\n\t# Add IPV6 local address to pg_hba.conf so the pbs_ds_password is fine\n\techo \"host    all             all             ::1/128                 trust\" >> ${data_dir}/pg_hba.conf\n\n\t${server_ctl} start\n\tif [ $? -ne 0 ]; then\n\t\techo \"Error starting PBS Data Service\"\n\t\tcleanup\n\t\texit 1\n\tfi\n\t# Wait for postgres to start.\n\ttries=5\n\twhile [ $tries -ge 0 ]\n\tdo\n\t\t${server_ctl} status > /dev/null 2>&1\n\t\tret=$?\n\t\tif [ $ret -eq 0 ]; then\n\t\t\tbreak\n\t\tfi\n\t\ttries=$((tries-1))\n\t\tsleep 2\n\tdone\n\tif [ $ret -ne 0 ]; then\n\t\techo \"Error starting PBS Data Service\"\n\t\tcleanup\n\t\texit 1\n\tfi\n\n\terr=`su ${user} -c \"/bin/sh -c '${PGSQL_LIBSTR} ${bin_dir}/createdb -p ${port} pbs_datastore'\" 2>&1`\n\n\tif [ $? -ne 0 ]; then\n\t\techo \"$err\"\n\t\techo \"Error creating PBS datastore\"\n\t\t${server_ctl} stop > /dev/null 2>&1\n\t\tcleanup\n\t\texit 1\n\tfi\n\n\t# now install the pbs datastore schema onto the datastore\n\terr=`su ${user} -c \"/bin/sh -c '${PGSQL_LIBSTR} ${bin_dir}/psql -p ${port} -d pbs_datastore -U \\\"${user}\\\" -f ${schema}'\" 2>&1`\n\n\tif [ $? -ne 0 ]; then\n\t\techo $err\n\t\techo \"Error initializing PBS datastore\"\n\t\t${server_ctl} stop > /dev/null 2>&1\n\t\tcleanup\n\t\texit 1\n\tfi\n\n\terr=`${PBS_EXEC}/sbin/pbs_ds_password -r`\n\tif [ $? -ne 0 ]; then\n\t\techo $err\n\t\techo \"Error setting password for PBS Data Service\"\n\t\t${server_ctl} stop > /dev/null 2>&1\n\t\tcleanup\n\t\texit 1\n\tfi\n\n\t# stop the dataservice\n\t${server_ctl} stop\n\tif [ $? 
-ne 0 ]; then\n\t\techo $err\n\t\techo \"Error stopping PBS Data Service\"\n\t\tkill -TERM `ps -ef | grep \"${bin_dir}/postgres\" | grep -v grep | awk '{if ($3 == 1) print $2}'`\n\t\tcleanup\n\t\texit 1\n\tfi\n\n\t# update the pg_hba.conf, so that no passwordless entry is allowed\n\tnum=`grep -n \"#.*TYPE.*DATABASE.*USER.*ADDRESS.*METHOD\" ${data_dir}/pg_hba.conf | awk -F: '{print $1}'`\n\thead -n $num ${data_dir}/pg_hba.conf > ${tmp_file}\n\tmv ${tmp_file} ${data_dir}/pg_hba.conf\n\n\techo \"# IPv4 local connections: \" >> ${data_dir}/pg_hba.conf\n\techo \"local   all             all                                     md5\" >> ${data_dir}/pg_hba.conf\n\techo \"host    all             all             0.0.0.0/0               md5\" >> ${data_dir}/pg_hba.conf\n\techo \"host    all             all             127.0.0.1/32            md5\" >> ${data_dir}/pg_hba.conf\n\techo \"# IPv6 local connections:\" >> ${data_dir}/pg_hba.conf\n\techo \"host    all             all             ::1/128                 md5\" >> ${data_dir}/pg_hba.conf\n\n\tchown ${user} ${data_dir}/pg_hba.conf\n\tchmod 600 ${data_dir}/pg_hba.conf\n\n\tcleanup_on_finish\n\texit 0\n}\n\nif [ \"${opt}\" = \"upgrade_db\" ]; then\n\topt_err=0\n\t# Store the old PBS VERSION for later use\n\tif [ -f \"${PBS_HOME}/pbs_version\" ]; then\n\t\told_pbs_version=`cat ${PBS_HOME}/pbs_version`\n\tfi\n\n\tdata_dir=\"${PBS_HOME}/datastore\"\n\tif [ ! -f \"${data_dir}/PG_VERSION\" ]; then\n\t\techo \"Database version file: ${data_dir}/PG_VERSION not found, cannot continue\"\n\t\texit 1\n\tfi\n\tsys_pgsql_ver=$(echo `${PGSQL_BIN}/postgres -V` | awk 'NR==1 {print $NF}' | cut -d '.' 
-f 1,2)\n\told_pgsql_ver=`cat ${data_dir}/PG_VERSION`\n\tif [ \"${sys_pgsql_ver}\" != \"${old_pgsql_ver}\" ]; then\n\t\t# Upgrade postgres\n\t\tupgrade_pbs_database \"${sys_pgsql_ver}\" \"${old_pgsql_ver}\"\n\t\tret=$?\n\t\tif [ $ret -ne 0 ]; then\n\t\t\tif [ $ret -eq 2 ]; then\n\t\t\t\techo \"It appears that PostgreSQL has been upgraded independently of PBS.\"\n\t\t\t\techo \"The PBS database must be manually upgraded. Please refer to the\"\n\t\t\t\techo \"documentation/release notes for details.\"\n\t\t\telse\n\t\t\t\techo \"Failed to upgrade PBS Datastore\"\n\t\t\tfi\n\t\t\texit $ret\n\t\telse\n\t\t\tif [ -d \"${old_inst_dir}\" ]; then\n\t\t\t\tbackupdir \"$old_inst_dir\" \"$PBS_HOME\"\n\t\t\t\tif [ $? -ne 0 ]; then\n\t\t\t\t\techo \"Failed to backup $old_inst_dir, please follow the below instructions:\"\n\t\t\t\t\techo \"*** Backup \"$old_inst_dir\" if you need to downgrade pgsql later on.\"\n\t\t\t\t\techo \"*** For future upgrades to be successful run the below command.\"\n\t\t\t\t\techo \"*** cp -pr ${PBS_EXEC}/pgsql ${PBS_HOME}/pgsql.forupgrade\"\n\t\t\t\telse\n\t\t\t\t\techo \"*** ${PBS_HOME}/$(basename ${old_inst_dir}).pre.${PBS_VERSION} may need to be manually removed if you do not wish to downgrade PBS.\"\n\t\t\t\tfi\n\t\t\tfi\n\t\tfi\n\tfi\n\n\t# do schema upgrade\n\tset_db_trust_login \"${PBS_HOME}/datastore\"\n\t${PBS_EXEC}/libexec/pbs_schema_upgrade ${PBS_DATA_SERVICE_PORT} ${PBS_DATA_SERVICE_USER}\n\tret=$?\n\tif [ $ret -ne 0 ]; then\n\t\trevoke_db_trust_login \"${PBS_HOME}/datastore\"\n\t\techo \"Failed to upgrade PBS Datastore\"\n\t\texit $ret\n\tfi\n\n\t# We need to regenerate the db_password file since we have changed encryption/decryption\n\t# library from DES to AES in PBS Version PBS_AES_SWITCH_VER\n\tif [ \"$old_pbs_version\" \\< \"${PBS_AES_SWITCH_VER}\" ] ;then\n\t\trm -f  \"${PBS_HOME}/server_priv/db_password\"\n\t\terr=`${PBS_EXEC}/sbin/pbs_ds_password -r`\n\t\tif [ $? 
-ne 0 ]; then\n\t\t\techo $err\n\t\t\techo \"Error setting password for PBS Data Service\"\n\t\t\t${server_ctl} stop > /dev/null 2>&1\n\t\t\trevoke_db_trust_login \"${PBS_HOME}/datastore\"\n\t\t\texit 1\n\t\tfi\n\tfi\n\trevoke_db_trust_login \"${PBS_HOME}/datastore\"\n\nelif [ \"${opt}\" = \"install_db\" ]; then\n\topt_err=0\n\tpbs_install_db\nfi\n\nif [ \"${opt_err}\" -eq 1 ]; then\n\techo \"Usage: pbs_db_utility [install_db|upgrade_db]\"\n\texit 1\nfi\n"
  },
  {
    "path": "src/lib/Libdb/pgsql/pbs_ds_systemd",
    "content": "#!/bin/sh\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n. ${PBS_CONF_FILE:-/etc/pbs.conf}\n\nis_systemd=1\n_status=$(systemctl is-system-running 2>/dev/null)\nif [ ! 
\"${_status}\" -o \"x${_status}\" = \"xoffline\" -o \"x${_status}\" = \"xunknown\" ] ; then\n    is_systemd=0\nfi\nif [ $is_systemd -eq 1 ] ; then\n    SYSTEMD_CGROUP=`grep ^cgroup /proc/mounts | grep systemd | head -1 | cut -d ' ' -f2`\n    if [ ! -d $SYSTEMD_CGROUP/system.slice/pbs.service ] ; then\n        mkdir -p $SYSTEMD_CGROUP/system.slice/pbs.service\n    fi\n\n    pstmstr_pid_found=0\n    try=0\n    while [ $try -lt 10 -a $pstmstr_pid_found -ne 1 ]\n    do\n        sleep 1\n        if [ -f ${PBS_HOME}/datastore/postmaster.pid ] ; then\n            pstmstr_pid_found=1\n        fi\n        try=`expr $try + 1`\n    done\n\n    if [ $pstmstr_pid_found -eq 1 ] ; then\n        P_PID=`head -n 1 ${PBS_HOME}/datastore/postmaster.pid`\n        if [ -n \"$P_PID\" ] ; then\n            echo $P_PID >> $SYSTEMD_CGROUP/system.slice/pbs.service/tasks\n            pidlist=`pgrep -P $P_PID`\n            if [ -n \"$pidlist\" ] ; then\n                for PID in $pidlist; do\n                    echo $PID >> $SYSTEMD_CGROUP/system.slice/pbs.service/tasks\n                done\n            fi\n        fi\n    fi\nfi\nexit 0\n"
  },
  {
    "path": "src/lib/Libdb/pgsql/pbs_schema_upgrade",
    "content": "#!/bin/sh\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nupgrade_pbs_schema_from_v1_0_0() {\n\n\t${PGSQL_DIR}/bin/psql -p ${PBS_DATA_SERVICE_PORT} -d pbs_datastore -U ${PBS_DATA_SERVICE_USER} <<-EOF > /dev/null\n\t\talter table pbs.node alter column nd_index drop default;\n\t\talter table pbs.node drop constraint node_nd_index_key;\n\t\tcreate index job_scr_idx on pbs.job_scr (ji_jobid);\n\t\tdrop sequence pbs.node_sequence;\n\t\t\\set PBS_MAXHOSTNAME\t'64'\n\t\tcreate sequence pbs.svr_id_seq;\n\t\talter table pbs.server add column sv_hostname varchar(:PBS_MAXHOSTNAME);\n\t\tupdate pbs.server set sv_hostname = sv_name;\n\t\talter table pbs.server alter column sv_hostname set not null;\n\tEOF\n\tret=$?\n\tif [ $ret -ne 0 ]; then\n\t\techo \"Some datastore transformations failed to complete\"\n\t\techo \"Please check dataservice logs\"\n\t\treturn $ret\n\tfi\n\n\t${PBS_EXEC}/sbin/pbs_server -t updatedb > /dev/null\n\tret=$?\n\tif [ $ret -ne 0 -o -f ${PBS_HOME}/server_priv/serverdb ] ; then\n\t\techo \"*** Error during overlay upgrade\"\n\t\treturn $ret\n\tfi\n}\n\nupgrade_pbs_schema_from_v1_1_0() {\n\n\t${PGSQL_DIR}/bin/psql -p ${PBS_DATA_SERVICE_PORT} -d pbs_datastore -U ${PBS_DATA_SERVICE_USER} <<-EOF > /dev/null\n\t\tALTER TABLE pbs.info ALTER COLUMN pbs_schema_version TYPE text;\n\t\tALTER TABLE pbs.job ALTER COLUMN ji_jobid TYPE text;\n\t\tALTER TABLE pbs.job ALTER COLUMN ji_sv_name TYPE text;\n\t\tALTER 
TABLE pbs.job ALTER COLUMN ji_queue TYPE text;\n\t\tALTER TABLE pbs.job ALTER COLUMN ji_destin TYPE text;\n\t\tALTER TABLE pbs.job ALTER COLUMN ji_4jid TYPE text;\n\t\tALTER TABLE pbs.job ALTER COLUMN ji_4ash TYPE text;\n\t\tALTER TABLE pbs.job_attr ALTER COLUMN ji_jobid TYPE text;\n\t\tALTER TABLE pbs.job_attr ALTER COLUMN attr_name TYPE text;\n\t\tALTER TABLE pbs.job_attr ALTER COLUMN attr_resource TYPE text;\n\t\tALTER TABLE pbs.job_scr ALTER COLUMN ji_jobid TYPE text;\n\t\tALTER TABLE pbs.node ALTER COLUMN nd_name TYPE text;\n\t\tALTER TABLE pbs.node ALTER COLUMN nd_hostname TYPE text;\n\t\tALTER TABLE pbs.node ALTER COLUMN nd_pque TYPE text;\n\t\tALTER TABLE pbs.node_attr ALTER COLUMN nd_name TYPE text;\n\t\tALTER TABLE pbs.node_attr ALTER COLUMN attr_name TYPE text;\n\t\tALTER TABLE pbs.node_attr ALTER COLUMN attr_resource TYPE text;\n\t\tALTER TABLE pbs.queue ALTER COLUMN qu_name TYPE text;\n\t\tALTER TABLE pbs.queue ALTER COLUMN qu_sv_name TYPE text;\n\t\tALTER TABLE pbs.queue_attr ALTER COLUMN qu_name TYPE text;\n\t\tALTER TABLE pbs.queue_attr ALTER COLUMN attr_name TYPE text;\n\t\tALTER TABLE pbs.queue_attr ALTER COLUMN attr_resource TYPE text;\n\t\tALTER TABLE pbs.resv ALTER COLUMN ri_resvid TYPE text;\n\t\tALTER TABLE pbs.resv ALTER COLUMN ri_sv_name TYPE text;\n\t\tALTER TABLE pbs.resv ALTER COLUMN ri_queue TYPE text;\n\t\tALTER TABLE pbs.resv_attr ALTER COLUMN ri_resvid TYPE text;\n\t\tALTER TABLE pbs.resv_attr ALTER COLUMN attr_name TYPE text;\n\t\tALTER TABLE pbs.resv_attr ALTER COLUMN attr_resource TYPE text;\n\t\tALTER TABLE pbs.scheduler ALTER COLUMN sched_name TYPE text;\n\t\tALTER TABLE pbs.scheduler ALTER COLUMN sched_sv_name TYPE text;\n\t\tALTER TABLE pbs.scheduler_attr ALTER COLUMN sched_name TYPE text;\n\t\tALTER TABLE pbs.scheduler_attr ALTER COLUMN attr_name TYPE text;\n\t\tALTER TABLE pbs.scheduler_attr ALTER COLUMN attr_resource TYPE text;\n\t\tALTER TABLE pbs.server ALTER COLUMN sv_name TYPE text;\n\t\tALTER TABLE pbs.server ALTER 
COLUMN sv_hostname TYPE text;\n\t\tALTER TABLE pbs.server_attr ALTER COLUMN sv_name TYPE text;\n\t\tALTER TABLE pbs.server_attr ALTER COLUMN attr_name TYPE text;\n\t\tALTER TABLE pbs.server_attr ALTER COLUMN attr_resource TYPE text;\n\t\tALTER TABLE pbs.subjob_track ALTER COLUMN ji_jobid TYPE text;\n\t\talter table pbs.job drop constraint job_pkey cascade;\n\t\tcreate index ji_jobid_idx on pbs.job (ji_jobid);\n\t\tdrop index pbs.job_attr_idx;\n\t\tcreate index job_attr_idx on pbs.job_attr (ji_jobid, attr_name, attr_resource);\n\t\talter table pbs.subjob_track drop constraint subjob_track_pkey cascade;\n\t\tcreate index subjob_jid_idx on pbs.subjob_track (ji_jobid, trk_index);\n\t\tupdate pbs.info set pbs_schema_version = '1.2.0';\n\tEOF\n\tret=$?\n\tif [ $ret -ne 0 ]; then\n\t\techo \"Some datastore transformations failed to complete\"\n\t\techo \"Please check dataservice logs\"\n\t\treturn $ret\n\tfi\n}\n\nupgrade_pbs_schema_from_v1_2_0() {\n\n\t${PGSQL_DIR}/bin/psql -p ${PBS_DATA_SERVICE_PORT} -d pbs_datastore -U ${PBS_DATA_SERVICE_USER} <<-EOF > /dev/null\n\t\tDROP TABLE pbs.subjob_track;\n\t\tINSERT INTO pbs.scheduler (sched_name, sched_sv_name, sched_savetm, sched_creattm) VALUES ('default', '1', localtimestamp, localtimestamp);\n\t\tALTER TABLE pbs.scheduler_attr DROP CONSTRAINT scheduler_attr_fk;\n\t\tUPDATE pbs.scheduler_attr SET sched_name='default' WHERE sched_name='1';\n\t\tALTER TABLE pbs.scheduler_attr ADD CONSTRAINT scheduler_attr_fk\n\t\t\tFOREIGN KEY (sched_name)\n\t\t\tREFERENCES pbs.scheduler (sched_name)\n\t\t\tON DELETE CASCADE\n\t\t\tON UPDATE NO ACTION\n\t\t\tNOT DEFERRABLE;\n\t\tDELETE FROM pbs.scheduler WHERE sched_name='1';\n\t\tUPDATE pbs.info SET pbs_schema_version = '1.3.0';\n\tEOF\n\tret=$?\n\tif [ $ret -ne 0 ]; then\n\t\techo \"Some datastore transformations failed to complete\"\n\t\techo \"Please check dataservice logs\"\n\t\treturn $ret\n\tfi\n}\n\nupgrade_pbs_schema_from_v1_3_0() {\n\n\t${PGSQL_DIR}/bin/psql -p 
${PBS_DATA_SERVICE_PORT} -d pbs_datastore -U ${PBS_DATA_SERVICE_USER} <<-EOF > /dev/null\n\t\tALTER TABLE pbs.job ADD CONSTRAINT jobid_pk PRIMARY KEY (ji_jobid);\n\tEOF\n\tret=$?\n\tif [ $ret -ne 0 ]; then\n\t\techo \"Primary key violation\"\n\t\techo \"Please check dataservice logs\"\n\t\treturn $ret\n\tfi\n\n\t${PGSQL_DIR}/bin/psql -p ${PBS_DATA_SERVICE_PORT} -d pbs_datastore -U ${PBS_DATA_SERVICE_USER} <<-EOF > /dev/null\n\n\t\tCREATE EXTENSION hstore SCHEMA public;\n\n\t\tALTER TABLE pbs.job ADD attributes hstore DEFAULT ''::hstore;\n\t\tUPDATE pbs.job SET attributes=(\n\t\t\tSELECT hstore(array_agg(attr.key ), array_agg(attr.value))\n\t\t\t\tFROM ( SELECT concat(attr_name, '.' , attr_resource) AS key,\n\t\t\t\t\t      concat(attr_flags, '.' , attr_value) AS value\n\t\t\t\t\t\tFROM pbs.job_attr WHERE pbs.job_attr.ji_jobid=pbs.job.ji_jobid) AS attr);\n\t\tUPDATE pbs.job SET attributes='' WHERE attributes IS NULL;\n\t\tALTER TABLE pbs.job ALTER COLUMN attributes SET NOT NULL;\n\n\t\tALTER TABLE pbs.node ADD attributes hstore DEFAULT ''::hstore;\n\t\tUPDATE pbs.node SET attributes=(\n\t\t\tSELECT hstore(array_agg(attr.key ), array_agg(attr.value))\n\t\t\t\tFROM ( SELECT concat(attr_name, '.' , attr_resource) AS key,\n\t\t\t\t\t      concat(attr_flags, '.' , attr_value) AS value\n\t\t\t\t\t\tFROM pbs.node_attr WHERE pbs.node_attr.nd_name=pbs.node.nd_name) AS attr);\n\t\tUPDATE pbs.node SET attributes='' WHERE attributes IS NULL;\n\t\tALTER TABLE pbs.node ALTER COLUMN attributes SET NOT NULL;\n\n\t\tALTER TABLE pbs.queue ADD attributes hstore DEFAULT ''::hstore;\n\t\tUPDATE pbs.queue SET attributes=(\n\t\t\tSELECT hstore(array_agg(attr.key ), array_agg(attr.value))\n\t\t\t\tFROM ( SELECT concat(attr_name, '.' , attr_resource) AS key,\n\t\t\t\t\t      concat(attr_flags, '.' 
, attr_value) AS value\n\t\t\t\t\t\tFROM pbs.queue_attr WHERE pbs.queue_attr.qu_name=pbs.queue.qu_name) AS attr);\n\t\tUPDATE pbs.queue SET attributes='' WHERE attributes IS NULL;\n\t\tALTER TABLE pbs.queue ALTER COLUMN attributes SET NOT NULL;\n\n\t\tALTER TABLE pbs.resv ADD attributes hstore DEFAULT ''::hstore;\n\t\tUPDATE pbs.resv SET attributes=(\n\t\t\tSELECT hstore(array_agg(attr.key ), array_agg(attr.value))\n\t\t\t\tFROM ( SELECT concat(attr_name, '.' , attr_resource) AS key,\n\t\t\t\t\t      concat(attr_flags, '.' , attr_value) AS value\n\t\t\t\t\t\tFROM pbs.resv_attr WHERE pbs.resv_attr.ri_resvid=pbs.resv.ri_resvid) AS attr);\n\t\tUPDATE pbs.resv SET attributes='' WHERE attributes IS NULL;\n\t\tALTER TABLE pbs.resv ALTER COLUMN attributes SET NOT NULL;\n\n\t\tALTER TABLE pbs.scheduler ADD attributes hstore DEFAULT ''::hstore;\n\t\tUPDATE pbs.scheduler SET attributes=(\n\t\t\tSELECT hstore(array_agg(attr.key ), array_agg(attr.value))\n\t\t\t\tFROM ( SELECT concat(attr_name, '.' , attr_resource) AS key,\n\t\t\t\t\t      concat(attr_flags, '.' , attr_value) AS value\n\t\t\t\t\t\tFROM pbs.scheduler_attr WHERE pbs.scheduler_attr.sched_name=pbs.scheduler.sched_name) AS attr);\n\t\tUPDATE pbs.scheduler SET attributes='' WHERE attributes IS NULL;\n\t\tALTER TABLE pbs.scheduler ALTER COLUMN attributes SET NOT NULL;\n\n\t\tALTER TABLE pbs.server ADD attributes hstore DEFAULT ''::hstore;\n\t\tUPDATE pbs.server SET attributes=(\n\t\t\tSELECT hstore(array_agg(attr.key ), array_agg(attr.value))\n\t\t\t\tFROM ( SELECT concat(attr_name, '.' , attr_resource) AS key,\n\t\t\t\t\t      concat(attr_flags, '.' 
, attr_value) AS value\n\t\t\t\t\t\tFROM pbs.server_attr WHERE pbs.server_attr.sv_name=pbs.server.sv_name) AS attr);\n\t\tUPDATE pbs.server SET attributes='' WHERE attributes IS NULL;\n\t\tALTER TABLE pbs.server ALTER COLUMN attributes SET NOT NULL;\n\n\t\tDROP TABLE pbs.server_attr;\n\t\tDROP TABLE pbs.scheduler_attr;\n\t\tDROP TABLE pbs.node_attr;\n\t\tDROP TABLE pbs.queue_attr;\n\t\tDROP TABLE pbs.resv_attr;\n\t\tDROP TABLE pbs.job_attr;\n\t\tDROP INDEX pbs.resv_idx_cr;\n\t\tDROP INDEX pbs.ji_jobid_idx;\n\t\tDROP sequence pbs.svr_id_seq;\n\n\t\tALTER INDEX pbs.job_src_idx RENAME TO job_scr_idx;\n\t\tALTER TABLE pbs.server ALTER COLUMN sv_jobidnumber TYPE BIGINT;\n\t\tALTER TABLE pbs.scheduler DROP COLUMN sched_sv_name;\n\t\tALTER TABLE pbs.queue DROP COLUMN qu_sv_name;\n\t\tALTER TABLE pbs.resv DROP COLUMN ri_sv_name;\n\t\tALTER TABLE pbs.job DROP COLUMN ji_sv_name;\n\t\tALTER TABLE pbs.server\n\t\tDROP COLUMN sv_name,\n\t\tDROP COLUMN sv_hostname;\n\n\t\tUPDATE pbs.info SET pbs_schema_version = '1.4.0';\n\tEOF\n\tret=$?\n\tif [ $ret -ne 0 ]; then\n\t\techo \"Some datastore transformations failed to complete\"\n\t\techo \"Please check dataservice logs\"\n\t\treturn $ret\n\tfi\n}\n\nupgrade_pbs_schema_from_v1_4_0() {\n\t${PGSQL_DIR}/bin/psql -p ${PBS_DATA_SERVICE_PORT} -d pbs_datastore -U ${PBS_DATA_SERVICE_USER} <<-EOF > /dev/null\n\t\tALTER TABLE pbs.job DROP COLUMN ji_numattr;\n\t\tALTER TABLE pbs.job DROP COLUMN ji_ordering;\n\t\tALTER TABLE pbs.job DROP COLUMN ji_priority;\n\t\tALTER TABLE pbs.job DROP COLUMN ji_endtbdry;\n\t\tALTER TABLE pbs.job DROP COLUMN ji_momaddr;\n\t\tALTER TABLE pbs.job DROP COLUMN ji_momport;\n\t\tALTER TABLE pbs.job RENAME COLUMN ji_4jid to ji_jid;\n\t\tALTER TABLE pbs.job ALTER COLUMN ji_qrank TYPE BIGINT;\n\t\tALTER TABLE pbs.job DROP COLUMN ji_4ash;\n\t\tALTER TABLE pbs.resv DROP COLUMN ri_type;\n\t\tALTER TABLE pbs.resv DROP COLUMN ri_numattr;\n\t\tALTER TABLE pbs.resv DROP COLUMN ri_resvTag;\n\t\tALTER TABLE pbs.resv DROP 
COLUMN ri_un_type;\n\t\tALTER TABLE pbs.resv DROP COLUMN ri_fromsock;\n\t\tALTER TABLE pbs.resv DROP COLUMN ri_fromaddr;\n\t\tALTER TABLE pbs.server DROP COLUMN sv_numjobs;\n\t\tALTER TABLE pbs.server DROP COLUMN sv_numque;\n\t\tALTER TABLE pbs.server DROP COLUMN sv_svraddr;\n\t\tALTER TABLE pbs.server DROP COLUMN sv_svrport;\n\t\tALTER TABLE pbs.queue RENAME COLUMN qu_ctime to qu_creattm;\n\t\tALTER TABLE pbs.queue RENAME COLUMN qu_mtime to qu_savetm;\n\t\tUPDATE pbs.info SET pbs_schema_version = '1.5.0';\n\tEOF\n\tret=$?\n\tif [ $ret -ne 0 ]; then\n\t\techo \"Some datastore transformations failed to complete\"\n\t\techo \"Please check dataservice logs\"\n\t\treturn $ret\n\tfi\n}\n\n# start of the upgrade schema script\n. ${PBS_EXEC}/libexec/pbs_db_env\ntmpdir=${PBS_TMPDIR:-${TMPDIR:-\"/var/tmp\"}}\nPBS_CURRENT_SCHEMA_VER='1.5.0'\n\n#\n# pbs_dataservice command now has more diagnostic output.\n# It can tell why it could not start, for example, that\n# dataservice might be running on another host.\n# So capture pbs_dataservice output and display in case of\n# errors during starting.\n#\noutfile=\"${tmpdir}/pbs_dataservice_output_$$\"\n\n${PBS_EXEC}/sbin/pbs_dataservice status > /dev/null\nif [ $? -eq 0 ]; then\n\t# running, stop now\n\t${PBS_EXEC}/sbin/pbs_dataservice stop > /dev/null\n\tif [ $? -ne 0 ]; then\n\t\techo \"Failed to stop PBS Dataservice\"\n\t\texit 1\n\tfi\nfi\n\n# restart with new credentials\n# redirect the output, don't execute inside `` since\n# postgres processes would otherwise cause the script\n# to hang forever\n#\n${PBS_EXEC}/sbin/pbs_dataservice start > ${outfile}\nif [ $? 
-ne 0 ]; then\n\tcat ${outfile}\n\trm -f ${outfile}\n\texit 1\nfi\nrm -f ${outfile}\n\nver=`${PGSQL_DIR}/bin/psql -A -t -p ${PBS_DATA_SERVICE_PORT} -d pbs_datastore -U ${PBS_DATA_SERVICE_USER} -c \"select pbs_schema_version from pbs.info\"`\nif [ \"$ver\" = \"${PBS_CURRENT_SCHEMA_VER}\" ]; then\n\texit 0\nfi\n\nif [ \"$ver\" = \"1.0.0\" ]; then\n\tupgrade_pbs_schema_from_v1_0_0\n\tret=$?\n\tif [ $ret -ne 0 ]; then\n\t\texit $ret\n\tfi\n\tver=\"1.1.0\"\nfi\n\n${PBS_EXEC}/sbin/pbs_dataservice status > /dev/null\nif [ $? -eq 1 ]; then\n\t# not running, start now\n\t${PBS_EXEC}/sbin/pbs_dataservice start > ${outfile}\n\tif [ $? -ne 0 ]; then\n\t\tcat ${outfile}\n\t\trm -f ${outfile}\n\t\texit 1\n\tfi\n\trm -f ${outfile}\nfi\n\nif [ \"$ver\" = \"1.1.0\" ]; then\n\tupgrade_pbs_schema_from_v1_1_0\n\tret=$?\n\tif [ $ret -ne 0 ]; then\n\t\texit $ret\n\tfi\n\tver=\"1.2.0\"\nfi\n\nif [ \"$ver\" = \"1.2.0\" ]; then\n\tupgrade_pbs_schema_from_v1_2_0\n\tret=$?\n\tif [ $ret -ne 0 ]; then\n\t\texit $ret\n\tfi\n\tver=\"1.3.0\"\nfi\n\nif [ \"$ver\" = \"1.3.0\" ]; then\n\tupgrade_pbs_schema_from_v1_3_0\n\tret=$?\n\tif [ $ret -ne 0 ]; then\n\t\texit $ret\n\tfi\n\tver=\"1.4.0\"\nfi\n\nif [ \"$ver\" = \"1.4.0\" ]; then\n\tupgrade_pbs_schema_from_v1_4_0\n\tret=$?\n\tif [ $ret -ne 0 ]; then\n\t\texit $ret\n\tfi\n\tver=\"1.5.0\"\nelse\n\techo \"Cannot upgrade PBS datastore version $ver\"\n\texit 1\nfi\n\n${PBS_EXEC}/sbin/pbs_dataservice status > /dev/null\nif [ $? -eq 1 ]; then\n\t# not running, start now\n\t${PBS_EXEC}/sbin/pbs_dataservice start > ${outfile}\n\tif [ $? 
-ne 0 ]; then\n\t\tcat ${outfile}\n\t\trm -f ${outfile}\n\t\tret=$?\n\t\tif [ $ret -ne 0 ]; then\n\t\t\texit $ret\n\t\tfi\n\t\texit 1\n\tfi\nfi\nrm -f ${outfile}\n\n${PBS_EXEC}/sbin/pbs_dataservice stop > /dev/null\n\nret=$?\nexit $ret\n"
  },
  {
    "path": "src/lib/Libdis/dis.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"dis_.h\"\n\nconst char *dis_emsg[] = {\"No error\",\n\t\t\t  \"Input value too large to convert to this type\",\n\t\t\t  \"Tried to write floating point infinity\",\n\t\t\t  \"Negative sign on an unsigned datum\",\n\t\t\t  \"Input count or value has leading zero\",\n\t\t\t  \"Non-digit found where a digit was expected\",\n\t\t\t  \"Input string has an embedded ASCII NUL\",\n\t\t\t  \"Premature end of message\",\n\t\t\t  \"Unable to malloc enough space for string\",\n\t\t\t  \"Supporting protocol failure\",\n\t\t\t  \"Protocol failure in commit\",\n\t\t\t  \"End of File\"};\n\npbs_tcp_chan_t *(*pfn_transport_get_chan)(int);\nint (*pfn_transport_set_chan)(int, pbs_tcp_chan_t *);\nint (*pfn_transport_recv)(int, void *, int);\nint (*pfn_transport_send)(int, void *, int);\n\n/* this is for our client threading functionality to get the DIS_BUFSIZ */\nlong dis_buffsize = DIS_BUFSIZ;\n\n/**\n * @brief\n * \tcalled once per process to initialize the dis tables\n *\n */\n\nvoid\ndis_init_tables(void)\n{\n\tif (dis_dmx10 == 0)\n\t\tdisi10d_();\n\tif (dis_lmx10 == 0)\n\t\tdisi10l_();\n\tif (dis_umaxd == 0)\n\t\tdisiui_();\n\tinit_ulmax();\n}\n"
  },
  {
    "path": "src/lib/Libdis/dis_.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <limits.h>\n#include <stddef.h>\n\n#include <dis.h>\n\n#define DIS_BUFSIZ (CHAR_BIT * sizeof(ULONG_MAX))\n/* define a limit for the number of times DIS will recurse when      */\n/* processing a sequence of character counts;  prevent stack overflow */\n#define DIS_RECURSIVE_LIMIT 30\n\nchar *discui_(char *cp, unsigned value, unsigned *ndigs);\nchar *discul_(char *cp, unsigned long value, unsigned *ndigs);\nchar *discull_(char *cp, u_Long value, unsigned *ndigs);\nvoid disi10d_();\nvoid disi10l_();\nvoid disiui_(void);\nvoid dis_init_tables(void); /* called once per process to init dis tables */\nvoid init_ulmax(void);\ndouble disp10d_(int expon);\ndis_long_double_t disp10l_(int expon);\nint\ndisrl_(int stream, dis_long_double_t *ldval, unsigned *ndigs,\n       unsigned *nskips, unsigned sigd, unsigned count, int recursv);\nint disrsi_(int stream, int *negate, unsigned *value, unsigned count, int recursv);\nint\ndisrsl_(int stream, int *negate, unsigned long *value,\n\tunsigned long count, int recursv);\nint disrsll_(int stream, int *negate, u_Long *value, unsigned long count, int recursv);\nint diswui_(int stream, unsigned value);\n\nextern unsigned dis_dmx10;\nextern double *dis_dp10;\nextern double *dis_dn10;\n\nextern unsigned dis_lmx10;\nextern dis_long_double_t *dis_lp10;\nextern dis_long_double_t *dis_ln10;\n\nextern char 
*__dis_buffer_location(void);\n#define dis_buffer (__dis_buffer_location())\n\nextern char *dis_umax;\nextern unsigned dis_umaxd;\n"
  },
  {
    "path": "src/lib/Libdis/dis_helpers.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n#include <pbs_config.h>\n#include <arpa/inet.h>\n#include <assert.h>\n#include <errno.h>\n#include <stdlib.h>\n#include \"auth.h\"\n#include \"dis.h\"\n#include \"pbs_error.h\"\n#include \"pbs_internal.h\"\n\n#define PKT_MAGIC \"PKTV1\"\n#define PKT_MAGIC_SZ sizeof(PKT_MAGIC)\n#define PKT_HDR_SZ (PKT_MAGIC_SZ + 1 + sizeof(int))\n\nstatic pbs_dis_buf_t *dis_get_readbuf(int);\nstatic pbs_dis_buf_t *dis_get_writebuf(int);\nstatic int dis_resize_buf(pbs_dis_buf_t *, size_t);\nstatic int transport_chan_is_encrypted(int);\n\n/**\n * @brief\n * \ttransport_chan_set_ctx_status - set auth context status of the tcp chan associated with given fd\n *\n * @param[in] fd - file descriptor\n * @param[in] status - auth ctx status\n * @param[in] for_encrypt - is authctx for encrypt/decrypt?\n *\n * @return void\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nvoid\ntransport_chan_set_ctx_status(int fd, int status, int for_encrypt)\n{\n\tpbs_tcp_chan_t *chan = transport_get_chan(fd);\n\n\tif (chan == NULL)\n\t\treturn;\n\tchan->auths[for_encrypt].ctx_status = status;\n}\n\n/**\n * @brief\n * \ttransport_chan_get_ctx_status - get auth context status of the tcp chan associated with given fd\n *\n * @param[in] fd - file descriptor\n * @param[in] for_encrypt - whether to get encrypt/decrypt authctx status or for authentication\n *\n * @return int\n *\n * 
@retval -1 - error\n * @retval !-1 - status\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntransport_chan_get_ctx_status(int fd, int for_encrypt)\n{\n\tpbs_tcp_chan_t *chan = transport_get_chan(fd);\n\n\tif (chan == NULL)\n\t\treturn -1;\n\treturn chan->auths[for_encrypt].ctx_status;\n}\n\n/**\n * @brief\n * \ttransport_chan_set_authctx - associates authentication context with connection\n *\n * @param[in] fd - file descriptor\n * @param[in] authctx - the context\n * @param[in] for_encrypt - is authctx for encrypt/decrypt?\n *\n * @return void\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nvoid\ntransport_chan_set_authctx(int fd, void *authctx, int for_encrypt)\n{\n\tpbs_tcp_chan_t *chan = transport_get_chan(fd);\n\n\tif (chan == NULL)\n\t\treturn;\n\tchan->auths[for_encrypt].ctx = authctx;\n}\n\n/**\n * @brief\n * \ttransport_chan_get_authctx - gets authentication context associated with connection\n *\n * @param[in] fd - file descriptor\n * @param[in] for_encrypt - whether to get encrypt/decrypt authctx or for authentication\n *\n * @return void *\n *\n * @retval !NULL - success\n * @retval NULL - error\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nvoid *\ntransport_chan_get_authctx(int fd, int for_encrypt)\n{\n\tpbs_tcp_chan_t *chan = transport_get_chan(fd);\n\n\tif (chan == NULL)\n\t\treturn NULL;\n\treturn chan->auths[for_encrypt].ctx;\n}\n\n/**\n * @brief\n * \ttransport_chan_set_authdef - associates authdef structure with connection\n *\n * @param[in] fd - file descriptor\n * @param[in] authdef - the authdef structure for association\n * @param[in] for_encrypt - is authdef for encrypt/decrypt?\n *\n * @return void\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nvoid\ntransport_chan_set_authdef(int fd, auth_def_t *authdef, int for_encrypt)\n{\n\tpbs_tcp_chan_t *chan = transport_get_chan(fd);\n\n\tif (chan == 
NULL)\n\t\treturn;\n\tchan->auths[for_encrypt].def = authdef;\n}\n\n/**\n * @brief\n * \ttransport_chan_get_authdef - gets authdef structure associated with connection\n *\n * @param[in] fd - file descriptor\n * @param[in] for_encrypt - whether to get encrypt/decrypt authdef or for authentication\n *\n * @return auth_def_t *\n *\n * @retval !NULL - success\n * @retval NULL - error\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nauth_def_t *\ntransport_chan_get_authdef(int fd, int for_encrypt)\n{\n\tpbs_tcp_chan_t *chan = transport_get_chan(fd);\n\n\tif (chan == NULL)\n\t\treturn NULL;\n\treturn chan->auths[for_encrypt].def;\n}\n\n/**\n * @brief\n * \ttransport_chan_is_encrypted - is the chan associated with given fd encrypted?\n *\n * @param[in] fd - file descriptor\n *\n * @return int\n *\n * @retval 0 - not encrypted\n * @retval 1 - encrypted\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic int\ntransport_chan_is_encrypted(int fd)\n{\n\tpbs_tcp_chan_t *chan = transport_get_chan(fd);\n\n\tif (chan == NULL)\n\t\treturn 0;\n\treturn (chan->auths[FOR_ENCRYPT].def != NULL && chan->auths[FOR_ENCRYPT].ctx_status == AUTH_STATUS_CTX_READY);\n}\n\n/**\n * @brief\n * \tsend pkt from given DIS buffer over network\n * \tafter patching pkt header with the data size;\n * \tif the chan is encrypted and the data is not\n * \talready encrypted, encrypt it before sending\n *\n * @param[in] fd - file descriptor\n * @param[in] tp - pointer to DIS buffer\n * @param[in] encrypt_done - is data already encrypted\n *\n * @return int\n *\n * @retval >= 0  - success\n * @retval -1 - failure\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic int\n__send_pkt(int fd, pbs_dis_buf_t *tp, int encrypt_done)\n{\n\tint i;\n\n\tif (!encrypt_done && transport_chan_is_encrypted(fd)) {\n\t\tvoid *authctx = transport_chan_get_authctx(fd, FOR_ENCRYPT);\n\t\tauth_def_t *authdef = transport_chan_get_authdef(fd, FOR_ENCRYPT);\n\t\tvoid 
*data_out;\n\t\tsize_t len_out;\n\n\t\tif (authdef == NULL || authdef->encrypt_data == NULL)\n\t\t\treturn -1;\n\n\t\tif (authdef->encrypt_data(authctx, (void *) (tp->tdis_data + PKT_HDR_SZ), tp->tdis_len - PKT_HDR_SZ, &data_out, &len_out) != 0)\n\t\t\treturn -1;\n\n\t\tdis_resize_buf(tp, len_out + PKT_HDR_SZ);\n\t\tmemcpy((void *) (tp->tdis_data + PKT_HDR_SZ), data_out, len_out);\n\t\tfree(data_out);\n\t\ttp->tdis_len = len_out + PKT_HDR_SZ;\n\t}\n\n\ti = htonl(tp->tdis_len - PKT_HDR_SZ);\n\tmemcpy((void *) (tp->tdis_data + PKT_HDR_SZ - sizeof(int)), &i, sizeof(int));\n\n\ti = transport_send(fd, (void *) tp->tdis_data, tp->tdis_len);\n\tif (i < 0)\n\t\treturn i;\n\tif (i != tp->tdis_len)\n\t\treturn -1;\n\tdis_clear_buf(tp);\n\treturn i;\n}\n\n/**\n * @brief\n * \tcreate pkt based on given value\n * \tand send it over network. If channel for given fd is\n * \tencrypted then given data will be encrypted first\n * \tthen pkt will be sent\n *\n * @param[in] fd - file descriptor\n * @param[in] type - type of pkt\n * @param[in] data_in - data of pkt\n * @param[in] len_in - length of data\n *\n * @return int\n *\n * @retval >= 0  - success\n * @retval -1 - failure\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntransport_send_pkt(int fd, int type, void *data_in, size_t len_in)\n{\n\tpbs_dis_buf_t *tp;\n\n\tif (data_in == NULL || len_in == 0 || (tp = dis_get_writebuf(fd)) == NULL)\n\t\treturn -1;\n\n\tdis_clear_buf(tp);\n\tdis_resize_buf(tp, len_in + PKT_HDR_SZ);\n\tstrcpy(tp->tdis_data, PKT_MAGIC);\n\t*(tp->tdis_data + PKT_MAGIC_SZ) = (char) type;\n\ttp->tdis_pos = tp->tdis_data + PKT_HDR_SZ;\n\n\tif (transport_chan_is_encrypted(fd)) {\n\t\tvoid *authctx = transport_chan_get_authctx(fd, FOR_ENCRYPT);\n\t\tauth_def_t *authdef = transport_chan_get_authdef(fd, FOR_ENCRYPT);\n\t\tvoid *data_out;\n\t\tsize_t len_out;\n\n\t\tif (authdef == NULL || authdef->encrypt_data == NULL)\n\t\t\treturn -1;\n\n\t\tif (authdef->encrypt_data(authctx, data_in, 
len_in, &data_out, &len_out) != 0)\n\t\t\treturn -1;\n\n\t\tdis_resize_buf(tp, len_out + PKT_HDR_SZ);\n\t\tmemcpy(tp->tdis_pos, data_out, len_out);\n\t\tfree(data_out);\n\t\ttp->tdis_len = len_out;\n\t} else {\n\t\tmemcpy(tp->tdis_pos, data_in, len_in);\n\t\ttp->tdis_len = len_in;\n\t}\n\ttp->tdis_len += PKT_HDR_SZ;\n\n\treturn __send_pkt(fd, tp, 1);\n}\n\n/**\n * @brief\n * \treceive pkt in given DIS buffer from network\n * \tIf channel for given fd is encrypted then decrypt data\n * \tin received pkt\n *\n * @param[in] fd - file descriptor\n * @param[out] type - type of pkt\n * @param[in/out] tp - pointer to DIS buffer\n *\n * @return int\n *\n * @retval >= 0  - success\n * @retval -1 - failure\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic int\n__recv_pkt(int fd, int *type, pbs_dis_buf_t *tp)\n{\n\tint i;\n\tsize_t datasz;\n\tchar pkthdr[PKT_HDR_SZ];\n\n\tdis_clear_buf(tp);\n\ti = transport_recv(fd, (void *) &pkthdr, PKT_HDR_SZ);\n\tif (i != PKT_HDR_SZ)\n\t\treturn (i < 0 ? i : -1);\n\tif (strncmp(pkthdr, PKT_MAGIC, PKT_MAGIC_SZ) != 0) {\n\t\t/* no pkt magic match, reject data/connection */\n\t\treturn -1;\n\t}\n\n\t*type = (int) pkthdr[PKT_MAGIC_SZ];\n\tmemcpy(&i, (void *) &(pkthdr[PKT_HDR_SZ - sizeof(int)]), sizeof(int));\n\tdatasz = ntohl(i);\n\tif (datasz <= 0)\n\t\treturn -1;\n\tdis_resize_buf(tp, datasz);\n\ti = transport_recv(fd, tp->tdis_data, datasz);\n\tif (i != datasz)\n\t\treturn (i < 0 ? 
i : -1);\n\n\tif (transport_chan_is_encrypted(fd)) {\n\t\tvoid *data;\n\t\tvoid *authctx = transport_chan_get_authctx(fd, FOR_ENCRYPT);\n\t\tauth_def_t *authdef = transport_chan_get_authdef(fd, FOR_ENCRYPT);\n\n\t\tif (authdef == NULL || authdef->decrypt_data == NULL)\n\t\t\treturn -1;\n\n\t\tif (authdef->decrypt_data(authctx, tp->tdis_data, i, &data, &datasz) != 0)\n\t\t\treturn -1;\n\n\t\tfree(tp->tdis_data);\n\t\ttp->tdis_data = data;\n\t\ttp->tdis_bufsize = datasz;\n\t}\n\ttp->tdis_pos = tp->tdis_data;\n\ttp->tdis_len = datasz;\n\treturn datasz;\n}\n\n/**\n * @brief\n * \ttransport_recv_pkt - receive pkt over network.\n * \tIf channel for given fd is encrypted then decrypt it\n * \tand parse pkt to find pkt type, data and its length\n *\n * \t@warning returned data should not be free'd as it is\n * \t\t internal dis buffer\n *\n * @param[in] fd - file descriptor\n * @param[out] type - type of pkt\n * @param[out] data_out - data of pkt\n * @param[out] len_out - length of data\n *\n * @return int\n *\n * @retval >= 0  - success\n * @retval -1 - failure\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntransport_recv_pkt(int fd, int *type, void **data_out, size_t *len_out)\n{\n\tint i;\n\tpbs_dis_buf_t *tp = dis_get_readbuf(fd);\n\n\t*type = 0;\n\t*data_out = NULL;\n\t*len_out = 0;\n\n\tif (tp == NULL)\n\t\treturn -1;\n\ti = __recv_pkt(fd, type, tp);\n\tif (i <= 0)\n\t\treturn i;\n\t*data_out = (void *) tp->tdis_data;\n\t*len_out = i;\n\tdis_clear_buf(tp);\n\treturn *len_out;\n}\n\n/**\n * @brief\n * \tdis_get_readbuf - get dis read buffer associated with connection\n *\n * @return pbs_dis_buf_t *\n *\n * @retval !NULL - success\n * @retval NULL - error\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic pbs_dis_buf_t *\ndis_get_readbuf(int fd)\n{\n\tpbs_tcp_chan_t *chan = transport_get_chan(fd);\n\n\tif (chan == NULL)\n\t\treturn NULL;\n\treturn &(chan->readbuf);\n}\n\n/**\n * @brief\n * \tdis_get_writebuf - 
get dis write buffer associated with connection\n *\n * @return pbs_dis_buf_t *\n *\n * @retval !NULL - success\n * @retval NULL - error\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic pbs_dis_buf_t *\ndis_get_writebuf(int fd)\n{\n\tpbs_tcp_chan_t *chan = transport_get_chan(fd);\n\n\tif (chan == NULL)\n\t\treturn NULL;\n\treturn &(chan->writebuf);\n}\n\n/**\n * @brief\n * \tdis_resize_buf - grow given dis buffer so it can hold at least 'needed' additional bytes\n *\n * @param[in] tp - dis buffer to resize\n * @param[in] needed - min needed buffer size\n *\n * @return int\n *\n * @retval 0 - success\n * @retval -1 - error\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic int\ndis_resize_buf(pbs_dis_buf_t *tp, size_t needed)\n{\n\tif ((tp->tdis_len + needed) >= tp->tdis_bufsize) {\n\t\tint offset = tp->tdis_len > 0 ? (tp->tdis_pos - tp->tdis_data) : 0;\n\t\tchar *tmpcp = (char *) realloc(tp->tdis_data, tp->tdis_bufsize + needed + PBS_DIS_BUFSZ);\n\t\tif (tmpcp == NULL) {\n\t\t\treturn -1; /* realloc failed */\n\t\t} else {\n\t\t\ttp->tdis_data = tmpcp;\n\t\t\ttp->tdis_bufsize = tp->tdis_bufsize + needed + PBS_DIS_BUFSZ;\n\t\t\ttp->tdis_pos = tp->tdis_data + offset;\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \tdis_clear_buf - reset dis buffer to empty by updating its counter\n *\n *\n * @param[in] tp - dis buffer to clear\n *\n * @return void\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nvoid\ndis_clear_buf(pbs_dis_buf_t *tp)\n{\n\ttp->tdis_pos = tp->tdis_data;\n\ttp->tdis_len = 0;\n}\n\n/**\n * @brief\n * \tdis_reset_buf - reset appropriate dis buffer associated with connection\n *\n * @param[in] fd - file descriptor\n * @param[in] rw - reset write buffer if true else read buffer\n *\n * @return void\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n 
*/\nvoid\ndis_reset_buf(int fd, int rw)\n{\n\tdis_clear_buf((rw == DIS_WRITE_BUF) ? dis_get_writebuf(fd) : dis_get_readbuf(fd));\n}\n\n/**\n * @brief\n * \tdisr_skip - dis support routine to skip over data in read buffer\n *\n * @param[in] fd - file descriptor\n * @param[in] ct - count\n *\n * @return\tint\n *\n * @retval\tnumber of characters skipped\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ndisr_skip(int fd, size_t ct)\n{\n\tpbs_dis_buf_t *tp = dis_get_readbuf(fd);\n\n\tif (tp == NULL)\n\t\treturn 0;\n\tif (ct > tp->tdis_len)\n\t\tdis_clear_buf(tp);\n\telse {\n\t\ttp->tdis_pos += ct;\n\t\ttp->tdis_len -= ct;\n\t}\n\treturn (int) ct;\n}\n\n/**\n * @brief\n * \tdis_getc - dis support routine to get next character from read buffer\n *\n * @param[in] fd - file descriptor\n *\n * @return\tint\n *\n * @retval\t>0 \tthe character read\n * @retval\t-1 \tif EOD or error\n * @retval\t-2 \tif EOF (stream closed)\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ndis_getc(int fd)\n{\n\tpbs_dis_buf_t *tp = dis_get_readbuf(fd);\n\tint c;\n\n\tif (tp == NULL)\n\t\treturn -1;\n\tif (tp->tdis_len <= 0) {\n\t\t/* not enough data, try to get more */\n\t\tint unused;\n\n\t\tdis_clear_buf(tp);\n\t\tif ((c = __recv_pkt(fd, &unused, tp)) <= 0) {\n\t\t\tdis_clear_buf(tp);\n\t\t\treturn c; /* Error or EOF */\n\t\t}\n\t}\n\tc = *tp->tdis_pos;\n\ttp->tdis_pos++;\n\ttp->tdis_len--;\n\treturn c;\n}\n\n/**\n * @brief\n * \tdis_gets - dis support routine to get a string from read buffer\n *\n * @param[in] fd - file descriptor\n * @param[in] str - string to be written\n * @param[in] ct - count\n *\n * @return\tint\n *\n * @retval\t>0 \tnumber of characters read\n * @retval\t0 \tif EOD (no data currently available)\n * @retval\t-1 \tif error\n * @retval\t-2 \tif EOF (stream closed)\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ndis_gets(int fd, char *str, size_t ct)\n{\n\tpbs_dis_buf_t *tp = 
dis_get_readbuf(fd);\n\n\tif (tp == NULL) {\n\t\t*str = '\\0';\n\t\treturn -1;\n\t}\n\tif (ct == 0) {\n\t\t*str = '\\0';\n\t\treturn ct;\n\t}\n\tif (tp->tdis_len <= 0) {\n\t\t/* not enough data, try to get more */\n\t\tint unused;\n\t\tint c;\n\n\t\tif ((c = __recv_pkt(fd, &unused, tp)) <= 0) {\n\t\t\tdis_clear_buf(tp);\n\t\t\treturn c; /* Error or EOF */\n\t\t}\n\t}\n\tmemcpy(str, tp->tdis_pos, ct);\n\ttp->tdis_pos += ct;\n\ttp->tdis_len -= ct;\n\treturn (int) ct;\n}\n\n/**\n * @brief\n * \tdis_puts - dis support routine to put a counted string of characters\n *\tinto the write buffer.\n *\n * @param[in] fd - file descriptor\n * @param[in] str - string to be written\n * @param[in] ct - count\n *\n * @return\tint\n *\n * @retval\t>= 0\tthe number of characters placed\n * @retval\t-1 \tif error\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ndis_puts(int fd, const char *str, size_t ct)\n{\n\tpbs_dis_buf_t *tp = dis_get_writebuf(fd);\n\n\tif (tp == NULL)\n\t\treturn -1;\n\tif (tp->tdis_len <= 0) {\n\t\tif (dis_resize_buf(tp, ct + PKT_HDR_SZ) != 0)\n\t\t\treturn -1;\n\t\tstrcpy(tp->tdis_data, PKT_MAGIC);\n\t\ttp->tdis_pos = tp->tdis_data + PKT_HDR_SZ;\n\t\ttp->tdis_len = PKT_HDR_SZ;\n\t} else {\n\t\tif (dis_resize_buf(tp, ct) != 0)\n\t\t\treturn -1;\n\t}\n\tmemcpy(tp->tdis_pos, str, ct);\n\ttp->tdis_pos += ct;\n\ttp->tdis_len += ct;\n\treturn ct;\n}\n\n/**\n * @brief\n *\tflush dis write buffer\n *\n *\tWrites \"committed\" data in buffer to file descriptor,\n *\tpacks remaining data (if any), resets pointers\n *\n * @param[in] fd - file descriptor\n *\n * @return int\n *\n * @retval  0 on success\n * @retval -1 on error\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ndis_flush(int fd)\n{\n\tpbs_dis_buf_t *tp = dis_get_writebuf(fd);\n\n\tif (tp == NULL)\n\t\treturn -1;\n\tif (tp->tdis_len == 0)\n\t\treturn 0;\n\tif (__send_pkt(fd, tp, 0) <= 0)\n\t\treturn -1;\n\treturn 0;\n}\n\n/**\n * @brief\n * 
\tdis_destroy_chan - release structures associated with fd\n *\n * @param[in] fd - socket descriptor\n *\n * @return void\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nvoid\ndis_destroy_chan(int fd)\n{\n\tpbs_tcp_chan_t *chan = NULL;\n\n\tif (pfn_transport_get_chan == NULL)\n\t\treturn;\n\tchan = transport_get_chan(fd);\n\tif (chan != NULL) {\n\t\tif (chan->auths[FOR_AUTH].ctx || chan->auths[FOR_ENCRYPT].ctx) {\n\t\t\t/* DO NOT free authdef here, it will be done in unload_auths() */\n\t\t\tif (chan->auths[FOR_AUTH].ctx && chan->auths[FOR_AUTH].def) {\n\t\t\t\tchan->auths[FOR_AUTH].def->destroy_ctx(chan->auths[FOR_AUTH].ctx);\n\t\t\t}\n\t\t\tif (chan->auths[FOR_ENCRYPT].def != chan->auths[FOR_AUTH].def &&\n\t\t\t    chan->auths[FOR_ENCRYPT].ctx &&\n\t\t\t    chan->auths[FOR_ENCRYPT].def) {\n\t\t\t\tchan->auths[FOR_ENCRYPT].def->destroy_ctx(chan->auths[FOR_ENCRYPT].ctx);\n\t\t\t}\n\t\t\tchan->auths[FOR_AUTH].ctx = NULL;\n\t\t\tchan->auths[FOR_AUTH].def = NULL;\n\t\t\tchan->auths[FOR_AUTH].ctx_status = AUTH_STATUS_UNKNOWN;\n\t\t\tchan->auths[FOR_ENCRYPT].ctx = NULL;\n\t\t\tchan->auths[FOR_ENCRYPT].def = NULL;\n\t\t\tchan->auths[FOR_ENCRYPT].ctx_status = AUTH_STATUS_UNKNOWN;\n\t\t}\n\t\tif (chan->readbuf.tdis_data) {\n\t\t\tfree(chan->readbuf.tdis_data);\n\t\t\tchan->readbuf.tdis_data = NULL;\n\t\t}\n\t\tif (chan->writebuf.tdis_data) {\n\t\t\tfree(chan->writebuf.tdis_data);\n\t\t\tchan->writebuf.tdis_data = NULL;\n\t\t}\n\t\tfree(chan);\n\t\ttransport_set_chan(fd, NULL);\n\t}\n}\n\n/**\n * @brief\n *\tallocate dis buffers associated with connection, if already allocated then clear it\n *\n * @param[in] fd - file descriptor\n *\n * @return void\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nvoid\ndis_setup_chan(int fd, pbs_tcp_chan_t *(*inner_transport_get_chan)(int) )\n{\n\tpbs_tcp_chan_t *chan;\n\tint rc;\n\n\t/* check for bad file descriptor */\n\tif (fd < 0)\n\t\treturn;\n\tchan = (pbs_tcp_chan_t *) 
(*inner_transport_get_chan)(fd);\n\tif (chan == NULL) {\n\t\tif (errno == ENOTCONN)\n\t\t\treturn;\n\t\tchan = (pbs_tcp_chan_t *) calloc(1, sizeof(pbs_tcp_chan_t));\n\t\tassert(chan != NULL);\n\t\tdis_resize_buf(&(chan->readbuf), PBS_DIS_BUFSZ);\n\t\tdis_resize_buf(&(chan->writebuf), PBS_DIS_BUFSZ);\n\t\trc = transport_set_chan(fd, chan);\n\t\tassert(rc == 0);\n\t}\n\n\t/* initialize read and write buffers */\n\tdis_clear_buf(&(chan->readbuf));\n\tdis_clear_buf(&(chan->writebuf));\n}\n"
  },
  {
    "path": "src/lib/Libdis/discui_.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n/**\n * @file\tdiscui_.c\n */\n/**\n * @brief\n *      -Function to convert a value of data type unsigned to string form and\n *      compute the number of characters/digits in that string.\n *\n * @param[out] cp - copy string\n * @param[in] value - value to be converted\n * @param[out]  ndigs - number of digits\n *\n * @return      string\n * @retval      converted string\n *\n */\n\nchar *\ndiscui_(char *cp, unsigned value, unsigned *ndigs)\n{\n\tchar *ocp;\n\n\tocp = cp;\n\twhile (value > 9) {\n\t\t*--cp = value % 10 + '0';\n\t\tvalue /= 10;\n\t}\n\t*--cp = value + '0';\n\t*ndigs = ocp - cp;\n\treturn (cp);\n}\n"
  },
  {
    "path": "src/lib/Libdis/discul_.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n/**\n * @file\tdiscul_.c\n */\n/**\n * @brief\n *      -Function to convert a value of data type unsigned long to string form and\n *      compute the number of characters/digits in that string.\n *\n * @param[out] cp - copy string\n * @param[in] value - value to be converted\n * @param[out]  ndigs - number of digits\n *\n * @return      string\n * @retval      converted string\n *\n */\n\nchar *\ndiscul_(char *cp, unsigned long value, unsigned *ndigs)\n{\n\tchar *ocp;\n\n\tocp = cp;\n\twhile (value > 9) {\n\t\t*--cp = value % 10 + '0';\n\t\tvalue /= 10;\n\t}\n\t*--cp = value + '0';\n\t*ndigs = ocp - cp;\n\treturn (cp);\n}\n"
  },
  {
    "path": "src/lib/Libdis/discull_.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include \"Long.h\"\n/**\n * @file\tdiscull_.c\n */\n/**\n * @brief\n * \t-Function to convert a value of data type u_Long to string form and\n * \tcompute the number of characters/digits in that string.\n *\n * @param[out] cp - copy string\n * @param[in] value - value to be converted\n * @param[out]  ndigs - number of digits\n *\n * @return\tstring\n * @retval\tconverted string\n *\n */\n\nchar *\ndiscull_(char *cp, u_Long value, unsigned *ndigs)\n{\n\tchar *ocp;\n\n\tocp = cp;\n\twhile (value > 9) {\n\t\t*--cp = value % 10 + '0';\n\t\tvalue /= 10;\n\t}\n\t*--cp = value + '0';\n\t*ndigs = ocp - cp;\n\treturn (cp);\n}\n"
  },
  {
    "path": "src/lib/Libdis/disi10d_.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <float.h>\n#include <stddef.h>\n#include <stdlib.h>\n\n#include <dis.h>\n\nunsigned dis_dmx10 = 0;\ndouble *dis_dp10 = NULL;\ndouble *dis_dn10 = NULL;\n/**\n * @file\tdisi10d_.c\n */\n/**\n * @brief\n *\t-Allocate and fill tables with all powers of 10 that fit the forms:\n *\n *\t\t\t\t  n\n *\t\t\t\t 2\n *\t\t*dis_dp10[n] = 10\n *\n *\t\t\t\t  ( n)\n *\t\t\t\t -(2 )\n *\t\t*dis_dn10[n] = 10\n *\n *\tFor all values of n supported by the floating point format.  
Set\n *\tdis_dmx10 equal to the largest value of n that fits the format.\n */\n\nvoid\ndisi10d_()\n{\n\tint i;\n\tunsigned long ul;\n\tdis_long_double_t accum;\n\tsize_t tabsize;\n\n\tassert(dis_dp10 == NULL);\n\tassert(dis_dn10 == NULL);\n\tassert(dis_dmx10 == 0);\n\n#if DBL_MAX_10_EXP + DBL_MIN_10_EXP > 0\n\tul = DBL_MAX_10_EXP;\n#else\n\tul = -DBL_MIN_10_EXP;\n#endif\n\twhile (ul >>= 1)\n\t\tdis_dmx10++;\n\ttabsize = (dis_dmx10 + 1) * sizeof(double);\n\tdis_dp10 = (double *) malloc(tabsize);\n\tassert(dis_dp10 != NULL);\n\tdis_dn10 = (double *) malloc(tabsize);\n\tassert(dis_dn10 != NULL);\n\tassert(dis_dmx10 > 0);\n\tdis_dp10[0] = accum = 10.0L;\n\tdis_dn10[0] = 1.0L / accum;\n\tfor (i = 1; i <= dis_dmx10; i++) {\n\t\taccum *= accum;\n\t\tdis_dp10[i] = accum;\n\t\tdis_dn10[i] = 1.0L / accum;\n\t}\n}\n"
  },
  {
    "path": "src/lib/Libdis/disi10l_.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <float.h>\n#include <stddef.h>\n#include <stdlib.h>\n\n#include <dis.h>\n/**\n * @file\tdisi10l_.c\n */\nunsigned dis_lmx10 = 0;\ndis_long_double_t *dis_lp10 = NULL;\ndis_long_double_t *dis_ln10 = NULL;\n\n/**\n * @brief\n *\t-Allocate and fill tables with all powers of 10 that fit the forms:\n *\n *\t\t\t\t  n\n *\t\t\t\t 2\n *\t\t*dis_lp10[n] = 10\n *\n *\t\t\t\t  ( n)\n *\t\t\t\t -(2 )\n *\t\t*dis_ln10[n] = 10\n *\n *\tFor all values of n supported by the floating point format.  
Set\n *\tdis_lmx10 equal to the largest value of n that fits the format.\n */\n\nvoid\ndisi10l_()\n{\n\tint i;\n\tunsigned long ul;\n\tdis_long_double_t accum;\n\tsize_t tabsize;\n\n\tassert(dis_lp10 == NULL);\n\tassert(dis_ln10 == NULL);\n\tassert(dis_lmx10 == 0);\n\n#if LDBL_MAX_10_EXP + LDBL_MIN_10_EXP > 0\n\tul = LDBL_MAX_10_EXP;\n#else\n\tul = -LDBL_MIN_10_EXP;\n#endif\n\twhile (ul >>= 1)\n\t\tdis_lmx10++;\n\ttabsize = (dis_lmx10 + 1) * sizeof(dis_long_double_t);\n\tdis_lp10 = (dis_long_double_t *) malloc(tabsize);\n\tassert(dis_lp10 != NULL);\n\tdis_ln10 = (dis_long_double_t *) malloc(tabsize);\n\tassert(dis_ln10 != NULL);\n\tassert(dis_lmx10 > 0);\n\tdis_lp10[0] = accum = 10.0L;\n\tdis_ln10[0] = 1.0L / accum;\n\tfor (i = 1; i <= dis_lmx10; i++) {\n\t\taccum *= accum;\n\t\tdis_lp10[i] = accum;\n\t\tdis_ln10[i] = 1.0L / accum;\n\t}\n}\n"
  },
  {
    "path": "src/lib/Libdis/disiui_.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <limits.h>\n#include <stddef.h>\n#include <stdlib.h>\n#include <string.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n\nchar *dis_umax = NULL;\nunsigned dis_umaxd = 0;\n/**\n * @file\tdisiui_.c\n */\n/**\n * @brief\n *\tAllocate and fill a counted string containing the constant, UINT_MAX,\n *\texpressed as character codes of decimal digits.\n *\n * @par SEE:\n *      dis_umaxd = number of digits in UINT_MAX\\n\n *\tdis_umax[0] through dis_umax[dis_umaxd - 1] = the digits, in order.\n */\n\nvoid\ndisiui_()\n{\n\tchar *cp;\n\n\tassert(dis_umax == NULL);\n\tassert(dis_umaxd == 0);\n\n\tcp = discui_(dis_buffer + DIS_BUFSIZ, UINT_MAX, &dis_umaxd);\n\tassert(dis_umaxd > 0);\n\tdis_umax = (char *) malloc(dis_umaxd);\n\tassert(dis_umax != NULL);\n\tmemcpy(dis_umax, cp, dis_umaxd);\n}\n"
  },
  {
    "path": "src/lib/Libdis/disp10d_.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <math.h>\n#include \"dis_.h\"\n/**\n * @file\tdisp10d_.c\n */\n/**\n * @brief\n *\t\t  expon\n *\tReturns 10\tas a double precision value.\n *\n * @param[in] expon - exponent value\n *\n * @return\tdouble\n * @retval\t10^expon value\tsuccess\n * @retval\t0.0\t\tunderflow\n * @retval\tHUGE_VAL\toverflow\n *\n */\n\ndouble\ndisp10d_(int expon)\n{\n\tint negate;\n\tint pow2;\n\tdouble accum;\n\n\tif (expon == 0)\n\t\treturn (1.0);\n\n\t/* dis_dmx10 would be initialized by prior call to dis_init_tables */\n\n\tif (expon < 0) {\n\t\texpon = -expon;\n\t\tnegate = TRUE;\n\t} else {\n\t\tnegate = FALSE;\n\t}\n\tpow2 = 0;\n\tdo {\n\t\tif (expon & 1) {\n\t\t\taccum = dis_dp10[pow2];\n\t\t\twhile (expon >>= 1) {\n\t\t\t\tif (++pow2 > dis_dmx10)\n\t\t\t\t\treturn (negate ? 0.0 : HUGE_VAL);\n\t\t\t\tif (expon & 1)\n\t\t\t\t\taccum *= dis_dp10[pow2];\n\t\t\t}\n\t\t\treturn (negate ? 1.0 / accum : accum);\n\t\t}\n\t\texpon >>= 1;\n\t} while (pow2++ < dis_dmx10);\n\treturn (negate ? 0.0 : HUGE_VAL);\n}\n"
  },
  {
    "path": "src/lib/Libdis/disp10l_.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <math.h>\n\n#include <dis.h>\n#include \"dis_.h\"\n\nvoid disi10l_();\n/**\n * @file\tdisp10l_.c\n */\n/**\n * @brief\t   expon\n *\t-Returns 10\tas a dis_long_double_t value.\n *\n * @param[in] expon - exponent value\n *\n * @return      dis_long_double_t\n * @retval      10^expon value  success\n * @retval      0.0             underflow\n * @retval      HUGE_VAL        overflow\n *\n */\n\ndis_long_double_t\ndisp10l_(int expon)\n{\n\tint negate;\n\tint pow2;\n\tdis_long_double_t accum;\n\n\tif (expon == 0)\n\t\treturn (1.0L);\n\n\t/* dis_lmx10 would be initialized by prior call to dis_init_tables */\n\n\tif (expon < 0) {\n\t\tnegate = TRUE;\n\t\texpon = -expon;\n\t} else {\n\t\tnegate = FALSE;\n\t}\n\tpow2 = 0;\n\tdo {\n\t\tif (expon & 1) {\n\t\t\taccum = dis_lp10[pow2];\n\t\t\twhile (expon >>= 1) {\n\t\t\t\tif (++pow2 > dis_lmx10)\n\t\t\t\t\treturn (negate ? 0.0L : HUGE_VAL);\n\t\t\t\tif (expon & 1)\n\t\t\t\t\taccum *= dis_lp10[pow2];\n\t\t\t}\n\t\t\treturn (negate ? 1.0L / accum : accum);\n\t\t}\n\t\texpon >>= 1;\n\t} while (pow2++ < dis_lmx10);\n\treturn (negate ? 0.0L : HUGE_VAL);\n}\n"
  },
  {
    "path": "src/lib/Libdis/disrcs.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tdisrcs.c\n *\n * @par Synopsis:\n *\tchar *disrcs(int stream, size_t *nchars, int *retval)\n *\n *\tGets a Data-is-Strings character string from <stream> and converts it\n *\tinto a counted string, returns a pointer to it, and puts the character\n *\tcount into *<nchars>.  The character string in <stream> consists of an\n *\tunsigned integer, followed by a number of characters determined by the\n *\tunsigned integer.\n *\n *\tThe data returned has a NULL byte appended to the end in case the\n *\tcalling program wishes to treat it as a NULL-terminated string.\n *\tThis means the space allocated for the data is one byte larger than\n *\tindicated by the count.\n *\n *\t*<retval> gets DIS_SUCCESS if everything works well.  It gets an error\n *\tcode otherwise.  
In case of an error, the <stream> character pointer is\n *\treset, making it possible to retry with some other conversion strategy.\n *\tIn case of an error, disrcs returns NULL and <nchars> is set to 0.\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <stddef.h>\n#include <stdlib.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n\n/**\n * @brief\n *\t-Gets a Data-is-Strings character string from <stream> and converts it\n *      into a counted string, returns a pointer to it, and puts the character\n *      count into *<nchars>.  The character string in <stream> consists of an\n *      unsigned integer, followed by a number of characters determined by the\n *      unsigned integer.\n *\n * @param[in] stream - socket descriptor\n * @param[out] nchars - character count\n * @param[out] retval - success/error code\n *\n * @return\tchar*\n * @retval\tpointer to converted string\tsuccess\n * @retval\tNULL\t\t\t\terror\n *\n */\nchar *\ndisrcs(int stream, size_t *nchars, int *retval)\n{\n\tint locret;\n\tint negate;\n\tunsigned count = 0;\n\tchar *value = NULL;\n\n\tassert(nchars != NULL);\n\tassert(retval != NULL);\n\n\tlocret = disrsi_(stream, &negate, &count, 1, 0);\n\tif (locret == DIS_SUCCESS) {\n\t\tif (negate)\n\t\t\tlocret = DIS_BADSIGN;\n\t\telse {\n\t\t\tvalue = (char *) malloc((size_t) count + 1);\n\t\t\tif (value == NULL)\n\t\t\t\tlocret = DIS_NOMALLOC;\n\t\t\telse {\n\t\t\t\tif (dis_gets(stream, value,\n\t\t\t\t\t     (size_t) count) != (size_t) count)\n\t\t\t\t\tlocret = DIS_PROTO;\n\t\t\t\telse\n\t\t\t\t\tvalue[count] = '\\0';\n\t\t\t}\n\t\t}\n\t}\n\tif ((*retval = locret) != DIS_SUCCESS && value != NULL) {\n\t\tcount = 0;\n\t\tfree(value);\n\t\tvalue = NULL;\n\t}\n\t*nchars = count;\n\treturn (value);\n}\n"
  },
  {
    "path": "src/lib/Libdis/disrd.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tdisrd.c\n *\n * @par Synopsis:\n * \tdouble disrd(int stream, int *retval)\n *\n *\tGets a Data-is-Strings floating point number from <stream> and converts\n *\tit into a double which it returns.  
The number from <stream> consists of\n *\ttwo consecutive signed integers.  The first is the coefficient, with its\n *\timplied decimal point at the low-order end.  The second is the exponent\n *\tas a power of 10.\n *\n *\t*<retval> gets DIS_SUCCESS if everything works well.  It gets an error\n *\tcode otherwise.  In case of an error, the <stream> character pointer is\n *\treset, making it possible to retry with some other conversion strategy.\n *\n *\tBy fiat of the author, neither loss of significance nor underflow are\n *\terrors.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <float.h>\n#include <math.h>\n#include <stddef.h>\n#include <stdio.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n#undef disrd\n\n/**\n * @brief\n *      Gets a Data-is-Strings floating point number from <stream> and converts\n *      it into a double which it returns.  The number from <stream> consists of\n *      two consecutive signed integers.  The first is the coefficient, with its\n *      implied decimal point at the low-order end.  The second is the exponent\n *      as a power of 10.\n *\n * @param[in] stream - socket descriptor\n * @param[out] retval - success/error code\n *\n * @return      double\n * @retval      double value    success\n * @retval      0.0\t\terror\n *\n */\ndouble\ndisrd(int stream, int *retval)\n{\n\tint expon;\n\tunsigned uexpon;\n\tint locret;\n\tint negate;\n\tunsigned ndigs;\n\tunsigned nskips;\n\tdis_long_double_t ldval;\n\n\tassert(retval != NULL);\n\n\tldval = 0.0L;\n\tlocret = disrl_(stream, &ldval, &ndigs, &nskips, DBL_DIG, 1, 0);\n\tif (locret == DIS_SUCCESS) {\n\t\tlocret = disrsi_(stream, &negate, &uexpon, 1, 0);\n\t\tif (locret == DIS_SUCCESS) {\n\t\t\texpon = negate ? nskips - uexpon : nskips + uexpon;\n\t\t\tif (expon + (int) ndigs > DBL_MAX_10_EXP) {\n\t\t\t\tif (expon + (int) ndigs > DBL_MAX_10_EXP + 1) {\n\t\t\t\t\tldval = ldval < 0.0L ? -HUGE_VAL : HUGE_VAL;\n\t\t\t\t\tlocret = DIS_OVERFLOW;\n\t\t\t\t} else {\n\t\t\t\t\tldval *= disp10l_(expon - 1);\n\t\t\t\t\tif (ldval > DBL_MAX / 10.0L) {\n\t\t\t\t\t\tldval = ldval < 0.0L ? -HUGE_VAL : HUGE_VAL;\n\t\t\t\t\t\tlocret = DIS_OVERFLOW;\n\t\t\t\t\t} else\n\t\t\t\t\t\tldval *= 10.0L;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif (expon < LDBL_MIN_10_EXP) {\n\t\t\t\t\tldval *= disp10l_(expon + (int) ndigs);\n\t\t\t\t\tldval /= disp10l_((int) ndigs);\n\t\t\t\t} else\n\t\t\t\t\tldval *= disp10l_(expon);\n\t\t\t}\n\t\t}\n\t}\n\t*retval = locret;\n\treturn ((double) ldval);\n}\n"
  },
  {
    "path": "src/lib/Libdis/disrf.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tdisrf.c\n *\n * @par Synopsis:\n * \tfloat disrf(int stream, int *retval)\n *\n *\tGets a Data-is-Strings floating point number from <stream> and converts\n *\tit into a float and returns it.  
The number from <stream> consists of\n *\ttwo consecutive signed integers.  The first is the coefficient, with its\n *\timplied decimal point at the low-order end.  The second is the exponent\n *\tas a power of 10.\n *\n *\t*<retval> gets DIS_SUCCESS if everything works well.  It gets an error\n *\tcode otherwise.  In case of an error, the <stream> character pointer is\n *\treset, making it possible to retry with some other conversion strategy.\n *\n *\tBy fiat of the author, neither loss of significance nor underflow are\n *\terrors.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <float.h>\n#include <math.h>\n#include <stddef.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n#undef disrf\n\n/**\n * @brief\n *\t-Helper for disrf(): gets the coefficient of a Data-is-Strings floating\n *      point number from <stream> and accumulates it into *<dval>, keeping at\n *      most FLT_DIG significant digits.  The number of digits kept is stored\n *      in *<ndigs>; the number of low-order digits skipped (after rounding on\n *      the first of them) is stored in *<nskips>.\n *\n * @return\tint\n * @retval\tDIS_success/error status\n *\n */\nstatic int\ndisrd_(int stream, unsigned count, unsigned *ndigs, unsigned *nskips, double *dval, int recursv)\n{\n\tint c;\n\tint negate;\n\tunsigned unum;\n\tchar *cp;\n\n\tif (++recursv > DIS_RECURSIVE_LIMIT)\n\t\treturn (DIS_PROTO);\n\n\t/* dis_umaxd would be initialized by prior call to dis_init_tables */\n\tswitch (c = dis_getc(stream)) {\n\t\tcase '-':\n\t\tcase '+':\n\t\t\tnegate = c == '-';\n\t\t\t*nskips = count > FLT_DIG ? 
count - FLT_DIG : 0;\n\t\t\tcount -= *nskips;\n\t\t\t*ndigs = count;\n\t\t\t*dval = 0.0;\n\t\t\tdo {\n\t\t\t\tif ((c = dis_getc(stream)) < '0' || c > '9') {\n\t\t\t\t\tif (c < 0)\n\t\t\t\t\t\treturn (DIS_EOD);\n\t\t\t\t\treturn (DIS_NONDIGIT);\n\t\t\t\t}\n\t\t\t\t*dval = *dval * 10.0 + (double) (c - '0');\n\t\t\t} while (--count);\n\t\t\tif ((count = *nskips) > 0) {\n\t\t\t\tcount--;\n\t\t\t\tswitch (dis_getc(stream)) {\n\t\t\t\t\tcase '5':\n\t\t\t\t\t\tif (count == 0)\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\tcase '6':\n\t\t\t\t\tcase '7':\n\t\t\t\t\tcase '8':\n\t\t\t\t\tcase '9':\n\t\t\t\t\t\t*dval += 1.0;\n\t\t\t\t\tcase '0':\n\t\t\t\t\tcase '1':\n\t\t\t\t\tcase '2':\n\t\t\t\t\tcase '3':\n\t\t\t\t\tcase '4':\n\t\t\t\t\t\tif (count > 0 &&\n\t\t\t\t\t\t    disr_skip(stream, (size_t) count) < 0)\n\t\t\t\t\t\t\treturn (DIS_EOD);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tdefault:\n\t\t\t\t\t\treturn (DIS_NONDIGIT);\n\t\t\t\t}\n\t\t\t}\n\t\t\t*dval = negate ? -(*dval) : (*dval);\n\t\t\treturn (DIS_SUCCESS);\n\t\tcase '0':\n\t\t\treturn (DIS_LEADZRO);\n\t\tcase '1':\n\t\tcase '2':\n\t\tcase '3':\n\t\tcase '4':\n\t\tcase '5':\n\t\tcase '6':\n\t\tcase '7':\n\t\tcase '8':\n\t\tcase '9':\n\t\t\tunum = c - '0';\n\t\t\tif (count > 1) {\n\t\t\t\tif (count > dis_umaxd)\n\t\t\t\t\tbreak;\n\t\t\t\tif (dis_gets(stream, dis_buffer + 1, count - 1) !=\n\t\t\t\t    count - 1)\n\t\t\t\t\treturn (DIS_EOD);\n\t\t\t\tcp = dis_buffer;\n\t\t\t\tif (count == dis_umaxd) {\n\t\t\t\t\t*cp = c;\n\t\t\t\t\tif (memcmp(dis_buffer, dis_umax, dis_umaxd) > 0)\n\t\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\twhile (--count) {\n\t\t\t\t\tif ((c = *++cp) < '0' || c > '9')\n\t\t\t\t\t\treturn (DIS_NONDIGIT);\n\t\t\t\t\tunum = unum * 10 + (unsigned) (c - '0');\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn (disrd_(stream, unum, ndigs, nskips, dval, recursv));\n\t\tcase -1:\n\t\t\treturn (DIS_EOD);\n\t\tcase -2:\n\t\t\treturn (DIS_EOF);\n\t\tdefault:\n\t\t\treturn (DIS_NONDIGIT);\n\t}\n\t*dval = HUGE_VAL;\n\treturn (DIS_OVERFLOW);\n}\n\n/**\n * 
@brief\n *      Gets a Data-is-Strings floating point number from <stream> and converts\n *      it into a float which it returns.  The number from <stream> consists of\n *      two consecutive signed integers.  The first is the coefficient, with its\n *      implied decimal point at the low-order end.  The second is the exponent\n *      as a power of 10.\n *\n * @param[in] stream - socket descriptor\n * @param[out] retval - success/error code\n *\n * @return      double\n * @retval      double value    success\n * @retval      0.0             error\n *\n */\n\nfloat\ndisrf(int stream, int *retval)\n{\n\tint expon;\n\tunsigned uexpon;\n\tint locret;\n\tint negate;\n\t/* following were static vars, so initializing them to defaults */\n\tunsigned ndigs = 0; /* 3 vars now stack variables for threads */\n\tunsigned nskips = 0;\n\tdouble dval = 0.0;\n\n\tassert(retval != NULL);\n\tassert(stream >= 0);\n\n\tdval = 0.0;\n\tif ((locret = disrd_(stream, 1, &ndigs, &nskips, &dval, 0)) == DIS_SUCCESS) {\n\t\tlocret = disrsi_(stream, &negate, &uexpon, 1, 0);\n\t\tif (locret == DIS_SUCCESS) {\n\t\t\texpon = negate ? nskips - uexpon : nskips + uexpon;\n\t\t\tif (expon + (int) ndigs > FLT_MAX_10_EXP) {\n\t\t\t\tif (expon + (int) ndigs > FLT_MAX_10_EXP + 1) {\n\t\t\t\t\tdval = dval < 0.0 ? -HUGE_VAL : HUGE_VAL;\n\t\t\t\t\tlocret = DIS_OVERFLOW;\n\t\t\t\t} else {\n\t\t\t\t\tdval *= disp10d_(expon - 1);\n\t\t\t\t\tif (dval > FLT_MAX / 10.0) {\n\t\t\t\t\t\tdval = dval < 0.0 ? -HUGE_VAL : HUGE_VAL;\n\t\t\t\t\t\tlocret = DIS_OVERFLOW;\n\t\t\t\t\t} else\n\t\t\t\t\t\tdval *= 10.0;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif (expon < DBL_MIN_10_EXP) {\n\t\t\t\t\tdval *= disp10d_(expon + (int) ndigs);\n\t\t\t\t\tdval /= disp10d_((int) ndigs);\n\t\t\t\t} else\n\t\t\t\t\tdval *= disp10d_(expon);\n\t\t\t}\n\t\t}\n\t}\n\t*retval = locret;\n\treturn (dval);\n}\n"
  },
  {
    "path": "src/lib/Libdis/disrfcs.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tdisrfcs.c\n *\n * @par Synopsis:\n *\tint disrfcs(int stream, size_t *nchars, size_t achars, char *value)\n *\n *\tGets a Data-is-Strings character string from <stream> and converts it\n *\tinto a counted string, puts the string into <value>, a pre-allocated\n *\tstring, <achars> long, and puts the count into *<nchars>.  The character\n *\tstring in <stream> consists of an unsigned integer, followed by a number\n *\tof characters determined by the unsigned integer.\n *\n *\tDisrfcs returns DIS_SUCCESS if everything works well.  It returns an\n *\terror code otherwise.  In case of an error, the <stream> character\n *\tpointer is reset, making it possible to retry with some other conversion\n *\tstrategy, and <nchars> is set to 0.\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <stddef.h>\n#include <stdlib.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n\n/**\n * @brief\n *\t-Gets a Data-is-Strings character string from <stream> and converts it\n *      into a counted string, puts the string into <value>, a pre-allocated\n *      string, <achars> long, and puts the count into *<nchars>.  
The character\n *      string in <stream> consists of an unsigned integer, followed by a number\n *      of characters determined by the unsigned integer.\n *\n * @param[in] stream - socket descriptor\n * @param[out] nchars - count of characters read\n * @param[in] achars - size of the pre-allocated <value> buffer\n * @param[out] value - string value\n *\n * @return\tint\n * @retval\tDIS_SUCCESS/error status\n *\n */\nint\ndisrfcs(int stream, size_t *nchars, size_t achars, char *value)\n{\n\tint locret;\n\tint negate;\n\tunsigned count = 0;\n\n\tassert(nchars != NULL);\n\tassert(value != NULL);\n\n\tlocret = disrsi_(stream, &negate, &count, 1, 0);\n\tif (locret == DIS_SUCCESS) {\n\t\tif (negate)\n\t\t\tlocret = DIS_BADSIGN;\n\t\telse if ((*nchars = count) > achars)\n\t\t\tlocret = DIS_OVERFLOW;\n\t\telse if (dis_gets(stream, value, *nchars) != *nchars)\n\t\t\tlocret = DIS_PROTO;\n\t}\n\tif (locret != DIS_SUCCESS)\n\t\t*nchars = 0;\n\treturn (locret);\n}\n"
  },
  {
    "path": "src/lib/Libdis/disrfst.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tdisrfst.c\n *\n * @par Synopsis:\n *\tint disrfst(int stream, size_t achars, char *value)\n *\n *\tGets a Data-is-Strings character string from <stream> and converts it\n *\tinto an ASCII NUL-terminated string, and puts the string into <value>,\n *\ta pre-allocated string, <achars> long.  The character string in <stream>\n *\tconsists of an unsigned integer, followed by a number of characters\n *\tdetermined by the unsigned integer.\n *\n *\tDisrfst returns DIS_SUCCESS if everything works well.  It returns an\n *\terror code otherwise.  In case of an error, the <stream> character\n *\tpointer is reset, making it possible to retry with some other conversion\n *\tstrategy, and the first character of <value> is set to ASCII NUL.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <stddef.h>\n#include <stdlib.h>\n\n#ifndef NDEBUG\n#include <string.h>\n#endif\n\n#include \"dis.h\"\n#include \"dis_.h\"\n\n/**\n * @brief\n *      -Gets a Data-is-Strings character string from <stream> and converts it\n *      into an ASCII NUL-terminated string, and puts the string into <value>,\n *      a pre-allocated string, <achars> long.  
The character string in <stream>\n *      consists of an unsigned integer, followed by a number of characters\n *      determined by the unsigned integer.\n *\n * @param[in] stream - socket descriptor\n * @param[in] achars - size of the pre-allocated <value> buffer\n * @param[out] value - string value\n *\n * @return      int\n * @retval      DIS_SUCCESS/error status\n *\n */\n\nint\ndisrfst(int stream, size_t achars, char *value)\n{\n\tint locret;\n\tint negate;\n\tunsigned count;\n\n\tassert(value != NULL);\n\n\tlocret = disrsi_(stream, &negate, &count, 1, 0);\n\tif (locret == DIS_SUCCESS) {\n\t\tif (negate)\n\t\t\tlocret = DIS_BADSIGN;\n\t\telse if (count > achars)\n\t\t\tlocret = DIS_OVERFLOW;\n\t\telse if (dis_gets(stream, value, (size_t) count) !=\n\t\t\t (size_t) count)\n\t\t\tlocret = DIS_PROTO;\n#ifndef NDEBUG\n\t\telse if (memchr(value, 0, (size_t) count))\n\t\t\tlocret = DIS_NULLSTR;\n#endif\n\t\telse\n\t\t\tvalue[count] = '\\0';\n\t}\n\tif (locret != DIS_SUCCESS)\n\t\t*value = '\\0';\n\treturn (locret);\n}\n"
  },
  {
    "path": "src/lib/Libdis/disrl.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tdisrl.c\n *\n * @par Synopsis:\n * \tlong double disrl(int stream, int *retval)\n *\n *\tGets a Data-is-Strings floating point number from <stream> and converts\n *\tit into a long double and returns it.  
The number from <stream> consists\n *\tof two consecutive signed integers.  The first is the coefficient, with\n *\tits implied decimal point at the low-order end.  The second is the\n *\texponent as a power of 10.\n *\n *\t*<retval> gets DIS_SUCCESS if everything works well.  It gets an error\n *\tcode otherwise.  In case of an error, the <stream> character pointer is\n *\treset, making it possible to retry with some other conversion strategy.\n *\n *\tBy fiat of the author, neither loss of significance nor underflow are\n *\terrors.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <math.h>\n#include <stddef.h>\n#include <stdio.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n\n/**\n * @brief\n *      Gets a Data-is-Strings floating point number from <stream> and converts\n *      it into a long double which it returns.  The number from <stream> consists of\n *      two consecutive signed integers.  The first is the coefficient, with its\n *      implied decimal point at the low-order end.  The second is the exponent\n *      as a power of 10.\n *\n * @param[in] stream - socket descriptor\n * @param[out] retval - success/error code\n *\n * @return      dis_long_double_t\n * @retval      long double value\tsuccess\n * @retval      0.0L             \terror\n *\n */\n\ndis_long_double_t\ndisrl(int stream, int *retval)\n{\n\tint expon;\n\tunsigned uexpon;\n\tint locret;\n\tint negate;\n\tunsigned ndigs;\n\tunsigned nskips;\n\tdis_long_double_t ldval;\n\n\tassert(retval != NULL);\n\n\tldval = 0.0L;\n\tlocret = disrl_(stream, &ldval, &ndigs, &nskips, LDBL_DIG, 1, 0);\n\tif (locret == DIS_SUCCESS) {\n\t\tlocret = disrsi_(stream, &negate, &uexpon, 1, 0);\n\t\tif (locret == DIS_SUCCESS) {\n\t\t\texpon = negate ? nskips - uexpon : nskips + uexpon;\n\t\t\tif (expon + (int) ndigs > LDBL_MAX_10_EXP) {\n\t\t\t\tif (expon + (int) ndigs > LDBL_MAX_10_EXP + 1) {\n\t\t\t\t\tldval = ldval < 0.0L ? 
-HUGE_VAL : HUGE_VAL;\n\t\t\t\t\tlocret = DIS_OVERFLOW;\n\t\t\t\t} else {\n\t\t\t\t\tldval *= disp10l_(expon - 1);\n\t\t\t\t\tif (ldval > LDBL_MAX / 10.0L) {\n\t\t\t\t\t\tldval = ldval < 0.0L ? -HUGE_VAL : HUGE_VAL;\n\t\t\t\t\t\tlocret = DIS_OVERFLOW;\n\t\t\t\t\t} else\n\t\t\t\t\t\tldval *= 10.0L;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif (expon < LDBL_MIN_10_EXP) {\n\t\t\t\t\tldval *= disp10l_(expon + (int) ndigs);\n\t\t\t\t\tldval /= disp10l_((int) ndigs);\n\t\t\t\t} else\n\t\t\t\t\tldval *= disp10l_(expon);\n\t\t\t}\n\t\t}\n\t}\n\t*retval = locret;\n\treturn (ldval);\n}\n"
  },
  {
    "path": "src/lib/Libdis/disrl_.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <math.h>\n#include <stddef.h>\n#include <string.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n/**\n * @file\tdisrl_.c\n */\n/**\n * @brief\n *      Gets a Data-is-Strings floating point number from <stream>, converts\n *      it into a long double, and stores it through <ldval>.  The number from <stream> consists of\n *      two consecutive signed integers.  The first is the coefficient, with its\n *      implied decimal point at the low-order end.  The second is the exponent\n *      as a power of 10.\n *\n * @return      int\n * @retval      DIS_SUCCESS/error status\n *\n */\nint\ndisrl_(int stream, dis_long_double_t *ldval, unsigned *ndigs, unsigned *nskips, unsigned sigd, unsigned count, int recursv)\n{\n\tint c;\n\tint negate;\n\tunsigned unum;\n\tchar *cp;\n\tdis_long_double_t fpnum;\n\n\tassert(stream >= 0);\n\n\tif (++recursv > DIS_RECURSIVE_LIMIT)\n\t\treturn (DIS_PROTO);\n\n\t/* dis_umaxd would be initialized by prior call to dis_init_tables */\n\tswitch (c = dis_getc(stream)) {\n\t\tcase '-':\n\t\tcase '+':\n\t\t\tnegate = c == '-';\n\t\t\t*nskips = count > sigd ? 
count - sigd : 0;\n\t\t\tcount -= *nskips;\n\t\t\t*ndigs = count;\n\t\t\tfpnum = 0.0L;\n\t\t\tdo {\n\t\t\t\tif ((c = dis_getc(stream)) < '0' || c > '9') {\n\t\t\t\t\tif (c < 0)\n\t\t\t\t\t\treturn (DIS_EOD);\n\t\t\t\t\treturn (DIS_NONDIGIT);\n\t\t\t\t}\n\t\t\t\tfpnum = fpnum * 10.0L + (dis_long_double_t) (c - '0');\n\t\t\t} while (--count);\n\t\t\tif ((count = *nskips) > 0) {\n\t\t\t\tcount--;\n\t\t\t\tswitch (dis_getc(stream)) {\n\t\t\t\t\tcase '5':\n\t\t\t\t\t\tif (count == 0)\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\tcase '6':\n\t\t\t\t\tcase '7':\n\t\t\t\t\tcase '8':\n\t\t\t\t\tcase '9':\n\t\t\t\t\t\tfpnum += 1.0L;\n\t\t\t\t\tcase '0':\n\t\t\t\t\tcase '1':\n\t\t\t\t\tcase '2':\n\t\t\t\t\tcase '3':\n\t\t\t\t\tcase '4':\n\t\t\t\t\t\tif (count > 0 &&\n\t\t\t\t\t\t    disr_skip(stream, (size_t) count) != count)\n\t\t\t\t\t\t\treturn (DIS_EOD);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tdefault:\n\t\t\t\t\t\treturn (DIS_NONDIGIT);\n\t\t\t\t}\n\t\t\t}\n\t\t\t*ldval = negate ? -fpnum : fpnum;\n\t\t\treturn (DIS_SUCCESS);\n\t\tcase '0':\n\t\t\treturn (DIS_LEADZRO);\n\t\tcase '1':\n\t\tcase '2':\n\t\tcase '3':\n\t\tcase '4':\n\t\tcase '5':\n\t\tcase '6':\n\t\tcase '7':\n\t\tcase '8':\n\t\tcase '9':\n\t\t\tunum = c - '0';\n\t\t\tif (count > 1) {\n\t\t\t\tif (count > dis_umaxd)\n\t\t\t\t\tbreak;\n\t\t\t\tif (dis_gets(stream, dis_buffer + 1, count - 1) !=\n\t\t\t\t    count - 1)\n\t\t\t\t\treturn (DIS_EOD);\n\t\t\t\tcp = dis_buffer;\n\t\t\t\tif (count == dis_umaxd) {\n\t\t\t\t\t*cp = c;\n\t\t\t\t\tif (memcmp(dis_buffer, dis_umax, dis_umaxd) > 0)\n\t\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\twhile (--count) {\n\t\t\t\t\tif ((c = *++cp) < '0' || c > '9')\n\t\t\t\t\t\treturn (DIS_NONDIGIT);\n\t\t\t\t\tunum = unum * 10 + (unsigned) (c - '0');\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn (disrl_(stream, ldval, ndigs, nskips, sigd, unum, recursv));\n\t\tcase -1:\n\t\t\treturn (DIS_EOD);\n\t\tcase -2:\n\t\t\treturn (DIS_EOF);\n\t\tdefault:\n\t\t\treturn (DIS_NONDIGIT);\n\t}\n\t*ldval = HUGE_VAL;\n\treturn 
(DIS_OVERFLOW);\n}\n"
  },
  {
    "path": "src/lib/Libdis/disrsc.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tdisrsc.c\n *\n * @par Synopsis:\n *\tsigned char disrsc(int stream, int *retval)\n *\n *\tGets a Data-is-Strings signed integer from <stream>, converts it into a\n *\tsigned char, and returns it.  
The signed integer in <stream> consists\n *\tof a counted string of digits, starting with a zero or a minus sign,\n *\twhich represents the number.  If the number doesn't lie between -9 and\n *\t9, inclusive, it is preceded by at least one count.\n *\n *\tThis format for character strings representing signed integers can best\n *\tbe understood through the decoding algorithm:\n *\n *\t1. Initialize the digit count to 1.\n *\n *\t2. Read the next digit; if it is a sign, go to step (4).\n *\n *\t3. Decode a new count from the digit decoded in step (2) and the next\n *\t   count - 1 digits; repeat step (2).\n *\n *\t4. Decode the next count digits as the magnitude of the signed integer.\n *\n *\t*<retval> gets DIS_SUCCESS if everything works well.  It gets an error\n *\tcode otherwise.  In case of an error, the <stream> character pointer is\n *\treset, making it possible to retry with some other conversion strategy.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <stddef.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n#undef disrsc\n\n/**\n * @brief\n *      Gets a Data-is-Strings signed integer from <stream>, converts it into a\n *      signed char, and returns it.  The signed integer in <stream> consists\n *      of a counted string of digits, starting with a zero or a minus sign,\n *      which represents the number.  If the number doesn't lie between -9 and\n *      9, inclusive, it is preceded by at least one count.\n *\n * @param[in] stream - socket descriptor\n * @param[out] retval - dis status val\n *\n * @return      signed char\n * @retval      signed char val\t\tsuccess\n * @retval\t0\t\t\terror\n *\n */\nsigned char\ndisrsc(int stream, int *retval)\n{\n\tint locret;\n\tint negate;\n\tsigned char value;\n\tunsigned int uvalue;\n\n\tassert(retval != NULL);\n\n\tvalue = 0;\n\tswitch (locret = disrsi_(stream, &negate, &uvalue, 1, 0)) {\n\t\tcase DIS_SUCCESS:\n\t\t\tif (negate ? 
uvalue <= (unsigned) -(SCHAR_MIN + 1) + 1 : uvalue <= SCHAR_MAX) {\n\t\t\t\tvalue = negate ? -uvalue : uvalue;\n\t\t\t\tbreak;\n\t\t\t} else\n\t\t\t\tlocret = DIS_OVERFLOW;\n\t\tcase DIS_OVERFLOW:\n\t\t\tvalue = negate ? SCHAR_MIN : SCHAR_MAX;\n\t}\n\t*retval = locret;\n\treturn (value);\n}\n"
  },
  {
    "path": "src/lib/Libdis/disrsi.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tdisrsi.c\n *\n * @par Synopsis:\n *\tint disrsi(int stream, int *retval)\n *\n *\tGets a Data-is-Strings signed integer from <stream>, converts it into\n *\tan int and returns it.\n *\n *\tThis format for character strings representing signed integers can best\n *\tbe understood through the decoding algorithm:\n *\n *\t1. Initialize the digit count to 1.\n *\n *\t2. Read the next digit; if it is a sign, go to step (4).\n *\n *\t3. Decode a new count from the digit decoded in step (2) and the next\n *\t   count - 1 digits; repeat step (2).\n *\n *\t4. Decode the next count digits as the magnitude of the signed integer.\n *\n *\t*<retval> gets DIS_SUCCESS if everything works well.  It gets an error\n *\tcode otherwise.  
In case of an error, the <stream> character pointer is\n *\treset, making it possible to retry with some other conversion strategy.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <stddef.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n#undef disrsi\n\n/**\n * @brief\n *\t-  Gets a Data-is-Strings signed integer from <stream>, converts it into\n *      an int and returns it.\n *\n * @param[in] stream - socket descriptor\n * @param[out] retval - dis status val\n *\n * @return      int\n * @retval      integer value\t\tsuccess\n * @retval      0                       error\n *\n */\n\nint\ndisrsi(int stream, int *retval)\n{\n\tint locret;\n\tint negate;\n\tint value;\n\tunsigned int uvalue;\n\n\tassert(retval != NULL);\n\n\tvalue = 0;\n\tswitch (locret = disrsi_(stream, &negate, &uvalue, 1, 0)) {\n\t\tcase DIS_SUCCESS:\n\t\t\tif (negate ? uvalue <= (unsigned) -(INT_MIN + 1) + 1 : uvalue <= (unsigned) INT_MAX) {\n\t\t\t\tvalue = negate ? -uvalue : uvalue;\n\t\t\t\tbreak;\n\t\t\t} else\n\t\t\t\tlocret = DIS_OVERFLOW;\n\t\tcase DIS_OVERFLOW:\n\t\t\tvalue = negate ? INT_MIN : INT_MAX;\n\t}\n\t*retval = locret;\n\treturn (value);\n}\n"
  },
  {
    "path": "src/lib/Libdis/disrsi_.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <stddef.h>\n#include <string.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n/**\n * @file\tdisrsi_.c\n */\n/**\n * @brief\n *      Gets a Data-is-Strings signed integer from <stream>, converts it into an\n *      unsigned magnitude and a sign, and stores them through <value> and\n *      <negate>.  The signed integer in <stream> consists\n *      of a counted string of digits, starting with a zero or a minus sign,\n *      which represents the number.  
If the number doesn't lie between -9 and\n *      9, inclusive, it is preceded by at least one count.\n *\n *\n * @return      int\n * @retval      DIS_SUCCESS/error status\n *\n */\nint\ndisrsi_(int stream, int *negate, unsigned *value, unsigned count, int recursv)\n{\n\tint c;\n\tunsigned locval;\n\tunsigned ndigs;\n\tchar *cp;\n\n\tassert(negate != NULL);\n\tassert(value != NULL);\n\tassert(count);\n\tassert(stream >= 0);\n\n\tif (++recursv > DIS_RECURSIVE_LIMIT)\n\t\treturn (DIS_PROTO);\n\t/* dis_umaxd would be initialized by prior call to dis_init_tables */\n\tswitch (c = dis_getc(stream)) {\n\t\tcase '-':\n\t\tcase '+':\n\t\t\t*negate = c == '-';\n\t\t\tif (count > dis_umaxd)\n\t\t\t\tgoto overflow;\n\t\t\tif (dis_gets(stream, dis_buffer, count) != count)\n\t\t\t\treturn (DIS_EOD);\n\t\t\tif (count == dis_umaxd) {\n\t\t\t\tif (memcmp(dis_buffer, dis_umax, dis_umaxd) > 0)\n\t\t\t\t\tgoto overflow;\n\t\t\t}\n\t\t\tcp = dis_buffer;\n\t\t\tlocval = 0;\n\t\t\tdo {\n\t\t\t\tif ((c = *cp++) < '0' || c > '9')\n\t\t\t\t\treturn (DIS_NONDIGIT);\n\t\t\t\tlocval = 10 * locval + c - '0';\n\t\t\t} while (--count);\n\t\t\t*value = locval;\n\t\t\treturn (DIS_SUCCESS);\n\t\tcase '0':\n\t\t\treturn (DIS_LEADZRO);\n\t\tcase '1':\n\t\tcase '2':\n\t\tcase '3':\n\t\tcase '4':\n\t\tcase '5':\n\t\tcase '6':\n\t\tcase '7':\n\t\tcase '8':\n\t\tcase '9':\n\t\t\tndigs = c - '0';\n\t\t\tif (count > 1) {\n\t\t\t\tif (count > dis_umaxd)\n\t\t\t\t\tbreak;\n\t\t\t\tif (dis_gets(stream, dis_buffer + 1, count - 1) !=\n\t\t\t\t    count - 1)\n\t\t\t\t\treturn (DIS_EOD);\n\t\t\t\tcp = dis_buffer;\n\t\t\t\tif (count == dis_umaxd) {\n\t\t\t\t\t*cp = c;\n\t\t\t\t\tif (memcmp(dis_buffer, dis_umax, dis_umaxd) > 0)\n\t\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\twhile (--count) {\n\t\t\t\t\tif ((c = *++cp) < '0' || c > '9')\n\t\t\t\t\t\treturn (DIS_NONDIGIT);\n\t\t\t\t\tndigs = 10 * ndigs + c - '0';\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn (disrsi_(stream, negate, value, ndigs, recursv));\n\t\tcase 
-1:\n\t\t\treturn (DIS_EOD);\n\t\tcase -2:\n\t\t\treturn (DIS_EOF);\n\t\tdefault:\n\t\t\treturn (DIS_NONDIGIT);\n\t}\n\t*negate = FALSE;\noverflow:\n\t*value = UINT_MAX;\n\treturn (DIS_OVERFLOW);\n}\n"
  },
  {
    "path": "src/lib/Libdis/disrsl.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tdisrsl.c\n *\n * @par Synopsis:\n *\tlong disrsl(int stream, int *retval)\n *\n *\tGets a Data-is-Strings signed integer from <stream>, converts it into a\n *\tlong, and returns it.\n *\n *\tThis format for character strings representing signed integers can best\n *\tbe understood through the decoding algorithm:\n *\n *\t1. Initialize the digit count to 1.\n *\n *\t2. Read the next digit; if it is a sign, go to step (4).\n *\n *\t3. Decode a new count from the digit decoded in step (2) and the next\n *\t   count - 1 digits; repeat step (2).\n *\n *\t4. Decode the next count digits as the magnitude of the signed integer.\n *\n *\t*<retval> gets DIS_SUCCESS if everything works well.  It gets an error\n *\tcode otherwise.  
In case of an error, the <stream> character pointer is\n *\treset, making it possible to retry with some other conversion strategy.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <stddef.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n#undef disrsl\n\n/**\n * @brief\n *\tGets a Data-is-Strings signed integer from <stream>, converts it into a\n *      long, and returns it.\n *\n * @param[in] stream - socket descriptor\n * @param[out] retval - dis status val\n *\n * @return      long\n * @retval      long value           success\n * @retval      0                       error\n *\n */\n\nlong\ndisrsl(int stream, int *retval)\n{\n\tint locret;\n\tint negate;\n\tlong value;\n\tunsigned long uvalue;\n\n\tassert(retval != NULL);\n\n\tvalue = 0;\n\tswitch (locret = disrsl_(stream, &negate, &uvalue, 1, 0)) {\n\t\tcase DIS_SUCCESS:\n\t\t\tif (negate ? uvalue <= (unsigned long) -(LONG_MIN + 1) + 1 : uvalue <= LONG_MAX) {\n\t\t\t\tvalue = negate ? -uvalue : uvalue;\n\t\t\t\tbreak;\n\t\t\t} else\n\t\t\t\tlocret = DIS_OVERFLOW;\n\t\tcase DIS_OVERFLOW:\n\t\t\tvalue = negate ? LONG_MIN : LONG_MAX;\n\t}\n\t*retval = locret;\n\treturn (value);\n}\n"
  },
  {
    "path": "src/lib/Libdis/disrsl_.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <stdio.h>\n#include <string.h>\n#include <stdlib.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n\nextern char *ulmax;\nextern unsigned ulmaxdigs;\n/**\n * @file\tdisrsl_.c\n */\n/**\n * @brief\n *      Gets a Data-is-Strings signed integer from <stream>, converts it into an\n *      unsigned long magnitude and a sign, and stores them through <value> and\n *      <negate>.\n *\n * @return      int\n * @retval      DIS_SUCCESS/error status\n *\n */\n\nint\ndisrsl_(int stream, int *negate, unsigned long *value, unsigned long count, int recursv)\n{\n\tint c;\n\tunsigned long locval;\n\tunsigned long ndigs;\n\tchar *cp;\n\n\tassert(negate != NULL);\n\tassert(value != NULL);\n\tassert(count);\n\tassert(stream >= 0);\n\n\tif (++recursv > DIS_RECURSIVE_LIMIT)\n\t\treturn (DIS_PROTO);\n\n\tswitch (c = dis_getc(stream)) {\n\t\tcase '-':\n\t\tcase '+':\n\t\t\tif (count > ulmaxdigs)\n\t\t\t\tgoto overflow;\n\t\t\t*negate = c == '-';\n\t\t\tif (dis_gets(stream, dis_buffer, count) != count)\n\t\t\t\treturn (DIS_EOD);\n\t\t\tif (count == ulmaxdigs) {\n\t\t\t\tif (memcmp(dis_buffer, ulmax, ulmaxdigs) > 0)\n\t\t\t\t\tgoto overflow;\n\t\t\t}\n\t\t\tcp = dis_buffer;\n\t\t\tlocval = 0;\n\t\t\tdo {\n\t\t\t\tif ((c = *cp++) < '0' || c > '9')\n\t\t\t\t\treturn (DIS_NONDIGIT);\n\t\t\t\tlocval = 10 * locval + c - '0';\n\t\t\t} 
while (--count);\n\t\t\t*value = locval;\n\t\t\treturn (DIS_SUCCESS);\n\t\tcase '0':\n\t\t\treturn (DIS_LEADZRO);\n\t\tcase '1':\n\t\tcase '2':\n\t\tcase '3':\n\t\tcase '4':\n\t\tcase '5':\n\t\tcase '6':\n\t\tcase '7':\n\t\tcase '8':\n\t\tcase '9':\n\t\t\tndigs = c - '0';\n\t\t\tif (count > 1) {\n\t\t\t\tif (count > ulmaxdigs)\n\t\t\t\t\tbreak;\n\t\t\t\tif (dis_gets(stream, dis_buffer + 1, count - 1) !=\n\t\t\t\t    count - 1)\n\t\t\t\t\treturn (DIS_EOD);\n\t\t\t\tcp = dis_buffer;\n\t\t\t\tif (count == ulmaxdigs) {\n\t\t\t\t\t*cp = c;\n\t\t\t\t\tif (memcmp(dis_buffer, ulmax, ulmaxdigs) > 0)\n\t\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\twhile (--count) {\n\t\t\t\t\tif ((c = *++cp) < '0' || c > '9')\n\t\t\t\t\t\treturn (DIS_NONDIGIT);\n\t\t\t\t\tndigs = 10 * ndigs + c - '0';\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn (disrsl_(stream, negate, value, ndigs, recursv));\n\t\tcase -1:\n\t\t\treturn (DIS_EOD);\n\t\tcase -2:\n\t\t\treturn (DIS_EOF);\n\t\tdefault:\n\t\t\treturn (DIS_NONDIGIT);\n\t}\n\t*negate = FALSE;\noverflow:\n\t*value = ULONG_MAX;\n\treturn (DIS_OVERFLOW);\n}\n"
  },
  {
    "path": "src/lib/Libdis/disrsll_.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <stdio.h>\n#include <string.h>\n#include <stdlib.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n\nchar *ulmax = NULL;\nunsigned ulmaxdigs = 0;\n/**\n * @file\tdisrsll_.c\n */\n/**\n * @brief\n *\t-initialise u_Long max\n *\n */\nvoid\ninit_ulmax()\n{\n\tchar *cp;\n\tif (ulmaxdigs == 0) {\n\t\tcp = discull_(dis_buffer + DIS_BUFSIZ, ULLONG_MAX, &ulmaxdigs);\n\t\tulmax = (char *) malloc(ulmaxdigs);\n\t\tassert(ulmax != NULL);\n\t\tmemcpy(ulmax, cp, ulmaxdigs);\n\t}\n}\n\n/**\n * @brief\n *\t-Function to convert a string into numeric form of type u_Long\n *\n * @return\tint\n * @retval\tDIS_SUCCESS/error\n *\n */\n\nint\ndisrsll_(int stream, int *negate, u_Long *value, unsigned long count, int recursv)\n{\n\tint c;\n\tu_Long locval;\n\tunsigned long ndigs;\n\tchar *cp;\n\n\tassert(negate != NULL);\n\tassert(value != NULL);\n\tassert(count);\n\tassert(stream >= 0);\n\n\tif (++recursv > DIS_RECURSIVE_LIMIT)\n\t\treturn (DIS_PROTO);\n\n\t/* ulmaxdigs would be initialized from dis_init_tables */\n\tswitch (c = dis_getc(stream)) {\n\t\tcase '-':\n\t\tcase '+':\n\t\t\t*negate = (c == '-');\n\t\t\tif (count > ulmaxdigs)\n\t\t\t\tgoto overflow;\n\t\t\tif (dis_gets(stream, dis_buffer, count) != count)\n\t\t\t\treturn (DIS_EOD);\n\t\t\tif (count == ulmaxdigs) {\n\t\t\t\tif (memcmp(dis_buffer, ulmax, ulmaxdigs) > 0)\n\t\t\t\t\tgoto overflow;\n\t\t\t}\n\t\t\tcp = dis_buffer;\n\t\t\tlocval = 0;\n\t\t\tdo {\n\t\t\t\tif ((c = *cp++) < '0' || c > '9')\n\t\t\t\t\treturn (DIS_NONDIGIT);\n\t\t\t\tlocval = (10 * locval) + c - '0';\n\t\t\t} while (--count);\n\t\t\t*value = locval;\n\t\t\treturn (DIS_SUCCESS);\n\t\tcase '0':\n\t\t\treturn (DIS_LEADZRO);\n\t\tcase '1':\n\t\tcase '2':\n\t\tcase '3':\n\t\tcase '4':\n\t\tcase '5':\n\t\tcase '6':\n\t\tcase '7':\n\t\tcase '8':\n\t\tcase '9':\n\t\t\tndigs = c - '0';\n\t\t\tif (count > 1) {\n\t\t\t\tif (count > ulmaxdigs)\n\t\t\t\t\tbreak;\n\t\t\t\tif (dis_gets(stream, dis_buffer + 1, count - 1) !=\n\t\t\t\t    count - 1)\n\t\t\t\t\treturn (DIS_EOD);\n\t\t\t\tcp = dis_buffer;\n\t\t\t\tif (count == ulmaxdigs) {\n\t\t\t\t\t*cp = c;\n\t\t\t\t\tif (memcmp(dis_buffer, ulmax, ulmaxdigs) > 0)\n\t\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\twhile (--count) {\n\t\t\t\t\tif ((c = *++cp) < '0' || c > '9')\n\t\t\t\t\t\treturn (DIS_NONDIGIT);\n\t\t\t\t\tndigs = (10 * ndigs) + c - '0';\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn (disrsll_(stream, negate, value, ndigs, recursv));\n\t\tcase -1:\n\t\t\treturn (DIS_EOD);\n\t\tcase -2:\n\t\t\treturn (DIS_EOF);\n\t\tdefault:\n\t\t\treturn (DIS_NONDIGIT);\n\t}\n\t*negate = FALSE;\noverflow:\n\t*value = ULLONG_MAX;\n\treturn (DIS_OVERFLOW);\n}\n"
  },
  {
    "path": "src/lib/Libdis/disrss.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tdisrss.c\n *\n * @par Synopsis:\n *\tshort disrss(int stream, int *retval)\n *\n *\tGets a Data-is-Strings signed integer from <stream>, converts it\n *\tinto a short, and returns it.  
The signed integer in <stream> consists\n *\tof a counted string of digits, starting with a plus or a minus sign,\n *\twhich represents the number.  If the number doesn't lie between -9 and\n *\t9, inclusive, it is preceded by at least one count.\n *\n *\tThis format for character strings representing signed integers can best\n *\tbe understood through the decoding algorithm:\n *\n *\t1. Initialize the digit count to 1.\n *\n *\t2. Read the next character; if it is a plus or a minus sign, go to step (4).\n *\n *\t3. Decode a new count from the digit decoded in step (2) and the next\n *\t   count - 1 digits; repeat step (2).\n *\n *\t4. Decode the next count digits as the magnitude of the signed integer.\n *\n *\t*<retval> gets DIS_SUCCESS if everything works well.  It gets an error\n *\tcode otherwise.  In case of an error, the <stream> character pointer is\n *\treset, making it possible to retry with some other conversion strategy.\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <stddef.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n#undef disrss\n\n/**\n * @brief\n *\tGets a Data-is-Strings signed integer from <stream>, converts it\n *\tinto a short, and returns it\n *\n * @param[in] stream - pointer to data stream\n * @param[out] retval - return value\n *\n * @return      short\n * @retval      converted value         success\n * @retval      0                       error\n *\n */\nshort\ndisrss(int stream, int *retval)\n{\n\tint locret;\n\tint negate;\n\tshort value;\n\tunsigned uvalue;\n\n\tassert(retval != NULL);\n\n\tvalue = 0;\n\tswitch (locret = disrsi_(stream, &negate, &uvalue, 1, 0)) {\n\t\tcase DIS_SUCCESS:\n\t\t\tif (negate ? -uvalue >= SHRT_MIN : uvalue <= SHRT_MAX) {\n\t\t\t\tvalue = negate ? -uvalue : uvalue;\n\t\t\t\tbreak;\n\t\t\t} else\n\t\t\t\tlocret = DIS_OVERFLOW;\n\t\tcase DIS_OVERFLOW:\n\t\t\tvalue = negate ? SHRT_MIN : SHRT_MAX;\n\t}\n\t*retval = locret;\n\treturn (value);\n}\n"
  },
  {
    "path": "src/lib/Libdis/disrst.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tdisrst.c\n *\n * @par Synopsis:\n *\tchar *disrst(int stream, int *retval)\n *\n *\tGets a Data-is-Strings character string from <stream> and converts it\n *\tinto a null-terminated string, and returns a pointer to the result.  
The\n *\tcharacter string in <stream> consists of an unsigned integer, followed\n *\tby a number of characters determined by the unsigned integer.\n *\n *\t*<retval> gets DIS_SUCCESS if everything works well.  It gets an error\n *\tcode otherwise.  In case of an error, the <stream> character pointer is\n *\treset, making it possible to retry with some other conversion strategy.\n *\tIn case of an error, the return value is NULL.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <stddef.h>\n#include <stdlib.h>\n\n#ifndef NDEBUG\n#include <string.h>\n#endif\n\n#include \"dis.h\"\n#include \"dis_.h\"\n\n/**\n * @brief\n *      Gets a Data-is-Strings character string from <stream>, converts it\n *      into a null-terminated string, and returns a pointer to the result.\n *\n * @param[in] stream - pointer to data stream\n * @param[out] retval - return value\n *\n * @return      string\n * @retval      converted value         success\n * @retval      NULL                    error\n *\n */\n\nchar *\ndisrst(int stream, int *retval)\n{\n\tint locret;\n\tint negate;\n\tunsigned count;\n\tchar *value = NULL;\n\n\tassert(retval != NULL);\n\n\tlocret = disrsi_(stream, &negate, &count, 1, 0);\n\tif (locret == DIS_SUCCESS) {\n\t\tif (negate)\n\t\t\tlocret = DIS_BADSIGN;\n\t\telse {\n\t\t\tvalue = (char *) malloc((size_t) count + 1);\n\t\t\tif (value == NULL)\n\t\t\t\tlocret = DIS_NOMALLOC;\n\t\t\telse {\n\t\t\t\tif (dis_gets(stream, value,\n\t\t\t\t\t     (size_t) count) != (size_t) count)\n\t\t\t\t\tlocret = DIS_PROTO;\n#ifndef NDEBUG\n\t\t\t\telse if (memchr(value, 0, (size_t) count))\n\t\t\t\t\tlocret = DIS_NULLSTR;\n#endif\n\t\t\t\telse\n\t\t\t\t\tvalue[count] = '\\0';\n\t\t\t}\n\t\t}\n\t}\n\tif ((*retval = locret) != DIS_SUCCESS && value != NULL) {\n\t\tfree(value);\n\t\tvalue = NULL;\n\t}\n\treturn (value);\n}\n"
  },
  {
    "path": "src/lib/Libdis/disruc.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tdisruc.c\n *\n * @par Synopsis:\n *\tunsigned char disruc(int stream, int *<retval>)\n *\n *\tGets a Data-is-Strings unsigned integer from <stream>, converts it into\n *\tan unsigned char, and returns it.  
The unsigned integer in <stream>\n *\tconsists of a counted string of digits, starting with a plus sign, which\n *\trepresents the number.  If the number doesn't lie between 0 and 9,\n *\tinclusive, it is preceded by at least one count.\n *\n *\tThis format for character strings representing unsigned integers can\n *\tbest be understood through the decoding algorithm:\n *\n *\t1. Initialize the digit count to 1.\n *\n *\t2. Read the next character; if it is a plus sign, go to step (4); if it\n *\t   is a minus sign, post an error.\n *\n *\t3. Decode a new count from the digit decoded in step (2) and the next\n *\t   count - 1 digits; repeat step (2).\n *\n *\t4. Decode the next count digits as the unsigned integer.\n *\n *\t*<retval> gets DIS_SUCCESS if everything works well.  It gets an error\n *\tcode otherwise.  In case of an error, the <stream> character pointer is\n *\treset, making it possible to retry with some other conversion strategy.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <stddef.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n#undef disruc\n\n/**\n * @brief\n *      Gets a Data-is-Strings unsigned integer from <stream>, converts it\n *      into an unsigned char, and returns it\n *\n * @param[in] stream - pointer to data stream\n * @param[out] retval - return value\n *\n * @return      unsigned char\n * @retval      converted value         success\n * @retval      0                       error\n *\n */\n\nunsigned char\ndisruc(int stream, int *retval)\n{\n\tint locret;\n\tint negate;\n\tunsigned value;\n\n\tassert(retval != NULL);\n\n\tlocret = disrsi_(stream, &negate, &value, 1, 0);\n\tif (locret != DIS_SUCCESS) {\n\t\tvalue = 0;\n\t} else if (negate) {\n\t\tvalue = 0;\n\t\tlocret = DIS_BADSIGN;\n\t} else if (value > UCHAR_MAX) {\n\t\tvalue = UCHAR_MAX;\n\t\tlocret = DIS_OVERFLOW;\n\t}\n\t*retval = locret;\n\treturn ((unsigned char) value);\n}\n"
  },
  {
    "path": "src/lib/Libdis/disrui.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tdisrui.c\n *\n * @par Synopsis:\n *\tunsigned disrui(int stream, int *value)\n *\n *\tGets a Data-is-Strings unsigned integer from <stream>, converts it into\n *\tan unsigned int, and returns it.\n *\n *\tThis format for character strings representing unsigned integers can\n *\tbest be understood through the decoding algorithm:\n *\n *\t1. Initialize the digit count to 1.\n *\n *\t2. Read the next character; if it is a plus sign, go to step (4); if it\n *\t   is a minus sign, post an error.\n *\n *\t3. Decode a new count from the digit decoded in step (2) and the next\n *\t   count - 1 digits; repeat step (2).\n *\n *\t4. Decode the next count digits as the unsigned integer.\n *\n *\t*<retval> gets DIS_SUCCESS if everything works well.  It gets an error\n *\tcode otherwise.  
In case of an error, the <stream> character pointer is\n *\treset, making it possible to retry with some other conversion strategy.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <stddef.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n#undef disrui\n\n/**\n * @brief\n *      Gets a Data-is-Strings unsigned integer from <stream>, converts it\n *      into an unsigned int, and returns it\n *\n * @param[in] stream - pointer to data stream\n * @param[out] retval - return value\n *\n * @return      unsigned\n * @retval      converted value         success\n * @retval      0                       error\n *\n */\n\nunsigned\ndisrui(int stream, int *retval)\n{\n\tint locret;\n\tint negate;\n\tunsigned value;\n\n\tassert(retval != NULL);\n\n\tlocret = disrsi_(stream, &negate, &value, 1, 0);\n\tif (locret != DIS_SUCCESS) {\n\t\tvalue = 0;\n\t} else if (negate) {\n\t\tvalue = 0;\n\t\tlocret = DIS_BADSIGN;\n\t}\n\t*retval = locret;\n\treturn (value);\n}\n"
  },
  {
    "path": "src/lib/Libdis/disrul.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tdisrul.c\n *\n * @par Synopsis:\n *\tunsigned long disrul(int stream, int *retval)\n *\n *\tGets a Data-is-Strings unsigned integer from <stream>, converts it into\n *\tan unsigned long, and returns it.\n *\n *\tThis format for character strings representing unsigned integers can\n *\tbest be understood through the decoding algorithm:\n *\n *\t1. Initialize the digit count to 1.\n *\n *\t2. Read the next character; if it is a plus sign, go to step (4); if it\n *\t   is a minus sign, post an error.\n *\n *\t3. Decode a new count from the digit decoded in step (2) and the next\n *\t   count - 1 digits; repeat step (2).\n *\n *\t4. Decode the next count digits as the unsigned integer.\n *\n *\t*<retval> gets DIS_SUCCESS if everything works well.  It gets an error\n *\tcode otherwise.  
In case of an error, the <stream> character pointer is\n *\treset, making it possible to retry with some other conversion strategy.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <stddef.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n\n/**\n * @brief\n *      Gets a Data-is-Strings unsigned integer from <stream>, converts it\n *      into an unsigned long, and returns it\n *\n * @param[in] stream - pointer to data stream\n * @param[out] retval - return value\n *\n * @return      unsigned long\n * @retval      converted value         success\n * @retval      0                       error\n *\n */\nunsigned long\ndisrul(int stream, int *retval)\n{\n\tint locret;\n\tint negate;\n\tunsigned long value;\n\n\tassert(retval != NULL);\n\n\tlocret = disrsl_(stream, &negate, &value, 1, 0);\n\tif (locret != DIS_SUCCESS) {\n\t\tvalue = 0;\n\t} else if (negate) {\n\t\tvalue = 0;\n\t\tlocret = DIS_BADSIGN;\n\t}\n\t*retval = locret;\n\treturn (value);\n}\n"
  },
  {
    "path": "src/lib/Libdis/disrull.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tdisrull.c\n *\n * @par Synopsis:\n *\tu_Long disrull(int stream, int *retval)\n *\n *\tGets a Data-is-Strings unsigned integer from <stream>, converts it into\n *\tan u_Long, and returns it.\n *\n *\tThis format for character strings representing unsigned integers can\n *\tbest be understood through the decoding algorithm:\n *\n *\t1. Initialize the digit count to 1.\n *\n *\t2. Read the next character; if it is a plus sign, go to step (4); if it\n *\t   is a minus sign, post an error.\n *\n *\t3. Decode a new count from the digit decoded in step (2) and the next\n *\t   count - 1 digits; repeat step (2).\n *\n *\t4. Decode the next count digits as the unsigned integer.\n *\n *\t*<retval> gets DIS_SUCCESS if everything works well.  It gets an error\n *\tcode otherwise.  
In case of an error, the <stream> character pointer is\n *\treset, making it possible to retry with some other conversion strategy.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <stddef.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n\n/**\n * @brief\n *  \tFunction to read a string which comes over network from\n *  \tmom to server and convert that string data to numeric form of type\n *  \tu_Long\n *\n * @param[in] stream - pointer to data stream\n * @param[out] retval - return value\n *\n * @return\tu_Long\n * @retval\tconverted value\t\tsuccess\n * @retval\t0\t\t\terror\n *\n */\n\nu_Long\ndisrull(int stream, int *retval)\n{\n\tint locret;\n\tint negate;\n\tu_Long value;\n\n\tassert(retval != NULL);\n\n\tlocret = disrsll_(stream, &negate, &value, 1, 0);\n\tif (locret != DIS_SUCCESS) {\n\t\tvalue = 0;\n\t} else if (negate) {\n\t\tvalue = 0;\n\t\tlocret = DIS_BADSIGN;\n\t}\n\t*retval = locret;\n\treturn (value);\n}\n"
  },
  {
    "path": "src/lib/Libdis/disrus.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tdisrus.c\n *\n * @par Synopsis:\n *\tunsigned short disrus(int stream, int *retval)\n *\n *\tGets a Data-is-Strings unsigned integer from <stream>, converts it into\n *\tan unsigned short, and returns it.  
The unsigned integer in <stream>\n *\tconsists of a counted string of digits, starting with a plus sign, which\n *\trepresents the number.  If the number doesn't lie between 0 and 9,\n *\tinclusive, it is preceded by at least one count.\n *\n *\tThis format for character strings representing unsigned integers can\n *\tbest be understood through the decoding algorithm:\n *\n *\t1. Initialize the digit count to 1.\n *\n *\t2. Read the next character; if it is a plus sign, go to step (4); if it\n *\t   is a minus sign, post an error.\n *\n *\t3. Decode a new count from the digit decoded in step (2) and the next\n *\t   count - 1 digits; repeat step (2).\n *\n *\t4. Decode the next count digits as the unsigned integer.\n *\n *\t*<retval> gets DIS_SUCCESS if everything works well.  It gets an error\n *\tcode otherwise.  In case of an error, the <stream> character pointer is\n *\treset, making it possible to retry with some other conversion strategy.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <stddef.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n#undef disrus\n\n/**\n * @brief\n *      Gets a Data-is-Strings unsigned integer from <stream>, converts it\n *      into an unsigned short, and returns it\n *\n * @param[in] stream - pointer to data stream\n * @param[out] retval - return value\n *\n * @return      unsigned short\n * @retval      converted value         success\n * @retval      0                       error\n *\n */\n\nunsigned short\ndisrus(int stream, int *retval)\n{\n\tint locret;\n\tint negate;\n\tunsigned value;\n\n\tassert(retval != NULL);\n\n\tlocret = disrsi_(stream, &negate, &value, 1, 0);\n\tif (locret != DIS_SUCCESS) {\n\t\tvalue = 0;\n\t} else if (negate) {\n\t\tvalue = 0;\n\t\tlocret = DIS_BADSIGN;\n\t} else if (value > USHRT_MAX) {\n\t\tvalue = USHRT_MAX;\n\t\tlocret = DIS_OVERFLOW;\n\t}\n\t*retval = locret;\n\treturn ((unsigned short) value);\n}\n"
  },
  {
    "path": "src/lib/Libdis/diswcs.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tdiswcs.c\n *\n * @par Synopsis:\n *\tint diswcs(int stream, char *value, size_t nchars)\n *\n *\tConverts a counted string in *<value> into a Data-is-Strings character\n *\tstring and sends it to <stream>.  
The character string in <stream>\n *\tconsists of the unsigned integer representation of <nchars>, followed by\n *\t<nchars> characters from *<value>.\n *\n *\tReturns DIS_SUCCESS if everything works well.  Returns an error code\n *\totherwise.  In case of an error, no characters are sent to <stream>.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <stddef.h>\n#include <stdio.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n\n/**\n * @brief\n *\tConverts a counted string in *<value> into a Data-is-Strings character\n *      string and sends it to <stream>.\n *\n * @param[in] stream - pointer to data stream\n * @param[in] value - value to be converted\n * @param[in] nchars - number of characters to send\n *\n * @return\tint\n * @retval\tDIS_SUCCESS\tsuccess\n * @retval\terror code\terror\n *\n */\nint\ndiswcs(int stream, const char *value, size_t nchars)\n{\n\tint retval;\n\n\tassert(nchars <= UINT_MAX);\n\n\tretval = diswui_(stream, (unsigned) nchars);\n\tif (retval == DIS_SUCCESS && nchars > 0 &&\n\t    dis_puts(stream, value, nchars) != nchars)\n\t\tretval = DIS_PROTO;\n\treturn retval;\n}\n"
  },
  {
    "path": "src/lib/Libdis/diswf.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tdiswf.c\n *\n * @par Synopsis:\n * \tint diswf(int stream, float value)\n *\n *\tConverts <value> into a Data-is-Strings floating point number and sends\n *\tit to <stream>.  
The converted number consists of two consecutive signed\n *\tintegers.  The first is the coefficient, at most FLT_DIG digits long, with its\n *\timplied decimal point at the low-order end.  The second is the exponent\n *\tas a power of 10.\n *\n *\tReturns DIS_SUCCESS if everything works well.  Returns an error code\n *\totherwise.  In case of an error, no characters are sent to <stream>.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <stddef.h>\n#include <stdio.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n#undef diswf\n\n/**\n * @brief\n *\tConverts <value> into a Data-is-Strings floating point number and sends\n *      it to <stream>.\n *\n * @param[in] stream\tsocket fd\n * @param[in] value \tvalue to be converted\n *\n * @return\tint\n * @retval\tDIS_SUCCESS\tsuccess\n * @retval\terror code\terror\n *\n */\nint\ndiswf(int stream, double value)\n{\n\tint c;\n\tint expon;\n\tunsigned ndigs;\n\tint negate;\n\tint retval;\n\tunsigned pow2;\n\tchar *cp;\n\tchar *ocp;\n\tdouble dval;\n\n\tassert(stream >= 0);\n\n\t/* Make zero a special case.  If we don't it will blow exponent\t\t*/\n\t/* calculation.\t\t\t\t\t\t\t\t*/\n\tif (value == 0.0) {\n\t\treturn (dis_puts(stream, \"+0+0\", 4) != 4 ? DIS_PROTO : DIS_SUCCESS);\n\t}\n\t/* Extract the sign from the coefficient.\t\t\t\t*/\n\tdval = (negate = value < 0.0) ? -value : value;\n\t/* Detect and complain about the infinite form.\t\t\t\t*/\n\tif (dval > FLT_MAX)\n\t\treturn (DIS_HUGEVAL);\n\t/* Compute the integer part of the log to the base 10 of dval.  
As a\t*/\n\t/* byproduct, reduce the range of dval to the half-open interval,       */\n\t/* [1, 10).\t\t\t\t\t\t\t\t*/\n\n\t/* dis_dmx10 would be initialized by prior call to dis_init_tables */\n\texpon = 0;\n\tpow2 = dis_dmx10 + 1;\n\tif (dval < 1.0) {\n\t\tdo {\n\t\t\tif (dval < dis_dn10[--pow2]) {\n\t\t\t\tdval *= dis_dp10[pow2];\n\t\t\t\texpon += 1 << pow2;\n\t\t\t}\n\t\t} while (pow2);\n\t\tdval *= 10.0;\n\t\texpon = -expon - 1;\n\t} else {\n\t\tdo {\n\t\t\tif (dval >= dis_dp10[--pow2]) {\n\t\t\t\tdval *= dis_dn10[pow2];\n\t\t\t\texpon += 1 << pow2;\n\t\t\t}\n\t\t} while (pow2);\n\t}\n\t/* Round the value to the last digit\t\t\t\t\t*/\n\tdval += 5.0 * disp10d_(-FLT_DIG);\n\tif (dval >= 10.0) {\n\t\texpon++;\n\t\tdval *= 0.1;\n\t}\n\t/* Starting in the middle of the buffer, convert coefficient digits,\t*/\n\t/* most significant first.\t\t\t\t\t\t*/\n\tocp = cp = &dis_buffer[DIS_BUFSIZ - FLT_DIG];\n\tndigs = FLT_DIG;\n\tdo {\n\t\tc = dval;\n\t\tdval = (dval - c) * 10.0;\n\t\t*ocp++ = c + '0';\n\t} while (--ndigs);\n\t/* Eliminate trailing zeros.\t\t\t\t\t\t*/\n\twhile (*--ocp == '0')\n\t\t;\n\t/* The decimal point is at the low order end of the coefficient\t\t*/\n\t/* integer, so adjust the exponent for the number of digits in the\t*/\n\t/* coefficient.\t\t\t\t\t\t\t\t*/\n\tndigs = ++ocp - cp;\n\texpon -= ndigs - 1;\n\t/* Put the coefficient sign into the buffer, left of the coefficient.\t*/\n\t*--cp = negate ? '-' : '+';\n\t/* Insert the necessary number of counts on the left.\t\t\t*/\n\twhile (ndigs > 1)\n\t\tcp = discui_(cp, ndigs, &ndigs);\n\t/* The complete coefficient integer is done.  Put it out.\t\t*/\n\tretval = dis_puts(stream, cp, (size_t) (ocp - cp)) < 0 ? DIS_PROTO : DIS_SUCCESS;\n\t/* If that worked, follow with the exponent, commit, and return.\t*/\n\tif (retval == DIS_SUCCESS)\n\t\treturn (diswsi(stream, expon));\n\t/* If coefficient didn't work, negative commit and return the error.\t*/\n\treturn retval;\n}\n"
  },
  {
    "path": "src/lib/Libdis/diswl_.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tdiswl_.c\n *\n * @par Synopsis:\n * \tint diswl_(int stream, dis_long_double_t value, unsigned int ndigs)\n *\n *\tConverts <value> into a Data-is-Strings floating point number and sends\n *\tit to <stream>.  
The converted number consists of two consecutive signed\n *\tintegers.  The first is the coefficient, at most <ndigs> long, with its\n *\timplied decimal point at the low-order end.  The second is the exponent\n *\tas a power of 10.\n *\n *\tThis function is only invoked through the macros, diswf, diswd, and\n *\tdiswl, which are defined in the header file, dis.h.\n *\n *\tReturns DIS_SUCCESS if everything works well.  Returns an error code\n *\totherwise.  In case of an error, no characters are sent to <stream>.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <stddef.h>\n#include <stdio.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n\n/**\n * @brief\n *      Converts <value> into a Data-is-Strings floating point number and sends\n *      it to <stream>.\n *\n * @param[in] stream    socket fd\n * @param[in] ndigs\tlength of number\n * @param[in] value     value to be converted\n *\n * @return      int\n * @retval      DIS_SUCCESS     success\n * @retval      error code      error\n *\n */\nint\ndiswl_(int stream, dis_long_double_t value, unsigned ndigs)\n{\n\tint c;\n\tint expon;\n\tint negate;\n\tint retval;\n\tunsigned pow2;\n\tchar *cp;\n\tchar *ocp;\n\tdis_long_double_t ldval;\n\n\tassert(ndigs > 0 && ndigs <= LDBL_DIG);\n\tassert(stream >= 0);\n\n\t/* Make zero a special case.  If we don't it will blow exponent\t\t*/\n\t/* calculation.\t\t\t\t\t\t\t\t*/\n\tif (value == 0.0L) {\n\t\treturn (dis_puts(stream, \"+0+0\", 4) < 0 ? DIS_PROTO : DIS_SUCCESS);\n\t}\n\t/* Extract the sign from the coefficient.\t\t\t\t*/\n\tldval = (negate = value < 0.0L) ? -value : value;\n\t/* Detect and complain about the infinite form.\t\t\t\t*/\n\tif (ldval > LDBL_MAX)\n\t\treturn (DIS_HUGEVAL);\n\t/* Compute the integer part of the log to the base 10 of ldval.  
As a\t*/\n\t/* byproduct, reduce the range of ldval to the half-open interval,      */\n\t/* [1, 10).\t\t\t\t\t\t\t\t*/\n\n\t/* dis_lmx10 would be initialized by prior call to dis_init_tables */\n\texpon = 0;\n\tpow2 = dis_lmx10 + 1;\n\tif (ldval < 1.0L) {\n\t\tdo {\n\t\t\tif (ldval < dis_ln10[--pow2]) {\n\t\t\t\tldval *= dis_lp10[pow2];\n\t\t\t\texpon += 1 << pow2;\n\t\t\t}\n\t\t} while (pow2);\n\t\tldval *= 10.0;\n\t\texpon = -expon - 1;\n\t} else {\n\t\tdo {\n\t\t\tif (ldval >= dis_lp10[--pow2]) {\n\t\t\t\tldval *= dis_ln10[pow2];\n\t\t\t\texpon += 1 << pow2;\n\t\t\t}\n\t\t} while (pow2);\n\t}\n\t/* Round the value to the last digit\t\t\t\t\t*/\n\tldval += 5.0L * disp10l_(-ndigs);\n\tif (ldval >= 10.0L) {\n\t\texpon++;\n\t\tldval *= 0.1L;\n\t}\n\t/* Starting in the middle of the buffer, convert coefficient digits,\t*/\n\t/* most significant first.\t\t\t\t\t\t*/\n\tocp = cp = &dis_buffer[DIS_BUFSIZ - ndigs];\n\tdo {\n\t\tc = ldval;\n\t\tldval = (ldval - c) * 10.0L;\n\t\t*ocp++ = c + '0';\n\t} while (--ndigs);\n\t/* Eliminate trailing zeros.\t\t\t\t\t\t*/\n\twhile (*--ocp == '0')\n\t\t;\n\t/* The decimal point is at the low order end of the coefficient\t\t*/\n\t/* integer, so adjust the exponent for the number of digits in the\t*/\n\t/* coefficient.\t\t\t\t\t\t\t\t*/\n\tndigs = ++ocp - cp;\n\texpon -= ndigs - 1;\n\t/* Put the coefficient sign into the buffer, left of the coefficient.\t*/\n\t*--cp = negate ? '-' : '+';\n\t/* Insert the necessary number of counts on the left.\t\t\t*/\n\twhile (ndigs > 1)\n\t\tcp = discui_(cp, ndigs, &ndigs);\n\t/* The complete coefficient integer is done.  Put it out.\t\t*/\n\tretval = dis_puts(stream, cp, (size_t) (ocp - cp)) < 0 ? DIS_PROTO : DIS_SUCCESS;\n\t/* If that worked, follow with the exponent, commit, and return.\t*/\n\tif (retval == DIS_SUCCESS)\n\t\treturn (diswsi(stream, expon));\n\t/* If coefficient didn't work, negative commit and return the error.\t*/\n\treturn retval;\n}\n"
  },
  {
    "path": "src/lib/Libdis/diswsi.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tdiswsi.c\n *\n * @par Synopsis:\n * \tint diswsi(int stream, int value)\n *\n *\tConverts <value> into a Data-is-Strings signed integer and sends it to\n *\t<stream>.\n *\n *\tThis format for character strings representing integers can best be\n *\tunderstood through the decoding algorithm:\n *\n *\t1. Initialize the digit count to 1.\n *\n *\t2. Read the next character; if it is a sign, go to step (4).\n *\n *\t3. Decode a new count from the digit decoded in step (2) and the next\n *\t   count - 1 digits; repeat step (2).\n *\n *\t4. Decode the next count digits as the unsigned integer.\n *\n *\tReturns DIS_SUCCESS if everything works well.  Returns an error code\n *\totherwise.  
In case of an error, no characters are sent to <stream>.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <stddef.h>\n#include <stdio.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n#undef diswsi\n\n/**\n * @brief\n *\tConverts <value> into a Data-is-Strings signed integer and sends it to\n *      <stream>.\n *\n * @param[in] stream    socket fd\n * @param[in] value     value to be converted\n *\n * @return      int\n * @retval      DIS_SUCCESS     success\n * @retval      error code      error\n *\n */\nint\ndiswsi(int stream, int value)\n{\n\tint retval;\n\tunsigned ndigs;\n\tunsigned uval;\n\tchar c;\n\tchar *cp;\n\n\tassert(stream >= 0);\n\n\tif (value < 0) {\n\t\tuval = (unsigned) -(value + 1) + 1;\n\t\tc = '-';\n\t} else {\n\t\tuval = value;\n\t\tc = '+';\n\t}\n\tcp = discui_(&dis_buffer[DIS_BUFSIZ], uval, &ndigs);\n\t*--cp = c;\n\twhile (ndigs > 1)\n\t\tcp = discui_(cp, ndigs, &ndigs);\n\tretval = dis_puts(stream, cp,\n\t\t\t  (size_t) (&dis_buffer[DIS_BUFSIZ] - cp)) < 0\n\t\t\t ? DIS_PROTO\n\t\t\t : DIS_SUCCESS;\n\treturn retval;\n}\n"
  },
  {
    "path": "src/lib/Libdis/diswsl.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tdiswsl.c\n *\n * @par Synopsis:\n * \tint diswsl(int stream, long value)\n *\n *\tConverts <value> into a Data-is-Strings signed integer and sends it to\n *\t<stream>.\n *\n *\tThis format for character strings representing integers can best be\n *\tunderstood through the decoding algorithm:\n *\n *\t1. Initialize the digit count to 1.\n *\n *\t2. Read the next character; if it is a sign, go to step (4).\n *\n *\t3. Decode a new count from the digit decoded in step (2) and the next\n *\t   count - 1 digits; repeat step (2).\n *\n *\t4. Decode the next count digits as the unsigned integer.\n *\n *\tReturns DIS_SUCCESS if everything works well.  Returns an error code\n *\totherwise.  
In case of an error, no characters are sent to <stream>.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <stddef.h>\n#include <stdio.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n\n/**\n * @brief\n *      Converts <value> into a Data-is-Strings signed integer and sends it to\n *      <stream>.\n *\n * @param[in] stream    socket fd\n * @param[in] value     value to be converted\n *\n * @return      int\n * @retval      DIS_SUCCESS     success\n * @retval      error code      error\n *\n */\n\nint\ndiswsl(int stream, long value)\n{\n\tint retval;\n\tunsigned ndigs;\n\tunsigned long ulval;\n\tchar c;\n\tchar *cp;\n\n\tassert(stream >= 0);\n\n\tif (value < 0) {\n\t\tulval = (unsigned long) -(value + 1) + 1;\n\t\tc = '-';\n\t} else {\n\t\tulval = value;\n\t\tc = '+';\n\t}\n\tcp = discul_(&dis_buffer[DIS_BUFSIZ], ulval, &ndigs);\n\t*--cp = c;\n\twhile (ndigs > 1)\n\t\tcp = discui_(cp, ndigs, &ndigs);\n\tretval = dis_puts(stream, cp,\n\t\t\t  (size_t) (&dis_buffer[DIS_BUFSIZ] - cp)) < 0\n\t\t\t ? DIS_PROTO\n\t\t\t : DIS_SUCCESS;\n\treturn retval;\n}\n"
  },
  {
    "path": "src/lib/Libdis/diswui.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tdiswui.c\n *\n * @par Synopsis:\n *\tint diswui(int stream, unsigned value)\n *\n *\tConverts <value> into a Data-is-Strings unsigned integer and sends it to\n *\t<stream>.\n *\n *\tThis format for character strings representing unsigned integers can\n *\tbest be understood through the decoding algorithm:\n *\n *\t1. Initialize the digit count to 1.\n *\n *\t2. Read the next character; if it is a plus sign, go to step (4); if it\n *\t   is a minus sign, post an error.\n *\n *\t3. Decode a new count from the digit decoded in step (2) and the next\n *\t   count - 1 digits; repeat step (2).\n *\n *\t4. Decode the next count digits as the unsigned integer.\n *\n *\tReturns DIS_SUCCESS if everything works well.  Returns an error code\n *\totherwise.  
In case of an error, no characters are sent to <stream>.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <stddef.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n#undef diswui\n\n/**\n * @brief\n *      Converts <value> into a Data-is-Strings unsigned integer and sends\n *      it to <stream>.\n *\n * @param[in] stream    socket fd\n * @param[in] value     value to be converted\n *\n * @return      int\n * @retval      DIS_SUCCESS     success\n * @retval      error code      error\n *\n */\nint\ndiswui(int stream, unsigned value)\n{\n\treturn diswui_(stream, value);\n}\n"
  },
  {
    "path": "src/lib/Libdis/diswui_.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <stddef.h>\n#include <stdio.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n/**\n * @file\tdiswui_.c\n */\n/**\n * @brief\n *      Converts <value> into a Data-is-Strings unsigned integer and sends\n *      it to <stream>.\n *\n * @param[in] stream    socket fd\n * @param[in] value     value to be converted\n *\n * @return      int\n * @retval      DIS_SUCCESS     success\n * @retval      error code      error\n *\n */\nint\ndiswui_(int stream, unsigned value)\n{\n\tunsigned ndigs;\n\tchar *cp;\n\n\tassert(stream >= 0);\n\n\tcp = discui_(&dis_buffer[DIS_BUFSIZ], value, &ndigs);\n\t*--cp = '+';\n\twhile (ndigs > 1)\n\t\tcp = discui_(cp, ndigs, &ndigs);\n\tif (dis_puts(stream, cp, (size_t) (&dis_buffer[DIS_BUFSIZ] - cp)) < 0)\n\t\treturn (DIS_PROTO);\n\treturn (DIS_SUCCESS);\n}\n"
  },
  {
    "path": "src/lib/Libdis/diswul.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tdiswul.c\n *\n * @par Synopsis:\n *\tint diswul(int stream, unsigned long value)\n *\n *\tConverts <value> into a Data-is-Strings unsigned integer and sends it to\n *\t<stream>.\n *\n *\tThis format for character strings representing unsigned integers can\n *\tbest be understood through the decoding algorithm:\n *\n *\t1. Initialize the digit count to 1.\n *\n *\t2. Read the next character; if it is a plus sign, go to step (4); if it\n *\t   is a minus sign, post an error.\n *\n *\t3. Decode a new count from the digit decoded in step (2) and the next\n *\t   count - 1 digits; repeat step (2).\n *\n *\t4. Decode the next count digits as the unsigned integer.\n *\n *\tReturns DIS_SUCCESS if everything works well.  Returns an error code\n *\totherwise.  
In case of an error, no characters are sent to <stream>.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <stddef.h>\n#include <stdio.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n\n/**\n * @brief\n *      Converts <value> into a Data-is-Strings unsigned integer and sends\n *      it to <stream>.\n *\n * @param[in] stream    socket fd\n * @param[in] value     value to be converted\n *\n * @return      int\n * @retval      DIS_SUCCESS     success\n * @retval      error code      error\n *\n */\nint\ndiswul(int stream, unsigned long value)\n{\n\tint retval;\n\tunsigned ndigs;\n\tchar *cp;\n\n\tassert(stream >= 0);\n\tcp = discul_(&dis_buffer[DIS_BUFSIZ], value, &ndigs);\n\t*--cp = '+';\n\twhile (ndigs > 1)\n\t\tcp = discui_(cp, ndigs, &ndigs);\n\tretval = dis_puts(stream, cp,\n\t\t\t  (size_t) (&dis_buffer[DIS_BUFSIZ] - cp)) < 0\n\t\t\t ? DIS_PROTO\n\t\t\t : DIS_SUCCESS;\n\treturn retval;\n}\n"
  },
  {
    "path": "src/lib/Libdis/diswull.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tdiswull.c\n *\n * @par Synopsis:\n *\tint diswull(int stream, u_Long value)\n *\n *\tConverts <value> into a Data-is-Strings unsigned integer and sends it to\n *\t<stream>.\n *\n *\tThis format for character strings representing unsigned integers can\n *\tbest be understood through the decoding algorithm:\n *\n *\t1. Initialize the digit count to 1.\n *\n *\t2. Read the next character; if it is a plus sign, go to step (4); if it\n *\t   is a minus sign, post an error.\n *\n *\t3. Decode a new count from the digit decoded in step (2) and the next\n *\t   count - 1 digits; repeat step (2).\n *\n *\t4. Decode the next count digits as the unsigned integer.\n *\n *\tReturns DIS_SUCCESS if everything works well.  Returns an error code\n *\totherwise.  
In case of an error, no characters are sent to <stream>.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <stddef.h>\n#include <stdio.h>\n\n#include \"dis.h\"\n#include \"dis_.h\"\n\n/**\n * @brief\n *  Converts <value>, a numeric value of data type u_Long, into a\n *  Data-is-Strings unsigned integer and sends it over the network to\n *  <stream> (e.g. from MoM to the PBS server).\n *\n * @param[in] stream    socket fd\n * @param[in] value     value to be converted\n *\n * @return      int\n * @retval      DIS_SUCCESS     success\n * @retval      error code      error\n *\n */\n\nint\ndiswull(int stream, u_Long value)\n{\n\tint retval;\n\tunsigned ndigs;\n\tchar *cp;\n\n\tassert(stream >= 0);\n\n\tcp = discull_(&dis_buffer[DIS_BUFSIZ], value, &ndigs);\n\t*--cp = '+';\n\twhile (ndigs > 1)\n\t\tcp = discui_(cp, ndigs, &ndigs);\n\tretval = dis_puts(stream, cp,\n\t\t\t  (size_t) (&dis_buffer[DIS_BUFSIZ] - cp)) < 0\n\t\t\t ? DIS_PROTO\n\t\t\t : DIS_SUCCESS;\n\treturn retval;\n}\n"
  },
  {
    "path": "src/lib/Libdis/ps_dis.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <fcntl.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <assert.h>\n#include \"dis.h\"\n#include \"placementsets.h\"\n\nstatic vnl_t *vn_decode_DIS_V3(int, int *);\nstatic vnl_t *vn_decode_DIS_V4(int, int *);\nstatic int vn_encode_DIS_V4(int, vnl_t *);\nstatic vnl_t *free_and_return(vnl_t *); /* expedient error function */\n/**\n * @file\tps_dis.c\n */\n/**\n * @brief\n *\tvn_decode_DIS - read version 3 or 4 vnode definition information from\n * Mom.\n * @par Functionality:\n *\tThe V4 over-the-wire representation of a placement set list (vnl_t) is\n *\ta superset of V3.  
V4 adds the ability to specify the type of an\n *\tattribute/resource (and reserves a place in the protocol for flags).\n *\tThe V3 over-the-wire representation of a placement set list (vnl_t) is\n *\n *\tversion\t\tunsigned integer\tthe version of the following\n *\t\t\t\t\t\tinformation\n *\n *\tVersion PS_DIS_V3 consists of\n *\n *\tvnl_modtime\tsigned long\t\tthis OTW format could be\n *\t\t\t\t\t\tproblematic:   the Open Group\n *\t\t\t\t\t\tBase Specifications Issue 6\n *\t\t\t\t\t\tsays that time_t ``shall be\n *\t\t\t\t\t\tinteger or real-floating''\n *\n *\tvnl_used\tunsigned integer\tnumber of entries in the vnal_t\n *\t\t\t\t\t\tarray to follow\n *\n *\n *\tThere follows, for each element of the vnal_t array,\n *\n *\tvnal_id\t\tstring\n *\n *\tvnal_used\tunsigned integer\tnumber of entries in the vna_t\n *\t\t\t\t\t\tarray to follow\n *\n *\tvna_name\tstring\t\t\tname of resource\n *\tvna_val\t\tstring\t\t\tvalue of resource\n *\t\tFollowing added in V4\n *\tvna_type\tint\t\t\ttype of attribute/resource\n *\tvna_flag\tint\t\t\tflag of attribute/resource (-h)\n *\n *\n * @param[in]\tfd  - file (socket) descriptor from which to read\n * @param[out]\trcp - pointer to int into which to return the error value,\n *\t\t\teither DIS_SUCCESS or some DIS_* error.\n *\n * @return\tvnl_t *\n * @retval\tpointer to decoded vnode information which has been malloc-ed.\n * @retval\tNULL on error, see rcp value\n *\n * @par Side Effects: None\n *\n * @par MT-safe: yes\n *\n */\nvnl_t *\nvn_decode_DIS(int fd, int *rcp)\n{\n\tunsigned int vers;\n\n\tvers = disrui(fd, rcp);\n\tif (*rcp != DIS_SUCCESS)\n\t\treturn NULL;\n\n\tswitch (vers) {\n\t\tcase PS_DIS_V3:\n\t\t\treturn (vn_decode_DIS_V3(fd, rcp));\n\t\tcase PS_DIS_V4:\n\t\t\treturn (vn_decode_DIS_V4(fd, rcp));\n\n\t\tdefault:\n\t\t\t*rcp = DIS_PROTO;\n\t\t\treturn NULL;\n\t}\n}\n\n/**\n * @brief\n *\tvn_decode_DIS_V4 - decode version 4 vnode information from Mom\n *\n * @par Functionality:\n *\tSee vn_decode_DIS() 
above; this is called from there to decode\n *\tV4 information.\n *\n * @param[in]\tfd  -     socket descriptor from which to read\n * @param[out]\trcp -     pointer to place to return error code if error.\n *\n * @return\tvnl_t *\n * @retval\tpointer to decoded vnode information which has been malloc-ed.\n * @retval\tNULL on error, see rcp value\n *\n * @par Side Effects: None\n *\n * @par MT-safe: yes\n *\n */\nstatic vnl_t *\nvn_decode_DIS_V4(int fd, int *rcp)\n{\n\tunsigned int i, j;\n\tunsigned int size;\n\ttime_t t;\n\tvnl_t *vnlp;\n\n\tif ((vnlp = calloc(1, sizeof(vnl_t))) == NULL) {\n\t\t*rcp = DIS_NOMALLOC;\n\t\treturn NULL;\n\t}\n\n\tt = (time_t) disrsl(fd, rcp);\n\tif (*rcp != DIS_SUCCESS) {\n\t\tfree(vnlp);\n\t\treturn NULL;\n\t} else {\n\t\tvnlp->vnl_modtime = t;\n\t}\n\tsize = disrui(fd, rcp);\n\tif (*rcp != DIS_SUCCESS) {\n\t\tfree(vnlp);\n\t\treturn NULL;\n\t} else {\n\t\tvnlp->vnl_nelem = vnlp->vnl_used = size;\n\t}\n\n\tif ((vnlp->vnl_list = calloc(vnlp->vnl_nelem,\n\t\t\t\t     sizeof(vnal_t))) == NULL) {\n\t\tfree(vnlp);\n\t\t*rcp = DIS_NOMALLOC;\n\t\treturn NULL;\n\t}\n\n\tfor (i = 0; i < vnlp->vnl_used; i++) {\n\t\tvnal_t *curreslist = VNL_NODENUM(vnlp, i);\n\n\t\t/*\n\t\t *\tIn case an error occurs and we need to free\n\t\t *\twhatever's been allocated so far, we use the\n\t\t *\tvnl_cur entry to record the number of vnal_t\n\t\t *\tentries to free.\n\t\t */\n\t\tvnlp->vnl_cur = i;\n\n\t\tcurreslist->vnal_id = disrst(fd, rcp);\n\t\tif (*rcp != DIS_SUCCESS)\n\t\t\treturn (free_and_return(vnlp));\n\n\t\tsize = disrui(fd, rcp);\n\t\tif (*rcp != DIS_SUCCESS)\n\t\t\treturn (free_and_return(vnlp));\n\t\telse\n\t\t\tcurreslist->vnal_nelem = curreslist->vnal_used = size;\n\t\tif ((curreslist->vnal_list = calloc(curreslist->vnal_nelem,\n\t\t\t\t\t\t    sizeof(vna_t))) == NULL) {\n\t\t\t*rcp = DIS_NOMALLOC;\t/* report the allocation failure, not a stale success code */\n\t\t\treturn (free_and_return(vnlp));\n\t\t}\n\n\t\tfor (j = 0; j < size; j++) {\n\t\t\tvna_t *curres = VNAL_NODENUM(curreslist, j);\n\n\t\t\t/*\n\t\t\t *\tIn case an error occurs and 
we need to free\n\t\t\t *\twhatever's been allocated so far, we use the\n\t\t\t *\tvnal_cur entry to record the number of vna_t\n\t\t\t *\tentries to free.\n\t\t\t */\n\t\t\tcurreslist->vnal_cur = j;\n\n\t\t\tcurres->vna_name = disrst(fd, rcp);\n\t\t\tif (*rcp != DIS_SUCCESS)\n\t\t\t\treturn (free_and_return(vnlp));\n\t\t\tcurres->vna_val = disrst(fd, rcp);\n\t\t\tif (*rcp != DIS_SUCCESS)\n\t\t\t\treturn (free_and_return(vnlp));\n\t\t\tcurres->vna_type = disrsi(fd, rcp);\n\t\t\tif (*rcp != DIS_SUCCESS)\n\t\t\t\treturn (free_and_return(vnlp));\n\t\t\tcurres->vna_flag = disrsi(fd, rcp);\n\t\t\tif (*rcp != DIS_SUCCESS)\n\t\t\t\treturn (free_and_return(vnlp));\n\t\t}\n\t}\n\n\t*rcp = DIS_SUCCESS;\n\treturn (vnlp);\n}\n\n/**\n * @brief\n *\tvn_decode_DIS_V3 - decode version 3 vnode information from Mom\n *\n * @par Functionality:\n *\tSee vn_decode_DIS() above; this is called from there to decode\n *\tV3 information.\n *\n * @param[in]\tfd  -     socket descriptor from which to read\n * @param[out]\trcp -     pointer to place to return error code if error.\n *\n * @return\tvnl_t *\n * @retval\tpointer to decoded vnode information which has been malloc-ed.\n * @retval\tNULL on error, see rcp value\n *\n * @par Side Effects: None\n *\n * @par MT-safe: yes\n *\n */\nstatic vnl_t *\nvn_decode_DIS_V3(int fd, int *rcp)\n{\n\tunsigned int i, j;\n\tunsigned int size;\n\ttime_t t;\n\tvnl_t *vnlp;\n\n\tif ((vnlp = calloc(1, sizeof(vnl_t))) == NULL) {\n\t\t*rcp = DIS_NOMALLOC;\n\t\treturn NULL;\n\t}\n\n\tt = (time_t) disrsl(fd, rcp);\n\tif (*rcp != DIS_SUCCESS) {\n\t\tfree(vnlp);\n\t\treturn NULL;\n\t} else\n\t\tvnlp->vnl_modtime = t;\n\tsize = disrui(fd, rcp);\n\tif (*rcp != DIS_SUCCESS) {\n\t\tfree(vnlp);\n\t\treturn NULL;\n\t} else\n\t\tvnlp->vnl_nelem = vnlp->vnl_used = size;\n\n\tif ((vnlp->vnl_list = calloc(vnlp->vnl_nelem,\n\t\t\t\t     sizeof(vnal_t))) == NULL) {\n\t\tfree(vnlp);\n\t\t*rcp = DIS_NOMALLOC;\n\t\treturn NULL;\n\t}\n\n\tfor (i = 0; i < vnlp->vnl_used; i++) 
{\n\t\tvnal_t *curreslist = VNL_NODENUM(vnlp, i);\n\n\t\t/*\n\t\t *\tIn case an error occurs and we need to free\n\t\t *\twhatever's been allocated so far, we use the\n\t\t *\tvnl_cur entry to record the number of vnal_t\n\t\t *\tentries to free.\n\t\t */\n\t\tvnlp->vnl_cur = i;\n\n\t\tcurreslist->vnal_id = disrst(fd, rcp);\n\t\tif (*rcp != DIS_SUCCESS)\n\t\t\treturn (free_and_return(vnlp));\n\n\t\tsize = disrui(fd, rcp);\n\t\tif (*rcp != DIS_SUCCESS)\n\t\t\treturn (free_and_return(vnlp));\n\t\telse\n\t\t\tcurreslist->vnal_nelem = curreslist->vnal_used = size;\n\t\tif ((curreslist->vnal_list = calloc(curreslist->vnal_nelem,\n\t\t\t\t\t\t    sizeof(vna_t))) == NULL) {\n\t\t\t*rcp = DIS_NOMALLOC;\t/* report the allocation failure, not a stale success code */\n\t\t\treturn (free_and_return(vnlp));\n\t\t}\n\n\t\tfor (j = 0; j < size; j++) {\n\t\t\tvna_t *curres = VNAL_NODENUM(curreslist, j);\n\n\t\t\t/*\n\t\t\t *\tIn case an error occurs and we need to free\n\t\t\t *\twhatever's been allocated so far, we use the\n\t\t\t *\tvnal_cur entry to record the number of vna_t\n\t\t\t *\tentries to free.\n\t\t\t */\n\t\t\tcurreslist->vnal_cur = j;\n\n\t\t\tcurres->vna_name = disrst(fd, rcp);\n\t\t\tif (*rcp != DIS_SUCCESS)\n\t\t\t\treturn (free_and_return(vnlp));\n\t\t\tcurres->vna_val = disrst(fd, rcp);\n\t\t\tif (*rcp != DIS_SUCCESS)\n\t\t\t\treturn (free_and_return(vnlp));\n\t\t}\n\t}\n\n\t*rcp = DIS_SUCCESS;\n\treturn (vnlp);\n}\n\n/**\n * @brief\n *\tvn_encode_DIS - encode vnode information, used by Mom.\n *\n * @par Functionality:\n *\tUsed to encode vnode information.  See vn_decode_DIS() above for a\n *\tdescription of the information encoded/decoded.  
Only the latest\n *\tversion of information is currently supported for encode.\n *\n * @param[in]\tfd   - socket descriptor to which to write the encode info.\n * @param[in]\tvnlp - structure to encode and send.\n *\n * @return\tint\n * @retval\tDIS_SUCCESS (0) on success\n * @retval\tDIS_* on error.\n *\n * @par Side Effects: None\n *\n * @par MT-safe: No, the structure pointed to by vnlp needs to be locked\n *\n */\nint\nvn_encode_DIS(int fd, vnl_t *vnlp)\n{\n\tswitch (PS_DIS_CURVERSION) {\n\t\tcase PS_DIS_V4:\n\t\t\treturn (vn_encode_DIS_V4(fd, vnlp));\n\n\t\tdefault:\n\t\t\treturn (DIS_PROTO);\n\t}\n}\n\n/**\n * @brief\n *\tvn_encode_DIS_V4 - encode version 4 vnode information, used by Mom.\n *\n * @par Functionality:\n *\tUsed to encode vnode information.  See vn_encode_DIS() above for a\n *\tdescription of the information.  Supports version 4 only.\n *\n * @param[in]\tfd   - socket descriptor to which to write the encode info.\n * @param[in]\tvnlp - structure to encode and send.\n *\n * @return\tint\n * @retval\tDIS_SUCCESS (0) on success\n * @retval\tDIS_* on error.\n *\n * @par Side Effects: None\n *\n * @par MT-safe: No, the structure pointed to by vnlp needs to be locked\n *\n */\nstatic int\nvn_encode_DIS_V4(int fd, vnl_t *vnlp)\n{\n\tint rc;\n\tunsigned int i, j;\n\n\tif (((rc = diswui(fd, PS_DIS_V4)) != 0) ||\n\t    ((rc = diswsl(fd, (long) vnlp->vnl_modtime)) != 0) ||\n\t    ((rc = diswui(fd, vnlp->vnl_used)) != 0))\n\t\treturn (rc);\n\n\tfor (i = 0; i < vnlp->vnl_used; i++) {\n\t\tvnal_t *curreslist = VNL_NODENUM(vnlp, i);\n\n\t\tif ((rc = diswst(fd, curreslist->vnal_id)) != 0)\n\t\t\treturn (rc);\n\t\tif ((rc = diswui(fd, curreslist->vnal_used)) != 0)\n\t\t\treturn (rc);\n\n\t\tfor (j = 0; j < curreslist->vnal_used; j++) {\n\t\t\tvna_t *curres = VNAL_NODENUM(curreslist, j);\n\n\t\t\tif ((rc = diswst(fd, curres->vna_name)) != 0)\n\t\t\t\treturn (rc);\n\t\t\tif ((rc = diswst(fd, curres->vna_val)) != 0)\n\t\t\t\treturn (rc);\n\t\t\tif ((rc = diswsi(fd, 
curres->vna_type)) != 0)\n\t\t\t\treturn (rc);\n\t\t\tif ((rc = diswsi(fd, curres->vna_flag)) != 0)\n\t\t\t\treturn (rc);\n\t\t}\n\t}\n\n\treturn (DIS_SUCCESS);\n}\n\n/**\n * @brief\n *\tfree_and_return - free a vnl_t data structure.\n *\n * @par Functionality:\n *\tNote that this function is nearly identical to vnl_free() (q.v.),\n *\twith the exception of using the *_cur values to free partially-\n *\tallocated entries.\n *\n * @param[in]\tvnlp - pointer to structure to free\n *\n * @return\tvnl_t *\n * @retval\tNULL\n *\n * @par Side Effects: None\n *\n * @par MT-safe: No, vnlp needs to be locked.\n *\n */\nstatic vnl_t *\nfree_and_return(vnl_t *vnlp)\n{\n\tunsigned int i, j;\n\n\t/* N.B. <=, not < because we may have a partially-allocated ith one */\n\tfor (i = 0; i <= vnlp->vnl_cur; i++) {\n\t\tvnal_t *vnrlp = VNL_NODENUM(vnlp, i);\n\n\t\t/* N.B. <=, not < (as above) for partially-allocated jth one */\n\t\tfor (j = 0; j <= vnrlp->vnal_cur; j++) {\n\t\t\tvna_t *vnrp = VNAL_NODENUM(vnrlp, j);\n\t\t\tfree(vnrp->vna_name);\n\t\t\tfree(vnrp->vna_val);\n\t\t}\n\t\tfree(vnrlp->vnal_list);\n\t\tfree(vnrlp->vnal_id);\n\t}\n\tfree(vnlp->vnl_list);\n\tfree(vnlp);\n\n\treturn NULL;\n}\n"
  },
  {
    "path": "src/lib/Libecl/ecl_verify.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tecl_verify.c\n *\n * @brief\tThe top level verification functionality\n *\n * @par\t\tFunctionality:\n *\t\tTop level verification routines which in turn call attribute\n *\t\tlevel verification functions for datatype and value.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/types.h>\n#include <stdlib.h>\n#include <string.h>\n\n#include \"pbs_ifl.h\"\n#include \"pbs_ecl.h\"\n#include \"pbs_error.h\"\n\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"queue.h\"\n#include \"pbs_nodes.h\"\n#include \"server.h\"\n#include \"libpbs.h\"\n#include \"pbs_client_thread.h\"\n\nstatic enum batch_op seljobs_opstring_enums[] = {EQ, NE, GE, GT, LE, LT};\nstatic int size_seljobs = sizeof(seljobs_opstring_enums) / sizeof(enum batch_op);\n\n/* static function declarations */\nstatic int\n__pbs_verify_attributes(int connect, int batch_request,\n\t\t\tint parent_object, int command, struct attropl *attribute_list);\nstatic int\n__pbs_verify_attributes_dummy(int connect, int batch_request,\n\t\t\t      int parent_object, int command, struct attropl *attribute_list);\nstatic struct ecl_attribute_def *ecl_findattr(int, struct attropl *);\nstatic struct ecl_attribute_def *ecl_find_attr_in_def(\n\tstruct ecl_attribute_def *, char *, int);\nstatic int get_attr_type(struct ecl_attribute_def 
attr_def);\n\n/* default function pointer assignments */\nint (*pfn_pbs_verify_attributes)(int connect, int batch_request,\n\t\t\t\t int parent_object, int cmd, struct attropl *attribute_list) = &__pbs_verify_attributes;\n\n/**\n * @brief\n *\tBypass attribute verification on IFL API calls\n *\n * @par Functionality:\n *\tThis function resets the attribute verification function pointer to a\n *\tdummy function, called from daemons, such that attribute verification is\n *\tbypassed.\n *\n * @see\n *\n * @return void\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nvoid\nset_no_attribute_verification(void)\n{\n\tpfn_pbs_verify_attributes = &__pbs_verify_attributes_dummy;\n}\n\n/**\n * @brief\n *\tThe dummy verify attributes function\n *\n * @par Functionality:\n *\tThis is the function that gets called when IFL API is invoked by an\n *\tapplication which has earlier called \"set_no_attribute_verification\"\n *\n * @see\n *\n * @param[in]\tconnect\t\t-\tConnection Identifier\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tattribute_list\t-\tlist of attributes\n *\n * @return\tint\n * @retval\tZero\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic int\n__pbs_verify_attributes_dummy(int connect, int batch_request,\n\t\t\t      int parent_object, int cmd, struct attropl *attribute_list)\n{\n\treturn 0;\n}\n\n/**\n * @brief\n *\tThe real verify function called from most IFL API calls\n *\n * @par Functionality:\n *\t1. Gets the attr_errlist from the TLS data, deallocates it, if already\n *         allocated and then allocates it again.\\n\n *\t2. Clears the connect context values from the TLS\\n\n *\t3. 
Calls verify_attributes to verify the list of attributes passed\\n\n *\n * @see verify_attributes\n *\n * @param[in]\tconnect\t\t-\tConnection Identifier\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tattribute_list\t-\tlist of attributes\n *\n * @return\tint\n * @retval \t0 - No failed attributes\n * @retval \t+n - Number of failed attributes (pbs_errno set to last error)\n * @retval \t-1 - System error verifying attributes (pbs_errno is set)\n *\n * @par\t\tSide effects:\n *\t\tModifies the TLS data for this thread\\n\n *\t\tpbs_errno is set on encountering an error\n *\n * @par MT-safe: Yes\n */\nstatic int\n__pbs_verify_attributes(int connect, int batch_request,\n\t\t\tint parent_object, int cmd, struct attropl *attribute_list)\n{\n\tstruct pbs_client_thread_context *ptr;\n\tstruct pbs_client_thread_connect_context *con;\n\tint rc;\n\n\t/* get error list from TLS */\n\tptr = (struct pbs_client_thread_context *)\n\t\tpbs_client_thread_get_context_data();\n\tif (ptr == NULL) {\n\t\t/* very unlikely case */\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn -1;\n\t}\n\n\t/* since api is going to reuse err_list, free it first */\n\tfree_errlist(ptr->th_errlist);\n\tptr->th_errlist = NULL;\n\n\tcon = pbs_client_thread_find_connect_context(connect);\n\tif (con == NULL) {\n\t\tif ((con = pbs_client_thread_add_connect_context(connect)) == NULL) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\treturn -1;\n\t\t}\n\t}\n\n\t/* clear the TLS error codes */\n\tcon->th_ch_errno = 0;\n\tif (con->th_ch_errtxt)\n\t\tfree(con->th_ch_errtxt);\n\tcon->th_ch_errtxt = NULL;\n\n\tif (attribute_list == NULL)\n\t\treturn 0;\n\n\trc = verify_attributes(batch_request, parent_object, cmd,\n\t\t\t       attribute_list, &ptr->th_errlist);\n\tif (rc > 0) {\n\t\t/* also set the pbs error code */\n\t\tpbs_errno = ptr->th_errlist->ecl_attrerr[0].ecl_errcode;\n\n\t\t/* copy first error code into TLS 
connection context */\n\t\tcon->th_ch_errno = ptr->th_errlist->ecl_attrerr[0].ecl_errcode;\n\t\tif (ptr->th_errlist->ecl_attrerr[0].ecl_errmsg) {\n\t\t\tcon->th_ch_errtxt =\n\t\t\t\tstrdup(ptr->th_errlist->ecl_attrerr[0].ecl_errmsg);\n\t\t\tif (con->th_ch_errtxt == NULL) {\n\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t}\n\t}\n\treturn rc;\n}\n\n/**\n * @brief\n *\tVerify one attribute\n *\n * @par Functionality:\n *      1. Finds the attribute in the correct object attribute list\\n\n *      2. Invokes the at_verify_datatype function to check datatype is good\\n\n *      3. Invokes the at_verify_value function to check if the value is good\\n\n *\t4. This function is also called from the hooks verification functions,\n *\t   \"is_job_input_valid\" and \"is_resv_input_valid\" from\n *\t    lib\\Libpython\\pbs_python_svr_internal.c\n *\n * @see verify_attributes\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\tlist of attributes\n * @param[out]\tverified\t-\tWhether verification was done\n * @param[out]\terr_msg\t\t-\tError message for attribute verification\n *\t\t\t\t\tfailure\n * @return\tint\n * @retval\t0   - Passed verification\n * @retval\t> 0 - attribute failed verification (pbs error number returned)\n * @retval\t-1  - Out of memory\n *\n * @par\tverified:\n *\t1 - if the verification could be done\\n\n *\t0 - No verification handlers present, verification not done\\n\n *\tThis output parameter is primarily used by the hooks verification\n *\tfunctions to figure out whether any attribute verification was really\n *\tdone. If not done (value was 0) then the hooks code calls the server\n *\tdecode functions in an attempt to verify the attribute values.\n *\n * @par\terr_msg:\n *\tIf the attribute fails verification, the err_msg parameter is set\n *\tto the reason of failure. 
\\n\n *\tThe err_msg parameter is passed to all the attribute verification\n *\troutines, such that if a need arises, it would be possible for the\n *\tindividual routines to set a custom error message. \\n\n * \tIf the called attribute verification routines do not set any custom\n *\tverification error message, then this routine sets the error message\n *\tby calling \"pbse_to_txt\" to convert the return error code to error msg.\n *\n * @par\tSide effects:\n *\tpbs_errno set on error\n *\n * @par MT-safe: Yes\n */\nint\nverify_an_attribute(int batch_request, int parent_object, int cmd,\n\t\t    struct attropl *pattr,\n\t\t    int *verified,\n\t\t    char **err_msg)\n{\n\tecl_attribute_def *p_eclattr = NULL;\n\tint err_code = PBSE_NONE;\n\tchar *p;\n\n\t*verified = 1; /* set to verified */\n\n\t/* skip check when dealing with a \"resource\" parent object */\n\tif (parent_object == MGR_OBJ_RSC)\n\t\treturn PBSE_NONE;\n\n\tif ((p_eclattr = ecl_findattr(parent_object, pattr)) == NULL) {\n\t\terr_code = PBSE_NOATTR;\n\t\tgoto err;\n\t}\n\n\tif (pattr->value == NULL || pattr->value[0] == '\\0') {\n\n\t\t/* allow empty/null values for unset/delete of pbs_manager */\n\t\tif ((batch_request == PBS_BATCH_Manager) &&\n\t\t    (cmd == MGR_CMD_UNSET || cmd == MGR_CMD_DELETE))\n\t\t\treturn PBSE_NONE;\n\n\t\t/* for the following stat calls, the value can be null/empty */\n\t\tif (batch_request == PBS_BATCH_StatusJob ||\n\t\t    batch_request == PBS_BATCH_StatusQue ||\n\t\t    batch_request == PBS_BATCH_StatusSvr ||\n\t\t    batch_request == PBS_BATCH_StatusNode ||\n\t\t    batch_request == PBS_BATCH_StatusRsc ||\n\t\t    batch_request == PBS_BATCH_StatusHook ||\n\t\t    batch_request == PBS_BATCH_StatusResv ||\n\t\t    batch_request == PBS_BATCH_StatusSched)\n\t\t\treturn PBSE_NONE;\n\t}\n\n\t/* for others, value shouldn't be null */\n\tif (pattr->value == NULL) {\n\t\terr_code = PBSE_BADATVAL;\n\t\tgoto err;\n\t}\n\n\t/*\n\t * When using ifl library directly, there is a 
possibility where resource is passed as NULL\n\t * Check this variable for NULL and send error if it is NULL.\n\t */\n\tif (strcasecmp(pattr->name, ATTR_l) == 0) {\n\t\tif (pattr->resource == NULL) {\n\t\t\terr_code = PBSE_UNKRESC;\n\t\t\tgoto err;\n\t\t}\n\t}\n\n\tif (p_eclattr->at_verify_datatype) {\n\t\tif ((err_code = p_eclattr->at_verify_datatype(pattr, err_msg)))\n\t\t\tgoto err;\n\t}\n\n\tif (p_eclattr->at_verify_value) {\n\t\tif ((err_code = p_eclattr->at_verify_value(batch_request,\n\t\t\t\t\t\t\t   parent_object, cmd, pattr, err_msg)))\n\t\t\tgoto err;\n\t}\n\n\tif (p_eclattr->at_verify_value == NULL) /* no verify func */\n\t\t*verified = 0;\n\n\treturn PBSE_NONE;\n\nerr:\n\tif ((err_code != 0) && (*err_msg == NULL)) {\n\t\t/* find err_msg and update it */\n\t\tp = pbse_to_txt(err_code);\n\t\tif (p) {\n\t\t\t*err_msg = strdup(p);\n\t\t\tif (*err_msg == NULL)\n\t\t\t\treturn -1;\n\t\t}\n\t}\n\treturn err_code;\n}\n\n/**\n * @brief\n *\tDuplicate an attribute structure\n *\n * @par Functionality:\n *\tHelper routine to safely duplicate an attribute structure;\n *\tfrees any partial allocation if allocation fails anywhere.\n * @see\n *\n * @param[in]\tpattr\t-\tlist of attributes\n *\n * @return\tPointer to the duplicated attribute structure\n * @retval\taddress of the duplicated attribute (success)\n * @retval\tNULL (failure)\n *\n * @par Side Effects: None\n *\n * @par MT-safe: Yes\n */\nstatic struct attropl *\nduplicate_attr(struct attropl *pattr)\n{\n\tstruct attropl *new_attr = (struct attropl *)\n\t\tcalloc(1, sizeof(struct attropl));\n\tif (new_attr == NULL)\n\t\treturn NULL;\n\tif (pattr->name)\n\t\tif ((new_attr->name = strdup(pattr->name)) == NULL)\n\t\t\tgoto err;\n\tif (pattr->resource)\n\t\tif ((new_attr->resource = strdup(pattr->resource)) == NULL)\n\t\t\tgoto err;\n\tif (pattr->value)\n\t\tif ((new_attr->value = strdup(pattr->value)) == NULL)\n\t\t\tgoto err;\n\treturn 
new_attr;\n\nerr:\n\tfree(new_attr->name);\n\tfree(new_attr->resource);\n\tfree(new_attr->value);\n\tfree(new_attr);\n\treturn NULL;\n}\n\n/**\n * @brief\n *\tLoops through the attribute list and verifies each attribute\n *\n * @par\tFunctionality:\n *\t1. Calls verify_an_attribute to verify each attribute in a loop\\n\n *\t2. Adjusts the attribute_list by expanding it appropriately\n *\n * @see\n *\t__pbs_verify_attributes\\n\n *\tverify_an_attribute\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tattribute_list\t-\tlist of attributes\n * @param[out]\targ_err_list\t-\tlist holding attribute errors\n *\n * @return\tint\n * @retval\t0 - No failed attributes\n * @retval\t+n - Number of failed attributes (pbs_errno set to last error)\n * @retval\t-1 - System error verifying attributes (pbs_errno is set)\n *\n * @par\tSide effects:\n *\t pbs_errno set on error\n *\n * @par MT-safe: Yes\n */\nint\nverify_attributes(int batch_request, int parent_object, int cmd,\n\t\t  struct attropl *attribute_list,\n\t\t  struct ecl_attribute_errors **arg_err_list)\n{\n\n\tstruct attropl *pattr = NULL;\n\tint failure_count = 0;\n\tint cur_size = 0;\n\tint err_code = 0;\n\tstruct ecl_attribute_errors *err_list = NULL;\n\tstruct ecl_attrerr *temp = NULL;\n\tchar *msg = NULL;\n\tint i;\n\tint verified;\n\tstruct attropl *new_attr;\n\n\terr_list = (struct ecl_attribute_errors *)\n\t\tmalloc(sizeof(struct ecl_attribute_errors));\n\tif (err_list == NULL) {\n\t\terr_code = PBSE_SYSTEM;\n\t\treturn -1;\n\t}\n\terr_list->ecl_numerrors = 0;\n\terr_list->ecl_attrerr = NULL;\n\n\tif ((parent_object == MGR_OBJ_SITE_HOOK) || (parent_object == MGR_OBJ_PBS_HOOK)) {\n\t\t/* exempt from attribute checks */\n\t\t*arg_err_list = err_list;\n\t\treturn 0;\n\t}\n\n\tfor (pattr = attribute_list; pattr; pattr = pattr->next) {\n\n\t\terr_code = verify_an_attribute(batch_request, 
parent_object,\n\t\t\t\t\t       cmd, pattr, &verified, &msg);\n\n\t\t/* now check the op value, for the selectjobs api */\n\t\tif ((err_code == 0) &&\n\t\t    (batch_request == PBS_BATCH_SelectJobs)) {\n\t\t\tfor (i = 0; i < size_seljobs; i++)\n\t\t\t\tif (pattr->op == seljobs_opstring_enums[i])\n\t\t\t\t\tbreak;\n\t\t\tif (i == size_seljobs)\t/* op not in the allowed set */\n\t\t\t\terr_code = PBSE_BADATVAL;\n\t\t}\n\n\t\tif (err_code != 0) {\n\t\t\tif (cur_size - failure_count < 1) {\n\t\t\t\tcur_size += SLOT_INCR_SIZE;\n\t\t\t\ttemp = (struct ecl_attrerr *)\n\t\t\t\t\trealloc(err_list->ecl_attrerr,\n\t\t\t\t\t\tcur_size * sizeof(struct ecl_attrerr));\n\t\t\t\tif (temp == NULL) {\n\t\t\t\t\tfree_errlist(err_list);\n\t\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t\terr_list->ecl_attrerr = temp;\n\t\t\t}\n\t\t\tfailure_count++;\n\n\t\t\t/* keep a copy of the whole attribute; in case the\n\t\t\t * attribute was allocated on the stack by the caller,\n\t\t\t * it might be lost, and a pointer alone would be of\n\t\t\t * no use\n\t\t\t */\n\t\t\tnew_attr = duplicate_attr(pattr);\n\t\t\tif (new_attr == NULL) {\n\t\t\t\tfree_errlist(err_list);\n\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\terr_list->ecl_attrerr[failure_count - 1].ecl_attribute = (struct attropl *) new_attr;\n\t\t\terr_list->ecl_attrerr[failure_count - 1].ecl_errcode = err_code;\n\t\t\terr_list->ecl_attrerr[failure_count - 1].ecl_errmsg = NULL;\n\t\t\tif (msg != NULL) {\n\t\t\t\tif ((err_list->ecl_attrerr[failure_count - 1].ecl_errmsg = strdup(msg)) == NULL) {\n\t\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\t\tfree_errlist(err_list);\n\t\t\t\t\tfree(msg);\n\t\t\t\t\tmsg = NULL;\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t\tfree(msg);\n\t\t\t\tmsg = NULL;\n\t\t\t}\n\t\t}\n\t}\n\n\tif ((failure_count > 0) && (failure_count != cur_size)) {\n\t\ttemp = (struct ecl_attrerr *)\n\t\t\trealloc(err_list->ecl_attrerr, failure_count *\n\t\t\t\t\t\t\t       sizeof(struct ecl_attrerr));\n\t\tif (temp == NULL) 
{\n\t\t\tfree_errlist(err_list);\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\treturn -1;\n\t\t}\n\t\terr_list->ecl_attrerr = temp;\n\t}\n\n\terr_list->ecl_numerrors = failure_count;\n\t*arg_err_list = err_list;\n\treturn failure_count;\n}\n\n/**\n * @brief\n *\tName: ecl_findattr\n *\n * @par\t\tFunctionality:\n *\t\t1. Find the attribute in the list associated with the\n *\t\tparent_object by calling ecl_find_attr_in_def().\n *\n * @see\n *\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tpattr\t\t-\tlist of attributes\n *\n * @return\tpointer to the ecl_attribute_def structure\n * @retval\tAddress of the ecl_attribute_def structure\n *\t\tassociated with the given attribute, NULL if not found\n *\n * @par\t\tSide effects:\n *\t\tNone\n *\n * @par MT-safe: Yes\n */\nstatic struct ecl_attribute_def *\necl_findattr(int parent_object,\n\t     struct attropl *pattr)\n{\n\tswitch (parent_object) {\n\t\tcase MGR_OBJ_JOB:\n\t\t\treturn (ecl_find_attr_in_def(ecl_job_attr_def, pattr->name,\n\t\t\t\t\t\t     ecl_job_attr_size));\n\t\tcase MGR_OBJ_SERVER:\n\t\t\treturn (ecl_find_attr_in_def(ecl_svr_attr_def, pattr->name,\n\t\t\t\t\t\t     ecl_svr_attr_size));\n\t\tcase MGR_OBJ_SCHED:\n\t\t\treturn (ecl_find_attr_in_def(ecl_sched_attr_def, pattr->name,\n\t\t\t\t\t\t     ecl_sched_attr_size));\n\t\tcase MGR_OBJ_QUEUE:\n\t\t\treturn (ecl_find_attr_in_def(ecl_que_attr_def, pattr->name,\n\t\t\t\t\t\t     ecl_que_attr_size));\n\t\tcase MGR_OBJ_NODE:\n\t\tcase MGR_OBJ_HOST:\n\t\t\treturn (ecl_find_attr_in_def(ecl_node_attr_def, pattr->name,\n\t\t\t\t\t\t     ecl_node_attr_size));\n\t\tcase MGR_OBJ_RESV:\n\t\t\treturn (ecl_find_attr_in_def(ecl_resv_attr_def, pattr->name,\n\t\t\t\t\t\t     ecl_resv_attr_size));\n\t}\n\treturn NULL;\n}\n\n/**\n * @brief\n * \tfind_attr - find attribute definition by name\n *\n * @see\n *\n * @par\n *\tSearches array of attribute definition structures to find one whose name\n *\tmatches the requested name.\n *\n * 
@param[in]\tattr_def\t-\tptr to attribute definition\n * @param[in]\tname\t\t-\tattribute name to find\n * @param[in]\tlimit\t\t-\tlimit on size of definition array\n *\n * @return\tecl_attribute_def  - ptr to attribute definition\n * @retval\tpointer to the matching attribute definition (success)\n * @retval\tNULL - no matching name found (failure)\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n */\nstatic struct ecl_attribute_def *\necl_find_attr_in_def(\n\tstruct ecl_attribute_def *attr_def,\n\tchar *name, int limit)\n{\n\tint index;\n\n\tif (attr_def) {\n\t\tfor (index = 0; index < limit; index++) {\n\t\t\tchar *pc = NULL;\n\n\t\t\tif (strncasecmp(name, attr_def[index].at_name,\n\t\t\t\t\tstrlen(attr_def[index].at_name)) == 0) {\n\t\t\t\tpc = name + strlen(attr_def[index].at_name);\n\t\t\t\tif ((*pc == '\\0') || (*pc == '.') || (*pc == ','))\n\t\t\t\t\treturn &(attr_def[index]);\n\t\t\t}\n\t\t}\n\t}\n\treturn NULL;\n}\n\n/**\n * @brief \tReturn the type of attribute (public, invisible or read-only)\n *\n * @param[in]\tattr_def\t-\tthe attribute to check\n *\n * @return\tint\n * @retval\tTYPE_ATTR_PUBLIC if the attribute is public\n * @retval\tTYPE_ATTR_INVISIBLE if the attribute is a SvWR or SvRD (invisible)\n * @retval\tTYPE_ATTR_READONLY otherwise\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n */\nstatic int\nget_attr_type(struct ecl_attribute_def attr_def)\n{\n\t/*\n\t * Consider an attr def public if it has any of the write flags set\n\t */\n\tif (attr_def.at_flags & (ATR_DFLAG_USWR | ATR_DFLAG_OPWR | ATR_DFLAG_MGWR))\n\t\treturn TYPE_ATTR_PUBLIC;\n\telse if (attr_def.at_flags & (ATR_DFLAG_MGRD | ATR_DFLAG_OPRD | ATR_DFLAG_OTHRD | ATR_DFLAG_USRD))\n\t\treturn TYPE_ATTR_READONLY;\n\telse\n\t\treturn TYPE_ATTR_INVISIBLE;\n}\n\n/**\n * @brief\n *\tfind_resc_def - find the resource_def structure for a resource\n *\twith a given name.\n *\n * @see\n *\n * @param[in]\trscdf\t\t-\tptr to attribute definition structure\n * 
@param[in]\tname\t\t-\tname of resource\n * @param[in] \tlimit\t\t-\tnumber of members in resource_def array\n *\n * @return \tecl_attribute_def - ptr to attribute definition\n * @retval\tpointer to the found resource structure (success)\n * @retval\tNULL (failure)\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n */\nstruct ecl_attribute_def *\necl_find_resc_def(struct ecl_attribute_def *rscdf, char *name, int limit)\n{\n\twhile (limit--) {\n\t\tif (strcasecmp(rscdf->at_name, name) == 0)\n\t\t\treturn (rscdf);\n\t\trscdf++;\n\t}\n\treturn NULL;\n}\n\n/**\n * @brief\n * \tReturns TRUE if the name passed in is an attribute.\n *\n * @note\n * \tThis must not be called with object of type MGR_OBJ_SITE_HOOK or MGR_OBJ_PBS_HOOK.\n *\n * @param[in]\tobject - type of object\n * @param[in]\tname  - name of the attribute\n * @param[in]\tattr_type  - type of the attribute\n *\n * @return int\n * @retval\tTRUE - the input is an attribute of the given 'object' type\n *        \t    and 'attr_type'.\n * @retval\tFALSE - otherwise.\n *\n */\nint\nis_attr(int object, char *name, int attr_type)\n{\n\tstruct ecl_attribute_def *attr_def = NULL;\n\n\tif ((object == MGR_OBJ_SITE_HOOK) || (object == MGR_OBJ_PBS_HOOK)) {\n\t\treturn FALSE;\n\t}\n\n\telse if (object == MGR_OBJ_RSC) {\n\t\treturn TRUE;\n\t}\n\n\tif ((attr_def = ecl_find_attr_in_def(ecl_svr_attr_def, name, ecl_svr_attr_size)) != NULL) {\n\t\t/* Make sure that the attribute types match */\n\t\tif (get_attr_type(*attr_def) & attr_type)\n\t\t\treturn TRUE;\n\t\telse\n\t\t\treturn FALSE;\n\t} else if ((attr_def = ecl_find_attr_in_def(ecl_node_attr_def, name, ecl_node_attr_size)) != NULL) {\n\t\t/* Make sure that the attribute types match */\n\t\tif (get_attr_type(*attr_def) & attr_type)\n\t\t\treturn TRUE;\n\t\telse\n\t\t\treturn FALSE;\n\t} else if ((attr_def = ecl_find_attr_in_def(ecl_que_attr_def, name, ecl_que_attr_size)) != NULL) {\n\t\t/* Make sure that the attribute types match */\n\t\tif 
(get_attr_type(*attr_def) & attr_type)\n\t\t\treturn TRUE;\n\t\telse\n\t\t\treturn FALSE;\n\t} else if ((attr_def = ecl_find_attr_in_def(ecl_sched_attr_def, name, ecl_sched_attr_size)) != NULL) {\n\t\t/* Make sure that the attribute types match */\n\t\tif (get_attr_type(*attr_def) & attr_type)\n\t\t\treturn TRUE;\n\t\telse\n\t\t\treturn FALSE;\n\t}\n\n\treturn FALSE;\n}\n"
  },
  {
    "path": "src/lib/Libecl/ecl_verify_datatypes.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tecl_verify_datatypes.c\n *\n * @brief\tAttribute datatype verification functions\n *\n * @par Functionality:\n *\tThis module contains attribute datatype verification functions\\n\n *\tEach function in this module takes a common format as follows:\\n\n *\tint verify_datatype_xxxx(struct attropl * pattr, char **err_msg)\\n\n *\n * @param[in] pattr - struct attropl - Address of attribute to verify\n * @param[out] err_msg - char ** - Sets the error message if any\n *\n * @return int\n * @retval 0 - Attribute passed verification\\n\n * @retval >0 - Attribute failed verification - pbs error code returned\\n\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include <stdlib.h>\n#include <string.h>\n\n#include \"pbs_ifl.h\"\n#include \"pbs_ecl.h\"\n#include \"pbs_error.h\"\n\n/**\n * @brief\n *\tverifies boolean attribute\n *\n * @param[in] pattr - struct attropl - Address of attribute to verify\n * @param[out] err_msg - char ** - Sets the error message if any\n *\n * @return int\n * @retval 0 - Attribute passed verification\\n\n * @retval >0 - Attribute failed verification - pbs error code returned\\n\n */\n\nint\nverify_datatype_bool(struct attropl *pattr, char **err_msg)\n{\n\tattribute atr;\n\tatr.at_flags = 0;\n\treturn (decode_b(&atr, pattr->name, pattr->resource, pattr->value));\n}\n\n/**\n * 
@brief\n *      verifies  attribute of short datatype\n *\n * @param[in] pattr - struct attropl - Address of attribute to verify\n * @param[out] err_msg - char ** - Sets the error message if any\n *\n * @return int\n * @retval 0 - Attribute passed verification\\n\n * @retval >0 - Attribute failed verification - pbs error code returned\\n\n */\n\nint\nverify_datatype_short(struct attropl *pattr, char **err_msg)\n{\n\tshort s;\n\tint ret;\n\tattribute atr;\n\tatr.at_flags = 0;\n\tif ((ret = decode_l(&atr, pattr->name, pattr->resource, pattr->value)))\n\t\treturn ret;\n\ts = (short) atr.at_val.at_long;\n\tif (atr.at_val.at_long != (long) s)\n\t\treturn (PBSE_BADATVAL);\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *      verifies  attribute of long datatype\n *\n * @param[in] pattr - struct attropl - Address of attribute to verify\n * @param[out] err_msg - char ** - Sets the error message if any\n *\n * @return int\n * @retval 0 - Attribute passed verification\\n\n * @retval >0 - Attribute failed verification - pbs error code returned\\n\n */\n\nint\nverify_datatype_long(struct attropl *pattr, char **err_msg)\n{\n\tattribute atr;\n\tatr.at_flags = 0;\n\treturn (decode_l(&atr, pattr->name, pattr->resource, pattr->value));\n}\n\n/**\n * @brief\n *      verifies  attribute of long long datatype\n *\n * @param[in] pattr - struct attropl - Address of attribute to verify\n * @param[out] err_msg - char ** - Sets the error message if any\n *\n * @return      int\n * @retval  0 - Attribute passed verification\n * @retval >0 - Attribute failed verification - pbs error code returned\n */\n\nint\nverify_datatype_long_long(struct attropl *pattr, char **err_msg)\n{\n\tattribute atr;\n\tatr.at_flags = 0;\n\treturn (decode_ll(&atr, pattr->name, pattr->resource, pattr->value));\n}\n\n/**\n * @brief\n *      verifies  attribute of float datatype\n *\n * @param[in] pattr - struct attropl - Address of attribute to verify\n * @param[out] err_msg - char ** - Sets the error message if any\n 
*\n * @return int\n * @retval 0 - Attribute passed verification\\n\n * @retval >0 - Attribute failed verification - pbs error code returned\\n\n */\n\nint\nverify_datatype_float(struct attropl *pattr, char **err_msg)\n{\n\tattribute atr;\n\tatr.at_flags = 0;\n\treturn (decode_f(&atr, pattr->name, pattr->resource, pattr->value));\n}\n\n/**\n * @brief\n *      verifies  attribute of size type\n *\n * @param[in] pattr - struct attropl - Address of attribute to verify\n * @param[out] err_msg - char ** - Sets the error message if any\n *\n * @return int\n * @retval 0 - Attribute passed verification\\n\n * @retval >0 - Attribute failed verification - pbs error code returned\\n\n */\n\nint\nverify_datatype_size(struct attropl *pattr, char **err_msg)\n{\n\tattribute atr;\n\tatr.at_flags = 0;\n\treturn (decode_size(&atr, pattr->name, pattr->resource, pattr->value));\n}\n\n/**\n * @brief\n *      verifies  attribute of time type\n *\n * @param[in] pattr - struct attropl - Address of attribute to verify\n * @param[out] err_msg - char ** - Sets the error message if any\n *\n * @return int\n * @retval 0 - Attribute passed verification\\n\n * @retval >0 - Attribute failed verification - pbs error code returned\\n\n */\n\nint\nverify_datatype_time(struct attropl *pattr, char **err_msg)\n{\n\tattribute atr;\n\tatr.at_flags = 0;\n\treturn (decode_time(&atr, pattr->name, pattr->resource, pattr->value));\n}\n\n/**\n * @brief\n *      verifies  attribute of node type\n *\n * @param[in] pattr - struct attropl - Address of attribute to verify\n * @param[out] err_msg - char ** - Sets the error message if any\n *\n * @return int\n * @retval 0 - Attribute passed verification\\n\n * @retval >0 - Attribute failed verification - pbs error code returned\\n\n */\n\nint\nverify_datatype_nodes(struct attropl *pattr, char **err_msg)\n{\n\tattribute atr;\n\tatr.at_flags = 0;\n\treturn (decode_nodes(&atr, pattr->name, pattr->resource, pattr->value));\n}\n\n/**\n * @brief\n *      verifies  attribute 
of select type\n *\n * @param[in] pattr - struct attropl - Address of attribute to verify\n * @param[out] err_msg - char ** - Sets the error message if any\n *\n * @return int\n * @retval 0 - Attribute passed verification\\n\n * @retval >0 - Attribute failed verification - pbs error code returned\\n\n */\n\nint\nverify_datatype_select(struct attropl *pattr, char **err_msg)\n{\n\tattribute atr;\n\tint ret = PBSE_BADATVAL;\n\tmemset(&atr, 0, sizeof(struct attribute));\n\tret = decode_select(&atr, pattr->name,\n\t\t\t    pattr->resource, pattr->value);\n\t(void) free(atr.at_val.at_str);\n\treturn ret;\n}\n"
  },
  {
    "path": "src/lib/Libecl/ecl_verify_object_name.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tecl_verify_object_name.c\n *\n * @brief\tContains a function to validate object names\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/types.h>\n#include <stdlib.h>\n#include <string.h>\n#include <ctype.h>\n\n#include \"pbs_ifl.h\"\n#include \"pbs_ecl.h\"\n#include \"pbs_error.h\"\n#include \"pbs_nodes.h\"\n#include \"libpbs.h\"\n\n/**\n * @brief\n *\tpbs_verify_object_name - Validate an object name\n *\n * @par Functionality:\n *\tVerify that the name of an object conforms to the type provided.\n *\n * @see\n *\tFormats chapter of the PBS Reference Guide for further information.\n *\n * @param[in]\ttype - Object type\n * @param[in]\tname - Object name to check\n *\n * @return\tint\n * @retval\t0 - The name conforms\n * @retval\t1 - The name does not conform (pbs_errno is modified)\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nint\npbs_verify_object_name(int type, const char *name)\n{\n\tconst char *ptr;\n\n\tif ((type < 0) || (type >= MGR_OBJ_LAST)) {\n\t\tpbs_errno = PBSE_IVAL_OBJ_NAME;\n\t\treturn 1;\n\t}\n\t/*\n\t * In many cases, the object name will be empty. This is normal\n\t * for qmgr commands such as \"set server scheduling=true\" because\n\t * the command will be sent to the default server. 
Don't bother\n\t * checking empty names.\n\t */\n\tif ((name == NULL) || (*name == '\\0'))\n\t\treturn 0;\n\tswitch (type) {\n\t\tcase MGR_OBJ_SERVER:\n\t\t\tif (strlen(name) > PBS_MAXSERVERNAME) {\n\t\t\t\tpbs_errno = PBSE_IVAL_OBJ_NAME;\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase MGR_OBJ_QUEUE:\n\t\t\tif (strlen(name) > PBS_MAXQUEUENAME) {\n\t\t\t\tpbs_errno = PBSE_QUENBIG;\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t\t/* Must begin with an alphanumeric character. */\n\t\t\tptr = name;\n\t\t\tif (!isalnum(*ptr)) {\n\t\t\t\tpbs_errno = PBSE_IVAL_OBJ_NAME;\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t\tfor (ptr++; *ptr != '\\0'; ptr++) {\n\t\t\t\tswitch (*ptr) {\n\t\t\t\t\tcase '_':\n\t\t\t\t\tcase '-':\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tdefault:\n\t\t\t\t\t\tif (!isalnum(*ptr)) {\n\t\t\t\t\t\t\tpbs_errno = PBSE_IVAL_OBJ_NAME;\n\t\t\t\t\t\t\treturn 1;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\tbreak;\n\t\tcase MGR_OBJ_JOB:\n\t\t\tif (strlen(name) > PBS_MAXJOBNAME) {\n\t\t\t\tpbs_errno = PBSE_IVAL_OBJ_NAME;\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase MGR_OBJ_NODE:\n\t\t\tif (strlen(name) > PBS_MAXNODENAME) {\n\t\t\t\tpbs_errno = PBSE_NODENBIG;\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase MGR_OBJ_RESV:\n\t\t\tif (strlen(name) > PBS_MAXQRESVNAME) {\n\t\t\t\tpbs_errno = PBSE_IVAL_OBJ_NAME;\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase MGR_OBJ_HOST:\n\t\t\tif (strlen(name) > PBS_MAXHOSTNAME) {\n\t\t\t\tpbs_errno = PBSE_IVAL_OBJ_NAME;\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase MGR_OBJ_RSC:\n\t\tcase MGR_OBJ_SCHED:\n\t\tcase MGR_OBJ_SITE_HOOK:\n\t\tcase MGR_OBJ_PBS_HOOK:\n\t\tdefault:\n\t\t\tbreak;\n\t}\n\n\treturn 0;\n}\n"
  },
  {
    "path": "src/lib/Libecl/ecl_verify_values.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tecl_verify_values.c\n *\n * @brief\tThe attribute value verification functions\n *\n * @par Functionality:\n *\tThis module contains the attribute value verification functions.\\n\n *\tEach function in this module takes a common format as follows:\\n\n *\n * @par Signature:\n *\tint verify_value_xxxx(int batch_request, int parent_object,\n *                      struct attropl * pattr, char **err_msg)\\n\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 - Attribute passed verification\n * @retval\t>0 - Failed verification - pbs errcode is returned\n *\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdlib.h>\n#include <string.h>\n\n#include \"pbs_ifl.h\"\n#include \"pbs_ecl.h\"\n#include \"pbs_error.h\"\n#include \"cmds.h\"\n#include \"ticket.h\"\n#include \"pbs_license.h\"\n\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"queue.h\"\n#include \"pbs_nodes.h\"\n#include \"server.h\"\n#include \"batch_request.h\"\n#include \"pbs_share.h\"\n\nconst char *preempt_prio_names[] = 
{\n\t\"normal_jobs\",\n\t\"fairshare\",\n\t\"queue_softlimits\",\n\t\"server_softlimits\",\n\t\"express_queue\",\n};\nstatic long ecl_pbs_max_licenses = PBS_MAX_LICENSING_LICENSES;\n\n/**\n * @brief\n *\tverify the datatype and value of a resource\n *\n * @par Functionality:\n *\t1. Call ecl_find_resc_def to find the resource defn\\n\n *\t2. Call at_verify_datatype to verify the datatype of the resource\\n\n *\t3. Call at_verify_value to verify the value of a resource\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 - Attribute passed verification\n * @retval\t>0 - Failed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tSome functions reset the value pointer to a new value. It thus\n *\tfrees the old value pointer.\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_resc(int batch_request, int parent_object, int cmd,\n\t\t  struct attropl *pattr, char **err_msg)\n{\n\tecl_attribute_def *prdef;\n\tint err_code = PBSE_NONE;\n\tchar *p;\n\n\tstruct attropl resc_attr;\n\n\tif (pattr == NULL)\n\t\treturn (PBSE_INTERNAL);\n\n\tif (pattr->resource == NULL)\n\t\treturn (PBSE_NONE);\n\n\tif ((prdef = ecl_find_resc_def(ecl_svr_resc_def, pattr->resource, ecl_svr_resc_size))) {\n\t\t/* found the resource, verify type and value of resource */\n\t\tresc_attr.name = pattr->resource;\n\t\tresc_attr.value = pattr->value;\n\n\t\tif (prdef->at_verify_datatype)\n\t\t\terr_code = prdef->at_verify_datatype(&resc_attr,\n\t\t\t\t\t\t\t     err_msg);\n\n\t\tif ((err_code == 0) && (prdef->at_verify_value)) {\n\t\t\terr_code = prdef->at_verify_value(batch_request,\n\t\t\t\t\t\t\t  parent_object, cmd, &resc_attr, err_msg);\n\t\t}\n\t\tif ((err_code != 0) && (*err_msg == NULL)) {\n\t\t\tp = 
pbse_to_txt(err_code);\n\t\t\tif (p) {\n\t\t\t\t*err_msg = malloc(strlen(p) + strlen(pattr->name) + strlen(pattr->resource) + 3);\n\t\t\t\tif (*err_msg == NULL) {\n\t\t\t\t\terr_code = PBSE_SYSTEM;\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t\tsprintf(*err_msg, \"%s %s.%s\",\n\t\t\t\t\tp, pattr->name, pattr->resource);\n\t\t\t}\n\t\t}\n\t}\n\t/*\n\t * unknown resources are okay at this point;\n\t * we don't return an error if the resource is not found,\n\t * since custom resources are known only to the server\n\t * and are verified by the server\n\t */\n\treturn err_code;\n}\n\n/**\n * @brief\n *\tVerify function for user_list (ATTR_g)\n *\n * @par Functionality:\n *\tverify function for the user/group list related attributes (ATTR_g)\\n\n *\tcalls parse_at_list\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_user_list(int batch_request, int parent_object, int cmd,\n\t\t       struct attropl *pattr, char **err_msg)\n{\n\tif ((pattr->value == NULL) || (pattr->value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\n\tif (batch_request == PBS_BATCH_SelectJobs) {\n\t\tif (parse_at_list(pattr->value, FALSE, FALSE))\n\t\t\treturn PBSE_BADATVAL;\n\t} else {\n\t\tif (parse_at_list(pattr->value, TRUE, FALSE))\n\t\t\treturn PBSE_BADATVAL;\n\t}\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\tVerify authorized users (ATTR_auth_u)\n *\n * @par Functionality:\n *\tverify function for the attribute ATTR_auth_u\n *\tcalls parse_at_list to parse the list of values\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * 
@param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_authorized_users(int batch_request, int parent_object, int cmd,\n\t\t\t      struct attropl *pattr, char **err_msg)\n{\n\tif ((pattr->value == NULL) || (pattr->value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\n\tif (parse_at_list(pattr->value, FALSE, FALSE))\n\t\treturn PBSE_BADATVAL;\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\tVerify authorized groups (ATTR_auth_g)\n *\n * @par Functionality:\n *\tverify function for the attribute ATTR_auth_g\n *\tcalls parse_at_list to parse the list of values\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_authorized_groups(int batch_request, int parent_object, int cmd,\n\t\t\t       struct attropl *pattr, char **err_msg)\n{\n\tif (pattr->value == NULL)\n\t\treturn PBSE_BADATVAL;\n\n\tif (pattr->value[0] == '\\0') {\n\t\t/* unset group */\n\t\treturn PBSE_NONE;\n\t}\n\n\tif (parse_at_list(pattr->value, FALSE, FALSE))\n\t\treturn PBSE_BADATVAL;\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\tverify function for the attributes ATTR_depend\n *\n * @par Functionality:\n *\tverify function for the attributes ATTR_depend\\n\n *\tcalls 
parse_depend_list to parse the list of job dependencies\\n\n *\tNOTE: This function also resets the value pointer to the new (expanded)\n *\tdependency list. It frees the original value pointer\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_dependlist(int batch_request, int parent_object, int cmd,\n\t\t\tstruct attropl *pattr, char **err_msg)\n{\n\tchar *pdepend;\n\n\tif ((pattr->value == NULL) || (pattr->value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\n\tpdepend = malloc(PBS_DEPEND_LEN);\n\tif (pdepend == NULL)\n\t\treturn PBSE_SYSTEM;\n\n\tif (parse_depend_list(pattr->value, &pdepend, PBS_DEPEND_LEN)) {\n\t\tfree(pdepend);\n\t\treturn PBSE_BADATVAL;\n\t}\n\t/* replace the value with the expanded value */\n\tfree(pattr->value);\n\tpattr->value = pdepend;\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\tverify function for the attributes ATTR_o, ATTR_e etc\n *\n * @par Functionality\n *\tcalls prepare_path to parse the path associated with ATTR_o, ATTR_e etc\n *\tNOTE: This function also resets the value pointer to the newly\n *\tprepared path. 
It frees the original value pointer\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_path(int batch_request, int parent_object, int cmd,\n\t\t  struct attropl *pattr, char **err_msg)\n{\n\tchar *path_out;\n\n\tif ((pattr == NULL) || (pattr->value == NULL) || (pattr->value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\n\tpath_out = malloc(MAXPATHLEN + 1);\n\tif (path_out == NULL)\n\t\treturn PBSE_SYSTEM;\n\telse\n\t\tmemset(path_out, 0, MAXPATHLEN + 1);\n\n\tif (prepare_path(pattr->value, path_out) != 0) {\n\t\tfree(path_out);\n\t\treturn (PBSE_BADATVAL);\n\t}\n\t/* replace with prepared path */\n\tfree(pattr->value);\n\tpattr->value = path_out;\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\tverify function for the attributes ATTR_J\n *\n * @par Functionality:\n *\tverify function for the attributes ATTR_J\\n\n *\tcalls chk_Jrange to verify that the range of the value is proper\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_jrange(int batch_request, int parent_object, int cmd,\n\t\t    struct attropl *pattr, char **err_msg)\n{\n\tint ret;\n\n\tif ((pattr->value 
== NULL) || (pattr->value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\n\tret = chk_Jrange(pattr->value);\n\tif (ret == 1)\n\t\treturn PBSE_BADATVAL;\n\telse if (ret == 2)\n\t\treturn PBSE_ATVALERANGE;\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * verify function for the attributes ATTR_N (job or resv)\n *\n * @par Functionality:\n *\tverify function for the attributes ATTR_N (job or resv)\\n\n *\tcalls check_job_name to verify that the job/resv name is proper\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_jobname(int batch_request, int parent_object, int cmd,\n\t\t     struct attropl *pattr, char **err_msg)\n{\n\tint chk_alpha = 1; /* by default disallow numeric first char  */\n\tint ret;\n\n\tif (pattr->value == NULL)\n\t\treturn PBSE_BADATVAL;\n\n\tif (pattr->value[0] == '\\0') {\n\t\tif ((batch_request == PBS_BATCH_StatusJob) ||\n\t\t    (batch_request == PBS_BATCH_SelectJobs))\n\t\t\treturn PBSE_NONE;\n\t\telse\n\t\t\treturn PBSE_BADATVAL;\n\t}\n\n\tif (batch_request == PBS_BATCH_QueueJob ||   /* for queuejob allow numeric first char */\n\t    batch_request == PBS_BATCH_ModifyJob ||  /* for alterjob allow numeric first char */\n\t    batch_request == PBS_BATCH_SubmitResv || /* for reservation submit allow numeric first char */\n\t    batch_request == PBS_BATCH_ModifyResv || /* for reservation modify allow numeric first char */\n\t    batch_request == PBS_BATCH_SelectJobs)   /* for selectjob allow numeric first char */\n\t\tchk_alpha = 0;\n\n\tret = check_job_name(pattr->value, chk_alpha);\n\tif (ret == 
-1)\n\t\treturn PBSE_BADATVAL;\n\telse if (ret == -2)\n\t\treturn PBSE_JOBNBIG;\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\tverify function for the attributes ATTR_c (checkpoint)\n *\n * @par Functionality:\n *\tverify function for the attributes ATTR_c (checkpoint)\\n\n *\tChecks that the format of ATTR_c is proper\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_checkpoint(int batch_request, int parent_object, int cmd,\n\t\t\tstruct attropl *pattr, char **err_msg)\n{\n\tchar *val = pattr->value;\n\tchar *pc;\n\n\tif ((pattr->value == NULL) || (pattr->value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\n\tpc = val;\n\tif (strlen(val) == 1) {\n\t\t/* include 'u' as a valid one since unset is set as 'u' */\n\t\tif (*pc != 'n' && *pc != 's' && *pc != 'c' && *pc != 'w' && *pc != 'u')\n\t\t\treturn PBSE_BADATVAL;\n\t} else {\n\t\tif (((*pc != 'c') && (*pc != 'w')) || (*(pc + 1) != '='))\n\t\t\treturn PBSE_BADATVAL;\n\n\t\tpc += 2;\n\t\tif (*pc == '\\0')\n\t\t\treturn PBSE_BADATVAL;\n\n\t\twhile (isdigit(*pc))\n\t\t\tpc++;\n\t\tif (*pc != '\\0')\n\t\t\treturn PBSE_BADATVAL;\n\t}\n\n\tif (batch_request == PBS_BATCH_SelectJobs) {\n\t\tif (strcmp(pc, \"u\") == 0) {\n\t\t\tif ((pattr->op != EQ) && (pattr->op != NE))\n\t\t\t\treturn PBSE_BADATVAL;\n\t\t}\n\t}\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\tverify function for the attributes ATTR_h (hold)\n *\n * @par Functionality:\n *\tverify function for the attributes ATTR_h (hold)\\n\n *\tChecks that the format of ATTR_h is proper\n *\n * @see\n *\n * 
@param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_hold(int batch_request, int parent_object, int cmd,\n\t\t  struct attropl *pattr, char **err_msg)\n{\n\tchar *val = pattr->value;\n\tchar *pc;\n\tint u_cnt = 0;\n\tint o_cnt = 0;\n\tint s_cnt = 0;\n\tint p_cnt = 0;\n\tint n_cnt = 0;\n\n\tif ((pattr->value == NULL) || (pattr->value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\n\tfor (pc = val; *pc != '\\0'; pc++) {\n\t\tif (*pc == 'u')\n\t\t\tu_cnt++;\n\t\telse if (*pc == 'o')\n\t\t\to_cnt++;\n\t\telse if (*pc == 's')\n\t\t\ts_cnt++;\n\t\telse if (*pc == 'p')\n\t\t\tp_cnt++;\n\t\telse if (*pc == 'n')\n\t\t\tn_cnt++;\n\t\telse\n\t\t\treturn (PBSE_BADATVAL);\n\t}\n\tif (n_cnt && (u_cnt + o_cnt + s_cnt + p_cnt))\n\t\treturn (PBSE_BADATVAL);\n\tif (p_cnt && (u_cnt + o_cnt + s_cnt + n_cnt))\n\t\treturn (PBSE_BADATVAL);\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\tverify function for the attributes ATTR_j\n *\n * @par Functionality:\n *\tverify function for the attributes ATTR_j\\n\n *\tChecks that the format of ATTR_j is proper\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n 
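*\n * @par Example\n * \tOnly \"oe\" (stderr merged into stdout), \"eo\" (stdout merged into\n * \tstderr) and \"n\" (no join) are accepted; e.g. \"qsub -j oe\" delivers\n * \t\"oe\" here as pattr->value.\n 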
*/\nint\nverify_value_joinpath(int batch_request, int parent_object, int cmd,\n\t\t      struct attropl *pattr, char **err_msg)\n{\n\tif ((pattr->value == NULL) || (pattr->value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\n\tif (strcmp(pattr->value, \"oe\") != 0 &&\n\t    strcmp(pattr->value, \"eo\") != 0 &&\n\t    strcmp(pattr->value, \"n\") != 0) {\n\n\t\treturn PBSE_BADATVAL;\n\t}\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *  verify function for the attributes ATTR_k\n *\n * @par Functionality:\n *  verify function for the attributes ATTR_k\n *  Checks that the format of ATTR_k is proper\n *\n * @param[in]   value       -   string value to verify\n *\n * @return  int\n * @retval  0   -   Attribute passed verification\n * @retval  >0  -   Failed verification - pbs errcode is returned\n *\n * @par Side effects:\n *  None\n *\n * @par Reentrancy\n *  MT-safe\n */\nint\nverify_keepfiles_common(char *value)\n{\n\tchar *ch;\n\tint keep_files = 0;\n\tint dont_keep = 0;\n\tint direct_write = 0;\n\tif ((value == NULL) || (value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\n\tfor (ch = value; *ch; ch++) {\n\t\tif (*ch == 'o' || *ch == 'e')\n\t\t\tkeep_files = 1;\n\t\tif (*ch == 'n')\n\t\t\tdont_keep = 1;\n\t\tif (*ch == 'd')\n\t\t\tdirect_write = 1;\n\t\tif (*ch != 'o' && *ch != 'e' && *ch != 'd' && *ch != 'n')\n\t\t\treturn PBSE_BADATVAL;\n\t}\n\tif ((keep_files && dont_keep) || (direct_write && !(keep_files || dont_keep)))\n\t\treturn PBSE_BADATVAL;\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\tverify function for the attributes ATTR_k\n *\n * @par Functionality:\n *\tverify function for the attributes ATTR_k\n *\tChecks that the format of ATTR_k is proper\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed 
verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_keepfiles(int batch_request, int parent_object, int cmd,\n\t\t       struct attropl *pattr, char **err_msg)\n{\n\treturn verify_keepfiles_common(pattr->value);\n}\n\n/**\n * @brief\n *\tverify function for the attributes ATTR_m (mailpoints)\n *\n * @par Functionality:\n *\tverify function for the attributes ATTR_m (mailpoints)\\n\n *\tChecks that the format of ATTR_m is proper\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_mailpoints(int batch_request, int parent_object, int cmd,\n\t\t\tstruct attropl *pattr, char **err_msg)\n{\n\tchar *pc;\n\n\tif ((pattr->value == NULL) || (pattr->value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\n\twhile (isspace((int) *pattr->value))\n\t\tpattr->value++;\n\tif (strlen(pattr->value) == 0)\n\t\treturn PBSE_BADATVAL;\n\n\tif (strlen(pattr->value) == 1 && *pattr->value == 'j')\n\t\treturn PBSE_BADATVAL;\n\n\tif (strcmp(pattr->value, \"n\") != 0) {\n\t\tfor (pc = pattr->value; *pc; pc++) {\n\t\t\tif (batch_request == PBS_BATCH_SubmitResv || batch_request == PBS_BATCH_ModifyResv) {\n\t\t\t\tif (*pc != 'a' && *pc != 'b' && *pc != 'e' && *pc != 'c')\n\t\t\t\t\treturn PBSE_BADATVAL;\n\t\t\t} else {\n\t\t\t\tif (*pc != 'a' && *pc != 'b' && *pc != 'e' && *pc != 'j')\n\t\t\t\t\treturn PBSE_BADATVAL;\n\t\t\t}\n\t\t}\n\t}\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\tverify function for the attributes ATTR_M 
(mailusers)\n *\n * @par Functionality:\n *\tverify function for the attributes ATTR_M (mailusers)\\n\n *\tChecks that the format of ATTR_M is proper by calling parse_at_list\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_mailusers(int batch_request, int parent_object, int cmd,\n\t\t       struct attropl *pattr, char **err_msg)\n{\n\tif ((pattr->value == NULL) || (pattr->value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\n\tif (parse_at_list(pattr->value, FALSE, FALSE))\n\t\treturn PBSE_BADATVAL;\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\tverify function for the attributes ATTR_S\n *\n * @par Functionality:\n *\tverify function for the attributes ATTR_S\\n\n *\tChecks that the format of ATTR_S is proper by calling parse_at_list\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_shellpathlist(int batch_request, int parent_object, int cmd,\n\t\t\t   struct attropl *pattr, char **err_msg)\n{\n\tif ((pattr->value == NULL) || (pattr->value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\n\tif (parse_at_list(pattr->value, TRUE, TRUE))\n\t\treturn 
PBSE_BADATVAL;\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\tverify function for the attributes ATTR_p\n *\n * @par Functionality:\n *\tverify function for the attributes ATTR_p\\n\n *\tChecks that the format of ATTR_p is proper (between -1024 & +1023)\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_priority(int batch_request, int parent_object, int cmd,\n\t\t      struct attropl *pattr, char **err_msg)\n{\n\tint i;\n\n\tif ((pattr->value == NULL) || (pattr->value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\n\ti = atoi(pattr->value);\n\tif (i < -1024 || i > 1023) {\n\t\tif (batch_request == PBS_BATCH_SelectJobs)\n\t\t\treturn PBSE_NONE;\n\t\telse\n\t\t\treturn PBSE_BADATVAL;\n\t}\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *  verify function for the attributes ATTR_R\n *\n * @par Functionality:\n *  verify function for the attributes ATTR_R\n *  Checks that the format of ATTR_R is proper\n *\n * @param[in]   value       -   string value to verify\n *\n * @return  int\n * @retval  0   -   Attribute passed verification\n * @retval  >0  -   Failed verification - pbs errcode is returned\n *\n * @par Side effects:\n *  None\n *\n * @par Reentrancy\n *  MT-safe\n */\n\nint\nverify_removefiles_common(char *value)\n{\n\tchar *ch;\n\tif ((value == NULL) || (value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\n\tfor (ch = value; *ch; ch++)\n\t\tif (*ch != 'o' && *ch != 'e')\n\t\t\treturn PBSE_BADATVAL;\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\tverify function for the attributes ATTR_R\n *\n * @par 
Functionality:\n *\tverify function for the attributes ATTR_R\n *\tChecks that the format of ATTR_R is proper\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_removefiles(int batch_request, int parent_object, int cmd,\n\t\t\t struct attropl *pattr, char **err_msg)\n{\n\treturn verify_removefiles_common(pattr->value);\n}\n\n/**\n * @brief\n *\tverify function for the attributes ATTR_sandbox\n *\n * @par Functionality:\n *\tverify function for the attributes ATTR_sandbox\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_sandbox(int batch_request, int parent_object, int cmd,\n\t\t     struct attropl *pattr, char **err_msg)\n{\n\tif ((pattr->value == NULL) || (pattr->value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\n\tif ((strcasecmp(pattr->value, \"HOME\") != 0) &&\n\t    (strcasecmp(pattr->value, \"O_WORKDIR\") != 0) &&\n\t    (strcasecmp(pattr->value, \"PRIVATE\") != 0)) {\n\t\treturn PBSE_BADATVAL;\n\t}\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\t verify function for the attributes ATTR_stagein, ATTR_stageout\n *\tChecks that the format by calling 
parse_stage_list\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_stagelist(int batch_request, int parent_object, int cmd,\n\t\t       struct attropl *pattr, char **err_msg)\n{\n\tif ((pattr->value == NULL) || (pattr->value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\n\tif (parse_stage_list(pattr->value))\n\t\treturn PBSE_BADATVAL;\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\tverify function for the attributes ATTR_ReqCred\n *\tChecks that the value is one of the allowed values\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_credname(int batch_request, int parent_object, int cmd,\n\t\t      struct attropl *pattr, char **err_msg)\n{\n\tstatic const char *cred_list[] = {\n\t\tPBS_CREDNAME_AES,\n\t\tNULL /* must be last */\n\t};\n\n\tchar *val = pattr->value;\n\tint i;\n\n\tif ((pattr->value == NULL) || (pattr->value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\n\tfor (i = 0; cred_list[i]; i++) {\n\t\tif (strcmp(cred_list[i], val) == 0)\n\t\t\treturn PBSE_NONE;\n\t}\n\treturn PBSE_BADATVAL;\n}\n\n/**\n * @brief\n *\t for some attributes which can have 0 or +ve 
values\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_zero_or_positive(int batch_request, int parent_object, int cmd,\n\t\t\t      struct attropl *pattr, char **err_msg)\n{\n\tlong lval;\n\tchar *end = NULL;\n\n\tif ((pattr->value == NULL) || (pattr->value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\n\terrno = 0;\n\tlval = strtol(pattr->value, &end, 10);\n\t/* end == start means no digits were converted; strtol() then returns 0 */\n\tif ((errno != 0) || (end == pattr->value) || (lval < 0))\n\t\treturn PBSE_BADATVAL;\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\tFunction checks the resource \"preempt_targets\" and verifies its values\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n */\nint\nverify_value_preempt_targets(int batch_request, int parent_object, int cmd,\n\t\t\t     struct attropl *pattr, char **err_msg)\n{\n\tchar *val = NULL;\n\tchar *result = NULL;\n\tchar *temp = NULL;\n\tchar *p = NULL;\n\tchar *q = NULL;\n\tchar ch = 0;\n\tchar ch1 = 0;\n\tint err_code = PBSE_NONE;\n\tecl_attribute_def *prdef = NULL;\n\tchar *value = NULL;\n\tchar *msg = NULL;\n\tint attrib_found = 0;\n\tchar *lcase_val = NULL;\n\tchar *chk_arr[] = {ATTR_l, ATTR_queue, NULL};\n\tint i = 0;\n\tint j = 0;\n\tint res_len = 
0;\n\tecl_attribute_def *ecl_def = ecl_svr_resc_def;\n\tint ecl_def_size = ecl_svr_resc_size;\n\tstruct attropl resc_attr = {0};\n\n\tif ((pattr->value == NULL) || (pattr->value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\tval = pattr->value;\n\twhile (isspace(*val))\n\t\tval++;\n\t/* Check if preempt_targets is set to \"NONE\" */\n\tif (strncasecmp(val, TARGET_NONE, strlen(TARGET_NONE)) == 0) {\n\t\tif (strcasecmp(val, TARGET_NONE) != 0)\n\t\t\terr_code = PBSE_BADATVAL;\n\t\treturn err_code;\n\t}\n\tfor (i = 0; chk_arr[i] != NULL; i++) {\n\t\tif (strcmp(chk_arr[i], ATTR_queue) == 0) {\n\t\t\tecl_def = ecl_resv_attr_def;\n\t\t\tecl_def_size = ecl_resv_attr_size;\n\t\t\t/*\n\t\t\t * Search for the string \"queue\" case-insensitively by lowercasing\n\t\t\t * the value first, since some platforms (e.g. Windows) do not\n\t\t\t * provide a case-insensitive substring search\n\t\t\t */\n\t\t\tif (lcase_val != NULL) {\n\t\t\t\tfree(lcase_val);\n\t\t\t\tlcase_val = NULL;\n\t\t\t}\n\t\t\tlcase_val = strdup(val);\n\t\t\tif (lcase_val == NULL)\n\t\t\t\treturn PBSE_SYSTEM;\n\t\t\tfor (j = 0; lcase_val[j] != '\\0'; j++) {\n\t\t\t\tlcase_val[j] = tolower(lcase_val[j]);\n\t\t\t}\n\t\t\tval = lcase_val;\n\t\t} else\n\t\t\tval = pattr->value;\n\t\t/* Check preempt_targets for one of the attrib names in its values */\n\t\tresult = strstr(val, chk_arr[i]);\n\t\tres_len = strlen(chk_arr[i]);\n\t\t/* Traverse through the values; there may be multiple comma-separated values */\n\t\twhile (result != NULL) {\n\t\t\t/* At least one of the recognized attributes was found */\n\t\t\tattrib_found = 1;\n\t\t\tif (strcmp(chk_arr[i], ATTR_l) == 0) {\n\t\t\t\t/* We need to skip \"Resource_List\" */\n\t\t\t\ttemp = result + res_len;\n\t\t\t\tif (*temp != '.') {\n\t\t\t\t\tfree(lcase_val);\n\t\t\t\t\treturn PBSE_BADATVAL;\n\t\t\t\t}\n\t\t\t\t/* Ignoring '.' 
character */\n\t\t\t\ttemp = temp + 1;\n\t\t\t} else\n\t\t\t\ttemp = result;\n\t\t\tp = strpbrk(temp, \"=\");\n\t\t\tif (p == NULL) {\n\t\t\t\tfree(lcase_val);\n\t\t\t\treturn PBSE_BADATVAL;\n\t\t\t}\n\t\t\tch = *p;\n\t\t\tvalue = p + 1;\n\t\t\t*p = '\\0';\n\t\t\t/* find the resource definition */\n\t\t\tprdef = ecl_find_resc_def(ecl_def, temp, ecl_def_size);\n\t\t\tif (prdef == NULL) {\n\t\t\t\t*p = ch;\n\t\t\t\t/* Assuming custom resource, don't know datatype to verify */\n\t\t\t\tresult = strstr(temp, chk_arr[i]);\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tq = strchr(value, ',');\n\t\t\tif (q != NULL) {\n\t\t\t\tch1 = *q;\n\t\t\t\t*q = '\\0';\n\t\t\t}\n\t\t\tresc_attr.name = strdup(temp);\n\t\t\tif (resc_attr.name == NULL) {\n\t\t\t\tfree(lcase_val);\n\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t}\n\t\t\tresc_attr.value = strdup(value);\n\t\t\tif (resc_attr.value == NULL) {\n\t\t\t\tfree(lcase_val);\n\t\t\t\tfree(resc_attr.name);\n\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t}\n\t\t\tif (q != NULL)\n\t\t\t\t*q = ch1;\n\t\t\t*p = ch;\n\t\t\tif (prdef->at_verify_datatype)\n\t\t\t\terr_code = prdef->at_verify_datatype(&resc_attr,\n\t\t\t\t\t\t\t\t     err_msg);\n\n\t\t\tif ((err_code == 0) && (prdef->at_verify_value)) {\n\t\t\t\terr_code = prdef->at_verify_value(batch_request,\n\t\t\t\t\t\t\t\t  parent_object, cmd, &resc_attr, err_msg);\n\t\t\t}\n\t\t\tif ((err_code != 0) && (*err_msg == NULL)) {\n\t\t\t\tmsg = pbse_to_txt(err_code);\n\t\t\t\tif (msg) {\n\t\t\t\t\t*err_msg = malloc(strlen(msg) + 1);\n\t\t\t\t\tif (*err_msg == NULL) {\n\t\t\t\t\t\tfree(lcase_val);\n\t\t\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t\t\t}\n\t\t\t\t\tsprintf(*err_msg, \"%s\",\n\t\t\t\t\t\tmsg);\n\t\t\t\t}\n\t\t\t\treturn err_code;\n\t\t\t}\n\t\t\tval = p;\n\t\t\tfree(resc_attr.name);\n\t\t\tfree(resc_attr.value);\n\t\t\tresc_attr.name = resc_attr.value = NULL;\n\t\t\tresult = strstr(val, chk_arr[i]);\n\t\t}\n\t}\n\tfree(lcase_val);\n\tif (attrib_found == 0)\n\t\terr_code = PBSE_BADATVAL;\n\treturn err_code;\n}\n\n/**\n * 
@brief\n *\tfor some attributes which can have only +ve values\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_non_zero_positive(int batch_request, int parent_object,\n\t\t\t       int cmd, struct attropl *pattr, char **err_msg)\n{\n\tlong l;\n\n\tif ((pattr->value == NULL) || (pattr->value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\n\tl = atol(pattr->value);\n\tif (l <= 0)\n\t\treturn PBSE_BADATVAL;\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\tfor some attributes which can have only +ve long values, e.g. ATTR_max_job_sequence_id\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_non_zero_positive_long_long(int batch_request, int parent_object,\n\t\t\t\t\t int cmd, struct attropl *pattr, char **err_msg)\n{\n\tlong long l = -1;\n\tchar *pc = NULL;\n\tif ((pattr->value == NULL) || (pattr->value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\n\terrno = 0; /* clear any stale ERANGE before the conversion */\n\tl = strTouL(pattr->value, &pc, 10);\n\tif ((*pc != '\\0') || (errno == ERANGE)) {\n\t\treturn PBSE_BADATVAL;\n\t}\n\tif (l <= 0)\n\t\treturn PBSE_BADATVAL;\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\tverifies attribute 
ATTR_license_min\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_minlicenses(int batch_request, int parent_object, int cmd,\n\t\t\t struct attropl *pattr, char **err_msg)\n{\n\tlong l;\n\n\tif ((pattr->value == NULL) || (pattr->value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\n\tl = atol(pattr->value);\n\tif ((l < 0) || (l > ecl_pbs_max_licenses))\n\t\treturn (PBSE_LICENSE_MIN_BADVAL);\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\tverifies attribute ATTR_license_max\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_maxlicenses(int batch_request, int parent_object, int cmd,\n\t\t\t struct attropl *pattr, char **err_msg)\n{\n\tlong l;\n\n\tif ((pattr->value == NULL) || (pattr->value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\n\tl = atol(pattr->value);\n\n\tif ((l < 0) || (l > ecl_pbs_max_licenses))\n\t\treturn (PBSE_LICENSE_MAX_BADVAL);\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\tverifies attribute ATTR_license_linger\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * 
@param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_licenselinger(int batch_request, int parent_object, int cmd,\n\t\t\t   struct attropl *pattr, char **err_msg)\n{\n\tlong l;\n\n\tif ((pattr->value == NULL) || (pattr->value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\n\tl = atol(pattr->value);\n\tif (l <= 0)\n\t\treturn (PBSE_LICENSE_LINGER_BADVAL);\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\tverifies attributes like ATTR_managers, ATTR_operators etc.\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_mgr_opr_acl_check(int batch_request, int parent_object,\n\t\t\t       int cmd, struct attropl *pattr, char **err_msg)\n{\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\t/* with kerberos, we cannot really check validity */\n\treturn PBSE_NONE;\n#endif\n\n\tchar *dup_val;\n\tchar *token;\n\tchar *entry;\n\tchar *p;\n\tchar *comma;\n\tint err = PBSE_NONE;\n\tchar hostname[PBS_MAXHOSTNAME + 1];\n\n\tif ((pattr->value == NULL) || (pattr->value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\n\tdup_val = strdup(pattr->value);\n\tif (!dup_val)\n\t\treturn PBSE_SYSTEM; /* keep the documented >0 errcode convention on strdup failure */\n\n\ttoken = dup_val;\n\tcomma = strchr(token, ',');\n\twhile (token) {\n\t\t/* eliminate trailing spaces in token */\n\t\tif 
(comma)\n\t\t\tp = comma;\n\t\telse\n\t\t\tp = token + strlen(token);\n\t\twhile (*--p == ' ' && p != token)\n\t\t\t;\n\t\t*(p + 1) = 0;\n\n\t\t/* eliminate spaces in the front */\n\t\twhile (token && *token == ' ')\n\t\t\ttoken++;\n\n\t\tentry = strchr(token, (int) '@');\n\t\tif (entry == NULL) {\n\t\t\terr = PBSE_BADHOST;\n\t\t\tbreak;\n\t\t}\n\t\tentry++;\t     /* point after the '@' */\n\t\tif (*entry != '*') { /* if == * cannot check it any more */\n\n\t\t\t/* if not wild card, must be fully qualified host */\n\t\t\tif (get_fullhostname(entry, hostname, (sizeof(hostname) - 1)) || strncasecmp(entry, hostname, (sizeof(hostname) - 1))) {\n\t\t\t\terr = PBSE_BADHOST;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\n\t\ttoken = NULL;\n\t\tif (comma) {\n\t\t\ttoken = comma + 1;\n\t\t\tcomma = strchr(token, ',');\n\t\t}\n\t}\n\n\tfree(dup_val);\n\treturn err;\n}\n\n/**\n * @brief\n *\tverifies the queue type specified by attribute ATTR_q\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_queue_type(int batch_request, int parent_object, int cmd,\n\t\t\tstruct attropl *pattr, char **err_msg)\n{\n\tint i;\n\tchar *name[2] = {\"Execution\", \"Route\"};\n\n\tif ((pattr->value == NULL) || (pattr->value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\n\t/* does the requested value match a legal value? 
*/\n\tfor (i = 0; i < 2; i++) {\n\t\tif (strncasecmp(name[i], pattr->value,\n\t\t\t\tstrlen(pattr->value)) == 0)\n\t\t\treturn PBSE_NONE;\n\t}\n\treturn (PBSE_BADATVAL);\n}\n\n/**\n * @brief\n *\tverifies job state specified by attribute ATTR_state\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_state(int batch_request, int parent_object, int cmd,\n\t\t   struct attropl *pattr, char **err_msg)\n{\n\tchar *pc = pattr->value;\n\n\tif (pattr->value == NULL)\n\t\treturn PBSE_BADATVAL;\n\n\tif (pattr->value[0] == '\\0') {\n\t\tif (batch_request != PBS_BATCH_StatusJob)\n\t\t\treturn PBSE_BADATVAL;\n\t}\n\twhile (*pc) {\n\t\tif (*pc != 'E' && *pc != 'H' && *pc != 'Q' &&\n\t\t    *pc != 'R' && *pc != 'T' && *pc != 'W' &&\n\t\t    *pc != 'S' && *pc != 'U' && *pc != 'B' &&\n\t\t    *pc != 'X' && *pc != 'F' && *pc != 'M')\n\t\t\treturn PBSE_BADATVAL;\n\t\tpc++;\n\t}\n\treturn PBSE_NONE;\n}\n/**\n * @brief\n *\tParses select specification and verifies the datatype and value of each resource\n *\n * @par Functionality:\n *\t1. Parses select specification by calling parse_chunk function.\n *\t2. Decodes each chunk\n *\t3. 
Calls verify_value_resc for each resource in a chunk.\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 - Attribute passed verification\n * @retval\t>0 - Failed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tSome functions reset the value pointer to a new value. It thus\n *\tfrees the old value pointer.\n *\n */\nint\nverify_value_select(int batch_request, int parent_object, int cmd,\n\t\t    struct attropl *pattr, char **err_msg)\n{\n\tchar *chunk;\n\tint nchk;\n\tint nelem;\n\tstruct key_value_pair *pkvp;\n\tint rc = 0;\n\tint j;\n\tstruct attropl resc_attr;\n\tif ((pattr->value == NULL) || (pattr->value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\n\tchunk = parse_plus_spec(pattr->value, &rc); /* break '+' separated substrings */\n\tif (rc != 0)\n\t\treturn (rc);\n\n\twhile (chunk) {\n\t\tif (parse_chunk(chunk, &nchk, &nelem, &pkvp, NULL) == 0) {\n\t\t\tfor (j = 0; j < nelem; ++j) {\n\t\t\t\tresc_attr.name = pattr->name;\n\t\t\t\tresc_attr.resource = pkvp[j].kv_keyw;\n\t\t\t\tresc_attr.value = pkvp[j].kv_val;\n\t\t\t\trc = verify_value_resc(batch_request, parent_object, cmd, &resc_attr, err_msg);\n\t\t\t\tif (rc > 0)\n\t\t\t\t\treturn rc;\n\t\t\t}\n\t\t} else {\n\t\t\treturn PBSE_BADATVAL;\n\t\t}\n\t\tchunk = parse_plus_spec(NULL, &rc);\n\t\tif (rc != 0)\n\t\t\treturn (rc);\n\t} /* while */\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\tverifies the values specified by attribute ATTR_tolerate_node_failures\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * 
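@par Example\n * \tOnly an exact, case-sensitive match of one of the strings behind\n * \tTOLERATE_NODE_FAILURES_ALL, TOLERATE_NODE_FAILURES_JOB_START or\n * \tTOLERATE_NODE_FAILURES_NONE is accepted (plain strcmp, no prefix\n * \tmatching).\n *\n * 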
@return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n * @par Reentrancy\n *\tMT-safe\n */\nint\nverify_value_tolerate_node_failures(int batch_request, int parent_object, int cmd,\n\t\t\t\t    struct attropl *pattr, char **err_msg)\n{\n\tint i;\n\tchar *tolerance_level[] = {TOLERATE_NODE_FAILURES_ALL, TOLERATE_NODE_FAILURES_JOB_START, TOLERATE_NODE_FAILURES_NONE, NULL};\n\t/* does the requested value match a legal value? */\n\tfor (i = 0; tolerance_level[i] != NULL; i++) {\n\t\tif (strcmp(tolerance_level[i], pattr->value) == 0)\n\t\t\treturn PBSE_NONE;\n\t}\n\treturn (PBSE_BADATVAL);\n}\n\n/**\n * @brief\n *\tFunction checks the resource \"preempt_prio\" and verifies its values\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n */\nint\nverify_value_preempt_prio(int batch_request, int parent_object, int cmd,\n\t\t\t  struct attropl *pattr, char **err_msg)\n{\n\tchar **list;\n\tchar *saveptr;\n\n\tlist = break_comma_list(pattr->value);\n\tif (list != NULL) {\n\t\tint i;\n\t\tint j;\n\t\tfor (i = 0; list[i] != NULL; i++) {\n\t\t\tchar *tok;\n\t\t\tshort found;\n\t\t\tfound = 0;\n\t\t\ttok = strtok_r(list[i], \"+\", &saveptr);\n\t\t\twhile (tok != NULL) {\n\t\t\t\tfor (j = 0; j < (sizeof(preempt_prio_names) / sizeof(char *)); j++)\n\t\t\t\t\tif (!strcmp(preempt_prio_names[j], tok))\n\t\t\t\t\t\tfound = 1;\n\n\t\t\t\tif (!found) {\n\t\t\t\t\tfree_string_array(list);\n\t\t\t\t\treturn PBSE_BADATVAL;\n\t\t\t\t}\n\t\t\t\ttok = strtok_r(NULL, \"+\", 
&saveptr);\n\t\t\t}\n\t\t}\n\t\tfree_string_array(list);\n\t} else\n\t\treturn PBSE_BADATVAL;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tFunction checks the resource \"preempt_order\" and verifies its values\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n */\nint\nverify_value_preempt_order(int batch_request, int parent_object, int cmd,\n\t\t\t   struct attropl *pattr, char **err_msg)\n{\n\tchar *save_ptr;\n\tchar *tok = NULL;\n\tchar *endp = NULL;\n\tchar copy[256] = {0};\n\n\tif ((pattr->value == NULL) || (pattr->value[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\n\tif (strlen(pattr->value) >= sizeof(copy))\n\t\treturn PBSE_BADATVAL; /* value too long for the local buffer */\n\n\tstrcpy(copy, pattr->value);\n\ttok = strtok_r(copy, \"\\t \", &save_ptr);\n\n\tif (tok != NULL && !isdigit(tok[0])) {\n\t\tint i = 0;\n\t\tchar s_done = 0;\n\t\tchar c_done = 0;\n\t\tchar r_done = 0;\n\t\tchar d_done = 0;\n\t\tchar next_is_num = 0;\n\t\tdo {\n\t\t\tint j = 0;\n\t\t\tj = isdigit(tok[0]);\n\t\t\tif (j) {\n\t\t\t\tif (next_is_num) {\n\t\t\t\t\t(void) strtol(tok, &endp, 10);\n\t\t\t\t\tif (*endp == '\\0') {\n\t\t\t\t\t\ti++;\n\t\t\t\t\t\tnext_is_num = 0;\n\t\t\t\t\t} else\n\t\t\t\t\t\treturn PBSE_BADATVAL;\n\t\t\t\t} else\n\t\t\t\t\treturn PBSE_BADATVAL;\n\t\t\t} else if (!next_is_num) {\n\t\t\t\tfor (j = 0; tok[j] != '\\0'; j++) {\n\t\t\t\t\tswitch (tok[j]) {\n\t\t\t\t\t\tcase 'S':\n\t\t\t\t\t\t\tif (!s_done)\n\t\t\t\t\t\t\t\ts_done = 1;\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\treturn PBSE_BADATVAL;\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\tcase 'C':\n\t\t\t\t\t\t\tif (!c_done)\n\t\t\t\t\t\t\t\tc_done = 1;\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\treturn PBSE_BADATVAL;\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\tcase 'R':\n\t\t\t\t\t\t\tif (!r_done)\n\t\t\t\t\t\t\t\tr_done = 1;\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\treturn PBSE_BADATVAL;\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\tcase 'D':\n\t\t\t\t\t\t\tif (!d_done)\n\t\t\t\t\t\t\t\td_done = 1;\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\treturn PBSE_BADATVAL;\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\tdefault:\n\t\t\t\t\t\t\treturn PBSE_BADATVAL;\n\t\t\t\t\t}\n\t\t\t\t\tnext_is_num = 1;\n\t\t\t\t}\n\t\t\t\ts_done = 0;\n\t\t\t\tc_done = 0;\n\t\t\t\tr_done = 0;\n\t\t\t\td_done = 0;\n\t\t\t} else\n\t\t\t\treturn PBSE_BADATVAL;\n\t\t\ttok = strtok_r(NULL, \"\\t \", &save_ptr);\n\t\t} while (tok != NULL && i < PREEMPT_ORDER_MAX);\n\n\t\tif (tok != NULL)\n\t\t\treturn PBSE_BADATVAL;\n\t} else\n\t\treturn PBSE_BADATVAL;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tFunction checks the resource \"preempt_sort\" and verifies its values\n *\n * @see\n *\n * @param[in]\tbatch_request\t-\tBatch Request Type\n * @param[in]\tparent_object\t-\tParent Object Type\n * @param[in]\tcmd\t\t-\tCommand Type\n * @param[in]\tpattr\t\t-\taddress of attribute to verify\n * @param[out]\terr_msg\t\t-\terror message list\n *\n * @return\tint\n * @retval\t0 \t- \tAttribute passed verification\n * @retval\t>0 \t- \tFailed verification - pbs errcode is returned\n *\n * @par\tSide effects:\n * \tNone\n *\n */\nint\nverify_value_preempt_sort(int batch_request, int parent_object, int cmd,\n\t\t\t  struct attropl *pattr, char **err_msg)\n{\n\tif (strcmp(pattr->value, PBS_PREEMPT_SORT_DEFAULT) != 0)\n\t\treturn PBSE_BADATVAL;\n\n\treturn 0;\n}\n"
  },
  {
    "path": "src/lib/Libecl/pbs_client_thread.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_client_thread.c\n *\n * @brief\tPbs threading related functions\n *\n * @par\t\tFunctionality:\n *\tThis module provides a higher level abstraction of the\n *\tpthread calls by wrapping them with some additional logic to the\n *\trest of the PBS world.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include <unistd.h>\n#include <sys/types.h>\n#include <pwd.h>\n#include \"libpbs.h\"\n#include \"pbs_client_thread.h\"\n\n/**\n * @brief\n *\tinitializes the dis static tables for future use\n *\n * @par Functionality:\n *\tInitializes the dis static tables for future use\\n\n *\tThis is called only once from @see __init_thread_data via\n *\tthe pthread_once mechanism\n */\nextern void dis_init_tables(void);\nextern long dis_buffsize; /* defn of DIS_BUFSZ in dis headers */\n\n/**\n * @brief\n *\tFunction to free the node pool structure\n *\n * @par Functionality:\n *\tFrees the node pool structure. \\n\n *\tThis is called only once when the thread is being destroyed\n */\nextern void free_node_pool(void *);\n\n/**\n * For capturing errors inside the once function. 
\\n\n * Even though this is a global var, this won't cause threading issues, \\n\n * since pthread calls the once function only once in a processes lifetime\n */\nstatic int __pbs_client_thread_init_rc = 0;\n\n/* the __ functions have the actual threaded functionality */\nstatic int __pbs_client_thread_lock_connection(int connect);\nstatic int __pbs_client_thread_unlock_connection(int connect);\nstatic struct pbs_client_thread_context *\n__pbs_client_thread_get_context_data(void);\nstatic int __pbs_client_thread_lock_conntable(void);\nstatic int __pbs_client_thread_unlock_conntable(void);\nstatic int __pbs_client_thread_lock_conf(void);\nstatic int __pbs_client_thread_unlock_conf(void);\nstatic int __pbs_client_thread_init_thread_context(void);\nstatic int __pbs_client_thread_init_connect_context(int connect);\nstatic int __pbs_client_thread_destroy_connect_context(int connect);\nstatic void __pbs_client_thread_destroy_thread_data(void *p);\n\n/*\n * The pfn_pbs_client_thread function pointers are assigned to the real\n * functions here. This is the default assignment. 
If an internal 'client'\n * (like the daemons) wish to bypass the verification and threading behavior\n * they can call pbs_client_thread_set_single_threaded_mode() to reset these\n * function pointers to point to single threaded mode funcs, most of which have\n * an empty implementation\n */\n\nint (*pfn_pbs_client_thread_lock_connection)(int connect) = &__pbs_client_thread_lock_connection;\n\nint (*pfn_pbs_client_thread_unlock_connection)(int connect) = &__pbs_client_thread_unlock_connection;\n\nstruct pbs_client_thread_context *(*pfn_pbs_client_thread_get_context_data)(void) = &__pbs_client_thread_get_context_data;\n\nint (*pfn_pbs_client_thread_lock_conntable)(void) = &__pbs_client_thread_lock_conntable;\n\nint (*pfn_pbs_client_thread_unlock_conntable)(void) = &__pbs_client_thread_unlock_conntable;\n\nint (*pfn_pbs_client_thread_lock_conf)(void) = &__pbs_client_thread_lock_conf;\n\nint (*pfn_pbs_client_thread_unlock_conf)(void) = &__pbs_client_thread_unlock_conf;\n\nint (*pfn_pbs_client_thread_init_thread_context)(void) = &__pbs_client_thread_init_thread_context;\n\nint (*pfn_pbs_client_thread_init_connect_context)(int connect) = &__pbs_client_thread_init_connect_context;\n\nint (*pfn_pbs_client_thread_destroy_connect_context)(int connect) = &__pbs_client_thread_destroy_connect_context;\n\n/* following are some global thread related variables, like initializers etc */\nstatic pthread_key_t key_tls;\t\t\t\t     /* the key used to set/retrieve the TLS data */\nstatic pthread_once_t pre_init_key_once = PTHREAD_ONCE_INIT; /* once keys */\nstatic pthread_once_t post_init_key_once = PTHREAD_ONCE_INIT;\n\nstatic pthread_mutex_t pbs_client_thread_conntable_mutex; /* for conn table */\nstatic pthread_mutex_t pbs_client_thread_conf_mutex;\t  /* for pbs_loadconf */\nstatic pthread_mutexattr_t attr;\n\n/**\n * This is a local thread_context variable which is used by the single threaded\n * model functions. 
This way the semantics are similar between single/multi\n * threaded code. The only difference in single threaded mode is that it uses a\n * global data structure to store the data instead of the TLS\n */\nstatic struct pbs_client_thread_context\n\tpbs_client_thread_single_threaded_context;\n\n/** single threaded mode dummy function definition */\nstatic int\n__pbs_client_thread_lock_connection_single_threaded(int connect)\n{\n\treturn 0;\n}\n\n/** single threaded mode dummy function definition */\nstatic int\n__pbs_client_thread_unlock_connection_single_threaded(int connect)\n{\n\treturn 0;\n}\n\n/** single threaded mode function definition\n * @brief\n *\tReturns the thread context data\n *\n * @par Functionality:\n *\tReturns the address of the global thread context variable called\n *\t@see pbs_client_thread_single_threaded_context\n *\n * @retval - Address of the thread context data\n *\n */\nstruct pbs_client_thread_context *\n__pbs_client_thread_get_context_data_single_threaded(void)\n{\n\treturn &pbs_client_thread_single_threaded_context;\n}\n\n/** single threaded mode dummy function definition */\nstatic int\n__pbs_client_thread_lock_conntable_single_threaded(void)\n{\n\treturn 0;\n}\n\n/** single threaded mode dummy function definition */\nstatic int\n__pbs_client_thread_unlock_conntable_single_threaded(void)\n{\n\treturn 0;\n}\n\n/** single threaded mode dummy function definition */\nstatic int\n__pbs_client_thread_lock_conf_single_threaded(void)\n{\n\treturn 0;\n}\n\n/** single threaded mode dummy function definition */\nstatic int\n__pbs_client_thread_unlock_conf_single_threaded(void)\n{\n\treturn 0;\n}\n\n/** single threaded mode dummy function definition */\nstatic int\n__pbs_client_thread_destroy_connect_context_single_threaded(int connect)\n{\n\treturn 0;\n}\n\n/**\n * this is a global variable but is used only when single threaded mode is set,\n * so is not an issue with threading\n */\nstatic int single_threaded_init_done = 0;\n\n/**\n * @brief\n 
*\tInitialize the thread context for single threaded applications.\n *\n * @par Functionality:\n *      1. Sets the context using global variable\n *\t   pbs_client_thread_single_threaded_context \\n\n *      2. Initializes the members of this structure \\n\n *      3. Sets single_threaded_init_done to 1 \\n\n *\tThis is the function that gets called when single threaded applications\n *\tcall pbs_client_thread_init_thread_context.\n *\n * @return\tint\n *\n * @retval\t0 - success\n * @retval\t1 - failure (pbs_errno is set)\n *\n * @par Side-effects:\n *\tModifies global variable, single_threaded_init_done.\n *\n * @par Reentrancy:\n *\tMT unsafe\n */\nstatic int\n__pbs_client_thread_init_thread_context_single_threaded(void)\n{\n\tstruct pbs_client_thread_context *ptr;\n\n\tif (single_threaded_init_done)\n\t\treturn 0;\n\n\tptr = &pbs_client_thread_single_threaded_context;\n\n\t/* initialize any elements of the single_threaded_context */\n\tmemset(ptr, 0, sizeof(struct pbs_client_thread_context));\n\n\tptr->th_dis_buffer = calloc(1, dis_buffsize); /* defined in tcp_dis.c */\n\tif (ptr->th_dis_buffer == NULL) {\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn -1;\n\t}\n\n\t/* set any default values for the TLS vars */\n\tptr->th_pbs_tcp_timeout = PBS_DIS_TCP_TIMEOUT_SHORT;\n\tptr->th_pbs_tcp_interrupt = 0;\n\tptr->th_pbs_tcp_errno = 0;\n\n\tptr->th_pbs_errno = 0;\n\n\tdis_init_tables();\n\n\tsingle_threaded_init_done = 1;\n\tptr->th_pbs_mode = 1; /* single threaded */\n\treturn 0;\n}\n\n/** single threaded mode dummy functions definition */\nstatic int\n__pbs_client_thread_init_connect_context_single_threaded(int connect)\n{\n\treturn 0;\n}\n\n/**\n * @brief\n *\tSet single threaded mode for the caller\n *\n * @par Functionality:\n *\tThe functions pointers are reset to a different set of functions,\n *\tmost of which have an empty implementation.\n *\tCalled by the daemons to bypass multithreading functions.\n *\tThe functions pointers are reset to a different set of 
functions,\n *\tmost of which have an empty implementation.\n *\n * @return\tvoid\n *\n * @par Side-effects:\n *\tNone\n *\n * @par Reentrancy:\n *\tMT unsafe - should be called only in a single threaded application\n */\nvoid\npbs_client_thread_set_single_threaded_mode(void)\n{\n\t/* point these to dummy functions */\n\tpfn_pbs_client_thread_lock_connection =\n\t\t__pbs_client_thread_lock_connection_single_threaded;\n\tpfn_pbs_client_thread_unlock_connection =\n\t\t__pbs_client_thread_unlock_connection_single_threaded;\n\tpfn_pbs_client_thread_get_context_data =\n\t\t__pbs_client_thread_get_context_data_single_threaded;\n\tpfn_pbs_client_thread_lock_conntable =\n\t\t__pbs_client_thread_lock_conntable_single_threaded;\n\tpfn_pbs_client_thread_unlock_conntable =\n\t\t__pbs_client_thread_unlock_conntable_single_threaded;\n\tpfn_pbs_client_thread_lock_conf =\n\t\t__pbs_client_thread_lock_conf_single_threaded;\n\tpfn_pbs_client_thread_unlock_conf =\n\t\t__pbs_client_thread_unlock_conf_single_threaded;\n\tpfn_pbs_client_thread_init_thread_context =\n\t\t__pbs_client_thread_init_thread_context_single_threaded;\n\tpfn_pbs_client_thread_init_connect_context =\n\t\t__pbs_client_thread_init_connect_context_single_threaded;\n\tpfn_pbs_client_thread_destroy_connect_context =\n\t\t__pbs_client_thread_destroy_connect_context_single_threaded;\n}\n\n/* following are the definitions of the actual threaded functions */\n\n/**\n * @brief\n *\tPre/First initialization routine\n *\n * @par Functionality:\n *      1. Creates the key to be used to set/retrieve TLS data for a thread \\n\n *      2. Creates the mutex attribute attr of type recursive mutex \\n\n *      3. 
Creates the recursive mutex used to serialize access to global data \\n\n *\tThis is the function called by pthread_once mechanism exactly once in\n *\tthe process lifetime from \"__pbs_client_thread_init_thread_context\".\n *\n * @see __pbs_client_thread_init_thread_context\\n __post_init_thread_data\n *\n * @return\tvoid\n *\n * @par Side-effects:\n *\tModifies global variable, __pbs_client_thread_init_rc. This variable is\n *\tused to hold any error that might happen inside this function. (since\n *\tpthread_once only takes a function with a void return type). This won't\n *\tbe a thread race/issue, since the variable is modified only by the init\n *\tfunction which is called only once via pthread_once(). This global var\n *\tis set by this function and used by the caller routine\n *\t\"__pbs_client_thread_init_thread_context\" to know whether the code\n *\tinside __init_thread_data executed successfully or not.\n *\n * @par Reentrancy:\n *\tMT unsafe - must be called via pthread_once()\n */\nstatic void\n__init_thread_data(void)\n{\n\tif ((__pbs_client_thread_init_rc =\n\t\t     pthread_key_create(&key_tls,\n\t\t\t\t\t&__pbs_client_thread_destroy_thread_data)) != 0)\n\t\treturn;\n\n\t/*\n\t * since this function is called only once in the process's lifetime\n\t * use this place to initialize mutex attribute and the mutexes\n\t */\n\tif ((__pbs_client_thread_init_rc = pthread_mutexattr_init(&attr)) != 0)\n\t\treturn;\n\n\tif ((__pbs_client_thread_init_rc = pthread_mutexattr_settype(&attr,\n\t/*\n\t * linux does not have a PTHREAD_MUTEX_RECURSIVE attr_type, instead\n\t * has a PTHREAD_MUTEX_RECURSIVE_NP (NP stands for non-portable). Thus\n\t * need a conditional compile to ensure it builds properly in linux as 
The windows implementation of pthread, the\n\t * Libpbspthread.dll library, however knows only about\n\t * \"PTHREAD_MUTEX_RECURSIVE\", and will reject any other attr_type.\n\t *\n\t */\n#if defined(linux)\n\t\t\t\t\t\t\t\t     PTHREAD_MUTEX_RECURSIVE_NP\n#else\n\t\t\t\t\t\t\t\t     PTHREAD_MUTEX_RECURSIVE\n#endif\n\t\t\t\t\t\t\t\t     )) != 0)\n\t\treturn;\n\n\t/*\n\t * initialize the process-wide conntable mutex\n\t * Recursive mutex\n\t */\n\tif ((__pbs_client_thread_init_rc =\n\t\t     pthread_mutex_init(&pbs_client_thread_conntable_mutex, &attr)) != 0)\n\t\treturn;\n\n\t/*\n\t * initialize the process-wide conf mutex\n\t * Recursive mutex\n\t */\n\tif ((__pbs_client_thread_init_rc =\n\t\t     pthread_mutex_init(&pbs_client_thread_conf_mutex, &attr)) != 0)\n\t\treturn;\n\n\tpthread_mutexattr_destroy(&attr);\n\treturn;\n}\n\n/**\n * @brief\n *\tPost initialization routine\n *\n * @par Functionality:\n *\t1. Initializes the dis tables, by calling the function dis_init_tables.\n *\tThis is the function called by pthread_once mechanism exactly once in\n *\tthe process lifetime from \"__pbs_client_thread_init_thread_context\".\n *\n * @see __pbs_client_thread_init_thread_context\\n __init_thread_data\n *\n * @note\n *\tCalled at the end of the __pbs_client_thread_init_thread_context.\n *      The reason is that the functionality depends on TLS data area set by\n *      __pbs_thead_init_context.\n *\n *\n * @return\tvoid\n *\n * @par Reentrancy:\n *\tMT unsafe - must be called via pthread_once()\n */\nstatic void\n__post_init_thread_data(void)\n{\n\tdis_init_tables();\n}\n\n/**\n * @brief\n *\tInitialize the thread context\n *\n * @par Functionality:\n *      1. Calls __init_thread_data via pthread_once to init mutexes/TLS key \\n\n *      2. If TLS data is not already created, then create it \\n\n *      3. 
Finally calls __post_init_thread_data function via pthread_once\n *         to ensure that the dis tables are initialized only once in the\n *         process lifetime. \\n\n *\tAll external API calls should call this function first before calling\n *\tany other pbs_client_thread_ functions.\n *\n * @see __init_thread_data\\n __post_init_thread_data\n *\n * @return\tint\n *\n * @retval\t0 Success\n * @retval\t>0 Failure (set to the pbs_errno)\n *\n * @par Side effects:\n *\tIf a failure occurs in this function, then it calls\n *\tset_single_threaded_model to switch to a single threaded model to be\n *\table to return the error code reliably to the caller.\n *\n * @par Reentrancy:\n *\tMT safe\n */\nstatic int\n__pbs_client_thread_init_thread_context(void)\n{\n\tstruct pbs_client_thread_context *ptr;\n\tint ret;\n\tint free_ptr = 0;\n\n\t/* initialize the TLS key for all threads */\n\tif (pthread_once(&pre_init_key_once, __init_thread_data) != 0) {\n\t\tret = PBSE_SYSTEM;\n\t\tgoto err;\n\t}\n\n\tif (__pbs_client_thread_init_rc != 0) {\n\t\tret = PBSE_SYSTEM;\n\t\tgoto err;\n\t}\n\n\tif (pthread_getspecific(key_tls) != NULL)\n\t\treturn 0; /* thread data already initialized */\n\n\tptr = calloc(1, sizeof(struct pbs_client_thread_context));\n\tif (!ptr) {\n\t\tret = PBSE_SYSTEM;\n\t\tgoto err;\n\t}\n\n\t/* set any default values for the TLS vars */\n\tptr->th_pbs_tcp_timeout = PBS_DIS_TCP_TIMEOUT_SHORT;\n\tptr->th_pbs_tcp_interrupt = 0;\n\tptr->th_pbs_tcp_errno = 0;\n\n\tptr->th_pbs_errno = 0;\n\n\t/* initialize any elements of the ptr */\n\tptr->th_dis_buffer = calloc(1, dis_buffsize); /* defined in tcp_dis.c */\n\tif (ptr->th_dis_buffer == NULL) {\n\t\tfree_ptr = 1;\n\t\tret = PBSE_SYSTEM;\n\t\tgoto err;\n\t}\n\n\t/*\n\t * synchronize this part, since the getuid, getpwuid functions are not\n\t * thread-safe\n\t */\n\tif (pbs_client_thread_lock_conf() != 0) {\n\t\tfree_ptr = 1;\n\t\tret = PBSE_SYSTEM;\n\t\tgoto err;\n\t}\n\n\tif (pthread_setspecific(key_tls, ptr) != 
0) {\n\t\tret = PBSE_SYSTEM;\n\t\tpbs_client_thread_unlock_conf();\n\t\tgoto err;\n\t}\n\tif (pbs_client_thread_unlock_conf() != 0) {\n\t\tret = PBSE_SYSTEM;\n\t\tgoto err;\n\t}\n\n\tif (pthread_once(&post_init_key_once, __post_init_thread_data) != 0) {\n\t\tret = PBSE_SYSTEM;\n\t\tgoto err;\n\t}\n\n\treturn 0;\n\nerr:\n\t/*\n\t * this is an unlikely case and should not happen unless the system is\n\t * low on memory even before the program started, or if there\n\t * are bugs in the pthread calls\n\t */\n\n\t/*\n\t * since thread init could not be set, set back single threaded mode,\n\t * so that at least the pbs_errno etc would work for the client side\n\t * to read the error code out of it.\n\t */\n\tpbs_client_thread_set_single_threaded_mode();\n\tif (free_ptr) {\n\t\tfree(ptr->th_dis_buffer);\n\t\tfree(ptr);\n\t}\n\tpbs_errno = ret; /* set the errno so that client can access it */\n\treturn ret;\n}\n\n/**\n * @brief\n *\tFree the attribute error list from TLS\n *\n * @par Functionality:\n *      Helper function to free the list of attributes in error that is stored\n *\tin the thread's TLS\n *\n * @param[in]\terrlist  - pointer to the array of attribute structures to free\n *\n * @return\tvoid\n *\n * @par Side-effects:\n *\tNone\n *\n * @par Reentrancy:\n *\tReentrant\n */\nvoid\nfree_errlist(struct ecl_attribute_errors *errlist)\n{\n\tint i;\n\tstruct attropl *attr;\n\tif (errlist) {\n\t\t/* iterate through the error list and free everything */\n\t\tfor (i = 0; i < errlist->ecl_numerrors; i++) {\n\t\t\tattr = errlist->ecl_attrerr[i].ecl_attribute;\n\t\t\tif (attr) {\n\t\t\t\t/* free the attropl structure pointer */\n\t\t\t\tif (attr->name != NULL)\n\t\t\t\t\tfree(attr->name);\n\t\t\t\tif (attr->resource != NULL)\n\t\t\t\t\tfree(attr->resource);\n\t\t\t\tif (attr->value != NULL)\n\t\t\t\t\tfree(attr->value);\n\t\t\t\tfree(attr);\n\t\t\t}\n\t\t\t/* free the errmsg pointer */\n\t\t\tif 
(errlist->ecl_attrerr[i].ecl_errmsg)\n\t\t\t\tfree(errlist->ecl_attrerr[i].ecl_errmsg);\n\t\t}\n\t\tif (errlist->ecl_attrerr)\n\t\t\tfree(errlist->ecl_attrerr);\n\t\tfree(errlist);\n\t}\n}\n\n/**\n * @brief\n *\tDestroy the thread context data\n *\n * @par Functionality:\n *      Called by pbs_client_thread_destroy_context to free the TLS data\n *\tallocated for this thread\n *\n * @see __init_thread_data\n *\n * @param[in]\tp - pointer to TLS data area\n *\n * @return\tvoid\n *\n * @par Side-effects:\n *\tNone\n *\n * @par Reentrancy:\n *\tReentrant\n */\nstatic void\n__pbs_client_thread_destroy_thread_data(void *p)\n{\n\tstruct pbs_client_thread_connect_context *th_conn, *temp;\n\tstruct pbs_client_thread_context *ptr =\n\t\t(struct pbs_client_thread_context *) p;\n\n\tif (ptr) {\n\t\tfree_errlist(ptr->th_errlist);\n\n\t\tif (ptr->th_cred_info)\n\t\t\tfree(ptr->th_cred_info);\n\n\t\tif (ptr->th_dis_buffer)\n\t\t\tfree(ptr->th_dis_buffer);\n\n\t\tfree_node_pool(ptr->th_node_pool);\n\n\t\tth_conn = ptr->th_conn_context;\n\t\twhile (th_conn) {\n\t\t\tif (th_conn->th_ch_errtxt)\n\t\t\t\tfree(th_conn->th_ch_errtxt);\n\n\t\t\ttemp = th_conn;\n\t\t\tth_conn = th_conn->th_ch_next;\n\t\t\tfree(temp);\n\t\t}\n\t\tfree(ptr);\n\t}\n}\n\n/**\n * @brief\n *\tAdd the connection related data to the TLS\n *\n * @par Functionality:\n *\tAllocates memory for struct connect_context, initializes the members.\n *      Finally adds the structure to a linked list in the thread_context\n *      headed by the member th_conn_context. 
See struct thread_context for\n *      more information.\n *\tCalled by __pbs_client_thread_init_connect_context/\n *\t__pbs_client_thread_lock_connection functions to add a connection\n *\tcontext to the TLS data\n *\n * @see __pbs_client_thread_init_connect_context\\n\n *\t__pbs_client_thread_lock_connection\\n\n *\n * @param[in]\tconnect - the connection identifier\n *\n * @retval\tAddress of the newly allocated node (success)\n * @retval\tNULL (failure)\n *\n * @par Side-effects:\n *\tNone\n *\n * @par Reentrancy:\n *\tReentrant\n */\nstruct pbs_client_thread_connect_context *\npbs_client_thread_add_connect_context(int connect)\n{\n\tstruct pbs_client_thread_context *p =\n\t\tpbs_client_thread_get_context_data();\n\tstruct pbs_client_thread_connect_context *new =\n\t\tmalloc(sizeof(struct pbs_client_thread_connect_context));\n\tif (new == NULL)\n\t\treturn NULL;\n\n\tnew->th_ch = connect;\n\tnew->th_ch_errno = 0;\n\tnew->th_ch_errtxt = NULL;\n\tif (p->th_conn_context)\n\t\tnew->th_ch_next = p->th_conn_context;\n\telse\n\t\tnew->th_ch_next = NULL;\n\n\tp->th_conn_context = new; /* chain at the head */\n\n\treturn new;\n}\n\n/**\n * @brief\n *\tRemove the data associated with the connection from TLS\n *\n * @par Functionality:\n *\tDeallocates memory for struct connect_context for the connection handle\n *\tspecified. The node is deallocated from the linked list headed by\n *\tmember th_conn_context in struct thread_context. 
For more information\n *\tsee definition of struct thread_context.\n *\tCalled by __pbs_client_thread_destroy_connect_context functions to\n *\tremove a connection context from the TLS (struct thread_context)\n *\n * @see\t__pbs_client_thread_destroy_connect_context\n *\n * @param[in]\tconnect - the connection identifier from pbs_connect call\n *\n * @retval\t0  -  (success)\n * @retval\t-1 -  (failure)\n *\n * @par Side-effects:\n *\tNone\n *\n * @par Reentrancy:\n *\tReentrant\n */\nint\npbs_client_thread_remove_connect_context(int connect)\n{\n\tstruct pbs_client_thread_context *p =\n\t\tpbs_client_thread_get_context_data();\n\tstruct pbs_client_thread_connect_context *prev = NULL;\n\tstruct pbs_client_thread_connect_context *ptr = p->th_conn_context;\n\twhile (ptr) {\n\t\tif (ptr->th_ch == connect) {\n\t\t\tif (prev)\n\t\t\t\tprev->th_ch_next = ptr->th_ch_next;\n\t\t\telse\n\t\t\t\tp->th_conn_context = ptr->th_ch_next;\n\n\t\t\tif (ptr->th_ch_errtxt)\n\t\t\t\tfree(ptr->th_ch_errtxt);\n\n\t\t\tfree(ptr);\n\t\t\treturn 0;\n\t\t}\n\t\tprev = ptr;\n\t\tptr = ptr->th_ch_next;\n\t}\n\treturn -1;\n}\n\n/**\n * @brief\n *\tFind the address of connection context data\n *\n * @par Functionality:\n *\tThe node is searched from the linked list headed by the\n *\tth_conn_context member of struct thread_context (TLS data)\n *\tCalled by functions __pbs_client_thread_lock_connection,\n *\t__pbs_client_thread_unlock_connection,\n *\tpbs_geterrmsg to locate the node associated to connection handle\n *\tspecified.\n *\n * @see\t__pbs_client_thread_lock_connection\\n\n *\t__pbs_client_thread_unlock_connection\n *\n * @param[in]\tconnect - the connection identifier from pbs_connect call\n *\n * @retval\tAddress of the node  -  (success)\n * @retval\tNULL, node not found -  (failure)\n *\n * @par Side-effects:\n *\tNone\n *\n * @par Reentrancy:\n *\tReentrant\n */\nstruct pbs_client_thread_connect_context *\npbs_client_thread_find_connect_context(int connect)\n{\n\tstruct 
pbs_client_thread_context *p =\n\t\tpbs_client_thread_get_context_data();\n\tstruct pbs_client_thread_connect_context *ptr = p->th_conn_context;\n\twhile (ptr) {\n\t\tif (ptr->th_ch == connect)\n\t\t\treturn ptr;\n\t\tptr = ptr->th_ch_next;\n\t}\n\treturn NULL;\n}\n\n/**\n * @brief\n *\tInitializes the connection related data\n *\n * @par Functionality:\n *\t1. Initialize the ch_mutex member of the struct connection to a\n *\t\trecursively lockable mutex\n *\t2. Calls pbs_client_thread_add_connect_context to add connect context\n *\t\tto the linked list of connections headed by the member\n *\t\tth_ch_conn_context of struct thread_context (TLS data).\n *\n * @see\tpbs_client_thread_add_connect_context\\n\n *\tthread_context\n *\n * @param[in]\tconnect - the connection identifier from pbs_connect call\n *\n * @retval\t0  -  (success)\n * @retval\tpbs_errno -  (failure)\n *\n * @par Side-effects:\n *\tSets pbs_errno\n *\n * @par Reentrancy:\n *\tReentrant\n */\nstatic int\n__pbs_client_thread_init_connect_context(int connect)\n{\n\t/* create an entry inside the thread context for this connect */\n\tif (pbs_client_thread_add_connect_context(connect) == NULL) {\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn pbs_errno;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tDestroys the connection related data\n *\n * @par Functionality:\n *\t1. Destroy the ch_mutex member of the struct connection\n *\t2. 
Calls pbs_client_thread_remove_connect_context to remove connection\n *\t\tcontext from the linked list of connections headed by the member\n *\t\tth_ch_conn_context of struct thread_context (TLS data).\n *\n * @see\tpbs_client_thread_add_connect_context\\n\n *\tpbs_client_thread_remove_connect_context\\n\n *\tthread_context\n *\n * @param[in]\tconnect - the connection identifier from pbs_connect call\n *\n * @retval\t0  -  (success)\n * @retval\tpbs_errno -  (failure)\n *\n * @par Side-effects:\n *\tSets pbs_errno\n *\n * @par Reentrancy:\n *\tReentrant\n */\nstatic int\n__pbs_client_thread_destroy_connect_context(int connect)\n{\n\t/* don't ever destroy a connect level mutex */\n\t/* remove entry from thread context for this connect */\n\tif (pbs_client_thread_remove_connect_context(connect) != 0) {\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn pbs_errno;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tFetches the thread context data pointer\n *\n * @par Functionality:\n *\tConvenience function to get the thread context data of type\n *\tstruct thread_context from the TLS using pthread_getspecific call.\n *\tIn case pthread_getspecific returns NULL, it means that the thread_init\n *\tfunction was not called before calling this method. This can happen\n *\twhen clients access pbs_errno before calling any IFL API. 
In such a\n *\tcase, pbs_client_thread_init_thread_context is called to initialize\n *\tthe TLS data, and then a call to pbs_client_thread_get_context_data gives\n *\tus the address of the thread context data for this thread.\n *\n * @see\tpbs_client_thread_init_thread_context\\n\n *\tpbs_client_thread_context\n *\n * @retval\tAddress of the thread context data\n *\n * @par Side-effects:\n *\tSets pbs_errno\n *\n * @par Reentrancy:\n *\tReentrant\n */\nstatic struct pbs_client_thread_context *\n__pbs_client_thread_get_context_data(void)\n{\n\tstruct pbs_client_thread_context *p = NULL;\n\tp = pthread_getspecific(key_tls);\n\tif (p == NULL) {\n\t\t/* this thread has not entered pthread init, so call it */\n\t\t/* if this fails, it sets local context */\n\t\t(void) pbs_client_thread_init_thread_context();\n\t\tp = pbs_client_thread_get_context_data();\n\t}\n\treturn p;\n}\n\n/**\n * @brief\n *\tLocks the connection level mutex\n *\n * @par Functionality:\n *\t1. Locks the ch_mutex member (recursive mutex) of the connection structure,\n *\t\tthus providing the semantics of locking a connection\n *\t2. 
If the connection context was not previously added to the TLS area for\n *\t\tthis thread (if this is the first call to lock_connection), then\n *\t\tadd it to the TLS by calling\n *\t\tpbs_client_thread_add_connect_context()\n *\n * @see\tpbs_client_thread_find_connect_context\\n\n *\tpbs_client_thread_add_connect_context\n *\n * @param[in]\tconnect - the connection identifier from pbs_connect call\n *\n * @retval\t0 (success)\n * @retval\tpbs_errno (failure)\n *\n * @par Side-effects:\n *\tSets pbs_errno\n *\n * @par Reentrancy:\n *\tReentrant\n */\nstatic int\n__pbs_client_thread_lock_connection(int connect)\n{\n\tstruct pbs_client_thread_connect_context *con;\n\tpthread_mutex_t *mutex = NULL;\n\n\tif ((mutex = get_conn_mutex(connect)) == NULL) {\n\t\treturn (pbs_errno = PBSE_SYSTEM);\n\t}\n\n\tif (pthread_mutex_lock(mutex) != 0) {\n\t\treturn (pbs_errno = PBSE_SYSTEM);\n\t}\n\n\tcon = pbs_client_thread_find_connect_context(connect);\n\tif (con == NULL) {\n\t\t/*\n\t\t * add the connect context to this thread, since this thread is\n\t\t * sharing a connection handle amongst threads\n\t\t */\n\t\tif ((con = pbs_client_thread_add_connect_context(connect)) == NULL) {\n\t\t\t(void) pthread_mutex_unlock(mutex);\n\t\t\treturn (pbs_errno = PBSE_SYSTEM);\n\t\t}\n\t}\n\n\t/* copy stuff from con to connection handle slot */\n\tset_conn_errno(connect, con->th_ch_errno);\n\tif (set_conn_errtxt(connect, con->th_ch_errtxt) != 0) {\n\t\t(void) pthread_mutex_unlock(mutex);\n\t\treturn (pbs_errno = PBSE_SYSTEM);\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tUnlocks the connection level mutex\n *\n * @par Functionality:\n *\t1. Removes the connection context from the TLS for this thread by\n *\t\tcalling pbs_client_thread_remove_connect_context()\n *\t2. 
Unlocks the ch_mutex member (recursive mutex) of connection structure\n *\t\tthus providing the semantics of unlocking a connection\n *\n * @param[in]\tconnect - the connection identifier from pbs_connect call\n *\n * @see\tpbs_client_thread_find_connect_context\n *\n * @retval\t0 (success)\n * @retval\tpbs_errno (failure)\n *\n * @par Side-effects:\n *\tSets pbs_errno\n *\n * @par Reentrancy:\n *\tReentrant\n */\nstatic int\n__pbs_client_thread_unlock_connection(int connect)\n{\n\tpthread_mutex_t *mutex = NULL;\n\tstruct pbs_client_thread_connect_context *con = NULL;\n\tchar *errtxt = NULL;\n\n\tif ((mutex = get_conn_mutex(connect)) == NULL) {\n\t\treturn (pbs_errno = PBSE_SYSTEM);\n\t}\n\n\tcon = pbs_client_thread_find_connect_context(connect);\n\tif (con == NULL) {\n\t\treturn (pbs_errno = PBSE_SYSTEM);\n\t}\n\n\t/* copy stuff from con to connection handle slot */\n\tcon->th_ch_errno = get_conn_errno(connect);\n\terrtxt = get_conn_errtxt(connect);\n\tif (errtxt) {\n\t\tif (con->th_ch_errtxt)\n\t\t\tfree(con->th_ch_errtxt);\n\t\tcon->th_ch_errtxt = strdup(errtxt);\n\t\tif (con->th_ch_errtxt == NULL)\n\t\t\treturn (pbs_errno = PBSE_SYSTEM);\n\t}\n\n\tif (pthread_mutex_unlock(mutex) != 0) {\n\t\treturn (pbs_errno = PBSE_SYSTEM);\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tLocks the connection table level mutex\n *\n * @retval\t0 (success)\n * @retval\tpbs_errno (failure)\n *\n * @par Side-effects:\n *\tSets pbs_errno\n *\n * @par Reentrancy:\n *\tReentrant\n */\nstatic int\n__pbs_client_thread_lock_conntable(void)\n{\n\tif (pthread_mutex_lock(&pbs_client_thread_conntable_mutex) != 0) {\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn pbs_errno;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tUnlocks the connection table level mutex\n *\n * @retval\t0 (success)\n * @retval\tpbs_errno (failure)\n *\n * @par Side-effects:\n *\tSets pbs_errno\n *\n * @par Reentrancy:\n *\tReentrant\n */\nstatic int\n__pbs_client_thread_unlock_conntable(void)\n{\n\tif 
(pthread_mutex_unlock(&pbs_client_thread_conntable_mutex) != 0) {\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn pbs_errno;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tLocks the configuration level mutex (conf_mutex)\n *\n * @retval\t0 (success)\n * @retval\tpbs_errno (failure)\n *\n * @par Side-effects:\n *\tSets pbs_errno\n *\n * @par Reentrancy:\n *\tReentrant\n */\nstatic int\n__pbs_client_thread_lock_conf(void)\n{\n\tif (pthread_mutex_lock(&pbs_client_thread_conf_mutex) != 0) {\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn pbs_errno;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tUnlocks the configuration level mutex (conf_mutex)\n *\n * @retval\t0 (success)\n * @retval\tpbs_errno (failure)\n *\n * @par Side-effects:\n *\tSets pbs_errno\n *\n * @par Reentrancy:\n *\tReentrant\n */\nstatic int\n__pbs_client_thread_unlock_conf(void)\n{\n\tif (pthread_mutex_unlock(&pbs_client_thread_conf_mutex) != 0) {\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn pbs_errno;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tReturns the address of dis_buffer used in dis communication.\n *\n * @par Functionality:\n *\tThis function returns the address of the per thread dis_buffer location\n *\tfrom the TLS by calling @see __pbs_client_thread_get_context_data\n *\n * @retval\tAddress of the dis_buffer from TLS (success)\n *\n * @par Side-effects:\n *\tNone\n *\n * @par Reentrancy:\n *\tReentrant\n */\nchar *\n__dis_buffer_location(void)\n{\n\t/*\n\t * returns real thread context or data from a global structure\n\t * called local_thread_context\n\t */\n\tstruct pbs_client_thread_context *p =\n\t\tpbs_client_thread_get_context_data();\n\treturn (p->th_dis_buffer);\n}\n\n/**\n * @brief\n *\tReturns the address of pbs_errno.\n *\n * @par Functionality:\n *\tThis function returns the address of the per thread location of\n *\tpbs_errno variable from the TLS by calling\n *\t@see __pbs_client_thread_get_context_data\n *\n * @retval\tAddress of the pbs_errno from TLS (success)\n *\n * @par Side-effects:\n 
*\tNone\n *\n * @par Reentrancy:\n *\tReentrant\n */\nint *\n__pbs_errno_location(void)\n{\n\t/*\n\t * returns real thread context or data from a global structure called\n\t * local_thread_context\n\t */\n\tstruct pbs_client_thread_context *p =\n\t\tpbs_client_thread_get_context_data();\n\treturn (&p->th_pbs_errno);\n}\n\n/**\n * @brief\n *\tReturns the address of pbs_server.\n *\n * @par Functionality:\n *\tThis function returns the address of the per thread location of\n *\tpbs_server from the TLS by calling\n *\t@see __pbs_client_thread_get_context_data\n *\n * @retval\tAddress of the pbs_server from TLS (success)\n *\n * @par Side-effects:\n *\tNone\n *\n * @par Reentrancy:\n *\tReentrant\n */\nchar *\n__pbs_server_location(void)\n{\n\t/*\n\t * returns real thread context or data from a global structure\n\t * called local_thread_context\n\t */\n\tstruct pbs_client_thread_context *p =\n\t\tpbs_client_thread_get_context_data();\n\treturn (p->th_pbs_server);\n}\n\n/**\n * @brief\n *\tReturns the address of pbs_current_user.\n *\n * @par Functionality:\n *\tThis function returns address of the per thread location of\n *\tpbs_current_user from the TLS by calling\n *\t@see __pbs_client_thread_get_context_data\n *\n * @retval\tAddress of the pbs_current_user from TLS (success)\n *\n * @par Side-effects:\n *\tNone\n *\n * @par Reentrancy:\n *\tReentrant\n */\nchar *\n__pbs_current_user_location(void)\n{\n\treturn (pbs_conf.current_user);\n}\n\n/**\n * @brief\n *\tReturns the address of pbs_tcp_timeout.\n *\n * @par Functionality:\n *\tThis function returns address of the per thread location of\n *\tpbs_tcp_timeout from the TLS by calling\n *\t@see __pbs_client_thread_get_context_data\n *\n * @retval\tAddress of the pbs_tcp_timeout from TLS (success)\n *\n * @par Side-effects:\n *\tNone\n *\n * @par Reentrancy:\n *\tReentrant\n */\ntime_t *\n__pbs_tcptimeout_location(void)\n{\n\t/*\n\t * returns real thread context or data from a global structure\n\t * called 
local_thread_context\n\t */\n\tstruct pbs_client_thread_context *p =\n\t\tpbs_client_thread_get_context_data();\n\treturn (&p->th_pbs_tcp_timeout);\n}\n\n/**\n * @brief\n *\tReturns the address of pbs_tcp_interrupt.\n *\n * @par Functionality:\n *\tThis function returns address of the per thread location of\n *\tpbs_tcp_interrupt from the TLS by calling\n *\t@see __pbs_client_thread_get_context_data\n *\n * @retval\tAddress of the pbs_tcp_interrupt from TLS (success)\n *\n * @par Side-effects:\n *\tNone\n *\n * @par Reentrancy:\n *\tReentrant\n */\nint *\n__pbs_tcpinterrupt_location(void)\n{\n\t/*\n\t * returns real thread context or data from a global structure\n\t * called local_thread_context\n\t */\n\tstruct pbs_client_thread_context *p =\n\t\tpbs_client_thread_get_context_data();\n\treturn (&p->th_pbs_tcp_interrupt);\n}\n\n/**\n * @brief\n *\tReturns the location of pbs_tcp_errno.\n *\n * @par Functionality:\n *\tThis function returns address of the per thread location of\n *\tpbs_tcp_errno from the TLS\n *\tby calling @see __pbs_client_thread_get_context_data\n *\n * @retval\tAddress of the pbs_tcp_errno from TLS (success)\n *\n * @par Side-effects:\n *\tNone\n *\n * @par Reentrancy:\n *\tReentrant\n */\nint *\n__pbs_tcperrno_location(void)\n{\n\t/*\n\t * returns real thread context or data from a global structure\n\t * called local_thread_context\n\t */\n\tstruct pbs_client_thread_context *p =\n\t\tpbs_client_thread_get_context_data();\n\treturn (&p->th_pbs_tcp_errno);\n}\n"
  },
  {
    "path": "src/lib/Libifl/DIS_decode.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tDIS_decode.c\n * \n * @brief\n * DIS decode routines\n */\n\n#include \"batch_request.h\"\n#include \"dis.h\"\n\n/**\n * @brief\n *      Decode PBS batch request to authenticate based on external (non-resv-port) mechanisms.\n *      The batch request contains type and the auth data.\n *\n * @param [in] sock socket connection\n * @param [out] preq PBS batch request\n * @return int\n * @retval 0 on success\n * @retval > 0 on failure\n */\nint\ndecode_DIS_Authenticate(int sock, struct batch_request *preq)\n{\n\tint rc;\n\tint len = 0;\n\n\tmemset(preq->rq_ind.rq_auth.rq_auth_method, '\\0', sizeof(preq->rq_ind.rq_auth.rq_auth_method));\n\tlen = disrsi(sock, &rc);\n\tif (rc != DIS_SUCCESS)\n\t\treturn (rc);\n\tif (len <= 0) {\n\t\treturn DIS_PROTO;\n\t}\n\trc = disrfst(sock, len, preq->rq_ind.rq_auth.rq_auth_method);\n\tif (rc != DIS_SUCCESS)\n\t\treturn (rc);\n\n\tmemset(preq->rq_ind.rq_auth.rq_encrypt_method, '\\0', sizeof(preq->rq_ind.rq_auth.rq_encrypt_method));\n\tlen = disrsi(sock, &rc);\n\tif (rc != DIS_SUCCESS)\n\t\treturn (rc);\n\tif (len > 0) {\n\t\trc = disrfst(sock, len, preq->rq_ind.rq_auth.rq_encrypt_method);\n\t\tif (rc != DIS_SUCCESS)\n\t\t\treturn (rc);\n\t}\n\n\tpreq->rq_ind.rq_auth.rq_port = disrui(sock, &rc);\n\tif (rc != DIS_SUCCESS)\n\t\treturn (rc);\n\n\treturn (rc);\n}\n\n/**\n *\n * @brief\n *\tDecode the data 
items needed for a Copy Hook Filecopy request as:\n * \t\t\tu int\tblock sequence number\n *\t\t\tu int\tsize of data in block\n *\t\t\tstring\thook file name\n *\t\t\tcnt str\tfile data contents\n *\n * @param[in]\tsock\t- the connection to get data from.\n * @param[in]\tpreq\t- a request structure\n *\n * @return\tint\n * @retval\t0 for success\n *\t\tnon-zero otherwise\n */\n\nint\ndecode_DIS_CopyHookFile(int sock, struct batch_request *preq)\n{\n\tint rc = 0;\n\tsize_t amt;\n\n\tif (preq == NULL)\n\t\treturn 0;\n\n\tpreq->rq_ind.rq_hookfile.rq_data = 0;\n\n\tpreq->rq_ind.rq_hookfile.rq_sequence = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\tpreq->rq_ind.rq_hookfile.rq_size = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\tif ((rc = disrfst(sock, MAXPATHLEN + 1,\n\t\t\t  preq->rq_ind.rq_hookfile.rq_filename)) != 0)\n\t\treturn rc;\n\n\tpreq->rq_ind.rq_hookfile.rq_data = disrcs(sock, &amt, &rc);\n\tif ((amt != preq->rq_ind.rq_hookfile.rq_size) && (rc == 0))\n\t\trc = DIS_EOD;\n\tif (rc) {\n\t\tif (preq->rq_ind.rq_hookfile.rq_data)\n\t\t\t(void) free(preq->rq_ind.rq_hookfile.rq_data);\n\t\tpreq->rq_ind.rq_hookfile.rq_data = 0;\n\t}\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\tdecode a Job Credential batch request\n *\n * @param[in] sock - socket descriptor\n * @param[out] preq - pointer to batch_request structure\n *\n * NOTE:The batch_request structure must already exist (be allocated by the\n *      caller.   
It is assumed that the header fields (protocol type,\n *      protocol version, request type, and user name) have already been decoded.\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\ndecode_DIS_Cred(int sock, struct batch_request *preq)\n{\n\tint rc;\n\n\tpreq->rq_ind.rq_cred.rq_cred_data = NULL;\n\n\trc = disrfst(sock, PBS_MAXSVRJOBID + 1, preq->rq_ind.rq_cred.rq_jobid);\n\tif (rc)\n\t\treturn rc;\n\n\trc = disrfst(sock, PBS_MAXUSER + 1, preq->rq_ind.rq_cred.rq_credid);\n\tif (rc)\n\t\treturn rc;\n\n\tpreq->rq_ind.rq_cred.rq_cred_type = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\tpreq->rq_ind.rq_cred.rq_cred_data = disrcs(sock, (size_t *) &preq->rq_ind.rq_cred.rq_cred_size, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\tpreq->rq_ind.rq_cred.rq_cred_validity = disrul(sock, &rc);\n\treturn rc;\n}\n\n/**\n * @brief\n *\tDecode the data item needed for a Delete Hook File request.\n *\n *\tData item is:\tstring\thook filename\n *\n * @param[in]\t\tsock - communication channel\n * @param[in,out]\tpreq - request structure to fill in\n *\n * @return \tint\n * @retval \t0 for success\n * @retval \tnon-zero otherwise\n */\n\nint\ndecode_DIS_DelHookFile(int sock, struct batch_request *preq)\n{\n\tint rc;\n\n\tif ((rc = disrfst(sock, MAXPATHLEN + 1,\n\t\t\t  preq->rq_ind.rq_hookfile.rq_filename)) != 0)\n\t\treturn rc;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t-decode a Delete Job Batch Request\n *\n * @par\tFunctionality:\n *\tThis function is used to decode the request for deletion of a list of job ids.\n *\n *\n *      The batch_request structure must already exist (be allocated by the\n *      caller).   
It is assumed that the header fields (protocol type,\n *      protocol version, request type, and user name) have already been decoded.\n *\n * @par\tData items are:\\n\n *\t\tunsigned int    count of job ids\\n\n *\tfollowed by that number of:\\n\n *\t\tstring          job id\n *\n * @param[in] sock - socket descriptor\n * @param[out] preq - pointer to batch_request structure\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\ndecode_DIS_DelJobList(int sock, struct batch_request *preq)\n{\n\tint rc;\n\tint count = 0;\n\tchar **tmp_jobslist = NULL;\n\tint i = 0;\n\n\tpreq->rq_ind.rq_deletejoblist.rq_count = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\tcount = preq->rq_ind.rq_deletejoblist.rq_count;\n\n\ttmp_jobslist = malloc((count + 1) * sizeof(char *));\n\tif (tmp_jobslist == NULL)\n\t\treturn DIS_NOMALLOC;\n\n\tfor (i = 0; i < count; i++) {\n\t\ttmp_jobslist[i] = disrst(sock, &rc);\n\t\tif (rc) {\n\t\t\t/* free the job id strings read so far before bailing out */\n\t\t\twhile (--i >= 0)\n\t\t\t\tfree(tmp_jobslist[i]);\n\t\t\tfree(tmp_jobslist);\n\t\t\treturn rc;\n\t\t}\n\t}\n\ttmp_jobslist[i] = NULL;\n\n\tpreq->rq_ind.rq_deletejoblist.rq_jobslist = tmp_jobslist;\n\tpreq->rq_ind.rq_deletejoblist.rq_resume = FALSE;\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\tdecode a Job Credential batch request\n *\n * @param[in] sock - socket descriptor\n * @param[out] preq - pointer to batch_request structure\n *\n * NOTE: The batch_request structure must already exist (be allocated by the\n *      caller).   
It is assumed that the header fields (protocol type,\n *      protocol version, request type, and user name) have already be decoded.\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\ndecode_DIS_JobCred(int sock, struct batch_request *preq)\n{\n\tint rc;\n\n\tpreq->rq_ind.rq_jobcred.rq_data = 0;\n\tpreq->rq_ind.rq_jobcred.rq_type = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\tpreq->rq_ind.rq_jobcred.rq_data = disrcs(sock,\n\t\t\t\t\t\t (size_t *) &preq->rq_ind.rq_jobcred.rq_size,\n\t\t\t\t\t\t &rc);\n\treturn rc;\n}\n\n/**\n * @brief -\n *\tdecode a Job Related Job File Move request\n *\n * @param[in] sock - socket descriptor\n * @param[out] preq - pointer to batch_request structure\n *\n *\n * @par Data items are:\n *\t\t\tu int -\tblock sequence number\\n\n *\t\t \tu int -  file type (stdout, stderr, ...)\\n\n *\t\t \tu int -  size of data in block\\n\n *\t\t \tstring - job id\\n\n *\t\t \tcnt str - data\\n\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\ndecode_DIS_JobFile(int sock, struct batch_request *preq)\n{\n\tint rc;\n\tsize_t amt;\n\n\tpreq->rq_ind.rq_jobfile.rq_data = 0;\n\n\tpreq->rq_ind.rq_jobfile.rq_sequence = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\tpreq->rq_ind.rq_jobfile.rq_type = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\tpreq->rq_ind.rq_jobfile.rq_size = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\tif ((rc = disrfst(sock, PBS_MAXSVRJOBID + 1, preq->rq_ind.rq_jobfile.rq_jobid)) != 0)\n\t\treturn rc;\n\n\tpreq->rq_ind.rq_jobfile.rq_data = disrcs(sock, &amt, &rc);\n\tif ((amt != preq->rq_ind.rq_jobfile.rq_size) && (rc == 0))\n\t\trc = DIS_EOD;\n\tif (rc) {\n\t\tif (preq->rq_ind.rq_jobfile.rq_data)\n\t\t\t(void) free(preq->rq_ind.rq_jobfile.rq_data);\n\t\tpreq->rq_ind.rq_jobfile.rq_data = 0;\n\t}\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\tdecode_DIS_JobId() - decode a Job ID 
string into a caller-supplied buffer\n *\n * @par Functionality:\n *\t\tThis is used for the following batch requests:\\n\n *              \tReady_to_Commit\\n\n *              \tCommit\\n\n *              \tLocate Job\\n\n *              \tRerun Job\n *\n * @param[in] sock - socket descriptor\n * @param[out] jobid - buffer for the job id, at least PBS_MAXSVRJOBID + 1 bytes\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\ndecode_DIS_JobId(int sock, char *jobid)\n{\n\treturn (disrfst(sock, PBS_MAXSVRJOBID + 1, jobid));\n}\n\n/**\n * @brief\n *\t-decode a Manager Batch Request\n *\n * @par\tFunctionality:\n *\tThis request is used for most operations where an object is being\n *      created, deleted, or altered.\n *\n *      The batch_request structure must already exist (be allocated by the\n *      caller).   It is assumed that the header fields (protocol type,\n *      protocol version, request type, and user name) have already been decoded.\n *\n * @par\tData items are:\\n\n *\t\tunsigned int    command\\n\n *              unsigned int    object type\\n\n *              string          object name\\n\n *              attropl         attributes\n *\n * @param[in] sock - socket descriptor\n * @param[out] preq - pointer to batch_request structure\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\ndecode_DIS_Manage(int sock, struct batch_request *preq)\n{\n\tint rc;\n\n\tCLEAR_HEAD(preq->rq_ind.rq_manager.rq_attr);\n\tpreq->rq_ind.rq_manager.rq_cmd = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\tpreq->rq_ind.rq_manager.rq_objtype = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\trc = disrfst(sock, PBS_MAXSVRJOBID + 1, preq->rq_ind.rq_manager.rq_objname);\n\tif (rc)\n\t\treturn rc;\n\treturn (decode_DIS_svrattrl(sock, &preq->rq_ind.rq_manager.rq_attr));\n}\n\n/**\n * @brief Read the modify request for a reservation.\n *\n * @param[in] sock - connection 
identifier\n * @param[out] preq - batch_request that the information will be read into.\n *\n * @return 0 - on success\n * @return DIS error\n */\nint\ndecode_DIS_ModifyResv(int sock, struct batch_request *preq)\n{\n\tint rc = 0;\n\n\tCLEAR_HEAD(preq->rq_ind.rq_modify.rq_attr);\n\tpreq->rq_ind.rq_modify.rq_objtype = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\trc = disrfst(sock, PBS_MAXSVRJOBID + 1, preq->rq_ind.rq_modify.rq_objname);\n\tif (rc)\n\t\treturn rc;\n\treturn (decode_DIS_svrattrl(sock, &preq->rq_ind.rq_modify.rq_attr));\n}\n\n/**\n * @brief -\n *\tdecode a Move Job batch request\n *\talso used for an Order Job batch request\n *\n * @par\tFunctionality:\n *\t\tThe batch_request structure must already exist (be allocated by the\n *\t\tcaller.   It is assumed that the header fields (protocol type,\n *\t\tprotocol version, request type, and user name) have already be decoded.\n *\n * @par\t Data items are:\n *\t\tstring          job id\\n\n *\t\tstring          destination\n *\n * @param[in] sock - socket descriptor\n * @param[out] preq - pointer to batch_request structure\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\ndecode_DIS_MoveJob(int sock, struct batch_request *preq)\n{\n\tint rc;\n\n\trc = disrfst(sock, PBS_MAXSVRJOBID + 1, preq->rq_ind.rq_move.rq_jid);\n\tif (rc)\n\t\treturn rc;\n\n\trc = disrfst(sock, PBS_MAXDEST + 1, preq->rq_ind.rq_move.rq_destin);\n\n\treturn rc;\n}\n\n/**\n * @brief-\n *\tdecode a Message Job batch request\n *\n * @par\tFunctionality:\n *\t\tThe batch_request structure must already exist (be allocated by the\n *      \tcaller.   
It is assumed that the header fields (protocol type,\n *      \tprotocol version, request type, and user name) have already been decoded.\n *\n * @par\t Data items are:\n *\t\tstring          job id\n *\t\tunsigned int    which file\n *\t\tstring          the message\n *\n * @param[in] sock - socket descriptor\n * @param[out] preq - pointer to batch_request structure\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\ndecode_DIS_MessageJob(int sock, struct batch_request *preq)\n{\n\tint rc;\n\n\tpreq->rq_ind.rq_message.rq_text = 0;\n\n\trc = disrfst(sock, PBS_MAXSVRJOBID + 1, preq->rq_ind.rq_message.rq_jid);\n\tif (rc)\n\t\treturn rc;\n\n\tpreq->rq_ind.rq_message.rq_file = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\tpreq->rq_ind.rq_message.rq_text = disrst(sock, &rc);\n\treturn rc;\n}\n\n/**\n * @brief Read the preempt multiple jobs request.\n *\n * @param[in] sock - connection identifier\n * @param[out] preq - batch_request that the information will be read into.\n *\n * @return 0 - on success\n * @return DIS error\n */\nint\ndecode_DIS_PreemptJobs(int sock, struct batch_request *preq)\n{\n\tint rc = 0;\n\tint i = 0;\n\tint count = 0;\n\tpreempt_job_info *ppj = NULL;\n\n\tpreq->rq_ind.rq_preempt.count = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\tcount = preq->rq_ind.rq_preempt.count;\n\n\tppj = calloc(count, sizeof(struct preempt_job_info));\n\tif (ppj == NULL)\n\t\treturn DIS_NOMALLOC;\n\n\tfor (i = 0; i < count; i++) {\n\t\tif ((rc = disrfst(sock, PBS_MAXSVRJOBID + 1, ppj[i].job_id))) {\n\t\t\tfree(ppj);\n\t\t\treturn rc;\n\t\t}\n\t}\n\n\tpreq->rq_ind.rq_preempt.ppj_list = ppj;\n\n\treturn rc;\n}\n\n/**\n * @brief -\n *\tdecode a Queue Job Batch Request\n *\n * @par\tFunctionality:\n *\t\tstring  job id\\n\n *\t\tstring  destination\\n\n *\t\tlist of attributes (attropl)\n *\n * @param[in] sock - socket descriptor\n * @param[out] preq - pointer to batch_request structure\n *\n * 
@return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\ndecode_DIS_QueueJob(int sock, struct batch_request *preq)\n{\n\tint rc;\n\n\tCLEAR_HEAD(preq->rq_ind.rq_queuejob.rq_attr);\n\trc = disrfst(sock, PBS_MAXSVRJOBID + 1, preq->rq_ind.rq_queuejob.rq_jid);\n\tif (rc)\n\t\treturn rc;\n\n\trc = disrfst(sock, PBS_MAXSVRJOBID + 1, preq->rq_ind.rq_queuejob.rq_destin);\n\tif (rc)\n\t\treturn rc;\n\n\treturn (decode_DIS_svrattrl(sock, &preq->rq_ind.rq_queuejob.rq_attr));\n}\n\n/**\n * @brief -\n *\tdecode a Register Dependency Batch Request\n *\n * @par\tFunctionality:\n *\t\tThe batch_request structure must already exist (be allocated by the\n *      \tcaller.   It is assumed that the header fields (protocol type,\n *      \tprotocol version, request type, and user name) have already be decoded\n *\n * @par\tData items are:\n *\t\tstring          job owner\\n\n *\t\tstring          parent job id\\n\n *\t\tstring          child job id\\n\n *\t\tunsigned int    dependency type\\n\n *\t\tunsigned int    operation\\n\n *\t\tsigned long     cost\\n\n *\n * @param[in] sock - socket descriptor\n * @param[out] preq - pointer to batch_request structure\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\ndecode_DIS_Register(int sock, struct batch_request *preq)\n{\n\tint rc;\n\n\trc = disrfst(sock, PBS_MAXUSER, preq->rq_ind.rq_register.rq_owner);\n\tif (rc)\n\t\treturn rc;\n\trc = disrfst(sock, PBS_MAXSVRJOBID, preq->rq_ind.rq_register.rq_parent);\n\tif (rc)\n\t\treturn rc;\n\trc = disrfst(sock, PBS_MAXCLTJOBID, preq->rq_ind.rq_register.rq_child);\n\tif (rc)\n\t\treturn rc;\n\tpreq->rq_ind.rq_register.rq_dependtype = disrui(sock, &rc);\n\n\tif (rc)\n\t\treturn rc;\n\n\tpreq->rq_ind.rq_register.rq_op = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\tpreq->rq_ind.rq_register.rq_cost = disrsl(sock, &rc);\n\n\treturn rc;\n}\n\n/**\n * @brief -\n *\tdecode a 
batch_request Extend string\n *\n * @par\tFunctionality:\n *\t\tThe batch_request structure must already exist (be allocated by the\n *      \tcaller.   It is assumed that the header fields (protocol type,\n *\t\tprotocol version, request type, and user name) and the request body\n *\t\thave already be decoded.\n *\n * Note:The next field is an unsigned integer which is 1 if there is an\n *      extension string and zero if not.\n *\n * @param[in] sock - socket descriptor\n * @param[out] preq - pointer to batch_request structure\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\ndecode_DIS_ReqExtend(int sock, struct batch_request *preq)\n{\n\tint i;\n\tint rc;\n\n\ti = disrui(sock, &rc); /* indicates if an extension exists */\n\n\tif (rc == 0) {\n\t\tif (i != 0) {\n\t\t\tpreq->rq_extend = disrst(sock, &rc);\n\t\t}\n\t}\n\treturn (rc);\n}\n\n/**\n * @brief-\n *\tDecode the Request Header Fields\n *      common to all requests\n *\n * @param[in] sock - socket descriptor\n * @param[out] preq - pointer to batch_request structure\n *\n * @return\tint\n * @retval\t-1    on EOF (end of file on first read only)\n * @retval\t0    on success\n * @retval\t>0    a DIS error return, see dis.h\n *\n */\n\nint\ndecode_DIS_ReqHdr(int sock, struct batch_request *preq, int *proto_type, int *proto_ver)\n{\n\tint rc;\n\n\t*proto_type = disrui(sock, &rc);\n\tif (rc) {\n\t\treturn rc;\n\t}\n\tif (*proto_type != PBS_BATCH_PROT_TYPE)\n\t\treturn DIS_PROTO;\n\t*proto_ver = disrui(sock, &rc);\n\tif (rc) {\n\t\treturn rc;\n\t}\n\n\tpreq->rq_type = disrui(sock, &rc);\n\tif (rc) {\n\t\treturn rc;\n\t}\n\n\treturn (disrfst(sock, PBS_MAXUSER + 1, preq->rq_user));\n}\n\n/**\n * @brief-\n *\tdecode a resource request\n *\n * @par\tFunctionality:\n *\t\tUsed for resource query, resource reserver, resource free.\n *\n *\t\tThe batch_request structure must already exist (be allocated by the\n *\t\tcaller.   
It is assumed that the header fields (protocol type,\n *\t\tprotocol version, request type, and user name) have been decoded.\n *\n * @par\tData items are:\\n\n *\t\tsigned int\tresource handle\\n\n *\t\tunsigned int\tcount of resource queries\\n\n *\tfollowed by that number of:\\n\n *\t\tstring\t\tresource list\n *\n * @param[in] sock - socket descriptor\n * @param[out] preq - pointer to batch_request structure\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\ndecode_DIS_Rescl(int sock, struct batch_request *preq)\n{\n\tint ct;\n\tint i;\n\tchar **ppc;\n\tint rc;\n\n\t/* first, the resource handle (even if not used in request) */\n\n\tpreq->rq_ind.rq_rescq.rq_rhandle = disrsi(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\t/* next need to know how many query strings */\n\n\tct = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\tpreq->rq_ind.rq_rescq.rq_num = ct;\n\tif (ct) {\n\t\tif ((ppc = (char **) malloc(ct * sizeof(char *))) == 0)\n\t\t\treturn PBSE_RMSYSTEM;\n\n\t\tfor (i = 0; i < ct; i++)\n\t\t\t*(ppc + i) = NULL;\n\n\t\tpreq->rq_ind.rq_rescq.rq_list = ppc;\n\t\tfor (i = 0; i < ct; i++) {\n\t\t\t*(ppc + i) = disrst(sock, &rc);\n\t\t\tif (rc)\n\t\t\t\tbreak;\n\t\t}\n\t}\n\n\treturn rc;\n}\n\n/**\n * @brief-\n *\tdecode a Run Job batch request\n *\n * @par\tFunctionality:\n *\t\tThe batch_request structure must already exist (be allocated by the\n *      \tcaller.   
It is assumed that the header fields (protocol type,\n *      \tprotocol version, request type, and user name) have already be decoded.\n *\n * @par\tData items are:\\n\n *\t\tstring          job id\\n\n *\t\tstring          destination\\n\n *\t\tunsigned int    resource_handle\\n\n *\n * @param[in] sock - socket descriptor\n * @param[out] preq - pointer to batch_request structure\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\ndecode_DIS_Run(int sock, struct batch_request *preq)\n{\n\tint rc;\n\n\t/* job id */\n\trc = disrfst(sock, PBS_MAXSVRJOBID + 1, preq->rq_ind.rq_run.rq_jid);\n\tif (rc)\n\t\treturn rc;\n\n\t/* variable length list of vnodes (destination) */\n\tpreq->rq_ind.rq_run.rq_destin = disrst(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\t/* an optional flag, used by reservations */\n\tpreq->rq_ind.rq_run.rq_resch = disrul(sock, &rc);\n\treturn rc;\n}\n\n/**\n * @brief-\n *\tdecode a Server Shut Down batch request\n *\n * @par\tFunctionality:\n *\t\tThe batch_request structure must already exist (be allocated by the\n *      \tcaller.   It is assumed that the header fields (protocol type,\n *\t\tprotocol version, request type, and user name) have already be decoded.\n *\n * @par\t Data items are:\\n\n *\t\tu int           manner\n *\n * @param[in] sock - socket descriptor\n * @param[out] preq - pointer to batch_request structure\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\ndecode_DIS_ShutDown(int sock, struct batch_request *preq)\n{\n\tint rc;\n\n\tpreq->rq_ind.rq_shutdown = disrui(sock, &rc);\n\n\treturn rc;\n}\n\n/**\n * @brief-\n *\tdecode a Signal Job batch request\n *\n * @par\tFunctionality:\n *\t\tThe batch_request structure must already exist (be allocated by the\n *      \tcaller.   
It is assumed that the header fields (protocol type,\n *      \tprotocol version, request type, and user name) have already been decoded.\n *\n * @par\tData items are:\\n\n *\t\tstring          job id\\n\n *\t\tstring          signal (name)\n *\n * @param[in] sock - socket descriptor\n * @param[out] preq - pointer to batch_request structure\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\ndecode_DIS_SignalJob(int sock, struct batch_request *preq)\n{\n\tint rc;\n\n\trc = disrfst(sock, PBS_MAXSVRJOBID + 1, preq->rq_ind.rq_signal.rq_jid);\n\tif (rc)\n\t\treturn rc;\n\n\trc = disrfst(sock, PBS_SIGNAMESZ + 1, preq->rq_ind.rq_signal.rq_signame);\n\treturn rc;\n}\n\n/**\n * @brief\n *\tDecode a Status batch request\n *\n * @par\n *\tThe batch_request structure must already exist (be allocated by the\n *\tcaller).   It is assumed that the header fields (protocol type,\n *\tprotocol version, request type, and user name) have already been decoded.\n *\n * @param[in]     sock - socket handle from which to read.\n * @param[in,out] preq - pointer to the batch request structure. 
The following\n *\t\telements of the rq_ind.rq_status union are updated:\n *\t\trq_id     - object id, a variable length string.\n *\t\trq_status - the linked list of attribute structures\n *\n * @return int\n * @retval 0 - request read and decoded successfully.\n * @retval non-zero - DIS decode error.\n */\n\nint\ndecode_DIS_Status(int sock, struct batch_request *preq)\n{\n\tint rc;\n\tsize_t nchars = 0;\n\n\tpreq->rq_ind.rq_status.rq_id = NULL;\n\n\tCLEAR_HEAD(preq->rq_ind.rq_status.rq_attr);\n\n\t/*\n\t * call the disrcs function to allocate and return a string of all ids\n\t * freed in free_br()\n\t */\n\tpreq->rq_ind.rq_status.rq_id = disrcs(sock, &nchars, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\trc = decode_DIS_svrattrl(sock, &preq->rq_ind.rq_status.rq_attr);\n\treturn rc;\n}\n\n/**\n * @brief-\n *\tdecode a Track Job batch request\n *\n * @par\tNOTE:\n *\tThe batch_request structure must already exist (be allocated by the\n *      caller).   It is assumed that the header fields (protocol type,\n *      protocol version, request type, and user name) have already been decoded.\n *\n * @par\t Data items are:\\n\n *\t\tstring          job id\\n\n *\t\tunsigned int    hopcount\\n\n *\t\tstring          location\\n\n *\t\tu char          state\\n\n *\n * @param[in] sock - socket descriptor\n * @param[out] preq - pointer to batch_request structure\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\ndecode_DIS_TrackJob(int sock, struct batch_request *preq)\n{\n\tint rc;\n\n\trc = disrfst(sock, PBS_MAXSVRJOBID + 1, preq->rq_ind.rq_track.rq_jid);\n\tif (rc)\n\t\treturn rc;\n\n\tpreq->rq_ind.rq_track.rq_hopcount = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\trc = disrfst(sock, PBS_MAXDEST + 1, preq->rq_ind.rq_track.rq_location);\n\tif (rc)\n\t\treturn rc;\n\n\tpreq->rq_ind.rq_track.rq_state[0] = disruc(sock, &rc);\n\treturn rc;\n}\n\n/**\n * @brief-\n *\tdecode a User Credential batch request\n *\n * 
@par\tNOTE:\n *\tThe batch_request structure must already exist (be allocated by the\n *\tcaller).   It is assumed that the header fields (protocol type,\n *\tprotocol version, request type, and user name) have already been decoded.\n *\n * @par\tData items are:\\n\n *\t\tstring          user whose credential is being set\\n\n *\t\tunsigned int\tcredential type\\n\n *\t\tcounted string\tthe credential data\n *\n * @param[in] sock - socket descriptor\n * @param[out] preq - pointer to batch_request structure\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\ndecode_DIS_UserCred(int sock, struct batch_request *preq)\n{\n\tint rc;\n\n\trc = disrfst(sock, PBS_MAXUSER + 1, preq->rq_ind.rq_usercred.rq_user);\n\tif (rc)\n\t\treturn rc;\n\n\tpreq->rq_ind.rq_usercred.rq_type = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\tpreq->rq_ind.rq_usercred.rq_data = 0;\n\tpreq->rq_ind.rq_usercred.rq_data = disrcs(sock,\n\t\t\t\t\t\t  (size_t *) &preq->rq_ind.rq_usercred.rq_size,\n\t\t\t\t\t\t  &rc);\n\treturn rc;\n}\n\n/**\n * @brief\n *\tdecode into a list of PBS API \"attrl\" structures\n *\n *\tThe space for the attrl structures is allocated as needed.\n *\n *\tThe first item is an unsigned integer, a count of the\n *\tnumber of attrl entries in the linked list.  
This is encoded\n *\teven when there are no entries in the list.\n *\n *\tEach individual entry is encoded as:\n *\t\tu int\tsize of the three strings (name, resource, value)\n *\t\t\tincluding the terminating nulls, see dec_svrattrl.c\n *\t\tstring\tattribute name\n *\t\tu int\t1 or 0 if resource name does or does not follow\n *\t\tstring\tresource name (if one)\n *\t\tstring  value of attribute/resource\n *\t\tu int\t\"op\" of attrlop (also flag of svrattrl)\n *\n *\tNote, the encoding of a attrl is the same as the encoding of\n *\tthe pbs_ifl.h structures \"attropl\" and the server struct svrattrl.\n *\tAny one of the three forms can be decoded into any of the three with\n *\tthe possible loss of the \"flags\" field (which is the \"op\" of the\n *\tattrlop).\n *\n * @param[in]   sock - socket descriptor\n * @param[in]   ppatt - pointer to list of attributes\n *\n * @return int\n * @retval 0 on SUCCESS\n * @retval >0 on failure\n */\n\nint\ndecode_DIS_attrl(int sock, struct attrl **ppatt)\n{\n\tint hasresc;\n\tint i;\n\tunsigned int numpat;\n\tstruct attrl *pat = 0;\n\tstruct attrl *patprior = 0;\n\tint rc;\n\n\tnumpat = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\tfor (i = 0; i < numpat; ++i) {\n\n\t\t(void) disrui(sock, &rc);\n\t\tif (rc)\n\t\t\tbreak;\n\n\t\tpat = new_attrl();\n\t\tif (pat == 0)\n\t\t\treturn DIS_NOMALLOC;\n\n\t\tpat->name = disrst(sock, &rc);\n\t\tif (rc)\n\t\t\tbreak;\n\n\t\thasresc = disrui(sock, &rc);\n\t\tif (rc)\n\t\t\tbreak;\n\t\tif (hasresc) {\n\t\t\tpat->resource = disrst(sock, &rc);\n\t\t\tif (rc)\n\t\t\t\tbreak;\n\t\t}\n\n\t\tpat->value = disrst(sock, &rc);\n\t\tif (rc)\n\t\t\tbreak;\n\n\t\tpat->op = (enum batch_op) disrui(sock, &rc);\n\t\tif (rc)\n\t\t\tbreak;\n\n\t\tif (i == 0) {\n\t\t\t/* first one, link to passing in pointer */\n\t\t\t*ppatt = pat;\n\t\t} else {\n\t\t\tpatprior->next = pat;\n\t\t}\n\t\tpatprior = pat;\n\t}\n\n\tif (rc)\n\t\tPBS_free_aopl((struct attropl *) pat);\n\treturn rc;\n}\n\n/**\n * @brief\n 
*\tdecode into a list of PBS API \"attropl\" structures\n *\n *\tThe space for the attropl structures is allocated as needed.\n *\n *\tThe first item is a unsigned integer, a count of the\n *\tnumber of attropl entries in the linked list.  This is encoded\n *\teven when there are no entries in the list.\n *\n *\tEach individual entry is encoded as:\n *\t\tu int\tsize of the three strings (name, resource, value)\n *\t\t\tincluding the terminating nulls, see dec_svrattrl.c\n *\t\tstring\tattribute name\n *\t\tu int\t1 or 0 if resource name does or does not follow\n *\t\tstring\tresource name (if one)\n *\t\tstring  value of attribute/resource\n *\t\tu int\t\"op\" of attrlop (also flag of svrattrl)\n *\n *\tNote, the encoding of a attropl is the same as the encoding of\n *\tthe pbs_ifl.h structures \"attrl\" and the server struct svrattrl.\n *\tAny one of the three forms can be decoded into any of the three with\n *\tthe possible loss of the \"flags\" field (which is the \"op\" of the\n *\tattrlop).\n *\n * @param[in]   sock - socket descriptor\n * @param[in]   ppatt - pointer to list of attributes\n *\n * @return int\n * @retval 0 on SUCCESS\n * @retval >0 on failure\n */\n\nint\ndecode_DIS_attropl(int sock, struct attropl **ppatt)\n{\n\tint hasresc;\n\tint i;\n\tunsigned int numpat;\n\tstruct attropl *pat = 0;\n\tstruct attropl *patprior = 0;\n\tint rc;\n\n\tnumpat = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\tfor (i = 0; i < numpat; ++i) {\n\n\t\t(void) disrui(sock, &rc);\n\t\tif (rc)\n\t\t\tbreak;\n\n\t\tpat = malloc(sizeof(struct attropl));\n\t\tif (pat == 0)\n\t\t\treturn DIS_NOMALLOC;\n\n\t\tpat->next = NULL;\n\t\tpat->name = NULL;\n\t\tpat->resource = NULL;\n\t\tpat->value = NULL;\n\n\t\tpat->name = disrst(sock, &rc);\n\t\tif (rc)\n\t\t\tbreak;\n\n\t\thasresc = disrui(sock, &rc);\n\t\tif (rc)\n\t\t\tbreak;\n\t\tif (hasresc) {\n\t\t\tpat->resource = disrst(sock, &rc);\n\t\t\tif (rc)\n\t\t\t\tbreak;\n\t\t}\n\n\t\tpat->value = disrst(sock, &rc);\n\t\tif 
(rc)\n\t\t\tbreak;\n\n\t\tpat->op = (enum batch_op) disrui(sock, &rc);\n\t\tif (rc)\n\t\t\tbreak;\n\n\t\tif (i == 0) {\n\t\t\t/* first one, link to passed-in pointer */\n\t\t\t*ppatt = pat;\n\t\t} else {\n\t\t\tpatprior->next = pat;\n\t\t}\n\t\tpatprior = pat;\n\t}\n\n\tif (rc)\n\t\tPBS_free_aopl(pat);\n\treturn rc;\n}\n\n/**\n * @brief-\n *\t decode into a list of server \"svrattrl\" structures\n *\n * @par\tFunctionality:\n *\t\tThe space for the svrattrl structures is allocated as needed.\n *\n *      The first item is an unsigned integer, a count of the\n *      number of svrattrl entries in the linked list.  This is encoded\n *      even when there are no entries in the list.\n *\n * @par\tEach individual entry is encoded as:\\n\n *\t\t\tu int\t- size of the three strings (name, resource, value)\n *                      \t  including the terminating nulls\\n\n *\t\t\tstring  - attribute name\\n\n *\t\t\tu int   - 1 or 0 if resource name does or does not follow\\n\n *\t\t\tstring  - resource name (if one)\\n\n *\t\t\tstring  - value of attribute/resource\\n\n *\t\t\tu int   - \"op\" of attrlop\\n\n *\n * NOTE:\n *\tthe encoding of a svrattrl is the same as the encoding of\n *      the pbs_ifl.h structures \"attrl\" and \"attropl\".  
Any one of\n *      the three forms can be decoded into any of the three with the\n *      possible loss of the \"flags\" field (which is the \"op\" of the attrlop).\n *\n * @param[in] sock - socket descriptor\n * @param[in] phead - head pointer to list entry list sub-structure\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\ndecode_DIS_svrattrl(int sock, pbs_list_head *phead)\n{\n\tint i;\n\tunsigned int hasresc;\n\tsize_t ls;\n\tunsigned int data_len;\n\tunsigned int numattr;\n\tsvrattrl *psvrat;\n\tint rc;\n\tsize_t tsize;\n\n\tnumattr = disrui(sock, &rc); /* number of attributes in set */\n\tif (rc)\n\t\treturn rc;\n\n\tfor (i = 0; i < numattr; ++i) {\n\n\t\tdata_len = disrui(sock, &rc); /* here it is used */\n\t\tif (rc)\n\t\t\treturn rc;\n\n\t\ttsize = sizeof(svrattrl) + data_len;\n\t\tif ((psvrat = (svrattrl *) malloc(tsize)) == 0)\n\t\t\treturn DIS_NOMALLOC;\n\n\t\tCLEAR_LINK(psvrat->al_link);\n\t\tpsvrat->al_sister = NULL;\n\t\tpsvrat->al_atopl.next = 0;\n\t\tpsvrat->al_tsize = tsize;\n\t\tpsvrat->al_name = (char *) psvrat + sizeof(svrattrl);\n\t\tpsvrat->al_resc = 0;\n\t\tpsvrat->al_value = 0;\n\t\tpsvrat->al_nameln = 0;\n\t\tpsvrat->al_rescln = 0;\n\t\tpsvrat->al_valln = 0;\n\t\tpsvrat->al_flags = 0;\n\t\tpsvrat->al_refct = 1;\n\n\t\tif ((rc = disrfcs(sock, &ls, data_len, psvrat->al_name)) != 0)\n\t\t\tbreak;\n\t\t*(psvrat->al_name + ls++) = '\\0';\n\t\tpsvrat->al_nameln = (int) ls;\n\t\tdata_len -= ls;\n\n\t\thasresc = disrui(sock, &rc);\n\t\tif (rc)\n\t\t\tbreak;\n\t\tif (hasresc) {\n\t\t\tpsvrat->al_resc = psvrat->al_name + ls;\n\t\t\trc = disrfcs(sock, &ls, data_len, psvrat->al_resc);\n\t\t\tif (rc)\n\t\t\t\tbreak;\n\t\t\t*(psvrat->al_resc + ls++) = '\\0';\n\t\t\tpsvrat->al_rescln = (int) ls;\n\t\t\tdata_len -= ls;\n\t\t}\n\n\t\tpsvrat->al_value = psvrat->al_name + psvrat->al_nameln +\n\t\t\t\t   psvrat->al_rescln;\n\t\tif ((rc = disrfcs(sock, &ls, data_len, psvrat->al_value)) 
!= 0)\n\t\t\tbreak;\n\t\t*(psvrat->al_value + ls++) = '\\0';\n\t\tpsvrat->al_valln = (int) ls;\n\n\t\tpsvrat->al_op = (enum batch_op) disrui(sock, &rc);\n\t\tif (rc)\n\t\t\tbreak;\n\n\t\tappend_link(phead, &psvrat->al_link, psvrat);\n\t}\n\n\tif (rc) {\n\t\t(void) free(psvrat);\n\t}\n\n\treturn (rc);\n}\n"
  },
  {
    "path": "src/lib/Libifl/DIS_encode.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tDIS_encode.c\n * \n * @brief\n * DIS encode routines\n */\n\n#include \"batch_request.h\"\n#include \"dis.h\"\n\n/**\n * @brief encode the Preempt Jobs request for sending to the server.\n *\n * @param[in] sock - socket descriptor for the connection.\n * @param[in] jobs - list of job ids.\n *\n * @return - error code while writing data to the socket.\n */\nint\nencode_DIS_JobsList(int sock, char **jobs_list, int numofjobs)\n{\n\tint i = 0;\n\tint rc = 0;\n\tint count = 0;\n\n\tif (numofjobs == -1)\n\t\tfor (; jobs_list[count]; count++)\n\t\t\t;\n\telse\n\t\tcount = numofjobs;\n\n\tif (((rc = diswui(sock, count)) != 0))\n\t\treturn rc;\n\n\tfor (i = 0; i < count; i++)\n\t\tif ((rc = diswst(sock, jobs_list[i])) != 0)\n\t\t\treturn rc;\n\n\treturn rc;\n}\n\n/**\n *\n * @brief\n *\tEncode a Copy Hook File request.\n *\tSend over 'sock' the data items:\n *\t\t\tu int\tblock sequence number\n *\t\t\tu int\tfile type (stdout, stderr, ...)\n *\t\t\tu int\tsize of data in block\n *\t\t\tstring\thook file name\n *\t\t\tcnt str\tdata\n *\n * @param[in]\tsock -  the communication end point.\n * @param[in]\tseq -\tsequence number of the current block of data being sent\n * @param[in] \tbuf - block of data to be sent\n * @param[in]\tlen - # of characters in 'buf'\n * @param[in]\tfilename - name of the hook file being sent.\n *\n * @return 
\tint\n * @retval \t0 for success\n * @retval\tnon-zero otherwise\n */\n\nint\nencode_DIS_CopyHookFile(int sock, int seq, const char *buf, int len, const char *filename)\n{\n\tint rc;\n\n\tif ((rc = diswui(sock, seq) != 0) ||\n\t    (rc = diswui(sock, len) != 0) ||\n\t    (rc = diswst(sock, filename) != 0) ||\n\t    (rc = diswcs(sock, buf, len) != 0))\n\t\treturn rc;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t-encode a Copy Files Dependency Batch Request\n *\n * @param[in] sock - socket descriptor\n * @param[in] preq - pointer to batch_request\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t!0\terror\n *\n */\nint\nencode_DIS_CopyFiles(int sock, struct batch_request *preq)\n{\n\tint pair_ct = 0;\n\tchar *nullstr = \"\";\n\tstruct rqfpair *ppair;\n\tint rc;\n\n\tppair = (struct rqfpair *) GET_NEXT(preq->rq_ind.rq_cpyfile.rq_pair);\n\twhile (ppair) {\n\t\t++pair_ct;\n\t\tppair = (struct rqfpair *) GET_NEXT(ppair->fp_link);\n\t}\n\n\tif ((rc = diswst(sock, preq->rq_ind.rq_cpyfile.rq_jobid) != 0) ||\n\t    (rc = diswst(sock, preq->rq_ind.rq_cpyfile.rq_owner) != 0) ||\n\t    (rc = diswst(sock, preq->rq_ind.rq_cpyfile.rq_user) != 0) ||\n\t    (rc = diswst(sock, preq->rq_ind.rq_cpyfile.rq_group) != 0) ||\n\t    (rc = diswui(sock, preq->rq_ind.rq_cpyfile.rq_dir) != 0))\n\t\treturn rc;\n\n\tif ((rc = diswui(sock, pair_ct) != 0))\n\t\treturn rc;\n\tppair = (struct rqfpair *) GET_NEXT(preq->rq_ind.rq_cpyfile.rq_pair);\n\twhile (ppair) {\n\t\tif (ppair->fp_rmt == NULL)\n\t\t\tppair->fp_rmt = nullstr;\n\t\tif ((rc = diswui(sock, ppair->fp_flag) != 0) ||\n\t\t    (rc = diswst(sock, ppair->fp_local) != 0) ||\n\t\t    (rc = diswst(sock, ppair->fp_rmt) != 0))\n\t\t\treturn rc;\n\t\tppair = (struct rqfpair *) GET_NEXT(ppair->fp_link);\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t-encode_DIS_CopyFiles_Cred() - encode a Copy Files with Credential Dependency\n *\tBatch Request\n *\n * @par Note:\n *\tThis request is used by the server ONLY; its input is a server\n *\tbatch 
request structure.\n *\n * @param[in] sock - socket descriptor\n * @param[in] preq - pointer to batch request\n *\n * @par\tData items are:\\n\n *\t\tstring\t\tjob id\\n\n *\t\tstring\t\tjob owner(may be null)\\n\n *\t\tstring\t\texecution user name\\n\n *\t\tstring\t\texecution group name(may be null)\\n\n *\t\tunsigned int\tdirection & job_dir_enable flag\\n\n *\t\tunsigned int\tcount of file pairs in set\\n\n *\tset of\tfile pairs:\\n\n *\t\tunsigned int\tflag\\n\n *\t\tstring\t\tlocal path name\\n\n *\t\tstring\t\tremote path name (may be null)\\n\n *\t\tunsigned int\tcredential type\\n\n *\t\tunsigned int\tcredential length (bytes)\\n\n *\t\tbyte string\tcredential\\n\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t!0\terror\n *\n */\nint\nencode_DIS_CopyFiles_Cred(int sock, struct batch_request *preq)\n{\n\tint pair_ct = 0;\n\tchar *nullstr = \"\";\n\tstruct rqfpair *ppair;\n\tint rc;\n\tsize_t clen;\n\tstruct rq_cpyfile *rcpyf;\n\n\tclen = (size_t) preq->rq_ind.rq_cpyfile_cred.rq_credlen;\n\trcpyf = &preq->rq_ind.rq_cpyfile_cred.rq_copyfile;\n\tppair = (struct rqfpair *) GET_NEXT(rcpyf->rq_pair);\n\n\twhile (ppair) {\n\t\t++pair_ct;\n\t\tppair = (struct rqfpair *) GET_NEXT(ppair->fp_link);\n\t}\n\n\tif ((rc = diswst(sock, rcpyf->rq_jobid) != 0) ||\n\t    (rc = diswst(sock, rcpyf->rq_owner) != 0) ||\n\t    (rc = diswst(sock, rcpyf->rq_user) != 0) ||\n\t    (rc = diswst(sock, rcpyf->rq_group) != 0) ||\n\t    (rc = diswui(sock, rcpyf->rq_dir) != 0))\n\t\treturn rc;\n\n\tif ((rc = diswui(sock, pair_ct) != 0))\n\t\treturn rc;\n\tppair = (struct rqfpair *) GET_NEXT(rcpyf->rq_pair);\n\twhile (ppair) {\n\t\tif (ppair->fp_rmt == NULL)\n\t\t\tppair->fp_rmt = nullstr;\n\t\tif ((rc = diswui(sock, ppair->fp_flag) != 0) ||\n\t\t    (rc = diswst(sock, ppair->fp_local) != 0) ||\n\t\t    (rc = diswst(sock, ppair->fp_rmt) != 0))\n\t\t\treturn rc;\n\t\tppair = (struct rqfpair *) GET_NEXT(ppair->fp_link);\n\t}\n\n\trc = diswui(sock, 
preq->rq_ind.rq_cpyfile_cred.rq_credtype);\n\tif (rc != 0)\n\t\treturn rc;\n\trc = diswcs(sock, preq->rq_ind.rq_cpyfile_cred.rq_pcred, clen);\n\tif (rc != 0)\n\t\treturn rc;\n\n\treturn 0;\n}\n\n/**\n *\n * @brief\n *\tEncode a Hook Delete File request\n *\tSend over to 'sock' the data item:\n *\t\t\tstring\thook filename\n * @param[in]\tsock - communication channel\n * @param[in]\tfilename - hook filename\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t!0\terror\n *\n */\n\nint\nencode_DIS_DelHookFile(int sock, const char *filename)\n{\n\tint rc;\n\n\tif ((rc = diswst(sock, filename) != 0))\n\t\treturn rc;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t-encode a Job Credential Batch Request\n *\n * @par\tData items are:\\n\n *\t\tchar\t\tjob id\n *\t\tchar\t\tcred id (e.g principal)\n *\t\tint\tcredential type\n *\t\tcounted string\tthe message\n *\t\tlong\t\tcredential validity\n *\n * @param[in] sock - socket descriptor\n * @param[in] jobid - job id\n * @param[in] owner - cred id (e.g. 
principal)\n * @param[in] type - cred type\n * @param[in] data - credential\n * @param[in] size - length of credential\n * @param[in] validity - credential validity\n *\n * @return\tint\n * @retval      0 for success\n * @retval      non-zero otherwise\n */\n\nint\nencode_DIS_Cred(int sock, char *jobid, char *credid, int type, char *data, size_t size, long validity)\n{\n\tint rc;\n\n\tif ((rc = diswst(sock, jobid) != 0) ||\n\t    (rc = diswst(sock, credid) != 0) ||\n\t    (rc = diswui(sock, type) != 0) ||\n\t    (rc = diswcs(sock, data, size) != 0) ||\n\t    (rc = diswul(sock, validity) != 0))\n\t\treturn rc;\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\t-encode a Job Credential Batch Request\n *\n * @par\tData items are:\\n\n *\t\tunsigned int    Credential type\\n\n *\t\tstring          the credential (octet array)\\n\n *\n * @param[in] sock - socket descriptor\n * @param[in] type - cred type\n * @param[in] cred - credential\n * @param[in] len - length of credentials\n *\n * @return\tint\n * @retval      0 for success\n * @retval      non-zero otherwise\n */\n\nint\nencode_DIS_JobCred(int sock, int type, const char *cred, int len)\n{\n\tint rc;\n\n\tif ((rc = diswui(sock, type)) != 0)\n\t\treturn rc;\n\trc = diswcs(sock, cred, (size_t) len);\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\t-encode a Job Related File\n *\n * @param[in]   sock -  the communication end point.\n * @param[in]   seq -   sequence number of the current block of data being sent\n * @param[in]   buf - block of data to be sent\n * @param[in]   len - # of characters in 'buf'\n * @param[in]\tjobid - job id\n * @param[in] \twhich - file type\n *\n * @return      int\n * @retval      0 for success\n * @retval      non-zero otherwise\n */\n\nint\nencode_DIS_JobFile(int sock, int seq, const char *buf, int len, const char *jobid, int which)\n{\n\tint rc;\n\n\tif (jobid == NULL)\n\t\tjobid = \"\";\n\tif ((rc = diswui(sock, seq) != 0) ||\n\t    (rc = diswui(sock, which) != 0) ||\n\t    (rc = diswui(sock, len) != 
0) ||\n\t    (rc = diswst(sock, jobid) != 0) ||\n\t    (rc = diswcs(sock, buf, len) != 0))\n\t\treturn rc;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *      - encode a Job ID string for a batch request\n *\n * @par Functionality:\n *              This is used for the following batch requests:\\n\n *                      Ready_to_Commit\\n\n *                      Commit\\n\n *                      Locate Job\\n\n *                      Rerun Job\n *\n * @param[in] sock - socket descriptor\n * @param[in] jobid - job id\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\nencode_DIS_JobId(int sock, const char *jobid)\n{\n\treturn (diswst(sock, jobid));\n}\n\n/**\n * @brief\n *\t-encode a Manager Batch Request\n *\n * @par\tFunctionality:\n *\t\tThis request is used for most operations where an object is being\n *      \tcreated, deleted, or altered.\n *\n * @param[in] sock - socket descriptor\n * @param[in] command - command type\n * @param[in] objtype - object type\n * @param[in] objname - object name\n * @param[in] aoplp - pointer to attropl structure(list)\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\nencode_DIS_Manage(int sock, int command, int objtype, const char *objname, struct attropl *aoplp)\n{\n\tint rc;\n\n\tif ((rc = diswui(sock, command) != 0) ||\n\t    (rc = diswui(sock, objtype) != 0) ||\n\t    (rc = diswst(sock, objname) != 0))\n\t\treturn rc;\n\n\treturn (encode_DIS_attropl(sock, aoplp));\n}\n\n/**\n * @brief encode the Modify Reservation request for sending to the server.\n *\n * @param[in] sock - socket descriptor for the connection.\n * @param[in] resv_id - Reservation identifier of the reservation that would be modified.\n * @param[in] aoplp - list of attributes that will be modified.\n *\n * @return - error code while writing data to the socket.\n */\nint\nencode_DIS_ModifyResv(int sock, const char 
*resv_id, struct attropl *aoplp)\n{\n\tint rc = 0;\n\n\tif (resv_id == NULL)\n\t\tresv_id = \"\";\n\n\tif (((rc = diswui(sock, MGR_OBJ_RESV)) != 0) ||\n\t    ((rc = diswst(sock, resv_id)) != 0))\n\t\treturn rc;\n\n\treturn (encode_DIS_attropl(sock, aoplp));\n}\n\n/**\n * @brief\n *\t-encode a Move Job Batch Request\n *\talso used for an Order Job Batch Request\n *\n * @param[in] sock - socket descriptor\n * @param[in] jobid - job id to be moved\n * @param[in] destin - destination to which job to be moved\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\nencode_DIS_MoveJob(int sock, const char *jobid, const char *destin)\n{\n\tint rc;\n\n\tif ((rc = diswst(sock, jobid) != 0) ||\n\t    (rc = diswst(sock, destin) != 0))\n\t\treturn rc;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t- encode a Message Job Batch Request\n *\n * @param[in] sock - socket descriptor\n * @param[in] jobid - job id\n * @param[in] fileopt - which file\n * @param[in] msg - msg to be encoded\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\nencode_DIS_MessageJob(int sock, const char *jobid, int fileopt, const char *msg)\n{\n\tint rc;\n\n\tif ((rc = diswst(sock, jobid) != 0) ||\n\t    (rc = diswui(sock, fileopt) != 0) ||\n\t    (rc = diswst(sock, msg) != 0))\n\t\treturn rc;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t-Write a python spawn request onto the wire.\n *\tEach of the argv and envp arrays is sent by writing a counted\n *\tstring followed by a zero length string (\"\").  
They are read\n *\tby the function read_carray() in dec_MsgJob.c\n *\n * @param[in] sock - socket descriptor\n * @param[in] jobid - job id\n * @param[in] argv - pointer to argument list\n * @param[in] envp - pointer to environment variable\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\nint\nencode_DIS_PySpawn(int sock, const char *jobid, char **argv, char **envp)\n{\n\tint rc, i;\n\tchar *cp;\n\n\tif ((rc = diswst(sock, jobid)) != DIS_SUCCESS)\n\t\treturn rc;\n\n\tif (argv != NULL) {\n\t\tfor (i = 0; (cp = argv[i]) != NULL; i++) {\n\t\t\tif ((rc = diswcs(sock, cp, strlen(cp))) != DIS_SUCCESS)\n\t\t\t\treturn rc;\n\t\t}\n\t}\n\tif ((rc = diswcs(sock, \"\", 0)) != DIS_SUCCESS)\n\t\treturn rc;\n\n\tif (envp != NULL) {\n\t\tfor (i = 0; (cp = envp[i]) != NULL; i++) {\n\t\t\tif ((rc = diswcs(sock, cp, strlen(cp))) != DIS_SUCCESS)\n\t\t\t\treturn rc;\n\t\t}\n\t}\n\trc = diswcs(sock, \"\", 0);\n\n\treturn rc;\n}\n\nint\nencode_DIS_RelnodesJob(int sock, const char *jobid, const char *node_list)\n{\n\tint rc;\n\n\tif ((rc = diswst(sock, jobid) != 0) ||\n\t    (rc = diswst(sock, node_list) != 0))\n\t\treturn rc;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t-encode a Queue Job Batch Request\n *\n * @par\tFunctionality:\n *\t\tThis request is used for the first step in submitting a job, sending\n *      \tthe job attributes.\n *\n * @param[in] sock - socket descriptor\n * @param[in] jobid - job id\n * @param[in] destin - destination queue name\n * @param[in] aoplp - pointer to attropl structure(list)\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\nencode_DIS_QueueJob(int sock, char *jobid, const char *destin, struct attropl *aoplp)\n{\n\tint rc;\n\n\tif (jobid == NULL)\n\t\tjobid = \"\";\n\tif (destin == NULL)\n\t\tdestin = \"\";\n\n\tif ((rc = diswst(sock, jobid) != 0) ||\n\t    (rc = diswst(sock, destin) != 0))\n\t\treturn rc;\n\n\treturn 
(encode_DIS_attropl(sock, aoplp));\n}\n\n/**\n * @brief\n *      -encode a Register Dependency Batch Request\n *\n * @par Functionality:\n *       \tThis request is used by the server ONLY; its input is a server\n *      \tbatch request structure.\n *\n * @par Data items are:\n *              string          job owner\\n\n *              string          parent job id\\n\n *              string          child job id\\n\n *              unsigned int    dependency type\\n\n *              unsigned int    operation\\n\n *              signed long     cost\\n\n *\n * @param[in] sock - socket descriptor\n * @param[out] preq - pointer to batch_request structure\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\nencode_DIS_Register(int sock, struct batch_request *preq)\n{\n\tint rc;\n\n\tif ((rc = diswst(sock, preq->rq_ind.rq_register.rq_owner) != 0) ||\n\t    (rc = diswst(sock, preq->rq_ind.rq_register.rq_parent) != 0) ||\n\t    (rc = diswst(sock, preq->rq_ind.rq_register.rq_child) != 0) ||\n\t    (rc = diswui(sock, preq->rq_ind.rq_register.rq_dependtype) != 0) ||\n\t    (rc = diswui(sock, preq->rq_ind.rq_register.rq_op) != 0) ||\n\t    (rc = diswsl(sock, preq->rq_ind.rq_register.rq_cost) != 0))\n\t\treturn rc;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t-write an extension to a Batch Request\n *\n * @par\tThe extension is in two parts:\n *\t\tunsigned integer - 1 if an extension string follows, 0 if not\\n\n *\t\tcharacter string - if 1 above\n *\n * @param[in] sock - socket descriptor\n * @param[in] extend - string which used as extension for req\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\nencode_DIS_ReqExtend(int sock, const char *extend)\n{\n\tint rc;\n\n\tif ((extend == NULL) || (*extend == '\\0')) {\n\t\trc = diswui(sock, 0);\n\t} else {\n\t\tif ((rc = diswui(sock, 1)) == 0) {\n\t\t\trc = diswst(sock, 
extend);\n\t\t}\n\t}\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\t-encode a Request Header\n *\n * @param[in] sock - socket descriptor\n * @param[in] reqt - request type\n * @param[in] user - user name\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\nencode_DIS_ReqHdr(int sock, int reqt, const char *user)\n{\n\tint rc;\n\n\tif ((rc = diswui(sock, PBS_BATCH_PROT_TYPE)) ||\n\t    (rc = diswui(sock, PBS_BATCH_PROT_VER)) ||\n\t    (rc = diswui(sock, reqt)) ||\n\t    (rc = diswst(sock, user))) {\n\t\treturn rc;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\t-used to encode the basic information for the\n *      RunJob request and the ConfirmReservation request\n *\n * @param[in] sock - socket descriptor\n * @param[in] id - job or reservation id\n * @param[in] where - destination (list of vnodes)\n * @param[in] arg - unsigned long argument (e.g. an optional flag used by reservations)\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\nencode_DIS_Run(int sock, const char *id, const char *where, unsigned long arg)\n{\n\tint rc;\n\n\tif ((rc = diswst(sock, id) != 0) ||\n\t    (rc = diswst(sock, where) != 0) ||\n\t    (rc = diswul(sock, arg) != 0))\n\t\treturn rc;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t-encode a Server Shut Down Batch Request\n *\n * @param[in] sock - socket descriptor\n * @param[in] manner - type of shutdown\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\nencode_DIS_ShutDown(int sock, int manner)\n{\n\treturn (diswui(sock, manner));\n}\n\n/**\n * @brief\n *\t-encode a Signal Job Batch Request\n *\n * @param[in] sock - socket descriptor\n * @param[in] jobid - job id\n * @param[in] signal - signal\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\nencode_DIS_SignalJob(int sock, const char *jobid, const char *signal)\n{\n\tint rc;\n\n\tif ((rc = diswst(sock, jobid) 
!= 0) ||\n\t    (rc = diswst(sock, signal) != 0))\n\t\treturn rc;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t-encode a Status Job Batch Request\n *\n * @param[in] sock - socket descriptor\n * @param[in] objid - object id\n * @param[in] pattrl - pointer to attrl struct(list)\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\nencode_DIS_Status(int sock, const char *objid, struct attrl *pattrl)\n{\n\tint rc;\n\n\tif ((rc = diswst(sock, objid) != 0) ||\n\t    (rc = encode_DIS_attrl(sock, pattrl) != 0))\n\t\treturn rc;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t-encode a Submit Reservation Batch Request\n *\n * @par\tFunctionality:\n *\t\tThis request is used for the first step in submitting a reservation,\n *      \tsending the reservation attributes.\n *\n * @param[in] sock - socket descriptor\n * @param[in] resv_id - reservation id\n * @param[in] aoplp - pointer to attropl struct(list)\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\nencode_DIS_SubmitResv(int sock, const char *resv_id, struct attropl *aoplp)\n{\n\tint rc;\n\n\tif (resv_id == NULL)\n\t\tresv_id = \"\";\n\n\t/* send the reservation ID and then an empty destination\n\t * This is done so the server can use the queuejob structure\n\t */\n\tif ((rc = diswst(sock, resv_id) != 0) ||\n\t    (rc = diswst(sock, \"\") != 0))\n\t\treturn rc;\n\n\treturn (encode_DIS_attropl(sock, aoplp));\n}\n\n/**\n * @brief\n *      -encode a Track Job batch request\n *\n * @par NOTE:\n *      This request is used by the server ONLY; its input is a server\n *      batch request structure.\n *\n * @par  Data items are:\\n\n *              string          job id\\n\n *              unsigned int    hopcount\\n\n *              string          location\\n\n *              u char          state\\n\n *\n * @param[in] sock - socket descriptor\n * @param[out] preq - pointer to batch_request 
structure\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\nencode_DIS_TrackJob(int sock, struct batch_request *preq)\n{\n\tint rc;\n\n\tif ((rc = diswst(sock, preq->rq_ind.rq_track.rq_jid) != 0) ||\n\t    (rc = diswui(sock, preq->rq_ind.rq_track.rq_hopcount) != 0) ||\n\t    (rc = diswst(sock, preq->rq_ind.rq_track.rq_location) != 0) ||\n\t    (rc = diswuc(sock, preq->rq_ind.rq_track.rq_state[0]) != 0))\n\t\treturn rc;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t- encode a User Credential Batch Request\n *\n * @param[in] sock - socket descriptor\n * @param[in] user - user name\n * @param[in] type - credential type\n * @param[in] cred - the credential\n * @param[in] len - length of credential\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\nencode_DIS_UserCred(int sock, const char *user, int type, const char *cred, int len)\n{\n\tint rc;\n\n\tif ((rc = diswst(sock, user)) != 0)\n\t\treturn rc;\n\tif ((rc = diswui(sock, type)) != 0)\n\t\treturn rc;\n\trc = diswcs(sock, cred, (size_t) len);\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\tencode a list of PBS API \"attrl\" structures\n *\n * @par\tFunctionality:\n *\t\tThe first item encoded is an unsigned integer, a count of the\n *      \tnumber of attrl entries in the linked list.  
This is encoded\n *      \teven when there are no attrl entries in the list.\n *\n * @par\t Each individual entry is then encoded as:\\n\n *\t\tu int   size of the three strings (name, resource, value)\n *                      including the terminating nulls\\n\n *\t\tstring  attribute name\\n\n *\t\tu int   1 or 0 if resource name does or does not follow\\n\n *\t\tstring  resource name (if one)\\n\n *\t\tstring  value of attribute/resource\\n\n *\t\tu int   \"op\" of attrlop, forced to \"Set\"\\n\n *\n * @param[in] sock - socket descriptor\n * @param[in] pattrl - pointer to attrl structure\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\nencode_DIS_attrl(int sock, struct attrl *pattrl)\n{\n\tunsigned int ct = 0;\n\tunsigned int name_len;\n\tstruct attrl *ps;\n\tint rc;\n\tchar *value;\n\n\t/* count how many */\n\tfor (ps = pattrl; ps; ps = ps->next) {\n\t\t++ct;\n\t}\n\n\tif ((rc = diswui(sock, ct)) != 0)\n\t\treturn rc;\n\n\tfor (ps = pattrl; ps; ps = ps->next) {\n\t\t/* length of three strings */\n\t\tvalue = ps->value ? 
ps->value : \"\";\n\t\tname_len = (int) strlen(ps->name) + (int) strlen(value) + 2;\n\t\tif (ps->resource)\n\t\t\tname_len += strlen(ps->resource) + 1;\n\n\t\tif ((rc = diswui(sock, name_len)) != 0)\n\t\t\tbreak;\n\t\tif ((rc = diswst(sock, ps->name)) != 0)\n\t\t\tbreak;\n\t\tif (ps->resource) { /* has a resource name */\n\t\t\tif ((rc = diswui(sock, 1)) != 0)\n\t\t\t\tbreak;\n\t\t\tif ((rc = diswst(sock, ps->resource)) != 0)\n\t\t\t\tbreak;\n\t\t} else {\n\t\t\tif ((rc = diswui(sock, 0)) != 0) /* no resource name */\n\t\t\t\tbreak;\n\t\t}\n\t\tif ((rc = diswst(sock, value)) ||\n\t\t    (rc = diswui(sock, (unsigned int) SET)))\n\t\t\tbreak;\n\t}\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\t- encode a list of PBS API \"attropl\" structures\n *\n * @par\tNote:\n *\tThe first item encoded is an unsigned integer, a count of the\n *      number of attropl entries in the linked list.  This is encoded\n *      even when there are no attropl entries in the list.\n *\n * @par\t Each individual entry is then encoded as:\\n\n *\t\t\tu int   size of the three strings (name, resource, value)\n *                      \tincluding the terminating nulls\\n\n *\t\t\tstring  attribute name\\n\n *\t\t\tu int   1 or 0 if resource name does or does not follow\\n\n *\t\t\tstring  resource name (if one)\\n\n *\t\t\tstring  value of attribute/resource\\n\n *\t\t\tu int   \"op\" of attrlop\\n\n *\n * @par\tNote:\n *\tthe encoding of an attropl is the same as the encoding of\n *      the pbs_ifl.h structures \"attrl\" and the server svrattrl.  
Any\n *      one of the three forms can be decoded into any of the three with the\n *      possible loss of the \"flags\" field (which is the \"op\" of the attrlop).\n *\n * @param[in] sock - socket id\n * @param[in] pattropl - pointer to attropl structure\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\nencode_DIS_attropl(int sock, struct attropl *pattropl)\n{\n\tunsigned int ct = 0;\n\tunsigned int name_len;\n\tstruct attropl *ps;\n\tint rc;\n\n\t/* count how many */\n\n\tfor (ps = pattropl; ps; ps = ps->next) {\n\t\t++ct;\n\t}\n\n\tif ((rc = diswui(sock, ct)) != 0)\n\t\treturn rc;\n\n\tfor (ps = pattropl; ps; ps = ps->next) {\n\t\t/* length of three strings */\n\t\tname_len = (int) strlen(ps->name) + (int) strlen(ps->value) + 2;\n\t\tif (ps->resource)\n\t\t\tname_len += strlen(ps->resource) + 1;\n\n\t\tif ((rc = diswui(sock, name_len)) != 0)\n\t\t\tbreak;\n\t\tif ((rc = diswst(sock, ps->name)) != 0)\n\t\t\tbreak;\n\t\tif (ps->resource) { /* has a resource name */\n\t\t\tif ((rc = diswui(sock, 1)) != 0)\n\t\t\t\tbreak;\n\t\t\tif ((rc = diswst(sock, ps->resource)) != 0)\n\t\t\t\tbreak;\n\t\t} else {\n\t\t\tif ((rc = diswui(sock, 0)) != 0) /* no resource name */\n\t\t\t\tbreak;\n\t\t}\n\t\tif ((rc = diswst(sock, ps->value)) ||\n\t\t    (rc = diswui(sock, (unsigned int) ps->op)))\n\t\t\tbreak;\n\t}\n\treturn rc;\n}\n\n/**\n * @brief\n *\t-encode a list of server \"svrattrl\" structures\n *\n * @par\tFunctionality:\n *\t\tThe first item encoded is a unsigned integer, a count of the\n *      \tnumber of svrattrl entries in the linked list.  
This is encoded\n *      \teven when there are no svrattrl entries in the list.\n *\n * @par\tEach individual entry is then encoded as:\\n\n *\t\t\tu int   size of the three strings (name, resource, value)\n *                      \tincluding the terminating nulls\\n\n *\t\t\tstring  attribute name\\n\n *\t\t\tu int   1 or 0 if resource name does or does not follow\\n\n *\t\t\tstring  resource name (if one)\\n\n *\t\t\tstring  value of attribute/resource\\n\n *\t\t\tu int   \"op\" of attrlop\n *\n * @param[in] sock - socket descriptor\n * @param[in] psattl - pointer to svr attr list\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\nencode_DIS_svrattrl(int sock, svrattrl *psattl)\n{\n\tunsigned int ct = 0;\n\tunsigned int name_len;\n\tsvrattrl *ps;\n\tint rc;\n\n\t/* count how many */\n\n\tfor (ps = psattl; ps; ps = (svrattrl *) GET_NEXT(ps->al_link)) {\n\t\t++ct;\n\t}\n\n\tif ((rc = diswui(sock, ct)) != 0)\n\t\treturn rc;\n\n\tfor (ps = psattl; ps; ps = (svrattrl *) GET_NEXT(ps->al_link)) {\n\t\t/* length of three strings */\n\t\tname_len = (int) strlen(ps->al_atopl.name) +\n\t\t\t   (int) strlen(ps->al_atopl.value) + 2;\n\t\tif (ps->al_atopl.resource)\n\t\t\tname_len += strlen(ps->al_atopl.resource) + 1;\n\n\t\tif ((rc = diswui(sock, name_len)) != 0)\n\t\t\tbreak;\n\t\tif ((rc = diswst(sock, ps->al_atopl.name)) != 0)\n\t\t\tbreak;\n\t\tif (ps->al_rescln) { /* has a resource name */\n\t\t\tif ((rc = diswui(sock, 1)) != 0)\n\t\t\t\tbreak;\n\t\t\tif ((rc = diswst(sock, ps->al_atopl.resource)) != 0)\n\t\t\t\tbreak;\n\t\t} else {\n\t\t\tif ((rc = diswui(sock, 0)) != 0) /* no resource name */\n\t\t\t\tbreak;\n\t\t}\n\t\tif ((rc = diswst(sock, ps->al_atopl.value)) ||\n\t\t    (rc = diswui(sock, (unsigned int) ps->al_op)))\n\t\t\tbreak;\n\t}\n\treturn rc;\n}\n"
  },
  {
    "path": "src/lib/Libifl/Makefile.am",
    "content": "#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nBUILT_SOURCES = pbs_ifl_wrap.c\nEXTRA_DIST = pbs_ifl.i\n\npbs_ifl_wrap.c: pbs_ifl.i\n\t@swig_dir@/bin/swig -python -outcurrentdir \\\n\t\t@swig_py_inc@ -I$(top_srcdir)/src/include $^\n\nCLEANFILES = \\\n\tpbs_ifl.py \\\n\tpbs_ifl_wrap.c\n"
  },
  {
    "path": "src/lib/Libifl/PBS_attr.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tPBS_attr.c\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <stdlib.h>\n#include \"libpbs.h\"\n\n/**\n * @brief\n *\t-parses attr list and checks if name and value are null.\n *\n * @param[in] alp - pointer to attr list\n *\n * @return\tint\n * @retval\t0\tSuccess\n * @retval\t-1\tif name or value is NULL\n *\n */\n\nint\nPBS_val_al(struct attrl *alp)\n{\n\twhile (alp) {\n\t\tif ((alp->name == 0) || (alp->value == 0))\n\t\t\treturn -1;\n\t\talp = alp->next;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\t-frees the attr list\n *\n * @param[in] alp - pointer to attr list\n *\n * @return\tVoid\n *\n */\nvoid\nPBS_free_al(struct attrl *alp)\n{\n\tstruct attrl *talp;\n\twhile (alp != NULL) {\n\t\tfree(alp->name);\n\t\tfree(alp->resource);\n\t\tfree(alp->value);\n\t\ttalp = alp;\n\t\talp = alp->next;\n\t\tfree(talp);\n\t}\n}\n\n/**\n * @brief\n *      -parses attr list with option and checks if name and value are null.\n *\n * @param[in] aoplp - pointer to attropl list\n *\n * @return      int\n * @retval      0       Success\n * @retval      -1      if name or value is NULL\n *\n */\n\nint\nPBS_val_aopl(struct attropl *aoplp)\n{\n\twhile (aoplp != NULL) {\n\t\tif ((aoplp->name == 0) || (aoplp->value == 0))\n\t\t\treturn -1;\n\t\taoplp = aoplp->next;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *     
 -frees the attr list with option\n *\n * @param[in] alp - pointer to attr list\n *\n * @return      Void\n *\n */\n\nvoid\nPBS_free_aopl(struct attropl *aoplp)\n{\n\tstruct attropl *taoplp;\n\twhile (aoplp != NULL) {\n\t\tif (aoplp->name) {\n\t\t\tfree(aoplp->name);\n\t\t\taoplp->name = NULL;\n\t\t}\n\t\tif (aoplp->resource) {\n\t\t\tfree(aoplp->resource);\n\t\t\taoplp->resource = NULL;\n\t\t}\n\t\tif (aoplp->value) {\n\t\t\tfree(aoplp->value);\n\t\t\taoplp->value = NULL;\n\t\t}\n\t\ttaoplp = aoplp;\n\t\taoplp = aoplp->next;\n\t\tif (taoplp) {\n\t\t\tfree(taoplp);\n\t\t\ttaoplp = NULL;\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/lib/Libifl/advise.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdarg.h>\n#include <stdio.h>\n/**\n * @file\tadvise.c\n */\n\n/**\n * @brief - prints advise from user (variable  argument list ) to standard error file.\n *\n * @param[in] who - user name\n * @param[in] variable arguments\n *\n * @return\tVoid\n *\n */\nvoid\nadvise(char *who, ...)\n{\n\tva_list args;\n\tchar *fmt;\n\n#ifndef NDEBUG\n\n\tva_start(args, who);\n\n\tif (who == NULL || *who == '\\0')\n\t\tfprintf(stderr, \"advise:\\n\");\n\telse\n\t\tfprintf(stderr, \"advise from %s:\\n\", who);\n\tfmt = va_arg(args, char *);\n\tfputs(\"\\t\", stderr);\n\tvfprintf(stderr, fmt, args);\n\tva_end(args);\n\tfputc('\\n', stderr);\n\tfputc('\\n', stderr);\n\tfflush(stderr);\n#endif\n}\n"
  },
  {
    "path": "src/lib/Libifl/auth.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include <stdio.h>\n#include <errno.h>\n#include <arpa/inet.h>\n#include <netinet/tcp.h>\n#include <dlfcn.h>\n#include <netdb.h>\n\n#include \"dis.h\"\n#include \"pbs_ifl.h\"\n#include \"libpbs.h\"\n#include \"libsec.h\"\n#include \"auth.h\"\n#include \"log.h\"\n\nextern unsigned char pbs_aes_key[][16];\nextern unsigned char pbs_aes_iv[][16];\n#define SALT_SIZE 16\n\nstatic auth_def_t *loaded_auths = NULL;\n\nstatic int _invoke_pbs_iff(int psock, const char *server_name, int server_port, char *ebuf, size_t ebufsz);\nstatic char *_get_load_lib_error(int reset);\nstatic void *_load_lib(char *loc);\nstatic void *_load_symbol(char *libloc, void *libhandle, char *name, int required);\nstatic auth_def_t *_load_auth(char *name);\nstatic void _unload_auth(auth_def_t *auth);\n\nstatic char *\n_get_load_lib_error(int reset)\n{\n\tif (reset) {\n\t\t(void) dlerror_reset();\n\t\treturn NULL;\n\t}\n\treturn dlerror();\n}\n\nstatic void *\n_load_lib(char *loc)\n{\n\t(void) _get_load_lib_error(1);\n\treturn dlopen(loc, RTLD_LAZY);\n}\n\nstatic void *\n_load_symbol(char *libloc, void *libhandle, char *name, int required)\n{\n\tvoid *handle = NULL;\n\n\t(void) _get_load_lib_error(1);\n\thandle = dlsym(libhandle, name);\n\n\tif (required && handle == NULL) {\n\t\tchar *errmsg = 
_get_load_lib_error(0);\n\t\tif (errmsg) {\n\t\t\tfprintf(stderr, \"%s\\n\", errmsg);\n\t\t} else {\n\t\t\tfprintf(stderr, \"symbol %s not found in %s\", name, libloc);\n\t\t}\n\t\treturn NULL;\n\t}\n\treturn handle;\n}\n\nstatic auth_def_t *\n_load_auth(char *name)\n{\n\tchar libloc[MAXPATHLEN + 1] = {'\\0'};\n\tchar *errmsg = NULL;\n\tauth_def_t *auth = NULL;\n\n\tif (strcmp(name, AUTH_RESVPORT_NAME) == 0)\n\t\treturn NULL;\n\n\tauth = (auth_def_t *) calloc(1, sizeof(auth_def_t));\n\tif (auth == NULL) {\n\t\treturn NULL;\n\t}\n\n\tstrcpy(auth->name, name);\n\tauth->name[MAXAUTHNAME] = '\\0';\n\n\tsnprintf(libloc, MAXPATHLEN, \"%s/lib/libauth_%s.%s\", pbs_conf.pbs_exec_path, name, SHAREDLIB_EXT);\n\n\tlibloc[MAXPATHLEN] = '\\0';\n\n\tauth->lib_handle = _load_lib(libloc);\n\tif (auth->lib_handle == NULL) {\n\t\terrmsg = _get_load_lib_error(0);\n\t\tif (errmsg) {\n\t\t\tfprintf(stderr, \"%s\\n\", errmsg);\n\t\t} else {\n\t\t\tfprintf(stderr, \"Failed to load %s\\n\", libloc);\n\t\t}\n\t\treturn NULL;\n\t}\n\n\tauth->set_config = _load_symbol(libloc, auth->lib_handle, \"pbs_auth_set_config\", 1);\n\tif (auth->set_config == NULL)\n\t\tgoto err;\n\n\tauth->create_ctx = _load_symbol(libloc, auth->lib_handle, \"pbs_auth_create_ctx\", 1);\n\tif (auth->create_ctx == NULL)\n\t\tgoto err;\n\n\tauth->destroy_ctx = _load_symbol(libloc, auth->lib_handle, \"pbs_auth_destroy_ctx\", 1);\n\tif (auth->destroy_ctx == NULL)\n\t\tgoto err;\n\n\tauth->get_userinfo = _load_symbol(libloc, auth->lib_handle, \"pbs_auth_get_userinfo\", 1);\n\tif (auth->get_userinfo == NULL)\n\t\tgoto err;\n\n\tauth->process_handshake_data = _load_symbol(libloc, auth->lib_handle, \"pbs_auth_process_handshake_data\", 1);\n\tif (auth->process_handshake_data == NULL)\n\t\tgoto err;\n\n\t/*\n\t * There are possiblity that auth lib only support authentication\n\t * but not encrypt/decrypt of data (for example munge auth lib)\n\t * so below 2 methods are marked as NOT required\n\t * and no error check for 
_load_symbol\n\t */\n\tauth->encrypt_data = _load_symbol(libloc, auth->lib_handle, \"pbs_auth_encrypt_data\", 0);\n\tauth->decrypt_data = _load_symbol(libloc, auth->lib_handle, \"pbs_auth_decrypt_data\", 0);\n\n\treturn auth;\n\nerr:\n\t(void) _unload_auth(auth);\n\treturn NULL;\n}\n\nstatic void\n_unload_auth(auth_def_t *auth)\n{\n\tif (auth == NULL)\n\t\treturn;\n\tif (auth->lib_handle != NULL) {\n\t\t(void) dlclose(auth->lib_handle);\n\t}\n\tmemset(auth, 0, sizeof(auth_def_t));\n\tfree(auth);\n\treturn;\n}\n\n/**\n * @brief\n *\tget_auth - find and return auth definition structure for given method\n *\n * @param[in] method - auth method name\n *\n * @return\tauth_def_t *\n * @retval\t!NULL - success\n * @retval\tNULL - failure\n *\n * @note\n * \tReturned value is from global static area so\n * \tcaller MUST NOT modify it\n */\nauth_def_t *\nget_auth(char *method)\n{\n\tauth_def_t *auth = NULL;\n\n\tfor (auth = loaded_auths; auth != NULL; auth = auth->next) {\n\t\tif (strcmp(auth->name, method) == 0)\n\t\t\treturn auth;\n\t}\n\n\t/*\n\t * At this point, given method is allowed\n\t * but its authdef is not loaded\n\t * so let's try to load it\n\t */\n\tauth = _load_auth(method);\n\tif (auth == NULL)\n\t\treturn NULL;\n\tauth->next = loaded_auths;\n\tloaded_auths = auth;\n\treturn auth;\n}\n\n/**\n * @brief\n *\tload_auths - load all configured auth methods (aka PBS_SUPPORTED_AUTH_METHODS)\n *\n * @param[in] mode - AUTH_CLIENT or AUTH_SERVER\n *\n * @return\tint\n * @retval\t0 - success\n * @retval\t1 - failure\n */\nint\nload_auths(int mode)\n{\n\tif (loaded_auths != NULL)\n\t\treturn 0;\n\n\tif (strcmp(pbs_conf.auth_method, AUTH_RESVPORT_NAME) != 0) {\n\t\tauth_def_t *auth = _load_auth(pbs_conf.auth_method);\n\t\tif (auth == NULL) {\n\t\t\treturn 1;\n\t\t}\n\t\tloaded_auths = auth;\n\t}\n\n\tif (pbs_conf.encrypt_method[0] != '\\0' && strcmp(pbs_conf.auth_method, pbs_conf.encrypt_method) != 0) {\n\t\tauth_def_t *auth = _load_auth(pbs_conf.encrypt_method);\n\t\tif (auth == 
NULL) {\n\t\t\tunload_auths();\n\t\t\treturn 1;\n\t\t}\n\t\tauth->next = loaded_auths;\n\t\tloaded_auths = auth;\n\t}\n\n\tif (mode == AUTH_SERVER) {\n\t\tint i = 0;\n\t\twhile (pbs_conf.supported_auth_methods[i] != NULL) {\n\t\t\tauth_def_t *auth = NULL;\n\t\t\tif (strcmp(pbs_conf.supported_auth_methods[i], AUTH_RESVPORT_NAME) == 0) {\n\t\t\t\ti++;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tif (get_auth(pbs_conf.supported_auth_methods[i]) != NULL) {\n\t\t\t\ti++;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tauth = _load_auth(pbs_conf.supported_auth_methods[i]);\n\t\t\tif (auth == NULL) {\n\t\t\t\tunload_auths();\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t\tauth->next = loaded_auths;\n\t\t\tloaded_auths = auth;\n\t\t\ti++;\n\t\t}\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tunload_auths - unload all loaded auths\n *\n * @return\tvoid\n */\nvoid\nunload_auths(void)\n{\n\twhile (loaded_auths != NULL) {\n\t\tauth_def_t *cur = loaded_auths;\n\t\tloaded_auths = loaded_auths->next;\n\t\t_unload_auth(cur);\n\t}\n}\n\n/**\n * @brief\n *\tis_valid_encrypt_method - validate given auth method can be used as encryption/decryption or not\n *\n * @param[in] method - auth method name to be validated\n *\n * @return\tint\n * @retval\t0 - given method can't be used for encrypt/decrypt\n * @retval\t1 - given method can be used for encrypt/decrypt\n */\nint\nis_valid_encrypt_method(char *method)\n{\n\tint rc = 0;\n\tauth_def_t *auth = _load_auth(method);\n\n\tif (auth && auth->encrypt_data && auth->decrypt_data) {\n\t\trc = 1;\n\t}\n\n\t_unload_auth(auth);\n\treturn rc;\n}\n\n/**\n * @brief\n *\ttcp_send_auth_req - encodes and sends PBS_BATCH_Authenticate request\n *\n * @param[in] sock - socket descriptor\n * @param[in] port - parent port in pbs_iff (only used in resvport auth) else 0\n * @param[in] user - authenticating user name\n * @param[in] auth_method - auth method name\n * @param[in] encrypt_method - encrypt method name\n *\n * @return\tint\n * @retval\t0 on success\n * @retval\t-1 on error\n 
*/\nint\ntcp_send_auth_req(int sock, unsigned int port, const char *user, const char *auth_method, const char *encrypt_method)\n{\n\tstruct batch_reply *reply;\n\tint rc;\n\tint am_len;\n\tint em_len = encrypt_method ? strlen(encrypt_method) : 0;\n\n\tif (auth_method == NULL || *auth_method == '\\0') {\n\t\t/* auth method can't be null or empty string */\n\t\tpbs_errno = PBSE_INTERNAL;\n\t\treturn -1;\n\t}\n\tam_len = strlen(auth_method);\n\tset_conn_errno(sock, 0);\n\tset_conn_errtxt(sock, NULL);\n\n\tif (encode_DIS_ReqHdr(sock, PBS_BATCH_Authenticate, user) ||\n\t    diswui(sock, am_len) ||\t\t /* auth method length */\n\t    diswcs(sock, auth_method, am_len) || /* auth method */\n\t    diswui(sock, em_len)) {\t\t /* encrypt method length */\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn -1;\n\t}\n\n\tif (em_len > 0 && diswcs(sock, encrypt_method, em_len)) { /* encrypt method */\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn -1;\n\t}\n\n\tif (diswui(sock, port) || /* port (only used in resvport auth) */\n\t    encode_DIS_ReqExtend(sock, NULL)) {\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn -1;\n\t}\n\n\tif (dis_flush(sock)) {\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn -1;\n\t}\n\n\treply = PBSD_rdrpy_sock(sock, &rc, PROT_TCP);\n\tif (reply == NULL) {\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn -1;\n\t}\n\n\tif ((reply->brp_code != 0)) {\n\t\tpbs_errno = reply->brp_code;\n\t\tset_conn_errno(sock, pbs_errno);\n\t\tif (reply->brp_choice == BATCH_REPLY_CHOICE_Text)\n\t\t\tset_conn_errtxt(sock, reply->brp_un.brp_txt.brp_str);\n\t\tPBSD_FreeReply(reply);\n\t\treturn -1;\n\t}\n\n\tPBSD_FreeReply(reply);\n\n\treturn 0;\n}\n\n/*\n * @brief\n *\t_invoke_pbs_iff - call pbs_iff(1) to authenticate user/connection to the PBS server.\n *\n * @note\n * This would create an environment variable PBS_IFF_CLIENT_ADDR set to\n * the client's connecting address, which is made known to the pbs_iff process.\n *\n * If unable to authenticate, an attempt is made to run the old method\n * 'pbs_iff -i 
<pbs_client_addr>' also.\n *\n *\n * @param[in]  psock           Socket descriptor used by PBS client to connect PBS server.\n * @param[in]  server_name     Connecting PBS server host name.\n * @param[in]  server_port     Connecting PBS server port number.\n *\n * @return int\n * @retval  0 on success.\n * @retval -1 on failure.\n */\nstatic int\n_invoke_pbs_iff(int psock, const char *server_name, int server_port, char *ebuf, size_t ebufsz)\n{\n\tchar cmd[2][PBS_MAXSERVERNAME + 80];\n\tint k;\n\tchar *pbs_client_addr = NULL;\n\tu_short psock_port = 0;\n\tint rc;\n\tstruct sockaddr_in sockname;\n\tpbs_socklen_t socknamelen;\n#ifdef WIN32\n\tstruct pio_handles pio;\n#else\n\tint i;\n\tFILE *piff;\n#endif\n\n\tsocknamelen = sizeof(sockname);\n\tif (getsockname(psock, (struct sockaddr *) &sockname, &socknamelen))\n\t\treturn -1;\n\n\tpbs_client_addr = inet_ntoa(sockname.sin_addr);\n\tif (pbs_client_addr == NULL)\n\t\treturn -1;\n\tpsock_port = sockname.sin_port;\n\n\t/* for compatibility with 12.0 pbs_iff */\n\t(void) snprintf(cmd[1], sizeof(cmd[1]) - 1, \"%s -i %s %s %u %d %u\", pbs_conf.iff_path, pbs_client_addr, server_name, server_port, psock, psock_port);\n#ifdef WIN32\n\tif (pbs_conf.encrypt_method[0] != '\\0') {\n\t\tif (!SetEnvironmentVariable(PBS_CONF_ENCRYPT_METHOD, pbs_conf.encrypt_method)) {\n\t\t\tfprintf(stderr, \"Failed to set %s=%s with errno: %ld\", PBS_CONF_ENCRYPT_METHOD, pbs_conf.encrypt_method, GetLastError());\n\t\t\treturn -1;\n\t\t}\n\t}\n\t(void) snprintf(cmd[0], sizeof(cmd[0]) - 1, \"%s %s %u %d %u\", pbs_conf.iff_path, server_name, server_port, psock, psock_port);\n\tfor (k = 0; k < 2; k++) {\n\t\trc = 0;\n\t\tif (!SetEnvironmentVariable(PBS_IFF_CLIENT_ADDR, pbs_client_addr)) {\n\t\t\tfprintf(stderr, \"Failed to set %s=%s with errno: %ld\", PBS_IFF_CLIENT_ADDR, pbs_client_addr, GetLastError());\n\t\t\trc = -1;\n\t\t\tbreak;\n\t\t}\n\t\tif (!win_popen(cmd[k], \"r\", &pio, NULL)) {\n\t\t\tfprintf(stderr, \"failed to execute %s\\n\", 
cmd[k]);\n\t\t\tSetEnvironmentVariable(PBS_IFF_CLIENT_ADDR, NULL);\n\t\t\trc = -1;\n\t\t\tbreak;\n\t\t}\n\t\twin_pread(&pio, (char *) &rc, (int) sizeof(int));\n\t\tpbs_errno = rc;\n\t\tif (rc > 0) {\n\t\t\trc = -1;\n\t\t\twin_pread(&pio, (char *) &rc, (int) sizeof(int));\n\t\t\tif (rc > 0) {\n\t\t\t\tif (rc > (int) (ebufsz - 1))\n\t\t\t\t\trc = (int) (ebufsz - 1);\n\t\t\t\twin_pread(&pio, ebuf, rc);\n\t\t\t\tebuf[ebufsz] = '\\0';\n\t\t\t}\n\t\t\trc = -1;\n\t\t}\n\t\twin_pclose(&pio);\n\t\tSetEnvironmentVariable(PBS_IFF_CLIENT_ADDR, NULL);\n\t\tif (rc == 0)\n\t\t\tbreak;\n\t}\n\n#else  /* UNIX code here */\n\tif (pbs_conf.encrypt_method[0] != '\\0') {\n\t\tsnprintf(cmd[0], sizeof(cmd[0]) - 1, \"%s=%s %s=%s %s %s %u %d %u\",\n\t\t\t PBS_IFF_CLIENT_ADDR, pbs_client_addr,\n\t\t\t PBS_CONF_ENCRYPT_METHOD, pbs_conf.encrypt_method,\n\t\t\t pbs_conf.iff_path, server_name, server_port,\n\t\t\t psock, psock_port);\n\t} else {\n\t\tsnprintf(cmd[0], sizeof(cmd[0]) - 1, \"%s=%s %s %s %u %d %u\",\n\t\t\t PBS_IFF_CLIENT_ADDR, pbs_client_addr,\n\t\t\t pbs_conf.iff_path, server_name, server_port,\n\t\t\t psock, psock_port);\n\t}\n\n\tfor (k = 0; k < 2; k++) {\n\t\trc = -1;\n\t\tpiff = (FILE *) popen(cmd[k], \"r\");\n\t\tif (piff == NULL) {\n\t\t\tbreak;\n\t\t}\n\n\t\twhile ((i = read(fileno(piff), &rc, sizeof(int))) == -1) {\n\t\t\tif (errno != EINTR)\n\t\t\t\tbreak;\n\t\t}\n\t\tpbs_errno = rc;\n\t\tif (rc > 0) {\n\t\t\trc = -1;\n\t\t\twhile ((i = read(fileno(piff), &rc, sizeof(int))) == -1) {\n\t\t\t\tif (errno != EINTR)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\tif (rc > 0) {\n\t\t\t\tif (rc > (ebufsz - 1))\n\t\t\t\t\trc = ebufsz - 1;\n\t\t\t\twhile ((i = read(fileno(piff), (void *) ebuf, rc)) == -1) {\n\t\t\t\t\tif (errno != EINTR)\n\t\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tebuf[ebufsz] = '\\0';\n\t\t\t}\n\t\t\trc = -1;\n\t\t}\n\n\t\t(void) pclose(piff);\n\t\tif (rc == 0)\n\t\t\tbreak;\n\t}\n#endif /* end of UNIX code */\n\n\treturn rc;\n}\n\nint\nhandle_client_handshake(int fd, const char 
*hostname, char *method, int for_encrypt, pbs_auth_config_t *config, char *ebuf, size_t ebufsz)\n{\n\tvoid *data_in = NULL;\n\tsize_t len_in = 0;\n\tvoid *data_out = NULL;\n\tsize_t len_out = 0;\n\tint type = 0;\n\tint is_handshake_done = 0;\n\tvoid *authctx = NULL;\n\tauth_def_t *authdef = NULL;\n\n\tauthdef = get_auth(method);\n\tif (authdef == NULL) {\n\t\tsnprintf(ebuf, ebufsz, \"Failed to find authdef\");\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn -1;\n\t}\n\n\tDIS_tcp_funcs();\n\n\ttransport_chan_set_authdef(fd, authdef, for_encrypt);\n\tauthdef->set_config((const pbs_auth_config_t *) config);\n\tif ((authctx = transport_chan_get_authctx(fd, for_encrypt)) == NULL) {\n\t\tif (authdef->create_ctx(&authctx, AUTH_CLIENT, AUTH_USER_CONN, hostname)) {\n\t\t\tsnprintf(ebuf, ebufsz, \"Failed to create auth context\");\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\treturn -1;\n\t\t}\n\t\ttransport_chan_set_authctx(fd, authctx, for_encrypt);\n\t}\n\n\tdo {\n\t\tif (authdef->process_handshake_data(authctx, data_in, len_in, &data_out, &len_out, &is_handshake_done) != 0) {\n\t\t\tif (len_out > 0) {\n\t\t\t\tsize_t len = len_out;\n\t\t\t\tif (len > ebufsz)\n\t\t\t\t\tlen = ebufsz;\n\t\t\t\tstrncpy(ebuf, (char *) data_out, len);\n\t\t\t\tebuf[len] = '\\0';\n\t\t\t\tfree(data_out);\n\t\t\t} else {\n\t\t\t\tsnprintf(ebuf, ebufsz, \"auth_process_handshake_data failure\");\n\t\t\t}\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\treturn -1;\n\t\t}\n\n\t\tif (len_in) {\n\t\t\tdata_in = NULL;\n\t\t\tlen_in = 0;\n\t\t}\n\n\t\tif (len_out > 0) {\n\t\t\tif (transport_send_pkt(fd, AUTH_CTX_DATA, data_out, len_out) <= 0) {\n\t\t\t\tsnprintf(ebuf, ebufsz, \"Failed to send auth context token\");\n\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\tfree(data_out);\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\tfree(data_out);\n\t\t\tdata_out = NULL;\n\t\t\tlen_out = 0;\n\t\t}\n\n\t\t/* receive ctx token */\n\t\tif (transport_recv_pkt(fd, &type, &data_in, &len_in) <= 0) {\n\t\t\tsnprintf(ebuf, ebufsz, \"Failed to receive auth 
token\");\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\treturn -1;\n\t\t}\n\n\t\tif (type == AUTH_ERR_DATA) {\n\t\t\tif (len_in >= ebufsz)\n\t\t\t\tlen_in = ebufsz - 1;\n\t\t\tstrncpy(ebuf, (char *) data_in, len_in);\n\t\t\tebuf[len_in] = '\\0';\n\t\t\tpbs_errno = PBSE_BADCRED;\n\t\t\treturn -1;\n\t\t}\n\n\t\tif ((is_handshake_done == 0 && type != AUTH_CTX_DATA) || (is_handshake_done == 1 && type != AUTH_CTX_OK)) {\n\t\t\tsnprintf(ebuf, ebufsz, \"incorrect auth token type\");\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\treturn -1;\n\t\t}\n\n\t\tif (type == AUTH_CTX_OK) {\n\t\t\tdata_in = NULL;\n\t\t\tlen_in = 0;\n\t\t}\n\n\t\tif (is_handshake_done == 1) {\n\t\t\ttransport_chan_set_ctx_status(fd, AUTH_STATUS_CTX_READY, for_encrypt);\n\t\t\ttransport_chan_set_authctx(fd, authctx, for_encrypt);\n\t\t}\n\n\t} while (is_handshake_done == 0);\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \tfree_auth_config - free auth config structure\n *\n * @param[in] config - auth config structure to be freed\n *\n * @return\tvoid\n *\n */\nvoid\nfree_auth_config(pbs_auth_config_t *config)\n{\n\tif (config) {\n\t\tif (config->auth_method)\n\t\t\tfree(config->auth_method);\n\t\tif (config->encrypt_method)\n\t\t\tfree(config->encrypt_method);\n\t\tif (config->pbs_exec_path)\n\t\t\tfree(config->pbs_exec_path);\n\t\tif (config->pbs_home_path)\n\t\t\tfree(config->pbs_home_path);\n\t\tfree(config);\n\t}\n}\n\n/**\n * @brief\n * \tmake_auth_config - allocate and return auth config structure\n *\n * @param[in] auth_method - auth method name\n * @param[in] encrypt_method - encrypt method name\n * @param[in] exec_path - path to PBS_EXEC\n * @param[in] home_path - path to PBS_HOME\n * @param[in] logger - pointer to logger function for auth lib\n *\n * @return\tpbs_auth_config_t *\n * @retval\t!NULL\tsuccess\n * @retval\tNULL\tfailure\n *\n */\npbs_auth_config_t *\nmake_auth_config(char *auth_method, char *encrypt_method, char *exec_path, char *home_path, void *logger)\n{\n\tpbs_auth_config_t *config = NULL;\n\n\tconfig = (pbs_auth_config_t *) 
malloc(sizeof(pbs_auth_config_t));\n\tif (config == NULL)\n\t\treturn NULL;\n\n\tconfig->auth_method = strdup(auth_method);\n\tif (config->auth_method == NULL) {\n\t\tfree(config);\n\t\treturn NULL;\n\t}\n\tconfig->encrypt_method = strdup(encrypt_method);\n\tif (config->encrypt_method == NULL) {\n\t\tfree(config->auth_method);\n\t\tfree(config);\n\t\treturn NULL;\n\t}\n\tconfig->pbs_exec_path = strdup(exec_path);\n\tif (config->pbs_exec_path == NULL) {\n\t\tfree(config->auth_method);\n\t\tfree(config->encrypt_method);\n\t\tfree(config);\n\t\treturn NULL;\n\t}\n\tconfig->pbs_home_path = strdup(home_path);\n\tif (config->pbs_home_path == NULL) {\n\t\tfree(config->auth_method);\n\t\tfree(config->encrypt_method);\n\t\tfree(config->pbs_exec_path);\n\t\tfree(config);\n\t\treturn NULL;\n\t}\n\tconfig->logfunc = logger;\n\treturn config;\n}\n\n/**\n * @brief\n * \tengage_client_auth - this function handles client-side authentication\n *\n * @param[in] fd - socket descriptor\n * @param[in] hostname - server hostname\n * @param[in] port - server port\n * @param[out] ebuf - error buffer\n * @param[in] ebufsz - size of error buffer\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t-1\tfailure\n *\n */\nint\nengage_client_auth(int fd, const char *hostname, int port, char *ebuf, size_t ebufsz)\n{\n\tint rc;\n\tstatic pbs_auth_config_t *config = NULL;\n\n\tif (config == NULL) {\n\t\tconfig = make_auth_config(pbs_conf.auth_method,\n\t\t\t\t\t  pbs_conf.encrypt_method,\n\t\t\t\t\t  pbs_conf.pbs_exec_path,\n\t\t\t\t\t  pbs_conf.pbs_home_path,\n\t\t\t\t\t  NULL);\n\t\tif (config == NULL) {\n\t\t\tsnprintf(ebuf, ebufsz, \"Out of memory in %s!\", __func__);\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\treturn -1;\n\t\t}\n\t}\n\n\tif (strcmp(pbs_conf.auth_method, AUTH_RESVPORT_NAME) == 0) {\n\t\tif ((rc = CS_client_auth(fd)) == CS_SUCCESS)\n\t\t\treturn (0);\n\n\t\tif (rc == CS_AUTH_USE_IFF) {\n\t\t\tif (_invoke_pbs_iff(fd, hostname, port, ebuf, ebufsz) != 0) {\n\t\t\t\tsnprintf(ebuf, ebufsz, \"Unable to authenticate 
connection (%s:%d)\", hostname, port);\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t}\n\t} else {\n\t\tif (tcp_send_auth_req(fd, 0, pbs_current_user, pbs_conf.auth_method, pbs_conf.encrypt_method) != 0) {\n\t\t\tsnprintf(ebuf, ebufsz, \"Failed to send auth request\");\n\t\t\treturn -1;\n\t\t}\n\t}\n\n\tif (pbs_conf.encrypt_method[0] != '\\0') {\n\t\trc = handle_client_handshake(fd, hostname, pbs_conf.encrypt_method, FOR_ENCRYPT, config, ebuf, ebufsz);\n\t\tif (rc != 0)\n\t\t\treturn rc;\n\t}\n\n\tif (strcmp(pbs_conf.auth_method, AUTH_RESVPORT_NAME) != 0) {\n\t\tif (strcmp(pbs_conf.auth_method, pbs_conf.encrypt_method) != 0) {\n\t\t\treturn handle_client_handshake(fd, hostname, pbs_conf.auth_method, FOR_AUTH, config, ebuf, ebufsz);\n\t\t} else {\n\t\t\ttransport_chan_set_ctx_status(fd, transport_chan_get_ctx_status(fd, FOR_ENCRYPT), FOR_AUTH);\n\t\t\ttransport_chan_set_authdef(fd, transport_chan_get_authdef(fd, FOR_ENCRYPT), FOR_AUTH);\n\t\t\ttransport_chan_set_authctx(fd, transport_chan_get_authctx(fd, FOR_ENCRYPT), FOR_AUTH);\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \tengage_server_auth - this function handles incoming authentication related data\n *\n * @param[in] fd - socket descriptor\n * @param[in] clienthost - hostname associated with fd\n * @param[in] for_encrypt - whether to handle incoming data for encrypt/decrypt auth or for authentication\n * @param[in] auth_role - role used when initializing the auth context\n * @param[out] ebuf - error buffer\n * @param[in] ebufsz - size of error buffer\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t-1\tfailure\n *\n */\nint\nengage_server_auth(int fd, char *clienthost, int for_encrypt, int auth_role, char *ebuf, size_t ebufsz)\n{\n\tvoid *authctx;\n\tauth_def_t *authdef;\n\tvoid *data_in = NULL;\n\tsize_t len_in = 0;\n\tvoid *data_out = NULL;\n\tsize_t len_out = 0;\n\tint type;\n\tint is_handshake_done = 0;\n\n\tDIS_tcp_funcs();\n\n\tif (transport_chan_get_ctx_status(fd, for_encrypt) != (int) 
AUTH_STATUS_CTX_ESTABLISHING) {\n\t\t/*\n\t\t * auth ctx not establishing\n\t\t * consider data as clear text or encrypted data\n\t\t * BUT not auth ctx data\n\t\t */\n\t\treturn 1;\n\t}\n\n\tauthdef = transport_chan_get_authdef(fd, for_encrypt);\n\tif (authdef == NULL) {\n\t\tsnprintf(ebuf, ebufsz, \"No authdef associated with connection\");\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn -1;\n\t}\n\n\tif ((authctx = transport_chan_get_authctx(fd, for_encrypt)) == NULL) {\n\t\tif (authdef->create_ctx(&authctx, auth_role, AUTH_USER_CONN, clienthost)) {\n\t\t\tsnprintf(ebuf, ebufsz, \"Failed to create auth context\");\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\treturn -1;\n\t\t}\n\t\ttransport_chan_set_authctx(fd, authctx, for_encrypt);\n\t}\n\n\tif (transport_recv_pkt(fd, &type, &data_in, &len_in) <= 0) {\n\t\tsnprintf(ebuf, ebufsz, \"failed to receive auth token\");\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn -1;\n\t}\n\n\tif (type != AUTH_CTX_DATA) {\n\t\tsnprintf(ebuf, ebufsz, \"received incorrect auth token\");\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn -1;\n\t}\n\n\tif (authdef->process_handshake_data(authctx, data_in, len_in, &data_out, &len_out, &is_handshake_done) != 0) {\n\t\tif (len_out > 0) {\n\t\t\tsize_t len = len_out;\n\t\t\tif (len >= ebufsz)\n\t\t\t\tlen = ebufsz - 1;\n\t\t\tstrncpy(ebuf, (char *) data_out, len);\n\t\t\tebuf[len] = '\\0';\n\t\t\t(void) transport_send_pkt(fd, AUTH_ERR_DATA, data_out, len_out);\n\t\t\tfree(data_out);\n\t\t} else {\n\t\t\tsnprintf(ebuf, ebufsz, \"auth_process_handshake_data failure\");\n\t\t\t(void) transport_send_pkt(fd, AUTH_ERR_DATA, \"Unknown auth error\", strlen(\"Unknown auth error\"));\n\t\t}\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn -1;\n\t}\n\n\tif (len_out > 0) {\n\t\tif (transport_send_pkt(fd, AUTH_CTX_DATA, data_out, len_out) <= 0) {\n\t\t\tsnprintf(ebuf, ebufsz, \"Failed to send auth context token\");\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\tfree(data_out);\n\t\t\treturn -1;\n\t\t}\n\t}\n\n\tfree(data_out);\n\n\tif (is_handshake_done == 1) {\n\t\tif 
(transport_send_pkt(fd, AUTH_CTX_OK, \"\", 1) <= 0) {\n\t\t\tsnprintf(ebuf, ebufsz, \"Failed to send auth context ok token\");\n\t\t\treturn -1;\n\t\t}\n\t\ttransport_chan_set_ctx_status(fd, AUTH_STATUS_CTX_READY, for_encrypt);\n\t\ttransport_chan_set_authctx(fd, authctx, for_encrypt);\n\t}\n\n\tif (for_encrypt == FOR_ENCRYPT) {\n\t\tauth_def_t *def = transport_chan_get_authdef(fd, FOR_AUTH);\n\t\tif (def != NULL && def == authdef) {\n\t\t\ttransport_chan_set_ctx_status(fd, AUTH_STATUS_CTX_READY, FOR_AUTH);\n\t\t\ttransport_chan_set_authctx(fd, authctx, FOR_AUTH);\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tserver_cipher_auth - Validate received cipher authentication data\n *\n * @param[in] fd - The exec host side socket\n * @param[in] text - The text that will be compared against the one in the encrypted message.\n * @param[in] ebuf - buffer to hold error msg if any\n * @param[in] ebufsz - size of ebuf\n *\n * @return\tint\n * @retval\tINTERACTIVE_AUTH_SUCCESS(0)\tsuccess\n * @retval\tINTERACTIVE_AUTH_FAILED(1)\tfailure\n *\n */\nint\nserver_cipher_auth(int fd, char *text, char *ebuf, size_t ebufsz)\n{\n\tsize_t len_in = 0;\n\tchar *data_in = NULL;\n\tint type;\n\n\tDIS_tcp_funcs();\n\n\tif (transport_recv_pkt(fd, &type, (void **) &data_in, &len_in) <= 0) {\n\t\tsnprintf(ebuf, ebufsz, \"failed to receive auth token\");\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn INTERACTIVE_AUTH_FAILED;\n\t}\n\n\tif (type != AUTH_CTX_DATA || len_in <= 0) {\n\t\tsnprintf(ebuf, ebufsz, \"received incorrect auth token\");\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn INTERACTIVE_AUTH_FAILED;\n\t}\n\n\tdata_in[len_in - 1] = '\\0';\n\n\tif (validate_hostkey(data_in, len_in - 1, &text) != 0) {\n\t\tsnprintf(ebuf, ebufsz, \"Failed to decrypt auth data\");\n\t\tpbs_errno = PBSE_BADCRED;\n\t\treturn INTERACTIVE_AUTH_FAILED;\n\t}\n\n\tif (transport_send_pkt(fd, AUTH_CTX_OK, \"\", 1) <= 0) {\n\t\tsnprintf(ebuf, ebufsz, \"Failed to send auth context ok token\");\n\t\tpbs_errno = 
PBSE_SYSTEM;\n\t\treturn INTERACTIVE_AUTH_FAILED;\n\t}\n\n\treturn INTERACTIVE_AUTH_SUCCESS;\n}\n\n/**\n * @brief\n *\tclient_cipher_auth - Generate random string, encode it and send it over to qsub\n *\n * @param[in] fd - The qsub side socket\n * @param[in] text - The text that will be included in the encrypted message.\n * @param[out] ebuf - buffer to hold error msg if any\n * @param[in] ebufsz - size of ebuf\n *\n * @return\tint\n * @retval\tINTERACTIVE_AUTH_SUCCESS(0)\tsuccess\n * @retval\tINTERACTIVE_AUTH_FAILED(1)\tfailure\n *\n */\nint\nclient_cipher_auth(int fd, char *text, char *ebuf, size_t ebufsz)\n{\n\tvoid *data_in = NULL;\n\tsize_t len = 0;\n\tint type = 0;\n\tchar salt[SALT_SIZE];\n\tchar *msg = NULL;\n\n\tDIS_tcp_funcs();\n\n\t/* Generate random salt and append current time and text to it.\n\t * In the end the string will look like this: salt;timestr;text\n\t */\n\tset_rand_str(salt, SALT_SIZE);\n\tmsg = gen_hostkey(text, salt, &len);\n\tif (!msg) {\n\t\tsnprintf(ebuf, ebufsz, \"Failed to encode auth data\");\n\t\treturn INTERACTIVE_AUTH_FAILED;\n\t}\n\n\tif (transport_send_pkt(fd, AUTH_CTX_DATA, msg, len + 1) <= 0) {\n\t\tsnprintf(ebuf, ebufsz, \"Failed to send auth context token\");\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\tfree(msg);\n\t\treturn INTERACTIVE_AUTH_FAILED;\n\t}\n\tfree(msg);\n\tmsg = NULL;\n\n\t/* receive ctx token */\n\tif (transport_recv_pkt(fd, &type, &data_in, &len) <= 0) {\n\t\tsnprintf(ebuf, ebufsz, \"Failed to receive auth token\");\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn INTERACTIVE_AUTH_FAILED;\n\t}\n\n\tif (type == AUTH_ERR_DATA) {\n\t\tif (len >= ebufsz)\n\t\t\tlen = ebufsz - 1;\n\t\tstrncpy(ebuf, (char *) data_in, len);\n\t\tebuf[len] = '\\0';\n\t\tpbs_errno = PBSE_BADCRED;\n\t\treturn INTERACTIVE_AUTH_FAILED;\n\t}\n\n\tif (type != AUTH_CTX_OK) {\n\t\tsnprintf(ebuf, ebufsz, \"incorrect auth token type\");\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn INTERACTIVE_AUTH_FAILED;\n\t}\n\n\tif (type == 
AUTH_CTX_OK)\n\t\tdata_in = NULL;\n\n\treturn INTERACTIVE_AUTH_SUCCESS;\n}\n\n/**\n * @brief\n *\tAuthenticate an incoming connection from an execution host.\n *\n * @param[in]\tsock - An integer representing the socket file descriptor.\n * @param[in]\tfrom - The address from which the connection originated\n * @param[in]\tauth_method - Authentication method used\n * @param[in]\tencrypt_method - Encryption method used\n * @param[in]\tjobid - The job id.\n * @return\tint\n * @retval\tINTERACTIVE_AUTH_SUCCESS (0) - successful authentication\n * @retval\tINTERACTIVE_AUTH_FAILED (1) - authentication failed\n * @retval\tINTERACTIVE_AUTH_RETRY (2) - retry to connect\n *\n */\nint\nauth_exec_socket(int sock, struct sockaddr_in *from, char *auth_method, char *encrypt_method, char *jobid)\n{\n\tchar ebuf[LOG_BUF_SIZE] = \"\";\n\t/* Reduce timeout to avoid blocking too long */\n\tpbs_tcp_timeout = PBS_DIS_TCP_TIMEOUT_SHORT;\n\tunsigned short port = ntohs(GET_IP_PORT(from));\n\n\tif (strcmp(auth_method, AUTH_RESVPORT_NAME) == 0) {\n\t\t/* For resvport, simply verify that the remote port is privileged */\n\t\tif (port >= IPPORT_RESERVED)\n\t\t\treturn INTERACTIVE_AUTH_RETRY;\n\t}\n\n\tif ((strcmp(auth_method, AUTH_MUNGE_NAME) == 0)) {\n\t\tencrypt_method[0] = '\\0';\n\t\tpbs_auth_config_t *auth_config = NULL;\n\t\tauth_def_t *authdef = NULL;\n\n\t\tDIS_tcp_funcs();\n\n\t\tif (load_auths(AUTH_CLIENT)) {\n\t\t\tfprintf(stderr, \"qsub: Failed to load auths\\n\");\n\t\t\treturn INTERACTIVE_AUTH_FAILED;\n\t\t}\n\n\t\tauth_config = make_auth_config(auth_method,\n\t\t\t\t\t       encrypt_method,\n\t\t\t\t\t       pbs_conf.pbs_exec_path,\n\t\t\t\t\t       pbs_conf.pbs_home_path,\n\t\t\t\t\t       NULL);\n\t\tif (auth_config == NULL) {\n\t\t\tfprintf(stderr, \"qsub: Out of memory when allocating new auth config\\n\");\n\t\t\treturn INTERACTIVE_AUTH_FAILED;\n\t\t}\n\n\t\tauthdef = get_auth(auth_method);\n\t\tif (authdef == NULL) {\n\t\t\tfprintf(stderr, \"qsub: Auth method '%s' does not seem implemented\\n\", auth_method ? 
auth_method : \"\");\n\t\t\tfree_auth_config(auth_config);\n\t\t\treturn INTERACTIVE_AUTH_FAILED;\n\t\t} else {\n\t\t\tauthdef->set_config((const pbs_auth_config_t *) auth_config);\n\t\t\ttransport_chan_set_authdef(sock, authdef, FOR_AUTH);\n\t\t\ttransport_chan_set_ctx_status(sock, AUTH_STATUS_CTX_ESTABLISHING, FOR_AUTH);\n\t\t}\n\t\tif (engage_server_auth(sock, \"\", FOR_AUTH, AUTH_INTERACTIVE, ebuf, sizeof(ebuf)) != 0) {\n\t\t\tif (ebuf[0] != '\\0')\n\t\t\t\tfprintf(stderr, \"qsub: %s\\n\", ebuf);\n\t\t\tfree_auth_config(auth_config);\n\t\t\t/* If authentication process failed, retry to connect with another execution host */\n\t\t\treturn INTERACTIVE_AUTH_RETRY;\n\t\t}\n\t\tfree_auth_config(auth_config);\n\t}\n\n\tif (strcmp(auth_method, AUTH_GSS_NAME) == 0 || strcmp(encrypt_method, AUTH_GSS_NAME) == 0) {\n\t\tpbs_auth_config_t *auth_config = NULL;\n\t\tauth_def_t *authdef = NULL;\n\t\tchar *hostname;\n\t\tint for_encrypt;\n\t\tchar *method;\n\n\t\tmethod = strcmp(auth_method, AUTH_GSS_NAME) == 0 ? auth_method : encrypt_method;\n\n\t\tif (load_auths(AUTH_CLIENT)) {\n\t\t\tfprintf(stderr, \"qsub: Failed to load auths\\n\");\n\t\t\treturn INTERACTIVE_AUTH_FAILED;\n\t\t}\n\n\t\tauth_config = make_auth_config(method,\n\t\t\t\t\t       encrypt_method,\n\t\t\t\t\t       pbs_conf.pbs_exec_path,\n\t\t\t\t\t       pbs_conf.pbs_home_path,\n\t\t\t\t\t       NULL);\n\t\tif (auth_config == NULL) {\n\t\t\tfprintf(stderr, \"qsub: Out of memory when allocating new auth config\\n\");\n\t\t\treturn INTERACTIVE_AUTH_FAILED;\n\t\t}\n\n\t\tauthdef = get_auth(method);\n\t\tif (authdef == NULL) {\n\t\t\tfprintf(stderr, \"qsub: Auth method '%s' does not seem implemented\\n\", auth_method ? 
auth_method : \"\");\n\t\t\tfree_auth_config(auth_config);\n\t\t\treturn INTERACTIVE_AUTH_FAILED;\n\t\t}\n\n\t\thostname = get_hostname_from_addr(from->sin_addr);\n\t\tif (hostname == NULL) {\n\t\t\tfprintf(stderr, \"qsub: Unable to resolve host address\\n\");\n\t\t\tfree_auth_config(auth_config);\n\t\t\treturn INTERACTIVE_AUTH_RETRY;\n\t\t}\n\n\t\tif (encrypt_method[0] == '\\0') {\n\t\t\tfor_encrypt = FOR_AUTH;\n\t\t} else {\n\t\t\tfor_encrypt = FOR_ENCRYPT;\n\t\t}\n\n\t\tif (handle_client_handshake(sock, hostname, method, for_encrypt, auth_config, ebuf, sizeof(ebuf)) != 0) {\n\t\t\tfprintf(stderr, \"qsub: %s\\n\", ebuf);\n\t\t\tfree_auth_config(auth_config);\n\t\t\treturn INTERACTIVE_AUTH_RETRY;\n\t\t}\n\t}\n\n\tif (!is_string_in_arr(pbs_conf.supported_auth_methods, auth_method)) {\n\t\tfprintf(stderr, \"qsub: Auth method '%s' not supported\\n\", auth_method ? auth_method : \"\");\n\t\treturn INTERACTIVE_AUTH_FAILED;\n\t}\n\treturn INTERACTIVE_AUTH_SUCCESS;\n}\n\n/**\n * @brief\n *\tAuthenticate a connection to a remote qsub.\n *\n * @param[in]\tsock - An integer representing the socket file descriptor.\n * @param[in]\tport - The port we connected to\n * @param[in]\thostname - The qsub hostname.\n * @param[in]\tauth_method - The interactive auth method suggested by qsub.\n * @param[in]\tjobid - The job id.\n *\n * @return\tint\n * @retval\tINTERACTIVE_AUTH_SUCCESS (0) - successful authentication\n * @retval\tINTERACTIVE_AUTH_FAILED (1) - authentication failed\n *\n */\nint auth_with_qsub(int sock, unsigned short port, char* hostname, char *auth_method, char *encrypt_method, char *jobid)\n{\n\tchar ebuf[LOG_BUF_SIZE] = \"\";\n\n\t/* If auth_method is resvport, we have already connected with a privileged port */\n\n\tif ((strcmp(auth_method, AUTH_GSS_NAME) == 0) || strcmp(encrypt_method, AUTH_GSS_NAME) == 0) {\n\t\tpbs_auth_config_t *auth_config = NULL;\n\t\tauth_def_t *authdef = NULL;\n\t\tint for_encrypt;\n\t\tchar *method;\n\n\t\tmethod = strcmp(auth_method, 
AUTH_GSS_NAME) == 0 ? auth_method : encrypt_method;\n\n\t\tif (!is_string_in_arr(pbs_conf.supported_auth_methods, method)) {\n\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR, jobid, \"Auth method '%s' not supported\", method ? method : \"\");\n\t\t\treturn INTERACTIVE_AUTH_FAILED;\n\t\t}\n\n\t\tDIS_tcp_funcs();\n\n\t\t/* user credentials could be expired on qsub side, wait for user to possibly refresh credentials manually */\n\t\tpbs_tcp_timeout = PBS_DIS_TCP_TIMEOUT_VLONG;\n\n\t\tif (load_auths(AUTH_SERVER)) {\n\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR, jobid, \"Failed to load auths\");\n\t\t\treturn INTERACTIVE_AUTH_FAILED;\n\t\t}\n\n\t\tauth_config = make_auth_config(method,\n\t\t\t\t\t\t\tencrypt_method,\n\t\t\t\t\t\t\tpbs_conf.pbs_exec_path,\n\t\t\t\t\t\t\tpbs_conf.pbs_home_path,\n\t\t\t\t\t\t\tNULL);\n\t\tif (auth_config == NULL) {\n\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR, jobid, \"Out of memory when allocating new auth config\");\n\t\t\treturn INTERACTIVE_AUTH_FAILED;\n\t\t}\n\n\t\tif (encrypt_method[0] == '\\0') {\n\t\t\tfor_encrypt = FOR_AUTH;\n\t\t} else {\n\t\t\tfor_encrypt = FOR_ENCRYPT;\n\t\t}\n\n\t\tauthdef = get_auth(method);\n\t\tif (authdef == NULL) {\n\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR, jobid, \"Auth method '%s' does not seem implemented\\n\", method ? 
method : \"\");\n\t\t\tfree_auth_config(auth_config);\n\t\t\treturn INTERACTIVE_AUTH_FAILED;\n\t\t} else {\n\t\t\tauthdef->set_config((const pbs_auth_config_t *) auth_config);\n\t\t\ttransport_chan_set_authdef(sock, authdef, for_encrypt);\n\t\t\ttransport_chan_set_ctx_status(sock, AUTH_STATUS_CTX_ESTABLISHING, for_encrypt);\n\t\t}\n\n\t\t/* run handshake loop */\n\t\twhile (transport_chan_get_ctx_status(sock, for_encrypt) == (int) AUTH_STATUS_CTX_ESTABLISHING) {\n\t\t\tif (engage_server_auth(sock, hostname, for_encrypt, AUTH_INTERACTIVE, ebuf, sizeof(ebuf)) != 0) {\n\t\t\t\tif (ebuf[0] != '\\0')\n\t\t\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR, jobid, \"qsub: %s\\n\", ebuf);\n\t\t\t\tfree_auth_config(auth_config);\n\t\t\t\treturn INTERACTIVE_AUTH_FAILED;\n\t\t\t}\n\t\t}\n\t}\n\n\tif ((strcmp(auth_method, AUTH_MUNGE_NAME) == 0)) {\n\t\tencrypt_method[0] = '\\0';\n\t\tpbs_auth_config_t *auth_config = NULL;\n\n\t\tif (!is_string_in_arr(pbs_conf.supported_auth_methods, auth_method)) {\n\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR, jobid, \"Auth method '%s' not supported\", auth_method ? 
auth_method : \"\");\n\t\t\treturn INTERACTIVE_AUTH_FAILED;\n\t\t}\n\n\t\tDIS_tcp_funcs();\n\n\t\tif (load_auths(AUTH_CLIENT)) {\n\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR, jobid, \"Failed to load auths\");\n\t\t\treturn INTERACTIVE_AUTH_FAILED;\n\t\t}\n\n\t\tauth_config = make_auth_config(auth_method,\n\t\t\t\t\t\t\tencrypt_method,\n\t\t\t\t\t\t\tpbs_conf.pbs_exec_path,\n\t\t\t\t\t\t\tpbs_conf.pbs_home_path,\n\t\t\t\t\t\t\tNULL);\n\t\tif (auth_config == NULL) {\n\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR, jobid, \"Out of memory when allocating new auth config\");\n\t\t\treturn INTERACTIVE_AUTH_FAILED;\n\t\t}\n\n\t\tif (handle_client_handshake(sock, hostname, auth_method, FOR_AUTH, auth_config, ebuf, sizeof(ebuf)) != 0) {\n\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR, jobid, \"Error in handle client handshake: %s\", ebuf);\n\t\t\tfree_auth_config(auth_config);\n\t\t\treturn INTERACTIVE_AUTH_FAILED;\n\t\t}\n\t\tfree_auth_config(auth_config);\n\t}\n\n\treturn INTERACTIVE_AUTH_SUCCESS;\n}"
  },
  {
    "path": "src/lib/Libifl/conn_table.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include \"pbs_config.h\"\n#include <errno.h>\n#include \"libpbs.h\"\n\nstatic pbs_conn_t **connection = NULL;\nstatic int curr_connection_sz = 0;\nstatic int allocated_connection = 0;\n\nstatic pbs_conn_t *get_connection(int);\nstatic int destroy_conntable(void);\nstatic void _destroy_connection(int);\nstatic int add_connection(int fd);\n\n#ifdef WIN32\n#define INVALID_SOCK(x) (x == INVALID_SOCKET || x < 0 || x >= PBS_LOCAL_CONNECTION)\n#else\n#define INVALID_SOCK(x) (x < 0 || x >= PBS_LOCAL_CONNECTION)\n#endif\n\n#if defined(linux)\n#define MUTEX_TYPE PTHREAD_MUTEX_RECURSIVE_NP\n#else\n#define MUTEX_TYPE PTHREAD_MUTEX_RECURSIVE\n#endif\n\n#define LOCK_TABLE(x)                                               \\\n\tdo {                                                        \\\n\t\tif (pbs_client_thread_init_thread_context() != 0) { \\\n\t\t\treturn (x);                                 \\\n\t\t}                                                   \\\n\t\tif (pbs_client_thread_lock_conntable() != 0) {      \\\n\t\t\treturn (x);                                 \\\n\t\t}                                                   \\\n\t} while (0)\n\n#define UNLOCK_TABLE(x)                                          \\\n\tdo {                                                     \\\n\t\tif (pbs_client_thread_unlock_conntable() != 0) { \\\n\t\t\treturn (x);      
                         \\\n\t\t}                                                \\\n\t} while (0)\n\n/**\n * @brief\n * \tadd_connection - Add given fd in connection table and initialize its structures\n *\n * @note: connection table locking/unlocking should be handled by caller\n *\n * @param[in] fd - socket number\n *\n * @return int\n *\n * @retval 0 - success\n * @retval -1 - error\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic int\nadd_connection(int fd)\n{\n\tpthread_mutexattr_t attr = {{0}};\n\n\tif (INVALID_SOCK(fd))\n\t\treturn -1;\n\n\tif (fd >= curr_connection_sz) {\n\t\tvoid *p = NULL;\n\t\tint new_sz = fd + 10;\n\t\tp = realloc(connection, new_sz * sizeof(pbs_conn_t *));\n\t\tif (p == NULL)\n\t\t\treturn -1; /* table not grown; connection[fd] would be out of bounds */\n\t\tconnection = (pbs_conn_t **) (p);\n\t\tmemset((connection + curr_connection_sz), 0, (new_sz - curr_connection_sz) * sizeof(pbs_conn_t *));\n\t\tcurr_connection_sz = new_sz;\n\t}\n\tif (connection[fd] == NULL) {\n\t\tconnection[fd] = calloc(1, sizeof(pbs_conn_t));\n\t\tif (connection[fd] == NULL)\n\t\t\tgoto add_connection_err;\n\t\tif (pthread_mutexattr_init(&attr) != 0)\n\t\t\tgoto add_connection_err;\n\t\tif (pthread_mutexattr_settype(&attr, MUTEX_TYPE) != 0)\n\t\t\tgoto add_connection_err;\n\t\tif (pthread_mutex_init(&(connection[fd]->ch_mutex), &attr) != 0)\n\t\t\tgoto add_connection_err;\n\t\t(void) pthread_mutexattr_destroy(&attr);\n\t\tallocated_connection++;\n\t} else {\n\t\tif (connection[fd]->ch_errtxt)\n\t\t\tfree(connection[fd]->ch_errtxt);\n\t\tconnection[fd]->ch_errtxt = NULL;\n\t\tconnection[fd]->ch_errno = 0;\n\t}\n\n\treturn 0;\n\nadd_connection_err:\n\tif (connection[fd]) {\n\t\tfree(connection[fd]);\n\t\tconnection[fd] = NULL;\n\t}\n\treturn -1;\n}\n\n/** @brief\n *\t_destroy_connection - destroy connection in connection table\n *\n * @param[in] fd - file descriptor\n *\n * @return void\n *\n */\nstatic void\n_destroy_connection(int fd)\n{\n\tif (connection[fd]) {\n\t\tif 
(connection[fd]->ch_errtxt)\n\t\t\tfree(connection[fd]->ch_errtxt);\n\t\tpthread_mutex_destroy(&(connection[fd]->ch_mutex));\n\t\t/*\n\t\t * DON'T free connection[i]->ch_chan\n\t\t * it should be done by dis_destroy_chan\n\t\t */\n\t\tfree(connection[fd]);\n\t\tallocated_connection--;\n\t}\n\tconnection[fd] = NULL;\n}\n\n/** @brief\n * \tdestroy_conntable - destroy connection table\n *\n * @return int\n * @retval 0 - success\n * @retval -1 - failure\n *\n */\nstatic int\ndestroy_conntable(void)\n{\n\tint i = 0;\n\n\tif (curr_connection_sz <= 0)\n\t\treturn 0;\n\n\tLOCK_TABLE(-1);\n\tfor (i = 0; i < curr_connection_sz; i++) {\n\t\tif (connection[i]) {\n\t\t\t_destroy_connection(i);\n\t\t}\n\t}\n\tfree(connection);\n\tconnection = NULL;\n\tcurr_connection_sz = 0;\n\tUNLOCK_TABLE(-1);\n\n\treturn 0;\n}\n\n/** @brief\n *\tdestroy_connection - destroy connection in connection table\n *\n * @param[in] fd - file descriptor\n *\n * @return int\n * @retval 0 - success\n * @retval -1 - failure\n *\n */\nint\ndestroy_connection(int fd)\n{\n\tif (INVALID_SOCK(fd))\n\t\treturn -1;\n\n\tif (fd >= curr_connection_sz || allocated_connection == 0)\n\t\treturn 0;\n\n\tLOCK_TABLE(-1);\n\t_destroy_connection(fd);\n\tUNLOCK_TABLE(-1);\n\n\tif (allocated_connection == 0)\n\t\treturn destroy_conntable();\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \tget_connection - get the connection structure associated with fd\n *\n * \tIf given fd is not part of connection table or not initialized then\n * \tthis func will call add_connection(fd) to add fd in connection\n * \ttable and initialize its structures\n *\n * @note: connection table locking/unlocking should be handled by caller\n *\n * @param[in] fd - socket number\n *\n * @return pbs_conn_t *\n *\n * @retval !NULL - success\n * @retval NULL - error\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic pbs_conn_t *\nget_connection(int fd)\n{\n\tif (INVALID_SOCK(fd))\n\t\treturn NULL;\n\tif ((fd >= curr_connection_sz) || 
(connection[fd] == NULL)) {\n\t\tif (add_connection(fd) != 0)\n\t\t\treturn NULL;\n\t}\n\treturn connection[fd];\n}\n\n/**\n * @brief\n * \tset_conn_errtxt - set connection error text synchronously\n *\n * @param[in] fd - socket number\n * @param[in] errtxt - error text to set on connection\n *\n * @return int\n * @retval 0 - success\n * @retval -1 - error\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\nset_conn_errtxt(int fd, const char *errtxt)\n{\n\tpbs_conn_t *p = NULL;\n\n\tif (INVALID_SOCK(fd))\n\t\treturn -1;\n\n\tLOCK_TABLE(-1);\n\tp = get_connection(fd);\n\tif (p == NULL) {\n\t\tUNLOCK_TABLE(-1);\n\t\treturn -1;\n\t}\n\tif (p->ch_errtxt) {\n\t\tfree(p->ch_errtxt);\n\t\tp->ch_errtxt = NULL;\n\t}\n\tif (errtxt) {\n\t\tif ((p->ch_errtxt = strdup(errtxt)) == NULL) {\n\t\t\tUNLOCK_TABLE(-1);\n\t\t\treturn -1;\n\t\t}\n\t}\n\tUNLOCK_TABLE(-1);\n\treturn 0;\n}\n\n/**\n * @brief\n * \tget_conn_errtxt - get connection error text synchronously\n *\n * @param[in] fd - socket number\n *\n * @return char *\n * @retval !NULL - success\n * @retval NULL - error\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n */\nchar *\nget_conn_errtxt(int fd)\n{\n\tpbs_conn_t *p = NULL;\n\tchar *errtxt = NULL;\n\n\tif (INVALID_SOCK(fd))\n\t\treturn NULL;\n\n\tLOCK_TABLE(NULL);\n\tp = get_connection(fd);\n\tif (p == NULL) {\n\t\tUNLOCK_TABLE(NULL);\n\t\treturn NULL;\n\t}\n\terrtxt = p->ch_errtxt;\n\tUNLOCK_TABLE(NULL);\n\treturn errtxt;\n}\n\n/**\n * @brief\n * \tset_conn_errno - set connection error number synchronously\n *\n * @param[in] fd - socket number\n * @param[in] err - error number to set on connection\n *\n * @return int\n * @retval 0 - success\n * @retval -1 - error\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n */\nint\nset_conn_errno(int fd, int err)\n{\n\tpbs_conn_t *p = NULL;\n\n\tif (INVALID_SOCK(fd))\n\t\treturn -1;\n\n\tLOCK_TABLE(-1);\n\tp = get_connection(fd);\n\tif (p == NULL) 
{\n\t\tUNLOCK_TABLE(-1);\n\t\treturn -1;\n\t}\n\tp->ch_errno = err;\n\tUNLOCK_TABLE(-1);\n\treturn 0;\n}\n\n/**\n * @brief\n * \tget_conn_errno - get connection error number synchronously\n *\n * @param[in] fd - socket number\n *\n * @return int\n * @retval >= 0 - success\n * @retval -1 - error\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n */\nint\nget_conn_errno(int fd)\n{\n\tpbs_conn_t *p = NULL;\n\tint err = -1;\n\n\tif (INVALID_SOCK(fd))\n\t\treturn -1;\n\n\tLOCK_TABLE(-1);\n\tp = get_connection(fd);\n\tif (p == NULL) {\n\t\tUNLOCK_TABLE(-1);\n\t\treturn -1;\n\t}\n\terr = p->ch_errno;\n\tUNLOCK_TABLE(-1);\n\treturn err;\n}\n\n/**\n * @brief\n * \tset_conn_chan - set connection tcp chan synchronously\n *\n * @param[in] fd - socket number\n * @param[in] chan - tcp chan to set on connection\n *\n * @return int\n * @retval 0 - success\n * @retval -1 - error\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n */\nint\nset_conn_chan(int fd, pbs_tcp_chan_t *chan)\n{\n\tpbs_conn_t *p = NULL;\n\n\tif (INVALID_SOCK(fd))\n\t\treturn -1;\n\n\tLOCK_TABLE(-1);\n\tp = get_connection(fd);\n\tif (p == NULL) {\n\t\terrno = ENOTCONN;\n\t\tUNLOCK_TABLE(-1);\n\t\treturn -1;\n\t}\n\tp->ch_chan = chan;\n\tUNLOCK_TABLE(-1);\n\treturn 0;\n}\n\n/**\n * @brief\n * \tget_conn_chan - get connection tcp chan synchronously\n *\n * @param[in] fd - socket number\n *\n * @return pbs_tcp_chan_t *\n * @retval !NULL - success\n * @retval NULL - error\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n */\npbs_tcp_chan_t *\nget_conn_chan(int fd)\n{\n\tpbs_conn_t *p = NULL;\n\tpbs_tcp_chan_t *chan = NULL;\n\n\tif (INVALID_SOCK(fd))\n\t\treturn NULL;\n\n\tLOCK_TABLE(NULL);\n\tp = get_connection(fd);\n\tif (p == NULL) {\n\t\terrno = ENOTCONN;\n\t\tUNLOCK_TABLE(NULL);\n\t\treturn NULL;\n\t}\n\tchan = p->ch_chan;\n\tUNLOCK_TABLE(NULL);\n\treturn chan;\n}\n\n/**\n * @brief\n * \tget_conn_mutex - get connection mutex synchronously\n *\n * @param[in] fd - socket 
number\n *\n * @return pthread_mutex_t *\n * @retval !NULL - success\n * @retval NULL - error\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\npthread_mutex_t *\nget_conn_mutex(int fd)\n{\n\tpbs_conn_t *p = NULL;\n\tpthread_mutex_t *mutex = NULL;\n\n\tif (INVALID_SOCK(fd))\n\t\treturn NULL;\n\n\tLOCK_TABLE(NULL);\n\tp = get_connection(fd);\n\tif (p == NULL) {\n\t\tUNLOCK_TABLE(NULL);\n\t\treturn NULL;\n\t}\n\tmutex = &(p->ch_mutex);\n\tUNLOCK_TABLE(NULL);\n\treturn mutex;\n}\n"
  },
  {
    "path": "src/lib/Libifl/dec_DelJobList.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tdec_DelJobList.c\n * @brief\n * decode_DIS_DelJobList() - decode a Delete Job List Batch Request\n *\n *\tThis request is used to delete a list of jobs by job ID.\n *\n *\tThe batch_request structure must already exist (be allocated by the\n *\tcaller).  It is assumed that the header fields (protocol type,\n *\tprotocol version, request type, and user name) have already been decoded.\n *\n * @par\tData items are:\n * \t\t\tunsigned int\tcount\n *\t\t\tstring array\tjobslist\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/types.h>\n#include \"libpbs.h\"\n#include \"list_link.h\"\n#include \"server_limits.h\"\n#include \"attribute.h\"\n#include \"credential.h\"\n#include \"batch_request.h\"\n#include \"dis.h\"\n/**\n * @brief\n *\t-decode a Delete Job List Batch Request\n *\n * @par\tFunctionality:\n *\tThis function is used to decode the request for deletion of a list of job IDs.\n *\n *\n *      The batch_request structure must already exist (be allocated by the\n *      caller).  
It is assumed that the header fields (protocol type,\n *      protocol version, request type, and user name) have already been decoded.\n *\n * @par\tData items are:\\n\n *\t\tunsigned int    count\\n\n *\t\tstring array    jobslist\n *\n * @param[in] sock - socket descriptor\n * @param[out] preq - pointer to batch_request structure\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\ndecode_DIS_DelJobList(int sock, struct batch_request *preq)\n{\n\tint rc;\n\tint count = 0;\n\tchar **tmp_jobslist = NULL;\n\tint i = 0;\n\n\tpreq->rq_ind.rq_deletejoblist.rq_count = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\tcount = preq->rq_ind.rq_deletejoblist.rq_count;\n\n\ttmp_jobslist = malloc((count + 1) * sizeof(char *));\n\tif (tmp_jobslist == NULL)\n\t\treturn DIS_NOMALLOC;\n\n\tfor (i = 0; i < count; i++) {\n\t\ttmp_jobslist[i] = disrst(sock, &rc);\n\t\tif (rc) {\n\t\t\t/* free job ids decoded so far before bailing out */\n\t\t\twhile (--i >= 0)\n\t\t\t\tfree(tmp_jobslist[i]);\n\t\t\tfree(tmp_jobslist);\n\t\t\treturn rc;\n\t\t}\n\t}\n\ttmp_jobslist[i] = NULL;\n\n\tpreq->rq_ind.rq_deletejoblist.rq_jobslist = tmp_jobslist;\n\tpreq->rq_ind.rq_deletejoblist.rq_resume = FALSE;\n\n\treturn rc;\n}\n"
  },
  {
    "path": "src/lib/Libifl/dec_reply.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tdec_reply.c\n * @brief\n * \tdecode_DIS_replyCmd() - decode a Batch Protocol Reply Structure for a Command\n *\n *\tThis routine decodes a batch reply into the form used by commands.\n *\tThe only difference between this and the server version is on status\n *\treplies.  For commands, the attributes are decoded into a list of\n *\tattrl structures rather than the server's svrattrl.\n *\n * \tbatch_reply structure defined in libpbs.h, it must be allocated\n *\tby the caller.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/types.h>\n#include <stdlib.h>\n#include \"attribute.h\"\n#include \"range.h\"\n#include \"libpbs.h\"\n#include \"job.h\"\n#include \"dis.h\"\n\n/**\n * @brief\tRead one batch status from given socket\n *\n * @param[in]  sock - socket from which the status is to be read\n * @param[out] objtype - type of batch status\n * @param[out] rc - error code if any failure in read\n *\n * @return struct batch_status *\n * @retval !NULL - success\n * @retval NULL  - failure\n */\nstatic struct batch_status *\nread_batch_status(int sock, int *objtype, int *rc)\n{\n\tstruct batch_status *pstcmd;\n\n\tif (rc == NULL || objtype == NULL) {\n\t\tif (rc)\n\t\t\t*rc = DIS_PROTO;\n\t\treturn NULL;\n\t}\n\n\tpstcmd = (struct batch_status *) malloc(sizeof(struct batch_status));\n\tif (pstcmd == 
NULL) {\n\t\t*rc = DIS_NOMALLOC;\n\t\treturn NULL;\n\t}\n\tinit_bstat(pstcmd);\n\n\t*objtype = disrui(sock, rc);\n\tif (*rc == DIS_SUCCESS)\n\t\tpstcmd->name = disrst(sock, rc);\n\tif (*rc) {\n\t\tpbs_statfree(pstcmd);\n\t\treturn NULL;\n\t}\n\t*rc = decode_DIS_attrl(sock, &pstcmd->attribs);\n\tif (*rc)\n\t\tpbs_statfree(pstcmd);\n\treturn pstcmd;\n}\n\n/**\n * @brief\tExpand and append remaining subjob for given status of array job\n *\n * \tFind array_indices_remaining from given status's attributes list, parse it\n * \tand make a copy of given status for each subjob and set state and substate\n * \tin copy to queued\n *\n * \tThis function is almost same as status_subjob() except that this work only on\n * \tremaining subjobs and uses batch_status struct instead of job struct\n *\n * @param[in]  array - pointer batch status of parent array job\n * @param[out] count - count of subjobs expanded\n *\n * @return int\n * @retval 0 - success\n * @retval 1 - failure\n */\nstatic int\nexpand_remaining_subjob(struct batch_status *array, int *count)\n{\n\trange *r = NULL;\n\tint sjidx = -1;\n\tchar *remain;\n\tstruct attrl *sj_attrs = NULL;\n\tchar *parent_jid;\n\n\tif (array == NULL || count == NULL)\n\t\treturn 0;\n\n\t*count = 0;\n\tparent_jid = array->name;\n\tremain = get_attr(array->attribs, ATTR_array_indices_remaining, NULL);\n\tif (remain == NULL || *remain == '-')\n\t\treturn 0;\n\tr = range_parse(remain);\n\tif (r == NULL)\n\t\treturn 1;\n\tsj_attrs = dup_attrl_list(array->attribs);\n\tif (sj_attrs != NULL) {\n\t\tstruct attrl *next;\n\t\tstruct attrl *prev = NULL;\n\t\tint should_break = 0;\n\n\t\tfor (next = sj_attrs; next->next; next = next->next) {\n\t\t\tif (strcmp(next->name, ATTR_state) == 0) {\n\t\t\t\tfree(next->value);\n\t\t\t\tnext->value = malloc(2);\n\t\t\t\tif (next->value == NULL) {\n\t\t\t\t\tfree_attrl_list(sj_attrs);\n\t\t\t\t\treturn 1;\n\t\t\t\t}\n\t\t\t\tnext->value[0] = JOB_STATE_LTR_QUEUED;\n\t\t\t\tnext->value[1] = 
'\\0';\n\t\t\t\tshould_break++;\n\t\t\t} else if (strcmp(next->name, ATTR_substate) == 0) {\n\t\t\t\tfree(next->value);\n\t\t\t\tnext->value = strdup(TOSTR(JOB_SUBSTATE_QUEUED));\n\t\t\t\tif (next->value == NULL) {\n\t\t\t\t\tfree_attrl_list(sj_attrs);\n\t\t\t\t\treturn 1;\n\t\t\t\t}\n\t\t\t\tshould_break++;\n\t\t\t} else if (strcmp(next->name, ATTR_array) == 0) {\n\t\t\t\tif (prev) {\n\t\t\t\t\tprev->next = NULL;\n\t\t\t\t\tfree_attrl_list(next);\n\t\t\t\t\tnext = NULL;\n\t\t\t\t\tshould_break++;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (should_break == 3 || next == NULL)\n\t\t\t\tbreak;\n\t\t\tprev = next;\n\t\t}\n\t} else {\n\t\tfree_range_list(r);\n\t\treturn 1;\n\t}\n\twhile ((sjidx = range_next_value(r, sjidx)) >= 0) {\n\t\tstruct batch_status *pstcmd = (struct batch_status *) malloc(sizeof(struct batch_status));\n\t\tchar *name;\n\n\t\tif (pstcmd == NULL) {\n\t\t\tfree_range_list(r);\n\t\t\tfree_attrl_list(sj_attrs);\n\t\t\treturn 1;\n\t\t}\n\t\tpstcmd->next = NULL;\n\t\tpstcmd->text = NULL;\n\t\tpstcmd->name = NULL;\n\t\tpstcmd->attribs = dup_attrl_list(sj_attrs);\n\t\tif (pstcmd->attribs == NULL) {\n\t\t\tpbs_statfree(pstcmd);\n\t\t\tfree_range_list(r);\n\t\t\tfree_attrl_list(sj_attrs);\n\t\t\treturn 1;\n\t\t}\n\t\tname = create_subjob_id(parent_jid, sjidx);\n\t\tif (name == NULL) {\n\t\t\tpbs_statfree(pstcmd);\n\t\t\tfree_range_list(r);\n\t\t\tfree_attrl_list(sj_attrs);\n\t\t\treturn 1;\n\t\t}\n\t\tpstcmd->name = strdup(name);\n\t\tif (pstcmd->name == NULL) {\n\t\t\tpbs_statfree(pstcmd);\n\t\t\tfree_range_list(r);\n\t\t\tfree_attrl_list(sj_attrs);\n\t\t\treturn 1;\n\t\t}\n\t\tpstcmd->next = array->next;\n\t\tarray->next = pstcmd;\n\t\t(*count)++;\n\t}\n\tfree_range_list(r);\n\tfree_attrl_list(sj_attrs);\n\treturn 0;\n}\n\n/**\n * @brief\tcompare subjob names using their indices from given batch statuses\n *\n * @param[in]\ta - first batch status to compare\n * @param[in]\tb - second batch status to compare\n *\n * @return int\n * @retval  0 - a's index == b's index (or 
invalid inputs)\n * @retval  1 - a's index > b's index\n * @retval -1 - a's index < b's index\n */\nstatic int\ncmp_sj_name(struct batch_status *a, struct batch_status *b)\n{\n\tint sjidx_a = 0;\n\tint sjidx_b = 0;\n\n\tif (a == NULL || b == NULL)\n\t\treturn 0;\n\tif (a->name == NULL || b->name == NULL)\n\t\treturn 0;\n\tsjidx_a = get_index_from_jid(a->name);\n\tif (sjidx_a == -1)\n\t\treturn 0;\n\tsjidx_b = get_index_from_jid(b->name);\n\tif (sjidx_b == -1)\n\t\treturn 0;\n\tif (sjidx_a > sjidx_b)\n\t\treturn 1;\n\tif (sjidx_a < sjidx_b)\n\t\treturn -1;\n\treturn 0;\n}\n\n/**\n * @brief-\n *\tdecode a Batch Protocol Reply Structure for a Command\n *\n * @par\tFunctionality:\n *\t\tThis routine decodes a batch reply into the form used by commands.\n *      \tThe only difference between this and the server version is on status\n *      \treplies.  For commands, the attributes are decoded into a list of\n *      \tattrl structure rather than the server's svrattrl.\n *\n * Note: batch_reply structure defined in libpbs.h, it must be allocated\n *       by the caller.\n *\n * @param[in] sock - socket descriptor\n * @param[in] reply - pointer to batch_reply structure\n * @param[in] prot - protocol type\n *\n * @return\tint\n * @retval\t-1\terror\n * @retval\t0\tSuccess\n *\n */\n\nint\ndecode_DIS_replyCmd(int sock, struct batch_reply *reply, int prot)\n{\n\tint ct;\n\tint i;\n\tstruct brp_select *psel;\n\tstruct brp_select **pselx;\n\tstruct batch_status *pstcmd = NULL;\n\tstruct batch_status **pstcx = NULL;\n\tstruct batch_deljob_status *pdel;\n\tstruct batch_status *pstcmd_last = NULL;\n\tstruct batch_status *pstcmd_ja = NULL;\n\tint rc = 0;\n\tsize_t txtlen;\n\tpreempt_job_info *ppj = NULL;\n\n\t/* first decode \"header\" consisting of protocol type and version */\nagain:\n\ti = disrui(sock, &rc);\n\tif (rc != 0)\n\t\treturn rc;\n\tif (i != PBS_BATCH_PROT_TYPE)\n\t\treturn DIS_PROTO;\n\ti = disrui(sock, &rc);\n\tif (rc != 0)\n\t\treturn rc;\n\tif (i != 
PBS_BATCH_PROT_VER)\n\t\treturn DIS_PROTO;\n\n\t/* next decode code, auxcode and choice (union type identifier) */\n\n\treply->brp_code = disrsi(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\treply->brp_auxcode = disrsi(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\treply->brp_choice = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\treply->brp_is_part = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\tswitch (reply->brp_choice) {\n\n\t\tcase BATCH_REPLY_CHOICE_NULL:\n\t\t\tbreak; /* no more to do */\n\n\t\tcase BATCH_REPLY_CHOICE_Queue:\n\t\tcase BATCH_REPLY_CHOICE_RdytoCom:\n\t\tcase BATCH_REPLY_CHOICE_Commit:\n\t\t\trc = disrfst(sock, PBS_MAXSVRJOBID + 1, reply->brp_un.brp_jid);\n\t\t\tif (rc)\n\t\t\t\treturn (rc);\n\t\t\tbreak;\n\n\t\tcase BATCH_REPLY_CHOICE_Select:\n\n\t\t\t/* have to get count of number of strings first */\n\n\t\t\treply->brp_un.brp_select = NULL;\n\t\t\tpselx = &reply->brp_un.brp_select;\n\t\t\tct = disrui(sock, &rc);\n\t\t\tif (rc)\n\t\t\t\treturn rc;\n\t\t\treply->brp_count = ct;\n\n\t\t\twhile (ct--) {\n\t\t\t\tpsel = (struct brp_select *) malloc(sizeof(struct brp_select));\n\t\t\t\tif (psel == 0)\n\t\t\t\t\treturn DIS_NOMALLOC;\n\t\t\t\tpsel->brp_next = NULL;\n\t\t\t\tpsel->brp_jobid[0] = '\\0';\n\t\t\t\trc = disrfst(sock, PBS_MAXSVRJOBID + 1, psel->brp_jobid);\n\t\t\t\tif (rc) {\n\t\t\t\t\t(void) free(psel);\n\t\t\t\t\treturn rc;\n\t\t\t\t}\n\t\t\t\t*pselx = psel;\n\t\t\t\tpselx = &psel->brp_next;\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase BATCH_REPLY_CHOICE_Status:\n\n\t\t\t/* have to get count of number of status objects first */\n\t\t\tif (pstcx == NULL) {\n\t\t\t\treply->brp_un.brp_statc = NULL;\n\t\t\t\tpstcx = &reply->brp_un.brp_statc;\n\t\t\t\treply->brp_count = 0;\n\t\t\t}\n\t\t\tct = disrui(sock, &rc);\n\t\t\tif (rc)\n\t\t\t\treturn rc;\n\t\t\treply->brp_count += ct;\n\n\t\t\twhile (ct--) {\n\n\t\t\t\trc = DIS_PROTO;\n\t\t\t\tpstcmd = read_batch_status(sock, &reply->brp_type, &rc);\n\t\t\t\tif (rc != DIS_SUCCESS || pstcmd == NULL) {\n\t\t\t\t\tif 
(pstcmd)\n\t\t\t\t\t\tpbs_statfree(pstcmd);\n\t\t\t\t\treturn rc;\n\t\t\t\t}\n\t\t\t\tif (reply->brp_type == MGR_OBJ_JOBARRAY_PARENT) {\n\t\t\t\t\tif (pstcmd_ja != NULL) {\n\t\t\t\t\t\tpstcmd_ja->next = bs_isort(pstcmd_ja->next, cmp_sj_name);\n\t\t\t\t\t\tfor (pstcmd_last = pstcmd_ja; pstcmd_last->next; pstcmd_last = pstcmd_last->next)\n\t\t\t\t\t\t\t;\n\t\t\t\t\t\t*pstcx = pstcmd_ja;\n\t\t\t\t\t\tpstcx = &pstcmd_last->next;\n\t\t\t\t\t\tpstcmd_ja = NULL;\n\t\t\t\t\t}\n\t\t\t\t\tif (expand_remaining_subjob(pstcmd, &reply->brp_count) != 0) {\n\t\t\t\t\t\tpbs_statfree(pstcmd);\n\t\t\t\t\t\treturn DIS_NOMALLOC;\n\t\t\t\t\t}\n\t\t\t\t\tpstcmd_ja = pstcmd;\n\t\t\t\t\tcontinue;\n\t\t\t\t} else if (reply->brp_type == MGR_OBJ_SUBJOB) {\n\t\t\t\t\tif (pstcmd_ja == NULL) {\n\t\t\t\t\t\t/* a subjob must follow its array parent */\n\t\t\t\t\t\tpbs_statfree(pstcmd);\n\t\t\t\t\t\treturn DIS_PROTO;\n\t\t\t\t\t}\n\t\t\t\t\tpstcmd->next = pstcmd_ja->next;\n\t\t\t\t\tpstcmd_ja->next = pstcmd;\n\t\t\t\t\tcontinue;\n\t\t\t\t} else {\n\t\t\t\t\tif (pstcmd_ja != NULL) {\n\t\t\t\t\t\tpstcmd_ja->next = bs_isort(pstcmd_ja->next, cmp_sj_name);\n\t\t\t\t\t\tfor (pstcmd_last = pstcmd_ja; pstcmd_last->next; pstcmd_last = pstcmd_last->next)\n\t\t\t\t\t\t\t;\n\t\t\t\t\t\t*pstcx = pstcmd_ja;\n\t\t\t\t\t\tpstcx = &pstcmd_last->next;\n\t\t\t\t\t\tpstcmd_ja = NULL;\n\t\t\t\t\t}\n\t\t\t\t\t*pstcx = pstcmd;\n\t\t\t\t\tpstcx = &pstcmd->next;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (pstcmd_ja != NULL) {\n\t\t\t\tpstcmd_ja->next = bs_isort(pstcmd_ja->next, cmp_sj_name);\n\t\t\t\tfor (pstcmd_last = pstcmd_ja; pstcmd_last->next; pstcmd_last = pstcmd_last->next)\n\t\t\t\t\t;\n\t\t\t\t*pstcx = pstcmd_ja;\n\t\t\t\tpstcx = &pstcmd_last->next;\n\t\t\t\tpstcmd = pstcmd_last;\n\t\t\t}\n\n\t\t\tif (reply->brp_un.brp_statc)\n\t\t\t\treply->last = pstcmd;\n\t\t\tif (reply->brp_is_part)\n\t\t\t\tgoto again;\n\t\t\tbreak;\n\n\t\tcase BATCH_REPLY_CHOICE_Delete:\n\n\t\t\t/* have to get count of number of status objects first */\n\n\t\t\treply->brp_un.brp_deletejoblist.brp_delstatc = NULL;\n\t\t\treply->brp_count = 0;\n\n\t\t\tct = disrui(sock, &rc);\n\t\t\tif (rc)\n\t\t\t\treturn 
rc;\n\t\t\treply->brp_count += ct;\n\n\t\t\twhile (ct--) {\n\t\t\t\tpdel = (struct batch_deljob_status *) malloc(sizeof(struct batch_deljob_status));\n\t\t\t\tif (pdel == 0)\n\t\t\t\t\treturn DIS_NOMALLOC;\n\t\t\t\tpdel->next = reply->brp_un.brp_deletejoblist.brp_delstatc;\n\t\t\t\tpdel->code = 0;\n\t\t\t\tpdel->name = disrst(sock, &rc);\n\t\t\t\tif (rc) {\n\t\t\t\t\tpbs_delstatfree(pdel);\n\t\t\t\t\treturn rc;\n\t\t\t\t}\n\t\t\t\tpdel->code = disrui(sock, &rc);\n\t\t\t\tif (rc) {\n\t\t\t\t\tpbs_delstatfree(pdel);\n\t\t\t\t\treturn rc;\n\t\t\t\t}\n\t\t\t\treply->brp_un.brp_deletejoblist.brp_delstatc = pdel;\n\t\t\t}\n\n\t\t\tbreak;\n\n\t\tcase BATCH_REPLY_CHOICE_Text:\n\n\t\t\t/* text reply */\n\n\t\t\treply->brp_un.brp_txt.brp_str = disrcs(sock, &txtlen, &rc);\n\t\t\treply->brp_un.brp_txt.brp_txtlen = txtlen;\n\t\t\tbreak;\n\n\t\tcase BATCH_REPLY_CHOICE_Locate:\n\n\t\t\t/* Locate Job Reply */\n\n\t\t\trc = disrfst(sock, PBS_MAXDEST + 1, reply->brp_un.brp_locate);\n\t\t\tbreak;\n\n\t\tcase BATCH_REPLY_CHOICE_RescQuery:\n\n\t\t\t/* Resource Query Reply */\n\n\t\t\treply->brp_un.brp_rescq.brq_avail = NULL;\n\t\t\treply->brp_un.brp_rescq.brq_alloc = NULL;\n\t\t\treply->brp_un.brp_rescq.brq_resvd = NULL;\n\t\t\treply->brp_un.brp_rescq.brq_down = NULL;\n\t\t\tct = disrui(sock, &rc);\n\t\t\tif (rc)\n\t\t\t\tbreak;\n\t\t\treply->brp_un.brp_rescq.brq_number = ct;\n\t\t\treply->brp_un.brp_rescq.brq_avail = (int *) malloc(ct * sizeof(int));\n\t\t\tif (reply->brp_un.brp_rescq.brq_avail == NULL)\n\t\t\t\treturn DIS_NOMALLOC;\n\t\t\treply->brp_un.brp_rescq.brq_alloc = (int *) malloc(ct * sizeof(int));\n\t\t\tif (reply->brp_un.brp_rescq.brq_alloc == NULL)\n\t\t\t\treturn DIS_NOMALLOC;\n\t\t\treply->brp_un.brp_rescq.brq_resvd = (int *) malloc(ct * sizeof(int));\n\t\t\tif (reply->brp_un.brp_rescq.brq_resvd == NULL)\n\t\t\t\treturn DIS_NOMALLOC;\n\t\t\treply->brp_un.brp_rescq.brq_down = (int *) malloc(ct * sizeof(int));\n\t\t\tif (reply->brp_un.brp_rescq.brq_down == 
NULL)\n\t\t\treturn DIS_NOMALLOC;\n\n\t\t\tfor (i = 0; (i < ct) && (rc == 0); ++i)\n\t\t\t\t*(reply->brp_un.brp_rescq.brq_avail + i) = disrui(sock, &rc);\n\t\t\tfor (i = 0; (i < ct) && (rc == 0); ++i)\n\t\t\t\t*(reply->brp_un.brp_rescq.brq_alloc + i) = disrui(sock, &rc);\n\t\t\tfor (i = 0; (i < ct) && (rc == 0); ++i)\n\t\t\t\t*(reply->brp_un.brp_rescq.brq_resvd + i) = disrui(sock, &rc);\n\t\t\tfor (i = 0; (i < ct) && (rc == 0); ++i)\n\t\t\t\t*(reply->brp_un.brp_rescq.brq_down + i) = disrui(sock, &rc);\n\t\t\tbreak;\n\n\t\tcase BATCH_REPLY_CHOICE_PreemptJobs:\n\n\t\t\t/* Preempt Jobs Reply */\n\t\t\tct = disrui(sock, &rc);\n\t\t\treply->brp_un.brp_preempt_jobs.count = ct;\n\t\t\tif (rc)\n\t\t\t\tbreak;\n\n\t\t\tppj = calloc(ct, sizeof(struct preempt_job_info));\n\t\t\tif (ppj == NULL)\n\t\t\t\treturn DIS_NOMALLOC;\n\t\t\treply->brp_un.brp_preempt_jobs.ppj_list = ppj;\n\n\t\t\tfor (i = 0; i < ct; i++) {\n\t\t\t\tif (((rc = disrfst(sock, PBS_MAXSVRJOBID + 1, ppj[i].job_id)) != 0) ||\n\t\t\t\t    ((rc = disrfst(sock, PREEMPT_METHOD_HIGH + 1, ppj[i].order)) != 0))\n\t\t\t\t\treturn rc;\n\t\t\t}\n\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\treturn -1;\n\t}\n\n\treturn rc;\n}\n"
  },
  {
    "path": "src/lib/Libifl/enc_reply.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tenc_reply.c\n * @brief\n * encode_DIS_reply() - encode a Batch Protocol Reply Structure\n *\n * \tbatch_reply structure defined in libpbs.h, it must be allocated\n *\tby the caller.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"libpbs.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"dis.h\"\n#include \"net_connect.h\"\n\nint encode_DIS_svrattrl(int sock, svrattrl *psattl);\n\n/**\n * @brief-\n *      encode a Batch Protocol Reply Structure for a Command\n *\n * Note: batch_reply structure defined in libpbs.h, it must be allocated\n *       by the caller.\n *\n * @param[in] sock - socket descriptor\n * @param[in] reply - pointer to batch_reply structure\n *\n * @return      int\n * @retval      -1      error\n * @retval      0       Success\n *\n */\n\nstatic int\nencode_DIS_reply_inner(int sock, struct batch_reply *reply)\n{\n\tint ct;\n\tint i;\n\tstruct brp_select *psel;\n\tstruct brp_status *pstat;\n\tstruct batch_deljob_status *pdelstat;\n\tsvrattrl *psvrl;\n\tpreempt_job_info *ppj;\n\n\tint rc;\n\n\t/* next encode code, auxcode and choice (union type identifier) */\n\n\tif ((rc = diswsi(sock, reply->brp_code)) ||\n\t    (rc = diswsi(sock, reply->brp_auxcode)) ||\n\t    (rc = diswui(sock, reply->brp_choice)) ||\n\t    (rc = diswui(sock, 
reply->brp_is_part)))\n\t\treturn rc;\n\n\tswitch (reply->brp_choice) {\n\n\t\tcase BATCH_REPLY_CHOICE_NULL:\n\t\t\tbreak; /* no more to do */\n\n\t\tcase BATCH_REPLY_CHOICE_Queue:\n\t\tcase BATCH_REPLY_CHOICE_RdytoCom:\n\t\tcase BATCH_REPLY_CHOICE_Commit:\n\t\t\tif ((rc = diswst(sock, reply->brp_un.brp_jid)) != 0)\n\t\t\t\treturn (rc);\n\t\t\tbreak;\n\n\t\tcase BATCH_REPLY_CHOICE_Select:\n\n\t\t\t/* have to send count of number of strings first */\n\n\t\t\tif ((rc = diswui(sock, reply->brp_count)) != 0)\n\t\t\t\treturn rc;\n\n\t\t\tpsel = reply->brp_un.brp_select;\n\t\t\twhile (psel) {\n\t\t\t\t/* now encode each string */\n\t\t\t\tif ((rc = diswst(sock, psel->brp_jobid)) != 0)\n\t\t\t\t\treturn rc;\n\t\t\t\tpsel = psel->brp_next;\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase BATCH_REPLY_CHOICE_Status:\n\n\t\t\t/* encode \"server version\" of status structure.\n\t\t\t *\n\t\t\t * Server always uses svrattrl form.\n\t\t\t * Commands decode into their form.\n\t\t\t */\n\n\t\t\tif ((rc = diswui(sock, reply->brp_count)) != 0)\n\t\t\t\treturn rc;\n\t\t\tpstat = (struct brp_status *) GET_NEXT(reply->brp_un.brp_status);\n\t\t\twhile (pstat) {\n\t\t\t\tif ((rc = diswui(sock, pstat->brp_objtype)) || (rc = diswst(sock, pstat->brp_objname)))\n\t\t\t\t\treturn rc;\n\n\t\t\t\tpsvrl = (svrattrl *) GET_NEXT(pstat->brp_attr);\n\t\t\t\tif ((rc = encode_DIS_svrattrl(sock, psvrl)) != 0)\n\t\t\t\t\treturn rc;\n\t\t\t\tpstat = (struct brp_status *) GET_NEXT(pstat->brp_stlink);\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase BATCH_REPLY_CHOICE_Delete:\n\n\t\t\t/* encode \"server version\" of status structure.\n\t\t\t *\n\t\t\t * Server always uses svrattrl form.\n\t\t\t * Commands decode into their form.\n\t\t\t */\n\n\t\t\tif ((rc = diswui(sock, reply->brp_count)) != 0)\n\t\t\t\treturn rc;\n\t\t\tpdelstat = reply->brp_un.brp_deletejoblist.brp_delstatc;\n\t\t\twhile (pdelstat) {\n\t\t\t\tif ((rc = diswst(sock, pdelstat->name)) || (rc = diswui(sock, pdelstat->code)))\n\t\t\t\t\treturn rc;\n\n\t\t\t\tpdelstat = 
pdelstat->next;\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase BATCH_REPLY_CHOICE_Text:\n\n\t\t\t/* text reply */\n\n\t\t\trc = diswcs(sock, reply->brp_un.brp_txt.brp_str, (size_t) reply->brp_un.brp_txt.brp_txtlen);\n\t\t\tif (rc)\n\t\t\t\treturn rc;\n\t\t\tbreak;\n\n\t\tcase BATCH_REPLY_CHOICE_Locate:\n\n\t\t\t/* Locate Job Reply */\n\n\t\t\tif ((rc = diswst(sock, reply->brp_un.brp_locate)) != 0)\n\t\t\t\treturn rc;\n\t\t\tbreak;\n\n\t\tcase BATCH_REPLY_CHOICE_RescQuery:\n\n\t\t\t/* Query Resources Reply */\n\n\t\t\tct = reply->brp_un.brp_rescq.brq_number;\n\t\t\tif ((rc = diswui(sock, ct)) != 0)\n\t\t\t\treturn rc;\n\t\t\tfor (i = 0; (i < ct) && (rc == 0); ++i) {\n\t\t\t\trc = diswui(sock, *(reply->brp_un.brp_rescq.brq_avail + i));\n\t\t\t}\n\t\t\tif (rc)\n\t\t\t\treturn rc;\n\t\t\tfor (i = 0; (i < ct) && (rc == 0); ++i) {\n\t\t\t\trc = diswui(sock, *(reply->brp_un.brp_rescq.brq_alloc + i));\n\t\t\t}\n\t\t\tif (rc)\n\t\t\t\treturn rc;\n\t\t\tfor (i = 0; (i < ct) && (rc == 0); ++i) {\n\t\t\t\trc = diswui(sock, *(reply->brp_un.brp_rescq.brq_resvd + i));\n\t\t\t}\n\t\t\tif (rc)\n\t\t\t\treturn rc;\n\t\t\tfor (i = 0; (i < ct) && (rc == 0); ++i) {\n\t\t\t\trc = diswui(sock, *(reply->brp_un.brp_rescq.brq_down + i));\n\t\t\t}\n\t\t\tif (rc)\n\t\t\t\treturn rc;\n\t\t\tbreak;\n\n\t\tcase BATCH_REPLY_CHOICE_PreemptJobs:\n\n\t\t\t/* Preempt Jobs Reply */\n\t\t\tct = reply->brp_un.brp_preempt_jobs.count;\n\t\t\tppj = reply->brp_un.brp_preempt_jobs.ppj_list;\n\n\t\t\tif ((rc = diswui(sock, ct)) != 0)\n\t\t\t\treturn rc;\n\n\t\t\tfor (i = 0; i < ct; i++) {\n\t\t\t\tif (((rc = diswst(sock, ppj[i].job_id)) != 0) || ((rc = diswst(sock, ppj[i].order)) != 0))\n\t\t\t\t\treturn rc;\n\t\t\t}\n\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\treturn -1;\n\t}\n\n\treturn 0;\n}\n\nint\nencode_DIS_reply(int sock, struct batch_reply *reply)\n{\n\tint rc;\n\t/* first encode \"header\" consisting of protocol type and version */\n\n\tif ((rc = diswui(sock, PBS_BATCH_PROT_TYPE)) ||\n\t    (rc = diswui(sock, 
PBS_BATCH_PROT_VER)))\n\t\treturn rc;\n\n\treturn (encode_DIS_reply_inner(sock, reply));\n}\n\nint\nencode_DIS_replyTPP(int sock, char *tppcmd_msgid, struct batch_reply *reply)\n{\n\tint rc;\n\n\t/* first encode \"header\" consisting of protocol type and version */\n\tif (reply->brp_choice == BATCH_REPLY_CHOICE_Status) {\n\t\treturn encode_DIS_reply(sock, reply);\n\t} else {\n\t\tif ((rc = is_compose(sock, IS_CMD_REPLY)) != DIS_SUCCESS)\n\t\t\treturn rc;\n\n\t\t/* \n\t\t* for IS_CMD_REPLY, also send across the tppcmd_msgid, so that\n\t\t* server can match the reply with the request it had sent earlier\n\t\t*/\n\t\tif ((rc = diswst(sock, tppcmd_msgid)) != DIS_SUCCESS)\n\t\t\treturn rc;\n\t}\n\n\treturn (encode_DIS_reply_inner(sock, reply));\n}\n"
  },
  {
    "path": "src/lib/Libifl/entlim_parse.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <ctype.h>\n#include <string.h>\n#include \"pbs_entlim.h\"\n\n#include <stdio.h>\n\n/**\n * @brief\n * \tstrip_trailing_white - strip whitespace from the end of a character string\n *\n * @param[in] pw - input string\n *\tThe input string is null terminated at the start of any trailing\n *\twhite space.\n *\n * @return\tVoid\n */\nstatic void\nstrip_trailing_white(char *pw)\n{\n\twhile (isspace((int) *pw))\n\t\t--pw;\n\t*(pw + 1) = '\\0';\n\treturn;\n}\n\n/**\n * @brief\n * \t-Parse a string of the form: \tvalue1 [, value2 ...]\n * \treturning a pointer to each \"value\" in turn, stripping out the white space\n *\n * @param[in,out] - start - address of pointer to start of string, it is\n *\t\tupdated to the start of the next substring on return;\n *\n * @return char *\n * @retval pointer to first (next) substring\n * @retval NULL if the end of the string has been reached\n */\n\nchar *\nparse_comma_string_r(char **start)\n{\n\tchar *pc;\n\tchar *rv;\n\n\tchar *back;\n\n\tif ((start == NULL) || (*start == NULL))\n\t\treturn NULL;\n\n\tpc = *start;\n\n\tif (*pc == '\\0')\n\t\treturn NULL; /* already at end, no strings */\n\n\t/* skip over leading white space */\n\n\twhile ((*pc != '\\n') && isspace((int) *pc) && *pc)\n\t\tpc++;\n\n\trv = pc; /* the start point which will be returned */\n\n\t/* go find comma or end of line */\n\n\twhile (*pc) 
{\n\t\tif ((*pc == ',') || (*pc == '\\n'))\n\t\t\tbreak;\n\t\t++pc;\n\t}\n\tback = pc;\n\twhile (isspace((int) *--back)) /* strip trailing spaces */\n\t\t*back = '\\0';\n\n\tif (*pc)\n\t\t*pc++ = '\\0'; /* if not end, terminate this and adv past */\n\t*start = pc;\n\n\treturn (rv);\n}\n\nstatic char pbs_all[] = PBS_ALL_ENTITY;\n\n/**\n * @brief\n * \t-etlim_validate_name - check the entity name for:\n *\t1. if type is 'o', then name must be \"PBS_ALL\",\n *\t2. else name must not contain invalid characters\n *\n * @param[in] etype - limits on entity type\n * @param[in] ename - entity name\n *\n * @return\tint\n * @retval\t0\tif name is ok\n * @retval\t-1 \tif name is invalid\n */\nstatic int\netlim_validate_name(enum lim_keytypes etype, char *ename)\n{\n\tif (etype == LIM_OVERALL) {\n\t\t/* do special check on entity for etype 'o' */\n\t\tif (strcmp(ename, pbs_all) != 0)\n\t\t\treturn (-1);\n\t} else {\n\t\t/* other etypes cannot use \"PBS_ALL\" */\n\t\tif (strcmp(ename, pbs_all) == 0)\n\t\t\treturn (-1);\n\n\t\t/* check for invalid characters in entity's name */\n\t\tif (strpbrk(ename, ETLIM_INVALIDCHAR) != NULL)\n\t\t\treturn (-1);\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \t-etlim_parse_one - parse a single \"entity limit\" string, for example\n *\t\"[ u:name=value ]\" into its component parts:\n *\n * @param[in]\tstr    - entity limit string to parse\n * @param[out]\tetype  - entity type enum\n * @param[out]\tetenty - entity type letter : name  - u:job\n * @param[out]\tentity - entity name\n * @param[out]\tval    - value\n *\n * @return\tint\n * @retval\t0 \tno error; values into etype, entity, val\n * @retval\t<0 \tthe negative of the offset (index) into the string\n *\t\t     \tat which point the syntax error occurred.\n *\n *  Warning\tthe input string will be munged with null characters.\n *\t\tIf you need the string intact, pass in a copy\n */\nint\nentlim_parse_one(char *str, enum lim_keytypes *etype, char **etenty, char **entity, char **val)\n{\n\tchar *pc;\n\tchar *pendname = NULL;\n\n\tpc = str;\n\n\t/* 
search for open bracket */\n\twhile (isspace((int) *pc))\n\t\t++pc;\n\tif (*pc != '[')\n\t\treturn (str - pc - 1); /* negative of offset into string */\n\n\t++pc;\n\t/* skip whitespace till entity type letter */\n\twhile (isspace((int) *pc))\n\t\t++pc;\n\tif (*pc == 'u')\n\t\t*etype = LIM_USER;\n\telse if (*pc == 'g')\n\t\t*etype = LIM_GROUP;\n\telse if (*pc == 'p')\n\t\t*etype = LIM_PROJECT;\n\telse if (*pc == 'o')\n\t\t*etype = LIM_OVERALL;\n\telse\n\t\treturn (str - pc - 1);\n\t*etenty = pc;\n\n\t/* next must be the colon */\n\tif (*++pc != ':')\n\t\treturn (str - pc - 1);\n\t++pc;\n\n\t/* next must be start of entity's name */\n\tif ((*pc == '\\0') || isspace((int) *pc))\n\t\treturn (str - pc - 1);\n\t*entity = pc;\n\n\t/* Look for the end of the entity name, either the close quote or */\n\t/* the first white space.   If there is non-white space between   */\n\t/* the end (shown by \"pendname\") and the bracket/equal sign; then */\n\t/* that is an error.\t\t\t\t\t\t  */\n\tif ((*pc == '\"') || (*pc == '\\'')) {\n\t\t/* entity name is quoted, look for matching quote */\n\t\tchar match = *pc;\n\t\t*entity = ++pc; /* incr past the quote character */\n\n\t\twhile (*pc && *pc != match)\n\t\t\t++pc;\n\t\tif (*pc == '\\0')\n\t\t\treturn (str - pc - 1); /* no closing quote */\n\t\t/* set to null, ending the name */\n\t\t*pc = '\\0';\n\t\tpendname = pc; /* mark reached end of name (close quote) */\n\t}\n\n\t/* skip to equal sign or closing bracket */\n\t++pc;\n\twhile (*pc && (*pc != '=') && (*pc != ']')) {\n\t\tif (isspace((int) *pc)) {\n\t\t\t*pc = '\\0';\n\t\t\tpendname = pc; /* mark end of name (whitespace) */\n\t\t} else if (pendname != NULL) {\n\t\t\t/* non-white space and already saw end of name, error */\n\t\t\treturn (str - pc - 1);\n\t\t}\n\t\t++pc;\n\t}\n\n\tif (*pc == ']') {\n\t\t/* case of \"[u:name]\" without value */\n\t\t*pc = '\\0';\n\t\t/* check name for validity */\n\t\tif (etlim_validate_name(*etype, *entity) == -1)\n\t\t\treturn (str - ((*entity) + 
2) - 1);\n\t\t*val = NULL; /* no value */\n\t\treturn 0;\n\t} else if (*pc == '\\0') {\n\t\t/* error; no ']' nor '=' */\n\t\treturn (str - pc - 1);\n\t}\n\n\t/* hit the '=', value must follow */\n\n\t*pc = '\\0';\n\tstrip_trailing_white(pc - 1);\n\n\t/* check name for validity */\n\tif (etlim_validate_name(*etype, *entity) == -1)\n\t\treturn (str - ((*entity) + 2) - 1);\n\n\t++pc;\n\t/* skip white till start of value */\n\twhile (isspace(*pc))\n\t\t++pc;\n\tif (*pc == '\\0')\n\t\treturn (str - pc - 1); /* error, no value after = */\n\telse if (*pc == '-')\n\t\treturn (str - pc - 1); /* error, negative value */\n\t*val = pc;\n\n\t/* skip to closing bracket */\n\t++pc;\n\twhile (*pc && (*pc != ']') && (!isspace(*pc)))\n\t\t++pc;\n\twhile (isspace(*pc)) /* skip trailing white */\n\t\t++pc;\n\n\tif (*pc != ']')\n\t\treturn (str - pc - 1);\n\tstrip_trailing_white(pc - 1);\n\treturn 0;\n}\n\n/**\n * @brief\n * \t-entlim_parse - parse a comma-separated set of \"entity limit\" strings,\n *\tfor example:  \"[ u:name=value ],[g:name], ...\" and for each separate\n *\tentity limit substring, call the specified \"addfunc\" function with\n *\t\tentity name  (\"u:name\"),\n *\t\tpassed-in resource name (\"mem\" or \"ncpus\"), and\n *\t\tentity limit value (\"10mb\" or \"4\")\n *\n *\tThe \"addfunc\" will add the entity entry to the collection controlled\n *\tby the context identified by 'ctx'.  
The \"addfunc\" function will return\n *\t0 for no error or non-zero for an error.\n *\n * @return\tint\n * @retval\t0\tno error\n * @retval\t<0 \tthe negative of the offset (index) into the string\n *\t\t     \tat which point the syntax error occurred.\n * @retval\t>0\tA general PBS error that is not specific to a location\n *\t\t     \tin the input string.\n *\n *  Warning\tthe input string will be munged with null characters;\n *\t\tif you need the string intact, pass in a copy\n */\nint\nentlim_parse(char *str, char *resc, void *ctx,\n\t     int (*addfunc)(void *ctx, enum lim_keytypes kt, char *fulent,\n\t\t\t    char *entity, char *resc, char *value))\n{\n\tenum lim_keytypes etype;\n\tchar *ett;\n\tchar *entity;\n\tchar *ntoken;\n\tchar *val;\n\tchar *pcs;\n\tint rc;\n\n\tntoken = str;\n\twhile ((pcs = parse_comma_string_r(&ntoken)) != NULL) {\n\t\trc = entlim_parse_one(pcs, &etype, &ett, &entity, &val);\n\t\tif (rc < 0)\t\t\t /* syntax error, rc is offset in ntoken */\n\t\t\treturn (str - pcs) + rc; /* adjust for str */\n\t\tif (addfunc) {\n\t\t\tif ((rc = addfunc(ctx, etype, ett, entity, resc, val)) != 0)\n\t\t\t\treturn rc;\n\t\t}\n\t}\n\treturn 0;\n}\n\n#ifdef ENTLIM_STANDALONE_TEST\n\nstatic int badonly = 0;\n\n/**\n * @brief\n * \t-this is just a dummy \"addfunc\" that prints what was passed in\n *\n * @return\tint\n *\n */\nint\ndummyadd(void *ctx, enum lim_keytypes kt, char *fent, char *entity, char *resc, char *value)\n{\n\tif (strpbrk(entity, \" \t\") != NULL)\n\t\tfprintf(stderr, \"  Note: entity name <%s> contains space\\n\", entity);\n\tif (badonly == 1)\n\t\treturn 0;\n\tif (value)\n\t\tprintf(\"\\t--%c--%s--%s--%s\\n\", *fent, entity, resc, value);\n\telse\n\t\tprintf(\"\\t--%c--%s--%s--<null>\\n\", *fent, entity, resc);\n\treturn 0;\n}\n\nint\nmain(int argc, char *argv[])\n{\n\tchar *cstr;\n\tchar input[256];\n\tint rc;\n\tint i;\n\tint goodonly = 0;\n\n\twhile ((i = 
getopt(argc, argv, \"bg\")) != EOF) {\n\t\tswitch (i) {\n\t\t\tcase 'b':\n\t\t\t\tbadonly = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 'g':\n\t\t\t\tgoodonly = 1;\n\t\t\t\tbreak;\n\n\t\t\tdefault:\n\t\t\t\tfprintf(stderr, \"Usage: %s [-b|-g]\\n\", argv[0]);\n\t\t\t\tfprintf(stderr, \"\\t b = print only rejected (bad) entries\\n\");\n\t\t\t\tfprintf(stderr, \"\\t g = print only valid entries\\n\");\n\t\t\t\treturn (1);\n\t\t}\n\t}\n\n\tif (isatty(fileno(stdin)) == 1) {\n\t\tprintf(\"enter string: \");\n\t\tfflush(stdout);\n\t}\n\twhile (fgets(input, 255, stdin) != NULL) {\n\t\tif ((cstr = strdup(input)) == NULL) {\n\t\t\tfprintf(stderr, \"Out of Memory!\\n\");\n\t\t\treturn 1;\n\t\t}\n\t\tprintf(\"  %s\\n\", cstr);\n\t\trc = entlim_parse(cstr, \"mem\", NULL, dummyadd);\n\t\tif ((rc != 0) && (goodonly == 0)) {\n\t\t\tprintf(\"error: %s\\n\", input);\n\t\t\ti = 7 - rc;\n\t\t\twhile (--i)\n\t\t\t\tputchar(' ');\n\t\t\tprintf(\"^\\n\");\n\t\t}\n\t\tfree(cstr);\n\t\tbzero(input, 256);\n\t\tif (isatty(fileno(stdin)) == 1) {\n\t\t\tprintf(\"enter string: \");\n\t\t\tfflush(stdout);\n\t\t}\n\t}\n\treturn 0;\n}\n#endif /* ENTLIM_STANDALONE_TEST */\n"
  },
  {
    "path": "src/lib/Libifl/get_svrport.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tget_svrport.c\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/types.h>\n#include <netdb.h>\n#include <netinet/in.h>\n#include \"portability.h\"\n\n/*\n * get_svrport - get the port number for a given service\n *\n *\tReturn the port number in an unsigned integer in host byte order.\n *\tAssumes the protocol type is tcp.\n *\tReturns the supplied default on an error.\n */\n\n/**\n * @brief\n *\t-get the port number for a given service\n *\n * @param[in] service_name - service for which the port is required\n * @param[in] ptype - protocol type\n * @param[in] pdefault - default port number, in host byte order\n *\n * @return\tunsigned int\n * @retval\tport num\tsuccess\n * @retval\tpdefault\terror\n *\n */\nunsigned int\nget_svrport(char *service_name, char *ptype, unsigned int pdefault)\n{\n\tstruct servent *psvr;\n\n\tpsvr = getservbyname(service_name, ptype);\n\tif (psvr)\n\t\treturn ((unsigned int) ntohs(psvr->s_port));\n\telse\n\t\treturn (pdefault);\n}\n"
  },
  {
    "path": "src/lib/Libifl/grunt_parse.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tgrunt_parse.c\n */\n\n#include \"pbs_config.h\" /* the master config generated by configure */\n#include <ctype.h>\n#include <errno.h>\n#include <stdlib.h>\n#include <string.h>\n#include \"grunt.h\"\n#include \"pbs_error.h\"\n\n/**\n * @brief\n * \t-parse_resc_equal_string - (thread safe) parse a string of the form:\n *\t\tname1 = value1[,value2 ...][: name2 = value3[,value4 ...]]\n *\tinto <name1> <value1[,value2 ...]>\n *\t     <name2> <value3[,value4 ...]>\n *\n * @par\n *\tafter call,\n *\t\t*name will point to \"name1\"\n *\t\t*value will point to \"value1 ...\" up to but not\n *\t\t\tincluding the colon before \"name2\".\n *\n * @param[in] start - the start of the string to parse.\n * @param[out] name - will point to \"name1\"\n * @param[out] value - will point to \"value1 ...\" up to but not\n *\t\t      including the colon before \"name2\".\n * @param[out] last - points to where parsing stopped, use as \"start\" on\n *\t\t     next call to continue.  
last is set only if the function\n *\t\t     return is \"1\".\n *\n * @return\tint\n * @retval \t1 \tif name and value are found,\n * @retval\t0 \tif nothing (more) is parsed (null input)\n * @retval\t-1 \tif a syntax error was detected.\n *\n * @par\n *\teach string is null terminated.\n */\n\nint\nparse_resc_equal_string(char *start, char **name, char **value, char **last)\n{\n\tchar *pc;\n\tchar *backup;\n\tint quoting = 0;\n\n\tif ((start == NULL) || (name == NULL) || (value == NULL) || (last == NULL))\n\t\treturn -1; /* error */\n\n\tpc = start;\n\n\tif (*pc == '\\0') {\n\t\t*name = NULL;\n\t\treturn (0); /* already at end, return no strings */\n\t}\n\n\t/* strip leading spaces */\n\n\twhile (isspace((int) *pc) && *pc)\n\t\tpc++;\n\n\tif (*pc == '\\0') {\n\t\t*name = NULL; /* null name */\n\t\treturn (0);\n\t} else if (!isalpha((int) *pc))\n\t\treturn (-1); /* no name, return error */\n\n\t*name = pc;\n\n\t/* have found start of name, look for end of it */\n\n\twhile (!isspace((int) *pc) && (*pc != '=') && *pc)\n\t\tpc++;\n\n\t/* now look for =, while stripping blanks between end of name and = */\n\n\twhile (isspace((int) *pc) && *pc)\n\t\t*pc++ = '\\0';\n\tif (*pc != '=')\n\t\treturn (-1); /* should have found a = as first non blank */\n\t*pc++ = '\\0';\n\n\t/* what follows is the value string, skip leading white space */\n\n\twhile (isspace((int) *pc) && *pc)\n\t\tpc++;\n\n\t/* is the value string to be quoted ? 
*/\n\n\tif ((*pc == '\"') || (*pc == '\\''))\n\t\tquoting = (int) *pc++; /* adv start of \"value\" past quote chr */\n\t*value = pc;\n\n\t/*\n\t * now go to first colon, or if quoted, the colon\n\t * after the close quote\n\t */\n\n\tif (quoting) {\n\t\twhile ((*pc != (char) quoting) && *pc) /* look for matching */\n\t\t\tpc++;\n\t\tif (*pc) {\n\t\t\tchar *pd;\n\t\t\t/* close string up over the trailing quote */\n\t\t\tpd = pc;\n\t\t\twhile (*pd) {\n\t\t\t\t*pd = *(pd + 1);\n\t\t\t\tpd++;\n\t\t\t}\n\t\t} else\n\t\t\treturn (-1);\n\t}\n\twhile ((*pc != ':') && *pc)\n\t\tpc++;\n\n\tif (*pc == '\\0') {\n\t\twhile (isspace((int) *--pc))\n\t\t\t;\n\t\tif (*pc == ',') /* trailing comma is a no no */\n\t\t\treturn (-1);\n\t\t*last = ++pc;\n\t\treturn (1); /* no colon, just end of line, stop here */\n\t}\n\n\t/* strip off any trailing white space */\n\n\tbackup = pc++;\n\t*backup = '\\0'; /* null the colon */\n\n\twhile (isspace((int) *--backup))\n\t\t*backup = '\\0';\n\t*last = pc;\n\treturn (1);\n}\n\n/**\n * @brief\n *\t-parse_node_resc_r - (thread safe) parse the node and resource string of the form:\n *\tnodeA:resc1=value1:resc2=value2\n *\n * @param[in]\tstr - start of string to parse (string will be\n *                    munged, so make a copy before calling this\n *                    function)\n * @param[out]\tnodep - pointer to node name\n * @param[out]\tpnelem - number of used elements in key_value_pair\n *                      array\n * @param[in][out] nlkv - total number of elements in key_value_pair\n *                      array\n * @param[in][out] kv - pointer to array of key_value_pair structures\n *\t\t\twill be malloced if nlkv == 0, will grow if needed\n *\t\t\twill not be freed by this routine\n *\n * @return  int\n * @retval  0 = ok\n * @retval  !0 = error\n *\n */\nint\nparse_node_resc_r(char *str, char **nodep, int *pnelem, int *nlkv, struct key_value_pair **kv)\n{\n\tint i;\n\tint nelm = 0;\n\tchar *pc;\n\tchar *word;\n\tchar *value;\n\tchar 
*last;\n\n\tif (str == NULL)\n\t\treturn (PBSE_INTERNAL);\n\n\tif (*nlkv == 0) {\n\t\t*kv = (struct key_value_pair *) malloc(KVP_SIZE * sizeof(struct key_value_pair));\n\t\tif (*kv == NULL)\n\t\t\treturn -1;\n\t\t*nlkv = KVP_SIZE;\n\t}\n\tfor (i = 0; i < *nlkv; i++) {\n\t\t(*kv)[i].kv_keyw = NULL;\n\t\t(*kv)[i].kv_val = NULL;\n\t}\n\n\tpc = str;\n\n\twhile (isspace((int) *pc))\n\t\tpc++;\n\tif (*pc == '\\0') {\n\t\t*pnelem = nelm;\n\t\treturn 0;\n\t}\n\n\t*nodep = pc;\n\twhile ((*pc != ':') && !isspace((int) *pc) && *pc)\n\t\tpc++;\n\n\tif (pc == *nodep)\n\t\treturn -1; /* error - no node name */\n\n\tif (*pc == '\\0') {\n\t\t*pnelem = nelm; /* no resources */\n\t\treturn 0;\n\t} else {\n\t\twhile (*pc != ':' && *pc)\n\t\t\t*pc++ = '\\0';\n\t\tif (*pc == ':')\n\t\t\t*pc++ = '\\0';\n\t}\n\n\t/* process resource=value strings upto closing brace */\n\n\tif (*pc == '\\0')\n\t\treturn -1;\n\n\ti = parse_resc_equal_string(pc, &word, &value, &last);\n\twhile (i == 1) {\n\t\tif (nelm >= *nlkv) {\n\t\t\t/* make more space in k_v table */\n\t\t\tstruct key_value_pair *ttpkv;\n\t\t\tttpkv = (struct key_value_pair *) realloc(*kv, (*nlkv + KVP_SIZE) * sizeof(struct key_value_pair));\n\t\t\tif (ttpkv == NULL)\n\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t*kv = ttpkv;\n\t\t\t*nlkv += KVP_SIZE;\n\t\t}\n\t\t(*kv)[nelm].kv_keyw = word;\n\t\t(*kv)[nelm].kv_val = value;\n\t\tnelm++;\n\n\t\ti = parse_resc_equal_string(last, &word, &value, &last);\n\t}\n\tif (i == -1)\n\t\treturn PBSE_BADATVAL;\n\n\t*pnelem = nelm;\n\treturn 0;\n}\n\n/**\n * @brief\n *\t-parse_node_resc - parse the node and resource string of the form:\n *\tnodeA:resc1=value1:resc2=value2\n *\n *      @param\t\tstr\tstart of the string to parse\n *      @param[out]\tnodep\tpointer to node name\n *\t@param[out]\tnl      number of used data elements in\n *                             \tkey_value_pair array\n *      @param[out]\tkv      pointer to array of key_value_pair structures\n *\n *      @return  int\n *      @retval  0 = 
ok\n *      @retval  !0 = error\n */\nint\nparse_node_resc(char *str, char **nodep, int *nl, struct key_value_pair **kv)\n{\n\tint i;\n\tint nelm = 0;\n\tstatic int nkvelements = 0;\n\tstatic struct key_value_pair *tpkv = NULL;\n\n\tif (str == NULL)\n\t\treturn (PBSE_INTERNAL);\n\n\ti = parse_node_resc_r(str, nodep, &nelm, &nkvelements, &tpkv);\n\n\t*nl = nelm;\n\t*kv = tpkv;\n\treturn i;\n}\n\n/**\n * @brief\n *\t-parse_chunk_r - (thread safe) decode a chunk of a selection specification string,\n *\n *\tChunk is of the form: [#][:word=value[:word=value...]]\n *\n * @param str    = string to parse (will be munged) (input)\n * @param[in] nchk   = number of chunks, \"#\" (output)\n * @param[in] pnelem = of active data elements in key_value_pair\n *                     array (output)\n * @param[in] nkve   = total number of elements (size) in the\n *                     key_value_pair array (input/output)\n * @param[in] pkv    = pointer to array of key_value_pair (input/output)\n * @param[in] dflt   = upon receiving a select specification with\n *                     no number of chunks factor, we default to a nchk\n *                     factor of 1.  The new resource default_chunk.nchunk\n *                     controls the value of this chunk factor when it is\n *                     not set.  
The dflt argument specifies whether the\n *                     nchk value was provided on the select line or not\n *                     such that at a later time we can determine if\n *                     the default_chunk.nchunk resource should be\n *                     applied or not (see make_schedselect) (output)\n *\n * @par\tNote:\n *\tthe key_value_pair array, rtn, will be grown if additional\n *\tspace is needed; it is not freed by this routine\n *\n * @return \tint\n * @retval \t0 \tif ok\n * @retval \t!0 \ton error\n *\n */\n\nint\nparse_chunk_r(char *str, int *nchk, int *pnelem, int *nkve, struct key_value_pair **pkv, int *dflt)\n{\n\tint i;\n\tint nchunk = 1; /* default number of chunks */\n\tint setbydefault = 1;\n\tint nelem = 0;\n\tchar *pc;\n\tchar *ps;\n\tchar *word;\n\tchar *value;\n\tchar *last;\n\n\tif (str == NULL)\n\t\treturn (PBSE_INTERNAL);\n\n\tif (*nkve == 0) {\n\t\t/* malloc room for array of key_value_pair structure */\n\t\t*pkv = (struct key_value_pair *) malloc(KVP_SIZE * sizeof(struct key_value_pair));\n\t\tif (*pkv == NULL)\n\t\t\treturn PBSE_SYSTEM;\n\t\t*nkve = KVP_SIZE;\n\t}\n\tfor (i = 0; i < *nkve; ++i) {\n\t\t(*pkv)[i].kv_keyw = NULL;\n\t\t(*pkv)[i].kv_val = NULL;\n\t}\n\n\tpc = str;\n\n\t/* start of chunk */\n\twhile (isspace((int) *pc))\n\t\t++pc;\n\n\t/* first word must start with number or letter */\n\n\tps = pc;\n\tif (!isalnum((int) *pc))\n\t\treturn (PBSE_BADATVAL);\n\n\tif (isdigit((int) *pc)) {\n\t\t/* leading count, should be followed by ':' or '\\0' */\n\t\t++pc;\n\t\twhile (isdigit((int) *pc))\n\t\t\t++pc;\n\t\tnchunk = atoi(ps);\n\t\tsetbydefault = 0;\n\t\twhile (isspace((int) *pc))\n\t\t\t++pc;\n\t\tif (*pc != '\\0') {\n\t\t\tif (*pc != ':')\n\t\t\t\treturn (PBSE_BADATVAL);\n\t\t\t++pc;\n\t\t}\n\t}\n\n\t/* next come \"resc=value\" pairs */\n\n\ti = parse_resc_equal_string(pc, &word, &value, &last);\n\twhile (i == 1) {\n\t\tif (nelem >= *nkve) {\n\t\t\t/* make more space in k_v table */\n\t\t\tstruct 
key_value_pair *ttpkv;\n\t\t\tttpkv = realloc(*pkv, (*nkve + KVP_SIZE) * sizeof(struct key_value_pair));\n\t\t\tif (ttpkv == NULL)\n\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t*pkv = ttpkv;\n\t\t\tfor (i = *nkve; i < *nkve + KVP_SIZE; ++i) {\n\t\t\t\t(*pkv)[i].kv_keyw = NULL;\n\t\t\t\t(*pkv)[i].kv_val = NULL;\n\t\t\t}\n\t\t\t*nkve += KVP_SIZE;\n\t\t}\n\t\t(*pkv)[nelem].kv_keyw = word;\n\t\t(*pkv)[nelem].kv_val = value;\n\t\tnelem++;\n\t\t/* continue with next resc=value pair          */\n\n\t\ti = parse_resc_equal_string(last, &word, &value, &last);\n\t}\n\tif (i == -1)\n\t\treturn PBSE_BADATVAL;\n\n\t*pnelem = nelem;\n\t*nchk = nchunk;\n\tif (dflt)\n\t\t*dflt = setbydefault;\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \tparse_chunk - (not thread safe) decode a chunk of a selection specification\n *\tstring,\n *\n * @par\n *\tChunk is of the form: [#][:word=value[:word=value...]]\n *\n * @param[in]  str  = string to parse\n * @param[in]\tnchk = number of chunks, \"#\"\n * @param[in]\tnrtn = number of active (used) word=value pairs in the\n *\t\t       key_value_pair array\n * @param[in]\trtn  = pointer to static array of key_value_pair\n * @param[in]  dflt = the nchk value was set to 1 by default\n *\n * @return \tint\n * @retval \t0 \tif ok\n * @retval \t!0 \ton error\n *\n */\n\nstatic int nkvelements = 0;\nstatic struct key_value_pair *tpkv = NULL;\nint\nparse_chunk(char *str, int *nchk, int *nrtn, struct key_value_pair **rtn, int *setbydflt)\n{\n\tint i;\n\tint nelm = 0;\n\n\tif (str == NULL)\n\t\treturn (PBSE_INTERNAL);\n\n\ti = parse_chunk_r(str, nchk, &nelm, &nkvelements, &tpkv, setbydflt);\n\t*nrtn = nelm;\n\t*rtn = tpkv;\n\treturn i;\n}\n\n/**\n * @brief\n * \tparse_chunk_make_room_r - (thread safe) make room in key/value array\n *\n * @par\n * @param[in]\tinuse = number of entries current in use\n * @param[in]\textra = number of additional entries needed\n * @param[in/out] pnkve = pointer to current size of k_v array\n * @param[in/out] ppkve = pointer to address of 
base of k_v array\n *\n * @return \tint\n * @retval \t0 \tif ok\n * @retval \t!0 \ton error\n *\n */\nint\nparse_chunk_make_room_r(int inuse, int extra, int *pnkve, struct key_value_pair **ppkve)\n{\n\tint new_len;\n\tstruct key_value_pair *ttpkv;\n\tint i;\n\n\t/* check if extra will fit in current allocation */\n\tif (inuse + extra <= *pnkve)\n\t\treturn 0;\n\t/*\n\t * Need to grow the key/value array.\n\t * Keep the size a multiple of KVP_SIZE just to match parse_chunk_r's\n\t * method.\n\t */\n\tnew_len = ((inuse + extra + KVP_SIZE - 1) / KVP_SIZE) * KVP_SIZE;\n\tttpkv = realloc(*ppkve, new_len * sizeof(struct key_value_pair));\n\tif (ttpkv == NULL)\n\t\treturn PBSE_SYSTEM;\n\t/* NULL out any new entries */\n\tfor (i = inuse; i < new_len; i++) {\n\t\tttpkv[i].kv_keyw = NULL;\n\t\tttpkv[i].kv_val = NULL;\n\t}\n\t*ppkve = ttpkv;\n\t*pnkve = new_len;\n\treturn 0;\n}\n\n/**\n * @brief\n * \tparse_chunk_make_room - (not thread safe) make room in default key/value\n * \tarray\n *\n * @par\n * @param[in]\tinuse = number of entries currently in use\n * @param[in]\textra = number of additional entries needed\n * @param[in/out] rtn = pointer to address to hold possibly moved k_v array\n *\n * @return \tint\n * @retval \t0 \tif ok\n * @retval \t!0 \ton error\n *\n */\nint\nparse_chunk_make_room(int inuse, int extra, struct key_value_pair **rtn)\n{\n\tint rc;\n\t/* Make sure we are using the default k_v array */\n\tif (*rtn != tpkv)\n\t\treturn PBSE_SYSTEM;\n\trc = parse_chunk_make_room_r(inuse, extra, &nkvelements, &tpkv);\n\t*rtn = tpkv;\n\treturn rc;\n}\n\n/**\n * @brief\n *\tparse_plus_spec_r - (thread safe)\n * @par\n *\tCalled with \"str\" set for start of string of a set of plus connected\n *\tsubstrings \"substring1+substring2+...\";\n *\n * @param[in]\tselstr - string to parse, continue to parse\n * @param[in]\tlast   - pointer to place to resume parsing\n * @param[in]\thp     - set based on finding '(' or ')'\n *\t\t\t > 0 = found '(' at start of substring\n 
*\t\t\t = 0 = no parens or found both in one substring\n *\t\t\t < 0 = found ')' at end of substring\n *\n * @par\n *\tIMPORTANT: the input string will be munged by the various\n *\tparsing routines; if you need an untouched original, pass\n *\tin a pointer to a copy.\n *\n * @return         A pointer to next substring\n * @retval         next substring (char *)\n * @retval         NULL if end of the spec\n *\n */\nchar *\nparse_plus_spec_r(char *selstr, char **last, int *hp)\n{\n\tint haveparen = 0;\n\tchar *pe;\n\tchar *ps;\n\n\tif ((selstr == NULL) || (strlen(selstr)) == 0)\n\t\treturn NULL;\n\n\tps = selstr;\n\n\twhile (isspace((int) *ps))\n\t\t++ps;\n\tif (*ps == '(') {\n\t\thaveparen++;\n\t\tps++; /* skip over the ( */\n\t}\n\n\tpe = ps;\n\twhile (*pe != '\\0') {\n\t\tif ((*pe == '\"') || (*pe == '\\'')) {\n\t\t\tchar quote;\n\n\t\t\tquote = *pe;\n\t\t\tpe++;\n\t\t\twhile (*pe != '\\0' && *pe != quote)\n\t\t\t\tpe++;\n\t\t\tif (*pe == quote)\n\t\t\t\tpe++;\n\t\t} else if (*pe != '+' && *pe != ')')\n\t\t\tpe++;\n\t\telse\n\t\t\tbreak;\n\t}\n\n\tif (*pe) {\n\t\tif (*pe == ')') {\n\t\t\t*pe++ = '\\0'; /* null the )\t\t*/\n\t\t\thaveparen--;\n\t\t}\n\t\tif (*pe != '\\0')\n\t\t\t*pe++ = '\\0'; /* null the following +\t*/\n\t}\n\n\tif (*ps) {\n\t\tif (last != NULL)\n\t\t\t*last = pe;\n\t\tif (hp != NULL)\n\t\t\t*hp = haveparen;\n\t\treturn ps;\n\t} else\n\t\treturn NULL;\n}\n\n/**\n * @brief\n *\tparse_plus_spec - not thread safe\n * @par\n *\tCalled with \"str\" set for start of string of a set of plus connected\n *\tsubstrings \"substring1+substring2+...\";  OR\n *\tcalled with null to continue where left off.\n *\n * @param[in] selstr - string holding select specs\n * @param[out] rc - flag\n *\n * @return \tA pointer to next substring\n * @retval\tnext substring (char *)\n * @retval\tNULL if end of the spec\n *\n * @par\n *\tIMPORTANT: the \"selstr\" is copied into a locally allocated \"static\"\n *\tchar array for parsing.  The original string is untouched.  
The array\n *\tis grown as needed to hold \"selstr\".\n */\nchar *\nparse_plus_spec(char *selstr, int *rc)\n{\n\tint hp; /* value returned by parse_plus_spec_r ignored */\n\tsize_t len;\n\tstatic char *pe;\n\tchar *ps;\n\tstatic char *parsebuf = NULL;\n\tstatic int parsebufsz = 0;\n\n\t*rc = 0;\n\tif (selstr) {\n\n\t\tif ((len = strlen(selstr)) == 0)\n\t\t\treturn NULL;\n\t\telse if (len >= (size_t) parsebufsz) {\n\t\t\tif (parsebuf)\n\t\t\t\tfree(parsebuf);\n\t\t\tparsebufsz = len * 2;\n\t\t\tparsebuf = (char *) malloc(parsebufsz);\n\t\t\tif (parsebuf == NULL) {\n\t\t\t\tparsebufsz = 0;\n\t\t\t\t*rc = errno;\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t}\n\n\t\t(void) strcpy(parsebuf, selstr);\n\t\tps = parsebuf;\n\t} else\n\t\tps = pe;\n\n\tif (*ps == '+') {\n\t\t/* invalid string, starts with + */\n\t\t*rc = PBSE_BADNODESPEC;\n\t\treturn NULL;\n\t}\n\n\treturn (parse_plus_spec_r(ps, &pe, &hp));\n}\n\n/**\n * @brief\n *\tget the first vnode corresponding to a selectspec\n *\n * @param[in] execvnode - selectspec\n *\n * @return \tvnode - has to be freed by the caller\n * @retval\tNULL if the vnode could not be found in the string\n */\nchar *\nget_first_vnode(char *execvnode)\n{\n\tchar *chunk;\n\tchar *last;\n\tint hasprn;\n\tchar *vname;\n\tint nelem;\n\tstruct key_value_pair *pkvp;\n\tchar *execvnode_dup;\n\tchar *vname_out = NULL;\n\n\tif (!execvnode)\n\t\treturn NULL;\n\n\texecvnode_dup = strdup(execvnode);\n\tif (execvnode_dup == NULL)\n\t\treturn NULL;\n\n\tchunk = parse_plus_spec_r(execvnode_dup, &last, &hasprn);\n\tif (chunk) {\n\t\tif (parse_node_resc(chunk, &vname, &nelem, &pkvp) == 0)\n\t\t\tvname_out = strdup(vname);\n\t}\n\n\tfree(execvnode_dup);\n\treturn vname_out;\n}\n"
  },
  {
    "path": "src/lib/Libifl/ifl_impl.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    ifl_impl.c\n *\n * @brief\n * \t\tPass-through call to send batch_request to server. 
Interfaces can\n * \t\tbe overridden by the developer, to implement their own definition.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"ifl_internal.h\"\n#include \"pbs_ifl.h\"\n#include \"pbs_share.h\"\n\n/**\n * @brief\n *\t-Pass-through call to send async run job batch request.\n *\n * @param[in] c - connection handle\n * @param[in] jobid- job identifier\n * @param[in] location - string of vnodes/resources to be allocated to the job\n * @param[in] extend - extend string for encoding req\n *\n * @return      int\n * @retval      0       success\n * @retval      !0      error\n *\n */\nint\npbs_asyrunjob(int c, const char *jobid, const char *location, const char *extend)\n{\n\treturn (*pfn_pbs_asyrunjob)(c, jobid, location, extend);\n}\n\n/**\n * @brief\n *\t-Pass-through call to send async run job batch request with ack\n *\n * @param[in] c - connection handle\n * @param[in] jobid- job identifier\n * @param[in] location - string of vnodes/resources to be allocated to the job\n * @param[in] extend - extend string for encoding req\n *\n * @return      int\n * @retval      0       success\n * @retval      !0      error\n *\n */\nint\npbs_asyrunjob_ack(int c, const char *jobid, const char *location, const char *extend)\n{\n\treturn (*pfn_pbs_asyrunjob_ack)(c, jobid, location, extend);\n}\n\n/**\n * @brief\n *\t-Pass-through call to send alter Job request\n *\treally an instance of the \"manager\" request.\n *\n * @param[in] c - connection handle\n * @param[in] jobid- job identifier\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extend string for encoding req\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t!0\terror\n *\n */\nint\npbs_alterjob(int c, const char *jobid, struct attrl *attrib, const char *extend)\n{\n\treturn (*pfn_pbs_alterjob)(c, jobid, attrib, extend);\n}\n\n/**\n * @brief\n *\t-Pass-through call to send alter Job request\n *\treally an instance of the \"manager\" 
request.\n *\n * @param[in] c - connection handle\n * @param[in] jobid - job identifier\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extend string for encoding req\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t!0\terror\n *\n */\nint\npbs_asyalterjob(int c, const char *jobid, struct attrl *attrib, const char *extend)\n{\n\treturn (*pfn_pbs_asyalterjob)(c, jobid, attrib, extend);\n}\n\n/**\n * @brief\n * \t-pbs_confirmresv - this function is for exclusive use by the Scheduler\n *\tto confirm an advance reservation.\n *\n * @param[in] c \tconnection handle\n * @param[in] rid \tReservation ID\n * @param[in] location  string of vnodes/resources to be allocated to the resv.\n * @param[in] start \tstart time of reservation if non-zero\n * @param[in] extend PBS_RESV_CONFIRM_SUCCESS or PBS_RESV_CONFIRM_FAIL\n *\n * @return\tint\n * @retval\t0\tSuccess\n * @retval\t!0\terror\n *\n */\nint\npbs_confirmresv(int c, const char *rid, const char *location, unsigned long start, const char *extend)\n{\n\treturn (*pfn_pbs_confirmresv)(c, rid, location, start, extend);\n}\n\n/**\n * @brief\n *\tPass-through call to connect to pbs server\n *\tpassing any 'extend' data to the connection.\n *\n * @param[in] server - the hostname of the pbs server to connect to.\n *\n * @retval int\t- return value of pbs_connect_extend().\n */\nint\npbs_connect(const char *server)\n{\n\treturn (*pfn_pbs_connect)(server);\n}\n\n/**\n * @brief\n *\tPass-through call to make a PBS_BATCH_Connect request to 'server'.\n *\n * @param[in]   server - the hostname of the pbs server to connect to.\n * @param[in]   extend_data - a string to send as \"extend\" data.\n *\n * @return int\n * @retval >= 0\tindex to the internal connection table representing the\n *\t\tconnection made.\n * @retval -1\terror encountered setting up the connection.\n */\nint\npbs_connect_extend(const char *server, const char *extend_data)\n{\n\treturn (*pfn_pbs_connect_extend)(server, extend_data);\n}\n\n/**\n * @brief\n *\t- 
Pass-through call to get default server name.\n *\n * @return\tstring\n * @retval\tdefault server name\tsuccess\n * @retval\tNULL\t\terror\n *\n */\nchar *\npbs_default()\n{\n\treturn (*pfn_pbs_default)();\n}\n\n/**\n * @brief\n *\tPass-through call to send the delete Job request\n * \treally just an instance of the manager request\n *\n * @param[in] c - connection handler\n * @param[in] jobid - job identifier\n * @param[in] extend - string to encode req\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t!0\terror\n *\n */\nint\npbs_deljob(int c, const char *jobid, const char *extend)\n{\n\treturn (*pfn_pbs_deljob)(c, jobid, extend);\n}\n\n/**\n * @brief\n *\tPass-through call to send the delete Job list request\n * \treally just an instance of the manager request\n *\n * @param[in] c - connection handler\n * @param[in] jobid - list of job identifiers\n * @param[in] numofjobs - number of jobs in the list\n * @param[in] extend - string to encode req\n *\n * @return\tstruct batch_deljob_status *\n * @retval\tpointer to batch_deljob_status list\tsuccess\n * @retval\tNULL\terror\n *\n */\nstruct batch_deljob_status *\npbs_deljoblist(int c, char **jobid, int numofjobs, const char *extend)\n{\n\treturn (*pfn_pbs_deljoblist)(c, jobid, numofjobs, extend);\n}\n\n/**\n * @brief\n *\t-Pass-through call to send close connection batch request\n *\n * @param[in] connect - socket descriptor\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t-1\terror\n *\n */\nint\npbs_disconnect(int connect)\n{\n\treturn (*pfn_pbs_disconnect)(connect);\n}\n\n/**\n * @brief\n *\t-Pass-through call to get last error message the server returned on\n *\tthis connection.\n *\n * @param[in] connect - socket descriptor\n *\n * @return\tstring\n * @retval\tconnection contexts\n *\t\tTLS\t\t\tmultithread\n *\t\tSTRUCTURE\t\tsingle thread\n * @retval\terrmsg\t\t\terror\n *\n */\nchar *\npbs_geterrmsg(int connect)\n{\n\treturn (*pfn_pbs_geterrmsg)(connect);\n}\n\n/**\n * @brief\n *\t- Pass-through call to send Hold Job request to the server --\n *\treally just an instance of the \"manager\" request.\n *\n * 
@param[in] c - connection handler\n * @param[in] jobid - job identifier\n * @param[in] holdtype - value for holdtype\n * @param[in] extend - string to encode req\n *\n * @return      int\n * @retval      0       success\n * @retval      !0      error\n *\n */\nint\npbs_holdjob(int c, const char *jobid, const char *holdtype, const char *extend)\n{\n\treturn (*pfn_pbs_holdjob)(c, jobid, holdtype, extend);\n}\n\n/**\n * @brief\n *\tpbs_loadconf - Populate the pbs_conf structure\n *\n * @par\n *\tLoad the pbs_conf structure.  The variables can be filled in\n *\tfrom either the environment or the pbs.conf file.  The\n *\tenvironment gets priority over the file.  If any of the\n *\tprimary variables are not filled in, the function fails.\n *\tPrimary vars: pbs_home_path, pbs_exec_path, pbs_server_name\n *\n * @note\n *\tClients can now be multithreaded. So don't call pbs_loadconf with\n *\treload = TRUE. Currently, the code flow ensures that the configuration\n *\tis loaded only once (never used with reload true). Thus in the rest of\n *\tthe code a direct read of the pbs_conf.variables is fine. There is no\n *\trace between access of pbs_conf vars and the loading of pbs_conf vars.\n *\tHowever, if pbs_loadconf is called with reload = TRUE, this assumption\n *\twill be void. 
In that case, access to every pbs_conf.variable has to be\n *\tsynchronized against the reload of those variables.\n *\n * @param[in] reload\t\tWhether to attempt a reload\n *\n * @return int\n * @retval 1 Success\n * @retval 0 Failure\n */\nint\npbs_loadconf(int reload)\n{\n\treturn (*pfn_pbs_loadconf)(reload);\n}\n\n/**\n * @brief\n *      Pass-through call to send LocateJob request.\n *\n * @param[in] c - connection handler\n * @param[in] jobid - job identifier\n * @param[in] extend - string to encode req\n *\n * @return      string\n * @retval      destination name\tsuccess\n * @retval      NULL      \t\terror\n *\n */\nchar *\npbs_locjob(int c, const char *jobid, const char *extend)\n{\n\treturn (*pfn_pbs_locjob)(c, jobid, extend);\n}\n\n/**\n * @brief\n *\t- Basically a pass-thru to PBS_manager\n *\n * @param[in] c - connection handle\n * @param[in] command - mgr command with respect to obj\n * @param[in] objtype - object type\n * @param[in] objname - object name\n * @param[in] attrib - pointer to attropl structure\n * @param[in] extend - extend string to encode req\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t!0\terror\n *\n */\nint\npbs_manager(int c, int command, int objtype, const char *objname,\n\t    struct attropl *attrib, const char *extend)\n{\n\treturn (*pfn_pbs_manager)(c, command, objtype, objname,\n\t\t\t\t  attrib, extend);\n}\n\n/**\n * @brief\n *\tPass-through call to send move job request\n *\n * @param[in] c - connection handler\n * @param[in] jobid - job identifier\n * @param[in] destin - job moved to\n * @param[in] extend - string to encode req\n *\n * @return      int\n * @retval      0       success\n * @retval      !0      error\n *\n */\nint\npbs_movejob(int c, const char *jobid, const char *destin, const char *extend)\n{\n\treturn (*pfn_pbs_movejob)(c, jobid, destin, extend);\n}\n\n/**\n * @brief\n *\t-Pass-through call to send the MessageJob request and get the reply.\n *\n * @param[in] c - socket descriptor\n * @param[in] jobid - 
job id\n * @param[in] fileopt - which file\n * @param[in] msg - msg to be encoded\n * @param[in] extend - extend string for encoding req\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t!0\terror\n *\n */\nint\npbs_msgjob(int c, const char *jobid, int fileopt, const char *msg, const char *extend)\n{\n\treturn (*pfn_pbs_msgjob)(c, jobid, fileopt, msg, extend);\n}\n\n/**\n * @brief\n *\t-Pass-through call to send order job batch request\n *\n * @param[in] c - connection handler\n * @param[in] job1 - job identifier\n * @param[in] job2 - job identifier\n * @param[in] extend - string to encode req\n *\n * @return      int\n * @retval      0       success\n * @retval      !0      error\n *\n */\nint\npbs_orderjob(int c, const char *job1, const char *job2, const char *extend)\n{\n\treturn (*pfn_pbs_orderjob)(c, job1, job2, extend);\n}\n\n/**\n * @brief\n *\t-Pass-through call to send rerun batch request\n *\n * @param[in] c - connection handler\n * @param[in] jobid - job identifier\n * @param[in] extend - string to encode req\n *\n * @return      int\n * @retval      0       success\n * @retval      !0      error\n *\n */\nint\npbs_rerunjob(int c, const char *jobid, const char *extend)\n{\n\treturn (*pfn_pbs_rerunjob)(c, jobid, extend);\n}\n\n/**\n * @brief\n *\t-Pass-through call to release a hold on a job.\n * \treally just an instance of the \"manager\" request.\n *\n * @param[in] c - connection handler\n * @param[in] jobid - job identifier\n * @param[in] holdtype - type of hold\n * @param[in] extend - string to encode req\n *\n * @return      int\n * @retval      0       success\n * @retval      !0      error\n *\n */\nint\npbs_rlsjob(int c, const char *jobid, const char *holdtype, const char *extend)\n{\n\treturn (*pfn_pbs_rlsjob)(c, jobid, holdtype, extend);\n}\n\n/**\n * @brief\n *\t-Pass-through call to send preempt jobs batch request\n *\n * @param[in] c - connection handler\n * @param[in] preempt_jobs_list - list of jobs to be preempted\n *\n * @return  
    preempt_job_info *\n * @retval      preempt_job_info object       success\n * @retval      NULL      error\n *\n */\npreempt_job_info *\npbs_preempt_jobs(int c, char **preempt_jobs_list)\n{\n\treturn (*pfn_pbs_preempt_jobs)(c, preempt_jobs_list);\n}\n\n/**\n * @brief\n *\t-Pass-through call to send runjob batch request\n *\n * @param[in] c - communication handle\n * @param[in] jobid - job identifier\n * @param[in] location - location where the job is to run\n * @param[in] extend - extend string to encode req\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\nint\npbs_runjob(int c, const char *jobid, const char *location, const char *extend)\n{\n\treturn (*pfn_pbs_runjob)(c, jobid, location, extend);\n}\n\n/**\n * @brief\n *\t-Pass-through call to send SelectJob request\n *\tReturn a list of job ids that meet certain selection criteria.\n *\n * @param[in] c - communication handle\n * @param[in] attrib - pointer to attropl structure (selection criteria)\n * @param[in] extend - extend string to encode req\n *\n * @return\tstring\n * @retval\tjob ids\t\tsuccess\n * @retval\tNULL\t\terror\n *\n */\nchar **\npbs_selectjob(int c, struct attropl *attrib, const char *extend)\n{\n\treturn (*pfn_pbs_selectjob)(c, attrib, extend);\n}\n\n/**\n * @brief\n *\tPass-through call to send and read the signal job batch request.\n *\n * @param[in] c - communication handle\n * @param[in] jobid - job identifier\n * @param[in] signal - signal\n * @param[in] extend - extend string for request\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t!0\terror\n *\n */\nint\npbs_sigjob(int c, const char *jobid, const char *signal, const char *extend)\n{\n\treturn (*pfn_pbs_sigjob)(c, jobid, signal, extend);\n}\n\n/**\n * @brief\n *\t-Pass-through call to deallocate a \"batch_status\" structure\n *\n * @param[in] bsp - pointer to the batch_status structure to free\n *\n * @return\tVoid\n *\n */\nvoid\npbs_statfree(struct batch_status 
*bsp)\n{\n\t(*pfn_pbs_statfree)(bsp);\n}\n\n/**\n * @brief\n *\t-Pass-through call to deallocate a \"batch_deljob_status\" structure\n *\n * @param[in] bdsp - pointer to the batch_deljob_status structure to free\n *\n * @return\tVoid\n *\n */\nvoid\npbs_delstatfree(struct batch_deljob_status *bdsp)\n{\n\t(*pfn_pbs_delstatfree)(bdsp);\n}\n\n/**\n * @brief\n *\tPass-through call to get status of one or more resources.\n *\n * @param[in] c - communication handle\n * @param[in] id - object id\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extend string for encoding req\n *\n * @return      structure handle\n * @retval      pointer to batch_status struct          Success\n * @retval      NULL                                    error\n *\n */\nstruct batch_status *\npbs_statrsc(int c, const char *id, struct attrl *attrib, const char *extend)\n{\n\treturn (*pfn_pbs_statrsc)(c, id, attrib, extend);\n}\n\n/**\n * @brief\n *\t-Pass-through call to get status of a job.\n *\n * @param[in] c - communication handle\n * @param[in] id - job id\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extend string for req\n *\n * @return\tstructure handle\n * @retval\tpointer to batch_status struct\t\tsuccess\n * @retval\tNULL\t\t\t\t\terror\n *\n */\nstruct batch_status *\npbs_statjob(int c, const char *id, struct attrl *attrib, const char *extend)\n{\n\treturn (*pfn_pbs_statjob)(c, id, attrib, extend);\n}\n\n/**\n * @brief\n *\t-Pass-through call to send SelStat request\n *\tReturn the status of jobs that meet certain selection criteria.\n *\n * @param[in] c - communication handle\n * @param[in] attrib - pointer to attropl structure (selection criteria)\n * @param[in] rattrib - list of attributes to return for the selected jobs\n * @param[in] extend - extend string to encode req\n *\n * @return\tstruct batch_status *\n * @retval\tbatch_status object for job\t\tsuccess\n * @retval\tNULL\t\terror\n *\n */\nstruct batch_status *\npbs_selstat(int c, struct attropl *attrib, struct attrl *rattrib, const char *extend)\n{\n\treturn 
(*pfn_pbs_selstat)(c, attrib, rattrib, extend);\n}\n\n/**\n * @brief\n *\t-Pass-through call to get status of a queue.\n *\n * @param[in] c - communication handle\n * @param[in] id - object id\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extend string for encoding req\n *\n * @return      structure handle\n * @retval      pointer to batch_status struct          Success\n * @retval      NULL                                    error\n *\n */\nstruct batch_status *\npbs_statque(int c, const char *id, struct attrl *attrib, const char *extend)\n{\n\treturn (*pfn_pbs_statque)(c, id, attrib, extend);\n}\n\n/**\n * @brief\n *\t- Pass-through call to return the status of a server.\n *\n * @param[in] c - communication handle\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extend string for encoding req\n *\n * @return      structure handle\n * @retval      pointer to batch_status struct          Success\n * @retval      NULL                                    error\n *\n */\nstruct batch_status *\npbs_statserver(int c, struct attrl *attrib, const char *extend)\n{\n\treturn (*pfn_pbs_statserver)(c, attrib, extend);\n}\n\n/**\n * @brief\n *\t- Pass-through call to return the status of sched objects.\n *\n * @param[in] c - communication handle\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extend string for encoding req\n *\n * @return      structure handle\n * @retval      pointer to batch_status struct          Success\n * @retval      NULL                                    error\n *\n */\nstruct batch_status *\npbs_statsched(int c, struct attrl *attrib, const char *extend)\n{\n\treturn (*pfn_pbs_statsched)(c, attrib, extend);\n}\n\n/**\n * @brief\n * \t- Pass-through call to return the status of all possible hosts.\n *\n * @param[in] con - communication handle\n * @param[in] hid - hostname to filter\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extend string for 
encoding req\n *\n * @return      structure handle\n * @retval      pointer to batch_status struct          Success\n * @retval      NULL                                    error\n *\n */\nstruct batch_status *\npbs_stathost(int con, const char *hid, struct attrl *attrib, const char *extend)\n{\n\treturn (*pfn_pbs_stathost)(con, hid, attrib, extend);\n}\n\n/**\n * @brief\n * \t-Pass-through call to return status of a host;\n *\tmaintained for backward compatibility\n *\n * @param[in] c - communication handle\n * @param[in] id - object id\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extend string for encoding req\n *\n * @return      structure handle\n * @retval      pointer to batch_status struct          Success\n * @retval      NULL\t\t\t\t\terror\n *\n */\nstruct batch_status *\npbs_statnode(int c, const char *id, struct attrl *attrib, const char *extend)\n{\n\treturn (*pfn_pbs_statnode)(c, id, attrib, extend);\n}\n\n/**\n * @brief\n * \t-Pass-through call to get information about virtual nodes (vnodes)\n *\n * @param[in] c - communication handle\n * @param[in] id - object id\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extend string for encoding req\n *\n * @return\tstructure handle\n * @retval\tpointer to batch_status struct\t\tSuccess\n * @retval\tNULL\t\t\t\t\terror\n *\n */\nstruct batch_status *\npbs_statvnode(int c, const char *id, struct attrl *attrib, const char *extend)\n{\n\treturn (*pfn_pbs_statvnode)(c, id, attrib, extend);\n}\n\n/**\n * @brief\n *\t-Pass-through call to get the status of a reservation.\n *\n * @param[in] c - communication handle\n * @param[in] id - object id\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extend string for encoding req\n *\n * @return      structure handle\n * @retval      pointer to batch_status struct          Success\n * @retval      NULL                                    error\n *\n */\nstruct batch_status *\npbs_statresv(int 
c, const char *id, struct attrl *attrib, const char *extend)\n{\n\treturn (*pfn_pbs_statresv)(c, id, attrib, extend);\n}\n\n/**\n * @brief\n *\tPass-through call to get status of a hook.\n *\n * @param[in] c - communication handle\n * @param[in] id - object name\n * @param[in] attrib - pointer to attrl structure (list)\n * @param[in] extend - extend string for req\n *\n * @return\tstructure handle\n * @retval\tpointer to batch_status struct\tsuccess\n * @retval\tNULL\t\t\terror\n *\n */\nstruct batch_status *\npbs_stathook(int c, const char *id, struct attrl *attrib, const char *extend)\n{\n\treturn (*pfn_pbs_stathook)(c, id, attrib, extend);\n}\n\n/**\n * @brief\n *\t-Pass-through call to get the attributes that failed verification\n *\n * @param[in] connect - socket descriptor\n *\n * @return\tstructure handle\n * @retval\tpointer to ecl_attribute_errors struct\t\tsuccess\n * @retval\tNULL\t\t\t\t\t\terror\n *\n */\nstruct ecl_attribute_errors *\npbs_get_attributes_in_error(int connect)\n{\n\treturn (*pfn_pbs_get_attributes_in_error)(connect);\n}\n\n/**\n * @brief\n *\t-Pass-through call to submit job request\n *\n * @param[in] c - communication handle\n * @param[in] attrib - pointer to attr list\n * @param[in] script - job script\n * @param[in] destination - destination queue for the job\n * @param[in] extend - buffer to hold cred info\n *\n * @return      string\n * @retval      jobid   success\n * @retval      NULL    error\n *\n */\nchar *\npbs_submit(int c, struct attropl *attrib, const char *script, const char *destination, const char *extend)\n{\n\treturn (*pfn_pbs_submit)(c, attrib, script, destination, extend);\n}\n\n/**\n * @brief\n *\tPass-through call to submit reservation request\n *\n * @param[in]   c - socket on which connected\n * @param[in]   attrib - the list of attributes for batch request\n * @param[in]   extend - extension of batch request\n *\n * @return char*\n * @retval SUCCESS returns the reservation ID\n * @retval ERROR NULL\n */\nchar 
*\npbs_submit_resv(int c, struct attropl *attrib, const char *extend)\n{\n\treturn (*pfn_pbs_submit_resv)(c, attrib, extend);\n}\n\n/**\n * @brief\n *\tPasses modify reservation request to PBSD_modify_resv( )\n *\n * @param[in]   c - socket on which connected\n * @param[in]\tresv_id - reservation id\n * @param[in]   attrib - the list of attributes for batch request\n * @param[in]   extend - extension of batch request\n *\n * @return char*\n * @retval SUCCESS returns the response from the server.\n * @retval ERROR NULL\n */\nchar *\npbs_modify_resv(int c, const char *resv_id, struct attropl *attrib, const char *extend)\n{\n\treturn (*pfn_pbs_modify_resv)(c, resv_id, attrib, extend);\n}\n\n/**\n * @brief\n *      Pass-through call to Delete reservation\n *\n * @param[in] c - connection handler\n * @param[in] resv_id - reservation identifier\n * @param[in] extend - string to encode req\n *\n * @return      int\n * @retval      0       success\n * @retval      !0      error\n *\n */\nint\npbs_delresv(int c, const char *resv_id, const char *extend)\n{\n\treturn (*pfn_pbs_delresv)(c, resv_id, extend);\n}\n\n/**\n * @brief\n * \t \tRelease a set of sister nodes or vnodes,\n * \tor all sister nodes or vnodes assigned to the specified PBS\n * \tbatch job.\n *\n * @param[in] c \tcommunication handle\n * @param[in] jobid  job identifier\n * @param[in] node_list \tlist of hosts or vnodes to be released\n * @param[in] extend \tadditional params, currently passes -k arguments\n *\n * @return\tint\n * @retval\t0\tSuccess\n * @retval\t!0\terror\n *\n */\nint\npbs_relnodesjob(int c, const char *jobid, const char *node_list, const char *extend)\n{\n\treturn (*pfn_pbs_relnodesjob)(c, jobid, node_list, extend);\n}\n\n/**\n * @brief\n *\t-Pass-through call to send termination batch_request to server.\n *\n * @param[in] c - communication handle\n * @param[in] manner - manner in which server to be terminated\n * @param[in] extend - extension string for request\n *\n * @return\tint\n * 
@retval\t0\t\tsuccess\n * @retval\tpbs_error\terror\n *\n */\nint\npbs_terminate(int c, int manner, const char *extend)\n{\n\treturn (*pfn_pbs_terminate)(c, manner, extend);\n}\n\n/**\n * @brief Registers the Scheduler with all the Servers configured\n *\n * @param[in]\tsched_id - sched identifier which is known to the server\n * @param[in]\tprimary_conn_sd - primary connection handle which represents all servers returned by pbs_connect\n * @param[in]\tsecondary_conn_sd - secondary connection handle which represents all servers returned by pbs_connect\n *\n * @return int\n * @retval !0  - couldn't register with a connected server\n * @retval 0  - success\n */\nint\npbs_register_sched(const char *sched_id, int primary_conn_sd, int secondary_conn_sd)\n{\n\treturn (*pfn_pbs_register_sched)(sched_id, primary_conn_sd, secondary_conn_sd);\n}\n"
  },
  {
    "path": "src/lib/Libifl/ifl_pointers.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <stdio.h>\n\n#include \"pbs_ifl.h\"\n#include \"ifl_internal.h\"\n\nint (*pfn_pbs_asyrunjob)(int, const char *, const char *, const char *) = __pbs_asyrunjob;\nint (*pfn_pbs_asyrunjob_ack)(int, const char *, const char *, const char *) = __pbs_asyrunjob_ack;\nint (*pfn_pbs_alterjob)(int, const char *, struct attrl *, const char *) = __pbs_alterjob;\nint (*pfn_pbs_asyalterjob)(int, const char *, struct attrl *, const char *) = __pbs_asyalterjob;\nint (*pfn_pbs_confirmresv)(int, const char *, const char *, unsigned long, const char *) = __pbs_confirmresv;\nint (*pfn_pbs_connect)(const char *) = __pbs_connect;\nint (*pfn_pbs_connect_extend)(const char *, const char *) = __pbs_connect_extend;\nchar *(*pfn_pbs_default)(void) = __pbs_default;\nint (*pfn_pbs_deljob)(int, const char *, const char *) = __pbs_deljob;\nstruct batch_deljob_status *(*pfn_pbs_deljoblist)(int, char **, int, const char *) = __pbs_deljoblist;\nint (*pfn_pbs_disconnect)(int) = __pbs_disconnect;\nchar *(*pfn_pbs_geterrmsg)(int) = __pbs_geterrmsg;\nint (*pfn_pbs_holdjob)(int, const char *, const char *, const char *) = __pbs_holdjob;\nint (*pfn_pbs_loadconf)(int) = __pbs_loadconf;\nchar *(*pfn_pbs_locjob)(int, const char *, const char *) = __pbs_locjob;\nint (*pfn_pbs_manager)(int, int, int, const char *, struct attropl *, const char *) = __pbs_manager;\nint 
(*pfn_pbs_movejob)(int, const char *, const char *, const char *) = __pbs_movejob;\nint (*pfn_pbs_msgjob)(int, const char *, int, const char *, const char *) = __pbs_msgjob;\nint (*pfn_pbs_orderjob)(int, const char *, const char *, const char *) = __pbs_orderjob;\nint (*pfn_pbs_rerunjob)(int, const char *, const char *) = __pbs_rerunjob;\nint (*pfn_pbs_rlsjob)(int, const char *, const char *, const char *) = __pbs_rlsjob;\nint (*pfn_pbs_runjob)(int, const char *, const char *, const char *) = __pbs_runjob;\nchar **(*pfn_pbs_selectjob)(int, struct attropl *, const char *) = __pbs_selectjob;\nint (*pfn_pbs_sigjob)(int, const char *, const char *, const char *) = __pbs_sigjob;\nvoid (*pfn_pbs_statfree)(struct batch_status *) = __pbs_statfree;\nvoid (*pfn_pbs_delstatfree)(struct batch_deljob_status *) = __pbs_delstatfree;\nstruct batch_status *(*pfn_pbs_statrsc)(int, const char *, struct attrl *, const char *) = __pbs_statrsc;\nstruct batch_status *(*pfn_pbs_statjob)(int, const char *, struct attrl *, const char *) = __pbs_statjob;\nstruct batch_status *(*pfn_pbs_selstat)(int, struct attropl *, struct attrl *, const char *) = __pbs_selstat;\nstruct batch_status *(*pfn_pbs_statque)(int, const char *, struct attrl *, const char *) = __pbs_statque;\nstruct batch_status *(*pfn_pbs_statserver)(int, struct attrl *, const char *) = __pbs_statserver;\nstruct batch_status *(*pfn_pbs_statsched)(int, struct attrl *, const char *) = __pbs_statsched;\nstruct batch_status *(*pfn_pbs_stathost)(int, const char *, struct attrl *, const char *) = __pbs_stathost;\nstruct batch_status *(*pfn_pbs_statnode)(int, const char *, struct attrl *, const char *) = __pbs_statnode;\nstruct batch_status *(*pfn_pbs_statvnode)(int, const char *, struct attrl *, const char *) = __pbs_statvnode;\nstruct batch_status *(*pfn_pbs_statresv)(int, const char *, struct attrl *, const char *) = __pbs_statresv;\nstruct batch_status *(*pfn_pbs_stathook)(int, const char *, struct attrl *, const char *) = 
__pbs_stathook;\nstruct ecl_attribute_errors *(*pfn_pbs_get_attributes_in_error)(int) = __pbs_get_attributes_in_error;\nchar *(*pfn_pbs_submit)(int, struct attropl *, const char *, const char *, const char *) = __pbs_submit;\nchar *(*pfn_pbs_submit_resv)(int, struct attropl *, const char *) = __pbs_submit_resv;\nchar *(*pfn_pbs_modify_resv)(int, const char *, struct attropl *, const char *) = __pbs_modify_resv;\nint (*pfn_pbs_delresv)(int, const char *, const char *) = __pbs_delresv;\nint (*pfn_pbs_relnodesjob)(int, const char *, const char *, const char *) = __pbs_relnodesjob;\nint (*pfn_pbs_terminate)(int, int, const char *) = __pbs_terminate;\npreempt_job_info *(*pfn_pbs_preempt_jobs)(int, char **) = __pbs_preempt_jobs;\nint (*pfn_pbs_register_sched)(const char *, int, int) = __pbs_register_sched;\n"
  },
  {
    "path": "src/lib/Libifl/ifl_util.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h>\n\n#include \"pbs_ifl.h\"\n#include \"libpbs.h\"\n#include \"pbs_error.h\"\n#include \"dis.h\"\n#include \"pbs_share.h\"\n\n/**\n * @brief\n *\tparse out the parts from 'server_name'\n *\n * @param[in] server_id_in - server id input, could be a name:port pair\n * @param[out] server_name_out - server name out, does not include port\n * @param[out] port - port number out\n *\n * @return\tstring\n * @retval\tserver name\tsuccess\n * @retval\tNULL\t\terror\n *\n */\nchar *\nPBS_get_server(const char *server_id_in, char *server_name_out, unsigned int *port)\n{\n\tchar *pc;\n\tunsigned int dflt_port = pbs_conf.batch_service_port;\n\tchar *p;\n\n\tserver_name_out[0] = '\\0';\n\n\t/* first, get the \"net.address[:port]\" into 'server_name' */\n\n\tif ((server_id_in == NULL) || (*server_id_in == '\\0')) {\n\t\tif ((p = pbs_default()) == NULL)\n\t\t\treturn NULL;\n\t\tpbs_strncpy(server_name_out, p, PBS_MAXSERVERNAME);\n\t} else {\n\t\tpbs_strncpy(server_name_out, server_id_in, PBS_MAXSERVERNAME);\n\t}\n\n\t/* now parse out the parts from 'server_name_out' */\n\n\tif ((pc = strchr(server_name_out, (int) ':')) != NULL) {\n\t\t/* got a port number */\n\t\t*pc++ = '\\0';\n\t\t*port = atoi(pc);\n\t} else {\n\t\t*port = dflt_port;\n\t}\n\n\treturn server_name_out;\n}\n"
  },
  {
    "path": "src/lib/Libifl/int_cred.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tint_cred.c\n *\n * @brief\n * send job credentials to the mom.\n * @note\n * This code is not meant to be used by a persistent process.\n * If an error occurs, not all the allocated structures are freed.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"libpbs.h\"\n#include \"dis.h\"\n#include \"ticket.h\"\n#include \"net_connect.h\"\n#include \"tpp.h\"\n#include \"attribute.h\"\n#include \"batch_request.h\"\n\n/**\n * @brief\n *\t encode a Job Credential Batch Request\n *\n * @param[in] c - socket descriptor\n * @param[in] credid - credential id (e.g. 
principal)\n * @param[in] jobid - job id\n * @param[in] cred_type - credential type\n * @param[in] data - credentials\n * @param[in] validity - credential validity\n * @param[in] prot - PROT_TCP or PROT_TPP\n * @param[in] msgid - msg id\n *\n * @return\tint\n * @retval\t0\t\tsuccess\n * @retval\t!0(pbse error)\terror\n *\n */\nint\nPBSD_cred(int c, char *credid, char *jobid, int cred_type, char *data, long validity, int prot, char **msgid)\n{\n\tint rc;\n\n\tif (prot == PROT_TCP) {\n\t\tDIS_tcp_funcs();\n\t} else {\n\t\tif ((rc = is_compose_cmd(c, IS_CMD, msgid)) != DIS_SUCCESS)\n\t\t\treturn rc;\n\t}\n\n\tif ((rc = encode_DIS_ReqHdr(c, PBS_BATCH_Cred, pbs_current_user)) ||\n\t    (rc = encode_DIS_Cred(c, jobid, credid, cred_type, data, strlen(data), validity)) ||\n\t    (rc = encode_DIS_ReqExtend(c, NULL))) {\n\t\tif (prot == PROT_TCP) {\n\t\t\tif (set_conn_errtxt(c, dis_emsg[rc]) != 0)\n\t\t\t\treturn (pbs_errno = PBSE_SYSTEM);\n\t\t}\n\t\treturn (pbs_errno = PBSE_PROTOCOL);\n\t}\n\n\tif (dis_flush(c)) {\n\t\treturn (pbs_errno = PBSE_PROTOCOL);\n\t}\n\n\treturn PBSE_NONE;\n}\n"
  },
  {
    "path": "src/lib/Libifl/int_hook.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include <unistd.h>\n#include <stdio.h>\n#include <fcntl.h>\n#include \"portability.h\"\n#include \"libpbs.h\"\n#include \"dis.h\"\n#include \"net_connect.h\"\n#include \"tpp.h\"\n\n/**\n * @file\tint_hook.c\n */\n/**\n *\n * @brief\n *\tSend a chunk of data (buf) of size 'len', sequence 'seq'  associated\n *\twith the 'hook_filename', over the connection handle 'c'.\n *\n * @param[in]\tc - connection channel\n * @param[in]   reqtype - request type\n * @param[in] \tseq - sequence of a block of data (0,1,...)\n * @param[in] \tbuf - a block of data\n * @param[in] \tlen - size of buf\n * @param[in]\thook_filename - hook filename\n * @param[in]   prot - PROT_TCP or PROT_TPP\n * @param[in]   msgid - msg\n *\n * @return \tint\n * @retval\t0 for success\n * @retval\tnon-zero otherwise.\n */\nstatic int\nPBSD_hookbuf(int c, int reqtype, int seq, char *buf, int len, char *hook_filename, int prot, char **msgid)\n{\n\tstruct batch_reply *reply;\n\tint rc;\n\n\tif (prot == PROT_TCP) {\n\t\tDIS_tcp_funcs();\n\t} else {\n\t\tif ((rc = is_compose_cmd(c, IS_CMD, msgid)) != DIS_SUCCESS)\n\t\t\treturn rc;\n\t}\n\n\tif ((hook_filename == NULL) || (hook_filename[0] == '\\0'))\n\t\treturn (pbs_errno = PBSE_PROTOCOL);\n\n\tif ((rc = encode_DIS_ReqHdr(c, reqtype, 
pbs_current_user)) ||\n\t    (rc = encode_DIS_CopyHookFile(c, seq, buf, len,\n\t\t\t\t\t  hook_filename)) ||\n\t    (rc = encode_DIS_ReqExtend(c, NULL))) {\n\n\t\tif (prot == PROT_TCP) {\n\t\t\tif (set_conn_errtxt(c, dis_emsg[rc]) != 0)\n\t\t\t\treturn (pbs_errno = PBSE_SYSTEM);\n\t\t}\n\t\treturn (pbs_errno = PBSE_PROTOCOL);\n\t}\n\n\tif (prot == PROT_TPP) {\n\t\tpbs_errno = PBSE_NONE;\n\t\tif (dis_flush(c))\n\t\t\tpbs_errno = PBSE_PROTOCOL;\n\t\treturn pbs_errno;\n\t}\n\n\tif (dis_flush(c)) {\n\t\treturn (pbs_errno = PBSE_PROTOCOL);\n\t}\n\n\t/* read reply */\n\treply = PBSD_rdrpy(c);\n\tPBSD_FreeReply(reply);\n\n\treturn get_conn_errno(c);\n}\n\n/**\n *\n * @brief\n *\tCopy the contents of 'hook_filepath' over the network connection\n *\thandle 'c'.\n *\n * @param[in]\tc - connection channel\n * @param[in]\thook_filepath - local full file pathname\n * @param[in]   prot - PROT_TCP or PROT_TPP\n * @param[in]   msgid - msg\n *\n * @return int\n * @retval\t0 for success\n * @retval\t-2 for success, no hookfile or empty hookfile\n * @retval\tnon-zero otherwise.\n */\nint\nPBSD_copyhookfile(int c, char *hook_filepath, int prot, char **msgid)\n{\n\tint i;\n\tint fd;\n\tint cc;\n\tint rc = -2;\n\tchar s_buf[SCRIPT_CHUNK_Z];\n\tchar *p;\n\tchar hook_file[MAXPATHLEN + 1];\n\n\tif ((fd = open(hook_filepath, O_RDONLY, 0)) < 0) {\n\t\tif (prot == PROT_TPP)\n\t\t\treturn (-2); /* ok, if nothing to copy */\n\t\telse\n\t\t\treturn 0;\n\t}\n\n\t/* set hook_file to the relative path of 'hook_filepath' */\n\tstrncpy(hook_file, hook_filepath, MAXPATHLEN);\n\tif ((p = strrchr(hook_filepath, '/')) != NULL) {\n\t\tstrncpy(hook_file, p + 1, MAXPATHLEN);\n\t}\n\thook_file[MAXPATHLEN] = '\\0'; /* strncpy() does not guarantee null termination */\n\n\ti = 0;\n\tcc = read(fd, s_buf, SCRIPT_CHUNK_Z);\n\n\twhile ((cc > 0) &&\n\t       ((rc = PBSD_hookbuf(c, PBS_BATCH_CopyHookFile, i, s_buf, cc, hook_file, prot, msgid)) == 0)) {\n\t\ti++;\n\t\tcc = read(fd, s_buf, SCRIPT_CHUNK_Z);\n\t}\n\n\tclose(fd);\n\tif (cc < 0) /* read failed */\n\t\treturn (-1);\n\n\treturn rc; /* rc has 
the return value from PBSD_hookbuf */\n}\n\n/**\n *\n * @brief\n *\tSend a Delete Hook file request of 'hook_filename' over the network\n *\tchannel 'c'.\n *\n * @param[in]\tc - connection channel\n * @param[in]\thook_filename - hook filename\n * @param[in] \tprot - PROT_TCP or PROT_TPP\n * @param[in] \tmsgid - msg\n *\n * @return \tint\n * @retval\t0 for success\n * @retval\tnon-zero otherwise.\n */\nint\nPBSD_delhookfile(int c, char *hook_filename, int prot, char **msgid)\n{\n\tstruct batch_reply *reply;\n\tint rc;\n\n\tif (prot == PROT_TCP) {\n\t\tDIS_tcp_funcs();\n\t} else {\n\t\tif ((rc = is_compose_cmd(c, IS_CMD, msgid)) != DIS_SUCCESS)\n\t\t\treturn rc;\n\t}\n\n\tif ((hook_filename == NULL) || (hook_filename[0] == '\\0'))\n\t\treturn (pbs_errno = PBSE_PROTOCOL);\n\n\tif ((rc = encode_DIS_ReqHdr(c, PBS_BATCH_DelHookFile, pbs_current_user)) ||\n\t    (rc = encode_DIS_DelHookFile(c, hook_filename)) ||\n\t    (rc = encode_DIS_ReqExtend(c, NULL))) {\n\t\tif (prot == PROT_TCP) {\n\t\t\tif (set_conn_errtxt(c, dis_emsg[rc]) != 0)\n\t\t\t\treturn (pbs_errno = PBSE_SYSTEM);\n\t\t}\n\t\treturn (pbs_errno = PBSE_PROTOCOL);\n\t}\n\n\tif (prot == PROT_TPP) {\n\t\tpbs_errno = PBSE_NONE;\n\t\tif (dis_flush(c))\n\t\t\tpbs_errno = PBSE_PROTOCOL;\n\t\treturn pbs_errno;\n\t}\n\n\tif (dis_flush(c)) {\n\t\treturn (pbs_errno = PBSE_PROTOCOL);\n\t}\n\n\t/* read reply */\n\treply = PBSD_rdrpy(c);\n\tPBSD_FreeReply(reply);\n\n\treturn get_conn_errno(c);\n}\n"
  },
  {
    "path": "src/lib/Libifl/int_jcred.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tint_jcred.c\n *\n * @brief\n * send job credentials to the server.\n * @note\n * This code is not meant to be used by a persistent process.\n * If an error occurs, not all the allocated structures are freed.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include <stdio.h>\n#include <assert.h>\n#include \"libpbs.h\"\n#include \"dis.h\"\n#include \"ticket.h\"\n#include \"net_connect.h\"\n#include \"tpp.h\"\n\n/**\n * @brief\n *\t encode a Job Credential Batch Request\n *\n * @param[in] c - socket descriptor\n * @param[in] type - credential type\n * @param[in] buf - credentials\n * @param[in] len - credential length\n * @param[in] prot - PROT_TCP or PROT_TPP\n * @param[in] msgid - msg id\n *\n * @return\tint\n * @retval\t0\t\tsuccess\n * @retval\t!0(pbse error)\terror\n *\n */\nint\nPBSD_jcred(int c, int type, char *buf, int len, int prot, char **msgid)\n{\n\tint rc;\n\tstruct batch_reply *reply = NULL;\n\n\tif (prot == PROT_TCP) {\n\t\tDIS_tcp_funcs();\n\t} else {\n\t\tif ((rc = is_compose_cmd(c, IS_CMD, msgid)) != DIS_SUCCESS)\n\t\t\treturn rc;\n\t}\n\n\tif ((rc = encode_DIS_ReqHdr(c, PBS_BATCH_JobCred, pbs_current_user)) ||\n\t    (rc = encode_DIS_JobCred(c, type, buf, len)) ||\n\t    (rc = encode_DIS_ReqExtend(c, NULL))) {\n\t\tif (prot == PROT_TCP) {\n\t\t\tif 
(set_conn_errtxt(c, dis_emsg[rc]) != 0)\n\t\t\t\treturn (pbs_errno = PBSE_SYSTEM);\n\t\t}\n\t\treturn (pbs_errno = PBSE_PROTOCOL);\n\t}\n\n\tif (prot == PROT_TPP) {\n\t\tpbs_errno = PBSE_NONE;\n\t\tif (dis_flush(c))\n\t\t\tpbs_errno = PBSE_PROTOCOL;\n\n\t\treturn (pbs_errno);\n\t}\n\n\tif (dis_flush(c)) {\n\t\treturn (pbs_errno = PBSE_PROTOCOL);\n\t}\n\n\treply = PBSD_rdrpy(c);\n\n\tPBSD_FreeReply(reply);\n\n\treturn get_conn_errno(c);\n}\n"
  },
  {
    "path": "src/lib/Libifl/int_manage2.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tint_manage2.c\n * @brief\n * \tThe send-request side of the PBS_manager function\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include <stdio.h>\n#include \"libpbs.h\"\n#include \"dis.h\"\n#include \"net_connect.h\"\n#include \"tpp.h\"\n\n/**\n * @brief\n *      -encode a Manager Batch Request\n *\n * @par Functionality:\n *              This request is used for most operations where an object is being\n *              created, deleted, or altered.\n *\n * @param[in] c - socket descriptor\n * @param[in] function - batch request type\n * @param[in] command - command type\n * @param[in] objtype - object type\n * @param[in] objname - object name\n * @param[in] aoplp - pointer to attropl structure(list)\n * @param[in] extend - extend string for the request\n * @param[in] prot - PROT_TCP or PROT_TPP\n * @param[in] msgid - message id\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\nint\nPBSD_mgr_put(int c, int function, int command, int objtype, const char *objname, struct attropl *aoplp, const char *extend, int prot, char **msgid)\n{\n\tint rc;\n\n\tif (prot == PROT_TCP) {\n\t\tDIS_tcp_funcs();\n\t} else {\n\t\tif ((rc = is_compose_cmd(c, IS_CMD, msgid)) != DIS_SUCCESS)\n\t\t\treturn rc;\n\t}\n\n\tif ((rc = encode_DIS_ReqHdr(c, function, pbs_current_user)) ||\n\t    (rc = encode_DIS_Manage(c, command, 
objtype, objname, aoplp)) ||\n\t    (rc = encode_DIS_ReqExtend(c, extend))) {\n\t\tif (prot == PROT_TCP) {\n\t\t\tif (set_conn_errtxt(c, dis_emsg[rc]) != 0)\n\t\t\t\treturn (pbs_errno = PBSE_SYSTEM);\n\t\t}\n\t\treturn (pbs_errno = PBSE_PROTOCOL);\n\t}\n\n\tif (prot == PROT_TPP) {\n\t\tpbs_errno = PBSE_NONE;\n\t\tif (dis_flush(c))\n\t\t\tpbs_errno = PBSE_PROTOCOL;\n\t\treturn pbs_errno;\n\t}\n\n\tif (dis_flush(c)) {\n\t\treturn (pbs_errno = PBSE_PROTOCOL);\n\t}\n\treturn 0;\n}\n"
  },
  {
    "path": "src/lib/Libifl/int_manager.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tint_manager.c\n *\n * @brief\n * The function that underlies most of the job manipulation\n * routines...\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include \"libpbs.h\"\n#include \"pbs_ecl.h\"\n#include \"cmds.h\"\n\n/**\n * @brief\n *\t-send manager request and read reply to a possibly multi-svr connection\n *\n * @param[in] c - communication handle\n * @param[in] rq_type - req type\n * @param[in] command - command\n * @param[in] objtype - object type\n * @param[in] objname - object name\n * @param[in] aoplp - attribute list\n * @param[in] extend - extend string for req\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t!0\terror\n *\n */\nint\nPBSD_manager(int c, int rq_type, int command, int objtype, const char *objname, struct attropl *aoplp, const char *extend)\n{\n\tstruct batch_reply *reply;\n\tint rc;\n\n\t/* initialize the thread context data, if not initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn pbs_errno;\n\n\t/* verify the object name if creating a new one */\n\tif (command == MGR_CMD_CREATE)\n\t\tif (pbs_verify_object_name(objtype, objname) != 0)\n\t\t\treturn pbs_errno;\n\n\t/* now verify the attributes, if verification is enabled */\n\tif ((pbs_verify_attributes(c, rq_type, objtype, command, aoplp)) != 
0)\n\t\treturn pbs_errno;\n\n\t/* lock pthread mutex here for this connection */\n\t/* blocking call, waits for mutex release */\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\t/* send the manage request */\n\trc = PBSD_mgr_put(c, rq_type, command, objtype, objname, aoplp, extend, PROT_TCP, NULL);\n\tif (rc) {\n\t\tpbs_client_thread_unlock_connection(c);\n\t\treturn rc;\n\t}\n\n\t/* read reply from stream into presentation element */\n\treply = PBSD_rdrpy(c);\n\tPBSD_FreeReply(reply);\n\n\trc = get_conn_errno(c);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\treturn rc;\n}\n"
  },
  {
    "path": "src/lib/Libifl/int_modify_resv.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include <unistd.h>\n#include <stdio.h>\n#include <fcntl.h>\n#include \"portability.h\"\n#include \"libpbs.h\"\n#include \"dis.h\"\n#include \"tpp.h\"\n\n/**\n * @brief Sends the Modify Reservation request\n *\n * @param[in] connect - socket descriptor for the connection.\n * @param[in] resv_id - Reservation Identifier\n * @param[in] attrib  - list of attributes to be modified.\n * @param[in] extend  - extended options\n *\n * @return - reply from server on no error.\n * @return - NULL on error.\n */\n\nchar *\nPBSD_modify_resv(int connect, const char *resv_id, struct attropl *attrib, const char *extend)\n{\n\tstruct batch_reply *reply = NULL;\n\tint rc = -1;\n\tchar *ret = NULL;\n\n\t/* initialize the thread context data, if not initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn NULL;\n\n\t/*\n\t * lock pthread mutex here for this connection\n\t * blocking call, waits for mutex release\n\t */\n\tif (pbs_client_thread_lock_connection(connect) != 0)\n\t\treturn NULL;\n\n\tDIS_tcp_funcs();\n\n\t/* first, set up the body of the Modify Reservation request */\n\n\tif ((rc = encode_DIS_ReqHdr(connect, PBS_BATCH_ModifyResv, pbs_current_user)) ||\n\t    (rc = encode_DIS_ModifyResv(connect, resv_id, attrib)) ||\n\t    (rc = 
encode_DIS_ReqExtend(connect, extend))) {\n\t\tif (set_conn_errtxt(connect, dis_emsg[rc]) != 0) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\tpbs_client_thread_unlock_connection(connect);\n\t\t\treturn NULL;\n\t\t}\n\t\tif (pbs_errno == PBSE_PROTOCOL) {\n\t\t\tpbs_client_thread_unlock_connection(connect);\n\t\t\treturn NULL;\n\t\t}\n\t}\n\tif (dis_flush(connect)) {\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t\tpbs_client_thread_unlock_connection(connect);\n\t\treturn NULL;\n\t}\n\n\treply = PBSD_rdrpy(connect);\n\tif (reply == NULL)\n\t\tpbs_errno = PBSE_PROTOCOL;\n\telse {\n\t\tif ((reply->brp_code == PBSE_NONE) && (reply->brp_un.brp_txt.brp_str)) {\n\t\t\tret = strdup(reply->brp_un.brp_txt.brp_str);\n\t\t\tif (!ret)\n\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t}\n\t\tPBSD_FreeReply(reply);\n\t}\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(connect) != 0)\n\t\treturn NULL;\n\n\treturn ret;\n}\n"
  },
  {
    "path": "src/lib/Libifl/int_msg2.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tint_msg2.c\n * @brief\n *\tsend the MessageJob request\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include <stdio.h>\n#include \"libpbs.h\"\n#include \"dis.h\"\n#include \"net_connect.h\"\n#include \"tpp.h\"\n\n/**\n * @brief\n *\t-PBS_msg_put Send the MessageJob request, does not read the reply.\n *\n * @param[in] c - socket descriptor\n * @param[in] jobid - job identifier\n * @param[in] fileopt - file type\n * @param[in] msg - msg to be sent\n * @param[in] extend - extension string for req encode\n * @param[in] prot - PROT_TCP or PROT_TPP\n * @param[in] msgid - message id\n *\n * @return      int\n * @retval      0               Success\n * @retval      pbs_error(!0)   error\n */\nint\nPBSD_msg_put(int c, const char *jobid, int fileopt, const char *msg, const char *extend, int prot, char **msgid)\n{\n\tint rc = 0;\n\n\tif (prot == PROT_TCP) {\n\t\tDIS_tcp_funcs();\n\t} else {\n\t\tif ((rc = is_compose_cmd(c, IS_CMD, msgid)) != DIS_SUCCESS)\n\t\t\treturn rc;\n\t}\n\n\tif ((rc = encode_DIS_ReqHdr(c, PBS_BATCH_MessJob, pbs_current_user)) ||\n\t    (rc = encode_DIS_MessageJob(c, jobid, fileopt, msg)) ||\n\t    (rc = encode_DIS_ReqExtend(c, extend))) {\n\t\treturn (pbs_errno = PBSE_PROTOCOL);\n\t}\n\n\tif (dis_flush(c)) {\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t\trc = 
pbs_errno;\n\t}\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\t-Send the PySpawn request, does not read the reply.\n *\n * @param[in] c - socket descriptor\n * @param[in] jobid - job identifier\n * @param[in] argv - pointer to arguments\n * @param[in] envp - pointer to environment vars\n * @param[in] prot - PROT_TCP or PROT_TPP\n * @param[in] msgid - message id\n *\n * @return\tint\n * @retval\t0\t\tSuccess\n * @retval\tpbs_error(!0)\terror\n */\nint\nPBSD_py_spawn_put(int c, char *jobid, char **argv, char **envp, int prot, char **msgid)\n{\n\tint rc = 0;\n\n\tif (prot == PROT_TCP) {\n\t\tDIS_tcp_funcs();\n\t} else {\n\t\tif ((rc = is_compose_cmd(c, IS_CMD, msgid)) != DIS_SUCCESS)\n\t\t\treturn rc;\n\t}\n\n\tif ((rc = encode_DIS_ReqHdr(c, PBS_BATCH_PySpawn, pbs_current_user)) ||\n\t    (rc = encode_DIS_PySpawn(c, jobid, argv, envp)) ||\n\t    (rc = encode_DIS_ReqExtend(c, NULL))) {\n\t\treturn (pbs_errno = PBSE_PROTOCOL);\n\t}\n\n\tif (dis_flush(c)) {\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t\trc = pbs_errno;\n\t}\n\n\treturn rc;\n}\n\n/*\n *\tPBS_relnodes_put.c\n *\n *\tSend the RelnodesJob request, does not read the reply.\n */\nint\nPBSD_relnodes_put(int c, const char *jobid, const char *node_list, const char *extend, int prot, char **msgid)\n{\n\tint rc = 0;\n\n\tif (prot == PROT_TCP) {\n\t\tDIS_tcp_funcs();\n\t} else {\n\t\tif ((rc = is_compose_cmd(c, IS_CMD, msgid)) != DIS_SUCCESS)\n\t\t\treturn rc;\n\t}\n\n\tif ((rc = encode_DIS_ReqHdr(c, PBS_BATCH_RelnodesJob, pbs_current_user)) ||\n\t    (rc = encode_DIS_RelnodesJob(c, jobid, node_list)) ||\n\t    (rc = encode_DIS_ReqExtend(c, extend))) {\n\t\treturn (pbs_errno = PBSE_PROTOCOL);\n\t}\n\n\tif (dis_flush(c)) {\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t\trc = pbs_errno;\n\t}\n\n\treturn rc;\n}\n"
  },
  {
    "path": "src/lib/Libifl/int_rdrpy.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tint_rdrpy.c\n * @brief\n * Read the reply to a batch request.\n * A reply structure is allocated and cleared.\n * The reply is read and decoded into the structure.\n * The reply structure is returned.\n *\n * The caller MUST free the reply structure by calling\n * PBSD_FreeReply().\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <time.h>\n#include \"libpbs.h\"\n#include \"dis.h\"\n#include \"tpp.h\"\n\n/**\n * @brief read a batch reply from the given socket\n *\n * @param[in] sock - The socket fd to read from\n * @param[out] rc  - Return DIS error code\n * @param[in] prot - protocol type\n *\n * @return Batch reply structure\n * @retval  !NULL - Success\n * @retval   NULL - Failure\n *\n */\nstruct batch_reply *\nPBSD_rdrpy_sock(int sock, int *rc, int prot)\n{\n\tstruct batch_reply *reply;\n\ttime_t old_timeout;\n\n\t*rc = DIS_SUCCESS;\n\t/* clear any prior error message */\n\tif ((reply = (struct batch_reply *) calloc(1, sizeof(struct batch_reply))) == 0) {\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn NULL;\n\t}\n\n\tif (prot == PROT_TCP) {\n\t\tDIS_tcp_funcs();\n\t\told_timeout = pbs_tcp_timeout;\n\t\tif (pbs_tcp_timeout < PBS_DIS_TCP_TIMEOUT_LONG)\n\t\t\tpbs_tcp_timeout = PBS_DIS_TCP_TIMEOUT_LONG;\n\t} 
else\n\t\tDIS_tpp_funcs();\n\n\tif ((*rc = decode_DIS_replyCmd(sock, reply, prot)) != 0) {\n\t\t(void) free(reply);\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t\treturn NULL;\n\t}\n\n\tdis_reset_buf(sock, DIS_READ_BUF);\n\tif (prot == PROT_TCP)\n\t\tpbs_tcp_timeout = old_timeout;\n\n\tpbs_errno = reply->brp_code;\n\treturn reply;\n}\n\n/**\n * @brief read a batch reply from the given connection index\n *\n * @param[in] c - The connection index to read from\n *\n * @return DIS error code\n * @retval   DIS_SUCCESS  - Success\n * @retval  !DIS_SUCCESS  - Failure\n */\nstruct batch_reply *\nPBSD_rdrpy(int c)\n{\n\tint rc;\n\tstruct batch_reply *reply;\n\n\t/* clear any prior error message */\n\n\tif (set_conn_errtxt(c, NULL) != 0) {\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn NULL;\n\t}\n\t/* PBSD_rdrpy() only handles TCP, hence passing PROT_TCP as prot */\n\treply = PBSD_rdrpy_sock(c, &rc, PROT_TCP);\n\tif (reply == NULL) {\n\t\tif (set_conn_errno(c, PBSE_PROTOCOL) != 0) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\treturn NULL;\n\t\t}\n\t\tif (set_conn_errtxt(c, dis_emsg[rc]) != 0) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\treturn NULL;\n\t\t}\n\t\treturn NULL;\n\t}\n\tif (set_conn_errno(c, reply->brp_code) != 0) {\n\t\tpbs_errno = reply->brp_code;\n\t\treturn NULL;\n\t}\n\tpbs_errno = reply->brp_code;\n\n\tif (reply->brp_choice == BATCH_REPLY_CHOICE_Text) {\n\t\tif (reply->brp_un.brp_txt.brp_str != NULL) {\n\t\t\tif (set_conn_errtxt(c, reply->brp_un.brp_txt.brp_str) != 0) {\n\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t}\n\t}\n\treturn reply;\n}\n\n/*\n * PBS_FreeReply - Free a batch_reply structure allocated in PBS_rdrpy()\n *\n *\tAny additional allocated substructures pointed to from the\n *\treply structure are freed, then the base struture itself is gone.\n */\n\nvoid\nPBSD_FreeReply(struct batch_reply *reply)\n{\n\tstruct brp_select *psel;\n\tstruct brp_select *pselx;\n\n\tif (reply == 0)\n\t\treturn;\n\tif (reply->brp_choice == BATCH_REPLY_CHOICE_Text) 
{\n\t\tif (reply->brp_un.brp_txt.brp_str) {\n\t\t\tfree(reply->brp_un.brp_txt.brp_str);\n\t\t\treply->brp_un.brp_txt.brp_str = NULL;\n\t\t\treply->brp_un.brp_txt.brp_txtlen = 0;\n\t\t}\n\n\t} else if (reply->brp_choice == BATCH_REPLY_CHOICE_Select) {\n\t\tpsel = reply->brp_un.brp_select;\n\t\twhile (psel) {\n\t\t\tpselx = psel->brp_next;\n\t\t\tfree(psel);\n\t\t\tpsel = pselx;\n\t\t}\n\n\t} else if (reply->brp_choice == BATCH_REPLY_CHOICE_Status) {\n\t\tif (reply->brp_un.brp_statc)\n\t\t\tpbs_statfree(reply->brp_un.brp_statc);\n\n\t} else if (reply->brp_choice == BATCH_REPLY_CHOICE_Delete) {\n\t\tif (reply->brp_un.brp_deletejoblist.brp_delstatc)\n\t\t\tpbs_delstatfree(reply->brp_un.brp_deletejoblist.brp_delstatc);\n\n\t} else if (reply->brp_choice == BATCH_REPLY_CHOICE_RescQuery) {\n\t\tfree(reply->brp_un.brp_rescq.brq_avail);\n\t\tfree(reply->brp_un.brp_rescq.brq_alloc);\n\t\tfree(reply->brp_un.brp_rescq.brq_resvd);\n\t\tfree(reply->brp_un.brp_rescq.brq_down);\n\t} else if (reply->brp_choice == BATCH_REPLY_CHOICE_PreemptJobs) {\n\t\tfree(reply->brp_un.brp_preempt_jobs.ppj_list);\n\t}\n\n\tfree(reply);\n}\n"
  },
  {
    "path": "src/lib/Libifl/int_sig2.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tint_sig2.c\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include <stdio.h>\n#include \"libpbs.h\"\n#include \"dis.h\"\n#include \"net_connect.h\"\n#include \"tpp.h\"\n\n/**\n * @brief\n *\t-PBS_sig_put.c Send the Signal Job Batch Request\n *\n * @param[in] c - socket descriptor\n * @param[in] jobid - job identifier\n * @param[in] signal - signal name to send\n * @param[in] extend - extension string for req encode\n * @param[in] prot - PROT_TCP or PROT_TPP\n * @param[in] msgid - message id\n *\n * @return      int\n * @retval      0               Success\n * @retval      pbs_error(!0)   error\n */\nint\nPBSD_sig_put(int c, const char *jobid, const char *signal, const char *extend, int prot, char **msgid)\n{\n\tint rc = 0;\n\n\tif (prot == PROT_TCP) {\n\t\tDIS_tcp_funcs();\n\t} else {\n\t\tif ((rc = is_compose_cmd(c, IS_CMD, msgid)) != DIS_SUCCESS)\n\t\t\treturn rc;\n\t}\n\n\tif ((rc = encode_DIS_ReqHdr(c, PBS_BATCH_SignalJob, pbs_current_user)) ||\n\t    (rc = encode_DIS_SignalJob(c, jobid, signal)) ||\n\t    (rc = encode_DIS_ReqExtend(c, extend))) {\n\t\tif (prot == PROT_TCP) {\n\t\t\tif (set_conn_errtxt(c, dis_emsg[rc]) != 0) {\n\t\t\t\treturn (pbs_errno = PBSE_SYSTEM);\n\t\t\t}\n\t\t}\n\t\treturn (pbs_errno = PBSE_PROTOCOL);\n\t}\n\n\tif 
(dis_flush(c)) {\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t\trc = pbs_errno;\n\t}\n\treturn rc;\n}\n"
  },
  {
    "path": "src/lib/Libifl/int_status.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/** @file\tint_status.c\n * @brief\n * The function that underlies all the status requests\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include <stdio.h>\n#include <ctype.h>\n#include \"libpbs.h\"\n#include \"pbs_ecl.h\"\n#include \"libutil.h\"\n#include \"attribute.h\"\n#include \"cmds.h\"\n#include \"pbs_internal.h\"\n\n/**\n * @brief\n *\t-wrapper function for PBSD_status_put which sends\n *\tstatus batch request\n *\n * @param[in] c - socket descriptor\n * @param[in] function - request type\n * @param[in] objid - object id\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extension string for req encode\n *\n * @return\tstructure handle\n * @retval \tpointer to batch status on SUCCESS\n * @retval \tNULL on failure\n *\n */\nstruct batch_status *\nPBSD_status(int c, int function, const char *objid, struct attrl *attrib, const char *extend)\n{\n\tint rc;\n\n\t/* send the status request */\n\n\tif (objid == NULL)\n\t\tobjid = \"\"; /* set to null string for encoding */\n\n\trc = PBSD_status_put(c, function, objid, attrib, extend, PROT_TCP, NULL);\n\tif (rc) {\n\t\treturn NULL;\n\t}\n\n\t/* get the status reply */\n\treturn PBSD_status_get(c);\n}\n\n/**\n * @brief\n *\tReturns pointer to status record\n *\n * @param[in] c - connection socket\n *\n * @return returns a pointer to a batch_status structure\n * @retval pointer to batch status on SUCCESS\n * @retval NULL on failure\n */\nstruct batch_status *\nPBSD_status_get(int c)\n{\n\tstruct batch_status *rbsp = NULL;\n\tstruct batch_reply *reply;\n\n\t/* read reply from stream into presentation element */\n\n\treply = PBSD_rdrpy(c);\n\tif (reply == NULL) {\n\t\tif (pbs_errno == PBSE_NONE)\n\t\t\tpbs_errno = PBSE_PROTOCOL;\n\t\tgoto end;\n\t} else if (reply->brp_choice != BATCH_REPLY_CHOICE_NULL &&\n\t\t   reply->brp_choice != BATCH_REPLY_CHOICE_Text &&\n\t\t   reply->brp_choice != BATCH_REPLY_CHOICE_Status) {\n\t\tif (pbs_errno == PBSE_NONE)\n\t\t\tpbs_errno = PBSE_PROTOCOL;\n\t\tgoto end;\n\t} else if (get_conn_errno(c) == 0) {\n\t\trbsp = reply->brp_un.brp_statc;\n\t\treply->brp_un.brp_statc = NULL;\n\t}\n\nend:\n\tPBSD_FreeReply(reply);\n\treturn rbsp;\n}\n
  },
  {
    "path": "src/lib/Libifl/int_status2.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tint_status2.c\n * @brief\n * The function that sends the general batch status request\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include <stdio.h>\n#include \"libpbs.h\"\n#include \"dis.h\"\n#include \"net_connect.h\"\n#include \"tpp.h\"\n\n/**\n * @brief\n *\tsend status job batch request\n *\n * @param[in] c - socket descriptor\n * @param[in] function - request type\n * @param[in] id - object id\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extension string for req encode\n * @param[in] prot - PROT_TCP or PROT_TPP\n * @param[in] msgid - message id\n *\n * @return      int\n * @retval      0               Success\n * @retval      pbs_error(!0)   error\n */\nint\nPBSD_status_put(int c, int function, const char *id, struct attrl *attrib, const char *extend, int prot, char **msgid)\n{\n\tint rc = 0;\n\n\tif (prot == PROT_TCP) {\n\t\tDIS_tcp_funcs();\n\t} else {\n\t\tif ((rc = is_compose_cmd(c, IS_CMD, msgid)) != DIS_SUCCESS)\n\t\t\treturn rc;\n\t}\n\n\tif ((rc = encode_DIS_ReqHdr(c, function, pbs_current_user)) ||\n\t    (rc = encode_DIS_Status(c, id, attrib)) ||\n\t    (rc = encode_DIS_ReqExtend(c, extend))) {\n\t\tif (prot == PROT_TCP) {\n\t\t\tif (set_conn_errtxt(c, dis_emsg[rc]) != 0) {\n\t\t\t\treturn (pbs_errno = 
PBSE_SYSTEM);\n\t\t\t}\n\t\t}\n\t\treturn (pbs_errno = PBSE_PROTOCOL);\n\t}\n\n\tif (dis_flush(c)) {\n\t\treturn (pbs_errno = PBSE_PROTOCOL);\n\t}\n\n\treturn 0;\n}\n"
  },
  {
    "path": "src/lib/Libifl/int_submit.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tint_submit.c\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include <unistd.h>\n#include <stdio.h>\n#include <fcntl.h>\n#include <stdint.h>\n\n#include \"portability.h\"\n#include \"libpbs.h\"\n#include \"dis.h\"\n#include \"tpp.h\"\n#include \"net_connect.h\"\n\n/**\n * @brief - Start a standard inter-server message.\n *\n * @param[in] stream  - The TPP stream on which to send message\n * @param[in] command - The message type (cmd) to encode\n *\n * @return error code\n * @retval  DIS_SUCCESS - Success\n * @retval !DIS_SUCCESS - Failure\n */\nint\nis_compose(int stream, int command)\n{\n\tint ret;\n\n\tif (stream < 0)\n\t\treturn DIS_EOF;\n\tDIS_tpp_funcs();\n\n\tret = diswsi(stream, IS_PROTOCOL);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto done;\n\tret = diswsi(stream, IS_PROTOCOL_VER);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto done;\n\tret = diswsi(stream, command);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto done;\n\n\treturn DIS_SUCCESS;\n\ndone:\n\treturn ret;\n}\n\n/**\n * @brief - Get a unique id each time this function is called\n *\n * @par NOTE:\n *\tThis id is used as a message id in every command sent out from\n * \tthis daemon. 
This is done to match the replies that we receive later\n * \tto their corresponding asynchronous command sends\n *\n * @param[out] id - The return msgid created\n *\n * @return error code\n * @retval  DIS_SUCCESS  - Success\n * @retval  DIS_NOMALLOC - Failure\n */\nint\nget_msgid(char **id)\n{\n\tchar msgid[MAXNAMLEN];\n\n\tstatic time_t last_time = -1;\n\tstatic int counter = 0;\n\ttime_t now = time(NULL);\n\n\tif (now != last_time) {\n\t\tcounter = 0;\n\t\tlast_time = now;\n\t} else {\n\t\tcounter++;\n\t}\n#ifdef WIN32\n\tsprintf(msgid, \"%ld:%d\", now, counter);\n#else\n\tsprintf(msgid, \"%ju:%d\", (uintmax_t) now, counter);\n#endif\n\tif ((*id = strdup(msgid)) == NULL)\n\t\treturn DIS_NOMALLOC;\n\n\treturn DIS_SUCCESS;\n}\n\n/**\n * @brief - Compose a command to be sent over TPP stream\n *\n * @par Functionality:\n *\tcalls is_compose to create the message header, get_msgid to\n * \tadd a msg id to the header (unless one is passed)\n *\n * @param[in] stream - Tpp stream to write to\n * @param[in] command - The command to encode\n * @param[in,out] ret_msgid - The msgid, if passed to this function, is\n *                            the msgid to be used for this message.\n *                            If msgid is not passed, then create a unique\n *                            msgid and set for the message, also return it\n *                            back to caller.\n *\n * @return error code\n * @retval  DIS_SUCCESS - Success\n * @retval !DIS_SUCCESS - Failure\n */\nint\nis_compose_cmd(int stream, int command, char **ret_msgid)\n{\n\tint ret;\n\tchar *temp_id = NULL;\n\n\tif ((ret = is_compose(stream, command)) != DIS_SUCCESS)\n\t\treturn ret;\n\n\t/* Create a temp msg id, when there is no buffer passed */\n\tif (ret_msgid == NULL) {\n\t\tif ((ret = get_msgid(&temp_id)) != 0)\n\t\t\treturn ret;\n\t} else if (*ret_msgid == NULL || *ret_msgid[0] == '\\0') /* buffer passed but NULL or empty id provided */\n\t\tif ((ret = get_msgid(ret_msgid)) != 0)\n\t\t\treturn ret;\n\n\tif ((ret = diswst(stream, 
ret_msgid ? *ret_msgid : temp_id)) != DIS_SUCCESS)\n\t\treturn ret;\n\n\tfree(temp_id);\n\n\treturn DIS_SUCCESS;\n}\n\n/**\n * @brief\n *\t-PBS_commit.c This function does the Commit sub-function of\n *\tthe Queue Job request.\n *\n * @param[in] c - socket fd\n * @param[in] jobid - job identifier\n * @param[in] prot - PROT_TCP or PROT_TPP\n * @param[in] msgid - message id\n * @param[in] extend - extend field, comma separated key value pair\n *\n * @return      int\n * @retval      0               success\n * @retval      !0(pbs_errno)   failure\n *\n */\nint\nPBSD_commit(int c, char *jobid, int prot, char **msgid, char *extend)\n{\n\tstruct batch_reply *reply;\n\tint rc;\n\n\tif (prot == PROT_TCP) {\n\t\tDIS_tcp_funcs();\n\t} else {\n\t\tif ((rc = is_compose_cmd(c, IS_CMD, msgid)) != DIS_SUCCESS)\n\t\t\treturn rc;\n\t}\n\n\tif ((rc = encode_DIS_ReqHdr(c, PBS_BATCH_Commit, pbs_current_user)) ||\n\t    (rc = encode_DIS_JobId(c, jobid)) ||\n\t    (rc = encode_DIS_ReqExtend(c, extend))) {\n\t\tif (prot == PROT_TCP) {\n\t\t\tif (set_conn_errtxt(c, dis_emsg[rc]) != 0) {\n\t\t\t\treturn (pbs_errno = PBSE_SYSTEM);\n\t\t\t}\n\t\t}\n\t\treturn (pbs_errno = PBSE_PROTOCOL);\n\t}\n\n\tif (prot == PROT_TPP) {\n\t\tpbs_errno = PBSE_NONE;\n\t\tif (dis_flush(c))\n\t\t\tpbs_errno = PBSE_PROTOCOL;\n\t\treturn pbs_errno;\n\t}\n\n\tif (dis_flush(c)) {\n\t\treturn (pbs_errno = PBSE_PROTOCOL);\n\t}\n\n\treply = PBSD_rdrpy(c);\n\n\tPBSD_FreeReply(reply);\n\n\treturn get_conn_errno(c);\n}\n\n/**\n * @brief\n *\t-PBS_scbuf.c Send a chunk of the job script to the server.\n *\tCalled by pbs_submit.  
The buffer length could be\n *\tzero; the server should handle that case...\n *\n * @param[in] c - connection handle\n * @param[in] reqtype - request type\n * @param[in] seq - file chunk sequence number\n * @param[in] buf - file chunk\n * @param[in] len - length of chunk\n * @param[in] jobid - job id (for types 1 and 2 only)\n * @param[in] which - standard file type (enum)\n * @param[in] prot - PROT_TCP or PROT_TPP\n * @param[in] msgid - message id\n *\n * @return      int\n * @retval      0               success\n * @retval      !0(pbs_errno)   failure\n *\n */\nstatic int\nPBSD_scbuf(int c, int reqtype, int seq, char *buf, int len, char *jobid, enum job_file which, int prot, char **msgid)\n{\n\tstruct batch_reply *reply;\n\tint rc;\n\n\tif (prot == PROT_TCP) {\n\t\tDIS_tcp_funcs();\n\t} else {\n\t\tif ((rc = is_compose_cmd(c, IS_CMD, msgid)) != DIS_SUCCESS)\n\t\t\treturn rc;\n\t}\n\n\tif (jobid == NULL)\n\t\tjobid = \"\"; /* use null string for null pointer */\n\n\tif ((rc = encode_DIS_ReqHdr(c, reqtype, pbs_current_user)) ||\n\t    (rc = encode_DIS_JobFile(c, seq, buf, len, jobid, which)) ||\n\t    (rc = encode_DIS_ReqExtend(c, NULL))) {\n\t\tif (prot == PROT_TCP) {\n\t\t\tif (set_conn_errtxt(c, dis_emsg[rc]) != 0) {\n\t\t\t\treturn (pbs_errno = PBSE_SYSTEM);\n\t\t\t}\n\t\t}\n\t\treturn (pbs_errno = PBSE_PROTOCOL);\n\t}\n\n\tif (prot == PROT_TPP) {\n\t\tpbs_errno = PBSE_NONE;\n\t\tif (dis_flush(c))\n\t\t\tpbs_errno = PBSE_PROTOCOL;\n\t\treturn pbs_errno;\n\t}\n\n\tif (dis_flush(c)) {\n\t\treturn (pbs_errno = PBSE_PROTOCOL);\n\t}\n\n\t/* read reply */\n\n\treply = PBSD_rdrpy(c);\n\n\tPBSD_FreeReply(reply);\n\n\treturn get_conn_errno(c);\n}\n\n/**\n * @brief\n *\t-The Job File function used to move files related to\n *\ta job between servers.\n *\t-- the function PBSD_scbuf is called repeatedly to\n *\ttransfer chunks of the script to the server.\n *\n * @param[in] c - connection handle\n * @param[in] script_file - job file\n * @param[in] prot - PROT_TCP or 
PROT_TPP\n * @param[in] msgid - message id\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t-1\tfailure\n *\n */\n\nint\nPBSD_jscript(int c, const char *script_file, int prot, char **msgid)\n{\n\tint i;\n\tint fd;\n\tint cc;\n\tchar s_buf[SCRIPT_CHUNK_Z];\n\tint rc = 0;\n\n\tif ((fd = open(script_file, O_RDONLY, 0)) < 0) {\n\t\treturn (-1);\n\t}\n\ti = 0;\n\tcc = read(fd, s_buf, SCRIPT_CHUNK_Z);\n\twhile ((cc > 0) &&\n\t       ((rc = PBSD_scbuf(c, PBS_BATCH_jobscript, i, s_buf, cc, NULL, JScript, prot, msgid)) == 0)) {\n\t\ti++;\n\t\tcc = read(fd, s_buf, SCRIPT_CHUNK_Z);\n\t}\n\n\tclose(fd);\n\tif (cc < 0) /* read failed */\n\t\treturn (-1);\n\n\tif (prot == PROT_TPP)\n\t\treturn (rc);\n\n\treturn get_conn_errno(c);\n}\n\n/**\n * @brief\n *\tsend an in-memory job script to the server/mom\n *\n * @param[in] c - connection handle\n * @param[in] script - job script buffer\n * @param[in] prot - PROT_TCP or PROT_TPP\n * @param[in] msgid - message id\n *\n * @return      int\n * @retval      0       success\n * @retval      -1      failure\n *\n */\nint\nPBSD_jscript_direct(int c, char *script, int prot, char **msgid)\n{\n\tint rc;\n\tint tosend;\n\tint i = 0;\n\tchar *p = script;\n\tint len;\n\n\tif (script == NULL) {\n\t\tpbs_errno = PBSE_INTERNAL;\n\t\treturn -1;\n\t}\n\n\tlen = strlen(script);\n\tdo {\n\t\ttosend = (len > SCRIPT_CHUNK_Z) ? 
SCRIPT_CHUNK_Z : len;\n\t\trc = PBSD_scbuf(c, PBS_BATCH_jobscript, i, p, tosend, NULL, JScript, prot, msgid);\n\t\ti++;\n\t\tp += tosend;\n\t\tlen -= tosend;\n\t} while ((rc == 0) && (len > 0));\n\n\tif (prot == PROT_TPP)\n\t\treturn (rc);\n\n\treturn get_conn_errno(c);\n}\n\n/**\n * @brief\n *\t-PBS_jobfile.c\n *\tThe Job File function used to move files related to\n *\ta job between servers.\n *\t-- the function PBSD_scbuf is called repeatedly to\n *\ttransfer chunks of the script to the server.\n *\n * @param[in] c - connection handle\n * @param[in] req_type - request type\n * @param[in] path - file path\n * @param[in] jobid - job id\n * @param[in] which - standard file type (enum)\n * @param[in] prot - PROT_TCP or PROT_TPP\n * @param[in] msgid - message id\n *\n * @return      int\n * @retval      0       success\n * @retval      -1      failure\n *\n */\nint\nPBSD_jobfile(int c, int req_type, char *path, char *jobid, enum job_file which, int prot, char **msgid)\n{\n\tint i;\n\tint cc;\n\tint fd;\n\tchar s_buf[SCRIPT_CHUNK_Z];\n\tint rc = 0;\n\n\tif ((fd = open(path, O_RDONLY, 0)) < 0) {\n\t\treturn (-1);\n\t}\n\ti = 0;\n\tcc = read(fd, s_buf, SCRIPT_CHUNK_Z);\n\twhile ((cc > 0) &&\n\t       ((rc = PBSD_scbuf(c, req_type, i, s_buf, cc, jobid, which, prot, msgid)) == 0)) {\n\t\ti++;\n\t\tcc = read(fd, s_buf, SCRIPT_CHUNK_Z);\n\t}\n\n\tclose(fd);\n\tif (cc < 0) /* read failed */\n\t\treturn (-1);\n\n\tif (prot == PROT_TPP)\n\t\treturn rc;\n\n\treturn get_conn_errno(c);\n}\n\n/**\n * @brief\n *\t-PBS_queuejob.c\n *\tThis function sends the first part of the Queue Job request\n *\n * @param[in] c - socket descriptor\n * @param[in] jobid - job identifier\n * @param[in] destin - destination name\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extension string for req encode\n * @param[in] prot - PROT_TCP or PROT_TPP\n * @param[in] msgid - message id\n * @param[out] commit_done - 1 if job committed, 0 if not yet committed\n *\n * @return      
char *\n * @retval      !NULL           Success (the assigned job id string)\n * @retval      NULL            error\n */\nchar *\nPBSD_queuejob(int c, char *jobid, const char *destin, struct attropl *attrib, const char *extend, int prot, char **msgid, int *commit_done)\n{\n\tstruct batch_reply *reply;\n\tchar *return_jobid = NULL;\n\tint rc;\n\n\tif (commit_done)\n\t\t*commit_done = 0;\n\n\tif (prot == PROT_TCP) {\n\t\tDIS_tcp_funcs();\n\t} else {\n\t\tif ((rc = is_compose_cmd(c, IS_CMD, msgid)) != DIS_SUCCESS) {\n\t\t\tpbs_errno = PBSE_PROTOCOL;\n\t\t\treturn return_jobid;\n\t\t}\n\t}\n\n\t/* first, set up the body of the Queue Job request */\n\n\tif ((rc = encode_DIS_ReqHdr(c, PBS_BATCH_QueueJob, pbs_current_user)) ||\n\t    (rc = encode_DIS_QueueJob(c, jobid, destin, attrib)) ||\n\t    (rc = encode_DIS_ReqExtend(c, extend))) {\n\t\tif (prot == PROT_TCP) {\n\t\t\tif (set_conn_errtxt(c, dis_emsg[rc]) != 0) {\n\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tpbs_errno = PBSE_PROTOCOL;\n\t\t}\n\t\treturn return_jobid;\n\t}\n\n\tif (prot == PROT_TPP) {\n\t\tpbs_errno = PBSE_NONE;\n\t\tif (dis_flush(c))\n\t\t\tpbs_errno = PBSE_PROTOCOL;\n\n\t\treturn (\"\"); /* return something NON-NULL for tpp */\n\t}\n\n\tif (dis_flush(c)) {\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t\treturn return_jobid;\n\t}\n\n\t/* read reply from stream into presentation element */\n\n\treply = PBSD_rdrpy(c);\n\tif (reply == NULL) {\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t} else if (reply->brp_choice &&\n\t\t   reply->brp_choice != BATCH_REPLY_CHOICE_Text &&\n\t\t   reply->brp_choice != BATCH_REPLY_CHOICE_Queue &&\n\t\t   reply->brp_choice != BATCH_REPLY_CHOICE_Commit) {\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t} else if (get_conn_errno(c) == 0) {\n\t\treturn_jobid = strdup(reply->brp_un.brp_jid);\n\t\tif (return_jobid == NULL) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t}\n\t\tif (commit_done && (reply->brp_choice == BATCH_REPLY_CHOICE_Commit))\n\t\t\t*commit_done = 1;\n\t}\n\n\tPBSD_FreeReply(reply);\n\treturn 
return_jobid;\n}\n"
  },
  {
    "path": "src/lib/Libifl/int_submit_resv.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tint_submit_resv.c\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include <unistd.h>\n#include <stdio.h>\n#include <fcntl.h>\n#include \"portability.h\"\n#include \"libpbs.h\"\n#include \"dis.h\"\n\n/**\n * @brief\n *\tThis function sends the Submit Reservation request\n *\n * @param[in] connect - socket descriptor\n * @param[in] resv_id - reservation identifier\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extension string for req encode\n *\n * @return      string\n * @retval      resv id\tSuccess\n * @retval      NULL\t\terror\n *\n */\n\nchar *\nPBSD_submit_resv(int connect, const char *resv_id, struct attropl *attrib, const char *extend)\n{\n\tstruct batch_reply *reply;\n\tchar *return_resv_id = NULL;\n\tint rc;\n\n\tDIS_tcp_funcs();\n\n\t/* first, set up the body of the Submit Reservation request */\n\n\tif ((rc = encode_DIS_ReqHdr(connect, PBS_BATCH_SubmitResv, pbs_current_user)) ||\n\t    (rc = encode_DIS_SubmitResv(connect, resv_id, attrib)) ||\n\t    (rc = encode_DIS_ReqExtend(connect, extend))) {\n\t\tif (set_conn_errtxt(connect, dis_emsg[rc]) != 0) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\treturn NULL;\n\t\t}\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t\treturn return_resv_id;\n\t}\n\tif (dis_flush(connect)) {\n\t\tpbs_errno = 
PBSE_PROTOCOL;\n\t\treturn return_resv_id;\n\t}\n\n\t/* read reply from stream into presentation element */\n\n\treply = PBSD_rdrpy(connect);\n\tif (reply == NULL) {\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t} else if (!pbs_errno && reply->brp_choice &&\n\t\t   reply->brp_choice != BATCH_REPLY_CHOICE_Text) {\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t} else if (get_conn_errno(connect) == 0 && reply->brp_code == 0) {\n\t\tif (reply->brp_choice == BATCH_REPLY_CHOICE_Text) {\n\t\t\treturn_resv_id = strdup(reply->brp_un.brp_txt.brp_str);\n\t\t\tif (return_resv_id == NULL) {\n\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t}\n\t\t}\n\t}\n\n\tPBSD_FreeReply(reply);\n\treturn return_resv_id;\n}\n"
  },
  {
    "path": "src/lib/Libifl/int_ucred.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tint_ucred.c\n * @brief\n * send user credentials to the server.\n *\n * migrate user info on the current server to a destination server.\n *\n * @note\n * This code is not meant to be used by a persistent process.\n * If an error occurs, not all the allocated structures are freed.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include <stdio.h>\n#include <assert.h>\n#include \"libpbs.h\"\n#include \"dis.h\"\n#include \"ticket.h\"\n#include \"pbs_ifl.h\"\n#include \"pbs_ecl.h\"\n\n/**\n * @brief\n *\tsend user credentials to the server.\n *\n * @param[in] c - connection handler\n * @param[in] user - username\n * @param[in] type - cred type\n * @param[in] buf - credentials\n * @param[in] len - length of buffer\n *\n * @return      int\n * @retval      0  \t\tsuccess\n * @retval      error code      error\n *\n */\n\nint\nPBSD_ucred(int c, char *user, int type, char *buf, int len)\n{\n\tint rc;\n\tstruct batch_reply *reply = NULL;\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn pbs_errno;\n\n\t/* lock pthread mutex here for this connection */\n\t/* blocking call, waits for mutex release */\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn 
pbs_errno;\n\n\tDIS_tcp_funcs();\n\n\tif ((rc = encode_DIS_ReqHdr(c, PBS_BATCH_UserCred, pbs_current_user)) ||\n\t    (rc = encode_DIS_UserCred(c, user, type, buf, len)) ||\n\t    (rc = encode_DIS_ReqExtend(c, NULL))) {\n\t\tif (set_conn_errtxt(c, dis_emsg[rc]) != 0) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t} else {\n\t\t\tpbs_errno = PBSE_PROTOCOL;\n\t\t}\n\t\t(void) pbs_client_thread_unlock_connection(c);\n\t\treturn pbs_errno;\n\t}\n\tif (dis_flush(c)) {\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t\t(void) pbs_client_thread_unlock_connection(c);\n\t\treturn pbs_errno;\n\t}\n\n\treply = PBSD_rdrpy(c);\n\n\tPBSD_FreeReply(reply);\n\n\trc = get_conn_errno(c);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\treturn rc;\n}\n"
  },
  {
    "path": "src/lib/Libifl/list_link.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"portability.h\"\n#include \"list_link.h\"\n#ifndef NDEBUG\n#include <stdio.h>\n#include <stdlib.h>\n#endif\n\n/**\n * @file\tlist_link.c\n * @brief\n * list_link.c - general routines for maintenance of a double\n *\tlinked list.  A user defined structure can be managed as\n *\ta double linked list if the first element in the user structure\n *\tis the \"pbs_list_link\" struct defined in list_link.h and the list\n *\tis headed by a \"pbs_list_head\" struct also defined in list_link.h.\n *\n * @par\tThese are the routines provided:\n *\t\tinsert_link - inserts a new entry before or after an old\n *\t\tappend_link - adds a new entry to the end of the list\n *\t\tdelete_link - removes an entry from the list\n *\t\tis_linked   - returns 1 if entry is in the list\n */\n\n/**\n * @brief\n * \t-insert_link - adds a new entry to a list.\n *\tEntry is added either before (position=0) or after (position !=0)\n *\tan old entry.\n *\n * @param[in] oldp\tptr to old entry in list\n * @param[in] newp\tptr to new link entry\n * @param[in] pobj \tptr to object to link in\n * @param[in] position  0=before old, else after\n *\n * @return\tVoid\n */\n\nvoid\ninsert_link(struct pbs_list_link *oldp, struct pbs_list_link *newp,\n\t    void *pobj, int position)\n{\n\n#ifndef 
NDEBUG\n\t/* first make sure unlinked entries are pointing to themselves\t    */\n\n\tif ((pobj == NULL) ||\n\t    (oldp == NULL) ||\n\t    (oldp->ll_prior == NULL) ||\n\t    (oldp->ll_next == NULL) ||\n\t    (newp->ll_prior != (pbs_list_link *) newp) ||\n\t    (newp->ll_next != (pbs_list_link *) newp)) {\n\t\t(void) fprintf(stderr, \"Assertion failed, bad pointer in insert_link\\n\");\n\t\tabort();\n\t}\n#endif\n\n\tif (position == LINK_INSET_AFTER) { /* insert newp after oldp */\n\t\tnewp->ll_prior = oldp;\n\t\tnewp->ll_next = oldp->ll_next;\n\t\t(oldp->ll_next)->ll_prior = newp;\n\t\toldp->ll_next = newp;\n\t} else { /* insert newp before oldp */\n\t\tnewp->ll_next = oldp;\n\t\tnewp->ll_prior = oldp->ll_prior;\n\t\t(oldp->ll_prior)->ll_next = newp;\n\t\toldp->ll_prior = newp;\n\t}\n\t/*\n\t * it's big trouble if ll_struct is null, it would make this\n\t * entry appear to be the head, so we never let that happen\n\t */\n\tif (pobj)\n\t\tnewp->ll_struct = pobj;\n\telse\n\t\tnewp->ll_struct = (void *) newp;\n}\n\n/**\n * @brief\n * \t-append_link - append a new entry to the end of the list\n *\n * @param[in] head\t\tptr to head of list\n * @param[in] newp\t\tptr to new link entry\n * @param[in] pobj\t\tptr to object to link in\n *\n * @return      Void\n */\n\nvoid\nappend_link(pbs_list_head *head, pbs_list_head *newp, void *pobj)\n{\n\n#ifndef NDEBUG\n\t/* first make sure unlinked entries are pointing to themselves\t    */\n\n\tif ((pobj == NULL) ||\n\t    (head->ll_prior == NULL) ||\n\t    (head->ll_next == NULL) ||\n\t    (newp->ll_prior != (pbs_list_link *) newp) ||\n\t    (newp->ll_next != (pbs_list_link *) newp)) {\n\t\t(void) fprintf(stderr, \"Assertion failed, bad pointer in append_link\\n\");\n\t\tabort();\n\t}\n#endif\n\n\t(head->ll_prior)->ll_next = newp;\n\tnewp->ll_prior = head->ll_prior;\n\tnewp->ll_next = head;\n\thead->ll_prior = newp;\n\t/*\n\t * it's big trouble if ll_struct is null, it would make this\n\t * entry appear to be the head, so we never 
let that happen\n\t */\n\tif (pobj)\n\t\tnewp->ll_struct = pobj;\n\telse\n\t\tnewp->ll_struct = (void *) newp;\n}\n\n/**\n * @brief\n * \t-delete_link - delete an entry from the list\n *\n * @par\tChecks to be sure links exist before breaking them\\n\n *\tNote: the oldp entry is unchanged other than the list links\n *\tare cleared.\n *\n * @param[in] oldp       ptr to link to delete\n *\n * @return\tVoid\n *\n */\nvoid\ndelete_link(struct pbs_list_link *oldp)\n{\n\tif ((oldp->ll_prior != NULL) &&\n\t    (oldp->ll_prior != oldp) && (oldp->ll_prior->ll_next == oldp))\n\t\t(oldp->ll_prior)->ll_next = oldp->ll_next;\n\n\tif ((oldp->ll_next != NULL) &&\n\t    (oldp->ll_next != oldp) && (oldp->ll_next->ll_prior == oldp))\n\t\t(oldp->ll_next)->ll_prior = oldp->ll_prior;\n\n\toldp->ll_next = oldp;\n\toldp->ll_prior = oldp;\n}\n\n/**\n * @brief delete an entry from the list and clear the struct\n *\n * @param[in] oldp       ptr to link to delete\n */\nvoid\ndelete_clear_link(struct pbs_list_link *oldp)\n{\n\tdelete_link(oldp);\n\toldp->ll_struct = NULL;\n}\n\n/**\n * @brief\n * \t-swap_link - swap the positions of members of a list\n *\n * @param[in] pone - member one\n * @param[in] ptwo - member two\n *\n * @return\tVoid\n *\n */\n\nvoid\nswap_link(pbs_list_link *pone, pbs_list_link *ptwo)\n{\n\tpbs_list_link *p1p;\n\tpbs_list_link *p2p;\n\n\tif (pone->ll_next == ptwo) {\n\t\tdelete_link(pone);\n\t\tinsert_link(ptwo, pone, pone->ll_struct, LINK_INSET_AFTER);\n\t} else if (ptwo->ll_next == pone) {\n\t\tdelete_link(ptwo);\n\t\tinsert_link(pone, ptwo, ptwo->ll_struct, LINK_INSET_AFTER);\n\t} else {\n\t\tp1p = pone->ll_prior;\n\t\tp2p = ptwo->ll_prior;\n\t\tdelete_link(pone);\n\t\tinsert_link(p2p, pone, pone->ll_struct, LINK_INSET_AFTER);\n\t\tdelete_link(ptwo);\n\t\tinsert_link(p1p, ptwo, ptwo->ll_struct, LINK_INSET_AFTER);\n\t}\n}\n\n/**\n * @brief\n * \t-is_linked - determine if entry is in the list\n *\n * @param[in] head - ptr to head of list\n * @param[in] entry - entry 
to be searched for\n *\n * @return\tint\n * @retval\t1 \tif in list\n * @retval\t0 \tif not in list\n *\n */\n\nint\nis_linked(pbs_list_link *head, pbs_list_link *entry)\n{\n\tpbs_list_link *pl;\n\n\tpl = head->ll_next;\n\twhile (pl != head) {\n\t\tif (pl == entry)\n\t\t\treturn (1);\n\t\tpl = pl->ll_next;\n\t}\n\treturn (0);\n}\n\n/*\n * The following routines are replaced by in-line code with the\n * GET_NEXT / GET_PRIOR macros when NDEBUG is defined, see list_link.h\n */\n\n#ifndef NDEBUG\n/**\n * @brief\n *\tget next entry in list\n *\n * @param[in] pl - list variable\n * @param[in] file - file name\n * @param[in] line - line num\n *\n * @return \tvoid *\n *\n */\nvoid *\nget_next(pbs_list_link pl, char *file, int line)\n{\n\tif ((pl.ll_next == NULL) ||\n\t    ((pl.ll_next == &pl) && (pl.ll_struct != NULL))) {\n\t\t(void) fprintf(stderr, \"Assertion failed, bad pointer in link: file \\\"%s\\\", line %d\\n\", file, line);\n\t\tabort();\n\t}\n\treturn (pl.ll_next->ll_struct);\n}\n\n/**\n * @brief\n *      get previous entry in list\n *\n * @param[in] pl - list variable\n * @param[in] file - file name\n * @param[in] line - line num\n *\n * @return      void *\n *\n */\nvoid *\nget_prior(pbs_list_link pl, char *file, int line)\n{\n\tif ((pl.ll_prior == NULL) ||\n\t    ((pl.ll_prior == &pl) && (pl.ll_struct != NULL))) {\n\t\t(void) fprintf(stderr, \"Assertion failed, null pointer in link: file \\\"%s\\\", line %d\\n\", file, line);\n\t\tabort();\n\t}\n\treturn (pl.ll_prior->ll_struct);\n}\n#endif\n\n/**\n * @brief\n * \t-list_move - move an entire list from one head to another\n *\n * @param[in] from - pointer to pbs_list_head\n * @param[in] to - pointer to pbs_list_head\n *\n * @return\tVoid\n */\n\nvoid\nlist_move(pbs_list_head *from, pbs_list_head *to)\n{\n\tif (from->ll_next == from) {\n\t\tto->ll_next = to;\n\t\tto->ll_prior = to;\n\t} else {\n\t\tto->ll_next = from->ll_next;\n\t\tto->ll_next->ll_prior = to;\n\t\tto->ll_prior = 
from->ll_prior;\n\t\tto->ll_prior->ll_next = to;\n\t\tCLEAR_HEAD((*from));\n\t}\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_Preempt_Jobs.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\tpbsD_Preempt_Jobs.c\n *\n *\tThe Preempt Jobs request.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"dis.h\"\n#include \"libpbs.h\"\n#include \"net_connect.h\"\n#include \"pbs_ecl.h\"\n#include \"pbs_idx.h\"\n#include \"pbs_share.h\"\n#include \"portability.h\"\n#include \"tpp.h\"\n#include <fcntl.h>\n#include <stdio.h>\n#include <string.h>\n#include <unistd.h>\n\n/**\n * @brief\n *\t-Pass-through call to send preempt jobs batch request\n *\n * @param[in] connect - connection handler\n * @param[in] preempt_jobs_list - list of jobs to be preempted\n *\n * @return      preempt_job_info *\n * @retval      list of jobs and their preempt_method\n * @retval\t\tNULL in case of error\n *\n */\nstatic preempt_job_info *\nPBSD_preempt_jobs(int connect, char **preempt_jobs_list)\n{\n\tstruct batch_reply *reply = NULL;\n\tpreempt_job_info *ppj_reply = NULL;\n\tpreempt_job_info *ppj_temp = NULL;\n\tint rc = -1;\n\n\tDIS_tcp_funcs();\n\n\t/* first, set up the body of the Preempt Jobs request */\n\n\tif ((rc = encode_DIS_ReqHdr(connect, PBS_BATCH_PreemptJobs, pbs_current_user)) ||\n\t    (rc = encode_DIS_JobsList(connect, preempt_jobs_list, -1)) ||\n\t    (rc = encode_DIS_ReqExtend(connect, NULL))) {\n\t\tif (set_conn_errtxt(connect, dis_emsg[rc]) != 0) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\treturn 
NULL;\n\t\t}\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t\treturn NULL;\n\t}\n\tif (dis_flush(connect)) {\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t\treturn NULL;\n\t}\n\n\treply = PBSD_rdrpy(connect);\n\tif (reply == NULL)\n\t\tpbs_errno = PBSE_PROTOCOL;\n\telse {\n\t\tint i = 0;\n\t\tint count = 0;\n\t\tppj_temp = reply->brp_un.brp_preempt_jobs.ppj_list;\n\t\tcount = reply->brp_un.brp_preempt_jobs.count;\n\n\t\tppj_reply = calloc(count, sizeof(struct preempt_job_info));\n\t\tif (ppj_reply == NULL) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\tPBSD_FreeReply(reply);\n\t\t\treturn NULL;\n\t\t}\n\n\t\tfor (i = 0; i < count; i++) {\n\t\t\tstrcpy(ppj_reply[i].job_id, ppj_temp[i].job_id);\n\t\t\tstrcpy(ppj_reply[i].order, ppj_temp[i].order);\n\t\t}\n\t\tPBSD_FreeReply(reply);\n\t}\n\treturn ppj_reply;\n}\n\n/**\n * @brief\n *\t-Pass-through call to send preempt jobs batch request\n *\n * @param[in] c - connection handler\n * @param[in] preempt_jobs_list - list of jobs to be preempted\n *\n * @return\tpreempt_job_info *\n * @retval\tlist of jobs and their preempt_method\n * @retval\tNULL for Error/Failure\n *\n */\npreempt_job_info *\n__pbs_preempt_jobs(int c, char **preempt_jobs_list)\n{\n\tpreempt_job_info *ret = NULL;\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn NULL;\n\n\t/* lock pthread mutex here for this connection\n\t * blocking call, waits for mutex release */\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn NULL;\n\n\tret = PBSD_preempt_jobs(c, preempt_jobs_list);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn NULL;\n\n\treturn ret;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_alterjob.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_alterjob.c\n * @brief\n * Send the Alter Job request to the server --\n * really an instance of the \"manager\" request.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <stdlib.h>\n#include \"libpbs.h\"\n\n/**\n * @brief\tConvenience function to create attropl list from attrl (shallow copy)\n *\n * @param[in]\tattrib - the list to copy\n *\n * @return struct attropl\n * @retval newly allocated attropl list\n * @retval NULL for malloc error\n */\nstatic struct attropl *\nattrl_to_attropl(struct attrl *attrib)\n{\n\tstruct attropl *ap = NULL;\n\tstruct attropl *ap1 = NULL;\n\n\t/* copy the attrl to an attropl */\n\twhile (attrib != NULL) {\n\t\tif (ap == NULL) {\n\t\t\tap1 = ap = (struct attropl *) malloc(sizeof(struct attropl));\n\t\t} else {\n\t\t\tap->next = (struct attropl *) malloc(sizeof(struct attropl));\n\t\t\tap = ap->next;\n\t\t}\n\t\tif (ap == NULL) {\n\t\t\twhile (ap1 != NULL) {\n\t\t\t\tap = ap1->next;\n\t\t\t\tfree(ap1);\n\t\t\t\tap1 = ap;\n\t\t\t}\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\treturn NULL;\n\t\t}\n\t\tap->name = attrib->name;\n\t\tap->resource = attrib->resource;\n\t\tap->value = attrib->value;\n\t\tap->op = SET;\n\t\tap->next = NULL;\n\t\tattrib = attrib->next;\n\t}\n\n\treturn ap1;\n}\n\n/**\n * @brief\tConvenience function to 
shallow-free oplist\n *\n * @param[out]\toplist - the list to free\n *\n * @return void\n */\nstatic void\n__free_attropl(struct attropl *oplist)\n{\n\tstruct attropl *ap = NULL;\n\n\twhile (oplist != NULL) {\n\t\tap = oplist->next;\n\t\tfree(oplist);\n\t\toplist = ap;\n\t}\n}\n\n/**\n * @brief\n *\t-Send the Alter Job request to the server --\n *\treally an instance of the \"manager\" request.\n *\n * @param[in] c - connection handle\n * @param[in] jobid- job identifier\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extend string for encoding req\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t!0\terror\n *\n */\nint\n__pbs_alterjob(int c, const char *jobid, struct attrl *attrib, const char *extend)\n{\n\tstruct attropl *attrib_opl = NULL;\n\tint rc = 0;\n\n\tif ((jobid == NULL) || (*jobid == '\\0'))\n\t\treturn (pbs_errno = PBSE_IVALREQ);\n\n\tattrib_opl = attrl_to_attropl(attrib);\n\n\trc = PBSD_manager(c, PBS_BATCH_ModifyJob, MGR_CMD_SET, MGR_OBJ_JOB, jobid, attrib_opl, extend);\n\n\t/* free up the attropl we just created */\n\t__free_attropl(attrib_opl);\n\n\treturn rc;\n}\n\n/**\n * @brief\tSend Alter Job request to the server, Asynchronously\n *\n * @param[in] c - connection handle\n * @param[in] jobid- job identifier\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extend string for encoding req\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t!0\terror\n *\n */\nint\n__pbs_asyalterjob(int c, const char *jobid, struct attrl *attrib, const char *extend)\n{\n\tstruct attropl *attrib_opl = NULL;\n\tint i;\n\n\tif ((jobid == NULL) || (*jobid == '\\0'))\n\t\treturn (pbs_errno = PBSE_IVALREQ);\n\n\t/* initialize the thread context data, if not initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn pbs_errno;\n\n\t/* lock pthread mutex here for this connection */\n\t/* blocking call, waits for mutex release */\n\tif (pbs_client_thread_lock_connection(c) != 
0)\n\t\treturn pbs_errno;\n\n\t/* send the manage request with modifyjob async */\n\tattrib_opl = attrl_to_attropl(attrib);\n\ti = PBSD_mgr_put(c, PBS_BATCH_ModifyJob_Async, MGR_CMD_SET, MGR_OBJ_JOB, jobid, attrib_opl, extend, PROT_TCP, NULL);\n\t__free_attropl(attrib_opl);\n\n\tif (i) {\n\t\t(void) pbs_client_thread_unlock_connection(c);\n\t\treturn i;\n\t}\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\treturn i;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_confirmresv.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_confirmresv.c\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include <stdio.h>\n#include \"libpbs.h\"\n#include \"dis.h\"\n#include \"pbs_ecl.h\"\n\n/**\n * @brief\n * \t-pbs_confirmresv - this function is for exclusive use by the Scheduler\n *\tto confirm an advanced reservation.\n *\n * @param[in] c \tconnection handle\n * @param[in] rid \tReservation ID\n * @param[in] location  string of vnodes/resources to be allocated to the resv.\n * @param[in] start \tstart time of reservation if non-zero\n * @param[in] extend \textend string for encoding req\n *\n * @return\tint\n * @retval\t0\tSuccess\n * @retval\t!0\terror\n *\n */\n\nint\n__pbs_confirmresv(int c, const char *rid, const char *location, unsigned long start,\n\t\t  char *extend)\n{\n\tint rc;\n\tstruct batch_reply *reply;\n\n\tif ((rid == NULL) || (*rid == '\\0') ||\n\t    (location == NULL) || (*location == '\\0'))\n\t\treturn (pbs_errno = PBSE_IVALREQ);\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn pbs_errno;\n\n\t/* lock pthread mutex here for this connection */\n\t/* blocking call, waits for mutex release */\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\t/* setup DIS support routines for following DIS calls */\n\n\tDIS_tcp_funcs();\n\n\t/* send run request */\n\n\tif ((rc = 
encode_DIS_ReqHdr(c, PBS_BATCH_ConfirmResv, pbs_current_user)) ||\n\t    (rc = encode_DIS_Run(c, rid, location, start)) ||\n\t    (rc = encode_DIS_ReqExtend(c, extend))) {\n\t\tif (set_conn_errtxt(c, dis_emsg[rc]) != 0) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t} else {\n\t\t\tpbs_errno = PBSE_PROTOCOL;\n\t\t}\n\t\t(void) pbs_client_thread_unlock_connection(c);\n\t\treturn pbs_errno;\n\t}\n\n\tif (dis_flush(c)) {\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t\t(void) pbs_client_thread_unlock_connection(c);\n\t\treturn pbs_errno;\n\t}\n\n\t/* get reply */\n\n\treply = PBSD_rdrpy(c);\n\trc = get_conn_errno(c);\n\n\tPBSD_FreeReply(reply);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\treturn rc;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_connect.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_connect.c\n * @brief\n *\tOpen a connection with the pbs server.  
At this point several\n *\tthings are stubbed out, and other things are hard-wired.\n *\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <ctype.h>\n#include <stdlib.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <stdio.h>\n#include <pwd.h>\n#include <string.h>\n#include <netdb.h>\n#include <unistd.h>\n#include <sys/stat.h>\n#include <sys/types.h>\n#include <sys/socket.h>\n#include <sys/time.h>\n#include <netinet/in.h>\n#include <netinet/tcp.h>\n#include <arpa/inet.h>\n#include <pbs_ifl.h>\n#include \"libpbs.h\"\n#include \"net_connect.h\"\n#include \"dis.h\"\n#include \"libsec.h\"\n#include \"pbs_ecl.h\"\n#include \"pbs_internal.h\"\n#include \"log.h\"\n#include \"auth.h\"\n#include \"ifl_internal.h\"\n#include \"libutil.h\"\n#include \"portability.h\"\n\n/**\n * @brief\n *\t-returns the default server name.\n *\n * @return\tstring\n * @retval\tdflt srvr name\tsuccess\n * @retval\tNULL\t\terror\n *\n */\nchar *\n__pbs_default(void)\n{\n\tchar dflt_server[PBS_MAXSERVERNAME + 1];\n\tstruct pbs_client_thread_context *p;\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn NULL;\n\n\tp = pbs_client_thread_get_context_data();\n\n\tif (pbs_loadconf(0) == 0)\n\t\treturn NULL;\n\n\tif (p->th_pbs_defserver[0] == '\\0') {\n\t\t/* The check for PBS_DEFAULT is done in pbs_loadconf() */\n\t\tif (pbs_conf.pbs_primary && pbs_conf.pbs_secondary) {\n\t\t\tstrncpy(dflt_server, pbs_conf.pbs_primary, PBS_MAXSERVERNAME);\n\t\t} else if (pbs_conf.pbs_server_host_name) {\n\t\t\tstrncpy(dflt_server, pbs_conf.pbs_server_host_name, PBS_MAXSERVERNAME);\n\t\t} else if (pbs_conf.pbs_server_name) {\n\t\t\tstrncpy(dflt_server, pbs_conf.pbs_server_name, PBS_MAXSERVERNAME);\n\t\t} else {\n\t\t\tdflt_server[0] = '\\0';\n\t\t}\n\t\tstrcpy(p->th_pbs_defserver, dflt_server);\n\t}\n\treturn (p->th_pbs_defserver);\n}\n\n/**\n * @brief\n *\tReturn the IP address used in 
binding a socket to a host.\n *\tAttempts to find an IPv4 address for the named host; the first address found\n *\tis returned.\n *\n * @param[in]\thost - The name of the host whose address is needed\n * @param[out]\tsap  - pointer to the sockaddr_in structure into which\n *\t\t\t\t\t\tthe address will be returned.\n *\n * @return\tint\n * @retval  0\t- success, address set in *sap\n * @retval -1\t- error, *sap is left zero-ed\n */\nstatic int\nget_hostsockaddr(const char *host, struct sockaddr_in *sap)\n{\n\tstruct addrinfo hints;\n\tstruct addrinfo *aip, *pai;\n\n\tmemset(sap, 0, sizeof(struct sockaddr_in));\n\tmemset(&hints, 0, sizeof(struct addrinfo));\n\t/*\n\t *\tWhy do we use AF_UNSPEC rather than AF_INET?  Some\n\t *\timplementations of getaddrinfo() will take an IPv6\n\t *\taddress and map it to an IPv4 one if we ask for AF_INET\n\t *\tonly.  We don't want that - we want only the addresses\n\t *\tthat are genuinely, natively, IPv4 so we start with\n\t *\tAF_UNSPEC and filter ai_family below.\n\t */\n\thints.ai_family = AF_UNSPEC;\n\thints.ai_socktype = SOCK_STREAM;\n\thints.ai_protocol = IPPROTO_TCP;\n\n\tif (getaddrinfo(host, NULL, &hints, &pai) != 0) {\n\t\tpbs_errno = PBSE_BADHOST;\n\t\treturn -1;\n\t}\n\tfor (aip = pai; aip != NULL; aip = aip->ai_next) {\n\t\t/* skip non-IPv4 addresses */\n\t\tif (aip->ai_family == AF_INET) {\n\t\t\t*sap = *((struct sockaddr_in *) aip->ai_addr);\n\t\t\tfreeaddrinfo(pai);\n\t\t\treturn 0;\n\t\t}\n\t}\n\t/* treat no IPv4 addresses as getaddrinfo() failure */\n\tpbs_errno = PBSE_BADHOST;\n\tfreeaddrinfo(pai);\n\treturn -1;\n}\n\n/**\n * @brief\tThis function establishes a network connection to the given server.\n * \t\t\tset extend_data to the value of NOBLK_FLAG to do a non-blocking connect\n *\n * @param[in]   hostname - The hostname of the pbs server to connect to.\n * @param[in]   server_port - Port number of the pbs server to connect to.\n * @param[in]   extend_data - a string to send as \"extend\" data\n *\n *\n * @return int\n * 
@retval >= 0\tThe physical server socket.\n * @retval -1\terror encountered setting up the connection.\n */\n\nstatic int\ntcp_connect(const char *hostname, int server_port, const char *extend_data)\n{\n\tint i;\n\tint sd;\n\tstruct sockaddr_in server_addr;\n\tstruct batch_reply *reply;\n\tchar errbuf[LOG_BUF_SIZE] = {'\\0'};\n\tbool noblk = false;\n\tbool connect_err = false;\n#ifdef WIN32\n\tint non_block = 1;\n#else\n\tint oflg = 0;\n\tint nflg = 0;\n#endif\n\n\tif (extend_data != NULL && strcmp(NOBLK_FLAG, extend_data) == 0)\n\t\tnoblk = true;\n\n\t\t/* get socket\t*/\n#ifdef WIN32\n\t/* the following lousy hack is needed since the socket call needs */\n\t/* SYSTEMROOT env variable properly set! */\n\tif (getenv(\"SYSTEMROOT\") == NULL) {\n\t\tsetenv(\"SYSTEMROOT\", \"C:\\\\WINDOWS\", 1);\n\t\tsetenv(\"SystemRoot\", \"C:\\\\WINDOWS\", 1);\n\t}\n#endif\n\n\tsd = socket(AF_INET, SOCK_STREAM, 0);\n\tif (sd == -1) {\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn -1;\n\t}\n\tif (noblk) { /* set socket non-blocking */\n#ifdef WIN32\n\t\tif (ioctlsocket(sd, FIONBIO, &non_block) == SOCKET_ERROR) {\n#else\n\t\toflg = fcntl(sd, F_GETFL) & ~O_ACCMODE;\n\t\tnflg = oflg | O_NONBLOCK;\n\t\tif (fcntl(sd, F_SETFL, nflg) == -1)\n#endif\n\t\t\t{\n\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t}\n\n\t\tpbs_strncpy(pbs_server, hostname, sizeof(pbs_server)); /* set for error messages from commands */\n\t\t/* and connect... 
*/\n\n\t\tif (get_hostsockaddr(hostname, &server_addr) != 0)\n\t\t\treturn -1;\n\n\t\tserver_addr.sin_port = htons(server_port);\n\t\tif (connect(sd, (struct sockaddr *) &server_addr, sizeof(struct sockaddr)) != 0)\n\t\t\tconnect_err = true;\n\n\t\tif (connect_err && noblk) { /* For non-blocking, wait until timeout before erroring out */\n\t\t\tfd_set fdset;\n\t\t\tstruct timeval tv;\n\t\t\tint n;\n\t\t\tpbs_socklen_t l;\n\n\t\t\t/* connect attempt failed */\n\t\t\tpbs_errno = SOCK_ERRNO;\n\t\t\tswitch (pbs_errno) {\n#ifdef WIN32\n\t\t\t\tcase WSAEWOULDBLOCK:\n#else\n\t\t\tcase EINPROGRESS:\n\t\t\tcase EWOULDBLOCK:\n#endif\n\t\t\t\t\twhile (1) {\n\t\t\t\t\t\tFD_ZERO(&fdset);\n\t\t\t\t\t\tFD_SET(sd, &fdset);\n\t\t\t\t\t\ttv.tv_sec = NOBLK_TOUT;\n\t\t\t\t\t\ttv.tv_usec = 0;\n\t\t\t\t\t\tn = select(sd + 1, NULL, &fdset, NULL, &tv);\n\t\t\t\t\t\tif (n > 0) {\n\t\t\t\t\t\t\tpbs_errno = 0;\n\t\t\t\t\t\t\tl = sizeof(pbs_errno);\n\t\t\t\t\t\t\tgetsockopt(sd, SOL_SOCKET, SO_ERROR, &pbs_errno, &l);\n\t\t\t\t\t\t\tif (pbs_errno == 0)\n\t\t\t\t\t\t\t\tconnect_err = false;\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t} else {\n#ifdef WIN32\n\t\t\t\t\t\t\tif (SOCK_ERRNO != WSAEINTR)\n#else\n\t\t\t\t\t\tif (SOCK_ERRNO != EINTR)\n#endif\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\n\t\t\t\tdefault:;\n\t\t\t}\n\t\t}\n\t\tif (connect_err) {\n\t\t\tif (pbs_errno == PBSE_NONE)\n\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\tclosesocket(sd);\n\t\t\treturn -1; /* cannot connect */\n\t\t}\n\n\t\t/* reset socket blocking */\n#ifdef WIN32\n\t\tnon_block = 0;\n\t\tif (ioctlsocket(sd, FIONBIO, &non_block) == SOCKET_ERROR)\n#else\n\tif (fcntl(sd, F_SETFL, oflg) < 0)\n#endif\n\t\t{\n\t\t\tclosesocket(sd);\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\treturn -1;\n\t\t}\n\n\t\t/* setup connection level thread context */\n\t\tif (pbs_client_thread_init_connect_context(sd) != 0) {\n\t\t\tclosesocket(sd);\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\treturn -1;\n\t\t}\n\n\t\t/*\n\t * No need for a global lock from now on, since the rest of the code\n\t * is only communication on a 
connection handle.\n\t * But we dont need to lock the connection handle, since this\n\t * connection handle is not yet been returned to the client\n\t */\n\n\t\tif (load_auths(AUTH_CLIENT)) {\n\t\t\tclosesocket(sd);\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\treturn -1;\n\t\t}\n\n\t\t/* setup DIS support routines for following pbs_* calls */\n\t\tDIS_tcp_funcs();\n\n\t\t/* The following code was originally  put in for HPUX systems to deal\n\t * with the issue where returning from the connect() call doesn't\n\t * mean the connection is complete.  However, this has also been\n\t * experienced in some Linux ppc64 systems like js-2. Decision was\n\t * made to enable this harmless code for all architectures.\n\t * FIX: Need to use the socket to send\n\t * a message to complete the process.  For IFF authentication there is\n\t * no leading authentication message needing to be sent on the client\n\t * socket, so will send a \"dummy\" message and discard the replyback.\n\t */\n\t\tif ((i = encode_DIS_ReqHdr(sd, PBS_BATCH_Connect, pbs_current_user)) ||\n\t\t    (i = encode_DIS_ReqExtend(sd, extend_data))) {\n\t\t\tclosesocket(sd);\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\treturn -1;\n\t\t}\n\t\tif (dis_flush(sd)) {\n\t\t\tclosesocket(sd);\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\treturn -1;\n\t\t}\n\n\t\tpbs_errno = PBSE_NONE;\n\t\treply = PBSD_rdrpy(sd);\n\t\tPBSD_FreeReply(reply);\n\t\tif (pbs_errno != PBSE_NONE) {\n\t\t\tclosesocket(sd);\n\t\t\treturn -1;\n\t\t}\n\n\t\tif (engage_client_auth(sd, hostname, server_port, errbuf, sizeof(errbuf)) != 0) {\n\t\t\tif (pbs_errno == PBSE_NONE)\n\t\t\t\tpbs_errno = PBSE_PERM;\n\t\t\tfprintf(stderr, \"auth: error returned: %d\\n\", pbs_errno);\n\t\t\tif (errbuf[0] != '\\0')\n\t\t\t\tfprintf(stderr, \"auth: %s\\n\", errbuf);\n\t\t\tclosesocket(sd);\n\t\t\treturn -1;\n\t\t}\n\n\t\tpbs_tcp_timeout = PBS_DIS_TCP_TIMEOUT_VLONG; /* set for 3 hours */\n\n\t\t/*\n\t * Disable Nagle's algorithm on the TCP connection to server.\n\t * Nagle's algorithm 
is hurting cmd-server communication.\n\t */\n\t\tif (pbs_connection_set_nodelay(sd) == -1) {\n\t\t\tclosesocket(sd);\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\treturn -1;\n\t\t}\n\n\t\treturn sd;\n\t}\n\n\t/**\n * @brief\tHelper function to connect to a particular server\n *\n * @param[in]\tsvrname - server hostname to connect to\n * @param[in]\tsvrport - server port to connect to\n * @param[in]\textend_data - any additional data relevant for connection\n *\n * @return\tint\n * @retval\t-1 for error\n * @retval\tfd of connection\n */\n\tstatic int\n\tconnect_to_server(const char *svrname, int svrport, const char *extend_data)\n\t{\n\t\tint sd = -1;\n\t\tstruct sockaddr_in my_sockaddr;\n\n\t\t/* bind to pbs_public_host_name if given  */\n\t\tif (pbs_conf.pbs_public_host_name) {\n\t\t\tif (get_hostsockaddr(pbs_conf.pbs_public_host_name, &my_sockaddr) != 0)\n\t\t\t\treturn -1; /* pbs_errno was set */\n\t\t\t/* my address will be in my_sockaddr,  bind the socket to it */\n\t\t\tmy_sockaddr.sin_port = 0;\n\t\t\tif (bind(sd, (struct sockaddr *) &my_sockaddr, sizeof(my_sockaddr)) != 0) {\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t}\n\n\t\tsd = tcp_connect(svrname, svrport, extend_data);\n\n\t\treturn sd;\n\t}\n\n\t/**\n * @brief\tMakes a PBS_BATCH_Connect request to 'server'.\n *\n * @param[in]   server - the hostname of the pbs server to connect to.\n * @param[in]   extend_data - a string to send as \"extend\" data.\n *\n * @return int\n * @retval >= 0\tindex to the internal connection table representing the\n *\t\tconnection made.\n * @retval -1\terror encountered setting up the connection.\n */\n\tint\n\t__pbs_connect_extend(const char *server, const char *extend_data)\n\t{\n\t\tchar server_name[PBS_MAXSERVERNAME + 1];\n\t\tunsigned int server_port;\n\t\tchar *altservers[2];\n\t\tint have_alt = 0;\n\t\tint sock = -1;\n\t\tint i;\n\t\tint f;\n\n\t\tchar pbsrc[_POSIX_PATH_MAX];\n\t\tstruct stat sb;\n\t\tint using_secondary = 0;\n\n\t\t/* initialize the thread context data, if not 
already initialized */\n\t\tif (pbs_client_thread_init_thread_context() != 0)\n\t\t\treturn -1;\n\n\t\tif (pbs_loadconf(0) == 0)\n\t\t\treturn -1;\n\n\t\tif (PBS_get_server(server, server_name, &server_port) == NULL) {\n\t\t\tpbs_errno = PBSE_NOSERVER;\n\t\t\treturn -1;\n\t\t}\n\n\t\tif (pbs_conf.pbs_primary && pbs_conf.pbs_secondary) {\n\t\t\t/* failover configured ...   */\n\t\t\tif (is_same_host(server_name, pbs_conf.pbs_primary)) {\n\t\t\t\thave_alt = 1;\n\n\t\t\t\taltservers[0] = pbs_conf.pbs_primary;\n\t\t\t\taltservers[1] = pbs_conf.pbs_secondary;\n\n\t\t\t\t/* We want to try the one last seen as \"up\" first to not   */\n\t\t\t\t/* have connection delays.   If the primary was up, there  */\n\t\t\t\t/* is no .pbsrc.NAME file.  If the last command connected  */\n\t\t\t\t/* to the Secondary, then it created the .pbsrc.USER file. */\n\n\t\t\t\t/* see if already seen Primary down */\n\t\t\t\tsnprintf(pbsrc, _POSIX_PATH_MAX, \"%s/.pbsrc.%s\", pbs_conf.pbs_tmpdir, pbs_current_user);\n\t\t\t\tif (stat(pbsrc, &sb) != -1) {\n\t\t\t\t\t/* try secondary first */\n\t\t\t\t\taltservers[0] = pbs_conf.pbs_secondary;\n\t\t\t\t\taltservers[1] = pbs_conf.pbs_primary;\n\t\t\t\t\tusing_secondary = 1;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t/*\n\t * connect to server ...\n\t * If attempt to connect fails and if Failover configured and\n\t *   if attempting to connect to Primary,  try the Secondary\n\t *   if attempting to connect to Secondary, try the Primary\n\t */\n\t\tfor (i = 0; i < (have_alt + 1); ++i) {\n\t\t\tif (have_alt)\n\t\t\t\tpbs_strncpy(server_name, altservers[i], PBS_MAXSERVERNAME);\n\t\t\tif ((sock = connect_to_server(server_name, server_port, extend_data)) != -1)\n\t\t\t\tbreak;\n\t\t}\n\n\t\tif (i >= (have_alt + 1) && sock == -1) {\n\t\t\treturn -1; /* cannot connect */\n\t\t}\n\n\t\tif (have_alt && (i == 1)) {\n\t\t\t/* had to use the second listed server ... 
*/\n\t\t\tif (using_secondary == 1) {\n\t\t\t\t/* remove file that causes trying the Secondary first */\n\t\t\t\tunlink(pbsrc);\n\t\t\t} else {\n\t\t\t\t/* create file that causes trying the Primary first   */\n\t\t\t\tf = open(pbsrc, O_WRONLY | O_CREAT, 0200);\n\t\t\t\tif (f != -1)\n\t\t\t\t\tclose(f);\n\t\t\t}\n\t\t}\n\n\t\treturn sock;\n\t}\n\n\t/**\n * @brief\n *\tSet no-delay option (disable Nagle's algorithm) on connection\n *\n * @param[in]   connect - connection index\n *\n * @return int\n * @retval  0\tSuccess\n * @retval -1\tFailure (bad index, or failed to set)\n *\n */\n\tint\n\tpbs_connection_set_nodelay(int connect)\n\t{\n\t\tint opt;\n\t\tpbs_socklen_t optlen;\n\n\t\tif (connect < 0)\n\t\t\treturn -1;\n\t\toptlen = sizeof(opt);\n\t\tif (getsockopt(connect, IPPROTO_TCP, TCP_NODELAY, &opt, &optlen) == -1)\n\t\t\treturn -1;\n\n\t\tif (opt == 1)\n\t\t\treturn 0;\n\n\t\topt = 1;\n\t\treturn setsockopt(connect, IPPROTO_TCP, TCP_NODELAY, &opt, sizeof(opt));\n\t}\n\n\t/**\n * @brief\tA wrapper for pbs_connect_extend() that does not\n *\t\t\tpass any 'extend' data to the connection.\n *\n * @param[in] server - the hostname of the pbs server to connect to.\n *\n * @retval int\t- return value of pbs_connect_extend().\n */\n\tint\n\t__pbs_connect(const char *server)\n\t{\n\t\treturn (pbs_connect_extend(server, NULL));\n\t}\n\n\t/**\n * @brief\n *\t-send close connection batch request\n *\n * @param[in] connect - socket descriptor\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t-1\terror\n *\n */\n\tint\n\t__pbs_disconnect(int connect)\n\t{\n\t\tchar x;\n\n\t\tif (connect < 0)\n\t\t\treturn 0;\n\n\t\t/* initialize the thread context data, if not already initialized */\n\t\tif (pbs_client_thread_init_thread_context() != 0)\n\t\t\treturn -1;\n\n\t\t/*\n\t * Use only connection handle level lock since this is\n\t * just communication with server\n\t */\n\t\tif (pbs_client_thread_lock_connection(connect) != 0)\n\t\t\treturn 
-1;\n\n\t\t/*\n\t * check again to ensure that another racing thread\n\t * had not already closed the connection\n\t */\n\t\tif (get_conn_chan(connect) == NULL)\n\t\t\treturn 0;\n\n\t\t/* send close-connection message */\n\n\t\tDIS_tcp_funcs();\n\t\tif ((encode_DIS_ReqHdr(connect, PBS_BATCH_Disconnect, pbs_current_user) == 0) &&\n\t\t    (dis_flush(connect) == 0)) {\n\t\t\tfor (;;) { /* wait for server to close connection */\n#ifdef WIN32\n\t\t\t\tif (recv(connect, &x, 1, 0) < 1)\n#else\n\t\t\tif (read(connect, &x, 1) < 1)\n#endif\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\n\t\tCS_close_socket(connect);\n\t\tclosesocket(connect);\n\t\tdis_destroy_chan(connect);\n\n\t\t/* unlock the connection level lock */\n\t\tif (pbs_client_thread_unlock_connection(connect) != 0)\n\t\t\treturn -1;\n\n\t\t/*\n\t * this is only a per thread work, so outside lock and unlock\n\t * connection needs the thread level connect context so this should be\n\t * called after unlocking\n\t */\n\t\tif (pbs_client_thread_destroy_connect_context(connect) != 0)\n\t\t\treturn -1;\n\n\t\tdestroy_connection(connect);\n\n\t\treturn 0;\n\t}\n\n\t/**\n * @brief\n *\t-return the number of max connections.\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t!0\terror\n *\n */\n\tint\n\tpbs_query_max_connections()\n\t{\n\t\treturn (NCONNECTS - 1);\n\t}\n\n\t/*\n *\tpbs_connect_noblk() - Open a connection with a pbs server.\n *\t\tDo not allow TCP to block us if Server host is down\n *\n *\tAt this point, this does not attempt to find a fail_over Server\n */\n\n\t/**\n * @brief\n *\tOpen a connection with a pbs server.\n *\tDo not allow TCP to block us if Server host is down\n *\tAt this point, this does not attempt to find a fail_over Server\n *\n * @param[in]   server - specifies the server to which to connect\n *\n * @return int\n * @retval >= 0\tindex to the internal connection table representing the\n *\t\tconnection made.\n * @retval -1\terror encountered in getting index\n 
*/\n\tint\n\tpbs_connect_noblk(const char *server)\n\t{\n\t\treturn pbs_connect_extend(server, NOBLK_FLAG);\n\t}\n\n\t/**\n * @brief Registers the given connection with the Server by sending PBS_BATCH_RegisterSched\n *\n * param[in]\tsched_id - sched identifier which is known to server\n * @return int\n * @retval !0  - failure\n * @return 0  - success\n */\n\tstatic int\n\tregister_sched_conn(int c, const char *sched_id)\n\t{\n\t\tint rc;\n\t\tstruct batch_reply *reply = NULL;\n\n\t\tif (sched_id == NULL)\n\t\t\treturn -1;\n\n\t\trc = encode_DIS_ReqHdr(c, PBS_BATCH_RegisterSched, pbs_current_user);\n\t\tif (rc != DIS_SUCCESS)\n\t\t\tgoto rerr;\n\t\trc = diswst(c, sched_id);\n\t\tif (rc != DIS_SUCCESS)\n\t\t\tgoto rerr;\n\t\trc = encode_DIS_ReqExtend(c, NULL);\n\t\tif (rc != DIS_SUCCESS)\n\t\t\tgoto rerr;\n\t\tif (dis_flush(c) != 0)\n\t\t\tgoto rerr;\n\n\t\tpbs_errno = 0;\n\t\treply = PBSD_rdrpy(c);\n\t\tif (reply == NULL)\n\t\t\tgoto rerr;\n\n\t\tif (pbs_errno != 0)\n\t\t\tgoto rerr;\n\n\t\tPBSD_FreeReply(reply);\n\t\treturn 0;\n\n\trerr:\n\t\tpbs_disconnect(c);\n\t\tPBSD_FreeReply(reply);\n\t\treturn -1;\n\t}\n\n\t/**\n * @brief Registers the Scheduler with all the Servers configured\n *\n * param[in]\tsched_id - sched identifier which is known to server\n * param[in]\tprimary_conn_id - primary connection handle which represents all servers returned by pbs_connect\n * param[in]\tsecondary_conn_id - secondary connection handle which represents all servers returned by pbs_connect\n *\n * @return int\n * @retval !0  - couldn't register with a connected server\n * @return 0  - success\n */\n\tint\n\t__pbs_register_sched(const char *sched_id, int primary_conn_sd, int secondary_conn_sd)\n\t{\n\t\tif (sched_id == NULL || primary_conn_sd < 0 || secondary_conn_sd < 0)\n\t\t\treturn -1;\n\n\t\tif (register_sched_conn(primary_conn_sd, sched_id) != 0)\n\t\t\treturn -1;\n\t\tif (register_sched_conn(secondary_conn_sd, sched_id) != 0)\n\t\t\treturn -1;\n\n\t\treturn 0;\n\t}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_defschreply.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_defschreply.c\n * @brief\n *\t\tDeferred reply from the Scheduler to the Server\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include <stdio.h>\n#include \"libpbs.h\"\n#include \"dis.h\"\n#include \"pbs_error.h\"\n#include \"pbs_ecl.h\"\n\n/**\n * @brief\n *\t- Deferred reply from the Scheduler to the Server\n *\n * @param[in] c - connection handler\n * @param[in] cmd - command\n * @param[in] id - job id\n * @param[in] err - error number\n * @param[in] txt - message\n * @param[in] extend - extend string for encoding req\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t!0\terror\n *\n */\nint\npbs_defschreply(int c, int cmd, char *id, int err, char *txt, char *extend)\n{\n\tint rc;\n\tstruct batch_reply *reply;\n\tint has_txt = 0;\n\n\tif ((id == NULL) || (*id == '\\0'))\n\t\treturn (pbs_errno = PBSE_IVALREQ);\n\tif ((txt != NULL) && (*txt != '\\0'))\n\t\thas_txt = 1;\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn pbs_errno;\n\n\t/* lock pthread mutex here for this connection */\n\t/* blocking call, waits for mutex release */\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\t/* setup DIS support routines for following DIS calls 
*/\n\n\tDIS_tcp_funcs();\n\n\t/* encode request */\n\n\tif ((rc = encode_DIS_ReqHdr(c, PBS_BATCH_DefSchReply,\n\t\t\t\t    pbs_current_user)) ||\n\t    (rc = diswui(c, cmd)) ||\n\t    (rc = diswst(c, id)) ||\n\t    (rc = diswui(c, err))) {\n\t\tif (set_conn_errtxt(c, dis_emsg[rc]) != 0) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t} else {\n\t\t\tpbs_errno = PBSE_PROTOCOL;\n\t\t}\n\t\t(void) pbs_client_thread_unlock_connection(c);\n\t\treturn pbs_errno;\n\t}\n\trc = diswsi(c, has_txt);\n\tif ((has_txt == 1) && (rc == 0)) {\n\t\trc = diswst(c, txt);\n\t}\n\tif (rc == 0)\n\t\trc = encode_DIS_ReqExtend(c, extend);\n\tif (rc) {\n\t\tif (set_conn_errtxt(c, dis_emsg[rc]) != 0) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t} else {\n\t\t\tpbs_errno = PBSE_PROTOCOL;\n\t\t}\n\t\t(void) pbs_client_thread_unlock_connection(c);\n\t\treturn pbs_errno;\n\t}\n\n\tif (dis_flush(c)) {\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t\t(void) pbs_client_thread_unlock_connection(c);\n\t\treturn pbs_errno;\n\t}\n\n\t/* get reply */\n\n\treply = PBSD_rdrpy(c);\n\trc = get_conn_errno(c);\n\n\tPBSD_FreeReply(reply);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\treturn rc;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_deljob.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_deljob.c\n * @brief\n * Send the Delete Job request to the server\n * really just an instance of the manager request\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n\n#include \"ifl_internal.h\"\n#include \"libpbs.h\"\n#include \"pbs_ifl.h\"\n\n/**\n * @brief\n *\tSend the Delete Job request to the server\n * \treally just an instance of the manager request\n *\n * @param[in] c - connection handler\n * @param[in] jobid - job identifier\n * @param[in] extend - string to encode req\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t!0\terror\n *\n */\n\nint\n__pbs_deljob(int c, const char *jobid, const char *extend)\n{\n\tchar *list[2];\n\tstruct batch_deljob_status *res = NULL;\n\n\tif ((jobid == NULL) || (*jobid == '\\0'))\n\t\treturn (pbs_errno = PBSE_IVALREQ);\n\n\tlist[0] = (char *) jobid;\n\tlist[1] = NULL;\n\n\tres = pbs_deljoblist(c, list, 1, extend);\n\tif (res != NULL)\n\t\treturn res->code;\n\treturn PBSE_NONE;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_deljoblist.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_deljob.c\n * @brief\n * Send the Delete Job request to the server\n * really just an instance of the manager request\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"cmds.h\"\n#include \"dis.h\"\n#include \"libpbs.h\"\n#include \"libutil.h\"\n#include \"net_connect.h\"\n#include \"pbs_ecl.h\"\n#include \"pbs_ifl.h\"\n#include \"tpp.h\"\n#include <stdio.h>\n#include \"dedup_jobids.h\"\n\n/**\n * @brief\tDeallocate a svr_jobid_list_t list\n * @param[out]\tlist - the list to deallocate\n * @param[in]\tshallow - shallow free (don't free the individual jobids in the array)\n * @return\tvoid\n */\nvoid\nfree_svrjobidlist(svr_jobid_list_t *list, int shallow)\n{\n\tsvr_jobid_list_t *iter_list = NULL;\n\tsvr_jobid_list_t *next = NULL;\n\n\tfor (iter_list = list; iter_list != NULL; iter_list = next) {\n\t\tnext = iter_list->next;\n\t\tif (shallow)\n\t\t\tfree(iter_list->jobids);\n\t\telse\n\t\t\tfree_str_array(iter_list->jobids);\n\t\tfree(iter_list);\n\t}\n}\n\n/**\n * @brief\n *\tAppend a given jobid to the given svr_jobid_list struct.\n *\n * @param[in] svr - Server name\n * @param[in] jobid - Job id\n *\n * @return\tint\n * @retval\t0 for Success\n * @retval\t1 for Failure\n */\nint\nappend_jobid(svr_jobid_list_t *svr, const char *jobid)\n{\n\tif ((svr == NULL) || (jobid == 
NULL))\n\t\treturn 0;\n\n\tif (svr->max_sz == 0) {\n\t\tsvr->jobids = malloc((DELJOB_DFLT_NUMIDS + 1) * sizeof(char *));\n\t\tif (svr->jobids == NULL)\n\t\t\tgoto error;\n\t\tsvr->max_sz = DELJOB_DFLT_NUMIDS;\n\t} else if (svr->total_jobs == svr->max_sz) {\n\t\tchar **realloc_ptr = NULL;\n\n\t\tsvr->max_sz *= 2;\n\t\trealloc_ptr = realloc(svr->jobids, (svr->max_sz + 1) * sizeof(char *));\n\t\tif (realloc_ptr == NULL)\n\t\t\tgoto error;\n\t\tsvr->jobids = realloc_ptr;\n\t}\n\n\tsvr->jobids[svr->total_jobs++] = (char *) jobid;\n\tsvr->jobids[svr->total_jobs] = NULL;\n\treturn 0;\n\nerror:\n\tpbs_errno = PBSE_SYSTEM;\n\treturn 1;\n}\n\n/**\n * @brief\n *\tIdentify the respective svr_jobid_list struct\n *  and calls the append_jobid function to append the jobid.\n *\n * @param[in] job_id - Job id\n * @param[in] svrname - server name\n * @param[in,out] svr_jobid_list_hd - head of the svr_jobib_list list\n *\n * @return int\n * @retval\t0\t- success\n * @retval\t1\t- failure\n *\n */\nint\nadd_jid_to_list_by_name(char *job_id, char *svrname, svr_jobid_list_t **svr_jobid_list_hd)\n{\n\tsvr_jobid_list_t *iter_list = NULL;\n\tsvr_jobid_list_t *new_node = NULL;\n\n\tif ((job_id == NULL) || (svrname == NULL) || (svr_jobid_list_hd == NULL))\n\t\treturn 1;\n\n\tfor (iter_list = *svr_jobid_list_hd; iter_list != NULL; iter_list = iter_list->next) {\n\t\tif (strcmp(svrname, iter_list->svrname) == 0) {\n\t\t\tif (append_jobid(iter_list, job_id) != 0)\n\t\t\t\treturn 1;\n\t\t\treturn 0;\n\t\t}\n\t}\n\n\tnew_node = calloc(1, sizeof(svr_jobid_list_t));\n\tif (new_node == NULL) {\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn 1;\n\t}\n\tnew_node->svr_fd = -1;\n\tpbs_strncpy(new_node->svrname, svrname, sizeof(new_node->svrname));\n\tif (append_jobid(new_node, job_id) != 0) {\n\t\tfree(new_node);\n\t\treturn 1;\n\t}\n\tnew_node->next = *svr_jobid_list_hd;\n\t*svr_jobid_list_hd = new_node;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tSend the Delete Job List request to the server\n *\n *\n * 
@param[in] c - connection handler\n * @param[in] jobids - job identifier array\n * @param[in] numjids - number of job ids\n * @param[in] extend - string to encode req\n *\n * @return\tstruct batch_deljob_status *\n * @retval     list of jobs which couldn't be deleted\n *\n */\nstruct batch_deljob_status *\n__pbs_deljoblist(int c, char **jobids, int numjids, const char *extend)\n{\n\tint rc, i;\n\tstruct batch_reply *reply;\n\tstruct batch_deljob_status *ret = NULL;\n\n\tif ((jobids == NULL) || (*jobids == NULL) || (**jobids == '\\0') || c < 0)\n\t\treturn NULL;\n\n\tchar *malloc_track = calloc(1, numjids);\n\tif (malloc_track == NULL) {\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn NULL;\n\t}\n\t/* Deletes duplicate jobids */\n\tif (dedup_jobids(jobids, &numjids, malloc_track) != 0)\n\t\tgoto end;\n\n\tDIS_tcp_funcs();\n\n\tif ((rc = encode_DIS_ReqHdr(c, PBS_BATCH_DeleteJobList, pbs_current_user)) ||\n\t    (rc = encode_DIS_JobsList(c, jobids, numjids)) ||\n\t    (rc = encode_DIS_ReqExtend(c, extend))) {\n\t\tif (set_conn_errtxt(c, dis_emsg[rc]) != 0) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\tgoto end;\n\t\t}\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t\tgoto end;\n\t}\n\n\tif (dis_flush(c)) {\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t\tgoto end;\n\t}\n\n\treply = PBSD_rdrpy(c);\n\tif (reply == NULL) {\n\t\tif (pbs_errno == PBSE_NONE)\n\t\t\tpbs_errno = PBSE_PROTOCOL;\n\t} else if (reply->brp_choice != BATCH_REPLY_CHOICE_NULL &&\n\t\t   reply->brp_choice != BATCH_REPLY_CHOICE_Text &&\n\t\t   reply->brp_choice != BATCH_REPLY_CHOICE_Delete)\n\t\tpbs_errno = PBSE_PROTOCOL;\n\n\tif ((reply != NULL) && (reply->brp_un.brp_deletejoblist.brp_delstatc != NULL)) {\n\t\tret = reply->brp_un.brp_deletejoblist.brp_delstatc;\n\t\treply->brp_un.brp_deletejoblist.brp_delstatc = NULL;\n\t}\n\n\tPBSD_FreeReply(reply);\n\nend:\n\t/* We need to free the jobids that were allocated in dedup_jobids();\n\t * the rest of the jobids are not on the heap */\n\tfor (i = 0; i < numjids; i++)\n\t\tif (malloc_track[i])\n\t\t\tfree(jobids[i]);\n\n\tfree(malloc_track);\n\treturn ret;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_delresv.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_delresv.c\n *\n * @brief\n * Send the Delete Reservation request to the server\n * really just an instance of the manager request\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include \"libpbs.h\"\n\n/**\n * @brief\n *      Send the Delete Job request to the server\n *      really just an instance of the manager request\n *\n * @param[in] c - connection handler\n * @param[in] resv_id - reservation identifier\n * @param[in] extend - string to encode req\n *\n * @return      int\n * @retval      0       success\n * @retval      !0      error\n *\n */\nint\n__pbs_delresv(int c, const char *resv_id, const char *extend)\n{\n\tstruct attropl *aoplp = NULL;\n\n\tif ((resv_id == NULL) || (*resv_id == '\\0'))\n\t\treturn (pbs_errno = PBSE_IVALREQ);\n\n\treturn PBSD_manager(c, PBS_BATCH_DeleteResv, MGR_CMD_DELETE, MGR_OBJ_JOB, resv_id, aoplp, extend);\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_holdjob.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_holdjob.c\n * @brief\n * Send the Hold Job request to the server --\n * really just an instance of the \"manager\" request.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include \"libpbs.h\"\n\n/**\n * @brief\n *\t- Send the Hold Job request to the server --\n *\treally just an instance of the \"manager\" request.\n *\n * @param[in] c - connection handler\n * @param[in] jobid - job identifier\n * @param[in] holdtype - value for holdtype\n * @param[in] extend - string to encode req\n *\n * @return      int\n * @retval      0       success\n * @retval      !0      error\n *\n */\n\nint\n__pbs_holdjob(int c, const char *jobid, const char *holdtype, const char *extend)\n{\n\tstruct attropl aopl;\n\n\tif ((jobid == NULL) || (*jobid == '\\0'))\n\t\treturn (pbs_errno = PBSE_IVALREQ);\n\n\taopl.name = ATTR_h;\n\taopl.resource = NULL;\n\tif ((holdtype == NULL) || (*holdtype == '\\0'))\n\t\taopl.value = \"u\";\n\telse\n\t\taopl.value = (char *) holdtype;\n\taopl.op = SET;\n\taopl.next = NULL;\n\treturn PBSD_manager(c, PBS_BATCH_HoldJob, MGR_CMD_SET, MGR_OBJ_JOB, jobid, &aopl, extend);\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_locjob.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_locjob.c\n * @brief\n * This function does the LocateJob request.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include <stdio.h>\n#include \"libpbs.h\"\n#include \"dis.h\"\n#include \"pbs_ecl.h\"\n\n/**\n * @brief\n *      This function does the LocateJob request.\n *\n * @param[in] c - connection handler\n * @param[in] jobid - job identifier\n * @param[in] extend - string to encode req\n *\n * @return\tstring\n * @retval\tdestination name\tsuccess\n * @retval\tNULL\terror\n */\nchar *\n__pbs_locjob(int c, const char *jobid, const char *extend)\n{\n\tint rc;\n\tstruct batch_reply *reply;\n\tchar *ploc = NULL;\n\n\tif ((jobid == NULL) || (*jobid == '\\0')) {\n\t\tpbs_errno = PBSE_IVALREQ;\n\t\treturn (ploc);\n\t}\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn NULL;\n\t/* lock pthread mutex here for this connection */\n\t/* blocking call, waits for mutex release */\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn NULL;\n\t/* setup DIS support routines for following DIS calls */\n\tDIS_tcp_funcs();\n\tif ((rc = encode_DIS_ReqHdr(c, PBS_BATCH_LocateJob, pbs_current_user)) || (rc = encode_DIS_JobId(c, jobid)) || (rc = encode_DIS_ReqExtend(c, extend))) {\n\t\tif 
(set_conn_errtxt(c, dis_emsg[rc]) != 0) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t} else {\n\t\t\tpbs_errno = PBSE_PROTOCOL;\n\t\t}\n\t\tpbs_client_thread_unlock_connection(c);\n\t\treturn NULL;\n\t}\n\t/* write data over tcp stream */\n\tif (dis_flush(c)) {\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t\tpbs_client_thread_unlock_connection(c);\n\t\treturn NULL;\n\t}\n\t/* read reply from stream */\n\treply = PBSD_rdrpy(c);\n\tif (reply == NULL) {\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t} else if (reply->brp_choice != BATCH_REPLY_CHOICE_NULL && reply->brp_choice != BATCH_REPLY_CHOICE_Text && reply->brp_choice != BATCH_REPLY_CHOICE_Locate) {\n\t\tadvise(\"pbs_locjob\", \"Unexpected reply choice\");\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t} else if (get_conn_errno(c) == 0) {\n\t\tif ((ploc = strdup(reply->brp_un.brp_locate)) == NULL) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t}\n\t}\n\tPBSD_FreeReply(reply);\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn NULL;\n\n\treturn ploc;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_manager.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_manager.c\n * @brief\n * Basically a pass-thru to PBSD_manager\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"libpbs.h\"\n\n/**\n * @brief\n *\t- Basically a pass-thru to PBSD_manager\n *\n * @param[in] c - connection handle\n * @param[in] command - mgr command with respect to obj\n * @param[in] objtype - object type\n * @param[in] objname - object name\n * @param[in] attrib -  pointer to attropl structure\n * @param[in] extend - extend string to encode req\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t!0\terror\n *\n */\nint\n__pbs_manager(int c, int command, int objtype, const char *objname,\n\t      struct attropl *attrib, const char *extend)\n{\n\treturn PBSD_manager(c,\n\t\t\t    PBS_BATCH_Manager,\n\t\t\t    command,\n\t\t\t    objtype,\n\t\t\t    objname,\n\t\t\t    attrib,\n\t\t\t    extend);\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_modify_resv.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\tpbsD_modify_resv.c\n *\n *\tThe Modify Reservation request.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <fcntl.h>\n#include <unistd.h>\n#include \"libpbs.h\"\n#include \"pbs_ecl.h\"\n\n/**\n * @brief\n *\tPasses modify reservation request to PBSD_modify_resv( )\n *\n * @param[in]   c - socket on which connected\n * @param[in]   resv_id - reservation identifier\n * @param[in]   attrib - the list of attributes for batch request\n * @param[in]   extend - extension of batch request\n *\n * @return char*\n * @retval SUCCESS returns the response from the server.\n * @retval ERROR NULL\n */\nchar *\n__pbs_modify_resv(int c, const char *resv_id, struct attropl *attrib, const char *extend)\n{\n\tstruct attropl *pal = NULL;\n\tint rc = 0;\n\tchar *ret = NULL;\n\n\tfor (pal = attrib; pal; pal = pal->next)\n\t\tpal->op = SET;\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn NULL;\n\n\t/* first verify the attributes, if verification is enabled */\n\trc = pbs_verify_attributes(c, PBS_BATCH_ModifyResv, MGR_OBJ_RESV, MGR_CMD_NONE, attrib);\n\tif (rc)\n\t\treturn NULL;\n\n\t/* lock pthread mutex here for this connection\n\t * blocking call, waits for mutex release */\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn NULL;\n\n\t/* initiate the modification of the reservation */\n\tret = PBSD_modify_resv(c, resv_id, attrib, extend);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn NULL;\n\n\treturn ret;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_movejob.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_movejob.c\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include <stdio.h>\n#include \"libpbs.h\"\n#include \"dis.h\"\n#include \"pbs_ecl.h\"\n\n/**\n * @brief\n *\tsend move job request\n *\n * @param[in] c - connection handler\n * @param[in] jobid - job identifier\n * @param[in] destin - job moved to\n * @param[in] extend - string to encode req\n *\n * @return      int\n * @retval      0       success\n * @retval      !0      error\n *\n */\nint\n__pbs_movejob(int c, const char *jobid, const char *destin, const char *extend)\n{\n\tint rc;\n\tstruct batch_reply *reply;\n\n\tif ((jobid == NULL) || (*jobid == '\\0'))\n\t\treturn (pbs_errno = PBSE_IVALREQ);\n\n\tif (destin == NULL)\n\t\tdestin = \"\";\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn pbs_errno;\n\n\t/* lock pthread mutex here for this connection */\n\t/* blocking call, waits for mutex release */\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\t/* setup DIS support routines for following DIS calls */\n\n\tDIS_tcp_funcs();\n\n\tif ((rc = encode_DIS_ReqHdr(c, PBS_BATCH_MoveJob, pbs_current_user)) ||\n\t    (rc = encode_DIS_MoveJob(c, jobid, destin)) ||\n\t    (rc = encode_DIS_ReqExtend(c, 
extend))) {\n\t\tif (set_conn_errtxt(c, dis_emsg[rc]) != 0) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t} else {\n\t\t\tpbs_errno = PBSE_PROTOCOL;\n\t\t}\n\t\t(void) pbs_client_thread_unlock_connection(c);\n\t\treturn pbs_errno;\n\t}\n\n\tif (dis_flush(c)) {\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t\t(void) pbs_client_thread_unlock_connection(c);\n\t\treturn pbs_errno;\n\t}\n\n\t/* read reply */\n\treply = PBSD_rdrpy(c);\n\tPBSD_FreeReply(reply);\n\trc = get_conn_errno(c);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\treturn rc;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_msgjob.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_msgjob.c\n * @brief\n *\tsend the MessageJob request and get the reply.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include <stdio.h>\n#include <errno.h>\n#include \"libpbs.h\"\n#include \"dis.h\"\n#include \"pbs_ecl.h\"\n\n/**\n * @brief\n *\t-send the MessageJob request and get the reply.\n *\n * @param[in] c - socket descriptor\n * @param[in] jobid - job id\n * @param[in] fileopt - which file\n * @param[in] msg - msg to be encoded\n * @param[in] extend - extend string for encoding req\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t!0\terror\n *\n */\nint\n__pbs_msgjob(int c, const char *jobid, int fileopt, const char *msg, const char *extend)\n{\n\tstruct batch_reply *reply;\n\tint rc;\n\n\tif ((jobid == NULL) || (*jobid == '\\0') ||\n\t    (msg == NULL) || (*msg == '\\0'))\n\t\treturn (pbs_errno = PBSE_IVALREQ);\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn pbs_errno;\n\n\t/* lock pthread mutex here for this connection */\n\t/* blocking call, waits for mutex release */\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\t/* setup DIS support routines for following DIS calls */\n\tDIS_tcp_funcs();\n\n\tif ((rc = PBSD_msg_put(c, jobid, 
fileopt, msg, extend, PROT_TCP, NULL)) != 0) {\n\t\tif (set_conn_errtxt(c, dis_emsg[rc]) != 0) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t} else {\n\t\t\tpbs_errno = PBSE_PROTOCOL;\n\t\t}\n\t\tpbs_client_thread_unlock_connection(c);\n\t\treturn pbs_errno;\n\t}\n\n\t/* read reply */\n\treply = PBSD_rdrpy(c);\n\trc = get_conn_errno(c);\n\n\tPBSD_FreeReply(reply);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\t-Send a request to spawn a python script to the MS\n *\tof a job.  It will run as a task.\n *\n * @param[in] c - communication handle\n * @param[in] jobid - job id\n * @param[in] argv - pointer to argument list\n * @param[in] envp - pointer to environment variable\n *\n * @return\tint\n * @retval\texit value of the task\tsuccess\n * @retval\t-1\t\t\terror\n *\n */\nint\npbs_py_spawn(int c, char *jobid, char **argv, char **envp)\n{\n\tstruct batch_reply *reply;\n\tint rc;\n\n\t/*\n\t ** Must have jobid and argv[0] as a minimum.\n\t */\n\tif ((jobid == NULL) || (*jobid == '\\0') ||\n\t    (argv == NULL) || (argv[0] == NULL)) {\n\t\tpbs_errno = PBSE_IVALREQ;\n\t\treturn -1;\n\t}\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn -1;\n\n\t/* lock pthread mutex here for this connection */\n\t/* blocking call, waits for mutex release */\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn -1;\n\n\t/* setup DIS support routines for following DIS calls */\n\n\tDIS_tcp_funcs();\n\n\tif ((rc = PBSD_py_spawn_put(c, jobid, argv, envp, 0, NULL)) != 0) {\n\t\tif (set_conn_errtxt(c, dis_emsg[rc]) != 0) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t} else {\n\t\t\tpbs_errno = PBSE_PROTOCOL;\n\t\t}\n\t\t(void) pbs_client_thread_unlock_connection(c);\n\t\treturn -1;\n\t}\n\n\t/* read reply */\n\n\treply = PBSD_rdrpy(c);\n\tif ((reply == NULL) || 
(get_conn_errno(c) != 0))\n\t\trc = -1;\n\telse\n\t\trc = reply->brp_auxcode;\n\n\tPBSD_FreeReply(reply);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn -1;\n\n\treturn rc;\n}\n\n/**\n * @brief\n * \t-pbs_relnodesjob - release a set of sister nodes or vnodes,\n * \tor all sister nodes or vnodes assigned to the specified PBS\n * \tbatch job.\n *\n * @param[in] c \tcommunication handle\n * @param[in] jobid  job identifier\n * @param[in] node_list \tlist of hosts or vnodes to be released\n * @param[in] extend \tadditional params, currently passes -k arguments\n *\n * @return\tint\n * @retval\t0\tSuccess\n * @retval\t!0\terror\n *\n */\nint\n__pbs_relnodesjob(int c, const char *jobid, const char *node_list, const char *extend)\n{\n\tstruct batch_reply *reply;\n\tint rc;\n\n\tif ((jobid == NULL) || (*jobid == '\\0') ||\n\t    (node_list == NULL))\n\t\treturn (pbs_errno = PBSE_IVALREQ);\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn pbs_errno;\n\n\t/* first verify the resource list in keep_select option */\n\tif (extend) {\n\t\tstruct attrl *attrib = NULL;\n\t\tchar emsg_illegal_k_value[] = \"illegal -k value\";\n\t\tchar ebuff[PBS_PARSE_ERR_MSG_LEN_MAX + sizeof(emsg_illegal_k_value) + 4], *erp, *emsg = NULL;\n\t\tint i;\n\t\tstruct pbs_client_thread_connect_context *con;\n\t\tchar nd_ct_selstr[20];\n\t\tchar *endptr = NULL;\n\t\tlong int rc_long;\n\n\t\terrno = 0;\n\t\trc_long = strtol(extend, &endptr, 10);\n\n\t\tif ((errno == 0) && (rc_long > 0) && (*endptr == '\\0')) {\n\t\t\tsnprintf(nd_ct_selstr, sizeof(nd_ct_selstr), \"select=%s\", extend);\n\t\t\textend = nd_ct_selstr;\n\t\t} else if ((i = set_resources(&attrib, extend, 1, &erp))) {\n\t\t\tif (i > 1) {\n\t\t\t\tsnprintf(ebuff, sizeof(ebuff), \"%s: %s\\n\", emsg_illegal_k_value, pbs_parse_err_msg(i));\n\t\t\t\temsg = 
strdup(ebuff);\n\t\t\t} else\n\t\t\t\temsg = strdup(\"illegal -k value\\n\");\n\t\t\tpbs_errno = PBSE_INVALSELECTRESC;\n\t\t} else {\n\t\t\tif (!attrib || strcmp(attrib->resource, \"select\")) {\n\t\t\t\temsg = strdup(\"only a \\\"select=\\\" string is valid in -k option\\n\");\n\t\t\t\tpbs_errno = PBSE_IVALREQ;\n\t\t\t} else\n\t\t\t\tpbs_errno = PBSE_NONE;\n\t\t}\n\t\tif (pbs_errno) {\n\t\t\tif ((con = pbs_client_thread_find_connect_context(c))) {\n\t\t\t\tfree(con->th_ch_errtxt);\n\t\t\t\tcon->th_ch_errtxt = emsg;\n\t\t\t\tcon->th_ch_errno = pbs_errno;\n\t\t\t} else {\n\t\t\t\t(void) set_conn_errtxt(c, emsg);\n\t\t\t\t(void) set_conn_errno(c, pbs_errno);\n\t\t\t\tfree(emsg);\n\t\t\t}\n\t\t\treturn pbs_errno;\n\t\t}\n\t\trc = pbs_verify_attributes(c, PBS_BATCH_RelnodesJob,\n\t\t\t\t\t   MGR_OBJ_JOB, MGR_CMD_NONE, (struct attropl *) attrib);\n\t\tif (rc)\n\t\t\treturn rc;\n\t}\n\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\t/* setup DIS support routines for following DIS calls */\n\n\tDIS_tcp_funcs();\n\n\tif ((rc = PBSD_relnodes_put(c, jobid, node_list, extend, 0, NULL)) != 0) {\n\t\tif (set_conn_errtxt(c, dis_emsg[rc]) != 0)\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\telse\n\t\t\tpbs_errno = PBSE_PROTOCOL;\n\n\t\tpbs_client_thread_unlock_connection(c);\n\t\treturn pbs_errno;\n\t}\n\n\t/* read reply */\n\n\treply = PBSD_rdrpy(c);\n\trc = get_conn_errno(c);\n\n\tPBSD_FreeReply(reply);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\treturn rc;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_orderjo.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_orderjob.c\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include <stdio.h>\n#include \"libpbs.h\"\n#include \"dis.h\"\n#include \"pbs_ecl.h\"\n\n/**\n * @brief\n *\t-send order job batch request\n *\n * @param[in] c - connection handler\n * @param[in] job1 - job identifier\n * @param[in] job2 - job identifier\n * @param[in] extend - string to encode req\n *\n * @return      int\n * @retval      0       success\n * @retval      !0      error\n *\n */\nint\n__pbs_orderjob(int c, const char *job1, const char *job2, const char *extend)\n{\n\tstruct batch_reply *reply;\n\tint rc;\n\n\tif ((job1 == NULL) || (*job1 == '\\0') ||\n\t    (job2 == NULL) || (*job2 == '\\0'))\n\t\treturn (pbs_errno = PBSE_IVALREQ);\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn pbs_errno;\n\n\t/* lock pthread mutex here for this connection */\n\t/* blocking call, waits for mutex release */\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\t/* setup DIS support routines for following DIS calls */\n\tDIS_tcp_funcs();\n\n\tif ((rc = encode_DIS_ReqHdr(c, PBS_BATCH_OrderJob, pbs_current_user)) ||\n\t    (rc = encode_DIS_MoveJob(c, job1, job2)) ||\n\t    (rc = encode_DIS_ReqExtend(c, 
extend))) {\n\t\tif (set_conn_errtxt(c, dis_emsg[rc]) != 0)\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\telse\n\t\t\tpbs_errno = PBSE_PROTOCOL;\n\n\t\tpbs_client_thread_unlock_connection(c);\n\t\treturn pbs_errno;\n\t}\n\n\tif (dis_flush(c)) {\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t\tpbs_client_thread_unlock_connection(c);\n\t\treturn pbs_errno;\n\t}\n\n\t/* read reply */\n\treply = PBSD_rdrpy(c);\n\tPBSD_FreeReply(reply);\n\trc = get_conn_errno(c);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\treturn rc;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_rerunjo.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_rerunjob.c\n * @brief\n * This function does the RerunJob request.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include <stdio.h>\n#include \"libpbs.h\"\n#include \"dis.h\"\n#include \"pbs_ecl.h\"\n\n/**\n * @brief\n *\t-send rerun batch request\n *\n * @param[in] c - connection handler\n * @param[in] jobid - job identifier\n * @param[in] extend - string to encode req\n *\n * @return      int\n * @retval      0       success\n * @retval      !0      error\n *\n */\nint\n__pbs_rerunjob(int c, const char *jobid, const char *extend)\n{\n\tint rc;\n\tstruct batch_reply *reply;\n\ttime_t old_tcp_timeout;\n\n\tif ((jobid == NULL) || (*jobid == '\\0'))\n\t\treturn (pbs_errno = PBSE_IVALREQ);\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn pbs_errno;\n\n\t/* lock pthread mutex here for this connection */\n\t/* blocking call, waits for mutex release */\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\t/* setup DIS support routines for following DIS calls */\n\n\tDIS_tcp_funcs();\n\n\tif ((rc = encode_DIS_ReqHdr(c, PBS_BATCH_Rerun, pbs_current_user)) ||\n\t    (rc = encode_DIS_JobId(c, jobid)) ||\n\t    (rc = encode_DIS_ReqExtend(c, extend))) {\n\t\tif 
(set_conn_errtxt(c, dis_emsg[rc]) != 0) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t} else {\n\t\t\tpbs_errno = PBSE_PROTOCOL;\n\t\t}\n\t\t(void) pbs_client_thread_unlock_connection(c);\n\t\treturn pbs_errno;\n\t}\n\n\t/* write data */\n\tif (dis_flush(c)) {\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t\t(void) pbs_client_thread_unlock_connection(c);\n\t\treturn pbs_errno;\n\t}\n\n\t/* Set timeout value to very long value as rerun request */\n\t/* goes from Server to Mom and may take a long time      */\n\told_tcp_timeout = pbs_tcp_timeout;\n\tpbs_tcp_timeout = PBS_DIS_TCP_TIMEOUT_VLONG;\n\n\t/* read reply from stream into presentation element */\n\n\treply = PBSD_rdrpy(c);\n\n\t/* reset timeout */\n\tpbs_tcp_timeout = old_tcp_timeout;\n\n\tPBSD_FreeReply(reply);\n\n\trc = get_conn_errno(c);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\treturn rc;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_resc.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbsD_resc.c\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include <stdio.h>\n#include \"libpbs.h\"\n#include \"dis.h\"\n#include \"pbs_ecl.h\"\n\n/* following structure is used by pbsD_resc.c, functions totpool and usepool */\nstruct node_pool {\n\tint nodes_avail;\n\tint nodes_alloc;\n\tint nodes_resrv;\n\tint nodes_down;\n\tchar *resc_nodes;\n};\n\n/**\n * @brief\n *\t-frees the node pool\n *\n * @param[in] np - pointer to node pool list\n *\n * @return\tVoid\n *\n */\nvoid\nfree_node_pool(struct node_pool *np)\n{\n\tif (np) {\n\t\tif (np->resc_nodes)\n\t\t\tfree(np->resc_nodes);\n\t\tfree(np);\n\t}\n}\n\n/**\n * @brief\n * \t-encode_DIS_Resc() - encode a resource-related request,\n *\tused by pbs_rescquery(), pbs_rescreserve() and pbs_rescrelease()\n *\n * @param[in] sock - socket fd\n * @param[in] rlist - pointer to resource list\n * @param[in] ct - count of query strings\n * @param[in] rh - resource handle\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t!0\terror\n *\n */\n\nstatic int\nencode_DIS_Resc(int sock, char **rlist, int ct, pbs_resource_t rh)\n{\n\tint i;\n\tint rc;\n\n\tif ((rc = diswsi(sock, rh)) == 0) { /* resource reservation handle */\n\n\t\t/* next send the number of resource strings */\n\n\t\tif ((rc = diswui(sock, ct)) == 0) {\n\n\t\t\t/* 
now send each string (if any) */\n\n\t\t\tfor (i = 0; i < ct; ++i) {\n\t\t\t\tif ((rc = diswst(sock, *(rlist + i))) != 0)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\treturn rc;\n}\n\n/**\n * @brief\n * \t-PBS_resc() - internal common code for sending resource requests\n *\n * @par Functionality:\n *\tFormats and sends the requests for pbs_rescquery(), pbs_rescreserve(),\n *\tand pbs_rescrelease().  Note, while the request is overloaded for all\n *\tthree, each has its own expected reply format.\n *\n * @param[in] c - communication handle\n * @param[in] reqtype - request type\n * @param[in] rescl - pointer to resource list\n * @param[in] ct - count of query strings\n * @param[in] rh - resource handle\n *\n * @return      int\n * @retval      0       success\n * @retval      !0      error\n *\n */\nstatic int\nPBS_resc(int c, int reqtype, char **rescl, int ct, pbs_resource_t rh)\n{\n\tint rc;\n\n\t/* setup DIS support routines for following DIS calls */\n\n\tDIS_tcp_funcs();\n\n\tif ((rc = encode_DIS_ReqHdr(c, reqtype, pbs_current_user)) ||\n\t    (rc = encode_DIS_Resc(c, rescl, ct, rh)) ||\n\t    (rc = encode_DIS_ReqExtend(c, NULL))) {\n\t\tif (set_conn_errtxt(c, dis_emsg[rc]) != 0) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t} else {\n\t\t\tpbs_errno = PBSE_PROTOCOL;\n\t\t}\n\t\treturn (pbs_errno);\n\t}\n\tif (dis_flush(c)) {\n\t\treturn (pbs_errno = PBSE_PROTOCOL);\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n * \t-pbs_rescquery() - query the availability of resources\n *\n * @param[in] c - communication handle\n * @param[in] resclist - list of queries\n * @param[in] num_resc - number in list\n * @param[out] available - number available per query\n * @param[out] allocated - number allocated per query\n * @param[out] reserved - number reserved per query\n * @param[out] down - number down/off per query\n *\n * @return      int\n * @retval      0       success\n * @retval      !0      error\n *\n */\n\nint\npbs_rescquery(int c, char **resclist, int num_resc,\n\t      int 
*available, int *allocated, int *reserved, int *down)\n{\n\tint i;\n\tstruct batch_reply *reply;\n\tint rc = 0;\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn pbs_errno;\n\n\t/* lock pthread mutex here for this connection */\n\t/* blocking call, waits for mutex release */\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\tif (resclist == 0) {\n\t\tif (set_conn_errno(c, PBSE_RMNOPARAM) != 0) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t} else {\n\t\t\tpbs_errno = PBSE_RMNOPARAM;\n\t\t}\n\t\t(void) pbs_client_thread_unlock_connection(c);\n\t\treturn pbs_errno;\n\t}\n\n\t/* send request */\n\n\tif ((rc = PBS_resc(c, PBS_BATCH_Rescq, resclist,\n\t\t\t   num_resc, (pbs_resource_t) 0)) != 0) {\n\t\t(void) pbs_client_thread_unlock_connection(c);\n\t\treturn rc;\n\t}\n\n\t/* read in reply */\n\n\treply = PBSD_rdrpy(c);\n\tif ((rc = get_conn_errno(c)) == PBSE_NONE &&\n\t    reply->brp_choice == BATCH_REPLY_CHOICE_RescQuery) {\n\t\tstruct brp_rescq *resq = &reply->brp_un.brp_rescq;\n\n\t\tif (resq == NULL || num_resc != resq->brq_number) {\n\t\t\trc = PBSE_IRESVE;\n\t\t\tif (set_conn_errno(c, PBSE_IRESVE) != 0) {\n\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t} else {\n\t\t\t\tpbs_errno = PBSE_IRESVE;\n\t\t\t}\n\t\t\tgoto done;\n\t\t}\n\n\t\t/* copy in available and allocated numbers */\n\n\t\tfor (i = 0; i < num_resc; i++) {\n\t\t\tavailable[i] = resq->brq_avail[i];\n\t\t\tallocated[i] = resq->brq_alloc[i];\n\t\t\treserved[i] = resq->brq_resvd[i];\n\t\t\tdown[i] = resq->brq_down[i];\n\t\t}\n\t}\n\ndone:\n\tPBSD_FreeReply(reply);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\treturn (rc);\n}\n\n/**\n * @brief\n * \t-pbs_rescreserve() - reserve resources\n *\n * @param[in] c - communication handle\n * @param[in] rl - list of resources\n * @param[in] num_resc - number of 
items in list\n * @param[in] prh - ptr to resource reservation handle\n *\n * @return      int\n * @retval      0       success\n * @retval      !0      error\n *\n */\n\nint\npbs_rescreserve(int c, char **rl, int num_resc, pbs_resource_t *prh)\n{\n\tint rc;\n\tstruct batch_reply *reply;\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn pbs_errno;\n\n\t/* lock pthread mutex here for this connection */\n\t/* blocking call, waits for mutex release */\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\tif (rl == NULL) {\n\t\tif (set_conn_errno(c, PBSE_RMNOPARAM) != 0) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t} else {\n\t\t\tpbs_errno = PBSE_RMNOPARAM;\n\t\t}\n\t\t(void) pbs_client_thread_unlock_connection(c);\n\t\treturn pbs_errno;\n\t}\n\tif (prh == NULL) {\n\t\tif (set_conn_errno(c, PBSE_RMNOPARAM) != 0) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t} else {\n\t\t\tpbs_errno = PBSE_RMNOPARAM;\n\t\t}\n\t\t(void) pbs_client_thread_unlock_connection(c);\n\t\treturn pbs_errno;\n\t}\n\t/* send request */\n\n\tif ((rc = PBS_resc(c, PBS_BATCH_ReserveResc, rl, num_resc, *prh)) != 0) {\n\t\t(void) pbs_client_thread_unlock_connection(c);\n\t\treturn (rc);\n\t}\n\n\t/*\n\t * now get reply; if the reservation is successful, the reservation\n\t * handle, pbs_resource_t, is in the aux field\n\t */\n\n\treply = PBSD_rdrpy(c);\n\n\tif (((rc = get_conn_errno(c)) == PBSE_NONE) ||\n\t    (rc == PBSE_RMPART)) {\n\t\t*prh = reply->brp_auxcode;\n\t}\n\tPBSD_FreeReply(reply);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\treturn (rc);\n}\n\n/**\n * @brief\n * \t-pbs_rescrelease() - release a resource reservation\n *\n * @par Note:\n *\tTo encode we send the same info as for reserve except that the resource\n *\tlist is empty.\n *\n * @param[in] c - connection handle\n * @param[in] rh - resource 
handle\n *\n * @return      int\n * @retval      0       success\n * @retval      !0      error\n *\n */\n\nint\npbs_rescrelease(int c, pbs_resource_t rh)\n{\n\tstruct batch_reply *reply;\n\tint rc;\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn pbs_errno;\n\n\t/* lock pthread mutex here for this connection */\n\t/* blocking call, waits for mutex release */\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\tif ((rc = PBS_resc(c, PBS_BATCH_ReleaseResc, NULL, 0, rh)) != 0) {\n\t\t(void) pbs_client_thread_unlock_connection(c);\n\t\treturn (rc);\n\t}\n\n\t/* now get reply */\n\n\treply = PBSD_rdrpy(c);\n\n\tPBSD_FreeReply(reply);\n\n\trc = get_conn_errno(c);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\treturn (rc);\n}\n\n/*\n * The following routines are provided as a convenience in converting\n * older schedulers which did addreq() of \"totpool\", \"usepool\", and\n * \"avail\".\n *\n * The \"update\" flag, if non-zero, causes a new resource query to be sent\n * to the server.  
If zero, the existing numbers are used.\n */\n\n/**\n * @brief\n * \t-totpool() - return total number of nodes\n *\n * @param[in] con - connection handle\n * @param[in] update - flag indicating update or not\n *\n * @return      int\n * @retval      >=0     total number of nodes\n * @retval      -1      error\n *\n */\n\nint\ntotpool(int con, int update)\n{\n\tstruct pbs_client_thread_context *ptr;\n\tstruct node_pool *np;\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn -1;\n\n\tptr = (struct pbs_client_thread_context *)\n\t\tpbs_client_thread_get_context_data();\n\tif (!ptr) {\n\t\tpbs_errno = PBSE_INTERNAL;\n\t\treturn -1;\n\t}\n\n\tif (!ptr->th_node_pool) {\n\t\tnp = (struct node_pool *) malloc(sizeof(struct node_pool));\n\t\tif (!np) {\n\t\t\tpbs_errno = PBSE_INTERNAL;\n\t\t\treturn -1;\n\t\t}\n\t\tif ((np->resc_nodes = strdup(\"nodes\")) == NULL) {\n\t\t\tfree(np);\n\t\t\tnp = NULL;\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\treturn -1;\n\t\t}\n\t\t/* publish the pool only after it is fully initialized */\n\t\tptr->th_node_pool = (void *) np;\n\t} else\n\t\tnp = (struct node_pool *) ptr->th_node_pool;\n\n\tif (update) {\n\t\tif (pbs_rescquery(con, &np->resc_nodes, 1,\n\t\t\t\t  &np->nodes_avail,\n\t\t\t\t  &np->nodes_alloc,\n\t\t\t\t  &np->nodes_resrv,\n\t\t\t\t  &np->nodes_down) != 0) {\n\t\t\treturn (-1);\n\t\t}\n\t}\n\treturn (np->nodes_avail +\n\t\tnp->nodes_alloc +\n\t\tnp->nodes_resrv +\n\t\tnp->nodes_down);\n}\n\n/**\n * @brief\n * \t-usepool() - return number of nodes in use, includes reserved and down\n *\n * @param[in] con - connection handle\n * @param[in] update - flag indicating update or not\n *\n * @return      int\n * @retval      >=0     number of nodes in use\n * @retval      -1      error\n *\n */\n\nint\nusepool(int con, int update)\n{\n\tstruct pbs_client_thread_context *ptr;\n\tstruct node_pool *np;\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn 
-1;\n\n\tptr = (struct pbs_client_thread_context *)\n\t\tpbs_client_thread_get_context_data();\n\tif (!ptr) {\n\t\tpbs_errno = PBSE_INTERNAL;\n\t\treturn -1;\n\t}\n\tif (!ptr->th_node_pool) {\n\t\tnp = (struct node_pool *) malloc(sizeof(struct node_pool));\n\t\tif (!np) {\n\t\t\tpbs_errno = PBSE_INTERNAL;\n\t\t\treturn -1;\n\t\t}\n\t\tif ((np->resc_nodes = strdup(\"nodes\")) == NULL) {\n\t\t\tfree(np);\n\t\t\tnp = NULL;\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\treturn -1;\n\t\t}\n\t\t/* publish the pool only after it is fully initialized */\n\t\tptr->th_node_pool = (void *) np;\n\t} else\n\t\tnp = (struct node_pool *) ptr->th_node_pool;\n\n\tif (update) {\n\t\tif (pbs_rescquery(con, &np->resc_nodes, 1,\n\t\t\t\t  &np->nodes_avail,\n\t\t\t\t  &np->nodes_alloc,\n\t\t\t\t  &np->nodes_resrv,\n\t\t\t\t  &np->nodes_down) != 0) {\n\t\t\treturn (-1);\n\t\t}\n\t}\n\treturn (np->nodes_alloc +\n\t\tnp->nodes_resrv +\n\t\tnp->nodes_down);\n}\n\n/**\n * @brief\n * \t-avail() - return whether a specified node set is available\n *\n * @param[in] con - connection handle\n * @param[in] resc - resources\n *\n * @return\tstring\n * @retval\t\"yes\"\t\tif available (job could be run)\n * @retval\t\"no\"\t\tif not currently available\n * @retval\t\"never\"\t\tif it can never be satisfied\n * @retval\t\"?\"\t\tif error in request\n */\n\nchar *\navail(int con, char *resc)\n{\n\tint av;\n\tint al;\n\tint res;\n\tint dwn;\n\n\tif (pbs_rescquery(con, &resc, 1, &av, &al, &res, &dwn) != 0)\n\t\treturn (\"?\");\n\telse if (av > 0)\n\t\treturn (\"yes\");\n\telse if (av == 0)\n\t\treturn (\"no\");\n\telse\n\t\treturn (\"never\");\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_rlsjob.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_rlsjob.c\n * @brief\n * Release a hold on a job.\n * really just an instance of the \"manager\" request.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include \"libpbs.h\"\n\n/**\n * @brief\n *\t-Release a hold on a job.\n * \treally just an instance of the \"manager\" request.\n *\n * @param[in] c - connection handler\n * @param[in] jobid - job identifier\n * @param[in] holdtype - type of hold\n * @param[in] extend - string to encode req\n *\n * @return      int\n * @retval      0       success\n * @retval      !0      error\n *\n */\n\nint\n__pbs_rlsjob(int c, const char *jobid, const char *holdtype, const char *extend)\n{\n\tstruct attropl aopl;\n\n\tif ((jobid == NULL) || (*jobid == '\\0'))\n\t\treturn (pbs_errno = PBSE_IVALREQ);\n\n\taopl.name = ATTR_h;\n\taopl.resource = NULL;\n\tif ((holdtype == NULL) || (*holdtype == '\\0'))\n\t\taopl.value = \"u\";\n\telse\n\t\taopl.value = (char *) holdtype;\n\taopl.op = SET;\n\taopl.next = NULL;\n\treturn PBSD_manager(c, PBS_BATCH_ReleaseJob, MGR_CMD_SET, MGR_OBJ_JOB, jobid, &aopl, extend);\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_runjob.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_runjob.c\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include <stdio.h>\n#include \"libpbs.h\"\n#include \"dis.h\"\n#include \"pbs_ecl.h\"\n\n/**\n * @brief\tInner function for __pbs_runjob, __pbs_asynrunjob and __pbs_asynrunjob_ack\n *\n * @param[in] c - connection handle\n * @param[in] jobid- job identifier\n * @param[in] location - string of vnodes/resources to be allocated to the job\n * @param[in] extend - extend string for encoding req\n * @param[in] req_type - one of PBS_BATCH_RunJob, PBS_BATCH_AsyrunJob or PBS_BATCH_AsyrunJob_ack\n *\n * @return      int\n * @retval      0       success\n * @retval      !0      error\n */\nstatic int\n__runjob_inner(int c, const char *jobid, const char *location, const char *extend, int req_type)\n{\n\tint rc = 0;\n\tunsigned long resch = 0;\n\n\tif ((jobid == NULL) || (*jobid == '\\0'))\n\t\treturn (pbs_errno = PBSE_IVALREQ);\n\n\tif (location == NULL)\n\t\tlocation = \"\";\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn pbs_errno;\n\n\t/* lock pthread mutex here for this connection */\n\t/* blocking call, waits for mutex release */\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\t/* setup DIS support 
routines for following DIS calls */\n\n\tDIS_tcp_funcs();\n\n\t/* send run request */\n\n\tif ((rc = encode_DIS_ReqHdr(c, req_type, pbs_current_user)) ||\n\t    (rc = encode_DIS_Run(c, jobid, location, resch)) ||\n\t    (rc = encode_DIS_ReqExtend(c, extend))) {\n\t\tif (set_conn_errtxt(c, dis_emsg[rc]) != 0)\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\telse\n\t\t\tpbs_errno = PBSE_PROTOCOL;\n\n\t\tpbs_client_thread_unlock_connection(c);\n\t\treturn pbs_errno;\n\t}\n\n\tif (dis_flush(c)) {\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t\tpbs_client_thread_unlock_connection(c);\n\t\treturn pbs_errno;\n\t}\n\n\tif (req_type != PBS_BATCH_AsyrunJob) {\n\t\tstruct batch_reply *reply = NULL;\n\n\t\t/* Get reply */\n\t\treply = PBSD_rdrpy(c);\n\t\trc = get_conn_errno(c);\n\t\tPBSD_FreeReply(reply);\n\t}\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\t-send async run job batch request.\n *\n * @param[in] c - connection handle\n * @param[in] jobid- job identifier\n * @param[in] location - string of vnodes/resources to be allocated to the job\n * @param[in] extend - extend string for encoding req\n *\n * @return      int\n * @retval      0       success\n * @retval      !0      error\n *\n */\nint\n__pbs_asyrunjob(int c, const char *jobid, const char *location, const char *extend)\n{\n\treturn __runjob_inner(c, jobid, location, extend, PBS_BATCH_AsyrunJob);\n}\n\n/**\n * @brief\n *\t-send a run job batch request which waits for an ack from server\n *\tpbs_runjob() and pbs_asyrunjob_ack() are similar in the fact that they both wait for an ack back from the server,\n *\tbut this call is faster than pbs_runjob() because the server returns before contacting MoM\n *\n * @param[in] c - connection handle\n * @param[in] jobid- job identifier\n * @param[in] location - string of vnodes/resources to be allocated to the job\n * @param[in] extend - extend string for 
encoding req\n *\n * @return      int\n * @retval      0       success\n * @retval      !0      error\n *\n */\nint\n__pbs_asyrunjob_ack(int c, const char *jobid, const char *location, const char *extend)\n{\n\treturn __runjob_inner(c, jobid, location, extend, PBS_BATCH_AsyrunJob_ack);\n}\n\n/**\n * @brief\n *\t-send runjob batch request\n *\n * @param[in] c - communication handle\n * @param[in] jobid - job identifier\n * @param[in] location - location where job running\n * @param[in] extend - extend string to encode req\n *\n * @return      int\n * @retval      DIS_SUCCESS(0)  success\n * @retval      error code      error\n *\n */\n\nint\n__pbs_runjob(int c, const char *jobid, const char *location, const char *extend)\n{\n\treturn __runjob_inner(c, jobid, location, extend, PBS_BATCH_RunJob);\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_selectj.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbsD_selectj.c\n * @brief\n *\tThis file contains two main library entries:\n *\t\tpbs_selectjob()\n *\t\tpbs_selstat()\n *\n *\n *\tpbs_selectjob() - the SelectJob request\n *\t\tReturn a list of job ids that meet certain selection criteria.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include \"libpbs.h\"\n#include \"dis.h\"\n#include \"pbs_ecl.h\"\n\n/**\n * @brief\n *\t-the SelectJob request\n *\tReturn a list of job ids that meet certain selection criteria.\n *\n * @param[in] c - communication handle\n * @param[in] attrib - pointer to attropl structure (selection criteria)\n * @param[in] extend - extend string to encode req\n *\n * @return\tstring\n * @retval\tjob ids\t\tsuccess\n * @retval\tNULL\t\terror\n *\n */\nchar **\n__pbs_selectjob(int c, struct attropl *attrib, const char *extend)\n{\n\tchar **ret = NULL;\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn NULL;\n\n\t/* first verify the attributes, if verification is enabled */\n\tif (pbs_verify_attributes(c, PBS_BATCH_SelectJobs, MGR_OBJ_JOB,\n\t\t\t\t  MGR_CMD_NONE, attrib))\n\t\treturn NULL;\n\n\t/* lock pthread mutex here for this connection */\n\t/* blocking call, waits for 
mutex release */\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn NULL;\n\n\tif (PBSD_select_put(c, PBS_BATCH_SelectJobs, attrib, NULL, extend) == 0)\n\t\tret = PBSD_select_get(c);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0) {\n\t\tfree_string_array(ret);\n\t\treturn NULL;\n\t}\n\n\treturn ret;\n}\n\n/**\n * @brief\n * \t-pbs_selstat() - Selectable status\n *\tReturn status information for jobs that meet certain selection\n *\tcriteria.  This is a short-cut combination of pbs_selectjob()\n *\tand repeated pbs_statjob().\n *\n * @param[in] c - communication handle\n * @param[in] attrib - pointer to attropl structure (selection criteria)\n * @param[in] extend - extend string to encode req\n * @param[in] rattrib - list of attributes to return\n *\n * @return      structure handle\n * @retval      list of attr\tsuccess\n * @retval      NULL\t\terror\n *\n */\n\nstruct batch_status *\n__pbs_selstat(int c, struct attropl *attrib, struct attrl *rattrib, const char *extend)\n{\n\tstruct batch_status *ret = NULL;\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn NULL;\n\n\t/* first verify the attributes, if verification is enabled */\n\tif (pbs_verify_attributes(c, PBS_BATCH_SelectJobs, MGR_OBJ_JOB,\n\t\t\t\t  MGR_CMD_NONE, attrib))\n\t\treturn NULL;\n\n\t/* lock pthread mutex here for this connection */\n\t/* blocking call, waits for mutex release */\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn NULL;\n\n\tif (PBSD_select_put(c, PBS_BATCH_SelStat, attrib, rattrib, extend) == 0)\n\t\tret = PBSD_status_get(c);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0) {\n\t\tpbs_statfree(ret);\n\t\treturn NULL;\n\t}\n\n\treturn ret;\n}\n\n/**\n * @brief\n *\t-encode and put selectjob request data\n *\n * @param[in] c - communication handle\n * 
@param[in] type - type of request\n * @param[in] attrib - pointer to attropl structure(selection criteria)\n * @param[in] extend - extend string to encode req\n * @param[in] rattrib - list of attributes to return\n *\n * @return      int\n * @retval      0\tsuccess\n * @retval      !0\terror\n *\n */\nint\nPBSD_select_put(int c, int type, struct attropl *attrib,\n\t\tstruct attrl *rattrib, const char *extend)\n{\n\tint rc;\n\n\t/* setup DIS support routines for following DIS calls */\n\n\tDIS_tcp_funcs();\n\n\tif ((rc = encode_DIS_ReqHdr(c, type, pbs_current_user)) ||\n\t    (rc = encode_DIS_attropl(c, attrib)) ||\n\t    (rc = encode_DIS_attrl(c, rattrib)) ||\n\t    (rc = encode_DIS_ReqExtend(c, extend))) {\n\t\tif (set_conn_errtxt(c, dis_emsg[rc]) != 0) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t} else {\n\t\t\tpbs_errno = PBSE_PROTOCOL;\n\t\t}\n\t\treturn (pbs_errno);\n\t}\n\n\t/* write data */\n\n\tif (dis_flush(c)) {\n\t\treturn (pbs_errno = PBSE_PROTOCOL);\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t-reads selectjob reply from stream\n *\n * @param[in] c - communication handle\n *\n * @return\tstring list\n * @retval\tlist of strings\t\tsuccess\n * @retval\tNULL\t\t\terror\n *\n */\nchar **\nPBSD_select_get(int c)\n{\n\tint i;\n\tstruct batch_reply *reply;\n\tint njobs;\n\tstruct brp_select *sr;\n\tchar **retval = NULL;\n\n\t/* read reply from stream */\n\n\treply = PBSD_rdrpy(c);\n\tif (reply == NULL) {\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t} else if (reply->brp_choice != BATCH_REPLY_CHOICE_NULL &&\n\t\t   reply->brp_choice != BATCH_REPLY_CHOICE_Text &&\n\t\t   reply->brp_choice != BATCH_REPLY_CHOICE_Select) {\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t} else if (get_conn_errno(c) == 0) {\n\t\tnjobs = reply->brp_count;\n\t\tretval = (char **) malloc((njobs + 1) * sizeof(char *));\n\t\tif (retval == NULL) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\tPBSD_FreeReply(reply);\n\t\t\treturn NULL;\n\t\t}\n\t\tsr = reply->brp_un.brp_select;\n\t\tfor (i = 0; i < njobs; i++) 
{\n\t\t\tretval[i] = strdup(sr->brp_jobid);\n\t\t\tif (retval[i] == NULL) {\n\t\t\t\tfree_str_array(retval);\n\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\tPBSD_FreeReply(reply);\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tsr = sr->brp_next;\n\t\t}\n\t\tretval[i] = NULL;\n\t}\n\n\tPBSD_FreeReply(reply);\n\n\treturn retval;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_sigjob.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_sigjob.c\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include <stdio.h>\n#include \"libpbs.h\"\n#include \"pbs_ecl.h\"\n\n/**\n * @brief\n *\tsends and reads signal job batch request.\n *\n * @param[in] c - communication handle\n * @param[in] jobid - job identifier\n * @param[in] sig - signal\n * @param[in] extend - extend string for request\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t!0\terror\n *\n */\nint\n__pbs_sigjob(int c, const char *jobid, const char *sig, const char *extend)\n{\n\tint rc = 0;\n\tstruct batch_reply *reply;\n\n\tif ((jobid == NULL) || (*jobid == '\\0') || (sig == NULL))\n\t\treturn (pbs_errno = PBSE_IVALREQ);\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn pbs_errno;\n\n\t/* lock pthread mutex here for this connection */\n\t/* blocking call, waits for mutex release */\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\t/* send request */\n\tif ((rc = PBSD_sig_put(c, jobid, sig, extend, PROT_TCP, NULL)) != 0) {\n\t\tpbs_client_thread_unlock_connection(c);\n\t\treturn rc;\n\t}\n\n\t/* read reply */\n\treply = PBSD_rdrpy(c);\n\tPBSD_FreeReply(reply);\n\trc = get_conn_errno(c);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\treturn rc;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_stagein.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_stagein.c\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include <stdio.h>\n#include \"libpbs.h\"\n#include \"dis.h\"\n#include \"pbs_ecl.h\"\n\n/**\n * @brief\n *\t-sends stagein job batch request and get reply.\n *\n * @param[in] c - communication handle\n * @param[in] jobid - job identifier\n * @param[in] location - destination for job\n * @param[in] extend - extend string for request\n *\n * @return      int\n * @retval      0       success\n * @retval      !0      error\n *\n */\n\nint\npbs_stagein(int c, char *jobid, char *location, char *extend)\n{\n\tint rc;\n\tstruct batch_reply *reply;\n\n\tif ((jobid == NULL) || (*jobid == '\\0'))\n\t\treturn (pbs_errno = PBSE_IVALREQ);\n\tif (location == NULL)\n\t\tlocation = \"\";\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn pbs_errno;\n\n\t/* lock pthread mutex here for this connection */\n\t/* blocking call, waits for mutex release */\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\t/* setup DIS support routines for following DIS calls */\n\n\tDIS_tcp_funcs();\n\n\t/* send stagein request, a run request with a different id */\n\n\tif ((rc = encode_DIS_ReqHdr(c, PBS_BATCH_StageIn, pbs_current_user)) 
||\n\t    (rc = encode_DIS_Run(c, jobid, location, 0)) ||\n\t    (rc = encode_DIS_ReqExtend(c, extend))) {\n\t\tif (set_conn_errtxt(c, dis_emsg[rc]) != 0) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t} else {\n\t\t\tpbs_errno = PBSE_PROTOCOL;\n\t\t}\n\t\t(void) pbs_client_thread_unlock_connection(c);\n\t\treturn pbs_errno;\n\t}\n\n\tif (dis_flush(c)) {\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t\t(void) pbs_client_thread_unlock_connection(c);\n\t\treturn pbs_errno;\n\t}\n\n\t/* get reply */\n\n\treply = PBSD_rdrpy(c);\n\trc = get_conn_errno(c);\n\n\tPBSD_FreeReply(reply);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\treturn rc;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_stathook.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_stathook.c\n * @brief\n *\tReturn the status of a hook.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"libpbs.h\"\n#include \"pbs_ecl.h\"\n\n/**\n * @brief\n *\tReturn the status of a hook.\n *\n * @param[in] c - communication handle\n * @param[in] id - object name\n * @param[in] attrib - pointer to attrl structure(list)\n * @param[in] extend - extend string for req\n *\n * @return\tstructure handle\n * @retval\tpointer to attr list\tsuccess\n * @retval\tNULL\t\t\terror\n *\n */\nstruct batch_status *\n__pbs_stathook(int c, const char *id, struct attrl *attrib, const char *extend)\n{\n\tstruct batch_status *ret = NULL;\n\tint rc;\n\tint hook_obj;\n\n\tif (extend != NULL) {\n\t\tif (strcmp(extend, PBS_HOOK) == 0) {\n\t\t\thook_obj = MGR_OBJ_PBS_HOOK;\n\t\t} else if (strcmp(extend, SITE_HOOK) == 0) {\n\t\t\thook_obj = MGR_OBJ_SITE_HOOK;\n\t\t} else {\n\t\t\treturn NULL; /* bad extend value */\n\t\t}\n\t} else {\n\t\thook_obj = MGR_OBJ_SITE_HOOK;\n\t}\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn NULL;\n\n\t/* first verify the attributes, if verification is enabled */\n\trc = pbs_verify_attributes(c, PBS_BATCH_StatusHook, hook_obj, MGR_CMD_NONE,\n\t\t\t\t   (struct attropl *) 
attrib);\n\tif (rc)\n\t\treturn NULL;\n\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn NULL;\n\n\tret = PBSD_status(c, PBS_BATCH_StatusHook, id, attrib, extend);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn NULL;\n\n\treturn ret;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_stathost.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_stathost.c\n * @brief\n * Return the combined status of the vnodes on a host or all hosts.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"libpbs.h\"\n#include \"attribute.h\"\n\n/* data structure and local data items private to this file */\n\n/* This structure is used to determine the set of separate \"hosts\"   */\nstruct host_list {\n\tchar hl_name[PBS_MAXHOSTNAME + 1]; /* host value */\n\tstruct batch_status *hl_node;\t   /* vnode entry, or NULL if host has multiple vnodes */\n};\n\n/* This next structure is used to track and sum consumable resources */\n\nstruct consumable {\n\tchar *cons_resource;\n\tchar *cons_avail_str;\t  /* value in r_available if not consumable  */\n\tlong long cons_avail_sum; /* sum of values in r_available if consumable */\n\tlong long cons_assn_sum;  /* sum of values in r_assigned  if consumable */\n\tshort cons_k;\t\t  /* set if \"size\" type (kb) */\n\tshort cons_consum;\t  /* set if resource is consumable */\n\tshort cons_set;\t\t  /* set if resource has a value */\n};\n\n/*\n * get_resource_value - for the named resource in the indicated attribute,\n *\tresources_assigned or resources_available, return the value of the\n *\tresource (as a string).   NULL is returned if the resource isn't there\n */\n\nstatic char *\nget_resource_value(char *attrn, char *rname, struct attrl *pal)\n{\n\twhile (pal) {\n\t\tif ((strcasecmp(pal->name, attrn) == 0) &&\n\t\t    (strcasecmp(pal->resource, rname) == 0))\n\t\t\treturn pal->value;\n\t\tpal = pal->next;\n\t}\n\treturn NULL;\n}\n\n/**\n * @brief\n * add_consumable_entry - add an entry for a resource into the consumable array\n *\n * @par If the resource is found in resources_assigned, it is considered\n *\tto be \"consumable\" and the various values are added together;\n *\tthe resource is flagged as consumable.\n *\tIf the resource is already in the table, the consumable flag is updated\n *\n * @par If the resource is not consumable, the value string from the attrib\n *\tis used according to the following rules:\n *\t - If cons_avail_str is null, use the attrib value\n *         else if cons_avail_str == attrib value, no change\n *         else replace cons_avail_str with \"<various>\"\n *\n * @param[in]\tpatl\tPointer to element in attribute list\n * @param[in]\tconsum_flag\tWhether the resource is consumable\n * @param[in]\tconsum\tThe array of consumables\n * @param[in]\tconsumable_size\tNumber of elements in consum array\n *\n * @return\tvoid\n */\nstatic void\nadd_consumable_entry(struct attrl *patl,\n\t\t     int consum_flag,\n\t\t     struct consumable **consum,\n\t\t     int *consumable_size)\n{\n\tint i;\n\tstruct consumable *newc;\n\n\tif (!patl || !consum || !consumable_size)\n\t\treturn;\n\n\t/* Ignore indirect resources (those that contain '@') */\n\tif (strchr(patl->value, '@') != NULL)\n\t\treturn;\n\n\tfor (i = 0; i < *consumable_size; ++i) {\n\t\tif (((*consum) + i)->cons_resource == NULL)\n\t\t\tcontinue;\n\t\tif (strcasecmp(patl->resource, ((*consum) + i)->cons_resource) == 0) {\n\t\t\t((*consum) + i)->cons_consum |= consum_flag;\n\t\t\tbreak;\n\t\t}\n\t}\n\tif (i == *consumable_size) {\n\t\t/* need to add this resource */\n\t\tnewc = 
realloc(*consum, sizeof(struct consumable) * (*consumable_size + 1));\n\t\tif (newc) {\n\t\t\t*consum = newc;\n\t\t\t(*consumable_size)++;\n\t\t\tif ((((*consum) + i)->cons_resource = strdup(patl->resource)) == NULL) {\n\t\t\t\tfree(newc);\n\t\t\t\t(*consumable_size)--;\n\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\treturn;\n\t\t\t}\n\t\t\t((*consum) + i)->cons_avail_str = NULL;\n\t\t\t((*consum) + i)->cons_avail_sum = 0;\n\t\t\t((*consum) + i)->cons_assn_sum = 0;\n\t\t\t((*consum) + i)->cons_k = 0;\n\t\t\t((*consum) + i)->cons_consum = consum_flag;\n\t\t\t((*consum) + i)->cons_set = 0;\n\t\t} else {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\treturn; /* realloc failed, just return */\n\t\t}\n\t}\n\t/* Set cons_k for size values */\n\tif (strpbrk(patl->value, \"kKmMgGtTpPbBwW\") != NULL)\n\t\t((*consum) + i)->cons_k = 1;\n}\n\n/**\n * @brief\n * \tbuild_host_list - performs two functions while running through the vnodes\n *\t1. builds a list of the various host names found in\n *\t   resource_available.host\n *\t2. 
determines which resources are in resources_assigned to know\n *\t   which are consumable (and should be summed together).\n */\n\nstatic void\nbuild_host_list(struct batch_status *pbst,\n\t\tstruct host_list **phost_list,\n\t\tint *host_list_size,\n\t\tstruct consumable **consum,\n\t\tint *consumable_size)\n{\n\tint i;\n\tchar *hostn;\n\tstruct host_list *newhl;\n\tstruct attrl *patl;\n\n\t/* clear existing entries for reuse */\n\n\tfor (i = 0; i < *host_list_size; ++i) {\n\t\t((*phost_list) + i)->hl_name[0] = '\\0';\n\t\t((*phost_list) + i)->hl_node = NULL;\n\t}\n\n\twhile (pbst) {\n\n\t\t/* if need be, add host_list entry for this host */\n\t\thostn = get_resource_value(ATTR_rescavail, \"host\", pbst->attribs);\n\t\tif (hostn) {\n\t\t\tfor (i = 0; i < *host_list_size; ++i) {\n\t\t\t\tif (strcasecmp(hostn, ((*phost_list) + i)->hl_name) == 0)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\tif (i == *host_list_size) {\n\t\t\t\t/* need to add slot for this host */\n\t\t\t\tnewhl = (struct host_list *) realloc((*phost_list),\n\t\t\t\t\t\t\t\t     sizeof(struct host_list) * (*host_list_size + 1));\n\t\t\t\tif (newhl) {\n\t\t\t\t\t*phost_list = newhl;\n\t\t\t\t\t(*host_list_size)++;\n\t\t\t\t} else {\n\t\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\t\treturn; /* memory allocation failure */\n\t\t\t\t}\n\t\t\t\tstrcpy(((*phost_list) + i)->hl_name, hostn);\n\t\t\t\t((*phost_list) + i)->hl_node = pbst;\n\n\t\t\t} else {\n\t\t\t\t((*phost_list) + i)->hl_node = NULL; /* null for multiple */\n\t\t\t}\n\t\t}\n\n\t\t/* now look to see what resources are in \"resources_assigned\" */\n\n\t\tpatl = pbst->attribs;\n\t\twhile (patl) {\n\t\t\tif (strcmp(patl->name, ATTR_rescavail) == 0)\n\t\t\t\tadd_consumable_entry(patl, 0, consum, consumable_size);\n\t\t\telse if (strcmp(patl->name, ATTR_rescassn) == 0)\n\t\t\t\tadd_consumable_entry(patl, 1, consum, consumable_size);\n\t\t\tpatl = patl->next;\n\t\t}\n\n\t\tpbst = pbst->next;\n\t}\n\n\treturn;\n}\n\n/**\n * @brief\n * sum_a_resource - add the value 
of the specified consumable resource\n *\tinto the \"consumable\" structure entry for that resource.\n *\t\"sized\" valued resources are adjusted to be in \"kb\".\n *\n * @param[in]\tpsum\tPointer to resource\n * @param[in]\tavail\tAvailable or assigned list flag\n * @param[in]\tvalue\tValue of resource as a string\n * @param[in]\tvarious\tLabel to use for multiple values\n *\n * @return\tvoid\n */\n\nstatic void\nsum_a_resource(struct consumable *psum, int avail, char *value,\n\t       char *various)\n{\n\tlong long amt;\n\tchar *pc;\n\n\tif (!psum || !value || !various)\n\t\treturn;\n\n\tif (psum->cons_consum == 0) {\n\t\t/* not a consumable resource */\n\t\tif (avail == 0)\n\t\t\treturn; /* this shouldn't happen, but no sweat */\n\t\tif (psum->cons_avail_str == NULL) {\n\t\t\tif ((psum->cons_avail_str = strdup(value)) == NULL) {\n\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else if (strcasecmp(psum->cons_avail_str, value) != 0) {\n\t\t\tif (psum->cons_avail_str)\n\t\t\t\tfree(psum->cons_avail_str);\n\t\t\tif ((psum->cons_avail_str = strdup(various)) == NULL) {\n\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t\tpsum->cons_set = 1;\n\t\treturn;\n\t}\n\n\t/* Ignore indirect resources (those that contain '@') */\n\tif (strchr(value, '@') != NULL)\n\t\treturn;\n\n\t/* the resource is consumable so we need to add it to the current val */\n\n\tamt = strtol(value, &pc, 10);\n\n\tif (strpbrk(pc, \"kKmMgGtTpPbBwW\") != NULL) {\n\n\t\t/* adjust value to Kilobytes */\n\t\tif ((*pc == 'm') || (*pc == 'M'))\n\t\t\tamt = amt << 10;\n\t\telse if ((*pc == 'g') || (*pc == 'G'))\n\t\t\tamt = amt << 20;\n\t\telse if ((*pc == 't') || (*pc == 'T'))\n\t\t\tamt = amt << 30;\n\t\telse if ((*pc == 'p') || (*pc == 'P'))\n\t\t\tamt = amt << 40;\n\t\telse if ((*pc != 'k') && (*pc != 'K'))\n\t\t\tamt = amt >> 10;\n\n\t\t/* does the current sum need to be adjusted */\n\t\tif (psum->cons_k == 0) {\n\t\t\tpsum->cons_avail_sum = psum->cons_avail_sum << 
10;\n\t\t\tpsum->cons_assn_sum = psum->cons_assn_sum << 10;\n\t\t\tpsum->cons_k = 1;\n\t\t}\n\t}\n\n\tif (avail)\n\t\tpsum->cons_avail_sum += amt;\n\telse\n\t\tpsum->cons_assn_sum += amt;\n\n\tpsum->cons_set = 1;\n}\n\n/**\n * @brief\n * \tsum_resources - for each resource found in the collection of vnodes that\n *\thave the same \"host\", the resources in resources_available and\n *\tresources_assigned are summed.\n */\nstatic void\nsum_resources(struct batch_status *pbs,\n\t      struct batch_status *working,\n\t      const char *hostn,\n\t      struct consumable *consum,\n\t      int consumable_size,\n\t      char *various)\n{\n\tchar *curhn;\n\tint i;\n\tchar *val;\n\n\t/* clear sums */\n\tfor (i = 0; i < consumable_size; ++i) {\n\t\tif ((consum + i)->cons_avail_str) {\n\t\t\tfree((consum + i)->cons_avail_str);\n\t\t\t(consum + i)->cons_avail_str = NULL;\n\t\t}\n\t\t(consum + i)->cons_avail_sum = 0;\n\t\t(consum + i)->cons_assn_sum = 0;\n\t}\n\n\twhile (pbs) {\n\n\t\tcurhn = get_resource_value(ATTR_rescavail, \"host\", pbs->attribs);\n\t\tif (curhn && strcasecmp(curhn, hostn) == 0) {\n\t\t\tfor (i = 0; i < consumable_size; ++i) {\n\n\t\t\t\tval = get_resource_value(ATTR_rescavail, (consum + i)->cons_resource, pbs->attribs);\n\t\t\t\tsum_a_resource(consum + i, 1, val, various);\n\n\t\t\t\tval = get_resource_value(ATTR_rescassn, (consum + i)->cons_resource, pbs->attribs);\n\t\t\t\tsum_a_resource(consum + i, 0, val, various);\n\t\t\t}\n\t\t}\n\n\t\tpbs = pbs->next;\n\t}\n}\n\n/*\n * attr_names array - this array contains the possible names of vnode/host\n *\tattributes to create for a \"host\" batch_status reply\n *\tThe values to be returned in the attrl list for the host are constructed\n *\tin the an_value element and then converted into the attrls by the\n *\tbuild_collective() function.\n */\n\n#ifdef NAS /* localmod 012 */\n#define SKIP_FIRST 1\n#define SKIP_REST 2\n#define CATENATE 4\n#define UNIQUE 8\n#define SKIP_ALL (SKIP_FIRST | SKIP_REST)\n\nstatic struct 
attr_names {\n\tchar *an_name;\n\tint an_type;\n\tint an_rest;\n\tchar *an_value;\n} attr_names[] = {\n\t{ATTR_NODE_Mom, UNIQUE, 0, NULL},\n\t{ATTR_NODE_Port, 0, 0, NULL},\n\t{ATTR_version, 0, 0, NULL},\n\t{ATTR_NODE_ntype, 0, 0, NULL},\n\t{ATTR_NODE_state, UNIQUE, 0, NULL},\n\t{ATTR_NODE_pcpus, SKIP_REST, 0, NULL},\n\t{ATTR_p, 0, 0, NULL},\n\t{ATTR_NODE_jobs, CATENATE | SKIP_FIRST, 0, NULL},\n\t{ATTR_maxrun, 0, 0, NULL},\n\t{ATTR_maxuserrun, 0, 0, NULL},\n\t{ATTR_maxgrprun, 0, 0, NULL},\n\t{ATTR_NODE_No_Tasks, SKIP_REST, 0, NULL},\n\t{ATTR_PNames, 0, 0, NULL},\n\t{ATTR_NODE_resvs, UNIQUE, 0, NULL},\n\t{ATTR_queue, UNIQUE, 0, NULL},\n\t{ATTR_comment, UNIQUE, 0, NULL},\n\t{ATTR_NODE_resv_enable, 0, 0, NULL},\n\t{ATTR_NODE_NoMultiNode, 0, 0, NULL},\n\t{ATTR_NODE_Sharing, UNIQUE, 0, NULL},\n\t{ATTR_NODE_ProvisionEnable, 0, 0, NULL},\n\t{ATTR_NODE_current_aoe, 0, 0, NULL},\n\t{ATTR_NODE_License, 0, 0, NULL},\n\t{ATTR_NODE_LicenseInfo, 0, 0, NULL},\n\t{ATTR_NODE_TopologyInfo, 0, 0, NULL},\n\t{ATTR_NODE_VnodePool, 0, 0, NULL},\n\t{ATTR_NODE_power_provisioning, 0, 0, NULL},\n\t{ATTR_NODE_current_eoe, 0, 0, NULL},\n\t{ATTR_NODE_poweroff_eligible, 0, 0, NULL},\n\t{ATTR_NODE_last_state_change_time, 0, 0, NULL},\n\t{ATTR_NODE_last_used_time, 0, 0, NULL},\n\t{ATTR_rescavail, SKIP_ALL, 0, NULL},\n\t{ATTR_rescassn, SKIP_ALL, 0, NULL},\n\t{NULL, 0, 0, NULL}};\n\n/**\n * @brief\n * \tbuild_collective - for each vnode in the original batch_status, pbs,\n *\tapply the following rules to build \"host\" attributes in newbsr entry\n *\t\"the array\" refers to the array attr_names[].an_value --\n *\t1. if \"resources_assigned\" or \"resources_available\", skip for now\n *\t2. else if that attribute in the array has a null value, dup the value\n *\t\t2.5. but record the null if UNIQUE form of CATENATE\n *\t3. else if CATENATE attribute, append string to that in the array\n *\t\t3.5. possibly suppress duplicates\n *\t4. 
else if pbs attribute and array value are different, set the array\n *\t\tto \"<various>\" if not already so set\n *\t5. Then add in the resources_available/assigned from the consum struct\n */\n\nstatic void\nbuild_collective(struct batch_status *pbs,\n\t\t struct batch_status *newbsr,\n\t\t char *hostn,\n\t\t struct consumable *consum,\n\t\t int consumable_size,\n\t\t char *various)\n{\n\tchar convtbuf[256];\n\tstruct attrl *cupatl;\n\tstruct attrl *hdpatl;\n\tstruct attrl *nwpatl;\n\tchar *curhn;\n\tchar *dup;\n\tchar *sp;\n\tint i;\n\tsize_t len;\n\n\tfor (i = 0; attr_names[i].an_name != NULL; ++i) {\n\t\tattr_names[i].an_rest = 0;\n\t\tattr_names[i].an_value = NULL;\n\t}\n\n\tfor (; pbs; pbs = pbs->next) {\n\t\tif (pbs->attribs == NULL)\n\t\t\tcontinue; /* move out because it was a single host */\n\t\tcurhn = get_resource_value(ATTR_rescavail, \"host\", pbs->attribs);\n\t\tif (curhn == NULL)\n\t\t\tcontinue;\n\t\tif (strcasecmp(hostn, curhn) != 0)\n\t\t\tcontinue;\n\t\tfor (cupatl = pbs->attribs; cupatl; cupatl = cupatl->next) {\n\t\t\tfor (i = 0; attr_names[i].an_name != NULL; ++i) {\n\t\t\t\tint rest, type;\n\t\t\t\tif (strcmp(attr_names[i].an_name, cupatl->name) != 0)\n\t\t\t\t\tcontinue;\n\t\t\t\ttype = attr_names[i].an_type;\n\t\t\t\trest = attr_names[i].an_rest;\n\t\t\t\tattr_names[i].an_rest = 1;\n\t\t\t\tif (rest == 0 && (type & SKIP_FIRST))\n\t\t\t\t\tbreak;\n\t\t\t\tif (rest && (type & SKIP_REST))\n\t\t\t\t\tbreak;\n\t\t\t\tif (!(type & UNIQUE) && attr_names[i].an_value == NULL) {\n\t\t\t\t\t/* rule 2. 
- replace null with this string */\n\t\t\t\t\tattr_names[i].an_value = strdup(cupatl->value);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tif (type & UNIQUE) {\n\t\t\t\t\tif (attr_names[i].an_value == NULL) {\n\t\t\t\t\t\tif (rest == 0 || (type & SKIP_FIRST)) {\n\t\t\t\t\t\t\tattr_names[i].an_value = strdup(cupatl->value);\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tattr_names[i].an_value = strdup(\"<null>\");\n\t\t\t\t\t}\n\t\t\t\t\t/* rule 3.5 suppress duplicates */\n\t\t\t\t\tlen = strlen(cupatl->value);\n\t\t\t\t\tfor (sp = attr_names[i].an_value; sp && *sp; ++sp) {\n\t\t\t\t\t\tsp = strstr(sp, cupatl->value);\n\t\t\t\t\t\tif (!sp)\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\tif (sp != attr_names[i].an_value && sp[-1] != ' ') {\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (sp[len] == ',' || sp[len] == '\\0')\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\tif (sp && *sp)\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t/*\n\t\t\t\t\t * Fall through to CATENATE\n\t\t\t\t\t */\n\t\t\t\t\ttype |= CATENATE;\n\t\t\t\t}\n\t\t\t\tif (type & CATENATE) {\n\t\t\t\t\t/* rule 3. - concatenate values */\n\t\t\t\t\tlen = strlen(attr_names[i].an_value) +\n\t\t\t\t\t      strlen(cupatl->value) + 3;\n\t\t\t\t\t/* 3 for comma, space, and null char */\n\t\t\t\t\tdup = malloc(len);\n\t\t\t\t\tif (dup) {\n\t\t\t\t\t\tstrcpy(dup, attr_names[i].an_value);\n\t\t\t\t\t\tstrcat(dup, \", \");\n\t\t\t\t\t\tstrcat(dup, cupatl->value);\n\t\t\t\t\t\tfree(attr_names[i].an_value);\n\t\t\t\t\t\tattr_names[i].an_value = dup;\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tif ((strcmp(attr_names[i].an_value, various) != 0) && (strcmp(attr_names[i].an_value, cupatl->value) != 0)) {\n\t\t\t\t\t/* rule 4. 
- differing values = \"<various>\" */\n\t\t\t\t\tfree(attr_names[i].an_value);\n\t\t\t\t\tattr_names[i].an_value = strdup(various);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n#else\nstatic struct attr_names {\n\tchar *an_name;\n\tchar *an_value;\n} attr_names[] = {\n\t{ATTR_NODE_Mom, NULL},\n\t{ATTR_NODE_Port, NULL},\n\t{ATTR_version, NULL},\n\t{ATTR_NODE_ntype, NULL},\n\t{ATTR_NODE_state, NULL},\n\t{ATTR_NODE_pcpus, NULL},\n\t{ATTR_p, NULL},\n\t{ATTR_NODE_jobs, NULL},\n\t{ATTR_maxrun, NULL},\n\t{ATTR_maxuserrun, NULL},\n\t{ATTR_maxgrprun, NULL},\n\t{ATTR_NODE_No_Tasks, NULL},\n\t{ATTR_PNames, NULL},\n\t{ATTR_NODE_resvs, NULL},\n\t{ATTR_queue, NULL},\n\t{ATTR_comment, NULL},\n\t{ATTR_NODE_resv_enable, NULL},\n\t{ATTR_NODE_NoMultiNode, NULL},\n\t{ATTR_NODE_Sharing, NULL},\n\t{ATTR_NODE_ProvisionEnable, NULL},\n\t{ATTR_NODE_current_aoe, NULL},\n\t{ATTR_NODE_License, NULL},\n\t{ATTR_NODE_LicenseInfo, NULL},\n\t{ATTR_NODE_TopologyInfo, NULL},\n\t{ATTR_NODE_VnodePool, NULL},\n\t{ATTR_NODE_power_provisioning, NULL},\n\t{ATTR_NODE_current_eoe, NULL},\n\t{ATTR_NODE_poweroff_eligible, NULL},\n\t{ATTR_NODE_last_state_change_time, NULL},\n\t{ATTR_NODE_last_used_time, NULL},\n\t{NULL, NULL}};\n\n/**\n * @brief\n * \tbuild_collective - for each vnode in the original batch_status, pbs,\n *\tapply the following rules to build \"host\" attributes in newbsr entry\n *\t\"the array\" refers to the array attr_names[].an_value --\n *\t1. if \"resources_assigned\" or \"resources_available\", skip for now\n *\t2. else if that attribute in the array has a null value, dup the value\n *\t3. else if \"jobs\" attribute, append string to that in the array\n *\t4. else if pbs attribute and array value are different, set the array\n *\t\tto \"<various>\" if not already so set\n *\t5. 
Then add in the resources_available/assigned from the consum struct\n */\n\nstatic void\nbuild_collective(struct batch_status *pbs,\n\t\t struct batch_status *newbsr,\n\t\t const char *hostn,\n\t\t struct consumable *consum,\n\t\t int consumable_size,\n\t\t char *various)\n{\n\tchar convtbuf[256];\n\tstruct attrl *cupatl;\n\tstruct attrl *hdpatl;\n\tstruct attrl *nwpatl;\n\tchar *curhn;\n\tchar *dup;\n\tint i;\n\tsize_t len;\n\n\tfor (i = 0; attr_names[i].an_name != NULL; ++i)\n\t\tattr_names[i].an_value = NULL;\n\n\tfor (; pbs; pbs = pbs->next) {\n\t\tif (pbs->attribs == NULL)\n\t\t\tcontinue; /* move out because it was a single host */\n\t\tcurhn = get_resource_value(ATTR_rescavail, \"host\", pbs->attribs);\n\t\tif (curhn == NULL)\n\t\t\tcontinue;\n\t\tif (strcasecmp(hostn, curhn) == 0) {\n\t\t\tfor (cupatl = pbs->attribs; cupatl; cupatl = cupatl->next) {\n\t\t\t\tif ((strcmp(cupatl->name, ATTR_rescavail) == 0) ||\n\t\t\t\t    (strcmp(cupatl->name, ATTR_rescassn) == 0))\n\t\t\t\t\tcontinue; /* rule 1. */\n\n\t\t\t\tfor (i = 0; attr_names[i].an_name != NULL; ++i) {\n\t\t\t\t\tif (strcmp(attr_names[i].an_name, cupatl->name) == 0) {\n\n\t\t\t\t\t\tif (attr_names[i].an_value == NULL) {\n\t\t\t\t\t\t\t/* rule 2. - replace null with this string */\n\t\t\t\t\t\t\tif ((attr_names[i].an_value = strdup(cupatl->value)) == NULL) {\n\t\t\t\t\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\t\t\t\t\treturn;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t} else if (strcmp(cupatl->name, ATTR_NODE_jobs) == 0) {\n\t\t\t\t\t\t\t/* rule 3. 
- concatenate jobs */\n\t\t\t\t\t\t\tlen = strlen(attr_names[i].an_value) +\n\t\t\t\t\t\t\t      strlen(cupatl->value) + 3;\n\t\t\t\t\t\t\t/* 3 for comma, space, and null char */\n\t\t\t\t\t\t\tdup = malloc(len);\n\t\t\t\t\t\t\tif (dup) {\n\t\t\t\t\t\t\t\tstrcpy(dup, attr_names[i].an_value);\n\t\t\t\t\t\t\t\tstrcat(dup, \", \");\n\t\t\t\t\t\t\t\tstrcat(dup, cupatl->value);\n\t\t\t\t\t\t\t\tfree(attr_names[i].an_value);\n\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\t\t\t\t\treturn;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tattr_names[i].an_value = dup;\n\t\t\t\t\t\t} else if ((strcmp(attr_names[i].an_value, various) != 0) && (strcmp(attr_names[i].an_value, cupatl->value) != 0)) {\n\t\t\t\t\t\t\t/* rule 4. - differing values = \"<various>\" */\n\t\t\t\t\t\t\tfree(attr_names[i].an_value);\n\t\t\t\t\t\t\tif ((attr_names[i].an_value = strdup(various)) == NULL) {\n\t\t\t\t\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\t\t\t\t\treturn;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n#endif /* localmod 012 */\n\n\t/* Turn the values saved in attr_names into attrl entries */\n\t/* any attr_names[].an_value with a null value is ignored */\n\n\thdpatl = NULL;\n\n\tfor (i = 0; attr_names[i].an_name; ++i) {\n\t\tif (attr_names[i].an_value) {\n\t\t\tnwpatl = new_attrl();\n\t\t\tif (nwpatl) {\n\t\t\t\tif (hdpatl == NULL)\n\t\t\t\t\thdpatl = nwpatl; /* head of list */\n\t\t\t\telse\n\t\t\t\t\tcupatl->next = nwpatl;\n\t\t\t\tif ((nwpatl->name = strdup(attr_names[i].an_name)) == NULL) {\n\t\t\t\t\tfree_attrl_list(hdpatl);\n\t\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tnwpatl->value = attr_names[i].an_value;\n\t\t\t\t/* note, the above is not dupped */\n\t\t\t\tattr_names[i].an_value = NULL;\n\t\t\t\tcupatl = nwpatl;\n\t\t\t} else {\n\t\t\t\tfree_attrl_list(hdpatl);\n\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t}\n\n\t/* then  apply rule 5 - add in resources_available/assigned */\n\t/* takes 
two passes, first for resources_available and      */\n\t/* then for resources_assigned\t\t\t\t    */\n\n\tfor (i = 0; i < consumable_size; ++i) {\n\t\tif ((consum + i)->cons_set == 0)\n\t\t\tcontinue;\n\t\tif ((consum + i)->cons_consum) {\n\t\t\tsprintf(convtbuf, \"%lld\", (consum + i)->cons_avail_sum);\n\t\t\tif ((consum + i)->cons_k)\n\t\t\t\tstrcat(convtbuf, \"kb\");\n\t\t\tdup = convtbuf;\n\t\t} else {\n\t\t\tdup = (consum + i)->cons_avail_str;\n\t\t}\n\t\tif (dup != NULL) {\n\t\t\tnwpatl = new_attrl();\n\n\t\t\tif (nwpatl) {\n\t\t\t\tif (hdpatl == NULL)\n\t\t\t\t\thdpatl = nwpatl;\n\t\t\t\telse\n\t\t\t\t\tcupatl->next = nwpatl;\n\t\t\t\tnwpatl->next = NULL;\n\t\t\t\tif ((nwpatl->name = strdup(ATTR_rescavail)) == NULL) {\n\t\t\t\t\tfree_attrl_list(hdpatl);\n\t\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tif ((nwpatl->resource = strdup((consum + i)->cons_resource)) == NULL) {\n\t\t\t\t\tfree_attrl_list(hdpatl);\n\t\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tif ((nwpatl->value = strdup(dup)) == NULL) {\n\t\t\t\t\tfree_attrl_list(hdpatl);\n\t\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tcupatl = nwpatl;\n\t\t\t} else {\n\t\t\t\tfree_attrl_list(hdpatl);\n\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t}\n\n\t/* now do the resources_assigned */\n\tfor (i = 0; i < consumable_size; ++i) {\n\t\tif ((consum + i)->cons_set == 0)\n\t\t\tcontinue;\n\t\tif ((consum + i)->cons_consum) {\n\t\t\tsprintf(convtbuf, \"%lld\", (consum + i)->cons_assn_sum);\n\t\t\tif ((consum + i)->cons_k)\n\t\t\t\tstrcat(convtbuf, \"kb\");\n\t\t\tnwpatl = new_attrl();\n\t\t\tif (nwpatl) {\n\t\t\t\tif (hdpatl == NULL)\n\t\t\t\t\thdpatl = nwpatl;\n\t\t\t\telse\n\t\t\t\t\tcupatl->next = nwpatl;\n\t\t\t\tif ((nwpatl->name = strdup(ATTR_rescassn)) == NULL) {\n\t\t\t\t\tfree_attrl_list(hdpatl);\n\t\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tif ((nwpatl->resource = 
strdup((consum + i)->cons_resource)) == NULL) {\n\t\t\t\t\tfree_attrl_list(hdpatl);\n\t\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tif ((nwpatl->value = strdup(convtbuf)) == NULL) {\n\t\t\t\t\tfree_attrl_list(hdpatl);\n\t\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tcupatl = nwpatl;\n\t\t\t} else {\n\t\t\t\tfree_attrl_list(hdpatl);\n\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t}\n\n\t/* NOTE: DO NOT free the attr_names[].an_value strings   */\n\t/* They are in use in the attrls sent back to the caller */\n\n\tnewbsr->attribs = hdpatl;\n}\n\n/**\n * @brief\n * \tbuild_return_status - build a new batch_status list for the reply\n *\tor append to the existing list which is passed in.\n */\nstruct batch_status *\nbuild_return_status(struct batch_status *bstatus,\n\t\t    const char *hname,\n\t\t    struct batch_status *curlist,\n\t\t    struct host_list *phost_list,\n\t\t    int host_list_size,\n\t\t    struct consumable *consum,\n\t\t    int consumable_size,\n\t\t    char *various)\n{\n\tstruct batch_status *cpbs;\n\tstruct batch_status *npbs;\n\tint i;\n\n\tnpbs = malloc(sizeof(struct batch_status));\n\tif (npbs == NULL) {\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn NULL;\n\t}\n\n\tnpbs->next = NULL;\n\tnpbs->text = NULL;\n\n\t/* is the host in question a single or multi-vnode host */\n\n\tfor (i = 0; i < host_list_size; ++i) {\n\t\tif (strcasecmp(hname, (phost_list + i)->hl_name) == 0) {\n\t\t\tif ((phost_list + i)->hl_node != NULL) {\n\n\t\t\t\t/* single vnode host - use the real one */\n\n\t\t\t\t/* prevent double free: copy name, move attribs */\n\t\t\t\tif ((npbs->name = strdup((phost_list + i)->hl_node->name)) == NULL) {\n\t\t\t\t\tfree(npbs);\n\t\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n\t\t\t\tnpbs->attribs = (phost_list + i)->hl_node->attribs;\n\t\t\t\t(phost_list + i)->hl_node->attribs = NULL;\n\t\t\t\tif ((phost_list + i)->hl_node->text)\n\t\t\t\t\tif 
((npbs->text = strdup((phost_list + i)->hl_node->text)) == NULL) {\n\t\t\t\t\t\tfree(npbs->name);\n\t\t\t\t\t\tfree(npbs);\n\t\t\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\t\t\treturn NULL;\n\t\t\t\t\t}\n\n\t\t\t} else {\n\n\t\t\t\t/* multi-vnoded host, build attrls from collection */\n\t\t\t\t/* of the attrls of all the vnodes on the host     */\n\n\t\t\t\tif ((npbs->name = strdup(hname)) == NULL) {\n\t\t\t\t\tfree(npbs);\n\t\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n\t\t\t\tnpbs->attribs = NULL;\n\n\t\t\t\tsum_resources(bstatus, npbs, hname,\n\t\t\t\t\t      consum, consumable_size, various);\n\t\t\t\tbuild_collective(bstatus, npbs, hname,\n\t\t\t\t\t\t consum, consumable_size, various);\n\t\t\t}\n\n\t\t\t/* append new to end of the current list */\n\t\t\tif (curlist == NULL) {\n\t\t\t\tcurlist = npbs;\n\t\t\t} else {\n\t\t\t\t/* curlist must be unchanged here */\n\t\t\t\tcpbs = curlist;\n\t\t\t\twhile (cpbs->next)\n\t\t\t\t\tcpbs = cpbs->next;\n\t\t\t\tcpbs->next = npbs;\n\t\t\t}\n\t\t\tbreak;\n\t\t}\n\t}\n\tif (i == host_list_size) {\n\t\t/* did not find a host of the given name in the table */\n\t\tfree(npbs); /* no leaking */\n\t\tpbs_errno = PBSE_UNKNODE;\n\t}\n\n\treturn curlist;\n}\n\n/**\n * @brief\n * \tpbs_stathost - return status on a single named host or all hosts known\n *\tA host is defined by the value of resources_available.host\n *\n *\tFunction does a pbs_statvnode() to collect information on all vnodes\n *\tand then aggregates the attributes from the vnodes that share the same\n *\thost value.\n *\n *\tIf resources in resources_assigned/resources_available are consumable,\n *\tso defined by being in resources_assigned, then the values for the same\n *\tresource on the collection of vnodes are summed.\n *\n *\tOtherwise, if the attribute or resource values are identical across the\n *\tset of vnodes, that value is reported.  
Else, the string \"<various>\"\n *\tis reported, meaning the vnodes have different values.\n *\n *\tThis function, like most in PBS, is NOT thread safe.\n */\n\nstruct batch_status *\n__pbs_stathost(int con, const char *hid, struct attrl *attrib, const char *extend)\n{\n\tstruct batch_status *breturn; /* the list returned to the caller */\n\tstruct batch_status *bstatus; /* used internally\t\t   */\n\tint i;\n\t/* variables used across many functions; these are what make the function */\n\tchar *various = \"<various>\";\n\tstruct host_list *phost_list = NULL;\n\tstruct consumable *consum = NULL;\n\tint host_list_size = 0;\n\tint consumable_size = 0;\n\tstruct pbs_client_thread_connect_context *context;\n\n\tbreturn = NULL;\n\n\t/* get status of all vnodes */\n\tbstatus = pbs_statvnode(con, \"\", attrib, extend);\n\n\tif (bstatus == NULL)\n\t\treturn NULL;\n\n\tbuild_host_list(bstatus, &phost_list, &host_list_size,\n\t\t\t&consum, &consumable_size);\n\n\tif ((hid == NULL) || (*hid == '\\0')) {\n\n\t\t/*\n\t\t * No host specified, so for each host found in the host_list\n\t\t * entries, gather info from the vnodes associated with that host\n\t\t */\n\n\t\tfor (i = 0; i < host_list_size; ++i) {\n\t\t\thid = (phost_list + i)->hl_name;\n\t\t\tbreturn = build_return_status(bstatus, hid, breturn,\n\t\t\t\t\t\t      phost_list, host_list_size,\n\t\t\t\t\t\t      consum, consumable_size,\n\t\t\t\t\t\t      various);\n\t\t}\n\n\t} else {\n\n\t\t/*\n\t\t * A specific host was named; gather info from the vnodes associated with it\n\t\t */\n\n\t\tbreturn = build_return_status(bstatus, hid, breturn,\n\t\t\t\t\t      phost_list, host_list_size,\n\t\t\t\t\t      consum, consumable_size,\n\t\t\t\t\t      various);\n\t\tif ((breturn == NULL) && (pbs_errno == PBSE_UNKNODE)) {\n\t\t\t/*\n\t\t\t * store error in TLS if available. 
Fallback to\n\t\t\t * connection structure.\n\t\t\t */\n\t\t\tcontext = pbs_client_thread_find_connect_context(con);\n\t\t\tif (context != NULL) {\n\t\t\t\tif (context->th_ch_errtxt != NULL)\n\t\t\t\t\tfree(context->th_ch_errtxt);\n\t\t\t\tif ((context->th_ch_errtxt = strdup(pbse_to_txt(pbs_errno))) == NULL) {\n\t\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif (set_conn_errtxt(con, pbse_to_txt(pbs_errno)) != 0) {\n\t\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tpbs_statfree(bstatus); /* free info returned by pbs_statvnode() */\n\tfor (i = 0; i < consumable_size; ++i)\n\t\tfree((consum + i)->cons_resource);\n\tfree(consum);\n\tconsum = NULL;\n\tconsumable_size = 0;\n\tfree(phost_list);\n\tphost_list = NULL;\n\thost_list_size = 0;\n\treturn breturn;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_statjob.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_statjob.c\n *\n * Return the status of a job.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"libpbs.h\"\n#include \"pbs_ecl.h\"\n\n/**\n * @brief\n *\t-Return the status of a job.\n *\n * @param[in] c - communication handle\n * @param[in] id - job id\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extend string for req\n *\n * @return\tstructure handle\n * @retval\tpointer to batch_status struct\t\tsuccess\n * @retval\tNULL\t\t\t\t\terror\n *\n */\nstruct batch_status *\n__pbs_statjob(int c, const char *id, struct attrl *attrib, const char *extend)\n{\n\tstruct batch_status *ret = NULL;\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn NULL;\n\n\t/* first verify the attributes, if verification is enabled */\n\tif ((pbs_verify_attributes(c, PBS_BATCH_StatusJob,\n\t\t\t\t   MGR_OBJ_JOB, MGR_CMD_NONE, (struct attropl *) attrib)))\n\t\treturn NULL;\n\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn NULL;\n\n\tret = PBSD_status(c, PBS_BATCH_StatusJob, id, attrib, extend);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn NULL;\n\n\treturn ret;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_statnode.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_statnode.c - contains pbs_statnode() and pbs_statvnode()\n * @brief\n * Return the status of host(s) or vnodes.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"libpbs.h\"\n#include \"pbs_ecl.h\"\n\n/**\n * @brief\n * \t-pbs_statnode() - calls pbs_stathost(), returns status of host\n *\tmaintained for backward compatibility\n *\n * @param[in] c - communication handle\n * @param[in] id - object id\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extend string for encoding req\n *\n * @return      structure handle\n * @retval      pointer to batch_status struct          Success\n * @retval      NULL\t\t\t\t\terror\n *\n */\nstruct batch_status *\n__pbs_statnode(int c, const char *id, struct attrl *attrib, const char *extend)\n{\n\treturn pbs_stathost(c, id, attrib, extend);\n}\n\n/**\n * @brief\n * \t-__pbs_statvnode() - returns information about virtual nodes (vnodes)\n *\n * @param[in] c - communication handle\n * @param[in] id - object id\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extend string for encoding req\n *\n * @return\tstructure handle\n * @retval\tpointer to batch_status struct\t\tSuccess\n * @retval\tNULL\t\t\t\t\terror\n *\n */\nstruct batch_status *\n__pbs_statvnode(int c, const char *id, struct attrl *attrib, 
const char *extend)\n{\n\tint rc;\n\tstruct batch_status *ret = NULL;\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn NULL;\n\n\t/* first verify the attributes, if verification is enabled */\n\trc = pbs_verify_attributes(c, PBS_BATCH_StatusNode, MGR_OBJ_NODE,\n\t\t\t\t   MGR_CMD_NONE, (struct attropl *) attrib);\n\tif (rc)\n\t\treturn NULL;\n\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn NULL;\n\n\tret = PBSD_status(c, PBS_BATCH_StatusNode, id, attrib, extend);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn NULL;\n\n\treturn ret;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_statque.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_statque.c\n * @brief\n * Return the status of a queue.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include \"libpbs.h\"\n#include \"pbs_ecl.h\"\n\n/**\n * @brief\n *\t-Return the status of a queue.\n *\n * @param[in] c - communication handle\n * @param[in] id - object id\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extend string for encoding req\n *\n * @return      structure handle\n * @retval      pointer to batch_status struct          Success\n * @retval      NULL                                    error\n *\n */\n\nstruct batch_status *\n__pbs_statque(int c, const char *id, struct attrl *attrib, const char *extend)\n{\n\tstruct batch_status *ret = NULL;\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn NULL;\n\n\t/* first verify the attributes, if verification is enabled */\n\tif ((pbs_verify_attributes(c, PBS_BATCH_StatusQue, MGR_OBJ_QUEUE,\n\t\t\t\t   MGR_CMD_NONE, (struct attropl *) attrib)))\n\t\treturn NULL;\n\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn NULL;\n\n\tret = PBSD_status(c, PBS_BATCH_StatusQue, id, attrib, extend);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 
0)\n\t\treturn NULL;\n\n\treturn ret;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_statresv.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_statresv.c\n * @brief\n * Return the status of a reservation.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"libpbs.h\"\n#include \"pbs_ecl.h\"\n\n/**\n * @brief\n *\t-Return the status of a reservation.\n *\n * @param[in] c - communication handle\n * @param[in] id - object id\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extend string for encoding req\n *\n * @return      structure handle\n * @retval      pointer to batch_status struct          Success\n * @retval      NULL                                    error\n *\n */\n\nstruct batch_status *\n__pbs_statresv(int c, const char *id, struct attrl *attrib, const char *extend)\n{\n\tstruct batch_status *ret = NULL;\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn NULL;\n\n\t/* first verify the attributes, if verification is enabled */\n\tif ((pbs_verify_attributes(c, PBS_BATCH_StatusResv, MGR_OBJ_RESV,\n\t\t\t\t   MGR_CMD_NONE, (struct attropl *) attrib)))\n\t\treturn NULL;\n\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn NULL;\n\n\tret = PBSD_status(c, PBS_BATCH_StatusResv, id, attrib, extend);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif 
(pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn NULL;\n\n\treturn ret;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_statrsc.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_statrsc.c\n * @brief\n * Return the status of one or more resources.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"libpbs.h\"\n#include \"pbs_ecl.h\"\n\n/**\n * @brief\n *\tReturn the status of one or more resources.\n *\n * @param[in] c - communication handle\n * @param[in] id - object id\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extend string for encoding req\n *\n * @return      structure handle\n * @retval      pointer to batch_status struct          Success\n * @retval      NULL                                    error\n *\n */\n\nstruct batch_status *\n__pbs_statrsc(int c, const char *id, struct attrl *attrib, const char *extend)\n{\n\tstruct batch_status *ret = NULL;\n\tint rc;\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn NULL;\n\n\t/* first verify the attributes, if verification is enabled */\n\trc = pbs_verify_attributes(c, PBS_BATCH_StatusRsc, MGR_OBJ_RSC,\n\t\t\t\t   MGR_CMD_NONE, (struct attropl *) attrib);\n\tif (rc)\n\t\treturn NULL;\n\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn NULL;\n\n\tret = PBSD_status(c, PBS_BATCH_StatusRsc, id, attrib, extend);\n\n\t/* unlock the thread lock and update the thread context data 
*/\n\tif (pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn NULL;\n\n\treturn ret;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_statsched.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_statsched.c\n * @brief\n * Return the status of sched objects.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"libpbs.h\"\n#include \"pbs_ecl.h\"\n\n/**\n * @brief\n *\t- Return the status of sched objects.\n *\n * @param[in] c - communication handle\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extend string for encoding req\n *\n * @return      structure handle\n * @retval      pointer to batch_status struct          Success\n * @retval      NULL                                    error\n *\n */\n\nstruct batch_status *\n__pbs_statsched(int c, struct attrl *attrib, const char *extend)\n{\n\tstruct batch_status *ret = NULL;\n\tint rc;\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn NULL;\n\n\t/* first verify the attributes, if verification is enabled */\n\trc = pbs_verify_attributes(c, PBS_BATCH_StatusSched, MGR_OBJ_SCHED,\n\t\t\t\t   MGR_CMD_NONE, (struct attropl *) attrib);\n\tif (rc)\n\t\treturn NULL;\n\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn NULL;\n\n\tret = PBSD_status(c, PBS_BATCH_StatusSched, \"\", attrib, extend);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 
0)\n\t\treturn NULL;\n\n\treturn ret;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_statsrv.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_statserver.c\n * @brief\n * Return the status of a server.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"libpbs.h\"\n#include \"pbs_ecl.h\"\n\n/**\n * @brief\n *\t-Return the status of a server.\n *\n * @param[in] c - communication handle\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extend string for encoding req\n *\n * @return      structure handle\n * @retval      pointer to batch_status struct          Success\n * @retval      NULL                                    error\n *\n */\n\nstruct batch_status *\n__pbs_statserver(int c, struct attrl *attrib, const char *extend)\n{\n\tstruct batch_status *ret = NULL;\n\tint rc;\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn NULL;\n\n\t/* first verify the attributes, if verification is enabled */\n\trc = pbs_verify_attributes(c, PBS_BATCH_StatusSvr, MGR_OBJ_SERVER,\n\t\t\t\t   MGR_CMD_NONE, (struct attropl *) attrib);\n\tif (rc)\n\t\treturn NULL;\n\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn NULL;\n\n\tret = PBSD_status(c, PBS_BATCH_StatusSvr, \"\", attrib, extend);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 
0)\n\t\treturn NULL;\n\n\treturn ret;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_submit.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n *\n * @brief\n *\tThe Submit Job request.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <strings.h>\n#include <fcntl.h>\n#include <unistd.h>\n#include <assert.h>\n#include \"libpbs.h\"\n#include \"credential.h\"\n#include \"pbs_ecl.h\"\n#include \"pbs_client_thread.h\"\n\n#include \"ticket.h\"\n\n/* for use with pbs_submit_with_cred */\nstruct cred_info {\n\tint cred_type;\n\tsize_t cred_len;\n\tchar *cred_buf;\n};\n\n/**\n * @brief\n *\t-wrapper function for pbs_submit where submission takes credentials.\n *\n * @param[in] c - communication handle\n * @param[in] attrib - pointer to attr list\n * @param[in] script - script to be submitted\n * @param[in] destination - host where submission happens\n * @param[in] extend - extend string for encoding req\n * @param[in] credtype - credential type\n * @param[in] credlen - credential length\n * @param[in] credbuf - buffer to hold cred info\n *\n * @return \tstring\n * @retval\tjobid\tsuccess\n * @retval\tNULL\terror\n *\n */\nchar *\npbs_submit_with_cred(int c, struct attropl *attrib, char *script,\n\t\t     char *destination, char *extend, int credtype,\n\t\t     size_t credlen, char *credbuf)\n{\n\tchar *ret;\n\tstruct pbs_client_thread_context *ptr;\n\tstruct cred_info *cred_info;\n\n\t/* initialize the thread context data, 
if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn NULL;\n\n\t/* lock pthread mutex here for this connection */\n\t/* blocking call, waits for mutex release */\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn NULL;\n\n\tptr = (struct pbs_client_thread_context *) pbs_client_thread_get_context_data();\n\tif (!ptr) {\n\t\tpbs_errno = PBSE_INTERNAL;\n\t\t(void) pbs_client_thread_unlock_connection(c);\n\t\treturn NULL;\n\t}\n\n\tif (!ptr->th_cred_info) {\n\t\tcred_info = malloc(sizeof(struct cred_info));\n\t\tif (!cred_info) {\n\t\t\tpbs_errno = PBSE_INTERNAL;\n\t\t\t(void) pbs_client_thread_unlock_connection(c);\n\t\t\treturn NULL;\n\t\t}\n\t\tptr->th_cred_info = (void *) cred_info;\n\t} else\n\t\tcred_info = (struct cred_info *) ptr->th_cred_info;\n\n\t/* copy credentials to static variables */\n\tcred_info->cred_buf = credbuf;\n\tcred_info->cred_len = credlen;\n\tcred_info->cred_type = credtype;\n\n\t/* pbs_submit takes credentials from static variables */\n\tret = pbs_submit(c, attrib, script, destination, extend);\n\n\tcred_info->cred_buf = NULL;\n\tcred_info->cred_len = 0;\n\tcred_info->cred_type = 0;\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn NULL;\n\n\treturn ret;\n}\n\n/**\n * @brief\n *\t-submit job request\n *\n * @param[in] c - communication handle\n * @param[in] attrib - pointer to attr list\n * @param[in] script - job script\n * @param[in] dest - host where job submitted\n * @param[in] extend - extend string for encoding req\n *\n * @return      string\n * @retval      jobid   success\n * @retval      NULL    error\n *\n */\nchar *\n__pbs_submit(int c, struct attropl *attrib, const char *script, const char *dest, const char *extend)\n{\n\tstruct attropl *pal;\n\tchar *return_jobid = NULL;\n\tint rc;\n\tstruct pbs_client_thread_context *ptr;\n\tstruct cred_info *cred_info = NULL;\n\tint commit_done = 0;\n\tchar *lextend 
= NULL;\n\n\t/* initialize the thread context data, if not already initialized */\n\tif ((pbs_errno = pbs_client_thread_init_thread_context()) != 0)\n\t\tgoto error;\n\n\tptr = (struct pbs_client_thread_context *) pbs_client_thread_get_context_data();\n\tif (!ptr) {\n\t\tpbs_errno = PBSE_INTERNAL;\n\t\tgoto error;\n\t}\n\n\t/* first verify the attributes, if verification is enabled */\n\tif (pbs_verify_attributes(c, PBS_BATCH_QueueJob, MGR_OBJ_JOB, MGR_CMD_NONE, attrib) != 0)\n\t\tgoto error; /* pbs_errno is already set in this case */\n\n\t/* lock pthread mutex here for this connection */\n\t/* blocking call, waits for mutex release */\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\tgoto error; /* pbs_errno is already set in this case */\n\n\t/* first be sure that the script is readable if specified ... */\n\tif ((script != NULL) && (*script != '\\0')) {\n\t\tif (access(script, R_OK) != 0) {\n\t\t\tpbs_errno = PBSE_BADSCRIPT;\n\t\t\tif (set_conn_errtxt(c, \"cannot access script file\") != 0)\n\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\tgoto error;\n\t\t}\n\t}\n\n\t/* initiate the queueing of the job */\n\tfor (pal = attrib; pal; pal = pal->next)\n\t\tpal->op = SET; /* force operator to SET */\n\n\tcred_info = (struct cred_info *) ptr->th_cred_info;\n\tif ((!script || (*script == '\\0')) && (!cred_info || (cred_info->cred_len <= 0))) {\n\t\t/* no cred and no script, let's request implicit commit to cut one message exchange */\n\t\tif (extend) {\n\t\t\tlextend = EXTEND_OPT_IMPLICIT_COMMIT;\n\t\t\tif (!pbs_strcat(&lextend, NULL, extend)) {\n\t\t\t\tif (set_conn_errtxt(c, \"Failed to allocate memory\") != 0)\n\t\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\tlextend = NULL;\n\t\t\t\tgoto error;\n\t\t\t}\n\t\t\textend = lextend;\n\t\t} else\n\t\t\textend = EXTEND_OPT_IMPLICIT_COMMIT;\n\t}\n\n\t/* Queue job with null string for job id */\n\treturn_jobid = PBSD_queuejob(c, \"\", dest, attrib, extend, PROT_TCP, NULL, &commit_done);\n\tif (return_jobid == NULL)\n\t\tgoto 
error;\n\n\tif (commit_done)\n\t\tgoto done;\n\n\t/* send script across */\n\tif ((script != NULL) && (*script != '\\0')) {\n\t\tif ((rc = PBSD_jscript(c, script, 0, NULL)) != 0) {\n\t\t\tif (rc == PBSE_JOBSCRIPTMAXSIZE)\n\t\t\t\tpbs_errno = rc;\n\t\t\telse\n\t\t\t\tpbs_errno = PBSE_BADSCRIPT;\n\t\t\tgoto error;\n\t\t}\n\t}\n\n\t/* OK, the script got across, apparently, so we are */\n\t/* ready to commit */\n\t/* opaque information */\n\tif (cred_info && cred_info->cred_len > 0) {\n\t\tif (PBSD_jcred(c, cred_info->cred_type,\n\t\t\t       cred_info->cred_buf,\n\t\t\t       cred_info->cred_len, 0, NULL) != 0) {\n\t\t\tpbs_errno = PBSE_BADCRED;\n\t\t\tgoto error;\n\t\t}\n\t}\n\n\tif (PBSD_commit(c, return_jobid, 0, NULL, NULL) != 0)\n\t\tgoto error;\n\nerror:\ndone:\n\tfree(lextend);\n\t/* unlock the thread lock and update the thread context data */\n\tpbs_client_thread_unlock_connection(c);\n\treturn return_jobid;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbsD_submit_resv.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_submit_resv.c\n * @brief\n *\tThe Submit Reservation request.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <fcntl.h>\n#include <unistd.h>\n#include \"libpbs.h\"\n#include \"pbs_ecl.h\"\n\n/**\n * @brief\n *\tPasses submit reservation request to PBSD_submit_resv( )\n *\n * @param[in]   c - socket on which connected\n * @param[in]   attrib - the list of attributes for batch request\n * @param[in]   extend - extension of batch request\n *\n * @return char*\n * @retval SUCCESS returns the reservation ID\n * @retval ERROR NULL\n */\nchar *\n__pbs_submit_resv(int c, struct attropl *attrib, const char *extend)\n{\n\tstruct attropl *pal;\n\tint rc;\n\tchar *ret;\n\n\tfor (pal = attrib; pal; pal = pal->next)\n\t\tpal->op = SET; /* force operator to SET */\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn NULL;\n\n\t/* first verify the attributes, if verification is enabled */\n\trc = pbs_verify_attributes(c, PBS_BATCH_SubmitResv,\n\t\t\t\t   MGR_OBJ_RESV, MGR_CMD_NONE, attrib);\n\tif (rc)\n\t\treturn NULL;\n\n\t/* lock pthread mutex here for this connection */\n\t/* blocking call, waits for mutex release */\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn NULL;\n\n\t/* 
initiate the queueing of the reservation */\n\n\t/* Queue reservation with null string for reservation id */\n\tret = PBSD_submit_resv(c, \"\", attrib, extend);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn NULL;\n\n\treturn ret;\n}\n
  },
  {
    "path": "src/lib/Libifl/pbsD_termin.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_terminate.c\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include <stdio.h>\n#include \"libpbs.h\"\n#include \"dis.h\"\n#include \"pbs_ecl.h\"\n\n/**\n * @brief\n *\t-send termination batch_request to server.\n *\n * @param[in] c - communication handle\n * @param[in] manner - manner in which server to be terminated\n * @param[in] extend - extension string for request\n *\n * @return\tint\n * @retval\t0\t\tsuccess\n * @retval\tpbs_error\terror\n *\n */\nint\n__pbs_terminate(int c, int manner, const char *extend)\n{\n\tstruct batch_reply *reply;\n\tint rc = 0;\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn pbs_errno;\n\n\t/* lock pthread mutex here for this connection */\n\t/* blocking call, waits for mutex release */\n\tif (pbs_client_thread_lock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\t/* setup DIS support routines for following DIS calls */\n\tDIS_tcp_funcs();\n\n\tif ((rc = encode_DIS_ReqHdr(c, PBS_BATCH_Shutdown, pbs_current_user)) ||\n\t    (rc = encode_DIS_ShutDown(c, manner)) ||\n\t    (rc = encode_DIS_ReqExtend(c, extend))) {\n\t\tif (set_conn_errtxt(c, dis_emsg[rc]) != 0)\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\telse\n\t\t\tpbs_errno = 
PBSE_PROTOCOL;\n\n\t\tpbs_client_thread_unlock_connection(c);\n\t\treturn pbs_errno;\n\t}\n\n\tif (dis_flush(c)) {\n\t\tpbs_errno = PBSE_PROTOCOL;\n\t\tpbs_client_thread_unlock_connection(c);\n\t\treturn pbs_errno;\n\t}\n\n\t/* read in reply */\n\treply = PBSD_rdrpy(c);\n\trc = get_conn_errno(c);\n\tPBSD_FreeReply(reply);\n\n\t/* unlock the thread lock and update the thread context data */\n\tif (pbs_client_thread_unlock_connection(c) != 0)\n\t\treturn pbs_errno;\n\n\treturn rc;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbs_delstatfree.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_delstatfree.c\n * @brief\n * The function that deallocates a \"batch_deljob_status\" structure\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdlib.h>\n#include <stdio.h>\n#include \"libpbs.h\"\n\n/**\n * @brief\n *\t-The function that deallocates a \"batch_deljob_status\" structure\n *\n * @param[in] bsp - pointer to the head of a batch_deljob_status list.\n *\n * @return\tVoid\n *\n */\nvoid\n__pbs_delstatfree(struct batch_deljob_status *bsp)\n{\n\tstruct batch_deljob_status *bsnxt;\n\n\twhile (bsp != NULL) {\n\t\tif (bsp->name != NULL)\n\t\t\tfree(bsp->name);\n\t\tbsnxt = bsp->next;\n\t\tfree(bsp);\n\t\tbsp = bsnxt;\n\t}\n}\n
  },
  {
    "path": "src/lib/Libifl/pbs_get_attribute_errors.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_get_attribute_errors.c\n * @brief\n *\tThe function returns the attributes that failed verification\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdlib.h>\n#include <stdio.h>\n#include \"libpbs.h\"\n\n/**\n * @brief\n *\t-The function returns the attributes that failed verification\n *\n * @param[in] connect - socket descriptor\n *\n * @return\tstructure handle\n * @retval\tpointer to ecl_attribute_errors struct\t\tsuccess\n * @retval\tNULL\t\t\t\t\t\terror\n *\n */\nstruct ecl_attribute_errors *\n__pbs_get_attributes_in_error(int connect)\n{\n\tstruct ecl_attribute_errors *err_list = NULL;\n\tstruct pbs_client_thread_context *ptr = pbs_client_thread_get_context_data();\n\tif (ptr)\n\t\terr_list = ptr->th_errlist;\n\n\tif (err_list && err_list->ecl_numerrors)\n\t\treturn err_list;\n\telse\n\t\treturn NULL;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbs_geterrmg.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_geterrmsg.c\n * @brief\n * Return the last error message the server returned on\n * this connection.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"libpbs.h\"\n\n/**\n * @brief\n *\t-Return the last error message the server returned on\n *\tthis connection.\n *\n * @param[in] connect - socket descriptor\n *\n * @return\tstring\n * @retval\tconnection contexts\n *\t\tTLS\t\t\tmultithread\n *\t\tSTRUCTURE\t\tsingle thread\n * @retval\terrmsg\t\t\terror\n *\n */\nchar *\n__pbs_geterrmsg(int connect)\n{\n\tstruct pbs_client_thread_connect_context *con = pbs_client_thread_find_connect_context(connect);\n\tstruct pbs_client_thread_context *thrd_ctxt = pbs_client_thread_get_context_data();\n\n\t/*\n\t * multithreaded callers will have the connection contexts stored\n\t * in the TLS, whereas single threaded clients don't use the TLS.\n\t * So, return connection structure values after checking\n\t */\n\tif (con && thrd_ctxt && (thrd_ctxt->th_pbs_mode == 0))\n\t\treturn (con->th_ch_errtxt);\n\telse\n\t\treturn get_conn_errtxt(connect);\n}\n
  },
  {
    "path": "src/lib/Libifl/pbs_geterrno.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_geterrno.c\n * @brief\n *\tReturn pbs_errno\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"libpbs.h\"\n\n/**\n * @brief\n *\t-returns pbs_errno\n *\n * @return\tint\n * @retval\tpbs_errno\n *\n */\nint\npbs_geterrno(void)\n{\n\treturn pbs_errno;\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbs_ifl.i",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n%module pbs_ifl\n\n%begin %{\n#include <pbs_python_private.h>\n%}\n\n%{\n#include \"pbs_ifl.h\"\n#include \"pbs_error.h\"\n%}\n\n/* functions to acquire values from thread specific variables */\n%inline\n%{\nint get_pbs_errno(void)\n{\n    return pbs_errno;\n}\n\nconst char * get_pbs_server(void)\n{\n    return pbs_server;\n}\n%}\n\n%include \"pbs_ifl.h\"\n%include \"pbs_error.h\"\n"
  },
  {
    "path": "src/lib/Libifl/pbs_loadconf.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_loadconf.c\n */\n#include <pbs_config.h>\n#include <ctype.h>\n#include <sys/types.h>\n#include <stdio.h>\n#include <string.h>\n#include <stdlib.h>\n#include <netdb.h>\n#include <pbs_ifl.h>\n#include <pwd.h>\n#include <pthread.h>\n#include \"pbs_internal.h\"\n#include <limits.h>\n#include <pbs_error.h>\n#include \"pbs_client_thread.h\"\n#include \"net_connect.h\"\n#include \"portability.h\"\n#include \"cmds.h\"\n\n#include <sys/stat.h>\n#include <unistd.h>\n\n#ifndef WIN32\n#define shorten_and_cleanup_path(p) strdup(p)\n#endif\n\nchar *pbs_conf_env = \"PBS_CONF_FILE\";\n\nstatic char *pbs_loadconf_buf = NULL;\nstatic int pbs_loadconf_len = 0;\n\n/*\n * Initialize the pbs_conf structure.\n *\n * The order of elements must be kept in sync with the pbs_config\n * structure definition in src/include/pbs_internal.h\n */\nstruct pbs_config pbs_conf = {\n\t0,\t\t\t    /* loaded */\n\t0,\t\t\t    /* load_failed */\n\t0,\t\t\t    /* start_server */\n\t0,\t\t\t    /* start_mom */\n\t0,\t\t\t    /* start_sched */\n\t0,\t\t\t    /* start comm */\n\t0,\t\t\t    /* locallog */\n\tNULL,\t\t\t    /* default to NULL for supported auths */\n\tNULL,\t\t\t    /* auth service users list, will default to just \"root\" if not set explicitly */\n\t{'\\0'},\t\t\t    /* default no auth method to encrypt/decrypt data */\n\tAUTH_RESVPORT_NAME,\t    /* 
default to reserved port authentication */\n\tAUTH_RESVPORT_NAME,\t    /* default to reserved port qsub -I authentication. Possible values: resvport, munge */\n\t{'\\0'},\t\t\t    /* default no method to encrypt/decrypt data in an interactive job */\n\t0,\t\t\t    /* sched_modify_event */\n\t0,\t\t\t    /* syslogfac */\n\t3,\t\t\t    /* syslogsvr - LOG_ERR from syslog.h */\n\tPBS_BATCH_SERVICE_PORT,\t    /* batch_service_port */\n\tPBS_BATCH_SERVICE_PORT_DIS, /* batch_service_port_dis */\n\tPBS_MOM_SERVICE_PORT,\t    /* mom_service_port */\n\tPBS_MANAGER_SERVICE_PORT,   /* manager_service_port */\n\tPBS_DATA_SERVICE_PORT,\t    /* pbs data service port */\n\tNULL,\t\t\t    /* pbs_conf_file */\n\tNULL,\t\t\t    /* pbs_home_path */\n\tNULL,\t\t\t    /* pbs_exec_path */\n\tNULL,\t\t\t    /* pbs_server_name */\n\tNULL,\t\t\t    /* cp_path */\n\tNULL,\t\t\t    /* scp_path */\n\tNULL,\t\t\t    /* scp_args */\n\tNULL,\t\t\t    /* rcp_path */\n\tNULL,\t\t\t    /* pbs_demux_path */\n\tNULL,\t\t\t    /* pbs_environment */\n\tNULL,\t\t\t    /* iff_path */\n\tNULL,\t\t\t    /* primary name   */\n\tNULL,\t\t\t    /* secondary name */\n\tNULL,\t\t\t    /* aux Mom home   */\n\tNULL,\t\t\t    /* pbs_core_limit */\n\tNULL,\t\t\t    /* default database host  */\n\tNULL,\t\t\t    /* pbs_tmpdir */\n\tNULL,\t\t\t    /* pbs_server_host_name */\n\tNULL,\t\t\t    /* pbs_public_host_name */\n\tNULL,\t\t\t    /* pbs_mail_host_name */\n\tNULL,\t\t\t    /* pbs_output_host_name */\n\tNULL,\t\t\t    /* pbs_smtp_server_name */\n\t1,\t\t\t    /* use compression by default with TCP */\n\t1,\t\t\t    /* use mcast by default with TCP */\n\tNULL,\t\t\t    /* default leaf name */\n\tNULL,\t\t\t    /* for leaf, default communication routers list */\n\tNULL,\t\t\t    /* default router name */\n\tNULL,\t\t\t    /* for router, default communication routers list */\n\t0,\t\t\t    /* default comm logevent mask */\n\t4,\t\t\t    /* default number of threads */\n\tNULL,\t\t\t    /* mom short name override 
*/\n\t0,\t\t\t    /* high resolution timestamp logging */\n\t0,\t\t\t    /* number of scheduler threads */\n\tNULL,\t\t\t    /* default scheduler user */\n\tNULL,\t\t\t    /* default scheduler auth user */\n\tNULL,\t\t\t    /* privileged auth user */\n\tNULL,\t\t\t    /* path to user credentials program */\n\t{'\\0'}\t\t\t    /* current running user */\n#ifdef WIN32\n\t,\n\tNULL /* remote viewer launcher executable along with launch options */\n#endif\n};\n\n/**\n * @brief\n *\tidentify_service_entry - Maps a service name to a variable location in pbs_conf\n *\n * @par\n *\tCalls to getservbyname() are expensive. Instead we want to parse the\n *\tservice entries using getservent(). This static function is used to\n *\tmap a service name (or alias) to the proper location in the pbs_conf\n *\tstructure defined above.\n *\n * @param[in] name\tThe name of the service being parsed\n *\n * @return unsigned int *\n * @retval !NULL for success\n * @retval NULL for not found\n */\nstatic unsigned int *\nidentify_service_entry(char *name)\n{\n\tunsigned int *p = NULL;\n\tif ((name == NULL) || (*name == '\\0'))\n\t\treturn NULL;\n\tif (strcmp(name, PBS_BATCH_SERVICE_NAME) == 0) {\n\t\tp = &pbs_conf.batch_service_port;\n\t} else if (strcmp(name, PBS_BATCH_SERVICE_NAME_DIS) == 0) {\n\t\tp = &pbs_conf.batch_service_port_dis;\n\t} else if (strcmp(name, PBS_MOM_SERVICE_NAME) == 0) {\n\t\tp = &pbs_conf.mom_service_port;\n\t} else if (strcmp(name, PBS_MANAGER_SERVICE_NAME) == 0) {\n\t\tp = &pbs_conf.manager_service_port;\n\t} else if (strcmp(name, PBS_DATA_SERVICE_NAME) == 0) {\n\t\tp = &pbs_conf.pbs_data_service_port;\n\t}\n\treturn p;\n}\n\n/**\n * @brief\n *\tpbs_get_conf_file - Identify the configuration file location\n *\n * @return char *\n * @retval !NULL pointer to the configuration file name\n * @retval NULL should never be returned\n */\nstatic char *\npbs_get_conf_file(void)\n{\n\tchar *conf_file;\n\n\t/* If pbs_conf has already been populated, use that value. 
*/\n\tif ((pbs_conf.loaded != 0) && (pbs_conf.pbs_conf_file != NULL))\n\t\treturn (pbs_conf.pbs_conf_file);\n\n\tif (pbs_conf_env == NULL) {\n\t\tif ((conf_file = getenv(\"PBS_CONF_FILE\")) == NULL)\n\t\t\tconf_file = PBS_CONF_FILE;\n\t} else {\n\t\tif ((conf_file = getenv(pbs_conf_env)) == NULL)\n\t\t\tconf_file = PBS_CONF_FILE;\n\t}\n\treturn (shorten_and_cleanup_path(conf_file));\n}\n\n/**\n * @brief\n *\tparse_config_line - Read and parse one line of the pbs.conf file\n *\n * @param[in] fp\tFile pointer to use for reading\n * @param[in/out] key\tPointer to variable name pointer\n * @param[in/out] val\tPointer to variable value pointer\n *\n * @return char *\n * @retval !NULL Input remains\n * @retval NULL End of input\n */\nstatic char *\nparse_config_line(FILE *fp, char **key, char **val)\n{\n\tchar *start;\n\tchar *end;\n\tchar *split;\n\tchar *ret;\n\n\t*key = \"\";\n\t*val = \"\";\n\n\t/* Use a do-while rather than a goto. */\n\tdo {\n\t\tint len;\n\n\t\tret = pbs_fgets(&pbs_loadconf_buf, &pbs_loadconf_len, fp);\n\t\tif (ret == NULL)\n\t\t\tbreak;\n\t\tlen = strlen(pbs_loadconf_buf);\n\t\tif (len < 1)\n\t\t\tbreak;\n\t\t/* Advance the start pointer past any whitespace. */\n\t\tfor (start = pbs_loadconf_buf; (*start != '\\0') && isspace((int) *start); start++)\n\t\t\t;\n\t\t/* Is this a comment line? */\n\t\tif (*start == '#')\n\t\t\tbreak;\n\t\t/* Remove whitespace from the end. */\n\t\tfor (end = pbs_loadconf_buf + len - 1; (end >= start) && isspace((int) *end); end--)\n\t\t\t*end = '\\0';\n\t\t/* Was there nothing but white space? */\n\t\tif (start >= end)\n\t\t\tbreak;\n\t\tsplit = strchr(start, '=');\n\t\tif (split == NULL)\n\t\t\tbreak;\n\t\t*key = start;\n\t\t*split++ = '\\0';\n\t\t*val = split;\n\t} while (0);\n\n\treturn ret;\n}\n\n/**\n * @brief\n *\tpbs_loadconf - Populate the pbs_conf structure\n *\n * @par\n *\tLoad the pbs_conf structure.  The variables can be filled in\n *\tfrom either the environment or the pbs.conf file.  
The\n *\tenvironment gets priority over the file.  If any of the\n *\tprimary variables are not filled in, the function fails.\n *\tPrimary vars: pbs_home_path, pbs_exec_path, pbs_server_name\n *\n * @note\n *\tClients can now be multithreaded. So don't call pbs_loadconf with\n *\treload = TRUE. Currently, the code flow ensures that the configuration\n *\tis loaded only once (never used with reload true). Thus in the rest of\n *\tthe code a direct read of the pbs_conf.variables is fine. There is no\n *\trace between access of pbs_conf vars and the loading of pbs_conf vars.\n *\tHowever, if pbs_loadconf is called with reload = TRUE, this assumption\n *\twill no longer hold. In that case, access to every pbs_conf.variable has to be\n *\tsynchronized against the reload of those variables.\n *\n * @param[in] reload\tWhether to attempt a reload\n *\n * @return int\n * @retval 1 Success\n * @retval 0 Failure\n */\nint\n__pbs_loadconf(int reload)\n{\n\tFILE *fp;\n\tchar buf[256];\n\tchar *conf_name;     /* the name of the conf parameter */\n\tchar *conf_value;    /* the value from the conf file or env */\n\tchar *gvalue;\t     /* used with getenv() */\n\tunsigned int uvalue; /* used with sscanf() */\n\tstruct passwd *pw;\n\tuid_t pbs_current_uid;\n#ifndef WIN32\n\tstruct servent *servent; /* for use with getservent */\n\tchar **servalias;\t /* service alias list */\n\tunsigned int *pui;\t /* for use with identify_service_entry */\n#endif\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn 0;\n\n\t/* this section of the code modifies the process-wide\n\t * tcp array. 
Since multiple threads can get into this\n\t * simultaneously, we need to serialize it\n\t */\n\tif (pbs_client_thread_lock_conf() != 0)\n\t\treturn 0;\n\n\tif (pbs_conf.loaded && !reload) {\n\t\t(void) pbs_client_thread_unlock_conf();\n\t\treturn 1;\n\t} else if (pbs_conf.load_failed && !reload) {\n\t\t(void) pbs_client_thread_unlock_conf();\n\t\treturn 0;\n\t}\n\n\t/*\n\t * If there are service port definitions available, use them\n\t * as the defaults. They may be overridden later by the config\n\t * file or environment variables. If not available, retain\n\t * whatever we were using before.\n\t */\n#ifdef WIN32\n\t/* Windows does not have the getservent() call. */\n\tpbs_conf.batch_service_port = get_svrport(\n\t\tPBS_BATCH_SERVICE_NAME, \"tcp\",\n\t\tpbs_conf.batch_service_port);\n\tpbs_conf.batch_service_port_dis = get_svrport(\n\t\tPBS_BATCH_SERVICE_NAME_DIS, \"tcp\",\n\t\tpbs_conf.batch_service_port_dis);\n\tpbs_conf.mom_service_port = get_svrport(\n\t\tPBS_MOM_SERVICE_NAME, \"tcp\",\n\t\tpbs_conf.mom_service_port);\n\tpbs_conf.manager_service_port = get_svrport(\n\t\tPBS_MANAGER_SERVICE_NAME, \"tcp\",\n\t\tpbs_conf.manager_service_port);\n\tpbs_conf.pbs_data_service_port = get_svrport(\n\t\tPBS_DATA_SERVICE_NAME, \"tcp\",\n\t\tpbs_conf.pbs_data_service_port);\n#else\n\t/* Non-Windows uses getservent() for better performance. */\n\twhile ((servent = getservent()) != NULL) {\n\t\tif (strcmp(servent->s_proto, \"tcp\") != 0)\n\t\t\tcontinue;\n\t\t/* First, check the official service name. */\n\t\tpui = identify_service_entry(servent->s_name);\n\t\tif (pui != NULL) {\n\t\t\t*pui = (unsigned int) ntohs(servent->s_port);\n\t\t\tcontinue;\n\t\t}\n\t\t/* Next, check any aliases that may be defined. 
*/\n\t\tfor (servalias = servent->s_aliases; (servalias != NULL) && (*servalias != NULL); servalias++) {\n\t\t\tpui = identify_service_entry(*servalias);\n\t\t\tif (pui != NULL) {\n\t\t\t\t*pui = (unsigned int) ntohs(servent->s_port);\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\tendservent();\n#endif\n\n\t/*\n\t * Once we determine the location of the pbs.conf file, it never changes.\n\t * The fact that it is saved to the pbs_conf global structure means that\n\t * we can always see its location when debugging.\n\t */\n\tif (pbs_conf.pbs_conf_file == NULL)\n\t\tpbs_conf.pbs_conf_file = pbs_get_conf_file();\n\n\t/*\n\t * Parse through the configuration file and set variables based\n\t * on the contents of the file.\n\t */\n\tif ((fp = fopen(pbs_conf.pbs_conf_file, \"r\")) != NULL) {\n\t\twhile (parse_config_line(fp, &conf_name, &conf_value) != NULL) {\n\t\t\tif ((conf_name == NULL) || (*conf_name == '\\0'))\n\t\t\t\tcontinue;\n\n\t\t\tif (!strcmp(conf_name, PBS_CONF_START_SERVER)) {\n\t\t\t\tif (sscanf(conf_value, \"%u\", &uvalue) == 1)\n\t\t\t\t\tpbs_conf.start_server = ((uvalue > 0) ? 1 : 0);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_START_MOM)) {\n\t\t\t\tif (sscanf(conf_value, \"%u\", &uvalue) == 1)\n\t\t\t\t\tpbs_conf.start_mom = ((uvalue > 0) ? 1 : 0);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_START_SCHED)) {\n\t\t\t\tif (sscanf(conf_value, \"%u\", &uvalue) == 1)\n\t\t\t\t\tpbs_conf.start_sched = ((uvalue > 0) ? 1 : 0);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_START_COMM)) {\n\t\t\t\tif (sscanf(conf_value, \"%u\", &uvalue) == 1)\n\t\t\t\t\tpbs_conf.start_comm = ((uvalue > 0) ? 1 : 0);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_LOCALLOG)) {\n\t\t\t\tif (sscanf(conf_value, \"%u\", &uvalue) == 1)\n\t\t\t\t\tpbs_conf.locallog = ((uvalue > 0) ? 1 : 0);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_SYSLOG)) {\n\t\t\t\tif (sscanf(conf_value, \"%u\", &uvalue) == 1)\n\t\t\t\t\tpbs_conf.syslogfac = ((uvalue <= (23 << 3)) ? 
uvalue : 0);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_SYSLOGSEVR)) {\n\t\t\t\tif (sscanf(conf_value, \"%u\", &uvalue) == 1)\n\t\t\t\t\tpbs_conf.syslogsvr = ((uvalue <= 7) ? uvalue : 0);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_BATCH_SERVICE_PORT)) {\n\t\t\t\tif (sscanf(conf_value, \"%u\", &uvalue) == 1)\n\t\t\t\t\tpbs_conf.batch_service_port =\n\t\t\t\t\t\t((uvalue <= 65535) ? uvalue : pbs_conf.batch_service_port);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_BATCH_SERVICE_PORT_DIS)) {\n\t\t\t\tif (sscanf(conf_value, \"%u\", &uvalue) == 1)\n\t\t\t\t\tpbs_conf.batch_service_port_dis =\n\t\t\t\t\t\t((uvalue <= 65535) ? uvalue : pbs_conf.batch_service_port_dis);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_MOM_SERVICE_PORT)) {\n\t\t\t\tif (sscanf(conf_value, \"%u\", &uvalue) == 1)\n\t\t\t\t\tpbs_conf.mom_service_port =\n\t\t\t\t\t\t((uvalue <= 65535) ? uvalue : pbs_conf.mom_service_port);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_MANAGER_SERVICE_PORT)) {\n\t\t\t\tif (sscanf(conf_value, \"%u\", &uvalue) == 1)\n\t\t\t\t\tpbs_conf.manager_service_port =\n\t\t\t\t\t\t((uvalue <= 65535) ? uvalue : pbs_conf.manager_service_port);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_DATA_SERVICE_PORT)) {\n\t\t\t\tif (sscanf(conf_value, \"%u\", &uvalue) == 1)\n\t\t\t\t\tpbs_conf.pbs_data_service_port =\n\t\t\t\t\t\t((uvalue <= 65535) ? uvalue : pbs_conf.pbs_data_service_port);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_DATA_SERVICE_HOST)) {\n\t\t\t\tfree(pbs_conf.pbs_data_service_host);\n\t\t\t\tpbs_conf.pbs_data_service_host = strdup(conf_value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_USE_COMPRESSION)) {\n\t\t\t\tif (sscanf(conf_value, \"%u\", &uvalue) == 1)\n\t\t\t\t\tpbs_conf.pbs_use_compression = ((uvalue > 0) ? 1 : 0);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_USE_MCAST)) {\n\t\t\t\tif (sscanf(conf_value, \"%u\", &uvalue) == 1)\n\t\t\t\t\tpbs_conf.pbs_use_mcast = ((uvalue > 0) ? 
1 : 0);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_LEAF_NAME)) {\n\t\t\t\tif (pbs_conf.pbs_leaf_name)\n\t\t\t\t\tfree(pbs_conf.pbs_leaf_name);\n\t\t\t\tpbs_conf.pbs_leaf_name = strdup(conf_value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_LEAF_ROUTERS)) {\n\t\t\t\tif (pbs_conf.pbs_leaf_routers)\n\t\t\t\t\tfree(pbs_conf.pbs_leaf_routers);\n\t\t\t\tpbs_conf.pbs_leaf_routers = strdup(conf_value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_COMM_NAME)) {\n\t\t\t\tif (pbs_conf.pbs_comm_name)\n\t\t\t\t\tfree(pbs_conf.pbs_comm_name);\n\t\t\t\tpbs_conf.pbs_comm_name = strdup(conf_value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_COMM_ROUTERS)) {\n\t\t\t\tif (pbs_conf.pbs_comm_routers)\n\t\t\t\t\tfree(pbs_conf.pbs_comm_routers);\n\t\t\t\tpbs_conf.pbs_comm_routers = strdup(conf_value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_COMM_THREADS)) {\n\t\t\t\tif (sscanf(conf_value, \"%u\", &uvalue) == 1)\n\t\t\t\t\tpbs_conf.pbs_comm_threads = uvalue;\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_COMM_LOG_EVENTS)) {\n\t\t\t\tif (sscanf(conf_value, \"%u\", &uvalue) == 1)\n\t\t\t\t\tpbs_conf.pbs_comm_log_events = uvalue;\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_HOME)) {\n\t\t\t\tfree(pbs_conf.pbs_home_path);\n\t\t\t\tpbs_conf.pbs_home_path = shorten_and_cleanup_path(conf_value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_EXEC)) {\n\t\t\t\tfree(pbs_conf.pbs_exec_path);\n\t\t\t\tpbs_conf.pbs_exec_path = shorten_and_cleanup_path(conf_value);\n\t\t\t}\n\t\t\t/* Check for PBS_DEFAULT for backward compatibility */\n\t\t\telse if (!strcmp(conf_name, PBS_CONF_DEFAULT_NAME)) {\n\t\t\t\tfree(pbs_conf.pbs_server_name);\n\t\t\t\tpbs_conf.pbs_server_name = strdup(conf_value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_SERVER_NAME)) {\n\t\t\t\tfree(pbs_conf.pbs_server_name);\n\t\t\t\tpbs_conf.pbs_server_name = strdup(conf_value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_RCP)) {\n\t\t\t\tfree(pbs_conf.rcp_path);\n\t\t\t\tpbs_conf.rcp_path = 
shorten_and_cleanup_path(conf_value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_SCP)) {\n\t\t\t\tfree(pbs_conf.scp_path);\n\t\t\t\tpbs_conf.scp_path = shorten_and_cleanup_path(conf_value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_SCP_ARGS)) {\n\t\t\t\tfree(pbs_conf.scp_args);\n\t\t\t\tpbs_conf.scp_args = strdup(conf_value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_CP)) {\n\t\t\t\tfree(pbs_conf.cp_path);\n\t\t\t\tpbs_conf.cp_path = shorten_and_cleanup_path(conf_value);\n\t\t\t}\n\t\t\t/* rcp_path can be inferred from pbs_conf.pbs_exec_path - see below */\n\t\t\t/* pbs_demux_path is inferred from pbs_conf.pbs_exec_path - see below */\n\t\t\telse if (!strcmp(conf_name, PBS_CONF_ENVIRONMENT)) {\n\t\t\t\tfree(pbs_conf.pbs_environment);\n\t\t\t\tpbs_conf.pbs_environment = shorten_and_cleanup_path(conf_value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_PRIMARY)) {\n\t\t\t\tfree(pbs_conf.pbs_primary);\n\t\t\t\tpbs_conf.pbs_primary = strdup(conf_value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_SECONDARY)) {\n\t\t\t\tfree(pbs_conf.pbs_secondary);\n\t\t\t\tpbs_conf.pbs_secondary = strdup(conf_value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_MOM_HOME)) {\n\t\t\t\tfree(pbs_conf.pbs_mom_home);\n\t\t\t\tpbs_conf.pbs_mom_home = strdup(conf_value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_CORE_LIMIT)) {\n\t\t\t\tfree(pbs_conf.pbs_core_limit);\n\t\t\t\tpbs_conf.pbs_core_limit = strdup(conf_value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_SERVER_HOST_NAME)) {\n\t\t\t\tfree(pbs_conf.pbs_server_host_name);\n\t\t\t\tpbs_conf.pbs_server_host_name = strdup(conf_value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_PUBLIC_HOST_NAME)) {\n\t\t\t\tfree(pbs_conf.pbs_public_host_name);\n\t\t\t\tpbs_conf.pbs_public_host_name = strdup(conf_value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_MAIL_HOST_NAME)) {\n\t\t\t\tfree(pbs_conf.pbs_mail_host_name);\n\t\t\t\tpbs_conf.pbs_mail_host_name = strdup(conf_value);\n\t\t\t} else if (!strcmp(conf_name, 
PBS_CONF_SMTP_SERVER_NAME)) {\n\t\t\t\tfree(pbs_conf.pbs_smtp_server_name);\n\t\t\t\tpbs_conf.pbs_smtp_server_name = strdup(conf_value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_OUTPUT_HOST_NAME)) {\n\t\t\t\tfree(pbs_conf.pbs_output_host_name);\n\t\t\t\tpbs_conf.pbs_output_host_name = strdup(conf_value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_SCHEDULER_MODIFY_EVENT)) {\n\t\t\t\tif (sscanf(conf_value, \"%u\", &uvalue) == 1)\n\t\t\t\t\tpbs_conf.sched_modify_event = ((uvalue > 0) ? 1 : 0);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_MOM_NODE_NAME)) {\n\t\t\t\tfree(pbs_conf.pbs_mom_node_name);\n\t\t\t\tpbs_conf.pbs_mom_node_name = strdup(conf_value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_LOG_HIGHRES_TIMESTAMP)) {\n\t\t\t\tif (sscanf(conf_value, \"%u\", &uvalue) == 1)\n\t\t\t\t\tpbs_conf.pbs_log_highres_timestamp = ((uvalue > 0) ? 1 : 0);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_SCHED_THREADS)) {\n\t\t\t\tif (sscanf(conf_value, \"%u\", &uvalue) == 1)\n\t\t\t\t\tpbs_conf.pbs_sched_threads = uvalue;\n\t\t\t}\n#ifdef WIN32\n\t\t\telse if (!strcmp(conf_name, PBS_CONF_REMOTE_VIEWER)) {\n\t\t\t\tfree(pbs_conf.pbs_conf_remote_viewer);\n\t\t\t\tpbs_conf.pbs_conf_remote_viewer = strdup(conf_value);\n\t\t\t}\n#endif\n\t\t\telse if (!strcmp(conf_name, PBS_CONF_INTERACTIVE_AUTH_METHOD)) {\n\t\t\t\tchar *value = convert_string_to_lowercase(conf_value);\n\t\t\t\tif (value == NULL)\n\t\t\t\t\tgoto err;\n\t\t\t\tmemset(pbs_conf.interactive_auth_method, '\\0', sizeof(pbs_conf.interactive_auth_method));\n\t\t\t\tstrcpy(pbs_conf.interactive_auth_method, value);\n\t\t\t\tfree(value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_INTERACTIVE_ENCRYPT_METHOD)) {\n\t\t\t\tchar *value = convert_string_to_lowercase(conf_value);\n\t\t\t\tif (value == NULL)\n\t\t\t\t\tgoto err;\n\t\t\t\tmemset(pbs_conf.interactive_encrypt_method, '\\0', sizeof(pbs_conf.interactive_encrypt_method));\n\t\t\t\tstrcpy(pbs_conf.interactive_encrypt_method, 
value);\n\t\t\t\tfree(value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_AUTH)) {\n\t\t\t\tchar *value = convert_string_to_lowercase(conf_value);\n\t\t\t\tif (value == NULL)\n\t\t\t\t\tgoto err;\n\t\t\t\tmemset(pbs_conf.auth_method, '\\0', sizeof(pbs_conf.auth_method));\n\t\t\t\tstrcpy(pbs_conf.auth_method, value);\n\t\t\t\tfree(value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_ENCRYPT_METHOD)) {\n\t\t\t\tchar *value = convert_string_to_lowercase(conf_value);\n\t\t\t\tif (value == NULL)\n\t\t\t\t\tgoto err;\n\t\t\t\tmemset(pbs_conf.encrypt_method, '\\0', sizeof(pbs_conf.encrypt_method));\n\t\t\t\tstrcpy(pbs_conf.encrypt_method, value);\n\t\t\t\tfree(value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_SUPPORTED_AUTH_METHODS)) {\n\t\t\t\tchar *value = convert_string_to_lowercase(conf_value);\n\t\t\t\tif (value == NULL)\n\t\t\t\t\tgoto err;\n\t\t\t\tpbs_conf.supported_auth_methods = break_comma_list(value);\n\t\t\t\tif (pbs_conf.supported_auth_methods == NULL) {\n\t\t\t\t\tfree(value);\n\t\t\t\t\tgoto err;\n\t\t\t\t}\n\t\t\t\tfree(value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_AUTH_SERVICE_USERS)) {\n\t\t\t\tpbs_conf.auth_service_users = break_comma_list(conf_value);\n\t\t\t\tif (pbs_conf.auth_service_users == NULL) {\n\t\t\t\t\tgoto err;\n\t\t\t\t}\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_DAEMON_SERVICE_USER)) {\n\t\t\t\tfree(pbs_conf.pbs_daemon_service_user);\n\t\t\t\tpbs_conf.pbs_daemon_service_user = strdup(conf_value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_DAEMON_SERVICE_AUTH_USER)) {\n\t\t\t\tfree(pbs_conf.pbs_daemon_service_auth_user);\n\t\t\t\tpbs_conf.pbs_daemon_service_auth_user = strdup(conf_value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_PRIVILEGED_AUTH_USER)) {\n\t\t\t\tfree(pbs_conf.pbs_privileged_auth_user);\n\t\t\t\tpbs_conf.pbs_privileged_auth_user = strdup(conf_value);\n\t\t\t} else if (!strcmp(conf_name, PBS_CONF_GSS_USER_CREDENTIALS_BIN)) 
{\n\t\t\t\tfree(pbs_conf.pbs_gss_user_creds_bin);\n\t\t\t\tpbs_conf.pbs_gss_user_creds_bin = strdup(conf_value);\n\t\t\t}\n\t\t\t/* iff_path is inferred from pbs_conf.pbs_exec_path - see below */\n\t\t}\n\t\tfclose(fp);\n\t\tfree(pbs_loadconf_buf);\n\t\tpbs_loadconf_buf = NULL;\n\t\tpbs_loadconf_len = 0;\n\t}\n\n\t/*\n\t * Next, check the environment variables and set values accordingly\n\t * overriding those that were set in the configuration file.\n\t */\n\n\tif ((gvalue = getenv(PBS_CONF_START_SERVER)) != NULL) {\n\t\tif (sscanf(gvalue, \"%u\", &uvalue) == 1)\n\t\t\tpbs_conf.start_server = ((uvalue > 0) ? 1 : 0);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_START_MOM)) != NULL) {\n\t\tif (sscanf(gvalue, \"%u\", &uvalue) == 1)\n\t\t\tpbs_conf.start_mom = ((uvalue > 0) ? 1 : 0);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_START_SCHED)) != NULL) {\n\t\tif (sscanf(gvalue, \"%u\", &uvalue) == 1)\n\t\t\tpbs_conf.start_sched = ((uvalue > 0) ? 1 : 0);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_START_COMM)) != NULL) {\n\t\tif (sscanf(gvalue, \"%u\", &uvalue) == 1)\n\t\t\tpbs_conf.start_comm = ((uvalue > 0) ? 1 : 0);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_LOCALLOG)) != NULL) {\n\t\tif (sscanf(gvalue, \"%u\", &uvalue) == 1)\n\t\t\tpbs_conf.locallog = ((uvalue > 0) ? 1 : 0);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_SYSLOG)) != NULL) {\n\t\tif (sscanf(gvalue, \"%u\", &uvalue) == 1)\n\t\t\tpbs_conf.syslogfac = ((uvalue <= (23 << 3)) ? uvalue : 0);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_SYSLOGSEVR)) != NULL) {\n\t\tif (sscanf(gvalue, \"%u\", &uvalue) == 1)\n\t\t\tpbs_conf.syslogsvr = ((uvalue <= 7) ? uvalue : 0);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_BATCH_SERVICE_PORT)) != NULL) {\n\t\tif (sscanf(gvalue, \"%u\", &uvalue) == 1)\n\t\t\tpbs_conf.batch_service_port =\n\t\t\t\t((uvalue <= 65535) ? 
uvalue : pbs_conf.batch_service_port);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_BATCH_SERVICE_PORT_DIS)) != NULL) {\n\t\tif (sscanf(gvalue, \"%u\", &uvalue) == 1)\n\t\t\tpbs_conf.batch_service_port_dis =\n\t\t\t\t((uvalue <= 65535) ? uvalue : pbs_conf.batch_service_port_dis);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_MOM_SERVICE_PORT)) != NULL) {\n\t\tif (sscanf(gvalue, \"%u\", &uvalue) == 1)\n\t\t\tpbs_conf.mom_service_port =\n\t\t\t\t((uvalue <= 65535) ? uvalue : pbs_conf.mom_service_port);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_MANAGER_SERVICE_PORT)) != NULL) {\n\t\tif (sscanf(gvalue, \"%u\", &uvalue) == 1)\n\t\t\tpbs_conf.manager_service_port =\n\t\t\t\t((uvalue <= 65535) ? uvalue : pbs_conf.manager_service_port);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_HOME)) != NULL) {\n\t\tfree(pbs_conf.pbs_home_path);\n\t\tpbs_conf.pbs_home_path = shorten_and_cleanup_path(gvalue);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_EXEC)) != NULL) {\n\t\tfree(pbs_conf.pbs_exec_path);\n\t\tpbs_conf.pbs_exec_path = shorten_and_cleanup_path(gvalue);\n\t}\n\t/* Check for PBS_DEFAULT for backward compatibility */\n\tif ((gvalue = getenv(PBS_CONF_DEFAULT_NAME)) != NULL) {\n\t\tfree(pbs_conf.pbs_server_name);\n\t\tif ((pbs_conf.pbs_server_name = strdup(gvalue)) == NULL) {\n\t\t\tgoto err;\n\t\t}\n\t}\n\tif ((gvalue = getenv(PBS_CONF_SERVER_NAME)) != NULL) {\n\t\tfree(pbs_conf.pbs_server_name);\n\t\tif ((pbs_conf.pbs_server_name = strdup(gvalue)) == NULL) {\n\t\t\tgoto err;\n\t\t}\n\t}\n\tif ((gvalue = getenv(PBS_CONF_RCP)) != NULL) {\n\t\tfree(pbs_conf.rcp_path);\n\t\tpbs_conf.rcp_path = shorten_and_cleanup_path(gvalue);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_SCP)) != NULL) {\n\t\tfree(pbs_conf.scp_path);\n\t\tpbs_conf.scp_path = shorten_and_cleanup_path(gvalue);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_SCP_ARGS)) != NULL) {\n\t\tfree(pbs_conf.scp_args);\n\t\tpbs_conf.scp_args = strdup(gvalue);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_CP)) != NULL) {\n\t\tfree(pbs_conf.cp_path);\n\t\tpbs_conf.cp_path = 
shorten_and_cleanup_path(gvalue);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_PRIMARY)) != NULL) {\n\t\tfree(pbs_conf.pbs_primary);\n\t\tif ((pbs_conf.pbs_primary = strdup(gvalue)) == NULL) {\n\t\t\tgoto err;\n\t\t}\n\t}\n\tif ((gvalue = getenv(PBS_CONF_SECONDARY)) != NULL) {\n\t\tfree(pbs_conf.pbs_secondary);\n\t\tif ((pbs_conf.pbs_secondary = strdup(gvalue)) == NULL) {\n\t\t\tgoto err;\n\t\t}\n\t}\n\tif ((gvalue = getenv(PBS_CONF_MOM_HOME)) != NULL) {\n\t\tfree(pbs_conf.pbs_mom_home);\n\t\tif ((pbs_conf.pbs_mom_home = strdup(gvalue)) == NULL) {\n\t\t\tgoto err;\n\t\t}\n\t}\n\tif ((gvalue = getenv(PBS_CONF_CORE_LIMIT)) != NULL) {\n\t\tfree(pbs_conf.pbs_core_limit);\n\t\tif ((pbs_conf.pbs_core_limit = strdup(gvalue)) == NULL) {\n\t\t\tgoto err;\n\t\t}\n\t}\n\tif ((gvalue = getenv(PBS_CONF_DATA_SERVICE_HOST)) != NULL) {\n\t\tfree(pbs_conf.pbs_data_service_host);\n\t\tif ((pbs_conf.pbs_data_service_host = strdup(gvalue)) == NULL) {\n\t\t\tgoto err;\n\t\t}\n\t}\n\tif ((gvalue = getenv(PBS_CONF_USE_COMPRESSION)) != NULL) {\n\t\tif (sscanf(gvalue, \"%u\", &uvalue) == 1)\n\t\t\tpbs_conf.pbs_use_compression = ((uvalue > 0) ? 1 : 0);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_USE_MCAST)) != NULL) {\n\t\tif (sscanf(gvalue, \"%u\", &uvalue) == 1)\n\t\t\tpbs_conf.pbs_use_mcast = ((uvalue > 0) ? 
1 : 0);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_LEAF_NAME)) != NULL) {\n\t\tif (pbs_conf.pbs_leaf_name)\n\t\t\tfree(pbs_conf.pbs_leaf_name);\n\t\tpbs_conf.pbs_leaf_name = strdup(gvalue);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_LEAF_ROUTERS)) != NULL) {\n\t\tif (pbs_conf.pbs_leaf_routers)\n\t\t\tfree(pbs_conf.pbs_leaf_routers);\n\t\tpbs_conf.pbs_leaf_routers = strdup(gvalue);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_COMM_NAME)) != NULL) {\n\t\tif (pbs_conf.pbs_comm_name)\n\t\t\tfree(pbs_conf.pbs_comm_name);\n\t\tpbs_conf.pbs_comm_name = strdup(gvalue);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_COMM_ROUTERS)) != NULL) {\n\t\tif (pbs_conf.pbs_comm_routers)\n\t\t\tfree(pbs_conf.pbs_comm_routers);\n\t\tpbs_conf.pbs_comm_routers = strdup(gvalue);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_COMM_THREADS)) != NULL) {\n\t\tif (sscanf(gvalue, \"%u\", &uvalue) == 1)\n\t\t\tpbs_conf.pbs_comm_threads = uvalue;\n\t}\n\tif ((gvalue = getenv(PBS_CONF_COMM_LOG_EVENTS)) != NULL) {\n\t\tif (sscanf(gvalue, \"%u\", &uvalue) == 1)\n\t\t\tpbs_conf.pbs_comm_log_events = uvalue;\n\t}\n\tif ((gvalue = getenv(PBS_CONF_DATA_SERVICE_PORT)) != NULL) {\n\t\tif (sscanf(gvalue, \"%u\", &uvalue) == 1)\n\t\t\tpbs_conf.pbs_data_service_port =\n\t\t\t\t((uvalue <= 65535) ? 
uvalue : pbs_conf.pbs_data_service_port);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_SERVER_HOST_NAME)) != NULL) {\n\t\tfree(pbs_conf.pbs_server_host_name);\n\t\tpbs_conf.pbs_server_host_name = strdup(gvalue);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_PUBLIC_HOST_NAME)) != NULL) {\n\t\tfree(pbs_conf.pbs_public_host_name);\n\t\tpbs_conf.pbs_public_host_name = strdup(gvalue);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_MAIL_HOST_NAME)) != NULL) {\n\t\tfree(pbs_conf.pbs_mail_host_name);\n\t\tpbs_conf.pbs_mail_host_name = strdup(gvalue);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_SMTP_SERVER_NAME)) != NULL) {\n\t\tfree(pbs_conf.pbs_smtp_server_name);\n\t\tpbs_conf.pbs_smtp_server_name = strdup(gvalue);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_OUTPUT_HOST_NAME)) != NULL) {\n\t\tfree(pbs_conf.pbs_output_host_name);\n\t\tpbs_conf.pbs_output_host_name = strdup(gvalue);\n\t}\n\n\t/* support PBS_MOM_NODE_NAME to tell MOM natural node name on server */\n\tif ((gvalue = getenv(PBS_CONF_MOM_NODE_NAME)) != NULL) {\n\t\tfree(pbs_conf.pbs_mom_node_name);\n\t\tpbs_conf.pbs_mom_node_name = strdup(gvalue);\n\t}\n\n\t/* rcp_path is inferred from pbs_conf.pbs_exec_path - see below */\n\t/* pbs_demux_path is inferred from pbs_conf.pbs_exec_path - see below */\n\tif ((gvalue = getenv(PBS_CONF_ENVIRONMENT)) != NULL) {\n\t\tfree(pbs_conf.pbs_environment);\n\t\tpbs_conf.pbs_environment = shorten_and_cleanup_path(gvalue);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_LOG_HIGHRES_TIMESTAMP)) != NULL) {\n\t\tif (sscanf(gvalue, \"%u\", &uvalue) == 1)\n\t\t\tpbs_conf.pbs_log_highres_timestamp = ((uvalue > 0) ? 
1 : 0);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_SCHED_THREADS)) != NULL) {\n\t\tif (sscanf(gvalue, \"%u\", &uvalue) == 1)\n\t\t\tpbs_conf.pbs_sched_threads = uvalue;\n\t}\n\n\tif ((gvalue = getenv(PBS_CONF_DAEMON_SERVICE_USER)) != NULL) {\n\t\tfree(pbs_conf.pbs_daemon_service_user);\n\t\tpbs_conf.pbs_daemon_service_user = strdup(gvalue);\n\t}\n\n\tif ((gvalue = getenv(PBS_CONF_DAEMON_SERVICE_AUTH_USER)) != NULL) {\n\t\tfree(pbs_conf.pbs_daemon_service_auth_user);\n\t\tpbs_conf.pbs_daemon_service_auth_user = strdup(gvalue);\n\t}\n\n\tif ((gvalue = getenv(PBS_CONF_PRIVILEGED_AUTH_USER)) != NULL) {\n\t\tfree(pbs_conf.pbs_privileged_auth_user);\n\t\tpbs_conf.pbs_privileged_auth_user = strdup(gvalue);\n\t}\n\n\tif ((gvalue = getenv(PBS_CONF_GSS_USER_CREDENTIALS_BIN)) != NULL) {\n\t\tfree(pbs_conf.pbs_gss_user_creds_bin);\n\t\tpbs_conf.pbs_gss_user_creds_bin = strdup(gvalue);\n\t}\n\n#ifdef WIN32\n\tif ((gvalue = getenv(PBS_CONF_REMOTE_VIEWER)) != NULL) {\n\t\tfree(pbs_conf.pbs_conf_remote_viewer);\n\t\tpbs_conf.pbs_conf_remote_viewer = strdup(gvalue);\n\t}\n#endif\n\n\t/* iff_path is inferred from pbs_conf.pbs_exec_path - see below */\n\n\t/*\n\t * Now that we have parsed through the configuration file and the\n\t * environment variables, check to make sure that all the critical\n\t * items are set.\n\t */\n\n\tbuf[0] = '\\0';\n\tif (pbs_conf.pbs_home_path == NULL)\n\t\tstrcat(buf, PBS_CONF_HOME);\n\tif (pbs_conf.pbs_exec_path == NULL) {\n\t\tif (buf[0] != '\\0')\n\t\t\tstrcat(buf, \" \");\n\t\tstrcat(buf, PBS_CONF_EXEC);\n\t}\n\tif (pbs_conf.pbs_server_name == NULL) {\n\t\tif (buf[0] != '\\0')\n\t\t\tstrcat(buf, \" \");\n\t\tstrcat(buf, PBS_CONF_SERVER_NAME);\n\t}\n\tif (buf[0] != '\\0') {\n\t\tfprintf(stderr, \"pbsconf error: pbs conf variables not found: %s\\n\", buf);\n\t\tgoto err;\n\t}\n\n\t/*\n\t * Perform sanity checks on PBS_*_HOST_NAME values and PBS_CONF_SMTP_SERVER_NAME.\n\t * See IDD for SPID 4534.\n\t */\n\tbuf[0] = '\\0';\n\tif 
((pbs_conf.pbs_server_host_name != NULL) &&\n\t    (strchr(pbs_conf.pbs_server_host_name, ':') != NULL))\n\t\tstrcpy(buf, PBS_CONF_SERVER_HOST_NAME);\n\telse if ((pbs_conf.pbs_public_host_name != NULL) &&\n\t\t (strchr(pbs_conf.pbs_public_host_name, ':') != NULL))\n\t\tstrcpy(buf, PBS_CONF_PUBLIC_HOST_NAME);\n\telse if ((pbs_conf.pbs_mail_host_name != NULL) &&\n\t\t (strchr(pbs_conf.pbs_mail_host_name, ':') != NULL))\n\t\tstrcpy(buf, PBS_CONF_MAIL_HOST_NAME);\n\telse if ((pbs_conf.pbs_smtp_server_name != NULL) &&\n\t\t (strchr(pbs_conf.pbs_smtp_server_name, ':') != NULL))\n\t\tstrcpy(buf, PBS_CONF_SMTP_SERVER_NAME);\n\telse if ((pbs_conf.pbs_output_host_name != NULL) &&\n\t\t (strchr(pbs_conf.pbs_output_host_name, ':') != NULL))\n\t\tstrcpy(buf, PBS_CONF_OUTPUT_HOST_NAME);\n\telse if ((pbs_conf.pbs_mom_node_name != NULL) &&\n\t\t (strchr(pbs_conf.pbs_mom_node_name, ':') != NULL))\n\t\tstrcpy(buf, PBS_CONF_MOM_NODE_NAME);\n\n\tif (buf[0] != '\\0') {\n\t\tfprintf(stderr, \"pbsconf error: illegal value for: %s\\n\", buf);\n\t\tgoto err;\n\t}\n\n\t/*\n\t * Finally, fill in the blanks for variables with inferred values.\n\t */\n\n\tif (pbs_conf.pbs_environment == NULL) {\n\t\t/* a reasonable default for the pbs_environment file is in pbs_home */\n\t\t/* strlen(\"/pbs_environment\") + '\\0' == 16 + 1 == 17 */\n\t\tif ((pbs_conf.pbs_environment =\n\t\t\t     malloc(strlen(pbs_conf.pbs_home_path) + 17)) != NULL) {\n\t\t\tsprintf(pbs_conf.pbs_environment, \"%s/pbs_environment\",\n\t\t\t\tpbs_conf.pbs_home_path);\n\t\t\tfix_path(pbs_conf.pbs_environment, 1);\n\t\t} else {\n\t\t\tgoto err;\n\t\t}\n\t}\n\n\tfree(pbs_conf.iff_path);\n\t/* strlen(\"/sbin/pbs_iff\") + '\\0' == 13 + 1 == 14 */\n\tif ((pbs_conf.iff_path =\n\t\t     malloc(strlen(pbs_conf.pbs_exec_path) + 14)) != NULL) {\n\t\tsprintf(pbs_conf.iff_path, \"%s/sbin/pbs_iff\", pbs_conf.pbs_exec_path);\n\t\tfix_path(pbs_conf.iff_path, 1);\n\t} else {\n\t\tgoto err;\n\t}\n\n\tif (pbs_conf.rcp_path == NULL) {\n\t\tif 
((pbs_conf.rcp_path =\n\t\t\t     malloc(strlen(pbs_conf.pbs_exec_path) + 14)) != NULL) {\n\t\t\tsprintf(pbs_conf.rcp_path, \"%s/sbin/pbs_rcp\", pbs_conf.pbs_exec_path);\n\t\t\tfix_path(pbs_conf.rcp_path, 1);\n\t\t} else {\n\t\t\tgoto err;\n\t\t}\n\t}\n\tif (pbs_conf.cp_path == NULL) {\n#ifdef WIN32\n\t\tchar *cmd = \"xcopy\";\n#else\n\t\tchar *cmd = \"/bin/cp\";\n#endif\n\t\tpbs_conf.cp_path = strdup(cmd);\n\t\tif (pbs_conf.cp_path == NULL) {\n\t\t\tgoto err;\n\t\t}\n\t}\n\n\tfree(pbs_conf.pbs_demux_path);\n\t/* strlen(\"/sbin/pbs_demux\") + '\\0' == 15 + 1 == 16 */\n\tif ((pbs_conf.pbs_demux_path =\n\t\t     malloc(strlen(pbs_conf.pbs_exec_path) + 16)) != NULL) {\n\t\tsprintf(pbs_conf.pbs_demux_path, \"%s/sbin/pbs_demux\",\n\t\t\tpbs_conf.pbs_exec_path);\n\t\tfix_path(pbs_conf.pbs_demux_path, 1);\n\t} else {\n\t\tgoto err;\n\t}\n\n\tif ((gvalue = getenv(PBS_CONF_INTERACTIVE_AUTH_METHOD)) != NULL) {\n\t\tchar *value = convert_string_to_lowercase(gvalue);\n\t\tif (value == NULL)\n\t\t\tgoto err;\n\t\tmemset(pbs_conf.interactive_auth_method, '\\0', sizeof(pbs_conf.interactive_auth_method));\n\t\tstrcpy(pbs_conf.interactive_auth_method, value);\n\t\tfree(value);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_INTERACTIVE_ENCRYPT_METHOD)) != NULL) {\n\t\tchar *value = convert_string_to_lowercase(gvalue);\n\t\tensure_string_not_null(&value); /* allow unsetting */\n\t\tif (value == NULL)\n\t\t\tgoto err;\n\t\tmemset(pbs_conf.interactive_encrypt_method, '\\0', sizeof(pbs_conf.interactive_encrypt_method));\n\t\tstrcpy(pbs_conf.interactive_encrypt_method, value);\n\t\tfree(value);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_AUTH)) != NULL) {\n\t\tchar *value = convert_string_to_lowercase(gvalue);\n\t\tif (value == NULL)\n\t\t\tgoto err;\n\t\tmemset(pbs_conf.auth_method, '\\0', sizeof(pbs_conf.auth_method));\n\t\tstrcpy(pbs_conf.auth_method, value);\n\t\tfree(value);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_ENCRYPT_METHOD)) != NULL) {\n\t\tchar *value = 
convert_string_to_lowercase(gvalue);\n\t\tensure_string_not_null(&value); /* allow unsetting */\n\t\tif (value == NULL)\n\t\t\tgoto err;\n\t\tmemset(pbs_conf.encrypt_method, '\\0', sizeof(pbs_conf.encrypt_method));\n\t\tstrcpy(pbs_conf.encrypt_method, value);\n\t\tfree(value);\n\t}\n\tif ((gvalue = getenv(PBS_CONF_SUPPORTED_AUTH_METHODS)) != NULL) {\n\t\tchar *value = convert_string_to_lowercase(gvalue);\n\t\tif (value == NULL)\n\t\t\tgoto err;\n\t\tfree_string_array(pbs_conf.supported_auth_methods);\n\t\tpbs_conf.supported_auth_methods = break_comma_list(value);\n\t\tif (pbs_conf.supported_auth_methods == NULL) {\n\t\t\tfree(value);\n\t\t\tgoto err;\n\t\t}\n\t\tfree(value);\n\t}\n\tif (pbs_conf.supported_auth_methods == NULL) {\n\t\tpbs_conf.supported_auth_methods = break_comma_list(AUTH_RESVPORT_NAME);\n\t\tif (pbs_conf.supported_auth_methods == NULL) {\n\t\t\tgoto err;\n\t\t}\n\t}\n\tif ((gvalue = getenv(PBS_CONF_AUTH_SERVICE_USERS)) != NULL) {\n\t\tchar *value = convert_string_to_lowercase(gvalue);\n\t\tif (value == NULL)\n\t\t\tgoto err;\n\t\tfree_string_array(pbs_conf.auth_service_users);\n\t\tpbs_conf.auth_service_users = break_comma_list(value);\n\t\tif (pbs_conf.auth_service_users == NULL) {\n\t\t\tfree(value);\n\t\t\tgoto err;\n\t\t}\n\t\tfree(value);\n\t}\n\tif (pbs_conf.auth_service_users == NULL) {\n\t\tpbs_conf.auth_service_users = break_comma_list(\"root\");\n\t\tif (pbs_conf.auth_service_users == NULL) {\n\t\t\tgoto err;\n\t\t}\n\t}\n\tif (pbs_conf.encrypt_method[0] != '\\0') {\n\t\t/* encryption is not disabled, validate encrypt method */\n\t\tif (is_valid_encrypt_method(pbs_conf.encrypt_method) != 1) {\n\t\t\tfprintf(stderr, \"The given PBS_ENCRYPT_METHOD = %s does not support encrypt/decrypt of data\\n\", pbs_conf.encrypt_method);\n\t\t\tgoto err;\n\t\t}\n\t}\n\tif (pbs_conf.interactive_encrypt_method[0] != '\\0') {\n\t\t/* encryption is not disabled, validate encrypt method */\n\t\tif (is_valid_encrypt_method(pbs_conf.interactive_encrypt_method) 
!= 1) {\n\t\t\tfprintf(stderr, \"The given PBS_INTERACTIVE_ENCRYPT_METHOD = %s does not support encrypt/decrypt of data\\n\", pbs_conf.interactive_encrypt_method);\n\t\t\tgoto err;\n\t\t}\n\t}\n\n\tpbs_conf.pbs_tmpdir = pbs_get_tmpdir();\n\n\t/* if routers has null value populate with server name as the default */\n\tif (pbs_conf.pbs_leaf_routers == NULL) {\n\t\tif (pbs_conf.pbs_primary && pbs_conf.pbs_secondary) {\n\t\t\tpbs_conf.pbs_leaf_routers = malloc(strlen(pbs_conf.pbs_primary) + strlen(pbs_conf.pbs_secondary) + 2);\n\t\t\tif (pbs_conf.pbs_leaf_routers == NULL) {\n\t\t\t\tfprintf(stderr, \"Out of memory\\n\");\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t\tsprintf(pbs_conf.pbs_leaf_routers, \"%s,%s\", pbs_conf.pbs_primary, pbs_conf.pbs_secondary);\n\t\t} else {\n\t\t\tif (pbs_conf.pbs_server_host_name) {\n\t\t\t\tpbs_conf.pbs_leaf_routers = strdup(pbs_conf.pbs_server_host_name);\n\t\t\t} else if (pbs_conf.pbs_server_name) {\n\t\t\t\tpbs_conf.pbs_leaf_routers = strdup(pbs_conf.pbs_server_name);\n\t\t\t} else {\n\t\t\t\tfprintf(stderr, \"PBS server undefined\\n\");\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t\tif (pbs_conf.pbs_leaf_routers == NULL) {\n\t\t\t\tfprintf(stderr, \"Out of memory\\n\");\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t}\n\t}\n\n\t/* determine who we are */\n\tpbs_current_uid = getuid();\n\tif ((pw = getpwuid(pbs_current_uid)) == NULL) {\n\t\tgoto err;\n\t}\n\tif (strlen(pw->pw_name) > (PBS_MAXUSER - 1)) {\n\t\tgoto err;\n\t}\n\tstrcpy(pbs_conf.current_user, pw->pw_name);\n\n\tpbs_conf.loaded = 1;\n\n\tif (pbs_client_thread_unlock_conf() != 0)\n\t\treturn 0;\n\n\treturn 1; /* success */\n\nerr:\n\tif (pbs_conf.pbs_conf_file) {\n\t\tfree(pbs_conf.pbs_conf_file);\n\t\tpbs_conf.pbs_conf_file = NULL;\n\t}\n\tif (pbs_conf.pbs_data_service_host) {\n\t\tfree(pbs_conf.pbs_data_service_host);\n\t\tpbs_conf.pbs_data_service_host = NULL;\n\t}\n\tif (pbs_conf.pbs_home_path) {\n\t\tfree(pbs_conf.pbs_home_path);\n\t\tpbs_conf.pbs_home_path = NULL;\n\t}\n\tif (pbs_conf.pbs_exec_path) 
{\n\t\tfree(pbs_conf.pbs_exec_path);\n\t\tpbs_conf.pbs_exec_path = NULL;\n\t}\n\tif (pbs_conf.pbs_server_name) {\n\t\tfree(pbs_conf.pbs_server_name);\n\t\tpbs_conf.pbs_server_name = NULL;\n\t}\n\tif (pbs_conf.rcp_path) {\n\t\tfree(pbs_conf.rcp_path);\n\t\tpbs_conf.rcp_path = NULL;\n\t}\n\tif (pbs_conf.scp_path) {\n\t\tfree(pbs_conf.scp_path);\n\t\tpbs_conf.scp_path = NULL;\n\t}\n\tif (pbs_conf.scp_args) {\n\t\tfree(pbs_conf.scp_args);\n\t\tpbs_conf.scp_args = NULL;\n\t}\n\tif (pbs_conf.cp_path) {\n\t\tfree(pbs_conf.cp_path);\n\t\tpbs_conf.cp_path = NULL;\n\t}\n\tif (pbs_conf.pbs_environment) {\n\t\tfree(pbs_conf.pbs_environment);\n\t\tpbs_conf.pbs_environment = NULL;\n\t}\n\tif (pbs_conf.pbs_primary) {\n\t\tfree(pbs_conf.pbs_primary);\n\t\tpbs_conf.pbs_primary = NULL;\n\t}\n\tif (pbs_conf.pbs_secondary) {\n\t\tfree(pbs_conf.pbs_secondary);\n\t\tpbs_conf.pbs_secondary = NULL;\n\t}\n\tif (pbs_conf.pbs_mom_home) {\n\t\tfree(pbs_conf.pbs_mom_home);\n\t\tpbs_conf.pbs_mom_home = NULL;\n\t}\n\tif (pbs_conf.pbs_core_limit) {\n\t\tfree(pbs_conf.pbs_core_limit);\n\t\tpbs_conf.pbs_core_limit = NULL;\n\t}\n\tif (pbs_conf.supported_auth_methods) {\n\t\tfree_string_array(pbs_conf.supported_auth_methods);\n\t\tpbs_conf.supported_auth_methods = NULL;\n\t}\n\tif (pbs_conf.auth_service_users) {\n\t\tfree_string_array(pbs_conf.auth_service_users);\n\t\tpbs_conf.auth_service_users = NULL;\n\t}\n\n\tpbs_conf.load_failed = 1;\n\t(void) pbs_client_thread_unlock_conf();\n\treturn 0;\n}\n\n/**\n * @brief\n *\tpbs_get_tmpdir - Identify the configured tmpdir location\n *\n * @return char *\n * @retval !NULL pointer to the tmpdir string\n * @retval NULL failure\n */\nchar *\npbs_get_tmpdir(void)\n{\n\tFILE *fp = NULL;\n\tchar *tmpdir = NULL;\n\tchar *conf_file = NULL;\n\tchar *conf_name = NULL;\n\tchar *conf_value = NULL;\n\tchar *p = NULL;\n#ifdef WIN32\n\tstruct stat sb;\n#endif\n\n\t/* If pbs_conf has already been populated, use that value. 
*/\n\tif ((pbs_conf.loaded != 0) && (pbs_conf.pbs_tmpdir != NULL))\n\t\treturn (pbs_conf.pbs_tmpdir);\n\n\t\t/* Next, try the environment. */\n#ifdef WIN32\n\tif ((p = getenv(\"TMP\")) != NULL)\n#else\n\tif ((p = getenv(\"TMPDIR\")) != NULL)\n#endif\n\t{\n\t\ttmpdir = shorten_and_cleanup_path(p);\n\t}\n\t/* PBS_TMPDIR overrides TMP or TMPDIR if set */\n\tif ((p = getenv(PBS_CONF_TMPDIR)) != NULL) {\n\t\tfree(tmpdir);\n\t\ttmpdir = shorten_and_cleanup_path(p);\n\t}\n\tif (tmpdir != NULL)\n\t\treturn tmpdir;\n\n\t/* Now try pbs.conf */\n\tconf_file = pbs_get_conf_file();\n\tif ((fp = fopen(conf_file, \"r\")) != NULL) {\n\t\twhile (parse_config_line(fp, &conf_name, &conf_value) != NULL) {\n\t\t\tif ((conf_name == NULL) || (*conf_name == '\\0'))\n\t\t\t\tcontinue;\n\t\t\tif ((conf_value == NULL) || (*conf_value == '\\0'))\n\t\t\t\tcontinue;\n\t\t\tif (!strcmp(conf_name, PBS_CONF_TMPDIR)) {\n\t\t\t\tfree(tmpdir);\n\t\t\t\ttmpdir = shorten_and_cleanup_path(conf_value);\n\t\t\t}\n\t\t}\n\t\tfclose(fp);\n\t}\n\tfree(conf_file);\n\tconf_file = NULL;\n\tif (tmpdir != NULL)\n\t\treturn tmpdir;\n\n\t\t/* Finally, resort to the default. */\n#ifdef WIN32\n\tif (stat(TMP_DIR, &sb) == 0) {\n\t\ttmpdir = shorten_and_cleanup_path(TMP_DIR);\n\t} else if (stat(\"C:\\\\WINDOWS\\\\TEMP\", &sb) == 0) {\n\t\ttmpdir = shorten_and_cleanup_path(\"C:\\\\WINDOWS\\\\TEMP\");\n\t}\n#else\n\ttmpdir = shorten_and_cleanup_path(TMP_DIR);\n#endif\n\tif (tmpdir == NULL) {\n\t\t/* strlen(\"/spool\") + '\\0' == 6 + 1 = 7 */\n\t\tif ((p = malloc(strlen(pbs_conf.pbs_home_path) + 7)) == NULL) {\n\t\t\treturn NULL;\n\t\t} else {\n\t\t\tsprintf(p, \"%s/spool\", pbs_conf.pbs_home_path);\n\t\t\ttmpdir = shorten_and_cleanup_path(p);\n\t\t\tfree(p);\n\t\t}\n\t}\n\t/* Strip the trailing separator. 
*/\n#ifdef WIN32\n\tif (tmpdir[strlen(tmpdir) - 1] == '\\\\')\n\t\ttmpdir[strlen(tmpdir) - 1] = '\\0';\n#else\n\tif (tmpdir[strlen(tmpdir) - 1] == '/')\n\t\ttmpdir[strlen(tmpdir) - 1] = '\\0';\n#endif\n\treturn tmpdir;\n}"
  },
  {
    "path": "src/lib/Libifl/pbs_quote_parse.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_quote_parse.c\n */\n#include <pbs_config.h>\n\n#include <ctype.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include \"pbs_ifl.h\"\n#include \"pbs_internal.h\"\n\n/**\n * @brief\n * \t-pbs_quote_parse - parse quoted value string according to BZ 6088 rules\n *\t1.  One of \" or ' may be used as quoting character\n *\t2.  characters must be printable as defined by \"isprint()\"\n *\t3.  '&' not accepted (reserved for future expansion).\n *\t4.  Comma is a token separator character unless quoted\n *\t5.  space is a token separator character unless quoted or unless\n *\t\t\"allow_white\" is true\n *\n * @param[in] in - the input string to parse\n * @param[out] out - ptr to where output value string is to be returned\n *\t\t     On non-error return.  
This is on the HEAP and freeing is\n *                   up to the caller; if function returns non-zero, \"out\" is\n *                   not valid and should not be freed\n * @param[out] endptr - ptr into original input where we left off processing\n * @param[in] allow_white - indication for whether to allow white space.\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t-1\tmalloc failure\n * @retval\t>0\tparse error\n *\t\t\tThe >0 value can be passed to \"pbs_parse_err_msg()\"\n *\n */\n\nint\npbs_quote_parse(char *in, char **out, char **endptr, int allow_white)\n{\n\tchar *d; /* destination ptr (into \"work\") */\n\tsize_t len;\n\tint nthchar;\n\tchar quotechar = '\\0'; /* character used for quoting */\n\tint quoting = 0;       /* true if performing quoting */\n\tchar *s;\t       /* working ptr into \"in\" */\n\tchar *work;\t       /* destination buffer for parsed out */\n\n\t*out = NULL;\n\t*endptr = NULL;\n\n\tif (in == NULL)\n\t\treturn -1;\n\tlen = strlen(in) + 1;\n\twork = calloc((size_t) 1, len); /* calloc used to zero area */\n\tif (work == NULL)\n\t\treturn -1;\n\n\td = work;\n\n\ts = in;\n\twhile (isspace((int) *s)) /* skip leading white space */\n\t\t++s;\n\n\tnthchar = 0;\n\twhile (*s != '\\0') {\n\n\t\t++nthchar;\n\n\t\tif (!isprint((int) *s) && !isspace((int) *s)) {\n\t\t\t*endptr = s;\n\t\t\tfree(work);\n\t\t\treturn 2; /* illegal character */\n\n\t\t} else if (quoting) {\n\n\t\t\tif (*s == quotechar) {\n\t\t\t\tquoting = 0; /* end of quoting */\n\t\t\t\t\t     /* allow quotes inside the quoted string */\n\t\t\t} else if (*s == '&') {\n\t\t\t\t*endptr = s;\n\t\t\t\tfree(work);\n\t\t\t\treturn 2; /* illegal character */\n\t\t\t} else {\n\t\t\t\t*d++ = *s;\n\t\t\t}\n\n\t\t} else if (((*s == '\"') || (*s == '\\'')) &&\n\t\t\t   ((allow_white == 0) || (nthchar == 1))) {\n\n\t\t\t/* start quoting */\n\t\t\tif ((quotechar != '\\0') && (quotechar != *s)) {\n\t\t\t\t/* cannot switch quoting char in mid stream */\n\t\t\t\t/* so this is a plain 
character */\n\t\t\t\t*d++ = *s;\n\t\t\t} else {\n\t\t\t\tquotechar = *s;\n\t\t\t\tquoting = 1;\n\t\t\t}\n\n\t\t} else if ((*s == ',') ||\n\t\t\t   (isspace((int) *s) && (allow_white == 0))) {\n\n\t\t\t/* hit a special (parsing) character */\n\t\t\t*endptr = s;\n\t\t\t*out = work;\n\t\t\treturn 0;\n\n\t\t} else { /* normal un-quoted */\n\n\t\t\t/* check for special illegal character */\n\t\t\tif (*s == '&') {\n\t\t\t\t*endptr = s;\n\t\t\t\tfree(work);\n\t\t\t\treturn 2;\n\t\t\t}\n\n\t\t\t*d++ = *s;\n\t\t}\n\n\t\ts++;\n\t}\n\t*endptr = s;\n\n\tif (quoting) {\n\t\tfree(work);\n\t\treturn 4; /* invalid quoting, end of string */\n\t}\n\n\t*out = work;\n\treturn 0;\n}\n\n/**\n * @brief\n *\t-pbs_parse_err_msges - global list of pbs parse error messages\n *\n * @note\n *\tmake sure the string length of any message does not\n *\texceed PBS_PARSE_ERR_MSG_LEN_MAX\n */\nconst char pbs_parse_err_msges[][PBS_PARSE_ERR_MSG_LEN_MAX + 1] = {\n\t\"illegal character\",\n\t\"improper quoting syntax\",\n\t\"no closing quote\"};\n\n/**\n * @brief\n *\t-pbs_parse_err_msg() - for a positive, non-zero error returned by\n *\tpbs_quote_parse(), return a pointer to an error message string\n *\tAccepted error numbers are 2 and greater; if not in this range,\n *\tthe string \"error\" is returned for the message\n *\n * @param[in] err - error number\n *\n * @return\tstring\n * @retval\terror msg string\tsuccess\n * @retval\t\"error\"\t\t\terror\n *\n */\nconst char *\npbs_parse_err_msg(int err)\n{\n\tint i;\n\ti = sizeof(pbs_parse_err_msges) / sizeof(pbs_parse_err_msges[0]);\n\tif ((err <= 1) || ((err - 1) > i))\n\t\treturn (\"error\");\n\telse\n\t\treturn (pbs_parse_err_msges[err - 2]);\n}\n\n/**\n * @brief\n * \t-pbs_prt_parse_err() - print an error message associated with a\n *\tparsing/syntax error detected by pbs_quote_parse()\n *\n * @par Note:\n *\tWrites to stderr; should not be used directly by a library function\n *\tor a daemon, only by user commands.\n *\n * @param[in] txt - 
error message\n * @param[in] str - option with argument\n * @param[in] offset - diff between txt and str\n * @param[in] err - error number\n *\n * @return\tVoid\n *\n */\nvoid\npbs_prt_parse_err(char *txt, char *str, int offset, int err)\n{\n\tint i;\n\tconst char *emsg;\n\n\temsg = pbs_parse_err_msg(err);\n\tfprintf(stderr, \"%s %s:\\n%s\\n\", txt, emsg, str);\n\tfor (i = 0; i < offset; ++i)\n\t\tputc((int) ' ', stderr);\n\tputc((int) '^', stderr);\n\tputc((int) '\\n', stderr);\n}\n"
  },
  {
    "path": "src/lib/Libifl/pbs_statfree.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_statfree.c\n * @brief\n * The function that deallocates a \"batch_status\" structure\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdlib.h>\n#include <stdio.h>\n#include \"libpbs.h\"\n\n/**\n * @brief\n *\t-The function that deallocates a \"batch_status\" structure\n *\n * @param[in] bsp - pointer to batch request.\n *\n * @return\tVoid\n *\n */\nvoid\n__pbs_statfree(struct batch_status *bsp)\n{\n\tstruct batch_status *bsnxt = NULL;\n\n\tfor (; bsp != NULL; bsp = bsnxt) {\n\t\tbsnxt = bsp->next;\n\t\tpbs_statfree_single(bsp);\n\t}\n}\n\n/**\n * @brief\tThere are times when we want to free just one batch_status from a list\n * \t\t\tof them and the original pbs_statfree() doesn't serve that purpose. 
So,\n * \t\t\tthis function was created to delete just one 'link' in a chain of batch_statuses\n *\n * \t@param[in/out]\tbsp - pointer to the batch_status which is being free'd\n */\nvoid\npbs_statfree_single(struct batch_status *bsp)\n{\n\tstruct attrl *atnxt;\n\tif (bsp != NULL) {\n\t\tfree(bsp->name);\n\t\tfree(bsp->text);\n\t\twhile (bsp->attribs != NULL) {\n\t\t\tif (bsp->attribs->name != NULL)\n\t\t\t\tfree(bsp->attribs->name);\n\t\t\tif (bsp->attribs->resource != NULL)\n\t\t\t\tfree(bsp->attribs->resource);\n\t\t\tif (bsp->attribs->value != NULL)\n\t\t\t\tfree(bsp->attribs->value);\n\t\t\tatnxt = bsp->attribs->next;\n\t\t\tfree(bsp->attribs);\n\t\t\tbsp->attribs = atnxt;\n\t\t}\n\t\tfree(bsp);\n\t}\n}\n"
  },
  {
    "path": "src/lib/Libifl/rm.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <unistd.h>\n#include <stdlib.h>\n#include <errno.h>\n#include <string.h>\n#include <fcntl.h>\n#include <sys/types.h>\n#include <sys/socket.h>\n#include <sys/param.h>\n#include <sys/time.h>\n#include <netdb.h>\n#include <netinet/in.h>\n#include <arpa/inet.h>\n\n#include \"pbs_ifl.h\"\n#include \"pbs_internal.h\"\n#include \"net_connect.h\"\n#include \"resmon.h\"\n#include \"log.h\"\n#include \"dis.h\"\n#include \"rm.h\"\n#include \"tpp.h\"\n#if defined(FD_SET_IN_SYS_SELECT_H)\n#include <sys/select.h>\n#endif\n\n/**\n * @file\trm.c\n */\nstatic int full = 1;\n\n/*\n **\tThis is the structure used to keep track of the resource\n **\tmonitor connections.  Each entry is linked into a list\n **\tpointed to by \"outs\".  If len is -1, no\n **\trequest is active.  If len is -2, a request has been\n **\tsent and is waiting to be read.  
If len is > 0, the number\n **\tindicates how much data is waiting to be sent.\n */\nstruct out {\n\tint stream;\n\tint len;\n\tstruct out *next;\n};\n\n#define HASHOUT 32\nstatic struct out *outs[HASHOUT];\n\n/**\n * @brief\n *\tCreate an \"out\" structure and put it in the hash table.\n *\n * @param[in] stream\tsocket descriptor\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t-1\terror\n */\nstatic int\naddrm(int stream)\n{\n\tstruct out *op, **head;\n\n\tif ((op = (struct out *) malloc(sizeof(struct out))) == NULL) {\n\t\tpbs_errno = errno;\n\t\treturn -1;\n\t}\n\n\thead = &outs[stream % HASHOUT];\n\top->stream = stream;\n\top->len = -1;\n\top->next = *head;\n\t*head = op;\n\treturn 0;\n}\n\n/**\n * @brief\n *\tConnects to a resource monitor and returns a file descriptor to\n *\ttalk to it.  If port is zero, use default port.\n *\n * @param[in] host - hostname\n * @param[in] port - port number\n *\n * @return\tint\n * @retval\tsocket stream\tsuccess\n * @retval\t-1\t\terror\n */\nint\nopenrm(char *host, unsigned int port)\n{\n\tint stream;\n\tint c;\n\tint rc;\n\tstruct tpp_config tpp_conf;\n\tfd_set selset;\n\tstruct timeval tv;\n\n\tDBPRT((\"openrm: host %s port %u\\n\", host, port))\n\tpbs_errno = 0;\n\tif (port == 0)\n\t\tport = pbs_conf.manager_service_port;\n\tDBPRT((\"using port %u\\n\", port))\n\n\t/* call tpp_init */\n\trc = set_tpp_config(&pbs_conf, &tpp_conf, pbs_conf.pbs_leaf_name, -1, pbs_conf.pbs_leaf_routers);\n\tif (rc == -1) {\n\t\tfprintf(stderr, \"Error setting TPP config\\n\");\n\t\treturn -1;\n\t}\n\n\tif ((tpp_fd = tpp_init(&tpp_conf)) == -1) {\n\t\tfprintf(stderr, \"tpp_init failed\\n\");\n\t\treturn -1;\n\t}\n\n\t/*\n\t * Wait for net to get restored, ie, app to connect to routers\n\t */\n\tFD_ZERO(&selset);\n\tFD_SET(tpp_fd, &selset);\n\ttv.tv_sec = 5;\n\ttv.tv_usec = 0;\n\tselect(FD_SETSIZE, &selset, NULL, NULL, &tv);\n\n\ttpp_poll(); /* to clear off the read notification */\n\n\t/* get the FQDN of the mom */\n\tc = 
get_fullhostname(host, host, (sizeof(host) - 1));\n\tif (c == -1) {\n\t\tfprintf(stderr, \"Unable to get full hostname for mom %s\\n\", host);\n\t\treturn -1;\n\t}\n\n\tstream = tpp_open(host, port);\n\tpbs_errno = errno;\n\tif (stream < 0)\n\t\treturn -1;\n\tif (addrm(stream) == -1) {\n\t\tpbs_errno = errno;\n\t\ttpp_close(stream);\n\t\treturn -1;\n\t}\n\treturn stream;\n}\n\n/**\n * @brief\n *\tRoutine to close a connection to a resource monitor\n *\tand free the \"out\" structure.\n *\n * @param[in] stream\tsocket descriptor whose connection is to be closed\n *\n * @return\tint\n * @retval\t0\tall well\n * @retval\t-1\terror\n *\n */\nstatic int\ndelrm(int stream)\n{\n\tstruct out *op, *prev = NULL;\n\n\tfor (op = outs[stream % HASHOUT]; op; op = op->next) {\n\t\tif (op->stream == stream)\n\t\t\tbreak;\n\t\tprev = op;\n\t}\n\tif (op) {\n\t\ttpp_close(stream);\n\n\t\tif (prev)\n\t\t\tprev->next = op->next;\n\t\telse\n\t\t\touts[stream % HASHOUT] = op->next;\n\t\tfree(op);\n\t\treturn 0;\n\t}\n\treturn -1;\n}\n\n/**\n * @brief\n *\tInternal routine to find the out structure for a stream number.\n *\n * @param[in] stream - socket descriptor\n *\n * @return\tstructure handle\n * @retval\tnon-NULL value\t\tsuccess\n * @retval\tNULL\t\t\terror\n *\n */\nstatic struct out *\nfindout(int stream)\n{\n\tstruct out *op;\n\n\tfor (op = outs[stream % HASHOUT]; op; op = op->next) {\n\t\tif (op->stream == stream)\n\t\t\tbreak;\n\t}\n\tif (op == NULL)\n\t\tpbs_errno = ENOTTY;\n\treturn op;\n}\n\n/**\n * @brief\n *\tstart and compose command\n *\n * @param[in] stream - socket descriptor\n * @param[in] com - command\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t!0\terror\n *\n */\nstatic int\nstartcom(int stream, int com)\n{\n\tint ret;\n\n\tDIS_tpp_funcs();\n\tret = diswsi(stream, RM_PROTOCOL);\n\tif (ret == DIS_SUCCESS) {\n\t\tret = diswsi(stream, RM_PROTOCOL_VER);\n\t\tif (ret == DIS_SUCCESS)\n\t\t\tret = diswsi(stream, com);\n\t}\n\n\tif (ret != DIS_SUCCESS) 
{\n\t\tDBPRT((\"startcom: diswsi error %s\\n\", dis_emsg[ret]))\n\t\tpbs_errno = errno;\n\t}\n\treturn ret;\n}\n\n/**\n * @brief\n *\tInternal routine to compose and send a \"simple\" command.\n *\tThis means anything with a zero length body.\n *\n * @param[in] stream - socket descriptor\n * @param[in] com - command\n *\n * @return      int\n * @retval      0       success\n * @retval      -1   \terror\n *\n */\nstatic int\nsimplecom(int stream, int com)\n{\n\tstruct out *op;\n\n\tif ((op = findout(stream)) == NULL)\n\t\treturn -1;\n\n\top->len = -1;\n\n\tif (startcom(stream, com) != DIS_SUCCESS) {\n\t\ttpp_close(stream);\n\t\treturn -1;\n\t}\n\tif (dis_flush(stream) == -1) {\n\t\tpbs_errno = errno;\n\t\tDBPRT((\"simplecom: flush error %d\\n\", pbs_errno))\n\t\ttpp_close(stream);\n\t\treturn -1;\n\t}\n\t(void) tpp_eom(stream);\n\treturn 0;\n}\n\n/**\n * @brief\n *\tInternal routine to read the return value from a command.\n *\n * @param[in] stream - socket descriptor\n *\n * @return      int\n * @retval      0       success\n * @retval      -1      error\n *\n */\nstatic int\nsimpleget(int stream)\n{\n\tint ret, num;\n\tfd_set selset;\n\n\twhile (1) {\n\t\t/* since tpp recvs are essentially always non blocking\n\t\t * we can call a dis function only if we are sure we have\n\t\t * data on that tpp fd\n\t\t */\n\t\tFD_ZERO(&selset);\n\t\tFD_SET(tpp_fd, &selset);\n\t\tif (select(FD_SETSIZE, &selset, NULL, NULL, NULL) > 0) {\n\t\t\tif (tpp_poll() == stream)\n\t\t\t\tbreak;\n\t\t} else\n\t\t\tbreak; /* let it flow down and fail in the DIS read */\n\t}\n\n\tnum = disrsi(stream, &ret);\n\tif (ret != DIS_SUCCESS) {\n\t\tDBPRT((\"simpleget: %s\\n\", dis_emsg[ret]))\n\t\tpbs_errno = errno ? 
errno : EIO;\n\t\ttpp_close(stream);\n\t\treturn -1;\n\t}\n\tif (num != RM_RSP_OK) {\n#ifdef ENOMSG\n\t\tpbs_errno = ENOMSG;\n#else\n\t\tpbs_errno = EINVAL;\n#endif\n\t\treturn -1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tClose connection to resource monitor.\n *\n * @param[in] stream - socket descriptor\n *\n * @return      int\n * @retval      0       success\n * @retval      -1      error(set pbs_errno).\n *\n */\nint\ncloserm(int stream)\n{\n\tpbs_errno = 0;\n\t(void) simplecom(stream, RM_CMD_CLOSE);\n\tif (delrm(stream) == -1) {\n\t\tpbs_errno = ENOTTY;\n\t\treturn -1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tShutdown the resource monitor.\n *\n * @param[in] stream - socket descriptor\n *\n * @return      int\n * @retval      0       success\n * @retval      -1      error(set pbs_errno).\n *\n */\nint\ndownrm(int stream)\n{\n\tpbs_errno = 0;\n\tif (simplecom(stream, RM_CMD_SHUTDOWN))\n\t\treturn -1;\n\tif (simpleget(stream))\n\t\treturn -1;\n\t(void) delrm(stream);\n\treturn 0;\n}\n\n/**\n * @brief\n *\tCause the resource monitor to read the file named.\n *\n * @param[in] stream - socket descriptor\n * @param[in] file - file name\n *\n * @return      int\n * @retval      0       success\n * @retval      -1      error(set pbs_errno).\n *\n */\nint\nconfigrm(int stream, char *file)\n{\n\tint ret, len;\n\tstruct out *op;\n\n\tpbs_errno = 0;\n\tif ((op = findout(stream)) == NULL)\n\t\treturn -1;\n\top->len = -1;\n\n\tif (file[0] != '/' || (len = strlen(file)) > (size_t) MAXPATHLEN) {\n\t\tpbs_errno = EINVAL;\n\t\treturn -1;\n\t}\n\n\tif (startcom(stream, RM_CMD_CONFIG) != DIS_SUCCESS)\n\t\treturn -1;\n\tret = diswcs(stream, file, len);\n\tif (ret != DIS_SUCCESS) {\n#if defined(ECOMM)\n\t\tpbs_errno = ECOMM;\n#elif defined(ENOCONNECT)\n\t\tpbs_errno = ENOCONNECT;\n#else\n\n#ifdef WIN32\n\t\tpbs_errno = ERROR_IO_INCOMPLETE;\n#else\n\t\tpbs_errno = ETXTBSY;\n#endif\n\n#endif\n\t\tDBPRT((\"configrm: diswcs %s\\n\", dis_emsg[ret]))\n\t\treturn -1;\n\t}\n\tif 
(dis_flush(stream) == -1) {\n\t\tpbs_errno = errno;\n\t\tDBPRT((\"configrm: flush error %d\\n\", pbs_errno))\n\t\treturn -1;\n\t}\n\n\tif (simpleget(stream))\n\t\treturn -1;\n\treturn 0;\n}\n\n/**\n * @brief\n *\tBegin a new message to the resource monitor if necessary.\n *\tAdd a line to the body of an outstanding command to the resource\n *\tmonitor.\n *\n * @param[in] op - pointer to message structure\n * @param[in] line - string\n *\n * @return\tint\n * @retval\t0\tif all is ok\n * @retval\t-1\tif not (set pbs_errno).\n *\n */\nstatic int\ndoreq(struct out *op, char *line)\n{\n\tint ret;\n\n\tif (op->len == -1) { /* start new message */\n\t\tif (startcom(op->stream, RM_CMD_REQUEST) != DIS_SUCCESS)\n\t\t\treturn -1;\n\t\top->len = 1;\n\t}\n\tret = diswcs(op->stream, line, strlen(line));\n\tif (ret != DIS_SUCCESS) {\n#if defined(ECOMM)\n\t\tpbs_errno = ECOMM;\n#elif defined(ENOCONNECT)\n\t\tpbs_errno = ENOCONNECT;\n#else\n#ifdef WIN32\n\t\tpbs_errno = ERROR_IO_INCOMPLETE;\n#else\n\t\tpbs_errno = ETXTBSY;\n#endif\n\n#endif\n\t\tDBPRT((\"doreq: diswcs %s\\n\", dis_emsg[ret]))\n\t\treturn -1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tAdd a request to a single stream.\n *\n * @param[in] stream - socket descriptor\n * @param[in] line - request string\n *\n * @return      int\n * @retval      0       if all is ok\n * @retval      -1      if not (set pbs_errno).\n *\n */\nint\naddreq(int stream, char *line)\n{\n\tstruct out *op;\n\n\tpbs_errno = 0;\n\tif ((op = findout(stream)) == NULL)\n\t\treturn -1;\n\tDIS_tpp_funcs();\n\tif (doreq(op, line) == -1) {\n\t\t(void) delrm(stream);\n\t\treturn -1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tAdd a request to every stream.\n *\n * @param[in] line - request string\n *\n * @return\tint\n * @retval\tnum of streams acted upon\tsuccess\n * @retval\t0\t\t\t\terror\n *\n */\nint\nallreq(char *line)\n{\n\tstruct out *op, *prev;\n\tint i, num;\n\n\tDIS_tpp_funcs();\n\tpbs_errno = 0;\n\tnum = 0;\n\tfor (i = 0; i < HASHOUT; i++) 
{\n\t\tprev = NULL;\n\t\top = outs[i];\n\t\twhile (op) {\n\t\t\tif (doreq(op, line) == -1) {\n\t\t\t\tstruct out *hold = op;\n\n\t\t\t\ttpp_close(op->stream);\n\t\t\t\tif (prev)\n\t\t\t\t\tprev->next = op->next;\n\t\t\t\telse\n\t\t\t\t\touts[i] = op->next;\n\t\t\t\top = op->next;\n\t\t\t\tfree(hold);\n\t\t\t} else {\n\t\t\t\tprev = op;\n\t\t\t\top = op->next;\n\t\t\t\tnum++;\n\t\t\t}\n\t\t}\n\t}\n\treturn num;\n}\n\n/**\n * @brief\n *\tFinish (and send) any outstanding message to the resource monitor.\n *\n * @param[in] stream\tsocket descriptor\n *\n * @return\tstring\n * @retval\tpointer to the next response line\n * @retval\tNULL if there are no more or an error occurred.  Set pbs_errno on error.\n */\nchar *\ngetreq(int stream)\n{\n\tchar *startline;\n\tstruct out *op;\n\tint ret;\n\n\tpbs_errno = 0;\n\tif ((op = findout(stream)) == NULL)\n\t\treturn NULL;\n\tif (op->len >= 0) { /* there is a message to send */\n\t\tif (dis_flush(stream) == -1) {\n\t\t\tpbs_errno = errno;\n\t\t\tDBPRT((\"getreq: flush error %d\\n\", pbs_errno))\n\t\t\t(void) delrm(stream);\n\t\t\treturn NULL;\n\t\t}\n\t\top->len = -2;\n\t\t(void) tpp_eom(stream);\n\t}\n\tDIS_tpp_funcs();\n\tif (op->len == -2) {\n\t\tif (simpleget(stream) == -1)\n\t\t\treturn NULL;\n\t\top->len = -1;\n\t}\n\tstartline = disrst(stream, &ret);\n\tif (ret == DIS_EOF) {\n\t\treturn NULL;\n\t} else if (ret != DIS_SUCCESS) {\n\t\tpbs_errno = errno ? errno : EIO;\n\t\tDBPRT((\"getreq: cannot read string %s\\n\", dis_emsg[ret]))\n\t\treturn NULL;\n\t}\n\n\tif (!full) {\n\t\tchar *cc, *hold;\n\t\tint indent = 0;\n\n\t\tfor (cc = startline; *cc; cc++) {\n\t\t\tif (*cc == '[')\n\t\t\t\tindent++;\n\t\t\telse if (*cc == ']')\n\t\t\t\tindent--;\n\t\t\telse if (*cc == '=' && indent == 0) {\n\t\t\t\tif ((hold = strdup(cc + 1)) == NULL) {\n\t\t\t\t\tpbs_errno = errno ? 
errno : ENOMEM;\n\t\t\t\t\tDBPRT((\"getreq: Unable to allocate memory!\\n\"))\n\t\t\t\t}\n\t\t\t\tfree(startline);\n\t\t\t\tstartline = hold;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\treturn startline;\n}\n\n/**\n * @brief\n *\tFinish and send any outstanding messages to all resource monitors.\n *\n * @return\tint\n * @retval\tnum of msgs flushed\tsuccess\n * @retval\t0\t\t\terror\n *\n */\nint\nflushreq()\n{\n\tstruct out *op, *prev;\n\tint did, i;\n\n\tpbs_errno = 0;\n\tdid = 0;\n\tfor (i = 0; i < HASHOUT; i++) {\n\t\tfor (op = outs[i]; op; op = op->next) {\n\t\t\tif (op->len <= 0) /* no message to send */\n\t\t\t\tcontinue;\n\t\t\tif (dis_flush(op->stream) == -1) {\n\t\t\t\tpbs_errno = errno;\n\t\t\t\tDBPRT((\"flushreq: flush error %d\\n\", pbs_errno))\n\t\t\t\ttpp_close(op->stream);\n\t\t\t\top->stream = -1;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\top->len = -2;\n\t\t\t(void) tpp_eom(op->stream);\n\t\t\tdid++;\n\t\t}\n\n\t\tprev = NULL;\n\t\top = outs[i];\n\t\twhile (op) { /* get rid of bad streams */\n\t\t\tif (op->stream != -1) {\n\t\t\t\tprev = op;\n\t\t\t\top = op->next;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tif (prev == NULL) {\n\t\t\t\touts[i] = op->next;\n\t\t\t\tfree(op);\n\t\t\t\top = outs[i];\n\t\t\t} else {\n\t\t\t\tprev->next = op->next;\n\t\t\t\tfree(op);\n\t\t\t\top = prev->next;\n\t\t\t}\n\t\t}\n\t}\n\treturn did;\n}\n\n/**\n * @brief\n *\tReturn the stream number of the next stream with something\n *\tto read or a negative number (the return from tpp_poll)\n *\tif there is no stream to read.\n *\n * @return\tint\n * @retval\tnext stream num\t\tsuccess\n * @retval\t-ve val\t\t\terror\n */\nint\nactivereq()\n{\n\tstruct out *op;\n\tint try, i, num;\n\tint bucket;\n\tstruct timeval tv;\n\tfd_set fdset;\n\n\tpbs_errno = 0;\n\tflushreq();\n\tFD_ZERO(&fdset);\n\n\tfor (try = 0; try < 3;) {\n\t\tif ((i = tpp_poll()) >= 0) {\n\t\t\tif ((op = findout(i)) != NULL)\n\t\t\t\treturn i;\n\n\t\t\top = (struct out *) malloc(sizeof(struct out));\n\t\t\tif (op == NULL) 
{\n\t\t\t\tpbs_errno = errno;\n\t\t\t\treturn -1;\n\t\t\t}\n\n\t\t\tbucket = i % HASHOUT;\n\t\t\top->stream = i;\n\t\t\top->len = -2;\n\t\t\top->next = outs[bucket];\n\t\t\touts[bucket] = op;\n\t\t} else if (i == -1) {\n\t\t\tpbs_errno = errno;\n\t\t\treturn -1;\n\t\t} else {\n\t\t\textern int tpp_fd;\n\n\t\t\tFD_SET(tpp_fd, &fdset);\n\t\t\ttv.tv_sec = 5;\n\t\t\ttv.tv_usec = 0;\n\t\t\tnum = select(FD_SETSIZE, &fdset, NULL, NULL, &tv);\n\t\t\tif (num == -1) {\n\t\t\t\tpbs_errno = errno;\n\t\t\t\tDBPRT((\"%s: select %d\\n\", __func__, pbs_errno))\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\tif (num == 0) {\n\t\t\t\ttry++;\n\t\t\t\tDBPRT((\"%s: timeout %d\\n\", __func__, try))\n\t\t\t}\n\t\t}\n\t}\n\treturn i;\n}\n\n/**\n * @brief\n *\tIf flag is true, turn on \"full response\" mode where getreq\n *\treturns a pointer to the beginning of a line of response.\n *\tThis makes it possible to examine the entire line rather\n *\tthan just the answer following the equal sign.\n *\n * @param[in] flag - value indicating whether to turn on full response mode or not.\n *\n */\nvoid\nfullresp(int flag)\n{\n\tpbs_errno = 0;\n\tfull = flag;\n\treturn;\n}\n"
  },
  {
    "path": "src/lib/Libifl/strsep.c",
    "content": "\n/*-\n * Copyright (c) 1990, 1993\n *\tThe Regents of the University of California.  All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions\n * are met:\n * 1. Redistributions of source code must retain the above copyright\n *    notice, this list of conditions and the following disclaimer.\n * 2. Redistributions in binary form must reproduce the above copyright\n *    notice, this list of conditions and the following disclaimer in the\n *    documentation and/or other materials provided with the distribution.\n * 3. Neither the name of the University nor the names of its contributors\n *    may be used to endorse or promote products derived from this software\n *    without specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND\n * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED.  IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE\n * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS\n * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\n * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\n * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\n * SUCH DAMAGE.\n */\n/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. 
You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/* OPENBSD ORIGINAL: lib/libc/string/strsep.c */\n/**\n * @file\tstrsep.c\n */\n\n#include <pbs_config.h>\n\n#include <string.h>\n#include <stdio.h>\n#include \"libpbs.h\" /* for aif dll export */\n\n/**\n * @brief\n * \tGet next token from string *stringp, where tokens are possibly-empty\n * \tstrings separated by characters from delim.\n *\n * @par\tFunctionality:\n *\tWrites NULs into the string at *stringp to end tokens.\n * \tdelim need not remain constant from call to call.\n * \tOn return, *stringp points past the last NUL written (if there might\n * \tbe further 
tokens), or is NULL (if there are definitely no more tokens).\n *\n * @param[in] stringp - string\n * @param[in] delim - delimiter\n *\n * @return\tstring\n * @retval\tIf *stringp is NULL, strsep returns NULL.\n *\n */\nchar *\npbs_strsep(char **stringp, const char *delim)\n{\n\tchar *s;\n\tconst char *spanp;\n\tint c, sc;\n\tchar *tok;\n\n\tif ((s = *stringp) == NULL)\n\t\treturn NULL;\n\tfor (tok = s;;) {\n\t\tc = *s++;\n\t\tspanp = delim;\n\t\tdo {\n\t\t\tif ((sc = *spanp++) == c) {\n\t\t\t\tif (c == 0)\n\t\t\t\t\ts = NULL;\n\t\t\t\telse\n\t\t\t\t\ts[-1] = 0;\n\t\t\t\t*stringp = s;\n\t\t\t\treturn (tok);\n\t\t\t}\n\t\t} while (sc != 0);\n\t}\n\t/* NOTREACHED */\n}\n"
  },
  {
    "path": "src/lib/Libifl/tcp_dis.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <errno.h>\n\n#if defined(FD_SET_IN_SYS_SELECT_H)\n#include <sys/select.h>\n#endif\n#include <unistd.h>\n#include <poll.h>\n#include \"libsec.h\"\n#include \"libpbs.h\"\n#include \"dis.h\"\n#include \"auth.h\"\n\nvolatile int reply_timedout = 0; /* for reply_send.c -- was alarm handler called? */\n\nstatic int tcp_recv(int, void *, int);\nstatic int tcp_send(int, void *, int);\n\n/**\n * @brief\n *\tGet the user buffer associated with the tcp channel. 
If no buffer has\n *\tbeen set, then allocate a pbs_tcp_chan_t structure and associate with\n *\tthe given tcp channel\n *\n * @param[in] - fd - tcp channel to which to get/associate a user buffer\n *\n * @retval\tNULL - Failure\n * @retval\t!NULL - Buffer associated with the tcp channel\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic pbs_tcp_chan_t *\ntcp_get_chan(int fd)\n{\n\tpbs_tcp_chan_t *chan = get_conn_chan(fd);\n\tif (chan == NULL) {\n\t\tif (errno != ENOTCONN) {\n\t\t\tdis_setup_chan(fd, get_conn_chan);\n\t\t\tchan = get_conn_chan(fd);\n\t\t}\n\t}\n\treturn chan;\n}\n\n/**\n * @brief\n * \ttcp_recv - receive data from tcp stream\n *\n * @param[in] fd - socket descriptor\n * @param[out] data - data from tcp stream\n * @param[in] len - bytes to receive from tcp stream\n *\n * @return\tint\n * @retval\t0 \tif success\n * @retval\t-1 \tif error\n * @retval\t-2 \tif EOF (stream closed)\n */\nstatic int\ntcp_recv(int fd, void *data, int len)\n{\n\tint i = 0;\n\tint torecv = len;\n\tchar *pb = (char *) data;\n\tint amt = 0;\n#ifdef WIN32\n\tfd_set readset;\n\tstruct timeval timeout;\n#else\n\tstruct pollfd pollfds[1];\n\tint timeout;\n#endif\n\n#ifdef WIN32\n\ttimeout.tv_sec = (long) pbs_tcp_timeout;\n\ttimeout.tv_usec = 0;\n#else\n\ttimeout = pbs_tcp_timeout;\n\tpollfds[0].fd = fd;\n\tpollfds[0].events = POLLIN;\n\tpollfds[0].revents = 0;\n#endif\n\n\twhile (torecv > 0) {\n\t\t/*\n\t\t * we don't want to be locked out by an attack on the port to\n\t\t * deny service, so we time out the read, the network had better\n\t\t * deliver promptly\n\t\t */\n\t\tdo {\n#ifdef WIN32\n\t\t\tFD_ZERO(&readset);\n\t\t\tFD_SET((unsigned int) fd, &readset);\n\t\t\ti = select(FD_SETSIZE, &readset, NULL, NULL, &timeout);\n#else\n\t\t\ti = poll(pollfds, 1, timeout * 1000);\n#endif\n\t\t\tif (pbs_tcp_interrupt)\n\t\t\t\tbreak;\n\t\t}\n#ifdef WIN32\n\t\twhile (i == -1 && ((errno = WSAGetLastError()) == WSAEINTR));\n#else\n\t\twhile (i == -1 && errno == 
EINTR);\n#endif\n\n\t\tif (i <= 0)\n\t\t\treturn i;\n\n#ifdef WIN32\n\t\ti = recv(fd, pb, torecv, 0);\n\t\terrno = WSAGetLastError();\n#else\n\t\ti = CS_read(fd, pb, torecv);\n#endif\n\t\tif (i == 0)\n\t\t\treturn -2;\n\n\t\tif (i < 0) {\n#ifdef WIN32\n\t\t\t/*\n\t\t\t * treat WSAECONNRESET like no data; winsock\n\t\t\t * will return this if the remote\n\t\t\t * connection was prematurely closed\n\t\t\t */\n\t\t\tif (errno == WSAECONNRESET)\n\t\t\t\treturn 0;\n\t\t\tif (errno != WSAEINTR)\n#else\n\t\t\tif (errno != EINTR)\n#endif\n\t\t\t\treturn i;\n\t\t} else {\n\t\t\ttorecv -= i;\n\t\t\tpb += i;\n\t\t\tamt += i;\n\t\t}\n\t}\n\treturn amt;\n}\n\n/**\n * @brief\n * \ttcp_send - send data to tcp stream\n *\n * @param[in] fd - socket descriptor\n * @param[out] data - data to send\n * @param[in] len - bytes to send\n *\n * @return\tint\n * @retval\t>0 \tnumber of characters sent\n * @retval\t0 \tif EOD (no data currently available)\n * @retval\t-1 \tif error\n * @retval\t-2 \tif EOF (stream closed)\n */\nstatic int\ntcp_send(int fd, void *data, int len)\n{\n\tsize_t ct = (size_t) len;\n\tint i;\n\tint j;\n\tchar *pb = (char *) data;\n\tstruct pollfd pollfds[1];\n\n#ifdef WIN32\n\twhile ((i = send(fd, pb, (int) ct, 0)) != (int) ct) {\n\t\terrno = WSAGetLastError();\n\t\tif (i == -1) {\n\t\t\tif (errno != WSAEINTR) {\n\t\t\t\tpbs_tcp_errno = errno;\n\t\t\t\treturn (-1);\n\t\t\t} else\n\t\t\t\tcontinue;\n\t\t}\n#else\n\twhile ((i = CS_write(fd, pb, ct)) != ct) {\n\t\tif (i == CS_IO_FAIL) {\n\t\t\tif (errno == EINTR) {\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tif (errno != EAGAIN) {\n\t\t\t\t/* fatal error on write, abort output */\n\t\t\t\tpbs_tcp_errno = errno;\n\t\t\t\treturn (-1);\n\t\t\t}\n\n\t\t\t/* write would have blocked (EAGAIN returned) */\n\t\t\t/* poll for socket to be ready to accept, if  */\n\t\t\t/* not ready in TIMEOUT_SHORT seconds, fail   */\n\t\t\t/* redo the poll if EINTR\t\t      */\n\t\t\tdo {\n\t\t\t\tif (reply_timedout) {\n\t\t\t\t\t/* caught alarm - 
timeout spanning several writes for one reply */\n\t\t\t\t\t/* alarm set up in dis_reply_write() */\n\t\t\t\t\t/* treat identically to poll timeout */\n\t\t\t\t\tj = 0;\n\t\t\t\t\treply_timedout = 0;\n\t\t\t\t} else {\n\t\t\t\t\tpollfds[0].fd = fd;\n\t\t\t\t\tpollfds[0].events = POLLOUT;\n\t\t\t\t\tpollfds[0].revents = 0;\n\t\t\t\t\tj = poll(pollfds, 1, pbs_tcp_timeout * 1000);\n\t\t\t\t}\n\t\t\t} while ((j == -1) && (errno == EINTR));\n\n\t\t\tif (j == 0) {\n\t\t\t\t/* never came ready, return error */\n\t\t\t\t/* pbs_tcp_errno will add to log message */\n\t\t\t\tpbs_tcp_errno = EAGAIN;\n\t\t\t\treturn (-1);\n\t\t\t} else if (j == -1) {\n\t\t\t\t/* some other error - fatal */\n\t\t\t\tpbs_tcp_errno = errno;\n\t\t\t\treturn (-1);\n\t\t\t}\n\t\t\tcontinue; /* socket ready, retry write */\n\t\t}\n#endif\n\t\t/* write succeeded, do more if needed */\n\t\tct -= i;\n\t\tpb += i;\n\t}\n\treturn len;\n}\n\n/**\n * @brief\n *\tsets tcp related functions.\n *\n */\nvoid\nDIS_tcp_funcs()\n{\n\tpfn_transport_get_chan = tcp_get_chan;\n\tpfn_transport_set_chan = set_conn_chan;\n\tpfn_transport_recv = tcp_recv;\n\tpfn_transport_send = tcp_send;\n}\n"
  },
  {
    "path": "src/lib/Libifl/tm.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <limits.h>\n#include <fcntl.h>\n#include <netdb.h>\n#include <string.h>\n#include <errno.h>\n#include <assert.h>\n#include <sys/types.h>\n#include <sys/socket.h>\n#include <sys/time.h>\n#include <netinet/in.h>\n#include <netdb.h>\n\n#include \"dis.h\"\n#include \"tm.h\"\n#include \"pbs_ifl.h\"\n#include \"pbs_client_thread.h\"\n#include \"net_connect.h\"\n#include \"libsec.h\"\n#include \"pbs_internal.h\"\n\n/**\n * @file\ttm.c\n */\n\n#ifndef MIN\n#define MIN(a, b) (((a) < (b)) ? 
(a) : (b))\n#endif\n\n#ifndef __PBS_TCP_TIMEOUT\n#define __PBS_TCP_TIMEOUT\nextern time_t *__pbs_tcptimeout_location(void);\n#define pbs_tcp_timeout (*__pbs_tcptimeout_location())\n#endif\n\n#ifndef __PBS_TCP_INTERRUPT\n#define __PBS_TCP_INTERRUPT\nextern int *__pbs_tcpinterrupt_location(void);\n#define pbs_tcp_interrupt (*__pbs_tcpinterrupt_location())\n#endif\n\n#ifndef __PBS_TCP_ERRNO\n#define __PBS_TCP_ERRNO\nextern int *__pbs_tcperrno_location(void);\n#define pbs_tcp_errno (*__pbs_tcperrno_location())\n#endif\n\n/*\n **\tAllocate some string space to hold the values passed in the\n **\tenvironment from MOM.\n */\nstatic char *tm_jobid = NULL;\nstatic int tm_jobid_len = 0;\nstatic char *tm_jobcookie = NULL;\nstatic int tm_jobcookie_len = 0;\nstatic tm_task_id tm_jobtid = TM_NULL_TASK;\nstatic tm_node_id tm_jobndid = TM_ERROR_NODE;\nstatic int tm_momport = 15003;\nstatic int local_conn = -1;\nstatic int init_done = 0;\nstatic char *localhost = LOCALHOST_SHORTNAME;\n\n/*\n **\tEvents are the central focus of this library.  They are tracked\n **\tin a hash table.  Many of the library calls return events.  
They\n **\tare recorded and as information is received from MOM's, the\n **\tevent is updated and marked so tm_poll() can return it to the user.\n */\n#define EVENT_HASH 128\n\n/*\n * a bit of code to map a tm_ error number to the symbol\n */\n\nstruct tm_errcode {\n\tint trc_code;\n\tchar *trc_name;\n} tm_errcode[] = {\n\t{TM_ESYSTEM, \"system error - MOM cannot be contacted\"},\n\t{TM_ENOTCONNECTED, \"not connected\"},\n\t{TM_EUNKNOWNCMD, \"unknown command\"},\n\t{TM_ENOTIMPLEMENTED, \"not implemented/supported\"},\n\t{TM_EBADENVIRONMENT, \"bad environment\"},\n\t{TM_ENOTFOUND, \"no matching job found\"},\n\t{TM_ESESSION, \"session is already attached\"},\n\t{TM_EUSER, \"user not permitted to attach\"},\n\t{TM_EOWNER, \"process owner does not match job\"},\n\t{TM_ENOPROC, \"process does not exist\"},\n\t{TM_EHOOK, \"a hook has rejected the task manager request\"},\n\t{0, \"unknown\"}};\n\nchar *\nget_ecname(int rc)\n{\n\tstruct tm_errcode *p;\n\tstatic char buf[256];\n\n\tfor (p = &tm_errcode[0]; p->trc_code; ++p) {\n\t\tif (p->trc_code == rc)\n\t\t\tbreak;\n\t}\n\tsprintf(buf, \"%s (%d)\", p->trc_name, rc);\n\treturn buf;\n}\n\ntypedef struct event_info {\n\ttm_event_t e_event;\t   /* event number */\n\ttm_node_id e_node;\t   /* destination node */\n\tint e_mtype;\t\t   /* message type sent */\n\tvoid *e_info;\t\t   /* possible returned info */\n\tstruct event_info *e_next; /* link to next event */\n\tstruct event_info *e_prev; /* link to prev event */\n} event_info;\nstatic event_info *event_hash[EVENT_HASH];\nstatic int event_count = 0;\n\n/**\n * @brief\n *\tFind an event number or return a NULL.\n */\nstatic event_info *\nfind_event(tm_event_t x)\n{\n\tevent_info *ep;\n\n\tfor (ep = event_hash[x % EVENT_HASH]; ep; ep = ep->e_next) {\n\t\tif (ep->e_event == x)\n\t\t\tbreak;\n\t}\n\treturn ep;\n}\n\n/**\n * @brief\n *\tDelete an event.\n *\n * @param[in] ep - pointer to event info\n *\n * @return\tVoid\n *\n */\nstatic void\ndel_event(event_info 
*ep)\n{\n\n\t/* unlink event from hash list */\n\tif (ep->e_prev)\n\t\tep->e_prev->e_next = ep->e_next;\n\telse\n\t\tevent_hash[ep->e_event % EVENT_HASH] = ep->e_next;\n\tif (ep->e_next)\n\t\tep->e_next->e_prev = ep->e_prev;\n\n\t/*\n\t **\tFree any memory saved with the event.  This depends\n\t **\ton what type of event it is.\n\t */\n\tswitch (ep->e_mtype) {\n\n\t\tcase TM_INIT:\n\t\tcase TM_SPAWN:\n\t\tcase TM_ATTACH:\n\t\tcase TM_SIGNAL:\n\t\tcase TM_OBIT:\n\t\tcase TM_POSTINFO:\n\t\t\tbreak;\n\n\t\tcase TM_TASKS:\n\t\tcase TM_GETINFO:\n\t\tcase TM_RESOURCES:\n\t\t\tfree(ep->e_info);\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\tDBPRT((\"del_event: unknown event command %d\\n\", ep->e_mtype))\n\t\t\tbreak;\n\t}\n\tfree(ep);\n\n\tif (--event_count == 0) {\n\t\tCS_close_socket(local_conn);\n\t\tclosesocket(local_conn);\n\t\tlocal_conn = -1;\n\t}\n\treturn;\n}\n\n/**\n * @brief\n *\tCreate a new event number.\n *\n * @return\ttm_event_t\n * @retval\teventinfo\tsuccess\n *\t\tbreaks out of loop if fails.\n */\nstatic tm_event_t\nnew_event()\n{\n\tstatic tm_event_t next_event = TM_NULL_EVENT + 1;\n\tevent_info *ep;\n\ttm_event_t ret;\n\n\tif (next_event == INT_MAX)\n\t\tnext_event = TM_NULL_EVENT + 1;\n\tfor (;;) {\n\t\tret = next_event++;\n\n\t\tfor (ep = event_hash[ret % EVENT_HASH]; ep; ep = ep->e_next) {\n\t\t\tif (ep->e_event == ret)\n\t\t\t\tbreak; /* inner loop: this number is in use */\n\t\t}\n\t\tif (ep == NULL)\n\t\t\tbreak; /* this number is not in use */\n\t}\n\treturn ret;\n}\n\n/**\n * @brief\n *\t-Link new event number into the above hash table.\n *\n * @param[in] event - event info\n * @param[in] node - job-relative node id\n * @param[in] type - type of event\n * @param[in] info - info about event\n *\n * @return\tVoid\n *\n */\nstatic void\nadd_event(tm_event_t event, tm_node_id node, int type, void *info)\n{\n\tevent_info *ep, **head;\n\n\tep = (event_info *) malloc(sizeof(event_info));\n\tassert(ep != NULL);\n\n\thead = &event_hash[event % 
EVENT_HASH];\n\tep->e_event = event;\n\tep->e_node = node;\n\tep->e_mtype = type;\n\tep->e_info = info;\n\tep->e_next = *head;\n\tep->e_prev = NULL;\n\tif (*head)\n\t\t(*head)->e_prev = ep;\n\t*head = ep;\n\n\tevent_count++;\n\treturn;\n}\n\n/*\n **\tSessions must be tracked by the library so tm_taskid objects\n **\tcan be resolved into real tasks on real nodes.\n **\tWe will use a hash table.\n */\n#define TASK_HASH 256\ntypedef struct task_info {\n\tchar *t_jobid;\t\t  /* jobid */\n\ttm_task_id t_task;\t  /* task id */\n\ttm_node_id t_node;\t  /* node id */\n\tstruct task_info *t_next; /* link to next task */\n} task_info;\nstatic task_info *task_hash[TASK_HASH];\n\n/**\n * @brief\n *\t-Find a task table entry for a given task number or return a NULL.\n *\n * @param[in] x - task id\n *\n * @return \tstructure handle\n * @retval\tpointer to task info\n *\n */\nstatic task_info *\nfind_task(tm_task_id x)\n{\n\ttask_info *tp;\n\n\tfor (tp = task_hash[x % TASK_HASH]; tp; tp = tp->t_next) {\n\t\tif (tp->t_task == x)\n\t\t\tbreak;\n\t}\n\treturn tp;\n}\n\n/**\n * @brief\n *\t-Create a new task entry and link it into the above hash\n *\ttable.\n *\n * @param[in] jobid - job identifier\n * @param[in] node - job-relative node id\n * @param[out] task - task id(0 or 1)\n *\n * @return\ttm_task_id\n * @retval\tTM_NULL_TASK\t\tfailure\n * @retval\tinitialized task\tsuccess\n *\n */\nstatic tm_task_id\nnew_task(char *jobid, tm_node_id node, tm_task_id task)\n{\n\ttask_info *tp, **head;\n\n\tDBPRT((\"%s: jobid=%s node=%d task=0x%08X\\n\",\n\t       __func__, jobid ? 
jobid : \"none\", node, task))\n\tif (jobid != tm_jobid && strcmp(jobid, tm_jobid) != 0) {\n\t\tDBPRT((\"%s: task job %s not my job %s\\n\",\n\t\t       __func__, jobid, tm_jobid))\n\t\treturn TM_NULL_TASK;\n\t}\n\n\tif ((tp = find_task(task)) != NULL) {\n\t\tDBPRT((\"%s: task 0x%08X found with node %d should be %d\\n\",\n\t\t       __func__, task, tp->t_node, node))\n\t\treturn task;\n\t}\n\n\tif ((tp = (task_info *) malloc(sizeof(task_info))) == NULL)\n\t\treturn TM_NULL_TASK;\n\n\thead = &task_hash[task % TASK_HASH];\n\ttp->t_jobid = tm_jobid;\n\ttp->t_task = task;\n\ttp->t_node = node;\n\ttp->t_next = *head;\n\t*head = tp;\n\n\treturn task;\n}\n\n/*\n **\tDelete a task.\n ===\n === right now, this is not used.\n ===\n static void\n del_task(x)\n tm_task_id\tx;\n {\n task_info\t*tp, *prev;\n\n prev = NULL;\n for (tp=task_hash[x % TASK_HASH]; tp; prev=tp, tp=tp->t_next) {\n if (tp->t_task == x)\n break;\n }\n if (tp) {\n if (prev)\n prev->t_next = tp->t_next;\n else\n task_hash[x % TASK_HASH] = tp->t_next;\n tp->t_next = NULL;\n if (tp->t_jobid != tm_jobid && tp->t_jobid != NULL)\n free(tp->t_jobid);\n free(tp);\n }\n return;\n }\n */\n\n/*\n **\tThe nodes are tracked in an array.\n */\nstatic tm_node_id *node_table = NULL;\n\n/**\n * @brief\n *\t-localmom() - make a connection to the local pbs_mom\n *\n * @par Note:\n *\tThe connection will remain open as long as there is an\n *\toutstanding event.\n *\n * @return\tint\n * @retval\t-1\tconnection fail\n * @retval\t>=0\tconnection success\n *\n */\n#define PBS_NET_RC_FATAL -1\n#define PBS_NET_RC_RETRY -2\n\nstatic int\nlocalmom()\n{\n\tstatic int have_addr = 0;\n\tstatic struct in_addr hostaddr;\n\tstruct hostent *hp;\n\tint i;\n\tint ret;\n\tstruct sockaddr_in remote;\n\tint sock;\n\tstruct linger ltime;\n\n\tif (local_conn >= 0)\n\t\treturn local_conn; /* already have open connection */\n\n\tif (have_addr == 0) {\n\t\t/* lookup localhost and save address */\n\t\tif ((hp = gethostbyname(localhost)) == NULL) 
{\n\t\t\tDBPRT((\"%s: host %s not found\\n\", __func__, localhost))\n\t\t\treturn -1;\n\t\t}\n\t\tassert(hp->h_length <= sizeof(hostaddr));\n\t\tmemcpy(&hostaddr, hp->h_addr_list[0], hp->h_length);\n\t\thave_addr = 1;\n\t}\n\n\tfor (i = 0; i < 5; i++) {\n\n\t\t/* get socket */\n\n\t\tsock = socket(AF_INET, SOCK_STREAM, 0);\n\t\tif (sock < 0)\n\t\t\treturn -1;\n\n\t\t/* make sure data goes out */\n\n\t\tltime.l_onoff = 1;\n\t\tltime.l_linger = 5;\n\t\tsetsockopt(sock, SOL_SOCKET, SO_LINGER, &ltime, sizeof(ltime));\n\n\t\t/* connect to specified local pbs_mom and port */\n\n\t\tremote.sin_addr = hostaddr;\n\t\tremote.sin_port = htons((unsigned short) tm_momport);\n\t\tremote.sin_family = AF_INET;\n\t\tif (connect(sock, (struct sockaddr *) &remote,\n\t\t\t    sizeof(remote)) < 0) {\n\t\t\tswitch (errno) {\n\t\t\t\tcase EADDRINUSE:\n\t\t\t\tcase ETIMEDOUT:\n\t\t\t\tcase ECONNREFUSED:\n#ifdef WIN32\n\t\t\t\tcase WSAEINTR:\n#else\n\t\t\t\tcase EINTR:\n#endif\n\t\t\t\t\tclosesocket(sock);\n\t\t\t\t\tsleep(1);\n\t\t\t\t\tcontinue;\n\t\t\t\tdefault:\n\t\t\t\t\tgoto failed;\n\t\t\t}\n\t\t} else {\n\t\t\tlocal_conn = sock;\n\t\t\tbreak;\n\t\t}\n\t}\n\n\tif (CS_client_init() != CS_SUCCESS)\n\t\tgoto failed;\n\n\tret = CS_client_auth(local_conn);\n\n\tif ((ret != CS_SUCCESS) && (ret != CS_AUTH_USE_IFF)) {\n\n\t\t(void) CS_close_socket(local_conn);\n\t\t(void) CS_close_app();\n\t\tgoto failed;\n\t}\n\n\tDIS_tcp_funcs();\n\treturn (local_conn);\n\nfailed:\n\n\tclosesocket(sock);\n\tlocal_conn = -1;\n\treturn -1;\n}\n\n/**\n * @brief\n *\t-startcom() - send request header to local pbs_mom.\n *\tIf required, make connection to her.\n *\n * @param[in] com - communication handle\n * @param[in] event - event\n *\n * @return\tint\n * @retval\tDIS_SUCCESS(0)\tsuccess\n * @retval\t!0\t\terror\n *\n */\nstatic int\nstartcom(int com, tm_event_t event)\n{\n\tint ret;\n\n\tif (localmom() == -1)\n\t\treturn -1;\n\n\tret = diswsi(local_conn, TM_PROTOCOL);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto 
done;\n\tret = diswsi(local_conn, TM_PROTOCOL_VER);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto done;\n\tret = diswcs(local_conn, tm_jobid, tm_jobid_len);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto done;\n\tret = diswcs(local_conn, tm_jobcookie, tm_jobcookie_len);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto done;\n\tret = diswsi(local_conn, com);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto done;\n\tret = diswsi(local_conn, event);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto done;\n\tret = diswui(local_conn, tm_jobtid);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto done;\n\treturn DIS_SUCCESS;\n\ndone:\n\tDBPRT((\"startcom: send error %s\\n\", dis_emsg[ret]))\n\tCS_close_socket(local_conn);\n\tclosesocket(local_conn);\n\tlocal_conn = -1;\n\treturn ret;\n}\n\n/**\n * @brief\n *\t-Initialize the Task Manager interface.\n *\n * @param[in] info - currently unused\n * @param[out] roots - data for the last tm_init call whose event has been polled\n *\n * @return\tint\n * @retval\t0\t\t\tsuccess\n * @retval\tTM error msg(!0)\terror\n */\nint\ntm_init(void *info, struct tm_roots *roots)\n{\n\ttm_event_t nevent, revent;\n\tchar *env, *hold;\n\tint err;\n\tint nerr = 0;\n\textern int pbs_tcp_interrupt;\n\n\tif (init_done)\n\t\treturn TM_BADINIT;\n\n\t/* initialize the thread context data, if not already initialized */\n\tif (pbs_client_thread_init_thread_context() != 0)\n\t\treturn TM_ESYSTEM;\n\n\tpbs_tcp_interrupt = 1;\n\n\tif ((env = getenv(\"PBS_JOBID\")) == NULL)\n\t\treturn TM_EBADENVIRONMENT;\n\ttm_jobid_len = 0;\n\tfree(tm_jobid);\n\ttm_jobid = strdup(env);\n\tif (!tm_jobid)\n\t\treturn TM_ESYSTEM;\n\ttm_jobid_len = strlen(tm_jobid);\n\n\tif ((env = getenv(\"PBS_JOBCOOKIE\")) == NULL)\n\t\treturn TM_EBADENVIRONMENT;\n\ttm_jobcookie_len = 0;\n\tfree(tm_jobcookie);\n\ttm_jobcookie = strdup(env);\n\tif (!tm_jobcookie)\n\t\treturn TM_ESYSTEM;\n\ttm_jobcookie_len = strlen(tm_jobcookie);\n\n\tif ((env = getenv(\"PBS_NODENUM\")) == NULL)\n\t\treturn TM_EBADENVIRONMENT;\n\ttm_jobndid = (tm_node_id) strtol(env, &hold, 
10);\n\tif (env == hold)\n\t\treturn TM_EBADENVIRONMENT;\n\tif ((env = getenv(\"PBS_TASKNUM\")) == NULL)\n\t\treturn TM_EBADENVIRONMENT;\n\tif ((tm_jobtid = strtoul(env, NULL, 16)) == 0)\n\t\treturn TM_EBADENVIRONMENT;\n\tif ((env = getenv(\"PBS_MOMPORT\")) == NULL)\n\t\treturn TM_EBADENVIRONMENT;\n\tif ((tm_momport = atoi(env)) == 0)\n\t\treturn TM_EBADENVIRONMENT;\n\n\tinit_done = 1;\n\tnevent = new_event();\n\n\t/*\n\t * send the following request:\n\t *\theader\t\t(tm_init)\n\t */\n\n\tif (startcom(TM_INIT, nevent) != DIS_SUCCESS)\n\t\treturn TM_ESYSTEM;\n\tdis_flush(local_conn);\n\tadd_event(nevent, TM_ERROR_NODE, TM_INIT, (void *) roots);\n\n\tif ((err = tm_poll(TM_NULL_EVENT, &revent, 1, &nerr)) != TM_SUCCESS)\n\t\treturn err;\n\treturn nerr;\n}\n\n/**\n *\n * @brief\n *\tInitialize and attach new task for <pid> to job <jobid>\n *\n * @param[in]\tjobid  - job id to which new task will be attached\n * @param[in]\tcookie - job cookie\n * @param[in]\tpid    - pid of task to be attached\n * @param[in]\thost   - hostname\n * @param[in] \tport   - port number\n * @param[out]\ttid  - newly attached task id\n *\n * @return\tint\n * @retval\tTM_SUCCESS (0)  - Success\n * @retval\tTM_E*\t   (>0) - Failure\n *\n */\nint\ntm_attach(char *jobid, char *cookie, pid_t pid, tm_task_id *tid, char *host, int port)\n{\n\ttm_event_t nevent, revent;\n\tint err;\n\tint nerr = 0;\n\textern int pbs_tcp_interrupt;\n#ifdef WIN32\n\tchar usern[UNLEN + 1] = {'\\0'};\n\tint sz = 0;\n\tint ret = 0;\n#endif\n\n\tpbs_tcp_interrupt = 1;\n\n\ttm_jobid_len = 0;\n\tfree(tm_jobid);\n\ttm_jobid = NULL;\n\tif (jobid && (*jobid != '\\0')) {\n\t\ttm_jobid = strdup(jobid);\n\t\tif (!tm_jobid)\n\t\t\treturn TM_ESYSTEM;\n\t\ttm_jobid_len = strlen(tm_jobid);\n\t}\n\n\ttm_jobcookie_len = 0;\n\tfree(tm_jobcookie);\n\ttm_jobcookie = NULL;\n\tif (cookie && (*cookie != '\\0')) {\n\t\ttm_jobcookie = strdup(cookie);\n\t\tif (!tm_jobcookie)\n\t\t\treturn TM_ESYSTEM;\n\t\ttm_jobcookie_len = 
strlen(tm_jobcookie);\n\t}\n\n\tif (host != NULL && *host != '\\0')\n\t\tlocalhost = host;\n\ttm_momport = port;\n\n\tnevent = new_event();\n\n\t/*\n\t * send the following request:\n\t *\theader\t\t(tm_attach)\n\t *\tint\t\tuid\n\t *\tint\t\tpid\n\t */\n\n\tif (startcom(TM_ATTACH, nevent) != DIS_SUCCESS)\n\t\treturn TM_ESYSTEM;\n#ifdef WIN32\n\tsz = sizeof(usern);\n\tret = GetUserName(usern, &sz);\n\tif (diswcs(local_conn, usern, strlen(usern)) != DIS_SUCCESS) /* send uid */\n\t\treturn TM_ENOTCONNECTED;\n#else\n\tif (diswsi(local_conn, getuid()) != DIS_SUCCESS) /* send uid */\n\t\treturn TM_ENOTCONNECTED;\n#endif\n\n\tif (diswsi(local_conn, pid) != DIS_SUCCESS) /* send pid */\n\t\treturn TM_ENOTCONNECTED;\n\n\tdis_flush(local_conn);\n\tadd_event(nevent, TM_ERROR_NODE, TM_ATTACH, (void *) tid);\n\n\tinit_done = 1; /* fake having called tm_init */\n\terr = tm_poll(TM_NULL_EVENT, &revent, 1, &nerr);\n\tinit_done = 0;\n\n\tif (err != TM_SUCCESS)\n\t\treturn err;\n\treturn nerr;\n}\n\n/**\n * @brief\n *\t-Copy out node info.  
No communication with pbs_mom is needed.\n *\n * @param[in] list - pointer to job relative node list\n * @param[out] nnodes - number of nodes\n *\n * @return\tint\n * @retval\tTM_SUCCESS\tSuccess\n * @retval\tTM_E*\t\terror\n *\n */\nint\ntm_nodeinfo(tm_node_id **list, int *nnodes)\n{\n\ttm_node_id *np;\n\tint i;\n\tint n = 0;\n\n\tif (!init_done)\n\t\treturn TM_BADINIT;\n\tif (node_table == NULL)\n\t\treturn TM_ESYSTEM;\n\n\tfor (np = node_table; *np != TM_ERROR_NODE; np++)\n\t\tn++; /* how many nodes */\n\n\tnp = (tm_node_id *) calloc(n, sizeof(tm_node_id));\n\tif (np == NULL)\n\t\treturn TM_ESYSTEM;\n\tfor (i = 0; i < n; i++)\n\t\tnp[i] = node_table[i];\n\t*list = np;\n\t*nnodes = i;\n\treturn TM_SUCCESS;\n}\n\n/**\n * @brief\n *\t-Starts <argv>[0] with environment <envp> at <where>.\n *\n * @param[in] argc - argument count\n * @param[in] argv - argument list\n * @param[in] envp - environment variable list\n * @param[in] where - job relative node\n * @param[out] tid - task id\n * @param[out] event - event info\n *\n * @return\tint\n * @retval\tTM_SUCCESS\tsuccess\n * @retval\tTM_ER*\t\terror\n *\n */\nint\ntm_spawn(int argc, char **argv, char **envp,\n\t tm_node_id where, tm_task_id *tid, tm_event_t *event)\n{\n\tchar *cp;\n\tint i;\n\n\tif (!init_done)\n\t\treturn TM_BADINIT;\n\tif (argc <= 0 || argv == NULL || argv[0] == NULL || *argv[0] == '\\0')\n\t\treturn TM_ENOTFOUND;\n\n\t*event = new_event();\n\tif (startcom(TM_SPAWN, *event) != DIS_SUCCESS)\n\t\treturn TM_ENOTCONNECTED;\n\n\tif (diswsi(local_conn, where) != DIS_SUCCESS) /* send where */\n\t\treturn TM_ENOTCONNECTED;\n\n\tif (diswsi(local_conn, argc) != DIS_SUCCESS) /* send argc */\n\t\treturn TM_ENOTCONNECTED;\n\n\t/* send argv strings across */\n\n\tfor (i = 0; i < argc; i++) {\n\t\tcp = argv[i];\n\t\tif (diswcs(local_conn, cp, strlen(cp)) != DIS_SUCCESS)\n\t\t\treturn TM_ENOTCONNECTED;\n\t}\n\n\t/* send envp strings across */\n\tif (envp != NULL) {\n\t\tfor (i = 0; (cp = envp[i]) != NULL; i++) {\n#if 
defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\t\t\t/* never send KRB5CCNAME; it would rewrite the value on target host */\n\t\t\tif (strncmp(envp[i], \"KRB5CCNAME\", strlen(\"KRB5CCNAME\")) == 0)\n\t\t\t\tcontinue;\n#endif\n\t\t\tif (diswcs(local_conn, cp, strlen(cp)) != DIS_SUCCESS)\n\t\t\t\treturn TM_ENOTCONNECTED;\n\t\t}\n\t}\n\tif (diswcs(local_conn, \"\", 0) != DIS_SUCCESS)\n\t\treturn TM_ENOTCONNECTED;\n\tdis_flush(local_conn);\n\tadd_event(*event, where, TM_SPAWN, (void *) tid);\n\treturn TM_SUCCESS;\n}\n\n/**\n * @brief\n *\t-Sends a <sig> signal to all the process groups in the task\n *\tsignified by the handle, <tid>.\n *\n * @param[in] tid - task id\n * @param[in] sig - signal number\n * @param[out] event - event handle\n *\n * @return\tint\n * @retval\tTM_SUCCESS\tSuccess\n * @retval\tTM_ER*\t\terror\n *\n */\nint\ntm_kill(tm_task_id tid, int sig, tm_event_t *event)\n{\n\ttask_info *tp;\n\n\tif (!init_done)\n\t\treturn TM_BADINIT;\n\tif ((tp = find_task(tid)) == NULL)\n\t\treturn TM_ENOTFOUND;\n\t*event = new_event();\n\tif (startcom(TM_SIGNAL, *event) != DIS_SUCCESS)\n\t\treturn TM_ENOTCONNECTED;\n\tif (diswsi(local_conn, tp->t_node) != DIS_SUCCESS)\n\t\treturn TM_ENOTCONNECTED;\n\tif (diswui(local_conn, tid) != DIS_SUCCESS)\n\t\treturn TM_ENOTCONNECTED;\n\tif (diswsi(local_conn, sig) != DIS_SUCCESS)\n\t\treturn TM_ENOTCONNECTED;\n\tdis_flush(local_conn);\n\tadd_event(*event, tp->t_node, TM_SIGNAL, NULL);\n\treturn TM_SUCCESS;\n}\n\n/**\n * @brief\n *\t-Returns an event that can be used to learn when a task\n *\tdies.\n *\n * @param[in] tid - task id\n * @param[out] obitval\t- obit value\n * @param[out]  event - event handle\n *\n * @return      int\n * @retval      TM_SUCCESS      Success\n * @retval      TM_ER*          error\n *\n */\nint\ntm_obit(tm_task_id tid, int *obitval, tm_event_t *event)\n{\n\ttask_info *tp;\n\n\tif (!init_done)\n\t\treturn TM_BADINIT;\n\tif ((tp = find_task(tid)) == NULL)\n\t\treturn TM_ENOTFOUND;\n\t*event = 
new_event();\n\tif (startcom(TM_OBIT, *event) != DIS_SUCCESS)\n\t\treturn TM_ESYSTEM;\n\tif (diswsi(local_conn, tp->t_node) != DIS_SUCCESS)\n\t\treturn TM_ESYSTEM;\n\tif (diswui(local_conn, tid) != DIS_SUCCESS)\n\t\treturn TM_ESYSTEM;\n\tdis_flush(local_conn);\n\tadd_event(*event, tp->t_node, TM_OBIT, (void *) obitval);\n\treturn TM_SUCCESS;\n}\n\nstruct taskhold {\n\ttm_task_id *list;\n\tint size;\n\tint *ntasks;\n};\n\n/**\n * @brief\n *\t-Makes a request for the list of tasks on <node>.  If <node>\n *\tis a valid node number, it returns the event that the list of\n *\ttasks on <node> is available.\n *\n * @param[in] node - job relative node id\n * @param[out] tid_list - pointer to task list\n * @param[in] list_size - size of the task list\n * @param[out] ntasks - number of tasks\n * @param[out] event - pointer to event list\n *\n * @return\tint\n * @retval\tTM_SUCCESS\tsuccess\n * @retval\tTM_ER*\t\terror\n *\n */\nint\ntm_taskinfo(tm_node_id node, tm_task_id *tid_list,\n\t    int list_size, int *ntasks, tm_event_t *event)\n{\n\tstruct taskhold *thold;\n\n\tif (!init_done)\n\t\treturn TM_BADINIT;\n\tif (tid_list == NULL || list_size == 0 || ntasks == NULL)\n\t\treturn TM_EBADENVIRONMENT;\n\t*event = new_event();\n\tif (startcom(TM_TASKS, *event) != DIS_SUCCESS)\n\t\treturn TM_ESYSTEM;\n\tif (diswsi(local_conn, node) != DIS_SUCCESS)\n\t\treturn TM_ESYSTEM;\n\tdis_flush(local_conn);\n\n\tthold = (struct taskhold *) malloc(sizeof(struct taskhold));\n\tassert(thold != NULL);\n\tthold->list = tid_list;\n\tthold->size = list_size;\n\tthold->ntasks = ntasks;\n\tadd_event(*event, node, TM_TASKS, (void *) thold);\n\treturn TM_SUCCESS;\n}\n\n/**\n * @brief\n *\t-Returns the job-relative node number that holds or held <tid>.  
In\n *\tcase of an error, it returns TM_ERROR_NODE.\n *\n * @param[in] tid - task id\n * @param[out] node - job relative node\n *\n * @return\tint\n * @retval      TM_SUCCESS      success\n * @retval      TM_ER*          error\n *\n */\nint\ntm_atnode(tm_task_id tid, tm_node_id *node)\n{\n\ttask_info *tp;\n\n\tif (!init_done)\n\t\treturn TM_BADINIT;\n\tif ((tp = find_task(tid)) == NULL)\n\t\treturn TM_ENOTFOUND;\n\t*node = tp->t_node;\n\treturn TM_SUCCESS;\n}\n\nstruct reschold {\n\tchar *resc;\n\tint len;\n};\n\n/**\n * @brief\n *\t-Makes a request for a string specifying the resources\n *\tavailable on <node>.  If <node> is a valid node number, it\n *\treturns the event that the string specifying the resources on\n *\t<node> is available.  It returns ERROR_EVENT otherwise.\n *\n * @param[in] node - job relative node\n * @param[out] resource - resource available on node\n * @param[in] len - length of string\n * @param[out] event - pointer to event info\n *\n * @return      int\n * @retval      TM_SUCCESS      success\n * @retval      TM_ER*          error\n *\n */\nint\ntm_rescinfo(tm_node_id node, char *resource, int len, tm_event_t *event)\n{\n\tstruct reschold *rhold;\n\n\tif (!init_done)\n\t\treturn TM_BADINIT;\n\tif (resource == NULL || len == 0)\n\t\treturn TM_EBADENVIRONMENT;\n\t*event = new_event();\n\tif (startcom(TM_RESOURCES, *event) != DIS_SUCCESS)\n\t\treturn TM_ESYSTEM;\n\tif (diswsi(local_conn, node) != DIS_SUCCESS)\n\t\treturn TM_ESYSTEM;\n\tdis_flush(local_conn);\n\n\trhold = (struct reschold *) malloc(sizeof(struct reschold));\n\tassert(rhold != NULL);\n\trhold->resc = resource;\n\trhold->len = len;\n\n\tadd_event(*event, node, TM_RESOURCES, (void *) rhold);\n\treturn TM_SUCCESS;\n}\n\n/**\n * @brief\n *\t-Posts the first <nbytes> of a copy of *<info> within MOM on\n *\tthis node, and associated with this task.  If <info> is\n *\tnon-NULL, it returns the event that the effort to post *<info>\n *\tis complete.  
It returns ERROR_EVENT otherwise.\n *\n * @param[in] name - name under which the info is posted\n * @param[in] info - information (event)\n * @param[in] len - length of info\n * @param[out] event - pointer to event info\n *\n * @return      int\n * @retval      TM_SUCCESS (0)  - Success\n * @retval      TM_E*           - Failure\n *\n */\nint\ntm_publish(char *name, void *info, int len, tm_event_t *event)\n{\n\n\tif (!init_done)\n\t\treturn TM_BADINIT;\n\t*event = new_event();\n\tif (startcom(TM_POSTINFO, *event) != DIS_SUCCESS)\n\t\treturn TM_ESYSTEM;\n\tif (diswst(local_conn, name) != DIS_SUCCESS)\n\t\treturn TM_ESYSTEM;\n\tif (diswcs(local_conn, info, len) != DIS_SUCCESS)\n\t\treturn TM_ESYSTEM;\n\n\tdis_flush(local_conn);\n\tadd_event(*event, TM_ERROR_NODE, TM_POSTINFO, NULL);\n\treturn TM_SUCCESS;\n}\n\nstruct infohold {\n\tvoid *info;\n\tint len;\n\tint *info_len;\n};\n\n/**\n * @brief\n *\tMakes a request for a copy of the info posted by <tid>.  If\n *\t<tid> is a valid task, it returns the event that the\n *\tstring specifying the info posted by <tid> is available.\n *\n * @param[in] tid - task id\n * @param[in] name - name of the posted info\n * @param[out] info - event info\n * @param[in] len - length of info\n * @param[out] info_len - info len to be output\n * @param[out] event - handle to event\n *\n * @return      int\n * @retval      0       success\n * @retval      !0      error\n *\n */\nint\ntm_subscribe(tm_task_id tid, char *name, void *info, int len, int *info_len, tm_event_t *event)\n{\n\ttask_info *tp;\n\tstruct infohold *ihold;\n\n\tif (!init_done)\n\t\treturn TM_BADINIT;\n\tif ((tp = find_task(tid)) == NULL)\n\t\treturn TM_ENOTFOUND;\n\t*event = new_event();\n\tif (startcom(TM_GETINFO, *event) != DIS_SUCCESS)\n\t\treturn TM_ESYSTEM;\n\tif (diswsi(local_conn, tp->t_node) != DIS_SUCCESS)\n\t\treturn TM_ESYSTEM;\n\tif (diswui(local_conn, tid) != DIS_SUCCESS)\n\t\treturn TM_ESYSTEM;\n\tif (diswst(local_conn, name) != DIS_SUCCESS)\n\t\treturn 
TM_ESYSTEM;\n\tdis_flush(local_conn);\n\n\tihold = (struct infohold *) malloc(sizeof(struct infohold));\n\tassert(ihold != NULL);\n\tihold->info = info;\n\tihold->len = len;\n\tihold->info_len = info_len;\n\n\tadd_event(*event, tp->t_node, TM_GETINFO, (void *) ihold);\n\treturn TM_SUCCESS;\n}\n\n/**\n * @brief\n *\t-tm_finalize() - close out task manager interface\n *\n * @par\tNote:\n *\tThis function should be the last one called.  It is illegal to call\n *\tany other task manager function following this one.   All events are\n *\tfreed and any connection to the task manager (pbs_mom) is closed.\n *\tThis call is synchronous.\n *\n * @return      int\n * @retval\t0\tsuccess\n * @retval\t!0\terror\n *\n */\nint\ntm_finalize()\n{\n\tevent_info *e;\n\tint i = 0;\n\n\tif (!init_done)\n\t\treturn TM_BADINIT;\n\twhile (event_count && (i < EVENT_HASH)) {\n\t\twhile ((e = event_hash[i]) != NULL) {\n\t\t\tdel_event(e);\n\t\t}\n\t\t++i; /* check next slot in hash table */\n\t}\n\tinit_done = 0;\n\tfree(tm_jobid);\n\ttm_jobid = NULL;\n\ttm_jobid_len = 0;\n\tfree(tm_jobcookie);\n\ttm_jobcookie = NULL;\n\ttm_jobcookie_len = 0;\n\treturn TM_SUCCESS; /* what else */\n}\n\n/**\n * @brief\n *\t-tm_notify() - set the signal to be sent on event arrival.\n *\n * @param[in] tm_signal - signal number\n *\n * @return      int\n * @retval      TM_ENOTIMPLEMENTED      Success\n * @retval      TM_BADINIT              error\n *\n */\nint\ntm_notify(int tm_signal)\n{\n\tif (!init_done)\n\t\treturn TM_BADINIT;\n\treturn TM_ENOTIMPLEMENTED;\n}\n\n/**\n * @brief\n *\t-tm_alloc() - make a request for additional resources.\n *\n * @param[in] resources - resource list\n * @param[in] event - event handle\n *\n * @return      int\n * @retval      TM_ENOTIMPLEMENTED      Success\n * @retval      TM_BADINIT              error\n *\n */\nint\ntm_alloc(char *resources, tm_event_t *event)\n{\n\tif (!init_done)\n\t\treturn TM_BADINIT;\n\treturn TM_ENOTIMPLEMENTED;\n}\n\n/**\n * @brief\n *\t-tm_dealloc() - 
drop a node from the job.\n *\n * @param[in] node - job relative node\n * @param[in] event - event handle\n *\n * @return      int\n * @retval      TM_ENOTIMPLEMENTED      Success\n * @retval      TM_BADINIT              error\n *\n */\nint\ntm_dealloc(tm_node_id node, tm_event_t *event)\n{\n\tif (!init_done)\n\t\treturn TM_BADINIT;\n\treturn TM_ENOTIMPLEMENTED;\n}\n\n/**\n * @brief\n *\t-tm_create_event() - create a persistent event.\n *\n * @param[in] event - event handle\n *\n * @return      int\n * @retval      TM_ENOTIMPLEMENTED      Success\n * @retval      TM_BADINIT              error\n *\n */\nint\ntm_create_event(tm_event_t *event)\n{\n\tif (!init_done)\n\t\treturn TM_BADINIT;\n\treturn TM_ENOTIMPLEMENTED;\n}\n\n/**\n * @brief\n *\t-tm_destroy_event() - destroy a persistent event.\n *\n * @param[in] event - event handle\n *\n * @return      int\n * @retval      TM_ENOTIMPLEMENTED      Success\n * @retval      TM_BADINIT              error\n *\n */\nint\ntm_destroy_event(tm_event_t *event)\n{\n\tif (!init_done)\n\t\treturn TM_BADINIT;\n\treturn TM_ENOTIMPLEMENTED;\n}\n\n/**\n * @brief\n *\t-tm_register() - link a persistent event with action requests\n *\tfrom the task manager.\n *\n * @param[in] what - info about last event polled\n * @param[in] event - event handle\n *\n * @return      int\n * @retval      TM_ENOTIMPLEMENTED\tSuccess\n * @retval      TM_BADINIT\t\terror\n *\n */\nint\ntm_register(tm_whattodo_t *what, tm_event_t *event)\n{\n\tif (!init_done)\n\t\treturn TM_BADINIT;\n\treturn TM_ENOTIMPLEMENTED;\n}\n\n#define FOREVER 2147000\n/**\n * @brief\n *\t-tm_poll - poll to see if an event has been completed.\n *\n * @par Note:\n *\tIf \"poll_event\" is a valid event handle, see if it is completed;\n *\telse if \"poll_event\" is the null event, check for the first event that\n *\tis completed.\n *\n * @par\tFunctionality:\n *\tresult_event is set to the completed event or the null event.\n *\n *\tIf wait is non_zero, wait for \"poll_event\" to be 
completed.\n *\n *\tIf an error occurs, set tm_errno non-zero.\n *\n * @param[in] poll_event - event handle\n * @param[in] result_event - event handle to output\n * @param[in] wait - indication whether to wait\n * @param[in] tm_errno - error number\n *\n * @return      int\n * @retval      TM_SUCCESS (0)  - Success\n * @retval      TM_E*       \t- Failure\n *\n */\nint\ntm_poll(tm_event_t poll_event, tm_event_t *result_event, int wait, int *tm_errno)\n{\n\tint num, i;\n\tint ret, mtype, nnodes;\n\tint prot, protver;\n\tint *obitvalp;\n\tevent_info *ep = NULL;\n\ttm_task_id tid, *tidp;\n\ttm_event_t nevent;\n\ttm_node_id node;\n\tchar *jobid;\n\tchar *info;\n\tsize_t rdsize;\n\tstruct tm_roots *roots;\n\tstruct taskhold *thold;\n\tstruct infohold *ihold;\n\tstruct reschold *rhold;\n\n\tif (!init_done)\n\t\treturn TM_BADINIT;\n\tif (result_event == NULL)\n\t\treturn TM_EBADENVIRONMENT;\n\t*result_event = TM_ERROR_EVENT;\n\tif (poll_event != TM_NULL_EVENT)\n\t\treturn TM_ENOTIMPLEMENTED;\n\tif (tm_errno == NULL)\n\t\treturn TM_EBADENVIRONMENT;\n\n\tif (event_count == 0) {\n\t\tDBPRT((\"%s: no events waiting\\n\", __func__))\n\t\treturn TM_ENOTFOUND;\n\t}\n\tif (local_conn < 0) {\n\t\tDBPRT((\"%s: INTERNAL ERROR %d events but no connection\\n\",\n\t\t       __func__, event_count))\n\t\treturn TM_ENOTCONNECTED;\n\t}\n\n\t/*\n\t ** Setup tcp dis routines with a wait value appropriate for\n\t ** the value of wait the user set.\n\t */\n\tpbs_tcp_timeout = wait ? FOREVER : 0;\n\tDIS_tcp_funcs();\n\n\tprot = disrsi(local_conn, &ret);\n\tif (ret == DIS_EOD) {\n\t\t*result_event = TM_NULL_EVENT;\n\t\treturn TM_SUCCESS;\n\t} else if (ret != DIS_SUCCESS) {\n\t\tDBPRT((\"%s: protocol number dis error %d\\n\", __func__, ret))\n\t\tgoto err;\n\t}\n\tif (prot != TM_PROTOCOL) {\n\t\tDBPRT((\"%s: bad protocol number %d\\n\", __func__, prot))\n\t\tgoto err;\n\t}\n\n\t/*\n\t ** We have seen the start of a message.  
Set the timeout value\n\t ** so we wait for the remaining data of a message.\n\t */\n\tpbs_tcp_timeout = FOREVER;\n\tprotver = disrsi(local_conn, &ret);\n\tif (ret != DIS_SUCCESS) {\n\t\tDBPRT((\"%s: protocol version dis error %d\\n\", __func__, ret))\n\t\tgoto err;\n\t}\n\tif (protver != TM_PROTOCOL_VER) {\n\t\tDBPRT((\"%s: bad protocol version %d\\n\", __func__, protver))\n\t\tgoto err;\n\t}\n\n\tmtype = disrsi(local_conn, &ret);\n\tif (ret != DIS_SUCCESS) {\n\t\tDBPRT((\"%s: mtype dis error %d\\n\", __func__, ret))\n\t\tgoto err;\n\t}\n\tnevent = disrsi(local_conn, &ret);\n\tif (ret != DIS_SUCCESS) {\n\t\tDBPRT((\"%s: event dis error %d\\n\", __func__, ret))\n\t\tgoto err;\n\t}\n\n\t*result_event = nevent;\n\tDBPRT((\"%s: got event %d return %d\\n\", __func__, nevent, mtype))\n\tif ((ep = find_event(nevent)) == NULL) {\n\t\tDBPRT((\"%s: No event found for number %d\\n\", __func__, nevent));\n\t\tCS_close_socket(local_conn);\n\t\tclosesocket(local_conn);\n\t\tlocal_conn = -1;\n\t\treturn TM_ENOEVENT;\n\t}\n\n\tif (mtype == TM_ERROR) { /* problem, read error num */\n\t\t*tm_errno = disrsi(local_conn, &ret);\n\t\tDBPRT((\"%s: event %d error %d\\n\", __func__, nevent, *tm_errno));\n\t\tgoto done;\n\t}\n\n\t*tm_errno = TM_SUCCESS;\n\tswitch (ep->e_mtype) {\n\n\t\t\t/*\n\t\t\t **\tauxiliary info (\n\t\t\t **\t\tnumber of nodes\tint;\n\t\t\t **\t\tnodeid[0]\tint;\n\t\t\t **\t\t...\n\t\t\t **\t\tnodeid[n-1]\tint;\n\t\t\t **\t\tparent jobid\tstring;\n\t\t\t **\t\tparent nodeid\tint;\n\t\t\t **\t\tparent taskid\tint;\n\t\t\t **\t)\n\t\t\t */\n\t\tcase TM_INIT:\n\t\t\tnnodes = disrsi(local_conn, &ret);\n\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\tDBPRT((\"%s: INIT failed nnodes\\n\", __func__))\n\t\t\t\tgoto err;\n\t\t\t}\n\n\t\t\tnode_table = (tm_node_id *) calloc(nnodes + 1,\n\t\t\t\t\t\t\t   sizeof(tm_node_id));\n\t\t\tif (node_table == NULL)\n\t\t\t\tgoto err;\n\t\t\tDBPRT((\"%s: INIT nodes %d\\n\", __func__, nnodes))\n\t\t\tfor (i = 0; i < nnodes; i++) 
{\n\t\t\t\tnode_table[i] = disrsi(local_conn, &ret);\n\t\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\t\tDBPRT((\"%s: INIT failed nodeid %d\\n\", __func__, i))\n\t\t\t\t\tgoto err;\n\t\t\t\t}\n\t\t\t}\n\t\t\tnode_table[nnodes] = TM_ERROR_NODE;\n\n\t\t\tjobid = disrst(local_conn, &ret);\n\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\tDBPRT((\"%s: INIT failed jobid\\n\", __func__))\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t\tDBPRT((\"%s: INIT daddy jobid %s\\n\", __func__, jobid))\n\t\t\tnode = disrsi(local_conn, &ret);\n\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\tDBPRT((\"%s: INIT failed parent nodeid\\n\", __func__))\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t\tDBPRT((\"%s: INIT daddy node %d\\n\", __func__, node))\n\t\t\ttid = disrui(local_conn, &ret);\n\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\tDBPRT((\"%s: INIT failed parent taskid\\n\", __func__))\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t\tDBPRT((\"%s: INIT daddy tid %lu\\n\", __func__, (unsigned long) tid))\n\n\t\t\troots = (struct tm_roots *) ep->e_info;\n\t\t\troots->tm_parent = new_task(jobid, node, tid);\n\t\t\troots->tm_me = new_task(tm_jobid,\n\t\t\t\t\t\ttm_jobndid,\n\t\t\t\t\t\ttm_jobtid);\n\t\t\troots->tm_nnodes = nnodes;\n\t\t\troots->tm_ntasks = 0;\t   /* TODO */\n\t\t\troots->tm_taskpoolid = -1; /* what? 
*/\n\t\t\troots->tm_tasklist = NULL; /* TODO */\n\n\t\t\tbreak;\n\n\t\tcase TM_TASKS:\n\t\t\tthold = (struct taskhold *) ep->e_info;\n\t\t\ttidp = thold->list;\n\t\t\tnum = thold->size;\n\t\t\tfor (i = 0;; i++) {\n\t\t\t\ttid = disrui(local_conn, &ret);\n\t\t\t\tif (tid == TM_NULL_TASK)\n\t\t\t\t\tbreak;\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto err;\n\t\t\t\tif (i < num) {\n\t\t\t\t\ttidp[i] = new_task(tm_jobid,\n\t\t\t\t\t\t\t   ep->e_node, tid);\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (i < num)\n\t\t\t\ttidp[i] = TM_NULL_TASK;\n\t\t\t*(thold->ntasks) = i;\n\t\t\tbreak;\n\n\t\tcase TM_SPAWN:\n\t\tcase TM_ATTACH:\n\t\t\ttid = disrui(local_conn, &ret);\n\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\tDBPRT((\"%s: SPAWN failed tid\\n\", __func__))\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t\ttidp = (tm_task_id *) ep->e_info;\n\t\t\t*tidp = new_task(tm_jobid, ep->e_node, tid);\n\t\t\tbreak;\n\n\t\tcase TM_SIGNAL:\n\t\t\tbreak;\n\n\t\tcase TM_OBIT:\n\t\t\tobitvalp = (int *) ep->e_info;\n\t\t\t*obitvalp = disrsi(local_conn, &ret);\n\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\tDBPRT((\"%s: OBIT failed obitval\\n\", __func__))\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase TM_POSTINFO:\n\t\t\tbreak;\n\n\t\tcase TM_GETINFO:\n\t\t\tihold = (struct infohold *) ep->e_info;\n\t\t\tinfo = disrcs(local_conn, &rdsize, &ret);\n\t\t\t/* save the returned length to return to the user in an int; */\n\t\t\t/* truncation is not an issue because the length of the */\n\t\t\t/* published message must fit in an \"int\" */\n\t\t\t*ihold->info_len = (int) rdsize;\n\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\tDBPRT((\"%s: GETINFO failed info\\n\", __func__))\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tmemcpy(ihold->info, info, MIN(*ihold->info_len, ihold->len));\n\t\t\tfree(info);\n\t\t\tbreak;\n\n\t\tcase TM_RESOURCES:\n\t\t\trhold = (struct reschold *) ep->e_info;\n\t\t\tinfo = disrst(local_conn, &ret);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tbreak;\n\t\t\tstrncpy(rhold->resc, info, 
rhold->len);\n\t\t\tfree(info);\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\tDBPRT((\"%s: unknown event command %d\\n\", __func__, ep->e_mtype))\n\t\t\tgoto err;\n\t}\ndone:\n\tdel_event(ep);\n\treturn TM_SUCCESS;\n\nerr:\n\tif (ep)\n\t\tdel_event(ep);\n\tCS_close_socket(local_conn);\n\tclosesocket(local_conn);\n\tlocal_conn = -1;\n\treturn TM_ENOTCONNECTED;\n}\n"
  },
  {
    "path": "src/lib/Libifl/xml_encode_decode.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\txml_encode_decode.c\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <string.h>\n#include <unistd.h>\n#include <limits.h>\n#include <sys/stat.h>\n#include <stdlib.h>\n#include <sys/types.h>\n\n#include \"pbs_ifl.h\"\n#include \"log.h\"\n\n#ifndef _POSIX_ARG_MAX\n#define _POSIX_ARG_MAX 4096 /* largest value standards guarantee */\n#endif\n\n#define LESS_THAN \"&lt;\"\n#define GRT_THAN \"&gt;\"\n#define DOUBLE_QUOTE \"&quot;\"\n#define SINGLE_QUOTE \"&apos;\"\n#define AMPERSAND \"&amp;\"\n#define START_JSDL_ARG \"<jsdl-hpcpa:Argument>\"\n#define END_JSDL_ARG \"</jsdl-hpcpa:Argument>\"\n#define PBS_NUM_ESC_CHARS 256\n\n/* removed global variables arg_max and escape_chars\n * into function scope, so that these functions become MT-safe\n */\n\n/*\n * init_escapechars -- initializes escape_chars array\n * indices with corresponding escape string\n */\n\n/**\n * @brief\n *\t-Find max number of chars an argument can take.\n *\n * @param[out] escape_chars  - list of escape chars\n * @param[out] arg_max - max chars\n *\n * @return\tVoid\n *\n */\nstatic void\ninit_escapechars_maxarg(char **escape_chars, long *arg_max)\n{\n\tlong sysconf_argmax;\n\n\tif (*arg_max == -1) {\n#ifdef _SC_ARG_MAX\n#define PBS_MAX_ARG_MAX (1 << 24)\n\t\tsysconf_argmax = sysconf(_SC_ARG_MAX);\n\n\t\t/*\n\t\t * Constrain 
the size of arg_max returned.  The value returned\n\t\t * by sysconf(_SC_ARG_MAX) may be enormous:  one system we've\n\t\t * tested regularly returns 4611686018427387903, which it then\n\t\t * (quite reasonably) refuses to let us malloc.  We arbitrarily\n\t\t * constrain it to no more than PBS_MAX_ARG_MAX.\n\t\t */\n\t\tif (sysconf_argmax <= 0)\n\t\t\t*arg_max = _POSIX_ARG_MAX;\n\t\telse if (sysconf_argmax > PBS_MAX_ARG_MAX)\n\t\t\t*arg_max = PBS_MAX_ARG_MAX;\n\t\telse\n\t\t\t*arg_max = sysconf_argmax;\n#undef PBS_MAX_ARG_MAX\n#else\n\t\t*arg_max = _POSIX_ARG_MAX;\n#endif /* _SC_ARG_MAX */\n\t}\n\n\t/* initialize escape_chars to nulls, since it is no longer a static var */\n\tmemset(escape_chars, 0, PBS_NUM_ESC_CHARS * sizeof(char *));\n\t/*\n\t * Initialize ascii value escape char indices with\n\t * corresponding escape string.\n\t */\n\tescape_chars[(int) '<'] = LESS_THAN;\n\tescape_chars[(int) '>'] = GRT_THAN;\n\tescape_chars[(int) '\"'] = DOUBLE_QUOTE;\n\tescape_chars[(int) '\\''] = SINGLE_QUOTE;\n\tescape_chars[(int) '&'] = AMPERSAND;\n}\n\n/**\n * @brief\n *\ttakes a plain/simple string as input and returns the number of\n *      characters in the resultant encoded string.\n *\n * @param[in]   original_arg - the input string to be encoded\n * @param[in/out]   encoded_arg - the encoding of 'original_arg'\n * @param[in]   escape_chars\n *              The following table lists escape characters along with their\n *              replacement strings used by 'encode_argument' and\n *              'decode_argument' at the time of encoding/decoding.\n * -------------------------------------------------------\n *\tOriginal Character\t--\tReplacement String\n *\t\t<\t\t--\t&lt;\n *\t\t>\t\t--\t&gt;\n *\t\t&\t\t--\t&amp;\n *\t\t'\t\t--\t&apos;\n *\t\t\"\t\t--\t&quot;\n * @return int\n * @retval >=0\tNumber of characters in the resultant encoded string\n */\nstatic int\nencode_argument(char *original_arg, char *encoded_arg,\n\t\tchar 
**escape_chars)\n{\n\tint j = 0;\n\tint i, k;\n\tint ind;\n\n\tfor (i = 0; original_arg[i] != '\\0'; i++) {\n\n\t\tif (((int) original_arg[i] >= 0) &&\n\t\t    ((int) original_arg[i] < PBS_NUM_ESC_CHARS) &&\n\t\t    (escape_chars[(int) original_arg[i]] != NULL)) {\n\t\t\t/* found an escape char */\n\t\t\tind = (int) original_arg[i];\n\t\t\t/* Replace with corresponding escape string */\n\t\t\tfor (k = 0; escape_chars[ind][k] != '\\0'; k++) {\n\t\t\t\tencoded_arg[j] = escape_chars[ind][k];\n\t\t\t\tj++;\n\t\t\t}\n\t\t} else {\n\t\t\t/* not an escape char */\n\t\t\tencoded_arg[j] = original_arg[i];\n\t\t\tj++;\n\t\t}\n\t}\n\tencoded_arg[j] = '\\0';\n\treturn j;\n}\n\n/**\n * @brief\n *\tThis function is used to decode arguments from given\n *\txml(tags). It is invoked by other functions like\n *\tdecode_xml_arg_list( ) during xml parsing/decoding\n *\n * @param[in]   encoded_arg  - the token against which to parse\n * @param[in]   original_arg - populated with the decoded string\n *\n * @return int\n * @retval >= 0\tlength of original_arg\n * @retval -1\terror encountered during parsing\n */\nstatic int\ndecode_argument(char *encoded_arg, char *original_arg)\n{\n\tint i = 0;\n\tint j = 0;\n\tint k = 0;\n\tchar escape_chars[10];\n#ifdef WIN32\n\tint quotes_flag = 0;\n#endif\n\n#ifdef WIN32\n\t/*\n\t * This is to handle differences between M$ and non-M$ platforms\n\t * related to 'Double quotes'(\") character at the time of reading\n\t * command-line argument(with white spaces) from user(through 'qsub').\n\t * In case of non-M$ platforms, it removes the '\"' character and passes\n\t * it to the PBS Server. In case PBS Scheduler decides to move the job\n\t * to M$ MOM, then the M$ MOM would receive an argument without '\"'\n\t * character and consider it as two different arguments. 
So, if any\n\t * argument(having white spaces) received from the PBS Server doesn't\n\t * start with '&quot;' string, then the character '\"' is prepended\n\t * and appended to the argument at the time of decoding XML argument.\n\t */\n\tif ((strchr(encoded_arg, ' ')) &&\n\t    (strncmp(encoded_arg, DOUBLE_QUOTE,\n\t\t     sizeof(DOUBLE_QUOTE)))) {\n\t\t/* argument has white space without the '&quot;' string */\n\t\t/* prefix this argument with '\"' character\t*/\n\t\tquotes_flag = 1;\n\t\toriginal_arg[k++] = '\"';\n\t}\n\n#endif /* WIN32 */\n\n\twhile (encoded_arg[i] != '\\0') {\n\t\tif (encoded_arg[i] != '&')\n\t\t\toriginal_arg[k] = encoded_arg[i];\n\t\telse {\n\t\t\tj = 0;\n\t\t\twhile (encoded_arg[i] != ';') {\n\t\t\t\tescape_chars[j] = encoded_arg[i];\n\t\t\t\tj++;\n\t\t\t\ti++;\n\t\t\t}\n\t\t\tescape_chars[j] = encoded_arg[i];\n\t\t\tescape_chars[j + 1] = '\\0';\n\t\t\tif (strcmp(escape_chars, LESS_THAN) == 0)\n\t\t\t\toriginal_arg[k] = '<';\n\t\t\telse if (strcmp(escape_chars, GRT_THAN) == 0)\n\t\t\t\toriginal_arg[k] = '>';\n\t\t\telse if (strcmp(escape_chars, AMPERSAND) == 0)\n\t\t\t\toriginal_arg[k] = '&';\n\t\t\telse if (strcmp(escape_chars, DOUBLE_QUOTE) == 0)\n\t\t\t\toriginal_arg[k] = '\"';\n\t\t\telse if (strcmp(escape_chars, SINGLE_QUOTE) == 0)\n\t\t\t\toriginal_arg[k] = '\\'';\n\t\t}\n\t\ti++;\n\t\tk++;\n\t}\n\n#ifdef WIN32\n\tif (quotes_flag) {\n\t\t/* suffix with '\"' character */\n\t\toriginal_arg[k++] = '\"';\n\t}\n#endif /* WIN32 */\n\n\toriginal_arg[k] = '\\0';\n\treturn k;\n}\n\n/**\n * @brief\n *\tencode_xml_arg_list takes current index, number of arguments\n *      passed to 'qsub' program and arguments as input and returns\n *      an encoded string(XML form) to 'qsub' program. It loops through each\n *      argument and converts that into an equivalent encoded string using\n *      ('encode_argument' function). 
For example, if a1, a2 are the arguments,\n *      then it returns the following encoded XML string to 'qsub' program.\n *      encoded string   --  <jsdl-hpcpa:Argument>a1</jsdl-hpcpa:Argument>\n *                           <jsdl-hpcpa:Argument>a2</jsdl-hpcpa:Argument>\n *\n * @param[in]   optind - current index\n * @param[in]   argc - number of arguments passed to 'qsub' program\n * @param[in]   argv - arguments as input\n *\n * @return char*\n * @retval An encoded string(XML form) to 'qsub' program in case of SUCCESS\n * @retval NULL in case of failure\n */\nextern char *\nencode_xml_arg_list(int optind, int argc, char **argv)\n{\n\tchar *xml_string = NULL;\n\tint cur_len = 1;\n\tint j;\n\tchar *arg = NULL;\n\tint jsdl_tag_len;\n\tchar *temp;\n\tint first = 1;\n\tchar *escape_chars[PBS_NUM_ESC_CHARS];\n\tlong arg_max = -1;\n\n\tjsdl_tag_len = sizeof(START_JSDL_ARG) + sizeof(END_JSDL_ARG) - 2;\n\tif (argc > 0 && argv == NULL)\n\t\treturn NULL;\n\t/*\n\t * Initialize escape chars array and max number of\n\t * chars an argument can hold.\n\t */\n\n\tinit_escapechars_maxarg(escape_chars, &arg_max);\n\n\t/*\n\t * Allocate memory to hold the encoded argument.  In the worst case\n\t * every input character expands to a 6-byte escape string such as\n\t * \"&quot;\", so reserve six bytes per character plus a terminating NUL.\n\t */\n\targ = malloc(arg_max * 6 + 1);\n\tif (arg == NULL)\n\t\treturn NULL;\n\n\tfor (j = optind; j < argc; j++) {\n\t\tif (argv[j] == NULL) {\n\t\t\tif (xml_string)\n\t\t\t\tfree(xml_string);\n\t\t\tbreak;\n\t\t}\n\t\tcur_len += strlen(argv[j]) + jsdl_tag_len;\n\t\ttemp = realloc(xml_string, cur_len);\n\t\tif (temp == NULL) {\n\t\t\tif (xml_string)\n\t\t\t\tfree(xml_string);\n\t\t\tfree(arg);\n\t\t\treturn NULL;\n\t\t} else\n\t\t\txml_string = temp;\n\t\tif (first) {\n\t\t\tstrcpy(xml_string, START_JSDL_ARG);\n\t\t\tfirst = 0;\n\t\t} else\n\t\t\tstrcat(xml_string, START_JSDL_ARG);\n\n\t\tcur_len += encode_argument(argv[j], arg, escape_chars);\n\t\ttemp = realloc(xml_string, cur_len);\n\t\tif (temp == NULL) {\n\t\t\tfree(xml_string);\n\t\t\tfree(arg);\n\t\t\treturn NULL;\n\t\t} else\n\t\t\txml_string = 
temp;\n\n\t\tstrcat(xml_string, arg);\n\t\tstrcat(xml_string, END_JSDL_ARG);\n\t\targ[0] = '\\0';\n\t}\n\tfree(arg);\n\treturn xml_string;\n}\n\n/**\n * @brief\n *\tTakes 'executable' and 'argument_list'(encoded form) as an input and\n *      stores decoded arguments in address of argarray passed to this function.\n *      It breaks the encoded string into arguments, decodes each argument into\n *      a plain string and assigns it to 'argarray'.\n * @param[in]   executable - the input for decoding\n * @param[in]   arg_list - list of arguments passed on command-line\n * @param[in]   shell - 'executable' stored into shell variable\n * @param[in]   argarray - stores the decoded arguments passed to this function\n *\n * @return int\n * @retval  0\tindicates SUCCESS\n * @retval -1\tindicates FAILURE\n */\nextern int\ndecode_xml_arg_list(char *executable, char *arg_list,\n\t\t    char **shell, char ***argarray)\n{\n\tchar *argument_list = NULL;\n\tchar *token = NULL;\n\tchar *arg = NULL;\n\tchar seps[] = \"<>\";\n\tchar **argv = NULL;\n\tchar **argv_temp = NULL;\n\tint no_of_arguments = 0;\n\tint arg_len = 0, i;\n\tchar *escape_chars[PBS_NUM_ESC_CHARS];\n\tlong arg_max = -1;\n\tchar *saveptr; /* for use with strtok_r */\n\t/* Check for executable */\n\tif (executable == NULL)\n\t\treturn -1;\n\n\t/* store executable into shell variable */\n\t*shell = executable;\n\n\t/*\n\t * Initialize escape chars array and max number of\n\t * chars an argument can hold.\n\t */\n\n\tinit_escapechars_maxarg(escape_chars, &arg_max);\n\n\tno_of_arguments++;\n\targv = calloc(no_of_arguments + 1, sizeof(char *));\n\tif (argv == NULL) {\n\t\treturn -1;\n\t}\n\n\targv[0] = malloc(strlen(*shell) + 1);\n\tif (argv[0] == NULL) {\n\t\tfree(argv);\n\t\treturn -1;\n\t}\n\tstrcpy(argv[0], *shell);\n\n\t/* only the executable is passed on the command line */\n\tif (arg_list == NULL) {\n\t\targv[no_of_arguments] = 0;\n\t\t*argarray = argv;\n\t\treturn 0;\n\t}\n\n\t/* Allocate memory to hold decoded argument 
*/\n\targ = malloc(strlen(arg_list) + 1);\n\tif (arg == NULL) {\n\t\tfree(argv);\n\t\treturn -1;\n\t}\n\targ[0] = '\\0';\n\n\targument_list = strdup(arg_list);\n\tif (argument_list == NULL)\n\t\tgoto error;\n\n\ttoken = strtok_r(argument_list, seps, &saveptr);\n\twhile (token) {\n\t\tif (strstr(token, \"jsdl-hpcpa:Argument\") == NULL) {\n\t\t\t/*\n\t\t\t * '<>' is used as delimiters, so strtok might\n\t\t\t * return '<jsdl-hpcpa:Argument>' as a token, so consider\n\t\t\t * only those tokens which don't contain the\n\t\t\t * string 'jsdl-hpcpa:Argument'.\n\t\t\t */\n\t\t\tno_of_arguments++;\n\t\t\t/* found an argument */\n\t\t\targv_temp = realloc(argv,\n\t\t\t\t\t    (no_of_arguments + 1) * sizeof(char *));\n\t\t\tif (argv_temp == NULL)\n\t\t\t\tgoto error;\n\t\t\telse\n\t\t\t\targv = argv_temp;\n\n\t\t\targ_len = decode_argument(token, arg);\n\t\t\targv[no_of_arguments - 1] = (char *) malloc(arg_len + 1);\n\t\t\tif (argv[no_of_arguments - 1] == NULL)\n\t\t\t\tgoto error;\n\t\t\tstrcpy(argv[no_of_arguments - 1], arg);\n\t\t\targ[0] = '\\0';\n\t\t}\n\t\ttoken = strtok_r(NULL, seps, &saveptr);\n\t}\n\targv[no_of_arguments] = 0;\n\t*argarray = argv;\n\tfree(arg);\n\tfree(argument_list);\n\tDBPRT((\"%s: no of arguments: %d\\n\", __func__, no_of_arguments))\n\treturn 0;\nerror:\n\tif (argv) {\n\t\tfor (i = 0; i < no_of_arguments; i++) {\n\t\t\tif (argv[i])\n\t\t\t\tfree(argv[i]);\n\t\t}\n\t\tfree(argv);\n\t}\n\tif (arg)\n\t\tfree(arg);\n\tif (argument_list)\n\t\tfree(argument_list);\n\treturn -1;\n}\n\n/**\n * @brief\n *\ttakes an encoded XML string as an input, and assigns decoded arguments\n *      to the 'argarray' variable.\n *\n * @param[in]   arg_list - encoded XML input string\n * @param[in]   argarray - decoded argument string to send as \"extend\" data.\n *\n * @return int\n * @retval  0\ton SUCCESS\n * @retval -1\ton FAILURE\n */\n\nextern int\ndecode_xml_arg_list_str(char *arg_list,\n\t\t\tchar **argarray)\n{\n\tchar *argument_list = NULL;\n\tchar *token = 
NULL;\n\tchar *arg;\n\tchar seps[] = \"<>\";\n\tint cur_len = 0;\n\tchar *argv;\n\tchar *argv_temp;\n\tint first = 1;\n\tsize_t arg_len = 0;\n\tchar *escape_chars[PBS_NUM_ESC_CHARS];\n\tlong arg_max = -1;\n\tchar *saveptr; /* for use with strtok_r */\n\n\t/* Arguments are not specified */\n\tif (arg_list == NULL)\n\t\treturn 0;\n\n\t/*\n\t * Initialize escape chars array and max number of\n\t * chars an argument can hold.\n\t */\n\n\tinit_escapechars_maxarg(escape_chars, &arg_max);\n\n\t/* Allocate memory to hold the decoded argument */\n\targ_len = strlen(arg_list) + 1;\n\targ = malloc(arg_len);\n\tif (arg == NULL)\n\t\treturn -1;\n\targ[0] = '\\0';\n\n\targument_list = strdup(arg_list);\n\tif (argument_list == NULL) {\n\t\tfree(arg);\n\t\treturn -1;\n\t}\n\n\t/* Allocate memory to hold the argument list */\n\targv = malloc(arg_len);\n\tif (argv == NULL) {\n\t\tfree(arg);\n\t\tfree(argument_list);\n\t\treturn -1;\n\t}\n\targv[0] = '\\0';\n\n\ttoken = strtok_r(argument_list, seps, &saveptr);\n\twhile (token) {\n\t\tif (strstr(token, \"jsdl-hpcpa:Argument\") == NULL) {\n\t\t\t/*\n\t\t\t * '<>' is used as the delimiter set, so strtok_r may\n\t\t\t * return '<jsdl-hpcpa:Argument>' as a token; consider\n\t\t\t * only those tokens which do not contain the\n\t\t\t * string 'jsdl-hpcpa:Argument'.\n\t\t\t */\n\t\t\targ_len = decode_argument(token, arg);\n\t\t\tcur_len += arg_len + 1;\n\t\t\tif (first) {\n\t\t\t\tfirst = 0;\n\t\t\t\tstrcpy(argv, arg);\n\t\t\t} else {\n\t\t\t\tstrcat(argv, \" \");\n\t\t\t\tstrcat(argv, arg);\n\t\t\t}\n\t\t}\n\t\ttoken = strtok_r(NULL, seps, &saveptr);\n\t\targ[0] = '\\0';\n\t}\n\t/*\n\t * cur_len is 0 when no arguments were found; never shrink to 0,\n\t * since realloc(p, 0) may free p and return NULL, which would lead\n\t * to a double free below.\n\t */\n\targv_temp = realloc(argv, (cur_len > 0) ? cur_len : 1);\n\tif (argv_temp == NULL) {\n\t\tfree(arg);\n\t\tfree(argument_list);\n\t\tfree(argv);\n\t\treturn -1;\n\t} else\n\t\targv = argv_temp;\n\t*argarray = argv;\n\tfree(arg);\n\tfree(argument_list);\n\treturn 0;\n}\n"
  },
  {
    "path": "src/lib/Libjson/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nSUBDIRS = \\\n\tcJSON\n\nlib_LTLIBRARIES = libpbsjson.la\nlibpbsjson_la_LDFLAGS = -version-info 0:0:0\nlibpbsjson_la_LIBADD = cJSON/libpbscjson.la @cjson_lib@\nlibpbsjson_la_SOURCES =\n"
  },
  {
    "path": "src/lib/Libjson/cJSON/Makefile.am",
    "content": "#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nnoinst_LTLIBRARIES = libpbscjson.la\n\nlibpbscjson_la_CPPFLAGS = \\\n\t-I$(top_srcdir)/src/include \\\n\t@cjson_inc@\n\nlibpbscjson_la_LIBADD = \\\n\t@cjson_lib@\n\nlibpbscjson_la_SOURCES = \\\n\tpbs_cjson.c\n"
  },
  {
    "path": "src/lib/Libjson/cJSON/pbs_cjson.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <stdlib.h>\n#include <stdio.h>\n#include <cjson/cJSON.h>\n#include \"pbs_json.h\"\n\n/**\n * @brief\n *\tInsert cJSON structure into cJSON object or array\n *\n * @param[in] parent - parent cJSON structure\n * @param[in] key - key for object structure, ignored for arrays\n * @param[in] value - cJSON structure\n *\n */\nstatic void\ncjson_insert_item(cJSON *parent, char *key, cJSON *value)\n{\n    if (cJSON_IsObject(parent))\n        cJSON_AddItemToObject(parent, key, value);\n\n    if (cJSON_IsArray(parent))\n        cJSON_AddItemToArray(parent, value);\n}\n\n/**\n * @brief\n *\tcreate json object\n *\n * @return - json_data\n * @retval   NULL - Failure\n * @retval   json_data - Success\n *\n */\njson_data *\npbs_json_create_object()\n{\n    return (json_data *) cJSON_CreateObject();\n}\n\n/**\n * @brief\n *\tcreate json array\n *\n * @return - json_data\n * @retval   NULL - Failure\n * @retval   json_data - Success\n *\n */\njson_data *\npbs_json_create_array()\n{\n    return (json_data *) cJSON_CreateArray();\n}\n\n/**\n * @brief\n *\tinsert json data into json structure (like object or array)\n *\n * @param[in] parent - json object or array\n * @param[in] key - key for object structure, ignored for arrays\n * @param[in] value - json structure\n *\n */\nvoid\npbs_json_insert_item(json_data *parent, char *key, json_data 
*value)\n{\n    cJSON *par = (cJSON *) parent;\n    cJSON *val = (cJSON *) value;\n    cjson_insert_item(par, key, val);\n}\n\n/**\n * @brief\n *  insert string into json structure (like object or array)\n *\n * @param[in] parent - json object or array\n * @param[in] key - key for object structure, ignored for arrays\n * @param[in] value - string\n *\n * @return - Error code\n * @retval   1 - Failure\n * @retval   0 - Success\n *\n */\nint\npbs_json_insert_string(json_data *parent, char *key, char *value)\n{\n    cJSON *par = (cJSON *) parent;\n    cJSON *val;\n    if ((val = cJSON_CreateString(value)) == NULL)\n        return 1;\n    cjson_insert_item(par, key, val);\n    return 0;\n}\n\n/**\n * @brief\n *  insert number into json structure (like object or array)\n *\n * @param[in] parent - json object or array\n * @param[in] key - key for object structure, ignored for arrays\n * @param[in] value - number\n *\n * @return - Error code\n * @retval   1 - Failure\n * @retval   0 - Success\n *\n */\nint\npbs_json_insert_number(json_data *parent, char *key, double value)\n{\n    cJSON *par = (cJSON *) parent;\n    cJSON *val;\n    if ((val = cJSON_CreateNumber(value)) == NULL)\n        return 1;\n    cjson_insert_item(par, key, val);\n    return 0;\n}\n\n/**\n * @brief\n *  parse and insert value into json structure (like object or array)\n *\n * @param[in] parent - json object or array\n * @param[in] key - key for object structure, ignored for arrays\n * @param[in] value - string for parsing\n * @param[in] ignore_empty - do not insert empty values (like 0 or \"\")\n *\n * @return - Error code\n * @retval   1 - Failure\n * @retval   0 - Success\n *\n */\nint\npbs_json_insert_parsed(json_data *parent, char *key, char *value, int ignore_empty)\n{\n    cJSON *par = (cJSON *) parent;\n    cJSON *val;\n    if ((val = cJSON_ParseWithOpts(value, NULL, 1)) == NULL)\n        val = cJSON_CreateString(value);\n    if (ignore_empty && val != NULL) {\n        if (cJSON_IsString(val) 
&& cJSON_GetStringValue(val)[0] == '\\0') {\n            cJSON_Delete(val);\n            val = NULL;\n        }\n#if CJSON_VERSION_MAJOR <= 1 && CJSON_VERSION_MINOR <= 7 && CJSON_VERSION_PATCH < 13\n        if (cJSON_IsNumber(val) && val->valuedouble == 0)\n#else\n        if (cJSON_IsNumber(val) && cJSON_GetNumberValue(val) == 0)\n#endif\n        {\n            cJSON_Delete(val);\n            val = NULL;\n        }\n    }\n    if (!ignore_empty && val == NULL)\n        return 1;\n    if (val)\n        cjson_insert_item(par, key, val);\n    return 0;\n}\n\n/**\n * @brief\n *  print json data to output\n *\n * @param[in] data - json data\n * @param[in] stream - output\n *\n * @return - Error code\n * @retval   1 - Failure\n * @retval   0 - Success\n *\n */\nint\npbs_json_print(json_data *data, FILE *stream)\n{\n    char *json_out = cJSON_Print((cJSON *) data);\n    if (json_out == NULL)\n        return 1;\n    fprintf(stream, \"%s\\n\", json_out);\n    free(json_out);\n    return 0;\n}\n\n/**\n * @brief\n *  free json structure\n *\n * @param[in] data - json data\n *\n */\nvoid\npbs_json_delete(json_data *data)\n{\n    cJSON_Delete((cJSON *) data);\n}\n"
  },
  {
    "path": "src/lib/Liblicensing/Makefile.am",
    "content": "#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nlib_LTLIBRARIES = liblicensing.la\n\nliblicensing_la_CPPFLAGS = \\\n\t-I$(pbssrc_dir)/src/include\n\nliblicensing_la_LDFLAGS = -version-info 0:0:0\n\nliblicensing_la_SOURCES = license_client.c \\\n\t\t\t  liblicense.h\n"
  },
  {
    "path": "src/lib/Liblicensing/liblicense.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef LIBLICENSE_H\n#define LIBLICENSE_H\n\n#include <time.h> /* for time_t, used by checkkey() */\n\nint lic_init(char *license_location);\nint lic_obtainable();\nint lic_get(int count);\nchar *lic_get_error();\nint checkkey(char **cred_list, char *nd_name, time_t *expiry);\nvoid lic_close();\nchar *lic_check_expiry();\nint lic_needed_for_node(void *node_lic_ctx);\nint process_topology_info(void **node_lic_ctx, char *topology_str);\n\n#endif /* LIBLICENSE_H */\n"
  },
  {
    "path": "src/lib/Liblicensing/license_client.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <stddef.h>\n#include <time.h>\n\n#include \"liblicense.h\"\n\n/* Stub licensing client: every call succeeds and licenses are unlimited. */\n\nint\nlic_init(char *license_location)\n{\n\treturn 0;\n}\n\nint\nlic_obtainable()\n{\n\treturn 1000000;\n}\n\nint\nlic_get(int count)\n{\n\treturn 0;\n}\n\nchar *\nlic_get_error()\n{\n\treturn \"No Error\";\n}\n\nint\ncheckkey(char **cred_list, char *nd_name, time_t *expiry)\n{\n\treturn 0;\n}\n\nvoid\nlic_close()\n{\n\treturn;\n}\n\nchar *\nlic_check_expiry()\n{\n\treturn NULL;\n}\n\nint\nlic_needed_for_node(void *node_lic_ctx)\n{\n\treturn 0;\n}\n\nint\nprocess_topology_info(void **node_lic_ctx, char *topology_str)\n{\n\treturn 0;\n}\n"
  },
  {
    "path": "src/lib/Liblog/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nlib_LIBRARIES = liblog.a\n\nliblog_a_CPPFLAGS = \\\n\t-I$(top_srcdir)/src/include \\\n\t@KRB5_CFLAGS@\n\nliblog_a_SOURCES = \\\n\tchk_file_sec.c \\\n\tlog_event.c \\\n\tpbs_log.c \\\n\tpbs_messages.c \\\n\tsetup_env.c\n"
  },
  {
    "path": "src/lib/Liblog/chk_file_sec.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n/**\n * @file\tchk_file_sec.c\n */\n#include <errno.h>\n#include <stdio.h>\n#include <limits.h>\n#include <stdlib.h>\n#include <string.h>\n#include <unistd.h>\n#include <assert.h>\n#include <sys/param.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n#include \"portability.h\"\n#include \"log.h\"\n#include \"libutil.h\"\n#include \"pbs_ifl.h\"\n\n#ifndef S_ISLNK\n#define S_ISLNK(m) (((m) &S_IFMT) == S_IFLNK)\n#endif\n\n#ifdef WIN32\n\n/**\n * @brief\n *\ttest stat of directory by checking permission mask.\n *\n * @param[in]\tsp - pointer to stat structure\n * @param[in]\tisdir - value indicating directory or not\n * @param[in]\tsticky - value indicating whether to allow write on directory\n * @param[in] \tdisallow - value indicating whether admin and owner given access permission\n * @param[in] \tpath - directory path\n * @param[in] \terrmsg - appropriate error msg\n *\n * @return\tint\n * @retval\t0\t\tsuccess\n * @retval\terror code\terror\n *\n */\nstatic int\nteststat(struct stat *sp, int isdir, int sticky, int disallow,\n\t char *path, char *errmsg)\n{\n\n\tint rc = 0;\n\n\tif (isdir && !(S_ISDIR(sp->st_mode))) {\n\t\t/* Target is supposed to be a directory, but is not. 
*/\n\t\trc = ENOTDIR;\n\t} else if (!isdir && (S_ISDIR(sp->st_mode))) {\n\t\t/* Target is not supposed to be a directory, but is. */\n\t\trc = EISDIR;\n\t} else {\n\t\trc = perm_granted_admin_and_owner(path, disallow, NULL, errmsg);\n\t}\n\treturn rc;\n}\n\n#else /******* UNIX only code here ************ */\n\n/**\n * @brief\n *      test stat of directory by checking permission mask.\n *\n * @param[in]   sp - pointer to stat structure\n * @param[in]   isdir - value indicating directory or not\n * @param[in]   sticky - value indicating whether to allow write on directory\n * @param[in]   disallow - value indicating whether admin and owner given access permission\n * @param[in]   uid - if >0, value of owner. else, check if owner <=10\n *\n * @return      int\n * @retval      0               success\n * @retval      error code      error\n *\n */\n\nstatic int\nteststat(struct stat *sp, int isdir, int sticky, int disallow, int uid)\n{\n\tint rc = 0;\n\n\tif ((~disallow & S_IWUSR) && (sp->st_uid > 10 && !(uid > 0 && uid == sp->st_uid))) {\n\t\t/* Owner write is allowed, and UID is greater than 10 or uid doesn't match. */\n\t\trc = EPERM;\n\t} else if ((~disallow & S_IWGRP) && (sp->st_gid > 9)) {\n\t\t/* Group write is allowed, and GID is greater than 9. */\n\t\trc = EPERM;\n\t} else if ((~disallow & S_IWOTH) &&\n\t\t   (!S_ISDIR(sp->st_mode) || !(sp->st_mode & S_ISVTX) || !sticky)) {\n\t\t/*\n\t\t * Other write is allowed, and at least one of the following\n\t\t * is true:\n\t\t * - target is not a directory\n\t\t * - target does not have sticky bit set\n\t\t * - the value of the sticky argument we were passed was zero\n\t\t */\n\t\trc = EPERM;\n\t} else if (isdir && !S_ISDIR(sp->st_mode)) {\n\t\t/* Target is supposed to be a directory, but is not. */\n\t\trc = ENOTDIR;\n\t} else if (!isdir && S_ISDIR(sp->st_mode)) {\n\t\t/* Target is not supposed to be a directory, but is. 
*/\n\t\trc = EISDIR;\n\t} else if ((S_IRWXU | S_IRWXG | S_IRWXO) & disallow & sp->st_mode) {\n\t\t/* Disallowed permission bits are set in the mode mask. */\n\t\trc = EACCES;\n\t}\n\treturn rc;\n}\n\n/**\n * @brief\n *      test stat of directory by checking permission mask.\n *\n * @param[in]   sp - pointer to stat structure\n * @param[in]   isdir - value indicating directory or not\n * @param[in]   sticky - value indicating whether to allow write on directory\n * @param[in]   disallow - value indicating whether admin and owner given access permission\n * @param[in]   uid - if >0, value of owner. else, check if owner <=10\n *\n * @return      int\n * @retval      0               success\n * @retval      error code      error\n *\n */\n\nstatic int\ntempstat(struct stat *sp, int isdir, int sticky, int disallow, int uid)\n{\n\tint rc = 0;\n\n\tif ((~disallow & S_IWUSR) && (sp->st_uid > 10 && !(uid > 0 && uid == sp->st_uid))) {\n\t\t/* Owner write is allowed, and UID is greater than 10 or owner is not self. */\n\t\trc = EPERM;\n\t} else if ((~disallow & S_IWGRP) && (sp->st_gid > 9)) {\n\t\t/* Group write is allowed, and GID is greater than 9. */\n\t\trc = EPERM;\n\t} else if (~disallow & S_IWOTH) {\n\t\t/*\n\t\t * Other write is allowed, and at least one of the following\n\t\t * is true:\n\t\t * - target is not a directory\n\t\t * - the value of the sticky argument we were passed was zero\n\t\t */\n\t\tif (!S_ISDIR(sp->st_mode) || !sticky) {\n\t\t\trc = EPERM;\n\t\t}\n\t\t/*\n\t\t ** - sticky bit is off and other write is on\n\t\t */\n\t\tif (!(sp->st_mode & S_ISVTX) && (sp->st_mode & S_IWOTH)) {\n\t\t\trc = EPERM;\n\t\t}\n\t} else if (isdir && !S_ISDIR(sp->st_mode)) {\n\t\t/* Target is supposed to be a directory, but is not. */\n\t\trc = ENOTDIR;\n\t} else if (!isdir && S_ISDIR(sp->st_mode)) {\n\t\t/* Target is not supposed to be a directory, but is. 
*/\n\t\trc = EISDIR;\n\t} else if ((S_IRWXU | S_IRWXG | S_IRWXO) & disallow & sp->st_mode) {\n\t\t/* Disallowed permission bits are set in the mode mask. */\n\t\trc = EACCES;\n\t}\n\treturn rc;\n}\n\n#endif /** END of UNIX only code **/\n\n/**\n * @brief\n * \tchk_file_sec() - Check file/directory security\n *      Part of the PBS System Security \"Feature\"\n *\n * @par\tTo be secure, all directories (and final file) in path must be:\n *\t\towned by uid < 10\n *\t\towned by group < 10 if group writable\n *\t\tnot have world writable unless stick bit set & this is allowed.\n *\n * @param[in]\tpath - path to check\n * @param[in]\tisdir - 1 = path is directory, 0 = file\n * @param[in]\tsticky - allow write on directory if sticky set\n * @param[in]\tdisallow - perm bits to disallow\n * @param[in] \tfullpath - check full path\n *\n * @return\tint\n * @retval\t0 \t\t\tif ok\n * @retval\terrno value \t\tif not ok, including:\n *              \t\t\tEPERM if not owned by root\n *              \t\t\tENOTDIR if not file/directory as specified\n *              \t\t\tEACCESS if permissions are not ok\n */\nint\nchk_file_sec(char *path, int isdir, int sticky, int disallow, int fullpath)\n{\n\treturn chk_file_sec_user(path, isdir, sticky, disallow, fullpath, 0);\n}\n\n/**\n * @brief\n * \tchk_file_sec_user() - Check file/directory security\n *      Part of the PBS System Security \"Feature\"\n *\n * @par\tTo be secure, all directories (and final file) in path must be:\n *\t\towned by userid < 10 or owned by uid if set\n *\t\towned by groupid < 10 if group writable\n *\t\tnot have world writable unless stick bit set & this is allowed.\n *\n * @param[in]\tpath - path to check\n * @param[in]\tisdir - 1 = path is directory, 0 = file\n * @param[in]\tsticky - allow write on directory if sticky set\n * @param[in]\tdisallow - perm bits to disallow\n * @param[in]\tfullpath - check full path\n * @param[in]\tuid - uid to check\n *\n * @return\tint\n * @retval\t0 \t\t\tif ok\n * 
@retval\terrno value \t\tif not ok, including:\n *              \t\t\tEPERM if not owned by root or by uid\n *              \t\t\tENOTDIR if not file/directory as specified\n *              \t\t\tEACCESS if permissions are not ok\n */\nint\nchk_file_sec_user(char *path, int isdir, int sticky, int disallow, int fullpath, int uid)\n{\n\tint rc = 0;\n\tstruct stat sbuf;\n\tchar *real = NULL;\n\n\tassert(path != NULL);\n\tassert(path[0] != '\\0');\n\n\tif ((real = realpath(path, NULL)) == NULL) {\n\t\trc = errno;\n\t\tgoto chkerr;\n\t}\n\n#ifdef WIN32\n\tif (((real[0] == '/') ||\n\t     ((real[1] == ':') && (real[2] == '/'))) &&\n\t    fullpath) {\n\t\tchar *slash;\n\n\t\tif (real[0] == '/')\n\t\t\tslash = strchr(&real[1], '/');\n\t\telse\n\t\t\tslash = strchr(&real[3], '/');\n\n\t\twhile (slash != NULL) {\n\t\t\t*slash = '\\0'; /* temp end of string */\n\n\t\t\tif (lstat(real, &sbuf) == -1) {\n\t\t\t\trc = errno;\n\t\t\t\tgoto chkerr;\n\t\t\t}\n\n\t\t\tassert(S_ISLNK(sbuf.st_mode) == 0);\n\t\t\trc = teststat(&sbuf, 1, sticky, WRITES_MASK, real,\n\t\t\t\t      log_buffer);\n\t\t\tif (rc != 0)\n\t\t\t\tgoto chkerr;\n\n\t\t\t*slash = '/';\n\t\t\tslash = strchr(slash + 1, '/');\n\t\t}\n\t}\n#else\n\tif ((path[0] == '/') && fullpath) {\n\t\t/* check full path starting at root */\n\t\tchar *slash = strchr(&real[1], '/');\n\n\t\twhile (slash != NULL) {\n\t\t\t*slash = '\\0'; /* temp end of string */\n\n\t\t\tif (lstat(real, &sbuf) == -1) {\n\t\t\t\trc = errno;\n\t\t\t\tgoto chkerr;\n\t\t\t}\n\n\t\t\tassert(S_ISLNK(sbuf.st_mode) == 0);\n\n\t\t\trc = teststat(&sbuf, 1, sticky, S_IWGRP | S_IWOTH, uid);\n\n\t\t\tif (rc != 0)\n\t\t\t\tgoto chkerr;\n\n\t\t\t*slash = '/';\n\t\t\tslash = strchr(slash + 1, '/');\n\t\t}\n\t}\n#endif\n\n\tif (lstat(real, &sbuf) == -1) {\n\t\trc = errno;\n\t\tgoto chkerr;\n\t}\n\n\tassert(S_ISLNK(sbuf.st_mode) == 0);\n\n#ifdef WIN32\n\trc = teststat(&sbuf, isdir, sticky, disallow, real, log_buffer);\n#else\n\trc = teststat(&sbuf, isdir, sticky, 
disallow, uid);\n#endif\n\nchkerr:\n\tif (rc != 0) {\n\t\tchar *error_buf;\n\n\t\tpbs_asprintf(&error_buf,\n\t\t\t     \"Security violation \\\"%s\\\" resolves to \\\"%s\\\"\",\n\t\t\t     path, (real != NULL) ? real : \"(unresolved)\");\n\t\tlog_err(rc, __func__, error_buf);\n#ifdef WIN32\n\t\tif (strlen(log_buffer) > 0)\n\t\t\tlog_err(rc, __func__, log_buffer);\n#endif\n\t\tfree(error_buf);\n\t}\n\n\tfree(real);\n\treturn (rc);\n}\n\n/**\n * @brief\n * \ttmp_file_sec() - Check file/directory security\n *      Part of the PBS System Security \"Feature\"\n *\n * @par\tTo be secure, all directories (and final file) in path must be:\n *\t\towned by uid < 10\n *\t\towned by group < 10 if group writable\n *\t\tnot be world writable unless sticky bit set & this is allowed.\n *\n * @param[in]   path - path to check\n * @param[in]   isdir - 1 = path is directory, 0 = file\n * @param[in]   sticky - allow write on directory if sticky set\n * @param[in]   disallow - perm bits to disallow\n * @param[in]   fullpath - check full path\n *\n * @return      int\n * @retval      0                       if ok\n * @retval      errno value             if not ok, including:\n *                                      EPERM if not owned by root\n *                                      ENOTDIR if not file/directory as specified\n *                                      EACCES if permissions are not ok\n */\n\nint\ntmp_file_sec(char *path, int isdir, int sticky, int disallow, int fullpath)\n{\n\treturn tmp_file_sec_user(path, isdir, sticky, disallow, fullpath, 0);\n}\n\n/**\n * @brief\n * \ttmp_file_sec_user() - Check file/directory security\n *      Part of the PBS System Security \"Feature\"\n *\n * @par\tTo be secure, all directories (and final file) in path must be:\n *\t\towned by userid < 10 or owned by uid if set\n *\t\towned by groupid < 10 if group writable\n *\t\tnot be world writable unless sticky bit set & this is allowed.\n *\n * @param[in]   path - path to check\n * @param[in]   isdir - 1 = path is directory, 0 
= file\n * @param[in]   sticky - allow write on directory if sticky set\n * @param[in]   disallow - perm bits to disallow\n * @param[in]   fullpath - check full path\n * @param[in]   uid - userid to check\n *\n * @return      int\n * @retval      0                       if ok\n * @retval      errno value             if not ok, including:\n *                                      EPERM if not owned by root\n *                                      ENOTDIR if not file/directory as specified\n *                                      EACCES if permissions are not ok\n */\n\nint\ntmp_file_sec_user(char *path, int isdir, int sticky, int disallow, int fullpath, int uid)\n{\n\tint rc = 0;\n\tstruct stat sbuf;\n\tchar *real = NULL;\n\n\tassert(path != NULL);\n\tassert(path[0] != '\\0');\n\n\tif ((real = realpath(path, NULL)) == NULL) {\n\t\trc = errno;\n\t\tgoto chkerr;\n\t}\n\n#ifdef WIN32\n\tif (((real[0] == '/') ||\n\t     ((real[1] == ':') && (real[2] == '/'))) &&\n\t    fullpath) {\n\t\tchar *slash;\n\n\t\tif (real[0] == '/')\n\t\t\tslash = strchr(&real[1], '/');\n\t\telse\n\t\t\tslash = strchr(&real[3], '/');\n\n\t\twhile (slash != NULL) {\n\t\t\t*slash = '\\0'; /* temp end of string */\n\n\t\t\tif (lstat(real, &sbuf) == -1) {\n\t\t\t\trc = errno;\n\t\t\t\tgoto chkerr;\n\t\t\t}\n\n\t\t\tassert(S_ISLNK(sbuf.st_mode) == 0);\n\t\t\trc = teststat(&sbuf, 1, sticky, WRITES_MASK, real,\n\t\t\t\t      log_buffer);\n\t\t\tif (rc != 0)\n\t\t\t\tgoto chkerr;\n\n\t\t\t*slash = '/';\n\t\t\tslash = strchr(slash + 1, '/');\n\t\t}\n\t}\n#else\n\tif ((path[0] == '/') && fullpath) {\n\t\t/* check full path starting at root */\n\t\tchar *slash = strchr(&real[1], '/');\n\n\t\twhile (slash != NULL) {\n\t\t\t*slash = '\\0'; /* temp end of string */\n\n\t\t\tif (lstat(real, &sbuf) == -1) {\n\t\t\t\trc = errno;\n\t\t\t\tgoto chkerr;\n\t\t\t}\n\n\t\t\tassert(S_ISLNK(sbuf.st_mode) == 0);\n\n\t\t\trc = tempstat(&sbuf, 1, sticky, 0, uid);\n\n\t\t\tif (rc != 0)\n\t\t\t\tgoto chkerr;\n\n\t\t\t*slash 
= '/';\n\t\t\tslash = strchr(slash + 1, '/');\n\t\t}\n\t}\n#endif\n\n\tif (lstat(real, &sbuf) == -1) {\n\t\trc = errno;\n\t\tgoto chkerr;\n\t}\n\n\tassert(S_ISLNK(sbuf.st_mode) == 0);\n\n#ifdef WIN32\n\trc = teststat(&sbuf, isdir, sticky, disallow, real, log_buffer);\n#else\n\trc = tempstat(&sbuf, isdir, sticky, disallow, uid);\n#endif\n\nchkerr:\n\tif (rc != 0) {\n\t\tchar *error_buf;\n\n\t\tpbs_asprintf(&error_buf,\n\t\t\t     \"Security violation \\\"%s\\\" resolves to \\\"%s\\\"\",\n\t\t\t     path, real);\n\t\tlog_err(rc, __func__, error_buf);\n#ifdef WIN32\n\t\tif (strlen(log_buffer) > 0)\n\t\t\tlog_err(rc, __func__, log_buffer);\n#endif\n\t\tfree(error_buf);\n\t}\n\n\tfree(real);\n\treturn (rc);\n}\n\n/**\n * @brief - This function takes an <program name> <args> as input in string format\n *\t    and returns the program name\n *\n * @return - char *\n * @retval - NULL when no valid program name is found\n * @retval - a newly allocated program name\n *\n * @note - Caller holds the responsibility of freeing up the return value\n */\nchar *\nget_script_name(char *input)\n{\n\tchar *tok;\n\tchar *next_space;\n\tint path_exists = 0;\n\tchar *prev_space = NULL;\n\tint starts_with_quotes = 0;\n\tchar *delim = \" \";\n\tstruct stat sbuf;\n\n\tif (input == NULL)\n\t\treturn NULL;\n\n\t/* if path starts with double quotes, skip it */\n\tif (input[0] == '\\\"') {\n\t\tinput++;\n\t\tstarts_with_quotes = 1;\n\t}\n\n\ttok = strdup(input);\n\tif (tok == NULL)\n\t\treturn NULL;\n\n\t/* get rid of double quotes at the end */\n\tif (starts_with_quotes && tok[strlen(tok) - 1] == '\\\"')\n\t\ttok[strlen(tok) - 1] = '\\0';\n\n\tfor (next_space = strpbrk(tok, delim); next_space != NULL; next_space = strpbrk(next_space, delim)) {\n\t\tint ret_fs;\n\t\t*next_space = '\\0';\n\t\tmemset(&sbuf, 0, sizeof(struct stat));\n\t\tret_fs = stat(tok, &sbuf);\n\t\tif (ret_fs != 0) {\n\t\t\tif (path_exists == 1) {\n\t\t\t\t*prev_space = '\\0';\n\t\t\t\treturn (tok);\n\t\t\t}\n\t\t} else if 
(S_ISREG(sbuf.st_mode)) {\n\t\t\t/* even if we encounter a regular file, do not break out of loop\n\t\t\t * unless we hit a case where stat call fails. This is because there is a\n\t\t\t * possibility that we might be looking at a file with a prefix\n\t\t\t * similar to the script but not exactly the same.\n\t\t\t * For example, if input is \"/get foo\" and there exists a filename \"/get\"\n\t\t\t * then we do not want to return from this place.\n\t\t\t */\n\t\t\tpath_exists = 1;\n\t\t\tprev_space = next_space;\n\t\t}\n\t\t*next_space = ' ';\n\t\t/* Ignore any extra spaces */\n\t\tnext_space += strspn(next_space, delim);\n\t}\n\n\tif (path_exists == 1) {\n\t\t/* set last known space as '\\0' so that returned path contains no arguments */\n\t\t*prev_space = '\\0';\n\t\treturn tok;\n\t}\n\n\t/* If control reaches here, \"tok\" must contain only the file path */\n\tmemset(&sbuf, 0, sizeof(struct stat));\n\tstat(tok, &sbuf);\n\tif (S_ISREG(sbuf.st_mode))\n\t\treturn tok;\n\n\t/* No file found */\n\tfree(tok);\n\treturn NULL;\n}\n"
  },
  {
    "path": "src/lib/Liblog/log_event.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tlog_event.c\n * @brief\n * log_event.c - contains functions to log event messages to the log file.\n *\n *\tThis is specific to the PBS Server.\n *\n * @par Functions included are:\n *\tlog_event()\n *\tlog_eventf()\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/param.h>\n#include <sys/types.h>\n#include \"pbs_ifl.h\"\n#include <time.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <stdarg.h>\n#include \"log.h\"\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"server.h\"\n#include \"libutil.h\"\n\n/* private data */\n\nstatic long log_event_lvl_priv = PBSEVENT_ERROR | PBSEVENT_SYSTEM |\n\t\t\t\t PBSEVENT_ADMIN | PBSEVENT_JOB | PBSEVENT_SCHED |\n\t\t\t\t PBSEVENT_JOB_USAGE | PBSEVENT_SECURITY |\n\t\t\t\t PBSEVENT_DEBUG | PBSEVENT_DEBUG2 |\n\t\t\t\t PBSEVENT_RESV;\n\n/* external global data */\n\nextern char *path_home;\nlong *log_event_mask = &log_event_lvl_priv;\n\n/**\n * @brief\n * \tChecks whether or not the given event type is being recorded into\n *\tthe log file.\n *\n * @param[in] eventtype - event type\n *\n * @return int\n *\t1 - if the given event type gets logged,\n *\t0 - otherwise, it's not recorded.\n *\n */\nint\nwill_log_event(int eventtype)\n{\n\tif (((eventtype & PBSEVENT_FORCE) != 0) ||\n\t    ((*log_event_mask & 
eventtype) != 0))\n\t\treturn (1);\n\n\treturn (0);\n}\n\n/**\n * @brief\n * \tlog_event - log a server event to the log file\n *\n *\tChecks to see if the event type is being recorded.  If it is,\n *\tthe message is passed off to log_record().\n *\n *\tThe caller should ensure proper formatting of the message if \"text\"\n *\tis to contain \"continuation lines\".\n *\n * @param[in] eventtype - event type\n * @param[in] objclass - event object class\n * @param[in] sev - severity level, used only if syslogging is enabled\n * @param[in] objname - object name stating log msg related to which object\n * @param[in] text - log msg to be logged.\n *\n *\tNote, \"sev\" or severity is used only if syslogging is enabled,\n *\tsee syslog(3) and log_record.c for details.\n */\n\nvoid\nlog_event(int eventtype, int objclass, int sev, const char *objname, const char *text)\n{\n\tif (will_log_event(eventtype))\n\t\tlog_record(eventtype, objclass, sev, objname, text);\n}\n\n/**\n * @brief\n *      do_log_eventf - helper function which does the actual logging\n *\n * @param[in] eventtype - event type\n * @param[in] objclass - event object class\n * @param[in] sev - severity level, used only if syslogging is enabled\n * @param[in] objname - object name stating log msg related to which object\n * @param[in] fmt - format string\n * @param[in] args 
- va_list of arguments for the format string\n *\n * @return void\n */\n\nvoid\ndo_log_eventf(int eventtype, int objclass, int sev, const char *objname, const char *fmt, va_list args)\n{\n\tva_list args_copy;\n\tint len;\n\tchar logbuf[LOG_BUF_SIZE];\n\tchar *buf;\n\n\tif (will_log_event(eventtype) == 0)\n\t\treturn;\n\n\tva_copy(args_copy, args);\n\n\tlen = vsnprintf(logbuf, sizeof(logbuf), fmt, args_copy);\n\tva_end(args_copy);\n\n\tif (len >= sizeof(logbuf)) {\n\t\tbuf = pbs_asprintf_format(len, fmt, args);\n\t\tif (buf == NULL)\n\t\t\treturn;\n\n\t} else\n\t\tbuf = logbuf;\n\n\tlog_record(eventtype, objclass, sev, objname, buf);\n\n\tif (len >= sizeof(logbuf))\n\t\tfree(buf);\n}\n\n/**\n * @brief\n * \tlog_eventf - a combination of log_event() and printf()\n *\n * @param[in] eventtype - event type\n * @param[in] objclass - event object class\n * @param[in] sev - severity level, used only if syslogging is enabled\n * @param[in] objname - object name stating log msg related to which object\n * @param[in] fmt - format string\n * @param[in] ... - arguments to format string\n *\n * @return void\n */\nvoid\nlog_eventf(int eventtype, int objclass, int sev, const char *objname, const char *fmt, ...)\n{\n\tva_list args;\n\tva_start(args, fmt);\n\tdo_log_eventf(eventtype, objclass, sev, objname, fmt, args);\n\tva_end(args);\n}\n"
  },
  {
    "path": "src/lib/Liblog/pbs_log.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * \n * @brief\n *  contains functions to log error and event messages to\n *\tthe log file.\n *\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"portability.h\"\n#include \"pbs_error.h\"\n\n#include <sys/stat.h>\n#include <sys/param.h>\n#include <sys/types.h>\n#include <sys/time.h>\n#include <limits.h>\n#include <time.h>\n#include <fcntl.h>\n#include <stdio.h>\n#include <string.h>\n#include <unistd.h>\n#include <errno.h>\n#include <stdlib.h>\n#include <pthread.h>\n#include <signal.h>\n#include <stddef.h>\n#include <stdarg.h>\n\n#include \"log.h\"\n#include \"pbs_ifl.h\"\n#include \"libutil.h\"\n#include \"pbs_version.h\"\n#if SYSLOG\n#include <syslog.h>\n#endif\n\n/* Default to no locking. */\n\n/* Global Data */\n\nchar log_buffer[LOG_BUF_SIZE];\nchar log_directory[_POSIX_PATH_MAX / 2];\n\n/*\n * PBS logging is not reentrant. Especially the log switch changes the\n * global log file pointer. 
Guard using a mutex.\n * Initialize the mutex once at log_open().\n */\nstatic pthread_once_t log_once_ctl = PTHREAD_ONCE_INIT;\nstatic pthread_mutex_t log_write_mutex;\ntypedef struct {\n\tstruct tm ptm;\n\tchar microsec_buf[8];\n} ms_time; /* microsecond time stamp */\n\nchar *msg_daemonname;\n\n/* Local Data */\n\nstatic int log_auto_switch = 0;\nstatic int log_open_day;\nstatic FILE *logfile; /* open stream for log file */\nstatic volatile int log_opened = 0;\n#if SYSLOG\nstatic int syslogopen = 0;\n#endif /* SYSLOG */\n\n/*\n * the order of these names MUST match the definitions of\n * PBS_EVENTCLASS_* in log.h\n */\nstatic char *class_names[] = {\n\t\"n/a\",\n\t\"Svr\",\n\t\"Que\",\n\t\"Job\",\n\t\"Req\",\n\t\"Fil\",\n\t\"Act\",\n\t\"Node\",\n\t\"Resv\",\n\t\"Sched\",\n\t\"Hook\",\n\t\"Resc\",\n\t\"TPP\"};\n\nstatic char pbs_leaf_name[PBS_MAXHOSTNAME + 1] = \"N/A\";\nstatic char pbs_mom_node_name[PBS_MAXHOSTNAME + 1] = \"N/A\";\nstatic unsigned int locallog = 0;\nstatic unsigned int syslogfac = 0;\nstatic unsigned int syslogsvr = 3;\nstatic unsigned int pbs_log_highres_timestamp = 0;\n\nstatic void log_init(void);\nstatic int log_mutex_lock();\nstatic int log_mutex_unlock();\nstatic void get_timestamp(ms_time *mst);\nstatic void log_record_inner(int eventtype, int objclass, int sev, const char *objname, const char *text, ms_time *mst);\nstatic void log_console_error(char *);\n\nvoid\nset_log_conf(char *leafname, char *nodename,\n\t     unsigned int islocallog, unsigned int sl_fac, unsigned int sl_svr,\n\t     unsigned int log_highres)\n{\n\tpthread_once(&log_once_ctl, log_init); /* initialize mutex once */\n\n\tlog_mutex_lock();\n\n\tif (leafname) {\n\t\tstrncpy(pbs_leaf_name, leafname, PBS_MAXHOSTNAME);\n\t\tpbs_leaf_name[PBS_MAXHOSTNAME] = '\\0';\n\t}\n\n\tif (nodename) {\n\t\tstrncpy(pbs_mom_node_name, nodename, PBS_MAXHOSTNAME);\n\t\tpbs_mom_node_name[PBS_MAXHOSTNAME] = '\\0';\n\t}\n\n\tlocallog = islocallog;\n\tsyslogfac = sl_fac;\n\tsyslogsvr = 
sl_svr;\n\tpbs_log_highres_timestamp = log_highres;\n\n\tlog_mutex_unlock();\n}\n\n#ifdef WIN32\n/**\n * @brief\n *\t\tgettimeofday - This function returns the current calendar\n *\t\ttime as the elapsed time since the epoch in the struct timeval\n *\t\tstructure indicated by tp\n *\n * @param[in] - tp - pointer to timeval struct\n * @param[in] - tzp - pointer to timezone struct (not used)\n * @return int\n * @retval -1 - failure\n * @retval 0 - success\n */\nint\ngettimeofday(struct timeval *tp, struct timezone *tzp)\n{\n\tFILETIME file_time = {0};\n\tULARGE_INTEGER large_int = {0};\n\t/*\n\t * Offset, in 100-nanosecond intervals, from \"January 1, 1601 (UTC)\"\n\t * to \"00:00:00 January 1, 1970\", since a Windows FILETIME counts\n\t * from \"January 1, 1601 (UTC)\"\n\t */\n\tstatic const unsigned __int64 epoch = 116444736000000000ULL;\n\n\tGetSystemTimeAsFileTime(&file_time);\n\tlarge_int.LowPart = file_time.dwLowDateTime;\n\tlarge_int.HighPart = file_time.dwHighDateTime;\n\ttp->tv_sec = (time_t) ((large_int.QuadPart - epoch) / 10000000L);\n\ttp->tv_usec = (long) (((large_int.QuadPart - epoch) / 10) % 1000000L);\n\treturn 0;\n}\n#endif\n\n/* External functions called */\n\n/**\n * @brief\n * set_msgdaemonname - set the variable msg_daemonname\n *\t\t\tas per the daemon\n * @param[in] - ch - the string msg_daemonname to be set\n * @return int\n * @retval 1 - failure\n * @retval 0 - success\n */\n\nint\nset_msgdaemonname(const char *ch)\n{\n\tif (!(msg_daemonname = strdup(ch))) {\n\t\treturn 1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * set_logfile - set the log stream to the given file pointer (e.g. stderr)\n * @param[in] - fp - log file pointer\n * @return void\n */\n\nvoid\nset_logfile(FILE *fp)\n{\n\tlog_opened = 1;\n\tlogfile = fp;\n}\n\n/*\n * @brief\n * \tmk_log_name - make the log name used by MOM\n *\tbased on the date: yyyymmdd\n *\n * @param[in] pbuf - buffer to hold log file name\n * @param[in] pbufsz - max size of buffer\n *\n * @return\tstring\n * @retval\tlog 
file name\tsuccess\n *\n */\n\nstatic char *\nmk_log_name(char *pbuf, size_t pbufsz)\n{\n#ifndef WIN32\n\tstruct tm ltm;\n#endif\n\tstruct tm *ptm;\n\ttime_t time_now;\n\n\ttime_now = time(NULL);\n\n#ifdef WIN32\n\tptm = localtime(&time_now);\n\t(void) snprintf(pbuf, pbufsz, \"%s\\\\%04d%02d%02d\", log_directory,\n\t\t\tptm->tm_year + 1900, ptm->tm_mon + 1, ptm->tm_mday);\n#else\n\tptm = localtime_r(&time_now, &ltm);\n\t(void) snprintf(pbuf, pbufsz, \"%s/%04d%02d%02d\", log_directory,\n\t\t\tptm->tm_year + 1900, ptm->tm_mon + 1, ptm->tm_mday);\n#endif\n\tlog_open_day = ptm->tm_yday; /* Julian date log opened */\n\treturn (pbuf);\n}\n\n/**\n * @brief\n *\tLock the mutex associated with this log\n *\n * @return Error code\n * @retval -1 - failure\n * @retval  0 - success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic int\nlog_mutex_lock()\n{\n\tif (pthread_mutex_lock(&log_write_mutex) != 0) {\n\t\tlog_console_error(\"PBS cannot lock its log\");\n\t\treturn -1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tUnlock the mutex associated with this log\n *\n * @return Error code\n * @retval -1 - failure\n * @retval  0 - success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic int\nlog_mutex_unlock()\n{\n\tif (pthread_mutex_unlock(&log_write_mutex) != 0) {\n\t\tlog_console_error(\"PBS cannot unlock its log\");\n\t\treturn -1;\n\t}\n\treturn 0;\n}\n\n#ifndef WIN32\n\n/**\n * @brief\n *\twrapper function for log_mutex_lock().\n *\n */\nstatic void\nlog_pre_fork_handler()\n{\n\tlog_mutex_lock();\n}\n\n/**\n * @brief\n *\twrapper function for log_mutex_unlock().\n *\n */\nstatic void\nlog_parent_post_fork_handler()\n{\n\tlog_mutex_unlock();\n}\n\n/**\n * @brief\n *\twrapper function for log_mutex_unlock().\n *\n */\nstatic void\nlog_child_post_fork_handler()\n{\n\tlog_mutex_unlock();\n}\n#endif\n\n/**\n * @brief\n *\tInitialize the log mutex and tls\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n 
*/\nstatic void\nlog_init(void)\n{\n\tif (pthread_mutex_init(&log_write_mutex, NULL) != 0) {\n\t\tfprintf(stderr, \"log write mutex init failed\\n\");\n\t\treturn;\n\t}\n\n\t/* \n\t * atfork handlers are required for the logging layer\n\t * since child processes might still want to log in the \n\t * child after fork, from the APP thread, then the APP thread\n\t * needs access to the log mutex - else if the fork happened \n\t * when the TPP thread acquired the lock, then the APP thread\n\t * after fork can never acquire it (since the TPP thread would\n\t * be dead after fork - only the thread calling fork is duplicated\n\t * in the child process).\n\t * \n\t * Hence in the prefork handler, we acquire the lock - thus ensuring\n\t * that the TPP thread does not own it, and then post fork we unlock \n\t * it both for the parent and child.\n\t *  \n\t */\n#ifndef WIN32\n\t/* for unix, set a pthread_atfork handler */\n\tif (pthread_atfork(log_pre_fork_handler, log_parent_post_fork_handler, log_child_post_fork_handler) != 0) {\n\t\tfprintf(stderr, \"log atfork handler failed\\n\");\n\t\treturn;\n\t}\n#endif\n}\n\n/**\n * @brief\n *\tAdd general debugging information in log\n *\n * @par Side Effects:\n * \tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic void\nlog_add_debug_info()\n{\n\tchar dest[LOG_BUF_SIZE] = {'\\0'};\n\tchar temp[PBS_MAXHOSTNAME + 1] = {'\\0'};\n\tchar host[PBS_MAXHOSTNAME + 1] = \"N/A\";\n\tms_time mst;\n\tget_timestamp(&mst);\n\n\t/* Set hostname */\n\tif (!gethostname(temp, (sizeof(temp) - 1))) {\n\t\tsnprintf(host, sizeof(host), \"%s\", temp);\n\t\tif (!get_fullhostname(temp, temp, (sizeof(temp) - 1)))\n\t\t\t/* Overwrite if full hostname is available */\n\t\t\tsnprintf(host, sizeof(host), \"%s\", temp);\n\t}\n\t/* Record to log */\n\tsnprintf(dest, sizeof(dest),\n\t\t \"hostname=%s;pbs_leaf_name=%s;pbs_mom_node_name=%s\",\n\t\t host, pbs_leaf_name, pbs_mom_node_name);\n\tlog_record_inner(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_INFO, 
msg_daemonname, dest, &mst);\n\treturn;\n}\n\n/**\n * @brief\n *\tAdd supported authentication method to log\n *\n * @param[in]\tsupported_auth_methods - An array of supported authentication method\n *\n * @return void\n *\n */\nvoid\nlog_supported_auth_methods(char **supported_auth_methods)\n{\n\tif (supported_auth_methods) {\n\t\tint i = 0;\n\t\twhile (supported_auth_methods[i]) {\n\t\t\tlog_eventf(PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER, LOG_INFO, msg_daemonname,\n\t\t\t\t   \"Supported authentication method: %s\", supported_auth_methods[i]);\n\t\t\ti++;\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\tAdd interface information to log\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\n\nstatic void\nlog_add_if_info()\n{\n\tchar tbuf[LOG_BUF_SIZE];\n\tchar msg[LOG_BUF_SIZE];\n\tchar temp[LOG_BUF_SIZE];\n\tint i;\n\tchar dest[LOG_BUF_SIZE * 2];\n\tstruct log_net_info *ni, *curr;\n\tms_time mst;\n\tget_timestamp(&mst);\n\n\tmemset(msg, '\\0', sizeof(msg));\n\tni = get_if_info(msg);\n\tif (msg[0] != '\\0') {\n\t\t/* Adding error message to log */\n\t\tsnprintf(tbuf, sizeof(tbuf), \"%s\", msg);\n\t\tlog_record_inner(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_INFO, msg_daemonname, tbuf, &mst);\n\t}\n\tif (!ni)\n\t\treturn;\n\n\t/* Add info to log */\n\tfor (curr = ni; curr; curr = curr->next) {\n\t\tsnprintf(tbuf, sizeof(tbuf), \"%s interface %s: \",\n\t\t\t (curr->iffamily[0]) ? curr->iffamily : \"NULL\",\n\t\t\t (curr->ifname[0]) ? 
curr->ifname : \"NULL\");\n\t\tfor (i = 0; curr->ifhostnames[i]; i++) {\n\t\t\tsnprintf(temp, sizeof(temp), \"%s \", curr->ifhostnames[i]);\n\t\t\tsnprintf(dest, sizeof(dest), \"%s%s\", tbuf, temp);\n\t\t}\n\t\tlog_record_inner(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_INFO, msg_daemonname, dest, &mst);\n\t}\n\n\tfree_if_info(ni);\n}\n\n/**\n *\n * @brief\n *\tCalls log_open_main() in non-silent mode.\n *\n * @param[in]\tfilename - the log filename passed to log_open_main().\n * @param[in]\tdirectory - the directory name passed to log_open_main().\n *\n * @return int\t- return value of log_open_main().\n *\n */\nint\nlog_open(char *filename, char *directory)\n{\n\treturn (log_open_main(filename, directory, 0));\n}\n\n/**\n *\n * @brief\n * \tOpen the log file for append.\n *\n * @par\n *\tOpens a (new) log file.\n *\tIf a log file is already open, and the new file is successfully opened,\n *\tthe old file is closed.  Otherwise the old file is left open.\n\n * @param[in]\tfilename - if a non-NULL, non-empty string, then this must be\n *\t\t\t   an absolute pathname, which is opened and made as\n *\t\t\t   the log file.\n *\t\t\t - if NULL or empty string, then calls mk_log_name()\n *\t\t\t   to create a log file named after the current date\n *\t\t\t   yyyymmdd, which is made into the log file.\n * @param[in]\tdirectory - the directory used by mk_log_name()\n *\t\t\t\t as the log directory for the generated\n *\t\t\t\t log filename.\n * @param[in]\tsilent - if set to 1, then extra messages such as\n *\t\t\t\"Log opened\", \"pbs_version=\", \"pbs_build=\"\n *\t\t\tare not printed out on the log file.\n *\n * @return int\n * @retval 0\tfor success\n * @retval != 0 for failure\n */\nint\nlog_open_main(char *filename, char *directory, int silent)\n{\n\tchar buf[_POSIX_PATH_MAX];\n\tint fds;\n\n\t/*providing temporary buffer, tbuf, for forming pbs_version\n\t *and pbs_build messages that get written on logfile open.\n\t *Using the usual buffer, log_buffer, that 
one sees in calls\n\t *to log_event() will result in clobbering the first message\n\t *after midnight:  log_event() calls log_record(), which calls\n\t *log_close() followed by log_open() - so a write into \"log_buffer\"\n\t *inside log_open() obliterates the message that would have been\n\t *placed in the newly opened, after-midnight, server logfile.\n\t */\n\tchar tbuf[LOG_BUF_SIZE];\n\n\tpthread_once(&log_once_ctl, log_init); /* initialize mutex once */\n\n\tif (log_opened > 0) /* Close existing log */\n\t\tlog_close(0);\n\n\tif (locallog != 0 || syslogfac == 0) {\n\n\t\t/* open PBS local logging */\n\n\t\tif (strcmp(log_directory, directory) != 0)\n\t\t\t(void) strncpy(log_directory, directory, _POSIX_PATH_MAX / 2 - 1);\n\n\t\tif ((filename == NULL) || (*filename == '\\0')) {\n\t\t\tfilename = mk_log_name(buf, _POSIX_PATH_MAX);\n\t\t\tlog_auto_switch = 1;\n\t\t}\n#ifdef WIN32\n\t\telse if (*filename != '\\\\' && (strlen(filename) > 1 &&\n\t\t\t\t\t       *(filename + 1) != ':')) {\n\t\t\treturn (-1); /* must be absolute path */\n\t\t}\n#else\n\t\telse if (*filename != '/') {\n\t\t\treturn (-1); /* must be absolute path */\n\t\t}\n#endif\n\n#ifdef WIN32\n\t\tif ((fds = open(filename, O_CREAT | O_WRONLY | O_APPEND, S_IREAD | S_IWRITE)) < 0)\n#elif defined(O_LARGEFILE)\n\t\tif ((fds = open(filename, O_CREAT | O_WRONLY | O_APPEND | O_LARGEFILE, 0644)) < 0)\n#else\n\t\tif ((fds = open(filename, O_CREAT | O_WRONLY | O_APPEND, 0644)) < 0)\n#endif\n\t\t{\n\t\t\tlog_opened = -1; /* note that open failed */\n\t\t\treturn (-1);\n\t\t}\n\n#ifdef WIN32\n\t\tsecure_file2(filename, \"Administrators\",\n\t\t\t     READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED,\n\t\t\t     \"Everyone\", READS_MASK | READ_CONTROL);\n#endif\n\t\tDBPRT((\"Opened log file %s\\n\", filename))\n\t\tif (fds < 3) {\n\n\t\t\tlog_opened = fcntl(fds, F_DUPFD, 3); /* overload variable */\n\t\t\tif (log_opened < 0)\n\t\t\t\treturn (-1);\n\t\t\t(void) close(fds);\n\t\t\tfds = log_opened;\n\t\t}\n\t\tlogfile 
= fdopen(fds, \"a\");\n\n#ifdef WIN32\n\t\t(void) setvbuf(logfile, NULL, _IONBF, 0); /* no buffering to get instant log */\n#else\n\t\t(void) setvbuf(logfile, NULL, _IOLBF, 0); /* set line buffering */\n#endif\n\t\tlog_opened = 1; /* note that file is open */\n\n\t\tif (!silent) {\n\t\t\tms_time mst;\n\t\t\tget_timestamp(&mst);\n\t\t\tlog_record_inner(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_INFO, \"Log\", \"Log opened\", &mst);\n\t\t\tsnprintf(tbuf, LOG_BUF_SIZE, \"pbs_version=%s\", PBS_VERSION);\n\t\t\tlog_record_inner(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_INFO, msg_daemonname, tbuf, &mst);\n\t\t\tsnprintf(tbuf, LOG_BUF_SIZE, \"pbs_build=%s\", PBS_BUILD);\n\t\t\tlog_record_inner(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_INFO, msg_daemonname, tbuf, &mst);\n\n\t\t\tlog_add_debug_info();\n\t\t\tlog_add_if_info();\n\t\t}\n\t}\n#if SYSLOG\n\tif (syslogopen == 0 && syslogfac > 0 && syslogfac < 10) {\n\t\t/*\n\t\t * We do not assume that the log facilities are defined sequentially.\n\t\t * That is why we reference them each by name.\n\t\t */\n\t\tswitch (syslogfac) {\n\t\t\tcase 2:\n\t\t\t\tsyslogopen = LOG_LOCAL0;\n\t\t\t\tbreak;\n\t\t\tcase 3:\n\t\t\t\tsyslogopen = LOG_LOCAL1;\n\t\t\t\tbreak;\n\t\t\tcase 4:\n\t\t\t\tsyslogopen = LOG_LOCAL2;\n\t\t\t\tbreak;\n\t\t\tcase 5:\n\t\t\t\tsyslogopen = LOG_LOCAL3;\n\t\t\t\tbreak;\n\t\t\tcase 6:\n\t\t\t\tsyslogopen = LOG_LOCAL4;\n\t\t\t\tbreak;\n\t\t\tcase 7:\n\t\t\t\tsyslogopen = LOG_LOCAL5;\n\t\t\t\tbreak;\n\t\t\tcase 8:\n\t\t\t\tsyslogopen = LOG_LOCAL6;\n\t\t\t\tbreak;\n\t\t\tcase 9:\n\t\t\t\tsyslogopen = LOG_LOCAL7;\n\t\t\t\tbreak;\n\t\t\tcase 1:\n\t\t\tdefault:\n\t\t\t\tsyslogopen = LOG_DAEMON;\n\t\t\t\tbreak;\n\t\t}\n\t\topenlog(msg_daemonname, LOG_NOWAIT, syslogopen);\n\t\tDBPRT((\"Syslog enabled, facility = %d\\n\", syslogopen))\n\t\tif (syslogsvr != 0) {\n\t\t\t/* set min priority of what gets logged via syslog */\n\t\t\tsetlogmask(LOG_UPTO(syslogsvr));\n\t\t\tDBPRT((\"Syslog mask set to 0x%x\\n\", 
syslogsvr))\n\t\t}\n\t}\n#endif\n\n\treturn (0);\n}\n\n/**\n * @brief\n * \tlog_err - log an internal error\n *\tThe error is recorded to the pbs log file and to syslogd if it is\n *\tavailable.  If the error file has not been opened and if syslog is\n *\tnot defined, then the console is opened.\n *\n * @param[in] errnum - error number\n * @param[in] routine - error in which routine\n * @param[in] text - text to be logged\n *\n */\n\nvoid\nlog_err(int errnum, const char *routine, const char *text)\n{\n\tchar buf[LOG_BUF_SIZE], *errmsg;\n\tint i;\n\n\tif (errnum == -1) {\n\n#ifdef WIN32\n\t\tLPVOID lpMsgBuf;\n\t\tDWORD err = GetLastError();\n\t\tint len;\n\n\t\tsnprintf(buf, LOG_BUF_SIZE, \"Err(%lu): \", err);\n\t\tFormatMessage(\n\t\t\tFORMAT_MESSAGE_ALLOCATE_BUFFER |\n\t\t\t\tFORMAT_MESSAGE_FROM_SYSTEM |\n\t\t\t\tFORMAT_MESSAGE_IGNORE_INSERTS,\n\t\t\tNULL, err,\n\t\t\tMAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT),\n\t\t\t(LPTSTR) &lpMsgBuf, 0, NULL);\n\t\tstrncat(buf, lpMsgBuf, LOG_BUF_SIZE - (int) strlen(buf) - 1);\n\t\tLocalFree(lpMsgBuf);\n\t\tbuf[sizeof(buf) - 1] = '\\0';\n\t\tlen = strlen(buf);\n\t\tif (buf[len - 1] == '\\n')\n\t\t\tlen--;\n\t\tif (buf[len - 1] == '.')\n\t\t\tlen--;\n\t\tbuf[len - 1] = '\\0';\n#else\n\t\tbuf[0] = '\\0';\n#endif\n\t} else {\n\t\tif (((errmsg = pbse_to_txt(errnum)) == NULL) &&\n\t\t    ((errmsg = strerror(errnum)) == NULL))\n\t\t\terrmsg = \"\";\n\t\t(void) snprintf(buf, LOG_BUF_SIZE, \"%s (%d) in \", errmsg, errnum);\n\t}\n\t(void) strcat(buf, routine);\n\t(void) strcat(buf, \", \");\n\ti = LOG_BUF_SIZE - (int) strlen(buf) - 2;\n\t(void) strncat(buf, text, i);\n\tbuf[LOG_BUF_SIZE - 1] = '\\0';\n\n\tif (log_opened == 0)\n\t\t(void) log_open(\"/dev/console\", log_directory);\n\n\tif (isatty(2)) {\n\t\tif (msg_daemonname == NULL) {\n\t\t\t(void) fprintf(stderr, \"%s\\n\", buf);\n\t\t} else {\n\t\t\t(void) fprintf(stderr, \"%s: %s\\n\", msg_daemonname, buf);\n\t\t}\n\t}\n\n\t(void) log_record(PBSEVENT_ERROR | PBSEVENT_FORCE, 
PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_ERR, msg_daemonname, buf);\n}\n\n/**\n * @brief\n * \tlog_errf - a combination of log_err() and printf()\n *\tThe error is recorded to the pbs log file and to syslogd if it is\n *\tavailable.  If the error file has not been opened and if syslog is\n *\tnot defined, then the console is opened.\n *\n * @param[in] errnum - error number\n * @param[in] routine - error in which routine\n * @param[in] fmt - format string\n * @param[in] ... - arguments to format string\n *\n */\n\nvoid\nlog_errf(int errnum, const char *routine, const char *fmt, ...)\n{\n\tva_list args;\n\tva_list args_copy;\n\tint len;\n\tchar logbuf[LOG_BUF_SIZE];\n\tchar *buf;\n\n\tva_start(args, fmt);\n\tva_copy(args_copy, args);\n\n\tlen = vsnprintf(logbuf, sizeof(logbuf), fmt, args_copy);\n\tva_end(args_copy);\n\n\tif (len >= sizeof(logbuf)) {\n\t\tbuf = pbs_asprintf_format(len, fmt, args);\n\t\tif (buf == NULL) {\n\t\t\tva_end(args);\n\t\t\treturn;\n\t\t}\n\t} else\n\t\tbuf = logbuf;\n\n\tlog_err(errnum, routine, buf);\n\n\tif (len >= sizeof(logbuf))\n\t\tfree(buf);\n\tva_end(args);\n}\n\n/**\n * @brief\n * \tlog_joberr - log an internal, job-related error\n *\tThe error is recorded to the pbs log file and to syslogd if it is\n *\tavailable.  If the error file has not been opened and if syslog is\n *\tnot defined, then the console is opened.  
The record written into\n *\tthe log will be of type PBS_EVENTCLASS_JOB\n *\n * @param[in] errnum - error number\n * @param[in] routine - error in which routine\n * @param[in] text - text to be logged\n * @param[in] pjid - job id which logged error\n *\n * @return\tvoid\n *\n */\n\nvoid\nlog_joberr(int errnum, const char *routine, const char *text, const char *pjid)\n{\n\tchar buf[LOG_BUF_SIZE], *errmsg;\n\tint i;\n\n\tif (errnum == -1) {\n\n#ifdef WIN32\n\t\tLPVOID lpMsgBuf;\n\t\tDWORD err = GetLastError();\n\t\tint len;\n\t\tsnprintf(buf, LOG_BUF_SIZE, \"Err(%lu): \", err);\n\t\tFormatMessage(\n\t\t\tFORMAT_MESSAGE_ALLOCATE_BUFFER |\n\t\t\t\tFORMAT_MESSAGE_FROM_SYSTEM |\n\t\t\t\tFORMAT_MESSAGE_IGNORE_INSERTS,\n\t\t\tNULL, err,\n\t\t\tMAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT),\n\t\t\t(LPTSTR) &lpMsgBuf, 0, NULL);\n\t\tstrncat(buf, lpMsgBuf, LOG_BUF_SIZE - (int) strlen(buf) - 1);\n\t\tLocalFree(lpMsgBuf);\n\t\tbuf[sizeof(buf) - 1] = '\\0';\n\t\tlen = strlen(buf);\n\t\tif (buf[len - 1] == '\\n')\n\t\t\tlen--;\n\t\tif (buf[len - 1] == '.')\n\t\t\tlen--;\n\t\tbuf[len - 1] = '\\0';\n#else\n\t\tbuf[0] = '\\0';\n#endif\n\t} else {\n\n\t\tif (((errmsg = pbse_to_txt(errnum)) == NULL) &&\n\t\t    ((errmsg = strerror(errnum)) == NULL))\n\t\t\terrmsg = \"\";\n\t\t(void) snprintf(buf, LOG_BUF_SIZE, \"%s (%d) in \", errmsg, errnum);\n\t}\n\t(void) strcat(buf, routine);\n\t(void) strcat(buf, \", \");\n\ti = LOG_BUF_SIZE - (int) strlen(buf) - 2;\n\t(void) strncat(buf, text, i);\n\tbuf[LOG_BUF_SIZE - 1] = '\\0';\n\n\tif (log_opened == 0)\n\t\t(void) log_open(\"/dev/console\", log_directory);\n\n\tif (isatty(2))\n\t\t(void) fprintf(stderr, \"%s: %s\\n\", msg_daemonname, buf);\n\n\t(void) log_record(PBSEVENT_ERROR | PBSEVENT_FORCE, PBS_EVENTCLASS_JOB,\n\t\t\t  LOG_ERR, pjid, buf);\n}\n\n/**\n * @brief\n * \tlog_suspect_file - log security information about a file/directory\n *\n * @param[in] func - function id\n * @param[in] text - text to be logged\n * @param[in] file - file path\n 
* @param[in] sb - status of file\n *\n * @return\tVoid\n *\n */\n\nvoid\nlog_suspect_file(const char *func, const char *text, const char *file, struct stat *sb)\n{\n\tchar buf[LOG_BUF_SIZE];\n\n\tsnprintf(buf, LOG_BUF_SIZE, \"Security issue from %s: %s, inode %lu, mode %#lx, uid %ld, gid %ld, ctime %#lx\",\n\t\t func,\n\t\t text,\n\t\t (unsigned long) sb->st_ino,\n\t\t (unsigned long) sb->st_mode,\n\t\t (long) sb->st_uid,\n\t\t (long) sb->st_gid,\n\t\t (unsigned long) sb->st_ctime);\n\t/*\n\t * Log the data.  Note that we swap the text and file name order\n\t * because the text is more important in case msg is truncated.\n\t */\n\tlog_record(PBSEVENT_SECURITY, PBS_EVENTCLASS_FILE, LOG_CRIT, buf, file);\n}\n\n/**\n * @brief\n * \tget the timestamp, include microseconds if configured\n *\n * @param[out] mst - the ms_time structure is populated and returned\n *\n */\nvoid\nget_timestamp(ms_time *mst)\n{\n\ttime_t now = 0;\n\tstruct tm *ptm;\n\tstruct timeval tp;\n#ifndef WIN32\n\tstruct tm ltm;\n#endif\n\t/* if gettimeofday() fails, log messages will be printed at the epoch */\n\tif (gettimeofday(&tp, NULL) != -1) {\n\t\tnow = tp.tv_sec;\n\t\tif (pbs_log_highres_timestamp)\n\t\t\tsnprintf(mst->microsec_buf, sizeof(mst->microsec_buf), \".%06ld\", (long) tp.tv_usec);\n\t\telse\n\t\t\tmst->microsec_buf[0] = '\\0';\n\t}\n\n#ifdef WIN32\n\tptm = localtime(&now);\n#else\n\tptm = localtime_r(&now, &ltm);\n#endif\n\n\tmst->ptm = *ptm;\n}\n\n/**\n * @brief\n * \tlog_console_error - log a critical error to the console\n *\n * @param[in] msg - log msg to be logged\n *\n */\nvoid\nlog_console_error(char *msg)\n{\n\tFILE *errf;\n\tint rc = errno;\n\terrf = fopen(\"/dev/console\", \"w\");\n\tif (errf != NULL) {\n\t\tfprintf(errf, \"%s - errno = %d\\n\", msg, rc);\n\t\tfclose(errf);\n\t}\n}\n\n/**\n * @brief\n * \tlog a message to the log file - this function acquires a lock\n *\tThe log file must have been opened by log_open().\n *\n *\tThe caller should ensure proper formatting of the 
message if \"text\"\n *\tis to contain \"continuation lines\".\n *\n * @param[in] eventtype - event type\n * @param[in] objclass - event object class\n * @param[in] sev - severity level; used only if syslogging is enabled\n * @param[in] objname - object name stating log msg related to which object\n * @param[in] text - log msg to be logged.\n *\n * Note, \"sev\" (for severity) is used only if syslogging is enabled.\n * See syslog(3) for details.\n *\n * Note: Do NOT call this function from log_open() or log_close() since log_record\n * acquires a mutex and calls log_open(), log_close(), making it a recursive call\n * and confusing the mutex. Instead, call log_record_inner() from log_open(),\n * log_close(), etc.\n */\nvoid\nlog_record(int eventtype, int objclass, int sev, const char *objname, const char *text)\n{\n\tms_time mst;\n#ifndef WIN32\n\tchar slogbuf[LOG_BUF_SIZE];\n\tsigset_t block_mask;\n\tsigset_t old_mask;\n\n\t/* Block all signals to the process to make the function async-safe */\n\tsigfillset(&block_mask);\n\tsigprocmask(SIG_BLOCK, &block_mask, &old_mask);\n#endif\n\n#if SYSLOG\n\tif (syslogopen != 0) {\n\t\tsnprintf(slogbuf, LOG_BUF_SIZE, \"%s;%s;%s\\n\", class_names[objclass], objname, text);\n\t\tsyslog(sev, \"%s\", slogbuf);\n\t}\n#endif /* SYSLOG */\n\n\tif (log_opened <= 0)\n\t\tgoto sigunblock;\n\n\tif ((text == NULL) || (objname == NULL))\n\t\tgoto sigunblock;\n\n\t/* lock the file mutex */\n\tif (log_mutex_lock() == 0) {\n\t\tget_timestamp(&mst);\n\n\t\t/* Do we need to switch the log? 
*/\n\t\tif (log_auto_switch && (mst.ptm.tm_yday != log_open_day)) {\n\t\t\tlog_close(1);\n\t\t\tlog_open(NULL, log_directory);\n\t\t\tif (log_opened < 1) {\n\t\t\t\tlog_mutex_unlock();\n\t\t\t\tlog_console_error(\"PBS cannot open its log\");\n\t\t\t\tgoto sigunblock;\n\t\t\t}\n\t\t}\n\n\t\t/* call the inner routine which does not lock */\n\t\tlog_record_inner(eventtype, objclass, sev, objname, text, &mst);\n\t\tlog_mutex_unlock();\n\t}\n\nsigunblock:\n#ifndef WIN32\n\tsigprocmask(SIG_SETMASK, &old_mask, NULL);\n#else\n\treturn;\n#endif\n}\n\n/**\n * @brief\n * \tInner level function to log WITHOUT acquiring locks\n *\n *\tThe caller should ensure proper formatting of the message if \"text\"\n *\tis to contain \"continuation lines\".\n *\n * @param[in] eventtype - event type\n * @param[in] objclass - event object class\n * @param[in] sev - severity level; used only if syslogging is enabled\n * @param[in] objname - object name stating log msg related to which object\n * @param[in] text - log msg to be logged\n * @param[in] mst - the ms_time format timestamp (with microseconds, if configured)\n *\n *\tNote, \"sev\" (for severity) is used only if syslogging is enabled.\n *\tSee syslog(3) for details.\n */\nstatic void\nlog_record_inner(int eventtype, int objclass, int sev, const char *objname, const char *text, ms_time *mst)\n{\n\tint rc = 0;\n\tif (locallog != 0 || syslogfac == 0) {\n\t\trc = fprintf(logfile,\n\t\t\t     \"%02d/%02d/%04d %02d:%02d:%02d%s;%04x;%s;%s;%s;%s\\n\",\n\t\t\t     mst->ptm.tm_mon + 1, mst->ptm.tm_mday, mst->ptm.tm_year + 1900,\n\t\t\t     mst->ptm.tm_hour, mst->ptm.tm_min, mst->ptm.tm_sec, mst->microsec_buf,\n\t\t\t     eventtype & ~PBSEVENT_FORCE, msg_daemonname,\n\t\t\t     class_names[objclass], objname, text);\n\n\t\t(void) fflush(logfile);\n\t\tif (rc < 0)\n\t\t\tlog_console_error(\"PBS cannot write to its log\");\n\t}\n}\n\n/**\n * @brief\n * \tlog_close - close the current open log file\n *\n * @param[in] msg - indicating whether to 
log a \"Log closed\" message before the file is closed\n *\n * @return\tVoid\n *\n */\nvoid\nlog_close(int msg)\n{\n\tif (log_opened == 1) {\n\t\tlog_auto_switch = 0;\n\t\tif (msg) {\n\t\t\tms_time mst;\n\t\t\tget_timestamp(&mst);\n\t\t\tlog_record_inner(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_INFO, \"Log\", \"Log closed\", &mst);\n\t\t}\n\t\t(void) fclose(logfile);\n\t\tlog_opened = 0;\n\t}\n#if SYSLOG\n\tif (syslogopen) {\n\t\tcloselog();\n\t\tsyslogopen = 0;\n\t}\n#endif /* SYSLOG */\n}\n\n/**\n * @brief\n *\tMap a comm/syslog error level to the event types\n *\ton which the pbs log mask operates.\n *\n * @param[in]\tlevel - The error level as per syslog\n *\n * @return - event type\n *\n */\nint\nlog_level_2_etype(int level)\n{\n\tint etype = PBSEVENT_DEBUG3 | PBSEVENT_DEBUG4;\n\n\tif (level == LOG_ERR)\n\t\tetype |= PBSEVENT_ERROR;\n\telse if (level == LOG_CRIT)\n\t\tetype |= PBSEVENT_SYSTEM | PBSEVENT_ADMIN | PBSEVENT_FORCE;\n\telse if (level == LOG_WARNING)\n\t\tetype |= PBSEVENT_SYSTEM | PBSEVENT_ADMIN;\n\telse if (level == LOG_NOTICE || level == LOG_INFO)\n\t\tetype |= PBSEVENT_DEBUG | PBSEVENT_DEBUG2;\n\n\treturn etype;\n}\n"
  },
  {
    "path": "src/lib/Liblog/pbs_messages.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"portability.h\"\n#include \"pbs_error.h\"\n\n#include <stdlib.h>\n\n/**\n * @file\tpbs_messages.c\n * @brief\n * Messages issued by the server.  
They are kept here in one place\n * to make translation a bit easier.\n * @warning\n *  there are places where a message and other info is stuffed\n *\tinto a buffer, keep the messages short!\n *\n * This first set of messages are recorded by the server or mom to the log.\n */\n\nchar *msg_abt_err = \"Unable to abort Job %s which was in substate %d\";\nchar *msg_badexit = \"Abnormal exit status 0x%x: \";\nchar *msg_badwait = \"Invalid time in work task for waiting, job = %s\";\nchar *msg_deletejob = \"Job to be deleted\";\nchar *msg_delrunjobsig = \"Job sent signal %s on delete\";\nchar *msg_err_malloc = \"malloc failed\";\nchar *msg_err_noqueue = \"Unable to requeue job,\";\nchar *msg_err_noqueue1 = \"queue is not defined\";\nchar *msg_err_purgejob = \"Unlink of job file failed\";\nchar *msg_err_purgejob_db = \"Removal of job from datastore failed\";\nchar *msg_err_unlink = \"Unlink of %s file %s failed\";\nchar *msg_illregister = \"Illegal op in register request received for job %s\";\nchar *msg_init_abt = \"Job aborted on PBS Server initialization\";\nchar *msg_init_baddb = \"Unable to read server database\";\nchar *msg_init_badjob = \"Recover of job %s failed\";\nchar *msg_init_chdir = \"unable to change to directory %s\";\nchar *msg_init_expctq = \"Expected %d, recovered %d queues\";\nchar *msg_init_exptjobs = \"Recovered %d jobs\";\nchar *msg_init_nojobs = \"No jobs to open\";\nchar *msg_init_noqueues = \"No queues to open\";\nchar *msg_init_noresvs = \"No resvs to open\";\nchar *msg_init_norerun = \"Unable to rerun job at Server initialization\";\nchar *msg_init_queued = \"Requeued in queue: \";\nchar *msg_init_recovque = \"Recovered queue %s\";\nchar *msg_init_recovresv = \"Recovered reservation %s\";\nchar *msg_init_resvNOq = \"Queue %s for reservation %s missing\";\nchar *msg_init_substate = \"Requeueing job, substate: %d \";\nchar *msg_init_unkstate = \"Unable to recover job in strange substate: %d\";\nchar *msg_isodedecode = \"Decode of request 
failed\";\nchar *msg_issuebad = \"attempt to issue invalid request of type %d\";\nchar *msg_job_abort = \"Aborted by PBS Server \";\nchar *msg_jobholdset = \"Holds %s set at request of %s@%s\";\nchar *msg_jobholdrel = \"Holds %s released at request of %s@%s\";\nchar *msg_job_end = \"Execution terminated\";\nchar *msg_job_end_stat = \"Exit_status=%d\";\nchar *msg_job_end_sig = \"Terminated on signal %d\";\nchar *msg_jobmod = \"Job Modified\";\nchar *msg_jobnew = \"Job Queued at request of %s@%s, owner = %s, job name = %s, queue = %s\";\nchar *msg_jobrerun = \"Job Rerun\";\nchar *msg_jobrun = \"Job Run\";\nchar *msg_job_start = \"Begun execution\";\nchar *msg_job_stageinfail = \"File stage in failed, see below.\\nJob will be retried later, please investigate and correct problem.\";\nchar *msg_leftrunning = \"job running on at Server shutdown\";\nchar *msg_manager = \"%s at request of %s@%s\";\nchar *msg_man_cre = \"created\";\nchar *msg_man_del = \"deleted\";\nchar *msg_man_set = \"attributes set: \";\nchar *msg_man_uns = \"attributes unset: \";\nchar *msg_messagejob = \"Message request to job status %d\";\nchar *msg_mombadhold = \"MOM rejected hold request: %d\";\nchar *msg_mombadmodify = \"MOM rejected modify request, error: %d\";\nchar *msg_momsetlim = \"Job start failed.  
Can't set \\\"%s\\\" limit: %s.\\n\";\nchar *msg_momnoexec1 = \"Job cannot be executed\\nSee Administrator for help\";\nchar *msg_momnoexec2 = \"Job cannot be executed\\nSee job standard error file\";\nchar *msg_movejob = \"Job moved to \";\nchar *msg_norelytomom = \"Server could not connect to MOM\";\nchar *msg_obitnojob = \"Job Obit notice received has error %d\";\nchar *msg_obitnocpy = \"Post job file processing error; job %s on host %s\";\nchar *msg_obitnodel = \"Unable to delete files for job %s, on host %s\";\nchar *msg_on_shutdown = \" on Server shutdown\";\nchar *msg_orighost = \"Job missing PBS_O_HOST value\";\nchar *msg_permlog = \"Unauthorized Request, request type: %d, Object: %s, Name: %s, request from: %s@%s\";\nchar *msg_postmomnojob = \"Job not found after hold reply from MOM\";\nchar *msg_request = \"Type %d request received from %s@%s, sock=%d\";\nchar *msg_regrej = \"Dependency request for job rejected by \";\nchar *msg_registerdel = \"Job deleted as result of dependency on job %s\";\nchar *msg_registerrel = \"Dependency on job %s released.\";\nchar *msg_routexceed = \"Route queue lifetime exceeded\";\nchar *msg_script_open = \"Unable to open script file\";\nchar *msg_script_write = \"Unable to write script file\";\nchar *msg_hookfile_open = \"Unable to open hook-related file\";\nchar *msg_hookfile_write = \"Unable to write hook-related file\";\nchar *msg_shutdown_op = \"Shutdown request from %s@%s \";\nchar *msg_shutdown_start = \"Starting to shutdown the server, type is \";\nchar *msg_startup1 = \"Version %s, started, initialization type = %d\";\nchar *msg_startup2 = \"Server pid = %d ready;  using ports Server:%d MOM:%d RM:%d\";\nchar *msg_startup3 = \"%s %s: %s mode and %s, \\ndo you wish to continue y/(n)?\";\nchar *msg_svdbopen = \"Unable to open server data base\";\nchar *msg_svdbnosv = \"Unable to save server data base \";\nchar *msg_svrdown = \"Server shutdown completed\";\nchar *msg_resvQcreateFail = \"failed to create queue for 
reservation\";\nchar *msg_genBatchReq = \"batch request generation failed\";\nchar *msg_mgrBatchReq = \"mgr batch request failed\";\nchar *msg_deleteresv = \"delete reservation\";\nchar *msg_deleteresvJ = \"delete reservation-job\";\nchar *msg_noDeljobfromResv = \"failed to delete job %s from reservation %s\";\nchar *msg_purgeResvlink = \"Unlink of reservation file failed\";\nchar *msg_purgeResvDb = \"Removal of reservation failed\";\nchar *msg_purgeResvFail = \"Failed to purge reservation\";\nchar *msg_internalReqFail = \"An internally generated request failed\";\nchar *msg_qEnabStartFail = \"Failed to start/enable reservation queue\";\nchar *msg_NotResv = \"not a reservation\";\nchar *msg_resv_abort = \"Reservation removed\";\nchar *msg_resv_start = \"Reservation period starting\";\nchar *msg_resv_end = \"Reservation period ended\";\nchar *msg_resv_confirm = \"Reservation transitioned from state UNCONFIRMED to CONFIRMED\";\nchar *msg_signal_job = \"job signaled with %s by %s@%s\";\nchar *msg_license_min_badval = \"pbs_license_min is < 0, or > pbs_license_max\";\nchar *msg_license_max_badval = \"pbs_license_max is < 0, or < pbs_license_min\";\nchar *msg_license_linger_badval = \"pbs_license_linger_time is <= 0\";\nchar *msg_license_server_down = \"Not running any new jobs. PBS license server is down.\";\nchar *msg_license_bad_action = \"Action not allowed with license server scheme.\";\nchar *msg_prov_script_notfound = \"Provision hook script not found\";\nchar *msg_jobscript_max_size = \"jobscript size exceeded the jobscript_max_size\";\nchar *msg_badjobscript_max_size = \"jobscript max size exceeds 2GB\";\nchar *msg_new_inventory_mom = \"Setting inventory_mom for vnode_pool %d to %s\";\nchar *msg_auth_request = \"Type %d request is authenticated. 
The credential id is %s@%s, host %s, sock=%d\";\n\n/*\n * This set of messages is used in e-mail to inform the user that the job exceeded resources.\n */\nchar *msg_momkillncpusburst = \"Job exceeded resource ncpus (burst)\\nSee job standard error file\";\nchar *msg_momkillncpussum = \"Job exceeded resource ncpus (sum)\\nSee job standard error file\";\nchar *msg_momkillvmem = \"Job exceeded resource vmem\\nSee job standard error file\";\nchar *msg_momkillmem = \"Job exceeded resource mem\\nSee job standard error file\";\nchar *msg_momkillcput = \"Job exceeded resource cput\\nSee job standard error file\";\nchar *msg_momkillwalltime = \"Job exceeded resource walltime\\nSee job standard error file\";\n\n/*\n * This next set of messages are returned to the client on an error.\n * They may also be logged.\n */\n\nchar *msg_unkjobid = \"Unknown Job Id\";\nchar *msg_noattr = \"Undefined attribute \";\nchar *msg_attrro = \"Cannot set attribute, read only or insufficient permission \";\nchar *msg_ivalreq = \"Invalid request\";\nchar *msg_unkreq = \"Unknown request\";\nchar *msg_perm = \"Unauthorized Request \";\nchar *msg_reqbadhost = \"Access from host not allowed, or unknown host\";\nchar *msg_jobexist = \"Job with requested ID already exists\";\nchar *msg_system = \"System error: \";\nchar *msg_internal = \"PBS server internal error\";\nchar *msg_regroute = \"Dependent parent job currently in routing queue\";\nchar *msg_unksig = \"Unknown/illegal signal name\";\nchar *msg_badatval = \"Illegal attribute or resource value\";\nchar *msg_badnodeatval = \"Illegal value for node\";\nchar *msg_nodenamebig = \"Node name is too big\";\nchar *msg_jobnamebig = \"name is too long\";\nchar *msg_mutualex = \"Mutually exclusive values for \";\nchar *msg_modatrrun = \"Cannot modify attribute while job running \";\nchar *msg_badstate = \"Request invalid for state of job\";\nchar *msg_unkque = \"Unknown queue\";\nchar *msg_unknode = \"Unknown node \";\nchar *msg_unknodeatr = \"Unknown 
node-attribute \";\nchar *msg_nonodes = \"Server has no node list\";\nchar *msg_badcred = \"Invalid credential\";\nchar *msg_expired = \"Expired credential\";\nchar *msg_qunoenb = \"Queue is not enabled\";\nchar *msg_qacess = \"Access to queue is denied\";\nchar *msg_nodestale = \"Cannot change state of stale node\";\nchar *msg_nodeexist = \"Node name already exists\";\n\n#ifdef WIN32\nchar *msg_baduser = \"Bad UID for job execution - could be an administrator-type account currently not allowed to run jobs (can be configured)\";\n#else\nchar *msg_baduser = \"Bad UID for job execution\";\n#endif\n\nchar *msg_bad_password = \"job has bad password\";\nchar *msg_badgrp = \"Bad GID for job execution\";\nchar *msg_badRuser = \"Bad effective UID for reservation\";\nchar *msg_badRgrp = \"Bad effective GID for reservation\";\nchar *msg_hopcount = \"Job routing over too many hops\";\nchar *msg_queexist = \"Queue already exists\";\nchar *msg_attrtype = \"Warning: type of queue %s incompatible with attribute %s\";\nchar *msg_attrtype2 = \"Incompatible type\";\nchar *msg_objbusy = \"Cannot delete busy object\";\nchar *msg_quenbig = \"Queue name too long\";\nchar *msg_nosupport = \"No support for requested service\";\nchar *msg_quenoen = \"Cannot enable queue, incomplete definition\";\nchar *msg_needquetype = \"Queue type must be set\";\nchar *msg_protocol = \"Batch protocol error\";\nchar *msg_noconnects = \"No free connections\";\nchar *msg_noserver = \"No server specified\";\nchar *msg_unkresc = \"Unknown resource\";\nchar *msg_excqresc = \"Job violates queue and/or server resource limits\";\nchar *msg_quenodflt = \"No default queue specified\";\nchar *msg_jobnorerun = \"job is not rerunnable\";\nchar *msg_routebad = \"Job rejected by all possible destinations\";\nchar *msg_momreject = \"Execution server rejected request\";\nchar *msg_nosyncmstr = \"No master found for sync job set\";\nchar *msg_sched_called = \"Scheduler sent command %d\";\nchar *msg_sched_nocall = \"Could 
not contact Scheduler\";\nchar *msg_stageinfail = \"Stage In of files failed\";\nchar *msg_rescunav = \"Resource temporarily unavailable\";\nchar *msg_maxqueued = \"Maximum number of jobs already in queue\";\nchar *msg_chkpointbusy = \"Checkpoint busy, may retry\";\nchar *msg_exceedlmt = \"Resource limit exceeds allowable\";\nchar *msg_badacct = \"Invalid Account\";\nchar *msg_baddepend = \"Invalid Job Dependency\";\nchar *msg_duplist = \"Duplicate entry in list \";\nchar *msg_svrshut = \"Request not allowed: Server shutting down\";\nchar *msg_execthere = \"Cannot execute at specified host because of checkpoint or stagein files\";\nchar *msg_gmoderr = \"Modification failed for \";\nchar *msg_notsnode = \"No time-share node available\";\nchar *msg_resvNowall = \"Reservation needs walltime\";\nchar *msg_jobNotresv = \"not a reservation job\";\nchar *msg_resvToolate = \"too late for reservation\";\nchar *msg_resvsyserr = \"internal reservation-system error\";\n\nchar *msg_Resv_Cancel = \"Attempting to cancel reservation\";\nchar *msg_unkresvID = \"Unknown Reservation Id\";\nchar *msg_resvExist = \"Reservation with requested ID already exists\";\nchar *msg_resvfromresvjob = \"Reservation may not be created from a job already within a reservation\";\nchar *msg_resvfromarrjob = \"Reservation may not be created from an array job\";\nchar *msg_selnotsubset = \"New select must be made up of a subset of the original chunks\";\nchar *msg_resvFail = \"reservation failure\";\nchar *msg_delProgress = \"Delete already in progress\";\nchar *msg_BadTspec = \"Bad time specification(s)\";\nchar *msg_BadNodespec = \"node(s) specification error\";\nchar *msg_licensecpu = \"Exceeded number of licensed cpus\";\nchar *msg_licenseinv = \"PBS license is invalid\";\nchar *msg_resvauth_H = \"Host machine not authorized to submit reservations\";\nchar *msg_resvauth_G = \"Requestor's group not authorized to submit reservations\";\nchar *msg_resvauth_U = \"Requestor not authorized to make 
reservations\";\nchar *msg_licenseunav = \"Floating License unavailable\";\nchar *msg_rescnotstr = \"Resource is not of type string or array_of_strings\";\nchar *msg_maxarraysize = \"Array job exceeds server or queue size limit\";\nchar *msg_invalselectresc = \"Resource invalid in \\\"select\\\" specification\";\nchar *msg_invaljobresc = \"\\\"-lresource=\\\" cannot be used with \\\"select\\\" or \\\"place\\\", resource is\";\nchar *msg_invalnodeplace = \"Cannot be used with select or place\";\nchar *msg_placenoselect = \"Cannot have \\\"place\\\" without \\\"select\\\"\";\nchar *msg_indirecthop = \"invalid multi-level indirect reference for resource\";\nchar *msg_indirectbadtgt = \"indirect target undefined on node for resource\";\nchar *msg_dupresc = \"duplicated resource within a section of a select specification, resource is\";\nchar *msg_connfull = \"Server connection table full\";\nchar *msg_bad_formula = \"Invalid Formula Format\";\nchar *msg_bad_formula_kw = \"Formula contains invalid keyword\";\nchar *msg_bad_formula_type = \"Formula contains a resource of an invalid type\";\nchar *msg_hook_error = \"hook error\";\nchar *msg_eligibletimeset_error = \"Cannot set attribute when eligible_time_enable is OFF\";\nchar *msg_historyjobid = \"Job has finished\";\nchar *msg_job_history_notset = \"PBS is not configured to maintain job history\";\nchar *msg_job_history_delete = \"Deleting job history upon request from %s@%s\";\nchar *msg_also_deleted_job_history = \"Also deleted job history\";\nchar *msg_nohistarrayjob = \"Request invalid for finished array subjob\";\nchar *msg_valueoutofrange = \"attribute value is out of range\";\nchar *msg_jobinresv_conflict = \"job and reservation have conflicting specification\";\nchar *msg_max_no_minwt = \"Cannot have \\\"max_walltime\\\" without \\\"min_walltime\\\"\";\nchar *msg_min_gt_maxwt = \"\\\"min_walltime\\\" can not be greater than \\\"max_walltime\\\"\";\nchar *msg_nostf_resv = \"\\\"min_walltime\\\" and 
\\\"max_walltime\\\" are not valid resources for a reservation\";\nchar *msg_nostf_jobarray = \"\\\"min_walltime\\\" and \\\"max_walltime\\\" are not valid resources for a job array\";\nchar *msg_nolimit_resc = \"Resource limits can not be set for the resource\";\nchar *msg_save_err = \"Failed to save job/resv, refer server logs for details\";\nchar *msg_mom_incomplete_hook = \"vnode's parent mom has a pending copy hook or delete hook request\";\nchar *msg_mom_reject_root_scripts = \"mom not accepting remote hook files or root job scripts\";\nchar *msg_hook_reject = \"hook rejected request\";\nchar *msg_hook_reject_rerunjob = \"hook rejected request, requiring job to be rerun\";\nchar *msg_hook_reject_deletejob = \"hook rejected request, requiring job to be deleted\";\nchar *msg_ival_obj_name = \"Invalid object name\";\nchar *msg_wrong_resume = \"Job can not be resumed with the requested resume signal\";\n\n/* Provisioning specific */\nchar *msg_provheadnode_error = \"Cannot set provisioning attribute on host running PBS server and scheduler\";\nchar *msg_cantmodify_ndprov = \"Cannot modify attribute while vnode is provisioning\";\nchar *msg_nostatechange_ndprov = \"Cannot change state of provisioning vnode\";\nchar *msg_cantdel_ndprov = \"Cannot delete vnode if vnode is provisioning\";\nchar *msg_node_bad_current_aoe = \"Current AOE does not match with resources_available.aoe\";\nchar *msg_invld_aoechunk = \"Invalid provisioning request in chunk(s)\";\n\n/* Standing reservation specific */\nchar *msg_bad_rrule_yearly = \"YEARLY recurrence duration cannot exceed 1 year\";\nchar *msg_bad_rrule_monthly = \"MONTHLY recurrence duration cannot exceed 1 month\";\nchar *msg_bad_rrule_weekly = \"WEEKLY recurrence duration cannot exceed 1 week\";\nchar *msg_bad_rrule_daily = \"DAILY recurrence duration cannot exceed 24 hours\";\nchar *msg_bad_rrule_hourly = \"HOURLY recurrence duration cannot exceed 1 hour\";\nchar *msg_bad_rrule_minutely = \"MINUTELY recurrence duration 
cannot exceed 1 minute\";\nchar *msg_bad_rrule_secondly = \"SECONDLY recurrence duration cannot exceed 1 second\";\nchar *msg_bad_rrule_syntax = \"Undefined iCalendar syntax\";\nchar *msg_bad_rrule_syntax2 = \"Undefined iCalendar syntax. A valid COUNT or UNTIL is required\";\nchar *msg_bad_ical_tz = \"Unrecognized PBS_TZID environment variable\";\n\n/* following set of messages are for entity limit controls */\nchar *msg_mixedquerunlimits = \"Cannot mix old and new style queue/run limit enforcement types\";\nchar *msg_et_qct = \"Maximum number of jobs already in queue %s\";\nchar *msg_et_sct = \"Maximum number of jobs already in complex\";\nchar *msg_et_ggq = \"would exceed queue %s's per-group limit\";\nchar *msg_et_ggs = \"would exceed complex's per-group limit\";\nchar *msg_et_gpq = \"would exceed queue %s's per-project limit\";\nchar *msg_et_gps = \"would exceed complex's per-project limit\";\n\nchar *msg_et_guq = \"would exceed queue %s's per-user limit\";\nchar *msg_et_gus = \"would exceed complex's per-user limit\";\nchar *msg_et_sgq = \"Maximum number of jobs for group %s already in queue %s\";\nchar *msg_et_sgs = \"Maximum number of jobs for group %s already in complex\";\nchar *msg_et_spq = \"Maximum number of jobs for project %s already in queue %s\";\nchar *msg_et_sps = \"Maximum number of jobs for project %s already in complex\";\nchar *msg_et_suq = \"Maximum number of jobs for user %s already in queue %s\";\nchar *msg_et_sus = \"Maximum number of jobs for user %s already in complex\";\nchar *msg_et_raq = \"would exceed limit on resource %s in queue %s\";\nchar *msg_et_ras = \"would exceed limit on resource %s in complex\";\nchar *msg_et_rggq = \"would exceed per-group limit on resource %s in queue %s\";\nchar *msg_et_rggs = \"would exceed per-group limit on resource %s in complex\";\nchar *msg_et_rgpq = \"would exceed per-project limit on resource %s in queue %s\";\nchar *msg_et_rgps = \"would exceed per-project limit on resource %s in 
complex\";\nchar *msg_et_rguq = \"would exceed per-user limit on resource %s in queue %s\";\nchar *msg_et_rgus = \"would exceed per-user limit on resource %s in complex\";\nchar *msg_et_rsgq = \"would exceed group %s's limit on resource %s in queue %s\";\nchar *msg_et_rsgs = \"would exceed group %s's limit on resource %s in complex\";\nchar *msg_et_rspq = \"would exceed project %s's limit on resource %s in queue %s\";\nchar *msg_et_rsps = \"would exceed project %s's limit on resource %s in complex\";\nchar *msg_et_rsuq = \"would exceed user %s's limit on resource %s in queue %s\";\nchar *msg_et_rsus = \"would exceed user %s's limit on resource %s in complex\";\n\nchar *msg_et_qct_q = \"Maximum number of jobs in 'Q' state already in queue %s\";\nchar *msg_et_sct_q = \"Maximum number of jobs in 'Q' state already in complex\";\nchar *msg_et_ggq_q = \"would exceed queue %s's per-group limit of jobs in 'Q' state\";\nchar *msg_et_ggs_q = \"would exceed complex's per-group limit of jobs in 'Q' state\";\nchar *msg_et_gpq_q = \"would exceed queue %s's per-project limit of jobs in 'Q' state\";\nchar *msg_et_gps_q = \"would exceed complex's per-project limit of jobs in 'Q' state\";\n\nchar *msg_et_guq_q = \"would exceed queue %s's per-user limit of jobs in 'Q' state\";\nchar *msg_et_gus_q = \"would exceed complex's per-user limit of jobs in 'Q' state\";\nchar *msg_et_sgq_q = \"Maximum number of jobs in 'Q' state for group %s already in queue %s\";\nchar *msg_et_sgs_q = \"Maximum number of jobs in 'Q' state for group %s already in complex\";\nchar *msg_et_spq_q = \"Maximum number of jobs in 'Q' state for project %s already in queue %s\";\nchar *msg_et_sps_q = \"Maximum number of jobs in 'Q' state for project %s already in complex\";\nchar *msg_et_suq_q = \"Maximum number of jobs in 'Q' state for user %s already in queue %s\";\nchar *msg_et_sus_q = \"Maximum number of jobs in 'Q' state for user %s already in complex\";\nchar *msg_et_raq_q = \"would exceed limit on resource %s 
in queue %s for jobs in 'Q' state\";\nchar *msg_et_ras_q = \"would exceed limit on resource %s in complex for jobs in 'Q' state\";\nchar *msg_et_rggq_q = \"would exceed per-group limit on resource %s in queue %s for jobs in 'Q' state\";\nchar *msg_et_rggs_q = \"would exceed per-group limit on resource %s in complex for jobs in 'Q' state\";\nchar *msg_et_rgpq_q = \"would exceed per-project limit on resource %s in queue %s for jobs in 'Q' state\";\nchar *msg_et_rgps_q = \"would exceed per-project limit on resource %s in complex for jobs in 'Q' state\";\nchar *msg_et_rguq_q = \"would exceed per-user limit on resource %s in queue %s for jobs in 'Q' state\";\nchar *msg_et_rgus_q = \"would exceed per-user limit on resource %s in complex for jobs in 'Q' state\";\nchar *msg_et_rsgq_q = \"would exceed group %s's limit on resource %s in queue %s for jobs in 'Q' state\";\nchar *msg_et_rsgs_q = \"would exceed group %s's limit on resource %s in complex for jobs in 'Q' state\";\nchar *msg_et_rspq_q = \"would exceed project %s's limit on resource %s in queue %s for jobs in 'Q' state\";\nchar *msg_et_rsps_q = \"would exceed project %s's limit on resource %s in complex for jobs in 'Q' state\";\nchar *msg_et_rsuq_q = \"would exceed user %s's limit on resource %s in queue %s for jobs in 'Q' state\";\nchar *msg_et_rsus_q = \"would exceed user %s's limit on resource %s in complex for jobs in 'Q' state\";\n\nchar *msg_force_qsub_update = \"force a qsub update\";\nchar *msg_noloopbackif = \"Local host does not have loopback interface configured or pingable.\";\n\nchar *msg_defproject = \"%s = %s is also the default project assigned to jobs with unset project attribute\";\nchar *msg_norunalteredjob = \"Cannot run job which was altered/moved during current scheduling cycle.\";\n\n/* resource limit setup specific */\nchar *msg_corelimit = \"invalid value for PBS_CORE_LIMIT in pbs.conf, continuing with default core limit. 
To use PBS_CORE_LIMIT update pbs.conf with correct value\";\n\nchar *msg_resc_busy = \"Resource busy\";\n\nchar *msg_job_moved = \"Job moved to remote server\";\nchar *msg_init_recovsched = \"Recovered scheduler %s\";\nchar *msg_sched_exist = \"Scheduler already exists\";\nchar *msg_sched_name_big = \"Scheduler name is too long\";\nchar *msg_unknown_sched = \"Unknown Scheduler\";\nchar *msg_no_del_sched = \"Can not delete Scheduler\";\nchar *msg_sched_priv_exists = \"Another scheduler also has same value for its sched_priv directory\";\nchar *msg_sched_logs_exists = \"Another scheduler also has same value for its sched_log directory\";\nchar *msg_route_que_no_partition = \"Cannot assign a partition to route queue\";\nchar *msg_cannot_set_route_que = \"Route queues are incompatible with the partition attribute\";\nchar *msg_queue_not_in_partition = \"Queue %s is not part of partition for node\";\nchar *msg_partition_not_in_queue = \"Partition %s is not part of queue for node\";\nchar *msg_invalid_partion_in_queue = \"Invalid partition in queue\";\nchar *msg_sched_op_not_permitted = \"Operation is not permitted on default scheduler\";\nchar *msg_sched_part_already_used = \"Partition is already associated with other scheduler\";\nchar *msg_invalid_max_job_sequence_id = \"Cannot set max_job_sequence_id < 9999999, or > 999999999999\";\nchar *msg_jsf_incompatible = \"Server's job_sort_formula value is incompatible with sched's\";\n\nchar *msg_resv_not_empty = \"Reservation not empty\";\nchar *msg_stdg_resv_occr_conflict = \"Requested time(s) will interfere with a later occurrence\";\nchar *msg_alps_switch_err = \"Switching ALPS reservation failed\";\n\nchar *msg_softwt_stf = \"soft_walltime is not supported with Shrink to Fit jobs\";\nchar *msg_node_busy = \"Node is busy\";\nchar *msg_default_partition = \"Default partition name is not allowed\";\nchar *msg_depend_runone = \"Job deleted, a dependent job ran\";\nchar *msg_histdepend = \"Finished job did not satisfy 
dependency\";\nchar *msg_sched_already_connected = \"Scheduler already connected\";\nchar *msg_notarray_attr = \"Attribute has to be set on an array job\";\n\n/*\n * The following table connects error numbers with text\n * to be returned to the client.  Each is guaranteed to be pure text.\n * There are no printf formatting strings imbedded.\n */\n\nstruct pbs_err_to_txt pbs_err_to_txt[] = {\n\t{PBSE_UNKJOBID, &msg_unkjobid},\n\t{PBSE_NOATTR, &msg_noattr},\n\t{PBSE_ATTRRO, &msg_attrro},\n\t{PBSE_IVALREQ, &msg_ivalreq},\n\t{PBSE_UNKREQ, &msg_unkreq},\n\t{PBSE_PERM, &msg_perm},\n\t{PBSE_BADHOST, &msg_reqbadhost},\n\t{PBSE_JOBEXIST, &msg_jobexist},\n\t{PBSE_SYSTEM, &msg_system},\n\t{PBSE_INTERNAL, &msg_internal},\n\t{PBSE_REGROUTE, &msg_regroute},\n\t{PBSE_UNKSIG, &msg_unksig},\n\t{PBSE_BADATVAL, &msg_badatval},\n\t{PBSE_BADNDATVAL, &msg_badnodeatval},\n\t{PBSE_NODENBIG, &msg_nodenamebig},\n\t{PBSE_MUTUALEX, &msg_mutualex},\n\t{PBSE_MODATRRUN, &msg_modatrrun},\n\t{PBSE_BADSTATE, &msg_badstate},\n\t{PBSE_UNKQUE, &msg_unkque},\n\t{PBSE_UNKNODE, &msg_unknode},\n\t{PBSE_UNKNODEATR, &msg_unknodeatr},\n\t{PBSE_NONODES, &msg_nonodes},\n\t{PBSE_BADCRED, &msg_badcred},\n\t{PBSE_EXPIRED, &msg_expired},\n\t{PBSE_QUNOENB, &msg_qunoenb},\n\t{PBSE_QACESS, &msg_qacess},\n\t{PBSE_BADUSER, &msg_baduser},\n\t{PBSE_R_UID, &msg_badRuser},\n\t{PBSE_HOPCOUNT, &msg_hopcount},\n\t{PBSE_QUEEXIST, &msg_queexist},\n\t{PBSE_OBJBUSY, &msg_objbusy},\n\t{PBSE_QUENBIG, &msg_quenbig},\n\t{PBSE_NOSUP, &msg_nosupport},\n\t{PBSE_QUENOEN, &msg_quenoen},\n\t{PBSE_PROTOCOL, &msg_protocol},\n\t{PBSE_NOCONNECTS, &msg_noconnects},\n\t{PBSE_NOSERVER, &msg_noserver},\n\t{PBSE_UNKRESC, &msg_unkresc},\n\t{PBSE_EXCQRESC, &msg_excqresc},\n\t{PBSE_QUENODFLT, &msg_quenodflt},\n\t{PBSE_NORERUN, &msg_jobnorerun},\n\t{PBSE_ROUTEREJ, &msg_routebad},\n\t{PBSE_MOMREJECT, &msg_momreject},\n\t{PBSE_NOSYNCMSTR, &msg_nosyncmstr},\n\t{PBSE_STAGEIN, &msg_stageinfail},\n\t{PBSE_RESCUNAV, &msg_rescunav},\n\t{PBSE_BADGRP, 
&msg_badgrp},\n\t{PBSE_R_GID, &msg_badRgrp},\n\t{PBSE_MAXQUED, &msg_maxqueued},\n\t{PBSE_CKPBSY, &msg_chkpointbusy},\n\t{PBSE_EXLIMIT, &msg_exceedlmt},\n\t{PBSE_BADACCT, &msg_badacct},\n\t{PBSE_BADDEPEND, &msg_baddepend},\n\t{PBSE_DUPLIST, &msg_duplist},\n\t{PBSE_EXECTHERE, &msg_execthere},\n\t{PBSE_SVRDOWN, &msg_svrshut},\n\t{PBSE_ATTRTYPE, &msg_attrtype2},\n\t{PBSE_GMODERR, &msg_gmoderr},\n\t{PBSE_NORELYMOM, &msg_norelytomom},\n\t{PBSE_NOTSNODE, &msg_notsnode},\n\t{PBSE_RESV_NO_WALLTIME, &msg_resvNowall},\n\t{PBSE_JOBNOTRESV, &msg_jobNotresv},\n\t{PBSE_TOOLATE, &msg_resvToolate},\n\t{PBSE_IRESVE, &msg_resvsyserr},\n\t{PBSE_RESVEXIST, &msg_resvExist},\n\t{PBSE_RESV_FROM_RESVJOB, &msg_resvfromresvjob},\n\t{PBSE_RESV_FROM_ARRJOB, &msg_resvfromarrjob},\n\t{PBSE_SELECT_NOT_SUBSET, &msg_selnotsubset},\n\t{PBSE_resvFail, &msg_resvFail},\n\t{PBSE_genBatchReq, &msg_genBatchReq},\n\t{PBSE_mgrBatchReq, &msg_mgrBatchReq},\n\t{PBSE_UNKRESVID, &msg_unkresvID},\n\t{PBSE_delProgress, &msg_delProgress},\n\t{PBSE_BADTSPEC, &msg_BadTspec},\n\t{PBSE_NOTRESV, &msg_NotResv},\n\t{PBSE_BADNODESPEC, &msg_BadNodespec},\n\t{PBSE_LICENSEINV, &msg_licenseinv},\n\t{PBSE_RESVAUTH_H, &msg_resvauth_H},\n\t{PBSE_RESVAUTH_G, &msg_resvauth_G},\n\t{PBSE_RESVAUTH_U, &msg_resvauth_U},\n\t{PBSE_RESCNOTSTR, &msg_rescnotstr},\n\t{PBSE_MaxArraySize, &msg_maxarraysize},\n\t{PBSE_NOSCHEDULER, &msg_sched_nocall},\n\t{PBSE_INVALSELECTRESC, &msg_invalselectresc},\n\t{PBSE_INVALJOBRESC, &msg_invaljobresc},\n\t{PBSE_INVALNODEPLACE, &msg_invalnodeplace},\n\t{PBSE_PLACENOSELECT, &msg_placenoselect},\n\t{PBSE_INDIRECTHOP, &msg_indirecthop},\n\t{PBSE_INDIRECTBT, &msg_indirectbadtgt},\n\t{PBSE_NODESTALE, &msg_nodestale},\n\t{PBSE_NODEEXIST, &msg_nodeexist},\n\t{PBSE_DUPRESC, &msg_dupresc},\n\t{PBSE_CONNFULL, &msg_connfull},\n\t{PBSE_LICENSE_MIN_BADVAL, &msg_license_min_badval},\n\t{PBSE_LICENSE_MAX_BADVAL, &msg_license_max_badval},\n\t{PBSE_LICENSE_LINGER_BADVAL, &msg_license_linger_badval},\n\t{PBSE_BAD_FORMULA, 
&msg_bad_formula},\n\t{PBSE_BAD_FORMULA_KW, &msg_bad_formula_kw},\n\t{PBSE_BAD_FORMULA_TYPE, &msg_bad_formula_type},\n\t{PBSE_BAD_RRULE_YEARLY, &msg_bad_rrule_yearly},\n\t{PBSE_BAD_RRULE_MONTHLY, &msg_bad_rrule_monthly},\n\t{PBSE_BAD_RRULE_WEEKLY, &msg_bad_rrule_weekly},\n\t{PBSE_BAD_RRULE_DAILY, &msg_bad_rrule_daily},\n\t{PBSE_BAD_RRULE_HOURLY, &msg_bad_rrule_hourly},\n\t{PBSE_BAD_RRULE_MINUTELY, &msg_bad_rrule_minutely},\n\t{PBSE_BAD_RRULE_SECONDLY, &msg_bad_rrule_secondly},\n\t{PBSE_BAD_RRULE_SYNTAX, &msg_bad_rrule_syntax},\n\t{PBSE_BAD_RRULE_SYNTAX2, &msg_bad_rrule_syntax2},\n\t{PBSE_BAD_ICAL_TZ, &msg_bad_ical_tz},\n\t{PBSE_HOOKERROR, &msg_hook_error},\n\t{PBSE_NEEDQUET, &msg_needquetype},\n\t{PBSE_ETEERROR, &msg_eligibletimeset_error},\n\t{PBSE_HISTJOBID, &msg_historyjobid},\n\t{PBSE_JOBHISTNOTSET, &msg_job_history_notset},\n\t{PBSE_MIXENTLIMS, &msg_mixedquerunlimits},\n\t{PBSE_ATVALERANGE, &msg_valueoutofrange},\n\t{PBSE_PROV_HEADERROR, &msg_provheadnode_error},\n\t{PBSE_NODEPROV_NOACTION, &msg_cantmodify_ndprov},\n\t{PBSE_NODEPROV, &msg_nostatechange_ndprov},\n\t{PBSE_NODEPROV_NODEL, &msg_cantdel_ndprov},\n\t{PBSE_NODE_BAD_CURRENT_AOE, &msg_node_bad_current_aoe},\n\t{PBSE_NOLOOPBACKIF, &msg_noloopbackif},\n\t{PBSE_IVAL_AOECHUNK, &msg_invld_aoechunk},\n\t{PBSE_JOBINRESV_CONFLICT, &msg_jobinresv_conflict},\n\t{PBSE_MAX_NO_MINWT, &msg_max_no_minwt},\n\t{PBSE_MIN_GT_MAXWT, &msg_min_gt_maxwt},\n\t{PBSE_NOSTF_RESV, &msg_nostf_resv},\n\t{PBSE_NOSTF_JOBARRAY, &msg_nostf_jobarray},\n\t{PBSE_NOLIMIT_RESOURCE, &msg_nolimit_resc},\n\t{PBSE_NORUNALTEREDJOB, &msg_norunalteredjob},\n\t{PBSE_NOHISTARRAYSUBJOB, &msg_nohistarrayjob},\n\t{PBSE_FORCE_QSUB_UPDATE, &msg_force_qsub_update},\n\t{PBSE_SAVE_ERR, &msg_save_err},\n\t{PBSE_MOM_INCOMPLETE_HOOK, &msg_mom_incomplete_hook},\n\t{PBSE_MOM_REJECT_ROOT_SCRIPTS, &msg_mom_reject_root_scripts},\n\t{PBSE_HOOK_REJECT, &msg_hook_reject},\n\t{PBSE_HOOK_REJECT_RERUNJOB, &msg_hook_reject_rerunjob},\n\t{PBSE_HOOK_REJECT_DELETEJOB, 
&msg_hook_reject_deletejob},\n\t{PBSE_IVAL_OBJ_NAME, &msg_ival_obj_name},\n\t{PBSE_JOBNBIG, &msg_jobnamebig},\n\t{PBSE_RESCBUSY, &msg_resc_busy},\n\t{PBSE_JOB_MOVED, &msg_job_moved},\n\t{PBSE_JOBSCRIPTMAXSIZE, &msg_jobscript_max_size},\n\t{PBSE_BADJOBSCRIPTMAXSIZE, &msg_badjobscript_max_size},\n\t{PBSE_WRONG_RESUME, &msg_wrong_resume},\n\t{PBSE_SCHEDEXIST, &msg_sched_exist},\n\t{PBSE_SCHED_NAME_BIG, &msg_sched_name_big},\n\t{PBSE_UNKSCHED, &msg_unknown_sched},\n\t{PBSE_SCHED_NO_DEL, &msg_no_del_sched},\n\t{PBSE_SCHED_PRIV_EXIST, &msg_sched_priv_exists},\n\t{PBSE_SCHED_LOG_EXIST, &msg_sched_logs_exists},\n\t{PBSE_ROUTE_QUE_NO_PARTITION, &msg_route_que_no_partition},\n\t{PBSE_CANNOT_SET_ROUTE_QUE, &msg_cannot_set_route_que},\n\t{PBSE_QUE_NOT_IN_PARTITION, &msg_queue_not_in_partition},\n\t{PBSE_PARTITION_NOT_IN_QUE, &msg_partition_not_in_queue},\n\t{PBSE_INVALID_PARTITION_QUE, &msg_invalid_partion_in_queue},\n\t{PBSE_RESV_NOT_EMPTY, &msg_resv_not_empty},\n\t{PBSE_STDG_RESV_OCCR_CONFLICT, &msg_stdg_resv_occr_conflict},\n\t{PBSE_ALPS_SWITCH_ERR, &msg_alps_switch_err},\n\t{PBSE_SOFTWT_STF, &msg_softwt_stf},\n\t{PBSE_SCHED_OP_NOT_PERMITTED, &msg_sched_op_not_permitted},\n\t{PBSE_SCHED_PARTITION_ALREADY_EXISTS, &msg_sched_part_already_used},\n\t{PBSE_INVALID_MAX_JOB_SEQUENCE_ID, &msg_invalid_max_job_sequence_id},\n\t{PBSE_SVR_SCHED_JSF_INCOMPAT, &msg_jsf_incompatible},\n\t{PBSE_NODE_BUSY, &msg_node_busy},\n\t{PBSE_DEFAULT_PARTITION, &msg_default_partition},\n\t{PBSE_HISTDEPEND, &msg_histdepend},\n\t{PBSE_SCHEDCONNECTED, &msg_sched_already_connected},\n\t{PBSE_NOTARRAY_ATTR, &msg_notarray_attr},\n\t{0, NULL} /* MUST be the last entry */\n};\n\n/**\n * @brief\n * \tpbse_to_txt() - return a text message for a PBS error number\n *\tif it exists\n *\n * @param[in] err - error number whose text message is to be returned\n *\n * @return\tstring\n * @retval\ttext error msg\tif such an error exists\n * @retval\tNULL\t\tno such error number\n *\n */\n\nchar *\npbse_to_txt(int 
err)\n{\n\tint i = 0;\n\n\twhile (pbs_err_to_txt[i].err_no && (pbs_err_to_txt[i].err_no != err))\n\t\t++i;\n\tif (pbs_err_to_txt[i].err_txt != NULL)\n\t\treturn (*pbs_err_to_txt[i].err_txt);\n\telse\n\t\treturn NULL;\n}\n"
  },
  {
    "path": "src/lib/Liblog/setup_env.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <errno.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include \"portability.h\"\n#include \"log.h\"\n\n/**\n * @file\tsetup_env.c\n */\n/**\n * @brief\n *\tsetup the daemon's environment\n *\tTo provide a \"safe and secure\" environment, the daemons replace their\n *\tinherited one with one from a file.  
Each line in the file is\n *\teither a comment line, starting with '#' or ' ', or is a line of\n *\tthe forms:\tvariable=value\n *\t\t\tvariable\n *\tIn the second case the value is obtained from the current environment.\n *\n * @param[in]   filen - the daemons replace their inherited one with one from this\n *                     file\n *\n * @return int\n * @retval >=0\tthe environment was set up successfully; the value is the\n *\t\tnumber of environment variables placed in it\n * @retval -1\terror encountered setting up the environment\n */\n#define PBS_ENVP_STR 64\n#define PBS_ENV_CHUNCK 8191\n\nint\nsetup_env(char *filen)\n{\n\tstatic char id[] = \"setup_env\";\n\tchar buf[PBS_ENV_CHUNCK + 1];\n\tFILE *efile;\n\tchar *envbuf;\n\tstatic char *envp[PBS_ENVP_STR + 1];\n\tstatic char *nulenv[1];\n\tint questionable = 0;\n\tint i, len;\n\tint nstr = 0;\n\tchar *pequal;\n\tchar *pval = NULL;\n\textern char **environ;\n\n\tif (environ == envp) {\n\t\tfor (i = 0; envp[i] != NULL; i++) {\n\t\t\tfree(envp[i]);\n\t\t\tenvp[i] = NULL;\n\t\t}\n\t}\n\tenviron = nulenv;\n\tif ((filen == NULL) || (*filen == '\\0'))\n\t\treturn 0;\n\n\tefile = fopen(filen, \"r\");\n\tif (efile != NULL) {\n\n\t\twhile (fgets(buf, PBS_ENV_CHUNCK, efile)) {\n\n\t\t\tif (questionable) {\n\t\t\t\t/* previous buffer had no '\\n' and more bytes remain */\n\t\t\t\tgoto err;\n\t\t\t}\n\n\t\t\tif ((buf[0] != '#') && (buf[0] != ' ') && (buf[0] != '\\n')) {\n\n\t\t\t\tlen = strlen(buf);\n\t\t\t\tif (buf[len - 1] != '\\n') {\n\t\t\t\t\t/* no newline, wonder if this is last line */\n\t\t\t\t\tquestionable = 1;\n\t\t\t\t} else {\n#ifdef WIN32\n\t\t\t\t\t/* take care of <carriage-return> char */\n\t\t\t\t\tif (len > 1 && !isalnum(buf[len - 2]))\n\t\t\t\t\t\tbuf[len - 2] = '\\0';\n#endif\n\t\t\t\t\tbuf[len - 1] = '\\0';\n\t\t\t\t}\n\n\t\t\t\tif ((pequal = strchr(buf, (int) '=')) == 0) {\n\t\t\t\t\tif ((pval = getenv(buf)) == 0)\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\tlen += strlen(pval) + 1;\n\t\t\t\t}\n\n\t\t\t\tif ((envbuf = malloc(len + 1)) == 0)\n\t\t\t\t\tgoto 
err;\n\t\t\t\t(void) strcpy(envbuf, buf);\n\t\t\t\tif (pequal == 0) {\n\t\t\t\t\t(void) strcat(envbuf, \"=\");\n\t\t\t\t\t(void) strcat(envbuf, pval);\n\t\t\t\t}\n\t\t\t\tenvp[nstr++] = envbuf;\n\t\t\t\tif (nstr == PBS_ENVP_STR)\n\t\t\t\t\tgoto err;\n\t\t\t\tenvp[nstr] = NULL;\n\t\t\t}\n\t\t}\n\t\tfclose(efile);\n\t\tenviron = envp;\n\t} else if (errno != ENOENT) {\n\t\tgoto err;\n\t}\n\tsprintf(log_buffer, \"read environment from %s\", filen);\n\tlog_event(PBSEVENT_SYSTEM, 0, LOG_NOTICE, id, log_buffer);\n\treturn (nstr);\n\nerr:\n\tlog_err(errno, id, \"Could not set up the environment\");\n\t/* efile is NULL when the fopen() itself failed; fclose(NULL) is undefined */\n\tif (efile != NULL)\n\t\tfclose(efile);\n\treturn (-1);\n}\n"
  },
  {
    "path": "src/lib/Libnet/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nlib_LIBRARIES = libnet.a\n\nlibnet_a_CPPFLAGS = \\\n\t-I$(top_srcdir)/src/include \\\n\t@KRB5_CFLAGS@\n\nlibnet_a_SOURCES = \\\n\tget_hostaddr.c \\\n\tnet_client.c \\\n\tnet_server.c \\\n\tnet_set_clse.c \\\n\tport_forwarding.c \\\n\thnls.c\n"
  },
  {
    "path": "src/lib/Libnet/get_hostaddr.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/types.h>\n#include <sys/socket.h>\n#include <netdb.h>\n#include <netinet/in.h>\n#include <memory.h>\n#include <arpa/inet.h>\n#include \"portability.h\"\n#include \"server_limits.h\"\n#include \"pbs_ifl.h\"\n#include \"net_connect.h\"\n#include \"pbs_error.h\"\n#include \"pbs_internal.h\"\n\n#if !defined(H_ERRNO_DECLARED)\nextern int h_errno;\n#endif\n\n/**\n * @file\tget_hostaddr.c\n * @brief\n * get_hostaddr.c - contains functions to provide the internal\n *\tinternet address for a host and to provide the port\n *\tnumber for a service.\n *\n * get_hostaddr - get internal internet address of a host\n *\n *\tReturns a pbs_net_t (unsigned long) containing the network\n *\taddress in host byte order.  
A value of zero is returned on\n *\tan error.\n */\n\n/**\n * @brief\n *\tget internal internet address of a host\n *\n * @param[in] hostname - hostname whose internet address is to be returned\n *\n * @return\tpbs_net_t\n * @retval\tinternet address\tsuccess\n * @retval\t0\t\t\terror\n *\n */\npbs_net_t\nget_hostaddr(char *hostname)\n{\n\tstruct addrinfo *aip, *pai;\n\tstruct addrinfo hints;\n\tstruct sockaddr_in *inp;\n\tint err;\n\tpbs_net_t res;\n\n\tif ((hostname == 0) || (*hostname == '\\0')) {\n\t\tpbs_errno = PBS_NET_RC_FATAL;\n\t\treturn ((pbs_net_t) 0);\n\t}\n\n\tmemset(&hints, 0, sizeof(struct addrinfo));\n\t/*\n\t *      Why do we use AF_UNSPEC rather than AF_INET?  Some\n\t *      implementations of getaddrinfo() will take an IPv6\n\t *      address and map it to an IPv4 one if we ask for AF_INET\n\t *      only.  We don't want that - we want only the addresses\n\t *      that are genuinely, natively, IPv4 so we start with\n\t *      AF_UNSPEC and filter ai_family below.\n\t */\n\thints.ai_family = AF_UNSPEC;\n\thints.ai_socktype = SOCK_STREAM;\n\thints.ai_protocol = IPPROTO_TCP;\n\tif ((err = getaddrinfo(hostname, NULL, &hints, &pai)) != 0) {\n\t\tif (err == EAI_AGAIN)\n\t\t\tpbs_errno = PBS_NET_RC_RETRY;\n\t\telse\n\t\t\tpbs_errno = PBS_NET_RC_FATAL;\n\t\treturn ((pbs_net_t) 0);\n\t}\n\tfor (aip = pai; aip != NULL; aip = aip->ai_next) {\n\t\t/* skip non-IPv4 addresses */\n\t\tif (aip->ai_family == AF_INET) {\n\t\t\tinp = (struct sockaddr_in *) aip->ai_addr;\n\t\t\tbreak;\n\t\t}\n\t}\n\tif (aip == NULL) {\n\t\t/* treat no IPv4 addresses as fatal getaddrinfo() failure */\n\t\tpbs_errno = PBS_NET_RC_FATAL;\n\t\tfreeaddrinfo(pai);\n\t\treturn ((pbs_net_t) 0);\n\t}\n\tres = ntohl(inp->sin_addr.s_addr);\n\tfreeaddrinfo(pai);\n\treturn (res);\n}\n\n/**\n * @brief\n * \t\tcompare a short hostname with a FQ hostname; match if the same up to the dot\n *\n * @param[in]\tshost\t- short hostname\n * @param[in]\tlhost\t- FQ host\n *\n * @return\tint\n * @retval\t0\t- match\n * 
@retval\t1\t- no match\n */\nint\ncompare_short_hostname(char *shost, char *lhost)\n{\n\tsize_t len;\n\tchar *pdot;\n\tint is_shost_ip;\n\tint is_lhost_ip;\n\tstruct sockaddr_in check_ip;\n\n\tif ((shost == NULL) || (lhost == NULL))\n\t\treturn 1;\n\n\t/* check if hostnames given are in IPV4 dotted-decimal form: ddd.ddd.ddd.ddd */\n\tis_shost_ip = inet_pton(AF_INET, shost, &(check_ip.sin_addr));\n\tis_lhost_ip = inet_pton(AF_INET, lhost, &(check_ip.sin_addr));\n\tif ((is_shost_ip > 0) || (is_lhost_ip > 0)) {\n\t\t/* ((3 * 4) + 3) = 15 characters, max length dotted decimal addr */\n\t\tif (strncmp(shost, lhost, 15) == 0)\n\t\t\treturn 0;\n\t\treturn 1;\n\t}\n\n\tif ((pdot = strchr(shost, '.')) != NULL)\n\t\tlen = (size_t) (pdot - shost);\n\telse\n\t\tlen = strlen(shost);\n\tif ((strncasecmp(shost, lhost, len) == 0) &&\n\t    ((*(lhost + len) == '.') || (*(lhost + len) == '\\0')))\n\t\treturn 0; /* match */\n\telse\n\t\treturn 1; /* no match */\n}\n\n/**\n * @brief\n *\n * comp_svraddr - get internal internet address of the given host\n *\t\t  check to see if any of the addresses match the given server\n *\t\t  net address.\n *\n *\n * @param[in] svr_addr - net address of the server\n * @param[in] hostname - hostname whose internet addr needs to be compared\n * @param[out] addr - one of the address associated with hostname is returned in this argument\n *\n * @return\tint\n * @retval\t0 address found\n * @retval\t1 address not found\n * @retval\t2 failed to find address\n *\n */\nint\ncomp_svraddr(pbs_net_t svr_addr, char *hostname, pbs_net_t *addr)\n{\n\tstruct addrinfo *aip, *pai;\n\tstruct addrinfo hints;\n\tstruct sockaddr_in *inp;\n\tpbs_net_t res;\n\n\tif ((hostname == NULL) || (*hostname == '\\0')) {\n\t\treturn (2);\n\t}\n\tif (addr)\n\t\t*addr = 0;\n\n\tmemset(&hints, 0, sizeof(struct addrinfo));\n\thints.ai_family = AF_UNSPEC;\n\thints.ai_socktype = SOCK_STREAM;\n\thints.ai_protocol = IPPROTO_TCP;\n\tif (getaddrinfo(hostname, NULL, &hints, &pai) != 0) 
{\n\t\tpbs_errno = PBSE_BADHOST;\n\t\treturn (2);\n\t}\n\tfor (aip = pai; aip != NULL; aip = aip->ai_next) {\n\t\tif (aip->ai_family == AF_INET) {\n\t\t\tinp = (struct sockaddr_in *) aip->ai_addr;\n\t\t\tres = ntohl(inp->sin_addr.s_addr);\n\t\t\tif (addr && *addr == 0)\n\t\t\t\t*addr = res;\n\t\t\tif (res == svr_addr) {\n\t\t\t\tfreeaddrinfo(pai);\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t}\n\t}\n\t/* no match found */\n\tfreeaddrinfo(pai);\n\treturn (1);\n}\n"
  },
  {
    "path": "src/lib/Libnet/hnls.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <errno.h>\n#include \"log.h\"\n#include \"pbs_ifl.h\"\n#include \"pbs_internal.h\"\n\n#if defined(linux)\n\n#include <arpa/inet.h>\n#include <netinet/in.h>\n#include <sys/types.h>\n#include <sys/socket.h>\n#include <ifaddrs.h>\n#include <netdb.h>\n\n#elif defined(WIN32)\n\n#pragma comment(lib, \"Ws2_32.lib\")\n#pragma comment(lib, \"Iphlpapi.lib\")\n\n#else\n\n#include <arpa/inet.h>\n#include <netinet/in.h>\n#include <net/if.h>\n#include <sys/types.h>\n#include <sys/socket.h>\n#include <sys/socketvar.h>\n#include <sys/ioctl.h>\n#include <unistd.h>\n#include <netdb.h>\n\n#endif\n\nextern char *netaddr(struct sockaddr_in *);\n#define NETADDR_BUF 80\n\n/**\n *\n * @brief\n *      Free allocated memory for pointer.\n *\n * @par Side Effects:\n *      None\n *\n * @par MT-safe: Yes\n *\n * @param[in]   names - target pointer array to be freed\n *\n * @return void\n */\nvoid\nfree_if_hostnames(char **names)\n{\n\tint i;\n\n\tif (!names)\n\t\treturn;\n\n\tfor (i = 0; names[i]; i++)\n\t\tfree(names[i]);\n\tfree(names);\n}\n/**\n *\n * @brief\n *      Return family type for socket address.\n *\n * @par Side Effects:\n *      None\n *\n * @par MT-safe: Yes\n *\n * @param[in]\t\tsockaddr - structure holding information\n *\t\t\t\t\tabout particular 
address\n * @param[out]\t\tfamily   - holds the socket's family type\n *\t\t\t\t\t\"ipv4\" or \"ipv6\"\n *\n * @return void\n */\nvoid\nget_sa_family(struct sockaddr *saddr, char *family)\n{\n\tif (!family)\n\t\treturn;\n\t*family = '\\0';\n\tif (!saddr)\n\t\treturn;\n\n\tswitch (saddr->sa_family) {\n\t\tcase AF_INET:\n\t\t\tstrncpy(family, \"ipv4\", IFFAMILY_MAX);\n\t\t\tbreak;\n\t\tcase AF_INET6:\n\t\t\tstrncpy(family, \"ipv6\", IFFAMILY_MAX);\n\t\t\tbreak;\n\t}\n\tfamily[IFFAMILY_MAX - 1] = '\\0';\n}\n/**\n *\n * @brief\n *      Returns array of names related to interfaces.\n *\n * @par Side Effects:\n *      None\n *\n * @par MT-safe: Yes\n *\n * @param[in]   sockaddr - structure holding information\n *\t\t\t\tabout addresses\n *\n * @return char**\n */\nchar **\nget_if_hostnames(struct sockaddr *saddr)\n{\n\tint i;\n\tint aliases = 0;\n\tchar **names;\n\tconst void *addr;\n\tsize_t addr_size;\n\tstruct hostent *hostp;\n\tstruct sockaddr_in *saddr_in;\n\tstruct sockaddr_in6 *saddr_in6;\n\tchar buf[INET6_ADDRSTRLEN];\n\tconst char *bufp = NULL;\n#ifdef WIN32\n\tchar host[NI_MAXHOST] = {'\\0'};\n\tint ret = 0;\n#endif /* WIN32 */\n\n\tif (!saddr)\n\t\treturn NULL;\n\n\tswitch (saddr->sa_family) {\n\t\tcase AF_INET:\n\t\t\tsaddr_in = (struct sockaddr_in *) saddr;\n#ifdef WIN32\n\t\t\tsaddr_in->sin_family = AF_INET;\n#endif /* WIN32 */\n\t\t\taddr = &saddr_in->sin_addr;\n\t\t\taddr_size = sizeof(saddr_in->sin_addr);\n\t\t\tbufp = inet_ntop(AF_INET, addr, buf, INET_ADDRSTRLEN);\n\t\t\tif (!bufp)\n\t\t\t\treturn NULL;\n#ifdef WIN32\n\t\t\tret = getnameinfo(saddr_in, sizeof(struct sockaddr_in), host, NI_MAXHOST, NULL, NULL, 0);\n\t\t\tif (ret != 0 || host[0] == '\\0')\n\t\t\t\treturn NULL;\n#else\n\t\t\thostp = gethostbyaddr(addr, addr_size, saddr_in->sin_family);\n\t\t\tif (!hostp)\n\t\t\t\treturn NULL;\n#endif /* WIN32 */\n\t\t\tbreak;\n\t\tcase AF_INET6:\n\t\t\tsaddr_in6 = (struct sockaddr_in6 *) saddr;\n#ifdef WIN32\n\t\t\tsaddr_in6->sin6_family = 
AF_INET6;\n#endif /* WIN32 */\n\t\t\taddr = &saddr_in6->sin6_addr;\n\t\t\taddr_size = sizeof(saddr_in6->sin6_addr);\n\t\t\tbufp = inet_ntop(AF_INET6, addr, buf, INET6_ADDRSTRLEN);\n\t\t\tif (!bufp)\n\t\t\t\treturn NULL;\n#ifdef WIN32\n\t\t\tret = getnameinfo(saddr_in6, sizeof(struct sockaddr_in6), host, NI_MAXHOST, NULL, NULL, 0);\n\t\t\tif (ret != 0 || host[0] == '\\0')\n\t\t\t\treturn NULL;\n#else\n\t\t\thostp = gethostbyaddr(addr, addr_size, saddr_in6->sin6_family);\n\t\t\tif (!hostp)\n\t\t\t\treturn NULL;\n#endif /* WIN32 */\n\t\t\tbreak;\n\t\tdefault:\n\t\t\treturn NULL;\n\t}\n\n#ifdef WIN32\n\tnames = (char **) calloc(2, sizeof(char *));\n\tif (!names)\n\t\treturn NULL;\n\tnames[0] = strdup(host);\n#else\n\t/* Count the aliases. */\n\tfor (aliases = 0; hostp->h_aliases[aliases]; aliases++)\n\t\t;\n\tnames = (char **) calloc((aliases + 2), sizeof(char *));\n\tif (!names)\n\t\treturn NULL;\n\tnames[0] = strdup(hostp->h_name);\n\tfor (i = 0; i < aliases; i++) {\n\t\tnames[i + 1] = strdup(hostp->h_aliases[i]);\n\t}\n#endif /* WIN32 */\n\treturn names;\n}\n\n/**\n *\n * @brief\n *      Returns structure holding network information.\n *\n * @par Side Effects:\n *      None\n *\n * @par MT-safe: Yes\n *\n * @param[out]   msg - error message returned if system calls not successful\n *\n * @return struct log_net_info * - Linked list of log_net_info structures\n */\nstruct log_net_info *\nget_if_info(char *msg)\n{\n\tstruct log_net_info *head = NULL;\n\tstruct log_net_info *curr = NULL;\n\tstruct log_net_info *prev = NULL;\n\n#if defined(linux)\n\n\tint c, i, ret;\n\tchar **hostnames;\n\tstruct ifaddrs *ifp, *listp;\n\n\tif (!msg)\n\t\treturn NULL;\n\tret = getifaddrs(&ifp);\n\tif ((ret != 0) || (ifp == NULL)) {\n\t\tstrncpy(msg, \"Failed to obtain interface names\", LOG_BUF_SIZE);\n\t\tmsg[LOG_BUF_SIZE - 1] = '\\0';\n\t\treturn NULL;\n\t}\n\tfor (listp = ifp; listp; listp = listp->ifa_next) {\n\t\thostnames = get_if_hostnames(listp->ifa_addr);\n\t\tif 
(!hostnames)\n\t\t\tcontinue;\n\t\tcurr = (struct log_net_info *) calloc(1, sizeof(struct log_net_info));\n\t\tif (!curr) {\n\t\t\tfree_if_info(head);\n\t\t\tfree_if_hostnames(hostnames);\n\t\t\tstrncpy(msg, \"Out of memory\", LOG_BUF_SIZE);\n\t\t\tmsg[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\treturn NULL;\n\t\t}\n\t\tif (prev)\n\t\t\tprev->next = curr;\n\t\tif (!head)\n\t\t\thead = curr;\n\t\tget_sa_family(listp->ifa_addr, curr->iffamily);\n\t\tpbs_strncpy(curr->ifname, listp->ifa_name, IFNAME_MAX);\n\t\t/* Count the hostname entries and allocate space */\n\t\tfor (c = 0; hostnames[c]; c++)\n\t\t\t;\n\t\tcurr->ifhostnames = (char **) calloc(c + 1, sizeof(char *));\n\t\tif (!curr->ifhostnames) {\n\t\t\tfree_if_info(head);\n\t\t\tfree_if_hostnames(hostnames);\n\t\t\tstrncpy(msg, \"Out of memory\", LOG_BUF_SIZE);\n\t\t\tmsg[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\treturn NULL;\n\t\t}\n\t\tfor (i = 0; i < c; i++) {\n\t\t\tcurr->ifhostnames[i] = (char *) calloc(PBS_MAXHOSTNAME, sizeof(char));\n\t\t\tif (!curr->ifhostnames[i]) {\n\t\t\t\tfree_if_info(head);\n\t\t\t\tfree_if_hostnames(hostnames);\n\t\t\t\tstrncpy(msg, \"Out of memory\", LOG_BUF_SIZE);\n\t\t\t\tmsg[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tstrncpy(curr->ifhostnames[i], hostnames[i], (PBS_MAXHOSTNAME - 1));\n\t\t}\n\t\tcurr->ifhostnames[i] = NULL;\n\t\tfree_if_hostnames(hostnames);\n\t\tprev = curr;\n\t\tcurr->next = NULL;\n\t}\n\tfreeifaddrs(ifp);\n\n#elif defined(WIN32)\n\n\tint c, i;\n\tchar **hostnames;\n\tPIP_ADAPTER_ADDRESSES addrlistp, addrp;\n\tPIP_ADAPTER_UNICAST_ADDRESS ucp;\n\tDWORD size = 8192;\n\tDWORD ret;\n\tWSADATA wsadata;\n\n\tif (!msg)\n\t\treturn NULL;\n\taddrlistp = (IP_ADAPTER_ADDRESSES *) malloc(size);\n\tif (!addrlistp) {\n\t\tstrncpy(msg, \"Out of memory\", LOG_BUF_SIZE);\n\t\tmsg[LOG_BUF_SIZE - 1] = '\\0';\n\t\treturn NULL;\n\t}\n\tret = GetAdaptersAddresses(AF_UNSPEC, GAA_FLAG_INCLUDE_PREFIX, NULL, addrlistp, &size);\n\tif (ret == ERROR_BUFFER_OVERFLOW) {\n\t\taddrlistp = 
realloc(addrlistp, size);\n\t\tif (!addrlistp) {\n\t\t\tstrncpy(msg, \"Out of memory\", LOG_BUF_SIZE);\n\t\t\tmsg[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\treturn NULL;\n\t\t}\n\t\tret = GetAdaptersAddresses(AF_UNSPEC, GAA_FLAG_INCLUDE_PREFIX, NULL, addrlistp, &size);\n\t}\n\tif (ret == ERROR_NO_DATA) {\n\t\tstrncpy(msg, \"No addresses found\", LOG_BUF_SIZE);\n\t\tmsg[LOG_BUF_SIZE - 1] = '\\0';\n\t\tfree(addrlistp);\n\t\treturn NULL;\n\t}\n\tif (ret != NO_ERROR) {\n\t\tstrncpy(msg, \"Failed to obtain adapter addresses\", LOG_BUF_SIZE);\n\t\tmsg[LOG_BUF_SIZE - 1] = '\\0';\n\t\tfree(addrlistp);\n\t\treturn NULL;\n\t}\n\tif (WSAStartup(MAKEWORD(2, 2), &wsadata) != 0) {\n\t\tstrncpy(msg, \"Failed to initialize network\", LOG_BUF_SIZE);\n\t\tmsg[LOG_BUF_SIZE - 1] = '\\0';\n\t\tfree(addrlistp);\n\t\treturn NULL;\n\t}\n\tfor (addrp = addrlistp; addrp; addrp = addrp->Next) {\n\t\tfor (ucp = addrp->FirstUnicastAddress; ucp; ucp = ucp->Next) {\n\t\t\thostnames = get_if_hostnames((struct sockaddr *) ucp->Address.lpSockaddr);\n\t\t\tif (!hostnames)\n\t\t\t\tcontinue;\n\t\t\tcurr = (struct log_net_info *) calloc(1, sizeof(struct log_net_info));\n\t\t\tif (!curr) {\n\t\t\t\tfree(addrlistp);\n\t\t\t\tfree_if_info(head);\n\t\t\t\tfree_if_hostnames(hostnames);\n\t\t\t\tstrncpy(msg, \"Out of memory\", LOG_BUF_SIZE);\n\t\t\t\tmsg[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tif (prev)\n\t\t\t\tprev->next = curr;\n\t\t\tif (!head)\n\t\t\t\thead = curr;\n\t\t\t/* use the current adapter's flags; 0x0100 is IP_ADAPTER_IPV6_ENABLED, 0x0080 is IP_ADAPTER_IPV4_ENABLED */\n\t\t\tif (addrp->Flags & 0x0100 && addrp->Flags & 0x0080) {\n\t\t\t\tstrncpy(curr->iffamily, \"ipv4/ipv6\", IFFAMILY_MAX);\n\t\t\t} else if (addrp->Flags & 0x0100) {\n\t\t\t\tstrncpy(curr->iffamily, \"ipv6\", IFFAMILY_MAX);\n\t\t\t} else if (addrp->Flags & 0x0080) {\n\t\t\t\tstrncpy(curr->iffamily, \"ipv4\", IFFAMILY_MAX);\n\t\t\t} else {\n\t\t\t\tstrncpy(curr->iffamily, \"unknown\", IFFAMILY_MAX);\n\t\t\t}\n\t\t\tcurr->iffamily[IFFAMILY_MAX - 1] = '\\0';\n\t\t\tstrncpy(curr->ifname, addrp->AdapterName, 
IFNAME_MAX);\n\t\t\tcurr->ifname[IFNAME_MAX - 1] = '\\0';\n\t\t\t/* Count the hostname entries and allocate space */\n\t\t\tfor (c = 0; hostnames[c]; c++)\n\t\t\t\t;\n\t\t\tcurr->ifhostnames = (char **) calloc(c + 1, sizeof(char *));\n\t\t\tif (!curr->ifhostnames) {\n\t\t\t\tfree(addrlistp);\n\t\t\t\tfree_if_info(head);\n\t\t\t\tfree_if_hostnames(hostnames);\n\t\t\t\tstrncpy(msg, \"Out of memory\", LOG_BUF_SIZE);\n\t\t\t\tmsg[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tfor (i = 0; i < c; i++) {\n\t\t\t\tcurr->ifhostnames[i] = (char *) calloc(PBS_MAXHOSTNAME, sizeof(char));\n\t\t\t\tif (!(curr->ifhostnames[i])) {\n\t\t\t\t\tfree(addrlistp);\n\t\t\t\t\tfree_if_info(head);\n\t\t\t\t\tfree_if_hostnames(hostnames);\n\t\t\t\t\tstrncpy(msg, \"Out of memory\", LOG_BUF_SIZE);\n\t\t\t\t\tmsg[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n\t\t\t\tstrncpy(curr->ifhostnames[i], hostnames[i], (PBS_MAXHOSTNAME - 1));\n\t\t\t}\n\t\t\tcurr->ifhostnames[i] = NULL;\n\t\t\tfree_if_hostnames(hostnames);\n\t\t\tprev = curr;\n\t\t\tcurr->next = NULL;\n\t\t}\n\t}\n\tWSACleanup();\n\tfree(addrlistp);\n#endif\n\n\treturn (head);\n}\n\n/**\n *\n * @brief\n *      Frees structure holding network information.\n *\n * @par Side Effects:\n *      None\n *\n * @par MT-safe: Yes\n *\n * @param[in]   ni - linked list holding interface name, family,\n *\t\t\thostnames returned from system\n *\n * @return void\n */\nvoid\nfree_if_info(struct log_net_info *ni)\n{\n\tstruct log_net_info *curr;\n\tint i;\n\n\tcurr = ni;\n\twhile (curr) {\n\t\tstruct log_net_info *temp;\n\t\ttemp = curr;\n\t\tcurr = curr->next;\n\t\tif (temp->ifhostnames != NULL) {\n\t\t\tfor (i = 0; temp->ifhostnames[i]; i++)\n\t\t\t\tfree(temp->ifhostnames[i]);\n\t\t}\n\t\tfree(temp->ifhostnames);\n\t\tfree(temp);\n\t}\n}\n\n/**\n* @brief\n\tGet a list of all IPs (ipv4) for a given hostname\n*\n* @return\n*\tComma separated list of IPs in string format\n*\n* @par Side Effects:\n*\tNone\n*\n* @par 
MT-safe: Yes\n*\n* @param[in]    host        - hostname of the current host to resolve IPs for\n* @param[out]   msg_buf     - error message returned if system calls not successful\n* @param[in]    msg_buf_len - length of the message buffer passed\n*\n*/\nstatic char *\nget_host_ips(char *host, char *msg_buf, size_t msg_buf_len)\n{\n\tstruct addrinfo *aip, *pai;\n\tstruct addrinfo hints;\n\tint rc = 0;\n\tchar buf[NETADDR_BUF] = {'\\0'};\n\tint count = 0;\n\tchar *nodenames = NULL;\n\tchar *tmp;\n\tint len, hlen;\n\n\terrno = 0;\n\n\tmemset(&hints, 0, sizeof(struct addrinfo));\n\thints.ai_family = AF_INET;\n\thints.ai_socktype = SOCK_STREAM;\n\thints.ai_protocol = IPPROTO_TCP;\n\n\tif ((rc = getaddrinfo(host, NULL, &hints, &pai)) != 0) {\n\t\tsnprintf(msg_buf, msg_buf_len, \"Error %d resolving %s\\n\", rc, host);\n\t\treturn NULL;\n\t}\n\n\tlen = 0;\n\tcount = 0;\n\tfor (aip = pai; aip != NULL; aip = aip->ai_next) {\n\t\tif (aip->ai_family == AF_INET) { /* for now only count IPv4 addresses */\n\t\t\tchar *p;\n\t\t\tstruct sockaddr_in *sa = (struct sockaddr_in *) aip->ai_addr;\n\t\t\tif (ntohl(sa->sin_addr.s_addr) >> 24 == IN_LOOPBACKNET)\n\t\t\t\tcontinue;\n\t\t\tsprintf(buf, \"%s\", netaddr(sa));\n\t\t\tif (!strcmp(buf, \"unknown\"))\n\t\t\t\tcontinue;\n\t\t\tif ((p = strchr(buf, ':')))\n\t\t\t\t*p = '\\0';\n\n\t\t\thlen = strlen(buf);\n\t\t\ttmp = realloc(nodenames, len + hlen + 2); /* 2 for comma and null char */\n\t\t\tif (!tmp) {\n\t\t\t\tstrncpy(msg_buf, \"Out of memory\", msg_buf_len);\n\t\t\t\tfree(nodenames);\n\t\t\t\tnodenames = NULL;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tnodenames = tmp;\n\n\t\t\tif (len == 0)\n\t\t\t\tstrcpy(nodenames, buf);\n\t\t\telse {\n\t\t\t\tstrcat(nodenames, \",\");\n\t\t\t\tstrcat(nodenames, buf);\n\t\t\t}\n\t\t\tlen += hlen + 2;\n\t\t\tcount++;\n\t\t}\n\t}\n\n\tfreeaddrinfo(pai);\n\n\tif (count == 0) {\n\t\tsnprintf(msg_buf, msg_buf_len, \"Could not find any usable IP address for host %s\", host);\n\t\treturn NULL;\n\t}\n\treturn 
nodenames;\n}\n\n/**\n* @brief\n*\tGet a list of all IPs\n*   First it resolves the supplied hostname to determine its IPs\n*\tThen it enumerates the interfaces in the host and determines IPs\n*\tfor each of those interfaces.\n*\n*\tDo not supply a remote hostname in this function.\n*\n* @return\n*\tComma separated list of IPs in string format\n*\n* @par Side Effects:\n*\tNone\n*\n* @par MT-safe: Yes\n*\n* @param[in]\thostname    - hostname of the current host to resolve IPs for\n* @param[out]   msg_buf     - error message returned if system calls not successful\n* @param[in]    msg_buf_len - length of the message buffer passed\n*\n*/\nchar *\nget_all_ips(char *hostname, char *msg_buf, size_t msg_buf_len)\n{\n\tchar *nodenames;\n\tint len, ret;\n\tchar *tmp;\n\tchar buf[NETADDR_BUF];\n\n#if defined(linux)\n\tstruct ifaddrs *ifp, *listp;\n\tchar *p;\n#elif defined(WIN32)\n\tint i;\n\t/* Variables used by GetIpAddrTable */\n\tPMIB_IPADDRTABLE pIPAddrTable;\n\tDWORD dwSize = 0;\n\tDWORD dwRetVal = 0;\n\tIN_ADDR IPAddr;\n#endif\n\n\tmsg_buf[0] = '\\0';\n\n\t/* prepend the list of IPs with the IPs resolved from the passed hostname */\n\tnodenames = get_host_ips(hostname, msg_buf, msg_buf_len);\n\tif (!nodenames) {\n\t\treturn NULL;\n\t}\n\n\tlen = strlen(nodenames);\n\n#if defined(linux)\n\tret = getifaddrs(&ifp);\n\n\tif ((ret != 0) || (ifp == NULL)) {\n\t\tstrncpy(msg_buf, \"Failed to obtain interface names\", msg_buf_len);\n\t\tfree(nodenames);\n\t\treturn NULL;\n\t}\n\n\tfor (listp = ifp; listp; listp = listp->ifa_next) {\n\t\tint hlen;\n\n\t\tif ((listp->ifa_addr == NULL) || (listp->ifa_addr->sa_family != AF_INET))\n\t\t\tcontinue;\n\t\tsprintf(buf, \"%s\", netaddr((struct sockaddr_in *) listp->ifa_addr));\n\t\tif (!strcmp(buf, \"unknown\"))\n\t\t\tcontinue;\n\t\tif ((p = strchr(buf, ':')))\n\t\t\t*p = '\\0';\n\n\t\thlen = strlen(buf);\n\t\ttmp = realloc(nodenames, len + hlen + 2); /* 2 for comma and null char */\n\t\tif (!tmp) {\n\t\t\tstrncpy(msg_buf, \"Out of 
memory\", msg_buf_len);\n\t\t\tfree(nodenames);\n\t\t\tnodenames = NULL;\n\t\t\tbreak;\n\t\t}\n\t\tnodenames = tmp;\n\n\t\tif (len == 0)\n\t\t\tstrcpy(nodenames, buf);\n\t\telse {\n\t\t\tstrcat(nodenames, \",\");\n\t\t\tstrcat(nodenames, buf);\n\t\t}\n\t\tlen += hlen + 2;\n\t}\n\n\tfreeifaddrs(ifp);\n\n#elif defined(WIN32)\n\n\tpIPAddrTable = (MIB_IPADDRTABLE *) malloc(sizeof(MIB_IPADDRTABLE));\n\n\tif (pIPAddrTable) {\n\t\t// Make an initial call to GetIpAddrTable to get the\n\t\t// necessary size into the dwSize variable\n\t\tif (GetIpAddrTable(pIPAddrTable, &dwSize, 0) == ERROR_INSUFFICIENT_BUFFER) {\n\t\t\tfree(pIPAddrTable);\n\t\t\tpIPAddrTable = (MIB_IPADDRTABLE *) malloc(dwSize);\n\t\t}\n\t\tif (pIPAddrTable == NULL) {\n\t\t\tstrncpy(msg_buf, \"Memory allocation failed for GetIpAddrTable\", msg_buf_len);\n\t\t\tfree(nodenames);\n\t\t\treturn NULL;\n\t\t}\n\t}\n\t// Make a second call to GetIpAddrTable to get the\n\t// actual data we want\n\tif ((dwRetVal = GetIpAddrTable(pIPAddrTable, &dwSize, 0)) != NO_ERROR) {\n\t\tstrncpy(msg_buf, \"GetIpAddrTable failed\", msg_buf_len);\n\t\tfree(pIPAddrTable);\n\t\tfree(nodenames);\n\t\treturn NULL;\n\t}\n\n\tfor (i = 0; i < (int) pIPAddrTable->dwNumEntries; i++) {\n\t\tint hlen;\n\t\tIPAddr.S_un.S_addr = (u_long) pIPAddrTable->table[i].dwAddr;\n\t\tsprintf(buf, \"%s\", inet_ntoa(IPAddr));\n\t\thlen = strlen(buf);\n\t\ttmp = realloc(nodenames, len + hlen + 2); /* 2 for comma and null char */\n\t\tif (!tmp) {\n\t\t\tstrncpy(msg_buf, \"Out of memory\", msg_buf_len);\n\t\t\tfree(nodenames);\n\t\t\tnodenames = NULL;\n\t\t\tbreak;\n\t\t}\n\t\tnodenames = tmp;\n\t\tif (len == 0)\n\t\t\tstrcpy(nodenames, buf);\n\t\telse {\n\t\t\tstrcat(nodenames, \",\");\n\t\t\tstrcat(nodenames, buf);\n\t\t}\n\t\tlen += hlen + 2;\n\t}\n\n\tif (pIPAddrTable) {\n\t\tfree(pIPAddrTable);\n\t\tpIPAddrTable = NULL;\n\t}\n\n#endif\n\n\treturn nodenames;\n}\n"
  },
  {
    "path": "src/lib/Libnet/net_client.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include <stdio.h>\n\n#include <unistd.h>\n#include <sys/types.h>\n#include <sys/socket.h>\n#include <netinet/in.h>\n#include <arpa/inet.h>\n#include <errno.h>\n#include <netdb.h>\n#include <string.h>\n#include <unistd.h>\n#include <fcntl.h>\n#include <poll.h>\n#include <netinet/tcp.h>\n#include \"portability.h\"\n#include \"server_limits.h\"\n#include \"libpbs.h\"\n#include \"net_connect.h\"\n#include \"pbs_error.h\"\n#include \"libsec.h\"\n#include \"pbs_internal.h\"\n#include \"auth.h\"\n\n/**\n * @file\tnet_client.c\n */\nstatic int conn_timeout = PBS_DIS_TCP_TIMEOUT_CONNECT; /* timeout for connect */\n\n/**\n * @brief\n * \tengage_authentication - Use the security library interface to\n * \tengage the appropriate connection authentication.\n *\n * @param[in]      sd    socket descriptor to use in CS_* interface\n * @param[in]      addr  network address of the other party\n * @param[in]      port  associated port for the other party\n * @param[in]      authport_flags  authentication flags\n *\n * @return\tint\n * @retval\t 0  successful\n * @retval\t-1 unsuccessful\n *\n * @par\tRemark:\tIf the authentication fails, messages are logged to\n *              the server's log file and the connection's security\n *              information is closed out (freed).\n 
*/\n\nstatic int\nengage_authentication(int sd, struct in_addr addr, int port, int authport_flags)\n{\n\tint ret;\n\tint mode;\n\tchar ebuf[128];\n#if !defined(WIN32)\n\tchar dst[INET_ADDRSTRLEN + 1]; /* for inet_ntop */\n#endif\n\n\tif (sd < 0) {\n\t\tcs_logerr(-1, __func__, \"Bad arguments, unable to authenticate.\");\n\t\treturn (-1);\n\t}\n\n\tmode = (authport_flags & B_SVR) ? CS_MODE_SERVER : CS_MODE_CLIENT;\n\tif (mode == CS_MODE_SERVER) {\n\t\tret = CS_server_auth(sd);\n\t\tif (ret == CS_SUCCESS || ret == CS_AUTH_CHECK_PORT)\n\t\t\treturn (0);\n\t} else if (mode == CS_MODE_CLIENT) {\n\t\tret = CS_client_auth(sd);\n\t\tif (ret == CS_SUCCESS || ret == CS_AUTH_USE_IFF) {\n\t\t\t/*\n\t\t\t\t* For authentication via iff CS_client_auth\n\t\t\t\t* temporarily returning CS_AUTH_USE_IFF until such\n\t\t\t\t* time as iff becomes a part of CS_client_auth\n\t\t\t\t*/\n\t\t\treturn (0);\n\t\t}\n\t}\n\n#if defined(WIN32)\n\t/* inet_ntoa is thread-safe on windows */\n\tsprintf(ebuf,\n\t\t\"Unable to authenticate with (%s:%d)\",\n\t\tinet_ntoa(addr), port);\n#else\n\tsprintf(ebuf,\n\t\t\"Unable to authenticate with (%s:%d)\",\n\t\tinet_ntop(AF_INET, (void *) &addr, dst,\n\t\t\t  INET_ADDRSTRLEN),\n\t\tport);\n#endif\n\tcs_logerr(-1, __func__, ebuf);\n\n\tif ((ret = CS_close_socket(sd)) != CS_SUCCESS) {\n#if defined(WIN32)\n\t\tsprintf(ebuf, \"Problem closing context (%s:%d)\",\n\t\t\tinet_ntoa(addr), port);\n#else\n\t\tsprintf(ebuf,\n\t\t\t\"Problem closing context (%s:%d)\",\n\t\t\tinet_ntop(AF_INET, (void *) &addr, dst,\n\t\t\t\t  INET_ADDRSTRLEN),\n\t\t\tport);\n#endif\n\t\tcs_logerr(-1, __func__, ebuf);\n\t}\n\n\treturn (-1);\n}\n\n/*\n * @brief\n *      client_to_svr calls client_to_svr_extend to perform connection\n *      to server.\n *\n * @param[in]\thostaddr - address of host to which to connect (pbs_net_t)\n * @param[in]\tport - port to which to connect\n * @param[in]\tauthport_flags  - flags or-ed together to describe\n *\t\t\tauthentication mode:\n *\t\t\tBPRIV 
- use reserved local port\n *\t\t\tBSVR  - Server mode if set, client mode if not\n * @returns\tint\n * @retval\t>=0 the socket obtained\n * @retval \t PBS_NET_RC_FATAL (-1) if fatal error, just quit\n * @retval\t PBS_NET_RC_RETRY (-2) if temp error, should retry\n *\n */\n\nint\nclient_to_svr(pbs_net_t hostaddr, unsigned int port, int authport_flags)\n{\n\treturn (client_to_svr_extend(hostaddr, port, authport_flags, NULL));\n}\n\n/**\n * @brief\n *\tclient_to_svr_extend - connect to a server\n *\tPerform socket/tcp/ip stuff to connect to a server.\n *\n * @par Functionality\n *\tOpen a tcp connection to the specified address and port.\n *\tBinds to a local socket, sets socket initially non-blocking,\n *\tconnects to remote system, resets sock blocking.\n *\n *\tNote, the server's host address and port were chosen as parameters\n *\trather than their names to possibly save extra look-ups.  It seems\n *\tlikely that the caller \"might\" make several calls to the same host or\n *\tdifferent hosts with the same port.  Let the caller keep the addresses\n *\taround rather than look them up each time.\n *\n *\tSpecial note: The reserved port mechanism is not needed when the\n *               PBS authentication mechanism is not pbs_iff.  Being\n *               left in for minimal code change.  
It should be removed\n *               in a future version.\n *\n * @param[in]\thostaddr - address of host to which to connect (pbs_net_t)\n * @param[in]\tport - port to which to connect\n * @param[in]\tauthport_flags  - flags or-ed together to describe\n *\t\t\tauthentication mode:\n *\t\t\tBPRIV - use reserved local port\n *\t\t\tBSVR  - Server mode if set, client mode if not\n * @param[in]   localaddr - host machine address to bind before connecting\n *                          to server.\n *\n * @returns\tint\n * @retval\t>=0 the socket obtained\n * @retval \t PBS_NET_RC_FATAL (-1) if fatal error, just quit\n * @retval\t PBS_NET_RC_RETRY (-2) if temp error, should retry\n */\n\nint\nclient_to_svr_extend(pbs_net_t hostaddr, unsigned int port, int authport_flags, char *localaddr)\n{\n\tstruct sockaddr_in remote;\n\tint sock;\n\tint local_port;\n\tint errn;\n\tint rc;\n#ifdef WIN32\n\tint ret;\n\tint non_block = 1;\n\tstruct linger li;\n\tstruct timeval tv;\n\tfd_set writeset;\n#else\n\tstruct pollfd fds[1];\n\tpbs_socklen_t len = sizeof(rc);\n\tint oflag;\n#endif\n\n\t/*\tIf local privilege port requested, bind to one\t*/\n\t/*\tMust be root privileged to do this\t\t*/\n\tlocal_port = authport_flags & B_RESERVED;\n\n\tif (local_port) {\n#ifdef IP_PORTRANGE_LOW\n\t\tint lport = IPPORT_RESERVED - 1;\n\n\t\tsock = rresvport(&lport);\n\t\tif (sock < 0) {\n\t\t\tif (errno == EAGAIN)\n\t\t\t\treturn PBS_NET_RC_RETRY;\n\t\t\telse\n\t\t\t\treturn PBS_NET_RC_FATAL;\n\t\t}\n#else /* IP_PORTRANGE_LOW */\n\t\tstruct sockaddr_in local;\n\t\tunsigned short tryport;\n\t\tstatic unsigned short start_port = 0;\n\n\t\tsock = socket(AF_INET, SOCK_STREAM, 0);\n\t\tif (sock < 0) {\n\t\t\treturn PBS_NET_RC_FATAL;\n\t\t}\n\n\t\tif (start_port == 0) { /* arbitrary start point */\n\t\t\tstart_port = (getpid() % (IPPORT_RESERVED / 2)) +\n\t\t\t\t     IPPORT_RESERVED / 2;\n\t\t} else if (--start_port < IPPORT_RESERVED / 2)\n\t\t\tstart_port = IPPORT_RESERVED - 1;\n\t\ttryport = 
start_port;\n\n\t\tmemset(&local, 0, sizeof(local));\n\t\tlocal.sin_family = AF_INET;\n\t\tif (localaddr != NULL) {\n\t\t\tlocal.sin_addr.s_addr = inet_addr(localaddr);\n\t\t\tif (local.sin_addr.s_addr == INADDR_NONE) {\n\t\t\t\tperror(\"inet_addr failed\");\n\t\t\t\treturn (PBS_NET_RC_FATAL);\n\t\t\t}\n\t\t} else if (pbs_conf.pbs_public_host_name) {\n\t\t\tpbs_net_t public_addr;\n\t\t\tpublic_addr = get_hostaddr(pbs_conf.pbs_public_host_name);\n\t\t\tif (public_addr == (pbs_net_t) 0) {\n\t\t\t\treturn (PBS_NET_RC_FATAL);\n\t\t\t}\n\t\t\tlocal.sin_addr.s_addr = htonl(public_addr);\n\t\t}\n\t\tfor (;;) {\n\n\t\t\tlocal.sin_port = htons(tryport);\n\t\t\tif (bind(sock, (struct sockaddr *) &local,\n\t\t\t\t sizeof(local)) == 0)\n\t\t\t\tbreak;\n#ifdef WIN32\n\t\t\terrno = WSAGetLastError();\n\t\t\tif (errno != EADDRINUSE && errno != EADDRNOTAVAIL && errno != WSAEACCES) {\n#else\n\t\t\tif (errno != EADDRINUSE && errno != EADDRNOTAVAIL) {\n#endif\n\t\t\t\tclosesocket(sock);\n\t\t\t\treturn PBS_NET_RC_FATAL;\n\t\t\t} else if (--tryport < (IPPORT_RESERVED / 2)) {\n\t\t\t\ttryport = IPPORT_RESERVED - 1;\n\t\t\t}\n\t\t\tif (tryport == start_port) {\n\t\t\t\tclosesocket(sock);\n\t\t\t\treturn PBS_NET_RC_RETRY;\n\t\t\t}\n\t\t}\n\t\t/*\n\t\t ** Ensure last tryport becomes start port on next call.\n\t\t */\n\t\tstart_port = tryport;\n#endif /* IP_PORTRANGE_LOW */\n\t} else {\n\t\tsock = socket(AF_INET, SOCK_STREAM, 0);\n\t\tif (sock < 0) {\n\t\t\treturn PBS_NET_RC_FATAL;\n\t\t}\n\t}\n\n\tremote.sin_addr.s_addr = htonl(hostaddr);\n\n\tremote.sin_port = htons((unsigned short) port);\n\tremote.sin_family = AF_INET;\n#ifdef WIN32\n\tli.l_onoff = 1;\n\tli.l_linger = 5;\n\n\tsetsockopt(sock, SOL_SOCKET, SO_LINGER, (char *) &li, sizeof(li));\n\n\tif (ioctlsocket(sock, FIONBIO, &non_block) == SOCKET_ERROR) {\n\t\terrno = WSAGetLastError();\n#else\n\toflag = fcntl(sock, F_GETFL);\n\tif (fcntl(sock, F_SETFL, (oflag | O_NONBLOCK)) == -1) {\n#endif\n\t\tclosesocket(sock);\n\t\treturn 
(PBS_NET_RC_FATAL);\n\t}\n\n\tif (connect(sock, (struct sockaddr *) &remote, sizeof(remote)) < 0) {\n\n#ifdef WIN32\n\t\terrno = WSAGetLastError();\n#endif\n\t\t/*\n\t\t * Because of threading, pbs_errno is actually a macro\n\t\t * pointing to a variable within a thread context.  On certain\n\t\t * platforms, the threading library resulted in errno being\n\t\t * cleared after pbs_errno was set from it, so save\n\t\t * errno into a local variable first, then test it.\n\t\t */\n\t\terrn = errno;\n\t\tpbs_errno = errn;\n\t\tswitch (errn) {\n#ifdef WIN32\n\t\t\tcase WSAEINTR:\n#else\n\t\t\tcase EINTR:\n#endif\n\t\t\tcase EADDRINUSE:\n\t\t\tcase ETIMEDOUT:\n\t\t\tcase ECONNREFUSED:\n\t\t\t\tclosesocket(sock);\n\t\t\t\treturn (PBS_NET_RC_RETRY);\n\n#ifdef WIN32\n\t\t\tcase WSAEWOULDBLOCK:\n\t\t\t\tFD_ZERO(&writeset);\n\t\t\t\tFD_SET((unsigned int) sock, &writeset);\n\t\t\t\ttv.tv_sec = conn_timeout; /* connect timeout */\n\t\t\t\ttv.tv_usec = 0;\n\t\t\t\tret = select(1, NULL, &writeset, NULL, &tv);\n\t\t\t\tif (ret == SOCKET_ERROR) {\n\t\t\t\t\terrno = WSAGetLastError();\n\t\t\t\t\terrn = errno;\n\t\t\t\t\tpbs_errno = errn;\n\t\t\t\t\tclosesocket(sock);\n\t\t\t\t\treturn PBS_NET_RC_FATAL;\n\t\t\t\t} else if (ret == 0) {\n\t\t\t\t\tclosesocket(sock);\n\t\t\t\t\treturn PBS_NET_RC_RETRY;\n\t\t\t\t}\n\t\t\t\tbreak;\n#else  /* UNIX */\n\t\t\tcase EWOULDBLOCK:\n\t\t\tcase EINPROGRESS:\n\t\t\t\twhile (1) {\n\t\t\t\t\tfds[0].fd = sock;\n\t\t\t\t\tfds[0].events = POLLOUT;\n\t\t\t\t\tfds[0].revents = 0;\n\n\t\t\t\t\trc = poll(fds, (nfds_t) 1, conn_timeout * 1000);\n\t\t\t\t\tif (rc == -1) {\n\t\t\t\t\t\terrn = errno;\n\t\t\t\t\t\tif ((errn != EAGAIN) && (errn != EINTR))\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t} else\n\t\t\t\t\t\tbreak; /* no error */\n\t\t\t\t}\n\n\t\t\t\tif (rc == 1) {\n\t\t\t\t\t/* socket may be connected and ready to write */\n\t\t\t\t\trc = 0;\n\t\t\t\t\tif ((getsockopt(sock, SOL_SOCKET, SO_ERROR, &rc, &len) == -1) || (rc != 0)) 
{\n\t\t\t\t\t\tclosesocket(sock);\n\t\t\t\t\t\treturn PBS_NET_RC_FATAL;\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\n\t\t\t\t} else if (rc == 0) {\n\t\t\t\t\t/* socket not ready - not connected in time */\n\t\t\t\t\tclosesocket(sock);\n\t\t\t\t\treturn PBS_NET_RC_RETRY;\n\t\t\t\t} else {\n\t\t\t\t\t/* socket not ready - error */\n\t\t\t\t\tclosesocket(sock);\n\t\t\t\t\treturn PBS_NET_RC_FATAL;\n\t\t\t\t}\n#endif /* end UNIX */\n\n\t\t\tdefault:\n\t\t\t\tclosesocket(sock);\n\t\t\t\treturn (PBS_NET_RC_FATAL);\n\t\t}\n\t}\n\n\t/* reset socket to blocking */\n#ifdef WIN32\n\tnon_block = 0;\n\tif (ioctlsocket(sock, FIONBIO, &non_block) == SOCKET_ERROR) {\n\t\terrno = WSAGetLastError();\n#else /* UNIX */\n\tif (fcntl(sock, F_SETFL, oflag) == -1) {\n#endif\n\t\tclosesocket(sock);\n\t\treturn PBS_NET_RC_FATAL;\n\t}\n\n\tif (engage_authentication(sock,\n\t\t\t\t  remote.sin_addr, port, authport_flags) != -1)\n\t\treturn sock;\n\n\t/*authentication unsuccessful*/\n\n\tclosesocket(sock);\n\treturn (PBS_NET_RC_FATAL);\n}\n\n/**\n * @brief\n *      This function sets socket options to TCP_NODELAY\n * @param fd\n * @return 0 for SUCCESS\n *        -1 for FAILURE\n */\nint\nset_nodelay(int fd)\n{\n\tint opt;\n\tpbs_socklen_t optlen;\n\n\toptlen = sizeof(opt);\n\tif (getsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &opt, &optlen) == -1)\n\t\treturn 0;\n\n\tif (opt == 1)\n\t\treturn 0;\n\n\topt = 1;\n\treturn setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &opt, sizeof(opt));\n}\n"
  },
  {
    "path": "src/lib/Libnet/net_server.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <unistd.h>\n#include <netdb.h>\n#include <string.h>\n#include <ctype.h>\n#include <errno.h>\n#include <signal.h>\n#include <stdio.h>\n#include <assert.h>\n#include <sys/types.h>\n#include <sys/time.h>\n#include <sys/socket.h>\n#include <netinet/in.h>\n#include <arpa/inet.h>\n#include <unistd.h>\n#include <fcntl.h>\n#include <time.h>\n#include <stdlib.h>\n#include <poll.h>\n#include <sys/resource.h>\n\n#include \"portability.h\"\n#include \"server_limits.h\"\n#include \"pbs_ifl.h\"\n#include \"net_connect.h\"\n#include \"log.h\"\n#include \"libsec.h\"\n#include \"pbs_error.h\"\n#include \"pbs_internal.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"job.h\"\n#include \"svrfunc.h\"\n#include \"tpp.h\"\n\n/**\n * @file\tnet_server.c\n */\n/* Global Data (I wish I could make it private to the library, sigh, but\n * C don't support that scope of control.)\n *\n * This array of connection structures is used by the server to maintain\n * a record of the open I/O connections, it is indexed by the socket number.\n */\n\nstatic conn_t **svr_conn;\t  /* list of pointers to connections indexed by the socket fd. 
List is dynamically allocated */\n#define CONNS_ARRAY_INCREMENT 100 /* Increases this many more connection pointers when dynamically allocating memory for svr_conn */\nstatic int conns_array_size = 0;  /* Size of the svr_conn list, initialized to 0 */\npbs_list_head svr_allconns;\t  /* head of the linked list of active connections */\n\n/*\n * The following data is private to this set of network interface routines.\n */\nint max_connection = -1;\nstatic int num_connections = 0;\nstatic int net_is_initialized = 0;\nstatic void *poll_context; /* This is the context of the descriptors being polled */\nvoid *priority_context;\nstatic int init_poll_context(); /* Initialize the tpp context */\nstatic void (*read_func[2])(int);\nstatic int (*ready_read_func[2])(conn_t *);\nstatic char logbuf[256];\n\n/* Private function within this file */\nstatic int conn_find_usable_index(int);\nstatic int conn_find_actual_index(int);\nstatic void accept_conn(int);\nstatic void cleanup_conn(int);\n\n/**\n * @brief\n * \tMakes the socket fd as index in the connection array usable and returns\n *  the socket fd.\n *\n * @par Functionality\n * \tChecks if the socket fd can be indexed into the connection array\n * \tIf it is out of bounds, allocates enough slots for the connection array\n * \tand returns the index (the socket fd itself)\n *\n * @param[in] sd - The socket fd for the connection\n *\n * @return Error code\n * @retval 0 - Success\n * @retval -1 - Failure\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n */\nstatic int\nconn_find_usable_index(int sd)\n{\n\tvoid *p;\n\tunsigned int new_conns_array_size = 0;\n\n\tif (sd < 0)\n\t\treturn -1;\n\n\tif (sd >= conns_array_size) {\n\t\tnew_conns_array_size = sd + CONNS_ARRAY_INCREMENT;\n\t\tp = realloc(svr_conn, new_conns_array_size * sizeof(conn_t *));\n\t\tif (!p)\n\t\t\treturn -1;\n\n\t\tsvr_conn = (conn_t **) (p);\n\t\tmemset((svr_conn + conns_array_size), 0,\n\t\t       (new_conns_array_size - conns_array_size) * 
sizeof(conn_t *));\n\t\tconns_array_size = new_conns_array_size;\n\t}\n\treturn sd;\n}\n\n/**\n * @brief\n *\tReturns the index of the connection for the socket fd provided\n *\n * @par Functionality\n *\tChecks if the socket fd is valid and connection is available and\n *\treturns the index to the connection in the array. The index is the\n *\tsocket identifier itself.\n *\n * @param[in] sd - The socket fd for the connection\n *\n * @return Error code\n * @retval  0 - Success\n * @retval -1 - Failure\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic int\nconn_find_actual_index(int sd)\n{\n\tif (sd >= 0 && sd < conns_array_size) {\n\t\tif (svr_conn[sd])\n\t\t\treturn sd;\n\t}\n\treturn -1;\n}\n\n/**\n * @brief\n *\tGiven a socket fd, this function provides the handle to the connection\n *\n * @par Functionality\n *\tChecks if the socket fd has a valid connection and if present returns\n *\tpointer to the connection structure\n *\n * @param[in] sock - The socket fd for the connection\n *\n * @return Error code\n * @retval conn_t * - Success\n * @retval NULL - Failure\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nconn_t *\nget_conn(int sd)\n{\n\tint idx = conn_find_actual_index(sd);\n\tif (idx < 0)\n\t\treturn NULL;\n\n\treturn svr_conn[idx];\n}\n\n/**\n * @brief\n *\tinitialize the connection.\n *\n */\nvoid\nconnection_init(void)\n{\n\tconn_t *cp = NULL;\n\n\tif (!svr_allconns.ll_next) {\n\t\tCLEAR_HEAD(svr_allconns);\n\t\treturn;\n\t}\n\n\tcp = (conn_t *) GET_NEXT(svr_allconns);\n\twhile (cp) {\n\t\tint sock = cp->cn_sock;\n\t\tcp = GET_NEXT(cp->cn_link);\n\t\tclose_conn(sock);\n\t}\n\tCLEAR_HEAD(svr_allconns);\n}\n\n/**\n * @brief\n * \tinit_network - initialize the network interface\n *\n * @par\tFunctionality:\n *    \tNormal call, port > 0\n *\tallocate a socket and bind it to the service port,\n *\tadd the socket to the readset/pollfds for select()/poll(),\n *\tadd the socket to the connection structure 
and set the\n *\tprocessing function to accept_conn()\n *    \tSpecial call, port == 0\n *\tOnly initialize the connection table and the poll pollfds or select readset.\n *\n * @param[in] port - port number\n *\n * @return\tint\n * @retval\t>=0\tthe bound socket (0 for the special port == 0 call)\n * @retval\t-1\terror\n */\nint\ninit_network(unsigned int port)\n{\n\tint i;\n\tsize_t j;\n\tint sd;\n#ifdef WIN32\n\tstruct linger li;\n#endif\n\tstruct sockaddr_in socname;\n\n\tif (port == 0)\n\t\treturn 0; /* that's all for the special init-only call */\n\n\tsd = socket(AF_INET, SOCK_STREAM, 0);\n\tif (sd < 0) {\n#ifdef WIN32\n\t\terrno = WSAGetLastError();\n#endif\n\t\tlog_err(errno, __func__, \"socket() failed\");\n\t\treturn (-1);\n\t}\n\n\ti = 1;\n\tsetsockopt(sd, SOL_SOCKET, SO_REUSEADDR, (char *) &i, sizeof(i));\n\n#ifdef WIN32\n\tli.l_onoff = 1;\n\tli.l_linger = 5;\n\tsetsockopt(sd, SOL_SOCKET, SO_LINGER, (char *) &li, sizeof(li));\n#endif\n\n\t/* name that socket \"in three notes\" */\n\n\tj = sizeof(socname);\n\tmemset((void *) &socname, 0, j);\n\tsocname.sin_port = htons((unsigned short) port);\n\tsocname.sin_addr.s_addr = INADDR_ANY;\n\tsocname.sin_family = AF_INET;\n\tif (bind(sd, (struct sockaddr *) &socname, sizeof(socname)) < 0) {\n#ifdef WIN32\n\t\terrno = WSAGetLastError();\n#endif\n\t\tclosesocket(sd);\n\t\tlog_err(errno, __func__, \"bind failed\");\n\t\treturn (-1);\n\t}\n\treturn sd;\n}\n\n/**\n * @brief\n * \tinit_network_add - initialize the network interface\n * \tand save the routine which should do the reading on connections.\n *\n * @param[in] sd - socket descriptor\n * @param[in] readyreadfunc - routine which checks if a connection is ready to be read\n * @param[in] readfunc - routine which should do the reading on connections\n *\n * @return\tError code\n * @retval\t 0\tsuccess\n * @retval\t-1\terror\n */\nint\ninit_network_add(int sd, int (*readyreadfunc)(conn_t *), void (*readfunc)(int))\n{\n\tstatic int initialized = 0;\n\tenum conn_type type;\n\n\tif (initialized == 0) 
{\n\t\tconnection_init();\n\t\tif (init_poll_context() < 0)\n\t\t\treturn (-1);\n\t\ttype = Primary;\n\t} else if (initialized == 1)\n\t\ttype = Secondary;\n\telse\n\t\treturn (-1); /* too many main connections */\n\n\tnet_is_initialized = 1; /* flag that net stuff is initialized */\n\n\tif (sd == -1)\n\t\treturn -1;\n\n\tready_read_func[initialized] = readyreadfunc;\n\n\t/* for normal calls ...\t\t\t\t\t\t*/\n\t/* save the routine which should do the reading on connections\t*/\n\t/* accepted from the parent socket\t\t\t\t*/\n\tread_func[initialized++] = readfunc;\n\n\t/* record socket in connection structure and select set\n\t *\n\t * remark: passing 0 as port value causes entry's member\n\t *         cn_authen to have bit PBS_NET_CONN_PRIVIL set\n\t */\n\tif (add_conn(sd, type, (pbs_net_t) 0, 0, NULL, accept_conn) == NULL) {\n#ifdef WIN32\n\t\terrno = WSAGetLastError();\n#endif\n\t\tclosesocket(sd);\n\t\tlog_err(errno, __func__, \"add_conn failed\");\n\t\treturn -1;\n\t}\n\n\t/* start listening for connections */\n\tif (listen(sd, 256) < 0) {\n#ifdef WIN32\n\t\terrno = WSAGetLastError();\n#endif\n\t\tlog_err(errno, __func__, \"listen failed\");\n\t\tclosesocket(sd);\n\t\treturn (-1);\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tchecks for any connection timeout.\n *\n */\nvoid\nconnection_idlecheck(void)\n{\n\tstatic time_t last_checked = (time_t) 0;\n\ttime_t now;\n\tconn_t *next_cp = (conn_t *) GET_NEXT(svr_allconns);\n\n\tnow = time(NULL);\n\tif (now - last_checked < 60)\n\t\treturn;\n\n\t/* have any connections timed out ?? 
*/\n\twhile (next_cp) {\n\t\tu_long ipaddr;\n\t\tconn_t *cp = next_cp;\n\t\tnext_cp = GET_NEXT(cp->cn_link);\n\n\t\tif (cp->cn_active != FromClientDIS)\n\t\t\tcontinue;\n\t\tif ((now - cp->cn_lasttime) <= PBS_NET_MAXCONNECTIDLE)\n\t\t\tcontinue;\n\t\tif (cp->cn_authen & PBS_NET_CONN_NOTIMEOUT)\n\t\t\tcontinue; /* do not time-out this connection */\n\n\t\tipaddr = cp->cn_addr;\n\t\tsnprintf(logbuf, sizeof(logbuf),\n\t\t\t \"timeout connection from %lu.%lu.%lu.%lu\",\n\t\t\t (ipaddr & 0xff000000) >> 24, (ipaddr & 0x00ff0000) >> 16,\n\t\t\t (ipaddr & 0x0000ff00) >> 8, (ipaddr & 0x000000ff));\n\t\tlog_err(0, __func__, logbuf);\n\t\tclose_conn(cp->cn_sock);\n\t}\n\tlast_checked = now;\n}\n\n/**\n * @brief\n *\tengage_authentication - Use the security library interface to\n * \tengage the appropriate connection authentication.\n *\n * @param[in] pconn  pointer to a \"conn_t\" variable\n *\n * @return Error code\n * @retval\t 0  successful\n * @retval\t-1 unsuccessful\n *\n * @par Remark:\n *\tIf the authentication fails, messages are logged to\n *\tthe server's log file and the connection's security\n *\tinformation is closed out (freed).\n */\nstatic int\nengage_authentication(conn_t *pconn)\n{\n\tint ret;\n\tint sd;\n\tchar ebuf[PBS_MAXHOSTNAME + 1] = {'\\0'};\n\tchar *msgbuf;\n\n\tif (pconn == NULL || (sd = pconn->cn_sock) < 0) {\n\t\tlog_err(-1, __func__, \"bad arguments, unable to authenticate\");\n\t\treturn (-1);\n\t}\n\n\tif ((ret = CS_server_auth(sd)) == CS_SUCCESS) {\n\t\tpconn->cn_authen |= PBS_NET_CONN_AUTHENTICATED;\n\t\treturn (0);\n\t}\n\n\tif (ret == CS_AUTH_CHECK_PORT) {\n\t\t/* dealing with STD security's \"equivalent of\" CS_server_auth */\n\t\tif (pconn->cn_authen & PBS_NET_CONN_FROM_PRIVIL)\n\t\t\tpconn->cn_authen |= PBS_NET_CONN_AUTHENTICATED;\n\t\treturn (0);\n\t}\n\n\t(void) get_connecthost(sd, ebuf, sizeof(ebuf));\n\n\tpbs_asprintf(&msgbuf,\n\t\t     \"unable to authenticate connection from (%s:%d)\",\n\t\t     ebuf, 
pconn->cn_port);\n\tlog_err(-1, __func__, msgbuf);\n\tfree(msgbuf);\n\n\treturn (-1);\n}\n\n/**\n * @brief\n * process_socket  -  This static function processes the given socket and\n *                    engages the appropriate connection authentication.\n *\n * @param[in]   sock \t- socket fd to process\n *\n * @return Error code\n * @retval\t-1 for failure\n * @retval\t0  for success\n *\n */\nstatic int\nprocess_socket(int sock)\n{\n\tint idx = conn_find_actual_index(sock);\n\tif (idx < 0) {\n\t\treturn -1;\n\t}\n\tsvr_conn[idx]->cn_lasttime = time(NULL);\n\tif ((svr_conn[idx]->cn_active != Primary) &&\n\t    (svr_conn[idx]->cn_active != TppComm) &&\n\t    (svr_conn[idx]->cn_active != Secondary)) {\n\t\tif (!(svr_conn[idx]->cn_authen & PBS_NET_CONN_AUTHENTICATED)) {\n\t\t\tif (engage_authentication(svr_conn[idx]) == -1) {\n\t\t\t\tclose_conn(sock);\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t}\n\t}\n\n\tif (svr_conn[idx]->cn_ready_func != NULL) {\n\t\tint ret = 0;\n\t\tret = svr_conn[idx]->cn_ready_func(svr_conn[idx]);\n\t\tif (ret == -1) {\n\t\t\tclose_conn(sock);\n\t\t\treturn -1;\n\t\t} else if (ret == 0) {\n\t\t\t/* no data for cn_func */\n\t\t\treturn 0;\n\t\t}\n\t\t/* EOF will be handled in cn_func */\n\t}\n\tsvr_conn[idx]->cn_func(svr_conn[idx]->cn_sock);\n\treturn 0;\n}\n\n/**\n * @brief\n *\tWaits for events on a set of sockets and calls processing function\n *\tcorresponding to the socket fd.\n *\n * @par Functionality\n * wait_request - wait for a request (socket with data to read)\n *\tThis routine does a tpp_em_wait - which internally does poll()/epoll()/select()\n *\tbased on the platform on the socket fds.\n *\tIt loops through the socket fds which has events on them and the processing\n *\troutine associated with the socket is invoked.\n *\n * @param[in] waittime - Timeout for tpp_em_wait (poll)\n * @param[in] priority_context - context consists of high priority socket connections\n *\n * @return Error code\n * @retval 0 - Success\n * @retval -1 - Failure\n *\n * @par Side Effects:\n 
*\tNone\n *\n * @par MT-safe: No\n *\n */\nint\nwait_request(float waittime, void *priority_context)\n{\n\tint nfds;\n\tint pnfds;\n\tint i;\n\tem_event_t *events;\n\tint err;\n\tint prio_sock_processed;\n\tint em_fd;\n\tint em_pfd;\n\tint timeout = (int) (waittime * 1000); /* milli seconds */\n\t\t\t\t\t       /* Platform specific declarations */\n\n#ifndef WIN32\n\tsigset_t pendingsigs;\n\tsigset_t emptyset;\n\textern sigset_t allsigs;\n\n\t/* wait after unblocking signals in an atomic call */\n\tsigemptyset(&emptyset);\n\tnfds = tpp_em_pwait(poll_context, &events, timeout, &emptyset);\n\terr = errno;\n#else\n\terrno = 0;\n\tnfds = tpp_em_wait(poll_context, &events, timeout);\n\terr = errno;\n#endif /* WIN32 */\n\tif (nfds < 0) {\n\t\tif (!(err == EINTR || err == EAGAIN || err == 0)) {\n\t\t\tsnprintf(logbuf, sizeof(logbuf), \" tpp_em_wait() error, errno=%d\", err);\n\t\t\tlog_err(err, __func__, logbuf);\n\t\t\treturn (-1);\n\t\t}\n\t} else {\n\t\tprio_sock_processed = 0;\n\t\tif (priority_context) {\n\t\t\tem_event_t *pevents;\n\t\t\ttimeout = 0;\n#ifndef WIN32\n\t\t\t/* wait after unblocking signals in an atomic call */\n\t\t\tsigemptyset(&emptyset);\n\t\t\tpnfds = tpp_em_pwait(priority_context, &pevents, timeout, &emptyset);\n\t\t\terr = errno;\n#else\n\t\t\tpnfds = tpp_em_wait(priority_context, &pevents, timeout);\n\t\t\terr = errno;\n#endif /* WIN32 */\n\t\t\tfor (i = 0; i < pnfds; i++) {\n\t\t\t\tem_pfd = EM_GET_FD(pevents, i);\n\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t  LOG_DEBUG, __func__, \"processing priority socket\");\n\t\t\t\tif (process_socket(em_pfd) == -1) {\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t\t  LOG_DEBUG, __func__, \"process priority socket failed\");\n\t\t\t\t} else {\n\t\t\t\t\tprio_sock_processed = 1;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tfor (i = 0; i < nfds; i++) {\n\t\t\tem_fd = EM_GET_FD(events, i);\n#ifndef WIN32\n\t\t\t/* If there is any of the following signals pending, allow 
a small window to handle the signal */\n\t\t\tif (sigpending(&pendingsigs) == 0) {\n\t\t\t\tif (sigismember(&pendingsigs, SIGCHLD) || sigismember(&pendingsigs, SIGHUP) || sigismember(&pendingsigs, SIGINT) || sigismember(&pendingsigs, SIGTERM)) {\n\n\t\t\t\t\tif (sigprocmask(SIG_UNBLOCK, &allsigs, NULL) == -1)\n\t\t\t\t\t\tlog_err(errno, __func__, \"sigprocmask(UNBLOCK)\");\n\t\t\t\t\tif (sigprocmask(SIG_BLOCK, &allsigs, NULL) == -1)\n\t\t\t\t\t\tlog_err(errno, __func__, \"sigprocmask(BLOCK)\");\n\n\t\t\t\t\treturn (0);\n\t\t\t\t}\n\t\t\t}\n#endif\n\t\t\tif (prio_sock_processed) {\n\t\t\t\tint idx = conn_find_actual_index(em_fd);\n\t\t\t\tif (idx < 0)\n\t\t\t\t\tcontinue;\n\t\t\t\tif (svr_conn[idx]->cn_prio_flag == 1)\n\t\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tif (process_socket(em_fd) == -1) {\n\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t  LOG_DEBUG, __func__, \"process socket failed\");\n\t\t\t}\n\t\t}\n\t}\n\n#ifndef WIN32\n\tconnection_idlecheck();\n#endif\n\n\treturn (0);\n}\n\n/**\n * @brief\n *\taccept request for new connection\n *\tthis routine is normally associated with the main socket,\n *\trequests for connection on the socket are accepted and\n *\tthe new socket is added to the select set and the connection\n *\tstructure - the processing routine is set to the external\n *\tfunction: process_request(socket).\n *\n * @param[in]   sd - main socket with connection request pending\n *\n * @return void\n *\n */\nstatic void\naccept_conn(int sd)\n{\n\tint newsock;\n\tstruct sockaddr_in from;\n\tpbs_socklen_t fromsize;\n\n\tint idx = conn_find_actual_index(sd);\n\tif (idx == -1)\n\t\treturn;\n\n\t/* update last-time of main socket */\n\n\tsvr_conn[idx]->cn_lasttime = time(NULL);\n\n\tfromsize = sizeof(from);\n\tnewsock = accept(sd, (struct sockaddr *) &from, &fromsize);\n\tif (newsock == -1) {\n#ifdef WIN32\n\t\terrno = WSAGetLastError();\n#endif\n\t\tlog_err(errno, __func__, \"accept 
failed\");\n\t\treturn;\n\t}\n\n\t/*\n\t * Disable Nagle's algorithm on this TCP connection to server.\n\t * Nagle's algorithm is hurting cmd-server communication.\n\t */\n\tif (set_nodelay(newsock) == -1) {\n\t\tlog_err(errno, __func__, \"set_nodelay failed\");\n\t\t(void) close(newsock);\n\t\treturn; /* set_nodelay failed */\n\t}\n\n\t/* add the new socket to the select set and connection structure */\n\n\t(void) add_conn(newsock, FromClientDIS,\n\t\t\t(pbs_net_t) ntohl(from.sin_addr.s_addr),\n\t\t\t(unsigned int) ntohs(from.sin_port),\n\t\t\tready_read_func[(int) svr_conn[idx]->cn_active],\n\t\t\tread_func[(int) svr_conn[idx]->cn_active]);\n}\n\n/**\n * @brief\n *\tadd_conn - add a connection to the svr_conn array.\n *\n * @par Functionality:\n *\tFind an empty slot in the connection table.  This is done by hashing\n *\ton the socket (file descriptor).  On Windows, this is not a small\n *\tinterger.  The socket is then added to the poll/select set.\n *\n * @param[in]\tsd: socket descriptor\n * @param[in]\ttype: (enumb conn_type)\n * @param[in]\taddr: host IP address in host byte order\n * @param[in]\tport: port number in host byte order\n * @param[in]\tfunc: pointer to function to call when data is ready to read\n *\n * @return\tpointer to conn_t\n * @retval\tNULL - failure.\n */\nconn_t *\nadd_conn(int sd, enum conn_type type, pbs_net_t addr, unsigned int port, int (*ready_func)(conn_t *), void (*func)(int))\n{\n\tint idx;\n\tconn_t *conn;\n\n\tidx = conn_find_usable_index(sd);\n\tif (idx == -1)\n\t\treturn NULL;\n\n\tconn = (conn_t *) calloc(1, sizeof(conn_t));\n\tif (!conn) {\n\t\treturn NULL;\n\t}\n\n\tconn->cn_sock = sd;\n\tconn->cn_active = type;\n\tconn->cn_addr = addr;\n\tconn->cn_port = (unsigned short) port;\n\tconn->cn_lasttime = time(NULL);\n\tconn->cn_ready_func = ready_func;\n\tconn->cn_func = func;\n\tconn->cn_oncl = 0;\n\tconn->cn_authen = 0;\n\tconn->cn_authen |= PBS_NET_CONN_PREVENT_IP_SPOOFING;\n\tconn->cn_prio_flag = 
0;\n\tconn->cn_auth_config = NULL;\n\n\tnum_connections++;\n\n\tif (port < IPPORT_RESERVED)\n\t\tconn->cn_authen |= PBS_NET_CONN_FROM_PRIVIL;\n\n\tsvr_conn[idx] = conn;\n\n\t/* Add to list of connections */\n\tCLEAR_LINK(conn->cn_link);\n\tappend_link(&svr_allconns, &conn->cn_link, conn);\n\n\tif (tpp_em_add_fd(poll_context, sd, EM_IN | EM_HUP | EM_ERR) < 0) {\n\t\tint err = errno;\n\t\tsnprintf(logbuf, sizeof(logbuf),\n\t\t\t \"could not add socket %d to the poll list\", sd);\n\t\tlog_err(err, __func__, logbuf);\n\t\tclose_conn(sd);\n\t\treturn NULL;\n\t}\n\n\treturn svr_conn[idx];\n}\n\n/**\n * @brief set given conn as priority connection and add it to priority poll list\n *\n * @param[in]\tconn - pointer to connection structure\n *\n * @return int\n * @retval 0 - failure\n * @retval 1 - success\n */\nint\nset_conn_as_priority(conn_t *conn)\n{\n\tif (!conn || conn->cn_sock < 0)\n\t\treturn 0;\n\n\tif (conn->cn_prio_flag == 1)\n\t\treturn 1;\n\n\tif (tpp_em_add_fd(priority_context, conn->cn_sock, EM_IN | EM_HUP | EM_ERR) < 0) {\n\t\tlog_errf(errno, __func__, \"could not add socket %d to the priority poll list\", conn->cn_sock);\n\t\treturn 0;\n\t}\n\tconn->cn_prio_flag = 1;\n\treturn 1;\n}\n\n/**\n * @brief\n *\tadd_conn_data - add some data to a connection\n *\n * @par Functionality:\n *\tThis function identifies the connection based on the socket descriptor\n *\tprovided and sets its cn_data value\n *\n * @param[in]\tsd: socket descriptor\n * @param[in]\tdata: void pointer to the data\n *\n * @return Error code\n * @retval  0 if the connection was found and cn_data was set\n * @retval -1 if no connection exists for the socket descriptor\n */\nint\nadd_conn_data(int sd, void *data)\n{\n\tint idx = conn_find_actual_index(sd);\n\tif (idx < 0) {\n\t\treturn -1;\n\t}\n\n\tsvr_conn[idx]->cn_data = data;\n\treturn 0;\n}\n\n/**\n * @brief\n *\tget_conn_data - get cn_data from the connection\n *\n * @par Functionality:\n *\tThis 
function identifies the connection based on the socket descriptor\n *\tprovided and returns its cn_data value\n *\n * @param[in]\tsd: socket descriptor\n *\n * @return pointer to the connection related data\n * @retval NULL - if sd not found\n *\n */\nvoid *\nget_conn_data(int sd)\n{\n\tint idx = conn_find_actual_index(sd);\n\tif (idx < 0) {\n\t\tsnprintf(logbuf, sizeof(logbuf), \"could not find index for the socket %d\", sd);\n\t\tlog_err(-1, __func__, logbuf);\n\t\treturn NULL;\n\t}\n\n\treturn svr_conn[idx]->cn_data;\n}\n\n/**\n * @brief\n *\tclose_conn - close a connection in the svr_conn array.\n *\n * @par Functionality:\n *\tValidate the socket (file descriptor).  For Unix/Linux it is a small\n *\tinteger less than the max number of connections.  For Windows, it is\n *\ta valid socket value (not equal to INVALID_SOCKET).\n *\tThe entry in the table corresponding to the socket is found.\n *\tIf the entry is for a network socket (not a pipe), it is closed via\n *\tCS_close_socket() which typically just does a close; for Windows,\n *\tclosesocket() is used.\n *\tFor a pipe (not a network socket), plain close() is called.\n *\tIf there is a function to be called on close (the cn_oncl table entry),\n *\tthat function is called.\n *\tThe table entry is cleared and marked \"Idle\" meaning it is free for\n *\treuse.\n *\n * @param[in]\tsd: socket or file descriptor\n *\n */\nvoid\nclose_conn(int sd)\n{\n\tint idx;\n\n#ifdef WIN32\n\tif ((sd == INVALID_SOCKET))\n#else\n\tif ((sd < 0))\n#endif\n\t\treturn;\n\n\tidx = conn_find_actual_index(sd);\n\tif (idx == -1)\n\t\treturn;\n\n\tif (svr_conn[idx]->cn_active != ChildPipe) {\n\t\tdis_destroy_chan(sd);\n\t}\n\n\tif (svr_conn[idx]->cn_active != ChildPipe) {\n\t\tif (CS_close_socket(sd) != CS_SUCCESS) {\n\t\t\tchar ebuf[PBS_MAXHOSTNAME + 1] = {'\\0'};\n\t\t\tchar *msgbuf;\n\n\t\t\t(void) get_connecthost(sd, ebuf, sizeof(ebuf));\n\t\t\tpbs_asprintf(&msgbuf,\n\t\t\t\t     \"problem closing security context for %s:%d\",\n\t\t\t\t     ebuf, 
svr_conn[idx]->cn_port);\n\t\t\tlog_err(-1, __func__, msgbuf);\n\t\t\tfree(msgbuf);\n\t\t}\n\n\t\t/* if there is a function to call on close, do it */\n\t\tif (svr_conn[idx]->cn_oncl != 0)\n\t\t\tsvr_conn[idx]->cn_oncl(sd);\n\n\t\tcleanup_conn(idx);\n\t\tnum_connections--;\n\n\t\tclosesocket(sd);\n\t} else {\n\t\t/* if there is a function to call on close, do it */\n\t\tif (svr_conn[idx]->cn_oncl != 0)\n\t\t\tsvr_conn[idx]->cn_oncl(sd);\n\n\t\tcleanup_conn(idx);\n\t\tnum_connections--;\n\t\tclosesocket(sd); /* pipe so use normal close */\n\t}\n}\n\n/**\n * @brief\n *\tcleanup_conn - reset a connection entry in the svr_conn array.\n *\n * @par Functionality:\n * \tGiven an index within the svr_conn array, reset all fields back to\n * \ttheir defaults and clear any select/poll related flags.\n *\n * @param[in]\tidx: index of the svr_conn entry\n *\n */\nstatic void\ncleanup_conn(int idx)\n{\n\tif (tpp_em_del_fd(poll_context, svr_conn[idx]->cn_sock) < 0) {\n\t\tint err = errno;\n\t\tsnprintf(logbuf, sizeof(logbuf),\n\t\t\t \"could not remove socket %d from poll list\", svr_conn[idx]->cn_sock);\n\t\tlog_err(err, __func__, logbuf);\n\t}\n\tif (svr_conn[idx]->cn_prio_flag) {\n\t\tif (tpp_em_del_fd(priority_context, svr_conn[idx]->cn_sock) < 0) {\n\t\t\tint err = errno;\n\t\t\tsnprintf(logbuf, sizeof(logbuf),\n\t\t\t\t \"could not remove socket %d from priority poll list\", svr_conn[idx]->cn_sock);\n\t\t\tlog_err(err, __func__, logbuf);\n\t\t}\n\t}\n\n\t/* Remove connection from the linked list */\n\tdelete_link(&svr_conn[idx]->cn_link);\n\n\tsvr_conn[idx]->cn_physhost[0] = '\\0';\n\tif (svr_conn[idx]->cn_credid) {\n\t\tfree(svr_conn[idx]->cn_credid);\n\t\tsvr_conn[idx]->cn_credid = NULL;\n\t}\n\n\tif (svr_conn[idx]->cn_auth_config) {\n\t\tfree_auth_config(svr_conn[idx]->cn_auth_config);\n\t\tsvr_conn[idx]->cn_auth_config = NULL;\n\t}\n\n\t/* Free the connection memory */\n\tfree(svr_conn[idx]);\n\tsvr_conn[idx] = NULL;\n}\n\n/**\n * @brief\n * \tnet_close - close all 
network connections but the one specified,\n *\tif called with impossible socket number (-1), all will be closed.\n *\tThis function is typically called when a server is closing down and\n *\twhen it is forking a child.\n *\n * @par\tNote:\n *\tWe clear the cn_oncl field in the connection table to prevent any\n *\t\"special on close\" functions from being called.\n *\n * @param[in] but - socket number to leave open\n *\n * @par\tNote:\n *\tfree() the dynamically allocated data\n *\n */\nvoid\nnet_close(int but)\n{\n\tconn_t *cp = NULL;\n\n\tif (net_is_initialized == 0)\n\t\treturn;\n\n\tcp = (conn_t *) GET_NEXT(svr_allconns);\n\twhile (cp) {\n\t\tint sock = cp->cn_sock;\n\t\tcp = GET_NEXT(cp->cn_link);\n\t\tif (sock != but) {\n\t\t\tsvr_conn[sock]->cn_oncl = NULL;\n\t\t\tclose_conn(sock);\n\t\t\tdestroy_connection(sock);\n\t\t}\n\t}\n\n\tif (but == -1) {\n\t\ttpp_em_destroy(poll_context);\n\t\ttpp_em_destroy(priority_context);\n\t\tnet_is_initialized = 0;\n\t}\n}\n\n/**\n * @brief\n * \tget_connectaddr - return address of host connected via the socket\n *\tThis is in host order.\n *\n * @param[in] sd - socket descriptor\n *\n * @return address of host\n * @retval !0\t\tsuccess\n * @retval 0\t\terror\n *\n */\npbs_net_t\nget_connectaddr(int sd)\n{\n\tint idx = conn_find_actual_index(sd);\n\tif (idx == -1)\n\t\treturn (0);\n\n\treturn (svr_conn[idx]->cn_addr);\n}\n\n/**\n * @brief\n * \tget_connecthost - return name of host connected via the socket\n *\n * @param[in] sd - socket descriptor\n * @param[out] namebuf - buffer to hold host name\n * @param[in] size - size of buffer\n *\n * @return Error code\n * @retval\t0\tsuccess\n * @retval -1\terror\n *\n */\nint\nget_connecthost(int sd, char *namebuf, int size)\n{\n\tint i;\n\tstruct hostent *phe;\n\tstruct in_addr addr;\n\tint namesize = 0;\n#if !defined(WIN32)\n\tchar dst[INET_ADDRSTRLEN + 1]; /* for inet_ntop */\n#endif\n\n\tint idx = conn_find_actual_index(sd);\n\tif (idx == -1)\n\t\treturn 
(-1);\n\n\tsize--;\n\taddr.s_addr = htonl(svr_conn[idx]->cn_addr);\n\n\tif ((phe = gethostbyaddr((char *) &addr, sizeof(struct in_addr),\n\t\t\t\t AF_INET)) == NULL) {\n#if defined(WIN32)\n\t\t/* inet_ntoa is thread-safe on windows */\n\t\t(void) strcpy(namebuf, inet_ntoa(addr));\n#else\n\t\t(void) strcpy(namebuf,\n\t\t\t      inet_ntop(AF_INET, (void *) &addr, dst, INET_ADDRSTRLEN));\n#endif\n\t} else {\n\t\tnamesize = strlen(phe->h_name);\n\t\tfor (i = 0; i < size; i++) {\n\t\t\t*(namebuf + i) = tolower((int) *(phe->h_name + i));\n\t\t\tif (*(phe->h_name + i) == '\\0')\n\t\t\t\tbreak;\n\t\t}\n\t\t*(namebuf + size) = '\\0';\n\t}\n\tif (namesize > size)\n\t\treturn (-1);\n\n\treturn (0);\n}\n\n/**\n * @brief\n *\tInitialize maximum connections.\n *\tInit the pollset i.e. socket descriptors to be polled.\n *\n * @par Functionality:\n *\tFor select() in WIN32, max_connection is decided based on the\n *\tFD_SETSIZE (max value select() can handle) but for Unix variants\n *\tthat is decided by getrlimit() or getdtablesize(). 
For poll(),\n *\tallocate memory for pollfds[] and init the table.\n *\n * @par Linkage scope:\n *\tstatic (local)\n *\n * @return Error code\n * @retval\t 0 for success\n * @retval\t-1 for failure\n *\n * @par Reentrancy\n *\tMT-unsafe\n *\n */\nstatic int\ninit_poll_context(void)\n{\n#ifdef WIN32\n\tint sd_dummy;\n\tmax_connection = FD_SETSIZE;\n#else\n\tint idx;\n\tint nfiles;\n\tstruct rlimit rl;\n\n\tidx = getrlimit(RLIMIT_NOFILE, &rl);\n\tif ((idx == 0) && (rl.rlim_cur != RLIM_INFINITY))\n\t\tnfiles = rl.rlim_cur;\n\telse\n\t\tnfiles = getdtablesize();\n\n\tif ((nfiles > 0))\n\t\tmax_connection = nfiles;\n\n#endif\n\tDBPRT((\"#init_poll_context: initializing poll_context for %d\", max_connection))\n\tpoll_context = tpp_em_init(max_connection);\n\tif (poll_context == NULL) {\n\t\tlog_err(errno, __func__, \"could not initialize poll_context\");\n\t\treturn (-1);\n\t}\n\tpriority_context = tpp_em_init(max_connection);\n\tif (priority_context == NULL) {\n\t\tlog_err(errno, __func__, \"could not initialize priority_context\");\n\t\treturn (-1);\n\t}\n#ifdef WIN32\n\t/* set a dummy fd in the read set so that\t*/\n\t/* select() does not return WSAEINVAL \t\t*/\n\tsd_dummy = socket(AF_INET, SOCK_STREAM, 0);\n\tif (sd_dummy < 0) {\n\t\terrno = WSAGetLastError();\n\t\tlog_err(errno, __func__, \"socket() failed\");\n\t\treturn -1;\n\t}\n\tif ((tpp_em_add_fd(poll_context, sd_dummy, EM_IN) == -1)) {\n\t\tint err = errno;\n\t\tsnprintf(logbuf, sizeof(logbuf),\n\t\t\t \"Could not add socket %d to the read set\", sd_dummy);\n\t\tlog_err(err, __func__, logbuf);\n\t\tclosesocket(sd_dummy);\n\t\treturn -1;\n\t}\n\tif ((tpp_em_add_fd(priority_context, sd_dummy, EM_IN) == -1)) {\n\t\tint err = errno;\n\t\tsnprintf(logbuf, sizeof(logbuf),\n\t\t\t \"Could not add socket %d to the read set for priority socket\", sd_dummy);\n\t\tlog_err(err, __func__, logbuf);\n\t\tclosesocket(sd_dummy);\n\t\treturn -1;\n\t}\n#endif /* WIN32 */\n\n\treturn 0;\n}\n"
  },
  {
    "path": "src/lib/Libnet/net_set_clse.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tnet_set_clse.c\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"portability.h\"\n#include <sys/types.h>\n#include \"pbs_ifl.h\"\n#include \"net_connect.h\"\n\n/**\n * @brief\n * \tnet_add_close_func - install a function to be called on close of\n *\tthe network connection\n *\n * @param[in] sd - socket descriptor\n * @param[in] func - callback function indicating type of request\n *\n * @return\tVoid\n *\n */\n\nvoid\nnet_add_close_func(int sd, void (*func)(int))\n{\n\tconn_t *conn = get_conn(sd);\n\n\tif (!conn)\n\t\treturn;\n\n\tconn->cn_oncl = func;\n}\n"
  },
  {
    "path": "src/lib/Libnet/port_forwarding.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/types.h>\n#include <netinet/in.h>\n#include <netinet/tcp.h>\n#include <sys/un.h>\n#include <sys/socket.h>\n#include <netdb.h>\n\n#include <errno.h>\n#include <fcntl.h>\n#include <stdio.h>\n#include <string.h>\n#include <termios.h>\n#include <unistd.h>\n#include <stdlib.h>\n\n#include \"port_forwarding.h\"\n#include \"pbs_ifl.h\"\n#include \"log.h\"\n#include \"libutil.h\"\n#include \"auth.h\"\n#include \"dis.h\"\n\n#define PF_LOGGER(logfunc, msg) \\\n\tif (logfunc != NULL) {  \\\n\t\tlogfunc(msg);   \\\n\t}\n\n/* handy utility to handle forwarding socket connections to another host\n * pass in an initialized pfwdsock struct with sockets to listen on, a function\n * pointer to get a new socket for forwarding, and a hostname and port number to\n * pass to the function pointer, and it will do the rest. The caller probably\n * should fork first since this function is an infinite loop and never returns */\n\n/* __attribute__((noreturn)) - how do I do this portably? 
*/\nint x11_reader_go = 1;\n\nextern int set_nodelay(int fd);\n\n/**\n * @brief\n *      This function provides the port forwarding feature for forwarding the\n *      X data from mom to qsub and from qsub to the X server.\n *\n * @param socks[in] - Input structure which tracks the sockets that are active\n *                    and data read/written by peers.\n * @param connfunc[in] - Function pointer pointing to a function used for\n *                       either connecting the X server (if running in qsub) or\n *                       connecting qsub (if running in mom).\n * @param phost[in] - peer host that needs to be connected.\n * @param pport[in] - peer port number.\n * @param inter_read_sock[in] -  socket descriptor from where mom and qsub\n *                               readers read data.\n * @param readfunc[in] - function pointer pointing to the mom and qsub readers.\n * @param logfunc[in] - Function pointer for log function\n * @param is_qsub_side[in] - Can be one of QSUB_SIDE (1) or EXEC_HOST_SIDE (0)\n * @param auth_method[in] - Authentication method used\n * @param encrypt_method[in] - Encryption method used\n * @param jobid[in] - Job id\n *\n * @return void\n */\nvoid\nport_forwarder(\n\tstruct pfwdsock *socks,\n\tint (*connfunc)(char *, long),\n\tchar *phost,\n\tint pport,\n\tint inter_read_sock,\n\tint (*readfunc)(int),\n\tvoid (*logfunc)(char *),\n\tint is_qsub_side,\n\tchar *auth_method,\n\tchar *encrypt_method,\n\tchar *jobid)\n{\n\tfd_set rfdset, wfdset, efdset;\n\tint rc;\n\tstruct sockaddr_in from;\n\tpbs_socklen_t fromlen;\n\tint n, n2, sock;\n\tfromlen = sizeof(from);\n\tchar err_msg[LOG_BUF_SIZE];\n\tint readfunc_ret;\n\t/*\n\t * Make the sockets in the socks structure non blocking\n\t */\n\tfor (n = 0; n < NUM_SOCKS; n++) {\n\t\tif (!(socks + n)->active || ((socks + n)->sock < 0))\n\t\t\tcontinue;\n\t\tif (set_nonblocking((socks + n)->sock) == -1) {\n\t\t\tclose((socks + n)->sock);\n\t\t\t(socks + n)->active = 
0;\n\t\t\tsnprintf(err_msg, sizeof(err_msg),\n\t\t\t\t \"set_nonblocking failed for socket=%d, errno=%d\",\n\t\t\t\t (socks + n)->sock, errno);\n\t\t\tPF_LOGGER(logfunc, err_msg);\n\t\t\tcontinue;\n\t\t}\n\t\tif (set_nodelay((socks + n)->sock) == -1) {\n\t\t\tsnprintf(err_msg, sizeof(err_msg),\n\t\t\t\t \"set_nodelay failed for socket=%d, errno=%d\",\n\t\t\t\t (socks + n)->sock, errno);\n\t\t\tPF_LOGGER(logfunc, err_msg);\n\t\t}\n\t}\n\n\twhile (x11_reader_go) {\n\t\tint maxsock;\n\n\t\tFD_ZERO(&rfdset);\n\t\tFD_ZERO(&wfdset);\n\t\tFD_ZERO(&efdset);\n\t\tmaxsock = inter_read_sock + 1;\n\t\t/*setting the sock fd in rfdset for qsub and mom readers to read data*/\n\t\tFD_SET(inter_read_sock, &rfdset);\n\t\tFD_SET(inter_read_sock, &efdset);\n\t\tfor (n = 0; n < NUM_SOCKS; n++) {\n\t\t\tif (!(socks + n)->active || ((socks + n)->sock < 0))\n\t\t\t\tcontinue;\n\n\t\t\tif ((socks + n)->listening) {\n\t\t\t\tFD_SET((socks + n)->sock, &rfdset);\n\t\t\t\tmaxsock = (socks + n)->sock > maxsock ? (socks + n)->sock : maxsock;\n\t\t\t} else {\n\t\t\t\tif ((socks + n)->bufavail < PF_BUF_SIZE) {\n\t\t\t\t\tFD_SET((socks + n)->sock, &rfdset);\n\t\t\t\t\tmaxsock = (socks + n)->sock > maxsock ? (socks + n)->sock : maxsock;\n\t\t\t\t}\n\t\t\t\tif ((socks + ((socks + n)->peer))->bufavail -\n\t\t\t\t\t    (socks + ((socks + n)->peer))->bufwritten >\n\t\t\t\t    0) {\n\t\t\t\t\tFD_SET((socks + n)->sock, &wfdset);\n\t\t\t\t\tmaxsock = (socks + n)->sock > maxsock ? 
(socks + n)->sock : maxsock;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tmaxsock++;\n\n\t\trc = select(maxsock, &rfdset, &wfdset, &efdset, NULL);\n\t\tif ((rc == -1) && (errno == EINTR))\n\t\t\tcontinue;\n\t\tif (rc < 0) {\n\t\t\tsnprintf(err_msg, sizeof(err_msg),\n\t\t\t\t \"port forwarding select() error\");\n\t\t\tPF_LOGGER(logfunc, err_msg);\n\t\t\treturn;\n\t\t}\n\t\tif (FD_ISSET(inter_read_sock, &efdset)) {\n\t\t\tsnprintf(err_msg, sizeof(err_msg),\n\t\t\t\t \"exception for socket=%d, errno=%d\",\n\t\t\t\t inter_read_sock, errno);\n\t\t\tPF_LOGGER(logfunc, err_msg);\n\t\t\tclose(inter_read_sock);\n\t\t\treturn;\n\t\t}\n\t\tif (FD_ISSET(inter_read_sock, &rfdset)) {\n\t\t\t/*calling mom/qsub readers*/\n\t\t\treadfunc_ret = readfunc(inter_read_sock);\n\t\t\tif (readfunc_ret == -1) {\n\t\t\t\tsnprintf(err_msg, sizeof(err_msg),\n\t\t\t\t\t \"readfunc failed for socket:%d\", inter_read_sock);\n\t\t\t\tPF_LOGGER(logfunc, err_msg);\n\t\t\t}\n\t\t\tif (readfunc_ret < 0) {\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\n\t\tfor (n = 0; n < NUM_SOCKS; n++) {\n\t\t\tif (!(socks + n)->active || ((socks + n)->sock < 0))\n\t\t\t\tcontinue;\n\t\t\tif (FD_ISSET((socks + n)->sock, &rfdset)) {\n\t\t\t\tif ((socks + n)->listening && (socks + n)->active) {\n\t\t\t\t\tint newsock = 0, peersock = 0;\n\t\t\t\t\tif ((sock = accept((socks + n)->sock, (struct sockaddr *) &from, &fromlen)) < 0) {\n\t\t\t\t\t\tif ((errno == EAGAIN) || (errno == EWOULDBLOCK) || (errno == EINTR) || (errno == ECONNABORTED))\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\tsnprintf(err_msg, sizeof(err_msg),\n\t\t\t\t\t\t\t \"closing the socket %d after accept call failure, errno=%d\",\n\t\t\t\t\t\t\t (socks + n)->sock, errno);\n\t\t\t\t\t\tPF_LOGGER(logfunc, err_msg);\n\t\t\t\t\t\tclose((socks + n)->sock);\n\t\t\t\t\t\t(socks + n)->active = 0;\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\n\t\t\t\t\t/* authenticate execution host socket */\n\t\t\t\t\tif (is_qsub_side == QSUB_SIDE) {\n\t\t\t\t\t\tif (auth_exec_socket(sock, &from, auth_method, 
encrypt_method, jobid) != INTERACTIVE_AUTH_SUCCESS) {\n\t\t\t\t\t\t\tsnprintf(err_msg, sizeof(err_msg),\n\t\t\t\t\t\t\t\t\"Incoming connection from %s on socket %d rejected, authentication data incorrect, errno=%d\",\n\t\t\t\t\t\t\t\tnetaddr((struct sockaddr_in *)&from), sock, errno);\n\t\t\t\t\t\t\tPF_LOGGER(logfunc, err_msg);\n\t\t\t\t\t\t\tshutdown(sock, SHUT_RDWR);\n\t\t\t\t\t\t\tclose(sock);\n\t\t\t\t\t\t\tdis_destroy_chan(sock);\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\t/*\n\t\t\t\t\t * Make the sock non blocking\n\t\t\t\t\t */\n\t\t\t\t\tif (set_nonblocking(sock) == -1) {\n\t\t\t\t\t\tsnprintf(err_msg, sizeof(err_msg),\n\t\t\t\t\t\t\t \"set_nonblocking failed for socket=%d, errno=%d\",\n\t\t\t\t\t\t\t sock, errno);\n\t\t\t\t\t\tPF_LOGGER(logfunc, err_msg);\n\t\t\t\t\t\tclose(sock);\n\t\t\t\t\t\tdis_destroy_chan(sock);\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t\tif (set_nodelay(sock) == -1) {\n\t\t\t\t\t\tsnprintf(err_msg, sizeof(err_msg),\n\t\t\t\t\t\t\t \"set_nodelay failed for socket=%d, errno=%d\",\n\t\t\t\t\t\t\t sock, errno);\n\t\t\t\t\t\tPF_LOGGER(logfunc, err_msg);\n\t\t\t\t\t}\n\n\t\t\t\t\tnewsock = peersock = 0;\n\n\t\t\t\t\tfor (n2 = 0; n2 < NUM_SOCKS; n2++) {\n\t\t\t\t\t\tif ((socks + n2)->active || (((socks + n2)->peer != 0) && (socks + ((socks + n2)->peer))->active))\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\tif (newsock == 0)\n\t\t\t\t\t\t\tnewsock = n2;\n\t\t\t\t\t\telse if (peersock == 0)\n\t\t\t\t\t\t\tpeersock = n2;\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\n\t\t\t\t\t(socks + newsock)->sock = (socks + peersock)->remotesock = sock;\n\t\t\t\t\t(socks + newsock)->listening = (socks + peersock)->listening = 0;\n\t\t\t\t\t(socks + newsock)->active = (socks + peersock)->active = 1;\n\t\t\t\t\t(socks + peersock)->sock = connfunc(phost, pport);\n\n\t\t\t\t\t/* authenticate with qsub side */\n\t\t\t\t\tif (is_qsub_side == EXEC_HOST_SIDE) {\n\t\t\t\t\t\tif (auth_with_qsub((socks + peersock)->sock, pport, phost, 
auth_method, encrypt_method, jobid) != 0) {\n\t\t\t\t\t\t\tsnprintf(err_msg, sizeof(err_msg),\n\t\t\t\t\t\t\t\t\"Authentication for outgoing connection to qsub from port %u on socket %d rejected by remote side, errno=%d\",\n\t\t\t\t\t\t\t\tpport, (socks + peersock)->sock, errno);\n\t\t\t\t\t\t\tPF_LOGGER(logfunc, err_msg);\n\t\t\t\t\t\t\tclose((socks + peersock)->sock);\n\t\t\t\t\t\t\tdis_destroy_chan((socks + peersock)->sock);\n\t\t\t\t\t\t\t(socks + peersock)->active = 0;\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\t/*\n\t\t\t\t\t * Make sockets non-blocking\n\t\t\t\t\t */\n\t\t\t\t\tif (set_nonblocking((socks + peersock)->sock) == -1) {\n\t\t\t\t\t\tsnprintf(err_msg, sizeof(err_msg),\n\t\t\t\t\t\t\t \"set_nonblocking failed for socket=%d, errno=%d\",\n\t\t\t\t\t\t\t (socks + peersock)->sock, errno);\n\t\t\t\t\t\tPF_LOGGER(logfunc, err_msg);\n\t\t\t\t\t\tclose((socks + peersock)->sock);\n\t\t\t\t\t\tdis_destroy_chan((socks + peersock)->sock);\n\t\t\t\t\t\t(socks + peersock)->active = 0;\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t\tif (set_nodelay((socks + peersock)->sock) == -1) {\n\t\t\t\t\t\tsnprintf(err_msg, sizeof(err_msg),\n\t\t\t\t\t\t\t \"set_nodelay failed for socket=%d, errno=%d\",\n\t\t\t\t\t\t\t (socks + peersock)->sock, errno);\n\t\t\t\t\t\tPF_LOGGER(logfunc, err_msg);\n\t\t\t\t\t}\n\t\t\t\t\t(socks + newsock)->bufwritten = (socks + peersock)->bufwritten = 0;\n\t\t\t\t\t(socks + newsock)->bufavail = (socks + peersock)->bufavail = 0;\n\t\t\t\t\t(socks + newsock)->buff[0] = (socks + peersock)->buff[0] = '\\0';\n\t\t\t\t\t(socks + newsock)->peer = peersock;\n\t\t\t\t\t(socks + peersock)->peer = newsock;\n\t\t\t\t} else {\n\t\t\t\t\t/* non-listening socket to be read */\n\t\t\t\t\trc = read(\n\t\t\t\t\t\t(socks + n)->sock,\n\t\t\t\t\t\t(socks + n)->buff + (socks + n)->bufavail,\n\t\t\t\t\t\tPF_BUF_SIZE - (socks + n)->bufavail);\n\t\t\t\t\tif (rc == -1) {\n\t\t\t\t\t\tif ((errno == EWOULDBLOCK) || (errno == EAGAIN) || (errno == EINTR) 
|| (errno == EINPROGRESS)) {\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tshutdown((socks + n)->sock, SHUT_RDWR);\n\t\t\t\t\t\tclose((socks + n)->sock);\n\t\t\t\t\t\tdis_destroy_chan((socks + n)->sock);\n\t\t\t\t\t\t(socks + n)->active = 0;\n\t\t\t\t\t\tsnprintf(err_msg, sizeof(err_msg),\n\t\t\t\t\t\t\t \"closing the socket %d after read failure, errno=%d\",\n\t\t\t\t\t\t\t (socks + n)->sock, errno);\n\t\t\t\t\t\tPF_LOGGER(logfunc, err_msg);\n\t\t\t\t\t} else if (rc == 0) {\n\t\t\t\t\t\tshutdown((socks + n)->sock, SHUT_RDWR);\n\t\t\t\t\t\tclose((socks + n)->sock);\n\t\t\t\t\t\tdis_destroy_chan((socks + n)->sock);\n\t\t\t\t\t\t(socks + n)->active = 0;\n\t\t\t\t\t} else {\n\t\t\t\t\t\t(socks + n)->bufavail += rc;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} /* END if rfdset */\n\t\t\tif (FD_ISSET((socks + n)->sock, &wfdset)) {\n\t\t\t\tint peer = (socks + n)->peer;\n\n\t\t\t\trc = write(\n\t\t\t\t\t(socks + n)->sock,\n\t\t\t\t\t(socks + peer)->buff + (socks + peer)->bufwritten,\n\t\t\t\t\t(socks + peer)->bufavail - (socks + peer)->bufwritten);\n\n\t\t\t\tif (rc == -1) {\n\t\t\t\t\tif ((errno == EWOULDBLOCK) || (errno == EAGAIN) || (errno == EINTR) || (errno == EINPROGRESS)) {\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t\tshutdown((socks + n)->sock, SHUT_RDWR);\n\t\t\t\t\tclose((socks + n)->sock);\n\t\t\t\t\tdis_destroy_chan((socks + n)->sock);\n\t\t\t\t\t(socks + n)->active = 0;\n\t\t\t\t\tsnprintf(err_msg, sizeof(err_msg),\n\t\t\t\t\t\t \"closing the socket %d after write failure, errno=%d\",\n\t\t\t\t\t\t (socks + n)->sock, errno);\n\t\t\t\t\tPF_LOGGER(logfunc, err_msg);\n\t\t\t\t} else if (rc == 0) {\n\t\t\t\t\tshutdown((socks + n)->sock, SHUT_RDWR);\n\t\t\t\t\tclose((socks + n)->sock);\n\t\t\t\t\tdis_destroy_chan((socks + n)->sock);\n\t\t\t\t\t(socks + n)->active = 0;\n\t\t\t\t} else {\n\t\t\t\t\t(socks + peer)->bufwritten += rc;\n\t\t\t\t}\n\t\t\t} /* END if wfdset */\n\t\t\tif (!(socks + n)->listening) {\n\t\t\t\tint peer = (socks + n)->peer;\n\t\t\t\tif ((socks + 
peer)->bufavail == (socks + peer)->bufwritten) {\n\t\t\t\t\t(socks + peer)->bufavail = (socks + peer)->bufwritten = 0;\n\t\t\t\t}\n\t\t\t\tif (!(socks + peer)->active && ((socks + peer)->bufwritten == (socks + peer)->bufavail)) {\n\t\t\t\t\tshutdown((socks + n)->sock, SHUT_RDWR);\n\t\t\t\t\tclose((socks + n)->sock);\n\t\t\t\t\tdis_destroy_chan((socks + n)->sock);\n\t\t\t\t\t(socks + n)->active = 0;\n\t\t\t\t}\n\t\t\t}\n\n\t\t} /* END foreach fd */\n\n\t} /* END while(x11_reader_go) */\n} /* END port_forwarder() */\n\n/**\n * @brief\n *      This function returns a socket to the local X11 unix server.\n *\n * @param[in] dnr   Display number to which it has to connect to.\n *\n * @return\tint\n * @retval\tSocket fd connected to the local X11 unix server.\tsuccess\n * @retval  \t-1  \t\t\t\t\t\t\tFailure\n */\nint\nconnect_local_xsocket(u_int dnr)\n{\n\tint sock;\n\tstruct sockaddr_un addr;\n\n\tif ((sock = socket(AF_UNIX, SOCK_STREAM, 0)) < 0) {\n\t\tfprintf(stderr, \"socket: %.100s\", strerror(errno));\n\t\treturn -1;\n\t}\n\n\tmemset(&addr, 0, sizeof(addr));\n\taddr.sun_family = AF_UNIX;\n\tsnprintf(addr.sun_path, sizeof(addr.sun_path), X_UNIX_PATH, dnr);\n\n\tif (connect(sock, (struct sockaddr *) &addr, sizeof(addr)) == 0)\n\t\treturn sock;\n\n\tclose(sock);\n\tfprintf(stderr, \"connect %.100s: %.100s\", addr.sun_path, strerror(errno));\n\treturn (-1);\n}\n\n/**\n * @brief\n *      This function is called whenever there is a connection accepted by the\n *      port forwarder at qsub side. It will further send the data read by port\n *      forwarder to the x server listening on the display number set in the\n *      environment.\n * @param[in] display - The display number where X server is listening in\n *                      qsub.\n * @param[in] alsounused - This parameter is not used. 
It is there just to\n *                         maintain consistency between function pointers used\n *                         by port_forwarder.\n * @return\tint\n * @retval\tsocket number which is connected to Xserver.\tsuccess\n * @retval \t-1   \t\t\t\t\t\tFailure\n */\nint\nx11_connect_display(\n\tchar *display,\n\tlong alsounused)\n{\n\tint display_number, sock = 0;\n\t/*\n\t * buf will hold the display string consisting of host:screen so\n\t * allow an extra 32 characters for the :screen portion\n\t */\n\tchar *buf;\n\tchar *cp;\n\tstruct addrinfo hints, *ai, *aitop;\n\tchar strport[NI_MAXSERV];\n\tint gaierr;\n\n\t/*\n\t * Now we decode the value of the DISPLAY variable and make a\n\t * connection to the real X server.\n\t */\n\n\t/*\n\t * Check if it is a unix domain socket.  Unix domain displays are in\n\t * one of the following formats: unix:d[.s], :d[.s], ::d[.s]\n\t */\n\tif (strncmp(display, \"unix:\", 5) == 0 ||\n\t    display[0] == ':') {\n\t\t/* Connect to the unix domain socket. */\n\t\tif (sscanf(strrchr(display, ':') + 1, \"%d\", &display_number) != 1) {\n\t\t\tfprintf(stderr, \"Could not parse display number from DISPLAY: %.100s\",\n\t\t\t\tdisplay);\n\t\t\treturn -1;\n\t\t}\n\t\t/* Create a socket. */\n\t\tsock = connect_local_xsocket(display_number);\n\t\tif (sock < 0)\n\t\t\treturn -1;\n\t\t/* OK, we now have a connection to the display. */\n\t\treturn sock;\n\t}\n\n\t/*\n\t * Connect to an inet socket.  The DISPLAY value is supposedly\n\t * hostname:d[.s], where hostname may also be numeric IP address.\n\t */\n\tpbs_asprintf(&buf, \"%s\", display);\n\tcp = strchr(buf, ':');\n\tif (!cp) {\n\t\tfprintf(stderr, \"Could not find ':' in DISPLAY: %.100s\", display);\n\t\tfree(buf);\n\t\treturn -1;\n\t}\n\n\t*cp = 0;\n\t/* buf now contains the host name.  But first we parse the display number. 
*/\n\tif (sscanf(cp + 1, \"%d\", &display_number) != 1) {\n\t\tfprintf(stderr, \"Could not parse display number from DISPLAY: %.100s\",\n\t\t\tdisplay);\n\t\tfree(buf);\n\t\treturn -1;\n\t}\n\n\t/* Look up the host address */\n\tmemset(&hints, 0, sizeof(hints));\n\thints.ai_family = AF_UNSPEC;\n\thints.ai_socktype = SOCK_STREAM;\n\tsnprintf(strport, sizeof(strport), \"%d\", 6000 + display_number);\n\tif ((gaierr = getaddrinfo(buf, strport, &hints, &aitop)) != 0) {\n\t\tfprintf(stderr, \"%.100s: unknown host. (%s)\", buf, gai_strerror(gaierr));\n\t\tfree(buf);\n\t\treturn -1;\n\t}\n\n\tfor (ai = aitop; ai; ai = ai->ai_next) {\n\t\t/* Create a socket. */\n\t\tsock = socket(ai->ai_family, SOCK_STREAM, 0);\n\t\tif (sock < 0) {\n\t\t\tfprintf(stderr, \"socket: %.100s\", strerror(errno));\n\t\t\tcontinue;\n\t\t}\n\n\t\t/* Connect it to the display. */\n\t\tif (connect(sock, ai->ai_addr, ai->ai_addrlen) < 0) {\n\t\t\tfprintf(stderr, \"connect %.100s port %d: %.100s\", buf,\n\t\t\t\t6000 + display_number, strerror(errno));\n\t\t\tclose(sock);\n\t\t\tcontinue;\n\t\t}\n\n\t\t/* Success */\n\t\tbreak;\n\t}\n\n\tfreeaddrinfo(aitop);\n\tif (!ai) {\n\t\tfprintf(stderr, \"connect %.100s port %d: %.100s\", buf, 6000 + display_number,\n\t\t\tstrerror(errno));\n\t\tfree(buf);\n\t\treturn -1;\n\t}\n\n\tfree(buf);\n\tset_nodelay(sock);\n\treturn sock;\n}\n/**\n * @brief\n *      Set the given file descriptor to non blocking mode.\n *      Calling this on a socket causes all future read() and write() calls on\n *      that socket to do only as much as they can immediately, and return\n *      without waiting.\n *      If no data can be read or written, they return -1 and set errno\n *      to EAGAIN or EWOULDBLOCK.\n *\n * @param[in] fd - file descriptor\n *\n * @return\tint\n * @retval\t1\tsuccess\n * @retval \t-1   \tFailure\n */\nint\nset_nonblocking(int fd)\n{\n\tint flags;\n\n\tif ((flags = fcntl(fd, F_GETFL, 0)) == -1)\n\t\tflags = 0;\n\tif (fcntl(fd, F_SETFL, flags | O_NONBLOCK) 
== -1)\n\t\treturn -1;\n\telse\n\t\treturn 1;\n}\n"
  },
  {
    "path": "src/lib/Libpbs/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nlib_LTLIBRARIES = libpbs.la\n\nlibpbs_la_CPPFLAGS = \\\n\t-I$(top_srcdir)/src/include \\\n\t@KRB5_CFLAGS@\n\n#\n# There are specific rules that must be followed when updating the library\n# version. The value is completely independent of the product version. 
Refer\n# to the following article prior to modifying the library version:\n# https://autotools.io/libtool/version.html\n#\npkgconfig_DATA = pbs.pc\n\nlibpbs_la_LDFLAGS = -version-info 0:0:0 @KRB5_LIBS@\n\nlibpbs_la_LIBADD= \\\n\t@libz_lib@ \\\n\t-lcrypto \\\n\t-lpthread\n\nlibpbs_la_SOURCES = \\\n\t../Libattr/attr_fn_arst.c \\\n\t../Libattr/attr_fn_b.c \\\n\t../Libattr/attr_fn_c.c \\\n\t../Libattr/attr_fn_f.c \\\n\t../Libattr/attr_fn_hold.c \\\n\t../Libattr/attr_fn_intr.c \\\n\t../Libattr/attr_fn_l.c \\\n\t../Libattr/attr_fn_ll.c \\\n\t../Libattr/attr_fn_size.c \\\n\t../Libattr/attr_fn_str.c \\\n\t../Libattr/attr_fn_time.c \\\n\t../Libattr/attr_fn_unkn.c \\\n\t../Libattr/attr_func.c \\\n\t../Libattr/attr_resc_func.c \\\n\t../Libattr/Long_.c \\\n\t../Libattr/resc_map.c \\\n\t../Libattr/uLTostr.c \\\n\t../Libattr/strToL.c \\\n\t../Libattr/strTouL.c \\\n\t../Libcmds/batch_status.c \\\n\t../Libcmds/check_job_script.c \\\n\t../Libcmds/chk_Jrange.c \\\n\t../Libcmds/ck_job_name.c \\\n\t../Libcmds/cnt2server.c \\\n\t../Libcmds/cvtdate.c \\\n\t../Libcmds/get_attr.c \\\n\t../Libcmds/get_dataservice_usr.c \\\n\t../Libcmds/get_server.c \\\n\t../Libcmds/err_handling.c \\\n\t../Libcmds/isjobid.c \\\n\t../Libcmds/locate_job.c \\\n\t../Libcmds/parse_at.c \\\n\t../Libcmds/parse_depend.c \\\n\t../Libcmds/parse_destid.c \\\n\t../Libcmds/parse_equal.c \\\n\t../Libcmds/parse_jobid.c \\\n\t../Libcmds/parse_stage.c \\\n\t../Libcmds/prepare_path.c \\\n\t../Libcmds/set_attr.c \\\n\t../Libcmds/set_resource.c \\\n\t../Libdis/dis_helpers.c \\\n\t../Libdis/dis.c \\\n\t../Libdis/dis_.h \\\n\t../Libdis/discui_.c \\\n\t../Libdis/discul_.c \\\n\t../Libdis/disi10d_.c \\\n\t../Libdis/disi10l_.c \\\n\t../Libdis/disiui_.c \\\n\t../Libdis/disp10d_.c \\\n\t../Libdis/disp10l_.c \\\n\t../Libdis/disrcs.c \\\n\t../Libdis/disrd.c \\\n\t../Libdis/disrf.c \\\n\t../Libdis/disrfcs.c \\\n\t../Libdis/disrfst.c \\\n\t../Libdis/disrl.c \\\n\t../Libdis/disrl_.c \\\n\t../Libdis/disrsc.c \\\n\t../Libdis/disrsi.c 
\\\n\t../Libdis/disrsi_.c \\\n\t../Libdis/disrsl.c \\\n\t../Libdis/disrsl_.c \\\n\t../Libdis/disrss.c \\\n\t../Libdis/disrst.c \\\n\t../Libdis/disruc.c \\\n\t../Libdis/disrui.c \\\n\t../Libdis/disrul.c \\\n\t../Libdis/disrus.c \\\n\t../Libdis/diswcs.c \\\n\t../Libdis/diswf.c \\\n\t../Libdis/diswl_.c \\\n\t../Libdis/diswsi.c \\\n\t../Libdis/diswsl.c \\\n\t../Libdis/diswui.c \\\n\t../Libdis/diswui_.c \\\n\t../Libdis/diswul.c \\\n\t../Libdis/ps_dis.c \\\n\t../Libdis/diswull.c \\\n\t../Libdis/disrull.c \\\n\t../Libdis/discull_.c \\\n\t../Libdis/disrsll_.c \\\n\t../Libecl/ecl_verify.c \\\n\t../Libecl/ecl_verify_datatypes.c \\\n\t../Libecl/ecl_verify_values.c \\\n\t../Libecl/ecl_verify_object_name.c \\\n\t../Libecl/pbs_client_thread.c \\\n\t../Libifl/advise.c \\\n\t../Libifl/auth.c \\\n\t../Libifl/conn_table.c \\\n\t../Libifl/DIS_decode.c \\\n\t../Libifl/DIS_encode.c \\\n\t../Libifl/dec_reply.c \\\n\t../Libifl/enc_reply.c \\\n\t../Libifl/entlim_parse.c \\\n\t../Libifl/get_svrport.c \\\n\t../Libifl/grunt_parse.c \\\n\t../Libifl/int_hook.c \\\n\t../Libifl/int_jcred.c \\\n\t../Libifl/int_manager.c \\\n\t../Libifl/int_manage2.c \\\n\t../Libifl/int_msg2.c \\\n\t../Libifl/int_rdrpy.c \\\n\t../Libifl/int_sig2.c \\\n\t../Libifl/int_status2.c \\\n\t../Libifl/int_submit.c \\\n\t../Libifl/int_submit_resv.c \\\n\t../Libifl/int_status.c \\\n\t../Libifl/int_ucred.c \\\n\t../Libifl/int_cred.c \\\n\t../Libifl/int_modify_resv.c \\\n\t../Libifl/list_link.c \\\n\t../Libifl/ifl_util.c \\\n\t../Libifl/ifl_pointers.c \\\n\t../Libifl/PBS_attr.c \\\n\t../Libifl/pbs_get_attribute_errors.c \\\n\t../Libifl/pbs_geterrmg.c \\\n\t../Libifl/pbs_geterrno.c \\\n\t../Libifl/pbs_loadconf.c \\\n\t../Libifl/pbs_quote_parse.c \\\n\t../Libifl/pbs_statfree.c \\\n\t../Libifl/pbs_delstatfree.c \\\n\t../Libifl/pbsD_alterjob.c \\\n\t../Libifl/pbsD_connect.c \\\n\t../Libifl/pbsD_deljob.c \\\n\t../Libifl/pbsD_deljoblist.c \\\n\t../Libifl/pbsD_holdjob.c \\\n\t../Libifl/pbsD_locjob.c \\\n\t../Libifl/pbsD_manager.c 
\\\n\t../Libifl/pbsD_movejob.c \\\n\t../Libifl/pbsD_msgjob.c \\\n\t../Libifl/pbsD_orderjo.c \\\n\t../Libifl/pbsD_rerunjo.c \\\n\t../Libifl/pbsD_resc.c \\\n\t../Libifl/pbsD_rlsjob.c \\\n\t../Libifl/pbsD_runjob.c \\\n\t../Libifl/pbsD_selectj.c \\\n\t../Libifl/pbsD_sigjob.c \\\n\t../Libifl/pbsD_stagein.c \\\n\t../Libifl/pbsD_stathost.c \\\n\t../Libifl/pbsD_statjob.c \\\n\t../Libifl/pbsD_statnode.c \\\n\t../Libifl/pbsD_statque.c \\\n\t../Libifl/pbsD_statsrv.c \\\n\t../Libifl/pbsD_statsched.c \\\n\t../Libifl/pbsD_submit.c \\\n\t../Libifl/pbsD_termin.c \\\n\t../Libifl/pbsD_submit_resv.c \\\n\t../Libifl/pbsD_stathook.c \\\n\t../Libifl/pbsD_delresv.c \\\n\t../Libifl/pbsD_statresv.c \\\n\t../Libifl/pbsD_confirmresv.c \\\n\t../Libifl/pbsD_defschreply.c \\\n\t../Libifl/pbsD_statrsc.c \\\n\t../Libifl/pbsD_modify_resv.c \\\n\t../Libifl/pbsD_Preempt_Jobs.c \\\n\t../Libifl/ifl_impl.c \\\n\t../Libifl/rm.c \\\n\t../Libifl/strsep.c \\\n\t../Libifl/tcp_dis.c \\\n\t../Libifl/tm.c \\\n\t../Libifl/xml_encode_decode.c \\\n\t../Liblog/pbs_messages.c \\\n\t../Liblog/pbs_log.c \\\n\t../Liblog/log_event.c \\\n\t../Libsec/cs_standard.c \\\n\t../Libutil/avltree.c \\\n\t../Libutil/get_hostname.c \\\n\t../Libutil/misc_utils.c \\\n\t../Libutil/thread_utils.c \\\n\t../Libutil/pbs_secrets.c \\\n\t../Libutil/pbs_aes_encrypt.c \\\n\t../Libutil/pbs_idx.c \\\n\t../Libutil/range.c \\\n\t../Libutil/dedup_jobids.c \\\n\t../Libnet/get_hostaddr.c \\\n\t../Libnet/hnls.c \\\n\t../Libtpp/tpp_client.c \\\n\t../Libtpp/tpp_em.c \\\n\t../Libtpp/tpp_platform.c \\\n\t../Libtpp/tpp_transport.c \\\n\t../Libtpp/tpp_util.c \\\n\tecl_job_attr_def.c \\\n\tecl_svr_attr_def.c \\\n\tecl_sched_attr_def.c \\\n\tecl_node_attr_def.c \\\n\tecl_queue_attr_def.c \\\n\tecl_resc_def_all.c \\\n\tecl_resv_attr_def.c\n\nCLEANFILES = \\\n\tecl_job_attr_def.c \\\n\tecl_svr_attr_def.c \\\n\tecl_sched_attr_def.c \\\n\tecl_node_attr_def.c \\\n\tecl_queue_attr_def.c \\\n\tecl_resc_def_all.c \\\n\tecl_resv_attr_def.c\n\necl_job_attr_def.c: 
$(top_srcdir)/src/lib/Libattr/master_job_attr_def.xml $(top_srcdir)/buildutils/attr_parser.py\n\t@echo Generating $@ from $< ; \\\n\t$(PYTHON) $(top_srcdir)/buildutils/attr_parser.py -m $(top_srcdir)/src/lib/Libattr/master_job_attr_def.xml -e $@\n\necl_svr_attr_def.c: $(top_srcdir)/src/lib/Libattr/master_svr_attr_def.xml $(top_srcdir)/buildutils/attr_parser.py\n\t@echo Generating $@ from $< ; \\\n\t$(PYTHON) $(top_srcdir)/buildutils/attr_parser.py -m $(top_srcdir)/src/lib/Libattr/master_svr_attr_def.xml -e $@\n\necl_node_attr_def.c: $(top_srcdir)/src/lib/Libattr/master_node_attr_def.xml $(top_srcdir)/buildutils/attr_parser.py\n\t@echo Generating $@ from $< ; \\\n\t$(PYTHON) $(top_srcdir)/buildutils/attr_parser.py -m $(top_srcdir)/src/lib/Libattr/master_node_attr_def.xml -e $@\n\necl_queue_attr_def.c: $(top_srcdir)/src/lib/Libattr/master_queue_attr_def.xml $(top_srcdir)/buildutils/attr_parser.py\n\t@echo Generating $@ from $< ; \\\n\t$(PYTHON) $(top_srcdir)/buildutils/attr_parser.py -m $(top_srcdir)/src/lib/Libattr/master_queue_attr_def.xml -e $@\n\necl_resv_attr_def.c: $(top_srcdir)/src/lib/Libattr/master_resv_attr_def.xml $(top_srcdir)/buildutils/attr_parser.py\n\t@echo Generating $@ from $< ; \\\n\t$(PYTHON) $(top_srcdir)/buildutils/attr_parser.py -m $(top_srcdir)/src/lib/Libattr/master_resv_attr_def.xml -e $@\n\necl_sched_attr_def.c: $(top_srcdir)/src/lib/Libattr/master_sched_attr_def.xml $(top_srcdir)/buildutils/attr_parser.py\n\t@echo Generating $@ from $< ; \\\n\t$(PYTHON) $(top_srcdir)/buildutils/attr_parser.py -m $(top_srcdir)/src/lib/Libattr/master_sched_attr_def.xml -e $@\n\necl_resc_def_all.c: $(top_srcdir)/src/lib/Libattr/master_resc_def_all.xml $(top_srcdir)/buildutils/attr_parser.py\n\t@echo Generating $@ from $< ; \\\n\t$(PYTHON) $(top_srcdir)/buildutils/attr_parser.py -m $(top_srcdir)/src/lib/Libattr/master_resc_def_all.xml -e $@\n"
  },
  {
    "path": "src/lib/Libpbs/pbs.pc.in",
    "content": "# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nprefix=@prefix@\nexec_prefix=@exec_prefix@\nlibdir=@libdir@\nincludedir=@includedir@\n\nName: @PACKAGE_NAME@\nDescription: Library for the PBS\nVersion: @PACKAGE_VERSION@\nLibs: -L${libdir} -lpbs\nCflags: -I${includedir}\nRequires.private: libcrypto\nRequires.private: libz\n"
  },
  {
    "path": "src/lib/Libpython/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nnoinst_LIBRARIES = libpbspython.a libpbspython_svr.a\n\nlibpbspython_a_CPPFLAGS = \\\n\t-I$(top_srcdir)/src/include \\\n\t@PYTHON_INCLUDES@ \\\n\t@KRB5_CFLAGS@\n\nlibpbspython_a_SOURCES = \\\n\tshared_python_utils.c \\\n\tcommon_python_utils.c \\\n\tpbs_python_external.c \\\n\tpbs_python_svr_external.c \\\n\tmodule_pbs_v1.c \\\n\tpbs_python_svr_internal.c \\\n\tpbs_python_svr_size_type.c \\\n\tpbs_python_import_types.c\n\nnodist_libpbspython_a_SOURCES = \\\n\t$(top_builddir)/src/lib/Libifl/pbs_ifl_wrap.c\n\nlibpbspython_svr_a_CPPFLAGS = \\\n\t-DLIBPYTHONSVR \\\n\t-I$(top_srcdir)/src/include \\\n\t@PYTHON_INCLUDES@ \\\n\t@KRB5_CFLAGS@\n\nlibpbspython_svr_a_SOURCES = \\\n\tshared_python_utils.c \\\n\tcommon_python_utils.c \\\n\tpbs_python_external.c \\\n\tpbs_python_svr_external.c \\\n\tmodule_pbs_v1.c \\\n\tpbs_python_svr_internal.c \\\n\tpbs_python_svr_size_type.c \\\n\tpbs_python_import_types.c\n\nnodist_libpbspython_svr_a_SOURCES = \\\n\t$(top_builddir)/src/lib/Libifl/pbs_ifl_wrap.c\n"
  },
  {
    "path": "src/lib/Libpython/common_python_utils.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h>\n#include <pbs_python_private.h> /* include the internal python file */\n#include <log.h>\n#include <pbs_error.h>\n#include \"hook.h\"\n\nextern char *pbs_python_daemon_name;\n/**\n * @file\tcommon_python_utils.c\n * @brief\n * \tCommon Python Utilities shared by extension and embedded C routines.\n */\n\n/**\n *\n * @brief\n *\twrite Python object info to the log in the form:\n *\t[<pre message>] <object info>\n *\n * @param[in]\tobj\t- the Python object to write info about.\n * @param[in]\tpre\t- some header string to put in the log\n * @param[in]\tseverity - severity level of the log message\n *\n * @return\tnone\n *\n * @note\n *  side effects:\n *   No Python exception propagates out of this function; if one is raised, it is cleared.\n */\n\nvoid\npbs_python_write_object_to_log(PyObject *obj, char *pre, int severity)\n{\n\tPyObject *py_tmp_str = NULL;\n\tconst char *obj_str = NULL;\n\n\tif (!(py_tmp_str = PyObject_Str(obj))) {\n\t\tgoto ERROR_EXIT;\n\t}\n\tif (!(obj_str = PyUnicode_AsUTF8(py_tmp_str))) {\n\t\tgoto ERROR_EXIT;\n\t}\n\tif (pre) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"%s %s\", pre, obj_str);\n\t} else {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"%s\", obj_str);\n\t}\n\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\n\tif (IS_PBS_PYTHON_CMD(pbs_python_daemon_name))\n\t\tlog_event(PBSEVENT_DEBUG3, 
PBS_EVENTCLASS_SERVER,\n\t\t\t  severity, pbs_python_daemon_name, log_buffer);\n\telse\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_SERVER,\n\t\t\t  severity, pbs_python_daemon_name, log_buffer);\n\n\tPy_CLEAR(py_tmp_str);\n\treturn;\n\nERROR_EXIT:\n\tPy_CLEAR(py_tmp_str);\n\tpbs_python_write_error_to_log(\"failed to convert object to str\");\n}\n\n/**\n * @brief\n * \tinsert directory to sys path list\n * \tif pos == -1 , then append to the end of the list.\n *\n * @param[in] dirname - directory path\n * @param[in] pos - position of python list\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t-1\terror\n *\n */\nint\npbs_python_modify_syspath(const char *dirname, int pos)\n{\n\tPyObject *path = NULL; /* 'sys.path'  */\n\tPyObject *pystr_dirname = NULL;\n\n\tif (!dirname) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"passed NULL pointer to dirname argument!!\");\n\t\treturn -1;\n\t}\n\n\tPyErr_Clear(); /* clear any exceptions */\n\n\t/* on success we get a NEW ref */\n\tif (!(pystr_dirname = PyUnicode_FromString(dirname))) { /* failed */\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"%s:creating pystr_dirname <%s>\",\n\t\t\t __func__, dirname);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tpbs_python_write_error_to_log(log_buffer);\n\t\tgoto ERROR_EXIT;\n\t}\n\n\t/* on success we get a borrowed ref */\n\tif (!(path = PySys_GetObject(\"path\"))) { /* failed */\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"%s:PySys_GetObject failed\",\n\t\t\t __func__);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tpbs_python_write_error_to_log(log_buffer);\n\t\tgoto ERROR_EXIT;\n\t}\n\n\tif (PyList_Check(path)) {\n\t\tif (pos == -1) {\n\t\t\tif (PyList_Append(path, pystr_dirname) == -1) {\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n#ifdef NAS /* localmod 005 */\n\t\t\t\t\t \"%s:could not append to list pos:<%ld>\",\n\t\t\t\t\t __func__, (long) pos\n#else\n\t\t\t\t\t \"%s:could not append to list pos:<%d>\",\n\t\t\t\t\t __func__, pos\n#endif /* localmod 005 
*/\n\t\t\t\t);\n\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\tpbs_python_write_error_to_log(log_buffer);\n\t\t\t\tgoto ERROR_EXIT;\n\t\t\t}\n\t\t} else {\n\t\t\tif (PyList_Insert(path, pos, pystr_dirname) == -1) {\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n#ifdef NAS /* localmod 005 */\n\t\t\t\t\t \"%s:could not append to list pos:<%ld>\",\n\t\t\t\t\t __func__, (long) pos\n#else\n\t\t\t\t\t \"%s:could not append to list pos:<%d>\",\n\t\t\t\t\t __func__, pos\n#endif /* localmod 005 */\n\t\t\t\t);\n\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\tpbs_python_write_error_to_log(log_buffer);\n\t\t\t\tgoto ERROR_EXIT;\n\t\t\t}\n\t\t}\n\t} else {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"sys.path is not a list?\");\n\t\tgoto ERROR_EXIT;\n\t}\n\n\t{\n\t\tPyObject *obj_repr;\n\t\tchar *str;\n\t\tobj_repr = PyObject_Repr(path);\n\t\tstr = pbs_python_object_str(obj_repr);\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"--> Python module path is now: %s <--\", str);\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_DEBUG, pbs_python_daemon_name, log_buffer);\n\t\tPy_CLEAR(obj_repr);\n\t}\n\n\tPy_CLEAR(pystr_dirname);\n\tPySys_SetObject(\"path\", path);\n\treturn 0;\n\nERROR_EXIT:\n\tPy_CLEAR(pystr_dirname);\n\treturn -1;\n}\n\n/**\n * @brief\n * \tpbs_python_write_error_to_log\n *    \twrite the Python exception that occurred to the PBS log file.\n *    \tHeavily borrowed from \"Programming Python\" by Mark Lutz\n *\n * @param[in] emsg - error msg to be logged\n *\n */\n\nvoid\npbs_python_write_error_to_log(const char *emsg)\n{\n\tPyObject *exc_type = NULL;\t/* NEW reference, please DECREF */\n\tPyObject *exc_value = NULL;\t/* NEW reference, please DECREF */\n\tPyObject *exc_traceback = NULL; /* NEW reference, please DECREF */\n\tPyObject *exc_string = NULL;\t/* the exception message to be written to pbs log */\n\n\t/* get the exception */\n\tif (!PyErr_Occurred()) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"error handler called but no exception 
raised!\");\n\t\treturn;\n\t}\n\n\tPyErr_Fetch(&exc_type, &exc_value, &exc_traceback);\n\tPyErr_Clear(); /* just in case, not clear from API doc */\n\n\texc_string = NULL;\n\tif ((exc_type != NULL) && /* get the string representation of the object */\n\t    ((exc_string = PyObject_Str(exc_type)) != NULL) &&\n\t    (PyUnicode_Check(exc_string))) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"%s\", PyUnicode_AsUTF8(exc_string));\n\t} else {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"%s\", \"<could not figure out the exception type>\");\n\t}\n\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\tPy_XDECREF(exc_string);\n\tif (log_buffer[0] != '\\0')\n\t\tlog_err(PBSE_INTERNAL, emsg, log_buffer);\n\n\t/* Log error exception value */\n\texc_string = NULL;\n\tif ((exc_value != NULL) && /* get the string representation of the object */\n\t    ((exc_string = PyObject_Str(exc_value)) != NULL) &&\n\t    (PyUnicode_Check(exc_string))) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"%s\", PyUnicode_AsUTF8(exc_string));\n\t} else {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"%s\", \"<could not figure out the exception value>\");\n\t}\n\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\tPy_XDECREF(exc_string);\n\tif (log_buffer[0] != '\\0')\n\t\tlog_err(PBSE_INTERNAL, emsg, log_buffer);\n\n\tPy_XDECREF(exc_type);\n\tPy_XDECREF(exc_value);\n\n#if !defined(WIN32)\n\tPy_XDECREF(exc_traceback);\n#elif !defined(_DEBUG)\n\t/* for some reason this crashes on Windows Debug version */\n\tPy_XDECREF(exc_traceback);\n#endif\n\n\treturn;\n}\n\n/**\n * @brief\n * \tpbs_python_object_set_attr_string_value\n *    \tCurrent PBS API does not have an easy interface to set a C string value on\n *    \tan object attribute. 
Hence, a lot of boilerplate code is needed just to set a string\n *   \tvalue; hence this routine.\n *\n * @param[in] obj - object on which the string value is to be set\n * @param[in] key - name of the attribute to set\n * @param[in] value - string value to be set\n *\n * @par\tNOTES:\n *  \t- exceptions are cleared!!\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t-1\tfailure\n *\n */\n\nint\npbs_python_object_set_attr_string_value(PyObject *obj,\n\t\t\t\t\tconst char *key,\n\t\t\t\t\tconst char *value)\n{\n\tPyObject *tmp_py_str = NULL;\n\n\tint rv = -1; /* default failure */\n\n\tif (!key) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"Null key passed!\");\n\t\treturn rv;\n\t}\n\n\tif (!value) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t \"Null value passed while setting attribute '%s'\", key);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\treturn rv;\n\t}\n\n\ttmp_py_str = PyUnicode_FromString(value); /* NEW reference */\n\n\tif (!tmp_py_str) { /* Uh-oh, failed */\n\t\tpbs_python_write_error_to_log(__func__);\n\t\treturn rv;\n\t}\n\trv = PyObject_SetAttrString(obj, key, tmp_py_str);\n\n\tif (rv == -1) {\n\t\tpbs_python_write_error_to_log(__func__);\n\t}\n\tPy_CLEAR(tmp_py_str);\n\treturn rv;\n}\n\n/**\n * @brief\n *      pbs_python_object_set_attr_integral_value\n *      Current PBS API does not have an easy interface to set a C integral value on\n *      an object attribute. 
Hence, a lot of boilerplate code is needed just to set an integral\n *      value; hence this routine.\n *\n * @param[in] obj - object on which the integral value is to be set\n * @param[in] key - name of the attribute to set\n * @param[in] value - integer value to be set\n *\n * @par NOTES:\n *      - exceptions are cleared!!\n *\n * @return      int\n * @retval      0       success\n * @retval      -1      failure\n *\n */\n\nint\npbs_python_object_set_attr_integral_value(PyObject *obj,\n\t\t\t\t\t  const char *key,\n\t\t\t\t\t  int value)\n{\n\tPyObject *tmp_py_int = PyLong_FromSsize_t(value); /* NEW reference */\n\n\tint rv = -1;\t   /* default failure */\n\tif (!tmp_py_int) { /* Uh-oh, failed */\n\t\tpbs_python_write_error_to_log(__func__);\n\t\treturn rv;\n\t}\n\trv = PyObject_SetAttrString(obj, key, tmp_py_int);\n\n\tif (rv == -1)\n\t\tpbs_python_write_error_to_log(__func__);\n\tPy_CLEAR(tmp_py_int);\n\n\treturn rv;\n}\n\n/**\n * @brief\n *      pbs_python_object_get_attr_integral_value\n *      Current PBS API does not have an easy interface to get a C integral value of\n *      an object attribute. 
Hence, a lot of boilerplate code is needed just to get an integral\n *      value; hence this routine.\n *\n * @param[in] obj - object from which the integral value is taken\n * @param[in] key - name of the attribute to get\n *\n * @return      int\n * @retval      <int>   the attribute's integer value, on success\n * @retval      -1      failure\n *\n */\nint\npbs_python_object_get_attr_integral_value(PyObject *obj, const char *key)\n{\n\tint rv = -1; /* default failure */\n\tPyObject *py_int = NULL;\n\tint retval;\n\n\tif (!key) { /* no key given */\n\t\treturn rv;\n\t}\n\tif (!PyObject_HasAttrString(obj, key)) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t \"obj %s has no key %s\", pbs_python_object_str(obj), key);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\treturn rv;\n\t}\n\n\tpy_int = PyObject_GetAttrString(obj, key); /* NEW ref */\n\n\tif (!py_int) {\n\t\tpbs_python_write_error_to_log(__func__);\n\t\treturn rv;\n\t}\n\n\tif (!PyArg_Parse(py_int, \"i\", &retval)) {\n\t\tpbs_python_write_error_to_log(__func__);\n\t\tPy_CLEAR(py_int);\n\t\treturn rv;\n\t}\n\n\tPy_CLEAR(py_int);\n\treturn (retval);\n}\n\n/**\n * @brief\n * \tReturns a string representation of 'obj' in a fixed memory area that must\n * \tnot be freed. 
This never returns NULL.\n *\n * @par Note:\n *\tthe next call of this function would overwrite this fixed memory area\n * \tso probably best to use the result immediately, or strdup() it.\n *\n * @param[in]\tobj - object\n *\n * @return\tstring\n * @retval\tstring representation of obj\n */\nchar *\npbs_python_object_str(PyObject *obj)\n{\n\tconst char *str = NULL;\n\tPyObject *py_str;\n\tstatic char *ret_str = NULL;\n\tchar *tmp_str = NULL;\n\tsize_t alloc_sz = 0;\n\n\tpy_str = PyObject_Str(obj); /* NEW ref */\n\n\tif (!py_str)\n\t\treturn (\"\");\n\n\tstr = PyUnicode_AsUTF8(py_str);\n\n\tif (str)\n\t\talloc_sz = strlen(str) + 1;\n\telse\n\t\talloc_sz = 1; /* for null byte */\n\n\ttmp_str = (char *) realloc((char *) ret_str, alloc_sz);\n\tif (!tmp_str) { /* error on realloc */\n\t\tlog_err(errno, __func__, \"error on realloc\");\n\t\tPy_CLEAR(py_str);\n\t\treturn (\"\");\n\t}\n\tret_str = tmp_str;\n\t*ret_str = '\\0';\n\tif (str != NULL) {\n\t\tstrncpy(ret_str, str, alloc_sz);\n\t\tret_str[alloc_sz - 1] = '\\0';\n\t}\n\tPy_CLEAR(py_str);\n\treturn (ret_str);\n}\n\n/**\n * @brief\n * \tpbs_python_object_get_attr_string_value\n *    \tCurrent PBS API does not have an easy interface to get a C string value of\n *    \tan object attribute. 
Hence, a lot of boilerplate code is needed just to get a string\n *    \tvalue; hence this routine.\n *\n * @par\tNOTES:\n *  \t- exceptions are cleared!!\n * \tThis must return NULL if object does not have a value for attribute 'name'.\n *\n * @param[in] obj - object\n * @param[in] name - attribute name\n *\n * @return\tstring\n * @retval\tstring value of the attribute\tsuccess\n * @retval\tNULL\t\t\terror\n *\n */\n\nchar *\npbs_python_object_get_attr_string_value(PyObject *obj, const char *name)\n{\n\tchar *attrval_str = NULL;\n\tPyObject *py_attrval = NULL;\n\n\tif (!name) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"No value for name\");\n\t\treturn NULL;\n\t}\n\n\tif (!PyObject_HasAttrString(obj, name)) {\n\t\treturn NULL;\n\t}\n\n\tpy_attrval = PyObject_GetAttrString(obj, name);\n\n\tif (py_attrval) {\n\t\tif (py_attrval != Py_None)\n\t\t\tattrval_str = pbs_python_object_str(py_attrval);\n\t\tPy_DECREF(py_attrval);\n\t}\n\treturn (attrval_str);\n}\n\n/**\n * @brief\n *\tpbs_python_dict_set_item_string_value\n *    \tCurrent PBS API does not have an easy interface to set a C string value in\n *    \ta dictionary. 
Hence, a lot of boilerplate code is needed just to set a string\n *    \tvalue; hence this routine.\n *\n * @param[in] dict - dictionary in which the value is to be set\n * @param[in] key - name of the key to set\n * @param[in] value - string value to be set\n *\n *\n * @return      int\n * @retval      0       success\n * @retval      -1      failure\n *\n */\n\nint\npbs_python_dict_set_item_string_value(PyObject *dict,\n\t\t\t\t      const char *key,\n\t\t\t\t      const char *value)\n{\n\tPyObject *tmp_py_str;\n\n\tint rv = -1; /* default failure */\n\tif (!value) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t \"Null value passed while setting key '%s'\", key);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\treturn rv;\n\t}\n\n\ttmp_py_str = PyUnicode_FromString(value); /* NEW reference */\n\tif (!tmp_py_str) {\t\t\t  /* Uh-oh, failed */\n\t\tpbs_python_write_error_to_log(__func__);\n\t\treturn rv;\n\t}\n\trv = PyDict_SetItemString(dict, key, tmp_py_str);\n\tif (rv == -1)\n\t\tpbs_python_write_error_to_log(__func__);\n\tPy_CLEAR(tmp_py_str);\n\treturn rv;\n}\n\n/**\n * @brief\n * \tGiven a list Python object, return the string item at 'index'.\n *\n * @param[in]\tlist - the Python list object\n * @param[in]\tindex - index of the item in the list to return\n *\n * @return char *\n * @retval <string>  - a string value that is in a fixed memory area that\n * \t\t\tmust not be freed. 
This would be an empty\n * \t\t\tstring \"\" if no value is found.\n * @note\n * \tNext call to this function would overwrite the fixed memory area\n * \treturned, so probably best to use the result immediately,\n *\tor strdup() it.\n *\tThis will never return a NULL value.\n */\n\nchar *\npbs_python_list_get_item_string_value(PyObject *list, int index)\n{\n\tPyObject *py_item = NULL;\n\tchar *ret_str = NULL;\n\n\tif (!PyList_Check(list)) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"Did not get passed a list object\");\n\t\treturn (\"\");\n\t}\n\n\tpy_item = PyList_GetItem(list, index);\n\tif (!py_item) {\n\t\tpbs_python_write_error_to_log(__func__);\n\t\treturn (\"\");\n\t}\n\tret_str = pbs_python_object_str(py_item); /* does not return NULL */\n\n\treturn ret_str;\n}\n\n/**\n * @brief\n * \tpbs_python_dict_set_item_integral_value\n *    \tCurrent PBS API does not have an easy interface to set a C integral value in\n *    \ta dictionary. Hence, a lot of boilerplate code is needed just to set an integral\n *    \tvalue; hence this routine.\n *\n * @param[in] dict - dictionary in which the value is to be set\n * @param[in] key - name of the key to set\n * @param[in] value - integer value to be set\n *\n *\n * @return      int\n * @retval      0       success\n * @retval      -1      failure\n *\n */\n\nint\npbs_python_dict_set_item_integral_value(PyObject *dict,\n\t\t\t\t\tconst char *key,\n\t\t\t\t\tconst Py_ssize_t value)\n{\n\tint rv = -1; /* default failure */\n\n\tPyObject *tmp_py_int = PyLong_FromSsize_t(value); /* NEW reference */\n\tif (!tmp_py_int) {\t\t\t\t  /* Uh-oh, failed */\n\t\tpbs_python_write_error_to_log(__func__);\n\t\treturn rv;\n\t}\n\trv = PyDict_SetItemString(dict, key, tmp_py_int);\n\tif (rv == -1)\n\t\tpbs_python_write_error_to_log(__func__);\n\tPy_CLEAR(tmp_py_int);\n\treturn rv;\n}\n\n/**\n * @brief\n * \tpbs_python_import_name\n *   \tThis imports a name from the given module. Note this returns a NEW\n *   \treference. 
This essentially retrieves an attribute name.\n *\n * @param[in]\tmodule_name - imported module name\n * @param[in] \tfromname    - name to import from the module\n *\n * @return\tobject\n * @retval\tthe imported attribute object\tsuccess\n * @retval\tNULL\t\t\t\terror\n *\n */\n\nPyObject *\npbs_python_import_name(const char *module_name, const char *fromname)\n{\n\tPyObject *py_mod_obj = NULL;\n\tPyObject *py_fromname_obj = NULL;\n\n\tpy_mod_obj = PyImport_ImportModule(module_name); /* fetch module */\n\tif (py_mod_obj == NULL) {\n\t\tgoto ERROR_EXIT;\n\t}\n\tif (!(py_fromname_obj = PyObject_GetAttrString(py_mod_obj, fromname))) {\n\t\tgoto ERROR_EXIT;\n\t}\n\n\tif (py_mod_obj)\n\t\tPy_CLEAR(py_mod_obj);\n\n\treturn py_fromname_obj;\n\nERROR_EXIT:\n\tpbs_python_write_error_to_log(__func__);\n\tif (py_mod_obj)\n\t\tPy_CLEAR(py_mod_obj);\n\treturn NULL;\n}\n\n/*\n * logmsg module method implementation and documentation\n *\n *  TODO\n *     This could really be a log type capturing all the log.h functionality.\n */\n\nconst char pbsv1mod_meth_logmsg_doc[] =\n\t\"logmsg(strSeverity,strMessage)\\n\\\n  where:\\n\\\n\\n\\\n   strSeverity: one of module constants\\n\\\n              pbs.LOG_WARNING\\n\\\n              pbs.LOG_ERROR\\n\\\n              pbs.LOG_DEBUG (default)\\n\\\n   strMessage:  error message to write\\n\\\n\\n\\\n  returns:\\n\\\n         None\\n\\\n\";\n\n/* note this is #undef'd later */\n#define VALID_SEVERITY_VALUE(val) \\\n\t((val == SEVERITY_LOG_WARNING) || (val == SEVERITY_LOG_ERR) || (val == SEVERITY_LOG_DEBUG))\n\n#define VALID_EVENTTYPE_VALUE(val)                                \\\n\t((val == PBSEVENT_ERROR) || (val == PBSEVENT_SYSTEM) ||   \\\n\t (val == PBSEVENT_JOB) || (val == PBSEVENT_JOB_USAGE) ||  \\\n\t (val == PBSEVENT_SECURITY) || (val == PBSEVENT_SCHED) || \\\n\t (val == PBSEVENT_DEBUG) || (val == PBSEVENT_DEBUG2) ||   \\\n\t (val == PBSEVENT_RESV) || (val == PBSEVENT_DEBUG3) ||    \\\n\t (val == PBSEVENT_DEBUG4) || (val == 
PBSEVENT_FORCE) ||   \\\n\t (val == PBSEVENT_ADMIN))\n\n/**\n * @brief\n *\tThis is the wrapper function to the pbs.logmsg() call in the hook world.\n * \tIt will basically call log_event() passing in values for eventtype,\n * \tseverity, and actual log message.\n *\n * @param[in]\tself - parent object\n * @param[in]\targs - the list of arguments:\n * \t\t\targs[0] = loglevel\t(pbs.LOG_DEBUG, pbs.EVENT_DEBUG4, etc...)\n * \t\t\targs[1] = log_message\n * \t\t\tif loglevel is pbs.LOG_DEBUG, pbs.LOG_ERROR, pbs.LOG_WARNING,\n * \t\t\tthen the 'severity' argument to log_event() is set to LOG_DEBUG,\n * \t\t\tLOG_ERROR, LOG_WARNING, respectively. Otherwise, it would default\n * \t\t\tto LOG_DEBUG.\n * \t\t\tNOTE: 'severity' determines the severity of the message when\n * \t\t\tsent to syslog.\n * @return PyObject *\n * @retval Py_None\t- for success\n * @retval NULL\t\t- which causes an exception to the executing hook script.\n *\n */\nPyObject *\npbsv1mod_meth_logmsg(PyObject *self, PyObject *args, PyObject *kwds)\n{\n\n\tstatic char *kwlist[] = {\"loglevel\", \"message\", NULL};\n\n\tint loglevel;\n\tint severity = -1;\n\tint eventtype = -1;\n\tchar *emsg = NULL;\n#ifdef PY_SSIZE_T_CLEAN\n\tPy_ssize_t emsg_len = 0;\n#else\n\tint emsg_len = 0;\n#endif\n\n\t/* The use of \"s#\" below is to allow embedded NULLs, to guarantee */\n\t/* something will get printed and not get an exception */\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t \"is#:logmsg\",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &loglevel,\n\t\t\t\t\t &emsg,\n\t\t\t\t\t &emsg_len)) {\n\t\treturn NULL;\n\t}\n\n\tif (!VALID_SEVERITY_VALUE(loglevel) &&\n\t    !VALID_EVENTTYPE_VALUE(loglevel)) {\n\t\tPyErr_Format(PyExc_TypeError, \"Invalid severity or eventtype value <%d>\",\n\t\t\t     loglevel);\n\t\treturn NULL;\n\t}\n\t/* log the message */\n\tif (VALID_SEVERITY_VALUE(loglevel)) {\n\t\tif (loglevel == SEVERITY_LOG_DEBUG)\n\t\t\tseverity = LOG_DEBUG;\n\t\telse if (loglevel == 
SEVERITY_LOG_ERR)\n\t\t\tseverity = LOG_ERR;\n\t\telse if (loglevel == SEVERITY_LOG_WARNING)\n\t\t\tseverity = LOG_WARNING;\n\t}\n\tif (VALID_EVENTTYPE_VALUE(loglevel)) {\n\t\teventtype = loglevel;\n\t}\n\n\t/* This usually means what got passed are the old\n\t * loglevel values (pbs.LOG_DEBUG, pbs.LOG_ERROR, pbs.LOG_WARNING).\n\t * These values were actually the 'severity' values for syslog,\n\t * so we use the same default as before for the 'eventtype' argument\n\t * to log_event().\n\t */\n\tif (eventtype == -1) {\n\t\teventtype = (PBSEVENT_ADMIN | PBSEVENT_SYSTEM);\n\t}\n\t/* This means what got passed are the new log level values\n\t * (e.g. pbs.EVENT_DEBUG4) which really map to the 'eventtype'\n\t * argument to log_event(). So we'll use a default LOG_DEBUG\n\t * 'severity' value for syslog.\n\t */\n\tif (severity == -1) {\n\t\tseverity = LOG_DEBUG;\n\t}\n\n\tlog_event(eventtype, PBS_EVENTCLASS_HOOK,\n\t\t  severity, pbs_python_daemon_name, emsg);\n\tPy_RETURN_NONE;\n}\n#undef VALID_SEVERITY_VALUE\n#undef VALID_EVENTTYPE_VALUE\n\n/*\n * logjobmsg module method implementation and documentation\n *\n */\n\nconst char pbsv1mod_meth_logjobmsg_doc[] =\n\t\"logjobmsg(strJobId,strMessage)\\n\\\n  where:\\n\\\n\\n\\\n   strJobId:  a PBS  job id\\n\\\n   strMessage:  message to write to PBS log under class of messages\\n\\\n   \t\trelated to 'strJobId'.\\n\\\n\\n\\\n  returns:\\n\\\n         None\\n\\\n\";\n\nPyObject *\npbsv1mod_meth_logjobmsg(PyObject *self, PyObject *args, PyObject *kwds)\n{\n\n\tstatic char *kwlist[] = {\"jobid\", \"message\", NULL};\n\n\tchar *jobid = NULL;\n\tchar *msg = NULL;\n#ifdef PY_SSIZE_T_CLEAN\n\tPy_ssize_t msg_len = 0;\n#else\n\tint msg_len = 0;\n#endif\n\n\t/* The use of \"s#\" below is to allow embedded NULLs, to guarantee */\n\t/* something will get printed and not get an exception */\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t \"ss#:logjobmsg\",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &jobid,\n\t\t\t\t\t &msg,\n\t\t\t\t\t &msg_len)) {\n\t\treturn 
NULL;\n\t}\n\n\tif ((jobid == NULL) || (jobid[0] == '\\0')) {\n\t\tPyErr_SetString(PyExc_ValueError, \"no jobid given!\");\n\t\treturn NULL;\n\t}\n\n\t/* log the message */\n\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG, jobid, msg);\n\n\tPy_RETURN_NONE;\n}\n"
  },
  {
    "path": "src/lib/Libpython/module_pbs_v1.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n/* include the pbs_python private file with python dependencies */\n#include <pbs_python_private.h>\n\n#include <stdio.h>\n#include <unistd.h>\n#include <sys/types.h>\n#include <sys/param.h>\n#include <memory.h>\n#include <stdlib.h>\n#include <libpbs.h>\n#include <pbs_ifl.h>\n#include <errno.h>\n#include <string.h>\n#include <list_link.h>\n#include <log.h>\n#include <attribute.h>\n#include <server_limits.h>\n#include <server.h>\n#include <job.h>\n#include <reservation.h>\n#include <queue.h>\n#include <pbs_error.h>\n#include <hook.h>\n#include <pbs_internal.h>\n#include <pbs_nodes.h>\n#include \"pbs_internal.h\"\n\n/* pbs_python_import_types.c */\nextern int ppsvr_prepare_all_types(void);\nextern PyObject *ppsvr_create_types_module(void);\n\nextern char pbsv1mod_meth_logmsg_doc[]; /* common_python_utils.c */\nextern PyObject *pbsv1mod_meth_logmsg(PyObject *self,\n\t\t\t\t      PyObject *args, PyObject *kwds);\n\nextern char pbsv1mod_meth_logjobmsg_doc[]; /* common_python_utils.c */\nextern PyObject *pbsv1mod_meth_logjobmsg(PyObject *self,\n\t\t\t\t\t PyObject *args, PyObject *kwds);\n\n/* pbs_python_svr_internal.c */\nextern char pbsv1mod_meth_get_queue_doc[];\nextern PyObject *pbsv1mod_meth_get_queue(PyObject *self,\n\t\t\t\t\t PyObject *args, PyObject 
*kwds);\n\nextern char pbsv1mod_meth_get_job_doc[];\nextern PyObject *pbsv1mod_meth_get_job(PyObject *self,\n\t\t\t\t       PyObject *args, PyObject *kwds);\n\nextern char pbsv1mod_meth_release_nodes_doc[];\nextern PyObject *pbsv1mod_meth_release_nodes(PyObject *self,\n\t\t\t\t\t     PyObject *args, PyObject *kwds);\n\nextern char pbsv1mod_meth_get_resv_doc[];\nextern PyObject *pbsv1mod_meth_get_resv(PyObject *self,\n\t\t\t\t\tPyObject *args, PyObject *kwds);\n\nextern char pbsv1mod_meth_get_vnode_doc[];\nextern PyObject *pbsv1mod_meth_get_vnode(PyObject *self,\n\t\t\t\t\t PyObject *args, PyObject *kwds);\n\nextern char pbsv1mod_meth_iter_nextfunc_doc[];\nextern PyObject *pbsv1mod_meth_iter_nextfunc(PyObject *self,\n\t\t\t\t\t     PyObject *args, PyObject *kwds);\n\nextern char pbsv1mod_meth_mark_vnode_set_doc[];\nextern PyObject *pbsv1mod_meth_mark_vnode_set(PyObject *self,\n\t\t\t\t\t      PyObject *args, PyObject *kwds);\n\nextern char pbsv1mod_meth_load_resource_value_doc[];\nextern PyObject *pbsv1mod_meth_load_resource_value(PyObject *self,\n\t\t\t\t\t\t   PyObject *args, PyObject *kwds);\n\nextern char pbsv1mod_meth_resource_str_value_doc[];\nextern PyObject *pbsv1mod_meth_resource_str_value(PyObject *self,\n\t\t\t\t\t\t  PyObject *args, PyObject *kwds);\n\nextern char pbsv1mod_meth_vnode_state_to_str_doc[];\nextern PyObject *pbsv1mod_meth_vnode_state_to_str(PyObject *self,\n\t\t\t\t\t\t  PyObject *args, PyObject *kwds);\n\nextern char pbsv1mod_meth_vnode_sharing_to_str_doc[];\nextern PyObject *pbsv1mod_meth_vnode_sharing_to_str(PyObject *self,\n\t\t\t\t\t\t    PyObject *args, PyObject *kwds);\n\nextern char pbsv1mod_meth_vnode_ntype_to_str_doc[];\nextern PyObject *pbsv1mod_meth_vnode_ntype_to_str(PyObject *self,\n\t\t\t\t\t\t  PyObject *args, PyObject *kwds);\n\nextern char pbsv1mod_meth_str_to_vnode_state_doc[];\nextern PyObject *pbsv1mod_meth_str_to_vnode_state(PyObject *self,\n\t\t\t\t\t\t  PyObject *args, PyObject *kwds);\n\nextern char 
pbsv1mod_meth_str_to_vnode_ntype_doc[];\nextern PyObject *pbsv1mod_meth_str_to_vnode_ntype(PyObject *self,\n\t\t\t\t\t\t  PyObject *args, PyObject *kwds);\n\nextern char pbsv1mod_meth_str_to_vnode_sharing_doc[];\nextern PyObject *pbsv1mod_meth_str_to_vnode_sharing(PyObject *self,\n\t\t\t\t\t\t    PyObject *args, PyObject *kwds);\n\nextern char pbsv1mod_meth_is_attrib_val_settable_doc[];\nextern PyObject *pbsv1mod_meth_is_attrib_val_settable(PyObject *self,\n\t\t\t\t\t\t      PyObject *args, PyObject *kwds);\n\nextern char pbsv1mod_meth_event_accept_doc[];\nextern PyObject *pbsv1mod_meth_event_accept(void);\n\nextern char pbsv1mod_meth_event_reject_doc[];\nextern PyObject *pbsv1mod_meth_event_reject(PyObject *self,\n\t\t\t\t\t    PyObject *args, PyObject *kwds);\n\nextern char pbsv1mod_meth_reboot_doc[];\nextern PyObject *pbsv1mod_meth_reboot(PyObject *self,\n\t\t\t\t      PyObject *args, PyObject *kwds);\n\nextern char pbsv1mod_meth_scheduler_restart_cycle_doc[];\nextern PyObject *pbsv1mod_meth_scheduler_restart_cycle(PyObject *self,\n\t\t\t\t\t\t       PyObject *args, PyObject *kwds);\n\nextern char pbsv1mod_meth_set_pbs_statobj_doc[];\nextern PyObject *pbsv1mod_meth_set_pbs_statobj(PyObject *self,\n\t\t\t\t\t       PyObject *args, PyObject *kwds);\n\nextern char pbsv1mod_meth_event_param_mod_allow_doc[];\nextern PyObject *pbsv1mod_meth_event_param_mod_allow(void);\n\nextern char pbsv1mod_meth_event_param_mod_disallow_doc[];\nextern PyObject *pbsv1mod_meth_event_param_mod_disallow(void);\n\nextern char pbsv1mod_meth_event_doc[];\nextern PyObject *pbsv1mod_meth_event(void);\n\nextern char pbsv1mod_meth_server_doc[];\nextern PyObject *pbsv1mod_meth_server(void);\n\nextern char pbsv1mod_meth_in_python_mode_doc[];\nextern PyObject *pbsv1mod_meth_in_python_mode(void);\n\nextern char pbsv1mod_meth_set_python_mode_doc[];\nextern PyObject *pbsv1mod_meth_set_python_mode(void);\n\nextern char pbsv1mod_meth_set_c_mode_doc[];\nextern PyObject 
*pbsv1mod_meth_set_c_mode(void);\n\nextern char pbsv1mod_meth_get_python_daemon_name_doc[];\nextern PyObject *pbsv1mod_meth_get_python_daemon_name(void);\n\nextern char pbsv1mod_meth_get_pbs_server_name_doc[];\nextern PyObject *pbsv1mod_meth_get_pbs_server_name(void);\n\nextern char pbsv1mod_meth_get_local_host_name_doc[];\nextern PyObject *pbsv1mod_meth_get_local_host_name(void);\n\nextern char pbsv1mod_meth_get_pbs_conf_doc[];\nextern PyObject *pbsv1mod_meth_get_pbs_conf(void);\n\nextern char pbsv1mod_meth_in_site_hook_doc[];\nextern PyObject *pbsv1mod_meth_in_site_hook(void);\n\nextern char pbsv1mod_meth_validate_input_doc[];\nextern PyObject *pbsv1mod_meth_validate_input(PyObject *self,\n\t\t\t\t\t      PyObject *args, PyObject *kwds);\n\nextern char pbsv1mod_meth_duration_to_secs_doc[];\nextern PyObject *pbsv1mod_meth_duration_to_secs(PyObject *self,\n\t\t\t\t\t\tPyObject *args, PyObject *kwds);\n\nextern char pbsv1mod_meth_wordsize_doc[];\nextern PyObject *pbsv1mod_meth_wordsize(void);\n\nextern char pbsv1mod_meth_size_to_kbytes_doc[];\nextern PyObject *pbsv1mod_meth_size_to_kbytes(PyObject *self,\n\t\t\t\t\t      PyObject *args, PyObject *kwds);\n\nextern char pbsv1mod_meth_get_server_static_doc[];\nextern PyObject *pbsv1mod_meth_get_server_static(PyObject *self,\n\t\t\t\t\t\t PyObject *args, PyObject *kwds);\n\nextern char pbsv1mod_meth_get_vnode_static_doc[];\nextern PyObject *pbsv1mod_meth_get_vnode_static(PyObject *self,\n\t\t\t\t\t\tPyObject *args, PyObject *kwds);\n\nextern char pbsv1mod_meth_get_job_static_doc[];\nextern PyObject *pbsv1mod_meth_get_job_static(PyObject *self,\n\t\t\t\t\t      PyObject *args, PyObject *kwds);\n\nextern char pbsv1mod_meth_get_resv_static_doc[];\nextern PyObject *pbsv1mod_meth_get_resv_static(PyObject *self,\n\t\t\t\t\t       PyObject *args, PyObject *kwds);\n\nextern char pbsv1mod_meth_get_vnode_static_doc[];\nextern PyObject *pbsv1mod_meth_get_vnode_static(PyObject *self,\n\t\t\t\t\t\tPyObject *args, PyObject 
*kwds);\n\nextern char pbsv1mod_meth_get_queue_static_doc[];\nextern PyObject *pbsv1mod_meth_get_queue_static(PyObject *self,\n\t\t\t\t\t\tPyObject *args, PyObject *kwds);\n\nextern char pbsv1mod_meth_get_server_data_fp_doc[];\nextern PyObject *pbsv1mod_meth_get_server_data_fp(void);\n\nextern char pbsv1mod_meth_get_server_data_file_doc[];\nextern PyObject *pbsv1mod_meth_get_server_data_file(void);\n\nextern char pbsv1mod_meth_use_static_data_doc[];\nextern PyObject *pbsv1mod_meth_use_static_data(void);\n\n/* private */\nstatic PyObject *PyPbsV1ModuleExtension_Obj = NULL; /* BORROWED reference */\n\n/*  -----                    MODULE HELPER FUNCTIONS          -----    */\n\n#define INSERT_STR_CONSTANT(k, v)                                                \\\n\tdo {                                                                     \\\n\t\tif ((pbs_python_dict_set_item_string_value(dict, k, v) == -1)) { \\\n\t\t\treturn -1;                                               \\\n\t\t}                                                                \\\n\t} while (0)\n\n/* special job states */\n#define JOB_STATE_SUSPEND 400\n#define JOB_STATE_SUSPEND_USERACTIVE 410\n\n/**\n * @brief\n * \t_pv1mod_insert_str_constants:\n *   \tinsert all PBS string constants\n *\n * @param[in] dict - dictionary object\n *\n * @return\tint\n * @retval\t0\n */\n\nstatic int\n_pv1mod_insert_str_constants(PyObject *dict)\n{\n\n\treturn 0;\n}\n\n#define INSERT_INT_CONSTANT(k, v)                                                  \\\n\tdo {                                                                       \\\n\t\tif ((pbs_python_dict_set_item_integral_value(dict, k, v) == -1)) { \\\n\t\t\treturn -1;                                                 \\\n\t\t}                                                                  \\\n\t} while (0)\n\n/**\n * @brief\n * \t_pv1mod_insert_str_constants:\n *   \tinsert all PBS string constants\n *\n * @param[in] dict - dictionary object\n *\n * @return      
int\n * @retval      0\n *\n */\n\nstatic int\n_pv1mod_insert_int_constants(PyObject *dict)\n{\n\n\t/* first QTYPES */\n\tINSERT_INT_CONSTANT(\"QTYPE_EXECUTION\", QTYPE_Execution);\n\tINSERT_INT_CONSTANT(\"QTYPE_ROUTE\", QTYPE_RoutePush);\n\n\t/* reservation states */\n\tINSERT_INT_CONSTANT(\"RESV_STATE_NONE\", RESV_NONE);\n\tINSERT_INT_CONSTANT(\"RESV_STATE_UNCONFIRMED\", RESV_UNCONFIRMED);\n\tINSERT_INT_CONSTANT(\"RESV_STATE_CONFIRMED\", RESV_CONFIRMED);\n\tINSERT_INT_CONSTANT(\"RESV_STATE_WAIT\", RESV_WAIT);\n\tINSERT_INT_CONSTANT(\"RESV_STATE_TIME_TO_RUN\", RESV_TIME_TO_RUN);\n\tINSERT_INT_CONSTANT(\"RESV_STATE_RUNNING\", RESV_RUNNING);\n\tINSERT_INT_CONSTANT(\"RESV_STATE_FINISHED\", RESV_FINISHED);\n\tINSERT_INT_CONSTANT(\"RESV_STATE_BEING_DELETED\", RESV_BEING_DELETED);\n\tINSERT_INT_CONSTANT(\"RESV_STATE_DELETED\", RESV_DELETED);\n\tINSERT_INT_CONSTANT(\"RESV_STATE_DELETING_JOBS\", RESV_DELETING_JOBS);\n\tINSERT_INT_CONSTANT(\"RESV_STATE_DEGRADED\", RESV_DEGRADED);\n\tINSERT_INT_CONSTANT(\"RESV_STATE_BEING_ALTERED\", RESV_BEING_ALTERED);\n\tINSERT_INT_CONSTANT(\"RESV_STATE_IN_CONFLICT\", RESV_IN_CONFLICT);\n\n\t/* job states */\n\tINSERT_INT_CONSTANT(\"JOB_STATE_TRANSIT\", JOB_STATE_TRANSIT);\n\tINSERT_INT_CONSTANT(\"JOB_STATE_QUEUED\", JOB_STATE_QUEUED);\n\tINSERT_INT_CONSTANT(\"JOB_STATE_HELD\", JOB_STATE_HELD);\n\tINSERT_INT_CONSTANT(\"JOB_STATE_WAITING\", JOB_STATE_WAITING);\n\tINSERT_INT_CONSTANT(\"JOB_STATE_RUNNING\", JOB_STATE_RUNNING);\n\tINSERT_INT_CONSTANT(\"JOB_STATE_EXITING\", JOB_STATE_EXITING);\n\tINSERT_INT_CONSTANT(\"JOB_STATE_EXPIRED\", JOB_STATE_EXPIRED);\n\tINSERT_INT_CONSTANT(\"JOB_STATE_BEGUN\", JOB_STATE_BEGUN);\n\tINSERT_INT_CONSTANT(\"JOB_STATE_SUSPEND\", JOB_STATE_SUSPEND);\n\tINSERT_INT_CONSTANT(\"JOB_STATE_SUSPEND_USERACTIVE\", JOB_STATE_SUSPEND_USERACTIVE);\n\tINSERT_INT_CONSTANT(\"JOB_STATE_MOVED\", JOB_STATE_MOVED);\n\tINSERT_INT_CONSTANT(\"JOB_STATE_FINISHED\", JOB_STATE_FINISHED);\n\n\t/* job substates 
*/\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_UNKNOWN\", JOB_SUBSTATE_UNKNOWN);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_TRANSIN\", JOB_SUBSTATE_TRANSIN);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_TRANSICM\", JOB_SUBSTATE_TRANSICM);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_TRNOUT\", JOB_SUBSTATE_TRNOUT);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_TRNOUTCM\", JOB_SUBSTATE_TRNOUTCM);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_QUEUED\", JOB_SUBSTATE_QUEUED);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_PRESTAGEIN\", JOB_SUBSTATE_PRESTAGEIN);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_SYNCRES\", JOB_SUBSTATE_SYNCRES);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_STAGEIN\", JOB_SUBSTATE_STAGEIN);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_STAGEGO\", JOB_SUBSTATE_STAGEGO);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_STAGECMP\", JOB_SUBSTATE_STAGECMP);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_HELD\", JOB_SUBSTATE_HELD);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_SYNCHOLD\", JOB_SUBSTATE_SYNCHOLD);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_DEPNHOLD\", JOB_SUBSTATE_DEPNHOLD);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_WAITING\", JOB_SUBSTATE_WAITING);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_STAGEFAIL\", JOB_SUBSTATE_STAGEFAIL);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_PRERUN\", JOB_SUBSTATE_PRERUN);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_RUNNING\", JOB_SUBSTATE_RUNNING);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_SUSPEND\", JOB_SUBSTATE_SUSPEND);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_SCHSUSP\", JOB_SUBSTATE_SCHSUSP);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_EXITING\", JOB_SUBSTATE_EXITING);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_STAGEOUT\", JOB_SUBSTATE_STAGEOUT);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_STAGEDEL\", JOB_SUBSTATE_STAGEDEL);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_EXITED\", JOB_SUBSTATE_EXITED);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_ABORT\", JOB_SUBSTATE_ABORT);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_KILLSIS\", JOB_SUBSTATE_KILLSIS);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_RUNEPILOG\", 
JOB_SUBSTATE_RUNEPILOG);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_OBIT\", JOB_SUBSTATE_OBIT);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_TERM\", JOB_SUBSTATE_TERM);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_DELJOB\", JOB_SUBSTATE_DELJOB);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_RERUN\", JOB_SUBSTATE_RERUN);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_RERUN1\", JOB_SUBSTATE_RERUN1);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_RERUN2\", JOB_SUBSTATE_RERUN2);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_RERUN3\", JOB_SUBSTATE_RERUN3);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_EXPIRED\", JOB_SUBSTATE_EXPIRED);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_BEGUN\", JOB_SUBSTATE_BEGUN);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_PROVISION\", JOB_SUBSTATE_PROVISION);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_WAITING_JOIN_JOB\", JOB_SUBSTATE_WAITING_JOIN_JOB);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_TERMINATED\", JOB_SUBSTATE_TERMINATED);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_FINISHED\", JOB_SUBSTATE_FINISHED);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_FAILED\", JOB_SUBSTATE_FAILED);\n\tINSERT_INT_CONSTANT(\"JOB_SUBSTATE_MOVED\", JOB_SUBSTATE_MOVED);\n\n\t/* server states */\n\tINSERT_INT_CONSTANT(\"SV_STATE_IDLE\", SV_STATE_INIT);\n\tINSERT_INT_CONSTANT(\"SV_STATE_ACTIVE\", SV_STATE_RUN);\n\tINSERT_INT_CONSTANT(\"SV_STATE_HOT\", SV_STATE_HOT);\n\tINSERT_INT_CONSTANT(\"SV_STATE_SHUTDEL\", SV_STATE_SHUTDEL);\n\tINSERT_INT_CONSTANT(\"SV_STATE_SHUTIMM\", SV_STATE_SHUTIMM);\n\tINSERT_INT_CONSTANT(\"SV_STATE_SHUTSIG\", SV_STATE_SHUTSIG);\n\n\t/* Log message severity */\n\tINSERT_INT_CONSTANT(\"LOG_DEBUG\", SEVERITY_LOG_DEBUG);\n\tINSERT_INT_CONSTANT(\"LOG_WARNING\", SEVERITY_LOG_WARNING);\n\tINSERT_INT_CONSTANT(\"LOG_ERROR\", SEVERITY_LOG_ERR);\n\n\t/* Log events levels */\n\tINSERT_INT_CONSTANT(\"EVENT_ERROR\", PBSEVENT_ERROR);\n\tINSERT_INT_CONSTANT(\"EVENT_SYSTEM\", PBSEVENT_SYSTEM);\n\tINSERT_INT_CONSTANT(\"EVENT_ADMIN\", PBSEVENT_ADMIN);\n\tINSERT_INT_CONSTANT(\"EVENT_JOB\", 
PBSEVENT_JOB);\n\tINSERT_INT_CONSTANT(\"EVENT_JOB_USAGE\", PBSEVENT_JOB_USAGE);\n\tINSERT_INT_CONSTANT(\"EVENT_SECURITY\", PBSEVENT_SECURITY);\n\tINSERT_INT_CONSTANT(\"EVENT_SCHED\", PBSEVENT_SCHED);\n\tINSERT_INT_CONSTANT(\"EVENT_DEBUG\", PBSEVENT_DEBUG);\n\tINSERT_INT_CONSTANT(\"EVENT_DEBUG2\", PBSEVENT_DEBUG2);\n\tINSERT_INT_CONSTANT(\"EVENT_RESV\", PBSEVENT_RESV);\n\tINSERT_INT_CONSTANT(\"EVENT_DEBUG3\", PBSEVENT_DEBUG3);\n\tINSERT_INT_CONSTANT(\"EVENT_DEBUG4\", PBSEVENT_DEBUG4);\n\tINSERT_INT_CONSTANT(\"EVENT_FORCE\", PBSEVENT_FORCE);\n\n\t/* Event types */\n\tINSERT_INT_CONSTANT(\"QUEUEJOB\", HOOK_EVENT_QUEUEJOB);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_QUEUEJOB\", HOOK_EVENT_QUEUEJOB);\n\tINSERT_INT_CONSTANT(\"POSTQUEUEJOB\", HOOK_EVENT_POSTQUEUEJOB);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_POSTQUEUEJOB\", HOOK_EVENT_POSTQUEUEJOB);\n\tINSERT_INT_CONSTANT(\"MODIFYJOB\", HOOK_EVENT_MODIFYJOB);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_MODIFYJOB\", HOOK_EVENT_MODIFYJOB);\n\tINSERT_INT_CONSTANT(\"RESVSUB\", HOOK_EVENT_RESVSUB);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_RESVSUB\", HOOK_EVENT_RESVSUB);\n\tINSERT_INT_CONSTANT(\"MODIFYRESV\", HOOK_EVENT_MODIFYRESV);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_MODIFYRESV\", HOOK_EVENT_MODIFYRESV);\n\tINSERT_INT_CONSTANT(\"MOVEJOB\", HOOK_EVENT_MOVEJOB);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_MOVEJOB\", HOOK_EVENT_MOVEJOB);\n\tINSERT_INT_CONSTANT(\"RUNJOB\", HOOK_EVENT_RUNJOB);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_RUNJOB\", HOOK_EVENT_RUNJOB);\n\tINSERT_INT_CONSTANT(\"JOBOBIT\", HOOK_EVENT_JOBOBIT);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_JOBOBIT\", HOOK_EVENT_JOBOBIT);\n\tINSERT_INT_CONSTANT(\"MANAGEMENT\", HOOK_EVENT_MANAGEMENT);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_MANAGEMENT\", HOOK_EVENT_MANAGEMENT);\n\tINSERT_INT_CONSTANT(\"MODIFYVNODE\", HOOK_EVENT_MODIFYVNODE);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_MODIFYVNODE\", HOOK_EVENT_MODIFYVNODE);\n\tINSERT_INT_CONSTANT(\"PROVISION\", HOOK_EVENT_PROVISION);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_PROVISION\", 
HOOK_EVENT_PROVISION);\n\tINSERT_INT_CONSTANT(\"RESV_END\", HOOK_EVENT_RESV_END);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_RESV_END\", HOOK_EVENT_RESV_END);\n\tINSERT_INT_CONSTANT(\"RESV_BEGIN\", HOOK_EVENT_RESV_BEGIN);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_RESV_BEGIN\", HOOK_EVENT_RESV_BEGIN);\n\tINSERT_INT_CONSTANT(\"RESV_CONFIRM\", HOOK_EVENT_RESV_CONFIRM);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_RESV_CONFIRM\", HOOK_EVENT_RESV_CONFIRM);\n\tINSERT_INT_CONSTANT(\"EXECJOB_BEGIN\", HOOK_EVENT_EXECJOB_BEGIN);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_EXECJOB_BEGIN\", HOOK_EVENT_EXECJOB_BEGIN);\n\tINSERT_INT_CONSTANT(\"EXECJOB_PROLOGUE\", HOOK_EVENT_EXECJOB_PROLOGUE);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_EXECJOB_PROLOGUE\", HOOK_EVENT_EXECJOB_PROLOGUE);\n\tINSERT_INT_CONSTANT(\"EXECJOB_EPILOGUE\", HOOK_EVENT_EXECJOB_EPILOGUE);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_EXECJOB_EPILOGUE\", HOOK_EVENT_EXECJOB_EPILOGUE);\n\tINSERT_INT_CONSTANT(\"EXECJOB_PRETERM\", HOOK_EVENT_EXECJOB_PRETERM);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_EXECJOB_PRETERM\", HOOK_EVENT_EXECJOB_PRETERM);\n\tINSERT_INT_CONSTANT(\"EXECJOB_END\", HOOK_EVENT_EXECJOB_END);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_EXECJOB_END\", HOOK_EVENT_EXECJOB_END);\n\tINSERT_INT_CONSTANT(\"EXECJOB_LAUNCH\", HOOK_EVENT_EXECJOB_LAUNCH);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_EXECJOB_LAUNCH\", HOOK_EVENT_EXECJOB_LAUNCH);\n\tINSERT_INT_CONSTANT(\"EXECHOST_PERIODIC\", HOOK_EVENT_EXECHOST_PERIODIC);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_EXECHOST_PERIODIC\", HOOK_EVENT_EXECHOST_PERIODIC);\n\tINSERT_INT_CONSTANT(\"EXECHOST_STARTUP\", HOOK_EVENT_EXECHOST_STARTUP);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_EXECHOST_STARTUP\", HOOK_EVENT_EXECHOST_STARTUP);\n\tINSERT_INT_CONSTANT(\"EXECJOB_ATTACH\", HOOK_EVENT_EXECJOB_ATTACH);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_EXECJOB_ATTACH\", HOOK_EVENT_EXECJOB_ATTACH);\n\tINSERT_INT_CONSTANT(\"EXECJOB_RESIZE\", HOOK_EVENT_EXECJOB_RESIZE);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_EXECJOB_RESIZE\", 
HOOK_EVENT_EXECJOB_RESIZE);\n\tINSERT_INT_CONSTANT(\"EXECJOB_ABORT\", HOOK_EVENT_EXECJOB_ABORT);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_EXECJOB_ABORT\", HOOK_EVENT_EXECJOB_ABORT);\n\tINSERT_INT_CONSTANT(\"EXECJOB_POSTSUSPEND\", HOOK_EVENT_EXECJOB_POSTSUSPEND);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_EXECJOB_POSTSUSPEND\", HOOK_EVENT_EXECJOB_POSTSUSPEND);\n\tINSERT_INT_CONSTANT(\"EXECJOB_PRERESUME\", HOOK_EVENT_EXECJOB_PRERESUME);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_EXECJOB_PRERESUME\", HOOK_EVENT_EXECJOB_PRERESUME);\n\tINSERT_INT_CONSTANT(\"MOM_EVENTS\", MOM_EVENTS);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_MOM_EVENTS\", MOM_EVENTS);\n\tINSERT_INT_CONSTANT(\"PERIODIC\", HOOK_EVENT_PERIODIC);\n\tINSERT_INT_CONSTANT(\"HOOK_EVENT_PERIODIC\", HOOK_EVENT_PERIODIC);\n\n\t/* Vnode State Constants */\n\tINSERT_INT_CONSTANT(\"ND_FREE\", INUSE_FREE);\n\tINSERT_INT_CONSTANT(\"ND_STATE_FREE\", INUSE_FREE);\n\tINSERT_INT_CONSTANT(\"ND_OFFLINE\", INUSE_OFFLINE);\n\tINSERT_INT_CONSTANT(\"ND_STATE_OFFLINE\", INUSE_OFFLINE);\n\tINSERT_INT_CONSTANT(\"ND_DOWN\", INUSE_DOWN);\n\tINSERT_INT_CONSTANT(\"ND_STATE_DOWN\", INUSE_DOWN);\n\tINSERT_INT_CONSTANT(\"ND_STATE_DELETED\", INUSE_DELETED);\n\tINSERT_INT_CONSTANT(\"ND_STALE\", INUSE_STALE);\n\tINSERT_INT_CONSTANT(\"ND_STATE_STALE\", INUSE_STALE);\n\tINSERT_INT_CONSTANT(\"ND_JOBBUSY\", INUSE_JOB);\n\tINSERT_INT_CONSTANT(\"ND_STATE_JOBBUSY\", INUSE_JOB);\n\tINSERT_INT_CONSTANT(\"ND_JOB_EXCLUSIVE\", INUSE_JOBEXCL);\n\tINSERT_INT_CONSTANT(\"ND_STATE_JOB_EXCLUSIVE\", INUSE_JOBEXCL);\n\tINSERT_INT_CONSTANT(\"ND_RESV_EXCLUSIVE\", INUSE_RESVEXCL);\n\tINSERT_INT_CONSTANT(\"ND_STATE_RESV_EXCLUSIVE\", INUSE_RESVEXCL);\n\tINSERT_INT_CONSTANT(\"ND_BUSY\", INUSE_BUSY);\n\tINSERT_INT_CONSTANT(\"ND_STATE_BUSY\", INUSE_BUSY);\n\tINSERT_INT_CONSTANT(\"ND_STATE_UNKNOWN\", INUSE_UNKNOWN);\n\tINSERT_INT_CONSTANT(\"ND_STATE_NEEDS_HELLOSVR\", INUSE_NEEDS_HELLOSVR);\n\tINSERT_INT_CONSTANT(\"ND_STATE_INIT\", INUSE_INIT);\n\tINSERT_INT_CONSTANT(\"ND_PROV\", 
INUSE_PROV);\n\tINSERT_INT_CONSTANT(\"ND_STATE_PROV\", INUSE_PROV);\n\tINSERT_INT_CONSTANT(\"ND_WAIT_PROV\", INUSE_WAIT_PROV);\n\tINSERT_INT_CONSTANT(\"ND_STATE_WAIT_PROV\", INUSE_WAIT_PROV);\n\tINSERT_INT_CONSTANT(\"ND_UNRESOLVABLE\", INUSE_UNRESOLVABLE);\n\tINSERT_INT_CONSTANT(\"ND_STATE_UNRESOLVABLE\", INUSE_UNRESOLVABLE);\n\tINSERT_INT_CONSTANT(\"ND_SLEEP\", INUSE_SLEEP);\n\tINSERT_INT_CONSTANT(\"ND_STATE_SLEEP\", INUSE_SLEEP);\n\tINSERT_INT_CONSTANT(\"ND_STATE_OFFLINE_BY_MOM\", INUSE_OFFLINE_BY_MOM);\n\tINSERT_INT_CONSTANT(\"ND_STATE_MARKEDDOWN\", INUSE_MARKEDDOWN);\n\tINSERT_INT_CONSTANT(\"ND_STATE_NEED_ADDRS\", INUSE_NEED_ADDRS);\n\tINSERT_INT_CONSTANT(\"ND_STATE_MAINTENANCE\", INUSE_MAINTENANCE);\n\tINSERT_INT_CONSTANT(\"ND_STATE_NEED_CREDENTIALS\", INUSE_NEED_CREDENTIALS);\n\tINSERT_INT_CONSTANT(\"ND_STATE_VNODE_UNAVAILABLE\", VNODE_UNAVAILABLE);\n\n\t/* Vnode Type Constants */\n\tINSERT_INT_CONSTANT(\"ND_PBS\", NTYPE_PBS);\n\n\t/* Vnode Sharing Constants */\n\tINSERT_INT_CONSTANT(\"ND_DEFAULT_SHARED\", VNS_DFLT_SHARED);\n\tINSERT_INT_CONSTANT(\"ND_DEFAULT_EXCL\", VNS_DFLT_EXCL);\n\tINSERT_INT_CONSTANT(\"ND_FORCE_EXCL\", VNS_FORCE_EXCL);\n\tINSERT_INT_CONSTANT(\"ND_IGNORE_EXCL\", VNS_IGNORE_EXCL);\n\tINSERT_INT_CONSTANT(\"ND_FORCE_EXCLHOST\", VNS_FORCE_EXCLHOST);\n\tINSERT_INT_CONSTANT(\"ND_DEFAULT_EXCLHOST\", VNS_DFLT_EXCLHOST);\n\n\tINSERT_INT_CONSTANT(\"MGR_CMD_NONE\", MGR_CMD_NONE);\n\tINSERT_INT_CONSTANT(\"MGR_CMD_CREATE\", MGR_CMD_CREATE);\n\tINSERT_INT_CONSTANT(\"MGR_CMD_DELETE\", MGR_CMD_DELETE);\n\tINSERT_INT_CONSTANT(\"MGR_CMD_SET\", MGR_CMD_SET);\n\tINSERT_INT_CONSTANT(\"MGR_CMD_UNSET\", MGR_CMD_UNSET);\n\tINSERT_INT_CONSTANT(\"MGR_CMD_LIST\", MGR_CMD_LIST);\n\tINSERT_INT_CONSTANT(\"MGR_CMD_PRINT\", MGR_CMD_PRINT);\n\tINSERT_INT_CONSTANT(\"MGR_CMD_ACTIVE\", MGR_CMD_ACTIVE);\n\tINSERT_INT_CONSTANT(\"MGR_CMD_IMPORT\", MGR_CMD_IMPORT);\n\tINSERT_INT_CONSTANT(\"MGR_CMD_EXPORT\", MGR_CMD_EXPORT);\n\tINSERT_INT_CONSTANT(\"MGR_CMD_LAST\", 
MGR_CMD_LAST);\n\n\tINSERT_INT_CONSTANT(\"MGR_OBJ_NONE\", MGR_OBJ_NONE);\n\tINSERT_INT_CONSTANT(\"MGR_OBJ_SERVER\", MGR_OBJ_SERVER);\n\tINSERT_INT_CONSTANT(\"MGR_OBJ_QUEUE\", MGR_OBJ_QUEUE);\n\tINSERT_INT_CONSTANT(\"MGR_OBJ_JOB\", MGR_OBJ_JOB);\n\tINSERT_INT_CONSTANT(\"MGR_OBJ_NODE\", MGR_OBJ_NODE);\n\tINSERT_INT_CONSTANT(\"MGR_OBJ_RESV\", MGR_OBJ_RESV);\n\tINSERT_INT_CONSTANT(\"MGR_OBJ_RSC\", MGR_OBJ_RSC);\n\tINSERT_INT_CONSTANT(\"MGR_OBJ_SCHED\", MGR_OBJ_SCHED);\n\tINSERT_INT_CONSTANT(\"MGR_OBJ_HOST\", MGR_OBJ_HOST);\n\tINSERT_INT_CONSTANT(\"MGR_OBJ_HOOK\", MGR_OBJ_HOOK);\n\tINSERT_INT_CONSTANT(\"MGR_OBJ_PBS_HOOK\", MGR_OBJ_PBS_HOOK);\n\tINSERT_INT_CONSTANT(\"MGR_OBJ_LAST\", MGR_OBJ_LAST);\n\n\tINSERT_INT_CONSTANT(\"BRP_CHOICE_NULL\", BATCH_REPLY_CHOICE_NULL);\n\tINSERT_INT_CONSTANT(\"BRP_CHOICE_Queue\", BATCH_REPLY_CHOICE_Queue);\n\tINSERT_INT_CONSTANT(\"BRP_CHOICE_RdytoCom\", BATCH_REPLY_CHOICE_RdytoCom);\n\tINSERT_INT_CONSTANT(\"BRP_CHOICE_Commit\", BATCH_REPLY_CHOICE_Commit);\n\tINSERT_INT_CONSTANT(\"BRP_CHOICE_Select\", BATCH_REPLY_CHOICE_Select);\n\tINSERT_INT_CONSTANT(\"BRP_CHOICE_Status\", BATCH_REPLY_CHOICE_Status);\n\tINSERT_INT_CONSTANT(\"BRP_CHOICE_Text\", BATCH_REPLY_CHOICE_Text);\n\tINSERT_INT_CONSTANT(\"BRP_CHOICE_Locate\", BATCH_REPLY_CHOICE_Locate);\n\tINSERT_INT_CONSTANT(\"BRP_CHOICE_RescQuery\", BATCH_REPLY_CHOICE_RescQuery);\n\tINSERT_INT_CONSTANT(\"BRP_CHOICE_PreemptJobs\", BATCH_REPLY_CHOICE_PreemptJobs);\n\n\t/* the pair to this list is in pbs_ifl.h and must be updated to reflect any changes */\n\tINSERT_INT_CONSTANT(\"BATCH_OP_SET\", SET);\n\tINSERT_INT_CONSTANT(\"BATCH_OP_UNSET\", UNSET);\n\tINSERT_INT_CONSTANT(\"BATCH_OP_INCR\", INCR);\n\tINSERT_INT_CONSTANT(\"BATCH_OP_DECR\", DECR);\n\tINSERT_INT_CONSTANT(\"BATCH_OP_EQ\", EQ);\n\tINSERT_INT_CONSTANT(\"BATCH_OP_NE\", NE);\n\tINSERT_INT_CONSTANT(\"BATCH_OP_GE\", GE);\n\tINSERT_INT_CONSTANT(\"BATCH_OP_GT\", GT);\n\tINSERT_INT_CONSTANT(\"BATCH_OP_LE\", 
LE);\n\tINSERT_INT_CONSTANT(\"BATCH_OP_LT\", LT);\n\tINSERT_INT_CONSTANT(\"BATCH_OP_DFLT\", DFLT);\n\n\tINSERT_INT_CONSTANT(\"ATR_VFLAG_SET\", ATR_VFLAG_SET);\n\tINSERT_INT_CONSTANT(\"ATR_VFLAG_MODIFY\", ATR_VFLAG_MODIFY);\n\tINSERT_INT_CONSTANT(\"ATR_VFLAG_DEFLT\", ATR_VFLAG_DEFLT);\n\tINSERT_INT_CONSTANT(\"ATR_VFLAG_MODCACHE\", ATR_VFLAG_MODCACHE);\n\tINSERT_INT_CONSTANT(\"ATR_VFLAG_INDIRECT\", ATR_VFLAG_INDIRECT);\n\tINSERT_INT_CONSTANT(\"ATR_VFLAG_TARGET\", ATR_VFLAG_TARGET);\n\tINSERT_INT_CONSTANT(\"ATR_VFLAG_HOOK\", ATR_VFLAG_HOOK);\n\n\treturn 0;\n}\n\n/*  -----                    MODULE METHODS                   -----    */\n\nstatic PyMethodDef pbs_v1_module_methods[] = {\n\t{\"wordsize\",\n\t (PyCFunction) pbsv1mod_meth_wordsize,\n\t METH_NOARGS, pbsv1mod_meth_wordsize_doc},\n\t{\"in_python_mode\",\n\t (PyCFunction) pbsv1mod_meth_in_python_mode,\n\t METH_NOARGS, pbsv1mod_meth_in_python_mode_doc},\n\t{\"in_site_hook\",\n\t (PyCFunction) pbsv1mod_meth_in_site_hook,\n\t METH_NOARGS, pbsv1mod_meth_in_site_hook_doc},\n\t{\"duration_to_secs\",\n\t (PyCFunction) pbsv1mod_meth_duration_to_secs,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_duration_to_secs_doc},\n\t{\"validate_input\",\n\t (PyCFunction) pbsv1mod_meth_validate_input,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_validate_input_doc},\n\t{\"event\",\n\t (PyCFunction) pbsv1mod_meth_event,\n\t METH_NOARGS, pbsv1mod_meth_event_doc},\n\t{\"server\",\n\t (PyCFunction) pbsv1mod_meth_server,\n\t METH_NOARGS, pbsv1mod_meth_server_doc},\n\t{\"_event_accept\",\n\t (PyCFunction) pbsv1mod_meth_event_accept,\n\t METH_NOARGS, pbsv1mod_meth_event_accept_doc},\n\t{\"_event_reject\",\n\t (PyCFunction) pbsv1mod_meth_event_reject,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_event_reject_doc},\n\t{\"_event_param_mod_allow\",\n\t (PyCFunction) pbsv1mod_meth_event_param_mod_allow,\n\t METH_NOARGS, pbsv1mod_meth_event_param_mod_allow_doc},\n\t{\"_event_param_mod_disallow\",\n\t (PyCFunction) 
pbsv1mod_meth_event_param_mod_disallow,\n\t METH_NOARGS, pbsv1mod_meth_event_param_mod_disallow_doc},\n\t{\"is_attrib_val_settable\", (PyCFunction) pbsv1mod_meth_is_attrib_val_settable,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_is_attrib_val_settable_doc},\n\t{\"get_queue\", (PyCFunction) pbsv1mod_meth_get_queue,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_get_queue_doc},\n\t{\"get_job\", (PyCFunction) pbsv1mod_meth_get_job,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_get_job_doc},\n\t{\"release_nodes\", (PyCFunction) pbsv1mod_meth_release_nodes,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_release_nodes_doc},\n\t{PY_GETRESV_METHOD, (PyCFunction) pbsv1mod_meth_get_resv,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_get_resv_doc},\n\t{PY_GETVNODE_METHOD, (PyCFunction) pbsv1mod_meth_get_vnode,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_get_vnode_doc},\n\t{PY_ITER_NEXTFUNC_METHOD, (PyCFunction) pbsv1mod_meth_iter_nextfunc,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_iter_nextfunc_doc},\n\t{PY_MARK_VNODE_SET_METHOD, (PyCFunction) pbsv1mod_meth_mark_vnode_set,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_mark_vnode_set_doc},\n\t{PY_LOAD_RESOURCE_VALUE_METHOD,\n\t (PyCFunction) pbsv1mod_meth_load_resource_value,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_load_resource_value_doc},\n\t{PY_RESOURCE_STR_VALUE_METHOD,\n\t (PyCFunction) pbsv1mod_meth_resource_str_value,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_resource_str_value_doc},\n\t{PY_VNODE_STATE_TO_STR_METHOD,\n\t (PyCFunction) pbsv1mod_meth_vnode_state_to_str,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_vnode_state_to_str_doc},\n\t{PY_VNODE_SHARING_TO_STR_METHOD,\n\t (PyCFunction) pbsv1mod_meth_vnode_sharing_to_str,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_vnode_sharing_to_str_doc},\n\t{PY_VNODE_NTYPE_TO_STR_METHOD,\n\t (PyCFunction) pbsv1mod_meth_vnode_ntype_to_str,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_vnode_ntype_to_str_doc},\n\t{\"logmsg\", 
(PyCFunction) pbsv1mod_meth_logmsg,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_logmsg_doc},\n\t{PY_LOGJOBMSG_METHOD, (PyCFunction) pbsv1mod_meth_logjobmsg,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_logjobmsg_doc},\n\t{PY_GET_PYTHON_DAEMON_NAME_METHOD,\n\t (PyCFunction) pbsv1mod_meth_get_python_daemon_name,\n\t METH_NOARGS, pbsv1mod_meth_get_python_daemon_name_doc},\n\t{PY_GET_PBS_SERVER_NAME_METHOD,\n\t (PyCFunction) pbsv1mod_meth_get_pbs_server_name,\n\t METH_NOARGS, pbsv1mod_meth_get_pbs_server_name_doc},\n\t{PY_GET_LOCAL_HOST_NAME_METHOD,\n\t (PyCFunction) pbsv1mod_meth_get_local_host_name,\n\t METH_NOARGS, pbsv1mod_meth_get_local_host_name_doc},\n\t{PY_SET_PYTHON_MODE_METHOD,\n\t (PyCFunction) pbsv1mod_meth_set_python_mode,\n\t METH_NOARGS, pbsv1mod_meth_set_python_mode_doc},\n\t{PY_SET_C_MODE_METHOD,\n\t (PyCFunction) pbsv1mod_meth_set_c_mode,\n\t METH_NOARGS, pbsv1mod_meth_set_c_mode_doc},\n\t{PY_STR_TO_VNODE_STATE_METHOD, (PyCFunction) pbsv1mod_meth_str_to_vnode_state,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_str_to_vnode_state_doc},\n\t{PY_STR_TO_VNODE_NTYPE_METHOD, (PyCFunction) pbsv1mod_meth_str_to_vnode_ntype,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_str_to_vnode_ntype_doc},\n\t{PY_STR_TO_VNODE_SHARING_METHOD, (PyCFunction) pbsv1mod_meth_str_to_vnode_sharing,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_str_to_vnode_sharing_doc},\n\t{PY_REBOOT_HOST_METHOD,\n\t (PyCFunction) pbsv1mod_meth_reboot,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_reboot_doc},\n\t{PY_SCHEDULER_RESTART_CYCLE_METHOD,\n\t (PyCFunction) pbsv1mod_meth_scheduler_restart_cycle,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_scheduler_restart_cycle_doc},\n\t{PY_SET_PBS_STATOBJ_METHOD,\n\t (PyCFunction) pbsv1mod_meth_set_pbs_statobj,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_set_pbs_statobj_doc},\n\t{PY_SIZE_TO_KBYTES_METHOD, (PyCFunction) pbsv1mod_meth_size_to_kbytes,\n\t METH_VARARGS | METH_KEYWORDS, 
pbsv1mod_meth_size_to_kbytes_doc},\n\t{PY_GET_SERVER_STATIC_METHOD, (PyCFunction) pbsv1mod_meth_get_server_static,\n\t METH_NOARGS, pbsv1mod_meth_get_server_static_doc},\n\t{PY_GET_JOB_STATIC_METHOD, (PyCFunction) pbsv1mod_meth_get_job_static,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_get_job_static_doc},\n\t{PY_GET_RESV_STATIC_METHOD, (PyCFunction) pbsv1mod_meth_get_resv_static,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_get_resv_static_doc},\n\t{PY_GET_QUEUE_STATIC_METHOD, (PyCFunction) pbsv1mod_meth_get_queue_static,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_get_queue_static_doc},\n\t{PY_GET_VNODE_STATIC_METHOD, (PyCFunction) pbsv1mod_meth_get_vnode_static,\n\t METH_VARARGS | METH_KEYWORDS, pbsv1mod_meth_get_vnode_static_doc},\n\t{PY_GET_SERVER_DATA_FP_METHOD, (PyCFunction) pbsv1mod_meth_get_server_data_fp,\n\t METH_NOARGS, pbsv1mod_meth_get_server_data_fp_doc},\n\t{PY_GET_SERVER_DATA_FILE_METHOD, (PyCFunction) pbsv1mod_meth_get_server_data_file,\n\t METH_NOARGS, pbsv1mod_meth_get_server_data_file_doc},\n\t{PY_USE_STATIC_DATA_METHOD,\n\t (PyCFunction) pbsv1mod_meth_use_static_data,\n\t METH_NOARGS, pbsv1mod_meth_use_static_data_doc},\n\t{PY_GET_PBS_CONF_METHOD,\n\t (PyCFunction) pbsv1mod_meth_get_pbs_conf,\n\t METH_NOARGS, pbsv1mod_meth_get_pbs_conf_doc},\n\t{NULL, NULL} /* sentinel */\n};\n\nstatic char pbs_v1_module_doc[] =\n\t\"PBS Module providing PBS C/Python glue code\\n\\\n    \\t\\n\\\n    \";\n\nstatic struct PyModuleDef pbs_v1_module = {\n\tPyModuleDef_HEAD_INIT,\n\tPBS_PYTHON_V1_MODULE_EXTENSION_NAME,\n\tpbs_v1_module_doc,\n\t-1,\n\tpbs_v1_module_methods};\n\n/*\n * ----------------- EXTERNAL DEFINITIONS -----------------------\n */\n\n/**\n * @brief\n * \tpbs_v1_module_init\n *   \tThis is convenience routine to be used by both embedded and extension\n *   \tmechanism to initialize the module. 
Py_InitModuleX can be called\n *   \tmultiple times without any ill side effects.\n *\n * @return\tobject\n * @retval\tThe module object (borrowed reference)\n */\n\nPyObject *\npbs_v1_module_init(void)\n{\n\n\tPyObject *m = NULL;\t/* the module object, NEW ref */\n\tPyObject *mdict = NULL; /* the module dict, BORROWED ref */\n\tPyObject *py_types_module = NULL;\n\n\tm = PyModule_Create(&pbs_v1_module);\n\n\tif (m == NULL)\n\t\treturn m;\n\n\t/* IMPORTANT: make sure all our types are ready */\n\tif (ppsvr_prepare_all_types() < 0)\n\t\tgoto ERROR_EXIT;\n\n\tmdict = PyModule_GetDict(m);\n\n\t/* get svr types */\n\tpy_types_module = ppsvr_create_types_module();\n\tif (py_types_module == NULL)\n\t\tgoto ERROR_EXIT;\n\tif ((PyDict_SetItemString(mdict, \"svr_types\",\n\t\t\t\t  py_types_module)) == -1) {\n\t\tPy_XDECREF(py_types_module);\n\t\treturn NULL;\n\t}\n\n\tPy_XDECREF(py_types_module);\n\n\t/* Add all our constants */\n\tif (_pv1mod_insert_int_constants(mdict) == -1)\n\t\treturn NULL;\n\n\tif (_pv1mod_insert_str_constants(mdict) == -1)\n\t\treturn NULL;\n\n\tPyPbsV1ModuleExtension_Obj = m; /* used to create separate namespaces later */\n\treturn m;\n\nERROR_EXIT:\n\treturn NULL;\n}\n\n/**\n * @brief\n * \tThe below is used by the embedded interpreter, which puts the\n * \tmodule in __main__.\n *\n */\nPyObject *\npbs_v1_module_inittab(void)\n{\n\treturn pbs_v1_module_init();\n}\n\n/**\n * @brief\n * ==== For exposing the module as an external shared library =====\n */\n\nPyMODINIT_FUNC\ninit_pbs_v1(void)\n{\n\treturn pbs_v1_module_init();\n}\n"
  },
  {
    "path": "src/lib/Libpython/pbs_python_external.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_python_external.c\n * @brief\n *  This file contains shared routines that can be used by any of the PBS\n *  infrastructure daemons (Server,MOM or Scheduler). 
This file basically\n *  provides all the implementation for external interface routines found\n *  in pbs_python.h\n *\n */\n\n#include <pbs_config.h>\n\n/* --- BEGIN PYTHON DEPENDENCIES --- */\n\n#ifdef PYTHON\n\n#include <pbs_python_private.h> /* private python file  */\n#include <Python.h>\t\t/* Includes eval.h for PyEval_EvalCode  */\n#include <pythonrun.h>\t\t/* For Py_SetPythonHome */\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <signal.h>\n#include <unistd.h>\n#include <wchar.h>\n#include \"hook.h\"\n\nextern PyObject *PyInit__pbs_ifl(void);\nextern pbs_list_head svr_allhooks;\n\nstatic struct _inittab pbs_python_inittab_modules[] = {\n\t{PBS_PYTHON_V1_MODULE_EXTENSION_NAME, pbs_v1_module_inittab},\n\t{\"_pbs_ifl\", PyInit__pbs_ifl},\n\t{NULL, NULL} /* sentinel */\n};\n\nstatic PyObject *\n_pbs_python_compile_file(const char *file_name,\n\t\t\t const char *compiled_code_file_name);\nextern int pbs_python_setup_namespace_dict(PyObject *globals);\n\n#endif /* PYTHON */\n\n#include <pbs_python.h>\n\n/* --- END PYTHON DEPENDENCIES --- */\n\n/*\n * GLOBAL\n */\n/* TODO make it autoconf? 
*/\nchar *pbs_python_daemon_name;\n\n/*\n * ===================   BEGIN   EXTERNAL ROUTINES  ===================\n */\n\nstatic int\ninitialize_python_config(int install_signal_handlers)\n{\n\tPyStatus py_status;\n\tPyConfig py_config;\n\tchar *python_binpath = NULL;\n\tstatic wchar_t w_python_binpath[MAXPATHLEN + 1] = {'\\0'};\n\n\tPyConfig_InitPythonConfig(&py_config);\n\n\tpy_config._install_importlib = 1;\n\tpy_config.use_environment = 0;\n\tpy_config.optimization_level = 2;\n\tpy_config.isolated = 1;\n\tpy_config.site_import = 0;\n\tpy_config.install_signal_handlers = install_signal_handlers;\n\n\t/* Set python binary path if it's not already set */\n\tif (w_python_binpath[0] == '\\0') {\n\t\tif (get_py_progname(&python_binpath)) {\n\t\t\tlog_err(-1, __func__, \"Failed to find python binary path!\");\n\t\t\tPyConfig_Clear(&py_config);\n\t\t\treturn -1;\n\t\t}\n\t\tmbstowcs(w_python_binpath, python_binpath, MAXPATHLEN + 1);\n\t\tfree(python_binpath);\n\t}\n\n\t/* Set the program name in the Python configuration */\n\tpy_status = PyConfig_SetString(&py_config, &py_config.program_name, w_python_binpath);\n\tif (PyStatus_Exception(py_status)) {\n\t\tPyConfig_Clear(&py_config);\n\t\treturn -1;\n\t}\n\n\t/* Register our built-in extension modules */\n\tif (PyImport_ExtendInittab(pbs_python_inittab_modules) != 0) {\n\t\tlog_err(-1, \"PyImport_ExtendInittab\", \"--> Failed to initialize Python interpreter <--\");\n\t\tPyConfig_Clear(&py_config);\n\t\treturn -1;\n\t}\n\n\t/* Initialize the Python interpreter with the given configuration */\n\tpy_status = Py_InitializeFromConfig(&py_config);\n\tPyConfig_Clear(&py_config); /* the interpreter keeps its own copy of the configuration */\n\tif (PyStatus_Exception(py_status)) {\n\t\tlog_err(-1, \"Py_InitializeFromConfig\", \"--> Failed to initialize Python interpreter <--\");\n\t\treturn -1;\n\t}\n\n\treturn 0;\n}\n\n/**\n *\n * @brief\n *\tStart the Python interpreter.\n *\n * @param[in/out] interp_data - has some prefilled information about\n *\t\t\t\tthe python interpreter to start, like python\n *\t\t\t\tdaemon name. 
This will also get filled in\n *\t\t\t\twith new information such as the status of\n *\t\t\t\tthe python start.\n * @note\n *\tIf called by the pbs_python command, then any log messages are logged as\n *\tDEBUG3; otherwise, DEBUG2.\n */\n\nint\npbs_python_ext_start_interpreter(struct python_interpreter_data *interp_data)\n{\n\n#ifdef PYTHON /* -- BEGIN ONLY IF PYTHON IS CONFIGURED -- */\n\tstruct stat sbuf;\n\tchar pbs_python_destlib[MAXPATHLEN + 1] = {'\\0'};\n\tchar pbs_python_destlib2[MAXPATHLEN + 1] = {'\\0'};\n\tint evtype;\n\tint rc;\n\n\t/*\n\t * initialize the convenience global pbs_python_daemon_name, as it is\n\t * used everywhere\n\t */\n\n\tpbs_python_daemon_name = interp_data->daemon_name;\n\n\t/* Need to make logging less verbose if the pbs_python command is */\n\t/* used, since it can get called many times in a pbs daemon, */\n\t/* and it would litter that daemon's logs */\n\tif (IS_PBS_PYTHON_CMD(pbs_python_daemon_name))\n\t\tevtype = PBSEVENT_DEBUG3;\n\telse\n\t\tevtype = PBSEVENT_DEBUG2;\n\n\tsnprintf(pbs_python_destlib, MAXPATHLEN, \"%s/lib64/python/altair\",\n\t\t pbs_conf.pbs_exec_path);\n\tsnprintf(pbs_python_destlib2, MAXPATHLEN, \"%s/lib64/python/altair/pbs/v1\",\n\t\t pbs_conf.pbs_exec_path);\n\trc = stat(pbs_python_destlib, &sbuf);\n\tif (rc != 0) {\n\t\tsnprintf(pbs_python_destlib, MAXPATHLEN, \"%s/lib/python/altair\",\n\t\t\t pbs_conf.pbs_exec_path);\n\t\trc = stat(pbs_python_destlib, &sbuf);\n\t\tsnprintf(pbs_python_destlib2, MAXPATHLEN, \"%s/lib/python/altair/pbs/v1\",\n\t\t\t pbs_conf.pbs_exec_path);\n\t}\n\tif (rc != 0) {\n\t\tlog_err(-1, __func__,\n\t\t\t\"--> PBS Python library directory not found <--\");\n\t\tgoto ERROR_EXIT;\n\t}\n\tif (!S_ISDIR(sbuf.st_mode)) {\n\t\tlog_err(-1, __func__,\n\t\t\t\"--> PBS Python library path is not a directory <--\");\n\t\tgoto ERROR_EXIT;\n\t}\n\n\tif (interp_data) {\n\t\tinterp_data->init_interpreter_data(interp_data); /* to be safe */\n\t\tif (interp_data->interp_started) {\n\t\t\tlog_event(evtype, 
PBS_EVENTCLASS_SERVER,\n\t\t\t\t  LOG_INFO, interp_data->daemon_name,\n\t\t\t\t  \"--> Python interpreter already started <--\");\n\t\t\tgoto SUCCESS_EXIT;\n\t\t} /* else { we are not started but ready } */\n\t} else {  /* interp_data must be supplied by the caller */\n\t\tlog_err(-1, __func__,\n\t\t\t\"--> Passed NULL for interpreter data <--\");\n\t\tgoto ERROR_EXIT;\n\t}\n\n#ifdef WIN32\n\tPy_NoSiteFlag = 1;\n\tPy_FrozenFlag = 1;\n\tPy_OptimizeFlag = 2;          /* TODO make this a compile flag variable */\n\tPy_IgnoreEnvironmentFlag = 1; /* ignore PYTHONPATH and PYTHONHOME */\n\tset_py_progname();\n\t/* we make sure our top level module is initialized */\n\tif ((PyImport_ExtendInittab(pbs_python_inittab_modules) != 0)) {\n\t\tlog_err(-1, \"PyImport_ExtendInittab\",\n\t\t\t\"--> Failed to initialize Python interpreter <--\");\n\t\tgoto ERROR_EXIT;\n\t}\n\n\t/*\n\t * arg '1' means to not skip init of signals\n\t * we want signals to propagate to the executing\n\t * Python script to be able to interrupt it\n\t */\n\tPy_InitializeEx(1);\n#else\n\tif (initialize_python_config(1))\n\t\tgoto ERROR_EXIT;\n\n#endif\n\n\tif (Py_IsInitialized()) {\n\t\tchar *msgbuf;\n\n\t\tinterp_data->interp_started = 1; /* mark python as initialized */\n\t\t/* log the interpreter version, TODO check msgbuf for NULL? */\n\t\tpbs_asprintf(&msgbuf,\n\t\t\t     \"--> Python Interpreter started, compiled with version:'%s' <--\",\n\t\t\t     Py_GetVersion());\n\t\tlog_event(evtype, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_INFO, interp_data->daemon_name, msgbuf);\n\t\tfree(msgbuf);\n\t} else {\n\t\tlog_err(-1, \"Py_InitializeEx\",\n\t\t\t\"--> Failed to initialize Python interpreter <--\");\n\t\tgoto ERROR_EXIT;\n\t}\n\t/*\n\t * Add Altair python module directory to sys path. 
NOTE:\n\t *  PBS_PYTHON_MODULE_DIR is a command line define, also insert\n\t * standard required python modules\n\t */\n\tif (pbs_python_modify_syspath(pbs_python_destlib, -1) == -1) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t \"could not insert %s into sys.path shutting down\",\n\t\t\t pbs_python_destlib);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_err(-1, __func__, log_buffer);\n\t\tgoto ERROR_EXIT;\n\t}\n\n\tif (pbs_python_modify_syspath(pbs_python_destlib2, -1) == -1) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t \"could not insert %s into sys.path shutting down\",\n\t\t\t pbs_python_destlib2);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_err(-1, __func__, log_buffer);\n\t\tgoto ERROR_EXIT;\n\t}\n\n\t/*\n\t * At this point it is safe to load the available server types from\n\t * the python modules. since the syspath is setup correctly\n\t */\n\tif ((pbs_python_load_python_types(interp_data) == -1)) {\n\t\tlog_err(-1, __func__, \"could not load python types into the interpreter\");\n\t\tgoto ERROR_EXIT;\n\t}\n\n\tinterp_data->pbs_python_types_loaded = 1; /* just in case */\n\n#ifdef LIBPYTHONSVR\n\tPyObject *m, *d, *f, *handler, *sigint;\n\tm = PyImport_ImportModule(\"signal\");\n\tif (!m) {\n\t\tlog_err(-1, __func__, \"failed to import the signal module\");\n\t\tgoto ERROR_EXIT;\n\t}\n\td = PyModule_GetDict(m);\n\tf = PyDict_GetItemString(d, \"signal\");\n\thandler = PyDict_GetItemString(d, \"default_int_handler\");\n\tsigint = PyDict_GetItemString(d, \"SIGINT\");\n\tif (f && PyCallable_Check(f)) {\n\t\tif (!PyObject_CallFunctionObjArgs(f, sigint, handler, NULL)) {\n\t\t\tPy_CLEAR(m);\n\t\t\tlog_err(-1, __func__, \"could not set up signal.default_int_handler\");\n\t\t\tgoto ERROR_EXIT;\n\t\t}\n\t} else {\n\t\tPy_CLEAR(m);\n\t\tlog_err(-1, __func__, \"could not call signal.signal\");\n\t\tgoto ERROR_EXIT;\n\t}\n\tPy_CLEAR(m);\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER, LOG_INFO, interp_data->daemon_name, \"successfully 
set up signal.default_int_handler\");\n#endif\n\nSUCCESS_EXIT:\n\treturn 0;\nERROR_EXIT:\n\tif (interp_data->interp_started) {\n\t\tpbs_python_ext_shutdown_interpreter(interp_data);\n\t}\n\treturn 1;\n#else  /* !PYTHON */\n\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_ADMIN |\n\t\t\t  PBSEVENT_DEBUG,\n\t\t  PBS_EVENTCLASS_SERVER,\n\t\t  LOG_INFO, \"start_python\",\n\t\t  \"--> Python interpreter not built in <--\");\n\treturn 0;\n#endif /* PYTHON */\n}\n\n/**\n *\n * @brief\n *\tShuts down the Python interpreter.\n *\n * @param[in/out] interp_data - has some prefilled information about\n *\t\t\t\tthe python interpreter to shut down.\n *\t\t\t\tThis will also get filled in\n *\t\t\t\twith new information such as the status of\n *\t\t\t\tthe python shutdown.\n * @note\n *\tIf called by the pbs_python command, then any log messages are logged as\n *\tDEBUG3; otherwise, DEBUG2.\n */\n\nvoid\npbs_python_ext_shutdown_interpreter(struct python_interpreter_data *interp_data)\n{\n#ifdef PYTHON /* -- BEGIN ONLY IF PYTHON IS CONFIGURED -- */\n\tint evtype;\n\thook *phook;\n\n\tif (IS_PBS_PYTHON_CMD(pbs_python_daemon_name))\n\t\tevtype = PBSEVENT_DEBUG3;\n\telse\n\t\tevtype = PBSEVENT_DEBUG2;\n\n\tif (interp_data) {\n\t\tif (interp_data->interp_started) {\n\t\t\tlog_event(evtype, PBS_EVENTCLASS_SERVER,\n\t\t\t\t  LOG_INFO, interp_data->daemon_name,\n\t\t\t\t  \"--> Stopping Python interpreter <--\");\n\n\t\t\tif (!IS_PBS_PYTHON_CMD(pbs_python_daemon_name)) {\n\t\t\t\t/* clear all code objects of hooks */\n\t\t\t\tphook = (hook *) GET_NEXT(svr_allhooks);\n\t\t\t\twhile (phook) {\n\t\t\t\t\tif (phook->script)\n\t\t\t\t\t\tpbs_python_ext_free_code_obj(phook->script);\n\t\t\t\t\tphook = (hook *) GET_NEXT(phook->hi_allhooks);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/* before finalize clear global python objects */\n\t\t\tpbs_python_unload_python_types(interp_data);\n\t\t\tPy_Finalize();\n\t\t}\n\t\tinterp_data->destroy_interpreter_data(interp_data);\n\t\t/* reset so that we do not have a problem 
*/\n\t\tpbs_python_daemon_name = NULL;\n\t}\n\n#endif /* PYTHON */\n}\n\n/**\n * @brief\n * \tpbs_python_ext_quick_start_interpreter - the basic startup without loading\n *\tof PBS attributes and resources into Python.\n */\n\nvoid\npbs_python_ext_quick_start_interpreter(void)\n{\n\n#ifdef PYTHON /* -- BEGIN ONLY IF PYTHON IS CONFIGURED -- */\n\tchar pbs_python_destlib[MAXPATHLEN + 1] = {'\\0'};\n\tchar pbs_python_destlib2[MAXPATHLEN + 1] = {'\\0'};\n\n\tsnprintf(pbs_python_destlib, MAXPATHLEN, \"%s/lib/python/altair\",\n\t\t pbs_conf.pbs_exec_path);\n\tsnprintf(pbs_python_destlib2, MAXPATHLEN, \"%s/lib/python/altair/pbs/v1\",\n\t\t pbs_conf.pbs_exec_path);\n\n#ifdef WIN32\n\tPy_NoSiteFlag = 1;\n\tPy_FrozenFlag = 1;\n\tPy_OptimizeFlag = 2;\t      /* TODO make this a compile flag variable */\n\tPy_IgnoreEnvironmentFlag = 1; /* ignore PYTHONPATH and PYTHONHOME */\n\tset_py_progname();\n\t/* we make sure our top level module is initialized */\n\tif ((PyImport_ExtendInittab(pbs_python_inittab_modules) != 0)) {\n\t\tlog_err(-1, \"PyImport_ExtendInittab\",\n\t\t\t\"--> Failed to initialize Python interpreter <--\");\n\t\tgoto ERROR_EXIT;\n\t}\n\n\tPy_InitializeEx(0); /* SKIP initialization of signals */\n#else\n\tif (initialize_python_config(0))\n\t\tgoto ERROR_EXIT;\n#endif\n\n\tif (Py_IsInitialized()) {\n\t\tchar *msgbuf;\n\n\t\tpbs_asprintf(&msgbuf,\n\t\t\t     \"--> Python Interpreter quick started, compiled with version:'%s' <--\",\n\t\t\t     Py_GetVersion());\n\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_ADMIN |\n\t\t\t\t  PBSEVENT_DEBUG,\n\t\t\t  PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_INFO, __func__, msgbuf);\n\t\tfree(msgbuf);\n\t} else {\n\t\tlog_err(-1, \"Py_InitializeEx\",\n\t\t\t\"--> Failed to quick initialize Python interpreter <--\");\n\t\tgoto ERROR_EXIT;\n\t}\n\t/*\n\t * Add Altair python module directory to sys path. 
NOTE:\n\t *  PBS_PYTHON_MODULE_DIR is a command line define, also insert\n\t * standard required python modules\n\t */\n\tif (pbs_python_modify_syspath(pbs_python_destlib, -1) == -1) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t \"could not insert %s into sys.path shutting down\",\n\t\t\t pbs_python_destlib);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_err(-1, __func__, log_buffer);\n\t\tgoto ERROR_EXIT;\n\t}\n\n\tif (pbs_python_modify_syspath(pbs_python_destlib2, -1) == -1) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t \"could not insert %s into sys.path shutting down\",\n\t\t\t pbs_python_destlib2);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_err(-1, __func__, log_buffer);\n\t\tgoto ERROR_EXIT;\n\t}\n\n\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t \"--> Inserted Altair PBS Python modules dir '%s' '%s'<--\", pbs_python_destlib, pbs_python_destlib2);\n\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_ADMIN |\n\t\t\t  PBSEVENT_DEBUG,\n\t\t  PBS_EVENTCLASS_SERVER,\n\t\t  LOG_INFO, __func__, log_buffer);\n\n\treturn;\n\nERROR_EXIT:\n\tpbs_python_ext_quick_shutdown_interpreter();\n\treturn;\n#else  /* !PYTHON */\n\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_ADMIN |\n\t\t\t  PBSEVENT_DEBUG,\n\t\t  PBS_EVENTCLASS_SERVER,\n\t\t  LOG_INFO, \"start_python\",\n\t\t  \"--> Python interpreter not built in <--\");\n\treturn;\n#endif /* PYTHON */\n}\n\n/**\n * @brief\n * \tpbs_python_ext_quick_shutdown_interpreter\n */\n\nvoid\npbs_python_ext_quick_shutdown_interpreter(void)\n{\n#ifdef PYTHON /* -- BEGIN ONLY IF PYTHON IS CONFIGURED -- */\n\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_ADMIN |\n\t\t\t  PBSEVENT_DEBUG,\n\t\t  PBS_EVENTCLASS_SERVER,\n\t\t  LOG_INFO, \"pbs_python_ext_quick_shutdown_interpreter\",\n\t\t  \"--> Stopping Python interpreter <--\");\n\tPy_Finalize();\n#endif /* PYTHON */\n}\n\nvoid\npbs_python_ext_free_global_dict(\n\tstruct python_script *py_script)\n{\n#ifdef PYTHON /* --- BEGIN PYTHON BLOCK --- 
*/\n\tif (py_script->global_dict) {\n\t\tPyDict_Clear((PyObject *) py_script->global_dict); /* clear k,v */\n\t\tPy_CLEAR(py_script->global_dict);\n\t}\n#endif /* --- END   PYTHON BLOCK --- */\n\treturn;\n}\n\nvoid\npbs_python_ext_free_code_obj(\n\tstruct python_script *py_script)\n{\n#ifdef PYTHON /* --- BEGIN PYTHON BLOCK --- */\n\tif (py_script->py_code_obj)\n\t\tPy_CLEAR(py_script->py_code_obj);\n#endif /* --- END   PYTHON BLOCK --- */\n\treturn;\n}\n\nvoid\npbs_python_ext_free_python_script(\n\tstruct python_script *py_script)\n{\n\tif (py_script) {\n\t\tif (py_script->path)\n\t\t\tfree(py_script->path);\n\t\tpbs_python_ext_free_code_obj(py_script);\n\t\tpbs_python_ext_free_global_dict(py_script);\n\t}\n\treturn;\n}\n#define COPY_STRING(dst, src)                                              \\\n\tdo {                                                               \\\n\t\tif (!((dst) = strdup(src))) {                              \\\n\t\t\tlog_err(errno, __func__, \"could not copy string\"); \\\n\t\t\tgoto ERROR_EXIT;                                   \\\n\t\t}                                                          \\\n\t} while (0)\n\nint\npbs_python_ext_alloc_python_script(\n\tconst char *script_path,\n\tstruct python_script **py_script /* returned value */\n)\n{\n\n#ifdef PYTHON /* --- BEGIN PYTHON BLOCK --- */\n\n\tstruct python_script *tmp_py_script = NULL;\n\tsize_t nbytes = sizeof(struct python_script);\n\tstruct stat sbuf;\n\n\t*py_script = NULL; /* init, to be safe */\n\n\tif (!(tmp_py_script = (struct python_script *) malloc(nbytes))) {\n\t\tlog_err(errno, __func__, \"failed to malloc struct python_script\");\n\t\tgoto ERROR_EXIT;\n\t}\n\t(void) memset(tmp_py_script, 0, nbytes);\n\t/* check for recompile true by default */\n\ttmp_py_script->check_for_recompile = 1;\n\n\tCOPY_STRING(tmp_py_script->path, script_path);\n\t/* store the stat */\n\tif ((stat(script_path, &sbuf) == -1)) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t \"failed to stat 
<%s>\", script_path);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_err(errno, __func__, log_buffer);\n\t\tgoto ERROR_EXIT;\n\t}\n\t(void) memcpy(&(tmp_py_script->cur_sbuf), &sbuf,\n\t\t      sizeof(tmp_py_script->cur_sbuf));\n\t/* ok, we are set with py_script */\n\t*py_script = tmp_py_script;\n\treturn 0;\n\nERROR_EXIT:\n\tif (tmp_py_script) {\n\t\tpbs_python_ext_free_python_script(tmp_py_script);\n\t\tfree(tmp_py_script);\n\t}\n\treturn -1;\n#else\n\n\tlog_err(-1, __func__, \"--> Python is disabled <--\");\n\treturn -1;\n#endif /* --- END   PYTHON BLOCK --- */\n}\n\n/**\n * @brief\n * \tFunction to create a separate namespace. Essentially a sandbox to run\n * \tpython scrips independently.\n *\n * @param[in] interp_data - pointer to interpreter data\n *\n */\n\nvoid *\npbs_python_ext_namespace_init(\n\tstruct python_interpreter_data *interp_data)\n{\n\n#ifdef PYTHON /* --- BEGIN PYTHON BLOCK --- */\n\n\tPyObject *namespace_dict = NULL;\n\tPyObject *py_v1_module = NULL;\n\n\tnamespace_dict = PyDict_New(); /* New Refrence MUST Decref */\n\tif (!namespace_dict) {\n\t\tpbs_python_write_error_to_log(__func__);\n\t\tgoto ERROR_EXIT;\n\t}\n\t/*\n\t * setup our namespace, by including all the modules that are needed to\n\t * run the python scripts\n\t */\n\tif ((PyDict_SetItemString(namespace_dict, \"__builtins__\",\n\t\t\t\t  PyEval_GetBuiltins()) == -1)) {\n\t\tpbs_python_write_error_to_log(__func__);\n\t\tgoto ERROR_EXIT;\n\t}\n\n\t/*\n\t * Now, add our extension object/module to the namespace.\n\t */\n\tpy_v1_module = pbs_v1_module_init();\n\tif (py_v1_module == NULL)\n\t\tgoto ERROR_EXIT;\n\tif ((PyDict_SetItemString(namespace_dict,\n\t\t\t\t  PBS_PYTHON_V1_MODULE_EXTENSION_NAME,\n\t\t\t\t  py_v1_module) == -1)) {\n\t\tPy_XDECREF(py_v1_module);\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"%s|adding extension object\",\n\t\t\t __func__);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tpbs_python_write_error_to_log(__func__);\n\t\tgoto 
ERROR_EXIT;\n\t}\n\n\tPy_XDECREF(py_v1_module);\n\n\treturn namespace_dict;\n\nERROR_EXIT:\n\tif (namespace_dict) {\n\t\tPyDict_Clear(namespace_dict);\n\t\tPy_CLEAR(namespace_dict);\n\t}\n\treturn namespace_dict;\n\n#else  /* !PYTHON */\n\treturn NULL;\n#endif /* --- END   PYTHON BLOCK --- */\n}\n\n/**\n *\n * @brief\n *\tChecks if hook script needs recompilation.\n *\n * @param[in] interp_data - data for the python interpreter that will interpret\n *\t\t\t\tthe script.\n * @return\tint\n * @retval\t-2 \tscript compilation failed\n * @retval\t-1 \tother failures\n * @retval\t0 \tsuccess\n *\n * @note\n *\tIf called by pbs_python command, then any log messages are logged as\n *\tDEBUG3; otherwise, DEBUG2.\n */\nint\npbs_python_check_and_compile_script(struct python_interpreter_data *interp_data,\n\t\t\t\t    struct python_script *py_script)\n{\n\n#ifdef PYTHON\t\t  /* -- BEGIN ONLY IF PYTHON IS CONFIGURED -- */\n\tstruct stat nbuf; /* new stat buf */\n\tstruct stat obuf; /* old buf */\n\tint recompile = 1;\n\n\tif (!interp_data || !py_script) {\n\t\tlog_err(-1, __func__, \"Either interp_data or py_script is NULL\");\n\t\treturn -1;\n\t}\n\n\t/* ok, first time go straight to compile */\n\tdo {\n\t\tif (!py_script->py_code_obj)\n\t\t\tbreak;\n\t\t(void) memcpy(&obuf, &(py_script->cur_sbuf), sizeof(obuf));\n\t\tif (py_script->check_for_recompile) {\n\t\t\tif ((stat(py_script->path, &nbuf) != -1) &&\n\t\t\t    (nbuf.st_ino == obuf.st_ino) &&\n\t\t\t    (nbuf.st_size == obuf.st_size) &&\n\t\t\t    (nbuf.st_mtime == obuf.st_mtime)) {\n\t\t\t\trecompile = 0;\n\t\t\t} else {\n\t\t\t\trecompile = 1;\n\t\t\t\t(void) memcpy(&(py_script->cur_sbuf), &nbuf,\n\t\t\t\t\t      sizeof(py_script->cur_sbuf));\n\t\t\t\tPy_CLEAR(py_script->py_code_obj); /* we are rebuilding */\n\t\t\t}\n\t\t}\n\t} while (0);\n\n\tif (recompile) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE,\n\t\t\t \"Compiling script file: <%s>\", py_script->path);\n\n\t\tif 
(IS_PBS_PYTHON_CMD(pbs_python_daemon_name))\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SERVER,\n\t\t\t\t  LOG_INFO, interp_data->daemon_name, log_buffer);\n\t\telse\n\t\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_ADMIN |\n\t\t\t\t\t  PBSEVENT_DEBUG,\n\t\t\t\t  PBS_EVENTCLASS_SERVER,\n\t\t\t\t  LOG_INFO, interp_data->daemon_name, log_buffer);\n\n\t\tif (!(py_script->py_code_obj =\n\t\t\t      _pbs_python_compile_file(py_script->path,\n\t\t\t\t\t\t       \"<embedded code object>\"))) {\n\t\t\tpbs_python_write_error_to_log(\"Failed to compile script\");\n\t\t\treturn -2;\n\t\t}\n\t}\n\n\t/* set dict to null during compilation, clearing previous global/local */\n\t/* dictionary to prevent leaks.                                        */\n\tif (py_script->global_dict) {\n\t\tPyDict_Clear((PyObject *) py_script->global_dict);\n\t\tPy_CLEAR(py_script->global_dict);\n\t}\n\n\treturn 0;\n#else  /* !PYTHON */\n\treturn -1;\n#endif /* PYTHON */\n}\n\n/**\n * @brief\n *\truns python script in namespace.\n *\n * @param[in] interp_data - pointer to interpreter data\n * @param[in] py_script - pointer to python script info\n * @param[out] exit_code - exit code\n *\n * @return\tint\n * @retval\t-3 \tscript executed but got KeyboardInterrupt\n * @retval\t-2 \tscript  compiled or executed with error\n * @retval\t-1 \tother failures\n * @retval\t0 \tsuccess\n */\nint\npbs_python_run_code_in_namespace(struct python_interpreter_data *interp_data,\n\t\t\t\t struct python_script *py_script,\n\t\t\t\t int *exit_code)\n{\n\n#ifdef PYTHON /* -- BEGIN ONLY IF PYTHON IS CONFIGURED -- */\n\n\tPyObject *pdict;\n\tstruct stat nbuf; /* new stat buf */\n\tstruct stat obuf; /* old buf */\n\tint recompile = 1;\n\tPyObject *ptype;\n\tPyObject *pvalue;\n\tPyObject *ptraceback;\n\tPyObject *pobjStr;\n\tPyObject *retval;\n\tconst char *pStr;\n\tint rc = 0;\n\tpid_t orig_pid;\n\n\tif (!interp_data || !py_script) {\n\t\tlog_err(-1, __func__, \"Either interp_data or py_script is NULL\");\n\t\treturn 
-1;\n\t}\n\n\t/* ok, first time go straight to compile */\n\tdo {\n\t\tif (!py_script->py_code_obj)\n\t\t\tbreak;\n\t\t(void) memcpy(&obuf, &(py_script->cur_sbuf), sizeof(obuf));\n\t\tif (py_script->check_for_recompile) {\n\t\t\tif ((stat(py_script->path, &nbuf) != -1) &&\n\t\t\t    (nbuf.st_ino == obuf.st_ino) &&\n\t\t\t    (nbuf.st_size == obuf.st_size) &&\n\t\t\t    (nbuf.st_mtime == obuf.st_mtime)) {\n\t\t\t\trecompile = 0;\n\t\t\t} else {\n\t\t\t\trecompile = 1;\n\t\t\t\t(void) memcpy(&(py_script->cur_sbuf), &nbuf,\n\t\t\t\t\t      sizeof(py_script->cur_sbuf));\n\t\t\t\tPy_CLEAR(py_script->py_code_obj); /* we are rebuilding */\n\t\t\t}\n\t\t}\n\t} while (0);\n\n\tif (recompile) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t \"Compiling script file: <%s>\", py_script->path);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tif (IS_PBS_PYTHON_CMD(pbs_python_daemon_name))\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SERVER,\n\t\t\t\t  LOG_INFO, interp_data->daemon_name, log_buffer);\n\t\telse\n\t\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_ADMIN |\n\t\t\t\t\t  PBSEVENT_DEBUG,\n\t\t\t\t  PBS_EVENTCLASS_SERVER,\n\t\t\t\t  LOG_INFO, interp_data->daemon_name, log_buffer);\n\n\t\tif (!(py_script->py_code_obj =\n\t\t\t      _pbs_python_compile_file(py_script->path,\n\t\t\t\t\t\t       \"<embedded code object>\"))) {\n\t\t\tpbs_python_write_error_to_log(\"Failed to compile script\");\n\t\t\treturn -2;\n\t\t}\n\t}\n\n\t/* make new namespace dictionary, NOTE new reference */\n\n\tif (!(pdict = (PyObject *) pbs_python_ext_namespace_init(interp_data))) {\n\t\tlog_err(-1, __func__, \"while calling pbs_python_ext_namespace_init\");\n\t\treturn -1;\n\t}\n\tif ((pbs_python_setup_namespace_dict(pdict) == -1)) {\n\t\tPy_CLEAR(pdict);\n\t\treturn -1;\n\t}\n\n\tpy_script->global_dict = pdict;\n\n\torig_pid = getpid();\n\n\tPyErr_Clear(); /* clear any exceptions before starting code */\n\t/* precompile strings of code to bytecode objects */\n\tretval = PyEval_EvalCode((PyObject 
*) py_script->py_code_obj,\n\t\t\t\t pdict, pdict);\n\n\t/* check for a fork of the hook, terminate fork immediately */\n\tif (orig_pid != getpid())\n\t\texit(0);\n\n\t/* check for exception */\n\tif (PyErr_Occurred()) {\n\t\tif (PyErr_ExceptionMatches(PyExc_KeyboardInterrupt)) {\n\t\t\tpbs_python_write_error_to_log(\"Python script received a KeyboardInterrupt\");\n\t\t\tPy_XDECREF(retval);\n\t\t\treturn -3;\n\t\t}\n\n\t\tif (PyErr_ExceptionMatches(PyExc_SystemExit)) {\n\t\t\tPyErr_Fetch(&ptype, &pvalue, &ptraceback);\n\t\t\tPyErr_Clear(); /* just in case, not clear from API doc */\n\n\t\t\tif (pvalue) {\n\t\t\t\tpobjStr = PyObject_Str(pvalue); /* new ref */\n\t\t\t\tpStr = PyUnicode_AsUTF8(pobjStr);\n\t\t\t\trc = (int) atol(pStr);\n\t\t\t\tPy_XDECREF(pobjStr);\n\t\t\t}\n\n\t\t\tPy_XDECREF(ptype);\n\t\t\tPy_XDECREF(pvalue);\n\n#if !defined(WIN32)\n\t\t\tPy_XDECREF(ptraceback);\n#elif !defined(_DEBUG)\n\t\t\t/* for some reason this crashes on Windows Debug version */\n\t\t\tPy_XDECREF(ptraceback);\n#endif\n\n\t\t} else {\n\t\t\tpbs_python_write_error_to_log(\"Error evaluating Python script\");\n\t\t\tPy_XDECREF(retval);\n\t\t\treturn -2;\n\t\t}\n\t}\n\tPyErr_Clear();\n\tPy_XDECREF(retval);\n\n\tif (exit_code)\n\t\t*exit_code = rc; /* set exit code if var is not null */\n\n\treturn 0;\n#else  /* !PYTHON */\n\treturn -1;\n#endif /* PYTHON */\n}\n\n#ifdef PYTHON /*  === BEGIN ALL FUNCTIONS REQUIRING PYTHON HEADERS === */\n\n/**\n * @brief\n *\tonly compile the python script.\n *\n * @param[in]\tfile_name - abs file name\n * @param[out]  compiled_code_file_name - compiled file\n *\n * @return\tobject\n * @retval\n * @retval\terror code\terror\n *\n */\nstatic PyObject *\n_pbs_python_compile_file(const char *file_name,\n\t\t\t const char *compiled_code_file_name)\n{\n\tFILE *fp = NULL;\n\n\tlong len = 0;\n\tsize_t file_sz = 0;\t  /* script file no. 
of bytes */\n\tchar *file_buffer = NULL; /* buffer to hold the python script file */\n\tchar *cp = NULL;\t  /* useful character pointer */\n\tPyObject *rv = NULL;\n\n\tfp = fopen(file_name, \"rb\");\n\tif (!fp) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t \"could not open file <%s>: %s\\n\", file_name, strerror(errno));\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_err(errno, __func__, log_buffer);\n\t\tgoto ERROR_EXIT;\n\t}\n\n\tif ((fseek(fp, 0L, SEEK_END) == 0)) { /* ok we reached the end */\n\t\tlen = ftell(fp);\n\t\tif (len == -1) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"could not determine the file length: %s\\n\", strerror(errno));\n\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\tgoto ERROR_EXIT;\n\t\t}\n\t\tif ((fseek(fp, 0L, SEEK_SET) == -1)) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"could not fseek to beginning: %s\\n\", strerror(errno));\n\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\tgoto ERROR_EXIT;\n\t\t}\n\t\tfile_sz = len; /* ok good we have a file size */\n\t} else {\t       /* Uh-oh bad news */\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t \"could not fseek to end: %s\\n\", strerror(errno));\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_err(errno, __func__, log_buffer);\n\t\tgoto ERROR_EXIT;\n\t}\n\t/* allocate memory for file + \\n\\0 */\n\tfile_sz += 2;\n\n\tif (!(file_buffer = (char *) PyMem_Malloc(sizeof(char) * file_sz))) {\n\t\t/* could not allocate memory */\n\t\tpbs_python_write_error_to_log(__func__);\n\t\tgoto ERROR_EXIT;\n\t}\n\n\t/* read the file, clean up the file for DOS \\r stuff */\n\tfile_sz = fread(file_buffer, sizeof(char), (file_sz - 2), fp);\n\n\tfile_buffer[file_sz] = '\\n';\n\tfile_buffer[file_sz + 1] = '\\0';\n\n\tif (*file_buffer == '\\r')\n\t\t*file_buffer = ' ';\n\t/* TODO handle \\r in string constants? 
*/\n\tfor (cp = file_buffer + 1; *cp != '\\0'; cp++) {\n\t\tif (*cp == '\\r') {\n\t\t\tif (*(cp - 1) == '\\\\') {\n\t\t\t\t*(cp - 1) = ' ';\n\t\t\t\t*cp = '\\\\';\n\t\t\t} else {\n\t\t\t\t*cp = ' ';\n\t\t\t}\n\t\t}\n\t}\n\n\tfclose(fp);\n\t/* compile the string to a code object,NEW reference caller must DECREF */\n\trv = Py_CompileString(file_buffer, compiled_code_file_name, Py_file_input);\n\tPyMem_Free(file_buffer);\n\treturn rv;\n\nERROR_EXIT:\n\tif (fp)\n\t\tfclose(fp);\n\tif (file_buffer)\n\t\tPyMem_Free(file_buffer); /* to be safe */\n\treturn rv;\n}\n\n#endif /* PYTHON */\n"
  },
  {
    "path": "src/lib/Libpython/pbs_python_import_types.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_python_import_types.c\n * @brief\n * This file contains all the necessary type initialization for the \"extension\"\n * module.\n * Also all Python types implemented in \"C\" are added as \"module object\"\n * \"svr_types\" and inserted into the extension module dictionary.\n *\n */\n\n#include <pbs_config.h>\n#include <pbs_python_private.h>\n\n/* GLOBALS */\n\nextern PyTypeObject PPSVR_Size_Type; /* pbs_python_svr_size_type.c */\n\n/**\n * @brief\n * \tPrepare all the types, see the PBS Extensions documentation on why we\n * \tneed to do this. 
Essentially, this ensures all the \"slots\" for the\n * \tPyTypeObject are properly initialized.\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t!0\terror\n *\n */\n\nint\nppsvr_prepare_all_types(void)\n{\n\tint rv = 0; /* success */\n\n\tif ((rv = PyType_Ready(&PPSVR_Size_Type)) < 0)\n\t\treturn rv;\n\n\treturn rv;\n}\n\n/*\n * -----                 svr_types MODULE METHODS               ----- *\n */\n\nstatic PyMethodDef svr_types_module_methods[] = {\n\t{NULL, NULL} /* sentinel */\n};\n\nstatic char svr_types_module_doc[] =\n\t\"PBS Server types Module providing handy access to all the types\\n\\\n\\tavailable in the PBS Python Server modules.\\n\\\n\";\n\nstatic struct PyModuleDef svr_types_module = {\n\tPyModuleDef_HEAD_INIT,\n\tPBS_PYTHON_V1_MODULE_EXTENSION_NAME \".svr_types\",\n\tsvr_types_module_doc,\n\t-1,\n\tsvr_types_module_methods,\n\tNULL,\n\tNULL,\n\tNULL,\n\tNULL};\n\n/**\n * @brief\n * \tppsvr_create_types_module - creates and returns the svr_types module object\n *\n * @returns\tPyObject*\n * @retval\tThe svr_types module object (BORROWED reference)\n */\n\nPyObject *\nppsvr_create_types_module(void)\n{\n\tPyObject *m = NULL;\t/* create types module */\n\tPyObject *mdict = NULL; /* module dict */\n\n\tm = PyModule_Create(&svr_types_module);\n\n\tif (m == NULL)\n\t\treturn m;\n\t/* let's get the module's dict; we use this instead of PyModule_AddObject\n\t * because Py_INCREF-ing all types and then Py_DECREF-ing each in case of\n\t * error is not going to be manageable\n\t */\n\tmdict = PyModule_GetDict(m); /* never fails */\n\n\t/* Add _size type to svr_types */\n\tif ((PyDict_SetItemString(mdict, \"_size\",\n\t\t\t\t  (PyObject *) &PPSVR_Size_Type)) == -1)\n\t\treturn NULL;\n\n\treturn m;\n}\n"
  },
  {
    "path": "src/lib/Libpython/pbs_python_svr_external.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_python_svr_external.c\n * @brief\n * This file should contain interface implementation interacting with the\n * Core PBS Server. 
So *no* MOM and SCHED dependency should be in this file.\n *\n * @par\tlibrary:\n *   libpbspython_svr.a\n */\n\n#include <pbs_config.h>\n\n#ifdef PYTHON\n#include <pbs_python_private.h>\n#include <Python.h>\n#endif\n\n#include <pbs_python.h>\n#include <server_limits.h>\n#include \"pbs_ifl.h\" /* picks up the extern decl for pbs_config */\n#include \"resource.h\"\n#include \"pbs_share.h\"\n#include \"pbs_error.h\"\n#include \"pbs_sched.h\"\n#include \"server.h\"\n\n/* Functions */\n#ifdef PYTHON\nextern void _pbs_python_set_mode(int mode);\n\nextern int _pbs_python_event_mark_readonly(void);\n\nextern int _pbs_python_event_set(unsigned int hook_event, char *req_user,\n\t\t\t\t char *req_host, hook_input_param_t *req_params, char *perf_label);\n\nextern int _pbs_python_event_to_request(unsigned int hook_event, hook_output_param_t *req_params, char *perf_label, char *perf_action);\n\nextern int _pbs_python_event_set_attrval(char *name, char *value);\n\nextern char *_pbs_python_event_get_attrval(char *name);\n\nextern void _pbs_python_event_accept(void);\n\nextern void _pbs_python_event_reject(char *msg);\n\nextern char *_pbs_python_event_get_reject_msg(void);\n\nextern int _pbs_python_event_get_accept_flag(void);\n\nextern void _pbs_python_event_param_mod_allow(void);\n\nextern void _pbs_python_event_param_mod_disallow(void);\n\nextern int _pbs_python_event_param_get_mod_flag(void);\n\nextern char *\n_pbs_python_event_job_getval_hookset(char *attrib_name, char *opval,\n\t\t\t\t     int opval_len, char *delval, int delval_len);\n\nextern char *\n_pbs_python_event_job_getval(char *attrib_name);\n\nextern char *\n_pbs_python_event_jobresc_getval_hookset(char *attrib_name, char *resc_name);\n\nextern int\n_pbs_python_event_jobresc_clear_hookset(char *attrib_name);\n\nextern char *\n_pbs_python_event_jobresc_getval(char *attrib_name, char *resc_name);\n\nextern int _pbs_python_has_vnode_set(void);\n\nextern void _pbs_python_do_vnode_set(void);\n\nextern char 
*pbs_python_object_str(PyObject *);\n\n#endif /* PYTHON */\n\n/* GLOBAL vars */\nextern char *msg_daemonname; /* pbsd_main.c for SERVER */\n\n/*\n * Helper functions involving the PBS Server daemon\n */\n\n/**\n * @brief\n * \tpython initialize interpreter data\n *\n * @param[in] interp_data - pointer to python interpreter data\n *\n */\nvoid\npbs_python_svr_initialize_interpreter_data(struct python_interpreter_data *interp_data)\n{\n\t/* check whether we are already initialized */\n\tif (interp_data->data_initialized)\n\t\treturn;\n\n\tinterp_data->daemon_name = msg_daemonname;\n\tinterp_data->interp_started = 0;\n\tinterp_data->data_initialized = 1;\n\tinterp_data->pbs_python_types_loaded = 0;\n\treturn;\n}\n\n/**\n * @brief\n *      python destroy interpreter data\n *\n * @param[in] interp_data - pointer to python interpreter data\n *\n */\nvoid\npbs_python_svr_destroy_interpreter_data(struct python_interpreter_data *interp_data)\n{\n\t/* nothing to do or free data */\n\tinterp_data->data_initialized = 0;\n\tinterp_data->interp_started = 0;\n\tinterp_data->pbs_python_types_loaded = 0;\n\treturn;\n}\n\n/*\n * Helper functions related to PBS events\n */\n\n/**\n * @brief\n * \tSets the \"operation\" mode of Python: if 'mode' is PY_MODE, then we're\n * \tinside the hook script; if 'mode' is C_MODE, then we're inside some\n * \tinternal C helper function.\n * \tSetting 'mode' to C_MODE usually means we don't have any restriction\n * \tas to which attributes we can or cannot set.\n *\n * @param[in] mode - value to indicate which attr val can be set\n *\n */\nvoid\npbs_python_set_mode(int mode)\n{\n\n#ifdef PYTHON\n\t_pbs_python_set_mode(mode);\n#endif\n}\n\n/**\n * @brief\n * \tMakes the Python PBS event object read-only, meaning none of its\n * \tattributes could be modified in a hook script.\n *\n * @return\tint\n * @retval\t0 \tfor success\n * @retval\t-1 \totherwise\n */\nint\npbs_python_event_mark_readonly(void)\n{\n\n#ifdef PYTHON\n\treturn 
(_pbs_python_event_mark_readonly());\n#else\n\treturn (0);\n#endif\n}\n\n/**\n *\n * @brief\n *\tThis creates a PBS Python event object representing 'hook_event' with\n * request parameter 'req_params' and requested by:\n * 'req_user'@'req_host'.\n *\n * @param[in]\thook_event - the event represented\n * @param[in]\treq_user - who requested the hook event\n * @param[in]\treq_host - where the request came from\n * @param[in]\treq_params - array of input parameters\n * @param[in]\tperf_label - passed on to hook_perf_stat* call.\n *\n * @return int\n * @retval 0\t- success\n * @retval -1\t- error\n *\n */\nint\npbs_python_event_set(unsigned int hook_event, char *req_user,\n\t\t     char *req_host, hook_input_param_t *req_params,\n\t\t     char *perf_label)\n{\n#ifdef PYTHON\n\tint rc;\n\trc = _pbs_python_event_set(hook_event, req_user, req_host, req_params, perf_label);\n\n\tif (rc == -2) { /* _pbs_python_event_set got interrupted, retry */\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_SERVER, LOG_DEBUG,\n\t\t\t  \"_pbs_python_event_set\", \"retrying call\");\n\t\trc = _pbs_python_event_set(hook_event, req_user, req_host,\n\t\t\t\t\t   req_params, perf_label);\n\t}\n\treturn (rc);\n#else\n\treturn (0);\n#endif\n}\n\n\n/**\n *\n * @brief\n * \tRecreates 'req_params' (request structures,\n *\tex. 
rq_queuejob, rq_manage, rq_move) consulting the parameter values\n *\tobtained from current\n * \tPBS Python event object representing 'hook_event'.\n *\n * @param[in]\thook_event - the event represented\n * @param[in]\treq_params - array of input parameters\n * @param[in]\tperf_label - passed on to hook_perf_stat* call.\n * @param[in]\tperf_action - passed on to hook_perf_stat* call.\n *\n * @return int\n * @retval 0\t- success\n * @retval -1\t- error\n *\n * @note\n *\t\tThis function calls a single hook_perf_stat_start()\n *\t\tthat has some malloc-ed data that are freed in the\n *\t\thook_perf_stat_end() call, which is done at the end of\n *\t\tthis function.\n *\t\tEnsure that after the hook_perf_stat_start(), all\n *\t\tprogram execution path lead to hook_perf_stat_stop()\n *\t\tcall.\n */\nint\npbs_python_event_to_request(unsigned int hook_event, hook_output_param_t *req_params, char *perf_label, char *perf_action)\n{\n\n#ifdef PYTHON\n\treturn (_pbs_python_event_to_request(hook_event, req_params, perf_label, perf_action));\n#else\n\treturn (0);\n#endif\n}\n\nchar *\npbs_python_event_job_getval_hookset(char *attrib_name, char *opval,\n\t\t\t\t    int opval_len, char *delval, int delval_len)\n{\n#ifdef PYTHON\n\treturn (_pbs_python_event_job_getval_hookset(attrib_name, opval,\n\t\t\t\t\t\t     opval_len, delval, delval_len));\n#else\n\treturn (0);\n#endif\n}\n\n/**\n *\n * @brief\n *\tWrapper function to _pbs_python_event_job_getval().\n *\n * @param[in]\tattrib_name - parameter passed to the function\n *\t\t\t\t'_pbs_python_event_job_getval()'.\n */\nchar *\npbs_python_event_job_getval(char *attrib_name)\n{\n#ifdef PYTHON\n\treturn (_pbs_python_event_job_getval(attrib_name));\n#else\n\treturn NULL;\n#endif\n}\n\n/**\n *\n * @brief\n *\tWrapper function to _pbs_python_event_jobresc_getval_hookset().\n *\n * @param[in]\tattrib_name - parameter passed to the function\n *\t\t\t\t'_pbs_python_event_jobresc_getval_hookset()'.\n * @param[in]\tresc_name - parameter 
passed to the function\n *\t\t\t\t'_pbs_python_event_jobresc_getval_hookset()'.\n */\nchar *\npbs_python_event_jobresc_getval_hookset(char *attrib_name, char *resc_name)\n{\n#ifdef PYTHON\n\treturn (_pbs_python_event_jobresc_getval_hookset(attrib_name, resc_name));\n#else\n\treturn NULL;\n#endif\n}\n\n/**\n *\n * @brief\n *\tWrapper function to _pbs_python_event_jobresc_clear_hookset().\n *\n * @param[in]\tattrib_name - parameter passed to the function\n *\t\t\t\t'_pbs_python_event_jobresc_clear_hookset()'.\n */\nint\npbs_python_event_jobresc_clear_hookset(char *attrib_name)\n{\n#ifdef PYTHON\n\treturn (_pbs_python_event_jobresc_clear_hookset(attrib_name));\n#else\n\treturn (0);\n#endif\n}\n\n/**\n *\n * @brief\n *\tWrapper function to _pbs_python_event_jobresc_getval().\n *\n * @param[in]\tattrib_name - parameter passed to the function\n *\t\t\t\t'_pbs_python_event_jobresc_getval()'.\n * @param[in]\tresc_name - parameter passed to the function\n *\t\t\t\t'_pbs_python_event_jobresc_getval()'.\n */\nchar *\npbs_python_event_jobresc_getval(char *attrib_name, char *resc_name)\n{\n#ifdef PYTHON\n\treturn (_pbs_python_event_jobresc_getval(attrib_name, resc_name));\n#else\n\treturn NULL;\n#endif\n}\n\n/**\n * @brief\n * \tSets the value of the attribute 'name' of the current Python Object event\n * \tto a string 'value'. The descriptor for the attribute will take care of\n * \tconverting to an actual type.\n *\n * @param[out] name - attr name\n * @param[in] value - value for attribute name\n *\n * @return\tint\n * @retval\t0 \tfor success\n * @retval\t-1  \tfor error.\n */\nint\npbs_python_event_set_attrval(char *name, char *value)\n{\n\n#ifdef PYTHON\n\treturn (_pbs_python_event_set_attrval(name, value));\n#else\n\treturn (0);\n#endif\n}\n\n/**\n * @brief\n * \tGets the value of the attribute 'name' of the current Python Object event\n * \tas a string. 
Returns NULL if it doesn't find one.\n */\nchar *\npbs_python_event_get_attrval(char *name)\n{\n\n#ifdef PYTHON\n\treturn (_pbs_python_event_get_attrval(name));\n#else\n\treturn NULL;\n#endif\n}\n\n/**\n * @brief\n *  \tAllows the current PBS event request to proceed.\n */\nvoid\npbs_python_event_accept(void)\n{\n\n#ifdef PYTHON\n\t_pbs_python_event_accept();\n#endif\n}\n\n/**\n * @brief\n *  \tReject the current PBS event request.\n */\nvoid\npbs_python_event_reject(char *msg)\n{\n\n#ifdef PYTHON\n\t_pbs_python_event_reject(msg);\n#endif\n}\n\n/**\n * @brief\n * \tReturns the message string supplied in the hook script when it rejected\n * \tan event request.\n */\nchar *\npbs_python_event_get_reject_msg(void)\n{\n\n#ifdef PYTHON\n\treturn (_pbs_python_event_get_reject_msg());\n#else\n\treturn NULL;\n#endif\n}\n\n/**\n * @brief\n * \tReturns the value of the event accept flag (1 for TRUE or 0 for FALSE).\n */\nint\npbs_python_event_get_accept_flag(void)\n{\n\n#ifdef PYTHON\n\treturn (_pbs_python_event_get_accept_flag());\n#else\n\treturn (0); /* for FALSE */\n#endif\n}\n\n/**\n * @brief\n * \tSets a global flag that says modifications to the PBS Python\n * \tattributes are allowed.\n */\nvoid\npbs_python_event_param_mod_allow(void)\n{\n\n#ifdef PYTHON\n\t_pbs_python_event_param_mod_allow();\n#endif\n}\n\n/**\n * @brief\n * \tSets a global flag that says any more modifications to the PBS Python\n * \tattributes would be disallowed.\n */\nvoid\npbs_python_event_param_mod_disallow(void)\n{\n\n#ifdef PYTHON\n\t_pbs_python_event_param_mod_disallow();\n#endif\n}\n\n/**\n * @brief\n * \tReturns the value (0 or 1) of the global flag that says whether or not\n * \tmodifications to the PBS Python attributes are allowed.\n */\nint\npbs_python_event_param_get_mod_flag(void)\n{\n\n#ifdef PYTHON\n\treturn (_pbs_python_event_param_get_mod_flag());\n#else\n\treturn (0); /* for FALSE */\n#endif\n}\n\n/*\n *\n * @brief\n *\tChecks if there's at least one pending \"set\" vnode 
operation\n *\tthat needs to be performed by PBS.\n * @par\n *\tSee called function itself for details.\n *\n * @return int\n * @retval 1\t- if a \"set\" operation was found.\n * @retval 0   - if not found.\n *\n */\nint\npbs_python_has_vnode_set(void)\n{\n\n#ifdef PYTHON\n\treturn (_pbs_python_has_vnode_set());\n#else\n\treturn (0); /* for FALSE */\n#endif\n}\n\n/**\n * @brief\n *\tPerform all the \"set\" vnode operations to be performed by PBS.\n * @par\n *\tSee called function itself for details.\n */\nvoid\npbs_python_do_vnode_set(void)\n{\n\n#ifdef PYTHON\n\t_pbs_python_do_vnode_set();\n#endif\n}\n\n/**\n * @brief\n * \tpbs_python_set_interrupt - wrapper to PyErr_SetInterrupt()\n *\n */\n\nvoid\npbs_python_set_interrupt(void)\n{\n\n#ifdef PYTHON\n\tPyErr_SetInterrupt();\n#endif\n}\n\n/**\n *  @brief\n *  \tInitializes all the elements of hook_input_param_t structure.\n *\n *  @param[in/out]\thook_input - the structure to initialize.\n *\n *  @return void\n */\n\nvoid\nhook_input_param_init(hook_input_param_t *hook_input)\n{\n\n\thook_input->rq_job = NULL;\n\thook_input->rq_postqueuejob = NULL;\n\thook_input->rq_manage = NULL;\n\thook_input->rq_move = NULL;\n\thook_input->rq_prov = NULL;\n\thook_input->rq_run = NULL;\n\thook_input->rq_obit = NULL;\n\thook_input->progname = NULL;\n\thook_input->argv_list = NULL;\n\thook_input->env = NULL;\n\thook_input->jobs_list = NULL;\n\thook_input->vns_list = NULL;\n\thook_input->resv_list = NULL;\n\thook_input->vns_list_fail = NULL;\n}\n\n/**\n *  @brief\n *  \tInitializes all the elements of hook_output_param_t structure.\n *\n *  @param[in/out]\thook_output - the structure to initialize.\n *\n *  @return void\n */\nvoid\nhook_output_param_init(hook_output_param_t *hook_output)\n{\n\thook_output->rq_job = NULL;\n\thook_output->rq_postqueuejob = NULL;\n\thook_output->rq_manage = NULL;\n\thook_output->rq_move = NULL;\n\thook_output->rq_prov = NULL;\n\thook_output->rq_run = NULL;\n\thook_output->rq_obit = 
NULL;\n\thook_output->progname = NULL;\n\thook_output->argv_list = NULL;\n\thook_output->env = NULL;\n\thook_output->jobs_list = NULL;\n\thook_output->vns_list = NULL;\n\thook_output->resv_list = NULL;\n\thook_output->vns_list_fail = NULL;\n}\n"
  },
  {
    "path": "src/lib/Libpython/pbs_python_svr_internal.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_python_svr_internal.c\n * @brief\n * These are the internal helper functions that depend on a lot of PBS\n * Server data structures.\n *\n */\n#include <pbs_config.h> /* the master config generated by configure */\n#include <pbs_python_private.h>\n\n#include <stdio.h>\n#include <unistd.h>\n#include <sys/types.h>\n#include <sys/param.h>\n#include <memory.h>\n#include <stdlib.h>\n#include <libpbs.h>\n#include <pbs_ifl.h>\n#include <errno.h>\n#include <string.h>\n#include <list_link.h>\n#include <log.h>\n#include <attribute.h>\n#include <resource.h>\n#include <server_limits.h>\n#include <server.h>\n#include <job.h>\n#include <reservation.h>\n#include <queue.h>\n#include <pbs_error.h>\n#include <batch_request.h>\n#include <provision.h>\n#include \"hook.h\"\n#include \"pbs_nodes.h\"\n\n#include \"cmds.h\"\n#include \"svrfunc.h\"\n#include \"pbs_ecl.h\"\n#include \"placementsets.h\"\n#include \"pbs_reliable.h\"\n\n/* -----                        GLOBALS                        -----    */\n\n/* pipe-separated string lists  */\n#define runjob_modifiable_jobattrs ATTR_h \"|\" ATTR_a \"|\" ATTR_project \"|\" ATTR_e \"|\" ATTR_o \"|\" ATTR_l \"|\" ATTR_v \"|\" ATTR_depend \"|\" ATTR_create_resv_from_job\n#define runjob_modifiable_vnattrs ATTR_NODE_state\n\n#define FMT_RUNJOB_ERRMSG \"Can only set job's (%s) attribute, or a vnode's 
(%s) attribute under RUNJOB event - got <%s>\"\n\nextern attribute_def que_attr_def[];\nextern resource_def *svr_resc_def; /* the resource def structure */\nextern int svr_resc_size;\t   /* static + dynamic */\nextern int svr_resc_unk;\t   /* the last one */\nextern char server_name[];\nextern struct python_interpreter_data svr_interp_data;\nextern pbs_list_head svr_queues;   /* list of queues                   */\nextern pbs_list_head svr_alljobs;  /* list of all jobs in server       */\nextern pbs_list_head svr_allresvs; /* all reservations in server */\nextern char *msg_man_set;\nextern char *path_hooks_workdir;\n\n/* Global symbols */\nextern char *vnode_state_to_str(int state_bit);\nextern char *vnode_sharing_to_str(enum vnode_sharing share);\nextern char *vnode_ntype_to_str(int vntype);\n\nextern int str_to_vnode_state(char *vnstate);\nextern enum vnode_sharing str_to_vnode_sharing(char *vn_str);\nextern int str_to_vnode_ntype(char *vntype);\nextern u_Long pps_size_to_kbytes(PyObject *l);\nextern PyObject *svrattrl_list_to_pyobject(int rq_cmd, pbs_list_head *);\nextern PyObject *svrattrl_to_server_attribute(int rq_cmd, svrattrl *);\n\nstatic PyObject *\n_pps_helper_get_resv(resc_resv *presv_o, const char *resvid, char *perf_label);\n\n/* A dictionary for quick access to the pbs.v1 EMBEDDED_EXTENSION_TYPES */\nstatic PyObject *PBS_PythonTypes = NULL; /* A dictionary maintaining name and type */\n\n/*\n *  BEGIN Quick Access Types Table\n *    The below code is a convenience scheme, not really needed as they are\n *    available in the PBS_PythonTypes dict. 
But, since we do this for every\n *    aggregate type, comes in handy.\n */\ntypedef struct _pbs_python_types_entry {\n\tchar *t_key;\t   /* The key in EXPORTED TYPES DICTIONARY */\n\tPyObject *t_class; /* The actual type or class */\n} pbs_python_types_entry;\n\n#define PP_DESC_IDX 0\n#define PP_GENERIC_IDX 1\n#define PP_SIZE_IDX 2\n#define PP_TIME_IDX 3\n#define PP_ACL_IDX 4\n#define PP_BOOL_IDX 5\n#define PP_JOB_IDX 6\n#define PP_QUE_IDX 7\n#define PP_SVR_IDX 8\n#define PP_RESV_IDX 9\n#define PP_EVENT_IDX 10\n#define PP_RESC_IDX 11\n#define PP_ARST_IDX 12\n#define PP_INT_IDX 13\n#define PP_STR_IDX 14\n#define PP_FLOAT_IDX 15\n#define PP_EVENT_ERR_IDX 16\n#define PP_UNSET_ATTR_NAME_ERR_IDX 17\n#define PP_BADATTR_VTYPE_ERR_IDX 18\n#define PP_BADATTR_VALUE_ERR_IDX 19\n#define PP_UNSET_RESC_NAME_ERR_IDX 20\n#define PP_BAD_RESC_VTYPE_ERR_IDX 21\n#define PP_BAD_RESC_VALUE_ERR_IDX 22\n#define PP_PBS_ITER_IDX 23\n#define PP_VNODE_IDX 24\n#define PP_ENTITY_IDX 25\n#define PP_ENV_IDX 26\n#define PP_MANAGEMENT_IDX 27\n#define PP_SERVER_ATTRIBUTE_IDX 28\n\npbs_python_types_entry pbs_python_types_table[] = {\n\t{PY_TYPE_ATTR_DESCRIPTOR, NULL}, /* 0 Always first */\n\t{PY_TYPE_GENERIC, NULL},\n\t{PY_TYPE_SIZE, NULL},\n\t{PY_TYPE_TIME, NULL},\n\t{PY_TYPE_ACL, NULL},\n\t{PY_TYPE_BOOL, NULL},\n\t{PY_TYPE_JOB, NULL},\n\t{PY_TYPE_QUEUE, NULL},\n\t{PY_TYPE_SERVER, NULL},\n\t{PY_TYPE_RESV, NULL},\n\t{PY_TYPE_EVENT, NULL}, /* 10 */\n\t{PY_TYPE_RESOURCE, NULL},\n\t{PY_TYPE_LIST, NULL},\n\t{PY_TYPE_INT, NULL},\n\t{PY_TYPE_STR, NULL},\n\t{PY_TYPE_FLOAT, NULL},\t\t\t   /* 15 */\n\t{PY_ERROR_EVENT_INCOMPATIBLE, NULL},\t   /* 16 */\n\t{PY_ERROR_EVENT_UNSET_ATTRIBUTE, NULL},\t   /* 17 */\n\t{PY_ERROR_BAD_ATTRIBUTE_VALUE_TYPE, NULL}, /* 18 */\n\t{PY_ERROR_BAD_ATTRIBUTE_VALUE, NULL},\t   /* 19 */\n\t{PY_ERROR_UNSET_RESOURCE, NULL},\t   /* 20 */\n\t{PY_ERROR_BAD_RESOURCE_VALUE_TYPE, NULL},  /* 21 */\n\t{PY_ERROR_BAD_RESOURCE_VALUE, NULL},\t   /* 22 */\n\t{PY_TYPE_PBS_ITER, NULL},\t\t   /* 23 
*/\n\t{PY_TYPE_VNODE, NULL},\t\t\t   /* 24 */\n\t{PY_TYPE_ENTITY, NULL},\t\t\t   /* 25 */\n\t{PY_TYPE_ENV, NULL},\t\t\t   /* 26 */\n\t{PY_TYPE_MANAGEMENT, NULL},\t\t   /* 27 */\n\t{PY_TYPE_SERVER_ATTRIBUTE, NULL},\t   /* 28 */\n\n\t/* ADD ENTRIES ONLY BELOW, OR CHANGE THE PP_XXX_IDX above the table */\n\n\t{NULL, NULL} /* sentinel */\n};\n\ntypedef struct _pbs_iter_item {\n\tPyObject *py_iter; /* *the* iterator */\n\tvoid *data;\t   /* arbitrary pbs data */\n\tint data_index;\t   /* index of data to some table */\n\tpbs_list_link all_iters;\n} pbs_iter_item;\n\nstatic pbs_list_head pbs_iter_list; /* list of PBS iterators */\n\ntypedef struct _vnode_set_req {\n\tchar vnode_name[PBS_MAXNODENAME + 1];\n\tpbs_list_head rq_attr; /* list of attributes to set */\n\tpbs_list_link all_reqs;\n} vnode_set_req;\n\nstatic pbs_list_head pbs_vnode_set_list; /* list of vnode set requests */\n\n/**\n * @brief\n * \tThe pbs_resource_value structure holds the cached resource values for\n *      Python py_resource object.\n *\n * @param[in]\tpy_resource - the Python pbs_resource object that will later\n * \t\t\t\tbe set with values in 'value_list'.\n * @param[in]\tpy_resource_str_value - a Python string object representing\n * \t\t\t\t\tthe values in 'value_list'.\n * @param[in]\tattr_def_p - the resource definition for the resource list\n * \t\t\t     represented by 'py_resource'.\n * @param[in]\tvalue_list - list of values cached for the 'py_resource' object\n * @param[in]\tall_resc - links various pbs_resource_value structures.\n */\ntypedef struct _pbs_resource_value {\n\tPyObject *py_resource;\n\tPyObject *py_resource_str_value;\n\tattribute_def *attr_def_p; /* corresponding resource definition */\n\tpbs_list_head value_list;  /* resource values to set */\n\tpbs_list_link all_rescs;\n} pbs_resource_value;\n\nstatic pbs_list_head pbs_resource_value_list; /* list of resource */\n\t\t\t\t\t      /* values to instantiate */\n\nstatic PyObject *PyPbsV1Module_Obj = NULL; /* pbs.v1 
module object */\n\n/* an array holding all the vnode attribute descriptors (python pointers) */\nstatic PyObject **py_vnode_attr_types = NULL;\n/* an array holding all the resv attribute descriptors (python pointers) */\nstatic PyObject **py_resv_attr_types = NULL;\n/* an array holding all the server attribute descriptors (python pointers) */\nstatic PyObject **py_svr_attr_types = NULL;\n/* an array holding all the job attribute descriptors (python pointers) */\nstatic PyObject **py_job_attr_types = NULL;\n/* an array holding all the queue attribute descriptors (python pointers) */\nstatic PyObject **py_que_attr_types = NULL;\n/* an array of python objects holding all the resources (python pointers)*/\nstatic PyObject **py_svr_resc_types = NULL;\n\n/* The function object that  instantiates/populates a PBS object using */\nstatic PyObject *py_pbs_statobj = NULL;\n\n/* This is the current hook event object */\nstatic PyObject *py_hook_pbsevent = NULL;\n/* This is the cached local/server object */\nstatic PyObject *py_hook_pbsserver = NULL;\n/* An array of cached Python queue objects managed by the current server */\nstatic PyObject **py_hook_pbsque = NULL;\nstatic int py_hook_pbsque_max = 0;\t\t  /* Max # of entries in py_hook_pbsque */\nstatic int hook_pbsevent_accept = TRUE;\t\t  /* flag to accept/reject event */\nstatic int hook_pbsevent_stop_processing = FALSE; /* flag to stop */\n/* processing event */\n/* parameters */\nstatic char hook_pbsevent_reject_msg[HOOK_MSG_SIZE];\nstatic int hook_set_mode = C_MODE; /* in C_MODE, can set*/\n/* anything */\nstatic int hook_reboot_host = FALSE;\t\t/* flag to reboot host or not */\nstatic int hook_reboot_host_cmd[HOOK_BUF_SIZE]; /* cmdline to use */\n/* to reboot host */\n/* as alternate to */\n/* reboot() call */\nstatic int hook_scheduler_restart_cycle = FALSE; /* flag to tell local */\n/* server to tell the */\n/* scheduler to */\n/* restart sched cycle*/\n\n/*\n * The following limit and counter declarations are 
intended\n * to reduce the amount of memory consumed by the Python\n * interpreter so that the server process does not become\n * bloated. Python is able to garbage collect builtin\n * types (e.g. string, dict, etc.), but the PBS types are\n * not created such that their memory can be released.\n */\n/* Max hook events to service before restarting the interpreter */\n#define PBS_PYTHON_RESTART_MAX_HOOKS 100\n/* Max objects created before restarting the interpreter */\n#define PBS_PYTHON_RESTART_MAX_OBJECTS 1000\n/* Minimum interval between interpreter restarts */\n#define PBS_PYTHON_RESTART_MIN_INTERVAL 30\n/* count of Python objects created */\nstatic long object_counter = 0;\n\ntypedef struct hook_debug_t {\n\tFILE *input_fp;\n\tchar input_file[MAXPATHLEN + 1];\n\tFILE *output_fp;\n\tchar output_file[MAXPATHLEN + 1];\n\tFILE *data_fp;\n\tchar data_file[MAXPATHLEN + 1];\n\tchar objname[HOOK_BUF_SIZE + 1];\n} hook_debug_t;\n\nstatic int use_static_data = 0; /* use static server-related data */\n\nstatic hook_debug_t hook_debug;\n\nstatic pbs_list_head *server_data;\n\ntypedef struct server_jobs_t {\n\tpbs_list_head *data;\n\tpbs_list_head *ids;\n} server_jobs_t;\nstatic server_jobs_t server_jobs;\n\ntypedef struct server_queues_t {\n\tpbs_list_head *data;\n\tpbs_list_head *names;\n} server_queues_t;\nstatic server_queues_t server_queues;\n\ntypedef struct server_resvs_t {\n\tpbs_list_head *data;\n\tpbs_list_head *resvids;\n} server_resvs_t;\nstatic server_resvs_t server_resvs;\n\ntypedef struct server_vnodes_t {\n\tpbs_list_head *data;\n\tpbs_list_head *names;\n} server_vnodes_t;\nstatic server_vnodes_t server_vnodes;\n\n/**\n * @brief\n * \tThis is a function that logs the contents of the list\n *\theaded by 'phead'.\n *\n * @param[in] head_str - some string that gets printed spearheading the list.\n * @param[in] phead - header of the list to be printed.\n *\n * @return\tvoid\n *\n */\nvoid\nprint_svrattrl_list(char *head_str, pbs_list_head *phead)\n{\n\tsvrattrl 
*plist = NULL;\n\tint i;\n\n\tif ((head_str == NULL) || (phead == NULL)) {\n\t\treturn;\n\t}\n\tif (!will_log_event(PBSEVENT_DEBUG3))\n\t\treturn;\n\n\tplist = (svrattrl *) GET_NEXT(*phead);\n\ti = 0;\n\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK, LOG_INFO, __func__, head_str);\n\twhile (plist) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n#ifdef NAS /* localmod 005 */\n\t\t\t \"al_name=%s al_resc=%s al_value=%s al_flags=%ld\",\n\t\t\t plist->al_name, plist->al_resc ? plist->al_resc : \"null\",\n\t\t\t plist->al_value, (long) plist->al_flags);\n#else\n\t\t\t \"al_name=%s al_resc=%s al_value=%s al_flags=%d\",\n\t\t\t plist->al_name, plist->al_resc ? plist->al_resc : \"null\",\n\t\t\t plist->al_value, plist->al_flags);\n#endif /* localmod 005 */\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK, LOG_INFO, __func__,\n\t\t\t  log_buffer);\n\n\t\tplist = (svrattrl *) GET_NEXT(plist->al_link);\n\t\ti++;\n\t}\n}\n\n/**\n * @brief\n * \tThis is a stub function that should be implemented to add any additional\n * \telements to the global namespace dict.\n *\n * @param[in] globals - python object indicating reference to globals of a module\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t-1\terror\n *\n */\n\nint\npbs_python_setup_namespace_dict(PyObject *globals)\n{\n#if 0 /* enable later */\n\t/*\n\t * First, insert our module object pbs.v1 as pbs so the hook scripts\n\t * can access all the globals\n\t */\n\t/* I believe our design REQUIRES hook write to issue a \"import pbs\" */\n\tif ((PyDict_SetItemString(globals, \"pbs\", PyPbsV1Module_Obj) == -1)) {\n\t\tgoto ERROR_EXIT;\n\t}\n\n\treturn 0;\nERROR_EXIT:\n\tpbs_python_write_error_to_log(\"pbs_python_setup_namespace_dict\");\n\treturn -1;\n#endif\n\treturn 0;\n}\n\n/**\n * @brief\n *      The C routine that will instantiate a PbsAttributeDescriptor\n *\tclass (see src/modules/_base_types.py), which wraps a\n *\tPBS attribute into a descriptor.\n *\n * 
@param[in]\tklass - the class attribute being wrapped into a data descriptor.\n * @param[in]\tname - name of a PBS attribute.\n * @param[in]\tdefault_value - given to the PBS attribute.\n * @param[in]\tval_klass - the class/object type of the attribute's value.\n * @param[in]\tresc_attr - if the attribute being wrapped is a pbs_resource,\n *\t\t\t    then this is the name of its parent.\n *\t\t\t    For example: name=\"ncpus\", resc_attr=\"Resource_List\"\n * @param[in]\tis_entity - if the attribute being wrapped is also an\n *\t\t\t    entity type.\n *\t\t\t    For example: name=\"ncpus\", resc_attr=\"max_run_res\",\n *\t\t\t\t\tis_entity=1.\n * @return      int\n * @retval      -1\tFailed to create descriptor.\n * @retval \t0\tSuccessfully created descriptor.\n * @retval \t1\tDescriptor already present.\n *\n */\nstatic int\n_pps_getset_descriptor_object(PyObject *klass,\n\t\t\t      const char *name,\n\t\t\t      PyObject *default_value,\n\t\t\t      PyObject *val_klass,\n\t\t\t      const char *resc_attr,\n\t\t\t      int is_entity)\n{\n\t/* this is for the constructor call, see _base_types.py */\n\tstatic char *kwds[] = {\"cls\", \"name\", \"default_value\", \"value_type\",\n\t\t\t       \"resc_attr\", \"is_entity\"};\n\tPyObject *py_descr_class = NULL;\n\tPyObject *py_descr_args = PyTuple_New(0);\n\tPyObject *py_descr_kwds = NULL;\n\tPyObject *py_attr_descr = NULL;\n\n\t/* check whether creation of the tuple failed */\n\tif (!(py_descr_args)) {\n\t\tgoto ERROR_EXIT;\n\t}\n\t/* check whether the attribute descriptor is already present */\n\tif (PyObject_HasAttrString(klass, name)) {\n\t\tPy_CLEAR(py_descr_args);\n\t\treturn 1;\n\t}\n\n\tpy_descr_class = pbs_python_types_table[PP_DESC_IDX].t_class;\n\n\t/* good, build the argument list NEW ref\n\t *  class, name, default_value, value_class\n\t */\n\tif (resc_attr) { /* ok we are creating an attribute for a resource */\n\t\tpy_descr_kwds = Py_BuildValue(\"{s:O, s:s, s:O, s:(O), s:s, s:i}\",\n\t\t\t\t\t      kwds[0], 
klass,\n\t\t\t\t\t      kwds[1], name,\n\t\t\t\t\t      kwds[2], default_value,\n\t\t\t\t\t      kwds[3], val_klass,\n\t\t\t\t\t      kwds[4], resc_attr,\n\t\t\t\t\t      kwds[5], is_entity);\n\t} else {\n\t\tpy_descr_kwds = Py_BuildValue(\"{s:O, s:s, s:O, s:(O), s:i}\",\n\t\t\t\t\t      kwds[0], klass,\n\t\t\t\t\t      kwds[1], name,\n\t\t\t\t\t      kwds[2], default_value,\n\t\t\t\t\t      kwds[3], val_klass,\n\t\t\t\t\t      kwds[5], is_entity);\n\t}\n\n\tif (!py_descr_kwds) {\n\t\tgoto ERROR_EXIT;\n\t}\n\tpy_attr_descr = PyObject_Call(py_descr_class, py_descr_args, py_descr_kwds);\n\tif (!py_attr_descr) {\n\t\tgoto ERROR_EXIT;\n\t}\n\tPy_CLEAR(py_descr_args);\n\tPy_CLEAR(py_descr_kwds);\n\t/* now set the class attribute */\n\tif ((PyObject_SetAttrString(klass, name, py_attr_descr) == -1)) {\n\t\tgoto ERROR_EXIT;\n\t}\n\t/* we don't need the descriptor any more */\n\tPy_CLEAR(py_attr_descr);\n\treturn 0;\n\nERROR_EXIT:\n\tpbs_python_write_error_to_log(__func__);\n\tPy_CLEAR(py_descr_args);\n\tPy_CLEAR(py_descr_kwds);\n\tPy_CLEAR(py_attr_descr);\n\treturn -1;\n}\n\n/*\n * pbs_python_get_python_type:\n *  This is the bulk where all the magic happens regarding the mapping from\n *  PBS to python types.\n *\n * returns:\n *   - A Borrowed Reference.\n */\n\n/* TODO\n *  -big assumption, hopefully the DURATION/TIME encoding routine is\n *   not overloaded.\n *  -Make sure all types are there\n */\n#define TYPE_DURATION(p) (((p) == encode_time))\n#define TYPE_SIZE(p) (((p) == ATR_TYPE_SIZE))\n#define TYPE_ACL(p) (((p) == ATR_TYPE_ACL))\n#define TYPE_BOOL(p) (((p) == ATR_TYPE_BOOL))\n#define TYPE_ARST(p) (((p) == ATR_TYPE_ARST))\n#define TYPE_RESC(p) (((p) == ATR_TYPE_RESC))\n#define TYPE_INT(p) ((((p) == ATR_TYPE_LONG) ||  \\\n\t\t      ((p) == ATR_TYPE_SHORT) || \\\n\t\t      ((p) == ATR_TYPE_CHAR)))\n#define TYPE_STR(p) (((p) == ATR_TYPE_STR) || \\\n\t\t     ((p) == ATR_TYPE_JINFOP))\n#define TYPE_FLOAT(p) (((p) == ATR_TYPE_FLOAT))\n#define TYPE_ENTITY(p) (((p) == 
ATR_TYPE_ENTITY))\n#define ATTR_IS_RESC(a) (TYPE_RESC((a)->at_type) ||    \\\n\t\t\t (TYPE_ENTITY((a)->at_type) && \\\n\t\t\t  ((a)->at_decode == decode_entlim_res)))\n/*\n * TODO\n *  \t- It is possible to combine pbs_python_setup_resc_get_value_type\n *    \tand pbs_python_setup_attr_get_value_type into a macro.\n */\n\n/*\n * NO exception raised\n */\n/**\n * @brief\n *\tgiven the resource definition return the corresponding python type.\n *\n * @param[in] resc_def_p - pointer to resource_def indicating resource list\n *\n * @return\tpointer PyObject\n * @retval\tresource table\t\tsuccess\n * @retval\treturn generic\t\tfailure\n *\n */\nstatic PyObject *\npbs_python_setup_resc_get_value_type(resource_def *resc_def_p)\n{\n\tPyObject *py_tmp = NULL; /* return value */\n\n\t/* check if we are special aka check PBS_PythonTypes first */\n\tpy_tmp = PyDict_GetItemString(PBS_PythonTypes, resc_def_p->rs_name);\n\tif (py_tmp)\n\t\treturn py_tmp; /* cool, we are special */\n\n\t/* careful long is overloadded so duration comes first */\n\tif (TYPE_DURATION(resc_def_p->rs_encode)) /* check for time type */\n\t\treturn pbs_python_types_table[PP_TIME_IDX].t_class;\n\n\tif (TYPE_SIZE(resc_def_p->rs_type)) /* check for SIZE type */\n\t\treturn pbs_python_types_table[PP_SIZE_IDX].t_class;\n\n\tif (TYPE_ACL(resc_def_p->rs_type)) /* check for ACL type */\n\t\treturn pbs_python_types_table[PP_ACL_IDX].t_class;\n\n\tif (TYPE_BOOL(resc_def_p->rs_type)) /* check for BOOL type */\n\t\treturn pbs_python_types_table[PP_BOOL_IDX].t_class;\n\n\tif (TYPE_ARST(resc_def_p->rs_type)) /* check for list of strings */\n\t\treturn pbs_python_types_table[PP_ARST_IDX].t_class;\n\n\tif (TYPE_INT(resc_def_p->rs_type)) /* check for int,long,short type */\n\t\treturn pbs_python_types_table[PP_INT_IDX].t_class;\n\n\tif (TYPE_STR(resc_def_p->rs_type)) /* check for str type */\n\t\treturn pbs_python_types_table[PP_STR_IDX].t_class;\n\n\tif (TYPE_FLOAT(resc_def_p->rs_type)) /* check for float type 
*/\n\t\treturn pbs_python_types_table[PP_FLOAT_IDX].t_class;\n\n\tif (TYPE_ENTITY(resc_def_p->rs_type)) /* check for entity type */\n\t\treturn pbs_python_types_table[PP_ENTITY_IDX].t_class;\n\n\t/* all else fails return generic */\n\n\treturn pbs_python_types_table[PP_GENERIC_IDX].t_class;\n}\n\n/**\n * @brief\n *      Given an attribute definition, return the corresponding Python type.\n *\n * @param[in]\tattr_def_p\t- an attribute_def type.\n * @param[in]\tpy_type\t\t- a python type as a string: \"job\", \"server\",\n *\t\t\t\t\"queue\", \"resv\", or \"vnode\".\n * @return\tPyObject *\n */\n\nstatic PyObject *\npbs_python_setup_attr_get_value_type(attribute_def *attr_def_p, char *py_type)\n{\n\tPyObject *py_tmp = NULL; /* return value */\n\n\t/* check if we are special aka check PBS_PythonTypes first */\n\n\tif ((strcmp(py_type, PY_TYPE_VNODE) != 0) ||\n\t    (strcmp(attr_def_p->at_name, ATTR_p) != 0)) {\n\n\t\t/* Only get the type from PBS_PythonTypes if not the vnode's Priority */\n\t\t/* attribute which is treated as a Python int instead of the */\n\t\t/* pbs.priority type mapped in PBS_Python_Types. 
*/\n\n\t\tpy_tmp = PyDict_GetItemString(PBS_PythonTypes, attr_def_p->at_name);\n\t\tif (py_tmp)\n\t\t\treturn py_tmp; /* cool, we are special */\n\t}\n\n\t/* careful long is overloadded so duration comes first */\n\tif (TYPE_DURATION(attr_def_p->at_encode)) /* check for time type */\n\t\treturn pbs_python_types_table[PP_TIME_IDX].t_class;\n\n\tif (ATTR_IS_RESC(attr_def_p))\n\t\treturn pbs_python_types_table[PP_RESC_IDX].t_class;\n\n\tif (TYPE_SIZE(attr_def_p->at_type)) /* check for SIZE type */\n\t\treturn pbs_python_types_table[PP_SIZE_IDX].t_class;\n\n\tif (TYPE_ACL(attr_def_p->at_type)) /* check for ACL type */\n\t\treturn pbs_python_types_table[PP_ACL_IDX].t_class;\n\n\tif (TYPE_BOOL(attr_def_p->at_type)) /* check for BOOL type */\n\t\treturn pbs_python_types_table[PP_BOOL_IDX].t_class;\n\n\tif (TYPE_ARST(attr_def_p->at_type)) /* check for list of strings */\n\t\treturn pbs_python_types_table[PP_ARST_IDX].t_class;\n\n\tif (TYPE_INT(attr_def_p->at_type)) /* check for int,long,short */\n\t\treturn pbs_python_types_table[PP_INT_IDX].t_class;\n\n\tif (TYPE_STR(attr_def_p->at_type)) /* check for str type */\n\t\treturn pbs_python_types_table[PP_STR_IDX].t_class;\n\n\tif (TYPE_FLOAT(attr_def_p->at_type)) /* check for float type */\n\t\treturn pbs_python_types_table[PP_FLOAT_IDX].t_class;\n\n\tif (TYPE_ENTITY(attr_def_p->at_type)) /* check for entity type */\n\t\treturn pbs_python_types_table[PP_ENTITY_IDX].t_class;\n\n\t/* all else fails return generic */\n\n\treturn pbs_python_types_table[PP_GENERIC_IDX].t_class;\n}\n\n/**\n * @brief\n * \tpbs_python_free_py_types_array\n *   \tThis frees up the global arrays\n *\n * @param[in] py_types_array - address reference to globals\n *\n * @return\tVoid\n */\n\nvoid\npbs_python_free_py_types_array(PyObject ***py_types_array)\n{\n\tPyObject **py_array_tmp = *py_types_array;\n\tPyObject *py_tmp = NULL;\n\n\tif (*py_types_array) {\n\t\twhile ((py_tmp = *py_array_tmp)) 
{\n\t\t\tPy_CLEAR(py_tmp);\n\t\t\tpy_array_tmp++;\n\t\t}\n\t}\n\tPyMem_Free(*py_types_array);\n\t*py_types_array = NULL; /* since we might be called again */\n\treturn;\n}\n\n/**\n * @brief\n *\tcalls the given python object klass to make a default value\n *\n * @param[in] klass - function\n * @param[in] args - arguments for function\n *\n * @return\tPyObject *\n *\n */\nPyObject *\npbs_python_make_default_value(PyObject *klass, PyObject *args)\n{\n\tPyObject *py_default_value;\n\n\tpy_default_value = PyObject_Call(klass, args, NULL);\n\tif (!py_default_value) {\n\t\tgoto ERROR_EXIT;\n\t}\n\treturn py_default_value;\n\nERROR_EXIT:\n\tpbs_python_write_error_to_log(\"could not make default value\");\n\treturn NULL;\n}\n\n/**\n * @brief\n *      Routine that actualizes *all* the attributes for a Python vnode object.\n *\n * @par Functionality:\n *\tThis takes input from node_attr_def[] and svr_resc_def[] tables.\n *\tEach attribute is set up as a descriptor for finer granularity of\n * \tcontrol.\n *\n * @return      int\n * @retval       0 :    Successful execution of this function, with internal\n *\t\t\t'py_vnode_attr_types' list populated.\n * @retval      -1 :    On failure to populate 'py_vnode_attr_types' list.\n */\nint\npbs_python_setup_vnode_class_attributes(void)\n{\n\tint i = 0;\n\tattribute_def *attr_def_p = NULL; /* convenience pointer */\n\tPyObject *py_pbs_vnode_klass = pbs_python_types_table[PP_VNODE_IDX].t_class;\n\tPyObject *py_value_type = NULL;\n\tPyObject *py_default_value = NULL;\n\tPyObject *py_default_args = NULL;\n\tint num_entry = ND_ATR_LAST + 1; /* 1 for sentinel */\n\tint te;\n\n\tif (IS_PBS_PYTHON_CMD(pbs_python_daemon_name))\n\t\tDEBUG3_ARG1(\"BEGIN setting up all vnode attributes %s\", \"\");\n\telse\n\t\tDEBUG2_ARG1(\"BEGIN setting up all vnode attributes %s\", \"\");\n\tpy_vnode_attr_types = PyMem_New(PyObject *, num_entry);\n\n\tif (py_vnode_attr_types == NULL)\n\t\tgoto ERROR_EXIT;\n\n\tmemset(py_vnode_attr_types, 0, 
sizeof(PyObject *) * num_entry);\n\n\t/* ok now set all the node_attr_def types known to the server */\n\tattr_def_p = node_attr_def;\n\tfor (i = 0; i < ND_ATR_LAST; i++) {\n\t\t/* get the value type for this attribute */\n\t\tpy_value_type = pbs_python_setup_attr_get_value_type(attr_def_p,\n\t\t\t\t\t\t\t\t     PY_TYPE_VNODE);\n\t\t/* create a brand new default value from value type */\n\t\tif (ATTR_IS_RESC(attr_def_p)) {\n\t\t\tpy_default_args = Py_BuildValue(\"(s)\", attr_def_p->at_name);\n\t\t\tif (py_default_args == NULL) {\n\t\t\t\tlog_err(-1, attr_def_p->at_name, \"could not build args for default value\");\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tpy_default_value = pbs_python_make_default_value(py_value_type, py_default_args);\n\t\t\tPy_DECREF(py_default_args);\n\t\t\tif (py_default_value == NULL) {\n\t\t\t\tlog_err(-1, attr_def_p->at_name, \"could not set default value\");\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tte = TYPE_ENTITY(attr_def_p->at_type);\n\t\t} else {\n\t\t\tpy_default_value = Py_None;\n\t\t\tte = 0;\n\t\t}\n\t\tif (_pps_getset_descriptor_object(py_pbs_vnode_klass,\n\t\t\t\t\t\t  attr_def_p->at_name,\n\t\t\t\t\t\t  py_default_value,\n\t\t\t\t\t\t  py_value_type, NULL, te) == -1)\n\n\t\t\tgoto ERROR_EXIT;\n\t\tPy_INCREF(py_value_type);\n\t\tif (py_default_value != Py_None)\n\t\t\tPy_CLEAR(py_default_value);\n\t\tpy_vnode_attr_types[i] = py_value_type;\n\t\tattr_def_p++;\n\t}\n\tif (IS_PBS_PYTHON_CMD(pbs_python_daemon_name))\n\t\tDEBUG3_ARG1(\"DONE setting up all vnode attributes, number set <%d>\", i);\n\telse\n\t\tDEBUG2_ARG1(\"DONE setting up all vnode attributes, number set <%d>\", i);\n\treturn 0;\n\nERROR_EXIT:\n\tif (py_default_value != Py_None)\n\t\tPy_CLEAR(py_default_value);\n\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t \"could not set attribute <%s> for vnode python class\", attr_def_p->at_name);\n\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\tlog_err(-1, __func__, log_buffer);\n\treturn -1;\n}\n/**\n * @brief\n * 
\tpbs_python_setup_resv_class_attributes\n *   \troutine that sets up *all* the attributes for a Python reservation object,\n *   \tmapping directly to the PBS reservation object (resc_resv)\n *\n * @return\tint\n * @retval\t-1  \t: \tfailure\n * @retval\t0  \t: \tsuccess\n */\nint\npbs_python_setup_resv_class_attributes(void)\n{\n\tint i = 0;\n\tattribute_def *attr_def_p = NULL; /* convenience pointer */\n\tPyObject *py_pbs_resv_klass = pbs_python_types_table[PP_RESV_IDX].t_class;\n\tPyObject *py_value_type = NULL;\n\tPyObject *py_default_value = NULL;\n\tPyObject *py_default_args = NULL;\n\tint num_entry = RESV_ATR_LAST + 1; /* 1 for sentinel */\n\tint te;\n\n\tif (IS_PBS_PYTHON_CMD(pbs_python_daemon_name))\n\t\tDEBUG3_ARG1(\"BEGIN setting up all reservation attributes %s\", \"\");\n\telse\n\t\tDEBUG2_ARG1(\"BEGIN setting up all reservation attributes %s\", \"\");\n\tpy_resv_attr_types = PyMem_New(PyObject *, num_entry);\n\tif (!py_resv_attr_types) {\n\t\tgoto ERROR_EXIT;\n\t}\n\tmemset(py_resv_attr_types, 0, sizeof(PyObject *) * num_entry);\n\n\t/* ok now set all the resv_attr_def types known to the server */\n\tattr_def_p = resv_attr_def;\n\tfor (i = 0; i < RESV_ATR_LAST; i++) {\n\t\t/* get the value type for this attribute */\n\t\tpy_value_type = pbs_python_setup_attr_get_value_type(attr_def_p,\n\t\t\t\t\t\t\t\t     PY_TYPE_RESV);\n\t\t/* create a brand new default value from value type */\n\t\tif (ATTR_IS_RESC(attr_def_p)) {\n\t\t\tpy_default_args = Py_BuildValue(\"(s)\", attr_def_p->at_name);\n\t\t\tif (!py_default_args) {\n\t\t\t\t/* TODO, continuing instead of fatal error */\n\t\t\t\tlog_err(-1, attr_def_p->at_name, \"could not build args for default value\");\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tpy_default_value = pbs_python_make_default_value(py_value_type, py_default_args);\n\t\t\tPy_DECREF(py_default_args);\n\t\t\tif (!py_default_value) {\n\t\t\t\t/* TODO, continuing instead of fatal error */\n\t\t\t\tlog_err(-1, attr_def_p->at_name, \"could not set default 
value\");\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tte = TYPE_ENTITY(attr_def_p->at_type);\n\t\t} else {\n\t\t\tpy_default_value = Py_None;\n\t\t\tte = 0;\n\t\t}\n\t\tif (_pps_getset_descriptor_object(py_pbs_resv_klass,\n\t\t\t\t\t\t  attr_def_p->at_name,\n\t\t\t\t\t\t  py_default_value,\n\t\t\t\t\t\t  py_value_type, NULL, te) == -1)\n\n\t\t\tgoto ERROR_EXIT;\n\t\tPy_INCREF(py_value_type);\n\t\tif (py_default_value != Py_None)\n\t\t\tPy_CLEAR(py_default_value);\n\t\tpy_resv_attr_types[i] = py_value_type;\n\t\tattr_def_p++;\n\t}\n\tif (IS_PBS_PYTHON_CMD(pbs_python_daemon_name))\n\t\tDEBUG3_ARG1(\"DONE setting up all reservation attributes, number set <%d>\", i);\n\telse\n\t\tDEBUG2_ARG1(\"DONE setting up all reservation attributes, number set <%d>\", i);\n\treturn 0;\n\nERROR_EXIT:\n\tif (py_default_value != Py_None)\n\t\tPy_CLEAR(py_default_value);\n\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t \"could not set attribute <%s> for reservation python class\", attr_def_p->at_name);\n\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\tlog_err(-1, __func__, log_buffer);\n\treturn -1;\n}\n\n/**\n * @brief\n *\tpbs_python_setup_svr_class_attributes\n *  \t routine that sets up *all* the attributes for a Python server Object\n *  \t mapping directly to PBS Job object (struct job)\n *\n * @return      int\n * @retval      -1      :       failure\n * @retval      0       :       success\n */\nint\npbs_python_setup_server_class_attributes(void)\n{\n\tint i = 0;\n\tattribute_def *attr_def_p = NULL; /* convenience pointer */\n\tPyObject *py_pbs_svr_klass = pbs_python_types_table[PP_SVR_IDX].t_class;\n\tPyObject *py_value_type = NULL;\n\tPyObject *py_default_value = NULL;\n\tPyObject *py_default_args = NULL;\n\tint num_entry = SVR_ATR_LAST + 1; /* 1 for sentinel */\n\tint te;\n\n\tif (IS_PBS_PYTHON_CMD(pbs_python_daemon_name))\n\t\tDEBUG3_ARG1(\"BEGIN setting up all server attributes %s\", \"\");\n\telse\n\t\tDEBUG2_ARG1(\"BEGIN setting up all server attributes %s\", \"\");\n\tpy_svr_attr_types 
= PyMem_New(PyObject *, num_entry);\n\tif (!py_svr_attr_types) {\n\t\tgoto ERROR_EXIT;\n\t}\n\tmemset(py_svr_attr_types, 0, sizeof(PyObject *) * num_entry);\n\n\t/* ok now set all the svr_attr_def types known to the server */\n\tattr_def_p = svr_attr_def;\n\tfor (i = 0; i < SVR_ATR_LAST; i++) {\n\t\t/* get the value type for this attribute */\n\t\tpy_value_type = pbs_python_setup_attr_get_value_type(attr_def_p,\n\t\t\t\t\t\t\t\t     PY_TYPE_SERVER);\n\t\t/* create a brand new default value from value type */\n\t\tif (ATTR_IS_RESC(attr_def_p)) {\n\t\t\tpy_default_args = Py_BuildValue(\"(s)\", attr_def_p->at_name);\n\t\t\tif (!py_default_args) {\n\t\t\t\t/* TODO, continuing instead of fatal error */\n\t\t\t\tlog_err(-1, attr_def_p->at_name, \"could not build args for default value\");\n\t\t\t\tattr_def_p++;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tpy_default_value = pbs_python_make_default_value(py_value_type, py_default_args);\n\t\t\tPy_DECREF(py_default_args);\n\t\t\tif (!py_default_value) {\n\t\t\t\t/* TODO, continuing instead of fatal error */\n\t\t\t\tlog_err(-1, attr_def_p->at_name, \"could not set default value\");\n\t\t\t\tattr_def_p++;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tte = TYPE_ENTITY(attr_def_p->at_type);\n\t\t} else {\n\t\t\tpy_default_value = Py_None;\n\t\t\tte = 0;\n\t\t}\n\t\tif (_pps_getset_descriptor_object(py_pbs_svr_klass,\n\t\t\t\t\t\t  attr_def_p->at_name,\n\t\t\t\t\t\t  py_default_value,\n\t\t\t\t\t\t  py_value_type, NULL, te) == -1)\n\n\t\t\tgoto ERROR_EXIT;\n\t\tPy_INCREF(py_value_type);\n\t\tif (py_default_value != Py_None)\n\t\t\tPy_CLEAR(py_default_value);\n\t\tpy_svr_attr_types[i] = py_value_type;\n\t\tattr_def_p++;\n\t}\n\tif (IS_PBS_PYTHON_CMD(pbs_python_daemon_name))\n\t\tDEBUG3_ARG1(\"DONE setting up all server attributes, number set <%d>\", i);\n\telse\n\t\tDEBUG2_ARG1(\"DONE setting up all server attributes, number set <%d>\", i);\n\treturn 0;\n\nERROR_EXIT:\n\tif (py_default_value != Py_None)\n\t\tPy_CLEAR(py_default_value);\n\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t \"could 
not set attribute <%s> for <server> python class\", attr_def_p->at_name);\n\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\tlog_err(-1, __func__, log_buffer);\n\treturn -1;\n}\n\n/**\n * @brief\n * \tpbs_python_setup_job_class_attributes\n *   \troutine that sets up *all* the attributes for a Python Job Object\n *   \tmapping directly to PBS Job object (struct job)\n *\n * @return      int\n * @retval      -1      :       failure\n * @retval      0       :       success\n */\nint\npbs_python_setup_job_class_attributes(void)\n{\n\tint i = 0;\n\tattribute_def *attr_def_p = NULL; /* convenience pointer */\n\tPyObject *py_pbs_job_klass = pbs_python_types_table[PP_JOB_IDX].t_class;\n\tPyObject *py_value_type = NULL;\n\tPyObject *py_default_value = NULL;\n\tPyObject *py_default_args = NULL;\n\tint num_entry = JOB_ATR_LAST + 1; /* 1 for sentinel */\n\tint te;\n\n\tif (IS_PBS_PYTHON_CMD(pbs_python_daemon_name))\n\t\tDEBUG3_ARG1(\"BEGIN setting up all job attributes %s\", \"\");\n\telse\n\t\tDEBUG2_ARG1(\"BEGIN setting up all job attributes %s\", \"\");\n\tpy_job_attr_types = PyMem_New(PyObject *, num_entry);\n\tif (!py_job_attr_types) {\n\t\tgoto ERROR_EXIT;\n\t}\n\tmemset(py_job_attr_types, 0, sizeof(PyObject *) * num_entry);\n\n\t/* ok now set all the job_attr_def types known to the server */\n\tattr_def_p = job_attr_def;\n\tfor (i = 0; i < JOB_ATR_LAST; i++) {\n\t\t/* get the value type for this attribute */\n\t\tpy_value_type = pbs_python_setup_attr_get_value_type(attr_def_p,\n\t\t\t\t\t\t\t\t     PY_TYPE_JOB);\n\t\t/* create a brand new default value from value type */\n\t\tif (ATTR_IS_RESC(attr_def_p)) {\n\t\t\tpy_default_args = Py_BuildValue(\"(s)\", attr_def_p->at_name);\n\t\t\tif (!py_default_args) {\n\t\t\t\t/* TODO, continuing instead of fatal error */\n\t\t\t\tlog_err(-1, attr_def_p->at_name, \"could not build args for default value\");\n\t\t\t\tattr_def_p++;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tpy_default_value = pbs_python_make_default_value(py_value_type, 
py_default_args);\n\t\t\tPy_DECREF(py_default_args);\n\t\t\tif (!py_default_value) {\n\t\t\t\t/* TODO, continuing instead of fatal error */\n\t\t\t\tlog_err(-1, attr_def_p->at_name, \"could not set default value\");\n\t\t\t\tattr_def_p++;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tte = TYPE_ENTITY(attr_def_p->at_type);\n\t\t} else {\n\t\t\tpy_default_value = Py_None;\n\t\t\tte = 0;\n\t\t}\n\t\tif (_pps_getset_descriptor_object(py_pbs_job_klass,\n\t\t\t\t\t\t  attr_def_p->at_name,\n\t\t\t\t\t\t  py_default_value,\n\t\t\t\t\t\t  py_value_type, NULL, te) == -1)\n\n\t\t\tgoto ERROR_EXIT;\n\t\tPy_INCREF(py_value_type);\n\t\tif (py_default_value != Py_None)\n\t\t\tPy_CLEAR(py_default_value);\n\t\tpy_job_attr_types[i] = py_value_type;\n\t\tattr_def_p++;\n\t}\n\tif (IS_PBS_PYTHON_CMD(pbs_python_daemon_name))\n\t\tDEBUG3_ARG1(\"DONE setting up all job attributes, number set <%d>\", i);\n\telse\n\t\tDEBUG2_ARG1(\"DONE setting up all job attributes, number set <%d>\", i);\n\treturn 0;\n\nERROR_EXIT:\n\tif (py_default_value != Py_None)\n\t\tPy_CLEAR(py_default_value);\n\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t \"could not set attribute <%s> for <job> python class\", attr_def_p->at_name);\n\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\tlog_err(-1, __func__, log_buffer);\n\treturn -1;\n}\n\n/**\n * @brief\n * \tpbs_python_setup_queue_class_attributes\n *   \troutine that sets up *all* the attributes for a Python Queue Object\n *   \tmapping directly to PBS Queue object (pbs_queue)\n *\n * @return      int\n * @retval      -1      :       failure\n * @retval      0       :       success\n */\nint\npbs_python_setup_queue_class_attributes(void)\n{\n\tint i = 0;\n\tattribute_def *attr_def_p = NULL; /* convenience pointer */\n\tPyObject *py_pbs_que_klass = pbs_python_types_table[PP_QUE_IDX].t_class;\n\tPyObject *py_value_type = NULL;\n\tPyObject *py_default_value = NULL;\n\tPyObject *py_default_args = NULL;\n\tint num_entry = QA_ATR_LAST + 1; /* 1 for sentinel */\n\tint te;\n\n\tif 
(IS_PBS_PYTHON_CMD(pbs_python_daemon_name))\n\t\tDEBUG3_ARG1(\"BEGIN setting up all queue attributes %s\", \"\");\n\telse\n\t\tDEBUG2_ARG1(\"BEGIN setting up all queue attributes %s\", \"\");\n\tpy_que_attr_types = PyMem_New(PyObject *, num_entry);\n\tif (!py_que_attr_types) {\n\t\tgoto ERROR_EXIT;\n\t}\n\tmemset(py_que_attr_types, 0, sizeof(PyObject *) * num_entry);\n\n\t/* ok now set all the resources known to the server */\n\tattr_def_p = que_attr_def;\n\tfor (i = 0; i < QA_ATR_LAST; i++) {\n\t\t/* get the value type for this attribute */\n\t\tpy_value_type = pbs_python_setup_attr_get_value_type(attr_def_p,\n\t\t\t\t\t\t\t\t     PY_TYPE_QUEUE);\n\t\t/* create a brand new default value from value type */\n\t\tif (ATTR_IS_RESC(attr_def_p)) {\n\t\t\tpy_default_args = Py_BuildValue(\"(s)\", attr_def_p->at_name);\n\t\t\tif (!py_default_args) {\n\t\t\t\t/* TODO, continuing instead of fatal error */\n\t\t\t\tlog_err(-1, attr_def_p->at_name, \"could not build args for default value\");\n\t\t\t\tattr_def_p++;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tpy_default_value = pbs_python_make_default_value(py_value_type, py_default_args);\n\t\t\tPy_DECREF(py_default_args);\n\t\t\tif (!py_default_value) {\n\t\t\t\t/* TODO, continuing instead of fatal error */\n\t\t\t\tlog_err(-1, attr_def_p->at_name, \"could not set default value\");\n\t\t\t\tattr_def_p++;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tte = TYPE_ENTITY(attr_def_p->at_type);\n\t\t} else {\n\t\t\tpy_default_value = Py_None;\n\t\t\tte = 0;\n\t\t}\n\t\tif (_pps_getset_descriptor_object(py_pbs_que_klass,\n\t\t\t\t\t\t  attr_def_p->at_name,\n\t\t\t\t\t\t  py_default_value,\n\t\t\t\t\t\t  py_value_type, NULL, te) == -1)\n\n\t\t\tgoto ERROR_EXIT;\n\t\tPy_INCREF(py_value_type);\n\t\tif (py_default_value != Py_None)\n\t\t\tPy_CLEAR(py_default_value);\n\t\tpy_que_attr_types[i] = py_value_type;\n\t\tattr_def_p++;\n\t}\n\tif (IS_PBS_PYTHON_CMD(pbs_python_daemon_name))\n\t\tDEBUG3_ARG1(\"DONE setting up all queue attributes, number set <%d>\", 
i);\n\telse\n\t\tDEBUG2_ARG1(\"DONE setting up all queue attributes, number set <%d>\", i);\n\treturn 0;\n\nERROR_EXIT:\n\tif (py_default_value != Py_None)\n\t\tPy_CLEAR(py_default_value);\n\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t \"could not set attribute <%s> for <queue> python class\", attr_def_p->at_name);\n\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\tlog_err(-1, __func__, log_buffer);\n\treturn -1;\n}\n\n/**\n * @brief\n * \tpbs_python_setup_python_resource_type\n *\troutine that sets up *all* the resources for a Python resource Object\n *\n * @return      int\n * @retval      -1      :       failure\n * @retval      0       :       success\n */\nint\npbs_python_setup_python_resource_type(void)\n{\n\tint i = 0, j;\n\tresource_def *resc_def_p = NULL; /* convenience pointer */\n\tPyObject *py_pbs_resc_klass = pbs_python_types_table[PP_RESC_IDX].t_class;\n\tPyObject *py_value_type = NULL;\n\tint num_entry = svr_resc_size + 1; /* 1 for sentinel */\n\n\tif (IS_PBS_PYTHON_CMD(pbs_python_daemon_name))\n\t\tDEBUG3_ARG1(\"BEGIN setting up all resource attributes %s\", \"\");\n\telse\n\t\tDEBUG2_ARG1(\"BEGIN setting up all resource attributes %s\", \"\");\n\tpy_svr_resc_types = PyMem_New(PyObject *, num_entry);\n\tif (!py_svr_resc_types) {\n\t\tgoto ERROR_EXIT;\n\t}\n\tmemset(py_svr_resc_types, 0, sizeof(PyObject *) * num_entry);\n\n\t/* ok now set all the resources known to the server */\n\tresc_def_p = svr_resc_def;\n\ti = 0;\n\tj = svr_resc_size;\n\twhile (j--) {\n\n\t\t/* get the value type for this resource */\n\t\tpy_value_type = pbs_python_setup_resc_get_value_type(resc_def_p);\n\t\t/* default to None */\n\n\t\tif (_pps_getset_descriptor_object(py_pbs_resc_klass,\n\t\t\t\t\t\t  resc_def_p->rs_name,\n\t\t\t\t\t\t  Py_None,\n\t\t\t\t\t\t  py_value_type, PY_RESOURCE_GENERIC_VALUE, 0) == -1) {\n\t\t\tgoto ERROR_EXIT;\n\t\t}\n\t\tPy_INCREF(py_value_type);\n\t\tpy_svr_resc_types[i] = py_value_type;\n\t\tresc_def_p = resc_def_p->rs_next;\n\t\ti++;\n\t}\n\tif 
(IS_PBS_PYTHON_CMD(pbs_python_daemon_name))\n\t\tDEBUG3_ARG1(\"DONE setting up all resource attributes, number set <%d>\", i);\n\telse\n\t\tDEBUG2_ARG1(\"DONE setting up all resource attributes, number set <%d>\", i);\n\treturn 0;\n\nERROR_EXIT:\n\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t \"could not set attribute <%s> for <pbs_resource> python class\", resc_def_p->rs_name);\n\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\tlog_err(-1, __func__, log_buffer);\n\treturn -1;\n}\n\n/**\n * @brief\n * \tpbs_python_clear_types_table clear the python pointers\n *\n * exceptions:\n *   \tNone\n */\n\nstatic void\npbs_python_clear_types_table(void)\n{\n\tpbs_python_types_entry *pp_type = pbs_python_types_table;\n\n\twhile (pp_type->t_key) {\n\t\tPy_CLEAR(pp_type->t_class);\n\t\tpp_type++;\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \tpbs_python_types_table setup routine, called at initialization of interp.\n *\n * @return      int\n * @retval      -1      :       failure\n * @retval      0       :       success\n *\n * exceptions:\n *   None\n * Assumptions:\n *   PBS_PythonTypes must be setup.\n */\n\nstatic int\npbs_python_setup_types_table(void)\n{\n\tpbs_python_types_entry *pp_type = pbs_python_types_table;\n\n\twhile (pp_type->t_key) {\n\t\tpp_type->t_class = PyDict_GetItemString(PBS_PythonTypes, pp_type->t_key);\n\t\tif (!(pp_type->t_class)) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"could not find key <%s> in PBS_PythonTypes\",\n\t\t\t\t pp_type->t_key);\n\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\tgoto ERROR_EXIT;\n\t\t}\n\t\t/* since we are borrowed incref */\n\t\tPy_INCREF(pp_type->t_class);\n\t\tpp_type++;\n\t}\n\n\treturn 0;\n\nERROR_EXIT:\n\t/* on error, the load_python_type ... 
calls the unload for globals */\n\treturn -1;\n}\n\n/**\n * @brief\n *    Unload all the PBS python types initialized, as well as static Python\n *    objects created during the initial loading of the interpreter.\n *\n * @param[in/out]  interp_data\t- data representing the interpreter whose\n *\t\t\t\tloaded-types state will be cleared.\n *\n * @return none\n *\n */\n\nvoid\npbs_python_unload_python_types(struct python_interpreter_data *interp_data)\n{\n\tPy_CLEAR(PyPbsV1Module_Obj);\n\tPy_CLEAR(PBS_PythonTypes);\n\tpbs_python_clear_types_table();\n\tpbs_python_free_py_types_array(&py_svr_resc_types);   /* all resources */\n\tpbs_python_free_py_types_array(&py_que_attr_types);   /* pbs.queue attrs */\n\tpbs_python_free_py_types_array(&py_job_attr_types);   /* pbs.job attrs */\n\tpbs_python_free_py_types_array(&py_svr_attr_types);   /* pbs.server attrs */\n\tpbs_python_free_py_types_array(&py_resv_attr_types);  /* pbs.resv attrs */\n\tpbs_python_free_py_types_array(&py_vnode_attr_types); /* pbs.vnode attrs */\n\tPy_CLEAR(py_pbs_statobj);\n\n\tinterp_data->pbs_python_types_loaded = 0;\n\treturn;\n}\n\n/**\n * @brief\n * \tThis routine is called at the embedded initialization time, to load all\n * \tthe Python types into a dictionary. 
Mapping the name to a python type.\n * \tThe dictionary items are:\n *     \tKey  = <an attribute name>\n *    \tValue = <the attribute type object from python module (pure)>\n *\n * NOTES:\n *     The Dictionary overwrites if a key already exists!\n *\n * @return      int\n * @retval      -1      :       failure\n * @retval      0       :       success\n *\n * side effects:\n *      Any python exception raised is cleared via the call to\n *          pbs_python_write_error_to_log\n */\n\nint\npbs_python_load_python_types(struct python_interpreter_data *interp_data)\n{\n\tPyObject *py_import = NULL;\t /* new */\n\tPyObject *py_sys_modules = NULL; /* Borrowed ref */\n\n\tif ((PBS_PythonTypes)) { /* ok already loaded */\n\t\treturn 0;\n\t}\n\n\tinterp_data->pbs_python_types_loaded = 0; /* just to be safe */\n\n\tif (!(py_import =\n\t\t      PyImport_ImportModuleEx((char *) PBS_PYTHON_V1_MODULE, NULL, NULL,\n\t\t\t\t\t      NULL))) {\n\t\tgoto ERROR_EXIT;\n\t}\n\t/* we don't need it any more, see next for explanation */\n\tPy_CLEAR(py_import);\n\n\tpbs_python_write_object_to_log(PyImport_GetModuleDict(), \"sys.modules=\", LOG_DEBUG);\n\n\t/* At this point the sys.modules loads module from __init__.py of\n\t * PBS_PYTHON_V1_MODULE, which may as a side effect load a bunch of other\n\t * modules. 
So now we get the actual module from sys.modules\n\t */\n\tpy_sys_modules = PyImport_GetModuleDict(); /* get sys.modules */\n\tif (!(PyPbsV1Module_Obj = PyDict_GetItemString(py_sys_modules, PBS_PYTHON_V1_MODULE))) {\n\t\tgoto ERROR_EXIT;\n\t}\n\tPy_INCREF(PyPbsV1Module_Obj); /* since dict.get returns borrowed ref*/\n\n\tif (!(PBS_PythonTypes =\n\t\t      PyObject_GetAttrString(PyPbsV1Module_Obj,\n\t\t\t\t\t     PBS_PYTHON_V1_TYPES_DICTIONARY))) {\n\t\tgoto ERROR_EXIT;\n\t}\n\n\t/* check and make sure it is a mapping */\n\tif (!PyDict_Check(PBS_PythonTypes)) {\n\t\tlog_err(-1, pbs_python_daemon_name,\n\t\t\t\"FATAL: PBS_PythonTypes object does not support mapping protocol\");\n\t\tgoto ERROR_EXIT;\n\t}\n\n\tif ((pbs_python_setup_types_table() == -1)) {\n\t\tgoto ERROR_EXIT;\n\t}\n\n\t/* setup all attrs for pbs_resource class */\n\tif ((pbs_python_setup_python_resource_type() == -1))\n\t\tgoto ERROR_EXIT;\n\n\t/* setup all attrs for queue class */\n\tif ((pbs_python_setup_queue_class_attributes() == -1))\n\t\tgoto ERROR_EXIT;\n\n\t/* setup all attrs for job class */\n\tif ((pbs_python_setup_job_class_attributes() == -1))\n\t\tgoto ERROR_EXIT;\n\n\t/* setup all attrs for server class */\n\tif ((pbs_python_setup_server_class_attributes() == -1))\n\t\tgoto ERROR_EXIT;\n\n\t/* setup all attrs for reservation (resv) class */\n\tif ((pbs_python_setup_resv_class_attributes() == -1))\n\t\tgoto ERROR_EXIT;\n\n\t/* setup all attrs for vnode class */\n\tif ((pbs_python_setup_vnode_class_attributes() == -1))\n\t\tgoto ERROR_EXIT;\n\n\tinterp_data->pbs_python_types_loaded = 1;\n\treturn 0;\n\nERROR_EXIT:\n\tpbs_python_write_error_to_log(__func__);\n\tPy_CLEAR(py_import); /* just in case */\n\tpbs_python_unload_python_types(interp_data);\n\treturn -1;\n}\n\n/**\n * @brief\n * \tA convenience function where given a Python object 'py_resource'\n * \trepresenting 'reslist_name', set its 'resc' to 'value'.\n *\n * @param[in]\tpy_resource - Python resource list object to set\n * 
@param[in]\treslist_name - name of resource list\n * @param[in]\tresc - name of resource to set\n * @param[in]\tvalue - value of resource being set\n *\n * @return \tint\n * @retval\t!= -1\t- success\n * @retval\t-1\t- failure\n *\n */\nstatic int\nset_in_python(PyObject *py_resource,\n\t      char *reslist_name, char *resc, char *value)\n{\n\tint rc;\n\tint hook_set_mode_orig;\n\n\thook_set_mode_orig = hook_set_mode;\n\n\t/* C_MODE is permissive mode unlike PY_MODE, which would enable */\n\t/* extra checks. PY_MODE is needed only if we're loading data */\n\t/* from outside (i.e. specified by hook writer). Here, */\n\t/* set_in_python() is an internal function that is called when */\n\t/* loading data from PBS data structures. */\n\thook_set_mode = C_MODE;\n\trc = pbs_python_object_set_attr_string_value(py_resource, resc, value);\n\thook_set_mode = hook_set_mode_orig;\n\n\tif (rc == -1) {\n\t\tLOG_ERROR_ARG2(\"%s:failed to set resource <%s>\",\n\t\t\t       reslist_name, resc);\n\t}\n\treturn (rc);\n}\n\n/** @brief\n *\tGiven a list of entity resource values (limits) in 'resc_value_list',\n *\tif 'py_resource' is not NULL, then set the Python resource object\n *\trepresented by 'py_resource' to the values given in 'resc_value_list'.\n *\tOtherwise, if 'py_resource' is NULL, return a\n *\tcomma-separated string of resource values in 'resc_value_list' into\n *\t'p_strbuf'.\n *  @note\n *\tThe returned string in 'p_strbuf' is in a privately malloced area, and\n *\tshould not be freed outside this function.\n *\n * @param[in]\tresc_value_list - list of resource values (<resc>=<val>).\n * @param[in]\treslist_name - name of the resource list (e.g. 
resources_used).\n * @param[in]\tpy_resource - the Python pbs resource object being set.\n * @param[in]\tp_strbuf - pointer to a string buffer to hold return string\n *\t\t\tvalue if 'py_resource' is NULL.\n * @return int\n * @retval 0\t- for success\n * @retval -1   - for failure\n *\n */\nstatic int\nset_entity_resource_or_return_value(pbs_list_head *resc_value_list,\n\t\t\t\t    char *reslist_name, PyObject *py_resource, char **p_strbuf)\n{\n\tstatic char *ret_str_value = NULL;\n\tstatic size_t ret_len = STRBUF;\n\tstatic size_t nlen = 0;\n\tsvrattrl *svrattr_val_tmp = NULL; /* tmp pointer for traversal*/\n\tsvrattrl *svrattr_val_next = NULL;\n\tsvrattrl *plist = NULL;\n\tpbs_list_head entity_head;\n\tchar *bef_resc = NULL;\n\tchar *cur_resc = NULL;\n\tchar *next_resc = NULL;\n\tchar *tmp_str = NULL;\n\tint rc = 0; /* will be set to -1 if there's at least one failure */\n\n\tif (ret_str_value == NULL) {\n\t\tret_str_value = (char *) malloc(ret_len);\n\t\tif (ret_str_value == NULL) {\n\t\t\tlog_err(-1, __func__, \"failed to malloc string buffer!\");\n\t\t\treturn (-1);\n\t\t}\n\t}\n\tret_str_value[0] = '\\0';\n\n\t/*\n\t * The 'resc_value_list' for entity resource say 'max_queued_res'\n\t * would look like:\n\t *\n\t * resc_value_list->\n\t * al_name\t\tal_resc\t\tal_value\n\t * ---------\t\t--------\t----------\n\t *  max_queued_res\tfile\t\t[u:alpha=110gb]\n\t *  max_queued_res\tncpus\t\t[u:bayucan=9]\n\t *  max_queued_res\tfile\t\t[u:fatir=120gb]\n\t *  max_queued_res\tfile\t\t[u:mega=115gb]\n\t *\n\t *  So we store these in an intermediate list (entity_head)\n\t *  in a sorted way, so as to group them according to resource names.\n\t *  By comparing adjacent entries, we can determine which values\n\t *  belong to a particular resource name.\n\t *\n\t *  entity_head->\n\t * al_name\t\tal_resc\t\tal_value\n\t * ---------\t\t--------\t----------\n\t *  max_queued_res\tfile\t\t[u:alpha=110gb]\n\t *  max_queued_res\tfile\t\t[u:fatir=120gb]\n\t *  
max_queued_res\tfile\t\t[u:mega=115gb]\n\t *  max_queued_res\tncpus\t\t[u:bayucan=9]\n\t *\n\t * And process 'entity_head' in a way that the following is stored in\n\t * 'ret_str_value':\n\t *\tmax_queued_res=file=\"[u:alpha=110gb],[u:fatir=120gb],[u:mega=115gb]\",ncpus=[u:bayucan=9]\n\t *\t     <- note: there's an enclosing quote in multi-valued limits\n\t */\n\n\tCLEAR_HEAD(entity_head);\n\n\t/* need to append values in a sorted way */\n\tnlen = 1; /* 1 for \\0 */\n\tplist = (svrattrl *) GET_NEXT(*resc_value_list);\n\twhile (plist) {\n\t\tif (add_to_svrattrl_list_sorted(&entity_head,\n\t\t\t\t\t\tplist->al_name, plist->al_resc,\n\t\t\t\t\t\tplist->al_value, plist->al_op,\n\t\t\t\t\t\tplist->al_resc) == -1) {\n\t\t\tfree_attrlist(&entity_head);\n\t\t\tlog_err(-1, __func__,\n\t\t\t\t\"failed populating entity limits value\");\n\t\t\treturn (-1);\n\t\t}\n\t\tnlen += 1 + /*  for ',' */\n\t\t\tstrlen(plist->al_resc) +\n\t\t\t1 + /* for '=' */\n\t\t\tstrlen(plist->al_value) +\n\t\t\t3; /* allocate for each possible */\n\t\t/* '\"' and ',' in case we have a */\n\t\t/* multi-valued resource limit. Ex. 
*/\n\t\t/* file=\"[u:alpha=50gb],[u:fap=60gb]\" */\n\n\t\tplist = (svrattrl *) GET_NEXT(plist->al_link);\n\t}\n\tif (nlen > ret_len) {\n\n\t\tnlen += BUF_SIZE; /* malloc a larger size */\n\t\ttmp_str = (char *) realloc(ret_str_value, nlen);\n\t\tif (tmp_str == NULL) {\n\t\t\tlog_err(-1, __func__,\n\t\t\t\t\"failed to realloc string buffer!\");\n\t\t\treturn (-1);\n\t\t}\n\t\tret_str_value = tmp_str;\n\t\tret_len = nlen;\n\t}\n\n\t/* at this point, we have enough buffer space in */\n\t/* ret_str_value */\n\t/*\n\n\t Algorithm:\n\n\t 3 variables are used to keep track of the individual entries\n\t in the sorted 'entity_head' list.\n\n\t 1.\tcur_resc is the current resource/value line being looked at.\n\t 2.\tbef_resc is the resource/value from the previous line.\n\t 3.\tnext_resc is the resource/value on the next line.\n\n\t Then there's a buffer 'ret_str_value' that accumulates values\n\t associated with a given resource name. <resource> and <value>\n\t are those associated with 'cur_resc'.\n\n\t bef_resc == NULL, next_resc == NULL\n\t -> \tif returning string value, then\n\t ret_str_value=<resource>=<value>\n\t else\n\t set <py_resource>.<resource>=<value>\n\n\t bef_resc == NULL, next_resc != cur_resc\n\t -> \tif returning string value, then\n\t ret_str_value=<resource>=<value>\n\t else\n\t set <py_resource>.<resource>=<value>\n\n\t bef_resc == NULL, next_resc == cur_resc\n\t ->\tif returning string value, then\n\t start accumulating values:\n\t ret_str_value=\"<resource>=\\\"<value>\"\n\t else\n\t ret_str_value=\"<value>\"\n\n\t bef_resc == cur_resc, cur_resc != next_resc\n\t ->\tif returning string value,\n\t terminate the current string value:\n\n\t ret_str_value += \",<value>\\\"\"\n\t else\n\t set <py_resource>.<resource>=<ret_str_value>,<value>\n\n\t bef_resc == cur_resc, cur_resc == next_resc\n\t ->\tcontinue on the string value:\n\t ret_str_value  += \",<value>\"\n\n\t bef_resc != cur_resc, cur_resc != next_resc\n\t ->\tif returning string value,\n\t start 
a new set of resource/value pairs:\n\t ret_str_value += \",<resource>=<value>\"\n\t else\n\t set <py_resource>.<resource>=<value>\n\n\t bef_resc != cur_resc, cur_resc == next_resc\n\t if returning string value,\n\t ret_str_value += \"=\\\"<value>\"\n\t else\n\t ret_str_value = \"<value>\"\n\t */\n\n\tsvrattr_val_tmp = (svrattrl *) GET_NEXT(entity_head);\n\tbef_resc = NULL;\n\twhile (svrattr_val_tmp) {\n\n\t\tsvrattr_val_next =\n\t\t\t(svrattrl *) GET_NEXT(svrattr_val_tmp->al_link);\n\n\t\tcur_resc = svrattr_val_tmp->al_resc;\n\t\tif (cur_resc == NULL) {\n\t\t\tbef_resc = cur_resc;\n\t\t\tsvrattr_val_tmp = svrattr_val_next;\n\t\t\tcontinue;\n\t\t}\n\t\tif (svrattr_val_next != NULL) {\n\t\t\tif (svrattr_val_next->al_resc == NULL) {\n\t\t\t\tbef_resc = cur_resc;\n\t\t\t\tsvrattr_val_tmp = svrattr_val_next;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tnext_resc = svrattr_val_next->al_resc;\n\t\t} else {\n\t\t\tnext_resc = NULL;\n\t\t}\n\n\t\tif (bef_resc == NULL) {\n\t\t\tif (next_resc == NULL) {\n\n\t\t\t\tif (py_resource == NULL) {\n\t\t\t\t\tsprintf(ret_str_value, \"%s=%s\",\n\t\t\t\t\t\tsvrattr_val_tmp->al_resc,\n\t\t\t\t\t\tsvrattr_val_tmp->al_value);\n\t\t\t\t} else {\n\t\t\t\t\tif (set_in_python(py_resource,\n\t\t\t\t\t\t\t  reslist_name,\n\t\t\t\t\t\t\t  svrattr_val_tmp->al_resc,\n\t\t\t\t\t\t\t  svrattr_val_tmp->al_value) == -1)\n\t\t\t\t\t\trc = -1;\n\t\t\t\t}\n\n\t\t\t\tif (hook_debug.data_fp != NULL) {\n\t\t\t\t\tfprintf(hook_debug.data_fp,\n\t\t\t\t\t\t\"%s.%s[%s]=%s\\n\",\n\t\t\t\t\t\t(char *) hook_debug.objname,\n\t\t\t\t\t\treslist_name,\n\t\t\t\t\t\tsvrattr_val_tmp->al_resc,\n\t\t\t\t\t\tsvrattr_val_tmp->al_value);\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif (strcmp(cur_resc, next_resc) != 0) {\n\t\t\t\t\tif (py_resource == NULL) {\n\t\t\t\t\t\tsprintf(ret_str_value, \"%s=%s\",\n\t\t\t\t\t\t\tsvrattr_val_tmp->al_resc,\n\t\t\t\t\t\t\tsvrattr_val_tmp->al_value);\n\t\t\t\t\t} else {\n\t\t\t\t\t\tif (set_in_python(py_resource,\n\t\t\t\t\t\t\t\t  
reslist_name,\n\t\t\t\t\t\t\t\t  svrattr_val_tmp->al_resc,\n\t\t\t\t\t\t\t\t  svrattr_val_tmp->al_value) == -1)\n\t\t\t\t\t\t\trc = -1;\n\t\t\t\t\t}\n\t\t\t\t\tif (hook_debug.data_fp != NULL) {\n\t\t\t\t\t\tfprintf(hook_debug.data_fp,\n\t\t\t\t\t\t\t\"%s.%s[%s]=%s\\n\",\n\t\t\t\t\t\t\t(char *) hook_debug.objname,\n\t\t\t\t\t\t\treslist_name,\n\t\t\t\t\t\t\tsvrattr_val_tmp->al_resc,\n\t\t\t\t\t\t\tsvrattr_val_tmp->al_value);\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tif (py_resource == NULL) {\n\t\t\t\t\t\tsprintf(ret_str_value,\n\t\t\t\t\t\t\t\"%s=\\\"%s\",\n\t\t\t\t\t\t\tsvrattr_val_tmp->al_resc,\n\t\t\t\t\t\t\tsvrattr_val_tmp->al_value);\n\t\t\t\t\t} else {\n\t\t\t\t\t\t/* use ret_str_value */\n\t\t\t\t\t\t/* as value buffer */\n\t\t\t\t\t\tstrcpy(ret_str_value,\n\t\t\t\t\t\t       svrattr_val_tmp->al_value);\n\t\t\t\t\t}\n\n\t\t\t\t\tif (hook_debug.data_fp != NULL) {\n\t\t\t\t\t\tfprintf(hook_debug.data_fp,\n\t\t\t\t\t\t\t\"%s.%s[%s]=%s\",\n\t\t\t\t\t\t\t(char *) hook_debug.objname,\n\t\t\t\t\t\t\treslist_name,\n\t\t\t\t\t\t\tsvrattr_val_tmp->al_resc,\n\t\t\t\t\t\t\tsvrattr_val_tmp->al_value);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t} else {\n\n\t\t\tif (strcmp(bef_resc, cur_resc) == 0) {\n\t\t\t\tstrcat(ret_str_value, \",\");\n\t\t\t\tstrcat(ret_str_value,\n\t\t\t\t       svrattr_val_tmp->al_value);\n\t\t\t\tif (hook_debug.data_fp != NULL) {\n\t\t\t\t\tfprintf(hook_debug.data_fp,\n\t\t\t\t\t\t\",%s\",\n\t\t\t\t\t\tsvrattr_val_tmp->al_value);\n\t\t\t\t}\n\t\t\t\tif ((next_resc != NULL) &&\n\t\t\t\t    (strcmp(cur_resc, next_resc) != 0)) {\n\t\t\t\t\tif (py_resource == NULL) {\n\t\t\t\t\t\t/* terminate entity val  */\n\t\t\t\t\t\tstrcat(ret_str_value, \"\\\"\");\n\t\t\t\t\t} else {\n\t\t\t\t\t\tif (set_in_python(py_resource,\n\t\t\t\t\t\t\t\t  reslist_name,\n\t\t\t\t\t\t\t\t  svrattr_val_tmp->al_resc,\n\t\t\t\t\t\t\t\t  ret_str_value) == -1)\n\t\t\t\t\t\t\trc = -1;\n\t\t\t\t\t}\n\t\t\t\t\tif (hook_debug.data_fp != NULL) 
{\n\t\t\t\t\t\tfprintf(hook_debug.data_fp,\n\t\t\t\t\t\t\t\"\\n\");\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else { /* start new */\n\t\t\t\tif (py_resource == NULL) {\n\t\t\t\t\tstrcat(ret_str_value, \",\");\n\t\t\t\t} else {\n\t\t\t\t\tret_str_value[0] = '\\0';\n\t\t\t\t}\n\t\t\t\tif ((next_resc == NULL) ||\n\t\t\t\t    (strcmp(cur_resc, next_resc) != 0)) {\n\t\t\t\t\tif (py_resource == NULL) {\n\t\t\t\t\t\tstrcat(ret_str_value,\n\t\t\t\t\t\t       svrattr_val_tmp->al_resc);\n\t\t\t\t\t\tstrcat(ret_str_value, \"=\");\n\t\t\t\t\t\tstrcat(ret_str_value,\n\t\t\t\t\t\t       svrattr_val_tmp->al_value);\n\t\t\t\t\t} else {\n\t\t\t\t\t\tif (set_in_python(py_resource,\n\t\t\t\t\t\t\t\t  reslist_name,\n\t\t\t\t\t\t\t\t  svrattr_val_tmp->al_resc,\n\t\t\t\t\t\t\t\t  svrattr_val_tmp->al_value) == -1)\n\t\t\t\t\t\t\trc = -1;\n\t\t\t\t\t}\n\n\t\t\t\t\tif (hook_debug.data_fp !=\n\t\t\t\t\t    NULL) {\n\t\t\t\t\t\tfprintf(hook_debug.data_fp,\n\t\t\t\t\t\t\t\"%s.%s[%s]=%s\\n\",\n\t\t\t\t\t\t\t(char *) hook_debug.objname,\n\t\t\t\t\t\t\treslist_name,\n\t\t\t\t\t\t\tsvrattr_val_tmp->al_resc,\n\t\t\t\t\t\t\tsvrattr_val_tmp->al_value);\n\t\t\t\t\t}\n\t\t\t\t} else { /* cur_resc == next_resc */\n\t\t\t\t\tif (py_resource == NULL) {\n\t\t\t\t\t\tstrcat(ret_str_value,\n\t\t\t\t\t\t       svrattr_val_tmp->al_resc);\n\t\t\t\t\t\tstrcat(ret_str_value, \"=\\\"\");\n\t\t\t\t\t\tstrcat(ret_str_value,\n\t\t\t\t\t\t       svrattr_val_tmp->al_value);\n\t\t\t\t\t} else {\n\t\t\t\t\t\t/* using as value string */\n\t\t\t\t\t\tstrcpy(ret_str_value,\n\t\t\t\t\t\t       svrattr_val_tmp->al_value);\n\t\t\t\t\t}\n\t\t\t\t\tif (hook_debug.data_fp !=\n\t\t\t\t\t    NULL) {\n\t\t\t\t\t\tfprintf(hook_debug.data_fp,\n\t\t\t\t\t\t\t\"%s.%s[%s]=%s\",\n\t\t\t\t\t\t\t(char *) hook_debug.objname,\n\t\t\t\t\t\t\treslist_name,\n\t\t\t\t\t\t\tsvrattr_val_tmp->al_resc,\n\t\t\t\t\t\t\tsvrattr_val_tmp->al_value);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tbef_resc = cur_resc;\n\t\tsvrattr_val_tmp = 
svrattr_val_next;\n\t}\n\tfree_attrlist(&entity_head);\n\n\tif (py_resource == NULL)\n\t\t*p_strbuf = ret_str_value;\n\n\treturn (rc);\n}\n\n/**\n * @brief\n *\tGiven a list of resource values in 'resc_value_list', if\n *\t'py_resource' is not NULL, then set the Python resource object\n *\trepresented by 'py_resource' to the values given in 'resc_value_list'.\n *\tIf 'py_resource' is NULL, return a comma-separated string\n *\tinto 'p_strbuf', which represents the values given in 'resc_value_list'.\n * @note\n *\tThe returned string in 'p_strbuf' is in a private malloced area that\n *\tshould not be freed outside this function.\n *\n * @param[in]\tresc_value_list - list of resource values (<resc>=<val>).\n * @param[in]\treslist_name - name of the resource list being set.\n * @param[in]\tpy_resource - the Python pbs resource object being set.\n * @param[in]\tp_strbuf - pointer to a string buffer to hold return string\n *\t\t\tvalue if 'py_resource' is NULL.\n * @return \tint\n * @retval \t0    -\tfor success\n * @retval \t-1   - \tfor failure\n *\n */\nstatic int\nset_resource_or_return_value(pbs_list_head *resc_value_list, char *reslist_name,\n\t\t\t     PyObject *py_resource, char **p_strbuf)\n{\n\tstatic char *ret_str_value = NULL;\n\tstatic size_t ret_len = STRBUF;\n\tstatic size_t nlen = 0;\n\tsvrattrl *svrattr_val_tmp = NULL; /* tmp pointer for traversal */\n\tchar *tmp_str = NULL;\n\tint rc = 0; /* will be set to -1 if there's at least one failure */\n\n\tif (ret_str_value == NULL) {\n\t\tret_str_value = (char *) malloc(ret_len);\n\t\tif (ret_str_value == NULL) {\n\t\t\tlog_err(-1, __func__, \"failed to malloc string buffer!\");\n\t\t\treturn (-1);\n\t\t}\n\t}\n\tret_str_value[0] = '\\0';\n\n\tsvrattr_val_tmp = (svrattrl *) GET_NEXT(*resc_value_list);\n\twhile (svrattr_val_tmp) {\n\n\t\tif (py_resource == NULL) {\n\t\t\tnlen = strlen(ret_str_value) +\n\t\t\t       1 + /*  for ',' */\n\t\t\t       strlen(svrattr_val_tmp->al_resc) +\n\t\t\t       1 + /* for '=' */\n\t\t\t       strlen(svrattr_val_tmp->al_value) +\n\t\t\t       1; /* 
for '\\0' */\n\n\t\t\tif (nlen > ret_len) {\n\t\t\t\tnlen += BUF_SIZE; /* realloc to a larger size */\n\t\t\t\ttmp_str = (char *) realloc(ret_str_value, nlen);\n\t\t\t\tif (tmp_str == NULL) {\n\t\t\t\t\tlog_err(-1, __func__,\n\t\t\t\t\t\t\"failed to realloc string buffer!\");\n\t\t\t\t\treturn (-1);\n\t\t\t\t}\n\t\t\t\tret_str_value = tmp_str;\n\t\t\t\tret_len = nlen;\n\t\t\t}\n\n\t\t\tif (*ret_str_value == '\\0') {\n\n\t\t\t\tsprintf(ret_str_value, \"%s=%s\",\n\t\t\t\t\tsvrattr_val_tmp->al_resc,\n\t\t\t\t\tsvrattr_val_tmp->al_value);\n\t\t\t} else {\n\t\t\t\tstrcat(ret_str_value, \",\");\n\t\t\t\tstrcat(ret_str_value, svrattr_val_tmp->al_resc);\n\t\t\t\tstrcat(ret_str_value, \"=\");\n\t\t\t\tstrcat(ret_str_value, svrattr_val_tmp->al_value);\n\t\t\t}\n\t\t} else {\n\t\t\tif (set_in_python(py_resource,\n\t\t\t\t\t  reslist_name,\n\t\t\t\t\t  svrattr_val_tmp->al_resc,\n\t\t\t\t\t  svrattr_val_tmp->al_value) == -1) {\n\t\t\t\trc = -1;\n\t\t\t}\n\t\t}\n\t\tif (hook_debug.data_fp != NULL) {\n\t\t\tfprintf(hook_debug.data_fp, \"%s.%s[%s]=%s\\n\",\n\t\t\t\t(char *) hook_debug.objname,\n\t\t\t\treslist_name, svrattr_val_tmp->al_resc,\n\t\t\t\tsvrattr_val_tmp->al_value);\n\t\t}\n\t\tsvrattr_val_tmp =\n\t\t\t(svrattrl *) GET_NEXT(svrattr_val_tmp->al_link);\n\t}\n\n\tif (py_resource == NULL)\n\t\t*p_strbuf = ret_str_value;\n\treturn (rc);\n}\n\n/**\n * @brief\n *\tReturns the Python string value of 'resc_val' which is of type\n *\t'pbs_resource_value' representing a pbs_resource object.\n *\n * @param[in]\tresc_val - a 'pbs_resource_value' input type.\n *\n * @return\tPyObject *\n * @retval\tNULL\t- error getting string value.\n * @retval\t<python string value> - a Python object representing the\n * @retval\t\"<resc1>=<val1>,<entity_resc2>=\"<val2,val3>\",...,<rescN>=<valN>\"\n *\t\t- a concatenation of string values for py_resource.\n *\n */\nPyObject *\npy_resource_string_value(pbs_resource_value *resc_val)\n{\n\tchar *ret_string = NULL;\n\n\tif (resc_val == NULL) 
{\n\t\tPy_RETURN_NONE;\n\t}\n\n\tif (TYPE_ENTITY(resc_val->attr_def_p->at_type)) {\n\t\tset_entity_resource_or_return_value(&(resc_val->value_list),\n\t\t\t\t\t\t    resc_val->attr_def_p->at_name,\n\t\t\t\t\t\t    NULL, &ret_string);\n\t} else { /* a regular resource */\n\t\tset_resource_or_return_value(&(resc_val->value_list),\n\t\t\t\t\t     resc_val->attr_def_p->at_name,\n\t\t\t\t\t     NULL, &ret_string);\n\t}\n\tif (ret_string == NULL) {\n\t\t/* no value produced; don't pass NULL to PyUnicode_FromString() */\n\t\tPy_RETURN_NONE;\n\t}\n\t/* New reference returned below */\n\treturn PyUnicode_FromString(ret_string);\n}\n\n/*\n * ---------- ATTRIBUTE CONVERSION HELPER METHODS ------------\n */\n\n/**\n * @brief\n *\n * \tPopulates a Python instance 'py_instance' with values found in\n *\tan attributes data array.\n *\n * @param[in] py_instance - a Python object/class to populate\n * @param[in] attr_py_array - list of Python types to map attributes with\n * @param[in] attr_data_array - array of actual attribute names/resources/values\n * @param[in] attr_def_array - array of attribute definitions (ex. job_attr_def)\n * @param[in] attr_def_array_size - size of attr_def_array.\n * @param[in]\tperf_label - passed on to hook_perf_stat* call.\n * @param[in]\tperf_action - passed on to hook_perf_stat* call.\n *\n * @return int - indication of whether or not 'py_instance' was completely\n *\t   populated\n * @retval 0\t- completely populated\n * @retval -1\t- incompletely populated\n *\n * @note\n *\t\tThis function calls a single hook_perf_stat_start()\n *\t\tthat has some malloc-ed data that are freed in the\n *\t\thook_perf_stat_stop() call, which is done at the end of\n *\t\tthis function.\n *\t\tEnsure that after the hook_perf_stat_start(), all\n *\t\tprogram execution paths lead to the hook_perf_stat_stop()\n *\t\tcall.\n */\n\nint\npbs_python_populate_attributes_to_python_class(PyObject *py_instance,\n\t\t\t\t\t       PyObject **attr_py_array,\n\t\t\t\t\t       attribute *attr_data_array,\n\t\t\t\t\t       attribute_def *attr_def_array,\n\t\t\t\t\t       int attr_def_array_size, char 
*perf_label, char *perf_action)\n{\n\tint i = 0;\t   /* index */\n\tint encode_rv = 0; /* at_encode functions return value */\n\tint rc = -1;\n\tint ret_rc = 0;\n\tsvrattrl *svrattr_val = NULL;\t  /* tmp pointer */\n\tsvrattrl *svrattr_val_tmp = NULL; /* tmp pointer for traversal*/\n\tpbs_list_head pheadp;\n\tattribute *attr_p = NULL;\n\tattribute_def *attr_def_p = NULL;\n\tPyObject *py_attr_resc = NULL; /* for resource types */\n\tchar *value_str = NULL;\n\tchar *new_value_str = NULL;\n\tpbs_resource_value *resc_val;\n\n\thook_perf_stat_start(perf_label, perf_action, 0);\n\tfor (i = 0; i < attr_def_array_size; i++) {\n\t\tattr_p = attr_data_array + i;\n\t\tattr_def_p = attr_def_array + i;\n\n\t\tmemset(&pheadp, 0, sizeof(pheadp));\n\t\tCLEAR_HEAD(pheadp);\n\n\t\tsvrattr_val = NULL;\n\t\tencode_rv = attr_def_p->at_encode(attr_p,\n\t\t\t\t\t\t  /* linked list */ &pheadp,\n\t\t\t\t\t\t  /* name        */ attr_def_p->at_name,\n\t\t\t\t\t\t  /* resource    */ NULL,\n\t\t\t\t\t\t  /* Encoding type */ ATR_ENCODE_HOOK,\n\t\t\t\t\t\t  /* returned svrattrl */ &svrattr_val);\n\n\t\tif ((encode_rv == 0) && (svrattr_val != NULL)) {\n\t\t\tencode_rv = 1;\n\t\t}\n\t\tif (encode_rv == 0) {\n\t\t\t/* not set or no value */\n\t\t\tcontinue;\n\t\t} else if (encode_rv >= 1) { /* good, single value */\n\t\t\t/* we could be a resource list */\n\t\t\tif (ATTR_IS_RESC(attr_def_p)) {\n\t\t\t\tif (!PyObject_HasAttrString(py_instance, attr_def_p->at_name)) {\n\t\t\t\t\tfree_attrlist(&pheadp);\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\n\t\t\t\t/* NOTE the below is a new reference */\n\t\t\t\tpy_attr_resc =\n\t\t\t\t\tPyObject_GetAttrString(py_instance,\n\t\t\t\t\t\t\t       attr_def_p->at_name);\n\t\t\t\tif (py_attr_resc == NULL) {\n\t\t\t\t\tpbs_python_write_error_to_log(__func__);\n\t\t\t\t\tfree_attrlist(&pheadp);\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\t/* Mark resource currently has no value */\n\t\t\t\t/* loaded, but the value will be set later */\n\t\t\t\t/* as needed, by saving the value in 
*/\n\t\t\t\t/* pbs_resource_value_list */\n\t\t\t\trc = pbs_python_object_set_attr_integral_value(\n\t\t\t\t\tpy_attr_resc,\n\t\t\t\t\tPY_RESOURCE_HAS_VALUE, FALSE);\n\t\t\t\tif (rc == -1) {\n\t\t\t\t\tLOG_ERROR_ARG2(\"%s:failed to set resource <%s> to False\",\n\t\t\t\t\t\t       attr_def_p->at_name,\n\t\t\t\t\t\t       PY_RESOURCE_HAS_VALUE);\n\t\t\t\t\tret_rc = -1;\n\t\t\t\t} else {\n\t\t\t\t\tsprintf(log_buffer, \"set py_resource %s %s to FALSE\",\n\t\t\t\t\t\tattr_def_p->at_name,\n\t\t\t\t\t\tPY_RESOURCE_HAS_VALUE);\n\t\t\t\t\tresc_val =\n\t\t\t\t\t\t(pbs_resource_value *) malloc(\n\t\t\t\t\t\t\tsizeof(pbs_resource_value));\n\t\t\t\t\tif (resc_val ==\n\t\t\t\t\t    NULL) {\n\t\t\t\t\t\tfree_attrlist(&pheadp);\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\n\t\t\t\t\t(void) memset((char *) resc_val, (int) 0,\n\t\t\t\t\t\t      (size_t) sizeof(pbs_resource_value));\n\t\t\t\t\tCLEAR_LINK(resc_val->all_rescs);\n\t\t\t\t\t/* no need to incref py_attr_resc */\n\t\t\t\t\t/* since that's already done */\n\t\t\t\t\t/* with the PyObject_GetAttrString() */\n\t\t\t\t\t/* call earlier. */\n\t\t\t\t\tresc_val->py_resource = py_attr_resc;\n\t\t\t\t\tresc_val->attr_def_p = attr_def_p;\n\n\t\t\t\t\tCLEAR_HEAD(resc_val->value_list);\n\t\t\t\t\tlist_move(&pheadp,\n\t\t\t\t\t\t  &resc_val->value_list);\n\n\t\t\t\t\tappend_link(&pbs_resource_value_list,\n\t\t\t\t\t\t    &resc_val->all_rescs,\n\t\t\t\t\t\t    (pbs_resource_value *) resc_val);\n\t\t\t\t\tresc_val->py_resource_str_value =\n\t\t\t\t\t\tpy_resource_string_value(resc_val);\n\t\t\t\t}\n\t\t\t} else { /* attribute */\n\t\t\t\t/* PBS' ATTR_inter/ATTR_block/ATTR_X11_port can either have a boolean-like */\n\t\t\t\t/* value for client (i.e. \"True\" or \"False\"), or an int-like */\n\t\t\t\t/* value for others (e.g. \"2274\" for port number)            */\n\t\t\t\t/* Python's version of these attributes are defined as ints, */\n\t\t\t\t/* and are not modifiable in a hook script. 
So we need to    */\n\t\t\t\t/* map the values into something consistent.                 */\n\n\t\t\t\tif ((strcmp(attr_def_p->at_name, ATTR_inter) == 0) ||\n\t\t\t\t    (strcmp(attr_def_p->at_name, ATTR_block) == 0) ||\n\t\t\t\t    (strcmp(attr_def_p->at_name, ATTR_X11_port) == 0)) {\n\t\t\t\t\tchar inter_val[2];\n\n\t\t\t\t\tif (strcasecmp(svrattr_val->al_value, ATR_FALSE) == 0) {\n\t\t\t\t\t\tstrcpy(inter_val, \"0\");\n\t\t\t\t\t} else {\n\t\t\t\t\t\tstrcpy(inter_val, \"1\");\n\t\t\t\t\t}\n\t\t\t\t\trc = pbs_python_object_set_attr_string_value(py_instance,\n\t\t\t\t\t\t\t\t\t\t     attr_def_p->at_name,\n\t\t\t\t\t\t\t\t\t\t     inter_val);\n\t\t\t\t\tif ((rc != -1) && (hook_debug.data_fp != NULL)) {\n\t\t\t\t\t\tfprintf(hook_debug.data_fp, \"%s.%s=%s\\n\", (char *) hook_debug.objname,\n\t\t\t\t\t\t\tattr_def_p->at_name, inter_val);\n\t\t\t\t\t}\n\t\t\t\t} else if ((strcmp(attr_def_p->at_name,\n\t\t\t\t\t\t   ATTR_NODE_state) == 0) ||\n\t\t\t\t\t   (strcmp(attr_def_p->at_name,\n\t\t\t\t\t\t   ATTR_NODE_ntype) == 0)) {\n\t\t\t\t\t/* ignore these attributes, dealt with externally */\n\t\t\t\t\tfree_attrlist(&pheadp);\n\t\t\t\t\tcontinue;\n\n\t\t\t\t} else if ((strcmp(attr_def_p->at_name,\n\t\t\t\t\t\t   ATTR_NODE_Sharing) == 0)) {\n\n\t\t\t\t\tattribute lattr;\n\t\t\t\t\tchar nshare_str[HOOK_BUF_SIZE];\n\n\t\t\t\t\trc = decode_sharing(&lattr, attr_def_p->at_name, 0,\n\t\t\t\t\t\t\t    svrattr_val->al_value);\n\n\t\t\t\t\tif (rc == 0) {\n\t\t\t\t\t\tsnprintf(nshare_str, sizeof(nshare_str), \"%ld\",\n\t\t\t\t\t\t\t lattr.at_val.at_long);\n\n\t\t\t\t\t\trc = pbs_python_object_set_attr_string_value(py_instance,\n\t\t\t\t\t\t\t\t\t\t\t     attr_def_p->at_name, nshare_str);\n\t\t\t\t\t\tif ((rc != -1) && (hook_debug.data_fp != NULL)) {\n\t\t\t\t\t\t\tfprintf(hook_debug.data_fp, \"%s.%s=%s\\n\", (char *) hook_debug.objname,\n\t\t\t\t\t\t\t\tattr_def_p->at_name, nshare_str);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t} else if (TYPE_ENTITY(attr_def_p->at_type)) 
{\n\t\t\t\t\t/* an entity attribute - can have a list of values */\n\n\t\t\t\t\tsvrattr_val_tmp = svrattr_val;\n\t\t\t\t\twhile (svrattr_val_tmp) {\n\n\t\t\t\t\t\tnew_value_str = NULL;\n\t\t\t\t\t\tvalue_str = pbs_python_object_get_attr_string_value(\n\t\t\t\t\t\t\tpy_instance, svrattr_val_tmp->al_name);\n\n\t\t\t\t\t\tif (value_str != NULL) {\n\n\t\t\t\t\t\t\tnew_value_str = malloc(strlen(value_str) +\n\t\t\t\t\t\t\t\t\t       strlen(svrattr_val_tmp->al_value) + 2);\n\t\t\t\t\t\t\t/* +2 for: \",\" and \"\\0\" */\n\t\t\t\t\t\t\tif (new_value_str == NULL) {\n\t\t\t\t\t\t\t\tLOG_ERROR_ARG2(\n\t\t\t\t\t\t\t\t\t\"%s:malloc failed extending entity <%s>\",\n\t\t\t\t\t\t\t\t\tattr_def_p->at_name,\n\t\t\t\t\t\t\t\t\tsvrattr_val_tmp->al_name);\n\t\t\t\t\t\t\t\tret_rc = -1;\n\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\tsprintf(new_value_str, \"%s,%s\",\n\t\t\t\t\t\t\t\t\tvalue_str, svrattr_val_tmp->al_value);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\trc = pbs_python_object_set_attr_string_value(\n\t\t\t\t\t\t\tpy_instance,\n\t\t\t\t\t\t\tattr_def_p->at_name,\n\t\t\t\t\t\t\tnew_value_str ? new_value_str : svrattr_val->al_value);\n\t\t\t\t\t\tif ((rc != -1) && (hook_debug.data_fp != NULL)) {\n\t\t\t\t\t\t\tfprintf(hook_debug.data_fp, \"%s.%s=%s\\n\", (char *) hook_debug.objname,\n\t\t\t\t\t\t\t\tattr_def_p->at_name,\n\t\t\t\t\t\t\t\tnew_value_str ? 
new_value_str : svrattr_val->al_value);\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tif (new_value_str != NULL) {\n\t\t\t\t\t\t\tfree(new_value_str);\n\t\t\t\t\t\t}\n\t\t\t\t\t\tsvrattr_val_tmp = (svrattrl *) GET_NEXT(\n\t\t\t\t\t\t\tsvrattr_val_tmp->al_link);\n\n\t\t\t\t\t} /* while */\n\n\t\t\t\t} else {\n\t\t\t\t\trc = pbs_python_object_set_attr_string_value(py_instance,\n\t\t\t\t\t\t\t\t\t\t     attr_def_p->at_name,\n\t\t\t\t\t\t\t\t\t\t     svrattr_val->al_value);\n\n\t\t\t\t\tif ((rc != -1) && (hook_debug.data_fp != NULL)) {\n\t\t\t\t\t\tfprintf(hook_debug.data_fp, \"%s.%s=%s\\n\", (char *) hook_debug.objname,\n\t\t\t\t\t\t\tattr_def_p->at_name, svrattr_val->al_value);\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif (rc == -1) {\n\t\t\t\t\tLOG_ERROR_ARG2(\"%s:failed to set attribute <%s>\",\n\t\t\t\t\t\t       \"\", attr_def_p->at_name);\n\t\t\t\t\tret_rc = -1;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tfree_attrlist(&pheadp);\n\t\t} else { /* error */\n\t\t\tcontinue;\n\t\t}\n\t} /* for */\n\thook_perf_stat_stop(perf_label, perf_action, 0);\n\treturn ret_rc;\n}\n\n/**\n * @brief\n *\n * \tPopulates a Python instance 'py_instance' with values found in\n *\ta svrattrl list.\n *\n * @param[in] py_instance - a Python object/class to populate\n * @param[in] svrattrl_list - a pbs_list_head with svrattrl entries whose\n *\t\t\t\tvalues will be used to populate 'py_instance'\n * @param[in]\tperf_label - data passed on to hook_perf_stat* call\n * @param[in]\tperf_action - data passed on to hook_perf_stat* call\n *\n * @return int - indication of whether or not 'py_instance' was completely\n *\t   populated\n * @retval 0\t- completely populated\n * @retval -1\t- incompletely populated\n *\n * @note\n *\t\tThis function calls a single hook_perf_stat_start()\n *\t\tthat has some malloc-ed data that are freed in the\n *\t\thook_perf_stat_stop() call, which is done at the end of\n *\t\tthis function.\n *\t\tEnsure that after the hook_perf_stat_start(), all\n *\t\tprogram execution paths lead to the 
hook_perf_stat_stop()\n *\t\tcall.\n */\nint\npbs_python_populate_python_class_from_svrattrl(PyObject *py_instance, pbs_list_head *svrattrl_list, char *perf_label, char *perf_action)\n{\n\tsvrattrl *plist = NULL;\n\n\tint rc = 0;\n\tint ret_rc = 0;\n\tPyObject *py_attr_resc = NULL; /* for resource types */\n\tchar *objname = NULL;\n\n\tif (hook_debug.input_fp != NULL) {\n\n\t\tif (PyObject_IsInstance(py_instance,\n\t\t\t\t\tpbs_python_types_table[PP_JOB_IDX].t_class))\n\t\t\tobjname = EVENT_JOB_OBJECT;\n\t\telse if (PyObject_IsInstance(py_instance,\n\t\t\t\t\t     pbs_python_types_table[PP_RESV_IDX].t_class))\n\t\t\tobjname = EVENT_RESV_OBJECT;\n\t\telse if (PyObject_IsInstance(py_instance,\n\t\t\t\t\t     pbs_python_types_table[PP_VNODE_IDX].t_class))\n\t\t\tobjname = EVENT_VNODE_OBJECT;\n\t\telse\n\t\t\tobjname = EVENT_OBJECT;\n\t}\n\n\tprint_svrattrl_list(\"pbs_python_populate_python_class_from_svrattrl==>\",\n\t\t\t    svrattrl_list);\n\thook_perf_stat_start(perf_label, perf_action, 0);\n\tplist = (svrattrl *) GET_NEXT(*svrattrl_list);\n\n\twhile (plist) {\n\n\t\tif (plist->al_resc) {\n\t\t\tif (!PyObject_HasAttrString(py_instance, plist->al_name)) {\n\t\t\t\tplist = (svrattrl *) GET_NEXT(plist->al_link);\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tpy_attr_resc = PyObject_GetAttrString(py_instance,\n\t\t\t\t\t\t\t      plist->al_name);\n\n\t\t\tif (!py_attr_resc) {\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t\t \"Could not find %s\", plist->al_name);\n\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\tpbs_python_write_error_to_log(log_buffer);\n\t\t\t\tret_rc = -1;\n\t\t\t} else {\n\t\t\t\trc = pbs_python_object_set_attr_string_value(py_attr_resc,\n\t\t\t\t\t\t\t\t\t     plist->al_resc, plist->al_value);\n\t\t\t\tPy_DECREF(py_attr_resc);\n\t\t\t\tif (rc == -1) {\n\t\t\t\t\tLOG_ERROR_ARG2(\"%s:failed to set resource <%s>\",\n\t\t\t\t\t\t       plist->al_resc, plist->al_name);\n\t\t\t\t\tret_rc = -1;\n\t\t\t\t} else if (hook_debug.input_fp != NULL) 
{\n\t\t\t\t\tfprintf(hook_debug.input_fp, \"%s.%s[%s]=%s\\n\", objname,\n\t\t\t\t\t\tplist->al_name, plist->al_resc, plist->al_value);\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\n\t\t\tif (PyObject_IsInstance(py_instance,\n\t\t\t\t\t\tpbs_python_types_table[PP_VNODE_IDX].t_class) &&\n\t\t\t    (strcmp(plist->al_name, VNATTR_HOOK_REQUESTOR) == 0)) {\n\t\t\t\t/* a special value that is not to be set in Python */\n\t\t\t\tplist = (svrattrl *) GET_NEXT(plist->al_link);\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\trc = pbs_python_object_set_attr_string_value(py_instance,\n\t\t\t\t\t\t\t\t     plist->al_name, return_internal_value(plist->al_name, plist->al_value));\n\t\t\tif (rc == -1) {\n\t\t\t\tLOG_ERROR_ARG2(\"%s:failed to set attribute <%s>\",\n\t\t\t\t\t       \"\", plist->al_name);\n\t\t\t\tret_rc = -1;\n\t\t\t} else if (hook_debug.input_fp != NULL)\n\t\t\t\tfprintf(hook_debug.input_fp, \"%s.%s=%s\\n\", objname, plist->al_name, plist->al_value);\n\t\t}\n\n\t\tplist = (svrattrl *) GET_NEXT(plist->al_link);\n\t}\n\n\thook_perf_stat_stop(perf_label, perf_action, 0);\n\treturn (ret_rc);\n}\n\n/**\n * @brief\n * \tReturns the # of seconds equivalent to the given 'time_str' which is of\n * \tthe form [hh:[mm:]]ss[.ms]\n *\n * @param[in]\ttime_str - duration string of the form [hh:[mm:]]ss[.ms]\n *\n * @return\tlong\n * @retval\t-1 or -2\t\t\t\tfor error, each filling\n *\t\t\t\t\t\t\ta different log_buffer message\n * @retval\t# of seconds equivalent of 'time_str'\tsuccess\n */\nstatic long\nduration_to_secs(char *time_str)\n{\n\n\tchar *value_tmp = NULL;\n\tattribute attr;\n\tint rc;\n\n\t/* The *decode* functions below \"munge\" the value argument, so we use */\n\t/* a copy */\n\tvalue_tmp = strdup(time_str);\n\tif (value_tmp == NULL) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"strdup failed! 
(errno %d)\", errno);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\treturn -1;\n\t}\n\n\tclear_attr(&attr, job_attr_def);\n\t/* just a dummy attribute representing \"walltime\" to get the # of secs */\n\t/* of duration time */\n\trc = decode_time(&attr, WALLTIME_RESC, NULL, value_tmp);\n\n\tif (value_tmp) {\n\t\tfree(value_tmp);\n\t}\n\n\tif (rc != 0) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"input value %s not of the right format\",\n\t\t\t time_str);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\treturn (-2);\n\t}\n\n\treturn (attr.at_val.at_long);\n}\n\n/**\n *\n * @brief\n *\tCompares two PBS variable string lists (i.e. comma-separated lists of\n *\tvar=value entries).\n *\n * @param[in]\tvarl1\t- variable string list #1\n * @param[in]\tvarl2\t- variable string list #2\n *\n * @return int\n * @retval 1\tif the variable string lists are the same\n * @retval 0\totherwise\n */\nint\nvarlist_same(char *varl1, char *varl2)\n{\n\tpbs_list_head list1; /* Caution: list maintained in sorted order */\n\tpbs_list_head list2; /* Caution: list maintained in sorted order */\n\tchar *pc, *pc1;\n\tchar *env_var;\n\tchar *env_val;\n\tchar *varl1_dup = NULL;\n\tchar *varl2_dup = NULL;\n\tsvrattrl *pal1 = NULL;\n\tsvrattrl *pal2 = NULL;\n\tint rc;\n\n\tif ((varl1 == NULL) || (varl2 == NULL))\n\t\treturn 0;\n\n\t/* quick test */\n\tif (strcmp(varl1, varl2) == 0) {\n\t\treturn 1;\n\t}\n\n\tvarl1_dup = strdup(varl1);\n\tif (varl1_dup == NULL)\n\t\treturn 0;\n\tvarl2_dup = strdup(varl2);\n\tif (varl2_dup == NULL) {\n\t\tfree(varl1_dup);\n\t\treturn 0;\n\t}\n\n\tCLEAR_HEAD(list1);\n\tCLEAR_HEAD(list2);\n\n\tpc = strtok(varl1_dup, \",\");\n\twhile (pc != NULL) {\n\t\tenv_var = pc;\n\t\tenv_val = NULL;\n\n\t\tpc1 = strchr(pc, '=');\n\t\tif (pc1) {\n\t\t\t*pc1 = '\\0';\n\t\t\tenv_val = pc1 + 1;\n\t\t}\n\t\t(void) add_to_svrattrl_list_sorted(&list1, env_var, NULL,\n\t\t\t\t\t\t   env_val ? 
env_val : \"\", 0, NULL);\n\n\t\tpc = strtok(NULL, \",\");\n\t}\n\n\tpc = strtok(varl2_dup, \",\");\n\twhile (pc != NULL) {\n\t\tenv_var = pc;\n\t\tenv_val = NULL;\n\n\t\tpc1 = strchr(pc, '=');\n\t\tif (pc1) {\n\t\t\t*pc1 = '\\0';\n\t\t\tenv_val = pc1 + 1;\n\t\t}\n\t\t(void) add_to_svrattrl_list_sorted(&list2, env_var, NULL,\n\t\t\t\t\t\t   env_val ? env_val : \"\", 0, NULL);\n\t\tpc = strtok(NULL, \",\");\n\t}\n\n\t/* now compare the 2 sorted lists, which means if the 2 lists are */\n\t/* the same, then there should be a line by line match. */\n\tpal1 = (svrattrl *) GET_NEXT(list1);\n\tpal2 = (svrattrl *) GET_NEXT(list2);\n\trc = 1;\n\twhile ((pal1 != NULL) && (pal2 != NULL)) {\n\t\tif ((strcmp(pal1->al_name, pal2->al_name) != 0) ||\n\t\t    (strcmp(pal1->al_value, pal2->al_value) != 0)) {\n\t\t\trc = 0;\n\t\t\tgoto varlist_same_end;\n\t\t}\n\t\tpal1 = (svrattrl *) GET_NEXT(pal1->al_link);\n\t\tpal2 = (struct svrattrl *) GET_NEXT(pal2->al_link);\n\t}\n\t/* in the end, if they both match, both pointers must not be pointing anywhere */\n\tif ((pal1 != NULL) || (pal2 != NULL)) {\n\t\trc = 0;\n\t}\n\nvarlist_same_end:\n\tfree_attrlist(&list1);\n\tfree_attrlist(&list2);\n\tfree(varl1_dup);\n\tfree(varl2_dup);\n\n\treturn (rc);\n}\n\n/**\n * @brief\n *\tLoad the cached values found 'pbs_resource_value_list' into the\n *\tPython resource list type object, py_resource_match.\n *\n * @param[in]\tpy_resource_match - the Resource list type object.\n *\n * @return int\n * @retval 0\t- for success.\n * @retval != 0 - if some failure occurred.\n *\n */\nstatic int\nload_cached_resource_value(PyObject *py_resource_match)\n{\n\tpbs_resource_value *resc_val = NULL;\n\tint rc;\n\n\tresc_val = (pbs_resource_value *) GET_NEXT(pbs_resource_value_list);\n\twhile (resc_val != NULL) {\n\n\t\tif ((resc_val->py_resource != NULL) &&\n\t\t    (py_resource_match == resc_val->py_resource)) {\n\t\t\tbreak;\n\t\t}\n\n\t\tresc_val = (pbs_resource_value *) 
GET_NEXT(resc_val->all_rescs);\n\t}\n\n\tif (resc_val == NULL) {\n\t\t/* no match */\n\t\treturn (0); /* no cached value found */\n\t}\n\n\tif (TYPE_ENTITY(resc_val->attr_def_p->at_type)) {\n\t\trc = set_entity_resource_or_return_value(\n\t\t\t&(resc_val->value_list), resc_val->attr_def_p->at_name,\n\t\t\tresc_val->py_resource, NULL);\n\t} else { /* a regular resource */\n\t\trc = set_resource_or_return_value(&(resc_val->value_list),\n\t\t\t\t\t\t  resc_val->attr_def_p->at_name,\n\t\t\t\t\t\t  resc_val->py_resource, NULL);\n\t}\n\n\tif (rc == 0) {\n\n\t\thook_set_mode = C_MODE;\n\t\trc = pbs_python_object_set_attr_integral_value(\n\t\t\tresc_val->py_resource, PY_RESOURCE_HAS_VALUE, TRUE);\n\t\thook_set_mode = PY_MODE;\n\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to set resource <%s>\",\n\t\t\t\t       resc_val->attr_def_p->at_name,\n\t\t\t\t       PY_RESOURCE_HAS_VALUE);\n\t\t}\n\t\tPy_DECREF(resc_val->py_resource);\n\t\tfree_attrlist(&resc_val->value_list);\n\t\tdelete_link(&resc_val->all_rescs);\n\t\tfree(resc_val);\n\t}\n\n\treturn (rc);\n}\n\n/**\n *\n * @brief\n *\tRepopulates the 'svrattrl_list' (pbs_list_head) with values found from\n * \tpy_instance's attributes  and resources.\n *\n * @param[in]\t  py_instance\t- Python object to get input from.\n * @param[in/out] svrattrl_list  - the list to populate with data, as well\n *\t\t\t\t   as the input list containing the\n *\t\t\t\t   attribute flags to retain for each attribute\n *\t\t\t\t   name.\n * @param[in]\t  name_prefix - If not NULL, then the value for attribute\n *\t\t\t\tname in svrattr_list elements is prefixed\n *\t\t\t\twith this string.\n * @param[in]\t  append   - if set to 0, then 'svrattrl_list' will get\n *\t\t\t   initialized and populated with\n *\t\t\t   data(name, resource, value, attribute_flag)\n *\t\t\t   coming from 'py_instance''s\n *\t\t\t   attribute name, resource name, value, and\n *\t\t\t   'attribute_flag' will retain the previous flag value\n *\t\t\t   if appearing 
on the original 'svrattrl_list'.\n * \t\t\t   - if set to 1, then 'svrattrl_list' will accumulate\n *\t\t\t   (get appended to) data(name, resource, value,\n *\t\t\t   attribute_flag) from 'py_instance', with the\n *\t\t\t   'attribute_flag' not taken from the original\n *\t\t\t   svrattrl_list.\n * @note\n *\tSetting 'append' to 1 is usually done when calling this function multiple\n *\ttimes to accumulate the 'svrattrl_list'. For instance, this can be\n * \tuseful in a periodic hook when trying to get all the vnode values in\n *\ta vnode_list.\n *\n * @return int\n * @retval 0\tfor success\n * @retval != 0 for error\n *\n */\nint\npbs_python_populate_svrattrl_from_python_class(PyObject *py_instance,\n\t\t\t\t\t       pbs_list_head *svrattrl_list, char *name_prefix, int append)\n{\n\tPyObject *py_attr_dict = NULL;\n\tPyObject *py_attr_hookset_dict = NULL;\n\tPyObject *py_resc_hookset_dict = NULL;\n\tPyObject *py_attr_keys = NULL;\n\tPyObject *py_val = NULL;\n\tPyObject *py_keys = NULL;\n\tPyObject *py_keys_dict = NULL;\n\tPyObject *py_keys_dict2 = NULL;\n\tchar *name_str_dup = NULL;\n\tchar *val_str_dup = NULL;\n\tint num_attrs, i;\n\tpbs_list_head svrattrl_list2;\n\tint rc = -1;\n\tint hook_set_flag = 0;\n\tint has_resv_duration;\n\tchar the_resc[HOOK_BUF_SIZE];\n\tstatic char *the_val = NULL;\n\tstatic int val_buf_size = HOOK_BUF_SIZE;\n\tPyObject *py_resc = NULL;\n\tlong val_sec;\n\tchar *objname = NULL;\n\n\tif (the_val == NULL) {\n\t\tthe_val = (char *) malloc(val_buf_size);\n\t\tif (the_val == NULL) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"malloc failure (errno %d)\",\n\t\t\t\t errno);\n\t\t\tlog_err(PBSE_SYSTEM, __func__, log_buffer);\n\t\t\treturn rc;\n\t\t}\n\t}\n\n\tif (hook_debug.output_fp != NULL) {\n\n\t\tif (PyObject_IsInstance(py_instance,\n\t\t\t\t\tpbs_python_types_table[PP_JOB_IDX].t_class))\n\t\t\tobjname = EVENT_JOB_OBJECT;\n\t\telse if (PyObject_IsInstance(py_instance,\n\t\t\t\t\t     
pbs_python_types_table[PP_RESV_IDX].t_class))\n\t\t\tobjname = EVENT_RESV_OBJECT;\n\t\telse if (PyObject_IsInstance(py_instance,\n\t\t\t\t\t     pbs_python_types_table[PP_VNODE_IDX].t_class))\n\t\t\tobjname = EVENT_VNODE_OBJECT;\n\t\telse\n\t\t\tobjname = EVENT_OBJECT;\n\t}\n\n\tCLEAR_HEAD(svrattrl_list2);\n\t/* py_attr_dict = <py_instance>.attributes[] */\n\tif (!PyObject_HasAttrString(py_instance, PY_ATTRIBUTES)) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"python object does not have '%s'\", PY_ATTRIBUTES);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\treturn rc;\n\t}\n\n\tpy_attr_dict = PyObject_GetAttrString(py_instance, PY_ATTRIBUTES); /* NEW*/\n\tif (!py_attr_dict) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"Failed to obtain event job's '%s'\", PY_ATTRIBUTES);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\tgoto svrattrl_exit;\n\t}\n\n\tif (!PyDict_Check(py_attr_dict)) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"object's attributes not a dictionary!\");\n\t\tgoto svrattrl_exit;\n\t}\n\n\t/* py_attr_keys = keys of <py_instance>.attributes[] */\n\tpy_attr_keys = PyDict_Keys(py_attr_dict); /* NEW ref */\n\tif (!py_attr_keys) {\n\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\"Failed to obtain event job's attributes keys\");\n\t\tgoto svrattrl_exit;\n\t}\n\n\tif (!PyList_Check(py_attr_keys)) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"object's attributes keys not a list!\");\n\t\tgoto svrattrl_exit;\n\t}\n\n\t/* Get the attributes that have been set in the hook script */\n\tpy_attr_hookset_dict = PyObject_GetAttrString(\n\t\tpy_instance, PY_ATTRIBUTES_HOOK_SET); /* new ref */\n\tif (py_attr_hookset_dict == NULL || !PyDict_Check(py_attr_hookset_dict)) {\n\t\tLOG_ERROR_ARG2(\"%s: <%s> does not exist or is not a dict\", objname,\n\t\t\t       PY_ATTRIBUTES_HOOK_SET);\n\t\tPy_CLEAR(py_attr_hookset_dict); /* don't use if not dict */\n\t}\n\n\t/* Reservation 
specific, let's see if our  object already contains */\n\t/* a non-NULL, non-None ATTR_resv_duration attribute */\n\thas_resv_duration = 0;\n\tif (PyObject_IsInstance(py_instance,\n\t\t\t\tpbs_python_types_table[PP_RESV_IDX].t_class) &&\n\t    PyObject_HasAttrString(py_instance, ATTR_resv_duration)) {\n\t\tpy_val = PyObject_GetAttrString(py_instance, ATTR_resv_duration); /*NEW*/\n\t\tif (py_val && (py_val != Py_None)) {\n\t\t\thas_resv_duration = 1;\n\t\t}\n\t\tPy_CLEAR(py_val);\n\t}\n\n\tnum_attrs = PyList_Size(py_attr_keys);\n\n\tif (!append) {\n\t\t/* svrattrl_list2 is a copy of the original list */\n\t\t/* svrattrl_list2 must be freed at the end of this func */\n\t\tif (copy_svrattrl_list(svrattrl_list, &svrattrl_list2) == -1) {\n\t\t\tlog_err(errno, __func__, \"failed to save svrattrl_list\");\n\t\t\tgoto svrattrl_exit;\n\t\t}\n\t\tfree_attrlist(svrattrl_list);\n\t}\n\n\tfor (i = 0; i < num_attrs; i++) {\n\t\tchar *name_str = NULL;\n\t\tresource_def *rescdef = NULL;\n\n\t\tname_str_dup = strdup(pbs_python_list_get_item_string_value(py_attr_keys, i));\n\t\tif (!name_str_dup) {\n\t\t\tlog_err(errno, __func__, \"strdup error\");\n\t\t\tgoto svrattrl_exit;\n\t\t}\n\t\tname_str = name_str_dup;\n\n\t\t/* these don't have entries in svrattrl */\n\t\tif (!name_str || (name_str[0] == '\\0') ||\n\t\t    (strcmp(name_str, ATTR_queue) == 0) ||\n\t\t    in_string_list(name_str, ',', PY_PYTHON_DEFINED_ATTRIBUTES)) {\n\t\t\tif (name_str_dup) {\n\t\t\t\tfree(name_str_dup);\n\t\t\t\tname_str_dup = NULL;\n\t\t\t}\n\t\t\tcontinue;\n\t\t}\n\n\t\tif (!PyObject_HasAttrString(py_instance, name_str)) {\n\t\t\tif (name_str_dup) {\n\t\t\t\tfree(name_str_dup);\n\t\t\t\tname_str_dup = NULL;\n\t\t\t}\n\t\t\tcontinue;\n\t\t}\n\n\t\tpy_val = PyObject_GetAttrString(py_instance, name_str);\n\t\t/* must be Py_CLEAR(-)ed or\n\t\t * Py_DECREF()-ed later, so as to not leak memory\n\t\t */\n\n\t\tif (!py_val || (py_val == Py_None)) {\n\n\t\t\tif (name_str_dup) 
{\n\t\t\t\tfree(name_str_dup);\n\t\t\t\tname_str_dup = NULL;\n\t\t\t}\n\t\t\tcontinue;\n\t\t}\n\n\t\tif (PyObject_IsInstance(py_val,\n\t\t\t\t\tpbs_python_types_table[PP_RESC_IDX].t_class)) { /* a resource */\n\t\t\tchar *resc, *val;\n\t\t\tint k, num_keys;\n\t\t\tPyObject *py_class = pbs_python_types_table[PP_RESC_IDX].t_class;\n\t\t\tPyObject *py_tmp = NULL;\n\n\t\t\t/* code snippet below mimics what's done in the\n\t\t\t * __str__ method of class pbs_resource under\n\t\t\t * /pbs/v1/_base_types.py for accessing entries\n\t\t\t * in resource list.\n\t\t\t */\n\t\t\tif (PyObject_HasAttrString(py_val, PY_RESOURCE_HAS_VALUE)) {\n\t\t\t\tif (pbs_python_object_get_attr_integral_value(py_val, PY_RESOURCE_HAS_VALUE) == 0) { /* no value yet */\n\t\t\t\t\tif (load_cached_resource_value(py_val) != 0) {\n\t\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\t\t\"Failed to load cached value for resource list\");\n\t\t\t\t\t\tgoto svrattrl_exit;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/* Obtain list of resource names from\n\t\t\t * <class pbs_resource>._attributes dictionary\n\t\t\t */\n\t\t\tif (PyObject_HasAttrString(py_class, PY_ATTRIBUTES)) {\n\t\t\t\tpy_keys_dict = PyObject_GetAttrString(py_class,\n\t\t\t\t\t\t\t\t      PY_ATTRIBUTES);\n\t\t\t}\n\n\t\t\tif (!py_keys_dict || !PyDict_Check(py_keys_dict)) {\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\"Failed to obtain event job's resource list dictionary\");\n\t\t\t\tgoto svrattrl_exit;\n\t\t\t}\n\t\t\tpy_tmp = py_keys_dict;\n\t\t\tpy_keys_dict = PyDict_Copy(py_tmp);\n\t\t\tPy_CLEAR(py_tmp);\n\t\t\tif (!py_keys_dict) {\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\"Failed to duplicate resource list dictionary\");\n\t\t\t\tgoto svrattrl_exit;\n\t\t\t}\n\n\t\t\t/* look into <class pbs_resource>._attributes_unknown\n\t\t\t * for custom resource names defined in a hook but\n\t\t\t * not yet in resource table.\n\t\t\t */\n\t\t\tpy_keys_dict2 = PyObject_GetAttrString(\n\t\t\t\tpy_val, 
\"_attributes_unknown\"); /* new ref */\n\t\t\tif (py_keys_dict2) {\n\t\t\t\t/* Merge resource list dictionary with the dictionary of\n\t\t\t\t * unknown resources\n\t\t\t\t */\n\t\t\t\tif (PyDict_Check(py_keys_dict2)) {\n\t\t\t\t\tPyDict_Update(py_keys_dict, py_keys_dict2);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tpy_keys = PyDict_Keys(py_keys_dict);\n\t\t\tif (!py_keys || !PyList_Check(py_keys)) {\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\"Failed to obtain event job's resource list keys\");\n\t\t\t\tgoto svrattrl_exit;\n\t\t\t}\n\n\t\t\t/* Get the resources that have been set in the hook script */\n\t\t\tpy_resc_hookset_dict = PyObject_GetAttrString(\n\t\t\t\tpy_val, PY_ATTRIBUTES_HOOK_SET); /* new ref */\n\t\t\tif (py_resc_hookset_dict != NULL && !PyDict_Check(py_resc_hookset_dict)) {\n\t\t\t\tPy_CLEAR(py_resc_hookset_dict); /* don't use if not dict */\n\t\t\t}\n\n\t\t\tnum_keys = PyList_Size(py_keys);\n\t\t\tfor (k = 0; k < num_keys; k++) {\n\t\t\t\tchar *tmpstr = NULL;\n\t\t\t\ttmpstr = pbs_python_list_get_item_string_value(py_keys, k);\n\t\t\t\tif (tmpstr == NULL)\n\t\t\t\t\tcontinue;\n\n\t\t\t\tresc = strdup(tmpstr);\n\t\t\t\tif (resc == NULL) {\n\t\t\t\t\tlog_err(errno, __func__, \"strdup error\");\n\t\t\t\t\tgoto svrattrl_exit;\n\t\t\t\t}\n\t\t\t\ttmpstr = pbs_python_object_get_attr_string_value(py_val, resc);\n\t\t\t\tif (tmpstr == NULL) {\n\t\t\t\t\tfree(resc);\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\n\t\t\t\tval = strdup(tmpstr);\n\t\t\t\tif (val == NULL) {\n\t\t\t\t\tlog_err(errno, __func__, \"strdup error\");\n\t\t\t\t\tfree(resc);\n\t\t\t\t\tgoto svrattrl_exit;\n\t\t\t\t}\n\n\t\t\t\tif ((strcmp(resc, PY_RESOURCE_NAME) == 0) ||\n\t\t\t\t    (strcmp(resc, PY_RESOURCE_HAS_VALUE) == 0)) {\n\t\t\t\t\tfree(resc);\n\t\t\t\t\tfree(val);\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\n\t\t\t\thook_set_flag = 0;\n\t\t\t\tif (py_resc_hookset_dict != NULL &&\n\t\t\t\t    PyDict_GetItemString(py_resc_hookset_dict, resc) != NULL) {\n\t\t\t\t\thook_set_flag = 1; /* resource set/unset 
in hook script */\n\t\t\t\t}\n\n\t\t\t\tif ((strcmp(resc, WALLTIME_RESC) == 0) && has_resv_duration) {\n\t\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t\t\t \"Ignoring reservation resource '%s' since '%s' \"\n\t\t\t\t\t\t \"already specified\",\n\t\t\t\t\t\t resc, ATTR_resv_duration);\n\t\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_INFO,\n\t\t\t\t\t\t  __func__, log_buffer);\n\t\t\t\t\tfree(resc);\n\t\t\t\t\tfree(val);\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\n\t\t\t\tthe_val[0] = '\\0';\n\t\t\t\tif (pbs_strcat(&the_val, &val_buf_size, val) == NULL) {\n\t\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"malloc failure (errno %d)\",\n\t\t\t\t\t\t errno);\n\t\t\t\t\tlog_err(PBSE_SYSTEM, __func__, log_buffer);\n\t\t\t\t\tgoto svrattrl_exit;\n\t\t\t\t}\n\n\t\t\t\tstrncpy(the_resc, resc, sizeof(the_resc) - 1);\n\t\t\t\tif (IS_PBS_PYTHON_CMD(pbs_python_daemon_name)) {\n\n\t\t\t\t\tif ((rescdef = find_resc_def(svr_resc_def, resc)) == NULL) {\n\t\t\t\t\t\t/* not a builtin or previously defined resource */\n\t\t\t\t\t\tpy_resc = PyObject_GetAttrString(py_val, resc); /* NEW */\n\n\t\t\t\t\t\tif (PyBool_Check(py_resc)) {\n\t\t\t\t\t\t\tsnprintf(the_resc, sizeof(the_resc), \"%s,boolean\", resc);\n\t\t\t\t\t\t} else if (PyObject_IsInstance(py_resc,\n\t\t\t\t\t\t\t\t\t       pbs_python_types_table[PP_TIME_IDX].t_class)) {\n\t\t\t\t\t\t\t/* this check must appear before the int check, for a TIME object is derived from an int */\n\t\t\t\t\t\t\tsnprintf(the_resc, sizeof(the_resc), \"%s,long\", resc);\n\t\t\t\t\t\t\tif (val != NULL) {\n\t\t\t\t\t\t\t\tval_sec = duration_to_secs(val);\n\t\t\t\t\t\t\t\tsnprintf(the_val, val_buf_size, \"%ld\", val_sec);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t} else if (PyLong_Check(py_resc)) {\n\t\t\t\t\t\t\tsnprintf(the_resc, sizeof(the_resc), \"%s,long\", 
resc);\n\t\t\t\t\t\t} else if (PyFloat_Check(py_resc)) {\n\t\t\t\t\t\t\tsnprintf(the_resc, sizeof(the_resc), \"%s,float\", resc);\n\t\t\t\t\t\t} else if (PyUnicode_Check(py_resc)) {\n\t\t\t\t\t\t\t/* this check should come first before the test of PP_ARST_IDX instance */\n\t\t\t\t\t\t\t/* for a regular string is also an instance/subset of PP_ARST_IDX type */\n\t\t\t\t\t\t\tsnprintf(the_resc, sizeof(the_resc), \"%s,string\", resc);\n\t\t\t\t\t\t} else if (PyObject_IsInstance(py_resc,\n\t\t\t\t\t\t\t\t\t       pbs_python_types_table[PP_SIZE_IDX].t_class)) {\n\t\t\t\t\t\t\tsnprintf(the_resc, sizeof(the_resc), \"%s,size\", resc);\n\t\t\t\t\t\t} else if (PyObject_IsInstance(py_resc,\n\t\t\t\t\t\t\t\t\t       pbs_python_types_table[PP_ARST_IDX].t_class)) {\n\t\t\t\t\t\t\tsnprintf(the_resc, sizeof(the_resc), \"%s,string_array\", resc);\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tsnprintf(the_resc, sizeof(the_resc), \"%s,string\", resc);\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tPy_CLEAR(py_resc);\n\t\t\t\t\t} else {\n\t\t\t\t\t\tif (TYPE_BOOL(rescdef->rs_type)) {\n\t\t\t\t\t\t\tsnprintf(the_resc, sizeof(the_resc), \"%s,boolean\", resc);\n\t\t\t\t\t\t} else if (TYPE_INT(rescdef->rs_type)) {\n\t\t\t\t\t\t\tsnprintf(the_resc, sizeof(the_resc), \"%s,long\", resc);\n\t\t\t\t\t\t} else if (TYPE_FLOAT(rescdef->rs_type)) {\n\t\t\t\t\t\t\tsnprintf(the_resc, sizeof(the_resc), \"%s,float\", resc);\n\t\t\t\t\t\t} else if (TYPE_STR(rescdef->rs_type)) {\n\t\t\t\t\t\t\t/* this check should come first before the test of PP_ARST_IDX instance */\n\t\t\t\t\t\t\t/* for a regular string is also an instance/subset of PP_ARST_IDX type */\n\t\t\t\t\t\t\tsnprintf(the_resc, sizeof(the_resc), \"%s,string\", resc);\n\t\t\t\t\t\t} else if (TYPE_SIZE(rescdef->rs_type)) {\n\t\t\t\t\t\t\tsnprintf(the_resc, sizeof(the_resc), \"%s,size\", resc);\n\t\t\t\t\t\t} else if (TYPE_ARST(rescdef->rs_type)) {\n\t\t\t\t\t\t\tsnprintf(the_resc, sizeof(the_resc), \"%s,string_array\", resc);\n\t\t\t\t\t\t} else if 
(TYPE_DURATION(rescdef->rs_encode)) {\n\t\t\t\t\t\t\tsnprintf(the_resc, sizeof(the_resc), \"%s,long\", resc);\n\t\t\t\t\t\t\tif (val != NULL) {\n\t\t\t\t\t\t\t\tval_sec = duration_to_secs(val);\n\t\t\t\t\t\t\t\tsnprintf(the_val, val_buf_size, \"%ld\", val_sec);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tsnprintf(the_resc, sizeof(the_resc), \"%s,string\", resc);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif (add_to_svrattrl_list(svrattrl_list, name_str, the_resc, the_val,\n\t\t\t\t\t\t\t get_svrattrl_flag(name_str, the_resc, the_val,\n\t\t\t\t\t\t\t\t\t   &svrattrl_list2, hook_set_flag),\n\t\t\t\t\t\t\t name_prefix) == -1) {\n\t\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"failed to add_to_svrattrl_list(%s,%s,%s)\",\n\t\t\t\t\t\t name_str, resc, (val ? val : \"\"));\n\t\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\t\tfree(resc);\n\t\t\t\t\tfree(val);\n\t\t\t\t\tgoto svrattrl_exit;\n\t\t\t\t}\n\n\t\t\t\tif (hook_debug.output_fp != NULL)\n\t\t\t\t\tfprintf(hook_debug.output_fp, \"%s.%s[%s]=%s\\n\", objname, name_str, the_resc,\n\t\t\t\t\t\treturn_external_value(name_str, the_val));\n\t\t\t\tfree(resc);\n\t\t\t\tfree(val);\n\t\t\t}\n\t\t} else {\n\n\t\t\tchar *val_str2;\t  /* what gets sent to the server */\n\t\t\tchar val_buf[40]; /* holds JOB_NAME_UNSET_VALUE and 'long' value string */\n\t\t\tchar *val_str;\t  /* attribute value as a string */\n\t\t\tlong nsecs;\n\n\t\t\tval_str = pbs_python_object_str(py_val); /* does not return NULL */\n\n\t\t\tval_str2 = val_str;\n\n\t\t\t/* For Job_Name attribute, if it got an \"\" value (meaning None    */\n\t\t\t/* was specified in a hook script), then mimic what PBS server    */\n\t\t\t/* does which is set it to \"none\".  We cannot have a NULL or \"\"   */\n\t\t\t/* value for Job_Name as this is used for constructing job        */\n\t\t\t/* output files and accounting_logs entry and may cause the       */\n\t\t\t/* server to crash later.                                         */\n\t\t\tif ((strcmp(name_str, ATTR_N) == 0) && (val_str[0] == '\\0')) {\n\t\t\t\tstrcpy(val_buf, JOB_NAME_UNSET_VALUE);\n\t\t\t\tval_str2 = val_buf;\n\t\t\t} else if ((strcmp(name_str, ATTR_resv_duration) == 0) &&\n\t\t\t\t   (val_str[0] != '\\0')) {\n\t\t\t\tnsecs = duration_to_secs(val_str);\n\t\t\t\tif (nsecs < 0) {\n\t\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t\t\t \"bad duration value '%s' for %s\", val_str, name_str);\n\t\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\t\tgoto svrattrl_exit;\n\t\t\t\t}\n\t\t\t\tsnprintf(val_buf, sizeof(val_buf), \"%ld\", nsecs);\n\t\t\t\tval_str2 = val_buf;\n\t\t\t}\n\n\t\t\thook_set_flag = 0;\n\t\t\tif (py_attr_hookset_dict != NULL &&\n\t\t\t    PyDict_GetItemString(py_attr_hookset_dict, name_str) != NULL) {\n\t\t\t\thook_set_flag = 1; /* attribute set/unset in a hook script */\n\t\t\t}\n\t\t\tif (strcmp(name_str, ATTR_v) == 0) {\n\t\t\t\tsvrattrl *svrattrl_e;\n\n\t\t\t\t/* if there's a change in Variables_List value, \t*/\n\t\t\t\t/* (svrattrl_list2 (orig) vs svrattrl_list (new)),    \t*/\n\t\t\t\t/* then flag as hook set */\n\t\t\t\tsvrattrl_e = find_svrattrl_list_entry(&svrattrl_list2, ATTR_v, NULL);\n\t\t\t\tif ((svrattrl_e == NULL) ||\n\t\t\t\t    !varlist_same(svrattrl_e->al_value, val_str))\n\t\t\t\t\thook_set_flag = 1;\n\t\t\t}\n\n\t\t\tif (add_to_svrattrl_list(svrattrl_list, name_str, NULL, val_str2,\n\t\t\t\t\t\t get_svrattrl_flag(name_str, NULL, val_str,\n\t\t\t\t\t\t\t\t   &svrattrl_list2, hook_set_flag),\n\t\t\t\t\t\t name_prefix) == -1) {\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"failed to add_to_svrattrl_list(%s,null,%s)\",\n\t\t\t\t\t name_str, val_str);\n\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\tgoto svrattrl_exit;\n\t\t\t}\n\t\t\tif (hook_debug.output_fp != NULL)\n\t\t\t\tfprintf(hook_debug.output_fp, 
\"%s.%s=%s\\n\", objname, name_str, return_external_value(name_str, val_str2));\n\t\t}\n\t\t/* must be cleared as they take on different values on each iteration */\n\t\tPy_CLEAR(py_val);\n\t\tPy_CLEAR(py_keys);\n\t\tPy_CLEAR(py_keys_dict);\n\t\tPy_CLEAR(py_keys_dict2);\n\t\tPy_CLEAR(py_resc_hookset_dict);\n\t\tfree(name_str_dup);\n\t\tname_str_dup = NULL;\n\n\t} /* for loop */\n\trc = 0;\nsvrattrl_exit:\n\tPy_CLEAR(py_attr_dict);\n\tPy_CLEAR(py_attr_hookset_dict);\n\tPy_CLEAR(py_resc_hookset_dict);\n\tPy_CLEAR(py_attr_keys);\n\tPy_CLEAR(py_val);\n\tPy_CLEAR(py_resc);\n\tPy_CLEAR(py_keys);\n\tPy_CLEAR(py_keys_dict);\n\tPy_CLEAR(py_keys_dict2);\n\n\tif (name_str_dup) {\n\t\tfree(name_str_dup);\n\t}\n\tif (val_str_dup) {\n\t\tfree(val_str_dup);\n\t}\n\t/* safe to call because start of func clears this list */\n\tfree_attrlist(&svrattrl_list2);\n\n\treturn (rc);\n}\n\n/**\n * @brief\n *  \tMarks the 'py_instance' object and its attributes read-only (unsettable).\n *\n * @param[in] py_instance - PyObject with attributes\n *\n * @return\tint\n * @retval\t0 \tfor success;\n * @retval\t-1 \totherwise.\n *\n */\nint\npbs_python_mark_object_readonly(PyObject *py_instance)\n{\n\tPyObject *py_attr_dict = NULL;\n\tPyObject *py_attr_keys = NULL;\n\tPyObject *py_val = NULL;\n\tint num_attrs, i;\n\tint rc = -1;\n\n\t/* mark the owning object readonly */\n\trc = pbs_python_object_set_attr_integral_value(py_instance,\n\t\t\t\t\t\t       PY_READONLY_FLAG, TRUE);\n\tif (rc == -1) {\n\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"Failed to set object's '%s' flag\", PY_READONLY_FLAG);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\treturn (-1);\n\t}\n\n\t/* Now mark attached resources read-only */\n\n\t/* py_attr_dict = <py_instance>.attributes[] */\n\tif (!PyObject_HasAttrString(py_instance, PY_ATTRIBUTES)) {\n\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"encountered an object that has no '%s'\",\n\t\t\t PY_ATTRIBUTES);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\treturn (-1);\n\t}\n\n\tpy_attr_dict = PyObject_GetAttrString(py_instance, PY_ATTRIBUTES); /* NEW */\n\tif (!py_attr_dict) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"failed to obtain object's '%s'\", PY_ATTRIBUTES);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\treturn (-1);\n\t}\n\n\tif (!PyDict_Check(py_attr_dict)) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"object's '%s' is not a dictionary!\",\n\t\t\t PY_ATTRIBUTES);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\trc = -1;\n\t\tgoto mark_readonly_exit;\n\t}\n\n\t/* py_attr_keys = keys of <py_instance>.attributes[] */\n\tpy_attr_keys = PyDict_Keys(py_attr_dict); /* NEW ref */\n\n\tif (!py_attr_keys) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"Failed to obtain object's '%s' keys\",\n\t\t\t PY_ATTRIBUTES);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\trc = -1;\n\t\tgoto mark_readonly_exit;\n\t}\n\tif (!PyList_Check(py_attr_keys)) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"object's '%s' keys is not a list!\", PY_ATTRIBUTES);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\trc = -1;\n\t\tgoto mark_readonly_exit;\n\t}\n\n\tnum_attrs = PyList_Size(py_attr_keys);\n\tfor (i = 0; i < num_attrs; i++) {\n\t\tchar *name_str = NULL;\n\n\t\tname_str = pbs_python_list_get_item_string_value(py_attr_keys, i);\n\n\t\tif (!name_str || (name_str[0] == '\\0'))\n\t\t\tcontinue;\n\n\t\tif (!PyObject_HasAttrString(py_instance, name_str))\n\t\t\tcontinue;\n\n\t\tpy_val = PyObject_GetAttrString(py_instance, name_str); /* NEW */\n\n\t\tif (!py_val) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"failed to get attribute '%s' value\", name_str);\n\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = 
'\\0';\n\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\trc = -1;\n\t\t\tgoto mark_readonly_exit;\n\t\t}\n\n\t\tif (PyObject_IsInstance(py_val,\n\t\t\t\t\tpbs_python_types_table[PP_RESC_IDX].t_class)) { /* a resource */\n\n\t\t\trc = pbs_python_object_set_attr_integral_value(py_val,\n\t\t\t\t\t\t\t\t       PY_READONLY_FLAG, TRUE);\n\t\t\tif (rc == -1) {\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t\t \"failed to set %s '%s'\", name_str, PY_READONLY_FLAG);\n\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\tgoto mark_readonly_exit;\n\t\t\t}\n\t\t}\n\t\t/* must be cleared as they take on different values on each iteration */\n\t\tPy_CLEAR(py_val);\n\n\t} /* for loop */\n\n\trc = 0;\n\nmark_readonly_exit:\n\tPy_CLEAR(py_attr_dict);\n\tPy_CLEAR(py_attr_keys);\n\tPy_CLEAR(py_val);\n\treturn (rc);\n}\n\n/*\n * --------------------- MODULE HELPER METHODS  ----------------------------\n */\n\n/**\n *\n * @brief\n *     Helper method returning PBS Python queue object mapping the given 'pque'\n *     (a pbs_queue struct) if set; otherwise, look into the list of pbs_queue\n *     structures managed by the local server, which  matches 'que_name'.\n *\n * @param[in]\tpque - if set, this is the pbs_queue structure whose values\n *\t\t       will be mapped into a PBS Python queue object.\n * @param[in]   queue_name - if 'pque' is not set, then create a PBS Python\n *\t\t\t      queue object mapping a pbs_queue structure in\n *\t\t\t      the system that matches 'queue_name'.\n * @param[in]\tperf_label - passed on to hook_perf_stat* call.\n * @note\n *\tThis first returns any cached Python queue object found in\n *\t'py_hook_pbsque[]' matching 'que_name' or pque's que_name.\n *\tOtherwise, the Python queue object returned is cached in\n *\t'py_hook_pbsque[]' array.\n *\n * @return\tPyObject *\tpointer to a Python queue object to map the\n *\t\t\t\tqueue.\n */\nstatic PyObject 
*\n_pps_helper_get_queue(pbs_queue *pque, const char *que_name, char *perf_label)\n{\n\tPyObject *py_que_class = NULL;\n\tPyObject *py_que = NULL;\n\tPyObject *py_qargs = NULL;\n\tpbs_queue *que;\n\tint tmp_rc = -1;\n\tint i;\n\tchar perf_action[MAXBUFLEN];\n\tlong total_jobs;\n\tattribute *qattr;\n\n\tif (pque != NULL) {\n\t\tque = pque;\n\t} else {\n\t\tif ((que_name == NULL) || (que_name[0] == '\\0')) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\"Unable to populate python queue object\");\n\t\t\treturn NULL;\n\t\t}\n\t\tque = find_queuebyname((char *) que_name);\n\t}\n\n\t/* make sure que is not null */\n\tif (!que) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t \"could not find queue '%s'\", que_name);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\treturn py_que;\n\t}\n\n\tif (py_hook_pbsque != NULL) {\n\n\t\tfor (i = 0; (i < py_hook_pbsque_max) && (py_hook_pbsque[i] != NULL);\n\t\t     i++) {\n\t\t\tchar *qn;\n\n\t\t\tqn = pbs_python_object_get_attr_string_value(py_hook_pbsque[i],\n\t\t\t\t\t\t\t\t     \"name\");\n\t\t\tif ((qn != NULL) && (qn[0] != '\\0') &&\n\t\t\t    (strcmp(qn, que->qu_qs.qu_name) == 0)) {\n\t\t\t\tPy_INCREF(py_hook_pbsque[i]);\n\t\t\t\treturn py_hook_pbsque[i];\n\t\t\t}\n\t\t}\n\t}\n\n\t/*\n\t * First things first create a Python queue  object.\n\t *  - Borrowed reference\n\t *  - Exception is *NOT* set\n\t */\n\tpy_que_class = pbs_python_types_table[PP_QUE_IDX].t_class;\n\n\tpy_qargs = Py_BuildValue(\"(s)\", que->qu_qs.qu_name); /* NEW ref */\n\tif (!py_qargs) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"could not build args list for queue\");\n\t\tgoto ERROR_EXIT;\n\t}\n\tpy_que = PyObject_Call(py_que_class, py_qargs, NULL);\n\tif (!py_que) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"failed to create a python queue object\");\n\t\tgoto ERROR_EXIT;\n\t}\n\tif (py_qargs)\n\t\tPy_CLEAR(py_qargs);\n\t/*\n\t * OK, At this point we need to start populating the que class.\n\t 
*/\n\t/* As done in statque, update the state count */\n\tif (!svr_chk_history_conf()) {\n\t\ttotal_jobs = que->qu_numjobs;\n\t} else {\n\t\ttotal_jobs = que->qu_numjobs - (que->qu_njstate[JOB_STATE_MOVED] + que->qu_njstate[JOB_STATE_FINISHED] + que->qu_njstate[JOB_STATE_EXPIRED]);\n\t}\n\tset_qattr_l_slim(que, QA_ATR_TotalJobs, total_jobs, SET);\n\n\tqattr = get_qattr(que, QA_ATR_JobsByState);\n\tupdate_state_ct(qattr, que->qu_njstate, &que_attr_def[QA_ATR_JobsByState]);\n\t/* stuff all the attributes */\n\tsnprintf((char *) hook_debug.objname, HOOK_BUF_SIZE - 1, \"%s(%s)\", SERVER_QUEUE_OBJECT, que->qu_qs.qu_name);\n\tsnprintf(perf_action, sizeof(perf_action), \"%s:%s\", HOOK_PERF_POPULATE, hook_debug.objname);\n\ttmp_rc = pbs_python_populate_attributes_to_python_class(py_que,\n\t\t\t\t\t\t\t\tpy_que_attr_types,\n\t\t\t\t\t\t\t\tque->qu_attr,\n\t\t\t\t\t\t\t\tque_attr_def,\n\t\t\t\t\t\t\t\tQA_ATR_LAST, perf_label, perf_action);\n\tif (tmp_rc == -1) {\n\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\"partially populated python queue object\");\n\t}\n\tfree_attr(que_attr_def, qattr, QA_ATR_JobsByState);\n\n\ttmp_rc = pbs_python_mark_object_readonly(py_que);\n\n\tif (tmp_rc == -1) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"Failed to mark queue readonly!\");\n\t\tgoto ERROR_EXIT;\n\t}\n\n\tobject_counter++;\n\n\tif (server.sv_qs.sv_numque > 0) {\n\n\t\tif (py_hook_pbsque == NULL) {\n\t\t\tpy_hook_pbsque = (PyObject **) calloc(server.sv_qs.sv_numque,\n\t\t\t\t\t\t\t      sizeof(PyObject *));\n\t\t\tif (py_hook_pbsque == NULL) {\n\t\t\t\tlog_err(errno, __func__,\n\t\t\t\t\t\"Failed to calloc array of cached pbs queue objects\");\n\t\t\t\tgoto ERROR_EXIT;\n\t\t\t}\n\t\t\tpy_hook_pbsque_max = server.sv_qs.sv_numque;\n\t\t} else if (server.sv_qs.sv_numque > py_hook_pbsque_max) {\n\t\t\tPyObject **py_hook_pbsque_tmp;\n\t\t\tpy_hook_pbsque_tmp = (PyObject **) realloc(py_hook_pbsque,\n\t\t\t\t\t\t\t\t   server.sv_qs.sv_numque * sizeof(PyObject *));\n\t\t\tif (py_hook_pbsque_tmp 
== NULL) {\n\t\t\t\tlog_err(errno, __func__,\n\t\t\t\t\t\"Failed to realloc array of cached pbs queue objects\");\n\t\t\t\tfor (i = 0; (i < py_hook_pbsque_max) &&\n\t\t\t\t\t    (py_hook_pbsque[i] != NULL);\n\t\t\t\t     i++) {\n\t\t\t\t\tPy_CLEAR(py_hook_pbsque[i]);\n\t\t\t\t}\n\t\t\t\tfree(py_hook_pbsque);\n\t\t\t\tpy_hook_pbsque = NULL;\n\t\t\t\tgoto ERROR_EXIT;\n\t\t\t}\n\t\t\tpy_hook_pbsque = py_hook_pbsque_tmp;\n\t\t\tfor (i = py_hook_pbsque_max; i < server.sv_qs.sv_numque; i++) {\n\t\t\t\tpy_hook_pbsque[i] = NULL;\n\t\t\t}\n\n\t\t\tpy_hook_pbsque_max = server.sv_qs.sv_numque;\n\t\t}\n\t}\n\n\tif (py_hook_pbsque != NULL) {\n\t\tfor (i = 0; i < py_hook_pbsque_max; i++) {\n\t\t\tif (py_hook_pbsque[i] == NULL) {\n\t\t\t\tPy_INCREF(py_que);\n\t\t\t\tpy_hook_pbsque[i] = py_que;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\n\treturn py_que;\nERROR_EXIT:\n\tif (PyErr_Occurred())\n\t\tpbs_python_write_error_to_log(__func__);\n\n\tif (py_qargs)\n\t\tPy_CLEAR(py_qargs);\n\tif (py_que)\n\t\tPy_CLEAR(py_que);\n\tPyErr_SetString(PyExc_AssertionError, \"Failed to create queue object\");\n\n\treturn NULL;\n}\n\n/**\n *\n * @brief\n * \tHelper method returning a server Python Object representing the local\n *\t(current) server.\n *\n * @param[in]\tperf_label - passed on to hook_perf_stat* call.\n *\n *  @note\n *\tThis marks the server object \"read-only\" in Python mode.\n *\tAlso, this first returns the cached 'py_hook_pbsserver' object.\n *\tOtherwise, the obtained PBS Python server object is cached in\n *\t'py_hook_pbsserver'.\n *\n * @return\tPyObject *\tpointer to a Python server object to map the\n *\t\t\t\tlocal server values.\n */\nstatic PyObject *\n_pps_helper_get_server(char *perf_label)\n{\n\tPyObject *py_svr_class = NULL;\n\tPyObject *py_svr = NULL;\n\tPyObject *py_sargs = NULL;\n\tint tmp_rc = -1;\n\tchar perf_action[MAXBUFLEN];\n\n\tif (py_hook_pbsserver != NULL) {\n\t\tPy_INCREF(py_hook_pbsserver);\n\t\treturn py_hook_pbsserver;\n\t}\n\n\t/*\n\t * First things 
first create a Python server object.\n\t *  - Borrowed reference\n\t *  - Exception is *NOT* set\n\t */\n\tpy_svr_class = pbs_python_types_table[PP_SVR_IDX].t_class;\n\n\tpy_sargs = Py_BuildValue(\"(s)\", server_name); /* NEW ref */\n\tif (!py_sargs) {\n\t\tlog_err(-1, pbs_python_daemon_name, \"could not build args list for server\");\n\t\tgoto ERROR_EXIT;\n\t}\n\n\tpy_svr = PyObject_Call(py_svr_class, py_sargs, NULL);\n\tif (!py_svr) {\n\t\tlog_err(-1, pbs_python_daemon_name, \"failed to create a python server object\");\n\t\tgoto ERROR_EXIT;\n\t}\n\tif (py_sargs)\n\t\tPy_CLEAR(py_sargs);\n\t/*\n\t * OK, At this point we need to start populating the server class.\n\t */\n\t/* As done in stat_svr, update the state count */\n\n\t/* update count and state counts from sv_numjobs and sv_jobstates */\n\tset_sattr_l_slim(SVR_ATR_TotalJobs, server.sv_qs.sv_numjobs, SET);\n\tupdate_state_ct(get_sattr(SVR_ATR_JobsByState), server.sv_jobstates, &svr_attr_def[SVR_ATR_JobsByState]);\n\n\tupdate_license_ct();\n\n\t/* stuff all the attributes */\n\tstrncpy((char *) hook_debug.objname, SERVER_OBJECT, HOOK_BUF_SIZE - 1);\n\tsnprintf(perf_action, sizeof(perf_action), \"%s:%s\", HOOK_PERF_POPULATE, hook_debug.objname);\n\ttmp_rc = pbs_python_populate_attributes_to_python_class(py_svr,\n\t\t\t\t\t\t\t\tpy_svr_attr_types,\n\t\t\t\t\t\t\t\tserver.sv_attr,\n\t\t\t\t\t\t\t\tsvr_attr_def,\n\t\t\t\t\t\t\t\tSVR_ATR_LAST, perf_label, perf_action);\n\n\tif (tmp_rc == -1) {\n\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\"partially populated python server object\");\n\t}\n\n\ttmp_rc = pbs_python_mark_object_readonly(py_svr);\n\n\tif (tmp_rc == -1) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"Failed to mark server readonly!\");\n\t\tgoto ERROR_EXIT;\n\t}\n\n\tobject_counter++;\n\tPy_INCREF(py_svr);\n\tpy_hook_pbsserver = py_svr;\n\treturn py_svr;\nERROR_EXIT:\n\tif (PyErr_Occurred())\n\t\tpbs_python_write_error_to_log(__func__);\n\tif (py_sargs)\n\t\tPy_CLEAR(py_sargs);\n\tif 
(py_svr)\n\t\tPy_CLEAR(py_svr);\n\n\tPyErr_SetString(PyExc_AssertionError, \"Failed to create server object\");\n\n\treturn NULL;\n}\n\n/**\n * @brief\n * \tHelper method returning a job Python Object from a job struct\n * \tThis marks the job object \"read-only\" in Python mode.\n * \tIf  'qname' is not NULL or \"\", then the job object is returned if\n * \tit is queued in 'qname'.\n *\n * @param[in] pjob_o - job info\n * @param[in] jobid - job identifier\n * @param[in] qname - queuename\n * @param[in]\tperf_label - data passed on to hook_perf_stat* call\n *\n * @return\tPyObject*\n * @retval\tjob python object\tsuccess\n * @retval\tNULL\t\t\terror\n *\n */\nstatic PyObject *\n_pps_helper_get_job(job *pjob_o, const char *jobid, const char *qname, char *perf_label)\n{\n\tPyObject *py_job_class = NULL;\n\tPyObject *py_job = NULL;\n\tPyObject *py_jargs = NULL;\n\tPyObject *py_que = NULL;\n\tPyObject *py_resv = NULL;\n\tPyObject *py_server = NULL;\n\tjob *pjob;\n\tint tmp_rc = -1;\n\tint t;\n\tchar perf_action[MAXBUFLEN];\n\n\tif (pjob_o != NULL) {\n\t\tpjob = pjob_o;\n\t} else {\n\t\tif ((jobid == NULL) || (jobid[0] == '\\0')) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\"Unable to populate python job object\");\n\t\t\treturn NULL;\n\t\t}\n\t\tt = is_job_array((char *) jobid);\n\n\t\tif (t == IS_ARRAY_Single) {\n\t\t\tpjob = find_job((char *) jobid); /* has data if instantiated */\n\t\t\tif (pjob == NULL) {\t\t /* otherwise, return parent */\n\t\t\t\tpjob = find_arrayparent((char *) jobid);\n\t\t\t}\n\t\t} else if ((t == IS_ARRAY_NO) || (t == IS_ARRAY_ArrayJob)) {\n\t\t\tpjob = find_job((char *) jobid); /* regular or ArrayJob itself */\n\t\t} else {\n\t\t\tpjob = find_arrayparent((char *) jobid); /* subjob(s) */\n\t\t}\n\t}\n\n\t/* make sure pjob is not null */\n\tif (!pjob) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"could not find job '%s'\", jobid);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\treturn 
py_job;\n\t}\n\tif (qname && (qname[0] != '\\0') &&\n\t    (strcmp(pjob->ji_qs.ji_queue, qname) != 0)) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"job '%s' not in '%s'\", jobid, qname);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\treturn py_job;\n\t}\n\n\t/*\n\t * First things first create a Python job object.\n\t *  - Borrowed reference\n\t *  - Exception is *NOT* set\n\t */\n\tpy_job_class = pbs_python_types_table[PP_JOB_IDX].t_class;\n\n\tpy_jargs = Py_BuildValue(\"(s)\", pjob->ji_qs.ji_jobid); /* NEW ref */\n\tif (!py_jargs) {\n\t\tlog_err(-1, pbs_python_daemon_name, \"could not build args list for job\");\n\t\tgoto ERROR_EXIT;\n\t}\n\tpy_job = PyObject_Call(py_job_class, py_jargs, NULL);\n\tif (!py_job) {\n\t\tlog_err(-1, pbs_python_daemon_name, \"failed to create a python job object\");\n\t\tgoto ERROR_EXIT;\n\t}\n\tif (py_jargs)\n\t\tPy_CLEAR(py_jargs);\n\t/*\n\t * OK, At this point we need to start populating the job class.\n\t */\n\tsnprintf((char *) hook_debug.objname, HOOK_BUF_SIZE - 1, \"%s(%s)\", SERVER_JOB_OBJECT, pjob->ji_qs.ji_jobid);\n\tsnprintf(perf_action, sizeof(perf_action), \"%s:%s\", HOOK_PERF_POPULATE, hook_debug.objname);\n\ttmp_rc = pbs_python_populate_attributes_to_python_class(py_job,\n\t\t\t\t\t\t\t\tpy_job_attr_types,\n\t\t\t\t\t\t\t\tpjob->ji_wattr,\n\t\t\t\t\t\t\t\tjob_attr_def,\n\t\t\t\t\t\t\t\tJOB_ATR_LAST, perf_label, perf_action);\n\n\tif (tmp_rc == -1) {\n\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\"partially populated python job object\");\n\t}\n\n\t/* set job.queue to actual queue object */\n\tif (pjob->ji_qs.ji_queue[0]) {\n\t\tpy_que = _pps_helper_get_queue(NULL, pjob->ji_qs.ji_queue, perf_label); /* NEW ref */\n\t\tif (py_que) {\n\t\t\tif (PyObject_HasAttrString(py_job, ATTR_queue)) {\n\t\t\t\t/* py_que ref ct incremented as part of py_job */\n\t\t\t\t(void) PyObject_SetAttrString(py_job, ATTR_queue, py_que);\n\t\t\t}\n\t\t\tPy_DECREF(py_que); /* we no longer need to reference */\n\t\t}\n\t}\n\n\tif (pjob->ji_myResv) {\n\t\t/* set job.resv to actual reservation object */\n\t\tpy_resv = _pps_helper_get_resv(pjob->ji_myResv,\n\t\t\t\t\t       pjob->ji_myResv->ri_qs.ri_resvID, perf_label); /* NEW ref */\n\t\tif (py_resv) {\n\t\t\tif (PyObject_HasAttrString(py_job, ATTR_resv)) {\n\t\t\t\t/* py_resv ref ct incremented as part of py_job */\n\t\t\t\t(void) PyObject_SetAttrString(py_job, ATTR_resv, py_resv);\n\t\t\t}\n\t\t\tPy_DECREF(py_resv); /* we no longer need to reference */\n\t\t}\n\t}\n\n\t/* set job.server to actual server object */\n\tpy_server = _pps_helper_get_server(perf_label); /* NEW Ref */\n\n\tif (py_server) {\n\t\tif (PyObject_HasAttrString(py_job, ATTR_server)) {\n\t\t\t/* py_server ref ct incremented as part of py_job */\n\t\t\t(void) PyObject_SetAttrString(py_job, ATTR_server, py_server);\n\t\t}\n\t\tPy_DECREF(py_server);\n\t}\n\n\ttmp_rc = pbs_python_mark_object_readonly(py_job);\n\n\tif (tmp_rc == -1) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"Failed to mark job readonly!\");\n\t\tgoto ERROR_EXIT;\n\t}\n\n\tobject_counter++;\n\treturn py_job;\nERROR_EXIT:\n\tif (PyErr_Occurred())\n\t\tpbs_python_write_error_to_log(__func__);\n\tPy_CLEAR(py_jargs);\n\tPy_CLEAR(py_job);\n\tPyErr_SetString(PyExc_AssertionError, \"Failed to create job object\");\n\n\treturn NULL;\n}\n\n/**\n * @brief\n * \tHelper method returning a resv Python Object from a resc_resv struct.\n *\n * @param[in] presv_o - reservation structure\n * @param[in] resvid - reservation name\n * @param[in] perf_label - passed on to hook_perf_stat* call.\n *\n * @return\tPyObject *\n * @retval\tThis returns a Python object that maps\n * \t\tto a resc_resv struct taken directly from presv_o if non-NULL,\n * \t\tor to the struct returned by find_resv(<resvid>).\n */\nstatic PyObject *\n_pps_helper_get_resv(resc_resv *presv_o, const char *resvid, char *perf_label)\n{\n\tPyObject *py_resv_class = NULL;\n\tPyObject *py_resv = NULL;\n\tPyObject *py_rargs = 
NULL;\n\tPyObject *py_que = NULL;\n\tPyObject *py_server = NULL;\n\tresc_resv *presv;\n\tint tmp_rc = -1;\n\tchar resvid_out[PBS_MAXCLTJOBID];\n\tchar server_out[MAXSERVERNAME];\n\tchar perf_action[MAXBUFLEN];\n\n\tif (presv_o != NULL) {\n\t\tpresv = presv_o;\n\t} else {\n\t\tif ((resvid == NULL) || (resvid[0] == '\\0')) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\"Unable to populate python reservation object\");\n\t\t\treturn NULL;\n\t\t}\n\n\t\tif (get_server((char *) resvid, (char *) resvid_out,\n\t\t\t       (char *) server_out)) {\n\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"illegally formed reservation identifier %s\", resvid);\n\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\treturn NULL;\n\t\t}\n\t\tpresv = find_resv((char *) resvid_out);\n\t}\n\n\tif (presv == NULL) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"%s: no such reservation\", resvid);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t/* this takes care of incrementing ref cnt of Py_None */\n\t\tPy_RETURN_NONE;\n\t}\n\n\t/*\n\t * First things first create a Python resv object.\n\t *  - Borrowed reference\n\t *  - Exception is *NOT* set\n\t */\n\tpy_resv_class = pbs_python_types_table[PP_RESV_IDX].t_class;\n\n\tpy_rargs = Py_BuildValue(\"(s)\", presv->ri_qs.ri_resvID); /* NEW ref */\n\tif (py_rargs == NULL) {\n\t\tlog_err(-1, pbs_python_daemon_name,\n\t\t\t\"could not build args list for resv\");\n\t\tgoto GR_ERROR_EXIT;\n\t}\n\tpy_resv = PyObject_Call(py_resv_class, py_rargs, NULL);\n\tif (py_resv == NULL) {\n\t\tlog_err(-1, pbs_python_daemon_name,\n\t\t\t\"failed to create a python resv object\");\n\t\tgoto GR_ERROR_EXIT;\n\t}\n\n\tPy_CLEAR(py_rargs);\n\n\t/*\n\t * OK, At this point we need to start populating the resv class.\n\t */\n\tsnprintf((char *) hook_debug.objname, HOOK_BUF_SIZE - 1, \"%s(%s)\", SERVER_RESV_OBJECT, 
presv->ri_qs.ri_resvID);\n\tsnprintf(perf_action, sizeof(perf_action), \"%s:%s\", HOOK_PERF_POPULATE, hook_debug.objname);\n\ttmp_rc = pbs_python_populate_attributes_to_python_class(py_resv,\n\t\t\t\t\t\t\t\tpy_resv_attr_types,\n\t\t\t\t\t\t\t\tpresv->ri_wattr,\n\t\t\t\t\t\t\t\tresv_attr_def,\n\t\t\t\t\t\t\t\tRESV_ATR_LAST, perf_label, perf_action);\n\n\tif (tmp_rc == -1) {\n\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\"partially populated python resv object\");\n\t}\n\n\t/* set resv.queue to actual queue object */\n\tif (presv->ri_qs.ri_queue[0] && PyObject_HasAttrString(py_resv, ATTR_queue)) {\n\t\tpy_que = _pps_helper_get_queue(NULL, presv->ri_qs.ri_queue, perf_label); /* NEW */\n\t\tif (py_que) {\n\t\t\t/* py_que ref ct incremented as part of py_resv */\n\t\t\t(void) PyObject_SetAttrString(py_resv, ATTR_queue, py_que);\n\t\t\tPy_DECREF(py_que); /* we no longer need to reference */\n\t\t}\n\t}\n\n\t/* set resv.server to actual server object */\n\tpy_server = _pps_helper_get_server(perf_label); /* NEW */\n\n\tif (py_server) {\n\t\tif (PyObject_HasAttrString(py_resv, ATTR_server)) {\n\t\t\t/* py_server ref ct incremented as part of py_resv */\n\t\t\t(void) PyObject_SetAttrString(py_resv, ATTR_server, py_server);\n\t\t}\n\t\tPy_DECREF(py_server);\n\t}\n\n\ttmp_rc = pbs_python_mark_object_readonly(py_resv);\n\n\tif (tmp_rc == -1) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"Failed to mark resv readonly!\");\n\t\tgoto GR_ERROR_EXIT;\n\t}\n\n\tobject_counter++;\n\treturn py_resv;\n\nGR_ERROR_EXIT:\n\tif (PyErr_Occurred())\n\t\tpbs_python_write_error_to_log(__func__);\n\tPy_CLEAR(py_rargs);\n\tPy_CLEAR(py_resv);\n\tPyErr_SetString(PyExc_AssertionError, \"Failed to create resv object\");\n\n\treturn NULL;\n}\n\n/**\n * @brief\n *\tReturns a Python object that maps to a struct pbsnodes * taken directly\n *\tfrom pvnode_o if non-NULL, or to the struct returned by\n *\tfind_nodebyname(<vname>).\n *\n * @param[in]   pvnode_o\t- the \"struct pbsnode *\" that will be used to\n 
*\t\t\t\t  populate a Python vnode object.\n * @param[in]\tvname\t\t- name of a vnode to obtain \"struct pbsnode *\"\n *\t\t\t\t  content to populate a Python vnode object.\n * @param[in]\tperf_label\t- passed on to hook_perf_stat* call.\n *\n * @return      PyObject *\t- the Python vnode object corresponding to\n *\t\t\t\t  'pvnode_o' or 'vname'.\n */\nstatic PyObject *\n_pps_helper_get_vnode(struct pbsnode *pvnode_o, const char *vname, char *perf_label)\n{\n\tPyObject *py_vnode_class = NULL;\n\tPyObject *py_vnode = NULL;\n\tPyObject *py_rargs = NULL;\n\tPyObject *py_que = NULL;\n\tstruct pbsnode *pvnode;\n\tint tmp_rc = -1;\n\tchar buf[512];\n\tchar perf_action[MAXBUFLEN];\n\n\tif (pvnode_o != NULL) {\n\t\tpvnode = pvnode_o;\n\t} else {\n\t\tif ((vname == NULL) || (vname[0] == '\\0')) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\"Unable to populate python vnode object\");\n\t\t\treturn NULL;\n\t\t}\n\n\t\tpvnode = find_nodebyname((char *) vname);\n\t}\n\n\tif (pvnode == NULL) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"%s: no such vnode\", vname);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\tPy_RETURN_NONE;\n\t}\n\n\t/*\n\t * First things first create a Python vnode object.\n\t *  - Borrowed reference\n\t *  - Exception is *NOT* set\n\t */\n\tpy_vnode_class = pbs_python_types_table[PP_VNODE_IDX].t_class;\n\tpy_rargs = Py_BuildValue(\"(s)\", pvnode->nd_name); /* NEW ref */\n\tif (py_rargs == NULL) {\n\t\tlog_err(-1, pbs_python_daemon_name,\n\t\t\t\"could not build args list for vnode\");\n\t\tgoto GR_ERROR_EXIT;\n\t}\n\tpy_vnode = PyObject_Call(py_vnode_class, py_rargs, NULL);\n\tif (py_vnode == NULL) {\n\t\tlog_err(-1, pbs_python_daemon_name,\n\t\t\t\"failed to create a python vnode object\");\n\t\tgoto GR_ERROR_EXIT;\n\t}\n\n\tPy_CLEAR(py_rargs);\n\n\t/*\n\t * OK, At this point we need to start populating the vnode class.\n\t */\n\tsnprintf((char *) hook_debug.objname, HOOK_BUF_SIZE - 1, 
\"%s(%s)\", SERVER_VNODE_OBJECT, pvnode->nd_name);\n\tsnprintf(perf_action, sizeof(perf_action), \"%s:%s\", HOOK_PERF_POPULATE, hook_debug.objname);\n\ttmp_rc = pbs_python_populate_attributes_to_python_class(py_vnode,\n\t\t\t\t\t\t\t\tpy_vnode_attr_types,\n\t\t\t\t\t\t\t\tpvnode->nd_attr,\n\t\t\t\t\t\t\t\tnode_attr_def,\n\t\t\t\t\t\t\t\tND_ATR_LAST, perf_label, perf_action);\n\n\tif (tmp_rc == -1) {\n\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t"partially populated python vnode object\");\n\t}\n\n\t/* set vnode.queue to actual queue object */\n\tif (pvnode->nd_pque && PyObject_HasAttrString(py_vnode, ATTR_queue)) {\n\t\tpy_que = _pps_helper_get_queue(pvnode->nd_pque, NULL, perf_label); /* NEW */\n\t\tif (py_que) {\n\t\t\t/* py_que ref ct incremented as part of py_vnode */\n\t\t\t(void) PyObject_SetAttrString(py_vnode, ATTR_queue, py_que);\n\t\t\tPy_DECREF(py_que); /* we no longer need to reference */\n\t\t}\n\t}\n\n\tsnprintf(buf, sizeof(buf), \"%ld\", pvnode->nd_state);\n\ttmp_rc = pbs_python_object_set_attr_string_value(py_vnode, ATTR_NODE_state,\n\t\t\t\t\t\t\t buf);\n\tif (tmp_rc == -1) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"Failed to set vnode's state.\");\n\t\tgoto GR_ERROR_EXIT;\n\t}\n\n\tsnprintf(buf, sizeof(buf), \"%d\", pvnode->nd_ntype);\n\n\ttmp_rc = pbs_python_object_set_attr_string_value(py_vnode, ATTR_NODE_ntype,\n\t\t\t\t\t\t\t buf);\n\tif (tmp_rc == -1) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"Failed to set vnode's type.\");\n\t\tgoto GR_ERROR_EXIT;\n\t}\n\n\ttmp_rc = pbs_python_mark_object_readonly(py_vnode);\n\n\tif (tmp_rc == -1) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"Failed to mark vnode readonly.\");\n\t\tgoto GR_ERROR_EXIT;\n\t}\n\n\tobject_counter++;\n\treturn py_vnode;\n\nGR_ERROR_EXIT:\n\tif (PyErr_Occurred())\n\t\tpbs_python_write_error_to_log(__func__);\n\tPy_CLEAR(py_rargs);\n\tPy_CLEAR(py_vnode);\n\tPyErr_SetString(PyExc_AssertionError, \"Failed to create vnode object\");\n\n\treturn NULL;\n}\n/*\n * ---------- EVENT RELATED FUNCTIONS ------------\n */\n\n/**\n * @brief\n * \tReturns 
the event param's item corresponding to key 'name'.\n *\n * @param[in] name - key\n *\n * @return \tPyObject *\n * @retval\ta borrowed reference.\n */\nPyObject *\n_pbs_python_event_get_param(char *name)\n{\n\tPyObject *py_param = NULL;\n\tPyObject *py_p = NULL;\n\n\tif (!py_hook_pbsevent) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"No hook event found!\");\n\t\treturn NULL;\n\t}\n\n\t/* py_param = event()._param[] */\n\tif (!PyObject_HasAttrString(py_hook_pbsevent, PY_EVENT_PARAM)) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"Failed to obtain event's param\");\n\t\treturn NULL;\n\t}\n\n\tpy_param = PyObject_GetAttrString(py_hook_pbsevent,\n\t\t\t\t\t  PY_EVENT_PARAM); /* NEW */\n\tif (!py_param) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"Failed to obtain event's param\");\n\t\treturn NULL;\n\t}\n\n\tif (!PyDict_Check(py_param)) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"event's param is not a dictionary\");\n\t\tPy_CLEAR(py_param);\n\t\treturn NULL;\n\t}\n\n\t/* ex. py_job = event().param[\"job\"] */\n\tpy_p = PyDict_GetItemString(py_param, name);\n\tPy_DECREF(py_param);\n\n\treturn (py_p);\n}\n/**\n * @brief\n * \tMakes the Python PBS event object read-only, meaning none of its\n * \tattributes can be modified in a hook script.\n *\n * @return\tint\n * @retval\t0 \tfor success;\n * @retval\t-1 \totherwise\n */\nint\n_pbs_python_event_mark_readonly(void)\n{\n\tint rv;\n\n\tif (!py_hook_pbsevent) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"event not found!\");\n\t\treturn (-1);\n\t}\n\n\trv = pbs_python_mark_object_readonly(py_hook_pbsevent);\n\n\tif (rv == -1) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"Failed to mark event readonly!\");\n\t\treturn (-1);\n\t}\n\treturn (rv);\n}\n\n/**\n * @brief\n * \tSets the \"operation\" mode of Python: if 'mode' is PY_MODE, then we're\n * \tinside the hook script; if 'mode' is C_MODE, then we're inside some\n * \tinternal C helper function.\n * \tSetting 'mode' to C_MODE usually means we don't have any restriction\n * \tas to which attributes we 
can or cannot set.\n */\nvoid\n_pbs_python_set_mode(int mode)\n{\n\tif ((mode != PY_MODE) && (mode != C_MODE)) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"unexpected mode %d\", mode);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\treturn;\n\t}\n\n\thook_set_mode = mode;\n}\n\n/**\n *\n * @brief\n *\tGiven a list of vnode attributes/resources/values in 'vnlist',\n *\treturn a Python dictionary object to represent 'vnlist' as an array of\n *\tvnode objects.\n *\n * @param[in]\tvnlist - list of vnode attributes/resources/values.\n *\t\t\tformat: a list of plist entries:\n * @param[in]\tplist->al_name = <node_name>.<attribute_name>\n * @param[in]\tplist->al_resc = <resource_name>,<type>\n * @param[in]\tplist->al_value = <value>\n * @param[in]\tperf_label - data passed on to hook_perf_stat* call\n * @param[in]\tperf_action - data passed on to hook_perf_stat* call\n *\n * @return \tPyObject *\n * @retval\t<object>\t- the Python dictionary object holding\n *\t\t\t           the individual vnode objects, indexed by\n *\t\t\t\t   vnode names.\n * @retval\tNULL\t\t- if an error occurred.\n *\n * @note\n *\t\tThis function calls a single hook_perf_stat_start()\n *\t\tthat has some malloc-ed data that are freed in the\n *\t\thook_perf_stat_stop() call, which is done at the end of\n *\t\tthis function.\n *\t\tEnsure that after the hook_perf_stat_start(), all\n *\t\tprogram execution paths lead to the hook_perf_stat_stop()\n *\t\tcall.\n */\nstatic PyObject *\ncreate_py_vnodelist(pbs_list_head *vnlist, char *perf_label, char *perf_action)\n{\n\tsvrattrl *plist, *plist_next;\n\tPyObject *py_vn = NULL; /* instantiated vnode object */\n\tPyObject *py_va = NULL; /* class vnode arg list */\n\tPyObject *py_vnodelist = NULL;\n\tPyObject *py_vnode_class = NULL;\n\tstruct rq_node {\n\t\tchar rq_id[PBS_MAXNODENAME * 2];\n\t\tpbs_list_head rq_attr;\n\t} rqs;\n\tchar *p = NULL;\n\tchar *pn = NULL;\n\tchar *p1 = NULL;\n\tchar *attr_name = 
NULL;\n\tPyObject *py_vnlist_ret = NULL;\n\tint rc;\n\n\tif (vnlist == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"bad input parameter\");\n\t\treturn (NULL);\n\t}\n\n\tpy_vnodelist = PyDict_New(); /* NEW - empty dict */\n\tif (py_vnodelist == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\"failed to create a Vnodes list dictionary!\");\n\t\treturn NULL;\n\t}\n\n\thook_perf_stat_start(perf_label, perf_action, 0);\n\n\tpy_vnode_class = pbs_python_types_table[PP_VNODE_IDX].t_class;\n\n\trqs.rq_id[0] = '\\0';\n\tCLEAR_HEAD(rqs.rq_attr);\n\n\tplist = (svrattrl *) GET_NEXT(*vnlist);\n\tdo {\n\t\tif (plist == NULL)\n\t\t\tbreak;\n\n\t\tplist_next = (svrattrl *) GET_NEXT(plist->al_link);\n\n\t\t/* look for last dot as the name could be dotted like a node name */\n\t\tp = strrchr(plist->al_name, '.');\n\t\tif (p == NULL) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"warning: encountered an attribute %s without a node name...ignoring\", plist->al_name);\n\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\tplist = plist_next;\n\t\t\tcontinue;\n\t\t}\n\t\t*p = '\\0';\t   /* now plist->al_name would be the node name */\n\t\tattr_name = p + 1; /* p will be the actual attribute name */\n\t\tif (plist->al_resc != NULL) {\n\t\t\tp1 = strchr(plist->al_resc, ',');\n\t\t\tif (p1 != NULL) {\n\t\t\t\t*p1 = '\\0';\n\t\t\t}\n\t\t}\n\n\t\t/* at this point, we have:\n\t\t * \tplist->al_name: <node_name><p><attribute_name>\n\t\t * \t\t\t\twhere <p> = \\0\n\t\t * \tplist->al_resc: <resc_name><p1><type>\n\t\t * \t\t\t\twhere <p1> = \\0\n\t\t */\n\n\t\tif (add_to_svrattrl_list(&rqs.rq_attr, attr_name,\n\t\t\t\t\t plist->al_resc, plist->al_value, 0, NULL) != 0) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"warning: failed to add_to_svrattrl_list(%s,%s,%s)\",\n\t\t\t\t plist->al_name,\n\t\t\t\t plist->al_resc ? 
plist->al_resc : \"\", plist->al_value);\n\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\tgoto create_py_vnodelist_exit;\n\t\t}\n\t\tpn = NULL;\n\n\t\t/* Check if we're done processing the attributes/resources */\n\t\t/* of the current node. \t\t\t\t    */\n\t\tif (plist_next != NULL) {\n\n\t\t\t/* look at last dot for \"dotted\" node names */\n\t\t\tpn = strrchr(plist_next->al_name, '.');\n\t\t\tif (pn == NULL) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"warning: encountered the next attribute %s without a node name...ignoring\", plist_next->al_name);\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\tplist = (svrattrl *) GET_NEXT(plist_next->al_link);\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\t*pn = '\\0'; /* now plist_next->al_name would be the */\n\t\t\t\t    /* node name */\n\t\t\t\t    /* at this point, we have:\n\t\t\t * \tplist_next->al_name:\n\t\t\t * \t\t<node_name><pn><attribute_name>\n\t\t\t * \t\t\t\twhere <pn> = \\0\n\t\t\t */\n\t\t}\n\n\t\t/* The next vnlist entry is for a different node name */\n\t\t/* or we've reached the end of the line */\n\t\tif ((plist_next == NULL) ||\n\t\t    (strcmp(plist->al_name, plist_next->al_name) != 0)) {\n\n\t\t\tstrncpy(rqs.rq_id, plist->al_name,\n\t\t\t\tsizeof(rqs.rq_id) - 1);\n\n\t\t\tpy_va = Py_BuildValue(\"(s)\", rqs.rq_id); /* NEW ref */\n\t\t\tif (py_va == NULL) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"could not build args list for vnode %s\",\n\t\t\t\t\t plist->al_name);\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\tgoto create_py_vnodelist_exit;\n\t\t\t}\n\n\t\t\tpy_vn = PyObject_Call(py_vnode_class, py_va,\n\t\t\t\t\t      NULL); /* NEW ref */\n\t\t\tif (py_vn == NULL) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"failed to create a python vnode %s object\",\n\t\t\t\t\t plist->al_name);\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\tgoto create_py_vnodelist_exit;\n\t\t\t}\n\n\t\t\trc = 
pbs_python_populate_python_class_from_svrattrl(py_vn, &rqs.rq_attr, NULL, NULL);\n\n\t\t\tif (rc == -1) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"failed to fully populate Python\"\n\t\t\t\t\t \" vnode %s object\",\n\t\t\t\t\t plist->al_name);\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\tgoto create_py_vnodelist_exit;\n\t\t\t}\n\n\t\t\t/* set vnode : now py_vn ref count auto incremented*/\n\t\t\trc = PyDict_SetItemString(py_vnodelist, plist->al_name,\n\t\t\t\t\t\t  py_vn);\n\t\t\tif (rc == -1) {\n\t\t\t\tLOG_ERROR_ARG2(\"%s: partially set remaining param['%s'] attributes\",\n\t\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_VNODELIST);\n\t\t\t\tgoto create_py_vnodelist_exit;\n\t\t\t}\n\n\t\t\trqs.rq_id[0] = '\\0';\n\t\t\tfree_attrlist(&rqs.rq_attr);\n\t\t\tCLEAR_HEAD(rqs.rq_attr);\n\n\t\t\tPy_CLEAR(py_va);\n\t\t\tPy_CLEAR(py_vn);\n\t\t}\n\n\t\tplist = plist_next;\n\n\t\tif (p != NULL) {\n\t\t\t*p = '.'; /* restore prev plist->al_name to contain node name */\n\t\t\tp = NULL;\n\t\t}\n\n\t\tif (p1 != NULL) { /* restore prev \"<resc>,<resc_type>\"  value for plist->al_resc */\n\t\t\t*p1 = ',';\n\t\t\tp1 = NULL;\n\t\t}\n\n\t\tif (pn != NULL) {\n\t\t\t*pn = '.'; /* restore prev plist_next->al_name to contain node name */\n\t\t\tpn = NULL;\n\t\t}\n\n\t} while (plist);\n\n\tpy_vnlist_ret = py_vnodelist;\n\ncreate_py_vnodelist_exit:\n\trqs.rq_id[0] = '\\0';\n\tfree_attrlist(&rqs.rq_attr);\n\tCLEAR_HEAD(rqs.rq_attr);\n\tif (py_vnlist_ret != py_vnodelist) {\n\t\tPy_CLEAR(py_vnodelist);\n\t}\n\tPy_CLEAR(py_va);\n\tPy_CLEAR(py_vn);\n\n\tif (p != NULL) {\n\t\t*p = '.'; /* restore plist->al_name to contain node name */\n\t\tp = NULL;\n\t}\n\n\tif (p1 != NULL) { /* restore prev \"<resc>,<resc_type>\"  value for plist->al_resc */\n\t\t*p1 = ',';\n\t\tp1 = NULL;\n\t}\n\n\tif (pn != NULL) {\n\t\t*pn = '.'; /* restore prev plist_next->al_name to contain node name */\n\t\tpn = NULL;\n\t}\n\n\thook_perf_stat_stop(perf_label, perf_action, 
0);\n\treturn (py_vnlist_ret);\n}\n\n/**\n * @brief\n *\tGiven a list of job attributes/resources/values in 'joblist',\n *\treturn a Python dictionary object to represent 'joblist' as an array of\n *\tjob objects.\n *\n * @param[in]\tjoblist - list of job attributes/resources/values.\n *\t\t\tformat: a list of svrattrl entries (plist):\n * @param[in]\tplist->al_name:\t<job_name>.<attribute_name>\n * @param[in]\tplist->al_resc: <resource_name>,<type>\n * @param[in]\tplist->al_value: <value>\n * @param[in]\tperf_label - data passed on to hook_perf_stat* call\n * @param[in]\tperf_action - data passed on to hook_perf_stat* call\n *\n * @return \tPyObject *\n * @retval\t<object>\t- the Python dictionary object holding\n *\t\t\t           the individual job objects, indexed by\n *\t\t\t\t   job names.\n * @retval\tNULL\t\t- if an error occurred.\n *\n * @note\n *\t\tThis function calls a single hook_perf_stat_start()\n *\t\tthat has some malloc-ed data that are freed in the\n *\t\thook_perf_stat_stop() call, which is done at the end of\n *\t\tthis function.\n *\t\tEnsure that after the hook_perf_stat_start(), all\n *\t\tprogram execution paths lead to the hook_perf_stat_stop()\n *\t\tcall.\n */\nstatic PyObject *\ncreate_py_joblist(pbs_list_head *joblist, char *perf_label, char *perf_action)\n{\n\tsvrattrl *plist, *plist_next;\n\tPyObject *py_jn = NULL; /* instantiated job object */\n\tPyObject *py_ja = NULL; /* class job arg list */\n\tPyObject *py_joblist = NULL;\n\tPyObject *py_job_class = NULL;\n\tstruct rq_job {\n\t\tchar rq_id[PBS_MAXNODENAME * 2];\n\t\tpbs_list_head rq_attr;\n\t} rqs;\n\tchar *p = NULL;\n\tchar *pn = NULL;\n\tchar *p1 = NULL;\n\tchar *attr_name = NULL;\n\tPyObject *py_joblist_ret = NULL;\n\tint rc;\n\n\tpy_joblist = PyDict_New(); /* NEW - empty dict */\n\tif (py_joblist == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\"failed to create a jobs list dictionary!\");\n\t\treturn NULL;\n\t}\n\n\thook_perf_stat_start(perf_label, perf_action, 
0);\n\tpy_job_class = pbs_python_types_table[PP_JOB_IDX].t_class;\n\n\trqs.rq_id[0] = '\\0';\n\tCLEAR_HEAD(rqs.rq_attr);\n\n\tplist = (svrattrl *) GET_NEXT(*joblist);\n\tdo {\n\t\tif (plist == NULL)\n\t\t\tbreak;\n\n\t\tplist_next = (svrattrl *) GET_NEXT(plist->al_link);\n\n\t\t/* look for last dot as the name could be dotted like a job name */\n\t\tp = strrchr(plist->al_name, '.');\n\t\tif (p == NULL) { /* did not detect entry <job_name>.<atr_name> */\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"warning: encountered an attribute %s without a job name...ignoring\", plist->al_name);\n\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\tplist = plist_next;\n\t\t\tcontinue;\n\t\t}\n\t\t*p = '\\0';\t   /* now plist->al_name would be the job name */\n\t\tattr_name = p + 1; /* p will be the actual attribute name */\n\t\tif (plist->al_resc != NULL) {\n\t\t\t/* looking for resource entry \"<resc>,<resc_type>\" */\n\t\t\tp1 = strchr(plist->al_resc, ',');\n\t\t\tif (p1 != NULL) {\n\t\t\t\t*p1 = '\\0';\n\t\t\t}\n\t\t}\n\t\t/* at this point we have:\n\t\t * plist->al_name = <job_name><p><attribute_name>\n\t\t * \t\t\t\twhere <p> = \\0\n\t\t * plist->al_resc = <resource_name><p1><type>\n\t\t * \t\t\t\twhere <p1> = \\0\n\t\t */\n\n\t\tif (add_to_svrattrl_list(&rqs.rq_attr, attr_name,\n\t\t\t\t\t plist->al_resc, plist->al_value, 0, NULL) != 0) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"warning: failed to add_to_svrattrl_list(%s,%s,%s)\",\n\t\t\t\t plist->al_name,\n\t\t\t\t plist->al_resc ? plist->al_resc : \"\", plist->al_value);\n\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\tgoto create_py_joblist_exit;\n\t\t}\n\n\t\t/* Check if we're done processing the attributes/resources */\n\t\t/* of the current job. 
\t\t\t\t    */\n\t\tif (plist_next != NULL) {\n\n\t\t\t/* looking for the form: <job_name>.<attrib_name> */\n\t\t\tpn = strrchr(plist_next->al_name, '.');\n\t\t\tif (pn == NULL) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"warning: encountered the next attribute %s without a job name...ignoring\", plist_next->al_name);\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\tplist = (svrattrl *) GET_NEXT(plist_next->al_link);\n\t\t\t\tif (p != NULL) {\n\t\t\t\t\t*p = '.'; /* restore plist->al_name to contain job name */\n\t\t\t\t\tp = NULL;\n\t\t\t\t}\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\t*pn = '\\0'; /* now plist_next->al_name would be the */\n\t\t\t\t    /* job name */\n\t\t\t\t    /* at this point, we have\n\t\t\t * plist_next->al_name: <job_name><pn><attrib_name>\n\t\t\t * \t\t\twhere <pn> = \\0\n\t\t\t */\n\t\t}\n\n\t\t/* The next joblist entry is for a different job name */\n\t\t/* or we've reached the end of the line */\n\t\tif ((plist_next == NULL) ||\n\t\t    (strcmp(plist->al_name, plist_next->al_name) != 0)) {\n\n\t\t\tstrncpy(rqs.rq_id, plist->al_name,\n\t\t\t\tsizeof(rqs.rq_id) - 1);\n\n\t\t\tpy_ja = Py_BuildValue(\"(s)\", rqs.rq_id); /* NEW ref */\n\t\t\tif (py_ja == NULL) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"could not build args list for job %s\",\n\t\t\t\t\t plist->al_name);\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\tgoto create_py_joblist_exit;\n\t\t\t}\n\n\t\t\tpy_jn = PyObject_Call(py_job_class, py_ja,\n\t\t\t\t\t      NULL); /* NEW ref */\n\t\t\tif (py_jn == NULL) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"failed to create a python job %s object\",\n\t\t\t\t\t plist->al_name);\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\tgoto create_py_joblist_exit;\n\t\t\t}\n\n\t\t\trc = pbs_python_populate_python_class_from_svrattrl(py_jn, &rqs.rq_attr, NULL, NULL);\n\n\t\t\tif (rc == -1) {\n\t\t\t\tsnprintf(log_buffer, 
sizeof(log_buffer),\n\t\t\t\t\t \"failed to fully populate Python\"\n\t\t\t\t\t \" job %s object\",\n\t\t\t\t\t plist->al_name);\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\tgoto create_py_joblist_exit;\n\t\t\t}\n\n\t\t\t/* set job : now py_jn ref count auto incremented*/\n\t\t\trc = PyDict_SetItemString(py_joblist, plist->al_name,\n\t\t\t\t\t\t  py_jn);\n\t\t\tif (rc == -1) {\n\t\t\t\tLOG_ERROR_ARG2(\"%s: partially set remaining param['%s'] attributes\",\n\t\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOBLIST);\n\t\t\t\tgoto create_py_joblist_exit;\n\t\t\t}\n\n\t\t\trqs.rq_id[0] = '\\0';\n\t\t\tfree_attrlist(&rqs.rq_attr);\n\t\t\tCLEAR_HEAD(rqs.rq_attr);\n\n\t\t\tPy_CLEAR(py_ja);\n\t\t\tPy_CLEAR(py_jn);\n\t\t}\n\n\t\tplist = plist_next;\n\n\t\tif (p != NULL) {\n\t\t\t/* restore prev plist->al_name to contain job name */\n\t\t\t*p = '.';\n\t\t\tp = NULL;\n\t\t}\n\n\t\tif (p1 != NULL) {\n\t\t\t/* restore prev \"<resc>,<resc_type>\"  value for plist->al_resc */\n\t\t\t*p1 = ',';\n\t\t\tp1 = NULL;\n\t\t}\n\n\t\tif (pn != NULL) {\n\t\t\t/* restore prev plist_next->al_name to contain job name */\n\t\t\t*pn = '.';\n\t\t\tpn = NULL;\n\t\t}\n\n\t} while (plist);\n\n\tpy_joblist_ret = py_joblist;\n\ncreate_py_joblist_exit:\n\tfree_attrlist(&rqs.rq_attr);\n\tCLEAR_HEAD(rqs.rq_attr);\n\tif (py_joblist_ret != py_joblist) {\n\t\tPy_CLEAR(py_joblist);\n\t}\n\tPy_CLEAR(py_ja);\n\tPy_CLEAR(py_jn);\n\n\tif (p != NULL) {\n\t\t/* restore prev plist->al_name to contain job name */\n\t\t*p = '.';\n\t\tp = NULL;\n\t}\n\n\tif (p1 != NULL) {\n\t\t/* restore prev \"<resc>,<resc_type>\"  value for plist->al_resc */\n\t\t*p1 = ',';\n\t\tp1 = NULL;\n\t}\n\n\tif (pn != NULL) {\n\t\t/* restore prev plist_next->al_name to contain job name */\n\t\t*pn = '.';\n\t\tpn = NULL;\n\t}\n\n\thook_perf_stat_stop(perf_label, perf_action, 0);\n\treturn (py_joblist_ret);\n}\n\n/*\n * @brief\n *\tGiven a list of reservation attributes/resources/values in 'resvlist',\n *\treturn a Python 
dictionary object to represent 'resvlist' as an array of\n *\treservation objects.\n *\n * @param[in]\tresvlist - list of reservation attributes/resources/values.\n *\t\t\tformat: a list of svrattrl entries (plist):\n *\tplist->al_name:\t<resv_name>.<attribute_name>\n *\tplist->al_resc: <resource_name>,<type>\n *\tplist->al_value: <value>\n *\n * @param[in]\tperf_label - data passed on to hook_perf_stat* call\n * @param[in]\tperf_action - data passed on to hook_perf_stat* call\n *\n * @return \tPyObject *\n * @retval\t<object>\t- the Python dictionary object holding\n *\t\t\t           the individual reservation objects, indexed by\n *\t\t\t\t   reservation names.\n * @retval\tNULL\t\t- if an error occurred.\n *\n * @note\n *\t\tThis function calls a single hook_perf_stat_start()\n *\t\tthat has some malloc-ed data that are freed in the\n *\t\thook_perf_stat_stop() call, which is done at the end of\n *\t\tthis function.\n *\t\tEnsure that after the hook_perf_stat_start(), all\n *\t\tprogram execution paths lead to the hook_perf_stat_stop()\n *\t\tcall.\n */\nstatic PyObject *\ncreate_py_resvlist(pbs_list_head *resvlist, char *perf_label, char *perf_action)\n{\n\tsvrattrl *plist, *plist_next;\n\tPyObject *py_rn = NULL; /* instantiated reservation object */\n\tPyObject *py_ra = NULL; /* class reservation arg list */\n\tPyObject *py_resvlist = NULL;\n\tPyObject *py_resv_class = NULL;\n\tstruct rq_resv {\n\t\tchar rq_id[PBS_MAXNODENAME * 2];\n\t\tpbs_list_head rq_attr;\n\t} rqs;\n\tchar *p = NULL;\n\tchar *pn = NULL;\n\tchar *p1 = NULL;\n\tchar *attr_name = NULL;\n\tPyObject *py_resvlist_ret = (PyObject *) NULL;\n\tint rc;\n\n\tpy_resvlist = PyDict_New(); /* NEW - empty dict */\n\tif (py_resvlist == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\"failed to create a reservation list dictionary!\");\n\t\treturn (NULL);\n\t}\n\n\thook_perf_stat_start(perf_label, perf_action, 0);\n\tpy_resv_class = pbs_python_types_table[PP_RESV_IDX].t_class;\n\n\tmemset(rqs.rq_id, 0, 
sizeof(rqs.rq_id));\n\tCLEAR_HEAD(rqs.rq_attr);\n\n\tfor (plist = (svrattrl *) GET_NEXT(*resvlist); plist; plist = plist_next) {\n\n\t\tplist_next = (svrattrl *) GET_NEXT(plist->al_link);\n\n\t\t/* look for last dot as the name could be dotted like a resv name */\n\t\tp = strrchr(plist->al_name, '.');\n\t\tif (p == NULL) { /* did not detect entry <resv_name>.<atr_name> */\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"warning: encountered an attribute %s without a resv name...ignoring\", plist->al_name);\n\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\tcontinue;\n\t\t}\n\t\t*p = '\\0';\t   /* now plist->al_name would be the resv name */\n\t\tattr_name = p + 1; /* p will be the actual attribute name */\n\t\tif (plist->al_resc != NULL) {\n\t\t\t/* looking for resource entry \"<resc>,<resc_type>\" */\n\t\t\tp1 = strchr(plist->al_resc, ',');\n\t\t\tif (p1 != NULL) {\n\t\t\t\t*p1 = '\\0';\n\t\t\t}\n\t\t}\n\t\t/* at this point we have:\n\t\t * plist->al_name = <resv_name><p><attribute_name>\n\t\t * \t\t\t\twhere <p> = \\0\n\t\t * plist->al_resc = <resource_name><p1><type>\n\t\t * \t\t\t\twhere <p1> = \\0\n\t\t */\n\n\t\tif (add_to_svrattrl_list(&rqs.rq_attr, attr_name,\n\t\t\t\t\t plist->al_resc, plist->al_value, 0, NULL) != 0) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE,\n\t\t\t\t \"warning: failed to add_to_svrattrl_list(%s,%s,%s)\",\n\t\t\t\t plist->al_name,\n\t\t\t\t plist->al_resc ? plist->al_resc : \"\", plist->al_value);\n\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\tgoto create_py_resvlist_exit;\n\t\t}\n\n\t\t/* Check if we're done processing the attributes/resources */\n\t\t/* of the current resv. 
\t\t\t\t    */\n\t\tif (plist_next != NULL) {\n\n\t\t\t/* looking for the form: <resv_name>.<attrib_name> */\n\t\t\tpn = strrchr(plist_next->al_name, '.');\n\t\t\tif (pn == NULL) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"warning: encountered the next attribute %s without a resv name...ignoring\", plist_next->al_name);\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\tplist = (svrattrl *) GET_NEXT(plist_next->al_link);\n\t\t\t\tif (p != NULL) {\n\t\t\t\t\t*p = '.'; /* restore plist->al_name to contain resv name */\n\t\t\t\t\tp = NULL;\n\t\t\t\t}\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\t*pn = '\\0'; /* now plist_next->al_name would be the */\n\t\t\t\t    /* resv name */\n\t\t\t\t    /* at this point, we have\n\t\t\t * plist_next->al_name: <resv_name><pn><attrib_name>\n\t\t\t * \t\t\twhere <pn> = \\0\n\t\t\t */\n\t\t}\n\n\t\t/* The next resvlist entry is for a different resv name */\n\t\t/* or we've reached the end of the line */\n\t\tif ((plist_next == NULL) ||\n\t\t    (strcmp(plist->al_name, plist_next->al_name) != 0)) {\n\n\t\t\tsnprintf(rqs.rq_id, sizeof(rqs.rq_id), \"%s\", plist->al_name);\n\n\t\t\tpy_ra = Py_BuildValue(\"(s)\", rqs.rq_id); /* NEW ref */\n\t\t\tif (py_ra == NULL) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"could not build args list for resv %s\",\n\t\t\t\t\t plist->al_name);\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\tgoto create_py_resvlist_exit;\n\t\t\t}\n\n\t\t\tpy_rn = PyObject_Call(py_resv_class, py_ra,\n\t\t\t\t\t      (PyObject *) NULL); /* NEW ref */\n\t\t\tif (py_rn == NULL) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"failed to create a python resv %s object\",\n\t\t\t\t\t plist->al_name);\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\tgoto create_py_resvlist_exit;\n\t\t\t}\n\n\t\t\trc = pbs_python_populate_python_class_from_svrattrl(py_rn, &rqs.rq_attr, NULL, NULL);\n\n\t\t\tif (rc == -1) {\n\t\t\t\tsnprintf(log_buffer, 
sizeof(log_buffer),\n\t\t\t\t\t \"failed to fully populate Python\"\n\t\t\t\t\t \" resv %s object\",\n\t\t\t\t\t plist->al_name);\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\tgoto create_py_resvlist_exit;\n\t\t\t}\n\n\t\t\t/* set resv : now py_rn ref count auto incremented */\n\t\t\trc = PyDict_SetItemString(py_resvlist, plist->al_name,\n\t\t\t\t\t\t  py_rn);\n\t\t\tif (rc == -1) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"%s: partially set remaining param['%s'] attributes\",\n\t\t\t\t\t PY_TYPE_EVENT, PY_EVENT_PARAM_RESVLIST);\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\tgoto create_py_resvlist_exit;\n\t\t\t}\n\n\t\t\trqs.rq_id[0] = '\\0';\n\t\t\tfree_attrlist(&rqs.rq_attr);\n\t\t\tCLEAR_HEAD(rqs.rq_attr);\n\n\t\t\tPy_CLEAR(py_ra);\n\t\t\tPy_CLEAR(py_rn);\n\t\t}\n\n\t\tif (p != NULL) {\n\t\t\t/* restore prev plist->al_name to contain resv name */\n\t\t\t*p = '.';\n\t\t\tp = NULL;\n\t\t}\n\n\t\tif (p1 != NULL) {\n\t\t\t/* restore prev \"<resc>,<resc_type>\" value for plist->al_resc */\n\t\t\t*p1 = ',';\n\t\t\tp1 = NULL;\n\t\t}\n\n\t\tif (pn != NULL) {\n\t\t\t/* restore prev plist_next->al_name to contain resv name */\n\t\t\t*pn = '.';\n\t\t\tpn = NULL;\n\t\t}\n\t}\n\n\tpy_resvlist_ret = py_resvlist;\n\ncreate_py_resvlist_exit:\n\tfree_attrlist(&rqs.rq_attr);\n\tCLEAR_HEAD(rqs.rq_attr);\n\tif (py_resvlist_ret != py_resvlist) {\n\t\tPy_CLEAR(py_resvlist);\n\t}\n\tPy_CLEAR(py_ra);\n\tPy_CLEAR(py_rn);\n\n\tif (p != NULL) {\n\t\t/* restore prev plist->al_name to contain resv name */\n\t\t*p = '.';\n\t\tp = NULL;\n\t}\n\n\tif (p1 != NULL) {\n\t\t/* restore prev \"<resc>,<resc_type>\" value for plist->al_resc */\n\t\t*p1 = ',';\n\t\tp1 = NULL;\n\t}\n\n\tif (pn != NULL) {\n\t\t/* restore prev plist_next->al_name to contain resv name */\n\t\t*pn = '.';\n\t\tpn = NULL;\n\t}\n\n\thook_perf_stat_stop(perf_label, perf_action, 0);\n\treturn (py_resvlist_ret);\n}\n\n/**\n *\n * @brief\n *\tGiven a list of string values 
in 'str_list',\n *\treturn a Python list object to represent 'str_list'.\n *\n * @param[in]\tstr_list - an array of strings.\n *\n * @return \tPyObject *\n * @retval\t<object>\t- the Python list object holding\n *\t\t\t           the individual strings.\n * @retval\tNULL\t\t- if an error occurred.\n */\nstatic PyObject *\ncreate_py_strlist(char **str_list)\n{\n\tint i;\n\tPyObject *py_str = NULL; /* a string value */\n\tPyObject *py_strlist = NULL;\n\tPyObject *py_strlist_ret = NULL;\n\n\tif (str_list == NULL)\n\t\treturn NULL;\n\n\tpy_strlist = PyList_New(0); /* NEW - empty list */\n\tif (py_strlist == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\"failed to create an array of strings list!\");\n\t\treturn NULL;\n\t}\n\ti = 0;\n\twhile (str_list[i]) {\n\t\tpy_str = Py_BuildValue(\"s\", str_list[i]); /* NEW ref */\n\t\tif (py_str == NULL) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"could not create python object for %s\",\n\t\t\t\t str_list[i]);\n\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\tgoto create_py_strlist_exit;\n\t\t}\n\t\t/* py_str's reference count incremented inside PyList */\n\t\tif (PyList_Append(py_strlist, py_str) != 0) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"Failed to append %s to python string list\",\n\t\t\t\t str_list[i]);\n\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\tPy_DECREF(py_str);\n\t\t\tgoto create_py_strlist_exit;\n\t\t}\n\t\tPy_DECREF(py_str);\n\t\ti++;\n\t}\n\tpy_strlist_ret = py_strlist;\n\ncreate_py_strlist_exit:\n\tif (py_strlist_ret != py_strlist) {\n\t\tPy_CLEAR(py_strlist);\n\t}\n\n\treturn (py_strlist_ret);\n}\n\n/**\n *\n * @brief\n *\tGiven a pbs_list_head 'phead', convert just the attribute names\n *      (plist->al_name) into a Python string list.\n *\tReturn a Python list object representing the names in the given\n *\tpbs list.\n *\n * @param[in]\tphead - head of svrattrl entries.\n *\n * @return \tPyObject *\n * @retval\t<object>\t- the Python list 
object holding\n *\t\t\t           the individual strings of svrattrl names.\n * @retval\tNULL\t\t- if an error occurred.\n */\nstatic PyObject *\ncreate_py_strlist_from_svrattrl_names(pbs_list_head *phead)\n{\n\tPyObject *py_str = NULL; /* a string value */\n\tPyObject *py_strlist = NULL;\n\tPyObject *py_strlist_ret = NULL;\n\tsvrattrl *plist;\n\n\tif (phead == NULL)\n\t\treturn NULL;\n\n\tpy_strlist = PyList_New(0); /* NEW - empty list */\n\tif (py_strlist == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\"failed to create a strings list!\");\n\t\treturn NULL;\n\t}\n\n\tfor (plist = (svrattrl *) GET_NEXT(*phead); plist;\n\t     plist = (svrattrl *) GET_NEXT(plist->al_link)) {\n\n\t\tif (plist->al_name == NULL) {\n\t\t\tcontinue;\n\t\t}\n\n\t\tpy_str = Py_BuildValue(\"s\", plist->al_name); /* NEW ref */\n\t\tif (py_str == NULL) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"could not create python object for %s\",\n\t\t\t\t plist->al_name);\n\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\tgoto create_py_strlist_from_svrattrl_names_exit;\n\t\t}\n\t\t/* py_str's reference count incremented inside PyList */\n\t\tif (PyList_Append(py_strlist, py_str) != 0) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"Failed to append %s to python string list\",\n\t\t\t\t plist->al_name);\n\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\tPy_DECREF(py_str);\n\t\t\tgoto create_py_strlist_from_svrattrl_names_exit;\n\t\t}\n\t\tPy_DECREF(py_str);\n\t}\n\tpy_strlist_ret = py_strlist;\n\ncreate_py_strlist_from_svrattrl_names_exit:\n\tif (py_strlist_ret != py_strlist) {\n\t\tPy_CLEAR(py_strlist);\n\t}\n\n\treturn (py_strlist_ret);\n}\n\n/**\n *\n * @brief\n *\tGiven a Python list of string values,\n *\tdump its contents into a svrattrl list.\n *\n * @param[in]\tpy_strlist - the input Python list\n * @param[in]\tto_head - destination svrattrl list\n * @param[in]\tname_str - attribute name to associate with each entry\n *\n * @return \tint\n 
* @retval\t0\t- success\n * @retval\t-1\t- error\n */\nstatic int\npy_strlist_to_svrattrl(PyObject *py_strlist, pbs_list_head *to_head, char *name_str)\n{\n\tchar *str;\n\tint i;\n\tint len;\n\tchar index_str[20];\n\n\tif ((py_strlist == NULL) || (to_head == NULL) || (name_str == NULL))\n\t\treturn (-1);\n\n\tlen = PyList_Size(py_strlist);\n\tif (len == 0)\n\t\treturn (0);\n\n\tCLEAR_HEAD((*to_head));\n\tfor (i = 0; (i < len) && ((str = pbs_python_list_get_item_string_value(py_strlist, i)) != NULL); i++) {\n\t\tsnprintf(index_str, sizeof(index_str), \"%d\", i);\n\t\tif (add_to_svrattrl_list(to_head, name_str, index_str, str, ATR_VFLAG_HOOK, NULL) == -1) {\n\t\t\tfree_attrlist(to_head);\n\t\t\treturn (-1);\n\t\t}\n\t}\n\treturn (0);\n}\n\n/**\n *\n * @brief\n *\tGiven a Python list of string values,\n *\tdump its contents into a reliable_job_node list.\n *\n * @param[in]\tpy_strlist - the input Python list\n * @param[in]\tto_head - destination reliable_job_node list\n *\n * @return \tint\n * @retval\t0\t- success\n * @retval\t-1\t- error\n */\nstatic int\npy_strlist_to_reliable_job_node_list(PyObject *py_strlist, pbs_list_head *to_head)\n{\n\tchar *str;\n\tint i;\n\tint len;\n\n\tif ((py_strlist == NULL) || (to_head == NULL))\n\t\treturn (-1);\n\n\tlen = PyList_Size(py_strlist);\n\tif (len == 0)\n\t\treturn (0);\n\n\tCLEAR_HEAD((*to_head));\n\tfor (i = 0; (i < len) && ((str = pbs_python_list_get_item_string_value(py_strlist, i)) != NULL); i++) {\n\t\tif (reliable_job_node_add(to_head, str) == -1) {\n\t\t\tfree_attrlist(to_head);\n\t\t\treturn (-1);\n\t\t}\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n *\tRead data from /proc/self/statm if available.\n *\n * @return char *\n * @retval NULL: No data\n * @retval !NULL: Memory usage data\n */\nstatic char *\nread_statm(void)\n{\n\tstatic char buf[128] = {'\\0'};\n\tlong vmsize, vmrss;\n\tint rc;\n\tFILE *fp;\n\n\tfp = fopen(\"/proc/self/statm\", \"r\");\n\tif (!fp)\n\t\treturn NULL;\n\t/* Only fetch the first two entries. 
*/\n\trc = fscanf(fp, "%ld %ld", &vmsize, &vmrss);\n\tfclose(fp);\n\tif (rc != 2)\n\t\treturn NULL;\n\t/* statm reports sizes in pages; convert to KB using the page size. */\n\tvmsize *= sysconf(_SC_PAGESIZE) / 1024;\n\tvmrss *= sysconf(_SC_PAGESIZE) / 1024;\n\tsnprintf(buf, sizeof(buf), "VmSize=%ldkB, VmRSS=%ldkB", vmsize, vmrss);\n\treturn (buf);\n}\n\n/**\n *\n * @brief\n *\tHelper function to create a vnode_list[] type of parameter\n *\tnamed 'param_name' under python object 'py_event_param' of\n * \ttype 'event_type', with data coming from 'vnlist'.\n *\n * @param[in]\tpy_event_param - event parameter object\n * @param[in]\tevent_type - the event type requesting this\n * @param[in]\tparam_name - name of the vnode_list parameter\n * @param[in]\tvnlist - data for the vnode_list parameter\n * @param[in]\tperf_label - passed on to hook_perf_stat* call.\n * @param[in]\tperf_action - passed on to hook_perf_stat* call.\n *\n * @return PyObject *\n * @retval <python_object>\t- the Python object representing the\n *\t\t\t\t  vnode_list parameter\n * @retval NULL\t\t- if failure is encountered\n *\n */\nstatic PyObject *\ncreate_hook_vnode_list_param(PyObject *py_event_param,\n\t\t\t     char *event_type, char *param_name, pbs_list_head *vnlist,\n\t\t\t     char *perf_label, char *perf_action)\n{\n\tPyObject *py_vnlist = NULL;\n\tint rc;\n\n\tif ((py_event_param == NULL) || (param_name == NULL) || (vnlist == NULL)) {\n\t\tlog_err(-1, __func__, "bad function parameter");\n\t\treturn (NULL);\n\t}\n\n\t(void) PyDict_SetItemString(py_event_param, param_name, Py_None);\n\n\tpy_vnlist = create_py_vnodelist(vnlist, perf_label, perf_action);\n\tif (py_vnlist == NULL) {\n\t\treturn (NULL);\n\t}\n\n\t/* set vnode list: py_vnlist given to py_event_param so ref count auto incremented */\n\trc = PyDict_SetItemString(py_event_param, param_name, py_vnlist);\n\tif (rc == -1) {\n\t\tPy_CLEAR(py_vnlist);\n\t\tLOG_ERROR_ARG2("%s: partially set remaining param['%s'] attributes", event_type, param_name);\n\t\treturn (NULL);\n\t}\n\treturn (py_vnlist);\n}\n\n/**\n *\n * @brief\n 
*\tFunction which clears Python objects after processing hooks.\n *\n */\nvoid\npbs_python_clear_attributes()\n{\n\tpbs_iter_item *iter_entry = NULL;\n\tpbs_iter_item *nxp_iter_entry;\n\n\tvnode_set_req *vn_set_req = NULL;\n\tvnode_set_req *nxp_vn_set_req;\n\n\tpbs_resource_value *resc_val = NULL;\n\tpbs_resource_value *nxp_resc_val;\n\tint i;\n\n\t/* Initialize the list of PBS iterators for new runs of hooks */\n\t/* servicing a given event (e.g. runjob event).               */\n\tif (pbs_iter_list.ll_next != NULL)\n\t\titer_entry = (pbs_iter_item *) GET_NEXT(pbs_iter_list);\n\twhile (iter_entry != NULL) {\n\t\t/* save the next iterator item */\n\t\tnxp_iter_entry = (pbs_iter_item *) GET_NEXT(iter_entry->all_iters);\n\n\t\tif (iter_entry->py_iter)\n\t\t\tPy_CLEAR(iter_entry->py_iter);\n\n\t\tdelete_link(&iter_entry->all_iters);\n\t\tfree(iter_entry);\n\t\titer_entry = nxp_iter_entry;\n\t}\n\n\t/* Initialize the list of PBS vnode set operations for new runs of hooks */\n\t/* servicing a given event (e.g. runjob event).               */\n\tif (pbs_vnode_set_list.ll_next != NULL)\n\t\tvn_set_req = (vnode_set_req *) GET_NEXT(pbs_vnode_set_list);\n\twhile (vn_set_req != NULL) {\n\t\t/* save the next vnode_set_req item  */\n\t\tnxp_vn_set_req = (vnode_set_req *) GET_NEXT(vn_set_req->all_reqs);\n\n\t\tfree_attrlist(&vn_set_req->rq_attr);\n\n\t\tdelete_link(&vn_set_req->all_reqs);\n\t\tfree(vn_set_req);\n\t\tvn_set_req = nxp_vn_set_req;\n\t}\n\n\t/* Initialize the list of PBS resource values to set for new runs of hooks */\n\t/* servicing a given event (e.g. runjob event).               
*/\n\tif (pbs_resource_value_list.ll_next != NULL)\n\t\tresc_val = (pbs_resource_value *) GET_NEXT(pbs_resource_value_list);\n\twhile (resc_val != NULL) {\n\t\t/* save the next pbs_resource_value item */\n\t\tnxp_resc_val = (pbs_resource_value *) GET_NEXT(resc_val->all_rescs);\n\n\t\tPy_CLEAR(resc_val->py_resource);\n\t\tPy_CLEAR(resc_val->py_resource_str_value);\n\t\tfree_attrlist(&resc_val->value_list);\n\n\t\tdelete_link(&resc_val->all_rescs);\n\t\tfree(resc_val);\n\t\tresc_val = nxp_resc_val;\n\t}\n\n\t/* py_hook_pbsevent is instantiated in C_MODE so I own it */\n\tif (py_hook_pbsevent != NULL)\n\t\tPy_CLEAR(py_hook_pbsevent);\n\n\t/* py_hook_pbsserver is instantiated in C_MODE so I own it */\n\tif (py_hook_pbsserver != NULL)\n\t\tPy_CLEAR(py_hook_pbsserver);\n\n\tif (py_hook_pbsque != NULL) {\n\t\tfor (i = 0; (i < py_hook_pbsque_max) && (py_hook_pbsque[i] != NULL); i++) {\n\t\t\tPy_CLEAR(py_hook_pbsque[i]);\n\t\t}\n\t}\n}\n\n\n/**\n * @brief\n *      Creates a PBS Python event object that can be accessed in a hook\n *\tscript as: pbs.event().\n *\n * @param[in]\thook_event\t- the hook event name for the event object.\n *\t\t\t\t  (e.g. HOOK_EVENT_QUEUEJOB)\n * @param[in]\treq_user\t- the requesting user\n * @param[in]\treq_host\t- the requesting host\n * @param[in]\treq_params\t- a structure containing the input parameters.\n * @param[in]\tperf_label - data passed on to hook_perf_stat* call\n *\n * @return int\n * @retval\t0\t- success\n * @retval\t-1\t- error\n * @retval\t-2\t- function failed to complete execution due to\n *\t\t\t   a keyboard interrupt. This may be caused by the\n *\t\t\t   calling process getting a SIGINT. 
In this case,\n *\t\t\t   just rerun this call.\n */\nint\n_pbs_python_event_set(unsigned int hook_event, char *req_user, char *req_host,\n\t\t      hook_input_param_t *req_params, char *perf_label)\n{\n\tPyObject *py_event = NULL;\n\tPyObject *py_eargs = NULL;\n\tPyObject *py_jargs = NULL;\n\tPyObject *py_rargs = NULL;\n\tPyObject *py_job = NULL;\n\tPyObject *py_job_o = NULL;\n\tPyObject *py_que = NULL;\n\tPyObject *py_resv = NULL;\n\tPyObject *py_resv_o = NULL;\n\tPyObject *py_margs = NULL;\n\tPyObject *py_management = NULL;\n\tPyObject *py_event_param = NULL;\n\tPyObject *py_event_class = NULL;\n\tPyObject *py_job_class = NULL;\n\tPyObject *py_management_class = NULL;\n\tPyObject *py_resv_class = NULL;\n\tPyObject *py_env_class = NULL;\n\tPyObject *py_varlist = NULL;\n\tPyObject *py_varlist_o = NULL;\n\tPyObject *py_vnodelist = NULL;\n\tPyObject *py_vnodelist_fail = NULL;\n\tPyObject *py_joblist = NULL;\n\tPyObject *py_resvlist = NULL;\n\tPyObject *py_exec_vnode = NULL;\n\tPyObject *py_vnode = NULL;\n\tPyObject *py_vnode_o = NULL;\n\tPyObject *py_aoe = NULL;\n\tPyObject *py_resclist = NULL;\n\tPyObject *py_progname = NULL;\n\tPyObject *py_arglist = NULL;\n\tPyObject *py_env = NULL;\n\tPyObject *py_pid = NULL;\n\tPyObject *py_node_list = (PyObject *) NULL;\n\tPyObject *py_failed_node_list = (PyObject *) NULL;\n\tchar perf_action[MAXBUFLEN];\n\n\tstatic long hook_counter = 0;\t      /* for tracking interpreter restart */\n\tstatic long min_restart_interval = 0; /* prevents frequent restarts */\n\tstatic int init_iters = 0;\t      /* 1 to initialize the PBS iterators list */\n\tstatic int init_vnode_set = 0;\t      /* 1 to initialize the vnode set opers */\n\n\tstatic int init_resource_values = 0; /* 1 to initialize the */\n\t\t\t\t\t     /* list of pbs_resource values */\n\t\t\t\t\t     /* to instantiate. 
*/\n\tstatic long max_hooks = 0;\n\tstatic long max_objects = 0;\n\tstatic time_t previous_restart = (time_t) 0;\n\n\tlong lval;\n\tint restart_python;\n\tint rc = -1;\n\n\tpbs_list_head *vnlist;\n\tpbs_list_head *joblist;\n\tpbs_list_head *resvlist;\n\n\tif (!init_iters) {\n\t\tCLEAR_HEAD(pbs_iter_list);\n\t\tinit_iters = 1;\n\t}\n\n\tif (!init_vnode_set) {\n\t\tCLEAR_HEAD(pbs_vnode_set_list);\n\t\tinit_vnode_set = 1;\n\t}\n\n\tif (!init_resource_values) {\n\t\tCLEAR_HEAD(pbs_resource_value_list);\n\t\tinit_resource_values = 1;\n\t}\n\n\tlval = max_hooks;\n\tif (is_sattr_set(SVR_ATR_PythonRestartMaxHooks))\n\t\tmax_hooks = get_sattr_long(SVR_ATR_PythonRestartMaxHooks);\n\telse\n\t\tmax_hooks = PBS_PYTHON_RESTART_MAX_HOOKS;\n\tif (lval != max_hooks) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"python_restart_max_hooks is now %ld\", max_hooks);\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK, LOG_INFO, __func__, log_buffer);\n\t}\n\n\tlval = max_objects;\n\tif (is_sattr_set(SVR_ATR_PythonRestartMaxObjects))\n\t\tmax_objects = get_sattr_long(SVR_ATR_PythonRestartMaxObjects);\n\telse\n\t\tmax_objects = PBS_PYTHON_RESTART_MAX_OBJECTS;\n\tif (lval != max_objects) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"python_restart_max_objects is now %ld\", max_objects);\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK, LOG_INFO, __func__, log_buffer);\n\t}\n\n\tlval = min_restart_interval;\n\tif (is_sattr_set(SVR_ATR_PythonRestartMinInterval))\n\t\tmin_restart_interval = get_sattr_long(SVR_ATR_PythonRestartMinInterval);\n\telse\n\t\tmin_restart_interval = PBS_PYTHON_RESTART_MIN_INTERVAL;\n\tif (lval != min_restart_interval) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"python_restart_min_interval is now %ld\", min_restart_interval);\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK, LOG_INFO, __func__, log_buffer);\n\t}\n\n\thook_counter++;\n\trestart_python = 0;\n\tif (hook_counter >= max_hooks)\n\t\trestart_python = 1;\n\tif 
(object_counter >= max_objects)\n\t\trestart_python = 1;\n\tif ((time(NULL) - previous_restart) < min_restart_interval)\n\t\trestart_python = 0;\n\tif (restart_python) {\n\t\tchar *line;\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_INFO, __func__,\n\t\t\t  \"Restarting Python interpreter to reduce mem usage\");\n\t\tpbs_python_ext_shutdown_interpreter(&svr_interp_data);\n\t\tif (pbs_python_ext_start_interpreter(&svr_interp_data) != 0) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"Failed to restart Python interpreter\");\n\t\t\tgoto event_set_exit;\n\t\t}\n\t\t/* Reset counters for the next interpreter restart. */\n\t\thook_counter = 0;\n\t\tobject_counter = 0;\n\t\tprevious_restart = time(NULL);\n\t\t/* Log current memory usage. */\n\t\tline = read_statm();\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"Current memory usage: %s\",\n\t\t\t (line ? line : \"unknown\"));\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_INFO, __func__, log_buffer);\n\t}\n\n\thook_set_mode = C_MODE;\n\n\t/*\n\t * First things first create a Python event object.\n\t *  - Borrowed reference\n\t *  - Exception is *NOT* set\n\t */\n\tpy_event_class = pbs_python_types_table[PP_EVENT_IDX].t_class;\n\n\tpy_eargs = Py_BuildValue(\"(iss)\", hook_event, req_user, req_host); /*NEW ref*/\n\n\tif (!py_eargs) {\n\t\tlog_err(-1, __func__, \"could not build args list for event\");\n\t\tgoto event_set_exit;\n\t}\n\tpy_event = PyObject_Call(py_event_class, py_eargs, NULL); /*NEW*/\n\n\tif (!py_event) {\n\t\tlog_err(-1, __func__, \"failed to create a python event object\");\n\t\tgoto event_set_exit;\n\t}\n\n\tif (!PyObject_HasAttrString(py_event, PY_EVENT_PARAM)) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"event has no param structure!\");\n\t\tgoto event_set_exit;\n\t}\n\n\tpy_event_param = PyObject_GetAttrString(py_event,\n\t\t\t\t\t\tPY_EVENT_PARAM); /* NEW ref*/\n\tif (!py_event_param) {\n\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\"failed to get param attribute of event 
object\");\n\t\tgoto event_set_exit;\n\t}\n\n\tif (!PyDict_Check(py_event_param)) {\n\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\"attribute of event object not a dictionary!\");\n\t\tgoto event_set_exit;\n\t}\n\n\tif (hook_event == HOOK_EVENT_QUEUEJOB) {\n\t\tstruct rq_queuejob *rqj = req_params->rq_job;\n\n\t\t/* initialize event param to None */\n\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOB,\n\t\t\t\t\t    Py_None);\n\t\t/*\n\t\t * First things first create a Python job object.\n\t\t *  - Borrowed reference\n\t\t *  - Exception is *NOT* set\n\t\t */\n\t\tpy_job_class = pbs_python_types_table[PP_JOB_IDX].t_class;\n\n\t\tpy_jargs = Py_BuildValue(\"(s)\", rqj->rq_jid); /* NEW ref */\n\t\tif (!py_jargs) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"could not build args list for job\");\n\t\t\tgoto event_set_exit;\n\t\t}\n\t\tpy_job = PyObject_Call(py_job_class, py_jargs, NULL); /*NEW*/\n\n\t\tif (!py_job) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"failed to create a python job object\");\n\t\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOB,\n\t\t\t\t\t\t    Py_None);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOB, py_job);\n\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to set param attribute <%s>\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOB);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\trc = pbs_python_object_set_attr_string_value(py_job, ATTR_queue,\n\t\t\t\t\t\t\t     rqj->rq_destin);\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to set attribute <%s>\",\n\t\t\t\t       \"\", ATTR_queue);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\tsnprintf(perf_action, sizeof(perf_action), \"%s:%s(%s)\", HOOK_PERF_POPULATE, EVENT_JOB_OBJECT, rqj->rq_jid);\n\t\trc = pbs_python_populate_python_class_from_svrattrl(py_job,\n\t\t\t\t\t\t\t\t    &rqj->rq_attr, perf_label, perf_action);\n\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s: partially set remaining param['%s'] 
attributes\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOB);\n\t\t\tgoto event_set_exit;\n\t\t}\n\t} else if (hook_event == HOOK_EVENT_POSTQUEUEJOB) {\n\t\tstruct rq_postqueuejob *rqj = req_params->rq_postqueuejob;\n\n\t\t/* initialize event param to None */\n\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOB,\n\t\t\t\t\t    Py_None);\n\t\tif (IS_PBS_PYTHON_CMD(pbs_python_daemon_name)) {\n\t\t\tif (py_pbs_statobj != NULL) {\n\t\t\t\tPy_XDECREF(py_jargs); /* discard previously used value */\n\t\t\t\tpy_jargs = Py_BuildValue(\"(sss)\", \"job\", rqj->rq_jid,\n\t\t\t\t\t\t\t pbs_conf.pbs_server_name); /* NEW ref */\n\t\t\t\tpy_job = PyObject_Call(py_pbs_statobj, py_jargs,\n\t\t\t\t\t\t       NULL); /*NEW*/\n\t\t\t\thook_set_mode = C_MODE;\t      /* ensure still in C mode */\n\t\t\t}\n\t\t} else {\n\t\t\tpy_job = _pps_helper_get_job(NULL, rqj->rq_jid, NULL, perf_label);\n\t\t}\n\t\t/* NEW - we own ref */\n\n\t\tif (!py_job || (py_job == Py_None)) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to get job %s's python \"\n\t\t\t\t       \"job object\",\n\t\t\t\t       PY_TYPE_EVENT, rqj->rq_jid);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* py_job given to py_event_parm...so ref. count auto incremented */\n\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOB, py_job);\n\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to set param attribute <%s>\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOB);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* py_job is not read-only but only ATTR_h, ATTR_a are modifiable and */\n\t\t/* checked under is_attrib_val_settable(). Resource-type attributes   */\n\t\t/* such as ATTR_l, which is part of py_job,  go through a different   */\n\t\t/* processing mechanism.        \t\t\t\t      */\n\t\t/* It is fatal if py_job's readonly flag could not be set to False    */\n\t\t/* as it could prevent all job attributes including ATTR_l to be      */\n\t\t/* not settable.                                                      
*/\n\t\trc = pbs_python_object_set_attr_integral_value(py_job,\n\t\t\t\t\t\t\t       PY_READONLY_FLAG, FALSE);\n\t\tif (rc == -1) {\n\n\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t"Failed to set object's readonly flag");\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\tpy_resclist = PyObject_GetAttrString(py_job, ATTR_l); /* NEW */\n\t\tif ((py_resclist != NULL) && (py_resclist != Py_None)) {\n\t\t\t/* Don't mark pbs.event().job.Resource_List[] as readonly */\n\t\t\trc = pbs_python_object_set_attr_integral_value(py_resclist,\n\t\t\t\t\t\t\t\t       PY_READONLY_FLAG, FALSE);\n\t\t\tif (rc == -1) {\n\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t"Failed to set object's readonly flag");\n\t\t\t\tLOG_ERROR_ARG2("%s: warning - failed to set object's '%s' readonly flag", __func__, "Resource_List[]");\n\t\t\t}\n\t\t}\n\t} else if (hook_event == HOOK_EVENT_RESVSUB) {\n\t\tstruct rq_queuejob *rqj = req_params->rq_job;\n\n\t\t/* initialize event param to None */\n\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_RESV,\n\t\t\t\t\t    Py_None);\n\t\t/*\n\t\t * First things first create a Python reservation object.\n\t\t *  - Borrowed reference\n\t\t *  - Exception is *NOT* set\n\t\t */\n\t\tpy_resv_class = pbs_python_types_table[PP_RESV_IDX].t_class;\n\n\t\tpy_rargs = Py_BuildValue("(s)", rqj->rq_jid); /* NEW ref */\n\n\t\tif (!py_rargs) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, "could not build args list for resv");\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\tpy_resv = PyObject_Call(py_resv_class, py_rargs,\n\t\t\t\t\tNULL); /*NEW*/\n\n\t\tif (!py_resv) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, "failed to create a python resv object");\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_RESV, py_resv);\n\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2("%s:failed to set param attribute <%s>",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_RESV);\n\t\t\tgoto event_set_exit;\n\t\t}\n\t\tsnprintf(perf_action, 
sizeof(perf_action), \"%s:%s(%s)\", HOOK_PERF_POPULATE, EVENT_RESV_OBJECT, rqj->rq_jid);\n\t\trc = pbs_python_populate_python_class_from_svrattrl(py_resv,\n\t\t\t\t\t\t\t\t    &rqj->rq_attr, perf_label, perf_action);\n\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s:partially set remaining param['%s'] attributes\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_RESV);\n\t\t\tgoto event_set_exit;\n\t\t}\n\t} else if (hook_event == HOOK_EVENT_MODIFYRESV) {\n\t\tstruct rq_manage *rqr = req_params->rq_manage;\n\n\t\t/* initialize event params to None */\n\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOB,\n\t\t\t\t\t    Py_None);\n\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOB_O,\n\t\t\t\t\t    Py_None);\n\t\t/*\n\t\t * First things first create a Python job object.\n\t\t *  - Borrowed reference\n\t\t *  - Exception is *NOT* set\n\t\t */\n\t\tpy_resv_class = pbs_python_types_table[PP_RESV_IDX].t_class;\n\n\t\tpy_rargs = Py_BuildValue(\"(s)\", rqr->rq_objname); /* NEW ref */\n\n\t\tif (!py_rargs) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"could not build args list for reservation\");\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\tpy_resv = PyObject_Call(py_resv_class, py_rargs, NULL); /*NEW*/\n\n\t\tif (!py_resv) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"failed to create a python reservation object\");\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_RESV, py_resv);\n\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to set param attribute <%s>\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_RESV);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\tsnprintf(perf_action, sizeof(perf_action), \"%s:%s(%s)\", HOOK_PERF_POPULATE, EVENT_JOB_OBJECT, rqr->rq_objname);\n\t\trc = pbs_python_populate_python_class_from_svrattrl(py_resv,\n\t\t\t\t\t\t\t\t    &rqr->rq_attr, perf_label, perf_action);\n\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s: partially set remaining param['%s'] 
attributes\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_RESV);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\tif (IS_PBS_PYTHON_CMD(pbs_python_daemon_name)) {\n\t\t\tif (py_pbs_statobj != NULL) {\n\t\t\t\tPy_XDECREF(py_rargs); /* discard previously used value   */\n\t\t\t\t/* NOTE: *XDECREF() is safe where  */\n\t\t\t\t/* if py_jargs is NULL, then */\n\t\t\t\t/* nothing is released. */\n\t\t\t\t/* Current value of py_jargs is */\n\t\t\t\t/* released at the end of this  */\n\t\t\t\t/* function (at event_set_exit:) */\n\t\t\t\tpy_rargs = Py_BuildValue(\"(sss)\", \"resv\", rqr->rq_objname,\n\t\t\t\t\t\t\t pbs_conf.pbs_server_name); /* NEW ref */\n\t\t\t\tpy_resv_o = PyObject_Call(py_pbs_statobj, py_rargs,\n\t\t\t\t\t\t\t  NULL); /*NEW*/\n\t\t\t\thook_set_mode = C_MODE;\t\t /* ensure still in C mode */\n\t\t\t}\n\t\t} else {\n\t\t\t/* we own this reference */\n\t\t\tpy_resv_o = _pps_helper_get_resv(NULL, rqr->rq_objname, perf_label);\n\t\t}\n\n\t\tif (!py_resv_o || (py_resv_o == Py_None)) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to create original reservation %s's python resv object\",\n\t\t\t\t       PY_TYPE_EVENT, rqr->rq_objname);\n\t\t\trc = -1;\n\t\t\tgoto event_set_exit;\n\t\t}\n\t\t/* handed off to py_event_parm...reference count incremented again */\n\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_RESV_O,\n\t\t\t\t\t  py_resv_o);\n\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to set param attribute <%s>\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_RESV_O);\n\t\t\tgoto event_set_exit;\n\t\t}\n\t\t/* Need to send a Variable_List of the original reservation, so that */\n\t\t/* in a hook script, we'll only allow to modify or add to existing */\n\t\t/* Variable_List */\n\t\tpy_varlist_o = PyObject_GetAttrString(py_resv_o, ATTR_v); /* NEW */\n\t\tif ((py_varlist_o == NULL) || (py_varlist_o == Py_None)) {\n\n\t\t\tpy_varlist = PyDict_New(); /* NEW - empty dict */\n\t\t} else {\n\t\t\t/* important to have a copy, so that changes in py_job's 
*/\n\t\t\t/* Variable_List does not reflect in py_job_o's.\t */\n\t\t\tpy_varlist = PyDict_Copy(py_varlist_o); /* NEW */\n\t\t}\n\n\t\tif (py_varlist == NULL) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\"failed to create a Variable_List dictionary!\");\n\t\t\trc = -1;\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* upon success, py_resv adds a reference count to py_varlist */\n\t\tif (PyObject_SetAttrString(py_resv, ATTR_v, py_varlist) == -1) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"failed to set dictionary for %s\", ATTR_v);\n\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\trc = -1;\n\t\t\tgoto event_set_exit;\n\t\t}\n\t\t/* end of Variable_List setting */\n\n\t} else if (hook_event == HOOK_EVENT_MODIFYJOB) {\n\t\tstruct rq_manage *rqj = req_params->rq_manage;\n\n\t\t/* initialize event params to None */\n\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOB,\n\t\t\t\t\t    Py_None);\n\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOB_O,\n\t\t\t\t\t    Py_None);\n\t\t/*\n\t\t * First things first create a Python job object.\n\t\t *  - Borrowed reference\n\t\t *  - Exception is *NOT* set\n\t\t */\n\t\tpy_job_class = pbs_python_types_table[PP_JOB_IDX].t_class;\n\n\t\tpy_jargs = Py_BuildValue(\"(s)\", rqj->rq_objname); /* NEW ref */\n\n\t\tif (!py_jargs) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"could not build args list for job\");\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\tpy_job = PyObject_Call(py_job_class, py_jargs, NULL); /*NEW*/\n\n\t\tif (!py_job) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"failed to create a python job object\");\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOB, py_job);\n\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to set param attribute <%s>\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOB);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\tsnprintf(perf_action, 
sizeof(perf_action), \"%s:%s(%s)\", HOOK_PERF_POPULATE, EVENT_JOB_OBJECT, rqj->rq_objname);\n\t\trc = pbs_python_populate_python_class_from_svrattrl(py_job,\n\t\t\t\t\t\t\t\t    &rqj->rq_attr, perf_label, perf_action);\n\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s: partially set remaining param['%s'] attributes\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOB);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\tif (IS_PBS_PYTHON_CMD(pbs_python_daemon_name)) {\n\t\t\tif (py_pbs_statobj != NULL) {\n\t\t\t\tPy_XDECREF(py_jargs); /* discard previously used value   */\n\t\t\t\t/* NOTE: *XDECREF() is safe where  */\n\t\t\t\t/* if py_jargs is NULL, then */\n\t\t\t\t/* nothing is released. */\n\t\t\t\t/* Current value of py_jargs is */\n\t\t\t\t/* released at the end of this  */\n\t\t\t\t/* function (at event_set_exit:) */\n\t\t\t\tpy_jargs = Py_BuildValue(\"(sss)\", \"job\", rqj->rq_objname,\n\t\t\t\t\t\t\t pbs_conf.pbs_server_name); /* NEW ref */\n\t\t\t\tpy_job_o = PyObject_Call(py_pbs_statobj, py_jargs,\n\t\t\t\t\t\t\t NULL); /*NEW*/\n\t\t\t\thook_set_mode = C_MODE;\t\t/* ensure still in C mode */\n\t\t\t}\n\t\t} else {\n\t\t\t/* we own this reference */\n\t\t\tpy_job_o = _pps_helper_get_job(NULL, rqj->rq_objname, NULL, perf_label);\n\t\t}\n\n\t\tif (!py_job_o || (py_job_o == Py_None)) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to create original job %s's python job object\",\n\t\t\t\t       PY_TYPE_EVENT, rqj->rq_objname);\n\t\t\trc = -1;\n\t\t\tgoto event_set_exit;\n\t\t}\n\t\t/* handed off to py_event_parm...reference count incremented again */\n\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOB_O,\n\t\t\t\t\t  py_job_o);\n\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to set param attribute <%s>\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOB_O);\n\t\t\tgoto event_set_exit;\n\t\t}\n\t\t/* Need to send a Variable_List of the original job, so that */\n\t\t/* in a hook script, we'll only allow to modify or add to existing */\n\t\t/* Variable_List 
*/\n\t\tpy_varlist_o = PyObject_GetAttrString(py_job_o, ATTR_v); /* NEW */\n\t\tif ((py_varlist_o == NULL) || (py_varlist_o == Py_None)) {\n\n\t\t\tpy_varlist = PyDict_New(); /* NEW - empty dict */\n\t\t} else {\n\t\t\t/* important to have a copy, so that changes in py_job's */\n\t\t\t/* Variable_List does not reflect in py_job_o's.\t */\n\t\t\tpy_varlist = PyDict_Copy(py_varlist_o); /* NEW */\n\t\t}\n\n\t\tif (py_varlist == NULL) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\"failed to create a Variable_List dictionary!\");\n\t\t\trc = -1;\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* upon success, py_job adds a reference count to py_varlist */\n\t\tif (PyObject_SetAttrString(py_job, ATTR_v, py_varlist) == -1) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"failed to set dictionary for %s\", ATTR_v);\n\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\trc = -1;\n\t\t\tgoto event_set_exit;\n\t\t}\n\t\t/* end of Variable_List setting */\n\n\t} else if (hook_event == HOOK_EVENT_MOVEJOB) {\n\t\tstruct rq_move *rqj = req_params->rq_move;\n\n\t\t/* initialize params to None */\n\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_SRC_QUEUE,\n\t\t\t\t\t    Py_None);\n\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOB,\n\t\t\t\t\t    Py_None);\n\n\t\tif (IS_PBS_PYTHON_CMD(pbs_python_daemon_name)) {\n\t\t\tif (py_pbs_statobj != NULL) {\n\t\t\t\tPy_XDECREF(py_jargs); /* discard previously used value */\n\t\t\t\tpy_jargs = Py_BuildValue(\"(sss)\", \"job\", rqj->rq_jid,\n\t\t\t\t\t\t\t pbs_conf.pbs_server_name); /* NEW ref */\n\t\t\t\tpy_job = PyObject_Call(py_pbs_statobj, py_jargs,\n\t\t\t\t\t\t       NULL); /*NEW*/\n\t\t\t\thook_set_mode = C_MODE;\t      /* ensure still in C mode */\n\t\t\t}\n\t\t} else {\n\t\t\tpy_job = _pps_helper_get_job(NULL, rqj->rq_jid, NULL, perf_label);\n\t\t\t/* NEW - we own ref */\n\t\t}\n\n\t\tif (!py_job || (py_job == Py_None)) 
{\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to create job %s's python \"\n\t\t\t\t       \"job object\",\n\t\t\t\t       PY_TYPE_EVENT, rqj->rq_jid);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* py_job handed off to py_event_parm...reference count incremented */\n\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOB, py_job);\n\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to set param attribute <%s>\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOB);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\tif (!PyObject_HasAttrString(py_job, ATTR_queue)) {\n\t\t\tLOG_ERROR_ARG2(\"%s: does not have attribute <%s>\",\n\t\t\t\t       PY_TYPE_JOB, ATTR_queue);\n\t\t\trc = -1;\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* save the current value of job's queue attribute */\n\t\tpy_que = PyObject_GetAttrString(py_job, ATTR_queue); /* NEW */\n\n\t\tif (py_que == NULL || (py_que == Py_None)) {\n\t\t\tLOG_ERROR_ARG2(\"movejob %s has a bad value for attribute <%s>\",\n\t\t\t\t       rqj->rq_jid, ATTR_queue);\n\t\t\trc = -1;\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* handed off to py_event_parm...reference count incremented again */\n\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_SRC_QUEUE,\n\t\t\t\t\t  py_que);\n\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to set param attribute <%s>\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_SRC_QUEUE);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* reset the movejob's queue attribute to the *new* queue to move to */\n\t\trc = pbs_python_object_set_attr_string_value(py_job, ATTR_queue,\n\t\t\t\t\t\t\t     rqj->rq_destin);\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to set attribute <%s>\",\n\t\t\t\t       PY_TYPE_JOB, ATTR_queue);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* py_job is not read-only but only ATTR_queue is modifiable and */\n\t\t/* checked under is_attrib_val_settable() \t\t\t */\n\t\trc = pbs_python_object_set_attr_integral_value(py_job,\n\t\t\t\t\t\t\t       PY_READONLY_FLAG, 
FALSE);\n\t\tif (rc == -1) {\n\n\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t"Failed to set object's readonly flag");\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t} else if (hook_event == HOOK_EVENT_PROVISION) {\n\t\tstruct prov_vnode_info *prov_vnode_info = req_params->rq_prov;\n\n\t\t/* initialize event params to None */\n\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_VNODE,\n\t\t\t\t\t    Py_None);\n\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_AOE,\n\t\t\t\t\t    Py_None);\n\t\tpy_vnode = PyUnicode_FromString(prov_vnode_info->pvnfo_vnode); /* NEW ref */\n\t\t/* set vnode */\n\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_VNODE, py_vnode);\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2("%s:failed to set param attribute <%s>",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_VNODE);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\tpy_aoe = PyUnicode_FromString(prov_vnode_info->pvnfo_aoe_req); /* NEW ref */\n\t\t/* set aoe */\n\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_AOE, py_aoe);\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2("%s:failed to set param attribute <%s>",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_AOE);\n\t\t\tgoto event_set_exit;\n\t\t}\n\t} else if (hook_event == HOOK_EVENT_PERIODIC) {\n\t\tvnlist = (pbs_list_head *) req_params->vns_list;\n\n\t\t/* SET VNODE_LIST param */\n\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_VNODELIST,\n\t\t\t\t\t    Py_None);\n\t\tpy_vnodelist = create_py_vnodelist(vnlist, perf_label, HOOK_PERF_POPULATE_VNODELIST);\n\t\tif (py_vnodelist == NULL) {\n\t\t\tLOG_ERROR_ARG2("%s: failed to create a Python vnodelist object for param['%s']",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_VNODELIST);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* set vnode list: py_vnodelist given to py_event_param so ref count */\n\t\t/* auto incremented */\n\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_VNODELIST,\n\t\t\t\t\t  py_vnodelist);\n\t\tif (rc == -1) 
{\n\t\t\tLOG_ERROR_ARG2(\"%s: partially set remaining param['%s'] attributes\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_VNODELIST);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* SET RESV_LIST param */\n\t\tresvlist = (pbs_list_head *) req_params->resv_list;\n\n\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_RESVLIST,\n\t\t\t\t\t    Py_None);\n\t\tpy_resvlist = create_py_resvlist(resvlist, perf_label, HOOK_PERF_POPULATE_RESVLIST);\n\t\tif (py_resvlist == NULL) {\n\t\t\tLOG_ERROR_ARG2(\"%s: failed to create a Python resvlist object for param['%s']\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_RESVLIST);\n\t\t\tgoto event_set_exit;\n\t\t}\n\t\t/* set resv list: py_resvlist given to py_event_param so ref count */\n\t\t/* auto incremented */\n\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_RESVLIST,\n\t\t\t\t\t  py_resvlist);\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s: partially set remaining param['%s'] attributes\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_RESVLIST);\n\t\t\tgoto event_set_exit;\n\t\t}\n\t} else if (hook_event == HOOK_EVENT_RUNJOB) {\n\t\tstruct rq_runjob *rqj = req_params->rq_run;\n\n\t\t/* initialize event param to None */\n\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOB,\n\t\t\t\t\t    Py_None);\n\t\tif (IS_PBS_PYTHON_CMD(pbs_python_daemon_name)) {\n\t\t\tif (py_pbs_statobj != NULL) {\n\t\t\t\tPy_XDECREF(py_jargs); /* discard previously used value */\n\t\t\t\tpy_jargs = Py_BuildValue(\"(sss)\", \"job\", rqj->rq_jid,\n\t\t\t\t\t\t\t pbs_conf.pbs_server_name); /* NEW ref */\n\t\t\t\tpy_job = PyObject_Call(py_pbs_statobj, py_jargs,\n\t\t\t\t\t\t       NULL); /*NEW*/\n\t\t\t\thook_set_mode = C_MODE;\t      /* ensure still in C mode */\n\t\t\t}\n\t\t} else {\n\t\t\tpy_job = _pps_helper_get_job(NULL, rqj->rq_jid, NULL, perf_label);\n\t\t}\n\t\t/* NEW - we own ref */\n\n\t\tif (!py_job || (py_job == Py_None)) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to get job %s's python \"\n\t\t\t\t       \"job 
object\",\n\t\t\t\t       PY_TYPE_EVENT, rqj->rq_jid);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* py_job given to py_event_param...so ref. count auto incremented */\n\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOB, py_job);\n\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to set param attribute <%s>\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOB);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* py_job is not read-only but only ATTR_h, ATTR_a are modifiable and */\n\t\t/* checked under is_attrib_val_settable(). Resource-type attributes   */\n\t\t/* such as ATTR_l, which is part of py_job, go through a different    */\n\t\t/* processing mechanism.        \t\t\t\t      */\n\t\t/* It is fatal if py_job's readonly flag cannot be set to False,      */\n\t\t/* as that would prevent all job attributes, including ATTR_l, from   */\n\t\t/* being settable.                                                    */\n\t\trc = pbs_python_object_set_attr_integral_value(py_job,\n\t\t\t\t\t\t\t       PY_READONLY_FLAG, FALSE);\n\t\tif (rc == -1) {\n\n\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\"Failed to set object's readonly flag\");\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\tpy_resclist = PyObject_GetAttrString(py_job, ATTR_l); /* NEW */\n\t\tif ((py_resclist != NULL) && (py_resclist != Py_None)) {\n\t\t\t/* Don't mark pbs.event().job.Resource_List[] as readonly */\n\t\t\trc = pbs_python_object_set_attr_integral_value(py_resclist,\n\t\t\t\t\t\t\t\t       PY_READONLY_FLAG, FALSE);\n\t\t\tif (rc == -1) {\n\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\"Failed to set object's readonly flag\");\n\t\t\t\tLOG_ERROR_ARG2(\"%s: warning - failed to set object's '%s' readonly flag\", __func__, \"Resource_List[]\");\n\t\t\t}\n\t\t}\n\n\t\tif (!PyObject_HasAttrString(py_job, ATTR_execvnode)) {\n\t\t\tLOG_ERROR_ARG2(\"%s: does not have attribute <%s>\",\n\t\t\t\t       PY_TYPE_JOB,\n\t\t\t\t       ATTR_execvnode);\n\t\t\trc = -1;\n\t\t\tgoto 
event_set_exit;\n\t\t}\n\n\t\t/* set value of job's exec_vnode attribute if not already set */\n\t\tpy_exec_vnode = PyObject_GetAttrString(py_job, ATTR_execvnode); /* NEW */\n\n\t\tif ((rqj->rq_destin != NULL) && (*rqj->rq_destin != '\\0') &&\n\t\t    ((py_exec_vnode == NULL) || (py_exec_vnode == Py_None))) {\n\t\t\t/* set \"exec_vnode\" attribute if not set */\n\t\t\trc = pbs_python_object_set_attr_string_value(py_job,\n\t\t\t\t\t\t\t\t     ATTR_execvnode, rqj->rq_destin);\n\t\t\tif (rc == -1) {\n\t\t\t\tLOG_ERROR_ARG2(\"%s:failed to set attribute <%s>\",\n\t\t\t\t\t       PY_TYPE_JOB, ATTR_execvnode);\n\t\t\t\tgoto event_set_exit;\n\t\t\t}\n\t\t}\n\t} else if (hook_event == HOOK_EVENT_JOBOBIT) {\n\t\tstruct rq_jobobit *rqj = req_params->rq_obit;\n\t\t/* initialize params to None */\n\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOB,\n\t\t\t\t\t    Py_None);\n\n\t\tif (IS_PBS_PYTHON_CMD(pbs_python_daemon_name)) {\n\t\t\tif (py_pbs_statobj != NULL) {\n\t\t\t\tPy_XDECREF(py_jargs); /* discard previously used value */\n\t\t\t\tpy_jargs = Py_BuildValue(\"(sss)\", \"job\", rqj->rq_pjob->ji_qs.ji_jobid,\n\t\t\t\t\t\t\t pbs_conf.pbs_server_name); /* NEW ref */\n\t\t\t\tpy_job = PyObject_Call(py_pbs_statobj, py_jargs,\n\t\t\t\t\t\t       NULL); /*NEW*/\n\t\t\t\thook_set_mode = C_MODE;\t      /* ensure still in C mode */\n\t\t\t}\n\t\t} else {\n\t\t\tpy_job = _pps_helper_get_job(NULL, rqj->rq_pjob->ji_qs.ji_jobid, NULL, perf_label);\n\t\t\t/* NEW - we own ref */\n\t\t}\n\n\t\tif (!py_job || (py_job == Py_None)) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to create job %s's python \"\n\t\t\t\t       \"job object\",\n\t\t\t\t       PY_TYPE_EVENT, rqj->rq_pjob->ji_qs.ji_jobid);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* py_job handed off to py_event_param...reference count incremented */\n\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOB, py_job);\n\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to set param attribute <%s>\",\n\t\t\t\t       
PY_TYPE_EVENT, PY_EVENT_PARAM_JOB);\n\t\t\tgoto event_set_exit;\n\t\t}\n\t} else if (hook_event == HOOK_EVENT_MANAGEMENT) {\n\t\tPyObject *py_attr = (PyObject *) NULL;\n\t\tstruct rq_management *rqj = req_params->rq_manage;\n\t\tpy_management_class = pbs_python_types_table[PP_MANAGEMENT_IDX].t_class;\n\t\tif (!py_management_class) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"failed to acquire management class\");\n\t\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_MANAGEMENT,\n\t\t\t\t\t\t    Py_None);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\tpy_attr = svrattrl_list_to_pyobject(rqj->rq_manager.rq_cmd, &rqj->rq_manager.rq_attr);\n\t\tif (!py_attr) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"could not build the list of server attributes\");\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\tpy_margs = Py_BuildValue(\"(iisliiisO)\",\n\t\t\t\t\t rqj->rq_manager.rq_cmd,\n\t\t\t\t\t rqj->rq_manager.rq_objtype,\n\t\t\t\t\t rqj->rq_manager.rq_objname,\n\t\t\t\t\t rqj->rq_time,\n\t\t\t\t\t rqj->rq_reply->brp_code,\n\t\t\t\t\t rqj->rq_reply->brp_auxcode,\n\t\t\t\t\t rqj->rq_reply->brp_choice,\n\t\t\t\t\t (rqj->rq_reply->brp_choice == BATCH_REPLY_CHOICE_Text) ? 
rqj->rq_reply->brp_un.brp_txt.brp_str : NULL,\n\t\t\t\t\t py_attr); /* NEW ref */\n\t\tPy_CLEAR(py_attr);\n\n\t\tif (!py_margs) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"could not build args list for management\");\n\t\t\tgoto event_set_exit;\n\t\t}\n\t\tpy_management = PyObject_CallObject(py_management_class, py_margs);\n\n\t\tif (!py_management) {\n\t\t\tpbs_python_write_error_to_log(__func__);\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"failed to create a python management object\");\n\t\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_MANAGEMENT,\n\t\t\t\t\t\t    Py_None);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_MANAGEMENT,\n\t\t\t\t\t  py_management);\n\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to set param attribute <%s>\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_MANAGEMENT);\n\t\t\tgoto event_set_exit;\n\t\t}\n\t} else if (hook_event == HOOK_EVENT_MODIFYVNODE) {\n\t\tstruct rq_modifyvnode *rqmvn = req_params->rq_modifyvnode;\n\t\tstruct pbsnode *vnode_o = rqmvn->rq_vnode_o;\n\t\tstruct pbsnode *vnode = rqmvn->rq_vnode;\n\t\tint tmpv_rc;\n\n\t\t/* initialize event params to None */\n\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_VNODE,\n\t\t\t\t\t    Py_None);\n\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_VNODE_O,\n\t\t\t\t\t    Py_None);\n\n\t\t/* Retrieve the vnode_o data */\n\t\tpy_vnode_o = _pps_helper_get_vnode(vnode_o, NULL, HOOK_PERF_POPULATE_VNODE_O);\n\t\tif (py_vnode_o == NULL) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"failed to create a python vnode_o object\");\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* Set the vnode_o object to readonly to prevent hook writers from modifying values */\n\t\ttmpv_rc = pbs_python_mark_object_readonly(py_vnode_o);\n\t\tif (tmpv_rc == -1) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"Failed to mark python vnode_o object readonly\");\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* Retrieve the 
vnode data */\n\t\tpy_vnode = _pps_helper_get_vnode(vnode, NULL, HOOK_PERF_POPULATE_VNODE);\n\t\tif (py_vnode == NULL) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"failed to create a python vnode object\");\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* Set the vnode object to readonly to prevent hook writers from modifying values */\n\t\ttmpv_rc = pbs_python_mark_object_readonly(py_vnode);\n\t\tif (tmpv_rc == -1) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"Failed to mark python vnode object readonly\");\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* Set the vnode_o event param */\n\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_VNODE_O, py_vnode_o);\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to set param attribute <%s>\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_VNODE_O);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* Set the vnode event param */\n\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_VNODE, py_vnode);\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to set param attribute <%s>\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_VNODE);\n\t\t\tgoto event_set_exit;\n\t\t}\n\t} else if ((hook_event == HOOK_EVENT_RESV_END) ||\n\t\t   (hook_event == HOOK_EVENT_RESV_BEGIN)) {\n\t\tstruct rq_manage *rqj = req_params->rq_manage;\n\n\t\t/* initialize event param to None */\n\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_RESV, Py_None);\n\n\t\tif (IS_PBS_PYTHON_CMD(pbs_python_daemon_name)) {\n\t\t\tif (py_pbs_statobj != NULL) {\n\t\t\t\tPy_XDECREF(py_rargs); /* discard previously used value */\n\t\t\t\tpy_rargs = Py_BuildValue(\"(sss)\", \"resv\", rqj->rq_objname,\n\t\t\t\t\t\t\t pbs_conf.pbs_server_name); /* NEW ref */\n\t\t\t\tpy_resv = PyObject_Call(py_pbs_statobj, py_rargs,\n\t\t\t\t\t\t\tNULL); /*NEW Reservation object*/\n\t\t\t\thook_set_mode = C_MODE;\t       /* ensure still in C mode */\n\t\t\t}\n\t\t} else {\n\t\t\tpy_resv = _pps_helper_get_resv(NULL, rqj->rq_objname, perf_label);\n\t\t}\n\n\t\tif 
(!py_resv || (py_resv == Py_None)) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to create resv %s's python resv object\", PY_TYPE_EVENT, rqj->rq_objname);\n\t\t\tgoto event_set_exit;\n\t\t}\n\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_RESV, py_resv);\n\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to set param attribute <%s>\", PY_TYPE_EVENT, PY_EVENT_PARAM_RESV);\n\t\t\tgoto event_set_exit;\n\t\t}\n\t} else if (hook_event == HOOK_EVENT_RESV_CONFIRM) {\n\t\t/* Confirm uses rq_runjob, not rq_manage and sticks the reservation id in rq_jid */\n\t\tstruct rq_runjob *rqj = req_params->rq_run;\n\n\t\t/* initialize event param to None */\n\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_RESV, Py_None);\n\n\t\tif (IS_PBS_PYTHON_CMD(pbs_python_daemon_name)) {\n\t\t\tif (py_pbs_statobj != NULL) {\n\t\t\t\tPy_XDECREF(py_rargs); /* discard previously used value */\n\t\t\t\tpy_rargs = Py_BuildValue(\"(sss)\", \"resv\", rqj->rq_jid,\n\t\t\t\t\t\t\t pbs_conf.pbs_server_name); /* NEW ref */\n\t\t\t\tpy_resv = PyObject_Call(py_pbs_statobj, py_rargs,\n\t\t\t\t\t\t\tNULL); /*NEW Reservation object*/\n\t\t\t\thook_set_mode = C_MODE;\t       /* ensure still in C mode */\n\t\t\t}\n\t\t} else {\n\t\t\tpy_resv = _pps_helper_get_resv(NULL, rqj->rq_jid, perf_label);\n\t\t}\n\n\t\tif (!py_resv || (py_resv == Py_None)) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to create resv %s's python resv object\", PY_TYPE_EVENT, rqj->rq_jid);\n\t\t\tgoto event_set_exit;\n\t\t}\n\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_RESV, py_resv);\n\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to set param attribute <%s>\", PY_TYPE_EVENT, PY_EVENT_PARAM_RESV);\n\t\t\tgoto event_set_exit;\n\t\t}\n\t} else if ((hook_event == HOOK_EVENT_EXECJOB_BEGIN) ||\n\t\t   (hook_event == HOOK_EVENT_EXECJOB_RESIZE) ||\n\t\t   (hook_event == HOOK_EVENT_EXECJOB_PROLOGUE) ||\n\t\t   (hook_event == HOOK_EVENT_EXECJOB_EPILOGUE) ||\n\t\t   (hook_event == HOOK_EVENT_EXECJOB_END) ||\n\t\t  
 (hook_event == HOOK_EVENT_EXECJOB_ABORT) ||\n\t\t   (hook_event == HOOK_EVENT_EXECJOB_POSTSUSPEND) ||\n\t\t   (hook_event == HOOK_EVENT_EXECJOB_PRERESUME) ||\n\t\t   (hook_event == HOOK_EVENT_EXECJOB_PRETERM)) {\n\t\tstruct rq_queuejob *rqj = req_params->rq_job;\n\n\t\t/* initialize event param to None */\n\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOB,\n\t\t\t\t\t    Py_None);\n\t\t/*\n\t\t * First things first create a Python job object.\n\t\t *  - Borrowed reference\n\t\t *  - Exception is *NOT* set\n\t\t */\n\t\tpy_job_class = pbs_python_types_table[PP_JOB_IDX].t_class;\n\n\t\tpy_jargs = Py_BuildValue(\"(s)\", rqj->rq_jid); /* NEW ref */\n\t\tif (!py_jargs) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"could not build args list for job\");\n\t\t\tgoto event_set_exit;\n\t\t}\n\t\tpy_job = PyObject_Call(py_job_class, py_jargs, NULL); /*NEW*/\n\n\t\tif (!py_job) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"failed to create a python job object\");\n\t\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOB,\n\t\t\t\t\t\t    Py_None);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOB, py_job);\n\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to set param attribute <%s>\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOB);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\tsnprintf((char *) perf_action, sizeof(perf_action), \"%s:%s(%s)\", HOOK_PERF_POPULATE, EVENT_JOB_OBJECT, rqj->rq_jid);\n\t\trc = pbs_python_populate_python_class_from_svrattrl(py_job, &rqj->rq_attr, perf_label, perf_action);\n\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s: partially set remaining param['%s'] attributes\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOB);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\tpy_vnodelist = create_hook_vnode_list_param(py_event_param,\n\t\t\t\t\t\t\t    PY_TYPE_EVENT, PY_EVENT_PARAM_VNODELIST,\n\t\t\t\t\t\t\t    (pbs_list_head *) req_params->vns_list,\n\t\t\t\t\t\t\t    
perf_label, HOOK_PERF_POPULATE_VNODELIST);\n\n\t\tif (py_vnodelist == NULL) {\n\t\t\trc = -1;\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\tif (hook_event == HOOK_EVENT_EXECJOB_PROLOGUE) {\n\n\t\t\tpy_vnodelist_fail = create_hook_vnode_list_param(py_event_param,\n\t\t\t\t\t\t\t\t\t PY_TYPE_EVENT, PY_EVENT_PARAM_VNODELIST_FAIL,\n\t\t\t\t\t\t\t\t\t (pbs_list_head *) req_params->vns_list_fail, perf_label, HOOK_PERF_POPULATE_VNODELIST_FAIL);\n\n\t\t\tif (py_vnodelist_fail == NULL) {\n\t\t\t\trc = -1;\n\t\t\t\tgoto event_set_exit;\n\t\t\t}\n\t\t\tpy_failed_node_list = create_py_strlist_from_svrattrl_names(req_params->failed_mom_list);\n\t\t\tif (py_failed_node_list == NULL) {\n\t\t\t\trc = -1;\n\t\t\t\tgoto event_set_exit;\n\t\t\t}\n\n\t\t\t/* set failed_mom_list: py_failed_node_list given to py_job so ref count auto incremented */\n\t\t\trc = PyObject_SetAttrString(py_job, PY_JOB_FAILED_MOM_LIST, py_failed_node_list);\n\t\t\tif (rc == -1)\n\t\t\t\tgoto event_set_exit;\n\n\t\t\t/* create the succeeded_mom_list string list */\n\t\t\tpy_node_list = create_py_strlist_from_svrattrl_names((pbs_list_head *) req_params->succeeded_mom_list);\n\t\t\tif (py_node_list == NULL) {\n\t\t\t\trc = -1;\n\t\t\t\tgoto event_set_exit;\n\t\t\t}\n\n\t\t\t/* set succeeded_mom_list: py_node_list given to py_job so ref count auto incremented */\n\t\t\trc = PyObject_SetAttrString(py_job, PY_JOB_SUCCEEDED_MOM_LIST, py_node_list);\n\t\t\tif (rc == -1)\n\t\t\t\tgoto event_set_exit;\n\t\t}\n\n\t} else if (hook_event == HOOK_EVENT_EXECJOB_LAUNCH) {\n\t\tstruct rq_queuejob *rqj;\n\t\tchar *progname = NULL;\n\t\tchar **arg_list = NULL;\n\t\tchar *env_str = NULL;\n\n\t\trqj = (struct rq_queuejob *) req_params->rq_job;\n\n\t\t/* SET JOB param */\n\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOB,\n\t\t\t\t\t    Py_None);\n\t\t/*\n\t\t * First things first create a Python job object.\n\t\t *  - Borrowed reference\n\t\t *  - Exception is *NOT* set\n\t\t */\n\t\tpy_job_class = 
pbs_python_types_table[PP_JOB_IDX].t_class;\n\n\t\tpy_jargs = Py_BuildValue(\"(s)\", rqj->rq_jid); /* NEW ref */\n\t\tif (py_jargs == NULL) {\n\t\t\tLOG_ERROR_ARG2(\"%s: could not build job args list for param['%s']\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOB);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\tpy_job = PyObject_Call(py_job_class, py_jargs, NULL); /*NEW*/\n\t\tif (py_job == NULL) {\n\t\t\tLOG_ERROR_ARG2(\"%s: failed to create a python job object for param['%s']\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOB);\n\t\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOB,\n\t\t\t\t\t\t    Py_None);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOB, py_job);\n\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s: failed to set param['%s']\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOB);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\tsnprintf((char *) perf_action, sizeof(perf_action), \"%s:%s(%s)\", HOOK_PERF_POPULATE, EVENT_JOB_OBJECT, rqj->rq_jid);\n\t\trc = pbs_python_populate_python_class_from_svrattrl(py_job, &rqj->rq_attr, perf_label, perf_action);\n\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s: partially set remaining param['%s'] attributes\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOB);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\tpy_vnodelist =\n\t\t\tcreate_hook_vnode_list_param(py_event_param,\n\t\t\t\t\t\t     PY_TYPE_EVENT, PY_EVENT_PARAM_VNODELIST,\n\t\t\t\t\t\t     (pbs_list_head *) req_params->vns_list, perf_label, HOOK_PERF_POPULATE_VNODELIST);\n\n\t\tif (py_vnodelist == NULL) {\n\t\t\trc = -1;\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\tpy_vnodelist_fail =\n\t\t\tcreate_hook_vnode_list_param(py_event_param,\n\t\t\t\t\t\t     PY_TYPE_EVENT, PY_EVENT_PARAM_VNODELIST_FAIL,\n\t\t\t\t\t\t     (pbs_list_head *) req_params->vns_list_fail, perf_label, HOOK_PERF_POPULATE_VNODELIST_FAIL);\n\n\t\tif (py_vnodelist_fail == NULL) {\n\t\t\trc = -1;\n\t\t\tgoto 
event_set_exit;\n\t\t}\n\n\t\t/* SET job's failed_mom_list param *vnlist */\n\t\tpy_failed_node_list = create_py_strlist_from_svrattrl_names(req_params->failed_mom_list);\n\t\tif (py_failed_node_list == NULL) {\n\t\t\trc = -1;\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* set failed_mom_list: py_failed_node_list given to py_job so ref count auto incremented */\n\t\trc = PyObject_SetAttrString(py_job, PY_JOB_FAILED_MOM_LIST, py_failed_node_list);\n\t\tif (rc == -1)\n\t\t\tgoto event_set_exit;\n\n\t\t/* SET job's succeeded_mom_list param *vnlist */\n\t\tpy_node_list = create_py_strlist_from_svrattrl_names((pbs_list_head *) req_params->succeeded_mom_list);\n\t\tif (py_node_list == NULL) {\n\t\t\trc = -1;\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* set succeeded_mom_list: py_node_list given to py_job so ref count auto incremented */\n\t\trc = PyObject_SetAttrString(py_job, PY_JOB_SUCCEEDED_MOM_LIST, py_node_list);\n\t\tif (rc == -1)\n\t\t\tgoto event_set_exit;\n\n\t\t/* SET PROGNAME param */\n\t\tprogname = req_params->progname;\n\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_PROGNAME,\n\t\t\t\t\t    Py_None);\n\n\t\tpy_progname = Py_BuildValue(\"s\", progname); /* NEW ref */\n\t\tif (py_progname == NULL) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to create a Python string object for param['%s']\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_PROGNAME);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* set progname: py_progname given to py_event_param so ref count */\n\t\t/* auto incremented */\n\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_PROGNAME,\n\t\t\t\t\t  py_progname);\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s: failed to set param['%s']\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_PROGNAME);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* SET ARGV param */\n\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_ARGLIST,\n\t\t\t\t\t    Py_None);\n\n\t\targ_list = svrattrl_to_str_array(req_params->argv_list);\n\t\tif (arg_list 
== NULL) {\n\t\t\tLOG_ERROR_ARG2(\"%s: failed to build a string array for setting param['%s']\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_ARGLIST);\n\t\t\tgoto event_set_exit;\n\t\t}\n\t\tpy_arglist = create_py_strlist(arg_list);\n\n\t\tfree_str_array(arg_list);\n\t\targ_list = NULL;\n\n\t\tif (py_arglist == NULL) {\n\t\t\tLOG_ERROR_ARG2(\"%s: failed to create a Python string list for setting param['%s']\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_ARGLIST);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* set arg_list: py_arglist given to py_event_param so ref count */\n\t\t/* auto incremented */\n\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_ARGLIST,\n\t\t\t\t\t  py_arglist);\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s: failed to set param['%s']\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_ARGLIST);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* SET ENV param */\n\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_ENV,\n\t\t\t\t\t    Py_None);\n\t\tenv_str = req_params->env;\n\t\tif (env_str == NULL) {\n\t\t\tLOG_ERROR_ARG2(\"%s: failed to build a string array for setting param['%s']\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_ENV);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\tpy_env_class = pbs_python_types_table[PP_ENV_IDX].t_class;\n\n\t\tpy_eargs = Py_BuildValue(\"(si)\", env_str, 1); /* NEW ref */\n\t\tif (!py_eargs) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"could not build env list for job\");\n\t\t\tgoto event_set_exit;\n\t\t}\n\t\tpy_env = PyObject_Call(py_env_class, py_eargs, NULL); /*NEW*/\n\n\t\tif (!py_env) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"failed to create a python env object\");\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* set env: py_env given to py_event_param so ref count */\n\t\t/* auto incremented */\n\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_ENV,\n\t\t\t\t\t  py_env);\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s: failed to set param['%s']\",\n\t\t\t\t       PY_TYPE_EVENT, 
PY_EVENT_PARAM_ENV);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t} else if ((hook_event == HOOK_EVENT_EXECHOST_PERIODIC) ||\n\t\t   (hook_event == HOOK_EVENT_EXECHOST_STARTUP)) {\n\n\t\t/* SET VNODELIST param */\n\t\tpy_vnodelist = create_hook_vnode_list_param(py_event_param,\n\t\t\t\t\t\t\t    PY_TYPE_EVENT,\n\t\t\t\t\t\t\t    PY_EVENT_PARAM_VNODELIST,\n\t\t\t\t\t\t\t    (pbs_list_head *) req_params->vns_list,\n\t\t\t\t\t\t\t    perf_label, HOOK_PERF_POPULATE_VNODELIST);\n\n\t\tif (py_vnodelist == NULL) {\n\t\t\trc = -1;\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* SET JOB_LIST param */\n\t\tif (hook_event == HOOK_EVENT_EXECHOST_PERIODIC) {\n\t\t\t/* initialize event param to None */\n\t\t\tjoblist = (pbs_list_head *) req_params->jobs_list;\n\n\t\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOBLIST,\n\t\t\t\t\t\t    Py_None);\n\t\t\tpy_joblist = create_py_joblist(joblist, perf_label, HOOK_PERF_POPULATE_JOBLIST);\n\t\t\tif (py_joblist == NULL) {\n\t\t\t\tLOG_ERROR_ARG2(\"%s: failed to create a Python joblist object for param['%s']\",\n\t\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOBLIST);\n\t\t\t\tgoto event_set_exit;\n\t\t\t}\n\t\t\t/* set job list: py_joblist given to py_event_param so ref count */\n\t\t\t/* auto incremented */\n\t\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOBLIST,\n\t\t\t\t\t\t  py_joblist);\n\t\t\tif (rc == -1) {\n\t\t\t\tLOG_ERROR_ARG2(\"%s: partially set remaining param['%s'] attributes\",\n\t\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOBLIST);\n\t\t\t\tgoto event_set_exit;\n\t\t\t}\n\t\t}\n\n\t} else if (hook_event == HOOK_EVENT_EXECJOB_ATTACH) {\n\t\tstruct rq_queuejob *rqj;\n\t\tpid_t pid;\n\n\t\trqj = (struct rq_queuejob *) req_params->rq_job;\n\n\t\t/* SET JOB param */\n\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOB,\n\t\t\t\t\t    Py_None);\n\t\t/*\n\t\t * First things first create a Python job object.\n\t\t *  - Borrowed reference\n\t\t *  - Exception is *NOT* set\n\t\t 
*/\n\t\tpy_job_class = pbs_python_types_table[PP_JOB_IDX].t_class;\n\n\t\tpy_jargs = Py_BuildValue(\"(s)\", rqj->rq_jid); /* NEW ref */\n\t\tif (py_jargs == NULL) {\n\t\t\tLOG_ERROR_ARG2(\"%s: could not build job args list for param['%s']\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOB);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\tpy_job = PyObject_Call(py_job_class, py_jargs, NULL); /*NEW*/\n\t\tif (py_job == NULL) {\n\t\t\tLOG_ERROR_ARG2(\"%s: failed to create a python job object for param['%s']\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOB);\n\t\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOB,\n\t\t\t\t\t\t    Py_None);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_JOB, py_job);\n\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s: failed to set param['%s']\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOB);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\tsnprintf((char *) perf_action, sizeof(perf_action), \"%s:%s(%s)\", HOOK_PERF_POPULATE, EVENT_JOB_OBJECT, rqj->rq_jid);\n\t\trc = pbs_python_populate_python_class_from_svrattrl(py_job, &rqj->rq_attr, perf_label, perf_action);\n\n\t\tif (rc == -1) {\n\t\t\tLOG_ERROR_ARG2(\"%s: partially set remaining param['%s'] attributes\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOB);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* SET PID param */\n\t\tpid = req_params->pid;\n\t\t(void) PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_PID,\n\t\t\t\t\t    Py_None);\n\n\t\tpy_pid = Py_BuildValue(\"i\", (int) pid); /* NEW ref */\n\t\tif (py_pid == NULL) {\n\t\t\tLOG_ERROR_ARG2(\"%s:failed to create a Python int object for param['%s']\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_PID);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\t/* set pid: py_pid given to py_event_param so ref count */\n\t\t/* auto incremented */\n\t\trc = PyDict_SetItemString(py_event_param, PY_EVENT_PARAM_PID,\n\t\t\t\t\t  py_pid);\n\t\tif (rc == -1) 
{\n\t\t\tLOG_ERROR_ARG2(\"%s: failed to set param['%s']\",\n\t\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_PID);\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t\tpy_vnodelist = create_hook_vnode_list_param(py_event_param,\n\t\t\t\t\t\t\t    PY_TYPE_EVENT,\n\t\t\t\t\t\t\t    PY_EVENT_PARAM_VNODELIST,\n\t\t\t\t\t\t\t    (pbs_list_head *) req_params->vns_list,\n\t\t\t\t\t\t\t    perf_label, HOOK_PERF_POPULATE_VNODELIST);\n\n\t\tif (py_vnodelist == NULL) {\n\t\t\trc = -1;\n\t\t\tgoto event_set_exit;\n\t\t}\n\n\t} else {\n\t\tLOG_ERROR_ARG2(\"%s:got unknown hook event %s\",\n\t\t\t       PY_TYPE_EVENT, hook_event_as_string(hook_event));\n\t\tgoto event_set_exit;\n\t}\n\n\trc = 0;\n\nevent_set_exit:\n\n\tpy_hook_pbsevent = py_event;\n\n\tif (py_hook_pbsevent != NULL) {\n\t\tPy_INCREF(py_hook_pbsevent); /* don't lose reference to event object */\n\n\t\t/* only applies to event being accessed, set  in a Python script */\n\t\tif (_pbs_python_event_mark_readonly() == -1) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"Failed to mark event readonly.\");\n\t\t\trc = -1;\n\t\t}\n\t}\n\n\tif (PyErr_Occurred()) {\n\n\t\tif (PyErr_ExceptionMatches(PyExc_KeyboardInterrupt))\n\t\t\trc = -2;\n\t\tpbs_python_write_error_to_log(__func__);\n\t}\n\n\tPy_CLEAR(py_eargs);\n\tPy_CLEAR(py_event);\n\tPy_CLEAR(py_jargs);\n\tPy_CLEAR(py_job);\n\tPy_CLEAR(py_job_o);\n\tPy_CLEAR(py_que);\n\tPy_CLEAR(py_rargs);\n\tPy_CLEAR(py_resv);\n\tPy_CLEAR(py_event_param);\n\tPy_CLEAR(py_varlist);\n\tPy_CLEAR(py_varlist_o);\n\tPy_CLEAR(py_vnodelist);\n\tPy_CLEAR(py_vnodelist_fail);\n\tPy_CLEAR(py_failed_node_list);\n\tPy_CLEAR(py_node_list);\n\tPy_CLEAR(py_resclist);\n\tPy_CLEAR(py_exec_vnode);\n\tPy_CLEAR(py_vnode);\n\tPy_CLEAR(py_vnode_o);\n\tPy_CLEAR(py_aoe);\n\tPy_CLEAR(py_progname);\n\tPy_CLEAR(py_arglist);\n\tPy_CLEAR(py_env);\n\tPy_CLEAR(py_joblist);\n\tPy_CLEAR(py_pid);\n\tPy_CLEAR(py_resvlist);\n\tPy_CLEAR(py_margs);\n\tPy_CLEAR(py_management);\n\treturn (rc);\n}\n\n\n/**\n *\n * @brief\n *\tHelper function to 
populate the svrattrl list 'vnlist' with\n *\tdata taken from the individual vnodes in event parameter\n *\t'vnodelist_name'.\n * @param[in]\tvnodelist_name  - name of vnode_list[] in pbs.event().\n * @param[in,out] vnlist \t- the pbs list to populate.\n *\n * @return int\n * @retval 0\t- success\n * @retval -1\t- if error occurred.\n *\n */\nstatic int\npopulate_svrattrl_from_vnodelist_param(char *vnodelist_name,\n\t\t\t\t       pbs_list_head *vnlist)\n{\n\tPyObject *py_vnlist = NULL;\n\tPyObject *py_attr_keys = NULL;\n\tPyObject *py_vnode = NULL;\n\tint num_attrs;\n\tint i;\n\n\tif ((vnodelist_name == NULL) || (vnlist == NULL)) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"bad input param\");\n\t\treturn -1;\n\t}\n\n\tpy_vnlist = _pbs_python_event_get_param(vnodelist_name);\n\n\tif (py_vnlist == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\"No vnode list parameter found for event!\");\n\t\treturn -1;\n\t}\n\n\tif (!PyDict_Check(py_vnlist)) {\n\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\"vnode list parameter not a dictionary!\");\n\t\treturn -1;\n\t}\n\tpy_attr_keys = PyDict_Keys(py_vnlist); /* NEW ref */\n\n\tif (py_attr_keys == NULL) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"Failed to obtain object's '%s' keys\",\n\t\t\t vnodelist_name);\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\treturn -1;\n\t}\n\n\tif (!PyList_Check(py_attr_keys)) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"object's '%s' keys is not a list!\",\n\t\t\t vnodelist_name);\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\tPy_CLEAR(py_attr_keys);\n\t\treturn -1;\n\t}\n\n\tnum_attrs = PyList_Size(py_attr_keys);\n\tfor (i = 0; i < num_attrs; i++) {\n\t\tchar *key_str;\n\n\t\tkey_str = strdup(pbs_python_list_get_item_string_value(py_attr_keys, i));\n\t\tif ((key_str == NULL) || (key_str[0] == '\\0')) {\n\t\t\tif (key_str != NULL) {\n\t\t\t\tfree(key_str);\n\t\t\t\tkey_str = NULL;\n\t\t\t}\n\t\t\tcontinue;\n\t\t}\n\n\t\tpy_vnode = 
PyDict_GetItemString(py_vnlist, key_str); /* no need to Py_CLEAR() later since this returns a borrowed reference */\n\n\t\tif (py_vnode == NULL) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer) - 1,\n\t\t\t\t \"failed to get attribute '%s' value\",\n\t\t\t\t key_str);\n\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\tPy_CLEAR(py_attr_keys);\n\t\t\tfree(key_str);\n\t\t\tkey_str = NULL;\n\t\t\treturn -1;\n\t\t}\n\n\t\tif (pbs_python_populate_svrattrl_from_python_class(\n\t\t\t    py_vnode, vnlist, key_str, 1) == -1) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"failed to populate svrattrl with key '%s' value\", key_str);\n\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\tPy_CLEAR(py_attr_keys);\n\t\t\tfree(key_str);\n\t\t\treturn -1;\n\t\t}\n\t\tfree(key_str);\n\t}\n\tPy_CLEAR(py_attr_keys);\n\n\treturn (0);\n}\n\n/**\n *\n * @brief\n * \tRecreates 'req_params' (structure of batch requests, ex. rq_queuejob, rq_manage,\n * \trq_move) consulting the parameter values obtained from the current\n * \tPBS Python event object representing 'hook_event'.\n *\n * @param[in]\t\thook_event \t- event in question\n * @param[in,out]\treq_params\t- results parameter\n * @param[in]\t\tperf_label - passed on to hook_perf_stat* call.\n * @param[in]\t\tperf_action - passed on to hook_perf_stat* call.\n * @note\n * \tCare must be taken to free the malloc-ed entries in 'req_params'\n * \tafter use. 
The 'req_params' entries could be partially allocated upon\n *\ta failure from this function.\n *\n * @return int\n * @retval\t0 \tfor success\n * @retval\t-1\totherwise, for failure.\n *\n * @note\n *\t\tThis function calls a single hook_perf_stat_start()\n *\t\tthat has some malloc-ed data that are freed in the\n *\t\thook_perf_stat_stop() call, which is done at the end of\n *\t\tthis function.\n *\t\tEnsure that after the hook_perf_stat_start(), all\n *\t\tprogram execution paths lead to the hook_perf_stat_stop()\n *\t\tcall.\n */\nint\n_pbs_python_event_to_request(unsigned int hook_event, hook_output_param_t *req_params, char *perf_label, char *perf_action)\n{\n\tPyObject *py_job = NULL;\n\tPyObject *py_vnode = NULL;\n\tPyObject *py_vnodelist = NULL;\n\tPyObject *py_joblist = NULL;\n\tPyObject *py_resvlist = NULL;\n\tPyObject *py_job_o = NULL;\n\tPyObject *py_resv = NULL;\n\tPyObject *py_resv_o = NULL;\n\tchar *queue;\n\tPyObject *py_varlist = NULL;\n\tPyObject *py_varlist_o = NULL;\n\tint i, num_attrs;\n\tchar *key_str = NULL;\n\tPyObject *py_attr_keys = NULL;\n\tPyObject *py_progname = NULL;\n\tPyObject *py_arglist = NULL;\n\tPyObject *py_env = NULL;\n\tchar *progname;\n\tchar *env_str;\n\tint deletejob_flag;\n\tint rerunjob_flag;\n\tchar val_str[HOOK_BUF_SIZE];\n\tint rc = -1;\n\n\thook_perf_stat_start(perf_label, perf_action, 0);\n\tswitch (hook_event) {\n\t\tcase HOOK_EVENT_QUEUEJOB:\n\t\tcase HOOK_EVENT_POSTQUEUEJOB:\n\n\t\t\tpy_job = _pbs_python_event_get_param(PY_EVENT_PARAM_JOB);\n\t\t\tif (!py_job) {\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\"No job parameter found for event!\");\n\t\t\t\tgoto event_to_request_exit;\n\t\t\t}\n\n\t\t\tqueue = pbs_python_object_get_attr_string_value(py_job,\n\t\t\t\t\t\t\t\t\tATTR_queue);\n\t\t\tif (queue) {\n\t\t\t\tif (hook_event == HOOK_EVENT_QUEUEJOB)\n\t\t\t\t\tstrcpy(((struct rq_queuejob *) (req_params->rq_job))->rq_destin, queue);\n\t\t\t\telse\n\t\t\t\t\tstrcpy(((struct rq_postqueuejob *) 
(req_params->rq_postqueuejob))->rq_destin, queue);\n\t\t\t}\n\n\t\t\tif (hook_event == HOOK_EVENT_QUEUEJOB) {\n\t\t\t\tif (pbs_python_populate_svrattrl_from_python_class(py_job,\n\t\t\t\t\t\t\t\t\t\t   &((struct rq_queuejob *) (req_params->rq_job))->rq_attr, NULL, 0) == -1) {\n\t\t\t\t\tgoto event_to_request_exit;\n\t\t\t\t}\n\t\t\t\tprint_svrattrl_list(\"pbs_populate_svrattrl_from_python_class==>\", &((struct rq_queuejob *) (req_params->rq_job))->rq_attr);\n\t\t\t} else {\n\t\t\t\tif (pbs_python_populate_svrattrl_from_python_class(py_job,\n\t\t\t\t\t\t\t\t\t\t   &((struct rq_postqueuejob *) (req_params->rq_postqueuejob))->rq_attr, NULL, 0) == -1) {\n\t\t\t\t\tgoto event_to_request_exit;\n\t\t\t\t}\n\t\t\t\tprint_svrattrl_list(\"pbs_populate_svrattrl_from_python_class==>\", &((struct rq_postqueuejob *) (req_params->rq_postqueuejob))->rq_attr);\n\t\t\t}\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECJOB_LAUNCH:\n\t\tcase HOOK_EVENT_EXECJOB_BEGIN:\n\t\tcase HOOK_EVENT_EXECJOB_PROLOGUE:\n\t\tcase HOOK_EVENT_EXECJOB_EPILOGUE:\n\t\tcase HOOK_EVENT_EXECJOB_PRETERM:\n\t\tcase HOOK_EVENT_EXECJOB_END:\n\t\tcase HOOK_EVENT_EXECJOB_ABORT:\n\t\tcase HOOK_EVENT_EXECJOB_POSTSUSPEND:\n\t\tcase HOOK_EVENT_EXECJOB_PRERESUME:\n\n\t\t\tpy_job = _pbs_python_event_get_param(PY_EVENT_PARAM_JOB);\n\t\t\tif (!py_job) {\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\"No job parameter found for event!\");\n\t\t\t\tgoto event_to_request_exit;\n\t\t\t}\n\n\t\t\tif (pbs_python_populate_svrattrl_from_python_class(py_job,\n\t\t\t\t\t\t\t\t\t   &((struct rq_queuejob *) (req_params->rq_job))->rq_attr, NULL, 0) == -1) {\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\"Failed to populate request structure!\");\n\t\t\t\tgoto event_to_request_exit;\n\t\t\t}\n\t\t\tprint_svrattrl_list(\"pbs_populate_svrattrl_from_python_class==>\", &((struct rq_queuejob *) (req_params->rq_job))->rq_attr);\n\n\t\t\tif (hook_event == HOOK_EVENT_EXECJOB_PROLOGUE) {\n\t\t\t\t/* populate vnodelist_fail event param 
*/\n\t\t\t\tif (populate_svrattrl_from_vnodelist_param(PY_EVENT_PARAM_VNODELIST_FAIL, (pbs_list_head *) (req_params->vns_list_fail))) {\n\t\t\t\t\tgoto event_to_request_exit;\n\t\t\t\t}\n\t\t\t} else if (hook_event == HOOK_EVENT_EXECJOB_LAUNCH) {\n\t\t\t\tint ret;\n\n\t\t\t\tpy_progname = _pbs_python_event_get_param(PY_EVENT_PARAM_PROGNAME);\n\t\t\t\tif (py_progname == NULL) {\n\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\t\"No progname parameter found for event!\");\n\t\t\t\t\tgoto event_to_request_exit;\n\t\t\t\t}\n\t\t\t\tprogname = strdup(pbs_python_object_str(py_progname));\n\t\t\t\tif (progname == NULL) {\n\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\t\"Failed to strdup progname parameter!\");\n\t\t\t\t\tgoto event_to_request_exit;\n\t\t\t\t}\n\t\t\t\t*((char **) (req_params->progname)) = progname;\n\n\t\t\t\tpy_arglist = _pbs_python_event_get_param(PY_EVENT_PARAM_ARGLIST);\n\t\t\t\tif (py_arglist == NULL) {\n\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\t\"No argv parameter found for event!\");\n\t\t\t\t\tgoto event_to_request_exit;\n\t\t\t\t}\n\n\t\t\t\tret = py_strlist_to_svrattrl(py_arglist,\n\t\t\t\t\t\t\t     (pbs_list_head *) (req_params->argv_list),\n\t\t\t\t\t\t\t     PY_EVENT_PARAM_ARGLIST);\n\t\t\t\tif (ret == -1) {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"%s: Failed to dump Python string list values into a svrattrl list!\",\n\t\t\t\t\t\t PY_EVENT_PARAM_ARGLIST);\n\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\t\tgoto event_to_request_exit;\n\t\t\t\t}\n\n\t\t\t\tpy_env = _pbs_python_event_get_param(PY_EVENT_PARAM_ENV);\n\t\t\t\tif (py_env == NULL) {\n\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\t\"No env parameter found for event!\");\n\t\t\t\t\tgoto event_to_request_exit;\n\t\t\t\t}\n\n\t\t\t\tenv_str = strdup(pbs_python_object_str(py_env));\n\t\t\t\tif (env_str == NULL) {\n\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\t\"Failed to strdup env 
parameter!\");\n\t\t\t\t\tgoto event_to_request_exit;\n\t\t\t\t}\n\t\t\t\t*((char **) (req_params->env)) = env_str;\n\n\t\t\t\t/* populate vnodelist_fail event param */\n\t\t\t\tif (populate_svrattrl_from_vnodelist_param(PY_EVENT_PARAM_VNODELIST_FAIL, (pbs_list_head *) (req_params->vns_list_fail))) {\n\t\t\t\t\tgoto event_to_request_exit;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/* fall through here */\n\t\tcase HOOK_EVENT_EXECHOST_PERIODIC:\n\t\tcase HOOK_EVENT_EXECHOST_STARTUP:\n\n\t\t\t/* populate vnodelist event param */\n\t\t\tif (populate_svrattrl_from_vnodelist_param(PY_EVENT_PARAM_VNODELIST, (pbs_list_head *) (req_params->vns_list))) {\n\t\t\t\tgoto event_to_request_exit;\n\t\t\t}\n\t\t\tprint_svrattrl_list(\"pbs_populate_svrattrl_from_python_class==>\", (pbs_list_head *) (req_params->vns_list));\n\t\t\tPy_CLEAR(py_attr_keys);\n\n\t\t\tif (hook_event == HOOK_EVENT_EXECHOST_PERIODIC) {\n\n\t\t\t\tpy_joblist = _pbs_python_event_get_param(PY_EVENT_PARAM_JOBLIST);\n\t\t\t\tif (!py_joblist) {\n\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\t\"No job list parameter found for event!\");\n\t\t\t\t\tgoto event_to_request_exit;\n\t\t\t\t}\n\n\t\t\t\tif (!PyDict_Check(py_joblist)) {\n\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\t\"job list parameter not a dictionary!\");\n\t\t\t\t\tgoto event_to_request_exit;\n\t\t\t\t}\n\n\t\t\t\tpy_attr_keys = PyDict_Keys(py_joblist); /* NEW ref */\n\n\t\t\t\tif (py_attr_keys == NULL) {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"Failed to obtain object's '%s' keys\",\n\t\t\t\t\t\t PY_EVENT_PARAM_JOBLIST);\n\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\t\tgoto event_to_request_exit;\n\t\t\t\t}\n\n\t\t\t\tif (!PyList_Check(py_attr_keys)) {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"object's '%s' keys is not a list!\",\n\t\t\t\t\t\t PY_EVENT_PARAM_JOBLIST);\n\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\t\tPy_CLEAR(py_attr_keys);\n\t\t\t\t\tgoto 
event_to_request_exit;\n\t\t\t\t}\n\n\t\t\t\tnum_attrs = PyList_Size(py_attr_keys);\n\n\t\t\t\tfor (i = 0; i < num_attrs; i++) {\n\n\t\t\t\t\tkey_str = strdup(pbs_python_list_get_item_string_value(\n\t\t\t\t\t\tpy_attr_keys, i));\n\n\t\t\t\t\tif ((key_str == NULL) || (key_str[0] == '\\0')) {\n\t\t\t\t\t\tif (key_str != NULL) {\n\t\t\t\t\t\t\tfree(key_str);\n\t\t\t\t\t\t}\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\n\t\t\t\t\tpy_job = PyDict_GetItemString(py_joblist,\n\t\t\t\t\t\t\t\t      key_str); /* borrowed */\n\n\t\t\t\t\tif (py_job == NULL) {\n\t\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer) - 1,\n\t\t\t\t\t\t\t \"failed to get attribute '%s' value\", key_str);\n\t\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\t\t\tPy_CLEAR(py_attr_keys);\n\t\t\t\t\t\tfree(key_str);\n\t\t\t\t\t\tgoto event_to_request_exit;\n\t\t\t\t\t}\n\n\t\t\t\t\tif (pbs_python_populate_svrattrl_from_python_class(py_job,\n\t\t\t\t\t\t\t\t\t\t\t   (pbs_list_head *) (req_params->jobs_list), key_str, 1) == -1) {\n\t\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer) - 1,\n\t\t\t\t\t\t\t \"failed to populate svrattrl with key '%s' value\", key_str);\n\t\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\t\t\tPy_CLEAR(py_attr_keys);\n\t\t\t\t\t\tfree(key_str);\n\t\t\t\t\t\tgoto event_to_request_exit;\n\t\t\t\t\t}\n\n\t\t\t\t\tdeletejob_flag = pbs_python_object_get_attr_integral_value(py_job,\n\t\t\t\t\t\t\t\t\t\t\t\t   PY_DELETEJOB_FLAG);\n\t\t\t\t\tif (deletejob_flag != -1) {\n\t\t\t\t\t\tif (deletejob_flag == 1) {\n\t\t\t\t\t\t\tstrcpy(val_str, \"True\");\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tstrcpy(val_str, \"False\");\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (add_to_svrattrl_list(\n\t\t\t\t\t\t\t    (pbs_list_head *) (req_params->jobs_list),\n\t\t\t\t\t\t\t    PY_DELETEJOB_FLAG, NULL, val_str,\n\t\t\t\t\t\t\t    ATR_VFLAG_HOOK, key_str) == -1) {\n\t\t\t\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"failed to add_to_svrattrl_list(%s.%s,null,%s)\",\n\t\t\t\t\t\t\t\t key_str, 
PY_DELETEJOB_FLAG, val_str);\n\t\t\t\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\t\t\t\tPy_CLEAR(py_attr_keys);\n\t\t\t\t\t\t\tfree(key_str);\n\t\t\t\t\t\t\tgoto event_to_request_exit;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\trerunjob_flag = pbs_python_object_get_attr_integral_value(py_job,\n\t\t\t\t\t\t\t\t\t\t\t\t  PY_RERUNJOB_FLAG);\n\t\t\t\t\tif (rerunjob_flag != -1) {\n\t\t\t\t\t\tif (rerunjob_flag == 1) {\n\t\t\t\t\t\t\tstrcpy(val_str, \"True\");\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tstrcpy(val_str, \"False\");\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (add_to_svrattrl_list(\n\t\t\t\t\t\t\t    (pbs_list_head *) (req_params->jobs_list),\n\t\t\t\t\t\t\t    PY_RERUNJOB_FLAG, NULL,\n\t\t\t\t\t\t\t    val_str, ATR_VFLAG_HOOK, key_str) == -1) {\n\t\t\t\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"failed to add_to_svrattrl_list(%s.%s,null,%s)\",\n\t\t\t\t\t\t\t\t key_str, PY_RERUNJOB_FLAG, val_str);\n\t\t\t\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\t\t\t\tPy_CLEAR(py_attr_keys);\n\t\t\t\t\t\t\tfree(key_str);\n\t\t\t\t\t\t\tgoto event_to_request_exit;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tif (key_str != NULL) {\n\t\t\t\t\t\tfree(key_str);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tPy_CLEAR(py_attr_keys);\n\t\t\t\tprint_svrattrl_list(\"pbs_populate_svrattrl_from_python_class==>\", (pbs_list_head *) (req_params->jobs_list));\n\t\t\t}\n\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_RESVSUB:\n\n\t\t\tpy_resv = _pbs_python_event_get_param(PY_EVENT_PARAM_RESV);\n\t\t\tif (!py_resv) {\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\"No resv parameter found for event!\");\n\t\t\t\tgoto event_to_request_exit;\n\t\t\t}\n\t\t\tif (pbs_python_populate_svrattrl_from_python_class(py_resv,\n\t\t\t\t\t\t\t\t\t   &((struct rq_queuejob *) (req_params->rq_job))->rq_attr, NULL, 0) == -1) {\n\t\t\t\tgoto 
event_to_request_exit;\n\t\t\t}\n\n\t\t\tprint_svrattrl_list(\"pbs_populate_svrattrl_from_python_class==>\", &((struct rq_queuejob *) (req_params->rq_job))->rq_attr);\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_MODIFYRESV:\n\n\t\t\tpy_resv = _pbs_python_event_get_param(PY_EVENT_PARAM_RESV);\n\t\t\tif (!py_resv) {\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\"No resv parameter found for event!\");\n\t\t\t\tgoto event_to_request_exit;\n\t\t\t}\n\n\t\t\tpy_resv_o = _pbs_python_event_get_param(PY_EVENT_PARAM_RESV_O);\n\t\t\tif (!py_resv_o) {\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\"No resv_o parameter found for event!\");\n\t\t\t\tgoto event_to_request_exit;\n\t\t\t}\n\n\t\t\t/* Need to check if ATTR_v (i.e. Variable_list) changed, and if */\n\t\t\t/* so, needs to be sent with the MODIFYRESV request.\t            */\n\n\t\t\tif (PyObject_HasAttrString(py_resv, ATTR_v))\n\t\t\t\tpy_varlist = PyObject_GetAttrString(py_resv, ATTR_v); /* NEW */\n\n\t\t\tif (PyObject_HasAttrString(py_resv_o, ATTR_v))\n\t\t\t\tpy_varlist_o = PyObject_GetAttrString(py_resv_o, ATTR_v);\n\t\t\t/* NEW */\n\n\t\t\tif ((py_varlist != NULL) && (py_varlist_o != NULL) &&\n\t\t\t    (PyObject_RichCompareBool(py_varlist, py_varlist_o, Py_EQ))) {\n\t\t\t\t/* upon success, py_resv decreases ref count to py_varlist */\n\t\t\t\t(void) PyObject_SetAttrString(py_resv, ATTR_v, Py_None);\n\t\t\t}\n\n\t\t\tPy_CLEAR(py_varlist);\n\t\t\tPy_CLEAR(py_varlist_o);\n\n\t\t\tif (pbs_python_populate_svrattrl_from_python_class(py_resv,\n\t\t\t\t\t\t\t\t\t   &((struct rq_manage *) (req_params->rq_manage))->rq_attr, NULL, 0) == -1) {\n\t\t\t\tgoto event_to_request_exit;\n\t\t\t}\n\t\t\tprint_svrattrl_list(\"pbs_populate_svrattrl_from_python_class==>\", &((struct rq_manage *) (req_params->rq_manage))->rq_attr);\n\t\t\tbreak;\n\n\t\tcase HOOK_EVENT_MODIFYJOB:\n\n\t\t\tpy_job = _pbs_python_event_get_param(PY_EVENT_PARAM_JOB);\n\t\t\tif (!py_job) {\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\"No job 
parameter found for event!\");\n\t\t\t\tgoto event_to_request_exit;\n\t\t\t}\n\n\t\t\tpy_job_o = _pbs_python_event_get_param(PY_EVENT_PARAM_JOB_O);\n\t\t\tif (!py_job_o) {\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\"No job_o parameter found for event!\");\n\t\t\t\tgoto event_to_request_exit;\n\t\t\t}\n\n\t\t\t/* Need to check if ATTR_v (i.e. Variable_list) changed, and if */\n\t\t\t/* so, needs to be sent with the MODIFYJOB request.\t            */\n\n\t\t\tif (PyObject_HasAttrString(py_job, ATTR_v))\n\t\t\t\tpy_varlist = PyObject_GetAttrString(py_job, ATTR_v); /* NEW */\n\n\t\t\tif (PyObject_HasAttrString(py_job_o, ATTR_v))\n\t\t\t\tpy_varlist_o = PyObject_GetAttrString(py_job_o, ATTR_v);\n\t\t\t/* NEW */\n\n\t\t\tif ((py_varlist != NULL) && (py_varlist_o != NULL) &&\n\t\t\t    (PyObject_RichCompareBool(py_varlist, py_varlist_o, Py_EQ))) {\n\t\t\t\t/* upon success, py_job decreases ref count to py_varlist */\n\t\t\t\t(void) PyObject_SetAttrString(py_job, ATTR_v, Py_None);\n\t\t\t}\n\n\t\t\tPy_CLEAR(py_varlist);\n\t\t\tPy_CLEAR(py_varlist_o);\n\n\t\t\tif (pbs_python_populate_svrattrl_from_python_class(py_job,\n\t\t\t\t\t\t\t\t\t   &((struct rq_manage *) (req_params->rq_manage))->rq_attr, NULL, 0) == -1) {\n\t\t\t\tgoto event_to_request_exit;\n\t\t\t}\n\t\t\tprint_svrattrl_list(\"pbs_populate_svrattrl_from_python_class==>\", &((struct rq_manage *) (req_params->rq_manage))->rq_attr);\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_MOVEJOB:\n\n\t\t\tpy_job = _pbs_python_event_get_param(PY_EVENT_PARAM_JOB);\n\t\t\tif (!py_job) {\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\"No job parameter found for event!\");\n\t\t\t\tgoto event_to_request_exit;\n\t\t\t}\n\t\t\tqueue = pbs_python_object_get_attr_string_value(py_job,\n\t\t\t\t\t\t\t\t\tATTR_queue);\n\t\t\tif (queue)\n\t\t\t\tstrcpy(((struct rq_move *) (req_params->rq_move))->rq_destin, queue);\n\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_PERIODIC:\n\t\t\tpy_vnodelist = 
_pbs_python_event_get_param(PY_EVENT_PARAM_VNODELIST);\n\t\t\tif (!py_vnodelist) {\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\"No vnode list parameter found for event!\");\n\t\t\t\tgoto event_to_request_exit;\n\t\t\t}\n\n\t\t\tif (!PyDict_Check(py_vnodelist)) {\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\"vnode list parameter not a dictionary!\");\n\t\t\t\tgoto event_to_request_exit;\n\t\t\t}\n\n\t\t\tpy_attr_keys = PyDict_Keys(py_vnodelist); /* NEW ref */\n\n\t\t\tif (py_attr_keys == NULL) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"Failed to obtain object's '%s' keys\",\n\t\t\t\t\t PY_EVENT_PARAM_VNODELIST);\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\tgoto event_to_request_exit;\n\t\t\t}\n\n\t\t\tif (!PyList_Check(py_attr_keys)) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"object's '%s' keys is not a list!\",\n\t\t\t\t\t PY_EVENT_PARAM_VNODELIST);\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\tPy_CLEAR(py_attr_keys);\n\t\t\t\tgoto event_to_request_exit;\n\t\t\t}\n\n\t\t\tnum_attrs = PyList_Size(py_attr_keys);\n\t\t\tfor (i = 0; i < num_attrs; i++) {\n\n\t\t\t\tkey_str = strdup(pbs_python_list_get_item_string_value(\n\t\t\t\t\tpy_attr_keys, i));\n\n\t\t\t\tif ((key_str == NULL) || (key_str[0] == '\\0')) {\n\t\t\t\t\tif (key_str != NULL) {\n\t\t\t\t\t\tfree(key_str);\n\t\t\t\t\t\tkey_str = NULL;\n\t\t\t\t\t}\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\n\t\t\t\tpy_vnode = PyDict_GetItemString(py_vnodelist, key_str); /* borrowed ref */\n\n\t\t\t\tif (py_vnode == NULL) {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"failed to get attribute '%s' value\", key_str);\n\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\t\tPy_CLEAR(py_attr_keys);\n\t\t\t\t\tfree(key_str);\n\t\t\t\t\tkey_str = NULL;\n\t\t\t\t\tgoto event_to_request_exit;\n\t\t\t\t}\n\n\t\t\t\tif (pbs_python_populate_svrattrl_from_python_class(py_vnode,\n\t\t\t\t\t\t\t\t\t\t   (pbs_list_head *) 
(req_params->vns_list), key_str, 1) == -1) {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"failed to populate svrattrl with key '%s' value\", key_str);\n\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\t\tPy_CLEAR(py_attr_keys);\n\t\t\t\t\tfree(key_str);\n\t\t\t\t\tkey_str = NULL;\n\t\t\t\t\tgoto event_to_request_exit;\n\t\t\t\t}\n\t\t\t\tfree(key_str);\n\t\t\t}\n\t\t\tPy_CLEAR(py_attr_keys);\n\n\t\t\tpy_resvlist = _pbs_python_event_get_param(PY_EVENT_PARAM_RESVLIST);\n\t\t\tif (!py_resvlist) {\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\"No reservation list parameter found for event!\");\n\t\t\t\tgoto event_to_request_exit;\n\t\t\t}\n\n\t\t\tif (!PyDict_Check(py_resvlist)) {\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\"reservation list parameter not a dictionary!\");\n\t\t\t\tgoto event_to_request_exit;\n\t\t\t}\n\n\t\t\tpy_attr_keys = PyDict_Keys(py_resvlist); /* NEW ref */\n\n\t\t\tif (py_attr_keys == NULL) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"Failed to obtain object's '%s' keys\",\n\t\t\t\t\t PY_EVENT_PARAM_RESVLIST);\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\tgoto event_to_request_exit;\n\t\t\t}\n\n\t\t\tif (!PyList_Check(py_attr_keys)) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"object's '%s' keys is not a list!\",\n\t\t\t\t\t PY_EVENT_PARAM_RESVLIST);\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\tPy_CLEAR(py_attr_keys);\n\t\t\t\tgoto event_to_request_exit;\n\t\t\t}\n\n\t\t\tnum_attrs = PyList_Size(py_attr_keys);\n\n\t\t\tfor (i = 0; i < num_attrs; i++) {\n\n\t\t\t\tkey_str = strdup(pbs_python_list_get_item_string_value(\n\t\t\t\t\tpy_attr_keys, i));\n\n\t\t\t\tif ((key_str == NULL) || (key_str[0] == '\\0')) {\n\t\t\t\t\tif (key_str != NULL) {\n\t\t\t\t\t\tfree(key_str);\n\t\t\t\t\t}\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\n\t\t\t\tpy_job = PyDict_GetItemString(py_resvlist,\n\t\t\t\t\t\t\t      key_str); /* borrowed 
*/\n\n\t\t\t\tif (py_job == NULL) {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer) - 1,\n\t\t\t\t\t\t \"failed to get attribute '%s' value\", key_str);\n\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\t\tPy_CLEAR(py_attr_keys);\n\t\t\t\t\tfree(key_str);\n\t\t\t\t\tgoto event_to_request_exit;\n\t\t\t\t}\n\n\t\t\t\tif (pbs_python_populate_svrattrl_from_python_class(py_job,\n\t\t\t\t\t\t\t\t\t\t   (pbs_list_head *) (req_params->resv_list), key_str, 1) == -1) {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer) - 1,\n\t\t\t\t\t\t \"failed to populate svrattrl with key '%s' value\", key_str);\n\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\t\tPy_CLEAR(py_attr_keys);\n\t\t\t\t\tfree(key_str);\n\t\t\t\t\tgoto event_to_request_exit;\n\t\t\t\t}\n\n\t\t\t\tfree(key_str);\n\t\t\t}\n\t\t\tPy_CLEAR(py_attr_keys);\n\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\"unexpected hook event type\");\n\t\t\tgoto event_to_request_exit;\n\t}\n\trc = 0;\nevent_to_request_exit:\n\thook_perf_stat_stop(perf_label, perf_action, 0);\n\treturn rc;\n}\n\n/**\n * @brief\n *  \tAllows the current PBS event request to proceed.\n */\nvoid\n_pbs_python_event_accept(void)\n{\n\n\thook_pbsevent_accept = TRUE;\n}\n\n/**\n * @brief\n *  \tReject the current PBS event request.\n */\nvoid\n_pbs_python_event_reject(char *msg)\n{\n\n\thook_pbsevent_accept = FALSE;\n\tmemset(hook_pbsevent_reject_msg, '\\0', HOOK_MSG_SIZE);\n\tif (msg) {\n\t\tsnprintf(hook_pbsevent_reject_msg, HOOK_MSG_SIZE - 1, \"%s\", msg);\n\t}\n}\n\n/**\n * @brief\n * \tReturns the message string supplied in the hook script when it rejected\n * \tan event request.\n */\nchar *\n_pbs_python_event_get_reject_msg(void)\n{\n\tif (hook_pbsevent_reject_msg[0] != '\\0')\n\t\treturn ((char *) hook_pbsevent_reject_msg);\n\treturn NULL;\n}\n\n/**\n * @brief\n * \tReturns the value of the event accept flag (1 for TRUE or 0 for FALSE).\n 
*/\nint\n_pbs_python_event_get_accept_flag(void)\n{\n\treturn (hook_pbsevent_accept);\n}\n\n/**\n * @brief\n * \tSets a global flag that says modifications to the PBS Python\n * \tattributes are allowed.\n */\nvoid\n_pbs_python_event_param_mod_allow(void)\n{\n\n\thook_pbsevent_stop_processing = FALSE;\n}\n\n/**\n * @brief\n * \tSets a global flag that says any more modifications to the PBS Python\n * \tattributes would be disallowed.\n */\nvoid\n_pbs_python_event_param_mod_disallow(void)\n{\n\n\thook_pbsevent_stop_processing = TRUE;\n}\n\n/**\n * @brief\n * \tReturns the value (0 or 1) of the global flag that says whether or not\n * \tmodifications to the PBS Python attributes is allowed.\n */\nint\n_pbs_python_event_param_get_mod_flag(void)\n{\n\treturn (hook_pbsevent_stop_processing);\n}\n\n/**\n * @brief\n * \tSets the value of the attribute 'name' of the current Python Object event\n * \tto a string 'value'. The descriptor for the attribute will take care of\n * \tconverting to an actual type.\n *\n * @param[in] name - attribute name\n * @param[in] value - attr value\n *\n * @param[in]\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t-1\terror\n */\nint\n_pbs_python_event_set_attrval(char *name, char *value)\n{\n\tint rc;\n\n\tif ((name == NULL) || (value == NULL)) {\n\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_ERR, __func__, \"Got a NULL 'name' or 'value'\");\n\t\treturn -1;\n\t}\n\n\tif (py_hook_pbsevent == NULL) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t \"can't set event attribute %s = %s: event is unset\", name, value);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_ERR, __func__, log_buffer);\n\t\treturn -1;\n\t}\n\n\trc = pbs_python_object_set_attr_string_value(py_hook_pbsevent, name, value);\n\n\tif (rc == -1) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t \"failed to set event attribute %s = %s\", name, 
value);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_ERR, __func__, log_buffer);\n\t\treturn -1;\n\t}\n\n\treturn (0);\n}\n/**\n * @brief\n * \tGets the value of the attribute 'name' of the current Python Object event\n * \tas a string.\n *\n * @param[in] name - attr name\n *\n * @return\tchar *\n * @retval\tattr name\tsuccess\n * @retval\tNULL  \t\tif it doesn't find one.\n */\nchar *\n_pbs_python_event_get_attrval(char *name)\n{\n\tPyObject *py_attrval = NULL;\n\tchar *attrval_str = NULL;\n\n\tif (name == NULL) {\n\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_ERR, __func__, \"Got a NULL 'name'\");\n\t\treturn NULL;\n\t}\n\n\tif (py_hook_pbsevent == NULL) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t \"can't get event attribute %s: event is unset\", name);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\n\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_ERR, __func__, log_buffer);\n\t\treturn NULL;\n\t}\n\n\tif (!PyObject_HasAttrString(py_hook_pbsevent, name)) {\n\t\treturn NULL;\n\t}\n\n\tpy_attrval = PyObject_GetAttrString(py_hook_pbsevent, name); /* NEW */\n\n\tif (py_attrval) {\n\t\tPyArg_Parse(py_attrval, \"s\", &attrval_str);\n\t\tPy_DECREF(py_attrval);\n\t}\n\treturn (attrval_str);\n}\n\n/*\n * --------------------- MODULE METHODS ---------------------------------\n */\n/*\n * Create a queue object and stuff the attributes\n */\n/* pbs_v1_module method get_queue */\n\nconst char pbsv1mod_meth_get_queue_doc[] =\n\t\"get_queue(strName)\\n\\\n  where:\\n\\\n\\n\\\n   strName:  name of the queue to retrieve\\n\\\n\\n\\\n  returns:\\n\\\n         instance of _queue type representing queue 'strName'\\n\\\n\";\n\n/**\n * @brief\n *\tCreate a queue object and stuff the attributes.\n *\n * @par\tstrName:  name of the queue to retrieve.\n *\n * @return\tPyObject*\n * @retval\tinstance of _queue type 
representing queue 'strName'\tsuccess\n * @retval\tNULL\t\t\t\t\t\t\terror\n */\nPyObject *\npbsv1mod_meth_get_queue(PyObject *self, PyObject *args, PyObject *kwds)\n{\n\tstatic char *kwlist[] = {\"name\", NULL};\n\n\tchar *name = NULL;\n\tPyObject *py_que = NULL;\n\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t \"s:get_queue\",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &name)) {\n\t\treturn NULL;\n\t}\n\n\thook_set_mode = C_MODE;\n\tpy_que = _pps_helper_get_queue(NULL, name, HOOK_PERF_FUNC);\n\thook_set_mode = PY_MODE;\n\n\tif (py_que != NULL)\n\t\treturn py_que;\n\telse\n\t\tPy_RETURN_NONE;\n}\n\n/*\n * Create a job object and stuff the attributes\n */\n/* pbs_v1_module method get_job */\n\nconst char pbsv1mod_meth_get_job_doc[] =\n\t\"get_job(strName, strQueue)\\n\\\n  where:\\n\\\n\\n\\\n   strName:  name of the job to retrieve\\n\\\n\\n\\\n   strQueue:  name of the queue where job belongs to\\n\\\n\\n\\\n  returns:\\n\\\n         instance of _job type representing job 'strName' in 'strQueue'; or\\n\\\n         None if no job was found or job 'strName' is not part of 'strQueue'\\n\\\n\";\n\n/**\n * @brief\n *\tCreate a job object and stuff the attributes\n *\n * @par Note:\n *\tstrName:  name of the job to retrieve\\n\\\n *\tstrQueue:  name of the queue where job belongs to\\n\\\n *\n * @return\tPyObject*\n * @retval\tinstance of _job type representing job 'strName' in 'strQueue'\t\tsuccess\n * @retval\tNone if no job was found or job 'strName' is not part of 'strQueue'\terror\n */\nPyObject *\npbsv1mod_meth_get_job(PyObject *self, PyObject *args, PyObject *kwds)\n{\n\tstatic char *kwlist[] = {PY_TYPE_JOB, PY_TYPE_QUEUE, NULL};\n\n\tchar *jname = NULL;\n\tchar *qname = NULL;\n\tPyObject *py_job = NULL;\n\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t \"s|s:get_job\",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &jname,\n\t\t\t\t\t &qname)) {\n\t\treturn NULL;\n\t}\n\n\thook_set_mode = C_MODE;\n\tpy_job = _pps_helper_get_job(NULL, jname, qname, 
HOOK_PERF_FUNC);\n\thook_set_mode = PY_MODE;\n\n\tif (py_job != NULL)\n\t\treturn py_job;\n\telse\n\t\tPy_RETURN_NONE;\n}\n\n/*\n * Create a resv object and stuff the attributes\n */\n/* pbs_v1_module method get_resv */\n\nconst char pbsv1mod_meth_get_resv_doc[] =\n\t\"get_resv(strName)\\n\\\n  where:\\n\\\n\\n\\\n   strName:  name of the resv to retrieve\\n\\\n\\n\\\n  returns:\\n\\\n         instance of _resv type representing resv 'strName'; or\\n\\\n         None if no resv was found.\\n\\\n\";\n\n/**\n * @brief\n *\tCreate a resv object and stuff the attributes\n *\n * @par\tNote:\n *\tstrName:  name of the resv to retrieve.\n *\n * @return\tPyObject *\n * @retval\tinstance of _resv type representing resv 'strName'\tsuccess\n * @retval\tNone\t\t\t\t\t\t\tif no resv was found.\n *\n */\nPyObject *\npbsv1mod_meth_get_resv(PyObject *self, PyObject *args, PyObject *kwds)\n{\n\tstatic char *kwlist[] = {PY_TYPE_RESV, NULL};\n\n\tchar *rname = NULL;\n\tPyObject *py_resv = NULL;\n\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t \"s:get_resv\",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &rname)) {\n\t\treturn NULL;\n\t}\n\n\thook_set_mode = C_MODE;\n\tpy_resv = _pps_helper_get_resv(NULL, rname, HOOK_PERF_FUNC);\n\thook_set_mode = PY_MODE;\n\n\tif (py_resv != NULL)\n\t\treturn py_resv;\n\telse\n\t\tPy_RETURN_NONE;\n}\n\n/*\n * Create a vnode object and stuff the attributes\n */\n/* pbs_v1_module method get_vnode */\n\nconst char pbsv1mod_meth_get_vnode_doc[] =\n\t\"get_vnode(strVname)\\n\\\n  where:\\n\\\n\\n\\\n   strVname:  name of the vnode to retrieve\\n\\\n\\n\\\n  returns:\\n\\\n         instance of _vnode type representing vnode 'strVname'; or\\n\\\n         None if no vnode was found.\\n\\\n\";\n/**\n * @brief\n *\tThis is the C->Python wrapper program to _pps_helper_get_vnode()\n *\tthat is callable in Python script.\n *\n * @param[in]\targs[1]\t- the vnode name passed by the Python function invoking\n *\t\t\tthis function.\n *\n * @return\tPyObject *\t- the 
Python vnode object corresponding\n *\t\t\t\tto args[1].\n *\n */\nPyObject *\npbsv1mod_meth_get_vnode(PyObject *self, PyObject *args, PyObject *kwds)\n{\n\tstatic char *kwlist[] = {PY_TYPE_VNODE, NULL};\n\n\tchar *vname = NULL;\n\tPyObject *py_vnode = NULL;\n\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t \"s:get_vnode\",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &vname)) {\n\t\treturn NULL;\n\t}\n\n\thook_set_mode = C_MODE;\n\tpy_vnode = _pps_helper_get_vnode(NULL, vname, HOOK_PERF_FUNC);\n\thook_set_mode = PY_MODE;\n\n\tif (py_vnode != NULL)\n\t\treturn py_vnode;\n\telse\n\t\tPy_RETURN_NONE;\n}\n\n/*\n * Create a server object and stuff the attributes\n */\n/* pbs_v1_module method server */\n\nconst char pbsv1mod_meth_server_doc[] =\n\t\"server([strName])\\n\\\n      [strName] is an optional argument referring the server host name to\\n\\\n             query. Use of this argument is currently not implemented.\\n\\\n    returns:\\n\\\n         instance of _server type representing the local server if\\n\\\n\t 'strName' is not given.\\n\\\n Methods:\\n\\\n   Obtain information about the local server:\\n\\\n      s = pbs.server()\\n\\\n      s.pbs_version\t -> returns the PBS version\\n\\\n      s.job(22.fest)\t -> returns a job in server\\n\\\n\\n\\\n   Obtain information about a queue in the local server:\\n\\\n      q = s.queue(workq)\\n\\\n      q.total_jobs\t -> returns the # of jobs on workq\\n\\\n      q.job(22.fest)\t -> returns a job in the queue\\n\\\n\";\n\n/**\n * @brief\n *\tCreate a server object and stuff the attributes.\n *\n * @par\tNote:\n *\t[strName] is an optional argument referring the server host name to\n *\tquery. 
Use of this argument is currently not implemented.\n *\n * @return\tPyObject *\n * @retval\tinstance of _server type representing the local server if strName is not given\n *\n */\n\nPyObject *\npbsv1mod_meth_server(void)\n{\n\tPyObject *py_svr = NULL;\n\thook_set_mode = C_MODE;\n\tpy_svr = _pps_helper_get_server(HOOK_PERF_FUNC);\n\thook_set_mode = PY_MODE;\n\treturn (py_svr);\n}\n\nconst char pbsv1mod_meth_in_python_mode_doc[] =\n\t\"in_python_mode()\\n\\\n\\n\\\n  returns:\\n\\\n         True if hook_set_mode is PY_MODE; False, otherwise.\\n\\\n  \t This is an internal function.\\n\\\n\";\n\n/**\n * @brief\n *\tCheck whether hook_set_mode is PY_MODE.\n *\n * @return\tPyObject *\n * @retval\tPy_True\tif hook_set_mode is PY_MODE\n * @retval\tPy_False\totherwise\n *\n * @par\tThis is an internal function\n */\nPyObject *\npbsv1mod_meth_in_python_mode(void)\n{\n\tPyObject *ret;\n\tret = (hook_set_mode == PY_MODE) ? Py_True : Py_False;\n\tPy_INCREF(ret);\n\treturn (ret);\n}\n\nconst char pbsv1mod_meth_in_site_hook_doc[] =\n\t\"in_site_hook()\\n\\\n\\n\\\n  returns:\\n\\\n         True if executing under a HOOK_SITE hook; False, otherwise.\\n\\\n\t This is an internal function.\\n\\\n\";\n\n/**\n * @brief\n *\tCheck whether we are executing under a HOOK_SITE hook.\n *\n * @return\tPyObject *\n * @retval\tPy_True\tif executing under a HOOK_SITE hook\n * @retval\tPy_False\totherwise\n */\nPyObject *\npbsv1mod_meth_in_site_hook(void)\n{\n\tPyObject *ret;\n\tchar *hook_type;\n\n\thook_type = _pbs_python_event_get_attrval(PY_EVENT_HOOK_TYPE);\n\n\tif ((hook_type != NULL) && (strcmp(hook_type, HOOKSTR_SITE) == 0))\n\t\tret = Py_True;\n\telse\n\t\tret = Py_False;\n\n\tPy_INCREF(ret);\n\treturn (ret);\n}\n\nconst char pbsv1mod_meth_is_attrib_val_settable_doc[] =\n\t\"is_attrib_val_settable(self,owner,value)\\n\\\n  where:\\n\\\n\\n\\\n   self    :  obj attribute name (e.g. Resource_List or Priority)\\n\\\n   owner   :  obj resource name of self, if it is a resource (e.g. 
ncpus)\\n\\\n   value   :  obj value being set to\\n\\\n\\n\\\n  returns:\\n\\\n         True, False, or an exception\\n\\\n\";\n\n/**\n *\n * @brief\n *\tReturns Python True if some object attribute name/resource name\n *\tis allowed to set its value.\n *\n * @param[in]\tself\t- owning object\n * @param[in]\targs[1]\t- object attribute name (ex. Resource_List/Priority)\n * \t\targs[2]\t- object resource name (ex. ncpus)\n * \t\targs[3]\t- object value being set to\n * @param[in]\tkwds\t- keywords to objects mappings\n *\n * @return\tPyObject *\n * @retval\tPython True\t- if settable\n * @retval\tPython False\t- if not settable\n * @retval\tNULL\t\t- error\n *\n */\nPyObject *\npbsv1mod_meth_is_attrib_val_settable(PyObject *self, PyObject *args, PyObject *kwds)\n{\n\tPyObject *ret;\n\tstatic char *kwlist[] = {\"self\", \"owner\", \"value\", NULL};\n\n\tPyObject *py_self = NULL;\n\tPyObject *py_owner = NULL;\n\tPyObject *py_value = NULL;\n\n\tchar *name = NULL;     /* malloc   */\n\tchar *resource = NULL; /* malloced */\n\tchar *pstr;\n\n\tPyObject *py_value_type = NULL;\n\tPyObject *py_value_type_0 = NULL;\n\tPyObject *py_value_type_0_derived = NULL;\n\tint readonly = 0;\n\tint is_resource = 0;\n\tunsigned int event;\n\tint attr_idx = -1;\n\tresource_def *rscdef = NULL;\n\n\tint rc = 1;\n\n\tif (hook_set_mode == C_MODE) { /* can set anything */\n\t\tret = Py_True;\n\t\tPy_INCREF(ret);\n\t\treturn (ret);\n\t}\n\n\tmemset((char *) log_buffer, '\\0', LOG_BUF_SIZE);\n\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t \"OOO:is_attrib_val_settable\",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &py_self,\n\t\t\t\t\t &py_owner,\n\t\t\t\t\t &py_value)) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t \"in func %s, PyArg_ParseTupleAndKeywords failed!\",\n\t\t\t __func__);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tPyErr_SetString(PyExc_SyntaxError, log_buffer);\n\t\tgoto IAVS_ERROR_EXIT;\n\t}\n\n\t/* at this point, we can have an unset attribute (without 
_is_resource) */\n\t/* or a set attribute; or an unset resource (without is_resource with */\n\t/* 'name' as attribute), or a set resource */\n\n\treadonly = 0;\n\tif (PyObject_HasAttrString(py_owner, \"_readonly\")) {\n\t\treadonly = pbs_python_object_get_attr_integral_value(py_owner,\n\t\t\t\t\t\t\t\t     PY_READONLY_FLAG);\n\t}\n\n\tis_resource = 0;\n\tif (PyObject_HasAttrString(py_self, PY_DESCRIPTOR_IS_RESOURCE)) {\n\t\tis_resource = pbs_python_object_get_attr_integral_value(py_self,\n\t\t\t\t\t\t\t\t\tPY_DESCRIPTOR_IS_RESOURCE);\n\t}\n\n\tif (is_resource == 1) {\n\n\t\tif (PyObject_HasAttrString(py_owner, PY_RESOURCE_NAME)) {\n\t\t\t/* e.g. Resource_List */\n\t\t\tpstr = pbs_python_object_get_attr_string_value(py_owner,\n\t\t\t\t\t\t\t\t       PY_RESOURCE_NAME);\n\t\t\tif (pstr) {\n\t\t\t\tif ((name = strdup(pstr)) == NULL) {\n\t\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"in func %s, Unable to allocate Memory!\\n\", __func__);\n\t\t\t\t\tPyErr_SetString(PyExc_MemoryError, log_buffer);\n\t\t\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif (PyObject_HasAttrString(py_self, PY_DESCRIPTOR_NAME)) {\n\t\t\t/* e.g. ncpus */\n\t\t\tpstr = pbs_python_object_get_attr_string_value(py_self,\n\t\t\t\t\t\t\t\t       PY_DESCRIPTOR_NAME);\n\t\t\tif (pstr) {\n\t\t\t\tif ((resource = strdup(pstr)) == NULL) {\n\t\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"in func %s, Unable to allocate Memory!\\n\", __func__);\n\t\t\t\t\tPyErr_SetString(PyExc_MemoryError, log_buffer);\n\t\t\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t} else { /* 0 or -1 (not yet set, assume attribute) */\n\t\tif (PyObject_HasAttrString(py_self, PY_DESCRIPTOR_NAME)) {\n\t\t\t/* e.g. 
ncpus */\n\t\t\tpstr = pbs_python_object_get_attr_string_value(py_self,\n\t\t\t\t\t\t\t\t       PY_DESCRIPTOR_NAME);\n\t\t\tif (pstr) {\n\t\t\t\tif ((name = strdup(pstr)) == NULL) {\n\t\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"in func %s, Unable to allocate Memory!\\n\", __func__);\n\t\t\t\t\tPyErr_SetString(PyExc_MemoryError, log_buffer);\n\t\t\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif ((resource = strdup(\"\")) == NULL) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"in func %s, Unable to allocate Memory!\\n\", __func__);\n\t\t\tPyErr_SetString(PyExc_MemoryError, log_buffer);\n\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t}\n\t}\n\n\tif (!name || !resource) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t \"name and/or resource is NULL: name=%s resource=%s\",\n\t\t\t (name ? name : \"null\"), (resource ? resource : \"null\"));\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tPyErr_SetString(PyExc_AssertionError, log_buffer);\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\tgoto IAVS_ERROR_EXIT;\n\t}\n\n\t/* do some sanity checking */\n\n\tif (name[0] == '\\0') {\n\t\tPyErr_SetString(PyExc_AssertionError, \"No attribute name found\");\n\t\tgoto IAVS_ERROR_EXIT;\n\t}\n\n\t/* special case: bypass a pbs_resource with default value */\n\tif ((is_resource == 1) && (strcmp(name, PY_RESOURCE_GENERIC_VALUE) == 0) && (strcmp(resource, PY_RESOURCE_NAME) == 0)) {\n\t\trc = 0; /* return True */\n\t\tgoto IAVS_OK_EXIT;\n\t}\n\n\tif (hook_pbsevent_stop_processing) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t \"Not permitted to modify attributes (name=%s,res=%s): an event.accept() or event.reject() already called.\", name, resource);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tPyErr_SetString(PyExc_AssertionError, log_buffer);\n\t\t/* throw an exception */\n\t\tgoto IAVS_ERROR_EXIT;\n\t}\n\n\tif (readonly) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t \"attribute '%s' is part of a readonly object\", 
name);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tPyErr_SetString(\n\t\t\tpbs_python_types_table[PP_BADATTR_VALUE_ERR_IDX].t_class,\n\t\t\tlog_buffer);\n\t\t/* throw an exception */\n\t\tgoto IAVS_ERROR_EXIT;\n\t}\n\tif (!py_hook_pbsevent) {\n\t\tPyErr_SetString(PyExc_AssertionError, \"Event not found\");\n\t\tgoto IAVS_ERROR_EXIT;\n\t}\n\t/* Variable_List can not be set in a python script, only individual */\n\t/* environment=variable settings can via dictionary setitem. */\n\tif (strcmp(name, ATTR_v) == 0) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t \"attribute '%s' cannot be directly set.\", name);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tPyErr_SetString(\n\t\t\tpbs_python_types_table[PP_BADATTR_VALUE_ERR_IDX].t_class,\n\t\t\tlog_buffer);\n\t\tgoto IAVS_ERROR_EXIT;\n\t}\n\n\tevent = pbs_python_object_get_attr_integral_value(py_hook_pbsevent, \"type\");\n\tswitch (event) {\n\t\tcase HOOK_EVENT_QUEUEJOB:\n\t\tcase HOOK_EVENT_POSTQUEUEJOB:\n\t\tcase HOOK_EVENT_MODIFYJOB:\n\t\t\tif (!PyObject_IsInstance(py_owner,\n\t\t\t\t\t\t pbs_python_types_table[PP_JOB_IDX].t_class) &&\n\t\t\t    !PyObject_IsInstance(py_owner,\n\t\t\t\t\t\t pbs_python_types_table[PP_RESC_IDX].t_class)) {\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t\t \"Can only set job,resource attributes under %s event.\", ((event == HOOK_EVENT_QUEUEJOB) ? \"queuejob\" : \"modifyjob\"));\n\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\tPyErr_SetString(PyExc_AssertionError, log_buffer);\n\t\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t\t}\n\n\t\t\tif (in_string_list(name, ',', PY_PYTHON_DEFINED_ATTRIBUTES)) {\n\t\t\t\tif ((strcmp(name, PY_RESOURCE_NAME) == 0) || (strcmp(name, PY_RESOURCE_HAS_VALUE) == 0)) {\n\t\t\t\t\t/* matched a special, internal-only attribute */\n\t\t\t\t\t/* holding the resc name, has_value (e.g. 
\"Resource_List\") */\n\t\t\t\t\tgoto IAVS_OK_EXIT;\n\t\t\t\t}\n\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"attribute '%s' is readonly\", name);\n\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\tPyErr_SetString(pbs_python_types_table[PP_BADATTR_VALUE_ERR_IDX].t_class, log_buffer);\n\t\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t\t}\n\n\t\t\tif ((event == HOOK_EVENT_QUEUEJOB) &&\n\t\t\t    strcmp(name, ATTR_queue) == 0) {\n\t\t\t\tbreak; /* ok to modify queue under qsub */\n\t\t\t}\n\n\t\t\tattr_idx = find_attr(job_attr_idx, job_attr_def, name);\n\t\t\tif (attr_idx == -1) {\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"job attribute '%s' not found\", name);\n\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\tPyErr_SetString(PyExc_LookupError, log_buffer);\n\t\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t\t}\n\n\t\t\tif ((job_attr_def[attr_idx].at_flags & ATR_DFLAG_HOOK_SET) == 0) {\n\t\t\t\t/* ATTR_J, ATTR_cred override any read-only permission seen */\n\t\t\t\tif ((strcmp(name, ATTR_J) != 0) &&\n\t\t\t\t    (strcmp(name, ATTR_cred) != 0)) {\n\t\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"job attribute '%s' is readonly\", name);\n\t\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\t\tPyErr_SetString(pbs_python_types_table[PP_BADATTR_VALUE_ERR_IDX].t_class, log_buffer);\n\t\t\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (resource && (resource[0] != '\\0')) {\n\t\t\t\trscdef = find_resc_def(svr_resc_def, resource);\n\t\t\t\tif (!rscdef) {\n\t\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"resource attribute '%s' not found\", resource);\n\t\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\t\tPyErr_SetString(PyExc_LookupError, log_buffer);\n\t\t\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t\t\t}\n\t\t\t\tif ((rscdef->rs_flags & ATR_DFLAG_HOOK_SET) == 0) {\n\t\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"resource attribute '%s' is readonly\", resource);\n\t\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = 
'\\0';\n\t\t\t\t\tPyErr_SetString(pbs_python_types_table[PP_BADATTR_VALUE_ERR_IDX].t_class, log_buffer);\n\t\t\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t\t\t}\n\t\t\t} else if (ATTR_IS_RESC(&job_attr_def[attr_idx])) {\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"can't set the head resource '%s' directly \", name);\n\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\tPyErr_SetString(PyExc_AssertionError, log_buffer);\n\t\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t\t}\n\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECJOB_BEGIN:\n\t\tcase HOOK_EVENT_EXECJOB_PROLOGUE:\n\t\tcase HOOK_EVENT_EXECJOB_EPILOGUE:\n\t\tcase HOOK_EVENT_EXECJOB_END:\n\t\tcase HOOK_EVENT_EXECJOB_ABORT:\n\t\tcase HOOK_EVENT_EXECJOB_POSTSUSPEND:\n\t\tcase HOOK_EVENT_EXECJOB_PRERESUME:\n\t\tcase HOOK_EVENT_EXECJOB_PRETERM:\n\t\t\tif (!PyObject_IsInstance(py_owner,\n\t\t\t\t\t\t pbs_python_types_table[PP_JOB_IDX].t_class) &&\n\t\t\t    !PyObject_IsInstance(py_owner,\n\t\t\t\t\t\t pbs_python_types_table[PP_RESC_IDX].t_class) &&\n\t\t\t    !PyObject_IsInstance(py_owner,\n\t\t\t\t\t\t pbs_python_types_table[PP_VNODE_IDX].t_class)) {\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"Can only set job,resource,vnode attributes under %s event.\", \"mom hook\");\n\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\tPyErr_SetString(PyExc_AssertionError, log_buffer);\n\t\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase HOOK_EVENT_EXECJOB_LAUNCH:\n\t\t\tif (!PyObject_IsInstance(py_owner,\n\t\t\t\t\t\t pbs_python_types_table[PP_EVENT_IDX].t_class) &&\n\t\t\t    !PyObject_IsInstance(py_owner,\n\t\t\t\t\t\t pbs_python_types_table[PP_JOB_IDX].t_class) &&\n\t\t\t    !PyObject_IsInstance(py_owner,\n\t\t\t\t\t\t pbs_python_types_table[PP_RESC_IDX].t_class) &&\n\t\t\t    !PyObject_IsInstance(py_owner,\n\t\t\t\t\t\t pbs_python_types_table[PP_VNODE_IDX].t_class)) {\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE,\n\t\t\t\t\t \"Can only set progname, argv, env event parameters as well as job, resource, vnode under %s hook.\", 
HOOKSTR_EXECJOB_LAUNCH);\n\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\tPyErr_SetString(PyExc_AssertionError, log_buffer);\n\t\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase HOOK_EVENT_EXECJOB_ATTACH:\n\t\t\tPyErr_SetString(PyExc_AssertionError, \"nothing is settable inside an execjob_attach hook!\");\n\t\t\tgoto IAVS_ERROR_EXIT;\n\n\t\tcase HOOK_EVENT_EXECJOB_RESIZE:\n\t\t\tPyErr_SetString(PyExc_AssertionError, \"nothing is settable inside an execjob_resize hook!\");\n\t\t\tgoto IAVS_ERROR_EXIT;\n\n\t\tcase HOOK_EVENT_EXECHOST_PERIODIC:\n\t\t\tif (!PyObject_IsInstance(py_owner,\n\t\t\t\t\t\t pbs_python_types_table[PP_VNODE_IDX].t_class) &&\n\t\t\t    !PyObject_IsInstance(py_owner,\n\t\t\t\t\t\t pbs_python_types_table[PP_JOB_IDX].t_class) &&\n\t\t\t    !PyObject_IsInstance(py_owner,\n\t\t\t\t\t\t pbs_python_types_table[PP_RESC_IDX].t_class)) {\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t\t \"Can only set node,resource,job attributes under %s event.\", (event == HOOK_EVENT_EXECHOST_PERIODIC) ? HOOKSTR_EXECHOST_PERIODIC : HOOKSTR_EXECHOST_STARTUP);\n\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\tPyErr_SetString(PyExc_AssertionError, log_buffer);\n\t\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t\t}\n\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECHOST_STARTUP:\n\t\t\tif (!PyObject_IsInstance(py_owner,\n\t\t\t\t\t\t pbs_python_types_table[PP_VNODE_IDX].t_class) &&\n\t\t\t    !PyObject_IsInstance(py_owner,\n\t\t\t\t\t\t pbs_python_types_table[PP_RESC_IDX].t_class)) {\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t\t \"Can only set node,resource attributes under %s event.\", (event == HOOK_EVENT_EXECHOST_PERIODIC) ? 
HOOKSTR_EXECHOST_PERIODIC : HOOKSTR_EXECHOST_STARTUP);\n\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\tPyErr_SetString(PyExc_AssertionError, log_buffer);\n\t\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t\t}\n\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_RESVSUB:\n\t\tcase HOOK_EVENT_MODIFYRESV:\n\t\t\tif (!PyObject_IsInstance(py_owner,\n\t\t\t\t\t\t pbs_python_types_table[PP_RESV_IDX].t_class) &&\n\t\t\t    !PyObject_IsInstance(py_owner,\n\t\t\t\t\t\t pbs_python_types_table[PP_RESC_IDX].t_class)) {\n\t\t\t\tPyErr_SetString(PyExc_AssertionError, \"Can only set reservation,resource attributes under resvsub event.\");\n\t\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t\t}\n\t\t\tif (in_string_list(name, ',', PY_PYTHON_DEFINED_ATTRIBUTES)) {\n\t\t\t\tif ((strcmp(name, PY_RESOURCE_NAME) == 0) || (strcmp(name, PY_RESOURCE_HAS_VALUE) == 0)) {\n\t\t\t\t\t/* matched a special, internal-only attribute */\n\t\t\t\t\t/* holding the resc name, has_value (e.g. \"Resource_List\") */\n\t\t\t\t\tgoto IAVS_OK_EXIT;\n\t\t\t\t}\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"attribute '%s' is readonly\", name);\n\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\tPyErr_SetString(pbs_python_types_table[PP_BADATTR_VALUE_ERR_IDX].t_class, log_buffer);\n\t\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t\t}\n\t\t\tattr_idx = find_attr(resv_attr_idx, resv_attr_def, name);\n\t\t\tif (attr_idx == -1) {\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"resv attribute '%s' not found\", name);\n\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\tPyErr_SetString(PyExc_LookupError, log_buffer);\n\t\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t\t}\n\n\t\t\tif ((resv_attr_def[attr_idx].at_flags & ATR_DFLAG_HOOK_SET) == 0) {\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"resv attribute '%s' is readonly\", name);\n\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\tPyErr_SetString(pbs_python_types_table[PP_BADATTR_VALUE_ERR_IDX].t_class, log_buffer);\n\t\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t\t}\n\t\t\tif (resource && (resource[0] != '\\0')) {\n\t\t\t\trscdef = 
find_resc_def(svr_resc_def, resource);\n\n\t\t\t\tif (!rscdef) {\n\t\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"resv resource attribute '%s' not found\", resource);\n\t\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\t\tPyErr_SetString(PyExc_LookupError, log_buffer);\n\t\t\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t\t\t}\n\n\t\t\t\tif ((rscdef->rs_flags & ATR_DFLAG_HOOK_SET) == 0) {\n\t\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"resv resource attribute '%s' is readonly\", resource);\n\t\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\t\tPyErr_SetString(pbs_python_types_table[PP_BADATTR_VALUE_ERR_IDX].t_class, log_buffer);\n\t\t\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t\t\t}\n\t\t\t} else if (ATTR_IS_RESC(&resv_attr_def[attr_idx])) {\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"can't set the head resv resource '%s' directly \", name);\n\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\tPyErr_SetString(PyExc_AssertionError, log_buffer);\n\t\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t\t}\n\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_MOVEJOB:\n\n\t\t\tif (!PyObject_IsInstance(py_owner,\n\t\t\t\t\t\t pbs_python_types_table[PP_JOB_IDX].t_class)) {\n\t\t\t\tPyErr_SetString(PyExc_AssertionError, \"Can only set job attributes under MOVEJOB event.\");\n\t\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t\t}\n\n\t\t\tif (strcmp(name, ATTR_queue) != 0) {\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"Can only set job's 'queue' attribute under MOVEJOB event - \"\n\t\t\t\t\t\t\t\t       \"got <%s>\",\n\t\t\t\t\t name);\n\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\tPyErr_SetString(PyExc_AssertionError, log_buffer);\n\t\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_RUNJOB:\n\n\t\t\tif (!PyObject_IsInstance(py_owner,\n\t\t\t\t\t\t pbs_python_types_table[PP_JOB_IDX].t_class) &&\n\t\t\t    !PyObject_IsInstance(py_owner,\n\t\t\t\t\t\t pbs_python_types_table[PP_RESC_IDX].t_class) &&\n\t\t\t    !PyObject_IsInstance(py_owner,\n\t\t\t\t\t\t 
pbs_python_types_table[PP_VNODE_IDX].t_class)) {\n\t\t\t\tPyErr_SetString(PyExc_AssertionError, \"Can only set job,resource,vnode attributes under RUNJOB event.\");\n\t\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t\t}\n\n\t\t\t/*\n\t\t\t * If it's a job object, then 'name' below refers to a job\n\t\t\t * attribute name, for example Output_Path, Priority, etc. If it's\n\t\t\t * a resource object, then 'name' actually refers to the job's\n\t\t\t * resource list, like Resource_List, resources_used, etc.\n\t\t\t * Resource_List is fine since it is listed in\n\t\t\t * runjob_modifiable_jobattrs (as ATTR_l), but if it is, for\n\t\t\t * example, resources_used, which is not in\n\t\t\t * runjob_modifiable_jobattrs, then issue an error unless the\n\t\t\t * owner is a vnode object.\n\t\t\t * If the owner is a vnode object but 'name' is not in the list of\n\t\t\t * runjob_modifiable_vnattrs, then issue an error.\n\t\t\t */\n\t\t\tif (!in_string_list(name, '|', runjob_modifiable_jobattrs) &&\n\t\t\t    (!PyObject_IsInstance(py_owner,\n\t\t\t\t\t\t  pbs_python_types_table[PP_VNODE_IDX].t_class) ||\n\t\t\t     !in_string_list(name, '|', runjob_modifiable_vnattrs))) {\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t\t FMT_RUNJOB_ERRMSG, runjob_modifiable_jobattrs,\n\t\t\t\t\t runjob_modifiable_vnattrs, name);\n\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\tPyErr_SetString(PyExc_AssertionError, log_buffer);\n\t\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t\t}\n\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_PERIODIC:\n\t\t\tif (!PyObject_IsInstance(py_owner,\n\t\t\t\t\t\t pbs_python_types_table[PP_VNODE_IDX].t_class) &&\n\t\t\t    !PyObject_IsInstance(py_owner,\n\t\t\t\t\t\t pbs_python_types_table[PP_RESC_IDX].t_class)) {\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t\t \"Can only set node,resource attributes under %s event.\", HOOKSTR_PERIODIC);\n\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\tPyErr_SetString(PyExc_AssertionError, log_buffer);\n\t\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t\t}\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tPyErr_SetString(PyExc_AssertionError, \"Unexpected event\");\n\t\t\tgoto IAVS_ERROR_EXIT;\n\t}\n\n\t/* Now check validity of the value */\n\n\tif (PyObject_HasAttrString(py_self, PY_DESCRIPTOR_VALUE_TYPE)) {\n\t\tpy_value_type = PyObject_GetAttrString(py_self,\n\t\t\t\t\t\t       PY_DESCRIPTOR_VALUE_TYPE); /* NEW */\n\t\tif (!PyTuple_Check(py_value_type)) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"For name=%s res=%s, value type is not a tuple\",\n\t\t\t\t name, resource);\n\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\tPyErr_SetString(PyExc_AssertionError, log_buffer);\n\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t}\n\t}\n\n\tif (py_value_type) {\n\t\tpy_value_type_0 = PyTuple_GetItem(py_value_type, 0);\n\t}\n\n\tif (py_value_type_0) {\n\t\tif (PyObject_HasAttrString(py_value_type_0, PY_CLASS_DERIVED_TYPES)) {\n\t\t\tpy_value_type_0_derived =\n\t\t\t\tPyObject_GetAttrString(py_value_type_0,\n\t\t\t\t\t\t       PY_CLASS_DERIVED_TYPES); /* NEW */\n\t\t}\n\t}\n\n\t/* Ok if py_value is None, or py_value is of py_value_type, or */\n\t/* py_value is of py_value_type_0_derived */\n\tif ((py_value != Py_None) &&\n\t    (py_value_type && !PyObject_IsInstance(py_value, py_value_type)) &&\n\t    (!py_value_type_0_derived || !PyObject_IsInstance(py_value,\n\t\t\t\t\t\t\t      py_value_type_0_derived))) {\n\t\tchar cls[STRBUF];\n\t\tchar att[STRBUF];\n\t\tchar vtype[STRBUF];\n\t\tchar dtype[STRBUF];\n\t\tint dlen = 0;\n\t\tchar *the_dtype = NULL;\n\t\tchar *msgbuf;\n\n\t\tmemset(cls, '\\0', STRBUF);\n\t\tmemset(att, '\\0', STRBUF);\n\t\tmemset(vtype, '\\0', STRBUF);\n\t\tmemset(dtype, '\\0', STRBUF);\n\n\t\tpstr = pbs_python_object_get_attr_string_value(py_self,\n\t\t\t\t\t\t\t       PY_DESCRIPTOR_CLASS_NAME);\n\t\tif (pstr)\n\t\t\tstrncpy(cls, pstr, STRBUF - 1);\n\n\t\tpstr = pbs_python_object_get_attr_string_value(py_self,\n\t\t\t\t\t\t\t       PY_DESCRIPTOR_NAME);\n\n\t\tif (pstr)\n\t\t\tstrncpy(att, pstr, 
STRBUF - 1);\n\n\t\tif (py_value_type) {\n\t\t\tstrncpy(vtype,\n\t\t\t\tpbs_python_object_str(py_value_type), STRBUF - 1);\n\t\t}\n\t\tif (py_value_type_0_derived) {\n\t\t\tstrncpy(dtype,\n\t\t\t\tpbs_python_object_str(py_value_type_0_derived), STRBUF - 1);\n\t\t\tthe_dtype = dtype;\n\t\t\tdlen = strlen(dtype);\n\t\t\t/* clear extra leading '(' and trailing ',)' in */\n\t\t\t/* <derived_type> value if both appear. */\n\t\t\tif ((dtype[0] == '(') && (dlen >= 2) &&\n\t\t\t    (dtype[dlen - 2] == ',') && (dtype[dlen - 1] == ')')) {\n\t\t\t\tdtype[dlen - 2] = '\\0';\n\t\t\t\tthe_dtype = dtype + 1; /* move past leading '(' */\n\t\t\t}\n\t\t}\n\n\t\tpbs_asprintf(&msgbuf,\n\t\t\t     \"value for class <%s> attribute <%s> must be 'None' or '%s%s%s'\",\n\t\t\t     cls, att, vtype,\n\t\t\t     the_dtype ? \",\" : \"\",\n\t\t\t     the_dtype ? the_dtype : \"\");\n\n\t\tif (is_resource == 1)\n\t\t\tPyErr_SetString(\n\t\t\t\tpbs_python_types_table[PP_BAD_RESC_VTYPE_ERR_IDX].t_class,\n\t\t\t\tmsgbuf);\n\t\telse\n\t\t\tPyErr_SetString(\n\t\t\t\tpbs_python_types_table[PP_BADATTR_VTYPE_ERR_IDX].t_class,\n\t\t\t\tmsgbuf);\n\t\tfree(msgbuf);\n\t\tgoto IAVS_ERROR_EXIT;\n\t}\n\n\tif (strcmp(name, ATTR_a) == 0) {\n\t\tlong exec_time = 0;\n\t\tint ret;\n\n\t\t/* parse floats nicely since time.time() returns floats */\n\t\tif (py_value != Py_None) {\n\t\t\tif (PyFloat_Check(py_value)) {\n\t\t\t\tdouble ftime;\n\t\t\t\tret = PyArg_Parse(py_value, \"d\", &ftime);\n\t\t\t\tif (ret != 0)\n\t\t\t\t\texec_time = (long) ftime;\n\t\t\t} else {\n\t\t\t\tret = PyArg_Parse(py_value, \"l\", &exec_time);\n\t\t\t}\n\n\t\t\t/* ret == 0 means the parse failed; otherwise check that the time is in the future */\n\t\t\tif (ret == 0) {\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t\t \"exec_time could not be parsed\");\n\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\tPyErr_SetString(\n\t\t\t\t\tpbs_python_types_table[PP_BADATTR_VALUE_ERR_IDX].t_class,\n\t\t\t\t\tlog_buffer);\n\t\t\t\trc = 1;\n\t\t\t\tgoto 
IAVS_ERROR_EXIT;\n\t\t\t} else if (exec_time < time(0)) {\n\t\t\t\tchar *str_time = NULL;\n\n\t\t\t\tstr_time = ctime(&exec_time);\n\t\t\t\tif (str_time != NULL)\n\t\t\t\t\tstr_time[strlen(str_time) - 1] = '\\0';\n\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t\t \"exec_time '%s' not in the future\",\n\t\t\t\t\t (str_time ? str_time : \"\"));\n\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\tPyErr_SetString(\n\t\t\t\t\tpbs_python_types_table[PP_BADATTR_VALUE_ERR_IDX].t_class,\n\t\t\t\t\tlog_buffer);\n\t\t\t\trc = 1;\n\t\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t\t}\n\t\t}\n\t} else if (strcmp(name, ATTR_runcount) == 0) {\n\t\tlong runcount;\n\n\t\tif ((PyArg_Parse(py_value, \"l\", &runcount) == 0) ||\n\t\t    (runcount < 0)) {\n\n\t\t\tPyErr_SetString(\n\t\t\t\tpbs_python_types_table[PP_BADATTR_VALUE_ERR_IDX].t_class,\n\t\t\t\t\"run_count value must be >= 0\");\n\t\t\trc = 1;\n\t\t\tgoto IAVS_ERROR_EXIT;\n\t\t}\n\t}\n\n\trc = 0;\n\nIAVS_OK_EXIT:\n\tPy_CLEAR(py_value_type);\n\tPy_CLEAR(py_value_type_0_derived);\n\tfree(name);\n\tfree(resource);\n\tret = (rc == 0) ? 
Py_True : Py_False;\n\tPy_INCREF(ret);\n\treturn (ret);\n\nIAVS_ERROR_EXIT:\n\tPy_CLEAR(py_value_type);\n\tPy_CLEAR(py_value_type_0_derived);\n\tfree(name);\n\tfree(resource);\n\treturn NULL;\n}\n\n/*\n * Methods related to hook_pbsevent_accept.\n */\n/* pbs_v1_module method event */\n\nconst char pbsv1mod_meth_event_accept_doc[] =\n\t\"event_accept()\\n\\\n\\n\\\n         make the current event accept the corresponding request.\\n\\\n\";\n\n/**\n * @brief\n *\tmake the current event accept the corresponding request.\n *\n */\nPyObject *\npbsv1mod_meth_event_accept(void)\n{\n\t_pbs_python_event_accept();\n\tPy_RETURN_NONE;\n}\n\nconst char pbsv1mod_meth_event_reject_doc[] =\n\t\"event_reject()\\n\\\n\\n\\\n         make the current event reject the corresponding request.\\n\\\n\";\n\n/**\n * @brief\n *\tmake the current event reject the corresponding request.\n */\nPyObject *\npbsv1mod_meth_event_reject(PyObject *self, PyObject *args, PyObject *kwds)\n{\n\n\tstatic char *kwlist[] = {\"message\", NULL};\n\tchar *emsg = NULL;\n\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t \"|s:_event_reject\",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &emsg)) {\n\t\treturn NULL;\n\t}\n\t_pbs_python_event_reject(emsg);\n\n\tPy_RETURN_NONE;\n}\n\n/*\n * Methods related to the hook_pbsevent_stop_processing flag.\n */\n/* pbs_v1_module method event */\n\nconst char pbsv1mod_meth_event_param_mod_allow_doc[] =\n\t\"event_param_mod_allow()\\n\\\n\\n\\\n         Allow changes to the event object's param.\\n\\\n\";\n\n/**\n * @brief\n *\tAllow changes to the event object's param.\n */\nPyObject *\npbsv1mod_meth_event_param_mod_allow(void)\n{\n\t_pbs_python_event_param_mod_allow();\n\tPy_RETURN_NONE;\n}\n\nconst char pbsv1mod_meth_event_param_mod_disallow_doc[] =\n\t\"event_param_mod_disallow()\\n\\\n\\n\\\n         Disallow changes to the event object's param.\\n\\\n\";\n\n/**\n * @brief\n *\tDisallow changes to the event object's param.\n */\nPyObject *\npbsv1mod_meth_event_param_mod_disallow(PyObject 
*self)\n{\n\t_pbs_python_event_param_mod_disallow();\n\tPy_RETURN_NONE;\n}\n\n/*\n * Create an event object and stuff the fixed attributes\n */\n/* pbs_v1_module method event */\n\nconst char pbsv1mod_meth_event_doc[] =\n\t\"event()\\n\\\n\\n\\\n     returns:\\n\\\n         instance of _event type corresponding to the event that the\\n\\\n\t current hook is responding to.\\n\\\n\\n\\\n    Event attributes:\\n\\\n      e.type = {pbs.QUEUEJOB, pbs.MODIFYJOB, pbs.RESVSUB, pbs.MOVEJOB}\\n\\\n      e.requestor - who made the request.\\n\\\n   \t\t   Special values: PBS_Server, Scheduler, pbs_mom\\n\\\n\\n\\\n      e.requestor_host - where the request came from.\\n\\\n      e.hook_name - name of the hook being executed.\\n\\\n\\n\\\n    Event methods:\\n\\\n      e.accept()       : accepts the current event request and raises SystemExit\\n\\\n      e.reject([<msg>]): rejects the current event request and raises SystemExit\\n\\\n\t\t         Where <msg> shows up in STDERR of the originating\\n\\\n\t\t\t command, and the PBS daemon log.\\n\\\n\\n\\\n    Event parameters:\\n\\\n\\n\\\n      If e.type is pbs.QUEUEJOB:   e.job\\n\\\n\t e.job.<attribute_name> = <attribute_value>\\n\\\n\t e.job.Priority = 7\\n\\\n\t e.job.Resource_List[walltime] = pbs.duration(00:30:00)\\n\\\n\t e.job.Resource_List[mem] = None\\n\\\n\\n\\\n      If e.type is pbs.MODIFYJOB:   e.job, e.job_o\\n\\\n\t e.job.<attribute_name> = <attribute_value>  (where <attribute_name> != queue)\\n\\\n\\n\\\n      If e.type is pbs.RESVSUB:   e.resv\\n\\\n\t e.resv.<attribute_name> = <attribute_value>\\n\\\n\t e.resv.Reserve_Name = Altair##\\n\\\n\t e.resv.Resource_List[select] = pbs.select(5:ncpus=1:mem=2gb)\\n\\\n\\n\\\n      If e.type is pbs.MOVEJOB:   e.job, e.src_queue\\n\\\n\t e.job.queue = pbs.server().queue(<destination_queue>)\\n\\\n\";\n\n/**\n * @brief\n *      Returns the current Python event object (i.e. 
pbs.event()).\n *\n * @return the Python event object.\n * @retval\tthe current event object\n * @retval\tNone if no event found\n *\n */\n\nPyObject *\npbsv1mod_meth_event(void)\n{\n\t/* This function gets invoked in Python realm (i.e. hook script),     */\n\t/* which causes Python to think py_hook_pbsevent was created in that  */\n\t/* realm. Then under Python world, the returned py_hook_pbsevent      */\n\t/* gets its reference count decremented by 1 after every access.      */\n\t/* Continually decrementing the reference count would cause us to     */\n\t/* crash! So we need to bump up by 1 the count under the C realm, to  */\n\t/* match the decrements under the Python realm.\t\t\t      */\n\n\tif (py_hook_pbsevent == NULL) {\n\t\tPy_RETURN_NONE;\n\t}\n\tPy_INCREF(py_hook_pbsevent);\n\treturn (py_hook_pbsevent);\n}\n\n/**\n * @brief\n *\tcheck whether job input is valid\n *\n * @return\tint\n * @retval\t0 \tif 'value' is a valid value for job attribute/resource 'name';\n * @retval\t1 \tif not a valid value;\n * @retval\t2 \tif no criterion was found for determining the validity of 'value' against 'name'.\n *\n * @par Note:\n *\tThis code is taken from the qsub/qalter parsing of input.\n *\n */\nstatic int\nis_job_input_valid(char *name, char *value)\n{\n\t/* create an attribute structure to pass to verify functionality */\n\tstruct attropl pattr;\n\tint verified;\n\tint err_code;\n\tchar *err_msg = NULL;\n\n\t/* create a copy of the attribute value pointers\n\t * because the verify_an_attribute function could change it\n\t */\n\tmemset(&pattr, 0, sizeof(struct attropl));\n\tpattr.name = name;\n\tpattr.value = strdup(value);\n\tif (pattr.value == NULL) {\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn (1);\n\t}\n\terr_code = verify_an_attribute(PBS_BATCH_QueueJob, MGR_OBJ_JOB,\n\t\t\t\t       MGR_CMD_NONE, &pattr, &verified, &err_msg);\n\tif (err_msg)\n\t\tfree(err_msg);\n\tif (pattr.value)\n\t\tfree(pattr.value);\n\n\tif (!verified)\n\t\treturn (2);\n\telse if 
(err_code)\n\t\treturn (1);\n\n\treturn (0);\n}\n\n/**\n * @brief\n *\tvalidate the input for reservation\n *\n * @return\tint\n * @retval\t0 \tif 'value' is a valid value for reservation attribute/resource 'name';\n * @retval\t1 \tif not a valid value;\n * @retval\t2 \tif no criterion was found for determining the validity of 'value' against 'name'.\n *\n * @par\tNOTE:\n *\tThis code is taken from the pbs_rsub parsing of input.\n *\n */\nstatic int\nis_resv_input_valid(char *name, char *value)\n{\n\t/* create an attribute structure to pass to verify functionality */\n\tstruct attropl pattr;\n\tint verified;\n\tint err_code;\n\tchar *err_msg = NULL;\n\n\t/* create a copy of the attribute name and value pointers\n\t * because the verify_an_attribute function could change it\n\t */\n\tmemset(&pattr, 0, sizeof(struct attropl));\n\tpattr.name = name;\n\tpattr.value = strdup(value);\n\tif (pattr.value == NULL) {\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn (1);\n\t}\n\terr_code = verify_an_attribute(PBS_BATCH_SubmitResv, MGR_OBJ_RESV,\n\t\t\t\t       MGR_CMD_NONE, &pattr, &verified, &err_msg);\n\tif (err_msg)\n\t\tfree(err_msg);\n\tif (pattr.value)\n\t\tfree(pattr.value);\n\n\tif (!verified)\n\t\treturn (2);\n\telse if (err_code)\n\t\treturn (1);\n\n\treturn (0);\n}\n\nconst char pbsv1mod_meth_validate_input_doc[] =\n\t\"validate_input(table_descr, strName, strValue)\\n\\\n\\n\\\n   table_descr  : pbs table to consult: resc, job, queue, server, resv, float\\n\\\n   strName      : an attribute name\\n\\\n   strValue     : the value as a string\\n\\\n\\n\\\n   raises an exception if (strName, strValue) is not valid in 'table_descr'\\n\\\n\";\n\n/**\n * @brief\n *\tThis is callable in a Python script, for validating a PBS\n *\ttuple: <attribute_name>, <attribute_value>\n *\n * @param[in]\tself - the calling parent object\n * @param[in]\targs - the passed arguments:\n *\n *\t\targs[1]\t- the validation table (\"job\", \"resv\", \"server\", \"queue\")\n * \t\targs[2] 
- the attribute name.\n * \t\targs[3] - the attribute value.\n * @param[in]\tkwds  - Python variable arguments\n *\n * @return\tPyObject *\n * @retval\tNULL\t- failed validation of input\n * @retval\tPy_None - successful validation of input\n *\n */\nPyObject *\npbsv1mod_meth_validate_input(PyObject *self, PyObject *args, PyObject *kwds)\n{\n\n\tstatic char *kwlist[] = {\"table_descr\", \"attribute\", \"value\", NULL};\n\tchar *table = NULL;\n\tchar *name = NULL;\n\tchar *value = NULL;\n\tchar *value_tmp = NULL;\n\tint attr_idx = -1;\n\tattribute attr;\n\tint rc;\n\tint is_v;\n\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t \"sss:validate_input\",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &table,\n\t\t\t\t\t &name,\n\t\t\t\t\t &value)) {\n\t\treturn NULL;\n\t}\n\n\tif (hook_set_mode == C_MODE) {\n\t\t/* No need to validate input if not called inside a hook script. */\n\t\tPy_RETURN_NONE;\n\t}\n\n\t/* The *decode* functions below \"munge\" the value argument, so we */\n\t/* use a copy. */\n\tvalue_tmp = strdup(value);\n\tif (value_tmp == NULL) {\n\t\tPyErr_SetString(PyExc_AssertionError, \"strdup of value failed\");\n\t\tgoto validate_input_error_exit;\n\t}\n\tif (strcmp(table, PY_TYPE_JOB) == 0) {\n\n\t\tis_v = is_job_input_valid(name, value_tmp);\n\n\t\tif (is_v == 1) {\n\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"input value %s not of the right format for '%s'\",\n\t\t\t\t value, name);\n\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\tPyErr_SetString(\n\t\t\t\tpbs_python_types_table[PP_BADATTR_VALUE_ERR_IDX].t_class,\n\t\t\t\tlog_buffer);\n\t\t\tgoto validate_input_error_exit;\n\n\t\t} else if (is_v == 2) { /* go to job table to validate */\n\n\t\t\tattr_idx = find_attr(job_attr_idx, job_attr_def, name);\n\t\t\tif (attr_idx == -1) {\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"job attribute %s not found\", name);\n\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\tPyErr_SetString(PyExc_LookupError, log_buffer);\n\t\t\t\tgoto 
validate_input_error_exit;\n\t\t\t}\n\n\t\t\tclear_attr(&attr, job_attr_def);\n\t\t\trc = set_attr_generic(&attr, &job_attr_def[attr_idx], value_tmp, NULL, INTERNAL);\n\t\t\tfree_attr(job_attr_def, &attr, attr_idx);\n\n\t\t\tif (rc != 0) {\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t\t \"input value %s not of the right format for '%s'\",\n\t\t\t\t\t value, name);\n\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\tPyErr_SetString(\n\t\t\t\t\tpbs_python_types_table[PP_BADATTR_VALUE_ERR_IDX].t_class,\n\t\t\t\t\tlog_buffer);\n\t\t\t\tgoto validate_input_error_exit;\n\t\t\t}\n\t\t}\n\n\t} else if (strcmp(table, PY_RESOURCE) == 0) {\n\t\tresource_def *rescdef;\n\n\t\trescdef = find_resc_def(svr_resc_def, name);\n\t\tif (!rescdef) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"resource attribute %s not found\", name);\n\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\tPyErr_SetString(PyExc_LookupError, log_buffer);\n\t\t\tgoto validate_input_error_exit;\n\t\t}\n\n\t\tif (rescdef->rs_decode) {\n\t\t\tmemset((void *) &attr, 0, sizeof(attribute));\n\t\t\trc = rescdef->rs_decode(&attr, name, NULL, value_tmp);\n\n\t\t\tif (rescdef->rs_free) {\n\t\t\t\trescdef->rs_free(&attr);\n\t\t\t}\n\t\t\tif (rc != 0) {\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t\t \"input value %s not of the right format for '%s'\",\n\t\t\t\t\t value, name);\n\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\tPyErr_SetString(\n\t\t\t\t\tpbs_python_types_table[PP_BAD_RESC_VALUE_ERR_IDX].t_class,\n\t\t\t\t\tlog_buffer);\n\t\t\t\tgoto validate_input_error_exit;\n\t\t\t}\n\t\t}\n\t} else if (strcmp(table, PY_TYPE_RESV) == 0) {\n\n\t\tis_v = is_resv_input_valid(name, value_tmp);\n\n\t\tif (is_v == 1) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"input value %s not of the right format for '%s'\",\n\t\t\t\t value, name);\n\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = 
'\\0';\n\t\t\tPyErr_SetString(\n\t\t\t\tpbs_python_types_table[PP_BADATTR_VALUE_ERR_IDX].t_class,\n\t\t\t\tlog_buffer);\n\t\t\tgoto validate_input_error_exit;\n\t\t} else if (is_v == 2) { /* go to resv table to validate */\n\n\t\t\tattr_idx = find_attr(resv_attr_idx, resv_attr_def, name);\n\t\t\tif (attr_idx == -1) {\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"reservation attribute %s not found\", name);\n\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\tPyErr_SetString(PyExc_LookupError, log_buffer);\n\t\t\t\tgoto validate_input_error_exit;\n\t\t\t}\n\n\t\t\tclear_attr(&attr, resv_attr_def);\n\t\t\trc = set_attr_generic(&attr, &resv_attr_def[attr_idx], value_tmp, NULL, INTERNAL);\n\t\t\tfree_attr(resv_attr_def, &attr, attr_idx);\n\t\t\tif (rc != 0) {\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t\t \"input value %s not of the right format for '%s'\",\n\t\t\t\t\t value, name);\n\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\tPyErr_SetString(\n\t\t\t\t\tpbs_python_types_table[PP_BADATTR_VALUE_ERR_IDX].t_class,\n\t\t\t\t\tlog_buffer);\n\t\t\t\tgoto validate_input_error_exit;\n\t\t\t}\n\t\t}\n\t} else if (strcmp(table, PY_TYPE_SERVER) == 0) {\n\t\tattr_idx = find_attr(svr_attr_idx, svr_attr_def, name);\n\t\tif (attr_idx == -1) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"server attribute %s not found\", name);\n\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\tPyErr_SetString(PyExc_LookupError, log_buffer);\n\t\t\tgoto validate_input_error_exit;\n\t\t}\n\n\t\tclear_attr(&attr, svr_attr_def);\n\t\trc = set_attr_generic(&attr, &svr_attr_def[attr_idx], value_tmp, NULL, INTERNAL);\n\t\tfree_attr(svr_attr_def, &attr, attr_idx);\n\t\tif (rc != 0) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"input value %s not of the right format for '%s'\",\n\t\t\t\t value, name);\n\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = 
'\\0';\n\t\t\tPyErr_SetString(\n\t\t\t\tpbs_python_types_table[PP_BADATTR_VALUE_ERR_IDX].t_class,\n\t\t\t\tlog_buffer);\n\t\t\tgoto validate_input_error_exit;\n\t\t}\n\t} else if (strcmp(table, PY_TYPE_QUEUE) == 0) {\n\t\tattr_idx = find_attr(que_attr_idx, que_attr_def, name);\n\t\tif (attr_idx == -1) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"queue attribute %s not found\", name);\n\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\tPyErr_SetString(PyExc_LookupError, log_buffer);\n\t\t\tgoto validate_input_error_exit;\n\t\t}\n\n\t\tclear_attr(&attr, que_attr_def);\n\t\trc = set_attr_generic(&attr, &que_attr_def[attr_idx], value_tmp, NULL, INTERNAL);\n\t\tfree_attr(que_attr_def, &attr, attr_idx);\n\t\tif (rc != 0) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"input value %s not of the right format for '%s'\",\n\t\t\t\t value, name);\n\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\tPyErr_SetString(\n\t\t\t\tpbs_python_types_table[PP_BADATTR_VALUE_ERR_IDX].t_class,\n\t\t\t\tlog_buffer);\n\t\t\tgoto validate_input_error_exit;\n\t\t}\n\t} else if (strcmp(table, PY_TYPE_FLOAT2) == 0) {\n\t\tclear_attr(&attr, que_attr_def);\n\t\trc = decode_f(&attr, name, NULL, value_tmp);\n\t\tif (rc != 0) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"input value %s not of the right format for '%s'\",\n\t\t\t\t value, name);\n\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\tPyErr_SetString(\n\t\t\t\tpbs_python_types_table[PP_BADATTR_VALUE_ERR_IDX].t_class,\n\t\t\t\tlog_buffer);\n\t\t\tgoto validate_input_error_exit;\n\t\t}\n\t} else {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t \"could not find an attributes table called '%s'\", table);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tPyErr_SetString(PyExc_LookupError, log_buffer);\n\t\tgoto validate_input_error_exit;\n\t}\n\n\tif (value_tmp) {\n\t\tfree(value_tmp);\n\t}\n\tPy_RETURN_NONE;\n\nvalidate_input_error_exit:\n\tif (value_tmp) {\n\t\tfree(value_tmp);\n\t}\n\treturn 
NULL;\n}\n\nconst char pbsv1mod_meth_duration_to_secs_doc[] =\n\t"duration_to_secs(time_str)\\n\\\n\\n\\\n   time_str:  a time string ([HH:MM:]SS[.ms]) to be converted into # of seconds \\n\\\n\\n\\\n         returns an int. If time_str is \\"\\", then returns 0\\n\\\n";\n\n/**\n * @brief\n *\tconvert the time in ([HH:MM:]SS[.ms]) format to seconds.\n *\n * @return\tPyObject*\n * @retval\tsecs\t\tsuccess\n * @retval\tNULL\t\terror\n *\n */\nPyObject *\npbsv1mod_meth_duration_to_secs(PyObject *self, PyObject *args, PyObject *kwds)\n{\n\n\tstatic char *kwlist[] = {"time_str", NULL};\n\tchar *time_str = NULL;\n\tlong num_secs;\n\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t "s:duration_to_secs",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &time_str)) {\n\t\treturn NULL;\n\t}\n\n\tnum_secs = duration_to_secs(time_str);\n\tif (num_secs == -1) {\n\t\tPyErr_SetString(PyExc_AssertionError, "strdup of value failed");\n\t\tgoto duration_error_exit;\n\t}\n\n\tif (num_secs == -2) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, "input value '%s' not of the right format",\n\t\t\t time_str);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tPyErr_SetString(\n\t\t\tpbs_python_types_table[PP_BADATTR_VALUE_ERR_IDX].t_class,\n\t\t\tlog_buffer);\n\t\tgoto duration_error_exit;\n\t}\n\n\treturn (PyLong_FromLong(num_secs));\n\nduration_error_exit:\n\n\treturn NULL;\n}\n\n/* pbs_v1_module method event */\n\nconst char pbsv1mod_meth_wordsize_doc[] =\n\t"wordsize()\\n\\\n\\n\\\n  returns:\\n\\\n         size of a word in bytes (an int).\\n\\\n";\n\n/**\n * @brief\n *\treturn the size of a word in bytes.\n */\nPyObject *\npbsv1mod_meth_wordsize(void)\n{\n\treturn (PyLong_FromSsize_t((ssize_t) sizeof(int)));\n}\n\n/**\n * @brief\n * \t_pbs_python_event_job_getval_hookset:\n * \tThis is a general-purpose function that looks into the current hook\n * \tevent object's job parameter (i.e. pbs.event().job), and sees whether the job\n * \tattribute attrib_name was set inside a hook script. 
If so, returns the\n * \tattribute's value as a string; NULL otherwise. On some attributes like\n * \tATTR_h, optional value strings opval and delval are returned. opval tells\n * \thow the attribute value was obtained: via __init__, __add__, or __sub__\n * \tmethods; delval tells which hold values (e.g. \"us\") were actually removed\n * \tby a __sub__ (i.e. unset) action. opval_len and delval_len are the actual\n * \tnumber of bytes pre-allocated for the string arrays 'opval' and 'delval'.\n * \tThe caller is responsible for allocating enough space for these parameters.\n *\n * @par\tNOTE:\n *\tThis returns the string value returned by pbs_python_object_str(),\n * \twhich returns a fixed memory area that gets overwritten by subsequent\n * \tcalls to this function. So the return value of this function must be\n * \tused immediately.\n *\n */\nchar *\n_pbs_python_event_job_getval_hookset(char *attrib_name, char *opval,\n\t\t\t\t     int opval_len, char *delval, int delval_len)\n{\n\tPyObject *py_job = NULL;\n\tPyObject *py_attr_hookset_dict = NULL;\n\tchar *strval = NULL;\n\n\tif (py_hook_pbsevent == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"No hook event found!\");\n\t\treturn NULL;\n\t}\n\n\tif (!PyObject_HasAttrString(py_hook_pbsevent, PY_EVENT_PARAM_JOB)) {\n\t\tLOG_ERROR_ARG2(\"%s: does not have attribute <%s>\",\n\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOB);\n\t\treturn NULL;\n\t}\n\n\tpy_job = PyObject_GetAttrString(py_hook_pbsevent, PY_EVENT_PARAM_JOB);\n\t/* NEW */\n\n\tif (py_job == NULL || (py_job == Py_None)) {\n\t\tLOG_ERROR_ARG2(\"%s: does not have a value for <%s>\",\n\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOB);\n\t\treturn NULL;\n\t}\n\n\t/*\n\t * Get the attributes that have been set in the hook script via the\n\t * _attributes_hook_set dictionary.\n\t */\n\tpy_attr_hookset_dict = PyObject_GetAttrString(\n\t\tpy_job, PY_ATTRIBUTES_HOOK_SET); /* NEW */\n\tif (py_attr_hookset_dict == NULL) {\n\t\tLOG_ERROR_ARG2(\"%s: does not have a value 
for <%s>\",\n\t\t\t       PY_TYPE_JOB, PY_ATTRIBUTES_HOOK_SET);\n\t\tgoto getval_exit;\n\t}\n\tif (!PyDict_Check(py_attr_hookset_dict)) {\n\t\tLOG_ERROR_ARG2(\"%s: <%s> is not a dict\",\n\t\t\t       PY_TYPE_JOB, PY_ATTRIBUTES_HOOK_SET);\n\t\tgoto getval_exit;\n\t}\n\n\tif (PyDict_GetItemString(py_attr_hookset_dict, attrib_name) != NULL) {\n\t\tif (PyObject_HasAttrString(py_job, attrib_name)) {\n\t\t\tPyObject *py_attrval = NULL;\n\n\t\t\tpy_attrval = PyObject_GetAttrString(py_job,\n\t\t\t\t\t\t\t    attrib_name); /* NEW */\n\n\t\t\tif ((py_attrval != NULL) && (py_attrval != Py_None)) {\n\n\t\t\t\tif ((opval != NULL) && (opval_len > 1)) {\n\t\t\t\t\tstrval = pbs_python_object_get_attr_string_value(\n\t\t\t\t\t\tpy_attrval, PY_OPVAL);\n\t\t\t\t\tstrncpy(opval, (strval ? strval : \"\"), opval_len - 1);\n\t\t\t\t}\n\t\t\t\tif ((delval != NULL) && (delval_len > 1)) {\n\t\t\t\t\tstrval = pbs_python_object_get_attr_string_value(\n\t\t\t\t\t\tpy_attrval, PY_DELVAL);\n\t\t\t\t\tstrncpy(delval, (strval ? strval : \"\"), delval_len - 1);\n\t\t\t\t}\n\t\t\t\tstrval = pbs_python_object_str(py_attrval);\n\t\t\t\tPy_DECREF(py_attrval);\n\t\t\t}\n\t\t}\n\t}\n\ngetval_exit:\n\n\tPy_CLEAR(py_job);\n\tPy_CLEAR(py_attr_hookset_dict);\n\n\treturn (strval);\n}\n\nconst char pbsv1mod_meth_iter_nextfunc_doc[] =\n\t\"iter_nextfunc(meth_mode, obj_name, filter1, filter2)\\n\\\n\\n\\\n   meth_mode:\tcan be 1 if called from __init__() or 0 if from next()\\n\\\n\t\tmethod of a pbs_iter type.\\n\\\n   obj_name:\twhat type of object being iterated on: \\\"queues\\\", \\\"jobs\\\",\\n\\\n\t\t\\\"resvs\\\", \\\"vnodes\\\".\\n\\\n   filter1:\tis usually the <server_name> where the queues,\\n\\\n\t\tjobs, resvs, vnodes reside. A <server_name> of \\\"\\\"\\n\\\n\t\tmeans the local server host.\\n\\\n   filter2:\tcan be any string that can further restrict the list\\n\\\n\t\tbeing referenced. 
For example, this can be set to\\n\\\n\t\tsome <queue_name>, to have the iterator represent\\n\\\n\t\ta list of jobs on <queue_name>@<server_name>\\n\\\n\\n\\\n   Returns the next PBS object in Python form to evaluate within a looping\\n\\\n   construct. The idea is that on iterator instantiation, the following gets\\n\\\n   called:\\n\\\n   \t_pbs_v1.iter_nextfunc(self, 1, obj_name, filter1, filter2)\\n\\\n   This causes an iterator reference (i.e. self) to be internally stored by\\n\\\n   PBS in the internal pbs_iter_list, setting the pbs_iter_item's data field to\\n\\\n   the first element of the obj_name (e.g. queues) list, filtered\\n\\\n   according to filter1 (e.g. fest) and/or filter2 (e.g. workq).\\n\\\n\\n\\\n   Then on an iterator next() call, the following gets invoked:\\n\\\n\t_pbs_v1.iter_nextfunc(self, 0, obj_name, filter1, filter2)\\n\\\n   This returns the next Python object to process among the list of PBS\\n\\\n   objects represented by iterator self.\\n\\\n\";\n\nPyObject *\npbsv1mod_meth_iter_nextfunc(PyObject *self, PyObject *args, PyObject *kwds)\n{\n#ifdef NAS /* localmod 014 */\n\tstatic char *kwlist[] = {\"iter_obj\", \"meth_mode\", \"obj_name\", \"filter1\", \"filter2\", \"ignore_fin\", \"filter_user\",\n\t\t\t\t NULL};\n#else\n\tstatic char *kwlist[] = {\"iter_obj\", \"meth_mode\", \"obj_name\", \"filter1\", \"filter2\",\n\t\t\t\t NULL};\n#endif /* localmod 014 */\n\tint meth_mode;\n\tchar *obj_name = NULL;\n\tchar *filter1 = NULL;\n\tchar *filter2 = NULL;\n#ifdef NAS /* localmod 014 */\n\tint ignore_fin;\n\tchar *filter_user = NULL;\n#endif /* localmod 014 */\n\tpbs_iter_item *iter_entry = NULL;\n\tpbs_queue *pque = NULL;\n\tPyObject *py_object = NULL;\n\tPyObject *py_self = NULL;\n\tint vi;\n\n#ifdef NAS /* localmod 014 */\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t \"Oisssis:iter_nextfunc\",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &py_self,\n\t\t\t\t\t &meth_mode,\n\t\t\t\t\t &obj_name,\n\t\t\t\t\t &filter1,\n\t\t\t\t\t 
&filter2,\n\t\t\t\t\t &ignore_fin,\n\t\t\t\t\t &filter_user)) {\n#else\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t \"Oisss:iter_nextfunc\",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &py_self,\n\t\t\t\t\t &meth_mode,\n\t\t\t\t\t &obj_name,\n\t\t\t\t\t &filter1,\n\t\t\t\t\t &filter2)) {\n#endif /* localmod 014 */\n\t\treturn NULL;\n\t}\n\n\titer_entry = (pbs_iter_item *) GET_NEXT(pbs_iter_list);\n\n\twhile (iter_entry) {\n\t\tif (iter_entry->py_iter == py_self)\n\t\t\tbreak;\n\n\t\titer_entry = (pbs_iter_item *) GET_NEXT(iter_entry->all_iters);\n\t}\n\n\tswitch (meth_mode) {\n\n\t\tcase 1: /* in __init__ method */\n\n\t\t\tif (iter_entry != NULL) { /* must be NULL */\n\t\t\t\tPyErr_SetString(PyExc_AssertionError,\n\t\t\t\t\t\t\"attempted to initialize an already initialized iterator!\");\n\t\t\t\treturn NULL;\n\t\t\t}\n\n\t\t\titer_entry = (pbs_iter_item *) malloc(sizeof(pbs_iter_item));\n\t\t\tif (iter_entry == NULL) {\n\t\t\t\tlog_err(errno, __func__, \"no memory\");\n\t\t\t\tPyErr_SetString(PyExc_AssertionError,\n\t\t\t\t\t\t\"failed to malloc memory\");\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\t(void) memset((char *) iter_entry, (int) 0,\n\t\t\t\t      (size_t) sizeof(pbs_iter_item));\n\t\t\tCLEAR_LINK(iter_entry->all_iters);\n\n\t\t\titer_entry->py_iter = py_self;\n\t\t\tPy_INCREF(py_self);\n\n\t\t\tif (strcmp(obj_name, ITER_QUEUES) == 0) {\n\t\t\t\tif ((filter1 != NULL) && (filter1[0] != '\\0') &&\n\t\t\t\t    (strcmp(filter1, server_name) != 0)) {\n\t\t\t\t\tPyErr_SetString(PyExc_StopIteration, \"\");\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n\t\t\t\titer_entry->data = (pbs_queue *) GET_NEXT(svr_queues);\n\t\t\t} else if (strcmp(obj_name, ITER_JOBS) == 0) {\n\n\t\t\t\tif ((filter1 != NULL) && (filter1[0] != '\\0') &&\n\t\t\t\t    (strcmp(filter1, server_name) != 0)) {\n\t\t\t\t\tPyErr_SetString(PyExc_StopIteration, \"\");\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n#ifdef NAS /* localmod 014 */\n\t\t\t\tif (!ignore_fin &&\n\t\t\t\t    (filter_user == NULL || 
filter_user[0] == '\\0') &&\n\t\t\t\t    (filter2 != NULL) && (filter2[0] != '\\0')) {\n#else\n\t\t\t\tif ((filter2 != NULL) && (filter2[0] != '\\0')) {\n#endif /* localmod 014 */\n\t\t\t\t\t/* refers to the queue name */\n\t\t\t\t\tpque = find_queuebyname(filter2);\n\t\t\t\t\tif (pque == NULL) {\n\t\t\t\t\t\tsprintf(log_buffer, \"queue %s not found\",\n\t\t\t\t\t\t\tfilter2);\n\t\t\t\t\t\tPyErr_SetString(PyExc_ValueError,\n\t\t\t\t\t\t\t\tlog_buffer);\n\t\t\t\t\t\treturn NULL;\n\t\t\t\t\t}\n\t\t\t\t\titer_entry->data = (job *) GET_NEXT(pque->qu_jobs);\n\n\t\t\t\t} else { /* get jobs from server */\n\t\t\t\t\titer_entry->data = (job *) GET_NEXT(svr_alljobs);\n\n#ifdef NAS /* localmod 014 */\n\t\t\t\t\t/* skip jobs according to filters requested for the iterator */\n\t\t\t\t\tjob *njob = (job *) iter_entry->data;\n\t\t\t\t\twhile (njob != NULL &&\n\t\t\t\t\t       ((ignore_fin && check_job_state(njob, JOB_STATE_LTR_FINISHED)) ||\n\t\t\t\t\t\t(filter2 != NULL && filter2[0] != '\\0' && strcmp(filter2, njob->ji_qs.ji_queue)) ||\n\t\t\t\t\t\t(filter_user != NULL && filter_user[0] != '\\0' && is_jattr_set(njob, JOB_ATR_euser) && get_jattr_str(njob, JOB_ATR_euser) != NULL && strcmp(filter_user, get_jattr_str(njob, JOB_ATR_euser))))) {\n\t\t\t\t\t\tnjob = (job *) GET_NEXT(njob->ji_alljobs);\n\t\t\t\t\t}\n\n\t\t\t\t\titer_entry->data = njob;\n#endif /* localmod 014 */\n\t\t\t\t}\n\t\t\t} else if (strcmp(obj_name, ITER_RESERVATIONS) == 0) {\n\t\t\t\tif ((filter1 != NULL) && (filter1[0] != '\\0') &&\n\t\t\t\t    (strcmp(filter1, server_name) != 0)) {\n\t\t\t\t\tPyErr_SetString(PyExc_StopIteration, \"\");\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n\t\t\t\titer_entry->data = (resc_resv *) GET_NEXT(svr_allresvs);\n\t\t\t} else if (strcmp(obj_name, ITER_VNODES) == 0) {\n\t\t\t\tif ((filter1 != NULL) && (filter1[0] != '\\0') &&\n\t\t\t\t    (strcmp(filter1, server_name) != 0)) {\n\t\t\t\t\tPyErr_SetString(PyExc_StopIteration, \"\");\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n\n\t\t\t\tif 
((pbsndlist == NULL) || (svr_totnodes <= 0)) {\n\t\t\t\t\tPyErr_SetString(PyExc_StopIteration, \"\");\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n\t\t\t\titer_entry->data = NULL;\n\n\t\t\t\tfor (vi = 0; vi < svr_totnodes; vi++) {\n\n\t\t\t\t\tif ((pbsndlist[vi]->nd_state & INUSE_DELETED) == 0) {\n\t\t\t\t\t\titer_entry->data =\n\t\t\t\t\t\t\t(struct pbsnode *) pbsndlist[vi];\n\t\t\t\t\t\titer_entry->data_index = vi;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"invalid parameter %s to iter_nextfunc()\",\n\t\t\t\t\tobj_name);\n\t\t\t\tPyErr_SetString(PyExc_AssertionError, log_buffer);\n\t\t\t\treturn NULL;\n\t\t\t}\n\n\t\t\tappend_link(&pbs_iter_list, &iter_entry->all_iters,\n\t\t\t\t    (pbs_iter_item *) iter_entry);\n\t\t\tPy_RETURN_NONE; /* nothing to return since this is __init__ */\n\n\t\tcase 0:\t\t\t\t  /* in next() */\n\t\t\tif (iter_entry == NULL) { /* must be stored internally */\n\t\t\t\tPyErr_SetString(PyExc_AssertionError,\n\t\t\t\t\t\t\"internal iterator should exist during next() call\");\n\t\t\t\treturn NULL;\n\t\t\t}\n\n\t\t\tif (iter_entry->data == NULL) {\n\t\t\t\tPyErr_SetString(PyExc_StopIteration, \"\");\n\t\t\t\treturn NULL;\n\t\t\t}\n\n\t\t\thook_set_mode = C_MODE;\n\t\t\tif (strcmp(obj_name, ITER_RESERVATIONS) == 0) {\n\t\t\t\tpy_object = _pps_helper_get_resv((resc_resv *) iter_entry->data, NULL, HOOK_PERF_FUNC);\n\t\t\t\titer_entry->data = (resc_resv *) GET_NEXT(\n\t\t\t\t\t((resc_resv *) iter_entry->data)->ri_allresvs);\n\t\t\t} else if (strcmp(obj_name, ITER_QUEUES) == 0) {\n\t\t\t\tpy_object = _pps_helper_get_queue((pbs_queue *) iter_entry->data, NULL, HOOK_PERF_FUNC);\n\t\t\t\titer_entry->data = (pbs_queue *) GET_NEXT(\n\t\t\t\t\t((pbs_queue *) iter_entry->data)->qu_link);\n\t\t\t} else if (strcmp(obj_name, ITER_JOBS) == 0) {\n\t\t\t\tpy_object = _pps_helper_get_job((job *) iter_entry->data, NULL, NULL, HOOK_PERF_FUNC);\n\n#ifdef NAS /* localmod 014 */\n\t\t\t\tif (!ignore_fin &&\n\t\t\t\t 
   (filter_user == NULL || filter_user[0] == '\\0') &&\n\t\t\t\t    (filter2 != NULL) && (filter2[0] != '\\0')) {\n#else\n\t\t\t\tif ((filter2 != NULL) && (filter2[0] != '\\0')) {\n#endif /* localmod 014 */\n\n\t\t\t\t\t/* list of jobs filtered by queue   */\n\t\t\t\t\t/* 'filter2', as setup in meth_mode */\n\t\t\t\t\t/* of 1 above. So we need to return */\n\t\t\t\t\t/* the jobs on the same queue       */\n\t\t\t\t\t/* (i.e. use ji_jobque here)        */\n\t\t\t\t\titer_entry->data = (job *) GET_NEXT(\n\t\t\t\t\t\t((job *) iter_entry->data)->ji_jobque);\n\t\t\t\t} else {\n\t\t\t\t\titer_entry->data = (job *) GET_NEXT(\n\t\t\t\t\t\t((job *) iter_entry->data)->ji_alljobs);\n#ifdef NAS /* localmod 014 */\n\t\t\t\t\t/* skip jobs according to filters requested for the iterator */\n\t\t\t\t\tjob *njob = (job *) iter_entry->data;\n\t\t\t\t\twhile (njob != NULL &&\n\t\t\t\t\t       ((ignore_fin && check_job_state(njob, JOB_STATE_LTR_FINISHED)) ||\n\t\t\t\t\t\t(filter2 != NULL && filter2[0] != '\\0' && strcmp(filter2, njob->ji_qs.ji_queue)) ||\n\t\t\t\t\t\t(filter_user != NULL && filter_user[0] != '\\0' && get_jattr_str(njob, JOB_ATR_euser) != NULL && strcmp(filter_user, get_jattr_str(njob, JOB_ATR_euser))))) {\n\t\t\t\t\t\tnjob = (job *) GET_NEXT(njob->ji_alljobs);\n\t\t\t\t\t}\n\n\t\t\t\t\titer_entry->data = njob;\n#endif /* localmod 014 */\n\t\t\t\t}\n\t\t\t} else if (strcmp(obj_name, ITER_VNODES) == 0) {\n\n\t\t\t\tpy_object = _pps_helper_get_vnode((struct pbsnode *) iter_entry->data, NULL, HOOK_PERF_FUNC);\n\n\t\t\t\titer_entry->data = NULL;\n\t\t\t\tvi = iter_entry->data_index + 1;\n\t\t\t\twhile (vi < svr_totnodes) {\n\n\t\t\t\t\tif ((pbsndlist[vi]->nd_state & INUSE_DELETED) == 0) {\n\t\t\t\t\t\titer_entry->data =\n\t\t\t\t\t\t\t(struct pbsnode *) pbsndlist[vi];\n\t\t\t\t\t\titer_entry->data_index = vi;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\tvi++;\n\t\t\t\t}\n\n\t\t\t} else {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"invalid parameter %s to 
iter_nextfunc()\",\n\t\t\t\t\tobj_name);\n\t\t\t\tPyErr_SetString(PyExc_AssertionError, log_buffer);\n\t\t\t\thook_set_mode = PY_MODE;\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\thook_set_mode = PY_MODE;\n\t\t\treturn py_object;\n\n\t\tdefault:\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"invalid method mode %d to iter_nextfunc()\",\n\t\t\t\tmeth_mode);\n\t\t\tPyErr_SetString(PyExc_AssertionError, log_buffer);\n\t}\n\treturn NULL;\n}\n\nconst char pbsv1mod_meth_mark_vnode_set_doc[] =\n\t\"mark_vnode_set(vnode_name, attr_name, attr_value)\\n\\\n\\n\\\n   vnode_name:  name of vnode to set\\n\\\n   attr_name:   the attribute name\\n\\\n   attr_value:  the attribute value\\n\\\n\\n\\\n   Adds to some internal, pending operations table,\\n\\\n   an 'attr_name=attr_value' set operation for the given 'vnode_name'.\\n\\\n\";\n\n/**\n * @brief\n *\tThis is callable in a Python script, for populating the internal\n *\tlist 'pbs_vnode_set_list' with\n *\t\t(<vnode_name>, <attribute_name>) representing a pending \"set\"\n *\toperation.\n *\n * @param[in]\targs[1]\t- the vnode name.\n * @param[in]\targs[2] - the attribute name.\n * @param[in]\targs[3] - the attribute value.\n *\n * @return\tPyObject *\n * @retval\tNULL\t- with an accompanying AssertionError Python exception.\n * @retval\tPy_None - successful execution.\n *\n */\nPyObject *\npbsv1mod_meth_mark_vnode_set(PyObject *self, PyObject *args, PyObject *kwds)\n{\n\n\tstatic char *kwlist[] = {\"vnode_name\", \"attr_name\", \"attr_value\",\n\t\t\t\t NULL};\n\tchar *vnode_name = NULL;\n\tchar *attr_name = NULL;\n\tchar *attr_value = NULL;\n\tvnode_set_req *vn_set_req = NULL;\n\tsvrattrl *plist = NULL;\n\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t \"sss:mark_vnode_set\",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &vnode_name,\n\t\t\t\t\t &attr_name,\n\t\t\t\t\t &attr_value)) {\n\t\treturn NULL;\n\t}\n\tif ((attr_name == NULL) || (attr_name[0] == '\\0') ||\n\t    (attr_value == NULL) || (attr_value[0] == '\\0')) 
{\n\t\tPyErr_SetString(PyExc_AssertionError,\n\t\t\t\t\"mark_vnode_set: bad parameter!\");\n\t\treturn NULL;\n\t}\n\n\tvn_set_req = (vnode_set_req *) GET_NEXT(pbs_vnode_set_list);\n\n\twhile (vn_set_req) {\n\t\tif (strcmp(vn_set_req->vnode_name, vnode_name) == 0)\n\t\t\tbreak;\n\n\t\tvn_set_req = (vnode_set_req *) GET_NEXT(vn_set_req->all_reqs);\n\t}\n\n\tif (vn_set_req == NULL) {\n\t\tvn_set_req = (vnode_set_req *) malloc(sizeof(vnode_set_req));\n\t\tif (vn_set_req == NULL) {\n\t\t\tlog_err(errno, __func__, \"no memory\");\n\t\t\tPyErr_SetString(PyExc_AssertionError,\n\t\t\t\t\t\"failed to malloc memory\");\n\t\t\treturn NULL;\n\t\t}\n\t\t(void) memset((char *) vn_set_req, (int) 0,\n\t\t\t      (size_t) sizeof(vnode_set_req));\n\t\tCLEAR_LINK(vn_set_req->all_reqs);\n\t\tCLEAR_HEAD(vn_set_req->rq_attr);\n\t\tstrncpy(vn_set_req->vnode_name, vnode_name, PBS_MAXNODENAME);\n\t\tappend_link(&pbs_vnode_set_list, &vn_set_req->all_reqs,\n\t\t\t    (vnode_set_req *) vn_set_req);\n\t}\n\n\tif ((plist = find_svrattrl_list_entry(&vn_set_req->rq_attr, attr_name,\n\t\t\t\t\t      NULL)) != NULL) {\n\n\t\t/* let's remove the entry and just recreate. */\n\t\t/* it's too messy and buggy, error prone to have */\n\t\t/* to \"extend\" the svratrl entry!                */\n\n\t\tdelete_link(&plist->al_link);\n\t\tfree(plist);\n\t}\n\n\tif (add_to_svrattrl_list(&vn_set_req->rq_attr, attr_name, NULL,\n\t\t\t\t ((strcmp(attr_name, ATTR_NODE_state) == 0) ? 
vnode_state_to_str(atoi(attr_value)) : attr_value),\n\t\t\t\t ATR_VFLAG_HOOK, NULL) == -1) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t \"failed to add_to_svrattrl_list(%s, 0, %s, ATR_VFLAG_HOOK)\",\n\t\t\t attr_name, attr_value);\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_err(errno, __func__, log_buffer);\n\t\tPyErr_SetString(PyExc_AssertionError, \"\");\n\t\treturn NULL;\n\t}\n\n\tPy_RETURN_NONE; /* nothing to return on success */\n}\n\nconst char pbsv1mod_meth_vnode_state_to_str_doc[] =\n\t\"vnode_state_to_str(state_bit)\\n\\\n\\n\\\n   state_bit:\tvnode state in bit flag.\\n\\\n\\n\\\n   Returns the human readable form of 'state_bit'\\n\\\n   Ex: vnode_state_to_str(3) -> \\\"offline,down\\\"\\n\\\n\";\n\n/**\n * @brief\n *\tThis is the C->Python wrapper for the vnode_state_to_str() function,\n *\twhich is callable in a Python script.\n *\n * @param[in]\tself\t- owning object\n * @param[in]\targs[1]\t- the state bit value.\n * @param[in]\tkwds\t- keywords to objects mappings\n *\n * @return\tPyObject *\n * @retval\tA Python string corresponding to args[1].\n *\n */\nPyObject *\npbsv1mod_meth_vnode_state_to_str(PyObject *self, PyObject *args, PyObject *kwds)\n{\n\tstatic char *kwlist[] = {\"state_bit\", NULL};\n\tint state_bit;\n\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t \"i:vnode_state_to_str\",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &state_bit)) {\n\t\treturn NULL;\n\t}\n\n\treturn (PyUnicode_FromString(vnode_state_to_str(state_bit)));\n}\n\nconst char pbsv1mod_meth_vnode_sharing_to_str_doc[] =\n\t\"vnode_sharing_to_str(share_val)\\n\\\n\\n\\\n   share_val:\tvnode sharing value in int.\\n\\\n\\n\\\n   Returns the human readable form of 'share_val'\\n\\\n   Ex: vnode_sharing_to_str(5) -> \\\"force_excl\\\"\\n\\\n\";\n\n/**\n * @brief\n *\tThis is the C->Python wrapper for the vnode_sharing_to_str() function,\n *\twhich is callable in a Python script.\n *\n * @param[in]\tself\t- owning object\n * @param[in]\targs[1]\t- the 
vnode sharing value.\n * @param[in]\tkwds\t- keywords to objects mappings\n *\n * @return\tPyObject *\n * @retval\tA Python string corresponding to args[1].\n *\n */\nPyObject *\npbsv1mod_meth_vnode_sharing_to_str(PyObject *self, PyObject *args, PyObject *kwds)\n{\n\tstatic char *kwlist[] = {\"share_val\", NULL};\n\tint share_val;\n\tchar *vns;\n\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t \"i:vnode_sharing_to_str\",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &share_val)) {\n\t\treturn NULL;\n\t}\n\n\tvns = vnode_sharing_to_str(share_val);\n\treturn (PyUnicode_FromString(vns ? vns : \"\"));\n}\n\nconst char pbsv1mod_meth_vnode_ntype_to_str_doc[] =\n\t\"vnode_ntype_to_str(ntype)\\n\\\n\\n\\\n   ntype:\tvnode type in int.\\n\\\n\\n\\\n   Returns the node type\\n\\\n   Ex: vnode_ntype_to_str(1) -> \\\"PBS\\\"\\n\\\n\";\n\n/**\n * @brief\n *\tThis is the C->Python wrapper for the vnode_ntype_to_str() function,\n *\twhich is callable in a Python script.\n *\n * @param[in]\tself\t- owning object\n * @param[in]\targs[1]\t- the vnode type value.\n * @param[in]\tkwds\t- keywords to objects mappings\n *\n * @return\tPyObject *\n * @retval\tA Python string corresponding to args[1].\n *\n */\nPyObject *\npbsv1mod_meth_vnode_ntype_to_str(PyObject *self, PyObject *args, PyObject *kwds)\n{\n\tstatic char *kwlist[] = {\"node_type\", NULL};\n\tint node_type;\n\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t \"i:vnode_ntype_to_str\",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &node_type)) {\n\t\treturn NULL;\n\t}\n\n\treturn (PyUnicode_FromString(vnode_ntype_to_str(node_type)));\n}\n\n/**\n *\n * @brief\n *\tChecks if there's at least one pending \"set\" vnode operation in\n *\tthe internal list 'pbs_vnode_set_list'.\n *\n * @return int\n * @retval 1\t- if a \"set\" operation was found.\n * @retval 0\t- if not found.\n *\n */\nint\n_pbs_python_has_vnode_set(void)\n{\n\tif (GET_NEXT(pbs_vnode_set_list) != NULL)\n\t\treturn (1);\n\treturn (0);\n}\n\n/**\n * @brief\n 
*\tPerform all the \"set\" operations found in 'pbs_vnode_set_list'.\n * @par\n *\tThis goes through each entry in the internal 'pbs_vnode_set_list',\n *\tand for every vnode_name, obtain the struct pbsnode * entry in the\n *\tsystem pbsdnlist[] table, and perform the set operation for the\n *\tcorresponding rq_attr entry.\n *\n * @note\n *\tFailures in set operations will be reflected in the daemons logs\n *\toutput.\n *\n */\nvoid\n_pbs_python_do_vnode_set(void)\n{\n\n\tvnode_set_req *vn_set_req = NULL;\n\tstruct pbsnode *pnode;\n\tint bad = 0;\n\tint rc;\n\tchar *hook_name = NULL;\n\tsvrattrl *plist;\n\tsvrattrl *pal;\n\n\thook_name = _pbs_python_event_get_attrval(PY_EVENT_HOOK_NAME);\n\tif (hook_name == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\"No hook name associated with set vnode operation!\");\n\t\treturn;\n\t}\n\n\tvn_set_req = (vnode_set_req *) GET_NEXT(pbs_vnode_set_list);\n\twhile (vn_set_req != NULL) {\n\n\t\tpnode = find_nodebyname(vn_set_req->vnode_name);\n\n\t\tif ((pnode == NULL) ||\n\t\t    (pnode->nd_state & INUSE_DELETED)) {\n\t\t\tvn_set_req = (vnode_set_req *) GET_NEXT(vn_set_req->all_reqs);\n\t\t\tcontinue;\n\t\t}\n\n\t\tplist = (svrattrl *) GET_NEXT(vn_set_req->rq_attr);\n\n\t\trc = mgr_set_attr(pnode->nd_attr, node_attr_idx, node_attr_def, ND_ATR_LAST,\n\t\t\t\t  plist, ATR_PERM_ALLOW_INDIRECT, &bad, (void *) pnode, ATR_ACTION_ALTER);\n\n\t\tif (rc != 0) {\n\t\t\tchar *pbse_err;\n\t\t\tchar raw_err[10];\n\t\t\tsvrattrl *pal;\n\t\t\tint i;\n\n\t\t\tpbse_err = pbse_to_txt(rc);\n\t\t\tsnprintf(raw_err, sizeof(raw_err) - 1, \"%d\", rc);\n\n\t\t\ti = 0;\n\t\t\tpal = plist;\n\t\t\tbad--; /* mgr_set_attr returns +1 of actual 'bad' index */\n\t\t\twhile (pal) {\n\t\t\t\tif (i == bad) {\n\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\"vnode %s: failed to set %s to %s: %s\",\n\t\t\t\t\t\tvn_set_req->vnode_name,\n\t\t\t\t\t\tpal->al_name, pal->al_value ? pal->al_value : \"\",\n\t\t\t\t\t\tpbse_err ? 
pbse_err : raw_err);\n\t\t\t\t\tlog_err(PBSE_SYSTEM, __func__, log_buffer);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tif (hook_debug.output_fp != NULL) {\n\t\t\t\t\tfprintf(hook_debug.output_fp,\n\t\t\t\t\t\t\"%s(%s).%s=%s\\n\",\n\t\t\t\t\t\tSERVER_VNODE_OBJECT,\n\t\t\t\t\t\tpnode->nd_name,\n\t\t\t\t\t\tpal->al_name,\n\t\t\t\t\t\tpal->al_value);\n\t\t\t\t}\n\t\t\t\ti++;\n\t\t\t\tpal = (svrattrl *) GET_NEXT(pal->al_link);\n\t\t\t}\n\t\t\treturn;\n\t\t} else {\n\n\t\t\tmgr_log_attr(msg_man_set, plist,\n\t\t\t\t     PBS_EVENTCLASS_NODE, pnode->nd_name, hook_name);\n\n\t\t\tpal = plist;\n\t\t\twhile (pal) {\n\t\t\t\tif (hook_debug.output_fp != NULL) {\n\t\t\t\t\tfprintf(hook_debug.output_fp,\n\t\t\t\t\t\t\"%s(%s).%s=%s\\n\",\n\t\t\t\t\t\tSERVER_VNODE_OBJECT,\n\t\t\t\t\t\tpnode->nd_name,\n\t\t\t\t\t\tpal->al_name,\n\t\t\t\t\t\tpal->al_value);\n\t\t\t\t}\n\t\t\t\tpal = (svrattrl *) GET_NEXT(pal->al_link);\n\t\t\t}\n\t\t}\n\n\t\tvn_set_req = (vnode_set_req *) GET_NEXT(vn_set_req->all_reqs);\n\t}\n\n\tsave_nodes_db(0, NULL);\n}\n\nconst char pbsv1mod_meth_set_python_mode_doc[] =\n\t\"set_python_mode()\\n\\\n\\n\\\n  returns:\\n\\\n         Sets the internal variable 'hook_set_mode' to PY_MODE.\\n\\\n  \t This is an internal function.\\n\\\n\";\n\n/**\n *\n * @brief\n *\tThis is the C->Python wrapper program to\n *\tset the internal variable 'hook_set_mode' to PY_MODE.\n *\n * @return Python object\n * @retval PY_TRUE\n *\n */\nPyObject *\npbsv1mod_meth_set_python_mode(void)\n{\n\tPyObject *ret;\n\thook_set_mode = PY_MODE;\n\tret = Py_True;\n\tPy_INCREF(ret);\n\treturn (ret);\n}\n\nconst char pbsv1mod_meth_set_c_mode_doc[] =\n\t\"set_c_mode()\\n\\\n\\n\\\n  returns:\\n\\\n         Sets the internal variable 'hook_set_mode' to C_MODE.\\n\\\n  \t This is an internal function.\\n\\\n\";\n\n/**\n *\n * @brief\n *\tThis is the C->Python wrapper program to\n *\tset the internal variable 'hook_set_mode' to C_MODE.\n *\n * @return Python object\n * @retval PY_TRUE\n *\n */\nPyObject 
*\npbsv1mod_meth_set_c_mode(void)\n{\n\tPyObject *ret;\n\thook_set_mode = C_MODE;\n\tret = Py_True;\n\tPy_INCREF(ret);\n\treturn (ret);\n}\n\nconst char pbsv1mod_meth_get_python_daemon_name_doc[] =\n\t\"get_python_daemon_name()\\n\\\n\\n\\\n  returns:\\n\\\n         Returns the svr_interp_data.daemon_name value.\\n\\\n  \t This is an internal function.\\n\\\n\";\n\n/**\n * @brief\n *\tThis is the C->Python wrapper program for\n *\treturning the svr_interp_data.daemon_name value.\n *\tThis is callable within a Python script.\n *\n * @return Python str\n * @retval name of \"daemon\" invoking Python interpreter.\n *\n */\nPyObject *\npbsv1mod_meth_get_python_daemon_name(void)\n{\n\tif (svr_interp_data.daemon_name == NULL)\n\t\tPy_RETURN_NONE;\n\n\treturn (PyUnicode_FromString(svr_interp_data.daemon_name));\n}\n\nconst char pbsv1mod_meth_str_to_vnode_state_doc[] =\n\t\"str_to_vnode_state(state_str)\\n\\\n\\n\\\n   state_str:\tvnode state as a string.\\n\\\n\\n\\\n   Returns the numeric form of 'state_str', as a string\\n\\\n   Ex: str_to_vnode_state(\\\"offline,down\\\") -> \\\"3\\\"\\n\\\n\";\n\n/**\n * @brief\n *\tThis is the C->Python wrapper for the str_to_vnode_state()\n *\tfunction, which is callable in a Python script.\n *\n * @param[in]\tself\t- owning object\n * @param[in]\targs[1]\t- the state str value.\n * @param[in]\tkwds\t- keywords to objects mappings\n *\n * @return\tPyObject *\n * @retval\tA Python string corresponding to args[1].\n *\n */\nPyObject *\npbsv1mod_meth_str_to_vnode_state(PyObject *self, PyObject *args, PyObject *kwds)\n{\n\tstatic char *kwlist[] = {\"state_str\", NULL};\n\tchar *state_str = NULL;\n\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t \"s:str_to_vnode_state\",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &state_str)) {\n\t\treturn NULL;\n\t}\n\n\treturn (PyUnicode_FromFormat(\"%d\", str_to_vnode_state(state_str)));\n}\n\nconst char pbsv1mod_meth_str_to_vnode_ntype_doc[] =\n\t\"str_to_vnode_ntype(type_str)\\n\\\n\\n\\\n   
type_str:\tvnode type as a string.\\n\\\n\\n\\\n   Returns the numeric form of 'type_str'\\n\\\n   Ex: str_to_vnode_ntype(\\"pbs\\") -> \\"0\\"\\n\\\n";\n\n/**\n * @brief\n *\tThis is the C->Python wrapper program to the str_to_vnode_ntype()\n *\tfunction, which is callable in a Python script.\n *\n * @param[in]\tself\t- owning object\n * @param[in]\targs[1]\t- the type str value.\n * @param[in]\tkwds\t- keywords to objects mappings\n *\n * @return\tPyObject *\n * @retval\tA Python string corresponding to args[1].\n *\n */\nPyObject *\npbsv1mod_meth_str_to_vnode_ntype(PyObject *self, PyObject *args, PyObject *kwds)\n{\n\tstatic char *kwlist[] = {"type_str", NULL};\n\tchar *type_str = NULL;\n\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t "s:str_to_vnode_ntype",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &type_str)) {\n\t\treturn NULL;\n\t}\n\n\treturn (PyUnicode_FromFormat("%d", str_to_vnode_ntype(type_str)));\n}\n\nconst char pbsv1mod_meth_str_to_vnode_sharing_doc[] =\n\t"str_to_vnode_sharing(share_str)\\n\\\n\\n\\\n   share_str:\tvnode share as a string.\\n\\\n\\n\\\n   Returns the numeric form of 'share_str'\\n\\\n   Ex: str_to_vnode_sharing(\\"default_shared\\") -> \\"1\\"\\n\\\n";\n\n/**\n * @brief\n *\tThis is the C->Python wrapper program to the str_to_vnode_sharing()\n *\tfunction, which is callable in a Python script.\n *\n * @param[in]\tself\t- owning object\n * @param[in]\targs[1]\t- the share str value.\n * @param[in]\tkwds\t- keywords to objects mappings\n *\n * @return\tPyObject *\n * @retval\tA Python string corresponding to args[1].\n *\n */\nPyObject *\npbsv1mod_meth_str_to_vnode_sharing(PyObject *self, PyObject *args, PyObject *kwds)\n{\n\tstatic char *kwlist[] = {"share_str", NULL};\n\tchar *share_str = NULL;\n\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t "s:str_to_vnode_sharing",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &share_str)) {\n\t\treturn NULL;\n\t}\n\treturn (PyUnicode_FromFormat("%d", 
str_to_vnode_sharing(share_str)));\n}\n\nconst char pbsv1mod_meth_get_pbs_server_name_doc[] =\n\t"get_pbs_server_name()\\n\\\n\\n\\\n  returns:\\n\\\n         Returns the configured server host name.\\n\\\n  \t This is an internal function.\\n\\\n";\n\n/**\n *\n * @brief\n *\tThis is the C->Python wrapper program for\n *\treturning the PBS_SERVER value in pbs.conf file.\n *\tThis is callable within a Python script.\n *\n * @return Python str\n * @retval configured server host name.\n *\n */\nPyObject *\npbsv1mod_meth_get_pbs_server_name(void)\n{\n\tif (server_host[0] == '\\0')\n\t\tPy_RETURN_NONE;\n\n\treturn (PyUnicode_FromString(server_host));\n}\n\nconst char pbsv1mod_meth_get_local_host_name_doc[] =\n\t"get_local_host_name()\\n\\\n\\n\\\n  returns:\\n\\\n         Returns the local host name associated with the currently instantiated\\n\\\n\t Python interpreter.\\n\\\n  \t This is an internal function.\\n\\\n";\n\n/**\n *\n * @brief\n *\tThis is the C->Python wrapper program for\n *\treturning the local host name associated with an instantiated\n *\tPython interpreter.\n *\tThis is callable within a Python script.\n *\n * @return Python str\n * @retval local hostname associated with the current Python interpreter.\n *\n */\nPyObject *\npbsv1mod_meth_get_local_host_name(void)\n{\n\treturn (PyUnicode_FromString((char *) svr_interp_data.local_host_name));\n}\n\n/**\n * @brief\n * \tReturns the command line string that was set in a hook\n *\tas an alternate to the normal reboot() call.\n */\nchar *\npbs_python_get_reboot_host_cmd(void)\n{\n\tif (hook_reboot_host_cmd[0] != '\\0')\n\t\treturn ((char *) hook_reboot_host_cmd);\n\n\treturn NULL;\n}\n\n/**\n * @brief\n *\tReturns the value of the flag that tells pbs to reboot host or not.\n *\n * @return\tint\n * @retval\tTRUE \t- \tmeans yes, reboot host\n * @retval\tFALSE \t- \tmeans no.\n */\nint\npbs_python_get_reboot_host_flag(void)\n{\n\treturn (hook_reboot_host);\n}\n\n/**\n * @brief\n *\tSets 
the flag that tells pbs to reboot current host.\n *\tAlso, if 'cmd' is not NULL, sets the reboot command to use as an\n *\talternate to the default reboot() call.\n *\n * @param[in]\tcmd - the command line (including arguments) that should\n *\t\t\tbe used when the time comes to reboot host.\n */\nvoid\npbs_python_reboot_host(char *cmd)\n{\n\n\thook_reboot_host = TRUE;\n\thook_reboot_host_cmd[0] = '\\0';\n\tif (cmd != NULL) {\n\t\tsnprintf((char *) hook_reboot_host_cmd, sizeof(hook_reboot_host_cmd),\n\t\t\t "%s", cmd);\n\t}\n}\n\nconst char pbsv1mod_meth_reboot_doc[] =\n\t"reboot([cmd])\\n\\\n\\n\\\n         Flags pbs that current host should be rebooted.\\n\\\n  where:\\n\\\n\\n\\\n   cmd:  optional command line used by pbs to reboot the host, as an\\n\\\n   alternate to the default reboot() call.\\n\\\n";\n\n/**\n *\n * @brief\n *\tThis is the C->Python wrapper program to the\n *\tpbs_python_reboot_host() call.\n *\n *\tThis is callable within a Python script.\n *\n * @param[in]\targs - arguments to this Python function\n * @param[in]\tkwds - keyword mappings to the alternate 'cmd' used for\n *\t\t\trebooting host.\n *\n * @return PyObject *\n * @retval Python None\n */\nPyObject *\npbsv1mod_meth_reboot(PyObject *self, PyObject *args, PyObject *kwds)\n{\n\n\tstatic char *kwlist[] = {"cmd", NULL};\n\tchar *cmd = NULL;\n\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t "|s:reboot",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &cmd)) {\n\t\treturn NULL;\n\t}\n\tpbs_python_reboot_host(cmd);\n\n\tPy_RETURN_NONE;\n}\n\n/**\n * @brief\n *\tReturns the value of the flag that tells pbs to restart the scheduler's cycle or not.\n * @return \tint\n * @retval\tTRUE\t - \tmeans yes, restart the scheduling cycle\n * @retval\tFALSE \t- \tmeans no.\n */\nint\npbs_python_get_scheduler_restart_cycle_flag(void)\n{\n\treturn (hook_scheduler_restart_cycle);\n}\n\n/**\n * @brief\n *\tSets the flag that tells pbs server to NOT tell scheduler to restart its\n *\tscheduling cycle.\n 
*/\nvoid\npbs_python_no_scheduler_restart_cycle(void)\n{\n\n\thook_scheduler_restart_cycle = FALSE;\n}\n\n/**\n * @brief\n *\tSets the flag that tells pbs server to tell scheduler to restart its\n *\tscheduling cycle.\n */\nvoid\npbs_python_scheduler_restart_cycle(void)\n{\n\n\thook_scheduler_restart_cycle = TRUE;\n}\n\nconst char pbsv1mod_meth_scheduler_restart_cycle_doc[] =\n\t"scheduler_restart_cycle(<server_host>)\\n\\\n\\n\\\n         Flags pbs server at 'server_host' to tell scheduler to restart its scheduling cycle.\\n\\\n  where:\\n\\\n\\n\\\n   server_host:  the name of the server to send the request to.\\n\\\n   NOTE: Currently, only supported on the same host.\\n\\\n";\n\n/**\n *\n * @brief\n *\tThis is the C->Python wrapper program to the\n *\tpbs_python_scheduler_restart_cycle() call.\n *\n *\tThis is callable within a Python script.\n *\n * @param[in]\targs - arguments to this Python function\n * @param[in]\tkwds - keyword mappings to the 'server_host' needing a\n *\t\t\tscheduler restart cycle.\n *\n * @return PyObject *\n * @retval Python None\n */\nPyObject *\npbsv1mod_meth_scheduler_restart_cycle(PyObject *self, PyObject *args, PyObject *kwds)\n{\n\n\tstatic char *kwlist[] = {"server_host", NULL};\n\tchar *shost = NULL;\n\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t "|s:scheduler_restart_cycle",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &shost)) {\n\t\treturn NULL;\n\t}\n\n\t/* 'server_host' is optional; a NULL 'shost' means the local server host */\n\tif ((shost != NULL) &&\n\t    (strcmp(shost, LOCALHOST_SHORTNAME) != 0) &&\n\t    (strcmp(shost, LOCALHOST_FULLNAME) != 0) &&\n\t    (strcmp(shost, pbs_conf.pbs_server_name) != 0)) {\n\t\tPyErr_SetString(PyExc_NotImplementedError,\n\t\t\t\t"Allowed only to owning pbs server host");\n\t\treturn NULL;\n\t}\n\n\tpbs_python_scheduler_restart_cycle();\n\n\tPy_RETURN_NONE;\n}\n\nconst char pbsv1mod_meth_set_pbs_statobj_doc[] =\n\t"set_pbs_statobj(function_name)\\n\\\n\\n\\\n   function_name:  name of function that creates/populates a PBS object \\n\\\n\\n\\\n";\n\n/**\n * @brief\n *\tset the 
Python function used to create/populate PBS objects.\n */\nPyObject *\npbsv1mod_meth_set_pbs_statobj(PyObject *self, PyObject *args,\n\t\t\t      PyObject *kwds)\n{\n\tstatic char *kwlist[] = {"func", NULL};\n\tPyObject *f = NULL;\n\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t "O:set_pbs_statobj", kwlist, &f)) {\n\t\tPyErr_SetString(PyExc_AssertionError,\n\t\t\t\t"set_pbs_statobj: Failed to parse arguments");\n\t\treturn NULL;\n\t}\n\tif (!PyCallable_Check(f)) {\n\t\tPyErr_SetString(PyExc_AssertionError,\n\t\t\t\t"Failed to get pbs_statobj function");\n\t\treturn NULL;\n\t}\n\tPy_XINCREF(f);\n\tPy_XDECREF(py_pbs_statobj); /* release any previously set value    */\n\t/* NOTE: *XDECREF() is safe where      */\n\t/* if py_pbs_statobj is NULL, then     */\n\t/* nothing is released.                */\n\t/* Current value of py_pbs_statobj is  */\n\t/* released when Python interpreter is */\n\t/* restarted, during call to           */\n\t/* pbs_python_unload_python_types().   */\n\tpy_pbs_statobj = f;\n\tPy_RETURN_NONE;\n}\n\n/**\n *\n * @brief\n * \tLooks into the current hook event object's job's 'attr_name'\n *\tattribute, and returns the attribute value.\n *\n * @param[in]\tattr_name - the name of an attribute.\n *\n * @return char *\n * @retval \tstring - the string value returned by pbs_python_object_str(),\n * \t\t\twhich returns a fixed memory area that gets\n *\t\t\toverwritten by subsequent calls to this function.\n *\t\t\tSo the return value of this function must be\n *\t\t\timmediately used.\n * @retval\tNULL - if no proper value could be returned.\n *\n */\nchar *\n_pbs_python_event_job_getval(char *attr_name)\n{\n\tPyObject *py_job = NULL;\n\tPyObject *py_val = NULL;\n\tchar *strval = NULL;\n\n\tif (py_hook_pbsevent == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__, "No hook event found!");\n\t\treturn NULL;\n\t}\n\n\tif (!PyObject_HasAttrString(py_hook_pbsevent, PY_EVENT_PARAM_JOB)) {\n\t\tLOG_ERROR_ARG2("%s: does not have attribute <%s>",\n\t\t\t       PY_TYPE_EVENT, 
PY_EVENT_PARAM_JOB);\n\t\treturn NULL;\n\t}\n\n\tpy_job = PyObject_GetAttrString(py_hook_pbsevent, PY_EVENT_PARAM_JOB); /* NEW */\n\tif (py_job == NULL || (py_job == Py_None)) {\n\t\tLOG_ERROR_ARG2(\"%s: does not have a value for <%s>\",\n\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOB);\n\t\treturn NULL;\n\t}\n\n\tif (PyObject_HasAttrString(py_job, attr_name)) {\n\n\t\tpy_val = PyObject_GetAttrString(py_job,\n\t\t\t\t\t\tattr_name); /* NEW */\n\n\t\tif ((py_val != NULL) && (py_val != Py_None)) {\n\n\t\t\tstrval = pbs_python_object_str(py_val);\n\t\t}\n\t}\n\n\tPy_CLEAR(py_job);\n\tPy_CLEAR(py_val);\n\n\treturn (strval);\n}\n\n/**\n *\n * @brief\n * \tLooks into the current hook event object's job's 'attr_name'\n *\tparameter of type resource (e.g. pbs.event().job.Resource_List),\n *\tand returns the resource 'resc_name' value, if it was set inside\n *\ta hook script.\n *\n * @param[in]\tattr_name - the name of an attribute of type resource.\n * @param[in]\tresc_name - the name of the resource whose value is being\n *\t\t\t\trequested.\n * @return char *\n * @retval \tstring - the string value returned by pbs_python_object_str(),\n * \t\t\twhich returns a fixed memory area that gets\n *\t\t\toverwritten by subsequent calls to this function.\n *\t\t\tSo the return value of this function must be\n *\t\t\timmediately used.\n * @retval\tNULL - if no proper value could be returned.\n *\n */\nchar *\n_pbs_python_event_jobresc_getval_hookset(char *attr_name, char *resc_name)\n{\n\tPyObject *py_job = NULL;\n\tPyObject *py_jobresc = NULL;\n\tPyObject *py_attr_hookset_dict = NULL;\n\tPyObject *py_rescval = NULL;\n\n\tchar *strval = NULL;\n\n\tif (py_hook_pbsevent == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"No hook event found!\");\n\t\treturn NULL;\n\t}\n\n\tif (!PyObject_HasAttrString(py_hook_pbsevent, PY_EVENT_PARAM_JOB)) {\n\t\tLOG_ERROR_ARG2(\"%s: does not have attribute <%s>\",\n\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOB);\n\t\treturn NULL;\n\t}\n\n\tpy_job = 
PyObject_GetAttrString(py_hook_pbsevent, PY_EVENT_PARAM_JOB); /* NEW */\n\n\tif (py_job == NULL || (py_job == Py_None)) {\n\t\tLOG_ERROR_ARG2(\"%s: does not have a value for <%s>\",\n\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOB);\n\t\treturn NULL;\n\t}\n\n\t/* Get:pbs.event().job.<attr_name>[]\n\t Ex. pbs.event().job.Resource_List[]\n\t */\n\tpy_jobresc = PyObject_GetAttrString(py_job, attr_name); /* NEW */\n\n\tif (py_jobresc == NULL || (py_jobresc == Py_None)) {\n\t\tLOG_ERROR_ARG2(\"%s: does not have a value for <%s>\",\n\t\t\t       PY_TYPE_JOB, attr_name);\n\t\tgoto jobresc_getval_hookset_exit;\n\t}\n\n\t/*\n\t * Get the attributes that have been set in the hook script via the\n\t * _attributes_hook_set dictionary.\n\t */\n\tpy_attr_hookset_dict = PyObject_GetAttrString(\n\t\tpy_jobresc, PY_ATTRIBUTES_HOOK_SET); /* NEW */\n\tif (py_attr_hookset_dict == NULL) {\n\t\tLOG_ERROR_ARG2(\"%s: does not have a value for <%s>\",\n\t\t\t       attr_name, PY_ATTRIBUTES_HOOK_SET);\n\t\tgoto jobresc_getval_hookset_exit;\n\t}\n\tif (!PyDict_Check(py_attr_hookset_dict)) {\n\t\tLOG_ERROR_ARG2(\"%s: <%s> is not a dict\",\n\t\t\t       attr_name, PY_ATTRIBUTES_HOOK_SET);\n\t\tgoto jobresc_getval_hookset_exit;\n\t}\n\n\t/* And this all boils down to (whew!):\n\t Ex. 
pbs.event().job.Resource_List[]._attributes_hook_set[<pbs.event().job.Resource_List[]'s instance>][<resc_name>] = <resc_val>\n\t */\n\n\tif (PyDict_GetItemString(py_attr_hookset_dict, resc_name) != NULL) {\n\t\tif (PyObject_HasAttrString(py_jobresc, resc_name)) {\n\t\t\tpy_rescval = PyObject_GetAttrString(py_jobresc,\n\t\t\t\t\t\t\t    resc_name); /* NEW */\n\n\t\t\tif ((py_rescval != NULL) && (py_rescval != Py_None)) {\n\n\t\t\t\tstrval = pbs_python_object_str(py_rescval);\n\t\t\t}\n\t\t}\n\t}\n\njobresc_getval_hookset_exit:\n\n\tPy_CLEAR(py_job);\n\tPy_CLEAR(py_jobresc);\n\tPy_CLEAR(py_attr_hookset_dict);\n\tPy_CLEAR(py_rescval);\n\n\treturn (strval);\n}\n/**\n *\n * @brief\n * \tLooks into the current hook event object's job's 'attr_name'\n *\tparameter of type resource (e.g. pbs.event().job.Resource_List),\n *\tand returns the resource 'resc_name' value.\n *\n * @param[in]\tattr_name - the name of an attribute of type resource.\n * @param[in]\tresc_name - the name of the resource whose value is being\n *\t\t\t\trequested.\n * @return char *\n * @retval \tstring - the string value returned by pbs_python_object_str(),\n * \t\t\twhich returns a fixed memory area that gets\n *\t\t\toverwritten by subsequent calls to this function.\n *\t\t\tSo the return value of this function must be\n *\t\t\timmediately used.\n * @retval\tNULL - if no proper value could be returned.\n *\n */\nchar *\n_pbs_python_event_jobresc_getval(char *attr_name, char *resc_name)\n{\n\tPyObject *py_job = NULL;\n\tPyObject *py_jobresc = NULL;\n\tPyObject *py_rescval = NULL;\n\tchar *strval = NULL;\n\n\tif (py_hook_pbsevent == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"No hook event found!\");\n\t\treturn NULL;\n\t}\n\n\tif (!PyObject_HasAttrString(py_hook_pbsevent, PY_EVENT_PARAM_JOB)) {\n\t\tLOG_ERROR_ARG2(\"%s: does not have attribute <%s>\",\n\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOB);\n\t\treturn NULL;\n\t}\n\n\tpy_job = PyObject_GetAttrString(py_hook_pbsevent, PY_EVENT_PARAM_JOB); 
/* NEW */\n\n\tif (py_job == NULL || (py_job == Py_None)) {\n\t\tLOG_ERROR_ARG2(\"%s: does not have a value for <%s>\",\n\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOB);\n\t\treturn NULL;\n\t}\n\tpy_jobresc = PyObject_GetAttrString(py_job, attr_name); /* NEW */\n\n\tif (py_jobresc == NULL || (py_jobresc == Py_None)) {\n\t\tLOG_ERROR_ARG2(\"%s: does not have a value for <%s>\",\n\t\t\t       PY_TYPE_JOB, attr_name);\n\t\tgoto jobresc_getval_exit;\n\t}\n\n\tif (PyObject_HasAttrString(py_jobresc, resc_name)) {\n\n\t\tpy_rescval = PyObject_GetAttrString(py_jobresc,\n\t\t\t\t\t\t    resc_name); /* NEW */\n\n\t\tif ((py_rescval != NULL) && (py_rescval != Py_None)) {\n\n\t\t\tstrval = pbs_python_object_str(py_rescval);\n\t\t}\n\t}\n\njobresc_getval_exit:\n\tPy_CLEAR(py_jobresc);\n\tPy_CLEAR(py_job);\n\tPy_CLEAR(py_rescval);\n\treturn (strval);\n}\n\n/**\n *\n * @brief\n * \tClears/Empties the current hook event object's job's\n *\tPY_ATTRIBUTES_HOOK_SET dictionary. This is the set of resources that\n *      have been flagged as set in a hook script.\n *\n * @param[in]\tattr_name - the name of an attribute of type resource.\n * @return int\n * @retval \t0 - success\n * @retval\t1 - if unsuccessful clearing the dictionary.\n *\n */\nint\n_pbs_python_event_jobresc_clear_hookset(char *attr_name)\n{\n\tPyObject *py_job = NULL;\n\tPyObject *py_jobresc = NULL;\n\tPyObject *py_attr_hookset_dict = NULL;\n\tint rc = 1;\n\n\tif (py_hook_pbsevent == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"No hook event found!\");\n\t\treturn (1);\n\t}\n\n\tif (!PyObject_HasAttrString(py_hook_pbsevent, PY_EVENT_PARAM_JOB)) {\n\t\tLOG_ERROR_ARG2(\"%s: does not have attribute <%s>\",\n\t\t\t       PY_TYPE_EVENT, PY_EVENT_PARAM_JOB);\n\t\treturn (1);\n\t}\n\n\tpy_job = PyObject_GetAttrString(py_hook_pbsevent, PY_EVENT_PARAM_JOB); /* NEW */\n\n\tif (py_job == NULL || (py_job == Py_None)) {\n\t\tLOG_ERROR_ARG2(\"%s: does not have a value for <%s>\",\n\t\t\t       PY_TYPE_EVENT, 
PY_EVENT_PARAM_JOB);\n\t\treturn (1);\n\t}\n\n\t/* Get:pbs.event().job.<attr_name>[]\n\t Ex. pbs.event().job.Resource_List[]\n\t */\n\tpy_jobresc = PyObject_GetAttrString(py_job, attr_name); /* NEW */\n\n\tif (py_jobresc == NULL || (py_jobresc == Py_None)) {\n\t\tLOG_ERROR_ARG2("%s: does not have a value for <%s>",\n\t\t\t       PY_TYPE_JOB, attr_name);\n\t\tgoto jobresc_clear_hookset_exit;\n\t}\n\n\t/*\n\t * Get the attributes that have been set in the hook script via the\n\t * _attributes_hook_set dictionary.\n\t */\n\tpy_attr_hookset_dict = PyObject_GetAttrString(\n\t\tpy_jobresc, PY_ATTRIBUTES_HOOK_SET); /* NEW */\n\tif (py_attr_hookset_dict == NULL) {\n\t\tLOG_ERROR_ARG2("%s: does not have a value for <%s>",\n\t\t\t       attr_name, PY_ATTRIBUTES_HOOK_SET);\n\t\tgoto jobresc_clear_hookset_exit;\n\t}\n\tif (!PyDict_Check(py_attr_hookset_dict)) {\n\t\tLOG_ERROR_ARG2("%s: <%s> is not a dict",\n\t\t\t       attr_name, PY_ATTRIBUTES_HOOK_SET);\n\t\tgoto jobresc_clear_hookset_exit;\n\t}\n\n\tPyDict_Clear(py_attr_hookset_dict);\n\trc = 0;\n\njobresc_clear_hookset_exit:\n\n\tPy_CLEAR(py_job);\n\tPy_CLEAR(py_jobresc);\n\tPy_CLEAR(py_attr_hookset_dict);\n\n\treturn (rc);\n}\n\nconst char pbsv1mod_meth_size_to_kbytes_doc[] =\n\t"size_to_kbytes(py_size)\\n\\\n\\n\\\n   py_size: Python size object\\n\\\n\\n\\\n";\n\n/**\n * @brief\n *\tconvert and return the size to kilobytes.\n *\n */\nPyObject *\npbsv1mod_meth_size_to_kbytes(PyObject *self, PyObject *args,\n\t\t\t     PyObject *kwds)\n{\n\tstatic char *kwlist[] = {"py_size", NULL};\n\tPyObject *l = NULL;\n\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t "O:size_to_kbytes", kwlist, &l)) {\n\t\tPyErr_SetString(PyExc_AssertionError,\n\t\t\t\t"size_to_kbytes: Failed to parse arguments");\n\t\treturn NULL;\n\t}\n\n\treturn (PyLong_FromUnsignedLongLong(pps_size_to_kbytes(l)));\n}\n\n/**\n * @brief\n *\tset the hook debug input file pointer.\n */\nvoid\npbs_python_set_hook_debug_input_fp(FILE 
*fp)\n{\n\thook_debug.input_fp = fp;\n}\n\n/**\n * @brief\n *\tget the reference to hook debug input file\n *\n */\nFILE *\npbs_python_get_hook_debug_input_fp(void)\n{\n\treturn (hook_debug.input_fp);\n}\n\n/**\n * @brief\n *\tset the hook debug input file name.\n *\n */\nvoid\npbs_python_set_hook_debug_input_file(char *filename)\n{\n\tif (filename != NULL)\n\t\tstrncpy(hook_debug.input_file, filename, MAXPATHLEN);\n}\n\n/**\n * @brief\n *      get the reference to hook debug input file\n *\n */\nchar *\npbs_python_get_hook_debug_input_file(void)\n{\n\tif (hook_debug.input_file[0] == '\\0')\n\t\treturn NULL;\n\treturn (hook_debug.input_file);\n}\n\n/**\n * @brief\n *      set the hook debug output file pointer\n *\n */\nvoid\npbs_python_set_hook_debug_output_fp(FILE *fp)\n{\n\thook_debug.output_fp = fp;\n}\n\n/**\n * @brief\n *\tget reference of hook debug output file\n *\n */\nFILE *\npbs_python_get_hook_debug_output_fp(void)\n{\n\treturn (hook_debug.output_fp);\n}\n\n/**\n * @brief\n *      set hook debug output file\n *\n */\n\nvoid\npbs_python_set_hook_debug_output_file(char *filename)\n{\n\tif (filename != NULL)\n\t\tstrncpy(hook_debug.output_file, filename, MAXPATHLEN);\n}\n\n/**\n * @brief\n *      get the reference to hook debug output file\n *\n */\n\nchar *\npbs_python_get_hook_debug_output_file(void)\n{\n\tif (hook_debug.output_file[0] == '\\0')\n\t\treturn NULL;\n\treturn (hook_debug.output_file);\n}\n\n/**\n * @brief\n *      set the hook debug data file pointer\n *\n */\n\nvoid\npbs_python_set_hook_debug_data_fp(FILE *fp)\n{\n\thook_debug.data_fp = fp;\n}\n\n/**\n * @brief\n *\tget reference of hook debug data file.\n *\n */\nFILE *\npbs_python_get_hook_debug_data_fp(void)\n{\n\treturn (hook_debug.data_fp);\n}\n\n/**\n * @brief\n *\tset hook debug data file.\n *\n */\nvoid\npbs_python_set_hook_debug_data_file(char *filename)\n{\n\tif (filename != NULL)\n\t\tstrncpy(hook_debug.data_file, filename, MAXPATHLEN);\n}\n\n/**\n * @brief\n *\tget hook debug data 
file name.\n */\nchar *\npbs_python_get_hook_debug_data_file(void)\n{\n\tif (hook_debug.data_file[0] == '\\0')\n\t\treturn NULL;\n\treturn (hook_debug.data_file);\n}\n\n/**\n * @brief\n *\tset the hook debug object name\n */\nvoid\npbs_python_set_hook_debug_objname(char *objname)\n{\n\tif (objname != NULL)\n\t\tstrncpy(hook_debug.objname, objname, HOOK_BUF_SIZE);\n}\n\n/**\n * @brief\n *\tget the hook debug object name\n */\nchar *\npbs_python_get_hook_debug_objname(void)\n{\n\tif (hook_debug.objname[0] == '\\0')\n\t\treturn NULL;\n\treturn (hook_debug.objname);\n}\n/**\n * @brief\n *\tset server information in server data.\n *\n */\nvoid\npbs_python_set_server_info(pbs_list_head *server_input)\n{\n\tserver_data = server_input;\n}\n\n/**\n * @brief\n *\tset the job information into server_job data.\n */\nvoid\npbs_python_set_server_jobs_info(pbs_list_head *server_jobs_input,\n\t\t\t\tpbs_list_head *ids)\n{\n\tserver_jobs.data = server_jobs_input;\n\tserver_jobs.ids = ids;\n}\n\n/**\n * @brief\n *      set the queue information into server_queue data.\n */\n\nvoid\npbs_python_set_server_queues_info(pbs_list_head *server_queues_input,\n\t\t\t\t  pbs_list_head *names)\n{\n\tserver_queues.data = server_queues_input;\n\tserver_queues.names = names;\n}\n\n/**\n * @brief\n *      set the reservation information into server's reservation data.\n */\n\nvoid\npbs_python_set_server_resvs_info(pbs_list_head *server_resvs_input,\n\t\t\t\t pbs_list_head *resvids)\n{\n\tserver_resvs.data = server_resvs_input;\n\tserver_resvs.resvids = resvids;\n}\n\n/**\n * @brief\n *      set the vnode information into server vnode data.\n */\n\nvoid\npbs_python_set_server_vnodes_info(pbs_list_head *server_vnodes_input,\n\t\t\t\t  pbs_list_head *names)\n{\n\tserver_vnodes.data = server_vnodes_input;\n\tserver_vnodes.names = names;\n}\n\n/**\n * @brief\n *      unset the server information.\n *\n */\n\nvoid\npbs_python_unset_server_info(void)\n{\n\tserver_data = NULL;\n}\n\n/**\n * @brief\n 
*\tunset job info from server_job data\n */\nvoid\npbs_python_unset_server_jobs_info(void)\n{\n\tserver_jobs.data = NULL;\n\tserver_jobs.ids = NULL;\n}\n\n/**\n * @brief\n *      unset queue info from server_queue data\n */\n\nvoid\npbs_python_unset_server_queues_info(void)\n{\n\tserver_queues.data = NULL;\n\tserver_queues.names = NULL;\n}\n\n/**\n * @brief\n *      unset reservation info from server_resv data\n */\n\nvoid\npbs_python_unset_server_resvs_info(void)\n{\n\tserver_resvs.data = NULL;\n\tserver_resvs.resvids = NULL;\n}\n\n/**\n * @brief\n *      unset vnode info from server_vnode data\n */\n\nvoid\npbs_python_unset_server_vnodes_info(void)\n{\n\tserver_vnodes.data = NULL;\n\tserver_vnodes.names = NULL;\n}\n\n/**\n *\n * @brief\n * \tHelper method returning a server Python Object representing the local\n *\t(current) server, with data taken from a static source.\n *  @note\n *\tThis marks the server object "read-only" in Python mode.\n *\n * @return\tPyObject *\tpointer to a Python server object to map the\n *\t\t\t\tlocal server values.\n */\nstatic PyObject *\npy_get_server_static(void)\n{\n\tPyObject *py_svr_class = NULL;\n\tPyObject *py_svr = NULL;\n\tPyObject *py_sargs = NULL;\n\tint tmp_rc = -1;\n\tchar perf_label[MAXBUFLEN];\n\n\tif (!use_static_data || (server_data == NULL))\n\t\tPy_RETURN_NONE;\n\n\tpy_svr_class = pbs_python_types_table[PP_SVR_IDX].t_class;\n\n\tpy_sargs = Py_BuildValue("(s)", server_name); /* NEW ref */\n\tif (!py_sargs) {\n\t\tlog_err(-1, pbs_python_daemon_name, "could not build args list for server");\n\t\tgoto server_static_error_exit;\n\t}\n\n\tpy_svr = PyObject_Call(py_svr_class, py_sargs, NULL);\n\tif (!py_svr) {\n\t\tlog_err(-1, pbs_python_daemon_name, "failed to create a python server object");\n\t\tgoto server_static_error_exit;\n\t}\n\tPy_CLEAR(py_sargs);\n\n\tsnprintf(perf_label, sizeof(perf_label), "hook_func:%s(%s)", SERVER_OBJECT, server_name);\n\ttmp_rc = 
pbs_python_populate_python_class_from_svrattrl(py_svr,\n\t\t\t\t\t\t\t\tserver_data, perf_label, HOOK_PERF_POPULATE);\n\n\tif (tmp_rc == -1) {\n\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t"partially populated python server object");\n\t}\n\n\ttmp_rc = pbs_python_mark_object_readonly(py_svr);\n\n\tif (tmp_rc == -1) {\n\t\tlog_err(PBSE_INTERNAL, __func__, "Failed to mark server readonly!");\n\t\tgoto server_static_error_exit;\n\t}\n\n\tobject_counter++;\n\treturn py_svr;\nserver_static_error_exit:\n\tif (PyErr_Occurred())\n\t\tpbs_python_write_error_to_log(__func__);\n\tif (py_sargs)\n\t\tPy_CLEAR(py_sargs);\n\tif (py_svr)\n\t\tPy_CLEAR(py_svr);\n\n\tPyErr_SetString(PyExc_AssertionError, "Failed to create server object");\n\n\treturn NULL;\n}\n\nconst char pbsv1mod_meth_get_server_static_doc[] =\n\t"get_server_static()\\n\\\n\\n\\\n  returns:\\n\\\n         a Python server object representing the current instance of the\\n\\\n         PBS server, from a static source,\\n\\\n         or None if the static data source is not available.\\n\\\n";\n\n/**\n * @brief\n *\treturn server object representing the current instance of the PBS server.\n *\n * @return\tPyObject *\n * @retval\tPBS server\tsuccess\n * @retval\tNULL\t\terror\n */\nPyObject *\npbsv1mod_meth_get_server_static(void)\n{\n\tPyObject *py_obj = NULL;\n\n\thook_set_mode = C_MODE;\n\tpy_obj = py_get_server_static();\n\thook_set_mode = PY_MODE;\n\treturn (py_obj);\n}\n\n/**\n * @brief\n *\tReturns a Python object that maps to a struct queue * taken directly\n *\tfrom static server_queues data, or a list of queue names if\n *\t'qname' is the empty string ("").\n * @param[in]\tqname\t\t- name of a queue to obtain "struct queue *"\n *\t\t\t\t  content to populate a Python queue object,\n *\t\t\t\t  or the empty string ("") to obtain the\n *\t\t\t\t  list of queue names.\n * @return      PyObject *\t- the Python queue object corresponding to\n *\t\t\t\t  'qname', or to a list of queue names.\n */\nstatic PyObject 
*\npy_get_queue_static(char *qname, char *svr_name)\n{\n\tPyObject *py_queue_class = NULL;\n\tPyObject *py_queue = NULL;\n\tPyObject *py_qargs = NULL;\n\tsvrattrl *plist, *plist_next;\n\tchar *p = NULL;\n\tchar *pn = NULL;\n\tchar *p1 = NULL;\n\tchar *attr_name = NULL;\n\tpbs_list_head queuel;\n\tint rc;\n\tchar perf_label[MAXBUFLEN];\n\n\tif (!use_static_data || (server_queues.data == NULL)) {\n\t\tPy_RETURN_NONE;\n\t}\n\n\tif (qname == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\"Unable to populate python queue object\");\n\t\treturn NULL;\n\t}\n\n\tif (qname[0] == '\\0') {\n\t\treturn (create_py_strlist_from_svrattrl_names(server_queues.names));\n\t}\n\n\tCLEAR_HEAD(queuel);\n\n\tplist = (svrattrl *) GET_NEXT(*server_queues.data);\n\tdo {\n\t\tif (plist == NULL)\n\t\t\tbreak;\n\n\t\tplist_next = (svrattrl *) GET_NEXT(plist->al_link);\n\n\t\t/* look for last dot as the name could be dotted like a queue name */\n\t\tp = strrchr(plist->al_name, '.');\n\t\tif (p == NULL) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"warning: encountered an attribute %s without a queue name...ignoring\", plist->al_name);\n\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\tplist = plist_next;\n\t\t\tcontinue;\n\t\t}\n\t\t*p = '\\0'; /* now plist->al_name would be the queue name */\n\t\tif (strcmp(plist->al_name, qname) != 0) {\n\t\t\t*p = '.'; /* restore */\n\t\t\tplist = plist_next;\n\t\t\tcontinue;\n\t\t}\n\t\tattr_name = p + 1; /* p will be the actual attribute name */\n\n\t\tp1 = NULL;\n\t\tif (plist->al_resc != NULL) {\n\t\t\tp1 = strchr(plist->al_resc, ',');\n\t\t\tif (p1 != NULL) {\n\t\t\t\t*p1 = '\\0'; /* now plist->al_resc is just */\n\t\t\t\t\t    /* the resource */\n\t\t\t}\n\t\t}\n\n\t\tif ((strcmp(attr_name, ATTR_server) == 0) &&\n\t\t    (svr_name != NULL) && (svr_name[0] != '\\0') &&\n\t\t    (strcmp(svr_name, \"localhost\") != 0) &&\n\t\t    (strcmp(plist->al_value, svr_name) != 0)) {\n\t\t\tif (p != NULL)\n\t\t\t\t*p = '.'; /* restore 
orig plist->al_name */\n\t\t\tif (p1 != NULL)\n\t\t\t\t*p1 = ','; /* restore orig plist->al_resc */\n\t\t\tfree_attrlist(&queuel);\n\t\t\tPy_RETURN_NONE;\n\t\t}\n\n\t\tif (add_to_svrattrl_list(&queuel, attr_name,\n\t\t\t\t\t plist->al_resc, plist->al_value, 0, NULL) != 0) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"warning: failed to add_to_svrattrl_list(%s,%s,%s)\",\n\t\t\t\t plist->al_name,\n\t\t\t\t plist->al_resc ? plist->al_resc : \"\", plist->al_value);\n\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\tif (p != NULL)\n\t\t\t\t*p = '.'; /* restore orig plist->al_name */\n\t\t\tif (p1 != NULL)\n\t\t\t\t*p1 = ','; /* restore the orig plist->al_resc */\n\t\t\tgoto get_queue_error_exit;\n\t\t}\n\t\tif (p1 != NULL)\n\t\t\t*p1 = ',';\n\n\t\t/* Check if we're done processing the attributes/resources */\n\t\t/* of the current queue. \t\t\t\t    */\n\t\tpn = NULL;\n\t\tif (plist_next != NULL) {\n\t\t\t/* look at last dot for \"dotted\" queue names */\n\t\t\tpn = strrchr(plist_next->al_name, '.');\n\t\t\tif (pn == NULL) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"warning: encountered the next attribute %s without a queue name...ignoring\", plist_next->al_name);\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\t/* skip the next one */\n\t\t\t\tplist = (svrattrl *) GET_NEXT(plist_next->al_link);\n\t\t\t\tif (p != NULL)\n\t\t\t\t\t*p = '.'; /* restore orig plist->al_name */\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\t*pn = '\\0'; /* now plist_next->al_name would be the */\n\t\t\t/* queue name */\n\t\t\t/* The next queuelist entry is for a different queue name */\n\t\t\t/* since list is sorted according to queue name */\n\t\t\tif ((strcmp(plist->al_name, plist_next->al_name) != 0)) {\n\t\t\t\tif (p != NULL)\n\t\t\t\t\t*p = '.'; /* restore orig plist->al_name */\n\t\t\t\t*pn = '.';\t  /* restore orig plist_next->al_name */\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\t*pn = '.'; /* restore orig plist_next->al_name */\n\t\t}\n\n\t\tplist 
= plist_next;\n\t\tif (p != NULL)\n\t\t\t*p = '.'; /* restore orig plist->al_name */\n\n\t} while (plist);\n\tif (GET_NEXT(queuel) == NULL)\n\t\tPy_RETURN_NONE;\n\n\tpy_queue_class = pbs_python_types_table[PP_QUE_IDX].t_class;\n\tpy_qargs = Py_BuildValue("(s)", qname); /* NEW ref */\n\tif (py_qargs == NULL) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t "could not build args list for queue %s",\n\t\t\t plist->al_name);\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\tgoto get_queue_error_exit;\n\t}\n\n\tpy_queue = PyObject_Call(py_queue_class, py_qargs,\n\t\t\t\t NULL); /* NEW ref */\n\tif (py_queue == NULL) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t "failed to create a python queue %s object",\n\t\t\t plist->al_name);\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\tgoto get_queue_error_exit;\n\t}\n\n\tsnprintf(perf_label, sizeof(perf_label), "hook_func:%s(%s)", SERVER_QUEUE_OBJECT, qname);\n\trc = pbs_python_populate_python_class_from_svrattrl(py_queue, &queuel, perf_label, HOOK_PERF_POPULATE);\n\n\tif (rc == -1) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t "failed to fully populate Python"\n\t\t\t " queue %s object",\n\t\t\t plist->al_name);\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\tgoto get_queue_error_exit;\n\t}\n\n\tfree_attrlist(&queuel);\n\tCLEAR_HEAD(queuel);\n\tPy_CLEAR(py_qargs);\n\n\treturn (py_queue);\n\nget_queue_error_exit:\n\tif (PyErr_Occurred())\n\t\tpbs_python_write_error_to_log(__func__);\n\tPy_CLEAR(py_qargs);\n\tPy_CLEAR(py_queue);\n\tPyErr_SetString(PyExc_AssertionError, "Failed to create queue object");\n\n\treturn NULL;\n}\n\nconst char pbsv1mod_meth_get_queue_static_doc[] =\n\t"get_queue_static(qname)\\n\\\n\\n\\\n  'qname' is the name of the queue whose info is being returned.\\n\\\n  returns:\\n\\\n         a Python queue object representing the current instance of the\\n\\\n         PBS queue 'qname', from a static source,\\n\\\n         or None if the static data 
source is not available.\\n\\\n\";\n\n/**\n * @brief\n *\treturn a Python queue object representing the current instance\n *\tof the PBS queue 'qname' from a static source.\n *\n * @return\tPyObject *\n * @retval\treference to queue\tsuccess\n * @retval\tNULL\t\t\terror\n *\n * @par\tNote:\n *\t'qname' is the name of the queue whose info is being returned.\n *\n */\nPyObject *\npbsv1mod_meth_get_queue_static(PyObject *self, PyObject *args, PyObject *kwds)\n{\n\n\tstatic char *kwlist[] = {\"queue\", \"server_name\", NULL};\n\tchar *qname = NULL;\n\tchar *svr_name = NULL;\n\tPyObject *py_obj = NULL;\n\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t \"ss:get_queue_static\",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &qname,\n\t\t\t\t\t &svr_name)) {\n\t\treturn NULL;\n\t}\n\n\thook_set_mode = C_MODE;\n\tpy_obj = py_get_queue_static(qname, svr_name);\n\thook_set_mode = PY_MODE;\n\treturn (py_obj);\n}\n\n/**\n * @brief\n *\tReturns a Python object that maps to a struct pbsnode * taken directly\n *\tfrom static server_vnodes, or a list of vnode names if 'vname'\n *\tgiven is \"\".\n * @param[in]\tvname\t\t- name of a vnode to obtain \"struct pbsnode *\"\n *\t\t\t\t  content to populate a Python vnode object,\n *\t\t\t\t  or the empty string (\"\") to return the list of\n *\t\t\t\t  vnode names.\n * @return      PyObject *\t- the Python vnode object corresponding to\n *\t\t\t\t  'vname', or a list of vnode names.\n * @retval\t<actual_vnode_object>\n * @retval\tNone\t\t- if vnode object was not found.\n */\nstatic PyObject *\npy_get_vnode_static(char *vname, char *svr_name)\n{\n\tPyObject *py_vnode_class = NULL;\n\tPyObject *py_vnode = NULL;\n\tPyObject *py_vargs = NULL;\n\tsvrattrl *plist, *plist_next;\n\tchar *p = NULL;\n\tchar *pn = NULL;\n\tchar *p1 = NULL;\n\tchar *attr_name = NULL;\n\tpbs_list_head vnodel;\n\tint rc;\n\tchar perf_label[MAXBUFLEN];\n\n\tif (!use_static_data || (server_vnodes.data == NULL)) {\n\t\tPy_RETURN_NONE;\n\t}\n\n\tif (vname == NULL) 
{\n\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\"Unable to populate python vnode object\");\n\t\treturn NULL;\n\t}\n\n\tif (vname[0] == '\\0') {\n\t\treturn (create_py_strlist_from_svrattrl_names(server_vnodes.names));\n\t}\n\n\tCLEAR_HEAD(vnodel);\n\n\t/* NOTE: The list is sorted according to vnode name */\n\tplist = (svrattrl *) GET_NEXT(*server_vnodes.data);\n\tdo {\n\t\tif (plist == NULL)\n\t\t\tbreak;\n\n\t\tplist_next = (svrattrl *) GET_NEXT(plist->al_link);\n\n\t\t/* look for last dot as the name could be dotted like a node name */\n\t\tp = strrchr(plist->al_name, '.');\n\t\tif (p == NULL) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"warning: encountered an attribute %s without a node name...ignoring\", plist->al_name);\n\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\tplist = plist_next;\n\t\t\tcontinue;\n\t\t}\n\t\t*p = '\\0'; /* now plist->al_name would be the node name */\n\t\tif (strcmp(plist->al_name, vname) != 0) {\n\t\t\tplist = plist_next;\n\t\t\t*p = '.'; /* restore orig plist->al_name */\n\t\t\tcontinue;\n\t\t}\n\t\tattr_name = p + 1; /* attr_name now points to the actual attribute name */\n\t\tp1 = NULL;\n\t\tif (plist->al_resc != NULL) {\n\t\t\tp1 = strchr(plist->al_resc, ',');\n\t\t\tif (p1 != NULL) {\n\t\t\t\t*p1 = '\\0';\n\t\t\t}\n\t\t}\n\n\t\tif ((strcmp(attr_name, ATTR_server) == 0) &&\n\t\t    (svr_name != NULL) && (svr_name[0] != '\\0') &&\n\t\t    (strcmp(svr_name, \"localhost\") != 0) &&\n\t\t    (strcmp(plist->al_value, svr_name) != 0)) {\n\t\t\tif (p != NULL)\n\t\t\t\t*p = '.'; /* restore orig plist->al_name */\n\t\t\tif (p1 != NULL)\n\t\t\t\t*p1 = ','; /* restore orig plist->al_resc */\n\t\t\tfree_attrlist(&vnodel);\n\t\t\tPy_RETURN_NONE;\n\t\t}\n\n\t\tif (add_to_svrattrl_list(&vnodel, attr_name,\n\t\t\t\t\t plist->al_resc, plist->al_value, 0, NULL) != 0) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"warning: failed to add_to_svrattrl_list(%s,%s,%s)\",\n\t\t\t\t plist->al_name,\n\t\t\t\t plist->al_resc ? 
plist->al_resc : \"\", plist->al_value);\n\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\tif (p != NULL)\n\t\t\t\t*p = '.'; /* restore orig plist->al_name */\n\t\t\tif (p1 != NULL)\n\t\t\t\t*p1 = ','; /* restore orig plist->al_resc */\n\t\t\tgoto get_vnode_error_exit;\n\t\t}\n\n\t\tif (p1 != NULL)\n\t\t\t*p1 = ','; /* restore the orig plist->al_resc */\n\n\t\t/* Check if we're done processing the attributes/resources */\n\t\t/* of the current node. \t\t\t\t    */\n\t\tpn = NULL;\n\t\tif (plist_next != NULL) {\n\n\t\t\t/* look at last dot for \"dotted\" node names */\n\t\t\tpn = strrchr(plist_next->al_name, '.');\n\t\t\tif (pn == NULL) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"warning: encountered the next attribute %s without a node name...ignoring\", plist_next->al_name);\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\t/* skip the next one */\n\t\t\t\tplist = (svrattrl *) GET_NEXT(plist_next->al_link);\n\t\t\t\tif (p != NULL)\n\t\t\t\t\t*p = '.'; /* restore orig plist->al_name */\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\t*pn = '\\0'; /* now plist_next->al_name would be the */\n\t\t\t/* node name */\n\n\t\t\tif ((strcmp(plist->al_name, plist_next->al_name) != 0)) {\n\t\t\t\t*pn = '.';\n\t\t\t\tif (p != NULL)\n\t\t\t\t\t*p = '.'; /* restore orig plist->al_name */\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\t*pn = '.'; /* restore orig plist_next->al_name */\n\t\t}\n\n\t\tplist = plist_next;\n\t\tif (p != NULL)\n\t\t\t*p = '.'; /* restore orig plist->al_name */\n\n\t} while (plist);\n\n\tif (GET_NEXT(vnodel) == NULL)\n\t\tPy_RETURN_NONE;\n\n\tpy_vnode_class = pbs_python_types_table[PP_VNODE_IDX].t_class;\n\tpy_vargs = Py_BuildValue(\"(s)\", vname); /* NEW ref */\n\tif (py_vargs == NULL) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"could not build args list for vnode %s\",\n\t\t\t vname);\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\tgoto get_vnode_error_exit;\n\t}\n\n\tpy_vnode = 
PyObject_Call(py_vnode_class, py_vargs,\n\t\t\t\t NULL); /* NEW ref */\n\tif (py_vnode == NULL) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"failed to create a python vnode %s object\",\n\t\t\t vname);\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\tgoto get_vnode_error_exit;\n\t}\n\n\tsnprintf(perf_label, sizeof(perf_label), \"hook_func:%s(%s)\", SERVER_VNODE_OBJECT, vname);\n\trc = pbs_python_populate_python_class_from_svrattrl(py_vnode, &vnodel, perf_label, HOOK_PERF_POPULATE);\n\tif (rc == -1) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"failed to fully populate Python\"\n\t\t\t \" vnode %s object\",\n\t\t\t vname);\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\tgoto get_vnode_error_exit;\n\t}\n\n\tfree_attrlist(&vnodel);\n\tCLEAR_HEAD(vnodel);\n\tPy_CLEAR(py_vargs);\n\n\treturn (py_vnode);\n\nget_vnode_error_exit:\n\tif (PyErr_Occurred())\n\t\tpbs_python_write_error_to_log(__func__);\n\tfree_attrlist(&vnodel);\n\tPy_CLEAR(py_vargs);\n\tPy_CLEAR(py_vnode);\n\tPyErr_SetString(PyExc_AssertionError, \"Failed to create vnode object\");\n\n\treturn NULL;\n}\n\nconst char pbsv1mod_meth_get_vnode_static_doc[] =\n\t\"get_vnode_static(vname)\\n\\\n\\n\\\n  'vname' is the name of the vnode whose info is being returned.\\n\\\n  returns:\\n\\\n         a Python vnode object representing the current instance of the\\n\\\n         PBS vnode 'vname', from a static source.\\n\\\n         or None if the static data source is not available.\\n\\\n\";\n/**\n * @brief\n *\tget vnode info\n *\n * @par Note:\n *\t'vname' is the name of the vnode whose info is being returned.\n *\n * @return\tPyObject*\n * @retval\ta Python vnode object representing the current instance of the\n *\t\tPBS vnode 'vname', from a static source.\t\t\t\tsuccess\n * @retval\tNone if the static data source is not available.\t\t\t\terror\n *\n */\nPyObject *\npbsv1mod_meth_get_vnode_static(PyObject *self, PyObject *args, PyObject *kwds)\n{\n\n\tstatic char *kwlist[] = {\"vnode\", 
\"server_name\", NULL};\n\tchar *vname = NULL;\n\tchar *svr_name = NULL;\n\tPyObject *py_obj = NULL;\n\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t \"ss:get_vnode_static\",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &vname,\n\t\t\t\t\t &svr_name)) {\n\t\treturn NULL;\n\t}\n\n\thook_set_mode = C_MODE;\n\tpy_obj = py_get_vnode_static(vname, svr_name);\n\thook_set_mode = PY_MODE;\n\treturn (py_obj);\n}\n\n/**\n * @brief\n *\tIf 'jid' is not the empty string (\"\"), then this returns a Python\n *\tjob object that maps to a struct job * taken directly from static\n *\tserver_jobs data.\n *\tIf 'jid' is the empty string (\"\"), this returns a Python list\n *\tstring object enumerating the list of jobs found.\n *\n * @param[in]\tjid\t\t- name of a job to obtain \"struct job *\"\n *\t\t\t\t  content to populate a Python job object,\n *\t\t\t\t  or the empty string \"\", to enumerate\n *\t\t\t\t  the list of jobs.\n *\n * @param[in]\tsvr_name\t- return job @ server_name only\n * @param[in]\tqueue_name\t- return job @ queue_name only\n *\n * @return      PyObject *\t- the Python job object corresponding to\n *\t\t\t\t  'jid'.\n * @retval\t<job_object> or <list of string objects>\n * @retval\tNone\t\t- if no matching job.\n */\nstatic PyObject *\npy_get_job_static(char *jid, char *svr_name, char *queue_name)\n{\n\tPyObject *py_job_class = NULL;\n\tPyObject *py_job = NULL;\n\tPyObject *py_jargs = NULL;\n\tsvrattrl *plist, *plist_next;\n\tchar *p = NULL;\n\tchar *pn = NULL;\n\tchar *p1 = NULL;\n\tchar *attr_name = NULL;\n\tpbs_list_head jobl;\n\t/* set job.queue to actual queue object */\n\tchar *qname;\n\tPyObject *py_server = NULL;\n\tPyObject *py_que = NULL;\n\tint rc;\n\tchar perf_label[MAXBUFLEN];\n\n\tif (!use_static_data || (server_jobs.data == NULL)) {\n\t\tPy_RETURN_NONE;\n\t}\n\n\tif (jid == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\"Unable to populate python job object\");\n\t\treturn NULL;\n\t}\n\n\tif (jid[0] == '\\0') {\n\t\treturn 
(create_py_strlist_from_svrattrl_names(server_jobs.ids));\n\t}\n\n\tCLEAR_HEAD(jobl);\n\n\t/* NOTE: The list is sorted according to job name */\n\tplist = (svrattrl *) GET_NEXT(*server_jobs.data);\n\tdo {\n\t\tif (plist == NULL)\n\t\t\tbreak;\n\n\t\tplist_next = (svrattrl *) GET_NEXT(plist->al_link);\n\n\t\t/* look for last dot as the name could be dotted like a job name */\n\t\tp = strrchr(plist->al_name, '.');\n\t\tif (p == NULL) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"warning: encountered an attribute %s without a job name...ignoring\", plist->al_name);\n\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\tplist = plist_next;\n\t\t\tcontinue;\n\t\t}\n\t\t*p = '\\0'; /* now plist->al_name would be the job name */\n\t\tif (strcmp(plist->al_name, jid) != 0) {\n\t\t\t*p = '.'; /* restore orig plist->al_name */\n\t\t\tplist = plist_next;\n\t\t\tcontinue;\n\t\t}\n\t\tattr_name = p + 1; /* attr_name now points to the actual attribute name */\n\t\tp1 = NULL;\n\t\tif (plist->al_resc != NULL) {\n\t\t\tp1 = strchr(plist->al_resc, ',');\n\t\t\tif (p1 != NULL) {\n\t\t\t\t*p1 = '\\0';\n\t\t\t}\n\t\t}\n\n\t\tif ((strcmp(attr_name, ATTR_server) == 0) &&\n\t\t    (svr_name != NULL) && (svr_name[0] != '\\0') &&\n\t\t    (strcmp(svr_name, \"localhost\") != 0) &&\n\t\t    (strcmp(plist->al_value, svr_name) != 0)) {\n\t\t\tif (p != NULL)\n\t\t\t\t*p = '.'; /* restore orig plist->al_name */\n\t\t\tif (p1 != NULL)\n\t\t\t\t*p1 = ','; /* restore orig plist->al_resc */\n\t\t\tfree_attrlist(&jobl);\n\t\t\tPy_RETURN_NONE;\n\t\t}\n\n\t\tif ((strcmp(attr_name, ATTR_queue) == 0) &&\n\t\t    (queue_name != NULL) && (queue_name[0] != '\\0') &&\n\t\t    (strcmp(plist->al_value, queue_name) != 0)) {\n\t\t\tif (p != NULL)\n\t\t\t\t*p = '.'; /* restore orig plist->al_name */\n\t\t\tif (p1 != NULL)\n\t\t\t\t*p1 = ','; /* restore orig plist->al_resc */\n\t\t\tfree_attrlist(&jobl);\n\t\t\tPy_RETURN_NONE;\n\t\t}\n\n\t\tif (add_to_svrattrl_list(&jobl, attr_name,\n\t\t\t\t\t 
plist->al_resc, plist->al_value, 0, NULL) != 0) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"warning: failed to add_to_svrattrl_list(%s,%s,%s)\",\n\t\t\t\t plist->al_name,\n\t\t\t\t plist->al_resc ? plist->al_resc : \"\", plist->al_value);\n\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\tif (p != NULL)\n\t\t\t\t*p = '.'; /* restore orig plist->al_name */\n\t\t\tif (p1 != NULL)\n\t\t\t\t*p1 = ','; /* restore orig plist->al_resc */\n\n\t\t\tgoto get_job_error_exit;\n\t\t}\n\t\tif (p1 != NULL)\n\t\t\t*p1 = ','; /* restore the orig plist->al_resc */\n\n\t\t/* Check if we're done processing the attributes/resources */\n\t\t/* of the current job. \t\t\t\t    */\n\t\tpn = NULL;\n\t\tif (plist_next != NULL) {\n\n\t\t\t/* look at last dot for \"dotted\" job names */\n\t\t\tpn = strrchr(plist_next->al_name, '.');\n\t\t\tif (pn == NULL) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"warning: encountered the next attribute %s without a job name...ignoring\", plist_next->al_name);\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\t/* skip the next one */\n\t\t\t\tplist = (svrattrl *) GET_NEXT(plist_next->al_link);\n\t\t\t\tif (p != NULL)\n\t\t\t\t\t*p = '.'; /* restore orig plist->al_name */\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\t*pn = '\\0'; /* now plist_next->al_name would be the */\n\t\t\t/* job name */\n\n\t\t\tif ((strcmp(plist->al_name, plist_next->al_name) != 0)) {\n\t\t\t\t*pn = '.'; /* restore orig plist_next->al_name */\n\t\t\t\tif (p != NULL)\n\t\t\t\t\t*p = '.'; /* restore orig plist->al_name */\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\t*pn = '.'; /* restore orig plist_next->al_name */\n\t\t}\n\n\t\tplist = plist_next;\n\t\tif (p != NULL)\n\t\t\t*p = '.'; /* restore orig plist->al_name */\n\n\t} while (plist);\n\n\tif (GET_NEXT(jobl) == NULL) {\n\t\tfree_attrlist(&jobl);\n\t\tPy_RETURN_NONE;\n\t}\n\n\tpy_job_class = pbs_python_types_table[PP_JOB_IDX].t_class;\n\tpy_jargs = Py_BuildValue(\"(s)\", jid); /* NEW ref */\n\tif 
(py_jargs == NULL) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"could not build args list for job %s\",\n\t\t\t plist ? plist->al_name : \"\");\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\tgoto get_job_error_exit;\n\t}\n\n\tpy_job = PyObject_Call(py_job_class, py_jargs,\n\t\t\t       NULL); /* NEW ref */\n\tif (py_job == NULL) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"failed to create a python job %s object\",\n\t\t\t plist ? plist->al_name : \"\");\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\tgoto get_job_error_exit;\n\t}\n\n\tsnprintf(perf_label, sizeof(perf_label), \"hook_func:%s(%s)\", SERVER_JOB_OBJECT, jid);\n\trc = pbs_python_populate_python_class_from_svrattrl(py_job, &jobl, perf_label, HOOK_PERF_POPULATE);\n\n\tif (rc == -1) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"failed to fully populate Python\"\n\t\t\t \" job %s object\",\n\t\t\t plist ? plist->al_name : \"\");\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\tgoto get_job_error_exit;\n\t}\n\n\tfree_attrlist(&jobl);\n\tCLEAR_HEAD(jobl);\n\tPy_CLEAR(py_jargs);\n\n\tif (PyObject_HasAttrString(py_job, ATTR_queue)) {\n\t\tqname = pbs_python_object_get_attr_string_value(py_job, ATTR_queue);\n\t\tif (qname != NULL) {\n\t\t\tpy_que = py_get_queue_static(qname, svr_name); /* NEW ref */\n\t\t\tif (py_que != NULL) {\n\t\t\t\t/* py_que ref ct incremented as part of py_job */\n\t\t\t\t(void) PyObject_SetAttrString(py_job, ATTR_queue, py_que);\n\t\t\t}\n\t\t\tPy_XDECREF(py_que); /* we no longer need to reference; may be NULL on error */\n\t\t}\n\t}\n\n\t/* set job.server to actual server object */\n\tpy_server = py_get_server_static(); /* NEW Ref */\n\n\tif (py_server != NULL) {\n\t\tif (PyObject_HasAttrString(py_job, ATTR_server)) {\n\t\t\t/* py_server ref ct incremented as part of py_job */\n\t\t\t(void) PyObject_SetAttrString(py_job, ATTR_server, py_server);\n\t\t}\n\t\tPy_DECREF(py_server);\n\t}\n\n\treturn (py_job);\n\nget_job_error_exit:\n\tif 
(PyErr_Occurred())\n\t\tpbs_python_write_error_to_log(__func__);\n\tfree_attrlist(&jobl);\n\tPy_CLEAR(py_jargs);\n\tPy_CLEAR(py_job);\n\tPyErr_SetString(PyExc_AssertionError, \"Failed to create job object\");\n\n\treturn NULL;\n}\n\nconst char pbsv1mod_meth_get_job_static_doc[] =\n\t\"get_job_static(jobid, server_name, queue_name)\\n\\\n\\n\\\n  'jobid' is the name of the job whose info is being returned.\\n\\\n  'server_name' is the name of the server owning 'jobid'.\\n\\\n  'queue_name' is the name of the queue where 'jobid' resides.\\n\\\n  returns:\\n\\\n         a Python job object representing the current instance of the\\n\\\n         PBS job 'jobid', from a static source.\\n\\\n         or None if the static data source is not available.\\n\\\n\";\n\n/**\n * @brief\n *\treturn a Python job object representing the current instance of the\n *\tPBS job 'jobid', from a static source.\n *\n * @return\tPyObject *\n * @retval\treference to job object\tsuccess\n * @retval\tNULL\t\t\terror\n *\n */\n\nPyObject *\npbsv1mod_meth_get_job_static(PyObject *self, PyObject *args, PyObject *kwds)\n{\n\tstatic char *kwlist[] = {\"job\", \"server_name\", \"queue_name\", NULL};\n\tchar *jobid = NULL;\n\tPyObject *py_obj = NULL;\n\tchar *qname = NULL;\n\tchar *sname = NULL;\n\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t \"sss:get_job_static\",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &jobid,\n\t\t\t\t\t &sname,\n\t\t\t\t\t &qname)) {\n\t\treturn NULL;\n\t}\n\thook_set_mode = C_MODE;\n\tpy_obj = py_get_job_static(jobid, sname, qname);\n\thook_set_mode = PY_MODE;\n\treturn (py_obj);\n}\n\n/**\n * @brief\n *\tReturns a Python object that maps to a struct resv * taken directly\n *\tfrom static server_resvs, or a list of reservation ids if 'resvid'\n *\tis the empty string (\"\").\n * @param[in]\tresvid\t\t- name of a resv to obtain \"struct resv *\"\n *\t\t\t\t  content to populate a Python resv object,\n *\t\t\t\t  or the empty string (\"\") to return the\n *\t\t\t\t  list of reservation 
names.\n * @param[in]\tsvr_name\t- return resv @ server_name only\n * @return      PyObject *\t- the Python resv object corresponding to\n *\t\t\t\t  'resvid'.\n */\nstatic PyObject *\npy_get_resv_static(char *resvid, char *svr_name)\n{\n\tPyObject *py_resv_class = NULL;\n\tPyObject *py_resv = NULL;\n\tPyObject *py_rargs = NULL;\n\tsvrattrl *plist, *plist_next;\n\tchar *p = NULL;\n\tchar *pn = NULL;\n\tchar *p1 = NULL;\n\tchar *attr_name = NULL;\n\tpbs_list_head resvl;\n\tPyObject *py_server = NULL;\n\tPyObject *py_que = NULL;\n\tint rc;\n\tchar perf_label[MAXBUFLEN];\n\n\tif (!use_static_data || (server_resvs.data == NULL)) {\n\t\tPy_RETURN_NONE;\n\t}\n\n\tif (resvid == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\"Unable to populate python resv object\");\n\t\treturn NULL;\n\t}\n\n\tif (resvid[0] == '\\0') {\n\t\treturn (create_py_strlist_from_svrattrl_names(server_resvs.resvids));\n\t}\n\n\tCLEAR_HEAD(resvl);\n\t/* NOTE: The list is sorted according to resv name */\n\tplist = (svrattrl *) GET_NEXT(*server_resvs.data);\n\tdo {\n\t\tif (plist == NULL)\n\t\t\tbreak;\n\n\t\tplist_next = (svrattrl *) GET_NEXT(plist->al_link);\n\n\t\t/* look for last dot as the name could be dotted like a resv name */\n\t\tp = strrchr(plist->al_name, '.');\n\t\tif (p == NULL) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"warning: encountered an attribute %s without a resv name...ignoring\", plist->al_name);\n\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\tplist = plist_next;\n\t\t\tcontinue;\n\t\t}\n\t\t*p = '\\0'; /* now plist->al_name would be the resv name */\n\n\t\tif (strcmp(plist->al_name, resvid) != 0) {\n\t\t\t*p = '.'; /* restore orig plist->al_name */\n\t\t\tplist = plist_next;\n\t\t\tcontinue;\n\t\t}\n\t\tattr_name = p + 1; /* attr_name now points to the actual attribute name */\n\n\t\tp1 = NULL;\n\t\tif (plist->al_resc != NULL) {\n\t\t\tp1 = strchr(plist->al_resc, ',');\n\t\t\tif (p1 != NULL) 
{\n\t\t\t\t*p1 = '\\0';\n\t\t\t}\n\t\t}\n\n\t\tif ((strcmp(attr_name, ATTR_server) == 0) &&\n\t\t    (svr_name != NULL) && (svr_name[0] != '\\0') &&\n\t\t    (strcmp(svr_name, \"localhost\") != 0) &&\n\t\t    (strcmp(plist->al_value, svr_name) != 0)) {\n\t\t\tif (p != NULL)\n\t\t\t\t*p = '.'; /* restore orig plist->al_name */\n\t\t\tif (p1 != NULL)\n\t\t\t\t*p1 = ','; /* restore orig plist->al_resc */\n\t\t\tfree_attrlist(&resvl);\n\t\t\tPy_RETURN_NONE;\n\t\t}\n\n\t\tif (add_to_svrattrl_list(&resvl, attr_name,\n\t\t\t\t\t plist->al_resc, plist->al_value, 0, NULL) != 0) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"warning: failed to add_to_svrattrl_list(%s,%s,%s)\",\n\t\t\t\t plist->al_name,\n\t\t\t\t plist->al_resc ? plist->al_resc : \"\", plist->al_value);\n\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\tif (p != NULL)\n\t\t\t\t*p = '.'; /* restore orig plist->al_name */\n\t\t\tif (p1 != NULL)\n\t\t\t\t*p1 = ','; /* restore orig plist->al_resc */\n\t\t\tgoto get_resv_error_exit;\n\t\t}\n\t\tif (p1 != NULL) {\n\t\t\t*p1 = ','; /* restore the orig plist->al_resc */\n\t\t}\n\n\t\t/* Check if we're done processing the attributes/resources */\n\t\t/* of the current resv. 
\t\t\t\t    */\n\t\tpn = NULL;\n\t\tif (plist_next != NULL) {\n\n\t\t\t/* look at last dot for \"dotted\" resv names */\n\t\t\tpn = strrchr(plist_next->al_name, '.');\n\t\t\tif (pn == NULL) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"warning: encountered the next attribute %s without a resv name...ignoring\", plist_next->al_name);\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\t/* skip the next one */\n\t\t\t\tplist = (svrattrl *) GET_NEXT(plist_next->al_link);\n\t\t\t\tif (p != NULL)\n\t\t\t\t\t*p = '.'; /* restore orig plist->al_name */\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\t*pn = '\\0'; /* now plist_next->al_name would be the */\n\t\t\t/* resv name */\n\n\t\t\tif ((strcmp(plist->al_name, plist_next->al_name) != 0)) {\n\t\t\t\t*pn = '.'; /* restore orig plist_next->al_name */\n\t\t\t\tif (p != NULL)\n\t\t\t\t\t*p = '.'; /* restore orig plist->al_name */\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\t*pn = '.'; /* restore orig plist_next->al_name */\n\t\t}\n\n\t\tplist = plist_next;\n\t\tif (p != NULL)\n\t\t\t*p = '.'; /* restore orig plist->al_name */\n\n\t} while (plist);\n\n\tif (GET_NEXT(resvl) == NULL) {\n\t\tfree_attrlist(&resvl);\n\t\tPy_RETURN_NONE;\n\t}\n\n\tpy_resv_class = pbs_python_types_table[PP_RESV_IDX].t_class;\n\tpy_rargs = Py_BuildValue(\"(s)\", resvid); /* NEW ref */\n\tif (py_rargs == NULL) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"could not build args list for resv %s\",\n\t\t\t resvid);\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\tgoto get_resv_error_exit;\n\t}\n\n\tpy_resv = PyObject_Call(py_resv_class, py_rargs,\n\t\t\t\tNULL); /* NEW ref */\n\tif (py_resv == NULL) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"failed to create a python resv %s object\",\n\t\t\t resvid);\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\tgoto get_resv_error_exit;\n\t}\n\n\tsnprintf(perf_label, sizeof(perf_label), \"hook_func:%s(%s)\", SERVER_RESV_OBJECT, resvid);\n\trc = pbs_python_populate_python_class_from_svrattrl(py_resv, &resvl, 
perf_label, HOOK_PERF_POPULATE);\n\n\tif (rc == -1) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"failed to fully populate Python\"\n\t\t\t \" resv %s object\",\n\t\t\t resvid);\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\tgoto get_resv_error_exit;\n\t}\n\n\tfree_attrlist(&resvl);\n\tCLEAR_HEAD(resvl);\n\tPy_CLEAR(py_rargs);\n\n\tif (PyObject_HasAttrString(py_resv, ATTR_queue)) {\n\t\tchar *qname;\n\n\t\tqname = pbs_python_object_get_attr_string_value(py_resv, ATTR_queue);\n\t\tif (qname != NULL) {\n\t\t\tpy_que = py_get_queue_static(qname, svr_name); /* NEW ref */\n\t\t\tif (py_que != NULL) {\n\t\t\t\t/* py_que ref ct incremented as part of py_resv */\n\t\t\t\t(void) PyObject_SetAttrString(py_resv, ATTR_queue, py_que);\n\t\t\t\tPy_DECREF(py_que); /* we no longer need to reference */\n\t\t\t}\n\t\t}\n\t}\n\n\t/* set resv.server to actual server object */\n\tpy_server = py_get_server_static(); /* NEW Ref */\n\n\tif (py_server != NULL) {\n\t\tif (PyObject_HasAttrString(py_resv, ATTR_server)) {\n\t\t\t/* py_server ref ct incremented as part of py_resv */\n\t\t\t(void) PyObject_SetAttrString(py_resv, ATTR_server, py_server);\n\t\t}\n\t\tPy_DECREF(py_server);\n\t}\n\n\treturn (py_resv);\n\nget_resv_error_exit:\n\tif (PyErr_Occurred())\n\t\tpbs_python_write_error_to_log(__func__);\n\tfree_attrlist(&resvl);\n\tPy_CLEAR(py_rargs);\n\tPy_CLEAR(py_resv);\n\tPyErr_SetString(PyExc_AssertionError, \"Failed to create resv object\");\n\n\treturn NULL;\n}\n\nconst char pbsv1mod_meth_get_resv_static_doc[] =\n\t\"get_resv_static(resvid, server)\\n\\\n\\n\\\n  'resvid' is the name of the resv whose info is being returned.\\n\\\n  'server' is the location of 'resvid'. 
Empty string (\"\") means local server\\n\\\n  returns:\\n\\\n         a Python resv object representing the current instance of the\\n\\\n         PBS resv 'resvid', from a static source.\\n\\\n         or None if the static data source is not available.\\n\\\n\";\n\n/**\n * @brief\n *\t'resvid' is the name of the resv whose info is being returned.\n *\t'server' is the location of 'resvid'. Empty string (\"\") means local server.\n *\n * @return\tPyObject *\n * @retval\ta Python resv object representing the current instance of the\n *\t\tPBS resv 'resvid', from a static source.\t\t\t\tsuccess\n * @retval\tNone if the static data source is not available\t\t\t\terror\n */\nPyObject *\npbsv1mod_meth_get_resv_static(PyObject *self, PyObject *args, PyObject *kwds)\n{\n\n\tstatic char *kwlist[] = {\"resv\", \"server_name\", NULL};\n\tchar *resvid = NULL;\n\tchar *svr_name = NULL;\n\tPyObject *py_obj = NULL;\n\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t \"ss:get_resv_static\",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &resvid,\n\t\t\t\t\t &svr_name)) {\n\t\treturn NULL;\n\t}\n\n\thook_set_mode = C_MODE;\n\tpy_obj = py_get_resv_static(resvid, svr_name);\n\thook_set_mode = PY_MODE;\n\treturn (py_obj);\n}\n\nconst char pbsv1mod_meth_get_server_data_fp_doc[] =\n\t\"server_data_fp()\\n\\\n\\n\\\n   Returns the Python file object representing the already opened file 'fp'.\\n\\\n\\n\\\n\";\n\n/**\n * @brief\n *\tReturns the Python file object representing the already opened file 'fp'.\n */\nPyObject *\npbsv1mod_meth_get_server_data_fp(void)\n{\n\tPyObject *fp_obj = NULL;\n\tint data_fd;\n\n\tif (hook_debug.data_fp == NULL)\n\t\tPy_RETURN_NONE;\n\n\tdata_fd = fileno(hook_debug.data_fp);\n\n\tfp_obj = PyFile_FromFd(data_fd, hook_debug.data_file, \"w\", -1,\n\t\t\t       NULL, NULL, NULL, 1);\n\tif (fp_obj == NULL)\n\t\tPy_RETURN_NONE;\n\n\treturn (fp_obj);\n}\n\nconst char pbsv1mod_meth_get_server_data_file_doc[] =\n\t\"server_data_file()\\n\\\n\\n\\\n   Returns the Python string 
representing the pathname to the hook data file.\\n\\\n\\n\\\n\";\n\n/**\n * @brief\n *\tReturns the Python string representing the pathname to the hook data file.\n */\nPyObject *\npbsv1mod_meth_get_server_data_file(void)\n{\n\tif (hook_debug.data_file[0] == '\\0') {\n\t\tPy_RETURN_NONE;\n\t}\n\treturn (PyUnicode_FromString(hook_debug.data_file));\n}\n\nconst char pbsv1mod_meth_use_static_data_doc[] =\n\t\"use_static_data()\\n\\\n\\n\\\n  returns:\\n\\\n         True if PBS is using static data for pbs.server()* calls.\\n\\\n  \t This is an internal function.\\n\\\n\";\n\n/**\n * @brief\n *\tcheck for static data\n *\n * @return\tPyObject*\n * @retval\tPy_True\tif yes\n * @retval\tPy_False if not\n */\nPyObject *\npbsv1mod_meth_use_static_data(void)\n{\n\tPyObject *ret;\n\tret = (use_static_data) ? Py_True : Py_False;\n\tPy_INCREF(ret);\n\treturn (ret);\n}\n\n/**\n * @brief\n *\tassign value to use_static_data\n */\nvoid\npbs_python_set_use_static_data_value(int value)\n{\n\tuse_static_data = value;\n}\n\n/**\n * @brief\n * \tSet the environment variable 'env_var' to 'env_val'.\n *\n * @return int\n * @retval 0 \tfor success\n * @retval !=0\tfor error\n */\nint\npbs_python_set_os_environ(char *env_var, char *env_val)\n{\n\tPyObject *pystr_env_val = NULL;\n\tPyObject *pystr_env_var = NULL;\n\tPyObject *temp_item = NULL;\n\tPyObject *os_mod_obj = NULL; /* 'os' module  */\n\tPyObject *os_mod_env = NULL; /* os.environ */\n\n\tif (env_var == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"passed NULL env_var!\");\n\t\treturn -1;\n\t}\n\n\tPyErr_Clear(); /* clear any exceptions */\n\n\t/* if success we get a NEW ref */\n\tif (!(os_mod_obj = PyImport_ImportModule(\"os\"))) { /* failed */\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"%s:import os module\",\n\t\t\t __func__);\n\t\tpbs_python_write_error_to_log(log_buffer);\n\t\treturn (-1);\n\t}\n\n\t/* if success we get a NEW ref */\n\tif ((os_mod_env =\n\t\t     PyObject_GetAttrString(os_mod_obj, \"environ\")) == NULL) 
{\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"%s:could not retrieve os environment\",\n\t\t\t __func__);\n\t\tpbs_python_write_error_to_log(log_buffer);\n\t\tPy_CLEAR(os_mod_obj);\n\t\treturn (-1);\n\t}\n\n\tif ((pystr_env_var = PyUnicode_FromString(env_var)) == NULL) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"%s:creating pystr_env_var <%s>\",\n\t\t\t __func__, env_var);\n\t\tpbs_python_write_error_to_log(log_buffer);\n\t\tPy_CLEAR(os_mod_obj);\n\t\tPy_CLEAR(os_mod_env);\n\t\treturn (-1);\n\t}\n\n\tif (env_val == NULL) {\n\n\t\tif ((temp_item = PyObject_GetItem(os_mod_env, pystr_env_var)) != NULL) {\n\t\t\tif (PyObject_DelItem(os_mod_env,\n\t\t\t\t\t     pystr_env_var) == -1) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"%s: error unsetting environment <%s>\",\n\t\t\t\t\t __func__, env_var);\n\t\t\t\tpbs_python_write_error_to_log(log_buffer);\n\t\t\t\tPy_CLEAR(os_mod_obj);\n\t\t\t\tPy_CLEAR(os_mod_env);\n\t\t\t\tPy_CLEAR(pystr_env_var);\n\t\t\t\treturn (-1);\n\t\t\t}\n\t\t\tPy_CLEAR(temp_item);\n\t\t}\n\t} else {\n\t\t/* if success we get a NEW ref */\n\t\tif ((pystr_env_val = PyUnicode_FromString(env_val)) == NULL) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"%s:creating pystr_env_val <%s>\",\n\t\t\t\t __func__, env_val);\n\t\t\tpbs_python_write_error_to_log(log_buffer);\n\t\t\tPy_CLEAR(os_mod_obj);\n\t\t\tPy_CLEAR(os_mod_env);\n\t\t\tPy_CLEAR(pystr_env_var);\n\t\t\treturn (-1);\n\t\t}\n\n\t\tif (PyObject_SetItem(os_mod_env, pystr_env_var,\n\t\t\t\t     pystr_env_val) == -1) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"%s: error setting os.environ[%s]=%s\",\n\t\t\t\t __func__, env_var, env_val);\n\t\t\tpbs_python_write_error_to_log(log_buffer);\n\t\t\tPy_CLEAR(os_mod_obj);\n\t\t\tPy_CLEAR(os_mod_env);\n\t\t\tPy_CLEAR(pystr_env_val);\n\t\t\tPy_CLEAR(pystr_env_var);\n\t\t\treturn 
(-1);\n\t\t}\n\t}\n\tPy_CLEAR(os_mod_obj);\n\tPy_CLEAR(os_mod_env);\n\tPy_CLEAR(pystr_env_val);\n\tPy_CLEAR(pystr_env_var);\n\n\treturn (0);\n}\n\n/**\n * @brief\n * \tSet the pbs.hook_config_filename value to 'conf_file'.\n *\n * @param[in]\tconf_file - path to the pbs hook config file.\n *\n * @return int\n * @retval 0 \tfor success\n * @retval !=0\tfor error\n */\nint\npbs_python_set_pbs_hook_config_filename(char *conf_file)\n{\n\tPyObject *pbs_mod_obj = NULL; /* 'pbs' module  */\n\tchar *configfile_attrname = \"hook_config_filename\";\n\n\tPyErr_Clear(); /* clear any exceptions */\n\n\t/* if success we get a NEW ref */\n\tif (!(pbs_mod_obj = PyImport_ImportModule(PBS_OBJ))) { /* failed */\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"%s:import pbs module\",\n\t\t\t __func__);\n\t\tpbs_python_write_error_to_log(log_buffer);\n\t\treturn (-1);\n\t}\n\n\tif (conf_file != NULL) {\n\t\tif (pbs_python_object_set_attr_string_value(pbs_mod_obj,\n\t\t\t\t\t\t\t    configfile_attrname, conf_file) == -1) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"%s: error setting pbs.hook_config_filename = %s\",\n\t\t\t\t __func__, conf_file);\n\t\t\tpbs_python_write_error_to_log(log_buffer);\n\t\t\tPy_CLEAR(pbs_mod_obj);\n\t\t\treturn (-1);\n\t\t}\n\t} else {\n\t\tif (PyObject_SetAttrString(pbs_mod_obj,\n\t\t\t\t\t   configfile_attrname, Py_None) == -1) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"%s: error setting pbs.hook_config_filename = None\",\n\t\t\t\t __func__);\n\t\t\tpbs_python_write_error_to_log(log_buffer);\n\t\t\tPy_CLEAR(pbs_mod_obj);\n\t\t\treturn (-1);\n\t\t}\n\t}\n\tPy_CLEAR(pbs_mod_obj);\n\n\treturn (0);\n}\n\nconst char pbsv1mod_meth_get_pbs_conf_doc[] =\n\t\"get_pbs_conf()\\n\\\n\\n\\\n  returns:\\n\\\n         Returns a dictionary containing the entries of the pbs.conf file, which is by default\\n\\\n         /etc/pbs.conf in Linux/Unix or 'C:\\\\Program Files (x86)\\\\PBS\\\\pbs.conf'.\\n\\\n\";\n\n/**\n * @brief\n *\tThis is the 
C->Python wrapper program for\n *\treturning some key values in pbs.conf  file as loaded by the calling program.\n *\tThis is callable within a Python script.\n *\n * @return Python dict\n * @retval dictionary items are of the form: <pbs_conf_name>:<pbs_conf_value>\n *\n */\nPyObject *\npbsv1mod_meth_get_pbs_conf(void)\n{\n\treturn (Py_BuildValue(\"{s:s,s:s,s:s,s:s,s:s,s:s,s:s,s:s,s:s,s:s,s:s,s:s,s:s}\",\n\t\t\t      \"PBS_HOME\", pbs_conf.pbs_home_path ? pbs_conf.pbs_home_path : \"\",\n\t\t\t      \"PBS_EXEC\", pbs_conf.pbs_exec_path ? pbs_conf.pbs_exec_path : \"\",\n\t\t\t      \"PBS_ENVIRONMENT\", pbs_conf.pbs_environment ? pbs_conf.pbs_environment : \"\",\n\t\t\t      \"PBS_RCP\", pbs_conf.rcp_path ? pbs_conf.rcp_path : \"\",\n\t\t\t      \"PBS_CP\", pbs_conf.cp_path ? pbs_conf.cp_path : \"\",\n\t\t\t      \"PBS_SCP\", pbs_conf.scp_path ? pbs_conf.scp_path : \"\",\n\t\t\t      \"PBS_SCP_ARGS\", pbs_conf.scp_args ? pbs_conf.scp_args : \"\",\n\t\t\t      \"PBS_MOM_HOME\", pbs_conf.pbs_mom_home ? pbs_conf.pbs_mom_home : \"\",\n\t\t\t      \"PBS_TMPDIR\", pbs_conf.pbs_tmpdir ? pbs_conf.pbs_tmpdir : \"\",\n\t\t\t      \"PBS_SERVER\", pbs_conf.pbs_server_name ? pbs_conf.pbs_server_name : \"\",\n\t\t\t      \"PBS_SERVER_HOST_NAME\", pbs_conf.pbs_server_host_name ? pbs_conf.pbs_server_host_name : \"\",\n\t\t\t      \"PBS_PRIMARY\", pbs_conf.pbs_primary ? pbs_conf.pbs_primary : \"\",\n\t\t\t      \"PBS_SECONDARY\", pbs_conf.pbs_secondary ? 
pbs_conf.pbs_secondary : \"\"));\n}\n\nconst char pbsv1mod_meth_load_resource_value_doc[] =\n\t\"load_resource_value(resc_object)\\n\\\n\\n\\\n   resc_object:  resource object whose values are to be set\\n\\\n\\n\\\n   Load the values internally cached for 'resc_object'.\\n\\\n\";\n\n/**\n * @brief\n *\tThis is callable in a Python script, for populating 'resc_object' of\n *\ttype 'pbs_resource' with values cached in the internal\n *\tlist 'pbs_resource_value_list'.\n *\n * @param[in]\targs[1]\t- the pbs_resource Python object.\n *\n * @return\tPyObject *\n * @retval\tNULL\t- with an accompanying AssertionError Python exception.\n * @retval\tPy_None - successful execution.\n *\n */\nPyObject *\npbsv1mod_meth_load_resource_value(PyObject *self, PyObject *args, PyObject *kwds)\n{\n\tstatic char *kwlist[] = {\"resc_object\", NULL};\n\tPyObject *py_resource_match = NULL;\n\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t \"O:load_resource_value\",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &py_resource_match)) {\n\t\treturn NULL;\n\t}\n\n\tif (load_cached_resource_value(py_resource_match) != 0) {\n\t\tPyErr_SetString(PyExc_AssertionError,\n\t\t\t\t\"Failed to load cached value for resource list\");\n\t\treturn NULL;\n\t}\n\n\tPy_RETURN_NONE;\n}\n\nconst char pbsv1mod_meth_resource_str_value_doc[] =\n\t\"str_resource_value(resc_object)\\n\\\n\\n\\\n   resc_object: resource object whose string value is to be returned\\n\\\n\";\n\n/**\n * @brief\n *\tThis is callable in a Python script, for returning the\n *\tstring value of a type 'pbs_resource' resc_object. 
The string value\n *\tis the comma-separated list of resource values for 'resc_object'.\n *\n * @param[in]\targs[1]\t- the pbs_resource Python object.\n *\n * @return\tPyObject *\n * @retval\tNULL\t- with an accompanying AssertionError Python exception.\n * @retval\t<python string value> - a Python object representing the\n *\t\t\t\t\tstring value.\n *\n */\nPyObject *\npbsv1mod_meth_resource_str_value(PyObject *self, PyObject *args, PyObject *kwds)\n{\n\tstatic char *kwlist[] = {\"resc_object\", NULL};\n\tpbs_resource_value *resc_val;\n\tPyObject *py_resource_match = NULL;\n\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t \"O:str_resource_value\",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &py_resource_match)) {\n\t\treturn NULL;\n\t}\n\n\tresc_val = (pbs_resource_value *) GET_NEXT(pbs_resource_value_list);\n\twhile (resc_val != NULL) {\n\n\t\tif ((resc_val->py_resource != NULL) &&\n\t\t    (py_resource_match == resc_val->py_resource))\n\t\t\tbreak;\n\n\t\tresc_val = (pbs_resource_value *) GET_NEXT(resc_val->all_rescs);\n\t}\n\n\tif (resc_val == NULL) {\n\t\t/* no match */\n\t\tPy_RETURN_NONE;\n\t}\n\n\tif (resc_val->py_resource_str_value == NULL) {\n\t\tPy_RETURN_NONE;\n\t}\n\n\tPy_INCREF(resc_val->py_resource_str_value);\n\treturn (resc_val->py_resource_str_value);\n}\n\nconst char pbsv1mod_meth_release_nodes_doc[] =\n\t\"release_nodes(job,node_list,keep_select)\\n\\\n  where:\\n\\\n\\n\\\n   job\t\t:  job whose nodes are being released\\n\\\n   node_list\t:  dictionary of pbs.vnode objects mapping to nodes being released\\n\\\n   keep_select\t:  string value mapping to a new job's select value\\n\\\n\\n\\\n  returns:\\n\\\n\tjob object with new values given to the following attributes:\\n\\\n\texec_vnode\\n\\\n\texec_host\\n\\\n\texec_host2\\n\\\n       as a result of keeping nodes in 'node_list' or nodes that satisfy the\\n\\\n\tjob's keep_select value.\\n\\\n\";\n\n/**\n *\n * @brief\n *\tThis is callable in a Python hook script for releasing assigned 
nodes\n  *\tthat don't appear in 'node_list' or are not needed\n *\tto satisfy the input 'keep_select' value.\n * @param[in]\tjob\t\t- Python job object in question\n * @param[in]\tnode_list\t- Python dictionary of pbs.vnode objects mapping to\n *\t\t\t\tnodes being released.\n * @param[in]\tkeep_select\t- Python string representing the new value to the\n *\t\t\t\tjob's select value.\n *\n * @return\tPyObject *\n * @retval\tjob\t- the passed 'job' parameter but with new values to\n *\t\t\t'exec_vnode', 'exec_host, and 'exec_host2' items  to\n *\t\t\tsatisfy request.\n *\t\tNone\t- if release of nodes is not possible due to some error or\n *\t\t\tnot enough resources are available to satisfy request.\n */\nPyObject *\npbsv1mod_meth_release_nodes(PyObject *self, PyObject *args, PyObject *kwds)\n{\n\tstatic char *kwlist[] = {\"job\", \"node_list\", \"keep_select\", NULL};\n\n\tPyObject *py_job = (PyObject *) NULL;\n\tPyObject *py_node_list = (PyObject *) NULL;\n\tPyObject *py_keep_select = (PyObject *) NULL;\n\tPyObject *py_nodes = (PyObject *) NULL;\n\tPyObject *py_attr_hookset_dict = (PyObject *) NULL;\n\tPyObject *py_attr_keys = (PyObject *) NULL;\n\tchar *vnodelist = NULL;\n\tint vnodelist_sz = 0;\n\n\tint rc = 0;\n\tint entry = 0;\n\trelnodes_input_t r_input;\n\trelnodes_input_vnodelist_t r_input_vnlist;\n\trelnodes_input_select_t r_input_select;\n\n\tchar *keep_select = NULL;\n\tchar *execvnode = NULL;\n\tchar *exechost = NULL;\n\tchar *exechost2 = NULL;\n\tchar *schedselect = NULL;\n\tpbs_list_head exec_vnode_list;\n\tpbs_list_head failed_mom_list;\n\tpbs_list_head succeeded_mom_list;\n\tvnl_t *failed_vnodes = NULL;\n\tvnl_t *good_vnodes = NULL;\n\tchar *new_exec_vnode = NULL;\n\tchar *new_exec_host = NULL;\n\tchar *new_exec_host2 = NULL;\n\tchar *new_schedselect = NULL;\n\tchar *tmpstr = NULL;\n\tPyObject *py_return = Py_None;\n\tint hook_set_mode_orig;\n\tchar *jobid = NULL;\n\tchar err_msg[LOG_BUF_SIZE] = {'\\0'};\n\n\thook_set_mode_orig = 
hook_set_mode;\n\n\tCLEAR_HEAD(exec_vnode_list);\n\tCLEAR_HEAD(succeeded_mom_list);\n\tCLEAR_HEAD(failed_mom_list);\n\tmemset(log_buffer, '\\0', LOG_BUF_SIZE);\n\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds, \"OOO:release_nodes\",\n\t\t\t\t\t kwlist, &py_job, &py_node_list, &py_keep_select)) {\n\t\tlog_err(-1, __func__, \"PyArg_ParseTupleAndKeywords failed!\");\n\t\tgoto release_nodes_exit;\n\t}\n\n\tif (py_node_list != Py_None) {\n\t\tint num_attrs;\n\t\tint i;\n\t\tchar *vn_name;\n\n\t\tif (!PyDict_Check(py_node_list)) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"node_list is not a dictionary\");\n\t\t\tgoto release_nodes_exit;\n\t\t}\n\n\t\tpy_attr_keys = PyDict_Keys(py_node_list); /* NEW ref */\n\n\t\tif (py_attr_keys == NULL) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"Failed to obtain node_list's keys\");\n\t\t\tgoto release_nodes_exit;\n\t\t}\n\n\t\tif (!PyList_Check(py_attr_keys)) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"node_list key is not a list\");\n\t\t\tPy_CLEAR(py_attr_keys);\n\t\t\tgoto release_nodes_exit;\n\t\t}\n\n\t\tvnodelist_sz = HOOK_BUF_SIZE;\n\t\tvnodelist = (char *) malloc(vnodelist_sz);\n\t\tif (vnodelist == NULL) {\n\t\t\tlog_err(errno, __func__, \"malloc failure\");\n\t\t\tgoto release_nodes_exit;\n\t\t}\n\n\t\tvnodelist[0] = '\\0';\n\n\t\tnum_attrs = PyList_Size(py_attr_keys);\n\t\tfor (i = 0; i < num_attrs; i++) {\n\n\t\t\tvn_name = strdup(pbs_python_list_get_item_string_value(py_attr_keys, i));\n\t\t\tif ((vn_name == NULL) || (vn_name[0] == '\\0')) {\n\t\t\t\tfree(vn_name);\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tif (vnodelist[0] != '\\0') {\n\t\t\t\tif (pbs_strcat(&vnodelist, &vnodelist_sz, \"+\") == NULL) {\n\t\t\t\t\tlog_err(errno, __func__, \"pbs_strcat failure\");\n\t\t\t\t\tfree(vn_name);\n\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (pbs_strcat(&vnodelist, &vnodelist_sz, vn_name) == NULL) {\n\t\t\t\tlog_err(errno, __func__, \"pbs_strcat failure\");\n\t\t\t\tfree(vn_name);\n\t\t\t\tgoto 
release_nodes_exit;\n\t\t\t}\n\n\t\t\tfree(vn_name);\n\t\t}\n\t} else {\n\t\tchar *str = pbs_python_object_str(py_keep_select);\n\n\t\tif ((str == NULL) || (str[0] == '\\0')) {\n\t\t\tlog_err(-1, __func__, \"empty keep_select value\");\n\t\t\tgoto release_nodes_exit;\n\t\t}\n\t\t/* release nodes in such a way that 'keep_select' request is satisfied */\n\t\tkeep_select = strdup(str);\n\t\tif (keep_select == NULL) {\n\t\t\tlog_err(-1, __func__, \"strdup keep_select failed\");\n\t\t\tgoto release_nodes_exit;\n\t\t}\n\n\t\t/* populate failed_mom_list used to decide which nodes to release */\n\t\tif (PyObject_HasAttrString(py_job, PY_JOB_FAILED_MOM_LIST)) {\n\t\t\tpy_nodes = PyObject_GetAttrString(py_job, PY_JOB_FAILED_MOM_LIST);\n\t\t\tif (py_nodes != NULL) {\n\t\t\t\tif (PyList_Check(py_nodes)) {\n\t\t\t\t\tif (py_strlist_to_reliable_job_node_list(py_nodes, &failed_mom_list) == -1) {\n\t\t\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"%s: Failed to dump Python string list values into a svrattrl list!\", PY_JOB_FAILED_MOM_LIST);\n\t\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tPy_CLEAR(py_nodes);\n\t\t\t}\n\t\t}\n\n\t\t/* populate succeeded_mom_list used to decide which nodes to keep */\n\t\tif (PyObject_HasAttrString(py_job, PY_JOB_SUCCEEDED_MOM_LIST)) {\n\t\t\tpy_nodes = PyObject_GetAttrString(py_job, PY_JOB_SUCCEEDED_MOM_LIST); /* NEW */\n\n\t\t\tif (py_nodes != NULL) {\n\t\t\t\tif (PyList_Check(py_nodes)) {\n\t\t\t\t\tif (py_strlist_to_reliable_job_node_list(py_nodes, &succeeded_mom_list) == -1) {\n\t\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"%s: Failed to dump Python string list values into a svrattrl list!\", PY_JOB_SUCCEEDED_MOM_LIST);\n\t\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tPy_CLEAR(py_nodes);\n\t\t\t}\n\t\t}\n\t}\n\n\t/* jobid */\n\ttmpstr = 
pbs_python_object_get_attr_string_value(py_job, \"id\");\n\tif ((tmpstr == NULL) || (tmpstr[0] == '\\0')) {\n\t\tlog_err(-1, __func__, \"did not find a value to jobid\");\n\t\tgoto release_nodes_exit;\n\t}\n\tjobid = strdup(tmpstr);\n\tif (jobid == NULL) {\n\t\tlog_err(-1, __func__, \"strdup jobid failed\");\n\t\tgoto release_nodes_exit;\n\t}\n\n\t/* exec_vnode */\n\ttmpstr = pbs_python_object_get_attr_string_value(py_job, ATTR_execvnode);\n\tif ((tmpstr == NULL) || (tmpstr[0] == '\\0')) {\n\t\tlog_err(-1, __func__, \"did not find a value to exec_vnode\");\n\t\tgoto release_nodes_exit;\n\t}\n\texecvnode = strdup(tmpstr);\n\tif (execvnode == NULL) {\n\t\tlog_err(-1, __func__, \"strdup exec_vnode failed\");\n\t\tgoto release_nodes_exit;\n\t}\n\n\t/* exec_host or exec_host2 */\n\ttmpstr = pbs_python_object_get_attr_string_value(py_job, ATTR_exechost);\n\tif ((tmpstr != NULL) && (tmpstr[0] != '\\0')) {\n\t\texechost = strdup(tmpstr);\n\t\tif (exechost == NULL) {\n\t\t\tlog_err(-1, __func__, \"strdup exec_host failed\");\n\t\t\tgoto release_nodes_exit;\n\t\t}\n\t}\n\n\ttmpstr = pbs_python_object_get_attr_string_value(py_job, ATTR_exechost2);\n\tif ((tmpstr != NULL) && (tmpstr[0] != '\\0')) {\n\t\texechost2 = strdup(tmpstr);\n\t\tif (exechost2 == NULL) {\n\t\t\tlog_err(-1, __func__, \"strdup exec_host2 failed\");\n\t\t\tgoto release_nodes_exit;\n\t\t}\n\t}\n\n\tif ((exechost == NULL) && (exechost2 == NULL)) {\n\t\tlog_err(-1, __func__, \"no value found for exec_host/exec_host2 \");\n\t\tgoto release_nodes_exit;\n\t}\n\n\tif (exechost == NULL) {\n\t\texechost = strdup(exechost2);\n\t\tif (exechost == NULL) {\n\t\t\tlog_err(-1, __func__, \"strdup exec_host failed\");\n\t\t\tgoto release_nodes_exit;\n\t\t}\n\t}\n\n\tif (exechost2 == NULL) {\n\t\texechost2 = strdup(exechost);\n\t\tif (exechost2 == NULL) {\n\t\t\tlog_err(-1, __func__, \"strdup exec_host2 failed\");\n\t\t\tgoto release_nodes_exit;\n\t\t}\n\t}\n\n\t/* schedselect */\n\ttmpstr = 
pbs_python_object_get_attr_string_value(py_job, ATTR_SchedSelect);\n\tif ((tmpstr == NULL) || (tmpstr[0] == '\\0')) {\n\t\tlog_err(-1, __func__, \"did not find a value to schedselect\");\n\t\tgoto release_nodes_exit;\n\t}\n\tschedselect = strdup(tmpstr);\n\tif (schedselect == NULL) {\n\t\tlog_err(-1, __func__, \"strdup schedselect failed\");\n\t\tgoto release_nodes_exit;\n\t}\n\n\t/* populate exec_vnode_list */\n\tif (populate_svrattrl_from_vnodelist_param(PY_EVENT_PARAM_VNODELIST, &exec_vnode_list) != 0) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"%s: Failed to dump Python string list values into a svrattrl list\", PY_EVENT_PARAM_VNODELIST);\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\tgoto release_nodes_exit;\n\t}\n\n\terr_msg[0] = '\\0';\n\trelnodes_input_init(&r_input);\n\tr_input.jobid = jobid;\n\tr_input.vnodes_data = &exec_vnode_list;\n\tr_input.execvnode = execvnode;\n\tr_input.exechost = exechost;\n\tr_input.exechost2 = exechost2;\n\tr_input.schedselect = schedselect;\n\tr_input.p_new_exec_vnode = &new_exec_vnode;\n\tr_input.p_new_exec_host[0] = &new_exec_host;\n\tr_input.p_new_exec_host[1] = &new_exec_host2;\n\tr_input.p_new_schedselect = &new_schedselect;\n\n\tif (vnodelist != NULL) {\n\t\trelnodes_input_vnodelist_init(&r_input_vnlist);\n\t\tr_input_vnlist.vnodelist = vnodelist;\n\t\trc = pbs_release_nodes_given_nodelist(&r_input, &r_input_vnlist, err_msg, LOG_BUF_SIZE);\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"release_nodes_given_nodelist: AFT rc=%d jobid=%s vnodelist=%s execvnode=%s exechost=%s exechost2=%s schedselect=%s new_exec_vnode=%s new_exec_host=%s new_exec_host2=%s new_schedselect=%s\", rc, jobid, vnodelist, execvnode, exechost ? exechost : \"null\", exechost2 ? 
exechost2 : \"null\", schedselect, new_exec_vnode, new_exec_host, new_exec_host2, new_schedselect);\n\t\tlog_event(PBSEVENT_DEBUG4, PBS_EVENTCLASS_SERVER, LOG_ERR, __func__, log_buffer);\n\t} else if (keep_select != NULL) {\n\t\trelnodes_input_select_init(&r_input_select);\n\t\tr_input_select.select_str = keep_select;\n\t\tr_input_select.failed_mom_list = &failed_mom_list;\n\t\tr_input_select.succeeded_mom_list = &succeeded_mom_list;\n\t\tr_input_select.failed_vnodes = &failed_vnodes;\n\t\tr_input_select.good_vnodes = &good_vnodes;\n\t\trc = pbs_release_nodes_given_select(&r_input, &r_input_select, err_msg, LOG_BUF_SIZE);\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"release_nodes_given_select: AFT rc=%d keep_select=%s execvnode=%s exechost=%s exechost2=%s new_exec_vnode=%s new_exec_host=%s new_exec_host2=%s new_schedselect=%s\", rc, keep_select, execvnode, exechost ? exechost : \"null\", exechost2 ? exechost2 : \"null\", new_exec_vnode, new_exec_host, new_exec_host2, new_schedselect);\n\t\tlog_event(PBSEVENT_DEBUG4, PBS_EVENTCLASS_SERVER, LOG_ERR, __func__, log_buffer);\n\t} else {\n\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER, LOG_ERR, __func__, \"nothing to release: No values to both node_list and keep_select\");\n\t\tgoto release_nodes_exit;\n\t}\n\n\tif (rc != 0) {\n\t\tif (err_msg[0] != '\\0')\n\t\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER, LOG_ERR, __func__, err_msg);\n\t\tgoto release_nodes_exit;\n\t}\n\n\thook_set_mode = C_MODE;\n\n\t/* Get the attributes that have been set in the hook script */\n\tpy_attr_hookset_dict = PyObject_GetAttrString(py_job, PY_ATTRIBUTES_HOOK_SET); /* new ref */\n\tif (py_attr_hookset_dict == NULL) {\n\t\tpy_attr_hookset_dict = PyDict_New();\n\t\tPyObject_SetAttrString(py_job, PY_ATTRIBUTES_HOOK_SET, py_attr_hookset_dict);\n\t}\n\n\tif ((new_exec_vnode != NULL) && (new_exec_vnode[0] != '\\0')) {\n\t\tentry = strlen(new_exec_vnode) - 1;\n\t\tif (new_exec_vnode[entry] == 
'+')\n\t\t\tnew_exec_vnode[entry] = '\\0';\n\n\t\trc = pbs_python_object_set_attr_string_value(py_job, ATTR_execvnode, new_exec_vnode);\n\t\tif (rc == -1) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"failed to set job attribute %s = %s\", ATTR_execvnode, new_exec_vnode);\n\t\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER, LOG_ERR, __func__, log_buffer);\n\t\t\tgoto release_nodes_exit;\n\t\t}\n\t\tif (py_attr_hookset_dict != NULL) {\n\t\t\t/* mark that exec_vnode of the job has been set in a hook */\n\t\t\tPyDict_SetItemString(py_attr_hookset_dict, ATTR_execvnode, Py_None);\n\t\t}\n\t}\n\n\tif ((new_exec_host != NULL) && (new_exec_host[0] != '\\0')) {\n\t\tentry = strlen(new_exec_host) - 1;\n\t\tif (new_exec_host[entry] == '+')\n\t\t\tnew_exec_host[entry] = '\\0';\n\t\trc = pbs_python_object_set_attr_string_value(py_job, ATTR_exechost, new_exec_host);\n\t\tif (rc == -1) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"failed to set job attribute %s = %s\", ATTR_exechost, new_exec_host);\n\t\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER,\n\t\t\t\t  LOG_ERR, __func__, log_buffer);\n\t\t\tgoto release_nodes_exit;\n\t\t}\n\t\tif (py_attr_hookset_dict != NULL) {\n\t\t\t/* mark that exec_host of the job has been set in a hook */\n\t\t\tPyDict_SetItemString(py_attr_hookset_dict, ATTR_exechost, Py_None);\n\t\t}\n\t}\n\n\tif ((new_exec_host2 != NULL) && (new_exec_host2[0] != '\\0')) {\n\t\tentry = strlen(new_exec_host2) - 1;\n\t\tif (new_exec_host2[entry] == '+')\n\t\t\tnew_exec_host2[entry] = '\\0';\n\t\trc = pbs_python_object_set_attr_string_value(py_job, ATTR_exechost2, new_exec_host2);\n\t\tif (rc == -1) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"failed to set job attribute %s = %s\", ATTR_exechost2, new_exec_host2);\n\t\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER, LOG_ERR, __func__, log_buffer);\n\t\t\tgoto release_nodes_exit;\n\t\t}\n\t\tif (py_attr_hookset_dict != 
NULL) {\n\t\t\t/* mark that exec_host2 of the job has been set in a hook */\n\t\t\tPyDict_SetItemString(py_attr_hookset_dict, ATTR_exechost2, Py_None);\n\t\t}\n\t}\n\n\tif ((new_schedselect != NULL) && (new_schedselect[0] != '\\0')) {\n\t\trc = pbs_python_object_set_attr_string_value(py_job, ATTR_SchedSelect, new_schedselect);\n\t\tif (rc == -1) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"failed to set job attribute %s = %s\", ATTR_SchedSelect, new_schedselect);\n\t\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER, LOG_ERR, __func__, log_buffer);\n\t\t\tgoto release_nodes_exit;\n\t\t}\n\t\tif (py_attr_hookset_dict != NULL) {\n\t\t\t/* mark that schedselect of the job has been set in a hook */\n\t\t\tPyDict_SetItemString(py_attr_hookset_dict, ATTR_SchedSelect, Py_None);\n\t\t}\n\t}\n\n\tpy_return = py_job;\n\nrelease_nodes_exit:\n\tfree(vnodelist);\n\tPy_CLEAR(py_attr_keys);\n\tfree(keep_select);\n\tfree(execvnode);\n\tfree(exechost);\n\tfree(exechost2);\n\tfree(schedselect);\n\tfree_attrlist(&exec_vnode_list);\n\tPy_CLEAR(py_nodes);\n\treliable_job_node_free(&failed_mom_list);\n\treliable_job_node_free(&succeeded_mom_list);\n\tfree(new_exec_vnode);\n\tfree(new_exec_host);\n\tfree(new_exec_host2);\n\tfree(new_schedselect);\n\tvnl_free(failed_vnodes);\n\tvnl_free(good_vnodes);\n\tPy_CLEAR(py_attr_hookset_dict);\n\tfree(jobid);\n\n\thook_set_mode = hook_set_mode_orig;\n\treturn (py_return);\n}\n\n/**\n *\n * @brief\n *\tReturns a Python List of _server_attribute objects or NULL on error.\n * @param[in]\tphead\t- pointer to the head of the list containing data.\n *\n * @return \tPyObject *\n * @retval\t<object>\t- the Python list object holding\n * @retval\t\t\t\tthe _server_attribute objects.\n * @retval\tNULL\t\t- if an error occurred.\n * @note\n * \t\tthe returned PyObject must be cleared(Py_CLEAR) as it's a new\n * \t\treference.\n */\nPyObject *\nsvrattrl_list_to_pyobject(int rq_cmd, pbs_list_head *phead)\n{\n\tsvrattrl *plist = 
NULL;\n\tPyObject *py_list = PyList_New(0);\n\n\tif (phead == NULL) {\n\t\tlog_err(errno, __func__, \"NULL input parameters!\");\n\t\tPy_CLEAR(py_list);\n\t\treturn NULL;\n\t}\n\n\tfor (plist = (svrattrl *) GET_NEXT(*phead); plist != NULL;\n\t     plist = (svrattrl *) GET_NEXT(plist->al_link)) {\n\t\tPyObject *py_server_attribute = svrattrl_to_server_attribute(rq_cmd, plist);\n\t\tif (py_server_attribute) {\n\t\t\tsvrattrl *slist = NULL;\n\t\t\tPyObject *py_slist = PyObject_GetAttrString(py_server_attribute, \"sisters\");\n\t\t\tif (py_slist) {\n\t\t\t\tfor (slist = plist->al_sister; slist != NULL; slist = slist->al_sister) {\n\t\t\t\t\tPyObject *py_server_attribute_sister = svrattrl_to_server_attribute(rq_cmd, slist);\n\t\t\t\t\tif (py_server_attribute_sister) {\n\t\t\t\t\t\tPyList_Append(py_slist, py_server_attribute_sister);\n\t\t\t\t\t\tPy_CLEAR(py_server_attribute_sister);\n\t\t\t\t\t} else {\n\t\t\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t\t\t\t \"could not translate the sister for attribute <%s>\", plist->al_name);\n\t\t\t\t\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} /* else {\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\"failed to acquire sisters in server_attribute object\");\n\t\t\t} */\n\t\t\tPyList_Append(py_list, py_server_attribute);\n\t\t\tPy_CLEAR(py_server_attribute);\n\t\t}\n\t}\n\treturn py_list;\n}\n\n/**\n *\n * @brief\n *\tReturns a Python _server_attribute object or NULL on error.\n * @param[in]\tattribute\t- pointer to the head of the list containing data.\n *\n * @return \tPyObject *\n * @retval\t<object>\t- the Python _server_attribute object.\n * @retval\tNULL\t\t- if an error occurred.\n * @note\n * \t\tthe returned PyObject must be cleared(Py_CLEAR) as it's a new\n * \t\treference.\n */\nPyObject *\nsvrattrl_to_server_attribute(int rq_cmd, svrattrl *attribute)\n{\n\tPyObject *py_server_attribute = 
NULL;\n\tPyObject *py_server_attribute_class = NULL;\n\tPyObject *py_server_attribute_args = NULL;\n\n\tif (attribute == NULL) {\n\t\tgoto server_attribute_exit;\n\t}\n\n\tpy_server_attribute_class = pbs_python_types_table[PP_SERVER_ATTRIBUTE_IDX].t_class;\n\tif (!py_server_attribute_class) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"failed to acquire server_attribute class\");\n\t\tgoto server_attribute_exit;\n\t}\n\n\tpy_server_attribute_args = Py_BuildValue(\"(sssii)\",\n\t\t\t\t\t\t attribute->al_name,\n\t\t\t\t\t\t attribute->al_resc,\n\t\t\t\t\t\t attribute->al_value,\n\t\t\t\t\t\t (rq_cmd != MGR_CMD_UNSET ? attribute->al_op : UNSET),\n\t\t\t\t\t\t attribute->al_flags); /* NEW ref */\n\n\tif (!py_server_attribute_args) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"could not build args list for server_attribute\");\n\t\tgoto server_attribute_exit;\n\t}\n\tpy_server_attribute = PyObject_CallObject(py_server_attribute_class, py_server_attribute_args);\n\n\tif (!py_server_attribute) {\n\t\tpbs_python_write_error_to_log(__func__);\n\t\tlog_err(PBSE_INTERNAL, __func__, \"failed to create a python server_attribute object\");\n\t\tgoto server_attribute_exit;\n\t}\nserver_attribute_exit:\n\tPy_CLEAR(py_server_attribute_args);\n\treturn py_server_attribute;\n}\n"
  },
  {
    "path": "src/lib/Libpython/pbs_python_svr_size_type.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_python_svr_size_type.c\n * @brief\n * CONVENTION\n *\n *  For Types and Objects:\n *  ----------------------\n *   PPSVR   : Pbs Python Server\n *   PPSCHED : Pbs Python Scheduler\n *   PPMOM   : Pbs Python Mom\n *\n *  For methods:\n *  -----------\n *     pps_    : Pbs Python Server <?> methods\n *     ppsd_   : Pbs Python Scheduler <?> methods\n *     ppm_    : Pbs Python MOM <?> methods\n */\n\n/*\n * --------- Python representation of a Queue Type ----------\n */\n#include <pbs_config.h> /* the master config generated by configure */\n#include <pbs_python_private.h>\n\n#include <stdio.h>\n#include <unistd.h>\n#include <sys/types.h>\n#include <sys/param.h>\n#include <memory.h>\n#include <stdlib.h>\n#include <pbs_ifl.h>\n#include <errno.h>\n#include <string.h>\n#include <list_link.h>\n#include <log.h>\n#include <attribute.h>\n#include <pbs_error.h>\n#include <Long.h>\n\nextern int comp_size(attribute *, attribute *);\nextern void from_size(const struct size_value *, char *);\nextern int set_size(attribute *, attribute *, enum batch_op op);\nextern int to_size(char *, struct size_value *);\nextern int normalize_size(struct size_value *, struct size_value *,\n\t\t\t  struct size_value *, struct size_value *);\n\n/*\n * ============             Internal SIZE Type            ============\n */\n\ntypedef struct 
{\n\tPyObject_HEAD struct size_value sz_value;\n\tchar *str_value; /* encoded representation of the above size value */\n} PPSVR_Size_Object;\n\nextern PyTypeObject PPSVR_Size_Type;\nextern PyObject *PPSVR_Size_FromSizeValue(struct size_value);\n\n#define PPSVR_Size_Type_Check(op) PyObject_TypeCheck(op, &PPSVR_Size_Type)\n#define PPSVR_Size_DFLT_NAME \"<unset>\"\n\n#define COPY_SIZE_VALUE(dst, src)                \\\n\tdo {                                     \\\n\t\tdst.atsv_num = src.atsv_num;     \\\n\t\tdst.atsv_shift = src.atsv_shift; \\\n\t\tdst.atsv_units = src.atsv_units; \\\n\t} while (0)\n\n/**\n * @brief\n *\tfunction for server\n *\tchecks whether the input object is a negative number.\n *\n * @param[in] il - pointer to python object (number).\n *\n * @return\tint\n * @retval\t1\tif negative\n * @retval\t-1\tif not negative, or on error\n *\n */\nstatic int\n_pps_check_for_negative_number(PyObject *il)\n{\n\tPyObject *str_value = NULL;\n\tconst char *c_value;\n\tint rc = 0;\n\n\tif (!(str_value = PyObject_Str(il))) {\n\t\tPyErr_Clear();\n\t\trc = -1;\n\t\tgoto EXIT;\n\t}\n\tc_value = PyUnicode_AsUTF8(str_value); /* TODO, is error check needed? 
*/\n\tif (c_value && (*c_value == '-')) {\n\t\trc = 1;\n\t} else {\n\t\trc = -1;\n\t\tPyErr_Clear();\n\t\tgoto EXIT;\n\t}\nEXIT:\n\tif (str_value)\n\t\tPy_CLEAR(str_value);\n\treturn rc;\n}\n\n/**\n * @brief\n *\tserver function which encodes a string\n *\tFROM a PPSVR_Size_Object structure\n *\n * @param[in] self - python object (string)\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t-1\terror\n *\n */\nstatic int\n_pps_size_make_str_value(PyObject *self)\n{\n\n\tPPSVR_Size_Object *working_copy = (PPSVR_Size_Object *) self;\n\tfrom_size(&working_copy->sz_value, log_buffer);\n\tif (working_copy->str_value)\n\t\tfree(working_copy->str_value);\n\tif (!(working_copy->str_value = strdup(log_buffer))) {\n\t\t(void) PyErr_NoMemory();\n\t\treturn -1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *      sets the size value of a PPSVR_Size_Object\n *      FROM a python int or long object\n *\n * @param[in] self - python object (size object)\n * @param[in] from - python object holding the numeric value\n *\n * @return\tint\n * @retval\t0 \t: success, \"from\" was either an int or long\n * @retval\t-1 \t: failure\n * @retval\t1 \t: \"from\" is not an int or long python object\n */\n\nstatic int\n_pps_size_from_long_or_int(PyObject *self, PyObject *from)\n{\n\tPPSVR_Size_Object *working_copy = (PPSVR_Size_Object *) self;\n\tu_Long l_value;\n\n\t/* in Python 3 all integers are PyLong, so a single check suffices */\n\tif (PyLong_Check(from)) {\n\t\tif (_pps_check_for_negative_number(from) > 0) {\n\t\t\tPyErr_SetString(PyExc_TypeError, \"_size instance cannot be negative\");\n\t\t\treturn -1;\n\t\t}\n\t\tl_value = PyLong_AsUnsignedLongLongMask(from);\n\t\tif (PyErr_Occurred())\n\t\t\treturn -1;\n\t\t/* good no error */\n\t\tworking_copy->sz_value.atsv_num = l_value;\n\t\tworking_copy->sz_value.atsv_units = ATR_SV_BYTESZ;\n\t\tworking_copy->sz_value.atsv_shift = 0;\n\t\tif ((_pps_size_make_str_value(self) != 0))\n\t\t\treturn -1;\n\t} else {\n\t\treturn 1;\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tdecodes and assigns value(from) to size value structure(self)\n *\tand encodes the PPSVR_Size_Object structure(self).\n *\n * @param[out] self - python object(server size object)\n * @param[in] from - size string to be decoded and assigned\n *\n * @return\tint\n * @retval\t0\tsuccess, \"from\" was a string\n * @retval\t-1\tfailure\n * @retval\t1\t\"from\" is not a string python object\n *\n */\n\nstatic int\n_pps_size_from_string(PyObject *self, PyObject *from)\n{\n\n\tPPSVR_Size_Object *working_copy = (PPSVR_Size_Object *) self;\n\tif (PyUnicode_Check(from)) {\n\t\tif ((to_size((char *) PyUnicode_AsUTF8(from), &working_copy->sz_value) != 0)) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"%s: bad value for _size\",\n\t\t\t\t pbs_python_object_str(from));\n\t\t\tPyErr_SetString(PyExc_TypeError, log_buffer);\n\t\t\treturn -1;\n\t\t}\n\t\tif ((_pps_size_make_str_value(self) != 0))\n\t\t\treturn -1;\n\t} else {\n\t\treturn 1;\n\t}\n\treturn 0;\n}\n\n/* -----                  PyTypeObject Methods                     ----- */\n\n/**\n * @brief\n *\tcreates a python object size_value structure.\n *\n * @param[in] type - pointer to size_value structure\n * @param[in] args - size_value (NULL since new)\n * @param[in] kwds - str_value (NULL since new)\n *\n * @return\tstructure handle\n * @retval\tpointer to size_val structure\tsuccess\n */\nstatic PyObject *\npps_size_new(PyTypeObject *type, PyObject *args, PyObject *kwds)\n{\n\tPPSVR_Size_Object *self = NULL;\n\tself = (PPSVR_Size_Object *) type->tp_alloc(type, 0);\n\tif (self) 
{\n\t\tmemset(&self->sz_value, 0, sizeof(self->sz_value));\n\t\tself->str_value = NULL;\n\t}\n\treturn (PyObject *) self;\n}\n\n/**\n * @brief\n *\tinitializes the size_value structure.\n *\n * @param[out] self - pointer to size_value structure\n * @param[in] args - size_value\n * @param[in] kwds - size_str\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t-1\terror\n *\n */\nstatic int\npps_size_init(PPSVR_Size_Object *self, PyObject *args, PyObject *kwds)\n{\n\n\tstatic char *kwlist[] = {\"value\", NULL};\n\tPyObject *py_arg0 = NULL;\n\tint rc;\n\n\tif (!PyArg_ParseTupleAndKeywords(args, kwds,\n\t\t\t\t\t \"O:_size.__init__\",\n\t\t\t\t\t kwlist,\n\t\t\t\t\t &py_arg0)) {\n\t\treturn -1;\n\t}\n\n\tif (PPSVR_Size_Type_Check(py_arg0)) {\n\t\t/* size object received, deep copy */\n\t\tCOPY_SIZE_VALUE(self->sz_value, ((PPSVR_Size_Object *) py_arg0)->sz_value);\n\t\tif (self->str_value)\n\t\t\tfree(self->str_value);\n\t\tif (!(self->str_value =\n\t\t\t      strdup(((PPSVR_Size_Object *) py_arg0)->str_value))) {\n\t\t\t(void) PyErr_NoMemory();\n\t\t\treturn -1;\n\t\t}\n\t\tgoto SUCCESSFUL_INIT;\n\t}\n\tif ((rc = _pps_size_from_string((PyObject *) self, py_arg0)) == -1) {\n\t\treturn -1;\n\t}\n\tif (rc == 0)\n\t\tgoto SUCCESSFUL_INIT;\n\n\t/* check for int or long */\n\tif ((rc = _pps_size_from_long_or_int((PyObject *) self, py_arg0)) == -1) {\n\t\treturn -1;\n\t}\n\tif (rc == 0)\n\t\tgoto SUCCESSFUL_INIT;\n\n\t/* at this point there is no hope */\n\tPyErr_SetString(PyExc_TypeError, \"Bad _size value\");\n\treturn -1;\n\nSUCCESSFUL_INIT:\n\treturn 0;\n}\n\n/* __del__ */\n/**\n * @brief\n *      frees the memory for the size_value structure.\n *\n * @param[in] self - pointer to size_value structure\n *\n * @return      void\n *\n */\n\nstatic void\npps_size_dealloc(PPSVR_Size_Object *self)\n{\n\tif (self->str_value) {\n\t\tfree(self->str_value);\n\t}\n\tPy_TYPE(self)->tp_free((PyObject *) 
self);\n\treturn;\n}\n\n/* __repr__ and __str__ */\n/**\n * @brief\n *      returns the string value of the size object.\n *\n * @param[in] self - pointer to size_value structure\n *\n * @return      structure handle\n * @retval      pointer to Python string object       \tsuccess\n * @retval      NULL\t\t\t\t\terror\n *\n */\n\nstatic PyObject *\npps_size_repr(PPSVR_Size_Object *self)\n{\n\tif (self->str_value) {\n\t\treturn PyUnicode_FromString(self->str_value);\n\t} else {\n\t\treturn PyUnicode_InternFromString(\"0\");\n\t}\n}\n\n/* __cmp__ */\n/**\n * @brief\n *\tcompares two size objects \"self\" and \"with\".\n *\n * @param[in] self - size object to compare\n * @param[in] with - size object to be compared with\n * @param[in] op - option for comparison\n *\n * @return\tstructure handle\n * @retval\tPy_True\t\t\tcomparison holds\n * @retval\tPy_False\t\tcomparison fails or operands are not both size objects\n *\n */\nstatic PyObject *\npps_size_richcompare(PPSVR_Size_Object *self, PyObject *with, int op)\n{\n\tPyObject *result = Py_False; /* MUST incref result before return */\n\tattribute attr_self;\n\tattribute attr_with;\n\tint cmp_result;\n\n\t/* basic check make sure only size objects are compared */\n\t/* Al: I originally changed this to allow coercing compare operands but */\n\t/* ran into trouble with doing basic 'self' == \"\" as the empty string   */\n\t/* could not be converted into a size value. So I put back the original */\n\t/* assertion to only compare size values, but returning Py_False if the */\n\t/* size types don't match.  */\n\tif (!PyObject_TypeCheck(self, &PPSVR_Size_Type) ||\n\t    !PyObject_TypeCheck(with, &PPSVR_Size_Type)) {\n\t\tPy_INCREF(result);\n\t\treturn result;\n\t}\n\n\tCOPY_SIZE_VALUE(attr_self.at_val.at_size, self->sz_value);\n\tCOPY_SIZE_VALUE(attr_with.at_val.at_size, ((PPSVR_Size_Object *) with)->sz_value);\n\n\tcmp_result = comp_size(&attr_self, &attr_with);\n\n\tswitch (op) {\n\t\tcase Py_EQ:\n\t\t\tresult = (cmp_result == 0) ? 
Py_True : Py_False;\n\t\t\tbreak;\n\t\tcase Py_NE:\n\t\t\tresult = (cmp_result != 0) ? Py_True : Py_False;\n\t\t\tbreak;\n\t\tcase Py_LT:\n\t\t\tresult = (cmp_result == -1) ? Py_True : Py_False;\n\t\t\tbreak;\n\t\tcase Py_LE:\n\t\t\tresult = (cmp_result <= 0) ? Py_True : Py_False;\n\t\t\tbreak;\n\t\tcase Py_GT:\n\t\t\tresult = (cmp_result == 1) ? Py_True : Py_False;\n\t\t\tbreak;\n\t\tcase Py_GE:\n\t\t\tresult = (cmp_result >= 0) ? Py_True : Py_False;\n\t\t\tbreak;\n\t}\n\n\tPy_INCREF(result);\n\treturn result;\n}\n\n#undef COPY_SIZE_VALUE\n\n/* add of two size objects returns a NEW reference */\n\n/* __add__ */\n/**\n * @brief\n *\tadds two python size objects and returns reference to it.\n *\n * @param[in] left - size object1\n * @param[in] right - size object2\n *\n * @return      structure handle\n * @retval      new _size object        success\n * @retval      Py_NotImplemented       operands are not both _size objects\n * @retval      NULL                    error\n *\n */\nstatic PyObject *\npps_size_number_methods_add(PyObject *left, PyObject *right)\n{\n\tPyObject *result = Py_NotImplemented;\n\tstruct size_value tmp_left;\n\tstruct size_value tmp_right;\n\tstruct size_value sz_result;\n\tu_Long l_result;\n\n\tint rc;\n\n\tif ((PPSVR_Size_Type_Check(left)) && (PPSVR_Size_Type_Check(right))) {\n\t\trc = normalize_size(&((PPSVR_Size_Object *) left)->sz_value,\n\t\t\t\t    &((PPSVR_Size_Object *) right)->sz_value,\n\t\t\t\t    &tmp_left,\n\t\t\t\t    &tmp_right);\n\n\t\tif (rc != 0)\n\t\t\tgoto QUIT;\n\t\tl_result = tmp_left.atsv_num + tmp_right.atsv_num;\n\t\tif ((l_result < tmp_left.atsv_num) || (l_result < tmp_right.atsv_num)) {\n\t\t\tPyErr_SetString(PyExc_ArithmeticError,\n\t\t\t\t\t\"expression evaluates to wrong _size value (overflow?)\");\n\t\t\tresult = NULL;\n\t\t\tgoto QUIT;\n\t\t}\n\t\tsz_result.atsv_num = l_result;\n\t\tsz_result.atsv_shift = tmp_left.atsv_shift;\n\t\tsz_result.atsv_units = tmp_left.atsv_units;\n\t\tresult = PPSVR_Size_FromSizeValue(sz_result);\n\t}\n\nQUIT:\n\tif ((result) && (result == 
Py_NotImplemented))\n\t\tPy_INCREF(result);\n\treturn result;\n}\n\n/**\n * @brief\n *      subtracts two python size objects and returns reference to it.\n *\n * @param[in] left - size object1\n * @param[in] right - size object2\n *\n * @return      structure handle\n * @retval      new _size object        success\n * @retval      Py_NotImplemented       operands are not both _size objects\n * @retval      NULL                    error\n *\n */\n\nstatic PyObject *\npps_size_number_methods_subtract(PyObject *left, PyObject *right)\n{\n\tPyObject *result = Py_NotImplemented;\n\tstruct size_value tmp_left;\n\tstruct size_value tmp_right;\n\tstruct size_value sz_result;\n\tu_Long l_result;\n\n\tint rc;\n\n\tif ((PPSVR_Size_Type_Check(left)) && (PPSVR_Size_Type_Check(right))) {\n\t\trc = normalize_size(&((PPSVR_Size_Object *) left)->sz_value,\n\t\t\t\t    &((PPSVR_Size_Object *) right)->sz_value,\n\t\t\t\t    &tmp_left,\n\t\t\t\t    &tmp_right);\n\n\t\tif (rc != 0)\n\t\t\tgoto QUIT;\n\t\tif (tmp_right.atsv_num > tmp_left.atsv_num) {\n\t\t\tPyErr_SetString(PyExc_ArithmeticError,\n\t\t\t\t\t\"expression evaluates to negative _size value\");\n\t\t\tresult = NULL;\n\t\t\tgoto QUIT;\n\t\t}\n\t\tl_result = tmp_left.atsv_num - tmp_right.atsv_num;\n\t\tsz_result.atsv_num = l_result;\n\t\tsz_result.atsv_shift = tmp_left.atsv_shift;\n\t\tsz_result.atsv_units = tmp_left.atsv_units;\n\t\tresult = PPSVR_Size_FromSizeValue(sz_result);\n\t}\n\nQUIT:\n\tif ((result) && (result == Py_NotImplemented))\n\t\tPy_INCREF(result);\n\treturn result;\n}\n\n/**\n * @brief\n * \tReturn the Python size's value in # of kbytes.\n *\n * @param[in]\tself - a Python size object\n *\n * @return u_Long\n * @retval <n>\t# of kbytes of the Python size object.\n * @retval 0\tif there's an error\n *\n */\nu_Long\npps_size_to_kbytes(PyObject *self)\n{\n\tattribute attr;\n\tPPSVR_Size_Object *working_copy;\n\tif (!PPSVR_Size_Type_Check(self)) {\n\t\treturn 0;\n\t}\n\n\tworking_copy = (PPSVR_Size_Object *) self;\n\n\tif (working_copy == NULL)\n\t\treturn 0;\n\n\tattr.at_flags = 
ATR_VFLAG_SET;\n\tattr.at_type = ATR_TYPE_SIZE;\n\tattr.at_val.at_size = working_copy->sz_value;\n\n\treturn (get_kilobytes_from_attr(&attr));\n}\n\n/* --------- SIZE TYPE DEFINITION  --------- */\n\nstatic PyNumberMethods pps_size_as_number = {\n\t/* nb_add */ (binaryfunc) pps_size_number_methods_add,\n\t/* nb_subtract */ (binaryfunc) pps_size_number_methods_subtract,\n\t/* nb_multiply */ 0,\n\t/* nb_remainder */ 0,\n\t/* nb_divmod */ 0,\n\t/* nb_power */ 0,\n\t/* nb_negative */ 0,\n\t/* nb_positive */ (unaryfunc) 0,\n\t/* nb_absolute */ (unaryfunc) 0,\n\t/* nb_bool */ (inquiry) 0,\n\t/* nb_invert */ 0,\n\t/* nb_lshift */ 0,\n\t/* nb_rshift */ 0,\n\t/* nb_and */ 0,\n\t/* nb_xor */ 0,\n\t/* nb_or */ 0,\n\t/* nb_int */ 0,\n\t/* nb_reserved */ 0,\n\t/* nb_float */ 0,\n\t/* nb_inplace_add */ 0,\n\t/* nb_inplace_subtract */ 0,\n\t/* nb_inplace_multiply */ 0,\n\t/* nb_inplace_remainder */ 0,\n\t/* nb_inplace_power */ 0,\n\t/* nb_inplace_lshift */ 0,\n\t/* nb_inplace_rshift */ 0,\n\t/* nb_inplace_and */ 0,\n\t/* nb_inplace_xor */ 0,\n\t/* nb_inplace_or */ 0,\n\t/* nb_floor_divide */ 0,\n\t/* nb_true_divide */ 0,\n\t/* nb_inplace_floor_divide */ 0,\n\t/* nb_inplace_true_divide */ 0,\n};\n\nstatic char pps_size_doc[] =\n\t\"_size()\\n\\\n    \\tPython representation of PBS internal size structure\\n\\\n    \";\n\n/* external, hopefully no clash */\n\nPyTypeObject PPSVR_Size_Type = {\n\tPyVarObject_HEAD_INIT(NULL, 0)\n\t/* ob_size*/\n\t/* tp_name*/ \"_size\",\n\t/* tp_basicsize*/ sizeof(PPSVR_Size_Object),\n\t/* tp_itemsize*/ 0,\n\t/* tp_dealloc*/ (destructor) pps_size_dealloc,\n\t/* tp_print*/ 0,\n\t/* tp_getattr*/ 0,\n\t/* tp_setattr*/ 0,\n\t/* tp_as_async */ 0,\n\t/* tp_repr*/ (reprfunc) pps_size_repr,\n\t/* tp_as_number*/ &pps_size_as_number,\n\t/* tp_as_sequence*/ 0,\n\t/* tp_as_mapping*/ 0,\n\t/* tp_hash */ 0,\n\t/* tp_call*/ 0,\n\t/* tp_str*/ 0,\n\t/* tp_getattro*/ 0,\n\t/* tp_setattro*/ 0,\n\t/* tp_as_buffer*/ 0,\n\t/* tp_flags*/ Py_TPFLAGS_DEFAULT | 
Py_TPFLAGS_BASETYPE,\n\t/* tp_doc */ pps_size_doc,\n\t/* tp_traverse */ 0,\n\t/* tp_clear */ 0,\n\t/* tp_richcompare */ (richcmpfunc) pps_size_richcompare,\n\t/* tp_weaklistoffset */ 0,\n\t/* tp_iter */ 0,\n\t/* tp_iternext */ 0,\n\t/* tp_methods */ 0,\n\t/* tp_members */ 0,\n\t/* tp_getset */ 0,\n\t/* tp_base */ 0,\n\t/* tp_dict */ 0,\n\t/* tp_descr_get */ 0,\n\t/* tp_descr_set */ 0,\n\t/* tp_dictoffset */ 0,\n\t/* tp_init */ (initproc) pps_size_init,\n\t/* tp_alloc */ 0,\n\t/* tp_new */ (newfunc) pps_size_new,\n\t/* tp_free */ 0,\n\t/* tp_is_gc For PyObject_IS_GC */ 0,\n\t/* tp_bases */ 0,\n\t/* tp_mro */ 0,\n\t/* tp_cache */ 0,\n\t/* tp_subclasses */ 0,\n\t/* tp_weaklist */ 0};\n\n/* -----                  External Functions for size              ----- */\n\n/**\n * @brief\n *\tconverts the input size_value struct to PyObject and returns reference to it.\n *\n * @param[in] from - size_value structure\n *\n * @return\tstructure handle\n * @retval\tpointer to size PyObject\tsuccess\n * @retval\tNULL\t\t\t\terror\n *\n */\nPyObject *\nPPSVR_Size_FromSizeValue(struct size_value from)\n{\n\tPPSVR_Size_Object *pyobj = NULL;\n\n\tif (!(pyobj = (PPSVR_Size_Object *) pps_size_new(&PPSVR_Size_Type, NULL, NULL)))\n\t\tgoto ERROR_EXIT;\n\tpyobj->sz_value = from;\n\tif (_pps_size_make_str_value((PyObject *) pyobj) != 0)\n\t\tgoto ERROR_EXIT;\n\treturn (PyObject *) pyobj;\nERROR_EXIT:\n\tif (pyobj)\n\t\tPy_CLEAR(pyobj);\n\treturn NULL;\n}\n"
  },
  {
    "path": "src/lib/Libpython/shared_python_utils.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h>\n#include <wchar.h>\n#include <pbs_python_private.h>\n#include <Python.h>\n#include \"pbs_ifl.h\"\n#include \"pbs_internal.h\"\n#include \"log.h\"\n\n#ifdef WIN32\n\n/**\n * @brief get_py_homepath\n * \tFind and return where python home is located\n *\n * NOTE: The caller must free the memory allocated by this function\n *\n * @param[in, out] homepath - buffer to copy python home path\n *\n * @return int\n * @retval 0 - Success\n * @retval 1 - Fail\n */\n\nint\nget_py_homepath(char **homepath)\n{\n#ifdef PYTHON\n\tstatic char python_homepath[MAXPATHLEN + 1] = {'\\0'};\n\tif (python_homepath[0] == '\\0') {\n\t\tsnprintf(python_homepath, MAXPATHLEN, \"%s/python\", pbs_conf.pbs_exec_path);\n\t\tfix_path(python_homepath, 3);\n\t\tif (!file_exists(python_homepath)) {\n\t\t\tlog_err(-1, __func__, \"Python home not found!\");\n\t\t\treturn 1;\n\t\t}\n\t}\n\t*homepath = strdup(python_homepath);\n\tif (*homepath == NULL)\n\t\treturn 1;\n\treturn 0;\n#else\n\treturn 1;\n#endif\n}\n#endif\n\n/**\n * @brief get_py_progname\n * \tFind and return where python binary is located\n *\n * NOTE: The caller must free the memory allocated by this function\n *\n * @param[in, out] binpath - buffer to copy python binary path\n *\n * @return int\n * @retval 0 - Success\n * @retval 1 - Fail\n */\nint\nget_py_progname(char **binpath)\n{\n#ifdef 
PYTHON\n\tstatic char python_binpath[MAXPATHLEN + 1] = {'\\0'};\n\tif (python_binpath[0] == '\\0') {\n#ifndef WIN32\n\t\tsnprintf(python_binpath, MAXPATHLEN, \"%s/python/bin/python3\", pbs_conf.pbs_exec_path);\n#else\n\t\tsnprintf(python_binpath, MAXPATHLEN, \"%s/python/python.exe\", pbs_conf.pbs_exec_path);\n\t\tfix_path(python_binpath, 3);\n#endif\n\t\tif (!file_exists(python_binpath)) {\n#ifdef PYTHON_BIN_PATH\n\t\t\tsnprintf(python_binpath, MAXPATHLEN, \"%s\", PYTHON_BIN_PATH);\n\t\t\tif (!file_exists(python_binpath))\n#endif\n\t\t\t{\n\t\t\t\tlog_err(-1, __func__, \"Python executable not found!\");\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t}\n\t}\n\t*binpath = strdup(python_binpath);\n\tif (*binpath == NULL)\n\t\treturn 1;\n\treturn 0;\n#else\n\treturn 1;\n#endif\n}\n#ifdef WIN32\n/**\n * @brief set_py_progname\n * \tFind and tell Python interpreter where python binary is located\n *\n * @return int\n * @retval 0 - Success\n * @retval 1 - Fail\n */\nint\nset_py_progname(void)\n{\n#ifdef PYTHON\n\tchar *python_binpath = NULL;\n\tstatic wchar_t w_python_binpath[MAXPATHLEN + 1] = {'\\0'};\n\n\tif (w_python_binpath[0] == '\\0') {\n\t\tif (get_py_progname(&python_binpath)) {\n\t\t\tlog_err(-1, __func__, \"Failed to find python binary path!\");\n\t\t\treturn 1;\n\t\t}\n\t\tmbstowcs(w_python_binpath, python_binpath, MAXPATHLEN + 1);\n\t\tfree(python_binpath);\n\t}\n\tPy_SetProgramName(w_python_binpath);\n\t/*\n\t * There is a bug in the Windows version of Python that makes it necessary to set the Python home explicitly.\n\t */\n\tstatic wchar_t w_python_homepath[MAXPATHLEN + 1] = {'\\0'};\n\tchar *python_homepath = NULL;\n\tif (w_python_homepath[0] == '\\0') {\n\t\tif (get_py_homepath(&python_homepath)) {\n\t\t\tlog_err(-1, __func__, \"Failed to find python home path!\");\n\t\t\treturn 1;\n\t\t}\n\t\tmbstowcs(w_python_homepath, python_homepath, MAXPATHLEN + 1);\n\t\tfree(python_homepath);\n\t}\n\tPy_SetPythonHome(w_python_homepath);\n\n\treturn 0;\n#else\n\treturn 0;\n#endif\n}\n#endif\n"
  },
  {
    "path": "src/lib/Libsec/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nnoinst_LIBRARIES = libsec.a\n\nlibsec_a_CPPFLAGS = \\\n\t-I$(top_srcdir)/src/include \\\n\t@KRB5_CFLAGS@\n\nlibsec_a_SOURCES = \\\n\tcs_standard.c\n"
  },
  {
    "path": "src/lib/Libsec/cs_standard.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tcs_standard.c\n * @brief\n * cs_standard.c - Standard PBS Authentication Module\n * \t  Authentication provided by this module is the standard (pbs_iff)\n * \t  authentication used in vanilla PBS.  
Internal security interface\n * \t  (Hooks) are for the most part stubs which return CS_SUCCESS\n */\n\n/* File is to be gutless if PBS not built vanilla w.r.t. security */\n\n#include <pbs_config.h>\n#include <stddef.h>\n#include <sys/types.h>\n#include \"libsec.h\"\n\n#if (!defined(PBS_SECURITY) || (PBS_SECURITY == STD)) || (defined(PBS_SECURITY) && (PBS_SECURITY == KRB5))\n\n/* system includes */\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <signal.h>\n#include <string.h>\n\n#if defined(HAVE_SYS_IOCTL_H)\n#include <sys/ioctl.h>\n#include <sys/uio.h>\n#endif\n\n#include <netdb.h>\n#include <sys/socket.h>\n\n/* PBS includes */\n\n#include <pbs_error.h>\n\n/*------------------------------------------------------------------------\n * interface return codes,...\n *------------------------------------------------------------------------\n * see pbs/src/include/libsec.h\n */\n\n/*------------------------------------------------------------------------\n * External functions\n *------------------------------------------------------------------------\n */\n\n/*------------------------------------------------------------------------\n * Global symbols and default \"evaluations\"\n *------------------------------------------------------------------------\n */\n/**\n * @brief\n * \tDefault logging function\n */\nvoid\nsec_cslog(int ecode, const char *caller, const char *txtmsg)\n{\n\treturn;\n}\n\nvoid (*p_cslog)(int ecode, const char *caller, const char *txtmsg) = sec_cslog;\n\n/*========================================================================\n * PBS hook functions\n * CS_read\t   - read encrypted data, decrypt and pass result to PBS\n * CS_write\t   - encrypt PBS data, and write result\n * CS_client_auth  - authenticate to a server, authenticate server\n * CS_server_auth  - authenticate a client, authenticate to client\n * CS_close_socket - release per-connection security data\n * CS_close_app    - release 
application security data\n * CS_client_init  - initialize client global data\n * CS_server_init  - initialize server global data\n * CS_verify       - verify a user id (still to be developed)\n * CS_remap_ctx - remap connection security context to a new descriptor.\n *------------------------------------------------------------------------\n */\n/**\n * @brief\n *\tCS_read - read data\n *\n * @par\tcall:\n *\tr = CS_read(fid,buf,len);\n *\n * @param[in]\tfid\t- file id to read from\n * @param[in]\tbuf\t- address of the buffer to fill\n * @param[in]\tlen\t- number of bytes to transfer\n *\n * @returns\tint\n * @retval\t- number of bytes read\n * @retval\tCS_IO_FAIL (-1) on error\n *------------------------------------------------------------------------\n */\n\nint\nCS_read(int sd, char *buf, size_t len)\n{\n\n\treturn (read(sd, buf, len));\n}\n\n/**\n * @brief\n * \tCS_write - write data\n *\n * @par\tcall:\n *      r = CS_write ( fid, buf, len )\n *\n * @param[in]\tfid     - file id to write to\n * @param[in]\tbuf     - address of the buffer to write\n * @param[in]\tlen     - number of bytes to transfer\n *\n * @returns\tint\n * @retval\t- number of bytes written\n * @retval\tCS_IO_FAIL (-1) on error\n *------------------------------------------------------------------------\n */\n\nint\nCS_write(int sd, char *buf, size_t len)\n{\n\treturn (write(sd, buf, len));\n}\n\n/**\n * @brief\n * \tCS_client_auth - stub interface for STD authentication to a remote\n * \tserver.\n *\n * @par\tcall:\n *\tr = CS_client_auth ( fd );\n *\n * @param[in]\tfd\t- socket file id\n *\n * @returns\tint\n * @retval\t- CS_AUTH_USE_IFF\n *\n * @par\tNote:  upon getting the above return value, the calling code should\n *\tcall PBSD_authenticate (which issues a read popen of pbs_iff)\n *\tand respond accordingly to its return value - close down the\n *\tconnection's security, close the socket.  
Otherwise, continue\n *\twith the steps that follow a successful authentication.\n *------------------------------------------------------------------------\n */\n\nint\nCS_client_auth(int sd)\n{\n\treturn (CS_AUTH_USE_IFF);\n}\n\n/**\n * @brief\n * \tCS_server_auth - authenticate to a client\n *\n * @par\tcall:\n *\tr = CS_server_auth(fd);\n *\n * @param[in]\tfd\t- socket file id\n *\n * @returns\tint\n * @retval\t- CS_AUTH_CHECK_PORT\n *\n * @par\tNote:  upon getting the above return value, the calling code should\n *\tcheck if the remote port is in the privileged range and\n *\tproceed accordingly.\n *------------------------------------------------------------------------\n */\n\nint\nCS_server_auth(int sd)\n{\n\n\treturn (CS_AUTH_CHECK_PORT);\n}\n\n/**\n * @brief\n * \tCS_close_socket - cleanup security blob when closing a socket.\n *\n * @par\tcall:\n * \tr = CS_close_socket ( fd );\n *\n * @param[in]\tfd\t- socket file id\n *\n * @return\tint\n * @retval\tstatus result, 0 => success\n *\n * @par\tnote:\n * \tThe socket should still be open when this function is called.\n *------------------------------------------------------------------------\n */\n\nint\nCS_close_socket(int sd)\n{\n\t/*\n\t * For standard PBS security we don't have a need for a\n\t * \"per connection\" security context\n\t */\n\n\treturn (CS_SUCCESS);\n}\n\n/**\n * @brief\n * \tCS_close_app - the global cleanup function\n *\n * @par\tcall:\n *\tr = CS_close_app();\n *\n * @returns\tint\n * @retval\t- CS_SUCCESS\n *\n */\n\nint\nCS_close_app(void)\n{\n\treturn (CS_SUCCESS);\n}\n\n/**\n * @brief\n *\tCS_client_init - the client initialization function for global security\n * \t\t    data\n * @par\tusage:\n * \tr = CS_client_init();\n *\n * @returns \tint\n * @retval\tinitialization status\n *\n */\n\nint\nCS_client_init(void)\n{\n\n\treturn (CS_SUCCESS); /* always return success if no 
error */\n}\n\n/**\n * @brief\n *\tCS_server_init - the server initialization function for global security\n * \t\t    data\n * @par\tusage:\n * \tr = CS_server_init();\n *\n * @returns\tint\n * @retval\tinitialization status\n *\n */\n\nint\nCS_server_init(void)\n{\n\treturn (CS_SUCCESS);\n}\n\n/**\n * @brief\n * \tCS_verify - verify user is authorized on host\n * @par\tcall:\n * \tr = CS_verify ( ??? );\n *\n * @returns\tint\n * @retval\tverification status\n * @retval\tCS_SUCCESS\t    => start the user process\n * @retval\tCS_FATAL_NOAUTH   => user is not authorized\n * @retval\tCS_NOTIMPLEMENTED => do the old thing (rhosts)\n *\n */\n\nint\nCS_verify()\n{\n\treturn (CS_SUCCESS);\n}\n\n/**\n * @brief\n * \tCS_remap_ctx - interface is available to remap connection's context\n * \tto a new descriptor.   Old association is removed from the tracking\n * \tmechanism's data.\n *\n * Should the socket descriptor associated with the connection get\n * replaced by a different descriptor (e.g. mom's call of the FDMOVE\n * macro for interactive qsub job) this is the interface to use to\n * make the needed adjustment to the tracking table.\n *\n * @param[in] sd     connection's original socket descriptor\n * @param[in] newsd  connection's new socket descriptor\n *\n * @return\tint\n * @retval\tCS_SUCCESS\n * @retval\tCS_FATAL\n *\n * @par\tRemark:\n *\tIf the return value is CS_FATAL, CS_close_socket should be called\n *\ton the original descriptor to deallocate the tracking table entry,\n *\tand the connection should then be closed.\n *------------------------------------------------------------------------\n */\n\nint\nCS_remap_ctx(int sd, int newsd)\n{\n\t/*\n\t * For standard PBS security we don't have a need for a\n\t * \"per connection\" security context remapping\n\t */\n\n\treturn (CS_SUCCESS);\n}\n\n#endif /* !defined(PBS_SECURITY) || (PBS_SECURITY == STD) || (PBS_SECURITY == KRB5) */\n"
  },
  {
    "path": "src/lib/Libsite/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nlib_LIBRARIES = libsite.a\n\nlibsite_a_CPPFLAGS = \\\n\t-I$(top_srcdir)/src/include \\\n\t-I$(top_srcdir)/src/resmom/linux \\\n\t@KRB5_CFLAGS@\n\nlibsite_a_SOURCES = \\\n\tsite_alt_rte.c \\\n\tsite_mom_chu.c \\\n\tsite_mom_jst.c \\\n\tsite_allow_u.c \\\n\tsite_check_u.c \\\n\tsite_map_usr.c \\\n\tsite_mom_ckp.c\n"
  },
  {
    "path": "src/lib/Libsite/site_allow_u.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/types.h>\n#include \"portability.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"server_limits.h\"\n#include \"pbs_error.h\"\n#include \"job.h\"\n\n/**\n * @file\tsite_allow_u.c\n */\n\n/**\n * @brief\n * \tsite_allow_u - site allow user access\n *\n *\tThis routine determines if a user is privileged to access the batch\n *\tsystem on this host.\n *\n *\tIts implementation is \"left as an exercise for the reader.\"\n *\n * @param[in]\tuser - the user's name making the request\n * @param[in]\thost - host from which the user is making the request\n *\n * @return\tint\n * @retval\tzero - if user is acceptable\n * @retval\tnon-zero error number (PBSE_PERM) if not\n */\n\nint\nsite_allow_u(char *user, char *host)\n{\n\treturn 0;\n}\n"
  },
  {
    "path": "src/lib/Libsite/site_alt_rte.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/types.h>\n#include <string.h>\n#include \"portability.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"server_limits.h\"\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"queue.h\"\n\n/**\n * @file\tsite_alt_rte.c\n */\nint default_router(job *jobp, struct pbs_queue *qp, long retry_time);\n\n/**\n * @brief\n *\tfunction for routing jobs.\n *\n * @param[in] jobp - job pointer\n * @param[in] qp - pointer to queue info\n * @param[in] retry_time - time to retry\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t!0\terror\n *\n */\nint\nsite_alt_router(job *jobp, pbs_queue *qp, long retry_time)\n{\n\treturn (default_router(jobp, qp, retry_time));\n}\n"
  },
  {
    "path": "src/lib/Libsite/site_check_u.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <errno.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <string.h>\n#include <sys/stat.h>\n#include <sys/types.h>\n#include <fcntl.h>\n#include <netdb.h>\n#include \"portability.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"server_limits.h\"\n#include \"job.h\"\n#include \"pbs_nodes.h\"\n#include \"reservation.h\"\n#include \"queue.h\"\n#include \"log.h\"\n#include \"pbs_ifl.h\"\n#include \"svrfunc.h\" /* to resolve cvrt_fqn_to_name */\n\n/**\n * @file\tsite_check_u.c\n */\n/* Global Data Items */\n\nextern char *pbs_o_host;\nextern char server_host[];\nextern char *msg_orighost; /* error message: no PBS_O_HOST */\n\n/**\n * @brief\n * \tsite_check_user_map - site_check_user_map()\n *\tThis routine determines if the specified \"luser\" is authorized\n *\ton this host to serve as a kind of \"proxy\" for the object's owner.\n *\tUses the object's \"User_List\" attribute.\n *\n *\tAs provided, this routine uses ruserok(3N).  
If this is a problem,\n *\tits replacement is \"left as an exercise for the reader.\"\n *\n * @param[in] pjob - job info\n * @param[in] objtype - type of object\n * @param[in] luser - username\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t>0\terror\n *\n */\n\nint\nsite_check_user_map(void *pobj, int objtype, char *luser)\n{\n\tchar *orighost;\n\tchar owner[PBS_MAXUSER + 1];\n\tchar *p1;\n\tchar *objid;\n\tint event_type, event_class;\n\tint rc;\n\n\t/* set pointer variables etc based on object's type */\n\tif (objtype == JOB_OBJECT) {\n\t\tp1 = get_jattr_str(pobj, JOB_ATR_job_owner);\n\t\torighost = get_jattr_str(pobj, JOB_ATR_submit_host);\n\t\tobjid = ((job *) pobj)->ji_qs.ji_jobid;\n\t\tevent_type = PBSEVENT_JOB;\n\t\tevent_class = PBS_EVENTCLASS_JOB;\n\t} else {\n\t\tp1 = get_rattr_str(pobj, RESV_ATR_resv_owner);\n\t\torighost = get_rattr_str(pobj, RESV_ATR_submit_host);\n\t\tobjid = ((resc_resv *) pobj)->ri_qs.ri_resvID;\n\t\tevent_type = PBSEVENT_RESV;\n\t\tevent_class = PBS_EVENTCLASS_RESV;\n\t}\n\n\t/* the owner name, without the \"@host\" */\n\tcvrt_fqn_to_name(p1, owner);\n\n\tif ((orighost == NULL) || (*orighost == '\\0')) {\n\t\tlog_event(event_type, event_class, LOG_INFO, objid, msg_orighost);\n\t\treturn (-1);\n\t}\n\tif (!strcasecmp(orighost, server_host) && !strcmp(owner, luser))\n\t\treturn (0);\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\t/* if this is gss job/resv ignore the rest */\n\tif (objtype == JOB_OBJECT && is_jattr_set(pobj, JOB_ATR_cred_id)) {\n\t\treturn -1;\n\t}\n\tif (objtype == RESC_RESV_OBJECT && is_rattr_set(pobj, RESV_ATR_cred_id)) {\n\t\treturn -1;\n\t}\n#endif\n\n#ifdef WIN32\n\trc = ruserok(orighost, isAdminPrivilege(luser), owner, luser);\n\tif (rc == -2) {\n\t\tsprintf(log_buffer, \"User %s does not exist!\", luser);\n\t\tlog_err(0, \"site_check_user_map\", log_buffer);\n\t\trc = -1;\n\t} else if (rc == -3) {\n\t\tsprintf(log_buffer,\n\t\t\t\"User %s's [HOMEDIR]/.rhosts is unreadable! 
Needs SYSTEM or Everyone access\", luser);\n\t\tlog_err(0, \"site_check_user_map\", log_buffer);\n\t\trc = -1;\n\t}\n#else\n\trc = ruserok(orighost, 0, owner, luser);\n#endif\n\n\treturn (rc);\n}\n\n/**\n * @brief\n *\tsite_check_u - site_acl_check()\n *    \tThis routine is a place holder for sites that wish to implement\n *    \taccess controls that differ from the standard PBS user, group, host\n *    \taccess controls.  It does NOT replace their functionality.\n *\n * @param[in] pjob - job pointer\n * @param[in] pqueue - pointer to queue defn\n *\n * @return\tint\n * @retval\t0\tok\n * @retval\t-1\taccess denied\n *\n */\n\nint\nsite_acl_check(job *pjob, pbs_queue *pque)\n{\n\treturn (0);\n}\n"
  },
  {
    "path": "src/lib/Libsite/site_map_usr.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tsite_map_usr.c\n * @brief\n * site_map_user - map user name on a named host to user name on this host\n * @note\n *\tfor those of us who operate in a world of consistent\n *\tuser names, this routine just returns the original name.\n *\n *\tThis routine is \"left as an exercise for the reader.\"\n *\tIf you don't have consistent names, it's up to you to replace\n *\tthis code with what will work for you.\n *\n *\tThe input parameters cannot be modified.  If the replacement routine\n *\tis to change the user's name, the new value should be kept in a static\n *\tcharacter array of size PBS_MAXUSER+1.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <unistd.h>\n#include \"pbs_ifl.h\"\n#include \"libpbs.h\"\n#include \"portability.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"pbs_nodes.h\"\n#include \"batch_request.h\"\n#include \"svrfunc.h\"\n#include <string.h>\n\n/* ARGSUSED */\n\n/**\n * @brief\n *\tmap requestor user@host to \"local\" name\n *\n * @param[in] uname - username\n * @param[in] host - hostname\n *\n * @return\tstring\n * @retval\tlocal name\tsuccess\n *\n */\nchar *\nsite_map_user(char *uname, char *host)\n{\n\treturn (uname);\n}\n\nchar *\nsite_map_resvuser(char *uname, char *host)\n{\n\treturn (uname);\n}\n"
  },
  {
    "path": "src/lib/Libsite/site_mom_chu.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tsite_mom_chu.c\n * @brief\n * site_mom_chu.c = a site modifiable file\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n/*\n * This is used only in mom and needs PBS_MOM defined in order to\n * have things from other .h files (such as struct task) be defined\n */\n#define PBS_MOM\n\n#include <sys/types.h>\n#include <pwd.h>\n#include \"portability.h\"\n#include \"list_link.h\"\n#include \"server_limits.h\"\n#include \"attribute.h\"\n#include \"job.h\"\n#include \"mom_mach.h\"\n#include \"mom_func.h\"\n\n/**\n * @brief\n * \tsite_mom_chkuser() - for adding validity checking to the user's account\n * \ton the execution machine.  This is called from start_exec().\n *\n * @param[in] pjob - job pointer\n *\n * @return\tint\n * @retval\tnon-zero if you wish the job to be aborted.\n *\n */\nint\nsite_mom_chkuser(job *pjob)\n{\n\treturn (0);\n}\n"
  },
  {
    "path": "src/lib/Libsite/site_mom_ckp.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h> /* DEBUG */\n/**\n * @file\tsite_mom_ckp.c\n * @brief\n * site_mom_ckp.c = a site modifiable file\n *\n *\tContains Pre and Post checkpoint stubs for MOM.\n */\n\n/*\n * This is used only in mom and needs PBS_MOM defined in order to\n * have things from other .h files (such as struct task) be defined\n */\n#define PBS_MOM\n\n#include <sys/types.h>\n#include <pwd.h>\n#include \"portability.h\"\n#include \"list_link.h\"\n#include \"server_limits.h\"\n#include \"attribute.h\"\n#include \"job.h\"\n#include \"mom_mach.h\"\n#include \"mom_func.h\"\n\n/**\n * @brief\n * \tsite_mom_postchk() - Post-checkpoint stub for MOM.\n *\tCalled if checkpoint (on qhold,qterm) succeeded.\n *\n * @param[in] pjob - job pointer\n * @param[in] hold_type - hold type indicating what caused held state of job\n *\n * @return\tint\n * @retval\t0\t\tIf ok\n * @retval\tnon-zero\tIf not ok.\n */\n\nint\nsite_mom_postchk(job *pjob, int hold_type)\n{\n\treturn 0;\n}\n\n/**\n * @brief\n * \tsite_mom_prerst() - Pre-restart stub for MOM.\n *\tCalled just before restart is performed.\n *\n * @param[in] pjob - job pointer\n *\n * @return\tint\n * @retval\t0\t\tif ok\n * @retval\tJOB_EXEC_FATAL1 for permanent error, abort job\n * @retval\tJOB_EXEC_RETRY\tfor temporary problem, requeue 
job.\n */\n\nint\nsite_mom_prerst(job *pjob)\n{\n\treturn 0;\n}\n"
  },
  {
    "path": "src/lib/Libsite/site_mom_jst.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n/**\n * @file\tsite_mom_jst.c\n * @brief\n * This is used only in mom and needs PBS_MOM defined in order to\n * have things from other .h files (such as struct task) be defined\n */\n#define PBS_MOM\n\n#include <sys/types.h>\n#include <pwd.h>\n#include \"portability.h\"\n#include \"list_link.h\"\n#include \"server_limits.h\"\n#include \"attribute.h\"\n#include \"job.h\"\n#include \"mom_mach.h\"\n#include \"mom_func.h\"\n\n/**\n * @brief\n * \tsite_job_setup() -  to allow a site to perform site specific actions\n *\tonce the session has been created and before the job is run.\n *\n * @param[in] pjob - job pointer\n *\n * @return\tint\n * @retval\tnon-zero to abort the job.\n *\n */\nint\nsite_job_setup(job *pjob)\n{\n\treturn (0);\n}\n"
  },
  {
    "path": "src/lib/Libtpp/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nnoinst_LIBRARIES = libtpp.a\n\nlibtpp_a_CPPFLAGS = \\\n\t-I$(top_srcdir)/src/include \\\n\t@KRB5_CFLAGS@\n\nlibtpp_a_SOURCES = \\\n\ttpp_internal.h \\\n\ttpp_client.c \\\n\ttpp_em.c \\\n\ttpp_platform.c \\\n\ttpp_router.c \\\n\ttpp_transport.c \\\n\ttpp_util.c\n"
  },
  {
    "path": "src/lib/Libtpp/tpp_client.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\ttpp_client.c\n *\n * @brief\tClient side of the TCP router based network\n *\n * @par\t\tFunctionality:\n *\n *\t\tTPP = TCP based Packet Protocol. 
This layer uses TCP in a multi-\n *\t\thop router based network topology to deliver packets to desired\n *\t\tdestinations. LEAF (end) nodes are connected to ROUTERS via\n *\t\tpersistent TCP connections. The ROUTER has intelligence to route\n *\t\tpackets to appropriate destination leaves or other routers.\n *\n *\t\tThis is the client side (referred to as leaf) in the tpp network\n *\t\ttopology. This compiles into the overall tpp library, and is\n *\t\tlinked to the PBS daemons. This code file implements the\n *\t\ttpp_ interface functions that the daemons use to communicate\n *\t\twith other daemons.\n *\n *\t\tThe code is driven by 2 threads. The Application thread (from\n *\t\tthe daemons) calls the main interfaces (tpp_ functions).\n *\t\tWhen a piece of data is to be transmitted, it's queued to a\n *\t\tstream, and another independent thread drives the actual IO of\n *\t\tthe data. We refer to these two threads in the comments as\n *\t\tIO thread and APP thread.\n *\n *\t\tThis also presents a single fd (a pipe) that can be used\n *\t\tby the application to monitor for incoming data or events on\n *\t\tthe transport channel (much like the way a datagram socket works).\n *\t\tThis fd can be used by the application using a typical select or\n *\t\tpoll system call.\n *\n *\t\tThe functions in this code file are clearly de-marked as to which\n *\t\tof the two threads drives them. In certain rare cases, a function\n *\t\tor data structure is used by both the threads, and therefore is\n *\t\tsynchronized using a mutex, but in general, most functions are\n *\t\tdriven by only one thread. 
This allows for a minimal contention\n *\t\tdesign, requiring minimal synchronization primitives.\n *\n */\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <unistd.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <netdb.h>\n#include <sys/time.h>\n#include <stdint.h>\n\n#include \"pbs_idx.h\"\n#include \"libpbs.h\"\n#include \"tpp_internal.h\"\n#include \"dis.h\"\n#include \"auth.h\"\n\n/*\n *\tGlobal Variables\n */\n\n/**\n *\tfile descriptor returned by tpp_init()\n */\nint tpp_fd = -1;\n\n/* whether a forked child called tpp_terminate or not? initialized to false */\nint tpp_terminated_in_child = 0;\n\n/*\n * app_mbox is the \"monitoring mechanism\" for the application,\n * used to send notifications to the application about incoming data\n * or events. THIS IS EDGE TRIGGERED. Once triggered, app must\n * read all the data available, or else it could end up with\n * a situation where data exists to be read, but there is no\n * notification to wake up waiting app thread from select/poll.\n */\ntpp_mbox_t app_mbox;\n\n/* counters for various statistics */\nint oopkt_cnt = 0;  /* number of out of order packets received */\nint duppkt_cnt = 0; /* number of duplicate packets received */\n\nstatic struct tpp_config *tpp_conf; /* the global TPP configuration file */\n\nstatic tpp_addr_t *leaf_addrs = NULL;\nstatic int leaf_addr_count = 0;\n\n/*\n * The structure to hold information about the multicast channel\n */\ntypedef struct {\n\tint num_fds;   /* number of streams that are part of mcast channel */\n\tint num_slots; /* number of slots in the channel (for resizing) */\n\tint *strms;    /* array of member stream descriptors */\n} mcast_data_t;\n\n/*\n * The stream structure. Information about each stream is maintained in this\n * structure.\n *\n * Various members of the stream structure are accessed by either of the threads\n * IO and APP. 
Some of the fields are set by the APP thread the first time and\n * thereafter accessed/updated by the IO thread.\n */\ntypedef struct {\n\tunsigned char strm_type; /* normal stream or multicast stream */\n\n\tunsigned int sd;\t /* source stream descriptor, APP thread assigns, IO thread uses */\n\tunsigned int dest_sd;\t /* destination stream descriptor, IO thread only */\n\tunsigned int src_magic;\t /* A magically unique number that identifies src stream uniquely */\n\tunsigned int dest_magic; /* A magically unique number that identifies dest stream uniquely */\n\n\tshort used_locally; /* Whether this stream was accessed locally by the APP, APP thread only */\n\n\tunsigned short u_state; /* stream state, APP thread updates, IO thread read-only */\n\tunsigned short t_state;\n\tshort lasterr; /* updated by IO thread only, for future use */\n\n\ttpp_addr_t src_addr;  /* address of the source host */\n\ttpp_addr_t dest_addr; /* address of destination host - set by APP thread, read-only by IO thread */\n\n\tvoid *user_data; /* user data set by tpp_dis functions. 
Basically used for DIS encoding */\n\n\ttpp_que_t recv_queue; /* received packets - APP thread only, hence no lock */\n\n\tmcast_data_t *mcast_data; /* multicast related data in case of multicast stream type */\n\n\tvoid (*close_func)(int); /* close function to be called when this stream is closed */\n\n\ttpp_que_elem_t *timeout_node; /* pointer to myself in the timeout streams queue */\n} stream_t;\n\n/*\n * Slot structure - Streams are part of an array of slots\n * Using the stream sd, it's easy to index into this slot array to find the\n * stream structure\n */\ntypedef struct {\n\tint slot_state; /* state of the slot - used, free */\n\tstream_t *strm; /* pointer to the stream structure at this slot */\n} stream_slot_t;\nstream_slot_t *strmarray = NULL; /* array of streams */\npthread_rwlock_t strmarray_lock; /* global lock for the streams array and streams_idx (not for an individual stream) */\nunsigned int max_strms = 0;\t /* total number of streams allocated */\n\n/* the following two variables are used to quickly find an unused slot */\nunsigned int high_sd = UNINITIALIZED_INT; /* the highest stream sd used */\ntpp_que_t freed_sd_queue;\t\t  /* last freed stream sd */\nint freed_queue_count = 0;\n\n/* index of streams - so that we can search faster inside it */\nvoid *streams_idx = NULL;\n\n/* following common structure is used to do a timed action on a stream */\ntypedef struct {\n\tunsigned int sd;\n\ttime_t strm_action_time;\n\tvoid (*strm_action_func)(unsigned int);\n} strm_action_info_t;\n\n/* global queue of stream slots to be marked FREE after TPP_CLOSE_WAIT time */\ntpp_que_t strm_action_queue;\npthread_mutex_t strm_action_queue_lock;\n\n/* leaf specific stream states */\n#define TPP_STRM_STATE_OPEN 1  /* stream is open */\n#define TPP_STRM_STATE_CLOSE 2 /* stream is closed */\n\n#define TPP_TRNS_STATE_OPEN 1\t     /* stream open */\n#define TPP_TRNS_STATE_PEER_CLOSED 2 /* stream closed by peer */\n#define TPP_TRNS_STATE_NET_CLOSED 3  /* network 
closed (noroute etc) */\n\n#define TPP_MCAST_SLOT_INC 100 /* inc members in mcast group */\n\n/* the physical connection to the router from this leaf */\nstatic tpp_router_t **routers = NULL;\nstatic int max_routers = 0;\n\n/* forward declarations of functions used by this code file */\n\n/* function pointers */\nvoid (*the_app_net_down_handler)(void *data) = NULL;\nvoid (*the_app_net_restore_handler)(void *data) = NULL;\ntime_t leaf_next_event_expiry(time_t now); /* IO thread only */\n\n/* static functions */\nstatic int connect_router(tpp_router_t *r);\nstatic tpp_router_t *get_active_router();\nstatic stream_t *get_strm_atomic(unsigned int sd);\nstatic stream_t *get_strm(unsigned int sd);\nstatic stream_t *alloc_stream(tpp_addr_t *src_addr, tpp_addr_t *dest_addr);\nstatic void free_stream(unsigned int sd);\nstatic void free_stream_resources(stream_t *strm);\nstatic void queue_strm_close(stream_t *);     /* call only by APP thread, however inserts into strm_action_queue */\nstatic void queue_strm_free(unsigned int sd); /* invoked by IO thread only, via acting on the strm_action_queue */\nstatic void act_strm(time_t now, int force);\nstatic int send_app_strm_close(stream_t *strm, int cmd, int error);\nstatic int send_pkt_to_app(stream_t *strm, unsigned char type, void *data, int sz, int totlen);\nstatic stream_t *find_stream_with_dest(tpp_addr_t *dest_addr, unsigned int dest_sd, unsigned int dest_magic);\nstatic int send_spl_packet(stream_t *strm, int type);\nstatic int leaf_send_ctl_join(int tfd, void *c);\nstatic int send_to_router(tpp_packet_t *pkt);\n\n/* forward declarations */\nstatic int leaf_pkt_presend_handler(int tfd, tpp_packet_t *pkt, void *ctx, void *extra);\nstatic int leaf_pkt_handler(int tfd, void *data, int len, void *ctx, void *extra);\nstatic int leaf_pkt_handler_inner(int tfd, void *buf, void **data_out, int len, void *c, void *extra);\nstatic int leaf_close_handler(int tfd, int error, void *ctx, void *extra);\nstatic int 
leaf_timer_handler(time_t now);\nstatic int leaf_post_connect_handler(int tfd, void *data, void *ctx, void *extra);\n\n/**\n * @brief\n *\tHelper function to get a stream pointer and slot state in an atomic fashion\n *\n * @par Functionality:\n *\tAcquire a lock on the strmarray lock and return the stream pointer\n *\n * @param[in] sd - The stream descriptor\n *\n * @return - Stream pointer\n * @retval NULL - Bad stream index/descriptor\n * @retval !NULL - Associated stream pointer\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic stream_t *\nget_strm_atomic(unsigned int sd)\n{\n\tstream_t *strm = NULL;\n\n\tif (tpp_terminated_in_child == 1)\n\t\treturn NULL;\n\n\ttpp_read_lock(&strmarray_lock); /* walking the array, so read lock */\n\tif (sd < max_strms) {\n\t\tif (strmarray[sd].slot_state == TPP_SLOT_BUSY)\n\t\t\tstrm = strmarray[sd].strm;\n\t}\n\ttpp_unlock_rwlock(&strmarray_lock);\n\n\treturn strm;\n}\n\n/**\n * @brief\n *\tHelper function to get a stream pointer from a stream descriptor\n *\n * @par Functionality:\n *\tReturns the stream pointer associated to the stream index. 
Does some\n *\terror checking whether the stream slot is busy, and stream itself is\n *\topen from an application point of view.\n *\n * @param[in] sd - The stream descriptor\n *\n * @return - Stream pointer\n * @retval NULL - Bad stream index/descriptor\n * @retval !NULL - Associated stream pointer\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic stream_t *\nget_strm(unsigned int sd)\n{\n\tstream_t *strm;\n\n\terrno = 0;\n\tstrm = get_strm_atomic(sd);\n\tif (!strm) {\n\t\terrno = EBADF;\n\t\treturn NULL;\n\t}\n\tif (strm->u_state == TPP_STRM_STATE_CLOSE) {\n\t\terrno = ENOTCONN;\n\t\treturn NULL;\n\t}\n\treturn strm;\n}\n\n/**\n * @brief\n *\tSets the APP handler to be called in case the network connection from\n *\tthe leaf to the router is restored, or comes back up.\n *\n * @par Functionality:\n *\tWhen a previously down connection between the leaf and router is\n *\trestored or vice-versa, IO thread sends notification to APP thread. The\n *\tAPP thread, then, calls the handler prior registered by this setter\n *\tfunction. 
This function is called by the APP to set such a handler.\n *\tFor example, in the case of pbs_server, such a handler is \"net_down_handler\".\n *\n * @see\n *\tleaf_close_handler\n *\n * @param[in] - app_net_down_handler - ptr to a function (in the calling APP)\n *\t    that must be called when the network link between leaf and router\n *\t    goes down.\n\n * @param[in] - app_net_restore_handler - ptr to function (in the calling APP)\n *\t    that must be called when the network link between leaf and router\n *\t    is restored.\n *\n * @return void\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nvoid\ntpp_set_app_net_handler(void (*app_net_down_handler)(void *data), void (*app_net_restore_handler)(void *data))\n{\n\tthe_app_net_down_handler = app_net_down_handler;\n\tthe_app_net_restore_handler = app_net_restore_handler;\n}\n\nstatic int\nleaf_send_ctl_join(int tfd, void *c)\n{\n\ttpp_context_t *ctx = (tpp_context_t *) c;\n\ttpp_router_t *r;\n\ttpp_join_pkt_hdr_t *hdr = NULL;\n\ttpp_packet_t *pkt = NULL;\n\tint len;\n\tint i;\n\n\tif (!ctx)\n\t\treturn 0;\n\n\tif (ctx->type == TPP_ROUTER_NODE) {\n\t\tr = (tpp_router_t *) ctx->ptr;\n\t\tr->state = TPP_ROUTER_STATE_CONNECTING;\n\n\t\t/* send a TPP_CTL_JOIN message */\n\t\tpkt = tpp_bld_pkt(NULL, NULL, sizeof(tpp_join_pkt_hdr_t), 1, (void **) &hdr);\n\t\tif (!pkt) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to build packet\");\n\t\t\treturn -1;\n\t\t}\n\n\t\thdr->type = TPP_CTL_JOIN;\n\t\thdr->node_type = tpp_conf->node_type;\n\t\thdr->hop = 1;\n\t\thdr->index = r->index;\n\t\thdr->num_addrs = leaf_addr_count;\n\n\t\t/* log my own leaf name to help in troubleshooting later */\n\t\tfor (i = 0; i < leaf_addr_count; i++) {\n\t\t\ttpp_log(LOG_CRIT, NULL, \"Registering address %s to pbs_comm %s\", tpp_netaddr(&leaf_addrs[i]), r->router_name);\n\t\t}\n\n\t\tlen = leaf_addr_count * sizeof(tpp_addr_t);\n\t\tif (!tpp_bld_pkt(pkt, leaf_addrs, len, 1, NULL)) {\n\t\t\ttpp_log(LOG_CRIT, __func__, 
\"Failed to build packet\");\n\t\t\treturn -1;\n\t\t}\n\n\t\tif (tpp_transport_vsend(r->conn_fd, pkt) != 0) { /* this has to go irrespective of router state being down */\n\t\t\ttpp_log(LOG_CRIT, __func__, \"tpp_transport_vsend failed, err=%d\", errno);\n\t\t\treturn -1;\n\t\t}\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tThe leaf post connect handler\n *\n * @par Functionality\n *\tWhen the connection between this leaf and another is dropped, the IO\n *\tthread continuously attempts to reconnect to it. If the connection is\n *\trestored, then this prior registered function is called.\n *\n * @param[in] tfd - The actual IO connection on which data was about to be\n *\t\t\tsent (unused)\n * @param[in] data - Any data the IO thread might want to pass to this function.\n *\t\t     (unused)\n * @param[in] c - Context associated with this connection, points us to the\n *                router being connected to\n * @param[in] extra - The extra data associated with IO connection\n *\n * @return Error code\n * @retval 0 - Success\n * @retval -1 - Failure\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic int\nleaf_post_connect_handler(int tfd, void *data, void *c, void *extra)\n{\n\ttpp_context_t *ctx = (tpp_context_t *) c;\n\tconn_auth_t *authdata = (conn_auth_t *) extra;\n\tint rc = 0;\n\n\tif (!ctx)\n\t\treturn 0;\n\n\tif (ctx->type != TPP_ROUTER_NODE)\n\t\treturn 0;\n\n\tif (tpp_conf->auth_config->encrypt_method[0] != '\\0' ||\n\t    strcmp(tpp_conf->auth_config->auth_method, AUTH_RESVPORT_NAME) != 0) {\n\n\t\t/*\n\t\t * Since either auth is not resvport or encryption is enabled,\n\t\t * initiate handshakes for them\n\t\t *\n\t\t * If encryption is enabled then first initiate handshake for it\n\t\t * else for authentication\n\t\t *\n\t\t * Here we are only initiating handshake, if any handshake needs\n\t\t * continuation then it will be handled in leaf_pkt_handler\n\t\t */\n\n\t\tint conn_fd = ((tpp_router_t *) 
ctx->ptr)->conn_fd;\n\tauthdata = tpp_make_authdata(tpp_conf, AUTH_CLIENT, tpp_conf->auth_config->auth_method, tpp_conf->auth_config->encrypt_method);\n\t\tif (authdata == NULL) {\n\t\t\t/* tpp_make_authdata already logged error */\n\t\t\treturn -1;\n\t\t}\n\t\tauthdata->conn_initiator = 1;\n\t\ttpp_transport_set_conn_extra(tfd, authdata);\n\n\t\tif (authdata->config->encrypt_method[0] != '\\0') {\n\t\t\trc = tpp_handle_auth_handshake(tfd, conn_fd, authdata, FOR_ENCRYPT, NULL, 0);\n\t\t\tif (rc != 1)\n\t\t\t\treturn rc;\n\t\t}\n\n\t\tif (strcmp(authdata->config->auth_method, AUTH_RESVPORT_NAME) != 0) {\n\t\t\tif (strcmp(authdata->config->auth_method, authdata->config->encrypt_method) != 0) {\n\t\t\t\trc = tpp_handle_auth_handshake(tfd, conn_fd, authdata, FOR_AUTH, NULL, 0);\n\t\t\t\tif (rc != 1)\n\t\t\t\t\treturn rc;\n\t\t\t} else {\n\t\t\t\tauthdata->authctx = authdata->encryptctx;\n\t\t\t\tauthdata->authdef = authdata->encryptdef;\n\t\t\t\ttpp_transport_set_conn_extra(tfd, authdata);\n\t\t\t}\n\t\t}\n\t}\n\n\t/*\n\t * We are in the post connect handler and authentication\n\t * has completed, so send TPP_CTL_JOIN\n\t */\n\treturn leaf_send_ctl_join(tfd, c);\n}\n\n/**\n * @brief\n *\tThe function initiates a connection from the leaf to a router\n *\n * @par Functionality:\n *\tThis function calls tpp_transport_connect (from the transport layer)\n *\tand queues a \"JOIN\" message to be sent to the router once the connection\n *\tis established.\n *\n *\tThe TPP_CONTROL_JOIN message is a control message that identifies the\n *\tleaf's properties to the router (registers the leaf to the router).\n *\tThe properties sent are the node type, i.e., whether it is a leaf or\n *\tanother router in the network, and its name.\n *\n * @see\n *\ttpp_transport_connect\n *\n * @param[in] r - struct router - info of the router to connect to\n *\n * @return int\n * @retval -1 - Failure\n * @retval  0 - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * 
@par MT-safe: Yes\n *\n */\nstatic int\nconnect_router(tpp_router_t *r)\n{\n\ttpp_context_t *ctx;\n\n\t/* since we connected we should add a context */\n\tif ((ctx = (tpp_context_t *) malloc(sizeof(tpp_context_t))) == NULL) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating tpp context\");\n\t\treturn -1;\n\t}\n\tctx->ptr = r;\n\tctx->type = TPP_ROUTER_NODE;\n\n\t/* initiate connections to the tpp router (single for now) */\n\tif (tpp_transport_connect(r->router_name, r->delay, ctx, &(r->conn_fd)) == -1) {\n\t\ttpp_log(LOG_ERR, NULL, \"Connection to pbs_comm %s failed\", r->router_name);\n\t\treturn -1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tInitializes the client side of the TPP library\n *\n * @par Functionality:\n *\tThis function creates the fd (pipe) that the APP can monitor for events,\n *\tinitializes the transport layer by calling tpp_transport_init.\n *\tIt initializes the various mutexes and global queues of structures.\n *\tIt also registers a set of \"handlers\" that the transport layer calls\n *\tusing the IO thread into the leaf logic code\n *\tetc.\n *\n * @see\n *\ttpp_transport_init\n *\ttpp_transport_set_handlers\n *\n * @param[in] cnf - The tpp configuration structure\n *\n * @return - The file descriptor that APP must use to monitor for events\n * @retval -1   - Function failed\n * @retval !=-1 - Success, read end of the pipe is returned to APP to monitor\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_init(struct tpp_config *cnf)\n{\n\tint rc, i;\n\tint app_fd;\n\n\ttpp_conf = cnf;\n\tif (tpp_conf->node_name == NULL) {\n\t\ttpp_log(LOG_CRIT, NULL, \"TPP leaf node name is NULL\");\n\t\treturn -1;\n\t}\n\n\t/* before doing anything else, initialize the key to the tls */\n\tif (tpp_init_tls_key() != 0) {\n\t\t/* can only use prints since tpp key init failed */\n\t\tfprintf(stderr, \"Failed to initialize tls key\\n\");\n\t\treturn -1;\n\t}\n\n\ttpp_log(LOG_CRIT, NULL, \"TPP leaf node names = %s\", 
tpp_conf->node_name);\n\n\ttpp_init_rwlock(&strmarray_lock);\n\ttpp_init_lock(&strm_action_queue_lock);\n\n\tif (tpp_mbox_init(&app_mbox, \"app_mbox\", TPP_MAX_MBOX_SIZE) != 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to create application mbox\");\n\t\treturn -1;\n\t}\n\n\t/* initialize the app_mbox */\n\tapp_fd = tpp_mbox_getfd(&app_mbox);\n\n\tTPP_QUE_CLEAR(&strm_action_queue);\n\tTPP_QUE_CLEAR(&freed_sd_queue);\n\n\tstreams_idx = pbs_idx_create(PBS_IDX_DUPS_OK, sizeof(tpp_addr_t));\n\tif (streams_idx == NULL) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to create index for leaves\");\n\t\treturn -1;\n\t}\n\n\t/* get the addresses associated with this leaf */\n\tleaf_addrs = tpp_get_addresses(tpp_conf->node_name, &leaf_addr_count);\n\tif (!leaf_addrs) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to resolve address, err=%d\", errno);\n\t\treturn -1;\n\t}\n\n\t/*\n\t * first register handlers with the transport, so these functions are called\n\t * from the IO thread from the transport layer\n\t */\n\ttpp_transport_set_handlers(\n\t\tleaf_pkt_presend_handler,  /* called before sending packet */\n\t\tleaf_pkt_handler,\t   /* called when a packet arrives */\n\t\tleaf_close_handler,\t   /* called when a connection closes */\n\t\tleaf_post_connect_handler, /* called when connection restores */\n\t\tleaf_timer_handler\t   /* called after amt of time from previous handler */\n\t);\n\n\t/* initialize the tpp transport layer */\n\tif ((rc = tpp_transport_init(tpp_conf)) == -1)\n\t\treturn -1;\n\n\tmax_routers = 0;\n\twhile (tpp_conf->routers[max_routers])\n\t\tmax_routers++; /* count max_routers */\n\n\tif (max_routers == 0) {\n\t\ttpp_log(LOG_CRIT, NULL, \"No pbs_comms configured, cannot start\");\n\t\treturn -1;\n\t}\n\n\tif ((routers = calloc(max_routers, sizeof(tpp_router_t *))) == NULL) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating pbs_comms array\");\n\t\treturn -1;\n\t}\n\trouters[max_routers - 1] = NULL;\n\n\t/* initialize the router structures and 
initiate connections to them */\n\tfor (i = 0; tpp_conf->routers[i]; i++) {\n\t\tif ((routers[i] = malloc(sizeof(tpp_router_t))) == NULL) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating pbs_comm structure\");\n\t\t\treturn -1;\n\t\t}\n\n\t\trouters[i]->router_name = tpp_conf->routers[i];\n\t\trouters[i]->conn_fd = -1;\n\t\trouters[i]->initiator = 1;\n\t\trouters[i]->state = TPP_ROUTER_STATE_DISCONNECTED;\n\t\trouters[i]->index = i;\n\t\trouters[i]->delay = 0;\n\n\t\ttpp_log(LOG_INFO, NULL, \"Connecting to pbs_comm %s\", routers[i]->router_name);\n\n\t\t/* connect to router and send initial join packet */\n\t\tif ((rc = connect_router(routers[i])) != 0)\n\t\t\treturn -1;\n\t}\n\n#ifndef WIN32\n\n\t/*\n\t * Atfork handlers are required because fork() replicates only the\n\t * thread that called fork(), and the TPP layer never calls fork. This\n\t * means that the TPP thread is always dead/unavailable in a child process.\n\t *\n\t * We register only a post-fork child handler that sets the\n\t * \"tpp_terminated_in_child\" flag, which makes TPP functions return\n\t * right away without doing anything, effectively \"bypassing\" TPP\n\t * functionality in the child process.\n\t *\n\t */\n\n\t/* for unix, set a pthread_atfork handler */\n\tif (pthread_atfork(NULL, NULL, tpp_terminate)) {\n\t\ttpp_log(LOG_CRIT, __func__, \"TPP client atfork handler registration failed\");\n\t\treturn -1;\n\t}\n#endif\n\n\treturn (app_fd);\n}\n\n/**\n * @brief\n *\ttpp/dis support routine for ending a message that was read.\n *\tSkips over decoding to the next message.\n *\n * @param[in] - fd - Tpp channel whose dis read packet has to be purged\n *\n * @retval\t0 Success\n * @retval\t-1 error\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nint\ntpp_eom(int fd)\n{\n\ttpp_packet_t *p;\n\tstream_t *strm;\n\tpbs_tcp_chan_t *tpp;\n\n\t/* check for bad file descriptor */\n\tif (fd < 0)\n\t\treturn -1;\n\n\tTPP_DBPRT(\"sd=%d\", fd);\n\tstrm = 
get_strm(fd);\n\tif (!strm) {\n\t\tTPP_DBPRT(\"Bad sd %d\", fd);\n\t\treturn -1;\n\t}\n\tp = tpp_deque(&strm->recv_queue); /* only APP thread accesses this queue, hence no lock */\n\ttpp_free_pkt(p);\n\ttpp = tpp_get_user_data(fd);\n\tif (tpp != NULL) {\n\t\t/* initialize read buffer */\n\t\tdis_clear_buf(&tpp->readbuf);\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tOpens a virtual connection to another leaf (another PBS daemon)\n *\n * @par Functionality:\n *\tThis function merely allocates a free stream slot from the array of\n *\tstreams and sets the destination host and port, and returns the slot\n *\tindex as the fd for the application to use to read/write to the virtual\n *\tconnection\n *\n * @param[in] dest_host - Hostname of the destination leaf\n * @param[in] port - The port at which the destination is available\n *\n * @return - The file descriptor that APP must use to do the IO\n * @retval -1   - Function failed\n * @retval !=-1 - Success, the fd for the APP to use is returned\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_open(char *dest_host, unsigned int port)\n{\n\tstream_t *strm;\n\tchar *dest;\n\ttpp_addr_t *addrs, dest_addr;\n\tint count;\n\tvoid *pdest_addr = &dest_addr;\n\tvoid *idx_ctx = NULL;\n\n\tif ((dest = mk_hostname(dest_host, port)) == NULL) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory opening stream\");\n\t\treturn -1;\n\t}\n\n\taddrs = tpp_get_addresses(dest, &count);\n\tif (!addrs) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to resolve address, err=%d\", errno);\n\t\tfree(dest);\n\t\treturn -1;\n\t}\n\tmemcpy(&dest_addr, addrs, sizeof(tpp_addr_t));\n\tfree(addrs);\n\n\ttpp_read_lock(&strmarray_lock); /* walking the idx, so read lock */\n\n\t/*\n\t * Just try to find a fully open stream to use, else fall through\n\t * to create a new stream. 
Any half closed streams will be closed\n\t * elsewhere, either when network first dropped or if any message\n\t * comes to such a half open stream\n\t */\n\twhile (pbs_idx_find(streams_idx, &pdest_addr, (void **) &strm, &idx_ctx) == PBS_IDX_RET_OK) {\n\t\tif (memcmp(pdest_addr, &dest_addr, sizeof(tpp_addr_t)) != 0)\n\t\t\tbreak;\n\t\tif (strm->u_state == TPP_STRM_STATE_OPEN && strm->t_state == TPP_TRNS_STATE_OPEN && strm->used_locally == 1) {\n\t\t\ttpp_unlock_rwlock(&strmarray_lock);\n\t\t\tpbs_idx_free_ctx(idx_ctx);\n\n\t\t\tTPP_DBPRT(\"Stream for dest[%s] returned = %u\", dest, strm->sd);\n\t\t\tfree(dest);\n\t\t\treturn strm->sd;\n\t\t}\n\t}\n\tpbs_idx_free_ctx(idx_ctx);\n\n\ttpp_unlock_rwlock(&strmarray_lock);\n\n\t/* by default use the first address of the host as the source address */\n\tif ((strm = alloc_stream(&leaf_addrs[0], &dest_addr)) == NULL) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating stream\");\n\t\tfree(dest);\n\t\treturn -1;\n\t}\n\n\t/* set the used_locally flag, since the APP is aware of this fd */\n\tstrm->used_locally = 1;\n\n\tTPP_DBPRT(\"Stream for dest[%s] returned = %d\", dest, strm->sd);\n\tfree(dest);\n\n\treturn strm->sd;\n}\n\n/**\n * @brief\n *\tReturns the active router which has an established TCP connection\n *\n * @par Functionality:\n *\tLoops through the list of routers and returns the first one having\n *\tan active TCP connection. 
Favors the currently active router\n *\n * @return - The active router\n * @retval NULL   - Function failed\n * @retval !NULL - Success, the active router is returned\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic tpp_router_t *\nget_active_router()\n{\n\tint i;\n\tstatic int index = 0;\n\n\tif (routers == NULL)\n\t\treturn NULL;\n\n\t/*\n\t * If we had already been using an alternate router it should be good to use\n\t * without checking connection age, since we were already using it\n\t */\n\tif (index >= 0 && index < max_routers && routers[index] && routers[index]->state == TPP_ROUTER_STATE_CONNECTED)\n\t\treturn routers[index];\n\n\tfor (i = 0; i < max_routers; i++) {\n\t\tif (routers[i]->state == TPP_ROUTER_STATE_CONNECTED) {\n\t\t\tindex = i;\n\t\t\treturn routers[index];\n\t\t}\n\t}\n\n\treturn NULL;\n}\n\n/**\n * @brief\n *\tSends data to a stream\n *\n * @par Functionality:\n *\tBasically queues data to be sent by the IO thread to the desired\n *\tdestination (as specified by the stream descriptor)\n *\n * @param[in] sd - The stream descriptor to which to send data\n * @param[in] data - Pointer to the data block to be sent\n * @param[in] len - Length of the data block to be sent\n *\n * @return  Error code\n * @retval  -1 - Failure\n * @retval   >=0 - Success - amount of data sent\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_send(int sd, void *data, int len)\n{\n\tstream_t *strm;\n\tint rc = -1;\n\tunsigned int to_send;\n\tvoid *data_dup;\n\ttpp_data_pkt_hdr_t *dhdr = NULL;\n\ttpp_packet_t *pkt;\n\n\tstrm = get_strm(sd);\n\tif (!strm) {\n\t\tTPP_DBPRT(\"Bad sd %d\", sd);\n\t\treturn -1;\n\t}\n\n\tif ((tpp_conf->compress == 1) && (len > TPP_COMPR_SIZE)) {\n\t\tdata_dup = tpp_deflate(data, len, &to_send); /* creates a copy */\n\t\tif (data_dup == NULL) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"tpp deflate failed\");\n\t\t\treturn -1;\n\t\t}\n\t} else {\n\t\tdata_dup = malloc(len);\n\t\tif 
(!data_dup) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to duplicate data\");\n\t\t\treturn -1;\n\t\t}\n\t\tmemcpy(data_dup, data, len);\n\t\tto_send = len;\n\t}\n\t/* we have created a copy of the data either way, compressed or not */\n\n\ttpp_log(LOG_DEBUG, __func__, \"**** sd=%d, compr_len=%d, len=%d, dest_sd=%u\", sd, to_send, len, strm->dest_sd);\n\n\tif (strm->strm_type == TPP_STRM_MCAST) {\n\t\t/* do other stuff */\n\t\treturn tpp_mcast_send(sd, data_dup, to_send, len);\n\t}\n\n\t/* create a new pkt and add the dhdr chunk first */\n\tpkt = tpp_bld_pkt(NULL, NULL, sizeof(tpp_data_pkt_hdr_t), 1, (void **) &dhdr);\n\tif (!pkt) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to build packet\");\n\t\tfree(data_dup);\n\t\treturn -1;\n\t}\n\tdhdr->type = TPP_DATA;\n\tdhdr->src_sd = htonl(sd);\n\tdhdr->src_magic = htonl(strm->src_magic);\n\tdhdr->dest_sd = htonl(strm->dest_sd);\n\tdhdr->totlen = htonl(len);\n\tmemcpy(&dhdr->src_addr, &strm->src_addr, sizeof(tpp_addr_t));\n\tmemcpy(&dhdr->dest_addr, &strm->dest_addr, sizeof(tpp_addr_t));\n\n\t/* add the data chunk to the already created pkt */\n\tif (!tpp_bld_pkt(pkt, data_dup, to_send, 0, NULL)) { /* data is already a duplicate buffer */\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to build packet\");\n\t\treturn -1;\n\t}\n\n\trc = send_to_router(pkt);\n\tif (rc == 0)\n\t\treturn len; /* all given data sent, so return len */\n\n\tif (rc == -2)\n\t\ttpp_log(LOG_CRIT, __func__, \"mbox full, returning error to App!\");\n\telse if (rc == -1)\n\t\ttpp_log(LOG_ERR, __func__, \"Failed to send to router\");\n\n\tsend_app_strm_close(strm, TPP_CMD_NET_CLOSE, 0);\n\treturn rc;\n}\n\n/**\n * @brief\n *\tpoll function to check if any streams have a message/notification\n *\twaiting to be read by the APP.\n *\n * @return - Descriptor of stream which has data/notification to be read\n * @retval -2   - No streams have outstanding/pending data/notifications\n * @retval != -2 - Stream descriptor that has pending data/notifications\n *\n * 
@par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_poll(void)\n{\n\tint tfd;\n\tif (tpp_ready_fds(&tfd, 1) == 1) {\n\t\treturn tfd;\n\t}\n\treturn -2;\n}\n\n/**\n * @brief\n *\tFunction to recv/read data from a tpp stream\n *\n * @par Functionality:\n *\tThis function reads the requested amount of bytes from the \"current\"\n *\tposition of the next available data packet in the \"received\" queue.\n *\n *\tIt advances the \"current position\" in the data packet, so subsequent\n *\treads on this stream reads the next bytes from the data packet.\n *\tIt never advances the \"current position\" past the end of the data\n *\tpacket. To move to the next packet, the APP must call \"tpp_eom\".\n *\n * @param[in]  sd   - The stream descriptor to which to read data\n * @param[out] data - Pointer to the buffer to read data into\n * @param[in]  len  - Length of the buffer\n *\n * @return\n * @retval -1    - Error reading the stream (errno set EWOULDBLOCK if no more\n *\t\t   data is available to be read)\n * @retval != -1 - Number of bytes of data actually read from the stream\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_recv(int sd, void *data, int len)\n{\n\ttpp_que_elem_t *n;\n\ttpp_packet_t *cur_pkt = NULL;\n\ttpp_chunk_t *chunk = NULL;\n\tstream_t *strm;\n\tint offset, avail_bytes, trnsfr_bytes;\n\n\terrno = 0;\n\tif (len == 0)\n\t\treturn 0;\n\n\tstrm = get_strm(sd);\n\tif (!strm) {\n\t\tTPP_DBPRT(\"Bad sd %d\", sd);\n\t\treturn -1;\n\t}\n\n\tstrm->used_locally = 1;\n\n\tif ((n = TPP_QUE_HEAD(&strm->recv_queue))) /* only APP thread accesses this queue, hence no lock */\n\t\tcur_pkt = TPP_QUE_DATA(n);\n\n\t/* read from head */\n\tif (cur_pkt == NULL) {\n\t\terrno = EWOULDBLOCK;\n\t\treturn -1; /* no data currently - would block */\n\t}\n\n\tchunk = GET_NEXT(cur_pkt->chunks);\n\tif (chunk == NULL) {\n\t\terrno = EWOULDBLOCK;\n\t\treturn -1; /* no data currently - would block */\n\t}\n\n\toffset = chunk->pos - 
chunk->data;\n\tavail_bytes = chunk->len - offset;\n\ttrnsfr_bytes = (len < avail_bytes) ? len : avail_bytes;\n\n\tif (trnsfr_bytes == 0) {\n\t\terrno = EWOULDBLOCK;\n\t\treturn -1; /* no data currently - would block */\n\t}\n\n\tmemcpy(data, chunk->pos, trnsfr_bytes);\n\tchunk->pos = chunk->pos + trnsfr_bytes;\n\n\treturn trnsfr_bytes;\n}\n\n/**\n * @brief\n *\tLocal function to allocate a stream structure\n *\n * @par Functionality:\n *\tAllocates a stream structure and initializes its members. Adds the\n *\tstream structure to a free slot on the array of streams. To find a free\n *\tslot faster, it uses the globals \"freed_sd_queue\" and \"high_sd\". If it cannot\n *\tfind a free slot using these two indexes, it does a sequential search\n *\tfrom the start of the streams array to find a free slot.\n *\n * @param[in] src_addr  - The address of the src host.\n * @param[in] dest_addr - The address of the destination host.\n *\n * @return\t - Pointer to the newly allocated stream structure\n * @retval NULL  - Error, out of memory\n * @retval !NULL - Ptr to the new stream\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic stream_t *\nalloc_stream(tpp_addr_t *src_addr, tpp_addr_t *dest_addr)\n{\n\tstream_t *strm;\n\tunsigned int sd = max_strms, i;\n\tvoid *data;\n\tunsigned int freed_sd = UNINITIALIZED_INT;\n\n\terrno = 0;\n\n\ttpp_write_lock(&strmarray_lock); /* updating the array + adding to idx, so WRITE lock */\n\n\tdata = tpp_deque(&freed_sd_queue);\n\tif (data) {\n\t\tfreed_sd = (unsigned int) (intptr_t) data;\n\t\tfreed_queue_count--;\n\t}\n\n\tif (freed_sd != UNINITIALIZED_INT && strmarray[freed_sd].slot_state == TPP_SLOT_FREE) {\n\t\tsd = freed_sd;\n\t} else if (high_sd != UNINITIALIZED_INT && max_strms > 0 && high_sd < max_strms - 1) {\n\t\tsd = high_sd + 1;\n\t} else {\n\t\tsd = max_strms;\n\n\t\tTPP_DBPRT(\"***Searching for a free slot\");\n\t\t/* search for a free sd */\n\t\tfor (i = 0; i < max_strms; i++) {\n\t\t\tif 
(strmarray[i].slot_state == TPP_SLOT_FREE) {\n\t\t\t\tsd = i;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\n\tif (high_sd == UNINITIALIZED_INT || sd > high_sd) {\n\t\thigh_sd = sd; /* remember the max sd used */\n\t}\n\n\tstrm = calloc(1, sizeof(stream_t));\n\tif (!strm) {\n\t\ttpp_unlock_rwlock(&strmarray_lock);\n\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating stream\");\n\t\treturn NULL;\n\t}\n\tstrm->strm_type = TPP_STRM_NORMAL;\n\tstrm->sd = sd;\n\tstrm->dest_sd = UNINITIALIZED_INT;\n\tstrm->dest_magic = UNINITIALIZED_INT;\n\tif (src_addr)\n\t\tmemcpy(&strm->src_addr, src_addr, sizeof(tpp_addr_t));\n\tif (dest_addr)\n\t\tmemcpy(&strm->dest_addr, dest_addr, sizeof(tpp_addr_t));\n\tstrm->src_magic = (unsigned int) time(0); /* for now use time as the unique magic number */\n\tstrm->u_state = TPP_STRM_STATE_OPEN;\n\tstrm->t_state = TPP_TRNS_STATE_OPEN;\n\n\tstrm->close_func = NULL;\n\tstrm->timeout_node = NULL;\n\n\tTPP_QUE_CLEAR(&strm->recv_queue); /* only APP thread accesses this queue, once created here, hence no lock */\n\n\t/* set to stream array */\n\tif (max_strms == 0 || sd > max_strms - 1) {\n\t\tunsigned int newsize;\n\t\tvoid *p;\n\n\t\t/* resize strmarray */\n\t\tnewsize = sd + 100;\n\t\tp = realloc(strmarray, sizeof(stream_slot_t) * newsize);\n\t\tif (!p) {\n\t\t\tfree(strm);\n\t\t\ttpp_unlock_rwlock(&strmarray_lock);\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory resizing stream array\");\n\t\t\treturn NULL;\n\t\t}\n\t\tstrmarray = (stream_slot_t *) p;\n\t\tmemset(&strmarray[max_strms], 0, (newsize - max_strms) * sizeof(stream_slot_t));\n\t\tmax_strms = newsize;\n\t}\n\n\tstrmarray[sd].slot_state = TPP_SLOT_BUSY;\n\tstrmarray[sd].strm = strm;\n\n\tif (dest_addr) {\n\t\t/* also add stream to the streams_idx with the dest as key */\n\t\tif (pbs_idx_insert(streams_idx, &strm->dest_addr, strm) != PBS_IDX_RET_OK) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to add strm with sd=%u to streams\", 
strm->sd);\n\t\t\tfree(strm);\n\t\t\ttpp_unlock_rwlock(&strmarray_lock);\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\tTPP_DBPRT(\"*** Allocated new stream, sd=%u, src_magic=%u\", strm->sd, strm->src_magic);\n\n\ttpp_unlock_rwlock(&strmarray_lock);\n\n\treturn strm;\n}\n\n/**\n * @brief\n *\tSocket address of the local side for the given sd\n *\n * @param[in] sd - The stream descriptor\n *\n * @return\t - Pointer to a static sockaddr structure\n * @retval NULL  - Error, failed to get socket address or bad stream descriptor\n * @retval !NULL - Ptr to the static sockaddr structure\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstruct sockaddr_in *\ntpp_localaddr(int fd)\n{\n\tstream_t *strm;\n\tstatic struct sockaddr_in sa;\n\n\tstrm = get_strm(fd);\n\tif (!strm)\n\t\treturn NULL;\n\n\tmemcpy((char *) &sa.sin_addr, &leaf_addrs->ip, sizeof(sa.sin_addr));\n\tsa.sin_port = htons(leaf_addrs->port);\n\n\treturn (&sa);\n}\n\n/**\n * @brief\n *\tSocket address of the remote side for the given sd\n *\n * @param[in] sd - The stream descriptor\n *\n * @return\t - Pointer to a static sockaddr structure\n * @retval NULL  - Error, failed to get socket address or bad stream descriptor\n * @retval !NULL - Ptr to the static sockaddr structure\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstruct sockaddr_in *\ntpp_getaddr(int fd)\n{\n\tstream_t *strm;\n\tstatic struct sockaddr_in sa;\n\n\tstrm = get_strm(fd);\n\tif (!strm)\n\t\treturn NULL;\n\n\tmemcpy((char *) &sa.sin_addr, &strm->dest_addr.ip, sizeof(sa.sin_addr));\n\tsa.sin_port = strm->dest_addr.port;\n\n\treturn (&sa);\n}\n\n/**\n * @brief\n *\tShuts down the tpp library gracefully\n *\n * @par Functionality\n *\tCloses the APP notification fd, shuts down the IO thread\n *\tand destroys all the streams.\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nvoid\ntpp_shutdown()\n{\n\tunsigned int i;\n\tunsigned int sd;\n\n\tTPP_DBPRT(\"from pid = %d\", 
getpid());\n\n\ttpp_mbox_destroy(&app_mbox);\n\n\ttpp_going_down = 1;\n\n\ttpp_transport_shutdown();\n\t/* all threads are dead by now, so no locks required */\n\n\tDIS_tpp_funcs();\n\n\tfor (i = 0; i < max_strms; i++) {\n\t\tif (strmarray[i].slot_state == TPP_SLOT_BUSY) {\n\t\t\tsd = strmarray[i].strm->sd;\n\t\t\tdis_destroy_chan(sd);\n\t\t\tfree_stream_resources(strmarray[i].strm);\n\t\t\tfree_stream(sd);\n\t\t}\n\t}\n\n\tif (strmarray)\n\t\tfree(strmarray);\n\ttpp_destroy_rwlock(&strmarray_lock);\n\n\tfree_tpp_config(tpp_conf);\n}\n\n/**\n * @brief\n *\tTerminates (un-gracefully) the tpp library\n *\n * @par Functionality\n *\tTypically to be called after a fork. Threads are not preserved after\n *\tfork, so this function does not attempt to stop threads, just destroys\n *\tthe streams.\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nvoid\ntpp_terminate()\n{\n\t/* Warning: Do not attempt to destroy any lock\n\t * This is not required since our library is effectively\n\t * not used after a fork.\n\t * Also never log anything from (or after) a terminate handler.\n\t *\n\t * Don't bother to free any TPP data as well, as the forked\n\t * process is usually short lived and no point spending time\n\t * freeing space on a short lived forked process. 
Besides,\n\t * the TPP thread which is lost after fork might have been in\n\t * between using these data when the fork happened, so freeing\n\t * some structures might be dangerous.\n\t *\n\t * Thus the only thing we do here is to close file/sockets\n\t * so that the kernel can recognize when a close happens from the\n\t * main process.\n\t *\n\t */\n\tif (tpp_terminated_in_child == 1)\n\t\treturn;\n\n\t/* set flag so this function is never entered within\n\t * this process again, so no fear of double frees\n\t */\n\ttpp_terminated_in_child = 1;\n\n\ttpp_transport_terminate();\n\n\ttpp_mbox_destroy(&app_mbox);\n}\n\n/**\n * @brief\n *\tFind which streams have pending notifications/data\n *\n * @param[out] sds    - Arrays to be filled with descriptors of streams\n *                      having pending notifications\n * @param[in] len - Length of supplied array\n *\n * @return Number of ready streams\n * @retval   -1  - Function failed\n * @retval !=-1  - Number of ready streams\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_ready_fds(int *sds, int len)\n{\n\tint strms_found = 0;\n\tunsigned int sd = 0;\n\tint cmd = 0;\n\tvoid *data = NULL;\n\tstream_t *strm;\n\n\terrno = 0;\n\n\t/* tpp_fd works like a level triggered fd */\n\twhile (strms_found < len) {\n\t\tdata = NULL;\n\t\tif (tpp_mbox_read(&app_mbox, &sd, &cmd, &data) != 0) {\n\t\t\tif (errno == EWOULDBLOCK)\n\t\t\t\tbreak;\n\t\t\telse\n\t\t\t\treturn -1;\n\t\t}\n\n\t\tif (cmd == TPP_CMD_NET_DATA) {\n\t\t\ttpp_packet_t *pkt = data;\n\t\t\tif ((strm = get_strm_atomic(sd))) {\n\t\t\t\tTPP_DBPRT(\"sd=%u, cmd=%d, u_state=%d, t_state=%d, len=%d, dest_sd=%u\", sd, cmd, strm->u_state, strm->t_state, pkt->totlen, strm->dest_sd);\n\n\t\t\t\tif (strm->u_state == TPP_STRM_STATE_OPEN) {\n\t\t\t\t\t/* add packet to recv queue */\n\t\t\t\t\tif (tpp_enque(&strm->recv_queue, pkt) == NULL) { /* only APP thread accesses this queue, hence not lock */\n\t\t\t\t\t\ttpp_log(LOG_CRIT, __func__, 
\"Failed to queue received pkt\");\n\t\t\t\t\t\ttpp_free_pkt(pkt);\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tsds[strms_found++] = sd;\n\t\t\t\t} else {\n\t\t\t\t\tTPP_DBPRT(\"Data recvd on closed stream %u discarded\", sd);\n\t\t\t\t\ttpp_free_pkt(pkt);\n\t\t\t\t\t/* respond back by sending the close packet once more */\n\t\t\t\t\tsend_spl_packet(strm, TPP_CLOSE_STRM);\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tTPP_DBPRT(\"Data recvd on deleted stream %u discarded\", sd);\n\t\t\t\ttpp_free_pkt(pkt);\n\t\t\t}\n\t\t} else if (cmd == TPP_CMD_PEER_CLOSE || cmd == TPP_CMD_NET_CLOSE) {\n\n\t\t\tif ((strm = get_strm_atomic(sd))) {\n\t\t\t\tTPP_DBPRT(\"sd=%u, cmd=%d, u_state=%d, t_state=%d, data=%p\", sd, cmd, strm->u_state, strm->t_state, data);\n\n\t\t\t\tif (strm->u_state == TPP_STRM_STATE_OPEN) {\n\t\t\t\t\tif (cmd == TPP_CMD_PEER_CLOSE) {\n\t\t\t\t\t\t/* ask app to close stream */\n\t\t\t\t\t\tTPP_DBPRT(\"Sent peer close to stream sd=%u\", sd);\n\t\t\t\t\t\tsds[strms_found++] = sd;\n\n\t\t\t\t\t} else if (cmd == TPP_CMD_NET_CLOSE) {\n\t\t\t\t\t\t/* network closed, so clear all pending data to be\n\t\t\t\t\t\t * received, and signal that sd\n\t\t\t\t\t\t */\n\t\t\t\t\t\tTPP_DBPRT(\"Sent net close stream sd=%u\", sd);\n\t\t\t\t\t\tsds[strms_found++] = sd;\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\t/* app already closed */\n\t\t\t\t\tqueue_strm_close(strm);\n\t\t\t\t}\n\t\t\t}\n\t\t} else if (cmd == TPP_CMD_NET_RESTORE) {\n\n\t\t\tif (the_app_net_restore_handler)\n\t\t\t\tthe_app_net_restore_handler(data);\n\n\t\t} else if (cmd == TPP_CMD_NET_DOWN) {\n\n\t\t\tif (the_app_net_down_handler)\n\t\t\t\tthe_app_net_down_handler(data);\n\t\t}\n\t}\n\treturn strms_found;\n}\n\n/**\n * @brief\n *\tGet the user buffer pointer associated with the stream\n *\n * @par Functionality\n *\tUsed by the tpp_dis layer to retrieve a previously associated buffer\n *\tthat is used to DIS encode/decode data before sending/after receiving.\n *\tSince this is associated with the stream, this 
eliminates the need for\n *\tthe dis layer to maintain a separate array of buffers for each stream.\n *\n * @param[in] sd - The stream descriptor\n *\n * @return Ptr to user buffer (previously set with tpp_set_user_data)\n * @retval NULL - Bad descriptor or no user buffer was set\n * @retval !NULL - Ptr to user buffer\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nvoid *\ntpp_get_user_data(int sd)\n{\n\tstream_t *strm;\n\n\terrno = 0;\n\tstrm = get_strm_atomic(sd);\n\tif (!strm) {\n\t\terrno = ENOTCONN;\n\t\treturn NULL;\n\t}\n\treturn strm->user_data;\n}\n\n/**\n * @brief\n *\tAssociate a user buffer with the stream\n *\n * @par Functionality\n *\tUsed by the tpp_dis layer to associate a buffer with the stream.\n *\tUsed by tppdis_get_user_data to encode/decode data before sending/after receiving.\n *\tSince this is associated with the stream, this eliminates the need for\n *\tthe dis layer to maintain a separate array of buffers for each stream.\n *\n * @param[in] sd - The stream descriptor\n * @param[in] user_data - The user buffer allocated by the tpp_dis layer\n *\n * @return Error code\n * @retval -1 - Bad stream descriptor\n * @retval 0 - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_set_user_data(int sd, void *user_data)\n{\n\tstream_t *strm;\n\n\terrno = 0;\n\tstrm = get_strm_atomic(sd);\n\tif (!strm) {\n\t\terrno = ENOTCONN;\n\t\ttpp_log(LOG_WARNING, __func__, \"Slot %d freed!\", sd);\n\t\treturn -1;\n\t}\n\tstrm->user_data = user_data;\n\treturn 0;\n}\n\n/**\n * @brief\n *\tAssociate a user close function to be called when the stream\n *\tis being closed\n *\n * @par Functionality\n *\tWhen tpp_close is called, the user defined close function is triggered.\n *\n * @param[in] sd - The stream descriptor\n * @param[in] func - The function to register as a user defined close function\n *\n * @return void\n *\n * @par Side 
Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nvoid\ntpp_add_close_func(int sd, void (*func)(int))\n{\n\tstream_t *strm;\n\n\tstrm = get_strm(sd);\n\tif (!strm)\n\t\treturn;\n\n\tstrm->close_func = func;\n}\n\n/**\n * @brief\n *\tClose this side of the communication channel associated with the\n *\tstream descriptor.\n *\n * @par Functionality\n *\tQueues a close packet to be sent to the peer. The stream state itself\n *\tis changed to TPP_STRM_STATE_CLOSE_WAIT signifying that it has sent a\n *\tclose packet to the peer and is waiting for the peer to acknowledge it.\n *\tMeanwhile all sends and recvs are disabled on this stream.\n *\n * @param[in] sd - The stream descriptor\n *\n * @return Error code\n * @retval -1 - Failed to close the stream (bad state or bad stream)\n * @retval 0 - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_close(int sd)\n{\n\tstream_t *strm;\n\ttpp_packet_t *p;\n\n\tstrm = get_strm(sd);\n\tif (!strm) {\n\t\treturn -1;\n\t}\n\n\t/* call any user defined close function */\n\tif (strm->close_func)\n\t\tstrm->close_func(sd);\n\n\tTPP_DBPRT(\"Closing sd=%d\", sd);\n\t/* free the recv_queue also */\n\twhile ((p = tpp_deque(&strm->recv_queue))) /* only APP thread accesses this queue, hence no lock */\n\t\ttpp_free_pkt(p);\n\n\t/* send a close packet */\n\tstrm->u_state = TPP_STRM_STATE_CLOSE;\n\n\tDIS_tpp_funcs();\n\tdis_destroy_chan(strm->sd);\n\n\tif (strm->t_state != TPP_TRNS_STATE_OPEN || send_spl_packet(strm, TPP_CLOSE_STRM) != 0)\n\t\tqueue_strm_close(strm);\n\n\t/* for now we do not pass any data to the peer if this side closed */\n\treturn 0;\n}\n\n/**\n * @brief\n *\tOpen a multicast channel to multiple parties.\n *\n *\tAllocates a multicast stream and marks the type as TPP_STRM_MCAST\n *\n * @return The file descriptor of the opened multicast channel\n * @retval   -1 - Failure\n * @retval !=-1 - Success, the opened 
channel fd\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_mcast_open(void)\n{\n\tstream_t *strm;\n\n\tif ((strm = alloc_stream(&leaf_addrs[0], NULL)) == NULL) {\n\t\treturn -1;\n\t}\n\n\tTPP_DBPRT(\"tpp_mcast_open called with fd=%u\", strm->sd);\n\n\tstrm->used_locally = 1;\n\tstrm->strm_type = TPP_STRM_MCAST;\n\treturn strm->sd;\n}\n\n/**\n * @brief\n *\tAdd a stream to the multicast channel\n *\n * @param[in] mtfd - The multicast channel to which to add streams to\n * @param[in] tfd - Array of stream descriptors to add to the multicast stream\n * @param[in] unique - add only unique streams. Use only if caller might call\n * \t\t\tthis function with duplicate tfd.\n *\n * @return\tError code\n * @retval   -1 - Failure\n * @retval    0 - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_mcast_add_strm(int mtfd, int tfd, bool unique)\n{\n\tvoid *p;\n\tstream_t *mstrm;\n\tstream_t *strm;\n\tint i = 0;\n\n\tmstrm = get_strm_atomic(mtfd);\n\tif (!mstrm) {\n\t\terrno = ENOTCONN;\n\t\treturn -1;\n\t}\n\n\tstrm = get_strm(tfd);\n\tif (!strm) {\n\t\terrno = ENOTCONN;\n\t\treturn -1;\n\t}\n\n\tif (!mstrm->mcast_data) {\n\t\tmstrm->mcast_data = malloc(sizeof(mcast_data_t));\n\t\tif (!mstrm->mcast_data) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating mcast data\");\n\t\t\treturn -1;\n\t\t}\n\n\t\tmstrm->mcast_data->strms = malloc(TPP_MCAST_SLOT_INC * sizeof(int));\n\t\tif (!mstrm->mcast_data->strms) {\n\t\t\tfree(mstrm->mcast_data);\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating strm array of %lu bytes\",\n\t\t\t\t(unsigned long) (TPP_MCAST_SLOT_INC * sizeof(int)));\n\t\t\treturn -1;\n\t\t}\n\t\tmstrm->mcast_data->num_slots = TPP_MCAST_SLOT_INC;\n\t\tmstrm->mcast_data->num_fds = 0;\n\t} else if (mstrm->mcast_data->num_fds >= mstrm->mcast_data->num_slots) {\n\t\tp = realloc(mstrm->mcast_data->strms, (mstrm->mcast_data->num_slots + TPP_MCAST_SLOT_INC) * 
sizeof(int));\n\t\tif (!p) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory resizing strm array to %lu bytes\", (mstrm->mcast_data->num_slots + TPP_MCAST_SLOT_INC) * sizeof(int));\n\t\t\treturn -1;\n\t\t}\n\t\tmstrm->mcast_data->strms = p;\n\t\tmstrm->mcast_data->num_slots += TPP_MCAST_SLOT_INC;\n\t}\n\n\tif (unique) {\n\t\tfor (i = 0; i < mstrm->mcast_data->num_fds; i++) {\n\t\t\tif (mstrm->mcast_data->strms[i] == tfd)\n\t\t\t\treturn 0;\n\t\t}\n\t}\n\n\tmstrm->mcast_data->strms[mstrm->mcast_data->num_fds++] = tfd;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tReturn the current array of members of the mcast stream\n *\n * @param[in] mtfd - The multicast channel\n * @param[out] count - Return the number of members\n *\n * @return\tmember stream fd array\n * @retval   NULL  - Failure\n * @retval   !NULL - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint *\ntpp_mcast_members(int mtfd, int *count)\n{\n\tstream_t *strm;\n\n\t*count = 0;\n\n\tstrm = get_strm_atomic(mtfd);\n\tif (!strm || !strm->mcast_data) {\n\t\terrno = ENOTCONN;\n\t\treturn NULL;\n\t}\n\n\t*count = strm->mcast_data->num_fds;\n\treturn strm->mcast_data->strms;\n}\n\n/**\n * @brief\n *\tSend a command notification to all member streams\n *\n * @param[in]  mtfd - The mcast stream\n * @param[in]  cmd  - The command to send\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic void\ntpp_mcast_notify_members(int mtfd, int cmd)\n{\n\tstream_t *mstrm;\n\tint i;\n\n\tmstrm = get_strm_atomic(mtfd);\n\tif (!mstrm || !mstrm->mcast_data) {\n\t\terrno = ENOTCONN;\n\t\treturn;\n\t}\n\n\tfor (i = 0; i < mstrm->mcast_data->num_fds; i++) {\n\t\tint tfd;\n\t\tstream_t *strm;\n\n\t\ttfd = mstrm->mcast_data->strms[i];\n\t\tstrm = get_strm_atomic(tfd);\n\t\tif (!strm)\n\t\t\tcontinue;\n\t\tsend_app_strm_close(strm, cmd, 0);\n\t}\n}\n\n/**\n * @brief\n *\tCreate a multicast packet and send the data to all member streams\n *\n * @param[in] mtfd - The multicast 
channel to which to send data\n * @param[in] data - The pointer to the block of data to send\n * @param[in] to_send  - Length of the data to send\n * @param[in] len - In case of large packets data is sent in chunks,\n *                       len is the total length of the data\n *\n * @return  Error code\n * @retval  -1 - Failure\n * @retval  -2 - transport buffers full\n * @retval   >=0 - Success - amount of data sent\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_mcast_send(int mtfd, void *data, unsigned int to_send, unsigned int len)\n{\n\tstream_t *mstrm = NULL;\n\tstream_t *strm = NULL;\n\tint i;\n\tint rc = -1;\n\ttpp_mcast_pkt_hdr_t *mhdr = NULL;\n\ttpp_mcast_pkt_info_t *minfo = NULL;\n\ttpp_mcast_pkt_info_t tmp_minfo;\n\ttpp_packet_t *pkt = NULL;\n\tunsigned int cmpr_len = 0;\n\tvoid *minfo_buf = NULL;\n\tint minfo_len;\n\tint ret;\n\tint finish;\n\tint num_fds;\n\tvoid *def_ctx = NULL;\n\n\tmstrm = get_strm_atomic(mtfd);\n\tif (!mstrm || !mstrm->mcast_data) {\n\t\terrno = ENOTCONN;\n\t\treturn -1;\n\t}\n\n\tnum_fds = mstrm->mcast_data->num_fds;\n\n\tminfo_len = sizeof(tpp_mcast_pkt_info_t) * num_fds;\n\n\t/* header data */\n\tpkt = tpp_bld_pkt(NULL, NULL, sizeof(tpp_mcast_pkt_hdr_t), 1, (void **) &mhdr);\n\tif (!pkt) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to build packet\");\n\t\treturn -1;\n\t}\n\tmhdr->type = TPP_MCAST_DATA;\n\tmhdr->hop = 0;\n\tmhdr->totlen = htonl(len);\n\tmemcpy(&mhdr->src_addr, &mstrm->src_addr, sizeof(tpp_addr_t));\n\tmhdr->num_streams = htonl(num_fds);\n\tmhdr->info_len = htonl(minfo_len);\n\n\tif (tpp_conf->compress == 1 && minfo_len > TPP_COMPR_SIZE) {\n\t\tdef_ctx = tpp_multi_deflate_init(minfo_len);\n\t\tif (def_ctx == NULL)\n\t\t\tgoto err;\n\t} else {\n\t\tminfo_buf = malloc(minfo_len);\n\t\tif (!minfo_buf) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating mcast buffer of %d bytes\", minfo_len);\n\t\t\tgoto err;\n\t\t}\n\t}\n\n\tfor (i = 0; i < num_fds; i++) {\n\t\tstrm = 
get_strm_atomic(mstrm->mcast_data->strms[i]);\n\t\tif (!strm) {\n\t\t\ttpp_log(LOG_ERR, NULL, \"Stream %d is not open\", mstrm->mcast_data->strms[i]);\n\t\t\tgoto err;\n\t\t}\n\n\t\t/* per stream data */\n\t\ttmp_minfo.src_sd = htonl(strm->sd);\n\t\ttmp_minfo.src_magic = htonl(strm->src_magic);\n\t\ttmp_minfo.dest_sd = htonl(strm->dest_sd);\n\n\t\tTPP_DBPRT(\"MCAST src_sd=%u, dest_sd=%u\", strm->sd, strm->dest_sd);\n\n\t\tmemcpy(&tmp_minfo.dest_addr, &strm->dest_addr, sizeof(tpp_addr_t));\n\n\t\tif (def_ctx == NULL) { /* no compression */\n\t\t\tminfo = (tpp_mcast_pkt_info_t *) ((char *) minfo_buf + (i * sizeof(tpp_mcast_pkt_info_t)));\n\t\t\tmemcpy(minfo, &tmp_minfo, sizeof(tpp_mcast_pkt_info_t));\n\t\t} else {\n\t\t\tfinish = (i == (num_fds - 1)) ? 1 : 0;\n\n\t\t\tret = tpp_multi_deflate_do(def_ctx, finish, &tmp_minfo, sizeof(tpp_mcast_pkt_info_t));\n\t\t\tif (ret != 0)\n\t\t\t\tgoto err;\n\t\t}\n\t}\n\n\tif (def_ctx != NULL) {\n\t\tminfo_buf = tpp_multi_deflate_done(def_ctx, &cmpr_len);\n\t\tif (minfo_buf == NULL)\n\t\t\tgoto err;\n\n\t\tTPP_DBPRT(\"*** mcast_send hdr orig=%d, cmprsd=%u\", minfo_len, cmpr_len);\n\t\tmhdr->info_cmprsd_len = htonl(cmpr_len);\n\t} else {\n\t\tTPP_DBPRT(\"*** mcast_send uncompressed hdr orig=%d\", minfo_len);\n\t\tmhdr->info_cmprsd_len = 0;\n\t\tcmpr_len = minfo_len;\n\t}\n\tdef_ctx = NULL; /* done with compression */\n\n\tif (!tpp_bld_pkt(pkt, minfo_buf, cmpr_len, 0, NULL)) { /* add minfo chunk */\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to build packet\");\n\t\treturn -1;\n\t}\n\n\tif (!tpp_bld_pkt(pkt, data, to_send, 0, NULL)) { /* add data chunk */\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to build packet\");\n\t\treturn -1;\n\t}\n\n\tTPP_DBPRT(\"*** sending %d totlen\", pkt->totlen);\n\n\trc = send_to_router(pkt);\n\tif (rc == 0)\n\t\treturn len; /* all given data sent, so return len */\n\n\ttpp_log(LOG_ERR, __func__, \"Failed to send to router\"); /* fall through */\n\nerr:\n\ttpp_mcast_notify_members(mtfd, 
TPP_CMD_NET_CLOSE);\n\tif (def_ctx)\n\t\ttpp_multi_deflate_done(def_ctx, &cmpr_len);\n\n\tif (minfo_buf)\n\t\tfree(minfo_buf);\n\treturn rc;\n}\n\n/**\n * @brief\n *\tClose a multicast channel\n *\n * @param[in] mtfd - The multicast channel to close\n *\n * @return\tError code\n * @retval   -1 - Failure\n * @retval    0 - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_mcast_close(int mtfd)\n{\n\tstream_t *strm;\n\n\tif (mtfd < 0)\n\t\treturn 0;\n\n\tstrm = get_strm_atomic(mtfd);\n\tif (!strm) {\n\t\treturn -1;\n\t}\n\tDIS_tpp_funcs();\n\tdis_destroy_chan(strm->sd);\n\n\tfree_stream_resources(strm);\n\tfree_stream(mtfd);\n\treturn 0;\n}\n\n/**\n * @brief\n *\tAdd the stream to a queue of streams to be closed by the transport thread.\n *\n * @par Functionality\n *\tEven if the app thread wants to free a stream, it adds the stream to this\n *\tqueue, so that the transport thread frees it, eliminating any thread\n *\traces.\n *\n * @param[in] strm - The stream pointer\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic void\nqueue_strm_close(stream_t *strm)\n{\n\tstrm_action_info_t *c;\n\ttpp_router_t *r = get_active_router();\n\n\tif (!r)\n\t\treturn;\n\n\ttpp_write_lock(&strmarray_lock); /* already under lock, dont need get_strm_atomic */\n\n\tif (strmarray[strm->sd].slot_state != TPP_SLOT_BUSY) {\n\t\ttpp_unlock_rwlock(&strmarray_lock);\n\t\treturn;\n\t}\n\tstrmarray[strm->sd].slot_state = TPP_SLOT_DELETED;\n\ttpp_unlock_rwlock(&strmarray_lock);\n\n\tTPP_DBPRT(\"Marked sd=%u DELETED\", strm->sd);\n\n\tif ((c = malloc(sizeof(strm_action_info_t))) == NULL) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating stream free info\");\n\t\treturn;\n\t}\n\tc->strm_action_time = time(0); /* asap */\n\tc->strm_action_func = queue_strm_free;\n\tc->sd = strm->sd;\n\n\ttpp_lock(&strm_action_queue_lock);\n\tif (tpp_enque(&strm_action_queue, c) == NULL)\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to Queue 
close\");\n\n\ttpp_unlock(&strm_action_queue_lock);\n\n\tTPP_DBPRT(\"Enqueued strm close for sd=%u\", strm->sd);\n\n\ttpp_transport_wakeup_thrd(r->conn_fd);\n\treturn;\n}\n\n/*\n * ============================================================================\n *\n * Functions below this are mostly driven by the IO thread. Some of them could\n * be accessed by both the IO and the App threads (and such functions need\n * synchronization)\n *\n * ============================================================================\n */\n\n/**\n * @brief\n *\tFree stream and add stream slot to a queue of slots to be marked free\n *\tafter TPP_CLOSE_WAIT time.\n *\n * @par Functionality\n *\tThe slot is not marked free immediately, rather after a period. This is to\n *\tensure that wandering/delayed messages do not cause havoc.\n *\tAdditionally deletes the stream's entry in the Stream index.\n *\n * @param[in] sd - The stream descriptor\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic void\nqueue_strm_free(unsigned int sd)\n{\n\tstrm_action_info_t *c;\n\tstream_t *strm;\n\n\tstrm = get_strm_atomic(sd);\n\tif (!strm)\n\t\treturn;\n\n\tfree_stream_resources(strm);\n\tTPP_DBPRT(\"Freed sd=%u resources\", sd);\n\n\tif ((c = malloc(sizeof(strm_action_info_t))) == NULL) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating stream action info\");\n\t\treturn;\n\t}\n\tc->strm_action_time = time(0) + TPP_CLOSE_WAIT; /* time to close */\n\tc->strm_action_func = free_stream;\n\tc->sd = sd;\n\n\ttpp_lock(&strm_action_queue_lock);\n\tif (tpp_enque(&strm_action_queue, c) == NULL)\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to Queue Free\");\n\ttpp_unlock(&strm_action_queue_lock);\n\n\treturn;\n}\n\n/**\n * @brief\n *\tPass on a close message from peer to the APP\n *\n * @par Functionality\n *\tIf this side had already called close, then instead of sending a\n *\tnotification to the app, it queues a close operation.\n *\tIf a NET_CLOSE happened (network 
between leaf and router broke), then\n *\tsend notification to APP.\n *\n * @param[in] strm - Pointer to the stream\n * @param[in] cmd - TPP_CMD_NET_CLOSE - network closed between leaf & router\n *\t\t            TPP_CMD_PEER_CLOSE - Peer sent a close message\n * @param[in] error - Error code in case of network closure, set for future use\n *\n * @return Error code\n * @retval -1 - Failure\n * @retval  0 - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic int\nsend_app_strm_close(stream_t *strm, int cmd, int error)\n{\n\terrno = 0;\n\n\tstrm->lasterr = error;\n\tstrm->t_state = TPP_TRNS_STATE_NET_CLOSED;\n\n\tif (tpp_mbox_post(&app_mbox, strm->sd, cmd, NULL, 0) != 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Error writing to app mbox\");\n\t\treturn -1;\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tHelper function to find a stream based on destination address and\n *\tdestination stream descriptor.\n *\n * @par Functionality\n *\tSearches the index of streams based on the destination address.\n *\tThere could be several entries, since several streams could be open\n *\tto the same destination. The index search quickly finds the first entry\n *\tthat matches the address. 
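The two-phase lookup described here (an index jump to the first entry with a matching address, then a serial scan over the duplicates) can be sketched over a plain sorted array; demo_entry and demo_find below are hypothetical stand-ins for the pbs_idx machinery, not TPP code:

```c
/* illustrative sketch (not TPP code) of the lookup strategy used by
 * find_stream_with_dest(): locate the first entry with a matching key,
 * then scan the duplicates serially for the exact record */
#include <stddef.h>

typedef struct {
	int addr;    /* stands in for tpp_addr_t */
	int dest_sd; /* the secondary field matched serially */
} demo_entry;

/* entries must be sorted by addr; returns the index of the match or -1 */
int demo_find(const demo_entry *e, size_t n, int addr, int dest_sd)
{
	size_t lo = 0, hi = n;

	/* binary search for the first entry with e[i].addr >= addr */
	while (lo < hi) {
		size_t mid = lo + (hi - lo) / 2;
		if (e[mid].addr < addr)
			lo = mid + 1;
		else
			hi = mid;
	}

	/* serially match the remaining field among the duplicates */
	for (; lo < n && e[lo].addr == addr; lo++) {
		if (e[lo].dest_sd == dest_sd)
			return (int) lo;
	}
	return -1;
}
```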
From then on, we serially match the fd of the\n *\tdestination stream.\n *\n * @param[in] dest_addr  - address of the destination\n * @param[in] dest_sd    - The descriptor of the destination stream\n * @param[in] dest_magic - The magic id of the destination stream\n *\n * @return stream ptr of the stream if found\n * @retval NULL - If a matching stream was not found\n * @retval !NULL - The ptr to the matching stream\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic stream_t *\nfind_stream_with_dest(tpp_addr_t *dest_addr, unsigned int dest_sd, unsigned int dest_magic)\n{\n\tvoid *idx_ctx = NULL;\n\tvoid *idx_nkey = dest_addr;\n\tstream_t *strm;\n\n\twhile (pbs_idx_find(streams_idx, &idx_nkey, (void **) &strm, &idx_ctx) == PBS_IDX_RET_OK) {\n\t\tif (memcmp(idx_nkey, dest_addr, sizeof(tpp_addr_t)) != 0)\n\t\t\tbreak;\n\t\tTPP_DBPRT(\"sd=%u, dest_sd=%u, u_state=%d, t-state=%d, dest_magic=%u\", strm->sd, strm->dest_sd, strm->u_state, strm->t_state, strm->dest_magic);\n\t\tif (strm->dest_sd == dest_sd && strm->dest_magic == dest_magic) {\n\t\t\tpbs_idx_free_ctx(idx_ctx);\n\t\t\treturn strm;\n\t\t}\n\t}\n\tpbs_idx_free_ctx(idx_ctx);\n\treturn NULL;\n}\n\n/**\n * @brief\n *\tWalk the sorted global stream free queue and free a stream slot\n *\tafter TPP_CLOSE_WAIT time\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic void\nact_strm(time_t now, int force)\n{\n\ttpp_que_elem_t *n = NULL;\n\n\ttpp_lock(&strm_action_queue_lock);\n\twhile ((n = TPP_QUE_NEXT(&strm_action_queue, n))) {\n\t\tstrm_action_info_t *c;\n\n\t\tc = TPP_QUE_DATA(n);\n\t\tif (c && ((c->strm_action_time <= now) || (force == 1))) {\n\t\t\tn = tpp_que_del_elem(&strm_action_queue, n);\n\t\t\tTPP_DBPRT(\"Calling action function for stream %u\", c->sd);\n\t\t\ttpp_unlock(&strm_action_queue_lock);\n\n\t\t\t/* unlock and call action function, then reacquire lock */\n\t\t\tc->strm_action_func(c->sd);\n\n\t\t\ttpp_lock(&strm_action_queue_lock);\n\t\t\tif 
(c->strm_action_func == free_stream) {\n\t\t\t\t/* free_stream itself clears elements from the strm_action_queue\n\t\t\t\t * so restart walking from the head of strm_action_queue\n\t\t\t\t */\n\t\t\t\tn = NULL;\n\t\t\t}\n\t\t\tfree(c);\n\t\t}\n\t}\n\ttpp_unlock(&strm_action_queue_lock);\n}\n\n/**\n * @brief\n *\tThe timer handler function registered with the IO thread.\n *\n * @par Functionality\n *\tThis function is called periodically (after the amount of time\n *\tspecified by the leaf_next_event_expiry() function) by the IO thread. This\n *\tdrives the close packets to be acted upon in time.\n *\n * @retval  - Time of next event expiry\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic int\nleaf_timer_handler(time_t now)\n{\n\tact_strm(now, 0);\n\n\treturn leaf_next_event_expiry(now);\n}\n\n/**\n * @brief\n *\tThis function returns the amount of time after which the nearest event\n *\thappens (close etc). The IO thread calls this function to determine\n *\thow much time to sleep before calling the timer_handler function\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\ntime_t\nleaf_next_event_expiry(time_t now)\n{\n\ttime_t rc1 = -1;\n\ttime_t rc2 = -1;\n\ttime_t rc3 = -1;\n\ttime_t res = -1;\n\ttpp_que_elem_t *n;\n\tstrm_action_info_t *f;\n\n\ttpp_lock(&strm_action_queue_lock);\n\n\tif ((n = TPP_QUE_HEAD(&strm_action_queue))) {\n\t\tif ((f = TPP_QUE_DATA(n)))\n\t\t\trc3 = f->strm_action_time;\n\t}\n\ttpp_unlock(&strm_action_queue_lock);\n\n\tif (rc1 > 0)\n\t\tres = rc1;\n\n\tif (rc2 > 0 && (res == -1 || rc2 < res))\n\t\tres = rc2;\n\n\tif (rc3 > 0 && (res == -1 || rc3 < res))\n\t\tres = rc3;\n\n\tif (res != -1)\n\t\tres = res - now;\n\n\treturn res;\n}\n\n/**\n * @brief\n *\tSend a data packet to the APP layer\n *\n * @par Functionality\n *\tWrites the packet to the pipe that the APP is monitoring; when the APP reads\n *\tfrom the read end of the pipe, it gets the pointer to the data\n *\n * @param[in] sd - The 
descriptor of the stream\n * @param[in] type - The type of the data packet (data, close etc)\n * @param[in] data - The data that has to be stored\n * @param[in] sz - The size of the data\n * @param[in] totlen - Total data size\n *\n * @return Error code\n * @retval 0 - Success\n * @retval -1 - Failure\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic int\nsend_pkt_to_app(stream_t *strm, unsigned char type, void *data, int sz, int totlen)\n{\n\tint rc;\n\tint cmd;\n\tvoid *tmp;\n\ttpp_packet_t *obj;\n\n\tif (type == TPP_DATA) {\n\t\t/* in case of uncompressed packets, totlen == compressed_len\n\t\t * so the amount of data on the wire is the compressed len, so always check against\n\t\t * compressed_len\n\t\t */\n\t\tif (sz != totlen) {\n\t\t\tif (!(tmp = tpp_inflate(data, sz, totlen))) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Decompression failed\");\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\tdata = tmp;\n\t\t} else {\n\t\t\t/* this is still the pointer to the data part of original buffer, must make copy */\n\t\t\ttmp = malloc(totlen);\n\t\t\tif (!tmp) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory copying packet data\");\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\tmemcpy(tmp, data, totlen);\n\t\t\tdata = tmp;\n\t\t}\n\t\tobj = tpp_bld_pkt(NULL, data, totlen, 0, NULL);\n\t\tif (!obj) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to build packet\");\n\t\t\treturn -1;\n\t\t}\n\t\tcmd = TPP_CMD_NET_DATA;\n\t} else {\n\t\tcmd = TPP_CMD_PEER_CLOSE;\n\t\tstrm->t_state = TPP_TRNS_STATE_PEER_CLOSED;\n\t\tobj = NULL;\n\t}\n\n\tTPP_DBPRT(\"Sending cmd=%d to sd=%u\", cmd, strm->sd);\n\n\t/* since we received one packet, send notification to app */\n\trc = tpp_mbox_post(&app_mbox, strm->sd, cmd, obj, sz);\n\tif (rc != 0) {\n\t\tif (obj)\n\t\t\ttpp_free_pkt(obj);\n\n\t\ttpp_log(LOG_CRIT, __func__, \"Error writing to app mbox\");\n\t}\n\treturn rc;\n}\n\n/**\n * @brief\n *\tSends a close packet to a peer. 
This is called when this end calls\n *\ttpp_close()\n *\n * @param[in] strm - The stream to which close packet has to be sent\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic int\nsend_spl_packet(stream_t *strm, int type)\n{\n\ttpp_data_pkt_hdr_t *dhdr = NULL;\n\ttpp_packet_t *pkt = NULL;\n\n\tTPP_DBPRT(\"Sending CLOSE packet sd=%u, dest_sd=%u\", strm->sd, strm->dest_sd);\n\n\tpkt = tpp_bld_pkt(NULL, dhdr, sizeof(tpp_data_pkt_hdr_t), 1, (void **) &dhdr);\n\tif (!pkt) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to build packet\");\n\t\treturn -1;\n\t}\n\n\tdhdr->type = type;\n\tdhdr->src_sd = htonl(strm->sd);\n\tdhdr->src_magic = htonl(strm->src_magic);\n\tdhdr->dest_sd = htonl(strm->dest_sd);\n\n\tmemcpy(&dhdr->src_addr, &strm->src_addr, sizeof(tpp_addr_t));\n\tmemcpy(&dhdr->dest_addr, &strm->dest_addr, sizeof(tpp_addr_t));\n\n\tif (send_to_router(pkt) != 0) {\n\t\ttpp_log(LOG_ERR, __func__, \"Failed to send to router\");\n\t\treturn -1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tdestroy the stream finally\n *\n * @param[in] strm - The stream that needs to be freed\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic void\nfree_stream_resources(stream_t *strm)\n{\n\tif (!strm)\n\t\treturn;\n\n\ttpp_write_lock(&strmarray_lock);\n\n\tTPP_DBPRT(\"Freeing stream resources for sd=%u\", strm->sd);\n\n\tstrmarray[strm->sd].slot_state = TPP_SLOT_DELETED;\n\n\ttpp_unlock_rwlock(&strmarray_lock);\n\n\tif (strm->mcast_data) {\n\t\tif (strm->mcast_data->strms)\n\t\t\tfree(strm->mcast_data->strms);\n\t\tfree(strm->mcast_data);\n\t}\n}\n\n/**\n * @brief\n *\tMarks the stream slot as free to be reused\n *\n * @param[in] sd - The stream descriptor that needs to be freed\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic void\nfree_stream(unsigned int sd)\n{\n\tstream_t *strm;\n\ttpp_que_elem_t *n = NULL;\n\tstrm_action_info_t *c;\n\n\tTPP_DBPRT(\"Freeing stream %u\", 
sd);\n\n\ttpp_write_lock(&strmarray_lock); /* updating stream, idx, so WRITE lock */\n\n\tstrm = strmarray[sd].strm;\n\tif (strm->strm_type != TPP_STRM_MCAST) {\n\t\tvoid *idx_ctx = NULL;\n\t\tint found = 0;\n\t\tstream_t *t_strm = NULL;\n\t\tvoid *pdest_addr = &strm->dest_addr;\n\n\t\twhile (pbs_idx_find(streams_idx, &pdest_addr, (void **) &t_strm, &idx_ctx) == PBS_IDX_RET_OK) {\n\t\t\tif (memcmp(pdest_addr, &strm->dest_addr, sizeof(tpp_addr_t)) != 0)\n\t\t\t\tbreak;\n\t\t\tif (strm == t_strm) {\n\t\t\t\tfound = 1;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\n\t\tif (!found) {\n\t\t\t/* this should not happen ever */\n\t\t\ttpp_log(LOG_ERR, __func__, \"Failed finding strm with dest=%s, strm=%p, sd=%u\", tpp_netaddr(&strm->dest_addr), strm, strm->sd);\n\t\t\ttpp_unlock_rwlock(&strmarray_lock);\n\t\t\tpbs_idx_free_ctx(idx_ctx);\n\t\t\treturn;\n\t\t}\n\n\t\tpbs_idx_delete_byctx(idx_ctx);\n\t\tpbs_idx_free_ctx(idx_ctx);\n\t}\n\n\tstrmarray[sd].slot_state = TPP_SLOT_FREE;\n\tstrmarray[sd].strm = NULL;\n\tfree(strm);\n\n\tif (freed_queue_count < 100) {\n\t\ttpp_enque(&freed_sd_queue, (void *) (intptr_t) sd);\n\t\tfreed_queue_count++;\n\t}\n\n\ttpp_unlock_rwlock(&strmarray_lock);\n\n\ttpp_lock(&strm_action_queue_lock);\n\t/* remove all pending actions for this sd from the strm action queue */\n\twhile ((n = TPP_QUE_NEXT(&strm_action_queue, n))) {\n\t\tc = TPP_QUE_DATA(n);\n\t\tif (c && (c->sd == sd)) {\n\t\t\tn = tpp_que_del_elem(&strm_action_queue, n);\n\t\t\tfree(c);\n\t\t}\n\t}\n\ttpp_unlock(&strm_action_queue_lock);\n}\n\n/**\n * @brief\n *\tThe pre-send handler registered with the IO thread.\n *\n * @par Functionality\n *\tWhen the IO thread is ready to send out a packet over the wire, it calls\n *\ta prior registered \"pre-send\" handler\n *\n * @param[in] tfd - The actual IO connection on which data was about to be\n *\t\t\tsent (unused)\n * @param[in] pkt - The data packet that is about to be sent out by the IO thrd\n * @param[in] c - The context (prior associated, if any) with the IO connection\n * @param[in] extra - The extra data associated with IO connection\n *\n 
* @par Side Effects:\n *\tNone\n *\n * @retval 0 - Success (Transport layer will send out the packet)\n * @retval -1 - Failure (Transport layer will not send packet and will delete packet)\n *\n * @par MT-safe: No\n *\n */\nstatic int\nleaf_pkt_presend_handler(int tfd, tpp_packet_t *pkt, void *c, void *extra)\n{\n\tconn_auth_t *authdata = (conn_auth_t *) extra;\n\ttpp_context_t *ctx = (tpp_context_t *) c;\n\ttpp_router_t *r;\n\ttpp_data_pkt_hdr_t *data;\n\ttpp_chunk_t *first_chunk;\n\tunsigned char type;\n\n\tif (!pkt)\n\t\treturn 0;\n\n\tfirst_chunk = GET_NEXT(pkt->chunks);\n\tif (!first_chunk)\n\t\treturn 0;\n\n\tdata = (tpp_data_pkt_hdr_t *) first_chunk->data;\n\ttype = data->type;\n\n\t/* Connection accepted by comm, set router's state to connected */\n\tif (type == TPP_CTL_JOIN) {\n\t\tr = (tpp_router_t *) ctx->ptr;\n\t\tr->state = TPP_ROUTER_STATE_CONNECTED;\n\t\tr->delay = 0;\t\t/* reset connection retry time to 0 */\n\t\tr->conn_time = time(0); /* record connect time */\n\n\t\ttpp_log(LOG_CRIT, NULL, \"Connected to pbs_comm %s\", r->router_name);\n\n\t\tTPP_DBPRT(\"Sending cmd to call App net restore handler\");\n\t\tif (tpp_mbox_post(&app_mbox, UNINITIALIZED_INT, TPP_CMD_NET_RESTORE, NULL, 0) != 0) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Error writing to app mbox\");\n\t\t\treturn -1;\n\t\t}\n\t}\n\n\t/*\n\t * if the presend handler is called from handle_disconnect()\n\t * then extra will be NULL and this is just a sending simulation,\n\t * so no encryption is needed\n\t */\n\tif (authdata == NULL)\n\t\treturn 0;\n\n\tif (authdata->encryptdef == NULL)\n\t\treturn 0; /* no encryption set, so no need to encrypt packets */\n\n\treturn (tpp_encrypt_pkt(authdata, pkt));\n}\n\n/**\n * @brief\n *\tCheck a stream based on sd, destination address,\n *\tdestination stream descriptor.\n *\n * @param[in] src_sd - The stream with which to match\n * @param[in] dest_addr - address of the destination\n * @param[in] dest_sd   - The descriptor of the destination stream\n * 
@param[in] msg - Pointer to a buffer in which to return a message\n * @param[in] sz - Length of the message buffer\n *\n * @return stream ptr if the stream info matches the passed params\n * @retval NULL - If a matching stream was not found, or passed params do not match\n * @retval !NULL - The ptr to the matching stream\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic stream_t *\ncheck_strm_valid(unsigned int src_sd, tpp_addr_t *dest_addr, int dest_sd, char *msg, int sz)\n{\n\tstream_t *strm = NULL;\n\n\tif (strmarray == NULL || src_sd >= max_strms) {\n\t\tTPP_DBPRT(\"Must be data for old instance, ignoring\");\n\t\treturn NULL;\n\t}\n\n\tif (strmarray[src_sd].slot_state != TPP_SLOT_BUSY) {\n\t\tsnprintf(msg, sz, \"Data to sd=%u which is %s\", src_sd, (strmarray[src_sd].slot_state == TPP_SLOT_DELETED ? \"deleted\" : \"freed\"));\n\t\treturn NULL;\n\t}\n\n\tstrm = strmarray[src_sd].strm;\n\n\tif (strm->t_state != TPP_TRNS_STATE_OPEN) {\n\t\tsnprintf(msg, sz, \"Data to sd=%u whose transport is not open (t_state=%d)\", src_sd, strm->t_state);\n\t\tsend_app_strm_close(strm, TPP_CMD_NET_CLOSE, 0);\n\t\treturn NULL;\n\t}\n\n\tif ((strm->dest_sd != UNINITIALIZED_INT && strm->dest_sd != dest_sd) || memcmp(&strm->dest_addr, dest_addr, sizeof(tpp_addr_t)) != 0) {\n\t\tsnprintf(msg, sz, \"Data to sd=%u mismatch dest info in stream\", src_sd);\n\t\treturn NULL;\n\t}\n\n\treturn strm;\n}\n\n/**\n * @brief\n *\tWrapper function for the leaf to handle incoming data. 
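The wrapper idiom used here relies on free(NULL) being a no-op, so a single free() in the wrapper covers every return path of the inner function, whether or not it allocated. A self-contained sketch of that idiom (demo_inner and demo_handler are illustrative stand-ins, not TPP functions):

```c
/* illustrative sketch (not TPP code) of the wrapper-owns-cleanup pattern:
 * the inner function may or may not allocate into *out; the wrapper frees
 * it unconditionally, so no return path of the inner function can leak */
#include <stdlib.h>
#include <string.h>

/* inner worker: allocates a scratch buffer only for "long" input */
static int demo_inner(const char *in, char **out)
{
	if (strlen(in) > 4) {
		*out = malloc(strlen(in) + 1); /* freed by the wrapper */
		if (*out == NULL)
			return -1;
		strcpy(*out, in);
	}
	return 0;
}

/* wrapper: one free() covers every return path of demo_inner() */
int demo_handler(const char *in)
{
	char *scratch = NULL;
	int rc = demo_inner(in, &scratch);

	free(scratch); /* free(NULL) is a no-op, so this is always safe */
	return rc;
}
```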
This\n *  wrapper exists only to detect if the inner function\n *  allocated memory in data_out and free that memory in a\n *  clean way, so that we do not have to add a goto or free\n *  in every return path of the inner function.\n *\n * @param[in] tfd - The physical connection over which data arrived\n * @param[in] buf - The pointer to the received data packet\n * @param[in] len - The length of the received data packet\n * @param[in] c   - The context associated with this physical connection\n * @param[in] extra - The extra data associated with IO connection\n *\n * @return Error code\n * @retval -1 - Failure\n * @retval  0 - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic int\nleaf_pkt_handler(int tfd, void *buf, int len, void *c, void *extra)\n{\n\tvoid *data_out = NULL;\n\tint rc = leaf_pkt_handler_inner(tfd, buf, &data_out, len, c, extra);\n\tfree(data_out);\n\treturn rc;\n}\n\n/**\n * @brief\n *\tInner handler function for the received packet handler registered with the IO thread.\n *\n * @par Functionality\n *\tWhen the IO thread receives a packet over the wire, it calls\n *\tthis previously registered handler. The handler is responsible for\n *\tdecoding the data in the packet and taking the appropriate action. This handler for the\n *\tleaf checks whether data came in order, or is a duplicate packet. If\n *\tOOO data arrived, then it is queued in an OOO queue, else the data is\n *\tsent to the application to be read. If an acknowledgment for a prior\n *\tsent packet is received, this handler releases any prior shelved packet.\n *\tIf a close packet is received, then a close notification is sent to the\n *\tAPP. 
If a prior sent close packet is acknowledged, then the stream is\n *\tqueued to be closed.\n *\n * @param[in] tfd - The actual IO connection on which data arrived (unused)\n * @param[in] buf - The pointer to the data that arrived\n * @param[out] data_out - The pointer to the newly allocated data buffer, if any\n * @param[in] len  - Length of the arrived data\n * @param[in] ctx - The context (prior associated, if any) with the IO thread\n *\t\t    (unused at the leaf)\n * @param[in] extra - The extra data associated with IO connection\n *\n * @return\tError code\n * @retval -1\tFailure\n * @retval  0\tSuccess\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic int\nleaf_pkt_handler_inner(int tfd, void *buf, void **data_out, int len, void *ctx, void *extra)\n{\n\tstream_t *strm;\n\tenum TPP_MSG_TYPES type;\n\ttpp_data_pkt_hdr_t *dhdr = buf;\n\tconn_auth_t *authdata = (conn_auth_t *) extra;\n\tint totlen;\n\tint rc;\n\nagain:\n\ttotlen = ntohl(dhdr->totlen);\n\ttype = dhdr->type;\n\terrno = 0;\n\n\tif (type >= TPP_LAST_MSG)\n\t\treturn -1;\n\n\tswitch (type) {\n\t\tcase TPP_ENCRYPTED_DATA: {\n\t\t\tint sz = sizeof(tpp_encrypt_hdr_t);\n\t\t\tsize_t len_out = 0;\n\n\t\t\tif (authdata == NULL) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"tfd=%d, No auth data found\", tfd);\n\t\t\t\treturn -1;\n\t\t\t}\n\n\t\t\tif (authdata->encryptdef == NULL) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"connection doesn't support decryption of data\");\n\t\t\t\treturn -1;\n\t\t\t}\n\n\t\t\tif (authdata->encryptdef->decrypt_data(authdata->encryptctx, (void *) ((char *) buf + sz), (size_t) len - sz, data_out, &len_out) != 0) {\n\t\t\t\treturn -1;\n\t\t\t}\n\n\t\t\tif ((len - sz) > 0 && len_out == 0) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"invalid decrypted data len: %zu, pktlen: %d\", len_out, len - sz);\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\tdhdr = *data_out;\n\t\t\tlen = (int) len_out;\n\t\t\tgoto again;\n\t\t} break;\n\n\t\tcase TPP_AUTH_CTX: {\n\t\t\ttpp_auth_pkt_hdr_t ahdr = {0};\n\t\t\tsize_t len_in = 0;\n\t\t\tvoid 
*data_in = NULL;\n\t\t\tint rc = 0;\n\n\t\t\tmemcpy(&ahdr, dhdr, sizeof(tpp_auth_pkt_hdr_t));\n\t\t\tlen_in = (size_t) len - sizeof(tpp_auth_pkt_hdr_t);\n\t\t\tdata_in = calloc(1, len_in);\n\t\t\tif (data_in == NULL) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory\");\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\tmemcpy(data_in, (char *) dhdr + sizeof(tpp_auth_pkt_hdr_t), len_in);\n\n\t\t\trc = tpp_handle_auth_handshake(tfd, tfd, authdata, ahdr.for_encrypt, data_in, len_in);\n\t\t\tif (rc != 1) {\n\t\t\t\tfree(data_in);\n\t\t\t\treturn rc;\n\t\t\t}\n\n\t\t\tfree(data_in);\n\n\t\t\tif (ahdr.for_encrypt == FOR_ENCRYPT && strcmp(authdata->config->auth_method, AUTH_RESVPORT_NAME) != 0) {\n\t\t\t\tif (strcmp(authdata->config->auth_method, authdata->config->encrypt_method) != 0) {\n\t\t\t\t\trc = tpp_handle_auth_handshake(tfd, tfd, authdata, FOR_AUTH, NULL, 0);\n\t\t\t\t\tif (rc != 1) {\n\t\t\t\t\t\treturn rc;\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tauthdata->authctx = authdata->encryptctx;\n\t\t\t\t\tauthdata->authdef = authdata->encryptdef;\n\t\t\t\t\ttpp_transport_set_conn_extra(tfd, authdata);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/* send TPP_CTL_JOIN msg to router */\n\t\t\treturn leaf_send_ctl_join(tfd, ctx);\n\t\t} break; /* TPP_AUTH_CTX */\n\n\t\tcase TPP_CTL_MSG: {\n\t\t\ttpp_ctl_pkt_hdr_t *hdr = (tpp_ctl_pkt_hdr_t *) dhdr;\n\t\t\tint code = hdr->code;\n\n\t\t\tif (code == TPP_MSG_NOROUTE) {\n\t\t\t\tunsigned int src_sd = ntohl(hdr->src_sd);\n\t\t\t\tstrm = get_strm_atomic(src_sd);\n\t\t\t\tif (strm) {\n\t\t\t\t\tchar *msg = ((char *) dhdr) + sizeof(tpp_ctl_pkt_hdr_t);\n\t\t\t\t\ttpp_log(LOG_DEBUG, NULL, \"sd %u, Received noroute to dest %s, msg=\\\"%s\\\"\", src_sd, tpp_netaddr(&hdr->src_addr), msg);\n\t\t\t\t\tsend_app_strm_close(strm, TPP_CMD_NET_CLOSE, 0);\n\t\t\t\t}\n\t\t\t\treturn 0;\n\t\t\t}\n\n\t\t\tif (code == TPP_MSG_UPDATE) {\n\t\t\t\ttpp_log(LOG_INFO, NULL, \"Received UPDATE from pbs_comm\");\n\t\t\t\tif (tpp_mbox_post(&app_mbox, UNINITIALIZED_INT, 
TPP_CMD_NET_RESTORE, NULL, 0) != 0) {\n\t\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Error writing to app mbox\");\n\t\t\t\t}\n\t\t\t\treturn 0;\n\t\t\t}\n\n\t\t\tif (code == TPP_MSG_AUTHERR) {\n\t\t\t\tchar *msg = ((char *) dhdr) + sizeof(tpp_ctl_pkt_hdr_t);\n\t\t\t\ttpp_log(LOG_CRIT, NULL, \"tfd %d, Received authentication error from router %s, err=%d, msg=\\\"%s\\\"\", tfd, tpp_netaddr(&hdr->src_addr), hdr->error_num, msg);\n\t\t\t\treturn -1; /* close connection */\n\t\t\t}\n\t\t} break; /* TPP_CTL_MSG */\n\n\t\tcase TPP_CTL_LEAVE: {\n\t\t\ttpp_leave_pkt_hdr_t *hdr = (tpp_leave_pkt_hdr_t *) dhdr;\n\t\t\ttpp_que_t send_close_queue;\n\t\t\ttpp_addr_t *addrs;\n\t\t\tint i;\n\n\t\t\tPRTPKTHDR(__func__, hdr, 0);\n\n\t\t\t/* bother only about leave */\n\t\t\ttpp_read_lock(&strmarray_lock); /* walking stream idx, so read lock */\n\t\t\tTPP_QUE_CLEAR(&send_close_queue);\n\n\t\t\t/* go past the header and point to the list of addresses following it */\n\t\t\taddrs = (tpp_addr_t *) (((char *) dhdr) + sizeof(tpp_leave_pkt_hdr_t));\n\t\t\tfor (i = 0; i < hdr->num_addrs; i++) {\n\t\t\t\tvoid *idx_ctx = NULL;\n\t\t\t\tvoid *paddr = &addrs[i];\n\n\t\t\t\twhile (pbs_idx_find(streams_idx, &paddr, (void **) &strm, &idx_ctx) == PBS_IDX_RET_OK) {\n\t\t\t\t\tif (memcmp(paddr, &addrs[i], sizeof(tpp_addr_t)) != 0)\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tstrm->lasterr = 0;\n\t\t\t\t\t/* under lock already, can access directly */\n\t\t\t\t\tif (strmarray[strm->sd].slot_state == TPP_SLOT_BUSY) {\n\t\t\t\t\t\tif (tpp_enque(&send_close_queue, strm) == NULL) {\n\t\t\t\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory enqueing to send close queue\");\n\t\t\t\t\t\t\ttpp_unlock_rwlock(&strmarray_lock);\n\t\t\t\t\t\t\tpbs_idx_free_ctx(idx_ctx);\n\t\t\t\t\t\t\treturn -1;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tpbs_idx_free_ctx(idx_ctx);\n\t\t\t}\n\t\t\ttpp_unlock_rwlock(&strmarray_lock);\n\n\t\t\twhile ((strm = (stream_t *) tpp_deque(&send_close_queue))) {\n\t\t\t\tTPP_DBPRT(\"received TPP_CTL_LEAVE, 
sending TPP_CMD_NET_CLOSE sd=%u\", strm->sd);\n\t\t\t\tsend_app_strm_close(strm, TPP_CMD_NET_CLOSE, hdr->ecode);\n\t\t\t}\n\n\t\t\treturn 0;\n\t\t} break;\n\t\t\t/* TPP_CTL_LEAVE */\n\n\t\tcase TPP_DATA:\n\t\tcase TPP_CLOSE_STRM: {\n\t\t\tchar msg[TPP_GEN_BUF_SZ] = \"\";\n\t\t\tunsigned int src_sd;\n\t\t\tunsigned int dest_sd;\n\t\t\tunsigned int src_magic;\n\t\t\tunsigned int sz = len - sizeof(tpp_data_pkt_hdr_t);\n\t\t\tvoid *data = (char *) dhdr + sizeof(tpp_data_pkt_hdr_t);\n\n\t\t\tsrc_sd = ntohl(dhdr->src_sd);\n\t\t\tdest_sd = ntohl(dhdr->dest_sd);\n\t\t\tsrc_magic = ntohl(dhdr->src_magic);\n\n\t\t\tPRTPKTHDR(__func__, dhdr, sz);\n\n\t\t\tif (dest_sd == UNINITIALIZED_INT && type != TPP_CLOSE_STRM && sz == 0) {\n\t\t\t\ttpp_log(LOG_ERR, NULL, \"ack packet without dest_sd set!!!\");\n\t\t\t\treturn -1;\n\t\t\t}\n\n\t\t\tif (dest_sd == UNINITIALIZED_INT) {\n\t\t\t\ttpp_read_lock(&strmarray_lock); /* walking stream idx, so read lock */\n\t\t\t\tstrm = find_stream_with_dest(&dhdr->src_addr, src_sd, src_magic);\n\t\t\t\ttpp_unlock_rwlock(&strmarray_lock);\n\t\t\t\tif (strm == NULL) {\n\t\t\t\t\tTPP_DBPRT(\"No stream associated, Opening new stream\");\n\t\t\t\t\t/*\n\t\t\t\t\t * packet's destination address = stream's source address at our end\n\t\t\t\t\t * packet's source address = stream's destination address at our end\n\t\t\t\t\t */\n\t\t\t\t\tif ((strm = alloc_stream(&dhdr->dest_addr, &dhdr->src_addr)) == NULL) {\n\t\t\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating stream\");\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tTPP_DBPRT(\"Stream sd=%u, u_state=%d, t_state=%d\", strm->sd, strm->u_state, strm->t_state);\n\t\t\t\t}\n\t\t\t\tdest_sd = strm->sd;\n\t\t\t} else {\n\t\t\t\tTPP_DBPRT(\"Stream found from index in packet = %u\", dest_sd);\n\t\t\t}\n\n\t\t\t/* In any case, check for the stream's validity */\n\t\t\ttpp_read_lock(&strmarray_lock); /* walking stream idx, so read lock */\n\t\t\tstrm = check_strm_valid(dest_sd, 
&dhdr->src_addr, src_sd, msg, sizeof(msg));\n\t\t\ttpp_unlock_rwlock(&strmarray_lock);\n\t\t\tif (strm == NULL) {\n\t\t\t\tif (type != TPP_CLOSE_STRM && sz == 0)\n\t\t\t\t\treturn 0; /* it is an ack packet, don't send noroute */\n\n\t\t\t\ttpp_log(LOG_WARNING, __func__, \"%s\", msg);\n\t\t\t\ttpp_send_ctl_msg(tfd, TPP_MSG_NOROUTE, &dhdr->src_addr, &dhdr->dest_addr, src_sd, 0, msg);\n\t\t\t\treturn 0;\n\t\t\t}\n\n\t\t\t/*\n\t\t\t * this should be set even for close packets since\n\t\t\t * we could have opened a stream locally, sent a packet\n\t\t\t * and the ack carries the other side's sd, which we must store\n\t\t\t * and use in the next send out.\n\t\t\t */\n\t\t\tstrm->dest_sd = src_sd;\t      /* next time outgoing will have dest_fd */\n\t\t\tstrm->dest_magic = src_magic; /* used for matching next time onwards */\n\n\t\t\trc = send_pkt_to_app(strm, type, data, sz, totlen);\n\n\t\t\treturn rc; /* 0 - success, -1 failed, -2 app mbox full */\n\t\t} break;\t   /* TPP_DATA, TPP_CLOSE_STRM */\n\n\t\tdefault:\n\t\t\ttpp_log(LOG_ERR, NULL, \"Bad header for incoming packet on fd %d, header = %d, len = %d\", tfd, type, len);\n\n\t} /* switch */\n\n\treturn -1;\n}\n\n/**\n * @brief\n *\tThe connection drop (close) handler registered with the IO thread.\n *\n * @par Functionality\n *\tWhen the connection between this leaf and a router is dropped, the IO\n *\tthread first calls this (prior registered) function to notify the leaf\n *\tlayer of the fact that a connection was dropped.\n *\n * @param[in] tfd - The IO connection that was dropped (unused)\n * @param[in] error - The error associated with the connection drop\n * @param[in] c - context associated with the IO thread (unused here)\n * @param[in] extra - The extra data associated with IO connection\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic int\nleaf_close_handler(int tfd, int error, void *c, void *extra)\n{\n\tint rc;\n\ttpp_context_t *ctx = (tpp_context_t *) c;\n\ttpp_router_t *r;\n\tint last_state;\n\n\tif (extra) 
{\n\t\tconn_auth_t *authdata = (conn_auth_t *) extra;\n\t\tif (authdata->authctx && authdata->authdef)\n\t\t\tauthdata->authdef->destroy_ctx(authdata->authctx);\n\t\tif (authdata->authdef != authdata->encryptdef && authdata->encryptctx && authdata->encryptdef)\n\t\t\tauthdata->encryptdef->destroy_ctx(authdata->encryptctx);\n\t\tif (authdata->config)\n\t\t\tfree_auth_config(authdata->config);\n\t\t/* DO NOT free authdef here, it will be done in unload_auths() */\n\t\tfree(authdata);\n\t\ttpp_transport_set_conn_extra(tfd, NULL);\n\t}\n\n\tr = (tpp_router_t *) ctx->ptr;\n\n\t/* deallocate the connection structure associated with ctx */\n\ttpp_transport_close(r->conn_fd);\n\n\tif (tpp_going_down == 1)\n\t\treturn -1; /* while we are doing shutdown don't try to reconnect etc */\n\n\t/*\n\t * Disassociate the older context so we can attach it\n\t * to a new connection; the old connection will be\n\t * deleted shortly by the caller\n\t */\n\tfree(ctx);\n\ttpp_transport_set_conn_ctx(tfd, NULL);\n\tlast_state = r->state;\n\tr->state = TPP_ROUTER_STATE_DISCONNECTED;\n\tr->conn_fd = -1;\n\n\tif (last_state == TPP_ROUTER_STATE_CONNECTED) {\n\t\tunsigned int i;\n\t\t/* log disconnection message */\n\t\ttpp_log(LOG_CRIT, NULL, \"Connection to pbs_comm %s down\", r->router_name);\n\n\t\t/* send individual net close messages to app */\n\t\ttpp_read_lock(&strmarray_lock); /* walking stream idx, so read lock */\n\t\tfor (i = 0; i < max_strms; i++) {\n\t\t\tif (strmarray[i].slot_state == TPP_SLOT_BUSY) {\n\t\t\t\tstrmarray[i].strm->t_state = TPP_TRNS_STATE_NET_CLOSED;\n\t\t\t\tTPP_DBPRT(\"net down, sending TPP_CMD_NET_CLOSE sd=%u\", strmarray[i].strm->sd);\n\t\t\t\tsend_app_strm_close(strmarray[i].strm, TPP_CMD_NET_CLOSE, 0);\n\t\t\t}\n\t\t}\n\t\ttpp_unlock_rwlock(&strmarray_lock);\n\n\t\tif (the_app_net_down_handler) {\n\t\t\tif (tpp_mbox_post(&app_mbox, UNINITIALIZED_INT, TPP_CMD_NET_DOWN, NULL, 0) != 0) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Error writing to app mbox\");\n\t\t\t\treturn 
-1;\n\t\t\t}\n\t\t}\n\n\t\t/* if we are connected to another router, make app layer realize they need to restart streams */\n\t\t/* send a connection restore message, so app restarts streams on the alternate route */\n\t\tif (get_active_router()) {\n\t\t\tif (tpp_mbox_post(&app_mbox, UNINITIALIZED_INT, TPP_CMD_NET_RESTORE, NULL, 0) != 0) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Error writing to app mbox\");\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t}\n\t}\n\n\tif (r->delay == 0)\n\t\tr->delay = TPP_CONNNECT_RETRY_MIN;\n\telse\n\t\tr->delay += TPP_CONNECT_RETRY_INC;\n\n\tif (r->delay > TPP_CONNECT_RETRY_MAX)\n\t\tr->delay = TPP_CONNECT_RETRY_MAX;\n\n\t/* since our connection with our router is down, we need to try again */\n\t/* connect to router and send initial join packet */\n\tif ((rc = connect_router(r)) != 0)\n\t\treturn -1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n * Wrapper routine that checks the router connection status before calling\n * tpp_transport_vsend(). This function checks not just the router fd\n * but also that the connection is actually in the fully connected state.\n *\n * @param[in] pkt - The packet to send\n *\n * @return  Error code\n * @retval  -1 - Failure\n * @retval  -2 - transport buffers full\n * @retval   0 - Success\n *\n */\nstatic int\nsend_to_router(tpp_packet_t *pkt)\n{\n\ttpp_router_t *router = get_active_router();\n\tif ((router == NULL) || (router->conn_fd == -1) || (router->state != TPP_ROUTER_STATE_CONNECTED)) {\n\t\ttpp_log(LOG_ERR, __func__, \"No active router\");\n\t\treturn -1;\n\t}\n\n\treturn (tpp_transport_vsend(router->conn_fd, pkt));\n}\n
  },
  {
    "path": "src/lib/Libtpp/tpp_em.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\ttpp_em.c\n *\n * @brief\tThe event monitor functions for TPP\n *\n * @par\t\tFunctionality:\n *\n *\t\tTPP = TCP based Packet Protocol. 
This layer uses TCP in a multi-\n *\t\thop router based network topology to deliver packets to desired\n *\t\tdestinations. LEAF (end) nodes are connected to ROUTERS via\n *\t\tpersistent TCP connections. The ROUTER has intelligence to route\n *\t\tpackets to appropriate destination leaves or other routers.\n *\n *\t\tThis file implements the em (event monitor) code such that it is\n *\t\tplatform independent. It provides a generic interface to add, remove\n *\t\tand wait for file descriptors to be monitored for events.\n *\n */\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <unistd.h>\n#include <sys/types.h>\n#include <sys/socket.h>\n#include <netinet/in.h>\n#include <arpa/inet.h>\n#include <pthread.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <netdb.h>\n#include <signal.h>\n#include \"tpp_internal.h\"\n#ifdef HAVE_SYS_EVENTFD_H\n#include <sys/eventfd.h>\n#endif\n\n/********************************** START OF MULTIPLEXING CODE *****************************************/\n/**\n * @brief\n *\tPlatform independent function to wait for a event to happen on the event context.\n *\tWaits for the specified timeout period. 
Does not block/unblock signals.\n *\n * @param[in] -  em_ctx - The event monitor context\n * @param[out] - ev_array - Array of events returned\n * @param[in] - timeout - The timeout in milliseconds to wait for\n *\n * @return\tNumber of events returned\n * @retval -1\tFailure\n * @retval  0\tTimeout\n * @retval >0   Success (some events occurred)\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_em_wait(void *em_ctx, em_event_t **ev_array, int timeout)\n{\n#ifndef WIN32\n\treturn tpp_em_pwait(em_ctx, ev_array, timeout, NULL);\n#else\n\treturn tpp_em_wait_win(em_ctx, ev_array, timeout);\n#endif\n}\n\n/****************************************** Linux EPOLL ************************************************/\n\n#if defined(PBS_USE_EPOLL)\n/**\n * @brief\n *\tInitialize event monitoring\n *\n * @param[in] - max_events - max events that need to be handled\n *\n * @return\tEvent context\n * @retval  NULL Failure\n * @retval !NULL Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: yes\n *\n */\nvoid *\ntpp_em_init(int max_events)\n{\n\tepoll_context_t *ctx = malloc(sizeof(epoll_context_t));\n\tif (!ctx)\n\t\treturn NULL;\n\n\tctx->events = malloc(sizeof(em_event_t) * max_events);\n\tif (ctx->events == NULL) {\n\t\tfree(ctx);\n\t\treturn NULL;\n\t}\n\n#if defined(EPOLL_CLOEXEC)\n\tctx->epoll_fd = epoll_create1(EPOLL_CLOEXEC);\n#else\n\tctx->epoll_fd = epoll_create(max_events);\n\tif (ctx->epoll_fd > -1) {\n\t\ttpp_set_close_on_exec(ctx->epoll_fd);\n\t}\n#endif\n\tif (ctx->epoll_fd == -1) {\n\t\tfree(ctx->events);\n\t\tfree(ctx);\n\t\treturn NULL;\n\t}\n\tctx->max_nfds = max_events;\n\tctx->init_pid = getpid();\n\n\treturn ((void *) ctx);\n}\n\n/**\n * @brief\n *\tDestroy event monitoring\n *\n * @param[in] ctx - The event monitoring context to destroy\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: yes\n *\n */\nvoid\ntpp_em_destroy(void *em_ctx)\n{\n\tepoll_context_t *ctx = (epoll_context_t *) em_ctx;\n\n\tif 
(ctx != NULL) {\n\t\tclose(ctx->epoll_fd);\n\t\tfree(ctx->events);\n\t\tfree(ctx);\n\t}\n}\n\n/**\n * @brief\n *\tAdd a file descriptor to the list of descriptors to be monitored for\n *\tevents\n *\n * @param[in] - em_ctx - The event monitor context\n * @param[in] - fd - The file descriptor to add to the monitored list\n * @param[in] - event_mask - A mask of events to monitor the fd for\n *\n * @return\tError code\n * @retval -1\tFailure\n * @retval  0\tSuccess\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nint\ntpp_em_add_fd(void *em_ctx, int fd, int event_mask)\n{\n\tepoll_context_t *ctx = (epoll_context_t *) em_ctx;\n\tstruct epoll_event ev;\n\n\t/*\n\t * if this is not the process which called em_init (eg. a child process),\n\t * we should not allow manipulating the epoll fd as it affects\n\t * the epoll_fd structure pointed to by the parent process\n\t */\n\tif (ctx->init_pid != getpid())\n\t\treturn 0;\n\n\tmemset(&ev, 0, sizeof(ev));\n\tev.events = event_mask;\n\tev.data.fd = fd;\n\n\tif (epoll_ctl(ctx->epoll_fd, EPOLL_CTL_ADD, fd, &ev) < 0)\n\t\treturn -1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tModify a file descriptor in the list of descriptors to be monitored for\n *\tevents\n *\n * @param[in] - em_ctx - The event monitor context\n * @param[in] - fd - The file descriptor to modify in the monitored list\n * @param[in] - event_mask - A mask of events to monitor the fd for\n *\n * @return\tError code\n * @retval -1\tFailure\n * @retval  0\tSuccess\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_em_mod_fd(void *em_ctx, int fd, int event_mask)\n{\n\tepoll_context_t *ctx = (epoll_context_t *) em_ctx;\n\tstruct epoll_event ev;\n\n\t/*\n\t * if this is not the process which called em_init (eg. a child process),\n\t * we should not allow manipulating the epoll fd as it affects\n\t * the epoll_fd structure pointed to by the parent process\n\t */\n\tif (ctx->init_pid != getpid())\n\t\treturn 0;\n\n\tmemset(&ev, 0, 
sizeof(ev));\n\tev.events = event_mask;\n\tev.data.fd = fd;\n\n\tif (epoll_ctl(ctx->epoll_fd, EPOLL_CTL_MOD, fd, &ev) < 0)\n\t\treturn -1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tRemove a file descriptor from the list of descriptors monitored for\n *\tevents\n *\n * @param[in] - em_ctx - The event monitor context\n * @param[in] - fd - The file descriptor to remove from the monitored list\n *\n * @return\tError code\n * @retval -1\tFailure\n * @retval  0\tSuccess\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_em_del_fd(void *em_ctx, int fd)\n{\n\tepoll_context_t *ctx = (epoll_context_t *) em_ctx;\n\tstruct epoll_event ev;\n\n\t/*\n\t * if this is not the process which called em_init (eg. a child process),\n\t * we should not allow manipulating the epoll fd as it affects\n\t * the epoll_fd structure pointed to by the parent process\n\t */\n\tif (ctx->init_pid != getpid())\n\t\treturn 0;\n\n\tmemset(&ev, 0, sizeof(ev));\n\tev.data.fd = fd;\n\tif (epoll_ctl(ctx->epoll_fd, EPOLL_CTL_DEL, fd, &ev) < 0)\n\t\treturn -1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tWait for an event to happen on the event context. 
Waits for the specified\n *\ttimeout period.\n *\n * @param[in] -  em_ctx - The event monitor context\n * @param[out] - ev_array - Array of events returned\n * @param[in] - timeout - The timeout in milliseconds to wait for\n * @param[in] - sigmask - The signal mask to atomically unblock before sleeping\n *\n * @return\tNumber of events returned\n * @retval -1\tFailure\n * @retval  0\tTimeout\n * @retval >0   Success (some events occurred)\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\n#ifdef PBS_HAVE_EPOLL_PWAIT\nint\ntpp_em_pwait(void *em_ctx, em_event_t **ev_array, int timeout, const sigset_t *sigmask)\n{\n\tepoll_context_t *ctx = (epoll_context_t *) em_ctx;\n\t*ev_array = ctx->events;\n\treturn (epoll_pwait(ctx->epoll_fd, ctx->events, ctx->max_nfds, timeout, sigmask));\n}\n#else\nint\ntpp_em_pwait(void *em_ctx, em_event_t **ev_array, int timeout, const sigset_t *sigmask)\n{\n\tepoll_context_t *ctx = (epoll_context_t *) em_ctx;\n\tsigset_t origmask;\n\tint n;\n\n\t*ev_array = ctx->events;\n\tsigprocmask(SIG_SETMASK, sigmask, &origmask);\n\tn = epoll_wait(ctx->epoll_fd, ctx->events, ctx->max_nfds, timeout);\n\tsigprocmask(SIG_SETMASK, &origmask, NULL);\n\treturn n;\n}\n#endif\n\n#elif defined(PBS_USE_POLL)\n\n/************************************************* POLL ************************************************/\n\n/**\n * @brief\n *\tInitialize event monitoring\n *\n * @param[in] - max_events - max events that need to be handled\n *\n * @return\tEvent context\n * @retval  NULL Failure\n * @retval !NULL Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: yes\n *\n */\nvoid *\ntpp_em_init(int max_events)\n{\n\tint i;\n\tpoll_context_t *ctx = malloc(sizeof(poll_context_t));\n\tif (!ctx)\n\t\treturn NULL;\n\n\tctx->events = malloc(sizeof(em_event_t) * max_events);\n\tif (ctx->events == NULL) {\n\t\tfree(ctx);\n\t\treturn NULL;\n\t}\n\n\tctx->fds = malloc(sizeof(struct pollfd) * max_events);\n\tif (!ctx->fds) 
{\n\t\tfree(ctx->events);\n\t\tfree(ctx);\n\t\treturn NULL;\n\t}\n\tfor (i = 0; i < max_events; i++)\n\t\tctx->fds[i].fd = -1;\n\tctx->max_nfds = max_events;\n\tctx->curr_nfds = max_events;\n\n\treturn ((void *) ctx);\n}\n\n/**\n * @brief\n *\tDestroy event monitoring\n *\n * @param[in] ctx - The event monitoring context to destroy\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: yes\n *\n */\nvoid\ntpp_em_destroy(void *em_ctx)\n{\n\tfree(((poll_context_t *) em_ctx)->fds);\n\tfree(((poll_context_t *) em_ctx)->events);\n\tfree(em_ctx);\n}\n\n/**\n * @brief\n *\tAdd a file descriptor to the list of descriptors to be monitored for\n *\tevents\n *\n * @param[in] - em_ctx - The event monitor context\n * @param[in] - fd - The file descriptor to add to the monitored list\n * @param[in] - event_mask - A mask of events to monitor the fd for\n *\n * @return\tError code\n * @retval -1\tFailure\n * @retval  0\tSuccess\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nint\ntpp_em_add_fd(void *em_ctx, int fd, int event_mask)\n{\n\tpoll_context_t *ctx = (poll_context_t *) em_ctx;\n\tint nfds;\n\tint i;\n\n\tif (fd > ctx->curr_nfds - 1) {\n\t\tstruct pollfd *tmp;\n\n\t\tnfds = fd + 1000;\n\t\t/* grow the fds array (not the context); keep the old array intact on failure */\n\t\ttmp = realloc(ctx->fds, sizeof(struct pollfd) * nfds);\n\t\tif (!tmp)\n\t\t\treturn -1;\n\t\tctx->fds = tmp;\n\t\tfor (i = ctx->curr_nfds; i < nfds; i++)\n\t\t\tctx->fds[i].fd = -1;\n\n\t\tctx->curr_nfds = nfds;\n\t}\n\n\tctx->fds[fd].fd = fd;\n\tctx->fds[fd].events = event_mask;\n\tctx->fds[fd].revents = 0;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tModify a file descriptor in the list of descriptors to be monitored for\n *\tevents\n *\n * @param[in] - em_ctx - The event monitor context\n * @param[in] - fd - The file descriptor to modify in the monitored list\n * @param[in] - event_mask - A mask of events to monitor the fd for\n *\n * @return\tError code\n * @retval -1\tFailure\n * @retval  0\tSuccess\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n 
*/\nint\ntpp_em_mod_fd(void *em_ctx, int fd, int event_mask)\n{\n\tpoll_context_t *ctx = (poll_context_t *) em_ctx;\n\n\tctx->fds[fd].fd = fd;\n\tctx->fds[fd].events = event_mask;\n\tctx->fds[fd].revents = 0;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tRemove a file descriptor from the list of descriptors monitored for\n *\tevents\n *\n * @param[in] - em_ctx - The event monitor context\n * @param[in] - fd - The file descriptor to remove from the monitored list\n *\n * @return\tError code\n * @retval -1\tFailure\n * @retval  0\tSuccess\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_em_del_fd(void *em_ctx, int fd)\n{\n\tpoll_context_t *ctx = (poll_context_t *) em_ctx;\n\tctx->fds[fd].fd = -1;\n\treturn 0;\n}\n\n/**\n * @brief\n *\tWait for an event to happen on the event context. Waits for the specified\n *\ttimeout period.\n *\n * @param[in] -  em_ctx - The event monitor context\n * @param[out] - ev_array - Array of events returned\n * @param[in] - timeout - The timeout in milliseconds to wait for\n * @param[in] - sigmask - The signal mask to atomically unblock before sleeping\n *\n * @return\tNumber of events returned\n * @retval -1\tFailure\n * @retval  0\tTimeout\n * @retval >0   Success (some events occurred)\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_em_pwait(void *em_ctx, em_event_t **ev_array, int timeout, const sigset_t *sigmask)\n{\n\tpoll_context_t *ctx = (poll_context_t *) em_ctx;\n\tint nready;\n\tint i;\n\tint ev_count;\n#ifdef PBS_HAVE_PPOLL\n\tstruct timespec ts;\n\tstruct timespec *pts;\n#else\n\tsigset_t origmask;\n#endif\n\n#ifdef PBS_HAVE_PPOLL\n\t/* ppoll takes a struct timespec rather than milliseconds, so convert the timeout */\n\tif (timeout == -1) {\n\t\tpts = NULL;\n\t} else {\n\t\tts.tv_sec = timeout / 1000;\n\t\tts.tv_nsec = (long) (timeout % 1000) * 1000000L;\n\t\tpts = &ts;\n\t}\n\tnready = ppoll(ctx->fds, ctx->curr_nfds, pts, sigmask);\n#else\n\tif (sigmask) {\n\t\tif (sigprocmask(SIG_SETMASK, sigmask, &origmask) == -1)\n\t\t\treturn -1;\n\t}\n\n\tnready = poll(ctx->fds, ctx->curr_nfds, timeout);\n\n\tif (sigmask) {\n\t\tsigprocmask(SIG_SETMASK, &origmask, NULL);\n\t}\n#endif\n\n\tif (nready == -1 || nready == 0)\n\t\treturn nready;\n\n\tev_count = 0;\n\t*ev_array = 
ctx->events;\n\tfor (i = 0; i < ctx->curr_nfds; i++) {\n\t\tif (ctx->fds[i].fd < 0)\n\t\t\tcontinue;\n\n\t\tif (ctx->fds[i].revents != 0) {\n\t\t\tctx->events[ev_count].fd = ctx->fds[i].fd;\n\t\t\tctx->events[ev_count].events = ctx->fds[i].revents;\n\t\t\tev_count++;\n\n\t\t\tif (ev_count >= ctx->max_nfds) /* events array full */\n\t\t\t\treturn ev_count;\n\t\t}\n\t}\n\treturn ev_count;\n}\n\n/*************************************** GENERIC SELECT ************************************************/\n\n#elif defined(PBS_USE_SELECT)\n/**\n * @brief\n *\tInitialize event monitoring\n *\n * @param[in] - max_events - max events that need to be handled\n *\n * @return\tEvent context\n * @retval  NULL Failure\n * @retval !NULL Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: yes\n *\n */\nvoid *\ntpp_em_init(int max_events)\n{\n\tsel_context_t *ctx;\n\n\tctx = malloc(sizeof(sel_context_t));\n\tif (!ctx)\n\t\treturn NULL;\n\n\tctx->events = malloc(sizeof(em_event_t) * max_events);\n\tif (ctx->events == NULL) {\n\t\tfree(ctx);\n\t\treturn NULL;\n\t}\n\n\tFD_ZERO(&ctx->master_read_fds);\n\tFD_ZERO(&ctx->master_write_fds);\n\tFD_ZERO(&ctx->master_err_fds);\n\n\tctx->maxfd = 0;\n\tctx->max_nfds = max_events;\n\n\treturn ((void *) ctx);\n}\n\n/**\n * @brief\n *\tDestroy event monitoring\n *\n * @param[in] ctx - The event monitoring context to destroy\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: yes\n *\n */\nvoid\ntpp_em_destroy(void *em_ctx)\n{\n\tfree(((sel_context_t *) em_ctx)->events);\n\tfree(em_ctx);\n}\n\n/**\n * @brief\n *\tAdd a file descriptor to the list of descriptors to be monitored for\n *\tevents\n *\n * @param[in] - em_ctx - The event monitor context\n * @param[in] - fd - The file descriptor to add to the monitored list\n * @param[in] - event_mask - A mask of events to monitor the fd for\n *\n * @return\tError code\n * @retval -1\tFailure\n * @retval  0\tSuccess\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nint\ntpp_em_add_fd(void 
*em_ctx, int fd, int event_mask)\n{\n\tsel_context_t *ctx = (sel_context_t *) em_ctx;\n\n\tif ((event_mask & EM_IN) == EM_IN)\n\t\tFD_SET(fd, &ctx->master_read_fds);\n\n\tif ((event_mask & EM_OUT) == EM_OUT)\n\t\tFD_SET(fd, &ctx->master_write_fds);\n\n\tif ((event_mask & EM_ERR) == EM_ERR)\n\t\tFD_SET(fd, &ctx->master_err_fds);\n\n\tif (fd >= ctx->maxfd)\n\t\tctx->maxfd = fd + 1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tModify a file descriptor to the list of descriptors to be monitored for\n *\tevents\n *\n * @param[in] - em_ctx - The event monitor context\n * @param[in] - fd - The file descriptor to add to the monitored list\n * @param[in] - event_mask - A mask of events to monitor the fd for\n *\n * @return\tError code\n * @retval -1\tFailure\n * @retval  0\tSuccess\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_em_mod_fd(void *em_ctx, int fd, int event_mask)\n{\n\tsel_context_t *ctx = (sel_context_t *) em_ctx;\n\n\tFD_CLR(fd, &ctx->master_read_fds);\n\tFD_CLR(fd, &ctx->master_write_fds);\n\tFD_CLR(fd, &ctx->master_err_fds);\n\n\tif ((event_mask & EM_IN) == EM_IN)\n\t\tFD_SET(fd, &ctx->master_read_fds);\n\n\tif ((event_mask & EM_OUT) == EM_OUT)\n\t\tFD_SET(fd, &ctx->master_write_fds);\n\n\tif ((event_mask & EM_ERR) == EM_ERR)\n\t\tFD_SET(fd, &ctx->master_err_fds);\n\n\tif (fd >= ctx->maxfd)\n\t\tctx->maxfd = fd + 1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tRemove a file descriptor from the list of descriptors monitored for\n *\tevents\n *\n * @param[in] - em_ctx - The event monitor context\n * @param[in] - fd - The file descriptor to add to the monitored list\n *\n * @return\tError code\n * @retval -1\tFailure\n * @retval  0\tSuccess\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_em_del_fd(void *em_ctx, int fd)\n{\n\tsel_context_t *ctx = (sel_context_t *) em_ctx;\n\n\tFD_CLR(fd, &ctx->master_read_fds);\n\tFD_CLR(fd, &ctx->master_write_fds);\n\tFD_CLR(fd, &ctx->master_err_fds);\n\n\treturn 
0;\n}\n\n/**\n * @brief\n *\tWait for an event to happen on the event context. Waits for the specified\n *\ttimeout period.\n *\n * @param[in] -  em_ctx - The event monitor context\n * @param[out] - ev_array - Array of events returned\n * @param[in] - timeout - The timeout in milliseconds to wait for\n * @param[in] - sigmask - The signal mask to atomically unblock before sleeping\n *\n * @return\tNumber of events returned\n * @retval -1\tFailure\n * @retval  0\tTimeout\n * @retval >0   Success (some events occurred)\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\n#ifndef WIN32\nint\ntpp_em_pwait(void *em_ctx, em_event_t **ev_array, int timeout, const sigset_t *sigmask)\n{\n\tsel_context_t *ctx = (sel_context_t *) em_ctx;\n\tint nready;\n\tint i;\n\tint ev_count;\n\tstruct timespec ts;\n\tstruct timespec *pts;\n\tint event;\n\n\terrno = 0;\n\n\tmemcpy(&ctx->read_fds, &ctx->master_read_fds, sizeof(ctx->master_read_fds));\n\tmemcpy(&ctx->write_fds, &ctx->master_write_fds, sizeof(ctx->master_write_fds));\n\tmemcpy(&ctx->err_fds, &ctx->master_err_fds, sizeof(ctx->master_err_fds));\n\n\tif (timeout == -1) {\n\t\tpts = NULL;\n\t} else {\n\t\t/* pselect() takes a struct timespec, not a struct timeval */\n\t\tts.tv_sec = timeout / 1000;\n\t\tts.tv_nsec = (long) (timeout % 1000) * 1000000L;\n\t\tpts = &ts;\n\t}\n\n\tnready = pselect(ctx->maxfd, &ctx->read_fds, &ctx->write_fds, &ctx->err_fds, pts, sigmask);\n\n\tif (nready == -1 || nready == 0)\n\t\treturn nready;\n\n\tev_count = 0;\n\t*ev_array = ctx->events;\n\tfor (i = 0; i <= ctx->maxfd; i++) {\n\t\tevent = 0;\n\n\t\tif (FD_ISSET(i, &ctx->read_fds))\n\t\t\tevent |= EM_IN;\n\t\tif (FD_ISSET(i, &ctx->write_fds))\n\t\t\tevent |= EM_OUT;\n\t\tif (FD_ISSET(i, &ctx->err_fds))\n\t\t\tevent |= EM_ERR;\n\n\t\tif (event != 0) {\n\t\t\tctx->events[ev_count].fd = i;\n\t\t\tctx->events[ev_count].events = event;\n\t\t\tev_count++;\n\t\t}\n\n\t\tif (ev_count >= ctx->max_nfds)\n\t\t\tbreak;\n\t}\n\treturn ev_count;\n}\n#else\nint\ntpp_em_wait_win(void *em_ctx, em_event_t **ev_array, int 
timeout)\n{\n\tsel_context_t *ctx = (sel_context_t *) em_ctx;\n\tint nready;\n\tint i;\n\tint ev_count;\n\tstruct timeval tv;\n\tstruct timeval *ptv;\n\tint event;\n\n\terrno = 0;\n\n\tmemcpy(&ctx->read_fds, &ctx->master_read_fds, sizeof(ctx->master_read_fds));\n\tmemcpy(&ctx->write_fds, &ctx->master_write_fds, sizeof(ctx->master_write_fds));\n\tmemcpy(&ctx->err_fds, &ctx->master_err_fds, sizeof(ctx->master_err_fds));\n\n\tif (timeout == -1) {\n\t\tptv = NULL;\n\t} else {\n\t\ttv.tv_sec = timeout / 1000;\n\t\ttv.tv_usec = (timeout % 1000) * 1000;\n\t\tptv = &tv;\n\t}\n\n\tnready = select(ctx->maxfd, &ctx->read_fds, &ctx->write_fds, &ctx->err_fds, ptv);\n\t/* for windows select, translate the errno and return value */\n\tif (nready == SOCKET_ERROR) {\n\t\terrno = tr_2_errno(WSAGetLastError());\n\t\tnready = -1;\n\t}\n\n\tif (nready == -1 || nready == 0)\n\t\treturn nready;\n\n\tev_count = 0;\n\t*ev_array = ctx->events;\n\tfor (i = 0; i <= ctx->maxfd; i++) {\n\t\tevent = 0;\n\n\t\tif (FD_ISSET(i, &ctx->read_fds))\n\t\t\tevent |= EM_IN;\n\t\tif (FD_ISSET(i, &ctx->write_fds))\n\t\t\tevent |= EM_OUT;\n\t\tif (FD_ISSET(i, &ctx->err_fds))\n\t\t\tevent |= EM_ERR;\n\n\t\tif (event != 0) {\n\t\t\tctx->events[ev_count].fd = i;\n\t\t\tctx->events[ev_count].events = event;\n\t\t\tev_count++;\n\t\t}\n\n\t\tif (ev_count >= ctx->max_nfds)\n\t\t\tbreak;\n\t}\n\treturn ev_count;\n}\n#endif\n\n#endif\n\n/********************************** END OF MULTIPLEXING CODE *****************************************/\n\n/********************************** START OF MBOX CODE ***********************************************/\n/**\n * @brief\n *\tInitialize an mbox\n *\n * @param[in] - mbox   - The mbox to initialize\n * @param[in] - name   - The name of the mbox, used for logging/debugging\n * @param[in] - size   - The total size allowed, or -1 for infinite\n *\n * @return  Error code\n * @retval  -1 - Failure\n * @retval   0 - success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_mbox_init(tpp_mbox_t *mbox, char *name, int 
size)\n{\n\ttpp_init_lock(&mbox->mbox_mutex);\n\ttpp_lock(&mbox->mbox_mutex);\n\n\tTPP_QUE_CLEAR(&mbox->mbox_queue);\n\n\tsnprintf(mbox->mbox_name, sizeof(mbox->mbox_name), \"%s\", name);\n\tmbox->mbox_size = 0;\n\tmbox->max_size = size;\n\n#ifdef HAVE_SYS_EVENTFD_H\n\tif ((mbox->mbox_eventfd = eventfd(0, EFD_CLOEXEC | EFD_NONBLOCK)) == -1) {\n\t\ttpp_log(LOG_CRIT, __func__, \"eventfd() error, errno=%d\", errno);\n\t\ttpp_unlock(&mbox->mbox_mutex);\n\t\treturn -1;\n\t}\n#else\n\t/*\n\t * No eventfd\n\t * Using signals with select(), poll() is not race-safe\n\t * In linux we have ppoll() and pselect() which are race-safe\n\t * but for other Unices (dev/poll, pollset etc) there is no race-safe way\n\t * Use the self-pipe trick!\n\t */\n\tif (tpp_pipe_cr(mbox->mbox_pipe) != 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"pipe() error, errno=%d\", errno);\n\t\ttpp_unlock(&mbox->mbox_mutex);\n\t\treturn -1;\n\t}\n\t/* set the cmd pipe to nonblocking now\n\t * that we are ready to rock and roll\n\t */\n\ttpp_set_non_blocking(mbox->mbox_pipe[0]);\n\ttpp_set_non_blocking(mbox->mbox_pipe[1]);\n\ttpp_set_close_on_exec(mbox->mbox_pipe[0]);\n\ttpp_set_close_on_exec(mbox->mbox_pipe[1]);\n#endif\n\ttpp_unlock(&mbox->mbox_mutex);\n\treturn 0;\n}\n\n/**\n * @brief\n *\tGet the underlying file descriptor\n *\tassociated with the mbox\n *\n * @param[in] - mbox   - The mbox to read from\n *\n * @return  file descriptor\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_mbox_getfd(tpp_mbox_t *mbox)\n{\n#ifdef HAVE_SYS_EVENTFD_H\n\treturn mbox->mbox_eventfd;\n#else\n\treturn mbox->mbox_pipe[0];\n#endif\n}\n\n/**\n * @brief\n *\tDestroy a message box\n *\n * @param[in] mbox - The message box to destroy\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: yes\n *\n */\nvoid\ntpp_mbox_destroy(tpp_mbox_t *mbox)\n{\n#ifdef HAVE_SYS_EVENTFD_H\n\tclose(mbox->mbox_eventfd);\n#else\n\tif (mbox->mbox_pipe[0] > -1)\n\t\ttpp_pipe_close(mbox->mbox_pipe[0]);\n\tif 
(mbox->mbox_pipe[1] > -1)\n\t\ttpp_pipe_close(mbox->mbox_pipe[1]);\n#endif\n}\n\n/**\n * @brief\n *\tAdd mbox to the monitoring infra\n *\tso that messages to the mbox will\n *\twake up handling thread\n *\n * @param[in] - em_ctx - The event monitoring context\n * @param[in] - mbox   - The mbox to read from\n *\n * @return  Error code\n * @retval  -1 - Failure\n * @retval   0 - success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_mbox_monitor(void *em_ctx, tpp_mbox_t *mbox)\n{\n\t/* add eventfd to the poll set */\n\tif (tpp_em_add_fd(em_ctx, tpp_mbox_getfd(mbox), EM_IN) == -1) {\n\t\ttpp_log(LOG_CRIT, __func__, \"em_add_fd() error for mbox=%s, errno=%d\", mbox->mbox_name, errno);\n\t\treturn -1;\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tRead a command from the msg box.\n *\n * @param[in]  - mbox   - The mbox to read from\n * @param[out] - cmdval - The command or operation\n * @param[out] - tfd    - The Virtual file descriptor\n * @param[out] - data   - Data associated, if any (or NULL)\n *\n * @return Error code\n * @retval -1 Failure\n * @retval  0 Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_mbox_read(tpp_mbox_t *mbox, unsigned int *tfd, int *cmdval, void **data)\n{\n#ifdef HAVE_SYS_EVENTFD_H\n\tuint64_t u;\n#else\n\tchar b;\n#endif\n\ttpp_cmd_t *cmd = NULL;\n\n\tif (cmdval)\n\t\t*cmdval = -1;\n\n\terrno = 0;\n\n\ttpp_lock(&mbox->mbox_mutex);\n\n\t/* read the data from the mbox cmd queue head */\n\tcmd = (tpp_cmd_t *) tpp_deque(&mbox->mbox_queue);\n\n\t/* if no more data, clear all notifications */\n\tif (cmd == NULL) {\n\t\tmbox->mbox_size = 0;\n#ifdef HAVE_SYS_EVENTFD_H\n\t\tif (read(mbox->mbox_eventfd, &u, sizeof(uint64_t)) == -1)\n\t\t\t;\n#else\n\t\twhile (tpp_pipe_read(mbox->mbox_pipe[0], &b, sizeof(char)) == sizeof(char))\n\t\t\t;\n#endif\n\t} else {\n\t\t/* reduce from mbox size during read */\n\t\tmbox->mbox_size -= 
cmd->sz;\n\t}\n\n\ttpp_unlock(&mbox->mbox_mutex);\n\n\tif (cmd == NULL) {\n\t\terrno = EWOULDBLOCK;\n\t\treturn -1;\n\t}\n\n\tif (tfd)\n\t\t*tfd = cmd->tfd;\n\n\tif (cmdval)\n\t\t*cmdval = cmd->cmdval;\n\n\t*data = cmd->data;\n\n\tfree(cmd);\n\treturn 0;\n}\n\n/**\n * @brief\n *\tClear pending commands pertaining to a connection\n *\tfrom this mbox\n *\tCalled usually when the connection got closed and\n *\tthe caller wants to clear the pending commands for\n *\tthat connection from this thread mbox\n *\n * @param[in] - mbox   - The mbox to read from\n * @param[in] - n      - The node/position to start searching from\n * @param[in] - tfd    - The Virtual file descriptor\n * @param[out] - cmdval - Return the cmdval\n * @param[out] - data - Return any data associated\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_mbox_clear(tpp_mbox_t *mbox, tpp_que_elem_t **n, unsigned int tfd, short *cmdval, void **data)\n{\n\ttpp_cmd_t *cmd;\n\tint ret = -1;\n\terrno = 0;\n\n\ttpp_lock(&mbox->mbox_mutex);\n\n\twhile ((*n = TPP_QUE_NEXT(&mbox->mbox_queue, *n))) {\n\t\tcmd = TPP_QUE_DATA(*n);\n\t\tif (cmd && cmd->tfd == tfd) {\n\t\t\t*n = tpp_que_del_elem(&mbox->mbox_queue, *n);\n\t\t\tif (cmdval)\n\t\t\t\t*cmdval = cmd->cmdval;\n\t\t\tif (data)\n\t\t\t\t*data = cmd->data;\n\t\t\tfree(cmd);\n\t\t\tret = 0;\n\t\t\tbreak;\n\t\t}\n\t}\n\tmbox->mbox_size = 0;\n\n\ttpp_unlock(&mbox->mbox_mutex);\n\n\treturn ret;\n}\n\n/**\n * @brief\n *\tSend a command to the threads msg queue\n *\n * @param[in] - mbox   - The mbox to post to\n * @param[in] - cmdval - The command or operation\n * @param[in] - tfd    - The Virtual file descriptor\n * @param[in] - data   - Any data pointer associated, if any (or NULL)\n * @param[in] - sz     - size of the data\n *\n * @return Error code\n * @retval -1 Failure\n * @retval  0 Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_mbox_post(tpp_mbox_t *mbox, unsigned int tfd, char cmdval, 
void *data, int sz)\n{\n\ttpp_cmd_t *cmd;\n\tssize_t s;\n#ifdef HAVE_SYS_EVENTFD_H\n\tuint64_t u;\n#else\n\tchar b;\n#endif\n\n\terrno = 0;\n\tcmd = malloc(sizeof(tpp_cmd_t));\n\tif (!cmd) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory in em_mbox_post for mbox=%s\", mbox->mbox_name);\n\t\treturn -1;\n\t}\n\tcmd->cmdval = cmdval;\n\tcmd->tfd = tfd;\n\tcmd->data = data;\n\tcmd->sz = sz;\n\n\t/* add the cmd to the threads queue */\n\ttpp_lock(&mbox->mbox_mutex);\n\n\tif (tpp_enque(&mbox->mbox_queue, cmd) == NULL) {\n\t\ttpp_unlock(&mbox->mbox_mutex);\n\t\tfree(cmd);\n\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory in em_mbox_post for mbox=%s\", mbox->mbox_name);\n\t\treturn -1;\n\t}\n\n\t/* add to the size to global size during enque */\n\tmbox->mbox_size += sz;\n\n\ttpp_unlock(&mbox->mbox_mutex);\n\n\twhile (1) {\n\t\t/* send a notification to the thread */\n#ifdef HAVE_SYS_EVENTFD_H\n\t\tu = 1;\n\t\ts = write(mbox->mbox_eventfd, &u, sizeof(uint64_t));\n\t\tif (s == sizeof(uint64_t))\n\t\t\tbreak;\n#else\n\t\tb = 1;\n\t\ts = tpp_pipe_write(mbox->mbox_pipe[1], &b, sizeof(char));\n\t\tif (s == sizeof(char))\n\t\t\tbreak;\n#endif\n\t\tif (s == -1) {\n\t\t\tif (errno == EAGAIN || errno == EWOULDBLOCK) {\n\t\t\t\t/* pipe is full, which is fine, anyway we behave like edge triggered */\n\t\t\t\tbreak;\n\t\t\t} else if (errno != EINTR) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"mbox post failed for mbox=%s, errno=%d\", mbox->mbox_name, errno);\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t}\n\t}\n\treturn 0;\n}\n"
  },
  {
    "path": "src/lib/Libtpp/tpp_internal.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef __TPP_INTERNAL_H\n#define __TPP_INTERNAL_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include <sys/time.h>\n#include <limits.h>\n#include <time.h>\n#include <pthread.h>\n#include <sys/socket.h>\n#include <netinet/in.h>\n#include \"log.h\"\n#include \"list_link.h\"\n#include \"avltree.h\"\n\n#include \"tpp.h\"\n\n#ifndef WIN32\n\n#define tpp_pipe_cr(a) pipe(a)\n#define tpp_pipe_read(a, b, c) read(a, b, c)\n#define tpp_pipe_write(a, b, c) write(a, b, c)\n#define tpp_pipe_close(a) close(a)\n\n#define tpp_sock_socket(a, b, c) socket(a, b, c)\n#define tpp_sock_bind(a, b, c) bind(a, b, c)\n#define tpp_sock_listen(a, b) listen(a, b)\n#define tpp_sock_accept(a, b, c) accept(a, b, c)\n#define tpp_sock_connect(a, b, c) connect(a, b, c)\n#define tpp_sock_recv(a, b, c, d) recv(a, b, c, d)\n#define tpp_sock_send(a, b, c, d) send(a, b, c, d)\n#define tpp_sock_select(a, b, c, d, e) select(a, b, c, d, e)\n#define tpp_sock_close(a) close(a)\n#define tpp_sock_getsockopt(a, b, c, d, e) getsockopt(a, b, c, d, e)\n#define tpp_sock_setsockopt(a, b, c, d, e) setsockopt(a, b, c, d, e)\n\n#else\n#ifndef EINPROGRESS\n#define EINPROGRESS EAGAIN\n#endif\n\nint tpp_pipe_cr(int fds[2]);\nint tpp_pipe_read(int, char *, int);\nint tpp_pipe_write(int, char *, int);\nint tpp_pipe_close(int);\n\nint tpp_sock_socket(int, int, int);\nint tpp_sock_listen(int, 
int);\nint tpp_sock_accept(int, struct sockaddr *, int *);\nint tpp_sock_bind(int, const struct sockaddr *, int);\nint tpp_sock_connect(int, const struct sockaddr *, int);\nint tpp_sock_recv(int, char *, int, int);\nint tpp_sock_send(int, const char *, int, int);\nint tpp_sock_select(int, fd_set *, fd_set *, fd_set *, const struct timeval *);\nint tpp_sock_close(int);\nint tpp_sock_getsockopt(int, int, int, int *, int *);\nint tpp_sock_setsockopt(int, int, int, const int *, int);\n\n#endif\n\nint tpp_sock_layer_init();\nint tpp_get_nfiles();\nint set_pipe_disposition();\nint tpp_sock_attempt_connection(int, char *, int);\nvoid tpp_invalidate_thrd_handle(pthread_t *);\nint tpp_is_valid_thrd(pthread_t);\n\n#define MAX_CON TPP_MAXOPENFD /* default max connections */\n#define UNINITIALIZED_INT -1\n#define TPP_GEN_BUF_SZ 1024\n#define TPP_MAXADDRLEN (INET6_ADDRSTRLEN + 10)\n\n/* some built in timing control defines to retry connections to routers */\n#define TPP_CONNNECT_RETRY_MIN 2\n#define TPP_CONNECT_RETRY_INC 2\n#define TPP_CONNECT_RETRY_MAX 10\n#define TPP_THROTTLE_RETRY 5 /* retry time after throttling a packet */\n\n/* defines for the TPP address families\n * we don't use the AF_INET etc, since their values could (though mostly does not)\n * differ between OS flavors, so choosing something that fits in a char\n * and also is same for TPP libraries on all OS flavors\n */\n#define TPP_ADDR_FAMILY_IPV4 0\n#define TPP_ADDR_FAMILY_IPV6 1\n#define TPP_ADDR_FAMILY_UNSPEC 2\n\n/*\n * Structure to hold an Address (ipv4 or ipv6)\n */\ntypedef struct {\n\tint ip[4];   /* can hold ipv6 as well */\n\tshort port;  /* hold short port, keep as int for alignment */\n\tchar family; /* Ipv4 or IPV6 etc */\n} tpp_addr_t;\n\ntypedef struct {\n\tpbs_list_link chunk_link;\n\tchar *data; /* pointer to the data buffer */\n\tsize_t len; /* length of the data buffer */\n\tchar *pos;  /* current position - till which data is consumed */\n} tpp_chunk_t;\n\n/*\n * Packet structure used at 
various places to hold a data and the\n * current position to which data has been consumed or processed\n */\ntypedef struct {\n\tpbs_list_head chunks;\n\ttpp_chunk_t *curr_chunk;\n\tsize_t totlen;\n\tint ref_count; /* number of accessors */\n} tpp_packet_t;\n\ntypedef struct {\n\tunsigned int ntotlen;\n\tchar type;\n} tpp_encrypt_hdr_t;\n\n/*\n * The authenticate packet header structure\n */\ntypedef struct {\n\tunsigned int ntotlen;\n\tunsigned char type;\n\tunsigned int for_encrypt;\n\tchar auth_method[MAXAUTHNAME + 1];\n\tchar encrypt_method[MAXAUTHNAME + 1];\n} tpp_auth_pkt_hdr_t;\n/* the authentication data follows this packet */\n\n/*\n * The Join packet header structure\n */\ntypedef struct {\n\tunsigned int ntotlen;\n\tunsigned char type;\t /* type packet, JOIN, LEAVE etc */\n\tunsigned char hop;\t /* hop count */\n\tunsigned char node_type; /* node type - leaf or router */\n\tunsigned char index;\t /* in case of leaves, primary connection or backup */\n\tunsigned char num_addrs; /* number of addresses of source joining, max 128 */\n} tpp_join_pkt_hdr_t;\n/* a bunch of tpp_addr structs follow this packet */\n\n/*\n * The Leave packet header structure\n */\ntypedef struct {\n\tunsigned int ntotlen;\n\tunsigned char type; /* type packet, JOIN, LEAVE etc */\n\tunsigned char hop;\n\tunsigned char ecode;\n\tunsigned char num_addrs; /* number of addresses of source leaving, max 128 */\n} tpp_leave_pkt_hdr_t;\n/* a bunch of tpp_addr structs follow this packet */\n\n/*\n * The control packet header structure, MSG, NOROUTE etc\n */\ntypedef struct {\n\tunsigned int ntotlen;\n\tunsigned char type;\n\tunsigned char code;\t /* NOROUTE, UPDATE, ERROR */\n\tunsigned char error_num; /* error_num in case of NOROUTE, ERRORs */\n\tunsigned int src_sd;\t /* source sd in case of NO ROUTE */\n\ttpp_addr_t src_addr;\t /* src host address */\n\ttpp_addr_t dest_addr;\t /* destination host dest host address */\n} tpp_ctl_pkt_hdr_t;\n\n/*\n * The data packet header structure\n 
*/\ntypedef struct {\n\tunsigned int ntotlen;\n\tunsigned char type; /* type of the packet - TPP_DATA, JOIN etc */\n\n\tunsigned int src_magic; /* magic id of source stream */\n\n\tunsigned int src_sd;  /* source stream descriptor */\n\tunsigned int dest_sd; /* destination stream descriptor */\n\n\tunsigned int totlen; /* total pkt len */\n\n\ttpp_addr_t src_addr;  /* src host address */\n\ttpp_addr_t dest_addr; /* dest host address */\n} tpp_data_pkt_hdr_t;\n\n/*\n * The multicast packet header structure\n */\ntypedef struct {\n\tunsigned int ntotlen;\n\tunsigned char type;\t      /* type of packet - TPP_MCAST_DATA */\n\tunsigned char hop;\t      /* hop count */\n\tunsigned int num_streams;     /* number of member streams */\n\tunsigned int info_len;\t      /* total length of info */\n\tunsigned int info_cmprsd_len; /* compressed length of info */\n\tunsigned int totlen;\t      /* total pkt len (in case of fragmented pkts) */\n\ttpp_addr_t src_addr;\t      /* source host address */\n} tpp_mcast_pkt_hdr_t;\n\n/*\n * Structure describing information about each member stream.\n * The overall packet includes a mcast header and multiple member stream\n * info (each of one member stream)\n */\ntypedef struct {\n\tunsigned int src_sd;\t/* source descriptor of member stream */\n\tunsigned int src_magic; /* magic id of source stream */\n\tunsigned int dest_sd;\t/* destination descriptor of member stream */\n\ttpp_addr_t dest_addr;\t/* dest host address of member */\n} tpp_mcast_pkt_info_t;\n\n#define SLOT_INC 1000\n\n#define TPP_SLOT_FREE 0\n#define TPP_SLOT_BUSY 1\n#define TPP_SLOT_DELETED 2\n\n#define TPP_MAX_MBOX_SIZE 640000\n\n/* tpp internal message header types */\nenum TPP_MSG_TYPES {\n\tTPP_CTL_JOIN = 1,\n\tTPP_CTL_LEAVE,\n\tTPP_DATA,\n\tTPP_CTL_MSG,\n\tTPP_CLOSE_STRM,\n\tTPP_MCAST_DATA,\n\tTPP_AUTH_CTX,\n\tTPP_ENCRYPTED_DATA,\n\tTPP_LAST_MSG\n};\n\n#define TPP_MSG_NOROUTE 1\n#define TPP_MSG_UPDATE 2\n#define TPP_MSG_AUTHERR 3\n\n#define TPP_STRM_NORMAL 1\n#define 
TPP_STRM_MCAST 2\n\n#define TPP_MAX_ACK_DELAY 1\n#define TPP_MAX_RETRY_DELAY 30\n#define TPP_CLOSE_WAIT 60\n#define TPP_STRM_TIMEOUT 600\n#define TPP_MIN_WAIT 2\n#define TPP_SEND_SIZE 8192\n#define TPP_COMPR_SIZE 8192\n\n/* tpp cmds used internally by the layer to notify messages between threads */\n#define TPP_CMD_SEND 1\n#define TPP_CMD_CLOSE 2\n#define TPP_CMD_ASSIGN 3\n#define TPP_CMD_EXIT 4\n#define TPP_CMD_NET_CLOSE 5\n#define TPP_CMD_PEER_CLOSE 6\n#define TPP_CMD_NET_DATA 7\n#define TPP_CMD_DELAYED_CONNECT 8\n#define TPP_CMD_NET_RESTORE 9\n#define TPP_CMD_NET_DOWN 10\n#define TPP_CMD_WAKEUP 11\n#define TPP_CMD_READ 12\n#define TPP_CMD_CONNECT 13\n\n#define TPP_DEF_ROUTER_PORT 17001\n#define TPP_SCRATCHSIZE 8192\n\n#define TPP_ROUTER_STATE_DISCONNECTED 0 /* Leaf not connected to router */\n#define TPP_ROUTER_STATE_CONNECTING 1\t/* Leaf is connecting to router */\n#define TPP_ROUTER_STATE_CONNECTED 2\t/* Leaf connected to router */\n\n#define TPP_MBOX_NAME_SZ 10 /* max 10 mbox_name size */\n\n/*\n * This structure contains the information about what kind of end-point\n * is connected over each connection to this router. When some end-point\n * connects to this router (or this router connects to others), there is a\n * TCP connection created (we call it a physical connection). 
The end-point\n * then send a \"join\" packet identifying who it is, and what type it is.\n * The router keeps track of what \"kind\" of end-point is connected to each of\n * such physical connections.\n */\ntypedef struct {\n\tunsigned char type; /* leaf or router */\n\tvoid *ptr;\t    /* pointer to router or leaf structure */\n} tpp_context_t;\n\n/*\n * Structure to hold information about a router\n */\ntypedef struct {\n\tchar *router_name;\t/* router host id */\n\ttpp_addr_t router_addr; /* primary ip address of router */\n\tint conn_fd;\t\t/* fd - in case there is direct connection to router */\n\ttime_t conn_time;\t/* time at which connection completed */\n\tint initiator;\t\t/* we initialized the connection to the router */\n\tint state;\t\t/* 1 - connected or 0 - disconnected */\n\tint delay;\t\t/* time delay in re-connecting to the router */\n\tint index;\t\t/* the preference of data going over this connection */\n\tvoid *my_leaves_idx;\t/* leaves connected to this router, used by comm only */\n} tpp_router_t;\n\n/*\n * Structure to hold information of a leaf node\n */\ntypedef struct {\n\tint conn_fd;\t\t /* real connection id. 
-1 if not directly connected */\n\tunsigned char leaf_type; /* need notifications or not */\n\n\tint tot_routers; /* total number of routers which have this leaf */\n\tint num_routers;\n\ttpp_router_t **r; /* list of routers leaf is connected to */\n\n\tint num_addrs;\n\ttpp_addr_t *leaf_addrs; /* list of leaf's addresses */\n} tpp_leaf_t;\n\n/* routines and headers to manage FIFO queues */\nstruct tpp_que_elem {\n\tvoid *queue_data;\n\tstruct tpp_que_elem *prev;\n\tstruct tpp_que_elem *next;\n};\ntypedef struct tpp_que_elem tpp_que_elem_t;\n\n/* queue consists of a pointer to the head and tail of the queue */\ntypedef struct {\n\ttpp_que_elem_t *head;\n\ttpp_que_elem_t *tail;\n} tpp_que_t;\n\n/*\n * The cmd structure is used to package the\n * command messages passed between threads\n */\ntypedef struct {\n\tunsigned int tfd;\n\tchar cmdval;\n\tvoid *data;\n\tint sz;\n} tpp_cmd_t;\n\n/*\n * mbox is the \"message box\" for each thread\n * When a thread wants to send a msg/cmd to another\n * thread, it posts a message to that thread's mbox.\n * That wakes up the thread from a poll/select\n * and allows it to act on the message\n */\ntypedef struct {\n\tchar mbox_name[TPP_MBOX_NAME_SZ]; /* small price for debuggability */\n\tpthread_mutex_t mbox_mutex;\n\ttpp_que_t mbox_queue;\n\tint max_size;\n\tint mbox_size;\n#ifdef HAVE_SYS_EVENTFD_H\n\tint mbox_eventfd;\n#else\n\tint mbox_pipe[2]; /* may be unused */\n#endif\n} tpp_mbox_t;\n\n/* quickie macros to work with queues */\n#define TPP_QUE_CLEAR(q)  \\\n\t(q)->head = NULL; \\\n\t(q)->tail = NULL\n#define TPP_QUE_HEAD(q) (q)->head\n#define TPP_QUE_TAIL(q) (q)->tail\n#define TPP_QUE_NEXT(q, n) (((n) == NULL) ? (q)->head : (n)->next)\n#define TPP_QUE_DATA(n) (((n) == NULL) ? 
NULL : (n)->queue_data)\n\ntypedef struct {\n\tvoid *td;\n\tchar tppstaticbuf[TPP_GEN_BUF_SZ];\n} tpp_tls_t;\n\ntypedef struct {\n\tvoid *authctx;\n\tauth_def_t *authdef;\n\tvoid *encryptctx;\n\tauth_def_t *encryptdef;\n\tpbs_auth_config_t *config;\n\tint conn_initiator;\n\tint conn_type;\n} conn_auth_t;\n\nextern int tpp_terminated_in_child; /* whether a forked child called tpp_terminate or not? initialized to 0 */\n\nconn_auth_t *tpp_make_authdata(struct tpp_config *, int, char *, char *);\nint tpp_handle_auth_handshake(int, int, conn_auth_t *, int, void *, size_t);\ntpp_que_elem_t *tpp_enque(tpp_que_t *, void *);\nvoid *tpp_deque(tpp_que_t *);\ntpp_que_elem_t *tpp_que_del_elem(tpp_que_t *, tpp_que_elem_t *);\ntpp_que_elem_t *tpp_que_ins_elem(tpp_que_t *, tpp_que_elem_t *, void *, int);\n/* End - routines and headers to manage FIFO queues */\n\nint tpp_send(int, void *, int);\nint tpp_recv(int, void *, int);\nint tpp_ready_fds(int *, int);\nvoid *tpp_get_user_data(int);\nint tpp_set_user_data(int, void *);\nchar *convert_to_ip_port(char *, int);\n\nint tpp_init_tls_key(void);\ntpp_tls_t *tpp_get_tls(void);\nchar *mk_hostname(char *, int);\nstruct sockaddr_in *tpp_localaddr(int);\ntpp_packet_t *tpp_bld_pkt(tpp_packet_t *, void *, int, int, void **);\n\nvoid tpp_router_terminate(void);\nvoid tpp_free_tls(void);\n\nint tpp_transport_connect(char *, int, void *, int *);\nint tpp_transport_vsend(int, tpp_packet_t *pkt);\nint tpp_transport_isresvport(int);\nint tpp_transport_init(struct tpp_config *);\nvoid tpp_transport_set_handlers(\n\tint (*pkt_presend_handler)(int, tpp_packet_t *, void *, void *),\n\tint (*pkt_handler)(int, void *, int, void *, void *),\n\tint (*close_handler)(int, int, void *, void *),\n\tint (*post_connect_handler)(int, void *, void *, void *),\n\tint (*timer_handler)(time_t));\nvoid tpp_set_logmask(long);\nint tpp_transport_shutdown(void);\nint tpp_transport_terminate(void);\nvoid tpp_transport_set_conn_ctx(int, void *);\nvoid 
*tpp_transport_get_conn_ctx(int);\nvoid *tpp_transport_get_thrd_context(int);\nint tpp_transport_wakeup_thrd(int);\nint tpp_transport_connect_spl(char *, int, void *, int *, void *);\nint tpp_transport_close(int);\n\nint tpp_init_lock(pthread_mutex_t *);\nint tpp_lock(pthread_mutex_t *);\nint tpp_unlock(pthread_mutex_t *);\nint tpp_destroy_lock(pthread_mutex_t *);\n\n/* rwlock is not supported by posix, so dont\n * refer to this in the header file, instead\n * use voids. The respective C sources which\n * implement this will defined _XOPEN_SOURCE\n * if necessary\n */\nint tpp_init_rwlock(void *);\nint tpp_read_lock(void *);\nint tpp_write_lock(void *);\nint tpp_unlock_rwlock(void *);\nint tpp_destroy_rwlock(void *);\n\nint tpp_set_non_blocking(int);\nint tpp_set_close_on_exec(int);\nvoid tpp_free_chunk(tpp_chunk_t *);\nvoid tpp_free_pkt(tpp_packet_t *);\nint tpp_send_ctl_msg(int, int, tpp_addr_t *, tpp_addr_t *, unsigned int, char, char *);\nint tpp_cr_thrd(void *(*start_routine)(void *), pthread_t *, void *);\nint tpp_set_keep_alive(int, struct tpp_config *);\n\nvoid *tpp_deflate(void *, unsigned int, unsigned int *);\nvoid *tpp_inflate(void *, unsigned int, unsigned int);\nvoid *tpp_multi_deflate_init(int);\nint tpp_multi_deflate_do(void *, int, void *, unsigned int);\nvoid *tpp_multi_deflate_done(void *, unsigned int *);\n\nint tpp_add_fd(int, int, int);\nint tpp_del_fd(int, int);\nint tpp_mod_fd(int, int, int);\n\n#ifndef WIN32\n/* a new mutex introduced to prevent inheriting lock from tpp thread\n * from getaddrinfo(nslookup) during fork for periodic hook\n * set handlers using pthread_atfork.\n */\nextern pthread_mutex_t tpp_nslookup_mutex;\nvoid tpp_nslookup_atfork_prepare();\nvoid tpp_nslookup_atfork_parent();\nvoid tpp_nslookup_atfork_child();\n#endif\n\nint tpp_validate_hdr(int, char *);\ntpp_addr_t *tpp_get_addresses(char *, int *);\ntpp_addr_t *tpp_get_local_host(int);\ntpp_addr_t *tpp_get_connected_host(int);\nint tpp_sock_resolve_ip(tpp_addr_t *, 
char *, int);\ntpp_addr_t *tpp_sock_resolve_host(char *, int *);\n\nconst char *tpp_transport_get_conn_hostname(int);\nvoid tpp_transport_set_conn_extra(int, void *);\nextern int tpp_get_thrd_index();\nchar *tpp_netaddr(tpp_addr_t *);\nchar *tpp_netaddr_sa(struct sockaddr *);\nint tpp_encrypt_pkt(conn_auth_t *authdata, tpp_packet_t *pkt);\nextern void tpp_auth_logger(int, int, int, const char *, const char *);\n\nvoid tpp_log(int level, const char *routine, const char *fmt, ...);\n\nvoid free_router(tpp_router_t *);\nvoid free_leaf(tpp_leaf_t *);\n\n#ifdef WIN32\nint tr_2_errno(int);\n#endif\n\n/**********************************************************************/\n/* em related definitions (internal version) */\n/**********************************************************************/\n#if defined(PBS_USE_POLL)\n\ntypedef struct {\n\tstruct pollfd *fds;\n\tem_event_t *events;\n\tint curr_nfds;\n\tint max_nfds;\n} poll_context_t;\n\n#elif defined(PBS_USE_EPOLL)\n\ntypedef struct {\n\tint epoll_fd;\n\tint max_nfds;\n\tpid_t init_pid;\n\tem_event_t *events;\n} epoll_context_t;\n\n#elif defined(PBS_USE_POLLSET)\n\ntypedef struct {\n\tpollset_t ps;\n\tint max_nfds;\n\tem_event_t *events;\n} pollset_context_t;\n\n#elif defined(PBS_USE_SELECT)\n\ntypedef struct {\n\tfd_set master_read_fds;\n\tfd_set master_write_fds;\n\tfd_set master_err_fds;\n\tfd_set read_fds;\n\tfd_set write_fds;\n\tfd_set err_fds;\n\tint maxfd;\n\tint max_nfds;\n\tem_event_t *events;\n} sel_context_t;\n\n#elif defined(PBS_USE_DEVPOLL)\n\ntypedef struct {\n\tint devpoll_fd;\n\tem_event_t *events;\n\tint max_nfds;\n} devpoll_context_t;\n\n#endif\n\n/* platform independent functions that manipulate a mbox of a thread\n * Internally these functions may use a eventfd, signalfd, signals,\n * plain pipes etc.\n */\nint tpp_mbox_init(tpp_mbox_t *, char *, int);\nvoid tpp_mbox_destroy(tpp_mbox_t *);\nint tpp_mbox_monitor(void *, tpp_mbox_t *);\nint tpp_mbox_read(tpp_mbox_t *, unsigned int *, int *, void 
**);\nint tpp_mbox_clear(tpp_mbox_t *, tpp_que_elem_t **, unsigned int, short *, void **);\nint tpp_mbox_post(tpp_mbox_t *, unsigned int, char, void *, int);\nint tpp_mbox_getfd(tpp_mbox_t *);\n\nextern int tpp_going_down;\n/**********************************************************************/\n\n/* \n * use TPPDEBUG instead of DEBUG, since DEBUG makes daemons not fork\n * and that does not work well with init scripts. Sometimes we need to\n * debug TPP in a PTL run where forked daemons are required\n * Hence use a separate macro\n */\n#ifdef TPPDEBUG\n\n#define TPP_DBPRT(...) tpp_log(LOG_CRIT, __func__, __VA_ARGS__)\n\nvoid print_packet_hdr(const char *, void *, int);\n#define PRTPKTHDR(id, data, len) print_packet_hdr(id, data, len);\n\n#else\n\n#define TPP_DBPRT(...)\n#define PRTPKTHDR(id, data, len)\n\n#endif\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _TPP_INTERNAL_H */\n"
  },
  {
    "path": "src/lib/Libtpp/tpp_platform.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\ttpp_platform.c\n *\n * @brief\tMiscellaneous socket and pipe routines for Windows and Unix\n *\n *\n */\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <unistd.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <netdb.h>\n#include <pthread.h>\n#include <sys/types.h>\n#include <sys/socket.h>\n#include <netinet/in.h>\n#include <netinet/tcp.h>\n#include <sys/resource.h>\n#include <signal.h>\n#include \"tpp_internal.h\"\n\n#ifdef WIN32\n\n/**\n * @brief\n *\tEmulate pipe by using sockets on windows\n *\n * @param[in] - fds - returns the opened pipe fds\n *\n * @return Error code\n * @retval -1 Failure\n * @retval  0 Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_pipe_cr(int fds[2])\n{\n\tSOCKET listenfd;\n\tstruct sockaddr_in serv_addr;\n\tpbs_socklen_t len = sizeof(serv_addr);\n\tchar *op;\n\n\terrno = 0;\n\tfds[0] = fds[1] = INVALID_SOCKET;\n\n\tif ((listenfd = socket(AF_INET, SOCK_STREAM, 0)) == INVALID_SOCKET) {\n\t\top = \"socket\";\n\t\tgoto tpp_pipe_err;\n\t}\n\n\tmemset(&serv_addr, 0, len);\n\tserv_addr.sin_family = AF_INET;\n\tserv_addr.sin_port = htons(0);\n\tserv_addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);\n\tif (bind(listenfd, (SOCKADDR *) &serv_addr, len) == SOCKET_ERROR) {\n\t\top = \"bind\";\n\t\tgoto 
tpp_pipe_err;\n\t}\n\n\tif (listen(listenfd, 1) == SOCKET_ERROR) {\n\t\top = \"listen\";\n\t\tgoto tpp_pipe_err;\n\t}\n\n\tif (getsockname(listenfd, (SOCKADDR *) &serv_addr, &len) == SOCKET_ERROR) {\n\t\top = \"getsockname\";\n\t\tgoto tpp_pipe_err;\n\t}\n\n\tif ((fds[1] = socket(PF_INET, SOCK_STREAM, 0)) == INVALID_SOCKET) {\n\t\top = \"socket\";\n\t\tgoto tpp_pipe_err;\n\t}\n\n\tif (tpp_sock_connect(fds[1], (SOCKADDR *) &serv_addr, len) == SOCKET_ERROR) {\n\t\top = \"connect\";\n\t\tgoto tpp_pipe_err;\n\t}\n\n\tif ((fds[0] = accept(listenfd, (SOCKADDR *) &serv_addr, &len)) == INVALID_SOCKET) {\n\t\top = \"accept\";\n\t\tgoto tpp_pipe_err;\n\t}\n\n\tclosesocket(listenfd);\n\treturn 0;\n\ntpp_pipe_err:\n\tclosesocket(listenfd);\n\tif (fds[0] != INVALID_SOCKET)\n\t\tclosesocket(fds[0]);\n\tif (fds[1] != INVALID_SOCKET)\n\t\tclosesocket(fds[1]);\n\n\terrno = tr_2_errno(WSAGetLastError());\n\ttpp_log(LOG_CRIT, __func__, \"%s failed, winsock errno= %d\", op, WSAGetLastError());\n\treturn -1;\n}\n\n/**\n * @brief\n *\tEmulate pipe read by using sockets on windows\n *\n * @param[in] - fd  - pipe file descriptor\n * @param[in] - buf - data buffer to read from pipe\n * @param[in] - len - length of the data buffer\n *\n * @return  Amount of data read\n * @retval  0 - Failure - close pipe\n * @retval  >0 - Amount of data read\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_pipe_read(int fd, char *buf, int len)\n{\n\tint ret = recv(fd, buf, len, 0);\n\tif (ret == SOCKET_ERROR) {\n\t\terrno = tr_2_errno(WSAGetLastError());\n\t\treturn -1;\n\t}\n\treturn ret;\n}\n\n/**\n * @brief\n *\tEmulate pipe write by using sockets on windows\n *\n * @param[in] - fd  - pipe file descriptor\n * @param[in] - buf - data buffer to write to pipe\n * @param[in] - len - length of the data buffer\n *\n * @return  Amount of data written\n * @retval  0 - Failure - close pipe\n * @retval  >0 - Amount of data written\n *\n * @par Side Effects:\n *\tNone\n *\n * 
@par MT-safe: Yes\n *\n */\nint\ntpp_pipe_write(int fd, char *buf, int len)\n{\n\tint ret = send(fd, buf, len, 0);\n\tif (ret == SOCKET_ERROR) {\n\t\terrno = tr_2_errno(WSAGetLastError());\n\t\treturn -1;\n\t}\n\treturn ret;\n}\n\n/**\n * @brief\n *\tEmulate pipe close by using sockets on windows\n *\n * @param[in] - fd  - pipe file descriptor\n *\n * @return  return value of windows closesocket\n * @retval  -1 - Failure\n * @retval   0 - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_pipe_close(int fd)\n{\n\treturn closesocket(fd);\n}\n\n/*\n * wrapper to call windows socket() and map windows\n * error code to errno and massage the return value\n * so that callers do not need conditionally compiled\n * code\n */\nint\ntpp_sock_socket(int af, int type, int protocol)\n{\n\tint fd;\n\tDWORD dwFlags = 0;\n\n\t/*\n\t * Make TPP sockets un-inheritable (windows).\n\t *\n\t * Windows has a quirky implementation of socket inheritance due to\n\t * the support for Layered Service Providers. If Firewall/antivirus\n\t * are installed, the socket handle could get inherited despite\n\t * the fact that we are setting this as un-inheritable via a call\n\t * after socket creation time.\n\t *\n\t * Use WSA_FLAG_NO_HANDLE_INHERIT available in newer windows\n\t * versions (7SP1 onwards) in the call to WSASocket().\n\t *\n\t * Also use the SetHandleInformation for older windows. 
(This may\n\t * not work with LSPs installed.)\n\t *\n\t */\n#ifdef WSA_FLAG_NO_HANDLE_INHERIT\n\tdwFlags = WSA_FLAG_NO_HANDLE_INHERIT;\n#endif\n\tif ((fd = WSASocket(af, type, protocol, NULL, 0, dwFlags)) == INVALID_SOCKET) {\n\t\terrno = tr_2_errno(WSAGetLastError());\n\t\treturn -1;\n\t}\n\n\tif (SetHandleInformation((HANDLE) fd, HANDLE_FLAG_INHERIT, 0) == 0) {\n\t\terrno = tr_2_errno(WSAGetLastError());\n\t\tclosesocket(fd);\n\t\treturn -1;\n\t}\n\n\treturn fd;\n}\n\n/*\n * wrapper to call windows listen() and map windows\n * error code to errno and massage the return value\n * so that callers do not need conditionally compiled\n * code\n */\nint\ntpp_sock_listen(int s, int backlog)\n{\n\tif (listen(s, backlog) == SOCKET_ERROR) {\n\t\terrno = tr_2_errno(WSAGetLastError());\n\t\treturn -1;\n\t}\n\treturn 0;\n}\n\n/*\n * wrapper to call windows accept() and map windows\n * error code to errno and massage the return value\n * so that callers do not need conditionally compiled\n * code\n */\nint\ntpp_sock_accept(int s, struct sockaddr *addr, int *addrlen)\n{\n\tint fd;\n\tif ((fd = accept(s, addr, addrlen)) == INVALID_SOCKET) {\n\t\terrno = tr_2_errno(WSAGetLastError());\n\t\treturn -1;\n\t}\n\treturn fd;\n}\n\n/*\n * wrapper to call windows bind() and map windows\n * error code to errno and massage the return value\n * so that callers do not need conditionally compiled\n * code\n */\nint\ntpp_sock_bind(int s, const struct sockaddr *name, int namelen)\n{\n\tif (bind(s, name, namelen) == SOCKET_ERROR) {\n\t\terrno = tr_2_errno(WSAGetLastError());\n\t\treturn -1;\n\t}\n\treturn 0;\n}\n\n/*\n * wrapper to call windows connect() and map windows\n * error code to errno and massage the return value\n * so that callers do not need conditionally compiled\n * code\n */\nint\ntpp_sock_connect(int s, const struct sockaddr *name, int namelen)\n{\n\tif (connect(s, name, namelen) == SOCKET_ERROR) {\n\t\terrno = tr_2_errno(WSAGetLastError());\n\t\treturn -1;\n\t}\n\treturn 
0;\n}\n\n/*\n * wrapper to call windows recv() and map windows\n * error code to errno and massage the return value\n * so that callers do not need conditionally compiled\n * code\n */\nint\ntpp_sock_recv(int s, char *buf, int len, int flags)\n{\n\tint ret = recv(s, buf, len, flags);\n\tif (ret == SOCKET_ERROR) {\n\t\terrno = tr_2_errno(WSAGetLastError());\n\t\treturn -1;\n\t}\n\treturn ret;\n}\n\n/*\n * wrapper to call windows send() and map windows\n * error code to errno and massage the return value\n * so that callers do not need conditionally compiled\n * code\n */\nint\ntpp_sock_send(int s, const char *buf, int len, int flags)\n{\n\tint ret = send(s, buf, len, flags);\n\tif (ret == SOCKET_ERROR) {\n\t\terrno = tr_2_errno(WSAGetLastError());\n\t\treturn -1;\n\t}\n\treturn ret;\n}\n\n/*\n * wrapper to call windows select() and map windows\n * error code to errno and massage the return value\n * so that callers do not need conditionally compiled\n * code\n */\nint\ntpp_sock_select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, const struct timeval *timeout)\n{\n\tint nready = select(nfds, readfds, writefds, exceptfds, timeout);\n\tif (nready == SOCKET_ERROR) {\n\t\terrno = tr_2_errno(WSAGetLastError());\n\t\treturn -1;\n\t}\n\treturn nready;\n}\n\n/*\n * wrapper to call windows closesocket() and map windows\n * error code to errno and massage the return value\n * so that callers do not need conditionally compiled\n * code\n */\nint\ntpp_sock_close(int s)\n{\n\tif (closesocket(s) == SOCKET_ERROR) {\n\t\terrno = tr_2_errno(WSAGetLastError());\n\t\treturn -1;\n\t}\n\treturn 0;\n}\n\n/*\n * wrapper to call windows getsockopt() and map windows\n * error code to errno and massage the return value\n * so that callers do not need conditionally compiled\n * code\n */\nint\ntpp_sock_getsockopt(int s, int level, int optname, int *optval, int *optlen)\n{\n\tif (getsockopt(s, level, optname, (char *) optval, optlen) == SOCKET_ERROR) {\n\t\terrno = 
tr_2_errno(WSAGetLastError());\n\t\treturn -1;\n\t}\n\treturn 0;\n}\n\n/*\n * wrapper to call windows setsockopt() and map windows\n * error code to errno and massage the return value\n * so that callers do not need conditionally compiled\n * code\n */\nint\ntpp_sock_setsockopt(int s, int level, int optname, const int *optval, int optlen)\n{\n\tif (setsockopt(s, level, optname, (const char *) optval, optlen) == SOCKET_ERROR) {\n\t\terrno = tr_2_errno(WSAGetLastError());\n\t\treturn -1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tMap windows error number to errno\n *\n * @param[in] - win_errno - Windows error\n *\n * @return  errno mapped value\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntr_2_errno(int win_errno)\n{\n\tint ret = 0;\n\t/* convert only a few to unix errors,\n\t * for others, we do not care.\n\t */\n\tswitch (win_errno) {\n\t\tcase WSAEINVAL:\n\t\t\tret = EINVAL;\n\t\t\tbreak;\n\t\tcase WSAEINPROGRESS:\n\t\t\tret = EINPROGRESS;\n\t\t\tbreak;\n\t\tcase WSAEINTR:\n\t\t\tret = EINTR;\n\t\t\tbreak;\n\t\tcase WSAECONNREFUSED:\n\t\t\tret = ECONNREFUSED;\n\t\t\tbreak;\n\t\tcase WSAEWOULDBLOCK:\n\t\t\tret = EWOULDBLOCK;\n\t\t\tbreak;\n\t\tcase WSAEADDRINUSE:\n\t\t\tret = EADDRINUSE;\n\t\t\tbreak;\n\t\tcase WSAEADDRNOTAVAIL:\n\t\t\tret = EADDRNOTAVAIL;\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tret = EINVAL;\n\t}\n\treturn ret;\n}\n\n/**\n * @brief\n *\tInitialize winsock\n *\n * @return  error code\n * @retval  -1 - Failure\n * @retval   0 - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_sock_layer_init()\n{\n\tWSADATA data;\n\tif (WSAStartup(MAKEWORD(2, 2), &data)) {\n\t\ttpp_log(LOG_CRIT, NULL, \"winsock_init failed! 
error=%d\", WSAGetLastError());\n\t\treturn -1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tRetrieve the value of nfiles from OS settings\n *\n * @return  nfiles value, for windows return\n *          constant MAX_CON\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_get_nfiles()\n{\n\treturn MAX_CON;\n}\n\n/**\n * @brief\n *\tSetup SIGPIPE disposition properly.\n *  NOP on windows\n *\n * @return  error code\n * @retval  -1 - Failure\n * @retval   0 - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\nset_pipe_disposition()\n{\n\treturn 0;\n}\n\n#else\n\n/**\n * @brief\n *\tInitialize socket layer\n *  NOP on non-Windows\n *\n * @return  error code\n * @retval  -1 - Failure\n * @retval   0 - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_sock_layer_init()\n{\n\treturn 0;\n}\n\n/**\n * @brief\n *\tRetrieve the value of nfiles from OS settings\n *\n * @return  nfiles value\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_get_nfiles()\n{\n\tstruct rlimit rlp;\n\n\tif (getrlimit(RLIMIT_NOFILE, &rlp) == -1) {\n\t\ttpp_log(LOG_CRIT, __func__, \"getrlimit failed\");\n\t\treturn -1;\n\t}\n\n\ttpp_log(LOG_INFO, NULL, \"Max files allowed = %ld\", (long) rlp.rlim_cur);\n\n\treturn (rlp.rlim_cur);\n}\n\n/**\n * @brief\n *\tSetup SIGPIPE disposition properly\n *\n * @return  error code\n * @retval  -1 - Failure\n * @retval   0 - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\nset_pipe_disposition()\n{\n\tstruct sigaction act;\n\tstruct sigaction oact;\n\n\t/*\n\t * Check if SIGPIPE's disposition is set to default, if so, set to ignore;\n\t * else we assume the application can handle SIGPIPE without quitting.\n\t *\n\t * MSG_NOSIGNAL is linux specific, and SO_NOSIGPIPE is not portable either.\n\t * As of now we do not need any more elegant solution than just ignoring\n\t * the sigpipe, if not already 
handled by the application.\n\t */\n\tif (sigaction(SIGPIPE, NULL, &oact) == 0) {\n\t\tif (oact.sa_handler == SIG_DFL) {\n\t\t\tmemset(&act, 0, sizeof(act)); /* clear sa_mask/sa_flags before installing handler */\n\t\t\tact.sa_handler = SIG_IGN;\n\t\t\tif (sigaction(SIGPIPE, &act, &oact) != 0) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Could not set SIGPIPE to IGN\");\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t}\n\t} else {\n\t\ttpp_log(LOG_CRIT, __func__, \"Could not query SIGPIPE's disposition\");\n\t\treturn -1;\n\t}\n\treturn 0;\n}\n#endif\n\n/**\n * @brief\n *\tFind the hostname associated with the provided ip\n *\n * @param[in] addr - The ip address for which we need to find the hostname\n * @param[in] host - The buffer to copy the hostname to\n * @param[in] len  - The length of the output buffer\n *\n * @return  error code\n * @retval  !0 - Failure\n * @retval   0 - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_sock_resolve_ip(tpp_addr_t *addr, char *host, int len)\n{\n\tsocklen_t salen;\n\tstruct sockaddr *sa;\n\tstruct sockaddr_in6 sa_in6;\n\tstruct sockaddr_in sa_in;\n\tint rc;\n\n\tif (addr->family == TPP_ADDR_FAMILY_IPV4) {\n\t\tmemcpy(&sa_in.sin_addr, addr->ip, sizeof(sa_in.sin_addr));\n\t\tsalen = sizeof(struct sockaddr_in);\n\t\tsa = (struct sockaddr *) &sa_in;\n\t\tsa->sa_family = AF_INET;\n\t} else if (addr->family == TPP_ADDR_FAMILY_IPV6) {\n\t\tmemcpy(&sa_in6.sin6_addr, addr->ip, sizeof(sa_in6.sin6_addr));\n\t\tsa = (struct sockaddr *) &sa_in6;\n\t\tsalen = sizeof(struct sockaddr_in6);\n\t\tsa->sa_family = AF_INET6;\n\t} else\n\t\treturn -1;\n#ifndef WIN32\n\t/* \n\t * introducing a new mutex to prevent child process from \n\t * inheriting getnameinfo mutex using pthread_atfork handlers\n\t */\n\ttpp_lock(&tpp_nslookup_mutex);\n#endif\n\trc = getnameinfo(sa, salen, host, len, NULL, 0, 0);\n\t/* unlock nslookup mutex */\n#ifndef WIN32\n\ttpp_unlock(&tpp_nslookup_mutex);\n#endif\n\tif (rc != 0) {\n\t\tTPP_DBPRT(\"Error: %s\", 
gai_strerror(rc));\n\t}\n\treturn rc;\n}\n\n/**\n * @brief\n *\tResolve the hostname to ip address list\n *\n * @param[in]  host  - The hostname to resolve\n * @param[out] count - The count of addresses returned\n *\n * @return  Array of address resolved from the host\n * @retval  NULL  - Failure\n * @retval  !NULL - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\ntpp_addr_t *\ntpp_sock_resolve_host(char *host, int *count)\n{\n\ttpp_addr_t *ips = NULL;\n\tvoid *tmp;\n\tint i, j;\n\tstruct addrinfo *aip, *pai;\n\tstruct addrinfo hints;\n\tint rc = 0;\n\n\terrno = 0;\n\t*count = 0;\n\n\tmemset(&hints, 0, sizeof(struct addrinfo));\n\thints.ai_family = AF_INET;\n\thints.ai_socktype = SOCK_STREAM;\n\thints.ai_protocol = IPPROTO_TCP;\n\n#ifndef WIN32\n\t/* \n\t * introducing a new mutex to prevent child process from \n\t * inheriting getaddrinfo mutex using pthread_atfork handlers\n\t */\n\ttpp_lock(&tpp_nslookup_mutex);\n#endif\n\trc = getaddrinfo(host, NULL, &hints, &pai);\n\t/* unlock nslookup mutex */\n#ifndef WIN32\n\ttpp_unlock(&tpp_nslookup_mutex);\n#endif\n\tif (rc != 0) {\n\t\ttpp_log(LOG_CRIT, NULL, \"Error %d resolving %s\", rc, host);\n\t\treturn NULL;\n\t}\n\n\t*count = 0;\n\tfor (aip = pai; aip != NULL; aip = aip->ai_next) {\n\t\tif (aip->ai_family == AF_INET) { /* for now only count IPv4 addresses */\n\t\t\t(*count)++;\n\t\t}\n\t}\n\n\tif (*count == 0) {\n\t\ttpp_log(LOG_CRIT, NULL, \"Could not find any usable IP address for host %s\", host);\n\t\treturn NULL;\n\t}\n\n\tips = calloc(*count, sizeof(tpp_addr_t));\n\tif (!ips) {\n\t\t*count = 0;\n\t\treturn NULL;\n\t}\n\n\ti = 0;\n\tfor (aip = pai; aip != NULL; aip = aip->ai_next) {\n\t\t/* skip non-IPv4 addresses */\n\t\t/*if (aip->ai_family == AF_INET || aip->ai_family == AF_INET6) {*/\n\t\tif (aip->ai_family == AF_INET) { /* for now only work with IPv4 */\n\t\t\tif (aip->ai_family == AF_INET) {\n\t\t\t\tstruct sockaddr_in *sa = (struct sockaddr_in *) 
aip->ai_addr;\n\t\t\t\tif (ntohl(sa->sin_addr.s_addr) >> 24 == IN_LOOPBACKNET)\n\t\t\t\t\tcontinue;\n\t\t\t\tmemcpy(&ips[i].ip, &sa->sin_addr, sizeof(sa->sin_addr));\n\t\t\t} else if (aip->ai_family == AF_INET6) {\n\t\t\t\tstruct sockaddr_in6 *sa6 = (struct sockaddr_in6 *) aip->ai_addr;\n\t\t\t\tmemcpy(&ips[i].ip, &sa6->sin6_addr, sizeof(sa6->sin6_addr));\n\t\t\t}\n\t\t\tips[i].family = (aip->ai_family == AF_INET6) ? TPP_ADDR_FAMILY_IPV6 : TPP_ADDR_FAMILY_IPV4;\n\t\t\tips[i].port = 0;\n\n\t\t\tfor (j = 0; j < i; j++) {\n\t\t\t\t/* check for duplicate ip addresses dont add if duplicate */\n\t\t\t\tif (memcmp(&ips[j].ip, &ips[i].ip, sizeof(ips[j].ip)) == 0) {\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (j == i) {\n\t\t\t\t/* did not find duplicate so use this slot */\n\t\t\t\ti++;\n\t\t\t}\n\t\t}\n\t}\n\tfreeaddrinfo(pai);\n\n\tif (i == 0) {\n\t\tfree(ips);\n\t\t*count = 0;\n\t\treturn NULL;\n\t}\n\n\tif (i < *count) {\n\t\t/* try to resize the buffer, don't bother if resize failed */\n\t\ttmp = realloc(ips, i * sizeof(tpp_addr_t));\n\t\tif (tmp)\n\t\t\tips = tmp;\n\t}\n\t*count = i; /* adjust count */\n\n\treturn ips;\n}\n\n/**\n * @brief\n *\tHelper function to initiate a connection to a remote host\n *\n *\n * @param[in] conn - The physical connection structure\n *\n * @return  Error code\n * @retval  -1 - Failure\n * @retval   0 - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nint\ntpp_sock_attempt_connection(int fd, char *host, int port)\n{\n\tstruct sockaddr_in dest_addr;\n\tint rc = 0;\n\ttpp_addr_t *addr;\n\tint count = 0, i;\n\n\terrno = 0;\n\n\taddr = tpp_sock_resolve_host(host, &count);\n\tif (count == 0 || addr == NULL) {\n\t\terrno = EADDRNOTAVAIL;\n\t\treturn -1;\n\t}\n\n\tfor (i = 0; i < count; i++) {\n\t\tif (addr[i].family == TPP_ADDR_FAMILY_IPV4)\n\t\t\tbreak;\n\t}\n\tif (i == count) {\n\t\t/* did not find a ipv4 address, fail for now */\n\t\tfree(addr);\n\t\terrno = EADDRNOTAVAIL;\n\t\treturn 
-1;\n\t}\n\n\tdest_addr.sin_family = AF_INET;\n\tdest_addr.sin_port = htons(port);\n\n\tmemcpy((char *) &dest_addr.sin_addr, &addr[i].ip, sizeof(dest_addr.sin_addr));\n\trc = tpp_sock_connect(fd, (struct sockaddr *) &dest_addr, sizeof(dest_addr));\n\tfree(addr);\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\tInitialize thrd handle to an invalid value\n *\n * @param[in] thrd - the thrd handle to invalidate\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nvoid\ntpp_invalidate_thrd_handle(pthread_t *thrd)\n{\n#ifdef WIN32\n\tthrd->thHandle = INVALID_HANDLE_VALUE;\n\tthrd->thId = -1;\n#else\n\t*thrd = -1; /* initialize to -1 */\n#endif\n}\n\n/**\n * @brief\n *\tCheck if thrd has a valid handle\n *\n * @param[in] thrd - the thrd handle to check\n *\n * @return  Error code\n * @retval   1 - Valid\n * @retval   0 - invalid handle\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nint\ntpp_is_valid_thrd(pthread_t thrd)\n{\n#ifndef WIN32\n\tif (thrd != -1)\n\t\treturn 1;\n#else\n\tif (thrd.thHandle != INVALID_HANDLE_VALUE)\n\t\treturn 1;\n#endif\n\treturn 0;\n}\n"
  },
  {
    "path": "src/lib/Libtpp/tpp_router.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\ttpp_router.c\n *\n * @brief\tRouter part of the TCP router based network\n *\n * @par\t\tFunctionality:\n *\n *\t\tTPP = TCP based Packet Protocol. 
This layer uses TCP in a multi-\n *\t\thop router based network topology to deliver packets to desired\n *\t\tdestinations. LEAF (end) nodes are connected to ROUTERS via\n *\t\tpersistent TCP connections. The ROUTER has intelligence to route\n *\t\tpackets to appropriate destination leaves or other routers.\n *\n *\t\tThis is the router part in the tpp network topology.\n *\t\tThis compiles into the router process, and is\n *\t\tlinked to the PBS comm.\n *\n */\n#include <pbs_config.h>\n#if RWLOCK_SUPPORT == 2\n#if !defined(_XOPEN_SOURCE)\n#define _XOPEN_SOURCE 500\n#endif\n#endif\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <unistd.h>\n#include <sys/types.h>\n#include <sys/socket.h>\n#include <netinet/in.h>\n#include <arpa/inet.h>\n#include <pthread.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <netdb.h>\n#ifdef PBS_COMPRESSION_ENABLED\n#include <zlib.h>\n#endif\n#include \"pbs_idx.h\"\n#include \"tpp_internal.h\"\n#include \"auth.h\"\n\n#define RLIST_INC 100\n\nstruct tpp_config *tpp_conf; /* copy of the global tpp_config */\n\npthread_rwlock_t router_lock; /* rw lock for router avl trees, searches over avl should be thread safe now */\npthread_mutex_t lj_lock;\n\n/* index of routers connected to this router */\nvoid *routers_idx = NULL;\n\n/* index of all leaves in the cluster */\nvoid *cluster_leaves_idx = NULL;\n\n/* index of special routers who need to be notified for join updates */\nvoid *my_leaves_notify_idx = NULL;\ntime_t router_last_leaf_joined = 0;\n\nstatic int router_send_ctl_join(int tfd, void *data, void *c);\n\n/* forward declarations */\nstatic int router_pkt_presend_handler(int tfd, tpp_packet_t *pkt, void *c, void *extra);\nstatic int router_pkt_handler(int phy_fd, void *data, int len, void *c, void *extra);\nstatic int router_pkt_handler_inner(int tfd, void *buf, void **data_out, int len, void *c, void *extra);\nstatic int router_close_handler(int phy_con, int error, void *c, void *extra);\nstatic int 
send_leaves_to_router(tpp_router_t *parent, tpp_router_t *target);\nstatic tpp_router_t *get_preferred_router(tpp_leaf_t *l, tpp_router_t *this_router, int *fd);\nstatic int add_route_to_leaf(tpp_leaf_t *l, tpp_router_t *r, int index);\nstatic tpp_router_t *del_router_from_leaf(tpp_leaf_t *l, int tfd);\nstatic int leaf_get_router_index(tpp_leaf_t *l, tpp_router_t *r);\nstatic int router_timer_handler(time_t now);\nstatic int router_post_connect_handler(int tfd, void *data, void *c, void *extra);\n\n/* structure identifying this router */\nstatic tpp_router_t *this_router = NULL;\n\nstatic tpp_router_t *\nalloc_router(char *name, tpp_addr_t *address)\n{\n\ttpp_router_t *r;\n\ttpp_addr_t *addrs = NULL;\n\tint count = 0;\n\tvoid *unused;\n\tvoid *p_r_addr;\n\n\t/* add self name to tree */\n\tr = (tpp_router_t *) calloc(1, sizeof(tpp_router_t));\n\tif (!r) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating pbs_comm data\");\n\t\treturn NULL;\n\t}\n\n\tr->conn_fd = -1;\n\tr->router_name = name;\n\tr->initiator = 0;\n\tr->index = 0; /* index is not used between routers */\n\tr->state = TPP_ROUTER_STATE_DISCONNECTED;\n\n\tif (address == NULL) {\n\t\t/* do name resolution on the supplied name */\n\t\taddrs = tpp_get_addresses(r->router_name, &count);\n\t\tif (!addrs) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to resolve address, pbs_comm=%s\", r->router_name);\n\t\t\tfree_router(r);\n\t\t\treturn NULL;\n\t\t}\n\t\tmemcpy(&r->router_addr, addrs, sizeof(tpp_addr_t));\n\t\tfree(addrs);\n\t} else {\n\t\tmemcpy(&r->router_addr, address, sizeof(tpp_addr_t));\n\t}\n\n\t/* initialize the routers leaf tree */\n\tr->my_leaves_idx = pbs_idx_create(0, sizeof(tpp_addr_t));\n\tif (r->my_leaves_idx == NULL) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to create index for my leaves\");\n\t\tfree_router(r);\n\t\treturn NULL;\n\t}\n\n\tp_r_addr = &r->router_addr;\n\tif (pbs_idx_find(routers_idx, &p_r_addr, &unused, NULL) == PBS_IDX_RET_OK) {\n\t\ttpp_log(LOG_CRIT, __func__, 
\"Duplicate router %s in router list\", r->router_name);\n\t\tfree_router(r);\n\t\treturn NULL;\n\t}\n\n\tif (pbs_idx_insert(routers_idx, &r->router_addr, r) != PBS_IDX_RET_OK) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to add router %s in routers index\", r->router_name);\n\t\tfree_router(r);\n\t\treturn NULL;\n\t}\n\n\treturn r;\n}\n\n/*\n * Convenience function to log a no route message in the logs\n */\nvoid\nlog_noroute(tpp_addr_t *src_host, tpp_addr_t *dest_host, int src_sd, char *msg)\n{\n\tchar src[TPP_MAXADDRLEN + 1];\n\tchar dest[TPP_MAXADDRLEN + 1];\n\n\tstrncpy(src, tpp_netaddr(src_host), TPP_MAXADDRLEN);\n\tstrncpy(dest, tpp_netaddr(dest_host), TPP_MAXADDRLEN);\n\n\ttpp_log(LOG_ERR, NULL, \"Pkt from src=%s[%d], noroute to dest=%s, %s\", src, src_sd, dest, msg);\n}\n\n/**\n * @brief\n *\tWhen a router joins, send all the leaves connected to that router to\n *\tother routers.\n *\n * @param[in] parent - router whose leaves are to be sent\n * @param[in] target - router to which the leaves must be sent\n *\n * @return Error code\n * @retval -1 - Failure\n * @retval  0 - Success\n *\n * @par Side Effects:\n *\tThis routine expects to be called with the \"router_lock\" and\n *\twill unlock the router_lock before exiting.\n *\n * @par MT-safe: Yes\n *\n */\nstatic int\nsend_leaves_to_router(tpp_router_t *parent, tpp_router_t *target)\n{\n\ttpp_leaf_t *l;\n\ttpp_que_t leaf_packets;\n\ttpp_packet_t *pkt = NULL;\n\tint index;\n\ttpp_join_pkt_hdr_t *hdr = NULL;\n\tvoid *idx_ctx = NULL;\n\n\tTPP_QUE_CLEAR(&leaf_packets);\n\n\tTPP_DBPRT(\"Sending leaves to router=%s\", target->router_name);\n\n\t/* traverse my leaves tree, there is only one record per leaf */\n\twhile (pbs_idx_find(parent->my_leaves_idx, NULL, (void **) &l, &idx_ctx) == PBS_IDX_RET_OK) {\n\t\tindex = leaf_get_router_index(l, this_router);\n\t\tif (index == -1) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Could not find index of my router in leaf's pbs_comm list\");\n\t\t\tgoto err;\n\t\t}\n\n\t\t/* 
create a new pkt and add the dhdr chunk first */\n\t\tpkt = tpp_bld_pkt(NULL, NULL, sizeof(tpp_join_pkt_hdr_t), 1, (void **) &hdr);\n\t\tif (!pkt) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to build packet\");\n\t\t\tgoto err;\n\t\t}\n\t\t/* save hdr and addrs to be sent outside of locks */\n\t\thdr->type = TPP_CTL_JOIN;\n\t\thdr->node_type = l->leaf_type;\n\t\thdr->hop = 2;\n\t\thdr->index = index;\n\t\thdr->num_addrs = l->num_addrs;\n\n\t\t/* add the addresses */\n\t\tif (!tpp_bld_pkt(pkt, l->leaf_addrs, sizeof(tpp_addr_t) * l->num_addrs, 1, NULL)) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to build packet\");\n\t\t\tgoto err;\n\t\t}\n\n\t\tif (tpp_enque(&leaf_packets, pkt) == NULL) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory enqueuing to leaf_packets\");\n\t\t\tgoto err;\n\t\t}\n\t}\n\tpbs_idx_free_ctx(idx_ctx);\n\n\twhile ((pkt = (tpp_packet_t *) tpp_deque(&leaf_packets))) {\n\t\tif (tpp_transport_vsend(target->conn_fd, pkt) != 0) {\n\t\t\ttpp_log(LOG_ERR, __func__, \"Send leaves to pbs_comm %s failed\", target->router_name);\n\t\t\t/* let the dequeue continue to happen and attempt to send happen */\n\t\t\t/* vsend will free packets even in case of failure */\n\t\t}\n\t}\n\n\treturn 0;\n\nerr:\n\t/* we jump to the error block only before starting to send, so safe to clear the queue */\n\ttpp_log(LOG_CRIT, __func__, \"Error sending leaves to router %s\", target->router_name);\n\n\tpbs_idx_free_ctx(idx_ctx);\n\n\twhile ((pkt = (tpp_packet_t *) tpp_deque(&leaf_packets))) /* drain the list and free packets */\n\t\ttpp_free_pkt(pkt);\n\n\treturn -1;\n}\n\n/**\n * @brief\n *\tBroadcast the given data packet to all the routers connected to this\n *\trouter\n *\n * @param[in] - chunks - Chunks of data that needs to be sent to routers\n * @param[in] - count  - Number of chunks in the count array\n * @param[in] - origin_tfd - This routers physical connection descriptor\n *\n * @return Error code\n * @retval -1 - Failure\n * @retval  0 - Success\n *\n * 
@par Side Effects:\n *\tThis function is not guarded by a lock, so it should be called\n *  under a lock\n *\n * @par MT-safe: No\n *\n */\nstatic int\nbroadcast_to_my_routers(tpp_chunk_t *chunks, int count, int origin_tfd)\n{\n\ttpp_router_t *r;\n\ttpp_que_t router_list;\n\tvoid *idx_ctx = NULL;\n\n\tTPP_QUE_CLEAR(&router_list);\n\n\twhile (pbs_idx_find(routers_idx, NULL, (void **) &r, &idx_ctx) == PBS_IDX_RET_OK) {\n\t\tif (r->conn_fd == -1 || r == this_router || r->conn_fd == origin_tfd || r->state != TPP_ROUTER_STATE_CONNECTED) {\n\t\t\tcontinue; /* don't send to self, or to originating router */\n\t\t}\n\t\tif (tpp_enque(&router_list, r) == NULL) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory enqueuing to router_list\");\n\t\t\tpbs_idx_free_ctx(idx_ctx);\n\t\t\tgoto err;\n\t\t}\n\t}\n\tpbs_idx_free_ctx(idx_ctx);\n\n\twhile ((r = (tpp_router_t *) tpp_deque(&router_list))) {\n\t\tint j;\n\t\ttpp_packet_t *pkt = NULL;\n\n\t\tfor (j = 0; j < count; j++) {\n\t\t\tpkt = tpp_bld_pkt(pkt, chunks[j].data, chunks[j].len, 1, NULL);\n\t\t\tif (!pkt) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to build packet\");\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t}\n\n\t\tif (tpp_transport_vsend(r->conn_fd, pkt) != 0) {\n\t\t\tTPP_DBPRT(\"Broadcasting leaf to router %s\", r->router_name);\n\t\t\tif (errno != ENOTCONN) {\n\t\t\t\ttpp_log(LOG_ERR, __func__, \"send failed\");\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t\t/* vsend will free packets even in case of failure */\n\t\t}\n\t}\n\treturn 0;\n\nerr:\n\ttpp_log(LOG_CRIT, __func__, \"Error broadcasting to my routers\");\n\twhile (tpp_deque(&router_list))\n\t\t; /* drain the list, don't free packets, transport will free */\n\treturn -1;\n}\n\n/**\n * @brief\n *\tBroadcast the given data packet to all the leaves connected to this\n *\trouter\n *\n * @param[in] - chunks - Chunks of data that need to be sent to routers\n * @param[in] - count  - Number of chunks in the count array\n * @param[in] - origin_tfd - This router's physical 
connection descriptor\n * @param[in] - type  0 - Notify all leaves\n *                    1 - Notify only listen leaves\n *\n * @return Error code\n * @retval -1 - Failure\n * @retval  0 - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @note\n *   This function is not guarded by a lock, so it should be called\n *   under a lock\n *\n * @par MT-safe: No\n *\n */\nstatic int\nbroadcast_to_my_leaves(tpp_chunk_t *chunks, int count, int origin_tfd, int type)\n{\n\ttpp_leaf_t *l;\n\tvoid *traverse_idx = NULL;\n\tvoid *idx_ctx = NULL;\n\ttpp_que_t leaf_list;\n\n\tTPP_QUE_CLEAR(&leaf_list);\n\n\tif (type == 1)\n\t\ttraverse_idx = my_leaves_notify_idx;\n\telse\n\t\ttraverse_idx = this_router->my_leaves_idx;\n\n\twhile (pbs_idx_find(traverse_idx, NULL, (void **) &l, &idx_ctx) == PBS_IDX_RET_OK) {\n\t\t/*\n\t\t * leaf directly connected to me? and not myself\n\t\t * and is interested in events\n\t\t */\n\t\tif (l->conn_fd != -1 && l->conn_fd != origin_tfd) {\n\n\t\t\t/* if type is 1, notify only listen leaves */\n\t\t\tif (type == 1 && l->leaf_type != TPP_LEAF_NODE_LISTEN)\n\t\t\t\tcontinue;\n\n\t\t\tif (tpp_enque(&leaf_list, l) == NULL) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory enqueuing to leaf_list\");\n\t\t\t\tpbs_idx_free_ctx(idx_ctx);\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t}\n\t}\n\tpbs_idx_free_ctx(idx_ctx);\n\n\twhile ((l = (tpp_leaf_t *) tpp_deque(&leaf_list))) {\n\t\tint j;\n\t\ttpp_packet_t *pkt = NULL;\n\n\t\tfor (j = 0; j < count; j++) {\n\t\t\tpkt = tpp_bld_pkt(pkt, chunks[j].data, chunks[j].len, 1, NULL);\n\t\t\tif (!pkt) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to build packet\");\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t}\n\n\t\tif (tpp_transport_vsend(l->conn_fd, pkt) != 0) {\n\t\t\tif (errno != ENOTCONN) {\n\t\t\t\ttpp_log(LOG_ERR, __func__, \"send failed\");\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t\t/* vsend will free packets even in case of failure */\n\t\t}\n\t}\n\treturn 0;\n\nerr:\n\ttpp_log(LOG_CRIT, __func__, \"Error broadcasting to 
my leaves\");\n\twhile (tpp_deque(&leaf_list))\n\t\t; /* drain the list, don't free packets, the transport will free them */\n\treturn -1;\n}\n\n/**\n * @brief\n *\tSend a TPP_CTL_JOIN message to the pbs_comm associated with the given\n *\tconnection context, followed by the list of leaves known to this router\n *\n * @param[in] tfd  - The physical connection descriptor\n * @param[in] data - Any data the IO thread might want to pass to this function (unused)\n * @param[in] c    - The context associated with this physical connection\n *\n * @return Error code\n * @retval -1 - Failure\n * @retval  0 - Success\n *\n */\nstatic int\nrouter_send_ctl_join(int tfd, void *data, void *c)\n{\n\ttpp_context_t *ctx = (tpp_context_t *) c;\n\tint rc = 0;\n\n\tif (!ctx)\n\t\treturn 0;\n\n\tif (ctx->type == TPP_ROUTER_NODE) {\n\t\ttpp_router_t *r = NULL;\n\t\ttpp_join_pkt_hdr_t *hdr = NULL;\n\t\ttpp_packet_t *pkt = NULL;\n\n\t\tr = (tpp_router_t *) ctx->ptr;\n\n\t\t/* send a TPP_CTL_JOIN message */\n\t\tpkt = tpp_bld_pkt(NULL, NULL, sizeof(tpp_join_pkt_hdr_t), 1, (void **) &hdr);\n\t\tif (!pkt) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to build packet\");\n\t\t\treturn -1;\n\t\t}\n\n\t\thdr->type = TPP_CTL_JOIN;\n\t\thdr->node_type = TPP_ROUTER_NODE;\n\t\thdr->hop = 1;\n\t\thdr->index = 0;\n\t\thdr->num_addrs = 0;\n\n\t\trc = tpp_transport_vsend(r->conn_fd, pkt);\n\t\tif (rc == 0) {\n\t\t\ttpp_read_lock(&router_lock);\n\n\t\t\tr->state = TPP_ROUTER_STATE_CONNECTED;\n\n\t\t\ttpp_log(LOG_CRIT, NULL, \"tfd=%d, pbs_comm %s accepted connection\", tfd, r->router_name);\n\n\t\t\trc = send_leaves_to_router(this_router, r);\n\n\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t} else {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to send JOIN packet/send leaves to pbs_comm %s\", this_router->router_name);\n\t\t\ttpp_transport_close(r->conn_fd);\n\t\t\treturn 0;\n\t\t}\n\t}\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\tThe router post connect handler\n *\n * @par Functionality\n *\tWhen the connection between this router and another is dropped, the IO\n *\tthread continuously attempts to reconnect to it. 
If the connection is\n *\trestored, then this prior registered function is called.\n *\n * @param[in] tfd - The actual IO connection on which data was about to be\n *\t\t\tsent (unused)\n * @param[in] data - Any data the IO thread might want to pass to this function.\n *\t\t     (unused)\n * @param[in] extra - The extra data associated with IO connection\n *\n * @return Error code\n * @retval 0 - Success\n * @retval -1 - Failure\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic int\nrouter_post_connect_handler(int tfd, void *data, void *c, void *extra)\n{\n\ttpp_context_t *ctx = (tpp_context_t *) c;\n\tconn_auth_t *authdata = (conn_auth_t *) extra;\n\tint rc = 0;\n\n\tif (!ctx)\n\t\treturn 0;\n\n\tif (ctx->type != TPP_ROUTER_NODE)\n\t\treturn 0;\n\n\tif (tpp_conf->auth_config->encrypt_method[0] != '\\0' ||\n\t    strcmp(tpp_conf->auth_config->auth_method, AUTH_RESVPORT_NAME) != 0) {\n\n\t\t/*\n\t\t * Since either auth is not resvport or encryption is enabled,\n\t\t * initiate handshakes for them\n\t\t *\n\t\t * If encryption is enabled then first initiate handshake for it\n\t\t * else for authentication\n\t\t *\n\t\t * Here we are only initiating handshake, if any handshake needs\n\t\t * continuation then it will be handled in leaf_pkt_handler\n\t\t */\n\n\t\tint conn_fd = ((tpp_router_t *) ctx->ptr)->conn_fd;\n\t\tauthdata = tpp_make_authdata(tpp_conf, AUTH_CLIENT, tpp_conf->auth_config->auth_method, tpp_conf->auth_config->encrypt_method);\n\t\tif (authdata == NULL) {\n\t\t\t/* tpp_make_authdata already logged error */\n\t\t\treturn -1;\n\t\t}\n\t\tauthdata->conn_initiator = 1;\n\t\ttpp_transport_set_conn_extra(tfd, authdata);\n\n\t\tif (authdata->config->encrypt_method[0] != '\\0') {\n\t\t\trc = tpp_handle_auth_handshake(tfd, conn_fd, authdata, FOR_ENCRYPT, NULL, 0);\n\t\t\tif (rc != 1)\n\t\t\t\treturn rc;\n\t\t}\n\n\t\tif (strcmp(authdata->config->auth_method, AUTH_RESVPORT_NAME) != 0) {\n\t\t\tif 
(strcmp(authdata->config->auth_method, authdata->config->encrypt_method) != 0) {\n\t\t\t\trc = tpp_handle_auth_handshake(tfd, conn_fd, authdata, FOR_AUTH, NULL, 0);\n\t\t\t\tif (rc != 1)\n\t\t\t\t\treturn rc;\n\t\t\t} else {\n\t\t\t\tauthdata->authctx = authdata->encryptctx;\n\t\t\t\tauthdata->authdef = authdata->encryptdef;\n\t\t\t\ttpp_transport_set_conn_extra(tfd, authdata);\n\t\t\t}\n\t\t}\n\t}\n\n\t/*\n\t * Since we are in the post connect handler\n\t * and authentication has completed,\n\t * send TPP_CTL_JOIN\n\t */\n\treturn router_send_ctl_join(tfd, data, c);\n}\n\n/**\n * @brief\n *\tThe connection close handler\n *\n * @par Functionality:\n *\tIdentify what type of endpoint dropped the connection, and remove it\n *\tfrom the appropriate indexes (router or leaf). If a leaf or router\n *\twas down, inform all the other routers interested about the connection\n *\tloss.\n *\n *\tIf a router went down, then consider all leaves connected directly to\n *\tthat router to be down, and repeat the process.\n *\n *\tThis is also called when a leaf sends a LEAVE message, which is\n *\tforwarded by the router to other leaves and routers; in this case, the\n *\thop count is > 1.\n *\n *\tIf hop == 1, it means data came from a direct connection instead of\n *\tbeing forwarded by another router. Leaves that are directly connected\n *\thave conn_fd set to the actual socket descriptor. 
For leaves that are\n *\tnot connected directly to this router, the conn_fd is -1.\n *\n * @param[in] tfd   - The physical connection that went down\n * @param[in] error - Any error that was captured when the connection went down\n * @param[in] c     - The context that was associated with the connection\n * @param[in] hop   - The hop count, i.e., the number of hops the message has traveled\n *\n * @return Error code\n * @retval -1 - Failure\n * @retval  0 - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic int\nrouter_close_handler_inner(int tfd, int error, void *c, int hop)\n{\n\ttpp_context_t *ctx = (tpp_context_t *) c;\n\ttpp_leave_pkt_hdr_t hdr;\n\ttpp_chunk_t chunks[2];\n\tint i;\n\n\tif (tpp_going_down == 1)\n\t\treturn 0;\n\n\tif (c == NULL) {\n\t\t/*\n\t\t * no context available, no join was done, so don't bother\n\t\t * about disconnection\n\t\t */\n\t\tTPP_DBPRT(\"tfd = %d, No context, leaving\", tfd);\n\t\treturn 0;\n\t}\n\n\tif (ctx->type == TPP_LEAF_NODE || ctx->type == TPP_LEAF_NODE_LISTEN) {\n\n\t\t/* connection to a leaf node dropped or a router dropped */\n\t\ttpp_leaf_t *l = (tpp_leaf_t *) ctx->ptr;\n\t\ttpp_router_t *r = NULL;\n\t\tint leaf_type = ctx->type;\n\n\t\thdr.type = TPP_CTL_LEAVE;\n\t\thdr.hop = hop + 1;\n\t\thdr.ecode = error;\n\t\thdr.num_addrs = l->num_addrs;\n\n\t\tchunks[0].data = (void *) &hdr;\n\t\tchunks[0].len = sizeof(tpp_leave_pkt_hdr_t);\n\n\t\tchunks[1].data = (void *) l->leaf_addrs;\n\t\tchunks[1].len = l->num_addrs * sizeof(tpp_addr_t);\n\n\t\tif (hop == 1) {\n\t\t\t/* request came directly to me? 
*/\n\t\t\t/*\n\t\t\t * broadcast leave pkt to other routers,\n\t\t\t * except from where it came from\n\t\t\t */\n\t\t\ttpp_read_lock(&router_lock);\n\t\t\tbroadcast_to_my_routers(chunks, 2, tfd);\n\t\t\ttpp_unlock_rwlock(&router_lock);\n\n\t\t\ttpp_log(LOG_CRIT, NULL, \"tfd=%d, Connection from leaf %s down\", tfd, tpp_netaddr(&l->leaf_addrs[0]));\n\t\t}\n\n\t\ttpp_write_lock(&router_lock);\n\n\t\tif ((r = del_router_from_leaf(l, tfd)) == NULL) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"tfd=%d, Failed to clear pbs_comm from leaf %s's list\", tfd, tpp_netaddr(&l->leaf_addrs[0]));\n\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t\treturn -1;\n\t\t}\n\n\t\t/* we had only the first address record stored in the my_leaves tree */\n\t\tif (pbs_idx_delete(r->my_leaves_idx, &l->leaf_addrs[0]) != PBS_IDX_RET_OK) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"tfd=%d, Failed to delete address from my_leaves %s\", tfd, tpp_netaddr(&l->leaf_addrs[0]));\n\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t\treturn -1;\n\t\t}\n\n\t\tif (l->num_routers > 0) {\n\t\t\tTPP_DBPRT(\"tfd=%d, Other pbs_comms for leaf %s present\", tfd, tpp_netaddr(&l->leaf_addrs[0]));\n\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t\treturn 0;\n\t\t}\n\n\t\tTPP_DBPRT(\"No more pbs_comms to leaf %s, deleting leaf\", tpp_netaddr(&l->leaf_addrs[0]));\n\n\t\t/* delete all of this leaf's addresses from the search tree */\n\t\tfor (i = 0; i < l->num_addrs; i++) {\n\t\t\tif (pbs_idx_delete(cluster_leaves_idx, &l->leaf_addrs[i]) != PBS_IDX_RET_OK) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"tfd=%d, Failed to delete address %s from cluster leaves\", tfd, tpp_netaddr(&l->leaf_addrs[i]));\n\t\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t}\n\n\t\tif (leaf_type == TPP_LEAF_NODE_LISTEN) {\n\t\t\t/*\n\t\t\t * if it is a notification leaf,\n\t\t\t * then remove from this tree also\n\t\t\t */\n\t\t\tpbs_idx_delete(my_leaves_notify_idx, &l->leaf_addrs[0]);\n\t\t}\n\n\t\t/* broadcast to all self connected leaves 
*/\n\t\tbroadcast_to_my_leaves(chunks, 2, tfd, 0);\n\n\t\tfree_leaf(l);\n\n\t\ttpp_unlock_rwlock(&router_lock);\n\n\t\treturn 0;\n\n\t} else if (ctx->type == TPP_ROUTER_NODE) {\n\n\t\ttpp_router_t *r = (tpp_router_t *) ctx->ptr;\n\t\tint rc;\n\t\ttpp_leaf_t *l;\n\t\ttpp_que_t deleted_leaves;\n\t\ttpp_que_elem_t *n = NULL;\n\n\t\tif (r->state == TPP_ROUTER_STATE_CONNECTED) {\n\t\t\tvoid *idx_ctx = NULL;\n\n\t\t\t/* do any logging or leaf processing only if it was connected earlier */\n\t\t\ttpp_log(LOG_CRIT, NULL, \"tfd=%d, Connection %s pbs_comm %s down\", tfd, (r->initiator == 1) ? \"to\" : \"from\", r->router_name);\n\n\t\t\ttpp_write_lock(&router_lock);\n\t\t\tTPP_QUE_CLEAR(&deleted_leaves);\n\n\t\t\twhile (pbs_idx_find(r->my_leaves_idx, NULL, (void **) &l, &idx_ctx) == PBS_IDX_RET_OK) {\n\t\t\t\tif (l->num_routers > 0) {\n\t\t\t\t\tdel_router_from_leaf(l, tfd);\n\t\t\t\t\tif (l->num_routers == 0) {\n\t\t\t\t\t\t/*\n\t\t\t\t\t\t * delete leaf from the leaf tree, since it\n\t\t\t\t\t\t * is not connected to any routers now\n\t\t\t\t\t\t */\n\t\t\t\t\t\tTPP_DBPRT(\"All routers to leaf %s down, deleting leaf\", tpp_netaddr(&l->leaf_addrs[0]));\n\n\t\t\t\t\t\tif (tpp_enque(&deleted_leaves, l) == NULL) {\n\t\t\t\t\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t\t\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory enqueuing deleted leaves\");\n\t\t\t\t\t\t\treturn -1;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tpbs_idx_free_ctx(idx_ctx);\n\n\t\t\t/* now remove each of the leaf's addresses from clusters index */\n\t\t\twhile ((n = TPP_QUE_NEXT(&deleted_leaves, n))) {\n\t\t\t\tl = (tpp_leaf_t *) TPP_QUE_DATA(n);\n\t\t\t\tif (l == NULL)\n\t\t\t\t\tcontinue;\n\n\t\t\t\tif (l->leaf_type == TPP_LEAF_NODE_LISTEN) {\n\t\t\t\t\tpbs_idx_delete(my_leaves_notify_idx, &l->leaf_addrs[0]);\n\t\t\t\t}\n\n\t\t\t\tfor (i = 0; i < l->num_addrs; i++) {\n\t\t\t\t\tif (pbs_idx_delete(cluster_leaves_idx, &l->leaf_addrs[i]) != PBS_IDX_RET_OK) {\n\t\t\t\t\t\ttpp_log(LOG_CRIT, __func__, 
\"tfd=%d, Failed to delete address %s\", tfd, tpp_netaddr(&l->leaf_addrs[i]));\n\t\t\t\t\t\ttpp_unlock_rwlock(&router_lock);\n\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/* delete all leaf nodes from the my_leaves_idx tree of this router\n\t\t\t * and finally destroy that index since the router itself had\n\t\t\t * disconnected\n\t\t\t */\n\t\t\tpbs_idx_destroy(r->my_leaves_idx);\n\t\t\tr->my_leaves_idx = NULL;\n\t\t\tif (r->initiator == 1) {\n\t\t\t\t/* initialize the router's leaf tree */\n\t\t\t\tr->my_leaves_idx = pbs_idx_create(0, sizeof(tpp_addr_t));\n\t\t\t\tif (r->my_leaves_idx == NULL) {\n\t\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to create index for my leaves\");\n\t\t\t\t\tfree_router(r);\n\t\t\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/*\n\t\t\t * set the conn_fd of the router to -1 here and not before\n\t\t\t * because the del_router_from_leaf function above matches\n\t\t\t * with the router's conn_fd\n\t\t\t */\n\t\t\tr->conn_fd = -1;\n\t\t\tr->state = TPP_ROUTER_STATE_DISCONNECTED;\n\n\t\t\tchunks[0].data = (void *) &hdr;\n\t\t\tchunks[0].len = sizeof(tpp_leave_pkt_hdr_t);\n\n\t\t\t/* broadcast leave msgs of these leaves to my leaves */\n\t\t\twhile ((l = (tpp_leaf_t *) tpp_deque(&deleted_leaves))) {\n\t\t\t\thdr.type = TPP_CTL_LEAVE;\n\t\t\t\thdr.hop = 2;\n\t\t\t\thdr.ecode = error;\n\t\t\t\thdr.num_addrs = l->num_addrs;\n\n\t\t\t\tchunks[1].data = (void *) l->leaf_addrs;\n\t\t\t\tchunks[1].len = l->num_addrs * sizeof(tpp_addr_t);\n\n\t\t\t\t/* broadcast to all self connected leaves */\n\t\t\t\tbroadcast_to_my_leaves(chunks, 2, tfd, 0);\n\t\t\t\tfree_leaf(l);\n\t\t\t}\n\n\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t}\n\n\t\tif (r->initiator == 1) {\n\t\t\tvoid *thrd;\n\t\t\t/*\n\t\t\t * Attempt reconnects only if we had initiated the\n\t\t\t * connection ourselves\n\t\t\t */\n\t\t\tif (r->delay == 0)\n\t\t\t\tr->delay = TPP_CONNNECT_RETRY_MIN;\n\t\t\telse\n\t\t\t\tr->delay += 
TPP_CONNECT_RETRY_INC;\n\t\t\tif (r->delay > TPP_CONNECT_RETRY_MAX)\n\t\t\t\tr->delay = TPP_CONNECT_RETRY_MAX;\n\n\t\t\tr->state = TPP_ROUTER_STATE_CONNECTING;\n\n\t\t\t/* de-associate connection context from current tfd */\n\t\t\ttpp_transport_set_conn_ctx(tfd, NULL);\n\n\t\t\t/* find the transport thread associated with this connection\n\t\t\t * that is on its way to be closed, pass the same thrd context\n\t\t\t * to the special connect call, so that the new connection is\n\t\t\t * assigned to this same thread instead of a new one\n\t\t\t */\n\t\t\ttpp_log(LOG_INFO, NULL, \"Connecting to pbs_comm %s\", r->router_name);\n\n\t\t\tthrd = tpp_transport_get_thrd_context(tfd);\n\t\t\trc = tpp_transport_connect_spl(r->router_name, r->delay, ctx, &r->conn_fd, thrd);\n\t\t\tif (rc != 0) {\n\t\t\t\ttpp_log(LOG_CRIT, NULL, \"tfd=%d, Failed initiating connection to pbs_comm %s\", tfd, r->router_name);\n\t\t\t\treturn -1;\n\t\t\t}\n\n\t\t\treturn 1; /* so caller does not free context or set anything */\n\t\t} else {\n\t\t\t/**\n\t\t\t * remove this router from our list of registered routers\n\t\t\t * i.e., remove from the routers_idx tree\n\t\t\t **/\n\t\t\ttpp_write_lock(&router_lock);\n\n\t\t\tpbs_idx_delete(routers_idx, &r->router_addr);\n\t\t\t/*\n\t\t\t * context will be freed and deleted by router_close_handler\n\t\t\t * so just free router structure itself\n\t\t\t */\n\t\t\tfree_router(r);\n\n\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t}\n\n\t\treturn 0;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tWrapper to the close handler function. This is the one registered to be\n *\tcalled from the IO thread when the IO thread detects a connection loss.\n *\n *\tIt calls \"router_close_handler_inner\" with a hop count of 1,\n *\tsince it is called \"first hand\" by the registered function.\n *\n * @par Functionality:\n *\tIdentify what type of endpoint dropped the connection, and remove it\n *\tfrom the appropriate index (router or leaf). 
If a leaf or router\n *\twas down, inform all the other routers interested about the connection\n *\tloss.\n *\n *\tIf a router went down, then consider all leaves connected directly to\n *\tthat router to be down, and repeat the process.\n *\n * @param[in] tfd   - The physical connection that went down\n * @param[in] error - Any error that was captured when the connection went down\n * @param[in] c     - The context that was associated with the connection\n * @param[in] extra - The extra data associated with IO connection\n *\n * @return Error code\n * @retval -1 - Failure\n * @retval  0 - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic int\nrouter_close_handler(int tfd, int error, void *c, void *extra)\n{\n\tint rc;\n\n\tif (extra) {\n\t\tconn_auth_t *authdata = (conn_auth_t *) extra;\n\t\tif (authdata->authctx && authdata->authdef)\n\t\t\tauthdata->authdef->destroy_ctx(authdata->authctx);\n\t\tif (authdata->authdef != authdata->encryptdef && authdata->encryptctx && authdata->encryptdef)\n\t\t\tauthdata->encryptdef->destroy_ctx(authdata->encryptctx);\n\t\tif (authdata->config)\n\t\t\tfree_auth_config(authdata->config);\n\t\t/* DO NOT free authdef here, it will be done in unload_auths() */\n\t\tfree(authdata);\n\t\ttpp_transport_set_conn_extra(tfd, NULL);\n\t}\n\n\t/* set hop to 1 and send to inner */\n\tif ((rc = router_close_handler_inner(tfd, error, c, 1)) == 0) {\n\t\ttpp_transport_set_conn_ctx(tfd, NULL);\n\t\tTPP_DBPRT(\"Freeing context=%p for tfd=%d\", c, tfd);\n\t\tfree(c);\n\t}\n\treturn rc;\n}\n\n/**\n * @brief\n *\tThe timer handler function registered with the IO thread.\n *\n * @par Functionality\n *\tThis function is called periodically (after the amount of time as\n *\tspecified by router_next_event_expiry() function) by the IO thread. 
This\n *\tdrives sending notifications to any leaf listen nodes.\n *\n * @retval - next event time\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic int\nrouter_timer_handler(time_t now)\n{\n\ttpp_ctl_pkt_hdr_t hdr;\n\ttpp_chunk_t chunks[1];\n\tint send_update = 0;\n\tint ret = -1;\n\n\ttpp_lock(&lj_lock);\n\tif (router_last_leaf_joined > 0) {\n\t\tif ((now - router_last_leaf_joined) < 3) {\n\t\t\tret = 3; /* time not yet over, retry in the next 3 seconds */\n\t\t} else {\n\t\t\tsend_update = 1;\n\t\t\trouter_last_leaf_joined = 0;\n\t\t}\n\t}\n\ttpp_unlock(&lj_lock);\n\n\tif (send_update == 1) {\n\t\tint len;\n\n\t\tmemset(&hdr, 0, sizeof(tpp_ctl_pkt_hdr_t)); /* only to satisfy valgrind */\n\t\thdr.type = TPP_CTL_MSG;\n\t\thdr.code = TPP_MSG_UPDATE;\n\n\t\tlen = sizeof(tpp_ctl_pkt_hdr_t);\n\t\tchunks[0].data = (void *) &hdr;\n\t\tchunks[0].len = len;\n\n\t\t/* broadcast to self connected leaves asking for notification */\n\t\ttpp_read_lock(&router_lock);\n\t\tbroadcast_to_my_leaves(chunks, 1, -1, 1);\n\t\ttpp_unlock_rwlock(&router_lock);\n\t}\n\n\treturn ret;\n}\n\n/**\n * @brief\n *\tThe pre-send handler registered with the IO thread.\n *\n * @par Functionality\n *\tWhen the IO thread is ready to send out a packet over the wire, it calls\n *\ta prior registered \"pre-send\" handler. 
This pre-send handler (for routers)\n *\ttakes care of encrypting the data and saves the unencrypted data for the \"post-send\" handler\n *\tin the extra data associated with the IO connection\n *\n * @param[in] tfd - The actual IO connection on which data was sent (unused)\n * @param[in] pkt - The data packet that is sent out by the IO thread\n * @param[in] extra - The extra data associated with IO connection\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic int\nrouter_pkt_presend_handler(int tfd, tpp_packet_t *pkt, void *c, void *extra)\n{\n\tconn_auth_t *authdata = (conn_auth_t *) extra;\n\n\t/*\n\t * if presend handler is called from handle_disconnect()\n\t * then extra will be NULL and this is just a sending simulation\n\t * so no encryption needed\n\t */\n\tif (authdata == NULL || authdata->encryptdef == NULL || pkt == NULL)\n\t\treturn 0;\n\n\treturn (tpp_encrypt_pkt(authdata, pkt));\n}\n\n/**\n * @brief\n *\tWrapper function for the router to handle incoming data. This\n *  wrapper exists only to detect whether the inner function\n *  allocated memory in data_out and to free that memory in a\n *  clean way, so that we do not have to add a goto or a free\n *  in every return path of the inner function.\n *\n * @param[in] tfd - The physical connection over which data arrived\n * @param[in] buf - The pointer to the received data packet\n * @param[in] len - The length of the received data packet\n * @param[in] c   - The context associated with this physical connection\n * @param[in] extra - The extra data associated with IO connection\n *\n * @return Error code\n * @retval -1 - Failure\n * @retval  0 - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic int\nrouter_pkt_handler(int tfd, void *buf, int len, void *c, void *extra)\n{\n\tvoid *data_out = NULL;\n\tint rc = router_pkt_handler_inner(tfd, buf, &data_out, len, c, extra);\n\tfree(data_out);\n\treturn rc;\n}\n\n/**\n * @brief\n *\tInner handler function for the router to 
handle incoming data. When a data\n *\tpacket arrives, it determines the intended destination and\n *\tforwards the data packet to that destination.\n *\n * @param[in] tfd - The physical connection over which data arrived\n * @param[in] buf - The pointer to the received data packet\n * @param[out] data_out - The pointer to the newly allocated data buffer, if any\n * @param[in] len - The length of the received data packet\n * @param[in] c   - The context associated with this physical connection\n * @param[in] extra - The extra data associated with IO connection\n *\n * @return Error code\n * @retval -1 - Failure\n * @retval  0 - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic int\nrouter_pkt_handler_inner(int tfd, void *buf, void **data_out, int len, void *c, void *extra)\n{\n\ttpp_context_t *ctx = (tpp_context_t *) c;\n\ttpp_data_pkt_hdr_t *dhdr = buf;\n\tenum TPP_MSG_TYPES type;\n\ttpp_chunk_t chunks[2];\n\ttpp_router_t *target_router = NULL;\n\tint target_fd = -1;\n\ttpp_addr_t connected_host;\n\tconn_auth_t *authdata = (conn_auth_t *) extra;\n\ttpp_addr_t *addr = tpp_get_connected_host(tfd);\n\tchar msg[TPP_GEN_BUF_SZ];\n\tshort rc = -1;\n\n\tif (!addr)\n\t\treturn -1;\n\n\tmemcpy(&connected_host, addr, sizeof(tpp_addr_t));\n\tfree(addr);\n\nagain:\n\ttype = dhdr->type;\n\terrno = 0;\n\n\tif (type >= TPP_LAST_MSG)\n\t\treturn -1;\n\n\tswitch (type) {\n\t\tcase TPP_ENCRYPTED_DATA: {\n\t\t\tint len_out;\n\t\t\tint sz = sizeof(tpp_encrypt_hdr_t);\n\t\t\tif (authdata == NULL) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"tfd=%d, No auth data found\", tfd);\n\t\t\t\treturn -1;\n\t\t\t}\n\n\t\t\tif (authdata->encryptdef == NULL) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"connection doesn't support decryption of data\");\n\t\t\t\treturn -1;\n\t\t\t}\n\n\t\t\tif (authdata->encryptdef->decrypt_data(authdata->encryptctx, (void *) ((char *) buf + sz), (size_t) len - sz, data_out, (size_t *) &len_out) != 0) {\n\t\t\t\treturn 
-1;\n\t\t\t}\n\n\t\t\tif ((len - sz) > 0 && len_out <= 0) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"invalid decrypted data len: %d, pktlen: %d\", len_out, len - sz);\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\tdhdr = *data_out;\n\t\t\tlen = len_out;\n\t\t\tgoto again;\n\t\t} break;\n\n\t\tcase TPP_AUTH_CTX: {\n\t\t\ttpp_auth_pkt_hdr_t ahdr = {0};\n\t\t\tsize_t len_in = 0;\n\t\t\tvoid *data_in = NULL;\n\t\t\tconn_auth_t *authdata = (conn_auth_t *) extra;\n\n\t\t\tmemcpy(&ahdr, dhdr, sizeof(tpp_auth_pkt_hdr_t));\n\n\t\t\tif (authdata == NULL) {\n\t\t\t\tmsg[0] = '\\0';\n\t\t\t\tif (!is_string_in_arr(tpp_conf->supported_auth_methods, ahdr.auth_method))\n\t\t\t\t\tsnprintf(msg, sizeof(msg), \"tfd=%d, Authentication method %s not allowed in connection %s\", tfd, ahdr.auth_method, tpp_netaddr(&connected_host));\n\n\t\t\t\telse if (strcmp(ahdr.auth_method, AUTH_RESVPORT_NAME) != 0 && get_auth(ahdr.auth_method) == NULL)\n\t\t\t\t\tsnprintf(msg, sizeof(msg), \"tfd=%d, Authentication method not supported in connection %s\", tfd, tpp_netaddr(&connected_host));\n\n\t\t\t\telse if (ahdr.encrypt_method[0] != '\\0' && get_auth(ahdr.encrypt_method) == NULL)\n\t\t\t\t\tsnprintf(msg, sizeof(msg), \"tfd=%d, Encryption method not supported in connection %s\", tfd, tpp_netaddr(&connected_host));\n\n\t\t\t\tif (msg[0] != '\\0') {\n\t\t\t\t\t/* error message was set, so take action */\n\t\t\t\t\ttpp_log(LOG_CRIT, NULL, msg);\n\t\t\t\t\ttpp_send_ctl_msg(tfd, TPP_MSG_AUTHERR, &connected_host, &this_router->router_addr, -1, 0, msg);\n\t\t\t\t\treturn 0; /* let connection be alive, so we can send error */\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tlen_in = (size_t) len - sizeof(tpp_auth_pkt_hdr_t);\n\t\t\tdata_in = calloc(1, len_in);\n\t\t\tif (data_in == NULL) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating authdata credential\");\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\tmemcpy(data_in, (char *) dhdr + sizeof(tpp_auth_pkt_hdr_t), len_in);\n\n\t\t\tif (authdata == NULL) {\n\t\t\t\tauthdata = 
tpp_make_authdata(tpp_conf, AUTH_SERVER, ahdr.auth_method, ahdr.encrypt_method);\n\t\t\t\tif (authdata == NULL) {\n\t\t\t\t\t/* tpp_make_authdata already logged error */\n\t\t\t\t\tfree(data_in);\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t\ttpp_transport_set_conn_extra(tfd, authdata);\n\t\t\t}\n\n\t\t\trc = tpp_handle_auth_handshake(tfd, tfd, authdata, ahdr.for_encrypt, data_in, len_in);\n\t\t\tif (rc != 1) {\n\t\t\t\tfree(data_in);\n\t\t\t\treturn rc;\n\t\t\t}\n\n\t\t\tfree(data_in);\n\n\t\t\tif (ahdr.for_encrypt == FOR_ENCRYPT &&\n\t\t\t    strcmp(authdata->config->auth_method, AUTH_RESVPORT_NAME) != 0) {\n\n\t\t\t\tif (strcmp(authdata->config->auth_method, authdata->config->encrypt_method) != 0) {\n\t\t\t\t\tif (authdata->conn_initiator) {\n\t\t\t\t\t\trc = tpp_handle_auth_handshake(tfd, tfd, authdata, FOR_AUTH, NULL, 0);\n\t\t\t\t\t\tif (rc != 1) {\n\t\t\t\t\t\t\treturn rc;\n\t\t\t\t\t\t}\n\t\t\t\t\t} else\n\t\t\t\t\t\treturn 0;\n\t\t\t\t} else {\n\t\t\t\t\tauthdata->authctx = authdata->encryptctx;\n\t\t\t\t\tauthdata->authdef = authdata->encryptdef;\n\t\t\t\t\ttpp_transport_set_conn_extra(tfd, authdata);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (ctx == NULL) {\n\t\t\t\tif ((ctx = (tpp_context_t *) malloc(sizeof(tpp_context_t))) == NULL) {\n\t\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating tpp context\");\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t\tctx->ptr = NULL;\n\t\t\t\tctx->type = TPP_AUTH_NODE; /* denoting that this is an authenticated connection */\n\t\t\t}\n\n\t\t\t/*\n\t\t\t* associate this router structure (information) with\n\t\t\t* this physical connection\n\t\t\t*/\n\t\t\ttpp_transport_set_conn_ctx(tfd, ctx);\n\n\t\t\t/* send TPP_CTL_JOIN msg to fellow router */\n\t\t\treturn router_send_ctl_join(tfd, dhdr, c);\n\n\t\t} break;\n\n\t\tcase TPP_CTL_JOIN: {\n\t\t\tunsigned char hop;\n\t\t\tunsigned char node_type;\n\t\t\ttpp_join_pkt_hdr_t *hdr = (tpp_join_pkt_hdr_t *) dhdr;\n\n\t\t\thop = hdr->hop;\n\t\t\tnode_type = hdr->node_type;\n\n\t\t\tif (ctx 
== NULL) { /* connection not yet authenticated */\n\t\t\t\tmsg[0] = '\\0';\n\t\t\t\tif (extra && strcmp(((conn_auth_t *) extra)->config->auth_method, AUTH_RESVPORT_NAME) != 0) {\n\t\t\t\t\t/*\n\t\t\t\t\t * In case of external authentication, ctx must already be set\n\t\t\t\t\t * so error out if ctx is not set.\n\t\t\t\t\t */\n\t\t\t\t\tsnprintf(msg, sizeof(msg), \"tfd=%d Unauthenticated connection from %s\", tfd, tpp_netaddr(&connected_host));\n\t\t\t\t} else {\n\t\t\t\t\tif (!is_string_in_arr(tpp_conf->supported_auth_methods, AUTH_RESVPORT_NAME))\n\t\t\t\t\t\tsnprintf(msg, sizeof(msg), \"tfd=%d, Authentication method %s not allowed in connection %s\", tfd, AUTH_RESVPORT_NAME, tpp_netaddr(&connected_host));\n\n\t\t\t\t\telse if (tpp_transport_isresvport(tfd) != 0) /* reserved port based authentication, and is not yet authenticated, so check resv port */\n\t\t\t\t\t\tsnprintf(msg, sizeof(msg), \"Connection from non-reserved port, rejected\");\n\t\t\t\t}\n\t\t\t\tif (msg[0] != '\\0') {\n\t\t\t\t\t/* error message was set above, take action */\n\t\t\t\t\ttpp_log(LOG_CRIT, NULL, msg);\n\t\t\t\t\ttpp_send_ctl_msg(tfd, TPP_MSG_AUTHERR, &connected_host, &this_router->router_addr, -1, 0, msg);\n\t\t\t\t\treturn 0; /* let connection be alive, so we can send error */\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/* check if type was router or leaf */\n\t\t\tif (node_type == TPP_ROUTER_NODE) {\n\t\t\t\ttpp_router_t *r = NULL;\n\t\t\t\tvoid *pconn_host = &connected_host;\n\n\t\t\t\tTPP_DBPRT(\"Recvd TPP_CTL_JOIN from pbs_comm node %s, len=%d\", tpp_netaddr(&connected_host), len);\n\n\t\t\t\ttpp_write_lock(&router_lock);\n\n\t\t\t\t/* find associated router */\n\t\t\t\tpbs_idx_find(routers_idx, &pconn_host, (void **) &r, NULL);\n\t\t\t\tif (r) {\n\t\t\t\t\tif (r->conn_fd != -1) {\n\t\t\t\t\t\t/* this router had not yet disconnected,\n\t\t\t\t\t\t * so close the existing connection\n\t\t\t\t\t\t */\n\t\t\t\t\t\ttpp_log(LOG_CRIT, NULL, \"tfd=%d, pbs_comm %s is still connected while 
\"\n\t\t\t\t\t\t\t\t\t\"another connect arrived, dropping existing connection %d\",\n\t\t\t\t\t\t\ttfd, r->router_name, r->conn_fd);\n\t\t\t\t\t\ttpp_transport_close(r->conn_fd);\n\t\t\t\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tr = alloc_router(strdup(tpp_netaddr(&connected_host)), &connected_host);\n\t\t\t\t\tif (!r) {\n\t\t\t\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tr->conn_fd = tfd;\n\t\t\t\tr->initiator = 0;\n\t\t\t\tr->state = TPP_ROUTER_STATE_CONNECTED;\n\n\t\t\t\ttpp_log(LOG_CRIT, NULL, \"tfd=%d, pbs_comm %s connected\", tfd, tpp_netaddr(&r->router_addr));\n\n\t\t\t\tif (ctx == NULL) {\n\t\t\t\t\tif ((ctx = (tpp_context_t *) malloc(sizeof(tpp_context_t))) == NULL) {\n\t\t\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating tpp context\");\n\t\t\t\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tctx->ptr = r;\n\t\t\t\tctx->type = TPP_ROUTER_NODE;\n\n\t\t\t\t/*\n\t\t\t\t * associate this router structure (information) with\n\t\t\t\t * this physical connection\n\t\t\t\t */\n\t\t\t\ttpp_transport_set_conn_ctx(tfd, ctx);\n\n\t\t\t\t/* now send new router info about all leaves I have */\n\t\t\t\tsend_leaves_to_router(this_router, r);\n\n\t\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t\t\treturn 0;\n\n\t\t\t} else if (node_type == TPP_LEAF_NODE || node_type == TPP_LEAF_NODE_LISTEN) {\n\t\t\t\ttpp_leaf_t *l = NULL;\n\t\t\t\ttpp_router_t *r = NULL;\n\t\t\t\tint found;\n\t\t\t\tint i;\n\t\t\t\tint index = (int) hdr->index;\n\t\t\t\ttpp_addr_t *addrs;\n\t\t\t\tvoid *paddr;\n\n\t\t\t\tTPP_DBPRT(\"Recvd TPP_CTL_JOIN FOR LEAF from pbs_comm node %s, len=%d, hop=%d\", tpp_netaddr(&connected_host), len, hop);\n\n\t\t\t\tif (hdr->num_addrs == 0) {\n\t\t\t\t\t/* error, must have at least one address associated */\n\t\t\t\t\ttpp_log(LOG_CRIT, NULL, \"tfd=%d, No address associated with join msg from leaf\", 
tfd);\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t\taddrs = (tpp_addr_t *) (((char *) dhdr) + sizeof(tpp_join_pkt_hdr_t));\n\n\t\t\t\ttpp_write_lock(&router_lock);\n\n\t\t\t\tif (ctx == NULL || ctx->ptr == NULL) {\n\t\t\t\t\t/* router is myself */\n\t\t\t\t\tr = this_router;\n\t\t\t\t} else {\n\t\t\t\t\tvoid *pconn_host = &connected_host;\n\t\t\t\t\t/* must be a router forwarding leaves from its database to me */\n\n\t\t\t\t\t/* find associated router */\n\t\t\t\t\tpbs_idx_find(routers_idx, &pconn_host, (void **) &r, NULL);\n\t\t\t\t\tif (!r) {\n\t\t\t\t\t\tchar rname[TPP_MAXADDRLEN + 1];\n\n\t\t\t\t\t\tstrcpy(rname, tpp_netaddr(&connected_host));\n\t\t\t\t\t\ttpp_log(LOG_CRIT, NULL, \"tfd=%d, Failed to find pbs_comm %s in join for leaf %s\", tfd, rname, tpp_netaddr(&addrs[0]));\n\t\t\t\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\t/* find the leaf */\n\t\t\t\tfound = 1;\n\t\t\t\tpaddr = &addrs[0];\n\t\t\t\tpbs_idx_find(cluster_leaves_idx, &paddr, (void **) &l, NULL);\n\t\t\t\tif (!l) {\n\t\t\t\t\tfound = 0;\n\t\t\t\t\tl = (tpp_leaf_t *) calloc(1, sizeof(tpp_leaf_t));\n\t\t\t\t\tif (l)\n\t\t\t\t\t\tl->leaf_addrs = malloc(sizeof(tpp_addr_t) * hdr->num_addrs);\n\n\t\t\t\t\tif (!l || !l->leaf_addrs) {\n\t\t\t\t\t\tfree_leaf(l);\n\t\t\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating leaf\");\n\t\t\t\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\n\t\t\t\t\tl->leaf_type = node_type;\n\t\t\t\t\tmemcpy(l->leaf_addrs, addrs, sizeof(tpp_addr_t) * hdr->num_addrs);\n\t\t\t\t\tl->num_addrs = hdr->num_addrs;\n\n\t\t\t\t\tl->conn_fd = -1;\n\t\t\t\t}\n\n\t\t\t\tif (hop == 1) {\n\n\t\t\t\t\tfor (i = 0; i < l->num_addrs; i++) {\n\t\t\t\t\t\ttpp_log(LOG_CRIT, NULL, \"tfd=%d, Leaf registered address %s\", tfd, tpp_netaddr(&l->leaf_addrs[i]));\n\t\t\t\t\t}\n\n\t\t\t\t\tif (l->conn_fd != -1) {\n\t\t\t\t\t\t/* this leaf had not yet disconnected,\n\t\t\t\t\t\t * so close the existing 
connection.\n\t\t\t\t\t\t */\n\t\t\t\t\t\ttpp_log(LOG_CRIT, NULL, \"tfd=%d, Leaf %s still connected while \"\n\t\t\t\t\t\t\t\t\t\"another leaf connect arrived, dropping existing connection %d\",\n\t\t\t\t\t\t\ttfd, tpp_netaddr(&l->leaf_addrs[0]), l->conn_fd);\n\t\t\t\t\t\ttpp_transport_close(l->conn_fd);\n\t\t\t\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tl->conn_fd = tfd;\n\n\t\t\t\t\t/*\n\t\t\t\t\t * Set a context only if the JOIN came from a direct connection\n\t\t\t\t\t * from a leaf (hop == 1), and not a forwarded JOIN message.\n\t\t\t\t\t * In case of a forwarded JOIN message, the tfd is associated\n\t\t\t\t\t * with the routers context\n\t\t\t\t\t */\n\t\t\t\t\tif (ctx == NULL) {\n\t\t\t\t\t\tif ((ctx = (tpp_context_t *) malloc(sizeof(tpp_context_t))) == NULL) {\n\t\t\t\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating tpp context\");\n\t\t\t\t\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t\t\t\t\t\treturn -1;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tctx->ptr = l;\n\t\t\t\t\tctx->type = l->leaf_type;\n\t\t\t\t\ttpp_transport_set_conn_ctx(tfd, ctx);\n\t\t\t\t}\n\n\t\t\t\tTPP_DBPRT(\"tfd=%d, Router name = %s, address leaf = %p, leaf name=%s, index=%d, hop=%d\", tfd, r->router_name, (void *) l, tpp_netaddr(&l->leaf_addrs[0]), (int) index, hop);\n\n\t\t\t\t/*\n\t\t\t\t * router is not part of leaf's list\n\t\t\t\t * of routers already, so add\n\t\t\t\t */\n\t\t\t\ti = add_route_to_leaf(l, r, index);\n\t\t\t\tif (i == -1) {\n\t\t\t\t\ttpp_log(LOG_CRIT, NULL, \"tfd=%d, Leaf %s exists!\", tfd, tpp_netaddr(&l->leaf_addrs[0]));\n\t\t\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t\t\t\treturn 0;\n\t\t\t\t}\n\n\t\t\t\tif (pbs_idx_insert(r->my_leaves_idx, &l->leaf_addrs[0], l) != PBS_IDX_RET_OK) {\n\t\t\t\t\ttpp_log(LOG_CRIT, __func__, \"tfd=%d, Failed to add address %s to index of my leaves\", tfd, tpp_netaddr(&l->leaf_addrs[0]));\n\t\t\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\n\t\t\t\tif 
(found == 0) {\n\t\t\t\t\tint fatal = 0;\n\t\t\t\t\t/* add each address to the cluster_leaves_idx tree\n\t\t\t\t\t * since this is the primary \"routing table\"\n\t\t\t\t\t */\n\t\t\t\t\tfor (i = 0; i < l->num_addrs; i++) {\n\t\t\t\t\t\tif (pbs_idx_insert(cluster_leaves_idx, &l->leaf_addrs[i], l) != PBS_IDX_RET_OK) {\n\t\t\t\t\t\t\tvoid *unused;\n\t\t\t\t\t\t\tvoid *pleaf_addr = &l->leaf_addrs[i];\n\t\t\t\t\t\t\tif (pbs_idx_find(cluster_leaves_idx, &pleaf_addr, &unused, NULL) == PBS_IDX_RET_OK) {\n\t\t\t\t\t\t\t\tint k;\n\t\t\t\t\t\t\t\ttpp_log(LOG_CRIT, __func__, \"tfd=%d, Failed to add address %s to cluster-leaves index \"\n\t\t\t\t\t\t\t\t\t\t\t    \"since address already exists, dropping duplicate\",\n\t\t\t\t\t\t\t\t\ttfd, tpp_netaddr(&l->leaf_addrs[i]));\n\t\t\t\t\t\t\t\t/* remove this address from the list of addresses of the leaf */\n\t\t\t\t\t\t\t\tfor (k = i; k < (l->num_addrs - 1); k++) {\n\t\t\t\t\t\t\t\t\tl->leaf_addrs[k] = l->leaf_addrs[k + 1];\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tl->num_addrs--;\n\n\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\ttpp_log(LOG_CRIT, __func__, \"tfd=%d, Failed to add address %s to cluster-leaves index\", tfd, tpp_netaddr(&l->leaf_addrs[i]));\n\t\t\t\t\t\t\t\tfatal++;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tif (fatal > 0 || l->num_addrs == 0) {\n\t\t\t\t\t\ttpp_log(LOG_CRIT, NULL, \"tfd=%d, Leaf %s had %s problem adding addresses, rejecting connection\",\n\t\t\t\t\t\t\ttfd, tpp_netaddr(&l->leaf_addrs[0]), (fatal > 0) ? 
\"fatal\" : \"all duplicates\");\n\t\t\t\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif (r == this_router) {\n\t\t\t\t\tif (l->leaf_type == TPP_LEAF_NODE_LISTEN) {\n\t\t\t\t\t\tif (pbs_idx_insert(my_leaves_notify_idx, &l->leaf_addrs[0], l) != PBS_IDX_RET_OK) {\n\t\t\t\t\t\t\ttpp_log(LOG_CRIT, __func__, \"tfd=%d, Failed to add address %s to notify-leaves index\", tfd, tpp_netaddr(&l->leaf_addrs[0]));\n\t\t\t\t\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t\t\t\t\t\treturn -1;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif (l->leaf_type != TPP_LEAF_NODE_LISTEN) {\n\t\t\t\t\t/* listen type leaf nodes might be interested to hear about\n\t\t\t\t\t * the other joined leaves. However don't send it updates\n\t\t\t\t\t * for each leaf; rather set a timer, postponing it each time\n\t\t\t\t\t * we get an update by 2 seconds\n\t\t\t\t\t */\n\t\t\t\t\ttpp_lock(&lj_lock);\n\t\t\t\t\trouter_last_leaf_joined = time(0);\n\t\t\t\t\ttpp_unlock(&lj_lock);\n\t\t\t\t}\n\n\t\t\t\tif (hop == 1) {\n\t\t\t\t\t/* broadcast to other routers if the hop is 1\n\t\t\t\t\t * while forwarding to next routers, they will\n\t\t\t\t\t * see incremented hop and will only update\n\t\t\t\t\t * their own data structures and will not\n\t\t\t\t\t * forward any further\n\t\t\t\t\t */\n\t\t\t\t\thop++; /* increment hop */\n\t\t\t\t\thdr->hop = hop;\n\n\t\t\t\t\tchunks[0].data = (char *) dhdr;\n\t\t\t\t\tchunks[0].len = len;\n\n\t\t\t\t\t/*\n\t\t\t\t\t * broadcast JOIN pkt to other routers,\n\t\t\t\t\t * except from where it came from\n\t\t\t\t\t */\n\t\t\t\t\tbroadcast_to_my_routers(chunks, 1, tfd);\n\t\t\t\t}\n\n\t\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t\treturn 0;\n\t\t} break; /* TPP_CTL_JOIN */\n\n\t\tcase TPP_CTL_LEAVE: {\n\t\t\tunsigned char hop;\n\t\t\ttpp_leave_pkt_hdr_t *hdr = (tpp_leave_pkt_hdr_t *) dhdr;\n\n\t\t\thop = hdr->hop;\n\n\t\t\tif (ctx == NULL) {\n\t\t\t\tTPP_DBPRT(\"tfd=%d, No context, leaving\", 
tfd);\n\t\t\t\treturn 0;\n\t\t\t}\n\n\t\t\tTPP_DBPRT(\"Recvd TPP_CTL_LEAVE message tfd=%d from src=%s, hop=%d, type=%d\", tfd, tpp_netaddr(&connected_host), hop, ctx->type);\n\n\t\t\tif (ctx->type == TPP_LEAF_NODE || ctx->type == TPP_LEAF_NODE_LISTEN) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"tfd=%d, Internal error! TPP_CTL_LEAVE arrived with a leaf context\", tfd);\n\t\t\t\treturn -1;\n\n\t\t\t} else if (ctx->type == TPP_ROUTER_NODE) {\n\t\t\t\t/*\n\t\t\t\t * If a TPP_CTL_LEAVE message comes, it is basically\n\t\t\t\t * from a leaf, but the fd carries the router's context\n\t\t\t\t */\n\t\t\t\ttpp_leaf_t *l = NULL;\n\t\t\t\ttpp_addr_t *src_addr = (tpp_addr_t *) (((char *) dhdr) + sizeof(tpp_leave_pkt_hdr_t));\n\n\t\t\t\ttpp_write_lock(&router_lock);\n\n\t\t\t\t/* find the leaf context to pass to close handler */\n\t\t\t\tpbs_idx_find(cluster_leaves_idx, (void **) &src_addr, (void **) &l, NULL);\n\t\t\t\tif (!l) {\n\t\t\t\t\tTPP_DBPRT(\"No leaf %s found\", tpp_netaddr(src_addr));\n\t\t\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t\t\t\treturn 0;\n\t\t\t\t}\n\n\t\t\t\ttpp_unlock_rwlock(&router_lock);\n\n\t\t\t\tif ((ctx = (tpp_context_t *) malloc(sizeof(tpp_context_t))) == NULL) {\n\t\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating tpp context\");\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t\tctx->ptr = l;\n\t\t\t\tctx->type = l->leaf_type;\n\n\t\t\t\trouter_close_handler_inner(tfd, 0, ctx, hop);\n\n\t\t\t\t/* we created the fake context here, so delete it here */\n\t\t\t\tfree(ctx);\n\t\t\t}\n\t\t\treturn 0;\n\t\t} break; /* TPP_CTL_LEAVE */\n\n\t\tcase TPP_MCAST_DATA: {\n\t\t\tint i, k;\n\t\t\ttpp_addr_t *src_host;\n\t\t\ttypedef struct {\n\t\t\t\tint target_fd;\t /* target comm fd */\n\t\t\t\tint num_streams; /* actual number of destination streams */\n\t\t\t\tchar *router_name;\n\t\t\t\tvoid *cmpr_ctx;\n\t\t\t\tvoid *minfo_buf; /* allocate size for total members */\n\t\t\t} target_comm_struct_t;\n\n\t\t\ttarget_comm_struct_t *rlist = NULL;\n\t\t\tint rsize = 
0;\n\t\t\tint csize = 0;\n\t\t\tvoid *tmp;\n\n\t\t\t/* find the fd to forward to via the associated router */\n\t\t\ttpp_mcast_pkt_hdr_t *mhdr = (tpp_mcast_pkt_hdr_t *) dhdr;\n\t\t\tunsigned char orig_hop;\n\t\t\ttpp_mcast_pkt_info_t *minfo;\n\t\t\tvoid *minfo_base = NULL;\n\t\t\tvoid *info_start = (char *) dhdr + sizeof(tpp_mcast_pkt_hdr_t);\n\t\t\tunsigned int payload_len;\n\t\t\tvoid *payload;\n\t\t\tunsigned int cmprsd_len = ntohl(mhdr->info_cmprsd_len);\n\t\t\tunsigned int num_streams = ntohl(mhdr->num_streams);\n\t\t\tunsigned int info_len = ntohl(mhdr->info_len);\n\n\t\t\tif (cmprsd_len > 0) {\n\t\t\t\tpayload_len = len - sizeof(tpp_mcast_pkt_hdr_t) - cmprsd_len;\n\t\t\t\tpayload = ((char *) mhdr) + sizeof(tpp_mcast_pkt_hdr_t) + cmprsd_len;\n\t\t\t} else {\n\t\t\t\tpayload_len = len - sizeof(tpp_mcast_pkt_hdr_t) - info_len;\n\t\t\t\tpayload = ((char *) mhdr) + sizeof(tpp_mcast_pkt_hdr_t) + info_len;\n\t\t\t\tminfo_base = info_start;\n\t\t\t}\n\n\t\t\tsrc_host = &mhdr->src_addr;\n\t\t\torig_hop = mhdr->hop;\n\n\t\t\ttpp_log(LOG_INFO, NULL, \"tfd=%d, MCAST packet from %s, %u member streams, cmprsd_len=%d, info_len=%d, len=%d\",\n\t\t\t\ttfd, tpp_netaddr(src_host), num_streams, cmprsd_len, info_len, payload_len);\n\n#ifdef PBS_COMPRESSION_ENABLED\n\t\t\tif (cmprsd_len > 0) {\n\t\t\t\tminfo_base = tpp_inflate(info_start, cmprsd_len, info_len);\n\t\t\t\tif (minfo_base == NULL) {\n\t\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Decompression of mcast hdr failed\");\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t}\n#endif\n\n\t\t\tmhdr->hop = 1; /* set hop=1 to forward, use orig_hop for checking */\n\n\t\t\ttpp_log(LOG_INFO, __func__, \"Total mcast member streams=%d\", num_streams);\n\n\t\t\t/*\n\t\t\t * go backwards in an attempt to distribute mcast packet\n\t\t\t * first to other routers and then to local nodes\n\t\t\t */\n\t\t\tfor (k = num_streams - 1; k >= 0; k--) {\n\t\t\t\ttpp_addr_t *dest_host;\n\t\t\t\tunsigned int src_sd;\n\t\t\t\ttpp_leaf_t *l = NULL;\n\n\t\t\t\tminfo 
= (tpp_mcast_pkt_info_t *) (((char *) minfo_base) + k * sizeof(tpp_mcast_pkt_info_t));\n\n\t\t\t\tdest_host = &minfo->dest_addr;\n\t\t\t\tsrc_sd = ntohl(minfo->src_sd);\n\n\t\t\t\tTPP_DBPRT(\"MCAST data on fd=%u\", src_sd);\n\n\t\t\t\ttpp_read_lock(&router_lock);\n\t\t\t\tpbs_idx_find(cluster_leaves_idx, (void **) &dest_host, (void **) &l, NULL);\n\t\t\t\tif (l == NULL) {\n\t\t\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t\t\t\tsnprintf(msg, sizeof(msg), \"pbs_comm:%s: Dest not found at pbs_comm\", tpp_netaddr(&this_router->router_addr));\n\t\t\t\t\tlog_noroute(src_host, dest_host, src_sd, msg);\n\t\t\t\t\ttpp_send_ctl_msg(tfd, TPP_MSG_NOROUTE, src_host, dest_host, src_sd, 0, msg);\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\n\t\t\t\t/* find a router that is still connected */\n\t\t\t\ttarget_router = get_preferred_router(l, this_router, &target_fd);\n\t\t\t\ttpp_unlock_rwlock(&router_lock);\n\n\t\t\t\tif (target_router == NULL) {\n\t\t\t\t\tsnprintf(msg, sizeof(msg), \"pbs_comm:%s: No target pbs_comm found\", tpp_netaddr(&this_router->router_addr));\n\t\t\t\t\tlog_noroute(src_host, dest_host, src_sd, msg);\n\t\t\t\t\ttpp_send_ctl_msg(tfd, TPP_MSG_NOROUTE, src_host, dest_host, src_sd, 0, msg);\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\n\t\t\t\tif (target_router == this_router) {\n\t\t\t\t\ttpp_packet_t *pkt = NULL;\n\t\t\t\t\ttpp_data_pkt_hdr_t *shdr = NULL;\n\n\t\t\t\t\tpkt = tpp_bld_pkt(NULL, NULL, sizeof(tpp_data_pkt_hdr_t), 1, (void **) &shdr);\n\t\t\t\t\tif (!pkt) {\n\t\t\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to build packet\");\n\t\t\t\t\t\tgoto mcast_err;\n\t\t\t\t\t}\n\n\t\t\t\t\tshdr->type = TPP_DATA;\n\t\t\t\t\tshdr->src_sd = minfo->src_sd;\n\t\t\t\t\tshdr->src_magic = minfo->src_magic;\n\t\t\t\t\tshdr->dest_sd = minfo->dest_sd;\n\t\t\t\t\tshdr->totlen = mhdr->totlen;\n\t\t\t\t\tmemcpy(&shdr->src_addr, &mhdr->src_addr, sizeof(tpp_addr_t));\n\t\t\t\t\tmemcpy(&shdr->dest_addr, &minfo->dest_addr, sizeof(tpp_addr_t));\n\n\t\t\t\t\tif (!tpp_bld_pkt(pkt, payload, 
payload_len, 1, NULL)) {\n\t\t\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to build packet\");\n\t\t\t\t\t\tgoto mcast_err;\n\t\t\t\t\t}\n\n\t\t\t\t\tTPP_DBPRT(\"Send mcast indiv packet to %s\", tpp_netaddr(&shdr->dest_addr));\n\n\t\t\t\t\tif (tpp_transport_vsend(target_fd, pkt) != 0) {\n\t\t\t\t\t\ttpp_log(LOG_ERR, __func__, \"Failed to send mcast indiv pkt\");\n\t\t\t\t\t\ttpp_transport_close(target_fd);\n\t\t\t\t\t\tgoto mcast_err;\n\t\t\t\t\t}\n\t\t\t\t} else if (orig_hop == 0) {\n\t\t\t\t\t/* add this to list of routers to whom we need to send */\n\t\t\t\t\t/**\n\t\t\t\t\t * now walk list backwards checking if router was already added.\n\t\t\t\t\t * Rationale for checking backwards is that the last router\n\t\t\t\t\t * that we added data to, is probably the one that the next\n\t\t\t\t\t * few nodes are attached to as well.\n\t\t\t\t\t * Might be able to use a hash here for faster search\n\t\t\t\t\t **/\n\t\t\t\t\tint found = -1;\n\t\t\t\t\tfor (i = csize - 1; i >= 0; i--) {\n\t\t\t\t\t\tif (rlist[i].target_fd == target_fd) {\n\t\t\t\t\t\t\tfound = i;\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tif (found == -1) {\n\t\t\t\t\t\tint c_minfo_len;\n\t\t\t\t\t\tif (csize == rsize) {\n\t\t\t\t\t\t\t/* got to add, but no space */\n\t\t\t\t\t\t\ttmp = realloc(rlist, sizeof(target_comm_struct_t) * (rsize + RLIST_INC));\n\t\t\t\t\t\t\tif (!tmp) {\n\t\t\t\t\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory resizing pbs_comm list to %lu bytes\", (unsigned long) (sizeof(int) * rsize));\n\t\t\t\t\t\t\t\tgoto mcast_err;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\trsize += RLIST_INC;\n\t\t\t\t\t\t\trlist = tmp;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tfound = csize++; /* the last index, and increment post */\n\t\t\t\t\t\tmemset(&rlist[found], 0, sizeof(target_comm_struct_t));\n\t\t\t\t\t\trlist[found].target_fd = target_fd;\t\t       /* add this fd to the list of fds to send to */\n\t\t\t\t\t\trlist[found].router_name = target_router->router_name; /* keep a pointer to the router 
name */\n\n\t\t\t\t\t\t/* allocate minfo_buf for this target comm */\n\t\t\t\t\t\tc_minfo_len = sizeof(tpp_mcast_pkt_info_t) * num_streams;\n\t\t\t\t\t\tif (tpp_conf->compress == 1 && c_minfo_len > TPP_COMPR_SIZE) {\n\t\t\t\t\t\t\trlist[found].cmpr_ctx = tpp_multi_deflate_init(c_minfo_len);\n\t\t\t\t\t\t\tif (rlist[found].cmpr_ctx == NULL)\n\t\t\t\t\t\t\t\tgoto mcast_err;\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\trlist[found].minfo_buf = malloc(c_minfo_len);\n\t\t\t\t\t\t\tif (!rlist[found].minfo_buf) {\n\t\t\t\t\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating mcast buffer of %d bytes\", c_minfo_len);\n\t\t\t\t\t\t\t\tgoto mcast_err;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t} /* if entry not found */\n\n\t\t\t\t\t/* at this point we have the entry for this particular target comm */\n\t\t\t\t\t/* copy (or compress) the minfo for the target leaf */\n\t\t\t\t\tif (rlist[found].cmpr_ctx == NULL) { /* no compression */\n\t\t\t\t\t\ttpp_mcast_pkt_info_t *c_minfo =\n\t\t\t\t\t\t\t(tpp_mcast_pkt_info_t *) ((char *) rlist[found].minfo_buf + (rlist[found].num_streams * sizeof(tpp_mcast_pkt_info_t)));\n\t\t\t\t\t\tmemcpy(c_minfo, minfo, sizeof(tpp_mcast_pkt_info_t));\n\t\t\t\t\t} else {\n\t\t\t\t\t\tif (tpp_multi_deflate_do(rlist[found].cmpr_ctx, 0, minfo, sizeof(tpp_mcast_pkt_info_t)) != 0)\n\t\t\t\t\t\t\tgoto mcast_err;\n\t\t\t\t\t}\n\n\t\t\t\t\trlist[found].num_streams++;\n\t\t\t\t}\n\t\t\t} /* for k streams */\n\n\t\t\tif (csize > 0) {\n\t\t\t\ttpp_log(LOG_INFO, __func__, \"Total target comms=%d\", csize);\n\n\t\t\t\t/* finish up the MCAST packets for each target comm and send */\n\t\t\t\tfor (k = 0; k < csize; k++) {\n\t\t\t\t\tvoid *t_minfo_buf = NULL;\n\t\t\t\t\tunsigned int t_minfo_len = 0;\n\t\t\t\t\ttpp_mcast_pkt_hdr_t *t_mhdr = NULL;\n\t\t\t\t\ttpp_packet_t *pkt = NULL;\n\n\t\t\t\t\tpkt = tpp_bld_pkt(NULL, mhdr, sizeof(tpp_mcast_pkt_hdr_t), 1, (void **) &t_mhdr);\n\t\t\t\t\tif (!pkt) {\n\t\t\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to build 
packet\");\n\t\t\t\t\t\tgoto mcast_err;\n\t\t\t\t\t}\n\n\t\t\t\t\tt_mhdr->num_streams = htonl(rlist[k].num_streams);\n\t\t\t\t\tt_minfo_len = rlist[k].num_streams * sizeof(tpp_mcast_pkt_info_t);\n\t\t\t\t\tt_mhdr->info_len = htonl(t_minfo_len);\n\n\t\t\t\t\t/* the router information has been collected in rlist[k] */\n\t\t\t\t\tif (rlist[k].cmpr_ctx != NULL) {\n\t\t\t\t\t\tif (tpp_multi_deflate_do(rlist[k].cmpr_ctx, 1, NULL, 0) != 0) /* finish the compression */\n\t\t\t\t\t\t\tgoto mcast_err;\n\t\t\t\t\t\tt_minfo_buf = tpp_multi_deflate_done(rlist[k].cmpr_ctx, &t_minfo_len);\n\t\t\t\t\t\tif (t_minfo_buf == NULL)\n\t\t\t\t\t\t\tgoto mcast_err;\n\t\t\t\t\t\tt_mhdr->info_cmprsd_len = htonl(t_minfo_len);\n\t\t\t\t\t\trlist[k].cmpr_ctx = NULL; /* done with compression */\n\t\t\t\t\t} else {\n\t\t\t\t\t\tt_minfo_buf = rlist[k].minfo_buf;\n\t\t\t\t\t\tt_mhdr->info_cmprsd_len = 0;\n\t\t\t\t\t}\n\n\t\t\t\t\tif (!tpp_bld_pkt(pkt, t_minfo_buf, t_minfo_len, 0, NULL)) {\n\t\t\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to build packet\");\n\t\t\t\t\t\tgoto mcast_err;\n\t\t\t\t\t}\n\n\t\t\t\t\tif (!tpp_bld_pkt(pkt, payload, payload_len, 1, NULL)) {\n\t\t\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to build packet\");\n\t\t\t\t\t\tgoto mcast_err;\n\t\t\t\t\t}\n\n\t\t\t\t\ttpp_log(LOG_INFO, __func__, \"Sending MCAST packet to %s, num_streams=%d\", rlist[k].router_name, rlist[k].num_streams);\n\t\t\t\t\tif (tpp_transport_vsend(rlist[k].target_fd, pkt) != 0)\n\t\t\t\t\t\ttpp_log(LOG_ERR, __func__, \"send failed: errno = %d\", errno);\n\t\t\t\t}\n\t\t\t}\n\t\tmcast_err:\n\t\t\tif (cmprsd_len > 0)\n\t\t\t\tfree(minfo_base);\n\n\t\t\tfree(rlist); /* minfo_buf which was allocated will be freed when sent */\n\n\t\t\ttpp_log(LOG_INFO, NULL, \"mcast done\");\n\n\t\t\treturn 0;\n\t\t} break; /* TPP_MCAST_DATA */\n\n\t\tcase TPP_DATA:\n\t\tcase TPP_CLOSE_STRM: {\n\t\t\ttpp_leaf_t *l = NULL;\n\t\t\ttpp_addr_t *src_host, *dest_host;\n\t\t\ttpp_packet_t *pkt = NULL;\n\t\t\tunsigned int 
src_sd;\n\n\t\t\tsrc_host = &dhdr->src_addr;\n\t\t\tdest_host = &dhdr->dest_addr;\n\t\t\tsrc_sd = ntohl(dhdr->src_sd);\n\n\t\t\ttpp_read_lock(&router_lock);\n\n\t\t\tpbs_idx_find(cluster_leaves_idx, (void **) &dest_host, (void **) &l, NULL);\n\t\t\tif (l == NULL) {\n\t\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t\t\tsnprintf(msg, sizeof(msg), \"tfd=%d, pbs_comm:%s: Dest not found\", tfd, tpp_netaddr(&this_router->router_addr));\n\t\t\t\tlog_noroute(src_host, dest_host, src_sd, msg);\n\t\t\t\ttpp_send_ctl_msg(tfd, TPP_MSG_NOROUTE, src_host, dest_host, src_sd, 0, msg);\n\t\t\t\treturn 0;\n\t\t\t}\n\n\t\t\t/* find a router that is still connected */\n\t\t\ttarget_router = get_preferred_router(l, this_router, &target_fd);\n\t\t\ttpp_unlock_rwlock(&router_lock);\n\n\t\t\tif (target_router == NULL) {\n\t\t\t\tsnprintf(msg, sizeof(msg), \"tfd=%d, pbs_comm:%s: No target pbs_comm found\", tfd, tpp_netaddr(&this_router->router_addr));\n\t\t\t\tlog_noroute(src_host, dest_host, src_sd, msg);\n\t\t\t\ttpp_send_ctl_msg(tfd, TPP_MSG_NOROUTE, src_host, dest_host, src_sd, 0, msg);\n\t\t\t\treturn 0;\n\t\t\t}\n\n\t\t\tpkt = tpp_bld_pkt(NULL, dhdr, len, 1, NULL);\n\t\t\tif (!pkt) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to build packet\");\n\t\t\t\treturn 0;\n\t\t\t}\n\n\t\t\trc = tpp_transport_vsend(target_fd, pkt);\n\t\t\tif (rc == -1) {\n\t\t\t\ttpp_log(LOG_ERR, __func__, \"Failed to send TPP_DATA/TPP_CLOSE_STRM\");\n\t\t\t\ttpp_transport_close(target_fd);\n\t\t\t}\n\n\t\t\treturn rc; /* 0 - success, -1 failed, -2 app mbox full */\n\t\t} break;\t   /* TPP_DATA, TPP_CLOSE_STRM */\n\n\t\tcase TPP_CTL_MSG: {\n\t\t\ttpp_ctl_pkt_hdr_t *ehdr = (tpp_ctl_pkt_hdr_t *) dhdr;\n\t\t\ttpp_leaf_t *l = NULL;\n\t\t\tint subtype = ehdr->code;\n\n\t\t\tif (subtype == TPP_MSG_NOROUTE) {\n\t\t\t\tchar lbuf[TPP_MAXADDRLEN + 1];\n\t\t\t\ttpp_packet_t *pkt = NULL;\n\t\t\t\ttpp_addr_t *dest_host = &ehdr->dest_addr;\n\t\t\t\tchar *msg = ((char *) ehdr) + 
sizeof(tpp_ctl_pkt_hdr_t);\n\n\t\t\t\tstrcpy(lbuf, tpp_netaddr(&ehdr->dest_addr));\n\t\t\t\ttpp_log(LOG_WARNING, __func__, \"tfd=%d, Recvd TPP_CTL_NOROUTE for message, %s(sd=%d) -> %s: %s\",\n\t\t\t\t\ttfd, lbuf, ntohl(ehdr->src_sd), tpp_netaddr(&ehdr->src_addr), msg);\n\n\t\t\t\t/* find the fd to forward to via the associated router */\n\t\t\t\ttpp_read_lock(&router_lock);\n\n\t\t\t\tpbs_idx_find(cluster_leaves_idx, (void **) &dest_host, (void **) &l, NULL);\n\t\t\t\tif (l == NULL) {\n\t\t\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t\t\t\treturn 0;\n\t\t\t\t}\n\t\t\t\t/* find a router that is still connected */\n\t\t\t\ttarget_router = get_preferred_router(l, this_router, &target_fd);\n\n\t\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t\t\tif (target_router == NULL) {\n\t\t\t\t\ttpp_log(LOG_WARNING, NULL, \"tfd=%d, No connections to send TPP_CTL_NOROUTE\", tfd);\n\t\t\t\t\treturn 0;\n\t\t\t\t}\n\n\t\t\t\tpkt = tpp_bld_pkt(NULL, dhdr, len, 1, NULL);\n\t\t\t\tif (!pkt) {\n\t\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to build packet\");\n\t\t\t\t\treturn 0;\n\t\t\t\t}\n\t\t\t\tif (tpp_transport_vsend(target_fd, pkt) != 0) {\n\t\t\t\t\ttpp_log(LOG_ERR, NULL, \"tfd=%d, Failed to send pkt type TPP_CTL_NOROUTE\", tfd);\n\t\t\t\t\ttpp_transport_close(target_fd);\n\t\t\t\t}\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t} break; /* TPP_CTL_MSG */\n\n\t\tdefault:\n\t\t\t/* no known message type, log and close connection by returning error code */\n\t\t\ttpp_log(LOG_CRIT, __func__, \"tfd=%d, Unknown message type = %d\", tfd, type);\n\t} /* switch */\n\n\treturn -1;\n}\n\n/**\n * @brief\n *\tConvenience function to get the most preferred route to reach a leaf\n *\tIf the leaf is directly connected to this router, then the l->conn_fd is\n *\talready set, so just use it.\n *\tIf not, then search in the list of routes for the leaf starting from index\n *\t0 (since its sorted on preference), finding a router that is still\n *\tconnected, i.e., r[i]->conn_fd is not -1.\n *\n * @param[in] - l \t- 
Pointer to the leaf for which to find route\n * @param[in] - this_router - Pointer to the local router\n * @param[out] - fd - fd of the chosen router\n *\n * @return\tRouter to be used\n * @retval\tNULL  - Could not find a connected router\n * @retval\t!NULL  - Success (The router is returned).\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic tpp_router_t *\nget_preferred_router(tpp_leaf_t *l, tpp_router_t *this_router, int *fd)\n{\n\tint i;\n\ttpp_router_t *r = NULL;\n\n\t*fd = -1;\n\n\tif (l->conn_fd != -1) {\n\t\tr = this_router;\n\t\t*fd = l->conn_fd;\n\t} else {\n\n\t\t/*\n\t\t * not directly connected to me, so search for a router\n\t\t * to which it is connected\n\t\t */\n\t\tif (l->r) {\n\t\t\tfor (i = 0; i < l->tot_routers; i++) {\n\t\t\t\tif (l->r[i]) {\n\t\t\t\t\tif (l->r[i]->conn_fd != -1) {\n\t\t\t\t\t\tr = l->r[i];\n\t\t\t\t\t\t*fd = r->conn_fd;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\treturn r;\n}\n\n/**\n * @brief\n *\tConvenience function to delete the route (router) that matches the given\n *\tconnection fd from a leaf's list of routers.\n *\n * @param[in] - l - Pointer to the leaf whose route list to update\n * @param[in] - tfd - The fd of the connection involved\n *\n * @return\tThe router that was removed\n * @retval\tNULL    - Failure (Could not find router).\n * @retval\t!=NULL  - Success (The router that was removed is returned).\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic tpp_router_t *\ndel_router_from_leaf(tpp_leaf_t *l, int tfd)\n{\n\tint i;\n\ttpp_router_t *r = NULL;\n\n\tfor (i = 0; i < l->tot_routers; i++) {\n\t\tif (l->r[i] &&\t\t\t  /* router exists in this slot */\n\t\t    ((l->r[i]->conn_fd == tfd) || /* router fd matches tfd */\n\t\t     (l->conn_fd == tfd && l->r[i]->conn_fd == -1))) {\n\t\t\tTPP_DBPRT(\"Removing pbs_comm %s from leaf %s\", l->r[i]->router_name, tpp_netaddr(&l->leaf_addrs[0]));\n\t\t\tr = 
l->r[i];\n\t\t\tl->r[i] = NULL;\n\t\t\tl->num_routers--;\n\t\t\tif (l->num_routers == 0)\n\t\t\t\tfree(l->r);\n\t\t\tTPP_DBPRT(\"pbs_comm count for leaf=%s is %d\", tpp_netaddr(&l->leaf_addrs[0]), l->num_routers);\n\t\t\treturn r;\n\t\t}\n\t}\n\treturn NULL;\n}\n\n/**\n * @brief\n *\tConvenience function to add a route to a leaf's list of routes at the\n *\tspecified preference (specified by the index parameter).\n *\n * @param[in] - l - Pointer to the leaf for which to find route\n * @param[in] - r - Pointer to the router to add\n * @param[in] - index - The preference for this route\n *\n * @return\tError code\n * @retval\t-1    - Failure (Another router exists at this index).\n * @retval\t!=-1  - Success (The router index is returned).\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic int\nadd_route_to_leaf(tpp_leaf_t *l, tpp_router_t *r, int index)\n{\n\t/*\n\t * Associate the router with the leaf\n\t *\n\t * put the router in the list of routers of the leaf\n\t * at the specified index. 
The index is important to know\n\t * the priority of the router to reach this leaf\n\t */\n\tif (index == -1)\n\t\treturn -1; /* error - index must be set before calling add route */\n\n\tif (index >= l->tot_routers) {\n\t\tint sz;\n\t\tint i;\n\n\t\tsz = index + 3;\n\t\tl->r = realloc(l->r, sz * sizeof(tpp_router_t *));\n\t\tfor (i = l->tot_routers; i < sz; i++)\n\t\t\tl->r[i] = NULL;\n\t\tl->tot_routers = sz;\n\t}\n\n\tl->r[index] = r;\n\tl->num_routers++;\n\n#ifdef DEBUG\n\t{\n\t\tint i;\n\n\t\tfprintf(stderr, \"Leaf %s:%d routers [\", tpp_netaddr(&l->leaf_addrs[0]), l->conn_fd);\n\t\tfor (i = 0; i < l->tot_routers; i++) {\n\t\t\tif (l->r[i] && l->r[i]->router_name)\n\t\t\t\tfprintf(stderr, \"%s:%d,\", l->r[i]->router_name, l->r[i]->conn_fd);\n\t\t}\n\t\tfprintf(stderr, \"],router_count=%d\\n\", l->num_routers);\n\t}\n#endif\n\n\treturn index;\n}\n\n/**\n * @brief\n *\tConvenience function to find the index of a router in the leaf's\n *\tlist of routers associated\n *\n * @param[in] - l - Pointer to the leaf\n * @param[in] - r - Pointer to the router to find\n *\n * @return\tIndex of the router\n * @retval\t-1    - Failure (router not found)\n * @retval\t!=-1  - Success (The router index is returned).\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic int\nleaf_get_router_index(tpp_leaf_t *l, tpp_router_t *r)\n{\n\tint i;\n\tfor (i = 0; i < l->tot_routers; i++) {\n\t\tif (l->r[i] == r)\n\t\t\treturn i;\n\t}\n\treturn -1;\n}\n\n/**\n * @brief\n *\tInitializes the Router\n *\n * @par Functionality:\n *\tCreates indexes for routers and leaves connected to this router.\n *\tRegisters the various handlers to be called from the IO thread.\n *\tFinally connect to all other routers listed.\n *\n * @see\n *\ttpp_transport_init\n *\ttpp_transport_set_handlers\n *\n * @param[in] cnf - The tpp configuration structure\n *\n * @return     - The file descriptor that APP must use to monitor for events\n * @retval  -1 - Function failed\n * @retval 
!-1 - Success, read end of the pipe is returned to APP to monitor\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_init_router(struct tpp_config *cnf)\n{\n\tint j;\n\ttpp_router_t *r;\n\ttpp_context_t *ctx = NULL;\n\n\ttpp_conf = cnf;\n\n\t/* before doing anything else, initialize the key to the tls */\n\tif (tpp_init_tls_key() != 0) {\n\t\t/* can only use prints since tpp key init failed */\n\t\tfprintf(stderr, \"Failed to initialize tls key\\n\");\n\t\treturn -1;\n\t}\n\n\ttpp_init_lock(&lj_lock);\n\ttpp_init_rwlock(&router_lock);\n\n\trouters_idx = pbs_idx_create(0, sizeof(tpp_addr_t));\n\tif (routers_idx == NULL) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to create index for pbs comms\");\n\t\treturn -1;\n\t}\n\n\tcluster_leaves_idx = pbs_idx_create(0, sizeof(tpp_addr_t));\n\tif (cluster_leaves_idx == NULL) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to create index for cluster leaves\");\n\t\treturn -1;\n\t}\n\n\tmy_leaves_notify_idx = pbs_idx_create(0, sizeof(tpp_addr_t));\n\tif (my_leaves_notify_idx == NULL) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to create index for leaves requiring notification\");\n\t\treturn -1;\n\t}\n\n\tr = alloc_router(tpp_conf->node_name, NULL);\n\tif (!r)\n\t\treturn -1; /* error already logged */\n\n\tthis_router = r; /* mark this one as this router */\n\n\t/* first set the transport handlers */\n\ttpp_transport_set_handlers(router_pkt_presend_handler, router_pkt_handler, router_close_handler, router_post_connect_handler, router_timer_handler);\n\n\tif ((tpp_transport_init(tpp_conf)) == -1)\n\t\treturn -1;\n\n\t/* initiate connections to sister routers */\n\tj = 0;\n\ttpp_write_lock(&router_lock);\n\twhile (tpp_conf->routers && tpp_conf->routers[j]) {\n\t\t/* add to connection table */\n\n\t\tr = alloc_router(tpp_conf->routers[j], NULL);\n\t\tif (!r) {\n\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t\treturn -1; /* error already logged */\n\t\t}\n\t\tr->initiator = 1;\n\n\t\t/* since we connected 
we should add a context */\n\t\tif ((ctx = (tpp_context_t *) malloc(sizeof(tpp_context_t))) == NULL) {\n\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating tpp context\");\n\t\t\treturn -1;\n\t\t}\n\t\tctx->ptr = r;\n\t\tctx->type = TPP_ROUTER_NODE;\n\n\t\ttpp_log(LOG_INFO, NULL, \"Connecting to pbs_comm %s\", tpp_conf->routers[j]);\n\n\t\tif (tpp_transport_connect(tpp_conf->routers[j], 0, ctx, &r->conn_fd) == -1) {\n\t\t\ttpp_unlock_rwlock(&router_lock);\n\t\t\treturn -1;\n\t\t}\n\n\t\tj++;\n\t}\n\ttpp_unlock_rwlock(&router_lock);\n\n\tsleep(1);\n\treturn 0;\n}\n\n/**\n * @brief\n *\tShuts down the tpp library gracefully\n *\n * @par Functionality\n *\tshuts down the IO threads\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nvoid\ntpp_router_shutdown()\n{\n\ttpp_going_down = 1;\n\n\tTPP_DBPRT(\"from pid = %d\", getpid());\n\ttpp_transport_shutdown();\n\n\tfree_tpp_config(tpp_conf);\n}\n\n/**\n * @brief\n *\tTerminates (un-gracefully) the tpp library\n *\n * @par Functionality\n *\tTypically to be called after a fork. Just a placeholder\n *\tfunction for now. Does not do anything.\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nvoid\ntpp_router_terminate()\n{\n\treturn;\n}\n"
  },
  {
    "path": "src/lib/Libtpp/tpp_transport.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\ttpp_transport.c\n *\n * @brief\tThe IO layer of the tpp library (drives the IO thread)\n *\n * @par\t\tFunctionality:\n *\n *\t\tTPP = TCP based Packet Protocol. 
This layer uses TCP in a multi-\n *\t\thop router based network topology to deliver packets to desired\n *\t\tdestinations. LEAF (end) nodes are connected to ROUTERS via\n *\t\tpersistent TCP connections. The ROUTER has intelligence to route\n *\t\tpackets to appropriate destination leaves or other routers.\n *\n *\t\tThis is the IO side in the tpp library.\n *\t\tThis IO layer is part of all the tpp participants,\n *\t\tboth leaves (end-points) and routers.\n *\n */\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <unistd.h>\n#include <sys/types.h>\n#include <sys/socket.h>\n#include <netinet/in.h>\n#include <arpa/inet.h>\n#include <pthread.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <netdb.h>\n#include <sys/time.h>\n#include <signal.h>\n#include \"pbs_idx.h\"\n#include \"tpp_internal.h\"\n#include \"auth.h\"\n\n#define TPP_CONN_DISCONNECTED 1 /* Channel is disconnected */\n#define TPP_CONN_INITIATING 2\t/* Channel is initiating */\n#define TPP_CONN_CONNECTING 3\t/* Channel is connecting */\n#define TPP_CONN_CONNECTED 4\t/* Channel is connected */\n\nint tpp_going_down = 0;\n\n/*\n * Delayed connection queue, to retry connection after\n * specific periods of time\n */\n#define TPP_CONN_CONNECT_DELAY 1\ntypedef struct {\n\tint tfd;\t  /* on which physical connection */\n\tchar cmdval;\t  /* cmd type */\n\ttime_t conn_time; /* time at which to connect */\n} conn_event_t;\n\n/*\n * The per thread data structure. This library creates a thread-pool of\n * a configuration supplied number of threads. 
Each thread maintains some\n * information about itself in this structure.\n *\n * The command pipe is a pipe which the thread monitors for incoming events, or\n * commands, so other threads can pass information to it via this pipe.\n *\n */\ntypedef struct {\n\tint thrd_index;\t\t  /* thread index for debugging */\n\tpthread_t worker_thrd_id; /* Thread id of this thread */\n\tint listen_fd;\t\t  /* If this is the thread that is also doing\n\t\t\t\t * the listening, then the listening socket\n\t\t\t\t * descriptor\n\t\t\t\t */\n#ifdef NAS\t\t\t  /* localmod 149 */\n\tint nas_tpp_log_enabled;  /* controls the printing of statistics\n\t\t\t\t\t * to the log\n\t\t\t\t\t */\n\tint NAS_TPP_LOG_PERIOD_A; /* this should be the shortest of the\n\t\t\t\t\t * logging periods, as it is also the\n\t\t\t\t\t * frequency with which we check if\n\t\t\t\t\t * statistics should be printed\n\t\t\t\t\t */\n\tint NAS_TPP_LOG_PERIOD_B;\n\tint NAS_TPP_LOG_PERIOD_C;\n\ttime_t nas_last_time_A;\n\tdouble nas_kb_sent_A;\n\tint nas_num_lrg_sends_A;\n\tint nas_num_qual_lrg_sends_A;\n\tint nas_max_bytes_lrg_send_A;\n\tint nas_min_bytes_lrg_send_A;\n\tdouble nas_lrg_send_sum_kb_A;\n\ttime_t nas_last_time_B;\n\tdouble nas_kb_sent_B;\n\tint nas_num_lrg_sends_B;\n\tint nas_num_qual_lrg_sends_B;\n\tint nas_max_bytes_lrg_send_B;\n\tint nas_min_bytes_lrg_send_B;\n\tdouble nas_lrg_send_sum_kb_B;\n\ttime_t nas_last_time_C;\n\tdouble nas_kb_sent_C;\n\tint nas_num_lrg_sends_C;\n\tint nas_num_qual_lrg_sends_C;\n\tint nas_max_bytes_lrg_send_C;\n\tint nas_min_bytes_lrg_send_C;\n\tdouble nas_lrg_send_sum_kb_C;\n#endif\t\t\t       /* localmod 149 */\n\tvoid *em_context;      /* the em context */\n\ttpp_que_t def_act_que; /* The deferred action queue on this thread */\n\ttpp_mbox_t mbox;       /* message box for this thread */\n\ttpp_tls_t *tpp_tls;    /* tls data related to tpp work */\n} thrd_data_t;\n\n#ifdef NAS /* localmod 149 */\nstatic char tpp_instr_flag_file[_POSIX_PATH_MAX] = 
\"/PBS/flags/tpp_instrumentation\";\n#endif /* localmod 149 */\n\nstatic thrd_data_t **thrd_pool; /* array of threads - holds the thread pool */\nstatic int num_threads;\t\t/* number of threads in the thread pool */\nstatic int max_con = MAX_CON;\t/* nfiles */\n\nstatic struct tpp_config *tpp_conf; /* store a pointer to the tpp_config supplied */\n\n/*\n * Save the connection related parameters here, so we don't have to parse\n * each time.\n */\ntypedef struct {\n\tchar *hostname;\t   /* the host name to connect to */\n\tint port;\t   /* the port to connect to */\n\tint need_resvport; /* bind to resv port? */\n} conn_param_t;\n\n/*\n * Structure that holds information about each TCP connection between leaves and\n * router or between routers and routers. A single IO thread can handle multiple\n * such physical connections. We refer to the indexes to the physical\n * connections as \"transport fd\" or tfd.\n */\ntypedef struct {\n\tint sock_fd;\t /* socket fd (TCP) for this physical connection*/\n\tint lasterr;\t /* last error that was captured on this socket */\n\tshort net_state; /* network status of this connection, up, down etc */\n\tint ev_mask;\t /* event mask in effect so far */\n\n\tconn_param_t *conn_params; /* the connection params */\n\n\ttpp_mbox_t send_mbox;\t     /* mbox of pkts to send */\n\ttpp_chunk_t scratch;\t     /* scratch to work on incoming data */\n\ttpp_packet_t *curr_send_pkt; /* current packet dequed from send_mbox to be sent out  */\n\tthrd_data_t *td;\t     /* connections controller thread */\n\n\ttpp_context_t *ctx; /* upper layers context information */\n\n\tvoid *extra; /* extra data structure */\n} phy_conn_t;\n\n/* structure for holding an array of physical connection structures */\ntypedef struct {\n\tint slot_state;\t  /* slot is busy or free */\n\tphy_conn_t *conn; /* the physical connection using this slot */\n} conns_array_type_t;\n\nconns_array_type_t *conns_array = NULL; /* array of physical connections */\nint 
conns_array_size = 0;\t\t/* the size of physical connection array */\npthread_rwlock_t cons_array_lock;\t/* rwlock used to synchronize array ops */\npthread_mutex_t thrd_array_lock;\t/* mutex used to synchronize thrd assignment */\n\n/* function forward declarations */\nstatic void *work(void *v);\nstatic int assign_to_worker(int tfd, int delay, thrd_data_t *td);\nstatic int handle_disconnect(phy_conn_t *conn);\nstatic void handle_incoming_data(phy_conn_t *conn);\nstatic void send_data(phy_conn_t *conn);\nstatic void free_phy_conn(phy_conn_t *conn);\nstatic void handle_cmd(thrd_data_t *td, int tfd, int cmd, void *data);\nstatic short add_pkt(phy_conn_t *conn);\nstatic phy_conn_t *get_transport_atomic(int tfd, int *slot_state);\n\n/**\n * @brief\n *\tEnqueue a deferred action\n *\n * @par Functionality\n *\tUsed for initiating a connection after a delay, or deferred close / reads\n *\n * @param[in] td    - The thread data for the controlling thread\n * @param[in] tfd   - The descriptor of the physical connection\n * @param[in] cmd   - The command to trigger when the delay expires\n * @param[in] delay - The amount of time after which the event has to be triggered\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic void\nenque_deferred_event(thrd_data_t *td, int tfd, int cmd, int delay)\n{\n\tconn_event_t *conn_ev;\n\ttpp_que_elem_t *n;\n\tvoid *ret;\n\n\tconn_ev = malloc(sizeof(conn_event_t));\n\tif (!conn_ev) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory queueing a lazy connect\");\n\t\treturn;\n\t}\n\tconn_ev->tfd = tfd;\n\tconn_ev->cmdval = cmd;\n\tconn_ev->conn_time = time(0) + delay;\n\n\tn = NULL;\n\twhile ((n = TPP_QUE_NEXT(&td->def_act_que, n))) {\n\t\tconn_event_t *p;\n\n\t\tp = TPP_QUE_DATA(n);\n\n\t\t/* sorted list, insert before node which has higher time */\n\t\tif (p && (p->conn_time >= conn_ev->conn_time))\n\t\t\tbreak;\n\t}\n\t/* insert before this position */\n\tif (n)\n\t\tret = tpp_que_ins_elem(&td->def_act_que, n, conn_ev, 1);\n\telse\n\t\tret = tpp_enque(&td->def_act_que, 
conn_ev);\n\n\tif (ret == NULL) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory queueing a lazy connect\");\n\t\tfree(conn_ev);\n\t}\n}\n\n/**\n * @brief\n *\tTrigger deferred actions whose trigger time has been reached\n *\n * @param[in] td   - The thread data for the controlling thread\n * @param[in] now  - Current time to check events with\n *\n * @return Wait time for the next deferred event (-1 if none pending)\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic int\ntrigger_deferred_events(thrd_data_t *td, time_t now)\n{\n\tconn_event_t *q;\n\ttpp_que_elem_t *n = NULL;\n\tint slot_state;\n\ttime_t wait_time = -1;\n\n\twhile ((n = TPP_QUE_NEXT(&td->def_act_que, n))) {\n\t\tq = TPP_QUE_DATA(n);\n\t\tif (q == NULL)\n\t\t\tcontinue;\n\t\tif (now >= q->conn_time) {\n\t\t\t(void) get_transport_atomic(q->tfd, &slot_state);\n\t\t\tif (slot_state == TPP_SLOT_BUSY)\n\t\t\t\thandle_cmd(td, q->tfd, q->cmdval, NULL);\n\n\t\t\tn = tpp_que_del_elem(&td->def_act_que, n);\n\t\t\tfree(q);\n\t\t} else {\n\t\t\twait_time = q->conn_time - now;\n\t\t\tbreak;\n\t\t\t/*\n\t\t\t * events are sorted on time,\n\t\t\t * so if the first has not expired,\n\t\t\t * the later ones have not either\n\t\t\t */\n\t\t}\n\t}\n\treturn wait_time;\n}\n\n/**\n * @brief\n * Function called by upper layers to get the \"thrd\" that\n * is associated with the connection\n *\n * @param[in] tfd - Descriptor to the physical connection\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nvoid *\ntpp_transport_get_thrd_context(int tfd)\n{\n\tthrd_data_t *td = NULL;\n\n\tif (tpp_read_lock(&cons_array_lock))\n\t\treturn NULL;\n\n\tif (tfd >= 0 && tfd < conns_array_size) {\n\t\tif (conns_array[tfd].conn && conns_array[tfd].slot_state == TPP_SLOT_BUSY)\n\t\t\ttd = conns_array[tfd].conn->td;\n\t}\n\ttpp_unlock_rwlock(&cons_array_lock);\n\n\treturn td;\n}\n\n/**\n * @brief\n *\tFunction called by upper layers to get the \"user data/context\" that\n *\tis associated with the connection (this was set earlier)\n *\n * 
@param[in] tfd - Descriptor to the physical connection\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nvoid *\ntpp_transport_get_conn_ctx(int tfd)\n{\n\tint slot_state;\n\tphy_conn_t *conn;\n\tconn = get_transport_atomic(tfd, &slot_state);\n\tif (conn)\n\t\treturn conn->ctx;\n\n\treturn NULL;\n}\n\n/**\n * @brief\n *\tFunction called by upper layers to associate a context (user data) to\n *\tthe physical connection\n *\n * @param[in] tfd - Descriptor to the physical connection\n * @param[in] ctx - Pointer to the user context\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nvoid\ntpp_transport_set_conn_ctx(int tfd, void *ctx)\n{\n\tint slot_state;\n\tphy_conn_t *conn;\n\tconn = get_transport_atomic(tfd, &slot_state);\n\tif (conn)\n\t\tconn->ctx = ctx;\n}\n\n/**\n * @brief\n *\tCreates a listening socket using platform specific calls\n *\n * @param[in] port - port to bind socket to\n *\n * @return - socket descriptor of server socket\n * @retval   -1 - Failure\n * @retval !=-1 - Socket descriptor of newly created server socket\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_cr_server_socket(int port)\n{\n\tstruct sockaddr_in serveraddr;\n\tint sd;\n\tint yes = 1;\n\n\tserveraddr.sin_family = AF_INET;\n\tserveraddr.sin_addr.s_addr = INADDR_ANY;\n\tserveraddr.sin_port = htons(port);\n\tmemset(&(serveraddr.sin_zero), '\\0', sizeof(serveraddr.sin_zero));\n\n\tif ((sd = tpp_sock_socket(AF_INET, SOCK_STREAM, 0)) == -1) {\n\t\ttpp_log(LOG_CRIT, __func__, \"tpp_sock_socket() error, errno=%d\", errno);\n\t\treturn -1;\n\t}\n\tif (tpp_sock_setsockopt(sd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(int)) == -1) {\n\t\ttpp_log(LOG_CRIT, __func__, \"tpp_sock_setsockopt() error, errno=%d\", errno);\n\t\treturn -1;\n\t}\n\tif (tpp_sock_bind(sd, (struct sockaddr *) &serveraddr, sizeof(serveraddr)) == -1) {\n\t\tchar *msgbuf;\n#ifdef HAVE_STRERROR_R\n\t\tchar buf[TPP_GEN_BUF_SZ];\n\n\t\tif (strerror_r(errno, 
buf, sizeof(buf)) == 0)\n\t\t\tpbs_asprintf(&msgbuf, \"%s while binding to port %d\", buf, port);\n\t\telse\n#endif\n\t\t\tpbs_asprintf(&msgbuf, \"Error %d while binding to port %d\", errno, port);\n\t\ttpp_log(LOG_CRIT, NULL, msgbuf);\n\t\tfree(msgbuf);\n\t\ttpp_sock_close(sd);\n\t\treturn -1;\n\t}\n\tif (tpp_sock_listen(sd, 1000) == -1) {\n\t\ttpp_log(LOG_CRIT, __func__, \"tpp_sock_listen() error, errno=%d\", errno);\n\t\ttpp_sock_close(sd);\n\t\treturn -1;\n\t}\n\treturn sd;\n}\n\n/**\n * @brief\n *\tInitialize the transport layer.\n *\n * @par Functionality\n *\tDoes the following:\n *\t1. Creates the TLS used for log buffer etc\n *\t2. Creates the thread pool of threads, and initializes the threads\n *\t3. If the caller is a router node, then assigns one thread to also be\n *\t   the listening thread. (binds and listens to a port)\n *\t4. Creates the command pipe of each thread.\n *\n * @param[in] conf - Ptr to the tpp_config passed from upper layers\n *\n * @return Error code\n * @retval -1 - Error\n * @retval  0 - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nint\ntpp_transport_init(struct tpp_config *conf)\n{\n\tint i;\n\tchar mbox_name[TPP_MBOX_NAME_SZ];\n\n\tif (conf->node_type == TPP_LEAF_NODE || conf->node_type == TPP_LEAF_NODE_LISTEN) {\n\t\tif (conf->numthreads != 1) {\n\t\t\ttpp_log(LOG_CRIT, NULL, \"Leaves should start exactly one thread\");\n\t\t\treturn -1;\n\t\t}\n\t} else {\n\t\tif (conf->numthreads < 2) {\n\t\t\ttpp_log(LOG_CRIT, NULL, \"pbs_comms should have at least 2 threads\");\n\t\t\treturn -1;\n\t\t}\n\t\tif (conf->numthreads > 100) {\n\t\t\ttpp_log(LOG_CRIT, NULL, \"pbs_comms should have <= 100 threads\");\n\t\t\treturn -1;\n\t\t}\n\t}\n\n\ttpp_log(LOG_INFO, NULL, \"Initializing TPP transport Layer\");\n\tif (tpp_init_lock(&thrd_array_lock))\n\t\treturn -1;\n\n\tif (tpp_init_rwlock(&cons_array_lock))\n\t\treturn -1;\n\n#ifndef WIN32\n\t/* for unix, set a pthread_atfork handler */\n\tif (pthread_atfork(tpp_nslookup_atfork_prepare, 
tpp_nslookup_atfork_parent, tpp_nslookup_atfork_child) != 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"tpp nslookup mutex atfork handler failed\");\n\t\treturn -1;\n\t}\n#endif\n\n\ttpp_sock_layer_init();\n\n\tmax_con = tpp_get_nfiles();\n\tif (max_con < TPP_MAXOPENFD) {\n\t\ttpp_log(LOG_WARNING, NULL, \"Max files too low - you may want to increase it.\");\n\t\tif (max_con < 100) {\n\t\t\ttpp_log(LOG_CRIT, NULL, \"Max files < 100, cannot continue\");\n\t\t\treturn -1;\n\t\t}\n\t}\n\t/* reduce max_con by 1, else on Solaris devpoll could fail.\n\t * See https://community.oracle.com/thread/1915433?start=0&tstart=0.\n\t * Snippet from that link (lest that link goes away):\n\t * \"We can't monitor our /dev/poll file descriptor using /dev/poll,\n\t * so the actual maximum number of file descriptors you can monitor\n\t * is OPEN_MAX - 1. Solaris enforces that limit. This breaks other code\n\t * out there too, e.g. the libevent library.\n\t * Annoying even though it's arguably technically correct\".\n\t */\n\tmax_con--;\n\n\tif (set_pipe_disposition() != 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Could not query SIGPIPE's disposition\");\n\t\treturn -1;\n\t}\n\n\t/* create num_threads worker threads */\n\tif ((thrd_pool = calloc(conf->numthreads, sizeof(thrd_data_t *))) == NULL) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating threads\");\n\t\treturn -1;\n\t}\n\n\tfor (i = 0; i < conf->numthreads; i++) {\n\t\tthrd_pool[i] = calloc(1, sizeof(thrd_data_t));\n\t\tif (thrd_pool[i] == NULL) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory creating threadpool\");\n\t\t\treturn -1;\n\t\t}\n\t\ttpp_invalidate_thrd_handle(&(thrd_pool[i]->worker_thrd_id));\n#ifdef NAS /* localmod 149 */\n\t\tthrd_pool[i]->nas_tpp_log_enabled = 0;\n\t\tthrd_pool[i]->NAS_TPP_LOG_PERIOD_A = 60;\n\t\tthrd_pool[i]->NAS_TPP_LOG_PERIOD_B = 300;\n\t\tthrd_pool[i]->NAS_TPP_LOG_PERIOD_C = 600;\n\t\tthrd_pool[i]->nas_last_time_A = time(0);\n\t\tthrd_pool[i]->nas_kb_sent_A = 
0.0;\n\t\tthrd_pool[i]->nas_num_lrg_sends_A = 0;\n\t\tthrd_pool[i]->nas_num_qual_lrg_sends_A = 0;\n\t\tthrd_pool[i]->nas_max_bytes_lrg_send_A = 0;\n\t\tthrd_pool[i]->nas_min_bytes_lrg_send_A = INT_MAX - 1;\n\t\tthrd_pool[i]->nas_lrg_send_sum_kb_A = 0.0;\n\t\tthrd_pool[i]->nas_last_time_B = time(0);\n\t\tthrd_pool[i]->nas_kb_sent_B = 0.0;\n\t\tthrd_pool[i]->nas_num_lrg_sends_B = 0;\n\t\tthrd_pool[i]->nas_num_qual_lrg_sends_B = 0;\n\t\tthrd_pool[i]->nas_max_bytes_lrg_send_B = 0;\n\t\tthrd_pool[i]->nas_min_bytes_lrg_send_B = INT_MAX - 1;\n\t\tthrd_pool[i]->nas_lrg_send_sum_kb_B = 0.0;\n\t\tthrd_pool[i]->nas_last_time_C = time(0);\n\t\tthrd_pool[i]->nas_kb_sent_C = 0.0;\n\t\tthrd_pool[i]->nas_num_lrg_sends_C = 0;\n\t\tthrd_pool[i]->nas_num_qual_lrg_sends_C = 0;\n\t\tthrd_pool[i]->nas_max_bytes_lrg_send_C = 0;\n\t\tthrd_pool[i]->nas_min_bytes_lrg_send_C = INT_MAX - 1;\n\t\tthrd_pool[i]->nas_lrg_send_sum_kb_C = 0.0;\n#endif /* localmod 149 */\n\n\t\tthrd_pool[i]->listen_fd = -1;\n\t\tTPP_QUE_CLEAR(&thrd_pool[i]->def_act_que);\n\n\t\tif ((thrd_pool[i]->em_context = tpp_em_init(max_con)) == NULL) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"em_init() error, errno=%d\", errno);\n\t\t\treturn -1;\n\t\t}\n\n\t\tsnprintf(mbox_name, sizeof(mbox_name), \"Th_%d\", i);\n\t\tif (tpp_mbox_init(&thrd_pool[i]->mbox, mbox_name, -1) != 0) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"tpp_mbox_init() error, errno=%d\", errno);\n\t\t\treturn -1;\n\t\t}\n\n\t\tif (tpp_mbox_monitor(thrd_pool[i]->em_context, &thrd_pool[i]->mbox) != 0) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"tpp_mbox_monitor() error, errno=%d\", errno);\n\t\t\treturn -1;\n\t\t}\n\n\t\tthrd_pool[i]->thrd_index = i;\n\t}\n\n\tif (conf->node_type == TPP_ROUTER_NODE) {\n\t\tchar *host;\n\t\tint port;\n\n\t\tif ((host = tpp_parse_hostname(conf->node_name, &port)) == NULL) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory parsing pbs_comm name\");\n\t\t\treturn -1;\n\t\t}\n\t\tfree(host);\n\n\t\tif ((thrd_pool[0]->listen_fd = 
tpp_cr_server_socket(port)) == -1) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"pbs_comm socket creation failed\");\n\t\t\treturn -1;\n\t\t}\n\n\t\tif (tpp_em_add_fd(thrd_pool[0]->em_context, thrd_pool[0]->listen_fd, EM_IN) == -1) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Multiplexing failed\");\n\t\t\treturn -1;\n\t\t}\n\t}\n\n\ttpp_conf = conf;\n\tnum_threads = conf->numthreads;\n\n\tfor (i = 0; i < conf->numthreads; i++) {\n\t\t/* leave the write side of the command pipe to block */\n\t\tif (tpp_cr_thrd(work, &(thrd_pool[i]->worker_thrd_id), thrd_pool[i]) != 0) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to create thread\");\n\t\t\treturn -1;\n\t\t}\n\t}\n\ttpp_log(LOG_INFO, NULL, \"TPP initialization done\");\n\n\treturn 0;\n}\n\n/* the function pointer to the upper layer received packet handler */\nint (*the_pkt_handler)(int tfd, void *data, int len, void *ctx, void *extra) = NULL;\n\n/* the function pointer to the upper layer connection close handler */\nint (*the_close_handler)(int tfd, int error, void *ctx, void *extra) = NULL;\n\n/* the function pointer to the upper layer connection restore handler */\nint (*the_post_connect_handler)(int tfd, void *data, void *ctx, void *extra) = NULL;\n\n/* the function pointer to the upper layer pre packet send handler */\nint (*the_pkt_presend_handler)(int tfd, tpp_packet_t *pkt, void *ctx, void *extra) = NULL;\n\n/* upper layer timer handler */\nint (*the_timer_handler)(time_t now) = NULL;\n\n/**\n * @brief\n *\tFunction to register the upper layer handler functions\n *\n * @param[in] pkt_presend_handler  - function ptr to presend handler\n * @param[in] pkt_handler          - function ptr to pkt recvd handler\n * @param[in] close_handler        - function ptr to net close handler\n * @param[in] post_connect_handler - function ptr to post_connect_handler\n * @param[in] timer_handler        - function ptr called periodically\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n 
*/\nvoid\ntpp_transport_set_handlers(\n\tint (*pkt_presend_handler)(int tfd, tpp_packet_t *pkt, void *ctx, void *extra),\n\tint (*pkt_handler)(int tfd, void *data, int len, void *ctx, void *extra),\n\tint (*close_handler)(int tfd, int error, void *ctx, void *extra),\n\tint (*post_connect_handler)(int tfd, void *data, void *ctx, void *extra),\n\tint (*timer_handler)(time_t now))\n{\n\tthe_pkt_handler = pkt_handler;\n\tthe_close_handler = close_handler;\n\tthe_post_connect_handler = post_connect_handler;\n\tthe_pkt_presend_handler = pkt_presend_handler;\n\tthe_timer_handler = timer_handler;\n}\n\n/**\n * @brief\n *\tAllocate a physical connection structure and initialize it\n *\n * @par Functionality\n *\tResize the physical connection array. Uses the mutex cons_array_lock\n *\tbefore it manipulates the global conns_array.\n *\n * @param[in] tfd - The file descriptor to be assigned to the new connection\n *\n * @return  Pointer to the newly allocated physical connection structure\n * @retval NULL - Failure\n * @retval !NULL - Ptr to physical connection\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic phy_conn_t *\nalloc_conn(int tfd)\n{\n\tphy_conn_t *conn;\n\tchar mbox_name[TPP_MBOX_NAME_SZ];\n\n\tconn = calloc(1, sizeof(phy_conn_t));\n\tif (!conn) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating physical connection\");\n\t\treturn NULL;\n\t}\n\tconn->sock_fd = tfd;\n\tconn->extra = NULL;\n\n\tsnprintf(mbox_name, sizeof(mbox_name), \"Conn_%d\", conn->sock_fd);\n\tif (tpp_mbox_init(&conn->send_mbox, mbox_name, TPP_MAX_MBOX_SIZE) != 0) {\n\t\tfree(conn);\n\t\ttpp_log(LOG_CRIT, __func__, \"tpp_mbox_init() error, errno=%d\", errno);\n\t\treturn NULL;\n\t}\n\t/* initialize the send queue to empty */\n\n\t/* set to stream array */\n\tif (tpp_write_lock(&cons_array_lock)) {\n\t\tfree(conn);\n\t\treturn NULL;\n\t}\n\tif (tfd >= conns_array_size - 1) {\n\t\tint newsize;\n\t\tvoid *p;\n\n\t\t/* resize conns */\n\t\tnewsize = tfd + 
100;\n\t\tp = realloc(conns_array, sizeof(conns_array_type_t) * newsize);\n\t\tif (!p) {\n\t\t\tfree(conn);\n\t\t\ttpp_unlock_rwlock(&cons_array_lock);\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory expanding connection array\");\n\t\t\treturn NULL;\n\t\t}\n\t\tconns_array = (conns_array_type_t *) p;\n\n\t\t/* TPP_SLOT_FREE must remain defined as 0, for this memset to work\n\t\t * and automatically set all new slots to FREE. We do not want to\n\t\t * loop over 100 new slots just to set them free!\n\t\t */\n\t\tmemset(&conns_array[conns_array_size], 0, (newsize - conns_array_size) * sizeof(conns_array_type_t));\n\t\tconns_array_size = newsize;\n\t}\n\tif (conns_array[tfd].slot_state != TPP_SLOT_FREE) {\n\t\ttpp_log(LOG_ERR, __func__, \"Internal error - slot not free\");\n\t\tfree(conn);\n\t\ttpp_unlock_rwlock(&cons_array_lock);\n\t\treturn NULL;\n\t}\n\n\ttpp_set_non_blocking(conn->sock_fd);\n\ttpp_set_close_on_exec(conn->sock_fd);\n\n\tif (tpp_set_keep_alive(conn->sock_fd, tpp_conf) == -1) {\n\t\tfree(conn);\n\t\ttpp_unlock_rwlock(&cons_array_lock);\n\t\treturn NULL;\n\t}\n\n\tconns_array[tfd].slot_state = TPP_SLOT_BUSY;\n\tconns_array[tfd].conn = conn;\n\n\ttpp_unlock_rwlock(&cons_array_lock);\n\n\treturn conn;\n}\n\n/**\n * @brief\n *\tCreates a new physical connection between two routers or a router and\n *\ta leaf.\n *\n * @param[in] hostname - hostname to connect to\n * @param[in] delay    - Connect after delay of this much seconds\n * @param[in] ctx      - Associate the passed ctx with the connection fd\n * @param[in] tctx     - Transport thrd context of the caller\n * @param[out] ret_tfd - The fd of the connection returned\n *\n * @return  Error code\n * @retval  -1   - Failure\n * @retval   0   - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nint\ntpp_transport_connect_spl(char *hostname, int delay, void *ctx, int *ret_tfd, void *tctx)\n{\n\tphy_conn_t *conn;\n\tint fd;\n\tchar *host;\n\tint port;\n\n\tif ((host = 
tpp_parse_hostname(hostname, &port)) == NULL) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory while parsing hostname\");\n\t\tfree(host);\n\t\treturn -1;\n\t}\n\n\tfd = tpp_sock_socket(AF_INET, SOCK_STREAM, 0);\n\tif (fd < 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"socket() error, errno=%d\", errno);\n\t\tfree(host);\n\t\treturn -1;\n\t}\n\n\tif (tpp_set_keep_alive(fd, tpp_conf) == -1) {\n\t\ttpp_sock_close(fd);\n\t\tfree(host);\n\t\treturn -1;\n\t}\n\n\t*ret_tfd = fd;\n\n\tconn = alloc_conn(fd);\n\tif (!conn) {\n\t\ttpp_sock_close(fd);\n\t\tfree(host);\n\t\treturn -1;\n\t}\n\n\tconn->conn_params = calloc(1, sizeof(conn_param_t));\n\tif (!conn->conn_params) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating connection params\");\n\t\tif (tpp_write_lock(&cons_array_lock))\n\t\t\treturn -1;\n\t\tconns_array[fd].slot_state = TPP_SLOT_FREE;\n\t\tconns_array[fd].conn = NULL;\n\t\ttpp_unlock_rwlock(&cons_array_lock);\n\t\tfree_phy_conn(conn);\n\t\ttpp_sock_close(fd);\n\t\tfree(host);\n\t\treturn -1;\n\t}\n\tconn->conn_params->need_resvport = strcmp(tpp_conf->auth_config->auth_method, AUTH_RESVPORT_NAME) == 0;\n\tconn->conn_params->hostname = host;\n\tconn->conn_params->port = port;\n\n\tconn->sock_fd = fd;\n\tconn->net_state = TPP_CONN_INITIATING;\n\n\ttpp_transport_set_conn_ctx(fd, ctx);\n\n\tassign_to_worker(fd, delay, tctx);\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tWrapper to the call to tpp_transport_connect_spl, It calls\n *\ttpp_transport_connect_spl with the tctx parameter as NULL.\n *\n * @param[in] hostname - hostname to connect to\n * @param[in] delay    - Connect after delay of this much seconds\n * @param[in] ctx     - Associate the passed ctx with the connection fd\n * @param[out] ret_tfd - The fd of the connection returned\n *\n * @return  Error code\n * @retval  -1   - Failure\n * @retval   0   - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nint\ntpp_transport_connect(char *hostname, int delay, void *ctx, int 
*ret_tfd)\n{\n\treturn tpp_transport_connect_spl(hostname, delay, ctx, ret_tfd, NULL);\n}\n\n/**\n * @brief\n *\tHelper function to get a transport channel pointer and\n *\tslot state in an atomic fashion\n *\n * @par Functionality:\n *\tAcquire a read lock on cons_array_lock and return the conn pointer and\n *\tthe slot state\n *\n * @param[in] tfd - The transport descriptor\n * @param[out] slot_state - The state of the slot occupied by this stream\n *\n * @return - Transport channel pointer\n * @retval NULL - Bad stream index/descriptor\n * @retval !NULL - Associated stream pointer\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic phy_conn_t *\nget_transport_atomic(int tfd, int *slot_state)\n{\n\tphy_conn_t *conn = NULL;\n\t*slot_state = TPP_SLOT_FREE;\n\n\tif (tpp_read_lock(&cons_array_lock))\n\t\treturn NULL;\n\n\tif (tfd >= 0 && tfd < conns_array_size) {\n\t\tconn = conns_array[tfd].conn;\n\t\t*slot_state = conns_array[tfd].slot_state;\n\t}\n\ttpp_unlock_rwlock(&cons_array_lock);\n\n\treturn conn;\n}\n\n/**\n * @brief\n *\tLock the conns array lock and post data to the\n *\tthread's mbox. 
The check for the tfd being up,\n *\tand the posting of data into the manager thread's mbox\n *\tare done as an atomic operation, i.e., under the cons_array_lock.\n *\n * @param[in] tfd - The file descriptor of the connection\n * @param[in] cmd - The cmd to post if conn is up\n * @param[in] pkt - Data associated with the command\n *\n * @return  Error code\n * @retval  -1 - Failure (slot free, or bad tfd)\n * @retval   0 - Success\n *\n * @par Side Effects:\n *\terrno is set\n *\n * @par MT-safe: No\n *\n */\nstatic int\ntpp_post_cmd(int tfd, char cmd, tpp_packet_t *pkt)\n{\n\tint rc;\n\tint slot_state;\n\tphy_conn_t *conn = NULL;\n\tthrd_data_t *td = NULL;\n\n\terrno = 0;\n\n\tconn = get_transport_atomic(tfd, &slot_state);\n\tif (conn)\n\t\ttd = conn->td;\n\n\tif (!conn || slot_state != TPP_SLOT_BUSY || !td) {\n\t\terrno = EBADF;\n\t\treturn -1;\n\t}\n\n\tif (cmd == TPP_CMD_SEND) {\n\t\t/* data associated that needs to be sent out, put directly into target mbox */\n\t\t/* write to worker threads send pipe */\n\t\trc = tpp_mbox_post(&conn->send_mbox, tfd, cmd, (void *) pkt, pkt->totlen);\n\t\tif (rc != 0)\n\t\t\treturn rc;\n\t}\n\n\t/* write to worker threads send pipe, to wakeup thread */\n\trc = tpp_mbox_post(&td->mbox, tfd, cmd, NULL, 0);\n\treturn rc;\n}\n\n/**\n * @brief\n *\tSend a wakeup packet (a packet without any data) to the active\n *\ttransport thread, so that it wakes up and processes any pending\n *\tnotifications.\n *\n * @param[in] tfd   - The file descriptor of the connection\n *\n * @return  Error code\n * @retval  -1 - Failure\n * @retval   0 - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nint\ntpp_transport_wakeup_thrd(int tfd)\n{\n\tif (tfd < 0)\n\t\treturn -1;\n\n\tif (tpp_post_cmd(tfd, TPP_CMD_WAKEUP, NULL) != 0) {\n\t\treturn -1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tQueue data to be sent out by the IO thread. 
This function takes a\n *\tpacket whose chain of data chunks is sent out in order\n *\n * @param[in] tfd   - The file descriptor of the connection\n * @param[in] pkt   - The packet (chain of chunks) to be sent out\n *\n * @return  Error code\n * @retval  -1 - Failure\n * @retval  -2 - transport buffers full\n * @retval   0 - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nint\ntpp_transport_vsend(int tfd, tpp_packet_t *pkt)\n{\n\t/* compute the length in network byte order for the whole packet */\n\ttpp_chunk_t *first_chunk = GET_NEXT(pkt->chunks);\n\tvoid *p_ntotlen = (void *) (first_chunk->data);\n\tint wire_len = htonl(pkt->totlen);\n\tint rc;\n\n\tif (tfd < 0) {\n\t\ttpp_free_pkt(pkt);\n\t\treturn -1;\n\t}\n\n\tTPP_DBPRT(\"sending total length = %d\", pkt->totlen);\n\n\t/* write the total packet length as the first field of the packet header;\n\t * every packet header type has ntotlen as its exact first element.\n\t * The total length of all the chunks of the packet is only known in this\n\t * function, when all chunks are complete, so we compute the total length\n\t * and set it in the ntotlen element of the packet header\n\t */\n\tmemcpy(p_ntotlen, &wire_len, sizeof(int));\n\n\t/* write to worker threads send pipe */\n\trc = tpp_post_cmd(tfd, TPP_CMD_SEND, (void *) pkt);\n\tif (rc != 0) {\n\t\tif (rc == -1)\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Error writing to thread cmd mbox\");\n\t\telse if (rc == -2)\n\t\t\ttpp_log(LOG_CRIT, __func__, \"thread cmd mbox is full\");\n\t\ttpp_free_pkt(pkt);\n\t}\n\treturn rc;\n}\n\n/**\n * @brief\n *\tWhether the underlying connection is from a reserved port or not\n *\n * @param[in] tfd   - The file descriptor of the connection\n *\n * @return  Error code\n * @retval  -1 - Not associated with a 
reserved port\n * @retval   0 - Associated with a reserved port\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nint\ntpp_transport_isresvport(int tfd)\n{\n\tint slot_state;\n\tphy_conn_t *conn;\n\n\tconn = get_transport_atomic(tfd, &slot_state);\n\tif (conn == NULL || slot_state != TPP_SLOT_BUSY)\n\t\treturn -1;\n\n\tif (conn->conn_params->port >= 0 && conn->conn_params->port < IPPORT_RESERVED)\n\t\treturn 0;\n\n\treturn -1;\n}\n\n/**\n * @brief\n *\tAssign a physical connection to a thread. A new connection (to be\n *\tcreated) or a new incoming connection is assigned to one of the\n *\texisting threads using this function. If no thread is specified,\n *\tconnections are distributed over the worker threads based on the\n *\ttfd (skipping thread 0, which doubles as the listening thread,\n *\twhen more than one thread exists)\n *\n * @param[in] tfd   - The file descriptor of the connection\n * @param[in] delay - Connect/accept this new connection only after this delay\n * @param[in] td    - The thread to which to assign the conn (NULL to auto-select)\n *\n * @return\tError code\n * @retval\t1\tFailure\n * @retval\t0\tSuccess\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic int\nassign_to_worker(int tfd, int delay, thrd_data_t *td)\n{\n\tint slot_state;\n\tphy_conn_t *conn;\n\tint thrd_index = 0;\n\n\tconn = get_transport_atomic(tfd, &slot_state);\n\tif (conn == NULL || slot_state != TPP_SLOT_BUSY)\n\t\treturn 1;\n\n\tif (conn->td != NULL)\n\t\ttpp_log(LOG_CRIT, __func__, \"ERROR! tfd=%d conn_td=%p, conn_td_index=%d, thrd_td=%p, thrd_td_index=%d\", tfd, conn->td, conn->td->thrd_index, td, td ? 
td->thrd_index : -1);\n\n\tif (td == NULL) {\n\t\tif (tpp_lock(&thrd_array_lock)) {\n\t\t\treturn 1;\n\t\t}\n\t\t/* find a thread index to assign to, since none provided */\n\t\tif (num_threads > 1)\n\t\t\tthrd_index = 1 + (tfd % (num_threads - 1));\n\t\tconn->td = thrd_pool[thrd_index];\n\t\ttpp_unlock(&thrd_array_lock);\n\t} else\n\t\tconn->td = td;\n\n\tif (tpp_mbox_post(&conn->td->mbox, tfd, TPP_CMD_ASSIGN, (void *) (long) delay, 0) != 0)\n\t\ttpp_log(LOG_CRIT, __func__, \"tfd=%d, Error writing to mbox\", tfd);\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tAdd (a new) or accept (incoming) a transport connection.\n *\n *\tIn case of creating a new connection, it binds to a reserved port if\n *\tthe authentication is set to priv_fd.\n *\n * @param[in] conn - The physical connection structure, to initiate or to\n *                   accept.\n *\n * @return  Error code\n * @retval  -1 - Failure\n * @retval   0 - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic int\nadd_transport_conn(phy_conn_t *conn)\n{\n\tif (conn->net_state == TPP_CONN_INITIATING) {\n\n\t\tint fd = conn->sock_fd;\n\n\t\t/* authentication */\n\t\tif (conn->conn_params->need_resvport) {\n\t\t\tint tryport;\n\t\t\tint start;\n\t\t\tint rc = -1;\n\n\t\t\tsrand(time(NULL));\n\t\t\tstart = (rand() % (IPPORT_RESERVED - 1)) + 1;\n\t\t\ttryport = start;\n\n\t\t\twhile (1) {\n\t\t\t\tstruct sockaddr_in serveraddr;\n\t\t\t\t/* bind this socket to a reserved port */\n\t\t\t\tserveraddr.sin_family = AF_INET;\n\t\t\t\tserveraddr.sin_addr.s_addr = INADDR_ANY;\n\t\t\t\tserveraddr.sin_port = htons(tryport);\n\t\t\t\tmemset(&(serveraddr.sin_zero), '\\0', sizeof(serveraddr.sin_zero));\n\t\t\t\tif ((rc = tpp_sock_bind(fd, (struct sockaddr *) &serveraddr, sizeof(serveraddr))) != -1)\n\t\t\t\t\tbreak;\n\t\t\t\tif ((errno != EADDRINUSE) && (errno != EADDRNOTAVAIL))\n\t\t\t\t\tbreak;\n\n\t\t\t\t--tryport;\n\t\t\t\tif (tryport <= 0)\n\t\t\t\t\ttryport = IPPORT_RESERVED;\n\t\t\t\tif 
(tryport == start)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\tif (rc == -1) {\n\t\t\t\ttpp_log(LOG_WARNING, NULL, \"No reserved ports available\");\n\t\t\t\treturn (-1);\n\t\t\t}\n\t\t}\n\n\t\tconn->net_state = TPP_CONN_CONNECTING;\n\n\t\tconn->ev_mask = EM_OUT | EM_ERR | EM_HUP;\n\t\tTPP_DBPRT(\"New socket, Added EM_OUT to ev_mask, now=%x\", conn->ev_mask);\n\t\tif (tpp_em_add_fd(conn->td->em_context, conn->sock_fd, conn->ev_mask) == -1) {\n\t\t\ttpp_log(LOG_ERR, __func__, \"Multiplexing failed\");\n\t\t\treturn -1;\n\t\t}\n\n\t\tif (tpp_sock_attempt_connection(conn->sock_fd, conn->conn_params->hostname, conn->conn_params->port) == -1) {\n\t\t\tif (errno != EINPROGRESS && errno != EWOULDBLOCK && errno != EAGAIN) {\n\t\t\t\tchar *msgbuf;\n#ifdef HAVE_STRERROR_R\n\t\t\t\tchar buf[TPP_GEN_BUF_SZ];\n\n\t\t\t\tif (strerror_r(errno, buf, sizeof(buf)) == 0)\n\t\t\t\t\tpbs_asprintf(&msgbuf, \"%s while connecting to %s:%d\", buf, conn->conn_params->hostname, conn->conn_params->port);\n\t\t\t\telse\n#endif\n\t\t\t\t\tpbs_asprintf(&msgbuf, \"Error %d while connecting to %s:%d\", errno, conn->conn_params->hostname, conn->conn_params->port);\n\t\t\t\ttpp_log(LOG_ERR, NULL, msgbuf);\n\t\t\t\tfree(msgbuf);\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t} else {\n\t\t\tTPP_DBPRT(\"phy_con %d connected\", fd);\n\t\t\tconn->net_state = TPP_CONN_CONNECTED;\n\n\t\t\t/* since we connected, remove EM_OUT from the list and add EM_IN */\n\t\t\tconn->ev_mask = EM_IN | EM_ERR | EM_HUP;\n\t\t\tTPP_DBPRT(\"Connected, Removed EM_OUT and added EM_IN to ev_mask, now=%x\", conn->ev_mask);\n\t\t\tif (tpp_em_mod_fd(conn->td->em_context, conn->sock_fd, conn->ev_mask) == -1) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Multiplexing failed\");\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\tif (the_post_connect_handler) {\n\t\t\t\tif (the_post_connect_handler(fd, NULL, conn->ctx, conn->extra)) {\n\t\t\t\t\t/* e.g.: the tpp_handle_auth_handshake could have failed */\n\t\t\t\t\ttpp_log(LOG_CRIT, __func__, \"the_post_connect_handler 
failed\");\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t} else if (conn->net_state == TPP_CONN_CONNECTED) { /* accepted socket */\n\t\t/* since we connected, remove EM_OUT from the list and add EM_IN */\n\t\tconn->ev_mask = EM_IN | EM_ERR | EM_HUP;\n\t\tTPP_DBPRT(\"Connected, Removed EM_OUT and added EM_IN to ev_mask, now=%x\", conn->ev_mask);\n\n\t\t/* add it to my own monitored list */\n\t\tif (tpp_em_add_fd(conn->td->em_context, conn->sock_fd, conn->ev_mask) == -1) {\n\t\t\ttpp_log(LOG_ERR, __func__, \"Multiplexing failed\");\n\t\t\treturn -1;\n\t\t}\n\n\t\tTPP_DBPRT(\"Phy Con %d accepted\", conn->sock_fd);\n\t} else {\n\t\ttpp_log(LOG_CRIT, __func__, \"Bad net state - internal error\");\n\t\treturn -1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tHandle a command sent to this thread by its monitored pipe fd\n *\n * @par Functionality\n *\tCommands:\n *\tTPP_CMD_EXIT: The thread is being asked to exit, close all connections\n *\t\t      and then exit this thread\n *\n *\tTPP_CMD_ASSIGN: Assign new connection (incoming) or add a\n *\t\t\tto-be-created connection to this thread\n *\n *\tTPP_CMD_SEND: Accept data from APP thread to be sent by this thread\n *\n * @param[in] td    - The threads data pointer\n * @param[in] tfd   - The tfd associated with this command\n * @param[in] cmd   - The command to execute (listed above)\n * @param[in] data  - The data (in case of send) to be sent\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic void\nhandle_cmd(thrd_data_t *td, int tfd, int cmd, void *data)\n{\n\tint slot_state;\n\tphy_conn_t *conn;\n\tconn_event_t *conn_ev;\n\tint num_cons = 0;\n\n\tconn = get_transport_atomic(tfd, &slot_state);\n\n\tif (conn && (conn->td != td))\n\t\ttpp_log(LOG_CRIT, __func__, \"ERROR! 
tfd=%d conn_td=%p, conn_td_index=%d, thrd_td=%p, thrd_td_index=%d, cmd=%d\", tfd, conn->td, conn->td->thrd_index, td, td->thrd_index, cmd);\n\n\tif (cmd == TPP_CMD_CLOSE) {\n\t\thandle_disconnect(conn);\n\n\t} else if (cmd == TPP_CMD_EXIT) {\n\t\tint i;\n\n\t\tfor (i = 0; i < conns_array_size; i++) {\n\t\t\tconn = get_transport_atomic(i, &slot_state);\n\t\t\tif (slot_state == TPP_SLOT_BUSY && conn->td == td) {\n\t\t\t\t/* stream belongs to this thread */\n\t\t\t\tnum_cons++;\n\t\t\t\thandle_disconnect(conn);\n\t\t\t}\n\t\t}\n\n\t\ttpp_mbox_destroy(&td->mbox);\n\t\tif (td->listen_fd > -1)\n\t\t\ttpp_sock_close(td->listen_fd);\n\n\t\t/* clean up the lazy conn queue */\n\t\twhile ((conn_ev = tpp_deque(&td->def_act_que)))\n\t\t\tfree(conn_ev);\n\n\t\ttpp_log(LOG_INFO, NULL, \"Thrd exiting, had %d connections\", num_cons);\n\n\t\t/* destroy the AVL tls */\n\t\tfree_avl_tls();\n\n\t\tpthread_exit(NULL);\n\t\t/* no execution after this */\n\n\t} else if ((cmd == TPP_CMD_ASSIGN) || (cmd == TPP_CMD_CONNECT)) {\n\t\tint delay = (int) (long) data;\n\n\t\tif (conn == NULL || slot_state != TPP_SLOT_BUSY) {\n\t\t\ttpp_log(LOG_WARNING, __func__, \"Phy Con %d (cmd = %d) already deleted/closing\", tfd, cmd);\n\t\t\treturn;\n\t\t}\n\t\tif ((delay == 0) || (cmd == TPP_CMD_CONNECT)) {\n\t\t\tif (add_transport_conn(conn) != 0) {\n\t\t\t\thandle_disconnect(conn);\n\t\t\t}\n\t\t} else {\n\t\t\tenque_deferred_event(td, tfd, TPP_CMD_CONNECT, delay);\n\t\t}\n\n\t} else if (cmd == TPP_CMD_SEND) {\n\t\ttpp_packet_t *pkt = (tpp_packet_t *) data;\n\n\t\tif (conn == NULL || slot_state != TPP_SLOT_BUSY) {\n\t\t\ttpp_log(LOG_WARNING, __func__, \"Phy Con %d (cmd = %d) already deleted/closing\", tfd, cmd);\n\t\t\ttpp_free_pkt(pkt);\n\t\t\treturn;\n\t\t}\n\n\t\t/* send any queued data out on this connection */\n\t\tsend_data(conn);\n\n\t} else if (cmd == TPP_CMD_READ) {\n\t\tadd_pkt(conn);\n\t}\n}\n\n/**\n * @brief\n *\tReturn the thread's index from the TLS-stored thread data\n *\n * @return - Thread index of the 
calling thread\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nint\ntpp_get_thrd_index()\n{\n\ttpp_tls_t *tls;\n\tthrd_data_t *td;\n\tif ((tls = tpp_get_tls()) == NULL)\n\t\treturn -1;\n\n\ttd = (thrd_data_t *) (tls->td);\n\tif (td == NULL)\n\t\treturn -1;\n\n\treturn td->thrd_index;\n}\n\n/**\n * @brief\n *\tThis is the IO thread's \"thread-function\". It loops waiting on\n *\tmonitored pipes, sockets etc., and drives the various\n *\tfunctionality that the IO thread does.\n *\n * @par Functionality\n *\t- Creates an event monitor context\n *\t- Adds the cmd socket to the event monitor set\n *\t- Adds the listening socket fd (if listening thread for router) to set\n *\t- Checks if any events are outstanding in the event queue for this thread\n *\t  and if so, dispatches them\n *\t- Calls the_event_expiry_handler to find how long until the next event\n *\t  placed at the upper layer expires\n *\t- Waits in em_wait for the duration determined in the previous step\n *\t- When em_wait is woken by an event, handles that event.\n *\t\t- Handle a command sent by another thread, like\n *\t\t\t- Create a new connection\n *\t\t\t- Send data over a connection etc.\n *\t\t\t- Close a thread\n *\t\t- Accept incoming new connections (if a listening thread)\n *\t\t- Accept data from peer and call upper layer handler\n *\t\t- Detect closure of socket event and call upper layer handler\n *\n * @param[in] v - The thrd_data pointer passed by the init function that created\n *\t\t  the threads. 
(Basically a pointer to its own thread_data).\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic void *\nwork(void *v)\n{\n\tthrd_data_t *td = (thrd_data_t *) v;\n\tint newfd;\n\tint i;\n\tint cmd;\n\tvoid *data;\n\tunsigned int tfd;\n\tem_event_t *events;\n\tphy_conn_t *conn;\n\tint slot_state;\n\tstruct sockaddr clientaddr;\n\tint new_connection = 0;\n\tint timeout, timeout2;\n\ttime_t now;\n\ttpp_tls_t *ptr;\n#ifndef WIN32\n\tint rc;\n\tsigset_t blksigs;\n#endif\n\n\t/*\n\t * Get the tls for this thread and store the passed thread data\n\t * as a pointer in our own tls. This is for use in functions\n\t * where we cannot pass the thread data as a parameter\n\t */\n\tptr = tpp_get_tls();\n\tif (!ptr) {\n\t\tfprintf(stderr, \"Out of memory getting thread specific storage\\n\");\n\t\treturn NULL;\n\t}\n\tptr->td = (void *) td;\n\ttd->tpp_tls = ptr; /* store allocated area for tls into td to free at shutdown/terminate */\n\n#ifndef WIN32\n\t/* block a certain set of signals that we do not care about in this IO thread.\n\t * A signal directed to a multi-threaded program can be delivered to any thread\n\t * that has that signal unblocked. This could cause havoc for\n\t * signal handlers which are not supposed to be called from this IO thread\n\t * (for example, the SIGHUP handler in the scheduler). 
Signals like SIGALRM and hardware\n\t * generated signals (like SIGBUS and SIGSEGV) are always delivered to the\n\t * thread that generated it (so they are thread specific anyway).\n\t */\n\tsigemptyset(&blksigs);\n\tsigaddset(&blksigs, SIGHUP);\n\tsigaddset(&blksigs, SIGINT);\n\tsigaddset(&blksigs, SIGTERM);\n\n\tif ((rc = pthread_sigmask(SIG_BLOCK, &blksigs, NULL)) != 0) {\n\t\ttpp_log(LOG_CRIT, NULL, \"Failed in pthread_sigmask, errno=%d\", rc);\n\t\treturn NULL;\n\t}\n#endif\n\ttpp_log(LOG_CRIT, NULL, \"Thread ready\");\n\n\t/* start processing loop */\n\tfor (;;) {\n\t\tint nfds;\n\n\t\twhile (1) {\n\t\t\tnow = time(0);\n\n\t\t\t/* trigger all delayed events, and return the wait time till the next one to trigger */\n\t\t\ttimeout = trigger_deferred_events(td, now);\n\t\t\tif (the_timer_handler) {\n\t\t\t\ttimeout2 = the_timer_handler(now);\n\t\t\t} else {\n\t\t\t\ttimeout2 = -1;\n\t\t\t}\n\n\t\t\tif (timeout2 != -1) {\n\t\t\t\tif (timeout == -1 || timeout2 < timeout)\n\t\t\t\t\ttimeout = timeout2;\n\t\t\t}\n\n\t\t\tif (timeout != -1) {\n\t\t\t\ttimeout = timeout * 1000; /* milliseconds */\n\t\t\t}\n\n\t\t\terrno = 0;\n\t\t\tnfds = tpp_em_wait(td->em_context, &events, timeout);\n\t\t\tif (nfds <= 0) {\n\t\t\t\tif (!(errno == EINTR || errno == EINPROGRESS || errno == EAGAIN || errno == 0)) {\n\t\t\t\t\ttpp_log(LOG_ERR, __func__, \"em_wait() error, errno=%d\", errno);\n\t\t\t\t}\n\t\t\t\tcontinue;\n\t\t\t} else\n\t\t\t\tbreak;\n\t\t} /* loop around em_wait */\n\n\t\tnew_connection = 0;\n\n\t\t/* check once more if cmd_pipe has any more data */\n\t\twhile (tpp_mbox_read(&td->mbox, &tfd, &cmd, &data) == 0)\n\t\t\thandle_cmd(td, tfd, cmd, data);\n\n\t\tfor (i = 0; i < nfds; i++) {\n\n\t\t\tint em_fd;\n\t\t\tint em_ev;\n\n\t\t\tem_fd = EM_GET_FD(events, i);\n\t\t\tem_ev = EM_GET_EVENT(events, i);\n\n\t\t\t/**\n\t\t\t * at each iteration clear the command pipe, to\n\t\t\t * avoid a deadlock between threads\n\t\t\t **/\n\t\t\twhile (tpp_mbox_read(&td->mbox, &tfd, &cmd, 
&data) == 0)\n\t\t\t\thandle_cmd(td, tfd, cmd, data);\n\n\t\t\tif (em_fd == td->listen_fd) {\n\t\t\t\tnew_connection = 1;\n\t\t\t} else {\n\t\t\t\tconn = get_transport_atomic(em_fd, &slot_state);\n\t\t\t\tif (conn == NULL || slot_state != TPP_SLOT_BUSY)\n\t\t\t\t\tcontinue;\n\n\t\t\t\tif ((em_ev & EM_HUP) || (em_ev & EM_ERR)) {\n\t\t\t\t\t/*\n\t\t\t\t\t * platforms differ in terms of when HUP or ERR is set\n\t\t\t\t\t * best is to allow read to determine whether it was\n\t\t\t\t\t * really a end of file\n\t\t\t\t\t */\n\t\t\t\t\thandle_incoming_data(conn);\n\t\t\t\t} else {\n\n\t\t\t\t\tif (em_ev & EM_IN) {\n\t\t\t\t\t\t/* handle existing connections for data or closure */\n\t\t\t\t\t\thandle_incoming_data(conn);\n\t\t\t\t\t}\n\n\t\t\t\t\tif (em_ev & EM_OUT) {\n\n\t\t\t\t\t\tif (conn->net_state == TPP_CONN_CONNECTING) {\n\t\t\t\t\t\t\t/* check to see if the connection really completed */\n\t\t\t\t\t\t\tint result;\n\t\t\t\t\t\t\tpbs_socklen_t result_len = sizeof(result);\n\n\t\t\t\t\t\t\tif (tpp_sock_getsockopt(conn->sock_fd, SOL_SOCKET, SO_ERROR, &result, &result_len) != 0) {\n\t\t\t\t\t\t\t\tTPP_DBPRT(\"phy_con %d getsockopt failed\", conn->sock_fd);\n\t\t\t\t\t\t\t\thandle_disconnect(conn);\n\t\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif (result == EAGAIN || result == EINPROGRESS) {\n\t\t\t\t\t\t\t\t/* not yet connected, ignore the EM_OUT */\n\t\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t\t} else if (result != 0) {\n\t\t\t\t\t\t\t\t/* non-recoverable error occurred, eg, ECONNRESET, so disconnect */\n\t\t\t\t\t\t\t\tTPP_DBPRT(\"phy_con %d disconnected\", conn->sock_fd);\n\t\t\t\t\t\t\t\thandle_disconnect(conn);\n\t\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t/* connected, finally!!! 
*/\n\t\t\t\t\t\t\tconn->net_state = TPP_CONN_CONNECTED;\n\n\t\t\t\t\t\t\tif (the_post_connect_handler) {\n\t\t\t\t\t\t\t\tif (the_post_connect_handler(conn->sock_fd, NULL, conn->ctx, conn->extra)) {\n\t\t\t\t\t\t\t\t\t/* e.g.: the tpp_handle_auth_handshake could have failed */\n\t\t\t\t\t\t\t\t\tTPP_DBPRT(\"phy_con %d disconnected\", conn->sock_fd);\n\t\t\t\t\t\t\t\t\thandle_disconnect(conn);\n\t\t\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tTPP_DBPRT(\"phy_con %d connected\", conn->sock_fd);\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\t/* since we connected, remove EM_OUT from the list and add EM_IN */\n\t\t\t\t\t\tconn->ev_mask = EM_IN | EM_ERR | EM_HUP;\n\t\t\t\t\t\tTPP_DBPRT(\"Connected, Removed EM_OUT and added EM_IN to ev_mask, now=%x\", conn->ev_mask);\n\t\t\t\t\t\tif (tpp_em_mod_fd(conn->td->em_context, conn->sock_fd, conn->ev_mask) == -1) {\n\t\t\t\t\t\t\ttpp_log(LOG_ERR, __func__, \"Multiplexing failed\");\n\t\t\t\t\t\t\treturn NULL;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tsend_data(conn);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif (new_connection == 1) {\n\t\t\tpbs_socklen_t addrlen = sizeof(clientaddr);\n\t\t\tif ((newfd = tpp_sock_accept(td->listen_fd, (struct sockaddr *) &clientaddr, &addrlen)) == -1) {\n\t\t\t\ttpp_log(LOG_ERR, NULL, \"tpp_sock_accept() error, errno=%d\", errno);\n\t\t\t\tif (errno == EMFILE) {\n\t\t\t\t\t/* out of files, sleep couple of seconds to avoid error coming in loop */\n\t\t\t\t\tsleep(2);\n\t\t\t\t}\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tconn = alloc_conn(newfd);\n\t\t\tif (!conn) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Allocating socket connection failed.\");\n\t\t\t\ttpp_sock_close(newfd);\n\t\t\t\treturn NULL;\n\t\t\t}\n\n\t\t\tconn->net_state = TPP_CONN_CONNECTED;\n\n\t\t\tconn->conn_params = calloc(1, sizeof(conn_param_t));\n\t\t\tif (!conn->conn_params) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating connection params\");\n\t\t\t\tif (tpp_write_lock(&cons_array_lock))\n\t\t\t\t\treturn 
NULL;\n\t\t\t\tconns_array[newfd].slot_state = TPP_SLOT_FREE;\n\t\t\t\tconns_array[newfd].conn = NULL;\n\t\t\t\ttpp_unlock_rwlock(&cons_array_lock);\n\t\t\t\tfree_phy_conn(conn);\n\t\t\t\ttpp_sock_close(newfd);\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tconn->conn_params->need_resvport = strcmp(tpp_conf->auth_config->auth_method, AUTH_RESVPORT_NAME) == 0;\n\t\t\tconn->conn_params->hostname = strdup(tpp_netaddr_sa(&clientaddr));\n\t\t\tconn->conn_params->port = ntohs(((struct sockaddr_in *) &clientaddr)->sin_port);\n\n\t\t\t/**\n\t\t\t *  accept socket, and add socket to stream, assign stream to\n\t\t\t * thread, and write to that thread control pipe\n\t\t\t **/\n\t\t\tassign_to_worker(newfd, 0, NULL); /* time 0 means no delay */\n\t\t}\n\t}\n\treturn NULL;\n}\n\n/**\n * @brief\n *\tFunction to close a transport layer connection.\n *\n * @param[in] tfd - The connection descriptor to be closed\n *\n * @retval 0  - Success\n * @retval -1 - Failure\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nint\ntpp_transport_close(int tfd)\n{\n\t/* write to worker threads send pipe */\n\tif (tpp_post_cmd(tfd, TPP_CMD_CLOSE, NULL) != 0)\n\t\treturn -1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\thandle a disconnect notification by calling the upper layer\n *\tclose_handler. 
Called from the thread main loop inside work().\n *\n * @param[in] conn - The physical connection that was disconnected\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n * @return Error code\n * @retval\t1\tFailure\n * @retval\t0\tSuccess\n */\nstatic int\nhandle_disconnect(phy_conn_t *conn)\n{\n\tint error;\n\tshort cmd;\n\tint tfd;\n\ttpp_packet_t *pkt;\n\tpbs_socklen_t len = sizeof(error);\n\ttpp_que_elem_t *n = NULL;\n\n\tif (conn == NULL || conn->net_state == TPP_CONN_DISCONNECTED)\n\t\treturn 1;\n\n\tif (conn->net_state == TPP_CONN_CONNECTING || conn->net_state == TPP_CONN_CONNECTED) {\n\t\tif (tpp_em_del_fd(conn->td->em_context, conn->sock_fd) == -1) {\n\t\t\ttpp_log(LOG_ERR, __func__, \"Multiplexing failed\");\n\t\t\treturn 1;\n\t\t}\n\t}\n\ttpp_sock_getsockopt(conn->sock_fd, SOL_SOCKET, SO_ERROR, &error, &len);\n\n\tconn->net_state = TPP_CONN_DISCONNECTED;\n\tconn->lasterr = error;\n\n\ttfd = conn->sock_fd; /* store this since close_handler could unset this */\n\n\tif (the_close_handler)\n\t\tthe_close_handler(conn->sock_fd, error, conn->ctx, conn->extra);\n\n\tconn->extra = NULL;\n\n\tif (tpp_write_lock(&cons_array_lock))\n\t\treturn 1;\n\n\t/*\n\t * Since we are freeing the socket connection we must\n\t * empty any pending commands that were in this thread's\n\t * mbox (since this thread is the connection's manager).\n\t *\n\t */\n\tn = NULL;\n\twhile (tpp_mbox_clear(&conn->td->mbox, &n, tfd, &cmd, (void **) &pkt) == 0)\n\t\ttpp_free_pkt(pkt);\n\n\tconns_array[tfd].slot_state = TPP_SLOT_FREE;\n\tconns_array[tfd].conn = NULL;\n\n\ttpp_unlock_rwlock(&cons_array_lock);\n\n\t/* free old connection */\n\tfree_phy_conn(conn);\n\ttpp_sock_close(tfd);\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\thandle incoming data using the scratch space which is part of each\n *\tconnection structure. 
Resize the scratch space if required.\n *\n *\tReceive data as much as available, check if a packet can be formed, if\n *\tso, then call app_pkts to form packets and send to the upper layer to\n *\tbe processed.\n *\n * @param[in] conn - The physical connection\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic void\nhandle_incoming_data(phy_conn_t *conn)\n{\n\tint torecv = 0;\n\tint space_left;\n\tint offset;\n\tint closed;\n\tint pkt_len;\n\tchar *p;\n\tssize_t rc;\n\n\twhile (1) {\n\t\toffset = conn->scratch.pos - conn->scratch.data;\n\t\tspace_left = conn->scratch.len - offset; /* remaining space */\n\t\tif (space_left == 0) {\n\t\t\t/* resize buffer */\n\t\t\tif (conn->scratch.len == 0)\n\t\t\t\tconn->scratch.len = TPP_SCRATCHSIZE;\n\t\t\telse {\n\t\t\t\tconn->scratch.len += TPP_SCRATCHSIZE;\n\t\t\t\ttpp_log(LOG_INFO, __func__, \"Increased scratch size for tfd=%d to %d\", conn->sock_fd, conn->scratch.len);\n\t\t\t}\n\t\t\tp = realloc(conn->scratch.data, conn->scratch.len);\n\t\t\tif (!p) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory resizing scratch data\");\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tconn->scratch.data = p;\n\t\t\tconn->scratch.pos = conn->scratch.data + offset;\n\t\t\tspace_left = conn->scratch.len - offset;\n\t\t}\n\n\t\tif (offset > sizeof(int)) {\n\t\t\tpkt_len = ntohl(*((int *) conn->scratch.data));\n\t\t\ttorecv = pkt_len - offset; /* offset amount of data already received */\n\t\t\tTPP_DBPRT(\"tfd=%d, Need to receive: pkt_len=%d, torecv=%d, space_left=%d bytes\", conn->sock_fd, pkt_len, torecv, space_left);\n\t\t\tif (torecv > space_left)\n\t\t\t\ttorecv = space_left;\n\t\t} else {\n\t\t\t/*\n\t\t\t * we are starting to read a new packet now\n\t\t\t * so we try to read the length part only first\n\t\t\t * so we know how much more to read this is to\n\t\t\t * avoid reading more than one packet, to eliminate memmoves\n\t\t\t */\n\t\t\ttorecv = sizeof(int) + sizeof(char) - offset; /* also read the type character 
*/\n\t\t\tpkt_len = 0;\n\t\t}\n\n\t\t/* receive as much as we can */\n\t\tclosed = 0;\n\t\twhile (torecv > 0) {\n\t\t\trc = tpp_sock_recv(conn->sock_fd, conn->scratch.pos, torecv, 0);\n\t\t\tif (rc == 0) {\n\t\t\t\tclosed = 1; /* received close */\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tif (rc < 0) {\n\t\t\t\tif (errno != EWOULDBLOCK && errno != EAGAIN) {\n\t\t\t\t\thandle_disconnect(conn);\n\t\t\t\t\treturn; /* error case - don't even process data */\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\ttorecv -= rc;\n\t\t\tconn->scratch.pos += rc;\n\t\t}\n\n\t\tif (closed == 1) {\n\t\t\thandle_disconnect(conn);\n\t\t\treturn;\n\t\t}\n\t\tif (torecv > 0) /* did not receive full data, do not try any more */\n\t\t\tbreak;\n\n\t\tif (add_pkt(conn) != 0)\n\t\t\treturn;\n\t}\n}\n\n/**\n * @brief\n *\tAdd a packet to the receiver's buffer, or if the buffer is full\n *  add a deferred action so that it can be checked later\n *\n * @param[in] conn - The physical connection\n *\n * @return Error code\n * @retval 0 - Success\n * @retval -1 - Failure\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic short\nadd_pkt(phy_conn_t *conn)\n{\n\tint avl_len;\n\tint pkt_len;\n\n\tavl_len = conn->scratch.pos - conn->scratch.data;\n\tif (avl_len >= sizeof(int)) {\n\t\tpkt_len = ntohl(*((int *) conn->scratch.data));\n\t\tif (pkt_len < avl_len) {\n\t\t\t/* some data corruption has happened, or somebody is attempting a DoS */\n\t\t\ttpp_log(LOG_CRIT, __func__, \"tfd=%d, Critical error in protocol header, pkt_len=%d, avl_len=%d, dropping connection\", conn->sock_fd, pkt_len, avl_len);\n\t\t\thandle_disconnect(conn);\n\t\t\treturn -1; /* treat as bad data rejected by upper layer */\n\t\t}\n\t\tif (avl_len == pkt_len) {\n\t\t\t/* we got a full packet */\n\t\t\tif (the_pkt_handler) {\n\t\t\t\tif (the_pkt_handler(conn->sock_fd, conn->scratch.data, pkt_len, conn->ctx, conn->extra) != 0) {\n\t\t\t\t\t/* upper layer rejected data, disconnect */\n\t\t\t\t\thandle_disconnect(conn);\n\t\t\t\t\treturn 
-1;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/*\n\t\t\t* no need to memmove or coalesce the data, since we would have read\n\t\t\t* just enough for a packet, so, just reset pointers\n\t\t\t*/\n\t\t\tconn->scratch.pos = conn->scratch.data;\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tLoop over the list of queued data and send out packet by packet\n *\tStop if sending would block.\n *\n * @param[in] conn - The physical connection\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic void\nsend_data(phy_conn_t *conn)\n{\n\ttpp_chunk_t *p = NULL;\n\ttpp_packet_t *pkt = NULL;\n\tssize_t rc;\n\tint curr_pkt_done = 0;\n\tsize_t tosend;\n\n\t/*\n\t * if a socket is still connecting, we will wait to send out data,\n\t * even if app called close - so check this first\n\t */\n\tif ((conn->net_state == TPP_CONN_CONNECTING) || (conn->net_state == TPP_CONN_INITIATING))\n\t\treturn;\n\n\twhile ((conn->ev_mask & EM_OUT) == 0) {\n\t\trc = 0;\n\t\tcurr_pkt_done = 0;\n\n\t\tpkt = conn->curr_send_pkt;\n\t\tif (!pkt) {\n\t\t\t/* get the next packet from send_mbox */\n\t\t\tif (tpp_mbox_read(&conn->send_mbox, NULL, NULL, (void **) &conn->curr_send_pkt) != 0) {\n\t\t\t\tif (!(errno == EAGAIN || errno == EWOULDBLOCK))\n\t\t\t\t\ttpp_log(LOG_ERR, __func__, \"tpp_mbox_read failed\");\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tpkt = conn->curr_send_pkt;\n\t\t}\n\t\tp = pkt->curr_chunk;\n\n\t\t/* data available, first byte, presend handler present, call handler */\n\t\tif ((p == GET_NEXT(pkt->chunks)) && (p->pos == p->data) && the_pkt_presend_handler) {\n\t\t\tif ((rc = the_pkt_presend_handler(conn->sock_fd, pkt, conn->ctx, conn->extra)) == 0) {\n\t\t\t\tp = pkt->curr_chunk; /* presend handler could change pkt contents */\n\t\t\t}\n\t\t}\n\n\t\tif (p && (rc == 0)) {\n\t\t\ttosend = p->len - (p->pos - p->data);\n\t\t\twhile (tosend > 0) {\n\t\t\t\trc = tpp_sock_send(conn->sock_fd, p->pos, tosend, 0);\n\t\t\t\tif (rc < 0) {\n\t\t\t\t\tif (errno == EWOULDBLOCK || errno == EAGAIN) 
{\n\t\t\t\t\t\t/* set this socket in POLLOUT */\n\t\t\t\t\t\tconn->ev_mask |= EM_OUT;\n\t\t\t\t\t\tTPP_DBPRT(\"EWOULDBLOCK, added EM_OUT to ev_mask, now=%x\", conn->ev_mask);\n\t\t\t\t\t\tif (tpp_em_mod_fd(conn->td->em_context, conn->sock_fd, conn->ev_mask) == -1) {\n\t\t\t\t\t\t\ttpp_log(LOG_ERR, __func__, \"Multiplexing failed\");\n\t\t\t\t\t\t\treturn;\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\thandle_disconnect(conn);\n\t\t\t\t\t\treturn;\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tTPP_DBPRT(\"tfd=%d, tosend=%d, sent=%d bytes\", conn->sock_fd, tosend, rc);\n\t\t\t\tp->pos += rc;\n\t\t\t\ttosend -= rc;\n\t\t\t}\n\n\t\t\tif (tosend == 0) {\n\t\t\t\tp = GET_NEXT(p->chunk_link);\n\t\t\t\tif (p)\n\t\t\t\t\tpkt->curr_chunk = p;\n\t\t\t\telse\n\t\t\t\t\tcurr_pkt_done = 1;\n\t\t\t}\n\t\t} else\n\t\t\tcurr_pkt_done = 1;\n\n\t\tif (pkt && curr_pkt_done) {\n\t\t\t/*\n\t\t\t* all data in this packet has been sent or done with.\n\t\t\t* delete this node and get next node in queue\n\t\t\t*/\n\t\t\ttpp_free_pkt(pkt);\n\t\t\tconn->curr_send_pkt = NULL;\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\tFree a physical connection\n *\n * @param[in] conn - The physical connection\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic void\nfree_phy_conn(phy_conn_t *conn)\n{\n\ttpp_que_elem_t *n = NULL;\n\ttpp_packet_t *pkt;\n\tshort cmd;\n\n\tif (!conn)\n\t\treturn;\n\n\tif (conn->conn_params) {\n\t\tif (conn->conn_params->hostname)\n\t\t\tfree(conn->conn_params->hostname);\n\t\tfree(conn->conn_params);\n\t}\n\n\twhile (tpp_mbox_clear(&conn->send_mbox, &n, conn->sock_fd, &cmd, (void **) &pkt) == 0) {\n\t\tif (cmd == TPP_CMD_SEND)\n\t\t\ttpp_free_pkt(pkt);\n\t}\n\n\ttpp_mbox_destroy(&conn->send_mbox);\n\n\tfree(conn->ctx);\n\tfree(conn->scratch.data);\n\tfree(conn);\n}\n\n/**\n * @brief\n *\tShut down this layer, send \"exit\" commands to all threads, and then\n *\tfree the thread pool.\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n * 
@return error code\n * @retval\t1\tfailure\n * @retval\t0\tsuccess\n */\nint\ntpp_transport_shutdown()\n{\n\tint i;\n\tvoid *ret;\n\n\ttpp_log(LOG_INFO, NULL, \"Shutting down TPP transport Layer\");\n\n\tfor (i = 0; i < num_threads; i++) {\n\t\ttpp_mbox_post(&thrd_pool[i]->mbox, 0, TPP_CMD_EXIT, NULL, 0);\n\t}\n\n\tfor (i = 0; i < num_threads; i++) {\n\t\tif (tpp_is_valid_thrd(thrd_pool[i]->worker_thrd_id))\n\t\t\tpthread_join(thrd_pool[i]->worker_thrd_id, &ret);\n\n\t\ttpp_em_destroy(thrd_pool[i]->em_context);\n\t\tfree(thrd_pool[i]->tpp_tls);\n\t\tfree(thrd_pool[i]);\n\t}\n\tfree(thrd_pool);\n\n\tfor (i = 0; i < conns_array_size; i++) {\n\t\tif (conns_array[i].conn) {\n\t\t\ttpp_sock_close(conns_array[i].conn->sock_fd);\n\t\t\tfree_phy_conn(conns_array[i].conn);\n\t\t}\n\t}\n\n\t/* free the array */\n\tfree(conns_array);\n\tif (tpp_destroy_rwlock(&cons_array_lock))\n\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\"Terminate\" this layer, no threads to be stopped, just free all memory\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nint\ntpp_transport_terminate()\n{\n\tint i;\n\n\t/* Warning: Do not attempt to destroy any lock\n\t * This is not required since our library is effectively\n\t * not used after a fork.\n\t *\n\t * Don't bother to free any TPP data as well, as the forked\n\t * process is usually short lived and no point spending time\n\t * freeing space on a short lived forked process. 
Besides,\n\t * the TPP thread which is lost after fork might have been in the\n\t * middle of using this data when the fork happened, so freeing\n\t * some structures might be dangerous.\n\t *\n\t * Thus the only thing we do here is to close files/sockets\n\t * so that the kernel can recognize when a close happens from the\n\t * main process.\n\t *\n\t */\n\n\tfor (i = 0; i < num_threads; i++) {\n\t\tif (thrd_pool[i]->listen_fd > -1)\n\t\t\ttpp_sock_close(thrd_pool[i]->listen_fd);\n\t}\n\n\t/* close all open physical connections, else the child carries an open socket\n\t * and a later close in the parent does not close all sides\n\t */\n\tfor (i = 0; i < conns_array_size; i++) {\n\t\tif (conns_array[i].conn)\n\t\t\ttpp_sock_close(conns_array[i].conn->sock_fd);\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tRetrieve the hostname associated with the given physical connection descriptor\n *\n * @param[in] tfd - Descriptor to the physical connection\n *\n */\nconst char *\ntpp_transport_get_conn_hostname(int tfd)\n{\n\tint slot_state;\n\tphy_conn_t *conn;\n\tconn = get_transport_atomic(tfd, &slot_state);\n\tif (conn) {\n\t\treturn ((const char *) (conn->conn_params->hostname));\n\t}\n\treturn NULL;\n}\n\n/**\n * @brief\n *\tAssociate an extra structure with the given physical connection\n *\n * @param[in] tfd - Descriptor to the physical connection\n * @param[in] extra - Pointer to extra structure\n *\n */\nvoid\ntpp_transport_set_conn_extra(int tfd, void *extra)\n{\n\tint slot_state;\n\tphy_conn_t *conn;\n\tconn = get_transport_atomic(tfd, &slot_state);\n\tif (conn) {\n\t\tconn->extra = extra;\n\t}\n}\n"
  },
  {
    "path": "src/lib/Libtpp/tpp_util.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\ttpp_util.c\n *\n * @brief\tMiscellaneous utility routines used by the TPP library\n *\n *\n */\n#include <pbs_config.h>\n#if RWLOCK_SUPPORT == 2\n#if !defined(_XOPEN_SOURCE)\n#define _XOPEN_SOURCE 500\n#endif\n#endif\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <unistd.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <netdb.h>\n#include <pthread.h>\n#include <sys/types.h>\n#include <sys/socket.h>\n#include <netinet/in.h>\n#include <netinet/tcp.h>\n#include <arpa/inet.h>\n#include <stdarg.h>\n#include <ctype.h>\n#include \"pbs_idx.h\"\n#include \"pbs_error.h\"\n#include \"tpp_internal.h\"\n#include \"dis.h\"\n#ifdef PBS_COMPRESSION_ENABLED\n#include <zlib.h>\n#endif\n\n#define BACKTRACE_SIZE 100\n#include <execinfo.h>\n\n/*\n *\tGlobal Variables\n */\n\n#ifndef WIN32\npthread_mutex_t tpp_nslookup_mutex = PTHREAD_MUTEX_INITIALIZER;\n#endif\n\n/* TLS data for each TPP thread */\nstatic pthread_key_t tpp_key_tls;\nstatic pthread_once_t tpp_once_ctrl = PTHREAD_ONCE_INIT; /* once ctrl to initialize tls key */\n\nlong tpp_log_event_mask = 0;\n\n/* default keepalive values */\n#define DEFAULT_TCP_KEEPALIVE_TIME 30\n#define DEFAULT_TCP_KEEPALIVE_INTVL 10\n#define DEFAULT_TCP_KEEPALIVE_PROBES 3\n#define DEFAULT_TCP_USER_TIMEOUT 60000\n\n#define PBS_TCP_KEEPALIVE \"PBS_TCP_KEEPALIVE\" /* environment 
string to search for */\n\n/* extern functions called from this file into the tpp_transport.c */\nstatic pbs_tcp_chan_t *tppdis_get_user_data(int sd);\n\nvoid\ntpp_auth_logger(int type, int objclass, int severity, const char *objname, const char *text)\n{\n\ttpp_log(severity, objname, (char *) text);\n}\n\n/**\n * @brief\n *\tGet the user buffer associated with the tpp channel. If no buffer has\n *\tbeen set, then allocate a tppdis_chan structure and associate with\n *\tthe given tpp channel\n *\n * @param[in] - fd - Tpp channel to which to get/associate a user buffer\n *\n * @retval\tNULL - Failure\n * @retval\t!NULL - Buffer associated with the tpp channel\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nstatic pbs_tcp_chan_t *\ntppdis_get_user_data(int fd)\n{\n\tvoid *data = tpp_get_user_data(fd);\n\tif (data == NULL) {\n\t\tif (errno != ENOTCONN) {\n\t\t\t/* fd connected, but first time - so call setup */\n\t\t\tdis_setup_chan(fd, (pbs_tcp_chan_t * (*) (int) ) & tpp_get_user_data);\n\t\t\t/* get the buffer again*/\n\t\t\tdata = tpp_get_user_data(fd);\n\t\t}\n\t}\n\treturn (pbs_tcp_chan_t *) data;\n}\n\n/**\n * @brief\n *\tSetup dis function pointers to point to tpp_dis routines\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nvoid\nDIS_tpp_funcs()\n{\n\tpfn_transport_get_chan = tppdis_get_user_data;\n\tpfn_transport_set_chan = (int (*)(int, pbs_tcp_chan_t *)) & tpp_set_user_data;\n\tpfn_transport_recv = tpp_recv;\n\tpfn_transport_send = tpp_send;\n}\n\n/**\n * @brief\n *\t\tThis is the log handler for tpp implemented in the daemon. 
The pointer to\n *\t\tthis function is used by the Libtpp layer when it needs to log something to\n *\t\tthe daemon logs\n *\n * @param[in]\tlevel   - Logging level\n * @param[in]\troutine - Name of the routine on whose behalf the message is logged\n * @param[in]\tfmt     - printf-style format string, followed by its arguments\n *\n */\nvoid\ntpp_log(int level, const char *routine, const char *fmt, ...)\n{\n\tchar id[2 * PBS_MAXHOSTNAME];\n\tchar func[PBS_MAXHOSTNAME];\n\tint thrd_index;\n\tint etype;\n\tint len;\n\tchar logbuf[LOG_BUF_SIZE];\n\tchar *buf;\n\tva_list args;\n\tva_list args2;\n\n#ifdef TPPDEBUG\n\tlevel = LOG_CRIT; /* in TPPDEBUG mode force all messages to be logged */\n#endif\n\tetype = log_level_2_etype(level);\n\n\tfunc[0] = '\\0';\n\tif (routine)\n\t\tsnprintf(func, sizeof(func), \";%s\", routine);\n\n\tthrd_index = tpp_get_thrd_index();\n\tif (thrd_index == -1)\n\t\tsnprintf(id, sizeof(id), \"%s(Main Thread)%s\", msg_daemonname ? msg_daemonname : \"\", func);\n\telse\n\t\tsnprintf(id, sizeof(id), \"%s(Thread %d)%s\", msg_daemonname ? msg_daemonname : \"\", thrd_index, func);\n\n\tva_start(args, fmt);\n\tva_copy(args2, args); /* vsnprintf consumes args; keep a copy for the oversize retry */\n\n\tlen = vsnprintf(logbuf, sizeof(logbuf), fmt, args);\n\n\tif (len >= (int) sizeof(logbuf)) {\n\t\tbuf = pbs_asprintf_format(len, fmt, args2);\n\t\tif (buf == NULL) {\n\t\t\tva_end(args2);\n\t\t\tva_end(args);\n\t\t\treturn;\n\t\t}\n\t} else\n\t\tbuf = logbuf;\n\n\tlog_event(etype, PBS_EVENTCLASS_TPP, level, id, buf);\n\n\tif (buf != logbuf)\n\t\tfree(buf);\n\n\tva_end(args2);\n\tva_end(args);\n}\n\n/**\n * @brief\n *\tHelper function called by PBS daemons to set the tpp configuration to\n *\tbe later used during the tpp_init() call.\n *\n * @param[in] pbs_conf - Pointer to the pbs_config structure\n * @param[out] tpp_conf - The tpp configuration structure duly filled based on\n *\t\t\t  the input parameters\n * @param[in] nodenames - The comma-separated list of names of this side of the communication.\n * @param[in] port     - The port at which this side is identified.\n * @param[in] r        - Comma-separated list of router addresses;\n *\t\t\t 
router addresses are of the form \"host:port\"\n *\n * @return Error code\n * @retval -1 - Failure\n * @retval  0 - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nint\nset_tpp_config(struct pbs_config *pbs_conf, struct tpp_config *tpp_conf, char *nodenames, int port, char *r)\n{\n\tint i, end;\n\tint num_routers = 0;\n\tchar *routers = NULL;\n\tchar *s, *t, *ctx;\n\tchar *nm;\n\tint len, hlen;\n\tchar *token, *saveptr, *tmp;\n\tchar *formatted_names = NULL;\n\n\t/* before doing anything else, initialize the key to the tls\n\t * it's okay to call this function multiple times since it\n\t * uses a pthread_once functionality to initialize the key only once\n\t */\n\tif (tpp_init_tls_key() != 0) {\n\t\t/* can only use prints since tpp key init failed */\n\t\tfprintf(stderr, \"Failed to initialize tls key\\n\");\n\t\treturn -1;\n\t}\n\n\tif (r) {\n\t\trouters = strdup(r);\n\t\tif (!routers) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating routers\");\n\t\t\treturn -1;\n\t\t}\n\t}\n\n\tif (!nodenames) {\n\t\ttpp_log(LOG_CRIT, NULL, \"TPP node name not set\");\n\t\treturn -1;\n\t}\n\n\tif (port == -1) {\n\t\tstruct sockaddr_in in;\n\t\tint sd;\n\t\tint rc;\n\t\ttpp_addr_t *addr;\n\n\t\tif ((sd = tpp_sock_socket(AF_INET, SOCK_STREAM, 0)) == -1) {\n\t\t\ttpp_log(LOG_ERR, __func__, \"tpp_sock_socket() error, errno=%d\", errno);\n\t\t\treturn -1;\n\t\t}\n\n\t\t/* bind this socket to an ephemeral port chosen by the system */\n\t\tin.sin_family = AF_INET;\n\t\tin.sin_addr.s_addr = INADDR_ANY;\n\t\tin.sin_port = 0;\n\t\tmemset(&(in.sin_zero), '\\0', sizeof(in.sin_zero));\n\t\tif ((rc = tpp_sock_bind(sd, (struct sockaddr *) &in, sizeof(in))) == -1) {\n\t\t\ttpp_log(LOG_ERR, __func__, \"tpp_sock_bind() error, errno=%d\", errno);\n\t\t\ttpp_sock_close(sd);\n\t\t\treturn -1;\n\t\t}\n\n\t\taddr = tpp_get_local_host(sd);\n\t\tif (addr) {\n\t\t\tport = addr->port;\n\t\t\tfree(addr);\n\t\t}\n\n\t\tif (port == -1) {\n\t\t\ttpp_log(LOG_ERR, __func__, \"TPP client could 
not detect port to use\");\n\t\t\ttpp_sock_close(sd);\n\t\t\treturn -1;\n\t\t}\n\t\t/* keep this socket open to hold on to the port; just set close-on-exec */\n\t\ttpp_set_close_on_exec(sd);\n\t}\n\n\t/* add port information to the node names and format into a single string as desired by TPP */\n\tlen = 0;\n\ttoken = strtok_r(nodenames, \",\", &saveptr);\n\twhile (token) {\n\t\tnm = mk_hostname(token, port);\n\t\tif (!nm) {\n\t\t\ttpp_log(LOG_CRIT, NULL, \"Failed to make node name\");\n\t\t\treturn -1;\n\t\t}\n\n\t\thlen = strlen(nm);\n\t\tif ((tmp = realloc(formatted_names, len + hlen + 2)) == NULL) { /* 2 for comma and null char */\n\t\t\ttpp_log(LOG_CRIT, NULL, \"Failed to make formatted node name\");\n\t\t\tfree(nm);\n\t\t\treturn -1;\n\t\t}\n\n\t\tformatted_names = tmp;\n\n\t\tif (len == 0) {\n\t\t\tstrcpy(formatted_names, nm);\n\t\t} else {\n\t\t\tstrcat(formatted_names, \",\");\n\t\t\tstrcat(formatted_names, nm);\n\t\t}\n\t\tfree(nm);\n\n\t\tlen += hlen + 2;\n\n\t\ttoken = strtok_r(NULL, \",\", &saveptr);\n\t}\n\n\ttpp_conf->node_name = formatted_names;\n\ttpp_conf->node_type = TPP_LEAF_NODE;\n\ttpp_conf->numthreads = 1;\n\n\ttpp_conf->auth_config = make_auth_config(pbs_conf->auth_method,\n\t\t\t\t\t\t pbs_conf->encrypt_method,\n\t\t\t\t\t\t pbs_conf->pbs_exec_path,\n\t\t\t\t\t\t pbs_conf->pbs_home_path,\n\t\t\t\t\t\t (void *) tpp_auth_logger);\n\tif (tpp_conf->auth_config == NULL) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating auth config\");\n\t\treturn -1;\n\t}\n\n\ttpp_log(LOG_INFO, NULL, \"TPP authentication method = %s\", tpp_conf->auth_config->auth_method);\n\tif (tpp_conf->auth_config->encrypt_method[0] != '\\0')\n\t\ttpp_log(LOG_INFO, NULL, \"TPP encryption method = %s\", tpp_conf->auth_config->encrypt_method);\n\n\tif ((tpp_conf->supported_auth_methods = dup_string_arr(pbs_conf->supported_auth_methods)) == NULL) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory while making copy of supported auth methods\");\n\t\treturn -1;\n\t}\n\n#ifdef PBS_COMPRESSION_ENABLED\n\ttpp_conf->compress = 
pbs_conf->pbs_use_compression;\n#else\n\ttpp_conf->compress = 0;\n#endif\n\n\t/* set default parameters for keepalive */\n\ttpp_conf->tcp_keepalive = 1;\n\ttpp_conf->tcp_keep_idle = DEFAULT_TCP_KEEPALIVE_TIME;\n\ttpp_conf->tcp_keep_intvl = DEFAULT_TCP_KEEPALIVE_INTVL;\n\ttpp_conf->tcp_keep_probes = DEFAULT_TCP_KEEPALIVE_PROBES;\n\ttpp_conf->tcp_user_timeout = DEFAULT_TCP_USER_TIMEOUT;\n\n\t/* if set, read them from environment variable PBS_TCP_KEEPALIVE */\n\tif ((s = getenv(PBS_TCP_KEEPALIVE))) {\n\t\t/*\n\t\t * The format is a comma separated list of values in order, for the following variables,\n\t\t * tcp_keepalive_enable,tcp_keepalive_time,tcp_keepalive_intvl,tcp_keepalive_probes,tcp_user_timeout\n\t\t */\n\t\ttpp_conf->tcp_keepalive = 0;\n\t\tt = strtok_r(s, \",\", &ctx);\n\t\tif (t) {\n\t\t\t/* this has to be the tcp_keepalive_enable value */\n\t\t\tif (atol(t) == 1) {\n\t\t\t\ttpp_conf->tcp_keepalive = 1;\n\n\t\t\t\t/* parse other values only if this is enabled */\n\t\t\t\tif ((t = strtok_r(NULL, \",\", &ctx))) {\n\t\t\t\t\t/* tcp_keepalive_time */\n\t\t\t\t\ttpp_conf->tcp_keep_idle = (int) atol(t);\n\t\t\t\t}\n\n\t\t\t\tif (t && (t = strtok_r(NULL, \",\", &ctx))) {\n\t\t\t\t\t/* tcp_keepalive_intvl */\n\t\t\t\t\ttpp_conf->tcp_keep_intvl = (int) atol(t);\n\t\t\t\t}\n\n\t\t\t\tif (t && (t = strtok_r(NULL, \",\", &ctx))) {\n\t\t\t\t\t/* tcp_keepalive_probes */\n\t\t\t\t\ttpp_conf->tcp_keep_probes = (int) atol(t);\n\t\t\t\t}\n\n\t\t\t\tif (t && (t = strtok_r(NULL, \",\", &ctx))) {\n\t\t\t\t\t/*tcp_user_timeout */\n\t\t\t\t\ttpp_conf->tcp_user_timeout = (int) atol(t);\n\t\t\t\t}\n\n\t\t\t\t/* emit a log depicting what we are going to use as keepalive */\n\t\t\t\ttpp_log(LOG_CRIT, NULL,\n\t\t\t\t\t\"Using tcp_keepalive_time=%d, tcp_keepalive_intvl=%d, tcp_keepalive_probes=%d, tcp_user_timeout=%d\",\n\t\t\t\t\ttpp_conf->tcp_keep_idle, tpp_conf->tcp_keep_intvl, tpp_conf->tcp_keep_probes, tpp_conf->tcp_user_timeout);\n\t\t\t} else {\n\t\t\t\ttpp_log(LOG_CRIT, NULL, 
\"tcp keepalive disabled\");\n\t\t\t}\n\t\t}\n\t}\n\n\ttpp_conf->buf_limit_per_conn = 5000; /* size in KB, TODO: load from pbs.conf */\n\n\tif (routers && routers[0] != '\\0') {\n\t\tchar *p = routers;\n\t\tchar *q;\n\n\t\tnum_routers = 1;\n\n\t\twhile (*p) {\n\t\t\tif (*p == ',')\n\t\t\t\tnum_routers++;\n\t\t\tp++;\n\t\t}\n\n\t\ttpp_conf->routers = calloc(num_routers + 1, sizeof(char *));\n\t\tif (!tpp_conf->routers) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating routers array\");\n\t\t\treturn -1;\n\t\t}\n\n\t\tq = p = routers;\n\t\ti = end = 0;\n\t\twhile (!end) {\n\t\t\tif (!*p)\n\t\t\t\tend = 1;\n\t\t\tif ((*p && *p == ',') || end) {\n\t\t\t\t*p = 0;\n\t\t\t\twhile (isspace(*q))\n\t\t\t\t\tq++;\n\t\t\t\tnm = mk_hostname(q, TPP_DEF_ROUTER_PORT);\n\t\t\t\tif (!nm) {\n\t\t\t\t\ttpp_log(LOG_CRIT, NULL, \"Failed to make router name\");\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t\ttpp_conf->routers[i++] = nm;\n\t\t\t\tq = p + 1;\n\t\t\t}\n\t\t\tif (!end)\n\t\t\t\tp++;\n\t\t}\n\n\t} else {\n\t\ttpp_conf->routers = NULL;\n\t}\n\n\tfor (i = 0; i < num_routers; i++) {\n\t\tif (tpp_conf->routers[i] == NULL || strcmp(tpp_conf->routers[i], tpp_conf->node_name) == 0) {\n\t\t\ttpp_log(LOG_CRIT, NULL, \"Router name NULL or points to same node endpoint %s\", (tpp_conf->routers[i]) ? 
(tpp_conf->routers[i]) : \"\");\n\t\t\treturn -1;\n\t\t}\n\t}\n\n\tif (routers)\n\t\tfree(routers);\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tFree tpp conf member variables; to be called before exit\n *\n * @param[in] tpp_conf - pointer to the tpp conf structure\n *\n */\nvoid\nfree_tpp_config(struct tpp_config *tpp_conf)\n{\n\tfree(tpp_conf->routers);\n\tfree_string_array(tpp_conf->supported_auth_methods);\n\tfree(tpp_conf->node_name);\n\tfree_auth_config(tpp_conf->auth_config);\n}\n\n/**\n * @brief tpp_make_authdata - allocate conn_auth_t structure based on given values\n *\n * @param[in] tpp_conf - pointer to tpp config structure\n * @param[in] conn_type - one of AUTH_CLIENT or AUTH_SERVER\n * @param[in] auth_method - auth method name\n * @param[in] encrypt_method - encrypt method name\n *\n * @return conn_auth_t *\n * @retval !NULL - success\n * @retval NULL  - failure\n */\nconn_auth_t *\ntpp_make_authdata(struct tpp_config *tpp_conf, int conn_type, char *auth_method, char *encrypt_method)\n{\n\tconn_auth_t *authdata = NULL;\n\n\tif ((authdata = (conn_auth_t *) calloc(1, sizeof(conn_auth_t))) == NULL) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory\");\n\t\treturn NULL;\n\t}\n\tauthdata->conn_type = conn_type;\n\tauthdata->config = make_auth_config(auth_method,\n\t\t\t\t\t    encrypt_method,\n\t\t\t\t\t    tpp_conf->auth_config->pbs_exec_path,\n\t\t\t\t\t    tpp_conf->auth_config->pbs_home_path,\n\t\t\t\t\t    tpp_conf->auth_config->logfunc);\n\tif (authdata->config == NULL) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory\");\n\t\tfree(authdata);\n\t\treturn NULL;\n\t}\n\n\treturn authdata;\n}\n\n/**\n * @brief tpp_handle_auth_handshake - initiate handshake or process incoming handshake data\n *\n * @param[in] tfd - file descriptor\n * @param[in] conn_fd - connection fd for sending data\n * @param[in] authdata - pointer to conn auth data struct associated with tfd\n * @param[in] for_encrypt - whether to handle incoming data for encrypt/decrypt or for authentication\n * @param[in] 
data_in - incoming handshake data (if any)\n * @param[in] len_in - length of data_in else 0\n *\n * @return int\n * @return -1 - failure\n * @return 0  - need handshake continuation\n * @return 1  - handshake completed\n */\nint\ntpp_handle_auth_handshake(int tfd, int conn_fd, conn_auth_t *authdata, int for_encrypt, void *data_in, size_t len_in)\n{\n\tvoid *data_out = NULL;\n\tsize_t len_out = 0;\n\tint is_handshake_done = 0;\n\tvoid *authctx = NULL;\n\tauth_def_t *authdef = NULL;\n\n\tif (authdata == NULL) {\n\t\ttpp_log(LOG_CRIT, __func__, \"tfd=%d, No auth data found\", tfd);\n\t\treturn -1;\n\t}\n\n\tif (for_encrypt == FOR_AUTH) {\n\t\tif (authdata->authdef == NULL) {\n\t\t\tauthdef = get_auth(authdata->config->auth_method);\n\t\t\tif (authdef == NULL) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to find authdef\");\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\tauthdata->authdef = authdef;\n\t\t\tauthdef->set_config((const pbs_auth_config_t *) (authdata->config));\n\t\t\tif (authdef->create_ctx(&(authdata->authctx), authdata->conn_type, AUTH_SERVICE_CONN, tpp_transport_get_conn_hostname(tfd))) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to create auth context\");\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t}\n\t\tauthdef = authdata->authdef;\n\t\tauthctx = authdata->authctx;\n\t} else {\n\t\tif (authdata->encryptdef == NULL) {\n\t\t\tauthdef = get_auth(authdata->config->encrypt_method);\n\t\t\tif (authdef == NULL) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to find authdef\");\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\tauthdata->encryptdef = authdef;\n\t\t\tauthdef->set_config((const pbs_auth_config_t *) (authdata->config));\n\t\t\tif (authdef->create_ctx(&(authdata->encryptctx), authdata->conn_type, AUTH_SERVICE_CONN, tpp_transport_get_conn_hostname(tfd))) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to create encrypt context\");\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t}\n\t\tauthdef = authdata->encryptdef;\n\t\tauthctx = 
authdata->encryptctx;\n\t}\n\ttpp_transport_set_conn_extra(tfd, authdata);\n\n\tif (authdef->process_handshake_data(authctx, data_in, len_in, &data_out, &len_out, &is_handshake_done) != 0) {\n\t\tif (len_out > 0) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"%s\", (char *) data_out); /* \"%s\" guards against format specifiers in the error text */\n\t\t\tfree(data_out);\n\t\t}\n\t\treturn -1;\n\t}\n\n\tif (len_out > 0) {\n\t\ttpp_auth_pkt_hdr_t *ahdr = NULL;\n\t\ttpp_packet_t *pkt = NULL;\n\n\t\tpkt = tpp_bld_pkt(NULL, NULL, sizeof(tpp_auth_pkt_hdr_t), 1, (void **) &ahdr);\n\t\tif (!pkt) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to build packet\");\n\t\t\tfree(data_out);\n\t\t\treturn -1;\n\t\t}\n\t\tahdr->type = TPP_AUTH_CTX;\n\t\tahdr->for_encrypt = for_encrypt;\n\t\tstrcpy(ahdr->auth_method, authdata->config->auth_method);\n\t\tstrcpy(ahdr->encrypt_method, authdata->config->encrypt_method);\n\n\t\tif (!tpp_bld_pkt(pkt, data_out, len_out, 0, NULL)) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to build packet\");\n\t\t\tfree(data_out);\n\t\t\treturn -1;\n\t\t}\n\n\t\tif (tpp_transport_vsend(conn_fd, pkt) != 0) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"tpp_transport_vsend failed, err=%d\", errno);\n\t\t\treturn -1;\n\t\t}\n\t}\n\n\t/*\n\t * If we neither sent any handshake data nor completed the handshake,\n\t * error out: we must either send some handshake data\n\t * or the handshake must be complete\n\t */\n\tif (is_handshake_done == 0 && len_out == 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Auth handshake failed\");\n\t\treturn -1;\n\t}\n\n\tif (is_handshake_done != 1)\n\t\treturn 0;\n\n\t/* Verify user name is in list of service users */\n\tif ((for_encrypt == FOR_AUTH) && (authdata->conn_type == AUTH_SERVER)) {\n\t\tchar *user = NULL;\n\t\tchar *host = NULL;\n\t\tchar *realm = NULL;\n\t\tif (authdef->get_userinfo(authctx, &user, &host, &realm) != 0) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"tfd=%d, Could not retrieve username from auth ctx\", tfd);\n\t\t\treturn -1;\n\t\t}\n\t\tif (user != NULL && !is_string_in_arr(pbs_conf.auth_service_users, 
user)) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"tfd=%d, User %s not in service users list\", tfd, user);\n\t\t\treturn -1;\n\t\t}\n\t\tif (user)\n\t\t\tfree(user);\n\t}\n\n\treturn 1;\n}\n\n/**\n * @brief\n *\tCreate a packet structure from the inputs provided\n *\n * @param[in] - pkt  - Pointer to packet to add chunk, or create new packet if NULL\n * @param[in] - data - pointer to data buffer (if NULL provided, no copy happens)\n * @param[in] - len  - Length of data buffer\n * @param[in] - dup  - Make a copy of the data provided?\n * @param[in] - dup_data  - Ptr to copy of data created, if dup is true\n *\n * @return Newly allocated packet structure\n * @retval NULL - Failure (Out of memory)\n * @retval !NULL - Address of allocated packet structure\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\ntpp_packet_t *\ntpp_bld_pkt(tpp_packet_t *pkt, void *data, int len, int dup, void **dup_data)\n{\n\ttpp_chunk_t *chunk;\n\tvoid *d = data;\n\n\t/* first create the requested chunk for the packet */\n\tif ((chunk = malloc(sizeof(tpp_chunk_t))) == NULL) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to build chunk\");\n\t\ttpp_free_pkt(pkt);\n\t\treturn NULL;\n\t}\n\t/* dup flag was provided, so allocate space */\n\tif (dup) {\n\t\td = malloc(len);\n\t\tif (!d) {\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating packet duplicate data for chunk\");\n\t\t\tfree(chunk);\n\t\t\ttpp_free_pkt(pkt);\n\t\t\treturn NULL;\n\t\t}\n\t\tif (data)\n\t\t\tmemcpy(d, data, len);\n\t\tif (dup_data)\n\t\t\t*dup_data = d; /* return allocated data ptr */\n\t}\n\tchunk->data = d;\n\tchunk->pos = chunk->data;\n\tchunk->len = len;\n\tCLEAR_LINK(chunk->chunk_link);\n\n\t/* add chunk to packet */\n\t/* if packet NULL, create packet now and add chunk */\n\tif (pkt == NULL) {\n\t\tif ((pkt = malloc(sizeof(tpp_packet_t))) == NULL) {\n\t\t\tif (d != data)\n\t\t\t\tfree(d);\n\t\t\tfree(chunk); /* chunk is not yet linked to any packet */\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating 
packet\");\n\t\t\treturn NULL;\n\t\t}\n\t\tCLEAR_HEAD(pkt->chunks);\n\t\tpkt->ref_count = 1;\n\t\tpkt->totlen = 0;\n\t\tpkt->curr_chunk = chunk;\n\t}\n\n\tpkt->totlen += len;\n\tappend_link(&pkt->chunks, &chunk->chunk_link, chunk);\n\n\treturn pkt;\n}\n\n/**\n * @brief\n *\tFree a chunk\n *\n * @param[in] - chunk - Ptr to the chunk to be freed.\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nvoid\ntpp_free_chunk(tpp_chunk_t *chunk)\n{\n\tif (chunk) {\n\t\tdelete_link(&chunk->chunk_link);\n\t\tfree(chunk->data);\n\t\tfree(chunk);\n\t}\n}\n\n/**\n * @brief\n *\tFree a packet structure\n *\n * @param[in] - pkt - Ptr to the packet to be freed.\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nvoid\ntpp_free_pkt(tpp_packet_t *pkt)\n{\n\tif (pkt) {\n\t\tpkt->ref_count--;\n\n\t\tif (pkt->ref_count <= 0) {\n\t\t\ttpp_chunk_t *chunk;\n\t\t\twhile ((chunk = GET_NEXT(pkt->chunks)))\n\t\t\t\ttpp_free_chunk(chunk);\n\t\t\tfree(pkt);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\tMark a file descriptor as non-blocking\n *\n * @param[in] - fd - The file descriptor\n *\n * @return\tError code\n * @retval -1\tFailure\n * @retval  0\tSuccess\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_set_non_blocking(int fd)\n{\n\tint flags;\n\n\t/* If they have O_NONBLOCK, use the POSIX way to do it */\n#if defined(O_NONBLOCK)\n\tif (-1 == (flags = fcntl(fd, F_GETFL, 0)))\n\t\tflags = 0;\n\treturn fcntl(fd, F_SETFL, flags | O_NONBLOCK);\n#elif defined(WIN32)\n\tflags = 1;\n\tif (ioctlsocket(fd, FIONBIO, &flags) == SOCKET_ERROR)\n\t\treturn -1;\n\treturn 0;\n#else\n\t/* Otherwise, use the old way of doing it */\n\tflags = 1;\n\treturn ioctl(fd, FIONBIO, &flags);\n#endif\n}\n\n/**\n * @brief\n *\tMark a file descriptor with close on exec flag\n *\n * @param[in] - fd - The file descriptor\n *\n * @return\tError code\n * @retval  0\tSuccess (fcntl failures are ignored)\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: 
Yes\n *\n */\nint\ntpp_set_close_on_exec(int fd)\n{\n#ifndef WIN32\n\tint flags;\n\tif ((flags = fcntl(fd, F_GETFD)) != -1)\n\t\tfcntl(fd, F_SETFD, flags | FD_CLOEXEC);\n#endif\n\treturn 0;\n}\n\n/**\n * @brief\n *\tEnable keepalive on a socket descriptor\n *\n * @param[in] - fd - The socket descriptor\n * @param[in] - cnf - The tpp config structure carrying the keepalive parameters\n *\n * @return\tError code\n * @retval -1\tFailure\n * @retval  0\tSuccess\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_set_keep_alive(int fd, struct tpp_config *cnf)\n{\n\tint optval = 1;\n\tpbs_socklen_t optlen;\n\n\tif (cnf->tcp_keepalive == 0)\n\t\treturn 0; /* not using keepalive, return success */\n\n\toptlen = sizeof(optval);\n\n#ifdef SO_KEEPALIVE\n\toptval = cnf->tcp_keepalive;\n\tif (tpp_sock_setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &optval, optlen) < 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"setsockopt(SO_KEEPALIVE) errno=%d\", errno);\n\t\treturn -1;\n\t}\n#endif\n\n#ifndef WIN32\n#ifdef TCP_KEEPIDLE\n\toptval = cnf->tcp_keep_idle;\n\tif (tpp_sock_setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &optval, optlen) < 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"setsockopt(TCP_KEEPIDLE) errno=%d\", errno);\n\t\treturn -1;\n\t}\n#endif\n\n#ifdef TCP_KEEPINTVL\n\toptval = cnf->tcp_keep_intvl;\n\tif (tpp_sock_setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &optval, optlen) < 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"setsockopt(TCP_KEEPINTVL) errno=%d\", errno);\n\t\treturn -1;\n\t}\n#endif\n\n#ifdef TCP_KEEPCNT\n\toptval = cnf->tcp_keep_probes;\n\tif (tpp_sock_setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &optval, optlen) < 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"setsockopt(TCP_KEEPCNT) errno=%d\", errno);\n\t\treturn -1;\n\t}\n#endif\n\n#ifdef TCP_USER_TIMEOUT\n\toptval = cnf->tcp_user_timeout;\n\tif (tpp_sock_setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT, &optval, optlen) < 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"setsockopt(TCP_USER_TIMEOUT) errno=%d\", errno);\n\t\treturn -1;\n\t}\n#endif\n\n#endif /* !WIN32 */\n\n\treturn 0;\n}\n\n/**\n * @brief\n 
*\tCreate a posix thread\n *\n * @param[in] - start_routine - The threads routine\n * @param[out] - id - The thread id is returned in this field\n * @param[in] - data - The ptr to be passed to the thread routine\n *\n * @return\tError code\n * @retval -1\tFailure\n * @retval  0\tSuccess\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_cr_thrd(void *(*start_routine)(void *), pthread_t *id, void *data)\n{\n\tpthread_attr_t *attr = NULL;\n\tint rc = -1;\n#ifndef WIN32\n\tpthread_attr_t setattr;\n\tsize_t stack_size;\n\n\tattr = &setattr;\n\tif (pthread_attr_init(attr) != 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to initialize attribute\");\n\t\treturn -1;\n\t}\n\tif (pthread_attr_getstacksize(attr, &stack_size) != 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to get stack size of thread\");\n\t\treturn -1;\n\t} else {\n\t\tif (stack_size < MIN_STACK_LIMIT) {\n\t\t\tif (pthread_attr_setstacksize(attr, MIN_STACK_LIMIT) != 0) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to set stack size for thread\");\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t} else {\n\t\t\tif (pthread_attr_setstacksize(attr, stack_size) != 0) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Failed to set stack size for thread\");\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t}\n\t}\n#endif\n\tif (pthread_create(id, attr, start_routine, data) == 0)\n\t\trc = 0;\n\n#ifndef WIN32\n\tif (pthread_attr_destroy(attr) != 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to destroy attribute\");\n\t\treturn -1;\n\t}\n#endif\n\treturn rc;\n}\n\n/**\n * @brief\n *\tInitialize a pthread mutex\n *\n * @param[in] - lock - A pthread mutex variable to initialize\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n * @return error code\n * @retval\t1 failure\n * @retval\t0\tsuccess\n */\nint\ntpp_init_lock(pthread_mutex_t *lock)\n{\n\tpthread_mutexattr_t attr;\n\tint type;\n\n\tif (pthread_mutexattr_init(&attr) != 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to initialize mutex 
attr\");\n\t\treturn 1;\n\t}\n#if defined(linux)\n\ttype = PTHREAD_MUTEX_RECURSIVE_NP;\n#else\n\ttype = PTHREAD_MUTEX_RECURSIVE;\n#endif\n\tif (pthread_mutexattr_settype(&attr, type)) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to set mutex type\");\n\t\treturn 1;\n\t}\n\n\tif (pthread_mutex_init(lock, &attr) != 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to initialize mutex\");\n\t\treturn 1;\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tDestroy a pthread mutex\n *\n * @param[in] - lock - The pthread mutex to destroy\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n * @return \terror code\n * @retval\t1\tfailure\n * @retval\t0\tsuccess\n */\nint\ntpp_destroy_lock(pthread_mutex_t *lock)\n{\n\tif (pthread_mutex_destroy(lock) != 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to destroy mutex\");\n\t\treturn 1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tAcquire lock on a mutex\n *\n * @param[in] - lock - ptr to a mutex variable\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n * @return\terror code\n * @retval\t1\tfailure\n * @retval\t0\tsuccess\n */\nint\ntpp_lock(pthread_mutex_t *lock)\n{\n\tif (pthread_mutex_lock(lock) != 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to lock mutex\");\n\t\treturn 1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tRelease lock on a mutex\n *\n * @param[in] - lock - ptr to a mutex variable\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n * @return\terror code\n * @retval\t1\tfailure\n * @retval\t0\tsuccess\n */\nint\ntpp_unlock(pthread_mutex_t *lock)\n{\n\tif (pthread_mutex_unlock(lock) != 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to unlock mutex\");\n\t\treturn 1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tInitialize a rw lock\n *\n * @param[in] - lock - A pthread rw variable to initialize\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n * @return\terror code\n * @retval\t1\tfailure\n * @retval\t0\tsuccess\n */\nint\ntpp_init_rwlock(void 
*lock)\n{\n\tif (pthread_rwlock_init(lock, NULL) != 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to initialize rw lock\");\n\t\treturn 1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tAcquire read lock on a rw lock\n *\n * @param[in] - lock - ptr to a rw lock variable\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n * @return\terror code\n * @retval\t1\tfailure\n * @retval\t0\tsuccess\n */\nint\ntpp_read_lock(void *lock)\n{\n\tif (pthread_rwlock_rdlock(lock) != 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed in rdlock\");\n\t\treturn 1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tAcquire write lock on a rw lock\n *\n * @param[in] - lock - ptr to a rw lock variable\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n * @return\terror code\n * @retval\t1\tfailure\n * @retval\t0\tsuccess\n */\nint\ntpp_write_lock(void *lock)\n{\n\tif (pthread_rwlock_wrlock(lock) != 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to wrlock\");\n\t\treturn 1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tUnlock an rw lock\n *\n * @param[in] - lock - ptr to a rw lock variable\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n * @return\terror code\n * @retval\t1\tfailure\n * @retval\t0\tsuccess\n */\nint\ntpp_unlock_rwlock(void *lock)\n{\n\tif (pthread_rwlock_unlock(lock) != 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to unlock rw lock\");\n\t\treturn 1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tDestroy a rw lock\n *\n * @param[in] - lock - The rw lock to destroy\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n * @return\terror code\n * @retval\t1\tfailure\n * @retval\t0\tsuccess\n */\nint\ntpp_destroy_rwlock(void *lock)\n{\n\tif (pthread_rwlock_destroy(lock) != 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to destroy rw lock\");\n\t\treturn 1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tParse a hostname:port format and break into host and port portions.\n *\tIf port is not available, set the default port to TPP_DEF_ROUTER_PORT.\n *\n * @param[in] - full - The full hostname 
(host:port)\n * @param[out] - port - The port extracted from the full hostname\n *\n * @return\thostname part\n * @retval NULL Failure\n * @retval !NULL hostname\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nchar *\ntpp_parse_hostname(char *full, int *port)\n{\n\tchar *p;\n\tchar *host = NULL;\n\n\t*port = TPP_DEF_ROUTER_PORT;\n\tif ((host = strdup(full)) == NULL)\n\t\treturn NULL;\n\n\tif ((p = strstr(host, \":\"))) {\n\t\t*p = '\\0';\n\t\t*port = atol(p + 1);\n\t}\n\treturn host;\n}\n\n/**\n * @brief\n *\tEnqueue a node to a queue\n *\n * Linked List is from right to left\n * Insertion (tail) is at the right\n * and Deletion (head) is at left\n *\n * head ---> node ---> node ---> tail\n *\n * ---> Next points to right\n *\n * @param[in] - l - The address of the queue\n * @param[in] - data   - Data to be added as a node to the queue\n *\n * @return\tThe ptr to the newly created queue node\n * @retval\tNULL - Failed to enqueue data (out of memory)\n * @retval\t!NULL - Ptr to the newly created node\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\ntpp_que_elem_t *\ntpp_enque(tpp_que_t *l, void *data)\n{\n\ttpp_que_elem_t *nd;\n\n\tif ((nd = malloc(sizeof(tpp_que_elem_t))) == NULL) {\n\t\treturn NULL;\n\t}\n\tnd->queue_data = data;\n\n\tif (l->tail) {\n\t\tnd->prev = l->tail;\n\t\tnd->next = NULL;\n\t\tl->tail->next = nd;\n\t\tl->tail = nd;\n\t} else {\n\t\tl->tail = nd;\n\t\tl->head = nd;\n\t\tnd->next = NULL;\n\t\tnd->prev = NULL;\n\t}\n\treturn nd;\n}\n\n/**\n * @brief\n *\tDe-queue (remove) a node from a queue\n *\n * Linked List is from right to left\n * Insertion (tail) is at the right\n * and Deletion (head) is at left\n *\n * head ---> node ---> node ---> tail\n *\n * ---> Next points to right\n *\n * @param[in] - l - The address of the queue\n *\n * @return\tThe ptr to the data from the node just removed from queue\n * @retval\tNULL - Queue is empty\n * @retval\t!NULL - Ptr to the data\n *\n * @par Side 
Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nvoid *\ntpp_deque(tpp_que_t *l)\n{\n\tvoid *data = NULL;\n\ttpp_que_elem_t *p;\n\tif (l->head) {\n\t\tdata = l->head->queue_data;\n\t\tp = l->head;\n\t\tl->head = l->head->next;\n\t\tif (l->head)\n\t\t\tl->head->prev = NULL;\n\t\telse\n\t\t\tl->tail = NULL;\n\t\tfree(p);\n\t}\n\treturn data;\n}\n\n/**\n * @brief\n *\tDelete a specific node from a queue\n *\n * Linked List is from right to left\n * Insertion (tail) is at the right\n * and Deletion (head) is at left\n *\n * head ---> node ---> node ---> tail\n *\n * ---> Next points to right\n *\n * @param[in] - l - The address of the queue\n * @param[in] - n - Ptr of the node to remove\n *\n * @return\tThe ptr to the previous node in the queue (or NULL)\n * @retval\tNULL - No previous node (n was NULL or was at the head)\n * @retval\t!NULL - Ptr to the previous node\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\ntpp_que_elem_t *\ntpp_que_del_elem(tpp_que_t *l, tpp_que_elem_t *n)\n{\n\ttpp_que_elem_t *p = NULL;\n\tif (n) {\n\t\tif (n->next) {\n\t\t\tn->next->prev = n->prev;\n\t\t}\n\t\tif (n->prev) {\n\t\t\tn->prev->next = n->next;\n\t\t}\n\n\t\tif (n == l->head) {\n\t\t\tl->head = n->next;\n\t\t}\n\t\tif (n == l->tail) {\n\t\t\tl->tail = n->prev;\n\t\t}\n\t\tif (l->head == NULL || l->tail == NULL) {\n\t\t\tl->tail = NULL;\n\t\t\tl->head = NULL;\n\t\t}\n\t\tif (n->prev)\n\t\t\tp = n->prev;\n\t\t/* else return p as NULL, so list QUE_NEXT starts from head again */\n\t\tfree(n);\n\t}\n\treturn p;\n}\n\n/**\n * @brief\n *\tInsert a node at a specific position in the queue\n *\n * Linked List is from right to left\n * Insertion (tail) is at the right\n * and Deletion (head) is at left\n *\n * head ---> node ---> node ---> tail\n *\n * ---> Next points to right\n *\n * @param[in] - l - The address of the queue\n * @param[in] - n - Ptr to the location at which to insert node\n * @param[in] - data - Data to be put in the new node\n * @param[in] - before - 
Insert before or after the node location of n\n *\n * @return\tThe ptr to the just inserted node\n * @retval\tNULL - Failed to insert data (out of memory)\n * @retval\t!NULL - Ptr to the newly created node\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\ntpp_que_elem_t *\ntpp_que_ins_elem(tpp_que_t *l, tpp_que_elem_t *n, void *data, int before)\n{\n\ttpp_que_elem_t *nd = NULL;\n\n\tif (n) {\n\t\tif ((nd = malloc(sizeof(tpp_que_elem_t))) == NULL) {\n\t\t\treturn NULL;\n\t\t}\n\t\tnd->queue_data = data;\n\t\tif (before == 0) {\n\t\t\t/* after */\n\t\t\tnd->next = n->next;\n\t\t\tnd->prev = n;\n\t\t\tif (n->next)\n\t\t\t\tn->next->prev = nd;\n\t\t\tn->next = nd;\n\t\t\tif (n == l->tail)\n\t\t\t\tl->tail = nd;\n\n\t\t} else {\n\t\t\t/* before */\n\t\t\tnd->prev = n->prev;\n\t\t\tnd->next = n;\n\t\t\tif (n->prev)\n\t\t\t\tn->prev->next = nd;\n\t\t\tn->prev = nd;\n\t\t\tif (n == l->head)\n\t\t\t\tl->head = nd;\n\t\t}\n\t}\n\treturn nd;\n}\n\n/**\n * @brief\n *\tConvenience function to set the control header and send the control\n *\tpacket (TPP_CTL_MSG) to the given destination by calling\n *\ttpp_transport_vsend.\n *\n * @param[in] - fd - The physical connection via which to send control packet\n * @param[in] - code - The control-message code to set in the header\n * @param[in] - src - The host:port of the source (sender)\n * @param[in] - dest - The host:port of the destination\n * @param[in] - src_sd - The source stream descriptor to set in the header\n * @param[in] - err_num - The error number to set in the header\n * @param[in] - msg - The message text to send (NULL is treated as an empty string)\n *\n * @return\tError code\n * @retval\t-1    - Failure\n * @retval\t 0    - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\ntpp_send_ctl_msg(int fd, int code, tpp_addr_t *src, tpp_addr_t *dest, unsigned int src_sd, char err_num, char *msg)\n{\n\ttpp_ctl_pkt_hdr_t *lhdr = NULL;\n\ttpp_packet_t *pkt = NULL;\n\n\t/* send a packet back to where the original packet came from\n\t * basically reverse src and dest\n\t */\n\tpkt = tpp_bld_pkt(NULL, NULL, sizeof(tpp_ctl_pkt_hdr_t), 1, (void **) &lhdr);\n\tif (!pkt) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to build packet\");\n\t\treturn 
-1;\n\t}\n\tlhdr->type = TPP_CTL_MSG;\n\tlhdr->code = code;\n\tlhdr->src_sd = htonl(src_sd);\n\tlhdr->error_num = err_num;\n\tif (src)\n\t\tmemcpy(&lhdr->dest_addr, src, sizeof(tpp_addr_t));\n\tif (dest)\n\t\tmemcpy(&lhdr->src_addr, dest, sizeof(tpp_addr_t));\n\tif (msg == NULL)\n\t\tmsg = \"\";\n\n\tif (!tpp_bld_pkt(pkt, msg, strlen(msg) + 1, 1, NULL)) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to build packet\");\n\t\treturn -1;\n\t}\n\n\tTPP_DBPRT(\"Sending CTL PKT: sd=%d, msg=%s\", src_sd, msg);\n\tif (tpp_transport_vsend(fd, pkt) != 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"tpp_transport_vsend failed\");\n\t\treturn -1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tCombine the host and port parameters into a single string.\n *\n * @param[in] - host - hostname\n * @param[in] - port - port to append if not already present in host (-1 to skip)\n *\n * @return\tThe combined string with the host:port\n * @retval\tNULL - Failure (out of memory)\n * @retval\t!NULL - Combined string\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nchar *\nmk_hostname(char *host, int port)\n{\n\tchar *node_name = malloc(strlen(host) + 10);\n\tif (node_name) {\n\t\tif (strchr(host, ':') || port == -1)\n\t\t\tstrcpy(node_name, host);\n\t\telse\n\t\t\tsprintf(node_name, \"%s:%d\", host, port);\n\t}\n\treturn node_name;\n}\n\n/**\n * @brief\n *\tOnce function for initializing TLS key\n *\n * @return\tvoid\n *\n * @par Side Effects:\n *\tInitializes the global tpp_key_tls; prints an error to stderr on failure\n *\n * @par MT-safe: No\n *\n */\nstatic void\ntpp_init_tls_key_once(void)\n{\n\tif (pthread_key_create(&tpp_key_tls, NULL) != 0) {\n\t\tfprintf(stderr, \"Failed to initialize TLS key\\n\");\n\t}\n}\n\n/**\n * @brief\n *\tInitialize the TLS key\n *\n *  @return    - Error code\n *\t@retval -1 - Failure\n *\t@retval  0 - Success\n *\n * @par Side Effects:\n *\tInitializes the global tpp_key_tls\n *\n * @par MT-safe: No\n *\n 
*/\nint\ntpp_init_tls_key()\n{\n\tif (pthread_once(&tpp_once_ctrl, tpp_init_tls_key_once) != 0)\n\t\treturn -1;\n\treturn 0;\n}\n\n/**\n * @brief\n *\tGet the data from the thread TLS\n *\n * @return\tPointer to the tpp_tls_t structure from the thread's TLS\n * @retval\tNULL - Pthread functions failed\n * @retval\t!NULL - Data from TLS\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\ntpp_tls_t *\ntpp_get_tls()\n{\n\ttpp_tls_t *ptr;\n\tif ((ptr = pthread_getspecific(tpp_key_tls)) == NULL) {\n\t\tptr = calloc(1, sizeof(tpp_tls_t));\n\t\tif (!ptr)\n\t\t\treturn NULL;\n\n\t\tif (pthread_setspecific(tpp_key_tls, ptr) != 0) {\n\t\t\tfree(ptr);\n\t\t\treturn NULL;\n\t\t}\n\t}\n\treturn (tpp_tls_t *) ptr; /* thread data already initialized */\n}\n\n#ifdef PBS_COMPRESSION_ENABLED\n\n#define COMPR_LEVEL Z_DEFAULT_COMPRESSION\n\nstruct def_ctx {\n\tz_stream cmpr_strm;\n\tvoid *cmpr_buf;\n\tint len;\n};\n\n/**\n * @brief\n *\tInitialize a multi step deflation\n *\tAllocate an initial result buffer of given length\n *\n * @param[in] initial_len -  initial length of result buffer\n *\n * @return - The deflate context\n * @retval - NULL  - Failure\n * @retval - !NULL - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nvoid *\ntpp_multi_deflate_init(int initial_len)\n{\n\tint ret;\n\tstruct def_ctx *ctx = malloc(sizeof(struct def_ctx));\n\tif (!ctx) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating context buffer %lu bytes\", sizeof(struct def_ctx));\n\t\treturn NULL;\n\t}\n\n\tif ((ctx->cmpr_buf = malloc(initial_len)) == NULL) {\n\t\tfree(ctx);\n\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating deflate buffer %d bytes\", initial_len);\n\t\treturn NULL;\n\t}\n\n\t/* allocate deflate state */\n\tctx->cmpr_strm.zalloc = Z_NULL;\n\tctx->cmpr_strm.zfree = Z_NULL;\n\tctx->cmpr_strm.opaque = Z_NULL;\n\tret = deflateInit(&ctx->cmpr_strm, COMPR_LEVEL);\n\tif (ret != Z_OK) 
{\n\t\tfree(ctx->cmpr_buf);\n\t\tfree(ctx);\n\t\ttpp_log(LOG_CRIT, __func__, \"Multi compression init failed\");\n\t\treturn NULL;\n\t}\n\n\tctx->len = initial_len;\n\tctx->cmpr_strm.avail_out = initial_len;\n\tctx->cmpr_strm.next_out = ctx->cmpr_buf;\n\treturn (void *) ctx;\n}\n\n/**\n * @brief\n *\tAdd data to a multi step deflation\n *\n * @param[in] c - The deflate context\n * @param[in] fini - Whether this call is the final data addition\n * @param[in] inbuf - Pointer to data buffer to add\n * @param[in] inlen - Length of input buffer to add\n *\n * @return - Error code\n * @retval - -1  - Failure\n * @retval -  0  - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nint\ntpp_multi_deflate_do(void *c, int fini, void *inbuf, unsigned int inlen)\n{\n\tstruct def_ctx *ctx = c;\n\tint flush;\n\tint ret;\n\tint filled;\n\tvoid *p;\n\n\tctx->cmpr_strm.avail_in = inlen;\n\tctx->cmpr_strm.next_in = inbuf;\n\n\tflush = (fini == 1) ? Z_FINISH : Z_NO_FLUSH;\n\twhile (1) {\n\t\tret = deflate(&ctx->cmpr_strm, flush);\n\t\tif (ret == Z_OK && ctx->cmpr_strm.avail_out == 0) {\n\t\t\t/* more output pending, but no output buffer space */\n\t\t\tfilled = (char *) ctx->cmpr_strm.next_out - (char *) ctx->cmpr_buf;\n\t\t\tctx->len = ctx->len * 2;\n\t\t\tp = realloc(ctx->cmpr_buf, ctx->len);\n\t\t\tif (!p) {\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating deflate buffer %d bytes\", ctx->len);\n\t\t\t\tdeflateEnd(&ctx->cmpr_strm);\n\t\t\t\tfree(ctx->cmpr_buf);\n\t\t\t\tfree(ctx);\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\tctx->cmpr_buf = p;\n\t\t\tctx->cmpr_strm.next_out = (Bytef *) ((char *) ctx->cmpr_buf + filled);\n\t\t\tctx->cmpr_strm.avail_out = ctx->len - filled;\n\t\t} else\n\t\t\tbreak;\n\t}\n\tif (fini == 1 && ret != Z_STREAM_END) {\n\t\tdeflateEnd(&ctx->cmpr_strm);\n\t\tfree(ctx->cmpr_buf);\n\t\tfree(ctx);\n\t\ttpp_log(LOG_CRIT, __func__, \"Multi compression step failed\");\n\t\treturn -1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n 
*\tComplete the deflate and return the compressed buffer\n *\n * @param[in] c - The deflate context\n * @param[out] cmpr_len - The total length after compression\n *\n * @return - compressed buffer\n * @retval - NULL  - Failure\n * @retval - !NULL - Success\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: No\n *\n */\nvoid *\ntpp_multi_deflate_done(void *c, unsigned int *cmpr_len)\n{\n\tstruct def_ctx *ctx = c;\n\tvoid *data = ctx->cmpr_buf;\n\tint ret;\n\n\t*cmpr_len = ctx->cmpr_strm.total_out;\n\n\tret = deflateEnd(&ctx->cmpr_strm);\n\tfree(ctx);\n\tif (ret != Z_OK) {\n\t\tfree(data);\n\t\ttpp_log(LOG_CRIT, __func__, \"Compression cleanup failed\");\n\t\treturn NULL;\n\t}\n\treturn data;\n}\n\n/**\n * @brief Deflate (compress) data\n *\n * @param[in] inbuf   - Ptr to buffer to compress\n * @param[in] inlen   - The size of input buffer\n * @param[out] outlen - The size of the compressed data\n *\n * @return      - Ptr to the compressed data buffer\n * @retval  !NULL - Success\n * @retval   NULL - Failure\n *\n * @par MT-safe: No\n **/\nvoid *\ntpp_deflate(void *inbuf, unsigned int inlen, unsigned int *outlen)\n{\n\tz_stream strm;\n\tint ret;\n\tvoid *data;\n\tunsigned int filled;\n\tvoid *p;\n\tint len;\n\n\t*outlen = 0;\n\n\t/* allocate deflate state */\n\tstrm.zalloc = Z_NULL;\n\tstrm.zfree = Z_NULL;\n\tstrm.opaque = Z_NULL;\n\tret = deflateInit(&strm, Z_DEFAULT_COMPRESSION);\n\tif (ret != Z_OK) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Compression failed\");\n\t\treturn NULL;\n\t}\n\n\t/* set input data to be compressed */\n\tlen = inlen;\n\tstrm.avail_in = len;\n\tstrm.next_in = inbuf;\n\n\t/* allocate buffer to temporarily collect compressed data */\n\tdata = malloc(len);\n\tif (!data) {\n\t\tdeflateEnd(&strm);\n\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating deflate buffer %d bytes\", len);\n\t\treturn NULL;\n\t}\n\n\t/* run deflate() on input until output buffer not full, finish\n\t * compression if all of source has been read in\n\t */\n\n\tstrm.avail_out = 
len;\n\tstrm.next_out = data;\n\twhile (1) {\n\t\tret = deflate(&strm, Z_FINISH);\n\t\tif (ret == Z_OK && strm.avail_out == 0) {\n\t\t\t/* more output pending, but no output buffer space */\n\t\t\tfilled = (char *) strm.next_out - (char *) data;\n\t\t\tlen = len * 2;\n\t\t\tp = realloc(data, len);\n\t\t\tif (!p) {\n\t\t\t\tdeflateEnd(&strm);\n\t\t\t\tfree(data);\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating deflate buffer %d bytes\", len);\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tdata = p;\n\t\t\tstrm.next_out = (Bytef *) ((char *) data + filled);\n\t\t\tstrm.avail_out = len - filled;\n\t\t} else\n\t\t\tbreak;\n\t}\n\tdeflateEnd(&strm); /* clean up */\n\tif (ret != Z_STREAM_END) {\n\t\tfree(data);\n\t\ttpp_log(LOG_CRIT, __func__, \"Compression failed\");\n\t\treturn NULL;\n\t}\n\tfilled = (char *) strm.next_out - (char *) data;\n\n\t/* reduce the memory area occupied */\n\tif (filled != inlen) {\n\t\tp = realloc(data, filled);\n\t\tif (!p) {\n\t\t\tfree(data);\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating deflate buffer %d bytes\", filled);\n\t\t\treturn NULL;\n\t\t}\n\t\tdata = p;\n\t}\n\n\t*outlen = filled;\n\treturn data;\n}\n\n/**\n * @brief Inflate (de-compress) data\n *\n * @param[in] inbuf  - Ptr to compressed data buffer\n * @param[in] inlen  - The size of input buffer\n * @param[in] totlen - The total size of the uncompressed data\n *\n * @return      - Ptr to the uncompressed data buffer\n * @retval  !NULL - Success\n * @retval   NULL - Failure\n *\n * @par MT-safe: No\n **/\nvoid *\ntpp_inflate(void *inbuf, unsigned int inlen, unsigned int totlen)\n{\n\tint ret;\n\tz_stream strm;\n\tvoid *outbuf = NULL;\n\n\t/*\n\t * in some rare cases totlen < compressed_len (inlen)\n\t * so safer to malloc the larger of the two values\n\t */\n\toutbuf = malloc(totlen > inlen ? 
totlen : inlen);\n\tif (!outbuf) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating inflate buffer %d bytes\", totlen);\n\t\treturn NULL;\n\t}\n\n\t/* allocate inflate state */\n\tstrm.zalloc = Z_NULL;\n\tstrm.zfree = Z_NULL;\n\tstrm.opaque = Z_NULL;\n\tstrm.avail_in = 0;\n\tstrm.next_in = Z_NULL;\n\tret = inflateInit(&strm);\n\tif (ret != Z_OK) {\n\t\tfree(outbuf);\n\t\ttpp_log(LOG_CRIT, __func__, \"Decompression Init (inflateInit) failed, ret = %d\", ret);\n\t\treturn NULL;\n\t}\n\n\t/* decompress until deflate stream ends or end of file */\n\tstrm.avail_in = inlen;\n\tstrm.next_in = inbuf;\n\n\t/* run inflate() on input until output buffer not full */\n\tstrm.avail_out = totlen;\n\tstrm.next_out = outbuf;\n\tret = inflate(&strm, Z_FINISH);\n\tinflateEnd(&strm);\n\tif (ret != Z_STREAM_END) {\n\t\tfree(outbuf);\n\t\ttpp_log(LOG_CRIT, __func__, \"Decompression (inflate) failed, ret = %d\", ret);\n\t\treturn NULL;\n\t}\n\treturn outbuf;\n}\n#else\nvoid *\ntpp_multi_deflate_init(int initial_len)\n{\n\ttpp_log(LOG_CRIT, __func__, \"TPP compression disabled\");\n\treturn NULL;\n}\n\nint\ntpp_multi_deflate_do(void *c, int fini, void *inbuf, unsigned int inlen)\n{\n\ttpp_log(LOG_CRIT, __func__, \"TPP compression disabled\");\n\treturn -1;\n}\n\nvoid *\ntpp_multi_deflate_done(void *c, unsigned int *cmpr_len)\n{\n\ttpp_log(LOG_CRIT, __func__, \"TPP compression disabled\");\n\treturn NULL;\n}\n\nvoid *\ntpp_deflate(void *inbuf, unsigned int inlen, unsigned int *outlen)\n{\n\ttpp_log(LOG_CRIT, __func__, \"TPP compression disabled\");\n\treturn NULL;\n}\n\nvoid *\ntpp_inflate(void *inbuf, unsigned int inlen, unsigned int totlen)\n{\n\ttpp_log(LOG_CRIT, __func__, \"TPP compression disabled\");\n\treturn NULL;\n}\n#endif\n\n/**\n * @brief Convenience function to validate a tpp header\n *\n * @param[in] tfd       - The transport fd\n * @param[in] pkt_start - The start address of the pkt\n *\n * @return - Packet validity status\n * @retval  0 - Packet has valid header 
structure\n * @retval -1 - Packet has invalid header structure\n *\n * @par MT-safe: No\n *\n **/\nint\ntpp_validate_hdr(int tfd, char *pkt_start)\n{\n\tenum TPP_MSG_TYPES type;\n\tchar *data;\n\tint data_len;\n\n\tdata_len = ntohl(*((int *) pkt_start));\n\tdata = pkt_start + sizeof(int);\n\ttype = *((unsigned char *) data);\n\n\tif ((data_len < 0 || type >= TPP_LAST_MSG) ||\n\t    (data_len > TPP_SEND_SIZE &&\n\t     type != TPP_DATA &&\n\t     type != TPP_MCAST_DATA &&\n\t     type != TPP_ENCRYPTED_DATA &&\n\t     type != TPP_AUTH_CTX)) {\n\t\ttpp_log(LOG_CRIT, __func__, \"tfd=%d, Received invalid packet type with type=%d? data_len=%d\", tfd, type, data_len);\n\t\treturn -1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief Get a list of addresses for a given hostname\n *\n * @param[in] node_names - comma separated hostnames, each of format host:port\n * @param[out] count - return address count\n *\n * @return        - Array of addresses in tpp_addr structures\n * @retval  !NULL - Array of addresses returned\n * @retval  NULL  - call failed\n *\n * @par MT-safe: Yes\n *\n **/\ntpp_addr_t *\ntpp_get_addresses(char *names, int *count)\n{\n\ttpp_addr_t *addrs = NULL;\n\ttpp_addr_t *addrs_tmp = NULL;\n\tint tot_count = 0;\n\tint tmp_count;\n\tint i, j;\n\tchar *token;\n\tchar *saveptr;\n\tint port;\n\tchar *p;\n\tchar *node_names;\n\n\t*count = 0;\n\tif ((node_names = strdup(names)) == NULL) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating address block\");\n\t\treturn NULL;\n\t}\n\n\ttoken = strtok_r(node_names, \",\", &saveptr);\n\twhile (token) {\n\t\t/* parse port from host name */\n\t\tif ((p = strchr(token, ':')) == NULL) {\n\t\t\tfree(addrs);\n\t\t\tfree(node_names);\n\t\t\treturn NULL;\n\t\t}\n\n\t\t*p = '\\0';\n\t\tport = atol(p + 1);\n\n\t\taddrs_tmp = tpp_sock_resolve_host(token, &tmp_count); /* get all ipv4 addresses */\n\t\tif (addrs_tmp) {\n\t\t\ttpp_addr_t *tmp;\n\t\t\tif ((tmp = realloc(addrs, (tot_count + tmp_count) * sizeof(tpp_addr_t))) == NULL) 
{\n\t\t\t\tfree(addrs);\n\t\t\t\tfree(node_names);\n\t\t\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating address block\");\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\taddrs = tmp;\n\n\t\t\tfor (i = 0; i < tmp_count; i++) {\n\t\t\t\tfor (j = 0; j < tot_count; j++) {\n\t\t\t\t\tif (memcmp(&addrs[j].ip, &addrs_tmp[i].ip, sizeof(addrs_tmp[i].ip)) == 0)\n\t\t\t\t\t\tbreak;\n\t\t\t\t}\n\n\t\t\t\t/* add if duplicate not found already */\n\t\t\t\tif (j == tot_count) {\n\t\t\t\t\tmemmove(&addrs[tot_count], &addrs_tmp[i], sizeof(tpp_addr_t));\n\t\t\t\t\taddrs[tot_count].port = htons(port);\n\t\t\t\t\ttot_count++;\n\t\t\t\t}\n\t\t\t}\n\t\t\tfree(addrs_tmp);\n\t\t}\n\n\t\ttoken = strtok_r(NULL, \",\", &saveptr);\n\t}\n\tfree(node_names);\n\n\t*count = tot_count;\n\treturn addrs; /* free @ caller */\n}\n\n/**\n * @brief Get the address of the local end of the connection\n *\n * @param[in] sock - connection id\n *\n * @return - Address of the local end of the connection in tpp_addr format\n * @retval  !NULL - address returned\n * @retval  NULL  - call failed\n *\n * @par MT-safe: Yes\n **/\ntpp_addr_t *\ntpp_get_local_host(int sock)\n{\n\tstruct sockaddr_storage addrs;\n\tstruct sockaddr *addr = (struct sockaddr *) &addrs;\n\tstruct sockaddr_in *inp = NULL;\n\tstruct sockaddr_in6 *inp6 = NULL;\n\ttpp_addr_t *taddr = NULL;\n\tsocklen_t len = sizeof(addrs); /* full sockaddr_storage so IPv6 addresses are not truncated */\n\n\tif (getsockname(sock, addr, &len) == -1) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Could not get local address for sock %d, errno=%d\", sock, errno);\n\t\treturn NULL;\n\t}\n\tif (addr->sa_family != AF_INET && addr->sa_family != AF_INET6) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Bad address family for sock %d\", sock);\n\t\treturn NULL;\n\t}\n\n\ttaddr = calloc(1, sizeof(tpp_addr_t));\n\tif (!taddr) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating address\");\n\t\treturn NULL;\n\t}\n\n\tif (addr->sa_family == AF_INET) {\n\t\tinp = (struct sockaddr_in *) addr;\n\t\tmemcpy(&taddr->ip, &inp->sin_addr, 
sizeof(inp->sin_addr));\n\t\ttaddr->port = inp->sin_port; /* keep in network order */\n\t\ttaddr->family = TPP_ADDR_FAMILY_IPV4;\n\t} else if (addr->sa_family == AF_INET6) {\n\t\tinp6 = (struct sockaddr_in6 *) addr;\n\t\tmemcpy(&taddr->ip, &inp6->sin6_addr, sizeof(inp6->sin6_addr));\n\t\ttaddr->port = inp6->sin6_port; /* keep in network order */\n\t\ttaddr->family = TPP_ADDR_FAMILY_IPV6;\n\t}\n\n\treturn taddr;\n}\n\n/**\n * @brief Get the address of the remote (peer) end of the connection\n *\n * @param[in] sock - connection id\n *\n * @return  - Address of the remote end of the connection in tpp_addr format\n * @retval  !NULL - address returned\n * @retval  NULL  - call failed\n *\n * @par MT-safe: Yes\n **/\ntpp_addr_t *\ntpp_get_connected_host(int sock)\n{\n\tstruct sockaddr_storage addrs;\n\tstruct sockaddr *addr = (struct sockaddr *) &addrs;\n\tstruct sockaddr_in *inp = NULL;\n\tstruct sockaddr_in6 *inp6 = NULL;\n\ttpp_addr_t *taddr = NULL;\n\tsocklen_t len = sizeof(addrs); /* full sockaddr_storage so IPv6 addresses are not truncated */\n\n\tif (getpeername(sock, addr, &len) == -1) {\n\t\tif (errno == ENOTCONN)\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Peer disconnected sock %d\", sock);\n\t\telse\n\t\t\ttpp_log(LOG_CRIT, __func__, \"Could not get name of peer for sock %d, errno=%d\", sock, errno);\n\n\t\treturn NULL;\n\t}\n\tif (addr->sa_family != AF_INET && addr->sa_family != AF_INET6) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Bad address family for sock %d\", sock);\n\t\treturn NULL;\n\t}\n\n\ttaddr = calloc(1, sizeof(tpp_addr_t));\n\tif (!taddr) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Out of memory allocating address\");\n\t\treturn NULL;\n\t}\n\n\tif (addr->sa_family == AF_INET) {\n\t\tinp = (struct sockaddr_in *) addr;\n\t\tmemcpy(&taddr->ip, &inp->sin_addr, sizeof(inp->sin_addr));\n\t\ttaddr->port = inp->sin_port; /* keep in network order */\n\t\ttaddr->family = TPP_ADDR_FAMILY_IPV4;\n\t} else if (addr->sa_family == AF_INET6) {\n\t\tinp6 = (struct sockaddr_in6 *) addr;\n\t\tmemcpy(&taddr->ip, &inp6->sin6_addr, 
sizeof(inp6->sin6_addr));\n\t\ttaddr->port = inp6->sin6_port; /* keep in network order */\n\t\ttaddr->family = TPP_ADDR_FAMILY_IPV6;\n\t}\n\n\treturn taddr;\n}\n\n/**\n * @brief return a human readable string representation of an address\n *        for either an ipv4 or ipv6 address\n *\n * @param[in] ap - address in tpp_addr format\n *\n * @return  - string representation of address\n *            (uses TLS area to make it easy and yet thread safe)\n *\n * @par MT-safe: Yes\n **/\nchar *\ntpp_netaddr(tpp_addr_t *ap)\n{\n\ttpp_tls_t *ptr;\n#ifdef WIN32\n\tstruct sockaddr_in in;\n\tstruct sockaddr_in6 in6;\n\tint len;\n#endif\n\tchar port[7];\n\n\tif (ap == NULL)\n\t\treturn \"unknown\";\n\n\tptr = tpp_get_tls();\n\tif (!ptr) {\n\t\tfprintf(stderr, \"Out of memory\\n\");\n\t\treturn \"unknown\";\n\t}\n\n\tptr->tppstaticbuf[0] = '\\0';\n\n\tif (ap->family == TPP_ADDR_FAMILY_UNSPEC)\n\t\treturn \"unknown\";\n\n#ifdef WIN32\n\tif (ap->family == TPP_ADDR_FAMILY_IPV4) {\n\t\tmemcpy(&in.sin_addr, ap->ip, sizeof(in.sin_addr));\n\t\tin.sin_family = AF_INET;\n\t\tin.sin_port = 0;\n\t\tlen = LOG_BUF_SIZE;\n\t\tWSAAddressToString((LPSOCKADDR) &in, sizeof(in), NULL, (LPSTR) &ptr->tppstaticbuf, &len);\n\t} else if (ap->family == TPP_ADDR_FAMILY_IPV6) {\n\t\tmemcpy(&in6.sin6_addr, ap->ip, sizeof(in6.sin6_addr));\n\t\tin6.sin6_family = AF_INET6;\n\t\tin6.sin6_port = 0;\n\t\tlen = LOG_BUF_SIZE;\n\t\tWSAAddressToString((LPSOCKADDR) &in6, sizeof(in6), NULL, (LPSTR) &ptr->tppstaticbuf, &len);\n\t}\n#else\n\tif (ap->family == TPP_ADDR_FAMILY_IPV4) {\n\t\tinet_ntop(AF_INET, &ap->ip, ptr->tppstaticbuf, INET_ADDRSTRLEN);\n\t} else if (ap->family == TPP_ADDR_FAMILY_IPV6) {\n\t\tinet_ntop(AF_INET6, &ap->ip, ptr->tppstaticbuf, INET6_ADDRSTRLEN);\n\t}\n#endif\n\tsprintf(port, \":%d\", ntohs(ap->port));\n\tstrcat(ptr->tppstaticbuf, port);\n\n\treturn ptr->tppstaticbuf;\n}\n\n/**\n * @brief return a human readable string representation of an address\n *        for either an ipv4 or ipv6 
address\n *\n * @param[in] sa - address in sockaddr format\n *\n * @return  - string representation of address\n *            (uses TLS area to make it easy and yet thread safe)\n *\n * @par MT-safe: Yes\n **/\nchar *\ntpp_netaddr_sa(struct sockaddr *sa)\n{\n#ifdef WIN32\n\tint len;\n#endif\n\ttpp_tls_t *ptr = tpp_get_tls();\n\tif (!ptr) {\n\t\tfprintf(stderr, \"Out of memory\\n\");\n\t\treturn NULL;\n\t}\n\tptr->tppstaticbuf[0] = '\\0';\n\n#ifdef WIN32\n\tlen = sizeof(ptr->tppstaticbuf);\n\tWSAAddressToString((LPSOCKADDR) sa, sizeof(struct sockaddr), NULL, (LPSTR) &ptr->tppstaticbuf, &len); /* pass sa itself, not its address */\n#else\n\tif (sa->sa_family == AF_INET)\n\t\tinet_ntop(sa->sa_family, &(((struct sockaddr_in *) sa)->sin_addr), ptr->tppstaticbuf, sizeof(ptr->tppstaticbuf));\n\telse\n\t\tinet_ntop(sa->sa_family, &(((struct sockaddr_in6 *) sa)->sin6_addr), ptr->tppstaticbuf, sizeof(ptr->tppstaticbuf));\n#endif\n\n\treturn ptr->tppstaticbuf;\n}\n\n/*\n * Convenience function to delete information about a router\n */\nvoid\nfree_router(tpp_router_t *r)\n{\n\tif (r) {\n\t\tif (r->router_name)\n\t\t\tfree(r->router_name);\n\t\tfree(r);\n\t}\n}\n\n/*\n * Convenience function to delete information about a leaf\n */\nvoid\nfree_leaf(tpp_leaf_t *l)\n{\n\tif (l) {\n\t\tif (l->leaf_addrs)\n\t\t\tfree(l->leaf_addrs);\n\n\t\tfree(l);\n\t}\n}\n\n/**\n * @brief Set the loglevel for the tpp layer. 
This is used to print\n * additional information at times (like when IP addresses are reverse\n * looked-up and hostnames printed in logs).\n *\n * @param[in] logmask - The logmask value to set\n *\n * @par MT-safe: No\n **/\nvoid\ntpp_set_logmask(long logmask)\n{\n\ttpp_log_event_mask = logmask;\n}\n\n#ifndef WIN32\n\n/**\n * @brief\n *\twrapper function for tpp_nslookup_mutex_lock().\n *\n */\nvoid\ntpp_nslookup_atfork_prepare()\n{\n\ttpp_lock(&tpp_nslookup_mutex);\n}\n\n/**\n * @brief\n *\twrapper function for tpp_nslookup_mutex_unlock().\n *\n */\nvoid\ntpp_nslookup_atfork_parent()\n{\n\ttpp_unlock(&tpp_nslookup_mutex);\n}\n\n/**\n * @brief\n *\twrapper function for tpp_nslookup_mutex_unlock().\n *\n */\nvoid\ntpp_nslookup_atfork_child()\n{\n\ttpp_unlock(&tpp_nslookup_mutex);\n}\n#endif\n\n/**\n * @brief encrypt the pkt with the authdata provided\n *\n * @param[in] authdata - encryption information\n * @param[in] pkt - packet of data\n *\n * @return\tError code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n * @par MT-safe: No\n **/\nint\ntpp_encrypt_pkt(conn_auth_t *authdata, tpp_packet_t *pkt)\n{\n\tvoid *data_out = NULL;\n\tsize_t len_out = 0;\n\ttpp_encrypt_hdr_t *ehdr;\n\tint totlen = pkt->totlen;\n\ttpp_chunk_t *chunk, *next;\n\ttpp_auth_pkt_hdr_t *data = (tpp_auth_pkt_hdr_t *) (((tpp_chunk_t *) (GET_NEXT(pkt->chunks)))->data);\n\tunsigned char type = data->type;\n\tvoid *buf = NULL;\n\tchar *p;\n\n\tif (type == TPP_AUTH_CTX && data->for_encrypt == FOR_ENCRYPT)\n\t\treturn 0;\n\n\tbuf = malloc(totlen);\n\tif (buf == NULL) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to allocate buffer for encrypting pkt data\");\n\t\treturn -1;\n\t}\n\tp = (char *) buf;\n\tchunk = GET_NEXT(pkt->chunks);\n\twhile (chunk) {\n\t\tmemcpy(p, chunk->data, chunk->len);\n\t\tp += chunk->len;\n\t\tnext = GET_NEXT(chunk->chunk_link);\n\t\ttpp_free_chunk(chunk);\n\t\tchunk = next;\n\t}\n\tpkt->totlen = 0;\n\tCLEAR_HEAD(pkt->chunks);\n\tpkt->curr_chunk = NULL;\n\n\tif (authdata->encryptdef->encrypt_data(authdata->encryptctx, buf, totlen, &data_out, 
&len_out) != 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to encrypt pkt data\");\n\t\tfree(buf);\n\t\treturn -1;\n\t}\n\n\tif (totlen > 0 && len_out <= 0) {\n\t\ttpp_log(LOG_CRIT, __func__, \"invalid encrypted data len: %d, pktlen: %d\", (int) len_out, totlen);\n\t\tfree(buf);\n\t\treturn -1;\n\t}\n\tfree(buf);\n\tif (!tpp_bld_pkt(pkt, NULL, sizeof(tpp_encrypt_hdr_t), 1, (void **) &ehdr)) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to add encrypt pkt header into pkt\");\n\t\tfree(data_out);\n\t\treturn -1;\n\t}\n\tif (!tpp_bld_pkt(pkt, data_out, len_out, 0, NULL)) {\n\t\ttpp_log(LOG_CRIT, __func__, \"Failed to add encrypted data into pkt\");\n\t\tfree(data_out);\n\t\treturn -1;\n\t}\n\tehdr->ntotlen = htonl(pkt->totlen);\n\tehdr->type = TPP_ENCRYPTED_DATA;\n\tpkt->curr_chunk = GET_NEXT(pkt->chunks);\n\n\treturn 0;\n}\n\n/*\n * use TPPDEBUG instead of DEBUG, since DEBUG makes daemons not fork\n * and that does not work well with init scripts. Sometimes we need to\n * debug TPP in a PTL run where forked daemons are required\n * Hence use a separate macro\n */\n#ifdef TPPDEBUG\n/*\n * Convenience function to print the packet header\n *\n * @param[in] fnc - name of calling function\n * @param[in] data - start of data packet\n * @param[in] len - length of data packet\n *\n * @par MT-safe: yes\n */\nvoid\nprint_packet_hdr(const char *fnc, void *data, int len)\n{\n\ttpp_ctl_pkt_hdr_t *hdr = (tpp_ctl_pkt_hdr_t *) data;\n\n\tchar str_types[][20] = {\"TPP_CTL_JOIN\", \"TPP_CTL_LEAVE\", \"TPP_DATA\", \"TPP_CTL_MSG\", \"TPP_CLOSE_STRM\", \"TPP_MCAST_DATA\"};\n\tunsigned char type = hdr->type;\n\n\tif (type == TPP_CTL_JOIN) {\n\t\ttpp_addr_t *addrs = (tpp_addr_t *) (((char *) data) + sizeof(tpp_join_pkt_hdr_t));\n\t\ttpp_log(LOG_CRIT, __func__, \"%s message arrived from src_host = %s\", str_types[type - 1], tpp_netaddr(addrs));\n\t} else if (type == TPP_CTL_LEAVE) {\n\t\ttpp_addr_t *addrs = (tpp_addr_t *) (((char *) data) + 
sizeof(tpp_leave_pkt_hdr_t));\n\t\ttpp_log(LOG_CRIT, __func__, \"%s message arrived from src_host = %s\", str_types[type - 1], tpp_netaddr(addrs));\n\t} else if (type == TPP_MCAST_DATA) {\n\t\ttpp_mcast_pkt_hdr_t *mhdr = (tpp_mcast_pkt_hdr_t *) data;\n\t\ttpp_log(LOG_CRIT, __func__, \"%s message arrived from src_host = %s\", str_types[type - 1], tpp_netaddr(&mhdr->src_addr));\n\t} else if ((type == TPP_DATA) || (type == TPP_CLOSE_STRM)) {\n\t\tchar buff[TPP_GEN_BUF_SZ + 1];\n\t\ttpp_data_pkt_hdr_t *dhdr = (tpp_data_pkt_hdr_t *) data;\n\n\t\tstrncpy(buff, tpp_netaddr(&dhdr->src_addr), sizeof(buff));\n\t\ttpp_log(LOG_CRIT, __func__, \"%s: src_host=%s, dest_host=%s, len=%d, data_len=%d, src_sd=%d, dest_sd=%d, src_magic=%d\",\n\t\t\tstr_types[type - 1], buff, tpp_netaddr(&dhdr->dest_addr), len + sizeof(tpp_data_pkt_hdr_t), len,\n\t\t\tntohl(dhdr->src_sd), (ntohl(dhdr->dest_sd) == UNINITIALIZED_INT) ? -1 : ntohl(dhdr->dest_sd), ntohl(dhdr->src_magic));\n\n\t} else {\n\t\ttpp_log(LOG_CRIT, __func__, \"%s message arrived from src_host = %s\", str_types[type - 1], tpp_netaddr(&hdr->src_addr));\n\t}\n}\n#endif\n"
  },
  {
    "path": "src/lib/Libutil/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nnoinst_LIBRARIES = libutil.a\n\nlibutil_a_CPPFLAGS = \\\n\t-I$(top_srcdir)/src/include \\\n\t@libical_inc@ \\\n\t@KRB5_CFLAGS@\n\nlibutil_a_SOURCES = \\\n\tget_hostname.c \\\n\texecvnode_seq_util.c \\\n\tpbs_ical.c \\\n\tmisc_utils.c \\\n\tavltree.c \\\n\thook.c \\\n\twork_task.c \\\n\tentlim.c \\\n\tdaemon_protect.c \\\n\tpbs_array_list.c \\\n\tpbs_secrets.c \\\n\tpbs_aes_encrypt.c \\\n\tpbs_idx.c \\\n\trange.c  \\\n\tthread_utils.c \\\n\tdedup_jobids.c\n"
  },
  {
    "path": "src/lib/Libutil/avltree.c",
    "content": "/*\n **  avltree - AVL index routines by Gregory Tseytin.\n **\n **\n **    Copyright (c) 2000 Gregory Tseytin <tseyting@acm.org>\n **      All rights reserved.\n **\n **    Redistribution and use in source and binary forms, with or without\n **    modification, are permitted provided that the following conditions\n **    are met:\n **    1. Redistributions of source code must retain the above copyright\n **       notice, this list of conditions and the following disclaimer as\n **       the first lines of this file unmodified.\n **    2. Redistributions in binary form must reproduce the above copyright\n **       notice, this list of conditions and the following disclaimer in the\n **       documentation and/or other materials provided with the distribution.\n **\n **    THIS SOFTWARE IS PROVIDED BY Gregory Tseytin ``AS IS'' AND ANY EXPRESS OR\n **    IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES\n **    OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.\n **    IN NO EVENT SHALL Gregory Tseytin BE LIABLE FOR ANY DIRECT, INDIRECT,\n **    INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT\n **    NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n **    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n **    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n **    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF\n **    THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n **\n **\n */\n/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. 
You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h>\n\n#include \"avltree.h\"\n#include <limits.h>\n#include <pthread.h>\n#include <stdbool.h>\n#include <stddef.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\n/*\n **\t'inner' avl stuff\n */\n/* way3.h */\n\ntypedef char way3; /* -1, 0, 1 */\n\n#define way3stop ((way3) 0)\n#define way3left ((way3) -1)\n#define way3right ((way3) 1)\n\n#define way3sum(x, y) ((x) + (y)) /* assume x!=y */\n\n#define way3opp(x) (-(x))\n\n/* node.h */\n\ntypedef struct _node {\n\tstruct _node *ptr[2]; /* left, right */\n\tway3 balance, 
*trace;\n\trectype data;\n} node;\n\n#define stepway(n, x) (((n)->ptr)[way3ix(x)])\n#define stepopp(n, x) (((n)->ptr)[way3ix(way3opp(x))])\n\n/* tree.h */\n\n#define SRF_FINDEQUAL 1\n#define SRF_FINDLESS 2\n#define SRF_FINDGREAT 4\n#define SRF_SETMARK 8\n#define SRF_FROMMARK 16\n\n#define avltree_init(x) (*(x) = NULL)\n\ntypedef struct {\n\tshort __tind; /* index of this thread */\n\tint __ix_keylength;\n\tint __ix_flags;\t     /* set from AVL_IX_DESC */\n\tint __rec_keylength; /* set from actual key */\n\tint __node_overhead;\n\n\tnode **__t;\n\tnode *__r;\n\tnode *__s;\n\tway3 __wayhand;\n} avl_tls_t;\n\nstatic pthread_once_t avl_init_once = PTHREAD_ONCE_INIT;\nstatic pthread_key_t avl_tls_key;\npthread_mutex_t tind_lock;\n\n#define MAX_AVLKEY_LEN 100\n\n/**\n * Set max_threads to 2 by default since mom, server, etc. have basically 2 threads.\n * If the caller has > 2 threads, e.g. pbs_comm, it must first call avl_set_maxthreads()\n */\nstatic int max_threads = 2;\n\n/**\n * @brief set the max number of threads that the application uses; must be called before any calls to avltree\n *\n */\nvoid\navl_set_maxthreads(int n)\n{\n\tmax_threads = n;\n}\n\n/**\n * @brief\n *\tinitializes avl tls by creating a key.\n *\n */\nvoid\navl_init_func(void)\n{\n\tif (pthread_key_create(&avl_tls_key, NULL) != 0) {\n\t\tfprintf(stderr, \"avl tls key creation failed\\n\");\n\t}\n\n\tif (pthread_mutex_init(&tind_lock, NULL) != 0) {\n\t\tfprintf(stderr, \"avl mutex init failed\\n\");\n\t\treturn;\n\t}\n}\n\n/**\n * @brief\n *\treturn a unique thread index for each new thread\n *\n */\nstatic short\nget_thread_index(void)\n{\n\tstatic short tind = -1;\n\tshort retval;\n\n\tpthread_mutex_lock(&tind_lock);\n\tretval = ++tind;\n\tpthread_mutex_unlock(&tind_lock);\n\treturn retval;\n}\n\n/**\n * @brief\n *\tretrieves and returns the avl tls, creating it and storing it in\n *\tthread-local storage on first use by this thread.\n *\n * @return\tstructure handle\n * @retval\tpointer to avl tree info (tls)\n */\nvoid 
*\nget_avl_tls(void)\n{\n\tavl_tls_t *p_avl_tls = NULL;\n\n\tpthread_once(&avl_init_once, avl_init_func);\n\n\tif ((p_avl_tls = (avl_tls_t *) pthread_getspecific(avl_tls_key)) == NULL) {\n\t\tp_avl_tls = (avl_tls_t *) calloc(1, sizeof(avl_tls_t));\n\t\tif (!p_avl_tls) {\n\t\t\tfprintf(stderr, \"Out of memory creating avl_tls\\n\");\n\t\t\treturn NULL;\n\t\t}\n\t\tp_avl_tls->__tind = get_thread_index();\n\t\tp_avl_tls->__node_overhead = sizeof(node) - AVL_DEFAULTKEYLEN;\n\t\tpthread_setspecific(avl_tls_key, (void *) p_avl_tls);\n\t}\n\treturn p_avl_tls;\n}\n\n/**\n * @brief\n *\tFree the thread local storage used for avltree for this thread\n */\nvoid\nfree_avl_tls(void)\n{\n\tavl_tls_t *p_avl_tls = NULL;\n\n\tpthread_once(&avl_init_once, avl_init_func);\n\n\tif ((p_avl_tls = (avl_tls_t *) pthread_getspecific(avl_tls_key)))\n\t\tfree(p_avl_tls);\n}\n\n#define tind (((avl_tls_t *) get_avl_tls())->__tind)\n#define ix_keylength (((avl_tls_t *) get_avl_tls())->__ix_keylength)\n#define ix_flags (((avl_tls_t *) get_avl_tls())->__ix_flags)\n#define rec_keylength (((avl_tls_t *) get_avl_tls())->__rec_keylength)\n#define node_overhead (((avl_tls_t *) get_avl_tls())->__node_overhead)\n#define avl_t (((avl_tls_t *) get_avl_tls())->__t)\n#define avl_r (((avl_tls_t *) get_avl_tls())->__r)\n#define avl_s (((avl_tls_t *) get_avl_tls())->__s)\n#define avl_wayhand (((avl_tls_t *) get_avl_tls())->__wayhand)\n\n/******************************************************************************\n WAY3\n ******************************************************************************/\nstatic way3\nmakeway3(int n)\n{\n\treturn n > 0 ? way3right : n < 0 ? way3left\n\t\t\t\t\t : way3stop;\n}\n\nstatic way3\nway3opp2(way3 x, way3 y)\n{\n\treturn x == y ? 
way3opp(x) : way3stop;\n}\n\n/*****************************************************************************/\n\n/**\n * @brief\n *\tfree node n and its per-thread trace array.\n */\nstatic void\nfreenode(node *n)\n{\n\tif (n)\n\t\tfree(n->trace);\n\tfree(n);\n}\n\n/**\n * @brief\n *\tcompares two records r1 and r2\n *\n * @param[in] r1 - record1\n * @param[in] r2 - record2\n *\n * @return\tint\n * @retval\t<0, 0 or >0\tkey comparison result (as memcmp/strcmp);\n *\t\t\twhen the keys are equal and duplicate keys are\n *\t\t\tallowed, the recptr fields are compared instead\n *\n */\nstatic int\ncompkey(rectype *r1, rectype *r2)\n{\n\tint n;\n\tif (ix_keylength)\n\t\tn = memcmp(r1->key, r2->key, ix_keylength);\n\telse {\n\t\tif (ix_flags & AVL_CASE_CMP)\n\t\t\tn = strcasecmp(r1->key, r2->key);\n\t\telse\n\t\t\tn = strcmp(r1->key, r2->key);\n\t}\n\n\tif (n || !(ix_flags & AVL_DUP_KEYS_OK))\n\t\treturn n;\n\treturn memcmp(&(r1->recptr), &(r2->recptr), sizeof(AVL_RECPOS));\n}\n\n/**\n * @brief\n *\tcopy data of one record to another\n *\n * @param[out] r1 - destination record\n * @param[in] r2 - source record\n *\n * @return\tVoid\n */\nstatic void\ncopydata(rectype *r1, rectype *r2)\n{\n\tr1->recptr = r2->recptr;\n\tr1->count = r2->count;\n\tif (ix_keylength)\n\t\tmemcpy(r1->key, r2->key, ix_keylength);\n\telse\n\t\tstrcpy(r1->key, r2->key);\n}\n\n/**\n * @brief\n *\tallocate memory for a new node.\n *\n * @return\tstructure handle\n * @retval\tpointer to new node\tsuccess\n * @retval\tNULL\t\t\terror\n */\nstatic node *\nallocnode(void)\n{\n\tint size = (ix_keylength ? 
ix_keylength : rec_keylength);\n\tnode *n = (node *) malloc(size + node_overhead);\n\tif (n == NULL) {\n\t\tfprintf(stderr, \"avltrees: out of memory\\n\");\n\t\treturn NULL;\n\t}\n\tif (ix_flags & AVL_DUP_KEYS_OK)\n\t\tn->data.count = 1;\n\n\tn->trace = calloc(max_threads, sizeof(way3));\n\tif (n->trace == NULL) {\n\t\tfprintf(stderr, \"avltrees: out of memory\\n\");\n\t\tfree(n);\n\t\treturn NULL;\n\t}\n\treturn n;\n}\n\n/******************************************************************************\n NODE\n ******************************************************************************/\n/**\n * @brief\n *\tswap the given pointers.\n *\n * @param[in] ptrptr - pointer to pointer to root node\n * @param[in] new - new pointer\n *\n * @return\tstructure handle\n * @retval\tpointer to old node\tsuccess\n *\n */\nstatic node *\nswapptr(node **ptrptr, node *new)\n{\n\tnode *old = *ptrptr;\n\t*ptrptr = new;\n\treturn old;\n}\n\nstatic int\nway3ix(way3 x) /* assume x != 0 */\n{\n\treturn x == way3right ? 1 : 0;\n}\n\n/******************************************************************************\n TREE\n ******************************************************************************/\n\n/**\n * @brief\n *\trestructure the tree in order to maintain its balance whenever a node is inserted or deleted\n *\n * @param[in]\top_del - value indicating insertion(0)/deletion(1) of node\n *\n * @return\tbool\n * @retval\ttrue\ta single rotation was performed\n * @retval\tfalse\ta double rotation was performed\n *\n */\nstatic bool\nrestruct(bool op_del)\n{\n\tway3 n = avl_r->balance, c;\n\tnode *p;\n\tbool g = n == way3stop ? 
op_del : n == avl_wayhand;\n\tif (g)\n\t\tp = avl_r;\n\telse {\n\t\tp = stepopp(avl_r, avl_wayhand);\n\t\tstepopp(avl_r, avl_wayhand) = swapptr(&stepway(p, avl_wayhand), avl_r);\n\t\tc = p->balance;\n\t\tavl_s->balance = way3opp2(c, avl_wayhand);\n\t\tavl_r->balance = way3opp2(c, way3opp(avl_wayhand));\n\t\tp->balance = way3stop;\n\t}\n\tstepway(avl_s, avl_wayhand) = swapptr(&stepopp(p, avl_wayhand), avl_s);\n\t*avl_t = p;\n\treturn g;\n}\n\n/**\n * @brief\n *\tsearch the avl tree for given record.\n *\n * @param[in] tt - pointer to root of tree\n * @param[in] key - record to be searched\n * @param[in] searchflags- search flag indicating equal,greater\n *\n * @return\tstructure handle\n * @retval\tpointer to the key (found)\tsuccess\n * @retval\tNULL\t\t\t\terror\n *\n */\nstatic rectype *\navltree_search(node **tt, rectype *key, unsigned short searchflags)\n{\n\tnode *p, *q, *pp;\n\tway3 aa, waydir, wayopp;\n\n\tif (!(~searchflags & (SRF_FINDGREAT | SRF_FINDLESS)))\n\t\treturn NULL;\n\tif (!(searchflags & (SRF_FINDGREAT | SRF_FINDEQUAL | SRF_FINDLESS)))\n\t\treturn NULL;\n\twaydir = searchflags & SRF_FINDGREAT ? way3right : searchflags & SRF_FINDLESS ? way3left\n\t\t\t\t\t\t\t\t\t\t      : way3stop;\n\twayopp = way3opp(waydir);\n\tp = q = NULL;\n\twhile ((pp = *tt) != NULL) {\n\t\taa = searchflags & SRF_FROMMARK ? 
pp->trace[tind] : makeway3(compkey(key, &(pp->data)));\n\t\tif (searchflags & SRF_SETMARK)\n\t\t\tpp->trace[tind] = aa;\n\t\tif (aa == way3stop) {\n\t\t\tif (searchflags & SRF_FINDEQUAL)\n\t\t\t\treturn &(pp->data);\n\t\t\tif ((q = stepway(pp, waydir)) == NULL)\n\t\t\t\tbreak;\n\t\t\tif (searchflags & SRF_SETMARK)\n\t\t\t\tpp->trace[tind] = waydir;\n\t\t\twhile (1) {\n\t\t\t\tif ((pp = stepway(q, wayopp)) == NULL) {\n\t\t\t\t\tif (searchflags & SRF_SETMARK)\n\t\t\t\t\t\tq->trace[tind] = way3stop;\n\t\t\t\t\treturn &(q->data);\n\t\t\t\t}\n\t\t\t\tif (searchflags & SRF_SETMARK)\n\t\t\t\t\tq->trace[tind] = wayopp;\n\t\t\t\tq = pp;\n\t\t\t}\n\t\t}\n\t\t/* remember the point where we can change direction to waydir */\n\t\tif (aa == wayopp)\n\t\t\tp = pp;\n\t\ttt = &stepway(pp, aa);\n\t}\n\tif (p == NULL || !(searchflags & (SRF_FINDLESS | SRF_FINDGREAT)))\n\t\treturn NULL;\n\tif (searchflags & SRF_SETMARK)\n\t\tp->trace[tind] = way3stop;\n\treturn &(p->data);\n}\n\n/**\n * @brief\n *\tposition the per-thread mark at the first (leftmost) node of the tree.\n *\n * @param[in] tt - pointer to root node of tree.\n *\n * @return\tVoid\n */\nstatic void\navltree_first(node **tt)\n{\n\tnode *pp;\n\twhile ((pp = *tt) != NULL) {\n\t\tpp->trace[tind] = way3left;\n\t\ttt = &stepway(pp, way3left);\n\t}\n}\n\n/**\n * @brief\n *\tinsert the given record into the tree pointed by tt.\n *\n * @param[in] tt - address of root\n * @param[in] key - record to be inserted\n *\n * @return\tstructure handle\n * @retval\tpointer to key inserted\t\tsuccess\n * @retval\tNULL\t\t\t\terror\n *\n */\nstatic rectype *\navltree_insert(node **tt, rectype *key)\n{\n\tway3 aa, b;\n\tnode *p, *q, *pp;\n\n\tavl_t = tt;\n\tp = *tt;\n\twhile ((pp = *tt) != NULL) {\n\t\taa = makeway3(compkey(key, &(pp->data)));\n\t\tif (aa == way3stop) {\n\t\t\treturn NULL;\n\t\t}\n\t\tif (pp->balance != way3stop)\n\t\t\tavl_t = tt; /* t-> the last disbalanced node */\n\t\tpp->trace[tind] = aa;\n\t\ttt = &stepway(pp, aa);\n\t}\n\t*tt = q = 
allocnode();\n\tif (q == NULL)\n\t\treturn NULL;\n\tq->balance = q->trace[tind] = way3stop;\n\tstepway(q, way3left) = stepway(q, way3right) = NULL;\n\tkey->count = 1;\n\tcopydata(&(q->data), key);\n\t/* balancing */\n\tavl_s = *avl_t;\n\tavl_wayhand = avl_s->trace[tind];\n\tif (avl_wayhand != way3stop) {\n\t\tavl_r = stepway(avl_s, avl_wayhand);\n\t\tfor (p = avl_r; p != NULL; p = stepway(p, b))\n\t\t\tb = p->balance = p->trace[tind];\n\t\tb = avl_s->balance;\n\t\tif (b != avl_wayhand)\n\t\t\tavl_s->balance = way3sum(avl_wayhand, b);\n\t\telse if (restruct(0))\n\t\t\tavl_s->balance = avl_r->balance = way3stop;\n\t}\n\treturn &(q->data);\n}\n\n/**\n * @brief\n *      delete the given record from the tree.\n *\n * @param[in] tt - address of root node\n * @param[in] key - record to be deleted\n *\n * @return      structure handle\n * @retval      deleted key\t\t\tsuccess\n * @retval      NULL                            error\n *\n */\nstatic rectype *\navltree_delete(node **tt, rectype *key, unsigned short searchflags)\n{\n\tway3 aa, aaa, b, bb;\n\tnode *p, *q, *pp, *p1;\n\tnode **t1, **tt1, **qq1, **rr = tt;\n\n\tavl_t = t1 = tt1 = qq1 = tt;\n\tp = *tt;\n\tq = NULL;\n\taaa = way3stop;\n\n\twhile ((pp = *tt) != NULL) {\n\t\taa = aaa != way3stop ? aaa : searchflags & SRF_FROMMARK ? pp->trace[tind]\n\t\t\t\t\t\t\t\t\t: makeway3(compkey(key, &(pp->data)));\n\t\tb = pp->balance;\n\t\tif (aa == way3stop) {\n\t\t\tqq1 = tt;\n\t\t\tq = pp;\n\t\t\trr = t1;\n\t\t\taa = b != way3stop ? 
b : way3left;\n\t\t\taaa = way3opp(aa); /* will move opposite to aa */\n\t\t}\n\t\tavl_t = t1;\n\t\tif (b == way3stop || (b != aa && stepopp(pp, aa)->balance == way3stop))\n\t\t\tt1 = tt;\n\t\ttt1 = tt;\n\t\ttt = &stepway(pp, aa);\n\t\tpp->trace[tind] = aa;\n\t}\n\tif (aaa == way3stop)\n\t\treturn NULL;\n\tcopydata(key, &(q->data));\n\tp = *tt1;\n\t*tt1 = p1 = stepopp(p, p->trace[tind]);\n\tif (p != q) {\n\t\t*qq1 = p;\n\t\tmemcpy(p->ptr, q->ptr, sizeof(p->ptr));\n\t\tp->balance = q->balance;\n\t\tavl_wayhand = p->trace[tind] = q->trace[tind];\n\t\tif (avl_t == &stepway(q, avl_wayhand))\n\t\t\tavl_t = &stepway(p, avl_wayhand);\n\t}\n\twhile ((avl_s = *avl_t) != p1) {\n\t\tavl_wayhand = way3opp(avl_s->trace[tind]);\n\t\tb = avl_s->balance;\n\t\tif (b != avl_wayhand) {\n\t\t\tavl_s->balance = way3sum(avl_wayhand, b);\n\t\t} else {\n\t\t\tavl_r = stepway(avl_s, avl_wayhand);\n\t\t\tif (restruct(1)) {\n\t\t\t\tif ((bb = avl_r->balance) != way3stop)\n\t\t\t\t\tavl_s->balance = way3stop;\n\t\t\t\tavl_r->balance = way3sum(way3opp(avl_wayhand), bb);\n\t\t\t}\n\t\t}\n\t\tavl_t = &stepopp(avl_s, avl_wayhand);\n\t}\n\twhile ((p = *rr) != NULL) {\n\t\t/* adjusting trace */\n\t\taa = makeway3(compkey(&(q->data), &(p->data)));\n\t\tp->trace[tind] = aa;\n\t\trr = &stepway(p, aa);\n\t}\n\tfreenode(q);\n\treturn key;\n}\n\n/**\n * @brief\n *\tclear or free all the nodes\n *\n * @param[in] tt - pointer to root\n *\n * @return\tVoid\n *\n */\nstatic void\navltree_clear(node **tt)\n{\n\tlong nodecount = 0L;\n\tnode *p = *tt, *q = NULL, *x, **xx;\n\n\tif (p != NULL) {\n\t\twhile (1) {\n\t\t\tif ((x = stepway(p, way3left)) != NULL ||\n\t\t\t    (x = stepway(p, way3right)) != NULL) {\n\t\t\t\tstepway(p, way3left) = q;\n\t\t\t\tq = p;\n\t\t\t\tp = x;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tfreenode(p);\n\t\t\tnodecount++;\n\t\t\tif (q == NULL)\n\t\t\t\tbreak;\n\t\t\tif (*(xx = &stepway(q, way3right)) == p)\n\t\t\t\t*xx = NULL;\n\t\t\tp = q;\n\t\t\tq = *(xx = &stepway(p, way3left));\n\t\t\t*xx 
= NULL;\n\t\t}\n\t\t*tt = NULL;\n\t}\n}\n\n/******************************************************************************\n 'PLUS' interface style\n ******************************************************************************/\n\n/**\n * @brief\n *\tcreate index for the tree.\n *\n * @param[in] pix - record\n * @param[in] flags - 0x01 - dups allowed, 0x02 - case insensitive search\n * @param[in] keylength - key length\n *\n * @return\terror code\n * @retval\t1\terror\n * @retval\t0\tsuccess\n *\n */\nint\navl_create_index(AVL_IX_DESC *pix, int flags, int keylength)\n{\n\tif (keylength < 0) {\n\t\tfprintf(stderr, \"create_index 'keylength'=%d: programming error\\n\", keylength);\n\t\treturn 1;\n\t}\n\tpix->root = NULL;\n\tpix->keylength = keylength;\n\tpix->flags = flags;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tdestroy the avl tree pointed by pix.\n *\n * @param[in] pix - pointer to head of tree\n *\n * @return\tVoid\n */\nvoid\navl_destroy_index(AVL_IX_DESC *pix)\n{\n\tif (!pix)\n\t\treturn;\n\n\tix_keylength = pix->keylength;\n\tavltree_clear((node **) &(pix->root));\n\tpix->root = NULL;\n}\n\n/**\n * @brief\n *\tfinds a record in tree and copy its index.\n *\n * @param[in] pe - key\n * @param[in] pix - pointer to tree\n *\n * @return\tint\n * @retval      AVL_IX_OK(1)    success\n * @retval      AVL_IX_FAIL(0)  error\n *\n */\nint\navl_find_key(AVL_IX_REC *pe, AVL_IX_DESC *pix)\n{\n\trectype *ptr;\n\n\tix_keylength = pix->keylength;\n\tix_flags = pix->flags;\n\n\tmemset((void *) &(pe->recptr), 0, sizeof(AVL_RECPOS));\n\tptr = avltree_search((node **) &(pix->root), pe,\n\t\t\t     SRF_FINDEQUAL | SRF_SETMARK | SRF_FINDGREAT);\n\tif (ptr == NULL)\n\t\treturn AVL_IX_FAIL;\n\n\tpe->recptr = ptr->recptr;\n\tpe->count = ptr->count;\n\tif (compkey(pe, ptr))\n\t\treturn AVL_IX_FAIL;\n\treturn AVL_IX_OK;\n}\n\n/**\n * @brief\n *\tadd a key to the tree\n *\n * @param[in] pe - record to be added\n * @param[in] pix - pointer to root of tree\n *\n * @return\tint\n * 
@retval      AVL_IX_OK(1)    success\n * @retval      AVL_IX_FAIL(0)  error\n *\n */\nint\navl_add_key(AVL_IX_REC *pe, AVL_IX_DESC *pix)\n{\n\tix_keylength = pix->keylength;\n\tix_flags = pix->flags;\n\tif (ix_keylength == 0)\n\t\trec_keylength = strlen(pe->key) + 1;\n\tif (avltree_insert((node **) &(pix->root), pe) == NULL)\n\t\treturn AVL_IX_FAIL;\n\treturn AVL_IX_OK;\n}\n\n/**\n * @brief\n *\tdelete the given record from the tree\n *\n * @param[in] pe - index of record to be deleted\n * @param[in] pix - pointer to the root of tree\n *\n * @return\tint\n * @retval\tAVL_IX_OK(1)\tsuccess\n * @retval\tAVL_IX_FAIL(0)\terror\n *\n */\nint\navl_delete_key(AVL_IX_REC *pe, AVL_IX_DESC *pix)\n{\n\trectype *ptr;\n\n\tix_keylength = pix->keylength;\n\tix_flags = pix->flags;\n\n\tptr = avltree_search((node **) &(pix->root), pe, SRF_FINDEQUAL | SRF_SETMARK);\n\tif (ptr == NULL)\n\t\treturn AVL_IX_FAIL;\n\tavltree_delete((node **) &(pix->root), pe, SRF_FROMMARK);\n\treturn AVL_IX_OK;\n}\n\n/**\n * @brief\n *      return the first record in tree\n *\n * @param[out] pix - pointer to root node of tree.\n *\n * @return      Void\n */\nvoid\navl_first_key(AVL_IX_DESC *pix)\n{\n\tavltree_first((node **) &(pix->root));\n}\n\n/**\n * @brief\n *      copies and returns  the next node index.\n *\n * @param[out] pe - place to hold copied node data\n * @param[in] pix - pointer to root of tree\n *\n * @return      int\n * @retval      AVL_EOIX(-2)    error\n * @retval      AVL_IX_OK(1)    success\n *\n */\nint\navl_next_key(AVL_IX_REC *pe, AVL_IX_DESC *pix)\n{\n\trectype *ptr;\n\tix_keylength = pix->keylength;\n\tix_flags = pix->flags;\n\n\tif ((ptr = avltree_search((node **) &(pix->root),\n\t\t\t\t  pe, /* pe not used */\n\t\t\t\t  SRF_FROMMARK | SRF_SETMARK | SRF_FINDGREAT)) == NULL)\n\t\treturn AVL_EOIX;\n\tcopydata(pe, ptr);\n\treturn AVL_IX_OK;\n}\n\n/**\n * @brief\n *\tCreate an AVL key based on the string provided\n *\n * @param[in] - key - String to be used as the key\n *\n * 
@return\tThe AVL key\n * @retval\tNULL - Failure (out of memory)\n * @retval\t!NULL - Success - The AVL key\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nAVL_IX_REC *\navlkey_create(AVL_IX_DESC *tree, void *key)\n{\n\tsize_t keylen;\n\tAVL_IX_REC *pkey;\n\n\tif (tree->keylength != 0)\n\t\tkeylen = sizeof(AVL_IX_REC) - AVL_DEFAULTKEYLEN + tree->keylength;\n\telse {\n\t\tif (key == NULL)\n\t\t\tkeylen = sizeof(AVL_IX_REC) + MAX_AVLKEY_LEN + 1;\n\t\telse\n\t\t\tkeylen = sizeof(AVL_IX_REC) + strlen(key) + 1;\n\t}\n\tpkey = calloc(1, keylen);\n\tif (pkey == NULL)\n\t\treturn NULL;\n\n\tif (key != NULL) {\n\t\tif (tree->keylength != 0)\n\t\t\tmemcpy(pkey->key, key, tree->keylength);\n\t\telse\n\t\t\tstrcpy(pkey->key, (char *) key);\n\t}\n\n\treturn (pkey);\n}\n"
  },
  {
    "path": "src/lib/Libutil/daemon_protect.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n#include <pbs_config.h>\n\n#include <sys/types.h>\n#include <unistd.h>\n#include <string.h>\n#include <stdio.h>\n#include <fcntl.h>\n#include <errno.h>\n#include \"server_limits.h\"\n#include \"pbs_ifl.h\"\n#include \"log.h\"\n\n/**\n * @brief\n *\tWhere possible enable protection for the daemon from the OS.\n *\n * @par\tLinux\n *\tProtect from OOM (Out of Memory) killer\n *\n * @param[in] pid_t pid - pid of process to protect, if 0 then myself.\n * @param[in] enum PBS_Daemon_Protect action - turn on/off proection:\n *\tPBS_DAEMON_PROTECT_ON/OFF\n */\n\nvoid\ndaemon_protect(pid_t pid, enum PBS_Daemon_Protect action)\n{\n\n#ifdef linux\n\tint fd;\n\tchar fname[MAXPATHLEN + 1];\n\tstruct oom_protect {\n\t\tchar *oom_value[2]; /* value to write: unprotect/protect */\n\t\tchar *oom_path;\t    /* path to which to write */\n\t};\n\tstatic struct oom_protect oom_protect_old = {\n\t\t{\n\t\t\t\"0\\n\",\t/* unprotect value */\n\t\t\t\"-17\\n\" /* protect value   */\n\t\t},\n\t\t\"/proc/%ld/oom_adj\"};\n\tstatic struct oom_protect oom_protect_new = {\n\t\t{\n\t\t\t\"0\\n\",\t  /* unprotect value */\n\t\t\t\"-1000\\n\" /* protect value   */\n\t\t},\n\t\t\"/proc/%ld/oom_score_adj\"};\n\n\tif (pid == 0)\n\t\tpid = getpid(); /* use my pid */\n\n\t/**\n\t *\tfor Linux:  Need to protect daemons from the Out of Memory killer.\n\t *\n\t *\tFirst try to set 
/proc/<pid>/oom_score_adj to -1000 to protect\n\t *\tor 0 to unprotect.\n\t *\tIf oom_score_adj does not exist, try setting the older\n\t *\t/proc/<pid>/oom_adj to -17 to protect or 0 to unprotect.\n\t */\n\tsnprintf(fname, MAXPATHLEN, oom_protect_new.oom_path, pid);\n\tif ((fd = open(fname, O_WRONLY | O_TRUNC)) != -1) {\n\t\tif (write(fd, oom_protect_new.oom_value[(int) action], strlen(oom_protect_new.oom_value[(int) action])) == -1)\n\t\t\tlog_errf(-1, __func__, \"write failed. ERR: %s\", strerror(errno));\n\t} else {\n\n\t\t/* failed to open \"oom_score_adj\", now try \"oom_adj\" */\n\t\t/* found in older Linux kernels\t\t\t     */\n\t\tsnprintf(fname, MAXPATHLEN, oom_protect_old.oom_path, pid);\n\t\tif ((fd = open(fname, O_WRONLY | O_TRUNC)) != -1) {\n\t\t\tif (write(fd, oom_protect_old.oom_value[(int) action], strlen(oom_protect_old.oom_value[(int) action])) == -1)\n\t\t\t\tlog_errf(-1, __func__, \"write failed. ERR: %s\", strerror(errno));\n\t\t}\n\t}\n\tif (fd != -1)\n\t\tclose(fd);\n#endif /* linux */\n\n\t/**\n\t *\tFor any other OS, we don't do anything currently.\n\t */\n\treturn;\n}\n"
  },
  {
    "path": "src/lib/Libutil/dedup_jobids.c",
    "content": "/*\n * Copyright (C) 1994-2023 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/* C functions module to remove duplicates in jobids list */\n\n#include \"job.h\"\n#include \"range.h\"\n#include \"dedup_jobids.h\"\n#include \"pbs_idx.h\"\n\n/**\n * @brief is_array_job - determines if the job id indicates an array job\n *\n * @param[in]   id - Job Id.\n *\n * @return  Job Type\n * @retval  IS_ARRAY_NO  - A regular job\n * @retval  IS_ARRAY_ArrayJob  - A ArrayJob\n * @retval  IS_ARRAY_Single  - A single subjob\n * @retval  IS_ARRAY_Range  - A range of subjobs\n */\nint\nis_array_job(char *id)\n{\n\tchar *pc;\n\n\tif ((pc = strchr(id, (int) '[')) == NULL)\n\t\treturn IS_ARRAY_NO; /* not an ArrayJob nor a subjob (range) */\n\tif (*++pc == ']')\n\t\treturn IS_ARRAY_ArrayJob; /* an ArrayJob */\n\n\t/* know it is either a single subjob or an range there of */\n\twhile (isdigit((int) *pc))\n\t\t++pc;\n\tif ((*pc == '-') || (*pc == ','))\n\t\treturn IS_ARRAY_Range; /* a range of subjobs */\n\telse\n\t\treturn IS_ARRAY_Single;\n}\n\n/**\n * @brief Allocate memory for new job range\n *\n * @return array_job_range_list*\n * @retval new range object\n */\narray_job_range_list *\nnew_job_range(void)\n{\n\tarray_job_range_list *new_range;\n\tnew_range = (array_job_range_list *) malloc(sizeof(array_job_range_list));\n\tif (new_range == NULL)\n\t\treturn NULL;\n\tnew_range->range = NULL;\n\tnew_range->next = NULL;\n\treturn 
new_range;\n}\n\n/**\n * @brief Helper function to split sub jobid from its range\n * @example\n *  1. For jobid \"0[1-5].hostname\"\n *      outjobid = \"0.hostname\" , sub_job_range = \"1-5\" \n *  2. For jobid \"0[1-5]\"\n *      outjobid = \"0\" , sub_job_range = \"1-5\" \n *\n * @param[in]  jobid\n * @param[out] array_jobid\n * @param[out] sub_job_range\n *\n * @return int\n * @retval 0 for Success\n * @retval 1 for Failure\n *\n * @par NOTE: out_jobid will contain jobid without sub job range.\n */\nint\nsplit_sub_jobid(char *jobid, char **out_jobid, char **sub_job_range)\n{\n\tchar *range = NULL, *hostname, *array_jobid;\n\tchar *pc = NULL, save;\n\n\t/* subjobid */\n\tpc = strchr(jobid, (int) '[');\n\tsave = *pc;\n\t*pc = '\\0';\n\tarray_jobid = strdup(jobid);\n\t*pc = save;\n\n\t/* range */\n\trange = ++pc;\n\tpc = strchr(range, (int) ']');\n\tsave = *pc;\n\t*pc = '\\0';\n\t*sub_job_range = strdup(range);\n\t*pc = save;\n\n\t/* hostname */\n\tpc = strchr(jobid, (int) '.');\n\tif (pc == NULL) {\n\t\t*out_jobid = array_jobid;\n\t\treturn 0;\n\t}\n\thostname = strdup(++pc);\n\t/* one extra for the '.' 
*/\n\t*out_jobid = (char *) malloc(strlen(array_jobid) + strlen(hostname) + 2);\n\tif (*out_jobid == NULL)\n\t\treturn 1;\n\tsprintf(*out_jobid, \"%s.%s\", array_jobid, hostname);\n\tfree(array_jobid);\n\tfree(hostname);\n\n\treturn 0;\n}\n\n/**\n * @brief Check if job id is a short job id\n * and misses the server name.\n *\n * @param [in] jobid to check\n * \n * @return bool\n * @retval true for short job id\n * @retval false otherwise\n */\nbool\ncheck_short_jobid(char *jobid)\n{\n\tif (jobid == NULL) {\n\t\treturn false;\n\t}\n\n\twhile (*jobid) {\n\t\tif (isdigit(*jobid) == 0 && *jobid != '[' && *jobid != ']') {\n\t\t\treturn false;\n\t\t}\n\t\tjobid++;\n\t}\n\treturn true;\n}\n\n/**\n * @brief Adds default server name to short job ids.\n *\n * @param [in,out] jobids list of jobids\n * @param [in] numjids total count of jobids\n * @param [in,out] malloc_track to track the memory allocated\n * \n * @return int\n * @retval 0  for Success\n * @retval -1 for Failure\n */\nint\nadd_default_server(char **jobids, int numjids, char *malloc_track)\n{\n\tint i;\n\tchar *def_server = pbs_default();\n\tchar jobid[PBS_MAXJOBNAME +1];\n\tfor (i = 0; i < numjids; i++) {\n\t\tif (check_short_jobid(jobids[i])) {\n\t\t\tif (def_server == NULL) {\n\t\t\t\treturn -1;\n\t\t\t}\n\n\t\t\tsprintf(jobid, \"%s.%s\", jobids[i], def_server);\n\t\t\tjobids[i] = strdup(jobid);\n\t\t\tif (jobids[i] == NULL) {\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\tmalloc_track[i] = 1;\n\t\t}\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief Remove duplicate jobids from jobids list\n *\n * @param [in,out] jobids list of jobids\n * @param [in,out] numjids total count of jobids\n * @param [in,out] malloc_track to track the memory allocated\n * \n * @return int\n * @retval 0  for Success\n * @retval -1 for Failure\n */\nint\ndedup_jobids(char **jobids, int *numjids, char *malloc_track)\n{\n\tvoid *index = NULL;\n\tvoid *non_array_jobs_idx = NULL;\n\tvoid *array_jobs_idx = NULL;\n\tint uni_jobids = 0;\t  /* counter for 
unique jobids */\n\tint uni_array_jobids = 0; /* counter for unique array jobids */\n\tint i = 0, j = 0, rc = 0, dup = 0, ret = 0, is_job_array = 0;\n\tchar **array_jobid_list = NULL;\n\tchar *array_job_range = NULL, *array_jobid = NULL, *hostname = NULL;\n\tchar temp_range[256] = {\n\t\t'\\0',\n\t};\n\trange *r1 = NULL, *r2 = NULL, *r3 = NULL;\n\tarray_job_range_list **srange_list = NULL, *srange = NULL, *new_srange = NULL;\n\tint original_numjids;\n\n\tif (jobids == NULL || numjids == NULL)\n\t\treturn -1;\n\toriginal_numjids = *numjids;\n\n\tarray_jobid_list = calloc((*numjids + 1), sizeof(char *));\n\tif (array_jobid_list == NULL)\n\t\treturn -1;\n\n\tsrange_list = calloc((*numjids), sizeof(array_job_range_list *));\n\tif (srange_list == NULL) {\n\t\tfree(array_jobid_list);\n\t\treturn -1;\n\t}\n\n\tif (add_default_server(jobids, *numjids, malloc_track)) {\n\t\tfree(array_jobid_list);\n\t\tfree(srange_list);\n\t\treturn -1;\n\t}\n\n\tnon_array_jobs_idx = pbs_idx_create(0, 0);\n\tarray_jobs_idx = pbs_idx_create(0, 0);\n\n\tfor (i = 0, j = 0; i < *numjids; i++) {\n\t\tis_job_array = is_array_job(jobids[i]);\n\t\tif (is_job_array == IS_ARRAY_ArrayJob ||\n\t\t\t\tis_job_array == IS_ARRAY_Single ||\n\t\t\t\tis_job_array == IS_ARRAY_Range) {\n\t\t\tdup = 0;\n\t\t\tif (split_sub_jobid(jobids[i], &array_jobid, &array_job_range) != 0) {\n\t\t\t\tret = -1;\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t\tnew_srange = new_job_range();\n\t\t\tif (new_srange == NULL) {\n\t\t\t\tret = -1;\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t\tnew_srange->range = array_job_range;\n\t\t\tsrange_list[i] = new_srange;\n\t\t\tif (pbs_idx_find(array_jobs_idx, (void **) &array_jobid,\n\t\t\t\t\t (void **) &srange,\n\t\t\t\t\t NULL) == PBS_IDX_RET_OK) {\n\t\t\t\t/* found duplicate, update the range */\n\t\t\t\tdup = 1;\n\t\t\t\tnew_srange->next = srange;\n\t\t\t\tsrange = new_srange;\n\t\t\t\t/* Delete existing value */\n\t\t\t\tif (pbs_idx_delete(array_jobs_idx,\n\t\t\t\t\t\t   array_jobid) != PBS_IDX_RET_OK) {\n\t\t\t\t\tret = -1;\n\t\t\t\t\tgoto err;\n\t\t\t\t}\n\t\t\t} else\n\t\t\t\tsrange = new_srange;\n\t\t\tif 
(pbs_idx_insert(array_jobs_idx, array_jobid, srange) !=\n\t\t\t    PBS_IDX_RET_OK) {\n\t\t\t\tret = -1;\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t\tif (!dup) {\n\t\t\t\tarray_jobid_list[j] = array_jobid;\n\t\t\t\tj++;\n\t\t\t\tuni_array_jobids++;\n\t\t\t} else\n\t\t\t\tfree(array_jobid);\n\t\t} else if (non_array_jobs_idx) { /* For Normal jobs */\n\t\t\trc = pbs_idx_find(non_array_jobs_idx, (void **) &jobids[i],\n\t\t\t\t\t  (void **) &index, NULL);\n\t\t\tif (rc == PBS_IDX_RET_OK)\n\t\t\t\tcontinue;\n\t\t\trc = pbs_idx_insert(non_array_jobs_idx, jobids[i], NULL);\n\t\t\tif (rc == PBS_IDX_RET_OK) {\n\t\t\t\tjobids[uni_jobids] = jobids[i];\n\t\t\t\tuni_jobids++; /* counter for non array jobs */\n\t\t\t}\n\t\t}\n\t}\n\tarray_jobid_list[j] = NULL;\n\n\t/*\n\tIf the same jobid is present with different ranges, then\n\tjoin the ranges to avoid overlapping / duplicate\n\tsubjob ids.\n\t*/\n\tfor (i = 0; array_jobid_list[i] != NULL; i++, uni_jobids++) {\n\t\tif (pbs_idx_find(array_jobs_idx, (void **) &array_jobid_list[i],\n\t\t\t\t (void **) &srange, NULL) == PBS_IDX_RET_OK) {\n\t\t\tmemset(temp_range, '\\0', sizeof(temp_range));\n\t\t\tfor (; srange; srange = (array_job_range_list *) srange->next) {\n\t\t\t\tif (strlen(temp_range) == 0) {\n\t\t\t\t\tstrncpy(temp_range, srange->range,\n\t\t\t\t\t\tsizeof(temp_range) - 1);\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\tr1 = range_parse(temp_range);\n\t\t\t\tr2 = range_parse(srange->range);\n\t\t\t\tr3 = range_join(r1, r2);\n\t\t\t\tstrncpy(temp_range, range_to_str(r3),\n\t\t\t\t\tsizeof(temp_range) - 1);\n\t\t\t\tfree_range_list(r1);\n\t\t\t\tfree_range_list(r2);\n\t\t\t\tfree_range_list(r3);\n\t\t\t}\n\t\t}\n\t\tif (malloc_track[uni_jobids]) {\n\t\t\tfree(jobids[uni_jobids]);\n\t\t}\n\t\t/* one for '[', one for ']' and one for '.' 
*/\n\t\tjobids[uni_jobids] = (char *) malloc(strlen(array_jobid_list[i]) + strlen(temp_range) + 4);\n\t\tif (jobids[uni_jobids] == NULL) {\n\t\t\tret = -1;\n\t\t\tgoto err;\n\t\t}\n\t\tmalloc_track[uni_jobids] = 1;\n\t\thostname = strchr(array_jobid_list[i], (int) '.');\n\t\tif (hostname == NULL)\n\t\t\tsprintf(jobids[uni_jobids], \"%s[%s]\", array_jobid_list[i],\n\t\t\t\ttemp_range);\n\t\telse {\n\t\t\t*hostname = '\\0';\n\t\t\tsprintf(jobids[uni_jobids], \"%s[%s].%s\",\n\t\t\t\tarray_jobid_list[i], temp_range, ++hostname);\n\t\t}\n\t}\n\t*numjids = uni_jobids;\n\tret = 0;\nerr:\n\tfree_string_array(array_jobid_list);\n\tfor (i = 0; i < original_numjids; i++) {\n\t\tif (srange_list[i]) {\n\t\t\tif (srange_list[i]->range)\n\t\t\t\tfree(srange_list[i]->range);\n\t\t\tfree(srange_list[i]);\n\t\t}\n\t}\n\tfree(srange_list);\n\tpbs_idx_destroy(non_array_jobs_idx);\n\tpbs_idx_destroy(array_jobs_idx);\n\n\treturn ret;\n}\n"
  },
  {
    "path": "src/lib/Libutil/entlim.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n#include <pbs_config.h>\n\n#include <stdlib.h>\n#include <string.h>\n#include <stdio.h>\n#include \"pbs_entlim.h\"\n\n/* entlim iteration context structure, opaque to caller */\ntypedef struct _entlim_ctx {\n\tvoid *idx;\n\tvoid *idx_ctx;\n} entlim_ctx;\n\n/**\n * @brief\n * \tentlim_initialize_ctx - initialize the data context structure\n */\nvoid *\nentlim_initialize_ctx(void)\n{\n\tentlim_ctx *pctx = malloc(sizeof(entlim_ctx));\n\tif (pctx == NULL)\n\t\treturn NULL;\n\tpctx->idx_ctx = NULL;\n\tpctx->idx = pbs_idx_create(0, 0);\n\tif (pctx->idx == NULL) {\n\t\tfree(pctx);\n\t\treturn NULL;\n\t}\n\treturn (void *) pctx;\n}\n\n/**\n * @brief\n * \tentlim_get - get record whose key is built from the given key-string\n *\n * @param[in] keystr - key string whose key is to be built\n * @param[in] ctx - pointer to context\n *\n * @return\tvoid *\n * @retval\trecord pointer\tsuccess\n * @retval\tNULL\t\terror\n *\n */\n\nvoid *\nentlim_get(const char *keystr, void *ctx)\n{\n\tvoid *rtn;\n\n\tif (pbs_idx_find(((entlim_ctx *) ctx)->idx, (void **) &keystr, &rtn, NULL) == PBS_IDX_RET_OK)\n\t\treturn rtn;\n\treturn NULL;\n}\n\n/**\n * @brief\n * \tentlim_add - add a record with a key based on the key-string\n *\n * @param[in] keystr - key string whose key is to be built\n * @param[in] recptr - pointer to record\n * @param[in] ctx - pointer to context\n *\n * 
@return\tint\n * @retval\t0\tsuccess, record added\n * @retval\t-1\tadd failed\n */\nint\nentlim_add(const char *keystr, const void *recptr, void *ctx)\n{\n\tif (pbs_idx_insert(((entlim_ctx *) ctx)->idx, (void *) keystr, (void *) recptr) == PBS_IDX_RET_OK)\n\t\treturn 0;\n\treturn -1;\n}\n\n/**\n * @brief\n * \tentlim_replace - replace a record whose key is based on the key-string\n *\tif the record already exists; if not, this becomes equivalent\n *\tto entlim_add().\n *\n * @param[in] keystr - key string whose key is to be built\n * @param[in] recptr - pointer to record\n * @param[in] ctx - pointer to context\n * @param[in] fr_leaf() - function called to delete the data record when\n *\t\t\t    removing an existing record.\n *\n * @return\tint\n * @retval\t0\tsuccess, record replaced/added\n * @retval\t-1\tchange failed\n */\nint\nentlim_replace(const char *keystr, void *recptr, void *ctx, void fr_leaf(void *))\n{\n\tvoid *olddata;\n\tentlim_ctx *pctx = (entlim_ctx *) ctx;\n\n\tif (pbs_idx_insert(pctx->idx, (void *) keystr, recptr) == PBS_IDX_RET_OK)\n\t\treturn 0;\n\telse {\n\t\tif (pbs_idx_find(pctx->idx, (void **) &keystr, &olddata, NULL) == PBS_IDX_RET_OK) {\n\t\t\tif (pbs_idx_delete(pctx->idx, (void *) keystr) == PBS_IDX_RET_OK) {\n\t\t\t\tfr_leaf(olddata);\n\t\t\t\tif (pbs_idx_insert(pctx->idx, (void *) keystr, recptr) == PBS_IDX_RET_OK)\n\t\t\t\t\treturn 0;\n\t\t\t}\n\t\t}\n\t}\n\treturn -1;\n}\n\n/**\n * @brief\n * \tentlim_delete - delete record with a key based on the keystr\n *\n * @param[in] keystr - key string whose key is to be built\n * @param[in] ctx - pointer to context\n * @param[in] free_leaf() - function to free the data structure associated\n *\t\t\t    with the key.\n *\n * @return\tint\n * @retval\t0\tdelete success\n * @retval\t-1\tdelete failed\n */\nint\nentlim_delete(const char *keystr, void *ctx, void free_leaf(void *))\n{\n\tvoid *prec;\n\n\tif (pbs_idx_find(((entlim_ctx *) ctx)->idx, (void **) 
&keystr, &prec, NULL) == PBS_IDX_RET_OK) {\n\t\tif (pbs_idx_delete(((entlim_ctx *) ctx)->idx, (void *) keystr) == PBS_IDX_RET_OK) {\n\t\t\tfree_leaf(prec);\n\t\t\treturn 0;\n\t\t}\n\t}\n\treturn -1;\n}\n\n/**\n * @brief\n * \tentlim_get_next - walk the objects returning the next entry.\n *\tIf called with a NULL key, it allocates a key and returns\n *\tthe first entry; otherwise it returns the next entry.\n *\n * @param[in] ctx - pointer to context\n * @param[in,out] key - pointer to the current key; updated to the key of\n *\t\t\tthe returned entry\n *\n * @return\tstructure handle\n * @retval\tdata record\t\tsuccess\n * @retval\tNULL\t\t\terror\n *\t\tReturns NULL following the last entry or when no entry is found.\n *\t\tThe key needs to be freed by the caller when all is said and done.\n */\nvoid *\nentlim_get_next(void *ctx, void **key)\n{\n\tentlim_ctx *pctx = (entlim_ctx *) ctx;\n\tvoid *data;\n\n\tif (pctx == NULL || pctx->idx == NULL)\n\t\treturn NULL;\n\n\tif (key != NULL && *key != NULL) {\n\t\tif (pctx->idx_ctx == NULL)\n\t\t\treturn NULL;\n\t} else {\n\t\tif (pctx->idx_ctx != NULL)\n\t\t\tpbs_idx_free_ctx(pctx->idx_ctx);\n\t\tpctx->idx_ctx = NULL;\n\t}\n\n\tif (pbs_idx_find(pctx->idx, key, &data, &pctx->idx_ctx) == PBS_IDX_RET_OK)\n\t\treturn data;\n\n\tpbs_idx_free_ctx(pctx->idx_ctx);\n\tpctx->idx_ctx = NULL;\n\tif (key != NULL)\n\t\t*key = NULL;\n\treturn NULL;\n}\n\n/**\n * @brief\n * \tentlim_free_ctx - free the data structure including all keys and records\n *\n * @param[in] ctx - pointer to context\n * @param[in] free_leaf() - function called to delete a data record when removing\n *                          an existing record.\n *\n * @return      int\n * @retval      0       success, records freed\n * @retval      -1      freeing failed\n */\n\nint\nentlim_free_ctx(void *ctx, void free_leaf(void *))\n{\n\tvoid *leaf;\n\tentlim_ctx *pctx = (entlim_ctx *) ctx;\n\n\tif (pctx->idx_ctx != NULL)\n\t\tpbs_idx_free_ctx(pctx->idx_ctx);\n\tpctx->idx_ctx = NULL;\n\twhile (pbs_idx_find(pctx->idx, NULL, &leaf, &pctx->idx_ctx) == 
PBS_IDX_RET_OK) {\n\t\tfree_leaf(leaf);\n\t}\n\tpbs_idx_free_ctx(pctx->idx_ctx);\n\tpbs_idx_destroy(pctx->idx);\n\tfree(pctx);\n\treturn 0;\n}\n\n/**\n * @brief\n * \tentlim_mk_keystr - make a key string from the key type, entity and resc\n *\n * @param[in] kt - enum for key token\n * @param[in] entity - entity name\n * @param[in] resc - resource name\n *\n * @return\tstring\n * @retval\tptr to key string in the heap\n *\n * @par\tWARNING - it is up to you to free it\n */\nstatic char *\nentlim_mk_keystr(enum lim_keytypes kt, const char *entity, const char *resc)\n{\n\tsize_t keylen;\n\tchar *pkey;\n\tchar ktyl;\n\n\tif (kt == LIM_USER)\n\t\tktyl = 'u';\n\telse if (kt == LIM_GROUP)\n\t\tktyl = 'g';\n\telse if (kt == LIM_PROJECT)\n\t\tktyl = 'p';\n\telse if (kt == LIM_OVERALL)\n\t\tktyl = 'o';\n\telse\n\t\treturn NULL; /* invalid entity key type */\n\n\tkeylen = 2 + strlen(entity);\n\tif (resc)\n\t\tkeylen += 1 + strlen(resc);\n\tpkey = malloc(keylen + 1);\n\n\tif (pkey) {\n\t\tif (resc)\n\t\t\tsprintf(pkey, \"%c:%s;%s\", ktyl, entity, resc);\n\t\telse\n\t\t\tsprintf(pkey, \"%c:%s\", ktyl, entity);\n\t}\n\treturn (pkey);\n}\n\n/**\n * @brief\n * \tentlim_mk_runkey - make a key for an entity run (number of jobs) limit\n *\n * @param[in] kt - enum for key token\n * @param[in] entity - entity name\n *\n * @return      string\n * @retval      ptr to key string in the heap\n *\n * @par WARNING - it is up to you to free it\n */\nchar *\nentlim_mk_runkey(enum lim_keytypes kt, const char *entity)\n{\n\treturn entlim_mk_keystr(kt, entity, NULL);\n}\n\n/**\n * @brief\n * \tentlim_mk_reskey - make a key for an entity resource usage limit\n *\n * @param[in] kt - enum for key token\n * @param[in] entity - entity name\n * @param[in] resc - resource name\n *\n * @return      string\n * @retval      ptr to key string in the heap\n *\n * @par WARNING - it is up to you to free it\n */\nchar *\nentlim_mk_reskey(enum lim_keytypes kt, const char *entity, const char *resc)\n{\n\treturn entlim_mk_keystr(kt, entity, 
resc);\n}\n\n/**\n * @brief\n * \tentlim_entity_from_key - obtain the entity name from a key\n *\n * @param[in] key - pointer to key info\n * @param[in] rtnname - a buffer large enough to hold the max entity name +1\n * @param[in] ln      - the size of that buffer\n *\n * @return\tint\n * @retval\t0\tentity name found and returned\n * @retval\t-1\tentity name would not fit\n */\nint\nentlim_entity_from_key(char *key, char *rtnname, size_t ln)\n{\n\tchar *pc;\n\tint sz = 0;\n\n\tpc = key + 2;\n\twhile (*pc && (*pc != ';')) {\n\t\t++sz;\n\t\t++pc;\n\t}\n\tif ((size_t) sz < ln) {\n\t\t(void) strncpy(rtnname, key + 2, sz);\n\t\t*(rtnname + sz) = '\\0';\n\t\treturn 0;\n\t}\n\treturn -1;\n}\n\n/**\n * @brief\n * \tentlim_resc_from_key - obtain the resource name from a key if it\n *\tincludes one.\n *\n * @param[in] key - pointer to key info\n * @param[in] rtnresc - a buffer large enough to hold the max resource name +1\n * @param[in] ln      - the size of that buffer\n *\n * @return      int\n * @retval      0 \tresource name found and returned\n * @retval      -1\tresource name would not fit\n * @retval\t+1\tno resource name found\n *\n */\nint\nentlim_resc_from_key(char *key, char *rtnresc, size_t ln)\n{\n\tchar *pc;\n\n\tpc = strchr(key, (int) ';');\n\tif (pc) {\n\t\tif (strlen(++pc) < ln) {\n\t\t\tstrcpy(rtnresc, pc);\n\t\t\treturn 0;\n\t\t} else\n\t\t\treturn -1;\n\t} else {\n\t\t*rtnresc = '\\0';\n\t\treturn 1;\n\t}\n}\n"
  },
  {
    "path": "src/lib/Libutil/execvnode_seq_util.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\texecvnode_seq_util.c\n * @brief\n *  Utility functions to condense and unroll a sequence of execvnodes that are\n *  returned by the scheduler as confirmation of a standing reservation.\n *  The functionality is to condense into a human-readable string, the execvnodes\n *  of each occurrence of a standing reservation, and be able to retrieve each\n *  occurrence by occurrence index of an array.\n *\n *  Example usage (also refer to the debug function int test_execvnode_seq\n *  for a practical example):\n *\n *  Assume str points to some string.\n *  char *condensed_str;\n *  char **unrolled_str;\n *  char **tofree;\n *\n *  condensed_str = condense_execvnode_seq(str);\n *  unrolled_str = unroll_execvnode_seq(condensed_str, &tofree);\n *  ...access an arbitrary, say 2nd occurrence, index via unrolled_str[1]\n *  free_execvnode_seq(tofree);\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <libutil.h>\n#include <log.h>\n#include <string.h>\n#include <stdio.h>\n#include <stdlib.h>\n\n/* Initialize a dictionary structure */\nstatic dictionary *new_dictionary();\n\n/* Initialize a new word with the given string value */\nstatic struct word *new_word(char *);\n\n/* Initialize a new map for a given value */\nstatic struct map *new_map(int);\n\n/* Map a sequence of words to a list of indices by 
which\n * they are represented in the passed string str.\n */\nstatic int direct_map(dictionary *, char *);\n\n/* find a word in a dictionary */\nstatic struct word *find_word(dictionary *, char *);\n\n/* Append word to dictionary */\nstatic int append_to_dict(dictionary *, char *, int);\n\n/* Append the index of a word in a string to the mapping of the word */\nstatic int append_to_word(dictionary *, struct word *, int);\n\n/* Create a string out of all words in a dictionary and their corresponding mapped indices */\nstatic char *dict_to_str(dictionary *);\n\n/* Free memory allocated to dictionary */\nstatic void free_dict(dictionary *);\n\n/* Free memory allocated to a word and its mapping */\nstatic void free_word(struct word *);\n\n/**\n * @brief\n *\tInitialize a dictionary structure.\n *\n * @return\tstructure handle\n * @retval\ta new dictionary structure \tsuccess\n * @retval\tNULL \t\t\t\tif unable to allocate memory.\n *\n */\nstatic dictionary *\nnew_dictionary()\n{\n\tdictionary *dict;\n\tif ((dict = (dictionary *) malloc(sizeof(dictionary))) == NULL) {\n\t\tDBPRT((\"new_dictionary: %s\\n\", MALLOC_ERR_MSG))\n\t\treturn NULL;\n\t}\n\tdict->first = NULL;\n\tdict->last = NULL;\n\tdict->count = 0;\n\tdict->length = 0;\n\tdict->max_idx = 0;\n\n\treturn dict;\n}\n\n/**\n * @brief\n * \tInitialize a new word with the given string value.\n *\n * @param[in] str - The string value associated to the new word\n *\n * @return \tstructure handle\n * @retval\ta new word structure \tsucces\n * @retval\tNULL \t\t\tif unable to allocate memory\n *\n */\nstatic struct word *\nnew_word(char *str)\n{\n\tstruct word *nw;\n\n\tif (str == NULL)\n\t\treturn NULL;\n\n\tif ((nw = malloc(sizeof(struct word))) == NULL) {\n\t\tDBPRT((\"new_word: %s\\n\", MALLOC_ERR_MSG))\n\t\treturn NULL;\n\t}\n\tif ((nw->name = strdup(str)) == NULL) {\n\t\tfree(nw);\n\t\tDBPRT((\"new_word: %s\\n\", MALLOC_ERR_MSG));\n\t\treturn NULL;\n\t}\n\tnw->next = NULL;\n\tnw->map = NULL;\n\tnw->count = 
0;\n\n\treturn nw;\n}\n\n/**\n * @brief\n * \tInitialize a new map for a given value. A map is the associated value of a\n * \tword, and in practice is the index where the word appears in the sequence.\n *\n * @param[in] val - The integer value corresponding to the index  of a word.\n *\n * @return \tstructure handle\n * @retval\tA new map structure \tsuccess\n * @retval\tNULL \t\t\tif unable to allocate memory.\n *\n */\nstatic struct map *\nnew_map(int val)\n{\n\tstruct map *m;\n\n\tif (val < 0)\n\t\treturn NULL;\n\n\tif ((m = malloc(sizeof(struct map))) == NULL) {\n\t\tDBPRT((\"new_map: %s\\n\", MALLOC_ERR_MSG))\n\t\treturn NULL;\n\t}\n\tm->val = val;\n\tm->next = NULL;\n\n\treturn m;\n}\n\n/**\n * @brief\n * \tMap a sequence of words to a list of indices by which\n * \tthey are represented in the passed string str.\n *\n * @par\tGiven a sequence of words (the execvnodes), this function\\n\n * \t1) Looks for the word in the dictionary\\n\n * \t2) If the word is found, the index where it appears in the string is added\n *    \tas a mapping of the word\\n\n * \t3) If the word is not found in the dictionary, it is added.\\n\n *\n * @param[out] dict - The dictionary considered.\n * @param[in] str - The string to check against in the dictionary.\n *\n * @par\tNote:\n *\tIf the string exists in the dictionary, its index is appended\n * \tto the word entry. 
Otherwise it is added as a new word to the dictionary\n *\n * @return int\n * @retval 1 error\n * @retval 0 success\n */\nstatic int\ndirect_map(dictionary *dict, char *str)\n{\n\tstruct word *w;\n\tchar *str_tok;\n\tchar *str_copy;\n\tint i = 0;\n\n\tif (dict == NULL || str == NULL)\n\t\treturn 1;\n\n\tif ((str_copy = strdup(str)) == NULL) {\n\t\tDBPRT((\"new_word: %s\\n\", MALLOC_ERR_MSG));\n\t\treturn 1;\n\t}\n\n\tstr_tok = strtok(str_copy, TOKEN_SEPARATOR);\n\twhile (str_tok != NULL) {\n\t\tw = (struct word *) find_word(dict, str_tok);\n\t\tif (w == NULL) {\n\t\t\tif (append_to_dict(dict, str_tok, i)) {\n\t\t\t\tfree(str_copy);\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t} else if (append_to_word(dict, w, i)) {\n\t\t\tfree(str_copy);\n\t\t\treturn 1;\n\t\t}\n\t\ti++;\n\t\tstr_tok = strtok(NULL, TOKEN_SEPARATOR);\n\t}\n\tdict->max_idx = i;\n\tfree(str_copy);\n\treturn 0;\n}\n\n/**\n * @brief\n * \tFind a word in a dictionary.\n *\n * @param[in] dict - The dictionary in which to search\n * @param[in] str - The string to be found.\n *\n * @return\tstructure handle\n * @retval\ta Word structure \tif the word is found,\n * @retval\tNULL.\t\t\tif not found\n *\n */\nstatic struct word *\nfind_word(dictionary *dict, char *str)\n{\n\tstruct word *cur;\n\n\tif (dict == NULL || str == NULL)\n\t\treturn NULL;\n\n\tif (dict->count == 0)\n\t\treturn NULL;\n\n\tfor (cur = dict->first; cur != NULL; cur = cur->next) {\n\t\tif (!strcmp(cur->name, str))\n\t\t\treturn cur;\n\t}\n\n\treturn NULL;\n}\n\n/**\n * @brief\n * \tAppend word to dictionary.\n *\n * @param[in] dict - The dictionary considered.\n * @param[in] str - The string representation of the word to append\n * @param[in] val - The index at which the string resides in the original string.\n * \n * @return int\n * @retval 1 error\n * @retval 0 success\n *\n */\nstatic int\nappend_to_dict(dictionary *dict, char *str, int val)\n{\n\tstruct word *nw;\n\tstruct word *tmp;\n\n\tif (dict == NULL || str == NULL || val < 0)\n\t\treturn 
1;\n\n\tnw = new_word(str);\n\n\tif (nw == NULL)\n\t\treturn 1;\n\n\tif (dict->first == NULL) {\n\t\tdict->first = nw;\n\t\tdict->last = nw;\n\t} else {\n\t\ttmp = dict->first;\n\t\tdict->first = nw;\n\t\tnw->next = tmp;\n\t}\n\tnw->map = new_map(val);\n\n\tif (nw->map == NULL)\n\t\treturn 1;\n\n\tnw->count++;\n\tdict->length += strlen(str);\n\tdict->length += MAX_INT_LENGTH;\n\tdict->count++;\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \tAppend the index of a word in a string to the mapping of the word\n *\n * @param[in] dict - The dictionary considered\n * @param[in] w - The word to append to\n * @param[in] val - The index value to append to the word\n * \n * @return int\n * @retval 1 error\n * @retval 0 success\n *\n */\nstatic int\nappend_to_word(dictionary *dict, struct word *w, int val)\n{\n\n\tstruct map *m, *tmp;\n\n\tif (dict == NULL || w == NULL || val < 0)\n\t\treturn 1;\n\n\tm = w->map;\n\ttmp = m;\n\tif (m == NULL) {\n\t\tm = new_map(val);\n\n\t\tif (m == NULL)\n\t\t\treturn 1;\n\n\t\tw->map = m;\n\t} else {\n\t\twhile (tmp->next != NULL) {\n\t\t\ttmp = tmp->next;\n\t\t}\n\t\ttmp->next = new_map(val);\n\n\t\tif (tmp->next == NULL)\n\t\t\treturn 1;\n\t}\n\tw->count++;\n\t/* MAX_INT_LENGTH is the length of a string representation of an index */\n\tdict->length += MAX_INT_LENGTH;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tConvert a TOKEN_SEPARATOR delimited string into a condensed dictionary representation.\n *\n * @param[in] str - The string to condense into indexed arrays or repeating occurrences.\n *\n * @return \tstring\n * @retval\ta Condensed reprsentation of the string in which all recurring tokens are\n * \t\trepresented by an indexed array. 
For example the string:\n *\t\t(tic)~(tac)~(toe)~(tic)~(tic)~(tic) is condensed to (tic){0,3-5} (tac){1} (toe){2}\n *\t\t( the condensed representation of the sequence of tokens)\n * @retval\tNULL\t\t\terror\n *\n */\nchar *\ncondense_execvnode_seq(const char *str)\n{\n\tdictionary *dict;\n\tchar *s_tmp;\n\tchar *cp;\n\n\tif (str == NULL)\n\t\treturn NULL;\n\n\tdict = new_dictionary();\n\n\tif (dict == NULL)\n\t\treturn NULL;\n\n\ts_tmp = strdup(str);\n\tif (s_tmp == NULL) {\n\t\tDBPRT((\"condense_execvnode_seq: %s\\n\", MALLOC_ERR_MSG));\n\t\tfree(dict);\n\t\treturn NULL;\n\t}\n\tif (direct_map(dict, s_tmp)) {\n\t\tfree(s_tmp);\n\t\tfree_dict(dict);\n\t\treturn NULL;\n\t}\n\tcp = dict_to_str(dict);\n\t/* Free up all memory allocated */\n\tfree_dict(dict);\n\tfree(s_tmp);\n\n\treturn cp;\n}\n\n/**\n * @brief\n * \tUnroll a condensed string into an indexed array of pointers to words (strings).\n * \tThis function takes as input a string of the form:\n * \t<count>COUNT_TOK<vnode1><range1><vnode2><range2>...\n * The tokens that are being used are:\n * 1) COUNT_TOK to separate the number of occurrences from the sequence of execvnodes\n * 2) WORD_TOK WORD_MAP_TOK used to enclose the range of indices for each execvnode\n *\n * and returns an array of pointers of length count, for which each index of\n * the array points to either vnode1, vnode2,etc based on the values in\n * range1, range2, etc respectively.\n * The idea is that instead of recreating the original long sequence of vnodes, only\n * unique vnode strings are created and is referred to as an array of pointers.\n *\n * Because of this special handling, the pointers to the unique vnodes has to be\n * visible at return time so that it can be properly free'd. 
The pointers to these\n * unique vnodes are kept in the tofree parameter.\n *\n * @param[in] str - The string to unroll\n * @param[in] tofree - A pointer to a block of memory to deallocate\n *\n * @return A pointer to the unrolled string\n *\n */\nchar **\nunroll_execvnode_seq(char *str, char ***tofree)\n{\n\tchar *word;\n\tchar *map;\n\tchar *range;\n\tchar *nm1 = NULL;\n\tchar *nm2 = NULL;\n\tchar *nm3 = NULL;\n\tint max_idx;\n\tint first, last;\n\tchar *tmp;\n\tint i;\n\tint j = 0;\n\n\tchar **rev_dict;\n\n\tif (str == NULL) {\n\t\t*tofree = NULL;\n\t\treturn NULL;\n\t}\n\n\t/* Tokenize the number of occurrences part */\n\tword = string_token(str, COUNT_TOK, &nm1);\n\n\tif (word == NULL)\n\t\treturn NULL;\n\t/* The number of occurrences */\n\tmax_idx = atoi(word);\n\t/* Allocate memory for the returning variable rev_dict which\n\t * will be an array of pointers of length, the number of occurrences,\n\t * for which each index corresponds to the execvnode associated to that occurrence */\n\tif ((rev_dict = (char **) malloc((max_idx + 1) * sizeof(char *))) == NULL) {\n\t\tDBPRT((\"unroll_execvnode_seq: %s\\n\", MALLOC_ERR_MSG));\n\t\treturn NULL;\n\t}\n\t/* Allocate memory to the block of memory that contains only Unique execvnodes\n\t * that will need to be freed once the string is not anymore needed.\n\t * This block of memory is made as large as rev_dict in the worst case where\n\t * all execvnodes are distinct but is resized later for the average case\n\t * where most execvnodes after a certain date will be identical */\n\tif ((*tofree = (char **) malloc((max_idx + 1) * sizeof(char *))) == NULL) {\n\t\tfree(rev_dict);\n\t\treturn NULL;\n\t}\n\t/* Tokenize the <vnode>{range} part */\n\tword = string_token(NULL, WORD_TOK, &nm1);\n\twhile (word != NULL) {\n\t\tif ((tmp = strdup(word)) == NULL) {\n\t\t\tfree(*tofree);\n\t\t\tfree(rev_dict);\n\t\t\tDBPRT((\"unroll_execvnode_seq: %s\\n\", MALLOC_ERR_MSG));\n\t\t\treturn NULL;\n\t\t}\n\t\t/* Tokenize to isolate 
<vnode> in <vnode>{range} */\n\t\tword = string_token(NULL, WORD_MAP_TOK, &nm1);\n\t\tif (word == NULL) {\n\t\t\tfree(tmp);\n\t\t\tbreak;\n\t\t}\n\t\t/* Tokenize to isolate {range} */\n\t\tmap = string_token(word, MAP_TOK, &nm2);\n\t\twhile (map != NULL) {\n\t\t\t/* Tokenize the range which can be of the form 0-10 or 0,3,5 */\n\t\t\trange = string_token(map, RANGE_TOK, &nm3);\n\t\t\tif (range != NULL) {\n\t\t\t\tfirst = atoi(range);\n\t\t\t\tlast = first;\n\t\t\t\t/* Each index in range is parsed */\n\t\t\t\trange = string_token(NULL, RANGE_TOK, &nm3);\n\t\t\t\tif (range != NULL)\n\t\t\t\t\tlast = atoi(range);\n\t\t\t\t/* Append the <vnode> token to the pointer array\n\t\t\t\t * for each index indicated by range */\n\t\t\t\tfor (i = first; i <= last; i++)\n\t\t\t\t\trev_dict[i] = (char *) tmp;\n\t\t\t} else {\n\t\t\t\trev_dict[atoi(map)] = (char *) tmp;\n\t\t\t}\n\t\t\tmap = string_token(NULL, MAP_TOK, &nm2);\n\t\t}\n\t\t/* Append each unique <vnode> to the block of memory to free */\n\t\tif (j < max_idx) {\n\t\t\t(*tofree)[j] = tmp;\n\t\t\tj++;\n\t\t} else {\n\t\t\tj = max_idx;\n\t\t\tbreak;\n\t\t}\n\t\tword = string_token(NULL, WORD_TOK, &nm1);\n\t}\n\n\trev_dict[max_idx] = NULL;\n\n\t(*tofree)[j] = NULL;\n\n\treturn rev_dict;\n}\n\n/**\n * @brief\n * \tGet the total number of indices represented in the condensed string\n * \twhich corresponds to the total number of occurrences in the execvnode string\n *\n * @param[in] str - Either a condensed execvnode_seq or a single execvnode\n * \t\tThe format expected is in the form of:\n * \t\texecvnode_seq: N#(execvnode){0-N-1} e.g. 10:(mars:ncpus=1){0-9}\n * \t\tSingle execvnode: (mars:ncpus=1)\n *\n * @return\tint\n * @retval\tThe number of occurrences. 
\n * @retval\t0\t\t\tstr is NULL or the count token is missing\n *\n */\nint\nget_execvnodes_count(char *str)\n{\n\tint count;\n\tchar *word;\n\tchar *str_copy;\n\n\tif (str == NULL)\n\t\treturn 0;\n\n\tif (str[0] == '(')\n\t\treturn 1;\n\n\tif ((str_copy = strdup(str)) == NULL)\n\t\treturn 0;\n\tword = strtok(str_copy, COUNT_TOK);\n\n\tif (word == NULL) {\n\t\tfree(str_copy);\n\t\treturn 0;\n\t}\n\n\tcount = atoi(word);\n\tfree(str_copy);\n\n\treturn count;\n}\n\n/**\n * @brief\n *\tTranslate a dictionary into a string.\n *      Walks the entire dictionary, word by word, and for each word creates a\n *      string representation by concatenating words together.\n *      The result is a string of tokens separated by the WORD_TOK character.\n *      The format of the returned string is:\n *\n *      <num_occurrences>COUNT_TOK<vnode1>WORD_TOK<range>WORD_MAP_TOK<vnode2>\n *      WORD_TOK...\n *\n *      COUNT_TOK, WORD_TOK and WORD_MAP_TOK are defined in the header file.\n *\n * @param[in]   dict - the dictionary considered\n *\n * @return char*\n * @retval The concatenation of all words in the dictionary\tsuccess\n * @retval NULL\tmemory allocation for the returned string fails\n */\nstatic char *\ndict_to_str(dictionary *dict)\n{\n\tchar *condensed;\n\tchar *tmp;\n\tchar buf[1024];\n\tstruct word *w;\n\tstruct map *m;\n\tint prev = 0, first = 0, cur;\n\tint begin_range = 1;\n\n\tif (dict == NULL)\n\t\treturn NULL;\n\n\tw = dict->first;\n\n\tif (w == NULL)\n\t\treturn NULL;\n\n\t/* Allocate sufficient memory for the string to be returned;\n\t * this string will be resized at the end of the function. */\n\tif ((condensed = malloc((dict->length + 1) * sizeof(char))) == NULL) {\n\t\tDBPRT((\"dict_to_str: %s\\n\", MALLOC_ERR_MSG));\n\t\treturn NULL;\n\t}\n\n\t/* Write the number of occurrences followed by COUNT_TOK */\n\tsprintf(condensed, \"%d%s\", dict->max_idx, COUNT_TOK);\n\n\tm = w->map;\n\n\twhile (w != NULL) {\n\t\ttmp = strdup(w->name);\n\t\tif (tmp == NULL) 
{\n\t\t\tbreak;\n\t\t}\n\t\t/* Concatenate the vnode followed by the separator\n\t\t * to start the range */\n\t\t(void) strcat(condensed, tmp);\n\t\t(void) strcat(condensed, WORD_TOK);\n\n\t\twhile (m != NULL) {\n\t\t\tcur = m->val;\n\t\t\t/* If current value (and not first scan) is increment\n\t\t\t *  of previous then keep reading */\n\t\t\tif (!begin_range && cur == prev + 1) {\n\t\t\t\tm = m->next;\n\t\t\t\tprev = cur;\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tif (begin_range) {\n\t\t\t\tfirst = cur;\n\t\t\t\tbegin_range = 0;\n\t\t\t\tm = m->next;\n\t\t\t} else {\n\t\t\t\t/* Concatentate the range */\n\t\t\t\tif (first == prev)\n\t\t\t\t\tsprintf(buf, \"%d%s\", first, MAP_TOK);\n\t\t\t\telse\n\t\t\t\t\tsprintf(buf, \"%d%s%d, \", first, RANGE_TOK, prev);\n\t\t\t\t(void) strcat(condensed, buf);\n\t\t\t\tbegin_range = 1;\n\t\t\t}\n\n\t\t\tprev = cur;\n\t\t}\n\t\tif (first == prev)\n\t\t\tsprintf(buf, \"%d\", first);\n\t\telse\n\t\t\tsprintf(buf, \"%d%s%d\", first, RANGE_TOK, prev);\n\t\t(void) strcat(condensed, buf);\n\n\t\tbegin_range = 1;\n\n\t\tw = w->next;\n\t\tif (w != NULL)\n\t\t\tm = w->map;\n\t\t/* Concatenate the closing separator of the range */\n\t\tstrcat(condensed, WORD_MAP_TOK);\n\n\t\tfree(tmp);\n\t}\n\t/* condensed was malloc'd dict->length which was an \"overestimate\" of the actual needed memory\n\t * resize to what's actually been used */\n\ttmp = realloc(condensed, (strlen(condensed) + 1) * sizeof(char));\n\tif (tmp == NULL) {\n\t\tfree(condensed);\n\t\tDBPRT((\"dict_to_str: %s\\n\", MALLOC_ERR_MSG));\n\t\treturn NULL;\n\t} else\n\t\tcondensed = tmp;\n\n\tcondensed[strlen(condensed)] = '\\0';\n\n\treturn condensed;\n}\n\n/**\n * @brief\n * \tFree up all memory allocated to dictionary\n *\n * @param[in] dict - The dictionary considered\n *\n */\nstatic void\nfree_dict(dictionary *dict)\n{\n\tstruct word *w;\n\tstruct word *w_tmp;\n\n\tif (dict == NULL)\n\t\treturn;\n\n\tw = dict->first;\n\n\tif (w == NULL) 
{\n\t\tfree(dict);\n\t\treturn;\n\t}\n\n\twhile (w->next != NULL) {\n\t\tw_tmp = w->next;\n\t\tfree_word(w);\n\t\tw = w_tmp;\n\t}\n\tfree_word(w);\n\tfree(dict);\n}\n\n/**\n * @brief\n * \tFree memory allocated to a word and its mapping\n *\n * @param[in] w - The word to deallocate memory for\n *\n */\nstatic void\nfree_word(struct word *w)\n{\n\tstruct map *m;\n\tstruct map *m_tmp;\n\n\tif (w == NULL)\n\t\treturn;\n\n\t/* free the map list (if any), then the word's name and the word itself */\n\tm = w->map;\n\twhile (m != NULL) {\n\t\tm_tmp = m->next;\n\t\tfree(m);\n\t\tm = m_tmp;\n\t}\n\tfree(w->name);\n\tfree(w);\n}\n\n/**\n * @brief\n * \tFree the memory allocated to an unrolled execvnode sequence.\n *  \tThe execvnode sequence is an array of pointers to the unique\n *  \texecvnode strings.\n *\n * @par\tNote that this function is passed the block of memory allocated\n *  \tfrom unroll_execvnode_seq's argument and NOT its return value.\n *  \tThe block of memory that is freed here is only the unique execvnodes\n *  \tthat had been allocated and not the array of pointers to these execvnodes.\n *\n * @param[in] ptr - Pointer to a block of memory to free\n *\n */\nvoid\nfree_execvnode_seq(char **ptr)\n{\n\tint i;\n\n\tif (ptr == NULL)\n\t\treturn;\n\n\tfor (i = 0; ptr[i] != NULL; i++)\n\t\tfree(ptr[i]);\n\tfree(ptr);\n}\n\n#ifdef DEBUG\n/**\n * @brief\n * \tUnit-test function\n *\n */\nvoid\ntest_execvnode_seq()\n{\n\tint i;\n\tint num;\n\tchar *str;\n\tchar *condensed_execvnodes;\n\tchar **tofree;\n\tchar **unroll_execvnodes;\n\n\t/* using all possible legal vnode characters separated by TOKEN_SEPARATOR */\n\tstr = \"(a-_^.#[0]:n=1)~(b@m.[1],c:m=2)~(a-_^.#[0]:n=1)~(b@m.[1],c:m=2)\";\n\tprintf(\"Original string: %s\\n\", str);\n\tcondensed_execvnodes = condense_execvnode_seq(str);\n\tnum = get_execvnodes_count(condensed_execvnodes);\n\tprintf(\"condensed string: %s\\n\", condensed_execvnodes);\n\tunroll_execvnodes = unroll_execvnode_seq(condensed_execvnodes, 
&tofree);\n\tprintf(\"Decompressed string:\\n\");\n\tfor (i = 0; i < num; i++)\n\t\tprintf(\"%s \", unroll_execvnodes[i]);\n\tprintf(\"\\n\");\n\tfree(unroll_execvnodes);\n\tfree_execvnode_seq(tofree);\n\tfree(condensed_execvnodes);\n}\n#endif\n"
  },
  {
    "path": "src/lib/Libutil/get_hostname.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tget_hostname.c\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <ctype.h>\n#include <sys/types.h>\n#include <sys/socket.h>\n#include <netinet/in.h>\n#include \"portability.h\"\n#include <netdb.h>\n#include <string.h>\n#include <errno.h>\n#include <arpa/inet.h>\n#include \"pbs_ifl.h\"\n#include \"pbs_internal.h\"\n\n/**\n * @brief\n * \t-get_fullhostname - get the fully qualified name of a host.\n *\n * @param[in] shortname - short name of host\n * @param[in] namebuf - buffer holding local name of host\n * @param[in] bufsize - buffer size\n *\n * @return\tint\n * @retval\t0\tif success\n *\t\t\thost name in character buffer pointed to by namebuf\n * @retval\t-1\tif error\n */\n\nint\nget_fullhostname(char *shortname, char *namebuf, int bufsize)\n{\n\tint i;\n\tchar *pbkslh = 0;\n\tchar *pcolon = 0;\n\tchar extname[PBS_MAXHOSTNAME + 1] = {'\\0'};\n\tchar localname[PBS_MAXHOSTNAME + 1] = {'\\0'};\n\tstruct addrinfo *aip, *pai;\n\tstruct addrinfo hints;\n\tstruct sockaddr_in *inp;\n\n\tif ((pcolon = strchr(shortname, (int) ':')) != NULL) {\n\t\t*pcolon = '\\0';\n\t\tif (*(pcolon - 1) == '\\\\')\n\t\t\t*(pbkslh = pcolon - 1) = '\\0';\n\t}\n\n\tmemset(&hints, 0, sizeof(struct addrinfo));\n\t/*\n\t *\tWhy do we use AF_UNSPEC rather than AF_INET?  
Some\n\t *\timplementations of getaddrinfo() will take an IPv6\n\t *\taddress and map it to an IPv4 one if we ask for AF_INET\n\t *\tonly.  We don't want that - we want only the addresses\n\t *\tthat are genuinely, natively, IPv4 so we start with\n\t *\tAF_UNSPEC and filter ai_family below.\n\t */\n\thints.ai_family = AF_UNSPEC;\n\thints.ai_socktype = SOCK_STREAM;\n\thints.ai_protocol = IPPROTO_TCP;\n\n\tif (getaddrinfo(shortname, NULL, &hints, &pai) != 0)\n\t\treturn (-1);\n\n\tif (pcolon) {\n\t\t*pcolon = ':'; /* replace the colon */\n\t\tif (pbkslh)\n\t\t\t*pbkslh = '\\\\';\n\t}\n\n\t/*\n\t *\tThis loop tries to find a non-loopback IPv4 address suitable\n\t *\tfor use by, in particular, pbs_server (which doesn't want to\n\t *\tname its jobs <N>.localhost), so we ignore non-IPv4 addresses,\n\t *\tthose that aren't invertible, and those on a loopback net.\n\t */\n\tfor (aip = pai; aip != NULL; aip = aip->ai_next) {\n\t\tif (aip->ai_family != AF_INET)\n\t\t\tcontinue; /* skip non-IPv4 addresses */\n\t\tif (getnameinfo(aip->ai_addr, aip->ai_addrlen, namebuf,\n\t\t\t\tbufsize, NULL, 0, 0) != 0)\n\t\t\tcontinue; /* skip non-invertible addresses */\n\t\tinp = (struct sockaddr_in *) aip->ai_addr;\n\t\tif (ntohl(inp->sin_addr.s_addr) >> 24 != IN_LOOPBACKNET) {\n\t\t\tstrncpy(extname, namebuf, (sizeof(extname) - 1));\n\t\t\tbreak; /* skip loopback addresses */\n\t\t} else\n\t\t\tstrncpy(localname, namebuf, (sizeof(localname) - 1));\n\t}\n\tfreeaddrinfo(pai);\n\tif (extname[0] == '\\0')\n\t\tstrncpy(namebuf, localname, bufsize);\n\telse\n\t\tstrncpy(namebuf, extname, bufsize);\n\n\tif (namebuf[0] == '\\0')\n\t\treturn (-1);\n\n\tfor (i = 0; i < bufsize; i++) {\n\t\t*(namebuf + i) = tolower((int) *(namebuf + i));\n\t\tif (*(namebuf + i) == '\\0')\n\t\t\tbreak;\n\t}\n\n\t*(namebuf + bufsize) = '\\0'; /* insure null terminated */\n\treturn (0);\n}\n\n/**\n * @brief\n *\tGet hostname corresponding to the addr passed\n *\n *  @param[in]\taddr\t- addr contains ip and port\n * 
@return\thost name\n * @retval\tNULL\t- Failure\n * @retval\t!NULL\t- Success\n */\nchar *\nget_hostname_from_addr(struct in_addr addr)\n{\n\tstruct hostent *hp;\n\n\thp = gethostbyaddr((void *) &addr, sizeof(struct in_addr), AF_INET);\n\tif (hp == NULL) {\n\t\tlog_errf(-1, __func__, \"%s: errno=%d, h_errno=%d\",\n\t\t\t inet_ntoa(addr), errno, h_errno);\n\t\treturn NULL;\n\t}\n\n\treturn hp->h_name;\n}\n"
  },
  {
    "path": "src/lib/Libutil/hook.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <unistd.h>\n#include <sys/param.h>\n#include <dirent.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <ctype.h>\n#include <errno.h>\n#include <assert.h>\n\n#include <memory.h>\n#include <fcntl.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include \"pbs_ifl.h\"\n#include \"libpbs.h\"\n#include \"list_link.h\"\n#include \"work_task.h\"\n#include \"log.h\"\n#include \"server_limits.h\"\n#include \"attribute.h\"\n#include \"credential.h\"\n#include \"batch_request.h\"\n#include \"job.h\"\n#include \"svrfunc.h\"\n#include \"pbs_nodes.h\"\n#include <pbs_python.h> /* for python interpreter */\n#include \"hook.h\"\n#include \"tpp.h\"\n#include <signal.h>\n#include \"hook_func.h\"\n\n/**\n * @file\thook.c\n */\n/* External functions */\n\n/* Local Private Functions */\n\n/* Global Data items */\nstatic void (*python_interrupt_func)(void) = NULL;\n\nextern char *path_priv;\nextern char *path_hooks_workdir;\n\nextern char *path_hooks;\nextern time_t time_now;\n\nextern pbs_list_head svr_allhooks;\nextern pbs_list_head svr_queuejob_hooks;\nextern pbs_list_head svr_postqueuejob_hooks;\nextern pbs_list_head svr_modifyjob_hooks;\nextern pbs_list_head svr_resvsub_hooks;\nextern pbs_list_head svr_modifyresv_hooks;\nextern pbs_list_head 
svr_movejob_hooks;\nextern pbs_list_head svr_runjob_hooks;\nextern pbs_list_head svr_jobobit_hooks;\nextern pbs_list_head svr_management_hooks;\nextern pbs_list_head svr_modifyvnode_hooks;\nextern pbs_list_head svr_provision_hooks;\nextern pbs_list_head svr_periodic_hooks;\nextern pbs_list_head svr_resv_confirm_hooks;\nextern pbs_list_head svr_resv_begin_hooks;\nextern pbs_list_head svr_resv_end_hooks;\nextern pbs_list_head svr_execjob_begin_hooks;\nextern pbs_list_head svr_execjob_prologue_hooks;\nextern pbs_list_head svr_execjob_epilogue_hooks;\nextern pbs_list_head svr_execjob_end_hooks;\nextern pbs_list_head svr_execjob_preterm_hooks;\nextern pbs_list_head svr_execjob_launch_hooks;\nextern pbs_list_head svr_exechost_periodic_hooks;\nextern pbs_list_head svr_exechost_startup_hooks;\nextern pbs_list_head svr_execjob_attach_hooks;\nextern pbs_list_head svr_execjob_resize_hooks;\nextern pbs_list_head svr_execjob_abort_hooks;\nextern pbs_list_head svr_execjob_postsuspend_hooks;\nextern pbs_list_head svr_execjob_preresume_hooks;\n\n/**\n *\n * @brief\n *\tRemoves 'phook' from all the hooks lists known to the system.\n *\n * @param[in]\tphook - the hook to remove.\n *\n */\nvoid\nclear_hook_links(hook *phook)\n{\n\tdelete_link(&phook->hi_queuejob_hooks);\n\tdelete_link(&phook->hi_postqueuejob_hooks);\n\tdelete_link(&phook->hi_modifyjob_hooks);\n\tdelete_link(&phook->hi_resvsub_hooks);\n\tdelete_link(&phook->hi_modifyresv_hooks);\n\tdelete_link(&phook->hi_movejob_hooks);\n\tdelete_link(&phook->hi_runjob_hooks);\n\tdelete_link(&phook->hi_jobobit_hooks);\n\tdelete_link(&phook->hi_provision_hooks);\n\tdelete_link(&phook->hi_periodic_hooks);\n\tdelete_link(&phook->hi_resv_confirm_hooks);\n\tdelete_link(&phook->hi_resv_begin_hooks);\n\tdelete_link(&phook->hi_resv_end_hooks);\n\tdelete_link(&phook->hi_allhooks);\n\tdelete_link(&phook->hi_management_hooks);\n\tdelete_link(&phook->hi_modifyvnode_hooks);\n\n\t/* mom hooks below 
*/\n\tdelete_link(&phook->hi_execjob_begin_hooks);\n\tdelete_link(&phook->hi_execjob_prologue_hooks);\n\tdelete_link(&phook->hi_execjob_epilogue_hooks);\n\tdelete_link(&phook->hi_execjob_preterm_hooks);\n\tdelete_link(&phook->hi_execjob_launch_hooks);\n\tdelete_link(&phook->hi_execjob_end_hooks);\n\tdelete_link(&phook->hi_exechost_periodic_hooks);\n\tdelete_link(&phook->hi_exechost_startup_hooks);\n\tdelete_link(&phook->hi_execjob_attach_hooks);\n\tdelete_link(&phook->hi_execjob_resize_hooks);\n\tdelete_link(&phook->hi_execjob_abort_hooks);\n\tdelete_link(&phook->hi_execjob_postsuspend_hooks);\n\tdelete_link(&phook->hi_execjob_preresume_hooks);\n}\n\n/**\n *\n * @brief\n *\tReturn a comma separated set of strings giving\n *\tthe names of event bits turned on in 'event'.\n *\n * @param[in]\tevent - input event.\n *\n * @return char *\n * @retval <string>\tcomma-separated event names.\n *\n * @note\n *\tThis returns a static string that will get overwritten on the\n *\tnext call to this function.\n */\nchar *\nhook_event_as_string(unsigned int event)\n{\n\tstatic char eventstr[HOOK_BUF_SIZE];\n\tint ev_ct = 0;\n\n\teventstr[0] = '\\0';\n\tif (event & HOOK_EVENT_QUEUEJOB) {\n\t\tsnprintf(eventstr, sizeof(eventstr), HOOKSTR_QUEUEJOB);\n\t\tev_ct++;\n\t}\n\n\tif (event & HOOK_EVENT_POSTQUEUEJOB) {\n\t\tif (ev_ct > 0)\n\t\t\tstrncat(eventstr, \",\", sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tstrncat(eventstr, HOOKSTR_POSTQUEUEJOB, sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tev_ct++;\n\t}\n\n\tif (event & HOOK_EVENT_MODIFYJOB) {\n\t\tif (ev_ct > 0)\n\t\t\tstrncat(eventstr, \",\", sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tstrncat(eventstr, HOOKSTR_MODIFYJOB, sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tev_ct++;\n\t}\n\n\tif (event & HOOK_EVENT_RESVSUB) {\n\t\tif (ev_ct > 0)\n\t\t\tstrncat(eventstr, \",\", sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tstrncat(eventstr, HOOKSTR_RESVSUB, sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tev_ct++;\n\t}\n\n\tif 
(event & HOOK_EVENT_MODIFYRESV) {\n\t\tif (ev_ct > 0)\n\t\t\tstrncat(eventstr, \",\", sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tstrncat(eventstr, HOOKSTR_MODIFYRESV, sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tev_ct++;\n\t}\n\n\tif (event & HOOK_EVENT_MOVEJOB) {\n\t\tif (ev_ct > 0)\n\t\t\tstrncat(eventstr, \",\", sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tstrncat(eventstr, HOOKSTR_MOVEJOB, sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tev_ct++;\n\t}\n\n\tif (event & HOOK_EVENT_RUNJOB) {\n\t\tif (ev_ct > 0)\n\t\t\tstrncat(eventstr, \",\", sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tstrncat(eventstr, HOOKSTR_RUNJOB, sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tev_ct++;\n\t}\n\n\tif (event & HOOK_EVENT_JOBOBIT) {\n\t\tif (ev_ct > 0)\n\t\t\tstrncat(eventstr, \",\", sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tstrncat(eventstr, HOOKSTR_JOBOBIT, sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tev_ct++;\n\t}\n\n\tif (event & HOOK_EVENT_MANAGEMENT) {\n\t\tif (ev_ct > 0)\n\t\t\tstrncat(eventstr, \",\", sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tstrncat(eventstr, HOOKSTR_MANAGEMENT, sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tev_ct++;\n\t}\n\n\tif (event & HOOK_EVENT_MODIFYVNODE) {\n\t\tif (ev_ct > 0)\n\t\t\tstrncat(eventstr, \",\", sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tstrncat(eventstr, HOOKSTR_MODIFYVNODE, sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tev_ct++;\n\t}\n\n\tif (event & HOOK_EVENT_PERIODIC) {\n\t\tif (ev_ct > 0)\n\t\t\tstrncat(eventstr, \",\", sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tstrncat(eventstr, HOOKSTR_PERIODIC, sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tev_ct++;\n\t}\n\n\tif (event & HOOK_EVENT_PROVISION) {\n\t\tif (ev_ct > 0)\n\t\t\tstrncat(eventstr, \",\", sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tstrncat(eventstr, HOOKSTR_PROVISION, sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tev_ct++;\n\t}\n\n\tif (event & HOOK_EVENT_RESV_CONFIRM) {\n\t\tif (ev_ct > 0)\n\t\t\tstrncat(eventstr, \",\", 
sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tstrncat(eventstr, HOOKSTR_RESV_CONFIRM, sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tev_ct++;\n\t}\n\n\tif (event & HOOK_EVENT_RESV_BEGIN) {\n\t\tif (ev_ct > 0)\n\t\t\tstrncat(eventstr, \",\", sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tstrncat(eventstr, HOOKSTR_RESV_BEGIN, sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tev_ct++;\n\t}\n\n\tif (event & HOOK_EVENT_RESV_END) {\n\t\tif (ev_ct > 0)\n\t\t\tstrncat(eventstr, \",\", sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tstrncat(eventstr, HOOKSTR_RESV_END, sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tev_ct++;\n\t}\n\n\tif (event & HOOK_EVENT_EXECJOB_BEGIN) {\n\t\tif (ev_ct > 0)\n\t\t\tstrncat(eventstr, \",\", sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tstrncat(eventstr, HOOKSTR_EXECJOB_BEGIN, sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tev_ct++;\n\t}\n\n\tif (event & HOOK_EVENT_EXECJOB_PROLOGUE) {\n\t\tif (ev_ct > 0)\n\t\t\tstrncat(eventstr, \",\", sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tstrncat(eventstr, HOOKSTR_EXECJOB_PROLOGUE, sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tev_ct++;\n\t}\n\n\tif (event & HOOK_EVENT_EXECJOB_EPILOGUE) {\n\t\tif (ev_ct > 0)\n\t\t\tstrncat(eventstr, \",\", sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tstrncat(eventstr, HOOKSTR_EXECJOB_EPILOGUE, sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tev_ct++;\n\t}\n\n\tif (event & HOOK_EVENT_EXECJOB_END) {\n\t\tif (ev_ct > 0)\n\t\t\tstrncat(eventstr, \",\", sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tstrncat(eventstr, HOOKSTR_EXECJOB_END, sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tev_ct++;\n\t}\n\n\tif (event & HOOK_EVENT_EXECJOB_PRETERM) {\n\t\tif (ev_ct > 0)\n\t\t\tstrncat(eventstr, \",\", sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tstrncat(eventstr, HOOKSTR_EXECJOB_PRETERM, sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tev_ct++;\n\t}\n\n\tif (event & HOOK_EVENT_EXECJOB_LAUNCH) {\n\t\tif (ev_ct > 0)\n\t\t\tstrncat(eventstr, \",\", sizeof(eventstr) - strlen(eventstr) 
- 1);\n\t\tstrncat(eventstr, HOOKSTR_EXECJOB_LAUNCH, sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tev_ct++;\n\t}\n\n\tif (event & HOOK_EVENT_EXECJOB_ATTACH) {\n\t\tif (ev_ct > 0)\n\t\t\tstrncat(eventstr, \",\", sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tstrncat(eventstr, HOOKSTR_EXECJOB_ATTACH, sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tev_ct++;\n\t}\n\n\tif (event & HOOK_EVENT_EXECJOB_RESIZE) {\n\t\tif (ev_ct > 0)\n\t\t\tstrncat(eventstr, \",\", sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tstrncat(eventstr, HOOKSTR_EXECJOB_RESIZE, sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tev_ct++;\n\t}\n\n\tif (event & HOOK_EVENT_EXECJOB_ABORT) {\n\t\tif (ev_ct > 0)\n\t\t\tstrncat(eventstr, \",\", sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tstrncat(eventstr, HOOKSTR_EXECJOB_ABORT, sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tev_ct++;\n\t}\n\n\tif (event & HOOK_EVENT_EXECJOB_POSTSUSPEND) {\n\t\tif (ev_ct > 0)\n\t\t\tstrncat(eventstr, \",\", sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tstrncat(eventstr, HOOKSTR_EXECJOB_POSTSUSPEND, sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tev_ct++;\n\t}\n\n\tif (event & HOOK_EVENT_EXECJOB_PRERESUME) {\n\t\tif (ev_ct > 0)\n\t\t\tstrncat(eventstr, \",\", sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tstrncat(eventstr, HOOKSTR_EXECJOB_PRERESUME, sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tev_ct++;\n\t}\n\n\tif (event & HOOK_EVENT_EXECHOST_PERIODIC) {\n\t\tif (ev_ct > 0)\n\t\t\tstrncat(eventstr, \",\", sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tstrncat(eventstr, HOOKSTR_EXECHOST_PERIODIC, sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tev_ct++;\n\t}\n\n\tif (event & HOOK_EVENT_EXECHOST_STARTUP) {\n\t\tif (ev_ct > 0)\n\t\t\tstrncat(eventstr, \",\", sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tstrncat(eventstr, HOOKSTR_EXECHOST_STARTUP, sizeof(eventstr) - strlen(eventstr) - 1);\n\t\tev_ct++;\n\t}\n\n\tif (ev_ct == 0)\n\t\tsnprintf(eventstr, sizeof(eventstr), HOOKSTR_NONE);\n\treturn (eventstr);\n}\n\n/**\n * @brief\n 
*\tReturns the internal value (in integer) of the given event string name.\n *\n * @param[in]\teventstr - an event string value (ex. \"execjob_begin\")\n *\n * @return int\n * @retval <n>\tThe internal,numeric value (ex. 5)\n */\nunsigned int\nhookstr_event_toint(char *eventstr)\n{\n\tif (strcmp(eventstr, HOOKSTR_QUEUEJOB) == 0)\n\t\treturn HOOK_EVENT_QUEUEJOB;\n\tif (strcmp(eventstr, HOOKSTR_POSTQUEUEJOB) == 0)\n\t\treturn HOOK_EVENT_POSTQUEUEJOB;\n\tif (strcmp(eventstr, HOOKSTR_MODIFYJOB) == 0)\n\t\treturn HOOK_EVENT_MODIFYJOB;\n\tif (strcmp(eventstr, HOOKSTR_RESVSUB) == 0)\n\t\treturn HOOK_EVENT_RESVSUB;\n\tif (strcmp(eventstr, HOOKSTR_MODIFYRESV) == 0)\n\t\treturn HOOK_EVENT_MODIFYRESV;\n\tif (strcmp(eventstr, HOOKSTR_MOVEJOB) == 0)\n\t\treturn HOOK_EVENT_MOVEJOB;\n\tif (strcmp(eventstr, HOOKSTR_RUNJOB) == 0)\n\t\treturn HOOK_EVENT_RUNJOB;\n\tif (strcmp(eventstr, HOOKSTR_JOBOBIT) == 0)\n\t\treturn HOOK_EVENT_JOBOBIT;\n\tif (strcmp(eventstr, HOOKSTR_MANAGEMENT) == 0)\n\t\treturn HOOK_EVENT_MANAGEMENT;\n\tif (strcmp(eventstr, HOOKSTR_MODIFYVNODE) == 0)\n\t\treturn HOOK_EVENT_MODIFYVNODE;\n\tif (strcmp(eventstr, HOOKSTR_PROVISION) == 0)\n\t\treturn HOOK_EVENT_PROVISION;\n\tif (strcmp(eventstr, HOOKSTR_RESV_CONFIRM) == 0)\n\t\treturn HOOK_EVENT_RESV_CONFIRM;\n\tif (strcmp(eventstr, HOOKSTR_RESV_BEGIN) == 0)\n\t\treturn HOOK_EVENT_RESV_BEGIN;\n\tif (strcmp(eventstr, HOOKSTR_RESV_END) == 0)\n\t\treturn HOOK_EVENT_RESV_END;\n\tif (strcmp(eventstr, HOOKSTR_EXECJOB_BEGIN) == 0)\n\t\treturn HOOK_EVENT_EXECJOB_BEGIN;\n\tif (strcmp(eventstr, HOOKSTR_EXECJOB_PROLOGUE) == 0)\n\t\treturn HOOK_EVENT_EXECJOB_PROLOGUE;\n\tif (strcmp(eventstr, HOOKSTR_EXECJOB_EPILOGUE) == 0)\n\t\treturn HOOK_EVENT_EXECJOB_EPILOGUE;\n\tif (strcmp(eventstr, HOOKSTR_EXECJOB_END) == 0)\n\t\treturn HOOK_EVENT_EXECJOB_END;\n\tif (strcmp(eventstr, HOOKSTR_EXECJOB_PRETERM) == 0)\n\t\treturn HOOK_EVENT_EXECJOB_PRETERM;\n\tif (strcmp(eventstr, HOOKSTR_EXECJOB_LAUNCH) == 0)\n\t\treturn 
HOOK_EVENT_EXECJOB_LAUNCH;\n\tif (strcmp(eventstr, HOOKSTR_EXECHOST_PERIODIC) == 0)\n\t\treturn HOOK_EVENT_EXECHOST_PERIODIC;\n\tif (strcmp(eventstr, HOOKSTR_EXECHOST_STARTUP) == 0)\n\t\treturn HOOK_EVENT_EXECHOST_STARTUP;\n\tif (strcmp(eventstr, HOOKSTR_EXECJOB_ATTACH) == 0)\n\t\treturn HOOK_EVENT_EXECJOB_ATTACH;\n\tif (strcmp(eventstr, HOOKSTR_EXECJOB_RESIZE) == 0)\n\t\treturn HOOK_EVENT_EXECJOB_RESIZE;\n\tif (strcmp(eventstr, HOOKSTR_EXECJOB_ABORT) == 0)\n\t\treturn HOOK_EVENT_EXECJOB_ABORT;\n\tif (strcmp(eventstr, HOOKSTR_EXECJOB_POSTSUSPEND) == 0)\n\t\treturn HOOK_EVENT_EXECJOB_POSTSUSPEND;\n\tif (strcmp(eventstr, HOOKSTR_EXECJOB_PRERESUME) == 0)\n\t\treturn HOOK_EVENT_EXECJOB_PRERESUME;\n\n\treturn 0;\n}\n\nint\nhookstr_type_toint(char *hookstr_type)\n{\n\tif (strcmp(hookstr_type, HOOKSTR_SITE) == 0) {\n\t\treturn (HOOK_SITE);\n\t}\n\tif (strcmp(hookstr_type, HOOKSTR_PBS) == 0) {\n\t\treturn (HOOK_PBS);\n\t}\n\treturn (-1);\n}\n\n/*\n *\tReturns the string representation of hook 'type' value.\n */\nchar *\nhook_type_as_string(hook_type type)\n{\n\tswitch (type) {\n\n\t\tcase HOOK_SITE:\n\t\t\treturn HOOKSTR_SITE;\n\n\t\tcase HOOK_PBS:\n\t\t\treturn HOOKSTR_PBS;\n\n\t\tdefault:\n\t\t\treturn HOOKSTR_UNKNOWN;\n\t}\n}\n\n/*\n *\tReturns the string representation of hook 'enabled' value.\n */\nchar *\nhook_enabled_as_string(int enabled)\n{\n\tif (enabled == TRUE)\n\t\treturn HOOKSTR_TRUE;\n\telse\n\t\treturn HOOKSTR_FALSE;\n}\n\n/*\n *\tReturns the string representation of hook 'debug' value.\n */\nchar *\nhook_debug_as_string(int debug)\n{\n\tif (debug == TRUE)\n\t\treturn HOOKSTR_TRUE;\n\telse\n\t\treturn HOOKSTR_FALSE;\n}\n\n/**\n *\n * @brief\n *\tReturns the string representation of hook 'user' value.\n *\n * @return char *\n * @reval  <string> - the string version of the internal hook 'user' value.\n */\nchar *\nhook_user_as_string(hook_user user)\n{\n\tswitch (user) {\n\n\t\tcase HOOK_PBSADMIN:\n\t\t\treturn HOOKSTR_ADMIN;\n\t\tcase 
HOOK_PBSUSER:\n\t\t\treturn HOOKSTR_USER;\n\n\t\tdefault:\n\t\t\treturn HOOKSTR_UNKNOWN;\n\t}\n}\n\n/**\n *\n * @brief\n *\tReturn a comma separated set of strings giving\n *\tthe names of fail_action bits turned on in 'fail_action'.\n *\n * @param[in]\tfail_action - input fail_action.\n *\n * @return char *\n * @retval <string>\tcomma-separated fail_action names.\n *\n * @note\n *\tThis returns a static string that will get overwritten on the\n *\tnext call to this function.\n */\nchar *\nhook_fail_action_as_string(unsigned int fail_action)\n{\n\tstatic char fail_actionstr[HOOK_BUF_SIZE];\n\tchar *ptr = &fail_actionstr[0];\n\tint ev_ct = 0;\n\tint i;\n\tstatic struct {\n\t\tunsigned int flags;\n\t\tchar *name;\n\t} array[] = {\n\t\t{HOOK_FAIL_ACTION_NONE, HOOKSTR_FAIL_ACTION_NONE},\n\t\t{HOOK_FAIL_ACTION_OFFLINE_VNODES, HOOKSTR_FAIL_ACTION_OFFLINE_VNODES},\n\t\t{HOOK_FAIL_ACTION_CLEAR_VNODES, HOOKSTR_FAIL_ACTION_CLEAR_VNODES},\n\t\t{HOOK_FAIL_ACTION_SCHEDULER_RESTART_CYCLE, HOOKSTR_FAIL_ACTION_SCHEDULER_RESTART_CYCLE},\n\t\t{0, NULL}};\n\n\t*ptr = '\\0';\n\tfor (i = 0; array[i].flags != 0; i++) {\n\t\tif (fail_action & array[i].flags) {\n\t\t\tsize_t namelen = strlen(array[i].name);\n\t\t\tsize_t actionlen = ptr - fail_actionstr;\n\n\t\t\tif (ev_ct > 0) {\n\t\t\t\tif (2 > sizeof(fail_actionstr) - actionlen)\n\t\t\t\t\treturn NULL; /* out of space */\n\t\t\t\t*ptr++ = ',';\t     /* replace nul */\n\t\t\t\t*ptr = '\\0';\t     /* terminate string */\n\t\t\t\tactionlen++;\n\t\t\t}\n\t\t\tif (namelen + 1 > sizeof(fail_actionstr) - actionlen)\n\t\t\t\treturn NULL; /* out of space */\n\t\t\tstrncpy(ptr, array[i].name, sizeof(fail_actionstr) - actionlen);\n\t\t\tptr += namelen;\n\t\t\tev_ct++;\n\t\t}\n\t}\n\n\tif (ev_ct == 0)\n\t\tstrcpy(fail_actionstr, HOOKSTR_FAIL_ACTION_NONE);\n\treturn (fail_actionstr);\n}\n\n/*\n *\tReturns the string representation of hook 'order' value.\n */\nchar *\nhook_order_as_string(short order)\n{\n\tstatic char 
order_str[HOOK_BUF_SIZE];\n\n\tsprintf(order_str, \"%d\", order);\n\treturn (order_str);\n}\n\n/*\n *\tReturns the string representation of hook 'alarm' value.\n */\nchar *\nhook_alarm_as_string(int alarm)\n{\n\tstatic char alarm_str[HOOK_BUF_SIZE];\n\n\tsprintf(alarm_str, \"%d\", alarm);\n\treturn (alarm_str);\n}\n\n/**\n *\n * @brief\n *\tReturns the string representation of hook 'freq' value.\n *\n * @return char *\n * @retval  <string> - the string version of the internal hook 'freq' value.\n */\nchar *\nhook_freq_as_string(int freq)\n{\n\tstatic char freq_str[HOOK_BUF_SIZE + 1];\n\n\tsnprintf(freq_str, HOOK_BUF_SIZE, \"%d\", freq);\n\treturn (freq_str);\n}\n\n/*\n *\tSets the hook 'phook's name attribute to string 'newval'.\n *\tRETURNS: 0 for success; 1 otherwise with 'msg' of size 'msg_len'\n *\tfilled in.\n */\nint\nset_hook_name(hook *phook, char *newval, char *msg, size_t msg_len)\n{\n\tint pbsprefix;\n\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"'msg' buffer is NULL\");\n\t\treturn (1);\n\t}\n\tmemset(msg, '\\0', msg_len);\n\n\tif (phook == NULL) {\n\t\tsnprintf(msg, msg_len - 1, \"%s: hook parameter is NULL!\",\n\t\t\t __func__);\n\t\treturn (1);\n\t}\n\n\tif (newval == NULL) {\n\t\tsnprintf(msg, msg_len - 1, \"%s: hook's name is NULL!\", __func__);\n\t\treturn (1);\n\t}\n\n\tpbsprefix = (strncmp(newval, HOOK_PBS_PREFIX,\n\t\t\t     strlen(HOOK_PBS_PREFIX)) == 0);\n\tif ((phook->type == HOOK_PBS) && !pbsprefix) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"hook_name '%s', must use %s as a \"\n\t\t\t \"prefix since this is a %s hook.\",\n\t\t\t newval, HOOK_PBS_PREFIX, HOOKSTR_PBS);\n\t\treturn (1);\n\t} else if ((phook->type == HOOK_SITE) && pbsprefix) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"hook_name '%s', cannot use %s as a \"\n\t\t\t \"prefix it is reserved for %s hooks\",\n\t\t\t newval, HOOK_PBS_PREFIX, HOOKSTR_PBS);\n\t\treturn (1);\n\t}\n\tphook->hook_name = strdup(newval);\n\n\treturn (0);\n}\n\n/*\n 
*\tSets the hook 'phook's enabled attribute to a value\n *\trepresenting 'newval'.\n *\tRETURNS: 0 for success; 1 otherwise with 'msg' of size 'msg_len'\n *\tfilled in.\n */\nint\nset_hook_enabled(hook *phook, char *newval, char *msg, size_t msg_len)\n{\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"'msg' buffer is NULL\");\n\t\treturn (1);\n\t}\n\tmemset(msg, '\\0', msg_len);\n\n\tif (phook == NULL) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: hook parameter is NULL!\", __func__);\n\t\treturn (1);\n\t}\n\n\tif (newval == NULL) {\n\t\tsnprintf(msg, msg_len - 1, \"%s: hook's value is NULL!\", __func__);\n\t\treturn (1);\n\t}\n\n\tif ((strcasecmp(newval, HOOKSTR_TRUE) == 0) ||\n\t    (strcasecmp(newval, \"t\") == 0) ||\n\t    (strcasecmp(newval, \"y\") == 0) ||\n\t    (strcmp(newval, \"1\") == 0)) {\n\t\tphook->enabled = TRUE;\n\t} else if ((strcasecmp(newval, HOOKSTR_FALSE) == 0) ||\n\t\t   (strcasecmp(newval, \"f\") == 0) ||\n\t\t   (strcasecmp(newval, \"n\") == 0) ||\n\t\t   (strcmp(newval, \"0\") == 0)) {\n\t\tphook->enabled = FALSE;\n\t} else {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"unexpected value \\'%s\\', must be (not case sensitive) \"\n\t\t\t \"%s|t|y|1|%s|f|n|0\",\n\t\t\t newval,\n\t\t\t HOOKSTR_TRUE, HOOKSTR_FALSE);\n\t\treturn (1);\n\t}\n\treturn (0);\n}\n\n/*\n *\tSets the hook 'phook's debug attribute to a value\n *\trepresenting 'newval'.\n *\tRETURNS: 0 for success; 1 otherwise with 'msg' of size 'msg_len'\n *\tfilled in.\n */\nint\nset_hook_debug(hook *phook, char *newval, char *msg, size_t msg_len)\n{\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"'msg' buffer is NULL\");\n\t\treturn (1);\n\t}\n\tmemset(msg, '\\0', msg_len);\n\n\tif (phook == NULL) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: hook parameter is NULL!\", __func__);\n\t\treturn (1);\n\t}\n\n\tif (newval == NULL) {\n\t\tsnprintf(msg, msg_len - 1, \"%s: hook's value is NULL!\", __func__);\n\t\treturn 
(1);\n\t}\n\n\tif ((strcasecmp(newval, HOOKSTR_TRUE) == 0) ||\n\t    (strcasecmp(newval, \"t\") == 0) ||\n\t    (strcasecmp(newval, \"y\") == 0) ||\n\t    (strcmp(newval, \"1\") == 0)) {\n\t\tphook->debug = TRUE;\n\t} else if ((strcasecmp(newval, HOOKSTR_FALSE) == 0) ||\n\t\t   (strcasecmp(newval, \"f\") == 0) ||\n\t\t   (strcasecmp(newval, \"n\") == 0) ||\n\t\t   (strcmp(newval, \"0\") == 0)) {\n\t\tphook->debug = FALSE;\n\t} else {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"unexpected value \\'%s\\', must be (not case sensitive) \"\n\t\t\t \"%s|t|y|1|%s|f|n|0\",\n\t\t\t newval,\n\t\t\t HOOKSTR_TRUE, HOOKSTR_FALSE);\n\t\treturn (1);\n\t}\n\treturn (0);\n}\n\n/*\n *\tSets the hook 'phook's type attribute to a value\n *\trepresenting 'newval'.\n *\tRETURNS: 0 for success; 1 otherwise with 'msg' of size 'msg_len'\n *\tfilled in.\n */\nint\nset_hook_type(hook *phook, char *newval, char *msg, size_t msg_len,\n\t      int allow_PBS_type)\n{\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"'msg' buffer is NULL\");\n\t\treturn (1);\n\t}\n\tmemset(msg, '\\0', msg_len);\n\n\tif (phook == NULL) {\n\t\tsnprintf(msg, msg_len - 1, \"%s: hook parameter is NULL!\",\n\t\t\t __func__);\n\t\treturn (1);\n\t}\n\n\tif (newval == NULL) {\n\t\tsnprintf(msg, msg_len - 1, \"%s: hook's type is NULL!\", __func__);\n\t\treturn (1);\n\t}\n\n\tif (strcmp(newval, HOOKSTR_PBS) == 0) {\n\t\tif (!allow_PBS_type) {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"not allowed to set hook to '%s' type\",\n\t\t\t\t HOOKSTR_PBS);\n\t\t\treturn (1);\n\t\t}\n\t\tif (phook->hook_name &&\n\t\t    (strncmp(phook->hook_name, HOOK_PBS_PREFIX,\n\t\t\t     strlen(HOOK_PBS_PREFIX)) != 0)) {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"can't set hook to %s type - hook name (%s) \"\n\t\t\t\t \"not prefixed with %s\",\n\t\t\t\t HOOKSTR_PBS,\n\t\t\t\t phook->hook_name, HOOK_PBS_PREFIX);\n\t\t\treturn (1);\n\t\t}\n\n\t\tphook->type = HOOK_PBS;\n\t} else if (strcmp(newval, 
HOOKSTR_SITE) == 0) {\n\t\tif (phook->hook_name &&\n\t\t    (strncmp(phook->hook_name, HOOK_PBS_PREFIX,\n\t\t\t     strlen(HOOK_PBS_PREFIX)) == 0)) {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"can't set hook to %s type - hook name (%s) \"\n\t\t\t\t \"already prefixed with %s\",\n\t\t\t\t HOOKSTR_SITE,\n\t\t\t\t phook->hook_name, HOOK_PBS_PREFIX);\n\t\t\treturn (1);\n\t\t}\n\n\t\tif (phook->order <= 0) {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"can't set hook to %s type - hook order \"\n\t\t\t\t \"value is already set to <= 0\",\n\t\t\t\t HOOKSTR_SITE);\n\t\t\treturn (1);\n\t\t}\n\t\tphook->type = HOOK_SITE;\n\t} else {\n\t\tif (allow_PBS_type) {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"invalid argument to type, must be \"\n\t\t\t\t \"\\\"%s\\\" or \\\"%s\\\"\",\n\t\t\t\t HOOKSTR_PBS, HOOKSTR_SITE);\n\t\t} else\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"invalid argument to type, must be \\\"%s\\\"\",\n\t\t\t\t HOOKSTR_SITE);\n\t\treturn (1);\n\t}\n\n\treturn (0);\n}\n\n/**\n * @brief\n *\tSets the hook 'phook's user attribute to a value\n *\trepresenting 'newval'.\n *\n * @param[in/out] phook - hook being operated on\n * @param[in]\t  newval - the new value to phook's user attribute.\n * @param[in/out] msg - filled in with error message if this function fails.\n * @param[in]\t  msg_len - length of 'msg' buffer.\n * @param[in]\t  strict - if 'strict' is set to 1, then only set 'user'\n *\t\t\t  attribute to pbsuser if it is a mom hook and matches\n *\t\t\t one of the MOM_USER_EVENTS.\n * @return int\n * @retval 0 \tfor success\n * @retval 1 \tfor failure, with 'msg' of size 'msg_len' filled in.\n */\nint\nset_hook_user(hook *phook, char *newval, char *msg, size_t msg_len, int strict)\n{\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"'msg' buffer is NULL\");\n\t\treturn (1);\n\t}\n\tmemset(msg, '\\0', msg_len);\n\n\tif (phook == NULL) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: hook parameter is NULL!\", 
__func__);\n\t\treturn (1);\n\t}\n\n\tif (newval == NULL) {\n\t\tsnprintf(msg, msg_len - 1, \"%s:  hook's user is NULL!\", __func__);\n\t\treturn (1);\n\t}\n\n\tif (strcmp(newval, HOOKSTR_ADMIN) != 0) {\n\t\tif (phook->event & HOOK_EVENT_PERIODIC) {\n\t\t\tsnprintf(msg, msg_len - 1, \"user value of a server periodic hook must be %s\", HOOKSTR_ADMIN);\n\t\t\treturn (1);\n\t\t} else if (strcmp(newval, HOOKSTR_USER) != 0) {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"user value of a hook must be %s,%s\", HOOKSTR_ADMIN,\n\t\t\t\t HOOKSTR_USER);\n\t\t\treturn (1);\n\t\t}\n\t}\n\n\tif (strcmp(newval, HOOKSTR_ADMIN) == 0) {\n\t\tphook->user = HOOK_PBSADMIN;\n\t} else if (strcmp(newval, HOOKSTR_USER) == 0) {\n\n\t\tif (strict && ((phook->event & USER_MOM_EVENTS) == 0)) {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"Can't set hook user value to '%s': hook event must contain at least %s\", HOOKSTR_USER, hook_event_as_string(USER_MOM_EVENTS));\n\t\t\treturn (1);\n\t\t}\n\n\t\tphook->user = HOOK_PBSUSER;\n\t}\n\n\treturn (0);\n}\n\n/**\n *\n * @brief\n *\tInserts the hook 'phook' into  'phook_head' that represents a list\n *\tof hooks of the given 'event'. 
The hook is inserted following the\n *\tphook->order, in ascending format.\n *\n * @param[in]\t  event - the event of the hooks.\n * @param[in/out] phook_head - list of hooks of the given 'event' where 'phook'\n *\t\t\t\twill be inserted.\n * @param[in]\t  phook - hook to insert.\n *\n */\nstatic void\ninsert_hook_sort_order(unsigned int event, pbs_list_head *phook_head, hook *phook)\n{\n\tpbs_list_link *plink_elem, *plink_cur;\n\thook *phook_cur;\n\n\tif ((phook_head == NULL) || (phook == NULL)) {\n\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\"NULL arguments to phook_head and/or phook\");\n\t\treturn;\n\t}\n\n\tif (event == HOOK_EVENT_QUEUEJOB) {\n\t\tplink_elem = &phook->hi_queuejob_hooks;\n\t} else if (event == HOOK_EVENT_POSTQUEUEJOB) {\n\t\tplink_elem = &phook->hi_postqueuejob_hooks;\n\t} else if (event == HOOK_EVENT_MODIFYJOB) {\n\t\tplink_elem = &phook->hi_modifyjob_hooks;\n\t} else if (event == HOOK_EVENT_RESVSUB) {\n\t\tplink_elem = &phook->hi_resvsub_hooks;\n\t} else if (event == HOOK_EVENT_MODIFYRESV) {\n\t\tplink_elem = &phook->hi_modifyresv_hooks;\n\t} else if (event == HOOK_EVENT_MOVEJOB) {\n\t\tplink_elem = &phook->hi_movejob_hooks;\n\t} else if (event == HOOK_EVENT_RUNJOB) {\n\t\tplink_elem = &phook->hi_runjob_hooks;\n\t} else if (event == HOOK_EVENT_JOBOBIT) {\n\t\tplink_elem = &phook->hi_jobobit_hooks;\n\t} else if (event == HOOK_EVENT_MANAGEMENT) {\n\t\tplink_elem = &phook->hi_management_hooks;\n\t} else if (event == HOOK_EVENT_MODIFYVNODE) {\n\t\tplink_elem = &phook->hi_modifyvnode_hooks;\n\t} else if (event == HOOK_EVENT_PROVISION) {\n\t\tplink_elem = &phook->hi_provision_hooks;\n\t} else if (event == HOOK_EVENT_PERIODIC) {\n\t\tplink_elem = &phook->hi_periodic_hooks;\n\t} else if (event == HOOK_EVENT_RESV_CONFIRM) {\n\t\tplink_elem = &phook->hi_resv_confirm_hooks;\n\t} else if (event == HOOK_EVENT_RESV_BEGIN) {\n\t\tplink_elem = &phook->hi_resv_begin_hooks;\n\t} else if (event == HOOK_EVENT_RESV_END) {\n\t\tplink_elem = 
&phook->hi_resv_end_hooks;\n\t} else if (event == HOOK_EVENT_EXECJOB_BEGIN) {\n\t\tplink_elem = &phook->hi_execjob_begin_hooks;\n\t} else if (event == HOOK_EVENT_EXECJOB_PROLOGUE) {\n\t\tplink_elem = &phook->hi_execjob_prologue_hooks;\n\t} else if (event == HOOK_EVENT_EXECJOB_EPILOGUE) {\n\t\tplink_elem = &phook->hi_execjob_epilogue_hooks;\n\t} else if (event == HOOK_EVENT_EXECJOB_PRETERM) {\n\t\tplink_elem = &phook->hi_execjob_preterm_hooks;\n\t} else if (event == HOOK_EVENT_EXECJOB_LAUNCH) {\n\t\tplink_elem = &phook->hi_execjob_launch_hooks;\n\t} else if (event == HOOK_EVENT_EXECJOB_END) {\n\t\tplink_elem = &phook->hi_execjob_end_hooks;\n\t} else if (event == HOOK_EVENT_EXECHOST_PERIODIC) {\n\t\tplink_elem = &phook->hi_exechost_periodic_hooks;\n\t} else if (event == HOOK_EVENT_EXECHOST_STARTUP) {\n\t\tplink_elem = &phook->hi_exechost_startup_hooks;\n\t} else if (event == HOOK_EVENT_EXECJOB_ATTACH) {\n\t\tplink_elem = &phook->hi_execjob_attach_hooks;\n\t} else if (event == HOOK_EVENT_EXECJOB_RESIZE) {\n\t\tplink_elem = &phook->hi_execjob_resize_hooks;\n\t} else if (event == HOOK_EVENT_EXECJOB_ABORT) {\n\t\tplink_elem = &phook->hi_execjob_abort_hooks;\n\t} else if (event == HOOK_EVENT_EXECJOB_POSTSUSPEND) {\n\t\tplink_elem = &phook->hi_execjob_postsuspend_hooks;\n\t} else if (event == HOOK_EVENT_EXECJOB_PRERESUME) {\n\t\tplink_elem = &phook->hi_execjob_preresume_hooks;\n\t} else {\n\t\t/* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"encountered a bad event\");\n\t\treturn;\n\t}\n\n\tif (is_linked(phook_head, plink_elem)) {\n\t\treturn;\n\t}\n\n\tphook_cur = (hook *) GET_NEXT(*phook_head);\n\tplink_cur = phook_head;\n\twhile (phook_cur) {\n\n\t\tif (event == HOOK_EVENT_QUEUEJOB) {\n\t\t\tplink_cur = &phook_cur->hi_queuejob_hooks;\n\t\t} else if (event == HOOK_EVENT_POSTQUEUEJOB) {\n\t\t\tplink_cur = &phook_cur->hi_postqueuejob_hooks;\n\t\t} else if (event == HOOK_EVENT_MODIFYJOB) {\n\t\t\tplink_cur = &phook_cur->hi_modifyjob_hooks;\n\t\t} else if 
(event == HOOK_EVENT_RESVSUB) {\n\t\t\tplink_cur = &phook_cur->hi_resvsub_hooks;\n\t\t} else if (event == HOOK_EVENT_MODIFYRESV) {\n\t\t\tplink_cur = &phook_cur->hi_modifyresv_hooks;\n\t\t} else if (event == HOOK_EVENT_MOVEJOB) {\n\t\t\tplink_cur = &phook_cur->hi_movejob_hooks;\n\t\t} else if (event == HOOK_EVENT_RUNJOB) {\n\t\t\tplink_cur = &phook_cur->hi_runjob_hooks;\n\t\t} else if (event == HOOK_EVENT_JOBOBIT) {\n\t\t\tplink_cur = &phook_cur->hi_jobobit_hooks;\n\t\t} else if (event == HOOK_EVENT_MANAGEMENT) {\n\t\t\tplink_cur = &phook_cur->hi_management_hooks;\n\t\t} else if (event == HOOK_EVENT_MODIFYVNODE) {\n\t\t\tplink_cur = &phook_cur->hi_modifyvnode_hooks;\n\t\t} else if (event == HOOK_EVENT_PROVISION) {\n\t\t\tplink_cur = &phook_cur->hi_provision_hooks;\n\t\t} else if (event == HOOK_EVENT_PERIODIC) {\n\t\t\tplink_cur = &phook_cur->hi_periodic_hooks;\n\t\t} else if (event == HOOK_EVENT_RESV_CONFIRM) {\n\t\t\tplink_cur = &phook_cur->hi_resv_confirm_hooks;\n\t\t} else if (event == HOOK_EVENT_RESV_BEGIN) {\n\t\t\tplink_cur = &phook_cur->hi_resv_begin_hooks;\n\t\t} else if (event == HOOK_EVENT_RESV_END) {\n\t\t\tplink_cur = &phook_cur->hi_resv_end_hooks;\n\t\t} else if (event == HOOK_EVENT_EXECJOB_BEGIN) {\n\t\t\tplink_cur = &phook_cur->hi_execjob_begin_hooks;\n\t\t} else if (event == HOOK_EVENT_EXECJOB_PROLOGUE) {\n\t\t\tplink_cur = &phook_cur->hi_execjob_prologue_hooks;\n\t\t} else if (event == HOOK_EVENT_EXECJOB_EPILOGUE) {\n\t\t\tplink_cur = &phook_cur->hi_execjob_epilogue_hooks;\n\t\t} else if (event == HOOK_EVENT_EXECJOB_PRETERM) {\n\t\t\tplink_cur = &phook_cur->hi_execjob_preterm_hooks;\n\t\t} else if (event == HOOK_EVENT_EXECJOB_LAUNCH) {\n\t\t\tplink_cur = &phook_cur->hi_execjob_launch_hooks;\n\t\t} else if (event == HOOK_EVENT_EXECJOB_END) {\n\t\t\tplink_cur = &phook_cur->hi_execjob_end_hooks;\n\t\t} else if (event == HOOK_EVENT_EXECHOST_PERIODIC) {\n\t\t\tplink_cur = &phook_cur->hi_exechost_periodic_hooks;\n\t\t} else if (event == 
HOOK_EVENT_EXECHOST_STARTUP) {\n\t\t\tplink_cur = &phook_cur->hi_exechost_startup_hooks;\n\t\t} else if (event == HOOK_EVENT_EXECJOB_ATTACH) {\n\t\t\tplink_cur = &phook_cur->hi_execjob_attach_hooks;\n\t\t} else if (event == HOOK_EVENT_EXECJOB_RESIZE) {\n\t\t\tplink_cur = &phook_cur->hi_execjob_resize_hooks;\n\t\t} else if (event == HOOK_EVENT_EXECJOB_ABORT) {\n\t\t\tplink_cur = &phook_cur->hi_execjob_abort_hooks;\n\t\t} else if (event == HOOK_EVENT_EXECJOB_POSTSUSPEND) {\n\t\t\tplink_cur = &phook_cur->hi_execjob_postsuspend_hooks;\n\t\t} else if (event == HOOK_EVENT_EXECJOB_PRERESUME) {\n\t\t\tplink_cur = &phook_cur->hi_execjob_preresume_hooks;\n\t\t} else {\n\t\t\t/* should not happen */\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"encountered a bad event\");\n\t\t\treturn;\n\t\t}\n\n\t\tif (phook_cur->order > phook->order) {\n\t\t\tbreak;\n\t\t}\n\n\t\tphook_cur = (hook *) GET_NEXT(*plink_cur);\n\t}\n\n\tif (phook_cur) {\n\t\t/* link before 'current' hook in server's list */\n\t\tinsert_link(plink_cur, plink_elem, phook, LINK_INSET_BEFORE);\n\t} else {\n\t\t/* attach at the head (empty list) or at the end of the list */\n\t\tinsert_link(plink_cur, plink_elem, phook, LINK_INSET_AFTER);\n\t}\n}\n\n/**\n *\n * @brief\n *\tSets the hook 'phook's fail_action attribute to a value\n *\trepresenting 'newval'.\n *\n * @param[in/out]  phook - hook being operated on.\n * @param[in]\t   newval - new fail_action value\n * @param[in/out]  msg - error message buffer\n * @param[in]\t   msg_len - length of 'msg' buffer.\n * @param[in]\t   strict - if 'strict' is set to 1, then a non-\"none\" value\n * \t\t\t is allowed only if the hook's event includes\n * \t\t\t \"execjob_begin\" or \"exechost_periodic\".\n *\n * @return int\n * @retval 0 \tfor success\n * @retval 1 \tfor failure with 'msg' of size 'msg_len' filled in.\n */\nint\nset_hook_fail_action(hook *phook, char *newval, char *msg,\n\t\t     size_t msg_len, int strict)\n{\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, 
__func__, \"'msg' buffer is NULL\");\n\t\treturn (1);\n\t}\n\tmemset(msg, '\\0', msg_len);\n\n\tif (phook == NULL) {\n\t\tsnprintf(msg, msg_len - 1, \"%s: hook parameter is NULL!\",\n\t\t\t __func__);\n\t\treturn (1);\n\t}\n\n\tif (newval == NULL) {\n\t\tsnprintf(msg, msg_len - 1, \"%s: hook's fail_action is NULL!\", __func__);\n\t\treturn (1);\n\t}\n\n\tif (phook->fail_action != 0) {\n\t\tphook->fail_action = 0;\n\t}\n\n\treturn add_hook_fail_action(phook, newval, msg, msg_len, strict);\n}\n\n/**\n * @brief\n *\tAdds to hook 'phook's fail_action attribute (bitmask) a value\n *\trepresenting 'newval'.\n * @note\n * \tThe valid 'newval' values are:\n *\t\tHOOKSTR_FAIL_ACTION_NONE\t   (\"none\")\n *\t\tHOOKSTR_FAIL_ACTION_OFFLINE_VNODES (\"offline_vnodes\")\n *\t\tHOOKSTR_FAIL_ACTION_CLEAR_VNODES   (\"clear_vnodes_upon_recovery\")\n *\t\tHOOKSTR_FAIL_ACTION_SCHEDULER_RESTART_CYCLE\n *\t\t\t\t\t\t(\"scheduler_restart_cycle\")\n *\n * @param[in/out]\tphook - hook being operated on.\n * @param[in]\t\tnewval - new hook fail_action value\n * @param[in/out]\tmsg - error message buffer\n * @param[in]\t\tmsg_len - size of message buffer\n * @param[in]\t  \tstrict - if 'strict' is set to 1, then a non-\"none\" value\n * \t\t\t is allowed only if the hook's event includes\n * \t\t\t \"execjob_begin\" or \"exechost_periodic\".\n *\n * @return int\n * @retval 0\tfor success\n * @retval 1\tfor failure with 'msg' of size 'msg_len' filled in.\n */\nint\nadd_hook_fail_action(hook *phook, char *newval, char *msg, size_t msg_len,\n\t\t     int strict)\n{\n\tchar *val, *newval_dup;\n\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"'msg' buffer is NULL\");\n\t\treturn (1);\n\t}\n\tmemset(msg, '\\0', msg_len);\n\n\tif (phook == NULL) {\n\t\tsnprintf(msg, msg_len - 1, \"%s: hook parameter is NULL!\",\n\t\t\t __func__);\n\t\treturn (1);\n\t}\n\n\tif (newval == NULL) {\n\t\tsnprintf(msg, msg_len - 1, \"%s: hook's fail_action is NULL!\", __func__);\n\t\treturn 
(1);\n\t}\n\n\tif ((newval_dup = strdup(newval)) == NULL) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: failed to malloc newval=%s!\", __func__, newval);\n\t\treturn (1);\n\t}\n\tval = strtok(newval_dup, \",\");\n\twhile (val) {\n\t\tif (strcmp(val, HOOKSTR_FAIL_ACTION_NONE) == 0) {\n\t\t\t/* can't combine \"none\" with other fail_action values */\n\t\t\tif ((phook->fail_action != HOOK_FAIL_ACTION_NONE) &&\n\t\t\t    (phook->fail_action != 0))\n\t\t\t\tgoto err;\n\t\t\tphook->fail_action |= HOOK_FAIL_ACTION_NONE;\n\t\t} else if (strcmp(val, HOOKSTR_FAIL_ACTION_OFFLINE_VNODES) == 0) {\n\t\t\t/* can't combine \"none\" with other fail_action values */\n\t\t\tif (phook->fail_action & HOOK_FAIL_ACTION_NONE)\n\t\t\t\tgoto err;\n\t\t\t/* must contain one of the events in FAIL_ACTION_EVENTS */\n\t\t\t/* in order to set an \"offline_vnodes\" fail_action value. */\n\t\t\tif (strict &&\n\t\t\t    ((phook->event & FAIL_ACTION_EVENTS) == 0)) {\n\t\t\t\tif (msg[0] == '\\0') {\n\t\t\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t\t\t \"Can't set hook fail_action value to '%s': \"\n\t\t\t\t\t\t \"hook event must contain at least one of %s\",\n\t\t\t\t\t\t val, HOOKSTR_FAIL_ACTION_EVENTS);\n\t\t\t\t}\n\t\t\t\tfree(newval_dup);\n\t\t\t\treturn (1);\n\t\t\t}\n\n\t\t\tphook->fail_action |= HOOK_FAIL_ACTION_OFFLINE_VNODES;\n\t\t} else if (strcmp(val, HOOKSTR_FAIL_ACTION_CLEAR_VNODES) == 0) {\n\t\t\t/* can't combine \"none\" with other fail_action values */\n\t\t\tif (phook->fail_action & HOOK_FAIL_ACTION_NONE)\n\t\t\t\tgoto err;\n\t\t\t/* must be an exechost_startup event value */\n\t\t\t/* in order to set a \"clear_vnodes_upon_recovery\" fail_action value. 
*/\n\t\t\tif (strict &&\n\t\t\t    ((phook->event & HOOK_EVENT_EXECHOST_STARTUP) == 0)) {\n\t\t\t\tif (msg[0] == '\\0') {\n\t\t\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t\t\t \"Can't set hook fail_action value to '%s': \"\n\t\t\t\t\t\t \"hook event must contain at least an %s value\",\n\t\t\t\t\t\t val, HOOKSTR_EXECHOST_STARTUP);\n\t\t\t\t}\n\t\t\t\tfree(newval_dup);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t\tphook->fail_action |= HOOK_FAIL_ACTION_CLEAR_VNODES;\n\t\t} else if (strcmp(val,\n\t\t\t\t  HOOKSTR_FAIL_ACTION_SCHEDULER_RESTART_CYCLE) == 0) {\n\t\t\t/* can't combine \"none\" with other fail_action values */\n\t\t\tif (phook->fail_action & HOOK_FAIL_ACTION_NONE)\n\t\t\t\tgoto err;\n\t\t\t/* must contain one of the events in HOOK_EVENT_EXECJOB_BEGIN, HOOK_EVENT_EXECJOB_PROLOGUE */\n\t\t\t/* in order to set a \"scheduler_restart_cycle\" fail_action value. */\n\t\t\tif (strict &&\n\t\t\t    ((phook->event & HOOK_EVENT_EXECJOB_BEGIN) == 0) &&\n\t\t\t    ((phook->event & HOOK_EVENT_EXECJOB_PROLOGUE) == 0)) {\n\t\t\t\tif (msg[0] == '\\0') {\n\t\t\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t\t\t \"Can't set hook fail_action value to '%s': \"\n\t\t\t\t\t\t \"hook event must contain at least one of %s, %s\",\n\t\t\t\t\t\t val, HOOKSTR_EXECJOB_BEGIN, HOOKSTR_EXECJOB_PROLOGUE);\n\t\t\t\t}\n\t\t\t\tfree(newval_dup);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t\tphook->fail_action |= HOOK_FAIL_ACTION_SCHEDULER_RESTART_CYCLE;\n\t\t} else {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"fail_action value of a hook \"\n\t\t\t\t \"must be \\\"%s\\\" or one or more of \\\"%s\\\",\"\n\t\t\t\t \"\\\"%s\\\", \\\"%s\\\"\",\n\t\t\t\t HOOKSTR_FAIL_ACTION_NONE,\n\t\t\t\t HOOKSTR_FAIL_ACTION_OFFLINE_VNODES,\n\t\t\t\t HOOKSTR_FAIL_ACTION_CLEAR_VNODES,\n\t\t\t\t HOOKSTR_FAIL_ACTION_SCHEDULER_RESTART_CYCLE);\n\t\t\tfree(newval_dup);\n\t\t\treturn (1);\n\t\t}\n\n\t\tval = strtok(NULL, \",\");\n\t}\n\n\tfree(newval_dup);\n\treturn (0);\nerr:\n\tif (msg[0] == '\\0') {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t 
\"fail_action value of a hook \"\n\t\t\t \"must be \\\"%s\\\" or one or more of \\\"%s\\\",\"\n\t\t\t \"\\\"%s\\\", \\\"%s\\\"\",\n\t\t\t HOOKSTR_FAIL_ACTION_NONE,\n\t\t\t HOOKSTR_FAIL_ACTION_OFFLINE_VNODES,\n\t\t\t HOOKSTR_FAIL_ACTION_CLEAR_VNODES,\n\t\t\t HOOKSTR_FAIL_ACTION_SCHEDULER_RESTART_CYCLE);\n\t}\n\tfree(newval_dup);\n\treturn (1);\n}\n\n/**\n * @brief\n *\tRemoves from hook 'phook's fail_action attribute (bitmask) the value\n *\trepresenting 'newval'.\n *\n * @param[in/out]\tphook - hook being operated on.\n * @param[in]\t\tnewval - the hook fail_action value to delete\n * @param[in/out]\tmsg - error message buffer\n * @param[in]\t\tmsg_len - size of 'msg' buffer.\n *\n * @return int\n * @retval 0 for success\n * @retval 1 for failure with 'msg' of size 'msg_len' filled in.\n */\nint\ndel_hook_fail_action(hook *phook, char *newval, char *msg, size_t msg_len)\n{\n\tchar *val, *newval_dup;\n\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"'msg' buffer is NULL\");\n\t\treturn (1);\n\t}\n\tmemset(msg, '\\0', msg_len);\n\n\tif (phook == NULL) {\n\t\tsnprintf(msg, msg_len - 1, \"%s: hook parameter is NULL!\",\n\t\t\t __func__);\n\t\treturn (1);\n\t}\n\n\tif (newval == NULL) {\n\t\tsnprintf(msg, msg_len - 1, \"%s: hook's fail_action is NULL!\", __func__);\n\t\treturn (1);\n\t}\n\n\tif ((newval_dup = strdup(newval)) == NULL) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: failed to malloc newval=%s!\", __func__, newval);\n\t\treturn (1);\n\t}\n\n\tval = strtok(newval_dup, \",\");\n\twhile (val) {\n\t\tif (strcmp(val, HOOKSTR_FAIL_ACTION_NONE) == 0) {\n\t\t\tphook->fail_action &= ~HOOK_FAIL_ACTION_NONE;\n\t\t\tif (phook->fail_action == 0) {\n\t\t\t\tphook->fail_action = HOOK_FAIL_ACTION_NONE;\n\t\t\t}\n\t\t} else if (strcmp(val, HOOKSTR_FAIL_ACTION_OFFLINE_VNODES) == 0) {\n\t\t\tphook->fail_action &= ~HOOK_FAIL_ACTION_OFFLINE_VNODES;\n\t\t} else if (strcmp(val, HOOKSTR_FAIL_ACTION_CLEAR_VNODES) == 0) 
{\n\t\t\tphook->fail_action &= ~HOOK_FAIL_ACTION_CLEAR_VNODES;\n\t\t} else if (strcmp(val,\n\t\t\t\t  HOOKSTR_FAIL_ACTION_SCHEDULER_RESTART_CYCLE) == 0) {\n\t\t\tphook->fail_action &= ~HOOK_FAIL_ACTION_SCHEDULER_RESTART_CYCLE;\n\t\t} else if (strcmp(val, HOOKSTR_NONE) != 0) {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"fail_action value of a hook \"\n\t\t\t\t \"must be \\\"%s\\\" or one or more of \\\"%s\\\",\"\n\t\t\t\t \"\\\"%s\\\", \\\"%s\\\"\",\n\t\t\t\t HOOKSTR_FAIL_ACTION_NONE,\n\t\t\t\t HOOKSTR_FAIL_ACTION_OFFLINE_VNODES,\n\t\t\t\t HOOKSTR_FAIL_ACTION_CLEAR_VNODES,\n\t\t\t\t HOOKSTR_FAIL_ACTION_SCHEDULER_RESTART_CYCLE);\n\t\t\tfree(newval_dup);\n\t\t\treturn (1);\n\t\t}\n\n\t\tval = strtok(NULL, \",\");\n\t}\n\n\tfree(newval_dup);\n\treturn (0);\n}\n\n/**\n *\n * @brief\n *\tSets the hook 'phook's event attribute to a value\n *\trepresenting 'newval'.\n *\n * @param[in/out]  phook - hook being operated on.\n * @param[in]\t   newval - new event value\n * @param[in/out]  msg - error message buffer\n * @param[in]\t   msg_len - length of 'msg' buffer.\n *\n * @return int\n * @retval 0 \tfor success\n * @retval 1 \tfor failure with 'msg' of size 'msg_len' filled in.\n */\nint\nset_hook_event(hook *phook, char *newval, char *msg, size_t msg_len)\n{\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"'msg' buffer is NULL\");\n\t\treturn (1);\n\t}\n\tmemset(msg, '\\0', msg_len);\n\n\tif (phook == NULL) {\n\t\tsnprintf(msg, msg_len - 1, \"%s: hook parameter is NULL!\",\n\t\t\t __func__);\n\t\treturn (1);\n\t}\n\n\tif (newval == NULL) {\n\t\tsnprintf(msg, msg_len - 1, \"%s: hook's event is NULL!\", __func__);\n\t\treturn (1);\n\t}\n\n\tif (phook->event != 0) 
{\n\n\t\tdelete_link(&phook->hi_queuejob_hooks);\n\t\tdelete_link(&phook->hi_postqueuejob_hooks);\n\t\tdelete_link(&phook->hi_modifyjob_hooks);\n\t\tdelete_link(&phook->hi_modifyvnode_hooks);\n\t\tdelete_link(&phook->hi_management_hooks);\n\t\tdelete_link(&phook->hi_resvsub_hooks);\n\t\tdelete_link(&phook->hi_modifyresv_hooks);\n\t\tdelete_link(&phook->hi_movejob_hooks);\n\t\tdelete_link(&phook->hi_runjob_hooks);\n\t\tdelete_link(&phook->hi_provision_hooks);\n\t\tdelete_link(&phook->hi_periodic_hooks);\n\t\tdelete_link(&phook->hi_resv_confirm_hooks);\n\t\tdelete_link(&phook->hi_resv_begin_hooks);\n\t\tdelete_link(&phook->hi_resv_end_hooks);\n\t\tdelete_link(&phook->hi_execjob_begin_hooks);\n\t\tdelete_link(&phook->hi_execjob_prologue_hooks);\n\t\tdelete_link(&phook->hi_execjob_epilogue_hooks);\n\t\tdelete_link(&phook->hi_execjob_preterm_hooks);\n\t\tdelete_link(&phook->hi_execjob_launch_hooks);\n\t\tdelete_link(&phook->hi_execjob_end_hooks);\n\t\tdelete_link(&phook->hi_exechost_periodic_hooks);\n\t\tdelete_link(&phook->hi_exechost_startup_hooks);\n\t\tdelete_link(&phook->hi_execjob_attach_hooks);\n\t\tdelete_link(&phook->hi_execjob_resize_hooks);\n\t\tdelete_link(&phook->hi_execjob_abort_hooks);\n\t\tdelete_link(&phook->hi_execjob_postsuspend_hooks);\n\t\tdelete_link(&phook->hi_execjob_preresume_hooks);\n\t\tphook->event = 0;\n\t}\n\n\treturn add_hook_event(phook, newval, msg, msg_len);\n}\n\n/**\n * @brief\n *\tAdds to hook 'phook's event attribute (bitmask) a value\n *\trepresenting 'newval'. 
Various hook lists where 'phook'\n *\tappears are automatically updated to reflect the changed\n *\tevent value.\n *\n * @param[in/out]\tphook - hook being operated on.\n * @param[in]\t\tnewval - new hook event value\n * @param[in/out]\tmsg - error message buffer\n * @param[in]\t\tmsg_len - size of message buffer\n *\n * @return int\n * @retval 0\tfor success\n * @retval 1\tfor failure with 'msg' of size 'msg_len' filled in.\n */\nint\nadd_hook_event(hook *phook, char *newval, char *msg, size_t msg_len)\n{\n\tchar *val, *newval_dup;\n\thook *anotherhook;\n\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"'msg' buffer is NULL\");\n\t\treturn (1);\n\t}\n\tmemset(msg, '\\0', msg_len);\n\n\tif (phook == NULL) {\n\t\tsnprintf(msg, msg_len - 1, \"%s: hook parameter is NULL!\",\n\t\t\t __func__);\n\t\treturn (1);\n\t}\n\n\tif (newval == NULL) {\n\t\tsnprintf(msg, msg_len - 1, \"%s: hook's event is NULL!\", __func__);\n\t\treturn (1);\n\t}\n\n\tif ((newval_dup = strdup(newval)) == NULL) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: failed to malloc newval=%s!\", __func__, newval);\n\t\treturn (1);\n\t}\n\tval = strtok(newval_dup, \",\");\n\twhile (val) {\n\t\tif (strcmp(val, HOOKSTR_QUEUEJOB) == 0) {\n\t\t\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\t\t\tgoto err;\n\t\t\tdelete_link(&phook->hi_queuejob_hooks);\n\t\t\tphook->event |= HOOK_EVENT_QUEUEJOB;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_QUEUEJOB,\n\t\t\t\t\t       &svr_queuejob_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_POSTQUEUEJOB) == 0) {\n\t\t\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\t\t\tgoto err;\n\t\t\tdelete_link(&phook->hi_postqueuejob_hooks);\n\t\t\tphook->event |= HOOK_EVENT_POSTQUEUEJOB;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_POSTQUEUEJOB,\n\t\t\t\t\t       &svr_postqueuejob_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_MODIFYJOB) == 0) {\n\t\t\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\t\t\tgoto 
err;\n\t\t\tdelete_link(&phook->hi_modifyjob_hooks);\n\t\t\tphook->event |= HOOK_EVENT_MODIFYJOB;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_MODIFYJOB,\n\t\t\t\t\t       &svr_modifyjob_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_RESVSUB) == 0) {\n\t\t\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\t\t\tgoto err;\n\t\t\tdelete_link(&phook->hi_resvsub_hooks);\n\t\t\tphook->event |= HOOK_EVENT_RESVSUB;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_RESVSUB,\n\t\t\t\t\t       &svr_resvsub_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_MODIFYRESV) == 0) {\n\t\t\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\t\t\tgoto err;\n\t\t\tdelete_link(&phook->hi_modifyresv_hooks);\n\t\t\tphook->event |= HOOK_EVENT_MODIFYRESV;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_MODIFYRESV,\n\t\t\t\t\t       &svr_modifyresv_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_MOVEJOB) == 0) {\n\t\t\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\t\t\tgoto err;\n\t\t\tdelete_link(&phook->hi_movejob_hooks);\n\t\t\tphook->event |= HOOK_EVENT_MOVEJOB;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_MOVEJOB,\n\t\t\t\t\t       &svr_movejob_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_RUNJOB) == 0) {\n\t\t\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\t\t\tgoto err;\n\t\t\tdelete_link(&phook->hi_runjob_hooks);\n\t\t\tphook->event |= HOOK_EVENT_RUNJOB;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_RUNJOB,\n\t\t\t\t\t       &svr_runjob_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_JOBOBIT) == 0) {\n\t\t\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\t\t\tgoto err;\n\t\t\tdelete_link(&phook->hi_jobobit_hooks);\n\t\t\tphook->event |= HOOK_EVENT_JOBOBIT;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_JOBOBIT,\n\t\t\t\t\t       &svr_jobobit_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_MANAGEMENT) == 0) {\n\t\t\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\t\t\tgoto err;\n\t\t\tdelete_link(&phook->hi_management_hooks);\n\t\t\tphook->event |= 
HOOK_EVENT_MANAGEMENT;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_MANAGEMENT,\n\t\t\t\t\t       &svr_management_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_MODIFYVNODE) == 0) {\n\t\t\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\t\t\tgoto err;\n\t\t\tdelete_link(&phook->hi_modifyvnode_hooks);\n\t\t\tphook->event |= HOOK_EVENT_MODIFYVNODE;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_MODIFYVNODE,\n\t\t\t\t\t       &svr_modifyvnode_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_PROVISION) == 0) {\n\t\t\tif (phook->event & ~HOOK_EVENT_PROVISION)\n\t\t\t\tgoto err;\n\t\t\tanotherhook = find_hookbyevent(HOOK_EVENT_PROVISION);\n\t\t\tif (anotherhook) {\n\t\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t\t \"Another hook '%s' already has \"\n\t\t\t\t\t \"event 'provision', only one 'provision' \"\n\t\t\t\t\t \"hook allowed\",\n\t\t\t\t\t anotherhook->hook_name);\n\t\t\t\tfree(newval_dup);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t\tdelete_link(&phook->hi_provision_hooks);\n\t\t\tphook->event |= HOOK_EVENT_PROVISION;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_PROVISION,\n\t\t\t\t\t       &svr_provision_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_PERIODIC) == 0) {\n\t\t\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\t\t\tgoto err;\n\t\t\tdelete_link(&phook->hi_periodic_hooks);\n\t\t\tphook->event |= HOOK_EVENT_PERIODIC;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_PERIODIC,\n\t\t\t\t\t       &svr_periodic_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_RESV_CONFIRM) == 0) {\n\t\t\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\t\t\tgoto err;\n\t\t\tdelete_link(&phook->hi_resv_confirm_hooks);\n\t\t\tphook->event |= HOOK_EVENT_RESV_CONFIRM;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_RESV_CONFIRM,\n\t\t\t\t\t       &svr_resv_confirm_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_RESV_BEGIN) == 0) {\n\t\t\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\t\t\tgoto err;\n\t\t\tdelete_link(&phook->hi_resv_begin_hooks);\n\t\t\tphook->event |= 
HOOK_EVENT_RESV_BEGIN;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_RESV_BEGIN,\n\t\t\t\t\t       &svr_resv_begin_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_RESV_END) == 0) {\n\t\t\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\t\t\tgoto err;\n\t\t\tdelete_link(&phook->hi_resv_end_hooks);\n\t\t\tphook->event |= HOOK_EVENT_RESV_END;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_RESV_END,\n\t\t\t\t\t       &svr_resv_end_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_EXECJOB_BEGIN) == 0) {\n\t\t\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\t\t\tgoto err;\n\t\t\tdelete_link(&phook->hi_execjob_begin_hooks);\n\t\t\tphook->event |= HOOK_EVENT_EXECJOB_BEGIN;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_BEGIN,\n\t\t\t\t\t       &svr_execjob_begin_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_EXECJOB_PROLOGUE) == 0) {\n\t\t\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\t\t\tgoto err;\n\t\t\tdelete_link(&phook->hi_execjob_prologue_hooks);\n\t\t\tphook->event |= HOOK_EVENT_EXECJOB_PROLOGUE;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_PROLOGUE,\n\t\t\t\t\t       &svr_execjob_prologue_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_EXECJOB_EPILOGUE) == 0) {\n\t\t\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\t\t\tgoto err;\n\t\t\tdelete_link(&phook->hi_execjob_epilogue_hooks);\n\t\t\tphook->event |= HOOK_EVENT_EXECJOB_EPILOGUE;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_EPILOGUE,\n\t\t\t\t\t       &svr_execjob_epilogue_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_EXECJOB_END) == 0) {\n\t\t\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\t\t\tgoto err;\n\t\t\tdelete_link(&phook->hi_execjob_end_hooks);\n\t\t\tphook->event |= HOOK_EVENT_EXECJOB_END;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_END,\n\t\t\t\t\t       &svr_execjob_end_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_EXECJOB_PRETERM) == 0) {\n\t\t\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\t\t\tgoto err;\n\t\t\tdelete_link(&phook->hi_execjob_preterm_hooks);\n\t\t\tphook->event |= 
HOOK_EVENT_EXECJOB_PRETERM;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_PRETERM,\n\t\t\t\t\t       &svr_execjob_preterm_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_EXECJOB_LAUNCH) == 0) {\n\t\t\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\t\t\tgoto err;\n\t\t\tdelete_link(&phook->hi_execjob_launch_hooks);\n\t\t\tphook->event |= HOOK_EVENT_EXECJOB_LAUNCH;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_LAUNCH,\n\t\t\t\t\t       &svr_execjob_launch_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_EXECHOST_PERIODIC) == 0) {\n\t\t\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\t\t\tgoto err;\n\t\t\tdelete_link(&phook->hi_exechost_periodic_hooks);\n\t\t\tphook->event |= HOOK_EVENT_EXECHOST_PERIODIC;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_EXECHOST_PERIODIC,\n\t\t\t\t\t       &svr_exechost_periodic_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_EXECHOST_STARTUP) == 0) {\n\t\t\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\t\t\tgoto err;\n\t\t\tdelete_link(&phook->hi_exechost_startup_hooks);\n\t\t\tphook->event |= HOOK_EVENT_EXECHOST_STARTUP;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_EXECHOST_STARTUP,\n\t\t\t\t\t       &svr_exechost_startup_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_EXECJOB_ATTACH) == 0) {\n\t\t\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\t\t\tgoto err;\n\t\t\tdelete_link(&phook->hi_execjob_attach_hooks);\n\t\t\tphook->event |= HOOK_EVENT_EXECJOB_ATTACH;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_ATTACH,\n\t\t\t\t\t       &svr_execjob_attach_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_EXECJOB_RESIZE) == 0) {\n\t\t\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\t\t\tgoto err;\n\t\t\tdelete_link(&phook->hi_execjob_resize_hooks);\n\t\t\tphook->event |= HOOK_EVENT_EXECJOB_RESIZE;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_RESIZE,\n\t\t\t\t\t       &svr_execjob_resize_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_EXECJOB_ABORT) == 0) {\n\t\t\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\t\t\tgoto 
err;\n\t\t\tdelete_link(&phook->hi_execjob_abort_hooks);\n\t\t\tphook->event |= HOOK_EVENT_EXECJOB_ABORT;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_ABORT,\n\t\t\t\t\t       &svr_execjob_abort_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_EXECJOB_POSTSUSPEND) == 0) {\n\t\t\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\t\t\tgoto err;\n\t\t\tdelete_link(&phook->hi_execjob_postsuspend_hooks);\n\t\t\tphook->event |= HOOK_EVENT_EXECJOB_POSTSUSPEND;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_POSTSUSPEND,\n\t\t\t\t\t       &svr_execjob_postsuspend_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_EXECJOB_PRERESUME) == 0) {\n\t\t\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\t\t\tgoto err;\n\t\t\tdelete_link(&phook->hi_execjob_preresume_hooks);\n\t\t\tphook->event |= HOOK_EVENT_EXECJOB_PRERESUME;\n\t\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_PRERESUME,\n\t\t\t\t\t       &svr_execjob_preresume_hooks, phook);\n\t\t} else if (strcmp(val, HOOKSTR_NONE) != 0) {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"invalid argument (%s) to event. 
\"\n\t\t\t\t \"Should be one or more of: %s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,\"\n\t\t\t\t \"%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s \"\n\t\t\t\t \"or %s for no event\",\n\t\t\t\t newval, HOOKSTR_QUEUEJOB, HOOKSTR_POSTQUEUEJOB, HOOKSTR_MODIFYJOB, HOOKSTR_MODIFYVNODE, HOOKSTR_MANAGEMENT,\n\t\t\t\t HOOKSTR_RESVSUB, HOOKSTR_MODIFYRESV, HOOKSTR_MOVEJOB, HOOKSTR_JOBOBIT,\n\t\t\t\t HOOKSTR_RUNJOB, HOOKSTR_PROVISION, HOOKSTR_PERIODIC, HOOKSTR_RESV_CONFIRM,\n\t\t\t\t HOOKSTR_RESV_BEGIN, HOOKSTR_RESV_END, HOOKSTR_EXECJOB_BEGIN,\n\t\t\t\t HOOKSTR_EXECJOB_PROLOGUE, HOOKSTR_EXECJOB_EPILOGUE, HOOKSTR_EXECJOB_PRETERM,\n\t\t\t\t HOOKSTR_EXECJOB_END, HOOKSTR_EXECHOST_PERIODIC, HOOKSTR_EXECJOB_LAUNCH,\n\t\t\t\t HOOKSTR_EXECHOST_STARTUP, HOOKSTR_EXECJOB_ATTACH, HOOKSTR_EXECJOB_RESIZE, HOOKSTR_EXECJOB_ABORT, HOOKSTR_EXECJOB_POSTSUSPEND, HOOKSTR_EXECJOB_PRERESUME, HOOKSTR_NONE);\n\t\t\tfree(newval_dup);\n\t\t\treturn (1);\n\t\t}\n\n\t\tval = strtok(NULL, \",\");\n\t}\n\n\tfree(newval_dup);\n\treturn (0);\nerr:\n\tif (msg[0] == '\\0') {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"Event provision is exclusive with other events, \"\n\t\t\t \"and only one provision hook allowed\");\n\t}\n\tfree(newval_dup);\n\treturn (1);\n}\n\n/**\n * @brief\n *\tRemoves from hook 'phook's event attribute (bitmask) the value\n *\trepresenting 'newval'. 
Various hooks lists where 'phook' appears\n *\tare automatically updated to reflect the changed event value.\n *\n * @param[in/out]\tphook - hook being operated on.\n * @param[in]\t\tnewval - the hook event value to delete\n * @param[in/out]\tmsg - error message buffer\n * @param[in]\t\tmsg_len - size of 'msg' buffer.\n *\n * @return int\n * @retval 0 for success\n * @retval 1 for failure with 'msg' of size 'msg_len' filled in.\n */\nint\ndel_hook_event(hook *phook, char *newval, char *msg, size_t msg_len)\n{\n\tchar *val, *newval_dup;\n\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"'msg' buffer is NULL\");\n\t\treturn (1);\n\t}\n\tmemset(msg, '\\0', msg_len);\n\n\tif (phook == NULL) {\n\t\tsnprintf(msg, msg_len - 1, \"%s: hook parameter is NULL!\",\n\t\t\t __func__);\n\t\treturn (1);\n\t}\n\n\tif (newval == NULL) {\n\t\tsnprintf(msg, msg_len - 1, \"%s: hook's event is NULL!\", __func__);\n\t\treturn (1);\n\t}\n\n\tif ((newval_dup = strdup(newval)) == NULL) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: failed to malloc newval=%s!\", __func__, newval);\n\t\treturn (1);\n\t}\n\n\tval = strtok(newval_dup, \",\");\n\twhile (val) {\n\t\tif (strcmp(val, HOOKSTR_QUEUEJOB) == 0) {\n\t\t\tdelete_link(&phook->hi_queuejob_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_QUEUEJOB;\n\t\t} else if (strcmp(val, HOOKSTR_POSTQUEUEJOB) == 0) {\n\t\t\tdelete_link(&phook->hi_postqueuejob_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_POSTQUEUEJOB;\n\t\t} else if (strcmp(val, HOOKSTR_MODIFYJOB) == 0) {\n\t\t\tdelete_link(&phook->hi_modifyjob_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_MODIFYJOB;\n\t\t} else if (strcmp(val, HOOKSTR_RESVSUB) == 0) {\n\t\t\tdelete_link(&phook->hi_resvsub_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_RESVSUB;\n\t\t} else if (strcmp(val, HOOKSTR_MODIFYRESV) == 0) {\n\t\t\tdelete_link(&phook->hi_modifyresv_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_MODIFYRESV;\n\t\t} else if (strcmp(val, HOOKSTR_MOVEJOB) == 0) 
{\n\t\t\tdelete_link(&phook->hi_movejob_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_MOVEJOB;\n\t\t} else if (strcmp(val, HOOKSTR_RUNJOB) == 0) {\n\t\t\tdelete_link(&phook->hi_runjob_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_RUNJOB;\n\t\t} else if (strcmp(val, HOOKSTR_JOBOBIT) == 0) {\n\t\t\tdelete_link(&phook->hi_jobobit_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_JOBOBIT;\n\t\t} else if (strcmp(val, HOOKSTR_MANAGEMENT) == 0) {\n\t\t\tdelete_link(&phook->hi_management_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_MANAGEMENT;\n\t\t} else if (strcmp(val, HOOKSTR_MODIFYVNODE) == 0) {\n\t\t\tdelete_link(&phook->hi_modifyvnode_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_MODIFYVNODE;\n\t\t} else if (strcmp(val, HOOKSTR_PROVISION) == 0) {\n\t\t\tdelete_link(&phook->hi_provision_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_PROVISION;\n\t\t} else if (strcmp(val, HOOKSTR_PERIODIC) == 0) {\n\t\t\tdelete_link(&phook->hi_periodic_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_PERIODIC;\n\t\t\tdelete_task_by_parm1_func(phook, NULL, DELETE_ALL);\n\t\t} else if (strcmp(val, HOOKSTR_RESV_CONFIRM) == 0) {\n\t\t\tdelete_link(&phook->hi_resv_confirm_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_RESV_CONFIRM;\n\t\t} else if (strcmp(val, HOOKSTR_RESV_BEGIN) == 0) {\n\t\t\tdelete_link(&phook->hi_resv_begin_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_RESV_BEGIN;\n\t\t} else if (strcmp(val, HOOKSTR_RESV_END) == 0) {\n\t\t\tdelete_link(&phook->hi_resv_end_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_RESV_END;\n\t\t} else if (strcmp(val, HOOKSTR_EXECJOB_BEGIN) == 0) {\n\t\t\tdelete_link(&phook->hi_execjob_begin_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_EXECJOB_BEGIN;\n\t\t} else if (strcmp(val, HOOKSTR_EXECJOB_PROLOGUE) == 0) {\n\t\t\tdelete_link(&phook->hi_execjob_prologue_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_EXECJOB_PROLOGUE;\n\t\t} else if (strcmp(val, HOOKSTR_EXECJOB_EPILOGUE) == 0) {\n\t\t\tdelete_link(&phook->hi_execjob_epilogue_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_EXECJOB_EPILOGUE;\n\t\t} 
else if (strcmp(val, HOOKSTR_EXECJOB_END) == 0) {\n\t\t\tdelete_link(&phook->hi_execjob_end_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_EXECJOB_END;\n\t\t} else if (strcmp(val, HOOKSTR_EXECJOB_PRETERM) == 0) {\n\t\t\tdelete_link(&phook->hi_execjob_preterm_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_EXECJOB_PRETERM;\n\t\t} else if (strcmp(val, HOOKSTR_EXECJOB_LAUNCH) == 0) {\n\t\t\tdelete_link(&phook->hi_execjob_launch_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_EXECJOB_LAUNCH;\n\t\t} else if (strcmp(val, HOOKSTR_EXECHOST_PERIODIC) == 0) {\n\t\t\tdelete_link(&phook->hi_exechost_periodic_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_EXECHOST_PERIODIC;\n\t\t} else if (strcmp(val, HOOKSTR_EXECHOST_STARTUP) == 0) {\n\t\t\tdelete_link(&phook->hi_exechost_startup_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_EXECHOST_STARTUP;\n\t\t} else if (strcmp(val, HOOKSTR_EXECJOB_ATTACH) == 0) {\n\t\t\tdelete_link(&phook->hi_execjob_attach_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_EXECJOB_ATTACH;\n\t\t} else if (strcmp(val, HOOKSTR_EXECJOB_RESIZE) == 0) {\n\t\t\tdelete_link(&phook->hi_execjob_resize_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_EXECJOB_RESIZE;\n\t\t} else if (strcmp(val, HOOKSTR_EXECJOB_ABORT) == 0) {\n\t\t\tdelete_link(&phook->hi_execjob_abort_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_EXECJOB_ABORT;\n\t\t} else if (strcmp(val, HOOKSTR_EXECJOB_POSTSUSPEND) == 0) {\n\t\t\tdelete_link(&phook->hi_execjob_postsuspend_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_EXECJOB_POSTSUSPEND;\n\t\t} else if (strcmp(val, HOOKSTR_EXECJOB_PRERESUME) == 0) {\n\t\t\tdelete_link(&phook->hi_execjob_preresume_hooks);\n\t\t\tphook->event &= ~HOOK_EVENT_EXECJOB_PRERESUME;\n\t\t} else if (strcmp(val, HOOKSTR_NONE) != 0) {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"invalid argument (%s) to event. 
\"\n\t\t\t\t \"Should be one or more of: %s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,\"\n\t\t\t\t \"%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s \"\n\t\t\t\t \"or %s for no event.\",\n\t\t\t\t newval, HOOKSTR_QUEUEJOB, HOOKSTR_POSTQUEUEJOB, HOOKSTR_MODIFYJOB, HOOKSTR_MODIFYVNODE, HOOKSTR_MANAGEMENT,\n\t\t\t\t HOOKSTR_RESVSUB, HOOKSTR_MODIFYRESV, HOOKSTR_MOVEJOB, HOOKSTR_JOBOBIT,\n\t\t\t\t HOOKSTR_RUNJOB, HOOKSTR_PERIODIC, HOOKSTR_PROVISION, HOOKSTR_RESV_CONFIRM,\n\t\t\t\t HOOKSTR_RESV_BEGIN, HOOKSTR_RESV_END, HOOKSTR_EXECJOB_BEGIN,\n\t\t\t\t HOOKSTR_EXECJOB_PROLOGUE, HOOKSTR_EXECJOB_EPILOGUE, HOOKSTR_EXECJOB_END,\n\t\t\t\t HOOKSTR_EXECJOB_PRETERM, HOOKSTR_EXECHOST_PERIODIC,\n\t\t\t\t HOOKSTR_EXECJOB_LAUNCH, HOOKSTR_EXECHOST_STARTUP,\n\t\t\t\t HOOKSTR_EXECJOB_ATTACH, HOOKSTR_EXECJOB_RESIZE, HOOKSTR_EXECJOB_ABORT, HOOKSTR_EXECJOB_POSTSUSPEND, HOOKSTR_EXECJOB_PRERESUME, HOOKSTR_NONE);\n\t\t\tfree(newval_dup);\n\t\t\treturn (1);\n\t\t}\n\n\t\tval = strtok(NULL, \",\");\n\t}\n\n\tfree(newval_dup);\n\treturn (0);\n}\n\n/**\n * @brief\n *\tSets the hook 'phook's order attribute to a value\n *\trepresenting 'newval'. 
Various hooks lists where\n *\t'phook' appears are updated to reflect the new phook->order value.\n *\n * @param[in/out]\tphook - hook being operated on.\n * @param[in]\t\tnewval - the hook order value to set to\n * @param[in/out]\tmsg - error message buffer\n * @param[in]\t\tmsg_len - size of 'msg' buffer.\n *\n * @return int\n * @retval 0 for success\n * @retval 1 for failure with 'msg' of size 'msg_len' filled in.\n *\n */\nint\nset_hook_order(hook *phook, char *newval, char *msg, size_t msg_len)\n{\n\n\tint val;\n\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"'msg' buffer is NULL\");\n\t\treturn (1);\n\t}\n\tmemset(msg, '\\0', msg_len);\n\n\tif (phook == NULL) {\n\t\tsnprintf(msg, msg_len - 1, \"%s: hook parameter is NULL!\",\n\t\t\t __func__);\n\t\treturn (1);\n\t}\n\n\tif (newval == NULL) {\n\t\tsnprintf(msg, msg_len - 1, \"%s: hook's order is NULL!\", __func__);\n\t\treturn (1);\n\t}\n\n\tval = atoi(newval);\n\n\tif (phook->type == HOOK_SITE) {\n\n\t\tif ((val < HOOK_SITE_ORDER_MIN) ||\n\t\t    (val > HOOK_SITE_ORDER_MAX)) {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"order given (%d) is outside the acceptable \"\n\t\t\t\t \"range of [%d, %d] for type \\'%s\\'.\",\n\t\t\t\t val, HOOK_SITE_ORDER_MIN, HOOK_SITE_ORDER_MAX,\n\t\t\t\t HOOKSTR_SITE);\n\t\t\treturn (1);\n\t\t}\n\t} else {\n\t\tif ((val < HOOK_PBS_ORDER_MIN) ||\n\t\t    (val > HOOK_PBS_ORDER_MAX)) {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"order given (%d) is outside the acceptable \"\n\t\t\t\t \"range of [%d, %d] for type \\'%s\\'.\",\n\t\t\t\t val,\n\t\t\t\t HOOK_PBS_ORDER_MIN, HOOK_PBS_ORDER_MAX,\n\t\t\t\t HOOKSTR_PBS);\n\t\t\treturn (1);\n\t\t}\n\t}\n\n\tphook->order = val;\n\n\tif (phook->event & HOOK_EVENT_QUEUEJOB) {\n\t\tdelete_link(&phook->hi_queuejob_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_QUEUEJOB,\n\t\t\t\t       &svr_queuejob_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_POSTQUEUEJOB) 
{\n\t\tdelete_link(&phook->hi_postqueuejob_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_POSTQUEUEJOB,\n\t\t\t\t       &svr_postqueuejob_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_MODIFYJOB) {\n\t\tdelete_link(&phook->hi_modifyjob_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_MODIFYJOB,\n\t\t\t\t       &svr_modifyjob_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_RESVSUB) {\n\t\tdelete_link(&phook->hi_resvsub_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_RESVSUB,\n\t\t\t\t       &svr_resvsub_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_MODIFYRESV) {\n\t\tdelete_link(&phook->hi_modifyresv_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_MODIFYRESV,\n\t\t\t\t       &svr_modifyresv_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_MOVEJOB) {\n\t\tdelete_link(&phook->hi_movejob_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_MOVEJOB,\n\t\t\t\t       &svr_movejob_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_RUNJOB) {\n\t\tdelete_link(&phook->hi_runjob_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_RUNJOB,\n\t\t\t\t       &svr_runjob_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_JOBOBIT) {\n\t\tdelete_link(&phook->hi_jobobit_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_JOBOBIT,\n\t\t\t\t       &svr_jobobit_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_MANAGEMENT) {\n\t\tdelete_link(&phook->hi_management_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_MANAGEMENT,\n\t\t\t\t       &svr_management_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_MODIFYVNODE) {\n\t\tdelete_link(&phook->hi_modifyvnode_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_MODIFYVNODE,\n\t\t\t\t       &svr_modifyvnode_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_RESV_BEGIN) {\n\t\tdelete_link(&phook->hi_resv_begin_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_RESV_BEGIN,\n\t\t\t\t       &svr_resv_begin_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_RESV_CONFIRM) 
{\n\t\tdelete_link(&phook->hi_resv_confirm_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_RESV_CONFIRM,\n\t\t\t\t       &svr_resv_confirm_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_RESV_END) {\n\t\tdelete_link(&phook->hi_resv_end_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_RESV_END,\n\t\t\t\t       &svr_resv_end_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_BEGIN) {\n\t\tdelete_link(&phook->hi_execjob_begin_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_BEGIN,\n\t\t\t\t       &svr_execjob_begin_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_PROLOGUE) {\n\t\tdelete_link(&phook->hi_execjob_prologue_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_PROLOGUE,\n\t\t\t\t       &svr_execjob_prologue_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_EPILOGUE) {\n\t\tdelete_link(&phook->hi_execjob_epilogue_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_EPILOGUE,\n\t\t\t\t       &svr_execjob_epilogue_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_END) {\n\t\tdelete_link(&phook->hi_execjob_end_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_END,\n\t\t\t\t       &svr_execjob_end_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_PRETERM) {\n\t\tdelete_link(&phook->hi_execjob_preterm_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_PRETERM,\n\t\t\t\t       &svr_execjob_preterm_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_LAUNCH) {\n\t\tdelete_link(&phook->hi_execjob_launch_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_LAUNCH,\n\t\t\t\t       &svr_execjob_launch_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECHOST_PERIODIC) {\n\t\tdelete_link(&phook->hi_exechost_periodic_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_EXECHOST_PERIODIC,\n\t\t\t\t       &svr_exechost_periodic_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECHOST_STARTUP) 
{\n\t\tdelete_link(&phook->hi_exechost_startup_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_EXECHOST_STARTUP,\n\t\t\t\t       &svr_exechost_startup_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_ATTACH) {\n\t\tdelete_link(&phook->hi_execjob_attach_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_ATTACH,\n\t\t\t\t       &svr_execjob_attach_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_RESIZE) {\n\t\tdelete_link(&phook->hi_execjob_resize_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_RESIZE,\n\t\t\t\t       &svr_execjob_resize_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_ABORT) {\n\t\tdelete_link(&phook->hi_execjob_abort_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_ABORT,\n\t\t\t\t       &svr_execjob_abort_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_POSTSUSPEND) {\n\t\tdelete_link(&phook->hi_execjob_postsuspend_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_POSTSUSPEND,\n\t\t\t\t       &svr_execjob_postsuspend_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_PRERESUME) {\n\t\tdelete_link(&phook->hi_execjob_preresume_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_PRERESUME,\n\t\t\t\t       &svr_execjob_preresume_hooks, phook);\n\t}\n\treturn (0);\n}\n\n/*\n *\tSets the hook 'phook's alarm attribute to a value\n *\trepresenting 'newval'.\n *\tRETURNS: 0 for success; 1 otherwise with 'msg' of size 'msg_len'\n *\tfilled in.\n */\nint\nset_hook_alarm(hook *phook, char *newval, char *msg, size_t msg_len)\n{\n\tint alarm;\n\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"'msg' buffer is NULL\");\n\t\treturn (1);\n\t}\n\tmemset(msg, '\\0', msg_len);\n\n\tif (phook == NULL) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: hook parameter is NULL!\", __func__);\n\t\treturn (1);\n\t}\n\tif (newval == NULL) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: hook's alarm is NULL!\", __func__);\n\t\treturn 
(1);\n\t}\n\n\talarm = atoi(newval);\n\n\tif (alarm <= 0) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: alarm value '%s' of a hook must be > 0\", __func__,\n\t\t\t newval);\n\t\treturn (1);\n\t}\n\n\tphook->alarm = alarm;\n\treturn (0);\n}\n\n/**\n * @brief\n *\tSets the hook 'phook's freq attribute to a value\n *\trepresenting 'newval'.\n *\n * @param[in/out]\tphook - hook being operated on.\n * @param[in]\t\tnewval - the hook freq value to set to\n * @param[in/out]\tmsg - error message buffer\n * @param[in]\t\tmsg_len - size of 'msg' buffer.\n *\n * @return int\n * @retval 0 for success\n * @retval 1 for failure with 'msg' of size 'msg_len' filled in.\n */\nint\nset_hook_freq(hook *phook, char *newval, char *msg, size_t msg_len)\n{\n\tint freq;\n\tchar *pc = NULL;\n\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"'msg' buffer is NULL\");\n\t\treturn (1);\n\t}\n\tmemset(msg, '\\0', msg_len);\n\n\tif (phook == NULL) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: hook parameter is NULL!\", __func__);\n\t\treturn (1);\n\t}\n\tif (newval == NULL) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: hook's freq is NULL!\", __func__);\n\t\treturn (1);\n\t}\n\n\tpc = newval;\n\tif (*pc == '-')\n\t\t++pc; /* move past negative number, it will be caught later  */\n\n\twhile (isdigit((int) *pc))\n\t\t++pc;\n\n\tif (*pc != '\\0') {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: encountered a non-digit freq value: %c\",\n\t\t\t __func__, *pc);\n\t\treturn (1);\n\t}\n\n\tfreq = atoi(newval);\n\n\tif (freq <= 0) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: freq value '%s' of a hook must be > 0\", __func__,\n\t\t\t newval);\n\t\treturn (1);\n\t}\n\n\tif (((phook->event & HOOK_EVENT_EXECHOST_PERIODIC) == 0) && ((phook->event & HOOK_EVENT_PERIODIC) == 0)) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: Can't set hook freq value: hook event must contain at least '%s' or '%s'\", __func__,\n\t\t\t HOOKSTR_EXECHOST_PERIODIC, 
HOOKSTR_PERIODIC);\n\t\treturn (1);\n\t}\n\n\tphook->freq = freq;\n\treturn (0);\n}\n\n/*\n *\n * unset_hook* functions.\n *\n */\n\n/*\n *\tUnsets 'phook's enabled value, resetting back to default.\n *\tRETURNS: 0 for success; 1 otherwise with 'msg' of size 'msg_len'\n *\tfilled in.\n */\nint\nunset_hook_enabled(hook *phook, char *msg, size_t msg_len)\n{\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"'msg' buffer is NULL\");\n\t\treturn (1);\n\t}\n\tmemset(msg, '\\0', msg_len);\n\n\tif (phook == NULL) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: hook parameter is NULL\", __func__);\n\t\treturn (1);\n\t}\n\n\tphook->enabled = HOOK_ENABLED_DEFAULT;\n\treturn (0);\n}\n\n/*\n *\tUnsets 'phook's debug value, resetting back to default.\n *\tRETURNS: 0 for success; 1 otherwise with 'msg' of size 'msg_len'\n *\tfilled in.\n */\nint\nunset_hook_debug(hook *phook, char *msg, size_t msg_len)\n{\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"'msg' buffer is NULL\");\n\t\treturn (1);\n\t}\n\tmemset(msg, '\\0', msg_len);\n\n\tif (phook == NULL) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: hook parameter is NULL\", __func__);\n\t\treturn (1);\n\t}\n\n\tphook->debug = HOOK_DEBUG_DEFAULT;\n\treturn (0);\n}\n\n/*\n *\tUnsets 'phook's type value, resetting back to default.\n *\tRETURNS: 0 for success; 1 otherwise with 'msg' of size 'msg_len'\n *\tfilled in.\n */\nint\nunset_hook_type(hook *phook, char *msg, size_t msg_len)\n{\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"'msg' buffer is NULL\");\n\t\treturn (1);\n\t}\n\tmemset(msg, '\\0', msg_len);\n\n\tif (phook == NULL) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: hook parameter is NULL\", __func__);\n\t\treturn (1);\n\t}\n\n\tif (phook->hook_name &&\n\t    (strncmp(phook->hook_name, HOOK_PBS_PREFIX,\n\t\t     strlen(HOOK_PBS_PREFIX)) == 0)) 
{\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"can't unset hook's type since hook name is %s\",\n\t\t\t phook->hook_name);\n\t\treturn (1);\n\t}\n\tphook->type = HOOK_TYPE_DEFAULT;\n\treturn (0);\n}\n\n/*\n *\tUnsets 'phook's user value, resetting back to default.\n *\tRETURNS: 0 for success; 1 otherwise with 'msg' of size 'msg_len'\n *\tfilled in.\n */\nint\nunset_hook_user(hook *phook, char *msg, size_t msg_len)\n{\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"'msg' buffer is NULL\");\n\t\treturn (1);\n\t}\n\tmemset(msg, '\\0', msg_len);\n\n\tif (phook == NULL) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: hook parameter is NULL\", __func__);\n\t\treturn (1);\n\t}\n\n\tphook->user = HOOK_USER_DEFAULT;\n\treturn (0);\n}\n\n/*\n *\tUnsets 'phook's fail_action value, resetting back to default.\n *\tRETURNS: 0 for success; 1 otherwise with 'msg' of size 'msg_len'\n *\tfilled in.\n */\nint\nunset_hook_fail_action(hook *phook, char *msg, size_t msg_len)\n{\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"'msg' buffer is NULL\");\n\t\treturn (1);\n\t}\n\tmemset(msg, '\\0', msg_len);\n\n\tif (phook == NULL) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: hook parameter is NULL\", __func__);\n\t\treturn (1);\n\t}\n\n\tphook->fail_action = HOOK_FAIL_ACTION_DEFAULT;\n\treturn (0);\n}\n\n/**\n * @brief\n *\tUnsets 'phook's event value, resetting back to default.\n *\tVarious hooks lists where 'phook' appears are updated to\n *\treflect the changed event value.\n *\n * @param[in/out]\tphook - hook being operated on.\n * @param[in/out]\tmsg - error message buffer\n * @param[in]\t\tmsg_len - size of 'msg' buffer.\n *\n * @return int\n * @retval 0 for success\n * @retval 1 for failure with 'msg' of size 'msg_len' filled in.\n */\nint\nunset_hook_event(hook *phook, char *msg, size_t msg_len)\n{\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"'msg' buffer is 
NULL\");\n\t\treturn (1);\n\t}\n\tmemset(msg, '\\0', msg_len);\n\n\tif (phook == NULL) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: hook parameter is NULL\", __func__);\n\t\treturn (1);\n\t}\n\n\tif (phook->event & HOOK_EVENT_QUEUEJOB)\n\t\tdelete_link(&phook->hi_queuejob_hooks);\n\n\tif (phook->event & HOOK_EVENT_POSTQUEUEJOB)\n\t\tdelete_link(&phook->hi_postqueuejob_hooks);\n\n\tif (phook->event & HOOK_EVENT_MODIFYJOB)\n\t\tdelete_link(&phook->hi_modifyjob_hooks);\n\n\tif (phook->event & HOOK_EVENT_RESVSUB)\n\t\tdelete_link(&phook->hi_resvsub_hooks);\n\n\tif (phook->event & HOOK_EVENT_MODIFYRESV)\n\t\tdelete_link(&phook->hi_modifyresv_hooks);\n\n\tif (phook->event & HOOK_EVENT_MOVEJOB)\n\t\tdelete_link(&phook->hi_movejob_hooks);\n\n\tif (phook->event & HOOK_EVENT_RUNJOB)\n\t\tdelete_link(&phook->hi_runjob_hooks);\n\n\tif (phook->event & HOOK_EVENT_JOBOBIT)\n\t\tdelete_link(&phook->hi_jobobit_hooks);\n\n\tif (phook->event & HOOK_EVENT_MANAGEMENT)\n\t\tdelete_link(&phook->hi_management_hooks);\n\n\tif (phook->event & HOOK_EVENT_MODIFYVNODE)\n\t\tdelete_link(&phook->hi_modifyvnode_hooks);\n\n\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\tdelete_link(&phook->hi_provision_hooks);\n\n\tif (phook->event & HOOK_EVENT_PERIODIC)\n\t\tdelete_link(&phook->hi_periodic_hooks);\n\n\tif (phook->event & HOOK_EVENT_RESV_CONFIRM)\n\t\tdelete_link(&phook->hi_resv_confirm_hooks);\n\n\tif (phook->event & HOOK_EVENT_RESV_BEGIN)\n\t\tdelete_link(&phook->hi_resv_begin_hooks);\n\n\tif (phook->event & HOOK_EVENT_RESV_END)\n\t\tdelete_link(&phook->hi_resv_end_hooks);\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_BEGIN) {\n\t\tdelete_link(&phook->hi_execjob_begin_hooks);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_PROLOGUE) {\n\t\tdelete_link(&phook->hi_execjob_prologue_hooks);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_EPILOGUE) {\n\t\tdelete_link(&phook->hi_execjob_epilogue_hooks);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_PRETERM) 
{\n\t\tdelete_link(&phook->hi_execjob_preterm_hooks);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_LAUNCH) {\n\t\tdelete_link(&phook->hi_execjob_launch_hooks);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_END) {\n\t\tdelete_link(&phook->hi_execjob_end_hooks);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECHOST_PERIODIC) {\n\t\tdelete_link(&phook->hi_exechost_periodic_hooks);\n\t}\n\tif (phook->event & HOOK_EVENT_EXECHOST_STARTUP) {\n\t\tdelete_link(&phook->hi_exechost_startup_hooks);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_ATTACH) {\n\t\tdelete_link(&phook->hi_execjob_attach_hooks);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_RESIZE) {\n\t\tdelete_link(&phook->hi_execjob_resize_hooks);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_ABORT) {\n\t\tdelete_link(&phook->hi_execjob_abort_hooks);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_POSTSUSPEND) {\n\t\tdelete_link(&phook->hi_execjob_postsuspend_hooks);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_PRERESUME) {\n\t\tdelete_link(&phook->hi_execjob_preresume_hooks);\n\t}\n\tphook->event = HOOK_EVENT_DEFAULT;\n\n\treturn (0);\n}\n\n/**\n * @brief\n *\tUnsets 'phook's order value, resetting back to default.\n *\tVarious hooks lists where 'phook' appears are updated to\n *\treflect the changed phook->order value.\n *\n * @param[in/out]\tphook - hook being operated on.\n * @param[in/out]\tmsg - error message buffer\n * @param[in]\t\tmsg_len - size of 'msg' buffer.\n *\n * @return int\n * @retval 0 for success\n * @retval 1 for failure with 'msg' of size 'msg_len' filled in.\n */\nint\nunset_hook_order(hook *phook, char *msg, size_t msg_len)\n{\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"'msg' buffer is NULL\");\n\t\treturn (1);\n\t}\n\tmemset(msg, '\\0', msg_len);\n\n\tif (phook == NULL) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: hook parameter is NULL\", __func__);\n\t\treturn (1);\n\t}\n\n\tphook->order = HOOK_ORDER_DEFAULT;\n\n\tif (phook->event & 
HOOK_EVENT_QUEUEJOB) {\n\t\tdelete_link(&phook->hi_queuejob_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_QUEUEJOB,\n\t\t\t\t       &svr_queuejob_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_POSTQUEUEJOB) {\n\t\tdelete_link(&phook->hi_postqueuejob_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_POSTQUEUEJOB,\n\t\t\t\t       &svr_postqueuejob_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_MODIFYJOB) {\n\t\tdelete_link(&phook->hi_modifyjob_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_MODIFYJOB,\n\t\t\t\t       &svr_modifyjob_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_RESVSUB) {\n\t\tdelete_link(&phook->hi_resvsub_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_RESVSUB,\n\t\t\t\t       &svr_resvsub_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_MODIFYRESV) {\n\t\tdelete_link(&phook->hi_modifyresv_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_MODIFYRESV,\n\t\t\t\t       &svr_modifyresv_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_MOVEJOB) {\n\t\tdelete_link(&phook->hi_movejob_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_MOVEJOB,\n\t\t\t\t       &svr_movejob_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_RUNJOB) {\n\t\tdelete_link(&phook->hi_runjob_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_RUNJOB,\n\t\t\t\t       &svr_runjob_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_JOBOBIT) {\n\t\tdelete_link(&phook->hi_jobobit_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_JOBOBIT,\n\t\t\t\t       &svr_jobobit_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_MANAGEMENT) {\n\t\tdelete_link(&phook->hi_management_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_MANAGEMENT,\n\t\t\t\t       &svr_management_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_MODIFYVNODE) {\n\t\tdelete_link(&phook->hi_modifyvnode_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_MODIFYVNODE,\n\t\t\t\t       &svr_modifyvnode_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_RESV_CONFIRM) 
{\n\t\tdelete_link(&phook->hi_resv_confirm_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_RESV_CONFIRM,\n\t\t\t\t       &svr_resv_confirm_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_RESV_BEGIN) {\n\t\tdelete_link(&phook->hi_resv_begin_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_RESV_BEGIN,\n\t\t\t\t       &svr_resv_begin_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_RESV_END) {\n\t\tdelete_link(&phook->hi_resv_end_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_RESV_END,\n\t\t\t\t       &svr_resv_end_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_BEGIN) {\n\t\tdelete_link(&phook->hi_execjob_begin_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_BEGIN,\n\t\t\t\t       &svr_execjob_begin_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_PROLOGUE) {\n\t\tdelete_link(&phook->hi_execjob_prologue_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_PROLOGUE,\n\t\t\t\t       &svr_execjob_prologue_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_EPILOGUE) {\n\t\tdelete_link(&phook->hi_execjob_epilogue_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_EPILOGUE,\n\t\t\t\t       &svr_execjob_epilogue_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_END) {\n\t\tdelete_link(&phook->hi_execjob_end_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_END,\n\t\t\t\t       &svr_execjob_end_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_PRETERM) {\n\t\tdelete_link(&phook->hi_execjob_preterm_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_PRETERM,\n\t\t\t\t       &svr_execjob_preterm_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_LAUNCH) {\n\t\tdelete_link(&phook->hi_execjob_launch_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_LAUNCH,\n\t\t\t\t       &svr_execjob_launch_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECHOST_PERIODIC) 
{\n\t\tdelete_link(&phook->hi_exechost_periodic_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_EXECHOST_PERIODIC,\n\t\t\t\t       &svr_exechost_periodic_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECHOST_STARTUP) {\n\t\tdelete_link(&phook->hi_exechost_startup_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_EXECHOST_STARTUP,\n\t\t\t\t       &svr_exechost_startup_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_ATTACH) {\n\t\tdelete_link(&phook->hi_execjob_attach_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_ATTACH,\n\t\t\t\t       &svr_execjob_attach_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_RESIZE) {\n\t\tdelete_link(&phook->hi_execjob_resize_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_RESIZE,\n\t\t\t\t       &svr_execjob_resize_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_ABORT) {\n\t\tdelete_link(&phook->hi_execjob_abort_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_ABORT,\n\t\t\t\t       &svr_execjob_abort_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_POSTSUSPEND) {\n\t\tdelete_link(&phook->hi_execjob_postsuspend_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_POSTSUSPEND,\n\t\t\t\t       &svr_execjob_postsuspend_hooks, phook);\n\t}\n\n\tif (phook->event & HOOK_EVENT_EXECJOB_PRERESUME) {\n\t\tdelete_link(&phook->hi_execjob_preresume_hooks);\n\t\tinsert_hook_sort_order(HOOK_EVENT_EXECJOB_PRERESUME,\n\t\t\t\t       &svr_execjob_preresume_hooks, phook);\n\t}\n\n\treturn (0);\n}\n\n/*\n *\tUnsets 'phook's alarm value, resetting back to default.\n *\tRETURNS: 0 for success; 1 otherwise with 'msg' of size 'msg_len'\n *\tfilled in.\n */\nint\nunset_hook_alarm(hook *phook, char *msg, size_t msg_len)\n{\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"'msg' buffer is NULL\");\n\t\treturn (1);\n\t}\n\tmemset(msg, '\\0', msg_len);\n\n\tif (phook == NULL) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: hook parameter is 
NULL\", __func__);\n\t\treturn (1);\n\t}\n\n\tphook->alarm = HOOK_ALARM_DEFAULT;\n\n\treturn (0);\n}\n\n/**\n * @brief\n *\tUnsets 'phook's freq value, resetting back to default.\n *\n * @param[in/out]\tphook - hook being operated on.\n * @param[in/out]\tmsg - error message buffer\n * @param[in]\t\tmsg_len - size of 'msg' buffer.\n *\n * @return int\n * @retval 0 for success\n * @retval 1 for failure with 'msg' of size 'msg_len' filled in.\n */\nint\nunset_hook_freq(hook *phook, char *msg, size_t msg_len)\n{\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"'msg' buffer is NULL\");\n\t\treturn (1);\n\t}\n\tmemset(msg, '\\0', msg_len);\n\n\tif (phook == NULL) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: hook parameter is NULL\", __func__);\n\t\treturn (1);\n\t}\n\n\tphook->freq = HOOK_FREQ_DEFAULT;\n\n\treturn (0);\n}\n\n/**\n *\n * @brief\n *\tInitializes all the entries except the 'name' attribute of a hook.\n * @param[in/out] phook\t- the hook in question\n * @param[in]\t  pyfree_func - the \"free\" function to be used on a python\n *\t\t\t\tscript attribute.\n *\n */\nstatic void\nhook_init(hook *phook, void (*pyfree_func)(struct python_script *))\n{\n\n\tphook->type = HOOK_TYPE_DEFAULT;\n\tphook->user = HOOK_USER_DEFAULT;\n\tphook->fail_action = HOOK_FAIL_ACTION_DEFAULT;\n\tphook->enabled = HOOK_ENABLED_DEFAULT;\n\tphook->debug = HOOK_DEBUG_DEFAULT;\n\tphook->event = HOOK_EVENT_DEFAULT;\n\tphook->order = HOOK_ORDER_DEFAULT;\n\tphook->alarm = HOOK_ALARM_DEFAULT;\n\tphook->freq = HOOK_FREQ_DEFAULT;\n\tphook->pending_delete = HOOK_PENDING_DELETE_DEFAULT;\n\n\tif (phook->script != NULL) {\n\t\t/* Free up also any Python objects that may have been */\n\t\t/* instantiated for the hook script */\n\t\tif (pyfree_func != NULL)\n\t\t\tpyfree_func(phook->script);\n\t\tfree(phook->script);\n\t}\n\tphook->script = NULL;\n\tphook->hook_control_checksum = 0;\n\tphook->hook_script_checksum = 0;\n\tphook->hook_config_checksum = 
0;\n}\n\n/**\n * @brief\n * \tAllocates space for a hook structure and initialize working\n *\tattributes to \"unset\"\n *\n * @return hook *\n * @retval <pointer to hook>\tpointer to structure\n * @retval NULL \t\tif malloc space not available.\n */\nhook *\nhook_alloc(void)\n{\n\thook *phook;\n\n\tphook = (hook *) malloc(sizeof(hook));\n\tif (phook == NULL) {\n\t\tlog_err(errno, __func__, \"no memory\");\n\t\treturn NULL;\n\t}\n\t(void) memset((char *) phook, (int) 0, (size_t) sizeof(hook));\n\n\tphook->hook_name = NULL;\n\n\thook_init(phook, NULL);\n\n\tclear_hook_links(phook);\n\tappend_link(&svr_allhooks, &phook->hi_allhooks, phook);\n\n\treturn (phook);\n}\n\n/**\n * @brief\n *\tFrees the hook structure and its sub-structures.\n *\n * @param[in/out]  phook\t- pointer to the hook structure.\n * @param[in] \t   pyfree_func - the special free function for the\n *\t\t\t\tPython-related field.\n *\t\t\tEx. pbs_python_ext_free_python_script()\n *\n */\nvoid\nhook_free(hook *phook, void (*pyfree_func)(struct python_script *))\n{\n\tif (phook->hook_name != NULL) {\n\t\tfree(phook->hook_name);\n\t}\n\tphook->hook_name = NULL;\n\thook_init(phook, pyfree_func);\n\n\tfree(phook); /* now free the main structure */\n}\n\n/**\n *\n * @brief\n *\tMark the given hook-related 'filename' as bad by renaming it\n *\twith a HOOK_BAD_SUFFIX.\n *\n * @note\n *\tThis function expects filename with HOOK_FILE_SUFFIX,\n *\tHOOK_SCRIPT_SUFFIX, HOOK_CONFIG_SUFFIX, or named PBS_RESCDEF.\n *\n */\nvoid\nmark_hook_file_bad(char *filename)\n{\n\tchar bad_filename[MAXPATHLEN + 1];\n\tchar *p;\n\tint is_hook_cntrl_file = 0;\n\tint is_hook_config_file = 0;\n\tint is_hook_script_file = 0;\n\n\tif (filename == NULL)\n\t\treturn;\n\n\tp = strstr(filename, HOOK_FILE_SUFFIX);\n\tif ((p != NULL) && (strcmp(p, HOOK_FILE_SUFFIX) == 0)) {\n\t\tis_hook_cntrl_file = 1;\n\t}\n\tif (!is_hook_cntrl_file) {\n\t\tp = strstr(filename, HOOK_SCRIPT_SUFFIX);\n\t\tif ((p != NULL) && (strcmp(p, HOOK_SCRIPT_SUFFIX) == 
0))\n\t\t\tis_hook_script_file = 1;\n\t}\n\n\tif (!is_hook_cntrl_file && !is_hook_script_file) {\n\t\tp = strstr(filename, HOOK_CONFIG_SUFFIX);\n\t\tif ((p != NULL) && (strcmp(p, HOOK_CONFIG_SUFFIX) == 0))\n\t\t\tis_hook_config_file = 1;\n\t}\n\n\tif (!is_hook_cntrl_file && !is_hook_script_file &&\n\t    !is_hook_config_file) {\n\t\tp = strstr(filename, PBS_RESCDEF);\n\t\tif ((p == NULL) || (strcmp(p, PBS_RESCDEF) != 0))\n\t\t\t/* not a recognized file, so not moving it */\n\t\t\treturn;\n\t}\n\n\tsnprintf(bad_filename, sizeof(bad_filename), \"%s%s\",\n\t\t filename, HOOK_BAD_SUFFIX);\n\n#ifdef WIN32\n\tif (MoveFileEx(filename, bad_filename,\n\t\t       MOVEFILE_REPLACE_EXISTING | MOVEFILE_WRITE_THROUGH) == 0) {\n\t\terrno = GetLastError();\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"MoveFileEx(%s, %s) failed!\", filename, bad_filename);\n\t\tlog_err(errno, __func__, log_buffer);\n\n\t} else {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"renamed hook-related file %s as %s\", filename,\n\t\t\t bad_filename);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_WARNING, __func__, log_buffer);\n\t}\n\tsecure_file(bad_filename, \"Administrators\",\n\t\t    READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED);\n#else\n\tif (rename(filename, bad_filename) == -1) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"error renaming hook file %s\", filename);\n\t\tlog_err(errno, __func__, log_buffer);\n\t} else {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"renamed hook-related file %s as %s\", filename,\n\t\t\t bad_filename);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_WARNING, __func__, log_buffer);\n\t}\n#endif\n}\n\n/**\n * @brief\n *\tPurge hook from system.\n * \tThe hook is dequeued; the hook control file, script file\n * \tare unlinked, and the hook structure is freed.\n * @note\n *\tIf the hook-related file fails to unlink, then it is moved out\n *\tof the way (renamed via mark_hook_file_bad() 
call) so as not to\n *\tget rediscovered by hook_recov().\n *\n * @param[in/out] phook\t- pointer to the hook structure.\n * @param[in] \t  pyfree_func - the special free function for the Python-related\n *\t\t\tfield.\n *\t\t\tEx. pbs_python_ext_free_python_script()\n *\n */\nvoid\nhook_purge(hook *phook, void (*pyfree_func)(struct python_script *))\n{\n\tchar namebuf[MAXPATHLEN + 1];\n\n\tif (phook == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"phook is NULL!\");\n\t\treturn;\n\t}\n\n\tclear_hook_links(phook);\n\tif (phook->hook_name == NULL) {\n\t\t/* hook_name should not be NULL, but if so, we already    */\n\t\t/* malloced hook structure so don't return here so as     */\n\t\t/* to catch hook_free() at the end.                       */\n\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\"phook->hook_name is NULL!\");\n\t} else {\n\t\tmemset(namebuf, '\\0', MAXPATHLEN + 1);\n\n\t\tsnprintf(namebuf, MAXPATHLEN, \"%s%s%s\", path_hooks,\n\t\t\t phook->hook_name, HOOK_CONFIG_SUFFIX);\n\t\tif ((phook->event & HOOK_EVENT_PERIODIC) && (phook->enabled == TRUE))\n\t\t\tdelete_task_by_parm1_func(phook, NULL, DELETE_ALL);\n\n#ifdef WIN32\n\t\t/* in case file permission got corrupted */\n\t\tsecure_file(namebuf, \"Administrators\",\n\t\t\t    READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED);\n#endif\n\t\tif (unlink(namebuf) < 0) {\n\t\t\tif (errno != ENOENT) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"Failed to delete hook config file %s\",\n\t\t\t\t\tnamebuf);\n\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\tmark_hook_file_bad(namebuf);\n\t\t\t}\n\t\t}\n\n\t\tsnprintf(namebuf, MAXPATHLEN, \"%s%s%s\", path_hooks,\n\t\t\t phook->hook_name, HOOK_SCRIPT_SUFFIX);\n\n#ifdef WIN32\n\t\t/* in case file permission got corrupted */\n\t\tsecure_file(namebuf, \"Administrators\",\n\t\t\t    READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED);\n#endif\n\n\t\tif (unlink(namebuf) < 0) {\n\t\t\tif (errno != ENOENT) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"Failed to delete 
hook script %s\",\n\t\t\t\t\tnamebuf);\n\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\tmark_hook_file_bad(namebuf);\n\t\t\t}\n\t\t}\n\n\t\tsnprintf(namebuf, MAXPATHLEN, \"%s%s%s\", path_hooks,\n\t\t\t phook->hook_name, HOOK_FILE_SUFFIX);\n\n#ifdef WIN32\n\t\t/* in case file permission got corrupted */\n\t\tsecure_file(namebuf, \"Administrators\",\n\t\t\t    READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED);\n#endif\n\t\tif (unlink(namebuf) < 0) {\n\t\t\tif (errno != ENOENT) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"Failed to delete hook control file %s\",\n\t\t\t\t\tnamebuf);\n\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\tmark_hook_file_bad(namebuf);\n\t\t\t}\n\t\t}\n\t}\n\n\thook_free(phook, pyfree_func);\n\treturn;\n}\n\n/**\n * @brief\n * \tSaves non-default attribute values into  the hook control file.\n *\n * @param[in/out]\tphook - hook being operated on.\n *\n * @return int\n * @retval\t0\tfor success\n * @retval\t-1\tfor failure\n */\nint\nhook_save(hook *phook)\n{\n\tchar hookfile[MAXPATHLEN + 1];\n\tchar hookfile_new[MAXPATHLEN + 1];\n\tFILE *hkfp = NULL;\n\n\tif (phook == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"phook is NULL!\");\n\t\treturn (-1);\n\t}\n\tif (phook->hook_name == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\"phook->hook_name is NULL!\");\n\t\treturn (-1);\n\t}\n\n\tmemset(hookfile, '\\0', MAXPATHLEN + 1);\n\tmemset(hookfile_new, '\\0', MAXPATHLEN + 1);\n\tsnprintf(hookfile, MAXPATHLEN, \"%s%s%s\", path_hooks, phook->hook_name,\n\t\t HOOK_FILE_SUFFIX);\n\tsnprintf(hookfile_new, MAXPATHLEN, \"%s%s%s.new\", path_hooks,\n\t\t phook->hook_name, HOOK_FILE_SUFFIX);\n\n#ifdef WIN32\n\tfix_perms2(hookfile, hookfile_new);\n#endif\n\n\tif ((hkfp = fopen(hookfile_new, \"w\")) == NULL) {\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_WARNING, __func__,\n\t\t\t  \"Hook control file update failed!\");\n\t\treturn (-1);\n\t}\n\n\tfprintf(hkfp, \"%s=%s\\n\", HOOKATT_NAME, phook->hook_name);\n\n\t/* 
Save only the non-defaults */\n\tif (phook->type != HOOK_TYPE_DEFAULT)\n\t\tfprintf(hkfp, \"%s=%s\\n\", HOOKATT_TYPE,\n\t\t\thook_type_as_string(phook->type));\n\n\tif (phook->enabled != HOOK_ENABLED_DEFAULT)\n\t\tfprintf(hkfp, \"%s=%s\\n\", HOOKATT_ENABLED,\n\t\t\thook_enabled_as_string(phook->enabled));\n\n\tif (phook->debug != HOOK_DEBUG_DEFAULT)\n\t\tfprintf(hkfp, \"%s=%s\\n\", HOOKATT_DEBUG,\n\t\t\thook_debug_as_string(phook->debug));\n\n\tif (phook->user != HOOK_USER_DEFAULT)\n\t\tfprintf(hkfp, \"%s=%s\\n\", HOOKATT_USER,\n\t\t\thook_user_as_string(phook->user));\n\n\tif (phook->fail_action != HOOK_FAIL_ACTION_DEFAULT)\n\t\tfprintf(hkfp, \"%s=%s\\n\", HOOKATT_FAIL_ACTION,\n\t\t\thook_fail_action_as_string(phook->fail_action));\n\n\tif (phook->event != HOOK_EVENT_DEFAULT) {\n\t\tfprintf(hkfp, \"%s=%s\\n\", HOOKATT_EVENT,\n\t\t\thook_event_as_string(phook->event));\n\t}\n\n\tif (phook->order != HOOK_ORDER_DEFAULT)\n\t\tfprintf(hkfp, \"%s=%s\\n\", HOOKATT_ORDER,\n\t\t\thook_order_as_string(phook->order));\n\n\tif (phook->alarm != HOOK_ALARM_DEFAULT)\n\t\tfprintf(hkfp, \"%s=%s\\n\", HOOKATT_ALARM,\n\t\t\thook_alarm_as_string(phook->alarm));\n\n\tif (phook->freq != HOOK_FREQ_DEFAULT)\n\t\tfprintf(hkfp, \"%s=%s\\n\", HOOKATT_FREQ,\n\t\t\thook_freq_as_string(phook->freq));\n\n\t/* need to save on disk that the hook is pending to be deleted */\n\tif (phook->pending_delete != HOOK_PENDING_DELETE_DEFAULT) {\n\t\tfprintf(hkfp, \"%s=%d\\n\", \"pending_delete\", phook->pending_delete);\n\t}\n\n#ifdef WIN32\n\tif ((fflush(hkfp) != 0) ||\n\t    (fclose(hkfp) != 0)) {\n\t\tsprintf(log_buffer, \"Failed to flush/close hook file %s\",\n\t\t\thookfile_new);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\treturn (-1);\n\t}\n\n\tif (MoveFileEx(hookfile_new, hookfile,\n\t\t       MOVEFILE_REPLACE_EXISTING | MOVEFILE_WRITE_THROUGH) == 0) {\n\t\terrno = GetLastError();\n\t\tsprintf(log_buffer, \"MoveFileEx(%s, %s) failed! 
Deleting file.\",\n\t\t\thookfile_new, hookfile);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\t(void) unlink(hookfile_new);\n\t\treturn (-1);\n\t}\n\tsecure_file(hookfile, \"Administrators\",\n\t\t    READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED);\n#else\n\tif ((fflush(hkfp) != 0) ||\n\t    (fsync(fileno(hkfp)) != 0) ||\n\t    (fclose(hkfp) != 0)) {\n\t\tsprintf(log_buffer, \"Failed to flush/close hook file %s\",\n\t\t\thookfile_new);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\treturn (-1);\n\t}\n\tif (rename(hookfile_new, hookfile) < 0) {\n\t\tchar *msgbuf;\n\n\t\tpbs_asprintf(&msgbuf, \"rename(%s, %s) failed!\", hookfile_new, hookfile);\n\t\tlog_err(errno, __func__, msgbuf);\n\t\tfree(msgbuf);\n\t\t(void) unlink(hookfile_new);\n\t\treturn (-1);\n\t}\n#endif\n\tphook->hook_control_checksum = crc_file(hookfile);\n\n\treturn (0);\n}\n\n/*\n * find_hook() - find hook by hook_name\n *\n *\tSearch list of all server hooks for one with same hook name\n *\tReturn NULL if not found or pointer to hook struct if found\n */\n\nhook *\nfind_hook(char *hook_name)\n{\n\thook *phook;\n\n\tphook = (hook *) GET_NEXT(svr_allhooks);\n\twhile (phook) {\n\t\tif (phook->hook_name &&\n\t\t    (strcmp(hook_name, phook->hook_name) == 0)) {\n\t\t\tbreak;\n\t\t}\n\t\tphook = (hook *) GET_NEXT(phook->hi_allhooks);\n\t}\n\treturn (phook); /* may be a null pointer */\n}\n\n/**\n * @brief\n *\tLocate a hook by its event.\n *\n * @par Functionality:\n *      This function locates a hook by its event number.\n *\n * @see\n *\tadd_hook_event\n *\tset_srv_prov_attributes\n *\tstart_vnode_provisioning\n *\t#hook in hook.h\n *\t#HOOK_EVENT_PROVISION in hook.h\n *\n * @param[in]   hook_event           -       number identifying a hook event\n *\n * @return      poiner to hook\n * @retval       pointer to hook : if hook is found\n * @retval       NULL : if hook is not found\n *\n * @par Side Effects:\n *      Unknown\n *\n * @par MT-safe: No\n *\n */\n\nhook *\nfind_hookbyevent(int 
hook_event)\n{\n\thook *phook;\n\n\tphook = (hook *) GET_NEXT(svr_allhooks);\n\twhile (phook) {\n\t\tif (phook->event & hook_event)\n\t\t\tbreak;\n\t\tphook = (hook *) GET_NEXT(phook->hi_allhooks);\n\t}\n\tDBPRT((\"hook pointer is %p\\n\", (void *) phook))\n\treturn (phook); /* may be a null pointer */\n}\n\n/* The following are for encoding and decoding of a hook file in base 64 */\n/* Much of the code has been taken from Python-2.5.1/Modules/binascii.c  */\n/* as written by: Jack Jansen, CWI, July 1995.                           */\n\nstatic char table_a2b_base64[] = {\n\t-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,\n\t-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,\n\t-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 62, -1, -1, -1, 63,\n\t52, 53, 54, 55, 56, 57, 58, 59, 60, 61, -1, -1, -1, 0, -1, -1, /* Note PAD->0 */\n\t-1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,\n\t15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, -1, -1, -1, -1, -1,\n\t-1, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40,\n\t41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, -1, -1, -1, -1, -1};\n\n#define BASE64_PAD '='\n\nstatic unsigned char table_b2a_base64[] =\n\t\"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/\";\n\nstatic int\nfind_valid_base64_char(unsigned char *s, ssize_t slen, int num)\n{\n\t/* Finds & returns the (num+1)th\n\t ** valid character for base64, or -1 if none.\n\t */\n\n\tint ret = -1;\n\tunsigned char c, b64val;\n\n\twhile ((slen > 0) && (ret == -1)) {\n\t\tc = *s;\n\t\tb64val = table_a2b_base64[c & 0x7f];\n\t\tif (((c <= 0x7f) && (b64val != (unsigned char) -1))) {\n\t\t\tif (num == 0)\n\t\t\t\tret = *s;\n\t\t\tnum--;\n\t\t}\n\n\t\ts++;\n\t\tslen--;\n\t}\n\treturn ret;\n}\n\n/*\n *\tDecodes a base64-encoded block of data (ascii_data, ascii_len), and\n *\tstore the result in (bin_data, p_bin_len).\n *\tRETURNS: 0 for success; 1 otherwise with 'msg' of size 'msg_len'\n *\t\tfilled in.\n 
*/\nint\ndecode_block_base64(unsigned char *ascii_data, ssize_t ascii_len,\n\t\t    unsigned char *bin_data, ssize_t *p_bin_len,\n\t\t    char *msg, size_t msg_len)\n{\n\tint leftbits = 0;\n\tunsigned char this_ch;\n\tunsigned int leftchar = 0;\n\tint quad_pos = 0;\n\tssize_t bin_len = 0;\n\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"'msg' buffer is NULL\");\n\t\treturn (1);\n\t}\n\tmemset(msg, '\\0', msg_len);\n\n\tfor (; ascii_len > 0; ascii_len--, ascii_data++) {\n\t\tthis_ch = *ascii_data;\n\t\tif (this_ch > 0x7f ||\n\t\t    this_ch == '\\r' || this_ch == '\\n' || this_ch == ' ')\n\t\t\tcontinue;\n\n\t\t/* Check for pad sequences and ignore\n\t\t ** the invalid ones.\n\t\t */\n\t\tif (this_ch == BASE64_PAD) {\n\t\t\tif ((quad_pos < 2) ||\n\t\t\t    ((quad_pos == 2) &&\n\t\t\t     (find_valid_base64_char(ascii_data, ascii_len, 1) != BASE64_PAD))) {\n\t\t\t\tcontinue;\n\t\t\t} else {\n\t\t\t\t/* A pad sequence means no more input.\n\t\t\t\t ** We've already interpreted the data\n\t\t\t\t ** from the quad at this point.\n\t\t\t\t */\n\t\t\t\tleftbits = 0;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\n\t\tthis_ch = table_a2b_base64[*ascii_data];\n\t\tif (this_ch == (unsigned char) -1)\n\t\t\tcontinue;\n\n\t\t/*\n\t\t ** Shift it in on the low end, and see if there's\n\t\t ** a byte ready for output.\n\t\t */\n\t\tquad_pos = (quad_pos + 1) & 0x03;\n\t\tleftchar = (leftchar << 6) | (this_ch);\n\t\tleftbits += 6;\n\n\t\tif (leftbits >= 8) {\n\t\t\tleftbits -= 8;\n\t\t\t*bin_data++ = (leftchar >> leftbits) & 0xff;\n\t\t\tbin_len++;\n\t\t\tleftchar &= ((1 << leftbits) - 1);\n\t\t}\n\t}\n\n\tif (leftbits != 0) {\n\t\tsnprintf(msg, msg_len - 1, \"Incorrect padding\");\n\t\treturn (1);\n\t}\n\n\t/* And set string size correctly. 
If the result string is empty\n\t ** (because the input was all invalid) return the shared empty\n\t ** string instead.\n\t */\n\tif (bin_len <= 0) {\n\t\tsnprintf(msg, msg_len - 1, \"Unable to decode...bad input\");\n\t\treturn (1);\n\t}\n\t*p_bin_len = bin_len;\n\n\treturn (0);\n}\n\n/*\n *\tEncodes a non-encoded block of data (bin_data, bin_len), and stores\n *\tthe result in (ascii_data, p_ascii_len).\n *\tRETURNS: 0 for success; 1 otherwise with 'msg' of size 'msg_len'\n *\tfilled in.\n */\nstatic int\nencode_block_base64(unsigned char *bin_data, ssize_t bin_len,\n\t\t    unsigned char *ascii_data, ssize_t *p_ascii_len,\n\t\t    char *msg, size_t msg_len)\n{\n\tint leftbits = 0;\n\tunsigned char this_ch;\n\tunsigned int leftchar = 0;\n\tunsigned char *ascii_data_start;\n\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"'msg' buffer is NULL\");\n\t\treturn (1);\n\t}\n\tmemset(msg, '\\0', msg_len);\n\n\tascii_data_start = ascii_data;\n\n\tfor (; bin_len > 0; bin_len--, bin_data++) {\n\t\t/* Shift the data into our buffer */\n\t\tleftchar = (leftchar << 8) | *bin_data;\n\t\tleftbits += 8;\n\n\t\t/* See if there are 6-bit groups ready */\n\t\twhile (leftbits >= 6) {\n\t\t\tthis_ch = (leftchar >> (leftbits - 6)) & 0x3f;\n\t\t\tleftbits -= 6;\n\t\t\t*ascii_data++ = table_b2a_base64[this_ch];\n\t\t}\n\t}\n\tif (leftbits == 2) {\n\t\t*ascii_data++ = table_b2a_base64[(leftchar & 3) << 4];\n\t\t*ascii_data++ = BASE64_PAD;\n\t\t*ascii_data++ = BASE64_PAD;\n\t} else if (leftbits == 4) {\n\t\t*ascii_data++ = table_b2a_base64[(leftchar & 0xf) << 2];\n\t\t*ascii_data++ = BASE64_PAD;\n\t}\n\t*ascii_data++ = '\\n'; /* Append a courtesy newline */\n\n\t*p_ascii_len = (ssize_t)(ascii_data - ascii_data_start);\n\n\treturn (0);\n}\n\n/**\n * @brief\n *\tEncodes the contents of 'infile' as 'content_encoding', storing\n *      the result in 'outfile'.\n *\n * @param[in]       infile          - the content of this input file will be 
encoded\n * @param[in/out]   outfile         - the output file in which the encoded output is stored\n * @param[in]      content_encoding - specifies the encoding\n * @param[in/out]   msg             - error message\n * @param[in]       msg_len         - specifies the length of the error message\n *\n * @return int\n * @retval = 0\tSUCCESS\n * @retval = 1\tif not SUCCESS, then also sets 'msg' of size 'msg_len'\n */\nint\nencode_hook_content(char *infile, char *outfile, char *content_encoding,\n\t\t    char *msg, size_t msg_len)\n{\n\tFILE *infp = NULL;\n\tFILE *outfp = NULL;\n\tunsigned char in_data[HOOK_BUF_SIZE];\n\tunsigned char out_data[(HOOK_BUF_SIZE * 2) + 3];\n\t/* an upper bound that should work for */\n\t/* both encoded and decoded data */\n\tsize_t nread;\n\tssize_t in_len;\n\tssize_t out_len;\n\tint ret = 0;\n\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"'msg' buffer is NULL\");\n\t\treturn (1);\n\t}\n\tmemset(msg, '\\0', msg_len);\n\n\tif ((infile == NULL) || (outfile == NULL)) {\n\t\tsnprintf(msg, msg_len - 1, \"no infile or outfile\");\n\t\treturn (1);\n\t}\n\n\tif (content_encoding == NULL) {\n\t\tsnprintf(msg, msg_len - 1, \"no content_encoding\");\n\t\treturn (1);\n\t}\n\toutfp = fopen(outfile, \"wb\");\n\tif (outfp == NULL) {\n\t\tsnprintf(msg, msg_len - 1, \"failed to open %s - error %s\", outfile,\n\t\t\t strerror(errno));\n\t\tret = 1;\n\t\tgoto encode_hook_content_exit;\n\t}\n\n#ifdef WIN32\n\tsecure_file(outfile, \"Administrators\",\n\t\t    READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED);\n#endif\n\n\tinfp = fopen(infile, \"rb\");\n\n\tif (infp == NULL) {\n\t\tif (errno == ENOENT) {\n\t\t\tret = 0;\n\t\t} else {\n\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"failed to open %s - error %s\", infile,\n\t\t\t\t strerror(errno));\n\t\t\tret = 1;\n\t\t}\n\n\t\tgoto encode_hook_content_exit;\n\t}\n\n\twhile ((nread = fread(in_data, 1, HOOK_BUF_SIZE, infp)) > 0) {\n\t\tin_len = nread;\n\t\tif 
(strcmp(content_encoding, HOOKSTR_DEFAULT) == 0) {\n\t\t\tout_len = nread;\n\t\t\tmemcpy((char *) out_data, (char *) in_data, in_len);\n\t\t} else if (strcmp(content_encoding, \"base64\") == 0) {\n\t\t\tif (encode_block_base64(in_data, in_len,\n\t\t\t\t\t\tout_data, &out_len, msg, msg_len) != 0) {\n\t\t\t\tret = 1;\n\t\t\t\tgoto encode_hook_content_exit;\n\t\t\t}\n\t\t} else {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"encountered bad content_encoding=%s\",\n\t\t\t\t content_encoding);\n\t\t\tret = 1;\n\t\t\tgoto encode_hook_content_exit;\n\t\t}\n\n\t\tif (out_len > 0) {\n\t\t\tif (fwrite(out_data, 1, out_len, outfp) != out_len) {\n\t\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t\t \"write to %s failed! Aborting...\",\n\t\t\t\t\t outfile);\n\t\t\t\tret = 1;\n\t\t\t\tgoto encode_hook_content_exit;\n\t\t\t}\n\t\t}\n\t}\n\n\tif (fflush(outfp) != 0) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"Failed to flush/close hook file %s (error %s)\", outfile,\n\t\t\t strerror(errno));\n\t\tret = 1;\n\t}\n\nencode_hook_content_exit:\n\tif (infp != NULL)\n\t\tfclose(infp);\n\tif (outfp != NULL)\n\t\tfclose(outfp);\n\n\tif (ret != 0) {\n\t\tif (outfile)\n\t\t\t(void) unlink(outfile);\n\t}\n\n\treturn (ret);\n}\n\n/**\n * @brief\n *\tDecodes the contents of 'infile' encoded as 'content_encoding',\n *\tand storing the result in 'outfile'.\n *\n * @param[in]   infile - the encoded input file\n * @param[in]   outfile - used to store the decoded output\n * @param[in]   content_encoding - specifies the encoding of 'infile'\n * @param[in]   msg - message in case unsuccessful\n * @param[in]   msg_len - length of message\n *\n * @return int\n * @retval  0\tindicates SUCCESS\n * @retval  1   otherwise, setting 'msg' of size 'msg_len'\n */\nint\ndecode_hook_content(char *infile, char *outfile, char *content_encoding,\n\t\t    char *msg, size_t msg_len)\n{\n\tFILE *infp = NULL;\n\tFILE *outfp;\n\tunsigned char in_data[HOOK_BUF_SIZE + 1];\n\tunsigned char out_data[((HOOK_BUF_SIZE + 1) * 2) + 
3];\n\t/* an upper bound that should work for */\n\t/* both encoded and decoded data */\n\tint ret = 0;\n\tssize_t in_len;\n\tssize_t out_len = 0;\n\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"'msg' buffer is NULL\");\n\t\treturn (1);\n\t}\n\tmemset(msg, '\\0', msg_len);\n\n\tif ((infile == NULL) || (outfile == NULL)) {\n\t\tsnprintf(msg, msg_len - 1, \"no infile or outfile\");\n\t\treturn (1);\n\t}\n\n\tif (content_encoding == NULL) {\n\t\tsnprintf(msg, msg_len - 1, \"no content_encoding\");\n\t\treturn (1);\n\t}\n\n\toutfp = fopen(outfile, \"wb\");\n\tif (outfp == NULL) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"failed to open %s - error %s\", outfile,\n\t\t\t strerror(errno));\n\t\tret = 1;\n\t\tgoto decode_hook_content_exit;\n\t}\n\n#ifdef WIN32\n\tsecure_file(outfile, \"Administrators\",\n\t\t    READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED);\n#endif\n\n\tinfp = fopen(infile, \"rb\");\n\n\tif (infp == NULL) {\n\t\tif (errno == ENOENT) {\n\t\t\tret = 0; /* success since no file to decode */\n\t\t} else {\n\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"failed to open %s - error %s\", infile,\n\t\t\t\t strerror(errno));\n\t\t\tret = 1;\n\t\t}\n\t\tgoto decode_hook_content_exit;\n\t}\n\n\twhile (fgets((char *) in_data, sizeof(in_data), infp) != NULL) {\n\t\tin_len = strlen((char *) in_data);\n\t\tif (strcmp(content_encoding, HOOKSTR_DEFAULT) == 0) {\n\t\t\tout_len = in_len;\n\t\t\tmemcpy((char *) out_data, (char *) in_data, in_len);\n\t\t} else if (strcmp(content_encoding, \"base64\") == 0) {\n\t\t\tif (decode_block_base64(in_data, in_len,\n\t\t\t\t\t\tout_data, &out_len, msg, msg_len) != 0) {\n\t\t\t\tret = 1;\n\t\t\t\tgoto decode_hook_content_exit;\n\t\t\t}\n\t\t\tout_data[out_len] = '\\0';\n\t\t} else {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"encountered bad content_encoding=%s\",\n\t\t\t\t content_encoding);\n\t\t\tret = 1;\n\t\t\tgoto decode_hook_content_exit;\n\t\t}\n\n\t\tif (out_len > 0) {\n\t\t\tif 
(fwrite(out_data, 1, out_len, outfp) != out_len) {\n\t\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t\t \"write to %s failed! Aborting...\",\n\t\t\t\t\t outfile);\n\t\t\t\tret = 1;\n\t\t\t\tgoto decode_hook_content_exit;\n\t\t\t}\n\t\t\tout_len = 0;\n\t\t}\n\t}\n\n\tif (fflush(outfp) != 0) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"Failed to flush/close hook file %s (error %s)\", outfile,\n\t\t\t strerror(errno));\n\t\tret = 1;\n\t}\n\ndecode_hook_content_exit:\n\tif (infp != NULL)\n\t\tfclose(infp);\n\n\tif (outfp != NULL)\n\t\tfclose(outfp);\n\n\tif (ret != 0) {\n\t\tif (outfile)\n\t\t\t(void) unlink(outfile);\n\t}\n\treturn (ret);\n}\n\n/**\n * @brief\n * \tPrints all the attributes and their values of 'phook'\n *\n * @param[in]\tphook - hook whose values are being printed out.\n * @param[in]\theading - a string to print out before printing out values.\n */\nvoid\nprint_hook(hook *phook, char *heading)\n{\n\tif (phook == NULL)\n\t\treturn;\n\n\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t \"%s = {%s, %s=%d, %s=%d, %s=%d %s=%d, \"\n\t\t \"%s=(%d) %s=(%d), %s=(%s), %s=%d, %s=%d}\",\n\t\t heading, phook->hook_name ? phook->hook_name : \"\",\n\t\t HOOKATT_ORDER, phook->order,\n\t\t HOOKATT_TYPE, phook->type,\n\t\t HOOKATT_ENABLED, phook->enabled,\n\t\t HOOKATT_USER, phook->user,\n\t\t HOOKATT_DEBUG, phook->debug,\n\t\t HOOKATT_FAIL_ACTION, phook->fail_action,\n\t\t HOOKATT_EVENT, hook_event_as_string(phook->event),\n\t\t HOOKATT_ALARM, phook->alarm,\n\t\t HOOKATT_FREQ, phook->freq);\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_HOOK,\n\t\t  LOG_INFO, __func__, log_buffer);\n\n\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t \"checksums: %s: hook_control_checksum=%lu hook_script_checksum=%lu hook_config_checksum=%lu\",\n\t\t phook->hook_name ? 
phook->hook_name : \"\",\n\t\t phook->hook_control_checksum,\n\t\t phook->hook_script_checksum,\n\t\t phook->hook_config_checksum);\n\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK,\n\t\t  LOG_INFO, __func__, log_buffer);\n}\n\n/**\n * @brief\n * \tPrints all the attributes and their values of all hooks appearing\n *\tin the system hooks list representing 'event'.\n *\n * @param[in] event - the event whose hooks are being printed. If 'event' is 0,\n * \tmeans to print out the svr_allhooks list.\n */\nvoid\nprint_hooks(unsigned int event)\n{\n\thook *phook;\n\tint i;\n\tpbs_list_head l_elem;\n\tchar ev_str[40];\n\tchar heading[80];\n\n\tif (event == HOOK_EVENT_QUEUEJOB) {\n\t\tl_elem = svr_queuejob_hooks;\n\t\tstrcpy(ev_str, HOOKSTR_QUEUEJOB);\n\t} else if (event == HOOK_EVENT_POSTQUEUEJOB) {\n\t\tl_elem = svr_postqueuejob_hooks;\n\t\tstrcpy(ev_str, HOOKSTR_POSTQUEUEJOB);\n\t} else if (event == HOOK_EVENT_MODIFYJOB) {\n\t\tl_elem = svr_modifyjob_hooks;\n\t\tstrcpy(ev_str, HOOKSTR_MODIFYJOB);\n\t} else if (event == HOOK_EVENT_RESVSUB) {\n\t\tl_elem = svr_resvsub_hooks;\n\t\tstrcpy(ev_str, HOOKSTR_RESVSUB);\n\t} else if (event == HOOK_EVENT_MODIFYRESV) {\n\t\tl_elem = svr_modifyresv_hooks;\n\t\tstrcpy(ev_str, HOOKSTR_MODIFYRESV);\n\t} else if (event == HOOK_EVENT_MOVEJOB) {\n\t\tl_elem = svr_movejob_hooks;\n\t\tstrcpy(ev_str, HOOKSTR_MOVEJOB);\n\t} else if (event == HOOK_EVENT_RUNJOB) {\n\t\tl_elem = svr_runjob_hooks;\n\t\tstrcpy(ev_str, HOOKSTR_RUNJOB);\n\t} else if (event == HOOK_EVENT_JOBOBIT) {\n\t\tl_elem = svr_jobobit_hooks;\n\t\tstrcpy(ev_str, HOOKSTR_JOBOBIT);\n\t} else if (event == HOOK_EVENT_MANAGEMENT) {\n\t\tl_elem = svr_management_hooks;\n\t\tstrcpy(ev_str, HOOKSTR_MANAGEMENT);\n\t} else if (event == HOOK_EVENT_MODIFYVNODE) {\n\t\tl_elem = svr_modifyvnode_hooks;\n\t\tstrcpy(ev_str, HOOKSTR_MODIFYVNODE);\n\t} else if (event == HOOK_EVENT_PERIODIC) {\n\t\tl_elem = svr_periodic_hooks;\n\t\tstrcpy(ev_str, HOOKSTR_PERIODIC);\n\t} else if (event == 
HOOK_EVENT_PROVISION) {\n\t\tl_elem = svr_provision_hooks;\n\t\tstrcpy(ev_str, HOOKSTR_PROVISION);\n\t} else if (event == HOOK_EVENT_RESV_CONFIRM) {\n\t\tl_elem = svr_resv_confirm_hooks;\n\t\tstrcpy(ev_str, HOOKSTR_RESV_CONFIRM);\n\t} else if (event == HOOK_EVENT_RESV_BEGIN) {\n\t\tl_elem = svr_resv_begin_hooks;\n\t\tstrcpy(ev_str, HOOKSTR_RESV_BEGIN);\n\t} else if (event == HOOK_EVENT_RESV_END) {\n\t\tl_elem = svr_resv_end_hooks;\n\t\tstrcpy(ev_str, HOOKSTR_RESV_END);\n\t} else if (event == HOOK_EVENT_EXECJOB_BEGIN) {\n\t\tl_elem = svr_execjob_begin_hooks;\n\t\tstrcpy(ev_str, HOOKSTR_EXECJOB_BEGIN);\n\t} else if (event == HOOK_EVENT_EXECJOB_PROLOGUE) {\n\t\tl_elem = svr_execjob_prologue_hooks;\n\t\tstrcpy(ev_str, HOOKSTR_EXECJOB_PROLOGUE);\n\t} else if (event == HOOK_EVENT_EXECJOB_EPILOGUE) {\n\t\tl_elem = svr_execjob_epilogue_hooks;\n\t\tstrcpy(ev_str, HOOKSTR_EXECJOB_EPILOGUE);\n\t} else if (event == HOOK_EVENT_EXECJOB_END) {\n\t\tl_elem = svr_execjob_end_hooks;\n\t\tstrcpy(ev_str, HOOKSTR_EXECJOB_END);\n\t} else if (event == HOOK_EVENT_EXECJOB_PRETERM) {\n\t\tl_elem = svr_execjob_preterm_hooks;\n\t\tstrcpy(ev_str, HOOKSTR_EXECJOB_PRETERM);\n\t} else if (event == HOOK_EVENT_EXECJOB_LAUNCH) {\n\t\tl_elem = svr_execjob_launch_hooks;\n\t\tstrcpy(ev_str, HOOKSTR_EXECJOB_LAUNCH);\n\t} else if (event == HOOK_EVENT_EXECHOST_PERIODIC) {\n\t\tl_elem = svr_exechost_periodic_hooks;\n\t\tstrcpy(ev_str, HOOKSTR_EXECHOST_PERIODIC);\n\t} else if (event == HOOK_EVENT_EXECHOST_STARTUP) {\n\t\tl_elem = svr_exechost_startup_hooks;\n\t\tstrcpy(ev_str, HOOKSTR_EXECHOST_STARTUP);\n\t} else if (event == HOOK_EVENT_EXECJOB_ATTACH) {\n\t\tl_elem = svr_execjob_attach_hooks;\n\t\tstrcpy(ev_str, HOOKSTR_EXECJOB_ATTACH);\n\t} else if (event == HOOK_EVENT_EXECJOB_RESIZE) {\n\t\tl_elem = svr_execjob_resize_hooks;\n\t\tstrcpy(ev_str, HOOKSTR_EXECJOB_RESIZE);\n\t} else if (event == HOOK_EVENT_EXECJOB_ABORT) {\n\t\tl_elem = svr_execjob_abort_hooks;\n\t\tstrcpy(ev_str, 
HOOKSTR_EXECJOB_ABORT);\n\t} else if (event == HOOK_EVENT_EXECJOB_POSTSUSPEND) {\n\t\tl_elem = svr_execjob_postsuspend_hooks;\n\t\tstrcpy(ev_str, HOOKSTR_EXECJOB_POSTSUSPEND);\n\t} else if (event == HOOK_EVENT_EXECJOB_PRERESUME) {\n\t\tl_elem = svr_execjob_preresume_hooks;\n\t\tstrcpy(ev_str, HOOKSTR_EXECJOB_PRERESUME);\n\t} else {\n\t\tl_elem = svr_allhooks;\n\t\tstrcpy(ev_str, \"ALLHOOKS\");\n\t}\n\n\tphook = (hook *) GET_NEXT(l_elem);\n\ti = 0;\n\twhile (phook) {\n\n\t\tsprintf(heading, \"%s hook[%d]\", ev_str, i);\n\t\tprint_hook(phook, heading);\n\t\tif (event == HOOK_EVENT_QUEUEJOB)\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_queuejob_hooks);\n\t\telse if (event == HOOK_EVENT_POSTQUEUEJOB)\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_postqueuejob_hooks);\n\t\telse if (event == HOOK_EVENT_MODIFYJOB)\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_modifyjob_hooks);\n\t\telse if (event == HOOK_EVENT_RESVSUB)\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_resvsub_hooks);\n\t\telse if (event == HOOK_EVENT_MODIFYRESV)\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_modifyresv_hooks);\n\t\telse if (event == HOOK_EVENT_MOVEJOB)\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_movejob_hooks);\n\t\telse if (event == HOOK_EVENT_RUNJOB)\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_runjob_hooks);\n\t\telse if (event == HOOK_EVENT_JOBOBIT)\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_jobobit_hooks);\n\t\telse if (event == HOOK_EVENT_MANAGEMENT)\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_management_hooks);\n\t\telse if (event == HOOK_EVENT_MODIFYVNODE)\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_modifyvnode_hooks);\n\t\telse if (event == HOOK_EVENT_PROVISION)\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_provision_hooks);\n\t\telse if (event == HOOK_EVENT_PERIODIC)\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_periodic_hooks);\n\t\telse if (event == HOOK_EVENT_RESV_CONFIRM)\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_resv_confirm_hooks);\n\t\telse if (event == HOOK_EVENT_RESV_BEGIN)\n\t\t\tphook = (hook *) 
GET_NEXT(phook->hi_resv_begin_hooks);\n\t\telse if (event == HOOK_EVENT_RESV_END)\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_resv_end_hooks);\n\t\telse if (event == HOOK_EVENT_EXECJOB_BEGIN)\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_execjob_begin_hooks);\n\t\telse if (event == HOOK_EVENT_EXECJOB_PROLOGUE)\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_execjob_prologue_hooks);\n\t\telse if (event == HOOK_EVENT_EXECJOB_EPILOGUE)\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_execjob_epilogue_hooks);\n\t\telse if (event == HOOK_EVENT_EXECJOB_END)\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_execjob_end_hooks);\n\t\telse if (event == HOOK_EVENT_EXECJOB_PRETERM)\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_execjob_preterm_hooks);\n\t\telse if (event == HOOK_EVENT_EXECJOB_LAUNCH)\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_execjob_launch_hooks);\n\t\telse if (event == HOOK_EVENT_EXECHOST_PERIODIC)\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_exechost_periodic_hooks);\n\t\telse if (event == HOOK_EVENT_EXECHOST_STARTUP)\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_exechost_startup_hooks);\n\t\telse if (event == HOOK_EVENT_EXECJOB_ATTACH)\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_execjob_attach_hooks);\n\t\telse if (event == HOOK_EVENT_EXECJOB_RESIZE)\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_execjob_resize_hooks);\n\t\telse if (event == HOOK_EVENT_EXECJOB_ABORT)\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_execjob_abort_hooks);\n\t\telse if (event == HOOK_EVENT_EXECJOB_POSTSUSPEND)\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_execjob_postsuspend_hooks);\n\t\telse if (event == HOOK_EVENT_EXECJOB_PRERESUME)\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_execjob_preresume_hooks);\n\t\telse\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_allhooks);\n\n\t\ti++;\n\t}\n}\n\n/**\n * @brief\n * \tReads a <filename> under global variable path_hooks,\n *\t\twhere <filename> must have the format: <basename>.HK,\n *\t\tcreating a hook structure out of the read data.\n *\n * @param[in]   filename - contains the 
hook data, where the filename must have\n *\t\t\t the format <basename>.HK. <basename> must match a\n *\t\t\t\"hook_name=<basename>\" entry, which becomes the\n *\t\t\thook structure's hook_name value.\n * @param[in]\thookfp - If this is not NULL, then use this file pointer to\n *\t\t\tget the contents of the hook file.\n * @param[in/out] msg\t- buffer that is filled with an error message, if any.\n * @param[in]\t  msg_len - size of 'msg'.\n * @param[in] \t  pyalloc_func - the special function that\n *\t\t\tcreates a bytecode representation of <basename>.PY,\n *\t\t\tthe actual Python hook script.\n *\t\t\tEx. pbs_python_ext_alloc_python_script()\n * @param[in] \tpyfree_func - the special free function for the Python-related\n *\t\t\tfield.\n *\t\t\tEx. pbs_python_ext_free_python_script()\n * @return\thook *\n * @retval\tpointer to hook structure mapping the recovered hook.\n * @retval\tNULL - for any error encountered.\n *\n */\nhook *\nhook_recov(char *filename, FILE *hookfp, char *msg, size_t msg_len,\n\t   int (*pyalloc_func)(const char *, struct python_script **),\n\t   void (*pyfree_func)(struct python_script *))\n{\n\thook *phook;\n\tchar basename[MAXPATHLEN + 1];\n\tchar *p;\n\tFILE *fp = NULL;\n\tchar linebuf[BUFSIZ];\n\tint linenum;\n\tchar hook_script[MAXPATHLEN + 1];\n\tchar hook_config[MAXPATHLEN + 1];\n\tchar *hook_name;\n\tint created_here = 0;\n\n\tif (msg == NULL) { /* should not happen */\n\t\tlog_err(PBSE_INTERNAL, __func__, \"'msg' buffer is NULL\");\n\t\treturn NULL;\n\t}\n\tmemset(msg, '\\0', msg_len);\n\tmemset(basename, '\\0', MAXPATHLEN + 1);\n\n\tp = strstr(filename, HOOK_FILE_SUFFIX);\n\tif ((p == NULL) || (strcmp(p, HOOK_FILE_SUFFIX) != 0)) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"bad filename %s format: should have %s suffix\",\n\t\t\t filename, HOOK_FILE_SUFFIX);\n\t\treturn NULL;\n\t}\n\n\tstrncpy(basename, filename, p - filename);\n\n\thook_name = basename;\n\tif ((p = strrchr(basename, '/')) != NULL) {\n\t\thook_name = p + 
1;\n\t}\n\n\t/* Reuse hook entry if any */\n\tphook = find_hook(hook_name);\n\tif (phook != NULL) {\n\t\thook_init(phook, pyfree_func);\n\t\tclear_hook_links(phook);\n\t\tappend_link(&svr_allhooks, &phook->hi_allhooks, phook);\n\t\tcreated_here = 0;\n\t} else {\n\t\tphook = hook_alloc();\n\t\tif (phook == NULL) {\n\t\t\tsnprintf(msg, msg_len - 1, \"hook_alloc() returned NULL!\");\n\t\t\treturn NULL;\n\t\t}\n\t\tcreated_here = 1;\n\t\tphook->hook_name = strdup(hook_name);\n\n\t\tif (phook->hook_name == NULL) {\n\t\t\tlog_err(errno, __func__, \"no memory\");\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"Hook name could not be determined!\");\n\t\t\tgoto hook_recov_error;\n\t\t}\n\t}\n\n\tif (strncmp(phook->hook_name, HOOK_PBS_PREFIX,\n\t\t    strlen(HOOK_PBS_PREFIX)) == 0) {\n\t\t/* Actually initializing a PBS* prefix hook control file */\n\t\tphook->type = HOOK_PBS;\n\t}\n\n#ifdef WIN32\n\t/* in case file's permission got corrupted */\n\tsecure_file(filename, \"Administrators\",\n\t\t    READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED);\n#endif\n\n\tif (hookfp == NULL) {\n\t\tif ((fp = fopen(filename, \"r\")) == NULL) {\n\t\t\tsprintf(log_buffer, \"%s\", filename);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"error %s opening file %s\", strerror(errno), filename);\n\t\t\tgoto hook_recov_error;\n\t\t}\n\t} else {\n\t\tfp = hookfp;\n\t}\n\n\tlinenum = 1;\n\twhile (fgets(linebuf, sizeof(linebuf), fp) != NULL) {\n\t\tchar *p;\n\t\tchar *pequal;\n\n\t\tchar *attname, *attval;\n\n\t\tif (linebuf[0] == '#') {\n\t\t\tcontinue; /* ignore comment lines */\n\t\t}\n\n\t\tif ((p = strrchr(linebuf, '\\n')) != NULL) {\n\t\t\t*p = '\\0';\n\t\t} else {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"line %d is too long\", linenum);\n\t\t\tlog_err(PBSE_SYSTEM, __func__, msg);\n\t\t\tgoto hook_recov_error;\n\t\t}\n\t\t/* ignore initial white space; skip blank lines */\n\t\t/*\n\t\t *      Parse lines of the form\n\t\t *\n\t\t 
*\t<attribute name>=<attribute value>\n\t\t *\n\t\t */\n\t\tp = linebuf;\n\t\twhile ((*p != '\\0') && isspace(*p))\n\t\t\tp++;\n\n\t\tif (*p == '\\0')\n\t\t\tcontinue; /* empty line */\n\n\t\tif ((pequal = strchr(linebuf, '=')) == NULL) {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"line %d:  missing '='\", linenum);\n\t\t\tlog_err(PBSE_SYSTEM, __func__, msg);\n\t\t\tgoto hook_recov_error;\n\t\t}\n\n\t\tif (*(pequal + 1) == '\\0') {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"line %d:  no <attribute value>\",\n\t\t\t\t linenum);\n\t\t\tlog_err(PBSE_SYSTEM, __func__, msg);\n\t\t\tgoto hook_recov_error;\n\t\t}\n\t\t*pequal = '\\0';\n\t\tattname = p;\n\t\tattval = pequal + 1;\n\n\t\tif (strcmp(attname, HOOKATT_NAME) == 0) {\n\t\t\tif (strcmp(attval, hook_name) != 0) {\n\t\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t\t \"failed integrity check - \"\n\t\t\t\t\t \"found %s=%s does not match \\'%s\\'\",\n\t\t\t\t\t HOOKATT_NAME, attval, hook_name);\n\t\t\t\tgoto hook_recov_error;\n\t\t\t}\n\t\t} else if (strcmp(attname, HOOKATT_TYPE) == 0) {\n\t\t\tif (set_hook_type(phook, attval, msg, msg_len,\n\t\t\t\t\t  1) != 0)\n\t\t\t\tgoto hook_recov_error;\n\t\t} else if (strcmp(attname, HOOKATT_USER) == 0) {\n\t\t\tif (set_hook_user(phook, attval, msg, msg_len, 0) != 0)\n\t\t\t\tgoto hook_recov_error;\n\t\t} else if (strcmp(attname, HOOKATT_FAIL_ACTION) == 0) {\n\t\t\tif (set_hook_fail_action(phook, attval, msg, msg_len, 0) != 0)\n\t\t\t\tgoto hook_recov_error;\n\t\t} else if (strcmp(attname, HOOKATT_ENABLED) == 0) {\n\t\t\tif (set_hook_enabled(phook, attval, msg, msg_len) != 0)\n\t\t\t\tgoto hook_recov_error;\n\t\t} else if (strcmp(attname, HOOKATT_DEBUG) == 0) {\n\t\t\tif (set_hook_debug(phook, attval, msg, msg_len) != 0)\n\t\t\t\tgoto hook_recov_error;\n\t\t} else if (strcmp(attname, HOOKATT_EVENT) == 0) {\n\t\t\tif (set_hook_event(phook, attval, msg, msg_len) != 0)\n\t\t\t\tgoto hook_recov_error;\n\t\t} else if (strcmp(attname, HOOKATT_ORDER) == 0) {\n\t\t\tif 
(set_hook_order(phook, attval, msg, msg_len) != 0)\n\t\t\t\tgoto hook_recov_error;\n\t\t} else if (strcmp(attname, HOOKATT_ALARM) == 0) {\n\t\t\tif (set_hook_alarm(phook, attval, msg, msg_len) != 0)\n\t\t\t\tgoto hook_recov_error;\n\t\t} else if (strcmp(attname, HOOKATT_FREQ) == 0) {\n\t\t\tif (set_hook_freq(phook, attval, msg, msg_len) != 0)\n\t\t\t\tgoto hook_recov_error;\n\t\t} else if (strcmp(attname, HOOKATT_PENDING_DELETE) == 0) {\n\t\t\tphook->pending_delete = atoi(attval);\n\t\t} else {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"unknown attribute name \\'%s\\' in line %d\",\n\t\t\t\t attname, linenum);\n\t\t\tgoto hook_recov_error;\n\t\t}\n\t\tlinenum++;\n\t}\n\n\t/* Better not fclose the passed file pointer (hookfp) since the caller */\n\t/* will be fclose-ing that outside. */\n\tif ((fp != NULL) && (fp != hookfp))\n\t\t(void) fclose(fp);\n\n\tphook->hook_control_checksum = crc_file(filename);\n\n\t/* now do checksum of hook config file (if any) */\n\tsnprintf(hook_config, MAXPATHLEN, \"%s%s%s\", path_hooks,\n\t\t phook->hook_name, HOOK_CONFIG_SUFFIX);\n\tphook->hook_config_checksum = crc_file(hook_config);\n\n\t/* now set the hook script */\n\tsnprintf(hook_script, MAXPATHLEN, \"%s%s%s\", path_hooks,\n\t\t phook->hook_name, HOOK_SCRIPT_SUFFIX);\n\n\tif (pyalloc_func != NULL) {\n\t\tif (pyalloc_func(hook_script,\n\t\t\t\t (struct python_script **) &phook->script) == -1) {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"failed to allocate storage for python script %s\",\n\t\t\t\t hook_script);\n\t\t\tlog_err(errno, __func__, msg);\n\t\t} else {\n\t\t\tphook->hook_script_checksum = crc_file(hook_script);\n\t\t}\n\t}\n\n#ifdef WIN32\n\t/* in case file's permission got corrupted - ok if non-existent */\n\tsecure_file(hook_script, \"Administrators\",\n\t\t    READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED);\n#endif\n\n\treturn (phook);\n\nhook_recov_error:\n\t/* Better not fclose the passed file pointer (hookfp) since the caller */\n\t/* will be 
fclose-ing that outside. */\n\tif ((fp != NULL) && (fp != hookfp))\n\t\t(void) fclose(fp);\n\n\tif (created_here) {\n\t\tclear_hook_links(phook);\n\t\thook_free(phook, pyfree_func);\n\t\treturn NULL;\n\t} else {\n\t\t/* reuse phook later */\n\t\thook_init(phook, pyfree_func);\n\t\tclear_hook_links(phook);\n\t\tappend_link(&svr_allhooks, &phook->hi_allhooks, phook);\n\t\treturn NULL;\n\t}\n}\n\n/*\n ************************************************************************\n *   Hook-related Alarm operations.\n ************************************************************************\n */\n\n#ifdef WIN32\n#define ALARM_HANDLER_ARG void\n#else\n#define ALARM_HANDLER_ARG int sig\n#endif\n\n/**\n * @brief\n *\tThe ALARM signal handler.\n *\n * @param[in]   sig\t- The signal received causing this function to get\n *\t\t\tcalled.\n *\n * @note\tInvokes the global python_interrupt_func, if set.\n *\t\tEx. pbs_python_set_interrupt(), which sends\n *\t\tan INT signal (ctrl-C) to the calling process.\n *\n */\nvoid\ncatch_hook_alarm(ALARM_HANDLER_ARG)\n{\n\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t \"alarm call received, interrupting hook execution.\");\n\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_NOTICE, __func__,\n\t\t  log_buffer);\n\n\tif (python_interrupt_func != NULL)\n\t\tpython_interrupt_func();\n}\n\n/**\n * @brief\n *\tSet or unset an alarm signal.\n *\tIf 'sec' > 0, then start an alarm of the given # of 'sec'; otherwise,\n *      if 'sec' is 0, then stop the previous alarm.\n *\n *\tReturn: 0 for success; -1 otherwise.\n *\n * @param[in]   sec - # of seconds to alarm.\n * @param[in] \tpyinter_func - the interrupt function that raises some signal\n *\t\t\tto the calling process.\n *\t\t\tEx. 
pbs_python_set_interrupt() which sends\n *\t\t\t    an INT signal (ctrl-C)\n * @return\tint\n * @retval\t0\tfor success.\n * @retval\t-1 \totherwise.\n *\n */\nint\nset_alarm(int sec, void (*pyinter_func)(void))\n{\n#ifndef WIN32\n\tstatic struct sigaction act, oact;\n\n\tpython_interrupt_func = pyinter_func;\n\n\tsigemptyset(&act.sa_mask);\n\n\tif (sec > 0) {\n\t\t/* set SIGALRM handler to catch_hook_alarm() */\n\t\tact.sa_flags = 0;\n\t\tact.sa_handler = catch_hook_alarm;\n\t\tif (sigaction(SIGALRM, &act, &oact) == -1) {\n\t\t\tlog_event(PBSEVENT_ADMIN | PBSEVENT_SYSTEM,\n\t\t\t\t  PBS_EVENTCLASS_HOOK, LOG_ERR, __func__,\n\t\t\t\t  \"Failed to install alarm\");\n\t\t\treturn (-1);\n\t\t}\n\t\t(void) alarm(sec);\n\t} else {\n\t\t(void) alarm(0);\n\t\t(void) sigaction(SIGALRM, &oact, NULL);\n\t\t/* reset handler for SIGALRM */\n\t}\n#else /* Windows */\n\n\tpython_interrupt_func = pyinter_func;\n\n\tif (sec > 0)\n\t\t(void) win_alarm(sec, catch_hook_alarm);\n\telse\n\t\t(void) win_alarm(0, NULL);\n\n#endif /* end of Windows */\n\treturn (0);\n}\n\n/**\n *\n * @brief\n * \tCleans up files older than HOOKS_TMPFILE_MAX_AGE under\n *\t<path_hooks_workdir> periodically as driven by the\n *\tHOOKS_TMPFILE_NEXT_CLEANUP_PERIOD.\n *\n * @param[in]\tptask\t- pointer to the task structure.\n *\n */\nvoid\ncleanup_hooks_workdir(struct work_task *ptask)\n{\n\tDIR *dir;\n\tstruct dirent *pdirent;\n\tstruct stat sbuf;\n\tchar hook_file[MAXPATHLEN + 1];\n\n\tmemset(hook_file, '\\0', MAXPATHLEN + 1);\n\tdir = opendir(path_hooks_workdir);\n\tif (dir == NULL) {\n\t\tsprintf(log_buffer, \"could not opendir %s\",\n\t\t\tpath_hooks_workdir);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\treturn;\n\t}\n\twhile (errno = 0, (pdirent = readdir(dir)) != NULL) {\n\n\t\tif (pdirent->d_name[0] == '.') {\n\t\t\tif (pdirent->d_name[1] == '\\0' ||\n\t\t\t    (pdirent->d_name[1] == '.' 
&&\n\t\t\t     pdirent->d_name[2] == '\\0'))\n\t\t\t\tcontinue;\n\t\t}\n\n\t\tsnprintf(hook_file, MAXPATHLEN, \"%s%s\",\n\t\t\t path_hooks_workdir, pdirent->d_name);\n\t\tif (stat(hook_file, &sbuf) == -1) {\n\t\t\tsprintf(log_buffer, \"could not stat %s\", hook_file);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\tcontinue;\n\t\t}\n\n\t\t/* remove files older than 'HOOKS_TMPFILE_MAX_AGE' */\n\t\tif ((time_now - sbuf.st_ctime) > HOOKS_TMPFILE_MAX_AGE) {\n\t\t\tif (unlink(hook_file) < 0) {\n\t\t\t\tif (errno != ENOENT) {\n\t\t\t\t\tsprintf(log_buffer, \"could not cleanup %s\",\n\t\t\t\t\t\thook_file);\n\t\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\tif (errno != 0 && errno != ENOENT) {\n\t\tlog_err(errno, __func__, \"readdir\");\n\t}\n\tif (dir) {\n\t\t(void) closedir(dir);\n\t}\n\t/*  cleanup of hooks temp files happen in the next */\n\t/* 'HOOKS_TMPFILE_NEXT_CLEANUP_PERIOD' secs.\t   */\n\t(void) set_task(WORK_Timed, time_now + HOOKS_TMPFILE_NEXT_CLEANUP_PERIOD,\n\t\t\tcleanup_hooks_workdir, NULL);\n}\n\n/**\n * @brief\n *\n *\tReturns the number of hook scripts that are eligible to\n *\tbe executed for the specified 'hook_event'.\n *\tThis means the hook is enabled and has hook content.\n *\n * @param[in] hook_event - the event of the hooks to count\n *\n * @return int\n * @retval <n> number of hooks\n *\n */\nint\nnum_eligible_hooks(unsigned int hook_event)\n{\n\thook *phook;\n\thook *phook_next = NULL;\n\tpbs_list_head *head_ptr;\n\tint num_hooks = 0;\n\n\tswitch (hook_event) {\n\n\t\tcase HOOK_EVENT_EXECJOB_BEGIN:\n\t\t\thead_ptr = &svr_execjob_begin_hooks;\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECJOB_PROLOGUE:\n\t\t\thead_ptr = &svr_execjob_prologue_hooks;\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECJOB_EPILOGUE:\n\t\t\thead_ptr = &svr_execjob_epilogue_hooks;\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECJOB_END:\n\t\t\thead_ptr = &svr_execjob_end_hooks;\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECJOB_PRETERM:\n\t\t\thead_ptr = 
&svr_execjob_preterm_hooks;\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECJOB_LAUNCH:\n\t\t\thead_ptr = &svr_execjob_launch_hooks;\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECHOST_PERIODIC:\n\t\t\thead_ptr = &svr_exechost_periodic_hooks;\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECHOST_STARTUP:\n\t\t\thead_ptr = &svr_exechost_startup_hooks;\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECJOB_ATTACH:\n\t\t\thead_ptr = &svr_execjob_attach_hooks;\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECJOB_RESIZE:\n\t\t\thead_ptr = &svr_execjob_resize_hooks;\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECJOB_ABORT:\n\t\t\thead_ptr = &svr_execjob_abort_hooks;\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECJOB_POSTSUSPEND:\n\t\t\thead_ptr = &svr_execjob_postsuspend_hooks;\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECJOB_PRERESUME:\n\t\t\thead_ptr = &svr_execjob_preresume_hooks;\n\t\t\tbreak;\n\t\tdefault:\n\t\t\treturn (0); /* unexpected event encountered */\n\t}\n\n\tfor (phook = (hook *) GET_NEXT(*head_ptr); phook; phook = phook_next) {\n\t\tswitch (hook_event) {\n\n\t\t\tcase HOOK_EVENT_EXECJOB_BEGIN:\n\t\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_execjob_begin_hooks);\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECJOB_PROLOGUE:\n\t\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_execjob_prologue_hooks);\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECJOB_EPILOGUE:\n\t\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_execjob_epilogue_hooks);\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECJOB_END:\n\t\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_execjob_end_hooks);\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECJOB_PRETERM:\n\t\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_execjob_preterm_hooks);\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECJOB_LAUNCH:\n\t\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_execjob_launch_hooks);\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECHOST_PERIODIC:\n\t\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_exechost_periodic_hooks);\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECHOST_STARTUP:\n\t\t\t\tphook_next = (hook *) 
GET_NEXT(phook->hi_exechost_startup_hooks);\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECJOB_ATTACH:\n\t\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_execjob_attach_hooks);\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECJOB_RESIZE:\n\t\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_execjob_resize_hooks);\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECJOB_ABORT:\n\t\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_execjob_abort_hooks);\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECJOB_POSTSUSPEND:\n\t\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_execjob_postsuspend_hooks);\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECJOB_PRERESUME:\n\t\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_execjob_preresume_hooks);\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\treturn (0); /*  should not get here */\n\t\t}\n\n\t\tif (phook->enabled == FALSE)\n\t\t\tcontinue;\n\n\t\tif (phook->script == NULL)\n\t\t\tcontinue;\n\n\t\tnum_hooks++;\n\t}\n\n\treturn (num_hooks);\n}\n\n/**\n *\n * @brief\n * \tStart profiling the next lines of code, collectively giving them\n *      a 'label' and an 'action' description.\n *\tThe 'label' usually identifies the object being profiled\n *\t(e.g. hook), while the 'action' describes what\n *\tis being captured for the object (e.g. 
\"initialization\").\n *\tBoth 'label' and 'action' uniquely identify the\n *\tdata collected.\n *\n * @param[in]\tlabel - describes a particular object\n * @param[in]\taction - refers to the object's action\n *\n * @param[in]\tprint_start_msg - if 1, then log a \"profile_start\" message.\n *\n * @return void\n *\n */\nvoid\nhook_perf_stat_start(char *label, char *action, int print_start_msg)\n{\n\tchar instance[MAXBUFLEN];\n\n\tif (!will_log_event(PBSEVENT_DEBUG4))\n\t\treturn;\n\n\tif ((label == NULL) || (action == NULL))\n\t\treturn;\n\n\tsnprintf(instance, sizeof(instance), \"label=%s action=%s\", label, action);\n\tperf_stat_start(instance);\n\n\tif (print_start_msg) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"%s profile_start\", instance);\n\t\tlog_event(PBSEVENT_DEBUG4, PBS_EVENTCLASS_HOOK, LOG_INFO, \"hook_perf_stat\", log_buffer);\n\t}\n}\n\n/**\n *\n * @brief\n * \tLog summary of what has been tracked since hook_perf_stat_start()\n *\tcall on the same label, action given.\n *\n * @param[in]\tlabel - refers to a particular object\n * @param[in]\taction - refers to the object's action\n *\n * @param[in]\tprint_end_msg - if 1, then mark a \"profile_stop\" message.\n *\n * @return void\n *\n */\nvoid\nhook_perf_stat_stop(char *label, char *action, int print_end_msg)\n{\n\tchar instance[MAXBUFLEN];\n\tchar *msg;\n\n\tif ((label == NULL) || (action == NULL))\n\t\treturn;\n\n\tsnprintf(instance, sizeof(instance), \"label=%s action=%s\", label, action);\n\n\tif (!will_log_event(PBSEVENT_DEBUG4)) {\n\t\tperf_stat_remove(instance);\n\t\treturn;\n\t}\n\n\tmsg = perf_stat_stop(instance);\n\n\tif (msg == NULL)\n\t\treturn;\n\n\tif (print_end_msg)\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"%s profile_stop\", msg);\n\telse\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"%s\", msg);\n\n\tlog_event(PBSEVENT_DEBUG4, PBS_EVENTCLASS_HOOK, LOG_INFO, \"hook_perf_stat\", log_buffer);\n}\n"
  },
  {
    "path": "src/lib/Libutil/misc_utils.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tmisc_utils.c\n * @brief\n *  Utility functions to condense and unroll a sequence of execvnodes that are\n *  returned by the scheduler for standing reservations.\n *  The objective is to condense in a human-readable format the execvnodes\n *  of each occurrence of a standing reservation, and be able to retrieve each\n *  such occurrence easily.\n *\n *  Example usage (also refer to the debug function int test_execvnode_seq\n *  for a code snippet):\n *\n *  Assume str points to some string.\n *  char *condensed_str;\n *  char **unrolled_str;\n *  char **tofree;\n *\n *  condensed_str = condense_execvnode_seq(str);\n *  unrolled_str = unroll_execvnode_seq(condensed_str, &tofree);\n *  ...access an arbitrary, say 2nd occurrence, index via unrolled_str[2]\n *  free_execvnode_seq(tofree);\n */\n#define _MISC_UTILS_C\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <libutil.h>\n#include <stdio.h>\n#include <string.h>\n#include <stdlib.h>\n#include <stdarg.h>\n#include <ctype.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <unistd.h>\n#include <libpbs.h>\n#include <limits.h>\n#include <pbs_idx.h>\n#include <pbs_ifl.h>\n#include <pbs_internal.h>\n#include <pbs_sched.h>\n#include <pbs_share.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <pwd.h>\n#include <assert.h>\n#include 
<netinet/in.h>\n#include <dlfcn.h>\n#include <grp.h>\n#include <time.h>\n#include <sys/time.h>\n#include <sys/resource.h>\n\n#include \"pbs_error.h\"\n#include \"job.h\"\n#include \"ticket.h\"\n\n#ifdef HAVE_MALLOC_INFO\n#include <malloc.h>\n#endif\n\n#define ISESCAPED(ch) (ch == '\\'' || ch == '\\\"' || ch == ',')\n\n/** @brief conversion array for vnode sharing attribute between str and enum */\nstruct {\n\tchar *vn_str;\n\tenum vnode_sharing vns;\n} str2vns[] = {\n\t{ND_Default_Shared, VNS_DFLT_SHARED},\n\t{ND_Ignore_Excl, VNS_IGNORE_EXCL},\n\t{ND_Force_Shared, VNS_FORCE_SHARED},\n\t{ND_Default_Excl, VNS_DFLT_EXCL},\n\t{ND_Force_Excl, VNS_FORCE_EXCL},\n\t{ND_Default_Exclhost, VNS_DFLT_EXCLHOST},\n\t{ND_Force_Exclhost, VNS_FORCE_EXCLHOST}};\n\n/* Used for collecting performance stats */\ntypedef struct perf_stat {\n\tchar instance[MAXBUFLEN + 1];\n\tdouble walltime;\n\tdouble cputime;\n\tpbs_list_link pi_allstats;\n} perf_stat_t;\n\nstatic int perf_stats_initialized = 0;\nstatic pbs_list_head perf_stats;\n\n/**\n * @brief\n * \tchar_in_set - is the char c in the tokenset\n *\n * @param[in] c - the char\n * @param[in] tokset - string tokenset\n *\n * @return\tint\n * @retval\t1 \tif c is in tokset\n * @retval\t0 \tif c is not in tokset\n */\nint\nchar_in_set(char c, const char *tokset)\n{\n\n\tint i;\n\n\tfor (i = 0; tokset[i] != '\\0'; i++)\n\t\tif (c == tokset[i])\n\t\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \tstring_token - strtok() without an internal state pointer\n *\n * @param[in]      str - the string to tokenize\n * @param[in] \t   tokset - the tokenset to look for\n * @param[in/out]  ret_str - the char ptr where we left off after the tokens\n *\t\t             ** ret_str is opaque to the caller\n *\n * @par\tcall:\n *\tstring_token( string, tokenset, &tokptr)\n *\t2nd call: string_token( NULL, tokenset2, &tokptr)\n *\n * @par\ttokenset can differ between the two calls (as per strtok())\n *\ttokptr is an opaque ptr, just keep passing &tokptr 
into all calls\n *\tto string_token()\n *\n * @return\tchar pointer\n * @retval\treturns ptr to front of string segment (as per strtok())\n *\n */\nchar *\nstring_token(char *str, const char *tokset, char **ret_str)\n{\n\tchar *tok;\n\tchar *search_string;\n\n\tif (str != NULL)\n\t\tsearch_string = str;\n\telse if (ret_str != NULL && *ret_str != NULL)\n\t\tsearch_string = *ret_str;\n\telse\n\t\treturn NULL;\n\n\ttok = strstr(search_string, tokset);\n\n\tif (tok != NULL) {\n\t\twhile (char_in_set(*tok, tokset) && *tok != '\\0') {\n\t\t\t*tok = '\\0';\n\t\t\ttok++;\n\t\t}\n\n\t\tif (ret_str != NULL)\n\t\t\t*ret_str = tok;\n\t} else\n\t\t*ret_str = NULL;\n\n\treturn search_string;\n}\n\n/**\n *\t@brief convert vnode sharing enum into string form\n\n * \t@par Note:\n * \t\tDo not free the return value - it's a statically allocated\n *\t\tstring.\n *\n *\t@param[in] vns - vnode sharing value\n *\n *\t@retval string form of sharing value\n *\t@retval NULL on error\n */\nchar *\nvnode_sharing_to_str(enum vnode_sharing vns)\n{\n\tint i;\n\tint size = sizeof(str2vns) / sizeof(str2vns[0]);\n\n\tfor (i = 0; i < size && str2vns[i].vns != vns; i++)\n\t\t;\n\n\tif (i == size)\n\t\treturn NULL;\n\n\treturn str2vns[i].vn_str;\n}\n\n/**\n *\t@brief convert string form of vnode sharing to enum\n *\n *\t@param[in] vn_str - vnode sharing  string\n *\n *\t@return\tenum\n *\t@retval vnode sharing value\n *\t@retval VNS_UNSET if not found\n */\nenum vnode_sharing\nstr_to_vnode_sharing(char *vn_str)\n{\n\tint i;\n\tint size = sizeof(str2vns) / sizeof(str2vns[0]);\n\n\tif (vn_str == NULL)\n\t\treturn VNS_UNSET;\n\n\tfor (i = 0; i < size && strcmp(vn_str, str2vns[i].vn_str) != 0; i++)\n\t\t;\n\n\tif (i == size)\n\t\treturn VNS_UNSET;\n\n\treturn str2vns[i].vns;\n}\n\n/**\n *\n * @brief concatenate two strings by expanding target string as needed.\n * \t  Operation: strbuf += str\n *\n *\t@param[in, out] strbuf - string that will expand to accommodate the\n *\t\t\t        concatenation of 
'str' - if null, new buffer allocated\n *\t@param[in, out] ssize - if not NULL, allocated size of strbuf\n *\t@param[in]      str   - string to concatenate to 'strbuf'\n *\n *\t@return char *\n *\t@retval pointer to the resulting string on success (*strbuf)\n *\t@retval NULL on failure\n */\nchar *\npbs_strcat(char **strbuf, int *ssize, const char *str)\n{\n\tint len;\n\tint rbuf_len;\n\tchar *tmp;\n\tchar *rbuf;\n\tint size;\n\n\tif (str == NULL)\n\t\treturn *strbuf;\n\n\trbuf = *strbuf;\n\tsize = ssize == NULL ? 0 : *ssize;\n\n\tlen = strlen(str);\n\trbuf_len = rbuf == NULL ? 0 : strlen(rbuf);\n\n\tif (rbuf_len + len >= size) {\n\t\tif (len > size)\n\t\t\tsize = len * 2;\n\t\telse\n\t\t\tsize *= 2;\n\n\t\ttmp = realloc(rbuf, size + 1);\n\t\tif (tmp == NULL)\n\t\t\treturn NULL;\n\t\tif (ssize)\n\t\t\t*ssize = size;\n\t\t*strbuf = tmp;\n\t\trbuf = tmp;\n\t\t/* first allocate */\n\t\tif (rbuf_len == 0)\n\t\t\trbuf[0] = '\\0';\n\t}\n\n\treturn strcat(rbuf, str);\n}\n\n/**\n *\n * @brief special purpose strcpy for chain copying strings\n *        primary difference with normal strcpy is that it\n *        returns the destination buffer position just past\n *        the copied data. 
Thus the next string can be just\n *        added to the returned pointer.\n *\n * @param[in] dest - pointer to the destination buffer\n * @param[in] src  - pointer to the source buffer\n *\n * @return char *\n * @retval pointer to the end of the resulting string\n *\n * @note: Caller needs to ensure space and non-NULL pointers\n *        This function is created for performance so does not\n *        verify any parameters\n */\nchar *\npbs_strcpy(char *dest, const char *src)\n{\n\twhile (*src)\n\t\t*dest++ = *src++;\n\n\t*dest = '\\0';\n\n\treturn dest;\n}\n\n/**\n *\n * @brief general purpose strncpy function that will make sure to\n *        copy '\\0' at the end of the buffer.\n *\n * @param[in] dest - pointer to the destination buffer\n * @param[in] src  - pointer to the source string\n * @param[in] n    - size of destination buffer\n *\n * @return char *\n * @retval pointer to the destination string\n *\n * @note: Caller needs to ensure non-NULL pointers\n */\nchar *\npbs_strncpy(char *dest, const char *src, size_t n)\n{\n\tif (strlen(src) < n - 1)\n\t\tstrcpy(dest, src);\n\telse {\n\t\tmemcpy(dest, src, n - 1);\n\t\tdest[n - 1] = '\\0';\n\t}\n\treturn dest;\n}\n\n/**\n * @brief\n *\tget a line from a file of any length.  
Extend string via realloc\n *\tif necessary\n *\n * @param fp[in] - open file\n * @param pbuf[in,out] - pointer to buffer to fill (may change ala realloc)\n * @param pbuf_size[in,out] - size of buf (may increase ala realloc)\n *\n * @return char *\n * @retval pointer to *pbuf(the string pbuf points at) on successful read\n * @retval NULL on EOF or error\n */\n#define PBS_FGETS_LINE_LEN 8192\nchar *\npbs_fgets(char **pbuf, int *pbuf_size, FILE *fp)\n{\n\tchar fbuf[PBS_FGETS_LINE_LEN];\n\tchar *buf;\n\tchar *p;\n\n\tif (fp == NULL || pbuf == NULL || pbuf_size == NULL)\n\t\treturn NULL;\n\n\tif (*pbuf_size == 0) {\n\t\tif ((*pbuf = malloc(PBS_FGETS_LINE_LEN)) == NULL)\n\t\t\treturn NULL;\n\t\t*pbuf_size = PBS_FGETS_LINE_LEN;\n\t}\n\tbuf = *pbuf;\n\n\tbuf[0] = '\\0';\n\twhile ((p = fgets(fbuf, PBS_FGETS_LINE_LEN, fp)) != NULL) {\n\t\tbuf = pbs_strcat(pbuf, pbuf_size, fbuf);\n\t\tif (buf == NULL)\n\t\t\treturn NULL;\n\n\t\tif (buf[strlen(buf) - 1] == '\\n') /* we've reached the end of the line */\n\t\t\tbreak;\n\t}\n\tif (p == NULL && buf[0] == '\\0')\n\t\treturn NULL;\n\n\treturn *pbuf;\n}\n\n/**\n * @brief\n * \tHelper function for pbs_fgets_extend() and callers to determine if string requires extending\n *\n * @param[in] buf - line to check for extendable ending\n *\n * @return int\n * @retval offset to extendable location, -1 if not extendable\n */\nint\npbs_extendable_line(char *buf)\n{\n\tint len = 0;\n\n\tif (buf == NULL)\n\t\treturn -1;\n\n\tlen = strlen(buf);\n\n\t/* we have two options:\n\t * 1) We extend: string ends in a '\\' and 0 or more whitespace\n\t * 2) we do not extend: Not #1\n\t * In the case of #1, we want the string to end just before the '\\'\n\t * In the case of #2 we want to leave the string alone.\n\t */\n\twhile (len > 0 && isspace(buf[len - 1]))\n\t\tlen--;\n\n\tif (len > 0 && buf[len - 1] == '\\\\')\n\t\treturn len - 1;\n\telse /* We're at the end of a non-extended line */\n\t\treturn -1;\n}\n\n/**\n * @brief get a line from a file pointed 
at by fp.  The line can be extended\n *\t  onto the next line if it ends in a backslash (\\).  If the string is\n *\t  extended, the lines will be combined and the backslash will be\n *        stripped.\n *\n * @param[in] fp pointer to file to read from\n * @param[in, out] pbuf_size - pointer to size of buffer\n * @param[in, out] pbuf - pointer to buffer\n *\n * @return char *\n * @retval string read from file\n * @retval NULL - EOF or error\n * @par MT-Safe: no\n */\nchar *\npbs_fgets_extend(char **pbuf, int *pbuf_size, FILE *fp)\n{\n\tstatic char *locbuf = NULL;\n\tstatic int locbuf_size = 0;\n\tchar *buf;\n\tchar *p;\n\tint len;\n\n\tif (pbuf == NULL || pbuf_size == NULL || fp == NULL)\n\t\treturn NULL;\n\n\tif (locbuf == NULL) {\n\t\tif ((locbuf = malloc(PBS_FGETS_LINE_LEN)) == NULL)\n\t\t\treturn NULL;\n\t\tlocbuf_size = PBS_FGETS_LINE_LEN;\n\t}\n\n\tif (*pbuf_size == 0 || *pbuf == NULL) {\n\t\tif ((*pbuf = malloc(PBS_FGETS_LINE_LEN)) == NULL)\n\t\t\treturn NULL;\n\t\t*pbuf_size = PBS_FGETS_LINE_LEN;\n\t}\n\n\tbuf = *pbuf;\n\tlocbuf[0] = '\\0';\n\tbuf[0] = '\\0';\n\n\twhile ((p = pbs_fgets(&locbuf, &locbuf_size, fp)) != NULL) {\n\t\tif (pbs_strcat(pbuf, pbuf_size, locbuf) == NULL)\n\t\t\treturn NULL;\n\n\t\tbuf = *pbuf;\n\t\tlen = pbs_extendable_line(buf);\n\t\tif (len >= 0)\n\t\t\tbuf[len] = '\\0'; /* remove the backslash (\\) */\n\t\telse\n\t\t\tbreak;\n\t}\n\n\t/* if we read just EOF */\n\tif (p == NULL && buf[0] == '\\0')\n\t\treturn NULL;\n\n\treturn buf;\n}\n\n/**\n * @brief\n * \tInternal helper function for pbs_asprintf() to determine the length of post-formatted string\n *\n * @param[in] fmt - printf format string\n * @param[in] args - va_list arguments from pbs_asprintf()\n *\n * @return int\n * @retval length of post-formatted string\n */\nint\npbs_asprintf_len(const char *fmt, va_list args)\n{\n\tint len;\n#ifdef WIN32\n\tlen = _vscprintf(fmt, args);\n#else\n\t{\n\t\tva_list dupargs;\n\t\tchar c;\n\n\t\tva_copy(dupargs, args);\n\t\tlen = 
vsnprintf(&c, 0, fmt, dupargs);\n\t\tva_end(dupargs);\n\t}\n#endif\n\treturn len;\n}\n\n/**\n * @brief\n * \tInternal helper function for pbs_asprintf() to allocate memory and format the string\n *\n * @param[in] len - length of post-formatted string\n * @param[in] fmt - format for printed string\n * @param[in] args - va_list arguments from pbs_asprintf()\n *\n * @return char *\n * @retval formatted string in allocated buffer\n */\n\nchar *\npbs_asprintf_format(int len, const char *fmt, va_list args)\n{\n\tchar *buf;\n\tint rc;\n\tbuf = malloc(len + 1);\n\tif (!buf)\n\t\treturn NULL;\n\trc = vsnprintf(buf, len + 1, fmt, args);\n\tif (rc != len) {\n\t\tfree(buf);\n\t\treturn NULL;\n\t}\n\treturn buf;\n}\n\n/**\n * @brief\n *\tInternal asprintf() implementation for use on all platforms\n *\n * @param[in, out] dest - character pointer that will point to allocated\n *\t\t\t  space ** must be freed by caller **\n * @param[in] fmt - format for printed string\n * @param[in] ... - arguments to format string\n *\n * @return int\n * @retval -1 - Error\n * @retval >=0 - Length of new string, not including terminator\n */\nint\npbs_asprintf(char **dest, const char *fmt, ...)\n{\n\tva_list args;\n\tint len;\n\tchar *buf = NULL;\n\n\tif (!dest)\n\t\treturn -1;\n\t*dest = NULL;\n\tif (!fmt)\n\t\treturn -1;\n\tva_start(args, fmt);\n\tlen = pbs_asprintf_len(fmt, args);\n\tif (len < 0)\n\t\tgoto pbs_asprintf_exit;\n\n\tbuf = pbs_asprintf_format(len, fmt, args);\n\tif (buf == NULL)\n\t\tgoto pbs_asprintf_exit;\n\t*dest = buf;\npbs_asprintf_exit:\n\tva_end(args);\n\tif (buf == NULL) {\n\t\tbuf = malloc(1);\n\t\tif (buf) {\n\t\t\t*buf = '\\0';\n\t\t\t*dest = buf;\n\t\t\treturn -1;\n\t\t}\n\t}\n\treturn len;\n}\n\n/**\n\n * @brief\n *\tCopies 'src' file  to 'dst' file.\n *\n * @param[in] src - file1\n * @param[in] dst - file2\n *\n * @return int\n * @retval 0\t- success\n * @retval COPY_FILE_BAD_INPUT\t- dst or src is NULL.\n * @retval COPY_FILE_BAD_SOURCE\t- failed to open 'src' 
file.\n * @retval COPY_FILE_BAD_DEST - failed to open 'dst' file.\n * @retval COPY_FILE_BAD_WRITE\t- incomplete write\n */\nint\ncopy_file_internal(char *src, char *dst)\n{\n\tFILE *fp_orig = NULL;\n\tFILE *fp_copy = NULL;\n\tchar in_data[BUFSIZ + 1];\n\n\tif ((src == NULL) || (dst == NULL)) {\n\t\treturn (COPY_FILE_BAD_INPUT);\n\t}\n\n\tfp_orig = fopen(src, \"r\");\n\n\tif (fp_orig == NULL) {\n\t\treturn (COPY_FILE_BAD_SOURCE);\n\t}\n\n\tfp_copy = fopen(dst, \"w\");\n\n\tif (fp_copy == NULL) {\n\t\tfclose(fp_orig);\n\t\treturn (COPY_FILE_BAD_DEST);\n\t}\n\n\twhile (fgets(in_data, sizeof(in_data),\n\t\t     fp_orig) != NULL) {\n\t\tif (fputs(in_data, fp_copy) < 0) {\n\t\t\tfclose(fp_orig);\n\t\t\tfclose(fp_copy);\n\t\t\t(void) unlink(dst);\n\t\t\treturn (COPY_FILE_BAD_WRITE);\n\t\t}\n\t}\n\n\tfclose(fp_orig);\n\tif (fclose(fp_copy) != 0) {\n\t\treturn (COPY_FILE_BAD_WRITE);\n\t}\n\n\treturn (0);\n}\n\n/**\n * @brief\n * \tPuts an advisory lock of type 'op' on the file whose descriptor\n *\tis 'fd'.\n *\n * @param[in]\tfd - descriptor of file being locked.\n * @param[in]\top - type of lock: F_WRLCK, F_RDLCK, F_UNLCK\n * @param[in]\tfilename - corresponding name to 'fd' for logging purposes.\n * @param[in]\tlock_retry - number of attempts to retry lock if there's a\n *\t\t\t     failure to lock.\n * @param[out]\terr_msg - filled with the error message string if there's a\n *\t\t\t  failure to lock. 
(can be NULL if the error message need\n *\t\t\t  not be saved).\n * @param[in]\terr_msg_len - size of the err_msg buffer.\n *\n * @return \tint\n * @retval \t0\tfor success\n * @retval\t1\tfor failure to lock\n *\n */\n\nint\nlock_file(int fd, int op, char *filename, int lock_retry,\n\t  char *err_msg, size_t err_msg_len)\n{\n\tint i;\n\tstruct flock flock;\n\n\tlseek(fd, (off_t) 0, SEEK_SET);\n\tflock.l_type = op;\n\tflock.l_whence = SEEK_SET;\n\tflock.l_start = 0;\n\tflock.l_len = 0;\n\n\tfor (i = 0; i < lock_retry; i++) {\n\t\tif ((fcntl(fd, F_SETLK, &flock) == -1) &&\n\t\t    ((errno == EACCES) || (errno == EAGAIN))) {\n\t\t\tif (err_msg != NULL)\n\t\t\t\tsnprintf(err_msg, err_msg_len,\n\t\t\t\t\t \"Failed to lock file %s, retrying\", filename);\n\t\t} else {\n\t\t\treturn 0; /* locked */\n\t\t}\n\t\tif (i < (lock_retry - 1))\n\t\t\tsleep(2);\n\t}\n\tif (err_msg != NULL)\n\t\tsnprintf(err_msg, err_msg_len,\n\t\t\t \"Failed to lock file %s, giving up\", filename);\n\treturn 1;\n}\n\n/**\n * @brief\n *\tcalculate the number of digits to the right of the decimal point in\n *\ta floating point number.  This can be used in conjunction with\n *\tprintf() to not print trailing zeros.\n *\n * @param[in] fl - the floating point number\n * @param[in] digits - the max number of digits to check.  
Can be -1 for max\n *            number of digits for 32/64 bit numbers.\n *\n * @par\tUse: int x = float_digits(fl, 8);\n * \tprintf(\"%0.*f\\n\", x, fl);\n *\n * @note It may be unwise to use a large value for digits (or -1) because\n * the precision of a double will decrease after the first handful of\n * digits.\n *\n * @return int\n * @retval number of digits to the right of the decimal point in fl,\n *         in the range of 0..digits\n *\n * @par MT-Safe: Yes\n */\n\n#define FLOAT_DIGITS_ERROR_FACTOR 1000.0\n/* To be more generic, we should use a signed integer type.\n * This is fine for our current use and gives us 1 more digit.\n */\n#define TRUNCATE(x) (((x) > (double) ULONG_MAX) ? ULONG_MAX : (unsigned long) (x))\n\nint\nfloat_digits(double fl, int digits)\n{\n\tunsigned long num;\n\tint i;\n\n\t/* 2^64 = 18446744073709551616 (18 useful)\n\t * 2^32 = 4294967296 (9 useful)\n\t */\n\tif (digits == -1)\n\t\tdigits = (sizeof(unsigned long) >= 8) ? 18 : 9;\n\n\tfl = ((fl < 0) ? -fl : fl);\n\n\t/* The main part of the algorithm: Floating point numbers are not very exact.\n\t * We need to do something to determine how close we are to the right number.\n\t * We multiply our floating point value by an error factor.  If we see a\n\t * string of 9's or 0's in a row, we stop.  For example, if the error factor\n\t * is 1000, if we see 3 9's or 0's we stop.  
Every time through the loop we\n\t * multiply by 10 to shift over one digit and repeat.\n\t */\n\tfor (i = 0; i < digits; i++) {\n\t\tnum = TRUNCATE((fl - TRUNCATE(fl)) * FLOAT_DIGITS_ERROR_FACTOR);\n\t\tif ((num < 1) || (num >= (long) (FLOAT_DIGITS_ERROR_FACTOR - 1.0)))\n\t\t\tbreak;\n\t\tfl *= 10.0;\n\t}\n\treturn i;\n}\n\n/**\n *\n * @brief\n *\tReturns 1 if 'path' is a full path; otherwise, 0 if it is a\n * \trelative path.\n *\n * @param[in]\tpath\t- the filename path being checked.\n *\n * @return int\n * @retval\t1\tif 'path' is a full path.\n * @retval\t0\tif 'path' is a relative path\n */\nint\nis_full_path(char *path)\n{\n\tchar *cp = path;\n\n\tif (*cp == '\"')\n\t\t++cp;\n\n#ifdef WIN32\n\tif ((cp[0] == '/') || (cp[0] == '\\\\') ||\n\t    (strlen(cp) >= 3 &&\n\t     isalpha(cp[0]) && cp[1] == ':' &&\n\t     ((cp[2] == '\\\\') || (cp[2] == '/'))))\n\t/* matches c:\\ or c:/ */\n#else\n\tif (cp[0] == '/')\n#endif\n\t\treturn (1);\n\treturn (0);\n}\n\n/**\n * @brief\n *\tReplace sub with repl in str.\n *\n * @par Note\n *\tsame as replace_space except the matching character to replace\n *      is not necessarily a space but the supplied 'sub' string, plus leaving\n *      alone existing 'repl' sub strings and no quoting if 'repl' is \"\"\n *\n * @param[in]\tstr  - input buffer containing pattern 'sub'\n * @param[in]\tsub  - pattern to be replaced\n * @param[in]   repl - pattern to be replaced with\n * @param[out]  retstr : string replaced with given pattern.\n *\n */\n\nvoid\nreplace(char *str, char *sub, char *repl, char *retstr)\n{\n\tchar rstr[MAXPATHLEN + 1];\n\tint i, j;\n\tint repl_len;\n\tint has_match = 0;\n\tint sub_len;\n\n\tif (str == NULL || repl == NULL || sub == NULL)\n\t\treturn;\n\n\tif (*str == '\\0') {\n\t\tretstr[0] = '\\0';\n\t\treturn;\n\t}\n\n\tif (*sub == '\\0') {\n\t\tstrcpy(retstr, str);\n\t\treturn;\n\t}\n\n\trepl_len = strlen(repl);\n\tsub_len = strlen(sub);\n\n\ti = 0;\n\twhile (*str != '\\0') {\n\t\tif (strncmp(str, sub, sub_len) == 0 &&\n\t\t   
 repl_len > 0) {\n\t\t\tfor (j = 0; (j < repl_len && i <= MAXPATHLEN); j++, i++) {\n\t\t\t\trstr[i] = repl[j];\n\t\t\t}\n\t\t\thas_match = 1;\n\t\t} else if (strncmp(str, sub, sub_len) == 0) {\n\t\t\tfor (j = 0; (j < sub_len && i <= MAXPATHLEN); j++, i++) {\n\t\t\t\trstr[i] = sub[j];\n\t\t\t}\n\t\t\thas_match = 1;\n\t\t} else {\n\t\t\trstr[i] = *str;\n\t\t\ti++;\n\t\t\thas_match = 0;\n\t\t}\n\n\t\tif (i > MAXPATHLEN) {\n\t\t\tretstr[0] = '\\0';\n\t\t\treturn;\n\t\t}\n\n\t\tif (has_match) {\n\t\t\tstr += sub_len;\n\t\t} else {\n\t\t\tstr++;\n\t\t}\n\t}\n\trstr[i] = '\\0';\n\n\tstrncpy(retstr, rstr, i + 1);\n}\n\n/**\n * @brief\n *\tEscape every occurrence of 'delim' in 'str' with 'esc'\n *\n * @param[in]\tstr     - input string\n * @param[in]\tdelim   - delimiter to be searched in str\n * @param[in]\tesc     - escape character to be added if delim found in str\n *\n * @return\tstring\n * @retval\tNULL\t- memory allocation failed or str is NULL\n * @retval\tretstr\t- output string, with every occurrence of delim escaped with 'esc'\n *\n * @note\n * \tThe string returned should be freed by the caller.\n */\n\nchar *\nescape_delimiter(char *str, char *delim, char esc)\n{\n\tint i = 0;\n\tint j = 0;\n\tint delim_len = 0;\n\tint retstrlen = 0;\n\tchar *retstr = NULL;\n\tchar *temp = NULL;\n\n\tif (str == NULL)\n\t\treturn NULL;\n\n\tif (*str == '\\0' || (delim == NULL || *delim == '\\0') || esc == '\\0') {\n\t\treturn strdup((char *) str);\n\t}\n\tdelim_len = strlen(delim);\n\tretstr = (char *) malloc(MAXBUFLEN);\n\tif (retstr == NULL)\n\t\treturn NULL;\n\tretstrlen = MAXBUFLEN;\n\n\twhile (*str != '\\0') {\n\t\t/* We don't want to use the strncmp function if the delimiter is a single character. 
*/\n\t\tif ((*str == esc && !ISESCAPED(*(str + 1))) || (delim_len == 1 && *str == *delim)) {\n\t\t\tretstr[i++] = esc;\n\t\t\tretstr[i++] = *str++;\n\t\t} else if (strncmp(str, delim, delim_len) == 0 && ((i + 1 + delim_len) < retstrlen)) {\n\t\t\tretstr[i++] = esc;\n\t\t\tfor (j = 0; j < delim_len; j++, i++)\n\t\t\t\tretstr[i] = *str++;\n\t\t} else if ((i + 1 + delim_len) < retstrlen)\n\t\t\tretstr[i++] = *str++;\n\n\t\tif (i >= (retstrlen - (1 + delim_len))) {\n\t\t\tretstrlen *= BUFFER_GROWTH_RATE;\n\t\t\ttemp = (char *) realloc(retstr, retstrlen);\n\t\t\tif (temp == NULL) {\n\t\t\t\tfree(retstr);\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tretstr = temp;\n\t\t}\n\t}\n\tretstr[i] = '\\0';\n\treturn retstr;\n}\n\n/**\n *\n * @brief\n * \tfile_exists: returns 1 if file exists; 0 otherwise.\n *\n * @param[in]\tpath - file pathname being checked.\n *\n * @return \tint\n * @retval\t1\tif 'path' exists.\n * @retval\t0\tif 'path' does not exist.\n */\nint\nfile_exists(char *path)\n{\n\tstruct stat sbuf;\n\n#ifdef WIN32\n\tif (lstat(path, &sbuf) == -1) {\n\t\tint ret = GetLastError();\n\t\tif (ret == ERROR_FILE_NOT_FOUND ||\n\t\t    ret == ERROR_PATH_NOT_FOUND) {\n\t\t\treturn 0;\n\t\t}\n\t}\n#else\n\tif ((stat(path, &sbuf) == -1) &&\n\t    (errno == ENOENT))\n\t\treturn (0);\n#endif\n\treturn (1);\n}\n\n/**\n * @brief\n *\tGiven two hostnames, compare their short and full names to determine\n *      whether they point to the same host.\n *\n * @param[in]\t        host1 - first host\n * @param[in]           host2 - second host\n *\n * @return              int\n * @retval\t        0 - hostnames point to different hosts\n * @retval\t        1 - hostnames point to same host\n *\n */\nint\nis_same_host(char *host1, char *host2)\n{\n\tchar *host1_f = NULL;\n\tchar *host2_f = NULL;\n\n\tstatic void *hostmap = NULL;\n\n\tif (host1 == NULL || host2 == NULL)\n\t\treturn 0;\n\n\tif (hostmap == NULL)\n\t\thostmap = pbs_idx_create(0, 0);\n\n\tif (strcasecmp(host1, host2) == 
0)\n\t\treturn 1;\n\n\tpbs_idx_find(hostmap, (void **) &host1, (void **) &host1_f, NULL);\n\tpbs_idx_find(hostmap, (void **) &host2, (void **) &host2_f, NULL);\n\n\tif (host1_f == NULL) {\n\t\tchar host1_full[PBS_MAXHOSTNAME + 1];\n\n\t\tif (get_fullhostname(host1, host1_full, PBS_MAXHOSTNAME) != 0 || host1_full[0] == '\\0')\n\t\t\treturn 0;\n\t\thost1_f = strdup(host1_full);\n\t\tpbs_idx_insert(hostmap, host1, host1_f);\n\t}\n\tif (host2_f == NULL) {\n\t\tchar host2_full[PBS_MAXHOSTNAME + 1];\n\n\t\tif (get_fullhostname(host2, host2_full, PBS_MAXHOSTNAME) != 0 || host2_full[0] == '\\0')\n\t\t\treturn 0;\n\t\thost2_f = strdup(host2_full);\n\t\tpbs_idx_insert(hostmap, host2, host2_f);\n\t}\n\tif (host1_f == NULL || host2_f == NULL)\n\t\treturn 0;\n\n\tif (strcasecmp(host1_f, host2_f) == 0)\n\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief Determine if place_def is in place_str\n * @see place_sharing_type and getplacesharing\n *\n * @param[in] place_str - The string representation of the place directive\n * @param[in] place_def - The type of place to check\n *\n * @return Whether the place directive contains the type of exclusivity\n * queried for.\n * @retval 1 If the place directive is of the type queried\n * @retval 0 If the place directive is not of the type queried\n *\n *@par MT-Safe: No\n */\nint\nplace_sharing_check(char *place_str, char *place_def)\n{\n\tchar *buf;\n\tchar *p;\n\tchar *psave;\n\n\tif ((place_str == NULL) || (*place_str == '\\0'))\n\t\treturn 0;\n\n\tif ((place_def == NULL) || (*place_def == '\\0'))\n\t\treturn 0;\n\n\tbuf = strdup(place_str);\n\tif (buf == NULL)\n\t\treturn 0;\n\n\tfor (p = buf; (p = strtok_r(p, \":\", &psave)) != NULL; p = NULL) {\n\t\tif (strcmp(p, place_def) == 0) {\n\t\t\tfree(buf);\n\t\t\treturn 1;\n\t\t}\n\t}\n\tfree(buf);\n\treturn 0;\n}\n\n/**\n *\n * @brief\n * \tDetermines if 'str' is found in 'sep'-separated 'string_list'.\n *\n * @param[in]\tstr\t- the substring to look for.\n * @param[in]\tsep\t- the separator 
character in 'string_list'\n * @param[in]\tstring_list - the list of strings to check for a 'str'\n *\t\t\t\tmatch.\n * @return int\n * @retval\t1\t- if 'str' is found in 'string_list'.\n * @retval\t0\t- if 'str' not found.\n *\n * @note\n *\tIn the absence of a 'sep' value (i.e. empty string ''),\n *\twhite space is the default delimiter.\n *\tIf there's a 'sep' value, the white space character is also treated\n *\tas an additional delimiter, matching only the strings that don't\n *\tcontain leading/trailing 'sep' char and white space character.\n */\nint\nin_string_list(char *str, char sep, char *string_list)\n{\n\tchar *p = NULL;\n\tchar *p2 = NULL;\n\tchar *p_end = NULL;\n\tchar *ptoken = NULL;\n\tint found_match = 0;\n\n\tif ((str == NULL) || (str[0] == '\\0') || (string_list == NULL)) {\n\t\treturn (0);\n\t}\n\n\tp2 = strdup(string_list);\n\tif (p2 == NULL) {\n\t\treturn (0);\n\t}\n\n\tp = p2;\n\tp_end = p + strlen(string_list);\n\n\twhile (p < p_end) {\n\n\t\t/* skip past [<sep> ] characters */\n\t\twhile ((*p != '\\0') && ((*p == sep) || (*p == ' '))) {\n\t\t\tp++;\n\t\t}\n\n\t\tif (*p == '\\0')\n\t\t\tbreak;\n\n\t\tptoken = p; /* start of token */\n\n\t\t/* skip past characters not in [<sep> ] */\n\t\twhile ((*p != '\\0') && ((*p != sep) && (*p != ' '))) {\n\t\t\tp++;\n\t\t}\n\t\t*p = '\\0'; /* delimiter value is nulled */\n\t\tif (strcmp(str, ptoken) == 0) {\n\t\t\tfound_match = 1;\n\t\t\tbreak;\n\t\t}\n\t\tp++;\n\t}\n\n\tif (p2) {\n\t\t(void) free(p2);\n\t}\n\treturn (found_match);\n}\n\n/**\n *\n *\t@brief break apart a delimited string into an array of strings\n *\n *\t@param[in] strlist - the delimited string\n *\t@param[in] delim - the separator character\n *\n *\t@return char **\n *\n *\t@note\n *\t\tThe returned array of strings has to be freed by the caller.\n */\nchar **\nbreak_delimited_str(char *strlist, char delim)\n{\n\tchar sep[2] = {0};\n\tint num_words = 1; /* number of words delimited by 'delim' */\n\tchar **arr = NULL; /* the 
array of words */\n\tchar *list;\n\tchar *tok; /* used with strtok() */\n\tchar *end;\n\tint i;\n\n\tsep[0] = delim;\n\n\tif (strlist == NULL) {\n\t\tpbs_errno = PBSE_BADATVAL;\n\t\treturn NULL;\n\t}\n\n\tlist = strdup(strlist);\n\n\tif (list != NULL) {\n\t\tchar *saveptr = NULL;\n\n\t\tfor (i = 0; list[i] != '\\0'; i++)\n\t\t\tif (list[i] == delim)\n\t\t\t\tnum_words++;\n\n\t\tif ((arr = (char **) malloc(sizeof(char *) * (num_words + 1))) == NULL) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\tfree(list);\n\t\t\treturn NULL;\n\t\t}\n\n\t\ttok = strtok_r(list, sep, &saveptr);\n\n\t\tfor (i = 0; tok != NULL; i++) {\n\t\t\twhile (isspace((int) *tok))\n\t\t\t\ttok++;\n\n\t\t\tend = &tok[strlen(tok) - 1];\n\n\t\t\twhile (isspace((int) *end)) {\n\t\t\t\t*end = '\\0';\n\t\t\t\tend--;\n\t\t\t}\n\n\t\t\tarr[i] = strdup(tok);\n\t\t\tif (arr[i] == NULL) {\n\t\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\t\tfree(list);\n\t\t\t\tfree_string_array(arr);\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\ttok = strtok_r(NULL, sep, &saveptr);\n\t\t}\n\t\tarr[i] = NULL;\n\t}\n\tif (list != NULL)\n\t\tfree(list);\n\n\treturn arr;\n}\n\n/**\n *\n *\t@brief break apart a comma delimited string into an array of strings\n *\n *\t@param[in] strlist - the comma delimited string\n *\n *\t@return char **\n *\n */\nchar **\nbreak_comma_list(char *strlist)\n{\n\treturn (break_delimited_str(strlist, ','));\n}\n\n/**\n * @brief\n *\t\tDoes a string exist in the given array?\n *\n * @param[in]\tstrarr\t-\tthe string array to search, should be NULL terminated\n * @param[in]\tstr\t-\tthe string to find\n *\n * @return\tint\n * @retval\t1\t: if the string is found\n * @retval\t0\t: the string is not found or on error\n *\n */\nint\nis_string_in_arr(char **strarr, const char *str)\n{\n\tint ind;\n\n\tind = find_string_idx(strarr, str);\n\n\tif (ind >= 0)\n\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tmake copy of string array\n *\n * @param[in] strarr - the string array to make copy\n *\n * @return char **\n * @retval 
!NULL - copy of string array\n * @retval NULL  - failed to make copy of string array\n *\n */\nchar **\ndup_string_arr(char **strarr)\n{\n\tint i = 0;\n\tchar **retarr = NULL;\n\n\tif (strarr == NULL)\n\t\treturn NULL;\n\n\tfor (i = 0; strarr[i] != NULL; i++)\n\t\t;\n\n\tif ((retarr = (char **) malloc((i + 1) * sizeof(char *))) == NULL)\n\t\treturn NULL;\n\n\tfor (i = 0; strarr[i] != NULL; i++) {\n\t\tretarr[i] = strdup(strarr[i]);\n\t\tif (retarr[i] == NULL) {\n\t\t\tfor (i = 0; retarr[i] != NULL; i++)\n\t\t\t\tfree(retarr[i]);\n\t\t\tfree(retarr);\n\t\t\treturn NULL;\n\t\t}\n\t}\n\tretarr[i] = NULL;\n\treturn retarr;\n}\n\n/**\n * @brief\n * \t\tfind index of str in strarr\n *\n * @param[in]\tstrarr\t-\tthe string array to search\n * @param[in]\tstr\t-\tthe string to find\n *\n * @return\tint\n * @retval\tindex of string\n * @retval\t-1\t: if not found\n */\nint\nfind_string_idx(char **strarr, const char *str)\n{\n\tint i;\n\tif (strarr == NULL || str == NULL)\n\t\treturn -1;\n\n\tfor (i = 0; strarr[i] != NULL && strcmp(strarr[i], str); i++)\n\t\t;\n\tif (strarr[i] == NULL)\n\t\treturn -1;\n\n\treturn i;\n}\n\n/**\n * @brief\n *\t\tfree_string_array - free an array of strings with a NULL as a sentinel\n *\n * @param[in,out]\tarr\t-\tthe array to free\n *\n * @return\tnothing\n *\n */\nvoid\nfree_string_array(char **arr)\n{\n\tint i;\n\n\tif (arr != NULL) {\n\t\tfor (i = 0; arr[i] != NULL; i++)\n\t\t\tfree(arr[i]);\n\n\t\tfree(arr);\n\t}\n}\n\n/**\n * @brief\n *\tensure_string_not_null - if string is NULL, allocate an empty string\n *\n * @param[in]\tstr - pointer to pointer to string (or to NULL)\n *\n * @return\tnothing\n *\n */\nvoid\nensure_string_not_null(char **str)\n{\n\tif (*str == NULL)\n\t\t*str = strdup(\"\");\n}\n\n/**\n * @brief\n *\tconvert_string_to_lowercase - Convert string to lowercase\n *\n * @param[in]\tstr - string to be converted\n *\n * @return\tchar *\n * @retval\t!NULL - converted string\n * @retval\tNULL - failure\n *\n * @note\n * 
\tReturned string will be malloced area, so free after use\n *\n */\nchar *\nconvert_string_to_lowercase(char *str)\n{\n\tchar *ret = NULL;\n\tint i = 0;\n\tint len = 0;\n\n\tif (str == NULL || *str == '\\0')\n\t\treturn NULL;\n\n\tlen = strlen(str);\n\tif ((ret = calloc(1, len + 1)) == NULL)\n\t\treturn NULL;\n\n\tfor (i = 0; i < len; i++)\n\t\tret[i] = tolower(str[i]);\n\n\treturn ret;\n}\n\n/**\n * @brief\n * \t\tConvert a duration to HH:MM:SS format string\n *\n * @param[in]\tduration\t-\tthe duration\n * @param[out]\tbuf\t-\tthe buffer to be filled\n * @param[in]\tbufsize\t-\tsize of the buffer\n *\n * @return\tvoid\n */\nvoid\nconvert_duration_to_str(time_t duration, char *buf, int bufsize)\n{\n\tlong hour, min, sec;\n\tif (buf == NULL || bufsize == 0)\n\t\treturn;\n\thour = duration / 3600;\n\tduration = duration % 3600;\n\tmin = duration / 60;\n\tduration = duration % 60;\n\tsec = duration;\n\tsnprintf(buf, bufsize, \"%02ld:%02ld:%02ld\", hour, min, sec);\n}\n\n/**\n * @brief\n *\tDetermines if 'str' ends with three consecutive double quotes,\n *\tbefore a newline (if it exists).\n *\n * @param[in]\tstr - input string\n * @param[in]\tstrip_quotes - if set to 1, then modify 'str' so that\n *\t\t\t\tthe triple quotes are not part of the string.\n *\n * @return int\n * @retval 1 - if string ends with triple quotes.\n * @retval 0 - if string does not end with triple quotes.\n *\n */\nint\nends_with_triple_quotes(char *str, int strip_quotes)\n{\n\tint ct;\n\tchar *p = NULL;\n\tint ll = 0;\n\n\tif (str == NULL)\n\t\treturn (0);\n\n\tll = strlen(str);\n\tif (ll < 3) {\n\t\treturn (0);\n\t}\n\n\tp = str + (ll - 1);\n\n\tif (*p == '\\n') {\n\t\tp--;\n#ifdef WIN32\n\t\tif (*p == '\\r') {\n\t\t\tp--;\n\t\t}\n#endif\n\t}\n\n\tct = 0;\n\twhile ((p >= str) && (*p == '\"')) {\n\t\tct++;\n\t\tp--;\n\t\tif (ct == 3)\n\t\t\tbreak;\n\t}\n\tif (ct == 3) {\n\t\tif (strip_quotes == 1) {\n\t\t\t/* null terminate the first double quote */\n\t\t\t*(p + 1) = 
'\\0';\n\t\t}\n\t\treturn (1);\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n *\tDetermines if 'str' begins with three consecutive double quotes.\n *\n * @param[in]\tstr - input string\n *\n * @return int\n * @retval 1 - if string starts with triple quotes.\n * @retval 0 - if string does not start with triple quotes.\n *\n */\nint\nstarts_with_triple_quotes(char *str)\n{\n\tchar *p;\n\tint ct;\n\n\tif (str == NULL)\n\t\treturn (0);\n\n\tp = str;\n\tct = 0;\n\twhile ((*p != '\\0') && (*p == '\"')) {\n\t\tct++;\n\t\tp++;\n\t\tif (ct == 3)\n\t\t\tbreak;\n\t}\n\tif (ct == 3) {\n\t\treturn (1);\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n *\tGets malloc_info and returns it as a string\n *\n * @return char *\n * @retval NULL - Error\n * @note\n * The buffer has to be freed by the caller.\n *\n */\n#ifndef WIN32\n#ifdef HAVE_MALLOC_INFO\nchar *\nget_mem_info(void)\n{\n\tFILE *stream;\n\tchar *buf;\n\tsize_t len;\n\tint err = 0;\n\n\tstream = open_memstream(&buf, &len);\n\tif (stream == NULL)\n\t\treturn NULL;\n\terr = malloc_info(0, stream);\n\tfclose(stream);\n\tif (err == -1) {\n\t\tfree(buf);\n\t\treturn NULL;\n\t}\n\treturn buf;\n}\n#endif /* malloc_info */\n#endif /* WIN32 */\n\n/**\n * @brief\n *\tReturn a copy of 'str' where non-printing characters\n *\t(except the ones listed in the local variable 'special_char') are\n *\tshown in ^<translated_char> notation.\n *\n * @param[in]\tstr - input string\n *\n * @return char *\n *\n * @note\n * \tDo not free the return value - it's in a fixed memory area that\n *\twill get overwritten the next time the function is called.\n *      So best to use the result immediately or strdup() it.\n *\n *\tThis will return the original (non-translated) 'str' value if\n *\tan error was encountered, like a realloc() error.\n */\nchar *\nshow_nonprint_chars(char *str)\n{\n#ifndef WIN32\n\tstatic char *locbuf = NULL;\n\tstatic size_t locbuf_size = 0;\n\tchar *buf, *buf2;\n\tsize_t nsize;\n\tint ch;\n\tchar special_char[] = \"\\n\\t\";\n\twchar_t 
wc;\n\tint len;\n\n\tif ((str == NULL) || (str[0] == '\\0'))\n\t\treturn str;\n\n\tnsize = (strlen(str) * 2) + 1;\n\tif (nsize > locbuf_size || locbuf == NULL) {\n\t\tchar *tmpbuf;\n\t\tif ((tmpbuf = realloc(locbuf, nsize)) == NULL)\n\t\t\treturn str;\n\t\tlocbuf = tmpbuf;\n\t\tlocbuf_size = nsize;\n\t}\n\n\tlocbuf[0] = '\\0';\n\tbuf = str;\n\tbuf2 = locbuf;\n\n\tmbtowc(NULL, NULL, 0);\n\n\twhile (*buf != '\\0') {\n\t\tlen = mbtowc(&wc, buf, MB_CUR_MAX);\n\n\t\tif (len == 0) {\n\t\t\t/* should not happen */\n\t\t\tbreak;\n\t\t} else if (len < 0) {\n\t\t\t/* invalid multibyte sequence - take the raw byte */\n\t\t\tch = *buf & 0xFF;\n\t\t} else {\n\t\t\t/* wide character - len > 0 */\n\t\t\tch = (int)wc;\n\t\t}\n\n\t\tif ((ch < 32) && !char_in_set(ch, special_char)) {\n\t\t\t*buf2++ = '^';\n\t\t\t*buf2++ = ch + 64;\n\t\t} else {\n\t\t\t*buf2++ = ch;\n\t\t}\n\n\t\tif (len < 0) {\n\t\t\tbuf++;\n\t\t} else {\n\t\t\tbuf += len;\n\t\t}\n\t}\n\t*buf2 = '\\0';\n\treturn (locbuf);\n#else\n\treturn (str);\n#endif\n}\n\n/**\n * @brief\n *  get_preemption_order - deduce the preemption ordering to be used for a job\n *\n *  @param[in]\tporder - static value of preempt order from the sched object\n *  \t\t\t\t\t\tthis array is assumed to be of size PREEMPT_ORDER_MAX\n *  @param[in]\treq - amount of requested time for the job\n *  @param[in]\tused - amount of used time by the job\n *\n *  @return\tstruct preempt_ordering *\n *  @retval\tpreempt ordering for the job\n *  @retval\tNULL if error\n */\nstruct preempt_ordering *\nget_preemption_order(struct preempt_ordering *porder, int req, int used)\n{\n\tint i;\n\tint percent_left = 0;\n\tstruct preempt_ordering *po = NULL;\n\n\tif (porder == NULL)\n\t\treturn NULL;\n\n\tpo = &porder[0];\n\tif (req < 0 || used < 0)\n\t\treturn po;\n\n\t/* check if we have more than one range... 
no need to choose if not */\n\tif (porder[1].high_range != 0) {\n\t\tpercent_left = 100 - ((float) used / req) * 100;\n\t\tif (percent_left < 0)\n\t\t\tpercent_left = 1;\n\n\t\tfor (i = 0; i < PREEMPT_ORDER_MAX; i++) {\n\t\t\tif (percent_left <= porder[i].high_range && percent_left >= porder[i].low_range) {\n\t\t\t\tpo = &porder[i];\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\n\treturn po;\n}\n\n#ifdef WIN32\n/**\n * @brief\n *\tReturns the current wall clock time.\n *\n * @return double - number of seconds.\n */\nstatic double\nget_walltime(void)\n{\n\tLARGE_INTEGER time, freq;\n\n\tif (QueryPerformanceFrequency(&freq) == 0)\n\t\treturn (0);\n\n\tif (QueryPerformanceCounter(&time) == 0)\n\t\treturn (0);\n\n\treturn ((double) time.QuadPart / freq.QuadPart);\n}\n\n/**\n * @brief\n *\tReturns the current processor time.\n *\n * @return double - number of seconds.\n **/\nstatic double\nget_cputime()\n{\n\tFILETIME create_time;\n\tFILETIME exit_time;\n\tFILETIME kernel_time;\n\tFILETIME user_time;\n\n\t/* The times are returned in 100-nanosecond units */\n\tif (GetProcessTimes(GetCurrentProcess(), &create_time, &exit_time, &kernel_time, &user_time) == 0)\n\t\treturn (0);\n\n\tif (user_time.dwLowDateTime != 0)\n\t\treturn ((double) user_time.dwLowDateTime * 0.0000001);\n\n\treturn ((double) (((unsigned long long) user_time.dwHighDateTime << 32)) * 0.0000001);\n}\n\n#else\n\n/**\n * @brief\n *\tReturns the current wall clock time.\n *\n * @return double - number of seconds.\n */\nstatic double\nget_walltime(void)\n{\n\n\tstruct timeval time;\n\n\tif (gettimeofday(&time, NULL) == -1)\n\t\treturn (0);\n\n\treturn ((double) time.tv_sec + (double) time.tv_usec * .000001);\n}\n\n/**\n * @brief\n *\tReturns the current processor time.\n *\n * @return double - number of seconds.\n **/\nstatic double\nget_cputime()\n{\n\tclock_t clock_cycles;\n\tclock_cycles = clock();\n\tif (clock_cycles == -1)\n\t\treturn (0);\n\tassert(CLOCKS_PER_SEC != 0);\n\treturn (double) clock_cycles / 
CLOCKS_PER_SEC;\n}\n#endif\n\n/**\n * @brief\n *\tAllocates memory for a given 'instance' of measurements.\n *\n * @param[in] instance - a description of what is being measured.\n *\n * @return perf_stat_t - an allocated entry.\n */\nstatic perf_stat_t *\nperf_stat_alloc(char *instance)\n{\n\tperf_stat_t *p_stat;\n\n\tif ((instance == NULL) || (instance[0] == '\\0'))\n\t\treturn NULL;\n\n\tp_stat = malloc(sizeof(perf_stat_t));\n\tif (p_stat == NULL)\n\t\treturn NULL;\n\n\t(void) memset((char *) p_stat, (int) 0, (size_t) sizeof(perf_stat_t));\n\n\tstrncpy(p_stat->instance, instance, MAXBUFLEN);\n\tp_stat->instance[MAXBUFLEN] = '\\0';\n\tp_stat->walltime = 0;\n\tp_stat->cputime = 0;\n\n\tdelete_link(&p_stat->pi_allstats);\n\tappend_link(&perf_stats, &p_stat->pi_allstats, p_stat);\n\n\treturn (p_stat);\n}\n\n/**\n * @brief\n *\tFind an 'instance' entry among the list of saved performance\n *\tstats.\n *\n * @param[in] instance - entity being measured.\n *\n * @return perf_stat_t - found entry.\n */\nstatic perf_stat_t *\nperf_stat_find(char *instance)\n{\n\tperf_stat_t *p_stat;\n\n\tif ((instance == NULL) || (instance[0] == '\\0') || (perf_stats_initialized == 0))\n\t\treturn (NULL);\n\n\tp_stat = (perf_stat_t *) GET_NEXT(perf_stats);\n\twhile (p_stat) {\n\t\tif (strcmp(p_stat->instance, instance) == 0) {\n\t\t\tbreak;\n\t\t}\n\t\tp_stat = (perf_stat_t *) GET_NEXT(p_stat->pi_allstats);\n\t}\n\treturn (p_stat); /* may be a null pointer */\n}\n\n/**\n * @brief\n *\tRemove (deallocate) an 'instance' entry among the list of\n *\tsaved performance stats.\n *\n * @param[in] instance - entity being measured.\n *\n * @return void\n */\nvoid\nperf_stat_remove(char *instance)\n{\n\tperf_stat_t *p_stat;\n\n\tif ((instance == NULL) || (instance[0] == '\\0') || (perf_stats_initialized == 0))\n\t\treturn;\n\n\tp_stat = (perf_stat_t *) GET_NEXT(perf_stats);\n\twhile (p_stat) {\n\t\tif (strcmp(p_stat->instance, instance) == 0) {\n\t\t\tbreak;\n\t\t}\n\t\tp_stat = (perf_stat_t *) 
GET_NEXT(p_stat->pi_allstats);\n\t}\n\tif (p_stat != NULL) {\n\t\tdelete_link(&p_stat->pi_allstats);\n\t\tfree(p_stat);\n\t}\n}\n\n/**\n * @brief\n *\tRecord start counters for the 'instance' entry.\n *\n * @param[in] instance - entity being measured\n *\n * @return void\n */\nvoid\nperf_stat_start(char *instance)\n{\n\tperf_stat_t *p_stat;\n\n\tif ((instance == NULL) || (instance[0] == '\\0'))\n\t\treturn;\n\n\tif (perf_stats_initialized == 0) {\n\t\tCLEAR_HEAD(perf_stats);\n\t\tperf_stats_initialized = 1;\n\t}\n\n\tp_stat = perf_stat_find(instance);\n\tif (p_stat == NULL) {\n\t\tp_stat = perf_stat_alloc(instance);\n\t\tif (p_stat == NULL)\n\t\t\treturn;\n\t}\n\n\tp_stat->walltime = get_walltime();\n\tp_stat->cputime = get_cputime();\n}\n\n/**\n * @brief\n *\tReturns a summary of statistics gathered (e.g.\n *\telapsed walltime) since the perf_stat_start() call on the\n *\tsame 'instance'.\n *\n * @param[in] instance - entity being measured\n *\n * @return char *  - a string describing stats gathered.\n *\t\t   - this is a statically-allocated buffer that\n *\t\t     will get over-written by the next call to this\n *\t\t     function.\n * @note\n *\tThis also frees up the memory used by the 'instance' entry\n *      in the list of saved stats.\n */\nchar *\nperf_stat_stop(char *instance)\n{\n\tperf_stat_t *p_stat;\n\tdouble now_walltime;\n\tdouble now_cputime;\n\tstatic char stat_summary[MAXBUFLEN + 1];\n\n\tif ((instance == NULL) || (instance[0] == '\\0')) {\n\t\treturn (NULL);\n\t}\n\n\tp_stat = perf_stat_find(instance);\n\tif (p_stat == NULL)\n\t\treturn (NULL);\n\n\tnow_walltime = get_walltime();\n\tnow_cputime = get_cputime();\n\n\tsnprintf(stat_summary, sizeof(stat_summary), \"%s walltime=%f cputime=%f\", instance, (now_walltime - p_stat->walltime), (now_cputime - p_stat->cputime));\n\n\tdelete_link(&p_stat->pi_allstats);\n\tfree(p_stat);\n\n\treturn (stat_summary);\n}\n\n/**\n * @brief\n *\tcreates an empty file in /tmp/ and saves timestamp of that file\n 
*\n * @param[in] - void\n *\n * @return - void\n */\nvoid\ncreate_query_file(void)\n{\n\tFILE *f;\n\tchar filename[MAXPATHLEN + 1];\n\tuid_t usid = getuid();\n#ifdef WIN32\n\tLPSTR win_sid = NULL;\n\tif (!ConvertSidToStringSid(usid, &win_sid)) {\n\t\tfprintf(stderr, \"qstat: failed to convert SID to string with error=%d\\n\", GetLastError());\n\t\treturn;\n\t}\n\tsnprintf(filename, sizeof(filename), \"%s\\\\.pbs_last_query_%s\", TMP_DIR, win_sid);\n\tLocalFree(win_sid);\n#else\n\tsnprintf(filename, sizeof(filename), \"%s/.pbs_last_query_%d\", TMP_DIR, usid);\n#endif /* WIN32 */\n\tf = fopen(filename, \"w\");\n\tif (f != NULL)\n\t\tfclose(f);\n}\n\n/**\n * @brief\n *\tstats the information of the empty file created in /tmp/ to decide\n *  whether to sleep for 0.2 seconds or not\n *\n * @param[in] - void\n *\n * @return - void\n */\nvoid\ndelay_query(void)\n{\n\tchar filename[MAXPATHLEN + 1];\n#ifdef WIN32\n\tstruct _stat buf;\n#else\n\tstruct stat buf;\n#endif\n\n\tuid_t usid = getuid();\n#ifdef WIN32\n\tLPSTR win_sid = NULL;\n\tif (!ConvertSidToStringSid(usid, &win_sid)) {\n\t\tfprintf(stderr, \"qstat: failed to convert SID to string with error=%d\\n\", GetLastError());\n\t\treturn;\n\t}\n\tsnprintf(filename, sizeof(filename), \"%s\\\\.pbs_last_query_%s\", TMP_DIR, win_sid);\n\tif (_stat(filename, &buf) == 0) {\n\t\tif (((time(NULL) * 1000) - (buf.st_mtime * 1000)) < 10) {\n\t\t\tSleep(200);\n\t\t}\n\t}\n\tLocalFree(win_sid);\n#else\n\tsnprintf(filename, sizeof(filename), \"%s/.pbs_last_query_%d\", TMP_DIR, usid);\n\tif (stat(filename, &buf) == 0) {\n\t\tif (((time(NULL) * 1000) - (buf.st_mtime * 1000)) < 10) {\n\t\t\tusleep(200000);\n\t\t}\n\t}\n#endif /* WIN32 */\n\tatexit(create_query_file);\n}\n\n/**\n * @brief\n *\tPut a human-readable representation of a network address into\n *\ta statically allocated string.\n *\n * @param[in] ap - internet address\n *\n * @return\tstring\n * @retval\tstatic  string\t\tsuccess\n * @retval\t\"unknown\"\t\terror\n *\n */\nchar 
*\nnetaddr(struct sockaddr_in *ap)\n{\n\tstatic char out[80];\n\tu_long ipadd;\n\n\tif (ap == NULL)\n\t\treturn \"unknown\";\n\n\tipadd = ntohl(ap->sin_addr.s_addr);\n\n\tsprintf(out, \"%ld.%ld.%ld.%ld:%d\",\n\t\t(ipadd & 0xff000000) >> 24,\n\t\t(ipadd & 0x00ff0000) >> 16,\n\t\t(ipadd & 0x0000ff00) >> 8,\n\t\t(ipadd & 0x000000ff),\n\t\tntohs(ap->sin_port));\n\treturn out;\n}\n\n/*\n *\tBEGIN included source\n */\n/*-\n * Copyright (c) 1991, 1993\n *\tThe Regents of the University of California.  All rights reserved.\n *\n * This code is derived from software contributed to Berkeley by\n * James W. Williams of NASA Goddard Space Flight Center.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions\n * are met:\n * 1. Redistributions of source code must retain the above copyright\n *    notice, this list of conditions and the following disclaimer.\n * 2. Redistributions in binary form must reproduce the above copyright\n *    notice, this list of conditions and the following disclaimer in the\n *    documentation and/or other materials provided with the distribution.\n * 3. All advertising materials mentioning features or use of this software\n *    must display the following acknowledgment:\n *\tThis product includes software developed by the University of\n *\tCalifornia, Berkeley and its contributors.\n * 4. Neither the name of the University nor the names of its contributors\n *    may be used to endorse or promote products derived from this software\n *    without specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND\n * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED.  
IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE\n * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS\n * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\n * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\n * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\n * SUCH DAMAGE.\n */\n\nstatic u_long crctab[] = {\n\t0x0,\n\t0x04c11db7, 0x09823b6e, 0x0d4326d9, 0x130476dc, 0x17c56b6b,\n\t0x1a864db2, 0x1e475005, 0x2608edb8, 0x22c9f00f, 0x2f8ad6d6,\n\t0x2b4bcb61, 0x350c9b64, 0x31cd86d3, 0x3c8ea00a, 0x384fbdbd,\n\t0x4c11db70, 0x48d0c6c7, 0x4593e01e, 0x4152fda9, 0x5f15adac,\n\t0x5bd4b01b, 0x569796c2, 0x52568b75, 0x6a1936c8, 0x6ed82b7f,\n\t0x639b0da6, 0x675a1011, 0x791d4014, 0x7ddc5da3, 0x709f7b7a,\n\t0x745e66cd, 0x9823b6e0, 0x9ce2ab57, 0x91a18d8e, 0x95609039,\n\t0x8b27c03c, 0x8fe6dd8b, 0x82a5fb52, 0x8664e6e5, 0xbe2b5b58,\n\t0xbaea46ef, 0xb7a96036, 0xb3687d81, 0xad2f2d84, 0xa9ee3033,\n\t0xa4ad16ea, 0xa06c0b5d, 0xd4326d90, 0xd0f37027, 0xddb056fe,\n\t0xd9714b49, 0xc7361b4c, 0xc3f706fb, 0xceb42022, 0xca753d95,\n\t0xf23a8028, 0xf6fb9d9f, 0xfbb8bb46, 0xff79a6f1, 0xe13ef6f4,\n\t0xe5ffeb43, 0xe8bccd9a, 0xec7dd02d, 0x34867077, 0x30476dc0,\n\t0x3d044b19, 0x39c556ae, 0x278206ab, 0x23431b1c, 0x2e003dc5,\n\t0x2ac12072, 0x128e9dcf, 0x164f8078, 0x1b0ca6a1, 0x1fcdbb16,\n\t0x018aeb13, 0x054bf6a4, 0x0808d07d, 0x0cc9cdca, 0x7897ab07,\n\t0x7c56b6b0, 0x71159069, 0x75d48dde, 0x6b93dddb, 0x6f52c06c,\n\t0x6211e6b5, 0x66d0fb02, 0x5e9f46bf, 0x5a5e5b08, 0x571d7dd1,\n\t0x53dc6066, 0x4d9b3063, 0x495a2dd4, 0x44190b0d, 0x40d816ba,\n\t0xaca5c697, 0xa864db20, 0xa527fdf9, 0xa1e6e04e, 0xbfa1b04b,\n\t0xbb60adfc, 0xb6238b25, 0xb2e29692, 0x8aad2b2f, 0x8e6c3698,\n\t0x832f1041, 0x87ee0df6, 0x99a95df3, 0x9d684044, 0x902b669d,\n\t0x94ea7b2a, 0xe0b41de7, 
0xe4750050, 0xe9362689, 0xedf73b3e,\n\t0xf3b06b3b, 0xf771768c, 0xfa325055, 0xfef34de2, 0xc6bcf05f,\n\t0xc27dede8, 0xcf3ecb31, 0xcbffd686, 0xd5b88683, 0xd1799b34,\n\t0xdc3abded, 0xd8fba05a, 0x690ce0ee, 0x6dcdfd59, 0x608edb80,\n\t0x644fc637, 0x7a089632, 0x7ec98b85, 0x738aad5c, 0x774bb0eb,\n\t0x4f040d56, 0x4bc510e1, 0x46863638, 0x42472b8f, 0x5c007b8a,\n\t0x58c1663d, 0x558240e4, 0x51435d53, 0x251d3b9e, 0x21dc2629,\n\t0x2c9f00f0, 0x285e1d47, 0x36194d42, 0x32d850f5, 0x3f9b762c,\n\t0x3b5a6b9b, 0x0315d626, 0x07d4cb91, 0x0a97ed48, 0x0e56f0ff,\n\t0x1011a0fa, 0x14d0bd4d, 0x19939b94, 0x1d528623, 0xf12f560e,\n\t0xf5ee4bb9, 0xf8ad6d60, 0xfc6c70d7, 0xe22b20d2, 0xe6ea3d65,\n\t0xeba91bbc, 0xef68060b, 0xd727bbb6, 0xd3e6a601, 0xdea580d8,\n\t0xda649d6f, 0xc423cd6a, 0xc0e2d0dd, 0xcda1f604, 0xc960ebb3,\n\t0xbd3e8d7e, 0xb9ff90c9, 0xb4bcb610, 0xb07daba7, 0xae3afba2,\n\t0xaafbe615, 0xa7b8c0cc, 0xa379dd7b, 0x9b3660c6, 0x9ff77d71,\n\t0x92b45ba8, 0x9675461f, 0x8832161a, 0x8cf30bad, 0x81b02d74,\n\t0x857130c3, 0x5d8a9099, 0x594b8d2e, 0x5408abf7, 0x50c9b640,\n\t0x4e8ee645, 0x4a4ffbf2, 0x470cdd2b, 0x43cdc09c, 0x7b827d21,\n\t0x7f436096, 0x7200464f, 0x76c15bf8, 0x68860bfd, 0x6c47164a,\n\t0x61043093, 0x65c52d24, 0x119b4be9, 0x155a565e, 0x18197087,\n\t0x1cd86d30, 0x029f3d35, 0x065e2082, 0x0b1d065b, 0x0fdc1bec,\n\t0x3793a651, 0x3352bbe6, 0x3e119d3f, 0x3ad08088, 0x2497d08d,\n\t0x2056cd3a, 0x2d15ebe3, 0x29d4f654, 0xc5a92679, 0xc1683bce,\n\t0xcc2b1d17, 0xc8ea00a0, 0xd6ad50a5, 0xd26c4d12, 0xdf2f6bcb,\n\t0xdbee767c, 0xe3a1cbc1, 0xe760d676, 0xea23f0af, 0xeee2ed18,\n\t0xf0a5bd1d, 0xf464a0aa, 0xf9278673, 0xfde69bc4, 0x89b8fd09,\n\t0x8d79e0be, 0x803ac667, 0x84fbdbd0, 0x9abc8bd5, 0x9e7d9662,\n\t0x933eb0bb, 0x97ffad0c, 0xafb010b1, 0xab710d06, 0xa6322bdf,\n\t0xa2f33668, 0xbcb4666d, 0xb8757bda, 0xb5365d03, 0xb1f740b4};\n\n/**\n * @brief\n *\t-Compute a POSIX 1003.2 checksum.  This routine has been broken out so that\n * \tother programs can use it.  
It takes a char pointer and length.\n * \tIt returns the crc value for the data in buf.\n *\n * @param[in] buf - input data for which the crc is computed\n * @param[in] clen - length of input data\n *\n * @return\tu_long\n * @retval\tcrc value\tsuccess\n *\n */\nstatic u_long\ncrc(u_char *buf, u_long clen)\n{\n\tregister u_char *p;\n\tregister u_long crc, len;\n\n#define COMPUTE(var, ch) (var) = (((var) << 8) ^                           \\\n\t\t\t\t  crctab[(((var) >> 24) & 0xff) ^ (ch)]) & \\\n\t\t\t\t 0xffffffff\n\n\tfor (crc = 0, len = clen, p = buf; len--; ++p) {\n\t\tCOMPUTE(crc, *p);\n\t}\n\n\t/* Include the length of the file. */\n\tfor (; clen != 0; clen >>= 8) {\n\t\tCOMPUTE(crc, clen & 0xff);\n\t}\n\n\treturn (~crc & 0xffffffff);\n}\n/*\n *\tEND of included source\n */\n\n#ifdef WIN32\n/**\n * @brief\n * \tGiven some bytes of data in 'buf' of size 'buf_sz', return a\n *\tcopy of the data but with <carriage return> <linefeed> combination\n *\tentries replaced by a single <linefeed>. 
The returned buffer\n *\tsize is returned in 'new_buf_sz'.\n *\n * @param[in]\t\tbuf\t- some bytes of data.\n * @param[in]\t\tbuf_sz\t- size of 'buf'.\n * @param[in/out]\tnew_buf_sz - holds the buffer size of the returned buf.\n *\n * @return char *\n * @retval <copy of filtered 'buf'>\n */\nstatic char *\ndos2unix(char *buf, unsigned long buf_sz, int *new_buf_sz)\n{\n\tstatic char *buf2 = NULL;\n\tstatic unsigned long buf2_sz = 0;\n\tchar *tmp_str = NULL;\n\tint i, j;\n\n\tif (buf_sz > buf2_sz) {\n\t\ttmp_str = realloc(buf2, buf_sz);\n\t\tif (tmp_str == NULL) {\n\t\t\t*new_buf_sz = buf_sz;\n\t\t\treturn (buf); /* return original */\n\t\t}\n\t\tbuf2 = tmp_str;\n\t\tbuf2_sz = buf_sz;\n\t}\n\n\tmemset(buf2, '\\0', buf2_sz);\n\tj = 0;\n\tfor (i = 0; i < buf_sz; i++) {\n\t\tif ((i < (buf_sz - 1)) && (buf[i] == '\\r') &&\n\t\t    (buf[i + 1] == '\\n')) {\n\t\t\tbuf2[j++] = '\\n';\n\t\t\ti++; /* skip the next linefeed */\n\t\t} else {\n\t\t\tbuf2[j++] = buf[i];\n\t\t}\n\t}\n\n\t*new_buf_sz = j;\n\treturn (buf2);\n}\n#endif\n\n/**\n * @brief\n * \tGiven a file represented by 'filepath', return its crc value.\n *\n * @param[in]\tfilepath\t- file being crc-ed.\n *\n * @return u_long\n * @retval\t> 0\t- crc (checksum) value of the file.\n * @retval\t0\t- if file is non-existent, or file is empty, or if an\n * \t\t\t  error encountered while opening or reading the\n * \t\t\t  file.\n */\nunsigned long\ncrc_file(char *filepath)\n{\n\tint fd;\n\tstruct stat sb;\n\tstatic u_char *buf = NULL;\n\tstatic int buf_sz = 0;\n\tu_char *tmp_str = NULL;\n\tint nread = 0;\n\tint count;\n\tu_char *tmpbuf;\n#ifdef WIN32\n\tu_char *tr_buf = NULL;\n\tint tr_buf_sz = 0;\n#endif\n\n\tif (filepath == NULL)\n\t\treturn (0);\n\n\tif (stat(filepath, &sb) == -1) {\n\t\treturn (0);\n\t}\n\n\tif (sb.st_size <= 0) {\n\t\treturn (0);\n\t}\n\n\tif ((fd = open(filepath, O_RDONLY)) <= 0) {\n\t\treturn (0);\n\t}\n\n#ifdef WIN32\n\tsetmode(fd, O_BINARY);\n#endif\n\n\tif (sb.st_size > buf_sz) {\n\t\ttmp_str = 
realloc(buf, sb.st_size);\n\t\tif (tmp_str == NULL) {\n\t\t\tclose(fd);\n\t\t\treturn (0);\n\t\t}\n\t\tbuf = tmp_str;\n\t\tbuf[0] = '\\0';\n\t\tbuf_sz = sb.st_size;\n\t}\n\n\ttmpbuf = buf;\n\tcount = sb.st_size;\n\n\twhile (((nread = read(fd, tmpbuf, count)) > 0) &&\n\t       (nread <= sb.st_size)) {\n\n\t\tcount -= nread;\n\t\ttmpbuf += nread;\n\n\t\tif (count == 0) {\n\t\t\tbreak;\n\t\t}\n\t}\n\n\tif (nread < 0) {\n\t\tclose(fd);\n\t\treturn (0);\n\t}\n\n\tclose(fd);\n#ifdef WIN32\n\ttr_buf = dos2unix(buf, sb.st_size, &tr_buf_sz);\n\treturn (crc(tr_buf, tr_buf_sz));\n#else\n\treturn (crc(buf, sb.st_size));\n#endif\n}\n\n/**\n * @brief\n * \t\tstate_char2int - return the state from character form to int form.\n *\n * @param[in]\tstc\t-\tstate in character form\n *\n * @return\tstate in int form\n * @retval\t-1\t: failure\n */\n\nint\nstate_char2int(char stc)\n{\n\tint i;\n\tchar statechars[] = \"TQHWREXBMF\";\n\n\tfor (i = 0; i < PBS_NUMJOBSTATE; i++) {\n\t\tif (statechars[i] == stc)\n\t\t\treturn i;\n\t}\n\treturn -1;\n}\n\n/**\n * @brief\n * \t\tstate_int2char - return the state from int form to char form.\n *\n * @param[in]\tsti\t-\tstate in int form\n *\n * @return\tstate in char form\n * @retval\t'0'\t: failure\n */\n\nchar\nstate_int2char(int sti)\n{\n\tswitch (sti) {\n\t\tcase JOB_STATE_TRANSIT:\n\t\t\treturn JOB_STATE_LTR_TRANSIT;\n\t\tcase JOB_STATE_QUEUED:\n\t\t\treturn JOB_STATE_LTR_QUEUED;\n\t\tcase JOB_STATE_HELD:\n\t\t\treturn JOB_STATE_LTR_HELD;\n\t\tcase JOB_STATE_WAITING:\n\t\t\treturn JOB_STATE_LTR_WAITING;\n\t\tcase JOB_STATE_RUNNING:\n\t\t\treturn JOB_STATE_LTR_RUNNING;\n\t\tcase JOB_STATE_EXITING:\n\t\t\treturn JOB_STATE_LTR_EXITING;\n\t\tcase JOB_STATE_EXPIRED:\n\t\t\treturn JOB_STATE_LTR_EXPIRED;\n\t\tcase JOB_STATE_BEGUN:\n\t\t\treturn JOB_STATE_LTR_BEGUN;\n\t\tcase JOB_STATE_MOVED:\n\t\t\treturn JOB_STATE_LTR_MOVED;\n\t\tcase JOB_STATE_FINISHED:\n\t\t\treturn JOB_STATE_LTR_FINISHED;\n\t\tdefault:\n\t\t\treturn 
JOB_STATE_LTR_UNKNOWN;\n\t}\n\n\treturn JOB_STATE_LTR_UNKNOWN;\n}\n\n/**\n * @brief\n * \t\tparse_servername - parse a server/vnode name in the form:\n *\t\t[(]name[:service_port][:resc=value[:...]][+name...]\n *\t\tfrom exec_vnode or from exec_hostname\n *\t\tname[:service_port]/NUMBER[*NUMBER][+...]\n *\t\tor basic servername:port string\n *\n *\t\tReturns ptr to the node name as the function value and the service_port\n *\t\tnumber (int) into service if :port is found, otherwise port is unchanged\n *\t\thost name is also terminated by a ':', '+' or '/' in string\n *\n * @param[in]\tname\t- server/node/exec_vnode string\n * @param[out]\tservice\t-  RETURN: service_port if :port\n *\n * @return\t ptr to the node name\n *\n * @par MT-safe: No\n */\n\nchar *\nparse_servername(char *name, unsigned int *service)\n{\n\tstatic char buf[PBS_MAXSERVERNAME + PBS_MAXPORTNUM + 2];\n\tint i = 0;\n\tchar *pc;\n\n\tif ((name == NULL) || (*name == '\\0'))\n\t\treturn NULL;\n\tif (*name == '(') /* skip leading open paren found in exec_vnode */\n\t\tname++;\n\n\t/* look for a ':', '+' or '/' in the string */\n\n\tpc = name;\n\twhile (*pc && (i < PBS_MAXSERVERNAME + PBS_MAXPORTNUM + 1)) {\n\t\tif ((*pc == '+') || (*pc == '/')) {\n\t\t\tbreak;\n\t\t} else if (*pc == ':') {\n\t\t\tif (isdigit((int) *(pc + 1)) && (service != NULL))\n\t\t\t\t*service = (unsigned int) atoi(pc + 1);\n\t\t\tbreak;\n\t\t} else {\n\t\t\tbuf[i++] = *pc++;\n\t\t}\n\t}\n\tbuf[i] = '\\0';\n\treturn (buf);\n}\n\n#ifndef WIN32\n/**\n * @brief\n * \tset limits for the current process\n *\n * @param[in] core_limit - core limit in string (this is usually pbs_conf.pbs_core_limit)\n * @param[in] fdlimit - max open fd limit (can be 0 to leave the limit unchanged)\n *\n * @return void\n *\n */\nvoid\nset_proc_limits(char *core_limit, int fdlimit)\n{\n#ifdef RLIMIT_CORE\n\tint char_in_cname = 0;\n\textern char *msg_corelimit;\n\n\tif (core_limit) {\n\t\tchar *pc = core_limit;\n\t\twhile (*pc != '\\0') {\n\t\t\tif 
(!isdigit(*pc)) {\n\t\t\t\t/* there is a character in core limit */\n\t\t\t\tchar_in_cname = 1;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tpc++;\n\t\t}\n\t}\n#endif /* RLIMIT_CORE */\n\n#if defined(RLIM64_INFINITY)\n\t{\n\t\tstruct rlimit64 rlimit;\n\n\t\tif (fdlimit) {\n\t\t\trlimit.rlim_cur = fdlimit;\n\t\t\trlimit.rlim_max = fdlimit;\n\t\t\tif (setrlimit64(RLIMIT_NOFILE, &rlimit) == -1) {\n\t\t\t\tlog_err(errno, __func__, \"could not set max open files limit\");\n\t\t\t}\n\t\t}\n\n\t\trlimit.rlim_cur = RLIM64_INFINITY;\n\t\trlimit.rlim_max = RLIM64_INFINITY;\n\t\t(void) setrlimit64(RLIMIT_CPU, &rlimit);\n\t\t(void) setrlimit64(RLIMIT_FSIZE, &rlimit);\n\t\t(void) setrlimit64(RLIMIT_DATA, &rlimit);\n\t\t(void) setrlimit64(RLIMIT_STACK, &rlimit);\n#ifdef RLIMIT_RSS\n\t\t(void) setrlimit64(RLIMIT_RSS, &rlimit);\n#endif /* RLIMIT_RSS */\n#ifdef RLIMIT_VMEM\n\t\t(void) setrlimit64(RLIMIT_VMEM, &rlimit);\n#endif /* RLIMIT_VMEM */\n#ifdef RLIMIT_CORE\n\t\tif (core_limit) {\n\t\t\tstruct rlimit64 corelimit;\n\t\t\tcorelimit.rlim_max = RLIM64_INFINITY;\n\t\t\tif (strcmp(\"unlimited\", core_limit) == 0)\n\t\t\t\tcorelimit.rlim_cur = RLIM64_INFINITY;\n\t\t\telse if (char_in_cname == 1) {\n\t\t\t\tlog_record(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER, LOG_WARNING,\n\t\t\t\t\t   __func__, msg_corelimit);\n\t\t\t\tcorelimit.rlim_cur = RLIM64_INFINITY;\n\t\t\t} else\n\t\t\t\tcorelimit.rlim_cur = (rlim64_t) atol(core_limit);\n\t\t\t(void) setrlimit64(RLIMIT_CORE, &corelimit);\n\t\t}\n#endif /* RLIMIT_CORE */\n\t}\n\n#else /* setrlimit 32 bit */\n\t{\n\t\tstruct rlimit rlimit;\n\n\t\tif (fdlimit) {\n\t\t\trlimit.rlim_cur = fdlimit;\n\t\t\trlimit.rlim_max = fdlimit;\n\t\t\tif (setrlimit(RLIMIT_NOFILE, &rlimit) == -1) {\n\t\t\t\tlog_err(errno, __func__, \"could not set max open files limit\");\n\t\t\t}\n\t\t}\n\t\trlimit.rlim_cur = RLIM_INFINITY;\n\t\trlimit.rlim_max = RLIM_INFINITY;\n\t\t(void) setrlimit(RLIMIT_CPU, &rlimit);\n#ifdef RLIMIT_RSS\n\t\t(void) setrlimit(RLIMIT_RSS, &rlimit);\n#endif 
/* RLIMIT_RSS */\n#ifdef RLIMIT_VMEM\n\t\t(void) setrlimit(RLIMIT_VMEM, &rlimit);\n#endif /* RLIMIT_VMEM */\n#ifdef RLIMIT_CORE\n\t\tif (core_limit) {\n\t\t\tstruct rlimit corelimit;\n\t\t\tcorelimit.rlim_max = RLIM_INFINITY;\n\t\t\tif (strcmp(\"unlimited\", core_limit) == 0)\n\t\t\t\tcorelimit.rlim_cur = RLIM_INFINITY;\n\t\t\telse if (char_in_cname == 1) {\n\t\t\t\tlog_record(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER, LOG_WARNING,\n\t\t\t\t\t   (char *) __func__, msg_corelimit);\n\t\t\t\tcorelimit.rlim_cur = RLIM_INFINITY;\n\t\t\t} else\n\t\t\t\tcorelimit.rlim_cur =\n\t\t\t\t\t(rlim_t) atol(core_limit);\n\n\t\t\t(void) setrlimit(RLIMIT_CORE, &corelimit);\n\t\t}\n#endif /* RLIMIT_CORE */\n#ifndef linux\n\t\t(void) setrlimit(RLIMIT_FSIZE, &rlimit);\n\t\t(void) setrlimit(RLIMIT_DATA, &rlimit);\n\t\t(void) setrlimit(RLIMIT_STACK, &rlimit);\n#else\n\t\tif (getrlimit(RLIMIT_STACK, &rlimit) != -1) {\n\t\t\tif ((rlimit.rlim_cur != RLIM_INFINITY) && (rlimit.rlim_cur < MIN_STACK_LIMIT)) {\n\t\t\t\trlimit.rlim_cur = MIN_STACK_LIMIT;\n\t\t\t\trlimit.rlim_max = MIN_STACK_LIMIT;\n\t\t\t\tif (setrlimit(RLIMIT_STACK, &rlimit) == -1) {\n\t\t\t\t\tlog_err(errno, __func__, \"setting stack limit failed\");\n\t\t\t\t\texit(1);\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\tlog_err(errno, __func__, \"getting current stack limit failed\");\n\t\t\texit(1);\n\t\t}\n#endif /* not linux */\n\t}\n#endif /* !RLIM64_INFINITY */\n}\n#endif\n\n/**\n * @brief\n *\trand_num - returns a random number.\n * \tThis function will seed using micro second if already not seeded\n *\n */\nint\nrand_num(void)\n{\n\tstatic int seeded = 0;\n\tstruct timeval tv;\n\n\tif (!seeded) {\n\t\tgettimeofday(&tv, NULL);\n\t\tsrand(1000000 * tv.tv_sec + tv.tv_usec); /* seed the random generator */\n\t\tseeded = 1;\n\t}\n\n\treturn rand();\n}\n\n/**\n * @brief\n * \tget subjob index from given jobid\n *\n * @param[in] jid - jobid\n *\n * @return int\n * @retval -1  - fail to determine index of subjob\n * @retval !-1 - index of 
subjob\n */\nint\nget_index_from_jid(char *jid)\n{\n\tchar *range = get_range_from_jid(jid);\n\n\tif (range != NULL) {\n\t\tchar *endptr = NULL;\n\t\tint idx = strtoul(range, &endptr, 10);\n\n\t\tif (endptr == NULL || *endptr != '\\0' || idx < 0)\n\t\t\treturn -1;\n\t\telse\n\t\t\treturn idx;\n\t} else\n\t\treturn -1;\n}\n/**\n * @brief\n * \tget range string of arrayjob from given jobid\n *\n * @param[in] jid - job id\n *\n * @return char *\n * @retval NULL - on error\n * @retval ptr - ptr to static char array containing range string if found\n *\n * @par\n * \tMT-safe: No - uses static variables - index, indexlen.\n */\nchar *\nget_range_from_jid(char *jid)\n{\n\tint i;\n\tchar *pcb;\n\tchar *pce;\n\tstatic char index[BUF_SIZE];\n\n\tif ((pcb = strchr(jid, (int) '[')) == NULL)\n\t\treturn NULL;\n\tif ((pce = strchr(jid, (int) ']')) == NULL)\n\t\treturn NULL;\n\tif (pce <= pcb)\n\t\treturn NULL;\n\n\ti = 0;\n\twhile (++pcb < pce)\n\t\tindex[i++] = *pcb;\n\tindex[i] = '\\0';\n\treturn index;\n}\n\n/**\n * @brief\n * \tcreate and return (in a static array) a jobid for a subjob based on\n * \tthe parent jobid and the subjob index\n *\n * @param[in] parent_jid - parent jobid\n * @param[in] sjidx -  subjob index.\n *\n * @return char *\n * @return !NULL - jobid of subjob\n * @return NULL - failure\n *\n * @par\n * \tMT-safe: No - uses a static buffer, \"jid\".\n */\nchar *\ncreate_subjob_id(char *parent_jid, int sjidx)\n{\n\tstatic char jid[PBS_MAXSVRJOBID + 1];\n\tchar *pcb;\n\tchar *pce;\n\n\tif ((pcb = strchr(parent_jid, (int) '[')) == NULL)\n\t\treturn NULL;\n\tif ((pce = strchr(parent_jid, (int) ']')) == NULL)\n\t\treturn NULL;\n\tif (pce <= pcb)\n\t\treturn NULL;\n\n\t*pcb = '\\0';\n\tsnprintf(jid, sizeof(jid), \"%s[%d]%s\", parent_jid, sjidx, pce + 1);\n\t*pcb = '[';\n\treturn jid;\n}\n\n/**\n * @brief\n * \t\tread attributes from file descriptor of a job file\n *\n * @param[in]\tfd\t-\tfile descriptor\n * @param[out]\terrbuf\t-\tbuffer to return messages for 
any errors\n *\n * @return\tsvrattrl *\n * @retval\tsvrattrl object for the attribute read\n * @retval\tNULL for error\n */\nstatic svrattrl *\nread_attr(int fd, char **errbuf)\n{\n\tint amt;\n\tint i;\n\tsvrattrl *pal;\n\tsvrattrl tempal;\n\n\ti = read(fd, (char *) &tempal, sizeof(tempal));\n\tif (i != sizeof(tempal)) {\n\t\tif (errbuf != NULL)\n\t\t\tsprintf(*errbuf, \"bad read of attribute\");\n\t\treturn NULL;\n\t}\n\tif (tempal.al_tsize == ENDATTRIBUTES)\n\t\treturn NULL;\n\n\tpal = (svrattrl *) malloc(tempal.al_tsize);\n\tif (pal == NULL) {\n\t\tif (errbuf != NULL)\n\t\t\tsprintf(*errbuf, \"malloc failed\");\n\t\treturn NULL;\n\t}\n\t*pal = tempal;\n\n\t/* read in the actual attribute data */\n\n\tamt = pal->al_tsize - sizeof(svrattrl);\n\ti = read(fd, (char *) pal + sizeof(svrattrl), amt);\n\tif (i != amt) {\n\t\tif (errbuf != NULL)\n\t\t\tsprintf(*errbuf, \"short read of attribute\");\n\t\tfree(pal);\n\t\treturn NULL;\n\t}\n\tpal->al_name = (char *) pal + sizeof(svrattrl);\n\tif (pal->al_rescln)\n\t\tpal->al_resc = pal->al_name + pal->al_nameln;\n\telse\n\t\tpal->al_resc = NULL;\n\tif (pal->al_valln)\n\t\tpal->al_value = pal->al_name + pal->al_nameln + pal->al_rescln;\n\telse\n\t\tpal->al_value = NULL;\n\n\treturn pal;\n}\n\n/**\n * @brief\tRead all job attribute values from a job file\n *\n * @param[in]\tfd - fd of job file\n * @param[out]\tstate - return pointer to state value\n * @param[out]\tsubstate - return pointer for substate value\n * @param[out]\terrbuf\t-\tbuffer to return messages for any errors\n *\n * @return\tsvrattrl*\n * @retval\tlist of attributes read from a job file\n * @retval\tNULL for error\n */\nsvrattrl *\nread_all_attrs_from_jbfile(int fd, char **state, char **substate, char **errbuf)\n{\n\tsvrattrl *pal = NULL;\n\tsvrattrl *pali = NULL;\n\n\twhile ((pali = read_attr(fd, errbuf)) != NULL) {\n\t\tif (pal == NULL) {\n\t\t\tpal = pali;\n\t\t\t(&pal->al_link)->ll_struct = (void *) (&pal->al_link);\n\t\t\t(&pal->al_link)->ll_next = 
NULL;\n\t\t\t(&pal->al_link)->ll_prior = NULL;\n\t\t} else {\n\t\t\tpbs_list_link *head = &pal->al_link;\n\t\t\tpbs_list_link *newp = &pali->al_link;\n\t\t\tnewp->ll_prior = NULL;\n\t\t\tnewp->ll_next = head;\n\t\t\tnewp->ll_struct = pali;\n\t\t\tpal = pali;\n\t\t}\n\t\t/* Check if the attribute read is state/substate and store it separately */\n\t\tif (state && strcmp(pali->al_name, ATTR_state) == 0)\n\t\t\t*state = pali->al_value;\n\t\telse if (substate && strcmp(pali->al_name, ATTR_substate) == 0)\n\t\t\t*substate = pali->al_value;\n\t}\n\n\treturn pal;\n}\n\n/**\n * @brief\n *  Generate a random string with the given length.\n *\n * @param[in/out]\tstr - Character array where random string is stored\n * @param[in]\t\tlen - Length of string to be generated\n *\n * @return void\n */\nvoid\nset_rand_str(char *str, int len)\n{\n\tint i;\n\tfor (i = 0; i < len - 1; i++) {\n\t\tstr[i] = (rand_num() % 255) + 1; /* don't allow zero since it is null char */\n\t\tif (str[i] == ';') {\n\t\t\t/* we are using ; as a separator, so can't have that */\n\t\t\tstr[i] = '_';\n\t\t}\n\t}\n\tstr[i] = '\\0';\n}\n/**\n * @brief\n * \tGenerate a host key with encryption\n *\n * @param[in] cluster_key - The cluster key used for encryption\n * @param[in] salt -  A salt value to add randomness to the encryption\n * @param[in/out] len - Length of encrypted data\n *\n * @return char *\n * @retval !NULL - Encrypted data\n * @retval NULL - failure\n */\nchar *\ngen_hostkey(char *cluster_key, char *salt, size_t *len)\n{\n\tstatic size_t cred_len;\n\tstatic char *cred_buf = NULL;\n\tint cred_type;\n\textern unsigned char pbs_aes_key[][16];\n\textern unsigned char pbs_aes_iv[][16];\n\tchar *q;\n\tchar *p = NULL;\n\tconst char *timestr = uLTostr(time(0), 10);\n\n\tif (cluster_key == NULL)\n\t\treturn NULL;\n\n\tq = p = malloc(strlen(salt) + 1 + strlen(timestr) + 1 + strlen(cluster_key) + 1);\n\tif (p == NULL)\n\t\treturn NULL;\n\tp = pbs_strcpy(p, salt);\n\tp = pbs_strcpy(p, \";\");\n\tp = pbs_strcpy(p, timestr);\n\tp = 
pbs_strcpy(p, \";\");\n\tpbs_strcpy(p, cluster_key);\n\n\tif (pbs_encrypt_pwd(q, &cred_type, &cred_buf, &cred_len,\n\t\t\t    (const unsigned char *) pbs_aes_key, (const unsigned char *) pbs_aes_iv) != 0) {\n\t\tfree(q);\n\t\treturn NULL;\n\t}\n\tfree(q);\n\n\t/* now form a string as such: str;encrypted-data */\n\t*len = strlen(salt) + 1 + cred_len;\n\tq = p = malloc(*len + 1);\n\tif (p == NULL) {\n\t\tfree(cred_buf);\n\t\tcred_buf = NULL;\n\t\treturn NULL;\n\t}\n\tp = pbs_strcpy(p, salt);\n\tp = pbs_strcpy(p, \";\");\n\tmemcpy(p, cred_buf, cred_len);\n\tfree(cred_buf);\n\tcred_buf = NULL;\n\tq[*len] = '\\0';\n\n\treturn q;\n}\n\n/**\n * @brief\n * \tValidate a host key and extract cluster key\n *\n * @param[in] host_key - The host key to be validated\n * @param[in] host_keylen -  Length of host key\n * @param[out] cluster_key - Store the extracted cluster key\n *\n * @return int\n * @retval 0 - success\n * @retval -1 - failure\n */\nint\nvalidate_hostkey(char *host_key, size_t host_keylen, char **cluster_key)\n{\n\tchar *ck;\n\tchar *p;\n\tchar *key = NULL;\n\tchar *salt;\n\ttime_t origin;\n\textern unsigned char pbs_aes_key[][16];\n\textern unsigned char pbs_aes_iv[][16];\n\n\t/* break it down to find out salt;clusterkey */\n\tif (!(p = strchr(host_key, ';'))) {\n\t\tlog_errf(-1, __func__, \"first ; not found\");\n\t\tgoto err;\n\t}\n\t*p = '\\0';\n\tsalt = host_key;\n\n\tck = p + 1;\n\tif (pbs_decrypt_pwd(ck, PBS_CREDTYPE_AES, host_keylen - (ck - host_key), &key,\n\t\t\t    (const unsigned char *) pbs_aes_key, (const unsigned char *) pbs_aes_iv) != 0) {\n\t\tlog_errf(-1, __func__, \"decrypt failed, host_keylen=%zu, initial salt found was %s\", host_keylen - (ck - host_key), salt);\n\t\tgoto err;\n\t}\n\n\t/* break down decrypted cluster key into timestamp and rand and verify timestamp */\n\tif (!(p = strchr(key, ';'))) {\n\t\tlog_errf(-1, __func__, \"second ; not found\");\n\t\tgoto err;\n\t}\n\t*p = '\\0';\n\n\t/* compare the salt values */\n\tif (strcmp(salt, key) != 0) {\n\t\tlog_errf(-1, __func__, \"salt does not match salt=%s, key=%s\", salt, key);\n\t\tgoto 
err;\n\t}\n\n\t/* ensure timestamp is not too old (within 5 mins) */\n\torigin = atol(++p);\n\tif ((origin == -1) || ((time(0) - origin) > 300)) {\n\t\tlog_errf(-1, __func__, \"timestamp too old or invalid, time=%ld, origin=%ld\", time(0), origin);\n\t\tgoto err;\n\t}\n\tif (!(p = strchr(p, ';'))) {\n\t\tlog_errf(-1, __func__, \"third ; not found\");\n\t\tgoto err;\n\t}\n\t*p = '\\0';\n\tck = ++p;\n\n\t/* if cluster_key provided compare them */\n\tif (cluster_key) {\n\t\tif (*cluster_key) {\n\t\t\tif (strcmp(ck, *cluster_key) != 0) {\n\t\t\t\tlog_errf(-1, __func__, \"Cluster key does not match, ck=%s, cluster_key=%s\", ck, *cluster_key);\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t} else {\n\t\t\t*cluster_key = strdup(ck);\n\t\t\tif (!*cluster_key) {\n\t\t\t\tlog_errf(-1, __func__, \"strdup of cluster key failed\");\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t}\n\t}\n\tfree(key);\n\treturn 0;\n\nerr:\n\tfree(key);\n\treturn -1;\n}\n"
  },
  {
    "path": "src/lib/Libutil/pbs_aes_encrypt.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <ctype.h>\n#include <memory.h>\n#include <string.h>\n#include <stdlib.h>\n\n#include <openssl/evp.h>\n#include <openssl/aes.h>\n#include <openssl/bio.h>\n#include <openssl/buffer.h>\n#include <openssl/sha.h>\n#include \"ticket.h\"\n/**\n * @file\tpbs_aes_encrypt.c\n */\nextern unsigned char pbs_aes_key[];\nextern unsigned char pbs_aes_iv[];\n\n#if OPENSSL_VERSION_NUMBER < 0x10100000L\n#define CIPHER_CONTEXT_INIT(v) EVP_CIPHER_CTX_init(v)\n#define CIPHER_CONTEXT_CLEAN(v) EVP_CIPHER_CTX_cleanup(v)\n#else\n#define CIPHER_CONTEXT_INIT(v)    \\\n\tv = EVP_CIPHER_CTX_new(); \\\n\tif (!v)                   \\\n\treturn -1\n#define CIPHER_CONTEXT_CLEAN(v)    \\\n\tEVP_CIPHER_CTX_cleanup(v); \\\n\tEVP_CIPHER_CTX_free(v)\n#endif\n\n/**\n * @brief\n *\tEncrypt the input string using AES encryption. 
The keys are rotated\n *\tfor each block of data equal to the size of each key.\n *\n * @param[in]\tuncrypted - The input data to encrypt\n * @param[out]\tcredtype  - The credential type\n * @param[out]\tcredbuf\t  - The buffer containing the encrypted data\n * @param[out]\toutlen\t  - Length of the buffer containing encrypted data\n *\n * @return      int\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\npbs_encrypt_pwd(char *uncrypted, int *credtype, char **crypted, size_t *outlen, const unsigned char *aes_key, const unsigned char *aes_iv)\n{\n\tint plen, len2 = 0;\n\tunsigned char *cblk;\n\tsize_t len = strlen(uncrypted) + 1;\n\n#if OPENSSL_VERSION_NUMBER < 0x10100000L\n\tEVP_CIPHER_CTX real_ctx;\n\tEVP_CIPHER_CTX *ctx = &real_ctx;\n#else\n\tEVP_CIPHER_CTX *ctx = NULL;\n#endif\n\n\tCIPHER_CONTEXT_INIT(ctx);\n\n\tif (EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, (const unsigned char *) aes_key, (const unsigned char *) aes_iv) == 0) {\n\t\tCIPHER_CONTEXT_CLEAN(ctx);\n\t\treturn -1;\n\t}\n\n\tplen = len + EVP_CIPHER_CTX_block_size(ctx) + 1;\n\tcblk = malloc(plen);\n\tif (!cblk) {\n\t\tCIPHER_CONTEXT_CLEAN(ctx);\n\t\treturn -1;\n\t}\n\n\tif (EVP_EncryptUpdate(ctx, cblk, &plen, (unsigned char *) uncrypted, len) == 0) {\n\t\tCIPHER_CONTEXT_CLEAN(ctx);\n\t\tfree(cblk);\n\t\tcblk = NULL;\n\t\treturn -1;\n\t}\n\n\tif (EVP_EncryptFinal_ex(ctx, cblk + plen, &len2) == 0) {\n\t\tCIPHER_CONTEXT_CLEAN(ctx);\n\t\tfree(cblk);\n\t\tcblk = NULL;\n\t\treturn -1;\n\t}\n\n\tCIPHER_CONTEXT_CLEAN(ctx);\n\n\t*crypted = (char *) cblk;\n\t*outlen = plen + len2;\n\t*credtype = PBS_CREDTYPE_AES;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tDecrypt the encrypted input data using AES decryption.\n *\tThe keys are rotated for each block of data equal to size of each key.\n *\n * @param[in]\tcrypted   - The encrypted data to decrypt\n * @param[in]   credtype  - The credential type\n * @param[in]\tlen\t      - The length of crypted\n * @param[out]\tuncrypted - The decrypted 
data\n *\n * @return      int\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\npbs_decrypt_pwd(char *crypted, int credtype, size_t len, char **uncrypted, const unsigned char *aes_key, const unsigned char *aes_iv)\n{\n\tunsigned char *cblk;\n\tint plen, len2 = 0;\n\n#if OPENSSL_VERSION_NUMBER < 0x10100000L\n\tEVP_CIPHER_CTX real_ctx;\n\tEVP_CIPHER_CTX *ctx = &real_ctx;\n#else\n\tEVP_CIPHER_CTX *ctx = NULL;\n#endif\n\n\tCIPHER_CONTEXT_INIT(ctx);\n\n\tif (EVP_DecryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, (const unsigned char *) aes_key, (const unsigned char *) aes_iv) == 0) {\n\t\tCIPHER_CONTEXT_CLEAN(ctx);\n\t\treturn -1;\n\t}\n\n\tcblk = malloc(len + EVP_CIPHER_CTX_block_size(ctx) + 1);\n\tif (!cblk) {\n\t\tCIPHER_CONTEXT_CLEAN(ctx);\n\t\treturn -1;\n\t}\n\n\tif (EVP_DecryptUpdate(ctx, cblk, &plen, (unsigned char *) crypted, len) == 0) {\n\t\tCIPHER_CONTEXT_CLEAN(ctx);\n\t\tfree(cblk);\n\t\tcblk = NULL;\n\t\treturn -1;\n\t}\n\n\tif (EVP_DecryptFinal_ex(ctx, cblk + plen, &len2) == 0) {\n\t\tCIPHER_CONTEXT_CLEAN(ctx);\n\t\tfree(cblk);\n\t\tcblk = NULL;\n\t\treturn -1;\n\t}\n\n\tCIPHER_CONTEXT_CLEAN(ctx);\n\n\t*uncrypted = (char *) cblk;\n\t(*uncrypted)[plen + len2] = '\\0';\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tencode_to_base64 - Encode data into base64 format\n *\n * @param[in]\t\tbuffer\t\t\t-\tData buffer for encoding\n * @param[in]\t\tbuffer_len\t\t-\tLength of the data buffer\n * @param[in/out]\tret_encoded_data\t-\tReturn the encoded data\n *\n * @return int\n * @retval 0 - for success\n * @retval 1 - for failure\n */\nint\nencode_to_base64(const unsigned char *buffer, size_t buffer_len, char **ret_encoded_data)\n{\n\tBIO *mem_obj1, *mem_obj2;\n\tlong buf_len = 0;\n\tchar *buf;\n\n\tmem_obj1 = BIO_new(BIO_s_mem());\n\tif (mem_obj1 == NULL)\n\t\treturn 1;\n\tmem_obj2 = BIO_new(BIO_f_base64());\n\tif (mem_obj2 == NULL) {\n\t\tBIO_free(mem_obj1);\n\t\treturn 1;\n\t}\n\n\tmem_obj1 = BIO_push(mem_obj2, mem_obj1);\n\tBIO_set_flags(mem_obj1, 
BIO_FLAGS_BASE64_NO_NL);\n\tBIO_write(mem_obj1, buffer, buffer_len);\n\t(void) BIO_flush(mem_obj1);\n\tbuf_len = BIO_get_mem_data(mem_obj1, &buf);\n\tif (buf_len <= 0) {\n\t\tBIO_free_all(mem_obj1);\n\t\treturn 1;\n\t}\n\t*ret_encoded_data = (char *) malloc(buf_len + 1);\n\tif (*ret_encoded_data == NULL) {\n\t\tBIO_free_all(mem_obj1);\n\t\treturn 1;\n\t}\n\tmemcpy(*ret_encoded_data, buf, buf_len);\n\t(*ret_encoded_data)[buf_len] = '\\0';\n\n\tBIO_free_all(mem_obj1);\n\treturn 0;\n}\n\n/**\n * @brief\n *\tdecode_from_base64 - Decode data from base64 format\n *\n * @param[in]\t\tbuffer\t\t\t-\tData buffer for decoding\n * @param[in/out]\tret_decoded_data\t-\tReturn the decoded data\n * @param[in/out]\tret_decoded_len\t\t-\tReturn length of the decoded data\n *\n * @return int\n * @retval 0 - for success\n * @retval 1 - for failure\n */\nint\ndecode_from_base64(char *buffer, unsigned char **ret_decoded_data, size_t *ret_decoded_len)\n{\n\tBIO *mem_obj1, *mem_obj2;\n\tsize_t decode_length = 0;\n\tsize_t input_len = 0;\n\tsize_t char_padding = 0;\n\tint padding_enabled = 1;\n\n\tinput_len = strlen(buffer);\n\tif (input_len == 0)\n\t\treturn 1;\n\tif ((input_len > 1) && (buffer[input_len - 1] == '=') && (buffer[input_len - 2] == '=')) {\n\t\tchar_padding = 2;\n\t\tpadding_enabled = 0;\n\t}\n\tif (padding_enabled) {\n\t\tif (buffer[input_len - 1] == '=')\n\t\t\tchar_padding = 1;\n\t}\n\tdecode_length = ((input_len * 3) / 4 - char_padding);\n\t*ret_decoded_data = (unsigned char *) malloc(decode_length + 1);\n\tif (*ret_decoded_data == NULL)\n\t\treturn 1;\n\t(*ret_decoded_data)[decode_length] = '\\0';\n\n\tmem_obj1 = BIO_new_mem_buf(buffer, -1);\n\tif (mem_obj1 == NULL) {\n\t\tfree(*ret_decoded_data);\n\t\t*ret_decoded_data = NULL;\n\t\treturn 1;\n\t}\n\tmem_obj2 = BIO_new(BIO_f_base64());\n\tif (mem_obj2 == NULL) {\n\t\tBIO_free_all(mem_obj1);\n\t\tfree(*ret_decoded_data);\n\t\t*ret_decoded_data = NULL;\n\t\treturn 1;\n\t}\n\n\tmem_obj1 = BIO_push(mem_obj2, mem_obj1);\n\tBIO_set_flags(mem_obj1, BIO_FLAGS_BASE64_NO_NL);\n\t*ret_decoded_len = BIO_read(mem_obj1, *ret_decoded_data, strlen(buffer));\n\n\tif (*ret_decoded_len != decode_length) 
{\n\t\tBIO_free_all(mem_obj1);\n\t\tfree(*ret_decoded_data);\n\t\t*ret_decoded_data = NULL;\n\t\treturn 1;\n\t}\n\tBIO_free_all(mem_obj1);\n\treturn 0;\n}\n/**\n * @brief\n *\tencode_SHA - Compute the SHA1 digest of a token and write it out as a\n *\thexadecimal string.\n *\n *\t@param[in]\ttoken      - token to hash\n *\t@param[in]\tcred_len   - length of the token\n *\t@param[out]\thex_digest - buffer for the hex digest; must hold at least\n *\t\t\t(2 * SHA_DIGEST_LENGTH) + 1 bytes\n *\t@return\tvoid\n */\n\nvoid\nencode_SHA(char *token, size_t cred_len, char **hex_digest)\n{\n\tunsigned char obuf[SHA_DIGEST_LENGTH] = {'\\0'};\n\tint i;\n\n\tSHA1((const unsigned char *) token, cred_len, obuf);\n\tfor (i = 0; i < SHA_DIGEST_LENGTH; i++) {\n\t\tsprintf((char *) (*hex_digest + (i * 2)), \"%02x\", obuf[i]);\n\t}\n}\n"
  },
  {
    "path": "src/lib/Libutil/pbs_array_list.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include \"pbs_array_list.h\"\n\n/**\n * @brief\n *\treturn the reallocated chunk of memory to hold range of ip addresses,\n *\n * @return\tpntPBS_IP_RANGE\n * @retval\treallocated memory\n *\n */\npntPBS_IP_RANGE\ncreate_pbs_range(void)\n{\n\treturn ((pntPBS_IP_RANGE) calloc(CHUNK, sizeof(PBS_IP_RANGE)));\n}\n\n/**\n * @brief\n *\tresize the present ip list\n *\n * @param[in] list - list of ip address\n *\n * @return\tpntPBS_IP_LIST\n * @retval\tnew list\tsuccess\n * @retval\tNULL\t\terror\n */\npntPBS_IP_LIST\nresize_pbs_iplist(pntPBS_IP_LIST list)\n{\n\tpntPBS_IP_RANGE temp;\n\ttemp = (pntPBS_IP_RANGE) realloc(list->li_range, ((CHUNK + list->li_totalsize) * sizeof(PBS_IP_RANGE)));\n\tif (temp != NULL) {\n\t\tlist->li_range = temp;\n\t\tmemset(((char *) list->li_range + (list->li_totalsize * sizeof(PBS_IP_RANGE))), 0, CHUNK * sizeof(PBS_IP_RANGE));\n\t\tlist->li_totalsize += CHUNK;\n\t\treturn list;\n\t} else {\n\t\tdelete_pbs_iplist(list);\n\t\treturn NULL;\n\t}\n}\n\n/**\n * @brief\n *\tcreates ip address list.\n *\n * @return\tpntPBS_IP_LIST\n * @retval\treference to created ip addr list\tsuccess\n * @retval\tNULL\t\t\t\t\terror\n *\n */\npntPBS_IP_LIST\ncreate_pbs_iplist(void)\n{\n\tpntPBS_IP_LIST list = (pntPBS_IP_LIST) calloc(1, 
sizeof(PBS_IP_LIST));\n\tif (list) {\n\t\tlist->li_range = create_pbs_range();\n\t\tif (!list->li_range) {\n\t\t\tfree(list);\n\t\t\treturn NULL;\n\t\t}\n\t\tlist->li_totalsize = CHUNK;\n\t}\n\treturn list;\n}\n\n/**\n * @brief\n *      deletes ip address list.\n *\n * @param[in] pntPBS_IP_LIST pointer to list\n *\n */\nvoid\ndelete_pbs_iplist(pntPBS_IP_LIST list)\n{\n\tif (list) {\n\t\tif (list->li_range)\n\t\t\tfree(list->li_range);\n\t\tfree(list);\n\t}\n\treturn;\n}\n\n/**\n * @brief\n *\tsearches for the key value in the list.\n *\n * @param[in] list - list\n * @param[in] key - key value\n * @param[out] location - index where key found\n *\n * @return\tint\n * @retval\tlocation\tif found\n * @retval\t-1\t\tif not found\n *\n */\nint\nsearch_location(pntPBS_IP_LIST list, T key, int *location)\n{\n\tint bottom, middle, top;\n\tbottom = 0;\n\ttop = list->li_nrowsused - 1;\n\twhile (top >= bottom) {\n\t\tmiddle = (top + bottom) / 2;\n\t\tif (key == IPLIST_GET_LOW(list, middle)) {\n\t\t\t*location = middle;\n\t\t\treturn middle;\n\t\t} else if (key < IPLIST_GET_LOW(list, middle))\n\t\t\ttop = middle - 1;\n\t\telse\n\t\t\tbottom = middle + 1;\n\t}\n\t*location = top;\n\tif (top != -1 && key <= (IPLIST_GET_LOW(list, *location) + IPLIST_GET_HIGH(list, *location))) {\n\t\treturn (*location);\n\t}\n\treturn -1;\n}\n\n/**\n * @brief\n *      insert the key value into the list.\n *\n * @param[in] list - list\n * @param[in] key - key value\n *\n * @return      int\n * @retval      0\tinsertion of key successful\n * @retval      !0\tinsertion of key failed\n *\n */\n\nint\ninsert_iplist_element(pntPBS_IP_LIST list, T key)\n{\n\tint location = -1;\n\tint first_row = 0;\n\t/* If list is empty, go ahead and insert the ip at the first location */\n\tif (IPLIST_GET_LOW(list, 0) == 0 && list->li_nrowsused == 0) {\n\t\tIPLIST_SET_LOW(list, 0, key);\n\t\tlist->li_nrowsused++;\n\t\treturn IPLIST_INSERT_SUCCESS;\n\t}\n\n\tif (list->li_nrowsused == list->li_totalsize) {\n\t\tlist = 
resize_pbs_iplist(list);\n\t\tif (!list) {\n\t\t\treturn IPLIST_INSERT_FAILURE;\n\t\t}\n\t}\n\n\tif (search_location(list, key, &location) >= 0)\n\t\treturn IPLIST_INSERT_SUCCESS;\n\tif (location == -1) {\n\t\tfirst_row = 1;\n\t\tlocation++;\n\t}\n\n\tif (IPLIST_IS_CONTINUOUS_ROW(list, location, key)) {\n\t\tIPLIST_SET_HIGH(list, location, IPLIST_GET_HIGH(list, location) + 1);\n\t\tif (IPLIST_IS_CONTINUOUS_ROW(list, location, IPLIST_GET_LOW(list, location + 1))) {\n\t\t\tIPLIST_SET_HIGH(list, location, IPLIST_GET_HIGH(list, location) + 1 + IPLIST_GET_HIGH(list, location + 1));\n\t\t\t/** memmove rows up, decrement nrowsused, memset one row with INIT_VALUE **/\n\t\t\tlist->li_nrowsused--;\n\t\t\tif (IPLIST_SHIFT_ALL_UP_BY_ONE(list, location + 1, list->li_nrowsused - (location + 1)) == NULL) {\n\t\t\t\tlist->li_nrowsused++;\n\t\t\t\treturn IPLIST_INSERT_FAILURE;\n\t\t\t}\n\t\t\tmemset(list->li_range + list->li_nrowsused, 0, sizeof(PBS_IP_RANGE));\n\t\t}\n\t} else {\n\t\tif (first_row)\n\t\t\tlocation--;\n\t\tif (IPLIST_IS_CONTINUOUS(key, IPLIST_GET_LOW(list, location + 1))) {\n\t\t\tIPLIST_SET_LOW(list, location + 1, key);\n\t\t\tIPLIST_SET_HIGH(list, location + 1, IPLIST_GET_HIGH(list, location + 1) + 1);\n\t\t} else {\n\t\t\tif (IPLIST_GET_LOW(list, location + 1) == INIT_VALUE) {\n\t\t\t\tIPLIST_SET_LOW(list, location + 1, key);\n\t\t\t\tlist->li_nrowsused++;\n\t\t\t} else {\n\t\t\t\t/** Add new Row **/\n\t\t\t\tif (IPLIST_SHIFT_ALL_DOWN_BY_ONE(list, location + 1, list->li_nrowsused - (location + 1)) == NULL)\n\t\t\t\t\treturn IPLIST_INSERT_FAILURE;\n\t\t\t\tIPLIST_SET_LOW(list, location + 1, key);\n\t\t\t\tIPLIST_SET_HIGH(list, location + 1, INIT_VALUE);\n\t\t\t\tlist->li_nrowsused++;\n\t\t\t}\n\t\t}\n\t}\n\treturn IPLIST_INSERT_SUCCESS;\n}\n\n/**\n * @brief\n *      delete the key value from the list.\n *\n * @param[in] list - list\n * @param[in] key - key value\n *\n * @return      int\n * @retval      0       deletion of key successful\n * @retval      !0      
deletion of key  failed\n *\n */\n\nint\ndelete_iplist_element(pntPBS_IP_LIST list, T key)\n{\n\tint location = -1;\n\tT high = 0;\n\n\tif (list->li_nrowsused == list->li_totalsize) {\n\t\tlist = resize_pbs_iplist(list);\n\t\tif (!list) {\n\t\t\treturn IPLIST_INSERT_FAILURE;\n\t\t}\n\t}\n\n\tif (search_location(list, key, &location) == -1)\n\t\treturn IPLIST_DELETE_FAILURE;\n\tif ((IPLIST_GET_LOW(list, location) == key) && list->li_nrowsused) { /** If the Lower IP of range **/\n\t\tif (IPLIST_GET_HIGH(list, location) == INIT_VALUE) {\n\t\t\tif (IPLIST_SHIFT_ALL_UP_BY_ONE(list, location, list->li_nrowsused - (location + 1)) == NULL) {\n\t\t\t\tlist->li_nrowsused++;\n\t\t\t\treturn IPLIST_DELETE_FAILURE;\n\t\t\t}\n\t\t\tlist->li_nrowsused--;\n\t\t\tmemset(list->li_range + list->li_nrowsused, 0, sizeof(PBS_IP_RANGE));\n\t\t} else {\n\t\t\tIPLIST_SET_LOW(list, location, IPLIST_GET_LOW(list, location) + 1);\n\t\t\tIPLIST_SET_HIGH(list, location, IPLIST_GET_HIGH(list, location) - 1);\n\t\t}\n\t} else if ((IPLIST_GET_LOW(list, location) + IPLIST_GET_HIGH(list, location)) == key) { /** Is the biggest IP of range **/\n\t\tIPLIST_SET_HIGH(list, location, IPLIST_GET_HIGH(list, location) - 1);\n\t} else { /** Lies somewhere in between LOW & HIGH **/\n\t\t/* temp = IPLIST_GET_HIGH(list,location); */\n\t\thigh = IPLIST_GET_LOW(list, location) + IPLIST_GET_HIGH(list, location);\n\t\tIPLIST_SET_HIGH(list, location, key - IPLIST_GET_LOW(list, location) - 1);\n\t\tif (IPLIST_SHIFT_ALL_DOWN_BY_ONE(list, location + 1, list->li_nrowsused - (location + 1)) == NULL)\n\t\t\treturn IPLIST_DELETE_FAILURE;\n\t\tIPLIST_SET_LOW(list, location + 1, key + 1);\n\t\tIPLIST_SET_HIGH(list, location + 1, high - IPLIST_GET_LOW(list, location + 1));\n\t\tlist->li_nrowsused++;\n\t}\n\treturn IPLIST_DELETE_SUCCESS;\n}\n"
  },
  {
    "path": "src/lib/Libutil/pbs_ical.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_ical.c\n * @brief\n * pbs_ical.c : Utility functions to parse and handle iCal syntax\n *\n *\tThis is used to abstract the use of libical from the\n *  functionality of PBS. 
It is invoked from\n *  1) the commands (pbs_rsub and pbs_rstat)\n *  2) the server: req_rescq.c, svr_jobfunc.c\n *  3) the scheduler: resv_info.c\n *\n *  The purpose of this interface to libical is to wrap all iCalendar specific\n *  calls from PBS to an implementation independent function.\n *\n * See RFC 2445 for more information and specifics of the iCalendar syntax\n *\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <time.h>\n#include <libutil.h>\n\n#include \"pbs_error.h\"\n#ifdef LIBICAL\n#include <libical/ical.h>\n#endif\n\n#define DATE_LIMIT (3 * (60 * 60 * 24 * 365)) /* Limit to 3 years from now */\n\n/**\n * @brief\n * \tReturns the number of occurrences defined by a recurrence rule.\n *\n * @par\tThe total number of occurrences is currently limited to a hardcoded\n * \t3-year limit from the current date.\n *\n * @par\tNOTE: Determine whether a 3-year limit is the right way to go about setting\n * \ta limit on the total number of occurrences.\n *\n * @param[in] rrule - The recurrence rule as defined by the user\n * @param[in] dtstart - The start time of the first occurrence\n * @param[in] tz - The timezone associated to the recurrence rule\n *\n * @return\tint\n * @retval \tthe total number of occurrences\n *\n */\nint\nget_num_occurrences(char *rrule, time_t dtstart, char *tz)\n{\n\n#ifdef LIBICAL\n\tstruct icalrecurrencetype rt;\n\tstruct icaltimetype start;\n\ticaltimezone *localzone;\n\tstruct icaltimetype next;\n\tstruct icalrecur_iterator_impl *itr;\n\ttime_t now;\n\ttime_t date_limit;\n\tint num_resv = 0;\n\n\t/* if any of the arguments are NULL, we are dealing with an\n\t * advance reservation, so return 1 occurrence */\n\tif (rrule == NULL || tz == NULL)\n\t\treturn 1;\n\n\ticalerror_clear_errno();\n\n\ticalerror_set_error_state(ICAL_PARSE_ERROR, ICAL_ERROR_NONFATAL);\n#ifdef 
LIBICAL_API2\n\ticalerror_set_errors_are_fatal(0);\n#else\n\ticalerror_errors_are_fatal = 0;\n#endif\n\tlocalzone = icaltimezone_get_builtin_timezone(tz);\n\n\tif (localzone == NULL)\n\t\treturn 0;\n\n\tnow = time(NULL);\n\tdate_limit = now + DATE_LIMIT;\n\n\trt = icalrecurrencetype_from_string(rrule);\n\n\tstart = icaltime_from_timet_with_zone(dtstart, 0, NULL);\n\ticaltimezone_convert_time(&start, icaltimezone_get_utc_timezone(), localzone);\n\n\titr = (struct icalrecur_iterator_impl *) icalrecur_iterator_new(rt, start);\n\n\tnext = icalrecur_iterator_next(itr);\n\n\t/* Compute the total number of occurrences.\n\t * Breaks out if the total number of allowed occurrences is exceeded */\n\twhile (!icaltime_is_null_time(next) &&\n\t       (icaltime_as_timet(next) < date_limit)) {\n\t\tnum_resv++;\n\t\tnext = icalrecur_iterator_next(itr);\n\t}\n\ticalrecur_iterator_free(itr);\n\n\treturn num_resv;\n#else\n\n\tif (rrule == NULL)\n\t\treturn 1;\n\n\treturn 0;\n#endif\n}\n\n/**\n * @brief\n * \tGet the occurrence as defined by the given recurrence rule,\n * \tindex, and start time. 
This function assumes that the\n * \tdtstart time passed in is the one to start the occurrence from.\n *\n * @par\tNOTE: This function should be made reentrant such that\n * \tit can be looped over without having to loop over every occurrence\n * \tover and over again.\n *\n * @param[in] rrule - The recurrence rule as defined by the user\n * @param[in] dtstart - The start time from which to start\n * @param[in] tz - The timezone associated to the recurrence rule\n * @param[in] idx - The index of the occurrence to start counting from\n *\n * @return \ttime_t\n * @retval\tThe date of the next occurrence or -1 if the date exceeds libical's\n * \t\tUnix time in 2038\n *\n */\ntime_t\nget_occurrence(char *rrule, time_t dtstart, char *tz, int idx)\n{\n#ifdef LIBICAL\n\tstruct icalrecurrencetype rt;\n\tstruct icaltimetype start;\n\ticaltimezone *localzone;\n\tstruct icaltimetype next;\n\tstruct icalrecur_iterator_impl *itr;\n\tint i;\n\ttime_t next_occr = dtstart;\n\n\tif (rrule == NULL)\n\t\treturn dtstart;\n\n\tif (tz == NULL)\n\t\treturn -1;\n\n\ticalerror_clear_errno();\n\n\ticalerror_set_error_state(ICAL_PARSE_ERROR, ICAL_ERROR_NONFATAL);\n#ifdef LIBICAL_API2\n\ticalerror_set_errors_are_fatal(0);\n#else\n\ticalerror_errors_are_fatal = 0;\n#endif\n\tlocalzone = icaltimezone_get_builtin_timezone(tz);\n\n\tif (localzone == NULL)\n\t\treturn -1;\n\n\trt = icalrecurrencetype_from_string(rrule);\n\n\tstart = icaltime_from_timet_with_zone(dtstart, 0, NULL);\n\ticaltimezone_convert_time(&start, icaltimezone_get_utc_timezone(), localzone);\n\tnext = start;\n\n\titr = (struct icalrecur_iterator_impl *) icalrecur_iterator_new(rt, start);\n\t/* Skip as many occurrences as specified by idx */\n\tfor (i = 0; i < idx && !icaltime_is_null_time(next); i++)\n\t\tnext = icalrecur_iterator_next(itr);\n\n\tif (!icaltime_is_null_time(next)) {\n\t\ticaltimezone_convert_time(&next, localzone,\n\t\t\t\t\t  icaltimezone_get_utc_timezone());\n\t\tnext_occr = icaltime_as_timet(next);\n\t} 
else\n\t\tnext_occr = -1; /* If reached end of possible date-time return -1 */\n\ticalrecur_iterator_free(itr);\n\n\treturn next_occr;\n#else\n\treturn dtstart;\n#endif\n}\n\n/**\n * @brief\n * \tCheck if a recurrence rule is valid and consistent.\n * \tThe recurrence rule is verified against a start date and checks\n * \tthat the frequency of the recurrence matches the duration of the\n * \tsubmitted reservation. If the duration of a reservation exceeds the\n * \tgranularity of the frequency then an error message is displayed.\n *\n * @par The recurrence rule is checked to contain a COUNT or an UNTIL.\n *\n * @par\tNote that the PBS_TZID environment variable HAS to be set for the occurrence's\n * \tdates to be correctly computed.\n *\n * @param[in] rrule - The recurrence rule to unroll\n * @param[in] dtstart - The start time associated to the reservation (1st occurrence)\n * @param[in] dtend - The end time associated to the reservation (1st occurrence)\n * @param[in] tz - The timezone associated to the recurrence rule\n * @param[out] err_code - A pointer to the error code to return. Codes are defined in pbs_error.h\n *\n * @return\tint\n * @retval\tThe total number of occurrences that the recurrence rule and start date\n * \t\tdefine. 
1 for an advance reservation.\n *\n */\nint\ncheck_rrule(char *rrule, time_t dtstart, time_t dtend, char *tz, int *err_code)\n{\n\n#ifdef LIBICAL /* Standing Reservation Recurrence */\n\tint count = 1;\n\tstruct icalrecurrencetype rt;\n\tstruct icaltimetype start;\n\tstruct icaltimetype next;\n\tstruct icaltimetype first;\n\tstruct icaltimetype prev;\n\tstruct icalrecur_iterator_impl *itr;\n\ticaltimezone *localzone;\n\tint time_err = 0;\n\tint i;\n\tlong min_occr_duration = -1;\n\tlong tmp_occr_duration = 0;\n\tlong duration;\n\n\t*err_code = 0;\n\ticalerror_clear_errno();\n\n\ticalerror_set_error_state(ICAL_PARSE_ERROR, ICAL_ERROR_NONFATAL);\n#ifdef LIBICAL_API2\n\ticalerror_set_errors_are_fatal(0);\n#else\n\ticalerror_errors_are_fatal = 0;\n#endif\n\n\tif (tz == NULL || rrule == NULL)\n\t\treturn 0;\n\n\tlocalzone = icaltimezone_get_builtin_timezone(tz);\n\t/* If the timezone info directory is not accessible\n\t * then bail\n\t */\n\tif (localzone == NULL) {\n\t\t*err_code = PBSE_BAD_ICAL_TZ;\n\t\treturn 0;\n\t}\n\n\trt = icalrecurrencetype_from_string(rrule);\n\n\t/* Check if by_day rules are defined and valid\n\t * the first item in the array of by_* rule\n\t * determines whether the item exists or not.\n\t */\n\tfor (i = 0; rt.by_day[i] < 8; i++) {\n\t\tif (rt.by_day[i] <= 0) {\n\t\t\t*err_code = PBSE_BAD_RRULE_SYNTAX;\n\t\t\treturn 0;\n\t\t}\n\t}\n\n\t/* Check if by_hour rules are defined and valid\n\t * the first item in the array of by_* rule\n\t * determines whether the item exists or not.\n\t */\n\tfor (i = 0; rt.by_hour[i] < 25; i++) {\n\t\tif (rt.by_hour[i] < 0) {\n\t\t\t*err_code = PBSE_BAD_RRULE_SYNTAX;\n\t\t\treturn 0;\n\t\t}\n\t}\n\n\t/* Check if frequency was correctly set */\n\tif (rt.freq == ICAL_NO_RECURRENCE) {\n\t\t*err_code = PBSE_BAD_RRULE_SYNTAX;\n\t\treturn 0;\n\t}\n\t/* Check if the rest of the by_* rules are defined\n\t * and valid.\n\t * currently no support for\n\t * BYMONTHDAY, BYYEARDAY, BYSECOND,\n\t * BYMINUTE, BYWEEKNO, or 
BYSETPOS\n\t *\n\t */\n\tif (rt.by_second[0] < 61 ||    /* by_second is negative such as in -10 */\n\t    rt.by_minute[0] < 61 ||    /* by_minute is negative such as in -10 */\n\t    rt.by_year_day[0] < 367 || /* a year day is defined */\n\t    rt.by_month_day[0] < 31 || /* a month day is defined */\n\t    rt.by_week_no[0] < 52 ||   /* a week number is defined */\n\t    rt.by_set_pos[0] < 367) {  /* a set pos is defined */\n\t\t*err_code = PBSE_BAD_RRULE_SYNTAX;\n\t\treturn 0;\n\t}\n\n\t/* Require that either a COUNT or UNTIL be passed. But not both. */\n\tif ((rt.count == 0 && icaltime_is_null_time(rt.until)) ||\n\t    (rt.count != 0 && !icaltime_is_null_time(rt.until))) {\n\t\t*err_code = PBSE_BAD_RRULE_SYNTAX2; /* Undefined iCalendar syntax. A valid COUNT or UNTIL is required */\n\t\treturn 0;\n\t}\n\n\tstart = icaltime_from_timet_with_zone(dtstart, 0, NULL);\n\ticaltimezone_convert_time(&start, icaltimezone_get_utc_timezone(), localzone);\n\n\titr = (struct icalrecur_iterator_impl *) icalrecur_iterator_new(rt, start);\n\n\tduration = dtend - dtstart;\n\n\t/* First check if the syntax of the iCalendar rule is valid */\n\tnext = icalrecur_iterator_next(itr);\n\n\t/* Catch case where first occurrence date is in the past */\n\tif (icaltime_is_null_time(next)) {\n\t\t*err_code = PBSE_BADTSPEC;\n\t\ticalrecur_iterator_free(itr);\n\t\treturn 0;\n\t}\n\n\tfirst = next;\n\tprev = first;\n\n\tfor (next = icalrecur_iterator_next(itr); !icaltime_is_null_time(next); next = icalrecur_iterator_next(itr), count++) {\n\t\t/* The interval duration between two occurrences\n\t\t * is the time between the start of an occurrence and the\n\t\t * start of the next one\n\t\t */\n\t\ttmp_occr_duration = icaltime_as_timet(next) - icaltime_as_timet(prev);\n\n\t\t/* Set the minimum time interval between occurrences */\n\t\tif (min_occr_duration == -1)\n\t\t\tmin_occr_duration = tmp_occr_duration;\n\t\telse if (tmp_occr_duration > 0 &&\n\t\t\t tmp_occr_duration < 
min_occr_duration)\n\t\t\tmin_occr_duration = tmp_occr_duration;\n\n\t\tprev = next;\n\t}\n\t/* clean up */\n\ticalrecur_iterator_free(itr);\n\n\tif (icalerrno != ICAL_NO_ERROR) {\n\t\t*err_code = PBSE_BAD_RRULE_SYNTAX; /*  Undefined iCalendar syntax */\n\t\treturn 0;\n\t}\n\n\t/* Then check if the duration fits in the frequency rule */\n\tswitch (rt.freq) {\n\n\t\tcase ICAL_SECONDLY_RECURRENCE: {\n\t\t\tif (duration > 1) {\n#ifdef NAS /* localmod 005 */\n\t\t\t\ticalerrno = ICAL_BADARG_ERROR;\n#else\n\t\t\t\ticalerrno = 1;\n#endif\t\t\t\t\t\t\t\t     /* localmod 005 */\n\t\t\t\t*err_code = PBSE_BAD_RRULE_SECONDLY; /* SECONDLY recurrence duration cannot exceed 1 second */\n\t\t\t\ttime_err++;\n\t\t\t}\n\t\t\tbreak;\n\t\t}\n\t\tcase ICAL_MINUTELY_RECURRENCE: {\n\t\t\tif (duration > 60) {\n#ifdef NAS /* localmod 005 */\n\t\t\t\ticalerrno = ICAL_BADARG_ERROR;\n#else\n\t\t\t\ticalerrno = 1;\n#endif\t\t\t\t\t\t\t\t     /* localmod 005 */\n\t\t\t\t*err_code = PBSE_BAD_RRULE_MINUTELY; /* MINUTELY recurrence duration cannot exceed 1 minute */\n\t\t\t\ttime_err++;\n\t\t\t}\n\t\t\tbreak;\n\t\t}\n\t\tcase ICAL_HOURLY_RECURRENCE: {\n\t\t\tif (duration > (60 * 60)) {\n#ifdef NAS /* localmod 005 */\n\t\t\t\ticalerrno = ICAL_BADARG_ERROR;\n#else\n\t\t\t\ticalerrno = 1;\n#endif\t\t\t\t\t\t\t\t   /* localmod 005 */\n\t\t\t\t*err_code = PBSE_BAD_RRULE_HOURLY; /* HOURLY recurrence duration cannot exceed 1 hour */\n\t\t\t\ttime_err++;\n\t\t\t}\n\t\t\tbreak;\n\t\t}\n\t\tcase ICAL_DAILY_RECURRENCE: {\n\t\t\tif (duration > (60 * 60 * 24)) {\n#ifdef NAS /* localmod 005 */\n\t\t\t\ticalerrno = ICAL_BADARG_ERROR;\n#else\n\t\t\t\ticalerrno = 1;\n#endif\t\t\t\t\t\t\t\t  /* localmod 005 */\n\t\t\t\t*err_code = PBSE_BAD_RRULE_DAILY; /* DAILY recurrence duration cannot exceed 24 hours */\n\t\t\t\ttime_err++;\n\t\t\t}\n\t\t\tbreak;\n\t\t}\n\t\tcase ICAL_WEEKLY_RECURRENCE: {\n\t\t\tif (duration > (60 * 60 * 24 * 7)) {\n#ifdef NAS /* localmod 005 */\n\t\t\t\ticalerrno = 
ICAL_BADARG_ERROR;\n#else\n\t\t\t\ticalerrno = 1;\n#endif\t\t\t\t\t\t\t\t   /* localmod 005 */\n\t\t\t\t*err_code = PBSE_BAD_RRULE_WEEKLY; /* WEEKLY recurrence duration cannot exceed 1 week */\n\t\t\t\ttime_err++;\n\t\t\t}\n\t\t\tbreak;\n\t\t}\n\t\tcase ICAL_MONTHLY_RECURRENCE: {\n\t\t\tif (duration > (60 * 60 * 24 * 30)) {\n#ifdef NAS /* localmod 005 */\n\t\t\t\ticalerrno = ICAL_BADARG_ERROR;\n#else\n\t\t\t\ticalerrno = 1;\n#endif\t\t\t\t\t\t\t\t    /* localmod 005 */\n\t\t\t\t*err_code = PBSE_BAD_RRULE_MONTHLY; /* MONTHLY recurrence duration cannot exceed 1 month */\n\t\t\t\ttime_err++;\n\t\t\t}\n\t\t\tbreak;\n\t\t}\n\t\tcase ICAL_YEARLY_RECURRENCE: {\n\t\t\tif (duration > (60 * 60 * 24 * 365)) {\n#ifdef NAS /* localmod 005 */\n\t\t\t\ticalerrno = ICAL_BADARG_ERROR;\n#else\n\t\t\t\ticalerrno = 1;\n#endif\t\t\t\t\t\t\t\t   /* localmod 005 */\n\t\t\t\t*err_code = PBSE_BAD_RRULE_YEARLY; /* YEARLY recurrence duration cannot exceed 1 year */\n\t\t\t\ttime_err++;\n\t\t\t}\n\t\t\tbreak;\n\t\t}\n\t\tdefault: {\n\t\t\ticalerror_set_errno(ICAL_MALFORMEDDATA_ERROR);\n\t\t\treturn 0;\n\t\t}\n\t}\n\n\tif (time_err)\n\t\treturn 0;\n\n\t/* If the requested reservation duration exceeds\n\t * the occurrence's duration then print an error\n\t * message and return */\n\tif (count != 1 && duration > min_occr_duration) {\n\t\t*err_code = PBSE_BADTSPEC; /*  Bad Time Specification(s) */\n\t\treturn 0;\n\t}\n\n\treturn count;\n\n#else\n\n\t*err_code = PBSE_BAD_RRULE_SYNTAX; /* iCalendar is undefined */\n\treturn 0;\n#endif\n}\n\n/**\n * @brief\n * \tDisplays the occurrences in a two-column format:\n * \tthe first column corresponds to the occurrence date and time\n * \tthe second column corresponds to the reserved execvnode\n *\n * @par\tNOTE: This is currently only used by pbs_rstat and only\n * \tfor testing purposes; it is not used in a production environment.\n *\n * @param[in] rrule - The recurrence rule considered\n * @param[in] dtstart - The date start of the recurrence\n * 
@param[in] seq_execvnodes - The condensed form of execvnodes/occurrence\n * @param[in] tz - The timezone associated to the recurrence rule\n * @param[in] ridx - The index of the occurrence from which to start displaying information\n * @param[in] count - The total number of occurrences considered\n *\n */\nvoid\ndisplay_occurrences(char *rrule, time_t dtstart, char *seq_execvnodes, char *tz, int ridx, int count)\n{\n\n#ifdef LIBICAL\n\tchar **short_xc;\n\tchar **tofree;\n\tchar *tt_str;\n\ttime_t next = 0;\n\tint i = 1;\n\n\tif (tz == NULL || rrule == NULL)\n\t\treturn;\n\n\tif (get_num_occurrences(rrule, dtstart, tz) == 0)\n\t\treturn;\n\n\tshort_xc = (char **) unroll_execvnode_seq(seq_execvnodes, &tofree);\n\n\tprintf(\"Occurrence Dates\\t Occurrence Execvnode:\\n\");\n\tfor (; ridx <= count && next != -1; ridx++) {\n\t\tnext = get_occurrence(rrule, dtstart, tz, i);\n\t\ti++;\n\t\ttt_str = ctime(&next);\n\t\t/* ctime adds a newline at the end, strip it */\n\t\ttt_str[strlen(tt_str) - 1] = '\\0';\n\t\tprintf(\"%2s %2s\\n\", tt_str, short_xc[ridx - 1]);\n\t}\n\tfree_execvnode_seq(tofree);\n\tfree(short_xc);\n#endif\n}\n\n/**\n * @brief\n * \tSet the zoneinfo directory\n *\n * @param[in] path - The path to libical's zoneinfo\n *\n */\nvoid\nset_ical_zoneinfo(char *path)\n{\n#ifdef LIBICAL\n\tstatic int called = 0;\n\tif (path != NULL) {\n\t\tif (called)\n\t\t\tfree_zone_directory();\n\n\t\tset_zone_directory(path);\n\t\tcalled = 1;\n\t}\n#endif\n\treturn;\n}\n\n#ifdef DEBUG\n/**\n * @brief\n *\tdisplay the number of occurrences and each occurrence date.\n *\n * @param[in] rrule - The recurrence rule considered\n * @param[in] dtstart - The date start of the recurrence\n * @param[in] tz - The timezone associated to the recurrence rule\n *\n */\nvoid\ntest_it(char *rrule, time_t dtstart, char *tz)\n{\n\tint no;\n\ttime_t next;\n\tint i;\n\n\tno = get_num_occurrences(rrule, dtstart, tz);\n\tprintf(\"Number of occurrences = %d\\n\", no);\n\tnext = dtstart;\n\tfor (i = 0; i 
< no; i++) {\n\t\tnext = get_occurrence(rrule, dtstart, tz, i + 1);\n\t\tprintf(\"Next occurrence on %s\", ctime(&next));\n\t}\n\treturn;\n}\n\n/**\n * @brief\n *\tprint usage and wrapper function for test_it.\n *\n * @param[in] argc - num of args\n * @param[in] argv - argument list\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t1\tfailure\n *\n */\nint\ntest_main(int argc, char *argv[])\n{\n\tchar *rrule;\n\tchar *tz;\n\ttime_t dtstart;\n\tint err_code;\n\n\tif (argc < 2) {\n\t\tprintf(\"Usage: test_it <rrule>\\n\");\n\t\treturn 1;\n\t}\n\n\ttz = getenv(\"PBS_TZID\");\n\tdtstart = time(NULL);\n\n\trrule = argv[1];\n\t/* apply to your local configuration */\n\tset_ical_zoneinfo(\"/usr/local/pbs/lib/ical/zoneinfo\");\n\ttest_it(rrule, dtstart, tz);\n\tprintf(\"check_rrule returned %d\\n\", check_rrule(rrule, dtstart, 0, tz, &err_code));\n\tprintf(\"get_num_occurrences = %d\\n\", get_num_occurrences(rrule, dtstart, tz));\n\n\treturn 0;\n}\n#endif\n"
  },
  {
    "path": "src/lib/Libutil/pbs_idx.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h>\n\n#include \"pbs_idx.h\"\n#include \"avltree.h\"\n#include <stddef.h>\n#include <stdlib.h>\n#include <string.h>\n\n/* iteration context structure, opaque to application */\ntypedef struct _iter_ctx {\n\tAVL_IX_DESC *idx; /* pointer to idx */\n\tAVL_IX_REC *pkey; /* pointer to key used while iteration */\n} iter_ctx;\n\n/**\n * @brief\n *\tCreate an empty index\n *\n * @param[in] - flags  - index flags like duplicates allowed, or case insensitive compare\n * @param[in] - keylen - length of key in index (can be 0 for default size)\n *\n * @return void *\n * @retval !NULL - success\n * @retval NULL  - failure\n *\n */\nvoid *\npbs_idx_create(int flags, int keylen)\n{\n\tvoid *idx = NULL;\n\n\tidx = malloc(sizeof(AVL_IX_DESC));\n\tif (idx == NULL)\n\t\treturn NULL;\n\n\tif (avl_create_index(idx, flags, keylen)) {\n\t\tfree(idx);\n\t\treturn NULL;\n\t}\n\n\treturn idx;\n}\n\n/**\n * @brief\n *\tdestroy index\n *\n * @param[in] - idx - pointer to index\n *\n * @return void\n *\n */\nvoid\npbs_idx_destroy(void *idx)\n{\n\tif (idx != NULL) {\n\t\tavl_destroy_index(idx);\n\t\tfree(idx);\n\t\tidx = NULL;\n\t}\n}\n\n/**\n * @brief\n *\tadd entry in index\n *\n * @param[in] - idx  - pointer to index\n * @param[in] - key  - key of entry\n * @param[in] - data - data of entry\n *\n * @return int\n * @retval PBS_IDX_RET_OK   - 
success\n * @retval PBS_IDX_RET_FAIL - failure\n *\n */\nint\npbs_idx_insert(void *idx, void *key, void *data)\n{\n\tAVL_IX_REC *pkey;\n\n\tif (idx == NULL || key == NULL)\n\t\treturn PBS_IDX_RET_FAIL;\n\n\tpkey = avlkey_create(idx, key);\n\tif (pkey == NULL)\n\t\treturn PBS_IDX_RET_FAIL;\n\n\tpkey->recptr = data;\n\tif (avl_add_key(pkey, idx) != AVL_IX_OK) {\n\t\tfree(pkey);\n\t\treturn PBS_IDX_RET_FAIL;\n\t}\n\tfree(pkey);\n\treturn PBS_IDX_RET_OK;\n}\n\n/**\n * @brief\n *\tdelete entry from index\n *\n * @param[in] - idx - pointer to index\n * @param[in] - key - key of entry\n *\n * @return int\n * @retval PBS_IDX_RET_OK   - success\n * @retval PBS_IDX_RET_FAIL - failure\n *\n */\nint\npbs_idx_delete(void *idx, void *key)\n{\n\tAVL_IX_REC *pkey;\n\n\tif (idx == NULL || key == NULL)\n\t\treturn PBS_IDX_RET_FAIL;\n\n\tpkey = avlkey_create(idx, key);\n\tif (pkey == NULL)\n\t\treturn PBS_IDX_RET_FAIL;\n\n\tpkey->recptr = NULL;\n\tavl_delete_key(pkey, idx);\n\tfree(pkey);\n\treturn PBS_IDX_RET_OK;\n}\n\n/**\n * @brief\n *\tdelete exact entry from index using given context\n *\n * @param[in] - ctx - pointer to context used while\n *                    deleting exact entry in index\n *\n * @return int\n * @retval PBS_IDX_RET_OK   - success\n * @retval PBS_IDX_RET_FAIL - failure\n *\n */\nint\npbs_idx_delete_byctx(void *ctx)\n{\n\titer_ctx *pctx = (iter_ctx *) ctx;\n\n\tif (pctx == NULL || pctx->idx == NULL || pctx->pkey == NULL)\n\t\treturn PBS_IDX_RET_FAIL;\n\n\tavl_delete_key(pctx->pkey, pctx->idx);\n\treturn PBS_IDX_RET_OK;\n}\n\n/**\n * @brief\n *\tfind or iterate entry in index\n *\n * @param[in]     - idx  - pointer to index\n * @param[in/out] - key  - key of the entry\n *                         if *key is NULL then this routine will\n *                         return the first entry in index\n * @param[in/out] - data - data of the entry\n * @param[in/out] - ctx  - context to be set for iteration\n *                         can be NULL, if caller doesn't want\n 
*                         iteration context\n *                         if *ctx is not NULL, then this routine\n *                         will return next entry in index\n *\n * @return int\n * @retval PBS_IDX_RET_OK   - success\n * @retval PBS_IDX_RET_FAIL - failure\n *\n * @note\n * \tctx should be free'd after use, using pbs_idx_free_ctx()\n *\n */\nint\npbs_idx_find(void *idx, void **key, void **data, void **ctx)\n{\n\titer_ctx *pctx;\n\tAVL_IX_REC *pkey;\n\tint rc = AVL_IX_FAIL;\n\n\tif (idx == NULL || data == NULL)\n\t\treturn PBS_IDX_RET_FAIL;\n\n\tif (ctx != NULL && *ctx != NULL) {\n\t\tpctx = (iter_ctx *) *ctx;\n\n\t\t*data = NULL;\n\t\tif (key)\n\t\t\t*key = NULL;\n\n\t\tif (pctx->idx != idx || pctx->pkey == NULL)\n\t\t\treturn PBS_IDX_RET_FAIL;\n\n\t\tif (avl_next_key(pctx->pkey, pctx->idx) != AVL_IX_OK)\n\t\t\treturn PBS_IDX_RET_FAIL;\n\n\t\t*data = pctx->pkey->recptr;\n\t\tif (key)\n\t\t\t*key = &pctx->pkey->key;\n\n\t\treturn PBS_IDX_RET_OK;\n\t} else {\n\t\t*data = NULL;\n\t\tpkey = avlkey_create(idx, key ? *key : NULL);\n\t\tif (pkey == NULL)\n\t\t\treturn PBS_IDX_RET_FAIL;\n\n\t\tif (key != NULL && *key != NULL) {\n\t\t\trc = avl_find_key(pkey, idx);\n\t\t} else {\n\t\t\tavl_first_key(idx);\n\t\t\trc = avl_next_key(pkey, idx);\n\t\t}\n\n\t\tif (rc == AVL_IX_OK) {\n\t\t\t*data = pkey->recptr;\n\t\t\tif (key != NULL && *key == NULL)\n\t\t\t\t*key = &pkey->key;\n\t\t\tif (ctx != NULL) {\n\t\t\t\tpctx = (iter_ctx *) malloc(sizeof(iter_ctx));\n\t\t\t\tif (pctx == NULL) {\n\t\t\t\t\tfree(pkey);\n\t\t\t\t\treturn PBS_IDX_RET_FAIL;\n\t\t\t\t}\n\t\t\t\tpctx->idx = idx;\n\t\t\t\tpctx->pkey = pkey;\n\t\t\t\t*ctx = (void *) pctx;\n\n\t\t\t\treturn PBS_IDX_RET_OK;\n\t\t\t}\n\t\t}\n\t\tfree(pkey);\n\t}\n\n\treturn rc == AVL_IX_OK ? 
PBS_IDX_RET_OK : PBS_IDX_RET_FAIL;\n}\n\n/**\n * @brief\n *\tfree given iteration context\n *\n * @param[in] - ctx - pointer to context for iteration\n *\n * @return void\n *\n */\nvoid\npbs_idx_free_ctx(void *ctx)\n{\n\tif (ctx != NULL) {\n\t\titer_ctx *pctx = (iter_ctx *) ctx;\n\t\tfree(pctx->pkey);\n\t\tfree(ctx);\n\t\tctx = NULL;\n\t}\n}\n\n/**\n * @brief check whether idx is empty and has no key associated with it\n *\n * @param[in] idx - pointer to avl index\n *\n * @return bool\n * @retval 1 - idx is empty\n * @retval 0 - idx is not empty\n */\nbool\npbs_idx_is_empty(void *idx)\n{\n\tvoid *idx_ctx = NULL;\n\tchar **data = NULL;\n\n\tif (pbs_idx_find(idx, NULL, (void **) &data, &idx_ctx) == PBS_IDX_RET_OK) {\n\t\t/* an entry was found; free the iteration context allocated by pbs_idx_find */\n\t\tpbs_idx_free_ctx(idx_ctx);\n\t\treturn 0;\n\t}\n\n\treturn 1;\n}\n"
  },
  {
    "path": "src/lib/Libutil/pbs_secrets.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _PBS_SECRETS_H\n#define _PBS_SECRETS_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include \"ticket.h\"\n\n/**\n * @file\tpbs_secrets.c\n * @brief\n *\tKey and iv to use with the AES encryption routines.\n */\nunsigned char pbs_aes_key[][16] = {\n\t{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},\n\t{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},\n\t{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},\n\t{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},\n\t{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},\n\t{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},\n\t{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},\n\t{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},\n\t{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},\n\t{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},\n\t{0x00, 0x00, 0x00, 
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},\n\t{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},\n\t{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},\n\t{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},\n\t{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},\n\t{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}};\n\nunsigned char pbs_aes_iv[][16] = {\n\t{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},\n\t{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},\n\t{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},\n\t{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},\n\t{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},\n\t{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},\n\t{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00},\n\t{0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00}};\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _PBS_SECRETS_H */\n"
  },
  {
    "path": "src/lib/Libutil/range.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <ctype.h>\n#include <errno.h>\n#include <string.h>\n#include <log.h>\n#include <libutil.h>\n#include \"range.h\"\n\n/**\n * @brief\n *\t\tnew_range - allocate and initialize a range structure\n *\n * @return\tnewly allocated range\n * @retval\tNULL\t: on error\n *\n */\nrange *\nnew_range(int start, int end, int step, int count, range *next)\n{\n\trange *r;\n\n\tif ((r = malloc(sizeof(range))) == NULL) {\n\t\tlog_err(errno, __func__, RANGE_MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\tr->start = start;\n\tr->end = end;\n\tr->step = step;\n\tr->count = count;\n\tr->next = next;\n\n\treturn r;\n}\n\n/**\n * @brief\n *\t\tfree_range_list - free a list of ranges\n *\n * @param[in,out]\tr\t-\trange list to be freed.\n *\n * @return\tnothing\n *\n */\nvoid\nfree_range_list(range *r)\n{\n\trange *cur_r;\n\trange *next_r;\n\n\tcur_r = r;\n\n\twhile (cur_r != NULL) {\n\t\tnext_r = cur_r->next;\n\t\tfree_range(cur_r);\n\n\t\tcur_r = next_r;\n\t}\n}\n\n/**\n * @brief\n *\t\tfree_range - free a range structure\n *\n * @param[in,out]\tr\t-\trange structure to be freed.\n *\n * @return\tnothing\n *\n */\nvoid\nfree_range(range *r)\n{\n\tif (r == NULL)\n\t\treturn;\n\n\tfree(r);\n}\n\n/**\n * @brief\n *\t\tdup_range_list - duplicate a range list\n *\n * @param[in]\told_r\t-\trange to 
dup;\n *\n * @return\tnewly duplicated range list\n *\n */\nrange *\ndup_range_list(range *old_r)\n{\n\trange *new_r;\n\trange *new_r_head = NULL;\n\trange *cur_old_r;\n\trange *prev_new_r = NULL;\n\n\tif (old_r == NULL)\n\t\treturn NULL;\n\n\tcur_old_r = old_r;\n\n\twhile (cur_old_r != NULL) {\n\t\tnew_r = dup_range(cur_old_r);\n\t\tif (new_r == NULL) {\n\t\t\tfree_range_list(new_r_head);\n\t\t\treturn NULL;\n\t\t}\n\n\t\tif (prev_new_r != NULL)\n\t\t\tprev_new_r->next = new_r;\n\t\telse\n\t\t\tnew_r_head = new_r;\n\n\t\tcur_old_r = cur_old_r->next;\n\t\tprev_new_r = new_r;\n\t}\n\n\treturn new_r_head;\n}\n\n/**\n * @brief\n *\t\tdup_range - duplicate a range structure\n *\n * @param[in]\told_r\t-\trange structure to duplicate\n *\n * @return\tnew range structure\n * @retval\tNULL\t: on error\n *\n */\nrange *\ndup_range(range *old_r)\n{\n\trange *new_r;\n\n\tif (old_r == NULL)\n\t\treturn NULL;\n\n\tnew_r = new_range(old_r->start, old_r->end, old_r->step, old_r->count, NULL);\n\n\treturn new_r;\n}\n\n/**\n * @brief\n *\trange_count - count number of elements in a given range structure\n *\n * @param[in]\tr - range structure to count\n *\n * @return int\n * @retval # - number of elements in range\n *\n */\nint\nrange_count(range *r)\n{\n\tint count = 0;\n\trange *cur = r;\n\n\twhile (cur != NULL) {\n\t\tcount += cur->count;\n\t\tcur = cur->next;\n\t}\n\treturn count;\n}\n\n/**\n * @brief\n *\t\trange_parse - parse string of ranges delimited by comma\n *\n * @param[in]\tstr\t-\tstring of ranges to parse\n *\n * @return\tlist of ranges\n * @retval\tNULL\t: on error\n *\n */\nrange *\nrange_parse(char *str)\n{\n\trange *head = NULL;\n\trange *cur = NULL;\n\trange *r;\n\tchar *p;\n\tchar *endp;\n\tint ret;\n\n\tif (str == NULL)\n\t\treturn NULL;\n\n\tp = str;\n\n\tdo {\n\t\tint start;\n\t\tint end;\n\t\tint step;\n\t\tint count;\n\n\t\tret = parse_subjob_index(p, &endp, &start, &end, &step, &count);\n\t\tif (!ret) {\n\t\t\tr = new_range(start, end, step, count, 
NULL);\n\n\t\t\tif (r == NULL) {\n\t\t\t\tfree_range_list(head);\n\t\t\t\treturn NULL;\n\t\t\t}\n\n\t\t\t/* ensure the end value is contained in the range */\n\t\t\twhile (range_contains(r, end) == 0 && end > start)\n\t\t\t\tend--;\n\n\t\t\tif (range_contains(r, end))\n\t\t\t\tr->end = end;\n\t\t\telse { /* range is majorly hosed */\n\t\t\t\tfree_range_list(head);\n\t\t\t\tfree_range(r);\n\t\t\t\treturn NULL;\n\t\t\t}\n\n\t\t\tif (head == NULL)\n\t\t\t\thead = cur = r;\n\t\t\telse {\n\t\t\t\tcur->next = r;\n\t\t\t\tcur = r;\n\t\t\t}\n\n\t\t\tp = endp;\n\t\t}\n\t} while (!ret);\n\n\tif (ret == -1) {\n\t\tfree_range_list(head);\n\t\treturn NULL;\n\t}\n\n\treturn head;\n}\n\n/**\n * @brief\n *\t\trange_next_value - get the next value in a range\n *\t\t\t   if a current value is given, return the next\n *\t\t\t   if no current value is given, return the first\n *\n * @param[in]\tr\t-\tthe range to return the value from\n * @param[in]\tcur_value\t-\tthe current value or if negative, no value\n *\n * @return\tthe next value in the range\n * @retval\t-1\t: on error\n * @retval\t-2\t: if there is no next value\n *\n */\nint\nrange_next_value(range *r, int cur_value)\n{\n\trange *cur;\n\tint ret_val = -2;\n\n\tif (r == NULL)\n\t\treturn -1;\n\n\tif (cur_value < 0)\n\t\treturn r->start;\n\n\tif (range_contains(r, cur_value) == 0)\n\t\treturn -1;\n\n\tcur = r;\n\twhile (cur != NULL && ret_val < 0) {\n\t\tif (range_contains_single(cur, cur_value)) {\n\t\t\tif (cur_value == cur->end) {\n\t\t\t\tif (cur->next != NULL)\n\t\t\t\t\tret_val = cur->next->start;\n\t\t\t} else\n\t\t\t\tret_val = cur_value + cur->step;\n\t\t}\n\t\tcur = cur->next;\n\t}\n\n\treturn ret_val;\n}\n/**\n * @brief\n *\t\trange_contains - find if a range contains a value\n *\n * @param[in]\tr\t-\trange to search\n * @param[in]\tval\t-\tval to find\n *\n * @return\tint\n * @retval\t1\t: if the range contains the value\n * @retval\t0\t: if the range does not contain the value\n *\n */\nint\nrange_contains(range 
*r, int val)\n{\n\trange *cur;\n\tint found = 0;\n\n\tcur = r;\n\n\twhile (cur != NULL && !found) {\n\t\tif (range_contains_single(cur, val))\n\t\t\tfound = 1;\n\n\t\tcur = cur->next;\n\t}\n\n\tif (found)\n\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\trange_contains_single - is a value contained in a single range\n *\t\t\t\t  structure\n *\n * @param[in]\tr\t-\tthe range structure\n * @param[in]\tval\t-\tthe value\n *\n * @return\tint\n * @retval\t1\t: if the value is in the single range structure\n * @retval\t0\t: if not\n *\n */\nint\nrange_contains_single(range *r, int val)\n{\n\tif (r == NULL)\n\t\treturn 0;\n\n\tif (val >= r->start && val <= r->end)\n\t\tif ((val - r->start) % r->step == 0)\n\t\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\trange_remove_value - remove a value from a range list\n *\n * @param[in,out]\tr\t-\tpointer to pointer to the head of the range\n * @param[in]\tval\t-\tvalue to remove\n *\n * @return\tint\n * @retval\t1\t: on success\n * @retval\t0\t: if not done (see note)\n *\n * @par\tNOTE: value of r might be changed if the last value of first range\n *\t\t  is removed. 
Removing a value from the middle of a range\n *\t      splits that range structure into two\n *\n */\n\nint\nrange_remove_value(range **r, int val)\n{\n\trange *cur;\n\trange *prev = NULL;\n\tint done = 0;\n\n\tif (r == NULL || *r == NULL || val < 0)\n\t\treturn 0;\n\n\tif (!range_contains(*r, val))\n\t\treturn 0;\n\n\tcur = *r;\n\twhile (cur != NULL && !done) {\n\t\tif (cur->start == val && cur->end == val) {\n\t\t\tif (prev == NULL) /* we're removing the first range struct in the list */\n\t\t\t\t*r = (*r)->next;\n\t\t\telse\n\t\t\t\tprev->next = cur->next;\n\t\t\tfree_range(cur);\n\t\t\treturn 1;\n\t\t} else if (cur->start == val) {\n\t\t\tcur->start += cur->step;\n\t\t\tcur->count--;\n\t\t\tdone = 1;\n\t\t} else if (cur->end == val) {\n\t\t\tcur->end -= cur->step;\n\t\t\tcur->count--;\n\t\t\tdone = 1;\n\t\t} else if ((val > cur->start) && (val < cur->end)) {\n\t\t\trange *next_range = NULL;\n\t\t\tif ((next_range = new_range(0, 0, 1, 0, NULL)) == NULL)\n\t\t\t\treturn 0;\n\n\t\t\tnext_range->count = (cur->end - val) / cur->step;\n\t\t\tnext_range->step = cur->step;\n\t\t\tnext_range->start = val + cur->step;\n\t\t\tnext_range->end = cur->end;\n\t\t\tnext_range->next = cur->next;\n\n\t\t\tcur->count = (val - cur->start) / cur->step;\n\t\t\tcur->end = val - cur->step;\n\t\t\tcur->next = next_range;\n\t\t\treturn 1;\n\t\t}\n\n\t\tif (!done) {\n\t\t\tprev = cur;\n\t\t\tcur = cur->next;\n\t\t}\n\t}\n\n\tif (done) {\n\t\t/* we removed the last value from this section of the range */\n\t\tif (cur->start > cur->end) {\n\t\t\tif (prev == NULL) /* we're removing the first range struct in the list */\n\t\t\t\t*r = (*r)->next;\n\t\t\telse\n\t\t\t\tprev->next = cur->next;\n\t\t\tfree_range(cur);\n\t\t}\n\t\treturn 1;\n\t}\n\n\t/* we didn't remove it */\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\trange_add_value - add a value to a range list by adding it to the end\n *\t\t\t  of the list\n *\n * @param[in,out]\tr\t-\tpointer to pointer to the head of the range\n * 
@param[in]\tval\t-\tvalue to add\n * @param[in]\trange_step\t-\tstep to use when the list is empty and a new range must be created\n *\n * @return\tint\n * @retval\t1\t: if successfully added value\n * @retval\t0\t: if val is in range, or val not successfully added\n *\n */\nint\nrange_add_value(range **r, int val, int range_step)\n{\n\trange *cur; /* current range structure in list */\n\trange *next_range;\n\n\tif (r == NULL)\n\t\treturn 0;\n\n\tif (*r == NULL) {\n\t\t/* If there are no range structs in the list; create the new range for first value */\n\t\trange *first_range = NULL;\n\t\tif ((first_range = new_range(val, val, range_step, 1, NULL)) == NULL) {\n\t\t\treturn 0;\n\t\t}\n\t\t*r = first_range;\n\t\treturn 1;\n\t}\n\n\tcur = *r;\n\t/* Value falls before the first sub-range */\n\tif (cur != NULL && val < cur->start) {\n\t\tif (val == cur->start - cur->step) {\n\t\t\tcur->start -= cur->step;\n\t\t\tcur->count++;\n\t\t\treturn 1;\n\t\t} else {\n\t\t\t/* Add new range as the first element with same value */\n\t\t\trange *first_range = NULL;\n\t\t\tif ((first_range = new_range(val, val, cur->step, 1, cur)) == NULL) {\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t\t*r = first_range;\n\t\t\treturn 1;\n\t\t}\n\t}\n\n\t/* The value that needs to be added is in between the cur and the next sub-ranges  */\n\n\twhile (cur != NULL && cur->next != NULL) {\n\n\t\tnext_range = cur->next;\n\t\tif ((val > cur->end) && (val < next_range->start)) {\n\n\t\t\tif ((val == cur->end + cur->step) && (val == next_range->start - next_range->step)) {\n\t\t\t\t/* Adding this value would coalesce these two sub-ranges  */\n\t\t\t\tcur->end = next_range->end;\n\t\t\t\tcur->next = next_range->next;\n\t\t\t\tcur->count += next_range->count + 1;\n\t\t\t\tfree_range(next_range);\n\t\t\t\treturn 1;\n\n\t\t\t} else if (val == cur->end + cur->step) {\n\t\t\t\t/* Value falls in the cur sub-range end  */\n\t\t\t\tcur->end += cur->step;\n\t\t\t\tcur->count++;\n\t\t\t\treturn 1;\n\n\t\t\t} else if (val == next_range->start - next_range->step) 
{\n\t\t\t\t/* Value falls in the next sub-range start  */\n\t\t\t\tnext_range->start -= next_range->step;\n\t\t\t\tnext_range->count++;\n\t\t\t\treturn 1;\n\n\t\t\t} else {\n\t\t\t\t/* Value falls in this range; add new mid-range with same value */\n\t\t\t\trange *mid_range = NULL;\n\t\t\t\tif ((mid_range = new_range(val, val, cur->step, 1, cur->next)) == NULL) {\n\t\t\t\t\treturn 0;\n\t\t\t\t}\n\t\t\t\tcur->next = mid_range;\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t}\n\t\tcur = next_range;\n\t}\n\n\t/* Coming out of the loop and check the extreme right corner case */\n\n\tif (cur != NULL && val > cur->end) {\n\t\tif (val == cur->end + cur->step) {\n\t\t\tcur->end += cur->step;\n\t\t\tcur->count++;\n\t\t\treturn 1;\n\t\t} else {\n\t\t\t/* Add new range at the end with same value */\n\t\t\trange *end_range = NULL;\n\t\t\tif ((end_range = new_range(val, val, cur->step, 1, NULL)) == NULL) {\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t\tcur->next = end_range;\n\t\t\treturn 1;\n\t\t}\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\trange_intersection - create an intersection between two ranges\n *\n * @param[in]\tr1\t-\trange 1\n * @param[in]\tr2\t-\trange 2\n *\n * @return\ta new range which is the intersection of r1 and r2\n * @retval\tNULL\t: on error or intersection is the null set\n *\n */\nrange *\nrange_intersection(range *r1, range *r2)\n{\n\trange *intersection = NULL;\n\tint cur = 0;\n\n\tif (r1 == NULL || r2 == NULL)\n\t\treturn NULL;\n\n\tcur = range_next_value(r1, -1);\n\n\twhile (cur >= 0) {\n\t\tif (range_contains(r2, cur))\n\t\t\trange_add_value(&intersection, cur, r2->step);\n\t\tcur = range_next_value(r1, cur);\n\t}\n\treturn intersection;\n}\n\n/**\n * @brief\n *\t\tparse_subjob_index - parse a subjob index range of the form:\n *\t\tSTART[-END[:STEP]][,...]\n *\t\tEach call parses up to the first comma or if no comma the end of\n *\t\tthe string or a ']'\n * @param[in]\tpc\t-\trange of sub jobs\n * @param[out]\tep\t-\tptr to character that terminated scan (comma or 
new-line)\n * @param[out]\tpstart\t-\tfirst number of range\n * @param[out]\tpend\t-\tmaximum value in range\n * @param[out]\tpstep\t-\tstepping factor\n * @param[out]\tpcount -\tnumber of entries in this section of the range\n *\n * @return\tinteger\n * @retval\t0\t- success\n * @retval\t1\t- no (more) indices are found\n * @retval\t-1\t- parse/format error\n */\nint\nparse_subjob_index(char *pc, char **ep, int *pstart, int *pend, int *pstep, int *pcount)\n{\n\tint start;\n\tint end;\n\tint step;\n\tchar *eptr;\n\n\tif (pc == NULL)\n\t\treturn (-1);\n\n\twhile (isspace((int) *pc) || (*pc == ','))\n\t\tpc++;\n\tif ((*pc == '\\0') || (*pc == ']')) {\n\t\t*pcount = 0;\n\t\t*ep = pc;\n\t\treturn (1);\n\t}\n\n\tif (!isdigit((int) *pc)) {\n\t\t/* Invalid format, 1st char not digit */\n\t\treturn (-1);\n\t}\n\tstart = (int) strtol(pc, &eptr, 10);\n\tpc = eptr;\n\twhile (isspace((int) *pc))\n\t\tpc++;\n\tif ((*pc == ',') || (*pc == '\\0') || (*pc == ']')) {\n\t\t/* \"X,\" or \"X\" case */\n\t\tend = start;\n\t\tstep = 1;\n\t\tif (*pc == ',')\n\t\t\tpc++;\n\t} else {\n\t\t/* should be X-Y[:Z] case */\n\t\tif (*pc != '-') {\n\t\t\t/* Invalid format, not in X-Y format */\n\t\t\t*pcount = 0;\n\t\t\treturn (-1);\n\t\t}\n\t\tend = (int) strtol(++pc, &eptr, 10);\n\t\tpc = eptr;\n\t\tif (isspace((int) *pc))\n\t\t\tpc++;\n\t\tif ((*pc == '\\0') || (*pc == ',') || (*pc == ']')) {\n\t\t\tstep = 1;\n\t\t} else if (*pc++ != ':') {\n\t\t\t/* Invalid format, not in X-Y:z format */\n\t\t\t*pcount = 0;\n\t\t\treturn (-1);\n\t\t} else {\n\t\t\twhile (isspace((int) *pc))\n\t\t\t\tpc++;\n\t\t\tstep = (int) strtol(pc, &eptr, 10);\n\t\t\tpc = eptr;\n\t\t\twhile (isspace((int) *pc))\n\t\t\t\tpc++;\n\t\t\tif (*pc == ',')\n\t\t\t\tpc++;\n\t\t}\n\n\t\t/* y must be greater than x for a range and z must be greater than 0 */\n\t\tif ((start >= end) || (step < 1))\n\t\t\treturn (-1);\n\t}\n\n\t*ep = pc;\n\t/* now compute the number of entries ((end + 1) - start + (step - 1)) / step = (end - start + step) / 
step */\n\t*pcount = (end - start + step) / step;\n\t*pstart = start;\n\t*pend = end;\n\t*pstep = step;\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tReturns a string representation of a range structure.\n *\n * @param[in]\tr\t-\tThe range for which a string representation is expected\n\n * @par MT-safe:\tno\n *\n * @return\ta string representation of the range\n * @retval\t\"\"\t: on any malloc error\n *\n */\nchar *\nrange_to_str(range *r)\n{\n\tstatic char *range_str = NULL;\n\tstatic int size = 0;\n\trange *cur_r = NULL;\n\tchar numbuf[128];\n\tint len;\n\n\tif (r == NULL)\n\t\treturn \"\";\n\n\tif (range_str == NULL) {\n\t\tif ((range_str = malloc(INIT_RANGE_ARR_SIZE + 1)) == NULL) {\n\t\t\tlog_err(errno, __func__, RANGE_MEM_ERR_MSG);\n\t\t\treturn \"\";\n\t\t}\n\t\tsize = INIT_RANGE_ARR_SIZE;\n\t}\n\trange_str[0] = '\\0';\n\n\tfor (cur_r = r; cur_r != NULL; cur_r = cur_r->next) {\n\t\tif (cur_r->count > 1)\n\t\t\tsprintf(numbuf, \"%d-%d\", cur_r->start, cur_r->end);\n\t\telse\n\t\t\tsprintf(numbuf, \"%d\", cur_r->start);\n\n\t\tif (cur_r->step > 1 && cur_r->count > 1) {\n\t\t\tif (pbs_strcat(&range_str, &size, numbuf) == NULL)\n\t\t\t\treturn \"\";\n\t\t\tsprintf(numbuf, \":%d\", cur_r->step);\n\t\t\tif (pbs_strcat(&range_str, &size, numbuf) == NULL)\n\t\t\t\treturn \"\";\n\t\t} else if (pbs_strcat(&range_str, &size, numbuf) == NULL)\n\t\t\treturn \"\";\n\n\t\tif (pbs_strcat(&range_str, &size, \",\") == NULL)\n\t\t\treturn \"\";\n\t}\n\tlen = strlen(range_str);\n\tif (range_str[len - 1] == ',')\n\t\trange_str[len - 1] = '\\0';\n\n\treturn range_str;\n}\n\n/**\n * @brief\tJoin r1 and r2 by removing the duplicate values \n * \n * @param[in] r1 range of sub job\n * @param[in] r2 range of sub job\n * @return range\n * @retval r3 superset of r1 and r2 ranges\n * @retval NULL if not done\n */\nrange *\nrange_join(range *r1_in, range *r2_in)\n{\n\trange *r1 = dup_range_list(r1_in);\n\trange *r2 = dup_range_list(r2_in);\n\trange *ri = NULL;\n\trange *r3 = NULL;\n\tint 
cur = 0;\n\tri = range_intersection(r1, r2);\n\tif (ri != NULL) {\n\t\t/* XOR */\n\t\tcur = range_next_value(ri, -1);\n\t\twhile (cur >= 0) {\n\t\t\tif (range_contains(r1, cur)) {\n\t\t\t\trange_remove_value(&r1, cur);\n\t\t\t}\n\t\t\tif (range_contains(r2, cur)) {\n\t\t\t\trange_remove_value(&r2, cur);\n\t\t\t}\n\t\t\tcur = range_next_value(ri, cur);\n\t\t}\n\t\t/* JOIN r1, r2 and ri in r3 */\n\t\tr3 = dup_range_list(r1);\n\t\tcur = range_next_value(ri, -1);\n\t\twhile (cur >= 0) {\n\t\t\trange_add_value(&r3, cur, 1);\n\t\t\tcur = range_next_value(ri, cur);\n\t\t}\n\t\tcur = range_next_value(r2, -1);\n\t\twhile (cur >= 0) {\n\t\t\trange_add_value(&r3, cur, 1);\n\t\t\tcur = range_next_value(r2, cur);\n\t\t}\n\t\tfree_range_list(ri);\n\t} else {\n\t\tr3 = dup_range_list(r1);\n\t\tcur = range_next_value(r2, -1);\n\t\twhile (cur >= 0) {\n\t\t\trange_add_value(&r3, cur, 1);\n\t\t\tcur = range_next_value(r2, cur);\n\t\t}\n\t}\n\tfree_range_list(r1);\n\tfree_range_list(r2);\n\treturn r3;\n}\n"
  },
  {
    "path": "src/lib/Libutil/thread_utils.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tthread_utils.c\n * @brief\n * thread_utils.c - contains utility functions for multi-threading using pthread\n */\n\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <pthread.h>\n\n/**\n * @brief\tinitialize a mutex attr object\n *\n * @param[out]\tattr - the attr object to initialize\n *\n * @return int\n * @retval 0 for Success\n * @retval -1 for Error\n */\nint\ninit_mutex_attr_recursive(pthread_mutexattr_t *attr)\n{\n\tif (pthread_mutexattr_init(attr) != 0) {\n\t\treturn -1;\n\t}\n\n\tif (pthread_mutexattr_settype(attr,\n#if defined(linux)\n\t\t\t\t      PTHREAD_MUTEX_RECURSIVE_NP\n#else\n\t\t\t\t      PTHREAD_MUTEX_RECURSIVE\n#endif\n\t\t\t\t      )) {\n\t\treturn -1;\n\t}\n\n\treturn 0;\n}\n"
  },
  {
    "path": "src/lib/Libutil/work_task.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\twork_task.c\n * @brief\n * work_task.c - contains functions to deal with the server's task list\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"portability.h\"\n#include <stdlib.h>\n#include <time.h>\n#include <sys/param.h>\n#include <sys/types.h>\n#include <sys/wait.h>\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"work_task.h\"\n\n/* Global Data Items: */\n\nextern pbs_list_head task_list_immed;\t   /* list of tasks that can execute now */\nextern pbs_list_head task_list_interleave; /* list of tasks that can execute after interleaving other tasks */\nextern pbs_list_head task_list_timed;\t   /* list of tasks that have set start times */\nextern pbs_list_head task_list_event;\t   /* list of tasks responding to an event */\nextern int svr_delay_entry;\nextern time_t time_now;\n\n/**\n *\n * @brief\n * \tCreates a task of type 'type', 'event_id', and when task is dispatched,\n *\texecute func with argument 'parm'. 
The task is added to\n *\t'task_list_immed' if 'type' is WORK_Immed; otherwise, the task is added to\n *\t'task_list_event'.\n *\n * @param[in]\ttype - of task\n * @param[in]\tevent_id - event id of the task\n * @param[in]\tfunc - function that will be executed on behalf of the task\n * @param[in]\tparm - parameter to 'func'\n *\n * @return struct work_task *\n * @retval <a work task entry>\t- for success\n * @retval NULL\t\t\t- for any error\n *\n */\nstruct work_task *\nset_task(enum work_type type, long event_id, void (*func)(struct work_task *), void *parm)\n{\n\tstruct work_task *pnew;\n\tstruct work_task *pold;\n\n\tpnew = (struct work_task *) malloc(sizeof(struct work_task));\n\tif (pnew == NULL)\n\t\treturn NULL;\n\tCLEAR_LINK(pnew->wt_linkevent);\n\tCLEAR_LINK(pnew->wt_linkobj);\n\tCLEAR_LINK(pnew->wt_linkobj2);\n\tpnew->wt_event = event_id;\n\tpnew->wt_event2 = NULL;\n\tpnew->wt_type = type;\n\tpnew->wt_func = func;\n\tpnew->wt_parm1 = parm;\n\tpnew->wt_parm2 = NULL;\n\tpnew->wt_parm3 = NULL;\n\tpnew->wt_aux = 0;\n\tpnew->wt_aux2 = 0;\n\n\tif (type == WORK_Immed)\n\t\tappend_link(&task_list_immed, &pnew->wt_linkevent, pnew);\n\telse if (type == WORK_Interleave)\n\t\tappend_link(&task_list_interleave, &pnew->wt_linkevent, pnew);\n\telse if (type == WORK_Timed) {\n\t\tpold = (struct work_task *) GET_NEXT(task_list_timed);\n\t\twhile (pold) {\n\t\t\tif (pold->wt_event > pnew->wt_event)\n\t\t\t\tbreak;\n\t\t\tpold = (struct work_task *) GET_NEXT(pold->wt_linkevent);\n\t\t}\n\t\tif (pold)\n\t\t\tinsert_link(&pold->wt_linkevent, &pnew->wt_linkevent, pnew,\n\t\t\t\t    LINK_INSET_BEFORE);\n\t\telse\n\t\t\tappend_link(&task_list_timed, &pnew->wt_linkevent, pnew);\n\t} else\n\t\tappend_link(&task_list_event, &pnew->wt_linkevent, pnew);\n\treturn (pnew);\n}\n\n/**\n *\n * @brief\n * \tConvert a work task to the type specified.\n *\n * @param[in]\tptask\t- task being converted\n * @param[in]\twtype\t- work task type to convert\n *\n * @return int\n * @retval 0: success\n 
* @retval -1: failure\n */\nint\nconvert_work_task(struct work_task *ptask, enum work_type wtype)\n{\n\tpbs_list_head *list;\n\n\tif (!ptask)\n\t\treturn -1;\n\n\tswitch (wtype) {\n\t\tcase WORK_Immed:\n\t\t\tlist = &task_list_immed;\n\t\t\tbreak;\n\t\tcase WORK_Timed:\n\t\t\tlist = &task_list_timed;\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tlist = &task_list_event;\n\t}\n\n\tdelete_link(&ptask->wt_linkevent);\n\tappend_link(list, &ptask->wt_linkevent, ptask);\n\n\treturn 0;\n}\n\n/**\n *\n * @brief\n * \tDispatches a work task found on a work list\n *\n * @param[in]\tptask\t- task being dispatched\n *\n * @note:\n *\tThis also deletes the work task entry, calls the associated function\n *\twith the parameters from the work task entry, and then frees the\n *\tentry.\n */\n\nvoid\ndispatch_task(struct work_task *ptask)\n{\n\tdelete_link(&ptask->wt_linkevent);\n\tdelete_link(&ptask->wt_linkobj);\n\tdelete_link(&ptask->wt_linkobj2);\n\tif (ptask->wt_func)\n\t\tptask->wt_func(ptask); /* dispatch process function */\n\t(void) free(ptask);\n}\n\n/**\n *\n * @brief\n * \tUnlinks and frees a work_task structure.\n *\n * @param[in]\tptask\t- the task entry being deleted.\n */\n\nvoid\ndelete_task(struct work_task *ptask)\n{\n\tdelete_link(&ptask->wt_linkobj);\n\tdelete_link(&ptask->wt_linkobj2);\n\tdelete_link(&ptask->wt_linkevent);\n\t(void) free(ptask);\n}\n\n/**\n * @brief\n *\tCheck if some task in the specified task list\n *\thas a wt_parm1 matching 'parm1'\n *\tand wt_func matching 'func'\n *\n * @param[in]\ttask_list - header of task list to be searched\n * @param[in]\tparm1\t- parameter being matched.\n * @param[in]\tfunc\t- function being matched.\n *\n * @return work task\n * @retval\t!NULL if 'parm1' and 'func' were matched\n * @retval\tNULL otherwise\n */\nstatic struct work_task *\nfind_worktask_by_parm_func(pbs_list_head task_list, void *parm1, void *func)\n{\n\tstruct work_task *ptask;\n\tstruct work_task *ptask_next;\n\n\tfor (ptask = GET_NEXT(task_list); ptask; 
ptask = ptask_next) {\n\t\tptask_next = GET_NEXT(ptask->wt_linkevent);\n\n\t\tif (parm1 && (ptask->wt_parm1 != parm1))\n\t\t\tcontinue;\n\t\tif (func && (ptask->wt_func != func))\n\t\t\tcontinue;\n\n\t\treturn ptask;\n\t}\n\n\treturn NULL;\n}\n\n/**\n * @brief\n *\tCheck if some task in any of the task lists (task_list_event,\n *\ttask_list_timed, task_list_immed)\n *\thas a wt_parm1 matching 'parm1'\n *\tand wt_func matching 'func'\n *\n * @param[in]\twtype - work task type enum, -1 to match all\n * @param[in]\tparm1\t- parameter being matched. NULL to ignore this field.\n * @param[in]\tfunc\t- function being matched. NULL to ignore this field.\n *\n * @return work task\n * @retval\t!NULL if 'parm1' and 'func' were matched\n * @retval\tNULL otherwise\n */\nstruct work_task *\nfind_work_task(enum work_type wtype, void *parm1, void *func)\n{\n\tstruct work_task *ptask;\n\n\tif (wtype == -1 || wtype == WORK_Immed) {\n\t\tptask = find_worktask_by_parm_func(task_list_immed, parm1, func);\n\t\tif (ptask)\n\t\t\treturn ptask;\n\t}\n\n\tif (wtype == -1 || wtype == WORK_Timed) {\n\t\tptask = find_worktask_by_parm_func(task_list_timed, parm1, func);\n\t\tif (ptask)\n\t\t\treturn ptask;\n\t}\n\n\tif (wtype == -1 || (wtype != WORK_Timed && wtype != WORK_Immed)) {\n\t\tptask = find_worktask_by_parm_func(task_list_event, parm1, func);\n\t\tif (ptask)\n\t\t\treturn ptask;\n\t}\n\n\treturn NULL;\n}\n\n/**\n *\n * @brief\n *\tDelete task found in task_list_event, task_list_immed, or\n *\ttask_list_timed by either its function pointer, parm1, or both.\n * \tAt least one of the function pointer or parm1 must not be NULL.\n *\n * @param[in]\tparm1\t- wt->parm1 parameter to match (can be NULL)\n * @param[in]\tfunc\t- function pointer to match (can be NULL)\n * @param[in]\toption  - option is used to decide whether the\n *\t\t\t  caller wants to delete all tasks that\n *\t\t\t  match parm1 values or just one.\n *\n * @return none\n */\nvoid\ndelete_task_by_parm1_func(void *parm1, 
void (*func)(struct work_task *), enum wtask_delete_option option)\n{\n\tstruct work_task *ptask;\n\tstruct work_task *ptask_next;\n\tpbs_list_head task_lists[] = {task_list_event, task_list_timed, task_list_immed};\n\tint i;\n\n\tif (parm1 == NULL && func == NULL)\n\t\treturn;\n\n\tfor (i = 0; i < 3; i++) {\n\t\tfor (ptask = (struct work_task *) GET_NEXT(task_lists[i]); ptask; ptask = ptask_next) {\n\t\t\tptask_next = (struct work_task *) GET_NEXT(ptask->wt_linkevent);\n\n\t\t\tif ((parm1 != NULL) && (ptask->wt_parm1 != parm1))\n\t\t\t\tcontinue;\n\t\t\tif ((func != NULL) && (ptask->wt_func != func))\n\t\t\t\tcontinue;\n\n\t\t\tdelete_task(ptask);\n\t\t\tif (option == DELETE_ONE)\n\t\t\t\treturn;\n\t\t}\n\t}\n}\n\n/**\n *\n * @brief\n *\tCheck if some task in any of the task lists (task_list_event,\n *\ttask_list_timed, task_list_immed) has a wt_parm1 matching 'parm1'.\n *\n * @param[in]\tparm1\t- parameter being matched.\n *\n * @return int\n * @retval\t1 if 'parm1' was matched\n * @retval\t0 otherwise\n */\nint\nhas_task_by_parm1(void *parm1)\n{\n\tstruct work_task *ptask;\n\n\t/* only 1 ptask can be possibly matched */\n\tptask = find_work_task(-1, parm1, NULL);\n\tif (ptask)\n\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tLooks for the next work task to perform:\n *\t1. If svr_delay_entry is set, then a delayed task in the\n *\t   task_list_event is ready so find and process it.\n *\t2. All items on the immediate list, then\n *\t3. 
All items on the timed task list which have expired times\n *\n * @return time_t\n * @retval The amount of time till next task\n */\n\ntime_t\ndefault_next_task(void)\n{\n\ttime_t delay;\n\tstruct work_task *nxt;\n\tstruct work_task *ptask;\n\tstruct work_task *last_interleave_task;\n\t/*\n\t * tilwhen is the basic \"idle\" time if there is nothing pending sooner\n\t * for the Server (timed-events, call scheduler, IO)\n\t * It used to be 10, but that caused a delay of outgoing TPP packets\n\t * in some cases, and we don't burn too many extra cycles doing nothing\n\t * if the delay is shortened to 2.\n\t */\n\ttime_t tilwhen = 2; /* basic cycle time */\n\n\ttime_now = time(NULL);\n\n\tif (svr_delay_entry) {\n\t\tptask = (struct work_task *) GET_NEXT(task_list_event);\n\t\twhile (ptask) {\n\t\t\tnxt = (struct work_task *) GET_NEXT(ptask->wt_linkevent);\n\t\t\tif (ptask->wt_type == WORK_Deferred_Cmp)\n\t\t\t\tdispatch_task(ptask);\n\t\t\tptask = nxt;\n\t\t}\n\t\tsvr_delay_entry = 0;\n\t}\n\n\twhile ((ptask = (struct work_task *) GET_NEXT(task_list_immed)) != NULL)\n\t\tdispatch_task(ptask);\n\n\tlast_interleave_task = (struct work_task *) GET_PRIOR(task_list_interleave);\n\twhile ((ptask = (struct work_task *) GET_NEXT(task_list_interleave)) != NULL) {\n\t\tdispatch_task(ptask);\n\t\tif (ptask == last_interleave_task)\n\t\t\tbreak;\n\t}\n\n\tif (GET_NEXT(task_list_interleave)) {\n\t\t/* more tasks waiting, wait the least */\n\t\ttilwhen = 0;\n\t}\n\n\twhile ((ptask = (struct work_task *) GET_NEXT(task_list_timed)) != NULL) {\n\t\tif ((delay = ptask->wt_event - time_now) > 0) {\n\t\t\tif (tilwhen > delay)\n\t\t\t\ttilwhen = delay;\n\t\t\tbreak;\n\t\t} else {\n\t\t\tdispatch_task(ptask); /* will delete link */\n\t\t}\n\t}\n\n\treturn (tilwhen);\n}\n"
  },
  {
    "path": "src/lib/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nSUBDIRS = \\\n\tLibattr \\\n\tLibdb \\\n\tLiblog \\\n\tLibnet \\\n\tLibifl \\\n\tLibpbs \\\n\tLibpython \\\n\tLibsite \\\n\tLibsec \\\n\tLibtpp \\\n\tLibutil \\\n\tLibauth \\\n\tLiblicensing \\\n\tLibjson\n"
  },
  {
    "path": "src/modules/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nSUBDIRS = python\n"
  },
  {
    "path": "src/modules/python/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\npbs_v1_sodir = $(libdir)/python/altair\n\npbs_v1_so_LTLIBRARIES = \\\n\t_pbs_v1.la \\\n\t_pbs_ifl.la\n\nnodist_pbs_v1_so_PYTHON = $(top_builddir)/src/lib/Libifl/pbs_ifl.py\n\n_pbs_v1_la_SOURCES = \\\n\t$(top_builddir)/src/lib/Libattr/attr_fn_acl.c \\\n\t$(top_builddir)/src/lib/Libattr/attr_fn_entlim.c \\\n\t$(top_builddir)/src/lib/Libattr/attr_fn_resc.c \\\n\t$(top_builddir)/src/lib/Libattr/attr_fn_time.c \\\n\t$(top_builddir)/src/lib/Libattr/attr_node_func.c \\\n\t$(top_builddir)/src/lib/Libattr/job_attr_def.c \\\n\t$(top_builddir)/src/lib/Libattr/node_attr_def.c \\\n\t$(top_builddir)/src/lib/Libattr/queue_attr_def.c \\\n\t$(top_builddir)/src/lib/Libattr/resc_def_all.c \\\n\t$(top_builddir)/src/lib/Libattr/resv_attr_def.c \\\n\t$(top_builddir)/src/lib/Libattr/sched_attr_def.c \\\n\t$(top_builddir)/src/lib/Libattr/svr_attr_def.c \\\n\t$(top_srcdir)/src/lib/Libpython/shared_python_utils.c \\\n\t$(top_srcdir)/src/lib/Libpython/common_python_utils.c \\\n\t$(top_srcdir)/src/lib/Libpython/pbs_python_external.c \\\n\t$(top_srcdir)/src/lib/Libpython/pbs_python_svr_external.c \\\n\t$(top_srcdir)/src/lib/Libpython/module_pbs_v1.c \\\n\t$(top_srcdir)/src/lib/Libpython/pbs_python_svr_internal.c \\\n\t$(top_srcdir)/src/lib/Libpython/pbs_python_svr_size_type.c \\\n\t$(top_srcdir)/src/lib/Libpython/pbs_python_import_types.c 
\\\n\t$(top_srcdir)/src/lib/Libutil/entlim.c \\\n\t$(top_srcdir)/src/lib/Libutil/hook.c \\\n\t$(top_srcdir)/src/lib/Libutil/work_task.c \\\n\t$(top_srcdir)/src/server/resc_attr.c \\\n\t$(top_srcdir)/src/server/setup_resc.c \\\n\t$(top_srcdir)/src/server/vnparse.c \\\n\t$(top_srcdir)/src/server/jattr_get_set.c \\\n\t$(top_srcdir)/src/server/sattr_get_set.c \\\n\t$(top_srcdir)/src/server/qattr_get_set.c \\\n\t$(top_srcdir)/src/server/nattr_get_set.c \\\n\t$(top_srcdir)/src/server/rattr_get_set.c \\\n\tpbs_v1_module_init.c\n_pbs_v1_la_CPPFLAGS = \\\n\t-I$(top_srcdir)/src/include \\\n\t@PYTHON_INCLUDES@ \\\n\t@KRB5_CFLAGS@\n_pbs_v1_la_LDFLAGS = \\\n\t-module \\\n\t-shared \\\n\t-export-dynamic \\\n\t-avoid-version\n_pbs_v1_la_LIBADD = \\\n\t$(top_builddir)/src/lib/Libpbs/libpbs.la\n\nnodist__pbs_ifl_la_SOURCES = $(top_builddir)/src/lib/Libifl/pbs_ifl_wrap.c\n_pbs_ifl_la_CPPFLAGS = \\\n\t-I$(top_srcdir)/src/include \\\n\t@PYTHON_INCLUDES@\n_pbs_ifl_la_LDFLAGS = \\\n\t-module \\\n\t-shared \\\n\t-export-dynamic \\\n\t-avoid-version \\\n\t-no-undefined \\\n\t@PYTHON_LDFLAGS@\n_pbs_ifl_la_LIBADD = \\\n\t$(top_builddir)/src/lib/Libpbs/libpbs.la\n\npbsmoduledir = $(libdir)/python/altair/pbs\n\npbsmodule_PYTHON = pbs/__init__.py\n\npbsv1moduledir = $(libdir)/python/altair/pbs/v1\n\npbsv1module_PYTHON = \\\n\tpbs/v1/__init__.py \\\n\tpbs/v1/_attr_types.py \\\n\tpbs/v1/_base_types.py \\\n\tpbs/v1/_exc_types.py \\\n\tpbs/v1/_export_types.py \\\n\tpbs/v1/_svr_types.py \\\n\tpbs/v1/_pmi_types.py \\\n\tpbs/v1/_pmi_sgi.py \\\n\tpbs/v1/_pmi_cray.py \\\n\tpbs/v1/_pmi_none.py \\\n\tpbs/v1/_pmi_utils.py\n\n\n\npbshooksdir = $(libdir)/python/altair/pbs_hooks\n\ndist_pbshooks_DATA = \\\n\tpbs_hooks/PBS_power.HK \\\n\tpbs_hooks/PBS_power.PY \\\n\tpbs_hooks/PBS_power.CF \\\n\tpbs_hooks/PBS_alps_inventory_check.HK \\\n\tpbs_hooks/PBS_alps_inventory_check.PY \\\n\tpbs_hooks/PBS_xeon_phi_provision.HK \\\n\tpbs_hooks/PBS_xeon_phi_provision.PY \\\n\tpbs_hooks/PBS_cray_atom.HK 
\\\n\tpbs_hooks/PBS_cray_atom.PY \\\n\tpbs_hooks/PBS_cray_atom.CF\n"
  },
  {
    "path": "src/modules/python/pbs/__init__.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n# Allow pbs.v1 (Python support routines) and _pbs_v1 (C support routines)\n# to be callable under \"pbs.\" prefix.\nfrom _pbs_v1 import *\nfrom pbs.v1 import *\n"
  },
  {
    "path": "src/modules/python/pbs/v1/__init__.py",
    "content": "# coding: utf-8\n\"\"\"\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nThe PBS Python V1 package\n\"\"\"\n\n#: the following is used by the embedded interpreter\nfrom ._export_types import EXPORTED_TYPES_DICT\n#: import all types from pbs_v1 C module\nfrom _pbs_v1 import *\n#\nfrom ._base_types import *\nfrom ._exc_types import *\nfrom ._svr_types import *\n\n#: this is Power Management Infrastructure which may not exist on all system\n#: types yet\ntry:\n    from ._pmi_types import *\nexcept ImportError:\n    pass\n"
  },
  {
    "path": "src/modules/python/pbs/v1/_attr_types.py",
    "content": "# coding: utf-8\n\"\"\"\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n\"\"\"\n__doc__ = \"\"\"\nAll Python representation of internal PBS attribute structures.\n\"\"\"\n# All types available when doing a\n__all__ = (\n           'acl_group_enable',\n           'acl_groups',\n           'acl_host_enable',\n           'hosts',\n           'acl_user_enable',\n           'enabled',\n           'from_route_only',\n           'max_array_size',\n           'max_queuable',\n           'max_running'\n           )\n\n#from _pbs_v1 import _attr\n_attr = object\n#                            (BEGIN) QUEUE  ATTRIBUTES\n\nclass acl_group_enable(_attr):\n    pass\n\nclass acl_groups(_attr):\n    pass\n\nclass acl_host_enable(_attr):\n    pass\n\nclass hosts(_attr):\n    pass\n\nclass acl_user_enable(_attr):\n    pass\n\nclass acl_users(_attr):\n    pass\n\nclass enabled(_attr):\n    pass\n\nclass from_route_only(_attr):\n    pass\n\nclass max_array_size(_attr):\n    pass\n\nclass max_queuable(_attr):\n    pass\n\nclass max_running(_attr):\n    pass\n\nclass node_group_key(_attr):\n    pass\n\nclass Priority(_attr):\n    pass\n\n\n\n\n\n#                            (END) QUEUE  ATTRIBUTES\n"
  },
  {
    "path": "src/modules/python/pbs/v1/_base_types.py",
    "content": "# coding: utf-8\n\"\"\"\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n\"\"\"\n__doc__ = \"\"\"\nAll Python based types mapping to PBS attribute types.\n\"\"\"\n\n_ATTRIBUTES_KEY_NAME = 'attributes'\n\n__all__ = ['_generic_attr',\n           'size',\n           'to_bytes',\n           'size_to_kbytes',\n           'duration',\n           'pbs_env',\n           'email_list',\n           'pbs_list',\n           'pbs_bool',\n           'pbs_int',\n           'pbs_str',\n           'pbs_float',\n           'acl',\n           'select',\n           'place',\n           'exec_host',\n           'exec_vnode',\n           'checkpoint',\n           'depend',\n           'group_list',\n           'user_list',\n           'path_list',\n           'path',\n           'sandbox',\n           'hold_types',\n           'keep_files',\n           'mail_points',\n           'staging_list',\n           'range',\n           'state_count',\n           'license_count',\n           'route_destinations',\n           'args',\n           'job_sort_formula',\n           'node_group_key',\n           'version',\n           'software',\n           'priority',\n           'name',\n           'project',\n           'join_path',\n           'PbsAttributeDescriptor',\n           'PbsReadOnlyDescriptor',\n           'pbs_resource',\n           'vchunk',\n           'vnode_state',\n           'vnode_sharing',\n           'vnode_ntype'\n           ]\n\nimport 
_pbs_v1\nimport sys\nimport math\n\n_size = _pbs_v1.svr_types._size\n_LOG = _pbs_v1.logmsg\n_IS_SETTABLE = _pbs_v1.is_attrib_val_settable\n\n\nclass PbsAttributeDescriptor():\n    \"\"\"This class wraps evey PBS attribute into a *DATA* descriptor AND is\n    maintained per instance instead of the default per class.\n\n    Some things to note are:\n      - All attributes values are ensured to be an instance of value_type\n      - if read_only is set then any attempt to set will raise\n        BadAttributeValueError\n      - Add the attribute name to the dictionary 'attributes' on the instance\n        if it exists.\n      - Since a Descriptor is a class level object, to maintain unique values\n        across instances, we maintain an internal dictionary.\n    \"\"\"\n\n    def __init__(\n        self, cls, name, default_value, value_type=None, resc_attr=None,\n        is_entity=0\n    ):\n        \"\"\"\n        \"\"\"\n\n        self._name = name\n        self._value = default_value\n        self._class_name = cls.__name__\n        self._is_entity = is_entity\n\n        #: check if we are resource wrapped in a attribute\n        self._is_resource = False\n        self._resc_attribute = None\n        if resc_attr is not None:\n            self._is_resource = True\n            self._resc_attribute = resc_attr\n\n        #: WARNING value_type must support sequence protocol, see __set__ for\n        #: why\n        if value_type is None:\n            self._value_type = (str,)\n        elif isinstance(value_type, (list, tuple)):\n            self._value_type = value_type\n        else:\n            self._value_type = tuple(value_type)\n        #:\n\n        #: The below is used to keep track of all the descriptors in a Class\n        #: WARNING Purposely not checking exception so as to ensure all Classes\n        #: that use this descriptor define a _ATTRIBUTES_KEY_NAME which is of\n        #: Mapping type.\n\n        __attributes = getattr(cls, _ATTRIBUTES_KEY_NAME)\n  
      __attributes[name] = None\n    #: m(__init__)\n\n    @staticmethod\n    def _get_values_dict(obj):\n        \"\"\"\n        Obtain the values dictionary from the instance (\"obj\") of the class to\n        which the PbsAttributeDescriptors are attached.\n        \"\"\"\n        try:\n            values_dict = object.__getattribute__(obj, \"__pad_values\")\n        except AttributeError:\n            values_dict = {}\n            object.__setattr__(obj, \"__pad_values\", values_dict)\n        return values_dict\n\n    def __get__(self, obj, cls=None):\n        \"\"\"__get__\n        \"\"\"\n\n        #: if accessing from class then return self\n        if obj is None:\n            return self\n\n        values_dict = PbsAttributeDescriptor._get_values_dict(obj)\n        try:\n            value = values_dict[self._name]\n        except KeyError:\n            try:\n                value = self._get_default_value()\n            except Exception as e:\n                _pbs_v1.logmsg(\n                    _pbs_v1.EVENT_ERROR,\n                    f'PbsAttributeDescriptor._get_default_value() failed for '\n                    f'attribute \"{self._name}\": {str(e)}')\n                raise\n            values_dict[self._name] = value\n        return value\n    #: m(__get__)\n\n    def __set__(self, obj, value):\n        \"\"\"__set__\n        \"\"\"\n\n        if not _IS_SETTABLE(self, obj, value):\n            return\n\n        # if in Python (hook script mode), the hook writer has set value\n        # to None, meaning to unset the attribute.\n\n        # 'basestring' exists only in Python 2; fall back to str on Python 3\n        try:\n            basestring\n        except Exception:\n            basestring = str\n\n        if (value is None) and _pbs_v1.in_python_mode():\n            set_value = \"\"\n        elif ((value is None)\n              or (isinstance(value, basestring) and value == \"\")\n              or isinstance(value, self._value_type)\n              or self._is_entity\n              or (hasattr(obj, 
\"_is_entity\")\n                  and getattr(obj, \"_is_entity\"))):\n\n            # no instantiation/transformation of value needed if matching\n            # one of the following cases:\n            #     - value is unset  : (value is None) or (value == \"\")\n            #     - same type as value's type :\n            #                       (isinstance(value, self._value_type)\n            #     - a special entity resource type : self.is_entity is True\n            #                             or parent object is an entity type\n            set_value = value\n        else:\n            if (self._is_resource and isinstance(value, str)\n                    and value[0] == \"@\"):\n                # an indirect resource\n                set_value = value\n            else:\n                set_value = self._value_type[0](value)\n        PbsAttributeDescriptor._get_values_dict(obj)[self._name] = set_value\n    #: m(__set__)\n\n    def _set_resc_atttr(self, resc_attr, is_entity=0):\n        \"\"\"\n        \"\"\"\n        self._resc_attribute = resc_attr\n        self._is_resource = True\n        self._is_entity = is_entity\n\n    #: m(_set_resc_atttr)\n\n    def __delete__(self, obj):\n        \"\"\"__delete__, we just set the attribute value to None\"\"\"\n        PbsAttributeDescriptor._get_values_dict(obj)[self._name] = None\n    #: m(__delete__)\n\n    def _get_default_value(self):\n        \"\"\" get a default value\n        \"\"\"\n\n        if self._value is None:\n            return self._value\n\n        #: otherwise return a new instance of default value\n        if self._value_type[0] == pbs_resource:\n            # following results in the call\n            #\tpbs_resource(self._value, self._is_entity)\n            # (see class pbs_resource __init__ method)\n            s = self._value_type[0](self._value, self._is_entity)\n        else:\n            s = self._value_type[0](self._value)\n        return s\n\n    # the checkValue method here before 
was rolled into _IS_SETTABLE\n\n#: End Class PbsAttributeDescriptor\n\n\nclass PbsReadOnlyDescriptor():\n    \"\"\"This class wraps a generic read only data descriptor. This is a class\n    level descriptor.\n    \"\"\"\n\n    def __init__(self, name, value):\n        \"\"\"\n\n        \"\"\"\n        self._name = name\n        self._value = value\n    #: m(__init__)\n\n    def __get__(self, obj, cls=None):\n        \"\"\"get \"\"\"\n\n        return self._value\n    #: m(__get__)\n\n    def __set__(self, obj, value):\n        \"\"\"set\"\"\"\n        raise BadAttributeValueError(\"<%s> is readonly\" % (self._name,))\n    #: m(__set__)\n\n    def __delete__(self, obj):\n        \"\"\"delete, we just set the attribute value to None\"\"\"\n        raise BadAttributeValueError(\"cannot delete <%s>\" % (self._name,))\n    #: m(__delete__)\n\n    def __str__(self):\n        if isinstance(self._value, dict):\n            return \",\".join(list(self._value.keys()))\n        else:\n            return str(self._value)\n        #\n    #: m(__str__)\n    __repr__ = __str__\n\n#: End Class PbsReadOnlyDescriptor\n\n\n#\nfrom ._exc_types import *\n\n\nclass _generic_attr():\n    \"\"\"A generic attribute\"\"\"\n\n    # _derived_types: What type other than '_generic_attr' will be accepted\n    _derived_types = (str,)\n\n    def __init__(self, value):\n\n        self._value = None\n        if value is not None:\n            if isinstance(value, (str, _generic_attr)):\n                self._value = value\n            else:\n                self._value = value.__class__(value)\n\n        super().__init__()\n    #: m(__init__)\n\n    def __str__(self):\n        \"\"\"String representation of the object\"\"\"\n        return str(self._value)\n    #: m(__str__)\n\n    __repr__ = __str__\n\n#: C(_generic_attr)\n\n#: ---------------------  VALUE TYPES         ---------------------\n# to_bytes: given a _size 'sz' value, returns an integer which is the\n# equivalent number of 
bytes.\n\n\ndef to_bytes(sz):\n\n    s_str = str(sz).rstrip(\"bB\")\n    sl = len(s_str)\n    wordsz = 1\n    if (s_str[sl - 1] == \"w\") or (s_str[sl - 1] == \"W\"):\n        s_str = s_str.rstrip(\"wW\")\n        wordsz = _pbs_v1.wordsize()\n\n    sl = len(s_str)\n    if (s_str[sl - 1] == \"k\") or (s_str[sl - 1] == \"K\"):\n        s_num = int(s_str.rstrip(\"kK\")) * 1024\n    elif (s_str[sl - 1] == \"m\") or (s_str[sl - 1] == \"M\"):\n        s_num = int(s_str.rstrip(\"mM\")) * 1024 * 1024\n    elif (s_str[sl - 1] == \"g\") or (s_str[sl - 1] == \"G\"):\n        s_num = int(s_str.rstrip(\"gG\")) * 1024 * 1024 * 1024\n    elif (s_str[sl - 1] == \"t\") or (s_str[sl - 1] == \"T\"):\n        s_num = int(s_str.rstrip(\"tT\")) * 1024 * 1024 * 1024 * 1024\n    elif (s_str[sl - 1] == \"p\") or (s_str[sl - 1] == \"P\"):\n        s_num = int(s_str.rstrip(\"pP\")) * 1024 * 1024 * 1024 * 1024 * 1024\n    else:\n        s_num = int(s_str)\n\n    s_num *= wordsz\n    return s_num\n\n# transform_sizes: return _size transformation of 'sz1' and 'sz2' that\n# can be fed to the richcompare functions of _size without causing an\n# overflow or rounding up inherent in small values.\n\n\ndef transform_sizes(sz1, sz2):\n\n    s_num = -1\n    s = sz1\n    if isinstance(sz1, (int, size)):\n        s = _size(str(sz1))\n        if s.__le__(_size(\"10kb\")):\n            # make all values at least 10kb to prevent rounding up errors\n            # in normalize_size().  
Here, we make it relative to 1gb.\n            s_num = to_bytes(s) + 1073741824\n            s = _size(s_num)\n\n    o_num = -1\n    o = sz2\n    if isinstance(sz2, (int, size)):\n        o = _size(str(sz2))\n        if o.__le__(_size(\"10kb\")):\n            # make all values at least 10kb to prevent rounding up errors\n            # in normalize_size()\n            o_num = to_bytes(o) + 1073741824\n            o = _size(o_num)\n\n    if s_num == -1 and isinstance(s, _size):\n        s = _size(s.__add__(_size(\"1gb\")))\n    if o_num == -1 and isinstance(o, _size):\n        o = _size(o.__add__(_size(\"1gb\")))\n\n    return [s, o]\n\n\ndef size_to_kbytes(sz):\n    \"\"\"\n    Given a pbs.size value 'sz', return the actual\n    number of kbytes representing the value.\n    \"\"\"\n    return _pbs_v1.size_to_kbytes(sz)\n\n\nclass size(_size):\n    \"\"\"\n    This represents a PBS size type.\n    pbs.size(int)\n    pbs.size(\"int[suffix]\") where suffix is:\n\n         b or  w     bytes or words.\n         kb or kw    Kilo (1024) bytes or words.\n         mb or mw    Mega (1,048,576) bytes or words.\n         gb or gw    Giga (1,073,741,824) bytes or words.\n         tb or tw    Tera (1024 gigabytes) bytes or words.\n         pb or pw    Peta (1,048,576 gigabytes) bytes or words.\n\n    pbs.size instances can be operated on by the + and - operators, and\n    can be compared using the operators ==, !=, >, <, >=, and <=.\n\n        Ex.\n        >> sz = pbs.size(\"10gb\")\n\n        # the sizes are normalized to the lower of\n        # the 2 suffixes.\n        # In this case, 10gb becomes 10240mb\n        # and is added to 10mb\n        >> sz = sz + pbs.size(\"10mb\")\n        10250mb\n\n        # the following prints \"true\" as sz is greater\n        # than 100 bytes.\n        >> if sz > 100:\n               print(\"true\")\n    \"\"\"\n\n    _derived_types = (_size,)\n\n    def __lt__(self, other):\n        so = transform_sizes(self, other)\n        s = so[0]\n        o = so[1]\n\n        s_str = 
str(s).rstrip(\"bB\")\n        o_str = str(o).rstrip(\"bB\")\n\n        if s_str.isdigit() and o_str.isdigit():\n            return(int(s_str) < int(o_str))\n\n        # uses _size's richcompare\n        return s.__lt__(o)\n\n    def __le__(self, other):\n        so = transform_sizes(self, other)\n        s = so[0]\n        o = so[1]\n\n        s_str = str(s).rstrip(\"bB\")\n        o_str = str(o).rstrip(\"bB\")\n\n        if s_str.isdigit() and o_str.isdigit():\n            return(int(s_str) <= int(o_str))\n\n        # uses _size's richcompare\n        return s.__le__(o)\n\n    def __gt__(self, other):\n        so = transform_sizes(self, other)\n        s = so[0]\n        o = so[1]\n\n        s_str = str(s).rstrip(\"bB\")\n        o_str = str(o).rstrip(\"bB\")\n\n        if s_str.isdigit() and o_str.isdigit():\n            return(int(s_str) > int(o_str))\n\n        # uses _size's richcompare\n        return s.__gt__(o)\n\n    def __ge__(self, other):\n        so = transform_sizes(self, other)\n        s = so[0]\n        o = so[1]\n\n        s_str = str(s).rstrip(\"bB\")\n        o_str = str(o).rstrip(\"bB\")\n\n        if s_str.isdigit() and o_str.isdigit():\n            return(int(s_str) >= int(o_str))\n\n        # uses _size's richcompare\n        return s.__ge__(o)\n\n    def __eq__(self, other):\n        so = transform_sizes(self, other)\n        s = so[0]\n        o = so[1]\n\n        s_str = str(s).rstrip(\"bB\")\n        o_str = str(o).rstrip(\"bB\")\n\n        if s_str.isdigit() and o_str.isdigit():\n            return(int(s_str) == int(o_str))\n\n        # uses _size's richcompare\n        return s.__eq__(o)\n\n    def __ne__(self, other):\n        \"\"\"\n        This is called on a <self> != <other> comparison, where\n        <self> is of size type.\n        \"\"\"\n        if not isinstance(other, (int, size)):\n            # if <other> object is not of type 'int', 'long', or 'size',\n            # then it cannot be transformed into size type.\n        
    # So automatically this != comparison should return\n            # True  - yes, they're not equal.\n            return True\n\n        so = transform_sizes(self, other)\n        s = so[0]\n        o = so[1]\n\n        s_str = str(s).rstrip(\"bB\")\n        o_str = str(o).rstrip(\"bB\")\n\n        if s_str.isdigit() and o_str.isdigit():\n            return(int(s_str) != int(o_str))\n\n        # uses _size's richcompare\n        return s.__ne__(o)\n\n    def __add__(self, other):\n        s = self\n        o = other\n        if isinstance(self, (int, size)):\n            s = _size(str(self))\n        if isinstance(other, (int, size)):\n            o = _size(str(other))\n        # uses _size's add function, but trick is return\n        # the \"size\" type so that any comparisons with the\n        # return would look in here for comparison operators\n        # and not in _size's richcompare.\n        return size(s.__add__(o))\n\n    def __sub__(self, other):\n        s = self\n        o = other\n        if isinstance(self, (int, size)):\n            s = _size(str(self))\n        if isinstance(other, (int, size)):\n            o = _size(str(other))\n        # uses _size's subtract function, but trick is return\n        # the \"size\" type so that any comparisons with the\n        # return would look in here for comparison operators\n        # and not in _size's richcompare.\n        return size(s.__sub__(o))\n\n    def __deepcopy__(self, mem):\n        return size(str(self))\n\n\nclass duration(int):\n    \"\"\"\n    Represents an interval or elapsed time object in number of seconds. This is\n    actually derived from a Python int type.\n\n    pbs.duration([[intHours:]intMinutes:]intSeconds[.intMilliseconds])\n    pbs.duration(int)\n    \"\"\"\n    # alternate form (i.e. what type can be used for pbs attribute of this\n    # type. 
For example, walltime is pbs.duration type, but can also be set\n    # using the given _derived_types:\n    _derived_types = (int,)\n\n    def __new__(cls, value):\n        valstr = str(value)\n        # validates against the 'walltime' attribute entry of\n        # the server 'resource' table\n        _pbs_v1.validate_input(\"resc\", \"walltime\", valstr)\n        return int.__new__(cls, _pbs_v1.duration_to_secs(valstr))\n\n    def __init__(self, value):\n        self.duration_str = str(value)\n\n    def __str__(self):\n        return self.duration_str\n\n\ndef replace_char_not_before(str, chr, repl_substr, chr_after_list):\n    \"\"\"\n    Given 'str', replace all occurrences of the single character 'chr' with\n    the replacement substring 'repl_substr', only if 'chr' in 'str' is not\n    succeeded by any of the characters in 'chr_after_list'.\n    Ex. Given str = \"ab\\,c\\d\\'\\e\\\"\\f\\\\,\n              # replace occurrences of \"\\\" with \"\\\\\" as long as it is\n              # not followed by <,>, <'>, <\">,  or <\\>\n              replace_char_not_before(str, \"\\\", \"\\\\\",\n                                        [ ',', '\\'', '\\\"', '\\\\'])  =\n                \"ab\\,c\\\\d\\'\\\\e\\\"\\\\f\\\\\"\n        Here are sample transformations:\n\n         str= ab\\,c\\d\\'\\e\\\"\\f\\\n        rstr= ab\\,c\\\\d\\'\\\\e\\\"\\\\f\\\\\n\n         str= \\ab\\,c\\d\\'\\e\\\"\\f\\\n        rstr= \\\\ab\\,c\\\\d\\'\\\\e\\\"\\\\f\\\\\n\n         str= \\\\ab\\,c\\d\\'\\e\\\"\\f\\\n        rstr= \\\\ab\\,c\\\\d\\'\\\\e\\\"\\\\f\\\\\n\n         str= \\\\ab\\,c\\\\d\\'\\e\\\"\\f\\\n        rstr= \\\\ab\\,c\\\\d\\'\\\\e\\\"\\\\f\\\\\n    \"\"\"\n    i = 0\n    l = len(str)\n    end_index = l - 1\n    s = \"\"\n    while i < l:\n        if ((str[i] != chr) or\n            ((i > 0) and (str[i - 1] == chr) and (str[i] in chr_after_list)) or\n                ((i < end_index) and 
(str[i + 1] in chr_after_list))):\n            s += str[i]\n        else:\n            s += repl_substr\n        i = i + 1\n    return s\n\n\nclass pbs_env(dict):\n    # a list of path where \"\\\" will be converted to \"/\"\n    _attributes_readonly = PbsReadOnlyDescriptor('_attributes_readonly',\n                                                 ['PBS_ENVIRONMENT',\n                                                  'PBS_JOBDIR',\n                                                  'PBS_JOBID',\n                                                  'PBS_JOBNAME',\n                                                  'PBS_NODEFILE',\n                                                  'TMPDIR',\n                                                  'PBS_O_HOME',\n                                                  'PBS_O_HOST',\n                                                  'PBS_O_LANG',\n                                                  'PBS_O_LOGNAME',\n                                                  'PBS_O_MAIL',\n                                                  'PBS_O_PATH',\n                                                  'PBS_O_QUEUE',\n                                                  'PBS_O_SHELL',\n                                                  'PBS_O_SYSTEM',\n                                                  'PBS_O_TZ',\n                                                  'PBS_O_WORKDIR',\n                                                  'PBS_QUEUE'\n                                                  ])\n\n    def __init__(self, value, generic=False):\n        # if generic is True, this means to use pbs_env() type in a\n        # generic way, so that the PBS-related variables (e.g. 
PBS_O*)\n        # are allowed to be modified.\n        self._generic = generic\n        if isinstance(value, str):\n            # temporarily replace \"<esc_char>,\" with something we\n            # don't expect to see: two etx <ascii code 3>\n            # since ',' is used as a separator among env variables.\n            # NOTE: We take care here of also catching \"\\\\,\" which is\n            #       legal as in:  DPATH=\\\\a\\\\b\\\\,MP_MSG_API=MPI\\,LAPI\n            #       which must break down to:\n            #           v['DPATH'] = \"\\\\a\\\\b\\\\\"\n            #\t       v['MP_MSG_API'] = \"MPI\\,LAPI\"\n            if (sys.platform == \"win32\"):\n                esc_char = \"^\"\n            else:\n                esc_char = \"\\\\\"\n            double_stx = \"\\x02\\x02\"\n            double_etx = \"\\x03\\x03\"\n            value1 = value.replace(\n                esc_char + esc_char, double_stx).replace(\n                esc_char + \",\", double_etx)\n            vals = value1.split(\",\")\n            ev = {}\n            for v in vals:\n                # now restore \"<esc_char>,\"\n                v1 = v.replace(double_etx, esc_char +\n                               \",\").replace(double_stx, esc_char + esc_char)\n                e = v1.split(\"=\", 1)\n\n                if len(e) == 2:\n\n                    vue = e[1]\n                    if isinstance(e[1], str):\n                        if (_pbs_v1.get_python_daemon_name() != \"pbs_python\") \\\n                                or (sys.platform != \"win32\"):\n                            # replace \\ with \\\\ if not used to escape special\n                            # chars\n                            # note: no need to do this under a Windows mom\n                            #       since backslash is recognized as path\n                            #       character\n                            vue = replace_char_not_before(\n                                e[1], '\\\\', '\\\\\\\\', 
[',', '\\'', '\\\"', '\\\\'])\n                    ev.update({e[0]: vue})\n        else:\n            ev = value\n        super().__init__(ev)\n    #: m(__init__)\n\n    def __setitem__(self, name, value):\n        \"\"\"__setitem__\"\"\"\n        # pbs builtin variables are off limits except under a PBS hook\n        if name in pbs_env._attributes_readonly and \\\n                _pbs_v1.in_python_mode() and _pbs_v1.in_site_hook() and \\\n                not getattr(self, \"_generic\"):\n            raise BadAttributeValueError(\n                \"env variable '%s' is readonly\" % (name,))\n        v = value\n        if isinstance(value, str):\n            if (_pbs_v1.get_python_daemon_name() != \"pbs_python\") \\\n                    or (sys.platform != \"win32\"):\n                # replace \\ with \\\\ if not used to escape special chars\n                # note: no need to do this on a Windows mom\n                #       since backslash is recognized as path character\n                v = replace_char_not_before(value, '\\\\', '\\\\\\\\',\n                                            [',', '\\'', '\\\"', '\\\\'])\n        super().__setitem__(name, v)\n\n    def __str__(self):\n        \"\"\"String representation of the object\"\"\"\n        rv = \"\"\n        for k in self.keys():\n            if self[k] is not None:\n                rv += \"%s=%s,\" % (k, self[k])\n        return rv.rstrip(\",\")\n    #: m(__str__)\n\n\nclass email_list(_generic_attr):\n    \"\"\"\n    Represents the set of users to whom mail may be sent when a job makes\n    certain state changes. Ex. 
Job's Mail_Users attribute.\n    Format: pbs.email_list(<email_address1>, <email_address2>)\n    \"\"\"\n    _derived_types = (_generic_attr,)\n\n    def __init__(self, value):\n        _pbs_v1.validate_input(\"job\", \"Mail_Users\", value)\n        super().__init__(value)\n\n# pbs_list is like email_list except less strict - \"str\" is allowed as a\n# derived type.\n\n\nclass pbs_list(_generic_attr):\n    _derived_types = (_generic_attr, str)\n\n    def __init__(self, value):\n        _pbs_v1.validate_input(\"job\", \"Mail_Users\", value)\n        super().__init__(value)\n\n\nclass pbs_bool(_generic_attr):\n    _derived_types = (bool,)\n\n    def __init__(self, value):\n        if value in (\"true\", \"True\", \"TRUE\", \"t\", \"T\", \"y\", \"1\", 1):\n            v = 1\n        elif value in (\"false\", \"False\", \"FALSE\", \"f\", \"F\", \"n\", \"0\", 0):\n            v = 0\n        else:\n            # should not end up here\n            v = -1\n        # validates against the 'Rerunable' attribute entry of\n        # the server 'job' table\n        _pbs_v1.validate_input(\"job\", \"Rerunable\", str(v))\n        super().__init__(v)\n\n    def __cmp__(self, value):\n        iself = int(str(self))\n\n        if value is None:\n            return 1\n\n        ivalue = int(value)\n\n        if iself == ivalue:\n            return 0\n        elif iself > ivalue:\n            return 1\n        else:\n            return -1\n\n    def __bool__(self):\n        if int(str(self)) == 1:\n            return True\n        else:\n            return False\n\n    def __int__(self):\n        return int(str(self))\n\n\nclass pbs_int(int):\n    _derived_types = (int, float)\n\n    def __init__(self, value):\n        # empty string (\"\") also matched\n        if value != \"\":\n            _pbs_v1.validate_input(\"job\", \"ctime\", str(int(value)))\n        super().__init__()\n\n\nclass vnode_state(int):\n    _derived_types = (int, float)\n\n    def __init__(self, 
value):\n        # empty string (\"\") also matched\n        if value != \"\":\n            if _pbs_v1.vnode_state_to_str(int(value)) == \"\":\n                raise BadAttributeValueError(\n                    \"invalid vnode state value '%s'\" % (value,))\n\n        super().__init__()\n\n    def __add__(self, val):\n        if _pbs_v1.vnode_state_to_str(val) == \"\":\n            raise BadAttributeValueError(\n                \"invalid vnode state value '%d'\" % (val,))\n        return (self | val)\n\n    def __sub__(self, val):\n        if _pbs_v1.vnode_state_to_str(val) == \"\":\n            raise BadAttributeValueError(\n                \"invalid vnode state value '%d'\" % (val,))\n        return (self & ~val)\n\n\nclass pbs_str(str):\n    _derived_types = (str,)\n\n    def __init__(self, value):\n        _pbs_v1.validate_input(\"job\", \"Job_Owner\", value)\n        super().__init__()\n\n\nclass pbs_float(float):\n    _derived_types = (int, float)\n\n    def __init__(self, value):\n        _pbs_v1.validate_input(\"float\", \"\", str(value))\n        super().__init__()\n\n#: ---------------------  SPECIAL VALUE TYPES ---------------------\n\n\nclass server_state(int):\n    _derived_types = (int,)\n\n    def __new__(cls, value):\n        v = value\n        if isinstance(value, str):\n            # convert to the internal long value\n            if value == \"Hot_Start\":\n                v = _pbs_v1.SV_STATE_HOT\n            elif value == \"Active\":\n                v = _pbs_v1.SV_STATE_ACTIVE\n            elif value == \"Terminating_Delay\":\n                v = _pbs_v1.SV_STATE_SHUTDEL\n            elif value == \"Terminating\":\n                v = _pbs_v1.SV_STATE_SHUTIMM\n            else:\n                # not all server states are captured in this function,\n                # so just default to 0 (instead of -1)\n                v = 0\n        return super().__new__(cls, v)\n\n\nclass queue_type(int):\n    _derived_types = (int,)\n\n    def 
__new__(cls, value):\n        v = value\n        if isinstance(value, str):\n            # convert to the internal long value\n            if (value == \"Execution\") or (value == \"E\"):\n                v = _pbs_v1.QTYPE_EXECUTION\n            elif value == \"Route\":\n                v = _pbs_v1.QTYPE_ROUTE\n            else:\n                # should not get here\n                v = -1\n        return super().__new__(cls, v)\n\n\nclass job_state(int):\n    _derived_types = (int,)\n\n    def __new__(cls, value):\n        v = value\n        if isinstance(value, str):\n            # convert to the internal long value\n            if value == \"T\":\n                v = _pbs_v1.JOB_STATE_TRANSIT\n            elif value == \"Q\":\n                v = _pbs_v1.JOB_STATE_QUEUED\n            elif value == \"H\":\n                v = _pbs_v1.JOB_STATE_HELD\n            elif value == \"W\":\n                v = _pbs_v1.JOB_STATE_WAITING\n            elif value == \"R\":\n                v = _pbs_v1.JOB_STATE_RUNNING\n            elif value == \"E\":\n                v = _pbs_v1.JOB_STATE_EXITING\n            elif value == \"X\":\n                v = _pbs_v1.JOB_STATE_EXPIRED\n            elif value == \"B\":\n                v = _pbs_v1.JOB_STATE_BEGUN\n            elif value == \"S\":\n                v = _pbs_v1.JOB_STATE_SUSPEND\n            elif value == \"U\":\n                v = _pbs_v1.JOB_STATE_SUSPEND_USERACTIVE\n            elif value == \"M\":\n                v = _pbs_v1.JOB_STATE_MOVED\n            elif value == \"F\":\n                v = _pbs_v1.JOB_STATE_FINISHED\n            else:\n                # should not get here\n                v = -1\n        return super().__new__(cls, v)\n\n\nclass acl(_generic_attr):\n    \"\"\"\n    Represents a PBS ACL type.\n    Format: pbs.acl(\"[+|-]<entity>][,...]\")\n    \"\"\"\n    _derived_types = (_generic_attr,)\n\n    def __init__(self, value):\n        _pbs_v1.validate_input(\"resv\", \"Authorized_Users\", 
value)\n        super().__init__(value)\n\n\nclass select(_generic_attr):\n    \"\"\"\n    This represents the select resource specification when submitting a job.\n    Format: pbs.select(\"[N:]res=val[:res=val][+[N:]res=val[:res=val] ...]\")\n\n    Ex. sel = pbs.select(\"2:ncpus=1:mem=5gb+3:ncpus=2:mem=5gb\")\n        s = repr(sel)\n        print(s[2])   # prints 'n'\n        s = s + \"+5:scratch=10gb\"   # append to the string\n        sel = pbs.select(s)   # reset the value of sel\n\n    \"\"\"\n    _derived_types = (_generic_attr,)\n\n    def __init__(self, value):\n        _pbs_v1.validate_input(\"resc\", \"select\", value)\n        super().__init__(value)\n\n    def increment_chunks(self, increment_spec):\n        \"\"\"\n        Given a pbs.select value (i.e. <num>:r1=v1:r2=v2+...+<num>:rn=vN),\n        increase the number of chunks for each of the pbs.select chunk\n        specs (except for the first chunk assigned to the primary mom)\n        by the 'increment' specification.\n        The first chunk is the single chunk inside the first item\n        in the plus-separated specs that is assigned to the\n        primary mom. It is left as is.\n        For instance, given a chunk specs of \"3:ncpus=2+2:ncpus=4\",\n        this is viewed as \"(1:ncpus=2+2:ncpus=2)+(2:ncpus=4)\", and\n        the increment specs described below would apply to the\n        chunks after the initial, single chunk \"1:ncpus=2\".\n\n        If 'increment_spec' is a number (int) or a numeric\n        string, then it will be the amount to add to the number of\n        chunks specified for each chunk that is not the first chunk\n        in the pbs.select spec.\n\n        If 'increment_spec' is a numeric string that ends with a percent\n        sign (%), then this will be the percent amount of chunks to\n        increase each chunk (except the first chunk) in the pbs.select spec.\n        The resulting amount is rounded up (i.e. ceiling)\n        (e.g. 
1.23 rounds up to 2).\n\n        Finally, if 'increment_spec' is a dictionary, it must have elements of\n        the form:\n               {<chunk_index_to_select_spec> : <increment>, ...}\n        where <chunk_index_to_select_spec> starts at 0 for the first\n        chunk appearing in the plus-separated spec list,\n        and <increment> can be numeric, a numeric string, or\n        a percent increase value. This allows individually\n        specifying the number of chunks by which to increase each\n        original value.\n        Note that for the first chunk in the list (0th index), the\n        increment will apply to the chunks beyond the first chunk, which is\n        assigned to the primary mom.\n\n        Ex. Given:\n            sel=pbs.select(\"ncpus=3:mem=1gb+1:ncpus=2:mem=2gb+2:ncpus=1:mem=3gb\")\n\n            Calling sel.increment_chunks(2) would return a string:\n                \"1:ncpus=3:mem=1gb+3:ncpus=2:mem=2gb+4:ncpus=1:mem=3gb\"\n\n            Calling sel.increment_chunks(\"3\") would return a string:\n                \"1:ncpus=3:mem=1gb+4:ncpus=2:mem=2gb+5:ncpus=1:mem=3gb\"\n\n            Calling sel.increment_chunks(\"23.5%\") would return a\n            pbs.select value mapping to:\n                \"1:ncpus=3:mem=1gb+2:ncpus=2:mem=2gb+3:ncpus=1:mem=3gb\"\n\n            with the first chunk, which is a single chunk, left as is,\n            and the second and third chunks increased by 23.5%,\n            resulting in 1.24 rounded up to 2 and 2.47 rounded up to 3.\n\n            Calling sel.increment_chunks({0: 0, 1: 4, 2: \"50%\"}) would\n            return a pbs.select value mapping to:\n                \"1:ncpus=3:mem=1gb+5:ncpus=2:mem=2gb+3:ncpus=1:mem=3gb\"\n\n            where there is no increase (0) for chunk 1, 4 additional\n            chunks for chunk 2, and a 50% increase for chunk 3, resulting\n            in 3.\n\n            Given:\n                sel=pbs.select(\"5:ncpus=3:mem=1gb+1:ncpus=2:mem=2gb+2:ncpus=1:mem=3gb\")\n\n            Then calling 
sel.increment_chunks(\"50%\") or\n            sel.increment_chunks({0: \"50%\", 1: \"50%\", 2: \"50%\"}) would return a\n            pbs.select value mapping to:\n                \"7:ncpus=3:mem=1gb+2:ncpus=2:mem=2gb+3:ncpus=1:mem=3gb\"\n            as for the first chunk, the initial single chunk of\n            \"1:ncpus=3:mem=1gb\" is left as is, with the \"50%\" increase applied\n            to the remaining chunks \"4:ncpus=3:mem=1gb\", and then added back to\n            the single chunk to make 7, while chunks 2 and 3 are increased to 2\n            and 3, respectively.\n        \"\"\"\n        increment = None\n        percent_inc = None\n        increment_dict = None\n        if isinstance(increment_spec, int):\n            increment = increment_spec\n        elif isinstance(increment_spec, str):\n            if increment_spec.endswith('%'):\n                percent_inc = float(increment_spec[:-1]) / 100 + 1.0\n            else:\n                increment = int(increment_spec)\n        elif isinstance(increment_spec, dict):\n            increment_dict = increment_spec\n        else:\n            raise ValueError(\"bad increment specs\")\n\n        ret_str = \"\"\n        i = 0  # index to each chunk in the + separated spec\n        for chunk in str(self).split(\"+\"):\n            if i != 0:\n                ret_str += '+'\n            j = 0  # index to items within a chunk separated by ':'\n            for subchunk in chunk.split(\":\"):\n                c_str = subchunk\n                if j == 0:\n                    # given <chunk_ct>:<res1>=<val1>:<res2>=<val2> or\n                    # <res1>=<val1>:<res2>=<val2> (without <chunk_ct>),\n                    # here we're looking at the first field:\n                    # subchunk=<chunk_ct> or subchunk=<res1>=<val1>\n                    save_str = None\n                    if not subchunk.isdigit():\n                        # detected a first field that is not\n                        # a 
<chunk_ct>, so default to 1\n                        subchunk = \"1\"\n                        save_str = c_str\n                    chunk_ct = int(subchunk)\n\n                    if i == 0:\n                        # don't touch the first chunk which lands in MS\n                        chunk_ct -= 1\n\n                    if chunk_ct <= 0:\n                        num = 0\n                    elif increment is not None:\n                        num = chunk_ct + increment\n                    elif percent_inc:\n                        num = int(math.ceil(chunk_ct * percent_inc))\n                    elif increment_dict is not None and i in increment_dict:\n                        if isinstance(increment_dict[i], int):\n                            inc = increment_dict[i]\n                            num = chunk_ct + inc\n                        elif isinstance(increment_dict[i], str):\n                            if increment_dict[i].endswith('%'):\n                                p_inc = float(\n                                    increment_dict[i][:-1]) / 100 + 1.0\n                                num = int(math.ceil(chunk_ct * p_inc))\n                            else:\n                                inc = int(increment_dict[i])\n                                num = chunk_ct + inc\n                        else:\n                            raise ValueError(\"bad increment specs\")\n                    else:\n                        raise ValueError(\"bad increment specs\")\n\n                    if (i == 0):\n                        num += 1  # put back the decremented count\n\n                    if save_str:\n                        c_str = \"%s:%s\" % (num, save_str)\n                    else:\n                        c_str = \"%s\" % (num)\n                else:\n                    ret_str += \":\"\n                ret_str += c_str\n                j += 1\n\n            i += 1\n\n        return select(ret_str)\n\n\nclass place(_generic_attr):\n    \"\"\"\n    Represents the place specification when submitting a job.\n    Format: 
pbs.place(\"[arrangement]:[sharing]:[group]\")\n            where \t[arrangement] can be pack, scatter, free,\n                        [sharing] can be shared, excl, and\n                        [group] can be of the form group=<resource>.\n                        [arrangement], [sharing], and [group] can be given\n                        in any order or combination.\n    Ex.\tpl = pbs.place(\"pack:excl\")\n        s = repr(pl)\t or s = `pl`\n        print pl[0]  returns p\n        s = s + :group=host  append to string\n    \"\"\"\n    _derived_types = (_generic_attr,)\n\n    def __init__(self, value):\n        _pbs_v1.validate_input(\"resc\", \"place\", value)\n        super().__init__(value)\n\n\nclass vnode_sharing(int):\n    _derived_types = (int, int, float)\n\n    def __init__(self, value):\n        if _pbs_v1.vnode_sharing_to_str(int(value)) == \"\":\n            raise BadAttributeValueError(\n                \"invalid vnode sharing value '%s'\" % (value,))\n        super().__init__()\n\n\nclass vnode_ntype(int):\n    _derived_types = (int, int, float)\n\n    def __init__(self, value):\n        if _pbs_v1.vnode_ntype_to_str(int(value)) == \"\":\n            raise BadAttributeValueError(\n                \"invalid vnode ntype value '%s'\" % (value,))\n        super().__init__()\n\n\nclass exec_host(_generic_attr):\n    \"\"\"\n    Represents a PBS exec_host.\n    Format: pbs.exec_host(\"host/N[*C][+...]\")\n            where N are C are ints.\n    \"\"\"\n    _derived_types = (_generic_attr,)\n\n    def __init__(self, value):\n        _pbs_v1.validate_input(\"job\", \"exec_host\", value)\n        super().__init__(value)\n\n\nclass checkpoint(_generic_attr):\n    \"\"\"\n    Represents a job's checkpoint attribute.\n    Format: pbs.checkpoint( <chkpnt_string> )\n                where <chkpnt_string> must be one of \"n\", \"s\", \"c\", or \"c=mmm\"\n\n    \"\"\"\n    _derived_types = (_generic_attr,)\n\n    def __init__(self, value):\n        
_pbs_v1.validate_input(\"job\", \"Checkpoint\", value)\n        super().__init__(value)\n\n\nclass depend(_generic_attr):\n    \"\"\"\n    Represents a job's dependency attribute.\n    Format: pbs.depend(\"<depend_string>\")\n                Creates a PBS dependency specification object out of\n                the given <depend_string>. <depend_string> must be of\n                \"<type>:<jobid>[,<jobid>...]\", or on:<count>.\n                <type> is one of \"after\", \"afterok\",\n                afterany\", \"before\", \"beforeok\", and \"beforenotok.\n    \"\"\"\n    _derived_types = (_generic_attr,)\n\n    def __init__(self, value):\n        _pbs_v1.validate_input(\"job\", \"depend\", value)\n        super().__init__(value)\n\n\nclass group_list(_generic_attr):\n    \"\"\"\n    Represents a list of group names.\n    Format: pbs.group_list(\"<group_name>[@<host>][,<group_name>[@<host>]..]\")\n    \"\"\"\n    _derived_types = (_generic_attr,)\n\n    def __init__(self, value):\n        _pbs_v1.validate_input(\"job\", \"group_list\", value)\n        super().__init__(value)\n\n\nclass user_list(_generic_attr):\n    \"\"\"\n    Represents a list of user names.\n    Format: pbs.user_list(\"<user>[@<host>][,<user>@<host>...]\")\n    \"\"\"\n    _derived_types = (_generic_attr,)\n\n    def __init__(self, value):\n        _pbs_v1.validate_input(\"job\", \"User_List\", value)\n        super().__init__(value)\n\n\nclass path(_generic_attr):\n    _derived_types = (_generic_attr, str)\n\n    def __init__(self, value):\n        # for windows\n        val = value\n        if isinstance(value, str):\n            val = value.replace(\"\\\\\", \"/\")\n        _pbs_v1.validate_input(\"job\", \"Output_Path\", val)\n        super().__init__(val)\n\n\nclass sandbox(_generic_attr):\n    _derived_types = (_generic_attr, str)\n\n    def __init__(self, value):\n        _pbs_v1.validate_input(\"job\", \"sandbox\", value)\n        super().__init__(value)\n\n\nclass 
priority(_generic_attr):\n    _derived_types = (_generic_attr, int)\n\n    def __init__(self, value):\n        _pbs_v1.validate_input(\"job\", \"Priority\", str(value))\n        super().__init__(value)\n\n\nclass name(_generic_attr):\n    _derived_types = (_generic_attr, str)\n\n    def __init__(self, value):\n        # Validate only if set inside a hook script and not internally\n        # by PBS.\n        if _pbs_v1.in_python_mode():\n            _pbs_v1.validate_input(\"job\", \"Job_Name\", value)\n        super().__init__(value)\n\n\nclass project(_generic_attr):\n    _derived_types = (_generic_attr, str)\n\n    def __init__(self, value):\n        # Validate only if set inside a hook script and not internally\n        # by PBS.\n        if _pbs_v1.in_python_mode():\n            _pbs_v1.validate_input(\"job\", \"project\", value)\n        super().__init__(value)\n\n\nclass join_path(_generic_attr):\n    \"\"\"\n    Represents how the output and error files are merged.\n    Format: pbs.join_path({oe|eo|n})\n    \"\"\"\n    _derived_types = (_generic_attr,)\n\n    def __init__(self, value):\n        _pbs_v1.validate_input(\"job\", \"Join_Path\", value)\n        super().__init__(value)\n\n\nclass path_list(_generic_attr):\n    \"\"\"\n    Represents a list of pathnames.\n    Format: pbs.path_list(\"<path>[@<host>][,<path>@<host> ...]\")\n    \"\"\"\n    _derived_types = (_generic_attr,)\n\n    def __init__(self, value):\n        # for windows\n        val = value\n        if isinstance(value, str):\n            val = value.replace(\"\\\\\", \"/\")\n        _pbs_v1.validate_input(\"job\", \"Shell_Path_List\", val)\n        super().__init__(val)\n\n\nclass hold_types(_generic_attr):\n    \"\"\"\n    Represents the Hold_Types attribute of a job.\n    Format: pbs.hold_types(<hold_type_str>)\n            where <hold_type_str> is one of \"u\", \"o\", \"s\",  or (\"n\" or \"p\").\n\n    \"\"\"\n    _derived_types = (_generic_attr,)\n\n    def __init__(self, value):\n      
  \"\"\"\n        Instantiates an pbs.holdtypes() value.\n        \"\"\"\n        _pbs_v1.validate_input(\"job\", \"Hold_Types\", value)\n        self.opval = \"__init__\"\n        super().__init__(value)\n\n    def __add__(self, val):\n        \"\"\"\n        Returns a new value containing the Hold_Types values in\n        self._value plus the the Hold_Types values in val.\n        This also ensures that each character in val will appear\n        only once in the new value.\n        Example:\n                Given: pbs.event().job.Hold_types = \"o\"\n                        pbs.event().job.Hold_Types += pbs.hold_types(\"uos\")\n                -> pbs.event().job.Hold_Types = uos\n        \"\"\"\n        sdict = {}\n        for c in self._value:\n            sdict[c] = \"\"\n        for c in str(val):\n            sdict[c] = \"\"\n        nval = \"\".join(list(sdict.keys()))\n\n        # nval will get validated inside hold_types instantiation\n        h = hold_types(nval)\n        h.opval = \"__add__\"\n        return h\n\n    def __sub__(self, val):\n        \"\"\"\n        Returns a new value containing the Hold_Types values\n        in self._value, but with the Hold_Types values in val\n        taken out.\n        Example:\n            Given: pbs.event().job.Hold_types = os\n                   pbs.event().job.Hold_Types  -= pbs.hold_types(\"us\")\n                                         -> pbs.event().job.Hold_types = o\n        \"\"\"\n        sdict = {}\n        # string that holds deleted Hold_Types values\n        deleted_vals = \"\"\n        for c in self._value:\n            sdict[c] = \"\"\n        for c in str(val):\n            if c in list(sdict.keys()):\n                del sdict[c]\n                deleted_vals += c\n        nval = \"\".join(list(sdict.keys()))\n\n        # nval will get validated inside hold_types instantiation\n        if nval == \"\":\n            nval = \"n\"\n        h = hold_types(nval)\n        h.opval = \"__sub__\"\n        
h.delval = deleted_vals\n        return h\n\n\nclass keep_files(_generic_attr):\n    \"\"\"\n    Represents the Keep_Files job attribute.\n    Format: pbs.keep_files(<keep_files_str>)\n                where <keep_files_str> is one of \"o\", \"e\", \"oe\", \"eo\".\n\n    \"\"\"\n    _derived_types = (_generic_attr,)\n\n    def __init__(self, value):\n        _pbs_v1.validate_input(\"job\", \"Keep_Files\", value)\n        super().__init__(value)\n\n\nclass mail_points(_generic_attr):\n    \"\"\"\n    Represents the Mail_Points attribute of a job.\n    Format: pbs.mail_points(\"<mail_points_string>\")\n                Creates a PBS Mail_Points object, where\n                <mail_points_string> is \"a\", \"b\", and/or \"e\", or \"n\".\n    \"\"\"\n    _derived_types = (_generic_attr,)\n\n    def __init__(self, value):\n        _pbs_v1.validate_input(\"job\", \"Mail_Points\", value)\n        super().__init__(value)\n\n\nclass staging_list(_generic_attr):\n    \"\"\"\n    Represents a list of file stagein or stageout parameters.\n    Format: pbs.staging_list(\"<filespec>[,<filespec>,...]\")\n                Creates a file staging parameters list object,\n                where <filespec> is\n                        <local_path>@<remote_host>:<remote_path>\n    \"\"\"\n    _derived_types = (_generic_attr,)\n\n    def __init__(self, value):\n        val = value\n        if isinstance(value, str):\n            val = value.replace(\"\\\\\", \"/\")\n            val = val.replace(\"/,\", \"\\\\,\")\n        _pbs_v1.validate_input(\"job\", \"stagein\", val)\n        super().__init__(val)\n\n\nclass range(_generic_attr):\n    \"\"\"\n    Represents a range of numbers referring to a job array.\n    Format: pbs.range(\"<start>-<end>:<step>\")\n                Creates a PBS object representing a range of values.\n                Ex. 
pbs.range(\"1-30:3\")\n\n    \"\"\"\n    _derived_types = (_generic_attr,)\n\n    def __init__(self, value):\n        _pbs_v1.validate_input(\"job\", \"array_indices_submitted\", value)\n        super().__init__(value)\n\n\nclass state_count(_generic_attr):\n    \"\"\"\n    Represents a set of job-related state counters.\n    Format: pbs.state_count(\"Transit:<U> Queued:<V> Held:<W> Running:<X> \"\n                            \"Exiting:<Y> Begun:<Z>\")\n    \"\"\"\n    _derived_types = (_generic_attr,)\n\n    def __init__(self, value):\n        # validates against the 'state_count' attribute entry of\n        # the server 'server' table\n        _pbs_v1.validate_input(\"server\", \"state_count\", value)\n        super().__init__(value)\n\n\nclass license_count(_generic_attr):\n    \"\"\"\n    Represents a set of licensing-related counters.\n    Format: pbs.license_count(\"Avail_Global:<W> Avail_Local:<X> Used:<Y> \"\n                              \"High_Use:<Z>\")\n    \"\"\"\n    _derived_types = (_generic_attr,)\n\n    def __init__(self, value):\n        _pbs_v1.validate_input(\"server\", \"license_count\", value)\n        super().__init__(value)\n\n\nclass route_destinations(_generic_attr):\n    \"\"\"\n    Represents the \"route_destinations\" attribute of a queue.\n    Format: pbs.route_destinations(\"<queue_spec>[,<queue_spec>,...]\")\n                Creates an object that represents the route_destinations\n                routing queue attribute. 
<queue_spec> is\n                \"queue_name[@server_host[:port]]\"\n    \"\"\"\n    _derived_types = (_generic_attr,)\n\n    def __init__(self, value):\n        # validates against the 'route_destinations' attribute entry of\n        # the server 'queue' table\n        _pbs_v1.validate_input(\"queue\", \"route_destinations\", value)\n        super().__init__(value)\n\n\nclass args(_generic_attr):\n    \"\"\"\n    Represents a space-separated list of PBS arguments to commands like\n    qsub, qdel.\n    Format: pbs.args(<space-separated PBS args to commands like qsub, qdel>)\n                Ex. pbs.args(\"-Wsuppress_mail=N r y\")\n    \"\"\"\n    _derived_types = (_generic_attr,)\n\n    def __init__(self, value):\n        _pbs_v1.validate_input(\"server\", \"default_qsub_arguments\", value)\n        super().__init__(value)\n\n\nclass job_sort_formula(_generic_attr):\n    \"\"\"\n    Represents the job_sort_formula server attribute.\n    Format: pbs.job_sort_formula(<string containing math formula>)\n    \"\"\"\n    _derived_types = (_generic_attr,)\n\n    def __init__(self, value):\n        # treat as string for now\n        if not isinstance(value, str):\n            raise BadAttributeValueError(\n                \"job_sort_formula value '%s' not a string\" % (value,))\n        super().__init__(value)\n\n\nclass node_group_key(_generic_attr):\n    \"\"\"\n    Represents the node group key attribute.\n    Format: pbs.node_group_key(<resource>)\n    \"\"\"\n    _derived_types = (_generic_attr,)\n\n    def __init__(self, value):\n        _pbs_v1.validate_input(\"queue\", \"node_group_key\", value)\n        super().__init__(value)\n\n\nclass version(_generic_attr):\n    \"\"\"\n    Represents version information for PBS.\n    Format: pbs.version(<pbs version string>)\n    \"\"\"\n    _derived_types = (str, _generic_attr,)\n\n    def __init__(self, value):\n        _pbs_v1.validate_input(\"server\", \"pbs_version\", value)\n        super().__init__(value)\n\n\nclass 
software(_generic_attr):\n    \"\"\"\n    Represents a site-dependent software specification resource.\n    Format: pbs.software(<software info string>)\n    \"\"\"\n    _derived_types = (_generic_attr,)\n\n    def __init__(self, value):\n        _pbs_v1.validate_input(\"resc\", \"software\", value)\n        super().__init__(value)\n\n\n#:-------------------------------------------------------------------------\n#                       RESOURCE TYPE\n#:-------------------------------------------------------------------------\nclass pbs_resource():\n    \"\"\"A generic python representation of PBS resource type.\n\n    This leverages the Python descriptor mechanism to expose all the resources\n    as attributes.\n\n    \"\"\"\n\n    __resources = PbsReadOnlyDescriptor('__resources', {})\n    attributes = __resources\n\n    def __init__(self, name, is_entity=0):\n        \"\"\"__init__\"\"\"\n\n        #: name could be an instance of pbs_resource, in that case we are\n        #: actually creating a new instance of pbs_resource, this happens\n        #: when the attributes are actually setup by parent.\n        if isinstance(name, pbs_resource):\n            name = name._name\n            #: set all the attribute descriptors resc_attr to name\n            for a in pbs_resource.attributes:\n                #: get the descriptor\n                descr = getattr(pbs_resource, a)\n                if isinstance(descr, PbsAttributeDescriptor):\n                    descr._set_resc_atttr(name, is_entity)\n                #:\n            #:\n        self._attributes_hook_set = {}\n        self._attributes_unknown = {}\n        self._name = name\n        self._readonly = False\n        self._has_value = True\n        self._is_entity = is_entity\n    #: m(__init__)\n\n    def __str__(self):\n        \"\"\"\n        \"\"\"\n        if not self._has_value:\n            # return the cached value\n            return str(_pbs_v1.resource_str_value(self))\n\n        rv = []\n\n        
d = pbs_resource.attributes.copy()\n        # update pbs_resource list of attribute names to contain the \"unknown\"\n        # names as well.\n        d.update(self._attributes_unknown)\n\n        for resc in d:\n            if resc == '_name' or resc == '_has_value':\n                continue\n            v = getattr(self, resc)\n            if (v is not None) or (v == \"\"):\n                str_v = str(v)\n                if (str_v.find(\"\\\"\") == -1) and (str_v.find(\",\") != -1):\n                    rv.append(\"%s=\\\"%s\\\"\" % (resc, v))\n                else:\n                    rv.append(\"%s=%s\" % (resc, v))\n            #\n        #\n        return \",\".join(rv)\n    #: m(__str__)\n\n    def __getitem__(self, resname):\n        \"\"\"__getitem__\"\"\"\n        if not self._has_value:\n            # load the cached resource value\n            _pbs_v1.load_resource_value(self)\n\n        return getattr(self, resname)\n    #: m(__getitem__)\n\n    def __setitem__(self, resname, resval):\n        \"\"\"__setitem__\"\"\"\n\n        if not self._has_value:\n            # load the cached resource value\n            _pbs_v1.load_resource_value(self)\n        setattr(self, resname, resval)\n    #: m(__setitem__)\n\n    def __contains__(self, resname):\n        \"\"\"__contains__\"\"\"\n\n        return hasattr(self, resname)\n    #: m(__contains__)\n\n    def __setattr__(self, nameo, value):\n        \"\"\"__setattr__\"\"\"\n        if nameo == '_attributes_hook_set' or nameo == '_attributes_unknown':\n            if _pbs_v1.in_python_mode() and hasattr(self, nameo):\n                raise BadAttributeValueError(\n                    f\"the attribute '{nameo}' is readonly\")\n            super().__setattr__(nameo, value)\n            return\n\n        name = nameo\n        if (nameo == \"_readonly\"):\n            if _pbs_v1.in_python_mode() and \\\n                    hasattr(self, \"_readonly\") and not value:\n                raise 
BadResourceValueError(\n                    \"_readonly can only be set to True!\")\n        elif (nameo != \"_has_value\") and (nameo != \"_is_entity\"):\n            # _has_value is a special resource attribute that tells if a\n            # resource instance is already holding its value (i.e. value\n            # is not cached somewhere else).\n            # _is_entity is also a special attribute that tells if the\n            # resource instance is an entity resource type.\n\n            # resource names in PBS are case insensitive,\n            # so do caseless matching here.\n            found = False\n            namel = nameo.lower()\n            for resc in pbs_resource.attributes:\n                rescl = resc.lower()\n                if namel == rescl:\n                    # Need to use the matched name stored in PBS Python\n                    # resource table, to avoid resource ambiguity later on.\n                    name = resc\n                    found = True\n\n            if not found:\n                if _pbs_v1.in_python_mode():\n                    # if attribute name not found, and executing inside Python\n                    # script\n                    if _pbs_v1.get_python_daemon_name() != \"pbs_python\":\n                        # we're in a server hook\n                        raise UnsetResourceNameError(\n                            \"resource attribute '%s' not found\" % (name,))\n                    else:\n                        # we're in a mom hook, so no longer raising an\n                        # exception here since if it's an unknown resource, we\n                        # can now tell server to automatically add a custom\n                        # resource.\n                        pass\n                # add the current attribute name to the \"unknown\" list\n                self._attributes_unknown[name] = None\n\n        super().__setattr__(name, value)\n\n        # attributes that are set in python mode will be 
reflected in\n        # _attributes_hook_set dictionary.\n        if _pbs_v1.in_python_mode():\n            # using a dictionary as it is easier to search for keys\n            self._attributes_hook_set[name] = None\n    #: m(__setattr__)\n\n    def keys(self):\n        \"\"\"\n        Returns keys that have non-empty values.\n        \"\"\"\n        rv = []\n        for resc in pbs_resource.attributes:\n            if resc == '_name' or resc == '_has_value':\n                continue\n            v = getattr(self, resc)\n            if v is not None:\n                rv.append(resc)\n        #\n        return rv\n    #: m(keys)\n\n\n#: C(pbs_resource)\npbs_resource._name = PbsAttributeDescriptor(pbs_resource, '_name',\n                                            \"<generic resource>\", (str,))\n\n\nclass vchunk():\n    \"\"\"\n    This represents a resource chunk assigned to a job.\n    Format: pbs.vchunk(\"<vnodeN>:<res1>=<val1>:<res2>=<val2>:...:\"\n                       \"<resN>=<valN>\")\n         where vnodeN is the name of a vnode.\n    \"\"\"\n\n    def __init__(self, achunk):\n        \"\"\"__init__\"\"\"\n\n        ch = achunk.split(\":\")\n        self.chunk_resources = pbs_resource(\"Resource_List\")\n        for c in ch:\n            if c.find(\"=\") == -1:\n                self.vnode_name = c\n            else:\n                rs = c.split(\"=\", 1)\n                descr = getattr(pbs_resource, rs[0])\n                self.chunk_resources[rs[0]] = descr._value_type[0](rs[1])\n    #: m(__init__)\n\n\nclass exec_vnode(_generic_attr):\n    \"\"\"\n    Represents a PBS exec_vnode.\n    Format: ev = pbs.exec_vnode(\n                (vnodeA:ncpus=N:mem=X)+(vnodeB:ncpus=P:mem=Y+vnodeC:mem=Z))\n                where vnodeA, ..., vnodeC are names of vnodes.\n            ev.chunks returns a list of pbs.vchunk objects; for the example\n            above it will show:\n            ev.chunks[0].vnode_name = 'vnodeA'\n            ev.chunks[0].vnode_resources = 
{'ncpus': N, 'mem': pbs.size('X')}\n\n            ev.chunks[1].vnode_name = 'vnodeB'\n            ev.chunks[1].vnode_resources = {'ncpus': P, 'mem': pbs.size('Y')}\n            ev.chunks[2].vnode_name = 'vnodeC'\n            ev.chunks[2].vnode_resources = {'mem': pbs.size('Z')}\n\n    \"\"\"\n    _derived_types = (_generic_attr,)\n\n    def __init__(self, value):\n        _pbs_v1.validate_input(\"job\", \"exec_vnode\", value)\n        super().__init__(value)\n        self.chunks = list()\n        vals = value.split(\"+\")\n        for v in vals:\n            self.chunks.append(vchunk(v.strip(\"(\").strip(\")\")))\n#: --------         EXPORTED TYPES DICTIONARY                      ---------\n"
  },
  {
    "path": "src/modules/python/pbs/v1/_exc_types.py",
    "content": "# coding: utf-8\n\"\"\"\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n\"\"\"\n__doc__ = \"\"\"\nThis module introduces all the exceptions for the PBS/Python interaction\n\"\"\"\n\n__all__ = [\n            'EventIncompatibleError',\n            'UnsetAttributeNameError',\n            'BadAttributeValueTypeError',\n            'BadAttributeValueError',\n            'UnsetResourceNameError',\n            'BadResourceValueTypeError',\n            'BadResourceValueError'\n          ]\n\nclass EventIncompatibleError(AttributeError):\n    pass\n\nclass UnsetAttributeNameError(Exception):\n    pass\n\nclass BadAttributeValueTypeError(Exception):\n    pass\n\nclass BadAttributeValueError(Exception):\n    pass\n\nclass UnsetResourceNameError(Exception):\n    pass\n\nclass BadResourceValueTypeError(Exception):\n    pass\n\nclass BadResourceValueError(Exception):\n    pass\n"
  },
  {
    "path": "src/modules/python/pbs/v1/_export_types.py",
    "content": "# coding: utf-8\n\"\"\"\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n\"\"\"\n\n__doc__ = r\"\"\"\nThis module primarily exists to help the embedded interpreter import all the\npython types. 
All attribute names that require *special* handling are\nmaintained in a dictionary as:\n    Key <name> : Value <python_type>\n\n    where:\n        name *MUST* map to either attribute_def.at_name or resource_def.rs_name\n\nMotivation:\n    Since most of the objects will be generated by the embedded Python (i.e.,\n    the C code), this gives a simple way to leverage the Python data types to\n    pass information back to the embedded interpreter, which can then use the\n    C API to retrieve the Python type.\n\n\"\"\"\n\nfrom . import _base_types as pbs_types\nfrom ._svr_types import (_queue, _job, _server, _resv, _vnode, _event,\n                         pbs_iter, _management, _server_attribute)\nfrom ._exc_types import *\n\n\n#: IMPORTANT the keys are imported by the C code, make sure the mapping is\n#: maintained.\n#: TODO: in the future, if time permits, add constants to the internal\n#: _pbs_v1 module.\n#:\nEXPORTED_TYPES_DICT = {\n    'interactive' \t\t: pbs_types.pbs_int,\n    'block'\t\t\t: pbs_types.pbs_int,\n    'Authorized_Users' \t: pbs_types.acl,\n    'Authorized_Groups' \t: pbs_types.acl,\n    'Authorized_Hosts' \t: pbs_types.acl,\n    'array_index' \t\t: pbs_types.pbs_int,\n    'array_indices_submitted': pbs_types.range,\n    'array_state_count'\t: pbs_types.state_count,\n    'group_list' \t\t: pbs_types.group_list,\n    'managers' \t\t: pbs_types.acl,\n    'operators' \t\t: pbs_types.acl,\n    'User_List' \t\t: pbs_types.user_list,\n    'Shell_Path_List' \t: pbs_types.path_list,\n    'Output_Path' \t\t: pbs_types.path,\n    'Error_Path' \t\t: pbs_types.path,\n    'Priority' \t\t: pbs_types.priority,\n    'Job_Name' \t\t: pbs_types.name,\n    'project' \t\t: pbs_types.project,\n    'Reserve_Name' \t\t: pbs_types.name,\n    'Join_Path' \t\t: pbs_types.join_path,\n    'default_qsub_arguments' : pbs_types.args,\n    'default_qdel_arguments' : pbs_types.args,\n    'select'             : pbs_types.select,\n    'schedselect'        : pbs_types.select,\n 
   'place'              : pbs_types.place,\n    'exec_host'          : pbs_types.exec_host,\n    'exec_vnode'         : pbs_types.exec_vnode,\n    'Checkpoint'         : pbs_types.checkpoint,\n    'depend'             : pbs_types.depend,\n    'Hold_Types'         : pbs_types.hold_types,\n    'Keep_Files'         : pbs_types.keep_files,\n    'Mail_Points'        : pbs_types.mail_points,\n    'Mail_Users'         : pbs_types.email_list,\n    'stagein'       \t    : pbs_types.staging_list,\n    'stageout'           : pbs_types.staging_list,\n    'range'              : pbs_types.range,\n    'state_count'        : pbs_types.state_count,\n    'license_count'      : pbs_types.license_count,\n    'scheduler_iteration': pbs_types.duration,\n    'reserve_duration'   : pbs_types.duration,\n    'args'               : pbs_types.args,\n    'job_sort_formula'   : pbs_types.job_sort_formula,\n    'node_group_key'     : pbs_types.node_group_key,\n    'sandbox'\t    : pbs_types.sandbox,\n    'pbs_version'        : pbs_types.version,\n    'software'           : pbs_types.software,\n    'acl_roots'          : pbs_types.acl,\n    'acl_hosts'          : pbs_types.acl,\n    'acl_resv_hosts'     : pbs_types.acl,\n    'acl_resv_groups'    : pbs_types.acl,\n    'acl_resv_users'     : pbs_types.acl,\n    'acl_groups'         : pbs_types.acl,\n    'acl_users'          : pbs_types.acl,\n    'server_state'       : pbs_types.server_state,\n    'route_destinations' : pbs_types.route_destinations,\n    'Variable_List'\t    : pbs_types.pbs_env,\n    'queue_type'\t    : pbs_types.queue_type,\n    'job_state'\t    : pbs_types.job_state,\n    'license'            : pbs_types.pbs_str,\n    'license_info'       : pbs_types.pbs_int,\n    'attr_descriptor'    : pbs_types.PbsAttributeDescriptor,\n    'generic_type'       : pbs_types._generic_attr,\n    'pbs_bool'           : pbs_types.pbs_bool,\n    'pbs_int'            : pbs_types.pbs_int,\n    'pbs_env'  
          : pbs_types.pbs_env,\n    'pbs_list'           : pbs_types.pbs_str,\n    'pbs_str'            : pbs_types.pbs_str,\n    'pbs_float'          : pbs_types.pbs_float,\n    'pbs_resource'       : pbs_types.pbs_resource,\n    'size'               : pbs_types.size,\n    'generic_time'       : pbs_types.duration,\n    'generic_acl'        : pbs_types.acl,\n    'queue'              : _queue,\n    'default_queue'      : _queue,\n    'job'                : _job,\n    'management'         : _management,\n    'server_attribute'   : _server_attribute,\n    'server'             : _server,\n    'resv'               : _resv,\n    'vnode'              : _vnode,\n    'event'              : _event,\n    'pbs_iter'\t    : pbs_iter,\n    'state'   \t    : pbs_types.vnode_state,\n    'sharing'   \t    : pbs_types.vnode_sharing,\n    'ntype'   \t    : pbs_types.vnode_ntype,\n    'pbs_entity'   \t    : str,\n    'EventIncompatibleError'    : EventIncompatibleError,\n    'UnsetAttributeNameError'   : UnsetAttributeNameError,\n    'BadAttributeValueTypeError': BadAttributeValueTypeError,\n    'BadAttributeValueError'    : BadAttributeValueError,\n    'UnsetResourceNameError'    : UnsetResourceNameError,\n    'BadResourceValueTypeError' : BadResourceValueTypeError,\n    'BadResourceValueError'     : BadResourceValueError\n}\n"
  },
  {
    "path": "src/modules/python/pbs/v1/_pmi_cray.py",
    "content": "# coding: utf-8\n\"\"\"\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\"\"\"\n__doc__ = \"\"\"\nThis module is used for Cray systems.\n\"\"\"\n\nimport os\nimport stat\nimport time\nimport random\nfrom subprocess import Popen, PIPE\nfrom pbs.v1._pmi_types import BackendError\nimport pbs\nfrom pbs.v1._pmi_utils import _running_excl, _pbs_conf, _get_vnode_names, \\\n    _svr_vnode\n\npbsexec = _pbs_conf(\"PBS_EXEC\")\nif pbsexec is None:\n    raise BackendError(\"PBS_EXEC not found\")\n\n\ndef launch(jid, args):\n    \"\"\"\n    Run capmc and return the structured output.\n\n    :param jid: job id\n    :type jid: str\n    :param args: arguments for capmc command\n    :type args: str\n    :returns: capmc output in json format.\n    \"\"\"\n    import json\n\n    # full path to capmc given by Cray\n    cmd = os.path.join(os.path.sep, 'opt', 'cray',\n                       'capmc', 'default', 'bin', 'capmc')\n    if not os.path.exists(cmd):\n        cmd = \"capmc\"\t\t# should be in PATH then\n    cmd = cmd + \" \" + args\n    fail = \"\"\n\n    pbs.logjobmsg(jid, \"launch: \" + cmd)\n    cmd_run = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)\n    (cmd_out, cmd_err) = cmd_run.communicate()\n    exitval = cmd_run.returncode\n    if exitval != 0:\n        fail = \"%s: exit %d\" % (cmd, exitval)\n    else:\n        pbs.logjobmsg(jid, \"launch: finished\")\n\n    try:\n        out = json.loads(cmd_out)\n    except 
Exception:\n        out = None\n    try:\n        err = cmd_err.splitlines()[0]           # first line only\n    except Exception:\n        err = \"\"\n    if out is not None:\n        errno = out[\"e\"]\n        msg = out[\"err_msg\"]\n        if errno != 0 or (len(msg) > 0 and msg != \"Success\"):\n            fail = \"output: e=%d err_msg='%s'\" % (errno, msg)\n    if len(err) > 0:\n        pbs.logjobmsg(jid, \"stderr: %s\" % err.strip())\n\n    if len(fail) > 0:\n        pbs.logjobmsg(jid, fail)\n        raise BackendError(fail)\n    return out\n\n\ndef jobnids(job):\n    \"\"\"\n    Return the set of nids belonging to a job.\n\n    :param job: job id\n    :type job: str\n    :returns: set of nids from node's resources_available[craynid].\n    \"\"\"\n    nidset = set()\n    craynid = \"PBScraynid\"\n    for vname in _get_vnode_names(job):\n        vnode = _svr_vnode(vname)\n        try:\n            nidset.add(int(vnode.resources_available[craynid]))\n        except Exception:\n            pass\n    return nidset\n\n\ndef nodenids(hosts):\n    \"\"\"\n    Return the set of nids from the host list.\n\n    :param hosts: list of exec hosts from the job.\n    :type hosts: str\n    :returns: set of nids from node's resources_available[craynid].\n    \"\"\"\n    nidset = set()\n    craynid = \"PBScraynid\"\n    for vnames in hosts:\n        vnode = _svr_vnode(vnames)\n        try:\n            nidset.add(int(vnode.resources_available[craynid]))\n        except Exception:\n            pass\n    return nidset\n\n\ndef nidlist(job=None, nidset=None):\n    \"\"\"\n    Return a string to be used with capmc --nids option\n    and the number of nids\n\n    :param job: job id.\n    :type job: str\n    :param nidset: nid set\n    :type nidset: set\n    :returns: a list of nids that can be used with capmc,\n              in the format \"24-30, 51, 53, 60-65\"\n    \"\"\"\n    if nidset is None:\n        nidset = jobnids(job)\n    nids = []\n    first = \"\"\n    last 
= \"\"\n    prev = 0\n    for nid in sorted(nidset):\n        val = nid\n        if len(first) == 0:\t\t# start point\n            first = str(nid)\n        elif prev + 1 != val:\n            if prev == int(first):\n                nids.append(first)\n            else:\n                nids.append(first + \"-\" + last)\n            first = str(nid)\n        prev = val\n        last = str(nid)\n    if len(first) > 0:\n        if prev == int(first):\n            nids.append(first)\n        else:\n            nids.append(first + \"-\" + last)\n    return \",\".join(nids), len(nidset)\n\n\ndef spool_file(name):\n    \"\"\"\n    Form a path to the PBS spool directory with name as the last element.\n\n    :param name: energy file name in format <jobid>.energy.\n    :type name: str\n    :returns: path to RUR/energy file in format PBS_HOME/spool/<jobid>.energy.\n    \"\"\"\n    home = _pbs_conf(\"PBS_HOME\")\n    if home is None:\n        raise BackendError(\"PBS_HOME not found\")\n    return os.path.join(home, \"spool\", name)\n\n\ndef energy_file(job):\n    \"\"\"\n    Form a path to the PBS spool directory with name as the last element.\n\n    :param job: job id.\n    :type job: str\n    :returns: path to energy file in format PBS_HOME/spool/<jobid>.energy.\n    \"\"\"\n    return spool_file(\"%s.energy\" % job.id)\n\n\ndef rur_file(job):\n    \"\"\"\n    Form a path to the PBS spool directory with name as the last element.\n\n    :param job: job id.\n    :type job: str\n    :returns: path to RUR file in format PBS_HOME/spool/<jobid>.rur.\n    \"\"\"\n    return spool_file(\"%s.rur\" % job.id)\n\n\ndef node_energy(jid, nids, cnt):\n    \"\"\"\n    Return the result of running capmc get_node_energy_counter.\n    The magic number of 15 seconds in the past is used because that\n    is the most current value that can be expected from capmc.\n\n    :param jid: job id.\n    :type jid: str\n    :param nids: nid list\n    :type nids: str\n    :param cnt: node count\n    :type 
cnt: int\n    :returns: ret on successful energy usage capmc query.\n              None on failure.\n    \"\"\"\n    if cnt == 0:\n        return None\n    cmd = \"get_node_energy_counter --nids %s\" % nids\n    ret = launch(jid, cmd)\n    cntkey = \"nid_count\"\n    gotcnt = \"<notset>\"\n    if (ret is not None) and (cntkey in ret):\n        gotcnt = ret[cntkey]\n        if gotcnt == cnt:\n            return ret\n\n    pbs.logjobmsg(jid, \"node count %s, should be %d\" % (str(gotcnt), cnt))\n    ret = launch(jid, cmd)\n    gotcnt = \"<notset>\"\n    if (ret is not None) and (cntkey in ret):\n        gotcnt = ret[cntkey]\n        if gotcnt == cnt:\n            return ret\n\n    pbs.logjobmsg(jid, \"second query failed, node count %s, should be %d\" %\n                  (str(gotcnt), cnt))\n    return None\n\n\ndef job_energy(job, nids, cnt):\n    \"\"\"\n    Return energy counter from capmc.  Return None if no energy\n    value is available.\n\n    :param job: pbs job.\n    :type job: str\n    :param nids: nid list\n    :type nids: str\n    :param cnt: node count\n    :type cnt: int\n    :returns: ret on successful energy usage capmc query.\n              None on failure.\n    \"\"\"\n    energy = None\n    ret = node_energy(job.id, nids, cnt)\n    if ret is not None and \"nodes\" in ret:\n        energy = 0\n        for node in ret[\"nodes\"]:\n            energy += node[\"energy_ctr\"]\n        pbs.logjobmsg(job.id, \"energy usage %dJ\" % energy)\n    return energy\n\n\nclass Pmi:\n\n    ninfo = None\n    nidarray = dict()\n\n    def __init__(self, pyhome=None):\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"Cray: init\")\n\n    def _connect(self, endpoint=None, port=None, job=None):\n        if job is None:\n            pbs.logmsg(pbs.EVENT_DEBUG3, \"Cray: connect\")\n        else:\n            pbs.logmsg(pbs.EVENT_DEBUG3, \"Cray: %s connect\" % (job.id))\n        return\n\n    def _disconnect(self, job=None):\n        if job is None:\n            
pbs.logmsg(pbs.EVENT_DEBUG3, \"Cray: disconnect\")\n        else:\n            pbs.logmsg(pbs.EVENT_DEBUG3, \"Cray: %s disconnect\" % (job.id))\n        return\n\n    def _get_usage(self, job):\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"Cray: %s get_usage\" % (job.id))\n        try:\n            f = open(energy_file(job), \"r\")\n            start = int(f.read())\n            f.close()\n        except Exception:\n            return None\n\n        e = pbs.event()\n        if e.type == pbs.EXECHOST_PERIODIC:\n            # This function will be called for each job in turn when\n            # running from a periodic hook.  Here we fill in some\n            # global variables just once and use the information\n            # for each job in turn.  Save the result of calling capmc\n            # for all running jobs in the variable ninfo.  Keep a\n            # dictionary with the job id's as keys holding a set\n            # of nid numbers.\n            if Pmi.ninfo is None:\n                allnids = set()\n                for jobid in list(e.job_list.keys()):\n                    j = e.job_list[jobid]\n                    nidset = jobnids(j)\n                    allnids.update(nidset)\n                    Pmi.nidarray[jobid] = nidset\n                nids, cnt = nidlist(None, allnids)\n                Pmi.ninfo = node_energy(\"all\", nids, cnt)\n            nidset = Pmi.nidarray[job.id]\n            energy = None\n            if Pmi.ninfo is not None and \"nodes\" in Pmi.ninfo:\n                energy = 0\n                for node in Pmi.ninfo[\"nodes\"]:\n                    if node[\"nid\"] in nidset:\t\t# owned by job of interest\n                        energy += node[\"energy_ctr\"]\n                pbs.logjobmsg(job.id, \"Cray: get_usage: energy %dJ\" %\n                              energy)\n        else:\n            nids, cnt = nidlist(job)\n            energy = job_energy(job, nids, cnt)\n        if energy is not None:\n            return float(energy - start) 
/ 3600000.0\n        else:\n            return None\n\n    def _query(self, query_type):\n        pbs.logmsg(pbs.LOG_DEBUG, \"Cray: query\")\n        return None\n\n    def _activate_profile(self, profile_name, job):\n        pbs.logmsg(pbs.LOG_DEBUG, \"Cray: %s activate '%s'\" %\n                   (job.id, str(profile_name)))\n\n        nids, cnt = nidlist(job)\n        if cnt == 0:\n            pbs.logjobmsg(job.id, \"Cray: no compute nodes for power setting\")\n            return False\n\n        energy = job_energy(job, nids, cnt)\n        if energy is not None:\n            f = open(energy_file(job), \"w\")\n            f.write(str(energy))\n            f.close()\n\n        # If this is the only job, set nodes to capped power.\n        if _running_excl(job):\n            cmd = \"set_power_cap --nids \" + nids\n            doit = False\n\n            pcap = job.Resource_List['pcap_node']\n            if pcap is not None:\n                pbs.logjobmsg(job.id, \"Cray: pcap node %d\" % pcap)\n                cmd += \" --node \" + str(pcap)\n                doit = True\n            pcap = job.Resource_List['pcap_accelerator']\n            if pcap is not None:\n                pbs.logjobmsg(job.id, \"Cray: pcap accel %d\" % pcap)\n                cmd += \" --accel \" + str(pcap)\n                doit = True\n\n            if doit:\n                launch(job.id, cmd)\n            else:\n                pbs.logjobmsg(job.id, \"Cray: no power cap to set\")\n\n        return True\n\n    def _deactivate_profile(self, job):\n        pbs.logmsg(pbs.LOG_DEBUG, \"Cray: deactivate %s\" % job.id)\n        nids, cnt = nidlist(job)\n        if cnt == 0:\n            pbs.logjobmsg(job.id, \"Cray: no compute nodes for power setting\")\n            return False\n\n        # remove initial energy file\n        try:\n            os.unlink(energy_file(job))\n        except Exception:\n            pass\n\n        # If this is the only job, undo any power cap we set.\n        if 
_running_excl(job):\n            cmd = \"set_power_cap --nids \" + nids\n            doit = False\n\n            pcap = job.Resource_List['pcap_node']\n            if pcap is not None:\n                pbs.logjobmsg(job.id, \"Cray: remove pcap node %d\" % pcap)\n                cmd += \" --node 0\"\n                doit = True\n            pcap = job.Resource_List['pcap_accelerator']\n            if pcap is not None:\n                pbs.logjobmsg(job.id, \"Cray: remove pcap accel %d\" % pcap)\n                cmd += \" --accel 0\"\n                doit = True\n\n            if doit:\n                try:\n                    launch(job.id, cmd)\n                except Exception:\n                    pass\n            else:\n                pbs.logjobmsg(job.id, \"Cray: no power cap to remove\")\n\n        # Get final energy value from RUR data\n        name = rur_file(job)\n        try:\n            rurfp = open(name, \"r\")\n        except Exception:\n            pbs.logjobmsg(job.id, \"Cray: no RUR data\")\n            return False\n\n        sbuf = os.fstat(rurfp.fileno())\n        if (sbuf.st_uid != 0) or (sbuf.st_mode & stat.S_IWOTH):\n            pbs.logjobmsg(job.id, \"Cray: RUR file permission: %s\" % name)\n            rurfp.close()\n            os.unlink(name)\n            return False\n\n        pbs.logjobmsg(job.id, \"Cray: reading RUR file: %s\" % name)\n        energy = 0\n        seen = False        # track if energy plugin is seen\n        for line in rurfp:\n            plugin, _, rest = line.partition(\" : \")\n            if plugin != \"energy\":\t\t# check that the plugin is energy\n                continue\n\n            apid, _, metstr = rest.partition(\" : \")\n            seen = True\n            try:\t\t\t\t\t\t# parse the metric list\n                metlist = eval(metstr, {})\n                metrics = dict(\n                    metlist[i:i + 2] for i in range(0, len(metlist), 2))\n                joules = metrics[\"energy_used\"]\n      
          energy += joules\n                pbs.logjobmsg(\n                    job.id,\n                    'Cray:RUR: {\"apid\":%s,\"apid_energy\":%dJ,\"job_energy\":%dJ}'\n                    % (apid, joules, energy))\n            except Exception as e:\n                pbs.logjobmsg(job.id,\n                              \"Cray:RUR: energy_used not found: %s\" % str(e))\n\n        rurfp.close()\n        os.unlink(name)\n\n        if not seen:\n            pbs.logjobmsg(job.id, \"Cray:RUR: no energy plugin\")\n            return False\n\n        old_energy = job.resources_used[\"energy\"]\n        new_energy = float(energy) / 3600000.0\n        if old_energy is None:\n            pbs.logjobmsg(job.id, \"Cray:RUR: energy %fkWh\" % new_energy)\n            job.resources_used[\"energy\"] = new_energy\n        elif new_energy > old_energy:\n            pbs.logjobmsg(\n                job.id,\n                \"Cray:RUR: energy %fkWh replaces periodic energy %fkWh\" %\n                (new_energy, old_energy))\n            job.resources_used[\"energy\"] = new_energy\n        else:\n            pbs.logjobmsg(\n                job.id,\n                \"Cray:RUR: energy %fkWh last periodic usage %fkWh\" %\n                (new_energy, old_energy))\n        return True\n\n    def _pmi_power_off(self, hosts):\n        pbs.logmsg(pbs.LOG_DEBUG, \"Cray: powering-off the node\")\n        nidset = nodenids(hosts)\n        nids, _ = nidlist(None, nidset)\n        cmd = \"node_off --nids \" + nids\n        func = \"pmi_power_off\"\n        launch(func, cmd)\n        return True\n\n    def _pmi_power_on(self, hosts):\n        pbs.logmsg(pbs.LOG_DEBUG, \"Cray: powering-on the node\")\n        nidset = nodenids(hosts)\n        nids, _ = nidlist(None, nidset)\n        cmd = \"node_on --nids \" + nids\n        func = \"pmi_power_on\"\n        launch(func, cmd)\n        return True\n\n    def _pmi_ramp_down(self, hosts):\n        pbs.logmsg(pbs.LOG_DEBUG, \"Cray: ramping down the 
node\")\n        nidset = nodenids(hosts)\n        nids, _ = nidlist(None, nidset)\n        cmd = \"get_sleep_state_limit_capabilities --nids \" + nids\n        func = \"pmi_ramp_down\"\n        out = launch(func, cmd)\n        for n in out[\"nids\"]:\n            if \"data\" in n:\n                nid = n[\"nid\"]\n                states = n[\"data\"][\"PWR_Attrs\"][0][\"PWR_AttrValueCapabilities\"]\n                for s in states:\n                    if int(s) != 0:\n                        cmd = (\"set_sleep_state_limit --nids \" + str(nid) +\n                               \" --limit \" + str(s))\n                        launch(func, cmd)\n                        sleep_time = random.randint(1, 10)\n                        time.sleep(sleep_time)\n        return True\n\n    def _pmi_ramp_up(self, hosts):\n        pbs.logmsg(pbs.LOG_DEBUG, \"Cray: ramping up the node\")\n        nidset = nodenids(hosts)\n        nids, _ = nidlist(None, nidset)\n        cmd = \"get_sleep_state_limit_capabilities --nids \" + nids\n        func = \"pmi_ramp_up\"\n        out = launch(func, cmd)\n        for n in out[\"nids\"]:\n            if \"data\" in n:\n                nid = n[\"nid\"]\n                states = n[\"data\"][\"PWR_Attrs\"][0][\"PWR_AttrValueCapabilities\"]\n                for s in reversed(states):\n                    if int(s) != 0:\n                        cmd = (\"set_sleep_state_limit --nids \" + str(nid) +\n                               \" --limit \" + str(s))\n                        launch(func, cmd)\n                        sleep_time = random.randint(1, 10)\n                        time.sleep(sleep_time)\n        return True\n\n    def _pmi_power_status(self, hosts):\n        # Do a capmc node_status and return a list of ready nodes.\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"Cray: status of the nodes\")\n        nidset = nodenids(hosts)\n        nids, _ = nidlist(nidset=nidset)\n        cmd = \"node_status --nids \" + nids\n        func = 
\"pmi_power_status\"\n        out = launch(func, cmd)\n        ready = []\n        nodeset = set()\n        if 'ready' in out:\n            ready = out['ready']\n        else:\n            return nodeset\n        craynid = \"PBScraynid\"\n        for vnames in hosts:\n            vnode = _svr_vnode(vnames)\n            if craynid in vnode.resources_available:\n                nid = int(vnode.resources_available[craynid])\n                if nid in ready:\n                    nodeset.add(vnames)\n        return nodeset\n"
  },
  {
    "path": "src/modules/python/pbs/v1/_pmi_none.py",
    "content": "\n\"\"\"\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n\"\"\"\n__doc__ = \"\"\"\nThis module is used when no PMI is present.\n\"\"\"\n\nimport pbs\n\n\nclass Pmi:\n    def __init__(self, pyhome=None):\n        pbs.logmsg(pbs.LOG_WARNING, \"Stubbed PMI calls are being used\")\n\n    def _connect(self, endpoint, port, job):\n        return\n\n    def _disconnect(self, job):\n        return\n\n    def _get_usage(self, job):\n        return None\n\n    def _query(self, query_type):\n        return None\n\n    def _activate_profile(self, profile_name, job):\n        return False\n\n    def _deactivate_profile(self, job):\n        return False\n\n    def _pmi_power_off(self, hosts):\n        return False\n\n    def _pmi_power_on(self, hosts):\n        return False\n\n    def _pmi_ramp_down(self, hosts):\n        return False\n\n    def _pmi_ramp_up(self, hosts):\n        return False\n\n    def _pmi_power_status(self, hosts):\n        return False\n"
  },
  {
    "path": "src/modules/python/pbs/v1/_pmi_sgi.py",
    "content": "\n\"\"\"\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n\"\"\"\n\n__doc__ = \"\"\"\nThis module is used for SGI systems.\n\"\"\"\n\nimport pbs\nimport os\nimport sys\nfrom pbs.v1._pmi_types import BackendError\nfrom pbs.v1._pmi_utils import _pbs_conf, _get_hosts\n\npbsexec = _pbs_conf(\"PBS_EXEC\")\nif pbsexec is None:\n    raise BackendError(\"PBS_EXEC not found\")\n\npy_version = str(sys.version_info.major) + \".\" + str(sys.version_info.minor)\n_path = os.path.join(pbsexec, \"python\", \"lib\", py_version)\nif _path not in sys.path:\n    sys.path.append(_path)\n_path = os.path.join(pbsexec, \"python\", \"lib\", py_version, \"lib-dynload\")\nif _path not in sys.path:\n    sys.path.append(_path)\nimport encodings\n\n\n# Plug in the path for the HPE/SGI power API.\n_path = \"/opt/clmgr/power-service\"\nif os.path.exists(_path):\n    # Look for HPCM support.\n    if _path not in sys.path:\n        sys.path.append(_path)\n    import hpe_clmgr_power_api as api\nelse:\n    # Look for SGIMC support.\n    _path = \"/opt/sgi/ta\"\n    if _path not in sys.path:\n        sys.path.append(_path)\n    import sgi_power_api as api\n\n\nclass Pmi:\n    def __init__(self, pyhome=None):\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"SGI: init\")\n        api.SERVER = \"\"\"lead-eth:8888\"\"\"\n\n    def _connect(self, endpoint, port, job):\n        if job is None:\n            pbs.logmsg(pbs.EVENT_DEBUG3, \"SGI: connect\")\n        
else:\n            pbs.logmsg(pbs.EVENT_DEBUG3, \"SGI: %s connect\" % (job.id))\n        api.VerifyConnection()\n        return\n\n    def _disconnect(self, job):\n        if job is None:\n            pbs.logmsg(pbs.EVENT_DEBUG3, \"SGI: disconnect\")\n        else:\n            pbs.logmsg(pbs.EVENT_DEBUG3, \"SGI: %s disconnect\" % (job.id))\n        return\n\n    def _get_usage(self, job):\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"SGI: %s get_usage\" % (job.id))\n        report = api.MonitorReport(job.id)\n        if report is not None and report[0] == 'total_energy':\n            pbs.logjobmsg(job.id, \"SGI: energy %fkWh\" % report[1])\n            return report[1]\n        return None\n\n    def _query(self, query_type):\n        pbs.logmsg(pbs.LOG_DEBUG, \"SGI: query\")\n        if query_type == pbs.Power.QUERY_PROFILE:\n            return api.ListAvailableProfiles()\n        return None\n\n    def _activate_profile(self, profile_name, job):\n        pbs.logmsg(pbs.LOG_DEBUG, \"SGI: %s activate '%s'\" %\n                   (job.id, str(profile_name)))\n        api.NodesetCreate(job.id, _get_hosts(job))\n        api.MonitorStart(job.id, profile_name)\n        return False\n\n    def _deactivate_profile(self, job):\n        pbs.logmsg(pbs.LOG_DEBUG, \"SGI: %s deactivate\" % (job.id))\n        try:\n            api.MonitorStop(job.id)\n        # be sure to remove the nodeset\n        finally:\n            api.NodesetDelete(job.id)\n        return False\n\n    def _pmi_power_off(self, hosts):\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"SGI: powering-off the node\")\n        return False\n\n    def _pmi_power_on(self, hosts):\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"SGI: powering-on the node\")\n        return False\n\n    def _pmi_ramp_down(self, hosts):\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"SGI: ramp-down the node\")\n        return False\n\n    def _pmi_ramp_up(self, hosts):\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"SGI: ramp up the node\")\n        return False\n\n    
def _pmi_power_status(self, hosts):\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"SGI: status of the nodes\")\n        return False\n"
  },
  {
    "path": "src/modules/python/pbs/v1/_pmi_types.py",
    "content": "\n\"\"\"\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n\"\"\"\n\n__doc__ = \"\"\"\nThis file contains the Power Management Infrastructure (PMI) base types.\nIt contains mostly vendor agnostic code with the exception of the\ninitialization method that determines which vendor specific PMI calls\nto import.\n\"\"\"\n\nimport sys\nimport os\nimport pbs\nfrom pbs.v1._pmi_utils import _get_hosts, _get_vnode_names, _running_excl\nfrom pbs.v1._exc_types import *\n\n\nclass InternalError(Exception):\n    def __init__(self, msg=\"Internal error encountered.\"):\n        self.msg = msg\n\n    def __str__(self):\n        return repr(self.msg)\n\n\nclass BackendError(Exception):\n    def __init__(self, msg=\"Backend error encountered.\"):\n        self.msg = msg\n\n    def __str__(self):\n        return repr(self.msg)\n\n\nclass Power:\n\n    QUERY_PROFILE = 0\n\n    def __init__(self, requested_pmi=None):\n        self.__pmi = None\n        self.__sitepk = None\n\n        if requested_pmi is None:\n            self.pmi_type = self.__get_pmi_type()\n        else:\n            self.pmi_type = requested_pmi\n\n        try:\n            _temp = __import__(\"pbs.v1._pmi_\" + self.pmi_type,\n                               globals(), locals(), ['Pmi'], 0)\n        except Exception as e:\n            raise InternalError(\n                \"could not import: \" + self.pmi_type + \": \" + str(e))\n\n        try:\n            
self.__pmi = _temp.Pmi(self.__sitepk)\n        except Exception as e:\n            raise InternalError(\n                \"No such PMI: \" + self.pmi_type + \": \" + str(e))\n\n    def __get_pmi_type(self):\n        \"\"\"\n        Determine what system is being used.\n        \"\"\"\n        if os.path.exists('/proc/cray_xt/cname'):\n            return \"cray\"\n        if (os.path.exists('/opt/clmgr/power-service') or\n            os.path.exists('/opt/sgi')):\n            return \"sgi\"\n        return \"none\"\n\n    def _map_profile_names(self, pnames):\n        \"\"\"\n        Take a python list of profile names and create a string suitable for\n        setting the eoe value for a node.\n        \"\"\"\n        if pnames is None:\n            return \"\"\n        return \",\".join(pnames)\n\n    def _check_pmi(self):\n        if self.__pmi is None:\n            raise InternalError(\"No Power Management Interface instance.\")\n\n    def connect(self, endpoint=None, port=None, job=None):\n        self._check_pmi()\n        if job is None:\n            try:\n                job = pbs.event().job\n            except EventIncompatibleError:\n                pass\n        return self.__pmi._connect(endpoint, port, job)\n\n    def disconnect(self, job=None):\n        self._check_pmi()\n        if job is None:\n            try:\n                job = pbs.event().job\n            except EventIncompatibleError:\n                pass\n        return self.__pmi._disconnect(job)\n\n    def get_usage(self, job=None):\n        self._check_pmi()\n        if job is None:\n            job = pbs.event().job\n        return self.__pmi._get_usage(job)\n\n    def query(self, query_type=None):\n        self._check_pmi()\n        return self.__pmi._query(query_type)\n\n    def activate_profile(self, profile_name=None, job=None):\n        self._check_pmi()\n        if job is None:\n            job = pbs.event().job\n\n        try:\n            ret = 
self.__pmi._activate_profile(profile_name, job)\n            if profile_name is not None:\n                hosts = _get_vnode_names(job)\n                for h in hosts:\n                    try:\n                        pbs.event().vnode_list[h].current_eoe = profile_name\n                    except Exception:\n                        pass\n            return ret\n        except BackendError as e:\n            # get fresh set of profile names, ignore errors\n            mynode = pbs.event().vnode_list[pbs.get_local_nodename()]\n            if mynode.power_provisioning:\n                try:\n                    profiles = self.__pmi._query(\n                        pbs.Power.QUERY_PROFILE)\n                    names = self._map_profile_names(profiles)\n                    mynode.resources_available[\"eoe\"] = names\n                    pbs.logmsg(pbs.LOG_WARNING,\n                               \"PMI:activate: set eoe: %s\" % names)\n                except Exception:\n                    pass\n            raise BackendError(e)\n        except InternalError as e:\n            # couldn't do activation so set vnode offline\n            me = pbs.get_local_nodename()\n            pbs.event().vnode_list[me].state += pbs.ND_OFFLINE\n            pbs.logmsg(pbs.LOG_WARNING, \"PMI:activate: set vnode offline\")\n            raise InternalError(e)\n\n    def deactivate_profile(self, job=None):\n        self._check_pmi()\n\n        if job is None:\n            job = pbs.event().job\n        if _running_excl(job):\n            pbs.logjobmsg(job.id, \"PMI: reset current_eoe\")\n            for h in _get_vnode_names(job):\n                try:\n                    pbs.event().vnode_list[h].current_eoe = None\n                except Exception:\n                    pass\n        return self.__pmi._deactivate_profile(job)\n\n    def power_off(self, hosts=None):\n        self._check_pmi()\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"PMI:poweroff: powering off nodes\")\n        return 
self.__pmi._pmi_power_off(hosts)\n\n    def power_on(self, hosts=None):\n        self._check_pmi()\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"PMI:poweron: powering on nodes\")\n        return self.__pmi._pmi_power_on(hosts)\n\n    def ramp_down(self, hosts=None):\n        self._check_pmi()\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"PMI:rampdown: ramping down nodes\")\n        return self.__pmi._pmi_ramp_down(hosts)\n\n    def ramp_up(self, hosts=None):\n        self._check_pmi()\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"PMI:rampup: ramping up nodes\")\n        return self.__pmi._pmi_ramp_up(hosts)\n\n    def power_status(self, hosts=None):\n        self._check_pmi()\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"PMI:powerstatus: status of nodes\")\n        return self.__pmi._pmi_power_status(hosts)\n"
  },
  {
    "path": "src/modules/python/pbs/v1/_pmi_utils.py",
    "content": "\"\"\"\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n\"\"\"\n__doc__ = \"\"\"\n$Id$\nUtility power functions.\n\"\"\"\n\nimport pbs\nimport os\nimport sys\n\n\ndef _pbs_conf(confvar):\n    # Return the value of a setting in the pbs.conf file if it exists.\n    # Save the values in a global dictionary for future use.\n\n    if confvar in os.environ:\n        return os.environ[confvar]\n\n    global pmi_pbsconf\n    if \"pmi_pbsconf\" not in globals():\n        pmi_pbsconf = dict()\n        cfile = \"PBS_CONF_FILE\"\n        if cfile in os.environ:\n            pbsconf = os.environ[cfile]\n        else:\n            pbsconf = \"/etc/pbs.conf\"\n\n        try:\n            fp = open(pbsconf)\n        except OSError:\n            pbs.logmsg(pbs.DEBUG, \"%s: Unable to open conf file.\" % pbsconf)\n            return None\n        else:\n            for line in fp:\n                line = line.strip()\n                # ignore empty lines or those beginning with '#'\n                if line == \"\" or line[0] == \"#\":\n                    continue\n                var, eq, val = line.partition('=')\n                if val == \"\":\n                    continue\n                pmi_pbsconf[var] = val\n            fp.close()\n    if confvar in pmi_pbsconf:\n        return pmi_pbsconf[confvar]\n    else:\n        return None\n\n\ndef _is_node_provisionable():\n    \"\"\"\n    Check if the local machine is running 
pbs_server or\n    pbs_sched or pbs_comm. If any of these are running, provisioning\n    must not be automatically enabled.\n    \"\"\"\n    serv = _pbs_conf(\"PBS_START_SERVER\")\n    if serv is not None and serv == \"1\":\n        return False\n\n    sched = _pbs_conf(\"PBS_START_SCHED\")\n    if sched is not None and sched == \"1\":\n        return False\n\n    comm = _pbs_conf(\"PBS_START_COMM\")\n    if comm is not None and comm == \"1\":\n        return False\n\n    return True\n\n\ndef _get_hosts(job):\n    \"\"\"\n    Form a list of unique short hostnames from a pbs job.\n    The short names are used even if the hostnames from exec_host2\n    are FQDNs because SGIMC seems to have a bug in that it will\n    not accept FQDNs.\n    \"\"\"\n    hosts = str(job.exec_host2)\n    pbs_nodes = sorted({x.partition(':')[0].partition('.')[0]\n                            for x in hosts.split('+')})\n    return pbs_nodes\n\n\ndef _jobreq(job, name):\n    \"\"\"\n    Get a requested resource from a job.\n    \"\"\"\n    val = str(job.schedselect).partition(name + '=')[2]\n    if len(val) == 0:\n        return None\n    else:\n        return val.partition('+')[0].partition(':')[0]\n\n\ndef _get_vnode_names(job):\n    \"\"\"\n    Return a list of vnodes being used for a job.\n    \"\"\"\n    exec_vnode = str(job.exec_vnode).replace(\"(\", \"\").replace(\")\", \"\")\n    vnodes = sorted({x.partition(':')[0]\n                        for x in exec_vnode.split('+')})\n    return vnodes\n\n\ndef _svr_vnode(name):\n    # Return a vnode object obtained from the server by name.\n    # Save the values in a global dictionary for future use.\n    global pmi_pbsvnodes\n    if \"pmi_pbsvnodes\" not in globals():\n        pmi_pbsvnodes = dict()\n        for vn in pbs.server().vnodes():\n            pmi_pbsvnodes[vn.name] = vn\n    return pmi_pbsvnodes[name]\n\n\ndef _running_excl(job):\n    # Look for any other job that is running on a job's vnodes\n    for 
vname in _get_vnode_names(job):\n        vnode = _svr_vnode(vname)\n        for j in str(vnode.jobs).split(', '):\n            id = j.partition('/')[0]\n            if job.id != id:\n                return False\n    return True\n"
  },
  {
    "path": "src/modules/python/pbs/v1/_svr_types.py",
    "content": "# coding: utf-8\n\"\"\"\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n\"\"\"\n\n__doc__ = \"\"\"\n\nThis module captures all the python types representing the PBS Server objects\n(server,queue,job,resv, etc.)\n\"\"\"\nfrom ._base_types import (PbsAttributeDescriptor, PbsReadOnlyDescriptor,\n                          pbs_resource, pbs_bool, _LOG,\n                          )\nimport _pbs_v1\nfrom _pbs_v1 import (_event_accept, _event_reject,\n                     _event_param_mod_allow, _event_param_mod_disallow,\n                     iter_nextfunc)\n\nfrom ._exc_types import *\n\nNAS_mod = 0\n\ntry:\n    if _pbs_v1.get_python_daemon_name() == \"pbs_python\":\n        from _pbs_ifl import *\n        from pbs_ifl import *\nexcept ImportError:\n    pass\n\n# Set global hook_config_filename parameter.\nhook_config_filename = None\ntry:\n    import os\n\n    if \"PBS_HOOK_CONFIG_FILE\" in os.environ:\n        hook_config_filename = os.environ[\"PBS_HOOK_CONFIG_FILE\"]\nexcept Exception:\n    pass\n\n# Set global pbs_conf parameter.\npbs_conf = _pbs_v1.get_pbs_conf()\n\n\n#\n# get_server_data_fp: returns the file object representing the\n#                     hook debug data file.\ndef get_server_data_fp():\n    data_file = _pbs_v1.get_server_data_file()\n    if data_file is None:\n        return None\n    try:\n        return open(data_file, \"a+\")\n    except OSError:\n        _pbs_v1.logmsg(_pbs_v1.LOG_WARNING,\n              
         \"warning: error opening debug data file %s\" % data_file)\n        return None\n\n#\n# get_local_nodename: returns the name of the current host as it would appear\n#                      as a vnode name. This is usually the short form of the\n#                      hostname.\n\n\ndef get_local_nodename():\n    return(_pbs_v1.get_local_host_name())\n\n\n#\n# pbs_statobj: general-purpose function that connects to server named\n#           'connect_server' or if None, use \"localhost\", and depending\n#            on 'objtype', then performs pbs_statjob(), ps_statque(),\n#            pbs_statresv(), pbs_statvnode(), or pbs_statserver(), and\n#            returning results in a new object of type _job, _queue,\n#            _resv, _vnode, or _server.\n#            NOTE: 'filter_queue' is used for a \"job\" type, which means\n#                  the job must be in the queue 'filter_queue' for the\n#                  job object to be instantiated.\ndef pbs_statobj(objtype, name=None, connect_server=None, filter_queue=None):\n    \"\"\"\n    Returns a PBS (e.g. 
_job, _queue, _resv, _vnode, _server) object\n    that is populated with data obtained by calling PBS APIs:\n    pbs_statjob(), pbs_statque(), pbs_statresv(), pbs_statvnode(),\n    pbs_statserver(), using a connection handle to 'connect_server'.\n\n    If 'objtype'  is \"job\", then return the _job object.\n    If 'objtype'  is \"queue\", then return the _queue object.\n    If 'objtype'  is \"resv\", then return the _resv object.\n    If 'objtype'  is \"vnode\", then return the _vnode object.\n    If 'objtype'  is \"server\", then return the _server object.\n\n    'filter_queue' is used for a \"job\" type, which means\n    the job must be in the queue 'filter_queue' for the\n    job object to be instantiated.\n    \"\"\"\n\n    _pbs_v1.set_c_mode()\n\n    if(connect_server is None):\n        con = pbs_connect(\"localhost\")\n    else:\n        con = pbs_connect(connect_server)\n\n    if con < 0:\n        _pbs_v1.logmsg(_pbs_v1.LOG_DEBUG,\n                       \"pbs_statobj: Unable to connect to server %s\"\n                       % (connect_server))\n        _pbs_v1.set_python_mode()\n        return None\n\n    if(objtype == \"job\"):\n        bs = pbs_statjob(con, name, None, None)\n        header_str = \"pbs.server().job(%s)\" % (name,)\n    elif(objtype == \"queue\"):\n        bs = pbs_statque(con, name, None, None)\n        header_str = \"pbs.server().queue(%s)\" % (name,)\n    elif(objtype == \"vnode\"):\n        bs = pbs_statvnode(con, name, None, None)\n        header_str = \"pbs.server().vnode(%s)\" % (name,)\n    elif(objtype == \"resv\"):\n        bs = pbs_statresv(con, name, None, None)\n        header_str = \"pbs.server().resv(%s)\" % (name,)\n    elif(objtype == \"server\"):\n        bs = pbs_statserver(con, None, None)\n        header_str = \"pbs.server()\"\n    else:\n        _pbs_v1.logmsg(_pbs_v1.LOG_DEBUG,\n                       \"pbs_statobj: Bad object type %s\" % (objtype))\n        pbs_disconnect(con)\n        _pbs_v1.set_python_mode()\n    
    return None\n\n    server_data_fp = get_server_data_fp()\n\n    b = bs\n    obj = None\n    while(b):\n        if(objtype == \"job\"):\n            obj = _job(b.name, connect_server)\n        elif(objtype == \"queue\"):\n            obj = _queue(b.name, connect_server)\n        elif(objtype == \"vnode\"):\n            obj = _vnode(b.name, connect_server)\n        elif(objtype == \"resv\"):\n            obj = _resv(b.name, connect_server)\n        elif(objtype == \"server\"):\n            obj = _server(b.name, connect_server)\n        else:\n            _pbs_v1.logmsg(_pbs_v1.LOG_DEBUG,\n                           \"pbs_statobj: Bad object type %s\" % (objtype))\n            pbs_disconnect(con)\n            if server_data_fp:\n                server_data_fp.close()\n            _pbs_v1.set_python_mode()\n            return None\n\n        a = b.attribs\n\n        while(a):\n            n = a.name\n            r = a.resource\n            v = a.value\n\n            if(objtype == \"vnode\"):\n                if(n == ATTR_NODE_state):\n                    v = _pbs_v1.str_to_vnode_state(v)\n                elif(n == ATTR_NODE_ntype):\n                    v = _pbs_v1.str_to_vnode_ntype(v)\n                elif(n == ATTR_NODE_Sharing):\n                    v = _pbs_v1.str_to_vnode_sharing(v)\n\n            elif(objtype == \"job\"):\n                if((filter_queue is not None) and (n == ATTR_queue) and\n                        (filter_queue != v)):\n                    pbs_disconnect(con)\n                    if server_data_fp:\n                        server_data_fp.close()\n                    _pbs_v1.set_python_mode()\n                    return None\n                if n == ATTR_inter or n == ATTR_block or n == ATTR_X11_port:\n                    v = int(pbs_bool(v))\n\n            if(r):\n                pr = getattr(obj, n)\n\n                # instantiate Resource_List object if not set\n                if(pr is None):\n                    setattr(obj, n, pbs_resource(n))\n\n   
             pr = getattr(obj, n)\n                if (pr is None):\n                    _pbs_v1.logmsg(_pbs_v1.LOG_DEBUG,\n                                   \"pbs_statobj: missing %s\" % (n))\n                    a = a.next\n                    continue\n\n                vo = getattr(pr, r)\n                if(vo is None):\n                    setattr(pr, r, v)\n                    if server_data_fp:\n                        server_data_fp.write(\n                            \"%s.%s[%s]=%s\\n\" % (header_str, n, r, v))\n                else:\n                    # append value...\n                    # example: \"select=1:ncpus=1,ncpus=1,nodect=1,place=pack\"\n                    vl = [vo, v]\n                    setattr(pr, r, \",\".join(vl))\n                    if server_data_fp:\n                        server_data_fp.write(\"%s.%s[%s]=%s\\n\" % (\n                            header_str, n, r, \",\".join(vl)))\n\n            else:\n                vo = getattr(obj, n)\n\n                if(vo is None):\n                    setattr(obj, n, v)\n                    if server_data_fp:\n                        server_data_fp.write(\"%s.%s=%s\\n\" % (header_str, n, v))\n                else:\n                    # append value\n                    vl = [vo, v]\n                    setattr(obj, n, \",\".join(vl))\n                    if server_data_fp:\n                        server_data_fp.write(\"%s.%s=%s\\n\" %\n                                             (header_str, n, \",\".join(vl)))\n\n            a = a.next\n\n        b = b.next\n\n    pbs_disconnect(con)\n    if server_data_fp:\n        server_data_fp.close()\n    _pbs_v1.set_python_mode()\n    return obj\n\n\n# Allow the C implementation of hooks to call pbs_statobj function.\n_pbs_v1.set_pbs_statobj(pbs_statobj)\n\n#:------------------------------------------------------------------------\n#                       JOB 
TYPE\n#:-------------------------------------------------------------------------\n\n\nclass _job():\n    \"\"\"\n    This represents a PBS job.\n    \"\"\"\n\n    attributes = PbsReadOnlyDescriptor('attributes', {})\n\n    def __new__(cls, value, connect_server=None):\n        return object.__new__(cls)\n\n    def __init__(self, jid, connect_server=None,\n                 failed_node_list=None, node_list=None):\n        \"\"\"__init__\"\"\"\n\n        self._attributes_hook_set = {}\n        self.id = jid\n        self._connect_server = connect_server\n        self._readonly = False\n        self._rerun = False\n        self._delete = False\n        self._checkpointed = False\n        self._msmom = False\n        self._stdout_file = None\n        self._stderr_file = None\n        self.failed_mom_list = failed_node_list\n        self.succeeded_mom_list = node_list\n    #: m(__init__)\n\n    def __str__(self):\n        \"\"\"String representation of the object\"\"\"\n\n        return str(self.id)\n    #: m(__str__)\n\n    def __setattr__(self, name, value):\n        if name == '_attributes_hook_set':\n            if _pbs_v1.in_python_mode():\n                raise BadAttributeValueError(\n                    f\"the attribute '{name}' is readonly\")\n            super().__setattr__(name, value)\n            return\n\n        if name == \"_readonly\":\n            if _pbs_v1.in_python_mode() and \\\n                    hasattr(self, \"_readonly\") and not value:\n                raise BadAttributeValueError(\n                    \"_readonly can only be set to True!\")\n        elif ((name != \"_rerun\") and (name != \"_delete\") and\n              (name != \"_checkpointed\") and (name != \"_msmom\") and\n              (name != \"_stdout_file\") and (name != \"_stderr_file\") and\n              name not in _job.attributes):\n            raise UnsetAttributeNameError(\n                \"job attribute '%s' not found\" % (name,))\n\n        super().__setattr__(name, 
value)\n\n        # attributes that are set in python mode will be reflected in\n        # _attributes_hook_set dictionary.\n        if _pbs_v1.in_python_mode():\n            # using a dictionary value as easier to search for keys\n            self._attributes_hook_set[name] = None\n    #: m(__setattr__)\n\n    def rerun(self):\n        \"\"\"rerun\"\"\"\n        ev_type = _pbs_v1.event().type\n        if ((ev_type & _pbs_v1.MOM_EVENTS) == 0):\n            raise NotImplementedError(\"rerun(): only for mom hooks\")\n        self._rerun = True\n    #: m(rerun)\n\n    def delete(self):\n        \"\"\"delete\"\"\"\n        ev_type = _pbs_v1.event().type\n        if ((ev_type & _pbs_v1.MOM_EVENTS) == 0):\n            raise NotImplementedError(\"delete(): only for mom hooks\")\n        self._delete = True\n    #: m(delete)\n\n    def is_checkpointed(self):\n        \"\"\"is_checkpointed\"\"\"\n        return self._checkpointed\n    #: m(is_checkpointed)\n\n    def in_ms_mom(self):\n        \"\"\"in_ms_mom\"\"\"\n        return self._msmom\n    #: m(in_ms_mom)\n\n    def stdout_file(self):\n        \"\"\"stdout_file\"\"\"\n        return self._stdout_file\n    #: m(stdout_file)\n\n    def stderr_file(self):\n        \"\"\"stderr_file\"\"\"\n        return self._stderr_file\n    #: m(stderr_file)\n\n    def release_nodes(self, node_list=None, keep_select=None):\n        \"\"\"release_nodes\"\"\"\n        if ((_pbs_v1.event().type & _pbs_v1.EXECJOB_PROLOGUE) == 0 and\n                (_pbs_v1.event().type & _pbs_v1.EXECJOB_LAUNCH) == 0):\n            return None\n        tolerate_node_failures = None\n        ajob = _pbs_v1.event().job\n        if hasattr(ajob, \"tolerate_node_failures\"):\n            tolerate_node_failures = getattr(ajob, \"tolerate_node_failures\")\n            if tolerate_node_failures not in [\"job_start\", \"all\"]:\n                msg = \"no nodes released as job does not \" \\\n                      \"tolerate node failures\"\n                
_pbs_v1.logmsg(_pbs_v1.LOG_DEBUG, \"%s: %s\" % (ajob.id, msg))\n                return ajob\n        return _pbs_v1.release_nodes(self, node_list, keep_select)\n    #: m(release_nodes)\n\n\n_job.id = PbsAttributeDescriptor(_job, 'id', \"\", (str,))\n_job.failed_mom_list = PbsAttributeDescriptor(\n    _job, 'failed_mom_list', [], (list,))\n_job.succeeded_mom_list = PbsAttributeDescriptor(\n    _job, 'succeeded_mom_list', [], (list,))\n_job._connect_server = PbsAttributeDescriptor(\n    _job, '_connect_server', \"\", (str,))\n#: C(job)\n\n#:------------------------------------------------------------------------\n#                       VNODE TYPE\n#:-------------------------------------------------------------------------\n\n\nclass _vnode():\n    \"\"\"\n    This represents a PBS vnode.\n    \"\"\"\n\n    attributes = PbsReadOnlyDescriptor('attributes', {})\n\n    def __new__(cls, value, connect_server=None):\n        return object.__new__(cls)\n\n    def __init__(self, name, connect_server=None):\n        \"\"\"__init__\"\"\"\n\n        self._attributes_hook_set = {}\n        self.name = name\n        self._readonly = False\n        self._connect_server = connect_server\n    #: m(__init__)\n\n    def __str__(self):\n        \"\"\"String representation of the object\"\"\"\n\n        return str(self.name)\n    #: m(__str__)\n\n    def __setattr__(self, name, value):\n        if name == '_attributes_hook_set':\n            super().__setattr__(name, value)\n            return\n\n        if name == \"_readonly\":\n            if _pbs_v1.in_python_mode() and \\\n                    hasattr(self, \"_readonly\") and not value:\n                raise BadAttributeValueError(\n                    \"_readonly can only be set to True!\")\n        elif name not in _vnode.attributes:\n            raise UnsetAttributeNameError(\n                \"vnode attribute '%s' not found\" % (name,))\n\n        super().__setattr__(name, value)\n\n        # attributes that are set in python 
mode will be reflected in\n        # _attributes_hook_set dictionary.\n        if _pbs_v1.in_python_mode() and (name != \"_connect_server\"):\n            # using a dictionary value as easier to search for keys\n            self._attributes_hook_set[name] = None\n            _pbs_v1.mark_vnode_set(self.name, name, str(value))\n\n    #: m(__setattr__)\n\n    def extract_state_strs(self):\n        \"\"\"returns the string values from the state bits.\"\"\"\n        lst = []\n        if self.state == _pbs_v1.ND_STATE_FREE:\n            lst.append('ND_STATE_FREE')\n        else:\n            lst = [\n                val for (mask, val) in\n                sorted(_pbs_v1.REVERSE_NODE_STATE.items())\n                if self.state & mask\n            ]\n        return lst\n\n    def extract_state_ints(self):\n        \"\"\"returns the integer values from the state bits.\"\"\"\n        lst = []\n        if self.state == _pbs_v1.ND_STATE_FREE:\n            lst.append(_pbs_v1.ND_STATE_FREE)\n        else:\n            lst = [mask for (mask, val)\n                   in sorted(_pbs_v1.REVERSE_NODE_STATE.items())\n                   if self.state & mask]\n        return lst\n\n_vnode.name = PbsAttributeDescriptor(_vnode, 'name', \"\", (str,))\n_vnode._connect_server = PbsAttributeDescriptor(\n    _vnode, '_connect_server', \"\", (str,))\n#: C(vnode)\n\n# This exposes pbs.vnode() to be callable in a hook script\nvnode = _vnode\n\n#:-------------------------------------------------------------------------\n#                       RESERVATION TYPE\n#:-------------------------------------------------------------------------\n\n\nclass _resv():\n    \"\"\"\n    This represents a PBS reservation entity.\n    \"\"\"\n\n    attributes = PbsReadOnlyDescriptor('attributes', {})\n    attributes_readonly = PbsReadOnlyDescriptor('attributes_readonly',\n                                                [])\n\n    def __new__(cls, value, connect_server=None):\n        return 
object.__new__(cls)\n\n    def __init__(self, resvid, connect_server=None):\n        \"\"\"__init__\"\"\"\n\n        self._attributes_hook_set = {}\n        self.resvid = resvid\n        self._readonly = False\n        self._connect_server = connect_server\n    #: m(__init__)\n\n    def __str__(self):\n        \"\"\"String representation of the object\"\"\"\n\n        return str(self.resvid)\n    #: m(__str__)\n\n    def __setattr__(self, name, value):\n        if name == '_attributes_hook_set':\n            if _pbs_v1.in_python_mode():\n                raise BadAttributeValueError(\n                    f\"the attribute '{name}' is readonly\")\n            super().__setattr__(name, value)\n            return\n\n        if (name == \"_readonly\"):\n            if _pbs_v1.in_python_mode() and \\\n                    hasattr(self, \"_readonly\") and not value:\n                raise BadAttributeValueError(\n                    \"_readonly can only be set to True!\")\n        elif name not in _resv.attributes:\n            raise UnsetAttributeNameError(\n                \"resv attribute '%s' not found\" % (name,))\n        elif name in _resv.attributes_readonly and \\\n                _pbs_v1.in_python_mode() and \\\n                _pbs_v1.in_site_hook():\n            # readonly under a SITE hook\n            raise BadAttributeValueError(\n                \"resv attribute '%s' is readonly\" % (name,))\n\n        super().__setattr__(name, value)\n\n        # attributes that are set in python mode will be reflected in\n        # _attributes_hook_set dictionary.\n        if _pbs_v1.in_python_mode():\n            # using a dictionary value as easier to search for keys\n            self._attributes_hook_set[name] = None\n    #: m(__setattr__)\n\n\n#: C(resv)\n_resv.resvid = PbsAttributeDescriptor(_resv, 'resvid', \"\", (str,))\n_resv._connect_server = PbsAttributeDescriptor(\n    _resv, '_connect_server', \"\", (str,))\n#: End (resv) setting class 
attributes\n\n#:-------------------------------------------------------------------------\n#                       QUEUE TYPE\n#:-------------------------------------------------------------------------\n\n\nclass _queue():\n    \"\"\"\n    This represents a PBS queue.\n    \"\"\"\n\n    attributes = PbsReadOnlyDescriptor('attributes', {})\n    #name = PbsAttributeDescriptor(queue, 'name', \"\", (str,))\n\n    def __init__(self, name, connect_server=None):\n        \"\"\"__init__\"\"\"\n        #: ok, descriptor is set.\n        self.name = name\n        self._readonly = False\n        self._connect_server = connect_server\n    #: m(__init__)\n\n    def __str__(self):\n        \"\"\"String representation of the object\"\"\"\n\n        return str(self.name)\n    #: m(__str__)\n\n    def __setattr__(self, name, value):\n        if (name == \"_readonly\"):\n            if _pbs_v1.in_python_mode() and \\\n                    hasattr(self, \"_readonly\") and not value:\n                raise BadAttributeValueError(\n                    \"_readonly can only be set to True!\")\n        elif name not in _queue.attributes:\n            raise UnsetAttributeNameError(\n                \"queue attribute '%s' not found\" % (name,))\n        super(_queue, self).__setattr__(name, value)\n    #: m(__setattr__)\n\n    def job(self, jobid):\n        \"\"\"Return a job object representing jobid that belongs to queue\"\"\"\n\n        if jobid.find(\".\") == -1:\n            jobid = jobid + \".\" + _pbs_v1.get_pbs_server_name()\n\n        if _pbs_v1.get_python_daemon_name() == \"pbs_python\":\n\n            if _pbs_v1.use_static_data():\n                if self._connect_server is None:\n                    sn = \"\"\n                else:\n                    sn = self._connect_server\n\n                if self.name is None:\n                    qn = \"\"\n                else:\n                    qn = self.name\n                return _pbs_v1.get_job_static(jobid, sn, qn)\n\n         
   return pbs_statobj(\"job\", jobid, self._connect_server,\n                               self.name)\n        else:\n            return _pbs_v1.get_job(jobid, self.name)\n    #: m(job)\n\n    def jobs(self):\n        \"\"\"\n            Returns an iterator that loops over the list of jobs on this queue.\n        \"\"\"\n        return pbs_iter(\"jobs\", \"\",  self.name, self._connect_server)\n    #: m(jobs)\n\n#: C(_queue)\n\n\n_queue.name = PbsAttributeDescriptor(_queue, 'name', \"\", (str,))\n_queue._connect_server = PbsAttributeDescriptor(\n    _queue, '_connect_server', \"\", (str,))\n\n#: End (queue) setting class attributes\n\n#:-------------------------------------------------------------------------\n#                       Server TYPE\n#:-------------------------------------------------------------------------\n\n\nclass _server():\n    \"\"\"\n    This represents the PBS server entity.\n    \"\"\"\n\n    attributes = PbsReadOnlyDescriptor('attributes', {})\n\n    def __init__(self, name, connect_server=None):\n        \"\"\"__init__\"\"\"\n\n        self.name = name\n        self._readonly = False\n        self._connect_server = connect_server\n    #: m(__init__)\n\n    def __str__(self):\n        \"\"\"String representation of the object\"\"\"\n\n        return str(self.name)\n    #: m(__str__)\n\n    def queue(self, qname):\n        \"\"\"\n        queue(strQname)\n            strQname -  name of a PBS queue (without the @host part) to query.\n\n          Returns a queue object representing the queue <queue name> that is\n          managed by server s.\n        \"\"\"\n        if qname.find(\"@\") != -1:\n            raise AssertionError(\n                \"Got '%s', please specify a queue name only (no @)\" % (qname,))\n\n        if _pbs_v1.get_python_daemon_name() == \"pbs_python\":\n            if _pbs_v1.use_static_data():\n                if self._connect_server is None:\n                    sn = \"\"\n                else:\n                    
sn = self._connect_server\n                return _pbs_v1.get_queue_static(qname, sn)\n\n            return pbs_statobj(\"queue\", qname, self._connect_server)\n        else:\n            return _pbs_v1.get_queue(qname)\n    #: m(queue)\n\n    def job(self, jobid):\n        \"\"\"\n        job(strJobid)\n            strJobid - PBS jobid to query.\n          Returns a job object representing jobid\n        \"\"\"\n        if jobid.find(\".\") == -1:\n            jobid = jobid + \".\" + _pbs_v1.get_pbs_server_name()\n\n        if _pbs_v1.get_python_daemon_name() == \"pbs_python\":\n            if _pbs_v1.use_static_data():\n                if self._connect_server is None:\n                    sn = \"\"\n                else:\n                    sn = self._connect_server\n                return _pbs_v1.get_job_static(jobid, sn, \"\")\n\n            return pbs_statobj(\"job\", jobid, self._connect_server)\n        else:\n            return _pbs_v1.get_job(jobid)\n    #: m(job)\n\n    def vnode(self, vname):\n        \"\"\"\n        vnode(strVname)\n            strVname - PBS vnode name to query.\n          Returns a vnode object representing vname\n        \"\"\"\n        if _pbs_v1.get_python_daemon_name() == \"pbs_python\":\n            if _pbs_v1.use_static_data():\n                if self._connect_server is None:\n                    sn = \"\"\n                else:\n                    sn = self._connect_server\n                return _pbs_v1.get_vnode_static(vname, sn)\n\n            return pbs_statobj(\"vnode\", vname, self._connect_server)\n        else:\n            return _pbs_v1.get_vnode(vname)\n    #: m(vnode)\n\n    def resv(self, resvid):\n        \"\"\"Return a resv object representing resvid\"\"\"\n\n        if _pbs_v1.get_python_daemon_name() == \"pbs_python\":\n            if _pbs_v1.use_static_data():\n                if self._connect_server is None:\n                    sn = \"\"\n                else:\n                    sn = 
self._connect_server\n                return _pbs_v1.get_resv_static(resvid, sn)\n\n            return pbs_statobj(\"resv\", resvid, self._connect_server)\n        else:\n            return _pbs_v1.get_resv(resvid)\n    #: m(resv)\n\n    # NAS localmod 014\n    if NAS_mod is not None and NAS_mod != 0:\n        def jobs(self, ignore_fin=None, qname=None, username=None):\n            \"\"\"\n            Returns an iterator that loops over the list of jobs\n            on this server.\n            Jobs can be filtered in 3 ways:\n            - if ignore_fin is an integer != 0, finished jobs are ignored\n            - qname returns jobs from that queue\n            - username returns jobs with that euser\n            \"\"\"\n\n            return pbs_iter(\"jobs\", \"\",  qname, self._connect_server,\n                            ignore_fin, username)\n        #: m(jobs_nas)\n    else:\n        def jobs(self):\n            \"\"\"\n            Returns an iterator that loops over the list of jobs\n            on this server.\n            \"\"\"\n\n            return pbs_iter(\"jobs\", \"\",  \"\", self._connect_server)\n        #: m(jobs)\n\n    def vnodes(self):\n        \"\"\"\n        Returns an iterator that loops over the list of vnodes\n        on this server.\n        \"\"\"\n\n        return pbs_iter(\"vnodes\", \"\",  \"\", self._connect_server)\n    #: m(vnodes)\n\n    def queues(self):\n        \"\"\"\n        Returns an iterator that loops over the list of queues on this server.\n        \"\"\"\n        return pbs_iter(\"queues\", \"\",  \"\", self._connect_server)\n    #: m(queues)\n\n    def resvs(self):\n        \"\"\"\n        Returns an iterator that loops over the list of reservations on this\n        server.\n        \"\"\"\n        return pbs_iter(\"resvs\", \"\", \"\", self._connect_server)\n    #: m(resvs)\n\n    def scheduler_restart_cycle(self):\n        \"\"\"\n        Flags the server to tell the scheduler to restart scheduling cycle\n        
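The `queue()`, `job()`, `vnode()`, and `resv()` accessors above all follow the same three-way dispatch: static test data, a live `pbs_statobj()` query in `pbs_python` mode, or the built-in lookup inside a server hook. A minimal standalone sketch of that dispatch, with stand-in backends (the function names here are hypothetical, not the real `_pbs_v1` API):

```python
# Hypothetical stand-ins for the three lookup backends used by _server above.
def _get_static(kind, name):
    # mimics _pbs_v1.get_<kind>_static()
    return ("static", kind, name)

def _stat_live(kind, name):
    # mimics a pbs_statobj() query against a live server
    return ("live", kind, name)

def _builtin(kind, name):
    # mimics the in-daemon _pbs_v1.get_<kind>() builtin
    return ("builtin", kind, name)

def lookup(kind, name, daemon="pbs_python", use_static=False):
    """Dispatch the way _server.job()/queue()/vnode()/resv() do."""
    if daemon == "pbs_python":
        if use_static:
            return _get_static(kind, name)
        return _stat_live(kind, name)
    return _builtin(kind, name)
```

The real methods additionally normalize arguments first (appending the server name to a bare jobid, mapping a `None` connect server to `""`); the sketch keeps only the branch structure.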
\"\"\"\n        if self._connect_server is None:\n            _pbs_v1.scheduler_restart_cycle(_pbs_v1.get_pbs_server_name())\n        else:\n            _pbs_v1.scheduler_restart_cycle(self._connect_server)\n\n    def __setattr__(self, name, value):\n        if (name == \"_readonly\"):\n            if _pbs_v1.in_python_mode() and \\\n                    hasattr(self, \"_readonly\") and not value:\n                raise BadAttributeValueError(\n                    \"_readonly can only be set to True!\")\n        elif name not in _server.attributes:\n            raise UnsetAttributeNameError(\n                \"server attribute '%s' not found\" % (name,))\n        super(_server, self).__setattr__(name, value)\n    #: m(__setattr__)\n\n\n#: C(server)\n_server.name = PbsAttributeDescriptor(_server, 'name', \"\", (str,))\n_server._connect_server = PbsAttributeDescriptor(\n    _server, '_connect_server', \"\", (str,))\n#: End (server) setting class attributes\n\n\n#\n# server: this now gets invoked when pbs.server() is called.\n#        if in \"pbs_python\" mode, would use _pbs_ifl/pbs_ifl wrapped calls for\n#        querying the server for data; otherwise, use the builtin server()\n#        function in a server hook.\n#\ndef server():\n\n    if _pbs_v1.get_python_daemon_name() == \"pbs_python\":\n\n        if _pbs_v1.use_static_data():\n            return _pbs_v1.get_server_static()\n        connect_server = _pbs_v1.get_pbs_server_name()\n        return pbs_statobj(\"server\", None, connect_server)\n    else:\n        return _pbs_v1.server()\n#\n# reboot: this flags PBS to reboot the local host, and if reboot_cmd is\n#        given, to have PBS  use 'reboot_cmd' as the reboot command to\n#        execute.\n#        This immediately terminates the hook script.\n#\n\n\ndef reboot(reboot_cmd=\"\"):\n\n    ev_type = _pbs_v1.event().type\n    if ((ev_type & _pbs_v1.MOM_EVENTS) == 0):\n        raise NotImplementedError(\"reboot(): only for mom hooks\")\n    
_pbs_v1.reboot(reboot_cmd)\n    raise SystemExit\n\n#:-------------------------------------------------------------------------\n#                       Event TYPE\n#:-------------------------------------------------------------------------\n\n\nclass _event():\n    \"\"\"\n    This represents the event that the current hook is responding to.\n    \"\"\"\n    #: the below is used for attribute type access\n    attributes = PbsReadOnlyDescriptor('attributes', {})\n\n    def __init__(self, type, rq_user, rq_host):\n        \"\"\"__init__\"\"\"\n        self.type = type\n        self.requestor = rq_user\n        self.requestor_host = rq_host\n        self._readonly = False\n    #: m(__init__)\n\n    def accept(self, ecode=0):\n        \"\"\"\n        accept([ecode])\n           Terminates hook execution and causes PBS to perform the\n           associated event request action. If [ecode] argument is given,\n           it will be used as the value for the SystemExit exception, else\n           a value of 0 is used.\n\n           This terminates hook execution by throwing a SystemExit exception.\n        \"\"\"\n        _event_accept()\n        _event_param_mod_disallow()\n        raise SystemExit(str(ecode))\n    #: m(__accept__)\n\n    def reject(self, emsg=\"\", ecode=255):\n        \"\"\"\n        reject([msg])\n           Terminates hook execution and instructs PBS to not perform the\n           associated event request action. 
If [msg] argument is given, it\n           will be shown in the appropriate PBS daemon log, and the STDERR\n           of the PBS command that caused this event to take place.\n           If [ecode] argument is given, it will be used as the value for\n           the SystemExit exception, else a value of 255 is used.\n\n           This terminates hook execution by throwing a SystemExit exception.\n        \"\"\"\n        _event_reject(emsg)\n        _event_param_mod_disallow()\n        raise SystemExit(str(ecode))\n    #: m(__reject__)\n\n    def __getattr__(self, key):\n        try:\n            return self._param[key]\n        except KeyError:\n            raise EventIncompatibleError(f'\"{key}\" not found in self._param')\n    #: m(__getattr__)\n\n    def __setattr__(self, name, value):\n        if (name == \"_readonly\"):\n            if _pbs_v1.in_python_mode() and \\\n                    hasattr(self, \"_readonly\") and not value:\n                raise BadAttributeValueError(\n                    \"_readonly can only be set to True!\")\n        elif _pbs_v1.in_python_mode() and name in self._param:\n            if name == \"progname\" or name == \"argv\" or name == \"env\":\n                self._param[name] = value\n                return\n            else:\n                raise BadAttributeValueError(\n                    \"event attribute '%s' is readonly\" % (name,))\n        elif name not in _event.attributes:\n            raise UnsetAttributeNameError(\n                \"event attribute '%s' not found\" % (name,))\n        super().__setattr__(name, value)\n    #: m(__setattr__)\n\n\n#: C(event)\n_event.type = PbsAttributeDescriptor(_event, 'type', None, (int,))\n_event.hook_name = PbsAttributeDescriptor(_event, 'hook_name', \"\", (str,))\n_event.hook_type = PbsAttributeDescriptor(_event, 'hook_type', \"\", (str,))\n_event.requestor = PbsAttributeDescriptor(_event, 'requestor', \"\", (str,))\n_event.requestor_host = PbsAttributeDescriptor(\n    
_event, 'requestor_host', \"\", (str,))\n_event._param = PbsAttributeDescriptor(_event, '_param', {}, (dict,))\n_event.freq = PbsAttributeDescriptor(_event, 'freq', None, (int,))\n#: End (event) setting class attributes\n\n#:-------------------------------------------------------------------------\n#                       PBS Iterator Type\n#:-------------------------------------------------------------------------\n\n\nclass pbs_iter():\n    \"\"\"\n    This represents an iterator for looping over a list of PBS objects.\n    Pbs_obj_name can be: queues, jobs, resvs, vnodes.\n    Pbs_filter1 is usually the <server_name> where the queues,\n                 jobs, resvs, vnodes reside. A <server_name> of \"\"\n                means the local server host.\n    Pbs_filter2 can be any string that can further restrict the list\n                being referenced. For example, this can be set to\n                some <queue_name>, to have the iterator represent\n                a list of jobs on <queue_name>@<server_name>\n\n    connect_server Name of the pbs server to get various stats.\n    \"\"\"\n    # NAS localmod 014\n    if NAS_mod is not None and NAS_mod != 0:\n        \"\"\"\n        We add the following args:\n\n        Pbs_ignore_fin can be used to ignore finished jobs.\n        Pbs_username   tells the iterator to return only jobs with this euser.\n        \"\"\"\n\n        def __init__(self, pbs_obj_name, pbs_filter1, pbs_filter2,\n                     connect_server=None, pbs_ignore_fin=None,\n                     pbs_username=None):\n\n            self._caller = _pbs_v1.get_python_daemon_name()\n            if self._caller == \"pbs_python\":\n\n                if(connect_server is None):\n                    self._connect_server = \"localhost\"\n                    sn = \"\"\n                else:\n                    self._connect_server = connect_server\n                    sn = connect_server\n\n                self.type = pbs_obj_name\n                if 
_pbs_v1.use_static_data():\n                    if(self.type == \"jobs\"):\n                        self.bs = iter(_pbs_v1.get_job_static(\"\", sn, \"\"))\n                    elif(self.type == \"queues\"):\n                        self.bs = iter(_pbs_v1.get_queue_static(\"\", sn))\n                    elif(self.type == \"vnodes\"):\n                        self.bs = iter(_pbs_v1.get_vnode_static(\"\", sn))\n                    elif(self.type == \"resvs\"):\n                        self.bs = iter(_pbs_v1.get_resv_static(\"\", sn))\n                    else:\n                        _pbs_v1.logmsg(_pbs_v1.LOG_DEBUG,\n                                       \"pbs_iter/init: Bad object iterator\"\n                                       \" type %s\"\n                                       % (self.type))\n                        return None\n                    return\n\n                self.con = pbs_connect(self._connect_server)\n                if self.con < 0:\n                    _pbs_v1.logmsg(_pbs_v1.LOG_DEBUG,\n                                   \"pbs_iter: Unable to connect to server %s\"\n                                   % (connect_server))\n                    return None\n\n                if(self.type == \"jobs\"):\n                    _pbs_v1.logmsg(_pbs_v1.LOG_DEBUG,\n                                   \"pbs_iter: pbs_python mode not\"\n                                   \" supported by NAS local mod\")\n                    pbs_disconnect(self.con)\n                    self.con = -1\n                    return None\n                elif(self.type == \"queues\"):\n                    self.bs = pbs_statque(self.con, None, None, None)\n                elif(self.type == \"vnodes\"):\n                    self.bs = pbs_statvnode(self.con, None, None, None)\n                elif(self.type == \"resvs\"):\n                    self.bs = pbs_statresv(self.con, None, None, None)\n                else:\n                    _pbs_v1.logmsg(_pbs_v1.LOG_DEBUG,\n          
                         \"pbs_iter/init: Bad object iterator type %s\"\n                                   % (self.type))\n                    pbs_disconnect(self.con)\n                    self.con = -1\n                    return None\n\n            else:\n\n                self.obj_name = pbs_obj_name\n                self.filter1 = pbs_filter1\n                self.filter2 = \"\"\n                self.ignore_fin = 0\n                self.filter_user = \"\"\n\n                if pbs_filter2 is not None:\n                    self.filter2 = pbs_filter2\n\n                if pbs_ignore_fin is not None:\n                    self.ignore_fin = pbs_ignore_fin\n\n                if pbs_username is not None:\n                    self.filter_user = pbs_username\n\n                # argument 1 below tells C function we're inside __init__\n                _pbs_v1.iter_nextfunc(\n                    self, 1, pbs_obj_name, pbs_filter1, self.filter2,\n                    self.ignore_fin, self.filter_user)\n    else:\n        def __init__(self, pbs_obj_name, pbs_filter1,\n                     pbs_filter2, connect_server=None):\n\n            self._caller = _pbs_v1.get_python_daemon_name()\n            if self._caller == \"pbs_python\":\n\n                if(connect_server is None):\n                    self._connect_server = \"localhost\"\n                    sn = \"\"\n                else:\n                    self._connect_server = connect_server\n                    sn = connect_server\n\n                self.type = pbs_obj_name\n                if _pbs_v1.use_static_data():\n                    if(self.type == \"jobs\"):\n                        self.bs = iter(_pbs_v1.get_job_static(\"\", sn, \"\"))\n                    elif(self.type == \"queues\"):\n                        self.bs = iter(_pbs_v1.get_queue_static(\"\", sn))\n                    elif(self.type == \"vnodes\"):\n                        self.bs = iter(_pbs_v1.get_vnode_static(\"\", sn))\n                    
elif(self.type == \"resvs\"):\n                        self.bs = iter(_pbs_v1.get_resv_static(\"\", sn))\n                    else:\n                        _pbs_v1.logmsg(_pbs_v1.LOG_DEBUG,\n                                       \"pbs_iter/init: Bad object \"\n                                       \"iterator type %s\"\n                                       % (self.type))\n                        return None\n                    return\n\n                self.con = pbs_connect(self._connect_server)\n                if self.con < 0:\n                    _pbs_v1.logmsg(_pbs_v1.LOG_DEBUG,\n                                   \"pbs_iter: Unable to connect to server %s\"\n                                   % (connect_server))\n                    return None\n\n                if(self.type == \"jobs\"):\n                    self.bs = pbs_statjob(self.con, pbs_filter2, None, None)\n                elif(self.type == \"queues\"):\n                    self.bs = pbs_statque(self.con, None, None, None)\n                elif(self.type == \"vnodes\"):\n                    self.bs = pbs_statvnode(self.con, None, None, None)\n                elif(self.type == \"resvs\"):\n                    self.bs = pbs_statresv(self.con, None, None, None)\n                else:\n                    _pbs_v1.logmsg(_pbs_v1.LOG_DEBUG,\n                                   \"pbs_iter/init: Bad object iterator type %s\"\n                                   % (self.type))\n                    pbs_disconnect(self.con)\n                    self.con = -1\n                    return None\n\n            else:\n\n                self.obj_name = pbs_obj_name\n                self.filter1 = pbs_filter1\n                self.filter2 = pbs_filter2\n                # argument 1 below tells C function we're inside __init__\n                _pbs_v1.iter_nextfunc(\n                    self, 1, pbs_obj_name, pbs_filter1, pbs_filter2)\n\n    def __iter__(self):\n        return self\n\n    # NAS localmod 014\n    if 
NAS_mod is not None and NAS_mod != 0:\n        def __next__(self):\n            if self._caller == \"pbs_python\":\n                if not hasattr(self, \"bs\") or self.bs is None:\n                    if not _pbs_v1.use_static_data():\n                        pbs_disconnect(self.con)\n                        self.con = -1\n                    raise StopIteration\n\n                if _pbs_v1.use_static_data():\n                    if(self.type == \"jobs\"):\n                        return _pbs_v1.get_job_static(next(self.bs),\n                                                      self._connect_server, \"\")\n                    elif(self.type == \"queues\"):\n                        return _pbs_v1.get_queue_static(next(self.bs),\n                                                        self._connect_server)\n                    elif(self.type == \"resvs\"):\n                        return _pbs_v1.get_resv_static(next(self.bs),\n                                                       self._connect_server)\n                    elif(self.type == \"vnodes\"):\n                        return _pbs_v1.get_vnode_static(next(self.bs),\n                                                        self._connect_server)\n                    else:\n                        _pbs_v1.logmsg(_pbs_v1.LOG_DEBUG,\n                                       \"pbs_iter/next: Bad object\"\n                                       \" iterator type %s\"\n                                       % (self.type))\n                        raise StopIteration\n                    return\n\n                b = self.bs\n                job = None\n\n                _pbs_v1.set_c_mode()\n                server_data_fp = get_server_data_fp()\n                if(b):\n                    if(self.type == \"jobs\"):\n                        _pbs_v1.logmsg(_pbs_v1.LOG_DEBUG,\n                                       \"pbs_iter/next: pbs_python mode not\"\n                                       \" supported by NAS local 
mod\")\n                        pbs_disconnect(self.con)\n                        if server_data_fp:\n                            server_data_fp.close()\n                        self.con = -1\n                        _pbs_v1.set_python_mode()\n                        raise StopIteration\n                    elif(self.type == \"queues\"):\n                        obj = _queue(b.name, self._connect_server)\n                        header_str = \"pbs.server().queue(%s)\" % (b.name,)\n                    elif(self.type == \"resvs\"):\n                        obj = _resv(b.name, self._connect_server)\n                        header_str = \"pbs.server().resv(%s)\" % (b.name,)\n                    elif(self.type == \"vnodes\"):\n                        obj = _vnode(b.name, self._connect_server)\n                        header_str = \"pbs.server().vnode(%s)\" % (b.name,)\n                    else:\n                        _pbs_v1.logmsg(_pbs_v1.LOG_DEBUG,\n                                       \"pbs_iter/next: Bad object iterator \"\n                                       \"type %s\"\n                                       % (self.type))\n                        pbs_disconnect(self.con)\n                        if server_data_fp:\n                            server_data_fp.close()\n                        self.con = -1\n                        _pbs_v1.set_python_mode()\n                        raise StopIteration\n\n                    a = b.attribs\n\n                    while(a):\n                        n = a.name\n                        r = a.resource\n                        v = a.value\n\n                        if(self.type == \"vnodes\"):\n                            if(n == ATTR_NODE_state):\n                                v = _pbs_v1.str_to_vnode_state(v)\n                            elif(n == ATTR_NODE_ntype):\n                                v = _pbs_v1.str_to_vnode_ntype(v)\n                            elif(n == ATTR_NODE_Sharing):\n                         
       v = _pbs_v1.str_to_vnode_sharing(v)\n\n                        if(self.type == \"jobs\"):\n                            if n == ATTR_inter or n == ATTR_block or \\\n                                    n == ATTR_X11_port:\n                                v = int(pbs_bool(v))\n\n                        if(r):\n                            pr = getattr(obj, n)\n\n                            # if resource list does not exist, then set it\n                            if(pr is None):\n                                setattr(obj, n, None)\n\n                            pr = getattr(obj, n)\n                            if (pr is None):\n                                _pbs_v1.logmsg(_pbs_v1.LOG_DEBUG,\n                                               \"pbs_statobj: missing %s\" % (n))\n                                a = a.next\n                                continue\n\n                            vo = getattr(pr, r)\n                            if(vo is None):\n                                setattr(pr, r, v)\n                                if server_data_fp:\n                                    server_data_fp.write(\n                                        \"%s.%s[%s]=%s\\n\" % (header_str, n,\n                                                            r, v))\n                            else:\n                                # append value:\n                                # example: \"select=1:ncpus=1,ncpus=1,nodect=1,\n                                # place=pack\"\n                                vl = [vo, v]\n                                setattr(pr, r, \",\".join(vl))\n                                if server_data_fp:\n                                    server_data_fp.write(\"%s.%s[%s]=%s\\n\" % (\n                                        header_str, n, r, \",\".join(vl)))\n\n                        else:\n                            vo = getattr(obj, n)\n\n                            if(vo is None):\n                                setattr(obj, n, v)\n        
                        if server_data_fp:\n                                    server_data_fp.write(\n                                        \"%s.%s=%s\\n\" % (header_str, n, v))\n                            else:\n                                vl = [vo, v]\n                                setattr(obj, n, \",\".join(vl))\n                                if server_data_fp:\n                                    server_data_fp.write(\"%s.%s=%s\\n\" % (\n                                        header_str, n, \",\".join(vl)))\n\n                        a = a.next\n\n                self.bs = b.next\n\n                if server_data_fp:\n                    server_data_fp.close()\n                _pbs_v1.set_python_mode()\n                return obj\n            else:\n                # argument 0 below tells C function we're inside next\n                return _pbs_v1.iter_nextfunc(self, 0, self.obj_name,\n                                             self.filter1,\n                                             self.filter2, self.ignore_fin,\n                                             self.filter_user)\n    else:\n        def __next__(self):\n            if self._caller == \"pbs_python\":\n                if not hasattr(self, \"bs\") or self.bs is None:\n                    if not _pbs_v1.use_static_data():\n                        pbs_disconnect(self.con)\n                        self.con = -1\n                    raise StopIteration\n\n                if _pbs_v1.use_static_data():\n                    if(self.type == \"jobs\"):\n                        return _pbs_v1.get_job_static(next(self.bs),\n                                                      self._connect_server,\n                                                      \"\")\n                    elif(self.type == \"queues\"):\n                        return _pbs_v1.get_queue_static(next(self.bs),\n                                                        self._connect_server)\n                    
elif(self.type == \"resvs\"):\n                        return _pbs_v1.get_resv_static(next(self.bs),\n                                                       self._connect_server)\n                    elif(self.type == \"vnodes\"):\n                        return _pbs_v1.get_vnode_static(next(self.bs),\n                                                        self._connect_server)\n                    else:\n                        _pbs_v1.logmsg(_pbs_v1.LOG_DEBUG,\n                                       \"pbs_iter/next: Bad object\"\n                                       \" iterator type %s\"\n                                       % (self.type))\n                        raise StopIteration\n                    return\n\n                b = self.bs\n                job = None\n\n                _pbs_v1.set_c_mode()\n\n                server_data_fp = get_server_data_fp()\n                if(b):\n                    if(self.type == \"jobs\"):\n                        obj = _job(b.name, self._connect_server)\n                        header_str = \"pbs.server().job(%s)\" % (b.name,)\n                    elif(self.type == \"queues\"):\n                        obj = _queue(b.name, self._connect_server)\n                        header_str = \"pbs.server().queue(%s)\" % (b.name,)\n                    elif(self.type == \"resvs\"):\n                        obj = _resv(b.name, self._connect_server)\n                        header_str = \"pbs.server().resv(%s)\" % (b.name,)\n                    elif(self.type == \"vnodes\"):\n                        obj = _vnode(b.name, self._connect_server)\n                        header_str = \"pbs.server().vnode(%s)\" % (b.name,)\n                    else:\n                        _pbs_v1.logmsg(_pbs_v1.LOG_DEBUG,\n                                       \"pbs_iter/next: Bad object\"\n                                       \" iterator type %s\"\n                                       % (self.type))\n                        
pbs_disconnect(self.con)\n                        if server_data_fp:\n                            server_data_fp.close()\n                        self.con = -1\n                        _pbs_v1.set_python_mode()\n                        raise StopIteration\n\n                    a = b.attribs\n\n                    while(a):\n                        n = a.name\n                        r = a.resource\n                        v = a.value\n\n                        if(self.type == \"vnodes\"):\n                            if(n == ATTR_NODE_state):\n                                v = _pbs_v1.str_to_vnode_state(v)\n                            elif(n == ATTR_NODE_ntype):\n                                v = _pbs_v1.str_to_vnode_ntype(v)\n                            elif(n == ATTR_NODE_Sharing):\n                                v = _pbs_v1.str_to_vnode_sharing(v)\n\n                        if(self.type == \"jobs\"):\n                            if n == ATTR_inter or n == ATTR_block or \\\n                                    n == ATTR_X11_port:\n                                v = int(pbs_bool(v))\n\n                        if(r):\n                            pr = getattr(obj, n)\n\n                            # if resource list does not exist, then set it\n                            if(pr is None):\n                                setattr(obj, n, None)\n\n                            pr = getattr(obj, n)\n                            if (pr is None):\n                                _pbs_v1.logmsg(_pbs_v1.LOG_DEBUG,\n                                               \"pbs_statobj: missing %s\" % (n))\n                                a = a.next\n                                continue\n\n                            vo = getattr(pr, r)\n                            if(vo is None):\n                                setattr(pr, r, v)\n                                if server_data_fp:\n                                    server_data_fp.write(\n                                        
\"%s.%s[%s]=%s\\n\" % (header_str, n, r,\n                                                            v))\n                            else:\n                                # append value:\n                                # example: \"select=1:ncpus=1,ncpus=1,nodect=1,\n                                # place=pack\"\n                                vl = [vo, v]\n                                setattr(pr, r, \",\".join(vl))\n                                if server_data_fp:\n                                    server_data_fp.write(\"%s.%s[%s]=%s\\n\" % (\n                                        header_str, n, r, \",\".join(vl)))\n\n                        else:\n                            vo = getattr(obj, n)\n\n                            if(vo is None):\n                                setattr(obj, n, v)\n                                if server_data_fp:\n                                    server_data_fp.write(\n                                        \"%s.%s=%s\\n\" % (header_str, n, v))\n                            else:\n                                vl = [vo, v]\n                                setattr(obj, n, \",\".join(vl))\n                                if server_data_fp:\n                                    server_data_fp.write(\"%s.%s=%s\\n\" % (\n                                        header_str, n, \",\".join(vl)))\n\n                        a = a.next\n\n                self.bs = b.next\n\n                _pbs_v1.set_python_mode()\n                if server_data_fp:\n                    server_data_fp.close()\n                return obj\n            else:\n                # argument 0 below tells C function we're inside next\n                return _pbs_v1.iter_nextfunc(self, 0, self.obj_name,\n                                             self.filter1, self.filter2)\n#: C(pbs_iter)\n\n#:------------------------------------------------------------------------\n#                  SERVER ATTRIBUTE 
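The attribute walk in `__next__` above merges repeated values for the same attribute or resource by appending with a comma (the in-code example builds `"select=1:ncpus=1,ncpus=1,nodect=1,place=pack"`). A standalone sketch of that merge rule (the helper name is hypothetical):

```python
def merge_attr_value(current, new):
    """Combine attribute values the way the __next__ loops above do."""
    # First value simply fills the slot; later values for the same
    # attribute/resource are appended with a comma separator.
    if current is None:
        return new
    return ",".join([current, new])
```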
TYPE\n#:-------------------------------------------------------------------------\nclass _server_attribute:\n    \"\"\"\n    This represents an external form of attributes.\n    \"\"\"\n    attributes = PbsReadOnlyDescriptor('attributes', {})\n\n    def __init__(self, name, resource, value, op, flags):\n        self.name = name\n        self.resource = resource\n        self.value = value\n        self.op = op\n        self.flags = flags\n        self.sisters = []\n    #: m(__init__)\n\n    def __str__(self):\n        return (\"name=%s:resource=%s:value=%s:op=%s:flags=%s:sisters=%s\" %\n                self.tup())\n    #: m(__str__)\n\n    def __setattr__(self, name, value):\n        if _pbs_v1.in_python_mode():\n            raise BadAttributeValueError(\n                \"'%s' attribute in the server_attribute object is readonly\"\n                % (name,))\n        super().__setattr__(name, value)\n    #: m(__setattr__)\n\n    def extract_flags_str(self):\n        \"\"\"returns the string values from the attribute flags.\"\"\"\n        lst = []\n        for mask, value in _pbs_v1.REVERSE_ATR_VFLAGS.items():\n            if self.flags & mask:\n                lst.append(value)\n        return lst\n    #: m(extract_flags_str)\n\n    def extract_flags_int(self):\n        \"\"\"returns the integer values from the attribute flags.\"\"\"\n        lst = []\n        for mask, value in _pbs_v1.REVERSE_ATR_VFLAGS.items():\n            if self.flags & mask:\n                lst.append(mask)\n        return lst\n    #: m(extract_flags_int)\n\n    def tup(self):\n        return (self.name, self.resource, self.value, self.op, self.flags,\n                self.sisters)\n    #: m(tup)\n\n_server_attribute._connect_server = PbsAttributeDescriptor(\n    _server_attribute, '_connect_server', \"\", (str,))\n#: C(_server_attribute)\n\n# This exposes pbs.server_attribute() to be callable in a hook script\nserver_attribute = 
_server_attribute\n\n#:------------------------------------------------------------------------\n#                  MANAGEMENT TYPE\n#:-------------------------------------------------------------------------\nclass _management:\n    \"\"\"\n    This represents a management operation.\n    \"\"\"\n    attributes = PbsReadOnlyDescriptor('attributes', {})\n\n    def __init__(self, cmd, objtype, objname, request_time, reply_code,\n        reply_auxcode, reply_choice, reply_text,\n        attribs, connect_server=None):\n        \"\"\"__init__\"\"\"\n        self.cmd = cmd\n        self.objtype = objtype\n        self.objname = objname\n        self.request_time = request_time\n        self.reply_code = reply_code\n        self.reply_auxcode = reply_auxcode\n        self.reply_choice = reply_choice\n        self.reply_text = reply_text\n        self.attribs = attribs\n        self._readonly = True\n        self._connect_server = connect_server\n    #: m(__init__)\n\n    def __str__(self):\n        \"\"\"String representation of the object\"\"\"\n        return \"%s:%s:%s\" % (\n            _pbs_v1.REVERSE_MGR_CMDS.get(self.cmd, self.cmd),\n            _pbs_v1.REVERSE_MGR_OBJS.get(self.objtype, self.objtype),\n            self.objname\n            )\n    #: m(__str__)\n\n    def __setattr__(self, name, value):\n        if _pbs_v1.in_python_mode():\n            raise BadAttributeValueError(\n                \"'%s' attribute in the management object is readonly\" %\n                (name,))\n        super().__setattr__(name, value)\n    #: m(__setattr__)\n\n_management._connect_server = PbsAttributeDescriptor(\n    _management, '_connect_server', \"\", (str,))\n#: C(_management)\n\n# This exposes pbs.management() to be callable in a hook script\nmanagement = _management\n\n\n#:------------------------------------------------------------------------\n#                  Reverse Lookup for 
_pv1mod_insert_int_constants\n#:-------------------------------------------------------------------------\n_pbs_v1.REVERSE_MGR_CMDS = {}\n_pbs_v1.REVERSE_MGR_OBJS = {}\n_pbs_v1.REVERSE_BRP_CHOICES = {}\n_pbs_v1.REVERSE_BATCH_OPS = {}\n_pbs_v1.REVERSE_ATR_VFLAGS = {}\n_pbs_v1.REVERSE_NODE_STATE = {}\n_pbs_v1.REVERSE_JOB_STATE = {}\n_pbs_v1.REVERSE_JOB_SUBSTATE = {}\n_pbs_v1.REVERSE_RESV_STATE = {}\n_pbs_v1.REVERSE_HOOK_EVENT = {}\n\nfor key, value in _pbs_v1.__dict__.items():\n    if key.startswith(\"MGR_CMD_\"):\n        _pbs_v1.REVERSE_MGR_CMDS[value] = key\n    elif key.startswith(\"MGR_OBJ_\"):\n        _pbs_v1.REVERSE_MGR_OBJS[value] = key\n    elif key.startswith(\"BRP_CHOICE_\"):\n        _pbs_v1.REVERSE_BRP_CHOICES[value] = key\n    elif key.startswith(\"BATCH_OP_\"):\n        _pbs_v1.REVERSE_BATCH_OPS[value] = key\n    elif key.startswith(\"ATR_VFLAG_\"):\n        _pbs_v1.REVERSE_ATR_VFLAGS[value] = key\n    elif key.startswith(\"ND_STATE_\"):\n        _pbs_v1.REVERSE_NODE_STATE[value] = key\n    elif key.startswith(\"JOB_STATE_\"):\n        _pbs_v1.REVERSE_JOB_STATE[value] = key\n    elif key.startswith(\"JOB_SUBSTATE_\"):\n        _pbs_v1.REVERSE_JOB_SUBSTATE[value] = key\n    elif key.startswith(\"RESV_STATE\"):\n        _pbs_v1.REVERSE_RESV_STATE[value] = key\n    elif key.startswith(\"HOOK_EVENT_\"):\n        _pbs_v1.REVERSE_HOOK_EVENT[value] = key\n"
  },
  {
    "path": "src/modules/python/pbs_hooks/PBS_alps_inventory_check.HK",
    "content": "type=pbs\nenabled=false\nuser=pbsadmin\nevent=exechost_periodic\nfreq=300\nalarm=90\n"
  },
  {
    "path": "src/modules/python/pbs_hooks/PBS_alps_inventory_check.PY",
    "content": "# coding: utf-8\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n\n\"\"\"\nPurpose:\n\n- Force PBS to refresh node information when PBS and ALPS get out of\n  sync.\n\nRequirements:\n\n- PBS vnodes created in qmgr must be named after a locally-valid\n  alias for each login node.\n\n Some enhanced operations in a multi-Cray or mixed Cray/non-Cray\n environment with an external server:\n\n    1) Determine my node's cray host value by reading /etc/xthostname.\n    2) Only compile lists of compute and login nodes whose PBScrayhost\n       resource matches my crayhost value.\n    3) Current host always validates itself against the first \"up\"\n       Cray login node. This prevents non-Cray nodes from attempting to\n       obtain an ALPS inventory in cases where there is only one\n       Cray login node. To facilitate this, we keep a list,\n       cray_login_list.\n    4) Rather than attempting to index into the list of Cray login\n       nodes (which fails with a spurious error message if the hook\n       is running on a non-Cray node in a mixed environment, or if\n       the vnode name of the host does not match the default hostname\n       of the node), check all local aliases for this node against the\n       vnode name of the first \"up\" Cray login node to determine\n       whether to proceed.\n    5) Check that the PBScraynid resource is defined before attempting\n       to add a node's PBScraynid to the nid list. 
This prevents the\n       hook from failing on a conversion error if a site sets the\n       vntype resource on a non-Cray node.\n    6) A compute node that is offlined in PBS, but still present as a\n       batch node in ALPS, would cause a false inventory mismatch\n       since only \"up\" nodes were included in the pbs_nids_set. This\n       would in turn cause the hook to HUP the inventory MoM every cycle.\n       It's perfectly reasonable for an Admin to offline a node\n       without necessarily downing it in ALPS. We deal with this by:\n       a) treating node states as a bitmask rather than as a set of\n          integer values\n       b) distinguishing between \"down\" states (\"down\", \"unknown\",\n          \"stale\") and offline\n       c) excluding down or offline login nodes from consideration as\n          the inventory MoM\n       d) special-casing offline compute nodes by removing their nids\n          from the apstat_nids_set if present and excluding them from\n          the pbs_nids_set. This essentially means that we don't care\n          whether an offline node is present in ALPS or not.\n    7) When we check to see whether the current node matches the\n       inventory MoM, we actually resolve the inventory MoM and see\n       if its IP matches a valid IP from the current node. 
This allows\n       the hook to continue to work even if a site changes the default\n       hostname of a MoM node after installation.\n\"\"\"\n\nimport fcntl\nimport os\nimport re\nimport socket\nimport struct\nimport sys\nimport time\nfrom signal import SIGHUP\nfrom subprocess import PIPE, Popen\n\nimport pbs\n\nXTHOSTNAME = \"/etc/xthostname\"\nAPSTAT_CMD = \"/opt/cray/alps/default/bin/apstat\"\nSIOCGIFADDR = 0x8915\n\n\ndef get_mom_home():\n    \"\"\"\n    Return the path to the PBS home directory\n    \"\"\"\n\n    for v in (\"PBS_MOM_HOME\", \"PBS_HOME\"):\n        if v in os.environ:\n            return os.environ[v]\n\n    home = None\n    conf_file = os.environ.get(\"PBS_CONF_FILE\", \"/etc/pbs.conf\")\n    with open(conf_file) as conf:\n        for line in conf:\n            line = line.strip()\n            if \"=\" not in line:\n                continue\n            (key, val) = line.split(\"=\", 1)\n            if key == \"PBS_MOM_HOME\":\n                home = val\n                # PBS_MOM_HOME takes priority over PBS_HOME, so we are done\n                # searching\n                break\n            elif key == \"PBS_HOME\":\n                home = val\n\n    if home is None or not home.strip():\n        return None\n\n    return home\n\n\ndef hup_mom():\n    PBS_MOM_HOME = get_mom_home()\n    pidfile = open(os.path.join(PBS_MOM_HOME, \"mom_priv\", \"mom.lock\"))\n    pid = int(pidfile.readline())\n    pidfile.close()\n    os.kill(pid, SIGHUP)\n\n\ndef get_apstat_nids(msg):\n    \"\"\"\n    Returns the set of nids reported by ALPS as marked \"up\" and of type\n    \"batch\".\n\n    Sample output of the command 'apstat -nv':\n\n    NID Arch State HW Rv Pl  PgSz     Avl    Conf Placed PEs Apids\n    2   XT UP  B 16  -  -    4K 4194304       0      0   0\n    3   XT UP  B 16  -  -    4K 4194304       0      0   0\n    \"\"\"\n\n    if not os.path.isfile(APSTAT_CMD):\n        msg += [\"ALPS Inventory Check: apstat command cannot be found at %s\" %\n                (APSTAT_CMD)]\n        return None\n\n    apstat_nids = set()\n    cmd_apstat = 
APSTAT_CMD + \" -nv\"\n    apstat_out = Popen(cmd_apstat, shell=True, stdout=PIPE)\n\n    if apstat_out.wait() != 0:\n        msg += [\"ALPS Inventory Check: No nodes reported by apstat.\"]\n        hup_mom()\n        __exit_hook(1, msg)\n\n    pattern = re.compile(\n        r\"(?P<nid>.+?)\\s+(?P<arch>.+?)\\s+(?P<state>.+?)\\s+\" +\n        r\"(?P<hw>.+?)\\s+(?P<everything>.+)\")\n\n    for apstat_line in apstat_out.stdout:\n        apstat_line = apstat_line.decode().strip()\n\n        apstat_record = re.search(pattern, apstat_line)\n\n        if apstat_record:\n            if apstat_record.group('state') == \"UP\" and \\\n                    apstat_record.group('hw') == \"B\":\n                apstat_nids.add(int(apstat_record.group('nid')))\n\n    return apstat_nids\n\n\ndef flush_log_messages(msg=None):\n    \"\"\"\n    Prints msg to the log file\n    \"\"\"\n    if msg is not None:\n        for m in msg:\n            pbs.logmsg(pbs.LOG_DEBUG, m)\n\n\ndef __exit_hook(code=0, msg=None):\n    flush_log_messages(msg)\n    sys.exit(code)\n\n\ndef my_addresses():\n    \"\"\" yields an IPv4 address string for each network interface \"\"\"\n\n    for ifname in os.listdir('/sys/class/net'):\n        if ifname == 'lo':\n            continue\n\n        test_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\n        ifreq = struct.pack('256s', bytes(ifname[:15], \"utf-8\"))\n\n        try:\n            sockaddr_in = fcntl.ioctl(test_socket.fileno(),\n                                      SIOCGIFADDR,\n                                      ifreq)[20:24]\n        except IOError:\n            continue\n\n        yield socket.inet_ntoa(sockaddr_in)\n\n\ncray_login_list = []\noffline_nids_list = []\npbs_nids_set = set()\nnow_mn = int(time.strftime(\"%M\", time.gmtime()))\nmsg = []\n\nif not os.path.isfile(XTHOSTNAME):\n    msg += [\"No %s file found on this host.\" % (XTHOSTNAME)]\n    __exit_hook(0, msg)\n\n# XTHOSTNAME file found on this host. 
Read it to determine our Cray hostname.\n\nwith open(XTHOSTNAME) as xthost_file:\n    my_crayhost = xthost_file.readline()\n    my_crayhost = my_crayhost.rstrip()\n    msg += [\"Processing ALPS inventory for crayhost %s\" % (my_crayhost)]\n\nstart = time.time()\nvnodes = pbs.server().vnodes()\nvnodes_query_duration = time.time() - start\nif not vnodes:\n    msg += [\"ALPS Inventory Check: No vnodes reported by PBS\"]\n    __exit_hook(1, msg)\n\ndown_states = pbs.ND_DOWN | pbs.ND_STALE | pbs.ND_STATE_UNKNOWN\n\nfor v in vnodes:\n    str_v = str(v)\n    vntype = \" \"\n\n    if (v.state & down_states) or \\\n        \"PBScrayhost\" not in v.resources_available or \\\n            v.resources_available[\"PBScrayhost\"] != my_crayhost:\n        continue\n\n    if \"vntype\" in v.resources_available:\n        vntype = v.resources_available[\"vntype\"]\n\n    if vntype == \"cray_login\":\n        if (not (v.state & pbs.ND_OFFLINE)) and \\\n                str_v not in cray_login_list:\n            cray_login_list.append(str_v)\n\n    elif vntype == \"cray_compute\":\n\n        if \"PBScraynid\" not in v.resources_available:\n            continue\n        pbs_craynid = v.resources_available[\"PBScraynid\"]\n\n        if pbs_craynid.isdigit():\n            if (v.state & pbs.ND_OFFLINE):\n\n                # if vnode is offline, add it to offline_nids_list.\n                # Otherwise, add it to the pbs_nids_set.\n                # Later on, the inventory MoM will iterate through the\n                # offline_nids_list and discard those nids from the\n                # apstat_nids_set. This has the effect of causing us\n                # to ignore any offline nids so we won't generate a\n                # spurious HUP of the MoM if there are offline vnodes\n                # whose nids are present in the apstat output. 
We use\n                # the set.discard() method because it doesn't throw an\n                # error if the nid isn't present in the apstat_nids_set.\n\n                offline_nids_list.append(int(pbs_craynid))\n            else:\n                pbs_nids_set.add(int(pbs_craynid))\n\nif len(cray_login_list) == 0:\n    msg += [\"ALPS Inventory Check: No eligible \" +\n            \"login nodes to perform inventory check\"]\n    __exit_hook(0, msg)\n\ncray_login_local_name = pbs.get_local_nodename()\n\ntry:\n    inventory_node = cray_login_list[0]\n    inventory_addr = socket.gethostbyname(inventory_node)\n\n    if ((inventory_addr not in my_addresses()) and (\n            inventory_node != cray_login_local_name)):\n        msg += [\"ALPS Inventory Check: Login node '%s' is in charge of \"\n                \"verification, skipping check on '%s'.\" %\n                (inventory_node, socket.gethostname())]\n        __exit_hook(0, msg)\n\n    start = time.time()\n    apstat_nids_set = get_apstat_nids(msg)\n    apstat_query_duration = time.time() - start\n\n    if apstat_query_duration > 1 or vnodes_query_duration > 1:\n        msg += [\"ALPS Inventory Check: apstat query: %ds pbsnodes query: %ds\" %\n                (apstat_query_duration, vnodes_query_duration)]\n\n    # Remove any offline nids from the apstat_nids_set.\n    for offline_nid in offline_nids_list:\n        apstat_nids_set.discard(offline_nid)\n\n    pbs_apstat_diff = pbs_nids_set.difference(apstat_nids_set)\n    apstat_pbs_diff = apstat_nids_set.difference(pbs_nids_set)\n\n    if apstat_pbs_diff:\n        msg += [\"ALPS Inventory Check: Compute \" +\n                \"node%s defined in ALPS, but not in PBS: %s\" %\n                (['', \"s\"][len(apstat_pbs_diff) > 1],\n                 \",\".join(str(n) for n in apstat_pbs_diff))]\n\n    if pbs_apstat_diff:\n        msg += [\"ALPS Inventory Check: Compute \" +\n                \"node%s defined in PBS, but not in ALPS: %s\" %\n                (['', 
\"s\"][len(pbs_apstat_diff) > 1],\n                 \",\".join(str(n) for n in pbs_apstat_diff))]\n\n    if apstat_pbs_diff or pbs_apstat_diff:\n        PBS_MOM_HOME = get_mom_home()\n        if PBS_MOM_HOME is not None:\n            flush_log_messages(msg)\n            hup_mom()\n            sys.exit(0)\n        else:\n            msg += [\"ALPS Inventory Check: Internal error in retrieving path \"\n                    \"to mom_priv\"]\n    else:\n        msg += [\"ALPS Inventory Check: PBS and ALPS are in sync\"]\n\n    flush_log_messages(msg)\n\nexcept SystemExit:\n    pass\nexcept BaseException:\n    msg += [\"ALPS Inventory Check: Failure in refreshing \"\n            \"nodes on login node (%s) \" % (cray_login_local_name)]\n    __exit_hook(1, msg)\n"
  },
  {
    "path": "src/modules/python/pbs_hooks/PBS_cray_atom.CF",
    "content": "{\n    \"post_timeout\": 30,\n    \"delete_timeout\": 30,\n    \"unix_socket_file\": \"/var/run/atomd/atomd.sock\"\n}\n"
  },
  {
    "path": "src/modules/python/pbs_hooks/PBS_cray_atom.HK",
    "content": "type=pbs\nenabled=false\nuser=pbsadmin\nevent=execjob_begin,execjob_end\norder=100\nalarm=300\nfail_action=offline_vnodes\n"
  },
  {
    "path": "src/modules/python/pbs_hooks/PBS_cray_atom.PY",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n\"\"\"\nPBS hook for consuming the Shasta Northbound API.\n\nThis hook services the following events:\n- execjob_begin\n- execjob_end\n\"\"\"\n\n\nimport json as JSON\nimport os\nimport site\nimport sys\nimport urllib\n\nsite.main()\n\n# to be PEP-8 compliant, the imports must be indented\nif True:\n    import requests\n    import requests_unixsocket\n    import pbs\n    import pwd\n    import copy\n    import re\n\nrequests_unixsocket.monkeypatch()\n\n# ============================================================================\n# Utility functions\n# ============================================================================\n\n\nclass OfflineError(Exception):\n    \"\"\"\n    Exception that will offline the node and reject the event\n    \"\"\"\n\n    def __init__(self, msg):\n        super().__init__(pbs.event().job.id + ': ' + msg)\n\n\nclass RejectError(Exception):\n    \"\"\"\n    Exception that will reject the event\n    \"\"\"\n    pass\n\n\nclass HookHelper(object):\n    \"\"\"\n    Helper to load config and event\n    \"\"\"\n    config = None\n\n    def __init__(self):\n        raise Exception('Access class via static methods')\n\n    @classmethod\n    def load_config(cls):\n        \"\"\"\n        Read the config file\n        \"\"\"\n        log_function_name()\n        defaults = {\n            'post_timeout': 30,\n            
'delete_timeout': 30,\n            'unix_socket_file': '/var/run/atomd/atomd.sock',\n        }\n        constants = {\n            'version_uri': '/rm/v1',\n            'resources': {\n                'job': '/jobs',\n                'task': '/tasks'\n            }\n        }\n        # Identify the config file and read in the data\n        config_file = ''\n        if 'PBS_HOOK_CONFIG_FILE' in os.environ:\n            config_file = os.environ['PBS_HOOK_CONFIG_FILE']\n        else:\n            raise RuntimeError('%s: No config file set' % caller_name())\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'config file is %s' % config_file)\n        try:\n            with open(config_file, 'r') as cfg:\n                config = merge_dict(defaults, JSON.load(cfg))\n        except IOError:\n            raise IOError('I/O error reading config file')\n        config = merge_dict(config, constants)\n        pbs.logmsg(pbs.EVENT_DEBUG4, 'loaded config is: %s' % (str(config)))\n        cls.config = config\n\n    @classmethod\n    def validate_config(cls):\n        \"\"\"\n        Validate the config file\n        This will check if the unix_socket_file resolves to a file.\n        \"\"\"\n        if not os.path.exists(cls.get_config()['unix_socket_file']):\n            log_with_caller(pbs.EVENT_DEBUG4,\n                            'Unix socket file does not exist, skipping hook',\n                            jobid=False)\n            pbs.event().accept()\n\n    @classmethod\n    def get_config(cls):\n        \"\"\"\n        Load the config if it hasn't already been loaded.\n        Return the config\n        \"\"\"\n        if not cls.config:\n            cls.load_config()\n            cls.validate_config()\n        return cls.config\n\n    @staticmethod\n    def build_path(resource, jobid=None):\n        \"\"\"\n        Given a jobid and a resource type, build the path.\n        If jobid is None, build a path to the collection.\n        requests_unixsocket requires the socket path to 
be percent-encoded\n        \"\"\"\n        log_function_name()\n        cfg = HookHelper.get_config()\n\n        if resource not in cfg['resources']:\n            log_with_caller(pbs.EVENT_ERROR, 'Invalid resource type %s' %\n                            resource, jobid=False)\n            raise RejectError()\n\n        path = 'http+unix://%s%s%s%s' % (\n            urllib.parse.quote(cfg['unix_socket_file'], safe=''),\n            cfg['version_uri'],\n            cfg['resources'][resource],\n            ('/' + jobid) if jobid else ''\n        )\n        log_with_caller(pbs.EVENT_DEBUG4, 'path is %s' % path)\n        return path\n\n    @staticmethod\n    def is_it_exclusive(job):\n        \"\"\"\n        check to see if the job requested exclusive, or if the\n        nodes are marked exclusive.  This needs to be passed\n        to ATOM.\n        \"\"\"\n        place = str(job.Resource_List[\"place\"])\n        log_with_caller(pbs.EVENT_DEBUG4, \"place is %s\" % place)\n\n        # See if the node sharing value has exclusive\n        vn = pbs.server().vnode(pbs.get_local_nodename())\n        sharing = vn.sharing\n        log_with_caller(pbs.EVENT_DEBUG4, \"The sharing value is %s type %s\" %\n                        (str(sharing), str(type(sharing))))\n\n        # Uses the same logic as the scheduler (is_excl())\n        if sharing == pbs.ND_FORCE_EXCL or sharing == pbs.ND_FORCE_EXCLHOST:\n            return True\n\n        if sharing == pbs.ND_IGNORE_EXCL:\n            return False\n\n        if any(s.startswith('excl') for s in place.split(':')):\n            return True\n        if any(s.startswith('shared') for s in place.split(':')):\n            return False\n\n        if (sharing == pbs.ND_DEFAULT_EXCL or\n            sharing == pbs.ND_DEFAULT_EXCLHOST):\n            return True\n\n        if sharing == pbs.ND_DEFAULT_SHARED:\n            return False\n\n        return False\n\ndef post(url, json=None, **kwargs):\n    \"\"\"\n    Wrapper to requests.post\n\n 
   Logs before and after\n    \"\"\"\n    log_with_caller(pbs.EVENT_DEBUG2, 'Sending POST to %s' % url, caller=1)\n    log_with_caller(pbs.EVENT_DEBUG2, 'Sending POST JSON: %s' %\n                    JSON.dumps(json), caller=1)\n    r = requests.post(url, json=json, **kwargs)\n    log_with_caller(pbs.EVENT_DEBUG2, 'Received POST status code = %s' %\n                    r.status_code, caller=1)\n    log_with_caller(pbs.EVENT_DEBUG2, 'Received POST text %s' %\n                    r.text, caller=1)\n    return r\n\n\ndef get(url, params=None, **kwargs):\n    \"\"\"\n    Wrapper to requests.get\n\n    Logs before and after\n    \"\"\"\n    log_with_caller(pbs.EVENT_DEBUG2, 'Sending GET to %s' % url, caller=1)\n    if params:\n        log_with_caller(pbs.EVENT_DEBUG2,\n                        'Sending GET params: %s' % params, caller=1)\n    r = requests.get(url, params=params, **kwargs)\n    log_with_caller(pbs.EVENT_DEBUG2, 'Received GET status code = %s' %\n                    r.status_code, caller=1)\n    log_with_caller(pbs.EVENT_DEBUG2, 'Received GET text %s' %\n                    r.text, caller=1)\n    return r\n\n\ndef delete(url, **kwargs):\n    \"\"\"\n    Wrapper to requests.delete\n\n    Logs before and after\n    \"\"\"\n    log_with_caller(pbs.EVENT_DEBUG2, 'Sending DELETE to %s' % url, caller=1)\n    r = requests.delete(url, **kwargs)\n    log_with_caller(pbs.EVENT_DEBUG2, 'Received DELETE status code = %s' %\n                    r.status_code, caller=1)\n    log_with_caller(pbs.EVENT_DEBUG2, 'Received DELETE text %s' %\n                    r.text, caller=1)\n    return r\n\n\ndef caller_name(frames=1):\n    \"\"\"\n    Return the name of the nth calling function or method.\n    \"\"\"\n    return str(sys._getframe(frames).f_code.co_name)\n\n\ndef log_function_name():\n    \"\"\"\n    Log the caller's name\n    \"\"\"\n    pbs.logmsg(pbs.EVENT_DEBUG4, '%s:%s: Method called' %\n               (pbs.event().hook_name, caller_name(2)))\n\n\ndef 
log_with_caller(sev, mes, caller=0, jobid=True):\n    \"\"\"\n    Wrapper to pbs.logmsg with caller's name prepended\n\n    Increment caller to get the caller of the calling function\n\n    If jobid is true, add the jobid from the event to the log message\n    \"\"\"\n    if jobid:\n        pbs.logmsg(sev, '%s:%s:%s: %s' %\n                   (pbs.event().hook_name, pbs.event().job.id,\n                    caller_name(2 + caller), mes))\n    else:\n        pbs.logmsg(sev, '%s:%s: %s' %\n                   (pbs.event().hook_name, caller_name(2 + caller), mes))\n\n\ndef merge_dict(base, new):\n    \"\"\"\n    Merge together two multilevel dictionaries where new\n    takes precedence over base\n    \"\"\"\n    if not isinstance(base, dict):\n        raise ValueError('base must be type dict')\n    if not isinstance(new, dict):\n        raise ValueError('new must be type dict')\n    newkeys = list(new.keys())\n    merged = {}\n    for key in base:\n        if key in newkeys and isinstance(base[key], dict):\n            # Take it off the list of keys to copy\n            newkeys.remove(key)\n            merged[key] = merge_dict(base[key], new[key])\n        else:\n            merged[key] = copy.deepcopy(base[key])\n    # Copy the remaining unique keys from new\n    for key in newkeys:\n        merged[key] = copy.deepcopy(new[key])\n    return merged\n\n\ndef retry_post(data):\n    \"\"\"\n    In the case where a POST fails due to a 400 error,\n    it could be because there is already a job on the cray side.\n    In that case, we should try to delete the existing job and\n    resubmit a new one.\n\n    If a previous POST timed out and we rejected it, but the service\n    just took too long to respond, the job would still exist on the\n    service.\n    \"\"\"\n    event = pbs.event()\n    jid = event.job.id\n\n    joburl = HookHelper.build_path(resource='job', jobid=jid)\n    del_timeout = HookHelper.get_config()['delete_timeout']\n    try:\n        r_del = delete(joburl, timeout=del_timeout)\n  
      r_del.raise_for_status()\n    except requests.Timeout:\n        log_with_caller(pbs.EVENT_ERROR, 'DELETE timed out')\n        raise OfflineError('Job delete timed out')\n    except requests.HTTPError:\n        # If 404, then maybe the job that was there is now gone,\n        # try posting again. Otherwise, raise an OfflineError\n        if r_del.status_code != 404:\n            log_with_caller(pbs.EVENT_ERROR, 'DELETE job failed')\n            raise OfflineError('Job delete failed')\n\n    url = HookHelper.build_path(resource='job')\n    post_timeout = HookHelper.get_config()['post_timeout']\n    try:\n        r_post = post(url, json=data, timeout=post_timeout)\n        r_post.raise_for_status()\n    except requests.Timeout:\n        log_with_caller(pbs.EVENT_ERROR, 'POST timed out')\n        raise OfflineError('Job POST timed out')\n    except requests.HTTPError:\n        log_with_caller(pbs.EVENT_ERROR,\n                        'Invalid status code %d' % r_post.status_code)\n        raise OfflineError('Job POST encountered invalid status code')\n\n    # if we got here, we've successfully deleted and re-posted the job\n    log_with_caller(pbs.EVENT_DEBUG, 'Job %s registered' % jid)\n    return\n\ndef get_uid(user):\n    \"\"\"\n    get uid for the user\n    \"\"\"\n    try:\n        return pwd.getpwnam(user).pw_uid\n    except Exception:\n        pbs.logmsg(pbs.EVENT_DEBUG, \"Error while reading user uid from the\"\n                   \" machine.\")\n        raise RejectError(f\"Unable to get uid for {user}\")\n\ndef handle_execjob_begin():\n    \"\"\"\n    Handler for execjob_begin events.\n    \"\"\"\n    log_function_name()\n    event = pbs.event()\n    jid = event.job.id\n    uid = get_uid(event.job.euser)\n    log_with_caller(pbs.EVENT_DEBUG4, 'UID is %d' % uid)\n    excl = HookHelper.is_it_exclusive(event.job)\n    data = {\n        'jobid': jid,\n        'uid': uid,\n        'exclusive': excl\n    }\n    url = HookHelper.build_path(resource='job')\n   
 timeout = HookHelper.get_config()['post_timeout']\n    try:\n        r = post(url, json=data, timeout=timeout)\n        r.raise_for_status()\n    except requests.Timeout:\n        log_with_caller(pbs.EVENT_ERROR, 'POST timed out')\n        raise OfflineError('Job POST timed out')\n    except requests.HTTPError:\n        if r.status_code == 400:\n            retry_post(data)\n        else:\n            log_with_caller(pbs.EVENT_ERROR, 'Invalid status code %d' %\n                            r.status_code)\n            raise OfflineError('Job POST encountered invalid status code')\n    log_with_caller(pbs.EVENT_DEBUG, 'Job %s registered' % jid)\n\n\ndef handle_execjob_end():\n    \"\"\"\n    Handler for execjob_end events.\n    \"\"\"\n    log_function_name()\n    jid = pbs.event().job.id\n    url = HookHelper.build_path(resource='job', jobid=jid)\n    timeout = HookHelper.get_config()['delete_timeout']\n    try:\n        r = delete(url, timeout=timeout)\n        r.raise_for_status()\n    except requests.Timeout:\n        log_with_caller(pbs.EVENT_ERROR, 'DELETE timed out')\n        raise RejectError('Job delete timed out')\n    except requests.HTTPError:\n        log_with_caller(pbs.EVENT_ERROR, 'DELETE job failed')\n        raise RejectError('Job delete failed')\n\n    log_with_caller(pbs.EVENT_DEBUG, 'Job %s deleted' % jid)\n\n\ndef main():\n    \"\"\"\n    Main function for execution\n    \"\"\"\n    log_function_name()\n    hostname = pbs.get_local_nodename()\n    # Log the hook event type\n    event = pbs.event()\n\n    handlers = {\n        pbs.EXECJOB_BEGIN: (handle_execjob_begin, OfflineError),\n        pbs.EXECJOB_END: (handle_execjob_end, RejectError)\n    }\n\n    handler, timeout_exc = handlers.get(event.type, (None, None))\n    if not handler:\n        log_with_caller(pbs.EVENT_ERROR, '%s event is not handled by this hook'\n                        % event.type, jobid=False)\n        event.accept()\n    try:\n        handler()\n    except 
KeyboardInterrupt:\n        raise timeout_exc('Handler alarmed')\n\n\nif __name__ == 'builtins':\n    try:\n        main()\n    except OfflineError:\n        # the fail_action will offline the vnodes\n        raise\n    except RejectError:\n        pbs.event().reject()\n"
  },
  {
    "path": "src/modules/python/pbs_hooks/PBS_power.CF",
    "content": "{\n\t\"power_ramp_rate_enable\": false,\n\t\"power_on_off_enable\": false,\n\t\"node_idle_limit\": \"1800\",\n\t\"min_node_down_delay\": \"1800\",\n\t\"max_jobs_analyze_limit\": \"100\",\n\t\"max_concurrent_nodes\": \"5\"\n}\n"
  },
  {
    "path": "src/modules/python/pbs_hooks/PBS_power.HK",
    "content": "type=pbs\nenabled=false\nuser=pbsadmin\nevent=periodic,execjob_prologue,execjob_epilogue,exechost_periodic,exechost_startup,execjob_begin,execjob_end\norder=2000\nalarm=180\nfreq=300\n"
  },
  {
    "path": "src/modules/python/pbs_hooks/PBS_power.PY",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n__doc__ = \"\"\"\nThis hook will activate and deactivate a power profile contained\nin the 'eoe' value for a job.\n\"\"\"\n\nimport datetime\nimport json\nimport os\nimport socket\nimport time\nfrom subprocess import PIPE, Popen\n\nimport pbs\nfrom pbs.v1._pmi_utils import _get_vnode_names, _svr_vnode\n\n\ndef init_power(event):\n    # Get a Power structure and do the connect.  Reject on failure.\n    try:\n        confvar = \"PBS_PMINAME\"\n        name = None\n        if confvar in os.environ:\n            name = os.environ[confvar]\n        power = pbs.Power(name)\n        power.connect()\n    except Exception as e:\n        event.reject(str(e))\n    return power\n\n\ndef vnodes_enabled(job):\n    # see if power operations are allowed on all job vnodes\n    for vn in _get_vnode_names(job):\n        if not _svr_vnode(vn).power_provisioning:\n            pbs.logjobmsg(job.id,\n                          \"power functionality is disabled on vnode %s\" % vn)\n            return False\n    return True\n\n\ndef get_local_node(name):\n    # Get host names from /etc/hosts and return matching name for the MoM\n    try:\n        (hostname, aliaslist, _) = socket.gethostbyname_ex(name)\n    except Exception:\n        return None\n    aliaslist.append(hostname)\n    # Search for possible match in server vnode list.\n    pbsvnodes = dict()\n    for vn in 
pbs.server().vnodes():\n        pbsvnodes[vn.name] = vn\n    for n in aliaslist:\n        if n in pbsvnodes:\n            return pbsvnodes[n]\n    return None\n\n\n# Read the config file in json format\ndef parse_config_file():\n    # Turn everything off by default. These settings may be modified\n    # when the configuration file is read.\n    global pbs_home\n    global pbs_exec\n    global power_ramp_rate_enable\n    global power_on_off_enable\n    global node_idle_limit\n    global min_node_down_delay\n    global max_jobs_analyze_limit\n    global max_concurrent_nodes\n\n    try:\n        # This block will work for PBS versions 13 and later\n        pbs_conf = pbs.get_pbs_conf()\n        pbs_home = pbs_conf['PBS_HOME']\n        pbs_exec = pbs_conf['PBS_EXEC']\n    except Exception:\n        pbs.logmsg(pbs.EVENT_DEBUG,\n                   \"PBS_HOME needs to be defined in the config file\")\n        pbs.logmsg(pbs.EVENT_DEBUG, \"Exiting the power hook\")\n        pbs.event().accept()\n\n    # Identify the config file and read in the data\n    config_file = ''\n    if 'PBS_HOOK_CONFIG_FILE' in os.environ:\n        config_file = os.environ[\"PBS_HOOK_CONFIG_FILE\"]\n    tmpcfg = ''\n    if not config_file:\n        tmpcfg = os.path.join(pbs_home, 'server_priv', 'hooks',\n                              'PBS_power.CF')\n    if os.path.isfile(tmpcfg):\n        config_file = tmpcfg\n    if not config_file:\n        tmpcfg = os.path.join(pbs_home, 'mom_priv', 'hooks',\n                              'PBS_power.CF')\n    if os.path.isfile(tmpcfg):\n        config_file = tmpcfg\n    if not config_file:\n        raise Exception(\"Config file not found\")\n    pbs.logmsg(pbs.EVENT_DEBUG3, \"Config file is %s\" % config_file)\n    try:\n        fd = open(config_file, 'r')\n        config = json.load(fd)\n        fd.close()\n    except IOError:\n        raise Exception(\"I/O error reading config file\")\n    except Exception:\n        raise Exception(\"Error reading config file\")\n\n
    # Assign default values to attributes\n    power_ramp_rate_enable = False\n    power_on_off_enable = False\n    node_idle_limit = 1800\n    min_node_down_delay = 1800\n    max_jobs_analyze_limit = 100\n    max_concurrent_nodes = 10\n\n    # Now assign values read from config file\n    if 'power_on_off_enable' in config:\n        power_on_off_enable = config['power_on_off_enable']\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"power_on_off_enable is set to %s\" %\n                   str(power_on_off_enable))\n    if 'power_ramp_rate_enable' in config:\n        power_ramp_rate_enable = config['power_ramp_rate_enable']\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"power_ramp_rate_enable is set to %s\" %\n                   str(power_ramp_rate_enable))\n    if 'node_idle_limit' in config:\n        node_idle_limit = int(config['node_idle_limit'])\n        if not node_idle_limit or node_idle_limit < 0:\n            node_idle_limit = 1800\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"node_idle_limit is set to %d\" %\n                   node_idle_limit)\n    if 'min_node_down_delay' in config:\n        min_node_down_delay = int(config['min_node_down_delay'])\n        if not min_node_down_delay or min_node_down_delay < 0:\n            min_node_down_delay = 1800\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"min_node_down_delay is set to %d\" %\n                   min_node_down_delay)\n    if 'max_jobs_analyze_limit' in config:\n        max_jobs_analyze_limit = int(config['max_jobs_analyze_limit'])\n        if not max_jobs_analyze_limit or max_jobs_analyze_limit < 0:\n            max_jobs_analyze_limit = 100\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"max_jobs_analyze_limit is set to %d\" %\n                   max_jobs_analyze_limit)\n    if 'max_concurrent_nodes' in config:\n        max_concurrent_nodes = int(config['max_concurrent_nodes'])\n        if not max_concurrent_nodes or max_concurrent_nodes < 0:\n            max_concurrent_nodes = 10\n        pbs.logmsg(pbs.EVENT_DEBUG3, 
\"max_concurrent_nodes is set to %d\" %\n                   max_concurrent_nodes)\n\n\n# Accept if event not serviceable.\nthis_event = pbs.event()\nif this_event.type not in [pbs.EXECJOB_PROLOGUE, pbs.EXECJOB_EPILOGUE,\n                           pbs.EXECJOB_BEGIN, pbs.EXECJOB_END,\n                           pbs.EXECHOST_STARTUP, pbs.EXECHOST_PERIODIC,\n                           pbs.PERIODIC]:\n    pbs.logmsg(pbs.LOG_WARNING,\n               \"Event not serviceable for power provisioning.\")\n    this_event.accept()\n\n\nif this_event.type == pbs.PERIODIC:\n    vnlist = this_event.vnode_list\n    resvlist = this_event.resv_list\n    time_now = time.time()\n\n    # Parse the config file for power attributes\n    try:\n        parse_config_file()\n    except Exception as e:\n        this_event.reject(str(e))\n\n    if power_ramp_rate_enable == 0 and power_on_off_enable == 0:\n        this_event.accept()\n\n    if power_on_off_enable and power_ramp_rate_enable:\n        # Disable ramp rate if power on/off is enabled as well.\n        power_ramp_rate_enable = 0\n        pbs.logmsg(pbs.LOG_WARNING,\n                   \"Hook config: power_on_off_enable is overriding power_ramp_rate_enable\")\n\n    qselect_cmd = os.path.join(pbs_exec, 'bin', 'qselect')\n    qstat_cmd = os.path.join(pbs_exec, 'bin', 'qstat')\n    dtnow = datetime.datetime.now().strftime(\"%m%d%H%M\")\n    qselect_cmd += \" -tt.gt.\" + str(dtnow)\n    try:\n        p = Popen(qselect_cmd, shell=True, stdout=PIPE, stderr=PIPE)\n        (o, e) = p.communicate()\n        if p.returncode or not o:\n            job_list = []\n        else:\n            # decode the subprocess output so job ids are strings, not bytes\n            job_list = o.decode().splitlines()\n    except (OSError, ValueError):\n        job_list = []\n    # Analyze queued jobs and see if any of the nodes are needed in near future\n    exec_vnodes = {}\n    i = 0\n    pattern = '%a %b %d %H:%M:%S %Y'\n    for jobid in job_list:\n        if not jobid:\n            break\n        if i == max_jobs_analyze_limit:\n            break\n
        cmd = qstat_cmd + \" -f -F json \" + jobid\n        try:\n            p = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)\n            (o, e) = p.communicate()\n            if p.returncode:\n                continue\n        except (OSError, ValueError):\n            pbs.logmsg(pbs.EVENT_DEBUG3,\n                       \"Error running qstat command for job %s\" % jobid)\n            continue\n        evnlist = None\n        start_time = 0\n        job_state = None\n        if not o:\n            continue\n        try:\n            out = json.loads(o)\n            job = out['Jobs'][jobid]\n            if 'estimated' in job:\n                est = job['estimated']\n                if 'start_time' in est:\n                    fmttime = est['start_time']\n                    start_time = int(time.mktime(\n                        time.strptime(fmttime, pattern)))\n                if 'exec_vnode' in est:\n                    evnlist = est['exec_vnode']\n            if 'job_state' in job:\n                job_state = job['job_state']\n        except ValueError:\n            pbs.logmsg(pbs.EVENT_DEBUG3,\n                       \"Error reading json output for job %s\" % jobid)\n            continue\n        if job_state == 'Q':\n            if start_time and evnlist:\n                for chunk in evnlist.split(\"+\"):\n                    vn = chunk.split(\":\")[0][1:]\n                    if vn not in exec_vnodes:\n                        exec_vnodes[vn] = {}\n                        exec_vnodes[vn][\"neededby\"] = start_time\n                    elif start_time < exec_vnodes[vn][\"neededby\"]:\n                        exec_vnodes[vn][\"neededby\"] = start_time\n        i += 1\n\n    pbs_conf = pbs.get_pbs_conf()\n    if 'PBS_HOME' in pbs_conf:\n        pbs_home = pbs_conf['PBS_HOME']\n    else:\n        pbs.logmsg(pbs.EVENT_DEBUG,\n                   \"PBS_HOME needs to be defined in the config file\")\n        pbs.logmsg(pbs.EVENT_DEBUG, \"Exiting the 
power hook\")\n        pbs.event().accept()\n\n    # Identify the nodes file and read in the data\n    node_file = ''\n    sleep_node_list = []\n    node_file = os.path.join(pbs_home,\n                             'server_priv', 'hooks', 'tmp', 'pbs_power_nodes_file')\n    if os.path.isfile(node_file) and os.stat(node_file).st_size:\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"pbs_power_nodes_file is %s\" % node_file)\n        try:\n            with open(node_file, 'r') as fd:\n                sleep_node_list = fd.read().split(',')\n        except IOError as e:\n            this_event.reject(str(e))\n\n    nodes = {}\n    for vn in vnlist:\n        vnode = vnlist[vn]\n        host = vnode.resources_available[\"host\"]\n        can_power_off = 0\n        try:\n            if vnode.resources_available[\"PBScraynid\"]:\n                if power_on_off_enable:\n                    if vnode.poweroff_eligible:\n                        can_power_off = vnode.poweroff_eligible\n                else:\n                    # For ramp rate limiting.\n                    can_power_off = 1\n        except Exception:\n            pass\n\n        if host not in nodes:\n            # Initialize the nodes with new host\n            nodes[host] = {}\n            nodes[host][\"can_power_off\"] = can_power_off\n            nodes[host][\"poweroff\"] = 0\n            nodes[host][\"poweron\"] = 0\n            nodes[host][\"vnodes\"] = []\n        nodes[host][\"vnodes\"].append(vnode)\n\n        if can_power_off == 0:\n            # Not allowed to power on/off this node\n            nodes[host][\"can_power_off\"] = 0\n            nodes[host][\"poweroff\"] = 0\n            nodes[host][\"poweron\"] = 0\n\n        rs_list = []\n        if vnode.resv:\n            resv_str = str(vnode.resv)\n            rs_list = resv_str.split(\",\")\n\n        if nodes[host][\"can_power_off\"]:\n            # See if the node is actually free\n            if vnode.state != pbs.ND_FREE:\n                
nodes[host][\"poweroff\"] = 0\n            else:\n                # See if there are any reservations on the vnode\n                # Reservations further in time can be ignored.\n                for resid in rs_list:\n                    resv = resvlist[resid.lstrip()]\n                    rstates = [pbs.RESV_STATE_RUNNING, pbs.RESV_STATE_DELETED,\n                               pbs.RESV_STATE_BEING_ALTERED,\n                               pbs.RESV_STATE_DELETING_JOBS]\n                    if resv.reserve_state in rstates:\n                        nodes[host][\"poweroff\"] = 0\n                        nodes[host][\"can_power_off\"] = 0\n                    if resv.reserve_state == pbs.RESV_STATE_CONFIRMED:\n                        reserve_start = resv.reserve_start\n                        if vn not in exec_vnodes:\n                            exec_vnodes[vn] = {}\n                            exec_vnodes[vn][\"neededby\"] = reserve_start\n                        elif reserve_start < exec_vnodes[vn][\"neededby\"]:\n                            exec_vnodes[vn][\"neededby\"] = reserve_start\n                        # Do not power off a node if it has reservation starting\n                        # in (time_now + min_node_down_delay + 900) seconds\n                        if time_now > (reserve_start - (min_node_down_delay + 900)):\n                            nodes[host][\"poweroff\"] = 0\n                            nodes[host][\"can_power_off\"] = 0\n\n                # Is the node idle enough to put to sleep or power down?\n                if nodes[host][\"can_power_off\"]:\n                    if vnode.last_used_time:\n                        last_used_time = vnode.last_used_time\n                    else:\n                        last_used_time = 0\n                    idle_time = time_now - last_used_time\n                    if node_idle_limit < idle_time:\n                        nodes[host][\"poweroff\"] = 1\n\n        # POWER-ON/RAMP-UP check: See if the 
node is down and\n        # needs to be brought up. Check if the node needs to be\n        # up before this periodic event runs again.\n        if vnode.state == pbs.ND_SLEEP:\n            if power_on_off_enable:\n                # Ignore the nodes in sleep_node_list as they are\n                # being powered up by a previous iteration of the hook.\n                if vn in sleep_node_list:\n                    continue\n            # Look for upcoming reservation starts on the node.\n            for resid in rs_list:\n                resv = resvlist[resid.lstrip()]\n                rstates = [pbs.RESV_STATE_CONFIRMED, pbs.RESV_STATE_DEGRADED]\n                if resv.reserve_state in rstates:\n                    reserve_start = resv.reserve_start\n                    if vn not in exec_vnodes:\n                        exec_vnodes[vn] = {}\n                        exec_vnodes[vn][\"neededby\"] = reserve_start\n                    elif reserve_start < exec_vnodes[vn][\"neededby\"]:\n                        exec_vnodes[vn][\"neededby\"] = reserve_start\n            if vn in exec_vnodes and exec_vnodes[vn][\"neededby\"]:\n                neededby = exec_vnodes[vn][\"neededby\"]\n            else:\n                neededby = 0\n            # Check if node is needed in near future.\n            # Assuming node will take about 900 seconds to come up.\n            if neededby and neededby < (time_now + 900):\n                nodes[host][\"poweron\"] = 1\n            # See when node was put down\n            if vnode.last_state_change_time:\n                last_state_change = vnode.last_state_change_time\n            else:\n                last_state_change = 0\n            # Do not power on a node if it went down within less than\n            # min_node_down_delay seconds\n            if power_on_off_enable and last_state_change:\n                if (time_now - last_state_change) < int(min_node_down_delay):\n                    nodes[host][\"poweron\"] = 0\n\n
    poweroff_vnlist = []\n    poweron_vnlist = []\n    i = 0\n    j = 0\n    for n in nodes:\n        # Check if any nodes need to be ramped-down/powered-off.\n        if nodes[n][\"poweroff\"] == 1 and i < max_concurrent_nodes:\n            poweroff_vnlist.append(n)\n            i += 1\n        # Check if any nodes need to be ramped-up/powered-on.\n        elif nodes[n][\"poweron\"] == 1 and j < max_concurrent_nodes:\n            poweron_vnlist.append(n)\n            j += 1\n        if i == max_concurrent_nodes and j == max_concurrent_nodes:\n            break\n\n    if poweroff_vnlist or poweron_vnlist or sleep_node_list:\n        power = init_power(this_event)\n    else:\n        this_event.accept()\n    try:\n        if power_ramp_rate_enable:\n            # Ramp rate limiting\n            if poweroff_vnlist:\n                power.ramp_down(poweroff_vnlist)\n            if poweron_vnlist:\n                power.ramp_up(poweron_vnlist)\n        else:\n            # Power on/off nodes\n            if poweroff_vnlist:\n                power.power_off(poweroff_vnlist)\n            if poweron_vnlist:\n                power.power_on(poweron_vnlist)\n            ready_nodes = set()\n            if sleep_node_list:\n                ready_nodes = power.power_status(sleep_node_list)\n                # For the nodes which are up, remove ND_SLEEP\n                for host in ready_nodes:\n                    for vn in nodes[host][\"vnodes\"]:\n                        prev_state = vn.state\n                        vn.state = prev_state & ~(pbs.ND_SLEEP)\n                        vn.last_used_time = time_now\n            # Nodes which are still booting are written to the\n            # node file for status check in next iteration.\n            if sleep_node_list or poweron_vnlist:\n                if poweron_vnlist:\n                    sleeping_nodes = poweron_vnlist\n                else:\n                    sleeping_nodes = []\n                # Look for still booting nodes from 
previous iteration.\n                for n in sleep_node_list:\n                    if n not in ready_nodes:\n                        sleeping_nodes.append(n)\n                data = ','.join(sleeping_nodes)\n                try:\n                    with open(node_file, 'w') as fd:\n                        fd.write(data)\n                except IOError as e:\n                    power.disconnect()\n                    this_event.reject(str(e))\n        # Mark nodes to ND_Sleep\n        for nd in poweroff_vnlist:\n            for vn in nodes[nd][\"vnodes\"]:\n                vn.state = pbs.ND_SLEEP\n        # Mark nodes to ND_free\n        if power_ramp_rate_enable:\n            for nd in poweron_vnlist:\n                for vn in nodes[nd][\"vnodes\"]:\n                    vn.state = pbs.ND_FREE\n                    vn.last_used_time = time_now\n        power.disconnect()\n        this_event.accept()\n    except Exception as e:\n        power.disconnect()\n        this_event.reject(str(e))\n\n\n# Set eoe values for my node\nif this_event.type == pbs.EXECHOST_STARTUP:\n    from pbs.v1._pmi_utils import _is_node_provisionable\n\n    # Don't connect if the server or sched is running.\n    if not _is_node_provisionable():\n        pbs.logmsg(pbs.LOG_DEBUG,\n                   \"Provisioning cannot be enabled on this host\")\n        this_event.accept()\n    power = init_power(this_event)\n    profiles = power.query(pbs.Power.QUERY_PROFILE)\n    if profiles is not None:\n        me = pbs.get_local_nodename()\n        this_event.vnode_list[me].resources_available[\n            \"eoe\"] = power._map_profile_names(profiles)\n    power.disconnect()\n    this_event.accept()\n\n\n# Gather energy usage for all jobs\nif this_event.type == pbs.EXECHOST_PERIODIC:\n    # Check if any jobs are running\n    if len(this_event.job_list) == 0:\n        this_event.accept()\n\n    power = init_power(this_event)\n    for jobid in this_event.job_list:\n        # set energy usage\n        
job = this_event.job_list[jobid]\n        # skip any jobs that MOM is not MS\n        if not job.in_ms_mom():\n            continue\n        # skip if vnodes have power_provisioning=0\n        if not vnodes_enabled(job):\n            continue\n\n        try:\n            usage = power.get_usage(job)\n            if usage is not None:\n                job.resources_used[\"energy\"] = usage\n        except Exception as e:\n            pbs.logmsg(pbs.LOG_ERROR, str(e))\n    power.disconnect()\n    this_event.accept()\n\n# From this point on, the event will have a job.\nthis_job = this_event.job\n\nif this_event.type == pbs.EXECJOB_BEGIN:\n    me = pbs.get_local_nodename()\n    try:\n        if not _svr_vnode(me).power_provisioning:\n            this_event.accept()\n    except Exception:\n        # Try with different hostname\n        vn = get_local_node(me)\n        if not _svr_vnode(vn.name).power_provisioning:\n            this_event.accept()\n    requested_profile = str(this_job.schedselect).partition(\n        'eoe=')[2].partition('+')[0].partition(':')[0]\n    if requested_profile != \"\":\n        try:\n            this_event.vnode_list[me].current_eoe = requested_profile\n        except (KeyError, ValueError):\n            pass\n    this_event.accept()\nif this_event.type == pbs.EXECJOB_END:\n    me = pbs.get_local_nodename()\n    try:\n        this_event.vnode_list[me].current_eoe = None\n    except (KeyError, ValueError):\n        pass\n\n    power = init_power(this_event)\n    try:\n        power.deactivate_profile(this_job)\n    except Exception as e:\n        pbs.logjobmsg(this_job.id, str(e))\n    power.disconnect()\n    this_event.accept()\n\n# No further processing is needed if we are not mother superior.\nif not this_job.in_ms_mom():\n    this_event.accept()\n\n# Don't do anything if power_provisioning=0\nif not vnodes_enabled(this_job):\n    this_event.accept()\n\n# Was an EOE requested?\nrequested_profile = str(this_job.schedselect).partition(\n    
'eoe=')[2].partition('+')[0].partition(':')[0]\nif requested_profile == \"\":\n    this_event.accept()\n\nif this_event.type == pbs.EXECJOB_PROLOGUE:\n    power = init_power(this_event)\n    try:\n        power.activate_profile(requested_profile, this_job)\n        power.disconnect()\n    except Exception as e:\n        power.disconnect()\n        this_event.reject(str(e))\nelif this_event.type == pbs.EXECJOB_EPILOGUE:\n    power = init_power(this_event)\n    # set energy usage\n    try:\n        usage = power.get_usage(this_job)\n        if usage is not None:\n            this_job.resources_used[\"energy\"] = usage\n    except Exception as e:\n        pbs.logjobmsg(this_job.id, str(e))\n    power.disconnect()\n\nthis_event.accept()\n"
  },
  {
    "path": "src/modules/python/pbs_hooks/PBS_xeon_phi_provision.HK",
    "content": "type=pbs\nenabled=false\nuser=pbsadmin\nevent=provision\norder=1\nalarm=1800\n"
  },
  {
    "path": "src/modules/python/pbs_hooks/PBS_xeon_phi_provision.PY",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nimport json as json\nimport os\nimport subprocess\nimport sys\nimport time\n\nimport pbs\n\n# Section to be moved to config file once available for provision hook\ncapmc_dir = '/opt/cray/capmc/default/bin'\npower_up_checks = 20\npower_up_sleep = 60\n\n# Get the information about the node and aoe to provision\ne = pbs.event()\nvnode = e.vnode\naoe = e.aoe\n\nnuma_cfg, cache_percent = aoe.split('_')\nnid = vnode.split('_')[1]\n\n# Add capmc_dir to path\nos.environ['PATH'] = capmc_dir + \":\" + os.environ['PATH']\n\n# capmc commands to configure the knl nodes\ncmd_set_numa_cfg = ['capmc', 'set_numa_cfg', '--nids', '%s' % nid,\n                    '--mode', '%s' % numa_cfg]\ncmd_set_mcdram_cfg = ['capmc', 'set_mcdram_cfg', '--nids',\n                      '%s' % nid, '--mode', '%s' % cache_percent]\ncmd_node_reinit = ['capmc', 'node_reinit', '--nids', '%s' % nid]\ncmd_node_status = ['capmc', 'node_status', '--nids', '%s' % nid]\ncmd_list = (cmd_set_numa_cfg, cmd_set_mcdram_cfg, cmd_node_reinit)\n\npbs.logmsg(pbs.EVENT_DEBUG3, \"vnode: %s\" % vnode)\npbs.logmsg(pbs.EVENT_DEBUG3, \"aoe: %s\" % aoe)\npbs.logmsg(pbs.EVENT_DEBUG3, \"numa cmd: %s\" % cmd_set_numa_cfg)\npbs.logmsg(pbs.EVENT_DEBUG3, \"cache cmd: %s\" % cmd_set_mcdram_cfg)\npbs.logmsg(pbs.EVENT_DEBUG3, \"power reinit cmd: %s\" % cmd_node_reinit)\n\ntry:\n    for cmd in cmd_list:\n        process = 
subprocess.Popen(cmd, stdout=subprocess.PIPE,\n                                   stderr=subprocess.PIPE)\n        (output, err) = process.communicate()\n        if process.returncode != 0:\n            error_msg = \"Error while running %s: %s\" % (cmd, err.strip())\n            e.reject(error_msg, process.returncode)\n        else:\n            node_status = json.loads(output)\n            # err_msg value will be Success\n            pbs.logmsg(pbs.EVENT_DEBUG3, \"%s return code: %s err_msg: %s\" % (\n                cmd, process.returncode, node_status['err_msg']))\n\n    # Nodes can take upwards of 15 minutes to come back online\n    poll_val = 0\n    poll_cnt = 0\n    while poll_val < 1 and poll_cnt < power_up_checks:\n        time.sleep(power_up_sleep)\n        process = subprocess.Popen(cmd_node_status, stdout=subprocess.PIPE,\n                                   stderr=subprocess.PIPE)\n        (output, err) = process.communicate()\n        if process.returncode != 0:\n            error_msg = \"Error while running %s: %s\" % (cmd_node_status, err.strip())\n            e.reject(error_msg, process.returncode)\n        node_status = json.loads(output)\n        pbs.logmsg(pbs.EVENT_DEBUG3,\n                   \"poll_cnt: %d node_status: %s\" % (poll_cnt, node_status))\n        if \"ready\" in node_status:\n            if int(nid) in node_status['ready']:\n                pbs.logmsg(pbs.EVENT_DEBUG3,\n                           \"Node was successfully powered on\")\n                poll_val = 1\n        poll_cnt += 1\n\n    if poll_val != 1:\n        e.reject(\"Provisioning with reboot failed\", 211)\n    else:\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"Ready to accept\")\n        e.accept(0)\nexcept OSError as err:\n    e.reject(\"Caught exception : %s\" % (str(err)), err.errno)\nexcept Exception as err:\n    # ValueError and other exceptions carry no errno attribute\n    e.reject(\"Caught exception : %s\" % (str(err)))\n"
  },
  {
    "path": "src/modules/python/pbs_v1_module_init.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include \"pbs_config.h\"\n#include \"pbs_ifl.h\"\n#include \"pbs_internal.h\"\n#include \"pbs_version.h\"\n#include \"pbs_error.h\"\n#include \"attribute.h\"\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"server.h\"\n#include \"pbs_nodes.h\"\n#include \"pbs_sched.h\"\n#include <pbs_python_private.h>\n#include <Python.h>\n\n#define PBS_V1_COMMON_MODULE_DEFINE_STUB_FUNCS 1\n#include \"pbs_v1_module_common.i\"\n\n/* #define MODULE_NAME \"_pbs_v1_module\" */\n#define MODULE_NAME \"pbs_python\"\n\nPyMODINIT_FUNC\nPyInit__pbs_v1(void)\n{\n\tint i;\n\tPyObject *module = NULL;\n\tPyObject *py_sys_modules = NULL;\n\n\tmemset(&server, 0, sizeof(server));\n\n\tif (set_msgdaemonname(MODULE_NAME)) {\n\t\treturn PyErr_Format(PyExc_MemoryError,\n\t\t\t\t    \"set_msgdaemonname() failed to allocate memory\");\n\t}\n\n\tif (pbs_loadconf(0) == 0) {\n\t\treturn PyErr_Format(PyExc_Exception, \"Failed to load pbs.conf!\");\n\t}\n\n\tset_log_conf(pbs_conf.pbs_leaf_name, pbs_conf.pbs_mom_node_name,\n\t\t     pbs_conf.locallog, pbs_conf.syslogfac,\n\t\t     pbs_conf.syslogsvr, pbs_conf.pbs_log_highres_timestamp);\n\n\tpbs_python_set_use_static_data_value(0);\n\n\t/* by default, server_name is what is set in /etc/pbs.conf */\n\tstrncpy(server_name, pbs_conf.pbs_server_name, PBS_MAXSERVERNAME);\n\n\t/* determine the actual server name 
*/\n\tpbs_server_name = pbs_default();\n\tif ((!pbs_server_name) || (*pbs_server_name == '\\0')) {\n\t\treturn PyErr_Format(PyExc_Exception,\n\t\t\t\t    \"pbs_default() failed to acquire the server name\");\n\t}\n\n\t/* determine the server host name */\n\tif (get_fullhostname(pbs_server_name, server_host, PBS_MAXSERVERNAME) != 0) {\n\t\treturn PyErr_Format(PyExc_Exception,\n\t\t\t\t    \"get_fullhostname() failed to acquire the server host name\");\n\t}\n\n\tif ((job_attr_idx = cr_attrdef_idx(job_attr_def, JOB_ATR_LAST)) == NULL) {\n\t\treturn PyErr_Format(PyExc_Exception,\n\t\t\t\t    \"Failed creating job attribute search index\");\n\t}\n\tif ((node_attr_idx = cr_attrdef_idx(node_attr_def, ND_ATR_LAST)) == NULL) {\n\t\treturn PyErr_Format(PyExc_Exception,\n\t\t\t\t    \"Failed creating node attribute search index\");\n\t}\n\tif ((que_attr_idx = cr_attrdef_idx(que_attr_def, QA_ATR_LAST)) == NULL) {\n\t\treturn PyErr_Format(PyExc_Exception,\n\t\t\t\t    \"Failed creating queue attribute search index\");\n\t}\n\tif ((svr_attr_idx = cr_attrdef_idx(svr_attr_def, SVR_ATR_LAST)) == NULL) {\n\t\treturn PyErr_Format(PyExc_Exception,\n\t\t\t\t    \"Failed creating server attribute search index\");\n\t}\n\tif ((sched_attr_idx = cr_attrdef_idx(sched_attr_def, SCHED_ATR_LAST)) == NULL) {\n\t\treturn PyErr_Format(PyExc_Exception,\n\t\t\t\t    \"Failed creating sched attribute search index\");\n\t}\n\tif ((resv_attr_idx = cr_attrdef_idx(resv_attr_def, RESV_ATR_LAST)) == NULL) {\n\t\treturn PyErr_Format(PyExc_Exception,\n\t\t\t\t    \"Failed creating resv attribute search index\");\n\t}\n\tif (cr_rescdef_idx(svr_resc_def, svr_resc_size) != 0) {\n\t\treturn PyErr_Format(PyExc_Exception,\n\t\t\t\t    \"Failed creating resc definition search index\");\n\t}\n\n\t/* initialize the pointers in the resource_def array */\n\tfor (i = 0; i < (svr_resc_size - 1); ++i) {\n\t\tsvr_resc_def[i].rs_next = &svr_resc_def[i + 1];\n\t}\n\n\t/* set python interp data 
*/\n\tsvr_interp_data.init_interpreter_data = NULL;\n\tsvr_interp_data.destroy_interpreter_data = NULL;\n\tsvr_interp_data.interp_started = 1;\n\tsvr_interp_data.pbs_python_types_loaded = 0;\n\tif (gethostname(svr_interp_data.local_host_name, PBS_MAXHOSTNAME) == -1) {\n\t\treturn PyErr_Format(PyExc_Exception,\n\t\t\t\t    \"gethostname() failed to acquire the local host name\");\n\t}\n\tsvr_interp_data.daemon_name = strdup(MODULE_NAME);\n\tsvr_interp_data.data_initialized = 1;\n\n\t/* construct _pbs_v1 module */\n\tmodule = pbs_v1_module_init();\n\tif (module == NULL) {\n\t\treturn PyErr_Format(PyExc_Exception,\n\t\t\t\t    PBS_PYTHON_V1_MODULE_EXTENSION_NAME\n\t\t\t\t    \" module initialization failed\");\n\t}\n\n\t/*\n\t * get a borrowed reference to sys.modules and add our module to it in order\n\t * to prevent an import cycle while loading the PBS python types\n\t */\n\tpy_sys_modules = PyImport_GetModuleDict();\n\tif (PyDict_SetItemString(py_sys_modules,\n\t\t\t\t PBS_PYTHON_V1_MODULE_EXTENSION_NAME, module)) {\n\t\treturn PyErr_Format(PyExc_Exception,\n\t\t\t\t    \"failed to add the \" PBS_PYTHON_V1_MODULE_EXTENSION_NAME\n\t\t\t\t    \" module to sys.modules\");\n\t}\n\n\t/* load python types into the _pbs_v1 module */\n\tif (pbs_python_load_python_types(&svr_interp_data) == -1) {\n\t\tPyDict_DelItemString(py_sys_modules,\n\t\t\t\t     PBS_PYTHON_V1_MODULE_EXTENSION_NAME);\n\t\treturn PyErr_Format(PyExc_Exception,\n\t\t\t\t    \"pbs_python_load_python_types() failed to load Python types\");\n\t}\n\n\treturn module;\n}\n"
  },
  {
    "path": "src/modules/python/setup.cfg",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n[global]\n\n[install]\noptimize = 2\n"
  },
  {
    "path": "src/modules/python/setup.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nfrom distutils.core import setup\nfrom distutils.core import DEBUG\n\npackage_version = \"1.0\"\npackage_name = \"pbs\"\npackage_src_root = \"pbs\"\npackages = []\npackage_dir = {}\n\npackages.extend([\n                    package_name,\n                    \"%s.v1\" % (package_name,)\n                ])\n\npackage_dir.update({\n                    package_name : package_src_root\n                  })\n\n#: ############################################################################\n#:             INVOKE setup (aka MAIN Program)\n#: ############################################################################\n\n#: build the setup keyword argument list.\nsetup(\n                name                             = package_name,\n                version                          = package_version,\n                maintainer                       = \"Altair Engineering\",\n                maintainer_email                 = \"support@altair.com\",\n                author                           = \"Altair\",\n                author_email                     = \"support@altair.com\",\n                url                              = \"http://www.altair.com\",\n                download_url                     = \"http://www.altair.com\",\n                license                          = \"Proprietary\",\n                platforms                        = 
[\"any\"],\n                packages                         = packages,\n                package_dir                      = package_dir,\n)\n"
  },
  {
    "path": "src/mom_rcp/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nsbin_PROGRAMS = pbs_rcp\n\npbs_rcp_CPPFLAGS = \\\n\t-I$(top_srcdir)/src/include \\\n\t@KRB5_CFLAGS@\n\npbs_rcp_LDADD = @socket_lib@\n\npbs_rcp_SOURCES = \\\n\textern.h \\\n\tpathnames.h \\\n\tpbs_stat.h \\\n\trcp.c \\\n\treplace.c \\\n\tutil.c\n"
  },
  {
    "path": "src/mom_rcp/README",
"content": "MOM's rcp\n\n1. What is this rcp stuff?\n\nMost of the contents of this directory is the source code\nfor the rcp(1) command.  This source is from the bsd4.4-Lite\ndistribution.  It is copyrighted by UCB as noted in the source\nfiles.  This code has been slightly modified in order to have\nit compile on systems other than bsd4.4;  note the liberal\nuse of functions such as vwarnx() and snprintf() not found in\nPOSIX.   The copyright, reproduced below, clearly grants the right\nto modify and redistribute the source.\n\n * Copyright (c) 1992, 1993\n *      The Regents of the University of California.  All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions\n * are met:\n * 1. Redistributions of source code must retain the above copyright\n *    notice, this list of conditions and the following disclaimer.\n * 2. Redistributions in binary form must reproduce the above copyright\n *    notice, this list of conditions and the following disclaimer in the\n *    documentation and/or other materials provided with the distribution.\n * 3. All advertising materials mentioning features or use of this software\n *    must display the following acknowledgement:\n *      This product includes software developed by the University of\n *      California, Berkeley and its contributors.\n * 4. Neither the name of the University nor the names of its contributors\n *    may be used to endorse or promote products derived from this software\n *    without specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND\n * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED.  
IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE\n * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS\n * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\n * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\n * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\n * SUCH DAMAGE.\n\n\n2. Why do we need it?\n\nWithin PBS, there are three cases in which MOM must move files\nbetween her machine and some other:\n\ta. Pre-execution stage in of files.\n\tb. Post-execution stage out of files.\n\tc. Post-execution return of the job's standard output and\n\t   standard error.\n\nThe PBS project did not wish to be dependent on NFS, AFS, or any\nother distributed file system in order to support file delivery.\nNor did we wish to restrict the source/target of file movement to\nthose systems with a PBS server.  This ruled out using the \"job\"\nprotocol as a file transport.  Ftp(1) and ftam require the user's\npassword.  We did not wish to require that knowledge.  Thus rcp(1)\nwas selected as the transport method.   MOM uses the system(3)\nlibrary routine to execute the rcp command.\n\nHowever, many rcp implementations come with a serious flaw.  They\nmay exit and return an exit status of zero (0) when the file was\nnot delivered.  If this happens, MOM would believe that the file\nwas delivered when it was not.\n\nOne solution would have been to implement a new copy utility for MOM\nvery similar to rcp.  But this would have required its installation\non every system to/from which the user may wish to move files.  Rather\nthan duplicate rcp, let's just fix it.  As only the rcp used by MOM\nmust be \"fixed\", the PBS team opted to provide a version of rcp that\nworks correctly.   
The bsd4.4-Lite version was chosen because of the\nfreedom to copy and modify it granted by its copyright.\n\n\n3. How is it used?\n\nThe supplied rcp source is compiled and the program is named\n\"pbs_rcp\" in order to reduce the level of confusion of having\ntwo \"rcp\"s installed on the system.  It is installed in the same\ndirectory as MOM (pbs_mom).\n\nWhen MOM invokes pbs_rcp, MOM has forked a child which has set its\neffective and real uid to that of the user on whose behalf MOM\nis operating.  This child of MOM, as the user, will use system(3)\nto fork a shell and execute pbs_rcp.  The path to pbs_rcp\nis specified in building src/mom/requests.c and contains the directory\nwhere MOM is (will be) installed.\n\nPbs_rcp, as with normal rcp, must be installed \"setuid\" and owned by root.\n"
  },
  {
    "path": "src/mom_rcp/extern.h",
    "content": "/*-\n * Copyright (c) 1992, 1993\n *\tThe Regents of the University of California.  All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions\n * are met:\n * 1. Redistributions of source code must retain the above copyright\n *    notice, this list of conditions and the following disclaimer.\n * 2. Redistributions in binary form must reproduce the above copyright\n *    notice, this list of conditions and the following disclaimer in the\n *    documentation and/or other materials provided with the distribution.\n * 3. All advertising materials mentioning features or use of this software\n *    must display the following acknowledgement:\n *\tThis product includes software developed by the University of\n *\tCalifornia, Berkeley and its contributors.\n * 4. Neither the name of the University nor the names of its contributors\n *    may be used to endorse or promote products derived from this software\n *    without specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND\n * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED.  
IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE\n * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS\n * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\n * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\n * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\n * SUCH DAMAGE.\n *\n *\t@(#)extern.h\t8.1 (Berkeley) 5/31/93\n */\n/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\ntypedef struct {\n\tint cnt;\n\tchar *buf;\n} BUF;\n\nextern int iamremote;\n\nBUF *allocbuf(BUF *, int, int);\nchar *colon(char *);\nvoid lostconn(int);\nvoid nospace(void);\nint okname(char *);\nvoid run_err(const char *, ...);\nint susystem(char *, uid_t, char *);\nvoid verifydir(char *);\nchar *strerror(int);\nvoid errx(int err, const char *fmt, ...);\nvoid err(int val, char *str);\nvoid warnx(const char *fmt, ...);\n"
  },
  {
    "path": "src/mom_rcp/pathnames.h",
    "content": "/*\n * Copyright (c) 1989, 1993\n *\tThe Regents of the University of California.  All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions\n * are met:\n * 1. Redistributions of source code must retain the above copyright\n *    notice, this list of conditions and the following disclaimer.\n * 2. Redistributions in binary form must reproduce the above copyright\n *    notice, this list of conditions and the following disclaimer in the\n *    documentation and/or other materials provided with the distribution.\n * 3. All advertising materials mentioning features or use of this software\n *    must display the following acknowledgement:\n *\tThis product includes software developed by the University of\n *\tCalifornia, Berkeley and its contributors.\n * 4. Neither the name of the University nor the names of its contributors\n *    may be used to endorse or promote products derived from this software\n *    without specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND\n * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED.  
IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE\n * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS\n * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\n * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\n * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\n * SUCH DAMAGE.\n *\n *\t@(#)pathnames.h\t8.1 (Berkeley) 5/31/93\n */\n/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifdef WIN32\n\n#define _PATH_CP \"copy\"\n#define _PATH_BSHELL \"cmd\"\n#define _PATH_RSH \"\\\\winnt\\\\system32\\\\rsh.exe\"\n\n#else\n\n#define _PATH_CP \"/bin/cp\"\n#define _PATH_BSHELL \"/bin/sh\"\n#define _PATH_RSH \"rsh\"\n\n#endif\n"
  },
  {
    "path": "src/mom_rcp/pbs_stat.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <Long.h>\n#include <sys/stat.h>\n#include <sys/types.h>\n\n/*  This header file consist of new definition for Macro pbs_stat_struct,\n *  pbs_stat(const char * , struct pbs_stat_struct * ),  INT64  used\n *  in  Windows.\n */\n\n#ifdef WIN32\n#define pbs_stat(path, buffer) _stati64(path, buffer)\ntypedef struct _stati64 pbs_stat_struct;\ntypedef INT64 off_t_pbs;\ntypedef INT64 int_pbs;\n#else\n#define pbs_stat(path, buffer) stat(path, buffer)\ntypedef struct stat pbs_stat_struct;\ntypedef off_t off_t_pbs;\ntypedef long int_pbs;\n#endif\n"
  },
  {
    "path": "src/mom_rcp/rcp.c",
    "content": "/*\n * Copyright (c) 1983, 1990, 1992, 1993\n *\tThe Regents of the University of California.  All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions\n * are met:\n * 1. Redistributions of source code must retain the above copyright\n *    notice, this list of conditions and the following disclaimer.\n * 2. Redistributions in binary form must reproduce the above copyright\n *    notice, this list of conditions and the following disclaimer in the\n *    documentation and/or other materials provided with the distribution.\n * 3. All advertising materials mentioning features or use of this software\n *    must display the following acknowledgement:\n *\tThis product includes software developed by the University of\n *\tCalifornia, Berkeley and its contributors.\n * 4. Neither the name of the University nor the names of its contributors\n *    may be used to endorse or promote products derived from this software\n *    without specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND\n * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED.  
IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE\n * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS\n * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\n * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\n * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\n * SUCH DAMAGE.\n */\n/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * Copyright (c) 1983, 1990, 1992, 1993\n * The Regents of the University of California.  All rights reserved.\n *\n * rcp.c 8.2 (Berkeley) 4/2/94\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/param.h>\n#include <sys/time.h>\n#include <sys/socket.h>\n#include <netinet/in.h>\n#include <netinet/in_systm.h>\n#include <netinet/ip.h>\n\n#include <ctype.h>\n#include <dirent.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <netdb.h>\n#include <pwd.h>\n#include <signal.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <unistd.h>\n\n#include \"pathnames.h\"\n#include \"extern.h\"\n#include \"pbs_version.h\"\n#include \"pbs_stat.h\"\n#include \"libutil.h\"\n#include \"pbs_ifl.h\"\n#include \"pbs_internal.h\"\n\n#ifdef USELOG\n#include <syslog.h>\n#include <arpa/inet.h>\n#endif /* USELOG */\n/**\n * @file\trcp.c\n */\nextern int setresuid(uid_t, uid_t, uid_t);\n\n#ifdef KERBEROS\n#include <kerberosIV/des.h>\n#include <kerberosIV/krb.h>\n\nchar dst_realm_buf[REALM_SZ];\nchar *dest_realm = NULL;\nint use_kerberos = 1;\nCREDENTIALS cred;\nKey_schedule schedule;\nextern char *krb_realmofhost();\n#ifdef CRYPT\nint doencrypt = 0;\n#define OPTIONS \"dfKk:prtx-:\"\n#else\n#define OPTIONS \"dfKk:prt-:\"\n#endif\n#else\n#define OPTIONS 
\"Edfprt-:\"\n#endif\n\nchar *credb = NULL;\nsize_t credl = 0;\nstruct passwd *pwd;\nu_short port;\nuid_t userid;\nint errs, rem;\nint pflag, iamremote, iamrecursive, targetshouldbedirectory;\n\n#ifdef WIN32\nstruct _utimbuf times;\n#endif\n\n#define CMDNEEDS 64\nchar cmd[CMDNEEDS]; /* must hold \"rcp -r -p -d\\0\" */\n\n#ifdef KERBEROS\nint kerberos(char **, char *, char *, char *);\nvoid oldw(const char *, ...);\n#endif\nint response(void);\nvoid rsource(char *, pbs_stat_struct *);\nvoid sink(int, char *[]);\nvoid source(int, char *[]);\nvoid tolocal(int, char *[]);\nvoid toremote(char *, int, char *[]);\nvoid usage(void);\n\n#ifdef USELOG\n#define exit use_exit\nstatic void use_logusage();\nstatic void use_prep_timer();\nstatic float use_get_wtime(); /* wall clock time */\nstatic void use_tvsub();\nstatic void use_exit();\nstruct hostent *gethostbyname();\nstruct hostent *gethostbyaddr();\nstruct timeval use_time0; /* Time at which timing started */\nstruct hostent *use_host_rec;\nstruct sockaddr_in use_sock_rec;\nint use_namelen;\nchar use_message[160];\nchar use_host[50];\nchar use_user[20];\nchar use_direction[2]; /* Put/Send or Get/Receive a file */\nfloat use_size = 0.0;  /* Size of transfer in MBytes */\nfloat use_rate = 0.1;  /* MBytes/Second */\nint use_status = 0;\nint use_filec = 0;\nfloat use_wctime = 0.1;\nint use_neterr = 0;\n#endif /* USELOG */\n\nint\nmain(int argc, char *argv[])\n{\n\tstruct servent *sp;\n\tint ch, fflag, tflag;\n\tchar *targ, *shell;\n\textern int optind;\n#ifdef WIN32\n\tDWORD cnt;\n\tDWORD a_cnt = 0;\n#else\n\tsize_t cnt;\n\tsize_t a_cnt = 0;\n#endif\n\n#ifdef USELOG\n\tuse_prep_timer();\n\tstrcpy(use_user, \"no_user\");\n\tstrcpy(use_host, \"no_host\");\n\tstrcpy(use_direction, \"E\"); /* default is Error */\n#endif\t\t\t\t    /* USELOG */\n\n\t/*the real deal or output pbs_version and exit?*/\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tfflag = tflag = 0;\n\twhile ((ch = getopt(argc, argv, OPTIONS)) != EOF)\n\t\tswitch (ch) { 
/* User-visible flags. */\n\t\t\tcase 'K':\n#ifdef KERBEROS\n\t\t\t\tuse_kerberos = 0;\n#endif\n\t\t\t\tbreak;\n#ifdef KERBEROS\n\t\t\tcase 'k':\n\t\t\t\tdest_realm = dst_realm_buf;\n\t\t\t\t(void) strncpy(dst_realm_buf, optarg, REALM_SZ);\n\t\t\t\tbreak;\n#ifdef CRYPT\n\t\t\tcase 'x':\n\t\t\t\tdoencrypt = 1;\n\t\t\t\t/* des_set_key(cred.session, schedule); */\n\t\t\t\tbreak;\n#endif\n#endif\n\t\t\tcase 'p':\n\t\t\t\tpflag = 1;\n\t\t\t\tbreak;\n\t\t\tcase 'r':\n\t\t\t\tiamrecursive = 1;\n\t\t\t\tbreak;\n\t\t\t\t/* Server options. */\n\t\t\tcase 'd':\n\t\t\t\ttargetshouldbedirectory = 1;\n\t\t\t\tbreak;\n\t\t\tcase 'f': /* \"from\" */\n\t\t\t\tiamremote = 1;\n\t\t\t\tfflag = 1;\n\t\t\t\tbreak;\n\t\t\tcase 't': /* \"to\" */\n\t\t\t\tiamremote = 1;\n\t\t\t\ttflag = 1;\n\t\t\t\tbreak;\n\t\t\tcase 'E': /* \"encrypted password\" */\n\n\t\t\t\tcnt = sizeof(size_t);\n#ifdef WIN32\n\t\t\t\tReadFile(GetStdHandle(STD_INPUT_HANDLE),\n\t\t\t\t\t (char *) &credl, cnt, &a_cnt, NULL);\n#else\n\t\t\t\ta_cnt = fread((char *) &credl, sizeof(char), cnt,\n\t\t\t\t\t      stdin);\n#endif\n\t\t\t\tif (cnt != a_cnt) {\n\t\t\t\t\tfprintf(stderr,\n\t\t\t\t\t\t\"failed to read in credlen. 
expect %ld got %ld\",\n#ifdef NAS /* localmod 005 */\n\t\t\t\t\t\t(long) cnt, (long) a_cnt);\n#else\n\t\t\t\t\t\tcnt, a_cnt);\n#endif /* localmod 005 */\n\t\t\t\t\texit(1);\n\t\t\t\t}\n\n\t\t\t\tif (credl == 0) {\n\t\t\t\t\tfprintf(stderr, \"credlen=0\\n\");\n\t\t\t\t\texit(1);\n\t\t\t\t}\n\n\t\t\t\tcredb = (char *) malloc(credl);\n\t\t\t\tif (credb == NULL) {\n\t\t\t\t\tfprintf(stderr, \"failed to malloc cred buf...\");\n\t\t\t\t\texit(1);\n\t\t\t\t}\n\n\t\t\t\ta_cnt = 0;\n#ifdef WIN32\n\t\t\t\tReadFile(GetStdHandle(STD_INPUT_HANDLE), (char *) credb,\n\t\t\t\t\t credl, &a_cnt, NULL);\n#else\n\t\t\t\ta_cnt = fread((char *) credb, sizeof(char), credl,\n\t\t\t\t\t      stdin);\n#endif\n\n\t\t\t\tif (credl != a_cnt) {\n\t\t\t\t\tfprintf(stderr,\n\t\t\t\t\t\t\"failed to read in cred...expect %ld got %ld\\n\",\n#ifdef NAS /* localmod 005 */\n\t\t\t\t\t\t(long) credl, (long) a_cnt);\n#else\n\t\t\t\t\t\tcredl, a_cnt);\n#endif /* localmod 005 */\n\t\t\t\t\texit(-1);\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase '?':\n\t\t\tdefault:\n\t\t\t\tusage();\n\t\t}\n\targc -= optind;\n\targv += optind;\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n#ifdef KERBEROS\n\tif (use_kerberos) {\n#ifdef CRYPT\n\t\tshell = doencrypt ? \"ekshell\" : \"kshell\";\n#else\n\t\tshell = \"kshell\";\n#endif\n\t\tif ((sp = getservbyname(shell, \"tcp\")) == NULL) {\n\t\t\tuse_kerberos = 0;\n\t\t\toldw(\"can't get entry for %s/tcp service\", shell);\n\t\t\tsp = getservbyname(shell = \"shell\", \"tcp\");\n\t\t}\n\t} else\n\t\tsp = getservbyname(shell = \"shell\", \"tcp\");\n#else\n\tsp = getservbyname(shell = \"shell\", \"tcp\");\n#endif\n\tif (sp == NULL)\n\t\terrx(1, \"%s/tcp: unknown service\", shell);\n\tport = sp->s_port;\n\n\tif ((pwd = getpwuid(userid = getuid())) == NULL)\n\t\terrx(1, \"unknown user %d\", (int) userid);\n\n#ifdef USELOG\n\tstrcpy(use_user, pwd->pw_name);\n#endif /* USELOG */\n\n\trem = STDIN_FILENO;\n\n\tif (fflag) { /* Follow \"protocol\", send data. 
*/\n\t\t(void) response();\n#ifndef WIN32\n\t\tif (setuid(userid) == -1)\n\t\t\texit(-11);\n#endif\n\t\tsource(argc, argv);\n\t\texit(errs);\n\t}\n\n\tif (tflag) { /* Receive data. */\n#ifndef WIN32\n\t\tif (setuid(userid) == -1)\n\t\t\texit(-12);\n#endif\n\t\tsink(argc, argv);\n\t\texit(errs);\n\t}\n\n\tif (argc < 2)\n\t\tusage();\n\tif (argc > 2)\n\t\ttargetshouldbedirectory = 1;\n\n\trem = -1;\n\t/* Command to be executed on remote system using \"rsh\". */\n#ifdef KERBEROS\n\t(void) sprintf(cmd,\n\t\t       \"rcp%s%s%s%s\", iamrecursive ? \" -r\" : \"\",\n#ifdef CRYPT\n\t\t       (doencrypt && use_kerberos ? \" -x\" : \"\"),\n#else\n\t\t       \"\",\n#endif\n\t\t       pflag ? \" -p\" : \"\", targetshouldbedirectory ? \" -d\" : \"\");\n#else\n\t(void) sprintf(cmd, \"rcp%s%s%s\",\n\t\t       iamrecursive ? \" -r\" : \"\", pflag ? \" -p\" : \"\",\n\t\t       targetshouldbedirectory ? \" -d\" : \"\");\n#endif\n\n#ifndef WIN32\n\t(void) signal(SIGPIPE, lostconn);\n#endif\n\n#ifdef WIN32\n\tif (!(IS_UNCPATH(argv[argc - 1])) &&\n\t    ((targ = colon(argv[argc - 1])) != NULL))\n#else\n\tif ((targ = colon(argv[argc - 1])) != NULL)\n#endif\n\t\t/* Dest is non-UNC remote path */\n\t\ttoremote(targ, argc, argv);\n\telse {\n\t\t/* Dest is local host or UNC path */\n\t\ttolocal(argc, argv);\n\t\tif (targetshouldbedirectory)\n\t\t\tverifydir(argv[argc - 1]);\n\t}\n\texit(errs);\n}\n\n/**\n * @brief\n *\tparses the user name and host from the target argument and, using the\n *\trcmd or Kerberos authentication mechanism, executes the copy command on\n *\tthe remote system.\n *\n * @param[in] targ - target path\n * @param[in] argc - num of args\n * @param[in] argv - pointer to array of args\n *\n * @return\tVoid\n *\n */\nvoid\ntoremote(char *targ, int argc, char *argv[])\n{\n\tint i, len;\n\tchar *bp, *host, *src, *suser, *thost, *tuser;\n\n\t*targ++ = 0;\n\tif (*targ == 0)\n\t\ttarg = \".\";\n\n\tif ((thost = strchr(argv[argc - 1], '@')) != NULL) {\n\t\t/* user@host 
*/\n\t\t*thost++ = 0;\n\t\ttuser = argv[argc - 1];\n\t\tif (*tuser == '\\0')\n\t\t\ttuser = NULL;\n\t\telse if (!okname(tuser)) {\n\t\t\tif (credb != NULL) {\n\t\t\t\t(void) free(credb);\n\t\t\t\tcredb = NULL;\n\t\t\t}\n\t\t\texit(1);\n\t\t}\n\t} else {\n\t\tthost = argv[argc - 1];\n\t\ttuser = NULL;\n\t}\n\n#ifdef USELOG\n\tuse_host_rec = gethostbyname(thost);\n\tif (use_host_rec)\n\t\tstrcpy(use_host, use_host_rec->h_name);\n\telse\n\t\tstrcpy(use_host, thost);\n#endif /* USELOG */\n\n\tfor (i = 0; i < argc - 1; i++) {\n\t\tsrc = colon(argv[i]);\n\t\tif (src) { /* remote to remote */\n\t\t\t*src++ = 0;\n\t\t\tif (*src == 0)\n\t\t\t\tsrc = \".\";\n\t\t\thost = strchr(argv[i], '@');\n\t\t\tlen = strlen(_PATH_RSH) + strlen(argv[i]) +\n\t\t\t      strlen(src) + (tuser ? strlen(tuser) : 0) +\n\t\t\t      strlen(thost) + strlen(targ) + CMDNEEDS + 20;\n\t\t\tif (!(bp = malloc(len)))\n\t\t\t\terr(1, NULL);\n\t\t\tif (host) {\n\t\t\t\t*host++ = 0;\n\t\t\t\tsuser = argv[i];\n\t\t\t\tif (*suser == '\\0')\n\t\t\t\t\tsuser = pwd->pw_name;\n\t\t\t\telse if (!okname(suser))\n\t\t\t\t\tcontinue;\n\t\t\t\t(void) sprintf(bp,\n\t\t\t\t\t       \"%s %s -l %s -n %s %s '%s%s%s:%s'\",\n\t\t\t\t\t       _PATH_RSH, host, suser, cmd, src,\n\t\t\t\t\t       tuser ? tuser : \"\", tuser ? \"@\" : \"\",\n\t\t\t\t\t       thost, targ);\n\t\t\t} else\n\t\t\t\t(void) sprintf(bp,\n#ifdef WIN32\n\t\t\t\t\t       \"%s %s -n %s %s '%s%s%s:%s'\",\n#else\n\t\t\t\t\t       \"exec %s %s -n %s %s '%s%s%s:%s'\",\n#endif\n\t\t\t\t\t       _PATH_RSH, argv[i], cmd, src,\n\t\t\t\t\t       tuser ? tuser : \"\", tuser ? 
\"@\" : \"\",\n\t\t\t\t\t       thost, targ);\n\t\t\t(void) susystem(bp, userid, pwd->pw_name);\n\t\t\t(void) free(bp);\n\t\t} else { /* local to remote */\n\t\t\tif (rem == -1) {\n\t\t\t\tlen = strlen(targ);\n\t\t\t\tif ((targ[len - 1] == '/') ||\n\t\t\t\t    (targ[len - 1] == '\\\\'))\n\t\t\t\t\ttarg[len - 1] = '\\0';\n\t\t\t\tlen = len + CMDNEEDS + 20;\n\t\t\t\tif (!(bp = malloc(len)))\n\t\t\t\t\terr(1, NULL);\n\t\t\t\t(void) sprintf(bp, \"%s -t %s\", cmd, targ);\n\t\t\t\thost = thost;\n#ifdef KERBEROS\n\t\t\t\tif (use_kerberos)\n\t\t\t\t\trem = kerberos(&host, bp,\n\t\t\t\t\t\t       pwd->pw_name,\n\t\t\t\t\t\t       tuser ? tuser : pwd->pw_name);\n\t\t\t\telse\n#endif\n#ifdef WIN32\n\t\t\t\t\trem = rcmd2(&host, port, pwd->pw_name,\n\t\t\t\t\t\t    tuser ? tuser : pwd->pw_name,\n\t\t\t\t\t\t    credb, credl, bp, 0);\n#else\n\t\t\t\trem = rcmd(&host, port, pwd->pw_name,\n\t\t\t\t\t   tuser ? tuser : pwd->pw_name,\n\t\t\t\t\t   bp, 0);\n#endif\n\n\t\t\t\tif (rem < 0) {\n\t\t\t\t\tif (credb != NULL) {\n\t\t\t\t\t\t(void) free(credb);\n\t\t\t\t\t\tcredb = NULL;\n\t\t\t\t\t}\n\t\t\t\t\texit(1);\n\t\t\t\t}\n\t\t\t\tif (response() < 0) {\n\t\t\t\t\tif (credb != NULL) {\n\t\t\t\t\t\t(void) free(credb);\n\t\t\t\t\t\tcredb = NULL;\n\t\t\t\t\t}\n\t\t\t\t\texit(1);\n\t\t\t\t}\n\t\t\t\t(void) free(bp);\n#ifndef WIN32\n\t\t\t\tif (setuid(userid) == -1)\n\t\t\t\t\texit(-14);\n#endif\n\t\t\t}\n\t\t\tsource(1, argv + i);\n\t\t}\n\t}\n\tif (credb != NULL) {\n\t\t(void) free(credb);\n\t\tcredb = NULL;\n\t}\n}\n\n/**\n * @brief\n *      using the rcmd or Kerberos authentication mechanism, runs the copy\n *\tcommand with the local machine as the destination\n *\n * @param[in] argc - num of args\n * @param[in] argv - pointer to array of args\n *\n * @return      Void\n *\n */\n\nvoid\ntolocal(int argc, char *argv[])\n{\n\tint i, len;\n\tchar *bp, *host, *src, *suser;\n\n\tfor (i = 0; i < argc - 1; i++) {\n\t\tif (!(src = colon(argv[i]))) { /* Local to local. 
*/\n\t\t\tlen = strlen(_PATH_CP) + strlen(argv[i]) +\n\t\t\t      strlen(argv[argc - 1]) + 20;\n\t\t\tif (!(bp = malloc(len)))\n\t\t\t\terr(1, NULL);\n#ifdef WIN32\n\t\t\t(void) sprintf(bp, \"cmd /c %s%s%s %s %s\", _PATH_CP,\n#else\n\t\t\t(void) sprintf(bp, \"exec %s%s%s %s %s\", _PATH_CP,\n#endif\n\t\t\t\t       iamrecursive ? \" -r\" : \"\", pflag ? \" -p\" : \"\",\n\t\t\t\t       argv[i], argv[argc - 1]);\n\t\t\tif (susystem(bp, userid, pwd->pw_name))\n\t\t\t\t++errs;\n\t\t\t(void) free(bp);\n\t\t\tcontinue;\n\t\t}\n\t\t*src++ = 0;\n\t\tif (*src == 0)\n\t\t\tsrc = \".\";\n\t\tif ((host = strchr(argv[i], '@')) == NULL) {\n\t\t\thost = argv[i];\n\t\t\tsuser = pwd->pw_name;\n\t\t} else {\n\t\t\t*host++ = 0;\n\t\t\tsuser = argv[i];\n\t\t\tif (*suser == '\\0')\n\t\t\t\tsuser = pwd->pw_name;\n\t\t\telse if (!okname(suser))\n\t\t\t\tcontinue;\n\t\t}\n\t\tlen = strlen(src) + CMDNEEDS + 20;\n\t\tif ((bp = malloc(len)) == NULL)\n\t\t\terr(1, NULL);\n\n#ifdef USELOG\n\t\tuse_host_rec = gethostbyname(host);\n\t\tif (use_host_rec)\n\t\t\tstrcpy(use_host, use_host_rec->h_name);\n\t\telse\n\t\t\tstrcpy(use_host, host);\n\t\tstrcpy(use_user, suser);\n#endif /* USELOG */\n\n\t\t(void) sprintf(bp, \"%s -f %s\", cmd, src);\n\t\trem =\n#ifdef KERBEROS\n\t\t\tuse_kerberos ? 
kerberos(&host, bp, pwd->pw_name, suser) :\n#endif\n\n#ifdef WIN32\n\t\t\t\t     rcmd2(&host, port, pwd->pw_name, suser, credb, credl,\n\t\t\t\t\t   bp, 0);\n#else\n\t\t\trcmd(&host, port, pwd->pw_name, suser, bp, 0);\n#endif\n\t\t(void) free(bp);\n\t\tif (rem < 0) {\n\t\t\t++errs;\n\t\t\tcontinue;\n\t\t}\n#if defined(HAVE_SETRESUID)\n#define seteuid(e) (setresuid(-1, (e), -1))\n#endif\n\n#ifdef WIN32\n\t\tsink(1, argv + argc - 1);\n#else\n\t\tif (seteuid(userid) == -1)\n\t\t\texit(15);\n\t\tsink(1, argv + argc - 1);\n\t\tif (seteuid(0) == -1)\n\t\t\texit(15);\n#endif\n\t\tclosesocket(rem);\n\n\t\trem = -1;\n\t}\n\n\tif (credb != NULL) {\n\t\t(void) free(credb);\n\t\tcredb = NULL;\n\t}\n}\n\n/**\n * @brief\n *\tSend requested file(s) (or directory(s)) information to rshd server\n *\n * @param[in]\targc - no. of arguments\n * @param[in]\targv - arguments which contain the requested file information\n *\n * @return\tvoid\n */\nvoid\nsource(int argc, char *argv[])\n{\n\tpbs_stat_struct stb = {0};\n\tstatic BUF buffer = {0};\n\tBUF *bp = NULL;\n\toff_t_pbs i = 0;\n\tint amt = 0;\n\tint fd = 0;\n\tint haderr = 0;\n\tint indx = 0;\n\tint result = 0;\n\tchar *last = NULL;\n\tchar *name = NULL;\n\tchar buf[RCP_BUFFER_SIZE] = {'\\0'};\n\n\tfor (indx = 0; indx < argc; ++indx) {\n\t\tname = argv[indx];\n\t\tif ((fd = open(name, O_RDONLY, 0)) < 0)\n\t\t\tgoto syserr;\n\n#ifdef WIN32\n\t\tsetmode(fd, O_BINARY);\n\t/* windows will fail to open file if a dir. 
To capture dir */\n\t/* info, we also do a pbs_stat() */\n\tsyserr:\n\t\tif (pbs_stat(name, &stb)) {\n\t\t\trun_err(\"%s: %s\", name, strerror(errno));\n\t\t\tgoto next;\n\t\t}\n#else\n\t\tif (fstat(fd, &stb)) {\n\t\tsyserr:\n\t\t\trun_err(\"%s: %s\", name, strerror(errno));\n\t\t\tgoto next;\n\t\t}\n#endif\n\n\t\tswitch (stb.st_mode & S_IFMT) {\n\t\t\tcase S_IFREG:\n\t\t\t\tbreak;\n\t\t\tcase S_IFDIR:\n\t\t\t\tif (iamrecursive) {\n\t\t\t\t\trsource(name, &stb);\n\t\t\t\t\tgoto next;\n\t\t\t\t}\n\t\t\t\t/* FALLTHROUGH */\n\t\t\tdefault:\n\t\t\t\trun_err(\"%s: not a regular file\", name);\n\t\t\t\tgoto next;\n\t\t}\n\t\tif ((last = strrchr(name, '/')) == NULL)\n\t\t\tlast = name;\n\t\telse\n\t\t\t++last;\n\t\tif (pflag) {\n\t\t\t/*\n\t\t\t * Make it compatible with possible future\n\t\t\t * versions expecting microseconds.\n\t\t\t */\n\t\t\t(void) sprintf(buf, \"T%ld 0 %ld 0\\n\",\n\t\t\t\t       (long) stb.st_mtime, (long) stb.st_atime);\n\n#ifdef WIN32\n\t\t\t(void) send(rem, buf, strlen(buf), 0);\n#else\n\t\t\tif (write(rem, buf, strlen(buf)) == -1) \n\t\t\t\terrx(-1, __func__, \"write failed. ERR : %s\",\n\t\t\t\t\t\tstrerror(errno));\t\t\t\t\n#endif\n\t\t\tif (response() < 0)\n\t\t\t\tgoto next;\n\t\t}\n\n#ifdef WIN32\n#define MODMASK (S_IRWXU | S_IRWXG | S_IRWXO)\n#else\n#define MODMASK (S_ISUID | S_ISGID | S_IRWXU | S_IRWXG | S_IRWXO)\n#endif\n\n#ifdef WIN32\n\t\t(void) sprintf(buf, \"C%04o %I64d %s\\n\",\n\t\t\t       stb.st_mode & MODMASK, stb.st_size, last);\n#else\n\t\t(void) sprintf(buf, \"C%04o %ld %s\\n\",\n\t\t\t       stb.st_mode & MODMASK, (long) stb.st_size, last);\n#endif\n\n#ifdef WIN32\n\t\t(void) send(rem, buf, strlen(buf), 0);\n#else\n\t\tif (write(rem, buf, strlen(buf)) == -1) \n\t\t\terrx(-1, __func__, \"write failed. 
ERR : %s\",\n\t\t\t\t\tstrerror(errno));\t\t\t\t\n#endif\n\t\tif (response() < 0)\n\t\t\tgoto next;\n\t\tif ((bp = allocbuf(&buffer, fd, RCP_BUFFER_SIZE)) == NULL) {\n\t\tnext:\n\t\t\tif (fd > 0)\n\t\t\t\t(void) close(fd);\n\t\t\tcontinue;\n\t\t}\n\n\t\t/* Keep writing after an error so that we stay sync'd up. */\n\t\thaderr = 0;\n\t\tfor (i = 0; i < stb.st_size; i += bp->cnt) {\n\t\t\tamt = bp->cnt;\n\t\t\tif (i + amt > stb.st_size)\n\t\t\t\tamt = (int) (stb.st_size - i);\n\t\t\tif (!haderr) {\n\t\t\t\tresult = read(fd, bp->buf, amt);\n\t\t\t\tif (result != amt)\n\t\t\t\t\thaderr = result >= 0 ? EIO : errno;\n\t\t\t}\n\t\t\tif (haderr) {\n\n#ifdef WIN32\n\t\t\t\t(void) send(rem, bp->buf, amt, 0);\n#else\n\t\t\t\tif (write(rem, bp->buf, amt) == -1) \n\t\t\t\t\terrx(-1, __func__, \"write fail. ERR:%s\",\n\t\t\t\t\t\t\tstrerror(errno));\t\t\t\t\n#endif\n\t\t\t} else {\n#ifdef WIN32\n\t\t\t\tresult = send(rem, bp->buf, amt, 0);\n#else\n\t\t\t\tresult = write(rem, bp->buf, amt);\n#endif\n\t\t\t\tif (result != amt)\n\t\t\t\t\thaderr = result >= 0 ? EIO : errno;\n\t\t\t}\n\t\t}\n\t\tif (close(fd) && !haderr)\n\t\t\thaderr = errno;\n\n#ifdef USELOG /* MegaBytes */\n\t\tuse_filec++;\n\t\tuse_size += stb.st_size / 1048576.0;\n\t\tstrcpy(use_direction, \"P\");\n\t\tuse_namelen = sizeof(use_sock_rec);\n\n\t\tif (!strcmp(use_host, \"no_host\")) {\n\t\t\tif (!getpeername(rem, (struct sockaddr *) &use_sock_rec,\n\t\t\t\t\t &use_namelen))\n\t\t\t\tuse_host_rec =\n\t\t\t\t\tgethostbyaddr((char *) &use_sock_rec.sin_addr,\n\t\t\t\t\t\t      sizeof(use_sock_rec.sin_addr),\n\t\t\t\t\t\t      AF_INET);\n\t\t\tif (use_host_rec)\n\t\t\t\tstrcpy(use_host, use_host_rec->h_name);\n\t\t\telse\n\t\t\t\tstrcpy(use_host,\n\t\t\t\t       inet_ntoa(use_sock_rec.sin_addr));\n\t\t}\n#endif /* USELOG */\n\n\t\tif (!haderr){\n\n#ifdef WIN32\n\t\t\t(void) send(rem, \"\", 1, 0);\n#else\n\t\t\tif (write(rem, \"\", 1) == -1) \n\t\t\t\terrx(-1, __func__, \"write failed. 
ERR : %s\",strerror(errno));\t\t\t\n#endif\n\t\t} else \n\t\t\trun_err(\"%s: %s\", name, strerror(haderr));\n\t\t\n\t\t(void) response();\n\t}\n}\n\n/**\n *\n *  @brief Send directory information to remote host.\n *\n *  @param[in]  name - directory name.\n *  @param[in]  statp - pointer to struct stat.\n *\n *  @return void\n */\nvoid\nrsource(char *name, pbs_stat_struct *statp)\n{\n\tDIR *dirp;\n\tstruct dirent *dp;\n\tchar *last, *vect[1], path[MAXPATHLEN];\n\n\tif (!(dirp = opendir(name))) {\n\t\trun_err(\"%s: %s\", name, strerror(errno));\n\t\treturn;\n\t}\n\tlast = strrchr(name, '/');\n\tif (last == 0) {\n#ifdef WIN32\n\t\t/* '/' not found so check for '\\\\' in windows*/\n\t\tlast = strrchr(name, '\\\\');\n\t\tif (last == 0)\n\t\t\tlast = name;\n\t\telse\n\t\t\tlast++;\n#else\n\t\tlast = name;\n#endif\n\t} else\n\t\tlast++;\n\tif (pflag) {\n\t\t(void) sprintf(path, \"T%ld 0 %ld 0\\n\",\n\t\t\t       (long) statp->st_mtime, (long) statp->st_atime);\n#ifdef WIN32\n\t\t(void) send(rem, path, strlen(path), 0);\n#else\n\t\tif (write(rem, path, strlen(path)) == -1) \n\t\t\terrx(-1, __func__, \"write failed. ERR : %s\",strerror(errno));\t\t\t\t\n#endif\n\t\tif (response() < 0) {\n\t\t\tclosedir(dirp);\n\t\t\treturn;\n\t\t}\n\t}\n\t(void) sprintf(path,\n\t\t       \"D%04o %d %s\\n\", statp->st_mode & MODMASK, 0, last);\n\n#ifdef WIN32\n\t(void) send(rem, path, strlen(path), 0);\n#else\n\tif (write(rem, path, strlen(path)) == -1) \n\t\terrx(-1, __func__, \"write failed. 
ERR : %s\",strerror(errno));\t\t\n#endif\n\tif (response() < 0) {\n\t\tclosedir(dirp);\n\t\treturn;\n\t}\n\twhile (errno = 0, (dp = readdir(dirp)) != NULL) {\n\n\t\tint len1 = 0;\n\t\tint len2 = 0;\n#ifndef WIN32\n\t\tif (dp->d_ino == 0)\n\t\t\tcontinue;\n#endif\n\t\tif (!strcmp(dp->d_name, \".\") || !strcmp(dp->d_name, \"..\"))\n\t\t\tcontinue;\n\t\tlen1 = strlen(name);\n\t\tlen2 = strlen(dp->d_name);\n\t\tif (len1 + 1 + len2 >= (size_t) (MAXPATHLEN - 1)) {\n\t\t\trun_err(\"%s/%s: name too long\", name, dp->d_name);\n\t\t\tcontinue;\n\t\t}\n\t\t/* length was bounds-checked above, so no truncation can occur here */\n\t\t(void) snprintf(path, sizeof(path), \"%s/%s\", name, dp->d_name);\n\t\tvect[0] = path;\n\t\tsource(1, vect);\n\t}\n\tif (errno != 0 && errno != ENOENT) {\n\t\trun_err(\"%s: %s\", name, strerror(errno));\n\t\t(void) closedir(dirp);\n\t\treturn;\n\t}\n\t(void) closedir(dirp);\n\n#ifdef WIN32\n\t(void) send(rem, \"E\\n\", 2, 0);\n#else\n\tif (write(rem, \"E\\n\", 2) == -1) \n\t\terrx(-1, __func__, \"write failed. ERR : %s\",strerror(errno));\n#endif\n\t(void) response();\n}\n\n/**\n * @brief\n *\tReceive file(s) (or directory(s)) information sent by rshd server\n *\n * @param[in]\targc - no. of 
arguments\n * @param[in]\targv - arguments which contains target information\n *\n * @return\tvoid\n */\nvoid\nsink(int argc, char *argv[])\n{\n\tstatic BUF buffer = {0};\n\tpbs_stat_struct stb = {0};\n\tstruct timeval tv[2];\n\tenum { YES,\n\t       NO,\n\t       DISPLAYED } wrerr;\n\tBUF *bp = NULL;\n\toff_t j = 0;\n\tint_pbs size = 0;\n\tint_pbs i = 0;\n\tint amt = 0;\n\tint count = 0;\n\tint exists = 0;\n\tint first = 0;\n\tint mask = 0;\n\tint mode = 0;\n\tint ofd = 0;\n\tint omode = 0;\n\tint setimes = 0;\n\tint targisdir = 0;\n\tint wrerrno = 0;\n\tchar ch = '\\0';\n\tchar *cp = NULL;\n\tchar *np = NULL;\n\tchar *targ = NULL;\n\tchar *why = NULL;\n\tchar *vect[1] = {0};\n\tchar buf[RCP_BUFFER_SIZE] = {'\\0'};\n\n\tmemset(tv, 0, sizeof(tv));\n\n#ifdef USELOG\n\tpbs_stat_struct use_stb;\n#endif /* USELOG */\n\n#define atime tv[0]\n#define mtime tv[1]\n#define SCREWUP(str)          \\\n\t{                     \\\n\t\twhy = str;    \\\n\t\tgoto screwup; \\\n\t}\n\n#ifdef USELOG\n\tstrcpy(use_direction, \"G\");\n#endif /* USELOG */\n\n\tsetimes = targisdir = 0;\n\tmask = umask(0);\n\tif (!pflag)\n\t\t(void) umask(mask);\n\tif (argc != 1) {\n\t\trun_err(\"ambiguous target\");\n\t\texit(1);\n\t}\n\ttarg = *argv;\n\tif (targetshouldbedirectory)\n\t\tverifydir(targ);\n\n#ifdef WIN32\n\t(void) send(rem, \"\", 1, 0);\n#else\n\tif (write(rem, \"\", 1) == -1) \n\t\terrx(-1, __func__, \"write failed. 
ERR : %s\",strerror(errno));\n#endif\n\tif (pbs_stat(targ, &stb) == 0 && S_ISDIR(stb.st_mode))\n\t\ttargisdir = 1;\n\tfor (first = 1;; first = 0) {\n\t\tcp = buf;\n\n#ifdef WIN32\n\t\tif (recv(rem, cp, 1, 0) <= 0)\n#else\n\t\tif (read(rem, cp, 1) <= 0)\n#endif\n\t\t\treturn;\n\t\tif (*cp++ == '\\n')\n\t\t\tSCREWUP(\"unexpected <newline>\");\n\t\tdo {\n#ifdef WIN32\n\t\t\tif (recv(rem, &ch, sizeof(ch), 0) != sizeof(ch))\n#else\n\t\t\tif (read(rem, &ch, sizeof(ch)) != sizeof(ch))\n#endif\n\t\t\t\tSCREWUP(\"lost connection\");\n\t\t\t*cp++ = ch;\n\t\t} while (cp < &buf[RCP_BUFFER_SIZE - 1] && ch != '\\n');\n\t\t*cp = 0;\n\n\t\tif (buf[0] == '\\01' || buf[0] == '\\02') {\n\t\t\tif (iamremote == 0)\n\t\t\t\tif ( write(STDERR_FILENO,\n\t\t\t\t\t     buf + 1, strlen(buf + 1)) == -1) \n\t\t\t\t\terrx(-1, __func__, \"write failed. ERR : %s\",strerror(errno));\n\n\t\t\tif (buf[0] == '\\02')\n\t\t\t\texit(1);\n\t\t\t++errs;\n\t\t\tcontinue;\n\t\t}\n\t\tif (buf[0] == 'E') {\n#ifdef WIN32\n\t\t\t(void) send(rem, \"\", 1, 0);\n#else\n\t\t\tif (write(rem, \"\", 1) == -1) \n\t\t\t\terrx(-1, __func__, \"write failed. ERR : %s\",strerror(errno));\n#endif\n\t\t\treturn;\n\t\t}\n\n\t\tif (ch == '\\n')\n\t\t\t*--cp = 0;\n\n#define getnum(t)            \\\n\t(t) = 0;             \\\n\twhile (isdigit(*cp)) \\\n\t\t(t) = (t) *10 + (*cp++ - '0');\n\t\tcp = buf;\n\t\tif (*cp == 'T') {\n\t\t\tsetimes++;\n\t\t\tcp++;\n\t\t\tgetnum(mtime.tv_sec);\n\t\t\tif (*cp++ != ' ')\n\t\t\t\tSCREWUP(\"mtime.sec not delimited\");\n\t\t\tgetnum(mtime.tv_usec);\n\t\t\tif (*cp++ != ' ')\n\t\t\t\tSCREWUP(\"mtime.usec not delimited\");\n\t\t\tgetnum(atime.tv_sec);\n\t\t\tif (*cp++ != ' ')\n\t\t\t\tSCREWUP(\"atime.sec not delimited\");\n\t\t\tgetnum(atime.tv_usec);\n\t\t\tif (*cp++ != '\\0')\n\t\t\t\tSCREWUP(\"atime.usec not delimited\");\n\n#ifdef WIN32\n\t\t\t(void) send(rem, \"\", 1, 0);\n#else\n\t\t\tif ( write(rem, \"\", 1) == -1) \n\t\t\t\terrx(-1, __func__, \"write failed. 
ERR : %s\",strerror(errno));\n#endif\n\t\t\tcontinue;\n\t\t}\n\t\tif (*cp != 'C' && *cp != 'D') {\n\t\t\t/*\n\t\t\t * Check for the case \"rcp remote:foo\\* local:bar\".\n\t\t\t * In this case, the line \"No match.\" can be returned\n\t\t\t * by the shell before the rcp command on the remote is\n\t\t\t * executed so the ^Aerror_message convention isn't\n\t\t\t * followed.\n\t\t\t */\n\t\t\tif (first) {\n\t\t\t\trun_err(\"%s\", cp);\n\t\t\t\texit(1);\n\t\t\t}\n\t\t\tSCREWUP(\"expected control record\");\n\t\t}\n\t\tmode = 0;\n\t\tfor (++cp; cp < buf + 5; cp++) {\n\t\t\tif (*cp < '0' || *cp > '7')\n\t\t\t\tSCREWUP(\"bad mode\");\n\t\t\tmode = (mode << 3) | (*cp - '0');\n\t\t}\n\t\tif (*cp++ != ' ')\n\t\t\tSCREWUP(\"mode not delimited\");\n\n\t\tfor (size = 0; isdigit(*cp);)\n\t\t\tsize = size * 10 + (*cp++ - '0');\n\t\tif (*cp++ != ' ')\n\t\t\tSCREWUP(\"size not delimited\");\n\t\tif (targisdir) {\n\t\t\tstatic char *namebuf;\n\t\t\tstatic int cursize;\n\t\t\tsize_t need;\n\n\t\t\tneed = strlen(targ) + strlen(cp) + 250;\n\t\t\tif (need > (unsigned) cursize) {\n\t\t\t\t/* grow the name buffer: free the old one and remember the new size */\n\t\t\t\tfree(namebuf);\n\t\t\t\tif (!(namebuf = malloc(need)))\n\t\t\t\t\terr(1, NULL);\n\t\t\t\tcursize = need;\n\t\t\t}\n\t\t\t(void) sprintf(namebuf, \"%s%s%s\", targ,\n\t\t\t\t       *targ ? 
\"/\" : \"\", cp);\n\t\t\tnp = namebuf;\n\t\t} else\n\t\t\tnp = targ;\n\t\texists = pbs_stat(np, &stb) == 0;\n\t\tif (buf[0] == 'D') {\n\t\t\tint mod_flag = pflag;\n\t\t\tif (exists) {\n\t\t\t\tif (!S_ISDIR(stb.st_mode)) {\n\t\t\t\t\terrno = ENOTDIR;\n\t\t\t\t\tgoto bad;\n\t\t\t\t}\n\t\t\t\tif (pflag)\n\t\t\t\t\t(void) chmod(np, mode);\n\t\t\t} else {\n\t\t\t\t/* Handle copying from a read-only directory */\n\t\t\t\tmod_flag = 1;\n\n#ifdef WIN32\n\t\t\t\t/* use _mkdir as it supports UNC path */\n\t\t\t\tif (_mkdir(np) == -1)\n#else\n\t\t\t\tif (mkdir(np, mode | S_IRWXU) < 0)\n#endif\n\t\t\t\t\tgoto bad;\n\t\t\t}\n\t\t\tvect[0] = np;\n\t\t\tsink(1, vect);\n\t\t\tif (setimes) {\n\t\t\t\tsetimes = 0;\n#ifdef WIN32\n\t\t\t\ttimes.actime = atime.tv_sec;\n\t\t\t\ttimes.modtime = mtime.tv_sec;\n\n\t\t\t\tif (_utime(np, &times) < 0)\n#else\n\t\t\t\tif (utimes(np, tv) < 0)\n#endif\n\t\t\t\t\trun_err(\"%s: set times: %s\",\n\t\t\t\t\t\tnp, strerror(errno));\n\t\t\t}\n\t\t\tif (mod_flag)\n\t\t\t\t(void) chmod(np, mode);\n\t\t\tcontinue;\n\t\t}\n\t\tomode = mode;\n\t\tmode |= S_IWRITE;\n\n#ifdef WIN32\n\t\tif ((ofd = open(np, O_WRONLY | O_CREAT | O_TRUNC | O_BINARY, mode)) < 0)\n#else\n\t\tif ((ofd = open(np, O_WRONLY | O_CREAT, mode)) < 0)\n#endif\n\t\t{\n\t\tbad:\n\t\t\trun_err(\"%s: %s\", np, strerror(errno));\n\t\t\tcontinue;\n\t\t}\n\n#ifdef WIN32\n\t\t(void) send(rem, \"\", 1, 0);\n#else\n\t\tif (write(rem, \"\", 1) == -1) \n\t\t\terrx(-1, __func__, \"write failed. 
ERR : %s\",strerror(errno));\n#endif\n\t\tif ((bp = allocbuf(&buffer, ofd, RCP_BUFFER_SIZE)) == NULL) {\n\t\t\t(void) close(ofd);\n\t\t\tcontinue;\n\t\t}\n\t\tcp = bp->buf;\n\t\twrerr = NO;\n\t\tcount = 0;\n\t\tfor (i = 0; i < size; i += RCP_BUFFER_SIZE) {\n\t\t\tamt = RCP_BUFFER_SIZE;\n\t\t\tif (i + amt > size)\n\t\t\t\tamt = (int) (size - i);\n\t\t\tcount += amt;\n\t\t\tdo {\n\n#ifdef WIN32\n\t\t\t\tj = recv(rem, cp, amt, 0);\n#else\n\t\t\t\tj = read(rem, cp, amt);\n#endif\n\t\t\t\tif (j <= 0) {\n\t\t\t\t\trun_err(\"%s\", j ? strerror(errno) : \"dropped connection\");\n\t\t\t\t\texit(1);\n\t\t\t\t}\n\t\t\t\tamt -= j;\n\t\t\t\tcp += j;\n\t\t\t} while (amt > 0);\n\t\t\tif (count == bp->cnt) {\n\t\t\t\t/* Keep reading so we stay sync'd up. */\n\t\t\t\tif (wrerr == NO) {\n\t\t\t\t\tj = write(ofd, bp->buf, (unsigned int) count);\n\t\t\t\t\tif (j != count) {\n\t\t\t\t\t\twrerr = YES;\n\t\t\t\t\t\twrerrno = j >= 0 ? EIO : errno;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tcount = 0;\n\t\t\t\tcp = bp->buf;\n\t\t\t}\n\t\t}\n\t\tif (count != 0 && wrerr == NO &&\n\t\t    (j = write(ofd, bp->buf, (unsigned int) count)) != count) {\n\t\t\twrerr = YES;\n\t\t\twrerrno = j >= 0 ? 
EIO : errno;\n\t\t}\n\n#ifdef WIN32\n\t\tif (pflag) {\n\t\t\tif (exists || omode != mode)\n\t\t\t\tif (_chmod(np, omode))\n\t\t\t\t\trun_err(\"%s: set mode: %s\",\n\t\t\t\t\t\tnp, strerror(errno));\n\t\t} else {\n\t\t\tif (!exists && omode != mode)\n\t\t\t\tif (_chmod(np, omode & ~mask))\n\t\t\t\t\trun_err(\"%s: set mode: %s\",\n\t\t\t\t\t\tnp, strerror(errno));\n\t\t}\n#else\n\t\tif (ftruncate(ofd, size)) {\n\t\t\trun_err(\"%s: truncate: %s\", np, strerror(errno));\n\t\t\twrerr = DISPLAYED;\n\t\t}\n\t\tif (pflag) {\n\t\t\tif (exists || omode != mode)\n\t\t\t\tif (fchmod(ofd, omode))\n\t\t\t\t\trun_err(\"%s: set mode: %s\",\n\t\t\t\t\t\tnp, strerror(errno));\n\t\t} else {\n\t\t\tif (!exists && omode != mode)\n\t\t\t\tif (fchmod(ofd, omode & ~mask))\n\t\t\t\t\trun_err(\"%s: set mode: %s\",\n\t\t\t\t\t\tnp, strerror(errno));\n\t\t}\n#endif\n\t\t(void) close(ofd);\n\n#ifdef USELOG\n\t\tuse_filec++;\n\t\tpbs_stat(np, &use_stb);\n\t\tuse_size += use_stb.st_size / 1048576.0;\n\t\tuse_namelen = sizeof(use_sock_rec);\n\n\t\tif (!strcmp(use_host, \"no_host\")) {\n\t\t\tif (!getpeername(rem, (struct sockaddr *) &use_sock_rec,\n\t\t\t\t\t &use_namelen))\n\t\t\t\tuse_host_rec =\n\t\t\t\t\tgethostbyaddr((char *) &use_sock_rec.sin_addr,\n\t\t\t\t\t\t      sizeof(use_sock_rec.sin_addr), AF_INET);\n\t\t\tif (use_host_rec)\n\t\t\t\tstrcpy(use_host, use_host_rec->h_name);\n\t\t\telse\n\t\t\t\tstrcpy(use_host, inet_ntoa(use_sock_rec.sin_addr));\n\t\t}\n\n#endif /* USELOG */\n\n\t\t(void) response();\n\t\tif (setimes && wrerr == NO) {\n\t\t\tsetimes = 0;\n\n#ifdef WIN32\n\t\t\ttimes.actime = atime.tv_sec;\n\t\t\ttimes.modtime = mtime.tv_sec;\n\n\t\t\tif (_utime(np, &times) < 0)\n#else\n\t\t\tif (utimes(np, tv) < 0)\n#endif\n\t\t\t{\n\t\t\t\trun_err(\"%s: set times: %s\",\n\t\t\t\t\tnp, strerror(errno));\n\t\t\t\twrerr = DISPLAYED;\n\t\t\t}\n\t\t}\n\t\tswitch (wrerr) {\n\t\t\tcase YES:\n\t\t\t\trun_err(\"%s: %s\", np, strerror(wrerrno));\n\t\t\t\tbreak;\n\t\t\tcase NO:\n#ifdef 
WIN32\n\t\t\t\t(void) send(rem, \"\", 1, 0);\n#else\n\t\t\t\tif (write(rem, \"\", 1) == -1) \n\t\t\t\t\terrx(-1, __func__, \"write failed. ERR : %s\",strerror(errno));\n#endif\n\t\t\t\tbreak;\n\t\t\tcase DISPLAYED:\n\t\t\t\tbreak;\n\t\t}\n\t}\nscrewup:\n\trun_err(\"protocol error: %s\", why);\n#ifdef USELOG\n\tuse_neterr++;\n#endif /* USELOG */\n\n\texit(1);\n}\n\n#ifdef KERBEROS\n\n/**\n * @brief\n *\tprovides Kerberos authentication data.\n *\n * @param[in] host - host name\n * @param[in] bp - command to be executed on the remote host\n * @param[in] locuser - present working user\n * @param[in] user - username\n *\n * @return\tint\n * @retval\tcommunication handle\tsuccess\n * @retval\t-1\t\t\terror\n *\n */\nint\nkerberos(char **host, char *bp, char *locuser, char *user)\n{\n\tstruct servent *sp;\n\nagain:\n\tif (use_kerberos) {\n\t\trem = KSUCCESS;\n\t\terrno = 0;\n\t\tif (dest_realm == NULL)\n\t\t\tdest_realm = krb_realmofhost(*host);\n\t\trem =\n#ifdef CRYPT\n\t\t\tdoencrypt ? krcmd_mutual(host, port, user, bp, 0, dest_realm, &cred, schedule) :\n#endif\n\t\t\t\t  krcmd(host, port, user, bp, 0, dest_realm);\n\n\t\tif (rem < 0) {\n\t\t\tuse_kerberos = 0;\n\t\t\tif ((sp = getservbyname(\"shell\", \"tcp\")) == NULL)\n\t\t\t\terrx(1, \"unknown service shell/tcp\");\n\t\t\tif (errno == ECONNREFUSED)\n\t\t\t\toldw(\"remote host doesn't support Kerberos\");\n\t\t\telse if (errno == ENOENT)\n\t\t\t\toldw(\"can't provide Kerberos authentication data\");\n\t\t\tport = sp->s_port;\n\t\t\tgoto again;\n\t\t}\n\t} else {\n#ifdef CRYPT\n\t\tif (doencrypt)\n\t\t\terrx(1,\n\t\t\t     \"the -x option requires Kerberos authentication\");\n#endif\n\t\trem = rcmd(host, port, locuser, user, bp, 0);\n\t}\n\treturn (rem);\n}\n#endif /* KERBEROS */\n\n/**\n * @brief\n *\tReceive response of last operation from rshd server\n *\t\tresponse will be as follows:\n *\t\t0 - OK\n *\t\t1 - error, followed by error message\n *\t\t2 - fatal error followed by \"\"\n *\n * @return\tint\n * @retval\t0\tOK\n * @retval\t-1\terror\n *\n * 
@par\tNote:\n *\ton receiving a fatal error response, exits with status 1.\n */\nint\nresponse()\n{\n\tchar ch = '\\0';\n\tchar *cp = NULL;\n\tchar resp = '\\0';\n\tchar rbuf[RCP_BUFFER_SIZE] = {'\\0'};\n\n#ifdef WIN32\n\tif (recv(rem, &resp, sizeof(resp), 0) != sizeof(resp))\n#else\n\tif (read(rem, &resp, sizeof(resp)) != sizeof(resp))\n#endif\n\t\tlostconn(0);\n\n\tcp = rbuf;\n\tswitch (resp) {\n\t\tcase 0: /* ok */\n\t\t\treturn (0);\n\t\tdefault:\n\t\t\t*cp++ = resp;\n\t\t\t/* FALLTHROUGH */\n\t\tcase 1: /* error, followed by error msg */\n\t\tcase 2: /* fatal error, \"\" */\n\t\t\tdo {\n\n#ifdef WIN32\n\t\t\t\tif (recv(rem, &ch, sizeof(ch), 0) != sizeof(ch))\n#else\n\t\t\t\tif (read(rem, &ch, sizeof(ch)) != sizeof(ch))\n#endif\n\t\t\t\t\tlostconn(0);\n\t\t\t\t*cp++ = ch;\n\t\t\t} while (cp < &rbuf[RCP_BUFFER_SIZE] && ch != '\\n');\n\n\t\t\tif (!iamremote)\n\t\t\t\tif (write(STDERR_FILENO, rbuf, cp - rbuf) == -1) \n\t\t\t\t\terrx(-1, __func__, \"write failed. ERR : %s\",strerror(errno));\n\t\t\t++errs;\n\t\t\tif (resp == 1)\n\t\t\t\treturn (-1);\n\t\t\texit(1);\n\t}\n\t/* NOTREACHED */\n}\n\n/**\n * @brief\n *\tprints the usage guidelines for using pbs_rcp.\n *\n */\nvoid\nusage()\n{\n#ifdef KERBEROS\n#ifdef CRYPT\n\t(void) fprintf(stderr, \"%s\\n\\t%s\\n\",\n\t\t       \"usage: pbs_rcp [-Kpx] [-k realm] f1 f2\",\n\t\t       \"or: pbs_rcp [-Kprx] [-k realm] f1 ... fn directory\");\n#else\n\t(void) fprintf(stderr, \"%s\\n\\t%s\\n\",\n\t\t       \"usage: pbs_rcp [-Kp] [-k realm] f1 f2\",\n\t\t       \"or: pbs_rcp [-Kpr] [-k realm] f1 ... fn directory\");\n#endif\n#else\n\t(void) fprintf(stderr,\n\t\t       \"usage: pbs_rcp [-E] [-p] f1 f2; or: pbs_rcp [-pr] f1 ... 
fn directory\\n\");\n#endif\n\t(void) fprintf(stderr,\n\t\t       \"       pbs_rcp --version\\n\");\n\texit(1);\n}\n\n#include <stdarg.h>\n\n#ifdef KERBEROS\n/**\n * @brief\n *\tprints error message to stderr in provided format.\n *\n * @param[in] fmt - error message to be logged\n *\n */\nvoid\noldw(const char *fmt, ...)\n{\n\tva_list ap;\n\tva_start(ap, fmt);\n\t(void) fprintf(stderr, \"rcp: \");\n\t(void) vfprintf(stderr, fmt, ap);\n\t(void) fprintf(stderr, \", using standard rcp\\n\");\n\tva_end(ap);\n}\n#endif\n\n/**\n * @brief\n *      prints error message to specified file in provided format.\n *\n * @param[in] fmt - error message to be logged\n *\n */\nvoid\nrun_err(const char *fmt, ...)\n{\n\tstatic FILE *fp;\n\tva_list ap;\n\tva_start(ap, fmt);\n\n\t++errs;\n\n\tif (!iamremote) {\n\t\t(void) vfprintf(stderr, fmt, ap);\n\t\t(void) fprintf(stderr, \"\\n\");\n\t}\n\n\tif (fp == NULL && !(fp = fdopen(rem, \"w\"))) {\n\t\tva_end(ap);\n\t\treturn;\n\t}\n\t(void) fprintf(fp, \"%c\", 0x01);\n\t(void) fprintf(fp, \"rcp: \");\n\t(void) vfprintf(fp, fmt, ap);\n\t(void) fprintf(fp, \"\\n\");\n\t(void) fflush(fp);\n\n\tva_end(ap);\n}\n\n#ifdef USELOG\n/**\n * @brief\n *\tgenerates a distributed log message.\n */\nstatic void\nuse_logusage(char *mes, char *app)\n{\n\tcloselog();\n\topenlog(app, LOG_INFO | LOG_NDELAY, LOG_LOCAL4);\n\tsyslog(LOG_INFO, \"%s\", mes);\n\tcloselog();\n}\n\n/**\n * @brief\n *\treturns the time and timezone info.\n */\nstatic void\nuse_prep_timer()\n{\n\tgettimeofday(&use_time0, NULL);\n}\n\n/**\n * @brief\n *\treturns the time in seconds and microseconds format.\n *\n */\nstatic float\nuse_get_wtime()\n{\n\tstruct timeval timedol;\n\tstruct timeval td;\n\tfloat realt;\n\n\tgettimeofday(&timedol, NULL);\n\n\t/* Get real time */\n\tuse_tvsub(&td, &timedol, &use_time0);\n\trealt = td.tv_sec + ((double) td.tv_usec) / 1000000;\n\tif (realt < 0.00001)\n\t\trealt = 0.00001;\n\n\treturn (realt);\n}\n\n/**\n * @brief\n *\tcalculate the time difference 
of t1 and t0.\n *\n * @param[out] tdiff - computed difference\n * @param[in] t1 - time value1\n * @param[in] t0 - time value2\n *\n * @return\tVoid\n */\nstatic void\nuse_tvsub(struct timeval *tdiff, struct timeval *t1, struct timeval *t0)\n{\n\n\ttdiff->tv_sec = t1->tv_sec - t0->tv_sec;\n\ttdiff->tv_usec = t1->tv_usec - t0->tv_usec;\n\tif (tdiff->tv_usec < 0)\n\t\ttdiff->tv_sec--, tdiff->tv_usec += 1000000;\n}\n\n/**\n * @brief\n *\tend the transfer, logging an appropriate usage message.\n *\n * @param[in] status - status code\n *\n */\nvoid\nuse_exit(int status)\n{\n\t/*   UNICOS tcp/ip socket error codes */\n\tif (status && (127 < errno && errno < 156)) {\n\t\tstrcpy(use_direction, \"E\");\n\t\tuse_status = errno;\n\t\tuse_neterr++;\n\t} else\n\t\tuse_status = status;\n\n\t/* only report files bigger than 0.001MB or network problems */\n\tif (use_size > 0.001 || use_neterr) {\n\t\tuse_wctime = use_get_wtime();\n\t\tuse_rate = use_size / use_wctime;\n\n\t\tsprintf(use_message,\n\t\t\t\"%-25s %-9s %-2s %5.3f %5.3f %2d %2d\\n\",\n\t\t\tuse_host, use_user, use_direction, use_size,\n\t\t\tuse_rate, use_status, use_filec);\n\n\t\tuse_logusage(use_message, \"rcp\");\n\t}\n\n#undef exit\n\texit(status);\n}\n#endif /* USELOG */\n"
  },
  {
    "path": "src/mom_rcp/replace.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\treplace.c\n * @brief\n * This file contains homebrewed PBS replacements for\n * library functions found on BSD 4.4-Lite.\n *\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <errno.h>\n#include <stdio.h>\n#include <stdarg.h>\n#include <stdlib.h>\n\nextern char *credb;\n\n/**\n * @brief\n *\tformats and prints an error message to stderr, then exits.\n *\n * @param[in] err - exit status\n * @param[in] fmt - error msg format\n *\n */\nvoid\nerrx(int err, const char *fmt, ...)\n{\n\tva_list ap;\n\n\tva_start(ap, fmt);\n\t(void) vfprintf(stderr, fmt, ap);\n\t(void) fprintf(stderr, \"\\n\");\n\tva_end(ap);\n\texit(err);\n}\n\n/**\n * @brief\n *      formats and prints a warning message to stderr.\n *\n * @param[in] fmt - warning msg format\n *\n */\nvoid\nwarnx(const char *fmt, ...)\n{\n\tva_list ap;\n\n\tva_start(ap, fmt);\n\t(void) vfprintf(stderr, fmt, ap);\n\t(void) fprintf(stderr, \"\\n\");\n\tva_end(ap);\n}\n\n/**\n * @brief\n *\tprint the error number and message to stderr.\n * @par Note:\tfrees the credentials.\n *\n * @param[in] val - error val\n * @param[in] str - error msg\n *\n * @return\tVoid\n *\n */\nvoid\nerr(int val, char *str)\n{\n\tif (str)\n\t\t(void) fprintf(stderr, \"%s\\n\", str);\n\n\tif (credb != NULL) {\n\t\t(void) free(credb);\n\t\tcredb = NULL;\n\t}\n\n\texit(val);\n}\n"
  },
  {
    "path": "src/mom_rcp/util.c",
    "content": "/*\n * Copyright (c) 1992, 1993\n *\tThe Regents of the University of California.  All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions\n * are met:\n * 1. Redistributions of source code must retain the above copyright\n *    notice, this list of conditions and the following disclaimer.\n * 2. Redistributions in binary form must reproduce the above copyright\n *    notice, this list of conditions and the following disclaimer in the\n *    documentation and/or other materials provided with the distribution.\n * 3. All advertising materials mentioning features or use of this software\n *    must display the following acknowledgement:\n *\tThis product includes software developed by the University of\n *\tCalifornia, Berkeley and its contributors.\n * 4. Neither the name of the University nor the names of its contributors\n *    may be used to endorse or promote products derived from this software\n *    without specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND\n * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED.  
IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE\n * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS\n * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\n * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\n * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\n * SUCH DAMAGE.\n */\n/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/param.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <sys/wait.h>\n\n#include <ctype.h>\n#include <errno.h>\n#include <signal.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <unistd.h>\n\n#include \"extern.h\"\n#include \"pathnames.h\"\n/**\n * @file\tutil.c\n */\n\n#ifdef USELOG\nextern int use_neterr;\n#endif /* USELOG */\n\n/**\n * @brief\n *\textract and return the file name from input string\n *\n * @param[in] cp - argument\n *\n * @return\tstring\n * @retval\tfilename\tsuccess\n * @retval\t0\t\terror\n */\nchar *\ncolon(char *cp)\n{\n\tif (*cp == ':') /* Leading colon is part of file name. 
*/\n\t\treturn (0);\n\n\tfor (; *cp; ++cp) {\n\t\tif (*cp == ':')\n\t\t\treturn (cp);\n\t\tif (*cp == '/')\n\t\t\treturn (0);\n\t}\n\treturn (0);\n}\n\n/**\n * @breif\n *\tverify the input directory.\n *\n * @param[in] cp - directory path\n *\n * @return Void\n */\nvoid\nverifydir(char *cp)\n{\n\tstruct stat stb;\n\n\tif (!stat(cp, &stb)) {\n\t\tif (S_ISDIR(stb.st_mode))\n\t\t\treturn;\n\t\terrno = ENOTDIR;\n\t}\n\trun_err(\"%s: %s\", cp, strerror(errno));\n\texit(1);\n}\n\n/**\n * @brief\n *\tvalidate input user name.\n *\n * @param[in] cp0 - user name\n */\nint\nokname(char *cp0)\n{\n\tint c;\n\tchar *cp;\n\n\tcp = cp0;\n\tdo {\n\t\tc = *cp;\n\t\tif (c & 0200)\n\t\t\tgoto bad;\n\t\tif (!isalpha(c) && !isdigit(c) && c != '_' && c != '-' && c != '.')\n\t\t\tgoto bad;\n\t} while (*++cp);\n\treturn (1);\n\nbad:\n\twarnx(\"%s: invalid user name\", cp0);\n\treturn (0);\n}\n\n/**\n * @brief\n */\nint\nsusystem(char *s,\n\t uid_t userid,\t /* used in unix */\n\t char *username) /* used in windows */\n{\n\n#ifdef WIN32\n\tHANDLE hUser;\n\tint rc;\n\n\tSTARTUPINFO si = {0};\n\tPROCESS_INFORMATION pi = {0};\n\tint flags = CREATE_DEFAULT_ERROR_MODE | CREATE_NEW_CONSOLE |\n\t\t    CREATE_NEW_PROCESS_GROUP;\n\n\tif ((hUser = LogonUserNoPass(username)) == INVALID_HANDLE_VALUE)\n\t\treturn (1);\n\n\trc = CreateProcessAsUser(hUser, NULL, s, NULL, NULL, TRUE, flags,\n\t\t\t\t NULL, NULL, &si, &pi);\n\n\tif (rc == 0)\n\t\terrno = GetLastError();\n\n\tif (errno != 0) {\n\t\trc = errno + 10000; /* error on fork (100xx), retry */\n\t} else {\n\t\tif (WaitForSingleObject(pi.hProcess,\n\t\t\t\t\tINFINITE) != WAIT_OBJECT_0)\n\t\t\terrno = GetLastError();\n\t\telse if (!GetExitCodeProcess(pi.hProcess, &rc))\n\t\t\terrno = GetLastError();\n\t\tCloseHandle(pi.hProcess);\n\t\tCloseHandle(pi.hThread);\n\n\t\tif (errno != 0)\n\t\t\trc = (20000 + errno); /* 200xx is error on wait */\n\t}\n\n\tCloseHandle(hUser);\n\treturn (rc);\n#else\n\tint status;\n\tpid_t pid;\n\t\n\tstatic char ok_chars[] = 
\"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789.-_:/@\\\\ \";\n\tif (strspn(s, ok_chars) != strlen(s))\n\t\t_exit(1);\n\n\tpid = fork();\n\tswitch (pid) {\n\t\tcase -1:\n\t\t\treturn (127);\n\n\t\tcase 0:\n\t\t\tif (setuid(userid) == -1)\n\t\t\t\t_exit(126);\n\t\t\texecl(_PATH_BSHELL, \"sh\", \"-c\", s, NULL);\n\t\t\t_exit(127);\n\t}\n\tif (waitpid(pid, &status, 0) < 0)\n\t\tstatus = -1;\n\treturn (status);\n\n#endif\n}\n\n/**\n * @brief\n *\treallocate memory in BUF structure based on\n *\tgiven size of file <fd> in given block size <blksize>\n *\n * @param[in]\tbp - pointer to allocated BUF struct which will be reallocated\n * @param[in]\tfd - fd of file to get file size\n * @param[in]\tblksize - block size to get buffer size in BUF struct\n *\n * @return\tBUF*\n * @retval\tpointer to reallocated BUF struct\n * @par\tNOTE:\n *\tThis pointer can be same as <bp> if size is 0 or less than\n *\talready allocated buffer size in <bp>\n */\nBUF *\nallocbuf(BUF *bp, int fd, int blksize)\n{\n\tstruct stat stb = {0};\n\tint size = 0;\n\n\tif (fstat(fd, &stb) < 0) {\n\t\trun_err(\"fstat: %s\", strerror(errno));\n\t\treturn (0);\n\t}\n\n#ifdef WIN32\n\tsize = 0;\n#else\n\tsize = (((int) stb.st_blksize + blksize - 1) / blksize) * blksize;\n#endif\n\n\tif (size == 0)\n\t\tsize = blksize;\n\tif (bp->cnt >= size)\n\t\treturn (bp);\n\tif (bp->buf) {\n\t\tchar *tbuf;\n\t\ttbuf = realloc(bp->buf, size);\n\t\tif (tbuf == NULL)\n\t\t\tfree(bp->buf);\n\t\tbp->buf = tbuf;\n\t} else\n\t\tbp->buf = malloc(size);\n\tif (bp->buf == NULL) {\n\t\tbp->cnt = 0;\n\t\trun_err(\"%s\", strerror(errno));\n\t\treturn (0);\n\t}\n\tbp->cnt = size;\n\treturn (bp);\n}\n\n/**\n * @brief\n *\tlog warning msg.\n *\n * @param[in] signo -  signal num\n */\nvoid\nlostconn(int signo)\n{\n\tif (!iamremote)\n\t\twarnx(\"lost connection\");\n#ifdef USELOG\n\tuse_neterr++;\n#endif /* USELOG */\n\n\texit(1);\n}\n"
  },
  {
    "path": "src/resmom/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nsbin_PROGRAMS = pbs_mom\n\npbs_mom_CPPFLAGS = \\\n\t-DPBS_MOM \\\n\t-I$(top_srcdir)/src/include \\\n\t-I$(top_srcdir)/src/resmom/linux \\\n\t@hwloc_flags@ \\\n\t@hwloc_inc@ \\\n\t@pmix_inc@ \\\n\t@libz_inc@ \\\n\t@PYTHON_INCLUDES@ \\\n\t@KRB5_CFLAGS@\n\npbs_mom_LDADD = \\\n\t$(top_builddir)/src/lib/Libpbs/libpbs.la \\\n\t$(top_builddir)/src/lib/Libattr/libattr.a \\\n\t$(top_builddir)/src/lib/Liblog/liblog.a \\\n\t$(top_builddir)/src/lib/Libnet/libnet.a \\\n\t$(top_builddir)/src/lib/Libsec/libsec.a \\\n\t$(top_builddir)/src/lib/Libsite/libsite.a \\\n\t$(top_builddir)/src/lib/Libtpp/libtpp.a \\\n\t$(top_builddir)/src/lib/Libutil/libutil.a \\\n\t@KRB5_LIBS@ \\\n\t@hwloc_lib@ \\\n\t@pmix_lib@ \\\n\t@PYTHON_LDFLAGS@ \\\n\t@PYTHON_LIBS@ \\\n\t@libz_lib@ \\\n\t-lssl \\\n\t-lcrypto\n\npbs_mom_SOURCES = \\\n\t$(top_builddir)/src/lib/Libattr/job_attr_def.c \\\n\t$(top_builddir)/src/lib/Libattr/node_attr_def.c \\\n\t$(top_builddir)/src/lib/Libattr/resc_def_all.c \\\n\t$(top_builddir)/src/lib/Libpython/shared_python_utils.c \\\n\t$(top_srcdir)/src/server/mom_info.c \\\n\t$(top_srcdir)/src/server/attr_recov.c \\\n\t$(top_srcdir)/src/server/dis_read.c \\\n\t$(top_srcdir)/src/server/jattr_get_set.c \\\n\t$(top_srcdir)/src/server/nattr_get_set.c \\\n\t$(top_srcdir)/src/server/job_func.c \\\n\t$(top_srcdir)/src/server/process_request.c 
\\\n\t$(top_srcdir)/src/server/reply_send.c \\\n\t$(top_srcdir)/src/server/req_quejob.c \\\n\t$(top_srcdir)/src/server/resc_attr.c \\\n\t$(top_srcdir)/src/server/vnparse.c \\\n\t$(top_srcdir)/src/server/setup_resc.c \\\n\tlinux/mom_mach.c \\\n\tlinux/mom_mach.h \\\n\tlinux/mom_start.c \\\n\tlinux/pe_input.c \\\n\tcatch_child.c \\\n\tjob_recov_fs.c \\\n\tmock_run.c \\\n\tmock_run.h \\\n\tmom_comm.c \\\n\tmom_hook_func.c \\\n\tmom_inter.c \\\n\tlinux/mom_func.c \\\n\tmom_main.c \\\n\tmom_updates_bundle.c \\\n\tmom_pmix.c \\\n\tmom_pmix.h \\\n\tmom_server.c \\\n\tmom_vnode.c \\\n\tmom_walltime.c \\\n\tpopen.c \\\n\tprolog.c \\\n\trequests.c \\\n\trm_dep.h \\\n\tstage_func.c \\\n\tstart_exec.c \\\n\tvnode_storage.c \\\n\trenew_creds.c \\\n\trenew_creds.h\n\nif ALPS_ENABLED\npbs_mom_CPPFLAGS += -DMOM_ALPS=1 @expat_inc@\npbs_mom_LDADD += @expat_lib@\npbs_mom_SOURCES += linux/alps.c\nendif\n"
  },
  {
    "path": "src/resmom/catch_child.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#ifdef PYTHON\n#include <pbs_python_private.h>\n#include <Python.h>\n#endif\n\n#include <unistd.h>\n#include <dirent.h>\n#include <pwd.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <limits.h>\n#include <assert.h>\n#include <ctype.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <signal.h>\n#include <string.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n#include \"dis.h\"\n#include \"libpbs.h\"\n#include \"list_link.h\"\n#include \"server_limits.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"job.h\"\n#include \"log.h\"\n#include \"work_task.h\"\n#include \"credential.h\"\n#include \"batch_request.h\"\n#include \"net_connect.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"mom_mach.h\"\n#include \"mom_func.h\"\n#include \"mom_server.h\"\n#include \"mom_vnode.h\"\n#include \"pbs_error.h\"\n#include \"tpp.h\"\n#include \"mom_hook_func.h\"\n#include \"placementsets.h\"\n#include \"hook.h\"\n#include \"renew_creds.h\"\n#include \"mock_run.h\"\n#include <libutil.h>\n\n/**\n * @file\tcatch_child.c\n */\n/* External Functions */\nvoid (*free_job_CPUs)(job *) = NULL;\n\n/* External Globals */\n\nextern char mom_host[];\nextern char *path_epilog;\nextern char *path_jobs;\nextern unsigned int default_server_port;\nextern 
pbs_list_head svr_alljobs;\nextern int exiting_tasks;\nextern char *msg_daemonname;\nextern char *mom_home;\n#ifndef WIN32\nextern int termin_child;\n#endif\nextern int server_stream;\nextern time_t time_now;\nextern pbs_list_head mom_polljobs;\nextern unsigned int pbs_mom_port;\nextern int gen_nodefile_on_sister_mom;\n#if MOM_ALPS\nextern useconds_t alps_release_wait_time;\nextern int alps_release_timeout;\nextern useconds_t alps_release_jitter;\n#endif\n\nextern char *path_hooks_workdir;\n\n#ifndef WIN32\n/**\n * @brief\n *\tcatch_child() - the signal handler for  SIGCHLD.\n *\n * @param[in] sig - signal number\n *\n * @par To keep the signal handler simple for\n *\tSIGCHLD  - just indicate there was one.\n *\n * @return Void\n *\n */\nvoid\ncatch_child(int sig)\n{\n\ttermin_child = 1;\n}\n#endif\n\n/**\n * @brief\n *\treturns execution node info for job pjob\n *\n * @param[in] pjob - job pointer to job\n * @param[in] nodeid - nodeid on which  job execs\n *\n * @return hnodent *\n * @retval  hostdetails  SUCCESS\n * @retval  NULL  \t Failure\n *\n */\nhnodent *\nget_node(job *pjob, tm_node_id nodeid)\n{\n\tint i;\n\tvmpiprocs *vp = pjob->ji_vnods;\n\n\tfor (i = 0; i < pjob->ji_numvnod; i++, vp++) {\n\t\tif (vp->vn_node == nodeid)\n\t\t\treturn vp->vn_host;\n\t}\n\treturn NULL;\n}\n\n/**\n * @brief\n *\tRestart each task which has exited and has TI_FLAGS_CHKPT turned on.\n *\tIf all tasks have been restarted, turn off MOM_CHKPT_POST.\n *\n * @param[in] pjob - pointer to job structure\n *\n * @return Void\n *\n */\nvoid\nchkpt_partial(job *pjob)\n{\n\tint i;\n\tchar namebuf[MAXPATHLEN + 1];\n\tchar *filnam;\n\tpbs_task *ptask;\n\tint texit = 0;\n\textern char task_fmt[];\n\textern char *path_checkpoint;\n\n\tassert(pjob != NULL);\n\n\tpbs_strncpy(namebuf, path_checkpoint, sizeof(namebuf));\n\tif (*pjob->ji_qs.ji_fileprefix != '\\0')\n\t\tstrcat(namebuf, pjob->ji_qs.ji_fileprefix);\n\telse\n\t\tstrcat(namebuf, pjob->ji_qs.ji_jobid);\n\tstrcat(namebuf, 
JOB_CKPT_SUFFIX);\n\n\ti = strlen(namebuf);\n\tfilnam = &namebuf[i];\n\n\tpjob->ji_sampletim = 0; /* reset sampletime for cpupercent */\n\n\tfor (ptask = (pbs_task *) GET_NEXT(pjob->ji_tasks);\n\t     ptask != NULL;\n\t     ptask = (pbs_task *) GET_NEXT(ptask->ti_jobtask)) {\n\t\t/*\n\t\t ** See if the task was marked as one of those that did\n\t\t ** actually checkpoint.\n\t\t */\n\t\tif ((ptask->ti_flags & TI_FLAGS_CHKPT) == 0)\n\t\t\tcontinue;\n\t\ttexit++;\n\t\t/*\n\t\t ** Now see if it was reaped.  We don't want to\n\t\t ** fool with it until we see it die.\n\t\t */\n\t\tif (ptask->ti_qs.ti_status != TI_STATE_EXITED)\n\t\t\tcontinue;\n\t\ttexit--;\n\n\t\tsprintf(filnam, task_fmt, ptask->ti_qs.ti_task);\n\n\t\t/*\n\t\t **\tTry action script with no post function.\n\t\t */\n\t\ti = do_mom_action_script(RestartAction, pjob, ptask,\n\t\t\t\t\t namebuf, NULL);\n\t\tif (i != 0) { /* script failed */\n\t\t\t/* if there is no script, try native support */\n\t\t\tif (i == -2)\n\t\t\t\ti = mach_restart(ptask, namebuf);\n\t\t\tif (i != 0) /* everything failed */\n\t\t\t\tgoto fail;\n\t\t}\n\n\t\tptask->ti_qs.ti_status = TI_STATE_RUNNING;\n\t\t/*\n\t\t ** Turn off TI_FLAGS_CHKPT if TI_FLAGS_SAVECKP is off.\n\t\t ** Turn off TI_FLAGS_SAVECKP if it is on.\n\t\t */\n\t\tif ((ptask->ti_flags & TI_FLAGS_SAVECKP) == 0)\n\t\t\tptask->ti_flags &= ~TI_FLAGS_CHKPT;\n\t\telse\n\t\t\tptask->ti_flags &= ~TI_FLAGS_SAVECKP;\n\t\t(void) task_save(ptask);\n\t}\n\n\tif (texit == 0) {\n\t\tchar oldname[MAXPATHLEN + 1];\n\t\tstruct stat statbuf;\n\n\t\t/*\n\t\t ** All tasks should now be running.\n\t\t ** Turn off MOM_CHKPT_POST and MOM_CHKPT_ACTIVE flags.\n\t\t ** Job is back to where it was before the bad checkpoint\n\t\t ** attempt.\n\t\t */\n\t\tpjob->ji_flags &= ~MOM_CHKPT_POST;\n\t\tpjob->ji_flags &= ~MOM_CHKPT_ACTIVE;\n\t\t/*\n\t\t ** Get rid of incomplete checkpoint directory and\n\t\t ** move old chkpt dir back to regular if it exists.\n\t\t */\n\t\t*filnam = '\\0';\n\t\t(void) 
remtree(namebuf);\n\t\tstrcpy(oldname, namebuf);\n\t\tstrcat(oldname, \".old\");\n\t\tif (stat(oldname, &statbuf) == 0) {\n\t\t\tif (rename(oldname, namebuf) == -1)\n\t\t\t\tpjob->ji_qs.ji_svrflags &= ~JOB_SVFLG_CHKPT;\n\t\t}\n\t}\n\treturn;\n\nfail:\n\t/*\n\t ** If we cannot restart a task from a partially failed checkpoint,\n\t ** the job will be killed.\n\t */\n\tlog_joberr(errno, __func__, \"failed to restart\", pjob->ji_qs.ji_jobid);\n\tpjob->ji_flags &= ~MOM_CHKPT_POST;\n\t(void) kill_job(pjob, SIGKILL);\n\treturn;\n}\n\n/**\n * @brief\n * \tsend resources-used (rescused) updates for running jobs to the server\n *\n * @return void\n *\n */\nvoid\nupdate_jobs_status(void)\n{\n\tjob *pjob = (job *) GET_NEXT(svr_alljobs);\n\tfor (; pjob; pjob = (job *) GET_NEXT(pjob->ji_alljobs)) {\n\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0)\n\t\t\tcontinue;\n\t\tif (!check_job_substate(pjob, JOB_SUBSTATE_RUNNING))\n\t\t\tcontinue;\n\t\tenqueue_update_for_send(pjob, IS_RESCUSED);\n\t}\n}\n\n/**\n * @brief\n * \tsend_obit - routine called following completion of epilogue process.\n *\tThe job is then moved into substate OBIT and an Obit message is sent to the server over the TPP stream\n *\n * @param[in] pjob - pointer to job structure\n * @param[in] exval - exit value\n *\n * @return Void\n *\n */\nvoid\nsend_obit(job *pjob, int exval)\n{\n#ifndef WIN32\n\t/* update pjob with values set from an epilogue hook */\n\t/* since these are hooks that are executing in a child process */\n\t/* and changes inside the child will not be reflected in main */\n\t/* mom */\n\tif (num_eligible_hooks(HOOK_EVENT_EXECJOB_EPILOGUE) > 0) {\n\t\tchar hook_outfile[MAXPATHLEN + 1];\n\t\tstruct stat stbuf;\n\n\t\tsnprintf(hook_outfile, MAXPATHLEN, FMT_HOOK_JOB_OUTFILE, path_hooks_workdir, pjob->ji_qs.ji_jobid);\n\t\tif (stat(hook_outfile, &stbuf) == 0) {\n\t\t\tpbs_list_head vnl_changes;\n\t\t\tint reject_deletejob = 0;\n\t\t\tint reject_rerunjob = 0;\n\t\t\tint accept_flag = 1;\n\n\t\t\tCLEAR_HEAD(vnl_changes);\n\t\t\tif 
(get_hook_results(hook_outfile, &accept_flag, NULL, NULL, 0,\n\t\t\t\t\t     &reject_rerunjob, &reject_deletejob, NULL,\n\t\t\t\t\t     NULL, 0, &vnl_changes, pjob,\n\t\t\t\t\t     NULL, 0, NULL) != 0) {\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_ERR, __func__, \"Failed to get epilogue hook results\");\n\t\t\t\tvna_list_free(vnl_changes);\n\t\t\t} else {\n\t\t\t\t/* Delete job or reject job actions */\n\t\t\t\t/* NOTE: Must appear here before vnode changes, */\n\t\t\t\t/* since this action will be sent whether hook  */\n\t\t\t\t/* script executed by PBSADMIN or PBSUSER.      */\n\t\t\t\tif (reject_deletejob) {\n\t\t\t\t\t/* deletejob takes precedence */\n\t\t\t\t\tnew_job_action_req(pjob, HOOK_PBSADMIN, JOB_ACT_REQ_DELETE);\n\t\t\t\t} else if (reject_rerunjob) {\n\t\t\t\t\tnew_job_action_req(pjob, HOOK_PBSADMIN, JOB_ACT_REQ_REQUEUE);\n\t\t\t\t} else if (!accept_flag) {\n\t\t\t\t\t/* Per EDD on a pbs.event().reject() from an */\n\t\t\t\t\t/* epilogue hook, must delete the job. */\n\t\t\t\t\tnew_job_action_req(pjob, HOOK_PBSADMIN, JOB_ACT_REQ_DELETE);\n\t\t\t\t}\n\n\t\t\t\t/* Whether or not we accept or reject, we'll make */\n\t\t\t\t/* job changes, vnode changes, job actions */\n\t\t\t\tenqueue_update_for_send(pjob, IS_RESCUSED_FROM_HOOK);\n\n\t\t\t\t/* Push vnl hook changes to server */\n\t\t\t\thook_requests_to_server(&vnl_changes);\n\t\t\t}\n\t\t\t/* need to clear out hook_outfile, */\n\t\t\t/* as epilogue hook processing  */\n\t\t\t/* in mom_process_hooks() will end up appending to */\n\t\t\t/* this same file when job is rerun, resulting in */\n\t\t\t/* duplicate actions. 
*/\n\t\t\tunlink(hook_outfile);\n\t\t}\n\t}\n#endif\n\n\tif (is_jattr_set(pjob, JOB_ATR_run_version)) {\n\t\tDBPRT((\"send_obit: job %s run_version %ld exval %d\\n\",\n\t\t       pjob->ji_qs.ji_jobid, get_jattr_long(pjob, JOB_ATR_run_version), exval))\n\t} else {\n\t\tDBPRT((\"send_obit: job %s runcount %ld exval %d\\n\",\n\t\t       pjob->ji_qs.ji_jobid, get_jattr_long(pjob, JOB_ATR_runcount), exval))\n\t}\n\n\tpjob->ji_mompost = NULL;\n\tif (!check_job_substate(pjob, JOB_SUBSTATE_OBIT)) {\n\t\tset_job_substate(pjob, JOB_SUBSTATE_OBIT);\n\t\tjob_save(pjob);\n\t}\n\n\tpjob->ji_sampletim = time_now; /* when obit sent to server */\n\t/* epilogue script exit of 2 means requeue for\t*/\n\t/* chkpt/restart if job was checkpointed\t*/\n\tif (exval == 2 && (pjob->ji_qs.ji_svrflags & JOB_SVFLG_CHKPT))\n\t\tpjob->ji_qs.ji_un.ji_momt.ji_exitstat = JOB_EXEC_QUERST;\n\tif (enqueue_update_for_send(pjob, IS_JOBOBIT) != 0)\n\t\tlog_joberr(PBSE_SYSTEM, __func__, \"Failed to enqueue job obit\", pjob->ji_qs.ji_jobid);\n\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, \"Obit sent\");\n}\n\n/**\n * @brief\n * \tLook for job tasks that have terminated (see scan_for_terminating),\n *\tand for each task, find which job the task was part of, and if it is\n *\tthe top shell, start end of job processing by running the epilogue.\n *\n * @return Void\n *\n */\n\nvoid\nscan_for_exiting(void)\n{\n\n\tpid_t cpid;\n\tint i;\n\tint extval;\n\tint found_one = 0;\n\tu_long hours, mins, secs;\n\tjob *nxjob;\n\tjob *pjob;\n\tpbs_task *ptask;\n\tobitent *pobit;\n\tchar *cookie;\n\tu_long gettime(resource * pres);\n\tu_long getsize(resource * pres);\n\tint im_compose(int, char *, char *, int, tm_event_t, tm_task_id, int);\n\tmom_hook_input_t hook_input;\n\tint has_epilog = 0;\n\tint update_svr = 0;\n\n#ifdef WIN32\n\t/* update the latest intelligence about the running jobs; */\n\ttime_now = time(NULL);\n\tmom_set_use_all();\n\tupdate_svr = 1;\n#endif\n\tif 
(num_eligible_hooks(HOOK_EVENT_EXECJOB_EPILOGUE) > 0 || file_exists(path_epilog))\n\t\thas_epilog = 1;\n\n\t/*\n\t ** Look through the jobs.  Each one has its tasks examined\n\t ** and if the job is EXITING, it meets its fate depending\n\t ** on whether this is the Mother Superior or not.\n\t */\n\tfor (pjob = (job *) GET_NEXT(svr_alljobs); pjob; pjob = nxjob) {\n\t\tnxjob = (job *) GET_NEXT(pjob->ji_alljobs);\n\n\t\tif (pjob->ji_numnodes > 1 && !pjob->ji_msconnected && pjob->ji_nodeid) /* assume that MS has a connection to itself at all times */\n\t\t\tcontinue;\n\n\t\t/*\n\t\t ** If a restart is active, skip this job since\n\t\t ** not all of the tasks may have started yet.\n\t\t */\n\t\tif (pjob->ji_flags & MOM_RESTART_ACTIVE) {\n\t\t\tcontinue;\n\t\t}\n\t\t/*\n\t\t ** If a checkpoint with aborts is active,\n\t\t ** skip it.  We don't want to report any obits\n\t\t ** until we know that the whole thing worked.\n\t\t */\n\t\tif ((pjob->ji_flags & MOM_CHKPT_ACTIVE) &&\n\t\t    (pjob->ji_mompost != NULL)) {\n\t\t\tcontinue;\n\t\t}\n\t\t/*\n\t\t ** If the job has had an error doing a checkpoint with\n\t\t ** abort, the MOM_CHKPT_POST flag will be on.\n\t\t */\n\t\tif (pjob->ji_flags & MOM_CHKPT_POST) {\n\t\t\tchkpt_partial(pjob);\n\t\t\tcontinue;\n\t\t}\n\n\t\tif (is_jattr_set(pjob, JOB_ATR_Cookie))\n\t\t\tcookie = get_jattr_str(pjob, JOB_ATR_Cookie);\n\t\telse\n\t\t\tcookie = NULL;\n\n\t\t/*\n\t\t ** Check each EXITED task.  
They transition to DEAD here.\n\t\t */\n\t\tfor (ptask = (pbs_task *) GET_NEXT(pjob->ji_tasks);\n\t\t     ptask != NULL;\n\t\t     ptask = (pbs_task *) GET_NEXT(ptask->ti_jobtask)) {\n\t\t\tif (ptask->ti_qs.ti_status != TI_STATE_EXITED)\n\t\t\t\tcontinue;\n\t\t\t/*\n\t\t\t ** Check if it is the top shell.\n\t\t\t */\n\t\t\tif (ptask->ti_qs.ti_parenttask == TM_NULL_TASK) {\n\t\t\t\tint *exitstat =\n\t\t\t\t\t&pjob->ji_qs.ji_un.ji_momt.ji_exitstat;\n\n\t\t\t\tset_job_state(pjob, JOB_STATE_LTR_EXITING);\n\t\t\t\tset_job_substate(pjob, JOB_SUBSTATE_KILLSIS);\n\t\t\t\tif (*exitstat >= 0)\n\t\t\t\t\t*exitstat = ptask->ti_qs.ti_exitstat;\n\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t  LOG_INFO,\n\t\t\t\t\t  pjob->ji_qs.ji_jobid, "Terminated");\n\n\t\t\t\tif (send_sisters(pjob, IM_KILL_JOB, NULL) == 0) {\n\t\t\t\t\tset_job_substate(pjob, JOB_SUBSTATE_EXITING);\n\t\t\t\t\t/*\n\t\t\t\t\t ** if the job was checkpointed ok,\n\t\t\t\t\t ** reset ji_nodekill to prevent mom_comm\n\t\t\t\t\t ** error on restart resulting in job\n\t\t\t\t\t ** being killed.\n\t\t\t\t\t */\n\t\t\t\t\tif ((pjob->ji_flags & MOM_CHKPT_ACTIVE) &&\n\t\t\t\t\t    !(pjob->ji_flags & MOM_CHKPT_POST) &&\n\t\t\t\t\t    (pjob->ji_qs.ji_svrflags & JOB_SVFLG_CHKPT))\n\t\t\t\t\t\tpjob->ji_nodekill = TM_ERROR_NODE;\n\t\t\t\t}\n\t\t\t}\n\t\t\t/*\n\t\t\t ** Go through any TM client obits waiting.\n\t\t\t */\n\t\t\tfor (pobit = (obitent *) GET_NEXT(ptask->ti_obits);\n\t\t\t     pobit != NULL;\n\t\t\t     pobit = (obitent *) GET_NEXT(ptask->ti_obits)) {\n\n\t\t\t\thnodent *pnode;\n\n\t\t\t\t/* see if this is a batch request */\n\t\t\t\tif (pobit->oe_type == OBIT_TYPE_BREVENT) {\n\t\t\t\t\tpobit->oe_u.oe_preq->rq_reply.brp_code =\n\t\t\t\t\t\tPBSE_NONE;\n\t\t\t\t\tpobit->oe_u.oe_preq->rq_reply.brp_auxcode =\n\t\t\t\t\t\tptask->ti_qs.ti_exitstat;\n\t\t\t\t\tpobit->oe_u.oe_preq->rq_reply.brp_choice =\n\t\t\t\t\t\tBATCH_REPLY_CHOICE_NULL;\n\t\t\t\t\t(void) 
reply_send(pobit->oe_u.oe_preq);\n\t\t\t\t\tgoto end_loop;\n\t\t\t\t}\n\n\t\t\t\tpnode = get_node(pjob, pobit->oe_u.oe_tm.oe_node);\n\n\t\t\t\t/* see if this is mother superior or a sister */\n\t\t\t\tif (pjob->ji_nodeid == pnode->hn_node) {\n\t\t\t\t\tpbs_task *tp;\n\n\t\t\t\t\t/* Send response locally */\n\t\t\t\t\ttp = task_find(pjob, pobit->oe_u.oe_tm.oe_taskid);\n\t\t\t\t\tif (pobit->oe_u.oe_tm.oe_fd != -1) {\n\t\t\t\t\t\tassert(tp != NULL);\n\t\t\t\t\t\t(void) tm_reply(pobit->oe_u.oe_tm.oe_fd,\n\t\t\t\t\t\t\t\ttp->ti_protover, IM_ALL_OKAY,\n\t\t\t\t\t\t\t\tpobit->oe_u.oe_tm.oe_event);\n\t\t\t\t\t\t(void) diswsi(pobit->oe_u.oe_tm.oe_fd,\n\t\t\t\t\t\t\t      ptask->ti_qs.ti_exitstat);\n\t\t\t\t\t\t(void) dis_flush(pobit->oe_u.oe_tm.oe_fd);\n\t\t\t\t\t}\n\t\t\t\t} else if (pnode->hn_stream != -1 &&\n\t\t\t\t\t   cookie != NULL) {\n\t\t\t\t\t/*\n\t\t\t\t\t * Send a response over to MOM\n\t\t\t\t\t * whose child sent the request\n\t\t\t\t\t */\n\t\t\t\t\t(void) im_compose(pnode->hn_stream,\n\t\t\t\t\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t\t\t\t\t  cookie, IM_ALL_OKAY,\n\t\t\t\t\t\t\t  pobit->oe_u.oe_tm.oe_event,\n\t\t\t\t\t\t\t  pobit->oe_u.oe_tm.oe_taskid, IM_OLD_PROTOCOL_VER);\n\t\t\t\t\t(void) diswsi(pnode->hn_stream,\n\t\t\t\t\t\t      ptask->ti_qs.ti_exitstat);\n\t\t\t\t\t(void) dis_flush(pnode->hn_stream);\n\t\t\t\t}\n\n\t\t\tend_loop:\n\t\t\t\tdelete_link(&pobit->oe_next);\n\t\t\t\tfree(pobit);\n\t\t\t}\n\t\t\tptask->ti_qs.ti_status = TI_STATE_DEAD;\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\t\t\tAFSLOG_TERM(ptask);\n#endif\n\n\t\t\t/*\n\t\t\t ** KLUDGE\n\t\t\t ** We need to save the value of the sid here just\n\t\t\t ** in case it is exiting from a checkpoint/abort\n\t\t\t ** and it will be restarted later.  
Just set it\n\t\t\t ** to the negative of itself.\n\t\t\t */\n\t\t\tif (ptask->ti_qs.ti_sid <= 1) {\n\t\t\t\tptask->ti_qs.ti_sid = 0;\n\t\t\t} else\n\t\t\t\tptask->ti_qs.ti_sid = -ptask->ti_qs.ti_sid;\n\t\t\ttask_save(ptask);\n\t\t}\n\n\t\t/*\n\t\t ** Look to see if the job has terminated.  If it is\n\t\t ** in any state other than EXITING continue on.\n\t\t */\n\t\tif (!check_job_substate(pjob, JOB_SUBSTATE_EXITING))\n\t\t\tcontinue;\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\t\t/* job in state JOB_SUBSTATE_EXITING destroy creds */\n\t\tif (cred_by_job(pjob, CRED_DESTROY) != PBS_KRB5_OK) {\n\t\t\tlog_record(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid,\n\t\t\t\t   \"failed to destroy credentials\");\n\t\t}\n#endif\n\n\t\t/*\n\t\t ** This job is exiting.  If MOM_CHKPT_ACTIVE is on, it\n\t\t ** is time to turn if off.\n\t\t */\n\t\tpjob->ji_flags &= ~MOM_CHKPT_ACTIVE;\n\n\t\t/*\n\t\t ** Once a job is exiting each task that is done running\n\t\t ** gets a log message for the cpu and mem usage.\n\t\t */\n\t\tptask = (pbs_task *) GET_NEXT(pjob->ji_tasks);\n\t\twhile (ptask != NULL) {\n\t\t\tsecs = ptask->ti_cput;\n\t\t\thours = secs / 3600;\n\t\t\tsecs -= hours * 3600;\n\t\t\tmins = secs / 60;\n\t\t\tsecs -= mins * 60;\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"task %8.8X cput=%02lu:%2.2lu:%2.2lu\",\n\t\t\t\tptask->ti_qs.ti_task,\n\t\t\t\thours, mins, secs);\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB,\n\t\t\t\t  LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\tptask = (pbs_task *) GET_NEXT(ptask->ti_jobtask);\n\t\t}\n\n\t\t/*\n\t\t ** Look to see if I am a regular sister.  If so,\n\t\t ** check to see if there is a obit event to\n\t\t ** send back to mother superior.\n\t\t ** Otherwise, I need to wait for her to send a KILL_JOB\n\t\t ** so I can send the obit (unless she died).\n\t\t */\n\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0) {\n\t\t\tint stream = (pjob->ji_hosts == NULL) ? 
-1 : pjob->ji_hosts[0].hn_stream;\n\n\t\t\t/*\n\t\t\t ** Check to see if I'm still in touch with\n\t\t\t ** the head office.  If not, I'm just going to\n\t\t\t ** get rid of this job.\n\t\t\t */\n\t\t\tif (stream == -1) {\n\t\t\t\t(void) kill_job(pjob, SIGKILL);\n\t\t\t\tif ((pjob->ji_qs.ji_svrflags &\n\t\t\t\t     (JOB_SVFLG_CHKPT | JOB_SVFLG_ChkptMig)) == 0) {\n\t\t\t\t\tmom_deljob(pjob);\n\t\t\t\t}\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\t/*\n\t\t\t * No event waiting for sending info to MS\n\t\t\t * so I'll just sit tight.\n\t\t\t */\n\t\t\tif (pjob->ji_obit == TM_NULL_EVENT)\n\t\t\t\tcontinue;\n\n\t\t\t/* Check to see if any tasks are running */\n\t\t\tptask = (pbs_task *) GET_NEXT(pjob->ji_tasks);\n\t\t\twhile (ptask != NULL) {\n\t\t\t\tif (ptask->ti_qs.ti_status == TI_STATE_RUNNING)\n\t\t\t\t\tbreak;\n\t\t\t\tptask = (pbs_task *) GET_NEXT(ptask->ti_jobtask);\n\t\t\t}\n\t\t\t/* Still somebody there so don't send it yet. */\n\t\t\tif (ptask != NULL)\n\t\t\t\tcontinue;\n\t\t\t/* No tasks running. Format and send a reply to the mother superior */\n\t\t\tif (cookie != NULL) {\n\t\t\t\t(void) im_compose(stream, pjob->ji_qs.ji_jobid,\n\t\t\t\t\t\t  cookie, IM_ALL_OKAY,\n\t\t\t\t\t\t  pjob->ji_obit, TM_NULL_TASK, IM_OLD_PROTOCOL_VER);\n\t\t\t\t(void) diswul(stream,\n\t\t\t\t\t      resc_used(pjob, \"cput\", gettime));\n\t\t\t\t(void) diswul(stream,\n\t\t\t\t\t      resc_used(pjob, \"mem\", getsize));\n\t\t\t\t(void) diswul(stream,\n\t\t\t\t\t      resc_used(pjob, \"cpupercent\", gettime));\n\t\t\t\t(void) send_resc_used_to_ms(stream, pjob);\n\t\t\t\t(void) dis_flush(stream);\n\t\t\t\tpjob->ji_obit = TM_NULL_EVENT;\n\t\t\t}\n\t\t\tcontinue;\n\t\t}\n\n\t\t/*\n\t\t * At this point, we know we are Mother Superior for this\n\t\t * job which is EXITING.  
Time for it to die.\n\t\t */\n\t\tpjob->ji_qs.ji_svrflags &= ~(JOB_SVFLG_Suspend |\n\t\t\t\t\t     JOB_SVFLG_Actsuspd);\n\t\tif (pjob->ji_qs.ji_un.ji_momt.ji_exitstat != JOB_EXEC_INITABT)\n\t\t\t(void) kill_job(pjob, SIGKILL);\n\t\tdelete_link(&pjob->ji_jobque); /* unlink from poll list */\n\n\t\t/*\n\t\t * The SISTER_KILLDONE flag needs to be reset so\n\t\t * we can talk to the sisterhood.\n\t\t */\n\t\tfor (i = 0; i < pjob->ji_numnodes; i++) {\n\t\t\thnodent *np = &pjob->ji_hosts[i];\n\n\t\t\tif (np->hn_node == pjob->ji_nodeid) /* me */\n\t\t\t\tcontinue;\n\n\t\t\tif (np->hn_sister == SISTER_KILLDONE)\n\t\t\t\tnp->hn_sister = SISTER_OKAY;\n\t\t}\n\n\t\t/* Job termination begins */\n\n\t\t/* stop counting walltime */\n\t\tstop_walltime(pjob);\n\n\t\t/* summary for MS */\n\t\tsecs = resc_used(pjob, \"cput\", gettime);\n\t\thours = secs / 3600;\n\t\tsecs -= hours * 3600;\n\t\tmins = secs / 60;\n\t\tsecs -= mins * 60;\n\t\tsprintf(log_buffer,\n\t\t\t\"%s cput=%02lu:%2.2lu:%2.2lu mem=%lukb\",\n\t\t\tmom_short_name, hours, mins, secs,\n\t\t\tresc_used(pjob, \"mem\", getsize));\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB,\n\t\t\t  LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\n\t\t/* summary for other nodes */\n\t\tfor (i = 0; i < pjob->ji_numrescs; i++) {\n\t\t\tnoderes *nr = &pjob->ji_resources[i];\n\t\t\tsecs = nr->nr_cput;\n\n\t\t\thours = secs / 3600;\n\t\t\tsecs -= hours * 3600;\n\t\t\tmins = secs / 60;\n\t\t\tsecs -= mins * 60;\n\n\t\t\t/*\n\t\t\t ** ji_hosts starts with node 0 (MS)\n\t\t\t ** ji_resource starts with node 1\n\t\t\t */\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"%s cput=%02lu:%2.2lu:%2.2lu mem=%lukb\",\n\t\t\t\tpjob->ji_resources[i].nodehost ? 
pjob->ji_resources[i].nodehost : \"\",\n\t\t\t\thours, mins, secs, nr->nr_mem);\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB,\n\t\t\t\t  LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t}\n\n\t\t/*\n\t\t ** Do dependent end of job processing if it needs to be\n\t\t ** done.\n\t\t */\n\t\tif (job_end_final != NULL)\n\t\t\tjob_end_final(pjob);\n\n\t\tif (mock_run || !has_epilog) {\n\t\t\tsend_obit(pjob, 0);\n\t\t\tcontinue;\n\t\t}\n\n\t\t/*\n\t\t * Parent:\n\t\t *  +  fork child process to run epilogue,\n\t\t *  +  look for more terminated jobs.\n\t\t * Child:\n\t\t *  +  Run the epilogue script\n\t\t */\n\n\t\tcpid = fork_me(-1);\n\t\tif (cpid > 0) {\n\t\t\tpjob->ji_sampletim = 0;\n\t\t\tpjob->ji_momsubt = cpid;\n\t\t\tpjob->ji_actalarm = 0;\n\t\t\tpjob->ji_mompost = send_obit;\n\t\t\tset_job_substate(pjob, JOB_SUBSTATE_RUNEPILOG);\n\n\t\t\tif (found_one++ < 20) {\n\t\t\t\tcontinue; /* look for more exiting jobs */\n\t\t\t} else {\n\t\t\t\tbreak; /* 20 exiting jobs at a time is our limit */\n\t\t\t}\n\t\t} else if (cpid < 0 && errno != ENOSYS)\n\t\t\tcontinue; /* curses, failed again */\n\n\t\tif (pjob->ji_grpcache) {\n\t\t\tif ((is_jattr_set(pjob, JOB_ATR_sandbox)) && (strcasecmp(get_jattr_str(pjob, JOB_ATR_sandbox), \"PRIVATE\") == 0)) {\n\t\t\t\t/* in \"sandbox=PRIVATE\" mode so run epilogue in PBS_JOBDIR */\n\t\t\t\tif (chdir(jobdirname(pjob->ji_qs.ji_jobid, pjob->ji_grpcache->gc_homedir)) == -1) \n\t\t\t\t\tlog_errf(-1, __func__, \"chdir failed. ERR : %s\", strerror(errno));\n\t\t\t} else {\n\t\t\t\t/* else run in usr's home */\n\t\t\t\tif (chdir(pjob->ji_grpcache->gc_homedir) == -1) \n\t\t\t\t\tlog_errf(-1, __func__, \"chdir failed. 
ERR : %s\", strerror(errno));\n\t\t\t}\n\t\t}\n\n\t\textval = 0;\n\t\tif (num_eligible_hooks(HOOK_EVENT_EXECJOB_EPILOGUE) > 0) {\n\t\t\tmom_hook_input_init(&hook_input);\n\t\t\thook_input.pjob = pjob;\n\t\t\t(void) mom_process_hooks(HOOK_EVENT_EXECJOB_EPILOGUE, PBS_MOM_SERVICE_NAME, mom_host, &hook_input, NULL, NULL, 0, update_svr);\n\t\t} else {\n\t\t\tif ((is_jattr_set(pjob, JOB_ATR_interactive)) && get_jattr_long(pjob, JOB_ATR_interactive)) {\n\t\t\t\textval = run_pelog(PE_EPILOGUE, path_epilog, pjob, PE_IO_TYPE_NULL);\n\t\t\t} else {\n\t\t\t\textval = run_pelog(PE_EPILOGUE, path_epilog, pjob, PE_IO_TYPE_STD);\n\t\t\t}\n\t\t}\n\t\tif (extval != 2)\n\t\t\textval = 0;\n\n\t\tif (!cpid)\n\t\t\t/* if we are child exit and parent will do send_obit() */\n\t\t\texit(extval);\n\n\t\tsend_obit(pjob, i);\n\t\t/* restore MOM's home if we are foreground */\n\t\tif (chdir(mom_home) == -1) \n\t\t\tlog_errf(-1, __func__, \"chdir failed. ERR : %s\", strerror(errno));\n\t}\n\tif (pjob == NULL)\n\t\texiting_tasks = 0; /* went through all jobs */\n}\n\n/**\n * @brief\n * \t\tchoosing the server to connect to if a failover server is already set up.\n * \t\tSelects primary and secondary alternatively.\n *\n * @param[out] port - Passed through to parse_servername(), not modified here.\n *\n * @return char *\n * @return NULL - failure\n * @retval !NULL - pointer to server name\n */\nstatic char *\nget_servername_failover(unsigned int *port)\n{\n\tstatic int whom_to_connect = 0;\n\n\tif (!pbs_conf.pbs_secondary)\n\t\treturn get_servername(port);\n\telse {\n\t\twhom_to_connect = !whom_to_connect;\n\n\t\tif (whom_to_connect)\n\t\t\treturn get_servername(port);\n\t\telse\n\t\t\treturn parse_servername(pbs_conf.pbs_secondary, port);\n\t}\n}\n\n/**\n * @brief\n * \tsend IS_HELLOSVR message to Server.\n *\n * @param[in]\tstream\t- connection stream\n *\n * @par\n *\tOpen a connection stream to the named server/port if not already exists,\n *\tcompose the IS_HELLOSVR, flush the stream and 
remember for future use.\n *\n *\n * @return\tvoid\n *\n */\n\nvoid\nsend_hellosvr(int stream)\n{\n\tint rc = 0;\n\tchar *svr = NULL;\n\tunsigned int port = default_server_port;\n\textern int mom_net_up;\n\n\tif (mom_net_up == 0)\n\t\treturn;\n\n\tif (stream < 0) {\n\t\tif ((svr = get_servername_failover(&port)) == NULL) {\n\t\t\tlog_err(errno, msg_daemonname, \"get_servername_failover() failed\");\n\t\t\treturn;\n\t\t}\n\n\t\tstream = tpp_open(svr, port);\n\t\tif (stream < 0) {\n\t\t\tlog_errf(errno, msg_daemonname, \"tpp_open(%s, %d) failed\", svr, port);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tif ((rc = is_compose(stream, IS_HELLOSVR)) != DIS_SUCCESS)\n\t\tgoto err;\n\n\tif ((rc = diswui(stream, pbs_mom_port)) != DIS_SUCCESS)\n\t\tgoto err;\n\tif ((rc = dis_flush(stream)) != DIS_SUCCESS)\n\t\tgoto err;\n\n\tserver_stream = stream;\n\n\tif (svr)\n\t\tsprintf(log_buffer, \"HELLO sent to server at %s:%d, stream:%d\", svr, port, stream);\n\telse\n\t\tsprintf(log_buffer, \"HELLO sent to server at stream:%d\", stream);\n\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t  msg_daemonname, log_buffer);\n\treturn;\n\nerr:\n\tif (svr)\n\t\tlog_errf(errno, msg_daemonname, \"Failed to send HELLO at %s:%d\", svr, port);\n\telse\n\t\tlog_errf(errno, msg_daemonname, \"Failed to send HELLO at stream:%d\", stream);\n\ttpp_close(stream);\n\treturn;\n}\n\n/**\n * @brief\n *\tOn mom initialization, recover all running jobs.\n *\n *\tCalled on initialization\n *\t   If the -p option was given (recover = 2), Mom will allow the jobs\n *\t   to continue to run.   
She depends on detecting when they terminate\n *\tvia the slow poll method rather than SIGCHLD.\n *\n *\t   If the -r option was given (recover = 1), MOM is recovering on a\n *  \t   running system and the session id of the jobs should be valid;\n *\t   the jobs are killed.\n *\n *\t   If -r was not given (recover = 0), it is assumed that the whole\n *\t   system, not just MOM, is coming up, the session ids are not valid;\n *\t   so no attempt is made to kill the job processes.  But the jobs are\n *\t   terminated and requeued.\n *\n * @param [in]\trecover - Specifies the recovery mode for MoM.\n * @param [in]\tmultinode_jobs - Pointer to list of pointers to recovered multinode jobs\n *\n */\n\nvoid\ninit_abort_jobs(int recover, pbs_list_head *multinode_jobs)\n{\n\tDIR *dir;\n\tint i, sisters;\n\tstruct dirent *pdirent;\n\tjob *pj = NULL;\n\tchar *job_suffix = JOB_FILE_SUFFIX;\n\tint job_suf_len = strlen(job_suffix);\n\tchar *psuffix;\n\tchar path[MAXPATHLEN + 1];\n\tchar oldp[MAXPATHLEN + 1];\n\tchar rcperr[] = "rcperr.";\n\tstruct stat statbuf;\n\textern char *path_checkpoint;\n\textern char *path_spool;\n\n\tCLEAR_HEAD((*multinode_jobs));\n\n\tdir = opendir(path_jobs);\n\tif (dir == NULL) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER, LOG_ALERT,\n\t\t\t  msg_daemonname, "Jobs directory not found");\n\t\texit(1);\n\t}\n\twhile (errno = 0, (pdirent = readdir(dir)) != NULL) {\n\t\tif ((i = strlen(pdirent->d_name)) <= job_suf_len)\n\t\t\tcontinue;\n\n\t\tpsuffix = pdirent->d_name + i - job_suf_len;\n\t\tif (strcmp(psuffix, job_suffix))\n\t\t\tcontinue;\n\t\tpj = job_recov(pdirent->d_name);\n\t\tif (pj == NULL) {\n\t\t\t(void) strcpy(path, path_jobs);\n\t\t\t(void) strcat(path, pdirent->d_name);\n\t\t\t(void) unlink(path);\n\t\t\tpsuffix = path + strlen(path) - job_suf_len;\n\t\t\tstrcpy(psuffix, JOB_TASKDIR_SUFFIX);\n\t\t\t(void) remtree(path);\n\t\t\tcontinue;\n\t\t}\n\n\t\t/* To get homedir info */\n\t\tpj->ji_grpcache = 
NULL;\n\t\tcheck_pwd(pj);\n\t\tif (pbs_idx_insert(jobs_idx, pj->ji_qs.ji_jobid, pj) != PBS_IDX_RET_OK) {\n\t\t\tlog_joberr(PBSE_INTERNAL, __func__, \"Failed to add job in index during recovery\", pj->ji_qs.ji_jobid);\n\t\t\tjob_free(pj);\n\t\t\tcontinue;\n\t\t}\n\t\tappend_link(&svr_alljobs, &pj->ji_alljobs, pj);\n\t\tjob_nodes(pj);\n\t\ttask_recov(pj);\n\n\t\t/*\n\t\t ** Check to see if a checkpoint.old dir exists.\n\t\t ** If so, remove the regular checkpoint dir\n\t\t ** and rename the old to the regular name.\n\t\t */\n\t\tpbs_strncpy(path, path_checkpoint, sizeof(path));\n\t\tif (*pj->ji_qs.ji_fileprefix != '\\0')\n\t\t\tstrcat(path, pj->ji_qs.ji_fileprefix);\n\t\telse\n\t\t\tstrcat(path, pj->ji_qs.ji_jobid);\n\t\tstrcat(path, JOB_CKPT_SUFFIX);\n\t\tstrcpy(oldp, path);\n\t\tstrcat(oldp, \".old\");\n\n\t\tif (stat(oldp, &statbuf) == 0) {\n\t\t\t(void) remtree(path);\n\t\t\tif (rename(oldp, path) == -1)\n\t\t\t\t(void) remtree(oldp);\n\t\t}\n\n\t\t/*\n\t\t ** Check to see if I am Mother Superior.  The\n\t\t ** JOB_SVFLG_HERE flag is overloaded for MOM\n\t\t ** for this purpose.\n\t\t */\n\t\tif ((pj->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0) {\n\t\t\t/* I am sister, junk the job files */\n\t\t\tif (recover != 2) {\n\t\t\t\tmom_deljob(pj);\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t}\n\n\t\tsisters = pj->ji_numnodes - 1;\n\t\tif (sisters > 0) {\n\t\t\tpj->ji_resources = (noderes *) calloc(sisters,\n\t\t\t\t\t\t\t      sizeof(noderes));\n\t\t\tif (pj->ji_resources == NULL) {\n\t\t\t\tlog_err(ENOMEM, \"init_abort_jobs\", \"out of memory\");\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tpj->ji_numrescs = sisters;\n\t\t}\n\n\t\t/*\n\t\t **\tIf mom went down during file stage ops,\n\t\t **\tthe substate should be EXITED.  
Set it\n\t\t **\tback to OBIT so the server can verify that\n\t\t **\tit still has the job or not.\n\t\t */\n\t\tif (check_job_substate(pj, JOB_SUBSTATE_EXITED)) {\n\t\t\t/*\n\t\t\t ** We don't want to change the state if the\n\t\t\t ** job is checkpointed.\n\t\t\t */\n\t\t\tif ((pj->ji_qs.ji_svrflags &\n\t\t\t     (JOB_SVFLG_CHKPT |\n\t\t\t      JOB_SVFLG_ChkptMig)) == 0) {\n\t\t\t\tset_job_substate(pj, JOB_SUBSTATE_OBIT);\n\t\t\t\tjob_save(pj);\n\t\t\t}\n\t\t} else if (check_job_substate(pj, JOB_SUBSTATE_TERM)) {\n\t\t\t/*\n\t\t\t * Mom went down while terminate action script was\n\t\t\t * running, don't know if it finished or not;  force\n\t\t\t * Mom to send/resend OBIT and lets end it\n\t\t\t */\n\t\t\tif (recover)\n\t\t\t\t(void) kill_job(pj, SIGKILL);\n\t\t\tset_job_substate(pj, JOB_SUBSTATE_OBIT);\n\t\t\tjob_save(pj);\n\t\t} else if ((recover != 2) &&\n\t\t\t   ((check_job_substate(pj, JOB_SUBSTATE_RUNNING)) ||\n\t\t\t    (check_job_substate(pj, JOB_SUBSTATE_SUSPEND)) ||\n\t\t\t    (check_job_substate(pj, JOB_SUBSTATE_KILLSIS)) ||\n\t\t\t    (check_job_substate(pj, JOB_SUBSTATE_RUNEPILOG)) ||\n\t\t\t    (check_job_substate(pj, JOB_SUBSTATE_EXITING)))) {\n\n\t\t\tif (recover)\n\t\t\t\t(void) kill_job(pj, SIGKILL);\n\n\t\t\t/* set exit status to:\n\t\t\t *   JOB_EXEC_INITABT - init abort and no chkpnt\n\t\t\t *   JOB_EXEC_INITRST - init and chkpt, no mig\n\t\t\t *   JOB_EXEC_INITRMG - init and chkpt, migrate\n\t\t\t * to indicate recovery abort\n\t\t\t */\n\t\t\tif (pj->ji_qs.ji_svrflags &\n\t\t\t    (JOB_SVFLG_CHKPT |\n\t\t\t     JOB_SVFLG_ChkptMig)) {\n#if PBS_CHKPT_MIGRATE\n\t\t\t\tpj->ji_qs.ji_un.ji_momt.ji_exitstat =\n\t\t\t\t\tJOB_EXEC_INITRMG;\n#else\n\t\t\t\tpj->ji_qs.ji_un.ji_momt.ji_exitstat =\n\t\t\t\t\tJOB_EXEC_INITRST;\n#endif\n\t\t\t} else {\n\t\t\t\tpj->ji_qs.ji_un.ji_momt.ji_exitstat =\n\t\t\t\t\tJOB_EXEC_INITABT;\n\t\t\t}\n\n\t\t\t/*\n\t\t\t ** I am MS, send a DELETE_JOB request to any\n\t\t\t ** sisters that happen to still be alive.\n\t\t\t 
*/\n\t\t\tif (sisters > 0) {\n\t\t\t\t(void) send_sisters(pj, IM_DELETE_JOB, NULL);\n\t\t\t}\n\t\t\tset_job_substate(pj, JOB_SUBSTATE_EXITING);\n\t\t\tjob_save(pj);\n\t\t\texiting_tasks = 1;\n\t\t} else if (recover == 2) {\n\t\t\tpbs_task *ptask;\n\n\t\t\tfor (ptask = (pbs_task *) GET_NEXT(pj->ji_tasks);\n\t\t\t     ptask != NULL;\n\t\t\t     ptask = (pbs_task *) GET_NEXT(ptask->ti_jobtask)) {\n\t\t\t\tptask->ti_flags |= TI_FLAGS_ORPHAN;\n\t\t\t}\n\n\t\t\tif (check_job_substate(pj, JOB_SUBSTATE_RUNNING)) {\n\t\t\t\trecover_walltime(pj);\n\t\t\t\tstart_walltime(pj);\n\t\t\t}\n\n\t\t\tif (mom_do_poll(pj))\n\t\t\t\tappend_link(&mom_polljobs, &pj->ji_jobque, pj);\n\n\t\t\tif (sisters > 0)\n\t\t\t\tappend_link(multinode_jobs, &pj->ji_multinodejobs, pj);\n\n\t\t\tif (pj->ji_qs.ji_svrflags & JOB_SVFLG_HERE) {\n\t\t\t\t/* I am MS */\n\t\t\t\tpj->ji_stdout = pj->ji_ports[0] = pj->ji_extended.ji_ext.ji_stdout;\n\t\t\t\tpj->ji_stderr = pj->ji_ports[1] = pj->ji_extended.ji_ext.ji_stdout;\n\t\t\t}\n\t\t}\n\t}\n\tif (errno != 0 && errno != ENOENT) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER, LOG_ALERT,\n\t\t\t  msg_daemonname, \"Jobs directory cannot be read\");\n\t\t(void) closedir(dir);\n\t\texit(1);\n\t}\n\t(void) closedir(dir);\n\n\t/*\n\t ** Go through spool dir and remove files that match\n\t ** \"rcperr.<pid>\".  
These would be leftover from file\n\t ** stage operations that were interrupted.\n\t */\n\tdir = opendir(path_spool);\n\tif (dir == NULL) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER, LOG_ALERT,\n\t\t\t  msg_daemonname, "spool directory not found");\n\t\treturn;\n\t}\n\n\twhile (errno = 0, (pdirent = readdir(dir)) != NULL) {\n\t\tif (strncmp(pdirent->d_name, rcperr, sizeof(rcperr) - 1) != 0)\n\t\t\tcontinue;\n\n\t\t(void) strcpy(path, path_spool);\n\t\t(void) strcat(path, pdirent->d_name);\n\t\t(void) unlink(path);\n\t}\n\tif (errno != 0 && errno != ENOENT)\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER, LOG_ALERT,\n\t\t\t  msg_daemonname, "spool directory cannot be read");\n\t(void) closedir(dir);\n}\n\n/**\n * @brief\n * \tstatic handler function to be called by deferred child exit work task\n * \tfor alps cancel reservation child of mom\n *\n * \tThe forked child process cannot send a req_reject or reply_ack since\n * \ttransmission of data via tpp is not supported from child processes\n * \t(tpp streams are automatically closed when the process forks).\n * \tThus this child exit handler is added to send the reply from the\n * \tparent process after reaping the exit status from the child\n *\n * @param[in] ptask - Pointer to the task structure\n *\n */\n#if MOM_ALPS\nstatic void\npost_alps_cancel_resv(struct work_task *ptask)\n{\n\tstruct batch_request *preq = ptask->wt_parm1;\n\tint j;\n\n\tif (preq == NULL)\n\t\treturn;\n\n\tj = ptask->wt_aux;\n\tif (j > 0) {\n\t\t/* Tell the server we failed */\n\t\treq_reject(PBSE_ALPSRELERR, j, preq);\n\t} else if (j < 0) {\n\t\t/* Fatal error, log message was logged in\n\t\t * alps_cancel_request\n\t\t */\n\t\treq_reject(PBSE_ALPSRELERR, j, preq);\n\t} else {\n\t\t/* The job will have been purged in mom_deljob_wait at this point\n\t\t * so just do the reply.\n\t\t */\n\t\treply_ack(preq);\n\t}\n}\n#endif\n\n/**\n * @brief\n * \tdel_job_hw\tdelete job/hardware related resources such as ALPS reservations, ...\n 
*\n *\tUsed by del_job_resc() and exec_bail()\n *\tMost items here are platform dependent.\n *\n * @param pjob  - pointer to job structure\n *\n * @return void\n *\n */\nvoid\ndel_job_hw(job *pjob)\n{\n#if MOM_ALPS\n\tint i;\n\tint j;\n\tint sleeptime = 0;\n\ttime_t total_time = 0;\n\ttime_t begin_time = 0;\n\ttime_t end_time = 0;\n\tlong jitter = 0;\n\tpid_t parent_pid = 0;\n\tpid_t pid;\n\tint sconn = -1;\n\tstruct work_task *wtask = NULL;\n\n\t/*\n\t * Try to cancel the reservation once as 'main MOM'.\n\t * If we got an acknowledgment from ALPS that the reservation\n\t * is actually gone, then send ACK to server.\n\t * Else, fork a child process that will continue to try to cancel\n\t * the reservation until the remaining processes count is zero.\n\t * Or until the ALPS reservation no longer exists.\n\t */\n\tif ((j = alps_cancel_reservation(pjob)) > 0) {\n\t\t/*\n\t\t * alps reservation cancel failed with "temporary" error\n\t\t * This could be due to one or more of the following:\n\t\t * \t- the reservation still has claims on it\n\t\t * \t- ALPS is down\n\t\t * Retry in child until success, or a hard error is returned,\n\t\t * or alps_release_timeout is reached.\n\t\t * Once the ALPS reservation is successfully canceled,\n\t\t * respond to the server's delete job request.\n\t\t * The job will remain in the 'E' state until then.\n\t\t */\n\t\tif (pjob->ji_preq != NULL)\n\t\t\tsconn = pjob->ji_preq->rq_conn;\n\n\t\tif ((pid = fork_me(sconn)) == 0) {\n\t\t\t/* We are the child */\n\t\t\tbegin_time = time(NULL);\n\t\t\tend_time = begin_time;\n\t\t\t/* add jobid to the seed */\n\t\t\tsrandom((unsigned) (atoi(pjob->ji_qs.ji_jobid) + begin_time));\n\t\t\tfor (i = 1; (total_time = end_time - begin_time) < alps_release_timeout; ++i, end_time = time(NULL)) {\n\t\t\t\t/* calculate time to sleep */\n\t\t\t\tsleeptime = alps_release_wait_time;\n\t\t\t\t/* Add randomness of 0 to 0.12 seconds to the\n\t\t\t\t * sleeptime so we don't overwhelm ALPS with\n\t\t\t\t * multiple 
ALPS release requests when jobs end\n\t\t\t\t * at the same time.\n\t\t\t\t */\n\t\t\t\tjitter = random() % alps_release_jitter;\n\t\t\t\tsleeptime += jitter;\n\t\t\t\tusleep(sleeptime);\n\t\t\t\tif ((j = alps_cancel_reservation(pjob)) <= 0)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\tif (j > 0) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"Timed out after %d attempts over \"\n\t\t\t\t\t\"%ld seconds of attempting \"\n\t\t\t\t\t\"to cancel ALPS reservation %ld\",\n\t\t\t\t\ti, total_time,\n\t\t\t\t\tpjob->ji_extended.ji_ext.ji_reservation);\n\t\t\t\tlog_joberr(-1, __func__, log_buffer,\n\t\t\t\t\t   pjob->ji_qs.ji_jobid);\n\t\t\t\t/* send a HUP to main MOM so she re-reads\n\t\t\t\t * the ALPS inventory\n\t\t\t\t */\n\t\t\t\tparent_pid = getppid();\n\t\t\t\tkill(parent_pid, SIGHUP);\n\n\t\t\t} else if (j == 0) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"Cancelled ALPS reservation %ld after a \"\n\t\t\t\t\t\"total of %d tries\",\n\t\t\t\t\tpjob->ji_extended.ji_ext.ji_reservation, i + 1);\n\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t  LOG_DEBUG, pjob->ji_qs.ji_jobid,\n\t\t\t\t\t  log_buffer);\n\t\t\t}\n\n\t\t\t/* exit with the respective error code to the parent process\n\t\t\t * Parents (moms) post handler will handle this\n\t\t\t */\n\t\t\texit(j);\n\n\t\t} else if (pid > 0) {\n\t\t\t/* we are the parent, the reply happens after the child exits */\n\t\t\tif ((wtask = set_task(WORK_Deferred_Child, pid,\n\t\t\t\t\t      post_alps_cancel_resv, pjob->ji_preq)) == NULL) {\n\t\t\t\tlog_err(errno, NULL, \"Failed to create deferred work task, Out of memory\");\n\t\t\t\treq_reject(PBSE_SYSTEM, 0, pjob->ji_preq);\n\t\t\t}\n\t\t} else if (pid < 0) {\n\t\t\t/* fork failed, reply to the server so the job\n\t\t\t * doesn't stay in the \"E\" state\n\t\t\t */\n\t\t\treq_reject(PBSE_ALPSRELERR, j, pjob->ji_preq);\n\t\t}\n\t} else if (j < 0) {\n\t\t/* ALPS returned a PERMANENT error */\n\t\treq_reject(PBSE_ALPSRELERR, j, pjob->ji_preq);\n\t} else {\n\t\t/* The reservation 
was canceled, let server know */\n\t\treply_ack(pjob->ji_preq);\n\t}\n\tpjob->ji_preq = NULL;\n#endif\n}\n\n/**\n * @brief\n * \tdel_job_resc - delete job related resources, files, etc\n *\tUsed by mom_deljob() and mom_deljob_wait()\n *\n *\tItems which are kept until the very bitter end of the job, just\n *\tbefore the job structure is freed, are released/freed/cleared here.\n *\n * @param[in] pjob - pointer to job structure\n *\n * @return Void\n *\n */\nvoid\ndel_job_resc(job *pjob)\n{\n\t/*\n\t * WARNING - the following is for QA automated testing to induce\n\t * certain failure modes\n\t */\n\n\tif (QA_testing != 0) {\n\t\tif (QA_testing & PBSQA_DELJOB_SLEEP)\n\t\t\tsleep(90); /* 90 second delay */\n\t\telse if (QA_testing & PBSQA_DELJOB_SLEEPLONG)\n\t\t\tsleep(900); /* 900 second long delay */\n\t\telse if (QA_testing & PBSQA_DELJOB_CRASH)\n\t\t\texit(99); /* simulate crash */\n\t}\n\n\t/* remove PBS_NODEFILE - Mother Superior shall have one and the sister\n\tmoms too if the mom config gen_nodefile_on_sister_mom is set to 1 */\n\n\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE || gen_nodefile_on_sister_mom) {\n\t\tchar file[MAXPATHLEN + 1];\n#ifdef WIN32\n\t\t(void) sprintf(file, "%s/auxiliary/%s",\n\t\t\t       pbs_conf.pbs_home_path, pjob->ji_qs.ji_jobid);\n#else\n\t\t(void) sprintf(file, "%s/aux/%s",\n\t\t\t       pbs_conf.pbs_home_path, pjob->ji_qs.ji_jobid);\n#endif\n\t\t(void) unlink(file);\n\t}\n\n\t/* TMPDIR removed in job_purge so files are available for staging */\n\n\tif (job_clean_extra != NULL) {\n\t\t(void) job_clean_extra(pjob);\n\t}\n\n\t/* delete the hardware related items */\n\n\tdel_job_hw(pjob);\n}\n\n/**\n * @brief\n * \tmom_deljob - delete the job entry, MOM no longer knows about the job\n *\tThis version does NOT wait for the Sisters to reply\n *\n * @param[in] pjob - pointer to job structure\n *\n * @return Void\n *\n */\nvoid\nmom_deljob(job *pjob)\n{\n\n\tdel_job_resc(pjob); /* rm tmpdir, etc. 
*/\n\n\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) /* MS */\n\t\t(void) send_sisters(pjob, IM_DELETE_JOB, NULL);\n\tjob_purge_mom(pjob);\n\n\t/*\n\t ** after job is gone, check to make sure no rogue user\n\t ** procs are still hanging about\n\t */\n\tdorestrict_user();\n}\n\n/**\n * @brief\n * \tmom_deljob_wait - deletes most of the job stuff, job entry not deleted\n *\tuntil the sisters have replied or are down\n *\tThis version DOES wait for the Sisters to reply, see processing of\n *\tIM_DELETE_JOB_REPLY in mom_comm.c\n *\tIt should only be called for a job for which this is Mother Superior.\n *\n * @param[in] pjob - pointer to job structure\n *\n * @return int\n * @retval the number of sisters to whom the request was sent\n *\n */\nint\nmom_deljob_wait(job *pjob)\n{\n\tint i;\n\n\tdel_job_resc(pjob); /* rm tmpdir, etc. */\n\n\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) { /* MS */\n\t\tset_job_substate(pjob, JOB_SUBSTATE_DELJOB);\n\t\tpjob->ji_sampletim = time_now;\n\t\t/*\n\t\t * The SISTER_KILLDONE flag needs to be reset so\n\t\t * we can talk to the sisterhood and know when they reply.\n\t\t */\n\t\tfor (i = 0; i < pjob->ji_numnodes; i++) {\n\t\t\thnodent *np = &pjob->ji_hosts[i];\n\n\t\t\tif (np->hn_node == pjob->ji_nodeid) /* me */\n\t\t\t\tcontinue;\n\n\t\t\tif (np->hn_sister == SISTER_KILLDONE)\n\t\t\t\tnp->hn_sister = SISTER_OKAY;\n\t\t}\n\t\ti = send_sisters(pjob, IM_DELETE_JOB_REPLY, NULL);\n\t\tif (i == 0) {\n\t\t\tif (pjob->ji_numnodes > 1) {\n\t\t\t\tsprintf(log_buffer, "Unable to send delete job "\n\t\t\t\t\t\t    "request to one or more sisters");\n\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t  LOG_ERR, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t}\n\n\t\t\tif (mock_run) {\n\t\t\t\t/* Delete the job end work task for this job */\n\t\t\t\tdelete_task_by_parm1_func(pjob, mock_run_end_job_task, DELETE_ALL);\n\t\t\t}\n\n\t\t\t/* job is purged here first, discard job happens later\n\t\t\t * and IM_DISCARD_JOB does not find 
pjob to kill\n\t\t\t * job process in case of a mom restart\n\t\t\t *\n\t\t\t * Fixing by killing job here, should not hurt in any\n\t\t\t * case (since we are purging job anyway)\n\t\t\t */\n\t\t\t(void) kill_job(pjob, SIGKILL);\n\t\t\tjob_purge_mom(pjob);\n\t\t\tdorestrict_user();\n\t\t}\n\t\t/*\n\t\t * otherwise, job_purge() and dorestrict_user() are called in\n\t\t * mom_comm when all the sisters have replied.  The reply to\n\t\t * the Server is also done there\n\t\t */\n\t\treturn (i);\n\t} else\n\t\treturn 0;\n}\n\n/**\n *\n * @brief\n *  The wrapper to \"mom_deljob_wait()\".\n * @par\n *  This will call mom_deljob_wait based on MOM_ALPS macro and\n *  reply to the batch request.\n *\n * @param[in] pjob - pointer to job structure\n *\n * @return void\n */\nvoid\nmom_deljob_wait2(job *pjob)\n{\n#if MOM_ALPS\n\t(void) mom_deljob_wait(pjob);\n\n#else\n\tint numnodes;\n\tstruct batch_request *preq;\n\t/*\n\t * save number of nodes in sisterhood in case\n\t * job is deleted in mom_deljob_wait()\n\t */\n\tnumnodes = pjob->ji_numnodes;\n\n\tpreq = pjob->ji_preq;\n\tpjob->ji_preq = NULL;\n\tif (mom_deljob_wait(pjob) > 0) {\n\t\t/* wait till sisters respond */\n\t\tpjob->ji_preq = preq;\n\t} else if (numnodes > 1) {\n\t\t/*\n\t\t* no messages sent, but there are sisters\n\t\t* must be all down\n\t\t*/\n\t\treq_reject(PBSE_SISCOMM, 0, preq); /* all sis down */\n\t} else {\n\t\treply_ack(preq); /* no sisters, reply now  */\n\t}\n#endif\n}\n\n/**\n * @brief\n * send_sisters_deljob_wait\t-\n * \tJob entry is not deleted until the sisters have replied or are down\n *\tThis version DOES wait for the Sisters to reply, see processing of\n *\tIM_DELETE_JOB_REPLY in mom_comm.c\n *\tIt should only be called for a job for which this is Mother Superior.\n *\n * @param[in] pjob - pointer to job structure\n *\n * @return int\n * @retval the number of sisters to whom the request was sent\n *\n */\nint\nsend_sisters_deljob_wait(job *pjob)\n{\n\tint i;\n\n\tif (pjob->ji_qs.ji_svrflags 
& JOB_SVFLG_HERE) { /* MS */\n\t\tset_job_substate(pjob, JOB_SUBSTATE_DELJOB);\n\t\tpjob->ji_sampletim = time_now;\n\t\t/*\n\t\t * The SISTER_KILLDONE flag needs to be reset so\n\t\t * we can talk to the sisterhood and know when they reply.\n\t\t */\n\t\tfor (i = 0; i < pjob->ji_numnodes; i++) {\n\t\t\thnodent *np = &pjob->ji_hosts[i];\n\n\t\t\tif (np->hn_node == pjob->ji_nodeid) /* me */\n\t\t\t\tcontinue;\n\n\t\t\tif (np->hn_sister == SISTER_KILLDONE)\n\t\t\t\tnp->hn_sister = SISTER_OKAY;\n\t\t}\n\t\treturn (send_sisters(pjob, IM_DELETE_JOB_REPLY, NULL));\n\t} else\n\t\treturn 0;\n}\n\n/**\n * @brief\n * \t\tConvenience function to call mom_set_use() when all jobs need to be updated\n *\n * @param\tvoid\n * @return\tvoid\n */\nvoid\nmom_set_use_all(void)\n{\n\tjob *pjob = NULL;\n\n\tif (!mock_run) {\n\t\tif (mom_get_sample() == PBSE_NONE) {\n\t\t\tpjob = (job *) GET_NEXT(svr_alljobs);\n\t\t\twhile (pjob) {\n\t\t\t\tif ((check_job_state(pjob, JOB_STATE_LTR_EXITING) &&\n\t\t\t\t     (get_job_substate(pjob) >= JOB_SUBSTATE_OBIT ||\n\t\t\t\t      get_job_substate(pjob) == JOB_SUBSTATE_EXITED)) ||\n\t\t\t\t    (check_job_state(pjob, JOB_STATE_LTR_RUNNING) && get_job_substate(pjob) <= JOB_SUBSTATE_PRERUN)) {\n\t\t\t\t\tpjob = (job *) GET_NEXT(pjob->ji_alljobs);\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\tmom_set_use(pjob);\n\t\t\t\tpjob = (job *) GET_NEXT(pjob->ji_alljobs);\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief\tWrapper function to job purge\n *\n * @param[in]\tpjob - the job being purged\n *\n * @return\tvoid\n */\nvoid\njob_purge_mom(job *pjob)\n{\n\tif (mock_run)\n\t\tmock_run_job_purge(pjob);\n\telse\n\t\tjob_purge(pjob);\n}\n"
  },
  {
    "path": "src/resmom/job_recov_fs.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    job_recov_fs.c\n *\n * @brief\n *\tjob_recov_fs.c - This file contains the functions to record a job\n *\tdata structure to disk and to recover it from disk by Mom\n *\n *\tThe data is recorded in a file whose name is the job_id.\n *\n *\tThe following public functions are provided:\n *\t\tjob_save_fs() -\t\tsave the disk image\n *\t\tjob_recov_fs() -\t\trecover (read) job from disk\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <sys/types.h>\n#include <sys/param.h>\n\n#include \"pbs_ifl.h\"\n#include <errno.h>\n#include <fcntl.h>\n#include <string.h>\n#include <stdlib.h>\n#include <time.h>\n\n#include <unistd.h>\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n\n#include <sys/stat.h>\n\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"queue.h\"\n#include \"log.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include <memory.h>\n#include \"libutil.h\"\n\n#define MAX_SAVE_TRIES 3\n\n/* global data items */\n\nextern char *path_jobs;\nextern time_t time_now;\nextern char pbs_recov_filename[];\n\n/* data global only to this file */\n\nstatic const size_t fixedsize = sizeof(struct jobfix);\nstatic const size_t extndsize = sizeof(union jobextend);\n\n/**\n * @brief\n *\t\tSaves (or updates) a job structure image on 
disk\n *\n *\t\tSave does either - a quick update for state changes only,\n *\t\t\t - a full update for an existing file, or\n *\t\t\t - a full write for a new job\n *\n *\t\tFor a quick update, the data written is less than a disk block\n *\t\tsize and no size change occurs.\n *\n *\t\tThe O_SYNC flag is deliberately not used, which improves\n *\t\tperformance. This might lead to data loss from the file system\n *\t\tin case of a system crash. This is not an issue as data is\n *\t\tmostly recovered from the database.\n *\n *\t\tFor a new file, the data is written directly to the file the\n *\t\tfirst time.\n *\n * @param[in]\tpjob - Pointer to the job structure to save\n *\n * @return      Error code\n * @retval\t 0  - Success\n * @retval\t-1  - Failure\n *\n */\n\nint\njob_save_fs(job *pjob)\n{\n\tint fds;\n\tint i;\n\tchar *filename;\n\tchar namebuf1[MAXPATHLEN + 1];\n\tchar namebuf2[MAXPATHLEN + 1];\n\tint openflags;\n\tint redo;\n\tint pmode;\n\tint quick = 1;\n\n#ifdef WIN32\n\tpmode = _S_IWRITE | _S_IREAD;\n#else\n\tpmode = 0600;\n#endif\n\n\t(void) strcpy(namebuf1, path_jobs); /* job directory path */\n\tif (*pjob->ji_qs.ji_fileprefix != '\\0')\n\t\t(void) strcat(namebuf1, pjob->ji_qs.ji_fileprefix);\n\telse\n\t\t(void) strcat(namebuf1, pjob->ji_qs.ji_jobid);\n\t(void) strcpy(namebuf2, namebuf1); /* setup for later */\n\t(void) strcat(namebuf1, JOB_FILE_SUFFIX);\n\n\tif (pjob->ji_qs.ji_jsversion != JSVERSION) {\n\t\t/* version of job structure changed, force full write */\n\t\tpjob->ji_qs.ji_jsversion = JSVERSION;\n\t\tquick = 0;\n\t}\n\n\tfor (i = 0; i < JOB_ATR_LAST; i++) {\n\t\tif ((get_jattr(pjob, i))->at_flags & ATR_VFLAG_MODIFY) {\n\t\t\tquick = 0;\n\t\t\tbreak;\n\t\t}\n\t}\n\n\tif (quick) {\n\t\topenflags = O_WRONLY;\n\t\tfds = open(namebuf1, openflags, pmode);\n\t\tif (fds < 0) {\n\t\t\tlog_errf(errno, __func__, \"Failed to open %s file\", namebuf1);\n\t\t\treturn (-1);\n\t\t}\n#ifdef WIN32\n\t\tsecure_file(namebuf1, \"Administrators\",\n\t\t\t    READS_MASK | 
WRITES_MASK | STANDARD_RIGHTS_REQUIRED);\n\t\tsetmode(fds, O_BINARY);\n#endif\n\n\t\t/* just write the \"critical\" base structure to the file */\n\n\t\tsave_setup(fds);\n\t\tif ((save_struct((char *) &pjob->ji_qs, fixedsize) == 0) &&\n\t\t    (save_struct((char *) &pjob->ji_extended, extndsize) == 0) &&\n\t\t    (save_flush() == 0)) {\n\t\t\t(void) close(fds);\n\t\t} else {\n\t\t\tlog_err(errno, \"job_save\", \"error quickwrite\");\n\t\t\t(void) close(fds);\n\t\t\treturn (-1);\n\t\t}\n\n\t} else {\n\t\t/* an attribute changed, update mtime */\n\t\tset_jattr_l_slim(pjob, JOB_ATR_mtime, time_now, SET);\n\n\t\t/*\n\t\t * write the whole structure to the file.\n\t\t * For an update, this is done to a new file to protect the\n\t\t * old against crashes.\n\t\t * The file is written in five parts:\n\t\t * (1) the job structure,\n\t\t * (2) the extended area,\n\t\t * (3) if an Array Job, the index tracking table\n\t\t * (4) the attributes in the \"encoded\" external form, and last\n\t\t * (5) the dependency list.\n\t\t */\n\n\t\t(void) strcat(namebuf2, JOB_FILE_COPY);\n\t\topenflags = O_CREAT | O_WRONLY;\n\n#ifdef WIN32\n\t\tfix_perms2(namebuf2, namebuf1);\n#endif\n\n\t\tfilename = namebuf2;\n\n\t\tfds = open(filename, openflags, pmode);\n\t\tif (fds < 0) {\n\t\t\tlog_err(errno, \"job_save\",\n\t\t\t\t\"error opening for full save\");\n\t\t\treturn (-1);\n\t\t}\n\n#ifdef WIN32\n\t\tsecure_file(filename, \"Administrators\",\n\t\t\t    READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED);\n\t\tsetmode(fds, O_BINARY);\n#endif\n\n\t\tfor (i = 0; i < MAX_SAVE_TRIES; ++i) {\n\t\t\tredo = 0; /* retry the save on failure, up to MAX_SAVE_TRIES */\n\t\t\tsave_setup(fds);\n\t\t\tif (save_struct((char *) &pjob->ji_qs, fixedsize) != 0) {\n\t\t\t\tredo++;\n\t\t\t} else if (save_struct((char *) &pjob->ji_extended,\n\t\t\t\t\t       extndsize) != 0) {\n\t\t\t\tredo++;\n\t\t\t} else if (save_attr_fs(job_attr_def, pjob->ji_wattr,\n\t\t\t\t\t\t(int) JOB_ATR_LAST) != 0) {\n\t\t\t\tredo++;\n\t\t\t} else if (save_flush() != 
0) {\n\t\t\t\tredo++;\n\t\t\t}\n\t\t\tif (redo != 0) {\n\t\t\t\tif (lseek(fds, (off_t) 0, SEEK_SET) < 0) {\n\t\t\t\t\tlog_err(errno, \"job_save\", \"error lseek\");\n\t\t\t\t}\n\t\t\t} else\n\t\t\t\tbreak;\n\t\t}\n\n\t\t(void) close(fds);\n\t\tif (i >= MAX_SAVE_TRIES)\n\t\t\treturn (-1);\n\n#ifdef WIN32\n\t\tif (MoveFileEx(namebuf2, namebuf1,\n\t\t\t       MOVEFILE_REPLACE_EXISTING | MOVEFILE_WRITE_THROUGH) == 0) {\n\n\t\t\terrno = GetLastError();\n\t\t\tsprintf(log_buffer, \"MoveFileEx(%s,%s) failed!\",\n\t\t\t\tnamebuf2, namebuf1);\n\t\t\tlog_err(errno, \"job_save\", log_buffer);\n\t\t}\n\t\tsecure_file(namebuf1, \"Administrators\",\n\t\t\t    READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED);\n#else\n\t\tif (rename(namebuf2, namebuf1) == -1) {\n\t\t\tlog_event(PBSEVENT_ERROR | PBSEVENT_SECURITY,\n\t\t\t\t  PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t\t  \"rename in job_save failed\");\n\t\t}\n#endif\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n *\t\trecover (read in) a job from its save file\n *\n *\t\tThis function is only needed upon server start up.\n *\n *\t\tThe job structure, its attributes strings, and its dependencies\n *\t\tare recovered from the disk.  
Space to hold the above is\n *\t\tmalloc-ed as needed.\n *\n *\n * @param[in]\tfilename\t- Name of job file to load job from\n *\n * @return\tpointer to new job structure\n *\n * @retval\t NULL - Failure\n * @retval\t!NULL - Success\n *\n */\n\njob *\njob_recov_fs(char *filename)\n{\n\tint fds;\n\tchar basen[MAXPATHLEN + 1];\n\tjob *pj;\n\tchar *pn;\n\tchar *psuffix;\n\n\tpj = job_alloc(); /* allocate & initialize job structure space */\n\tif (pj == NULL) {\n\t\treturn NULL;\n\t}\n\n\t(void) strcpy(pbs_recov_filename, path_jobs); /* job directory path */\n\t(void) strcat(pbs_recov_filename, filename);\n#ifdef WIN32\n\tfix_perms(pbs_recov_filename);\n#endif\n\n\t/* change file name in case recovery fails so we don't try same file */\n\n\tpbs_strncpy(basen, pbs_recov_filename, sizeof(basen));\n\tpsuffix = basen + strlen(basen) - strlen(JOB_BAD_SUFFIX);\n\t(void) strcpy(psuffix, JOB_BAD_SUFFIX);\n#ifdef WIN32\n\tif (MoveFileEx(pbs_recov_filename, basen,\n\t\t       MOVEFILE_REPLACE_EXISTING | MOVEFILE_WRITE_THROUGH) == 0) {\n\t\terrno = GetLastError();\n\t\tsprintf(log_buffer, \"MoveFileEx(%s, %s) failed!\",\n\t\t\tpbs_recov_filename, basen);\n\t\tlog_err(errno, \"nodes\", log_buffer);\n\t}\n\tsecure_file(basen, \"Administrators\",\n\t\t    READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED);\n#else\n\tif (rename(pbs_recov_filename, basen) == -1) {\n\t\tsprintf(log_buffer, \"error renaming job file %s\",\n\t\t\tpbs_recov_filename);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\tfree((char *) pj);\n\t\treturn NULL;\n\t}\n#endif\n\n\tfds = open(basen, O_RDONLY, 0);\n\tif (fds < 0) {\n\t\tsprintf(log_buffer, \"error opening of job file %s\",\n\t\t\tpbs_recov_filename);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\tfree((char *) pj);\n\t\treturn NULL;\n\t}\n#ifdef WIN32\n\tsetmode(fds, O_BINARY);\n#endif\n\n\t/* read in job fixed sub-structure */\n\n\terrno = -1;\n\tif (read(fds, (char *) &pj->ji_qs, fixedsize) != (int) fixedsize) {\n\t\tsprintf(log_buffer, \"error 
reading fixed portion of %s\",\n\t\t\tpbs_recov_filename);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\tfree((char *) pj);\n\t\t(void) close(fds);\n\t\treturn NULL;\n\t}\n\t/* Does file name match the internal name? */\n\t/* This detects ghost files */\n\n#ifdef WIN32\n\tpn = strrchr(pbs_recov_filename, (int) '/');\n\tif (pn == NULL)\n\t\tpn = strrchr(pbs_recov_filename, (int) '\\\\');\n\tif (pn == NULL) {\n\t\tsprintf(log_buffer, \"bad path %s\", pbs_recov_filename);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\tfree((char *) pj);\n\t\t(void) close(fds);\n\t\treturn NULL;\n\t}\n\tpn++;\n#else\n\tpn = strrchr(pbs_recov_filename, (int) '/') + 1;\n#endif\n\n\tif (strncmp(pn, pj->ji_qs.ji_jobid, strlen(pn) - 3) != 0) {\n\t\t/* mismatch, discard job */\n\n\t\t(void) sprintf(log_buffer,\n\t\t\t       \"Job Id %s does not match file name for %s\",\n\t\t\t       pj->ji_qs.ji_jobid,\n\t\t\t       pbs_recov_filename);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\tfree((char *) pj);\n\t\t(void) close(fds);\n\t\treturn NULL;\n\t}\n\n\t/* read in extended save area depending on JSVERSION */\n\n\terrno = 0;\n\tDBPRT((\"Job save version %d\\n\", pj->ji_qs.ji_jsversion))\n\tif (pj->ji_qs.ji_jsversion >= JSVERSION_18) {\n\t\t/* since there is no change in jobextend structure for JSVERSION(1900) and JSVERSION_18(800),\n\t\t * read the current structure.\n\t\t */\n\t\tif (read(fds, (char *) &pj->ji_extended,\n\t\t\t sizeof(union jobextend)) !=\n\t\t    sizeof(union jobextend)) {\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"error reading extended portion of %s\",\n\t\t\t\tpbs_recov_filename);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\tfree((char *) pj);\n\t\t\t(void) close(fds);\n\t\t\treturn NULL;\n\t\t}\n\t} else {\n\t\t/* If really an old version(i.e. 
pre 13.x), it wasn't there, abort out */\n\t\tsprintf(log_buffer,\n\t\t\t\"Job structure version cannot be recovered for job %s\",\n\t\t\tpbs_recov_filename);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\tfree((char *) pj);\n\t\t(void) close(fds);\n\t\treturn NULL;\n\t}\n\n\t/* read in working attributes */\n\n\tif (recov_attr_fs(fds, pj, job_attr_idx, job_attr_def, pj->ji_wattr, (int) JOB_ATR_LAST,\n\t\t\t  (int) JOB_ATR_UNKN) != 0) {\n\t\tsprintf(log_buffer, \"error reading attributes portion of %s\",\n\t\t\tpbs_recov_filename);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\tjob_free(pj);\n\t\t(void) close(fds);\n\t\treturn NULL;\n\t}\n\t(void) close(fds);\n\n#if defined(WIN32)\n\t/* get a handle to the job (may not exist) */\n\tpj->ji_hJob = OpenJobObject(JOB_OBJECT_ALL_ACCESS, FALSE,\n\t\t\t\t    pj->ji_qs.ji_jobid);\n#endif\n\n\t/* all done recovering the job, change file name back to .JB */\n\n#ifdef WIN32\n\tif (MoveFileEx(basen, pbs_recov_filename,\n\t\t       MOVEFILE_REPLACE_EXISTING | MOVEFILE_WRITE_THROUGH) == 0) {\n\t\terrno = GetLastError();\n\t\tsprintf(log_buffer, \"MoveFileEx(%s, %s) failed!\",\n\t\t\tbasen, pbs_recov_filename);\n\t\tlog_err(errno, \"nodes\", log_buffer);\n\t}\n\tsecure_file(pbs_recov_filename, \"Administrators\",\n\t\t    READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED);\n#else\n\t(void) rename(basen, pbs_recov_filename);\n#endif\n\n\treturn (pj);\n}\n"
  },
  {
    "path": "src/resmom/linux/alps.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\talps.c\n * @brief\n * Cray ALPS related functionality.\n * The functions in this file are responsible for parsing the XML response\n * from the ALPS BASIL client (either catnip or apbasil). 
These functions\n * rely on the expat XML parsing engine, and require libexpat to be linked\n * with the binary. The expat library is commonly installed with all Linux\n * distributions, but the developer may choose to link the library statically\n * to eliminate the potential for version mismatch or lack of availability.\n * The 64-bit version of the static library is currently less than 256KB in\n * size, so the overhead of static linking is minimal.\n *\n * The Batch and Application Scheduling Interface Layer (BASIL) utilizes\n * the extensible markup language (XML) for input and output. A brief\n * description of XML may be found on Wikipedia at\n * http://en.wikipedia.org/wiki/XML\n *\n * A nice description of Expat can be found at:\n * http://www.xml.com/pub/a/1999/09/expat/index.html\n *\n * We are primarily concerned with XML elements and attributes. Perhaps\n * the easiest way to think of these structures is in relation to their\n * HTML counterparts. Both document types are hierarchical in nature and\n * are built upon a set of elements that may each contain attributes.\n * Descriptions of each element and its associated attributes may be\n * found in the basil.h header file.\n */\n\n#include \"pbs_config.h\"\n\n#if MOM_ALPS /* Defined when --enable-alps is passed to configure. 
*/\n\n#include <stdio.h>\n#include <string.h>\n#include <unistd.h>\n#include <ctype.h>\n#include <fcntl.h>\n#include <errno.h>\n#include <expat.h>\n#include <sys/types.h>\n#include <sys/wait.h>\n#include <signal.h>\n#include <stdarg.h>\n#include <pwd.h>\n#include <assert.h>\n\n#ifndef _XOPEN_SOURCE\nextern pid_t getsid(pid_t);\n#endif /* _XOPEN_SOURCE */\n\n#include \"pbs_error.h\"\n#include \"list_link.h\"\n#include \"server_limits.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"job.h\"\n#include \"log.h\"\n#include \"mom_func.h\"\n#include \"basil.h\"\n#include \"placementsets.h\"\n#include \"mom_server.h\"\n#include \"mom_vnode.h\"\n#include \"resmon.h\"\n#include \"hwloc.h\"\n\n/**\n * Remember the PBScrayhost (mpphost) reported by ALPS.\n * Utilized during Inventory query processing for Compute nodes.\n */\nchar mpphost[BASIL_STRING_LONG];\n\n/*\n * Data types to support interaction with the Cray ALPS implementation.\n */\n\nextern char *alps_client;\nextern int vnode_per_numa_node;\nextern char *ret_string;\nextern vnl_t *vnlp;\n\n/**\n * Define a sane BASIL stack limit.\n * This specifies how many levels deep the BASIL XML can go.\n * Increase this whenever a new level of XML nesting is added.\n */\n#define MAX_BASIL_STACK (16)\n\n/**\n * Maintain counts on elements that are limited to one instance per context.\n * These counters help keep track of the XML structure that is imposed\n * by ALPS. 
The counters are checked to be sure the elements are not nested or\n * jumbled in any way.\n */\ntypedef struct element_counts {\n\tint response;\n\tint response_data;\n\tint reserved;\n\tint confirmed;\n\tint released;\n\tint inventory;\n\tint node_array;\n\tint socket_array;\n\tint segment_array;\n\tint processor_array;\n\tint memory_array;\n\tint label_array;\n\tint reservation_array;\n\tint application_array;\n\tint command_array;\n\tint accelerator_array;\n\tint computeunit_array;\n/*\n\tThe following entries are not needed now because we are just\n\tignoring the corresponding XML tags. If they become necessary\n\tin the future, here they are.\n*/\n#if 0\n\tint reserved_node_array;\n\tint reserved_segment_array;\n\tint reserved_processor_array;\n\tint reserved_memory_array;\n#endif\n} element_counts_t;\n\n/**\n * This is for the SYSTEM Query XML Response.\n * Maintain counts on elements that are limited to one instance per context.\n * These counters are checked to ensure that the XML Response is not nested/\n * jumbled in any way.\n */\ntypedef struct element_counts_sys {\n\tint response;\n\tint response_data;\n\tint system;\n} element_counts_sys_t;\n\n/**\n * Pointers for node data used when parsing inventory.\n * These provide a place to hang lists of any possible result from an\n * ALPS inventory. 
Additionally, counters for node states are kept here.\n */\ntypedef struct inventory_data {\n\tbasil_node_t *node;\n\tbasil_node_socket_t *socket;\n\tbasil_node_segment_t *segment;\n\tbasil_node_processor_t *processor;\n\tbasil_processor_allocation_t *processor_allocation;\n\tbasil_node_memory_t *memory;\n\tbasil_memory_allocation_t *memory_allocation;\n\tbasil_label_t *label;\n\tbasil_rsvn_t *reservation;\n\tbasil_node_computeunit_t *cu;\n\tint role_int;\n\tint role_batch;\n\tint role_unknown;\n\tint state_up;\n\tint state_down;\n\tint state_unavail;\n\tint state_routing;\n\tint state_suspect;\n\tint state_admin;\n\tint state_unknown;\n\tbasil_node_accelerator_t *accelerator;\n\tbasil_accelerator_allocation_t *accelerator_allocation;\n\tint accel_type_gpu;\n\tint accel_type_unknown;\n\tint accel_state_up;\n\tint accel_state_down;\n\tint accel_state_unknown;\n\tint socket_count;\n} inventory_data_t;\n\n/**\n * Pointer to System <Nodes> data used when parsing System response.\n * This structure is expected to grow as/when we implement more of\n * the BASIL 1.7 features.\n */\ntypedef struct system_data {\n\tbasil_system_element_t *node_group;\n} system_data_t;\n\n/**\n * The user data structure for expat.\n */\ntypedef struct ud {\n\tint depth;\n\tint stack[MAX_BASIL_STACK + 1];\n\tchar status[BASIL_STRING_SHORT];\n\tchar message[BASIL_ERROR_BUFFER_SIZE];\n\tchar type[BASIL_STRING_SHORT];\n\tchar basil_ver[BASIL_STRING_SHORT];\n\tchar error_class[BASIL_STRING_SHORT];\n\tchar error_source[BASIL_STRING_SHORT];\n\telement_counts_t count;\n\telement_counts_sys_t count_sys;\n\tinventory_data_t current;\n\tsystem_data_t current_sys;\n\tbasil_response_t *brp;\n} ud_t;\n\n/**\n * Pointer to a response structure (that gets filled in with KNL Node information).\n */\nstatic basil_response_t *brp_knl;\n\n/**\n * List of all KNL Nodes extracted from the System (BASIL 1.7) XML Response.\n */\nstatic char *knl_node_list;\n\n/**\n * Function pointers to XML handler functions.\n * 
@param element string giving the XML tag\n * @param start function to call when the tag is seen\n * @param end function to call when the XML segment is finished\n * @param char_data character handler for the given XML segment\n */\ntypedef struct element_handler {\n\tchar *element;\n\tvoid (*start)(ud_t *, const XML_Char *, const XML_Char **);\n\tvoid (*end)(ud_t *, const XML_Char *);\n\tvoid (*char_data)(ud_t *, const XML_Char *, int);\n} element_handler_t;\n\nstatic XML_Parser parser;\nstatic element_handler_t handler[];\n\n#define EXPAT_BUFFER_LEN (65536)\nstatic char expatBuffer[(EXPAT_BUFFER_LEN * sizeof(char))];\nstatic char *basil_inventory;\nstatic char *alps_client_out;\n\nstatic char *requestBuffer;\nstatic char *requestBuffer_knl;\nstatic size_t requestSize_knl;\nstatic size_t requestCurr = 0;\nstatic size_t requestSize = 0;\n\n#define UTIL_BUFFER_LEN (4096)\nstatic char utilBuffer[(UTIL_BUFFER_LEN * sizeof(char))];\n\n#define VNODE_NAME_LEN 255\n\n#define BASIL_ERR_ID \"BASIL\"\n\n/**\n * Flag set to true when talking to Basil 1.1 original.\n */\nstatic int basil11orig = 0;\n\n/**\n * Variables that keep track of which basil version to speak.\n * The Inventory Query speaks BASIL 1.4 (stored in basilversion_inventory) and\n * the System Query speaks BASIL 1.7 (stored in basilversion_system).\n */\nstatic char basilversion_inventory[BASIL_STRING_SHORT];\nstatic char basilversion_system[BASIL_STRING_SHORT];\n\n/**\n * Flag to indicate BASIL 1.7 support.\n */\nstatic int basil_1_7_supported;\n\n/**\n * Variable that keeps track of the numeric value related to the basil version.\n * It is used to do specific validation per basil version.\n */\nstatic basil_version_t basilver = 0;\n\n/**\n * Versions of BASIL that PBS supports.\n * It is a smaller subset than what ALPS likely provides in\n * basil_supported_versions.\n * PBS no longer supports version 1.0.\n * As ALPS adds BASIL versions, once PBS supports them, they should\n * be added here.\n */\nstatic 
const char *pbs_supported_basil_versions[] __attribute__((unused)) = {\n\tBASIL_VAL_VERSION_1_4,\n\tBASIL_VAL_VERSION_1_3,\n\tBASIL_VAL_VERSION_1_2,\n\tBASIL_VAL_VERSION_1_1,\n\tNULL};\n\nstatic int first_compute_node = 1;\n\n/**\n * String to use for mpp_host in vnode names when basil11orig\n * is true.\n */\n#define FAKE_MPP_HOST \"default\"\n\n/**\n * Prototype declarations for System Query (KNL) related functions.\n */\nstatic int init_KNL_alps_req_buf(void);\nstatic void create_vnodes_KNL(basil_response_query_system_t *);\nstatic int exclude_from_KNL_processing(basil_system_element_t *,\n\t\t\t\t       short int check_state);\nstatic long *process_nodelist_KNL(char *, int *);\nstatic void store_nids(int, char *, long **, int *);\nstatic void free_basil_elements_KNL(basil_system_element_t *);\nstatic void alps_engine_query_KNL(void);\n\n/**\n * @brief\n *\tWhen DEBUG is defined, log XML parsing messages to MOM log file.\n *\n * @param[in] fmt - format of msg\n *\n * @return Void\n */\nstatic void\nxml_dbg(char *fmt, ...)\n{\n#ifdef DEBUG\n\tva_list argp;\n\tva_start(argp, fmt);\n\tvsnprintf(log_buffer, sizeof(log_buffer), fmt, argp);\n\tva_end(argp);\n\tlog_event(PBSEVENT_DEBUG2, 0, LOG_DEBUG, BASIL_ERR_ID, log_buffer);\n#endif /* DEBUG */\n\treturn;\n}\n\n/**\n * @brief\n * \tStart a new ALPS request.\n * \tIf need be, allocate a buffer. 
Set the start point requestCurr to 0.\n *\n * @return Void\n *\n */\nstatic void\nnew_alps_req(void)\n{\n\tif (requestBuffer == NULL) {\n\t\trequestSize = UTIL_BUFFER_LEN;\n\t\trequestBuffer = malloc(UTIL_BUFFER_LEN);\n\t}\n\tassert(requestBuffer != NULL);\n\trequestCurr = 0;\n}\n\n/**\n * @brief\n * Start a new ALPS request for KNL.\n *\n * If need be, allocate a buffer.\n * @retval 1 if buffer allocation failed.\n * @retval 0 if success.\n */\nstatic int\ninit_KNL_alps_req_buf(void)\n{\n\tif (requestBuffer_knl == NULL) {\n\t\trequestSize_knl = UTIL_BUFFER_LEN;\n\t\tif ((requestBuffer_knl = (char *) malloc(UTIL_BUFFER_LEN)) == NULL) {\n\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE, LOG_ERR, __func__,\n\t\t\t\t  \"Memory allocation for XML request buffer failed.\");\n\t\t\treturn 1;\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \tAdd new text to current ALPS request.\n *\n * \tIf need be, extend the buffer. Copy the new text into the buffer\n * \tand set the start point requestCurr to follow the added text.\n *\n * @param[in] new - text to add\n *\n * @return Void\n *\n */\nstatic void\nadd_alps_req(char *new)\n{\n\tsize_t len = strlen(new);\n\n\tif (requestCurr + len >= requestSize) {\n\t\tsize_t num = (UTIL_BUFFER_LEN + len) / UTIL_BUFFER_LEN;\n\t\trequestSize += num * UTIL_BUFFER_LEN;\n\t\trequestBuffer = realloc(requestBuffer, requestSize);\n\t\tassert(requestBuffer != NULL);\n\t}\n\tstrcpy(&requestBuffer[requestCurr], new);\n\trequestCurr += len;\n}\n\n/**\n * @brief\n * \tWhen an internal parse error is encountered, set the source, class,\n * \tand message pointers in the expat user data structure.\n *\n * @param d - pointer to user data structure\n *\n * @return Void\n *\n */\nstatic void\nparse_err_internal(ud_t *d)\n{\n\tsnprintf(d->message, sizeof(d->message), \"Internal error.\");\n\tsprintf(d->error_source, \"%s\", BASIL_VAL_INTERNAL);\n\tsprintf(d->error_class, \"%s\", BASIL_VAL_PERMANENT);\n\treturn;\n}\n\n/**\n * @brief\n * \tWhen an out 
of memory error is encountered, set the source, class,\n *\tand message pointers in the expat user data structure.\n *\n * @param[in] d - pointer to user data structure\n *\n * @return Void\n *\n */\nstatic void\nparse_err_out_of_memory(ud_t *d)\n{\n\tsnprintf(d->message, sizeof(d->message), \"Out of memory.\");\n\tsprintf(d->error_source, \"%s\", BASIL_VAL_SYSTEM);\n\tsprintf(d->error_class, \"%s\", BASIL_VAL_TRANSIENT);\n\treturn;\n}\n\n/**\n * @brief\n * \tWhen a stack depth error is encountered, set the source, class,\n * \tand message pointers in the expat user data structure.\n *\n * @param[in] d - pointer to user data structure\n *\n * @return Void\n *\n */\nstatic void\nparse_err_stack_depth(ud_t *d)\n{\n\tsnprintf(d->message, sizeof(d->message), \"Stack too deep.\");\n\tsprintf(d->error_source, \"%s\", BASIL_VAL_SYNTAX);\n\tsprintf(d->error_class, \"%s\", BASIL_VAL_PERMANENT);\n\treturn;\n}\n\n/**\n * @brief\n * \tWhen an invalid XML element is encountered, set the source, class,\n * \tand message pointers in the expat user data structure.\n *\n * @param[in] d - pointer to user data structure\n *\n * @return Void\n *\n */\nstatic void\nparse_err_illegal_start(ud_t *d)\n{\n\tchar *el = handler[d->stack[d->depth]].element;\n\n\tsnprintf(d->message, sizeof(d->message),\n\t\t \"Illegal element: %s\", el);\n\tsprintf(d->error_source, \"%s\", BASIL_VAL_SYNTAX);\n\tsprintf(d->error_class, \"%s\", BASIL_VAL_PERMANENT);\n\treturn;\n}\n\n/**\n * @brief\n * \tWhen a single XML element is expected, but multiple instances are\n * \tencountered, set the source, class, and message pointers in the\n * \texpat user data structure.\n *\n * @param[in] d - pointer to user data structure\n *\n * @return Void\n *\n */\n\nstatic void\nparse_err_multiple_elements(ud_t *d)\n{\n\tchar *el = handler[d->stack[d->depth]].element;\n\n\tsnprintf(d->message, sizeof(d->message),\n\t\t \"Multiple instances of element: %s\", el);\n\tsprintf(d->error_source, \"%s\", 
BASIL_VAL_SYNTAX);\n\tsprintf(d->error_class, \"%s\", BASIL_VAL_PERMANENT);\n\treturn;\n}\n\n/**\n * @brief\n * \tWhen an unsupported BASIL version is encountered, set the source, class,\n * \tand message pointers in the expat user data structure.\n *\n * @param d pointer to user data structure\n * @param[in] remote version string from the XML\n * @param[in] local version define from 'basil.h' (BASIL_VAL_VERSION)\n *\n * @retval Void\n *\n */\nstatic void\nparse_err_version_mismatch(ud_t *d, const char *remote, const char *local)\n{\n\tsnprintf(d->message, sizeof(d->message),\n\t\t \"BASIL version mismatch: us=%s, them=%s\", local, remote);\n\tsprintf(d->error_source, \"%s\", BASIL_VAL_BACKEND);\n\tsprintf(d->error_class, \"%s\", BASIL_VAL_PERMANENT);\n\treturn;\n}\n\n/**\n * @brief\n * \tWhen an XML attribute is required but not specified, set the source,\n * \tclass, and message pointers in the expat user data structure.\n *\n * @param d pointer to user data structure\n * @param[in] attr name of the unspecified attribute\n *\n * @retval Void\n *\n */\nstatic void\nparse_err_unspecified_attr(ud_t *d, const char *attr)\n{\n\tsnprintf(d->message, sizeof(d->message),\n\t\t \"Unspecified attribute: %s\", attr);\n\tsprintf(d->error_source, \"%s\", BASIL_VAL_SYNTAX);\n\tsprintf(d->error_class, \"%s\", BASIL_VAL_PERMANENT);\n\treturn;\n}\n\n/**\n * @brief\n * \tWhen a single XML attribute is expected, but multiple instances are\n * \tencountered, set the source, class, and message pointers in the\n * \texpat user data structure.\n * \tMost fields are initialized to zero so a non-zero value means a repeat\n * \thas taken place.\n *\n * @param[in] d pointer to user data structure\n * @param[in] attr name of the repeated attribute\n *\n * @retval Void\n *\n */\nstatic void\nparse_err_multiple_attrs(ud_t *d, const char *attr)\n{\n\tsnprintf(d->message, sizeof(d->message),\n\t\t \"Multiple attribute instances: %s\", attr);\n\tsprintf(d->error_source, \"%s\", 
BASIL_VAL_SYNTAX);\n\tsprintf(d->error_class, \"%s\", BASIL_VAL_PERMANENT);\n\treturn;\n}\n\n/**\n * @brief\n * \tWhen an unrecognized XML attribute is specified within an element, set\n * \tthe source, class, and message pointers in the expat user data structure.\n *\n * @param[in] d pointer to user data structure\n * @param[in] attr name of the unrecognized attribute\n *\n * @return Void\n *\n */\nstatic void\nparse_err_unrecognized_attr(ud_t *d, const char *attr)\n{\n\tsnprintf(d->message, sizeof(d->message),\n\t\t \"Unrecognized attribute: %s\", attr);\n\tsprintf(d->error_source, \"%s\", BASIL_VAL_SYNTAX);\n\tsprintf(d->error_class, \"%s\", BASIL_VAL_PERMANENT);\n\treturn;\n}\n\n/**\n * @brief\n * \tWhen an illegal value is assigned to an attribute within an element, set\n * \tthe source, class, and message pointers in the expat user data structure.\n *\n * @param[in] d pointer to user data structure\n * @param[in] name name of the attribute\n * @param[in] value bad value\n *\n * @retval Void\n *\n */\nstatic void\nparse_err_illegal_attr_val(ud_t *d, const char *name, const char *value)\n{\n\tsnprintf(d->message, sizeof(d->message),\n\t\t \"Illegal attribute assignment: %s = %s\", name, value);\n\tsprintf(d->error_source, \"%s\", BASIL_VAL_SYNTAX);\n\tsprintf(d->error_class, \"%s\", BASIL_VAL_PERMANENT);\n\treturn;\n}\n\n/**\n * @brief\n * \tWhen illegal characters are encountered within the XML data, set the\n * \tsource, class, and message pointers in the expat user data structure.\n *\n * @param[in] d pointer to user data structure\n * @param[in] s string with bad characters\n *\n * @retval Void\n *\n */\nstatic void\nparse_err_illegal_char_data(ud_t *d, const char *s)\n{\n\tsnprintf(d->message, sizeof(d->message),\n\t\t \"Illegal character data: %s\", s);\n\tsprintf(d->error_source, \"%s\", BASIL_VAL_SYNTAX);\n\tsprintf(d->error_class, \"%s\", BASIL_VAL_PERMANENT);\n\treturn;\n}\n\n/**\n * @brief\n * \tWhen the end of the XML data is encountered 
prematurely, set the\n * \tsource, class, and message pointers in the expat user data structure.\n *\n * @param[in] d pointer to user data structure\n * @param[in] el name of bad end element\n *\n * @retval Void\n *\n */\nstatic void\nparse_err_illegal_end(ud_t *d, const char *el)\n{\n\tsnprintf(d->message, sizeof(d->message),\n\t\t \"Illegal end of element: %s\", el);\n\tsprintf(d->error_source, \"%s\", BASIL_VAL_SYNTAX);\n\tsprintf(d->error_class, \"%s\", BASIL_VAL_PERMANENT);\n\treturn;\n}\n\n/**\n * @brief\n * This function enforces the structure of the XML elements. Since\n * messages can occur in any element, they are not part of the check.\n *\n * Check that the depth is okay then look at the top element. Make\n * sure that what comes before the top is legal in the XML structure\n * we are parsing.\n *\n * @return int\n * @retval 1 if XML structure is incorrect\n * @retval 0 okay\n *\n */\nstatic int\nstack_busted(ud_t *d)\n{\n\tchar *top;\n\tchar *prev;\n\tbasil_response_t *brp;\n\n\tif (!d) {\n\t\tparse_err_internal(NULL);\n\t\treturn (1);\n\t}\n\tbrp = d->brp;\n\tif (d->depth < 1 || d->depth >= MAX_BASIL_STACK) {\n\t\tparse_err_stack_depth(d);\n\t\treturn (1);\n\t} else if (d->depth == 1) {\n\t\ttop = handler[d->stack[d->depth]].element;\n\t\tif (strcmp(BASIL_ELM_RESPONSE, top) != 0) {\n\t\t\tparse_err_illegal_start(d);\n\t\t\treturn (1);\n\t\t}\n\t} else {\n\t\ttop = handler[d->stack[d->depth]].element;\n\t\tprev = handler[d->stack[(d->depth - 1)]].element;\n\t\tif (strcmp(BASIL_ELM_RESPONSE, top) == 0) {\n\t\t\tparse_err_illegal_start(d);\n\t\t\treturn (1);\n\t\t} else if (strcmp(BASIL_ELM_RESPONSEDATA, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_RESPONSE, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_RESERVED, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_RESPONSEDATA, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t\tif (brp->method != 
basil_method_reserve) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_CONFIRMED, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_RESPONSEDATA, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t\tif (brp->method != basil_method_confirm) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_RELEASED, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_RESPONSEDATA, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t\tif (brp->method != basil_method_release) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_INVENTORY, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_RESPONSEDATA, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t\tif (brp->method != basil_method_query) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_NODEARRAY, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_INVENTORY, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t\tif (brp->data.query.type != basil_query_inventory) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_NODE, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_NODEARRAY, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_SOCKETARRAY, top) == 0) {\n\t\t\t/* socket XML was introduced in BASIL 1.3*/\n\t\t\tif (strcmp(BASIL_ELM_NODE, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_SOCKET, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_SOCKETARRAY, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_SEGMENTARRAY, top) == 0) {\n\t\t\tswitch (basilver) {\n\t\t\t\tcase basil_1_0:\n\t\t\t\tcase 
basil_1_1:\n\t\t\t\tcase basil_1_2:\n\t\t\t\t\tif (strcmp(BASIL_ELM_NODE, prev) != 0) {\n\t\t\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\t\t\treturn (1);\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\t\t\t\tcase basil_1_3:\n\t\t\t\tcase basil_1_4:\n\t\t\t\tcase basil_1_7:\n\t\t\t\t\tif (strcmp(BASIL_ELM_SOCKET, prev) != 0) {\n\t\t\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\t\t\treturn (1);\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_SEGMENT, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_SEGMENTARRAY, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_CUARRAY, top) == 0) {\n\t\t\t/* ComputeUnit Array XML was introduced in BASIL 1.3*/\n\t\t\tif (strcmp(BASIL_ELM_SEGMENT, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_COMPUTEUNIT, top) == 0) {\n\t\t\t/* Compute Unit XML was introduced in BASIL 1.3*/\n\t\t\tif (strcmp(BASIL_ELM_CUARRAY, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_PROCESSORARRAY, top) == 0) {\n\t\t\tswitch (basilver) {\n\t\t\t\tcase basil_1_0:\n\t\t\t\tcase basil_1_1:\n\t\t\t\tcase basil_1_2:\n\t\t\t\t\tif (strcmp(BASIL_ELM_SEGMENT, prev) != 0) {\n\t\t\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\t\t\treturn (1);\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\t\t\t\tcase basil_1_3:\n\t\t\t\tcase basil_1_4:\n\t\t\t\tcase basil_1_7:\n\t\t\t\t\tif (strcmp(BASIL_ELM_COMPUTEUNIT, prev) != 0) {\n\t\t\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\t\t\treturn (1);\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_PROCESSOR, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_PROCESSORARRAY, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_PROCESSORALLOC, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_PROCESSOR, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else 
if (strcmp(BASIL_ELM_MEMORYARRAY, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_SEGMENT, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_MEMORY, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_MEMORYARRAY, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_MEMORYALLOC, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_MEMORY, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_LABELARRAY, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_SEGMENT, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_LABEL, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_LABELARRAY, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_RSVNARRAY, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_INVENTORY, prev) != 0) {\n\t\t\t\tif (strcmp(BASIL_ELM_RESPONSEDATA, prev) != 0) {\n\t\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\t\treturn (1);\n\t\t\t\t}\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_RESERVATION, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_RSVNARRAY, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_APPARRAY, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_RESERVATION, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_APPLICATION, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_APPARRAY, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_CMDARRAY, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_APPLICATION, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_COMMAND, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_CMDARRAY, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn 
(1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_ACCELERATORARRAY, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_NODE, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_ACCELERATOR, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_ACCELERATORARRAY, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_ACCELERATORALLOC, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_ACCELERATOR, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_RSVD_NODEARRAY, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_RESERVED, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_RSVD_NODE, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_RSVD_NODEARRAY, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_RSVD_SGMTARRAY, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_RESERVED, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_RSVD_SGMT, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_RSVD_SGMTARRAY, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_RSVD_PROCARRAY, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_RESERVED, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_RSVD_PROCESSOR, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_RSVD_PROCARRAY, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_RSVD_MEMARRAY, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_RESERVED, prev) != 0) {\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ELM_RSVD_MEMORY, top) == 0) {\n\t\t\tif (strcmp(BASIL_ELM_RSVD_MEMARRAY, prev) != 0) 
{\n\t\t\t\tparse_err_illegal_start(d);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t}\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n * \tThis function is registered to handle the start of the BASIL response.\n * \tChecks the stack (depth should be 1) and the protocol version. The\n * \tprotocol version is defined in basil.h and will be updated whenever\n * \tthe BASIL document format changes. Cray will provide a new basil.h\n * \twhen this occurs.\n *\n *\n * The standard Expat start handler function prototype is used.\n * @param[in] d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @retval Void\n *\n */\nstatic void\nresponse_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\tchar protocol[BASIL_STRING_SHORT];\n\n\tif (stack_busted(d))\n\t\treturn;\n\tif (++(d->count.response) > 1) {\n\t\tparse_err_multiple_elements(d);\n\t\treturn;\n\t}\n\tprotocol[0] = '\\0';\n\t/*\n\t * Work through the attribute pairs updating the name pointer and\n\t * value pointer with each loop. 
The somewhat complex loop control\n\t * syntax is just a fancy way of stepping through the pairs.\n\t */\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\txml_dbg(\"%s: %s = %s\", __func__, *np, *vp);\n\t\tif (strcmp(BASIL_ATR_PROTOCOL, *np) == 0) {\n\t\t\tBASIL_STRSET_SHORT(protocol, *vp);\n\t\t\tif ((strcmp(BASIL_VAL_VERSION_1_7, *vp) != 0) &&\n\t\t\t    (strcmp(BASIL_VAL_VERSION_1_4, *vp) != 0) &&\n\t\t\t    (strcmp(BASIL_VAL_VERSION_1_3, *vp) != 0) &&\n\t\t\t    (strcmp(BASIL_VAL_VERSION_1_2, *vp) != 0) &&\n\t\t\t    (strcmp(BASIL_VAL_VERSION_1_1, *vp) != 0)) {\n\t\t\t\tparse_err_version_mismatch(d, *vp, d->basil_ver);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t}\n\tif (protocol[0] == '\\0') {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_PROTOCOL);\n\t\treturn;\n\t}\n}\n\n/**\n * @brief\n * \tThis function is registered to handle the start of the BASIL data.\n * \tIt checks to make sure there is a valid method type so we know\n * \twhat elements to expect later on.\n *\n *\n * The standard Expat start handler function prototype is used.\n *\n * @param[in] d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @retval Void\n *\n */\nstatic void\nresponse_data_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\tbasil_response_t *brp;\n\n\tif (stack_busted(d))\n\t\treturn;\n\tbrp = d->brp;\n\tif (++(d->count.response_data) > 1) {\n\t\tparse_err_multiple_elements(d);\n\t\treturn;\n\t}\n\t/*\n\t * Work through the attribute pairs updating the name pointer and\n\t * value pointer with each loop. 
The somewhat complex loop control\n\t * syntax is just a fancy way of stepping through the pairs.\n\t */\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\txml_dbg(\"%s: %s = %s\", __func__, *np, *vp);\n\t\tif (strcmp(BASIL_ATR_METHOD, *np) == 0) {\n\t\t\tif (brp->method != basil_method_none) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tif (strcmp(BASIL_VAL_RESERVE, *vp) == 0) {\n\t\t\t\tbrp->method = basil_method_reserve;\n\t\t\t\tbrp->data.reserve.rsvn_id = -1;\n\t\t\t} else if (strcmp(BASIL_VAL_CONFIRM, *vp) == 0) {\n\t\t\t\tbrp->method = basil_method_confirm;\n\t\t\t} else if (strcmp(BASIL_VAL_RELEASE, *vp) == 0) {\n\t\t\t\tbrp->method = basil_method_release;\n\t\t\t\tbrp->data.release.claims = 0;\n\t\t\t} else if (strcmp(BASIL_VAL_QUERY, *vp) == 0) {\n\t\t\t\tbrp->method = basil_method_query;\n\t\t\t\t/*\n\t\t\t\t * Set type to status, for the switch status\n\t\t\t\t * response. The other types can get set in\n\t\t\t\t * inventory_start and engine_start.\n\t\t\t\t */\n\t\t\t\tbrp->data.query.type = basil_query_status;\n\t\t\t} else if (strcmp(BASIL_VAL_SWITCH, *vp) == 0) {\n\t\t\t\tbrp->method = basil_method_switch;\n\t\t\t} else {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ATR_STATUS, *np) == 0) {\n\t\t\tpbs_strncpy(d->status, *vp, sizeof(d->status));\n\t\t\tif (strcmp(BASIL_VAL_SUCCESS, *vp) == 0) {\n\t\t\t\t*brp->error = '\\0';\n\t\t\t} else if (strcmp(BASIL_VAL_FAILURE, *vp) == 0) {\n\t\t\t\t/* do nothing here, brp->error was set */\n\t\t\t\t/* in alps_request_parent              */\n\t\t\t} else {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ATR_ERROR_CLASS, *np) == 0) {\n\t\t\tpbs_strncpy(d->error_class, *vp, sizeof(d->error_class));\n\t\t\t/*\n\t\t\t * The existence of a PERMANENT error used to\n\t\t\t * reset the BASIL_ERR_TRANSIENT flag. 
This\n\t\t\t * is no longer done since the error_flags\n\t\t\t * field is initialized to zero.\n\t\t\t */\n\t\t\tif (strcmp(BASIL_VAL_TRANSIENT, *vp) == 0) {\n\t\t\t\tbrp->error_flags |= (BASIL_ERR_TRANSIENT);\n\t\t\t} else if (strcmp(BASIL_VAL_PERMANENT, *vp) != 0) {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ATR_ERROR_SOURCE, *np) == 0) {\n\t\t\tpbs_strncpy(d->error_source, *vp, sizeof(d->error_source));\n\t\t\t/*\n\t\t\t * Consider \"BACKEND\" errors TRANSIENT when trying\n\t\t\t * to create an ALPS reservation.\n\t\t\t * It was found that a node being changed from\n\t\t\t * batch to interactive would cause a PERMANENT,\n\t\t\t * BACKEND error when a job was run on it. We\n\t\t\t * do not want this to result in the job being deleted.\n\t\t\t */\n\t\t\tif (brp->method == basil_method_reserve) {\n\t\t\t\tif (strcmp(BASIL_VAL_BACKEND, *vp) == 0) {\n\t\t\t\t\tbrp->error_flags |= (BASIL_ERR_TRANSIENT);\n\t\t\t\t}\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ATR_TYPE, *np) == 0) {\n\t\t\tpbs_strncpy(d->type, *vp, sizeof(d->type));\n\t\t\tif ((strcmp(BASIL_VAL_SYSTEM, *vp) != 0) &&\n\t\t\t    (strcmp(BASIL_VAL_ENGINE, *vp) != 0)) {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else {\n\t\t\tparse_err_unrecognized_attr(d, *np);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tif (brp->method == basil_method_none) {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_METHOD);\n\t\treturn;\n\t}\n\tif (*d->status == '\\0') {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_STATUS);\n\t\treturn;\n\t}\n}\n\n/**\n * @brief\n * \tThis function is registered to handle BASIL message elements. Message\n * \telements may appear anywhere in the XML, and may be selectively\n * \tignored. 
Each message must have a severity defined as an attribute.\n *\n * \tThe standard Expat start handler function prototype is used.\n *\n * @param[in] d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @retval Void\n */\nstatic void\nmessage_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\n\tif (stack_busted(d))\n\t\treturn;\n\t*d->message = '\\0';\n\t/*\n\t * Work through the attribute pairs updating the name pointer and\n\t * value pointer with each loop. The somewhat complex loop control\n\t * syntax is just a fancy way of stepping through the pairs.\n\t */\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\txml_dbg(\"%s: %s = %s\", __func__, *np, *vp);\n\t\tif (strcmp(BASIL_ATR_SEVERITY, *np) == 0) {\n\t\t\tif (strcmp(BASIL_VAL_DEBUG, *vp) == 0) {\n\t\t\t\tstrcat(d->message, BASIL_VAL_DEBUG \": \");\n\t\t\t} else if (strcmp(BASIL_VAL_WARNING, *vp) == 0) {\n\t\t\t\tstrcat(d->message, BASIL_VAL_WARNING \": \");\n\t\t\t} else if (strcmp(BASIL_VAL_ERROR, *vp) == 0) {\n\t\t\t\tstrcat(d->message, BASIL_VAL_ERROR \": \");\n\t\t\t} else {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else {\n\t\t\tparse_err_unrecognized_attr(d, *np);\n\t\t\treturn;\n\t\t}\n\t}\n\tif (*d->message == '\\0') {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_SEVERITY);\n\t\treturn;\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \tThis function digests the text component of the message element and\n * \tupdates the message pointer in the user data structure.\n *\n * The standard Expat character handler function prototype is used.\n *\n * @param[in] d pointer to user data structure\n * @param[in] s string\n * @param[in] len length of string\n *\n * @retval Void\n *\n */\nstatic void\nmessage_char_data(ud_t *d, const XML_Char *s, int len)\n{\n\tstrncat(d->message, s, len);\n}\n\n/**\n * @brief\n * 
\tThis function handles the end of a BASIL message element by logging\n * \tthe message to the MOM log file.\n *\n * \tThe standard Expat end handler function prototype is used.\n *\n * @param[in] d pointer to user data structure\n * @param[in] el name of end element\n *\n * @retval Void\n *\n */\nstatic void\nmessage_end(ud_t *d, const XML_Char *el)\n{\n\tif (strcmp(el, handler[d->stack[d->depth]].element) != 0)\n\t\tparse_err_illegal_end(d, el);\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_DEBUG,\n\t\t  BASIL_ERR_ID, d->message);\n\treturn;\n}\n\n/**\n * @brief\n * \tThis function is registered to handle the reserved element in\n * \tresponse to a reservation creation request.\n *\n * Change from basil 1.0: admin_cookie is renamed to pagg_id\n * and alloc_cookie is deprecated as of 1.1.\n *\n * The standard Expat start handler function prototype is used.\n *\n * @param[in] d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @retval Void\n *\n */\nstatic void\nreserved_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\tbasil_response_t *brp;\n\n\tif (stack_busted(d))\n\t\treturn;\n\tif (++(d->count.reserved) > 1) {\n\t\tparse_err_multiple_elements(d);\n\t\treturn;\n\t}\n\tbrp = d->brp;\n\t/*\n\t * Work through the attribute pairs updating the name pointer and\n\t * value pointer with each loop. The somewhat complex loop control\n\t * syntax is just a fancy way of stepping through the pairs.\n\t */\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\txml_dbg(\"%s: %s = %s\", __func__, *np, *vp);\n\t\tif (strcmp(BASIL_ATR_RSVN_ID, *np) == 0) {\n\t\t\tbrp->data.reserve.rsvn_id = strtol(*vp, NULL, 10);\n\t\t} else if (!basil11orig) {\n\t\t\t/*\n\t\t\t * Basil 1.1+ doesn't have any other elements\n\t\t\t * but Basil 1.1 orig has dummy entries for\n\t\t\t * \"admin_cookie\" and \"alloc_cookie\". 
Just\n\t\t\t * ignore them.\n\t\t\t */\n\t\t\tparse_err_unrecognized_attr(d, *np);\n\t\t\treturn;\n\t\t}\n\t}\n\t/* rsvn_id is initialized to -1 so this catches the unset case. */\n\tif (brp->data.reserve.rsvn_id < 0) {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_RSVN_ID);\n\t\treturn;\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \tThis function is registered to handle the confirmed element in\n * \tresponse to a reservation confirmation request.\n *\n * The standard Expat start handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @retval Void\n *\n */\nstatic void\nconfirmed_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\n\tif (stack_busted(d))\n\t\treturn;\n\tif (++(d->count.confirmed) > 1) {\n\t\tparse_err_multiple_elements(d);\n\t\treturn;\n\t}\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\t/*\n\t\t * These keywords do not need to be saved. 
The CONFIRM\n\t\t * reply is just sending back the same values given in\n\t\t * the CONFIRM request.\n\t\t */\n\t\tif (strcmp(BASIL_ATR_RSVN_ID, *np) == 0) {\n\t\t\txml_dbg(\"%s: %s = %s\", __func__, *np, *vp);\n\t\t} else if (strcmp(BASIL_ATR_PAGG_ID, *np) == 0) {\n\t\t\txml_dbg(\"%s: %s = %s\", __func__, *np, *vp);\n\t\t}\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \tThis function is registered to handle the released element in\n * \tresponse to a reservation release request.\n *\n * The standard Expat start handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @retval Void\n *\n */\nstatic void\nreleased_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\tbasil_response_t *brp;\n\n\tif (stack_busted(d))\n\t\treturn;\n\tif (++(d->count.released) > 1) {\n\t\tparse_err_multiple_elements(d);\n\t\treturn;\n\t}\n\tbrp = d->brp;\n\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\t/*\n\t\t * This keyword does not need to be saved. 
The\n\t\t * RELEASE reply is just sending back the same value\n\t\t * given in the RELEASE request.\n\t\t */\n\t\tif (strcmp(BASIL_ATR_RSVN_ID, *np) == 0) {\n\t\t\txml_dbg(\"%s: %s = %s\", __func__, *np, *vp);\n\t\t} else if (strcmp(BASIL_ATR_CLAIMS, *np) == 0) {\n\t\t\tbrp->data.release.claims = strtol(*vp, NULL, 10);\n\t\t\txml_dbg(\"%s: %s = %s\", __func__, *np, *vp);\n\t\t} else {\n\t\t\tparse_err_unrecognized_attr(d, *np);\n\t\t\treturn;\n\t\t}\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \tThis function is registered to handle the engine element in\n * \tresponse to an engine request.\n *\n * The standard Expat start handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @retval Void\n *\n */\nstatic void\nengine_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\tbasil_response_t *brp;\n\tbasil_response_query_engine_t *eng;\n\tint len = 0;\n\n\tif (stack_busted(d))\n\t\treturn;\n\n\tbrp = d->brp;\n\tbrp->data.query.type = basil_query_engine;\n\teng = &brp->data.query.data.engine;\n\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\tif (strcmp(BASIL_ATR_NAME, *np) == 0) {\n\t\t\t/* This keyword does not have to be saved */\n\t\t\txml_dbg(\"%s: %s = %s\", __func__, *np, *vp);\n\t\t} else if (strcmp(BASIL_ATR_VERSION, *np) == 0) {\n\t\t\t/* We will need this in alps_engine_query */\n\t\t\txml_dbg(\"%s: %s = %s\", __func__, *np, *vp);\n\t\t\tif (eng->version) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tlen = strlen(*vp) + 1;\n\t\t\teng->version = malloc(len);\n\t\t\tif (!eng->version) {\n\t\t\t\tparse_err_out_of_memory(d);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tsnprintf(eng->version, len, \"%s\", *vp);\n\t\t} else if (strcmp(BASIL_ATR_SUPPORTED, *np) == 0) {\n\t\t\t/* Save this for use in alps_engine_query */\n\t\t\txml_dbg(\"%s: 
%s = %s\", __func__, *np, *vp);\n\t\t\tif (eng->basil_support) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tlen = strlen(*vp) + 1;\n\t\t\teng->basil_support = malloc(len);\n\t\t\tif (!eng->basil_support) {\n\t\t\t\tparse_err_out_of_memory(d);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tsnprintf(eng->basil_support, len, \"%s\", *vp);\n\t\t}\n\t}\n\tif (!eng->version) {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_VERSION);\n\t\treturn;\n\t}\n\tif (!eng->basil_support) {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_SUPPORTED);\n\t\treturn;\n\t}\n}\n\n/**\n * @brief\n * \tThis function is registered to handle the inventory element in\n * \tresponse to an inventory request.\n *\n * The standard Expat start handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @return Void\n *\n */\nstatic void\ninventory_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\tbasil_response_t *brp;\n\tbasil_response_query_inventory_t *inv;\n\n\tif (stack_busted(d))\n\t\treturn;\n\tif (++(d->count.inventory) > 1) {\n\t\tparse_err_multiple_elements(d);\n\t\treturn;\n\t}\n\n\tbrp = d->brp;\n\tbrp->data.query.type = basil_query_inventory;\n\tinv = &brp->data.query.data.inventory;\n\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\txml_dbg(\"%s: %s = %s\", __func__, *np, *vp);\n\t\tif (strcmp(BASIL_ATR_TIMESTAMP, *np) == 0) {\n\t\t\tif (inv->timestamp != 0) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tinv->timestamp = atoll(*vp);\n\t\t} else if (strcmp(BASIL_ATR_MPPHOST, *np) == 0) {\n\t\t\tif (inv->mpp_host[0] != '\\0') {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tsnprintf(&inv->mpp_host[0],\n\t\t\t\t BASIL_STRING_LONG, \"%s\", *vp);\n\t\t} else {\n\t\t\tparse_err_unrecognized_attr(d, 
*np);\n\t\t\treturn;\n\t\t}\n\t}\n\n\t/*\n\t * The mpp_host and timestamp fields will be filled in\n\t * for BASIL_VAL_VERSION_1_1 \"plus\" and higher. There is no other\n\t * way to tell BASIL_VAL_VERSION_1_1 from 1.1+.\n\t */\n\tif (inv->timestamp == 0) {\n\t\tinv->timestamp = time(NULL);\n\t\tbasil11orig = 1;\n\t}\n\tif (inv->mpp_host[0] == '\\0') {\n\t\tpbs_strncpy(inv->mpp_host, FAKE_MPP_HOST, sizeof(inv->mpp_host));\n\t\tbasil11orig = 1;\n\t}\n\n\td->count.node_array = 0;\n\td->count.reservation_array = 0;\n\td->count.accelerator_array = 0;\n\td->count.socket_array = 0;\n\td->count.segment_array = 0;\n\td->count.computeunit_array = 0;\n\n\t/* set interesting counts to zero */\n\td->current.role_int = 0;\n\td->current.role_batch = 0;\n\td->current.role_unknown = 0;\n\td->current.state_up = 0;\n\td->current.state_down = 0;\n\td->current.state_unavail = 0;\n\td->current.state_routing = 0;\n\td->current.state_suspect = 0;\n\td->current.state_admin = 0;\n\td->current.state_unknown = 0;\n\td->current.accel_type_gpu = 0;\n\td->current.accel_type_unknown = 0;\n\td->current.accel_state_up = 0;\n\td->current.accel_state_down = 0;\n\td->current.accel_state_unknown = 0;\n\td->current.socket_count = 0;\n\treturn;\n}\n\n/**\n * @brief\n * \tThis function is registered to handle the node array element within\n * \tan inventory response.\n *\n * The standard Expat start handler function prototype is used.\n *\n * @param[in] d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @retval Void\n *\n */\nstatic void\nnode_array_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\n\tif (stack_busted(d))\n\t\treturn;\n\tif (++(d->count.node_array) > 1) {\n\t\tparse_err_multiple_elements(d);\n\t\treturn;\n\t}\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\txml_dbg(\"%s: %s = %s\", __func__, *np, *vp);\n\t\tif 
(strcmp(BASIL_ATR_CHANGECOUNT, *np) == 0) {\n\t\t\t/*\n\t\t\t * Currently unused.\n\t\t\t * We could save changecount if we ever started\n\t\t\t * requesting inventory more frequently.\n\t\t\t * changecount could help reduce the amount of data\n\t\t\t * returned if the inventory has not changed.\n\t\t\t */\n\t\t} else if (strcmp(BASIL_ATR_SCHEDCOUNT, *np) == 0) {\n\t\t\t/*\n\t\t\t * Currently unused.\n\t\t\t */\n\t\t} else {\n\t\t\tparse_err_unrecognized_attr(d, *np);\n\t\t\treturn;\n\t\t}\n\t}\n\td->current.node = NULL;\n\treturn;\n}\n\n/**\n * @brief\n * \tThis function is registered to handle the node element within an\n * \tinventory response.\n *\n * Due to the new response format introduced in BASIL 1.3, and\n * continuing in BASIL 1.4, the counts for the different arrays need to\n * be reset at different places.\n * For example, since BASIL 1.1 and BASIL 1.2 have segments but no sockets,\n * we need to reset the segment_array count as part of node_start; in\n * BASIL 1.3 and 1.4 the segment_array count is reset in socket_start.\n *\n * @param[in] d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @retval Void\n *\n */\nstatic void\nnode_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\tbasil_node_t *node;\n\tbasil_response_t *brp;\n\n\tif (stack_busted(d))\n\t\treturn;\n\tbrp = d->brp;\n\tnode = malloc(sizeof(basil_node_t));\n\tif (!node) {\n\t\tparse_err_out_of_memory(d);\n\t\treturn;\n\t}\n\tmemset(node, 0, sizeof(basil_node_t));\n\tnode->node_id = -1;\n\tif (d->current.node) {\n\t\t(d->current.node)->next = node;\n\t} else {\n\t\tbrp->data.query.data.inventory.nodes = node;\n\t}\n\td->current.node = node;\n\t/*\n\t * Work through the attribute pairs updating the name pointer and\n\t * value pointer with each loop. 
The somewhat complex loop control\n\t * syntax is just a fancy way of stepping through the pairs.\n\t */\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\txml_dbg(\"%s: %s = %s\", __func__, *np, *vp);\n\t\tif (strcmp(BASIL_ATR_NODE_ID, *np) == 0) {\n\t\t\tif (node->node_id >= 0) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tnode->node_id = atol(*vp);\n\t\t\tif (node->node_id < 0) {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ATR_ROUTER_ID, *np) == 0) {\n\t\t\tif (node->router_id > 0) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tnode->router_id = atol(*vp);\n\t\t\tif (node->router_id <= 0) {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ATR_NAME, *np) == 0) {\n\t\t\tif (*node->name) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tsnprintf(node->name, BASIL_STRING_SHORT, \"%s\", *vp);\n\t\t} else if (strcmp(BASIL_ATR_ARCH, *np) == 0) {\n\t\t\tif (node->arch) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tif (strcmp(BASIL_VAL_XT, *vp) == 0) {\n\t\t\t\tnode->arch = basil_node_arch_xt;\n\t\t\t} else if (strcmp(BASIL_VAL_X2, *vp) == 0) {\n\t\t\t\tnode->arch = basil_node_arch_x2;\n\t\t\t} else {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ATR_ROLE, *np) == 0) {\n\t\t\tif (node->role) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tif (strcmp(BASIL_VAL_INTERACTIVE, *vp) == 0) {\n\t\t\t\td->current.role_int++;\n\t\t\t\tnode->role = basil_node_role_interactive;\n\t\t\t} else if (strcmp(BASIL_VAL_BATCH, *vp) == 0) {\n\t\t\t\td->current.role_batch++;\n\t\t\t\tnode->role = basil_node_role_batch;\n\t\t\t} else {\n\t\t\t\td->current.role_unknown++;\n\t\t\t\tnode->role = basil_node_role_unknown;\n\t\t\t}\n\t\t} else if 
(strcmp(BASIL_ATR_STATE, *np) == 0) {\n\t\t\tif (node->state) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tif (strcmp(BASIL_VAL_UP, *vp) == 0) {\n\t\t\t\td->current.state_up++;\n\t\t\t\tnode->state = basil_node_state_up;\n\t\t\t} else if (strcmp(BASIL_VAL_DOWN, *vp) == 0) {\n\t\t\t\td->current.state_down++;\n\t\t\t\tnode->state = basil_node_state_down;\n\t\t\t} else if (strcmp(BASIL_VAL_UNAVAILABLE, *vp) == 0) {\n\t\t\t\td->current.state_unavail++;\n\t\t\t\tnode->state = basil_node_state_unavail;\n\t\t\t} else if (strcmp(BASIL_VAL_ROUTING, *vp) == 0) {\n\t\t\t\td->current.state_routing++;\n\t\t\t\tnode->state = basil_node_state_route;\n\t\t\t} else if (strcmp(BASIL_VAL_SUSPECT, *vp) == 0) {\n\t\t\t\td->current.state_suspect++;\n\t\t\t\tnode->state = basil_node_state_suspect;\n\t\t\t} else if (strcmp(BASIL_VAL_ADMIN, *vp) == 0) {\n\t\t\t\td->current.state_admin++;\n\t\t\t\tnode->state = basil_node_state_admindown;\n\t\t\t} else {\n\t\t\t\td->current.state_unknown++;\n\t\t\t\tnode->state = basil_node_state_unknown;\n\t\t\t}\n\t\t} else {\n\t\t\tparse_err_unrecognized_attr(d, *np);\n\t\t\treturn;\n\t\t}\n\t}\n\tif (node->node_id < 0) {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_NODE_ID);\n\t\treturn;\n\t}\n\tif (*node->name == '\\0') {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_NAME);\n\t\treturn;\n\t}\n\tif (!node->role) {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_ROLE);\n\t\treturn;\n\t}\n\tif (!node->state) {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_STATE);\n\t\treturn;\n\t}\n\t/* Reset the array counters. 
*/\n\tswitch (basilver) {\n\t\tcase basil_1_0:\n\t\tcase basil_1_1:\n\t\tcase basil_1_2:\n\t\t\td->count.segment_array = 0;\n\t\t\tbreak;\n\t\tcase basil_1_3:\n\t\tcase basil_1_4:\n\t\tcase basil_1_7:\n\t\t\t/* segment_array is reset in socket_start() for these\n\t\t\t * BASIL versions.\n\t\t\t */\n\t\t\tbreak;\n\t}\n\td->count.socket_array = 0;\n\td->count.accelerator_array = 0;\n\treturn;\n}\n\n/**\n * @brief\n * \tThis function is registered to handle the socket array element\n * \twithin an inventory response.\n *\n * The standard Expat start handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n * @retval Void\n *\n */\nstatic void\nsocket_array_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\n\tif (stack_busted(d))\n\t\treturn;\n\tif (++(d->count.socket_array) > 1) {\n\t\tparse_err_multiple_elements(d);\n\t\treturn;\n\t}\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\tparse_err_unrecognized_attr(d, *np);\n\t\treturn;\n\t}\n\td->current.socket = NULL;\n\treturn;\n}\n\n/**\n * @brief\n *\t This function is registered to handle the socket element within an inventory response.\n * Starting with BASIL 1.3 the socket array has the architecture and clock_mhz of the processors.\n * However, this information is not used by PBS.\n *\n * The standard Expat start handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @return Void\n *\n */\nstatic void\nsocket_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\tbasil_node_socket_t *socket;\n\n\tif (stack_busted(d))\n\t\treturn;\n\tsocket = malloc(sizeof(basil_node_socket_t));\n\tif (!socket) 
{\n\t\tparse_err_out_of_memory(d);\n\t\treturn;\n\t}\n\tmemset(socket, 0, sizeof(basil_node_socket_t));\n\tsocket->ordinal = -1;\n\tsocket->clock_mhz = -1;\n\tif (d->current.socket) {\n\t\t(d->current.socket)->next = socket;\n\t} else {\n\t\tif (!d->current.node) {\n\t\t\tparse_err_internal(d);\n\t\t\tfree(socket);\n\t\t\treturn;\n\t\t}\n\t\t(d->current.node)->sockets = socket;\n\t}\n\td->current.socket = socket;\n\n\t/*\n\t * Work through the attribute pairs updating the name pointer and\n\t * value pointer with each loop. The somewhat complex loop control\n\t * syntax is just a fancy way of stepping through the pairs.\n\t */\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\txml_dbg(\"%s: %s = %s\", __func__, *np, *vp);\n\t\tif (strcmp(BASIL_ATR_ORDINAL, *np) == 0) {\n\t\t\tif (socket->ordinal >= 0) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tsocket->ordinal = atoi(*vp);\n\t\t\tif (socket->ordinal < 0) {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ATR_ARCH, *np) == 0) {\n\t\t\tif (socket->arch) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tif (strcmp(BASIL_VAL_X86_64, *vp) == 0) {\n\t\t\t\tsocket->arch = basil_processor_x86_64;\n\t\t\t} else if (strcmp(BASIL_VAL_CRAY_X2, *vp) == 0) {\n\t\t\t\tsocket->arch = basil_processor_cray_x2;\n\t\t\t} else if (strcmp(BASIL_VAL_AARCH64, *vp) == 0) {\n\t\t\t\tsocket->arch = basil_processor_aarch64;\n\t\t\t} else {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ATR_CLOCK_MHZ, *np) == 0) {\n\t\t\tif (socket->clock_mhz >= 0) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tsocket->clock_mhz = atoi(*vp);\n\t\t\tif (socket->clock_mhz < 0) {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else {\n\t\t\tparse_err_unrecognized_attr(d, 
*np);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tif (socket->ordinal < 0) {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_ORDINAL);\n\t\treturn;\n\t}\n\tif (!socket->arch) {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_ARCH);\n\t\treturn;\n\t}\n\tif (socket->clock_mhz < 0) {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_CLOCK_MHZ);\n\t\treturn;\n\t}\n\t/* Increase the socket count */\n\td->current.socket_count++;\n\n\t/* Reset the array counters and segment */\n\td->count.segment_array = 0;\n\td->current.segment = NULL;\n\n\treturn;\n}\n\n/**\n * @brief\n *\tThis function is registered to handle the segment array element\n * within an inventory response.\n *\n * The standard Expat start handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @return Void\n *\n */\nstatic void\nsegment_array_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\n\tif (stack_busted(d))\n\t\treturn;\n\tif (++(d->count.segment_array) > 1) {\n\t\tparse_err_multiple_elements(d);\n\t\treturn;\n\t}\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\tparse_err_unrecognized_attr(d, *np);\n\t\treturn;\n\t}\n\tif (!d->current.socket)\n\t\td->current.segment = NULL;\n\treturn;\n}\n\n/**\n * @brief\n *\tThis function is registered to handle the segment element within an\n * inventory response.\n *\n * Due to the new XML format introduced in BASIL 1.3, and\n * continuing in BASIL 1.4, the counts for the different arrays need to\n * be reset at different places.\n * For example, since BASIL 1.1 and BASIL 1.2 have no compute units, we\n * don't need to reset the count here. 
Also for BASIL 1.1 and 1.2, the processor\n * count will be reset in processor_array_start.\n * For BASIL 1.3 and BASIL 1.4 we need to reset the count for compute unit\n * arrays here.\n *\n * The standard Expat start handler function prototype is used.\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @return Void\n *\n */\nstatic void\nsegment_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\tbasil_node_segment_t *segment;\n\n\tif (stack_busted(d))\n\t\treturn;\n\tsegment = malloc(sizeof(basil_node_segment_t));\n\tif (!segment) {\n\t\tparse_err_out_of_memory(d);\n\t\treturn;\n\t}\n\tmemset(segment, 0, sizeof(basil_node_segment_t));\n\tsegment->ordinal = -1;\n\tif (d->current.segment) {\n\t\t(d->current.segment)->next = segment;\n\t} else {\n\t\tif (!d->current.node) {\n\t\t\tparse_err_internal(d);\n\t\t\tfree(segment);\n\t\t\treturn;\n\t\t}\n\t\tswitch (basilver) {\n\t\t\tcase basil_1_0:\n\t\t\tcase basil_1_1:\n\t\t\tcase basil_1_2:\n\t\t\t\t/* There are no socket elements. */\n\t\t\t\t(d->current.node)->segments = segment;\n\t\t\t\tbreak;\n\t\t\tcase basil_1_3:\n\t\t\tcase basil_1_4:\n\t\t\tcase basil_1_7:\n\t\t\t\t(d->current.socket)->segments = segment;\n\t\t\t\tbreak;\n\t\t}\n\t}\n\td->current.segment = segment;\n\t/*\n\t * Work through the attribute pairs updating the name pointer and\n\t * value pointer with each loop. 
The somewhat complex loop control\n\t * syntax is just a fancy way of stepping through the pairs.\n\t */\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\txml_dbg(\"%s: %s = %s\", __func__, *np, *vp);\n\t\tif (strcmp(BASIL_ATR_ORDINAL, *np) == 0) {\n\t\t\tif (segment->ordinal >= 0) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tsegment->ordinal = atol(*vp);\n\t\t\tif (segment->ordinal < 0) {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else {\n\t\t\tparse_err_unrecognized_attr(d, *np);\n\t\t\treturn;\n\t\t}\n\t}\n\tif (segment->ordinal < 0) {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_ORDINAL);\n\t\treturn;\n\t}\n\t/* Reset the array counters. */\n\tswitch (basilver) {\n\t\tcase basil_1_0:\n\t\tcase basil_1_1:\n\t\tcase basil_1_2:\n\t\t\t/* There are no compute units, and the processor\n\t\t\t * count was initialized as part of processor_array_start()\n\t\t\t */\n\t\t\tbreak;\n\t\tcase basil_1_3:\n\t\tcase basil_1_4:\n\t\tcase basil_1_7:\n\t\t\td->count.computeunit_array = 0;\n\t\t\td->current.processor = NULL;\n\t\t\tbreak;\n\t}\n\td->count.processor_array = 0;\n\td->count.memory_array = 0;\n\td->count.label_array = 0;\n\treturn;\n}\n\n/**\n * @brief\n *\tThis function is registered to handle the computeunit array element\n * within an inventory response.\n *\n * The standard Expat start handler function prototype is used.\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n */\nstatic void\ncomputeunit_array_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\n\tif (stack_busted(d))\n\t\treturn;\n\tif (++(d->count.computeunit_array) > 1) {\n\t\tparse_err_multiple_elements(d);\n\t\treturn;\n\t}\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\tparse_err_unrecognized_attr(d, 
*np);\n\t\treturn;\n\t}\n\td->current.cu = NULL;\n\treturn;\n}\n\n/**\n * @brief\n *\tThis function is registered to handle the computeunit element within an\n * inventory response.\n *\n * The standard Expat start handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @return Void\n *\n */\nstatic void\ncomputeunit_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\tbasil_node_computeunit_t *cu;\n\n\tif (stack_busted(d))\n\t\treturn;\n\tcu = malloc(sizeof(basil_node_computeunit_t));\n\tif (!cu) {\n\t\tparse_err_out_of_memory(d);\n\t\treturn;\n\t}\n\tmemset(cu, 0, sizeof(basil_node_computeunit_t));\n\tcu->ordinal = -1;\n\tcu->proc_per_cu_count = 0;\n\n\tif (d->current.cu) {\n\t\t(d->current.cu)->next = cu;\n\t} else {\n\t\tif (!d->current.segment) {\n\t\t\tparse_err_internal(d);\n\t\t\tfree(cu);\n\t\t\treturn;\n\t\t}\n\t\t(d->current.segment)->computeunits = cu;\n\t}\n\td->current.cu = cu;\n\n\t/*\n\t * Work through the attribute pairs updating the name pointer and\n\t * value pointer with each loop. 
The somewhat complex loop control\n\t * syntax is just a fancy way of stepping through the pairs.\n\t */\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\txml_dbg(\"%s: %s = %s\", __func__, *np, *vp);\n\t\tif (strcmp(BASIL_ATR_ORDINAL, *np) == 0) {\n\t\t\tif (cu->ordinal >= 0) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tcu->ordinal = atol(*vp);\n\t\t\tif (cu->ordinal < 0) {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else {\n\t\t\tparse_err_unrecognized_attr(d, *np);\n\t\t\treturn;\n\t\t}\n\t}\n\tif (cu->ordinal < 0) {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_ORDINAL);\n\t\treturn;\n\t}\n\t/* Reset the array counter */\n\td->count.processor_array = 0;\n}\n\n/**\n * @brief\n *\tThis function is registered to handle the processor array element\n * within an inventory response.\n *\n * Due to the new response format introduced in BASIL 1.3, and\n * continuing in BASIL 1.4, the counts for the different arrays need to\n * be reset at different times.\n * For example, for BASIL 1.1 and 1.2, the processor pointer is set to\n * NULL here, whereas for BASIL 1.3 and BASIL 1.4 it is reset in\n * segment_start.\n *\n * The standard Expat start handler function prototype is used.\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n * @return Void\n *\n */\nstatic void\nprocessor_array_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\n\tif (stack_busted(d))\n\t\treturn;\n\tif (++(d->count.processor_array) > 1) {\n\t\tparse_err_multiple_elements(d);\n\t\treturn;\n\t}\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\tparse_err_unrecognized_attr(d, *np);\n\t\treturn;\n\t}\n\tswitch (basilver) {\n\t\tcase basil_1_0:\n\t\tcase basil_1_1:\n\t\tcase basil_1_2:\n\t\t\td->current.processor = NULL;\n\t\t\tbreak;\n\t\tcase 
basil_1_3:\n\t\tcase basil_1_4:\n\t\tcase basil_1_7:\n\t\t\t/* processor is reset in segment_start()\n\t\t\t * for these BASIL versions\n\t\t\t */\n\t\t\tbreak;\n\t}\n\treturn;\n}\n\n/**\n * @brief\n *\tThis function is registered to handle the processor element within an\n * inventory response.\n *\n * Due to the new response format introduced in BASIL 1.3, and\n * continuing in BASIL 1.4, the information attached to different XML\n * sections has changed. For example, processor arch and MHz info is\n * no longer part of the processor XML for BASIL 1.3 and 1.4. Thus that\n * information is not verified here.\n *\n * The standard Expat start handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @return Void\n */\nstatic void\nprocessor_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\tbasil_node_processor_t *processor;\n\tbasil_node_computeunit_t *cu = NULL;\n\n\tif (stack_busted(d))\n\t\treturn;\n\tprocessor = malloc(sizeof(basil_node_processor_t));\n\tif (!processor) {\n\t\tparse_err_out_of_memory(d);\n\t\treturn;\n\t}\n\tmemset(processor, 0, sizeof(basil_node_processor_t));\n\tprocessor->ordinal = -1;\n\tprocessor->clock_mhz = -1;\n\tif (d->current.processor) {\n\t\t(d->current.processor)->next = processor;\n\t} else {\n\t\tif (!d->current.segment) {\n\t\t\tparse_err_internal(d);\n\t\t\tfree(processor);\n\t\t\treturn;\n\t\t}\n\t\t(d->current.segment)->processors = processor;\n\t}\n\td->current.processor = processor;\n\tswitch (basilver) {\n\t\tcase basil_1_0:\n\t\tcase basil_1_1:\n\t\tcase basil_1_2:\n\t\t\t/* There are no computeunits for these BASIL versions */\n\t\t\tbreak;\n\t\tcase basil_1_3:\n\t\tcase basil_1_4:\n\t\tcase basil_1_7:\n\t\t\tcu = (d->current.segment)->computeunits;\n\t\t\tif (!cu) 
{\n\t\t\t\tparse_err_internal(d);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tbreak;\n\t}\n\n\t/*\n\t * Work through the attribute pairs updating the name pointer and\n\t * value pointer with each loop. The somewhat complex loop control\n\t * syntax is just a fancy way of stepping through the pairs.\n\t */\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\txml_dbg(\"%s: %s = %s\", __func__, *np, *vp);\n\t\tif (strcmp(BASIL_ATR_ORDINAL, *np) == 0) {\n\t\t\tif (processor->ordinal >= 0) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tprocessor->ordinal = atol(*vp);\n\t\t\tif (processor->ordinal < 0) {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tif (cu) {\n\t\t\t\tcu->proc_per_cu_count = processor->ordinal + 1;\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ATR_ARCH, *np) == 0) {\n\t\t\tif (processor->arch) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tif (strcmp(BASIL_VAL_X86_64, *vp) == 0) {\n\t\t\t\tprocessor->arch = basil_processor_x86_64;\n\t\t\t} else if (strcmp(BASIL_VAL_CRAY_X2, *vp) == 0) {\n\t\t\t\tprocessor->arch = basil_processor_cray_x2;\n\t\t\t} else if (strcmp(BASIL_VAL_AARCH64, *vp) == 0) {\n\t\t\t\tprocessor->arch = basil_processor_aarch64;\n\t\t\t} else {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ATR_CLOCK_MHZ, *np) == 0) {\n\t\t\tif (processor->clock_mhz >= 0) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tprocessor->clock_mhz = atoi(*vp);\n\t\t\tif (processor->clock_mhz < 0) {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else {\n\t\t\tparse_err_unrecognized_attr(d, *np);\n\t\t\treturn;\n\t\t}\n\t}\n\tif (processor->ordinal < 0) {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_ORDINAL);\n\t\treturn;\n\t}\n\tswitch (basilver) {\n\t\tcase basil_1_0:\n\t\tcase basil_1_1:\n\t\tcase basil_1_2:\n\t\t\tif (!processor->arch) 
{\n\t\t\t\tparse_err_unspecified_attr(d, BASIL_ATR_ARCH);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tif (processor->clock_mhz < 0) {\n\t\t\t\tparse_err_unspecified_attr(d, BASIL_ATR_CLOCK_MHZ);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase basil_1_3:\n\t\tcase basil_1_4:\n\t\tcase basil_1_7:\n\t\t\t/* The arch and mhz info is no longer part of the\n\t\t\t * processor XML for these BASIL versions\n\t\t\t */\n\t\t\tbreak;\n\t}\n\td->current.processor_allocation = NULL;\n\treturn;\n}\n\n/**\n * This function is registered to handle the processor allocation element\n * within an inventory response.\n *\n * The standard Expat start handler function prototype is used.\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n */\nstatic void\nprocessor_allocation_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\tbasil_processor_allocation_t *procalloc;\n\n\tif (stack_busted(d))\n\t\treturn;\n\tprocalloc = malloc(sizeof(basil_processor_allocation_t));\n\tif (!procalloc) {\n\t\tparse_err_out_of_memory(d);\n\t\treturn;\n\t}\n\tmemset(procalloc, 0, sizeof(basil_processor_allocation_t));\n\tprocalloc->rsvn_id = -1;\n\tif (d->current.processor_allocation) {\n\t\t(d->current.processor_allocation)->next = procalloc;\n\t} else {\n\t\tif (!d->current.processor) {\n\t\t\tparse_err_internal(d);\n\t\t\tfree(procalloc);\n\t\t\treturn;\n\t\t}\n\t\t(d->current.processor)->allocations = procalloc;\n\t}\n\td->current.processor_allocation = procalloc;\n\t/*\n\t * Work through the attribute pairs updating the name pointer and\n\t * value pointer with each loop. 
The somewhat complex loop control\n\t * syntax is just a fancy way of stepping through the pairs.\n\t */\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\txml_dbg(\"%s: %s = %s\", __func__, *np, *vp);\n\t\tif (strcmp(BASIL_ATR_RSVN_ID, *np) == 0) {\n\t\t\tif (procalloc->rsvn_id >= 0) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tprocalloc->rsvn_id = atol(*vp);\n\t\t\tif (procalloc->rsvn_id < 0) {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else {\n\t\t\tparse_err_unrecognized_attr(d, *np);\n\t\t\treturn;\n\t\t}\n\t}\n\tif (procalloc->rsvn_id < 0) {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_RSVN_ID);\n\t\treturn;\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \tThis function is registered to handle the memory array element within\n * an inventory response.\n *\n * The standard Expat start handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @return Void\n *\n */\nstatic void\nmemory_array_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\n\tif (stack_busted(d))\n\t\treturn;\n\tif (++(d->count.memory_array) > 1) {\n\t\tparse_err_multiple_elements(d);\n\t\treturn;\n\t}\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\tparse_err_unrecognized_attr(d, *np);\n\t\treturn;\n\t}\n\td->current.memory = NULL;\n\treturn;\n}\n\n/**\n * @brief\n *\tThis function is registered to handle the memory element within an\n * inventory response.\n *\n * The standard Expat start handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @return Void\n */\nstatic void\nmemory_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char 
**np;\n\tconst XML_Char **vp;\n\tbasil_node_memory_t *memory;\n\n\tif (stack_busted(d))\n\t\treturn;\n\tmemory = malloc(sizeof(basil_node_memory_t));\n\tif (!memory) {\n\t\tparse_err_out_of_memory(d);\n\t\treturn;\n\t}\n\tmemset(memory, 0, sizeof(basil_node_memory_t));\n\tmemory->page_size_kb = -1;\n\tmemory->page_count = -1;\n\tif (d->current.memory) {\n\t\t(d->current.memory)->next = memory;\n\t} else {\n\t\tif (!d->current.segment) {\n\t\t\tparse_err_internal(d);\n\t\t\tfree(memory);\n\t\t\treturn;\n\t\t}\n\t\t(d->current.segment)->memory = memory;\n\t}\n\td->current.memory = memory;\n\t/*\n\t * Work through the attribute pairs updating the name pointer and\n\t * value pointer with each loop. The somewhat complex loop control\n\t * syntax is just a fancy way of stepping through the pairs.\n\t */\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\txml_dbg(\"%s: %s = %s\", __func__, *np, *vp);\n\t\tif (strcmp(BASIL_ATR_TYPE, *np) == 0) {\n\t\t\tif (memory->type != 0) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tif (strcmp(BASIL_VAL_OS, *vp) == 0) {\n\t\t\t\tmemory->type = basil_memory_type_os;\n\t\t\t} else if (strcmp(BASIL_VAL_VIRTUAL, *vp) == 0) {\n\t\t\t\tmemory->type = basil_memory_type_virtual;\n\t\t\t} else if (strcmp(BASIL_VAL_HUGEPAGE, *vp) == 0) {\n\t\t\t\tmemory->type = basil_memory_type_hugepage;\n\t\t\t} else {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ATR_PAGE_SIZE_KB, *np) == 0) {\n\t\t\tif (memory->page_size_kb >= 0) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tmemory->page_size_kb = atol(*vp);\n\t\t\tif (memory->page_size_kb < 1) {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ATR_PAGE_COUNT, *np) == 0) {\n\t\t\tif (memory->page_count >= 0) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tmemory->page_count = 
atol(*vp);\n\t\t\tif (memory->page_count < 1) {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else {\n\t\t\tparse_err_unrecognized_attr(d, *np);\n\t\t\treturn;\n\t\t}\n\t}\n\tif (!memory->type) {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_TYPE);\n\t\treturn;\n\t}\n\tif (memory->page_size_kb < 0) {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_PAGE_SIZE_KB);\n\t\treturn;\n\t}\n\tif (memory->page_count < 0) {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_PAGE_COUNT);\n\t\treturn;\n\t}\n\td->current.memory_allocation = NULL;\n\treturn;\n}\n\n/**\n * @brief\n *\tThis function is registered to handle the memory allocation element\n * \twithin an inventory response.\n *\n * The standard Expat start handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @return Void\n *\n */\nstatic void\nmemory_allocation_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\tbasil_memory_allocation_t *memalloc;\n\n\tif (stack_busted(d))\n\t\treturn;\n\tmemalloc = malloc(sizeof(basil_memory_allocation_t));\n\tif (!memalloc) {\n\t\tparse_err_out_of_memory(d);\n\t\treturn;\n\t}\n\tmemset(memalloc, 0, sizeof(basil_memory_allocation_t));\n\tmemalloc->rsvn_id = -1;\n\tmemalloc->page_count = -1;\n\tif (d->current.memory_allocation) {\n\t\t(d->current.memory_allocation)->next = memalloc;\n\t} else {\n\t\tif (!d->current.memory) {\n\t\t\tparse_err_internal(d);\n\t\t\tfree(memalloc);\n\t\t\treturn;\n\t\t}\n\t\t(d->current.memory)->allocations = memalloc;\n\t}\n\td->current.memory_allocation = memalloc;\n\t/*\n\t * Work through the attribute pairs updating the name pointer and\n\t * value pointer with each loop. 
The somewhat complex loop control\n\t * syntax is just a fancy way of stepping through the pairs.\n\t */\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\txml_dbg(\"%s: %s = %s\", __func__, *np, *vp);\n\t\tif (strcmp(BASIL_ATR_RSVN_ID, *np) == 0) {\n\t\t\tif (memalloc->rsvn_id >= 0) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tmemalloc->rsvn_id = atol(*vp);\n\t\t\tif (memalloc->rsvn_id < 0) {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ATR_PAGE_COUNT, *np) == 0) {\n\t\t\tif (memalloc->page_count > 0) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tmemalloc->page_count = atol(*vp);\n\t\t\tif (memalloc->page_count <= 0) {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else {\n\t\t\tparse_err_unrecognized_attr(d, *np);\n\t\t\treturn;\n\t\t}\n\t}\n\tif (memalloc->rsvn_id < 0) {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_RSVN_ID);\n\t\treturn;\n\t}\n\tif (memalloc->page_count <= 0) {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_PAGE_COUNT);\n\t\treturn;\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \tThis function is registered to handle the label array element within\n * \tan inventory response.\n *\n * The standard Expat start handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @return Void\n *\n */\nstatic void\nlabel_array_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\n\tif (stack_busted(d))\n\t\treturn;\n\tif (++(d->count.label_array) > 1) {\n\t\tparse_err_multiple_elements(d);\n\t\treturn;\n\t}\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\tparse_err_unrecognized_attr(d, *np);\n\t\treturn;\n\t}\n\td->current.label = NULL;\n\treturn;\n}\n\n/**\n * @brief\n *\t 
This function is registered to handle the label element within an\n * \tinventory response.\n *\n * The standard Expat start handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @return Void\n *\n */\nstatic void\nlabel_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\tbasil_label_t *label;\n\n\tif (stack_busted(d))\n\t\treturn;\n\tlabel = malloc(sizeof(basil_label_t));\n\tif (!label) {\n\t\tparse_err_out_of_memory(d);\n\t\treturn;\n\t}\n\tmemset(label, 0, sizeof(basil_label_t));\n\tif (d->current.label) {\n\t\t(d->current.label)->next = label;\n\t} else {\n\t\tif (!d->current.segment) {\n\t\t\tparse_err_internal(d);\n\t\t\tfree(label);\n\t\t\treturn;\n\t\t}\n\t\t(d->current.segment)->labels = label;\n\t}\n\td->current.label = label;\n\t/*\n\t * Work through the attribute pairs updating the name pointer and\n\t * value pointer with each loop. 
The somewhat complex loop control\n\t * syntax is just a fancy way of stepping through the pairs.\n\t */\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\txml_dbg(\"%s: %s = %s\", __func__, *np, *vp);\n\t\tif (strcmp(BASIL_ATR_NAME, *np) == 0) {\n\t\t\tif (*label->name) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tsnprintf(label->name, BASIL_STRING_MEDIUM, \"%s\", *vp);\n\t\t} else if (strcmp(BASIL_ATR_TYPE, *np) == 0) {\n\t\t\tif (label->type) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tif (strcmp(BASIL_VAL_HARD, *vp) == 0) {\n\t\t\t\tlabel->type = basil_label_type_hard;\n\t\t\t} else if (strcmp(BASIL_VAL_SOFT, *vp) == 0) {\n\t\t\t\tlabel->type = basil_label_type_soft;\n\t\t\t} else {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ATR_DISPOSITION, *np) == 0) {\n\t\t\tif (label->disposition) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tif (strcmp(BASIL_VAL_ATTRACT, *vp) == 0) {\n\t\t\t\tlabel->disposition =\n\t\t\t\t\tbasil_label_disposition_attract;\n\t\t\t} else if (strcmp(BASIL_VAL_REPEL, *vp) == 0) {\n\t\t\t\tlabel->disposition =\n\t\t\t\t\tbasil_label_disposition_repel;\n\t\t\t} else {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else {\n\t\t\tparse_err_unrecognized_attr(d, *np);\n\t\t\treturn;\n\t\t}\n\t}\n\tif (*label->name == '\\0') {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_NAME);\n\t\treturn;\n\t}\n\tif (!label->type) {\n\t\tlabel->type = basil_label_type_hard;\n\t}\n\tif (!label->disposition) {\n\t\tlabel->disposition = basil_label_disposition_attract;\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \tThis function is registered to handle the accelerator array element\n * \twithin an inventory response.\n *\n * The standard Expat start handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] el 
unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @return Void\n *\n */\nstatic void\naccelerator_array_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\n\tif (stack_busted(d))\n\t\treturn;\n\tif (++(d->count.accelerator_array) > 1) {\n\t\tparse_err_multiple_elements(d);\n\t\treturn;\n\t}\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\tparse_err_unrecognized_attr(d, *np);\n\t\treturn;\n\t}\n\td->current.accelerator = NULL;\n\treturn;\n}\n\n/**\n * @brief\n * \tThis function is registered to handle the accelerator element within an\n * \tinventory response.\n *\n * The standard Expat start handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @return Void\n *\n */\nstatic void\naccelerator_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\tbasil_node_accelerator_t *accelerator;\n\tbasil_accelerator_gpu_t *gpu;\n\tchar *family;\n\n\tif (stack_busted(d))\n\t\treturn;\n\taccelerator = malloc(sizeof(basil_node_accelerator_t));\n\tif (!accelerator) {\n\t\tparse_err_out_of_memory(d);\n\t\treturn;\n\t}\n\tmemset(accelerator, 0, sizeof(basil_node_accelerator_t));\n\tgpu = malloc(sizeof(basil_accelerator_gpu_t));\n\tif (!gpu) {\n\t\tparse_err_out_of_memory(d);\n\t\tfree(accelerator);\n\t\treturn;\n\t}\n\tmemset(gpu, 0, sizeof(basil_accelerator_gpu_t));\n\taccelerator->data.gpu = gpu;\n\tif (d->current.accelerator) {\n\t\t(d->current.accelerator)->next = accelerator;\n\t} else {\n\t\tif (!d->current.node) {\n\t\t\tparse_err_internal(d);\n\t\t\tfree(gpu);\n\t\t\tfree(accelerator);\n\t\t\treturn;\n\t\t}\n\t\t(d->current.node)->accelerators = accelerator;\n\t}\n\td->current.accelerator = accelerator;\n\t/*\n\t * Work through the attribute pairs updating the name pointer and\n\t 
 * value pointer with each loop. The somewhat complex loop control\n\t * syntax is just a fancy way of stepping through the pairs.\n\t */\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\txml_dbg(\"%s: %s = %s\", __func__, *np, *vp);\n\t\tint len = 0;\n\t\tif (strcmp(BASIL_ATR_ORDINAL, *np) == 0) {\n\t\t\t/*\n\t\t\t * Do nothing with the ordinal; there is no\n\t\t\t * place in the structure to put it.\n\t\t\t */\n\t\t} else if (strcmp(BASIL_ATR_TYPE, *np) == 0) {\n\t\t\tif (accelerator->type) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tif (strcmp(BASIL_VAL_GPU, *vp) == 0) {\n\t\t\t\taccelerator->type = basil_accel_gpu;\n\t\t\t\td->current.accel_type_gpu++;\n\t\t\t} else {\n\t\t\t\td->current.accel_type_unknown++;\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ATR_STATE, *np) == 0) {\n\t\t\tif (accelerator->state) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tif (strcmp(BASIL_VAL_UP, *vp) == 0) {\n\t\t\t\td->current.accel_state_up++;\n\t\t\t\taccelerator->state = basil_accel_state_up;\n\t\t\t} else if (strcmp(BASIL_VAL_DOWN, *vp) == 0) {\n\t\t\t\td->current.accel_state_down++;\n\t\t\t\taccelerator->state = basil_accel_state_down;\n\t\t\t} else {\n\t\t\t\td->current.accel_state_unknown++;\n\t\t\t\taccelerator->state = basil_accel_state_unknown;\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ATR_FAMILY, *np) == 0) {\n\t\t\tif (gpu->family) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tlen = strlen(*vp) + 1;\n\t\t\tfamily = malloc(len);\n\t\t\tif (!family) {\n\t\t\t\tparse_err_out_of_memory(d);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tsnprintf(family, len, \"%s\", *vp);\n\t\t\tgpu->family = family;\n\t\t} else if (strcmp(BASIL_ATR_MEMORY_MB, *np) == 0) {\n\t\t\tif (gpu->memory > 0) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tgpu->memory = atoi(*vp);\n\t\t\tif (gpu->memory < 
0) {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ATR_CLOCK_MHZ, *np) == 0) {\n\t\t\tif (gpu->clock_mhz > 0) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tgpu->clock_mhz = atoi(*vp);\n\t\t\tif (gpu->clock_mhz < 1) {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else {\n\t\t\tparse_err_unrecognized_attr(d, *np);\n\t\t\treturn;\n\t\t}\n\t}\n\tif (!accelerator->type) {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_TYPE);\n\t\treturn;\n\t}\n\tif (!accelerator->state) {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_STATE);\n\t\treturn;\n\t}\n\tif (!gpu->family) {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_FAMILY);\n\t\treturn;\n\t}\n\tif (gpu->clock_mhz < 1) {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_CLOCK_MHZ);\n\t\treturn;\n\t}\n\td->current.accelerator_allocation = NULL;\n\treturn;\n}\n\n/**\n * @brief\n * \tThis function is registered to handle the accelerator allocation element\n * \twithin an inventory response.\n *\n * The standard Expat start handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @return Void\n *\n */\nstatic void\naccelerator_allocation_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\tbasil_accelerator_allocation_t *accelalloc;\n\n\tif (stack_busted(d))\n\t\treturn;\n\taccelalloc = malloc(sizeof(basil_accelerator_allocation_t));\n\tif (!accelalloc) {\n\t\tparse_err_out_of_memory(d);\n\t\treturn;\n\t}\n\tmemset(accelalloc, 0, sizeof(basil_accelerator_allocation_t));\n\taccelalloc->rsvn_id = -1;\n\tif (d->current.accelerator_allocation) {\n\t\t(d->current.accelerator_allocation)->next = accelalloc;\n\t} else {\n\t\tif (!d->current.accelerator) 
{\n\t\t\tparse_err_internal(d);\n\t\t\tfree(accelalloc);\n\t\t\treturn;\n\t\t}\n\t\t(d->current.accelerator)->allocations = accelalloc;\n\t}\n\td->current.accelerator_allocation = accelalloc;\n\t/*\n\t * Work through the attribute pairs updating the name pointer and\n\t * value pointer with each loop. The somewhat complex loop control\n\t * syntax is just a fancy way of stepping through the pairs.\n\t */\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\txml_dbg(\"%s: %s = %s\", __func__, *np, *vp);\n\t\tif (strcmp(BASIL_ATR_RSVN_ID, *np) == 0) {\n\t\t\tif (accelalloc->rsvn_id >= 0) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\taccelalloc->rsvn_id = atol(*vp);\n\t\t\tif (accelalloc->rsvn_id < 0) {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else {\n\t\t\tparse_err_unrecognized_attr(d, *np);\n\t\t\treturn;\n\t\t}\n\t}\n\tif (accelalloc->rsvn_id < 0) {\n\t\tparse_err_unspecified_attr(d, BASIL_ATR_RSVN_ID);\n\t\treturn;\n\t}\n\treturn;\n}\n\n/**\n * @brief\n *\t This function is registered to handle the reservation array element\n * \twithin an inventory response.\n *\n * The standard Expat start handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @return Void\n *\n */\nstatic void\nreservation_array_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\n\tif (stack_busted(d))\n\t\treturn;\n\tif (++(d->count.reservation_array) > 1) {\n\t\tparse_err_multiple_elements(d);\n\t\treturn;\n\t}\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\tparse_err_unrecognized_attr(d, *np);\n\t\treturn;\n\t}\n\td->current.reservation = NULL;\n\treturn;\n}\n\n/**\n * This function is registered to handle the reservation element within a\n * query response.  
This is used for status, switch, and inventory responses.\n *\n * The standard Expat start handler function prototype is used.\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n */\nstatic void\nreservation_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\tbasil_rsvn_t *rsvn;\n\tbasil_response_t *brp;\n\tbasil_response_query_status_res_t *res_status = NULL;\n\tbasil_response_switch_res_t *switch_res = NULL;\n\n\tif (stack_busted(d))\n\t\treturn;\n\n\t/*\n\t * Which type of reservation line is this?  Is it for an INVENTORY\n\t * response, a STATUS query response, or a SWITCH response?\n\t */\n\tbrp = d->brp;\n\tif (!brp)\n\t\treturn;\n\n\tif ((brp->method == basil_method_query) && (brp->data.query.type == basil_query_status)) {\n\t\t/* This is for a SWITCH status response */\n\t\tres_status = malloc(sizeof(basil_response_query_status_res_t));\n\t\tif (!res_status) {\n\t\t\tparse_err_out_of_memory(d);\n\t\t\treturn;\n\t\t}\n\t\tmemset(res_status, 0, sizeof(basil_response_query_status_res_t));\n\t\tres_status->rsvn_id = -1;\n\t\tres_status->status = basil_reservation_status_none;\n\t\tif (d->brp->data.query.data.status.reservation) {\n\t\t\t(d->brp->data.query.data.status.reservation)->next = res_status;\n\t\t} else {\n\t\t\td->brp->data.query.data.status.reservation = res_status;\n\t\t}\n\n\t\t/*\n\t\t * work through the attribute pairs updating the name pointer\n\t\t * and value pointer with each loop.  
The somewhat complex loop\n\t\t * control syntax is just a fancy way of stepping through\n\t\t * the pairs.\n\t\t */\n\t\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\t\txml_dbg(\"%s: %s = %s\", (char *) __func__, *np, *vp);\n\t\t\tif (strcmp(BASIL_ATR_RSVN_ID, *np) == 0) {\n\t\t\t\tif (res_status->rsvn_id >= 0) {\n\t\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tres_status->rsvn_id = atol(*vp);\n\t\t\t\tif (res_status->rsvn_id < 0) {\n\t\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t} else if (strcmp(BASIL_ATR_STATUS, *np) == 0) {\n\t\t\t\tif (res_status->status > 0) {\n\t\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tif (strcmp(BASIL_VAL_EMPTY, *vp) == 0) {\n\t\t\t\t\tres_status->status = basil_reservation_status_empty;\n\t\t\t\t} else if (strcmp(BASIL_VAL_INVALID, *vp) == 0) {\n\t\t\t\t\tres_status->status = basil_reservation_status_invalid;\n\t\t\t\t} else if (strcmp(BASIL_VAL_MIX, *vp) == 0) {\n\t\t\t\t\tres_status->status = basil_reservation_status_mix;\n\t\t\t\t} else if (strcmp(BASIL_VAL_RUN, *vp) == 0) {\n\t\t\t\t\tres_status->status = basil_reservation_status_run;\n\t\t\t\t} else if (strcmp(BASIL_VAL_SUSPEND, *vp) == 0) {\n\t\t\t\t\tres_status->status = basil_reservation_status_suspend;\n\t\t\t\t} else if (strcmp(BASIL_VAL_SWITCH, *vp) == 0) {\n\t\t\t\t\tres_status->status = basil_reservation_status_switch;\n\t\t\t\t} else if (strcmp(BASIL_VAL_UNKNOWN, *vp) == 0) {\n\t\t\t\t\tres_status->status = basil_reservation_status_unknown;\n\t\t\t\t} else {\n\t\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tparse_err_unrecognized_attr(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t\tif (res_status->rsvn_id < 0) {\n\t\t\tparse_err_unspecified_attr(d, BASIL_ATR_RSVN_ID);\n\t\t\treturn;\n\t\t}\n\t\tif (res_status->status == basil_reservation_status_none) 
{\n\t\t\tparse_err_unspecified_attr(d, BASIL_ATR_STATUS);\n\t\t\treturn;\n\t\t}\n\t} else if (brp->method == basil_method_switch) {\n\t\t/* This is for a response to a SWITCH request */\n\t\tswitch_res = malloc(sizeof(basil_response_switch_res_t));\n\t\tif (!switch_res) {\n\t\t\tparse_err_out_of_memory(d);\n\t\t\treturn;\n\t\t}\n\t\tmemset(switch_res, 0, sizeof(basil_response_switch_res_t));\n\n\t\tswitch_res->rsvn_id = -1;\n\t\tswitch_res->status = basil_reservation_status_none;\n\t\tif (brp->data.swtch.reservation) {\n\t\t\t(brp->data.swtch.reservation)->next = switch_res;\n\t\t} else {\n\t\t\tbrp->data.swtch.reservation = switch_res;\n\t\t}\n\t\t/*\n\t\t * work through the attribute pairs updating the name pointer\n\t\t * and value pointer with each loop.  The somewhat complex loop\n\t\t * control syntax is just a fancy way of stepping through\n\t\t * the pairs.\n\t\t */\n\t\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\t\txml_dbg(\"%s: %s = %s\", (char *) __func__, *np, *vp);\n\t\t\tif (strcmp(BASIL_ATR_RSVN_ID, *np) == 0) {\n\t\t\t\tif (switch_res->rsvn_id >= 0) {\n\t\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tswitch_res->rsvn_id = atol(*vp);\n\t\t\t\tif (switch_res->rsvn_id < 0) {\n\t\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t} else if (strcmp(BASIL_ATR_STATUS, *np) == 0) {\n\t\t\t\tif (switch_res->status > 0) {\n\t\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tif (strcmp(BASIL_VAL_SUCCESS, *vp) == 0) {\n\t\t\t\t\tswitch_res->status = basil_switch_status_success;\n\t\t\t\t} else if (strcmp(BASIL_VAL_FAILURE, *vp) == 0) {\n\t\t\t\t\t/* do nothing here, brp->error was set \t*/\n\t\t\t\t\t/* in alps_request_parent \t\t*/\n\t\t\t\t} else {\n\t\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tparse_err_unrecognized_attr(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t} else 
{\n\t\t/* This is for an inventory response */\n\t\trsvn = malloc(sizeof(basil_rsvn_t));\n\t\tif (!rsvn) {\n\t\t\tparse_err_out_of_memory(d);\n\t\t\treturn;\n\t\t}\n\t\tmemset(rsvn, 0, sizeof(basil_rsvn_t));\n\t\trsvn->rsvn_id = -1;\n\t\tif (d->current.reservation) {\n\t\t\t(d->current.reservation)->next = rsvn;\n\t\t} else {\n\t\t\tbrp->data.query.data.inventory.rsvns = rsvn;\n\t\t}\n\t\td->current.reservation = rsvn;\n\t\t/*\n\t\t * Work through the attribute pairs updating the name pointer and\n\t\t * value pointer with each loop. The somewhat complex loop control\n\t\t * syntax is just a fancy way of stepping through the pairs.\n\t\t */\n\t\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\t\txml_dbg(\"%s: %s = %s\", __func__, *np, *vp);\n\t\t\tif (strcmp(BASIL_ATR_RSVN_ID, *np) == 0) {\n\t\t\t\tif (rsvn->rsvn_id >= 0) {\n\t\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\trsvn->rsvn_id = atol(*vp);\n\t\t\t\tif (rsvn->rsvn_id < 0) {\n\t\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t} else if (strcmp(BASIL_ATR_USER_NAME, *np) == 0) {\n\t\t\t\tif (*rsvn->user_name != '\\0') {\n\t\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tsnprintf(rsvn->user_name, BASIL_STRING_MEDIUM,\n\t\t\t\t\t \"%s\", *vp);\n\t\t\t} else if (strcmp(BASIL_ATR_ACCOUNT_NAME, *np) == 0) {\n\t\t\t\tif (*rsvn->account_name != '\\0') {\n\t\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tsnprintf(rsvn->account_name, BASIL_STRING_MEDIUM,\n\t\t\t\t\t \"%s\", *vp);\n\t\t\t} else if (strcmp(BASIL_ATR_TIME_STAMP, *np) == 0) {\n\t\t\t\tif (*rsvn->time_stamp != '\\0') {\n\t\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tsnprintf(rsvn->time_stamp, BASIL_STRING_MEDIUM,\n\t\t\t\t\t \"%s\", *vp);\n\t\t\t} else if (strcmp(BASIL_ATR_BATCH_ID, *np) == 0) {\n\t\t\t\tif (*rsvn->batch_id != '\\0') 
{\n\t\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tsnprintf(rsvn->batch_id, BASIL_STRING_LONG,\n\t\t\t\t\t \"%s\", *vp);\n\t\t\t} else if (strcmp(BASIL_ATR_RSVN_MODE, *np) == 0) {\n\t\t\t\tif (*rsvn->rsvn_mode != '\\0') {\n\t\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tsnprintf(rsvn->rsvn_mode, BASIL_STRING_MEDIUM,\n\t\t\t\t\t \"%s\", *vp);\n\t\t\t} else if (strcmp(BASIL_ATR_GPC_MODE, *np) == 0) {\n\t\t\t\tif (*rsvn->gpc_mode != '\\0') {\n\t\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tsnprintf(rsvn->gpc_mode, BASIL_STRING_MEDIUM,\n\t\t\t\t\t \"%s\", *vp);\n\t\t\t} else {\n\t\t\t\tparse_err_unrecognized_attr(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t\tif (rsvn->rsvn_id < 0) {\n\t\t\tparse_err_unspecified_attr(d, BASIL_ATR_RSVN_ID);\n\t\t\treturn;\n\t\t}\n\t\tif (*rsvn->user_name == '\\0') {\n\t\t\tparse_err_unspecified_attr(d, BASIL_ATR_USER_NAME);\n\t\t\treturn;\n\t\t}\n\t\tif (*rsvn->account_name == '\\0') {\n\t\t\tparse_err_unspecified_attr(d, BASIL_ATR_ACCOUNT_NAME);\n\t\t\treturn;\n\t\t}\n\t\td->count.application_array = 0;\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \tThis function is registered to handle the application array element\n * \twithin an inventory response.\n *\n * @note\n * \tPBS accepts this element but ignores it.\n *\n * The standard Expat start handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @return Void\n *\n */\nstatic void\napplication_array_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\n\tif (stack_busted(d))\n\t\treturn;\n\tif (++(d->count.application_array) > 1) {\n\t\tparse_err_multiple_elements(d);\n\t\treturn;\n\t}\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\tparse_err_unrecognized_attr(d, 
*np);\n\t\treturn;\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \tThis function is registered to handle the application element\n * \twithin an inventory response.\n *\n * @note\n * \tPBS accepts this element but ignores it.\n *\n * The standard Expat start handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @return Void\n *\n */\nstatic void\napplication_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tif (stack_busted(d))\n\t\treturn;\n\td->count.command_array = 0;\n\treturn;\n}\n\n/**\n * @brief\n * \tThis function is registered to handle the command array element\n * \twithin an inventory response.\n *\n * @note\n * \tPBS accepts this element but ignores it.\n *\n * The standard Expat start handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @return Void\n *\n */\nstatic void\ncommand_array_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\n\tif (stack_busted(d))\n\t\treturn;\n\tif (++(d->count.command_array) > 1) {\n\t\tparse_err_multiple_elements(d);\n\t\treturn;\n\t}\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\tparse_err_unrecognized_attr(d, *np);\n\t\treturn;\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \tThis function is registered to handle the XML elements that are\n * \tto be ignored.\n *\n * The standard Expat start handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @return Void\n *\n */\nstatic void\nignore_element(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\n\tif (stack_busted(d))\n\t\treturn;\n\n\tfor (np = vp = 
atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\txml_dbg(\"%s: %s = %s\", __func__, *np, *vp);\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \tGeneric method registered to handle character data for elements\n * \tthat do not utilize it. Make sure we skip whitespace characters\n * \tsince they may be there for formatting.\n *\n * The standard Expat character handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] s string\n * @param[in] len length of string\n *\n * @return Void\n *\n */\nstatic void\ndisallow_char_data(ud_t *d, const XML_Char *s, int len)\n{\n\tint i;\n\n\tfor (i = 0; i < len; i++) {\n\t\tif (!isspace(*(s + i)))\n\t\t\tbreak;\n\t}\n\tif (i == len)\n\t\treturn;\n\tparse_err_illegal_char_data(d, s);\n\treturn;\n}\n\n/**\n * @brief\n * Helper function for allow_char_data() that is registered to handle\n * character data for elements.\n * The user data structure (ud_t) is populated with the node rangelist associated\n * with the 'Nodes' element (in the System BASIL 1.7 XML response), that is\n * currently being processed.\n *\n * @param d pointer to user data structure.\n * @param[in] s string.\n * @param[in] len length of string.\n *\n * @return void\n */\nstatic void\nparse_nidlist_char_data(ud_t *d, const char *s, int len)\n{\n\tchar *tmp_ptr = NULL;\n\n\t/*\n\t * Point 'nidlist' to a copy of s (which now contains the parsed char data\n\t * i.e. the node rangelist).\n\t */\n\tif ((d->current_sys.node_group->nidlist = strndup(s, len)) == NULL) {\n\t\tparse_err_out_of_memory(d);\n\t\td->current_sys.node_group->nidlist = NULL;\n\t\treturn;\n\t}\n\n\t/*\n\t * Check if the current rangelist of Nodes is of type KNL and that such Nodes\n\t * are in \"batch\" mode.  
Checking the node state while parsing the\n\t * system query for the node list is not required; doing so would keep\n\t * down KNL nodes out of the KNL node list we populate (which\n\t * inventory_to_vnodes uses to skip these nodes while creating non-KNL\n\t * nodes), and if such a node later came up in the inventory query it\n\t * would then be created as a non-KNL node.\n\t */\n\tif (!exclude_from_KNL_processing(d->current_sys.node_group, 0)) {\n\t\t/*\n\t\t * Accumulate KNL Nodes, extracted from each Node group, in a buffer\n\t\t * for later use. The KNL Nodes in this buffer will be excluded from vnode\n\t\t * creation in inventory_to_vnodes() (which creates non-KNL vnodes only).\n\t\t */\n\t\tif (!knl_node_list) {\n\t\t\tknl_node_list = malloc(sizeof(char) * (len + 1));\n\t\t\tif (knl_node_list == NULL) {\n\t\t\t\tlog_err(errno, __func__, \"malloc failure\");\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tpbs_strncpy(knl_node_list, d->current_sys.node_group->nidlist, (len + 1));\n\t\t} else {\n\t\t\t/* Allocate an extra byte for the \",\" separation between rangelists. */\n\t\t\ttmp_ptr = realloc(knl_node_list, sizeof(char) * (strlen(knl_node_list) + len + 2));\n\t\t\tif (!tmp_ptr) {\n\t\t\t\tlog_err(errno, __func__, \"realloc failure\");\n\t\t\t\tfree(knl_node_list);\n\t\t\t\tknl_node_list = NULL;\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tknl_node_list = tmp_ptr;\n\n\t\t\t/*\n\t\t\t * To maintain comma separation between Node rangelists belonging\n\t\t\t * to each Node Group (in the System XML Response), we append a\n\t\t\t * \",\" at the end of the current array e.g. 12,13-15,16,17 is what\n\t\t\t * we want, not 12,13-1516,17.\n\t\t\t */\n\t\t\tstrcat(knl_node_list, \",\");\n\t\t\tstrcat(knl_node_list, d->current_sys.node_group->nidlist);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * Function registered to handle character data for XML Elements\n * that utilize it. 
Skip leading whitespace characters since they\n * may be there for formatting.\n *\n * @param d pointer to user data structure.\n * @param[in] s string.\n * @param[in] len length of string.\n *\n * @return void\n */\nstatic void\nallow_char_data(ud_t *d, const XML_Char *s, int len)\n{\n\tint i = 0;\n\tint j = 0;\n\n\t/*\n\t * As an example, a string 's' could initially point to a rangelist\n\t * \"  12-15,18,19,20\".  'j' accumulates the leading whitespace count.\n\t */\n\tfor (i = 0; i < len; i++) {\n\t\tif (!isspace(*(s + i)))\n\t\t\tbreak;\n\t\tj++;\n\t}\n\tif (i == len)\n\t\treturn;\n\n\t/*\n\t * 's+j' is the location where the 'useful' data starts.\n\t * Subtracting the whitespace count from 'len' gives the true length of\n\t * the node rangelist string e.g. \"12-15,18,19,20\".\n\t */\n\tparse_nidlist_char_data(d, s + j, len - j);\n}\n\n/**\n * @brief\n * \tGeneric method registered to handle the end of an element where\n * \tno post processing needs to take place. Make sure the element end\n * \tis balanced with the element start.\n *\n * The standard Expat end handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] el name of end element\n *\n * @return Void\n *\n */\nstatic void\ndefault_element_end(ud_t *d, const XML_Char *el)\n{\n\tif (strcmp(el, handler[d->stack[d->depth]].element) != 0)\n\t\tparse_err_illegal_end(d, el);\n\treturn;\n}\n\n/**\n * @brief\n * \tSpecial method registered to handle the end of the inventory element.\n * \tThe counts for the roles and states of the nodes are logged here.\n *\n * The standard Expat end handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] el name of end element\n *\n * @return Void\n *\n */\nstatic void\ninventory_end(ud_t *d, const XML_Char *el)\n{\n\tif (strcmp(el, handler[d->stack[d->depth]].element) != 0)\n\t\tparse_err_illegal_end(d, el);\n\n\tsprintf(log_buffer, \"%d interactive, %d batch, %d 
unknown\",\n\t\td->current.role_int,\n\t\td->current.role_batch,\n\t\td->current.role_unknown);\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_DEBUG,\n\t\t  \"roles\", log_buffer);\n\tsprintf(log_buffer, \"%d up, %d down, %d unavailable, %d routing, \"\n\t\t\t    \"%d suspect, %d admin, %d unknown\",\n\t\td->current.state_up,\n\t\td->current.state_down,\n\t\td->current.state_unavail,\n\t\td->current.state_routing,\n\t\td->current.state_suspect,\n\t\td->current.state_admin,\n\t\td->current.state_unknown);\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_DEBUG,\n\t\t  \"state\", log_buffer);\n\tsprintf(log_buffer, \"%d gpu, %d unknown\",\n\t\td->current.accel_type_gpu,\n\t\td->current.accel_type_unknown);\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_DEBUG,\n\t\t  \"accelerator types\", log_buffer);\n\tsprintf(log_buffer, \"%d up, %d down, %d unknown\",\n\t\td->current.accel_state_up,\n\t\td->current.accel_state_down,\n\t\td->current.accel_state_unknown);\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_DEBUG,\n\t\t  \"accelerator state\", log_buffer);\n\tsprintf(log_buffer, \"%d sockets\", d->current.socket_count);\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_DEBUG,\n\t\t  \"inventory\", log_buffer);\n\treturn;\n}\n\n/**\n * @brief\n * \tFind the element handler function registered for a particular element.\n *\n * @param[in] el name of element to search for\n *\n * @return int index of the matching handler array entry\n * @retval -1\tno match\n * @retval !(-1)\tmatched index\n *\n */\nint\nhandler_find_index(const XML_Char *el)\n{\n\tint i = 0;\n\n\tfor (i = 1; handler[i].element; i++) {\n\t\tif (strcmp(handler[i].element, el) == 0)\n\t\t\treturn (i);\n\t}\n\treturn (-1);\n}\n\n/**\n * @brief\n * \tParse the start of any element by looking up its handler and\n * \tcalling it.\n *\n * The standard Expat start handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] el 
name of start element\n * @param[in] atts array of name/value pairs\n *\n * @return Void\n *\n */\nstatic void\nparse_element_start(void *ud, const XML_Char *el, const XML_Char **atts)\n{\n\tint i = 0;\n\tud_t *d;\n\n\tif (!ud)\n\t\treturn;\n\td = (ud_t *) ud;\n\txml_dbg(\"parse_element_start: ELEMENT = %s\", el);\n\ti = handler_find_index(el);\n\tif (i < 0) {\n\t\tsprintf(d->error_class, \"%s\", BASIL_VAL_PERMANENT);\n\t\tsprintf(d->error_source, \"%s\", BASIL_VAL_SYNTAX);\n\t\tsnprintf(d->message, sizeof(d->message),\n\t\t\t \"Unrecognized element start at line %d: %s\",\n\t\t\t (int) XML_GetCurrentLineNumber(parser), el);\n\t\treturn;\n\t}\n\td->depth++;\n\td->stack[d->depth] = i;\n\thandler[i].start(d, el, atts);\n\treturn;\n}\n\n/**\n * @brief\n * \tParse the end of any element by looking up its handler and\n * \tcalling it.\n *\n * The standard Expat end handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] el name of end element\n *\n * @return Void\n *\n */\nstatic void\nparse_element_end(void *ud, const XML_Char *el)\n{\n\tint i = 0;\n\tud_t *d;\n\n\tif (!ud)\n\t\treturn;\n\td = (ud_t *) ud;\n\txml_dbg(\"parse_element_end: ELEMENT = %s\", el);\n\ti = handler_find_index(el);\n\tif (i < 0) {\n\t\tsprintf(d->error_class, \"%s\", BASIL_VAL_PERMANENT);\n\t\tsprintf(d->error_source, \"%s\", BASIL_VAL_SYNTAX);\n\t\tsnprintf(d->message, sizeof(d->message),\n\t\t\t \"Unrecognized element end at line %d: %s\",\n\t\t\t (int) XML_GetCurrentLineNumber(parser), el);\n\t\treturn;\n\t}\n\thandler[i].end(d, el);\n\td->stack[d->depth] = 0;\n\td->depth--;\n\treturn;\n}\n\n/**\n * @brief\n * \tParse the character data for any element by invoking the registered\n *\thandler.\n *\n * The standard Expat character handler function prototype is used.\n *\n * @param d pointer to user data structure\n * @param[in] s string\n * @param[in] len length of string\n *\n * @return Void\n *\n */\nstatic void\nparse_char_data(void *ud, const 
XML_Char *s, int len)\n{\n\tud_t *d;\n\n\tif (!ud)\n\t\treturn;\n\td = (ud_t *) ud;\n\thandler[d->stack[d->depth]].char_data(d, s, len);\n\treturn;\n}\n\n/**\n * @brief\n *\tThis function walks all the segments and fills in the information needed to\n * generate the vnodes. It is called directly when PBS is using BASIL 1.2 or\n * prior XML inventories. It is called from within the socket loop when PBS\n * gets a BASIL 1.3 or BASIL 1.4 XML inventory.\n * @param[in] node\tinformation about the node\n * @param[in] nv\tvnode list information\n * @param[in] arch\tthe architecture of the vnode\n * @param[in/out] total_seg\tlatest count of the segments, to increase\n *\t\t\t\tacross sockets\n * @param[in] order\torder number of the vnode\n * @param[in/out] name_buf\tupon exiting it contains the name of the vnode\n * @param[in/out] total_cpu\tkeeps a running count of cpus for the whole node\n * @param[in/out] total_mem\tkeeps a running count of mem for the whole node\n *\n * These last three parameters are used for the vnode_per_numa_node = 0 case,\n * where we only create one PBS vnode per Cray compute node.\n *\n * @return Void\n */\nvoid\ninventory_loop_on_segments(basil_node_t *node, vnl_t *nv, char *arch,\n\t\t\t   int *total_seg, long order, char *name_buf, int *total_cpu, long *total_mem)\n{\n\tbasil_node_socket_t *socket = NULL;\n\tbasil_node_segment_t *seg = NULL;\n\tbasil_node_processor_t *proc = NULL;\n\tbasil_node_memory_t *mem = NULL;\n\tbasil_label_t *label = NULL;\n\tbasil_node_accelerator_t *accel = NULL;\n\tbasil_node_computeunit_t *cu = NULL;\n\tint aflag = READ_WRITE | ATR_DFLAG_CVTSLT;\n\tlong totmem = 0;\n\tint totcpus = 0;\n\tint totaccel = 0;\n\tint first_seg = 0;\n\tchar vname[VNODE_NAME_LEN];\n\tchar *attr;\n\tint totseg = 0;\n\n\t/* Proceed only if we have valid pointers */\n\tif (node == NULL) {\n\t\tsprintf(log_buffer, \"Bad pointer to node info\");\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_NODE,\n\t\t\t  LOG_ERR, __func__, 
log_buffer);\n\t\treturn;\n\t}\n\tif (nv == NULL) {\n\t\tsprintf(log_buffer, \"Bad pointer to node list info\");\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_NODE,\n\t\t\t  LOG_ERR, __func__, log_buffer);\n\t\treturn;\n\t}\n\n\ttotseg = *total_seg;\n\ttotcpus = *total_cpu;\n\ttotmem = *total_mem;\n\n\tsocket = node->sockets;\n\tdo {\n\t\tif (socket) {\n\t\t\tseg = socket->segments;\n\t\t\tsocket = socket->next;\n\t\t} else {\n\t\t\tseg = node->segments;\n\t\t}\n\n\t\tfor (; seg; seg = seg->next, totseg++) {\n\t\t\tif (totseg == 0) {\n\t\t\t\t/* The first segment is different and important\n\t\t\t\t * because some information is only attached to the\n\t\t\t\t * very first segment of a vnode.\n\t\t\t\t */\n\t\t\t\tfirst_seg = 1;\n\t\t\t}\n\t\t\tif (vnode_per_numa_node) {\n\t\t\t\tsnprintf(vname, sizeof(vname), \"%s_%ld_%d\",\n\t\t\t\t\t mpphost, node->node_id, totseg);\n\t\t\t\tvname[sizeof(vname) - 1] = '\\0';\n\t\t\t} else if (first_seg) {\n\t\t\t\t/* When concatenating the segments into\n\t\t\t\t * one vnode, we don't put any segment info\n\t\t\t\t * in the name.\n\t\t\t\t */\n\t\t\t\tsnprintf(vname, sizeof(vname), \"%s_%ld\",\n\t\t\t\t\t mpphost, node->node_id);\n\t\t\t\tvname[sizeof(vname) - 1] = '\\0';\n\t\t\t}\n\n\t\t\tattr = \"sharing\";\n\t\t\t/* already exists so don't define type */\n\t\t\tif (vn_addvnr(nv, vname, attr,\n\t\t\t\t      ND_Force_Exclhost,\n\t\t\t\t      0, 0, NULL) == -1)\n\t\t\t\tgoto bad_vnl;\n\n\t\t\tattr = \"resources_available.PBScrayorder\";\n\t\t\tsprintf(utilBuffer, \"%ld\", order);\n\t\t\tif (vn_addvnr(nv, vname, attr, utilBuffer,\n\t\t\t\t      ATR_TYPE_LONG, aflag,\n\t\t\t\t      NULL) == -1)\n\t\t\t\tgoto bad_vnl;\n\n\t\t\tattr = \"resources_available.arch\";\n\t\t\tif (vn_addvnr(nv, vname, attr, arch,\n\t\t\t\t      0, 0, NULL) == -1)\n\t\t\t\tgoto bad_vnl;\n\n\t\t\tattr = \"resources_available.host\";\n\t\t\tsprintf(utilBuffer, \"%s_%ld\", mpphost, node->node_id);\n\t\t\tif (vn_addvnr(nv, vname, attr, utilBuffer,\n\t\t\t\t     
 0, 0, NULL) == -1)\n\t\t\t\tgoto bad_vnl;\n\n\t\t\tattr = \"resources_available.PBScraynid\";\n\t\t\tsprintf(utilBuffer, \"%ld\", node->node_id);\n\t\t\tif (vn_addvnr(nv, vname, attr, utilBuffer,\n\t\t\t\t      ATR_TYPE_STR, aflag,\n\t\t\t\t      NULL) == -1)\n\t\t\t\tgoto bad_vnl;\n\n\t\t\tif (vnode_per_numa_node) {\n\t\t\t\tattr = \"resources_available.PBScrayseg\";\n\t\t\t\tsprintf(utilBuffer, \"%d\", totseg);\n\t\t\t\tif (vn_addvnr(nv, vname, attr, utilBuffer,\n\t\t\t\t\t      ATR_TYPE_STR, aflag,\n\t\t\t\t\t      NULL) == -1)\n\t\t\t\t\tgoto bad_vnl;\n\t\t\t}\n\n\t\t\tattr = \"resources_available.vntype\";\n\t\t\tif (vn_addvnr(nv, vname, attr, CRAY_COMPUTE,\n\t\t\t\t      0, 0, NULL) == -1)\n\t\t\t\tgoto bad_vnl;\n\n\t\t\tattr = \"resources_available.PBScrayhost\";\n\t\t\tif (vn_addvnr(nv, vname, attr, mpphost,\n\t\t\t\t      ATR_TYPE_STR, aflag,\n\t\t\t\t      NULL) == -1)\n\t\t\t\tgoto bad_vnl;\n\n\t\t\tif (vnode_per_numa_node) {\n\t\t\t\ttotcpus = 0;\n\t\t\t\tcu = seg->computeunits;\n\t\t\t\tif (cu) {\n\t\t\t\t\tfor (cu = seg->computeunits; cu; cu = cu->next) {\n\t\t\t\t\t\ttotcpus++;\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tfor (proc = seg->processors; proc; proc = proc->next)\n\t\t\t\t\t\ttotcpus++;\n\t\t\t\t}\n\n\t\t\t\tattr = \"resources_available.ncpus\";\n\t\t\t\tsprintf(utilBuffer, \"%d\", totcpus);\n\t\t\t\tif (vn_addvnr(nv, vname, attr, utilBuffer,\n\t\t\t\t\t      0, 0, NULL) == -1)\n\t\t\t\t\tgoto bad_vnl;\n\n\t\t\t\tattr = \"resources_available.mem\";\n\t\t\t\ttotmem = 0;\n\t\t\t\tfor (mem = seg->memory; mem; mem = mem->next)\n\t\t\t\t\ttotmem += (mem->page_size_kb * mem->page_count);\n\t\t\t\tsprintf(utilBuffer, \"%ldkb\", totmem);\n\t\t\t\tif (vn_addvnr(nv, vname, attr, utilBuffer,\n\t\t\t\t\t      0, 0, NULL) == -1)\n\t\t\t\t\tgoto bad_vnl;\n\n\t\t\t\tfor (label = seg->labels; label; label = label->next) {\n\t\t\t\t\tsprintf(utilBuffer,\n\t\t\t\t\t\t\"resources_available.PBScraylabel_%s\",\n\t\t\t\t\t\tlabel->name);\n\t\t\t\t\tif 
(vn_addvnr(nv, vname, utilBuffer, \"true\",\n\t\t\t\t\t\t      ATR_TYPE_BOOL, aflag,\n\t\t\t\t\t\t      NULL) == -1)\n\t\t\t\t\t\tgoto bad_vnl;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\t/*\n\t\t\t\t * vnode_per_numa_node is false, which\n\t\t\t\t * means we need to compress all the segment info (and\n\t\t\t\t * in the case of BASIL 1.3 and higher the socket\n\t\t\t\t * info too) into only one vnode. We need to\n\t\t\t\t * total up the cpus and memory for each of the\n\t\t\t\t * segments and report it as part of the\n\t\t\t\t * whole vnode. Add/set labels only once.\n\t\t\t\t * All labels are assumed to be the same on\n\t\t\t\t * all segments.\n\t\t\t\t */\n\t\t\t\tfor (mem = seg->memory; mem; mem = mem->next)\n\t\t\t\t\ttotmem += mem->page_size_kb * mem->page_count;\n\n\t\t\t\tcu = seg->computeunits;\n\t\t\t\tif (cu) {\n\t\t\t\t\tfor (cu = seg->computeunits; cu; cu = cu->next) {\n\t\t\t\t\t\ttotcpus++;\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tfor (proc = seg->processors; proc; proc = proc->next)\n\t\t\t\t\t\ttotcpus++;\n\t\t\t\t}\n\n\t\t\t\tif (totseg == 0) {\n\t\t\t\t\tfor (label = seg->labels; label; label = label->next) {\n\t\t\t\t\t\tsprintf(utilBuffer,\n\t\t\t\t\t\t\t\"resources_available.PBScraylabel_%s\",\n\t\t\t\t\t\t\tlabel->name);\n\t\t\t\t\t\tif (vn_addvnr(nv, vname, utilBuffer, \"true\",\n\t\t\t\t\t\t\t      ATR_TYPE_BOOL, aflag,\n\t\t\t\t\t\t\t      NULL) == -1)\n\t\t\t\t\t\t\tgoto bad_vnl;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\t/* Only do this for nodes that have accelerators */\n\t\t\tif (node->accelerators) {\n\t\t\t\tfor (accel = node->accelerators, totaccel = 0;\n\t\t\t\t     accel; accel = accel->next) {\n\t\t\t\t\tif (accel->state == basil_accel_state_up)\n\t\t\t\t\t\t/* Only count them if the state is UP */\n\t\t\t\t\t\ttotaccel++;\n\t\t\t\t}\n\t\t\t\tattr = \"resources_available.naccelerators\";\n\t\t\t\tif (totseg == 0) {\n\t\t\t\t\t/*\n\t\t\t\t\t * add the naccelerators count only to\n\t\t\t\t\t * the first vnode of a compute node\n\t\t\t\t\t * 
all other vnodes will share the count\n\t\t\t\t\t */\n\t\t\t\t\tsnprintf(utilBuffer, sizeof(utilBuffer),\n\t\t\t\t\t\t \"%d\", totaccel);\n\t\t\t\t} else if (vnode_per_numa_node) {\n\t\t\t\t\t/*\n\t\t\t\t\t * When there is a vnode being created\n\t\t\t\t\t * per numa node, only the first\n\t\t\t\t\t * (segment 0) vnode gets the accelerator.\n\t\t\t\t\t * The other vnodes must\n\t\t\t\t\t * share the accelerator count with\n\t\t\t\t\t * segment 0 vnodes\n\t\t\t\t\t */\n\t\t\t\t\tsnprintf(utilBuffer, sizeof(utilBuffer),\n\t\t\t\t\t\t \"@%s_%ld_0\",\n\t\t\t\t\t\t mpphost, node->node_id);\n\t\t\t\t}\n\n\t\t\t\tif (vnode_per_numa_node || totseg == 0) {\n\t\t\t\t\tif (vn_addvnr(nv, vname, attr, utilBuffer,\n\t\t\t\t\t\t      0, 0, NULL) == -1)\n\t\t\t\t\t\tgoto bad_vnl;\n\t\t\t\t}\n\n\t\t\t\tattr = \"resources_available.accelerator\";\n\t\t\t\tif (totaccel > 0) {\n\t\t\t\t\t/* set to 'true' if the accelerator\n\t\t\t\t\t * is in state=up, totaccel is only\n\t\t\t\t\t * incremented if state=up\n\t\t\t\t\t */\n\t\t\t\t\tsnprintf(utilBuffer, sizeof(utilBuffer), \"true\");\n\t\t\t\t} else {\n\t\t\t\t\t/* set to 'false' to show that the\n\t\t\t\t\t * vnode has accelerator(s) but they\n\t\t\t\t\t * are not currently state=up\n\t\t\t\t\t */\n\t\t\t\t\tsnprintf(utilBuffer, sizeof(utilBuffer), \"false\");\n\t\t\t\t}\n\n\t\t\t\tif (vn_addvnr(nv, vname, attr, utilBuffer,\n\t\t\t\t\t      0, 0, NULL) == -1)\n\t\t\t\t\tgoto bad_vnl;\n\n\t\t\t\t/*\n\t\t\t\t * Only set accelerator_model and\n\t\t\t\t * accelerator_memory if the accelerator is UP\n\t\t\t\t */\n\t\t\t\tif (totaccel > 0) {\n\t\t\t\t\taccel = node->accelerators;\n\t\t\t\t\tif (accel->data.gpu) {\n\t\t\t\t\t\tif (strcmp(accel->data.gpu->family, BASIL_VAL_UNKNOWN) == 0) {\n\t\t\t\t\t\t\tsprintf(log_buffer, \"The GPU family \"\n\t\t\t\t\t\t\t\t\t    \"value is 'UNKNOWN'. 
Check \"\n\t\t\t\t\t\t\t\t\t    \"your Cray GPU inventory.\");\n\t\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_DEBUG, __func__, log_buffer);\n\t\t\t\t\t\t}\n\t\t\t\t\t\tattr = \"resources_available.accelerator_model\";\n\t\t\t\t\t\tsnprintf(utilBuffer, sizeof(utilBuffer),\n\t\t\t\t\t\t\t \"%s\",\n\t\t\t\t\t\t\t accel->data.gpu->family);\n\t\t\t\t\t\tif (vn_addvnr(nv, vname, attr,\n\t\t\t\t\t\t\t      utilBuffer, 0,\n\t\t\t\t\t\t\t      0, NULL) == -1)\n\t\t\t\t\t\t\tgoto bad_vnl;\n\t\t\t\t\t\tif (accel->data.gpu->memory) {\n\t\t\t\t\t\t\tattr = \"resources_available.accelerator_memory\";\n\t\t\t\t\t\t\tif (totseg == 0) {\n\t\t\t\t\t\t\t\tsnprintf(utilBuffer,\n\t\t\t\t\t\t\t\t\t sizeof(utilBuffer),\n\t\t\t\t\t\t\t\t\t \"%umb\",\n\t\t\t\t\t\t\t\t\t accel->data.gpu->memory);\n\t\t\t\t\t\t\t} else if (vnode_per_numa_node) {\n\t\t\t\t\t\t\t\tsnprintf(utilBuffer,\n\t\t\t\t\t\t\t\t\t sizeof(utilBuffer),\n\t\t\t\t\t\t\t\t\t \"@%s_%ld_0\",\n\t\t\t\t\t\t\t\t\t mpphost, node->node_id);\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\tif (vnode_per_numa_node || totseg == 0) {\n\t\t\t\t\t\t\t\tif (vn_addvnr(nv, vname, attr,\n\t\t\t\t\t\t\t\t\t      utilBuffer, 0,\n\t\t\t\t\t\t\t\t\t      0, NULL) == -1)\n\t\t\t\t\t\t\t\t\tgoto bad_vnl;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t} while (socket);\n\n\tpbs_strncpy(name_buf, vname, VNODE_NAME_LEN);\n\t*total_cpu = totcpus;\n\t*total_mem = totmem;\n\t*total_seg = totseg;\n\n\treturn;\n\nbad_vnl:\n\tsprintf(log_buffer, \"creation of Cray vnodes failed at %ld, with vname %s\", order, vname);\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_DEBUG,\n\t\t  __func__, log_buffer);\n\t/*\n\t * don't free nv since it might be important in the dump\n\t */\n\tabort();\n}\n\n/**\n * @brief\n * \tAfter the Cray inventory XML response is parsed, use the resulting structures\n * to generate vnodes for the compute nodes and send them to the server.\n *\n * @param brp ALPS inventory response\n *\n * 
@return\tint\n * @retval\t0\t: success\n * @retval\t-1\t: failure\n */\nstatic int\ninventory_to_vnodes(basil_response_t *brp)\n{\n\textern int internal_state_update;\n\textern int num_acpus;\n\textern unsigned long totalmem;\n\tint aflag = READ_WRITE | ATR_DFLAG_CVTSLT;\n\tlong order = 0;\n\tchar *attr;\n\tvnl_t *nv = NULL;\n\tint ret = 0;\n\tchar *xmlbuf;\n\tint xmllen = 0;\n\tint seg_num = 0;\n\tint cpu_ct = 0;\n\tlong *arr_nodes = NULL;\n\tint node_count = 0;\n\tint idx = 0;\n\tint skip_node = 0;\n\tlong mem_ct = 0;\n\tchar name[VNODE_NAME_LEN];\n\tbasil_node_t *node = NULL;\n\tbasil_response_query_inventory_t *inv = NULL;\n\thwloc_topology_t topology;\n\n\tif (!brp)\n\t\treturn -1;\n\tif (brp->method != basil_method_query) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"Wrong method: %d\", brp->method);\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_DEBUG,\n\t\t\t  __func__, log_buffer);\n\t\treturn -1;\n\t}\n\tif (brp->data.query.type != basil_query_inventory) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"Wrong query type: %d\",\n\t\t\t brp->data.query.type);\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_DEBUG,\n\t\t\t  __func__, log_buffer);\n\t\treturn -1;\n\t}\n\tif (*brp->error != '\\0') {\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"Error in BASIL response: %s\", brp->error);\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_DEBUG,\n\t\t\t  __func__, log_buffer);\n\t\treturn -1;\n\t}\n\n\tif (vnl_alloc(&nv) == NULL) {\n\t\tlog_err(errno, __func__, \"vnl_alloc failed!\");\n\t\treturn -1;\n\t}\n\tpbs_strncpy(mpphost, brp->data.query.data.inventory.mpp_host,\n\t\t    sizeof(mpphost));\n\tnv->vnl_modtime = (long) brp->data.query.data.inventory.timestamp;\n\n\t/*\n\t * add login node\n\t */\n\tret = 0;\n\tif (hwloc_topology_init(&topology) == -1)\n\t\tret = -1;\n\telse if ((hwloc_topology_set_flags(topology,\n\t\t\t\t\t   HWLOC_TOPOLOGY_FLAG_WHOLE_SYSTEM | HWLOC_TOPOLOGY_FLAG_IO_DEVICES) == -1) ||\n\t\t 
(hwloc_topology_load(topology) == -1) ||\n\t\t (hwloc_topology_export_xmlbuffer(topology, &xmlbuf, &xmllen) == -1)) {\n\t\thwloc_topology_destroy(topology);\n\t\tret = -1;\n\t}\n\tif (ret < 0) {\n\t\t/* on any failure above, issue log message */\n\t\tlog_err(PBSE_SYSTEM, __func__, \"topology init/load/export failed\");\n\t\treturn -1;\n\t} else {\n\t\tchar *lbuf;\n\t\tint lbuflen = xmllen + 1024;\n\n\t\t/*\n\t\t *\txmlbuf is almost certain to overflow log_buffer's size,\n\t\t *\tso for logging this information, we allocate one large\n\t\t *\tenough to hold it\n\t\t */\n\t\tif ((lbuf = malloc(lbuflen)) == NULL) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"malloc logbuf (%d) failed\",\n\t\t\t\t lbuflen);\n\t\t\thwloc_free_xmlbuffer(topology, xmlbuf);\n\t\t\thwloc_topology_destroy(topology);\n\t\t\treturn -1;\n\t\t} else {\n\t\t\tsnprintf(lbuf, lbuflen, \"allocated log buffer, len %d\", lbuflen);\n\t\t\tlog_event(PBSEVENT_DEBUG4, PBS_EVENTCLASS_NODE,\n\t\t\t\t  LOG_DEBUG, __func__, lbuf);\n\t\t}\n\t\tlog_event(PBSEVENT_DEBUG4,\n\t\t\t  PBS_EVENTCLASS_NODE,\n\t\t\t  LOG_DEBUG, __func__, \"topology exported\");\n\t\tsnprintf(lbuf, lbuflen, \"%s%s\", NODE_TOPOLOGY_TYPE_HWLOC, xmlbuf);\n\t\tif (vn_addvnr(nv, mom_short_name, ATTR_NODE_TopologyInfo,\n\t\t\t      lbuf, ATR_TYPE_STR, READ_ONLY, NULL) == -1) {\n\t\t\thwloc_free_xmlbuffer(topology, xmlbuf);\n\t\t\thwloc_topology_destroy(topology);\n\t\t\tfree(lbuf);\n\t\t\tgoto bad_vnl;\n\t\t} else {\n\t\t\tsnprintf(lbuf, lbuflen, \"attribute '%s = %s%s' added\",\n\t\t\t\t ATTR_NODE_TopologyInfo,\n\t\t\t\t NODE_TOPOLOGY_TYPE_HWLOC, xmlbuf);\n\t\t\tlog_event(PBSEVENT_DEBUG4,\n\t\t\t\t  PBS_EVENTCLASS_NODE,\n\t\t\t\t  LOG_DEBUG, __func__, lbuf);\n\t\t\thwloc_free_xmlbuffer(topology, xmlbuf);\n\t\t\thwloc_topology_destroy(topology);\n\t\t\tfree(lbuf);\n\t\t}\n\t}\n\tattr = \"resources_available.ncpus\";\n\tsnprintf(utilBuffer, sizeof(utilBuffer), \"%d\", num_acpus);\n\t/* already exists so don't define type */\n\tif 
(vn_addvnr(nv, mom_short_name, attr, utilBuffer, 0, 0, NULL) == -1)\n\t\tgoto bad_vnl;\n\tattr = \"resources_available.mem\";\n\tsnprintf(utilBuffer, sizeof(utilBuffer), \"%lukb\", totalmem);\n\t/* already exists so don't define type */\n\tif (vn_addvnr(nv, mom_short_name, attr, utilBuffer, 0, 0, NULL) == -1)\n\t\tgoto bad_vnl;\n\n\tattr = \"resources_available.vntype\";\n\tif (vn_addvnr(nv, mom_short_name, attr, CRAY_LOGIN,\n\t\t      0, 0, NULL) == -1)\n\t\tgoto bad_vnl;\n\n\tattr = \"resources_available.PBScrayhost\";\n\tif (vn_addvnr(nv, mom_short_name, attr, mpphost,\n\t\t      ATR_TYPE_STR, aflag, NULL) == -1)\n\t\tgoto bad_vnl;\n\n\t/*\n\t * Extract KNL NIDs (Node ID) from 'knl_node_list' ('knl_node_list' is a string\n\t * containing a rangelist of KNL Node IDs) and populate 'arr_nodes'.\n\t * If BASIL 1.7 is not supported on the Cray system, knl_node_list remains empty,\n\t * causing NULL to be returned and node_count to be set to 0.\n\t */\n\n\tif (basil_1_7_supported)\n\t\tarr_nodes = process_nodelist_KNL(knl_node_list, &node_count);\n\t/*\n\t * now create the compute nodes\n\t */\n\tinv = &brp->data.query.data.inventory;\n\tfor (order = 1, node = inv->nodes; node; node = node->next, order++) {\n\t\tchar *arch;\n\n\t\t/*\n\t\t * We are only interested in creating non-KNL vnodes in this function.\n\t\t * We avoid creating vnodes for KNL Nodes here since they will be\n\t\t * created in system_to_vnodes_KNL(). 
So, filter them out here.\n\t\t */\n\t\tskip_node = 0;\n\t\tfor (idx = 0; idx < node_count; idx++) {\n\t\t\tif (arr_nodes && node->node_id == arr_nodes[idx]) {\n\t\t\t\tskip_node = 1;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tif (skip_node)\n\t\t\tcontinue;\n\n\t\t(void) memset(name, '\\0', VNODE_NAME_LEN);\n\t\tif (node->role != basil_node_role_batch)\n\t\t\tcontinue;\n\t\tif (node->state != basil_node_state_up)\n\t\t\tcontinue;\n\n\t\tswitch (node->arch) {\n\t\t\tcase basil_node_arch_xt:\n\t\t\t\tarch = BASIL_VAL_XT;\n\t\t\t\tbreak;\n\t\t\tcase basil_node_arch_x2:\n\t\t\t\tarch = BASIL_VAL_X2;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tcontinue;\n\t\t}\n\n\t\tif (basil_inventory != NULL) {\n\t\t\tif (first_compute_node) {\n\t\t\t\t/* Create the name of the very first vnode\n\t\t\t\t * so we can attach topology info to it\n\t\t\t\t */\n\t\t\t\tif (vnode_per_numa_node) {\n\t\t\t\t\tsnprintf(name, VNODE_NAME_LEN, \"%s_%ld_0\",\n\t\t\t\t\t\t mpphost, node->node_id);\n\t\t\t\t} else {\n\t\t\t\t\t/* When concatenating the segments into\n\t\t\t\t\t * one vnode, we don't put any segment info\n\t\t\t\t\t * in the name.\n\t\t\t\t\t */\n\t\t\t\t\tsnprintf(name, VNODE_NAME_LEN, \"%s_%ld\",\n\t\t\t\t\t\t mpphost, node->node_id);\n\t\t\t\t}\n\t\t\t\tfirst_compute_node = 0;\n\t\t\t\tattr = ATTR_NODE_TopologyInfo;\n\t\t\t\tif (vn_addvnr(nv, name, attr,\n\t\t\t\t\t      (char *) basil_inventory,\n\t\t\t\t\t      ATR_TYPE_STR, READ_ONLY,\n\t\t\t\t\t      NULL) == -1)\n\t\t\t\t\tgoto bad_vnl;\n\t\t\t}\n\t\t} else {\n\t\t\tsprintf(log_buffer, \"no saved basil_inventory\");\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE,\n\t\t\t\t  LOG_DEBUG, __func__, log_buffer);\n\t\t}\n\t\tseg_num = 0;\n\t\tcpu_ct = 0;\n\t\tmem_ct = 0;\n\n\t\tinventory_loop_on_segments(node, nv, arch, &seg_num, order, name, &cpu_ct, &mem_ct);\n\n\t\tif (!vnode_per_numa_node) {\n\t\t\t/* Since we're creating one vnode that combines\n\t\t\t * the info for all the numa nodes,\n\t\t\t * we've now cycled through all 
the numa nodes, so\n\t\t\t * we need to set the total number of cpus and total\n\t\t\t * memory before moving on to the next node\n\t\t\t */\n\t\t\tattr = \"resources_available.ncpus\";\n\t\t\tsprintf(utilBuffer, \"%d\", cpu_ct);\n\t\t\tif (vn_addvnr(nv, name, attr, utilBuffer,\n\t\t\t\t      0, 0, NULL) == -1)\n\t\t\t\tgoto bad_vnl;\n\n\t\t\tattr = \"resources_available.mem\";\n\t\t\tsnprintf(utilBuffer, sizeof(utilBuffer), \"%ldkb\", mem_ct);\n\t\t\tif (vn_addvnr(nv, name, attr, utilBuffer,\n\t\t\t\t      0, 0, NULL) == -1)\n\t\t\t\tgoto bad_vnl;\n\t\t}\n\t}\n\tinternal_state_update = UPDATE_MOM_STATE;\n\n\t/* merge any existing vnodes into the new set */\n\tif (vnlp != NULL) {\n\t\tif (vn_merge(nv, vnlp, NULL) == NULL)\n\t\t\tgoto bad_vnl;\n\t\tvnl_free(vnlp);\n\t}\n\tvnlp = nv;\n\n\t/* We have no further use for this string of KNL node rangelist(s), so free it. */\n\tif (knl_node_list) {\n\t\tfree(knl_node_list);\n\t\tknl_node_list = NULL;\n\t}\n\t/* We have no further use for this array of KNL node ids, so free it. 
*/\n\tif (arr_nodes) {\n\t\tfree(arr_nodes);\n\t\tarr_nodes = NULL;\n\t}\n\n\treturn 0;\n\nbad_vnl:\n\tsnprintf(log_buffer, sizeof(log_buffer), \"creation of Cray vnodes failed at %ld, with name %s\", order, name);\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_DEBUG,\n\t\t  __func__, log_buffer);\n\t/*\n\t * don't free nv since it might be important in the dump\n\t */\n\tabort();\n}\n\n/**\n * @brief\n * \tDestructor function for BASIL processor allocation structure.\n *\n * @param p structure to free\n *\n * @return Void\n *\n */\nstatic void\nfree_basil_processor_allocation(basil_processor_allocation_t *p)\n{\n\tif (!p)\n\t\treturn;\n\tfree_basil_processor_allocation(p->next);\n\tfree(p);\n\treturn;\n}\n\n/**\n * @brief\n * \tDestructor function for BASIL processor structure.\n *\n * @param p structure to free\n *\n * @return Void\n *\n */\nstatic void\nfree_basil_processor(basil_node_processor_t *p)\n{\n\tif (!p)\n\t\treturn;\n\tfree_basil_processor(p->next);\n\tfree_basil_processor_allocation(p->allocations);\n\tfree(p);\n\treturn;\n}\n\n/**\n * @brief\n * \tDestructor function for BASIL memory allocation structure.\n *\n * @param p structure to free\n *\n * @return Void\n *\n */\nstatic void\nfree_basil_memory_allocation(basil_memory_allocation_t *p)\n{\n\tif (!p)\n\t\treturn;\n\tfree_basil_memory_allocation(p->next);\n\tfree(p);\n\treturn;\n}\n\n/**\n * @brief\n * \tDestructor function for BASIL memory structure.\n *\n * @param p structure to free\n *\n * @return Void\n *\n */\nstatic void\nfree_basil_memory(basil_node_memory_t *p)\n{\n\tif (!p)\n\t\treturn;\n\tfree_basil_memory(p->next);\n\tfree_basil_memory_allocation(p->allocations);\n\tfree(p);\n\treturn;\n}\n\n/**\n * @brief\n * \tDestructor function for BASIL label structure.\n *\n * @param p structure to free\n */\nstatic void\nfree_basil_label(basil_label_t *p)\n{\n\tif (!p)\n\t\treturn;\n\tfree_basil_label(p->next);\n\tfree(p);\n\treturn;\n}\n\n/**\n * @brief\n * \tDestructor function for 
BASIL accelerator gpu structure.\n *\n * @param p structure to free\n *\n * @return Void\n *\n */\nstatic void\nfree_basil_accelerator_gpu(basil_accelerator_gpu_t *p)\n{\n\tif (!p)\n\t\treturn;\n\tif (p->family)\n\t\tfree(p->family);\n\tfree(p);\n\treturn;\n}\n\n/**\n * @brief\n * \tDestructor function for BASIL accelerator allocation structure.\n *\n * @param p structure to free\n *\n * @return Void\n *\n */\nstatic void\nfree_basil_accelerator_allocation(basil_accelerator_allocation_t *p)\n{\n\tif (!p)\n\t\treturn;\n\tfree_basil_accelerator_allocation(p->next);\n\tfree(p);\n\treturn;\n}\n\n/**\n * @brief\n * \tDestructor function for BASIL accelerator structure.\n *\n * @param p structure to free\n *\n * @return Void\n *\n */\nstatic void\nfree_basil_accelerator(basil_node_accelerator_t *p)\n{\n\tif (!p)\n\t\treturn;\n\tfree_basil_accelerator(p->next);\n\tfree_basil_accelerator_allocation(p->allocations);\n\tfree_basil_accelerator_gpu(p->data.gpu);\n\tfree(p);\n\treturn;\n}\n\n/**\n * @brief\n * Destructor function for BASIL computeunit structure\n * @param p structure to free\n */\nstatic void\nfree_basil_computeunit(basil_node_computeunit_t *p)\n{\n\tif (!p)\n\t\treturn;\n\tfree_basil_computeunit(p->next);\n\tfree(p);\n}\n\n/**\n * @brief\n *\tDestructor function for BASIL node segment structure.\n *\n * @param p structure to free\n *\n * @return Void\n *\n */\nstatic void\nfree_basil_segment(basil_node_segment_t *p)\n{\n\tif (!p)\n\t\treturn;\n\tfree_basil_segment(p->next);\n\tfree_basil_processor(p->processors);\n\tfree_basil_memory(p->memory);\n\tfree_basil_label(p->labels);\n\tfree_basil_computeunit(p->computeunits);\n\tfree(p);\n}\n\n/**\n * @brief\n * Destructor function for BASIL socket structure\n * @param p structure to free\n */\nstatic void\nfree_basil_socket(basil_node_socket_t *p)\n{\n\tif (!p)\n\t\treturn;\n\tfree_basil_socket(p->next);\n\tfree_basil_segment(p->segments);\n\tfree(p);\n}\n\n/**\n * @brief\n * Destructor function for BASIL node 
structure.\n * @param p structure to free\n */\nstatic void\nfree_basil_node(basil_node_t *p)\n{\n\tif (!p)\n\t\treturn;\n\tfree_basil_node(p->next);\n\tfree_basil_segment(p->segments);\n\tfree_basil_accelerator(p->accelerators);\n\tfree_basil_socket(p->sockets);\n\tfree(p);\n}\n\n/**\n * @brief\n * Destructor function for BASIL System element structure.\n * @param p linked list of structures to free.\n */\nstatic void\nfree_basil_elements_KNL(basil_system_element_t *p)\n{\n\tbasil_system_element_t *nxtp;\n\tif (!p)\n\t\treturn;\n\tnxtp = p->next;\n\tif (p->nidlist)\n\t\tfree(p->nidlist);\n\tfree(p);\n\tfree_basil_elements_KNL(nxtp);\n}\n\n/**\n * @brief\n * \tDestructor function for BASIL reservation structure.\n *\n * @param p structure to free\n *\n * @return Void\n *\n */\nstatic void\nfree_basil_rsvn(basil_rsvn_t *p)\n{\n\tif (!p)\n\t\treturn;\n\tfree_basil_rsvn(p->next);\n\tfree(p);\n}\n\n/**\n * @brief\n * Destructor function for BASIL query status reservation structure.\n * @param p structure to free\n */\nstatic void\nfree_basil_query_status_res(basil_response_query_status_res_t *p)\n{\n\tif (!p)\n\t\treturn;\n\tfree_basil_query_status_res(p->next);\n\tfree(p);\n}\n\n/**\n * @brief\n * Destructor function for BASIL response structure.\n\n * @param brp structure to free.\n *\n * @return Void\n */\nstatic void\nfree_basil_response_data(basil_response_t *brp)\n{\n\tif (!brp)\n\t\treturn;\n\tif (brp->method == basil_method_query) {\n\t\tif (brp->data.query.type == basil_query_inventory) {\n\t\t\tfree_basil_node(brp->data.query.data.inventory.nodes);\n\t\t\tfree_basil_rsvn(brp->data.query.data.inventory.rsvns);\n\t\t} else if (brp->data.query.type == basil_query_system) {\n\t\t\tfree_basil_elements_KNL(brp->data.query.data.system.elements);\n\t\t} else if (brp->data.query.type == basil_query_engine) {\n\t\t\tif (brp->data.query.data.engine.name)\n\t\t\t\tfree(brp->data.query.data.engine.name);\n\t\t\tif 
(brp->data.query.data.engine.version)\n\t\t\t\tfree(brp->data.query.data.engine.version);\n\t\t\tif (brp->data.query.data.engine.basil_support)\n\t\t\t\tfree(brp->data.query.data.engine.basil_support);\n\t\t} else if (brp->data.query.type == basil_query_status) {\n\t\t\tfree_basil_query_status_res(brp->data.query.data.status.reservation);\n\t\t}\n\t}\n\tfree(brp);\n}\n\n/**\n * @brief\n * \tThe child side of the request handler that invokes the ALPS client.\n *\n * Setup stdin to map to infd and stdout to map to outfd. Once that is\n * done, call exec to run the ALPS client.\n *\n * @param[in] infd input file descriptor\n * @param[in] outfd output file descriptor\n *\n * @return exit value of ALPS client\n * @retval 127 failure to setup exec of ALPS client\n */\nstatic int\nalps_request_child(int infd, int outfd)\n{\n\tchar *p;\n\tint rc = 0;\n\tint in = 0;\n\tint out = 0;\n\n\tin = infd;\n\tout = outfd;\n\tif (in != STDIN_FILENO) {\n\t\tif (out == STDIN_FILENO) {\n\t\t\trc = dup(out);\n\t\t\tif (rc == -1) {\n\t\t\t\tsprintf(log_buffer, \"dup() of out failed: %s\",\n\t\t\t\t\tstrerror(errno));\n\t\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE,\n\t\t\t\t\t  LOG_NOTICE, __func__, log_buffer);\n\t\t\t\t_exit(127);\n\t\t\t}\n\t\t\tclose(out);\n\t\t\tout = rc;\n\t\t}\n\t\trc = dup2(in, STDIN_FILENO);\n\t\tif (rc == -1) {\n\t\t\tsprintf(log_buffer, \"dup2() of in failed: %s\",\n\t\t\t\tstrerror(errno));\n\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE,\n\t\t\t\t  LOG_NOTICE, __func__, log_buffer);\n\t\t\t_exit(127);\n\t\t}\n\t\tclose(in);\n\t}\n\tif (out != STDOUT_FILENO) {\n\t\trc = dup2(out, STDOUT_FILENO);\n\t\tif (rc == -1) {\n\t\t\tsprintf(log_buffer, \"dup2() of out failed: %s\",\n\t\t\t\tstrerror(errno));\n\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE,\n\t\t\t\t  LOG_NOTICE, __func__, log_buffer);\n\t\t\t_exit(127);\n\t\t}\n\t\tclose(out);\n\t}\n\trc = open(\"/dev/null\", O_WRONLY);\n\tif (rc != -1)\n\t\tdup2(rc, STDERR_FILENO);\n\n\trc = 
fcntl(STDIN_FILENO, F_GETFD);\n\tif (rc == -1)\n\t\t_exit(127);\n\trc = fcntl(STDIN_FILENO, F_SETFD, (rc & ~(FD_CLOEXEC)));\n\tif (rc == -1)\n\t\t_exit(127);\n\trc = fcntl(STDOUT_FILENO, F_GETFD);\n\tif (rc == -1)\n\t\t_exit(127);\n\trc = fcntl(STDOUT_FILENO, F_SETFD, (rc & ~(FD_CLOEXEC)));\n\tif (rc == -1)\n\t\t_exit(127);\n\trc = fcntl(STDERR_FILENO, F_GETFD);\n\tif (rc == -1)\n\t\t_exit(127);\n\trc = fcntl(STDERR_FILENO, F_SETFD, (rc & ~(FD_CLOEXEC)));\n\tif (rc == -1)\n\t\t_exit(127);\n\n\tif (!alps_client)\n\t\t_exit(127);\n\tp = strrchr(alps_client, '/');\n\tif (!p)\n\t\t_exit(127);\n\tp++;\n\tif (*p == '\\0')\n\t\t_exit(127);\n\tlog_close(0);\n\tif (execl(alps_client, p, NULL) < 0)\n\t\t_exit(127);\n\texit(0);\n}\n\n/**\n * @brief\n * \tThe parent side of the request handler that reads and parses the\n * \tXML response from the ALPS client (child side).\n *\n * Read the XML from the ALPS client and pass it to XML_Parse.\n * This is where the actual Expat (XML_) functions are called.\n * @param[in] fdin file descriptor to read XML stream from child\n * @param[in] basil_ver BASIL version indicates context i.e. 
did control\n *\t      arrive at this function as a result of processing the\n *\t      Inventory Query or System Query\n * @return pointer to filled in response structure\n * @retval NULL no result\n *\n */\nstatic basil_response_t *\nalps_request_parent(int fdin, char *basil_ver)\n{\n\tud_t ud;\n\tbasil_response_t *brp;\n\tFILE *in = NULL;\n\tint status = 0;\n\tint eof = 0;\n\tint inventory_size = 0;\n\n\tin = fdopen(fdin, \"r\");\n\tif (!in) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE, LOG_NOTICE, __func__,\n\t\t\t  \"Failed to open read FD.\");\n\t\treturn NULL;\n\t}\n\tmemset(&ud, 0, sizeof(ud));\n\tbrp = malloc(sizeof(basil_response_t));\n\tif (!brp) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE, LOG_NOTICE, __func__,\n\t\t\t  \"Failed to allocate response structure.\");\n\t\treturn NULL;\n\t}\n\tmemset(brp, 0, sizeof(basil_response_t));\n\tud.brp = brp;\n\tparser = XML_ParserCreate(NULL);\n\tif (!parser) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE, LOG_NOTICE, __func__,\n\t\t\t  \"Failed to create parser.\");\n\t\tfree_basil_response_data(brp);\n\t\treturn NULL;\n\t}\n\tXML_SetUserData(parser, (void *) &ud);\n\tXML_SetElementHandler(parser, parse_element_start, parse_element_end);\n\tXML_SetCharacterDataHandler(parser, parse_char_data);\n\n\tif (alps_client_out != NULL)\n\t\tfree(alps_client_out);\n\tif ((alps_client_out = strdup(NODE_TOPOLOGY_TYPE_CRAY)) == NULL) {\n\t\tsprintf(log_buffer, \"failed to allocate client output buffer\");\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_NODE, LOG_ERR,\n\t\t\t  __func__, log_buffer);\n\t\tfree_basil_response_data(brp);\n\t\treturn NULL;\n\t} else\n\t\tinventory_size = strlen(alps_client_out) + 1;\n\n\tif (basil_ver != NULL)\n\t\tpbs_strncpy(ud.basil_ver, basil_ver, BASIL_STRING_SHORT);\n\telse\n\t\tpbs_strncpy(ud.basil_ver, BASIL_VAL_UNKNOWN, BASIL_STRING_SHORT);\n\tud.basil_ver[BASIL_STRING_SHORT - 1] = '\\0';\n\n\tdo {\n\t\tint rc = 0;\n\t\tint len = 0;\n\t\texpatBuffer[0] = 
'\\0';\n\t\tlen = fread(expatBuffer, sizeof(char),\n\t\t\t    (EXPAT_BUFFER_LEN - 1), in);\n\t\trc = ferror(in);\n\t\tif (rc) {\n\t\t\tif (len == 0) {\n\t\t\t\tclearerr(in);\n\t\t\t\tusleep(100);\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"Read error on stream: rc=%d, len=%d\",\n\t\t\t\trc, len);\n\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE,\n\t\t\t\t  LOG_NOTICE, __func__, log_buffer);\n\t\t\tbreak;\n\t\t}\n\t\t*(expatBuffer + len) = '\\0';\n\t\tif (pbs_strcat(&alps_client_out, &inventory_size,\n\t\t\t       expatBuffer) == NULL) {\n\t\t\tsprintf(log_buffer, \"failed to save client response\");\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_NODE, LOG_ERR,\n\t\t\t\t  __func__, log_buffer);\n\t\t\tfree(alps_client_out);\n\t\t\talps_client_out = NULL;\n\t\t\tbreak;\n\t\t}\n\t\teof = feof(in);\n\t\tstatus = XML_Parse(parser, expatBuffer, len, eof);\n\t\tif (status == XML_STATUS_ERROR) {\n\t\t\tsprintf(ud.error_class, \"%s\", BASIL_VAL_PERMANENT);\n\t\t\tsprintf(ud.error_source, \"%s\", BASIL_VAL_PARSER);\n\t\t\tsprintf(ud.message, \"%s\",\n\t\t\t\tXML_ErrorString(XML_GetErrorCode(parser)));\n\t\t\tbreak;\n\t\t}\n\t} while (!eof);\n\tfclose(in);\n\n\tif (*ud.error_class || *ud.error_source) {\n\t\tsprintf(log_buffer, \"%s BASIL error from %s: %s\",\n\t\t\tud.error_class, ud.error_source, ud.message);\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE, LOG_NOTICE,\n\t\t\t  __func__, log_buffer);\n\t\tsnprintf(brp->error, BASIL_ERROR_BUFFER_SIZE, \"%s\", ud.message);\n\t\tif (strcmp(BASIL_VAL_PARSER, ud.error_source) == 0) {\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_DEBUG, __func__, \"XML buffer: \");\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE,\n\t\t\t\t  LOG_DEBUG, __func__, expatBuffer);\n\t\t}\n\t}\n\tXML_ParserFree(parser);\n\treturn (brp);\n}\n\n/**\n * @brief\n * \tThe front-end function for all ALPS requests that calls the\n * \tappropriate subordinate functions to issue the request (child) and\n * \tparse the 
response (parent).\n *\n * Setup pipes and call fork so the ALPS client can be run and send its\n * stdout to be read by the parent.\n *\n * @param[in] msg XML message to send to ALPS client\n * @param[in] basil_ver BASIL version indicates context i.e. did control\n *\t      arrive at this function as a result of processing the\n *\t      Inventory Query or System Query.\n * @return pointer to filled in response structure\n * @retval NULL no result\n *\n */\nstatic basil_response_t *\nalps_request(char *msg, char *basil_ver)\n{\n\tint toChild[2];\n\tint fromChild[2];\n\tint status = 0;\n\tpid_t pid;\n\tpid_t exited;\n\tsize_t msglen = 0;\n\tsize_t wlen = -1;\n\tbasil_response_t *brp = NULL;\n\tFILE *fp = NULL;\n\n\tif (!alps_client) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE, LOG_NOTICE, __func__,\n\t\t\t  \"No alps_client specified in MOM configuration file.\");\n\t\treturn NULL;\n\t}\n\tif (!msg) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE, LOG_DEBUG,\n\t\t\t  __func__, \"No message parameter for method.\");\n\t\treturn NULL;\n\t}\n\tmsglen = strlen(msg);\n\tif (msglen < 32) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE, LOG_DEBUG,\n\t\t\t  __func__, \"ALPS request too short.\");\n\t\treturn NULL;\n\t}\n\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t \"Sending ALPS request: %s\", msg);\n\tlog_event(PBSEVENT_DEBUG2, 0, LOG_DEBUG, __func__, log_buffer);\n\tif (pipe(toChild) == -1)\n\t\treturn NULL;\n\tif (pipe(fromChild) == -1) {\n\t\t(void) close(toChild[0]);\n\t\t(void) close(toChild[1]);\n\t\treturn NULL;\n\t}\n\n\tpid = fork();\n\tif (pid < 0) {\n\t\tlog_err(errno, __func__, \"fork\");\n\t\t(void) close(toChild[0]);\n\t\t(void) close(toChild[1]);\n\t\t(void) close(fromChild[0]);\n\t\t(void) close(fromChild[1]);\n\t\treturn NULL;\n\t}\n\tif (pid == 0) {\n\t\tclose(toChild[1]);\n\t\tclose(fromChild[0]);\n\t\talps_request_child(toChild[0], fromChild[1]);\n\t\texit(1);\n\t}\n\tclose(toChild[0]);\n\tclose(fromChild[1]);\n\tfp = 
fdopen(toChild[1], \"w\");\n\tif (fp == NULL) {\n\t\tsprintf(log_buffer, \"fdopen() failed: %s\", strerror(errno));\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE, LOG_NOTICE,\n\t\t\t  __func__, log_buffer);\n\t\tkill(pid, SIGKILL); /* don't let child run */\n\t\tgoto done;\n\t}\n\n\twlen = fwrite(msg, sizeof(char), msglen, fp);\n\tif (wlen < msglen) {\n\t\tlog_err(errno, __func__, \"fwrite\");\n\t\tfclose(fp);\n\t\tkill(pid, SIGKILL); /* don't let child run */\n\t\tgoto done;\n\t}\n\n\tif (fflush(fp) != 0) {\n\t\tlog_err(errno, __func__, \"fflush\");\n\t\tfclose(fp);\n\t\tkill(pid, SIGKILL); /* don't let child run */\n\t\tgoto done;\n\t}\n\n\tfclose(fp);\n\tif ((brp = alps_request_parent(fromChild[0], basil_ver)) == NULL) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE, LOG_DEBUG, __func__,\n\t\t\t  \"No response from ALPS.\");\n\t}\n\ndone:\n\texited = waitpid(pid, &status, 0);\n\t/*\n\t * If the wait fails or the process did not exit with 0,\n\t * generate a message.\n\t */\n\tif ((exited == -1) || (!WIFEXITED(status)) ||\n\t    (WEXITSTATUS(status) != 0)) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE, LOG_DEBUG, __func__,\n\t\t\t  \"BASIL query process exited abnormally.\");\n\t}\n\n\tclose(toChild[1]);\n\tclose(fromChild[0]);\n\treturn (brp);\n}\n\n/**\n * @brief\n * \tDestructor function for BASIL memory parameter structure.\n *\n * @param p structure to free\n *\n * @return Void\n *\n */\nstatic void\nalps_free_memory_param(basil_memory_param_t *p)\n{\n\tif (!p)\n\t\treturn;\n\talps_free_memory_param(p->next);\n\tfree(p);\n}\n\n/**\n * @brief\n * \tDestructor function for BASIL label parameter structure.\n *\n * @param p structure to free\n *\n * @return Void\n *\n */\nstatic void\nalps_free_label_param(basil_label_param_t *p)\n{\n\tif (!p)\n\t\treturn;\n\talps_free_label_param(p->next);\n\tfree(p);\n}\n\n/**\n * @brief\n * \tDestructor function for BASIL node list parameter structure.\n *\n * @param p structure to free\n *\n * @return 
Void\n *\n */\nstatic void\nalps_free_nodelist_param(basil_nodelist_param_t *p)\n{\n\tif (!p)\n\t\treturn;\n\talps_free_nodelist_param(p->next);\n\tif (p->nodelist)\n\t\tfree(p->nodelist);\n\tfree(p);\n}\n\n/**\n * @brief\n * \tDestructor function for BASIL accelerator parameter structure.\n *\n * @param p structure to free\n *\n * @return Void\n *\n */\nstatic void\nalps_free_accelerator_param(basil_accelerator_param_t *p)\n{\n\tif (!p)\n\t\treturn;\n\talps_free_accelerator_param(p->next);\n\tfree_basil_accelerator_gpu(p->data.gpu);\n\tfree(p);\n}\n\n/**\n * @brief\n * \tDestructor function for BASIL parameter structure.\n *\n * @param p structure to free\n *\n * @return Void\n *\n */\nstatic void\nalps_free_param(basil_reserve_param_t *p)\n{\n\tif (!p)\n\t\treturn;\n\talps_free_param(p->next);\n\talps_free_memory_param(p->memory);\n\talps_free_label_param(p->labels);\n\talps_free_nodelist_param(p->nodelists);\n\talps_free_accelerator_param(p->accelerators);\n\tfree(p);\n}\n\n/**\n * @brief\n * \tDestructor function for BASIL reservation request structure.\n *\n * @param p structure to free\n *\n * @return Void\n *\n */\nvoid\nalps_free_reserve_request(basil_request_reserve_t *p)\n{\n\tif (!p)\n\t\treturn;\n\talps_free_param(p->params);\n\tfree(p);\n\treturn;\n}\n\n/**\n * Information to remember for each vnode in the exec_vnode for a job.\n * The vnode are combined by alps_create_reserve_request() to form\n * the ALPS reservation.\n */\ntypedef struct nodesum {\n\tchar *name;\n\tchar *vntype;\n\tchar *arch;\n\tlong nid;\n\tlong mpiprocs;\n\tlong ncpus;\n\tlong threads;\n\tlong long mem;\n\tlong chunks;\n\tlong width;\n\tlong depth;\n\tenum vnode_sharing_state share;\n\tint naccels;\n\tint need_accel;\n\tchar *accel_model;\n\tlong long accel_mem;\n\tint done;\n} nodesum_t;\n\n/**\n * @brief\n * Given a pointer to a PBS job (pjob), validate and construct a BASIL\n * reservation request.\n *\n * A loop goes through each element of the ji_vnods array for the job 
and\n * looks for entries that have cpus, the name matches mpphost, vntype is\n * CRAY_COMPUTE, and has a value for arch. Each of these entries causes\n * an entry to be made in the nodes array. If no vnodes are matched,\n * we can return since no compute nodes are being allocated.\n *\n * An error check is done to be sure no entries in the nodes array have\n * a bad combination of ncpus and mpiprocs. Then, a double loop is\n * entered that goes through each element of the nodes array\n * looking for matching entries. A match is when depth, width, mem,\n * share, arch, need_accel, accelerator_model and accelerator_mem are\n * all the same. All matches will be output to\n * a single ReserveParam XML section. Each node array entry that\n * is represented in a ReserveParam section is marked done so it\n * can be skipped as the loops run through the entries.\n *\n * @param[in] pjob PBS job\n * @param[out] req resulting completed request structure (NULL on error)\n *\n * @retval 0 success\n * @retval 1 failure\n * @retval 2 requeue job\n *\n */\nint\nalps_create_reserve_request(job *pjob, basil_request_reserve_t **req)\n{\n\tbasil_request_reserve_t *basil_req;\n\tbasil_reserve_param_t *pend;\n\tenum rlplace_value rpv;\n\tenum vnode_sharing vnsv;\n\tstruct passwd *pwent;\n\tint i = 0;\n\tint j = 0;\n\tint num = 0;\n\tint err_ret = 1;\n\tnodesum_t *nodes;\n\tvmpiprocs *vp;\n\tsize_t len = 0;\n\tsize_t nsize = 0;\n\tchar *cp;\n\tlong pstate = 0;\n\tchar *pgov = NULL;\n\tchar *pname = NULL;\n\tresource *pres;\n\n\t*req = NULL;\n\n\tnodes = (nodesum_t *) calloc(pjob->ji_numvnod, sizeof(nodesum_t));\n\tif (nodes == NULL)\n\t\treturn 1;\n\n\trpv = getplacesharing(pjob);\n\n\t/*\n\t * Go through the vnodes to consolidate the mpi ranks onto\n\t * the compute nodes. 
The index into ji_vnods will be\n\t * incremented by the value of vn_mpiprocs because the\n\t * entries in ji_vnods are replicated for each mpi rank.\n\t */\n\tnum = 0;\n\tlen = strlen(mpphost);\n\tfor (i = 0; i < pjob->ji_numvnod; i += vp->vn_mpiprocs) {\n\t\tvnal_t *vnp;\n\t\tchar *vntype, *vnt;\n\t\tchar *sharing;\n\t\tlong nid;\n\t\tint seg;\n\t\tlong long mem;\n\t\tchar *arch;\n\t\tenum vnode_sharing_state share;\n\t\tvp = &pjob->ji_vnods[i];\n\n\t\tassert(vp->vn_mpiprocs > 0);\n\t\tif (vp->vn_cpus == 0)\n\t\t\tcontinue;\n\n\t\t/*\n\t\t * Only match vnodes that begin with mpphost and have\n\t\t * a following \"_<num>_<num>\" (when\n\t\t * vnode_per_numa_node is true); otherwise,\n\t\t * just plain \"_<num>\".\n\t\t */\n\t\tif (strncmp(vp->vn_vname, mpphost, len) != 0)\n\t\t\tcontinue;\n\t\tcp = &vp->vn_vname[len];\n\t\tif (vnode_per_numa_node) {\n\t\t\tif (sscanf(cp, \"_%ld_%d\", &nid, &seg) != 2)\n\t\t\t\tcontinue;\n\t\t} else {\n\t\t\tif (sscanf(cp, \"_%ld\", &nid) != 1)\n\t\t\t\tcontinue;\n\t\t}\n\n\t\t/* check that vnode exists */\n\t\tvnp = vn_vnode(vnlp, vp->vn_vname);\n\t\tif (vnp == NULL) {\n\t\t\tsprintf(log_buffer, \"vnode %s does not exist\",\n\t\t\t\tvp->vn_vname);\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB,\n\t\t\t\t  LOG_DEBUG, pjob->ji_qs.ji_jobid,\n\t\t\t\t  log_buffer);\n\t\t\tfree(nodes);\n\t\t\treturn 2;\n\t\t}\n\n\t\t/* see if this is a compute node */\n\t\tvntype = attr_exist(vnp, \"resources_available.vntype\");\n\t\tif (vntype == NULL) {\n\t\t\tsprintf(log_buffer, \"vnode %s has no vntype value\",\n\t\t\t\tvp->vn_vname);\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB,\n\t\t\t\t  LOG_DEBUG, pjob->ji_qs.ji_jobid,\n\t\t\t\t  log_buffer);\n\t\t\tcontinue;\n\t\t}\n\t\t/*\n\t\t * Check string array to be sure CRAY_COMPUTE is\n\t\t * one of the values.\n\t\t */\n\t\tfor (vnt = parse_comma_string(vntype); vnt != NULL;\n\t\t     vnt = parse_comma_string(NULL)) {\n\t\t\tif (strcmp(vnt, CRAY_COMPUTE) == 
0)\n\t\t\t\tbreak;\n\t\t\tsprintf(log_buffer, \"vnode %s has vntype %s\",\n\t\t\t\tvp->vn_vname, vnt);\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB,\n\t\t\t\t  LOG_DEBUG, pjob->ji_qs.ji_jobid,\n\t\t\t\t  log_buffer);\n\t\t}\n\t\tif (vnt == NULL) {\n\t\t\tsprintf(log_buffer, \"vnode %s does not have vntype %s\",\n\t\t\t\tvp->vn_vname, CRAY_COMPUTE);\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB,\n\t\t\t\t  LOG_DEBUG, pjob->ji_qs.ji_jobid,\n\t\t\t\t  log_buffer);\n\t\t\tcontinue;\n\t\t}\n\n\t\tarch = attr_exist(vnp, \"resources_available.arch\");\n\t\tif (arch == NULL) {\n\t\t\tsprintf(log_buffer, \"vnode %s has no arch value\",\n\t\t\t\tvp->vn_vname);\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB,\n\t\t\t\t  LOG_DEBUG, pjob->ji_qs.ji_jobid,\n\t\t\t\t  log_buffer);\n\t\t\tfree(nodes);\n\t\t\treturn 2;\n\t\t}\n\n\t\t/* check legal values for arch */\n\t\tif (strcmp(BASIL_VAL_XT, arch) != 0 &&\n\t\t    strcmp(BASIL_VAL_X2, arch) != 0) {\n\t\t\tsprintf(log_buffer, \"vnode %s has bad arch value %s\",\n\t\t\t\tvp->vn_vname, arch);\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB,\n\t\t\t\t  LOG_DEBUG, pjob->ji_qs.ji_jobid,\n\t\t\t\t  log_buffer);\n\t\t\tfree(nodes);\n\t\t\treturn 2;\n\t\t}\n\n\t\t/* rounded up value for size_mb which is memory per MPI rank */\n\t\tmem = (vp->vn_mem + vp->vn_mpiprocs - 1) / vp->vn_mpiprocs;\n\t\tsharing = attr_exist(vnp, \"sharing\");\n\t\tvnsv = str_to_vnode_sharing(sharing);\n\t\tshare = vnss[vnsv][rpv];\n\n\t\t/*\n\t\t ** If the vnode is in the array but is setup to use\n\t\t ** different values for ncpus, mpiprocs etc, we need\n\t\t ** to allocate another slot for it so a separate\n\t\t ** ReserveParam XML section is created.\n\t\t */\n\t\tfor (j = 0; j < num; j++) {\n\t\t\tnodesum_t *ns = &nodes[j];\n\n\t\t\tif (ns->nid == nid && ns->share == share &&\n\t\t\t    ns->mpiprocs == vp->vn_mpiprocs &&\n\t\t\t    ns->ncpus == vp->vn_cpus &&\n\t\t\t    ns->threads == vp->vn_threads &&\n\t\t\t    ns->mem == mem 
&&\n\t\t\t    (strcmp(ns->arch, arch) == 0) &&\n\t\t\t    ns->need_accel == vp->vn_need_accel &&\n\t\t\t    ns->accel_mem == vp->vn_accel_mem) {\n\t\t\t\tif (ns->need_accel == 1) {\n\t\t\t\t\t/* If an accelerator is needed, check to\n\t\t\t\t\t * see if the model has been set.\n\t\t\t\t\t * Need a new XML block when the previous\n\t\t\t\t\t * model doesn't match the current.\n\t\t\t\t\t * Or if prev was set and current isn't,\n\t\t\t\t\t * or vice versa.\n\t\t\t\t\t */\n\t\t\t\t\tif (vp->vn_accel_model &&\n\t\t\t\t\t    ns->accel_model) {\n\t\t\t\t\t\tif (strcmp(ns->accel_model, vp->vn_accel_model) != 0) {\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t}\n\t\t\t\t\t} else if (!(vp->vn_accel_model == NULL &&\n\t\t\t\t\t\t     ns->accel_model == NULL)) {\n\t\t\t\t\t\t/* if both are NULL they match\n\t\t\t\t\t\t * otherwise keep looking\n\t\t\t\t\t\t */\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tns->chunks++;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tif (j == num) { /* need a new entry */\n\t\t\tnodes[num].nid = nid;\n\t\t\tnodes[num].name = vp->vn_vname;\n\t\t\tnodes[num].mpiprocs = vp->vn_mpiprocs;\n\t\t\tnodes[num].ncpus = vp->vn_cpus;\n\t\t\tnodes[num].threads = vp->vn_threads;\n\t\t\tnodes[num].mem = mem;\n\t\t\tnodes[num].naccels = vp->vn_naccels;\n\t\t\tnodes[num].need_accel = vp->vn_need_accel;\n\t\t\tif (nodes[num].need_accel) {\n\t\t\t\tif (vp->vn_accel_mem) {\n\t\t\t\t\tnodes[num].accel_mem = vp->vn_accel_mem;\n\t\t\t\t}\n\t\t\t\tif (vp->vn_accel_model) {\n\t\t\t\t\tnodes[num].accel_model = vp->vn_accel_model;\n\t\t\t\t}\n\t\t\t}\n\t\t\tnodes[num].vntype = vntype;\n\t\t\tnodes[num].arch = arch;\n\t\t\tnodes[num].share = share;\n\t\t\tnodes[num++].chunks = 1;\n\t\t}\n\t}\n\tif (num == 0) { /* no compute nodes -> no reservation */\n\t\tfree(nodes);\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB,\n\t\t\t  LOG_DEBUG, pjob->ji_qs.ji_jobid,\n\t\t\t  \"no ALPS reservation created: \"\n\t\t\t  \"no compute nodes allocated\");\n\t\treturn 0;\n\t}\n\n\tbasil_req = 
malloc(sizeof(basil_request_reserve_t));\n\tif (!basil_req) {\n\t\tfree(nodes);\n\t\treturn 1;\n\t}\n\tmemset(basil_req, 0, sizeof(basil_request_reserve_t));\n\n\tpwent = getpwuid(pjob->ji_qs.ji_un.ji_momt.ji_exuid);\n\tif (!pwent)\n\t\tgoto err;\n\tpbs_strncpy(basil_req->user_name, pwent->pw_name,\n\t\t    sizeof(basil_req->user_name));\n\n\tpbs_strncpy(basil_req->batch_id, pjob->ji_qs.ji_jobid,\n\t\t    sizeof(basil_req->batch_id));\n\n\t/* check for pstate or pgov */\n\tfor (pres = (resource *) GET_NEXT(get_jattr_list(pjob, JOB_ATR_resource));\n\t     pres != NULL;\n\t     pres = (resource *) GET_NEXT(pres->rs_link)) {\n\n\t\tif ((pstate > 0) && (pgov != NULL))\n\t\t\tbreak;\n\n\t\tif (pres->rs_defin == NULL)\n\t\t\tcontinue;\n\t\tpname = pres->rs_defin->rs_name;\n\t\tif (pname == NULL)\n\t\t\tcontinue;\n\n\t\tif (strcmp(pname, \"pstate\") == 0) {\n\t\t\tpstate = atol(pres->rs_value.at_val.at_str);\n\t\t\tif (pstate <= 0) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"pstate value \\\"%s\\\" could not be used for the reservation\",\n\t\t\t\t\t pres->rs_value.at_val.at_str);\n\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t  LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\tpstate = 0;\n\t\t\t}\n\t\t\tcontinue;\n\t\t}\n\t\tif (strcmp(pname, \"pgov\") == 0) {\n\t\t\tpgov = pres->rs_value.at_val.at_str;\n\t\t\tcontinue;\n\t\t}\n\t}\n\n\tfor (i = 0; i < num; i++) {\n\t\tnodesum_t *ns = &nodes[i];\n\n\t\t/*\n\t\t * ALPS cannot represent situations where a thread\n\t\t * or process does not have a cpu allocated.\n\t\t */\n\t\tif ((ns->ncpus % ns->mpiprocs) != 0)\n\t\t\tgoto err;\n\n\t\tns->width = ns->mpiprocs * ns->chunks;\n\t\tns->depth = ns->ncpus / ns->mpiprocs;\n\t}\n\n\tpend = NULL;\n\n\tfor (i = 0; i < num; i++) {\n\t\tbasil_reserve_param_t *p;\n\t\tbasil_nodelist_param_t *n;\n\t\tbasil_accelerator_param_t *a;\n\t\tbasil_accelerator_gpu_t *gpu;\n\t\tnodesum_t *ns = &nodes[i];\n\t\tchar *arch = ns->arch;\n\t\tlong long mem = ns->mem;\n\t\tchar *accel_model = 
ns->accel_model;\n\t\tlong long accel_mem = ns->accel_mem;\n\t\tlong width;\n\t\tlong last_nid, prev_nid;\n\n\t\tif (ns->done) /* already output */\n\t\t\tcontinue;\n\n\t\tp = malloc(sizeof(basil_reserve_param_t));\n\t\tif (p == NULL)\n\t\t\tgoto err;\n\n\t\tmemset(p, 0, sizeof(*p));\n\t\tif (pend == NULL)\n\t\t\tbasil_req->params = p;\n\t\telse\n\t\t\tpend->next = p;\n\t\tpend = p;\n\n\t\tn = malloc(sizeof(basil_nodelist_param_t));\n\t\tif (n == NULL)\n\t\t\tgoto err;\n\n\t\tmemset(n, 0, sizeof(*n));\n\t\tp->nodelists = n;\n\n\t\tnsize = BASIL_STRING_LONG;\n\t\tn->nodelist = malloc(nsize);\n\t\tif (n->nodelist == NULL)\n\t\t\tgoto err;\n\n\t\tsprintf(n->nodelist, \"%ld\", ns->nid);\n\t\tlast_nid = prev_nid = ns->nid;\n\n\t\tp->depth = ns->depth;\n\t\tp->nppn = width = ns->width;\n\n\t\t/*\n\t\t * If the user requested place=excl then we need to pass\n\t\t * that information into the ALPS reservation.\n\t\t */\n\t\tp->rsvn_mode = basil_rsvn_mode_none; /* initialize it */\n\t\tif (rpv == rlplace_excl) {\n\t\t\t/*\n\t\t\t * The user asked for the node exclusively.\n\t\t\t * Set it in the ALPS reservation.\n\t\t\t */\n\t\t\tp->rsvn_mode = basil_rsvn_mode_exclusive;\n\t\t}\n\t\tif (ns->ncpus != ns->threads) {\n\t\t\tsprintf(log_buffer, \"ompthreads %ld does not match\"\n\t\t\t\t\t    \" ncpus %ld\",\n\t\t\t\tns->threads, ns->ncpus);\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB,\n\t\t\t\t  LOG_DEBUG, pjob->ji_qs.ji_jobid,\n\t\t\t\t  log_buffer);\n\t\t}\n\n\t\t/*\n\t\t * Collapse matching entries.\n\t\t */\n\t\tfor (j = i + 1; j < num; j++) {\n\t\t\tnodesum_t *ns2 = &nodes[j];\n\n\t\t\t/* Look for matching nid entries that have not\n\t\t\t * yet been output.\n\t\t\t */\n\t\t\tif (ns2->done)\n\t\t\t\tcontinue;\n\n\t\t\t/* If everything matches, add in this entry\n\t\t\t * and mark it done.\n\t\t\t */\n\t\t\tif (ns2->depth != ns->depth)\n\t\t\t\tcontinue;\n\t\t\tif (ns2->width != ns->width)\n\t\t\t\tcontinue;\n\t\t\tif (ns2->mem != 
ns->mem)\n\t\t\t\tcontinue;\n\t\t\tif (ns2->share != ns->share)\n\t\t\t\tcontinue;\n\t\t\tif (strcmp(ns2->arch, arch) != 0)\n\t\t\t\tcontinue;\n\t\t\tif (ns2->need_accel != ns->need_accel)\n\t\t\t\tcontinue;\n\t\t\tif (ns2->accel_mem != accel_mem)\n\t\t\t\tcontinue;\n\t\t\tif (ns->need_accel == 1) {\n\t\t\t\tif (accel_model &&\n\t\t\t\t    ns2->accel_model) {\n\t\t\t\t\tif (strcmp(ns2->accel_model, accel_model) != 0) {\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t} else if (!(accel_model == NULL &&\n\t\t\t\t\t     ns2->accel_model == NULL)) {\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\twidth += ns2->width;\n\t\t\tns2->done = 1;\n\n\t\t\t/*\n\t\t\t * See if we can use a range of nid numbers.\n\t\t\t */\n\t\t\tif (ns2->nid == (prev_nid + 1)) {\n\t\t\t\tprev_nid = ns2->nid;\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tif (last_nid == prev_nid) /* no range */\n\t\t\t\tsprintf(utilBuffer, \",%ld\", ns2->nid);\n\t\t\telse {\n\t\t\t\tsprintf(utilBuffer, \"-%ld,%ld\",\n\t\t\t\t\tprev_nid, ns2->nid);\n\t\t\t}\n\t\t\tprev_nid = last_nid = ns2->nid;\n\n\t\t\t/* check to see if we need to get a new nodelist */\n\t\t\tif (strlen(utilBuffer) + 1 >\n\t\t\t    nsize - strlen(n->nodelist)) {\n\t\t\t\tchar *hold;\n\n\t\t\t\tnsize *= 2; /* double size */\n\t\t\t\thold = realloc(n->nodelist, nsize);\n\t\t\t\tif (hold == NULL)\n\t\t\t\t\tgoto err;\n\t\t\t\tn->nodelist = hold;\n\t\t\t}\n\t\t\t/* this is safe since we checked for overflow */\n\t\t\tstrcat(n->nodelist, utilBuffer);\n\t\t}\n\t\tp->width = width;\n\t\tif (last_nid < prev_nid) { /* last range */\n\t\t\tsize_t slen;\n\n\t\t\tsprintf(utilBuffer, \"-%ld\", prev_nid);\n\t\t\tslen = strlen(utilBuffer) + 1;\n\n\t\t\t/* check to see if we need to get a new nodelist */\n\t\t\tif (slen > nsize - strlen(n->nodelist)) {\n\t\t\t\tchar *hold;\n\n\t\t\t\tnsize += slen + 1;\n\t\t\t\thold = realloc(n->nodelist, nsize);\n\t\t\t\tif (hold == NULL)\n\t\t\t\t\tgoto err;\n\t\t\t\tn->nodelist = hold;\n\t\t\t}\n\t\t\t/* this is safe since we checked 
for overflow */\n\t\t\tstrcat(n->nodelist, utilBuffer);\n\t\t}\n\n\t\tif (mem > 0) {\n\t\t\tp->memory = malloc(sizeof(basil_memory_param_t));\n\t\t\tif (p->memory == NULL)\n\t\t\t\tgoto err;\n\t\t\tmemset(p->memory, 0, sizeof(basil_memory_param_t));\n\t\t\tp->memory->size_mb = (long) ((mem + 1023) / 1024);\n\t\t\tp->memory->type = basil_memory_type_os;\n\t\t}\n\t\t/*\n\t\t * We don't include checking for ns->naccels here because\n\t\t * ALPS is currently unable to accept a specified count\n\t\t * of accelerators. Also ALPS currently needs a width\n\t\t * to be requested on every node, so an accelerator cannot\n\t\t * be the only thing requested on a node.\n\t\t */\n\t\tif (ns->need_accel) {\n\t\t\ta = malloc(sizeof(basil_accelerator_param_t));\n\t\t\tif (a == NULL)\n\t\t\t\tgoto err;\n\t\t\tmemset(a, 0, sizeof(*a));\n\t\t\ta->type = basil_accel_gpu;\n\t\t\tp->accelerators = a;\n\t\t\tif (accel_model || (accel_mem > 0)) {\n\t\t\t\tgpu = malloc(sizeof(basil_accelerator_gpu_t));\n\t\t\t\tif (gpu == NULL)\n\t\t\t\t\tgoto err;\n\t\t\t\tmemset(gpu, 0, sizeof(basil_accelerator_gpu_t));\n\t\t\t\ta->data.gpu = gpu;\n\t\t\t\tif (accel_model) {\n\t\t\t\t\tgpu->family = strdup(accel_model);\n\t\t\t\t\tif (gpu->family == NULL)\n\t\t\t\t\t\tgoto err;\n\t\t\t\t}\n\t\t\t\tif (accel_mem > 0) {\n\t\t\t\t\t/* ALPS expects MB */\n\t\t\t\t\tgpu->memory = (unsigned int) ((accel_mem + 1023) / 1024);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif (strcmp(BASIL_VAL_XT, arch) == 0) {\n\t\t\tp->arch = basil_node_arch_xt;\n\t\t} else if (strcmp(BASIL_VAL_X2, arch) == 0) {\n\t\t\tp->arch = basil_node_arch_x2;\n\t\t}\n\t\tif (pstate > 0) {\n\t\t\tp->pstate = pstate;\n\t\t}\n\t\tif (pgov != NULL) {\n\t\t\tif (strlen(pgov) < sizeof(p->pgovernor)) {\n\t\t\t\tpbs_strncpy(p->pgovernor, pgov, sizeof(p->pgovernor));\n\t\t\t} else {\n\t\t\t\tsprintf(log_buffer, \"pgov value %s is too long,\"\n\t\t\t\t\t\t    \" length must be less than %zu\",\n\t\t\t\t\tpgov, 
sizeof(p->pgovernor));\n\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t  LOG_DEBUG, pjob->ji_qs.ji_jobid,\n\t\t\t\t\t  log_buffer);\n\t\t\t}\n\t\t}\n\t}\n\t*req = basil_req;\n\tfree(nodes);\n\treturn 0;\n\nerr:\n\tfree(nodes);\n\talps_free_reserve_request(basil_req);\n\treturn err_ret;\n}\n\n/**\n * @brief\n * \tIssue a request to create a reservation on behalf of a user.\n *\n * Called during job initialization.\n *\n * @param[in] bresvp - pointer to the reserve request\n * @param[out] rsvn_id - reservation ID\n * @param pagg - unused\n *\n * @retval 0 success\n * @retval 1 transient error (retry)\n * @retval -1 fatal error\n *\n */\nint\nalps_create_reservation(basil_request_reserve_t *bresvp, long *rsvn_id,\n\t\t\tunsigned long long *pagg)\n{\n\tbasil_reserve_param_t *param;\n\tbasil_memory_param_t *mem;\n\tbasil_label_param_t *label;\n\tbasil_nodelist_param_t *nl;\n\tbasil_accelerator_param_t *accel;\n\tbasil_response_t *brp;\n\n\tif (!bresvp) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE, LOG_NOTICE, __func__,\n\t\t\t  \"Cannot create ALPS reservation, missing data.\");\n\t\treturn (-1);\n\t}\n\tif (*bresvp->user_name == '\\0') {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE, LOG_NOTICE, __func__,\n\t\t\t  \"Cannot create ALPS reservation, missing user name.\");\n\t\treturn (-1);\n\t}\n\tif (!bresvp->params) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE, LOG_NOTICE, __func__,\n\t\t\t  \"Cannot create ALPS reservation, missing parameters.\");\n\t\treturn (-1);\n\t}\n\tnew_alps_req();\n\tsprintf(utilBuffer, \"<?xml version=\\\"1.0\\\"?>\\n\"\n\t\t\t    \"<\" BASIL_ELM_REQUEST \" \" BASIL_ATR_PROTOCOL \"=\\\"%s\\\" \" BASIL_ATR_METHOD \"=\\\"\" BASIL_VAL_RESERVE \"\\\">\\n\",\n\t\tbasilversion_inventory);\n\tadd_alps_req(utilBuffer);\n\tsprintf(utilBuffer,\n\t\t\" <\" BASIL_ELM_RESVPARAMARRAY \" \" BASIL_ATR_USER_NAME \"=\\\"%s\\\" \" BASIL_ATR_BATCH_ID \"=\\\"%s\\\"\",\n\t\tbresvp->user_name, 
bresvp->batch_id);\n\tadd_alps_req(utilBuffer);\n\tif (*bresvp->account_name != '\\0') {\n\t\tsprintf(utilBuffer, \" \" BASIL_ATR_ACCOUNT_NAME \"=\\\"%s\\\"\",\n\t\t\tbresvp->account_name);\n\t\tadd_alps_req(utilBuffer);\n\t}\n\tadd_alps_req(\">\\n\");\n\tfor (param = bresvp->params; param; param = param->next) {\n\t\tadd_alps_req(\"  <\" BASIL_ELM_RESERVEPARAM);\n\t\tswitch (param->arch) {\n\t\t\tcase basil_node_arch_x2:\n\t\t\t\tadd_alps_req(\" \" BASIL_ATR_ARCH \"=\\\"\" BASIL_VAL_X2 \"\\\"\");\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tadd_alps_req(\" \" BASIL_ATR_ARCH \"=\\\"\" BASIL_VAL_XT \"\\\"\");\n\t\t\t\tbreak;\n\t\t}\n\t\tif (param->width >= 0) {\n\t\t\tsprintf(utilBuffer, \" \" BASIL_ATR_WIDTH \"=\\\"%ld\\\"\",\n\t\t\t\tparam->width);\n\t\t\tadd_alps_req(utilBuffer);\n\t\t}\n\t\t/*\n\t\t * Only output BASIL_ATR_RSVN_MODE if we are not talking\n\t\t * to basil 1.1 orig.\n\t\t */\n\t\tif (!basil11orig) {\n\t\t\tif (param->rsvn_mode == basil_rsvn_mode_exclusive) {\n\t\t\t\tadd_alps_req(\" \" BASIL_ATR_RSVN_MODE \"=\\\"\" BASIL_VAL_EXCLUSIVE \"\\\"\");\n\t\t\t} else if (param->rsvn_mode == basil_rsvn_mode_shared) {\n\t\t\t\tadd_alps_req(\" \" BASIL_ATR_RSVN_MODE \"=\\\"\" BASIL_VAL_SHARED \"\\\"\");\n\t\t\t}\n\t\t}\n\t\tif (param->depth >= 0) {\n\t\t\tsprintf(utilBuffer, \" \" BASIL_ATR_DEPTH \"=\\\"%ld\\\"\",\n\t\t\t\tparam->depth);\n\t\t\tadd_alps_req(utilBuffer);\n\t\t}\n\t\tif (param->nppn > 0) {\n\t\t\tsprintf(utilBuffer, \" \" BASIL_ATR_NPPN \"=\\\"%ld\\\"\",\n\t\t\t\tparam->nppn);\n\t\t\tadd_alps_req(utilBuffer);\n\t\t}\n\t\tif (param->pstate > 0) {\n\t\t\tsprintf(utilBuffer, \" \" BASIL_ATR_PSTATE \"=\\\"%ld\\\"\",\n\t\t\t\tparam->pstate);\n\t\t\tadd_alps_req(utilBuffer);\n\t\t}\n\t\tif (param->nppcu > 0) {\n\t\t\tsprintf(utilBuffer, \" \" BASIL_ATR_NPPCU \"=\\\"0\\\"\");\n\t\t\tadd_alps_req(utilBuffer);\n\t\t}\n\t\tif (param->pgovernor[0] != '\\0') {\n\t\t\tsprintf(utilBuffer, \" \" BASIL_ATR_PGOVERNOR 
\"=\\\"%s\\\"\",\n\t\t\t\tparam->pgovernor);\n\t\t\tadd_alps_req(utilBuffer);\n\t\t}\n\t\tif (vnode_per_numa_node && param->segments[0] != '\\0') {\n\t\t\tsprintf(utilBuffer, \" \" BASIL_ATR_SEGMENTS \"=\\\"%s\\\"\",\n\t\t\t\tparam->segments);\n\t\t\tadd_alps_req(utilBuffer);\n\t\t}\n\t\tif (!param->memory && !param->labels && !param->nodelists) {\n\t\t\tadd_alps_req(\"/>\\n\");\n\t\t\tcontinue;\n\t\t}\n\t\tadd_alps_req(\">\\n\");\n\t\tif (param->memory) {\n\t\t\tadd_alps_req(\"   <\" BASIL_ELM_MEMPARAMARRAY \">\\n\");\n\t\t\tfor (mem = param->memory; mem; mem = mem->next) {\n\t\t\t\tadd_alps_req(\"    <\" BASIL_ELM_MEMPARAM \" \" BASIL_ATR_TYPE \"=\\\"\");\n\t\t\t\tswitch (mem->type) {\n\t\t\t\t\tcase basil_memory_type_hugepage:\n\t\t\t\t\t\tadd_alps_req(BASIL_VAL_HUGEPAGE);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tcase basil_memory_type_virtual:\n\t\t\t\t\t\tadd_alps_req(BASIL_VAL_VIRTUAL);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tdefault:\n\t\t\t\t\t\tadd_alps_req(BASIL_VAL_OS);\n\t\t\t\t}\n\t\t\t\tadd_alps_req(\"\\\"\");\n\t\t\t\tsprintf(utilBuffer,\n\t\t\t\t\t\" \" BASIL_ATR_SIZE_MB \"=\\\"%ld\\\"\",\n\t\t\t\t\tmem->size_mb);\n\t\t\t\tadd_alps_req(utilBuffer);\n\t\t\t\tadd_alps_req(\"/>\\n\");\n\t\t\t}\n\t\t\tadd_alps_req(\"   </\" BASIL_ELM_MEMPARAMARRAY \">\\n\");\n\t\t}\n\t\tif (param->labels) {\n\t\t\tadd_alps_req(\"   <\" BASIL_ELM_LABELPARAMARRAY \">\\n\");\n\t\t\tfor (label = param->labels; label && *label->name;\n\t\t\t     label = label->next) {\n\t\t\t\tadd_alps_req(\"    <\" BASIL_ELM_LABELPARAM \" \" BASIL_ATR_NAME \"=\");\n\t\t\t\tsprintf(utilBuffer, \"\\\"%s\\\"\", label->name);\n\t\t\t\tadd_alps_req(utilBuffer);\n\t\t\t\tadd_alps_req(\" \" BASIL_ATR_TYPE \"=\");\n\t\t\t\tswitch (label->type) {\n\t\t\t\t\tcase basil_label_type_soft:\n\t\t\t\t\t\tsprintf(utilBuffer, \"\\\"%s\\\"\",\n\t\t\t\t\t\t\tBASIL_VAL_SOFT);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tdefault:\n\t\t\t\t\t\tsprintf(utilBuffer, 
\"\\\"%s\\\"\",\n\t\t\t\t\t\t\tBASIL_VAL_HARD);\n\t\t\t\t}\n\t\t\t\tadd_alps_req(utilBuffer);\n\t\t\t\tadd_alps_req(\" \" BASIL_ATR_DISPOSITION \"=\");\n\t\t\t\tswitch (label->disposition) {\n\t\t\t\t\tcase basil_label_disposition_repel:\n\t\t\t\t\t\tsprintf(utilBuffer, \"\\\"%s\\\"\",\n\t\t\t\t\t\t\tBASIL_VAL_REPEL);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tdefault:\n\t\t\t\t\t\tsprintf(utilBuffer, \"\\\"%s\\\"\",\n\t\t\t\t\t\t\tBASIL_VAL_ATTRACT);\n\t\t\t\t}\n\t\t\t\tadd_alps_req(utilBuffer);\n\t\t\t\tadd_alps_req(\"/>\\n\");\n\t\t\t}\n\t\t\tadd_alps_req(\"   </\" BASIL_ELM_LABELPARAMARRAY \">\\n\");\n\t\t}\n\t\tif (param->nodelists) {\n\t\t\tadd_alps_req(\"   <\" BASIL_ELM_NODEPARMARRAY \">\\n\");\n\t\t\tfor (nl = param->nodelists;\n\t\t\t     nl && nl->nodelist && *nl->nodelist;\n\t\t\t     nl = nl->next) {\n\t\t\t\tadd_alps_req(\"    <\" BASIL_ELM_NODEPARAM \">\");\n\t\t\t\tadd_alps_req(nl->nodelist);\n\t\t\t\tadd_alps_req(\"</\" BASIL_ELM_NODEPARAM \">\\n\");\n\t\t\t}\n\t\t\tadd_alps_req(\"   </\" BASIL_ELM_NODEPARMARRAY \">\\n\");\n\t\t}\n\t\tif (param->accelerators) {\n\t\t\tadd_alps_req(\"   <\" BASIL_ELM_ACCELPARAMARRAY \">\\n\");\n\t\t\tfor (accel = param->accelerators; accel;\n\t\t\t     accel = accel->next) {\n\t\t\t\tadd_alps_req(\"    <\" BASIL_ELM_ACCELPARAM \" \" BASIL_ATR_TYPE \"=\\\"\" BASIL_VAL_GPU \"\\\"\");\n\t\t\t\tif (accel->data.gpu) {\n\t\t\t\t\tif (accel->data.gpu->family) {\n\t\t\t\t\t\tsprintf(utilBuffer, \" \" BASIL_ATR_FAMILY \"=\\\"%s\\\"\",\n\t\t\t\t\t\t\taccel->data.gpu->family);\n\t\t\t\t\t\tadd_alps_req(utilBuffer);\n\t\t\t\t\t}\n\t\t\t\t\tif (accel->data.gpu->memory > 0) {\n\t\t\t\t\t\tsprintf(utilBuffer, \" \" BASIL_ATR_MEMORY_MB \"=\\\"%d\\\"\",\n\t\t\t\t\t\t\taccel->data.gpu->memory);\n\t\t\t\t\t\tadd_alps_req(utilBuffer);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tadd_alps_req(\"/>\\n\");\n\t\t\t}\n\t\t\tadd_alps_req(\"   </\" BASIL_ELM_ACCELPARAMARRAY \">\\n\");\n\t\t}\n\t\tadd_alps_req(\"  </\" BASIL_ELM_RESERVEPARAM 
\">\\n\");\n\t}\n\tadd_alps_req(\" </\" BASIL_ELM_RESVPARAMARRAY \">\\n\");\n\tadd_alps_req(\"</\" BASIL_ELM_REQUEST \">\");\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_DEBUG, __func__,\n\t\t  \"Creating ALPS reservation for job.\");\n\tif ((brp = alps_request(requestBuffer, basilversion_inventory)) == NULL) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE, LOG_NOTICE, __func__,\n\t\t\t  \"Failed to create ALPS reservation.\");\n\t\treturn (-1);\n\t}\n\tif (*brp->error != '\\0') {\n\t\tif (brp->error_flags & BASIL_ERR_TRANSIENT) {\n\t\t\tfree_basil_response_data(brp);\n\t\t\treturn (1);\n\t\t} else {\n\t\t\tfree_basil_response_data(brp);\n\t\t\treturn (-1);\n\t\t}\n\t}\n\tsprintf(log_buffer, \"Created ALPS reservation %ld.\",\n\t\tbrp->data.reserve.rsvn_id);\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_DEBUG,\n\t\t  __func__, log_buffer);\n\t*rsvn_id = brp->data.reserve.rsvn_id;\n\tfree_basil_response_data(brp);\n\treturn (0);\n}\n\n/**\n * @brief\n * \tIssue a request to confirm an existing reservation.\n *\n * Called during job initialization.\n * Change from basil 1.0: admin_cookie is renamed to pagg_id\n * and alloc_cookie is deprecated as of 1.1.\n *\n * @param[in] pjob - pointer to job structure\n *\n * @retval 0 success\n * @retval 1 transient error (retry)\n * @retval -1 fatal error\n *\n */\nint\nalps_confirm_reservation(job *pjob)\n{\n\tbasil_response_t *brp;\n\n\tif (!pjob) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE, LOG_NOTICE, __func__,\n\t\t\t  \"Cannot confirm ALPS reservation, invalid job.\");\n\t\treturn (-1);\n\t}\n\t/* Return success if no reservation present. 
*/\n\tif (pjob->ji_extended.ji_ext.ji_reservation < 0) {\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t  \"No MPP reservation to confirm.\");\n\t\treturn (0);\n\t}\n\tif (pjob->ji_extended.ji_ext.ji_pagg == 0) {\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t  \"No PAGG to confirm MPP reservation.\");\n\t\treturn (1);\n\t}\n\tsprintf(log_buffer, \"Confirming ALPS reservation %ld.\",\n\t\tpjob->ji_extended.ji_ext.ji_reservation);\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\tnew_alps_req();\n\tsprintf(requestBuffer, \"<?xml version=\\\"1.0\\\"?>\\n\"\n\t\t\t       \"<\" BASIL_ELM_REQUEST \" \" BASIL_ATR_PROTOCOL \"=\\\"%s\\\" \" BASIL_ATR_METHOD \"=\\\"\" BASIL_VAL_CONFIRM \"\\\" \" BASIL_ATR_RSVN_ID \"=\\\"%ld\\\" \"\n\t\t\t       \"%s =\\\"%llu\\\"/>\",\n\t\tbasilversion_inventory,\n\t\tpjob->ji_extended.ji_ext.ji_reservation,\n\t\tbasil11orig ? 
BASIL_ATR_ADMIN_COOKIE : BASIL_ATR_PAGG_ID,\n\t\tpjob->ji_extended.ji_ext.ji_pagg);\n\tif ((brp = alps_request(requestBuffer, basilversion_inventory)) == NULL) {\n\t\tsprintf(log_buffer, \"Failed to confirm ALPS reservation %ld.\",\n\t\t\tpjob->ji_extended.ji_ext.ji_reservation);\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_NOTICE,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\treturn (-1);\n\t}\n\tif (*brp->error != '\\0') {\n\t\tif (brp->error_flags & BASIL_ERR_TRANSIENT) {\n\t\t\tfree_basil_response_data(brp);\n\t\t\treturn (1);\n\t\t} else {\n\t\t\tfree_basil_response_data(brp);\n\t\t\treturn (-1);\n\t\t}\n\t}\n\tsprintf(log_buffer, \"ALPS reservation confirmed.\");\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\tfree_basil_response_data(brp);\n\treturn (0);\n}\n\n/**\n * @brief\n * \tIssue a request to cancel an existing reservation.\n *\n * Called during job exit/cleanup.\n *\n * @param[in] pjob - pointer to job structure\n * @retval 0 success\n * @retval 1 transient error (retry)\n * @retval -1 fatal error\n *\n */\nint\nalps_cancel_reservation(job *pjob)\n{\n\tchar buf[1024];\n\tbasil_response_t *brp;\n\n\tif (!pjob) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE, LOG_NOTICE, __func__,\n\t\t\t  \"Cannot cancel ALPS reservation, invalid job.\");\n\t\treturn (-1);\n\t}\n\t/* Return success if no reservation present. 
*/\n\tif (pjob->ji_extended.ji_ext.ji_reservation < 0 ||\n\t    pjob->ji_extended.ji_ext.ji_pagg == 0) {\n\t\treturn (0);\n\t}\n\tsprintf(log_buffer, \"Canceling ALPS reservation %ld with PAGG %llu.\",\n\t\tpjob->ji_extended.ji_ext.ji_reservation,\n\t\tpjob->ji_extended.ji_ext.ji_pagg);\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\tnew_alps_req();\n\tsprintf(requestBuffer, \"<?xml version=\\\"1.0\\\"?>\\n\"\n\t\t\t       \"<\" BASIL_ELM_REQUEST \" \" BASIL_ATR_PROTOCOL \"=\\\"%s\\\" \" BASIL_ATR_METHOD \"=\\\"\" BASIL_VAL_RELEASE \"\\\" \" BASIL_ATR_RSVN_ID \"=\\\"%ld\\\" \"\n\t\t\t       \"%s =\\\"%llu\\\"/>\",\n\t\tbasilversion_inventory,\n\t\tpjob->ji_extended.ji_ext.ji_reservation,\n\t\tbasil11orig ? BASIL_ATR_ADMIN_COOKIE : BASIL_ATR_PAGG_ID,\n\t\tpjob->ji_extended.ji_ext.ji_pagg);\n\tif ((brp = alps_request(requestBuffer, basilversion_inventory)) == NULL) {\n\t\tsprintf(log_buffer, \"Failed to cancel ALPS reservation %ld.\",\n\t\t\tpjob->ji_extended.ji_ext.ji_reservation);\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_NOTICE,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\treturn (-1);\n\t}\n\tif (*brp->error != '\\0') {\n\t\tif (brp->error_flags & BASIL_ERR_TRANSIENT) {\n\t\t\tfree_basil_response_data(brp);\n\t\t\treturn (1);\n\t\t} else {\n\t\t\t/*\n\t\t\t * check if it's a \"No entry for resID\"\n\t\t\t * error message. 
If so, we will assume the ALPS\n\t\t\t * reservation went away due to a prior release\n\t\t\t * request and fall through to the successful exit.\n\t\t\t * If for some reason Cray changes this error string,\n\t\t\t * the behavior of PBS will be to continue to try to\n\t\t\t * cancel the reservation (even though ALPS does not\n\t\t\t * know about this reservation) and the job will\n\t\t\t * remain in the \"E\" state until the\n\t\t\t * alps_release_timeout time has elapsed.\n\t\t\t */\n\t\t\tmemset(buf, 0, sizeof(buf));\n\t\t\tsnprintf(buf, sizeof(buf), \"No entry for resId %ld\",\n\t\t\t\t pjob->ji_extended.ji_ext.ji_reservation);\n\t\t\tif (strstr(brp->error, buf) == NULL) {\n\t\t\t\tsprintf(log_buffer, \"Failed to cancel ALPS \"\n\t\t\t\t\t\t    \"reservation %ld. BASIL response error: %s\",\n\t\t\t\t\tpjob->ji_extended.ji_ext.ji_reservation,\n\t\t\t\t\tbrp->error);\n\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t  LOG_NOTICE, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\tfree_basil_response_data(brp);\n\t\t\t\treturn (-1);\n\t\t\t}\n\t\t}\n\t}\n\n\t/*\n\t * There are still claims on the ALPS reservation, so just\n\t * treat it like a transient error so we keep trying to\n\t * release the ALPS reservation.\n\t */\n\tif (brp->data.release.claims > 0) {\n\t\tsprintf(log_buffer, \"ALPS reservation %ld has %u claims \"\n\t\t\t\t    \"against it\",\n\t\t\tpjob->ji_extended.ji_ext.ji_reservation,\n\t\t\tbrp->data.release.claims);\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\tfree_basil_response_data(brp);\n\t\treturn (1);\n\t}\n\n\tsprintf(log_buffer, \"ALPS reservation canceled.\");\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\tfree_basil_response_data(brp);\n\treturn (0);\n}\n\n/**\n * @brief\n * Issue a request to switch an existing reservation \"OUT\" (suspend it)\n * or \"IN\" (resume it).\n *\n * Called by mother superior when a 
job needs to be suspended or resumed.\n *\n * @retval 0 success\n * @retval 1 transient error (retry)\n * @retval -1 fatal error\n */\nint\nalps_suspend_resume_reservation(job *pjob, basil_switch_action_t switchval)\n{\n\tbasil_response_t *brp;\n\tchar actionstring[10] = \"\";\n\tchar switch_buf[10] = \"\";\n\n\tif (switchval == basil_switch_action_out) {\n\t\tstrcpy(switch_buf, \"suspend\");\n\t\tstrcpy(actionstring, BASIL_VAL_OUT);\n\t} else if (switchval == basil_switch_action_in) {\n\t\tstrcpy(switch_buf, \"resume\");\n\t\tstrcpy(actionstring, BASIL_VAL_IN);\n\t} else {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"Invalid switch action %d.\", switchval);\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE, LOG_NOTICE,\n\t\t\t  (char *) __func__, log_buffer);\n\t\treturn (-1);\n\t}\n\n\tif (!pjob) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"Cannot %s (%d), invalid job.\", switch_buf, switchval);\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE, LOG_NOTICE,\n\t\t\t  (char *) __func__, log_buffer);\n\t\treturn (-1);\n\t}\n\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t \"Switching ALPS reservation %ld to %s\",\n\t\t pjob->ji_extended.ji_ext.ji_reservation, switch_buf);\n\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\tnew_alps_req();\n\tsnprintf(utilBuffer, sizeof(utilBuffer), \"<?xml version=\\\"1.0\\\"?>\\n\"\n\t\t\t\t\t\t \"<\" BASIL_ELM_REQUEST \" \" BASIL_ATR_PROTOCOL \"=\\\"%s\\\" \" BASIL_ATR_METHOD \"=\\\"\" BASIL_VAL_SWITCH \"\\\">\\n\",\n\t\t basilversion_inventory);\n\tadd_alps_req(utilBuffer);\n\tadd_alps_req(\" <\" BASIL_ELM_RSVNARRAY \">\\n\");\n\tsnprintf(utilBuffer, sizeof(utilBuffer),\n\t\t \"  <\" BASIL_ELM_RESERVATION \" \" BASIL_ATR_RSVN_ID \"=\\\"%ld\\\" \" BASIL_ATR_ACTION \"=\\\"%s\\\"/>\\n\",\n\t\t pjob->ji_extended.ji_ext.ji_reservation,\n\t\t actionstring);\n\tadd_alps_req(utilBuffer);\n\tadd_alps_req(\" </\" BASIL_ELM_RSVNARRAY 
\\n\");\n\tadd_a">
\">\\n\");\n\tadd_alps_req(\"</\" BASIL_ELM_REQUEST \">\");\n\tif ((brp = alps_request(requestBuffer, basilversion_inventory)) == NULL) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"Failed to switch %s ALPS reservation.\", actionstring);\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE, LOG_NOTICE,\n\t\t\t  (char *) __func__, log_buffer);\n\t\treturn (-1);\n\t}\n\tif (*brp->error != '\\0') {\n\t\t/* A TRANSIENT error would mean the previous switch\n\t\t * method had not completed\n\t\t */\n\t\tif (brp->error_flags & BASIL_ERR_TRANSIENT) {\n\t\t\tfree_basil_response_data(brp);\n\t\t\tbrp = NULL;\n\t\t\treturn (1);\n\t\t} else {\n\t\t\tfree_basil_response_data(brp);\n\t\t\tbrp = NULL;\n\t\t\treturn (-1);\n\t\t}\n\t}\n\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_NODE, LOG_DEBUG,\n\t\t  (char *) __func__, \"Made the ALPS SWITCH request.\");\n\tfree_basil_response_data(brp);\n\treturn (0);\n}\n\n/**\n * @brief\n * Confirm that an ALPS reservation has finished switching, i.e. that it\n * has reached either \"SUSPEND\" or \"RUN\" status.\n *\n * @retval 0 success\n * @retval 1 transient error (retry)\n * @retval 2 transient error (retry) - When reservation is empty\n * @retval -1 fatal error\n */\nint\nalps_confirm_suspend_resume(job *pjob, basil_switch_action_t switchval)\n{\n\tbasil_response_t *brp = NULL;\n\tbasil_response_query_status_res_t *res = NULL;\n\n\tif (!pjob) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_JOB, LOG_ERR, (char *) __func__,\n\t\t\t  \"Cannot confirm ALPS reservation, invalid job.\");\n\t\treturn (-1);\n\t}\n\t/* If no reservation ID return an error */\n\tif (pjob->ji_extended.ji_ext.ji_reservation < 0) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t  \"No ALPS reservation ID provided.  
Can't confirm SWITCH status.\");\n\t\treturn (-1);\n\t}\n\n\tif ((switchval != basil_switch_action_out) &&\n\t    (switchval != basil_switch_action_in)) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"Invalid switch action %d.\", switchval);\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE, LOG_ERR,\n\t\t\t  (char *) __func__, log_buffer);\n\t\treturn (-1);\n\t}\n\n\tsprintf(log_buffer, \"Confirming ALPS reservation %ld SWITCH status.\",\n\t\tpjob->ji_extended.ji_ext.ji_reservation);\n\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\tnew_alps_req();\n\tsprintf(utilBuffer, \"<?xml version=\\\"1.0\\\"?>\\n\"\n\t\t\t    \"<\" BASIL_ELM_REQUEST \" \" BASIL_ATR_PROTOCOL \"=\\\"%s\\\" \" BASIL_ATR_METHOD \"=\\\"\" BASIL_VAL_QUERY \"\\\" \" BASIL_ATR_TYPE \"=\\\"\" BASIL_VAL_STATUS \"\\\">\\n\",\n\t\tbasilversion_inventory);\n\tadd_alps_req(utilBuffer);\n\tadd_alps_req(\" <\" BASIL_ELM_RSVNARRAY \">\\n\");\n\tsprintf(utilBuffer,\n\t\t\"  <\" BASIL_ELM_RESERVATION \" \" BASIL_ATR_RSVN_ID \"=\\\"%ld\\\"/>\\n\",\n\t\tpjob->ji_extended.ji_ext.ji_reservation);\n\tadd_alps_req(utilBuffer);\n\tadd_alps_req(\" </\" BASIL_ELM_RSVNARRAY \">\\n\");\n\tadd_alps_req(\"</\" BASIL_ELM_REQUEST \">\");\n\n\tif ((brp = alps_request(requestBuffer, basilversion_inventory)) == NULL) {\n\t\tsprintf(log_buffer, \"Failed to confirm ALPS reservation %ld has been switched.\",\n\t\t\tpjob->ji_extended.ji_ext.ji_reservation);\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_NOTICE,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\treturn (-1);\n\t}\n\tif (*brp->error != '\\0') {\n\t\tif (brp->error_flags & BASIL_ERR_TRANSIENT) {\n\t\t\tfree_basil_response_data(brp);\n\t\t\treturn (1);\n\t\t} else {\n\t\t\tfree_basil_response_data(brp);\n\t\t\treturn (-1);\n\t\t}\n\t}\n\n\t/*\n\t * Now check for status of the suspend/resume\n\t * INVALID is considered a permanent error, just error out\n\t */\n\tres = 
brp->data.query.data.status.reservation;\n\tif (res->status == basil_reservation_status_invalid) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"ALPS SWITCH status is = 'INVALID'\");\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_NOTICE,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\tfree_basil_response_data(brp);\n\t\treturn (-1);\n\t}\n\n\t/*\n\t * The MIX, SWITCH and UNKNOWN status response types are considered\n\t * transient, we will need to check the status again.\n\t */\n\tif (res->status == basil_reservation_status_mix) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"ALPS SWITCH status is = 'MIX', keep checking \"\n\t\t\t \"ALPS status.\");\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\tfree_basil_response_data(brp);\n\t\treturn (1);\n\t}\n\tif (res->status == basil_reservation_status_switch) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"ALPS SWITCH status is = 'SWITCH', keep checking \"\n\t\t\t \"ALPS status.\");\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\tfree_basil_response_data(brp);\n\t\treturn (1);\n\t}\n\tif (res->status == basil_reservation_status_unknown) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"ALPS SWITCH status is = 'UNKNOWN', keep checking \"\n\t\t\t \"ALPS status.\");\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\tfree_basil_response_data(brp);\n\t\treturn (1);\n\t}\n\n\t/* What we expect for status depends on whether we are trying to SWITCH IN or OUT */\n\tif (res->status == basil_reservation_status_run) {\n\t\tif (switchval == basil_switch_action_out) {\n\t\t\t/* We want to suspend the reservation's applications, so we\n\t\t\t * need to keep checking status\n\t\t\t */\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"ALPS SWITCH status is 'RUN', and \"\n\t\t\t\t \"'SUSPEND' was 
requested, keep checking \"\n\t\t\t\t \"ALPS status.\");\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\tfree_basil_response_data(brp);\n\t\t\treturn (1);\n\t\t} else {\n\t\t\t/* We are trying to run the application again, and it is running! */\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"ALPS reservation %ld has been successfully \"\n\t\t\t\t \"switched to 'RUN'.\",\n\t\t\t\t pjob->ji_extended.ji_ext.ji_reservation);\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\tfree_basil_response_data(brp);\n\t\t\treturn (0);\n\t\t}\n\t}\n\tif (res->status == basil_reservation_status_suspend) {\n\t\tif (switchval == basil_switch_action_in) {\n\t\t\t/* We want to run the reservation's applications, so we\n\t\t\t * need to keep checking status\n\t\t\t */\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"ALPS SWITCH status is 'SUSPEND', \"\n\t\t\t\t \"and 'RUN' was requested, keep checking \"\n\t\t\t\t \"ALPS status.\");\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\tfree_basil_response_data(brp);\n\t\t\treturn (1);\n\t\t} else {\n\t\t\t/* We are trying to suspend the application, and it is! */\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"ALPS reservation %ld has been successfully \"\n\t\t\t\t \"switched to 'SUSPEND'.\",\n\t\t\t\t pjob->ji_extended.ji_ext.ji_reservation);\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\tfree_basil_response_data(brp);\n\t\t\treturn (0);\n\t\t}\n\t}\n\t/*\n\t * Due to a race condition in ALPS where ALPS wrongly returns \"EMPTY\"\n\t * (which means no claim on the ALPS resv) when there may be a claim\n\t * on the reservation, PBS must work around this by polling for status\n\t * again when we get \"EMPTY\". 
Thus we will print at DEBUG2 level so\n\t * we can be aware of how often the race condition is encountered.\n\t */\n\tif ((res->status == basil_reservation_status_empty) &&\n\t    (switchval == basil_switch_action_out)) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"ALPS reservation %ld SWITCH status is = 'EMPTY'.\",\n\t\t\t pjob->ji_extended.ji_ext.ji_reservation);\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\tfree_basil_response_data(brp);\n\t\treturn (2);\n\t}\n\t/* Getting the status of EMPTY while SWITCH IN (resume) means\n\t * there was nothing to do, so consider the SWITCH done.\n\t */\n\tif ((res->status == basil_reservation_status_empty) &&\n\t    (switchval == basil_switch_action_in)) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"ALPS reservation %ld has been successfully switched.\",\n\t\t\t pjob->ji_extended.ji_ext.ji_reservation);\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t}\n\tfree_basil_response_data(brp);\n\treturn (0);\n}\n\n/**\n * Issue an ENGINE query and determine which version of BASIL\n * we should use.\n */\nstatic void\nalps_engine_query(void)\n{\n\tbasil_response_t *brp = NULL;\n\tchar *ver = NULL;\n\tchar *tmp = NULL;\n\tint i = 0;\n\tint found_ver = 0;\n\n\tnew_alps_req();\n\tfor (i = 0; (pbs_supported_basil_versions[i] != NULL); i++) {\n\t\t/* Use an explicit \"%s\" format so the version string is never\n\t\t * treated as a format string itself.\n\t\t */\n\t\tsprintf(basilversion_inventory, \"%s\", pbs_supported_basil_versions[i]);\n\t\tsprintf(requestBuffer, \"<?xml version=\\\"1.0\\\"?>\\n\"\n\t\t\t\t       \"<\" BASIL_ELM_REQUEST \" \" BASIL_ATR_PROTOCOL \"=\\\"%s\\\" \" BASIL_ATR_METHOD \"=\\\"\" BASIL_VAL_QUERY \"\\\" \" BASIL_ATR_TYPE \"=\\\"\" BASIL_VAL_ENGINE \"\\\"/>\",\n\t\t\tbasilversion_inventory);\n\t\tif ((brp = alps_request(requestBuffer, basilversion_inventory)) != NULL) {\n\t\t\tif (*brp->error == '\\0') {\n\t\t\t\t/*\n\t\t\t\t * There are no errors in the response data.\n\t\t\t\t * Check the 
response method to ensure we have\n\t\t\t\t * the correct response.\n\t\t\t\t */\n\t\t\t\tif (brp->method == basil_method_query) {\n\t\t\t\t\t/* Check if 'basil_support' is set before trying to strdup.\n\t\t\t\t\t * If basil_support is not set, it's likely\n\t\t\t\t\t * CLE 2.2 which doesn't have 'basil_support' but we'll\n\t\t\t\t\t * check that later.\n\t\t\t\t\t */\n\t\t\t\t\tif (brp->data.query.data.engine.basil_support != NULL) {\n\t\t\t\t\t\tver = strdup(brp->data.query.data.engine.basil_support);\n\t\t\t\t\t\tif (ver != NULL) {\n\t\t\t\t\t\t\ttmp = strtok(ver, \",\");\n\t\t\t\t\t\t\twhile (tmp) {\n\t\t\t\t\t\t\t\tif ((strcmp(basilversion_inventory, tmp)) == 0) {\n\t\t\t\t\t\t\t\t\t/* Success! We found a version to speak */\n\t\t\t\t\t\t\t\t\tsprintf(log_buffer, \"The basilversion is \"\n\t\t\t\t\t\t\t\t\t\t\t    \"set to %s\",\n\t\t\t\t\t\t\t\t\t\tbasilversion_inventory);\n\t\t\t\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG,\n\t\t\t\t\t\t\t\t\t\t  PBS_EVENTCLASS_NODE,\n\t\t\t\t\t\t\t\t\t\t  LOG_DEBUG, __func__, log_buffer);\n\t\t\t\t\t\t\t\t\tfound_ver = 1;\n\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\ttmp = strtok(NULL, \",\");\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t/* We didn't find the version we were looking for\n\t\t\t\t\t\t\t * in basil_support, even though the engine query\n\t \t\t\t\t\t\t * itself succeeded. Something is wrong.\n\t \t\t\t\t\t\t */\n\t\t\t\t\t\t\tif (found_ver == 0) {\n\t\t\t\t\t\t\t\tsprintf(log_buffer, \"ALPS ENGINE query failed. \"\n\t\t\t\t\t\t\t\t\t\t    \"Supported BASIL versions returned: \"\n\t\t\t\t\t\t\t\t\t\t    \"'%s'\",\n\t\t\t\t\t\t\t\t\tver);\n\t\t\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE,\n\t\t\t\t\t\t\t\t\t  LOG_NOTICE, __func__, log_buffer);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t/* No memory */\n\t\t\t\t\t\t\tsprintf(log_buffer, \"ALPS ENGINE query failed. 
No \"\n\t\t\t\t\t\t\t\t\t    \"memory\");\n\t\t\t\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_NODE,\n\t\t\t\t\t\t\t\t  LOG_NOTICE, __func__, log_buffer);\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tif ((strcmp(basilversion_inventory, BASIL_VAL_VERSION_1_1)) == 0) {\n\t\t\t\t\t\t\t/* basil_support isn't in the XML response\n\t\t\t\t\t\t\t * and the XML wasn't junk, so\n\t\t\t\t\t\t\t * assume CLE 2.2 is running.\n\t\t\t\t\t\t\t */\n\t\t\t\t\t\t\tsprintf(log_buffer, \"Assuming CLE 2.2 is running, \"\n\t\t\t\t\t\t\t\t\t    \"setting the basilversion to %s\",\n\t\t\t\t\t\t\t\tbasilversion_inventory);\n\t\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_NODE,\n\t\t\t\t\t\t\t\t  LOG_DEBUG, __func__, log_buffer);\n\t\t\t\t\t\t\tsprintf(log_buffer, \"The basilversion is \"\n\t\t\t\t\t\t\t\t\t    \"set to %s\",\n\t\t\t\t\t\t\t\tbasilversion_inventory);\n\t\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG,\n\t\t\t\t\t\t\t\t  PBS_EVENTCLASS_NODE,\n\t\t\t\t\t\t\t\t  LOG_DEBUG, __func__, log_buffer);\n\t\t\t\t\t\t\tfound_ver = 1;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\t/* wrong method in the response */\n\t\t\t\t\tsprintf(log_buffer, \"Wrong method, expected: %d but \"\n\t\t\t\t\t\t\t    \"got: %d\",\n\t\t\t\t\t\tbasil_method_query, brp->method);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE,\n\t\t\t\t\t\t  LOG_DEBUG, __func__, log_buffer);\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\t/* There was an error in the BASIL response */\n\t\t\t\tsprintf(log_buffer, \"Error in BASIL response: %s\", brp->error);\n\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_DEBUG,\n\t\t\t\t\t  __func__, log_buffer);\n\t\t\t}\n\t\t} else {\n\t\t\tsprintf(log_buffer, \"ALPS ENGINE query failed with BASIL \"\n\t\t\t\t\t    \"version %s.\",\n\t\t\t\tbasilversion_inventory);\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE,\n\t\t\t\t  LOG_NOTICE, __func__, log_buffer);\n\t\t}\n\t\tfree(ver);\n\t\tver = NULL;\n\t\tfree_basil_response_data(brp);\n\t\tbrp = NULL;\n\t\tif 
(found_ver != 0) {\n\t\t\t/* Found it, let's get outta here. */\n\t\t\tbreak;\n\t\t}\n\t}\n\n\t/*\n\t * We didn't find the right BASIL version.\n\t * Set basilversion to \"UNDEFINED\"\n\t */\n\tif (found_ver == 0) {\n\t\tsprintf(basilversion_inventory, BASIL_VAL_UNDEFINED);\n\t\tsprintf(log_buffer, \"No BASIL versions are understood.\");\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE,\n\t\t\t  LOG_NOTICE, __func__, log_buffer);\n\t} else {\n\t\t/* we found a BASIL version that works\n\t\t * Set basilver so the rest of the code can use switch\n\t\t * statements to choose the appropriate code path\n\t\t */\n\t\tif ((strcmp(basilversion_inventory, BASIL_VAL_VERSION_1_4)) == 0) {\n\t\t\tbasilver = basil_1_4;\n\t\t} else if ((strcmp(basilversion_inventory, BASIL_VAL_VERSION_1_3)) == 0) {\n\t\t\tbasilver = basil_1_3;\n\t\t} else if ((strcmp(basilversion_inventory, BASIL_VAL_VERSION_1_2)) == 0) {\n\t\t\tbasilver = basil_1_2;\n\t\t} else if ((strcmp(basilversion_inventory, BASIL_VAL_VERSION_1_1)) == 0) {\n\t\t\tbasilver = basil_1_1;\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\t Issue a request for a system inventory including nodes, CPUs, and\n * \tassigned applications.\n *\n * @return\tint\n * @retval  0   : success\n * @retval  -1  : failure\n */\nint\nalps_inventory(void)\n{\n\tint rc = 0;\n\tbasil_response_t *brp;\n\tfirst_compute_node = 1;\n\n\t/* Determine what BASIL version we should speak */\n\talps_engine_query();\n\tnew_alps_req();\n\tsprintf(requestBuffer, \"<?xml version=\\\"1.0\\\"?>\\n\"\n\t\t\t       \"<\" BASIL_ELM_REQUEST \" \" BASIL_ATR_PROTOCOL \"=\\\"%s\\\" \" BASIL_ATR_METHOD \"=\\\"\" BASIL_VAL_QUERY \"\\\" \" BASIL_ATR_TYPE \"=\\\"\" BASIL_VAL_INVENTORY \"\\\"/>\",\n\t\tbasilversion_inventory);\n\tif ((brp = alps_request(requestBuffer, basilversion_inventory)) == NULL) {\n\t\tsprintf(log_buffer, \"ALPS inventory request failed.\");\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE,\n\t\t\t  LOG_NOTICE, __func__, log_buffer);\n\t\treturn 
-1;\n\t}\n\tif (basil_inventory != NULL)\n\t\tfree(basil_inventory);\n\tbasil_inventory = strdup(alps_client_out);\n\tif (basil_inventory == NULL) {\n\t\tsprintf(log_buffer, \"failed to save inventory response\");\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_NODE, LOG_ERR,\n\t\t\t  __func__, log_buffer);\n\t}\n\trc = inventory_to_vnodes(brp);\n\tfree_basil_response_data(brp);\n\treturn rc;\n}\n\n/**\n *\n * @brief System Query handling (for KNL Nodes).\n * \t  Invoked from dep_topology() in mom_mach.c before alps_inventory()\n * \t  (which handles processing for non-KNL Cray Compute Nodes) is called.\n *\t  Checks if BASIL 1.7 is supported, then makes a System Query request and\n *\t  populates System Query related structures.\n *\n * @return void\n *\n */\nvoid\nalps_system_KNL(void)\n{\n\t/*\n\t * Determine if ALPS supports the BASIL 1.7 protocol. We are only\n\t * partially supporting the BASIL 1.7 protocol (for the System Query).\n\t */\n\talps_engine_query_KNL();\n\n\tif (basil_1_7_supported)\n\t\tlog_event(PBSEVENT_DEBUG4, PBS_EVENTCLASS_NODE, LOG_DEBUG, __func__,\n\t\t\t  \"This Cray system supports the BASIL 1.7 protocol.\");\n\telse {\n\t\tlog_event(PBSEVENT_DEBUG4, PBS_EVENTCLASS_NODE, LOG_ERR, __func__,\n\t\t\t  \"This Cray system does not support the BASIL 1.7 protocol.\");\n\t\treturn;\n\t}\n\n\t/*\n\t * Allocate a buffer (requestBuffer_knl) for a new ALPS request (System Query).\n\t * A nonzero return value indicates failure; return at this point without proceeding\n\t * with System Query processing.\n\t */\n\tif (init_KNL_alps_req_buf() != 0)\n\t\treturn;\n\n\t/* Create a System (BASIL 1.7) Query request to fetch KNL information. 
*/\n\tsnprintf(requestBuffer_knl, UTIL_BUFFER_LEN, \"<?xml version=\\\"1.0\\\"?>\\n\"\n\t\t\t\t\t\t     \"<\" BASIL_ELM_REQUEST \" \" BASIL_ATR_PROTOCOL \"=\\\"%s\\\" \" BASIL_ATR_METHOD \"=\\\"\" BASIL_VAL_QUERY \"\\\" \" BASIL_ATR_TYPE \"=\\\"\" BASIL_VAL_SYSTEM \"\\\"/>\",\n\t\t basilversion_system);\n\n\t/*\n\t * The 'basil_ver' argument is checked in response_start() (a callback\n\t * function invoked during alps_request() processing). Flow of control can\n\t * arrive at response_start() from either alps_system_KNL() or alps_inventory().\n\t * This argument helps make the distinction.\n\t */\n\tif ((brp_knl = alps_request(requestBuffer_knl, basilversion_system)) == NULL) {\n\n\t\t/* Failure to get KNL Node information from ALPS. */\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE, LOG_NOTICE, __func__,\n\t\t\t  \"ALPS System Query request failed.\");\n\t\treturn;\n\t}\n}\n\n/**\n * @brief Issue an ENGINE query to determine if BASIL 1.7 is supported.\n * \t  We are partially supporting the BASIL 1.7 protocol (for the System Query).\n *\t  If the BASIL 1.7 protocol is found in the query response, set a flag\n *\t  that will be checked in alps_system_KNL().\n *\n * @return void\n */\nstatic void\nalps_engine_query_KNL(void)\n{\n\tbasil_response_t *brp_eng;\n\n\t/*\n\t * Allocate a buffer (requestBuffer_knl) for a new ALPS request (Engine Query).\n\t * A nonzero return value indicates failure.\n\t */\n\tif (init_KNL_alps_req_buf() != 0)\n\t\treturn;\n\n\t/* This is set to \"1.1\", since PBS may be running on a system that may or */\n\t/* may not support BASIL 1.7. BASIL 1.1 is the lowest version that supports */\n\t/* the ENGINE Query. 
*/\n\n\tsprintf(requestBuffer_knl, \"<?xml version=\\\"1.0\\\"?>\\n\"\n\t\t\t\t   \"<\" BASIL_ELM_REQUEST \" \" BASIL_ATR_PROTOCOL \"=\\\"%s\\\" \" BASIL_ATR_METHOD \"=\\\"\" BASIL_VAL_QUERY \"\\\" \" BASIL_ATR_TYPE \"=\\\"\" BASIL_VAL_ENGINE \"\\\"/>\",\n\t\tBASIL_VAL_VERSION_1_1);\n\tif ((brp_eng = alps_request(requestBuffer_knl, BASIL_VAL_VERSION_1_1)) != NULL) {\n\t\t/* Proceed if no errors in the response data. */\n\t\tif (*brp_eng->error == '\\0') {\n\t\t\t/* Ensure we have the correct response. */\n\t\t\tif (brp_eng->method == basil_method_query) {\n\t\t\t\t/* Check if 'basil_support' is set before trying to strdup. */\n\t\t\t\tif (brp_eng->data.query.data.engine.basil_support != NULL) {\n\n\t\t\t\t\tif (strstr(brp_eng->data.query.data.engine.basil_support,\n\t\t\t\t\t\t   BASIL_VAL_VERSION_1_7) != NULL) {\n\t\t\t\t\t\tbasil_1_7_supported = 1;\n\t\t\t\t\t\tsnprintf(basilversion_system, sizeof(basilversion_system),\n\t\t\t\t\t\t\t BASIL_VAL_VERSION_1_7);\n\t\t\t\t\t} else\n\t\t\t\t\t\tbasil_1_7_supported = 0;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\t/* Wrong method in the response. */\n\t\t\t\tsprintf(log_buffer, \"Wrong method, expected: %d but \"\n\t\t\t\t\t\t    \"got: %d\",\n\t\t\t\t\tbasil_method_query, brp_eng->method);\n\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE,\n\t\t\t\t\t  LOG_DEBUG, __func__, log_buffer);\n\t\t\t}\n\t\t} else {\n\t\t\t/* There was an error in the BASIL response. */\n\t\t\tsprintf(log_buffer, \"Error in BASIL response: %s\", brp_eng->error);\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_DEBUG,\n\t\t\t\t  __func__, log_buffer);\n\t\t}\n\t}\n\n\tfree_basil_response_data(brp_eng);\n}\n\n/**\n * @brief Process the System (BASIL 1.7) Query Response. This includes creation\n *\t  of KNL vnodes.\n *\n * @return void\n */\nvoid\nsystem_to_vnodes_KNL(void)\n{\n\tbasil_response_query_system_t *sys_knl;\n\n\tif (!basil_1_7_supported)\n\t\treturn;\n\n\t/* System 1.7 Query failed to get KNL Node information from ALPS. 
*/\n\tif (!brp_knl)\n\t\treturn;\n\n\tif (brp_knl->method != basil_method_query) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"Wrong method: %d\",\n\t\t\t brp_knl->method);\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_DEBUG, __func__,\n\t\t\t  log_buffer);\n\t\treturn;\n\t}\n\n\tif (brp_knl->data.query.type != basil_query_system) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"Wrong query type: %d\",\n\t\t\t brp_knl->data.query.type);\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_DEBUG, __func__,\n\t\t\t  log_buffer);\n\t\treturn;\n\t}\n\n\tif (*brp_knl->error != '\\0') {\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"Error in BASIL response: %s\",\n\t\t\t brp_knl->error);\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_DEBUG, __func__,\n\t\t\t  log_buffer);\n\t\treturn;\n\t}\n\n\tsys_knl = &brp_knl->data.query.data.system;\n\n\tcreate_vnodes_KNL(sys_knl);\n\n\tfree_basil_response_data(brp_knl);\n}\n\n/**\n *\n * @brief Create KNL vnodes.\n * \t  'vnlp' is a global pointer that gets freed in process_hup() (mom_main.c)\n * \t  during a MoM restart. The vnode state gets cleared and we repeat the vnode\n *\t  creation cycle i.e. creation of non-KNL vnodes (in inventory_to_vnodes())\n *\t  followed by KNL vnodes in this function.\n *\n * @param[in] sys_knl ALPS BASIL System Query response.\n *\n * @return void\n */\nstatic void\ncreate_vnodes_KNL(basil_response_query_system_t *sys_knl)\n{\n\tchar *attr, *arch;\n\tchar vname[VNODE_NAME_LEN];\n\tchar utilBuffer_knl[(UTIL_BUFFER_LEN * sizeof(char))];\n\tint ncpus_per_knl;\n\tint node_idx = 0;\n\tint node_count = 0;\n\tlong node_id = 0;\n\tlong *nid_arr = NULL;\n\tint atype = READ_WRITE | ATR_DFLAG_CVTSLT;\n\tchar mpphost_knl[BASIL_STRING_LONG];\n\n\tbasil_system_element_t *node_group;\n\n\tif (sys_knl == NULL)\n\t\treturn;\n\n\tsnprintf(mpphost_knl, sizeof(mpphost_knl), \"%s\", sys_knl->mpp_host);\n\n\t/*\n\t * Iterate through all the Node groups in the System Query Response. 
Each\n\t * Node group may contain information about multiple (a 'range list') of KNL Nodes.\n\t * Each XML Element <Nodes ...> </Nodes> encapsulates information about a group of Nodes.\n\t */\n\tfor (node_group = sys_knl->elements; node_group; node_group = node_group->next) {\n\n\t\t/*\n\t\t * The System Query XML Response contains information about KNL and\n\t\t * non-KNL Nodes. We are only interested in KNL nodes that are in\n\t\t * \"batch\" mode and in the \"up\" state.\n\t\t */\n\t\tif (exclude_from_KNL_processing(node_group, 1))\n\t\t\tcontinue;\n\n\t\t/*\n\t\t * Extract NIDs (Node ID) from node_group->nidlist.\n\t\t * If nidlist is empty, node_count gets set to 0 and vnode\n\t\t * creation in the inner for() is bypassed.\n\t\t */\n\t\tnid_arr = process_nodelist_KNL(node_group->nidlist, &node_count);\n\n\t\t/*\n\t\t * Create vnodes for each of the KNL Nodes listed in this Node group.\n\t\t * All KNL Nodes within a Node Group will have similar vnode attributes,\n\t\t * since all attributes in each <Nodes ...> XML element apply to all\n\t\t * Nodes listed in the 'rangelist'.\n\t\t */\n\t\tfor (node_idx = 0; node_idx < node_count; node_idx++) {\n\n\t\t\tnode_id = nid_arr[node_idx];\n\t\t\tsnprintf(vname, VNODE_NAME_LEN, \"%s_%ld\", mpphost_knl, node_id);\n\n\t\t\tif (first_compute_node) {\n\t\t\t\t/*\n\t\t\t\t * Create the name of the very first vnode so we\n\t\t\t\t * can attach topology info to it.\n\t\t\t\t */\n\n\t\t\t\tattr = ATTR_NODE_TopologyInfo;\n\t\t\t\tif (vn_addvnr(vnlp, vname, attr, (char *) basil_inventory,\n\t\t\t\t\t      ATR_TYPE_STR, READ_ONLY, NULL) == -1)\n\t\t\t\t\tgoto bad_vnl;\n\t\t\t\tfirst_compute_node = 0;\n\t\t\t}\n\n\t\t\tattr = \"sharing\";\n\t\t\tif (vn_addvnr(vnlp, vname, attr, ND_Force_Exclhost, 0, 0, NULL) == -1)\n\t\t\t\tgoto bad_vnl;\n\n\t\t\tattr = \"resources_available.vntype\";\n\t\t\tif (vn_addvnr(vnlp, vname, attr, CRAY_COMPUTE, 0, 0, NULL) == -1)\n\t\t\t\tgoto bad_vnl;\n\n\t\t\tattr = 
\"resources_available.PBScrayhost\";\n\t\t\tif (vn_addvnr(vnlp, vname, attr, mpphost_knl, ATR_TYPE_STR, atype, NULL) == -1)\n\t\t\t\tgoto bad_vnl;\n\n\t\t\tattr = \"resources_available.arch\";\n\t\t\tarch = BASIL_VAL_XT;\n\t\t\tif (vn_addvnr(vnlp, vname, attr, arch, ATR_TYPE_STR, atype, NULL) == -1)\n\t\t\t\tgoto bad_vnl;\n\n\t\t\tattr = \"resources_available.host\";\n\t\t\tsnprintf(utilBuffer_knl, sizeof(utilBuffer_knl), \"%s_%ld\", mpphost_knl, node_id);\n\t\t\tif (vn_addvnr(vnlp, vname, attr, utilBuffer_knl, 0, 0, NULL) == -1)\n\t\t\t\tgoto bad_vnl;\n\n\t\t\tattr = \"resources_available.PBScraynid\";\n\t\t\tsnprintf(utilBuffer_knl, sizeof(utilBuffer_knl), \"%ld\", node_id);\n\t\t\tif (vn_addvnr(vnlp, vname, attr, utilBuffer_knl, ATR_TYPE_STR, atype, NULL) == -1)\n\t\t\t\tgoto bad_vnl;\n\n\t\t\tif (vnode_per_numa_node) {\n\t\t\t\tattr = \"resources_available.PBScrayseg\";\n\t\t\t\tif (vn_addvnr(vnlp, vname, attr, \"0\", ATR_TYPE_STR, atype, NULL) == -1)\n\t\t\t\t\tgoto bad_vnl;\n\t\t\t}\n\t\t\tattr = \"resources_available.ncpus\";\n\t\t\tncpus_per_knl = atoi(node_group->compute_units);\n\t\t\tsnprintf(utilBuffer_knl, sizeof(utilBuffer_knl), \"%d\", ncpus_per_knl);\n\t\t\tif (vn_addvnr(vnlp, vname, attr, utilBuffer_knl, 0, 0, NULL) == -1)\n\t\t\t\tgoto bad_vnl;\n\n\t\t\t/* avlmem is conventional DRAM mem. avlmem = page_size_kb * page_count. */\n\t\t\tattr = \"resources_available.mem\";\n\t\t\tsnprintf(utilBuffer_knl, sizeof(utilBuffer_knl), \"%skb\", node_group->avlmem);\n\t\t\tif (vn_addvnr(vnlp, vname, attr, utilBuffer_knl, 0, 0, NULL) == -1)\n\t\t\t\tgoto bad_vnl;\n\n\t\t\tattr = \"current_aoe\";\n\t\t\tsnprintf(utilBuffer_knl, sizeof(utilBuffer_knl), \"%s_%s\",\n\t\t\t\t node_group->numa_cfg, node_group->hbm_cfg);\n\t\t\tif (vn_addvnr(vnlp, vname, attr, utilBuffer_knl, 0, 0, NULL) == -1)\n\t\t\t\tgoto bad_vnl;\n\n\t\t\t/* hbmem is high bandwidth mem (MCDRAM) in megabytes. 
*/\n\t\t\tattr = \"resources_available.hbmem\";\n\t\t\tsnprintf(utilBuffer_knl, sizeof(utilBuffer_knl), \"%smb\", node_group->hbmsize);\n\t\t\tif (vn_addvnr(vnlp, vname, attr, utilBuffer_knl, 0, 0, NULL) == -1)\n\t\t\t\tgoto bad_vnl;\n\t\t}\n\t\t/* We have no further use for this array of KNL node ids. */\n\t\tif (nid_arr) {\n\t\t\tfree(nid_arr);\n\t\t\tnid_arr = NULL;\n\t\t}\n\t}\n\n\treturn;\n\nbad_vnl:\n\tsnprintf(log_buffer, sizeof(log_buffer), \"Creation of Cray KNL vnodes failed with name %s\", vname);\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_DEBUG, __func__, log_buffer);\n\t/*\n\t * Don't free nv since it might be important in the dump.\n\t */\n\tabort();\n}\n\n/**\n *\n * @brief Check if this Node Group needs to be considered for KNL processing.\n *        We are only interested in KNL Nodes that have \"role\" set to \"batch\" and\n *\t  \"state\" set to \"up\".\n *\t  The attributes \"numa_cfg\", \"hbmsize\", \"hbm_cfg\" not being empty (\"\") implies\n *\t  they pertain to KNL Nodes.\n *\n * @param[in] ptrNodeGrp Pointer to the current Node Group in the System XML Response.\n * @param[in] check_state Indicates whether this function should also check the KNL node state.\n *\n * @return int\n * @retval 1 Indicates that the Node Group should not be considered.\n * @retval 0 Indicates that the Node Group should be considered.\n *\n */\nstatic int\nexclude_from_KNL_processing(basil_system_element_t *ptrNodeGrp,\n\t\t\t    short int check_state)\n{\n\tif ((strcmp(ptrNodeGrp->role, BASIL_VAL_BATCH_SYS) != 0) ||\n\t    (check_state ? (strcmp(ptrNodeGrp->state,\n\t\t\t\t   BASIL_VAL_UP_SYS) != 0)\n\t\t\t : 0) ||\n\t    ((strcmp(ptrNodeGrp->numa_cfg, \"\") == 0) &&\n\t     (strcmp(ptrNodeGrp->hbmsize, \"\") == 0) &&\n\t     (strcmp(ptrNodeGrp->hbm_cfg, \"\") == 0)))\n\t\treturn 1;\n\telse\n\t\treturn 0;\n}\n\n/**\n *\n * @brief KNL Nodes are specified in 'Rangelist' format in a string e.g. 
\"12,13,14-18,21\".\n * \t  Extract Node IDs from this string and store them in an integer array.\n *\n * @param[in] nidlist String containing rangelist of Nodes.\n * @param[out] ptr_count Total number of Nodes in this list.\n *\n * @return int *\n * @retval This is an long integer array containing Node IDs.\n *\t   nid_arr returned here is freed in the calling function after use.\n */\nstatic long *\nprocess_nodelist_KNL(char *nidlist, int *ptr_count)\n{\n\tchar delim[] = \",\";\n\tchar *token, *nidlist_array, *endptr;\n\tint nid_count = 0;\n\tlong *nid_arr = NULL;\n\n\tif (nidlist == NULL) {\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_NODE, LOG_DEBUG, __func__, \"No KNL nodes.\");\n\t\t*ptr_count = 0;\n\t\treturn NULL;\n\t}\n\n\tif ((nidlist_array = strdup(nidlist)) == NULL) {\n\t\tlog_err(errno, __func__, \"malloc failure\");\n\t\t*ptr_count = 0;\n\t\treturn NULL;\n\t}\n\n\tendptr = NULL;\n\t/* 'token' points to a null terminated string containing the token e.g. \"12\\0\", \"14-18\\0\". */\n\ttoken = strtok(nidlist_array, delim);\n\twhile (token != NULL) {\n\t\tint nid_num;\n\t\t/*\n\t\t * Each token (e.g. \"12\" or \"14-18\") is converted to an int and sent as\n\t\t * an argument to store_nids(). In case of tokens such as \"14-18\", nid_num=14\n\t\t * and 'endptr' points to the first invalid character i.e. \"-\".\n\t\t */\n\t\tnid_num = (int) strtol(token, &endptr, 10);\n\t\t/* Checking for invalid data in the Node rangelist. 
*/\n\t\tif ((*endptr != '\\0') && (*endptr != '-')) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"Bad KNL Rangelist: \\\"%s\\\"\", nidlist_array);\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_NODE, LOG_ERR, __func__, log_buffer);\n\t\t\tfree(nid_arr);\n\t\t\tnid_arr = NULL;\n\t\t\tnid_count = 0;\n\t\t\tbreak;\n\t\t}\n\n\t\tstore_nids(nid_num, endptr, &nid_arr, &nid_count);\n\t\tif (nid_arr == NULL) {\n\t\t\tnid_count = 0;\n\t\t\tbreak;\n\t\t}\n\n\t\ttoken = strtok(NULL, delim);\n\t}\n\n\t*ptr_count = nid_count;\n\tfree(nidlist_array);\n\treturn nid_arr;\n}\n\n/**\n *\n * @brief Helper function for process_nodelist_KNL().\n *\t  It stores the tokenized Node IDs in an integer array.\n * @example For the token \"14-18\", nid_num = 14, *endptr = \"-\" and endptr = \"-18\\0\".\n *\t  The Node IDs 14 and 18 are extracted and stored in 'nid_arr'.\n *\n * @param[in] nid_num Node ID to be stored.\n * @param[in] endptr Ptr to invalid character (set by strtol()).\n *\n * @param[out] nid_arr Array to hold all Node IDs.\n * @param[out] nid_count Node count.\n *\n * @return void\n */\nstatic void\nstore_nids(int nid_num, char *endptr, long **nid_arr, int *nid_count)\n{\n\tint count = *nid_count;\n\tint range_len = 1;\n\tint i = 0;\n\tlong *tmp_ptr = NULL;\n\tchar *ptr = NULL;\n\n\tif (*endptr == '-') {\n\t\tint nid_num_last;\n\t\tnid_num_last = (int) strtol(endptr + 1, &ptr, 10);\n\t\t/* Checking for invalid data in the Node rangelist. 
*/\n\t\tif ((*ptr != '\\0') && (*ptr != '-')) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"Bad KNL Rangelist: \\\"%s\\\"\", endptr);\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_NODE, LOG_ERR, __func__, log_buffer);\n\t\t\tfree(*nid_arr);\n\t\t\t*nid_arr = NULL;\n\t\t\treturn;\n\t\t}\n\n\t\trange_len = (nid_num_last - nid_num) + 1;\n\t}\n\n\ttmp_ptr = (long *) realloc(*nid_arr, (count + range_len) * sizeof(long));\n\tif (!tmp_ptr) {\n\t\tlog_err(errno, __func__, \"realloc failure\");\n\t\tfree(*nid_arr);\n\t\t*nid_arr = NULL;\n\t\treturn;\n\t}\n\t*nid_arr = tmp_ptr;\n\n\tfor (i = 0; i < range_len; i++) {\n\t\t*(*nid_arr + count) = nid_num + i;\n\t\tcount++;\n\t}\n\n\t*nid_count = count;\n}\n\n/**\n * @brief\n * This function is registered to handle the System element in\n * the System XML response.\n *\n * The standard Expat start handler function prototype is used.\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @return void\n */\nstatic void\nsystem_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\tbasil_response_t *brp;\n\tbasil_response_query_system_t *sys;\n\n\tif (++(d->count_sys.system) > 1) {\n\t\tparse_err_multiple_elements(d);\n\t\treturn;\n\t}\n\n\tbrp = d->brp;\n\tbrp->data.query.type = basil_query_system;\n\tsys = &brp->data.query.data.system;\n\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\txml_dbg(\"%s: %s = %s\", __func__, *np, *vp);\n\t\tif (strcmp(BASIL_ATR_TIMESTAMP, *np) == 0) {\n\t\t\tif (sys->timestamp != 0) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tsys->timestamp = atoll(*vp);\n\t\t} else if (strcmp(BASIL_ATR_MPPHOST, *np) == 0) {\n\t\t\tif (sys->mpp_host[0] != '\\0') {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tsnprintf(sys->mpp_host, BASIL_STRING_LONG, \"%s\", *vp);\n\t\t} else if 
(strcmp(BASIL_ATR_CPCU, *np) == 0) {\n\t\t\tif (sys->cpcu_val != 0) {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tsys->cpcu_val = atoi(*vp);\n\t\t\tif (sys->cpcu_val < 0) {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else {\n\t\t\tparse_err_unrecognized_attr(d, *np);\n\t\t\treturn;\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * This function is registered to handle the 'Nodes' element within a System XML\n * response.\n * This element handler is called each time a new <Nodes ...> element is encountered\n * in the XML response currently being parsed. It populates the user data structure\n * (ud_t *d) with the current <Nodes ...> element attribute/value pairs.\n *\n * The standard Expat start handler function prototype is used.\n * @param d pointer to user data structure\n * @param[in] el unused in this function\n * @param[in] atts array of name/value pairs\n *\n * @return void\n */\nstatic void\nnode_group_start(ud_t *d, const XML_Char *el, const XML_Char **atts)\n{\n\tconst XML_Char **np;\n\tconst XML_Char **vp;\n\tbasil_system_element_t *node_group;\n\tbasil_response_t *brp;\n\tint page_size_KB = 0;\n\tint shift_count = 0;\n\tint res = 0;\n\tlong page_count, avail_mem;\n\tchar *invalid_char_ptr;\n\n\tbrp = d->brp;\n\tnode_group = (basil_system_element_t *) calloc(1, sizeof(basil_system_element_t));\n\tif (!node_group) {\n\t\tparse_err_out_of_memory(d);\n\t\treturn;\n\t}\n\n\tif (d->current_sys.node_group)\n\t\t(d->current_sys.node_group)->next = node_group;\n\telse\n\t\tbrp->data.query.data.system.elements = node_group;\n\n\td->current_sys.node_group = node_group;\n\n\t/*\n\t * Iterate through the attribute name/value pairs. Update the name and\n\t * value pointers with each loop. 
If the XML attributes (\"role\", \"state\",\n\t * \"speed\", \"numa_nodes\", \"dies\", \"compute_units\", \"cpus_per_cu\",\n\t * \"page_size_kb\", \"page_count\", \"accels\", \"accel_state\", \"numa_cfg\",\n\t * \"hbm_size_mb\", \"hbm_cache_pct\") are repeated within each \"Nodes\"\n\t * element under consideration, invoke parse_err_multiple_attrs().\n\t */\n\tfor (np = vp = atts, vp++; np && *np && vp && *vp; np = ++vp, vp++) {\n\t\txml_dbg(\"%s: %s = %s\", __func__, *np, *vp);\n\t\tif (strcmp(BASIL_ATR_ROLE, *np) == 0) {\n\t\t\tif (*node_group->role != '\\0') {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tif ((strcmp(BASIL_VAL_BATCH_SYS, *vp) == 0) ||\n\t\t\t    (strcmp(BASIL_VAL_INTERACTIVE_SYS, *vp) == 0))\n\t\t\t\tpbs_strncpy(node_group->role, *vp, sizeof(node_group->role));\n\t\t\telse\n\t\t\t\tstrcpy(node_group->role, BASIL_VAL_UNKNOWN);\n\t\t} else if (strcmp(BASIL_ATR_STATE, *np) == 0) {\n\t\t\tif (*node_group->state != '\\0') {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tif ((strcmp(BASIL_VAL_UP_SYS, *vp) == 0) ||\n\t\t\t    (strcmp(BASIL_VAL_DOWN_SYS, *vp) == 0) ||\n\t\t\t    (strcmp(BASIL_VAL_UNAVAILABLE_SYS, *vp) == 0) ||\n\t\t\t    (strcmp(BASIL_VAL_ROUTING_SYS, *vp) == 0) ||\n\t\t\t    (strcmp(BASIL_VAL_SUSPECT_SYS, *vp) == 0) ||\n\t\t\t    (strcmp(BASIL_VAL_ADMIN_SYS, *vp) == 0))\n\t\t\t\tpbs_strncpy(node_group->state, *vp, sizeof(node_group->state));\n\t\t\telse\n\t\t\t\tstrcpy(node_group->state, BASIL_VAL_UNKNOWN);\n\t\t} else if (strcmp(BASIL_ATR_SPEED, *np) == 0) {\n\t\t\tif (*node_group->speed != '\\0') {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\t/**\n\t\t\t * The speed attribute is not used elsewhere in PBS. 
Setting\n\t\t\t * it to -1 to catch the multiple instances scenario (i.e.\n\t\t\t * multiple speed attributes occurring in the XML response).\n \t\t\t */\n\t\t\tstrncpy(node_group->speed, \"-1\", sizeof(node_group->speed) - 1);\n\t\t} else if (strcmp(BASIL_ATR_NUMA_NODES, *np) == 0) {\n\t\t\tif (*node_group->numa_nodes != '\\0') {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\t/* *vp cannot be empty (\"\"), \"0\" nor negative. */\n\t\t\tif (strtol(*vp, &invalid_char_ptr, 10) <= 0) {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tpbs_strncpy(node_group->numa_nodes, *vp, sizeof(node_group->numa_nodes));\n\t\t} else if (strcmp(BASIL_ATR_DIES, *np) == 0) {\n\t\t\tif (*node_group->n_dies != '\\0') {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\t/* *vp cannot be empty (\"\") nor negative. Can be \"0\". */\n\t\t\tif ((strcmp(*vp, \"\") == 0) || (strtol(*vp, &invalid_char_ptr, 10) < 0)) {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tpbs_strncpy(node_group->n_dies, *vp, sizeof(node_group->n_dies));\n\t\t} else if (strcmp(BASIL_ATR_COMPUTE_UNITS, *np) == 0) {\n\t\t\tif (*node_group->compute_units != '\\0') {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\t/* *vp cannot be empty (\"\") nor negative. Can be \"0\". */\n\t\t\tif ((strcmp(*vp, \"\") == 0) || (strtol(*vp, &invalid_char_ptr, 10) < 0)) {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tpbs_strncpy(node_group->compute_units, *vp, sizeof(node_group->compute_units));\n\t\t} else if (strcmp(BASIL_ATR_CPUS_PER_CU, *np) == 0) {\n\t\t\tif (*node_group->cpus_per_cu != '\\0') {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\t/* *vp cannot be empty (\"\"), \"0\" nor negative. 
*/\n\t\t\tif (strtol(*vp, &invalid_char_ptr, 10) <= 0) {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tpbs_strncpy(node_group->cpus_per_cu, *vp, sizeof(node_group->cpus_per_cu));\n\t\t} else if (strcmp(BASIL_ATR_PAGE_SIZE_KB, *np) == 0) {\n\t\t\tif (*node_group->pgszl2 != '\\0') {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\t/* *vp cannot be empty (\"\"), \"0\" nor negative. */\n\t\t\tpage_size_KB = strtol(*vp, &invalid_char_ptr, 10);\n\t\t\tif (page_size_KB <= 0) {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tshift_count = 0;\n\t\t\twhile (1) {\n\t\t\t\t/* Computing log base 2 of page_size_KB. */\n\t\t\t\t/* e.g. if page_size_kb = 1 KB (i.e. 1024 Bytes), */\n\t\t\t\t/* then pgszl2 = 10 (since 2 ^ 10 = 1024). */\n\t\t\t\tres = 1 << shift_count;\n\t\t\t\tif (res == page_size_KB)\n\t\t\t\t\tbreak;\n\t\t\t\telse\n\t\t\t\t\tshift_count++;\n\t\t\t}\n\t\t\t/* Adding log base 2 of 1024. */\n\t\t\tshift_count += 10;\n\t\t\tsnprintf(node_group->pgszl2, BASIL_STRING_SHORT, \"%d\", shift_count);\n\t\t} else if (strcmp(BASIL_ATR_PAGE_COUNT, *np) == 0) {\n\t\t\tif (*node_group->avlmem != '\\0') {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\t/* *vp cannot be empty (\"\") nor negative. Can be \"0\". */\n\t\t\tpage_count = strtol(*vp, &invalid_char_ptr, 10);\n\t\t\tif ((strcmp(*vp, \"\") == 0) || (page_count < 0)) {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tavail_mem = page_size_KB * page_count;\n\t\t\tsnprintf(node_group->avlmem, BASIL_STRING_SHORT, \"%ld\", avail_mem);\n\t\t} else if (strcmp(BASIL_ATR_ACCELS, *np) == 0) {\n\t\t\tif (*node_group->accel_name != '\\0') {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\t/* *vp cannot be empty (\"\"). 
*/\n\t\t\tif (strcmp(*vp, \"\") == 0) {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tpbs_strncpy(node_group->accel_name, *vp, sizeof(node_group->accel_name));\n\t\t} else if (strcmp(BASIL_ATR_ACCEL_STATE, *np) == 0) {\n\t\t\tif (*node_group->accel_state != '\\0') {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tif ((strcmp(BASIL_VAL_UP_SYS, *vp) == 0) ||\n\t\t\t    (strcmp(BASIL_VAL_DOWN_SYS, *vp) == 0))\n\t\t\t\tpbs_strncpy(node_group->accel_state, *vp, sizeof(node_group->accel_state));\n\t\t\telse\n\t\t\t\tstrcpy(node_group->accel_state, BASIL_VAL_UNKNOWN);\n\t\t} else if (strcmp(BASIL_ATR_NUMA_CFG, *np) == 0) {\n\t\t\tif (*node_group->numa_cfg != '\\0') {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tif ((strcmp(BASIL_VAL_EMPTY_SYS, *vp) == 0) ||\n\t\t\t    (strcmp(BASIL_VAL_A2A_SYS, *vp) == 0) ||\n\t\t\t    (strcmp(BASIL_VAL_SNC2_SYS, *vp) == 0) ||\n\t\t\t    (strcmp(BASIL_VAL_SNC4_SYS, *vp) == 0) ||\n\t\t\t    (strcmp(BASIL_VAL_HEMI_SYS, *vp) == 0) ||\n\t\t\t    (strcmp(BASIL_VAL_QUAD_SYS, *vp) == 0))\n\t\t\t\tpbs_strncpy(node_group->numa_cfg, *vp, sizeof(node_group->numa_cfg));\n\t\t\telse {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else if (strcmp(BASIL_ATR_HBMSIZE, *np) == 0) {\n\t\t\tif (*node_group->hbmsize != '\\0') {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\t/* *vp cannot be negative. 
*/\n\t\t\tif (strtol(*vp, &invalid_char_ptr, 10) < 0) {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tpbs_strncpy(node_group->hbmsize, *vp, sizeof(node_group->hbmsize));\n\t\t} else if (strcmp(BASIL_ATR_HBM_CFG, *np) == 0) {\n\t\t\tif (*node_group->hbm_cfg != '\\0') {\n\t\t\t\tparse_err_multiple_attrs(d, *np);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tif ((strcmp(BASIL_VAL_EMPTY_SYS, *vp) == 0) ||\n\t\t\t    (strcmp(BASIL_VAL_0_SYS, *vp) == 0) ||\n\t\t\t    (strcmp(BASIL_VAL_25_SYS, *vp) == 0) ||\n\t\t\t    (strcmp(BASIL_VAL_50_SYS, *vp) == 0) ||\n\t\t\t    (strcmp(BASIL_VAL_100_SYS, *vp) == 0))\n\t\t\t\tpbs_strncpy(node_group->hbm_cfg, *vp, sizeof(node_group->hbm_cfg));\n\t\t\telse {\n\t\t\t\tparse_err_illegal_attr_val(d, *np, *vp);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else {\n\t\t\tparse_err_unrecognized_attr(d, *np);\n\t\t\treturn;\n\t\t}\n\t}\n}\n\n/**\n * Define the array that is used to register the expat element handlers.\n * See parse_element_start, parse_element_end, and parse_char_data for\n * further information.\n * The definition of element_handler_t above explains the different\n * structure elements.\n */\n\n// clang-format off\n\nstatic element_handler_t handler[] 
=\n{\n\t{\n\t\t\"UNDEFINED\",\n\t\tNULL,\n\t\tNULL,\n\t\tNULL\n\t},\n\t{\n\t\tBASIL_ELM_MESSAGE,\n\t\tmessage_start,\n\t\tmessage_end,\n\t\tmessage_char_data\n\t},\n\t{\n\t\tBASIL_ELM_RESPONSE,\n\t\tresponse_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_RESPONSEDATA,\n\t\tresponse_data_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_RESERVED,\n\t\treserved_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_CONFIRMED,\n\t\tconfirmed_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_RELEASED,\n\t\treleased_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_INVENTORY,\n\t\tinventory_start,\n\t\tinventory_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_ENGINE,\n\t\tengine_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_NODEARRAY,\n\t\tnode_array_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_NODE,\n\t\tnode_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_SOCKETARRAY,\n\t\tsocket_array_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_SOCKET,\n\t\tsocket_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_SEGMENTARRAY,\n\t\tsegment_array_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_SEGMENT,\n\t\tsegment_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_CUARRAY,\n\t\tcomputeunit_array_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_COMPUTEUNIT,\n\t\tcomputeunit_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_PROCESSORARRAY,\n\t\tprocessor_array_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_PROCESSOR,\n\t\tprocessor_start,\n\t\tdefault_element_end,\n\t\tdisal
low_char_data\n\t},\n\t{\n\t\tBASIL_ELM_PROCESSORALLOC,\n\t\tprocessor_allocation_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_MEMORYARRAY,\n\t\tmemory_array_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_MEMORY,\n\t\tmemory_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_MEMORYALLOC,\n\t\tmemory_allocation_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_LABELARRAY,\n\t\tlabel_array_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_LABEL,\n\t\tlabel_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_RSVNARRAY,\n\t\treservation_array_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_RESERVATION,\n\t\treservation_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_APPARRAY,\n\t\tapplication_array_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_APPLICATION,\n\t\tapplication_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_CMDARRAY,\n\t\tcommand_array_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_COMMAND,\n\t\tignore_element,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_ACCELERATORARRAY,\n\t\taccelerator_array_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_ACCELERATOR,\n\t\taccelerator_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_ACCELERATORALLOC,\n\t\taccelerator_allocation_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_RSVD_NODEARRAY,\n\t\tignore_element,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_RSVD_NODE,\n\t\tignore_element,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_RSVD_SGMTARRAY,\n\t\ti
gnore_element,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_RSVD_SGMT,\n\t\tignore_element,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_RSVD_SGMT,\n\t\tignore_element,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_RSVD_PROCARRAY,\n\t\tignore_element,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_RSVD_PROCESSOR,\n\t\tignore_element,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_RSVD_PROCESSOR,\n\t\tignore_element,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_RSVD_MEMARRAY,\n\t\tignore_element,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_RSVD_MEMORY,\n\t\tignore_element,\n\t\tdefault_element_end,\n\t\tdisallow_char_data\n\t},\n\t{\n\t\tBASIL_ELM_SYSTEM,\n\t\tsystem_start,\n\t\tdefault_element_end,\n\t\tdisallow_char_data,\n\t},\n\t{\n\t\tBASIL_ELM_NODES,\n\t\tnode_group_start,\n\t\tdefault_element_end,\n\t\tallow_char_data\n\t},\n\t{\n\t\tNULL,\n\t\tNULL,\n\t\tNULL,\n\t\tNULL\n\t}\n};\n#endif /* MOM_ALPS */\n\n// clang-format on\n"
  },
  {
    "path": "src/resmom/linux/mom_func.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h>\n\n#include <sys/stat.h>\n#include <signal.h>\n#include <sys/utsname.h>\n#include <limits.h>\n#include <sys/resource.h>\n\n#include \"pbs_ifl.h\"\n#include \"net_connect.h\"\n#include \"log.h\"\n#include \"job.h\"\n#include \"mom_func.h\"\n#include \"placementsets.h\"\n#include \"tpp.h\"\n\nextern int do_debug_report;\nextern int termin_child;\nextern int exiting_tasks;\nextern int next_sample_time;\nextern char *log_file;\nextern char *path_log;\nextern int mom_run_state;\nextern int vnode_additive;\nextern int kill_jobs_on_exit;\nextern vnl_t *vnlp;\nextern char *msg_corelimit;\nextern vnl_t *vnlp_from_hook;\nextern char *ret_string;\n\nextern void debug_report(void);\nextern void scan_for_exiting(void);\nextern int read_config(char *);\nextern void cleanup(void);\nextern void initialize(void);\nextern void mom_vnlp_report(vnl_t *vnl, char *header);\n\n/**\n * @brief\n *\tsignal handler for SIGTERM and SIGINT\n *\tTERM kills running jobs\n *\tINT  leaves them running\n *\n * @param[in] sig - signal number\n *\n * @return\tVoid\n *\n */\n\nvoid\nstop_me(int sig)\n{\n\tsprintf(log_buffer, \"caught signal %d\", sig);\n\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER,\n\t\t  LOG_NOTICE, msg_daemonname, log_buffer);\n\n\tswitch (sig) {\n\t\tcase SIGPIPE:\n\t\tcase SIGUSR1:\n#ifdef SIGINFO\n\t\tcase 
SIGINFO:\n#endif\n\t\t\treturn;\n\n\t\tdefault:\n\t\t\tbreak;\n\t}\n\n\tmom_run_state = 0;\n\tif (sig == SIGTERM)\n\t\tkill_jobs_on_exit = 1;\n}\n\n/**\n * @brief\n *\tThe finish of MOM's main loop\n *\tActually the heart of the loop\n *\n * @param[in] waittime - wait time\n *\n * @return Void\n *\n */\nvoid\nfinish_loop(time_t waittime)\n{\n\tif (do_debug_report)\n\t\tdebug_report();\n\tif (termin_child) {\n\t\tscan_for_terminated();\n\t\twaittime = 1; /* want faster time around to next loop */\n\t}\n\tif (exiting_tasks) {\n\t\tscan_for_exiting();\n\t\twaittime = 1; /* want faster time around to next loop */\n\t}\n\n\tif (waittime > next_sample_time)\n\t\twaittime = next_sample_time;\n\tDBPRT((\"%s: waittime %lu\\n\", __func__, (unsigned long) waittime));\n\n\t/* wait for a request to process */\n\tif (wait_request(waittime, NULL) != 0)\n\t\tlog_err(-1, msg_daemonname, \"wait_request failed\");\n}\n\n/**\n * @brief\n *\treturns access permission of a file\n *\n * @return int\n * @retval permission\n *\n */\nint\nget_permission(char *perm)\n{\n\tif (strcmp(perm, \"write\") == 0)\n\t\treturn (S_IWGRP | S_IWOTH);\n\treturn 0;\n}\n\n/**\n * @brief\n *\tVerify whether PBS_INTERACTIVE process is running.\n *\tAs this is not useful for *nix platform, so returns HANDLER_SUCCESS.\n *\n * @return handler_ret_t\n * @retval Success\n *\n */\n\nhandler_ret_t\ncheck_interactive_service()\n{\n\treturn HANDLER_SUCCESS;\n}\n\n/**\n * @brief\n *\treturns username\n *\n * @return string\n * @retval user name\n *\n */\nchar *\ngetuname(void)\n{\n\tstatic char *name = NULL;\n\tstruct utsname n;\n\n\tif (name == NULL) {\n\t\tif (uname(&n) == -1)\n\t\t\treturn NULL;\n\t\tsprintf(ret_string, \"%s %s %s %s %s\", n.sysname,\n\t\t\tn.nodename, n.release, n.version, n.machine);\n\t\tname = strdup(ret_string);\n\t}\n\treturn name;\n}\n\n/**\n * @brief\n *\tFunction to catch HUP signal.\n *\tSet call_hup = 1.\n * @param[in] sig - signal number\n *\n * @return Void\n *\n */\nvoid\ncatch_hup(int 
sig)\n{\n\tsprintf(log_buffer, \"caught signal %d\", sig);\n\tlog_event(PBSEVENT_SYSTEM, 0, LOG_INFO, \"catch_hup\", log_buffer);\n\tcall_hup = HUP_REAL;\n}\n\n/**\n * @brief\n *\tDo a restart of resmom.\n *\tRead the last seen config file and\n *\tClean up and reinit the dependent code.\n *\n * @return Void\n *\n */\nvoid\nprocess_hup(void)\n{\n\t/**\n\t * When call_hup == HUP_REAL, the catch_hup function has been called.\n\t * When call_hup == HUP_INIT, we couldn't start a job so the ALPS\n\t * inventory needs to be refreshed.\n\t * When real_hup is false, some actions don't need to be done.\n\t */\n\tint real_hup = (call_hup == HUP_REAL);\n\tint num_var_env;\n\n\tcall_hup = HUP_CLEAR;\n\n\tif (real_hup) {\n\t\tlog_event(PBSEVENT_SYSTEM, 0, LOG_INFO, __func__, \"reset\");\n\t\tlog_close(1);\n\t\tlog_open(log_file, path_log);\n\n\t\tif ((num_var_env = setup_env(pbs_conf.pbs_environment)) == -1) {\n\t\t\tmom_run_state = 0;\n\t\t\treturn;\n\t\t}\n\t}\n\n\t/*\n\t ** See if we need to get rid of the previous vnode state.\n\t */\n\tif (!vnode_additive) {\n\t\tif (vnlp != NULL) {\n\t\t\tvnl_free(vnlp);\n\t\t\tvnlp = NULL;\n\t\t}\n\t\tif (vnlp_from_hook != NULL) {\n\t\t\tvnl_free(vnlp_from_hook);\n\t\t\tvnlp_from_hook = NULL;\n\t\t}\n\t}\n\n\tif (read_config(NULL) != 0) {\n\t\tcleanup();\n\t\tlog_close(1);\n\t\ttpp_shutdown();\n\t\texit(1);\n\t}\n\n\tcleanup();\n\tinitialize();\n\n#if MOM_ALPS /* ALPS needs libjob support */\n\t/*\n\t * This needs to be called after the config file is read.\n\t */\n\tck_acct_facility_present();\n#endif /* MOM_ALPS */\n\n\tif (!real_hup) /* no need to go on */\n\t\treturn;\n}\n\n/**\n * @brief\n *\tsignal handler for SIG_USR2\n *\n * @return Void\n *\n */\n\nvoid\ncatch_USR2(int sig)\n{\n\tdo_debug_report = 1;\n}\n\n/**\n * @brief\n *\tCause useful information to be logged.  
This function is called from\n *\tMoM's main loop after catching a SIGUSR2.\n *\n * @return Void\n *\n */\n\nvoid\ndebug_report(void)\n{\n\textern void mom_CPUs_report(void);\n\n\tmom_CPUs_report();\n\tmom_vnlp_report(vnlp, NULL);\n\tdo_debug_report = 0;\n}\n\n/**\n * @brief\n *\tGot an alarm call.\n *\n * @param[in] sig - signal number\n *\n * @return Void\n *\n */\nvoid\ntoolong(int sig)\n{\n\tlog_event(PBSEVENT_SYSTEM, 0, LOG_NOTICE, __func__, \"alarm call\");\n\tDBPRT((\"alarm call\\n\"))\n}\n\n/**\n * @brief\n *\tPrints usage for prog\n *\n * @param[in] prog - char pointer which holds program name\n *\n * @return Void\n *\n */\n\nvoid\nusage(char *prog)\n{\n\tconst char *configusage = \"%s -s insert scriptname inputfile\\n\"\n\t\t\t\t  \"%s -s [ remove | show ] scriptname\\n\"\n\t\t\t\t  \"%s -s list\\n\";\n\tfprintf(stderr,\n\t\t\"Usage: %s [-C chkdirectory][-d dir][-c configfile][-r|-p][-R port][-M port][-L log][-a alarm][-n nice]\\n\", prog);\n\tfprintf(stderr, \"or\\n\");\n\tfprintf(stderr, configusage, prog, prog, prog);\n\tfprintf(stderr, \"%s --version\\n\", prog);\n\texit(1);\n}\n"
  },
  {
    "path": "src/resmom/linux/mom_mach.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef PBSMOM_HTUNIT\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <limits.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <ctype.h>\n#include <unistd.h>\n#include <stddef.h>\n#include <dirent.h>\n#include <fcntl.h>\n#include <errno.h>\n#include <string.h>\n#include <pwd.h>\n#include <time.h>\n#include <ftw.h>\n#include <dlfcn.h>\n#include <sys/types.h>\n#include <sys/time.h>\n#include <sys/param.h>\n#include <sys/stat.h>\n#ifdef __linux__\n#include <sys/vfs.h>\n#else\n#include <sys/mount.h>\n#endif\n#include <sys/resource.h>\n#include <sys/utsname.h>\n#include <sys/wait.h>\n#include <signal.h>\n\n#include \"mom_mach.h\"\n#include \"pbs_error.h\"\n#include \"portability.h\"\n#include \"list_link.h\"\n#include \"server_limits.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"job.h\"\n#include \"log.h\"\n#include \"mom_func.h\"\n#include \"resmon.h\"\n#include \"../rm_dep.h\"\n#include \"tpp.h\"\n#include \"pbs_license.h\"\n#include \"pbs_ifl.h\"\n#include \"placementsets.h\"\n#include \"mom_vnode.h\"\n\n/**\n * @file\n * @brief\n *\tSystem dependent code to gather information for the resource\n *\tmonitor for a Linux i386 machine.\n *\n * @par Resources known by this code:\n *\t\tcput\t\tcpu time for a pid or session\n *\t\tmem\t\tmemory size for a pid or 
session in KB\n *\t\tresi\t\tresident memory size for a pid or session in KB\n *\t\tsessions\tlist of sessions in the system\n *\t\tpids\t\tlist of pids in a session\n *\t\tnsessions\tnumber of sessions in the system\n *\t\tnusers\t\tnumber of users in the system\n *\t\ttotmem\t\ttotal memory size in KB\n *\t\tavailmem\tavailable memory size in KB\n *\t\tncpus\t\tnumber of cpus\n *\t\tphysmem\t\tphysical memory size in KB\n *\t\tsize\t\tsize of a file or filesystem\n *\t\tidletime\tseconds of idle time (see mom_main.c)\n *\t\twalltime\twall clock time for a pid\n *\t\tloadave\t\tcurrent load average\n */\n\n#ifndef TRUE\n#define FALSE 0\n#define TRUE 1\n#endif /* TRUE */\n\n#define TBL_INC 20\n#define CPUT_POSSIBLE_FACTOR 5\n\nstatic char procfs[] = \"/proc\";\nstatic DIR *pdir = NULL;\nstatic int pagesize;\nstatic long hz;\n\n/* convert between jiffies and seconds */\n#define JTOS(x) (((x) + (hz / 2)) / hz)\n\nstatic char *choose_procflagsfmt(void);\n\nproc_stat_t *proc_info = NULL;\nint nproc = 0;\nint max_proc = 0;\n\nextern char *ret_string;\nextern char extra_parm[];\nextern char no_parm[];\nextern int exiting_tasks;\nextern vnl_t *vnlp;\n\nextern time_t time_now;\n\n/*\n ** external functions and data\n */\nextern int nice_val;\nextern int rm_errno;\nextern int reqnum;\nextern double cputfactor;\nextern double wallfactor;\nextern pid_t mom_pid;\nextern int num_acpus;\nextern int num_pcpus;\nextern int num_oscpus;\nstruct config *search(struct config *, char *);\nstruct rm_attribute *momgetattr(char *);\n\nchar *physmem(struct rm_attribute *attrib);\n\n/*\n ** local functions and data\n */\nstatic char *resi(struct rm_attribute *attrib);\nstatic char *totmem(struct rm_attribute *attrib);\nstatic char *availmem(struct rm_attribute *attrib);\nstatic char *ncpus(struct rm_attribute *attrib);\nstatic char *walltime(struct rm_attribute *attrib);\n\nextern char *loadave(struct rm_attribute *attrib);\nextern char *nullproc(struct rm_attribute *attrib);\n\ntime_t 
wait_time = 10;\n\ntypedef struct proc_mem {\n\tunsigned long total;\n\tunsigned long used;\n\tunsigned long free;\n} proc_mem_t;\n\nint mom_does_chkpnt = 0;\nunsigned long totalmem;\n\nstatic int myproc_max = 0;    /* entries in Proc_lnks  */\npbs_plinks *Proc_lnks = NULL; /* process links table head */\nstatic time_t sampletime_ceil;\nstatic time_t sampletime_floor;\n\n/*\n ** local resource array\n */\nstruct config dependent_config[] = {\n\t{\"resi\", {resi}},\n\t{\"totmem\", {totmem}},\n\t{\"availmem\", {availmem}},\n\t{\"physmem\", {physmem}},\n\t{\"ncpus\", {ncpus}},\n\t{\"loadave\", {loadave}},\n\t{\"walltime\", {walltime}},\n\t{NULL, {nullproc}},\n};\n\nunsigned linux_time = 0;\n/**\n * @brief\n * \tsupport routine for getting system time -- sets linux_time\n *\n * @return\tVoid\n *\n */\nvoid\nproc_get_btime(void)\n{\n\tFILE *fp;\n\tchar label[256];\n\n\tif ((fp = fopen(\"/proc/stat\", \"r\")) == NULL)\n\t\treturn;\n\n\twhile (!feof(fp)) {\n\t\tif (fscanf(fp, \"%s\", label) == EOF) \n\t\t\tlog_errf(-1, __func__, \"fscanf failed. ERR : %s\", strerror(errno));\n\t\tif (strcmp(label, \"btime\")) {\n\t\t\tif (fscanf(fp, \"%*[^\\n]%*c\") == EOF) \n\t\t\t\tlog_errf(-1, __func__, \"fscanf failed. ERR : %s\", strerror(errno));\t\t\t\t\n\t\t} else {\n\t\t\tif (fscanf(fp, \"%u\", &linux_time) == EOF) \n\t\t\t\tlog_errf(-1, __func__, \"fscanf failed. 
ERR : %s\", strerror(errno));\t\t\t\t\n\t\t\tfclose(fp);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tfclose(fp);\n\treturn;\n}\n\nstatic char stat_str_pre[] =\n\t\"%%d \"\t    /* 1  pid %d The process id */\n\t\"(%%[^)]) \" /* 2  comm %s The filename of the executable */\n\t\"%%c \"\t    /* 3  state %c \"RSDZTW\" */\n\t\"%%d \"\t    /* 4  ppid %d The PID of the parent */\n\t\"%%d \"\t    /* 5  pgrp %d The process group ID */\n\t\"%%d \"\t    /* 6  session %d The session ID */\n\t\"%%*d \"\t    /* 7  ignored:  tty_nr */\n\t\"%%*d \"\t    /* 8  ignored:  tpgid */\n\t\"%s \"\t    /* 9  flags - %u or %lu */\n\t\"%%*lu \"    /* 10 ignored:  minflt */\n\t\"%%*lu \"    /* 11 ignored:  cminflt */\n\t\"%%*lu \"    /* 12 ignored:  majflt */\n\t\"%%*lu \"    /* 13 ignored:  cmajflt */\n\t\"%%lu \"\t    /* 14 utime %lu */\n\t\"%%lu \"\t    /* 15 stime %lu */\n\t\"%%ld \"\t    /* 16 cutime %ld */\n\t\"%%ld \"\t    /* 17 cstime %ld */\n\t\"%%*ld \"    /* 18 ignored:  priority %ld */\n\t\"%%*ld \"    /* 19 ignored:  nice %ld */\n\t\"%%*ld \"    /* 20 ignored:  num_threads %ld */\n\t\"%%*ld \"    /* 21 ignored:  itrealvalue %ld - no longer maintained */\n\t\"%%llu \"    /* 22 starttime (was %lu before Linux 2.6 - see proc(5) for conversion details */\n\t\"%%lu \"\t    /* 23 vsize (bytes) */\n\t\"%%ld \"\t    /* 24 rss (number of pages) */\n\t;\n\n/**\n * @brief\n *\treturns the process memory (used,free,total).\n *\n * @return\tstructure handle\n * @retval\tpointer to proc_mem_t structure \tSuccess\n * @retval\tNULL\t\t\t\t\tError\n *\n */\nproc_mem_t *\nget_proc_mem(void)\n{\n\tstatic proc_mem_t mm;\n\tFILE *fp;\n\tunsigned long m_tot, m_use, m_free;\n\tunsigned long s_tot, s_use, s_free;\n\tchar strbuf[BUFSIZ];\n\n\tif ((fp = fopen(\"/proc/meminfo\", \"r\")) == NULL)\n\t\treturn NULL;\n\n\tm_tot = m_free = s_tot = s_free = (unsigned long) 0;\n\twhile (fgets(strbuf, sizeof(strbuf), fp) != NULL) {\n\t\tsscanf(strbuf, \"MemTotal: %ld k\", &m_tot);\n\t\tsscanf(strbuf, \"MemFree: %ld k\", 
&m_free);\n\t\tsscanf(strbuf, \"SwapTotal: %ld k\", &s_tot);\n\t\tsscanf(strbuf, \"SwapFree: %ld k\", &s_free);\n\t}\n\n\t/* convert from kB to B */\n\tm_tot <<= 10;\n\tm_free <<= 10;\n\ts_tot <<= 10;\n\ts_free <<= 10;\n\tm_use = m_tot - m_free;\n\ts_use = s_tot - s_free;\n\n\tmm.total = m_tot + s_tot;\n\tmm.used = m_use + s_use;\n\tmm.free = m_free + s_free;\n\n\tfclose(fp);\n\treturn (&mm);\n}\n\n/**\n * @brief\n *\tCheck if attribute ATTR_NODE_TopologyInfo is in the global 'vnlp' structure.\n *\n * @return int\n * @retval 1\t- if ATTR_NODE_TopologyInfo is found as one of the entries in 'vnlp'.\n * @retval 0\t- otherwise, if not found or 'vnlp' is NULL.\n *\n */\nstatic int\nvnlp_has_topology_info(void)\n{\n\tint i, j;\n\n\tif (vnlp == NULL) {\n\t\treturn (0);\n\t}\n\n\tfor (i = 0; i < vnlp->vnl_used; i++) {\n\t\tvnal_t *vnalp;\n\n\t\tvnalp = VNL_NODENUM(vnlp, i);\n\n\t\tfor (j = 0; j < vnalp->vnal_used; j++) {\n\t\t\tvna_t *vnap;\n\n\t\t\tvnap = VNAL_NODENUM(vnalp, j);\n\t\t\tif (strcmp(vnap->vna_name, ATTR_NODE_TopologyInfo) == 0) {\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t}\n\t}\n\n\treturn (0);\n}\n\n/**\n * @brief\n * \tdep_topology - compute and export platform-dependent topology information\n *\n * @return\tvoid\n *\n * @par MT-Safe:\tno\n * @par Side Effects:\n *\tNone\n *\n * @par Note:\tnominally, we use the Open-MPI hardware locality (a.k.a. hwloc)\n *\t\tfunctions to export the topology information that it generates,\n *\t\tbut on Cray systems we instead export information via the\n *\t\talps_inventory() function.\n * @brief A synopsis of the function call sequence (for vnode creation).\n *\t  1. Process the System (BASIL 1.7) Query in alps_system_KNL(). This\n *\t  \tdoes not include KNL vnode creation.\n *\t  2. Process the Inventory (BASIL 1.4) Query in alps_inventory() and\n *\t\tcreate non-KNL vnodes.\n *\t  \tKNL vnodes returned by the earlier System Query (step 1) are\n *\t\tfiltered from the Inventory (1.4) response.\n *\t  3. 
Create KNL vnodes in system_to_vnodes_KNL(), using information\n *\t\tretrieved earlier in alps_system_KNL() (step 1).\n *\n * @see\talps_inventory\n * @see\tmom_topology\n * @see alps_system_KNL\n * @see system_to_vnodes_KNL\n */\nvoid\ndep_topology(void)\n{\n#if MOM_ALPS\n\t/* This function is the entry point for System Query processing. */\n\t/* Activities include making a System XML Request & handling the XML Response. */\n\talps_system_KNL();\n\t/*\n\t * The call to physmem needs to take place before the ALPS inventory\n\t * because a vnode for the \"login node\" will be created which\n\t * must have the memory set.\n\t */\n\t/* Inventory (BASIL 1.4) Query processing. */\n\t/* Create non-KNL vnodes. */\n\tif (alps_inventory() != -1) {\n\t\t/* Create KNL VNodes. */\n\t\tsystem_to_vnodes_KNL();\n\t}\n#endif\n\tif (!vnlp_has_topology_info()) {\n\t\t/* Populate \"topology_info\", only if the attribute */\n\t\t/* has not been set inside alps_inventory(). */\n\t\tmom_topology();\n\t}\n}\n\n/**\n * @brief\n *\tinitialize the platform-dependent topology information\n *\n * @return\tVoid\n *\n */\nvoid\ndep_initialize(void)\n{\n\tpagesize = getpagesize();\n\n\tif ((pdir = opendir(procfs)) == NULL) {\n\t\tlog_err(errno, __func__, \"opendir\");\n\t\treturn;\n\t}\n\n\tproc_get_btime();\n\n\t/*\n\t ** The global cpu counts are now set in ncpus()\n\t */\n\t(void) ncpus(NULL);\n\n\t(void) physmem(0); /* get memory info */\n\n\tdep_topology();\n}\n\n/**\n * @brief\n *\tclean up platform-dependent topology information\n *\n * @return\tVoid\n *\n */\nvoid\ndep_cleanup(void)\n{\n\tif (pdir) {\n\t\tclosedir(pdir);\n\t\tpdir = NULL;\n\t}\n}\n\n/**\n * @brief\n *\t Scan a list of tasks and return true if one of them matches sid\n *\n * @param[in] pjob - job pointer\n * @param[in] sid - session id\n *\n * @return\tBool\n * @retval\tTRUE\n * @retval\tFALSE\tError\n *\n */\nstatic int\ninjob(job *pjob, pid_t sid)\n{\n\ttask *ptask;\n\n\tfor (ptask = (task *) 
GET_NEXT(pjob->ji_tasks);\n\t     ptask;\n\t     ptask = (task *) GET_NEXT(ptask->ti_jobtask)) {\n\t\tif (ptask->ti_qs.ti_sid <= 1)\n\t\t\tcontinue;\n\t\tif (ptask->ti_qs.ti_sid == sid)\n\t\t\treturn TRUE;\n\t}\n\treturn FALSE;\n}\n\n/**\n * @brief\n * \tInternal session cpu time decoding routine.\n *\n * @param[in] job - a job pointer.\n *\n * @return\tunsigned long\n * @retval\tsum of all cpu time consumed for all tasks executed by the job, in seconds,\n *\t\tadjusted by cputfactor.\n *\n */\nstatic unsigned long\ncput_sum(job *pjob)\n{\n\tint i;\n\tunsigned long cputime = 0;\n\tint nps = 0;\n\tint active_tasks = 0;\n\tint taskprocs;\n\tproc_stat_t *ps;\n\ttask *ptask;\n\tunsigned long pcput, tcput;\n\n\tfor (ptask = (task *) GET_NEXT(pjob->ji_tasks);\n\t     ptask != NULL;\n\t     ptask = (task *) GET_NEXT(ptask->ti_jobtask)) {\n\n\t\t/* DEAD task */\n\t\tif (ptask->ti_qs.ti_sid <= 1) {\n\t\t\tcputime += ptask->ti_cput;\n\t\t\tcontinue;\n\t\t}\n\n\t\tactive_tasks++;\n\t\ttcput = 0;\n\t\ttaskprocs = 0;\n\t\tfor (i = 0; i < nproc; i++) {\n\t\t\tps = &proc_info[i];\n\n\t\t\t/* is this process part of the task? 
*/\n\t\t\tif (ptask->ti_qs.ti_sid != ps->session)\n\t\t\t\tcontinue;\n\n\t\t\t/*\n\t\t\t * is the owner of this process the job owner?\n\t\t\t * prevents random PID matches after reboot/restart\n\t\t\t */\n\t\t\tif (ps->uid != pjob->ji_qs.ji_un.ji_momt.ji_exuid)\n\t\t\t\tcontinue;\n\n\t\t\tnps++;\n\t\t\ttaskprocs++;\n\n\t\t\t/* don't include zombie unless it is the top proc */\n\t\t\tif ((ps->state == 'Z') && (ps->pid != ps->session) &&\n\t\t\t    (ps->ppid != mom_pid))\n\t\t\t\tcontinue;\n\n\t\t\tpcput = (ps->utime + ps->stime +\n\t\t\t\t ps->cutime + ps->cstime);\n\n\t\t\tif (pcput > num_oscpus * (sampletime_ceil + 1 - pjob->ji_qs.ji_stime) * CPUT_POSSIBLE_FACTOR) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"cput for process %d impossible (%lds > %lds * %d), ignoring\",\n\t\t\t\t\tps->pid,\n\t\t\t\t\tpcput,\n\t\t\t\t\t(sampletime_ceil + 1 - pjob->ji_qs.ji_stime),\n\t\t\t\t\tnum_oscpus);\n\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t  LOG_DEBUG, pjob->ji_qs.ji_jobid,\n\t\t\t\t\t  log_buffer);\n\t\t\t\tsampletime_floor = pjob->ji_qs.ji_stime;\n\t\t\t\tsampletime_ceil = pjob->ji_qs.ji_stime;\n\t\t\t\treturn 0;\n\n\t\t\t} else {\n\t\t\t\ttcput += pcput;\n\t\t\t}\n\n\t\t\tDBPRT((\"%s: task %8.8X ses %d pid %d cputime %lu\\n\",\n\t\t\t       __func__, ptask->ti_qs.ti_task,\n\t\t\t       ps->session, ps->pid, tcput))\n\t\t}\n\t\tif (tcput > ptask->ti_cput)\n\t\t\tptask->ti_cput = tcput;\n\t\tcputime += ptask->ti_cput;\n\t\tDBPRT((\"%s: task %8.8X cput %lu total %lu\\n\", __func__,\n\t\t       ptask->ti_qs.ti_task, ptask->ti_cput, cputime))\n\n\t\tif (taskprocs == 0) {\n\t\t\t/*\n\t\t\t * Linux seems to be able to forget about a\n\t\t\t * process on rare occasions.  
See if the\n\t\t\t * kill system call can see it.\n\t\t\t */\n\t\t\tif (kill(ptask->ti_qs.ti_sid, 0) == 0) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"active processes for task %8.8X \"\n\t\t\t\t\t\"session %d exist but are not \"\n\t\t\t\t\t\"reported in /proc\",\n\t\t\t\t\tptask->ti_qs.ti_task,\n\t\t\t\t\t(int) ptask->ti_qs.ti_sid);\n\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t  LOG_DEBUG, pjob->ji_qs.ji_jobid,\n\t\t\t\t\t  log_buffer);\n\t\t\t\t/*\n\t\t\t\t * Fake a non-zero nps so the job is not killed.\n\t\t\t\t */\n\t\t\t\tnps++;\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\t/*\n\t\t\t * Don't declare a running task exited without a small\n\t\t\t * grace time.\n\t\t\t */\n\t\t\tif ((ptask->ti_qs.ti_status == TI_STATE_RUNNING) &&\n\t\t\t    ((time_now - pjob->ji_qs.ji_stime) < 10)) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"no active processes for task %8.8X \"\n\t\t\t\t\t\"session %d exist but the job is \"\n\t\t\t\t\t\"only %ld secs old\",\n\t\t\t\t\tptask->ti_qs.ti_task,\n\t\t\t\t\t(int) ptask->ti_qs.ti_sid,\n\t\t\t\t\ttime_now - pjob->ji_qs.ji_stime);\n\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t  LOG_DEBUG, pjob->ji_qs.ji_jobid,\n\t\t\t\t\t  log_buffer);\n\t\t\t\t/*\n\t\t\t\t * Fake a non-zero nps so the job is not killed.\n\t\t\t\t */\n\t\t\t\tnps++;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"no active process for task %8.8X\",\n\t\t\t\tptask->ti_qs.ti_task);\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB,\n\t\t\t\t  LOG_INFO, pjob->ji_qs.ji_jobid,\n\t\t\t\t  log_buffer);\n\t\t\tptask->ti_qs.ti_status = TI_STATE_EXITED;\n\t\t\ttask_save(ptask);\n\t\t\texiting_tasks = 1;\n\t\t}\n\t}\n\n\tif (active_tasks == 0) {\n\t\tsprintf(log_buffer, \"no active tasks\");\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB,\n\t\t\t  LOG_INFO, pjob->ji_qs.ji_jobid, log_buffer);\n\t}\n\tif (nps == 0)\n\t\tpjob->ji_flags |= MOM_NO_PROC;\n\n\tif (cputime > num_oscpus * (sampletime_ceil + 1 - pjob->ji_qs.ji_stime) * 
CPUT_POSSIBLE_FACTOR) {\n\t\tsprintf(log_buffer,\n\t\t\t\"cput for job impossible (%lds > %lds * %d), ignoring\",\n\t\t\tcputime,\n\t\t\t(sampletime_ceil + 1 - pjob->ji_qs.ji_stime),\n\t\t\tnum_oscpus);\n\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB,\n\t\t\t  LOG_DEBUG, pjob->ji_qs.ji_jobid,\n\t\t\t  log_buffer);\n\t\tsampletime_floor = pjob->ji_qs.ji_stime;\n\t\tsampletime_ceil = pjob->ji_qs.ji_stime;\n\t\treturn 0;\n\t}\n\n\treturn ((unsigned long) ((double) cputime * cputfactor));\n}\n\n/**\n * @brief\n * \tInternal session memory usage function.\n *\n * @param[in] job - job pointer\n *\n * @return\tunsigned long\n * @retval\tthe total number of bytes of address\n *\t\tspace consumed by all current processes within the job.\n *\n */\nstatic unsigned long\nmem_sum(job *pjob)\n{\n\tint i;\n\tunsigned long segadd;\n\tproc_stat_t *ps;\n\n\tsegadd = 0;\n\n\tfor (i = 0; i < nproc; i++) {\n\n\t\tps = &proc_info[i];\n\n\t\tif (!injob(pjob, ps->session))\n\t\t\tcontinue;\n\t\tsegadd += ps->vsize;\n\t\tDBPRT((\"%s: pid: %d  pr_size: %lu  total: %lu\\n\",\n\t\t       __func__, ps->pid, (unsigned long) ps->vsize, segadd))\n\t}\n\n\treturn (segadd);\n}\n\n/**\n * @brief\n * \tInternal session workingset size function.\n *\n * @param[in] pjob - job pointer\n *\n * @return\tunsigned long\n * @retval\tnew resident set size \tSuccess\n * @retval\told resident set size\tError\n *\n */\nstatic unsigned long\nresi_sum(job *pjob)\n{\n\tint i;\n\tunsigned long resisize;\n\tproc_stat_t *ps;\n\n\tresisize = 0;\n\tfor (i = 0; i < nproc; i++) {\n\n\t\tps = &proc_info[i];\n\n\t\tif (!injob(pjob, ps->session))\n\t\t\tcontinue;\n\n\t\tresisize += ps->rss * pagesize;\n\t}\n\n\treturn (resisize);\n}\n\n/**\n * @brief\n * \tEstablish system-enforced limits for the job.\n *\n *\tRun through the resource list, checking the values for all items\n *\twe recognize.\n *\n * @param[in] pjob - job pointer\n * @param[in]  set_mode\t- setting mode\n *\n *\tIf set_mode is SET_LIMIT_SET, then also 
set hard limits for the\n *\t\t\t  system enforced limits (not-polled).\n *\tIf anything goes wrong with the process, return a PBS error code\n *\tand print a message on standard error.  A zero-length resource list\n *\tis not an error.\n *\n *\tIf set_mode is SET_LIMIT_SET the entry conditions are:\n *\t    1.\tMOM has already forked, and we are called from the child.\n *\t    2.\tThe child is still running as root.\n *\t    3.  Standard error is open to the user's file.\n *\n *\tIf set_mode is SET_LIMIT_ALTER, we are being called to modify\n *\texisting limits.  Cannot alter those set by setrlimit (kernel)\n *\tbecause we are the wrong process.\n *\n * @return\tint\n * @retval\tPBSE_NONE\tSuccess\n * @retval\tPBSE_*\t\tError\n *\n */\nint\nmom_set_limits(job *pjob, int set_mode)\n{\n\tchar *pname;\n\tint retval;\n\tunsigned long value; /* place in which to build resource value */\n\tresource *pres;\n\tstruct rlimit reslim;\n\tunsigned long mem_limit = 0;\n\tunsigned long vmem_limit = 0;\n\tunsigned long cput_limit = 0;\n\n\tDBPRT((\"%s: entered\\n\", __func__))\n\tassert(pjob != NULL);\n\tassert((get_jattr(pjob, JOB_ATR_resource))->at_type == ATR_TYPE_RESC);\n\tpres = (resource *) GET_NEXT(get_jattr_list(pjob, JOB_ATR_resource));\n\n\t/*\n\t * Cycle through all the resource specifications,\n\t * setting limits appropriately.\n\t */\n\n\t/* mem and vmem limits come from the local node limits, not the job */\n\tmem_limit = pjob->ji_hosts[pjob->ji_nodeid].hn_nrlimit.rl_mem << 10;\n\tvmem_limit = pjob->ji_hosts[pjob->ji_nodeid].hn_nrlimit.rl_vmem << 10;\n\n\twhile (pres != NULL) {\n\t\tassert(pres->rs_defin != NULL);\n\t\tpname = pres->rs_defin->rs_name;\n\t\tassert(pname != NULL);\n\t\tassert(*pname != '\\0');\n\n\t\tif (strcmp(pname, \"cput\") == 0 ||\n\t\t    strcmp(pname, \"pcput\") == 0) {\n\t\t\tretval = local_gettime(pres, &value);\n\t\t\tif (retval != PBSE_NONE)\n\t\t\t\treturn (error(pname, retval));\n\t\t\tif ((cput_limit == 0) || (value < 
cput_limit))\n\t\t\t\tcput_limit = value;\n\t\t} else if (strcmp(pname, \"pvmem\") == 0) {\n\t\t\tretval = local_getsize(pres, &value);\n\t\t\tif (retval != PBSE_NONE)\n\t\t\t\treturn (error(pname, retval));\n\t\t\tif ((vmem_limit == 0) || (value < vmem_limit))\n\t\t\t\tvmem_limit = value;\n\t\t} else if (strcmp(pname, \"pmem\") == 0) { /* set */\n\t\t\tretval = local_getsize(pres, &value);\n\t\t\tif (retval != PBSE_NONE)\n\t\t\t\treturn (error(pname, retval));\n\t\t\tif ((mem_limit == 0) || (value < mem_limit))\n\t\t\t\tmem_limit = value;\n\t\t} else if (strcmp(pname, \"walltime\") == 0) { /* Check */\n\t\t\tretval = local_gettime(pres, &value);\n\t\t\tif (retval != PBSE_NONE)\n\t\t\t\treturn (error(pname, retval));\n\t\t} else if (strcmp(pname, \"nice\") == 0) { /* set nice */\n\t\t\tif (set_mode == SET_LIMIT_SET) {\n\t\t\t\terrno = 0;\n\t\t\t\tif ((nice((int) pres->rs_value.at_val.at_long) == -1) && (errno != 0))\n\t\t\t\t\treturn (error(pname, PBSE_BADATVAL));\n\t\t\t}\n\t\t} else if (strcmp(pname, \"file\") == 0) { /* set */\n\t\t\tif (set_mode == SET_LIMIT_SET) {\n\t\t\t\tretval = local_getsize(pres, &value);\n\t\t\t\tif (retval != PBSE_NONE)\n\t\t\t\t\treturn (error(pname, retval));\n\t\t\t\treslim.rlim_cur = reslim.rlim_max = value;\n\t\t\t\tif (setrlimit(RLIMIT_FSIZE, &reslim) < 0)\n\t\t\t\t\treturn (error(pname, PBSE_SYSTEM));\n\t\t\t}\n\t\t}\n\t\tpres = (resource *) GET_NEXT(pres->rs_link);\n\t}\n\n\tif (set_mode == SET_LIMIT_SET) {\n\t\t/* if either vmem or pvmem was given, set sys limit to lesser */\n\t\tif (vmem_limit != 0) {\n\t\t\treslim.rlim_cur = reslim.rlim_max = vmem_limit;\n\t\t\tif (setrlimit(RLIMIT_AS, &reslim) < 0)\n\t\t\t\treturn (error(\"RLIMIT_AS\", PBSE_SYSTEM));\n\t\t}\n\n\t\t/* if either mem or pmem was given, set sys limit to lesser */\n\t\tif (mem_limit != 0) {\n\t\t\treslim.rlim_cur = reslim.rlim_max = mem_limit;\n\t\t\tif (setrlimit(RLIMIT_RSS, &reslim) < 0)\n\t\t\t\treturn (error(\"RLIMIT_RSS\", PBSE_SYSTEM));\n\t\t}\n\n\t\t/* if 
either cput or pcput was given, set sys limit to lesser */\n\t\tif (cput_limit != 0) {\n\t\t\treslim.rlim_cur = reslim.rlim_max =\n\t\t\t\t(unsigned long) ((double) cput_limit / cputfactor);\n\t\t\tif (setrlimit(RLIMIT_CPU, &reslim) < 0)\n\t\t\t\treturn (error(\"RLIMIT_CPU\", PBSE_SYSTEM));\n\t\t}\n\t}\n\treturn (PBSE_NONE);\n}\n\n/**\n * @brief\n * \tState whether MOM main loop has to poll this job to determine if some\n * \tlimits are being exceeded.\n *\n * @param[in] pjob - job pointer\n *\n * @return\tint\n * @retval\tTRUE\tif polling is necessary\n * @retval\tFALSE \totherwise.\n *\n * NOTE: Actual polling is done using the mom_over_limit machine-dependent function.\n *\n */\nint\nmom_do_poll(job *pjob)\n{\n\tchar *pname;\n\tresource *pres;\n\n\tDBPRT((\"%s: entered\\n\", __func__))\n\tassert(pjob != NULL);\n\tassert((get_jattr(pjob, JOB_ATR_resource))->at_type == ATR_TYPE_RESC);\n\tpres = (resource *) GET_NEXT(get_jattr_list(pjob, JOB_ATR_resource));\n\n\twhile (pres != NULL) {\n\t\tassert(pres->rs_defin != NULL);\n\t\tpname = pres->rs_defin->rs_name;\n\t\tassert(pname != NULL);\n\t\tassert(*pname != '\\0');\n\n\t\tif (strcmp(pname, \"walltime\") == 0 ||\n\t\t    strcmp(pname, \"cput\") == 0 ||\n\t\t    strcmp(pname, \"mem\") == 0 ||\n\t\t    strcmp(pname, \"vmem\") == 0 ||\n\t\t    strcmp(pname, \"ncpus\") == 0)\n\t\t\treturn (TRUE);\n\t\tpres = (resource *) GET_NEXT(pres->rs_link);\n\t}\n\n\treturn (FALSE);\n}\n\n/**\n * @brief\n * \tSetup for polling.\n *\tOpen kernel device and get namelist info.\n *\n * @return\tint\n * @retval\tPBSE_NONE\t\tSuccess\n * @retval\tPBSE_SYSTEM\t\tError\n *\n */\nint\nmom_open_poll(void)\n{\n\tDBPRT((\"%s: entered\\n\", __func__))\n\tpagesize = getpagesize();\n\tproc_info = (proc_stat_t *) malloc(sizeof(proc_stat_t) * TBL_INC);\n\tif (proc_info == NULL) {\n\t\tlog_err(errno, __func__, \"malloc\");\n\t\treturn (PBSE_SYSTEM);\n\t}\n\tmax_proc = TBL_INC;\n\n\treturn (PBSE_NONE);\n}\n\n/**\n * @brief\n * \tDeclare start of 
polling loop.\n *\n * @return\tint\n * @retval\tPBSE_INTERNAL\tthe /proc directory stream pdir is NULL\n * @retval\tPBSE_NONE\tSuccess\n *\n */\nint\nmom_get_sample(void)\n{\n\tstruct dirent *dent = NULL;\n\tFILE *fd = NULL;\n\tstatic char path[MAXPATHLEN + 1];\n\tchar procname[MAXPATHLEN + 1]; /* space for dent->d_name plus extra */\n\tchar procid[MAXPATHLEN + 1];\n\tstruct stat sb;\n\tproc_stat_t *ps = NULL;\n\tint nprocs = 0;\n\tint ncached = 0;\n\tint ncantstat = 0;\n\tint nnomem = 0;\n\tunsigned long long starttime;\n\tint nskipped = 0;\n\textern time_t time_last_sample;\n\tchar *stat_str = NULL;\n\n\t/* There are no job tasks created in mock run mode, so no need to walk the proc table */\n\tif (mock_run)\n\t\treturn PBSE_NONE;\n\n\tDBPRT((\"%s: entered\\n\", __func__))\n\tif (pdir == NULL)\n\t\treturn PBSE_INTERNAL;\n\n\trewinddir(pdir);\n\tnproc = 0;\n\tfd = NULL;\n\tif (hz == 0)\n\t\thz = sysconf(_SC_CLK_TCK);\n\ttime_last_sample = time(0);\n\tsampletime_floor = time_last_sample;\n\twhile (errno = 0, (dent = readdir(pdir)) != NULL) {\n\t\tint nomem = 0;\n\t\tstruct stat sbuf;\n\n\t\tnprocs++;\n\n\t\t/*\n\t\t ** Check to see if we have /proc/pid or /proc/.pid\n\t\t */\n\t\tif (!isdigit(dent->d_name[0])) {\n\t\t\tif (dent->d_name[0] == '.' 
&& isdigit(dent->d_name[1])) {\n\t\t\t\tnomem = 1;\n\t\t\t\tnnomem++;\n\t\t\t} else\n\t\t\t\tcontinue;\n\t\t}\n\t\tsnprintf(procid, sizeof(procid), \"/proc/%s\", dent->d_name);\n\t\tif ((stat(procid, &sbuf) == -1) || (sbuf.st_uid == 0)) {\n\t\t\t/* ignore root-owned processes */\n\t\t\tnskipped++;\n\t\t\tcontinue;\n\t\t}\n\t\tsnprintf(procname, sizeof(procname), \"/proc/%s/stat\", dent->d_name);\n\n\t\tif ((fd = fopen(procname, \"r\")) == NULL) {\n\t\t\tncantstat++;\n\t\t\tcontinue;\n\t\t}\n\n\t\tps = &proc_info[nproc];\n\t\tstat_str = choose_procflagsfmt();\n\t\tif (stat_str == NULL) {\n\t\t\tlog_err(errno, __func__, \"choose_procflagsfmt allocation failed\");\n\t\t\treturn PBSE_INTERNAL;\n\t\t}\n\t\tif (fscanf(fd, stat_str,\n\t\t\t   &ps->pid,\t /* \"%d \"\t1  pid %d The process id */\n\t\t\t   path,\t /* \"(%[^)]) \"\t2  comm %s The filename of the executable */\n\t\t\t   &ps->state,\t /* \"%c \"\t3  state %c \"RSDZTW\" */\n\t\t\t   &ps->ppid,\t /* \"%d \"\t4  ppid %d The PID of the parent */\n\t\t\t   &ps->pgrp,\t /* \"%d \"\t5  pgrp %d The process group ID */\n\t\t\t   &ps->session, /* \"%d \"\t6  session %d The session ID */\n\t\t\t   /* \"%*d \"\t7  ignored:  tty_nr */\n\t\t\t   /* \"%*d \"\t8  ignored:  tpgid */\n\t\t\t   &ps->flags, /* \"%u or %lu\"\t9  flags */\n\t\t\t   /* \"%*lu \"\t10 ignored:  minflt */\n\t\t\t   /* \"%*lu \"\t11 ignored:  cminflt */\n\t\t\t   /* \"%*lu \"\t12 ignored:  majflt */\n\t\t\t   /* \"%*lu \"\t13 ignored:  cmajflt */\n\t\t\t   &ps->utime,\t/* \"%lu \"\t14 utime %lu */\n\t\t\t   &ps->stime,\t/* \"%lu \"\t15 stime %lu */\n\t\t\t   &ps->cutime, /* \"%ld \"\t16 cutime %ld */\n\t\t\t   &ps->cstime, /* \"%ld \"\t17 cstime %ld */\n\t\t\t   /* \"%*ld \"\t18 ignored:  priority %ld */\n\t\t\t   /* \"%*ld \"\t19 ignored:  nice %ld */\n\t\t\t   /* \"%*ld \"\t20 ignored:  num_threads %ld */\n\t\t\t   /* \"%*ld \"\t21 ignored:  itrealvalue %ld - no longer maintained */\n\t\t\t   &starttime, /* \"%llu \"\t22 starttime (was %lu before Linux 
2.6 - see proc(5) for conversion details */\n\t\t\t   &ps->vsize, /* \"%lu \"\t23 vsize (bytes) */\n\t\t\t   &ps->rss    /* \"%ld \"\t24 rss (number of pages) */\n\t\t\t   ) != 14) {\n\t\t\tncantstat++;\n\t\t\tfclose(fd);\n\t\t\tcontinue;\n\t\t}\n\n\t\tif (fstat(fileno(fd), &sb) == -1) {\n\t\t\tfclose(fd);\n\t\t\tcontinue;\n\t\t}\n\t\tps->uid = sb.st_uid;\n\t\tfclose(fd);\n\n\t\t/*\n\t\t ** A .pid thread shows the memory of the process\n\t\t ** but we only want to count it once.\n\t\t */\n\t\tif (nomem) {\n\t\t\tps->vsize = 0;\n\t\t\tps->rss = 0;\n\t\t}\n\n\t\tps->start_time = linux_time + (starttime / hz);\n\t\tsnprintf(ps->comm, sizeof(ps->comm), \"%.*s\",\n\t\t\t (int) (sizeof(ps->comm) - 1), path);\n\n\t\tps->utime = JTOS(ps->utime);\n\t\tps->stime = JTOS(ps->stime);\n\t\tps->cutime = JTOS(ps->cutime);\n\t\tps->cstime = JTOS(ps->cstime);\n\t\tif (++nproc == max_proc) {\n\t\t\tvoid *hold;\n\t\t\tDBPRT((\"%s: alloc more proc table space %d\\n\", __func__, nproc))\n\t\t\tmax_proc += TBL_INC;\n\t\t\thold = realloc((void *) proc_info,\n\t\t\t\t       max_proc * sizeof(proc_stat_t));\n\t\t\tassert(hold != NULL);\n\t\t\tproc_info = (proc_stat_t *) hold;\n\t\t}\n\t}\n\tif (errno != 0 && errno != ENOENT)\n\t\tlog_err(errno, __func__, \"readdir\");\n\tsampletime_ceil = time_last_sample;\n\tsprintf(log_buffer,\n\t\t\"nprocs:  %d, cantstat:  %d, nomem:  %d, skipped:  %d, \"\n\t\t\"cached:  %d\",\n\t\tnprocs - 2, ncantstat, nnomem, nskipped,\n\t\tncached);\n\tlog_event(PBSEVENT_DEBUG4, 0, LOG_DEBUG, __func__, log_buffer);\n\treturn (PBSE_NONE);\n}\n\n/**\n * @brief\n * \tUpdate the resources used.<attributes> of a job.\n *\n * @param[in]\tpjob - job in question.\n *\n * @note\n *\tThe first time this is called for a job, set up resource entries for\n *\teach resource that can be reported for this machine.  Fill in the\n *\tcorrect values.\n *\tIf a resource attribute has been set in a mom hook, then its value\n *\twill not be updated here. 
This allows a mom  hook to override\n *\tresource value.\n *\n * @return int\n * @retval PBSE_NONE\tfor success.\n */\nint\nmom_set_use(job *pjob)\n{\n\tresource *pres;\n\tresource *pres_req;\n\tattribute *at;\n\tattribute *at_req;\n\tresource_def *rd;\n\tu_Long *lp_sz, lnum_sz;\n\tunsigned long *lp, lnum, oldcput;\n\tlong ncpus_req;\n\n\tassert(pjob != NULL);\n\tat = get_jattr(pjob, JOB_ATR_resc_used);\n\tassert(at->at_type == ATR_TYPE_RESC);\n\n\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_Suspend) != 0)\n\t\treturn (PBSE_NONE); /* job suspended, don't track it */\n\n\tDBPRT((\"%s: entered %s\\n\", __func__, pjob->ji_qs.ji_jobid))\n\n\tat->at_flags |= (ATR_VFLAG_MODIFY | ATR_VFLAG_SET);\n\n\trd = &svr_resc_def[RESC_NCPUS];\n\tpres = find_resc_entry(at, rd);\n\tif (pres == NULL) {\n\t\tpres = add_resource_entry(at, rd);\n\t\tmark_attr_set(&pres->rs_value);\n\t\tpres->rs_value.at_type = ATR_TYPE_LONG;\n\n\t\t/*\n\t\t * get pointer to list of resources *requested* for the job\n\t\t * so the ncpus used can be set to ncpus requested\n\t\t */\n\t\tat_req = get_jattr(pjob, JOB_ATR_resource);\n\t\tassert(at->at_type == ATR_TYPE_RESC);\n\n\t\tpres_req = find_resc_entry(at_req, rd);\n\t\tif ((pres_req != NULL) &&\n\t\t    ((ncpus_req = pres_req->rs_value.at_val.at_long) != 0))\n\t\t\tpres->rs_value.at_val.at_long = ncpus_req;\n\t\telse\n\t\t\tpres->rs_value.at_val.at_long = 0;\n\t}\n\n\trd = &svr_resc_def[RESC_CPUT];\n\tpres = find_resc_entry(at, rd);\n\tif (pres == NULL) {\n\t\tpres = add_resource_entry(at, rd);\n\t\tmark_attr_set(&pres->rs_value);\n\t\tpres->rs_value.at_type = ATR_TYPE_LONG;\n\t\tpres->rs_value.at_val.at_long = 0;\n\t}\n\tlp = (unsigned long *) &pres->rs_value.at_val.at_long;\n\toldcput = *lp;\n\tlnum = cput_sum(pjob);\n\tlnum = MAX(*lp, lnum);\n\tif ((pres->rs_value.at_flags & ATR_VFLAG_HOOK) == 0) {\n\t\t/* don't conflict with hook setting a value */\n\t\t*lp = lnum;\n\t}\n\n\trd = &svr_resc_def[RESC_CPUPERCENT];\n\tpres = find_resc_entry(at, rd);\n\tif 
(pres == NULL) {\n\t\tpres = add_resource_entry(at, rd);\n\t\tmark_attr_set(&pres->rs_value);\n\t\tpres->rs_value.at_type = ATR_TYPE_LONG;\n\t\tpres->rs_value.at_val.at_long = 0;\n\t}\n\tif ((pres->rs_value.at_flags & ATR_VFLAG_HOOK) == 0) {\n\t\t/* now calculate weighted moving average cpu usage */\n\t\t/* percentage */\n\t\tcalc_cpupercent(pjob, oldcput, lnum, sampletime_ceil);\n\t}\n\tpjob->ji_sampletim = sampletime_floor;\n\n\trd = &svr_resc_def[RESC_VMEM];\n\tpres = find_resc_entry(at, rd);\n\tif (pres == NULL) {\n\t\tpres = add_resource_entry(at, rd);\n\t\tmark_attr_set(&pres->rs_value);\n\t\tpres->rs_value.at_type = ATR_TYPE_SIZE;\n\t\tpres->rs_value.at_val.at_size.atsv_shift = 10; /* KB */\n\t\tpres->rs_value.at_val.at_size.atsv_units = ATR_SV_BYTESZ;\n\t} else if ((pres->rs_value.at_flags & ATR_VFLAG_HOOK) == 0) {\n\t\tlp_sz = &pres->rs_value.at_val.at_size.atsv_num;\n\t\tlnum_sz = (mem_sum(pjob) + 1023) >> 10; /* as KB */\n\t\t*lp_sz = MAX(*lp_sz, lnum_sz);\n\t}\n\n\t/* update walltime usage */\n\tupdate_walltime(pjob);\n\n\trd = &svr_resc_def[RESC_MEM];\n\tpres = find_resc_entry(at, rd);\n\tif (pres == NULL) {\n\t\tpres = add_resource_entry(at, rd);\n\t\tmark_attr_set(&pres->rs_value);\n\t\tpres->rs_value.at_type = ATR_TYPE_SIZE;\n\t\tpres->rs_value.at_val.at_size.atsv_shift = 10; /* KB */\n\t\tpres->rs_value.at_val.at_size.atsv_units = ATR_SV_BYTESZ;\n\t} else if ((pres->rs_value.at_flags & ATR_VFLAG_HOOK) == 0) {\n\t\tlp_sz = &pres->rs_value.at_val.at_size.atsv_num;\n\t\tlnum_sz = (resi_sum(pjob) + 1023) >> 10; /* as KB */\n\t\t*lp_sz = MAX(*lp_sz, lnum_sz);\n\t}\n\n\treturn (PBSE_NONE);\n}\n\n/**\n * @brief\n * \tbld_ptree - establish links (parent, child, and sibling) for processes\n * \tin a given session.\n *\n *\tThe PBS_PROC_* macros are defined in resmom/.../mom_mach.h\n *\tto refer to the correct machine dependent table.\n *\tLinkage scope changed from static to default as this gets referred\n *\tfrom scan_for_terminated(), declaration\tadded 
in the mom_mach.h.\n *\n * @param[in] sid - session id\n *\n * @return\tint\n * @retval\tnumber of processes in session\tSuccess\n *\n */\nint\nbld_ptree(pid_t sid)\n{\n\tint myproc_ct; /* count of processes in a session */\n\tint i, j;\n\n\tif (Proc_lnks == NULL) {\n\t\tProc_lnks = (pbs_plinks *) malloc(TBL_INC * sizeof(pbs_plinks));\n\t\tassert(Proc_lnks != NULL);\n\t\tmyproc_max = TBL_INC;\n\t}\n\n\t/*\n\t * Build links for processes in the session in question.\n\t * First, load with the processes in the session.\n\t */\n\n\tmyproc_ct = 0;\n\tfor (i = 0; i < nproc; i++) {\n\t\tif (PBS_PROC_PID(i) <= 1)\n\t\t\tcontinue;\n\t\tif ((int) PBS_PROC_SID(i) == sid) {\n\t\t\tProc_lnks[myproc_ct].pl_pid = PBS_PROC_PID(i);\n\t\t\tProc_lnks[myproc_ct].pl_ppid = PBS_PROC_PPID(i);\n\t\t\tProc_lnks[myproc_ct].pl_parent = -1;\n\t\t\tProc_lnks[myproc_ct].pl_sib = -1;\n\t\t\tProc_lnks[myproc_ct].pl_child = -1;\n\t\t\tProc_lnks[myproc_ct].pl_done = 0;\n\t\t\tif (++myproc_ct == myproc_max) {\n\t\t\t\tvoid *hold;\n\n\t\t\t\tmyproc_max += TBL_INC;\n\t\t\t\thold = realloc((void *) Proc_lnks,\n\t\t\t\t\t       myproc_max * sizeof(pbs_plinks));\n\t\t\t\tassert(hold != NULL);\n\t\t\t\tProc_lnks = (pbs_plinks *) hold;\n\t\t\t}\n\t\t}\n\t}\n\n\t/* Now build the tree for those processes */\n\n\tfor (i = 0; i < myproc_ct; i++) {\n\t\t/*\n\t\t * Find all the children for this process, establish links.\n\t\t */\n\t\tfor (j = 0; j < myproc_ct; j++) {\n\t\t\tif (j == i)\n\t\t\t\tcontinue;\n\t\t\tif (Proc_lnks[j].pl_ppid == Proc_lnks[i].pl_pid) {\n\t\t\t\tProc_lnks[j].pl_parent = i;\n\t\t\t\tProc_lnks[j].pl_sib = Proc_lnks[i].pl_child;\n\t\t\t\tProc_lnks[i].pl_child = j;\n\t\t\t}\n\t\t}\n\t}\n\treturn (myproc_ct); /* number of processes in session */\n}\n\n/**\n * @brief\n * \tkill_ptree - traverse the process tree, killing the processes as we go\n *\n * @param[in]\tidx:\tcurrent pid index\n * @param[in]\tflag:\ttraverse order, top down (1) or bottom up (0)\n * @param[in]\tsig:\tthe signal to 
send\n *\n * @return\tVoid\n *\n */\nstatic void\nkill_ptree(int idx, int flag, int sig)\n{\n\tpid_t child;\n\n\tif (flag && !Proc_lnks[idx].pl_done) { /* top down */\n\t\tDBPRT((\"%s: top down %d\\n\", __func__, Proc_lnks[idx].pl_pid));\n\t\t(void) kill(Proc_lnks[idx].pl_pid, sig);\n\t\tProc_lnks[idx].pl_done = 1;\n\t}\n\tchild = Proc_lnks[idx].pl_child;\n\twhile (child != -1) {\n\t\tkill_ptree(child, flag, sig);\n\t\tchild = Proc_lnks[child].pl_sib;\n\t}\n\tif (!flag && !Proc_lnks[idx].pl_done) { /* bottom up */\n\t\tDBPRT((\"%s: bottom up %d\\n\", __func__, Proc_lnks[idx].pl_pid));\n\t\t(void) kill(Proc_lnks[idx].pl_pid, sig);\n\t\tProc_lnks[idx].pl_done = 1;\n\t}\n}\n/**\n * @brief\n *\tkill task session\n *\n * @param[in] ptask - pointer to pbs_task structure\n * @param[in] sig - signal number\n * @param[in] dir - indication how to kill\n *\t\t    0 - kill child first\n *\t\t    1 - kill parent first\n *\n * @return\tint\n * @retval\tnumber of tasks\n *\n */\nint\nkill_task(pbs_task *ptask, int sig, int dir)\n{\n\treturn kill_session(ptask->ti_qs.ti_sid, sig, dir);\n}\n\n/**\n * @brief\n *\tKill a task session.\n *\tCall with the task pointer and a signal number.\n *\n * @param[in] sesid - session id\n * @param[in] sig - signal number\n * @param[in] dir - indication how to kill\n *                  0 - kill child first\n *\t            1 - kill parent first\n *\n * @return\tint\n * @retval      number of tasks\n *\n */\nint\nkill_session(pid_t sesid, int sig, int dir)\n{\n\tint ct = 0;\n\tint i;\n\n\tDBPRT((\"%s: entered sid %d\\n\", __func__, sesid))\n\tif (sesid <= 1)\n\t\treturn 0;\n\n\t(void) mom_get_sample();\n\tct = bld_ptree(sesid);\n\tDBPRT((\"%s: bld_ptree %d\\n\", __func__, ct))\n\n\t/*\n\t ** Find index into the Proc_lnks table for the session lead.\n\t */\n\tfor (i = 0; i < ct; i++) {\n\t\tif (Proc_lnks[i].pl_pid == sesid) {\n\t\t\tkill_ptree(i, dir, sig);\n\t\t\tbreak;\n\t\t}\n\t}\n\t/*\n\t ** Do a linear pass.\n\t */\n\tfor (i = 0; i < ct; i++) 
{\n\t\tif (Proc_lnks[i].pl_done)\n\t\t\tcontinue;\n\t\tDBPRT((\"%s: cleanup %d\\n\", __func__, Proc_lnks[i].pl_pid))\n\t\tkill(Proc_lnks[i].pl_pid, sig);\n\t}\n\n\t/*\n\t ** Kill the process group in case anything was missed reading /proc\n\t */\n\tif ((sig == SIGKILL) || (ct == 0))\n\t\tkillpg(sesid, sig);\n\n\treturn ct;\n}\n\n/**\n * @brief\n *\tClean up everything related to polling.\n *\n * @return\tint\n * @retval\tPBSE_NONE\tSuccess\n * @retval\tPBSE_SYSTEM\tError\n *\n */\nint\nmom_close_poll(void)\n{\n\tDBPRT((\"%s: entered\\n\", __func__))\n\tif (pdir) {\n\t\tif (closedir(pdir) != 0) {\n\t\t\tlog_err(errno, __func__, \"closedir\");\n\t\t\treturn (PBSE_SYSTEM);\n\t\t}\n\t\tpdir = NULL;\n\t}\n\tif (proc_info) {\n\t\t(void) free(proc_info);\n\t\tproc_info = NULL;\n\t\tmax_proc = 0;\n\t}\n\n\treturn (PBSE_NONE);\n}\n\n/**\n * @brief\n * \tCheckpoint the job.\n *\n * @param[in] ptask - pointer to task\n * @param[in] file - filename\n * @param[in] abort - value indicating abort\n *\n * If abort is true, kill it too.\n *\n * @return\tint\n * @retval\t-1\n */\nint\nmach_checkpoint(task *ptask, char *file, int abort)\n{\n\treturn (-1);\n}\n\n/**\n * @brief\n * \tRestart the job from the checkpoint file.\n *\n * @param[in] ptask - pointer to task\n * @param[in] file - filename\n *\n * @return      long\n * @retval      session id\tSuccess\n * @retval\t-1\t\tError\n */\nlong\nmach_restart(task *ptask, char *file)\n{\n\treturn (-1);\n}\n\n/**\n * @brief\n *\tReturn 1 if proc table can be read, 0 otherwise.\n */\nint\ngetprocs(void)\n{\n\tstatic unsigned int lastproc = 0;\n\n\tif (lastproc == reqnum) /* don't need new proc table */\n\t\treturn 1;\n\n\tif (mom_get_sample() != PBSE_NONE)\n\t\treturn 0;\n\n\tlastproc = reqnum;\n\treturn 1;\n}\n\n#define dsecs(val) ((double) (val))\n\n/**\n * @brief\n *\tcomputes and returns the cpu time of all processes in the session with id jobid\n *\n * @param[in] jobid - session id of the job\n *\n * @return\tstring\n * @retval\tcputime\t\tSuccess\n * 
@retval\tNULL\t\tError\n *\n */\nchar *\ncput_job(pid_t jobid)\n{\n\tint i;\n\tint found = 0;\n\tdouble cputime, addtime;\n\tproc_stat_t *ps;\n\n\tcputime = 0.0;\n\tfor (i = 0; i < nproc; i++) {\n\n\t\tps = &proc_info[i];\n\t\tif (jobid != ps->session)\n\t\t\tcontinue;\n\n\t\tfound = 1;\n\t\taddtime = dsecs(ps->cutime) + dsecs(ps->cstime);\n\n\t\tcputime += addtime;\n\t\tDBPRT((\"%s: total %.2f pid %d %.2f\\n\", __func__, cputime,\n\t\t       ps->pid, addtime))\n\t}\n\tif (found) {\n\t\tsprintf(ret_string, \"%.2f\", cputime * cputfactor);\n\t\treturn ret_string;\n\t}\n\n\trm_errno = RM_ERR_EXIST;\n\treturn NULL;\n}\n\n/**\n * @brief\n *      computes and returns the cpu time process with  pid pid.\n *\n * @param[in] pid - process id\n *\n * @return      string\n * @retval      cputime         Success\n * @retval      NULL            Error\n *\n */\nchar *\ncput_proc(pid_t pid)\n{\n\tint i;\n\tdouble cputime;\n\tproc_stat_t *ps = NULL;\n\n\tmom_get_sample();\n\tfor (i = 0; i < nproc; i++) {\n\t\tps = &proc_info[i];\n\t\tif (ps->pid == pid)\n\t\t\tbreak;\n\t}\n\tif (i == nproc) {\n\t\trm_errno = RM_ERR_EXIST;\n\t\treturn NULL;\n\t}\n\tcputime = dsecs(ps->utime) + dsecs(ps->stime);\n\n\tsprintf(ret_string, \"%.2f\", cputime * cputfactor);\n\treturn ret_string;\n}\n\n/**\n * @brief\n *\twrapper function for cput_proc and cput_job.\n *\n * @param[in] attrib - pointer to rm_attribute structure\n *\n * @return\tstring\n * @retval\tcputime\t\tSuccess\n * @retval\tNULL\t\tERRor\n *\n */\nchar *\ncput(struct rm_attribute *attrib)\n{\n\tint value;\n\n\tif (attrib == NULL) {\n\t\tlog_err(-1, __func__, no_parm);\n\t\trm_errno = RM_ERR_NOPARAM;\n\t\treturn NULL;\n\t}\n\tif ((value = atoi(attrib->a_value)) == 0) {\n\t\tsprintf(log_buffer, \"bad param: %s\", attrib->a_value);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n\tif (momgetattr(NULL)) {\n\t\tlog_err(-1, __func__, extra_parm);\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn 
NULL;\n\t}\n\n\tif (strcmp(attrib->a_qualifier, \"session\") == 0)\n\t\treturn (cput_job((pid_t) value));\n\telse if (strcmp(attrib->a_qualifier, \"proc\") == 0)\n\t\treturn (cput_proc((pid_t) value));\n\telse {\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n}\n\n/**\n * @brief\n *      computes and returns the memory for session with  pid sid..\n *\n * @param[in] sid - process id\n *\n * @return      string\n * @retval      memsize         Success\n * @retval      NULL            Error\n *\n */\nchar *\nmem_job(pid_t sid)\n{\n\tunsigned long memsize;\n\tint i;\n\tproc_stat_t *ps;\n\n\tmemsize = 0;\n\n\tmom_get_sample();\n\tfor (i = 0; i < nproc; i++) {\n\n\t\tps = &proc_info[i];\n\n\t\tif (sid != ps->session)\n\t\t\tcontinue;\n\t\tmemsize += ps->vsize;\n\t}\n\n\tif (memsize == 0) {\n\t\trm_errno = RM_ERR_EXIST;\n\t\treturn NULL;\n\t} else {\n\t\tsprintf(ret_string, \"%lukb\", memsize >> 10); /* KB */\n\t\treturn ret_string;\n\t}\n}\n\n/**\n * @brief\n *      computes and returns the memory for process with  pid sid..\n *\n * @param[in] pid - process id\n *\n * @return      string\n * @retval      memsize         Success\n * @retval      NULL            Error\n *\n */\nchar *\nmem_proc(pid_t pid)\n{\n\tint i;\n\tproc_stat_t *ps = NULL;\n\n\tmom_get_sample();\n\tfor (i = 0; i < nproc; i++) {\n\t\tps = &proc_info[i];\n\t\tif (ps->pid == pid)\n\t\t\tbreak;\n\t}\n\tif (i == nproc) {\n\t\trm_errno = RM_ERR_SYSTEM;\n\t\treturn NULL;\n\t}\n\n\tsprintf(ret_string, \"%lukb\", (unsigned long) ps->vsize >> 10); /* KB */\n\treturn ret_string;\n}\n\n/**\n * @brief\n *      wrapper function for mem_job and mem_proc..\n *\n * @param[in] attrib - pointer to rm_attribute structure\n *\n * @return      string\n * @retval      memsize         Success\n * @retval      NULL            ERRor\n *\n */\nchar *\nmem(struct rm_attribute *attrib)\n{\n\tint value;\n\n\tif (attrib == NULL) {\n\t\tlog_err(-1, __func__, no_parm);\n\t\trm_errno = RM_ERR_NOPARAM;\n\t\treturn 
NULL;\n\t}\n\tif ((value = atoi(attrib->a_value)) == 0) {\n\t\tsprintf(log_buffer, \"bad param: %s\", attrib->a_value);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n\tif (momgetattr(NULL)) {\n\t\tlog_err(-1, __func__, extra_parm);\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n\n\tif (strcmp(attrib->a_qualifier, \"session\") == 0)\n\t\treturn (mem_job((pid_t) value));\n\telse if (strcmp(attrib->a_qualifier, \"proc\") == 0)\n\t\treturn (mem_proc((pid_t) value));\n\telse {\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n}\n\n/**\n * @brief\n *\tcomputes and returns resident set size for job\n *\n * @param[in] jobid - pid for job\n *\n * @return\tstring\n * @retval\tresident set size\tSuccess\n * @retval\tNULL\t\t\tError\n *\n */\nstatic char *\nresi_job(pid_t jobid)\n{\n\tint i;\n\tunsigned long resisize;\n\tint found = 0;\n\tproc_stat_t *ps;\n\n\tresisize = 0;\n\tmom_get_sample();\n\n\tfor (i = 0; i < nproc; i++) {\n\n\t\tps = &proc_info[i];\n\n\t\tif (jobid != ps->session)\n\t\t\tcontinue;\n\n\t\tfound = 1;\n\t\tresisize += ps->rss;\n\t}\n\tif (found) {\n\t\t/* in KB */\n\t\tsprintf(ret_string, \"%lukb\", (resisize * (unsigned long) pagesize) >> 10);\n\t\treturn ret_string;\n\t}\n\n\trm_errno = RM_ERR_EXIST;\n\treturn NULL;\n}\n\n/**\n * @brief\n *      computes and returns resident set size for process\n *\n * @param[in] pid - process id\n *\n * @return      string\n * @retval      resident set size       Success\n * @retval      NULL                    Error\n *\n */\nstatic char *\nresi_proc(pid_t pid)\n{\n\tint i;\n\tproc_stat_t *ps = NULL;\n\n\tmom_get_sample();\n\tfor (i = 0; i < nproc; i++) {\n\t\tps = &proc_info[i];\n\t\tif (ps->pid == pid)\n\t\t\tbreak;\n\t}\n\tif (i == nproc) {\n\t\trm_errno = RM_ERR_EXIST;\n\t\treturn NULL;\n\t}\n\t/* in KB */\n\tsprintf(ret_string, \"%lukb\", ((unsigned long) ps->rss * (unsigned long) pagesize) >> 10);\n\treturn ret_string;\n}\n\n/**\n * @brief\n *   
   wrapper function for resi_job and resi_proc.\n *\n * @param[in] attrib - pointer to rm_attribute structure\n *\n * @return      string\n * @retval      resident set size     \tSuccess\n * @retval      NULL            \tError\n *\n */\nstatic char *\nresi(struct rm_attribute *attrib)\n{\n\tint value;\n\n\tif (attrib == NULL) {\n\t\tlog_err(-1, __func__, no_parm);\n\t\trm_errno = RM_ERR_NOPARAM;\n\t\treturn NULL;\n\t}\n\tif ((value = atoi(attrib->a_value)) == 0) {\n\t\tsprintf(log_buffer, \"bad param: %s\", attrib->a_value);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n\tif (momgetattr(NULL)) {\n\t\tlog_err(-1, __func__, extra_parm);\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n\n\tif (strcmp(attrib->a_qualifier, \"session\") == 0)\n\t\treturn (resi_job((pid_t) value));\n\telse if (strcmp(attrib->a_qualifier, \"proc\") == 0)\n\t\treturn (resi_proc((pid_t) value));\n\telse {\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n}\n\n/**\n * @brief\n *\treturns the number of sessions\n *\n * @param[in] attrib - pointer to rm_attribute structure\n *\n * @return\tstring\n * @retval\tsessions\tSuccess\n * @retval\tNULL\t\terror\n *\n */\nchar *\nsessions(struct rm_attribute *attrib)\n{\n\tchar *fmt;\n\tint i, j;\n\tproc_stat_t *ps;\n\tint njids = 0;\n\tpid_t *jids, *hold;\n\tstatic int maxjid = 200;\n\tregister pid_t jobid;\n\n\tif (attrib) {\n\t\tlog_err(-1, __func__, extra_parm);\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n\tif ((jids = (pid_t *) calloc(maxjid, sizeof(pid_t))) == NULL) {\n\t\tlog_err(errno, __func__, \"no memory\");\n\t\trm_errno = RM_ERR_SYSTEM;\n\t\treturn NULL;\n\t}\n\n\tmom_get_sample();\n\n\t/*\n\t ** Search for members of session\n\t */\n\tfor (i = 0; i < nproc; i++) {\n\t\tps = &proc_info[i];\n\n\t\tif (ps->uid == 0)\n\t\t\tcontinue;\n\t\tif ((jobid = ps->session) == 0)\n\t\t\tcontinue;\n\t\tDBPRT((\"%s[%d]: pid %d sid %d\\n\",\n\t\t       __func__, njids, ps->pid, 
jobid))\n\n\t\tfor (j = 0; j < njids; j++) {\n\t\t\tif (jids[j] == jobid)\n\t\t\t\tbreak;\n\t\t}\n\t\tif (j == njids) {\t       /* not found */\n\t\t\tif (njids == maxjid) { /* need more space */\n\t\t\t\tmaxjid += 100;\n\t\t\t\t/* realloc size is in bytes, not elements */\n\t\t\t\thold = (pid_t *) realloc(jids, maxjid * sizeof(pid_t));\n\t\t\t\tif (hold == NULL) {\n\t\t\t\t\tlog_err(errno, __func__, \"realloc\");\n\t\t\t\t\trm_errno = RM_ERR_SYSTEM;\n\t\t\t\t\tfree(jids);\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n\t\t\t\tjids = hold;\n\t\t\t}\n\t\t\tjids[njids++] = jobid; /* add jobid to list */\n\t\t}\n\t}\n\n\tfmt = ret_string;\n\tfor (j = 0; j < njids; j++) {\n\t\tcheckret(&fmt, 100);\n\t\tsprintf(fmt, \" %d\", (int) jids[j]);\n\t\tfmt += strlen(fmt);\n\t}\n\tfree(jids);\n\treturn ret_string;\n}\n\n/**\n * @brief\n *\twrapper function for sessions().\n *\n * @param[in] attrib - pointer to rm_attribute structure\n *\n * @return      string\n * @retval      sessions        Success\n * @retval      0           \terror\n *\n */\nchar *\nnsessions(struct rm_attribute *attrib)\n{\n\tchar *result, *ch;\n\tint num = 0;\n\n\tif ((result = sessions(attrib)) == NULL)\n\t\treturn result;\n\n\tfor (ch = result; *ch; ch++) {\n\t\tif (*ch == ' ') /* count blanks */\n\t\t\tnum++;\n\t}\n\tsprintf(ret_string, \"%d\", num);\n\treturn ret_string;\n}\n\n/**\n * @brief\n *      returns the number of processes in session\n *\n * @param[in] attrib - pointer to rm_attribute structure\n *\n * @return      string\n * @retval      processes       Success\n * @retval      NULL            error\n *\n */\nchar *\npids(struct rm_attribute *attrib)\n{\n\tchar *fmt;\n\tint i;\n\tpid_t jobid;\n\tproc_stat_t *ps;\n\tint num_pids;\n\n\tif (attrib == NULL) {\n\t\tlog_err(-1, __func__, no_parm);\n\t\trm_errno = RM_ERR_NOPARAM;\n\t\treturn NULL;\n\t}\n\tif ((jobid = (pid_t) atoi(attrib->a_value)) == 0) {\n\t\tsprintf(log_buffer, \"bad param: %s\", attrib->a_value);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n\tif 
(momgetattr(NULL)) {\n\t\tlog_err(-1, __func__, extra_parm);\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n\n\tif (strcmp(attrib->a_qualifier, \"session\") != 0) {\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n\n\tmom_get_sample();\n\n\t/*\n\t ** Search for members of session\n\t */\n\tfmt = ret_string;\n\tnum_pids = 0;\n\n\tfor (i = 0; i < nproc; i++) {\n\n\t\tps = &proc_info[i];\n\t\tDBPRT((\"%s[%d]: pid: %d sid %d\\n\",\n\t\t       __func__, num_pids, ps->pid, ps->session))\n\t\tif (jobid != ps->session)\n\t\t\tcontinue;\n\n\t\tsprintf(fmt, \"%d \", ps->pid);\n\t\tfmt += strlen(fmt);\n\t\tnum_pids++;\n\t}\n\tif (num_pids == 0) {\n\t\trm_errno = RM_ERR_EXIST;\n\t\treturn NULL;\n\t}\n\treturn ret_string;\n}\n\n/**\n * @brief\n *      returns the number of users\n *\n * @param[in] attrib - pointer to rm_attribute structure\n *\n * @return      string\n * @retval      users        Success\n * @retval      NULL            error\n *\n */\nchar *\nnusers(struct rm_attribute *attrib)\n{\n\tint i;\n\tint j;\n\tproc_stat_t *ps;\n\tint nuids = 0;\n\tuid_t *uids, *hold;\n\tstatic int maxuid = 200;\n\tregister uid_t uid;\n\n\tif (attrib) {\n\t\tlog_err(-1, __func__, extra_parm);\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n\tif ((uids = (uid_t *) calloc(maxuid, sizeof(uid_t))) == NULL) {\n\t\tlog_err(errno, __func__, \"no memory\");\n\t\trm_errno = RM_ERR_SYSTEM;\n\t\treturn NULL;\n\t}\n\n\tmom_get_sample();\n\tfor (i = 0; i < nproc; i++) {\n\t\tps = &proc_info[i];\n\n\t\tif ((uid = ps->uid) == 0)\n\t\t\tcontinue;\n\n\t\tDBPRT((\"%s[%d]: pid %d uid %d\\n\",\n\t\t       __func__, nuids, ps->pid, uid))\n\n\t\tfor (j = 0; j < nuids; j++) {\n\t\t\tif (uids[j] == uid)\n\t\t\t\tbreak;\n\t\t}\n\t\tif (j == nuids) {\t       /* not found */\n\t\t\tif (nuids == maxuid) { /* need more space */\n\t\t\t\tmaxuid += 100;\n\t\t\t\t/* realloc size is in bytes, not elements */\n\t\t\t\thold = (uid_t *) realloc(uids, maxuid * sizeof(uid_t));\n\t\t\t\tif (hold == NULL) {\n\t\t\t\t\tlog_err(errno, __func__, 
\"realloc\");\n\t\t\t\t\trm_errno = RM_ERR_SYSTEM;\n\t\t\t\t\tfree(uids);\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n\t\t\t\tuids = hold;\n\t\t\t}\n\t\t\tuids[nuids++] = uid; /* add uid to list */\n\t\t}\n\t}\n\n\tsprintf(ret_string, \"%d\", nuids);\n\tfree(uids);\n\treturn ret_string;\n}\n\n/**\n * @brief\n *\treturns all the process ids\n *\n * @return\tpid_t\n * @retval\tpids\tSuccess\n * @retval\tNULl\tError\n *\n */\npid_t *\nallpids(void)\n{\n\tint i;\n\tproc_stat_t *ps;\n\tstatic pid_t *pids = NULL;\n\n\tgetprocs();\n\n\tif (pids != NULL)\n\t\tfree(pids);\n\tif ((pids = (pid_t *) calloc(nproc + 1, sizeof(pid_t))) == NULL) {\n\t\tlog_err(errno, __func__, \"no memory\");\n\t\treturn NULL;\n\t}\n\n\tfor (i = 0; i < nproc; i++) {\n\t\tps = &proc_info[i];\n\n\t\tpids[i] = ps->pid; /* add pid to list */\n\t}\n\tpids[nproc] = -1;\n\treturn pids;\n}\n\n/**\n * @brief\n *\t return amount of total memory on system in KB as numeric string\n *\n * @return      string\n * @retval      total memory    \tSuccess\n * @retval      NULl    \t\tError\n *\n */\nstatic char *\ntotmem(struct rm_attribute *attrib)\n{\n\tproc_mem_t *mm;\n\n\tif (attrib) {\n\t\tlog_err(-1, __func__, extra_parm);\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n\n\tif ((mm = get_proc_mem()) == NULL) {\n\t\tlog_err(errno, __func__, \"get_proc_mem\");\n\t\trm_errno = RM_ERR_SYSTEM;\n\t\treturn NULL;\n\t}\n\tDBPRT((\"%s: total mem=%lu\\n\", __func__, mm->total))\n\tsprintf(ret_string, \"%lukb\", (unsigned long) mm->total >> 10); /* KB */\n\treturn ret_string;\n}\n\n/**\n * @brief\n *      returns available free process memory\n *\n * @return      string\n * @retval      avbl free process memory\t\tSuccess\n * @retval      NULl \t\t\t\t\tError\n *\n */\nstatic char *\navailmem(struct rm_attribute *attrib)\n{\n\tproc_mem_t *mm;\n\n\tif (attrib) {\n\t\tlog_err(-1, __func__, extra_parm);\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n\n\tif ((mm = get_proc_mem()) == NULL) {\n\t\tlog_err(errno, 
__func__, \"get_proc_mem\");\n\t\trm_errno = RM_ERR_SYSTEM;\n\t\treturn NULL;\n\t}\n\tDBPRT((\"%s: free mem=%lu\\n\", __func__, mm->free))\n\tsprintf(ret_string, \"%lukb\", (unsigned long) mm->free >> 10); /* KB */\n\treturn ret_string;\n}\n\n/**\n * @brief\tfind and remember the current Linux release number\n * @param[in]\tstruct utsname *\n *\n * @return\tvalue returned by uname(2)'s utsname release[] member\n */\nstatic char *\nuname2release(struct utsname *u)\n{\n\tstatic char *u_release = NULL;\n\n\tif (u_release != NULL)\n\t\treturn (u_release);\n\telse if ((u_release = malloc(strlen(u->release) + 1)) != NULL) {\n\t\tmemcpy(u_release, u->release, strlen(u->release) + 1);\n\t\tsprintf(log_buffer, \"uname release:  %s\", u_release);\n\t\tlog_event(PBSEVENT_DEBUG4, 0, LOG_DEBUG, __func__, log_buffer);\n\t\treturn (u_release);\n\t} else\n\t\treturn NULL;\n}\n\n/**\n * @brief\tchoose the format for the /proc \"flags\" field\n * @param[in]\trelease\n * @param[out]\tstdio format string\n *\n * @return\t\"%lu\" for /proc before Linux version 2.6.22\n * @return\t\"%u\" for  /proc Linux version 2.6.22 and later\n * *\n * @note\tTo derive release information, we're at the mercy of whoever\n *\t\tconfigures the kernel's UTS_RELEASE value when it's built.\n *\t\tWe hope that the version information is in the format\n *\t\t<major>.<minor>.<micro>, or - if not - that at least we can\n *\t\tdepend on sscanf() to throw away extraneous characters and\n *\t\tderive a number for the \"micro\" version that can be used to\n *\t\tleverage proc(5)'s \"%u (%lu before Linux 2.6.22)\" flags\n *\t\tfield format specification.\n *\n *\t\tThis code is not designed to work for Linux versions < 2.\n *\n * @par MT-Safe:\tyes\n */\nstatic char *\nprocflagsfmt(char *release)\n{\n\tchar *p;\n\tchar *ver_begin = release;\n\tchar rfseparator_dot = '.';\n\tchar rfseparator_dash = '-';\n\tint nseparators_seen = 0;\n\tint major, minor = 0, micro, ver;\n\tstatic char before[] = \"%lu\";\n\tstatic 
char after[] = \"%u\";\n\n\tfor (p = release; *p != '\\0'; p++) {\n\t\tif ((*p == rfseparator_dot) || (*p == rfseparator_dash)) {\n\t\t\tp++;\n\t\t\tif (sscanf(ver_begin, \"%d\", &ver) == 1) {\n\t\t\t\tif (nseparators_seen == 0) {\n\t\t\t\t\tmajor = ver;\n\t\t\t\t\tif (major > 2)\n\t\t\t\t\t\treturn (after);\n\t\t\t\t} else if (nseparators_seen == 1) {\n\t\t\t\t\tminor = ver;\n\t\t\t\t\tif (minor > 6)\n\t\t\t\t\t\treturn (after);\n\t\t\t\t} else {\n\t\t\t\t\tmicro = ver;\n\t\t\t\t\t/* \"flags %u (%lu before Linux 2.6.22)\" */\n\t\t\t\t\tif ((minor == 6) && (micro >= 22))\n\t\t\t\t\t\treturn (after);\n\t\t\t\t\telse\n\t\t\t\t\t\treturn (before);\n\t\t\t\t}\n\t\t\t}\n\t\t\tver_begin = p;\n\t\t\tnseparators_seen++;\n\t\t}\n\t}\n\n\treturn NULL;\n}\n\n/**\n * @brief\treturn the stdio format directive for the /proc flags field\n *\n * @param[out]\tformat string for the /proc flags field\n *\n * @return\tstatic char *\n *\n * @see\tprocflagsfmt\n * @see\tuname2release\n */\nstatic char *\nchoose_procflagsfmt(void)\n{\n\tchar buf[1024];\n\tstatic char *fmtstr = NULL;\n\tstatic int initialized = 0;\n\tstruct utsname u;\n\n\tif (initialized)\n\t\treturn (fmtstr);\n\n\tif (uname(&u) == -1) {\n\t\tlog_err(errno, __func__, \"uname\");\n\t\treturn NULL;\n\t} else {\n\t\tchar *release;\n\t\tchar *fffs; /* the flags field format string */\n\n\t\tif ((release = uname2release(&u)) == NULL) {\n\t\t\tlog_err(-1, __func__, \"uname2release returned NULL\");\n\t\t\treturn NULL;\n\t\t} else if ((fffs = procflagsfmt(release)) == NULL) {\n\t\t\tlog_err(-1, __func__, \"procflagsfmt returned NULL\");\n\t\t\treturn NULL;\n\t\t} else {\n\t\t\tsprintf(buf, stat_str_pre, fffs);\n\t\t\tif ((fmtstr = strdup(buf)) == NULL) {\n\t\t\t\tlog_err(-1, __func__, \"strdup returned NULL\");\n\t\t\t\treturn NULL;\n\t\t\t} else {\n\t\t\t\tinitialized = 1;\n\t\t\t\treturn (fmtstr);\n\t\t\t}\n\t\t}\n\t}\n}\n\n#else /* PBSMOM_HTUNIT */\n\n/*\n **\tThis is code to compile the ncpus function for unit testing.\n 
**\tWhat follows is a bit of cruft needed to make a correct program.\n */\n#include <stdio.h>\n#include <stdlib.h>\n#include <limits.h>\n#include <string.h>\n#include <assert.h>\n#include <sys/utsname.h>\n\nchar log_buffer[4096];\nchar ret_string[4096];\n\n/**\n * @brief\n *\toutputs logevent on stdout\n *\n * @param[in] a - event  number\n * @param[in] b - event number\n * @param[in] c - type of log\n * @param[in] id - id to indicate log from which object\n * @param[in] mess - message to be logged\n *\n * @return\tVoid\n *\n */\nvoid\nlog_event(int a, int b, int c, char *id, char *mess)\n{\n\tprintf(\"%s: %s\\n\", id, mess);\n}\n/**\n * @brief\n *      outputs logevent on stdout\n *\n * @param[in] a - error number\n * @param[in] id - id to indicate log from which object\n * @param[in] mess - message to be logged\n *\n * @return      Void\n *\n */\nvoid\nlog_err(int a, char *id, char *mess)\n{\n\tprintf(\"error %d %s: %s\\n\", a, id, mess);\n}\n\nstruct rm_attribute;\n\nstatic char *ncpus(struct rm_attribute *);\n\n#define PBSEVENT_SYSTEM 0\n#define LOG_NOTICE 0\n#define RM_ERR_BADPARAM 0\n#define pbs_strsep strsep\n\nint num_pcpus, num_acpus, num_oscpus, rm_errno;\nchar extra_parm[] = \"extra_parm\";\n\nint\nmain()\n{\n\tif (ncpus(NULL) != NULL)\n\t\tprintf(\"ncpus = %s\\n\", ret_string);\n\tprintf(\"physical %d  logical %d\\n\", num_pcpus, num_oscpus);\n\treturn 0;\n}\n#endif /* PBSMOM_HTUNIT */\n\n/**\n * @brief\n *\treturns the processed string (skip).\n *\tprocessed string format \"string\t:\"\n *\n * @param[in] str - label\n * @param[in] skip - string to be processed\n *\n * @return\tstring\n * @retval\tNULL\t\t\tError\n * @retval\tprocessed string\tSuccess\n *\n */\n\nchar *\nskipstr(char *str, char *skip)\n{\n\tint len = strlen(skip);\n\n\tif (strncmp(str, skip, len) != 0)\n\t\treturn NULL;\n\n\tskip = str + len;\n\treturn skip + strspn(skip, \"\\t :\");\n}\n\nint linenum;\nint errflag = 0;\n\nchar badformat[] = \"warning: /proc/cpuinfo format not 
recognized\";\n\n/**\n * @brief\n *\tprints log events about ncpus\n *\n * @return\tVoid\n *\n */\nvoid\nwarning(void)\n{\n\tif (!errflag) {\n\t\tlog_event(PBSEVENT_SYSTEM, 0, LOG_NOTICE, \"ncpus\", badformat);\n\t\terrflag = 1;\n\t}\n\tlog_event(PBSEVENT_SYSTEM, 0, LOG_NOTICE, \"ncpus\", log_buffer);\n\treturn;\n}\n\n/**\n * @brief\n *\tconverts and return the string value\n *\n * @param[in] str - string to be processed\n *\n * @return\tint\n * @retval\tconverted val(strtol)\tSuccess\n * @retval\t0\t\t\tError\n *\n */\nint\ngetnum(char *str)\n{\n\tlong val;\n\tchar *extra;\n\n\tif (str == NULL || *str == '\\0') {\n\t\tsprintf(log_buffer, \"line %d: number needed\", linenum);\n\t\twarning();\n\t\treturn 0;\n\t}\n\n\tval = strtol(str, &extra, 10);\n\tif (*extra != '\\0') {\n\t\tsprintf(log_buffer, \"line %d: bad number %s\", linenum, str);\n\t\twarning();\n\t}\n\treturn (int) val;\n}\n\n#define LABLELEN 2048\n\nstruct {\n\tint physid;\n\tint coreid;\n} *proc_array = NULL;\nint proc_num = 0;\n\n/**\n * @brief\n *\tAdd an entry to the proc_array[] with the physid/coreid\n *\tcombination of a cpu.  
We do this to count the number of\n *\tunique tuples since HyperThread(tm) \"cpus\" will have duplicate\n *\tphysid/coreid values.\n *\n * @param[in] physid - physical id\n * @param[in] coreid - core id\n *\n * @return\tVoid\n *\n */\nstatic void\nproc_new(int physid, int coreid)\n{\n\tint i;\n\n\tif (physid < 0 || coreid < 0)\n\t\treturn;\n\n\tfor (i = 0; i < proc_num; i++) {\n\t\tif (proc_array[i].physid == physid &&\n\t\t    proc_array[i].coreid == coreid)\n\t\t\tbreak;\n\t}\n\tif (i == proc_num) { /* need new proc entry */\n\t\tproc_num++;\n\t\tproc_array = realloc(proc_array, sizeof(*proc_array) * proc_num);\n\t\tassert(proc_array != NULL);\n\t\tproc_array[i].physid = physid;\n\t\tproc_array[i].coreid = coreid;\n\t}\n}\n\n/**\n * @brief\n *\treturn the number of cpus\n *\n * @param[in] attrib - pointer to rm_attribute structure\n *\n * @return\tstring\n * @retval\tnumber of cpus\tSuccess\n * @retval\tNULL\t\tError\n *\n */\nstatic char *\nncpus(struct rm_attribute *attrib)\n{\n\tchar *file = \"/proc/cpuinfo\";\n\tchar label[LABLELEN];\n\tchar *cp;\n\tFILE *fp;\n\tint procs, logical;\n\tint skip = 0;\n\tint siblings = 0;\n\tint coreid = -1;\n\tint physid = -1;\n\tint maxsib = 0;\n\tint maxsibcpu = 0;\n\tint procnum = -1;\n\tint htseen, htany;\n\tint intelany;\n\tstatic int oldlinux = -1;\n\tint len;\n\n\tif (attrib) {\n\t\tlog_err(-1, __func__, extra_parm);\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n\n\tif (num_pcpus > 0) {\n\t\tsprintf(ret_string, \"%d\", num_pcpus);\n\t\treturn ret_string;\n\t}\n\n\tif ((fp = fopen(file, \"r\")) == NULL)\n\t\treturn NULL;\n\n\tif (oldlinux == -1) {\n\t\tstruct utsname ubuf;\n\n\t\toldlinux = 0;\n\t\tif (uname(&ubuf) == 0) {\n\t\t\tif (strncmp(ubuf.release, \"2.4.\", 4) == 0 &&\n\t\t\t    strcmp(ubuf.machine, \"x86_64\") == 0)\n\t\t\t\toldlinux = 1;\n\t\t}\n\t}\n\n\terrflag = 0;\n\tlogical = procs = 0;\n\tlinenum = 0;\n\thtany = intelany = 0;\n\n\twhile (!feof(fp)) {\n\t\tif (fgets(label, LABLELEN, fp) == 
NULL)\n\t\t\tbreak;\n\n\t\tlinenum++;\n\t\tlen = strlen(label);\n\t\tif (label[len - 1] == '\\n')\n\t\t\tlabel[len - 1] = '\\0';\n\t\telse {\n\t\t\tsprintf(log_buffer, \"line %d too long\", linenum);\n\t\t\twarning();\n\t\t}\n\n\t\t/* x86 linux /proc/cpuinfo format is\n\t\t ** processor 0\n\t\t ** info about processor 0\n\t\t ** processor 1\n\t\t ** info about processor 1\n\t\t ** etc.... Alpha linux just prints \"cpus detected: X\"\n\t\t */\n\t\tif ((cp = skipstr(label, \"processor\")) != NULL) {\n\t\t\tproc_new(physid, coreid);\n\t\t\tphysid = coreid = -1;\n\t\t\thtseen = 0;\n\t\t\tsiblings = 0;\n\t\t\tprocnum = getnum(cp);\n\t\t\tlogical++;\n\t\t\tif (skip == 0)\n\t\t\t\tprocs++;\n\t\t} else if ((cp = skipstr(label, \"cpus detected\")) != NULL) {\n\t\t\tlogical = procs = getnum(cp);\n\t\t\tbreak;\n\t\t} else if ((cp = skipstr(label, \"siblings\")) != NULL ||\n\t\t\t   (cp = skipstr(label, \"threads\")) != NULL ||\n\t\t\t   (cp = skipstr(label, \"Number of siblings\")) != NULL) {\n\t\t\tsiblings = getnum(cp);\n\t\t\tif (siblings > maxsib) {\n\t\t\t\tmaxsib = siblings;\n\t\t\t\tmaxsibcpu = procnum;\n\t\t\t}\n\t\t\tif (skip == 0)\n\t\t\t\tskip = siblings - 1;\n\t\t\telse\n\t\t\t\tskip--;\n\t\t} else if ((cp = skipstr(label, \"physical id\")) != NULL) {\n\t\t\tphysid = getnum(cp);\n\t\t} else if ((cp = skipstr(label, \"core id\")) != NULL) {\n\t\t\tcoreid = getnum(cp);\n\t\t} else if ((cp = skipstr(label, \"vendor_id\")) != NULL) {\n\t\t\tif (strcmp(cp, \"GenuineIntel\") == 0)\n\t\t\t\tintelany = 1;\n\t\t} else if ((cp = skipstr(label, \"flags\")) != NULL) {\n\t\t\twhile (cp != NULL) {\n\t\t\t\tchar *flag = pbs_strsep(&cp, \" \");\n\n\t\t\t\tif (flag == NULL)\n\t\t\t\t\tbreak;\n\t\t\t\tif (strcmp(flag, \"ht\") == 0) {\n\t\t\t\t\thtany = htseen = 1;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\tfclose(fp);\n\tproc_new(physid, coreid);\n\n\tif (maxsib > logical) {\n\t\tsprintf(log_buffer, \"cpu %d: siblings=%d but OS only \"\n\t\t\t\t    \"reports %d 
cpus\",\n\t\t\tmaxsibcpu, maxsib, logical);\n\t\twarning();\n\t}\n\tif (errflag)\n\t\tprocs = logical;\n\telse if (htany || (oldlinux && intelany)) {\n\t\t/*\n\t\t ** If the version of linux is new enough to have\n\t\t ** physid and coreid, we can use the proc_num\n\t\t ** count as the value of physical processors.\n\t\t */\n\t\tif (proc_num > 0)\n\t\t\tprocs = proc_num;\n\t\tsprintf(log_buffer, \"hyperthreading %s\",\n\t\t\t(procs < logical) ? \"enabled\" : \"disabled\");\n\t\tlog_event(PBSEVENT_SYSTEM, 0, LOG_NOTICE, \"ncpus\", log_buffer);\n\t}\n\n\tnum_pcpus = num_acpus = num_oscpus = logical;\n\tif (proc_array != NULL) {\n\t\tfree(proc_array);\n\t\tproc_array = NULL;\n\t\tproc_num = 0;\n\t}\n\n\tsprintf(ret_string, \"%d\", num_oscpus);\n\treturn ret_string;\n}\n\n/**\n * @brief\n *\treturns the total physical memory\n *\n * @param[in] attrib - pointer to rm_attribute structure\n *\n * @return      string\n * @retval      tot physical memory  \tSuccess\n * @retval      NULL            \tError\n *\n */\n\n#ifndef PBSMOM_HTUNIT\nchar *\nphysmem(struct rm_attribute *attrib)\n{\n\tchar strbuf[256];\n\tFILE *fp;\n\n\tif (attrib) {\n\t\tlog_err(-1, __func__, extra_parm);\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n\tif ((fp = fopen(\"/proc/meminfo\", \"r\")) == NULL) {\n\t\tlog_err(-1, __func__, extra_parm);\n\t\trm_errno = RM_ERR_SYSTEM;\n\t\treturn NULL;\n\t}\n\t/* the physmem of the machine is in MemTotal */\n\twhile (fgets(strbuf, 256, fp) != NULL) {\n\t\tif (sscanf(strbuf, \"MemTotal: %s k\", ret_string) == 1) {\n\t\t\tfclose(fp);\n\t\t\ttotalmem = (unsigned long) atol(ret_string);\n\n\t\t\tsprintf(ret_string, \"%lukb\",\n\t\t\t\ttotalmem * (num_acpus / num_pcpus));\n\t\t\treturn ret_string;\n\t\t}\n\t}\n\tfclose(fp);\n\trm_errno = RM_ERR_SYSTEM;\n\treturn NULL;\n}\n\n/**\n * @brief\n *\treturns the size of file system present in machine\n *\n * @param[in] param - attribute value(file system)\n *\n * @return \tstring\n * @retval\tsize of file 
system\tSuccess\n * @retval\tNULL\t\t\tError\n *\n */\nchar *\nsize_fs(char *param)\n{\n\tstruct statfs fsbuf;\n\n\tif (param[0] != '/') {\n\t\tsprintf(log_buffer, \"%s: not full path filesystem name: %s\",\n\t\t\t__func__, param);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n\tif (statfs(param, &fsbuf) == -1) {\n\t\tlog_err(errno, __func__, \"statfs\");\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n\tsprintf(ret_string, \"%lukb\",\n\t\t(unsigned long) (((double) fsbuf.f_bsize *\n\t\t\t  (double) fsbuf.f_bfree) /\n\t\t\t 1024.0)); /* KB */\n\treturn ret_string;\n}\n\n/**\n * @brief\n *\tget file attribute(size) from param and put them in buffer.\n *\n * @param[in] param - file attributes\n *\n * @return\tstring\n * @retval\tsize of file\tSuccess\n * @retval\tNULL\t\tError\n *\n */\nchar *\nsize_file(char *param)\n{\n\tstruct stat sbuf;\n\n\tif (param[0] != '/') {\n\t\tsprintf(log_buffer, \"%s: not full path filesystem name: %s\",\n\t\t\t__func__, param);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n\n\tif (stat(param, &sbuf) == -1) {\n\t\tlog_err(errno, __func__, \"stat\");\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n\n\tsprintf(ret_string, \"%lukb\", (unsigned long) (sbuf.st_size >> 10)); /* KB */\n\treturn ret_string;\n}\n\n/**\n * @brief\n *\twrapper function for size_file which returns the size of file system\n *\n * @param[in] attrib - pointer to rm_attribute structure\n *\n * @return\tstring\n * @retval      size of file system     Success\n * @retval      NULL                    Error\n *\n */\nchar *\nsize(struct rm_attribute *attrib)\n{\n\tchar *param;\n\n\tif (attrib == NULL) {\n\t\tlog_err(-1, __func__, no_parm);\n\t\trm_errno = RM_ERR_NOPARAM;\n\t\treturn NULL;\n\t}\n\tif (momgetattr(NULL)) {\n\t\tlog_err(-1, __func__, extra_parm);\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n\n\tparam = attrib->a_value;\n\tif 
(strcmp(attrib->a_qualifier, \"file\") == 0)\n\t\treturn (size_file(param));\n\telse if (strcmp(attrib->a_qualifier, \"fs\") == 0)\n\t\treturn (size_fs(param));\n\telse {\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n}\n\n/**\n * @brief\n *\tcomputes and returns walltime for process or session.\n *\n * @param[in] attrib - pointer to rm_attribute structure\n *\n * @return\tstring\n * @retval\twalltime\tSuccess\n * @retval\tNULL\t\tError\n *\n */\nstatic char *\nwalltime(struct rm_attribute *attrib)\n{\n\tint i;\n\tint value, job, found = 0;\n\ttime_t now, start;\n\tproc_stat_t *ps;\n\n\tif (attrib == NULL) {\n\t\tlog_err(-1, __func__, no_parm);\n\t\trm_errno = RM_ERR_NOPARAM;\n\t\treturn NULL;\n\t}\n\tif ((value = atoi(attrib->a_value)) == 0) {\n\t\tsprintf(log_buffer, \"bad param: %s\", attrib->a_value);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n\tif (momgetattr(NULL)) {\n\t\tlog_err(-1, __func__, extra_parm);\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n\n\tif (strcmp(attrib->a_qualifier, \"proc\") == 0)\n\t\tjob = 0;\n\telse if (strcmp(attrib->a_qualifier, \"session\") == 0)\n\t\tjob = 1;\n\telse {\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n\n\tif ((now = time(NULL)) <= 0) {\n\t\tlog_err(errno, __func__, \"time\");\n\t\trm_errno = RM_ERR_SYSTEM;\n\t\treturn NULL;\n\t}\n\tmom_get_sample();\n\n\tstart = now;\n\tfor (i = 0; i < nproc; i++) {\n\t\tps = &proc_info[i];\n\n\t\tif (job) {\n\t\t\tif (value != ps->session)\n\t\t\t\tcontinue;\n\t\t} else {\n\t\t\tif (value != ps->pid)\n\t\t\t\tcontinue;\n\t\t}\n\n\t\tfound = 1;\n\t\tstart = MIN(start, ps->start_time);\n\t}\n\n\tif (found) {\n\t\tsprintf(ret_string, \"%ld\",\n\t\t\t(long) ((double) (now - start) * wallfactor));\n\t\treturn ret_string;\n\t}\n\n\trm_errno = RM_ERR_EXIST;\n\treturn NULL;\n}\n\n/**\n * @brief\n *\treads load avg from file and returns\n *\n * @param[out] rv - var to hold load avg\n *\n * @return\tint\n * 
@retval\t0\t\t\tSuccess\n * @retval\tRM_ERR_SYSTEM(15205)\terror\n *\n */\nint\nget_la(double *rv)\n{\n\tFILE *fp;\n\tfloat load;\n\n\tif ((fp = fopen(\"/proc/loadavg\", \"r\")) == NULL)\n\t\treturn (rm_errno = RM_ERR_SYSTEM);\n\n\tif (fscanf(fp, \"%f\", &load) != 1) {\n\t\tlog_err(errno, __func__, \"fscanf of load in /proc/loadavg\");\n\t\t(void) fclose(fp);\n\t\treturn (rm_errno = RM_ERR_SYSTEM);\n\t}\n\n\t*rv = (double) load;\n\t(void) fclose(fp);\n\treturn 0;\n}\n\nu_long\ngracetime(u_long secs)\n{\n\ttime_t now = time(NULL);\n\n\tif (secs > now) /* time is in the future */\n\t\treturn (secs - now);\n\telse\n\t\treturn 0;\n}\n\n/**\n * @brief\n *\tset priority of processes.\n *\n * @return\tVoid\n *\n */\nvoid\nmom_nice(void)\n{\n\tif ((nice_val != 0) && (setpriority(PRIO_PROCESS, 0, nice_val) == -1)) {\n\t\t(void) sprintf(log_buffer, \"failed to nice(%d) mom\", nice_val);\n\t\tlog_err(errno, __func__, log_buffer);\n\t}\n}\n\n/**\n * @brief\n *      Unset priority of processes.\n *\n * @return      Void\n *\n */\nvoid\nmom_unnice(void)\n{\n\tif ((nice_val != 0) && (setpriority(PRIO_PROCESS, 0, 0) == -1)) {\n\t\t(void) sprintf(log_buffer, \"failed to nice(%d) mom\", nice_val);\n\t\tlog_err(errno, __func__, log_buffer);\n\t}\n}\n\n/**\n * @brief\n *\tGet the info required for tm_attach.\n *\n * @param[in] pid - process id\n * @param[in] sid - session id\n * @param[in] uid - user id\n * @param[in] comm - command name\n * @param[in] len - size of command\n *\n * @return\tint\n * @retval\tTM_OKAY\t\t\tSuccess\n * @retval\tTM_ENOPROC(17011)\tError\n *\n */\nint\ndep_procinfo(pid_t pid, pid_t *sid, uid_t *uid, char *comm, size_t len)\n{\n\tint i;\n\tproc_stat_t *ps;\n\n\tgetprocs();\n\tfor (i = 0; i < nproc; i++) {\n\t\tps = &proc_info[i];\n\t\tif (ps->pid == pid) {\n\t\t\t*sid = ps->session;\n\t\t\t*uid = ps->uid;\n\t\t\tmemset(comm, '\\0', len);\n\t\t\tmemcpy(comm, ps->comm,\n\t\t\t       MIN(len - 1, sizeof(ps->comm)));\n\t\t\treturn TM_OKAY;\n\t\t}\n\t}\n\treturn 
TM_ENOPROC;\n}\n\n#ifdef NAS_UNKILL /* localmod 011 */\n/**\n * @brief\n *\tGet the info required for tracking killed processes.\n *\n * @param[in] pid - process id\n * @param[in] ppid - parent process id\n * @param[in] start_time - start time of process\n *\n * @return      int\n * @retval      TM_OKAY                 Success\n * @retval      TM_ENOPROC(17011)       Error\n *\n */\nint\nkill_procinfo(pid_t pid, pid_t *ppid, u_Long *start_time)\n{\n\tint i;\n\tproc_stat_t *ps;\n\n\tgetprocs();\n\tfor (i = 0; i < nproc; i++) {\n\t\tps = &proc_info[i];\n\t\tif (ps->pid == pid) {\n\t\t\t*ppid = ps->ppid;\n\t\t\t*start_time = ps->start_time;\n\t\t\treturn TM_OKAY;\n\t\t}\n\t}\n\treturn TM_ENOPROC;\n}\n#endif /* localmod 011 */\n\n/**\n * @brief\n *\tFor cpuset machine, migrate new task to a cpuset.\n *\n * @param[in] ptask - pointer to task structure\n *\n * @return\tint\n * @retval\tTM_OKAY\t\t\tSuccess\n * @retval\tTM_ESYSTEM(17000)\tError\n *\n */\nint\ndep_attach(task *ptask)\n{\n\treturn TM_OKAY;\n}\n\n/**\n * @brief\n *\tadjusts the reserved mem attribute to make it hold in space\n *\n * @param[in] vp - pointer to vnl_t structure( vnode list)\n *\n * @return\tVoid\n *\n */\n#endif /* PBSMOM_HTUNIT */\n"
  },
  {
    "path": "src/resmom/linux/mom_mach.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _MOM_MACH_H\n#define _MOM_MACH_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n/*\n * Machine-dependent definitions for the Machine Oriented Miniserver\n *\n * Target System: linux\n */\n\n#ifndef PBS_MACH\n#define PBS_MACH \"linux\"\n#endif /* PBS_MACH */\n\n#ifndef MOM_MACH\n#define MOM_MACH \"linux\"\n\n#define SET_LIMIT_SET 1\n#define SET_LIMIT_ALTER 0\n#define PBS_CHKPT_MIGRATE 0\n#define PBS_PROC_SID(x) proc_info[x].session\n#define PBS_PROC_PID(x) proc_info[x].pid\n#define PBS_PROC_PPID(x) proc_info[x].ppid\n#define CLR_SJR(sjr) memset(&sjr, 0, sizeof(sjr));\n#define PBS_SUPPORT_SUSPEND 1\n#define task pbs_task\n\n#if MOM_ALPS\n#include <sys/types.h>\n#include <dlfcn.h>\n#include \"/usr/include/job.h\"\n#include <basil.h>\n#endif /* MOM_ALPS */\n\n#include \"job.h\"\n\ntypedef struct pbs_plinks { /* struct to link processes */\n\tpid_t pl_pid;\t    /* pid of this proc */\n\tpid_t pl_ppid;\t    /* parent pid of this proc */\n\tint pl_child;\t    /* index to child */\n\tint pl_sib;\t    /* index to sibling */\n\tint pl_parent;\t    /* index to parent */\n\tint pl_done;\t    /* kill has been done */\n} pbs_plinks;\n\nextern unsigned long totalmem;\nextern int kill_session(pid_t pid, int sig, int dir);\nextern int bld_ptree(pid_t sid);\n\n/* struct startjob_rtn = used to pass error/session/other info \t*/\n/* \t\t\tchild back to 
parent\t\t\t*/\n\nstruct startjob_rtn {\n\tint sj_code;\t  /* error code\t*/\n\tpid_t sj_session; /* session\t*/\n\n#if MOM_ALPS\n\tjid_t sj_jid;\n\tlong sj_reservation;\n\tunsigned long long sj_pagg;\n#endif /* MOM_ALPS */\n};\n\nextern int mom_set_limits(job *pjob, int); /* Set job's limits */\nextern int mom_do_poll(job *pjob);\t   /* Should limits be polled? */\nextern int mom_does_chkpnt;\t\t   /* see if mom does chkpnt */\nextern int mom_open_poll();\t\t   /* Initialize poll ability */\nextern int mom_get_sample();\t\t   /* Sample kernel poll data */\nextern int mom_over_limit(job *pjob);\t   /* Is polled job over limit? */\nextern int mom_set_use(job *pjob);\t   /* Set resource_used list */\nextern int mom_close_poll();\t\t   /* Terminate poll ability */\nextern int mach_checkpoint(struct task *, char *path, int abt);\nextern long mach_restart(struct task *, char *path); /* Restart checkpointed job */\nextern int set_job(job *, struct startjob_rtn *);\nextern void starter_return(int, int, int, struct startjob_rtn *);\nextern void set_globid(job *, struct startjob_rtn *);\nextern void mom_topology(void);\n\n#if MOM_ALPS\nextern void ck_acct_facility_present(void);\n\n/*\n *\tInterface to the Cray ALPS placement scheduler. 
(alps.c)\n */\nextern int alps_create_reserve_request(\n\tjob *,\n\tbasil_request_reserve_t **);\nextern void alps_free_reserve_request(basil_request_reserve_t *);\nextern int alps_create_reservation(\n\tbasil_request_reserve_t *,\n\tlong *,\n\tunsigned long long *);\nextern int alps_confirm_reservation(job *);\nextern int alps_cancel_reservation(job *);\nextern int alps_inventory(void);\nextern int alps_suspend_resume_reservation(job *, basil_switch_action_t);\nextern int alps_confirm_suspend_resume(job *, basil_switch_action_t);\nextern void alps_system_KNL(void);\nextern void system_to_vnodes_KNL(void);\n#endif /* MOM_ALPS */\n\n#define COMSIZE 12\ntypedef struct proc_stat {\n\tpid_t session;\t    /* session id */\n\tchar state;\t    /* one of RSDZT: Running, Sleeping,\n\t\t\t\t\t\t Sleeping (uninterruptable), Zombie,\n\t\t\t\t\t\t Traced or stopped on signal */\n\tpid_t ppid;\t    /* parent pid */\n\tpid_t pgrp;\t    /* process group id */\n\tunsigned long utime;\t    /* utime this process */\n\tunsigned long stime;\t    /* stime this process */\n\tunsigned long cutime;\t    /* sum of children's utime */\n\tunsigned long cstime;\t    /* sum of children's stime */\n\tpid_t pid;\t    /* process id */\n\tunsigned long vsize;\t    /* virtual memory size for proc */\n\tunsigned long rss;\t    /* resident set size */\n\tunsigned long start_time;   /* start time of this process */\n\tunsigned long flags;\t    /* the flags of the process */\n\tunsigned long uid;\t    /* uid of the process owner */\n\tchar comm[COMSIZE]; /* command name */\n} proc_stat_t;\n\ntypedef struct proc_map {\n\tunsigned long vm_start;\t /* start of vm for process */\n\tunsigned long vm_end;\t /* end of vm for process */\n\tunsigned long vm_size;\t /* vm_end - vm_start */\n\tunsigned long vm_offset; /* offset into vm? */\n\tunsigned inode;\t\t /* inode of region */\n\tchar *dev;\t\t /* device */\n} proc_map_t;\n#endif /* MOM_MACH */\n#ifdef __cplusplus\n}\n#endif\n#endif /* _MOM_MACH_H */\n"
  },
  {
    "path": "src/resmom/linux/mom_start.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n/**\n * @file\n */\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <unistd.h>\n#include <pwd.h>\n#include <fcntl.h>\n#include <signal.h>\n#include <errno.h>\n#include <assert.h>\n#include <ftw.h>\n#include <sys/types.h>\n#include <sys/wait.h>\n#include <sys/resource.h>\n#include <sys/param.h>\n#include <sys/utsname.h>\n#include <ctype.h>\n#include \"libpbs.h\"\n#include \"list_link.h\"\n#include \"log.h\"\n#include \"server_limits.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"job.h\"\n#include \"pbs_nodes.h\"\n#include \"mom_mach.h\"\n#include \"mom_func.h\"\n#ifdef PMIX\n#include \"mom_pmix.h\"\n#endif\n#include \"resmon.h\"\n#include \"mom_vnode.h\"\n#include \"libutil.h\"\n#include \"work_task.h\"\n\n/**\n * @struct\n *\tstruct release_info is used to parse the release information\n *\tof ProPack and linux distributions (i.e RHEL/SLES).\n *\n *\tKeep the sgi-release file information at index 0\n *\tTo load the ProPack information, index 0 is used.\n *\tTo load the os information, index 1 thru end is used\n *\tto search which file is available.\n */\n\nstatic struct release_info {\n\tchar *file;\n\tchar *pfx;\n\tchar *srch;\n\tchar *sep;\n} release_info[] = {\n\t{\"/etc/redhat-release\", \"RHEL\", \"release\", \" 
\"},\n\t{\"/etc/SuSE-release\", \"SLES\", \"VERSION\", \"=\"},\n\t{\"/etc/os-release\", \"SLES\", \"VERSION\", \"=\"}};\n\n/**\n * @struct\n *\tstruct libjob_support is used to hold verified and tested list of\n *\t<OS ver>, <Architecture>, <libjob ver>.\n */\n\nstatic struct libjob_support {\n\tchar *osver;\n\tchar *arch;\n\tchar *libjobver;\n} libjob_support[] = {\n\t{\"SLES10\", \"x86_64\", \"libjob.so\"},\n\t{\"SLES11\", \"x86_64\", \"libjob.so\"},\n\t{\"SLES12\", \"x86_64\", \"libjob.so.0\"},\n\t{\"SLES12\", \"aarch64\", \"libjob.so.0\"},\n\t{\"SLES15\", \"aarch64\", \"libjob.so.0\"},\n\t{\"SLES15\", \"x86_64\", \"libjob.so.0\"}};\n\n/* Global Variables */\n\nextern int exiting_tasks;\nextern char mom_host[];\nextern pbs_list_head svr_alljobs;\nextern int termin_child;\nextern int num_acpus;\nextern int num_pcpus;\nextern int svr_delay_entry;\n\nextern pbs_list_head task_list_event;\n\n#if MOM_ALPS\nextern char *path_jobs;\nchar *get_versioned_libname();\nint find_in_lib(void *handle, char *plnam, char *psnam, void **psp);\n\n/**\n *\tThis is a temporary kludge - this work should really be done by\n *\tpbs_sched:  if the job is getting exclusive use of a vnode, we\n *\twill assign all the CPU\n *\tresources of the vnode to the created CPU set.  Exclusive use of\n *\ta vnode is defined by a table in (currently) section E16.4 of the\n *\tGRUNT 2 document, q.v..  
It is reproduced here\n *\n *\t\t\t\t\tResource_List.place value\n *\tvnode \"sharing\"\n *\t   value\t\tunset\t   contains \"share\"   contains \"excl\"\n *\t\t\t   ---------------------------------------------------|\n * \tunset  \t       \t   |   \tshare  \t  |    \tshare  \t  |    \texcl   \t      |\n *     \t       \t       \t   |--------------|---------------|-------------------|\n *\t\"default_shared\"   |\tshare\t  |\tshare\t  |\texcl\t      |\n *\t\t    \t   |--------------|---------------|-------------------|\n *\t\"default_excl\"\t   |\texcl\t  |\tshare\t  |\texcl\t      |\n *\t       \t    \t   |--------------|---------------|-------------------|\n *\t\"ignore_excl\"\t   |\tshare\t  |\tshare\t  |\tshare\t      |\n *\t\t    \t   |--------------|---------------|-------------------|\n *\t\"force_excl\"\t   |\texcl\t  |\texcl\t  |\texcl \t      |\n *\t\t\t   |---------------------------------------------------\n *\n *\tand reflected in the vnss[][] array below.\n *\n *\tThis applies to ALPS because the Cray reservation has an EXCLUSIVE\n *\tor SHARED mode that is set from this table.\n */\nenum vnode_sharing_state vnss[][rlplace_excl - rlplace_unset + 1] = {\n\t{isshared, isshared, isexcl},\t/* VNS_UNSET */\n\t{isshared, isshared, isexcl},\t/* VNS_DFLT_SHARED */\n\t{isexcl, isshared, isexcl},\t/* VNS_DFLT_EXCL */\n\t{isshared, isshared, isshared}, /* VNS_IGNORE_EXCL */\n\t{isexcl, isexcl, isexcl},\t/* VNS_FORCE_EXCL */\n\t{isexcl, isshared, isexcl},\t/* VNS_DFLT_EXCLHOST */\n\t{isexcl, isexcl, isexcl}\t/* VNS_FORCE_EXCLHOST */\n};\n\n#ifndef MAX\n#define MAX(a, b) (((a) > (b)) ? 
(a) : (b))\n#endif\n\n/**\n * @brief\n *   \tgetplacesharing\tsharing value for job place\n *\n *   \tCompare the \"place\" string for a job with \"excl\" and \"share\" and\n *   \treturn the corresponding rlplace_value.\n *\n * @param[in] \tpjob\tthe job of interest\n *\n * @return\tenum rlplace_value\n *\n * @par Side-effects\n *   \tA log message is printed at DEBUG level.\n *\n * @par\n *   \tThis code was put in an externally\n *   \tavailable function for use by the Cray project.\n *\n */\nenum rlplace_value\ngetplacesharing(job *pjob)\n{\n\tstatic resource_def *prsdef = NULL;\n\tenum rlplace_value rpv = rlplace_unset;\n\tresource *pplace;\n\n\t/*\n\t *\tCompute the \"Resource_List.place\" index for vnss[][]:\n\t */\n\tprsdef = &svr_resc_def[RESC_PLACE];\n\tif (prsdef != NULL) {\n\t\tchar *placeval = NULL;\n\n\t\tpplace = find_resc_entry(get_jattr(pjob, JOB_ATR_resource), prsdef);\n\t\tif (pplace)\n\t\t\tplaceval = pplace->rs_value.at_val.at_str;\n\t\tif (placeval != NULL) {\n\t\t\tif (place_sharing_check(placeval, PLACE_Excl))\n\t\t\t\trpv = rlplace_excl;\n\t\t\telse if (place_sharing_check(placeval, PLACE_ExclHost))\n\t\t\t\trpv = rlplace_excl;\n\t\t\telse if (place_sharing_check(placeval, PLACE_Shared))\n\t\t\t\trpv = rlplace_share;\n\n\t\t\tsprintf(log_buffer, \"Resource_List.place = %s\",\n\t\t\t\tplaceval);\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB,\n\t\t\t\t  LOG_DEBUG, pjob->ji_qs.ji_jobid,\n\t\t\t\t  log_buffer);\n\t\t}\n\t}\n\treturn rpv;\n}\n\n/* These globals are initialized in ck_acct_facility_present.\n *\n * At a later time it may be better to relocate them to the machine\n * independent portion of the mom code if they find use by more\n * than a single machine/OS type\n */\n\nint job_facility_present;\nint job_facility_enabled;\nint acct_facility_present;\nint acct_facility_active;\njid_t (*jc_create)();\njid_t (*jc_getjid)();\n\nvoid\nck_acct_facility_present(void)\n{\n\tint ret1;\n\tint ret2;\n\tchar *libjob;\n\n\tstatic void 
*handle1 = NULL;\n\n\tstruct config *cptr;\n\textern struct config *config_array;\n\n\t/* use of job_create defaults to True */\n\tjob_facility_enabled = 1;\n\n\tfor (cptr = config_array; cptr != NULL; cptr++) {\n\t\tif (cptr->c_name == NULL || *cptr->c_name == 0)\n\t\t\tbreak;\n\t\telse if (strcasecmp(cptr->c_name, \"pbs_jobcreate_workload_mgmt\") == 0) {\n\t\t\t(void) set_boolean(__func__, cptr->c_u.c_value,\n\t\t\t\t\t   &job_facility_enabled);\n\t\t}\n\t}\n\n\t/* multiple calls to dlopen with the same arguments do not cause multiple\n\t * copies of the library to get loaded into the process's memory; they just\n\t * bump a reference count and return the same handle value.\n\t * If dlclose is issued when the reference count is 1, the library will be\n\t * unloaded from memory and any previous pointers obtained through calls to\n\t * dlsym will not be valid.\n\t */\n\n\tjob_facility_present = 0;\n\tacct_facility_present = 0;\n\tacct_facility_active = 0;\n\n\t/*\n\t * If job facility is turned off, don't call dlopen for job_create.\n\t */\n\tif (job_facility_enabled == 0)\n\t\tgoto done;\n\n\tlibjob = get_versioned_libname();\n\tif (libjob == NULL) {\n\t\tsprintf(log_buffer, \"Could not find a supported job shared object\");\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_ACCT, LOG_DEBUG, __func__,\n\t\t\t  log_buffer);\n\t\tgoto err;\n\t}\n\n\tsprintf(log_buffer, \"using %s for job shared object\", libjob);\n\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_ACCT, LOG_DEBUG, __func__, log_buffer);\n\thandle1 = dlopen(libjob, RTLD_LAZY);\n\tif (handle1 == NULL) {\n\t\t/* facility is not available */\n\n\t\tsprintf(log_buffer, \"%s. 
failed to dlopen %s\", dlerror(), libjob);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_ACCT, LOG_DEBUG,\n\t\t\t  __func__, log_buffer);\n\t\tgoto err;\n\t}\n\n\tsprintf(log_buffer, \"dlopen of %s successful\", libjob);\n\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_ACCT, LOG_DEBUG,\n\t\t  __func__, log_buffer);\n\n\t/* find_in_lib sets message in log_buffer */\n\tret1 = find_in_lib(handle1, libjob, \"job_create\", (void **) &jc_create);\n\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_ACCT, LOG_DEBUG, __func__,\n\t\t  log_buffer);\n\n\tret2 = find_in_lib(handle1, libjob, \"job_getjid\", (void **) &jc_getjid);\n\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_ACCT, LOG_DEBUG, __func__,\n\t\t  log_buffer);\n\n\tif ((ret1 == 1) && (ret2 == 1))\n\t\tjob_facility_present = 1;\n\n\tif (job_facility_present == 0)\n\t\tgoto done;\n\nerr:\n\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_ACCT, LOG_DEBUG,\n\t\t  __func__, \"job facility not present or improperly setup\");\n\ndone:\n\t/*\n\t * When we get here, the flags are set to indicate what libs should\n\t * be kept open.\n\t */\n\tif (job_facility_present == 0) {\n\t\tif (handle1) {\n\t\t\tdlclose(handle1);\n\t\t\thandle1 = NULL;\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\tfind_in_lib -  Call this function when you want to find the address of symbol\n * \tin a shared library that has been opened by a call to dlopen.\n *\n * \tAn appropriate message will be written into PBS' global \"log_buffer\"\n * \tin each of the three possible cases (found, not found, bogus arguments).\n * \tThe caller chooses to log or ignore the content of log_buffer.\n *\n *\n * @param[in]\thandle\tvalid handle from call to dlopen\n * @param[in]\tplnam\tpointer to the name of the library (NULL acceptable)\n * @param[in]\tpsnam\tpointer to the name of the symbol\n * @param[out]\tpsp\twhere to return the symbol pointer if found\n *\n * @return\tint\n * @retval\t1      success, with symbol pointer stored to *psp\n * @retval \t0      failure, and *psp unmodified\n * 
@retval\t-1      bad input to this function\n *\n */\nint\nfind_in_lib(void *handle, char *plnam, char *psnam, void **psp)\n{\n\tvoid *psym;\n\tconst char *error;\n\tint retcode;\n\n\t/* check arguments */\n\tif (handle == NULL || psnam == NULL || *psnam == '\\0') {\n\t\tsprintf(log_buffer, \"%s: bad arguments %p %p %p %p\", __func__,\n\t\t\thandle, plnam, psnam, psp);\n\t\treturn -1;\n\t}\n\n\tpsym = dlsym(handle, psnam);\n\terror = dlerror();\n\n\tif (error != NULL) {\n\n\t\tretcode = 0;\n\t\tif (plnam)\n\t\t\tsprintf(log_buffer, \"%s. symbol %s not found in %s\", error, psnam, plnam);\n\t\telse\n\t\t\tsprintf(log_buffer, \"%s. symbol %s not found\", error, psnam);\n\t} else {\n\n\t\tretcode = 1;\n\t\t*psp = psym;\n\n\t\tif (plnam)\n\t\t\tsprintf(log_buffer, \"symbol %s found in %s\", psnam, plnam);\n\t\telse\n\t\t\tsprintf(log_buffer, \"symbol %s found\", psnam);\n\t}\n\treturn (retcode);\n}\n\n#endif /* MOM_ALPS */\n\n/* Private variables */\n\n/**\n * @brief\n * \tSet session id and whatever else is required on this machine\n *\tto create a new job.\n * \tOn a Cray, an ALPS reservation will be created and confirmed.\n *\n * @param[in]\tpjob\t-\tpointer to job structure\n * @param[in]\tsjr\t-\tpointer to startjob_rtn structure\n *\n * @return session/job id\n * @retval -1 error from setsid(), no message in log_buffer\n * @retval -2 temporary error, retry job, message in log_buffer\n * @retval -3 permanent error, abort job, message in log_buffer\n *\n */\nint\nset_job(job *pjob, struct startjob_rtn *sjr)\n{\n#if MOM_ALPS\n\tif (job_facility_present && pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) {\n\n\t\t/* host system has necessary JOB container facility present\n\t\t * and this host is Mother Superior for this job\n\t\t */\n\n\t\tjid_t *pjid = (jid_t *) &pjob->ji_extended.ji_ext.ji_jid[0];\n\n\t\tif (*pjid != (jid_t) 0 && *pjid != (jid_t) -1) {\n\t\t\tsjr->sj_jid = *pjid;\n\t\t} else {\n\n\t\t\terrno = -1;\n\t\t\tsjr->sj_jid = (jc_create == NULL) ? 
-1 : (*jc_create)(0, pjob->ji_qs.ji_un.ji_momt.ji_exuid, 0);\n\n\t\t\tif (sjr->sj_jid == (jid_t) -1) {\n\n\t\t\t\t/* Failed: categorize errno into two cases and handle */\n\t\t\t\t/* Remark: sit_job call occurs before log_close()     */\n\n\t\t\t\tif (errno == ENOSYS) {\n\t\t\t\t\tif (job_facility_present == 1) {\n\t\t\t\t\t\tlog_joberr(errno, __func__,\n\t\t\t\t\t\t\t   \"Job container facility unavailable\",\n\t\t\t\t\t\t\t   pjob->ji_qs.ji_jobid);\n\t\t\t\t\t\tjob_facility_present = 0;\n\t\t\t\t\t}\n\t\t\t\t} else {\n\n\t\t\t\t\t/* log any other job_create failure type */\n\n\t\t\t\t\tlog_joberr(errno, __func__,\n\t\t\t\t\t\t   \"Job container job_create call failed\", pjob->ji_qs.ji_jobid);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t*pjid = sjr->sj_jid;\n\t}\n#endif /* MOM_ALPS */\n\n\tsjr->sj_session = setsid();\n\n#if MOM_ALPS\n\t/*\n\t * Now that we have our SID/JID we can request/confirm our\n\t * placement scheduler reservation.\n\t *\n\t * Do this only if we are mother superior for the job.\n\t */\n\n\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) {\n\t\tbasil_request_reserve_t *basil_req;\n\t\tint rc;\n\n\t\t/* initialized to -1 so this catches the unset case. 
*/\n\t\tsjr->sj_reservation = -1;\n\n\t\trc = alps_create_reserve_request(pjob, &basil_req);\n\t\tif (rc == 1) {\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"Fatal MPP reservation error\"\n\t\t\t\t\" preparing request.\");\n\t\t\treturn -3;\n\t\t} else if (rc == 2) {\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"Transient MPP reservation error\"\n\t\t\t\t\" preparing request.\");\n\t\t\treturn -2;\n\t\t}\n\t\tif (basil_req) {\n\t\t\trc = alps_create_reservation(basil_req,\n\t\t\t\t\t\t     &sjr->sj_reservation,\n\t\t\t\t\t\t     &sjr->sj_pagg);\n\t\t\talps_free_reserve_request(basil_req);\n\t\t\tif (rc < 0) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"Fatal MPP reservation error\"\n\t\t\t\t\t\" on create.\");\n\t\t\t\treturn -3;\n\t\t\t}\n\t\t\tif (rc > 0) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"Transient MPP reservation error\"\n\t\t\t\t\t\" on create.\");\n\t\t\t\treturn -2;\n\t\t\t}\n\t\t\t/*\n\t\t\t * If we are interacting with ALPS, the cookie has\n\t\t\t * not been set. Fill in the session ID we just\n\t\t\t * acquired. 
Otherwise, we are interacting with\n\t\t\t * CPA and use the cookie that was acquired when\n\t\t\t * the reservation was created.\n\t\t\t */\n\t\t\tif (sjr->sj_pagg == 0) {\n\t\t\t\tif ((job_facility_present == 1))\n\t\t\t\t\tsjr->sj_pagg = sjr->sj_jid;\n\t\t\t\telse\n\t\t\t\t\tsjr->sj_pagg = sjr->sj_session;\n\t\t\t}\n\t\t\tpjob->ji_extended.ji_ext.ji_reservation =\n\t\t\t\tsjr->sj_reservation;\n\t\t\tpjob->ji_extended.ji_ext.ji_pagg =\n\t\t\t\tsjr->sj_pagg;\n\n\t\t\trc = alps_confirm_reservation(pjob);\n\t\t\tif (rc < 0) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"Fatal MPP reservation error\"\n\t\t\t\t\t\" on confirm.\");\n\t\t\t\treturn -3;\n\t\t\t}\n\t\t\tif (rc > 0) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"Transient MPP reservation error\"\n\t\t\t\t\t\" on confirm.\");\n\t\t\t\treturn -2;\n\t\t\t}\n\t\t} else { /* No error but no reservation made, reset so\n\t\t\t\t\t * the inventory will not be reread.\n\t\t\t\t\t */\n\t\t\tsjr->sj_reservation = 0;\n\t\t}\n\t}\n#endif /* MOM_ALPS */\n\n\treturn (sjr->sj_session);\n}\n\n/**\n * @brief\n *\tset_globid - set the global id for a machine type.\n *\n * @param[in] pjob - pointer to job structure\n * @param[in] sjr  - pointer to startjob_rtn structure\n *\n * @return Void\n *\n */\n\nvoid\nset_globid(job *pjob, struct startjob_rtn *sjr)\n{\n#if MOM_ALPS\n\tchar buf[19]; /* 0x,16 hex digits,'\\0' */\n\tchar altid_buf[23];\n\n\tif (sjr->sj_jid == (jid_t) -1)\n\t\tjob_facility_present = 0;\n\telse if (sjr->sj_jid) {\n\n\t\tsprintf(buf, \"%#0lx\", (unsigned long) sjr->sj_jid);\n\t\tset_jattr_str_slim(pjob, JOB_ATR_acct_id, buf, NULL);\n\t\t(void) memcpy(&pjob->ji_extended.ji_ext.ji_jid, &sjr->sj_jid, sizeof(pjob->ji_extended.ji_ext.ji_jid));\n\n\t\tif (job_facility_present == 0) {\n\t\t\t/* first success on job_create() after failure */\n\t\t\tjob_facility_present = 1;\n\t\t\tsprintf(log_buffer, \"Job container facility available\");\n\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_ACCT, LOG_DEBUG, __func__, 
log_buffer);\n\t\t}\n\t}\n\n\tpjob->ji_extended.ji_ext.ji_pagg = sjr->sj_pagg;\n\tpjob->ji_extended.ji_ext.ji_reservation = sjr->sj_reservation;\n\tsprintf(altid_buf, \"%ld\", sjr->sj_reservation);\n\tset_jattr_str_slim(pjob, JOB_ATR_altid, altid_buf, NULL);\n\n#endif /* MOM_ALPS */\n}\n\n/**\n * @brief\n *\tsets the shell to be used\n *\n * @param[in] pjob - pointer to job structure\n * @param[in] pwdp - pointer to passwd structure\n *\n * @return \tstring\n * @retval \tshellname\tSuccess\n *\n */\nchar *\nset_shell(job *pjob, struct passwd *pwdp)\n{\n\tchar *cp;\n\tint i;\n\tchar *shell;\n\tstruct array_strings *vstrs;\n\t/*\n\t * find which shell to use, one specified or the login shell\n\t */\n\n\tshell = pwdp->pw_shell;\n\tif ((is_jattr_set(pjob, JOB_ATR_shell)) &&\n\t    (vstrs = get_jattr_arst(pjob, JOB_ATR_shell))) {\n\t\tfor (i = 0; i < vstrs->as_usedptr; ++i) {\n\t\t\tcp = strchr(vstrs->as_string[i], '@');\n\t\t\tif (cp) {\n\t\t\t\tif (!strncmp(mom_host, cp + 1, strlen(cp + 1))) {\n\t\t\t\t\t*cp = '\\0'; /* host name matches */\n\t\t\t\t\tshell = vstrs->as_string[i];\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tshell = vstrs->as_string[i]; /* wildcard */\n\t\t\t}\n\t\t}\n\t}\n\treturn (shell);\n}\n\n/**\n *\n * @brief\n * \tChecks if a child of the current (mom) process has terminated, and\n *\tmatches it with the pid of one of the tasks in the task_list_event,\n *\tor matches the pid of a process being monitored for a PBS job.\n *\tif matching a task in the task_list_event, then that task is\n *\tmarked as WORK_Deferred_Cmp along with the exit value of the child\n *\tprocess. 
Otherwise if it's for a job, and that job's\n *\tJOB_SVFLAG_TERMJOB is set, then mark the job as exiting.\n *\n * @return\tVoid\n *\n */\n\nvoid\nscan_for_terminated(void)\n{\n\tint exiteval;\n\tpid_t pid;\n\tjob *pjob;\n\ttask *ptask = NULL;\n\tstruct work_task *wtask = NULL;\n\tint statloc;\n\n\t/* update the latest intelligence about the running jobs;         */\n\t/* must be done before we reap the zombies, else we lose the info */\n\n\ttermin_child = 0;\n\n\tmom_set_use_all();\n\n\t/* Now figure out which task(s) have terminated (are zombies) */\n\n\twhile ((pid = waitpid(-1, &statloc, WNOHANG)) > 0) {\n\t\tif (WIFEXITED(statloc))\n\t\t\texiteval = WEXITSTATUS(statloc);\n\t\telse if (WIFSIGNALED(statloc))\n\t\t\texiteval = WTERMSIG(statloc) + 0x100;\n\t\telse\n\t\t\texiteval = 1;\n\n\t\t/* Check for other task lists */\n\t\twtask = (struct work_task *) GET_NEXT(task_list_event);\n\t\twhile (wtask) {\n\t\t\tif ((wtask->wt_type == WORK_Deferred_Child) &&\n\t\t\t    (wtask->wt_event == pid)) {\n\t\t\t\twtask->wt_type = WORK_Deferred_Cmp;\n\t\t\t\twtask->wt_aux = (int) exiteval; /* exit status */\n\t\t\t\tsvr_delay_entry++;\t\t/* see next_task() */\n\t\t\t}\n\t\t\twtask = (struct work_task *) GET_NEXT(wtask->wt_linkevent);\n\t\t}\n\n\t\tpjob = (job *) GET_NEXT(svr_alljobs);\n\t\twhile (pjob) {\n\t\t\t/*\n\t\t\t ** see if process was a child doing a special\n\t\t\t ** function for MOM\n\t\t\t */\n\t\t\tif (pid == pjob->ji_momsubt)\n\t\t\t\tbreak;\n\t\t\t/*\n\t\t\t ** look for task\n\t\t\t */\n\t\t\tptask = (task *) GET_NEXT(pjob->ji_tasks);\n\t\t\twhile (ptask) {\n\t\t\t\tif (ptask->ti_qs.ti_sid == pid)\n\t\t\t\t\tbreak;\n\t\t\t\tptask = (task *) GET_NEXT(ptask->ti_jobtask);\n\t\t\t}\n\t\t\tif (ptask != NULL)\n\t\t\t\tbreak;\n\t\t\tpjob = (job *) GET_NEXT(pjob->ji_alljobs);\n\t\t}\n\n\t\tif (pjob == NULL) {\n\t\t\tDBPRT((\"%s: pid %d not tracked, exit %d\\n\",\n\t\t\t       __func__, pid, exiteval))\n\t\t\tcontinue;\n\t\t}\n\n\t\tif (pid == pjob->ji_momsubt) 
{\n\t\t\tpjob->ji_momsubt = 0;\n\t\t\tif (pjob->ji_mompost) {\n\t\t\t\tpjob->ji_mompost(pjob, exiteval);\n\t\t\t}\n\t\t\t(void) job_save(pjob);\n\t\t\tcontinue;\n\t\t}\n\t\tDBPRT((\"%s: task %8.8X pid %d exit value %d\\n\", __func__,\n\t\t       ptask->ti_qs.ti_task, pid, exiteval))\n\t\tptask->ti_qs.ti_exitstat = exiteval;\n\t\tsprintf(log_buffer, \"task %8.8X terminated\",\n\t\t\tptask->ti_qs.ti_task);\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\n#ifdef PMIX\n\t\t/* Inform PMIx that the task has exited. */\n\t\tpbs_pmix_notify_exit(pjob, ptask->ti_qs.ti_exitstat, NULL);\n#endif\n\n\t\t/*\n\t\t ** After the top process(shell) of the TASK exits, check if the\n\t\t ** JOB_SVFLG_TERMJOB job flag set. If yes, then check for any\n\t\t ** live process(s) in the session. If found, make the task\n\t\t ** ORPHAN by setting the flag and delay by kill_delay time. This\n\t\t ** will be exited in kill_job or by cput_sum() as can not be\n\t\t ** seen again by scan_for_terminated().\n\t\t */\n\t\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_TERMJOB) {\n\t\t\tint n;\n\n\t\t\t(void) mom_get_sample();\n\t\t\tn = bld_ptree(ptask->ti_qs.ti_sid);\n\t\t\tif (n > 0) {\n\t\t\t\tptask->ti_flags |= TI_FLAGS_ORPHAN;\n\t\t\t\tDBPRT((\"%s: task %8.8X still has %d active procs\\n\", __func__,\n\t\t\t\t       ptask->ti_qs.ti_task, n))\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t}\n\n\t\tkill_session(ptask->ti_qs.ti_sid, SIGKILL, 0);\n\t\tptask->ti_qs.ti_status = TI_STATE_EXITED;\n\t\t(void) task_save(ptask);\n\t\texiting_tasks = 1;\n\t}\n}\n\n#ifdef HAVE_POSIX_OPENPT\n\n/**\n * @brief\n *\tThis is code adapted from an example for posix_openpt in\n *\tThe Open Group Base Specifications Issue 6.\n *\n *\tOn success, this function returns an open descriptor for the\n *\tmaster pseudotty and places a pointer to the (static) name of\n *\tthe slave pseudotty in *rtn_name;  on failure, -1 is returned.\n *\n * @param[out] rtn_name - holds info of tty\n 
*\n * @return \tint\n * @retval \tfd \tSuccess\n * @retval \t-1\tFailure\n *\n */\nint\nopen_master(char **rtn_name)\n{\n\tint masterfd;\n\tchar *newslavename;\n\tstatic char slavename[_POSIX_PATH_MAX];\n#ifndef _XOPEN_SOURCE\n\textern char *ptsname(int);\n\textern int grantpt(int);\n\textern int unlockpt(int);\n\textern int posix_openpt(int);\n#endif\n\n\tmasterfd = posix_openpt(O_RDWR | O_NOCTTY);\n\tif (masterfd == -1)\n\t\treturn (-1);\n\n\tif ((grantpt(masterfd) == -1) ||\n\t    (unlockpt(masterfd) == -1) ||\n\t    ((newslavename = ptsname(masterfd)) == NULL)) {\n\t\t(void) close(masterfd);\n\t\treturn (-1);\n\t}\n\n\tpbs_strncpy(slavename, newslavename, sizeof(slavename));\n\tassert(rtn_name != NULL);\n\t*rtn_name = slavename;\n\treturn (masterfd);\n}\n\n#else /* HAVE_POSIX_OPENPT */\n\n/**\n * @brief\n * \tcreat the master pty, this particular\n * \tpiece of code depends on multiplexor /dev/ptc\n *\n * @param[in] rtn_name - holds info about tty\n * @return      int\n * @retval      fd      Success\n * @retval      -1      Failure\n *\n */\n\n#define PTY_SIZE 12\n\nint\nopen_master(char **rtn_name)\n{\n\tchar *pc1;\n\tchar *pc2;\n\tint ptc; /* master file descriptor */\n\tstatic char ptcchar1[] = \"pqrs\";\n\tstatic char ptcchar2[] = \"0123456789abcdef\";\n\tstatic char pty_name[PTY_SIZE + 1]; /* \"/dev/[pt]tyXY\" */\n\n\tpbs_strncpy(pty_name, \"/dev/ptyXY\", sizeof(pty_name));\n\tfor (pc1 = ptcchar1; *pc1 != '\\0'; ++pc1) {\n\t\tpty_name[8] = *pc1;\n\t\tfor (pc2 = ptcchar2; *pc2 != '\\0'; ++pc2) {\n\t\t\tpty_name[9] = *pc2;\n\t\t\tif ((ptc = open(pty_name, O_RDWR | O_NOCTTY, 0)) >= 0) {\n\t\t\t\t/* Got a master, fix name to matching slave */\n\t\t\t\tpty_name[5] = 't';\n\t\t\t\t*rtn_name = pty_name;\n\t\t\t\treturn (ptc);\n\n\t\t\t} else if (errno == ENOENT)\n\t\t\t\treturn (-1); /* tried all entries, give up */\n\t\t}\n\t}\n\treturn (-1); /* tried all entries, give up */\n}\n#endif /* HAVE_POSIX_OPENPT */\n\n/*\n * struct sig_tbl = map of signal names to 
numbers,\n * see req_signal() in ../requests.c\n */\nstruct sig_tbl sig_tbl[] = {\n\t{\"NULL\", 0},\n\t{\"HUP\", SIGHUP},\n\t{\"INT\", SIGINT},\n\t{\"QUIT\", SIGQUIT},\n\t{\"ILL\", SIGILL},\n\t{\"TRAP\", SIGTRAP},\n\t{\"IOT\", SIGIOT},\n\t{\"ABRT\", SIGABRT},\n\t{\"FPE\", SIGFPE},\n\t{\"KILL\", SIGKILL},\n\t{\"BUS\", SIGBUS},\n\t{\"SEGV\", SIGSEGV},\n\t{\"PIPE\", SIGPIPE},\n\t{\"ALRM\", SIGALRM},\n\t{\"TERM\", SIGTERM},\n\t{\"URG\", SIGURG},\n\t{\"STOP\", SIGSTOP},\n\t{\"TSTP\", SIGTSTP},\n\t{\"CONT\", SIGCONT},\n\t{\"CHLD\", SIGCHLD},\n\t{\"CLD\", SIGCHLD},\n\t{\"TTIN\", SIGTTIN},\n\t{\"TTOU\", SIGTTOU},\n\t{\"IO\", SIGIO},\n#ifdef __linux__\n\t{\"POLL\", SIGPOLL},\n#endif\n\t{\"XCPU\", SIGXCPU},\n\t{\"XFSZ\", SIGXFSZ},\n\t{\"VTALRM\", SIGVTALRM},\n\t{\"PROF\", SIGPROF},\n\t{\"WINCH\", SIGWINCH},\n\t{\"USR1\", SIGUSR1},\n\t{\"USR2\", SIGUSR2},\n\t{NULL, -1}};\n\n/**\n * @brief\n *      Get the release information\n *\n * @par Functionality:\n *      This function extracts the release information of ProPack and Linux distributions\n *      from system files listed in struct release_info.\n *\n * @see\n *      get_versioned_lib\n *\n * @param[in]   file    -       pointer to file\n * @param[in]   pfx     -       pointer to prefix\n * @param[in]   srch    -       pointer to search string\n * @param[in]   sep     -       pointer to separator\n *\n * @return\tchar *\n * @retval\tdistro: <PP ver> or <OS ver>\n * @retval\tNULL: Not able to get the requested information from distro\n *\n * @par Side Effects: The value returned needs to be freed by the caller.\n *\n * @par MT-safe: Yes\n *\n */\n\nstatic char *\nparse_sysfile_info(const char *file,\n\t\t   const char *pfx,\n\t\t   const char *srch,\n\t\t   const char *sep)\n{\n\tFILE *fptr;\n\tchar rbuf[1024];\n\tchar *tok;\n\tchar *svptr = NULL;\n\tchar *distro;\n\tint found = 0;\n\n\tfptr = fopen(file, \"r\");\n\tif (fptr == NULL)\n\t\treturn NULL;\n\n\twhile (fgets(rbuf, sizeof(rbuf), fptr) != NULL) {\n\t\tif 
(strstr(rbuf, srch)) {\n\t\t\tfound = 1;\n\t\t\tbreak;\n\t\t}\n\t}\n\n\tfclose(fptr);\n\n\tif (found == 0) {\n\t\tsprintf(log_buffer, \"release info not found in %s\", file);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\treturn NULL;\n\t}\n\n\ttok = string_token(rbuf, sep, &svptr);\n\twhile (tok) {\n\t\tif (strstr(tok, srch)) {\n\t\t\ttok = string_token(NULL, sep, &svptr);\n\t\t\tbreak;\n\t\t}\n\t\ttok = string_token(NULL, sep, &svptr);\n\t}\n\tif (tok == NULL)\n\t\treturn NULL;\n\n\twhile (!isdigit((int) (*tok)))\n\t\ttok++;\n\tdistro = malloc(MAXNAMLEN);\n\tif (distro == NULL) {\n\t\tsprintf(log_buffer, \"memory allocation failed\");\n\t\tlog_err(errno, __func__, log_buffer);\n\t\treturn NULL;\n\t}\n\t(void) snprintf(distro, MAXNAMLEN, \"%s%d\", pfx, atoi(tok));\n\tdistro[MAXNAMLEN - 1] = '\\0';\n\treturn distro;\n}\n\n/**\n * @brief\n *\tEnsure that the shared object exists and\n *\tget the shared object name from the table\n *\n * @par Functionality:\n *\tThis function checks the verified and tested list of\n *\t<PP ver>, <OS ver>, and <Architecture>, and\n *\tif these entries match an entry in the libjob_support table,\n *\treturns the shared object name to the caller for dlopen.\n *\tOtherwise it returns NULL.\n *\n * @see\n *\tck_acct_facility_present\n *\n * @return\tchar *\n * @retval\tlibjob_support[idx].libjobver\n * @retval\tNULL:\t\t\t\tFailed to get the supported library\n *\n * @par Side Effects: None\n *\n * @par MT-safe: Yes\n *\n */\n\nchar *\nget_versioned_libname()\n{\n\tint idx;\n\tint table_size;\n\tstruct utsname buf;\n\tstruct libjob_support jobobj;\n\n\tmemset(&jobobj, 0, sizeof(jobobj));\n\n\t/* find OS information - loop to find out which file is available */\n\ttable_size = sizeof(release_info) / sizeof(release_info[0]);\n\tfor (idx = 1; idx < table_size; idx++) {\n\t\tif (access(release_info[idx].file, R_OK) != -1)\n\t\t\tbreak;\n\t}\n\n\t/* if we found a readable os release file, parse it.\n\t * if we don't find a file or if parse_sysfile_info fails,\n\t 
* jobobj.osver remains NULL, and is handled later\n\t */\n\tif (idx < table_size)\n\t\tjobobj.osver = parse_sysfile_info(release_info[idx].file,\n\t\t\t\t\t\t  release_info[idx].pfx,\n\t\t\t\t\t\t  release_info[idx].srch,\n\t\t\t\t\t\t  release_info[idx].sep);\n\t/* Get the information on architecture */\n\tif (uname(&buf) == -1) {\n\t\tsprintf(log_buffer, \"uname() call failed\");\n\t\tlog_err(errno, __func__, log_buffer);\n\t\tgoto SYSFAIL;\n\t}\n\n\tjobobj.arch = strdup(buf.machine);\n\n\t/* check that all the required members of jobobj are NON-NULL */\n\tif ((jobobj.arch == NULL) || (jobobj.osver == NULL)) {\n\t\tsprintf(log_buffer, \"Failed to get system information\");\n\t\tlog_err(errno, __func__, log_buffer);\n\t\tgoto SYSFAIL;\n\t}\n\n\t/* Compare system information with verified list of platforms */\n\n\ttable_size = sizeof(libjob_support) / sizeof(libjob_support[0]);\n\tfor (idx = 0; idx < table_size; idx++) {\n\t\tif ((strcmp(jobobj.osver, libjob_support[idx].osver) == 0) &&\n\t\t    (strcmp(jobobj.arch, libjob_support[idx].arch) == 0)) {\n\t\t\tfree(jobobj.arch);\n\t\t\tfree(jobobj.osver);\n\t\t\treturn libjob_support[idx].libjobver;\n\t\t}\n\t}\n\nSYSFAIL:\n\tif (jobobj.arch)\n\t\tfree(jobobj.arch);\n\tif (jobobj.osver)\n\t\tfree(jobobj.osver);\n\treturn NULL;\n}\n"
  },
  {
    "path": "src/resmom/linux/pe_input.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <fcntl.h>\n#include <sys/param.h>\n\n/**\n * @file\n */\n/**\n * @brief\n * \tpe_input() - open architecture dependent input file for prologue\n *\tand epilogue scripts.  See ../prolog.c\n *\tFor linux - /dev/null\n *\n * @param[in] jobid - job id\n *\n * @return \tfile descriptor\n * @retval\tfd\tSuccess\n * @retval  \t-1\tFailure\n *\n */\n\nint\npe_input(char *jobid)\n{\n\treturn (open(\"/dev/null\", O_RDONLY, 0));\n}\n"
  },
  {
    "path": "src/resmom/mock_run.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <assert.h>\n#include <time.h>\n#include <errno.h>\n\n#include \"attribute.h\"\n#include \"batch_request.h\"\n#include \"job.h\"\n#include \"log.h\"\n#include \"mock_run.h\"\n#include \"mom_func.h\"\n#include \"pbs_error.h\"\n#include \"resource.h\"\n#include \"mom_server.h\"\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n#include \"renew_creds.h\"\n#endif\n\nextern time_t time_now;\nextern time_t time_resc_updated;\nextern int min_check_poll;\nextern int next_sample_time;\n\nvoid\nmock_run_finish_exec(job *pjob)\n{\n\tresource_def *rd;\n\tresource *wall_req;\n\tint walltime = 0;\n\n\trd = &svr_resc_def[RESC_WALLTIME];\n\twall_req = find_resc_entry(get_jattr(pjob, JOB_ATR_resource), rd);\n\tif (wall_req != NULL) {\n\t\twalltime = wall_req->rs_value.at_val.at_long;\n\t\tstart_walltime(pjob);\n\t}\n\n\ttime_now = time(NULL);\n\n\t/* Add a work task that runs when the job is supposed to end */\n\tset_task(WORK_Timed, time_now + walltime, mock_run_end_job_task, pjob);\n\n\tsprintf(log_buffer, \"Started mock run of job\");\n\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB,\n\t\t  LOG_INFO, pjob->ji_qs.ji_jobid, log_buffer);\n\n\tmock_run_record_finish_exec(pjob);\n\n\treturn;\n}\n\nvoid\nmock_run_record_finish_exec(job *pjob)\n{\n\tset_job_state(pjob, 
JOB_STATE_LTR_RUNNING);\n\tset_job_substate(pjob, JOB_SUBSTATE_RUNNING);\n\n\tjob_save(pjob);\n\n\ttime_resc_updated = time_now;\n\tmock_run_mom_set_use(pjob);\n\n\tenqueue_update_for_send(pjob, IS_RESCUSED);\n\tnext_sample_time = min_check_poll;\n\n\treturn;\n}\n\n/**\n * @brief\twork task handler for end of a job in mock run mode\n *\n * @param[in]\tptask - pointer to the work task\n *\n * @return void\n */\nvoid\nmock_run_end_job_task(struct work_task *ptask)\n{\n\tjob *pjob;\n\n\tif (ptask == NULL) {\n\t\tlog_err(PBSE_UNKJOBID, __func__, \"Task not received\");\n\t\treturn;\n\t}\n\n\tpjob = ptask->wt_parm1;\n\n\tset_job_state(pjob, JOB_STATE_LTR_EXITING);\n\tset_job_substate(pjob, JOB_SUBSTATE_EXITING);\n\n\tpjob->ji_qs.ji_un.ji_momt.ji_exitstat = JOB_EXEC_OK;\n\n\tscan_for_exiting();\n}\n\n/**\n * @brief\n * \tUpdate the resources used.<attributes> of a job when in mock run mode\n *\n * @param[in]\tpjob - job in question.\n *\n *\n * @return int\n * @retval PBSE_NONE\tfor success.\n */\nint\nmock_run_mom_set_use(job *pjob)\n{\n\tint i;\n\tresource *pres;\n\tresource *pres_req;\n\tattribute *at;\n\tresource_def *rdefp;\n\tlong val_req = 0;\n\tstatic resource_def **rd = NULL;\n\tstatic resource_def *vmemd = NULL;\n\tint memval = 0;\n\tunsigned int mem_atsv_shift = 10;\n\tunsigned int mem_atsv_units = ATR_SV_BYTESZ;\n\n\tassert(pjob != NULL);\n\tat = get_jattr(pjob, JOB_ATR_resc_used);\n\tat->at_flags |= (ATR_VFLAG_MODIFY | ATR_VFLAG_SET);\n\n\tif (rd == NULL) {\n\t\trd = malloc(5 * sizeof(resource_def *));\n\t\tif (rd == NULL) {\n\t\t\tlog_err(errno, __func__, \"Unable to allocate memory\");\n\t\t\treturn PBSE_SYSTEM;\n\t\t}\n\n\t\trd[0] = &svr_resc_def[RESC_NCPUS];\n\t\trd[1] = &svr_resc_def[RESC_MEM];\n\t\trd[2] = &svr_resc_def[RESC_CPUT];\n\t\trd[3] = &svr_resc_def[RESC_CPUPERCENT];\n\t\trd[4] = NULL;\n\t}\n\n\tvmemd = &svr_resc_def[RESC_VMEM];\n\n\tfor (i = 0; rd[i] != NULL; i++) {\n\t\trdefp = rd[i];\n\t\tpres = find_resc_entry(at, rdefp);\n\t\tif (pres == 
NULL) {\n\t\t\tpres = add_resource_entry(at, rdefp);\n\t\t\tmark_attr_set(&pres->rs_value);\n\t\t\tpres->rs_value.at_type = rd[i]->rs_type;\n\n\t\t\t/*\n\t\t\t * get pointer to list of resources *requested* for the job\n\t\t\t * so the res used can be set to res requested\n\t\t\t */\n\t\t\tpres_req = find_resc_entry(get_jattr(pjob, JOB_ATR_resource), rdefp);\n\t\t\tif (pres_req != NULL &&\n\t\t\t    (val_req = pres_req->rs_value.at_val.at_long) != 0)\n\t\t\t\tpres->rs_value.at_val.at_long = val_req;\n\t\t\telse\n\t\t\t\tpres->rs_value.at_val.at_long = 0;\n\n\t\t\tif (rd[i]->rs_type == ATR_TYPE_SIZE) {\n\t\t\t\tif (pres_req != NULL) {\n\t\t\t\t\tmemval = val_req;\n\t\t\t\t\tmem_atsv_shift = pres_req->rs_value.at_val.at_size.atsv_shift;\n\t\t\t\t\tmem_atsv_units = pres_req->rs_value.at_val.at_size.atsv_units;\n\t\t\t\t\tpres->rs_value.at_val.at_size.atsv_shift = mem_atsv_shift;\n\t\t\t\t\tpres->rs_value.at_val.at_size.atsv_units = mem_atsv_units;\n\t\t\t\t} else {\n\t\t\t\t\tpres->rs_value.at_val.at_size.atsv_shift = 10; /* KB */\n\t\t\t\t\tpres->rs_value.at_val.at_size.atsv_units = ATR_SV_BYTESZ;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t/* Set vmem equal to the value of mem */\n\tpres = find_resc_entry(at, vmemd);\n\tif (pres == NULL) {\n\t\tpres = add_resource_entry(at, vmemd);\n\t\tmark_attr_set(&pres->rs_value);\n\t\tpres->rs_value.at_type = ATR_TYPE_LONG;\n\t\tpres->rs_value.at_val.at_long = memval;\n\t\tpres->rs_value.at_val.at_size.atsv_shift = mem_atsv_shift;\n\t\tpres->rs_value.at_val.at_size.atsv_units = mem_atsv_units;\n\t}\n\n\tpjob->ji_sampletim = time(NULL);\n\n\t/* update walltime usage */\n\tupdate_walltime(pjob);\n\n\treturn (PBSE_NONE);\n}\n\n/**\n * @brief\tjob_purge for mock run mode\n *\n * @param[in,out]\tpjob - the job being purged\n *\n * @return\tvoid\n */\nvoid\nmock_run_job_purge(job *pjob)\n{\n\tdelete_link(&pjob->ji_jobque);\n\tdelete_link(&pjob->ji_alljobs);\n\tdelete_link(&pjob->ji_unlicjobs);\n\n\tif (pjob->ji_preq != NULL) 
{\n\t\tlog_joberr(PBSE_INTERNAL, __func__, \"request outstanding\",\n\t\t\t   pjob->ji_qs.ji_jobid);\n\t\treply_text(pjob->ji_preq, PBSE_INTERNAL, \"job deleted\");\n\t\tpjob->ji_preq = NULL;\n\t}\n\n\t/* delete script file */\n\tdel_job_related_file(pjob, JOB_SCRIPT_SUFFIX);\n\n\tdel_job_dirs(pjob, NULL);\n\n\t/* delete job file */\n\tdel_job_related_file(pjob, JOB_FILE_SUFFIX);\n\n\tdel_chkpt_files(pjob);\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\tdelete_cred(pjob->ji_qs.ji_jobid);\n#endif\n\tdel_job_related_file(pjob, JOB_CRED_SUFFIX);\n\n\tjob_free(pjob);\n}\n"
  },
  {
    "path": "src/resmom/mock_run.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _MOCK_RUN_H\n#define _MOCK_RUN_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include <stdio.h>\n\n#include \"work_task.h\"\n#include \"job.h\"\n\nvoid mock_run_finish_exec(job *pjob);\n\nvoid mock_run_record_finish_exec(job *pjob);\n\nvoid mock_run_end_job_task(struct work_task *ptask);\n\nint mock_run_mom_set_use(job *pjob);\n\nvoid mock_run_job_purge(job *pjob);\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _MOCK_RUN_H */\n"
  },
  {
    "path": "src/resmom/mom_comm.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tmom_comm.c\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <sys/stat.h>\n\n#include <unistd.h>\n#include <dirent.h>\n#include <pwd.h>\n#include <netdb.h>\n#include <sys/socket.h>\n#include <netinet/in.h>\n#include <arpa/inet.h>\n#include <sys/param.h>\n#include <sys/times.h>\n#include <sys/time.h>\n#include <sys/resource.h>\n#include <signal.h>\n#include <string.h>\n#include <ctype.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <time.h>\n#include <limits.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n\n#include \"libpbs.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"server_limits.h\"\n#include \"job.h\"\n#include \"pbs_error.h\"\n#include \"log.h\"\n#include \"net_connect.h\"\n#include \"tpp.h\"\n#include \"dis.h\"\n#include \"mom_func.h\"\n#include \"mom_server.h\"\n#include \"credential.h\"\n#include \"ticket.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"batch_request.h\"\n#include \"hook.h\"\n#include \"mom_hook_func.h\"\n#include \"pbs_internal.h\"\n#include \"placementsets.h\"\n#include \"pbs_reliable.h\"\n#include \"renew_creds.h\"\n#ifdef PMIX\n#include \"mom_pmix.h\"\n#endif\n\n/* Global Data Items */\n\nextern int exiting_tasks;\nextern char 
mom_host[];\nextern char *path_jobs;\nextern int pbs_errno;\nextern pbs_list_head mom_deadjobs; /* for deferred purging of job */\nextern pbs_list_head mom_polljobs; /* must have resource limits polled */\nextern pbs_list_head svr_alljobs;  /* all jobs under MOM's control */\nextern time_t time_now;\nextern int server_stream;\nextern char mom_short_name[];\nextern unsigned int pbs_mom_port;\nextern unsigned int pbs_rm_port;\nextern int gen_nodefile_on_sister_mom;\n\nextern int mom_net_up;\nextern time_t mom_net_up_time;\nextern int max_poll_downtime_val;\nextern char *msg_err_malloc;\nextern int\nwrite_pipe_data(int upfds, void *data, int data_size);\nchar task_fmt[] = \"/%8.8X\";\nextern void resume_multinode(job *pjob);\n\n/* Function pointers\n **\n ** These are functions to provide extra interaction between mother\n ** superior and the sisters for any special job setup that needs\n ** to take place. If no extra setup needs to happen, the function\n ** pointers are all NULL and standard MOM interaction takes place.\n ** The sequence of actions which happen for extra setup is as follows\n ** (showing one sister):\n **\n **    MS                          sister\n ** Sends JOIN_JOB\n **                                Gets JOIN_JOB, calls job_join_extra.\n **                                Calls job_join_ack to append to reply.\n ** Calls job_join_read to read\n ** extra values included with\n ** JOIN_JOB reply.\n **\n ** Calls job_join_extra to get\n ** her own extra info.\n **\n ** Calls send_sister to send a\n ** SETUP_JOB message and uses\n ** job_setup_send to append setup\n ** information to the message.\n **                                Gets SETUP_JOB, calls job_setup_final.\n ** Gets reply to SETUP_JOB, calls\n ** job_setup_final.\n **\n ** At this point, all the extra setup for the job is done and it\n ** can be started by calling finish_exec.  
The clean up to undo or\n ** deallocate whatever resources were claimed in job_setup_final\n ** is done in job_clean_extra.\n */\n\n/*\n **\tGather any extra information needed at job start.  Called by\n **\ta sister when she gets JOIN_JOB.  Called by MS after she gets\n **\tgood JOIN_JOB replies from all the sisters.\n */\n#ifdef PMIX\npbs_jobnode_t job_join_extra = &pbs_pmix_job_join_extra;\n#else\npbs_jobnode_t job_join_extra = NULL;\n#endif\n\n/*\n **\tUsed by a sister node to write extra information back to MS\n **\twith the reply to JOIN_JOB.\n */\npbs_jobndstm_t job_join_ack = NULL;\n\n/*\n **\tUsed by MS to read extra information sent by a sister reply\n **\tto JOIN_JOB.\n */\npbs_jobndstm_t job_join_read = NULL;\n\n/*\n **\tCalled by MS from send_sisters to form a SETUP_JOB message.\n */\npbs_jobndstm_t job_setup_send = NULL;\n\n/*\n **\tCalled by a sister to read a SETUP_JOB message and do any\n **\tspecial setup required at job start.\n */\npbs_jobstream_t job_setup_final = NULL;\n\n/*\n **\tDoes any special processing needed before the epilogue runs.\n */\npbs_jobvoid_t job_end_final = NULL;\n\n/*\n **\tCalled at job end to undo the setup done in job_setup_final\n **\tor job_join_extra.\n */\n#ifdef PMIX\npbs_jobfunc_t job_clean_extra = &pbs_pmix_job_clean_extra;\n#else\npbs_jobfunc_t job_clean_extra = NULL;\n#endif\n\n/*\n **\tFree memory allocated in ji_setup and hn_setup for all nodes.\n */\npbs_jobvoid_t job_free_extra = NULL;\n\n/*\n **\tFree memory allocated in hn_setup for a given node.\n */\npbs_jobnodevoid_t job_free_node = NULL;\n\n/* the following depends on tm_node_id being 0 to n-1 */\n#define TO_PHYNODE(vnode) pjob->ji_vnods[vnode].vn_host->hn_node\n\neventent *event_dup(eventent *ep, job *pjob, hnodent *pnode);\n\n/**\n * @brief\n *\tSave the critical information associated with a task to disk.\n *\n * @param[in]   ptask - structure handle holding task info to be saved\n *\n * @return   Error code\n * @retval   0 Success\n * @retval  -1 
Failure\n *\n */\nint\ntask_save(pbs_task *ptask)\n{\n\tjob *pjob = ptask->ti_job;\n\tint fds;\n\tint i;\n\tchar namebuf[MAXPATHLEN + 1];\n\tchar filnam[MAXPATHLEN + 1];\n\tint openflags;\n\n\t(void) strcpy(namebuf, path_jobs); /* job directory path */\n\tif (*pjob->ji_qs.ji_fileprefix != '\\0')\n\t\t(void) strcat(namebuf, pjob->ji_qs.ji_fileprefix);\n\telse\n\t\t(void) strcat(namebuf, pjob->ji_qs.ji_jobid);\n\t(void) strcat(namebuf, JOB_TASKDIR_SUFFIX);\n\t(void) sprintf(filnam, task_fmt, ptask->ti_qs.ti_task);\n\t(void) strcat(namebuf, filnam);\n\n\topenflags = O_WRONLY | O_CREAT;\n\tfds = open(namebuf, openflags, 0600);\n\tif (fds < 0) {\n\t\tsprintf(log_buffer, \"error on open %s\", namebuf);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\treturn (-1);\n\t}\n\n#ifdef WIN32\n\tsecure_file(namebuf, \"Administrators\",\n\t\t    READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED);\n#endif\n\n\t/* just write the \"critical\" base structure to the file */\n\n\twhile ((i = write(fds, (char *) &ptask->ti_qs, sizeof(ptask->ti_qs))) !=\n\t       sizeof(ptask->ti_qs)) {\n\t\tif ((i < 0) && (errno == EINTR)) { /* retry the write */\n\t\t\tif (lseek(fds, (off_t) 0, SEEK_SET) < 0) {\n\t\t\t\tlog_err(errno, __func__, \"lseek\");\n\t\t\t\t(void) close(fds);\n\t\t\t\treturn (-1);\n\t\t\t}\n\t\t\tcontinue;\n\t\t} else {\n\t\t\tlog_err(errno, __func__, \"quickwrite\");\n\t\t\t(void) close(fds);\n\t\t\treturn (-1);\n\t\t}\n\t}\n\t(void) close(fds);\n\treturn (0);\n}\n\n/**\n * @brief\n *\tDuplicate an event and link it to the given nodeent entry.\n *\n * @param[in] ep - eventent pointer to event to be linked\n * @param[in] pjob - job pointer to job\n * @param[in] pnode - hnode pointer to node to link event\n *\n * @return structure\n * @retval event linked\n *\n */\neventent *\nevent_dup(eventent *ep, job *pjob, hnodent *pnode)\n{\n\teventent *nep;\n\n\tnep = (eventent *) malloc(sizeof(eventent));\n\tassert(nep);\n\n\tmemmove(nep, ep, 
sizeof(*ep));\n\tCLEAR_LINK(nep->ee_next);\n\n\tappend_link(&pnode->hn_events, &nep->ee_next, nep);\n\n\tif (pnode->hn_stream == -1)\n\t\tpnode->hn_stream = tpp_open(pnode->hn_host, pnode->hn_port);\n\n\treturn nep;\n}\n\n/**\n * @brief\n *\tAllocate an event and link it to the given nodeent entry.\n *\n * @param[in] pjob - pointer to job structure\n * @param[in] command - command event is for\n * @param[in] fd - TM stream\n * @param[in] pnode - pointer to structure to keep track of events for node\n * @param[in] event - MOM event number\n * @param[in] taskid - which task id\n *\n * @return structure handle\n * @retval eventent *\n *\n */\neventent *\nevent_alloc(job *pjob, int command, int fd, hnodent *pnode,\n\t    tm_event_t event, tm_task_id taskid)\n{\n\tstatic tm_event_t eventnum = TM_NULL_EVENT + 1;\n\tstatic int rollover = 0;\n\teventent *ep;\n\n\tep = (eventent *) malloc(sizeof(eventent));\n\tassert(ep);\n\tep->ee_command = command;\n\tep->ee_retry = 0;\n\tep->ee_fd = fd;\n\tep->ee_client = event;\n\tep->ee_taskid = taskid;\n\tep->ee_argv = NULL;\n\tep->ee_envp = NULL;\n\tCLEAR_LINK(ep->ee_next);\n\n\tif ((ep->ee_event = eventnum++) == TM_NULL_EVENT) {\n\t\t/*\n\t\t ** Set the eventnum counter back to initial condition.\n\t\t ** The first legal event number is TM_NULL_EVENT+1.\n\t\t */\n\t\tDBPRT((\"%s: EVENT ROLLOVER\\n\", __func__))\n\t\teventnum = TM_NULL_EVENT + 1;\n\t\tep->ee_event = eventnum++;\n\t\trollover = 1;\n\t}\n\n\tif (rollover) {\n\t\tint i;\n\n\t\t/*\n\t\t ** Check for events to be sure there are no dups.\n\t\t */\n\tcheck:\n\t\tfor (i = 0; i < pjob->ji_numnodes; i++) {\n\t\t\teventent *sp;\n\t\t\thnodent *np = &pjob->ji_hosts[i];\n\n\t\t\tsp = (eventent *) GET_NEXT(np->hn_events);\n\t\t\twhile (sp) {\n\t\t\t\tif (sp->ee_event == ep->ee_event) {\n\t\t\t\t\tDBPRT((\"%s: DUP host event\\n\", __func__))\n\t\t\t\t\tep->ee_event = eventnum++;\n\t\t\t\t\tgoto check;\n\t\t\t\t}\n\n\t\t\t\tsp = (eventent *) 
GET_NEXT(sp->ee_next);\n\t\t\t}\n\t\t}\n\t\t/*\n\t\t ** We don't need to search the obit events because\n\t\t ** any local client (not MOM) will have generated\n\t\t ** the event number that is saved for the obit.\n\t\t */\n\t}\n\n\tappend_link(&pnode->hn_events, &ep->ee_next, ep);\n\n\tif (pnode->hn_stream == -1)\n\t\tpnode->hn_stream = tpp_open(pnode->hn_host, pnode->hn_port);\n\n\treturn ep;\n}\n\n/**\n * @brief\n *\tHow many bits does it take to represent a number?\n *\n * @param[in] x - unsigned number\n *\n * @return int\n * @retval number of bits\n *\n */\nint\nnumbits(uint x)\n{\n\tint i;\n\n\tfor (i = 0; x != 0; i++)\n\t\tx >>= 1;\n\treturn i;\n}\n\n/**\n * @brief\n *\tCreate a new task for job\n *\n * @param[in] pjob - structure handle to job\n *\n * @return structure\n * @retval structure handle to pbs_task\n *\n */\npbs_task *\nmomtask_create(job *pjob)\n{\n\tpbs_task *ptask;\n\ttm_task_id taskid;\n\n\t{\n\t\tint i;\n\t\tuint nodeid = pjob->ji_numvnod; /* largest nodeid */\n\t\tuint myvnodeid = pjob->ji_nodeid;\n\n\t\ti = numbits(nodeid);\n\t\ttaskid = pjob->ji_taskid++;\n\n\t\t/* check for overflow */\n\t\tif (numbits(taskid) > (8 * sizeof(taskid) - i))\n\t\t\treturn NULL;\n\n\t\tmyvnodeid <<= (8 * sizeof(taskid) - i);\n\t\ttaskid |= myvnodeid;\n\t}\n\n\tptask = (pbs_task *) malloc(sizeof(pbs_task));\n\tassert(ptask);\n\tmemset((void *) ptask, 0, sizeof(pbs_task));\n\tptask->ti_job = pjob;\n\tCLEAR_LINK(ptask->ti_jobtask);\n\tappend_link(&pjob->ji_tasks, &ptask->ti_jobtask, ptask);\n\tptask->ti_tmfd = NULL;\n\tptask->ti_protover = -1;\n\tptask->ti_flags = 0;\n\tptask->ti_cput = 0;\n#ifdef WIN32\n\tptask->ti_hProc = NULL;\n#endif\n\tptask->ti_register = TM_NULL_EVENT;\n\tCLEAR_HEAD(ptask->ti_obits);\n\tCLEAR_HEAD(ptask->ti_info);\n\n\tptask->ti_qs.ti_parentnode = TM_ERROR_NODE;\n\tptask->ti_qs.ti_parenttask = 0;\n\tptask->ti_qs.ti_task = taskid;\n\n\tptask->ti_qs.ti_myvnode = 0;\n\tptask->ti_qs.ti_status = TI_STATE_EMBRYO;\n\tptask->ti_qs.ti_sid = 
0;\n\tptask->ti_qs.ti_exitstat = 0;\n\tmemset(ptask->ti_qs.ti_u.ti_hold, 0, sizeof(ptask->ti_qs.ti_u.ti_hold));\n\n\treturn ptask;\n}\n\n/**\n * @brief\n *\tfind task for job\n *\n * @param[in] pjob - structure handle to job\n * @param[in] taskid - task id\n *\n * @retval structure handle to pbs_task\n *\n */\npbs_task *\ntask_find(job *pjob, tm_task_id taskid)\n{\n\tpbs_task *ptask;\n\n\tfor (ptask = (pbs_task *) GET_NEXT(pjob->ji_tasks);\n\t     ptask;\n\t     ptask = (pbs_task *) GET_NEXT(ptask->ti_jobtask)) {\n\t\tif (ptask->ti_qs.ti_task == taskid)\n\t\t\tbreak;\n\t}\n\treturn ptask;\n}\n\n/**\n * @brief\n *\tfind session  for task\n *\n * @param[in] sid - session id\n *\n * @return structure handle to pbs_task\n *\n */\npbs_task *\nfind_session(pid_t sid)\n{\n\tjob *pjob;\n\tpbs_task *ptask;\n\n\tfor (pjob = (job *) GET_NEXT(svr_alljobs);\n\t     pjob != NULL;\n\t     pjob = (job *) GET_NEXT(pjob->ji_alljobs)) {\n\t\tfor (ptask = (pbs_task *) GET_NEXT(pjob->ji_tasks);\n\t\t     ptask;\n\t\t     ptask = (pbs_task *) GET_NEXT(ptask->ti_jobtask)) {\n\t\t\tif (ptask->ti_qs.ti_sid == sid)\n\t\t\t\treturn ptask;\n\t\t}\n\t}\n\treturn NULL;\n}\n\n/**\n * @brief\n *\tcheck task for job\n *\n * @param[in] pjob - structure handle to job\n * @param[in] fd   - TM stream\n * @param[in] taskid - task's taskid\n *\n * @return structure handle to pbs_task\n *\n */\npbs_task *\ntask_check(job *pjob, int fd, tm_task_id taskid)\n{\n\tint i;\n\tpbs_task *ptask;\n\n\tptask = task_find(pjob, taskid);\n\tif (ptask == NULL) {\n\t\tsprintf(log_buffer, \"requesting task %8.8X not found\",\n\t\t\ttaskid);\n\t\tlog_joberr(-1, __func__, log_buffer, pjob->ji_qs.ji_jobid);\n\t\treturn NULL;\n\t}\n\tfor (i = 0; i < ptask->ti_tmnum; i++) {\n\t\tif (ptask->ti_tmfd[i] == fd)\n\t\t\tbreak;\n\t}\n\tif (fd < 0 || i == ptask->ti_tmnum) {\n\t\tsprintf(log_buffer, \"cannot tm_reply to task %8.8X\", taskid);\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t  pjob->ji_qs.ji_jobid, 
log_buffer);\n\t\treturn NULL;\n\t}\n\treturn ptask;\n}\n\n/**\n * @brief\n *      Recover (read in) the tasks from their save files for a job.\n *      This function is only needed upon MOM start up.\n *\n * @param [in]\tpjob - pointer to struct job.\n *\n * @return\tint\n * @retval\t0\tSuccess\n * @retval\t-1\tOpen dir on dirname failed\n *\n */\nint\ntask_recov(job *pjob)\n{\n\tint fds;\n\tpbs_task *pt;\n\tchar dirname[MAXPATHLEN + 1];\n\tchar namebuf[MAXPATHLEN + 1];\n#ifdef WIN32\n\tint len;\n\tHANDLE hDir;\n\tWIN32_FIND_DATA finfo;\n#else\n\tDIR *dir;\n\tstruct dirent *pdirent;\n#endif\n\tstruct taskfix task_save;\n\n\t(void) strcpy(dirname, path_jobs); /* job directory path */\n\tif (*pjob->ji_qs.ji_fileprefix != '\\0')\n\t\t(void) strcat(dirname, pjob->ji_qs.ji_fileprefix);\n\telse\n\t\t(void) strcat(dirname, pjob->ji_qs.ji_jobid);\n\t(void) strcat(dirname, JOB_TASKDIR_SUFFIX);\n\n#ifdef WIN32\n\t(void) strcat(dirname, \"\\\\*\");\n\n\tif ((hDir = FindFirstFile(dirname, &finfo)) == INVALID_HANDLE_VALUE)\n\t\treturn -1;\n\n\tlen = strlen(dirname);\n\tdirname[len - 1] = '\\0'; /* trim wildcard */\n\tdo {\n\t\tif (finfo.cFileName[0] == '.')\n\t\t\tcontinue;\n\n\t\t(void) strcpy(namebuf, dirname);\n\t\t(void) strcat(namebuf, finfo.cFileName);\n\n\t\tfds = open(namebuf, O_RDONLY, 0);\n\t\tif (fds < 0) {\n\t\t\tlog_err(errno, __func__, \"open of task file\");\n\t\t\tunlink(namebuf);\n\t\t\tcontinue;\n\t\t}\n\n\t\t/* read in task quick save sub-structure */\n\t\tif (read(fds, (char *) &task_save, sizeof(task_save)) !=\n\t\t    sizeof(task_save)) {\n\t\t\tlog_err(errno, __func__, \"read\");\n\t\t\tunlink(namebuf);\n\t\t\t(void) close(fds);\n\t\t\tcontinue;\n\t\t}\n\t\tif ((pt = momtask_create(pjob)) == NULL) {\n\t\t\tunlink(namebuf);\n\t\t\t(void) close(fds);\n\t\t\tcontinue;\n\t\t}\n\t\tpt->ti_qs = task_save;\n\t\t(void) close(fds);\n\n\t\tif (task_save.ti_sid > 0) {\n\t\t\tpt->ti_hProc = OpenProcess(PROCESS_ALL_ACCESS,\n\t\t\t\t\t\t   FALSE, 
pt->ti_qs.ti_sid);\n\t\t}\n\t} while (FindNextFile(hDir, &finfo));\n\t(void) FindClose(hDir);\n#else\n\tif ((dir = opendir(dirname)) == NULL)\n\t\treturn -1;\n\n\t(void) strcat(dirname, \"/\");\n\twhile (errno = 0, (pdirent = readdir(dir)) != NULL) {\n\t\tif (pdirent->d_name[0] == '.')\n\t\t\tcontinue;\n\n\t\t(void) strcpy(namebuf, dirname);\n\t\t(void) strcat(namebuf, pdirent->d_name);\n\n\t\tfds = open(namebuf, O_RDONLY, 0);\n\t\tif (fds < 0) {\n\t\t\tlog_err(errno, __func__, \"open of task file\");\n\t\t\tunlink(namebuf);\n\t\t\tcontinue;\n\t\t}\n\n\t\t/* read in task quick save sub-structure */\n\t\tif (read(fds, (char *) &task_save, sizeof(task_save)) !=\n\t\t    sizeof(task_save)) {\n\t\t\tlog_err(errno, __func__, \"read\");\n\t\t\tunlink(namebuf);\n\t\t\t(void) close(fds);\n\t\t\tcontinue;\n\t\t}\n\t\tif ((pt = momtask_create(pjob)) == NULL) {\n\t\t\tunlink(namebuf);\n\t\t\t(void) close(fds);\n\t\t\tcontinue;\n\t\t}\n\t\tpt->ti_qs = task_save;\n\t\t(void) close(fds);\n\t}\n\tif (errno != 0 && errno != ENOENT) {\n\t\tlog_err(errno, __func__, \"readdir\");\n\t\t(void) closedir(dir);\n\t\treturn -1;\n\t}\n\t(void) closedir(dir);\n#endif /* WIN32 */\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tSend a reply message to a user proc over a TCP stream.\n *\n * @param[in] stream - file descriptor to tasks\n * @param[in] version - protocol version\n * @param[in] com - command event\n * @param[in] event - event number\n *\n * @return int\n * @retval (DIS_SUCCESS) 0  No error\n *\n */\nint\ntm_reply(int stream, int version, int com, tm_event_t event)\n{\n\tint ret;\n\n\tDBPRT((\"tm_reply: stream %d version %d com %d event %d\\n\",\n\t       stream, version, com, event))\n\tDIS_tcp_funcs();\n\n\tret = diswsi(stream, TM_PROTOCOL);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto done;\n\tret = diswsi(stream, version);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto done;\n\tret = diswsi(stream, com);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto done;\n\tret = diswsi(stream, event);\n\tif (ret != 
DIS_SUCCESS)\n\t\tgoto done;\n\treturn DIS_SUCCESS;\n\ndone:\n\tDBPRT((\"tm_reply: send error %s\\n\", dis_emsg[ret]))\n\treturn ret;\n}\n\n/**\n * @brief\n *\tStart a standard inter-MOM message.\n *\n * @param[in] stream - file descriptor\n * @param[in] jobid  - character pointer holding jobid\n * @param[in] cookie - job cookie string\n * @param[in] command - command for task\n * @param[in] event   - event number\n * @param[in] taskid  - task id\n * @param[in] version - protocol version\n *\n * @return int\n * @retval (DIS_SUCCESS) 0  No error\n *\n */\nint\nim_compose(int stream, char *jobid, char *cookie, int command,\n\t   tm_event_t event, tm_task_id taskid, int version)\n{\n\tint ret;\n\n\tif (stream < 0)\n\t\treturn DIS_EOF;\n\tDIS_tpp_funcs();\n\n\tret = diswsi(stream, IM_PROTOCOL);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto done;\n\tret = diswsi(stream, version);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto done;\n\tret = diswst(stream, jobid);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto done;\n\tret = diswst(stream, cookie);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto done;\n\tret = diswsi(stream, command);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto done;\n\tret = diswsi(stream, event);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto done;\n\tret = diswui(stream, taskid);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto done;\n\treturn DIS_SUCCESS;\n\ndone:\n\tDBPRT((\"im_compose: send error %s\\n\", dis_emsg[ret]))\n\treturn ret;\n}\n\n/**\n * @brief\n * \tClose the sister streams associated with the mcast channel\n * \tfor a job\n * @param[in] pjob - structure handle to job\n *\n * @return Void\n *\n */\nvoid\nclose_sisters_mcast(job *pjob)\n{\n\tint i;\n\n\tfor (i = 0; i < pjob->ji_numnodes; i++) {\n\t\thnodent *np = &pjob->ji_hosts[i];\n\t\tif (np->hn_stream != -1) {\n\t\t\ttpp_close(np->hn_stream);\n\t\t\tnp->hn_stream = -1;\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\tsimple helper function that checks whether pbs_comm has been\n *\tup for a specified duration of time (in seconds)\n *\n * @param[in] maturity_time - age of connection from 
establishment time, in seconds\n *\n * @return - communication up or down code\n * @retval 0 - Communications down or is younger than \"maturity_time\"\n * @retval 1 - Communications up and older than \"maturity_time\"\n *\n */\nint\nis_comm_up(int maturity_time)\n{\n\tif ((mom_net_up == 1) && ((time_now - mom_net_up_time) > maturity_time))\n\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tModify job 'pjob''s exec_vnode, exec_host, exec_host2\n *\tvalues so that only the nodes/vnodes\n *\tbelonging to the parent 'momlist' (pjob->ji_momlist)\n *\tare retained, and that it satisfies only the given 'select_str'.\n *\n * @param[in,out] pjob \t\t- job whose exec_vnode/exec_host/exec_host2\n *\t\t\t\t is being pruned.\n * @param[in]\tselect_str \t- the \"schedselect\"-like string containing the\n *\t\t\t\tspecifications that will filter the job's\n *\t\t\t\texec_vnode value.\n *\t\t\t\tIf this is NULL, then this function\n *\t\t\t\tdoes not prune job's\n *\t\t\t\texec_vnode/exec_host/exec_host2, but rather\n *\t\t\t\tjust return in 'failed_vnodes' the list of\n *\t\t\t\tvnodes assigned to the job that\n *\t\t\t\thave non-functioning parent moms.\n *\n * @param[out]  failed_vnodes - returns in here the vnodes and their resources\n *\t\t\t\tthat have been taken out from the list of\n *\t\t\t\tvnodes assigned with non-functioning parent\n *\t\t\t\tmoms.\n *\n * @return  int\n * @retval\t0\t- for success\n * @retval\t1\t- if any error occurred.\n *\n * @note\n *\tThe first chunk in job's original exec_vnode value is always retained.\n *\tIt is the one assigned by the mother superior mom.\n*/\nint\nprune_exec_vnode(job *pjob, char *select_str, vnl_t **failed_vnodes, vnl_t **good_vnodes, char *err_msg, int err_msg_sz)\n{\n\tchar *execvnode = NULL;\n\tchar *exechost = NULL;\n\tchar *exechost2 = NULL;\n\tchar *schedselect = NULL;\n\tint rc = 1;\n\tchar *new_exec_vnode = NULL;\n\tchar *new_exec_host = NULL;\n\tchar *new_exec_host2 = NULL;\n\tchar *new_schedselect = 
NULL;\n\tint entry = 0;\n\trelnodes_input_t r_input;\n\trelnodes_input_select_t r_input_select;\n\n\tif (pjob == NULL) {\n\t\tlog_err(-1, __func__, \"job parameter is NULL\");\n\t\treturn (1);\n\t}\n\n\tif (((is_jattr_set(pjob, JOB_ATR_exec_vnode)) == 0) ||\n\t    (get_jattr_str(pjob, JOB_ATR_exec_vnode) == NULL)) {\n\t\tlog_err(-1, __func__, \"no execvnode\");\n\t\treturn (1);\n\t}\n\n\texecvnode = get_jattr_str(pjob, JOB_ATR_exec_vnode);\n\tif (execvnode == NULL) {\n\t\tlog_err(-1, __func__, \"execvnode is NULL\");\n\t\treturn (1);\n\t}\n\n\tif (((is_jattr_set(pjob, JOB_ATR_exec_host)) != 0) &&\n\t    (get_jattr_str(pjob, JOB_ATR_exec_host) != NULL)) {\n\t\texechost = get_jattr_str(pjob, JOB_ATR_exec_host);\n\t}\n\n\tif (((is_jattr_set(pjob, JOB_ATR_exec_host2)) != 0) &&\n\t    (get_jattr_str(pjob, JOB_ATR_exec_host2) != NULL)) {\n\t\texechost2 = get_jattr_str(pjob, JOB_ATR_exec_host2);\n\t}\n\n\tif (((is_jattr_set(pjob, JOB_ATR_SchedSelect)) != 0) &&\n\t    (get_jattr_str(pjob, JOB_ATR_SchedSelect) != NULL)) {\n\t\tschedselect = get_jattr_str(pjob, JOB_ATR_SchedSelect);\n\t}\n\n\tif ((exechost == NULL) && (exechost2 == NULL)) {\n\t\tlog_err(-1, __func__, \"no exechost nor exechost2\");\n\t\tgoto prune_exec_vnode_exit;\n\t}\n\n\tif (exechost == NULL)\n\t\texechost = exechost2;\n\n\tif (exechost2 == NULL)\n\t\texechost2 = exechost;\n\n\trelnodes_input_init(&r_input);\n\tr_input.jobid = pjob->ji_qs.ji_jobid;\n\tr_input.execvnode = execvnode;\n\tr_input.exechost = exechost;\n\tr_input.exechost2 = exechost2;\n\tr_input.schedselect = schedselect;\n\tr_input.p_new_exec_vnode = &new_exec_vnode;\n\tr_input.p_new_exec_host[0] = &new_exec_host;\n\tr_input.p_new_exec_host[1] = &new_exec_host2;\n\tr_input.p_new_schedselect = &new_schedselect;\n\n\trelnodes_input_select_init(&r_input_select);\n\tr_input_select.select_str = select_str;\n\tr_input_select.failed_mom_list = &pjob->ji_failed_node_list;\n\tr_input_select.succeeded_mom_list = 
&pjob->ji_node_list;\n\tr_input_select.failed_vnodes = failed_vnodes;\n\tr_input_select.good_vnodes = good_vnodes;\n\trc = pbs_release_nodes_given_select(&r_input, &r_input_select, err_msg, LOG_BUF_SIZE);\n\n\tsnprintf(log_buffer, sizeof(log_buffer), \"MOM: release_nodes_given_select: AFT rc=%d keep_select=%s execvnode=%s exechost=%s exechost2=%s new_exec_vnode=%s new_exec_host=%s new_exec_host2=%s new_schedselect=%s\", rc, \"NULL\", execvnode,\n\t\t exechost ? exechost : \"null\", exechost2 ? exechost2 : \"null\",\n\t\t new_exec_vnode ? new_exec_vnode : \"null\",\n\t\t new_exec_host ? new_exec_host : \"null\",\n\t\t new_exec_host2 ? new_exec_host2 : \"null\", new_schedselect);\n\tlog_event(PBSEVENT_DEBUG4, PBS_EVENTCLASS_SERVER, LOG_ERR, __func__, log_buffer);\n\n\tif ((rc != 0) || (select_str == NULL)) {\n\t\t/* a NULL select_str means to just return in\n\t\t * 'failed_vnodes' those vnodes that are assigned to\n\t\t * job that have been seen as down.\n\t\t */\n\t\tgoto prune_exec_vnode_exit;\n\t}\n\n\tif (new_exec_vnode != NULL) {\n\n\t\tif (strcmp(execvnode, new_exec_vnode) == 0) {\n\t\t\t/* there was no change */\n\t\t\trc = 0;\n\t\t\tgoto prune_exec_vnode_exit;\n\t\t}\n\n\t\tentry = strlen(new_exec_vnode) - 1;\n\t\tif (new_exec_vnode[entry] == '+')\n\t\t\tnew_exec_vnode[entry] = '\\0';\n\n\t\tset_jattr_str_slim(pjob, JOB_ATR_exec_vnode, new_exec_vnode, NULL);\n\n\t\t(void) update_resources_list(pjob, ATTR_l, JOB_ATR_resource, new_exec_vnode, INCR, 0, JOB_ATR_resource_orig);\n\t}\n\n\tif (new_exec_host != NULL) {\n\t\tentry = strlen(new_exec_host) - 1;\n\t\tif (new_exec_host[entry] == '+')\n\t\t\tnew_exec_host[entry] = '\\0';\n\t\tset_jattr_str_slim(pjob, JOB_ATR_exec_host, new_exec_host, NULL);\n\t}\n\n\tif (new_exec_host2 != NULL) {\n\t\tentry = strlen(new_exec_host2) - 1;\n\t\tif (new_exec_host2[entry] == '+')\n\t\t\tnew_exec_host2[entry] = '\\0';\n\t\tset_jattr_str_slim(pjob, JOB_ATR_exec_host2, new_exec_host2, NULL);\n\t}\n\n\tif (new_schedselect != NULL) 
{\n\t\tset_jattr_str_slim(pjob, JOB_ATR_SchedSelect, new_schedselect, NULL);\n\t}\n\n\trc = 0;\nprune_exec_vnode_exit:\n\tfree(new_exec_vnode);\n\tfree(new_exec_host);\n\tfree(new_exec_host2);\n\tfree(new_schedselect);\n\n\treturn (rc);\n}\n\n/**\n * @brief\n *\tSend to sister nodes updates to exec_vnode, exec_host2,\n *\tand schedselect job attributes.\n *\n * @param[in]\tpjob - job to update\n *\n * @return int\n * @retval <num>\t- # of successfully sent requests to sis moms.\n * @retval -1\t\t- for failure.\n*/\nint\nsend_sisters_job_update(job *pjob)\n{\n\tpbs_list_head phead;\n\tint mtfd = -1;\n\tint com;\n\tsvrattrl *psatl;\n\tchar *cookie;\n\tint num = 0;\n\thnodent *np;\n\teventent *ep = NULL;\n\teventent *nep = NULL;\n\tint i;\n\tint ret;\n\n\tif (pjob == NULL) {\n\t\tlog_err(-1, __func__, \"bad pjob parameter\");\n\t\treturn (-1);\n\t}\n\tif (pjob->ji_numnodes <= 1) {\n\t\treturn (0);\n\t}\n\tif (!is_jattr_set(pjob, JOB_ATR_Cookie)) {\n\t\tlog_err(-1, __func__, \"job cookie not set\");\n\t\treturn (-1);\n\t}\n\n\tcookie = get_jattr_str(pjob, JOB_ATR_Cookie);\n\n\tCLEAR_HEAD(phead);\n\n\t(void) job_attr_def[(int) JOB_ATR_exec_vnode].at_encode(\n\t\tget_jattr(pjob, JOB_ATR_exec_vnode),\n\t\t&phead,\n\t\tATTR_execvnode,\n\t\tNULL,\n\t\tATR_ENCODE_MOM,\n\t\tNULL);\n\n\t(void) job_attr_def[(int) JOB_ATR_exec_host2].at_encode(\n\t\tget_jattr(pjob, JOB_ATR_exec_host2),\n\t\t&phead,\n\t\tATTR_exechost2,\n\t\tNULL,\n\t\tATR_ENCODE_MOM,\n\t\tNULL);\n\n\t(void) job_attr_def[(int) JOB_ATR_SchedSelect].at_encode(\n\t\tget_jattr(pjob, JOB_ATR_SchedSelect),\n\t\t&phead,\n\t\tATTR_SchedSelect,\n\t\tNULL,\n\t\tATR_ENCODE_MOM,\n\t\tNULL);\n\n\tattrl_fixlink(&phead);\n\t/* Open streams to the sisterhood.  
*/\n\tif (pbs_conf.pbs_use_mcast == 1) {\n\t\t/* open the tpp mcast channel here */\n\t\tif ((mtfd = tpp_mcast_open()) == -1) {\n\t\t\tsprintf(log_buffer, \"mcast open failed\");\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\treturn (-1);\n\t\t}\n\t}\n\n\tpsatl = (svrattrl *) GET_NEXT(phead);\n\tcom = IM_UPDATE_JOB;\n\tnum = 0;\n\tfor (i = 1; i < pjob->ji_numnodes; i++) {\n\t\tnp = &pjob->ji_hosts[i];\n\n\t\tif (reliable_job_node_find(&pjob->ji_failed_node_list, np->hn_host) != NULL) {\n\t\t\t/* ensure current node (which is managed by a failed mom\n\t\t\t * host) is not flagged as a problem\n\t\t\t */\n\t\t\tif (pjob->ji_nodekill == np->hn_node)\n\t\t\t\tpjob->ji_nodekill = TM_ERROR_NODE;\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"not sending request IM_UPDATE_JOB to failed mom %s\",\n\t\t\t\t np->hn_host ? np->hn_host : \"UNDEFINED\");\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\tcontinue;\n\t\t}\n\t\tif (np->hn_stream == -1)\n\t\t\tnp->hn_stream = tpp_open(np->hn_host, np->hn_port);\n\t\tif (np->hn_stream < 0) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"tpp_open failed on %s:%d\", np->hn_host, np->hn_port);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\tfree_attrlist(&phead);\n\t\t\tif (pbs_conf.pbs_use_mcast == 1)\n\t\t\t\ttpp_mcast_close(mtfd);\n\t\t\treturn (-1);\n\t\t}\n\n\t\tif (nep == NULL) {\n\t\t\tnep = event_alloc(pjob, com, -1, np,\n\t\t\t\t\t  TM_NULL_EVENT, TM_NULL_TASK);\n\t\t\tep = nep;\n\t\t} else {\n\t\t\tep = event_dup(nep, pjob, np);\n\t\t}\n\n\t\tif (ep == NULL) {\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"failed to create event for %s\",\n\t\t\t\tnp->hn_host ? 
np->hn_host : \"node\");\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\ttpp_close(np->hn_stream);\n\t\t\tnp->hn_stream = -1;\n\t\t\tif (pbs_conf.pbs_use_mcast == 1)\n\t\t\t\ttpp_mcast_close(mtfd);\n\t\t\tfree_attrlist(&phead);\n\t\t\treturn (-1);\n\t\t}\n\t\tif (pbs_conf.pbs_use_mcast == 1) {\n\t\t\t/* add each of the tpp streams to the tpp mcast channel */\n\t\t\tif (tpp_mcast_add_strm(mtfd, np->hn_stream, FALSE) == -1) {\n\t\t\t\tsnprintf(log_buffer,\n\t\t\t\t\t sizeof(log_buffer),\n\t\t\t\t\t \"mcast add to %s failed\",\n\t\t\t\t\t np->hn_host ? np->hn_host : \"node\");\n\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\ttpp_close(np->hn_stream);\n\t\t\t\tnp->hn_stream = -1;\n\t\t\t\ttpp_mcast_close(mtfd);\n\t\t\t\tfree_attrlist(&phead);\n\t\t\t\treturn (-1);\n\t\t\t}\n\t\t} else {\n\t\t\t/* send message header */\n\t\t\tret = im_compose(np->hn_stream,\n\t\t\t\t\t pjob->ji_qs.ji_jobid, cookie,\n\t\t\t\t\t com, ep->ee_event, TM_NULL_TASK,\n\t\t\t\t\t IM_OLD_PROTOCOL_VER);\n\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"failed to send job update to %s\",\n\t\t\t\t\t np->hn_host ? 
np->hn_host : \"node\");\n\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\t(void) encode_DIS_svrattrl(np->hn_stream,\n\t\t\t\t\t\t   psatl);\n\t\t\t(void) dis_flush(np->hn_stream);\n\t\t}\n\t\tnum++;\n\t}\n\n\tif (pbs_conf.pbs_use_mcast == 1) {\n\t\tif (num > 0) {\n\t\t\tret = im_compose(mtfd, pjob->ji_qs.ji_jobid,\n\t\t\t\t\t cookie, com, ep->ee_event, TM_NULL_TASK,\n\t\t\t\t\t IM_OLD_PROTOCOL_VER);\n\n\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\tlog_err(errno, __func__, \"compose mcast header failed\");\n\t\t\t\ttpp_mcast_close(mtfd);\n\t\t\t\tfree_attrlist(&phead);\n\t\t\t\treturn (-1);\n\t\t\t}\n\t\t\t(void) encode_DIS_svrattrl(mtfd, psatl);\n\n\t\t\tret = dis_flush(mtfd);\n\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\tlog_err(errno, __func__, \"flush mcast stream failed\");\n\t\t\t\ttpp_mcast_close(mtfd);\n\t\t\t\tfree_attrlist(&phead);\n\t\t\t\treturn (-1);\n\t\t\t}\n\t\t}\n\t\ttpp_mcast_close(mtfd);\n\t}\n\n\tfree_attrlist(&phead);\n\treturn (num);\n}\n\n/**\n *\n * @brief\n *\tReceive job updates to exec_vnode, exec_host2, and\n *\tschedselect from mother superior via 'stream'.\n *\tThis would cause job_nodes() to get called to\n *\tre-populate nodes info (i.e. 
ji_vnods)\n *\n * @return int\n * @retval 0\t- success\n * @retval -1\t- failure\n */\nint\nreceive_job_update(int stream, job *pjob)\n{\n\tpbs_list_head lhead;\n\tint found_exechost = 0;\n\tint found_execvnode = 0;\n\tint found_schedselect = 0;\n\tint index;\n\tint errcode;\n\tint rc;\n\tint i;\n\tsvrattrl *psatl;\n\n\tCLEAR_HEAD(lhead);\n\tif (decode_DIS_svrattrl(stream, &lhead) != DIS_SUCCESS) {\n\t\tlog_err(-1, __func__, \"decode_DIS_svrattrl failed\");\n\t\treturn (-1);\n\t}\n\tfor (psatl = (svrattrl *) GET_NEXT(lhead);\n\t     psatl; psatl = (svrattrl *) GET_NEXT(psatl->al_link)) {\n\n\t\t/* identify the attribute by name */\n\t\tindex = find_attr(job_attr_idx, job_attr_def, psatl->al_name);\n\t\tif (index < 0) { /* didn't recognize the name */\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"did not recognize attribute name %s\", psatl->al_name);\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_NOTICE,\n\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\tfree_attrlist(&lhead);\n\t\t\treturn (-1);\n\t\t}\n\n\t\tif (strcmp(psatl->al_name, ATTR_execvnode) == 0) {\n\t\t\tfound_execvnode = 1;\n\t\t} else if (strcmp(psatl->al_name, ATTR_SchedSelect) == 0) {\n\t\t\tfound_schedselect = 1;\n\t\t} else if (strcmp(psatl->al_name, ATTR_exechost2) == 0) {\n\t\t\tfound_exechost = 1;\n\t\t} else {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"warning: ignoring attribute name %s\", psatl->al_name);\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_NOTICE,\n\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\tcontinue;\n\t\t}\n\n\t\terrcode = set_jattr_generic(pjob, index, psatl->al_value, psatl->al_resc, INTERNAL);\n\t\t/* Unknown resources still get decoded */\n\t\t/* under \"unknown\" resource def */\n\t\tif ((errcode != 0) && (errcode != PBSE_UNKRESC)) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"failed to decode attribute name %s\", psatl->al_name);\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, 
LOG_NOTICE,\n\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\tfree_attrlist(&lhead);\n\t\t\treturn (-1);\n\t\t}\n\n\t\tif (psatl->al_op == DFLT)\n\t\t\t(get_jattr(pjob, index))->at_flags |= ATR_VFLAG_DEFLT;\n\t}\n\tfree_attrlist(&lhead);\n\tfor (i = 0; i < pjob->ji_numvnod; i++) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"before: ji_vnods[%d].vn_node=%d phy node %d host=%s\",\n\t\t\t i, pjob->ji_vnods[i].vn_node,\n\t\t\t pjob->ji_vnods[i].vn_host->hn_node,\n\t\t\t pjob->ji_vnods[i].vn_host->hn_host ? pjob->ji_vnods[i].vn_host->hn_host : \"\");\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t}\n\tif (found_execvnode && found_schedselect && found_exechost) {\n\t\tmom_hook_input_t hook_input;\n\t\tmom_hook_output_t hook_output;\n\t\tchar hook_msg[HOOK_MSG_SIZE + 1];\n\t\tint hook_errcode = 0;\n\t\thook *last_phook;\n\t\tunsigned int hook_fail_action = 0;\n\n\t\tif ((rc = job_nodes(pjob)) != 0) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"failed updating internal nodes data (rc=%d)\", rc);\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_NOTICE,\n\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\treturn (-1);\n\t\t}\n\n\t\tmom_hook_input_init(&hook_input);\n\t\thook_input.pjob = pjob;\n\n\t\tmom_hook_output_init(&hook_output);\n\t\thook_output.reject_errcode = &hook_errcode;\n\t\thook_output.last_phook = &last_phook;\n\t\thook_output.fail_action = &hook_fail_action;\n\t\tif (mom_process_hooks(HOOK_EVENT_EXECJOB_RESIZE,\n\t\t\t\t      PBS_MOM_SERVICE_NAME, mom_host,\n\t\t\t\t      &hook_input, &hook_output,\n\t\t\t\t      hook_msg, sizeof(hook_msg), 1) == 0) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"execjob_resize hook rejected request: %s\", hook_msg);\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_NOTICE, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\treturn (-1);\n\t\t}\n\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  
pjob->ji_qs.ji_jobid, \"updated nodes info\");\n\n\t\tpjob->ji_updated = 1;\n\t\t(void) job_save(pjob);\n\n\t\tfor (i = 0; i < pjob->ji_numvnod; i++) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"after: ji_vnods[%d].vn_node=%d phy node %d \"\n\t\t\t\t \"host=%s\",\n\t\t\t\t i,\n\t\t\t\t pjob->ji_vnods[i].vn_node,\n\t\t\t\t pjob->ji_vnods[i].vn_host->hn_node,\n\t\t\t\t pjob->ji_vnods[i].vn_host->hn_host ? pjob->ji_vnods[i].vn_host->hn_host : \"\");\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB,\n\t\t\t\t  LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t}\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n *\tReturns 1 if mom entry ('mname', 'port')  is listed\n *\tas one of the entries in '+' separated 'exechost' string.\n *\n * @param[in]\texechost - a string of the form:\n *\t\t  exec_host2: <host1>:<port1>/...+<host2>:<port2>/...\n *\t\t  - or -\n *\t\t  exec_host: <host1>/...+<host2>/...\n * @param[in]\tmname - mom  hostname to match.\n * @param[in]\tport - mom  port number to match.\n *\n * @return int\n * @retval 1\t- for a match.\n * @retval 0\t- for a non-match or error.\n */\nstatic int\nin_exechost(char *exechost, char *mname, int port)\n{\n\tchar *ehost = NULL;\n\tchar *str = NULL;\n\tchar *hname = NULL;\n\tint hport;\n\tchar *pc, *pc2 = NULL;\n\tmomvmap_t *pnat = NULL;\n\tint match_short = 0;\n\tchar *save_ptr; /* posn for strtok_r() */\n\n\tif ((exechost == NULL) || (mname == NULL)) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"bad input parameter\");\n\t\treturn 0;\n\t}\n\n\tehost = strdup(exechost);\n\tif (ehost == NULL) {\n\t\tlog_err(errno, __func__, \"strdup failed\");\n\t\treturn 0;\n\t}\n\n\tstr = strtok_r(ehost, \"+\", &save_ptr);\n\twhile (str != NULL) {\n\t\thname = str;\n\t\thport = -1;\n\t\tpc = strchr(str, ':');\n\t\tmatch_short = 0;\n\t\tif (pc != NULL) {\n\t\t\t*pc = '\\0';\n\t\t\tpc++;\n\t\t\tpc2 = strchr(pc, '/');\n\t\t\tif (pc2 != NULL)\n\t\t\t\t*pc2 = '\\0';\n\t\t\thport = atoi(pc);\n\t\t} else { /* no port info...not 
exechost2 format*/\n\t\t\tpc2 = strchr(hname, '/');\n\t\t\tif (pc2 != NULL)\n\t\t\t\t*pc2 = '\\0';\n\t\t\tpnat = find_vmap_entry(hname);\n\t\t\tif (pnat != NULL) {\n\t\t\t\t/* found a map entry */\n\t\t\t\thport = pnat->mvm_mom->mi_port;\n\t\t\t} else {\n\t\t\t\t/* no map entry, use standard port */\n\t\t\t\t/* and match up to short names */\n\t\t\t\thport = pbs_mom_port;\n\t\t\t\tmatch_short = 1;\n\t\t\t}\n\t\t}\n\n\t\tif (match_short) {\n\t\t\tpc = strchr(hname, '.');\n\t\t\tif (pc != NULL)\n\t\t\t\t*pc = '\\0';\n\n\t\t\tpc2 = strchr(mname, '.');\n\t\t\tif (pc2 != NULL)\n\t\t\t\t*pc2 = '\\0';\n\n\t\t\tif ((strcmp(hname, mname) == 0) && (hport == port)) {\n\t\t\t\tif (pc != NULL)\n\t\t\t\t\t*pc = '.';\n\t\t\t\tif (pc2 != NULL)\n\t\t\t\t\t*pc2 = '.';\n\n\t\t\t\tfree(ehost);\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t\tif (pc != NULL)\n\t\t\t\t*pc = '.';\n\t\t\tif (pc2 != NULL)\n\t\t\t\t*pc2 = '.';\n\n\t\t} else {\n\t\t\tif ((strcmp(hname, mname) == 0) && (hport == port)) {\n\t\t\t\tfree(ehost);\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t}\n\t\tstr = strtok_r(NULL, \"+\", &save_ptr);\n\t}\n\n\tfree(ehost);\n\treturn 0;\n}\n/**\n * @brief\n *\tSend a message (command = com) to all the other MOMs in\n *\t'pjob'.  Set ji_nodekill if there is a problem\n *\twith a node.  Call the function command_func if it is\n *\tnot NULL.  
It can be used to send extra information.\n *\n * @param[in] pjob - structure handle to job\n * @param[in] com  - command for task\n * @param[in] command_func - function\n * @param[in] exclude_exec_host - if sister host match one of these,\n *\t\t\tthen ignore sending mcast message to that host.\n *\n * @return int\n * @retval num - number of nodes without problem\n * @retval 0   - Failure\n *\n */\nint\nsend_sisters_mcast_inner(job *pjob, int com, pbs_jobndstm_t command_func,\n\t\t\t char *exclude_exec_host)\n{\n\tint i, num, ret;\n\teventent *ep, *nep = NULL;\n\ttm_event_t event = TM_NULL_EVENT;\n\tchar *cookie;\n\tint mtfd;\n\n\tDBPRT((\"send_sisters_mcast: command %d\\n\", com))\n\tif (!(is_jattr_set(pjob, JOB_ATR_Cookie)))\n\t\treturn 0;\n\tcookie = get_jattr_str(pjob, JOB_ATR_Cookie);\n\tnum = 0;\n\n\t/* open the tpp mcast channel here */\n\tif ((mtfd = tpp_mcast_open()) == -1)\n\t\treturn 0;\n\n\tfor (i = 0; i < pjob->ji_numnodes; i++) {\n\t\thnodent *np = &pjob->ji_hosts[i];\n\n\t\tif (np->hn_node == pjob->ji_nodeid) /* this is me */\n\t\t\tcontinue;\n\n\t\tif (pjob->ji_nodekill == TM_ERROR_NODE)\n\t\t\tpjob->ji_nodekill = np->hn_node;\n\n\t\tif (np->hn_sister != SISTER_OKAY) /* sis is gone? */\n\t\t\tcontinue;\n\n\t\t/*\n\t\t ** 'np' holds the RM port number in np->hn_port\n\t\t ** while exclude_exec_host stores the MOM port\n\t\t ** number. 
So we need to compare against\n\t\t ** np->hn_port-1, for PBS mom expects RM port =\n                 ** MOM port + 1.\n\t\t */\n\t\tif ((exclude_exec_host != NULL) &&\n\t\t    in_exechost(exclude_exec_host, np->hn_host,\n\t\t\t\tnp->hn_port - 1)) {\n\t\t\t/*\n\t\t\t ** ensure current node (which is managed by an\n\t\t\t ** excluded mom host) is not flagged as a problem\n\t\t\t */\n\t\t\tif (pjob->ji_nodekill == np->hn_node)\n\t\t\t\tpjob->ji_nodekill = TM_ERROR_NODE;\n\t\t\tcontinue;\n\t\t}\n\n\t\tif (reliable_job_node_find(&pjob->ji_failed_node_list, np->hn_host) != NULL) {\n\t\t\t/* ensure current node (which is managed by a failed mom\n\t\t\t * host) is not flagged as a problem\n\t\t\t */\n\t\t\tif (pjob->ji_nodekill == np->hn_node)\n\t\t\t\tpjob->ji_nodekill = TM_ERROR_NODE;\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"not sending request %d to failed mom %s\",\n\t\t\t\t com, np->hn_host ? np->hn_host : \"UNDEFINED\");\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\tcontinue;\n\t\t}\n\n\t\tif (np->hn_stream == -1)\n\t\t\tnp->hn_stream = tpp_open(np->hn_host, np->hn_port);\n\t\tnp->hn_sister = SISTER_EOF;\n\n\t\tif (np->hn_stream == -1)\n\t\t\tcontinue;\n\n\t\t/* add each of the tpp streams to the tpp mcast channel */\n\t\tif (tpp_mcast_add_strm(mtfd, np->hn_stream, FALSE) == -1) {\n\t\t\ttpp_close(np->hn_stream);\n\t\t\tnp->hn_stream = -1;\n\t\t\tcontinue;\n\t\t}\n\n\t\tif (com == IM_DELETE_JOB)\n\t\t\tevent = TM_NULL_EVENT;\n\t\telse {\n\t\t\tif (nep == NULL) {\n\t\t\t\tnep = event_alloc(pjob, com, -1, np,\n\t\t\t\t\t\t  TM_NULL_EVENT, TM_NULL_TASK);\n\t\t\t\tep = nep;\n\t\t\t} else {\n\t\t\t\tep = event_dup(nep, pjob, np);\n\t\t\t}\n\t\t\tevent = ep->ee_event;\n\t\t}\n\n\t\tif (pjob->ji_nodekill == np->hn_node)\n\t\t\tpjob->ji_nodekill = TM_ERROR_NODE;\n\t\tnp->hn_sister = SISTER_OKAY;\n\t\tnum++;\n\t}\n\n\tif (num > 0) {\n\t\tret = im_compose(mtfd, pjob->ji_qs.ji_jobid,\n\t\t\t\t 
cookie, com, event, TM_NULL_TASK, IM_OLD_PROTOCOL_VER);\n\t\tif (ret != DIS_SUCCESS) {\n\t\t\tclose_sisters_mcast(pjob);\n\t\t\ttpp_mcast_close(mtfd);\n\t\t\treturn 0;\n\t\t}\n\n\t\t/*\n\t\t ** Here we send any extra information that needs\n\t\t ** to follow the standard set.\n\t\t ** There was a np being passed, have to think what to do about that\n\t\t */\n\t\tif (command_func != NULL) {\n\t\t\tret = command_func(pjob, NULL, mtfd);\n\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\tclose_sisters_mcast(pjob);\n\t\t\t\ttpp_mcast_close(mtfd);\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t}\n\t\tret = dis_flush(mtfd);\n\t\tif (ret != DIS_SUCCESS) {\n\t\t\tclose_sisters_mcast(pjob);\n\t\t\ttpp_mcast_close(mtfd);\n\t\t\treturn 0;\n\t\t}\n\t}\n\n\ttpp_mcast_close(mtfd);\n\treturn num;\n}\n\n/**\n * @brief\n *\tSend a message (command = com) to all the other MOMs not\n *\tin 'exclude_exec_host' list attached to 'pjob'.\n *\tSet ji_nodekill if there is a problem\n *\twith a node.  Call the function command_func if it is\n *\tnot NULL.  
It can be used to send extra information.\n *\n * @param[in] pjob - structure handle to job\n * @param[in] com  - command for task\n * @param[in] command_func - function\n * @param[in] exclude_exec_host - if not NULL, do not\n *\t\t\t\tsend command 'com' to MOM hostnames\n *\t\t\t\tappearing in this list, which has the\n *\t\t\t\tform:\n *\t\t\t\t <host1>:<port1>/...+<host2>:<port2>/...\n *\t\t\t\t - or -\n *\t\t\t\t<host1>/...+<host2>/...\n *\n * @return int\n * @retval num - number of command requests sent out.\n * @retval 0   - Failure\n *\n * @note\n *\tSet pjob->ji_nodekill if there is a problem with a node.\n *\n */\nint\nsend_sisters_inner(job *pjob, int com, pbs_jobndstm_t command_func,\n\t\t   char *exclude_exec_host)\n{\n\tint i, num, ret;\n\teventent *ep, *nep = NULL;\n\ttm_event_t event;\n\tchar *cookie;\n\n\tif (pbs_conf.pbs_use_mcast == 1)\n\t\treturn send_sisters_mcast_inner(pjob, com, command_func,\n\t\t\t\t\t\texclude_exec_host);\n\n\tDBPRT((\"send_sisters: command %d\\n\", com))\n\tif (!(is_jattr_set(pjob, JOB_ATR_Cookie)))\n\t\treturn 0;\n\n\tcookie = get_jattr_str(pjob, JOB_ATR_Cookie);\n\tnum = 0;\n\tfor (i = 0; i < pjob->ji_numnodes; i++) {\n\t\thnodent *np = &pjob->ji_hosts[i];\n\n\t\tif (np->hn_node == pjob->ji_nodeid) /* this is me */\n\t\t\tcontinue;\n\n\t\tif (pjob->ji_nodekill == TM_ERROR_NODE)\n\t\t\tpjob->ji_nodekill = np->hn_node;\n\n\t\tif (np->hn_sister != SISTER_OKAY) /* sis is gone? */\n\t\t\tcontinue;\n\t\t/* 'np' holds the RM port number in np->hn_port */\n\t\t/* while exclude_exec_host stores the MOM port */\n\t\t/* number. 
So we need to compare against */\n\t\t/* np->hn_port-1 */\n\t\tif ((exclude_exec_host != NULL) &&\n\t\t    in_exechost(exclude_exec_host, np->hn_host,\n\t\t\t\tnp->hn_port - 1)) {\n\t\t\t/* ensure current node (which is managed by an\n\t\t\t * excluded mom host) is not flagged as a problem\n\t\t\t */\n\t\t\tif (pjob->ji_nodekill == np->hn_node) {\n\t\t\t\tpjob->ji_nodekill = TM_ERROR_NODE;\n\t\t\t}\n\t\t\tcontinue;\n\t\t}\n\n\t\tif (reliable_job_node_find(&pjob->ji_failed_node_list, np->hn_host) != NULL) {\n\t\t\t/* ensure current node (which is managed by a failed mom\n\t\t\t * host) is not flagged as a problem\n\t\t\t */\n\t\t\tif (pjob->ji_nodekill == np->hn_node)\n\t\t\t\tpjob->ji_nodekill = TM_ERROR_NODE;\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer) - 1,\n\t\t\t\t \"not sending request %d to failed mom %s\",\n\t\t\t\t com, np->hn_host ? np->hn_host : \"UNDEFINED\");\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\tcontinue;\n\t\t}\n\n\t\tif (np->hn_stream == -1)\n\t\t\tnp->hn_stream = tpp_open(np->hn_host, np->hn_port);\n\n\t\tif (np->hn_stream == -1)\n\t\t\tcontinue;\n\n\t\tnp->hn_sister = SISTER_EOF;\n\n\t\tif (com == IM_DELETE_JOB)\n\t\t\tevent = TM_NULL_EVENT;\n\t\telse {\n\t\t\tif (nep == NULL) {\n\t\t\t\tnep = event_alloc(pjob, com, -1, np,\n\t\t\t\t\t\t  TM_NULL_EVENT, TM_NULL_TASK);\n\t\t\t\tep = nep;\n\t\t\t} else {\n\t\t\t\tep = event_dup(nep, pjob, np);\n\t\t\t}\n\t\t\tif (ep == NULL)\n\t\t\t\tcontinue;\n\t\t\tevent = ep->ee_event;\n\t\t}\n\n\t\tret = im_compose(np->hn_stream, pjob->ji_qs.ji_jobid,\n\t\t\t\t cookie, com, event, TM_NULL_TASK, IM_OLD_PROTOCOL_VER);\n\t\tif (ret != DIS_SUCCESS)\n\t\t\tcontinue;\n\t\t/*\n\t\t ** Here we send any extra information that needs\n\t\t ** to follow the standard set.\n\t\t */\n\t\tif (command_func != NULL) {\n\t\t\tret = command_func(pjob, np, np->hn_stream);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tcontinue;\n\t\t}\n\t\tret = 
dis_flush(np->hn_stream);\n\t\tif (ret == -1)\n\t\t\tcontinue;\n\n\t\tif (pjob->ji_nodekill == np->hn_node)\n\t\t\tpjob->ji_nodekill = TM_ERROR_NODE;\n\t\tnp->hn_sister = SISTER_OKAY;\n\t\tnum++;\n\t}\n\treturn num;\n}\n\n/**\n * @brief\n *\tThis is the wrapper function to 'send_sisters_inner()'.\n */\nint\nsend_sisters(job *pjob, int com, pbs_jobndstm_t command_func)\n{\n\treturn (send_sisters_inner(pjob, com, command_func, NULL));\n}\n\n#define SEND_ERR(err)                                                                                     \\\n\tif (reply) {                                                                                      \\\n\t\t(void) im_compose(stream, jobid, cookie, IM_ERROR, event, fromtask, IM_OLD_PROTOCOL_VER); \\\n\t\t(void) diswsi(stream, err);                                                               \\\n\t}\n\n#define SEND_ERR2(err, errmsg)                                                                             \\\n\tif (reply) {                                                                                       \\\n\t\t(void) im_compose(stream, jobid, cookie, IM_ERROR2, event, fromtask, IM_OLD_PROTOCOL_VER); \\\n\t\t(void) diswsi(stream, err);                                                                \\\n\t\t(void) diswst(stream, errmsg);                                                             \\\n\t}\n\n/**\n * @brief\n * \tCheck to see which node a stream is coming from.  Return a NULL\n * \tif it is not assigned to this job.  
Return a nodeent pointer if\n * \tit is.\n *\n * @param[in] pjob - structure handle to job\n * @param[in] stream - file descriptor for task\n * @param[in] vnodeid - node id\n *\n * @return structure handle to hnodent - SUCCESS\n * @retval                     NULL    - FAILURE\n *\n */\nhnodent *\nfind_node(job *pjob, int stream, tm_node_id vnodeid)\n{\n\tint i;\n\tvmpiprocs *vp;\n\thnodent *hp;\n\tstruct sockaddr_in *node_addr;\n\tstruct sockaddr_in *stream_addr;\n\n\tfor (vp = pjob->ji_vnods, i = 0; i < pjob->ji_numvnod; vp++, i++) {\n\t\tif (vp->vn_node == vnodeid)\n\t\t\tbreak;\n\t}\n\tif (i == pjob->ji_numvnod) {\n\t\tsprintf(log_buffer, \"node %d not found\", vnodeid);\n\t\tlog_joberr(-1, __func__, log_buffer, pjob->ji_qs.ji_jobid);\n\t\treturn NULL;\n\t}\n\n\thp = vp->vn_host; /* host for virtual node */\n\tnode_addr = tpp_getaddr(hp->hn_stream);\n\tstream_addr = tpp_getaddr(stream);\n\n\tif (stream_addr == NULL) { /* caller didn't have a stream */\n\t\t/*\n\t\t ** If node is not me and no stream open, open one\n\t\t */\n\t\tif (pjob->ji_nodeid != hp->hn_node && node_addr == NULL)\n\t\t\thp->hn_stream = tpp_open(hp->hn_host, hp->hn_port);\n\t\treturn hp;\n\t}\n\n\t/*\n\t **\tNo stream recorded in the node info, save this one.\n\t */\n\tif (node_addr == NULL) {\n\t\thp->hn_stream = stream;\n\t\thp->hn_eof_ts = 0;\n\t\treturn hp;\n\t}\n\n\t/*\n\t **\tAt this point, both the input stream and the recorded\n\t **\tstream for the node are good.  If they are the same\n\t **\tindex, we are done.\n\t */\n\tif (hp->hn_stream == stream) {\n\t\thp->hn_eof_ts = 0;\n\t\treturn hp;\n\t}\n\n\t/*\n\t **\tThe node struct has a different stream number saved\n\t **\tthan the one passed in (supposedly from the same node).\n\t **\tCheck to see if the stream recorded in the node struct\n\t **\tand the one passed in have the same IP address.  If\n\t **\tthey do (only a possibly different port number),\n\t **\twe are fine.  
Otherwise, a mixup has happened.\n\t **\n\t **\tTODO: check possible multiple IP addresses for\n\t **\ta single host.\n\t */\n\tif (memcmp(&stream_addr->sin_addr, &node_addr->sin_addr,\n\t\t   sizeof(node_addr->sin_addr)) != 0) {\n\t\tsprintf(log_buffer,\n\t\t\t\"stream id %d does not match %d to node %d\",\n\t\t\tstream, hp->hn_stream, vnodeid);\n\t\tlog_err(-1, __func__, log_buffer);\n\n\t\tsprintf(log_buffer, \"%s: stream addr %s\", __func__,\n\t\t\tnetaddr(stream_addr));\n\t\tlog_err(-1, __func__, log_buffer);\n\n\t\tsprintf(log_buffer, \"%s: node addr %s\", __func__,\n\t\t\tnetaddr(node_addr));\n\t\tlog_err(-1, __func__, log_buffer);\n\t\treturn NULL;\n\t}\n\n\thp->hn_eof_ts = 0;\n\treturn hp;\n}\n\n/**\n *\n * @brief\n *\t Given a socket address 'ap', return the\n *\t hostname mapped to the internet address given in 'ap'.\n *\n * @param[in]\tap\t- a socket address.\n *\n * @return  char *\n *\n * @retval <string>\t\t- the mapped hostname.\n * @retval \"\" (empty string)\t- if none found or error encountered.\n *\n * @note\n *\tThe returned hostname points to a fixed memory area that must not\n *\tbe freed, and gets overwritten on the next call to\n *\taddr_to_hostname().\n *\n */\nchar *\naddr_to_hostname(struct sockaddr_in *ap)\n{\n\tstruct hostent *hp;\n\tstatic char *ret_hostname = NULL;\n\tstatic int hostname_sz = 0;\n\tchar *tmp_str;\n\tint new_sz;\n\n\tif (ap == NULL)\n\t\treturn (\"\");\n\n\thp = gethostbyaddr((void *) &ap->sin_addr, sizeof(struct in_addr), AF_INET);\n\tif (hp == NULL) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"%s: h_errno=%d\",\n\t\t\t inet_ntoa(ap->sin_addr), h_errno);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\treturn (\"\");\n\t}\n\tif (hp->h_name == NULL)\n\t\treturn (\"\");\n\n\tif ((ret_hostname != NULL) && (strcmp(ret_hostname, hp->h_name) == 0))\n\t\treturn (ret_hostname);\n\n\tnew_sz = strlen(hp->h_name) + 1;\n\tif (new_sz > hostname_sz) {\n\t\ttmp_str = realloc(ret_hostname, new_sz);\n\t\tif (tmp_str == NULL) 
{\n\t\t\tlog_err(errno, __func__, \"error on realloc\");\n\t\t\treturn (\"\");\n\t\t}\n\t\thostname_sz = new_sz;\n\t\tret_hostname = tmp_str;\n\t}\n\tpbs_strncpy(ret_hostname, hp->h_name, hostname_sz);\n\treturn (ret_hostname);\n}\n/**\n * @brief\n * \tAn error has been encountered starting a job.\n * \tFormat a message to all the sisterhood to get rid of their copy\n * \tof the job.  There should be no processes running at this point.\n *\n * @param pjob\t\tjob encountering error\n * @param code\t\terror code\n * @param nodename\tname of host that returned the error\n * @param cmd\t\tstring giving a verb for what failed\n *\n * @return Void\n *\n */\nvoid\njob_start_error(job *pjob, int code, char *nodename, char *cmd)\n{\n\tvoid exec_bail(job * pjob, int code, char *txt);\n\n\tif ((pjob == NULL) || (nodename == NULL) || (cmd == NULL))\n\t\treturn;\n\n\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t \"%s %d from node %s could not %s successfully\",\n\t\t __func__, code, nodename, cmd);\n\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\tif (do_tolerate_node_failures(pjob)) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"ignoring error from %s as job is tolerant of node failures\", nodename);\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t/* a filled-in log_buffer could be mistaken for an error message */\n\t\tlog_buffer[0] = '\\0';\n\n\t\treliable_job_node_add(&pjob->ji_failed_node_list, nodename);\n\t\treliable_job_node_delete(&pjob->ji_node_list, nodename);\n\n#ifndef WIN32\n\t\tif (pjob->ji_parent2child_moms_status_pipe != -1) {\n\t\t\tsize_t r_size;\n\t\t\tr_size = strlen(nodename) + 1;\n\t\t\tif (write_pipe_data(pjob->ji_parent2child_moms_status_pipe, &r_size, sizeof(size_t)) == 0)\n\t\t\t\t(void) write_pipe_data(pjob->ji_parent2child_moms_status_pipe, nodename, r_size);\n\t\t\telse\n\t\t\t\tlog_err(errno, __func__, \"failed to 
write\");\n\t\t}\n#endif\n\t\treturn;\n\t}\n\tif (get_job_substate(pjob) >= JOB_SUBSTATE_EXITING)\n\t\treturn;\n\n\tif (code == PBSE_HOOK_REJECT_DELETEJOB)\n\t\texec_bail(pjob, JOB_EXEC_FAILHOOK_DELETE, NULL);\n\telse if (code == PBSE_HOOK_REJECT_RERUNJOB)\n\t\texec_bail(pjob, JOB_EXEC_FAILHOOK_RERUN, NULL);\n\telse if (code == PBSE_SISCOMM)\n\t\texec_bail(pjob, JOB_EXEC_JOINJOB, NULL);\n\telse\n\t\texec_bail(pjob, JOB_EXEC_RETRY, NULL);\n}\n\n/**\n * @brief\n * \tchk_del_job - check that all the sisters have replied to an\n *\tIM_DELETE_JOB_REPLY request or are dead.  When all are done,\n *\treply to the server and purge the job structure.\n *\n * @param[in] pjob - structure handle to job\n * @param[in] errcode - error code\n *\n * @return Void\n *\n */\nstatic void\nchk_del_job(job *pjob, int errcode)\n{\n\tint bad = 0;\n\tint i;\n\n\tDBPRT((\"%s for job %s\\n\", __func__, pjob->ji_qs.ji_jobid))\n\n\tfor (i = 1; i < pjob->ji_numnodes; i++) {\n\t\tif (reliable_job_node_find(&pjob->ji_failed_node_list, pjob->ji_hosts[i].hn_host) != NULL) {\n\t\t\tDBPRT((\"%s: %d IGNORED for node %s\\n\", __func__,\n\t\t\t       i, pjob->ji_hosts[i].hn_host))\n\t\t} else if (pjob->ji_hosts[i].hn_sister == SISTER_OKAY) {\n\t\t\t/* still need to wait for her to answer */\n\t\t\tDBPRT((\"%s: %d OK  for node %s\\n\", __func__,\n\t\t\t       i, pjob->ji_hosts[i].hn_host))\n\t\t\tbreak;\n\t\t} else if (pjob->ji_hosts[i].hn_sister != SISTER_KILLDONE) {\n\t\t\t/* either dead or replied with error */\n\t\t\t++bad;\n\t\t\tDBPRT((\"%s: %d EOF for node %s\\n\", __func__,\n\t\t\t       i, pjob->ji_hosts[i].hn_host))\n\t\t} else {\n\t\t\tDBPRT((\"%s: %d DONE node %s\\n\", __func__,\n\t\t\t       i, pjob->ji_hosts[i].hn_host))\n\t\t}\n\t}\n\n\tif (i == pjob->ji_numnodes) {\n\t\t/* all sisters are dead or have replied, can    */\n\t\t/* now reply to the Server's delete job request */\n\t\tif (pjob->ji_preq) {\n\t\t\tif ((bad != 0) || (errcode != 0)) {\n\t\t\t\tif (pjob->ji_hook_running_bg_on == 
BG_NONE) {\n\t\t\t\t\treq_reject(PBSE_SISCOMM, 0, pjob->ji_preq);\n\t\t\t\t\tpjob->ji_preq = NULL;\n\t\t\t\t} else\n\t\t\t\t\tpjob->ji_hook_running_bg_on = BG_PBSE_SISCOMM;\n\t\t\t} else {\n\t\t\t\treply_ack(pjob->ji_preq);\n\t\t\t\tpjob->ji_preq = NULL;\n\t\t\t}\n\t\t}\n\t\tDBPRT((\"%s: all sisters done for job %s\\n\",\n\t\t       __func__, pjob->ji_qs.ji_jobid))\n\t\t/*\n\t\t * jobs \"could\" be purged now, but doing so may impact loop\n\t\t * processing at a higher level, see the chain:\n\t\t * im_eof() -> node_bailout() -> here\n\t\t *\n\t\t * So move job to list of jobs to be purged in Mom's main loop.\n\t\t * Note, we use the ji_jobque link, not the ji_alljobs.\n\t\t * If the job is already on the list of jobs to be purged,\n\t\t * do nothing.\n\t\t */\n\t\tif (is_linked(&mom_deadjobs, &pjob->ji_jobque)) {\n\t\t\tDBPRT((\"%s: job %s ALREADY LINKED to deadjobs\\n\",\n\t\t\t       __func__, pjob->ji_qs.ji_jobid))\n\t\t} else {\n\t\t\tif (is_linked(&mom_polljobs, &pjob->ji_jobque))\n\t\t\t\tdelete_link(&pjob->ji_jobque);\n\t\t\tif (pjob->ji_hook_running_bg_on == BG_NONE)\n\t\t\t\tappend_link(&mom_deadjobs, &pjob->ji_jobque, pjob);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\tDeal with events hooked to a node where a stream has gone\n *\tsouth or we are going away.\n *\n * @param[in] pjob - structure handle to job\n * @param[in] np   - structure handle to hnodent\n *\n * @return Void\n *\n */\nvoid\nnode_bailout(job *pjob, hnodent *np)\n{\n\tpbs_task *ptask;\n\teventent *ep;\n\teventent *nxtep;\n\tint i;\n\tint keep_event = 0;\n\tchar *name;\n\tpbs_list_head phead;\n\n\tep = (eventent *) GET_NEXT(np->hn_events);\n\twhile (ep) {\n\t\tswitch (ep->ee_command) {\n\n\t\t\tcase IM_JOIN_JOB:\n\t\t\t\t/*\n\t\t\t\t ** I'm MS and a node has failed to respond to the\n\t\t\t\t ** call.  Maybe in the future the user can specify\n\t\t\t\t ** that the job can start with a range of nodes so\n\t\t\t\t ** one (or more) missing can be tolerated.  
Not\n\t\t\t\t ** for now.\n\t\t\t\t */\n\n\t\t\t\tDBPRT((\"%s: JOIN_JOB %s jjretry %d old stream %d\\n\", __func__, pjob->ji_qs.ji_jobid, ep->ee_retry, np->hn_stream))\n\t\t\t\tif (ep->ee_retry == 0) {\n\t\t\t\t\t/* first failure, try to reopen and resend */\n\t\t\t\t\tnp->hn_stream = tpp_open(np->hn_host,\n\t\t\t\t\t\t\t\t np->hn_port);\n\t\t\t\t\tif (np->hn_stream < 0) {\n\t\t\t\t\t\t/* reopen failed - fatal */\n\t\t\t\t\t\tjob_start_error(pjob, PBSE_SISCOMM,\n\t\t\t\t\t\t\t\tnp->hn_host,\n\t\t\t\t\t\t\t\t\"JOIN_JOB retry\");\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\t/* clear error indicator set in im_eof */\n\t\t\t\t\tnp->hn_sister = SISTER_OKAY;\n\t\t\t\t\t/* encode job attributes to send to sister */\n\t\t\t\t\tCLEAR_HEAD(phead);\n\t\t\t\t\tfor (i = 0; i < (int) JOB_ATR_LAST; i++) {\n\t\t\t\t\t\t(void) (job_attr_def + i)->at_encode(get_jattr(pjob, i), &phead, (job_attr_def + i)->at_name, NULL, ATR_ENCODE_MOM, NULL);\n\t\t\t\t\t}\n\n\t\t\t\t\t++ep->ee_retry; /* retry count */\n\n\t\t\t\t\t/* resend JOIN_JOB to this sister */\n\t\t\t\t\ti = np - pjob->ji_hosts;\n\t\t\t\t\tDBPRT((\"%s: JOIN_JOB %s host %s port %d jjretry %d i %d new stream %d\\n\", __func__, pjob->ji_qs.ji_jobid, np->hn_host, np->hn_port, ep->ee_retry, i, np->hn_stream))\n\t\t\t\t\tsend_join_job_restart(IM_JOIN_JOB, ep, i,\n\t\t\t\t\t\t\t      pjob, &phead);\n\n\t\t\t\t\tfree_attrlist(&phead);\n\t\t\t\t\t/*\n\t\t\t\t\t * note that this event is to be retained in\n\t\t\t\t\t * in the list since the associated request\n\t\t\t\t\t * is being retried\n\t\t\t\t\t */\n\t\t\t\t\tkeep_event = 1;\n\t\t\t\t} else {\n\t\t\t\t\t/* failed on a retry - fatal */\n\t\t\t\t\tjob_start_error(pjob, PBSE_SISCOMM,\n\t\t\t\t\t\t\tnp->hn_host, \"JOIN_JOB\");\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase IM_SETUP_JOB:\n\t\t\t\t/*\n\t\t\t\t ** I'm MS and a node has failed during setup.\n\t\t\t\t */\n\t\t\t\tDBPRT((\"%s: SETUP_JOB %s\\n\", __func__, pjob->ji_qs.ji_jobid))\n\t\t\t\tjob_start_error(pjob, PBSE_SISCOMM, 
np->hn_host,\n\t\t\t\t\t\t\"SETUP_JOB\");\n\t\t\t\tbreak;\n\n\t\t\tcase IM_SUSPEND:\n\t\t\tcase IM_RESUME:\n\t\t\t\t/*\n\t\t\t\t ** A MOM has failed to suspend or resume a job.\n\t\t\t\t ** I'm mother superior.\n\t\t\t\t */\n\t\t\t\tsprintf(log_buffer, \"%s returned EOF\",\n\t\t\t\t\t(ep->ee_command == IM_SUSPEND) ? \"SUSPEND\" : \"RESUME\");\n\t\t\t\tlog_joberr(-1, __func__, log_buffer, pjob->ji_qs.ji_jobid);\n\t\t\t\tif (pjob->ji_mompost != NULL)\n\t\t\t\t\tpjob->ji_mompost(pjob, PBSE_SISCOMM);\n\t\t\t\tbreak;\n\n\t\t\tcase IM_RESTART:\n\t\t\tcase IM_CHECKPOINT:\n\t\t\tcase IM_CHECKPOINT_ABORT:\n\t\t\t\t/*\n\t\t\t\t ** A MOM has failed to do a checkpoint.\n\t\t\t\t ** I'm mother superior.\n\t\t\t\t */\n\t\t\t\tname = (ep->ee_command == IM_RESTART) ? \"RESTART\" : (ep->ee_command == IM_CHECKPOINT) ? \"CHECKPOINT\"\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t      : \"CHECKPOINT_ABORT\";\n\t\t\t\tsprintf(log_buffer, \"%s returned EOF\", name);\n\t\t\t\tlog_joberr(-1, __func__, log_buffer, pjob->ji_qs.ji_jobid);\n\t\t\t\tif (pjob->ji_mompost != NULL)\n\t\t\t\t\tpjob->ji_mompost(pjob, PBSE_SISCOMM);\n\t\t\t\tbreak;\n\n\t\t\tcase IM_ABORT_JOB:\n\t\t\tcase IM_KILL_JOB:\n\t\t\t\t/*\n\t\t\t\t ** The job is already in the process of being killed\n\t\t\t\t ** but somebody has dropped off the face of the\n\t\t\t\t ** earth.  
Just check to see if everybody has\n\t\t\t\t ** been heard from in some form or another and\n\t\t\t\t ** set JOB_SUBSTATE_EXITING if so.\n\t\t\t\t */\n\t\t\t\tDBPRT((\"%s: KILL/ABORT JOB %s\\n\",\n\t\t\t\t       __func__, pjob->ji_qs.ji_jobid))\n\t\t\t\tfor (i = 1; i < pjob->ji_numnodes; i++) {\n\t\t\t\t\tif ((pjob->ji_hosts[i].hn_sister == SISTER_OKAY) && (reliable_job_node_find(&pjob->ji_failed_node_list, pjob->ji_hosts[i].hn_host) == NULL))\n\t\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tif (i == pjob->ji_numnodes) { /* all dead */\n\t\t\t\t\tif (check_job_substate(pjob, JOB_SUBSTATE_KILLSIS)) {\n\t\t\t\t\t\tset_job_state(pjob, JOB_STATE_LTR_EXITING);\n\t\t\t\t\t\tset_job_substate(pjob, JOB_SUBSTATE_EXITING);\n\t\t\t\t\t\texiting_tasks = 1;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase IM_DELETE_JOB_REPLY:\n\t\t\t\t/*\n\t\t\t\t ** The job is being deleted and a sister just went bye.\n\t\t\t\t ** See if everyone else has replied or died.\n\t\t\t\t */\n\t\t\t\tDBPRT((\"%s: DELETE_REPLY JOB eof %s\\n\",\n\t\t\t\t       __func__, pjob->ji_qs.ji_jobid))\n\t\t\t\tchk_del_job(pjob, 0);\n\t\t\t\tbreak;\n\n\t\t\tcase IM_SPAWN_TASK:\n\t\t\tcase IM_GET_TASKS:\n\t\t\tcase IM_SIGNAL_TASK:\n\t\t\tcase IM_OBIT_TASK:\n\t\t\tcase IM_GET_INFO:\n\t\t\tcase IM_GET_RESC:\n\t\t\tcase IM_CRED:\n\t\t\t\t/*\n\t\t\t\t ** A user attempt failed, inform process.\n\t\t\t\t */\n\t\t\t\tDBPRT((\"%s: REQUEST %d %s\\n\", __func__,\n\t\t\t\t       ep->ee_command, pjob->ji_qs.ji_jobid))\n\n\t\t\t\tptask = task_check(pjob, ep->ee_fd, ep->ee_taskid);\n\t\t\t\tif (ptask == NULL)\n\t\t\t\t\tbreak;\n\t\t\t\t(void) tm_reply(ep->ee_fd, ptask->ti_protover,\n\t\t\t\t\t\tTM_ERROR, ep->ee_client);\n\t\t\t\t(void) diswsi(ep->ee_fd, TM_ESYSTEM);\n\t\t\t\t(void) dis_flush(ep->ee_fd);\n\t\t\t\tbreak;\n\n\t\t\tcase IM_POLL_JOB:\n\t\t\t\t/*\n\t\t\t\t ** I must be Mother Superior for the job and\n\t\t\t\t ** this is an error reply to a poll request.\n\t\t\t\t */\n\t\t\t\tif (do_tolerate_node_failures(pjob)) 
{\n\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"ignoring POLL error from failed mom %s as job is tolerant of node failures\",\n\t\t\t\t\t\t np->hn_host ? np->hn_host : \"\");\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"POLL failed from node %d\", np->hn_node);\n\t\t\t\tlog_joberr(-1, __func__, log_buffer, pjob->ji_qs.ji_jobid);\n\t\t\t\tpjob->ji_nodekill = np->hn_node;\n\t\t\t\tbreak;\n\n#ifdef PMIX\n\t\t\tcase IM_PMIX:\n\t\t\t\t/* I am MS and a node has failed a PMIX request. */\n\t\t\t\tDBPRT((\"%s: IM_PMIX %s\\n\", __func__, pjob->ji_qs.ji_jobid))\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"sister node %s failed PMIx operation\",\n\t\t\t\t\t np->hn_host ? np->hn_host : \"\");\n\t\t\t\texec_bail(pjob, JOB_EXEC_RETRY, log_buffer);\n\t\t\t\tbreak;\n#endif /* PMIX */\n\n\t\t\tcase IM_UPDATE_JOB:\n\t\t\t\t/*\n\t\t\t\t ** I'm MS and a node has failed during job update.\n\t\t\t\t */\n\t\t\t\tDBPRT((\"%s: UPDATE_JOB %s\\n\", __func__, pjob->ji_qs.ji_jobid))\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"sister node %s failed to update job\",\n\t\t\t\t\t np->hn_host ? 
np->hn_host : \"\");\n\n#ifndef WIN32\n\t\t\t\tclose_update_pipes(pjob);\n#endif\n\t\t\t\texec_bail(pjob, JOB_EXEC_RETRY, log_buffer);\n\t\t\t\tbreak;\n\n\t\t\tdefault:\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"unknown command %d saved in event %d\",\n\t\t\t\t\tep->ee_command, ep->ee_event);\n\t\t\t\tif (pjob && pjob->ji_qs.ji_jobid[0]) {\n\t\t\t\t\tlog_joberr(-1, __func__, log_buffer,\n\t\t\t\t\t\t   pjob->ji_qs.ji_jobid);\n\t\t\t\t} else\n\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\tbreak;\n\t\t}\n\n\t\t/* get the next event to check; done here as we may keep */\n\t\t/* this event in the list (keep_event==1) or delete it   */\n\t\tnxtep = (eventent *) GET_NEXT(ep->ee_next);\n\t\tif (keep_event == 0) {\n\t\t\tdelete_link(&ep->ee_next);\n\t\t\tfree(ep);\n\t\t} else {\n\t\t\tkeep_event = 0; /* reset for next event */\n\t\t}\n\t\tep = nxtep; /* go on to examine the next event */\n\t}\n}\n\n/**\n * @brief\n *\tTie off all loose ends for a job that is going away.  In particular,\n * \trelease any special resources.  
The job should already be terminated\n * \tbefore getting here.\n *\n * @param[in] pjob - structure handle to job\n *\n * @see job_clean_extra\n * @see del_job_hw\n *\n * @return Void\n *\n */\nvoid\nterm_job(job *pjob)\n{\n\thnodent *np;\n\tint num;\n\n\tfor (num = 0, np = pjob->ji_hosts;\n\t     num < pjob->ji_numnodes;\n\t     num++, np++) {\n\t\tif (np->hn_stream >= 0) {\n\t\t\tnp->hn_stream = -1;\n\t\t\tnp->hn_sister = SISTER_EOF;\n\t\t}\n\t\tnode_bailout(pjob, np);\n\t}\n\n\tif (job_clean_extra != NULL) {\n\t\t(void) job_clean_extra(pjob);\n\t}\n\n\tdel_job_hw(pjob); /* release special hardware related resources */\n}\n\n/**\n * @brief\n *\tHandle a stream that needs to be closed.\n *\tMay be either from another Mom, or the server.\n *\n * @param[in] stream - file descriptor\n * @param[in] ret    - indicates value for error message to be logged\n *\n * @return Void\n *\n */\nvoid\nim_eof(int stream, int ret)\n{\n\tint num;\n\tjob *pjob;\n\thnodent *np;\n\tstruct sockaddr_in *addr;\n\n\taddr = tpp_getaddr(stream);\n\tsprintf(log_buffer, \"%s from addr %s on stream %d\",\n\t\tdis_emsg[ret], netaddr(addr), stream);\n\tlog_err(-1, __func__, log_buffer);\n\ttpp_close(stream);\n\n\tif (stream == server_stream) {\n\t\tsprintf(log_buffer, \"Server closed connection.\");\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t  __func__, log_buffer);\n\t\tserver_stream = -1;\n\t}\n\n\t/*\n\t ** Search through all the jobs looking for this stream.\n\t ** We want to find if any events are being waited for\n\t ** from the \"dead\" stream and do something with them.\n\t */\n\tfor (pjob = (job *) GET_NEXT(svr_alljobs);\n\t     pjob != NULL;\n\t     pjob = (job *) GET_NEXT(pjob->ji_alljobs)) {\n\t\tfor (num = 0, np = pjob->ji_hosts;\n\t\t     num < pjob->ji_numnodes;\n\t\t     num++, np++) {\n\t\t\tif (np->hn_stream != stream)\n\t\t\t\tcontinue;\n\n\t\t\tnp->hn_stream = -1;\n\t\t\tif (np->hn_eof_ts == 0)\n\t\t\t\tnp->hn_eof_ts = 
time(0);\n\t\t\tpjob->ji_msconnected = 0;\n\n\t\t\t/*\n\t\t\t ** In case the connection to pbs_comm is down/recently established, do not kill a job that is actually running.\n\t\t\t ** If this is the MS, we check job substate == JOB_SUBSTATE_RUNNING to see if the job is running.\n\t\t\t ** If this is a sister, we check if the substate is JOB_SUBSTATE_PRERUN or JOB_SUBSTATE_RUNNING.\n\t\t\t ** We include PRERUN in case of jobs at sisters since at sister moms the job substate stays at PRERUN\n\t\t\t ** till a tm task is initiated on it by the MS.\n\t\t\t ** We also check for substate JOB_SUBSTATE_SUSPEND to retain suspended jobs.\n\t\t\t **\n\t\t\t */\n\t\t\tif ((((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0) && (check_job_substate(pjob, JOB_SUBSTATE_PRERUN))) ||\n\t\t\t    (check_job_substate(pjob, JOB_SUBSTATE_RUNNING)) || (check_job_substate(pjob, JOB_SUBSTATE_SUSPEND))) {\n\t\t\t\tif (do_tolerate_node_failures(pjob)) {\n\t\t\t\t\tsprintf(log_buffer, \"ignoring lost communication with %s as job is tolerant of node failures\", np->hn_host);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\t\treliable_job_node_add(&pjob->ji_failed_node_list, np->hn_host);\n\t\t\t\t\treliable_job_node_delete(&pjob->ji_node_list, np->hn_host);\n#ifndef WIN32\n\t\t\t\t\tif (pjob->ji_parent2child_moms_status_pipe != -1) {\n\t\t\t\t\t\tsize_t r_size;\n\t\t\t\t\t\tr_size = strlen(np->hn_host) + 1;\n\t\t\t\t\t\tif (write_pipe_data(pjob->ji_parent2child_moms_status_pipe, &r_size, sizeof(size_t)) == 0)\n\t\t\t\t\t\t\t(void) write_pipe_data(pjob->ji_parent2child_moms_status_pipe, np->hn_host, r_size);\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\tlog_err(errno, __func__, \"failed to write\");\n\t\t\t\t\t}\n#endif\n\t\t\t\t\tcontinue;\n\t\t\t\t} else if ((time_now - np->hn_eof_ts) <= max_poll_downtime_val) {\n\t\t\t\t\tsprintf(log_buffer, \"lost communication with %s, not killing job yet\", np->hn_host);\n\t\t\t\t\tlog_joberr(-1, __func__, log_buffer, 
pjob->ji_qs.ji_jobid);\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\n\t\t\t\tif (!is_comm_up(COMM_MATURITY_TIME)) {\n\t\t\t\t\tsprintf(log_buffer, \"lost connection to %s due to pbs_comm down/recently established, not killing job\", np->hn_host);\n\t\t\t\t\tlog_joberr(-1, __func__, log_buffer, pjob->ji_qs.ji_jobid);\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\n\t\t\t\tsprintf(log_buffer, \"lost communication with %s for > %d secs, killing job now\", np->hn_host, max_poll_downtime_val);\n\t\t\t\tlog_joberr(-1, __func__, log_buffer, pjob->ji_qs.ji_jobid);\n\t\t\t}\n\n\t\t\tnp->hn_sister = SISTER_EOF;\n\t\t\tnode_bailout(pjob, np);\n\n\t\t\t/*\n\t\t\t ** If dead stream is num = 0, I'm a regular node\n\t\t\t ** and my connection to Mother Superior is gone...\n\t\t\t ** kill job.\n\t\t\t */\n\t\t\tif (num != 0)\n\t\t\t\tcontinue;\n\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"lost connection to MS on %s\", np->hn_host);\n\t\t\tlog_joberr(-1, __func__, log_buffer, pjob->ji_qs.ji_jobid);\n\t\t\tkill_job(pjob, SIGKILL);\n\t\t\tset_job_substate(pjob, JOB_SUBSTATE_EXITING);\n\t\t\texiting_tasks = 1;\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\tCheck to be sure this is a connection from Mother Superior on\n * \ta good port.\n *\tCheck to make sure I am not Mother Superior (talking to myself).\n * \tSet the stream in ji_nodes[0] if needed.\n *\n * @param[in] stream - file descriptor\n * @param[in] pjob   - structure handle to job\n *\n * @return error code\n * @retval TRUE  error\n * @retval FALSE if okay\n *\n */\nint\ncheck_ms(int stream, job *pjob)\n{\n\tstruct sockaddr_in *addr;\n\thnodent *np;\n\n\taddr = tpp_getaddr(stream);\n\n\tif (pjob == NULL)\n\t\treturn FALSE;\n\n\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) {\n\t\tlog_joberr(-1, __func__, \"Mother Superior talking to herself\",\n\t\t\t   pjob->ji_qs.ji_jobid);\n\t\ttpp_eom(stream);\n\t\treturn TRUE;\n\t}\n\n\t/*\n\t ** This should be mother superior calling.\n\t ** We always have a stream open to her at node 0.\n\t */\n\tnp = &pjob->ji_hosts[0]; 
/* MS entry */\n\tif (stream != np->hn_stream) {\n\t\tif (np->hn_stream != -1) {\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"MS reset from %d to %d (%s)\",\n\t\t\t\tnp->hn_stream, stream, netaddr(addr));\n\t\t\tlog_joberr(-1, __func__, log_buffer, pjob->ji_qs.ji_jobid);\n\t\t}\n\t\tnp->hn_stream = stream;\n\t}\n\tnp->hn_eof_ts = 0;\n\tpjob->ji_msconnected = 1;\n\treturn FALSE;\n}\n\n/**\n * @brief\n *\treturn the amount of the named resource used by the job\n *\n * @param[in] pjob - structure handle to job\n * @param[in] name - character pointer holding resource name\n * @param[in] func - function that extracts the value from the matching resource entry\n *\n * @return u_long\n * @retval value of the resource as returned by 'func'\n * @retval 0 if the resource is not set for the job\n *\n */\nu_long\nresc_used(job *pjob, char *name, u_long (*func)(resource *))\n{\n\tresource_def *rd;\n\tresource *pres;\n\tu_long val = 0L;\n\n\tif (!is_jattr_set(pjob, JOB_ATR_resc_used))\n\t\treturn 0;\n\n\trd = find_resc_def(svr_resc_def, name);\n\tif (rd == NULL)\n\t\treturn 0;\n\n\tpres = find_resc_entry(get_jattr(pjob, JOB_ATR_resc_used), rd);\n\tif (pres == NULL)\n\t\treturn 0;\n\n\tval = func(pres);\n\tDBPRT((\"resc_used: %s %lu\\n\", name, val))\n\treturn val;\n}\n\n/**\n * @brief\n *\tFind named info for a task.\n *\n * @param[in] ptask - structure handle to pbs_task\n * @param[in] name  - name of the info entry to find\n *\n * @return structure handle to infoent\n * @retval NULL if no entry with that name exists\n *\n */\ninfoent *\ntask_findinfo(pbs_task *ptask, char *name)\n{\n\tinfoent *ip;\n\n\tfor (ip = (infoent *) GET_NEXT(ptask->ti_info);\n\t     ip;\n\t     ip = (infoent *) GET_NEXT(ip->ie_next)) {\n\t\tif (strcmp(ip->ie_name, name) == 0)\n\t\t\tbreak;\n\t}\n\treturn ip;\n}\n\n/**\n * @brief\n *\tSave named info with a task.\n *\n * @param[in] ptask - structure handle to pbs_task\n * @param[in] name  - name under which the info is saved\n * @param[in] info  - counted info data to save\n * @param[in] len   - length of info in bytes\n *\n * @return Void\n *\n */\nvoid\ntask_saveinfo(pbs_task *ptask, char *name, void *info, int len)\n{\n\tinfoent *ip;\n\n\tif ((ip = task_findinfo(ptask, name)) == NULL) { /* new name */\n\t\tip = (infoent *) 
malloc(sizeof(infoent));\n\t\tassert(ip);\n\t\tCLEAR_LINK(ip->ie_next);\n\t\tappend_link(&ptask->ti_info, &ip->ie_next, ip);\n\t\tip->ie_name = name;\n\t} else /* replace name with new info */\n\t\tfree(ip->ie_info);\n\n\tip->ie_info = info;\n\tip->ie_len = len;\n}\n\n/**\n * @brief\n *\tGenerate a resource string for a job.\n *\n * @param pjob - structure handle to job\n *\n * @return string\n * @retval res_string\n *\n */\nchar *\nresc_string(job *pjob)\n{\n\tattribute *at;\n\tattribute_def *ad;\n\tsvrattrl *pal;\n\tpbs_list_head lhead;\n\tint len, used, tot;\n\tchar *res_str, *ch;\n\tchar *getuname();\n\textern int resc_access_perm;\n\n\tch = getuname();\n\tlen = strlen(ch);\n\ttot = len * 2;\n\tused = 0;\n\tres_str = (char *) malloc(tot);\n\tif (res_str == NULL)\n\t\treturn NULL;\n\tstrcpy(res_str, ch);\n\tused += len;\n\tres_str[used++] = ':';\n\n\tat = get_jattr(pjob, JOB_ATR_resource);\n\tif (at->at_type != ATR_TYPE_RESC) {\n\t\tres_str[used] = '\\0';\n\t\treturn res_str;\n\t}\n\tad = &job_attr_def[(int) JOB_ATR_resource];\n\tresc_access_perm = ATR_DFLAG_USRD;\n\tCLEAR_HEAD(lhead);\n\t(void) ad->at_encode(at,\n\t\t\t     &lhead, ad->at_name,\n\t\t\t     NULL, ATR_ENCODE_CLIENT, NULL);\n\tattrl_fixlink(&lhead);\n\n\tfor (pal = (svrattrl *) GET_NEXT(lhead);\n\t     pal;\n\t     pal = (svrattrl *) GET_NEXT(pal->al_link)) {\n\t\twhile (used + pal->al_rescln + pal->al_valln > tot) {\n\t\t\tchar *tmp_res_str;\n\n\t\t\ttot *= 2;\n\t\t\ttmp_res_str = realloc(res_str, tot);\n\t\t\tif (tmp_res_str == NULL) {\n\t\t\t\tfree(res_str);\n\t\t\t\tfree_attrlist(&lhead);\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tres_str = tmp_res_str;\n\t\t}\n\t\tstrcpy(&res_str[used], pal->al_resc);\n\t\tused += (pal->al_rescln - 1);\n\t\tres_str[used++] = '=';\n\t\tstrcpy(&res_str[used], pal->al_value);\n\t\tused += (pal->al_valln - 1);\n\t\tres_str[used++] = ',';\n\t}\n\tfree_attrlist(&lhead);\n\tres_str[--used] = '\\0';\n\treturn res_str;\n}\n\n/**\n * @brief\n *\tProcess a cred received in a 
JOIN_JOB.\n *\n * @param[in] pjob - structure handle to job\n * @param[in] info - credential data\n * @param[in] len - length of the credential data\n * @param[in] tcp - indication whether tcp or not\n * @param[in] con - inter mom stream\n *\n * @return  error code\n * @retval -1     error\n * @retval  0     Success\n *\n */\nint\nmom_create_cred(job *pjob, char *info, size_t len, int tcp, int con)\n{\n\tint ret = -1;\n\tint type = pjob->ji_extended.ji_ext.ji_credtype;\n\n\tDBPRT((\"%s: entered\\n\", __func__))\n\tswitch (type) {\n\n\t\tcase PBS_CREDTYPE_NONE:\n\t\t\tret = 0;\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\tret = write_cred(pjob, info, len);\n\t\t\tbreak;\n\t}\n\n\treturn ret;\n}\n\nstatic char bail_format[] = \"dis read failed: %s\";\n\n#define BAIL(message)                                      \\\n\tif (ret != DIS_SUCCESS) {                          \\\n\t\tsprintf(log_buffer, bail_format, message); \\\n\t\tgoto err;                                  \\\n\t}\n\n/**\n * @brief\n *\tSend resources_used values to the MS via\n *\t'stream' descriptor.\n *\n * @param[in] stream - descriptor pathway to MS.\n * @param[in] pjob - pointer to owning job structure\n *\n * @return  error code\n * @retval -1     error\n * @retval  0     Success\n *\n */\nint\nsend_resc_used_to_ms(int stream, job *pjob)\n{\n\textern int resc_access_perm;\n\tattribute *at;\n\tattribute_def *ad;\n\tsvrattrl *pal;\n\tsvrattrl *nxpal;\n\tpbs_list_head lhead;\n\tpbs_list_head send_head;\n\tsvrattrl *psatl;\n\tint ret;\n\n\tif (pjob == NULL || stream == -1)\n\t\treturn (-1);\n\n\tat = get_jattr(pjob, JOB_ATR_resc_used);\n\tif (at->at_type != ATR_TYPE_RESC)\n\t\treturn (-1);\n\tad = &job_attr_def[(int) JOB_ATR_resc_used];\n\tresc_access_perm = ATR_DFLAG_MGRD;\n\n\tmemset(&lhead, 0, sizeof(lhead));\n\tCLEAR_HEAD(lhead);\n\n\t(void) ad->at_encode(at, &lhead, ad->at_name, NULL, ATR_ENCODE_CLIENT, NULL);\n\tmemset(&send_head, 0, sizeof(send_head));\n\tCLEAR_HEAD(send_head);\n\n\tpal = (svrattrl *) 
GET_NEXT(lhead);\n\twhile (pal != NULL) {\n\t\tnxpal = (struct svrattrl *) GET_NEXT(pal->al_link);\n\n\t\t/* no need to track the resources automatically sent to MS */\n\t\t/* like 'cput', 'mem', and 'cpupercent', but only those */\n\t\t/* resources that are set in a mom hook */\n\t\tif ((pal->al_flags & ATR_VFLAG_HOOK) != 0 &&\n\t\t    strcmp(pal->al_resc, \"cput\") != 0 &&\n\t\t    strcmp(pal->al_resc, \"mem\") != 0 &&\n\t\t    strcmp(pal->al_resc, \"cpupercent\") != 0) {\n\t\t\tif (add_to_svrattrl_list(&send_head, pal->al_name, pal->al_resc,\n\t\t\t\t\t\t pal->al_value, pal->al_op, NULL) == -1) {\n\t\t\t\tfree_attrlist(&send_head);\n\t\t\t\tfree_attrlist(&lhead);\n\t\t\t\treturn (-1);\n\t\t\t}\n\t\t}\n\t\tpal = nxpal;\n\t}\n\tfree_attrlist(&lhead);\n\n\tpsatl = (svrattrl *) GET_NEXT(send_head);\n\tif (psatl == NULL) {\n\t\tfree_attrlist(&send_head);\n\t\treturn (-1);\n\t}\n\n\tret = encode_DIS_svrattrl(stream, psatl);\n\tfree_attrlist(&send_head);\n\tif (ret != DIS_SUCCESS)\n\t\treturn (-1);\n\treturn (0);\n}\n\n/**\n * @brief\n *\tReceive resources_used values for job 'jobid'\n *\tfrom descriptor 'stream', with values to be saved in\n *\tthe internal nodes resources table indexed by 'nodeidx'.\n *\n * @param[in] stream - descriptor pathway\n * @param[in] pjob - pointer to owning job structure\n * @param[in] nodeidx - node index to the job's internal resources table\n *\t\t\twhere received values will be saved.\n *\n * @return  error code\n * @retval -1     error\n * @retval  0     Success\n *\n */\nint\nrecv_resc_used_from_sister(int stream, job *pjob, int nodeidx)\n{\n\textern int resc_access_perm;\n\tattribute_def *pdef;\n\tpbs_list_head lhead;\n\tsvrattrl *psatl;\n\tint errcode;\n\n\tif (pjob == NULL || stream == -1 || nodeidx < 0)\n\t\treturn (-1);\n\n\tpdef = &job_attr_def[(int) JOB_ATR_resc_used];\n\n\tCLEAR_HEAD(lhead);\n\tif (decode_DIS_svrattrl(stream, &lhead) != DIS_SUCCESS) {\n\t\tsprintf(log_buffer, 
\"decode_DIS_svrattrl failed\");\n\t\treturn (-1);\n\t}\n\tif (is_attr_set(&pjob->ji_resources[nodeidx].nr_used) != 0)\n\t\tpdef->at_free(&pjob->ji_resources[nodeidx].nr_used);\n\t/* decode attributes from request into job structure */\n\tclear_attr(&pjob->ji_resources[nodeidx].nr_used, &job_attr_def[JOB_ATR_resc_used]);\n\n\tresc_access_perm = READ_WRITE;\n\tpsatl = (svrattrl *) GET_NEXT(lhead);\n\tfor (; psatl; psatl = (svrattrl *) GET_NEXT(psatl->al_link)) {\n\n\t\tif ((psatl->al_name == NULL) || (psatl->al_resc == NULL)) {\n\t\t\tfree_attrlist(&lhead);\n\t\t\treturn (-1);\n\t\t}\n\n\t\tif (strcmp(psatl->al_name, ATTR_used) != 0) {\n\t\t\tfree_attrlist(&lhead);\n\t\t\treturn (-1);\n\t\t}\n\n\t\terrcode = set_attr_generic(&pjob->ji_resources[nodeidx].nr_used, pdef, psatl->al_value, psatl->al_resc, INTERNAL);\n\t\t/* Unknown resources still get decoded */\n\t\t/* under \"unknown\" resource def */\n\t\tif ((errcode != 0) && (errcode != PBSE_UNKRESC)) {\n\t\t\tfree_attrlist(&lhead);\n\t\t\treturn (-1);\n\t\t}\n\n\t\tif (psatl->al_op == DFLT)\n\t\t\tpjob->ji_resources[nodeidx].nr_used.at_flags |= ATR_VFLAG_DEFLT;\n\t}\n\n\tfree_attrlist(&lhead);\n\treturn (0);\n}\n\n/**\n * @brief\n *\tGeneral purpose function for executing actions that are done\n *\tbefore calling finish_exec() on a job.\n *\n * @param[in]       pjob\t\tjob being operated on\n * @param[in]       do_job_setup_send\tset to 1 if job_setup_send() should be done\n *\n * @return enum pre_finish_results_t\n * @retval PRE_FINISH_SUCCESS\t\t\tall actions executed successfully\n * @retval PRE_FINISH_FAIL\t\t\tat least one of the actions has failed\n * @retval PRE_FINISH_SUCCESS_JOB_SETUP_SEND\tall actions up to job_setup_send()\n *\t\t\t\t\t\tsucceeded\n * @retval PRE_FINISH_FAIL_JOB_SETUP_SEND\taction to do job_setup_send() failed\n * @retval PRE_FINISH_FAIL_JOIN_EXTRA\t\taction to do job_join_extra() failed\n *\n */\npre_finish_results_t\npre_finish_exec(job *pjob, int do_job_setup_send)\n{\n\tif (pjob == 
NULL)\n\t\treturn PRE_FINISH_FAIL;\n\n\t/*\n\t * If job_join_extra exists, call it to read\n\t * any extra info included with the JOIN reply.\n\t * This function can return an error if there\n\t * is no extra information or deal with it\n\t * more gracefully and return SUCCESS.\n\t */\n\tif (job_join_extra != NULL) {\n\t\tif (job_join_extra(pjob, &pjob->ji_hosts[0]) != 0)\n\t\t\treturn PRE_FINISH_FAIL_JOIN_EXTRA;\n\t}\n\n\t/*\n\t * If there is a job_setup_send function,\n\t * send a SETUP_JOB message to each node.\n\t * The call to finish_exec will happen\n\t * when we get a reply from all the nodes.\n\t */\n\tif (do_job_setup_send && (job_setup_send != NULL)) {\n\t\tif (send_sisters(pjob, IM_SETUP_JOB, job_setup_send) != pjob->ji_numnodes - 1)\n\t\t\treturn PRE_FINISH_FAIL_JOB_SETUP_SEND;\n\t\treturn PRE_FINISH_SUCCESS_JOB_SETUP_SEND;\n\t}\n\treturn PRE_FINISH_SUCCESS;\n}\n\n// clang-format off\n\n/**\n * @brief\n *\tInput is coming from another MOM over DIS on a TPP stream.\n *\tRead the stream to get an inter-MOM request.\n *\n *\trequest (\n *\t\tjobid\tstring\n *\t\tcookie\tstring\n *\t\tcommand\tint\n *\t\tevent\tint\n *\t\ttask\tint\n *\t)\n *\n *\tFormat the reply and write it back.\n *\n *\n * @param[in] stream\tinter-MOM TPP stream\n * @param[in] version\tinter-MOM protocol version; IM_PROTOCOL_VER and IM_OLD_PROTOCOL_VER are supported\n * @return void\n *\n */\nvoid\nim_request(int stream, int version)\n{\n\tint\t\t\tcommand = 0;\n\tint\t\t\tevent_com = -1, ret;\n\tchar\t\t\t*jobid = NULL;\n\tchar\t\t\t*cookie = NULL;\n\tchar\t\t\t*oreo;\n\tchar\t\t\tbasename[MAXPATHLEN + 1] = {0};\n\tchar\t\t\tnamebuf[MAXPATHLEN+1];\n\tjob\t\t\t*pjob;\n\tpbs_task\t\t*ptask;\n\thnodent\t\t\t*np;\n\teventent\t\t*ep = NULL;\n\tinfoent\t\t\t*ip;\n\tstruct\tsockaddr_in\t*addr;\n\tu_long\t\t\tipaddr;\n\tint\t\t\ti, errcode;\n\tint\t\t\tnodeidx = 0;\n\tint\t\t\tresc_idx = 
0;\n\tint\t\t\treply;\n\tint\t\t\texitval;\n\ttm_node_id\t\tpvnodeid;\n\ttm_node_id\t\ttvnodeid;\n\ttm_task_id\t\tfromtask, event_task = 0, taskid;\n\tint\t\t\thnodenum, index;\n\tint\t\t\tnum;\n\tint\t\t\tsig;\n\tchar\t\t\t**argv, **envp, *cp;\n\tchar\t\t\t*name;\n\tvoid\t\t\t*info = NULL;\n\tsize_t\t\t\tlen;\n\ttm_event_t\t\tevent, event_client = 0;\n\tint\t\t\tefd = -1;\n\tpbs_list_head\t\tlhead;\n\tsvrattrl\t\t*psatl;\n\textern  unsigned long\t QA_testing;\n\textern\tint\t\tresc_access_perm;\n\tint\t\t\tlocal_supres(job *pjob, int which,\n\t\tstruct batch_request *preq);\n\tchar\t\t\t*errmsg = NULL;\n\tchar\t\t\thook_msg[HOOK_MSG_SIZE+1];\n\tint\t\t\targc = 0;\n\tmom_hook_input_t\thook_input;\n\tmom_hook_output_t\thook_output;\n\tmom_hook_input_t\t*hook_input_ptr;\n\tmom_hook_output_t\t*hook_output_ptr;\n\tint\t\t\thook_errcode = 0;\n\tint\t\t\thook_rc = 0;\n\thook\t\t\t*last_phook = NULL;\n\tunsigned int\t\thook_fail_action = 0;\n\tchar\t\t\t*nodehost = NULL;\n\tchar\t\t\ttimebuf[TIMEBUF_SIZE] = {0};\n  \tchar\t\t\t*delete_job_msg = NULL;\n\n\tDBPRT((\"%s: stream %d version %d\\n\", __func__, stream, version))\n\tif ((version != IM_PROTOCOL_VER) && (version != IM_OLD_PROTOCOL_VER)) {\n\t\tsprintf(log_buffer, \"protocol version %d unknown\", version);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\ttpp_close(stream);\n\t\treturn;\n\t}\n\n\t/* check that machine is known */\n\taddr = tpp_getaddr(stream);\n\tif (addr == NULL) {\n\t\tsprintf(log_buffer, \"Sender unknown\");\n\t\tlog_err(-1, __func__, log_buffer);\n\t\ttpp_close(stream);\n\t\treturn;\n\t}\n\n\tipaddr = ntohl(addr->sin_addr.s_addr);\n\tDBPRT((\"connect from %s\\n\", netaddr(addr)))\n\tif (!addrfind(ipaddr)) {\n\t\tsprintf(log_buffer, \"bad connect from %s\",\n\t\t\tnetaddr(addr));\n\t\tlog_err(-1, __func__, log_buffer);\n\t\tim_eof(stream, 0);\n\t\treturn;\n\t}\n\n\tjobid = disrst(stream, &ret);\n\tBAIL(\"jobid\")\n\tcookie = disrst(stream, &ret);\n\tBAIL(\"cookie\")\n\tcommand = disrsi(stream, 
&ret);\n\tBAIL(\"command\")\n\tevent = disrsi(stream, &ret);\n\tBAIL(\"event\")\n\tfromtask = disrui(stream, &ret);\n\tBAIL(\"fromtask\")\n\tswitch (command) {\n\n\t\tcase IM_JOIN_RECOV_JOB:\n\t\t\treply = 1;\n\n\t\t\thnodenum = disrsi(stream, &ret);\n\t\t\tBAIL(\"JOINJOB nodenum\")\n\n\t\t\tnp = NULL;\n\t\t\t/* job should already exist */\n\t\t\tpjob = find_job(jobid);\n\t\t\tif( pjob == NULL ) {\n\t\t\t\tSEND_ERR(PBSE_SYSTEM)\n\t\t\t\tgoto done;\n\t\t\t}\n\t\t\tpjob->ji_stdout = disrsi(stream, &ret);\n\t\t\tBAIL(\"JOINJOB stdout\")\n\t\t\tpjob->ji_stderr = disrsi(stream, &ret);\n\t\t\tBAIL(\"JOINJOB stderr\")\n\t\t\tpjob->ji_qs.ji_un.ji_momt.ji_exuid = pjob->ji_grpcache->gc_uid;\n\t\t\tpjob->ji_qs.ji_un.ji_momt.ji_exgid = pjob->ji_grpcache->gc_gid;\n\t\t\tpjob->ji_msconnected = 1;\n\t\t\tgoto done;\n\t\tcase IM_JOIN_JOB:\n\t\t\t/*\n\t\t\t ** Sender is mom superior sending a job structure to me.\n\t\t\t ** I am going to become a member of a job.\n\t\t\t **\n\t\t\t ** auxiliary info (\n\t\t\t **\tlocal host id\tint;\n\t\t\t **\tnumber of nodes\tint;\n\t\t\t **\tstdout port\tint;\n\t\t\t **\tstderr port\tint;\n\t\t\t **\tcred type\tint;\n\t\t\t **\tcredential\tstring; <if cred type != 0>\n\t\t\t **\tjobattrs\tattrl;\n\t\t\t ** )\n\t\t\t */\n\t\t\treply = 1;\n\t\t\tif (check_ms(stream, NULL))\n\t\t\t\tgoto fini;\n\n\t\t\thnodenum = disrsi(stream, &ret);\n\t\t\tBAIL(\"JOINJOB nodenum\")\n\n\t\t\tnp = NULL;\n\t\t\t/* does job already exist? 
*/\n\t\t\tpjob = find_job(jobid);\n\t\t\tif (pjob) {\t/* job is here */\n\t\t\t\tkill_job(pjob, SIGKILL);\n\t\t\t\tmom_deljob(pjob);\n\t\t\t}\n\t\t\tif ((pjob = job_alloc()) == NULL) {\n\t\t\t\tSEND_ERR(PBSE_SYSTEM)\n\t\t\t\tgoto done;\n\t\t\t}\n\n\t\t\tpjob->ji_stdout = disrsi(stream, &ret);\n\t\t\tBAIL(\"JOINJOB stdout\")\n\t\t\tpjob->ji_stderr = disrsi(stream, &ret);\n\t\t\tBAIL(\"JOINJOB stderr\")\n\t\t\tpjob->ji_extended.ji_ext.ji_credtype = disrsi(stream, &ret);\n\t\t\tBAIL(\"JOINJOB credtype\")\n\t\t\tif (pjob->ji_extended.ji_ext.ji_credtype != PBS_CREDTYPE_NONE) {\n\t\t\t\tinfo = disrcs(stream, &len, &ret);\n\t\t\t\tBAIL(\"JOINJOB credential\")\n\t\t\t}\n\t\t\tpjob->ji_msconnected = 1;\n\n\t\t\tpjob->ji_numnodes = hnodenum;\n\t\t\tCLEAR_HEAD(lhead);\n\t\t\tif (decode_DIS_svrattrl(stream, &lhead) != DIS_SUCCESS) {\n\t\t\t\tsprintf(log_buffer, \"decode_DIS_svrattrl failed\");\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t\t/*\n\t\t\t ** Get the hashname from the attribute.\n\t\t\t */\n\t\t\tpsatl = (svrattrl *)GET_NEXT(lhead);\n\t\t\twhile (psatl) {\n\t\t\t\tif (!strcmp(psatl->al_name, ATTR_hashname)) {\n\t\t\t\t\tpbs_strncpy(basename, psatl->al_value, sizeof(basename));\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tpsatl = (svrattrl *)GET_NEXT(psatl->al_link);\n\t\t\t}\n\t\t\tpbs_strncpy(pjob->ji_qs.ji_jobid, jobid, sizeof(pjob->ji_qs.ji_jobid));\n\t\t\tif (strlen(basename) <= PBS_JOBBASE)\n\t\t\t\tstrcpy(pjob->ji_qs.ji_fileprefix, basename);\n\t\t\telse\n\t\t\t\t*pjob->ji_qs.ji_fileprefix = '\\0';\n\t\t\tpjob->ji_nodeid = -1;\n\t\t\tpjob->ji_qs.ji_svrflags = 0;\n\t\t\tpjob->ji_qs.ji_un_type = JOB_UNION_TYPE_MOM;\n\n\t\t\t/* decode attributes from request into job structure */\n\t\t\terrcode = 0;\n\t\t\tresc_access_perm = READ_WRITE;\n\t\t\tfor (psatl = (svrattrl *)GET_NEXT(lhead);\n\t\t\t\tpsatl;\n\t\t\t\tpsatl = (svrattrl *)GET_NEXT(psatl->al_link)) {\n\n\t\t\t\t/* identify the attribute by name */\n\t\t\t\tindex = find_attr(job_attr_idx, job_attr_def, 
psatl->al_name);\n\t\t\t\tif (index < 0) {\t/* didn't recognize the name */\n\t\t\t\t\terrcode = PBSE_NOATTR;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\n\t\t\t\terrcode = set_jattr_generic(pjob, index, psatl->al_value, psatl->al_resc, INTERNAL);\n\t\t\t\t/* Unknown resources still get decoded */\n\t\t\t\t/* under \"unknown\" resource def */\n\t\t\t\tif ((errcode != 0) && (errcode != PBSE_UNKRESC))\n\t\t\t\t\tbreak;\n\n\t\t\t\tif (psatl->al_op == DFLT)\n\t\t\t\t\t(get_jattr(pjob, index))->at_flags |= ATR_VFLAG_DEFLT;\n\t\t\t}\n\t\t\tfree_attrlist(&lhead);\n\t\t\tif (errcode != 0) {\n\t\t\t\t(void)job_purge_mom(pjob);\n\t\t\t\tSEND_ERR(errcode)\n\t\t\t\tgoto done;\n\t\t\t}\n\n\t\t\tpjob->ji_nodeid = TM_ERROR_NODE;\n\t\t\tif ((errcode = job_nodes_inner(pjob, &np)) != 0) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"job_nodes_inner failed with error %d\", errcode);\n\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB,\n\t\t\t\t\tLOG_NOTICE, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\tnodes_free(pjob);\n\t\t\t\tSEND_ERR(errcode)\n\t\t\t\tgoto done;\n\t\t\t}\n\n\t\t\tpjob->ji_hosts[0].hn_stream = stream;\n\n\t\t\tif (gen_nodefile_on_sister_mom) {\n\t\t\t\tchar varlist[(2 * MAXPATHLEN) + 1] = \"PBS_NODEFILE=\";\n\t\t\t\tchar buf[MAXPATHLEN + 1];\n\t\t\t\tif (generate_pbs_nodefile(pjob, buf, sizeof(buf) - 1, log_buffer, LOG_BUF_SIZE - 1) == 0) {\n\t\t\t\t\tstrcat(varlist, buf);\n\t\t\t\t\tset_jattr_generic(pjob, JOB_ATR_variables, varlist, NULL, INCR);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/*\n\t\t\t ** Check to make sure we found ourselves.\n\t\t\t */\n\t\t\tif (pjob->ji_nodeid == TM_ERROR_NODE) {\n\t\t\t\tchar *eh2;\n\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\"no match for my hostname '%s' was found in exec_host2, \"\n\t\t\t\t\t\"possible network misconfiguration\", mom_host);\n\n\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_CRIT, pjob->ji_qs.ji_jobid, log_buffer);\n\n\t\t\t\tif ((eh2 = get_jattr_str(pjob, JOB_ATR_exec_host2)) != NULL) 
{\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"exec_host2 = %s\", eh2);\n\t\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB,\n\t\t\t\t\tLOG_CRIT, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\t}\n\n\t\t\t\tnodes_free(pjob);\n\t\t\t\tSEND_ERR(PBSE_INTERNAL);\n\t\t\t\tgoto done;\n\t\t\t}\n\n\t\t\t/* set remaining job structure elements */\n\t\t\tset_job_state(pjob, JOB_STATE_LTR_RUNNING);\n\t\t\tset_job_substate(pjob, JOB_SUBSTATE_PRERUN);\n\t\t\tset_jattr_l_slim(pjob, JOB_ATR_mtime, time_now, SET);\n\t\t\tpjob->ji_qs.ji_stime = time_now;\n\t\t\tpjob->ji_polltime = time_now;\n\n\t\t\t/* np is set from job_nodes_inner */\n\n\n\t\t\t/*\n\t\t\t * NULL value passed to hook_input.vnl\n\t\t\t * means to assign vnode list using pjob->ji_host[].\n\t\t\t */\n\t\t\tmom_hook_input_init(&hook_input);\n\t\t\thook_input.pjob = pjob;\n\n\t\t\tmom_hook_output_init(&hook_output);\n\t\t\thook_output.reject_errcode = &hook_errcode;\n\t\t\thook_output.last_phook = &last_phook;\n\t\t\thook_output.fail_action = &hook_fail_action;\n\n\t\t\tswitch ((hook_rc=mom_process_hooks(HOOK_EVENT_EXECJOB_BEGIN,\n\t\t\t\t\tPBS_MOM_SERVICE_NAME, mom_host,\n\t\t\t\t\t&hook_input, &hook_output,\n\t\t\t\t\thook_msg, sizeof(hook_msg), 1))) {\n\t\t\t\tcase 1:   \t/* explicit accept */\n\t\t\t\t\tbreak;\n\t\t\t\tcase 2:\t/* no hook script executed - go ahead and accept event*/\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\t/* a value of '0' means explicit reject encountered. */\n\t\t\t\t\tif (hook_rc != 0) {\n\t\t\t\t\t\t/*\n\t\t\t\t\t\t * we've hit an internal error (malloc error, full disk, etc...), so\n\t\t\t\t\t\t * treat this now like a  hook error so hook fail_action will be\n\t\t\t\t\t\t * consulted. 
Before, behavior of an internal error was to ignore it!\n\t\t\t\t\t\t */\n\t\t\t\t\t\thook_errcode = PBSE_HOOKERROR;\n\t\t\t\t\t}\n\t\t\t\t\tSEND_ERR2(hook_errcode, (char *)hook_msg);\n\t\t\t\t\tif ((hook_errcode == PBSE_HOOKERROR) &&\n\t\t\t\t\t    (last_phook != NULL) &&\n\t\t\t\t\t    ((last_phook->fail_action & HOOK_FAIL_ACTION_OFFLINE_VNODES) != 0)) {\n\t\t\t\t\t\tvnl_t *tvnl = NULL;\n\t\t\t\t\t\tchar\thook_buf[HOOK_BUF_SIZE];\n\t\t\t\t\t\tint\tvret = 0;\n\n\t\t\t\t\t\tsnprintf(hook_buf,\n\t\t\t\t\t\t\tHOOK_BUF_SIZE,\n\t\t\t\t\t\t\t\"1,%s\",\n\t\t\t\t\t\t\tlast_phook->hook_name);\n\t\t\t\t\t\tif (vnl_alloc(&tvnl) != NULL) {\n\t\t\t\t\t\t\tvret = vn_addvnr(tvnl,\n\t\t\t\t\t\t\t\tmom_short_name,\n\t\t\t\t\t\t\t\tVNATTR_HOOK_OFFLINE_VNODES,\n\t\t\t\t\t\t\t\thook_buf, 0, 0, NULL);\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tvret = 1;\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tif (vret != 0) {\n\t\t\t\t\t\t\tsnprintf(log_buffer,\n\t\t\t\t\t\t\t\tsizeof(log_buffer),\n\t\t\t\t\t\t\t\t\"Failed to add to \"\n\t\t\t\t\t\t\t\t\"vnlp: %s=%s\",\n\t\t\t\t\t\t\t\tVNATTR_HOOK_OFFLINE_VNODES,\n\t\t\t\t\t\t\t\thook_buf);\n\t\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG2,\n\t\t\t\t\t\t\t\tPBS_EVENTCLASS_HOOK,\n\t\t\t\t\t\t\t\tLOG_INFO,\n\t\t\t\t\t\t\t\tlast_phook->hook_name,\n\t\t\t\t\t\t\t\tlog_buffer);\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t/* this saves 'tvnl' */\n\t\t\t\t\t\t\t/* in svr_vnl_action, */\n\t\t\t\t\t\t\t/* and later freed upon */\n\t\t\t\t\t\t\t/* server acking action */\n\t\t\t\t\t\t\t(void)send_hook_vnl(tvnl);\n\t\t\t\t\t\t\ttvnl = NULL;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (tvnl != NULL)\n\t\t\t\t\t\t\tvnl_free(tvnl);\n\t\t\t\t\t}\n\n\t\t\t\t\tmom_deljob(pjob);\n\t\t\t\t\tgoto done;\n\t\t\t}\n\n\t\t\tmom_hook_input_init(&hook_input);\n\t\t\thook_input.pjob = pjob;\n\n\t\t\tmom_hook_output_init(&hook_output);\n\t\t\thook_output.reject_errcode = &hook_errcode;\n\t\t\thook_output.last_phook = &last_phook;\n\t\t\thook_output.fail_action = &hook_fail_action;\n\n\t\t\tDBPRT((\"%s: JOIN_JOB %s node 
%d\\n\", __func__, jobid, pjob->ji_nodeid))\n\n\t\t\t/*\n\t\t\t ** Call job_join_extra to do setup.\n\t\t\t */\n\t\t\tif (job_join_extra != NULL) {\n\t\t\t\terrcode = job_join_extra(pjob, np);\n\t\t\t\tif (errcode != 0) {\n\t\t\t\t\t(void)mom_process_hooks(HOOK_EVENT_EXECJOB_ABORT, PBS_MOM_SERVICE_NAME, mom_host, &hook_input, &hook_output, hook_msg, sizeof(hook_msg), 1);\n\t\t\t\t\tmom_deljob(pjob);\n\t\t\t\t\tSEND_ERR(errcode)\n\t\t\t\t\tgoto done;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t(void)job_save(pjob);\n\t\t\t(void)strcpy(namebuf, path_jobs);\t/* job directory path */\n\t\t\tif (*pjob->ji_qs.ji_fileprefix != '\\0')\n\t\t\t\t(void)strcat(namebuf, pjob->ji_qs.ji_fileprefix);\n\t\t\telse\n\t\t\t\t(void)strcat(namebuf, pjob->ji_qs.ji_jobid);\n\t\t\t(void)strcat(namebuf, JOB_TASKDIR_SUFFIX);\n\n\t\t\tif (mkdir(namebuf, 0700) == -1) {\n\t\t\t\t(void)mom_process_hooks(HOOK_EVENT_EXECJOB_ABORT, PBS_MOM_SERVICE_NAME, mom_host, &hook_input, &hook_output, hook_msg, sizeof(hook_msg), 1);\n\t\t\t\tmom_deljob(pjob);\n\t\t\t\tSEND_ERR(PBSE_SYSTEM)\n\t\t\t\tgoto done;\n\t\t\t}\n#ifdef WIN32\n\t\t\t/* the following must appear before check_pwd() since the */\n\t\t\t/* latter tries to read cred info */\n\t\t\tif (mom_create_cred(pjob, info, len, FALSE, stream) == -1) {\n\t\t\t\t(void)mom_process_hooks(HOOK_EVENT_EXECJOB_ABORT, PBS_MOM_SERVICE_NAME, mom_host, &hook_input, &hook_output, hook_msg, sizeof(hook_msg), 1);\n\t\t\t\tmom_deljob(pjob);\n\t\t\t\tSEND_ERR(PBSE_SYSTEM)\n\t\t\t\tgoto done;\n\t\t\t}\n#endif\n\t\t\tif (check_pwd(pjob) == NULL) {\n\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_NOTICE,\n\t\t\t\t\tpjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\t(void)mom_process_hooks(HOOK_EVENT_EXECJOB_ABORT, PBS_MOM_SERVICE_NAME, mom_host, &hook_input, &hook_output, hook_msg, sizeof(hook_msg), 1);\n\t\t\t\tmom_deljob(pjob);\n\t\t\t\tSEND_ERR(PBSE_BADUSER)\n\t\t\t\tgoto done;\n\t\t\t}\n\t\t\tpjob->ji_qs.ji_un.ji_momt.ji_exuid = 
pjob->ji_grpcache->gc_uid;\n\t\t\tpjob->ji_qs.ji_un.ji_momt.ji_exgid = pjob->ji_grpcache->gc_gid;\n\n#ifndef WIN32\n\t\t\tif (mom_create_cred(pjob, info, len, FALSE, stream) == -1) {\n\t\t\t\t(void)mom_process_hooks(HOOK_EVENT_EXECJOB_ABORT, PBS_MOM_SERVICE_NAME, mom_host, &hook_input, &hook_output, hook_msg, sizeof(hook_msg), 1);\n\t\t\t\tmom_deljob(pjob);\n\t\t\t\tSEND_ERR(PBSE_SYSTEM)\n\t\t\t\tgoto done;\n\t\t\t}\n#endif\n\n\t\t\t/* create staging and execution dir if sandbox=PRIVATE mode is enabled */\n\t\t\t/* this code should appear after check_pwd() since */\n\t\t\t/* mkjobdir() depends on job uid and gid being set correctly */\n\t\t\tif ((is_jattr_set(pjob, JOB_ATR_sandbox)) &&\n\t\t\t\t(strcasecmp(get_jattr_str(pjob, JOB_ATR_sandbox), \"PRIVATE\") == 0)) {\n#ifdef WIN32\n\t\t\t\tif (mkjobdir(pjob->ji_qs.ji_jobid,\n\t\t\t\t\tjobdirname(pjob->ji_qs.ji_jobid, pjob->ji_grpcache->gc_homedir),\n\t\t\t\t\t(pjob->ji_user != NULL) ? pjob->ji_user->pw_name : NULL,\n\t\t\t\t\t(pjob->ji_user != NULL) ? 
pjob->ji_user->pw_userlogin : INVALID_HANDLE_VALUE)) {\n\t\t\t\t\tsprintf(log_buffer, \"unable to create the job directory %s\",\n\t\t\t\t\t\tjobdirname(pjob->ji_qs.ji_jobid, pjob->ji_grpcache->gc_homedir));\n\t\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\t\t(void)mom_process_hooks(HOOK_EVENT_EXECJOB_ABORT, PBS_MOM_SERVICE_NAME, mom_host, &hook_input, &hook_output, hook_msg, sizeof(hook_msg), 1);\n\t\t\t\t\tmom_deljob(pjob);\n\t\t\t\t\tSEND_ERR(PBSE_SYSTEM)\n\t\t\t\t\tgoto done;\n\t\t\t\t}\n\n#else\t/* Unix/Linux */\n\n\t\t\t\tmode_t myumask = 0;\n\t\t\t\tchar   maskbuf[22];\n\t\t\t\tmode_t j;\n\t\t\t\tint    e;\n\n\t\t\t\tif (is_jattr_set(pjob, JOB_ATR_umask)) {\n\t\t\t\t\tsprintf(maskbuf, \"%ld\", get_jattr_long(pjob, JOB_ATR_umask));\n\t\t\t\t\tsscanf(maskbuf, \"%o\", &j);\n\t\t\t\t\tmyumask = umask(j);\n\t\t\t\t} else {\n\t\t\t\t\tmyumask = umask(077);\n\t\t\t\t}\n\n\t\t\t\te = mkjobdir(pjob->ji_qs.ji_jobid,\n\t\t\t\t\tjobdirname(pjob->ji_qs.ji_jobid,\n\t\t\t\t\tpjob->ji_grpcache->gc_homedir),\n\t\t\t\t\tpjob->ji_qs.ji_un.ji_momt.ji_exuid,\n\t\t\t\t\tpjob->ji_qs.ji_un.ji_momt.ji_exgid);\n\t\t\t\tif (myumask != 0)\n\t\t\t\t\t(void)umask(myumask);\n\n\t\t\t\tif (e != 0) {\n\t\t\t\t\tsprintf(log_buffer, \"unable to create the job directory %s\", jobdirname(pjob->ji_qs.ji_jobid, pjob->ji_grpcache->gc_homedir));\n\t\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\t\t(void)mom_process_hooks(HOOK_EVENT_EXECJOB_ABORT, PBS_MOM_SERVICE_NAME, mom_host, &hook_input, &hook_output, hook_msg, sizeof(hook_msg), 1);\n\t\t\t\t\tmom_deljob(pjob);\n\t\t\t\t\tSEND_ERR(PBSE_SYSTEM)\n\t\t\t\t\tgoto done;\n\t\t\t\t}\n#endif\n\t\t\t}\n\n\t\t\tsprintf(log_buffer, \"JOIN_JOB as node %d\", pjob->ji_nodeid);\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\tjobid, log_buffer);\n\t\t\t/*\n\t\t\t ** if certain resource limits require that the job usage be\n\t\t\t ** polled, we link the job to mom_polljobs.\n\t\t\t **\n\t\t\t ** NOTE: we overload the job 
field ji_jobque for this as it\n\t\t\t ** is not used otherwise by MOM\n\t\t\t */\n\t\t\tif (mom_do_poll(pjob))\n\t\t\t\tappend_link(&mom_polljobs, &pjob->ji_jobque, pjob);\n\t\t\tif (pbs_idx_insert(jobs_idx, pjob->ji_qs.ji_jobid, pjob) != PBS_IDX_RET_OK) {\n\t\t\t\tlog_joberr(PBSE_INTERNAL, __func__, \"Failed to add job in index during join job\", pjob->ji_qs.ji_jobid);\n\t\t\t\tgoto join_err;\n\t\t\t}\n\t\t\tappend_link(&svr_alljobs, &pjob->ji_alljobs, pjob);\n\n\t\t\t/*\n\t\t\t ** At this point, we have done all the job setup.\n\t\t\t ** Any error from now on is a problem sending the\n\t\t\t ** reply to MS.  We don't need to call SEND_ERR.\n\t\t\t */\n\t\t\tret = im_compose(stream, jobid, cookie, IM_ALL_OKAY,\n\t\t\t\tevent, fromtask, IM_OLD_PROTOCOL_VER);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto join_err;\n\t\t\t/*\n\t\t\t ** Here we need to call job_join_ack to send any extra\n\t\t\t ** information with the reply to the JOIN request.\n\t\t\t ** The format of the data sent by job_join_ack will\n\t\t\t ** always have a version number as the first item\n\t\t\t ** sent as an int.  
The rest depends on job_join_ack\n\t\t\t ** and will be defined in the function it points to.\n\t\t\t */\n\t\t\tif (job_join_ack != NULL) {\n\t\t\t\tret = job_join_ack(pjob, np, stream);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto join_err;\n\t\t\t}\n\n\t\t\tif (tpp_eom(stream) == -1)\n\t\t\t\tgoto join_err;\n\n\t\t\tif (dis_flush(stream) == -1)\n\t\t\t\tgoto join_err;\n\n\t\t\tgoto fini;\n\njoin_err:\n\t\t\tlog_err(errno, __func__, \"tpp flush\");\n\t\t\t(void)mom_process_hooks(HOOK_EVENT_EXECJOB_ABORT, PBS_MOM_SERVICE_NAME, mom_host, &hook_input, &hook_output, hook_msg, sizeof(hook_msg), 1);\n\t\t\ttpp_close(stream);\n\t\t\tmom_deljob(pjob);\n\t\t\tgoto fini;\n\n\t\tcase IM_ALL_OKAY:\n\t\tcase IM_ERROR:\n\t\tcase IM_ERROR2:\n\t\t\treply = 0;\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\treply = 1;\n\t\t\tbreak;\n\t}\n\n\tnp = NULL;\n\t/*\n\t ** Check if job already exists.\n\t */\n\tif ((pjob = find_job(jobid)) == NULL) {\n\t\tSEND_ERR(PBSE_JOBEXIST)\n\t\tgoto done;\n\t}\n\n\t/* check cookie */\n\tif (!(is_jattr_set(pjob, JOB_ATR_Cookie))) {\n\t\tDBPRT((\"%s: job %s has no cookie\\n\", __func__, jobid))\n\t\tSEND_ERR(PBSE_BADSTATE)\n\t\tgoto done;\n\t}\n\toreo = get_jattr_str(pjob, JOB_ATR_Cookie);\n\tif (strcmp(oreo, cookie) != 0) {\n\t\tDBPRT((\"%s: job %s cookie %s message %s\\n\", __func__, jobid, oreo, cookie))\n\t\tSEND_ERR(PBSE_BADSTATE)\n\t\tgoto done;\n\t}\n\t/*\n\t ** This is some processing needed that is common between\n\t ** both kinds of reply.\n\t ** reply == 0 means that this message is a reply not a request\n\t ** reply == 1 means that this is a request to which a reply may happen\n\t */\n\tif (reply == 0) {\n\t\tfor (nodeidx = 0; nodeidx < pjob->ji_numnodes; nodeidx++) {\n\t\t\tnp = &pjob->ji_hosts[nodeidx];\n\n\t\t\tif (np->hn_stream == stream) {\n\t\t\t\tnp->hn_eof_ts = 0; /* reset down timestamp */\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tif (nodeidx == pjob->ji_numnodes) {\n\t\t\tif (pjob->ji_updated)  {\n\t\t\t\t/* since some of job's nodes have 
been released early,\n\t\t\t\t * this looks like a stream from one of the\n\t\t\t\t * released nodes.\n\t\t\t\t */\n\t\t\t\tgoto done;\n\t\t\t} else {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"stream %d not found to job nodes\", stream);\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t}\n\t\tep = (eventent *)GET_NEXT(np->hn_events);\n\n\t\twhile (ep) {\n\t\t\tif (ep->ee_event == event &&\n\t\t\t\tep->ee_taskid == fromtask)\n\t\t\t\tbreak;\n\t\t\tep = (eventent *)GET_NEXT(ep->ee_next);\n\t\t}\n\t\tif (ep == NULL) {\n\t\t\tif (pjob->ji_updated)  {\n\t\t\t\t/* some of job's vnodes have been released early\n\t\t\t\t * along with their associated tm_spawn events\n\t\t\t\t */\n\t\t\t\tgoto done;\n\t\t\t} else {\n\t\t\t\tsprintf(log_buffer, \"event %d taskid %8.8X not found\",\n\t\t\t\t\tevent, fromtask);\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t}\n\n\t\tefd = ep->ee_fd;\n\t\tevent_com = ep->ee_command;\n\t\tevent_task = ep->ee_taskid;\n\t\tevent_client = ep->ee_client;\n\t\targv = ep->ee_argv;\n\t\tenvp = ep->ee_envp;\n\t\tdelete_link(&ep->ee_next);\n\t\tfree(ep);\n\t}\n\n\tswitch (command) {\n\n\t\tcase\tIM_KILL_JOB:\n\t\t\t/*\n\t\t\t ** Sender is (must be) mom superior commanding me to kill a\n\t\t\t ** job which I should be a part of.\n\t\t\t ** Send a signal and set the jobstate to begin the\n\t\t\t ** kill.  
We wait for all tasks to exit before sending\n\t\t\t ** an obit to mother superior.\n\t\t\t **\n\t\t\t ** auxiliary info (\n\t\t\t **\tnone;\n\t\t\t ** )\n\t\t\t */\n\t\t\tif (check_ms(stream, pjob))\n\t\t\t\tgoto fini;\n\n\t\t\tmom_hook_input_init(&hook_input);\n\t\t\thook_input.pjob = pjob;\n\n\t\t\tmom_hook_output_init(&hook_output);\n\t\t\thook_output.reject_errcode = &hook_errcode;\n\t\t\thook_output.last_phook = &last_phook;\n\t\t\thook_output.fail_action = &hook_fail_action;\n\t\t\tif (mom_process_hooks(HOOK_EVENT_EXECJOB_PRETERM,\n\t\t\t\tPBS_MOM_SERVICE_NAME, mom_host, &hook_input,\n\t\t\t\t&hook_output,\n\t\t\t\thook_msg, sizeof(hook_msg), 1) == 0) {\n\n\t\t\t\tSEND_ERR2(hook_errcode, (char *)hook_msg);\n\t\t\t\tgoto done;\t/* explicit reject - don't cancel */\n\t\t\t}\n\n\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\tjobid, \"KILL_JOB received\");\n\t\t\t/*\n\t\t\t ** Send the jobs a signal but we have to wait to\n\t\t\t ** do a reply to mother superior until the procs\n\t\t\t ** die and are reaped.\n\t\t\t */\n\t\t\tDBPRT((\"%s: KILL_JOB %s\\n\", __func__, jobid))\n\t\t\treply = 0;\t/* reply will be deferred */\n\t\t\tkill_job(pjob, SIGKILL);\n\t\t\tset_job_substate(pjob, JOB_SUBSTATE_EXITING);\n\t\t\tset_job_state(pjob, JOB_STATE_LTR_EXITING);\n\t\t\tpjob->ji_obit = event;\n\t\t\texiting_tasks = 1;\n\n\t\t\tmom_hook_input_init(&hook_input);\n\t\t\thook_input.pjob = pjob;\n\n\t\t\tmom_hook_output_init(&hook_output);\n\t\t\thook_output.reject_errcode = &hook_errcode;\n\t\t\thook_output.last_phook = &last_phook;\n\t\t\thook_output.fail_action = &hook_fail_action;\n\n\t\t\t(void)mom_process_hooks(HOOK_EVENT_EXECJOB_EPILOGUE,\n\t\t\t\tPBS_MOM_SERVICE_NAME, mom_host, &hook_input,\n\t\t\t\t&hook_output, hook_msg, sizeof(hook_msg), 1);\n\t\t\tbreak;\n\n\t\tcase\tIM_DELETE_JOB:\n\t\tcase\tIM_DELETE_JOB2:\n\t\tcase\tIM_DELETE_JOB_REPLY:\n\t\t\t/*\n\t\t\t ** Sender is (must be) mom superior commanding me to delete a\n\t\t\t ** job which I 
should be a part of.  There is no reply for\n\t\t\t ** IM_DELETE_JOB but there is for IM_DELETE_JOB_REPLY.\n\t\t\t **\n\t\t\t ** auxiliary info (\n\t\t\t **\tnone;\n\t\t\t ** )\n\t\t\t */\n\t\t\tDBPRT((\"%s: %s for %s\\n\", __func__, command==IM_DELETE_JOB?\"DELETE_JOB\":\"DELETE_JOB_REPLY\", pjob->ji_qs.ji_jobid));\n\n\t\t\tif (check_ms(stream, pjob))\n\t\t\t\tgoto fini;\n\n\t\t\tif ((command == IM_DELETE_JOB) || (command == IM_DELETE_JOB_REPLY))\n\t\t\t\t/* For IM_DELETE_JOB_REPLY, it should be\n\t\t\t\t * 'DELETE_JOB_REPLY received'\n\t\t\t\t * but there are QA tests out there that\n\t\t\t\t * already depend on the current message.\n\t\t\t\t */\n\t\t\t\tdelete_job_msg = \"DELETE_JOB received\";\n\t\t\telse if (command == IM_DELETE_JOB2)\n\t\t\t\tdelete_job_msg = \"DELETE_JOB2 received\";\n\n\t\t\tif (delete_job_msg != NULL)\n\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\tjobid, delete_job_msg);\n\n\t\t\tif (pjob->ji_hook_running_bg_on)\n\t\t\t\tgoto fini;\n\n\t\t\tkill_job(pjob, SIGKILL);\t/* just in case */\n\n\t\t\t/* NULL value passed to hook_input.vnl \t\t\t\t*/\n\t\t\t/* means to assign vnode list using pjob->ji_host[].\t    \t*/\n\n\t\t\tif ((hook_input_ptr = (mom_hook_input_t *)malloc(\n\t\t\t\tsizeof(mom_hook_input_t))) == NULL) {\n\t\t\t\t\tlog_err(errno, __func__, MALLOC_ERR_MSG);\n\t\t\t\t\tgoto err;\n\t\t\t}\n\t\t\tmom_hook_input_init(hook_input_ptr);\n\t\t\thook_input_ptr->pjob = pjob;\n\n\t\t\tif ((hook_output_ptr = (mom_hook_output_t *)malloc(\n\t\t\t\tsizeof(mom_hook_output_t))) == NULL) {\n\t\t\t\t\tlog_err(errno, __func__, MALLOC_ERR_MSG);\n\t\t\t\t\tgoto err;\n\t\t\t}\n\t\t\tmom_hook_output_init(hook_output_ptr);\n\n\t\t\tif ((hook_output_ptr->reject_errcode =\n\t\t\t\t(int *)malloc(sizeof(int))) == NULL) {\n\t\t\t\t\tlog_err(errno, __func__, MALLOC_ERR_MSG);\n\t\t\t\t\tgoto err;\n\t\t\t}\n\t\t\tmemset(hook_output_ptr->reject_errcode, 0, sizeof(int));\n\n\t\t\tpjob->ji_postevent = event;\n\t\t\tpjob->ji_taskid = 
fromtask;\n\n\t\t\tif (mom_process_hooks(\n\t\t\t\t(command == IM_DELETE_JOB2)?\n\t\t\t\t\tHOOK_EVENT_EXECJOB_EPILOGUE:\n\t\t\t\t\tHOOK_EVENT_EXECJOB_END,\n\t\t\t\tPBS_MOM_SERVICE_NAME, mom_host, hook_input_ptr,\n\t\t\t\thook_output_ptr, NULL, 0, 1) ==\n\t\t\t\t\t\tHOOK_RUNNING_IN_BACKGROUND) {\n\t\t\t\t\tpjob->ji_hook_running_bg_on = (command == IM_DELETE_JOB)? BG_IM_DELETE_JOB: BG_IM_DELETE_JOB_REPLY;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\n\t\t\tif (command == IM_DELETE_JOB_REPLY) {\n\t\t\t\tmom_deljob(pjob);\n\t\t\t\tret = im_compose(stream, jobid, cookie, IM_ALL_OKAY,\n\t\t\t\t\tevent, fromtask, IM_OLD_PROTOCOL_VER);\n\t\t\t\treply = 1;\n\t\t\t} else if (command == IM_DELETE_JOB2) {\n\t\t\t\tjob *pjob2 = NULL;\n\t\t\t\tlong runver;\n\n\t\t\t\tret = im_compose(stream, jobid, cookie, IM_SEND_RESC,\n\t\t\t\t\tevent, fromtask, IM_OLD_PROTOCOL_VER);\n\n\t\t\t\t/* Send the information tallied for the job. */\n\t\t\t\tret = diswst(stream, mom_host);\n\t\t\t\tBAIL(\"mom_host\")\n\t\t\t\tret = diswul(stream, resc_used(pjob, \"cput\",\n\t\t\t\t\t\t\t\tgettime));\n\t\t\t\tBAIL(\"resources_used.cput\")\n\t\t\t\tret = diswul(stream, resc_used(pjob, \"mem\",\n\t\t\t\t\t\t\t\tgetsize));\n\t\t\t\tBAIL(\"resources_used.mem\")\n\t\t\t\tret = diswul(stream, resc_used(pjob,\n\t\t\t\t\t\t\"cpupercent\", gettime));\n\t\t\t\tBAIL(\"resources_used.cpupercent\")\n\t\t\t\tif (is_jattr_set(pjob, JOB_ATR_run_version))\n\t\t\t\t\trunver = get_jattr_long(pjob, JOB_ATR_run_version);\n\t\t\t\telse\n\t\t\t\t\trunver = get_jattr_long(pjob, JOB_ATR_runcount);\n\n\t\t\t\t/* Call the execjob_end hook now */\n\t\t\t\tif (mom_process_hooks(HOOK_EVENT_EXECJOB_END, PBS_MOM_SERVICE_NAME, mom_host, hook_input_ptr,\n\t\t\t\t\t\thook_output_ptr, NULL, 0, 1) == HOOK_RUNNING_IN_BACKGROUND) {\n\t\t\t\t\t\tpjob->ji_hook_running_bg_on = BG_IM_DELETE_JOB2;\n\t\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tmom_deljob(pjob);\n\n\t\t\t\t/* Needed to create a lightweight copy of the job to\n\t\t\t\t * contain only the jobid info, 
so I can just call\n\t\t\t\t * new_job_action_req() to create a JOB_ACT_REQ_DEALLOCATE\n\t\t\t\t * request. Can't use the original 'pjob' structure as\n\t\t\t\t * before creating the request, the real job should have\n\t\t\t\t * been deleted already.\n\t\t\t\t */\n\t\t\t\tif ((pjob2 = job_alloc()) != NULL) {\n\t\t\t\t\tpbs_strncpy(pjob2->ji_qs.ji_jobid, jobid, sizeof(pjob2->ji_qs.ji_jobid));\n\t\t\t\t\tset_jattr_l_slim(pjob2, JOB_ATR_run_version, runver, SET);\n\t\t\t\t\t/* The JOB_ACT_REQ_DEALLOCATE request will tell\n\t\t\t\t\t * the server that this mom has completely deleted the\n\t\t\t\t\t * job and now the server can officially free up the\n\t\t\t\t\t * job from the nodes managed by this mom, allowing\n\t\t\t\t\t * other jobs to run.\n\t\t\t\t\t */\n\t\t\t\t\tnew_job_action_req(pjob2, HOOK_PBSADMIN, JOB_ACT_REQ_DEALLOCATE);\n\t\t\t\t\tjob_free(pjob2);\n\t\t\t\t}\n\n\t\t\t\treply = 1;\n\t\t\t} else {\n\t\t\t\tmom_deljob(pjob);\n\t\t\t\treply = 0;\n\t\t\t}\n\t\t\tfree(hook_input_ptr);\n\t\t\tif (hook_output_ptr) {\n\t\t\t\tfree(hook_output_ptr->reject_errcode);\n\t\t\t\tfree(hook_output_ptr);\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase\tIM_EXEC_PROLOGUE:\n\t\t\t/*\n\t\t\t ** Sender is (must be) mom superior commanding me to execute\n\t\t\t ** a prologue hook.\n\t\t\t */\n\t\t\tDBPRT((\"%s: %s for %s\\n\", __func__, \"IM_EXEC_PROLOGUE\", pjob->ji_qs.ji_jobid));\n\t\t\tmom_hook_input_init(&hook_input);\n\t\t\thook_input.pjob = pjob;\n\n\t\t\tmom_hook_output_init(&hook_output);\n\t\t\thook_output.reject_errcode = &hook_errcode;\n\t\t\thook_output.last_phook = &last_phook;\n\t\t\thook_output.fail_action = &hook_fail_action;\n\n\t\t\tswitch (hook_rc = mom_process_hooks(HOOK_EVENT_EXECJOB_PROLOGUE,\n\t\t\t\t\t\tPBS_MOM_SERVICE_NAME,\n\t\t\t\t\t\tmom_host, &hook_input, &hook_output,\n\t\t\t\t\t\thook_msg, sizeof(hook_msg), 1)) {\n\n\t\t\t\tcase 1: /* explicit accept */\n\t\t\t\tcase 2:\t/* no hook script executed - go ahead and accept event */\n\t\t\t\t\tret = 
im_compose(stream, jobid, cookie,\n\t\t\t\t\t\t\tIM_ALL_OKAY,\n\t\t\t\t\t\t\tevent, fromtask, IM_OLD_PROTOCOL_VER);\n\t\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\t\tgoto err;\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\t/* a value of '0' means an explicit reject was encountered. */\n\t\t\t\t\tif (hook_rc != 0) {\n\t\t\t\t\t\t/* we've hit an internal error (malloc error, full disk, etc.), so */\n\t\t\t\t\t\t/* treat this now like a hook error so the hook fail_action */\n\t\t\t\t\t\t/* will be consulted. */\n\t\t\t\t\t\t/* Previously, an internal error was simply ignored! */\n\t\t\t\t\t\thook_errcode = PBSE_HOOKERROR;\n\t\t\t\t\t}\n\t\t\t\t\tSEND_ERR2(hook_errcode, (char *)hook_msg);\n\t\t\t\t\tif (hook_errcode == PBSE_HOOKERROR)\n\t\t\t\t\t\tsend_hook_fail_action(last_phook);\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase\tIM_SPAWN_TASK:\n\t\t\t/*\n\t\t\t ** Sender is a MOM in a job that wants to start a task.\n\t\t\t ** I am MOM on the node that is to run the task.\n\t\t\t **\n\t\t\t ** auxiliary info (\n\t\t\t **\tparent vnode\ttm_node_id\n\t\t\t **\ttarget vnode\ttm_node_id\n\t\t\t **\ttask id\t\ttm_task_id (not used)\n\t\t\t **\targv 0\t\tstring\n\t\t\t **\t...\n\t\t\t **\targv n\t\tstring\n\t\t\t **\tnull\n\t\t\t **\tenvp 0\t\tstring\n\t\t\t **\t...\n\t\t\t **\tenvp m\t\tstring\n\t\t\t ** )\n\t\t\t */\n\t\t\tpvnodeid = disrsi(stream, &ret);\n\t\t\tBAIL(\"SPAWN_TASK pvnodeid\")\n\n\t\t\tif ((np = find_node(pjob, stream, pvnodeid)) == NULL) {\n\t\t\t\tSEND_ERR(PBSE_BADHOST)\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\ttvnodeid = disrsi(stream, &ret);\n\t\t\tBAIL(\"SPAWN_TASK tvnodeid\")\n\t\t\ttaskid = disrui(stream, &ret);\n\t\t\tBAIL(\"SPAWN_TASK taskid\")\n\t\t\tDBPRT((\"%s: SPAWN_TASK %s parent %d target %d taskid %u\\n\",\n\t\t\t\t__func__, jobid, pvnodeid, tvnodeid, taskid))\n\n\t\t\t/*\n\t\t\t **\tThe target node must be here.\n\t\t\t */\n\t\t\tif (pjob->ji_nodeid != TO_PHYNODE(tvnodeid)) {\n\t\t\t\tSEND_ERR(PBSE_INTERNAL)\n\t\t\t\tbreak;\n\t\t\t}\n\n\t\t\tif (version == 
IM_OLD_PROTOCOL_VER) {\n\t\t\t\t/*\n\t\t\t\t * The arg list is ended by an empty (zero length)\n\t\t\t\t * string.\n\t\t\t\t */\n\t\t\t\tnum = 4;\n\t\t\t\targv = (char **)calloc(sizeof(char *), num);\n\t\t\t\tassert(argv);\n\t\t\t\tfor (i=0;; i++) {\n\t\t\t\t\tif ((cp = disrst(stream, &ret)) == NULL)\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tif (*cp == '\\0') {\n\t\t\t\t\t\t/* got an empty string, end of args list */\n\t\t\t\t\t\tfree(cp);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\tif (i == num-1) {\n\t\t\t\t\t\tnum *= 2;\n\t\t\t\t\t\targv = (char **)realloc(argv,\n\t\t\t\t\t\t\tnum*sizeof(char *));\n\t\t\t\t\t\tassert(argv);\n\t\t\t\t\t}\n\t\t\t\t\targv[i] = cp;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\targc = disrui(stream, &ret);\n\t\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\t\tsprintf(log_buffer, \"SPAWN_TASK read of argc\");\n\t\t\t\t\tgoto err;\n\t\t\t\t}\n\t\t\t\targv = (char **)calloc(argc+1, sizeof(char *));\n\t\t\t\tassert(argv);\n\t\t\t\tfor (i=0; i<argc; i++) {\n\t\t\t\t\targv[i] = disrst(stream, &ret);\n\t\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\targv[i] = NULL;\n\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\tarrayfree(argv);\n\t\t\t\tsprintf(log_buffer, \"SPAWN_TASK read of argv array\");\n\t\t\t\tgoto err;\n\t\t\t}\n\n\t\t\tnum = 8;\n\t\t\tenvp = (char **)calloc(sizeof(char *), num);\n\t\t\tassert(envp);\n\t\t\tfor (i=0;; i++) {\n\t\t\t\tif ((cp = disrst(stream, &ret)) == NULL)\n\t\t\t\t\tbreak;\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tbreak;\n\t\t\t\tif (*cp == '\\0') {\n\t\t\t\t\tfree(cp);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tif (i == num-1) {\n\t\t\t\t\tnum *= 2;\n\t\t\t\t\tenvp = (char **)realloc(envp,\n\t\t\t\t\t\tnum*sizeof(char *));\n\t\t\t\t\tassert(envp);\n\t\t\t\t}\n\t\t\t\tenvp[i] = cp;\n\t\t\t}\n\t\t\tenvp[i] = NULL;\n\t\t\tif (ret != DIS_EOD) {\n\t\t\t\tarrayfree(argv);\n\t\t\t\tarrayfree(envp);\n\t\t\t\tsprintf(log_buffer, \"SPAWN_TASK read of envp 
array\");\n\t\t\t\tgoto err;\n\t\t\t}\n#ifdef PMIX\n\t\t\tpbs_pmix_register_client(pjob, tvnodeid, &envp);\n#endif\n\t\t\tret = DIS_SUCCESS;\n\t\t\tif ((ptask = momtask_create(pjob)) == NULL) {\n\t\t\t\tSEND_ERR(PBSE_SYSTEM);\n\t\t\t\tarrayfree(argv);\n\t\t\t\tarrayfree(envp);\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tstrcpy(ptask->ti_qs.ti_parentjobid, jobid);\n\t\t\tptask->ti_qs.ti_parentnode = pvnodeid;\n\t\t\tptask->ti_qs.ti_myvnode    = tvnodeid;\n\t\t\tptask->ti_qs.ti_parenttask = fromtask;\n\t\t\tif (task_save(ptask) == -1) {\n\t\t\t\tSEND_ERR(PBSE_SYSTEM)\n\t\t\t\tarrayfree(argv);\n\t\t\t\tarrayfree(envp);\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\terrcode = start_process(ptask, argv, envp, false);\n\t\t\tif (errcode != PBSE_NONE) {\n\t\t\t\tSEND_ERR(errcode)\n\t\t\t}\n\t\t\telse {\n\t\t\t\tret = im_compose(stream, jobid, cookie, IM_ALL_OKAY,\n\t\t\t\t\tevent, fromtask, IM_OLD_PROTOCOL_VER);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tbreak;\n\t\t\t\tret = diswui(stream, ptask->ti_qs.ti_task);\n\t\t\t}\n\n\t\t\tarrayfree(argv);\n\t\t\tarrayfree(envp);\n\t\t\tbreak;\n\n\t\tcase\tIM_GET_TASKS:\n\t\t\t/*\n\t\t\t ** Sender is MOM which controls a task that wants to get\n\t\t\t ** the list of tasks running here.\n\t\t\t **\n\t\t\t ** auxiliary info (\n\t\t\t **\tsending node\ttm_node_id;\n\t\t\t **\ttarget node\ttm_node_id;\n\t\t\t ** )\n\t\t\t */\n\t\t\tpvnodeid = disrsi(stream, &ret);\n\t\t\tBAIL(\"GET_TASKS pvnodeid\")\n\t\t\ttvnodeid = disrsi(stream, &ret);\n\t\t\tBAIL(\"GET_TASKS tvnodeid\")\n\t\t\tDBPRT((\"%s: GET_TASKS %s from node %d to node %d\\n\",\n\t\t\t\t__func__, jobid, pvnodeid, tvnodeid))\n\t\t\tif ((np = find_node(pjob, stream, pvnodeid)) == NULL) {\n\t\t\t\tSEND_ERR(PBSE_BADHOST)\n\t\t\t\tbreak;\n\t\t\t}\n\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\tjobid, \"GET_TASKS received\");\n\t\t\tret = im_compose(stream, jobid, cookie, IM_ALL_OKAY,\n\t\t\t\tevent, fromtask, IM_OLD_PROTOCOL_VER);\n\t\t\tif (ret != 
DIS_SUCCESS)\n\t\t\t\tbreak;\n\t\t\tfor (ptask=(pbs_task *)GET_NEXT(pjob->ji_tasks);\n\t\t\t\tptask;\n\t\t\t\tptask=(pbs_task *)GET_NEXT(ptask->ti_jobtask)) {\n\t\t\t\tif (ptask->ti_qs.ti_myvnode == tvnodeid) {\n\t\t\t\t\tret = diswui(stream, ptask->ti_qs.ti_task);\n\t\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase\tIM_SIGNAL_TASK:\n\t\t\t/*\n\t\t\t ** Sender is MOM sending a task and signal to\n\t\t\t ** deliver.\n\t\t\t **\n\t\t\t ** auxiliary info (\n\t\t\t **\tsending node\ttm_node_id;\n\t\t\t **\ttaskid\t\ttm_task_id;\n\t\t\t **\tsignal\t\tint;\n\t\t\t ** )\n\t\t\t */\n\t\t\tpvnodeid = disrsi(stream, &ret);\n\t\t\tBAIL(\"SIGNAL_TASK pvnodeid\")\n\t\t\tif ((np = find_node(pjob, stream, pvnodeid)) == NULL) {\n\t\t\t\tSEND_ERR(PBSE_BADHOST)\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\ttaskid = disrui(stream, &ret);\n\t\t\tBAIL(\"SIGNAL_TASK taskid\")\n\t\t\tsig = disrsi(stream, &ret);\n\t\t\tBAIL(\"SIGNAL_TASK signum\")\n\t\t\tDBPRT((\"%s: SIGNAL_TASK %s fromnode %d task %8.8X sig %d\\n\",\n\t\t\t\t__func__, jobid, pvnodeid, taskid, sig))\n\t\t\tptask = task_find(pjob, taskid);\n\t\t\tif (ptask == NULL) {\n\t\t\t\tSEND_ERR(PBSE_JOBEXIST)\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tsprintf(log_buffer, \"SIGNAL_TASK %8.8X sig %d\", taskid, sig);\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\tjobid, log_buffer);\n\t\t\tkill_task(ptask, sig, 0);\n\t\t\tret = im_compose(stream, jobid, cookie, IM_ALL_OKAY,\n\t\t\t\tevent, fromtask, IM_OLD_PROTOCOL_VER);\n\t\t\tbreak;\n\n\t\tcase\tIM_OBIT_TASK:\n\t\t\t/*\n\t\t\t ** Sender is MOM sending a request to monitor a\n\t\t\t ** task for exit.\n\t\t\t **\n\t\t\t ** auxiliary info (\n\t\t\t **\tsending node\ttm_node_id;\n\t\t\t **\ttaskid\t\ttm_task_id;\n\t\t\t ** )\n\t\t\t */\n\t\t\tpvnodeid = disrsi(stream, &ret);\n\t\t\tBAIL(\"OBIT_TASK pvnodeid\")\n\t\t\tif ((np = find_node(pjob, stream, pvnodeid)) == NULL) 
{\n\t\t\t\tSEND_ERR(PBSE_BADHOST)\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\ttaskid = disrui(stream, &ret);\n\t\t\tBAIL(\"OBIT_TASK taskid\")\n\t\t\tptask = task_find(pjob, taskid);\n\t\t\tif (ptask == NULL) {\n\t\t\t\tSEND_ERR(PBSE_JOBEXIST)\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tDBPRT((\"%s: OBIT_TASK %s from node %d task %8.8X\\n\", __func__,\n\t\t\t\tjobid, pvnodeid, taskid))\n\t\t\tif (ptask->ti_qs.ti_status >= TI_STATE_EXITED) {\n\t\t\t\tret = im_compose(stream, jobid, cookie, IM_ALL_OKAY,\n\t\t\t\t\tevent, fromtask, IM_OLD_PROTOCOL_VER);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tbreak;\n\t\t\t\tret = diswsi(stream, ptask->ti_qs.ti_exitstat);\n\t\t\t}\n\t\t\telse {\t/* save obit request with task */\n\t\t\t\tobitent\t*op = (obitent *)malloc(sizeof(obitent));\n\t\t\t\tassert(op);\n\t\t\t\tCLEAR_LINK(op->oe_next);\n\t\t\t\tappend_link(&ptask->ti_obits, &op->oe_next, op);\n\t\t\t\top->oe_type = OBIT_TYPE_TMEVENT;\n\t\t\t\top->oe_u.oe_tm.oe_fd = -1;\n\t\t\t\top->oe_u.oe_tm.oe_node = pvnodeid;\n\t\t\t\top->oe_u.oe_tm.oe_event = event;\n\t\t\t\top->oe_u.oe_tm.oe_taskid = fromtask;\n\t\t\t\ttask_save(ptask);\n\t\t\t\treply = 0;\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase\tIM_GET_INFO:\n\t\t\t/*\n\t\t\t ** Sender is MOM sending a task and name to lookup\n\t\t\t ** for info to report back.\n\t\t\t **\n\t\t\t ** auxiliary info (\n\t\t\t **\tsending node\ttm_node_id;\n\t\t\t **\ttaskid\t\ttm_task_id;\n\t\t\t **\tname\t\tstring;\n\t\t\t ** )\n\t\t\t */\n\t\t\tpvnodeid = disrsi(stream, &ret);\n\t\t\tBAIL(\"GET_INFO pvnodeid\")\n\t\t\tif ((np = find_node(pjob, stream, pvnodeid)) == NULL) {\n\t\t\t\tSEND_ERR(PBSE_BADHOST)\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\ttaskid = disrui(stream, &ret);\n\t\t\tBAIL(\"GET_INFO taskid\")\n\t\t\tptask = task_find(pjob, taskid);\n\t\t\tif (ptask == NULL) {\n\t\t\t\tSEND_ERR(PBSE_JOBEXIST)\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tname = disrst(stream, &ret);\n\t\t\tBAIL(\"GET_INFO name\")\n\t\t\tDBPRT((\"%s: GET_INFO %s from node %d task %8.8X name %s\\n\",\n\t\t\t\t__func__, 
jobid, pvnodeid, taskid, name))\n\t\t\tif ((ip = task_findinfo(ptask, name)) == NULL) {\n\t\t\t\tSEND_ERR(PBSE_JOBEXIST)\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tsprintf(log_buffer, \"GET_INFO task %8.8X name %s\",\n\t\t\t\ttaskid, name);\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\tjobid, log_buffer);\n\t\t\tret = im_compose(stream, jobid, cookie, IM_ALL_OKAY,\n\t\t\t\tevent, fromtask, IM_OLD_PROTOCOL_VER);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tbreak;\n\t\t\tret = diswcs(stream, ip->ie_info, ip->ie_len);\n\t\t\tbreak;\n\n\t\tcase\tIM_GET_RESC:\n\t\t\t/*\n\t\t\t ** Sender is MOM requesting resource info to\n\t\t\t ** report back its client.\n\t\t\t **\n\t\t\t ** auxiliary info (\n\t\t\t **\tsending node\ttm_node_id;\n\t\t\t ** )\n\t\t\t */\n\t\t\tpvnodeid = disrsi(stream, &ret);\n\t\t\tBAIL(\"GET_RESC pvnodeid\")\n\t\t\tif ((np = find_node(pjob, stream, pvnodeid)) == NULL) {\n\t\t\t\tSEND_ERR(PBSE_BADHOST)\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tDBPRT((\"%s: GET_RESC %s from node %d\\n\", __func__, jobid, pvnodeid))\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\tjobid, \"GET_RESC received\");\n\t\t\tinfo = resc_string(pjob);\n\t\t\tret = im_compose(stream, jobid, cookie, IM_ALL_OKAY,\n\t\t\t\tevent, fromtask, IM_OLD_PROTOCOL_VER);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tbreak;\n\t\t\tret = diswst(stream, info);\n\t\t\tbreak;\n\n\t\tcase\tIM_POLL_JOB:\n\t\t\t/*\n\t\t\t ** Sender is (must be) mom superior commanding me to send\n\t\t\t ** information for a job which I should be a part of.\n\t\t\t **\n\t\t\t ** auxiliary info (\n\t\t\t **\tnone;\n\t\t\t ** )\n\t\t\t */\n\n\t\t\tif (QA_testing != 0) {\t\t/* for QA Testing only */\n\t\t\t\tif (QA_testing & PBSQA_POLLJOB_CRASH)\n\t\t\t\t\texit(98);\n\t\t\t\telse if (QA_testing & PBSQA_POLLJOB_SLEEP)\n\t\t\t\t\tsleep(90);\n\t\t\t}\n\n\t\t\tif (check_ms(stream, pjob))\n\t\t\t\tgoto fini;\n\t\t\tpjob->ji_polltime = time_now;\n\t\t\tDBPRT((\"%s: POLL_JOB %s\\n\", __func__, jobid))\n\t\t\tret 
= im_compose(stream, jobid, cookie, IM_ALL_OKAY,\n\t\t\t\tevent, fromtask, IM_OLD_PROTOCOL_VER);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tbreak;\n\t\t\t/*\n\t\t\t ** Now comes a recommendation for killing the job.\n\t\t\t */\n\t\t\texitval = (pjob->ji_qs.ji_svrflags &\n\t\t\t\t(JOB_SVFLG_OVERLMT1|JOB_SVFLG_OVERLMT2)) ? 1 : 0;\n\t\t\tret = diswsi(stream, exitval);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tbreak;\n\t\t\t/*\n\t\t\t ** Send the information tallied for the job.\n\t\t\t */\n\t\t\tret = diswul(stream, resc_used(pjob, \"cput\", gettime));\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tbreak;\n\t\t\tret = diswul(stream, resc_used(pjob, \"mem\", getsize));\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tbreak;\n\t\t\tret = diswul(stream, resc_used(pjob, \"cpupercent\", gettime));\n\n\t\t\tsend_resc_used_to_ms(stream, pjob);\n\t\t\tbreak;\n\n#ifdef PMIX\n\t\tcase\tIM_PMIX:\n\t\t\t/*\n\t\t\t * Sender is MOM requesting a PMIX operation\n\t\t\t * be carried out.\n\t\t\t *\n\t\t\t * auxiliary info (\n\t\t\t *\tsending node\ttm_node_id;\n\t\t\t *\ttaskid\t\ttm_task_id;\n\t\t\t *\toperation\tstring;\n\t\t\t * )\n\t\t\t */\n\t\t\tsprintf(log_buffer, \"IM_PMIX request received\");\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\tjobid, log_buffer);\n\t\t\t/* TODO: Read aux info, process the request, and send a response */\n\t\t\tsprintf(log_buffer, \"Handle IM_PMIX request\");\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\tjobid, log_buffer);\n\t\t\tsprintf(log_buffer, \"IM_PMIX replying IM_ALL_OKAY\");\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\tjobid, log_buffer);\n\t\t\tret = im_compose(stream, jobid, cookie, IM_ALL_OKAY,\n\t\t\t\tevent, fromtask, IM_OLD_PROTOCOL_VER);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tbreak;\n\t\t\tbreak;\n#endif\n\n\t\tcase\tIM_SUSPEND:\n\t\tcase\tIM_RESUME:\n\t\t\t/*\n\t\t\t ** Sender is (must be) mom superior commanding me to do\n\t\t\t ** a suspend or resume of all local tasks for a job 
which\n\t\t\t ** I should be a part of.\n\t\t\t **\n\t\t\t ** auxiliary info (\n\t\t\t **\tnone;\n\t\t\t ** )\n\t\t\t */\n\t\t\tif (check_ms(stream, pjob))\n\t\t\t\tgoto fini;\n\t\t\tDBPRT((\"%s: %s %s\\n\", __func__, (command == IM_SUSPEND) ?\n\t\t\t\t\"SUSPEND\" : \"RESUME\", jobid))\n\t\t\tsprintf(log_buffer, \"%s received\", (command == IM_SUSPEND) ?\n\t\t\t\t\"SUSPEND\" : \"RESUME\");\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\tjobid, log_buffer);\n\t\t\tif ((errcode = local_supres(pjob,\n\t\t\t\t(command == IM_SUSPEND) ? 1 : 0, NULL))\n\t\t\t\t!= PBSE_NONE) {\n\t\t\t\tSEND_ERR(errcode);\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tpjob->ji_mompost = (command == IM_SUSPEND) ?\n\t\t\t\tpost_suspend : post_resume;\n\t\t\t/*\n\t\t\t ** If a child was started to handle the operation,\n\t\t\t ** wait to reply until the kid returns.\n\t\t\t */\n\t\t\tif (pjob->ji_momsubt) {\n\t\t\t\treply = 0;\n\t\t\t\tpjob->ji_postevent = event;\n\t\t\t\tpjob->ji_taskid = fromtask;\n\t\t\t\tbreak;\n\t\t\t}\n\n\t\t\tpjob->ji_mompost(pjob, PBSE_NONE);\n\t\t\tret = im_compose(stream, jobid, cookie, IM_ALL_OKAY,\n\t\t\t\tevent, fromtask, IM_OLD_PROTOCOL_VER);\n\t\t\tbreak;\n\n\t\tcase\tIM_RESTART:\n\t\t\t/*\n\t\t\t ** Sender is (must be) mom superior commanding me to do\n\t\t\t ** a restart of all local tasks for a job which\n\t\t\t ** I should be a part of.\n\t\t\t **\n\t\t\t ** auxiliary info (\n\t\t\t **\tnone;\n\t\t\t ** )\n\t\t\t */\n\t\t\tif (check_ms(stream, pjob))\n\t\t\t\tgoto fini;\n\t\t\tDBPRT((\"%s: RESTART %s\\n\", __func__, jobid))\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\tjobid, \"RESTART received\");\n\n\t\t\t/*\n\t\t\t * NULL value passed to hook_input.vnl means to assign\n\t\t\t * vnode list using pjob->ji_host[].\n\t\t\t */\n\t\t\tmom_hook_input_init(&hook_input);\n\t\t\thook_input.pjob = pjob;\n\t\t\tmom_hook_output_init(&hook_output);\n\t\t\thook_output.reject_errcode = &hook_errcode;\n\t\t\thook_output.last_phook = 
&last_phook;\n\t\t\thook_output.fail_action = &hook_fail_action;\n\n\t\t\thook_rc=mom_process_hooks(HOOK_EVENT_EXECJOB_BEGIN,\n\t\t\t\t\tPBS_MOM_SERVICE_NAME, mom_host,\n\t\t\t\t\t&hook_input, &hook_output,\n\t\t\t\t\thook_msg, sizeof(hook_msg), 1);\n\t\t\tif (hook_rc <= 0) {\n\t\t\t\t/* a value of '0' means explicit reject encountered. */\n\t\t\t\tif (hook_rc != 0) {\n\t\t\t\t\t/*\n\t\t\t\t\t * we've hit an internal error (malloc error, full disk, etc...), so\n\t\t\t\t\t * treat this now like a  hook error so hook fail_action will be consulted.\n\t\t\t\t\t * Before, behavior of an internal error was to ignore it!\n\t\t\t\t\t */\n\t\t\t\t\thook_errcode = PBSE_HOOKERROR;\n\t\t\t\t\tsend_hook_fail_action(last_phook);\n\t\t\t\t}\n\t\t\t\tSEND_ERR2(hook_errcode, (char *)hook_msg);\n\t\t\t\tmom_deljob(pjob);\n\t\t\t\tbreak;\n\t\t\t}\n\n\t\t\terrcode = local_restart(pjob, NULL);\n\n\t\t\tif (errcode != PBSE_NONE) {\t/* error, send reply */\n\t\t\t\tSEND_ERR(errcode);\n\t\t\t\tbreak;\n\t\t\t}\n\n\t\t\t/*\n\t\t\t ** If a child was started to handle the operation,\n\t\t\t ** wait to reply until the kid returns.\n\t\t\t */\n\t\t\tif (pjob->ji_momsubt) {\n\t\t\t\treply = 0;\n\t\t\t\tpjob->ji_postevent = event;\n\t\t\t\tpjob->ji_taskid = fromtask;\n\t\t\t\tbreak;\n\t\t\t}\n\n\t\t\tpost_restart(pjob, PBSE_NONE);\n\t\t\tret = im_compose(stream, jobid, cookie, IM_ALL_OKAY,\n\t\t\t\tevent, fromtask, IM_OLD_PROTOCOL_VER);\n\t\t\tbreak;\n\n\n\t\tcase\tIM_CHECKPOINT:\n\t\tcase\tIM_CHECKPOINT_ABORT:\n\t\t\t/*\n\t\t\t ** Sender is (must be) mom superior commanding me to do\n\t\t\t ** a checkpoint of all local tasks for a job which\n\t\t\t ** I should be a part of.\n\t\t\t **\n\t\t\t ** auxiliary info (\n\t\t\t **\tnone;\n\t\t\t ** )\n\t\t\t */\n\t\t\tif (check_ms(stream, pjob))\n\t\t\t\tgoto fini;\n\t\t\tDBPRT((\"%s: %s %s\\n\", __func__,\n\t\t\t\t(command == IM_CHECKPOINT) ? 
\"CHECKPOINT\" :\n\t\t\t\t\"CHECKPOINT_ABORT\", jobid))\n\t\t\tsprintf(log_buffer, \"%s received\",\n\t\t\t\t(command == IM_CHECKPOINT) ?\n\t\t\t\t\"CHECKPOINT\" : \"CHECKPOINT_ABORT\");\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\tjobid, log_buffer);\n\n\t\t\terrcode = local_checkpoint(pjob,\n\t\t\t\t(command == IM_CHECKPOINT) ? 0 : 1, NULL);\n\n\t\t\tif (errcode != PBSE_NONE) {\t/* error, send reply */\n\t\t\t\tSEND_ERR(errcode);\n\t\t\t\tbreak;\n\t\t\t}\n\n\t\t\t/*\n\t\t\t ** If a child was started to handle the operation,\n\t\t\t ** wait to reply until the kid returns.\n\t\t\t */\n\t\t\tif (pjob->ji_momsubt) {\n\t\t\t\treply = 0;\n\t\t\t\tpjob->ji_postevent = event;\n\t\t\t\tpjob->ji_taskid = fromtask;\n\t\t\t\tbreak;\n\t\t\t}\n\n\t\t\tpost_chkpt(pjob, PBSE_NONE);\n\t\t\tret = im_compose(stream, jobid, cookie, IM_ALL_OKAY,\n\t\t\t\tevent, fromtask, IM_OLD_PROTOCOL_VER);\n\t\t\tbreak;\n\n\t\tcase\tIM_ABORT_JOB:\n\t\t\t/*\n\t\t\t ** Sender is (must be) mom superior commanding me to\n\t\t\t ** abort a JOIN_JOB or RESTART request.\n\t\t\t **\n\t\t\t ** auxiliary info (\n\t\t\t **\tnone;\n\t\t\t ** )\n\t\t\t */\n\t\t\tif (check_ms(stream, pjob))\n\t\t\t\tgoto fini;\n\t\t\tDBPRT((\"%s: ABORT_JOB %s\\n\", __func__, jobid))\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\tjobid, \"ABORT_JOB received\");\n\t\t\treply = 0;\n\t\t\tif (pjob->ji_qs.ji_svrflags &\n\t\t\t\t(JOB_SVFLG_CHKPT|JOB_SVFLG_ChkptMig)) {\n\t\t\t\tkill_job(pjob, SIGKILL);\t/* is this right? 
*/\n\t\t\t} else {\n\t\t\t\tmom_hook_input_init(&hook_input);\n\t\t\t\thook_input.pjob = pjob;\n\n\t\t\t\tmom_hook_output_init(&hook_output);\n\t\t\t\thook_output.reject_errcode = &hook_errcode;\n\t\t\t\thook_output.last_phook = &last_phook;\n\t\t\t\thook_output.fail_action = &hook_fail_action;\n\t\t\t\t(void)mom_process_hooks(HOOK_EVENT_EXECJOB_ABORT, PBS_MOM_SERVICE_NAME, mom_host, &hook_input, &hook_output, hook_msg, sizeof(hook_msg), 1);\n\t\t\t\tmom_deljob(pjob);\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase\tIM_REQUEUE:\n\t\t\t/*\n\t\t\t ** Sender is another MOM telling me that she has gone\n\t\t\t ** keyboard busy and that a job needs to be requeued\n\t\t\t **\n\t\t\t ** auxiliary info (\n\t\t\t **\tnone;\n\t\t\t ** )\n\t\t\t */\n\t\t\tDBPRT((\"%s: IM_REQUEUE job %s\\n\", __func__, jobid))\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\tjobid, \"REQUEUE received\");\n\t\t\treply = 0;\n\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) != 0) {\n\t\t\t\tpjob->ji_qs.ji_un.ji_momt.ji_exitstat = JOB_EXEC_RERUN;\n\t\t\t\t(void)kill_job(pjob, SIGKILL);\n\t\t\t\t/* Server will decide if job is rerunnable or not */\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase\tIM_SETUP_JOB:\n\t\t\t/*\n\t\t\t ** Sender is (must be) mom superior sending me setup\n\t\t\t ** information to complete a JOIN_JOB or RESTART request.\n\t\t\t **\n\t\t\t ** auxiliary info (\n\t\t\t **\tidentity\tint\n\t\t\t **\t... 
dependent\n\t\t\t ** )\n\t\t\t */\n\t\t\tif (check_ms(stream, pjob))\n\t\t\t\tgoto fini;\n\t\t\tDBPRT((\"%s: SETUP_JOB %s\\n\", __func__, jobid))\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\tjobid, \"SETUP_JOB received\");\n\n\t\t\t/*\n\t\t\t ** If there is a job_setup_final, call it,\n\t\t\t ** otherwise, send the \"not supported\" error.\n\t\t\t */\n\t\t\tif (job_setup_final == NULL) {\n\t\t\t\tSEND_ERR(PBSE_NOSUP);\n\t\t\t\tbreak;\n\t\t\t}\n\n\t\t\terrcode = job_setup_final(pjob, stream);\n\t\t\tif (errcode != PBSE_NONE) {\t/* error, send reply */\n\t\t\t\tSEND_ERR(errcode);\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tret = im_compose(stream, jobid, cookie, IM_ALL_OKAY,\n\t\t\t\tevent, fromtask, IM_OLD_PROTOCOL_VER);\n\t\t\tbreak;\n\n\t\tcase\tIM_ALL_OKAY:\t\t/* this is a REPLY */\n\t\t\t/*\n\t\t\t ** Sender is another MOM telling me that a request has\n\t\t\t ** completed just dandy.\n\t\t\t */\n\t\t\tswitch (event_com) {\n\n\t\t\t\tcase\tIM_JOIN_JOB:\n\t\t\t\t\t/*\n\t\t\t\t\t ** Sender is one of the sisterhood saying she\n\t\t\t\t\t ** got the job structure sent and she accepts it.\n\t\t\t\t\t ** I'm mother superior.\n\t\t\t\t\t **\n\t\t\t\t\t ** auxiliary info (\n\t\t\t\t\t **\toptional;\n\t\t\t\t\t ** )\n\t\t\t\t\t */\n\t\t\t\t\tif ((nodeidx > 0) &&\n\t\t\t\t\t    (nodeidx < pjob->ji_numnodes) &&\n\t\t\t\t\t    ((nodeidx-1) < pjob->ji_numrescs) &&\n  \t\t\t\t\t    (pjob->ji_resources[nodeidx-1].nodehost == NULL))\n\t\t\t\t\t\tpjob->ji_resources[nodeidx-1].nodehost = strdup(pjob->ji_hosts[nodeidx].hn_host);\n\n\t\t\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0) {\n\t\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\t\"got JOIN_JOB OKAY and I'm not MS\");\n\t\t\t\t\t\tgoto err;\n\t\t\t\t\t}\n\t\t\t\t\tDBPRT((\"%s: JOIN_JOB %s OKAY\\n\", __func__, jobid))\n\n\t\t\t\t\t/*\n\t\t\t\t\t ** If job_join_read exists, call it to read\n\t\t\t\t\t ** any extra info included with the JOIN reply.\n\t\t\t\t\t ** This function can return an error if 
there\n\t\t\t\t\t ** is no extra information or deal with it\n\t\t\t\t\t ** more gracefully and return SUCCESS.\n\t\t\t\t\t */\n\t\t\t\t\tif (job_join_read != NULL) {\n\t\t\t\t\t\t/* on error, log_message set */\n\t\t\t\t\t\tret = job_join_read(pjob, np, stream);\n\t\t\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\t\t\tgoto err;\n\t\t\t\t\t}\n\n\t\t\t\t\tfor (i=0; i<pjob->ji_numnodes; i++) {\n\t\t\t\t\t\thnodent *xp = &pjob->ji_hosts[i];\n\t\t\t\t\t\tif ((ep = (eventent *)GET_NEXT(xp->hn_events))\n\t\t\t\t\t\t\t!= NULL)\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\n\t\t\t\t\tif (do_tolerate_node_failures(pjob) &&\n\t\t\t\t\t    (nodeidx > 0) && (nodeidx < pjob->ji_numnodes)) {\n\t\t\t\t\t\treliable_job_node_add(&pjob->ji_node_list, pjob->ji_hosts[nodeidx].hn_host);\n\t\t\t\t\t}\n\n\t\t\t\t\tif (ep == NULL) {\t/* no events */\n\t\t\t\t\t\tint rcode;\n\t\t\t\t\t\tint do_break = 0;\n\t\t\t\t\t\t/*\n\t\t\t\t\t\t * All the JOIN messages have come in.\n\t\t\t\t\t\t * Call job_join_extra for local MS setup.\n\t\t\t\t\t\t */\n\t\t\t\t\t\trcode = pre_finish_exec(pjob, 1);\n\t\t\t\t\t\tswitch (rcode) {\n\t\t\t\t\t\t  case PRE_FINISH_SUCCESS_JOB_SETUP_SEND:\n\t\t\t\t\t\t\tdo_break = 1;\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t  case PRE_FINISH_FAIL_JOIN_EXTRA:\n\t\t\t\t\t\t\tgoto done;\n\t\t\t\t\t\t  case PRE_FINISH_FAIL_JOB_SETUP_SEND:\n\t\t\t\t\t\t\tsprintf(log_buffer, \"could not send setup\");\n\t\t\t\t\t\t\tgoto err;\n\t\t\t\t\t\t  case PRE_FINISH_FAIL:\n\t\t\t\t\t\t\tgoto err;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (do_break)\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t/*\n\t\t\t\t\t\t ** At this point, we are ready to call\n\t\t\t\t\t\t ** finish_exec and launch the job.\n\t\t\t\t\t\t */\n \t\t\t\t\t\tif (!do_tolerate_node_failures(pjob) || (check_job_substate(pjob, JOB_SUBSTATE_WAITING_JOIN_JOB))) {\n\t\t\t\t\t\t\tif (check_job_substate(pjob, JOB_SUBSTATE_WAITING_JOIN_JOB)) {\n\t\t\t\t\t\t\t\tset_job_substate(pjob, 
JOB_SUBSTATE_PRERUN);\n\t\t\t\t\t\t\t\tjob_save(pjob);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tfinish_exec(pjob);\n\t\t\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase\tIM_SETUP_JOB:\n\t\t\t\t\t/*\n\t\t\t\t\t ** Sender is one of the sisterhood saying she\n\t\t\t\t\t ** did the job setup step.\n\t\t\t\t\t ** I'm mother superior.\n\t\t\t\t\t **\n\t\t\t\t\t ** auxiliary info (\n\t\t\t\t\t **\tnone;\n\t\t\t\t\t ** )\n\t\t\t\t\t */\n\t\t\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0) {\n\t\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\t\"got SETUP_JOB OKAY and I'm not MS\");\n\t\t\t\t\t\tgoto err;\n\t\t\t\t\t}\n\t\t\t\t\tDBPRT((\"%s: SETUP_JOB %s from %s OKAY\\n\", __func__,\n\t\t\t\t\t\tjobid, np->hn_host))\n\t\t\t\t\tfor (i=0; i<pjob->ji_numnodes; i++) {\n\t\t\t\t\t\tnp = &pjob->ji_hosts[i];\n\t\t\t\t\t\tif ((ep = (eventent *)GET_NEXT(np->hn_events))\n\t\t\t\t\t\t\t!= NULL)\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\n\t\t\t\t\tif (ep == NULL) {\t/* all SETUPs done */\n\t\t\t\t\t\t/*\n\t\t\t\t\t\t ** Call finish_exec.  
The MS call to\n\t\t\t\t\t\t ** job_setup_final is done in job_setup.\n\t\t\t\t\t\t */\n\t\t\t\t\t\tfinish_exec(pjob);\n\t\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t\t\tLOG_DEBUG,\n\t\t\t\t\t\t\tpjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase\tIM_SUSPEND:\n\t\t\t\tcase\tIM_RESUME:\n\t\t\t\t\t/*\n\t\t\t\t\t ** Sender is one of the sisterhood saying she\n\t\t\t\t\t ** did a suspend or resume.\n\t\t\t\t\t ** I'm mother superior.\n\t\t\t\t\t **\n\t\t\t\t\t ** auxiliary info (\n\t\t\t\t\t **\tnone;\n\t\t\t\t\t ** )\n\t\t\t\t\t */\n\t\t\t\t\tname = (event_com == IM_SUSPEND) ?\n\t\t\t\t\t\t\"SUSPEND\" : \"RESUME\";\n\t\t\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0) {\n\t\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\t\"got %s OKAY and I'm not MS\", name);\n\t\t\t\t\t\tgoto err;\n\t\t\t\t\t}\n\t\t\t\t\tDBPRT((\"%s: %s %s OKAY\\n\", __func__, jobid, name))\n\n\t\t\t\t\tif (pjob->ji_mompost != NULL)\n\t\t\t\t\t\tpjob->ji_mompost(pjob, PBSE_NONE);\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase\tIM_RESTART:\n\t\t\t\tcase\tIM_CHECKPOINT:\n\t\t\t\tcase\tIM_CHECKPOINT_ABORT:\n\t\t\t\t\t/*\n\t\t\t\t\t ** Sender is one of the sisterhood saying she\n\t\t\t\t\t ** did a checkpoint or restart.\n\t\t\t\t\t ** I'm mother superior.\n\t\t\t\t\t **\n\t\t\t\t\t ** auxiliary info (\n\t\t\t\t\t **\tnone;\n\t\t\t\t\t ** )\n\t\t\t\t\t */\n\t\t\t\t\tname = (event_com == IM_RESTART) ? 
\"RESTART\" :\n\t\t\t\t\t\t(event_com == IM_CHECKPOINT) ?\n\t\t\t\t\t\t\"CHECKPOINT\" : \"CHECKPOINT_ABORT\";\n\t\t\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0) {\n\t\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\t\"got %s OKAY and I'm not MS\", name);\n\t\t\t\t\t\tgoto err;\n\t\t\t\t\t}\n\n\t\t\t\t\tDBPRT((\"%s: %s %s OKAY\\n\", __func__, jobid, name))\n\t\t\t\t\tif (pjob->ji_mompost != NULL)\n\t\t\t\t\t\tpjob->ji_mompost(pjob, PBSE_NONE);\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase\tIM_KILL_JOB:\n\t\t\t\t\t/*\n\t\t\t\t\t ** Sender is sending a response that a job\n\t\t\t\t\t ** which needs to die has been given the ax.\n\t\t\t\t\t ** I'm mother superior.\n\t\t\t\t\t **\n\t\t\t\t\t ** auxiliary info (\n\t\t\t\t\t **\tcput\tint;\n\t\t\t\t\t **\tmem\tint;\n\t\t\t\t\t **\tcpupercent int;\n\t\t\t\t\t ** )\n\t\t\t\t\t */\n\t\t\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0) {\n\t\t\t\t\t\tsprintf(log_buffer, \"got KILL_JOB OKAY and I'm not MS\");\n\t\t\t\t\t\tgoto err;\n\t\t\t\t\t}\n\t\t\t\t\tDBPRT((\"%s: KILL_JOB %s OKAY\\n\", __func__, jobid))\n\n\t\t\t\t\tpjob->ji_resources[nodeidx - 1].nr_cput = disrul(stream, &ret);\n\t\t\t\t\tBAIL(\"OK-KILL_JOB cput\")\n\t\t\t\t\tpjob->ji_resources[nodeidx - 1].nr_mem = disrul(stream, &ret);\n\t\t\t\t\tBAIL(\"OK-KILL_JOB mem\")\n\t\t\t\t\tpjob->ji_resources[nodeidx - 1].nr_cpupercent = disrul(stream, &ret);\n\t\t\t\t\tBAIL(\"OK-KILL_JOB cpupercent\")\n\n\t\t\t\t\tDBPRT((\"%s: %s FINAL from %d cpu %lu sec mem %lu kb\\n\",\n\t\t\t\t\t       __func__, jobid, nodeidx,\n\t\t\t\t\t       pjob->ji_resources[nodeidx - 1].nr_cput,\n\t\t\t\t\t       pjob->ji_resources[nodeidx - 1].nr_mem))\n\t\t\t\t\trecv_resc_used_from_sister(stream, pjob, nodeidx - 1);\n\n\t\t\t\t\t/* don't close stream in case other jobs use it */\n\t\t\t\t\tnp->hn_sister = SISTER_KILLDONE;\n\t\t\t\t\tfor (i = 1; i < pjob->ji_numnodes; i++) {\n\t\t\t\t\t\tif (reliable_job_node_find(&pjob->ji_failed_node_list, pjob->ji_hosts[i].hn_host) == NULL &&\n\t\t\t\t\t\t 
   pjob->ji_hosts[i].hn_sister == SISTER_OKAY)\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\tif (i == pjob->ji_numnodes) { /* all dead */\n\t\t\t\t\t\tDBPRT((\"%s: ALL DONE, set EXITING job %s\\n\", __func__, jobid))\n\t\t\t\t\t\tif (check_job_substate(pjob, JOB_SUBSTATE_KILLSIS)) {\n\t\t\t\t\t\t\tset_job_state(pjob, JOB_STATE_LTR_EXITING);\n\t\t\t\t\t\t\tset_job_substate(pjob, JOB_SUBSTATE_EXITING);\n\t\t\t\t\t\t\texiting_tasks = 1;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase\tIM_DELETE_JOB_REPLY:\n\t\t\t\t\t/*\n\t\t\t\t\t ** Sender is MOM responding to a \"delete job and reply\"\n\t\t\t\t\t ** request.\n\t\t\t\t\t **\n\t\t\t\t\t ** auxiliary info - none\n\t\t\t\t\t */\n\t\t\t\t\tDBPRT((\"%s: reply for DELETE_JOB_REPLY %s received from %s\\n\", __func__, pjob->ji_qs.ji_jobid, np->hn_host))\n\t\t\t\t\tnp->hn_sister = SISTER_KILLDONE;\n\t\t\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0) {\n\t\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\t\"got DELETE_JOB_REPLY OKAY and I'm not MS\");\n\t\t\t\t\t\tgoto err;\n\t\t\t\t\t}\n\t\t\t\t\tDBPRT((\"%s: DELETE_JOB_REPLY %s OKAY\\n\", __func__, jobid))\n\t\t\t\t\tchk_del_job(pjob, 0);\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase\tIM_SPAWN_TASK:\n\t\t\t\t\t/*\n\t\t\t\t\t ** Sender is MOM responding to a \"spawn_task\"\n\t\t\t\t\t ** request.\n\t\t\t\t\t **\n\t\t\t\t\t ** auxiliary info (\n\t\t\t\t\t **\ttask id\t\ttm_task_id;\n\t\t\t\t\t ** )\n\t\t\t\t\t */\n\t\t\t\t\ttaskid = disrui(stream, &ret);\n\t\t\t\t\tBAIL(\"OK-SPAWN taskid\")\n\t\t\t\t\tDBPRT((\"%s: SPAWN_TASK %s OKAY task %8.8X\\n\",\n\t\t\t\t\t\t__func__, jobid, taskid))\n\t\t\t\t\tptask = task_check(pjob, efd, event_task);\n\t\t\t\t\tif (ptask == NULL)\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t(void)tm_reply(efd, ptask->ti_protover,\n\t\t\t\t\t\tTM_OKAY, event_client);\n\t\t\t\t\t(void)diswui(efd, taskid);\n\t\t\t\t\t(void)dis_flush(efd);\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase\tIM_GET_TASKS:\n\t\t\t\t\t/*\n\t\t\t\t\t ** Sender is MOM giving a list of tasks which 
she\n\t\t\t\t\t ** has started for this job.\n\t\t\t\t\t **\n\t\t\t\t\t ** auxiliary info (\n\t\t\t\t\t **\ttask id\t\ttm_task_id;\n\t\t\t\t\t **\t...\n\t\t\t\t\t **\ttask id\t\ttm_task_id;\n\t\t\t\t\t ** )\n\t\t\t\t\t */\n\t\t\t\t\tDBPRT((\"%s: GET_TASKS %s OKAY\\n\", __func__, jobid))\n\t\t\t\t\tptask = task_check(pjob, efd, event_task);\n\t\t\t\t\tif (ptask == NULL)\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t(void)tm_reply(efd, ptask->ti_protover,\n\t\t\t\t\t\tTM_OKAY, event_client);\n\t\t\t\t\tfor (;;) {\n\t\t\t\t\t\tDIS_tpp_funcs();\n\t\t\t\t\t\ttaskid = disrui(stream, &ret);\n\t\t\t\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\t\t\t\tif (ret == DIS_EOD)\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\telse {\n\t\t\t\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\t\t\tbail_format,\n\t\t\t\t\t\t\t\t\t\"OK-GET_TASK idlist\");\n\t\t\t\t\t\t\t\tgoto err;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\tDIS_tcp_funcs();\n\t\t\t\t\t\t(void)diswui(efd, taskid);\n\t\t\t\t\t}\n\t\t\t\t\tDIS_tcp_funcs();\n\t\t\t\t\t(void)diswui(efd, TM_NULL_TASK);\n\t\t\t\t\t(void)dis_flush(efd);\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase\tIM_SIGNAL_TASK:\n\t\t\t\t\t/*\n\t\t\t\t\t ** Sender is MOM with a good signal to report.\n\t\t\t\t\t **\n\t\t\t\t\t ** auxiliary info (\n\t\t\t\t\t **\tnone;\n\t\t\t\t\t ** )\n\t\t\t\t\t */\n\t\t\t\t\tDBPRT((\"%s: %s SIGNAL_TASK %8.8X OKAY\\n\",\n\t\t\t\t\t\t__func__, jobid, event_task))\n\t\t\t\t\tptask = task_check(pjob, efd, event_task);\n\t\t\t\t\tif (ptask == NULL)\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t(void)tm_reply(efd, ptask->ti_protover,\n\t\t\t\t\t\tTM_OKAY, event_client);\n\t\t\t\t\t(void)dis_flush(efd);\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase\tIM_OBIT_TASK:\n\t\t\t\t\t/*\n\t\t\t\t\t ** Sender is MOM with a death report.\n\t\t\t\t\t **\n\t\t\t\t\t ** auxiliary info (\n\t\t\t\t\t **\texit value\tint;\n\t\t\t\t\t ** )\n\t\t\t\t\t */\n\t\t\t\t\texitval = disrsi(stream, &ret);\n\t\t\t\t\tBAIL(\"OK-OBIT_TASK exitval\")\n\t\t\t\t\tDBPRT((\"%s: %s OBIT_TASK %8.8X OKAY exit val 
%d\\n\",\n\t\t\t\t\t\t__func__, jobid, event_task, exitval))\n\t\t\t\t\tptask = task_check(pjob, efd, event_task);\n\t\t\t\t\tif (ptask == NULL)\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t(void)tm_reply(efd, ptask->ti_protover,\n\t\t\t\t\t\tTM_OKAY, event_client);\n\t\t\t\t\t(void)diswsi(efd, exitval);\n\t\t\t\t\t(void)dis_flush(efd);\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase\tIM_GET_INFO:\n\t\t\t\t\t/*\n\t\t\t\t\t ** Sender is MOM with a named info to report.\n\t\t\t\t\t **\n\t\t\t\t\t ** auxiliary info (\n\t\t\t\t\t **\tinfo\t\tcounted string;\n\t\t\t\t\t ** )\n\t\t\t\t\t */\n\t\t\t\t\tinfo = disrcs(stream, &len, &ret);\n\t\t\t\t\tBAIL(\"OK-GET_INFO info\")\n\t\t\t\t\tDBPRT((\"%s: %s GET_INFO %8.8X OKAY\\n\",\n\t\t\t\t\t\t__func__, jobid, event_task))\n\t\t\t\t\tptask = task_check(pjob, efd, event_task);\n\t\t\t\t\tif (ptask == NULL)\n\t\t\t\t\t\tbreak;\n\n\t\t\t\t\t(void)tm_reply(efd, ptask->ti_protover,\n\t\t\t\t\t\tTM_OKAY, event_client);\n\t\t\t\t\t(void)diswcs(efd, info, len);\n\t\t\t\t\t(void)dis_flush(efd);\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase\tIM_GET_RESC:\n\t\t\t\t\t/*\n\t\t\t\t\t ** Sender is MOM with a resource info to report.\n\t\t\t\t\t **\n\t\t\t\t\t ** auxiliary info (\n\t\t\t\t\t **\tinfo\t\tcounted string;\n\t\t\t\t\t ** )\n\t\t\t\t\t */\n\t\t\t\t\tinfo = disrst(stream, &ret);\n\t\t\t\t\tBAIL(\"OK-GET_RESC info\")\n\t\t\t\t\tDBPRT((\"%s: %s GET_RESC %8.8X OKAY\\n\",\n\t\t\t\t\t\t__func__, jobid, event_task))\n\t\t\t\t\tptask = task_check(pjob, efd, event_task);\n\t\t\t\t\tif (ptask == NULL)\n\t\t\t\t\t\tbreak;\n\n\t\t\t\t\t(void)tm_reply(efd, ptask->ti_protover,\n\t\t\t\t\t\tTM_OKAY, event_client);\n\t\t\t\t\t(void)diswst(efd, info);\n\t\t\t\t\t(void)dis_flush(efd);\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase\tIM_POLL_JOB:\n\t\t\t\t\t/*\n\t\t\t\t\t ** I must be Mother Superior for the job and\n\t\t\t\t\t ** this is a reply with job resources to\n\t\t\t\t\t ** tally up.\n\t\t\t\t\t **\n\t\t\t\t\t ** auxiliary info (\n\t\t\t\t\t **\trecommendation\tint;\n\t\t\t\t\t 
**\tcput\t\tu_long;\n\t\t\t\t\t **\tmem\t\tu_long;\n\t\t\t\t\t ** )\n\t\t\t\t\t */\n\t\t\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0) {\n\t\t\t\t\t\tsprintf(log_buffer, \"got POLL_JOB and I'm not MS\");\n\t\t\t\t\t\tgoto err;\n\t\t\t\t\t}\n\t\t\t\t\texitval = disrsi(stream, &ret);\n\t\t\t\t\tBAIL(\"OK-POLL_JOB exitval\")\n\t\t\t\t\tpjob->ji_resources[nodeidx - 1].nr_cput = disrul(stream, &ret);\n\t\t\t\t\tBAIL(\"OK-POLL_JOB cput\")\n\t\t\t\t\tpjob->ji_resources[nodeidx - 1].nr_mem = disrul(stream, &ret);\n\t\t\t\t\tBAIL(\"OK-POLL_JOB mem\")\n\t\t\t\t\tpjob->ji_resources[nodeidx - 1].nr_cpupercent = disrul(stream, &ret);\n\t\t\t\t\tBAIL(\"OK-POLL_JOB cpupercent\")\n\t\t\t\t\trecv_resc_used_from_sister(stream, pjob, nodeidx - 1);\n\t\t\t\t\tDBPRT((\"%s: POLL_JOB %s OKAY kill %d cpu %lu mem %lu\\n\",\n\t\t\t\t\t       __func__, jobid, exitval,\n\t\t\t\t\t       pjob->ji_resources[nodeidx - 1].nr_cput,\n\t\t\t\t\t       pjob->ji_resources[nodeidx - 1].nr_mem))\n\n\t\t\t\t\tif (exitval)\n\t\t\t\t\t\tpjob->ji_nodekill = np->hn_node;\n\t\t\t\t\tbreak;\n\n#ifdef PMIX\n\t\t\t\tcase\tIM_PMIX:\n\t\t\t\t\t/*\n\t\t\t\t\t * I must be mother superior for the job and\n\t\t\t\t\t * this is a reply for a PMIX operation.\n\t\t\t\t\t *\n\t\t\t\t\t * auxiliary info (\n\t\t\t\t\t *\toperation\tint;\n\t\t\t\t\t * )\n\t\t\t\t\t */\n\t\t\t\t\tsprintf(log_buffer, \"IM_PMIX reply received\");\n\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\t\t\tjobid, log_buffer);\n\t\t\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0) {\n\t\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\t\"Received IM_PMIX reply \"\n\t\t\t\t\t\t\t\"and this is not MS\");\n\t\t\t\t\t\tgoto err;\n\t\t\t\t\t}\n\t\t\t\t\t/* TODO: Handle IM_PMIX reply */\n\t\t\t\t\tsprintf(log_buffer, \"Handle IM_PMIX reply here\");\n\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\t\t\tjobid, log_buffer);\n\t\t\t\t\tbreak;\n#endif /* PMIX 
*/\n\n\t\t\t\tcase\tIM_UPDATE_JOB:\n\t\t\t\t\tfor (i = 0; i < pjob->ji_numnodes; i++) {\n\t\t\t\t\t\thnodent *xp = &pjob->ji_hosts[i];\n\t\t\t\t\t\tep = (eventent *)GET_NEXT(xp->hn_events);\n  \t\t\t\t\t\tif (ep != NULL)\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\n\t\t\t\t\tif ((nodeidx > 0) && (nodeidx < pjob->ji_numnodes)) {\n\t\t\t\t\t\tchar *hn;\n\n\t\t\t\t\t\thn  = pjob->ji_hosts[nodeidx].hn_host;\n\t\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t\"received IM_ALL_OK job update from host %s\", hn?hn:\"\");\n\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\t\t}\n\n\t\t\t\t\tif (ep == NULL) {\n\t\t\t\t\t\t/* no events left */\n#ifndef WIN32\n\t\t\t\t\t\tif (do_tolerate_node_failures(pjob) && (pjob->ji_parent2child_job_update_status_pipe != -1)) {\n\t\t\t\t\t\t\tint cmd = IM_ALL_OKAY;\n\t\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, \"all job updates from sisters done\");\n\t\t\t\t\t\t    \twrite_pipe_data(pjob->ji_parent2child_job_update_status_pipe, (int *)&cmd, sizeof(int));\n\t\t\t\t\t\t}\n#endif\n\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase\tIM_EXEC_PROLOGUE:\n\t\t\t\t\tfor (i = 0; i < pjob->ji_numnodes; i++) {\n\t\t\t\t\t\thnodent *xp = &pjob->ji_hosts[i];\n\t\t\t\t\t\tep = (eventent *)GET_NEXT(xp->hn_events);\n  \t\t\t\t\t\tif (ep != NULL)\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\n\t\t\t\t\tif ((nodeidx > 0) && (nodeidx < pjob->ji_numnodes)) {\n\t\t\t\t\t\tchar *hn;\n\t\t\t\t\t\treliable_job_node *rjn = NULL;\n\n\t\t\t\t\t\thn  = pjob->ji_hosts[nodeidx].hn_host;\n\t\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t\"received IM_ALL_OK prologue hook from host %s\", hn?hn:\"\");\n\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\t\t\t/* note that do_tolerate_node_failures() could return 0 if\n\t\t\t\t\t\t * tolerate_node_failures=job_start but job already moved 
past\n\t\t\t\t\t\t * the starting up phase. The second if clause will catch\n\t\t\t\t\t\t * previously failed node host due to not getting ack for\n\t\t\t\t\t\t * execjob_prologue hook execution, but we got the\n\t\t\t\t\t\t * ack now, just delayed.\n\t\t\t\t\t\t */\n\t\t\t\t\t\trjn = reliable_job_node_find(&pjob->ji_failed_node_list, hn);\n\t\t\t\t\t\tif (do_tolerate_node_failures(pjob) || (rjn != NULL)) {\n\t\t\t\t\t\t\t(void)reliable_job_node_set_prologue_hook_success(&pjob->ji_node_list, hn);\n\t\t\t\t\t\t\tif (rjn != NULL) {\n\t\t\t\t\t\t\t\tdelete_link(&rjn->rjn_link);\n\t\t\t\t\t\t\t\tfree(rjn);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n#ifndef WIN32\n\t\t\t\t\tif (ep == NULL) {\n\t\t\t\t\t\t/* no events left */\n\t\t\t\t\t\tif (do_tolerate_node_failures(pjob) && (pjob->ji_mjspipe2 != -1)) {\n\t\t\t\t\t\t\tint cmd = IM_ALL_OKAY;\n\t\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, \"all job prologue hook from sisters done\");\n\t\t\t\t\t\t    \twrite_pipe_data(pjob->ji_mjspipe2, (int *)&cmd, sizeof(int));\n\t\t\t\t\t\t}\n\n\t\t\t\t\t}\n#endif\n\t\t\t\t\tbreak;\n\n\t\t\t\tdefault:\n\t\t\t\t\tsprintf(log_buffer, \"unknown request type %d saved\",\n\t\t\t\t\t\tevent_com);\n\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\tbreak;\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\t\tcase\tIM_CRED:\n\t\t\tret = im_cred_read(pjob, np, stream);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\t\t\tbreak;\n#endif\n\n\t\tcase\tIM_ERROR:\t\t/* this is a REPLY */\n\t\tcase\tIM_ERROR2:\t\t/* this is a REPLY */\n\t\t\t/*\n\t\t\t ** Sender is responding to a request with an error code.\n\t\t\t **\n\t\t\t ** auxiliary info (\n\t\t\t **\terror value\tint;\n\t\t\t ** )\n\t\t\t */\n\t\t\terrcode = disrsi(stream, &ret);\n\t\t\tBAIL(\"ERROR errcode\")\n\n\t\t\tif (command == IM_ERROR2) {\n\t\t\t\terrmsg = disrst(stream, &ret);\n\t\t\t}\n\n\t\t\tswitch (event_com) 
{\n\n\t\t\t\tcase\tIM_JOIN_JOB:\n\t\t\t\t\t/*\n\t\t\t\t\t * A MOM has rejected a request to join a job.\n\t\t\t\t\t * We need to send ABORT_JOB to all the sisterhood\n\t\t\t\t\t * and fail the job start to server.\n\t\t\t\t\t * I'm mother superior.\n\t\t\t\t\t */\n\t\t\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0) {\n\t\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\t\"JOIN_JOB ERROR and I'm not MS\");\n\t\t\t\t\t\tgoto err;\n\t\t\t\t\t}\n\t\t\t\t\tDBPRT((\"%s: JOIN_JOB %s returned ERROR %d\\n\",\n\t\t\t\t\t\t__func__, jobid, errcode))\n\t\t\t\t\tjob_start_error(pjob, errcode, (do_tolerate_node_failures(pjob) ? addr_to_hostname(addr) : netaddr(addr)), \"JOIN_JOB\");\n\t\t\t\t\tif (errmsg != NULL) {\n\t\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t\t\tLOG_INFO, jobid, errmsg);\n\t\t\t\t\t}\n\t\t\t\t\tif (!do_tolerate_node_failures(pjob))\n\t\t\t\t\t\tbreak;\n\n\t\t\t\t\tfor (i = 0; i < pjob->ji_numnodes; i++) {\n\t\t\t\t\t\thnodent *xp = &pjob->ji_hosts[i];\n\t\t\t\t\t\tif ((ep = (eventent *)GET_NEXT(xp->hn_events)) != NULL)\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\tif (ep == NULL) {\t/* no events */\n\t\t\t\t\t\tint rcode;\n\t\t\t\t\t\tint do_break = 0;\n\n\t\t\t\t\t\t/* All the JOIN messages have come in. 
*/\n\t\t\t\t\t\trcode = pre_finish_exec(pjob, 1);\n\t\t\t\t\t\tswitch (rcode) {\n\t\t\t\t\t\t  case PRE_FINISH_SUCCESS_JOB_SETUP_SEND:\n\t\t\t\t\t\t\tdo_break = 1;\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t  case PRE_FINISH_FAIL_JOIN_EXTRA:\n\t\t\t\t\t\t\tgoto done;\n\t\t\t\t\t\t  case PRE_FINISH_FAIL_JOB_SETUP_SEND:\n\t\t\t\t\t\t\tsprintf(log_buffer, \"could not send setup\");\n\t\t\t\t\t\t\tgoto err;\n\t\t\t\t\t\t  case PRE_FINISH_FAIL:\n\t\t\t\t\t\t\tgoto err;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (do_break)\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t/*\n\t\t\t\t\t\t ** At this point, we are ready to call\n\t\t\t\t\t\t ** finish_exec and launch the job.\n\t\t\t\t\t\t */\n\t\t\t\t\t\tif (check_job_substate(pjob, JOB_SUBSTATE_WAITING_JOIN_JOB)) {\n\t\t\t\t\t\t\tset_job_substate(pjob, JOB_SUBSTATE_PRERUN);\n\t\t\t\t\t\t\tjob_save(pjob);\n\t\t\t\t\t\t}\n\t\t\t\t\t\tfinish_exec(pjob);\n\t\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase\tIM_EXEC_PROLOGUE:\n\t\t\t\t\t/*\n\t\t\t\t\t * A MOM prologue hook execution has been rejected\n\t\t\t\t\t * for the job.  
We need to send ABORT_JOB to all\n\t\t\t\t\t * the sisterhood and fail the job start to server.\n\t\t\t\t\t * I'm mother superior.\n\t\t\t\t\t */\n\t\t\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0) {\n\t\t\t\t\t\tsprintf(log_buffer, \"IM_EXEC_PROLOGUE ERROR and I'm not MS\");\n\t\t\t\t\t\tgoto err;\n\t\t\t\t\t}\n\t\t\t\t\tDBPRT((\"%s: IM_EXEC_PROLOGUE %s returned ERROR %d\\n\", __func__, jobid, errcode))\n\n\t\t\t\t\tjob_start_error(pjob, errcode,(do_tolerate_node_failures(pjob)?addr_to_hostname(addr):netaddr(addr)), \"IM_EXEC_PROLOGUE\");\n\t\t\t\t\tif (errmsg != NULL) {\n\t\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t\t\t  LOG_INFO, jobid, errmsg);\n\t\t\t\t\t}\n\t\t\t\t\tif (!do_tolerate_node_failures(pjob))\n\t\t\t\t\t\tbreak;\n\n\t\t\t\t\tfor (i = 0; i < pjob->ji_numnodes; i++) {\n\t\t\t\t\t\thnodent *xp = &pjob->ji_hosts[i];\n\t\t\t\t\t\tif ((ep = (eventent *)GET_NEXT(xp->hn_events))\n\t\t\t\t\t\t\t!= NULL)\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\n#ifndef WIN32\n\t\t\t\t\tif (ep == NULL) {\n\t\t\t\t\t\t/* no events left */\n\t\t\t\t\t\tif (pjob->ji_mjspipe2 != -1) {\n\t\t\t\t\t\t\tint cmd = IM_ALL_OKAY;\n\t\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, \"all job prologue hooks from sisters executed\");\n\t\t\t\t\t\t    \twrite_pipe_data(pjob->ji_mjspipe2, (int *)&cmd, sizeof(int));\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n#endif\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase\tIM_SETUP_JOB:\n\t\t\t\t\t/*\n\t\t\t\t\t ** A MOM has rejected a request to setup a job.\n\t\t\t\t\t ** If the error is PBSE_NOSUP, the job might be\n\t\t\t\t\t ** able to continue.  Otherwise, we need to send\n\t\t\t\t\t ** ABORT_JOB to all the sisterhood and fail the\n\t\t\t\t\t ** job start to server.  The determination of\n\t\t\t\t\t ** if the job cannot run in the case of PBSE_NOSUP\n\t\t\t\t\t ** is done when MS runs job_setup_final.  
Then, the\n\t\t\t\t\t ** lack of information from this node will be\n\t\t\t\t\t ** noted and if it cannot be tolerated, the\n\t\t\t\t\t ** job will be aborted.\n\t\t\t\t\t ** I'm mother superior.\n\t\t\t\t\t */\n\t\t\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0) {\n\t\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\t\"SETUP_JOB ERROR and I'm not MS\");\n\t\t\t\t\t\tgoto err;\n\t\t\t\t\t}\n\t\t\t\t\tDBPRT((\"%s: SETUP_JOB %s returned ERROR %d\\n\",\n\t\t\t\t\t\t__func__, jobid, errcode))\n\t\t\t\t\tif (errcode != PBSE_NOSUP) {\n\t\t\t\t\t\tjob_start_error(pjob, errcode, (do_tolerate_node_failures(pjob)?addr_to_hostname(addr):netaddr(addr)), \"SETUP_JOB\");\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase\tIM_SUSPEND:\n\t\t\t\tcase\tIM_RESUME:\n\t\t\t\t\t/*\n\t\t\t\t\t ** A MOM has failed to suspend or resume a job.\n\t\t\t\t\t ** I'm mother superior.\n\t\t\t\t\t */\n\t\t\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0) {\n\t\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\t\"%s ERROR and I'm not MS\",\n\t\t\t\t\t\t\t(event_com == IM_SUSPEND) ?\n\t\t\t\t\t\t\t\"SUSPEND\" : \"RESUME\");\n\t\t\t\t\t\tgoto err;\n\t\t\t\t\t}\n\t\t\t\t\tsprintf(log_buffer, \"%s returned ERROR %d\",\n\t\t\t\t\t\t(event_com == IM_SUSPEND) ?\n\t\t\t\t\t\t\"SUSPEND\" : \"RESUME\", errcode);\n\t\t\t\t\tlog_joberr(-1, __func__, log_buffer, jobid);\n\n\t\t\t\t\tif (pjob->ji_mompost != NULL)\n\t\t\t\t\t\tpjob->ji_mompost(pjob, errcode);\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase\tIM_RESTART:\n\t\t\t\tcase\tIM_CHECKPOINT:\n\t\t\t\tcase\tIM_CHECKPOINT_ABORT:\n\t\t\t\t\t/*\n\t\t\t\t\t ** A MOM has failed to do a checkpoint.\n\t\t\t\t\t ** I'm mother superior.\n\t\t\t\t\t **\n\t\t\t\t\t ** auxiliary info (\n\t\t\t\t\t **\tnone;\n\t\t\t\t\t ** )\n\t\t\t\t\t */\n\t\t\t\t\tname = (event_com == IM_RESTART) ? 
\"RESTART\" :\n\t\t\t\t\t\t(event_com == IM_CHECKPOINT) ?\n\t\t\t\t\t\t\"CHECKPOINT\" : \"CHECKPOINT_ABORT\";\n\n\t\t\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0) {\n\t\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\t\"%s ERROR and I'm not MS\", name);\n\t\t\t\t\t\tgoto err;\n\t\t\t\t\t}\n\t\t\t\t\tsprintf(log_buffer, \"%s returned ERROR %d\",\n\t\t\t\t\t\tname, errcode);\n\t\t\t\t\tlog_joberr(-1, __func__, log_buffer, jobid);\n\n\t\t\t\t\tif (pjob->ji_mompost != NULL)\n\t\t\t\t\t\tpjob->ji_mompost(pjob, errcode);\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase\tIM_ABORT_JOB:\n\t\t\t\tcase\tIM_KILL_JOB:\n\t\t\t\t\t/*\n\t\t\t\t\t ** Job cleanup failed on a sister.\n\t\t\t\t\t ** Wait for everybody to respond then finishup.\n\t\t\t\t\t ** I'm mother superior.\n\t\t\t\t\t */\n\t\t\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0) {\n\t\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\t\"KILL/ABORT ERROR and I'm not MS\");\n\t\t\t\t\t\tgoto err;\n\t\t\t\t\t}\n\t\t\t\t\tDBPRT((\"%s: KILL/ABORT JOB %s returned ERROR %d\\n\",\n\t\t\t\t\t\t__func__, jobid, errcode))\n\n\t\t\t\t\tif (errcode == PBSE_HOOKERROR) {\n\t\t\t\t\t\tif (errmsg != NULL) {\n\t\t\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t\t\t\tLOG_INFO,\n\t\t\t\t\t\t\t\tpjob->ji_qs.ji_jobid, errmsg);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tnp->hn_sister = errcode ? 
errcode : SISTER_KILLDONE;\n\t\t\t\t\tfor (i=1; i<pjob->ji_numnodes; i++) {\n\t\t\t\t\t\tif ((reliable_job_node_find(&pjob->ji_failed_node_list, pjob->ji_hosts[i].hn_host) == NULL) && (pjob->ji_hosts[i].hn_sister == SISTER_OKAY))\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\tif (i == pjob->ji_numnodes) {\t/* all dead */\n\t\t\t\t\t\tif (check_job_substate(pjob, JOB_SUBSTATE_KILLSIS)) {\n\t\t\t\t\t\t\tset_job_substate(pjob, JOB_SUBSTATE_EXITING);\n\t\t\t\t\t\t\texiting_tasks = 1;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase\tIM_DELETE_JOB_REPLY:\n\t\t\t\t\t/*\n\t\t\t\t\t ** Job delete failed on a sister.\n\t\t\t\t\t ** Wait for everybody to respond then finishup.\n\t\t\t\t\t ** I'm mother superior.\n\t\t\t\t\t */\n\t\t\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0) {\n\t\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\t\"DEL_JOB_REPLY ERROR and I'm not MS\");\n\t\t\t\t\t\tgoto err;\n\t\t\t\t\t}\n\t\t\t\t\tDBPRT((\"%s: DEL_JOB_REPLY job %s returned ERROR %d\\n\",\n\t\t\t\t\t\t__func__, jobid, errcode))\n\t\t\t\t\tif ((errcode == 0) || (errcode == PBSE_JOBEXIST))\n\t\t\t\t\t\tnp->hn_sister = SISTER_KILLDONE;\n\t\t\t\t\telse\n\t\t\t\t\t\tnp->hn_sister = errcode;\n\t\t\t\t\tchk_del_job(pjob, errcode);\n\n\t\t\t\t\tif (errmsg != NULL) {\n\t\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t\t\tLOG_INFO, pjob->ji_qs.ji_jobid, errmsg);\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase\tIM_SPAWN_TASK:\n\t\t\t\tcase\tIM_GET_TASKS:\n\t\t\t\tcase\tIM_SIGNAL_TASK:\n\t\t\t\tcase\tIM_OBIT_TASK:\n\t\t\t\tcase\tIM_GET_INFO:\n\t\t\t\tcase\tIM_GET_RESC:\n\t\t\t\t\t/*\n\t\t\t\t\t ** A user attempt failed, inform process.\n\t\t\t\t\t */\n\t\t\t\t\tDBPRT((\"%s: REQUEST %d %s returned ERROR %d\\n\",\n\t\t\t\t\t\t__func__, event_com, jobid, errcode))\n\t\t\t\t\tptask = task_check(pjob, efd, event_task);\n\t\t\t\t\tif (ptask == NULL)\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t(void)tm_reply(efd, ptask->ti_protover,\n\t\t\t\t\t\tTM_ERROR, 
event_client);\n\t\t\t\t\t(void)diswsi(efd, errcode);\n\t\t\t\t\t(void)dis_flush(efd);\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase\tIM_POLL_JOB:\n\t\t\t\t\t/*\n\t\t\t\t\t ** I must be Mother Superior for the job and\n\t\t\t\t\t ** this is an error reply to a poll request.\n\t\t\t\t\t */\n\t\t\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0) {\n\t\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\t\"POLL_JOB ERROR and I'm not MS\");\n\t\t\t\t\t\tgoto err;\n\t\t\t\t\t}\n\n\t\t\t\t\tif (do_tolerate_node_failures(pjob)) {\n\t\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"ignoring POLL_JOB error from failed mom %s as job is tolerant of node failures\", np->hn_host?np->hn_host:\"\");\n\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\n\t\t\t\t\tDBPRT((\"%s: POLL_JOB %s returned ERROR %d\\n\",\n\t\t\t\t\t\t__func__, jobid, errcode))\n\t\t\t\t\tsprintf(log_buffer, \"POLL_JOB returned ERROR %d\",\n\t\t\t\t\t\terrcode);\n\t\t\t\t\tlog_joberr(-1, __func__, log_buffer, jobid);\n\n\t\t\t\t\tnp->hn_sister = errcode ? 
errcode : SISTER_BADPOLL;\n\t\t\t\t\tpjob->ji_nodekill = np->hn_node;\n\t\t\t\t\tbreak;\n\n#ifdef PMIX\n\t\t\t\tcase\tIM_PMIX:\n\t\t\t\t\t/*\n\t\t\t\t\t * I must be mother superior for the job and\n\t\t\t\t\t * this is an error response to a PMIX request.\n\t\t\t\t\t */\n\t\t\t\t\tsprintf(log_buffer, \"IM_PMIX error encountered\");\n\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\t\t\tjobid, log_buffer);\n\t\t\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0) {\n\t\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\t\"IM_PMIX error and this is not MS\");\n\t\t\t\t\t\tgoto err;\n\t\t\t\t\t}\n\t\t\t\t\t/* TODO: Handle IM_PMIX error */\n\t\t\t\t\tsprintf(log_buffer, \"Handle IM_PMIX error here\");\n\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\t\t\tjobid, log_buffer);\n\t\t\t\t\tbreak;\n#endif /* PMIX */\n\n\t\t\t\tcase\tIM_UPDATE_JOB:\n\t\t\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0) {\n\t\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"IM_UPDATE_JOB ERROR and I'm not MS\");\n\t\t\t\t\t\tgoto err;\n\t\t\t\t\t}\n\t\t\t\t\tDBPRT((\"%s: IM_UPDATE_JOB %s returned ERROR %d\\n\", __func__, jobid, errcode))\n\t\t\t\t\tif (errmsg != NULL) {\n\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t\t\t  LOG_INFO, jobid, errmsg);\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\tsprintf(log_buffer, \"unknown command %d error\",\n\t\t\t\t\t\tevent_com);\n\t\t\t\t\tgoto err;\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase\tIM_SEND_RESC:\n\t\t\t/*\n\t\t\t ** I must be Mother Superior for the job and\n\t\t\t ** this is a reply with job resources to\n\t\t\t ** tally up.\n\t\t\t **\n\t\t\t */\n\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"got IM_SEND_RESC and I'm not MS\");\n\t\t\t\tgoto err;\n\t\t\t}\n\n\t\t\tnodehost = disrst(stream, &ret);\n\t\t\tBAIL(\"nodehost\")\n\t\t\tresc_idx = -1;\n\t\t\tfor (i=0; i < pjob->ji_numrescs; i++) {\n\t\t\t\tif 
((pjob->ji_resources[i].nodehost != NULL) &&\n\t\t\t\t    (compare_short_hostname(pjob->ji_resources[i].nodehost, nodehost) == 0)) {\n\t\t\t\t\tresc_idx = i;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (resc_idx == -1) {\n\t\t\t\tnoderes *tmparr = NULL;\n\t\t\t\t/* add an entry to pjob->ji_resources */\n\t\t\t\t/* for this incoming resource report */\n\n\t\t\t\ttmparr = (noderes *)realloc(\n\t\t\t\t\t\tpjob->ji_resources,\n\t\t\t\t\t\t(pjob->ji_numrescs+1)*sizeof(noderes));\n\n\t\t\t\tif (tmparr == NULL) {\n\t\t\t\t\tsnprintf(log_buffer,\n\t\t\t\t\t\tsizeof(log_buffer),\n\t\t\t\t\t\t\"realloc failure extending\"\n\t\t\t\t\t\t\" pjob->ji_resources\");\n\t\t\t\t\tgoto err;\n\t\t\t\t}\n\t\t\t\tpjob->ji_resources = tmparr;\n\t\t\t\tresc_idx = pjob->ji_numrescs;\n\t\t\t\tpjob->ji_resources[resc_idx].nodehost =\n\t\t\t\t\tstrdup(nodehost);\n\t\t\t\tif (pjob->ji_resources[resc_idx].nodehost == NULL) {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t\"strdup failure setting nodehost\");\n\t\t\t\t\tgoto err;\n\t\t\t\t}\n\t\t\t\tclear_attr(&pjob->ji_resources[resc_idx].nr_used,\n\t\t\t\t\t\t&job_attr_def[JOB_ATR_resc_used]);\n\t\t\t\tpjob->ji_numrescs++;\n\t\t\t}\n\t\t\tpjob->ji_resources[resc_idx].nr_cput =\n\t\t\t\tdisrul(stream, &ret);\n\t\t\tBAIL(\"resources_used.cput\")\n\t\t\tconvert_duration_to_str(pjob->ji_resources[resc_idx].nr_cput, timebuf, TIMEBUF_SIZE);\n\n\t\t\tpjob->ji_resources[resc_idx].nr_mem =\n\t\t\t\tdisrul(stream, &ret);\n\t\t\tBAIL(\"resources_used.mem\")\n\t\t\tpjob->ji_resources[resc_idx].nr_cpupercent =\n\t\t\t\tdisrul(stream, &ret);\n\t\t\tBAIL(\"resources_used.cpupercent\")\n\t\t\t/* report the entry just filled in, not ji_resources[nodeidx-1] */\n\t\t\tDBPRT((\"%s: SEND_RESC %s OKAY resc_idx %d cpu %lu mem %lu\\n\",\n\t\t\t\t__func__, jobid, resc_idx,\n\t\t\t\tpjob->ji_resources[resc_idx].nr_cput,\n\t\t\t\tpjob->ji_resources[resc_idx].nr_mem))\n\n\t\t\tpjob->ji_resources[resc_idx].nr_status = PBS_NODERES_DELETE;\n\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"%s cput=%s mem=%lukb\", nodehost,
timebuf,\n\t\t\t\tpjob->ji_resources[resc_idx].nr_mem);\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB,\n\t\t\t\tLOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\n\t\t\tfree(nodehost);\n\t\t\tnodehost = NULL;\n\t\t\tenqueue_update_for_send(pjob, IS_RESCUSED);\n\t\t\tbreak;\n\n\t\tcase\tIM_UPDATE_JOB:\n\t\t\tif (check_ms(stream, NULL))\n\t\t\t\tgoto fini;\n\t\t\tif (receive_job_update(stream, pjob) != 0) {\n\t\t\t\tsnprintf(log_buffer,\n\t\t\t\t\tsizeof(log_buffer),\n\t\t\t\t\t\"receive_job_update failed\");\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB,\n\t\t\t\t\tLOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t\tret = im_compose(stream, jobid, cookie,\n\t\t\t\tIM_ALL_OKAY,\n\t\t\t\tevent, fromtask, IM_OLD_PROTOCOL_VER);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\t\t\tbreak;\n\t\tcase IM_RECONNECT_TO_MS:\n\t\t\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE)\n\t\t\t\tresume_multinode(pjob);\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\tsprintf(log_buffer, \"unknown command %d sent\", command);\n\t\t\tgoto err;\n\t}\n\ndone:\n\ttpp_eom(stream);\n\tif (reply) {\t/* check if write worked */\n\t\tif (ret != DIS_SUCCESS ||\n\t\t\tdis_flush(stream) == -1) {\n\t\t\tif (errno != 0)\n\t\t\t\tlog_err(errno, __func__, \"dis_flush\");\n\t\t\ttpp_close(stream);\n\t\t\tif (np != NULL && np->hn_stream == stream)\n\t\t\t\tnp->hn_stream = -1;\n\t\t}\n\t}\n\tgoto fini;\n\nerr:\n\t/*\n\t ** We come here if we got a DIS read error or a protocol\n\t ** element is missing, or possibly because we failed to\n\t ** create a CPU set.  
The likely case is the remote\n\t ** host has gone down.\n\t */\n\tif (jobid == NULL)\n\t\tlog_err(-1, __func__, log_buffer);\n\telse\n\t\tlog_joberr(-1, __func__, log_buffer, jobid);\n\tim_eof(stream, ret);\n\nfini:\n\tfree(jobid);\n\tfree(cookie);\n\tfree(info);\n\tfree(errmsg);\n\tfree(nodehost);\n}\n\n// clang-format on\n\n/**\n * @brief\n *      Handle a stream that needs to be closed.\n *      May be either from another Mom, or the server.\n *\n * @param[in] fd - file descriptor\n *\n * @return Void\n *\n */\nvoid\ntm_eof(int fd)\n{\n\tjob *pjob;\n\tpbs_task *ptask;\n\tint i;\n\tint events;\n\ttm_task_id fromtask;\n\n\t/*\n\t ** Search though all the jobs looking for this fd.\n\t */\n\tfor (pjob = (job *) GET_NEXT(svr_alljobs);\n\t     pjob != NULL;\n\t     pjob = (job *) GET_NEXT(pjob->ji_alljobs)) {\n\t\tfor (ptask = (pbs_task *) GET_NEXT(pjob->ji_tasks);\n\t\t     ptask;\n\t\t     ptask = (pbs_task *)\n\t\t\t     GET_NEXT(ptask->ti_jobtask)) {\n\n\t\t\tif (ptask->ti_tmfd == NULL)\n\t\t\t\tcontinue;\n\n\t\t\tfor (i = 0; i < ptask->ti_tmnum; i++) {\n\t\t\t\tif (ptask->ti_tmfd[i] == fd) {\n\t\t\t\t\tfromtask = ptask->ti_qs.ti_task;\n\t\t\t\t\tptask->ti_tmfd[i] = -1;\n\t\t\t\t\tgoto cleanup;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\tlog_err(-1, __func__, \"no matching task found\");\n\treturn;\n\ncleanup:\n\n\tevents = 0;\n\t/*\n\t ** Check for events waiting to be sent to the dead client.\n\t */\n\tfor (i = 0; i < pjob->ji_numnodes; i++) {\n\t\teventent *ep;\n\t\thnodent *np = &pjob->ji_hosts[i];\n\n\t\tep = (eventent *) GET_NEXT(np->hn_events);\n\t\twhile (ep) {\n\t\t\tif (ep->ee_fd == fd) {\n\t\t\t\tDBPRT((\"%s: fd %d drop command %d \"\n\t\t\t\t       \"client %d event %d task %8.8X\\n\",\n\t\t\t\t       __func__, ep->ee_fd, ep->ee_command,\n\t\t\t\t       ep->ee_client, ep->ee_event,\n\t\t\t\t       ep->ee_taskid))\n\t\t\t\tep->ee_fd = -1;\n\t\t\t\tevents++;\n\t\t\t}\n\n\t\t\tep = (eventent *) GET_NEXT(ep->ee_next);\n\t\t}\n\t}\n\n\t/*\n\t ** Throw away any 
obits the dead client was waiting for.\n\t */\n\tfor (ptask = (pbs_task *) GET_NEXT(pjob->ji_tasks);\n\t     ptask;\n\t     ptask = (pbs_task *) GET_NEXT(ptask->ti_jobtask)) {\n\t\tobitent *pobit;\n\n\t\tpobit = (obitent *) GET_NEXT(ptask->ti_obits);\n\t\twhile (pobit) {\n\t\t\tobitent *next = GET_NEXT(pobit->oe_next);\n\n\t\t\tif (pobit->oe_type == OBIT_TYPE_TMEVENT &&\n\t\t\t    pobit->oe_u.oe_tm.oe_fd == fd) {\n\t\t\t\tDBPRT((\"%s: fd %d drop obit event %d \"\n\t\t\t\t       \"node %d task %8.8X\\n\",\n\t\t\t\t       __func__,\n\t\t\t\t       pobit->oe_u.oe_tm.oe_fd,\n\t\t\t\t       pobit->oe_u.oe_tm.oe_event,\n\t\t\t\t       pobit->oe_u.oe_tm.oe_node,\n\t\t\t\t       pobit->oe_u.oe_tm.oe_taskid))\n\t\t\t\tdelete_link(&pobit->oe_next);\n\t\t\t\tfree(pobit);\n\t\t\t\tevents++;\n\t\t\t}\n\t\t\tpobit = next;\n\t\t}\n\t}\n\n\tif (events > 0) {\n\t\tsprintf(log_buffer,\n\t\t\t\"%d events dropped for TM client in task %8.8X\",\n\t\t\tevents, fromtask);\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t}\n\treturn;\n}\n\n#define TASK_FDMAX 10\n\n\n/**\n * @brief\n * \t\tFind the UID of a process that owns the socket between \n * \t\ttwo endpoints from /proc/net/tcp or /proc/net/tcp6.\n * \n * @param[in] path - Path to the TCP file (either /proc/net/tcp or /proc/net/tcp6)\n * @param[in] target_uid - Target UID to match\n * @param[in] target_local_port - Target local port to match\n * @param[in] target_remote_port - Target remote port to match\n * \n * @return int\n * @retval -1 on failure\n * @retval UID of the process on success\n * \n */\nstatic int\nfind_uid_from_tcp(const char *path, int target_uid, unsigned int target_local_port, unsigned int target_remote_port)\n{\n\tchar line[512];\n\tunsigned int local;\n\tunsigned int remote;\n\tint uid = -1;\n\tFILE *fp = fopen(path, \"r\");\n\n\tif (!fp) {\n\t\tlog_errf(errno, __func__, \"Failed to open %s\", path);\n\t\treturn -1;\n\t}\n\n\t/*\n\t * Read each line, 
extract the local port, remote port and uid,\n\t * and compare them with the target ports. If a match is found, return\n\t * the uid.\n\t */\n\twhile (fgets(line, sizeof(line), fp)) {\n\t\tif (sscanf(line, \"%*s %*[0-9A-Fa-f]:%4X %*[0-9A-Fa-f]:%4X %*s %*s %*s %*s %d\",\n\t\t\t   &local, &remote, &uid) == 3) {\n\t\t\tif (local == target_local_port && remote == target_remote_port && target_uid == uid) {\n\t\t\t\tgoto ret;\n\t\t\t} else if (local == target_remote_port && remote == target_local_port && target_uid == uid) {\n\t\t\t\tgoto ret;\n\t\t\t}\n\t\t}\n\t}\n\n\tuid = -1;\nret:\n\tfclose(fp);\n\treturn uid;\n}\n\n/**\n * @brief\n *      Discover the UID of the process that owns the socket connection by \n * \t\tlooking at /proc/net/tcp or /proc/net/tcp6.\n *\n * @param[in] conn - The connection structure\n * @param[in] target_uid - Target UID to match\n *\n * @return int\n * @retval -1 on failure\n * @retval UID of the process on success\n */\nstatic int\nget_uid_from_socket(conn_t *conn, int target_uid)\n{\n\tpbs_net_t local_addr;\n\tpbs_net_t remote_addr;\n\tpbs_socklen_t addr_len;\n\n\taddr_len = sizeof(local_addr);\n\tif (getsockname(conn->cn_sock, (struct sockaddr *) &local_addr, &addr_len) < 0) {\n\t\tlog_err(errno, __func__, \"Error getting the socket's local address\");\n\t\treturn -1;\n\t}\n\n\taddr_len = sizeof(remote_addr);\n\tif (getpeername(conn->cn_sock, (struct sockaddr *) &remote_addr, &addr_len) < 0) {\n\t\tlog_err(errno, __func__, \"Error getting the socket's remote address\");\n\t\treturn -1;\n\t}\n\n\tif (IS_VALID_IP(&local_addr) && IS_VALID_IP(&remote_addr)) {\n\t\tunsigned int local_port = ntohs(GET_IP_PORT(&local_addr));\n\t\tunsigned int remote_port = ntohs(GET_IP_PORT(&remote_addr));\n\n\t\t/* First try to find matching entry in /proc/net/tcp */\n\t\tint uid = find_uid_from_tcp(\"/proc/net/tcp\", target_uid, local_port, remote_port);\n\t\tif (uid != -1)\n\t\t\treturn uid;\n\n\t\t/* Then look at /proc/net/tcp6 */\n\t\tuid = 
find_uid_from_tcp(\"/proc/net/tcp6\", target_uid, local_port, remote_port);\n\t\tif (uid != -1)\n\t\t\treturn uid;\n\n\t\tlog_errf(-1, __func__, \"No matching UID found for local port %u and remote port %u in /proc/net/tcp or /proc/net/tcp6\",\n\t\t\t local_port, remote_port);\n\t} else {\n\t\tlog_err(-1, __func__, \"Invalid IP address in connection\");\n\t}\n\n\treturn -1;\n}\n\n/**\n *\n * @brief\n *\tInput is coming from a process running on this host which\n *\tshould be part of one of the jobs I am part of.  The i/o\n *\twill take place using DIS over a tcp fd.\n *\n * @param[in]\tfd - the stream to read input from.\n * @param[in]\tversion - protocol version\n *\n * @note\n *\tRead the stream to get a task manager request.  Format the reply\n *\tand write it back.\n *\n *\tread (\n *\t\tjobid\t\t\tstring\n *\t\tcookie\t\t\tstring\n *\t\tcommand\t\t\tint\n *\t\tevent\t\t\tint\n *\t\tfrom taskid\t\tuint\n *\t)\n *\n * @return int\n * @retval 0\t\tfor success\n * @retval non-zero \tfor failure\n *\n */\nint\ntm_request(int fd, int version)\n{\n\textern int reqnum;\n\tint command;\n\tint reply = TRUE;\n\tint ret = DIS_SUCCESS;\n\tchar *jobid = NULL;\n\tchar *cookie = NULL;\n\tchar *oreo;\n\tjob *pjob = NULL;\n\teventent *ep;\n\tpbs_task *ptask = NULL;\n\tvmpiprocs *pnode;\n\thnodent *phost;\n\tint i, event, numele;\n\tsize_t len;\n\tlong ipadd;\n\tchar **argv, **envp;\n\tchar *name, *info;\n\tinfoent *ip;\n\tint signum;\n\tint vnodenum;\n\tint prev_error = 0;\n\ttm_node_id tvnodeid;\n\ttm_node_id myvnodeid;\n\ttm_task_id taskid, fromtask;\n\textern u_long localaddr;\n\tchar hook_msg[HOOK_MSG_SIZE + 1];\n\tint argc = 0;\n\tint found_empty_string = 0;\n\tmom_hook_input_t hook_input;\n\n\tconn_t *conn = get_conn(fd);\n\tif (!conn) {\n\t\tsprintf(log_buffer, \"fd=%d not found in connection table\", fd);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\tclosesocket(fd);\n\t\treturn -1;\n\t}\n\n\tif (conn->cn_addr != localaddr) {\n\t\tsprintf(log_buffer, \"non-local \"\n\t\t\t\"
connect\");\n\t\tgoto err;\n\t}\n\tif (version != TM_PROTOCOL_VER &&\n\t    version != TM_PROTOCOL_OLD) {\n\t\tsprintf(log_buffer, \"bad protocol version %d\", version);\n\t\tgoto err;\n\t}\n\n\tjobid = disrst(fd, &ret);\n\tBAIL(\"jobid\")\n\tcookie = disrst(fd, &ret);\n\tBAIL(\"cookie\")\n\tcommand = disrsi(fd, &ret);\n\tBAIL(\"command\")\n\tevent = disrsi(fd, &ret);\n\tBAIL(\"event\")\n\tfromtask = disrui(fd, &ret);\n\tBAIL(\"fromtask\")\n\n\tDBPRT((\"%s: job %s cookie %s task %8.8X com %d event %d\\n\", __func__,\n\t       jobid, cookie, fromtask, command, event))\n\n\t/*\n\t **\tCheck to see if we are doing a TM_ATTACH.  If so,\n\t **\tit is a special case since there will be no existing\n\t **\ttask to look up.\n\t */\n\tif (command == TM_ATTACH) {\n\t\tstatic char id[] = \"tm_attach\";\n\t\tpid_t pid;\n\t\tpid_t sid;\n\t\textern int attach_allow;\n\t\tuid_t proc_uid;\n\t\tuid_t jobowner;\n#ifdef WIN32\n\t\tchar proc_uname[UNLEN + 1] = {'\\0'};\n\t\tchar comm[MAX_PATH] = {'\\0'};\n\t\tHANDLE hProcess = INVALID_HANDLE_VALUE;\n\t\tchar *user_name = disrst(fd, &ret);\n\t\tBAIL(\"uid\")\n#else\n\t\tint system_uid = -1;\n\t\tchar comm[32] = {'\\0'};\n\t\tuid_t uid;\n\t\tuid = disrui(fd, &ret);\n\t\tBAIL(\"uid\")\n#endif\n\t\tpid = disrui(fd, &ret);\n\t\tBAIL(\"pid\")\n\n\t\t/*\n\t\t ** See if we are allowed to attach.\n\t\t */\n\t\tif (!attach_allow) {\n\t\t\tsprintf(log_buffer, \"%s: not allowed\", id);\n\t\t\ti = TM_ENOTIMPLEMENTED;\n\t\t\tgoto aterr;\n\t\t}\n\n\t\t/*\n\t\t ** The cookie must be NULL.\n\t\t */\n\t\tif (*cookie != '\\0') {\n\t\t\tsprintf(log_buffer, \"%s: job cookie is not NULL\", id);\n\t\t\tgoto err;\n\t\t}\n\n\t\t/* Discover the system_uid by indexing /proc/net/tcp\n\t\t * with the local and remote socket addresses\n\t\t * The system_uid should match uid\n\t\t */\n\t\tsystem_uid = get_uid_from_socket(conn, uid);\n\t\tif (system_uid == -1) {\n\t\t\tsprintf(log_buffer, \"%s: unable to determine the system UID\", id);\n\t\t\ti = 
TM_ENOTFOUND;\n\t\t\tgoto aterr;\n\t\t} else if (system_uid != uid) {\n\t\t\tsprintf(log_buffer, \"%s: system UID %d does not match UID %u\", id, system_uid, uid);\n\t\t\ti = TM_EUSER;\n\t\t\tgoto aterr;\n\t\t}\n\n\t\tif (*jobid == '\\0') { /* search for job */\n\t\t\tjob *pj;\n\n\t\t\ti = 0;\n\t\t\tfor (pj = (job *) GET_NEXT(svr_alljobs);\n\t\t\t     pj != NULL;\n\t\t\t     pj = (job *) GET_NEXT(pj->ji_alljobs)) {\n#ifdef WIN32\n\t\t\t\tif (user_name == NULL || pj->ji_user->pw_name == NULL || (strcasecmp(user_name, pj->ji_user->pw_name) != 0))\n#else\n\t\t\t\tif (uid != pj->ji_qs.ji_un.ji_momt.ji_exuid)\n#endif\n\t\t\t\t\tcontinue;\n\n\t\t\t\tif (!check_job_substate(pj, JOB_SUBSTATE_RUNNING) && !check_job_substate(pj, JOB_SUBSTATE_PRERUN))\n\t\t\t\t\tcontinue;\n\t\t\t\ti++;\n\t\t\t\tpjob = pj;\n\t\t\t}\n\t\t\t/*\n\t\t\t ** If one and only one match is found, pjob is good.\n\t\t\t */\n\t\t\tif (i != 1) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"%s: job could not be determined\", id);\n\t\t\t\ti = TM_ENOTFOUND;\n\t\t\t\tgoto aterr;\n\t\t\t}\n\t\t\tjobowner = pjob->ji_qs.ji_un.ji_momt.ji_exuid;\n\t\t} else {\n\t\t\t/* verify the jobid is known */\n\t\t\tif ((pjob = find_job(jobid)) == NULL) {\n\t\t\t\tsprintf(log_buffer, \"job not found\");\n\t\t\t\ti = TM_ENOTFOUND;\n\t\t\t\tgoto aterr;\n\t\t\t}\n\t\t\tif (!check_job_substate(pjob, JOB_SUBSTATE_RUNNING) && !check_job_substate(pjob, JOB_SUBSTATE_PRERUN)) {\n\t\t\t\tsprintf(log_buffer, \"job not running\");\n\t\t\t\ti = TM_ENOTFOUND;\n\t\t\t\tgoto aterr;\n\t\t\t}\n\t\t\t/*\n\t\t\t ** The uid must match the job.\n\t\t\t */\n\t\t\tjobowner = pjob->ji_qs.ji_un.ji_momt.ji_exuid;\n#ifdef WIN32\n\t\t\tif (user_name == NULL || pjob->ji_user->pw_name == NULL || (strcasecmp(user_name, pjob->ji_user->pw_name) != 0)) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"%s: uid mismatch %s to job %s\",\n\t\t\t\t\tid, user_name, pjob->ji_user->pw_name);\n\t\t\t\ti = TM_EUSER;\n\t\t\t\tgoto aterr;\n\t\t\t}\n#else\n\t\t\tif (uid != 
jobowner) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"%s: uid mismatch %d to job %d\",\n\t\t\t\t\tid, uid, jobowner);\n\t\t\t\ti = TM_EUSER;\n\t\t\t\tgoto aterr;\n\t\t\t}\n#endif\n\t\t}\n\n\t\tmom_hook_input_init(&hook_input);\n\t\thook_input.pjob = pjob;\n\t\thook_input.pid = pid;\n\n\t\tswitch (mom_process_hooks(HOOK_EVENT_EXECJOB_ATTACH,\n\t\t\t\t\t  PBS_MOM_SERVICE_NAME, mom_host,\n\t\t\t\t\t  &hook_input, NULL,\n\t\t\t\t\t  hook_msg, sizeof(hook_msg), 1)) {\n\t\t\tcase 0: /* explicit reject */\n\t\t\t\t/* maybe a new TM error? */\n\t\t\t\ti = TM_EHOOK;\n\t\t\t\t/* in aterr, log_buffer gets printed */\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"execjob_attach hook rejected request\");\n\t\t\t\tgoto aterr;\n\t\t\tcase 1: /* explicit accept */\n\t\t\t\tbreak;\n\t\t\tcase 2: /* no hook script executed - go ahead and accept event*/\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t  LOG_INFO, \"\",\n\t\t\t\t\t  \"execjob_attach event: accept req by default\");\n\t\t}\n\n\t\t/*\n\t\t ** Get the session, uid and command name for the pid.\n\t\t ** I need to bump reqnum so dep_procinfo will get a\n\t\t ** fresh copy of the process table.\n\t\t */\n\t\treqnum++;\n#ifdef WIN32\n\t\ti = dep_procinfo(pid, &sid, &proc_uid, proc_uname, sizeof(proc_uname), comm, sizeof(comm));\n#else\n\t\ti = dep_procinfo(pid, &sid, &proc_uid, comm, sizeof(comm));\n#endif\n\t\tif (i != TM_OKAY) {\n#ifdef linux\n\t\t\tchar procid[MAXPATHLEN + 1];\n\t\t\tstruct stat sbuf;\n\n\t\t\tsnprintf(procid, sizeof(procid), \"/proc/%d\", pid);\n\t\t\tif (stat(procid, &sbuf) == -1)\n\t\t\t\tgoto aterr;\n\n\t\t\tsid = getsid(pid);\n\t\t\tif (sid == -1)\n\t\t\t\tgoto aterr;\n\n\t\t\tproc_uid = sbuf.st_uid;\n#else\n\t\t\tgoto aterr;\n#endif\n\t\t}\n\t\tif (sid <= 1) {\n\t\t\ti = TM_ENOPROC;\n\t\t\tgoto aterr;\n\t\t}\n\n\t\t/*\n\t\t ** Search all the tasks to make sure the session has\n\t\t ** not already been attached.\n\t\t */\n\t\tptask = 
find_session(sid);\n\t\tif (ptask != NULL) {\n\t\t\tsprintf(log_buffer, \"%s: sid %d already attached\",\n\t\t\t\tid, sid);\n\t\t\ti = TM_ESESSION;\n\t\t\tgoto aterr;\n\t\t}\n\n\t\t/*\n\t\t ** The process must be owned by\n\t\t ** the job owner.\n\t\t */\n#ifdef WIN32\n\t\tif (proc_uname == NULL || pjob->ji_user->pw_name == NULL || (strcasecmp(proc_uname, pjob->ji_user->pw_name) != 0)) {\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"%s: uid mismatch proc %s to job %s\",\n\t\t\t\tid, proc_uname, pjob->ji_user->pw_name);\n\t\t\ti = TM_EOWNER;\n\t\t\tgoto aterr;\n\t\t}\n#else\n\t\tif (proc_uid != jobowner) {\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"%s: uid mismatch proc %d to job %d\",\n\t\t\t\tid, proc_uid, jobowner);\n\t\t\ti = TM_EOWNER;\n\t\t\tgoto aterr;\n\t\t}\n#endif\n\t\t/*\n\t\t **\tCreate a new task for the session.\n\t\t */\n#ifdef WIN32\n\t\tif ((hProcess = OpenProcess(PROCESS_ALL_ACCESS, TRUE, (DWORD) sid)) == NULL) {\n\t\t\tsprintf(log_buffer, \"%s: OpenProcess Failed for pid %d with error %d\", id, sid, GetLastError());\n\t\t\ti = TM_ENOPROC;\n\t\t\tgoto aterr;\n\t\t}\n#endif\n\n\t\tptask = momtask_create(pjob);\n\t\tif (ptask == NULL) {\n\t\t\tsprintf(log_buffer, \"%s: task create failed\", id);\n\t\t\ti = TM_ESYSTEM;\n\t\t\tgoto aterr;\n\t\t}\n\n\t\tstrcpy(ptask->ti_qs.ti_parentjobid, jobid);\n\t\t/*\n\t\t **\tThe parent self virtual nodes are not known.\n\t\t */\n\t\tptask->ti_qs.ti_parentnode = TM_ERROR_NODE;\n\t\tptask->ti_qs.ti_myvnode = TM_ERROR_NODE;\n\t\tptask->ti_qs.ti_parenttask = TM_INIT_TASK;\n\t\tptask->ti_qs.ti_sid = sid;\n#ifdef WIN32\n\t\tptask->ti_hProc = hProcess;\n\t\tif (pjob->ji_hJob == NULL) {\n\t\t\tpjob->ji_hJob = CreateJobObject(NULL, pjob->ji_qs.ji_jobid);\n\t\t\tif (pjob->ji_hJob != NULL) {\n\t\t\t\t/*\n\t\t\t\t * When a process is attached using -p option of pbs_attach\n\t\t\t\t * and the processe is not running under session 0,\n\t\t\t\t * or when a pbs_attach is run outside the job in a session != 0,\n\t\t\t\t * it may fail to assign 
to the Windows Job object\n\t\t\t\t * but its resource accounting and resource limit enforcement\n\t\t\t\t * will still be applicable via polling.\n\t\t\t\t * Any processes created by pbs_attach are automatically\n\t\t\t\t * assigned to the job object as long as pbs_attach\n\t\t\t\t * gets run inside the job\n\t\t\t\t */\n\t\t\t\t(void) AssignProcessToJobObject(pjob->ji_hJob, hProcess);\n\t\t\t}\n\t\t} else {\n\t\t\t/*\n\t\t\t * When a process is attached using -p option of pbs_attach\n\t\t\t * and the process is not running under session 0,\n\t\t\t * or when a pbs_attach is run outside the job in a session != 0,\n\t\t\t * it may fail to assign to the Windows Job object\n\t\t\t * but its resource accounting and resource limit enforcement\n\t\t\t * will still be applicable via polling.\n\t\t\t * Any processes created by pbs_attach are automatically\n\t\t\t * assigned to the job object as long as pbs_attach\n\t\t\t * gets run inside the job\n\t\t\t */\n\t\t\t(void) AssignProcessToJobObject(pjob->ji_hJob, hProcess);\n\t\t}\n#endif\n\t\tptask->ti_qs.ti_status = TI_STATE_RUNNING;\n\t\tptask->ti_flags |= TI_FLAGS_ORPHAN;\n\t\t(void) task_save(ptask);\n\n\t\tif (!check_job_substate(pjob, JOB_SUBSTATE_RUNNING)) {\n\t\t\tset_job_state(pjob, JOB_STATE_LTR_RUNNING);\n\t\t\tset_job_substate(pjob, JOB_SUBSTATE_RUNNING);\n\t\t\tjob_save(pjob);\n\t\t}\n\n\t\t/*\n\t\t ** Add to list of polled jobs if it isn't\n\t\t ** already there.\n\t\t */\n\t\tif (is_linked(&mom_polljobs,\n\t\t\t      &pjob->ji_jobque) == 0) {\n\t\t\tappend_link(&mom_polljobs,\n\t\t\t\t    &pjob->ji_jobque, pjob);\n\t\t}\n\n\t\t/*\n\t\t ** Do any dependent attach operation.\n\t\t */\n\t\ti = dep_attach(ptask);\n\t\tif (i != TM_OKAY) {\n\t\t\tgoto aterr;\n\t\t}\n\n\t\tsprintf(log_buffer,\n\t\t\t\"pid %d sid %d cmd %s attached as task %8.8X\",\n\t\t\tpid, sid, comm, ptask->ti_qs.ti_task);\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t/*\n\t\t * Do any 
needed error logging and send the reply for the attach request.\n\t\t */\n\taterr:\n\t\tif (i != TM_OKAY) {\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB,\n\t\t\t\t  LOG_NOTICE,\n\t\t\t\t  (pjob == NULL) ? \"N/A\" : pjob->ji_qs.ji_jobid,\n\t\t\t\t  log_buffer);\n\t\t}\n\t\tret = tm_reply(fd, version,\n\t\t\t       (i == TM_OKAY) ? i : TM_ERROR, event);\n\t\tif (ret != DIS_SUCCESS)\n\t\t\tgoto done;\n\t\tret = diswui(fd, ((i == TM_OKAY) ? ptask->ti_qs.ti_task : i));\n\t\tgoto done;\n\t}\n\n\t/* Continue normal processing for all other commands. */\n\t/* verify the jobid is known */\n\tif ((pjob = find_job(jobid)) == NULL) {\n\t\tsprintf(log_buffer, \"job not found\");\n\t\tgoto err;\n\t}\n\n\t/* see if the cookie matches */\n\tif (!(is_jattr_set(pjob, JOB_ATR_Cookie))) {\n\t\tsprintf(log_buffer, \"job has no cookie\");\n\t\tgoto err;\n\t}\n\toreo = get_jattr_str(pjob, JOB_ATR_Cookie);\n\tif (strcmp(oreo, cookie) != 0) {\n\t\tDBPRT((\"job cookie %s message %s\", oreo, cookie))\n\t\tsprintf(log_buffer, \"bad cookie\");\n\t\tgoto err;\n\t}\n\n\t/* verify this taskid is my baby */\n\tptask = task_find(pjob, fromtask);\n\tif (ptask == NULL) { /* not found */\n\t\tsprintf(log_buffer, \"task %8.8X not found\", fromtask);\n\t\tlog_joberr(-1, __func__, log_buffer, jobid);\n\t\tret = tm_reply(fd, version, TM_ERROR, event);\n\t\tif (ret != DIS_SUCCESS)\n\t\t\tgoto done;\n\t\tret = diswsi(fd, TM_ENOTFOUND);\n\t\tgoto done;\n\t}\n\tmyvnodeid = ptask->ti_qs.ti_myvnode;\n\tconn->cn_oncl = tm_eof;\n\n\tif (ptask->ti_protover != -1 && ptask->ti_protover != version) {\n\t\t/* the protocol version should not change */\n\t\tsprintf(log_buffer,\n\t\t\t\"inconsistent TM version %d from task %8.8X\",\n\t\t\tversion, fromtask);\n\t\tgoto err;\n\t}\n\tptask->ti_protover = version;\n\n\tif (ptask->ti_tmfd == NULL) {\n\t\tptask->ti_tmfd = (int *) calloc(TASK_FDMAX, sizeof(int));\n\t\tassert(ptask->ti_tmfd != NULL);\n\t\tptask->ti_tmnum = 0;\n\t\tptask->ti_tmmax = TASK_FDMAX;\n\t}\n\tfor (i = 0; i < ptask->ti_tmnum; i++) 
{\n\t\tif (ptask->ti_tmfd[i] == fd)\n\t\t\tbreak;\n\t}\n\tif (i == ptask->ti_tmnum) { /* didn't find existing fd */\n\t\tfor (i = 0; i < ptask->ti_tmnum; i++) {\n\t\t\tif (ptask->ti_tmfd[i] == -1)\n\t\t\t\tbreak;\n\t\t}\n\t\tif (i == ptask->ti_tmnum) { /* no free slot */\n\t\t\tif (ptask->ti_tmnum == ptask->ti_tmmax) {\n\t\t\t\t/* no more space */\n\t\t\t\tptask->ti_tmmax *= 2;\n\t\t\t\tptask->ti_tmfd = (int *) realloc(ptask->ti_tmfd,\n\t\t\t\t\t\t\t\t ptask->ti_tmmax * sizeof(int));\n\t\t\t\tassert(ptask->ti_tmfd != NULL);\n\t\t\t}\n\t\t\ti = ptask->ti_tmnum++;\n\t\t}\n\t\tptask->ti_tmfd[i] = fd;\n\t}\n\n\t/* set no timeout so connection is not closed for being idle */\n\tconn->cn_authen |= PBS_NET_CONN_NOTIMEOUT;\n\n\tswitch (command) {\n\n\t\tcase TM_INIT:\n\t\t\t/*\n\t\t\t ** A request to initialize.\n\t\t\t **\n\t\t\t **\tsend (\n\t\t\t **\t\tnumber of nodes int;\n\t\t\t **\t\tnodeid[0]       int;\n\t\t\t **\t\t...\n\t\t\t **\t\tnodeid[n-1]     int;\n\t\t\t **\t\tparent jobid    string;\n\t\t\t **\t\tparent nodeid   int;\n\t\t\t **\t\tparent taskid   int;\n\t\t\t **\t)\n\t\t\t */\n\t\t\tDBPRT((\"%s: INIT %s\\n\", __func__, jobid))\n\t\t\tif (prev_error)\n\t\t\t\tgoto done;\n\n\t\t\tret = tm_reply(fd, version, TM_OKAY, event);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto done;\n\t\t\tvnodenum = pjob->ji_numvnod;\n\t\t\tret = diswui(fd, vnodenum); /* num nodes */\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto done;\n\n\t\t\tpnode = pjob->ji_vnods;\n\t\t\tfor (i = 0; i < vnodenum; i++) {\n\t\t\t\tret = diswsi(fd, pnode[i].vn_node);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto done;\n\t\t\t}\n\t\t\tret = diswst(fd, ptask->ti_qs.ti_parentjobid); /* dad job */\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto done;\n\t\t\tret = diswsi(fd, ptask->ti_qs.ti_parentnode); /* dad node */\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto done;\n\t\t\tret = diswui(fd, ptask->ti_qs.ti_parenttask); /* dad task */\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto 
done;\n\n\t\t\tptask->ti_flags |= TI_FLAGS_INIT;\n\t\t\tgoto done;\n\n\t\tcase TM_POSTINFO:\n\t\t\t/*\n\t\t\t ** Post named info for a task.\n\t\t\t **\n\t\t\t **\tread (\n\t\t\t **\t\tname\t\tstring;\n\t\t\t **\t\tinfo\t\tcounted string;\n\t\t\t **\t)\n\t\t\t */\n\t\t\tname = disrst(fd, &ret);\n\t\t\tBAIL(\"POSTINFO name\")\n\t\t\tinfo = disrcs(fd, &len, &ret);\n\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\tfree(name);\n\t\t\t\tsprintf(log_buffer, bail_format, \"POSTINFO info\");\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t\tDBPRT((\"%s: POSTINFO %s task %8.8X sent info %s:%s(%d)\\n\", __func__,\n\t\t\t       jobid, fromtask, name, info, (int) len))\n\t\t\tif (prev_error) {\n\t\t\t\tfree(name);\n\t\t\t\tfree(info);\n\t\t\t\tgoto done;\n\t\t\t}\n\n\t\t\ttask_saveinfo(ptask, name, info, (int) len);\n\t\t\tret = tm_reply(fd, version, TM_OKAY, event);\n\t\t\tgoto done;\n\n\t\tcase TM_REGISTER:\n\t\t\tsprintf(log_buffer, \"REGISTER received - NOT IMPLEMENTED\");\n\t\t\t(void) tm_reply(fd, version, TM_ERROR, event);\n\t\t\t(void) diswsi(fd, TM_ENOTIMPLEMENTED);\n\t\t\t(void) dis_flush(fd);\n\t\t\tgoto err;\n\n\t\tdefault:\n\t\t\tbreak;\n\t}\n\n\t/*\n\t ** All requests beside TM_INIT and TM_POSTINFO\n\t ** require a node number where the action will take place.\n\t ** Read that and check that it is legal.\n\t **\n\t **\tread (\n\t **\t\tnode number\t\tint\n\t **\t)\n\t */\n\ttvnodeid = disrui(fd, &ret);\n\tBAIL(\"tvnodeid\")\n\n\tpnode = pjob->ji_vnods;\n\tfor (i = 0; i < pjob->ji_numvnod; i++, pnode++) {\n\t\tif (pnode->vn_node == tvnodeid)\n\t\t\tbreak;\n\t}\n\tif (i == pjob->ji_numvnod) {\n\t\tsprintf(log_buffer, \"node %d not found\", tvnodeid);\n\t\tlog_joberr(-1, __func__, log_buffer, jobid);\n\t\tret = tm_reply(fd, version, TM_ERROR, event);\n\t\tif (ret != DIS_SUCCESS)\n\t\t\tgoto done;\n\t\tret = diswsi(fd, TM_ENOTFOUND);\n\t\tif (ret != DIS_SUCCESS)\n\t\t\tgoto done;\n\t\tprev_error = 1;\n\t}\n\tphost = pnode->vn_host;\n\n\tswitch (command) {\n\n\t\tcase 
TM_TASKS:\n\t\t\t/*\n\t\t\t ** A request to read the list of tasks that a\n\t\t\t ** particular node has charge of.\n\t\t\t */\n\t\t\tDBPRT((\"%s: TASKS %s on node %d\\n\",\n\t\t\t       __func__, jobid, tvnodeid))\n\t\t\tif (prev_error)\n\t\t\t\tgoto done;\n\n\t\t\tif (pjob->ji_nodeid != TO_PHYNODE(tvnodeid)) { /* not me */\n\t\t\t\tep = event_alloc(pjob, IM_GET_TASKS, fd, phost,\n\t\t\t\t\t\t event, fromtask);\n\t\t\t\tret = im_compose(phost->hn_stream, jobid, cookie,\n\t\t\t\t\t\t IM_GET_TASKS, ep->ee_event, fromtask, IM_OLD_PROTOCOL_VER);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto done;\n\t\t\t\tret = diswui(phost->hn_stream, myvnodeid);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto done;\n\t\t\t\tret = diswui(phost->hn_stream, tvnodeid);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto done;\n\t\t\t\tret = (dis_flush(phost->hn_stream) == -1) ? DIS_NOCOMMIT : DIS_SUCCESS;\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto done;\n\t\t\t\treply = FALSE;\n\t\t\t\tgoto done;\n\t\t\t}\n\t\t\tret = tm_reply(fd, version, TM_OKAY, event);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto done;\n\t\t\tfor (ptask = (pbs_task *) GET_NEXT(pjob->ji_tasks);\n\t\t\t     ptask;\n\t\t\t     ptask = (pbs_task *) GET_NEXT(ptask->ti_jobtask)) {\n\t\t\t\tret = diswui(fd, ptask->ti_qs.ti_task);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto done;\n\t\t\t}\n\t\t\tret = diswui(fd, TM_NULL_TASK);\n\t\t\tbreak;\n\n\t\tcase TM_SPAWN:\n\t\t\t/*\n\t\t\t ** Spawn a task on the requested node.\n\t\t\t **\n\t\t\t **\tread (\n\t\t\t **\t\targc\t\tint;\n\t\t\t **\t\targ 0\t\tstring;\n\t\t\t **\t\t...\n\t\t\t **\t\targ argc-1\tstring;\n\t\t\t **\t\tenv 0\t\tstring;\n\t\t\t **\t\t...\n\t\t\t **\t\tenv m\t\tstring;\n\t\t\t **\t)\n\t\t\t */\n\t\t\tDBPRT((\"%s: SPAWN %s on node %d\\n\",\n\t\t\t       __func__, jobid, tvnodeid))\n\t\t\targc = disrui(fd, &ret);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto done;\n\t\t\targv = (char **) calloc(argc + 1, sizeof(char 
*));\n\t\t\tassert(argv);\n\t\t\tfor (i = 0; i < argc; i++) {\n\t\t\t\targv[i] = disrst(fd, &ret);\n\t\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\t\targv[i] = NULL;\n\t\t\t\t\tarrayfree(argv);\n\t\t\t\t\tgoto done;\n\t\t\t\t}\n\t\t\t\tif (strlen(argv[i]) == 0)\n\t\t\t\t\tfound_empty_string = 1; /* arguments contains empty string, Used if spawn on another MOM*/\n\t\t\t}\n\t\t\targv[i] = NULL;\n\n\t\t\tnumele = 3;\n\t\t\tenvp = (char **) calloc(numele, sizeof(char *));\n\t\t\tassert(envp);\n\t\t\tfor (i = 0;; i++) {\n\t\t\t\tchar *env;\n\n\t\t\t\tenv = disrst(fd, &ret);\n\t\t\t\tif (ret != DIS_SUCCESS && ret != DIS_EOD) {\n\t\t\t\t\tarrayfree(argv);\n\t\t\t\t\tenvp[i] = NULL;\n\t\t\t\t\tarrayfree(envp);\n\t\t\t\t\tgoto done;\n\t\t\t\t}\n\t\t\t\tif (env == NULL)\n\t\t\t\t\tbreak;\n\t\t\t\tif (*env == '\\0') {\n\t\t\t\t\tfree(env);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\t/*\n\t\t\t\t **\tNeed to remember extra slot for NULL\n\t\t\t\t **\tat the end.  Thanks to Pete Wyckoff\n\t\t\t\t **\tfor finding this.\n\t\t\t\t */\n\t\t\t\tif (i == numele - 1) {\n\t\t\t\t\tnumele *= 2;\n\t\t\t\t\tenvp = (char **) realloc(envp,\n\t\t\t\t\t\t\t\t numele * sizeof(char *));\n\t\t\t\t\tassert(envp);\n\t\t\t\t}\n\t\t\t\tenvp[i] = env;\n\t\t\t}\n\t\t\tenvp[i] = NULL;\n\t\t\tret = DIS_SUCCESS;\n\n\t\t\tif (prev_error) {\n\t\t\t\tarrayfree(argv);\n\t\t\t\tarrayfree(envp);\n\t\t\t\tgoto done;\n\t\t\t}\n\n\t\t\t/*\n\t\t\t ** If the spawn happens on me, just do it.\n\t\t\t */\n\t\t\tif (pjob->ji_nodeid == TO_PHYNODE(tvnodeid)) {\n#ifdef PMIX\n\t\t\t\tpbs_pmix_register_client(pjob, tvnodeid, &envp);\n#endif\n\t\t\t\ti = TM_ERROR;\n\t\t\t\tptask = momtask_create(pjob);\n\t\t\t\tif (ptask != NULL) {\n\t\t\t\t\tstrcpy(ptask->ti_qs.ti_parentjobid, jobid);\n\t\t\t\t\tptask->ti_qs.ti_parentnode = myvnodeid;\n\t\t\t\t\tptask->ti_qs.ti_myvnode = tvnodeid;\n\t\t\t\t\tptask->ti_qs.ti_parenttask = fromtask;\n\t\t\t\t\tif (task_save(ptask) != -1) {\n\t\t\t\t\t\tret = start_process(ptask, argv, envp, 
false);\n\t\t\t\t\t\tif (ret == PBSE_NONE) {\n\t\t\t\t\t\t\ti = TM_OKAY;\n\t\t\t\t\t\t} else if (ret == PBSE_SYSTEM) {\n\t\t\t\t\t\t\ti = TM_ESYSTEM;\n\t\t\t\t\t\t\tptask->ti_qs.ti_status = TI_STATE_EXITED;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tarrayfree(argv);\n\t\t\t\tarrayfree(envp);\n\t\t\t\tret = tm_reply(fd, version, i, event);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto done;\n\t\t\t\tret = diswui(fd, ((i == TM_ERROR) ? TM_ESYSTEM : ptask->ti_qs.ti_task));\n\t\t\t\tgoto done;\n\t\t\t}\n\t\t\t/*\n\t\t\t ** Sending to another MOM.\n\t\t\t */\n\t\t\tep = event_alloc(pjob, IM_SPAWN_TASK, fd, phost,\n\t\t\t\t\t event, fromtask);\n\t\t\tret = im_compose(phost->hn_stream, jobid, cookie,\n\t\t\t\t\t IM_SPAWN_TASK, ep->ee_event, fromtask,\n\t\t\t\t\t found_empty_string ? IM_PROTOCOL_VER : IM_OLD_PROTOCOL_VER);\n\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\tarrayfree(argv);\n\t\t\t\tarrayfree(envp);\n\t\t\t\tgoto done;\n\t\t\t}\n\t\t\tret = diswui(phost->hn_stream, myvnodeid);\n\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\tarrayfree(argv);\n\t\t\t\tarrayfree(envp);\n\t\t\t\tgoto done;\n\t\t\t}\n\t\t\tret = diswui(phost->hn_stream, tvnodeid);\n\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\tarrayfree(argv);\n\t\t\t\tarrayfree(envp);\n\t\t\t\tgoto done;\n\t\t\t}\n\t\t\tret = diswui(phost->hn_stream, TM_NULL_TASK);\n\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\tarrayfree(argv);\n\t\t\t\tarrayfree(envp);\n\t\t\t\tgoto done;\n\t\t\t}\n\t\t\tif (found_empty_string) {\n\t\t\t\tret = diswui(phost->hn_stream, argc);\n\t\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\t\tarrayfree(argv);\n\t\t\t\t\tarrayfree(envp);\n\t\t\t\t\tgoto done;\n\t\t\t\t}\n\t\t\t\tfor (i = 0; i < argc; i++) {\n\t\t\t\t\tret = diswst(phost->hn_stream, argv[i]);\n\t\t\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\t\t\tarrayfree(argv);\n\t\t\t\t\t\tarrayfree(envp);\n\t\t\t\t\t\tgoto done;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tfor (i = 0; argv[i]; i++) {\n\t\t\t\t\tret = diswst(phost->hn_stream, 
argv[i]);\n\t\t\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\t\t\tarrayfree(argv);\n\t\t\t\t\t\tarrayfree(envp);\n\t\t\t\t\t\tgoto done;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tret = diswst(phost->hn_stream, \"\");\n\t\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\t\tarrayfree(argv);\n\t\t\t\t\tarrayfree(envp);\n\t\t\t\t\tgoto done;\n\t\t\t\t}\n\t\t\t}\n\t\t\tfor (i = 0; envp[i]; i++) {\n\t\t\t\tret = diswst(phost->hn_stream, envp[i]);\n\t\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\t\tarrayfree(argv);\n\t\t\t\t\tarrayfree(envp);\n\t\t\t\t\tgoto done;\n\t\t\t\t}\n\t\t\t}\n\t\t\tret = (dis_flush(phost->hn_stream) == -1) ? DIS_NOCOMMIT : DIS_SUCCESS;\n\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\tarrayfree(argv);\n\t\t\t\tarrayfree(envp);\n\t\t\t\tgoto done;\n\t\t\t}\n\t\t\treply = FALSE;\n\t\t\tarrayfree(argv);\n\t\t\tarrayfree(envp);\n\n\t\t\tbreak;\n\n\t\tcase TM_SIGNAL:\n\t\t\t/*\n\t\t\t ** Send a signal to the specified task.\n\t\t\t **\n\t\t\t **\tread (\n\t\t\t **\t\tto task\t\t\tint\n\t\t\t **\t\tsignal\t\t\tint\n\t\t\t **\t)\n\t\t\t */\n\t\t\ttaskid = disrui(fd, &ret);\n\t\t\tBAIL(\"SIGNAL taskid\")\n\t\t\tsignum = disrui(fd, &ret);\n\t\t\tBAIL(\"SIGNAL signum\")\n\t\t\tDBPRT((\"%s: SIGNAL %s on node %d task %8.8X sig %d\\n\",\n\t\t\t       __func__, jobid, tvnodeid, taskid, signum))\n\t\t\tif (prev_error)\n\t\t\t\tgoto done;\n\n\t\t\tif (pjob->ji_nodeid != TO_PHYNODE(tvnodeid)) { /* not me */\n\t\t\t\tep = event_alloc(pjob, IM_SIGNAL_TASK, fd, phost,\n\t\t\t\t\t\t event, fromtask);\n\t\t\t\tret = im_compose(phost->hn_stream, jobid, cookie,\n\t\t\t\t\t\t IM_SIGNAL_TASK, ep->ee_event, fromtask, IM_OLD_PROTOCOL_VER);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto done;\n\t\t\t\tret = diswui(phost->hn_stream, myvnodeid);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto done;\n\t\t\t\tret = diswui(phost->hn_stream, taskid);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto done;\n\t\t\t\tret = diswsi(phost->hn_stream, signum);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto 
done;\n\t\t\t\tret = (dis_flush(phost->hn_stream) == -1) ? DIS_NOCOMMIT : DIS_SUCCESS;\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto done;\n\t\t\t\treply = FALSE;\n\t\t\t\tgoto done;\n\t\t\t}\n\n\t\t\t/*\n\t\t\t ** Task should be here... look for it.\n\t\t\t */\n\t\t\tif ((ptask = task_find(pjob, taskid)) == NULL) {\n\t\t\t\tret = tm_reply(fd, version, TM_ERROR, event);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto done;\n\t\t\t\tret = diswsi(fd, TM_ENOTFOUND);\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tkill_task(ptask, signum, 0);\n\t\t\tret = tm_reply(fd, version, TM_OKAY, event);\n\t\t\tbreak;\n\n\t\tcase TM_OBIT:\n\t\t\t/*\n\t\t\t ** Register an obit request for the specified task.\n\t\t\t **\n\t\t\t **\tread (\n\t\t\t **\t\ttask to watch\t\tint\n\t\t\t **\t)\n\t\t\t */\n\t\t\ttaskid = disrui(fd, &ret);\n\t\t\tBAIL(\"OBIT taskid\")\n\t\t\tDBPRT((\"%s: fd %d OBIT %s on node %d task %8.8X\\n\",\n\t\t\t       __func__, fd, jobid, tvnodeid, taskid))\n\t\t\tif (prev_error)\n\t\t\t\tgoto done;\n\n\t\t\tif (pjob->ji_nodeid != TO_PHYNODE(tvnodeid)) { /* not me */\n\t\t\t\tep = event_alloc(pjob, IM_OBIT_TASK, fd, phost,\n\t\t\t\t\t\t event, fromtask);\n\t\t\t\tret = im_compose(phost->hn_stream, jobid, cookie,\n\t\t\t\t\t\t IM_OBIT_TASK, ep->ee_event, fromtask, IM_OLD_PROTOCOL_VER);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto done;\n\t\t\t\tret = diswui(phost->hn_stream, myvnodeid);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto done;\n\t\t\t\tret = diswui(phost->hn_stream, taskid);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto done;\n\t\t\t\tret = (dis_flush(phost->hn_stream) == -1) ? DIS_NOCOMMIT : DIS_SUCCESS;\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto done;\n\t\t\t\treply = FALSE;\n\t\t\t\tgoto done;\n\t\t\t}\n\t\t\t/*\n\t\t\t ** Task should be here... 
look for it.\n\t\t\t */\n\t\t\tif ((ptask = task_find(pjob, taskid)) == NULL) {\n\t\t\t\tret = tm_reply(fd, version, TM_ERROR, event);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto done;\n\t\t\t\tret = diswsi(fd, TM_ENOTFOUND);\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tif (ptask->ti_qs.ti_status >= TI_STATE_EXITED) {\n\t\t\t\tret = tm_reply(fd, version, TM_OKAY, event);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto done;\n\t\t\t\tret = diswsi(fd, ptask->ti_qs.ti_exitstat);\n\t\t\t} else {\n\t\t\t\tobitent *op = (obitent *) malloc(sizeof(obitent));\n\t\t\t\tassert(op);\n\t\t\t\tCLEAR_LINK(op->oe_next);\n\t\t\t\tappend_link(&ptask->ti_obits, &op->oe_next, op);\n\t\t\t\top->oe_type = OBIT_TYPE_TMEVENT;\n\t\t\t\top->oe_u.oe_tm.oe_fd = fd;\n\t\t\t\top->oe_u.oe_tm.oe_node = tvnodeid;\n\t\t\t\top->oe_u.oe_tm.oe_event = event;\n\t\t\t\top->oe_u.oe_tm.oe_taskid = fromtask;\n\t\t\t\treply = 0;\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase TM_GETINFO:\n\t\t\t/*\n\t\t\t ** Get named info for a specified task.\n\t\t\t **\n\t\t\t **\tread (\n\t\t\t **\t\ttask\t\t\tint\n\t\t\t **\t\tname\t\t\tstring\n\t\t\t **\t)\n\t\t\t */\n\t\t\ttaskid = disrui(fd, &ret);\n\t\t\tBAIL(\"GETINFO taskid\")\n\t\t\tname = disrst(fd, &ret);\n\t\t\tBAIL(\"GETINFO name\")\n\t\t\tDBPRT((\"%s: GETINFO %s from node %d task %8.8X name %s\\n\",\n\t\t\t       __func__, jobid, tvnodeid, taskid, name))\n\t\t\tif (prev_error)\n\t\t\t\tgoto done;\n\n\t\t\tif (pjob->ji_nodeid != TO_PHYNODE(tvnodeid)) { /* not me */\n\t\t\t\tep = event_alloc(pjob, IM_GET_INFO, fd, phost,\n\t\t\t\t\t\t event, fromtask);\n\t\t\t\tret = im_compose(phost->hn_stream, jobid, cookie,\n\t\t\t\t\t\t IM_GET_INFO, ep->ee_event, fromtask, IM_OLD_PROTOCOL_VER);\n\t\t\t\tif (ret == DIS_SUCCESS) {\n\t\t\t\t\tret = diswui(phost->hn_stream, myvnodeid);\n\t\t\t\t\tif (ret == DIS_SUCCESS) {\n\t\t\t\t\t\tret = diswui(phost->hn_stream, taskid);\n\t\t\t\t\t\tif (ret == DIS_SUCCESS) {\n\t\t\t\t\t\t\tret = diswst(phost->hn_stream,\n\t\t\t\t\t\t\t\t     
name);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tfree(name);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto done;\n\t\t\t\tret = (dis_flush(phost->hn_stream) == -1) ? DIS_NOCOMMIT : DIS_SUCCESS;\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto done;\n\t\t\t\treply = FALSE;\n\t\t\t\tgoto done;\n\t\t\t}\n\n\t\t\t/*\n\t\t\t ** Task should be here... look for it.\n\t\t\t */\n\t\t\tif ((ptask = task_find(pjob, taskid)) != NULL) {\n\t\t\t\tif ((ip = task_findinfo(ptask, name)) != NULL) {\n\t\t\t\t\tret = tm_reply(fd, version, TM_OKAY, event);\n\t\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\t\tgoto done;\n\t\t\t\t\tret = diswcs(fd, ip->ie_info, ip->ie_len);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\tret = tm_reply(fd, version, TM_ERROR, event);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto done;\n\t\t\tret = diswsi(fd, TM_ENOTFOUND);\n\t\t\tbreak;\n\n\t\tcase TM_RESOURCES:\n\t\t\t/*\n\t\t\t ** Get resource string for a node.\n\t\t\t */\n\t\t\tDBPRT((\"%s: RESOURCES %s for node %d\\n\", __func__, jobid, tvnodeid))\n\t\t\tif (prev_error)\n\t\t\t\tgoto done;\n\n\t\t\tif (pjob->ji_nodeid != TO_PHYNODE(tvnodeid)) { /* not me */\n\t\t\t\tep = event_alloc(pjob, IM_GET_RESC, fd, phost,\n\t\t\t\t\t\t event, fromtask);\n\t\t\t\tret = im_compose(phost->hn_stream, jobid, cookie,\n\t\t\t\t\t\t IM_GET_RESC, ep->ee_event, fromtask, IM_OLD_PROTOCOL_VER);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto done;\n\t\t\t\tret = diswui(phost->hn_stream, myvnodeid);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto done;\n\t\t\t\tret = (dis_flush(phost->hn_stream) == -1) ? 
DIS_NOCOMMIT : DIS_SUCCESS;\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto done;\n\t\t\t\treply = FALSE;\n\t\t\t\tgoto done;\n\t\t\t}\n\n\t\t\tinfo = resc_string(pjob);\n\t\t\tret = tm_reply(fd, version, TM_OKAY, event);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto done;\n\t\t\tret = diswst(fd, info);\n\t\t\tfree(info);\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\tsprintf(log_buffer, \"%s: unknown command %d\", jobid, command);\n\t\t\t(void) tm_reply(fd, version, TM_ERROR, event);\n\t\t\t(void) diswsi(fd, TM_EUNKNOWNCMD);\n\t\t\t(void) dis_flush(fd);\n\t\t\tgoto err;\n\t}\n\ndone:\n\tif (reply) {\n\t\tDBPRT((\"%s: REPLY %s\\n\", __func__, dis_emsg[ret]))\n\t\tif (ret != DIS_SUCCESS || dis_flush(fd) == -1) {\n\t\t\tsprintf(log_buffer, \"comm failed %s\", dis_emsg[ret]);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\tclose_conn(fd);\n\t\t}\n\t}\n\n\tfree(jobid);\n\tfree(cookie);\n\treturn 0;\n\nerr:\n\tif (jobid != NULL) {\n\t\tlog_joberr(-1, __func__, log_buffer, jobid);\n\t\tfree(jobid);\n\t} else\n\t\tlog_err(-1, __func__, log_buffer);\n\n\tipadd = conn->cn_addr;\n\tsprintf(log_buffer,\n\t\t\"message refused from port %d addr %ld.%ld.%ld.%ld\",\n\t\tconn->cn_port,\n\t\t(ipadd & 0xff000000) >> 24,\n\t\t(ipadd & 0x00ff0000) >> 16,\n\t\t(ipadd & 0x0000ff00) >> 8,\n\t\t(ipadd & 0x000000ff));\n\tclose_conn(fd);\n\tif (cookie)\n\t\tfree(cookie);\n\treturn -1;\n}\n\n/**\n * @brief\n *\tsend_join_job_restart - send the JOIN_JOB or RESTART message from\n *\ta Mother Superior to a Sister.\n *\n * @par Functionality:\n *\tThe message header is composed and sent.\n *\tIf the message is JOIN_JOB, the following information is also\n *\tencoded and sent:\n *\t    number of nodes\tint\n *\t    stdout port\t\tint\n *\t    stderr port\t\tint\n *\t    cred type\t\tint\n *\t\t<if cred len > 0>\n *\t\tcredential\tstring\n *\t    jobattrs\t\tattrl\n *\n * @param[in]\tcom    - IM message type: IM_JOIN_JOB or IM_RESTART\n * @param[in]\tep     - pointer to associated event\n * 
@param[in]\tnth    - index of host entry (job's node number for this sister)\n * @param[in]\tpjob   - pointer to job structure for job to be run\n * @param[in]\tphead  - pointer to pbs_list_head of job's encoded attributes\n */\n\nvoid\nsend_join_job_restart(int com, eventent *ep, int nth, job *pjob, pbs_list_head *phead)\n{\n\tsize_t mycred_len = 0;\n\tchar *mycred_buf = NULL;\n\thnodent *np;\n\tsvrattrl *psatl;\n\tint stream;\n\n\tif (pjob->ji_hosts == NULL)\n\t\treturn;\n\n\t/* find the \"nth\" hnodent (host entry) of the job and stream to it */\n\tnp = &pjob->ji_hosts[nth];\n\tstream = np->hn_stream;\n\n\t/* send message header */\n\tim_compose(stream, pjob->ji_qs.ji_jobid,\n\t\t   get_jattr_str(pjob, JOB_ATR_Cookie),\n\t\t   com, ep->ee_event, TM_NULL_TASK, IM_OLD_PROTOCOL_VER);\n\n\tif (com == IM_JOIN_JOB) {\n\t\t/* for JOIN_JOB send body of message */\n\t\t(void) get_credential(np->hn_host, pjob,\n\t\t\t\t      PBS_GC_EXEC, &mycred_buf, &mycred_len);\n\n\t\t(void) diswsi(stream, pjob->ji_numnodes);\n\t\t(void) diswsi(stream, pjob->ji_ports[0]);\n\t\t(void) diswsi(stream, pjob->ji_ports[1]);\n\t\t(void) diswsi(stream, pjob->ji_extended.ji_ext.ji_credtype);\n\t\tif (mycred_len > 0) {\n\t\t\t(void) diswcs(stream,\n\t\t\t\t      mycred_buf, mycred_len);\n\t\t\tfree(mycred_buf);\n\t\t}\n\n\t\tpsatl = (svrattrl *) GET_NEXT(*phead);\n\t\t(void) encode_DIS_svrattrl(stream, psatl);\n\t}\n\tdis_flush(stream);\n}\n\n/**\n * @brief\n *\tsend_join_job_restart_mcast - send the JOIN_JOB or RESTART message from\n *\ta Mother Superior to the Sisters over a TPP multicast stream.\n *\n * @par Functionality:\n *\tThe message header is composed and sent.\n *\tIf the message is JOIN_JOB, the following information is also\n *\tencoded and sent:\n *\t    number of nodes\tint\n *\t    stdout port\t\tint\n *\t    stderr port\t\tint\n *\t    cred type\t\tint\n *\t\t<if cred len > 0>\n *\t\tcredential\tstring\n *\t    jobattrs\t\tattrl\n *\n * @param[in]   mtfd   - The TPP multicast stream descriptor\n * @param[in]\tcom   
 - IM message type: IM_JOIN_JOB or IM_RESTART\n * @param[in]\tep     - pointer to associated event\n * @param[in]\tnth    - index of host entry (job's node number for this sister)\n * @param[in]\tpjob   - pointer to job structure for job to be run\n * @param[in]\tphead  - pointer to pbs_list_head of job's encoded attributes\n */\n\nvoid\nsend_join_job_restart_mcast(int mtfd, int com, eventent *ep, int nth, job *pjob, pbs_list_head *phead)\n{\n\tsize_t mycred_len = 0;\n\tchar *mycred_buf = NULL;\n\thnodent *np;\n\tsvrattrl *psatl;\n\tint stream;\n\n\tif (pjob->ji_hosts == NULL)\n\t\treturn;\n\n\t/* find the \"nth\" hnodent (host entry) of the job; the multicast fd is the stream */\n\tnp = &pjob->ji_hosts[nth];\n\tstream = mtfd;\n\n\t/* send message header */\n\tim_compose(stream, pjob->ji_qs.ji_jobid,\n\t\t   get_jattr_str(pjob, JOB_ATR_Cookie),\n\t\t   com, ep->ee_event, TM_NULL_TASK, IM_OLD_PROTOCOL_VER);\n\n\tif (com == IM_JOIN_JOB) {\n\t\t/* for JOIN_JOB send body of message */\n\t\t(void) get_credential(np->hn_host, pjob,\n\t\t\t\t      PBS_GC_EXEC, &mycred_buf, &mycred_len);\n\n\t\t(void) diswsi(stream, pjob->ji_numnodes);\n\t\t(void) diswsi(stream, pjob->ji_ports[0]);\n\t\t(void) diswsi(stream, pjob->ji_ports[1]);\n\t\t(void) diswsi(stream, pjob->ji_extended.ji_ext.ji_credtype);\n\t\tif (mycred_len > 0) {\n\t\t\t(void) diswcs(stream,\n\t\t\t\t      mycred_buf, mycred_len);\n\t\t\tfree(mycred_buf);\n\t\t}\n\n\t\tpsatl = (svrattrl *) GET_NEXT(*phead);\n\t\t(void) encode_DIS_svrattrl(stream, psatl);\n\t}\n\tdis_flush(stream);\n}\n"
  },
  {
    "path": "src/resmom/mom_hook_func.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tmom_hook_func.c\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <unistd.h>\n#include <sys/param.h>\n#include <dirent.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <sys/wait.h>\n#include <ctype.h>\n#include <errno.h>\n#include <assert.h>\n\n#include <memory.h>\n#include <fcntl.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include \"pbs_ifl.h\"\n#include \"libpbs.h\"\n#include \"list_link.h\"\n#include \"work_task.h\"\n#include \"hook.h\"\n#include \"log.h\"\n#include \"server_limits.h\"\n#include \"attribute.h\"\n#include \"credential.h\"\n#include \"batch_request.h\"\n#include \"job.h\"\n#include \"svrfunc.h\"\n#include \"pbs_python.h\" /* for python interpreter */\n#include <signal.h>\n#include \"mom_func.h\"\n#include \"placementsets.h\"\n#include \"resmon.h\"\n#include \"libutil.h\"\n#include \"pbs_nodes.h\"\n#include \"net_connect.h\"\n#include \"mom_hook_func.h\"\n#include \"mom_server.h\"\n#include \"hook.h\"\n#include \"pbs_reliable.h\"\n#include \"pbs_version.h\"\n#include \"tpp.h\"\n#include \"dis.h\"\n#include <openssl/sha.h>\n\n#define RESCASSN_NCPUS \"resources_assigned.ncpus\"\n#define RESCASSN_MEM \"resources_assigned.mem\"\n#define RESCASSN_HOST \"resources_assigned.host\"\n/* External functions */\n\n/* Local Private Functions */\n\n/* 
Global Data items */\nstatic int run_exit = 0; /* run exit of child */\n\nextern int exiting_tasks;\nextern int resc_access_perm;\nextern char *path_hooks;\nextern char *path_hooks_workdir;\nextern char *path_log;\nextern char *path_spool;\nextern char *mom_home;\nextern char mom_host[];\nextern char mom_short_name[];\nextern pbs_list_head svr_execjob_begin_hooks;\nextern pbs_list_head svr_execjob_prologue_hooks;\nextern pbs_list_head svr_execjob_epilogue_hooks;\nextern pbs_list_head svr_execjob_preterm_hooks;\nextern pbs_list_head svr_execjob_launch_hooks;\nextern pbs_list_head svr_execjob_end_hooks;\nextern pbs_list_head svr_exechost_periodic_hooks;\nextern pbs_list_head svr_exechost_startup_hooks;\nextern pbs_list_head svr_execjob_attach_hooks;\nextern pbs_list_head svr_execjob_resize_hooks;\nextern pbs_list_head svr_execjob_abort_hooks;\nextern pbs_list_head svr_execjob_postsuspend_hooks;\nextern pbs_list_head svr_execjob_preresume_hooks;\nextern pbs_list_head svr_hook_job_actions;\nextern pbs_list_head svr_hook_vnl_actions;\n\nextern pbs_list_head task_list_immed;\nextern pbs_list_head task_list_timed;\nextern pbs_list_head task_list_event;\nextern pbs_list_head svr_alljobs;\n\nextern char *msg_err_malloc;\n\nextern time_t time_now;\n\nextern int num_pcpus;\nextern int num_acpus;\nextern u_Long av_phy_mem;\n\nextern int becomeuser(job *pjob);\n\nextern int send_sched_recycle(char *user);\n\nstatic void post_periodic_hook(struct work_task *pwt);\nstatic void mom_process_background_hooks(struct work_task *ptask);\n\nextern vnl_t *vnlp;\nextern unsigned long hook_action_id;\nextern int internal_state_update; /* flag for sending mom information update to the server */\n\nextern int server_stream;\n\nextern char **environ;\n\n/**\n * @brief\n * \tPrint job data into stream pointed by 'fp'.\n *\n * @param[in]\tfp - stream pointer where data is flushed\n * @param[in]\tpjob - pointer to job whose data is being printed out\n *\n */\nstatic void\nfprintf_job_struct(FILE 
*fp, job *pjob)\n{\n\tpbs_list_head phead;\n\tsvrattrl *psatl;\n\tsvrattrl *ps;\n\tint i;\n\n\tif ((fp == NULL) || (pjob == NULL)) {\n\t\treturn;\n\t}\n\n\tfprintf(fp, \"%s.%s=%s\\n\", EVENT_JOB_OBJECT, \"id\", pjob->ji_qs.ji_jobid);\n\n\t/* Now print job attributes and resources */\n\tCLEAR_HEAD(phead);\n\tfor (i = 0; i < (int) JOB_ATR_LAST; i++) {\n\t\t(void) (job_attr_def + i)->at_encode(get_jattr(pjob, i), &phead, (job_attr_def + i)->at_name, NULL, ATR_ENCODE_MOM, NULL);\n\t}\n\tattrl_fixlink(&phead);\n\n\tpsatl = (svrattrl *) GET_NEXT(phead);\n\tfor (ps = psatl; ps; ps = (svrattrl *) GET_NEXT(ps->al_link)) {\n\t\tif (ps->al_resc != NULL) {\n\t\t\tfprintf(fp, \"%s.%s[%s]=%s\\n\", EVENT_JOB_OBJECT,\n\t\t\t\tps->al_name, ps->al_resc, ps->al_value);\n\t\t} else {\n\t\t\tif (strcmp(ps->al_name, ATTR_v) == 0)\n\t\t\t\tfprintf(fp, \"%s.%s=\\\"\\\"\\\"%s\\\"\\\"\\\"\\n\", EVENT_JOB_OBJECT, ps->al_name, ps->al_value);\n#if MOM_ALPS\n\t\t\telse if (strcmp(ps->al_name, ATTR_tolerate_node_failures) == 0)\n\t\t\t\tfprintf(fp, \"%s.%s=none\\n\", EVENT_JOB_OBJECT, ps->al_name);\n#endif\n\t\t\telse\n\t\t\t\tfprintf(fp, \"%s.%s=%s\\n\", EVENT_JOB_OBJECT, ps->al_name, ps->al_value);\n\t\t}\n\t}\n\n\tfree_attrlist(&phead);\n}\n\n/**\n * @brief\n *\tAlarm handling function to the set_alarm() call.\n *\n */\nstatic void\nrun_hook_alarm(void)\n{\n\trun_exit = -3;\n}\n\n/**\n * @brief\n *\tPrint to file pointed to by 'fp', the values in a vnl_t structure 'vp'.\n *\n * @param[in] \tfp - pointer to file to dump output into\n * @param[in]\thead_str - some string to prefix outputted data\n * @param[in]\tvp - pointer to a vnl_t structure containing data to print out.\n *\n * @note\n * \tvnl_t entry with attribute ATTR_NODE_TopologyInfo is ignored.\n *\n * @return none\n *\n */\nvoid\nfprint_vnl(FILE *fp, char *head_str, vnl_t *vp)\n{\n\tchar *p;\n\tint i, j;\n\tchar *attname = NULL;\n\tchar *attres = NULL;\n\n\tif ((fp == NULL) || (head_str == NULL) || (vp == NULL))\n\t\treturn;\n\n\tfor 
(i = 0; i < vp->vnl_used; i++) {\n\t\tvnal_t *vnalp;\n\n\t\tvnalp = VNL_NODENUM(vp, i);\n\n\t\tfor (j = 0; j < vnalp->vnal_used; j++) {\n\t\t\tvna_t *vnap;\n\n\t\t\tvnap = VNAL_NODENUM(vnalp, j);\n\t\t\tattname = vnap->vna_name;\n\t\t\tif (strcmp(attname, ATTR_NODE_TopologyInfo) == 0) {\n\t\t\t\t/* ignoring this internal attribute  */\n\t\t\t\t/* since it tends to be quite a big data, */\n\t\t\t\t/* plus no support right now in hooks */\n\t\t\t\t/* to modify this data. */\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tattres = NULL;\n\t\t\tp = strrchr(vnap->vna_name, '.');\n\t\t\tif (p != NULL) {\n\t\t\t\t*p = '\\0';\n\t\t\t\tp++;\n\t\t\t\tattres = p;\n\t\t\t}\n\n\t\t\tif (attres != NULL) {\n\t\t\t\tfprintf(fp, \"%s[\\\"%s\\\"].%s[%s]=%s\\n\",\n\t\t\t\t\thead_str,\n\t\t\t\t\tvnalp->vnal_id, attname, attres,\n\t\t\t\t\tvnap->vna_val);\n\t\t\t} else {\n\t\t\t\tfprintf(fp, \"%s[\\\"%s\\\"].%s=%s\\n\",\n\t\t\t\t\thead_str,\n\t\t\t\t\tvnalp->vnal_id, attname,\n\t\t\t\t\tvnap->vna_val);\n\t\t\t}\n\t\t\tif (p != NULL)\n\t\t\t\t*p = '.'; /* restore value */\n\t\t}\n\t}\n\tfflush(fp);\n}\n\n/**\n * @brief\n *\tPrint to file pointed to by 'fp', the values of each job in the 'joblist'.\n *\n * @param[in] \tfp - pointer to file to dump output into\n * @param[in]\thead_str - some string to prefix outputted data\n * @param[in]\tjoblist - pointer to a  list of jobs.\n *\n * @return none\n *\n */\nvoid\nfprint_joblist(FILE *fp, char *head_str, pbs_list_head *joblist)\n{\n\tjob *pjob;\n\tint i;\n\tpbs_list_head phead;\n\tsvrattrl *psatl;\n\tsvrattrl *ps;\n\tchar *jobid;\n\tint keeping = 0;\n\tchar *std_file = NULL;\n\n\tif ((fp == NULL) || (head_str == NULL) || (joblist == NULL))\n\t\treturn;\n\n\tfor (pjob = (job *) GET_NEXT(*joblist); pjob;\n\t     pjob = (job *) GET_NEXT(pjob->ji_alljobs)) {\n\n\t\tjobid = pjob->ji_qs.ji_jobid;\n\t\t/* Now print job attributes and resources */\n\t\tCLEAR_HEAD(phead);\n\t\tfor (i = 0; i < (int) JOB_ATR_LAST; i++) {\n\t\t\t(void) (job_attr_def + 
i)->at_encode(get_jattr(pjob, i), &phead, (job_attr_def + i)->at_name, NULL, ATR_ENCODE_MOM, NULL);\n\t\t}\n\t\tattrl_fixlink(&phead);\n\n\t\tpsatl = (svrattrl *) GET_NEXT(phead);\n\t\tfor (ps = psatl; ps; ps = (svrattrl *) GET_NEXT(ps->al_link)) {\n\t\t\tif (ps->al_resc != NULL) {\n\t\t\t\tfprintf(fp, \"%s[\\\"%s\\\"].%s[%s]=%s\\n\",\n\t\t\t\t\thead_str, jobid, ps->al_name, ps->al_resc,\n\t\t\t\t\tps->al_value);\n\t\t\t} else {\n\t\t\t\tfprintf(fp, \"%s[\\\"%s\\\"].%s=%s\\n\",\n\t\t\t\t\thead_str,\n\t\t\t\t\tjobid, ps->al_name, ps->al_value);\n\t\t\t}\n\t\t}\n\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) != 0) {\n\t\t\tfprintf(fp, \"%s[\\\"%s\\\"]._msmom=True\\n\",\n\t\t\t\thead_str, jobid);\n\t\t}\n\n\t\tstd_file = std_file_name(pjob, StdOut, &keeping);\n\t\tfprintf(fp, \"%s[\\\"%s\\\"]._stdout_file=%s\\n\",\n\t\t\thead_str, jobid, std_file ? std_file : \"\");\n\t\tstd_file = std_file_name(pjob, StdErr, &keeping);\n\t\tfprintf(fp, \"%s[\\\"%s\\\"]._stderr_file=%s\\n\",\n\t\t\thead_str, jobid, std_file ? 
std_file : \"\");\n\t\tfree_attrlist(&phead);\n\t}\n}\n\n/**\n * @brief\n *\tAdd to a vnl_t structure 'p_vnlp' the natural vnode information.\n *\n * @Note\n *\tThis uses the mom_short_name as natural vnode name.\n *\n * @param[in/out] p_vnlp - the vnl_t structure to put natural vnode info.\n *\t\t\t - Be sure to free up the *p_vnlp structure when\n *\t\t\t - done as it is malloced.\n *\n * @return none\n *\n */\nstatic void\nadd_natural_vnode_info(vnl_t **p_vnlp)\n{\n\tchar bufs[BUFSIZ];\n\tchar *msgbuf;\n\n\tif (*p_vnlp == NULL) {\n\t\tif (vnl_alloc(p_vnlp) == NULL) {\n\t\t\tlog_err(errno, __func__, \"Failed to allocate vnlp\");\n\t\t\treturn;\n\t\t}\n\t}\n\n\tsnprintf(bufs, sizeof(bufs), \"%d\", num_pcpus);\n\tif (vn_addvnr(*p_vnlp, mom_short_name, ATTR_NODE_pcpus, bufs, 0, 0, NULL) == -1) {\n\t\tpbs_asprintf(&msgbuf,\n\t\t\t     \"Failed to add '%s %s=%s' to vnode list\",\n\t\t\t     mom_short_name, ATTR_NODE_pcpus, bufs);\n\t\tlog_err(-1, __func__, msgbuf);\n\t\tfree(msgbuf);\n\t\treturn;\n\t}\n\n\tsnprintf(bufs, sizeof(bufs), \"%d\", num_acpus);\n\tif (vn_addvnr(*p_vnlp, mom_short_name, \"resources_available.ncpus\", bufs, 0, 0, NULL) == -1) {\n\t\tpbs_asprintf(&msgbuf,\n\t\t\t     \"Failed to add '%s %s=%s' to vnode list\",\n\t\t\t     mom_short_name, \"resources_available.ncpus\", bufs);\n\t\tlog_err(-1, __func__, msgbuf);\n\t\tfree(msgbuf);\n\t\treturn;\n\t}\n\n\tsnprintf(bufs, sizeof(bufs), \"%llukb\", av_phy_mem);\n\tif (vn_addvnr(*p_vnlp, mom_short_name, \"resources_available.mem\", bufs, 0, 0, NULL) == -1) {\n\t\tpbs_asprintf(&msgbuf,\n\t\t\t     \"Failed to add '%s %s=%s' to vnode list\",\n\t\t\t     mom_short_name, \"resources_available.mem\", bufs);\n\t\tlog_err(-1, __func__, msgbuf);\n\t\tfree(msgbuf);\n\t\treturn;\n\t}\n\n\tif (vn_addvnr(*p_vnlp, mom_short_name, \"resources_available.arch\",\n\t\t      arch(NULL), 0, 0, NULL) == -1) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"Failed to add '%s %s=%s' to vnode list\",\n\t\t\t 
mom_short_name, \"resources_available.arch\", arch(NULL));\n\t\tlog_err(-1, __func__, log_buffer);\n\t\treturn;\n\t}\n\n\tif (vn_addvnr(*p_vnlp, mom_short_name, \"pbs_version\",\n\t\t      PBS_VERSION, 0, 0, NULL) == -1) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"Failed to add '%s %s=%s' to vnode list\",\n\t\t\t mom_short_name, \"pbs_version\", PBS_VERSION);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\treturn;\n\t}\n}\n\n/**\n * @brief\n *\tFree any vnl_t structures contained in a hook_vnl_action list and\n *\tdelete the individual list structure.\n *\n * @param[in]\tlisth - the head of the list of hook_vnl_action structures\n *\n * @return none\n *\n */\nvoid\nvna_list_free(pbs_list_head listh)\n{\n\tstruct hook_vnl_action *pvna;\n\tstruct hook_vnl_action *nxt;\n\n\tpvna = (struct hook_vnl_action *) GET_NEXT(listh);\n\n\twhile (pvna != NULL) {\n\t\tnxt = (struct hook_vnl_action *) GET_NEXT(pvna->hva_link);\n\t\tif (pvna->hva_vnl)\n\t\t\tvnl_free((vnl_t *) pvna->hva_vnl);\n\t\tdelete_link(&pvna->hva_link);\n\t\tfree(pvna);\n\t\tpvna = nxt;\n\t}\n}\n\n/**\n * @brief\n *\tCopies 'src_file' to 'dest_file' and sets 'dest_file' ownership\n *\tand permissions from 'pjob''s data.\n *\n * @param[in]\tsrc_file - the source file to copy\n * @param[in]\tdest_file - the destination file\n * @param[in]\tpjob - job whose ownership and permissions are applied to\n * \t\t\tthe 'dest_file'.\n *\n * @return int\n * @retval 0  - for success\n * @retval -1 - for failure.\n *\n */\nstatic int\ncopy_file_and_set_owner(char *src_file, char *dest_file, job *pjob)\n{\n\tint st;\n\n\tif ((src_file == NULL) || (dest_file == NULL) || (pjob == NULL))\n\t\treturn -1;\n\n\tst = copy_file_internal(src_file, dest_file);\n\n\tswitch (st) {\n\t\tcase 0:\n\t\t\tbreak;\n\t\tcase COPY_FILE_BAD_INPUT:\n\t\t\tlog_errf(errno, __func__,\n\t\t\t\t \"copy_file_internal: bad input parameter src_file: %s; dest_file: %s\",\n\t\t\t\t src_file, dest_file);\n\t\t\treturn -1;\n\t\tcase COPY_FILE_BAD_SOURCE:\n\t\t\tsnprintf(log_buffer, 
sizeof(log_buffer),\n\t\t\t\t \"Failed to open file %s\",\n\t\t\t\t src_file);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\treturn -1;\n\t\tcase COPY_FILE_BAD_DEST:\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"Failed to open file copy %s\",\n\t\t\t\t dest_file);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\treturn -1;\n\t\tcase COPY_FILE_BAD_WRITE:\n\t\t\tsnprintf(log_buffer,\n\t\t\t\t sizeof(log_buffer),\n\t\t\t\t \"Failed writing to file %s\",\n\t\t\t\t dest_file);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\treturn -1;\n\t\tdefault:\n\t\t\tsnprintf(log_buffer,\n\t\t\t\t sizeof(log_buffer),\n\t\t\t\t \"Unknown copy_file_internal return %d; src_file: %s; dest_file: %s\",\n\t\t\t\t st, src_file, dest_file);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\treturn -1;\n\t}\n#ifndef WIN32\n\tif (chown(dest_file,\n\t\t  pjob->ji_qs.ji_un.ji_momt.ji_exuid,\n\t\t  pjob->ji_qs.ji_un.ji_momt.ji_exgid) == -1) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"chown: %s\", dest_file);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\t(void) unlink(dest_file);\n\t\treturn -1;\n\t}\n#else /* Windows */\n\n\tif (secure_file2(dest_file,\n\t\t\t pjob->ji_user->pw_name,\n\t\t\t READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED,\n\t\t\t \"Administrators\",\n\t\t\t READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED) == 0) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE, \"Unable to change permissions of the file for user: %s, file: %s\",\n\t\t\t pjob->ji_user->pw_name, dest_file);\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t  __func__, log_buffer);\n\t\t(void) unlink(dest_file);\n\t\treturn -1;\n\t}\n#endif\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \tAdd to the vnodes list 'vnl' the data found in the vnode-to-mpi-process\n *\tmapping array 'vnode_entry', whose number of entries is in 'num_vnodes'.\n *\n * @param[out]\t\tvnl - pointer to the destination list of vnodes.\n * @param[in]\t\tvnode_entry - array of 
vnode-to-mpi-process mappings.\n * @param[in]\t\tnum_vnodes - number of entries in 'vnode_entry'.\n * @param[out]\t\tmatched_nvnode - set to 1 if one of the added vnodes\n * \t\t\t\t\tmatches the natural vnode.\n * @return int\n * @retval 0  - for success\n * @retval -1 - for failure\n *\n */\nstatic int\nvnl_add_vnode_entries(vnl_t *vnl, vmpiprocs *vnode_entry, int num_vnodes,\n\t\t      int *matched_nvnode)\n{\n\tint i, rc;\n\tchar bufs[BUFSIZ];\n\tchar *msgbuf;\n\tchar *v_name = NULL;\n\tint v_cpus = 0;\n\tLong v_mem = 0;\n\tchar *v_cpus_o_str = NULL;\n\tchar *v_mem_o_str = NULL;\n\n\tif ((vnl == NULL) || (vnode_entry == NULL) ||\n\t    (matched_nvnode == NULL)) {\n\t\treturn (0); /* incomplete input, no need to proceed  */\n\t}\n\n\tfor (i = 0; i < num_vnodes; i++) {\n\n\t\tv_name = vnode_entry[i].vn_vname;\n\n\t\t/* matching vnode names is case-insensitive, */\n\t\t/* see find_nodebyname() where it does strcasecmp() */\n\t\t/* Also, we need to use the same character case as mom_short_name */\n\t\t/* which is the natural vnode key returned by */\n\t\t/* pbs.get_local_nodename() */\n\t\tif (strcasecmp(v_name, mom_short_name) == 0)\n\t\t\t*matched_nvnode = 1;\n\n\t\tv_cpus = vnode_entry[i].vn_cpus;\n\t\tif ((v_cpus_o_str = vn_exist(vnl, v_name,\n\t\t\t\t\t     RESCASSN_NCPUS)) != NULL) {\n\t\t\tv_cpus += atoi(v_cpus_o_str);\n\t\t}\n\n\t\tsnprintf(bufs, sizeof(bufs), \"%d\", v_cpus);\n\t\trc = vn_addvnr(vnl, v_name, RESCASSN_NCPUS, bufs, 0, 0, NULL);\n\t\tif (rc == -1) {\n\t\t\tpbs_asprintf(&msgbuf, \"%s:failed to add '%s=%s'\",\n\t\t\t\t     v_name, RESCASSN_NCPUS, bufs);\n\t\t\tlog_err(-1, __func__, msgbuf);\n\t\t\tfree(msgbuf);\n\t\t\treturn (-1);\n\t\t}\n\n\t\tif ((vnode_entry[i].vn_host != NULL) &&\n\t\t    (vnode_entry[i].vn_host->hn_host != NULL)) {\n\n\t\t\tsnprintf(bufs, sizeof(bufs), \"%s\", vnode_entry[i].vn_host->hn_host);\n\t\t\trc = vn_addvnr(vnl, v_name, RESCASSN_HOST, bufs, 0, 0, NULL);\n\t\t\tif (rc == -1) {\n\t\t\t\tpbs_asprintf(&msgbuf, \"%s:failed to add 
'%s=%s'\",\n\t\t\t\t\t     v_name, RESCASSN_HOST, bufs);\n\t\t\t\tlog_err(-1, __func__, msgbuf);\n\t\t\t\tfree(msgbuf);\n\t\t\t\treturn (-1);\n\t\t\t}\n\t\t}\n\n\t\tv_mem = vnode_entry[i].vn_mem;\n\t\tif ((v_mem_o_str = vn_exist(vnl, v_name,\n\t\t\t\t\t    RESCASSN_MEM)) != NULL) {\n\t\t\tv_mem += atoL(v_mem_o_str);\n\t\t}\n\n\t\tsnprintf(bufs, sizeof(bufs), \"%lldkb\", v_mem);\n\t\trc = vn_addvnr(vnl, v_name, RESCASSN_MEM, bufs, 0, 0, NULL);\n\n\t\tif (rc == -1) {\n\t\t\tpbs_asprintf(&msgbuf, \"%s:failed to add '%s=%s'\",\n\t\t\t\t     v_name, RESCASSN_MEM, bufs);\n\t\t\tlog_err(-1, __func__, msgbuf);\n\t\t\tfree(msgbuf);\n\t\t\treturn (-1);\n\t\t}\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n *\tDuplicates a mom_process_hooks_params_t structure (a shallow copy;\n *\tmember pointers are copied as-is).\n *\n * @param[in]   php - the structure to duplicate\n *\n * @return new_php\tfor success\n * @return NULL\t\tfor error\n *\n */\nmom_process_hooks_params_t *\nduplicate_php(mom_process_hooks_params_t *php)\n{\n\tmom_process_hooks_params_t *new_php;\n\n\tif ((new_php = (mom_process_hooks_params_t *) malloc(\n\t\t     sizeof(mom_process_hooks_params_t))) == NULL) {\n\t\tlog_err(errno, __func__, MALLOC_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\tnew_php->hook_event = php->hook_event;\n\tnew_php->req_user = php->req_user;\n\tnew_php->req_host = php->req_host;\n\tnew_php->hook_msg = php->hook_msg;\n\tnew_php->msg_len = php->msg_len;\n\tnew_php->update_svr = php->update_svr;\n\tnew_php->hook_input = php->hook_input;\n\tnew_php->hook_output = php->hook_output;\n\tnew_php->parent_wait = php->parent_wait;\n\n\treturn new_php;\n}\n\n/**\n * @brief\n *\tRuns the hook 'phook' in a child process in response to 'event_type'\n *\twith input parameter 'hook_input'.\n *\n * @param[in]\tphook\t- pointer to hook being run.\n * @param[in]\tevent_type - the hook event type (e.g. 
HOOK_EVENT_EXECJOB_BEGIN)\n * @param[in]\thook_input - struct containing input parameters.\n * @param[in]\treq_user - user executing the hook\n * @param[in]\treq_host - where the hook is executing\n * @param[in]   parent_wait - if set to 1, parent process will wait for hook to\n *\t\tfinish; otherwise, it gets run in the background and when done,\n *\t\tthe 'post_func' below will execute.\n *\t\tWith parent_wait == 0, file_in, file_out are not filled in.\n * @param[in]   post_func - the function to execute when the backgrounded hook\n *\t\tfinishes execution.\n * @param[in/out] file_in \t- filled in with the hook input file used\n * @param[in/out] file_out\t- filled in with the hook output file used\n * @param[in/out] file_data\t- filled in with the hook server data file used\n * @param[in]\t  file_size\t- max size of file_in and file_out string\n *\t\t\t\t  buffers.\n * @param[in]\t  php - holds the arguments from mom_process_hooks function\n *\n * @return int\t\tthe exit value of the executing hook script\n * @return 0\t\tfor success\n * @return != 0\t\tfor error\n *\n */\nstatic int\nrun_hook(hook *phook, unsigned int event_type, mom_hook_input_t *hook_input,\n\t char *req_user, char *req_host,\n\t int parent_wait, void (*post_func)(struct work_task *),\n\t char *file_in, char *file_out, char *file_data, size_t file_size,\n\t mom_process_hooks_params_t *php)\n{\n\n\tFILE *fp = NULL;\n\tchar in_data[LOG_BUF_SIZE + 1];\n\tchar hook_inputfile[MAXPATHLEN + 1];\n\tchar hook_outputfile[MAXPATHLEN + 1];\n\tchar hook_datafile[MAXPATHLEN + 1];\n\tchar script_copy[MAXPATHLEN + 1];\n\tchar hook_config_copy[MAXPATHLEN + 1];\n\tchar rescdef_copy[MAXPATHLEN + 1];\n\tchar log_file[MAXPATHLEN + 1];\n\tchar *script_file = NULL;\n\tchar *rescdef_file = NULL;\n\tint waitst = 0;\n\tchar pypath[MAXPATHLEN + 1];\n\tpid_t child;\n\tstruct stat sbuf;\n\tint runas_jobuser = 0; /* if 1, run as job's euser */\n\tstruct work_task *ptask;\n\tvnl_t 
*vnl = NULL;\n\tvnl_t *vnl_fail = NULL;\n\tpbs_list_head *failed_mom_list = NULL;\n\tpbs_list_head *succeeded_mom_list = NULL;\n\tvnl_t *nv = NULL;\n\tint vnl_created = 0;\n\tjob *pjob = NULL;\n\tint matched_nvnode = 0; /* match natural vnode */\n\tchar *arg[14];\n\tpid_t myseq; /* just some unique sequence number */\n\tchar logmask[BUFSIZ];\n\tchar path_hooks_rescdef[MAXPATHLEN + 1];\n\tchar cmdline[2 * BUFSIZ + 16]; /* Additional bytes for command options */\n\tstruct passwd *pwdp = NULL;\n#ifdef WIN32\n\tFILE *fp2 = NULL;\n\tpio_handles pio;\n#endif\n\tchar *progname = NULL;\n\tchar **argv = NULL;\n\tchar **env = NULL;\n\tchar *env_str = NULL;\n\tpid_t pid = -1;\n\tint k;\n\tchar script_path[MAXPATHLEN + 1];\n\tchar hook_config_path[MAXPATHLEN + 1];\n\tchar *p;\n\tpbs_list_head *jobs_list = NULL;\n\tchar *pc;\n\tint keeping = 0;\n\tchar *std_file = NULL;\n\treliable_job_node *rjn;\n\n\tif ((phook == NULL) || (req_user == NULL) || (req_host == NULL)) {\n\t\tlog_err(-1, __func__, \"Bad input received!\");\n\t\treturn -1;\n\t}\n\n\tif (phook->hook_name == NULL) {\n\t\tlog_err(-1, __func__, \"hook has no name\");\n\t\treturn -1;\n\t}\n\n\tif (phook->script == NULL) {\n\t\tlog_errf(-1, __func__, \"hook %s has no script value\", phook->hook_name);\n\t\treturn -1;\n\t}\n\tif (hook_input == NULL) {\n\t\tlog_err(-1, __func__, \"missing input argument to event\");\n\t\treturn (-1);\n\t}\n\n\t/* Validate input parameters */\n\n\tswitch (event_type) {\n\t\tcase HOOK_EVENT_EXECJOB_LAUNCH:\n\t\tcase HOOK_EVENT_EXECJOB_ATTACH:\n\t\t\tif (event_type == HOOK_EVENT_EXECJOB_LAUNCH) {\n\t\t\t\tprogname = hook_input->progname;\n\t\t\t\targv = hook_input->argv;\n\t\t\t\tenv = hook_input->env;\n\t\t\t} else {\n\t\t\t\tpid = hook_input->pid;\n\t\t\t}\n\t\t\t/* falls through */\n\t\tcase HOOK_EVENT_EXECJOB_BEGIN:\n\t\tcase HOOK_EVENT_EXECJOB_RESIZE:\n\t\tcase HOOK_EVENT_EXECJOB_PROLOGUE:\n\t\tcase HOOK_EVENT_EXECJOB_PRETERM:\n\t\tcase HOOK_EVENT_EXECJOB_EPILOGUE:\n\t\tcase 
HOOK_EVENT_EXECJOB_END:\n\t\tcase HOOK_EVENT_EXECJOB_ABORT:\n\t\tcase HOOK_EVENT_EXECJOB_POSTSUSPEND:\n\t\tcase HOOK_EVENT_EXECJOB_PRERESUME:\n\t\t\tpjob = hook_input->pjob;\n\t\t\tif ((event_type == HOOK_EVENT_EXECJOB_LAUNCH) ||\n\t\t\t    (event_type == HOOK_EVENT_EXECJOB_PROLOGUE)) {\n\t\t\t\tvnl = (vnl_t *) hook_input->vnl;\n\t\t\t\tvnl_fail = (vnl_t *) hook_input->vnl_fail;\n\t\t\t\tfailed_mom_list = hook_input->failed_mom_list;\n\t\t\t\tsucceeded_mom_list = hook_input->succeeded_mom_list;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECHOST_PERIODIC:\n\t\t\tjobs_list = hook_input->jobs_list;\n\t\t\t/* falls through */\n\t\tcase HOOK_EVENT_EXECHOST_STARTUP:\n\t\t\tvnl = hook_input->vnl;\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tlog_err(-1, __func__, \"unknown hook event\");\n\t\t\treturn (-1);\n\t}\n\n\tsnprintf(pypath, MAXPATHLEN, \"%s/bin/pbs_python\", pbs_conf.pbs_exec_path);\n\trun_exit = 0;\n\n\tif ((phook->user == HOOK_PBSUSER) && (event_type & USER_MOM_EVENTS))\n\t\trunas_jobuser = 1;\n\n\tchild = fork();\n\tif (child > 0) { /* parent */\n\n\t\tif (!parent_wait) {\n\t\t\tptask = set_task(WORK_Deferred_Child, child,\n\t\t\t\t\t post_func, phook);\n\t\t\tif (!ptask) {\n\t\t\t\tlog_err(errno, __func__, msg_err_malloc);\n\t\t\t\treturn (-1);\n\t\t\t}\n\t\t\tif (php) {\n\t\t\t\tptask->wt_parm2 = (void *) php;\n\t\t\t\tif (php->hook_input && php->hook_input->pjob)\n\t\t\t\t\tphp->hook_input->pjob->ji_bg_hook_task = ptask;\n\t\t\t}\n\t\t\treturn (0); /* no hook output file at this time */\n\t\t} else if (php)\n\t\t\tphp->child = child;\n\n\t\tset_alarm(phook->alarm, run_hook_alarm);\n\t\twhile (waitpid(child, &waitst, 0) < 0) { /* error on wait */\n\t\t\tif (errno != EINTR) {\t\t /* continue loop on signal */\n\t\t\t\trun_exit = -5;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tkill(-child, SIGKILL);\n\t\t}\n\t\tset_alarm(0, NULL);\n\t\tkill(-child, SIGKILL);\n\t\tif (run_exit == 0) {\n\t\t\tif (WIFEXITED(waitst)) {\n\t\t\t\trun_exit = WEXITSTATUS(waitst);\n\t\t\t} else if 
(WIFSIGNALED(waitst)) {\n\t\t\t\trun_exit = -4;\n\t\t\t}\n\t\t} else {\n\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_HOOK, LOG_INFO, phook->hook_name,\n\t\t\t\t   \"prematurely completed %s, exit=%d\", ((struct python_script *) (phook->script))->path, run_exit);\n\t\t}\n\n\t} else {\n\t\trun_exit = 255;\n\t\tif (!child) { /* child */\n\t\t\t/* releasing ports */\n\t\t\ttpp_terminate();\n\t\t\tnet_close(-1);\n\t\t\tsetsid();\n\n\t\t\tmyseq = getpid();\n\t\t} else if (errno == ENOSYS) {\n\t\t\t/* fork not available continue in foreground */\n\t\t\tmyseq = rand();\n\t\t\tchild = myseq;\n\t\t\tif (php)\n\t\t\t\tphp->child = child;\n\t\t} else {\n\t\t\tlog_err(errno, __func__, \"fork failed\");\n\t\t\tgoto run_hook_exit;\n\t\t}\n\n\t\tsnprintf(path_hooks_rescdef, MAXPATHLEN, \"%s%s\", path_hooks, PBS_RESCDEF);\n\t\tpbs_strncpy(hook_config_path, ((struct python_script *) phook->script)->path, sizeof(hook_config_path));\n\t\tp = strstr(hook_config_path, HOOK_SCRIPT_SUFFIX);\n\t\tif (p != NULL) {\n\t\t\t/* replace <HOOK_SCRIPT_SUFFIX> with <HOOK_CONFIG_SUFFIX>: */\n\t\t\t/* must copy up to HOOK_SCRIPT_SUFFIX length so as to not */\n\t\t\t/* overflow */\n\t\t\tsnprintf(p, sizeof(hook_config_path) - (p - hook_config_path), \"%s\", HOOK_CONFIG_SUFFIX);\n\t\t\tif (stat(hook_config_path, &sbuf) != 0) {\n\t\t\t\thook_config_path[0] = '\\0';\n\t\t\t}\n\t\t} else\n\t\t\thook_config_path[0] = '\\0';\n\n\t\tif (runas_jobuser) {\n\n\t\t\t/* use world writable spool directory */\n\n\t\t\tsnprintf(hook_inputfile, MAXPATHLEN, FMT_HOOK_INFILE, path_spool, hook_event_as_string(event_type), phook->hook_name, myseq);\n\t\t\tsnprintf(hook_outputfile, MAXPATHLEN, FMT_HOOK_OUTFILE, path_spool, hook_event_as_string(event_type), phook->hook_name, myseq);\n\t\t\tsnprintf(hook_datafile, MAXPATHLEN, FMT_HOOK_DATAFILE, path_spool, hook_event_as_string(event_type), phook->hook_name, myseq);\n\t\t\tsnprintf(script_copy, MAXPATHLEN, FMT_HOOK_SCRIPT, path_spool, 
myseq);\n\t\t\tsnprintf(hook_config_copy, MAXPATHLEN, FMT_HOOK_CONFIG, path_spool, myseq);\n\t\t\tsnprintf(rescdef_copy, MAXPATHLEN, FMT_HOOK_RESCDEF, path_spool, myseq);\n\t\t\tsnprintf(log_file, MAXPATHLEN, FMT_HOOK_LOG, path_spool, myseq);\n\n\t\t\t/*\n\t\t\t * get the password entry for the user under which the\n\t\t\t * job is to be run we do this now to save a few things\n\t\t\t * in the job structure\n\t\t\t * All this info needed to pre-fetch for becomeuser()\n\t\t\t * to work.\n\t\t\t */\n\n\t\t\tif (pjob == NULL) {\n\t\t\t\tlog_err(-1, __func__, \"No job parameter passed!\");\n\t\t\t\tgoto run_hook_exit;\n\t\t\t}\n\n\t\t\tpwdp = check_pwd(pjob);\n\t\t\tif (pwdp == NULL) {\n\t\t\t\tlog_event(PBSEVENT_JOB | PBSEVENT_SECURITY, PBS_EVENTCLASS_JOB, LOG_ERR, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\tgoto run_hook_exit;\n\t\t\t}\n\n\t\t\tif (pjob->ji_grpcache == NULL) {\n\t\t\t\tlog_errf(-1, __func__, \"job %s has no ji_grpcache value\", pjob->ji_qs.ji_jobid);\n\t\t\t\tgoto run_hook_exit;\n\t\t\t}\n\n\t\t\tpjob->ji_qs.ji_un.ji_momt.ji_exuid = pjob->ji_grpcache->gc_uid;\n\t\t\tpjob->ji_qs.ji_un.ji_momt.ji_exgid = pjob->ji_grpcache->gc_gid;\n\n\t\t\tpbs_strncpy(script_path, ((struct python_script *) phook->script)->path, sizeof(script_path));\n\n\t\t\t/* copy hook_config_path to user-accessible [PBS_HOME]/path_spool. 
*/\n\t\t\tif (hook_config_path[0] != '\\0') {\n\t\t\t\tif (copy_file_and_set_owner(hook_config_path, hook_config_copy, pjob) == -1)\n\t\t\t\t\tgoto run_hook_exit;\n\t\t\t\t/* set hook_config_path to hook_config_copy if the copying was successful */\n\t\t\t\tsnprintf(hook_config_path, sizeof(hook_config_path), \"%s\", hook_config_copy);\n\t\t\t}\n\t\t\t/* copy script_path to user-accessible [PBS_HOME]/path_spool */\n\t\t\tif (copy_file_and_set_owner(script_path, script_copy, pjob) == -1)\n\t\t\t\tgoto run_hook_exit;\n\t\t\tif (stat(path_hooks_rescdef, &sbuf) == 0) {\n\t\t\t\tif (copy_file_and_set_owner(path_hooks_rescdef, rescdef_copy, pjob) == -1)\n\t\t\t\t\tgoto run_hook_exit;\n\t\t\t\trescdef_file = (char *) rescdef_copy;\n\t\t\t}\n\t\t\tscript_file = (char *) script_copy;\n\n#ifndef WIN32\n\t\t\tif (chown(script_file, pjob->ji_qs.ji_un.ji_momt.ji_exuid, pjob->ji_qs.ji_un.ji_momt.ji_exgid) == -1) {\n\t\t\t\tlog_errf(errno, __func__, \"chown: %s\", script_file);\n\t\t\t\tgoto run_hook_exit;\n\t\t\t}\n#else /* Windows */\n\n\t\t\tif (secure_file2(script_file, pjob->ji_user->pw_name, READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED,\n\t\t\t\t\t \"Administrators\", READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED) == 0) {\n\t\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR, __func__,\n\t\t\t\t\t   \"Unable to change permissions of the script file for user: %s, file: %s\",\n\t\t\t\t\t   pjob->ji_user->pw_name, script_file);\n\t\t\t\tgoto run_hook_exit;\n\t\t\t}\n#endif\n\n\t\t\tif ((fp = fopen(hook_inputfile, \"w\")) == NULL) {\n\t\t\t\tlog_errf(errno, __func__, \"open of input file %s failed!\", hook_inputfile);\n\t\t\t\tgoto run_hook_exit;\n\t\t\t}\n\n#ifndef WIN32\n\t\t\tif (chown(hook_inputfile, pjob->ji_qs.ji_un.ji_momt.ji_exuid, pjob->ji_qs.ji_un.ji_momt.ji_exgid) == -1) {\n\t\t\t\tlog_errf(errno, __func__, \"chown: %s\", hook_inputfile);\n\t\t\t\tgoto run_hook_exit;\n\t\t\t}\n\n\t\t\t/* NOTE: \"launch\" hook is already running as the user 
*/\n\t\t\tif (becomeuser(pjob) != 0) {\n\t\t\t\tchar *jobuser;\n\n\t\t\t\tjobuser = get_jattr_str(pjob, JOB_ATR_euser);\n\t\t\t\tlog_errf(errno, __func__, \"Unable to become user %s!\", (jobuser ? jobuser : \"<job euser unset>\"));\n\t\t\t\tgoto run_hook_exit;\n\t\t\t}\n#else\n\t\t\tif (secure_file2(hook_inputfile, pjob->ji_user->pw_name, READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED,\n\t\t\t\t\t \"Administrators\", READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED) == 0) {\n\t\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR, __func__,\n\t\t\t\t\t   \"Unable to change permissions of the hook input file for user: %s, file: %s\",\n\t\t\t\t\t   pjob->ji_user->pw_name, hook_inputfile);\n\t\t\t\tgoto run_hook_exit;\n\t\t\t}\n\n\t\t\t/* Force create the log file, to secure afterwards */\n\t\t\tif ((fp2 = fopen(log_file, \"w\")) == NULL) {\n\t\t\t\tlog_errf(errno, __func__, \"open of log file %s failed!\", log_file);\n\t\t\t\tgoto run_hook_exit;\n\t\t\t}\n\t\t\tfclose(fp2);\n\t\t\tfp2 = NULL;\n\n\t\t\tif (secure_file2(log_file, pjob->ji_user->pw_name, READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED,\n\t\t\t\t\t \"Administrators\", READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED) == 0) {\n\t\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR, __func__,\n\t\t\t\t\t   \"Unable to change permissions of the log file for user: %s, file: %s\",\n\t\t\t\t\t   pjob->ji_user->pw_name, log_file);\n\t\t\t\tgoto run_hook_exit;\n\t\t\t}\n#endif\n\n\t\t\t/*\n\t\t\t * chdir to path_spool like a job\n\t\t\t * NOTE: For launch hooks, it is already in the\n\t\t\t * environment of the job, so we don't want to\n\t\t\t * disturb its current working directory.\n\t\t\t */\n\t\t\tif (chdir(path_spool) != 0)\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_WARNING, phook->hook_name, \"unable to go to spool directory\");\n\t\t} else { /* run as root */\n\t\t\tsnprintf(hook_inputfile, MAXPATHLEN, FMT_HOOK_INFILE, path_hooks_workdir, 
hook_event_as_string(event_type), phook->hook_name, myseq);\n\t\t\tsnprintf(hook_outputfile, MAXPATHLEN, FMT_HOOK_OUTFILE, path_hooks_workdir, hook_event_as_string(event_type), phook->hook_name, myseq);\n\t\t\tsnprintf(hook_datafile, MAXPATHLEN, FMT_HOOK_DATAFILE, path_hooks_workdir, hook_event_as_string(event_type), phook->hook_name, myseq);\n\n\t\t\tscript_file = ((struct python_script *) phook->script)->path;\n\t\t\tif (script_file == NULL) {\n\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_HOOK, LOG_ERR, phook->hook_name, \"No script file found!\");\n\t\t\t\tgoto run_hook_exit;\n\t\t\t}\n\n\t\t\tif (stat(path_hooks_rescdef, &sbuf) == 0)\n\t\t\t\trescdef_file = path_hooks_rescdef;\n\n\t\t\tlog_file[0] = '\\0';\n\n\t\t\tif ((fp = fopen(hook_inputfile, \"w\")) == NULL) {\n\t\t\t\tlog_errf(errno, __func__, \"open of input file %s failed!\", hook_inputfile);\n\t\t\t\tgoto run_hook_exit;\n\t\t\t}\n\n#ifdef WIN32\n\t\t\tif (secure_file(hook_inputfile, \"Administrators\", READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED) == 0)\n\t\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR, __func__,\n\t\t\t\t\t   \"Failed to change hook input file permissions for file: %s\", hook_inputfile);\n#endif\n\t\t\t/*\n\t\t\t * still need to chdir() here. 
A periodic hook may be\n\t\t\t * run repeatedly and may no longer be in the\n\t\t\t * original working directory\n\t\t\t */\n\t\t\tif (chdir(path_hooks_workdir) != 0)\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_WARNING, phook->hook_name, \"unable to go to hooks tmp directory\");\n\t\t}\n\n\t\tswitch (event_type) {\n\t\t\tcase HOOK_EVENT_EXECJOB_LAUNCH:\n\t\t\tcase HOOK_EVENT_EXECJOB_ATTACH:\n\t\t\t\tif (event_type == HOOK_EVENT_EXECJOB_LAUNCH) {\n\n\t\t\t\t\tif ((progname != NULL) && (progname[0] != '\\0'))\n\t\t\t\t\t\tfprintf(fp, \"%s.%s=%s\\n\", EVENT_OBJECT, PY_EVENT_PARAM_PROGNAME, progname);\n\n\t\t\t\t\tif (argv != NULL) {\n\t\t\t\t\t\tk = 0;\n\t\t\t\t\t\twhile (argv[k]) {\n\t\t\t\t\t\t\tfprintf(fp, \"%s.%s[%d]=%s\\n\", EVENT_OBJECT, PY_EVENT_PARAM_ARGLIST, k, argv[k]);\n\t\t\t\t\t\t\tk++;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tenv_str = env_array_to_str(env, ',');\n\t\t\t\t\tif (env_str != NULL) {\n\t\t\t\t\t\tif (env_str[0] != '\\0')\n\t\t\t\t\t\t\tfprintf(fp, \"%s.%s=\\\"\\\"\\\"%s\\\"\\\"\\\"\\n\", EVENT_OBJECT, PY_EVENT_PARAM_ENV, env_str);\n\t\t\t\t\t\tfree(env_str);\n\t\t\t\t\t}\n\t\t\t\t} else\n\t\t\t\t\tfprintf(fp, \"%s.%s=%d\\n\", EVENT_OBJECT, PY_EVENT_PARAM_PID, pid);\n\t\t\t\t/* fall through */\n\t\t\tcase HOOK_EVENT_EXECJOB_BEGIN:\n\t\t\tcase HOOK_EVENT_EXECJOB_RESIZE:\n\t\t\tcase HOOK_EVENT_EXECJOB_PROLOGUE:\n\t\t\tcase HOOK_EVENT_EXECJOB_EPILOGUE:\n\t\t\tcase HOOK_EVENT_EXECJOB_END:\n\t\t\tcase HOOK_EVENT_EXECJOB_PRETERM:\n\t\t\tcase HOOK_EVENT_EXECJOB_ABORT:\n\t\t\tcase HOOK_EVENT_EXECJOB_POSTSUSPEND:\n\t\t\tcase HOOK_EVENT_EXECJOB_PRERESUME:\n\t\t\t\tif (pjob == NULL) {\n\t\t\t\t\tlog_err(-1, __func__, \"No job parameter passed!\");\n\t\t\t\t\tgoto run_hook_exit;\n\t\t\t\t}\n\n\t\t\t\t/* pass job parameter */\n\t\t\t\tfprintf_job_struct(fp, pjob);\n\t\t\t\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_CHKPT)\n\t\t\t\t\tfprintf(fp, \"%s._checkpointed=True\\n\", EVENT_JOB_OBJECT);\n\t\t\t\tif ((pjob->ji_qs.ji_svrflags 
& JOB_SVFLG_HERE) != 0)\n\t\t\t\t\tfprintf(fp, \"%s._msmom=True\\n\", EVENT_JOB_OBJECT);\n\t\t\t\tstd_file = std_file_name(pjob, StdOut, &keeping);\n\t\t\t\tfprintf(fp, \"%s._stdout_file=%s\\n\", EVENT_JOB_OBJECT, std_file ? std_file : \"\");\n\t\t\t\tstd_file = std_file_name(pjob, StdErr, &keeping);\n\t\t\t\tfprintf(fp, \"%s._stderr_file=%s\\n\", EVENT_JOB_OBJECT, std_file ? std_file : \"\");\n\n\t\t\t\t/* pass vnode list parameter */\n\t\t\t\tif (vnl == NULL) {\n\t\t\t\t\tint errcode;\n\n\t\t\t\t\tif (vnl_alloc(&vnl) == NULL) {\n\t\t\t\t\t\tlog_err(errno, __func__, \"Failed to allocate a vnlp structure\");\n\t\t\t\t\t\tgoto run_hook_exit;\n\t\t\t\t\t}\n\t\t\t\t\tvnl_created = 1;\n\n\t\t\t\t\tif (pjob->ji_numnodes == 0) {\n\t\t\t\t\t\t/* this executes in a child process */\n\t\t\t\t\t\tif ((errcode = job_nodes(pjob)) != 0) {\n\t\t\t\t\t\t\tlog_errf(-1, __func__, \"job_nodes failed with error %d\", errcode);\n\t\t\t\t\t\t\tgoto run_hook_exit;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\t/*\n\t\t\t\t\t* This looks into pjob->ji_assn_vnodes[] whose\n\t\t\t\t\t* entries map exec_vnode, the vnodes\n\t\t\t\t\t* assigned to the job along with the\n\t\t\t\t\t* allocated cpus and mem for each vnode.\n\t\t\t\t\t* For example, given\n\t\t\t\t\t*   exec_vnode=(hostA[0]:ncpus=3:mem=100kb)+\n\t\t\t\t\t*              (hostB[0]:mem=400kb:ncpus=1)+\n\t\t\t\t\t*              (hostA[0]:ncpus=2:mem=200kb)\n\t\t\t\t\t* vnl would end up getting accumulated\n\t\t\t\t\t* entries:\n\t\t\t\t\t*\t(hostA[0],ncpus=5,mem=300kb)\n\t\t\t\t\t*\t(hostB[0],ncpus=1,mem=400kb)\n\t\t\t\t\t*/\n\n\t\t\t\t\tif ((vnl_add_vnode_entries(vnl, pjob->ji_assn_vnodes, pjob->ji_num_assn_vnodes, &matched_nvnode) == -1))\n\t\t\t\t\t\tgoto run_hook_exit;\n\n\t\t\t\t\tif (matched_nvnode)\n\t\t\t\t\t\tadd_natural_vnode_info(&vnl);\n\t\t\t\t}\n\t\t\t\tfprint_vnl(fp, EVENT_VNODELIST_OBJECT, vnl);\n\t\t\t\tif (vnl_fail != NULL)\n\t\t\t\t\tfprint_vnl(fp, EVENT_VNODELIST_FAIL_OBJECT, vnl_fail);\n\n\t\t\t\tif (failed_mom_list 
!= NULL) {\n\t\t\t\t\trjn = (reliable_job_node *) GET_NEXT(*failed_mom_list);\n\t\t\t\t\twhile (rjn) {\n\t\t\t\t\t\tfprintf(fp, \"%s=%s\\n\", JOB_FAILED_MOM_LIST_OBJECT, rjn->rjn_host);\n\t\t\t\t\t\trjn = (reliable_job_node *) GET_NEXT(rjn->rjn_link);\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif (succeeded_mom_list != NULL) {\n\t\t\t\t\trjn = (reliable_job_node *) GET_NEXT(*succeeded_mom_list);\n\t\t\t\t\twhile (rjn) {\n\t\t\t\t\t\tfprintf(fp, \"%s=%s\\n\", JOB_SUCCEEDED_MOM_LIST_OBJECT, rjn->rjn_host);\n\t\t\t\t\t\trjn = (reliable_job_node *) GET_NEXT(rjn->rjn_link);\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif (vnl_created) {\n\t\t\t\t\tvnl_free(vnl);\n\t\t\t\t\tvnl_created = 0;\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase HOOK_EVENT_EXECHOST_PERIODIC:\n\t\t\t\tfprintf(fp, \"%s.%s=%d\\n\", EVENT_OBJECT, \"freq\", phook->freq);\n\t\t\t\tadd_natural_vnode_info(&nv);\n\t\t\t\tif (nv == NULL)\n\t\t\t\t\tfprint_vnl(fp, EVENT_VNODELIST_OBJECT, vnl);\n\t\t\t\telse {\n\t\t\t\t\tif (vnl != NULL)\n\t\t\t\t\t\tvn_merge(nv, vnl, NULL);\n\t\t\t\t\tfprint_vnl(fp, EVENT_VNODELIST_OBJECT, nv);\n\t\t\t\t\tvnl_free(nv);\n\t\t\t\t}\n\n\t\t\t\tfprint_joblist(fp, EVENT_JOBLIST_OBJECT, jobs_list);\n\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECHOST_STARTUP:\n\t\t\t\tif (vnl == NULL) {\n\t\t\t\t\t/*\n\t\t\t\t\t* create a default vnode_list containing\n\t\t\t\t\t* the natural vnode, to be used as hook\n\t\t\t\t\t* input\n\t\t\t\t\t*/\n\t\t\t\t\tadd_natural_vnode_info(&vnl);\n\t\t\t\t\tvnl_created = 1;\n\t\t\t\t}\n\t\t\t\tfprint_vnl(fp, EVENT_VNODELIST_OBJECT, vnl);\n\t\t\t\tif (vnl_created) {\n\t\t\t\t\tvnl_free(vnl);\n\t\t\t\t\tvnl_created = 0;\n\t\t\t\t\tvnl = NULL;\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tlog_errf(-1, __func__, \"Unknown event type %d\", event_type);\n\t\t\t\tgoto run_hook_exit;\n\t\t}\n\n\t\tfprintf(fp, \"%s.%s=%s\\n\", PBS_OBJ, GET_NODE_NAME_FUNC, (char *) mom_short_name);\n\t\tfprintf(fp, \"%s.%s=%s\\n\", EVENT_OBJECT, PY_EVENT_TYPE, 
hook_event_as_string(event_type));\n\t\tfprintf(fp, \"%s.%s=%s\\n\", EVENT_OBJECT, PY_EVENT_HOOK_NAME, phook->hook_name);\n\t\tfprintf(fp, \"%s.%s=%s\\n\", EVENT_OBJECT, PY_EVENT_HOOK_TYPE, hook_type_as_string(phook->type));\n\t\tfprintf(fp, \"%s.%s=%s\\n\", EVENT_OBJECT, \"requestor\", req_user);\n\t\tfprintf(fp, \"%s.%s=%s\\n\", EVENT_OBJECT, \"requestor_host\", req_host);\n\t\tfprintf(fp, \"%s.%s=%s\\n\", EVENT_OBJECT, \"user\", hook_user_as_string(phook->user));\n\t\tfprintf(fp, \"%s.%s=%d\\n\", EVENT_OBJECT, \"alarm\", phook->alarm);\n\n\t\tif (phook->debug)\n\t\t\tfprintf(fp, \"%s.%s=%s\\n\", EVENT_OBJECT, \"debug\", hook_datafile);\n\n\t\tfclose(fp);\n\t\tfp = NULL;\n\n\t\targ[0] = (char *) pypath;\n\t\targ[1] = \"--hook\";\n\t\targ[2] = \"-i\";\n\t\targ[3] = (char *) hook_inputfile;\n\t\targ[4] = \"-o\";\n\t\targ[5] = (char *) hook_outputfile;\n\n\t\tif (log_file[0] == '\\0') {\n\t\t\targ[6] = \"-L\";\n\t\t\targ[7] = (char *) path_log;\n\t\t} else {\n\t\t\targ[6] = \"-l\";\n\t\t\targ[7] = (char *) log_file;\n\t\t}\n\t\targ[8] = \"-e\";\n\t\tsnprintf(logmask, sizeof(logmask), \"%ld\", *log_event_mask);\n\t\targ[9] = (char *) logmask;\n\n\t\tif (rescdef_file != NULL) {\n\t\t\targ[10] = \"-r\";\n\t\t\targ[11] = (char *) rescdef_file;\n\t\t\targ[12] = script_file;\n\t\t\tsnprintf(cmdline, sizeof(cmdline),\n\t\t\t\t \"%s %s %s %s %s %s %s %s %s %s %s %s %s\",\n\t\t\t\t arg[0], arg[1], arg[2], arg[3], arg[4], arg[5],\n\t\t\t\t arg[6], arg[7], arg[8], arg[9], arg[10], arg[11],\n\t\t\t\t arg[12]);\n\t\t\targ[13] = NULL;\n\t\t} else {\n\t\t\targ[10] = script_file;\n\t\t\tsnprintf(cmdline, sizeof(cmdline),\n\t\t\t\t \"%s %s %s %s %s %s %s %s %s %s %s\",\n\t\t\t\t arg[0], arg[1], arg[2], arg[3], arg[4], arg[5],\n\t\t\t\t arg[6], arg[7], arg[8], arg[9], arg[10]);\n\t\t\targ[11] = NULL;\n\t\t}\n\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK, LOG_INFO, phook->hook_name,\n\t\t\t   \"execve %s runas_jobuser=%d in child pid=%d\", cmdline, runas_jobuser, 
myseq);\n\n\t\tif (hook_config_path[0] == '\\0') {\n\t\t\tif (child)\n\t\t\t\t/* since this is still main mom (not forked), need to unset the hook config environment variable. */\n\t\t\t\tif (unsetenv(PBS_HOOK_CONFIG_FILE) != 0)\n\t\t\t\t\tlog_err(-1, __func__, \"Failed to unset PBS_HOOK_CONFIG_FILE\");\n\t\t} else if (setenv(PBS_HOOK_CONFIG_FILE, hook_config_path, 1) != 0) {\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_ERR, phook->hook_name, \"Failed to set PBS_HOOK_CONFIG_FILE\");\n\t\t\treturn (-1);\n\t\t}\n\n#ifndef WIN32\n\t\t/*\n\t\t * We're passing the calling process' (mom's) environment\n\t\t * to python hook execution here, so as to match what's\n\t\t * done on Windows. Also, this allows us to control\n\t\t * pbs_python's execution environment via PBS'\n\t\t * pbs_environment file (better security).\n\t\t * We can provide a  workaround, in case pbs_python does\n\t\t * not execute unless a particular variable is set,\n\t\t * perhaps due to an incorrectly setup host.\n\t\t */\n\t\tif (pbs_conf.pbs_conf_file != NULL) {\n\t\t\tif (setenv(\"PBS_CONF_FILE\", pbs_conf.pbs_conf_file, 1) != 0) {\n\t\t\t\tlog_err(errno, __func__, \"Failed to set PBS_CONF_FILE\");\n\t\t\t\tgoto run_hook_exit;\n\t\t\t}\n\t\t}\n\n#ifdef __SANITIZE_ADDRESS__\n\t\t/*\n\t\t * Ignore ASAN link order for pbs_python because Python bin\n\t\t * is not compiled with ASAN. 
There are also some leaks in the Python\n\t\t * library so ignore them by setting exitcode to 0.\n\t\t */\n\t\tchar *env_asan[] =\n\t\t{\n\t\t\t\t\"ASAN_OPTIONS=verify_asan_link_order=0\",\n\t\t\t\t\"LSAN_OPTIONS=exitcode=0\",\n\t\t\t\tNULL\n\t\t};\n\t\texecve(pypath, arg, env_asan);\n#else\n\t\texecve(pypath, arg, environ);\n#endif\n\trun_hook_exit:\n\t\tif (fp != NULL) {\n\t\t\tfclose(fp);\n\t\t\tfp = NULL;\n\t\t}\n\t\tif (vnl_created)\n\t\t\tvnl_free(vnl);\n\t\tlog_err(-1, __func__, \"execv of hook\");\n\t\tif (child)\n\t\t\treturn run_exit;\n\t\texit(run_exit);\n\t}\n#else\n\t\tif (!parent_wait) {\n\t\t\tif (win_popen(cmdline, \"w\", &pio, NULL) == 0) {\n\t\t\t\terrno = GetLastError();\n\t\t\t\tpbs_errno = errno;\n\t\t\t\tlog_eventf(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_ERR, __func__, \"executing %s failed errno=%d\", cmdline, errno);\n\t\t\t\twin_pclose(&pio);\n\t\t\t\treturn (-1);\n\t\t\t}\n\t\t\tptask = set_task(WORK_Deferred_Child, (long) pio.pi.hProcess, post_func, phook);\n\t\t\tif (!ptask) {\n\t\t\t\tlog_err(errno, __func__, msg_err_malloc);\n\t\t\t\twin_pclose(&pio);\n\t\t\t\treturn (-1);\n\t\t\t}\n\t\t\tptask->wt_aux2 = myseq;\n\t\t\taddpid(pio.pi.hProcess);\n\t\t\twin_pclose2(&pio); /* closes all handles except the process handle */\n\t\t\tptask->wt_parm2 = (void *) php;\n\t\t\treturn (0); /* no hook output file at this time */\n\t\t} else if (runas_jobuser) {\n\t\t\tif (pwdp == NULL) {\n\t\t\t\tlog_err(-1, __func__, \"runas_jobuser does not have credential set!\");\n\t\t\t\trun_exit = 255;\n\t\t\t\tgoto run_hook_exit;\n\t\t\t}\n\t\t\t(void) win_alarm(phook->alarm, run_hook_alarm);\n\t\t\tchar *env_string = NULL;\n\t\t\tstruct var_table hook_env;\n\t\t\thook_env.v_envp = NULL;\n\t\t\tchar *pbs_hook_conf = NULL;\n\n\t\t\tif ((pjob->ji_env.v_envp != NULL) && (phook->user == HOOK_PBSUSER)) {\n\t\t\t\t/* Duplicate only when the hook user is pbsuser */\n\t\t\t\thook_env.v_envp = dup_string_arr(pjob->ji_env.v_envp);\n\t\t\t\tif (hook_env.v_envp == NULL) 
{\n\t\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t\t\t\t  __func__, \"Unable to set hook environment\");\n\t\t\t\t\tgoto run_hook_exit;\n\t\t\t\t}\n\t\t\t\thook_env.v_ensize = pjob->ji_env.v_ensize;\n\t\t\t\thook_env.v_used = pjob->ji_env.v_used;\n\t\t\t\tif ((pbs_hook_conf = getenv(\"PBS_HOOK_CONFIG_FILE\")) != NULL)\n\t\t\t\t\tbld_env_variables(&hook_env, \"PBS_HOOK_CONFIG_FILE\", pbs_hook_conf);\n\t\t\t\tenv_string = make_envp(hook_env.v_envp);\n\t\t\t}\n\t\t\twloaduserprofile(pwdp);\n\t\t\trun_exit = wsystem(cmdline, pwdp->pw_userlogin, env_string);\n\t\t\twunloaduserprofile(pwdp);\n\t\t\tfree(env_string);\n\t\t\t(void) win_alarm(0, NULL);\n\t\t\tfree_string_array(hook_env.v_envp);\n\t\t} else {\n\t\t\t/* The following blocks until the hook process completes */\n\t\t\t(void) win_alarm(phook->alarm, run_hook_alarm);\n\t\t\tif (win_popen(cmdline, \"r\", &pio, NULL) == 0) {\n\t\t\t\terrno = GetLastError();\n\t\t\t\tpbs_errno = errno;\n\t\t\t\tlog_eventf(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_ERR, __func__, \"executing %s failed errno=%d\", cmdline, errno);\n\n\t\t\t} else if (GetExitCodeProcess(pio.pi.hProcess, &run_exit) == 0 || run_exit == STILL_ACTIVE) {\n\t\t\t\tlog_err(-1, __func__, \"GetExitCodeProcess failed\");\n\t\t\t\trun_exit = 255;\n\t\t\t}\n\t\t\twin_pclose(&pio);\n\t\t\t(void) win_alarm(0, NULL);\n\t\t}\n\t\tif (php)\n\t\t\tphp->child = child;\n\trun_hook_exit:\n\t\tif (fp != NULL)\n\t\t\tfclose(fp);\n\n\t\tif (vnl_created)\n\t\t\tvnl_free(vnl);\n\t}\n#endif\n\n\tif (run_exit != 0)\n\t\tlog_errf(-1, __func__, \"execv of %s resulted in nonzero exit status=%d\", pypath, run_exit);\n\n\tif (file_in != NULL)\n\t\tsnprintf(file_in, file_size, FMT_HOOK_INFILE, path_hooks_workdir, hook_event_as_string(event_type), phook->hook_name, child);\n\tif (file_out != NULL)\n\t\tsnprintf(file_out, file_size, FMT_HOOK_OUTFILE, path_hooks_workdir, hook_event_as_string(event_type), phook->hook_name, child);\n\n\tif (file_data != NULL)\n\t\tsnprintf(file_data, file_size, 
FMT_HOOK_DATAFILE, path_hooks_workdir, hook_event_as_string(event_type), phook->hook_name, child);\n\n\tif (runas_jobuser) {\n\n\t\t/* move [PATH_SPOOL]/<hook input file> to <file_in> where <file_in> is in [PBS_HOME]/mom_priv/hooks/tmp. */\n\t\tsnprintf(hook_inputfile, MAXPATHLEN, FMT_HOOK_INFILE, path_spool, hook_event_as_string(event_type), phook->hook_name, child);\n\t\tif (stat(hook_inputfile, &sbuf) == 0 && file_in != NULL)\n\t\t\t(void) rename(hook_inputfile, file_in);\n\n\t\t/* move [PATH_SPOOL]/<hook output file> to <file_out> where <file_out> is in [PBS_HOME]/mom_priv/hooks/tmp. */\n\t\tsnprintf(hook_outputfile, MAXPATHLEN, FMT_HOOK_OUTFILE, path_spool, hook_event_as_string(event_type), phook->hook_name, child);\n\t\tif (stat(hook_outputfile, &sbuf) == 0 && file_out != NULL)\n\t\t\t(void) rename(hook_outputfile, file_out);\n\n\t\t/* move [PATH_SPOOL]/<hook data file> to <file_data> where <file_data> is in [PBS_HOME]/mom_priv/hooks/tmp. */\n\t\tsnprintf(hook_datafile, MAXPATHLEN, FMT_HOOK_DATAFILE, path_spool, hook_event_as_string(event_type), phook->hook_name, child);\n\t\tif (stat(hook_datafile, &sbuf) == 0 && file_data != NULL)\n\t\t\t(void) rename(hook_datafile, file_data);\n\t\tpbs_strncpy(script_path, ((struct python_script *) phook->script)->path, sizeof(script_path));\n\t\tpc = strrchr(script_path, '/');\n\t\tif (pc == NULL)\n\t\t\tpc = (char *) script_path;\n\t\telse\n\t\t\tpc++;\n\n\t\t/* delete [PATH_SPOOL]/<hook script file copy> */\n\t\tsnprintf(script_copy, MAXPATHLEN, FMT_HOOK_SCRIPT, path_spool, child);\n\n\t\tif (stat(script_copy, &sbuf) == 0)\n\t\t\t(void) unlink(script_copy);\n\n\t\t/* delete [PATH_SPOOL]/<hook config file copy> */\n\t\tsnprintf(hook_config_copy, MAXPATHLEN, FMT_HOOK_CONFIG, path_spool, child);\n\n\t\tif (stat(hook_config_copy, &sbuf) == 0)\n\t\t\t(void) unlink(hook_config_copy);\n\n\t\t/* delete [PATH_SPOOL]/<resourcedef copy> */\n\t\tsnprintf(rescdef_copy, MAXPATHLEN, FMT_HOOK_RESCDEF, path_spool, child);\n\n\t\tif 
(stat(rescdef_copy, &sbuf) == 0)\n\t\t\t(void) unlink(rescdef_copy);\n\n\t\tsnprintf(log_file, MAXPATHLEN, FMT_HOOK_LOG, path_spool, child);\n\n\t\t/* Log file generated in [PATH_SPOOL] should be appended to main mom_logs. */\n\t\tif ((fp = fopen(log_file, \"r\")) != NULL) {\n\t\t\tsize_t ll;\n\t\t\tchar *p;\n\t\t\tchar *jobid = NULL;\n\t\t\tint semicolons;\n\n\t\t\tjobid = pjob->ji_qs.ji_jobid;\n\n\t\t\twhile (fgets(in_data, sizeof(in_data), fp) != NULL) {\n\t\t\t\tll = strlen(in_data);\n\t\t\t\tif (in_data[ll - 1] == '\\n')\n\t\t\t\t\t/* remove newline */\n\t\t\t\t\tin_data[ll - 1] = '\\0';\n\n\t\t\t\t/* Format of pbs logfile is as follows: */\n\t\t\t\t/* <time>;<event>;<prog>;<class>;<obj>;<msg> */\n\t\t\t\t/* There are 5 semicolons before <msg>, */\n\t\t\t\t/* which is the one we need to get. */\n\t\t\t\tp = in_data;\n\t\t\t\tsemicolons = 0;\n\t\t\t\twhile (*p != '\\0') {\n\t\t\t\t\tif (*p == ';') {\n\t\t\t\t\t\tsemicolons++;\n\t\t\t\t\t\tif (semicolons == 5)\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\tp++;\n\t\t\t\t}\n\n\t\t\t\tif (*p != '\\0') {\n\t\t\t\t\t*p = '\\0';\n\t\t\t\t\tp++;\n\t\t\t\t\tif (*p != '\\0') {\n\n\t\t\t\t\t\tif (strstr(in_data, \";Job;\") != NULL)\n\t\t\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG, jobid ? 
jobid : \"\", p);\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\tlog_event(PBSEVENT_ADMIN | PBSEVENT_SYSTEM, PBS_EVENTCLASS_HOOK, LOG_DEBUG, \"pbs_python\", p);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tfclose(fp);\n\t\t\tif (unlink(log_file) == -1)\n\t\t\t\tlog_err(errno, __func__, log_file);\n\t\t}\n\t}\n\n#ifdef WIN32\n\tif (secure_file(file_out, \"Administrators\", READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED) == 0)\n\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR, __func__, \"Failed to change hook output file permissions for file: %s\", file_out);\n#endif\n\n\treturn (run_exit);\n}\n\n/**\n * @brief\n *\tFree the malloced \"path\" element of py_script.\n *\n * @param[in] py_script - the python_script structure containing the path element.\n *\n * @return Void\n *\n */\nvoid\npython_script_free(struct python_script *py_script)\n{\n\n\tif (py_script != NULL) {\n\t\tif (py_script->path != NULL) {\n\t\t\tfree(py_script->path);\n\t\t}\n\t} else\n\t\tlog_err(PBSE_HOOKERROR, __func__, \"Python Script is NULL\");\n}\n\n/**\n * @brief\n *\tCreates a path entry to *py_script using 'script_path' as value.\n *\n * @param[in]\tscript_path - the value to use\n * @param[in/out] \tpy_script - the python_script being instantiated.\n *\n * @note\n *\tIf *py_script is not NULL (previously allocated), then\n *\tfree up previously malloced entries, and reset *py_script to NULL.\n *\n * @return int\n * @retval\t0\t- for success\n * @retval\t-1\t- for failure\n *\n */\n\nint\npython_script_alloc(const char *script_path, struct python_script **py_script)\n{\n\n\tstruct python_script *tmp_py_script = NULL;\n\tsize_t nbytes = sizeof(struct python_script);\n\tstruct stat sbuf;\n\n\tif ((script_path == NULL) || (py_script == NULL)) {\n\t\tlog_err(-1, __func__, \"Bad input parameters\");\n\t\treturn (-1);\n\t}\n\n\tif (stat(script_path, &sbuf) == -1) {\n\t\treturn (0); /* could not stat script, so nothing to set */\n\t}\n\n\tif (*py_script != NULL) {\n\t\tif ((*py_script)->path != NULL) 
{\n\t\t\tfree((*py_script)->path);\n\t\t}\n\t\tfree(*py_script);\n\t}\n\n\t*py_script = NULL; /* init, to be safe */\n\n\tif (!(tmp_py_script = (struct python_script *) malloc(nbytes))) {\n\t\tlog_err(errno, __func__, \"failed to malloc struct python_script\");\n\t\tgoto python_script_alloc_exit;\n\t}\n\t(void) memset(tmp_py_script, 0, nbytes);\n\n\ttmp_py_script->path = strdup(script_path);\n\tif (tmp_py_script->path == NULL) {\n\t\tlog_err(errno, __func__, \"failed to malloc path\");\n\t\tgoto python_script_alloc_exit;\n\t}\n\n\t/* ok, we are set with py_script */\n\t*py_script = tmp_py_script;\n\treturn 0;\n\npython_script_alloc_exit:\n\tif (tmp_py_script != NULL) {\n\t\tif (tmp_py_script->path != NULL) {\n\t\t\tfree(tmp_py_script->path);\n\t\t}\n\t\tfree(tmp_py_script);\n\t\ttmp_py_script = NULL;\n\t}\n\treturn -1;\n}\n\n/**\n * @brief\n *\tRuns a periodic hook in the background.\n *\n * @param[in] \tphook - hook to run.\n *\n * @return none\n *\n */\nvoid\nrun_periodic_hook_bg(hook *phook)\n{\n\tint rc;\n\tmom_hook_input_t hook_input;\n\n\tif (phook == NULL) {\n\t\tlog_err(-1, __func__, \"bad input parameter\");\n\t\treturn;\n\t}\n\n\tif (phook->enabled == FALSE) {\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_ERR, phook->hook_name,\n\t\t\t  \"periodic hook has been disabled. Skipping hook\");\n\t\treturn;\n\t}\n\n\tif (phook->script == NULL) {\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_ERR, phook->hook_name,\n\t\t\t  \"Hook has no script content. 
Skipping hook.\");\n\t\treturn;\n\t}\n\n\t/* Run the hook in a child process, and when done, */\n\t/* execute post_periodic_hook() */\n\t/* A copy of the system 'vnlp' is passed here */\n\t/* hook will execute in a child process, and even though */\n\t/* it may make changes to 'vnlp', it's only in the child */\n\t/* that will have this copy */\n\n\t/* this is the list of vnodes seen by a periodic hook including */\n\t/* those added by the exechost_startup hook */\n\tmom_hook_input_init(&hook_input);\n\thook_input.vnl = (vnl_t *) vnlp;\n\thook_input.jobs_list = &svr_alljobs;\n\n\trc = run_hook(phook, HOOK_EVENT_EXECHOST_PERIODIC, &hook_input,\n\t\t      PBS_MOM_SERVICE_NAME, mom_host, 0,\n\t\t      post_periodic_hook,\n\t\t      NULL, NULL, NULL, 0, NULL);\n\tif (rc != 0) {\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_ERR, phook->hook_name,\n\t\t\t  \"Failed to deploy periodic hook\");\n\t}\n}\n\n/**\n * @brief\n *\tRuns the hook hanging off a work task.\n *\n * @param[in] \twork_task -  task to process.\n *\n * @return none\n *\n */\nstatic void\nrun_periodic_hook_bg_task(struct work_task *ptask)\n{\n\thook *phook = (hook *) ptask->wt_parm1;\n\n\tif (phook == NULL) {\n\t\tlog_err(-1, __func__, \"A hook has disappeared.\");\n\t\treturn; /* no hook to execute */\n\t}\n\n\trun_periodic_hook_bg(phook);\n}\n\n/**\n * @brief\n *\tGet the results from 'input_file' of a previously run hook.\n *\n * @param[in] \t\tinput_file -  file to process.\n * @param[in/out] \taccept_flag -  return 1 if event accept flag is true.\n * @param[in/out] \treject_flag -  return 1 if event reject flag is true.\n * @param[in/out] \treject_msg -  the reject message if reject_flag is 1.\n * @param[in]\t\treject_msg_size -  size of reject_msg buffer.\n * @param[out] \t\treject_rerunjob -  return 1 if job is to be rerun.\n * @param[out] \t\treject_deletejob -  return 1 if job is to be deleted.\n * @param[in/out] \treboot_flag -  return 1 if pbs should reboot host\n * 
@param[in/out] \treboot_cmd -  the command pbs should use to reboot the host.\n * @param[in]\t\treboot_cmd_size -  size of reboot_cmd buffer.\n * @param[in/out] \tpvnalist -  a linked list of any resulting vnode lists\n * @param[in/out] \tpjob -  job in question, where if present (not NULL),\n *\t\t\t        it gets filled in with the\n *\t\t\t\t\"pbs.event().job\" entries in 'input_file'.\n *\t\t\t\t'pjob' can be NULL in periodic hooks, since\n *\t\t\t\t periodic hooks are not tied to jobs.\n *\t\t\t\t Note that pbs.event().job_list[<jobid>] entries\n *\t\t\t\t in 'input_file' fill in the individual\n *\t\t\t\t <jobid>'s job struct entry in the system, and\n *\t\t\t\t not the passed 'pjob' structure.\n * @param[in]\t\tphook -  hook that executed, whose results we are\n *\t\t\t\tretrieving. If non-NULL, then phook->user is\n *\t\t\t\tused to validate the 'pbs.event().job.euser' line\n *\t\t\t\tin 'input_file'.\n *\t\t\t\tIf main Mom is reading a job related hook\n *\t\t\t\tresults file, phook will be NULL; an entry in\n *\t\t\t\tthe file should give us the hook name from which\n *\t\t\t\tphook is found.\n * @param[in]\t\tcopy_file - copy the results file to one whose name has\n *\t\t\t\tthe job id appended.  This is done when in a\n *\t\t\t\tchild process of Mom.\n * @param[out]\t\thook_output - struct of parameters to fill in output.\n *\n * @note\n *\tIf \"copy_file\" is true, then the processed 'input_file' lines are\n *\tsaved under the file name:\n *\t\thook_output<pjob's job-id>\n *\tunder the same directory where the original 'input_file' is located.\n *\tOne and only one hook's output is processed in this case.\n *\t\"Processed\" here means the input line has passed some sanity checks\n *\tand is deemed to be a valid input line.\n *\tThis new hook job output file can later be consulted by\n *\tsend_obit() and record_finish_exec() to retrieve job and vnl updates\n *\tmade by a hook in a child process.  
It is in this case that a line\n *\tcontaining the hook name will appear.  Multiple hooks of the same event\n *\ttype may be concatenated.\n *\n * @return int\n * @retval\t0 for success\n * @retval\tnon-zero for failure;  the returned parameters (accept_flag,\n *\t\treject_flag, reject_rerunjob, reject_deletejob, reboot_flag,\n *\t\treboot_cmd, pvnalist and pjob) may be invalid and should be\n *\t\tignored.  The list pvnalist may contain malloced space and\n *\t\tshould be freed by the calling program.\n *\n */\nint\nget_hook_results(char *input_file, int *accept_flag, int *reject_flag,\n\t\t char *reject_msg, int reject_msg_size, int *reject_rerunjob,\n\t\t int *reject_deletejob, int *reboot_flag, char *reboot_cmd,\n\t\t int reboot_cmd_size, pbs_list_head *pvnalist, job *pjob,\n\t\t hook *phook, int copy_file, mom_hook_output_t *hook_output)\n{\n\n\tchar *name_str;\n\tchar *resc_str;\n\tchar *obj_name;\n\tchar *data_value;\n\tchar *vname_str;\n\tchar *jobid_str;\n\tint rc = -1;\n\tchar *pc, *pc1, *pc2, *pc3, *pc4, *pc5;\n\tchar *in_data = NULL;\n\tsize_t ll;\n\tFILE *fp;\n\tchar *p;\n\tint vn_obj_len = strlen(EVENT_VNODELIST_OBJECT);\n\tint vn_fail_obj_len = strlen(EVENT_VNODELIST_FAIL_OBJECT);\n\tint job_obj_len = strlen(EVENT_JOBLIST_OBJECT);\n\tint index;\n\tint errcode;\n\tvnl_t *hvnlp = NULL;\n\tvnl_t *hvnlp_fail = NULL;\n\tchar hook_job_outfile[MAXPATHLEN + 1];\n\tFILE *fp2 = NULL;\n\tchar *line_data = NULL;\n\tint line_data_sz;\n\tlong int endpos;\n\tint start_new_vnl = 1;\n\tint start_new_vnl_fail = 1;\n\tstruct hook_vnl_action *pvna;\n\tchar hook_euser[PBS_MAXUSER + 1] = {'\\0'};\n\tchar *value_type = NULL;\n\tjob *pjob2 = NULL;\n\tjob *pjob2_prev = NULL;\n\tint found_rerunjob_action = 0;\n\tint found_deletejob_action = 0;\n\tint found_joblist = 0;\n\tint arg_list_entries = 0;\n\tint b_triple_quotes = 0;\n\tint e_triple_quotes = 0;\n\tchar buf_data[STRBUF];\n\n\t/* Preset hook_euser for later.  
If we are reading a job related     */\n\t/* copy of hook results, there will be one or more (one per hook)    */\n\t/* pbs_event().hook_euser=<value> entries.  In that case, hook_euser */\n\t/* is reset to the <value>.  A null string <value> means PBSADMIN.   */\n\tif (phook && pjob && (phook->user == HOOK_PBSUSER))\n\t\tpbs_strncpy(hook_euser, get_jattr_str(pjob, JOB_ATR_euser), sizeof(hook_euser));\n\n\tif ((input_file != NULL) && (*input_file != '\\0')) {\n\t\tfp = fopen(input_file, \"r\");\n\n\t\tif (fp == NULL) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"failed to open input file %s\", input_file);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\treturn (1);\n\t\t}\n\t} else {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"bad input_file parameter\");\n\t\treturn (1);\n\t}\n\n\t/* if called by a child of Mom, then process the input file and    */\n\t/* copy it to a new file based on the job id.  Thus the main Mom   */\n\t/* process can read it and send any updates required to the Server */\n\tif (copy_file) {\n\t\t/* 'pjob' is used to name the file, and phook->user is used */\n\t\t/* to validate 'pbs.event().job.euser\" line in  */\n\t\t/* 'input_file'. */\n\t\tchar *p;\n\t\tchar chr_save = '\\0';\n\t\tchar *p_dir = NULL;\n\n\t\tif ((pjob == NULL) || (phook == NULL)) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"bad copy_file parameter\");\n\t\t\treturn (1);\n\t\t}\n\n\t\tp = strrchr(input_file, '/');\n\t\tif (p != NULL) {\n\t\t\tp++;\n\t\t\tchr_save = *p;\n\t\t\t*p = '\\0';\n\t\t\tp_dir = (char *) input_file;\n\t\t}\n\n\t\tsnprintf(hook_job_outfile, MAXPATHLEN,\n\t\t\t FMT_HOOK_JOB_OUTFILE, p_dir ? 
p_dir : \"\",\n\t\t\t pjob->ji_qs.ji_jobid);\n\n\t\tif (chr_save != '\\0')\n\t\t\t*p = chr_save; /* restore */\n\n\t\tfp2 = fopen(hook_job_outfile, \"a\");\n\t\tif (fp2 == NULL) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"failed to open hook_job_outfile=%s\",\n\t\t\t\t hook_job_outfile);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\tfclose(fp);\n\t\t\t(void) unlink(input_file);\n\t\t\treturn (1);\n\t\t}\n\t\t/* first record for each hook is one holding the euser  */\n\t\t/* as which the hook was executed.  This also indicates */\n\t\t/* the start of a new hook stanza in the file.          */\n\t\tfprintf(fp2, \"%s=%s\\n\", EVENT_HOOK_EUSER, hook_euser);\n\t}\n\n\tline_data_sz = STRBUF;\n\tline_data = (char *) malloc((size_t) line_data_sz);\n\tif (line_data == NULL) {\n\t\tlog_err(errno, __func__, \"malloc failed\");\n\t\trc = 1;\n\t\tgoto get_hook_results_end;\n\t}\n\tline_data[0] = '\\0';\n\n\tif (fseek(fp, 0, SEEK_END) != 0) {\n\t\tlog_err(errno, __func__, \"fseek to end failed\");\n\t\trc = 1;\n\t\tgoto get_hook_results_end;\n\t}\n\n\tendpos = ftell(fp);\n\tif (fseek(fp, 0, SEEK_SET) != 0) {\n\t\tlog_err(errno, __func__, \"fseek to beginning failed\");\n\t\trc = 1;\n\t\tgoto get_hook_results_end;\n\t}\n\n\tpjob2 = pjob;\n\twhile (fgets(buf_data, STRBUF, fp) != NULL) {\n\t\tb_triple_quotes = 0;\n\t\te_triple_quotes = 0;\n\n\t\tif (pbs_strcat(&line_data, &line_data_sz, buf_data) == NULL) {\n\t\t\tgoto get_hook_results_end;\n\t\t}\n\t\tif (in_data != NULL) {\n\t\t\tfree(in_data);\n\t\t}\n\t\tin_data = strdup(line_data); /* preserve line_data */\n\t\tif (in_data == NULL) {\n\t\t\tlog_err(errno, __func__, \"strdup failed\");\n\t\t\trc = 1;\n\t\t\tgoto get_hook_results_end;\n\t\t}\n\n\t\tif ((p = strchr(in_data, '=')) != NULL) {\n\t\t\tb_triple_quotes = starts_with_triple_quotes(p + 1);\n\t\t}\n\n\t\tll = strlen(in_data);\n\t\tif (in_data[ll - 1] == '\\n') {\n\t\t\te_triple_quotes = ends_with_triple_quotes(in_data, 0);\n\n\t\t\tif 
(b_triple_quotes && !e_triple_quotes) {\n\t\t\t\tint jj;\n\n\t\t\t\twhile (fgets(buf_data, STRBUF, fp) != NULL) {\n\t\t\t\t\tif (pbs_strcat(&line_data, &line_data_sz,\n\t\t\t\t\t\t       buf_data) == NULL) {\n\t\t\t\t\t\tgoto get_hook_results_end;\n\t\t\t\t\t}\n\n\t\t\t\t\tjj = strlen(line_data);\n\t\t\t\t\tif ((line_data[jj - 1] != '\\n') &&\n\t\t\t\t\t    (ftell(fp) != endpos)) {\n\t\t\t\t\t\t/* get more input for\n\t\t\t\t\t\t * current item.\n\t\t\t\t\t\t */\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t\te_triple_quotes =\n\t\t\t\t\t\tends_with_triple_quotes(line_data, 0);\n\n\t\t\t\t\tif (e_triple_quotes) {\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif ((!b_triple_quotes && e_triple_quotes) ||\n\t\t\t\t    (b_triple_quotes && !e_triple_quotes)) {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"unmatched triple quotes! Skipping  line %s\",\n\t\t\t\t\t\t in_data);\n\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\t\t/* process a new line */\n\t\t\t\t\tline_data[0] = '\\0';\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\tfree(in_data);\n\t\t\t\tin_data = strdup(line_data); /* preserve line_data */\n\t\t\t\tif (in_data == NULL) {\n\t\t\t\t\tlog_err(errno, __func__, \"strdup failed\");\n\t\t\t\t\trc = 1;\n\t\t\t\t\tgoto get_hook_results_end;\n\t\t\t\t}\n\t\t\t\t/* remove newline */\n\t\t\t\tin_data[strlen(in_data) - 1] = '\\0';\n\t\t\t} else {\n\t\t\t\t/* remove newline */\n\t\t\t\tin_data[ll - 1] = '\\0';\n\t\t\t}\n\n\t\t} else if (ftell(fp) != endpos) { /* continued on next line */\n\t\t\t/* get more input for current item.  
*/\n\t\t\tcontinue;\n\t\t}\n\n\t\tdata_value = NULL;\n\t\tif ((p = strchr(in_data, '=')) != NULL) {\n\t\t\t*p = '\\0';\n\t\t\tp++;\n\t\t\twhile (isspace(*p))\n\t\t\t\tp++;\n\n\t\t\tif (b_triple_quotes) {\n\t\t\t\t/* strip triple quotes */\n\t\t\t\tp += 3;\n\t\t\t}\n\t\t\tdata_value = p;\n\t\t\tif (e_triple_quotes) {\n\t\t\t\tends_with_triple_quotes(p, 1);\n\t\t\t}\n\t\t}\n\n\t\tobj_name = in_data;\n\n\t\tpc = strrchr(in_data, '.');\n\t\tif (pc) {\n\t\t\t*pc = '\\0';\n\t\t\tpc++;\n\t\t} else {\n\t\t\tpc = in_data;\n\t\t}\n\t\tname_str = pc;\n\n\t\tpc1 = strchr(pc, '[');\n\t\tpc2 = strchr(pc, ']');\n\t\tresc_str = NULL;\n\t\tif (pc1 && pc2 && (pc2 > pc1)) {\n\t\t\t*pc1 = '\\0';\n\t\t\tpc1++;\n\t\t\t*pc2 = '\\0';\n\t\t\tpc2++;\n\n\t\t\t/* now let's if there's anything quoted inside */\n\t\t\tpc3 = strchr(pc1, '\"');\n\t\t\tif (pc3 != NULL)\n\t\t\t\tpc4 = strchr(pc3 + 1, '\"');\n\t\t\telse\n\t\t\t\tpc4 = NULL;\n\n\t\t\tif (pc3 && pc4 && (pc4 > pc3)) {\n\t\t\t\tpc3++;\n\t\t\t\t*pc4 = '\\0';\n\t\t\t\tresc_str = pc3;\n\t\t\t} else {\n\t\t\t\tresc_str = pc1;\n\t\t\t}\n\t\t}\n\n\t\t/* at this point, we have */\n\t\t/* Given:  pbs.event().<attribute>=<value> */\n\t\t/* Given:  pbs.event().job.<attribute>=<value> */\n\t\t/* Given:  pbs.event().job.<attribute>[<resc>]=<value> */\n\t\t/* Given:  pbs.event().vnode_list[<vname>].<attribute>=<value> */\n\t\t/* Given:  pbs.event().vnode_list[<vname>].<attribute>[<resc>]=<value> */\n\t\t/* We get: */\n\n\t\t/* obj_name = pbs.event() or \"pbs.event().job\" or \"pbs.event().vnode_list[<vname>]\" */\n\t\t/* name_str = <attribute> */\n\t\t/* resc_str = <resc> */\n\t\t/* data_value = <value> */\n\n\t\tif (data_value == NULL) {\n\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"%s: no value given\", in_data);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\trc = 1;\n\t\t\tgoto get_hook_results_end;\n\t\t}\n\n\t\tif (strcmp(obj_name, EVENT_OBJECT) == 0) {\n\t\t\tif (strcmp(name_str, \"hook_euser\") == 0) 
{\n\t\t\t\tpbs_strncpy(hook_euser, data_value, sizeof(hook_euser));\n\t\t\t\tstart_new_vnl = 1;\n\t\t\t\t/* Need to also clear 'hvnlp' as previous  */\n\t\t\t\t/* one would have already been saved in */\n\t\t\t\t/* svr_hook_vnl_actions structure via */\n\t\t\t\t/* hook_requests_to_server(). Otherwise, */\n\t\t\t\t/* it would be bad if the same 'hvnlp' */\n\t\t\t\t/* gets duplicated in svr_hook_vnl_actions */\n\t\t\t\t/* which could cause a mom crash. */\n\t\t\t\thvnlp = NULL;\n\t\t\t\thvnlp_fail = NULL;\n\t\t\t} else if ((accept_flag != NULL) &&\n\t\t\t\t   strcmp(name_str, \"accept\") == 0) {\n\t\t\t\tif (strcmp(data_value, \"True\") == 0)\n\t\t\t\t\t*accept_flag = 1;\n\t\t\t\telse\n\t\t\t\t\t*accept_flag = 0;\n\t\t\t} else if ((reject_flag != NULL) &&\n\t\t\t\t   strcmp(name_str, \"reject\") == 0) {\n\n\t\t\t\tif (strcmp(data_value, \"True\") == 0)\n\t\t\t\t\t*reject_flag = 1;\n\t\t\t\telse\n\t\t\t\t\t*reject_flag = 0;\n\t\t\t} else if ((reject_msg != NULL) &&\n\t\t\t\t   (strcmp(name_str, \"reject_msg\") == 0)) {\n\t\t\t\tpbs_strncpy(reject_msg, data_value,\n\t\t\t\t\t    reject_msg_size);\n\t\t\t} else if (strcmp(name_str, PY_EVENT_PARAM_PROGNAME) == 0) {\n\t\t\t\tchar **prog;\n\t\t\t\tif (hook_output != NULL) {\n\t\t\t\t\t/* need to free up here previous value */\n\t\t\t\t\t/* in case of multiple hooks! 
*/\n\t\t\t\t\tprog = hook_output->progname;\n\t\t\t\t\tif (*prog != NULL) {\n\t\t\t\t\t\tfree(*prog);\n\t\t\t\t\t}\n\t\t\t\t\t*prog = strdup(data_value);\n\t\t\t\t}\n\t\t\t} else if (strcmp(name_str, PY_EVENT_PARAM_ARGLIST) == 0) {\n\t\t\t\tpbs_list_head *ar_list;\n\n\t\t\t\targ_list_entries++;\n\t\t\t\tif (hook_output != NULL) {\n\t\t\t\t\tar_list = hook_output->argv;\n\t\t\t\t\t/* free previous values at start of new list */\n\t\t\t\t\tif (arg_list_entries == 1) {\n\t\t\t\t\t\tfree_attrlist(ar_list);\n\t\t\t\t\t}\n\t\t\t\t\tadd_to_svrattrl_list(ar_list, name_str, resc_str,\n\t\t\t\t\t\t\t     data_value, 0, NULL);\n\t\t\t\t}\n\t\t\t} else if (strcmp(name_str, PY_EVENT_PARAM_ENV) == 0) {\n\t\t\t\tif (hook_output != NULL) {\n\t\t\t\t\tif (hook_output->env != NULL) {\n\t\t\t\t\t\tfree_str_array(hook_output->env);\n\t\t\t\t\t}\n\t\t\t\t\thook_output->env = str_to_str_array(data_value, ',');\n\t\t\t\t}\n\t\t\t}\n\t\t} else if ((strcmp(obj_name, EVENT_JOB_OBJECT) == 0) ||\n\t\t\t   (strncmp(obj_name, EVENT_JOBLIST_OBJECT, job_obj_len)) == 0) {\n\n\t\t\tif (strncmp(obj_name, EVENT_JOBLIST_OBJECT, job_obj_len) == 0) {\n\t\t\t\t/* NOTE: obj_name here is: pbs.event().job_list[<jobid>] */\n\n\t\t\t\t/* important here to look for the leftmost '[' (using strchr)\n\t\t\t\t * and the rightmost ']' (using strrchr)\n\t\t\t\t * as we can have:\n\t\t\t\t *\tpbs.event().job_list[\"23.fest\"].<attr>=<val>\n\t\t\t\t * \tand \"23.fest\" is a valid job id.\n\t\t\t\t */\n\n\t\t\t\tfound_joblist = 1;\n\t\t\t\tif (((pc1 = strchr(obj_name, '[')) != NULL) &&\n\t\t\t\t    ((pc2 = strrchr(obj_name, ']')) != NULL) &&\n\t\t\t\t    (pc2 > pc1)) {\n\t\t\t\t\tpc1++;\t     /*  pc1=<jobid>] */\n\t\t\t\t\t*pc2 = '\\0'; /* pc1=<jobid>  */\n\t\t\t\t\tpc2++;\n\n\t\t\t\t\t/* now let's if there's anything quoted inside */\n\t\t\t\t\tpc3 = strchr(pc1, '\"');\n\t\t\t\t\tif (pc3 != NULL)\n\t\t\t\t\t\tpc4 = strchr(pc3 + 1, '\"');\n\t\t\t\t\telse\n\t\t\t\t\t\tpc4 = NULL;\n\n\t\t\t\t\tif (pc3 && pc4 && (pc4 > 
pc3)) {\n\t\t\t\t\t\tpc3++;\n\t\t\t\t\t\t*pc4 = '\\0';\n\t\t\t\t\t\tjobid_str = pc3;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tjobid_str = pc1;\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"object '%s' does not have a job id!\",\n\t\t\t\t\t\t obj_name);\n\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t\t/* process a new line */\n\t\t\t\t\tin_data[0] = '\\0';\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\n\t\t\t\t/* job entries are stored sorted. If the */\n\t\t\t\t/* current jobid does not match the */\n\t\t\t\t/* previous jobid, we've now switched */\n\t\t\t\t/* to a new job's data. */\n\t\t\t\tif ((pjob2_prev != NULL) &&\n\t\t\t\t    (strcmp(pjob2_prev->ji_qs.ji_jobid,\n\t\t\t\t\t    jobid_str) == 0)) {\n\t\t\t\t\tpjob2 = pjob2_prev; /* optimize */\n\t\t\t\t} else {\n\t\t\t\t\t/* working on a new list of job data */\n\t\t\t\t\tif (pjob2_prev != NULL) {\n\t\t\t\t\t\tif (*reject_deletejob) {\n\t\t\t\t\t\t\t/* deletejob takes precedence */\n\t\t\t\t\t\t\tnew_job_action_req(pjob2, phook ? phook->user : HOOK_PBSADMIN, JOB_ACT_REQ_DELETE);\n\t\t\t\t\t\t} else if (*reject_rerunjob) {\n\t\t\t\t\t\t\tnew_job_action_req(pjob2, phook ? phook->user : HOOK_PBSADMIN, JOB_ACT_REQ_REQUEUE);\n\t\t\t\t\t\t}\n\t\t\t\t\t\t/* already sent the action */\n\t\t\t\t\t\tfound_rerunjob_action = 0;\n\t\t\t\t\t\tfound_deletejob_action = 0;\n\t\t\t\t\t}\n\t\t\t\t\tpjob2 = find_job(jobid_str);\n\t\t\t\t\tpjob2_prev = pjob2;\n\t\t\t\t\tif (reject_rerunjob != NULL)\n\t\t\t\t\t\t*reject_rerunjob = 0;\n\t\t\t\t\tif (reject_deletejob != NULL)\n\t\t\t\t\t\t*reject_deletejob = 0;\n\t\t\t\t\tfound_rerunjob_action = 0;\n\t\t\t\t\tfound_deletejob_action = 0;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\t/* not an EVENT_JOBLIST object. 
Switch */\n\t\t\t\t/* back to passed job object */\n\t\t\t\tpjob2 = pjob;\n\t\t\t}\n\t\t\t/* Found: <resource_name>,<job_value_type> */\n\t\t\tif (resc_str != NULL) {\n\t\t\t\tif ((pc5 = strchr(resc_str, ',')) != NULL) {\n\t\t\t\t\t*pc5 = '\\0';\n\t\t\t\t\tpc5++;\n\t\t\t\t\tvalue_type = pc5;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (strcmp(name_str, \"_rerun\") == 0) {\n\t\t\t\tfound_rerunjob_action = 1;\n\t\t\t\tif (reject_rerunjob == NULL) {\n\t\t\t\t\t/* process a new line */\n\t\t\t\t\tline_data[0] = '\\0';\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\tif (strcmp(data_value, \"True\") == 0)\n\t\t\t\t\t*reject_rerunjob = 1;\n\t\t\t\telse\n\t\t\t\t\t*reject_rerunjob = 0;\n\t\t\t} else if (strcmp(name_str, \"_delete\") == 0) {\n\t\t\t\tfound_deletejob_action = 1;\n\t\t\t\tif (reject_deletejob == NULL) {\n\t\t\t\t\t/* process a new line */\n\t\t\t\t\tline_data[0] = '\\0';\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\tif (strcmp(data_value, \"True\") == 0)\n\t\t\t\t\t*reject_deletejob = 1;\n\t\t\t\telse\n\t\t\t\t\t*reject_deletejob = 0;\n\t\t\t} else if (pjob2 != NULL) {\n\t\t\t\t/* Is attribute not writeable by manager or */\n\t\t\t\t/* by a server? 
*/\n\t\t\t\t/* Exempt attributes set by the hook script */\n\t\t\t\tresc_access_perm = ATR_DFLAG_USWR |\n\t\t\t\t\t\t   ATR_DFLAG_OPWR |\n\t\t\t\t\t\t   ATR_DFLAG_MGWR |\n\t\t\t\t\t\t   ATR_DFLAG_SvWR |\n\t\t\t\t\t\t   ATR_DFLAG_Creat;\n\n\t\t\t\t/* identify the attribute by name */\n\t\t\t\tindex = find_attr(job_attr_idx, job_attr_def, name_str);\n\t\t\t\tif (index < 0) { /* didn't recognize the name */\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"object '%s' unrecognized attribute name %s!\",\n\t\t\t\t\t\t obj_name, name_str);\n\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t\t/* process new line */\n\t\t\t\t\tline_data[0] = '\\0';\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\tif ((index == JOB_ATR_runcount) && ((is_jattr_set(pjob2, JOB_ATR_run_version)) == 0)) {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"object '%s': ignoring setting attribute %s,\"\n\t\t\t\t\t\t \" talking to a server that does not allow %s modification\",\n\t\t\t\t\t\t obj_name, name_str, name_str);\n\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t\t/* process new line */\n\t\t\t\t\tline_data[0] = '\\0';\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\n\t\t\t\t/* security guard: hook that runs as    */\n\t\t\t\t/* PBSUSER should not be allowed  */\n\t\t\t\t/* to modify the euser value of the job */\n\t\t\t\tif ((phook != NULL) &&\n\t\t\t\t    (phook->user == HOOK_PBSUSER)) {\n\t\t\t\t\tlong dval = atol(data_value);\n\t\t\t\t\tif (index == JOB_ATR_euser) {\n\t\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t\t \"object '%s': ignoring setting attribute %s,\"\n\t\t\t\t\t\t\t \" executing hook has user=pbsuser\",\n\t\t\t\t\t\t\t obj_name, name_str);\n\t\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t\t\t/* process a new line */\n\t\t\t\t\t\tline_data[0] = '\\0';\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t} else if ((index == JOB_ATR_runcount) && (is_jattr_set(pjob2, index)) && (dval < get_jattr_long(pjob2, index))) 
{\n\t\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t\t \"object '%s': ignoring setting attribute %s,\"\n\t\t\t\t\t\t\t \" executing hook has user=pbsuser, \"\n\t\t\t\t\t\t\t \" cannot decrease value from %ld to %ld\",\n\t\t\t\t\t\t\t obj_name, name_str,\n\t\t\t\t\t\t\t get_jattr_long(pjob2, index),\n\t\t\t\t\t\t\t dval);\n\t\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t\t\t/* process a new line */\n\t\t\t\t\t\tline_data[0] = '\\0';\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\t/* decode attribute */\n\t\t\t\terrcode = set_jattr_generic(pjob2, index, data_value, resc_str, INTERNAL);\n\t\t\t\t/* unknown resources still get decoded */\n\t\t\t\t/* using \"unknown\" placeholder resc def */\n\t\t\t\tif ((errcode != 0) &&\n\t\t\t\t    (errcode != PBSE_UNKRESC)) {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"object '%s' failed to decode (%s,%s,%s)! (errorcode %d)\",\n\t\t\t\t\t\t obj_name, name_str, resc_str ? resc_str : \"\",\n\t\t\t\t\t\t data_value, errcode);\n\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t\t/* process a new line */\n\t\t\t\t\tline_data[0] = '\\0';\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\tif (errcode == PBSE_UNKRESC) {\n\t\t\t\t\tresource *prsc;\n\t\t\t\t\tresource_def *prdef;\n\t\t\t\t\tsvrattrl *plist, *plist2, *plist_next;\n\n\t\t\t\t\tprdef = &svr_resc_def[RESC_UNKN];\n\t\t\t\t\tprsc = find_resc_entry(get_jattr(pjob2, index), prdef);\n\n\t\t\t\t\tif ((prdef == NULL) || (prsc == NULL)) {\n\t\t\t\t\t\tlog_err(-1, __func__, \"bad unknown resc\");\n\t\t\t\t\t\t/* process a new line */\n\t\t\t\t\t\tline_data[0] = '\\0';\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\n\t\t\t\t\tplist = (svrattrl *) GET_NEXT(prsc->rs_value.at_val.at_list);\n\n\t\t\t\t\tdo {\n\t\t\t\t\t\tif (plist == NULL)\n\t\t\t\t\t\t\tbreak;\n\n\t\t\t\t\t\tplist_next = (svrattrl *) GET_NEXT(plist->al_link);\n\n\t\t\t\t\t\t/* check for duplicate resource */\n\t\t\t\t\t\t/* entry. 
The later ones take */\n\t\t\t\t\t\t/* precedence */\n\t\t\t\t\t\tplist2 = (svrattrl *) GET_NEXT(plist->al_link);\n\t\t\t\t\t\twhile (plist2 != NULL) {\n\t\t\t\t\t\t\tif (strcmp(plist->al_resc,\n\t\t\t\t\t\t\t\t   plist2->al_resc) == 0) {\n\t\t\t\t\t\t\t\tdelete_link(&plist->al_link);\n\t\t\t\t\t\t\t\tfree(plist);\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tplist2 = (svrattrl *) GET_NEXT(plist2->al_link);\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tplist = plist_next;\n\t\t\t\t\t} while (plist != NULL);\n\t\t\t\t}\n\t\t\t\t/* resources set in a hook should be flagged */\n\t\t\t\t/* as such (see how mom_set_use treat */\n\t\t\t\t/* ATR_VFLAG_HOOK set resources) */\n\t\t\t\tif (resc_str != NULL) { /* a resource */\n\t\t\t\t\tresource_def *rd;\n\t\t\t\t\tresource *pres;\n\n\t\t\t\t\trd = find_resc_def(svr_resc_def, resc_str);\n\t\t\t\t\tif (rd != NULL) {\n\t\t\t\t\t\tpres = find_resc_entry(\n\t\t\t\t\t\t\tget_jattr(pjob2, index),\n\t\t\t\t\t\t\trd);\n\t\t\t\t\t\tif (pres != NULL) {\n\t\t\t\t\t\t\tpres->rs_value.at_flags |=\n\t\t\t\t\t\t\t\tATR_VFLAG_HOOK;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\t/* attributes in a hook should be flagged */\n\t\t\t\t/* with ATR_VFLAG_HOOK                    */\n\t\t\t\t(get_jattr(pjob2, index))->at_flags |= ATR_VFLAG_HOOK;\n\t\t\t}\n\t\t}\n\t\tif ((strncmp(obj_name, EVENT_VNODELIST_FAIL_OBJECT,\n\t\t\t     vn_fail_obj_len) == 0) ||\n\t\t    (strncmp(obj_name, EVENT_VNODELIST_OBJECT,\n\t\t\t     vn_obj_len) == 0)) {\n\n\t\t\tint *start_new_vnl_p;\n\t\t\tvnl_t **hvnlp_p = NULL;\n\n\t\t\tif (strncmp(obj_name,\n\t\t\t\t    EVENT_VNODELIST_FAIL_OBJECT,\n\t\t\t\t    vn_fail_obj_len) == 0) {\n\t\t\t\tstart_new_vnl_p = &start_new_vnl_fail;\n\t\t\t\thvnlp_p = &hvnlp_fail;\n\t\t\t} else {\n\t\t\t\tstart_new_vnl_p = &start_new_vnl;\n\t\t\t\thvnlp_p = &hvnlp;\n\t\t\t}\n\n\t\t\t/* NOTE: obj_name here is: pbs.event().vnode_list[<vname>] */\n\n\t\t\t/* important here to look for the leftmost '[' (using strchr)\n\t\t\t * and the rightmost ']' 
(using strrchr)\n\t\t\t * as we can have:\n\t\t\t *\tpbs.event().vnode_list[\"altix[5]\"].<attr>=<val>\n\t\t\t * \tand \"altix[5]\" is a valid vnode id.\n\t\t\t */\n\t\t\tif (((pc1 = strchr(obj_name, '[')) != NULL) &&\n\t\t\t    ((pc2 = strrchr(obj_name, ']')) != NULL) &&\n\t\t\t    (pc2 > pc1)) {\n\t\t\t\tpc1++;\t     /*  pc1=<vname>] */\n\t\t\t\t*pc2 = '\\0'; /* pc1=<vname>  */\n\t\t\t\tpc2++;\n\n\t\t\t\t/* now let's if there's anything quoted inside */\n\t\t\t\tpc3 = strchr(pc1, '\"');\n\t\t\t\tif (pc3 != NULL)\n\t\t\t\t\tpc4 = strchr(pc3 + 1, '\"');\n\t\t\t\telse\n\t\t\t\t\tpc4 = NULL;\n\n\t\t\t\tif (pc3 && pc4 && (pc4 > pc3)) {\n\t\t\t\t\tpc3++;\n\t\t\t\t\t*pc4 = '\\0';\n\t\t\t\t\tvname_str = pc3;\n\t\t\t\t} else {\n\t\t\t\t\tvname_str = pc1;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"object '%s' does not have a vnode name!\",\n\t\t\t\t\t obj_name);\n\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t/* process a new line */\n\t\t\t\tline_data[0] = '\\0';\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tif ((name_str != NULL) && (resc_str != NULL)) {\n\t\t\t\t/* format is in in_data: <attrib>\\0<resc_str> */\n\t\t\t\t*(resc_str - 1) = '.'; /* so name_str=<attrib>.<resc_str> */\n\t\t\t}\n\t\t\t/* Found: resc_str = <resc_name>,<value_type> */\n\t\t\tif (resc_str != NULL) {\n\t\t\t\tif ((pc5 = strchr(resc_str, ',')) != NULL) {\n\t\t\t\t\t*pc5 = '\\0';\n\t\t\t\t\tpc5++;\n\t\t\t\t\tvalue_type = pc5;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (*start_new_vnl_p) {\n\t\t\t\t*start_new_vnl_p = 0;\n\t\t\t\t/* now add new hook_vnl_action structure to */\n\t\t\t\t/* the list of such to return for updating  */\n\t\t\t\t/* the Server                               */\n\t\t\t\tpvna = malloc(sizeof(struct hook_vnl_action));\n\t\t\t\tif ((pvna != NULL) && (pvnalist != NULL)) {\n\t\t\t\t\tCLEAR_LINK(pvna->hva_link);\n\t\t\t\t\tsnprintf(pvna->hva_euser, sizeof(pvna->hva_euser),\n\t\t\t\t\t\t \"%s\", hook_euser);\n\t\t\t\t\tpvna->hva_actid = 
0;\n\t\t\t\t\tpvna->hva_vnl = NULL;\n\t\t\t\t\tpvna->hva_update_cmd = IS_UPDATE_FROM_HOOK;\n\t\t\t\t\tif (strncmp(obj_name, EVENT_VNODELIST_FAIL_OBJECT, vn_fail_obj_len) == 0) {\n\t\t\t\t\t\tpvna->hva_update_cmd = IS_UPDATE_FROM_HOOK2;\n\t\t\t\t\t}\n\t\t\t\t\tif (vnl_alloc(hvnlp_p) == NULL) {\n\t\t\t\t\t\tlog_err(errno, __func__, \"Failed to allocate hvnlp\");\n\t\t\t\t\t} else {\n\t\t\t\t\t\tpvna->hva_vnl = *hvnlp_p;\n\t\t\t\t\t\tappend_link(pvnalist,\n\t\t\t\t\t\t\t    &pvna->hva_link, pvna);\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tlog_err(errno, __func__, \"Failed to allocate hook action\");\n\t\t\t\t}\n\t\t\t\tif ((copy_file == 0) && (*hvnlp_p != NULL)) {\n\t\t\t\t\t/* add an entry to the vnl for the user */\n\t\t\t\t\trc = vn_addvnr(*hvnlp_p, mom_short_name,\n\t\t\t\t\t\t       VNATTR_HOOK_REQUESTOR,\n\t\t\t\t\t\t       hook_euser,\n\t\t\t\t\t\t       ATR_TYPE_STR, READ_ONLY,\n\t\t\t\t\t\t       NULL);\n\t\t\t\t\tif (rc == -1) {\n\t\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t\t \"Failed to add '%s %s=%s' vnode resource copy_file=%d\",\n\t\t\t\t\t\t\t mom_short_name, VNATTR_HOOK_REQUESTOR,\n\t\t\t\t\t\t\t hook_euser, copy_file);\n\t\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\trc = -1;\n\t\t\tif (*hvnlp_p) {\n\t\t\t\tchar *p2;\n\t\t\t\tresource_def *prdef;\n\t\t\t\tunsigned int at_type;\n\t\t\t\tunsigned int rs_flag;\n\n\t\t\t\t/* We're now passing type and flag so that if 'name_str' resource */\n\t\t\t\t/* does not exist, then it will be dynamically allocated on the server */\n\t\t\t\t/* side, essentially allowing hook scripts to define custom resources! 
*/\n\t\t\t\trs_flag = READ_WRITE | ATR_DFLAG_CVTSLT | ATR_DFLAG_MOM;\n\t\t\t\tif ((p2 = strrchr(name_str, '.')) != NULL) {\n\t\t\t\t\tp2++;\n\t\t\t\t\tprdef = find_resc_def(svr_resc_def, p2);\n\t\t\t\t\tif (prdef == NULL) { /* a custom resource */\n\t\t\t\t\t\tif (value_type != NULL) {\n\t\t\t\t\t\t\tif (strcmp(value_type, \"boolean\") == 0) {\n\t\t\t\t\t\t\t\tat_type = ATR_TYPE_BOOL;\n\t\t\t\t\t\t\t} else if (strcmp(value_type, \"long\") == 0) {\n\t\t\t\t\t\t\t\tat_type = ATR_TYPE_LONG;\n\t\t\t\t\t\t\t} else if (strcmp(value_type, \"size\") == 0) {\n\t\t\t\t\t\t\t\tat_type = ATR_TYPE_SIZE;\n\t\t\t\t\t\t\t} else if (strcmp(value_type, \"float\") == 0) {\n\t\t\t\t\t\t\t\tat_type = ATR_TYPE_FLOAT;\n\t\t\t\t\t\t\t} else if (strcmp(value_type, \"string_array\") == 0) {\n\t\t\t\t\t\t\t\tat_type = ATR_TYPE_ARST;\n\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\tat_type = ATR_TYPE_STR;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tat_type = ATR_TYPE_STR;\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tat_type = prdef->rs_type;\n\t\t\t\t\t\trs_flag = prdef->rs_flags;\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tat_type = ATR_TYPE_STR;\n\t\t\t\t}\n\t\t\t\trc = vn_addvnr(*hvnlp_p, vname_str, name_str,\n\t\t\t\t\t       data_value, at_type, rs_flag, NULL);\n\t\t\t\tif (rc == -1) {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"Failed to add '%s %s=%s' at_type=%d rs_flag=%d\",\n\t\t\t\t\t\t vname_str, name_str, data_value, at_type, rs_flag);\n\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t}\n\t\t\t}\n\n\t\t} else if ((strcmp(obj_name, SERVER_OBJECT) == 0) &&\n\t\t\t   (strcmp(name_str, PY_SCHEDULER_RESTART_CYCLE_METHOD) == 0)) {\n\n\t\t\tif ((strcmp(data_value, \"True\") == 0) &&\n\t\t\t    (copy_file == 0)) {\n\t\t\t\t/* ask Server to tell Scheduler to restart a */\n\t\t\t\t/* scheduling cycle; only done from main Mom */\n\t\t\t\tif (send_sched_recycle(hook_euser) != 0) {\n\t\t\t\t\t/* process a new line */\n\t\t\t\t\tline_data[0] = 
'\\0';\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t}\n\n\t\t} else if (strcmp(obj_name, PBS_OBJ) == 0) {\n\n\t\t\tif ((reboot_flag != NULL) &&\n\t\t\t    (strcmp(name_str, PBS_REBOOT_OBJECT) == 0)) {\n\t\t\t\tif (strcmp(data_value, \"True\") == 0)\n\t\t\t\t\t*reboot_flag = 1;\n\t\t\t\telse\n\t\t\t\t\t*reboot_flag = 0;\n\t\t\t} else if ((reboot_cmd != NULL) &&\n\t\t\t\t   (strcmp(name_str,\n\t\t\t\t\t   PBS_REBOOT_CMD_OBJECT) == 0)) {\n\t\t\t\tpbs_strncpy(reboot_cmd, data_value, HOOK_BUF_SIZE);\n\t\t\t}\n\t\t}\n\n\t\tif ((fp2 != NULL) && (fputs(line_data, fp2) < 0)) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"Failed to save data in file %s\",\n\t\t\t\t hook_job_outfile);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\trc = 1;\n\t\t\tgoto get_hook_results_end;\n\t\t}\n\t\tline_data[0] = '\\0';\n\t}\n\n\tif (found_joblist && (found_rerunjob_action || found_deletejob_action)) {\n\t\tif ((reject_deletejob != NULL) && (*reject_deletejob)) {\n\t\t\t/* deletejob takes precedence */\n\t\t\tnew_job_action_req(pjob2, phook ? phook->user : HOOK_PBSADMIN, JOB_ACT_REQ_DELETE);\n\t\t} else if ((reject_rerunjob != NULL) && (*reject_rerunjob)) {\n\t\t\tnew_job_action_req(pjob2, phook ? 
phook->user : HOOK_PBSADMIN, JOB_ACT_REQ_REQUEUE);\n\t\t}\n\t}\n\trc = 0;\n\nget_hook_results_end:\n\n\tif (fp != NULL)\n\t\tfclose(fp);\n\n\tif (fp2 != NULL) {\n\t\tif (fflush(fp2) != 0) {\n\t\t\t/* error in writing job-related hook results file */\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"Failed to save data in file %s\",\n\t\t\t\t hook_job_outfile);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\trc = 1;\n\t\t\tfclose(fp2);\n\t\t\tunlink(hook_job_outfile);\n\t\t} else {\n\t\t\tfclose(fp2);\n\t\t}\n\t}\n\tif (phook && !phook->debug) {\n\t\t(void) unlink(input_file);\n\t}\n\tif (line_data != NULL) {\n\t\tfree(line_data);\n\t}\n\tif (in_data != NULL) {\n\t\tfree(in_data);\n\t}\n\n\treturn (rc);\n}\n\n/**\n * @brief\n *\tMake the actual call to reboot the current host.\n *\tIf 'reboot_cmd' is NULL or empty, then use the default reboot\n *\tcmd line - see the REBOOT_CMD macro.\n *\n * @param[in] reboot_cmd - command line used to reboot the host\n *\n * @return Void\n *\n */\nstatic void\ndo_reboot(char *reboot_cmd)\n{\n\tchar bootcmd[HOOK_BUF_SIZE];\n\tint rcode;\n\n\tif ((reboot_cmd == NULL) || (*reboot_cmd == '\\0'))\n\t\tpbs_strncpy(bootcmd, REBOOT_CMD, sizeof(bootcmd));\n\telse\n\t\tpbs_strncpy(bootcmd, reboot_cmd, sizeof(bootcmd));\n\n\tsnprintf(log_buffer, sizeof(log_buffer), \"issuing cmd %s\", bootcmd);\n\tlog_event(PBSEVENT_DEBUG3, 0,\n\t\t  LOG_INFO, \"do_reboot\", log_buffer);\n\n#ifndef WIN32\n\tif ((rcode = system(bootcmd)) != -1)\n#else\n\tif ((rcode = wsystem(bootcmd, INVALID_HANDLE_VALUE)) == 0)\n#endif\n\t{\n\t\tlog_event(PBSEVENT_DEBUG3, 0,\n\t\t\t  LOG_INFO, \"do_reboot\", \"mom exiting\");\n\t\texit(0);\n\t} else {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"reboot failed exit code=%d\", rcode);\n\t\tlog_event(PBSEVENT_ERROR, 0,\n\t\t\t  LOG_ERR, __func__, log_buffer);\n\t}\n}\n\n/**\n * @brief\n *\tAdd a hook's delete or requeue action request to the list of such\n *\tactions sent to the server and call 
send_hook_job_action() to send it\n *\tto the server.  It will be removed from the list when the Server\n *\treplies or resent if the Server connection is reestablished.\n *\n * @param[in] pjob - pointer to job structure\n * @param[in] huser - hook ran as admin or user\n * @param[in] action - JOB_ACT_REQ_DELETE (1) or JOB_ACT_REQ_REQUEUE (0)\n *\n * @return none\n *\n */\nvoid\nnew_job_action_req(job *pjob, enum hook_user huser, int action)\n{\n\tstruct hook_job_action *phja;\n\n\tif (pjob == NULL) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t  __func__, \"Job received is NULL\");\n\t\treturn;\n\t}\n\tphja = malloc(sizeof(struct hook_job_action));\n\tif (phja == NULL) {\n\t\tlog_err(PBSE_SYSTEM, __func__, msg_err_malloc);\n\t\treturn;\n\t}\n\tCLEAR_LINK(phja->hja_link);\n\tsnprintf(phja->hja_jid, sizeof(phja->hja_jid), \"%s\", pjob->ji_qs.ji_jobid);\n\tphja->hja_actid = ++hook_action_id;\n\n\tif (is_jattr_set(pjob, JOB_ATR_run_version))\n\t\tphja->hja_runct = get_jattr_long(pjob, JOB_ATR_run_version);\n\telse\n\t\tphja->hja_runct = get_jattr_long(pjob, JOB_ATR_runcount);\n\n\tphja->hja_huser = huser;\n\tphja->hja_action = action;\n\tappend_link(&svr_hook_job_actions, &phja->hja_link, phja);\n\tsend_hook_job_action(phja);\n}\n\n/**\n * @brief\n *\tThis function runs after the task that runs a single periodic hook\n *\tin the background completes execution.  
If there was an error (the\n *\thook process returned a non-zero exit status), the results file\n *\tis just discarded (it may not even be there).\n *\n * @param[in] \tpwt - the work task.\n *\n * @return none\n *\n */\nstatic void\npost_periodic_hook(struct work_task *pwt)\n{\n\tint wstat = pwt->wt_aux;\n\thook *phook = (hook *) pwt->wt_parm1;\n#ifdef WIN32\n\tpid_t mypid = pwt->wt_aux2;\n#else\n\tpid_t mypid = pwt->wt_event;\n#endif\n\tchar hook_outfile[MAXPATHLEN + 1];\n\tchar reject_msg[HOOK_MSG_SIZE + 1];\n\ttime_t next_time;\n\tchar *next_time_str;\n\tint accept_flag = 1;\n\tint hook_error_flag = 0;\n\tint reject_flag = 0;\n\tint rerun_flag = 0;\n\tint delete_flag = 0;\n\tint reboot_flag = 0;\n\tchar reboot_cmd[HOOK_BUF_SIZE];\n\tpbs_list_head vnl_changes;\n\n\tCLEAR_HEAD(vnl_changes);\n\treboot_cmd[0] = '\\0';\n\treject_msg[0] = '\\0';\n\n\t/* Check hook exit status */\n\tif (wstat != 0) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t \"Non-zero exit status %d encountered for periodic hook\",\n\t\t\t wstat);\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_ERR, phook->hook_name, log_buffer);\n\t\thook_error_flag = 1; /* hook results are invalid */\n\t}\n\n\t/* hook results path */\n\tsnprintf(hook_outfile, MAXPATHLEN, FMT_HOOK_OUTFILE,\n\t\t path_hooks_workdir,\n\t\t HOOKSTR_EXECHOST_PERIODIC,\n\t\t phook->hook_name, mypid);\n\n\tif (hook_error_flag == 0) {\n\n\t\t/* hook exited normally, get results from file  */\n\t\tif (get_hook_results(hook_outfile, &accept_flag, &reject_flag,\n\t\t\t\t     reject_msg, sizeof(reject_msg), &rerun_flag,\n\t\t\t\t     &delete_flag, &reboot_flag, reboot_cmd, HOOK_BUF_SIZE,\n\t\t\t\t     &vnl_changes, NULL, phook, 0, NULL) != 0) {\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_ERR, phook->hook_name,\n\t\t\t\t  \"Failed getting hook results\");\n\t\t\t/* error getting results, do not accept results */\n\t\t\thook_error_flag = 1;\n\t\t}\n\t}\n\n\tif ((hook_error_flag == 1) || 
(accept_flag == 0)) {\n\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"%s request rejected by '%s'\",\n\t\t\t \"exechost_periodic\", phook->hook_name);\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_ERR, phook->hook_name, log_buffer);\n\t\tif (reject_msg[0] != '\\0') {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"%s\",\n\t\t\t\t reject_msg);\n\t\t\t/* log also the custom reject message */\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_ERR, phook->hook_name, log_buffer);\n\t\t}\n\t}\n\n\tif (hook_error_flag == 0) {\n\t\t/* No hook error means data is communicated to */\n\t\t/* the server and actions are done to jobs.    */\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_INFO, phook->hook_name, \"periodic hook accepted\");\n\n\t\t/* remove the processed results file, note that if  */\n\t\t/* there was an error, it is left for debugging use */\n\t\tif (phook && !phook->debug)\n\t\t\t(void) unlink(hook_outfile); /* remove file */\n\n\t\tif ((struct hook_vnl_action *) GET_NEXT(vnl_changes) != NULL) {\n\n\t\t\t/* there are vnode hook updates */\n\t\t\t/* Push hook changes to server */\n\n\t\t\thook_requests_to_server(&vnl_changes);\n\t\t}\n\n\t\tif (reboot_flag) {\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_INFO, phook->hook_name,\n\t\t\t\t  \"requested for host to be rebooted\");\n\t\t\tdo_reboot(reboot_cmd);\n\t\t}\n\t}\n\tvna_list_free(vnl_changes); /* free the list of changes */\n\n\tnext_time = time_now + phook->freq;\n\tnext_time_str = ctime(&next_time);\n\tif ((next_time_str != NULL) && (next_time_str[0] != '\\0')) {\n\t\tnext_time_str[strlen(next_time_str) - 1] = '\\0'; /* rem newline */\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"will run on %s\",\n\t\t\t next_time_str);\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_ERR, phook->hook_name, log_buffer);\n\t}\n\n\t(void) set_task(WORK_Timed, time_now + 
phook->freq,\n\t\t\trun_periodic_hook_bg_task, phook);\n}\n\n/**\n *\n * @brief\n *\tBased on the hook's 'fail_action' value, send a request to the\n *\tserver to perform the corresponding hook fail action(s).\n *\n * @param[in]\tphook - the hook whose fail_action is to be carried out.\n *\n * @return void\n */\nvoid\nsend_hook_fail_action(hook *phook)\n{\n\n\tchar hook_buf[HOOK_BUF_SIZE];\n\tvnl_t *tvnl = NULL;\n\tint vret = -1;\n\n\tif ((phook == NULL) || (phook->hook_name == NULL)) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_ERR, __func__, \"Hook received is NULL\");\n\t\treturn;\n\t}\n\n\tsnprintf(hook_buf, sizeof(hook_buf), \"1,%s\", phook->hook_name);\n\n\tif (vnl_alloc(&tvnl) == NULL) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"Failed to vnl_alloc vnlp for %s\",\n\t\t\t phook->hook_name);\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_INFO, \"\", log_buffer);\n\t\tgoto send_hook_fail_action_error;\n\t}\n\n\tif (phook->fail_action & HOOK_FAIL_ACTION_OFFLINE_VNODES) {\n\t\tvret = vn_addvnr(tvnl, mom_short_name,\n\t\t\t\t VNATTR_HOOK_OFFLINE_VNODES,\n\t\t\t\t hook_buf, 0, 0, NULL);\n\n\t\tif (vret != 0) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"Failed to add to vnlp: %s=%s\",\n\t\t\t\t VNATTR_HOOK_OFFLINE_VNODES, hook_buf);\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_INFO, phook->hook_name, log_buffer);\n\t\t\tgoto send_hook_fail_action_error;\n\t\t}\n\t}\n\n\tif (phook->fail_action & HOOK_FAIL_ACTION_SCHEDULER_RESTART_CYCLE) {\n\t\tvret = vn_addvnr(tvnl, mom_short_name,\n\t\t\t\t VNATTR_HOOK_SCHEDULER_RESTART_CYCLE,\n\t\t\t\t hook_buf, 0, 0, NULL);\n\t\tif (vret != 0) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"Failed to add to vnlp: %s=%s\",\n\t\t\t\t VNATTR_HOOK_SCHEDULER_RESTART_CYCLE,\n\t\t\t\t hook_buf);\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_INFO, phook->hook_name, log_buffer);\n\t\t\tgoto 
send_hook_fail_action_error;\n\t\t}\n\t}\n\n\tif (vret == 0) {\n\t\t/* this saves 'tvnl' in svr_vnl_action; it is freed\n\t\t * later when the server acks the action\n\t\t */\n\t\t(void) send_hook_vnl(tvnl);\n\t\ttvnl = NULL;\n\t}\n\nsend_hook_fail_action_error:\n\tif (tvnl != NULL)\n\t\tvnl_free(tvnl);\n}\n\n/**\n *\n * @brief\n *\tRecord the name of the last hook that executed\n *\ton behalf of 'pjob' into a well-known file\n *\tlocation:\n *\n *\t\"<location_directory>/hook_<pjob's jobid>.out\"\n *\n *\n * @param[in]\thook_event - calling event.\n * @param[in]\thook_name - name of hook that executed\n * @param[in] \tpjob - associated job executing hook\n * @param[in] \tfilepath - name of a file whose\n *\t\t\tdirectory location is used as\n *\t\t\t<location_directory>.\n *\n * @note\n *\tThis will currently record only for\n *\tHOOK_EVENT_EXECJOB_PROLOGUE hooks.\n * @return void\n */\nstatic void\nrecord_job_last_hook_executed(unsigned int hook_event,\n\t\t\t      char *hook_name, job *pjob, char *filepath)\n{\n\tchar hook_job_outfile[MAXPATHLEN + 1];\n\tFILE *fp = NULL;\n\tchar *p;\n\tchar chr_save = '\\0';\n\tchar *p_dir = NULL;\n\n\tif (pjob == NULL) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_ERR, __func__, \"Job not received\");\n\t\treturn;\n\t}\n\n\tif (hook_name == NULL) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_ERR, __func__, \"Hook not received\");\n\t\treturn;\n\t}\n\tif (hook_event != HOOK_EVENT_EXECJOB_PROLOGUE) {\n\t\treturn;\n\t}\n\n\tif (filepath != NULL) {\n\t\tp = strrchr(filepath, '/');\n\t\tif (p != NULL) {\n\t\t\tp++;\n\t\t\tchr_save = *p;\n\t\t\t*p = '\\0';\n\t\t\tp_dir = filepath;\n\t\t}\n\t} else {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_ERR, __func__, \"File path not received\");\n\t}\n\n\tsnprintf(hook_job_outfile, MAXPATHLEN,\n\t\t FMT_HOOK_JOB_OUTFILE, p_dir ? 
p_dir : \"\",\n\t\t pjob->ji_qs.ji_jobid);\n\n\tif (chr_save != '\\0')\n\t\t*p = chr_save; /* restore */\n\n\tfp = fopen(hook_job_outfile, \"w\");\n\tif (fp == NULL) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"failed to open hook_job_outfile=%s\",\n\t\t\t hook_job_outfile);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\treturn;\n\t}\n\tfprintf(fp, \"%s=%s\\n\", PY_EVENT_HOOK_NAME,\n\t\thook_name);\n\tfclose(fp);\n}\n\n/**\n * @brief\n * This function runs after execution of a single hook and processes\n * the results of that execution. If the hook was run in the background,\n * then on successful execution a new task is created to run the next\n * hook script; if the hook process exited with a non-zero status, no\n * task is created for the next hook script.\n *\n * @param[in] \tptask - the work task.\n *\n * @retval 1 the hook accepted the request\n * @retval 0 the hook rejected the request\n * @retval -1 an internal error occurred\n */\nint\npost_run_hook(struct work_task *ptask)\n{\n\n\tint accept_flag = 1;\n\tint reject_flag = 0;\n\tint reject_rerunjob = 0;\n\tint reject_deletejob = 0;\n\tint reboot_flag = 0;\n\tint log_type = 0;\n\tint log_class = 0;\n\tint hook_error_flag = 0;\n\tint *reject_errcode = NULL;\n\tint wstat = 0;\n\tchar *log_id = NULL;\n\tchar reject_msg[HOOK_MSG_SIZE + 1] = {'\\0'};\n\tchar reboot_cmd[HOOK_BUF_SIZE] = {'\\0'};\n\tchar hook_outfile[MAXPATHLEN + 1] = {'\\0'};\n\thook *phook = NULL;\n\tmom_hook_input_t *hook_input = NULL;\n\tjob *pjob = NULL;\n\tstruct work_task *new_task = NULL;\n\tpbs_list_head vnl_changes;\n\tmom_process_hooks_params_t *php = NULL;\n\n\tif (ptask == NULL) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_ERR, __func__, \"missing ptask argument to event\");\n\t\treturn -1;\n\t}\n\n\tif ((phook = (hook *) ptask->wt_parm1) == NULL) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_ERR, __func__, \"missing hook phook argument to event\");\n\t\treturn -1;\n\t}\n\n\tif 
((php = (mom_process_hooks_params_t *) ptask->wt_parm2) == NULL) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_ERR, __func__, \"missing hook params argument to event\");\n\t\treturn -1;\n\t}\n\n\tif (php->hook_event == HOOK_EVENT_EXECHOST_PERIODIC) {\n\t\tfree(php);\n\t\tpost_periodic_hook(ptask);\n\t\treturn 1;\n\t}\n\n\tif ((hook_input = (mom_hook_input_t *) php->hook_input) == NULL) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_ERR, __func__, \"missing input argument to event\");\n\t\treturn -1;\n\t}\n\n\tpjob = (job *) hook_input->pjob;\n\tCLEAR_HEAD(vnl_changes);\n\n\tif ((php->hook_event == HOOK_EVENT_EXECHOST_PERIODIC) ||\n\t    (php->hook_event == HOOK_EVENT_EXECHOST_STARTUP)) {\n\t\tlog_id = phook->hook_name;\n\t\tlog_type = PBSEVENT_DEBUG2;\n\t\tlog_class = PBS_EVENTCLASS_HOOK;\n\t} else {\n\t\tlog_id = pjob->ji_qs.ji_jobid;\n\t\tlog_type = PBSEVENT_JOB;\n\t\tlog_class = PBS_EVENTCLASS_JOB;\n\t}\n\n\t/* hook results path */\n\tsnprintf(hook_outfile, MAXPATHLEN, FMT_HOOK_OUTFILE,\n\t\t path_hooks_workdir,\n\t\t hook_event_as_string(php->hook_event),\n\t\t phook->hook_name,\n\t\t (pid_t) ((php->parent_wait) ? 
php->child : ptask->wt_event));\n\n\tif (php->parent_wait == 0) {\n\t\t/* background hook */\n\t\twstat = ptask->wt_aux;\n\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_INFO, phook->hook_name, \"finished\");\n\n\t\tswitch (wstat) {\n\t\t\tcase 0:\n\t\t\t\tbreak;\n\t\t\tcase -2: /* unhandled exception return on Windows */\n\t\t\tcase 254:\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t\t \"%s hook '%s' encountered an exception, \"\n\t\t\t\t\t \"request rejected\",\n\t\t\t\t\t hook_event_as_string(php->hook_event), phook->hook_name);\n\t\t\t\tlog_event(log_type, log_class,\n\t\t\t\t\t  LOG_ERR, log_id, log_buffer);\n\t\t\t\trecord_job_last_hook_executed(php->hook_event,\n\t\t\t\t\t\t\t      phook->hook_name, pjob, hook_outfile);\n\t\t\t\thook_error_flag = 1;\n\t\t\t\tbreak;\n\t\t\t\t/* -3 return from pbs_python == 2^8-3, but run_hook() */\n\t\t\t\t/* itself could return \"-3\" if it catches the alarm()   */\n\t\t\t\t/* to the process first. Both run_hook() code here, and */\n\t\t\t\t/* pbs_python program set alarm signals. 
One or the   */\n\t\t\t\t/* other would catch it first */\n\t\t\tcase -3: /* somewhere in this file, we do return -3 */\n\t\t\tcase 253:\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t\t \"alarm call while running %s hook '%s', \"\n\t\t\t\t\t \"request rejected\",\n\t\t\t\t\t hook_event_as_string(php->hook_event), phook->hook_name);\n\t\t\t\tlog_event(log_type, log_class, LOG_ERR, log_id,\n\t\t\t\t\t  log_buffer);\n\t\t\t\trecord_job_last_hook_executed(php->hook_event,\n\t\t\t\t\t\t\t      phook->hook_name, pjob, hook_outfile);\n\t\t\t\thook_error_flag = 1;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t\t \"Non-zero exit status %d encountered for %s hook\",\n\t\t\t\t\t wstat, hook_event_as_string(php->hook_event));\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t  LOG_ERR, phook->hook_name, log_buffer);\n\t\t\t\thook_error_flag = 1; /* hook results are invalid */\n\t\t}\n\t}\n\n\tif (hook_error_flag == 0) {\n\t\t/* hook exited normally, get results from file  */\n\t\tif (get_hook_results(hook_outfile, &accept_flag, &reject_flag,\n\t\t\t\t     reject_msg, sizeof(reject_msg), &reject_rerunjob,\n\t\t\t\t     &reject_deletejob, &reboot_flag, reboot_cmd,\n\t\t\t\t     HOOK_BUF_SIZE, &vnl_changes, pjob, phook,\n\t\t\t\t     (php->hook_event == HOOK_EVENT_EXECHOST_STARTUP) ? 
0 : !php->update_svr,\n\t\t\t\t     php->hook_output) != 0) {\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_ERR, phook->hook_name,\n\t\t\t\t  \"Failed to get hook results\");\n\t\t\tvna_list_free(vnl_changes);\n\t\t\tif (php->parent_wait)\n\t\t\t\treturn -1;\n\t\t\thook_error_flag = 1;\n\t\t}\n\t}\n\n\tif (!hook_error_flag) {\n\t\tif ((php->hook_output != NULL) && (php->hook_output->vnl != NULL)) {\n\t\t\tstruct hook_vnl_action *phvna;\n\t\t\t/* save vnl changes into  results array */\n\t\t\tfor (phvna = GET_NEXT(vnl_changes); phvna;\n\t\t\t     phvna = GET_NEXT(phvna->hva_link)) {\n\t\t\t\tvn_merge(php->hook_output->vnl, phvna->hva_vnl, NULL);\n\t\t\t}\n\t\t}\n\n\t\tif (php->update_svr == 1) {\n\t\t\tif (pjob != NULL) {\n\t\t\t\t/* Delete job or reject job actions */\n\t\t\t\t/* NOTE: Must appear here before vnode changes, */\n\t\t\t\t/* since this action will be sent whether or not */\n\t\t\t\t/* hook script executed by PBSADMIN or not. */\n\t\t\t\tif (reject_deletejob) {\n\t\t\t\t\t/* deletejob takes precedence */\n\t\t\t\t\tnew_job_action_req(pjob, phook->user, JOB_ACT_REQ_DELETE);\n\t\t\t\t} else if (reject_rerunjob) {\n\t\t\t\t\tnew_job_action_req(pjob, phook->user, JOB_ACT_REQ_REQUEUE);\n\t\t\t\t}\n\n\t\t\t\t/* Whether or not we accept or reject, we'll make */\n\t\t\t\t/* job changes, vnode changes, job actions */\n\t\t\t\tenqueue_update_for_send(pjob, IS_RESCUSED_FROM_HOOK);\n\t\t\t}\n\n\t\t\tif (vnl_changes.ll_next != NULL)\n\t\t\t\t/* Push vnl hook changes to server */\n\t\t\t\thook_requests_to_server(&vnl_changes);\n\t\t} else {\n\t\t\tvna_list_free(vnl_changes);\n\t\t}\n\t}\n\n\t/* reject if at least one hook script rejects */\n\tif (hook_error_flag || !accept_flag) {\n\n\t\tif (php->hook_msg != NULL) {\n\t\t\tsnprintf(php->hook_msg, php->msg_len - 1,\n\t\t\t\t \"%s request rejected by '%s'\",\n\t\t\t\t hook_event_as_string(php->hook_event),\n\t\t\t\t phook->hook_name);\n\t\t\tlog_event(PBSEVENT_DEBUG2, 
PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_ERR, phook->hook_name, php->hook_msg);\n\t\t} else {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"%s request rejected by '%s'\",\n\t\t\t\t hook_event_as_string(php->hook_event),\n\t\t\t\t phook->hook_name);\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_ERR, phook->hook_name, log_buffer);\n\t\t}\n\n\t\tif (reject_msg[0] != '\\0') {\n\t\t\tif (php->hook_msg != NULL) {\n\t\t\t\tsnprintf(php->hook_msg, php->msg_len - 1, \"%s\",\n\t\t\t\t\t reject_msg);\n\t\t\t\t/* log also the custom reject message */\n\t\t\t\tlog_event(log_type,\n\t\t\t\t\t  log_class, LOG_ERR,\n\t\t\t\t\t  log_id, php->hook_msg);\n\t\t\t} else {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"%s\",\n\t\t\t\t\t reject_msg);\n\t\t\t\t/* log also the custom reject message */\n\t\t\t\tlog_event(log_type, log_class, LOG_ERR,\n\t\t\t\t\t  log_id, log_buffer);\n\t\t\t}\n\t\t}\n\n\t\tif (php->hook_output)\n\t\t\treject_errcode = php->hook_output->reject_errcode;\n\n\t\tif (reject_errcode != NULL) {\n\t\t\t*reject_errcode = PBSE_HOOK_REJECT;\n\t\t\tif (reject_rerunjob)\n\t\t\t\t*reject_errcode = PBSE_HOOK_REJECT_RERUNJOB;\n\t\t\tif (reject_deletejob)\n\t\t\t\t*reject_errcode = PBSE_HOOK_REJECT_DELETEJOB;\n\t\t}\n\t\tif (php->parent_wait)\n\t\t\treturn (0); /* don't process anymore hooks on reject */\n\t}\n\n\tif (!hook_error_flag && reboot_flag) {\n\t\tif (phook->user == HOOK_PBSUSER) {\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_INFO, phook->hook_name,\n\t\t\t\t  \"Not allowed to issue reboot if run as user\");\n\t\t} else {\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_INFO, phook->hook_name,\n\t\t\t\t  \"requested for host to be rebooted\");\n\t\t\tdo_reboot(reboot_cmd);\n\t\t}\n\t}\n\n\tif (!php->parent_wait) {\n\t\t/* Background hook */\n\t\t/* Create task to check and run next hook script */\n\t\tnew_task = set_task(WORK_Immed, 0, (void *) mom_process_background_hooks, 
phook);\n\t\tif (!new_task)\n\t\t\tlog_err(errno, __func__,\n\t\t\t\t\"Unable to set task for mom_process_background_hooks\");\n\t\telse\n\t\t\tnew_task->wt_parm2 = (void *) php;\n\t}\n\n\treturn 1;\n}\n\n/**\n * @brief\n * This function replies to the outstanding request after\n * the hook event has finished executing in the background.\n *\n * @param[in] pjob - the job whose background hook has completed.\n *\n * @return void\n */\n\nvoid\nreply_hook_bg(job *pjob)\n{\n\tint n = 0;\n\tint ret = 0;\n\tchar jobid[PBS_MAXSVRJOBID + 1] = {'\\0'};\n\tjob *pjob2 = NULL;\n\tlong runver;\n#if !MOM_ALPS\n\tstruct batch_request *preq = pjob->ji_preq;\n#endif\n\n\tif (pjob->ji_hook_running_bg_on == BG_IS_DISCARD_JOB) {\n\t\t/**\n\t\t * IS_DISCARD_JOB can be received by sister node as well,\n\t\t * when node fail requeue is activated\n\t\t */\n\t\tn = get_jattr_long(pjob, JOB_ATR_run_version);\n\t\tstrcpy(jobid, pjob->ji_qs.ji_jobid);\n\n\t\tdel_job_resc(pjob); /* rm tmpdir, etc. */\n\t\tpjob->ji_hook_running_bg_on = BG_NONE;\n\t\tjob_purge_mom(pjob);\n\t\tdorestrict_user();\n\n\t\tif ((ret = is_compose(server_stream, IS_DISCARD_DONE)) != DIS_SUCCESS)\n\t\t\tgoto err;\n\n\t\tif ((ret = diswst(server_stream, jobid)) != DIS_SUCCESS)\n\t\t\tgoto err;\n\n\t\tif ((ret = diswsi(server_stream, n)) != DIS_SUCCESS)\n\t\t\tgoto err;\n\n\t\tdis_flush(server_stream);\n\t\ttpp_eom(server_stream);\n\n\t} else if (pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) { /*MS*/\n\t\tswitch (pjob->ji_hook_running_bg_on) {\n\t\t\tcase BG_CHECKPOINT_ABORT:\n\t\t\t\tpjob->ji_hook_running_bg_on = BG_NONE;\n\t\t\t\texiting_tasks = 1;\n\t\t\t\tterm_job(pjob);\n\t\t\t\tbreak;\n\t\t\tcase BG_PBS_BATCH_DeleteJob:\n\t\t\tcase BG_PBSE_SISCOMM:\n\t\t\t\tif ((pjob->ji_numnodes == 1) || (pjob->ji_hook_running_bg_on == BG_PBSE_SISCOMM)) {\n\t\t\t\t\tdel_job_resc(pjob); /* rm tmpdir, etc. 
*/\n\t\t\t\t\tpjob->ji_preq = NULL;\n\t\t\t\t\t(void) kill_job(pjob, SIGKILL);\n\t\t\t\t\tdorestrict_user();\n#if !MOM_ALPS\n\t\t\t\t\t/*\n\t\t\t\t\t* The delete job request from Server will have\n\t\t\t\t\t* been or will be replied to and freed by the\n\t\t\t\t\t* alps_cancel_reservation code in the sequence\n\t\t\t\t\t* of functions started with the above call to\n\t\t\t\t\t* del_job_resc().\n\t\t\t\t\t*/\n\t\t\t\t\tif (pjob->ji_numnodes == 1)\n\t\t\t\t\t\treply_ack(preq);\n\t\t\t\t\telse if (pjob->ji_hook_running_bg_on == BG_PBSE_SISCOMM)\n\t\t\t\t\t\treq_reject(PBSE_SISCOMM, 0, preq); /* sis down */\n#endif\n\t\t\t\t\tjob_purge_mom(pjob);\n\t\t\t\t} else\n\t\t\t\t\tpjob->ji_hook_running_bg_on = BG_NONE;\n\t\t\t\t/*\n\t\t\t\t* otherwise, job_purge() and dorestrict_user() are called in\n\t\t\t\t* mom_comm when all the sisters have replied. The reply to\n\t\t\t\t* the Server is also done there\n\t\t\t\t*/\n\n\t\t\t/**\n\t\t\t * Following cases to avoid the below compilation\n\t\t\t * error: enumeration value not handled in switch\n\t\t\t */\n\t\t\tcase BG_NONE:\n\t\t\tcase BG_IM_DELETE_JOB_REPLY:\n\t\t\tcase BG_IM_DELETE_JOB:\n\t\t\tcase BG_IM_DELETE_JOB2:\n\t\t\tcase BG_IS_DISCARD_JOB:\n\t\t\t\tbreak;\n\t\t}\n\t} else { /*SISTER MOM*/\n\t\tswitch (pjob->ji_hook_running_bg_on) {\n\t\t\tcase BG_CHECKPOINT_ABORT:\n\t\t\t\tpjob->ji_hook_running_bg_on = BG_NONE;\n\t\t\t\texiting_tasks = 1;\n\t\t\t\tterm_job(pjob);\n\t\t\t\tbreak;\n\t\t\tcase BG_IM_DELETE_JOB_REPLY:\n\t\t\t\tpost_reply(pjob, 0);\n\t\t\tcase BG_IM_DELETE_JOB:\n\t\t\t\tpjob->ji_hook_running_bg_on = BG_NONE;\n\t\t\t\tmom_deljob(pjob);\n\t\t\t\tbreak;\n\t\t\tcase BG_IM_DELETE_JOB2:\n\t\t\t\tstrcpy(jobid, pjob->ji_qs.ji_jobid);\n\t\t\t\tpjob->ji_hook_running_bg_on = BG_NONE;\n\t\t\t\tif (is_jattr_set(pjob, JOB_ATR_run_version))\n\t\t\t\t\trunver = get_jattr_long(pjob, JOB_ATR_run_version);\n\t\t\t\telse\n\t\t\t\t\trunver = get_jattr_long(pjob, JOB_ATR_runcount);\n\n\t\t\t\tmom_deljob(pjob);\n\n\t\t\t\t/* 
Need to create a lightweight copy of the job\n\t\t\t\t * containing only the jobid info, so we can simply call\n\t\t\t\t * new_job_action_req() to create a JOB_ACT_REQ_DEALLOCATE\n\t\t\t\t * request. Can't use the original 'pjob' structure because,\n\t\t\t\t * by the time the request is created, the real job has\n\t\t\t\t * already been deleted.\n\t\t\t\t */\n\t\t\t\tif ((pjob2 = job_alloc()) != NULL) {\n\t\t\t\t\tsnprintf(pjob2->ji_qs.ji_jobid, sizeof(pjob2->ji_qs.ji_jobid), \"%s\", jobid);\n\t\t\t\t\tset_jattr_l_slim(pjob2, JOB_ATR_run_version, runver, SET);\n\t\t\t\t\t/* JOB_ACT_REQ_DEALLOCATE request will tell the\n\t\t\t\t\t * server that this mom has completely deleted the\n\t\t\t\t\t * job and now the server can officially free up the\n\t\t\t\t\t * job from the nodes managed by this mom, allowing\n\t\t\t\t\t * other jobs to run.\n\t\t\t\t\t */\n\t\t\t\t\tnew_job_action_req(pjob2, HOOK_PBSADMIN, JOB_ACT_REQ_DEALLOCATE);\n\t\t\t\t\tjob_free(pjob2);\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\t/**\n\t\t\t * Following cases to avoid the below compilation\n\t\t\t * error: enumeration value not handled in switch\n\t\t\t */\n\t\t\tcase BG_NONE:\n\t\t\tcase BG_PBS_BATCH_DeleteJob:\n\t\t\tcase BG_PBSE_SISCOMM:\n\t\t\tcase BG_IS_DISCARD_JOB:\n\t\t\t\tbreak;\n\t\t}\n\t}\n\treturn;\nerr:\n\tsprintf(log_buffer, \"%s\", dis_emsg[ret]);\n\tlog_err(-1, __func__, log_buffer);\n\ttpp_close(server_stream);\n}\n\n/**\n * @brief\n * This function loops through the hook list\n * and runs each enabled hook in the background.\n *\n * @param[in] ptask - the work task.\n *\n * @return void\n */\nstatic void\nmom_process_background_hooks(struct work_task *ptask)\n{\n\tchar hook_infile[MAXPATHLEN + 1] = {'\\0'};\n\tchar hook_outfile[MAXPATHLEN + 1] = {'\\0'};\n\tchar hook_datafile[MAXPATHLEN + 1] = {'\\0'};\n\thook *phook = NULL;\n\tjob *pjob = NULL;\n\tmom_process_hooks_params_t *php = NULL;\n\n\tif (ptask == NULL) {\n\t\tlog_err(-1, __func__, \"missing ptask argument\");\n\t\treturn;\n\t}\n\n\tif ((phook = (hook 
*) ptask->wt_parm1) == NULL) {\n\t\tlog_err(-1, __func__, \"missing phook argument\");\n\t\treturn;\n\t}\n\n\tif ((php = (mom_process_hooks_params_t *) ptask->wt_parm2) == NULL) {\n\t\tlog_err(-1, __func__, \"missing php argument\");\n\t\treturn;\n\t}\n\n\tpjob = php->hook_input->pjob;\n\n\tif (pjob->ji_bg_hook_task)\n\t\tpjob->ji_bg_hook_task = NULL;\n\n\tif (php->hook_output && *(php->hook_output->reject_errcode)) {\n\t\treply_hook_bg(pjob);\n\t\tgoto fini;\n\t}\n\n\tphook = (hook *) GET_NEXT(phook->hi_execjob_end_hooks);\n\twhile (phook) {\n\t\tif (phook->enabled == FALSE) {\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_execjob_end_hooks);\n\t\t\tcontinue;\n\t\t}\n\t\tif (phook->script == NULL) {\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_ERR, phook->hook_name,\n\t\t\t\t  \"Hook has no script content. Skipping hook.\");\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_execjob_end_hooks);\n\t\t\tcontinue;\n\t\t}\n\t\tbreak;\n\t}\n\tif (!phook) {\n\t\treply_hook_bg(pjob);\n\t\tgoto fini;\n\t}\n\n\trun_hook(phook, php->hook_event, php->hook_input,\n\t\t php->req_user, php->req_host, 0, (void *) post_run_hook,\n\t\t hook_infile, hook_outfile, hook_datafile, MAXPATHLEN + 1, php);\n\treturn;\n\nfini:\n\tif (php->hook_output) {\n\t\tfree(php->hook_output->reject_errcode);\n\t\tfree(php->hook_output);\n\t}\n\tfree(php->hook_input);\n\tfree(php);\n}\n\n/**\n * @brief\n *\tProcess hook scripts based on request type.\n *\tThis loops through the matching list of\n *\thooks, and executes the corresponding hook scripts.\n *\n * @param[in] \thook_event - the event of the hooks that need to process.\n * @param[in] \treq_user - who requested for the hook to execute\n * @param[in] \treq_host - where the hook to execute is located\n * @param[in]\thook_input - struct of input parameters\n * @param[out]\thook_output - struct of output parameters\n * @param[in] \thook_msg  - upon failure, if this buffer is set, fill it with\n *\t\tthe actual error message.\n * 
@param[in]   msg_len  - the size of 'hook_msg' buffer.\n * @param[in]\tupdate_server - if true, send vnode and job attributes updates\n *\t\t\tand/or requeue/delete job actions to the Server;\n *\t\t\tdone when NOT a child of Mom, but Mom herself.\n *\t\t\tFor exechost_startup hook, this must not be done,\n *\t\t\tas the event has no job actions associated with the\n *\t\t\thook, and the vnode changes are sent separately\n *\t\t\tupon mom acknowledging the HELLO server message.\n * @return\tint\n * @retval \t0 means at least one hook was encountered to have rejected the\n *\t\trequest.\n * @retval \t1 means all the executed hooks have agreed to accept the request\n * @retval\t2 means no hook script executed (special case).\n * @retval\t-1 an internal error occurred\n * @retval\tHOOK_RUNNING_IN_BACKGROUND\n * \t\t\t\tbackground process started for the hook script.\n *\n */\nint\nmom_process_hooks(unsigned int hook_event, char *req_user, char *req_host,\n\t\t  mom_hook_input_t *hook_input, mom_hook_output_t *hook_output, char *hook_msg,\n\t\t  size_t msg_len, int update_server)\n{\n\tchar hook_infile[MAXPATHLEN + 1];\n\tchar hook_outfile[MAXPATHLEN + 1];\n\tchar hook_datafile[MAXPATHLEN + 1];\n\tchar *log_id = NULL;\n\tint rc;\n\tint num_run = 0;\n\tint set_job_exit = 0;\n\tint log_type = 0;\n\tint log_class = 0;\n\tint *reject_errcode = NULL;\n\tunsigned int *pfail_action = NULL;\n\thook *phook;\n\thook *phook_next = NULL;\n\thook **last_phook = NULL;\n\tjob *pjob = NULL;\n\tpbs_list_head vnl_changes;\n\tpbs_list_head *head_ptr;\n\tmom_process_hooks_params_t *php = NULL;\n\tstruct work_task task;\n\tchar perf_label[MAXBUFLEN];\n\n\tif (hook_input == NULL) {\n\t\tlog_err(-1, __func__, \"missing input argument to event\");\n\t\treturn (-1);\n\t}\n\n\tpjob = hook_input->pjob;\n\n\t/* If output objects are given, must have at least 3 */\n\t/* otherwise, input array is ignored */\n\tif (hook_output != NULL) {\n\t\treject_errcode = 
hook_output->reject_errcode;\n\t\tlast_phook = hook_output->last_phook;\n\t\tpfail_action = hook_output->fail_action;\n\t}\n\tif ((php = (mom_process_hooks_params_t *) malloc(\n\t\t     sizeof(mom_process_hooks_params_t))) == NULL) {\n\t\tlog_err(errno, __func__, MALLOC_ERR_MSG);\n\t\treturn -1;\n\t}\n\tphp->hook_event = hook_event;\n\tphp->req_user = req_user;\n\tphp->req_host = req_host;\n\tphp->hook_msg = hook_msg;\n\tphp->msg_len = msg_len;\n\tphp->update_svr = update_server;\n\tphp->hook_input = hook_input;\n\tphp->hook_output = hook_output;\n\tphp->parent_wait = 1;\n\n\tCLEAR_HEAD(vnl_changes);\n\tswitch (hook_event) {\n\n\t\tcase HOOK_EVENT_EXECJOB_BEGIN:\n\t\t\thead_ptr = &svr_execjob_begin_hooks;\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECJOB_PROLOGUE:\n\t\t\thead_ptr = &svr_execjob_prologue_hooks;\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECJOB_EPILOGUE:\n\t\t\thead_ptr = &svr_execjob_epilogue_hooks;\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECJOB_END:\n\t\t\thead_ptr = &svr_execjob_end_hooks;\n#ifndef WIN32\n\t\t\tphp->parent_wait = 0;\n#endif\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECJOB_PRETERM:\n\t\t\thead_ptr = &svr_execjob_preterm_hooks;\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECJOB_LAUNCH:\n\t\t\thead_ptr = &svr_execjob_launch_hooks;\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECHOST_PERIODIC:\n\t\t\thead_ptr = &svr_exechost_periodic_hooks;\n\t\t\tphp->parent_wait = 0;\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECHOST_STARTUP:\n\t\t\thead_ptr = &svr_exechost_startup_hooks;\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECJOB_ATTACH:\n\t\t\thead_ptr = &svr_execjob_attach_hooks;\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECJOB_RESIZE:\n\t\t\thead_ptr = &svr_execjob_resize_hooks;\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECJOB_ABORT:\n\t\t\thead_ptr = &svr_execjob_abort_hooks;\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECJOB_POSTSUSPEND:\n\t\t\thead_ptr = &svr_execjob_postsuspend_hooks;\n\t\t\tbreak;\n\t\tcase HOOK_EVENT_EXECJOB_PRERESUME:\n\t\t\thead_ptr = 
&svr_execjob_preresume_hooks;\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tfree(php);\n\t\t\treturn (-1); /* unexpected event encountered */\n\t}\n\n\tif (hook_msg != NULL)\n\t\tmemset(hook_msg, '\\0', msg_len);\n\n\tfor (phook = (hook *) GET_NEXT(*head_ptr); phook; phook = phook_next) {\n\t\tswitch (hook_event) {\n\n\t\t\tcase HOOK_EVENT_EXECJOB_BEGIN:\n\t\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_execjob_begin_hooks);\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECJOB_PROLOGUE:\n\t\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_execjob_prologue_hooks);\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECJOB_EPILOGUE:\n\t\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_execjob_epilogue_hooks);\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECJOB_END:\n\t\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_execjob_end_hooks);\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECJOB_PRETERM:\n\t\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_execjob_preterm_hooks);\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECJOB_LAUNCH:\n\t\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_execjob_launch_hooks);\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECHOST_PERIODIC:\n\t\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_exechost_periodic_hooks);\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECHOST_STARTUP:\n\t\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_exechost_startup_hooks);\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECJOB_ATTACH:\n\t\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_execjob_attach_hooks);\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECJOB_RESIZE:\n\t\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_execjob_resize_hooks);\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECJOB_ABORT:\n\t\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_execjob_abort_hooks);\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECJOB_POSTSUSPEND:\n\t\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_execjob_postsuspend_hooks);\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECJOB_PRERESUME:\n\t\t\t\tphook_next = (hook *) 
GET_NEXT(phook->hi_execjob_preresume_hooks);\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tfree(php);\n\t\t\t\treturn (-1); /*  should not get here */\n\t\t}\n\n\t\tif (phook->enabled == FALSE)\n\t\t\tcontinue;\n\n\t\tif (phook->script == NULL) {\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_ERR, phook->hook_name,\n\t\t\t\t  \"Hook has no script content. Skipping hook.\");\n\t\t\tcontinue;\n\t\t}\n\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_INFO, phook->hook_name, \"started\");\n\n\t\thook_infile[0] = '\\0';\n\t\thook_outfile[0] = '\\0';\n\t\thook_datafile[0] = '\\0';\n\n\t\t/* on an execjob_end or execjob_epilogue hook, need to set  */\n\t\t/* the job's exit value. \t\t\t\t    */\n\t\t/* Set it only once, and only if there's a hook to execute  */\n\t\t/* since we're affecting the job directly.\t\t    */\n\t\tif (((hook_event == HOOK_EVENT_EXECJOB_END) ||\n\t\t     (hook_event == HOOK_EVENT_EXECJOB_EPILOGUE)) &&\n\t\t    !set_job_exit) {\n\n\t\t\tset_jattr_l_slim(pjob, JOB_ATR_exit_status, pjob->ji_qs.ji_un.ji_momt.ji_exitstat, SET);\n\t\t\tset_job_exit = 1;\n\t\t} else if ((hook_event == HOOK_EVENT_EXECJOB_LAUNCH) && (num_run >= 1)) {\n\n\t\t\t/*\n\t\t\t * If there are multiple execjob_launch hooks,\n\t\t\t * we need to cascade the execjob_launch specific\n\t\t\t * parameters to the next execjob_launch hook,\n\t\t\t * if any of the previous hooks has set these\n\t\t\t * values.\n\t\t\t */\n\n\t\t\tif (hook_output != NULL) {\n\n\t\t\t\tif (hook_output->progname != NULL) {\n\t\t\t\t\thook_input->progname = *hook_output->progname;\n\t\t\t\t}\n\n\t\t\t\tif (hook_output->env != NULL) {\n\t\t\t\t\thook_input->env = hook_output->env;\n\t\t\t\t}\n\t\t\t\thook_input->argv = svrattrl_to_str_array(hook_output->argv);\n\t\t\t}\n\t\t}\n\n\t\tif (pjob != NULL)\n\t\t\tsnprintf(perf_label, sizeof(perf_label), \"hook_%s_%s_%s\", hook_event_as_string(hook_event), phook->hook_name, pjob->ji_qs.ji_jobid);\n\t\telse\n\t\t\tsnprintf(perf_label, 
sizeof(perf_label), \"hook_%s_%s_%d\", hook_event_as_string(hook_event), phook->hook_name, getpid());\n\n\t\thook_perf_stat_start(perf_label, \"mom_process_hooks\", 1);\n\t\trc = run_hook(phook, hook_event, hook_input,\n\t\t\t      req_user, req_host, php->parent_wait, (void *) post_run_hook,\n\t\t\t      hook_infile, hook_outfile, hook_datafile, MAXPATHLEN + 1, php);\n\t\thook_perf_stat_stop(perf_label, \"mom_process_hooks\", 1);\n\n\t\tif (last_phook != NULL)\n\t\t\t*last_phook = phook;\n\n\t\tif (pfail_action != NULL)\n\t\t\t*pfail_action |= phook->fail_action;\n\n\t\t/* go back to mom's private directory */\n\t\tif (chdir(mom_home) != 0) {\n\t\t\tlog_event(PBSEVENT_DEBUG2,\n\t\t\t\t  PBS_EVENTCLASS_HOOK, LOG_WARNING, phook->hook_name,\n\t\t\t\t  \"unable to go back to mom_home\");\n\t\t}\n\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK, LOG_INFO, phook->hook_name, \"finished\");\n\n\t\tif ((hook_event == HOOK_EVENT_EXECHOST_PERIODIC) || (hook_event == HOOK_EVENT_EXECHOST_STARTUP)) {\n\t\t\tlog_id = phook->hook_name;\n\t\t\tlog_type = PBSEVENT_DEBUG2;\n\t\t\tlog_class = PBS_EVENTCLASS_HOOK;\n\t\t} else {\n\t\t\tlog_id = pjob->ji_qs.ji_jobid;\n\t\t\tlog_type = PBSEVENT_JOB;\n\t\t\tlog_class = PBS_EVENTCLASS_JOB;\n\t\t}\n\n\t\tswitch (rc) {\n\t\t\tcase 0: /* success */\n\t\t\t\tbreak;\n\t\t\t\t/* -2 return from pbs_python on Windows == 2^8-2 on Linux*/\n\t\t\tcase -2:  /* unhandled exception return on Windows */\n\t\t\tcase 254: /* unhandled exception return on Linux/Unix */\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t\t \"%s hook '%s' encountered an exception, \"\n\t\t\t\t\t \"request rejected\",\n\t\t\t\t\t hook_event_as_string(hook_event), phook->hook_name);\n\t\t\t\tlog_event(log_type, log_class,\n\t\t\t\t\t  LOG_ERR, log_id, log_buffer);\n\t\t\t\tif (hook_msg != NULL) {\n\t\t\t\t\tsnprintf(hook_msg, msg_len - 1,\n\t\t\t\t\t\t \"request rejected as filter hook '%s' encountered an \"\n\t\t\t\t\t\t \"exception. 
Please inform Admin\",\n\t\t\t\t\t\t phook->hook_name);\n\t\t\t\t}\n\t\t\t\tif (reject_errcode != NULL) {\n\t\t\t\t\t*reject_errcode = PBSE_HOOKERROR;\n\t\t\t\t}\n\t\t\t\trecord_job_last_hook_executed(hook_event, phook->hook_name, pjob, hook_outfile);\n\t\t\t\tfree(php);\n\t\t\t\treturn (0);\n\t\t\t\t/* -3 return from pbs_python == 2^8-3, but run_hook() */\n\t\t\t\t/* itself could return \"-3\" if it catches the alarm()   */\n\t\t\t\t/* to the process first. Both run_hook() code here, and */\n\t\t\t\t/* pbs_python program set alarm signals. One or the   */\n\t\t\t\t/* other would catch it first */\n\t\t\tcase -3:  /* somewhere in this file, we do return -3 */\n\t\t\tcase 253: /* alarm timeout */\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t\t \"alarm call while running %s hook '%s', \"\n\t\t\t\t\t \"request rejected\",\n\t\t\t\t\t hook_event_as_string(hook_event), phook->hook_name);\n\t\t\t\tlog_event(log_type, log_class,\n\t\t\t\t\t  LOG_ERR, log_id, log_buffer);\n\t\t\t\tif (hook_msg != NULL) {\n\t\t\t\t\tsnprintf(hook_msg, msg_len - 1,\n\t\t\t\t\t\t \"request rejected as filter hook '%s' got an \"\n\t\t\t\t\t\t \"alarm call. Please inform Admin\",\n\t\t\t\t\t\t phook->hook_name);\n\t\t\t\t}\n\t\t\t\tif (reject_errcode != NULL) {\n\t\t\t\t\t*reject_errcode = PBSE_HOOKERROR;\n\t\t\t\t}\n\t\t\t\trecord_job_last_hook_executed(hook_event, phook->hook_name, pjob, hook_outfile);\n\t\t\t\tfree(php);\n\t\t\t\treturn (0);\n\t\t\tdefault:\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"Internal server error encountered. 
Skipping hook %s\",\n\t\t\t\t\t phook->hook_name);\n\t\t\t\tlog_event(log_type, log_class,\n\t\t\t\t\t  LOG_ERR, log_id, log_buffer);\n\t\t\t\tfree(php);\n\t\t\t\treturn (-1); /* should not happen */\n\t\t}\n\n\t\tnum_run++;\n\n\t\tif (hook_event == HOOK_EVENT_EXECHOST_PERIODIC) {\n\t\t\t/* hook backgrounded */\n\t\t\tif ((php = duplicate_php(php)) == NULL)\n\t\t\t\treturn (-1);\n\t\t\tcontinue;\n\t\t}\n\n\t\tif (php->parent_wait == 0)\n\t\t\treturn (HOOK_RUNNING_IN_BACKGROUND);\n\n\t\ttask.wt_parm1 = (void *) phook;\n\t\ttask.wt_parm2 = (void *) php;\n\t\tif ((rc = post_run_hook(&task)) != 1) {\n\t\t\t/* if a hook is not accepted, do not proceed further */\n\t\t\tfree(php);\n\t\t\treturn rc;\n\t\t}\n\t}\n\tif (num_run == 0) {\n\t\tfree(php);\n\t\treturn (2);\n\t}\n\n\tfree(php);\n\treturn (1);\n}\n\n/**\n * @brief\n * \tCleans up files older than HOOKS_TMPFILE_MAX_AGE under\n *\t<path_spool>.\n *\n * @param[in] \tptask\t- a task to queue up\n *\n * @return none\n *\n */\nvoid\ncleanup_hooks_in_path_spool(struct work_task *ptask)\n{\n\tDIR *dir;\n\tstruct dirent *pdirent;\n\tstruct stat sbuf;\n\tchar hook_file[MAXPATHLEN + 1];\n\n\tmemset(hook_file, '\\0', MAXPATHLEN + 1);\n\tdir = opendir(path_spool);\n\tif (dir == NULL) {\n\t\tsprintf(log_buffer, \"could not opendir %s\",\n\t\t\tpath_spool);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\treturn;\n\t}\n\twhile (errno = 0, (pdirent = readdir(dir)) != NULL) {\n\n\t\tif (pdirent->d_name[0] == '.') {\n\t\t\tif (pdirent->d_name[1] == '\\0' ||\n\t\t\t    (pdirent->d_name[1] == '.' 
&&\n\t\t\t     pdirent->d_name[2] == '\\0'))\n\t\t\t\tcontinue;\n\t\t}\n\n\t\tif (strncmp(pdirent->d_name, FMT_HOOK_PREFIX,\n\t\t\t    sizeof(FMT_HOOK_PREFIX) - 1) != 0)\n\t\t\tcontinue;\n\n\t\tsnprintf(hook_file, MAXPATHLEN, \"%s%s\",\n\t\t\t path_spool, pdirent->d_name);\n\t\tif (stat(hook_file, &sbuf) == -1) {\n\t\t\tsprintf(log_buffer, \"could not stat %s\", hook_file);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\tcontinue;\n\t\t}\n\n\t\t/* remove files older than 'HOOKS_TMPFILE_MAX_AGE' */\n\t\tif ((time_now - sbuf.st_ctime) > HOOKS_TMPFILE_MAX_AGE) {\n\t\t\tif (unlink(hook_file) < 0) {\n\t\t\t\tif (errno != ENOENT) {\n\t\t\t\t\tsprintf(log_buffer, \"could not cleanup %s\",\n\t\t\t\t\t\thook_file);\n\t\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\tif (errno != 0 && errno != ENOENT) {\n\t\tlog_err(errno, __func__, \"readdir\");\n\t}\n\tif (dir) {\n\t\t(void) closedir(dir);\n\t}\n\n\t/*  cleanup of hooks temp files happen in the next */\n\t/* 'HOOKS_TMPFILE_NEXT_CLEANUP_PERIOD' secs.\t   */\n\t(void) set_task(WORK_Timed, time_now + HOOKS_TMPFILE_NEXT_CLEANUP_PERIOD,\n\t\t\tcleanup_hooks_in_path_spool, NULL);\n}\n\n/**\n *  @brief\n *  \tInitializes all the elements of mom_hook_input_t structure.\n *\n *  @param[in]\thook_input - the structure to initialize.\n *  @return void\n *\n */\nvoid\nmom_hook_input_init(mom_hook_input_t *hook_input)\n{\n\tif (hook_input == NULL) {\n\t\tlog_err(PBSE_HOOKERROR, __func__, \"Hook input is NULL\");\n\t\treturn;\n\t}\n\n\thook_input->pjob = NULL;\n\thook_input->progname = NULL;\n\thook_input->argv = NULL;\n\thook_input->env = NULL;\n\thook_input->vnl = NULL;\n\thook_input->vnl_fail = NULL;\n\thook_input->failed_mom_list = NULL;\n\thook_input->succeeded_mom_list = NULL;\n\thook_input->jobs_list = NULL;\n}\n\n/**\n *  @brief\n *  \tInitializes all the elements of mom_hook_output_t structure.\n *\n *  @param[in]\thook_output - the structure to initialize.\n *  @return void\n *\n 
*/\nvoid\nmom_hook_output_init(mom_hook_output_t *hook_output)\n{\n\tif (hook_output == NULL) {\n\t\tlog_err(PBSE_HOOKERROR, __func__, \"Hook output is NULL\");\n\t\treturn;\n\t}\n\n\thook_output->reject_errcode = NULL;\n\thook_output->last_phook = NULL;\n\thook_output->fail_action = NULL;\n\thook_output->progname = NULL;\n\thook_output->argv = NULL;\n\thook_output->env = NULL;\n\thook_output->vnl = NULL;\n\thook_output->vnl_fail = NULL;\n}\n"
  },
  {
    "path": "src/resmom/mom_inter.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n/**\n * @file\tmom_inter.c\n */\n#include <sys/types.h>\n#include <netinet/in.h>\n#include <sys/socket.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <stdio.h>\n#include <string.h>\n#include <termios.h>\n#include <unistd.h>\n#include <stdlib.h>\n#include <netdb.h>\n#if defined(HAVE_SYS_IOCTL_H)\n#include <sys/ioctl.h>\n#endif\n#if !defined(sgi) && !defined(linux)\n#include <sys/tty.h>\n#endif /* ! 
sgi */\n#include \"portability.h\"\n#include \"pbs_ifl.h\"\n#include \"server_limits.h\"\n#include \"net_connect.h\"\n#include \"libsec.h\"\n#include \"port_forwarding.h\"\n#include \"log.h\"\n#include \"libpbs.h\"\n#include \"dis.h\"\n\nstatic char cc_array[PBS_TERM_CCA];\nstatic struct winsize wsz;\n\nextern int mom_reader_go;\n\nextern int ptc;\n#define XAUTH_LEN 512\n\n/**\n * @brief\n * \tread_net - read data from network till received amount expected\n *\n * @param[in] sock - file descriptor\n * @param[in] buf  - buffer\n * @param[in] amt  - amount of data\n *\n * @return int\n * @retval  >0 amount read\n * @retval  -1 on error\n *\n */\nstatic int\nread_net(int sock, char *buf, int amt)\n{\n\tint got;\n\tint total = 0;\n\n\twhile (amt > 0) {\n\t\tgot = CS_read(sock, buf, amt);\n\t\tif (got > 0) { /* read (some) data */\n\t\t\tamt -= got;\n\t\t\tbuf += got;\n\t\t\ttotal += got;\n\t\t} else if (got == 0)\n\t\t\tbreak;\n\t\telse\n\t\t\treturn (-1);\n\t}\n\treturn (total);\n}\n\n/**\n * @brief\n * \tread_net - read packet from network till received amount expected\n *\n * @param[in] sock - file descriptor\n * @param[in] buf  - buffer\n * @param[in] amt  - amount of data\n *\n * @return int\n * @retval  >0 amount read\n * @retval  -1 on error\n *\n */\nstatic int\nread_pkt_net(int sock, char *buf, int amt)\n{\n\tint got;\n\tint total = 0;\n\tvoid *data_in = NULL;\n\tsize_t len_in = 0;\n\tint type = 0;\n\n\twhile (amt > 0) {\n\t\tgot = transport_recv_pkt(sock, &type, &data_in, &len_in);\n\t\tif (got > 0) { /* read (some) data */\n\t\t\tmemcpy(buf, data_in, len_in);\n\t\t\tamt -= got;\n\t\t\tbuf += got;\n\t\t\ttotal += got;\n\t\t} else if (got == 0)\n\t\t\tbreak;\n\t\telse\n\t\t\treturn (-1);\n\t}\n\treturn (total);\n}\n\n/**\n * @brief\n * \trcvttype - receive the terminal type of the real terminal\n *\tSent over network as \"TERM=type_string\"\n *\n * @param sock - file descriptor\n *\n * @return string\n *\n */\n\nchar *\nrcvttype(int sock)\n{\n\tstatic char 
buf[PBS_TERM_BUF_SZ];\n\tint (*read_f)(int, char *, int) = NULL;\n\n\tif (transport_chan_get_ctx_status(sock, FOR_ENCRYPT) == (int) AUTH_STATUS_CTX_READY) {\n\t\tread_f = read_pkt_net;\n\t} else {\n\t\tread_f = read_net;\n\t}\n\n\t/* read terminal type as sent by qsub */\n\n\tif ((read_f(sock, buf, PBS_TERM_BUF_SZ) != PBS_TERM_BUF_SZ) ||\n\t    (strncmp(buf, \"TERM=\", 5) != 0))\n\t\treturn NULL;\n\n\t/* get the basic control characters from qsub's termial */\n\n\tif (read_f(sock, cc_array, PBS_TERM_CCA) != PBS_TERM_CCA) {\n\t\treturn NULL;\n\t}\n\n\treturn (buf);\n}\n\n/**\n * @brief\n * \tset_termcc - set the basic modes for the slave terminal, and set the\n *\tcontrol characters to those sent by qsub.\n *\n * @param[in] fd - file descriptor\n *\n * @return Void\n *\n */\n\nvoid\n\tset_termcc(int fd)\n{\n\tstruct termios slvtio;\n\n#ifdef PUSH_STREAM\n\t(void) ioctl(fd, I_PUSH, \"ptem\");\n\t(void) ioctl(fd, I_PUSH, \"ldterm\");\n#endif /* PUSH_STREAM */\n\n\tif (tcgetattr(fd, &slvtio) < 0)\n\t\treturn; /* cannot do it, leave as is */\n\n#ifdef IMAXBEL\n\tslvtio.c_iflag = (BRKINT | IGNPAR | ICRNL | IXON | IXOFF | IMAXBEL);\n#else\n\tslvtio.c_iflag = (BRKINT | IGNPAR | ICRNL | IXON | IXOFF);\n#endif\n\tslvtio.c_oflag = (OPOST | ONLCR);\n#if defined(ECHOKE) && defined(ECHOCTL)\n\tslvtio.c_lflag = (ISIG | ICANON | ECHO | ECHOE | ECHOK | ECHOKE | ECHOCTL);\n#else\n\tslvtio.c_lflag = (ISIG | ICANON | ECHO | ECHOE | ECHOK);\n#endif\n\tslvtio.c_cc[VEOL] = '\\0';\n\tslvtio.c_cc[VEOL2] = '\\0';\n\tslvtio.c_cc[VSTART] = '\\021'; /* ^Q */\n\tslvtio.c_cc[VSTOP] = '\\023';  /* ^S */\n#if defined(VDSUSP)\n\tslvtio.c_cc[VDSUSP] = '\\031'; /* ^Y */\n#endif\n#if defined(VREPRINT)\n\tslvtio.c_cc[VREPRINT] = '\\022'; /* ^R */\n#endif\n\tslvtio.c_cc[VLNEXT] = '\\017'; /* ^V */\n\n\tslvtio.c_cc[VINTR] = cc_array[0];\n\tslvtio.c_cc[VQUIT] = cc_array[1];\n\tslvtio.c_cc[VERASE] = cc_array[2];\n\tslvtio.c_cc[VKILL] = cc_array[3];\n\tslvtio.c_cc[VEOF] = cc_array[4];\n\tslvtio.c_cc[VSUSP] 
= cc_array[5];\n\t(void) tcsetattr(fd, TCSANOW, &slvtio);\n}\n\n/**\n * @brief\n * \trcvwinsize - receive the window size of the real terminal window\n *\n *\tSent over network as \"WINSIZE rn cn xn yn\"  where .n is numeric string\n *\n * @param[in] sock - file descriptor\n *\n * @return   error code\n * @retval   0     Success\n * @retval  -1     Failure\n *\n */\n\nint\nrcvwinsize(int sock)\n{\n\tchar buf[PBS_TERM_BUF_SZ];\n\tint (*read_f)(int, char *, int) = NULL;\n\n\tif (transport_chan_get_ctx_status(sock, FOR_ENCRYPT) == (int) AUTH_STATUS_CTX_READY) {\n\t\tread_f = read_pkt_net;\n\t} else {\n\t\tread_f = read_net;\n\t}\n\n\tif (read_f(sock, buf, PBS_TERM_BUF_SZ) != PBS_TERM_BUF_SZ)\n\t\treturn (-1);\n\tif (sscanf(buf, \"WINSIZE %hu,%hu,%hu,%hu\", &wsz.ws_row, &wsz.ws_col,\n\t\t   &wsz.ws_xpixel, &wsz.ws_ypixel) != 4)\n\t\treturn (-1);\n\treturn 0;\n}\n\n/**\n * @brief\n *\tset window size of the terminal\n *\n * @param pty - terminal interface\n *\n * @return    error code\n * @retval    0     Success\n * @retval   -1     Failure\n *\n */\nint\nsetwinsize(int pty)\n{\n\tif (ioctl(pty, TIOCSWINSZ, &wsz) < 0) {\n\t\tperror(\"ioctl TIOCSWINSZ\");\n\t\treturn (-1);\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n *\treader process - reads from the remote socket, and writes\n *\tto the master pty\n *\n * @param[in] s - file descriptor\n * @param[in] ptc - master file descriptor\n * @param[in] command - shell command(s) to be sent to the PTY before user data from the socket\n *\n * @return    error code\n * @retval    0     Success\n * @retval   -1     Write Failure\n * @retval   -2     Read Failure\n * \n */\nint\nmom_reader(int s, int ptc, char *command)\n{\n\tchar buf[PF_BUF_SIZE];\n\tint c;\n\n\t/* send shell command(s) to the PTY input stream */\n\tc = command ? 
strlen(command) : 0;\n\tif (c > 0) {\n\t\tint wc;\n\t\tchar *p = command;\n\n\t\twhile (c) {\n\t\t\tif ((wc = write(ptc, p, c)) < 0) {\n\t\t\t\tif (errno == EINTR) {\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\treturn (-1);\n\t\t\t}\n\t\t\tc -= wc;\n\t\t\tp += wc;\n\t\t}\n\t}\n\n\t/* read from the socket, and write to ptc */\n\twhile (mom_reader_go) {\n\t\tc = CS_read(s, buf, sizeof(buf));\n\t\tif (c > 0) {\n\t\t\tint wc;\n\t\t\tchar *p = buf;\n\n\t\t\twhile (c) {\n\t\t\t\tif ((wc = write(ptc, p, c)) < 0) {\n\t\t\t\t\tif (errno == EINTR) {\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t\treturn (-1);\n\t\t\t\t}\n\t\t\t\tc -= wc;\n\t\t\t\tp += wc;\n\t\t\t}\n\t\t} else if (c == 0) {\n\t\t\treturn (0);\n\t\t} else if (c < 0) {\n\t\t\tif (errno == EINTR)\n\t\t\t\tcontinue;\n\t\t\telse {\n\t\t\t\treturn (-2);\n\t\t\t}\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\treader process - reads from the remote socket, and writes\n *\tto the master pty\n *\n * @param[in] s - filr descriptor\n * @param[in] ptc - master file descriptor\n * @param[in] command - shell command(s) to be sent to the PTY before user data from the socket\n *\n * @return    error code\n * @retval    0     Success\n * @retval   -1     Write Failure\n * @retval   -2     Read Failure\n * \n */\nint\nmom_reader_pkt(int s, int ptc, char *command)\n{\n\tint c;\n\tvoid *data_in = NULL;\n\tsize_t len_in = 0;\n\tint type = 0;\n\n\t/* send shell command(s) to the PTY input stream */\n\tc = command ? 
strlen(command) : 0;\n\tif (c > 0) {\n\t\tint wc;\n\t\tchar *p = command;\n\n\t\twhile (c) {\n\t\t\tif ((wc = write(ptc, p, c)) < 0) {\n\t\t\t\tif (errno == EINTR) {\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\treturn (-1);\n\t\t\t}\n\t\t\tc -= wc;\n\t\t\tp += wc;\n\t\t}\n\t}\n\n\t/* read from the socket, and write to ptc */\n\twhile (mom_reader_go) {\n\t\tpbs_tcp_timeout = -1;\n\t\tc = transport_recv_pkt(s, &type, &data_in, &len_in);\n\t\tif (c > 0) {\n\t\t\tint wc;\n\t\t\tchar *p = data_in;\n\n\t\t\twhile (c) {\n\t\t\t\tif ((wc = write(ptc, p, c)) < 0) {\n\t\t\t\t\tif (errno == EINTR) {\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t\treturn (-1);\n\t\t\t\t}\n\t\t\t\tc -= wc;\n\t\t\t\tp += wc;\n\t\t\t}\n\t\t} else if (c == -2) { /* tcp_recv returns -2 on EOF */\n\t\t\treturn (0);\n\t\t} else if (c < 0) {\n\t\t\tif (errno == EINTR)\n\t\t\t\tcontinue;\n\t\t\telse {\n\t\t\t\treturn (-2);\n\t\t\t}\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *      This functions sets the job directory to PBS_JOBDIR for private sandbox\n *\n * @param command[in] - this parameter contains the job directory in case of\n *                      private sandbox.\n * @return int\n * @retval 0 Success\n * @retval -1 Failure\n *\n */\nint\nsetcurrentworkdir(char *command)\n\n{\n\tint c;\n\n\t/* send shell command(s) to the PTY input stream */\n\tc = command ? 
strlen(command) : 0;\n\tif (c > 0) {\n\t\tint wc;\n\t\tchar *p = command;\n\n\t\twhile (c) {\n\t\t\tif ((wc = write(ptc, p, c)) < 0) {\n\t\t\t\tif (errno == EINTR) {\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\treturn (-1);\n\t\t\t}\n\t\t\tc -= wc;\n\t\t\tp += wc;\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *        This function reads the data from remote socket and write it to pty.\n *\n * @param[in] s - socket fd from where data needs to be read.\n *\n * @return  int\n * @retval    0  Success\n * @retval   -1  Failure\n * @retval   -2  Peer Closed\n *\n */\nint\nmom_reader_Xjob(int s)\n{\n\tstatic char buf[PF_BUF_SIZE];\n\tint c;\n\t/* read from the socket, and write to ptc */\n\tc = CS_read(s, buf, sizeof(buf));\n\tif (c > 0) {\n\t\tint wc;\n\t\tchar *p = buf;\n\n\t\twhile (c) {\n\t\t\tif ((wc = write(ptc, p, c)) < 0) {\n\t\t\t\tif (errno == EINTR) {\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\treturn (-1);\n\t\t\t}\n\t\t\tc -= wc;\n\t\t\tp += wc;\n\t\t}\n\t} else if (c == 0) {\n\t\t/* If control reaches here, then it means peer has closed the\n\t\t * connection\n\t\t */\n\t\treturn (-2);\n\t} else if (c < 0) {\n\t\tif (errno == EINTR) {\n\t\t\treturn (0);\n\t\t} else {\n\t\t\treturn (-1);\n\t\t}\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n *        This function reads the packets from remote socket and write it to pty.\n *\n * @param[in] s - socket fd from where data needs to be read.\n *\n * @return  int\n * @retval    0  Success\n * @retval   -1  Failure\n * @retval   -2  Peer Closed\n *\n */\nint\nmom_reader_pkt_Xjob(int s)\n{\n\tint c;\n\tvoid *data_in = NULL;\n\tsize_t len_in = 0;\n\tint type = 0;\n\n\t/* read from the socket, and write to ptc */\n\tc = transport_recv_pkt(s, &type, &data_in, &len_in);\n\tif (c > 0) {\n\t\tint wc;\n\t\tchar *p = data_in;\n\n\t\twhile (c) {\n\t\t\tif ((wc = write(ptc, p, c)) < 0) {\n\t\t\t\tif (errno == EINTR) {\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\treturn (-1);\n\t\t\t}\n\t\t\tc -= wc;\n\t\t\tp += wc;\n\t\t}\n\t} else if (c 
== -2) { /* tcp_recv returns -2 on EOF */\n\t\t/* If control reaches here, then it means peer has closed the\n\t\t * connection\n\t\t */\n\t\treturn (-2);\n\t} else if (c < 0) {\n\t\tif (errno == EINTR) {\n\t\t\treturn (0);\n\t\t} else {\n\t\t\treturn (-1);\n\t\t}\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n *        This function selects reader for data from remote socket and write it to pty.\n *\n * @param[in] s - socket fd from where data needs to be read.\n *\n * @return  int\n * @retval    0  Success\n * @retval   -1  Failure\n * @retval   -2  Peer Closed\n *\n */\nint\nmom_get_reader_Xjob(int s)\n{\n\tif (transport_chan_get_ctx_status(s, FOR_ENCRYPT) == (int) AUTH_STATUS_CTX_READY) {\n\t\treturn mom_reader_pkt_Xjob(s);\n\t} else {\n\t\treturn mom_reader_Xjob(s);\n\t}\n}\n\n/**\n * @brief\n * \tWriter process: reads from master pty, and writes\n * \tdata out to the rem socket\n *\n * @param[in] s - socket fd\n * @param[in] ptc - master file descriptor\n *\n * @return    error code\n * @retval    0     Success\n * @retval   -1     Write Failure\n * @retval   -2     Read Failure\n *\n */\nint\nmom_writer(int s, int ptc)\n{\n\tchar buf[PF_BUF_SIZE];\n\tint c;\n\n\t/* read from ptc, and write to the socket */\n\twhile (1) {\n\t\tc = read(ptc, buf, sizeof(buf));\n\t\tif (c > 0) {\n\t\t\tint wc;\n\t\t\tchar *p = buf;\n\n\t\t\twhile (c) {\n\t\t\t\tif ((wc = CS_write(s, p, c)) < 0) {\n\t\t\t\t\tif (errno == EINTR) {\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t\treturn (-1);\n\t\t\t\t}\n\t\t\t\tc -= wc;\n\t\t\t\tp += wc;\n\t\t\t}\n\t\t} else if (c == 0) {\n\t\t\treturn (0);\n\t\t} else if (c < 0) {\n\t\t\tif (errno == EINTR)\n\t\t\t\tcontinue;\n\t\t\telse {\n\t\t\t\treturn (-2);\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \tPacket Writer process: reads from master pty, and writes\n * \tdata out to the rem socket\n *\n * @param[in] s - socket fd\n * @param[in] ptc - master file descriptor\n *\n * @return    error code\n * @retval    0     Success\n * @retval   -1     
Write Failure\n * @retval   -2     Read Failure\n *\n */\nint\nmom_writer_pkt(int s, int ptc)\n{\n\tchar buf[PF_BUF_SIZE];\n\tint c;\n\n\t/* read from ptc, and write to the socket */\n\twhile (1) {\n\t\tc = read(ptc, buf, sizeof(buf));\n\t\tif (c > 0) {\n\t\t\ttransport_send_pkt(s, AUTH_ENCRYPTED_DATA, buf, c);\n\t\t} else if (c == 0) {\n\t\t\treturn (0);\n\t\t} else if (c < 0) {\n\t\t\tif (errno == EINTR)\n\t\t\t\tcontinue;\n\t\t\telse {\n\t\t\t\treturn (-2);\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *      connect to the qsub that submitted this interactive job\n *\n * @param hostname[in] - hostname of the submission host where qsub is running.\n * @param port[in] - port number on which qsub is accepting connection.\n * @param authport_falgs[in] - Authentication port flags to use. Values defined in net_connect.h\n *\n * @return int\n * @retval >=0 the socket obtained\n * @retval  -1 PBS_NET_RC_FATAL\n * @retval  -2 PBS_NET_RC_RETRY\n *\n */\nstatic int\n_conn_qsub(char *hostname, long port, int authport_flags)\n{\n\tpbs_net_t hostaddr;\n\n\tif ((hostaddr = get_hostaddr(hostname)) == (pbs_net_t) 0)\n\t\treturn (PBS_NET_RC_FATAL);\n\n\t/* Yes, the qsub is listening, but for authentication\n\t * purposes mom wants authenticate as a server - not as\n\t * a client\n\t */\n\n\treturn (client_to_svr(hostaddr, (unsigned int) port, authport_flags));\n}\n\n/**\n * @brief\n *      connect to the qsub that submitted this interactive job, using a privileged port\n *\n * @param hostname[in] - hostname of the submission host where qsub is running.\n * @param port[in] - port number on which qsub is accepting connection.\n *\n * @return int\n * @retval >=0 the socket obtained\n * @retval  -1 PBS_NET_RC_FATAL\n * @retval  -2 PBS_NET_RC_RETRY\n *\n */\nint\nconn_qsub_resvport(char *hostname, long port)\n{\n\treturn _conn_qsub(hostname, port, B_SVR | B_RESERVED);\n}\n\n\n/**\n * @brief\n *      connect to the qsub that submitted this interactive job\n *\n * @param hostname[in] - 
hostname of the submission host where qsub is running.\n * @param port[in] - port number on which qsub is accepting connection.\n *\n * @return int\n * @retval >=0 the socket obtained\n * @retval  -1 PBS_NET_RC_FATAL\n * @retval  -2 PBS_NET_RC_RETRY\n *\n */\nint\nconn_qsub(char *hostname, long port)\n{\n\treturn _conn_qsub(hostname, port, B_SVR);\n}\n\n/**\n * @brief       This function creates a socket for listening for X11\n *              connections.The socket is created only for jobs that\n *              require X forwarding .\n *\n * @param socks[in/out] - Socks structure which keeps track of\n *                        sockets that are active and data read/written by\n *                        peers.\n * @param x11_use_localhost[in] - Non-zero value to use localhost only.\n * @param display[out] - sets the display number and screen number.\n * @param homedir[in] - uses this home directory to put in the environment.\n * @param x11authstr[in] - used to get the X11 protocol, hex data and screen.\n *\n * @return int - Return a suitable display number (>0) for the DISPLAY\n *               variable\n * @retval -1 Failure\n *\n */\nint\ninit_x11_display(\n\tstruct pfwdsock *socks,\n\tint x11_use_localhost,\n\tchar *display,\n\tchar *homedir,\n\tchar *x11authstr)\n{\n\tint display_number, sock, i;\n\tu_short port;\n\tstruct addrinfo hints, *ai, *aitop;\n\tchar strport[NI_MAXSERV];\n\tint gaierr, n, num_socks = 0, ret = 0;\n\tunsigned int x11screen;\n\tchar x11proto[XAUTH_LEN], x11data[XAUTH_LEN];\n\tchar format[XAUTH_LEN];\n\tchar *homeenv;\n\tchar logit[512] = {0};\n\tchar func[] = \"init_x11_display\";\n\n\t*display = '\\0';\n\tif ((homeenv = malloc(sizeof(\"HOME=\") + strlen(homedir) + 1)) == NULL) {\n\t\t/* FAILURE - cannot alloc memory */\n\t\tsprintf(logit, \"Malloc Failure : %.100s\\n\",\n\t\t\tstrerror(errno));\n\t\tlog_err(errno, func, logit);\n\n\t\treturn (-1);\n\t}\n\n\tsetenv(\"HOME\", homedir, 1);\n\n\tfor (n = 0; n < NUM_SOCKS; 
n++)\n\t\tsocks[n].active = 0;\n\n\tx11proto[0] = x11data[0] = '\\0';\n\tformat[0] = '\\0';\n\n\tsprintf(format, \" %%%d[^:]: %%%d[^:]: %%u\", XAUTH_LEN - 1, XAUTH_LEN - 1);\n\n\terrno = 0;\n\n\tif ((n = sscanf(x11authstr, format,\n\t\t\tx11proto,\n\t\t\tx11data,\n\t\t\t&x11screen)) != 3) {\n\t\tsprintf(logit, \"sscanf(%s)=%d failed: %s\\n\",\n\t\t\tx11authstr,\n\t\t\tn,\n\t\t\tstrerror(errno));\n\t\tlog_err(errno, func, logit);\n\t\tfree(socks);\n\t\treturn (-1);\n\t}\n\n\tfor (display_number = X11OFFSET; display_number < MAX_DISPLAYS;\n\t     display_number++) {\n\t\tport = X_PORT + display_number;\n\t\tmemset(&hints, 0, sizeof(hints));\n\t\thints.ai_family = AF_UNSPEC;\n\t\thints.ai_flags = x11_use_localhost ? 0 : AI_PASSIVE;\n\t\thints.ai_socktype = SOCK_STREAM;\n\t\tret = snprintf(strport, sizeof(strport), \"%d\", port);\n\t\tif (ret >= sizeof(strport)) {\n\t\t\tlog_err(-1, func, \"strport overflow\");\n\t\t}\n\n\t\tif ((gaierr = getaddrinfo(NULL, strport, &hints, &aitop)) != 0) {\n\t\t\tsprintf(logit, \"getaddrinfo: %.100s\\n\",\n\t\t\t\tgai_strerror(gaierr));\n\t\t\tlog_err(errno, func, logit);\n\t\t\tfree(socks);\n\t\t\treturn (-1);\n\t\t}\n\n\t\t/* create a socket and bind it to display_number */\n\t\tfor (ai = aitop; ai != NULL; ai = ai->ai_next) {\n\t\t\tif (ai->ai_family != AF_INET)\n\t\t\t\tcontinue;\n\t\t\tsock = socket(ai->ai_family, SOCK_STREAM, 0);\n\t\t\tif (sock < 0) {\n\t\t\t\tif ((errno != EINVAL) && (errno != EAFNOSUPPORT)) {\n\t\t\t\t\tsprintf(logit, \"socket: %.100s\\n\",\n\t\t\t\t\t\tstrerror(errno));\n\t\t\t\t\tlog_err(errno, func, logit);\n\t\t\t\t\tfree(socks);\n\t\t\t\t\treturn (-1);\n\t\t\t\t} else {\n\t\t\t\t\tsprintf(logit, \"Socket family %d *NOT* supported\\n\",\n\t\t\t\t\t\tai->ai_family);\n\t\t\t\t\tlog_err(errno, func, logit);\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t}\n\t\t\ti = 1;\n\t\t\tsetsockopt(sock, SOL_SOCKET, SO_REUSEADDR, (char *) &i, sizeof(i));\n\t\t\tif (bind(sock, ai->ai_addr, ai->ai_addrlen) < 0) 
{\n\t\t\t\tlog_err(errno, func, strerror(errno));\n\t\t\t\tclose(sock);\n\t\t\t\tif (ai->ai_next)\n\t\t\t\t\tcontinue;\n\t\t\t\tfor (n = 0; n < num_socks; n++) {\n\t\t\t\t\tclose(socks[n].sock);\n\t\t\t\t}\n\t\t\t\tnum_socks = 0;\n\t\t\t\tbreak;\n\t\t\t}\n\n\t\t\tsocks[num_socks].sock = sock;\n\t\t\tsocks[num_socks].active = 1;\n\t\t\tnum_socks++;\n\t\t\tif (x11_use_localhost) {\n\t\t\t\tif (num_socks == NUM_SOCKS)\n\t\t\t\t\tbreak;\n\t\t\t} else {\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tfreeaddrinfo(aitop);\n\t\tif (num_socks > 0)\n\t\t\tbreak;\n\t} /* END for (display) */\n\tif (display_number >= MAX_DISPLAYS) {\n\t\tsprintf(logit, \"Failed to allocate internet-domain X11 display socket.\\n\");\n\t\tlog_err(errno, func, logit);\n\t\tfree(socks);\n\t\treturn (-1);\n\t}\n\n\t/* Start listening for connections on the socket. */\n\tfor (n = 0; n < num_socks; n++) {\n\t\tif (listen(socks[n].sock, 10) < 0) {\n\t\t\tsprintf(logit, \"listen : %.100s\\n\",\n\t\t\t\tstrerror(errno));\n\t\t\tlog_err(errno, func, logit);\n\t\t\tclose(socks[n].sock);\n\t\t\tfree(socks);\n\t\t\treturn (-1);\n\t\t}\n\t\tsocks[n].listening = 1;\n\t} /* END for (n) */\n\n\t/* setup local xauth */\n\n\tsprintf(display, \"localhost:%u.%u\",\n\t\tdisplay_number,\n\t\tx11screen);\n\n\treturn (display_number);\n}\n"
  },
  {
    "path": "src/resmom/mom_main.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tmom_main.c\n * @brief\n * The entry point function for MOM.\n */\n#define _MOM_MAIN_C\n#include <pbs_config.h> /* the master config generated by configure */\n\n#ifdef PYTHON\n#include <pbs_python_private.h>\n#include <Python.h>\n#include <pythonrun.h>\n#include <wchar.h>\n#endif\n\n#include <unistd.h>\n#include <pwd.h>\n#include <grp.h>\n#include <netdb.h>\n#include <sys/param.h>\n#include <sys/times.h>\n#include <netinet/in.h>\n#include <sys/socket.h>\n#include <sys/time.h>\n#include <sys/resource.h>\n#include <sys/utsname.h>\n#include <sys/wait.h>\n#ifdef _POSIX_MEMLOCK\n#include <sys/mman.h>\n#endif /* _POSIX_MEMLOCK */\n#include <dirent.h>\n\n#include <assert.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <signal.h>\n#include <string.h>\n#include <ctype.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <time.h>\n#include <limits.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <arpa/inet.h>\n#include <libutil.h>\n\n#include \"auth.h\"\n#include \"libpbs.h\"\n#include \"pbs_ifl.h\"\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"placementsets.h\"\n#include \"resource.h\"\n#include \"job.h\"\n#include \"mom_func.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"pbs_error.h\"\n#include \"log.h\"\n#include \"net_connect.h\"\n#include 
\"tpp.h\"\n#include \"dis.h\"\n#include \"resmon.h\"\n#include \"batch_request.h\"\n#include \"pbs_license.h\"\n#include \"pbs_version.h\"\n#include \"libsec.h\"\n#include \"pbs_ecl.h\"\n#include \"pbs_internal.h\"\n#include \"pbs_idx.h\"\n#ifdef HWLOC\n#include \"hwloc.h\"\n#endif\n#include \"hook.h\"\n#include \"mom_hook_func.h\"\n#include \"work_task.h\"\n#include \"pbs_share.h\"\n#include \"mom_server.h\"\n#if MOM_ALPS\n#include \"mom_mach.h\"\n#endif /* MOM_ALPS */\n#include \"pbs_reliable.h\"\n#ifdef PMIX\n#include \"mom_pmix.h\"\n#endif /* PMIX */\n\n#include \"renew_creds.h\"\n\n#define STATE_UPDATE_TIME 10\n#ifndef PRIO_MAX\n#define PRIO_MAX 20\n#endif\n#ifndef PRIO_MIN\n#define PRIO_MIN -20\n#endif\n\n/* Reducing tpp_request process for a minimum of 3 times to interleave other connections */\n#define MAX_TPP_LOOPS 3\n\n/*\n * Default \"mutual exclusion\" for job start/queue commit operations.  The\n * pointer is provided so multi-threaded mom implementations can replace it\n * with a pointer to a shared mutex.\n */\n\n/* Global Data Items */\nint mock_run = 0;\nenum hup_action call_hup = HUP_CLEAR;\nstatic int update_state_flag = 0;\ndouble cputfactor = 1.00;\nunsigned int default_server_port;\nint exiting_tasks = 0;\nfloat ideal_load_val = -1.0;\nint idle_on_maxload = 0;\nint internal_state = 0;\nint internal_state_update = 0;\nint termin_child = 0;\nint do_debug_report = 0;\nuid_t restrict_user_exempt_uids[NUM_RESTRICT_USER_EXEMPT_UIDS] = {0};\nint svr_delay_entry = 0;\nint mom_net_up = 0;\ntime_t mom_net_up_time = 0;\n#ifdef WIN32\nLASTINPUTINFO key_mouse_press = {sizeof(LASTINPUTINFO), 0};\nint nrun_factor = 0;\n\nvoid WINAPI PbsMomMain(DWORD dwArgc, LPTSTR *rgszArgv);\nvoid WINAPI PbsMomHandler(DWORD dwControl);\nDWORD WINAPI main_thread(void *pv);\n\n/*\n * NOTE: Note the global state used by your service. 
Your service has a name,\n * state and a status handle used by SetServiceStatus.\n */\nconst TCHAR *const g_PbsMomName = __TEXT(\"PBS_MOM\");\nHANDLE g_hthreadMain = 0;\nSERVICE_STATUS_HANDLE g_ssHandle = 0;\nDWORD g_dwCurrentState = SERVICE_START_PENDING;\nHANDLE hStop = NULL;\n#endif /* WIN32 */\nextern void mom_vnlp_report(vnl_t *vnl, char *header);\n\nint alien_attach = 0; /* attach alien procs */\nint alien_kill = 0;   /* kill alien procs */\nint lockfds;\nfloat max_load_val = -1.0;\nint max_poll_downtime_val = PBS_MAX_POLL_DOWNTIME;\nchar *mom_domain;\nchar *mom_home;\nchar mom_host[PBS_MAXHOSTNAME + 1];\npid_t mom_pid;\nint mom_run_state = 1;\nchar mom_short_name[PBS_MAXHOSTNAME + 1];\nint next_sample_time = MAX_CHECK_POLL_TIME;\nint max_check_poll = MAX_CHECK_POLL_TIME;\nint min_check_poll = MIN_CHECK_POLL_TIME;\nint inc_check_poll = 20;\nint num_acpus = 1;\nint num_pcpus = 1;\nint num_oscpus = 1;\nu_Long av_phy_mem = 0; /* physical memory in KB */\nint num_var_env;\nchar *path_epilog;\nchar *path_jobs;\nchar *path_prolog;\nchar *path_spool;\nchar *path_undeliv;\nchar *path_addconfigs;\nchar path_addconfigs_reserved_prefix[] = \"PBS\";\n\nchar *path_hooks;\nchar *path_hooks_workdir;\nchar *path_rescdef;\nhook *phook;\nchar *hook_suffix = HOOK_FILE_SUFFIX;\nint hook_suf_len;\nchar hook_msg[HOOK_MSG_SIZE + 1];\nint baselen;\nchar *psuffix;\nstruct dirent *pdirent;\nDIR *dir;\n/*char\t\tpbs_current_user[PBS_MAXUSER] = \"pbs_mom\";*/ /* for libpbs.a */\n/* above is TLS data now, strcpy the value \"pbs_mom\" into it in main */\n\nchar pbs_tmpdir[_POSIX_PATH_MAX] = TMP_DIR;\nchar pbs_jobdir_root[_POSIX_PATH_MAX] = \"\";\nint pbs_jobdir_root_shared = FALSE;\nvnl_t *vnlp = NULL; /* vnode list */\nunsigned long hooks_rescdef_checksum = 0;\n\n/* vnlp_from_hook: vnode list changes made by an exechost_startup hook, that */\n/* sent to the server initially as part of the IS_HELLO/IS_CLUSTER_ADDR/ */\n/* IS_CLUSTER_ADDR2 sequence . 
Then when successfully  sent to server,  */\n/* entries matching HOOK_VNL_PERSISTENT_ATTRIBS will be merged with the */\n/* main vnlp structure,  which gets  resent when server loses contact */\n/* with mom, and server sends an IS_HELLO request */\nvnl_t *vnlp_from_hook = NULL;\n\nextern char *msg_startup1;\nextern char *msg_init_chdir;\nextern char *msg_corelimit;\nint pbs_errno;\ngid_t pbsgroup;\nunsigned int pbs_mom_port;\nunsigned int pbs_rm_port;\npbs_list_head mom_polljobs; /* jobs that must have resource limits polled */\npbs_list_head mom_deadjobs; /* jobs that need to purged, see chk_del_job */\nint server_stream = -1;\npbs_list_head svr_newjobs; /* jobs being sent to MOM */\npbs_list_head svr_alljobs; /* all jobs under MOM's control */\ntime_t time_last_sample = 0;\nextern time_t time_now;\ntime_t time_resc_updated = 0;\nextern pbs_list_head svr_requests;\nstruct var_table vtable; /* see start_exec.c */\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\nextern pbs_list_head svr_allcreds;\n#endif\n\n#if MOM_ALPS\n#define ALPS_REL_WAIT_TIME_DFLT 400000; /* 0.4 sec */\n#define ALPS_REL_JITTER_DFLT 120000;\t/* 0.12 sec */\n#define ALPS_REL_TIMEOUT 600;\t\t/* 10 min */\n#define ALPS_CONF_EMPTY_TIMEOUT 10;\t/* 10 sec */\n#define ALPS_CONF_SWITCH_TIMEOUT 35;\t/* 35 sec */\nchar *alps_client = NULL;\nuseconds_t alps_release_wait_time = ALPS_REL_WAIT_TIME_DFLT;\nuseconds_t alps_release_jitter = ALPS_REL_JITTER_DFLT;\nint vnode_per_numa_node;\nint alps_release_timeout;\nint alps_confirm_empty_timeout;\nint alps_confirm_switch_timeout;\n#endif /* MOM_ALPS */\nchar *path_checkpoint = NULL;\nstatic resource_def *rdcput;\nstatic resource_def *rdwall;\nint restart_background = FALSE;\nint reject_root_scripts = FALSE;\nint report_hook_checksums = TRUE;\nint restart_transmogrify = FALSE;\nint attach_allow = TRUE;\nextern double wallfactor;\nint suspend_signal;\nint resume_signal;\nint cycle_harvester = 0;\t/* MOM configured for cycle harvesting */\nint restrict_user 
= 0;\t\t/* kill non PBS user procs */\nint restrict_user_maxsys = 999; /* largest system user id */\nint gen_nodefile_on_sister_mom = TRUE;\nint vnode_additive = 1;\nmomvmap_t **mommap_array = NULL;\nint mommap_array_size = 0;\nunsigned long QA_testing = 0;\n\nlong joinjob_alarm_time = -1;\nlong job_launch_delay = -1; /* # of seconds to delay job launch due to pipe reads (pipe read timeout)  */\nint update_joinjob_alarm_time = 0;\nint update_job_launch_delay = 0;\n\n#ifdef NAS\t\t     /* localmod 015 */\nunsigned long spoolsize = 0; /* default spoolsize = unlimited */\n#endif\t\t\t     /* localmod 015 */\n\n#ifdef NAS /* localmod 153 */\nstatic char quiesce_mom_flag_file[_POSIX_PATH_MAX] = \"/PBS/flags/quiesce_mom\";\nint mom_should_quiesce = 0;\n#endif /* localmod 153 */\n\n#ifdef NAS_UNKILL\t/* localmod 011 */\n#define KP_WAIT_TIME 60 /* number of seconds to wait for kill\n\t\t\t\t\t   to do its deed before declaring the\n\t\t\t\t\t   process unkillable */\n\nstruct kp {\n\tpbs_list_link kp_link; /* linked list struct */\n\tpid_t pid;\t       /* pid of process being killed */\n\tpid_t ppid;\t       /* ppid of process being killed */\n\tu_Long start_time;     /* start_time of process being killed */\n\ttime_t kill_time;      /* time() of first kill attempt */\n};\ntypedef struct kp kp;\n\npbs_list_head killed_procs; /* procs killed by dorestrict_user() */\n#endif\t\t\t    /* localmod 011 */\n\nvoid *jobs_idx = NULL;\npbs_list_head mom_pending_ruu;\n\nunsigned long hook_action_id = 0;\n\npbs_list_head svr_allhooks;\n/* hooks below ignored */\npbs_list_head svr_queuejob_hooks;\npbs_list_head svr_postqueuejob_hooks;\npbs_list_head svr_modifyjob_hooks;\npbs_list_head svr_resvsub_hooks;\npbs_list_head svr_modifyresv_hooks;\npbs_list_head svr_movejob_hooks;\npbs_list_head svr_runjob_hooks;\npbs_list_head svr_jobobit_hooks;\npbs_list_head svr_management_hooks;\npbs_list_head svr_modifyvnode_hooks;\npbs_list_head svr_periodic_hooks;\npbs_list_head 
svr_provision_hooks;\npbs_list_head svr_resv_confirm_hooks;\npbs_list_head svr_resv_begin_hooks;\npbs_list_head svr_resv_end_hooks;\npbs_list_head svr_hook_job_actions;\npbs_list_head svr_hook_vnl_actions;\nint mom_recvd_ip_cluster_addrs = 0;\n\n/* the mom hooks */\npbs_list_head svr_execjob_begin_hooks;\npbs_list_head svr_execjob_prologue_hooks;\npbs_list_head svr_execjob_epilogue_hooks;\npbs_list_head svr_execjob_preterm_hooks;\npbs_list_head svr_execjob_launch_hooks;\npbs_list_head svr_execjob_end_hooks;\npbs_list_head svr_exechost_periodic_hooks;\npbs_list_head svr_exechost_startup_hooks;\npbs_list_head svr_execjob_attach_hooks;\npbs_list_head svr_execjob_resize_hooks;\npbs_list_head svr_execjob_abort_hooks;\npbs_list_head svr_execjob_postsuspend_hooks;\npbs_list_head svr_execjob_preresume_hooks;\n\n/* the task lists */\npbs_list_head task_list_immed;\npbs_list_head task_list_interleave;\npbs_list_head task_list_timed;\npbs_list_head task_list_event;\n\n#ifdef WIN32\n/* copy request list */\npbs_list_head mom_copyreqs_list;\n#endif\n\n#ifndef WIN32\n#ifdef RLIM64_INFINITY\nstruct rlimit64 orig_stack_size;\nstruct rlimit64 orig_nproc_limit;\nstruct rlimit64 orig_core_limit;\n#else\nstruct rlimit orig_stack_size;\nstruct rlimit orig_nproc_limit;\nstruct rlimit orig_core_limit;\n#endif /* RLIM64... */\n#endif /* WIN32 */\n\n/* Local Data Items */\n\nstatic int nconfig;\t      /* items in conf file */\nstatic time_t idle_avail = 0; /* seconds for keyboard to be idle */\nstatic time_t idle_busy = 10; /* seconds for keyboard to remain */\nstatic int idle_check = -1;   /* indicate if doing idle check */\ntime_t idle_poll = 1;\t      /* rate to poll keyboard when ! 
busy */\nstatic time_t went_busy = 0;  /* time keyboard went busy */\nstatic time_t prior_key = 0;  /* time of prior keystroke/mouse */\nstatic int restrictrm = 0;    /* restricted RM request */\nint kill_jobs_on_exit = 0;    /* kill running jobs on Mom exit */\nstatic char *path_checkpoint_from_getopt = NULL;\nstatic char *path_checkpoint_from_getenv = NULL;\nstatic char *path_checkpoint_default = NULL;\nstatic char path_checkpoint_buf[_POSIX_PATH_MAX] = \"\\0\";\nstatic time_t maxtm; /* see getkbdtime() */\n\n#ifdef WIN32\n\n/**\n * This global variable is used to indicate whether PBS_INTERACTIVE service\n * has been registered with Service Control Manager\n *\n *\t0  - Error\n *\t1  - PBS_INTERACTIVE service is registered\n *\t-1 - PBS_INTERACTIVE service is not registered\n */\nint interactive_svc_avail = 0;\n\n#endif\n\n/**\n *\tTo handle new configuration file formats (beginning with the vnode-\n *\tspecific data needed for GRUNT), we introduce the notion that any\n *\tnew-style mom configuration file must declare its version number at\n *\tthe beginning of the file, via a \"$configversion\" directive.\n *\n *\tAt present, we handle only a single new version number, known internally\n *\tas CONFIG_VNODEVERS.\n */\nenum configvers {\n\tCONFIG_MINVERS = 2,\n\tCONFIG_VNODEVERS = 2,\n\tCONFIG_MAXVERS = 2\n};\n\nstruct config_list {\n\tstruct config c;\n\tstruct config_list *c_link;\n};\nstatic handler_ret_t config_versionhandler(char *, const char *, FILE *);\n\nstatic handler_ret_t addclient(char *);\nstatic handler_ret_t add_mom_action(char *);\nstatic handler_ret_t config_verscheck(char *);\nstatic handler_ret_t cputmult(char *);\nstatic handler_ret_t parse_config(char *);\nstatic handler_ret_t prologalarm(char *);\nstatic handler_ret_t set_joinjob_alarm(char *);\nstatic handler_ret_t set_job_launch_delay(char *);\nstatic handler_ret_t restricted(char *);\nstatic handler_ret_t set_alien_attach(char *);\nstatic handler_ret_t set_alien_kill(char *);\n#if 
MOM_ALPS\nstatic handler_ret_t set_alps_client(char *);\nstatic handler_ret_t set_alps_release_wait_time(char *);\nstatic handler_ret_t set_alps_release_jitter(char *);\nstatic handler_ret_t set_alps_release_timeout(char *);\nstatic handler_ret_t set_alps_confirm_empty_timeout(char *);\nstatic handler_ret_t set_vnode_per_numa_node(char *);\nstatic handler_ret_t set_alps_confirm_switch_timeout(char *);\n#endif /* MOM_ALPS */\nstatic handler_ret_t set_attach_allow(char *);\nstatic handler_ret_t set_checkpoint_path(char *);\nstatic handler_ret_t set_enforcement(char *);\nstatic handler_ret_t set_jobdir_root(char *);\nstatic handler_ret_t set_kbd_idle(char *);\nstatic handler_ret_t set_max_check_poll(char *);\nstatic handler_ret_t set_min_check_poll(char *);\nstatic handler_ret_t set_momname(char *);\nstatic handler_ret_t set_momport(char *);\n#ifdef WIN32\nstatic handler_ret_t set_nrun_factor(char *);\n#endif\nstatic handler_ret_t set_restart_background(char *);\nstatic handler_ret_t set_restart_transmogrify(char *);\nstatic handler_ret_t set_restrict_user(char *);\nstatic handler_ret_t set_restrict_user_maxsys(char *);\nstatic handler_ret_t set_restrict_user_exceptions(char *);\nstatic handler_ret_t set_gen_nodefile_on_sister_mom(char *);\nstatic handler_ret_t set_suspend_signal(char *);\nstatic handler_ret_t set_tmpdir(char *);\nstatic handler_ret_t set_vnode_additive(char *);\nstatic handler_ret_t setidealload(char *);\nstatic handler_ret_t setlogevent(char *);\nstatic handler_ret_t set_reject_root_scripts(char *);\nstatic handler_ret_t set_report_hook_checksums(char *);\nstatic handler_ret_t setmaxload(char *);\nstatic handler_ret_t set_max_poll_downtime(char *);\nstatic handler_ret_t usecp(char *);\nstatic handler_ret_t wallmult(char *);\n#ifdef NAS /* localmod 015 */\nstatic handler_ret_t set_spoolsize(char *);\n#endif /* localmod 015 */\n\nstatic struct specials {\n\tchar *name;\n\thandler_ret_t (*handler)(char *);\n} special[] = {\n\t/* alphabetized by name 
*/\n\t{\"action\", add_mom_action},\n\t/*\n\t ****************************************************\n\t ** WARNING\n\t ** These \"alien\" entries are undocumented and are for\n\t ** prototype purposes only.  DO NOT USE.\n\t ****************************************************\n\t */\n\t{\"alien_attach\", set_alien_attach},\n\t{\"alien_kill\", set_alien_kill},\n#if MOM_ALPS\n\t{\"alps_client\", set_alps_client},\n\t{\"alps_confirm_empty_timeout\", set_alps_confirm_empty_timeout},\n\t{\"alps_release_wait_time\", set_alps_release_wait_time},\n\t{\"alps_release_jitter\", set_alps_release_jitter},\n\t{\"alps_release_timeout\", set_alps_release_timeout},\n\t{\"vnode_per_numa_node\", set_vnode_per_numa_node},\n\t{\"alps_confirm_switch_timeout\", set_alps_confirm_switch_timeout},\n#endif /* MOM_ALPS */\n\t{\"attach_allow\", set_attach_allow},\n\t{\"checkpoint_path\", set_checkpoint_path},\n\t{\"clienthost\", addclient},\n\t{\"configversion\", config_verscheck},\n\t{\"cputmult\", cputmult},\n\t{\"enforce\", set_enforcement},\n\t{\"ideal_load\", setidealload},\n\t{\"jobdir_root\", set_jobdir_root},\n\t{\"kbd_idle\", set_kbd_idle},\n\t{\"logevent\", setlogevent},\n\t{\"max_check_poll\", set_max_check_poll},\n\t{\"max_load\", setmaxload},\n\t{\"max_poll_downtime\", set_max_poll_downtime},\n\t{\"min_check_poll\", set_min_check_poll},\n\t{\"momname\", set_momname},\n#ifdef WIN32\n\t{\"nrun_factor\", set_nrun_factor},\n#endif\n\t{\"port\", set_momport},\n\t{\"prologalarm\", prologalarm},\n\t{\"sister_join_job_alarm\", set_joinjob_alarm},\n\t{\"job_launch_delay\", set_job_launch_delay},\n\t{\"restart_background\", set_restart_background},\n\t{\"restart_transmogrify\", set_restart_transmogrify},\n\t{\"restrict_user\", set_restrict_user},\n\t{\"restrict_user_exceptions\", set_restrict_user_exceptions},\n\t{\"restrict_user_maxsysid\", set_restrict_user_maxsys},\n\t{\"restricted\", restricted},\n\t{\"gen_nodefile_on_sister_mom\", set_gen_nodefile_on_sister_mom},\n#ifdef NAS /* localmod 
015 */\n\t/*\n\t * spool size limit\n\t */\n\t{\"spool_size\", set_spoolsize},\n#endif /* localmod 015 */\n\t{\"suspendsig\", set_suspend_signal},\n\t{\"tmpdir\", set_tmpdir},\n\t{\"vnodedef_additive\", set_vnode_additive},\n\t{\"usecp\", usecp},\n\t{\"wallmult\", wallmult},\n\t{\"reject_root_scripts\", set_reject_root_scripts},\n\t{\"report_hook_checksums\", set_report_hook_checksums},\n\t{NULL, NULL}};\n\nstatic struct specials addspecial[] = {\n\t{NULL, NULL}};\n\nvoid *job_attr_idx = NULL;\nchar *log_file = NULL;\nchar *path_log;\n#ifndef WIN32\nsigset_t allsigs;\n#endif\nchar *ret_string;\nint ret_size;\nstruct config *config_array = NULL;\nstruct config_list *config_list = NULL;\nint rm_errno;\nunsigned int reqnum = 0; /* the packet number */\nint port_care = 1;\t /* secure connecting ports */\nuid_t uid = 0;\t\t /* uid we are running with */\nint alarm_time = 10;\t /* time before alarm */\nint nice_val = 0;\t /* nice daemon by this much */\n\nchar **maskclient = NULL; /* wildcard connections */\nint mask_num = 0;\nint mask_max = 0;\nu_long localaddr = 0;\n\nchar extra_parm[] = \"extra parameter(s)\";\nchar no_parm[] = \"required parameter not found\";\n\nint cphosts_num = 0;\nstruct cphosts *pcphosts = 0;\nint enable_exechost2 = 0;\nstatic int config_file_specified = 0;\nstatic char config_file[_POSIX_PATH_MAX] = \"config\";\n\nstruct mom_action mom_action[(int) LastAction] = {\n\t{\"terminate\", 0, Default, NULL, NULL},\n\t{\"checkpoint\", 0, Default, NULL, NULL},\n\t{\"checkpoint_abort\", 0, Default, NULL, NULL},\n\t{\"restart\", 0, Default, NULL, NULL},\n\t{\"multinodebusy\", 0, Default, NULL, NULL}};\n\n/*\n **\tThese routines are in the \"dependent\" code.\n */\nextern void dep_initialize(void);\nextern void dep_cleanup(void);\n\n/* External Functions */\nextern void catch_child(int);\nextern void init_abort_jobs(int, pbs_list_head *);\nextern void scan_for_exiting(void);\n#ifdef NAS /* localmod 015 */\nextern int to_size(char *, struct size_value 
*);\n#endif /* localmod 015 */\n\nextern void cleanup_hooks_workdir(struct work_task *);\n\nextern char *getuname(void);\nextern int get_permission(char *perm);\nextern handler_ret_t check_interactive_service();\nextern void finish_loop(time_t waittime);\nextern void usage(char *prog);\n\n#ifndef WIN32\nextern void scan_for_terminated(void);\n\n/* Local public functions */\n\nextern void stop_me(int);\nextern void catch_USR2(int);\nextern void catch_hup(int);\nextern void toolong(int);\n#endif\n\nextern eventent *event_dup(eventent *ep, job *pjob, hnodent *pnode);\n\n/* Local private functions */\nstatic char *mk_dirs(char *);\nstatic void check_busy(double);\n\n/**\n * @brief\n *\tauth_handler - Handles additional authentication required for server TCP connections\n *\n * @param[in] conn - pointer to connection struct\n *\n * @return int\n * @retval -1 - Authentication failed\n * @retval 0 - Connection was authenticated successfully\n * @retval 1 - Connection did not need additional authentication\n *\n */\nstatic int\nauth_handler(conn_t *conn)\n{\n\tif (conn->cn_authen & PBS_NET_CONN_PREVENT_IP_SPOOFING) {\n\t\tchar ebuf[LOG_BUF_SIZE] = \"\";\n\t\tchar port_str[6];\n\n\t\tsnprintf(port_str, sizeof(port_str), \"%d\", conn->cn_port);\n\n\t\t/* The received cipher should include the remote's port used to connect\n\t\t * with this MoM, to prevent IP spoofing and capture-replay attacks.\n\t\t */\n\t\tif (server_cipher_auth(conn->cn_sock, port_str, ebuf, sizeof(ebuf))) {\n\t\t\tlog_err(errno, __func__, ebuf);\n\t\t\treturn -1;\n\t\t}\n\t\tconn->cn_authen &= ~PBS_NET_CONN_PREVENT_IP_SPOOFING;\n\t\treturn 0;\n\t} else {\n\t\treturn 1;\n\t}\n}\n\n/**\n * @brief\n *\tlogs an error message; placeholder RM entry that should never be invoked\n *\n * @param[in] attrib - pointer to rm_attribute structure\n *\n * @return NULL\n *\n */\nchar *\nnullproc(struct rm_attribute *attrib)\n{\n\n\tlog_err(-1, __func__, \"should not be called\");\n\treturn NULL;\n}\n\nchar *pbs_mach = NULL;\n\n/**\n * @brief\n *\tgets machine architecture 
else logs error msg\n *\n * @param[in] attrib - pointer to rm_attribute structure\n *\n * @return string\n * @retval PBS_MACH\tSuccess\n * @retval NULL\t\tFailure\n *\n */\nchar *\narch(struct rm_attribute *attrib)\n{\n\tif (attrib) {\n\t\tlog_err(-1, __func__, extra_parm);\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n\tif (pbs_mach != NULL)\n\t\treturn pbs_mach;\n\telse\n\t\treturn PBS_MACH;\n}\n\n/**\n * @brief\n *\trequests username, else logs error msg on failure\n *\n * @param[in] attrib - pointer to rm_attribute structure\n *\n * @return string\n * @retval username\tSuccess\n * @retval NULL\t\tFailure\n *\n */\nstatic char *\nrequname(struct rm_attribute *attrib)\n{\n\tchar *cp;\n\n\tif (attrib) {\n\t\tlog_err(-1, __func__, extra_parm);\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n\tcp = getuname();\n\treturn cp;\n}\n\n/**\n * @brief\n *\tchecks whether the given user is valid\n *\n * @param[in] attrib - pointer to rm_attribute structure\n *\n * @return\tstring\n * @retval\tyes\tSuccess\n * @retval\tno\tFailure\n *\n */\nstatic char *\nvaliduser(struct rm_attribute *attrib)\n{\n\tstruct passwd *p;\n\n\tif (attrib == NULL || attrib->a_value == NULL) {\n\t\tlog_err(-1, __func__, no_parm);\n\t\trm_errno = RM_ERR_NOPARAM;\n\t\treturn NULL;\n\t}\n\n\tp = getpwnam(attrib->a_value);\n\tif (p) {\n\t\treturn \"yes\";\n\t} else {\n\t\treturn \"no\";\n\t}\n}\n\n/**\n * @brief\n *\treturns the current load average on the node\n *\n * @param[in] attrib - pointer to rm_attribute structure\n *\n * @return   string\n * @retval   loadvalue   Success\n * @retval   NULL        Failure\n *\n */\nchar *\nloadave(struct rm_attribute *attrib)\n{\n\tstatic char ret_string[20];\n\tdouble la;\n\n\tif (attrib) {\n\t\tlog_err(-1, __func__, extra_parm);\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n\n\tif (get_la(&la) != 0) {\n\t\trm_errno = RM_ERR_SYSTEM;\n\t\treturn NULL;\n\t}\n\n\tsprintf(ret_string, \"%.2f\", la);\n\treturn ret_string;\n}\n\n/**\n * @brief\n 
*\tOutput the various resource lists.\n *\n * @param[in] attrib - pointer to rm_attribute structure\n *\n * @return  string\n * @retval  log_buffer  Success\n * @retval  NULL\tFailure\n *\n */\nchar *\nreslist(struct rm_attribute *attrib)\n{\n\tstruct config *cp;\n\textern struct config common_config[];\n\textern struct config standard_config[];\n\textern struct config dependent_config[];\n\tsize_t len;\n\n\tif (attrib) {\n\t\tlog_err(-1, __func__, extra_parm);\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n\n\tlog_buffer[0] = '\\0';\n\n\tfor (cp = common_config; cp->c_name; cp++) {\n\t\tstrcat(log_buffer, cp->c_name);\n\t\tstrcat(log_buffer, \" \");\n\t}\n\n\tfor (cp = standard_config; cp->c_name; cp++) {\n\t\tstrcat(log_buffer, cp->c_name);\n\t\tstrcat(log_buffer, \" \");\n\t}\n\n\tfor (cp = dependent_config; cp->c_name; cp++) {\n\t\tstrcat(log_buffer, cp->c_name);\n\t\tstrcat(log_buffer, \" \");\n\t}\n\n\tif (config_array) {\n\t\tfor (cp = config_array; cp->c_name; cp++) {\n\t\t\tstrcat(log_buffer, cp->c_name);\n\t\t\tstrcat(log_buffer, \" \");\n\t\t}\n\t}\n\n\tlen = strlen(log_buffer);\n\tif (len > 0) {\n\t\tlog_buffer[len - 1] = '\\0';\n\t\treturn log_buffer;\n\t} else\n\t\treturn NULL;\n}\n\nstruct config common_config[] = {\n\t{\"arch\", {arch}},\n\t{\"uname\", {requname}},\n\t{\"validuser\", {validuser}},\n\t{\"reslist\", {reslist}},\n\t{NULL, {nullproc}}};\n\n/**\n * @brief\n *\tSearch the array of resources read from the config files.\n *\n * @param[in] where - pointer to config structure\n * @param[in] what  - char pointer holding what to search\n *\n * @return\tstructure handle\n * @retval\tpointer to config structure\tSuccess\n * @retval      NULL\t\t\t\tFailure\n *\n */\nstruct config *\nrm_search(struct config *where, char *what)\n{\n\tstruct config *cp;\n\n\tif (where == NULL || what == NULL)\n\t\treturn NULL;\n\n\tfor (cp = where; cp->c_name; cp++) {\n\t\tif (strcmp(cp->c_name, what) == 0)\n\t\t\tbreak;\n\t}\n\treturn (cp->c_name ? 
cp : NULL);\n}\n\n/**\n * @brief\n *\tSearch the various resource lists.\n *\n * @param[in] res - string holding resource\n * @param[in] attr - pointer to rm_attribute structure\n *\n * @return\tstring\n * @retval\tstructure handler to config\tSuccess\n * @retval      NULL\t\t\t\tFailure\n *\n */\nchar *\ndependent(char *res, struct rm_attribute *attr)\n{\n\tstruct config *ap;\n\textern struct config standard_config[];\n\textern struct config dependent_config[];\n\n\tap = rm_search(common_config, res);\n\tif (ap)\n\t\treturn (ap->c_u.c_func(attr));\n\n\tap = rm_search(standard_config, res);\n\tif (ap)\n\t\treturn (ap->c_u.c_func(attr));\n\n\tap = rm_search(dependent_config, res);\n\tif (ap)\n\t\treturn (ap->c_u.c_func(attr));\n\n\trm_errno = RM_ERR_UNKNOWN;\n\treturn NULL;\n}\n\n/**\n * @brief\n *\twrapper function to dep_cleanup\n *\n */\nvoid\ncleanup(void)\n{\n\tdep_cleanup();\n}\n\n/**\n * @brief\n *\tClean up after a signal.\n *\n * @param[in] sig - signal number\n *\n * @return Void\n *\n */\nvoid\ndie(int sig)\n{\n\tif (sig > 0) {\n\t\tsprintf(log_buffer, \"caught signal %d\", sig);\n\t\tlog_event(PBSEVENT_SYSTEM, 0, LOG_NOTICE, __func__, log_buffer);\n\t} else\n\t\tlog_event(PBSEVENT_SYSTEM, 0, LOG_ALERT, __func__,\n\t\t\t  \"abnormal termination\");\n\n\tcleanup();\n\tpbs_idx_destroy(jobs_idx);\n\tunload_auths();\n\tlog_close(1);\n#ifdef WIN32\n\tExitThread(1);\n#else\n\texit(1);\n#endif\n}\n\n/**\n * @brief\n *\tPerforms initialization steps like loading pbs.conf values,\n *\tsetting core limit size, running platform-specific initializations\n *\t(e.g. 
topology data gathering),\n *\trunning the exechost_startup hook, and\n *\tchecking that there are no bad combinations of sharing values\n *\tacross the vnodes.\n *\n * @return void\n *\n */\nvoid\ninitialize(void)\n{\n\tunsigned int i;\n\tvoid *temp_idx;\n\tchar hook_msg[HOOK_MSG_SIZE + 1];\n\tchar hook_buf[HOOK_BUF_SIZE + 1];\n\tmom_hook_input_t hook_input;\n\tmom_hook_output_t hook_output;\n\tint hook_errcode = 0;\n\tint hook_rc = 0;\n\thook *last_phook = NULL;\n\tunsigned int hook_fail_action = 0;\n\tint ret;\n\tchar none[] = \"<unset>\";\n\tenum vnode_sharing hostval;\n\n\t/* set limits that can be modified by the Admin */\n#ifndef WIN32 /* ---- UNIX ------------------------------------------*/\n#ifdef RLIMIT_CORE\n\tint char_in_cname = 0;\n\n\t(void) pbs_loadconf(0);\n\tset_log_conf(pbs_conf.pbs_leaf_name, pbs_conf.pbs_mom_node_name,\n\t\t     pbs_conf.locallog, pbs_conf.syslogfac,\n\t\t     pbs_conf.syslogsvr, pbs_conf.pbs_log_highres_timestamp);\n\n\tif (pbs_conf.pbs_core_limit) {\n\t\tchar *pc = pbs_conf.pbs_core_limit;\n\t\twhile (*pc != '\\0') {\n\t\t\tif (!isdigit(*pc)) {\n\t\t\t\t/* there is a character in core limit */\n\t\t\t\tchar_in_cname = 1;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tpc++;\n\t\t}\n\t}\n\n#if defined(RLIM64_INFINITY)\n\tif (pbs_conf.pbs_core_limit) {\n\t\tstruct rlimit64 corelimit;\n\t\tcorelimit.rlim_max = RLIM64_INFINITY;\n\t\tif (strcmp(\"unlimited\", pbs_conf.pbs_core_limit) == 0)\n\t\t\tcorelimit.rlim_cur = RLIM64_INFINITY;\n\t\telse if (char_in_cname == 1) {\n\t\t\tlog_record(PBSEVENT_ERROR, PBS_EVENTCLASS_NODE, LOG_WARNING,\n\t\t\t\t   __func__, msg_corelimit);\n\t\t\tcorelimit.rlim_cur = RLIM64_INFINITY;\n\t\t} else\n\t\t\tcorelimit.rlim_cur =\n\t\t\t\t(rlim64_t) atol(pbs_conf.pbs_core_limit);\n\t\t/* get system core limit */\n\t\t(void) getrlimit64(RLIMIT_CORE, &orig_core_limit);\n\n\t\t(void) setrlimit64(RLIMIT_CORE, &corelimit);\n\t}\n\n#else  /* set rlimit 32 bit */\n\n\tif (pbs_conf.pbs_core_limit) {\n\t\tstruct rlimit 
corelimit;\n\t\tcorelimit.rlim_max = RLIM_INFINITY;\n\t\tif (strcmp(\"unlimited\", pbs_conf.pbs_core_limit) == 0)\n\t\t\tcorelimit.rlim_cur = RLIM_INFINITY;\n\t\telse if (char_in_cname == 1) {\n\t\t\tlog_record(PBSEVENT_ERROR, PBS_EVENTCLASS_NODE, LOG_WARNING,\n\t\t\t\t   __func__, msg_corelimit);\n\t\t\tcorelimit.rlim_cur = RLIM_INFINITY;\n\t\t} else\n\t\t\tcorelimit.rlim_cur =\n\t\t\t\t(rlim_t) atol(pbs_conf.pbs_core_limit);\n\n\t\t/* get system core limit */\n\t\t(void) getrlimit(RLIMIT_CORE, &orig_core_limit);\n\n\t\t(void) setrlimit(RLIMIT_CORE, &corelimit);\n\t}\n#endif /* RLIM64_INFINITY */\n\n#endif /* RLIMIT_CORE */\n#endif /* !WIN32 ---------------------------------------------------------- */\n\n\tnum_pcpus = num_acpus = num_oscpus = 0;\n\tdep_initialize();\n\tif (num_oscpus == 0)\n\t\tnum_oscpus = num_pcpus;\n\tsprintf(log_buffer, \"pcpus=%d, OS reports %d cpu(s)\",\n\t\tnum_pcpus, num_oscpus);\n\tlog_event(PBSEVENT_SYSTEM, 0, LOG_NOTICE, \"initialize\", log_buffer);\n\n\tif (vnlp_from_hook == NULL) {\n\t\tif (vnl_alloc(&vnlp_from_hook) == NULL) {\n\t\t\tlog_err(PBSE_SYSTEM, __func__, \"vnl_alloc failed\");\n\t\t\treturn;\n\t\t}\n\t\tvnlp_from_hook->vnl_modtime = time(NULL);\n\t}\n\n\tmom_hook_input_init(&hook_input);\n\thook_input.vnl = (vnl_t *) vnlp;\n\n\tmom_hook_output_init(&hook_output);\n\thook_output.reject_errcode = &hook_errcode;\n\thook_output.last_phook = &last_phook;\n\thook_output.fail_action = &hook_fail_action;\n\thook_output.vnl = (vnl_t *) vnlp_from_hook;\n\n\tif (setup_resc(1) != 0) {\n\t\t/* log_buffer set in setup_resc */\n\t\tlog_err(-1, \"setup_resc\", \"warning: failed to setup resourcdef\");\n\t}\n\n\tswitch ((hook_rc = mom_process_hooks(HOOK_EVENT_EXECHOST_STARTUP,\n\t\t\t\t\t     PBS_MOM_SERVICE_NAME,\n\t\t\t\t\t     mom_host, &hook_input, &hook_output, hook_msg,\n\t\t\t\t\t     sizeof(hook_msg), 0))) {\n\n\t\tcase 2: /* no hook script executed - go ahead and accept event */\n\t\t\tbreak;\n\t\tdefault:\n\t\t\t/* a value of '0' 
means explicit reject encountered, and '1' means explicit accept. */\n\t\t\tif ((hook_rc != 0) && (hook_rc != 1)) {\n\t\t\t\t/* we've hit an internal error (malloc error, full disk, etc...), so */\n\t\t\t\t/* treat this now like a  hook error so hook fail_action  */\n\t\t\t\t/* will be consulted.  */\n\t\t\t\t/* Before, behavior of an internal error was to ignore it! */\n\t\t\t\thook_errcode = PBSE_HOOKERROR;\n\t\t\t}\n\t\t\tif (hook_errcode == PBSE_HOOKERROR) { /* error */\n\t\t\t\tif ((last_phook != NULL) &&\n\t\t\t\t    (last_phook->fail_action &\n\t\t\t\t     HOOK_FAIL_ACTION_OFFLINE_VNODES)) {\n\t\t\t\t\tsnprintf(hook_buf,\n\t\t\t\t\t\t HOOK_BUF_SIZE + 1,\n\t\t\t\t\t\t \"1,%s\", last_phook->hook_name);\n\n\t\t\t\t\tret = vn_addvnr(vnlp_from_hook,\n\t\t\t\t\t\t\tmom_short_name,\n\t\t\t\t\t\t\tVNATTR_HOOK_OFFLINE_VNODES,\n\t\t\t\t\t\t\thook_buf, 0, 0, NULL);\n\n\t\t\t\t\tif (ret != 0) {\n\t\t\t\t\t\tsnprintf(log_buffer,\n\t\t\t\t\t\t\t sizeof(log_buffer),\n\t\t\t\t\t\t\t \"Failed to add to \"\n\t\t\t\t\t\t\t \"vnlp_from_hook: %s=%s\",\n\t\t\t\t\t\t\t VNATTR_HOOK_OFFLINE_VNODES,\n\t\t\t\t\t\t\t hook_buf);\n\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG2,\n\t\t\t\t\t\t\t  PBS_EVENTCLASS_HOOK, LOG_INFO,\n\t\t\t\t\t\t\t  last_phook->hook_name,\n\t\t\t\t\t\t\t  log_buffer);\n\t\t\t\t\t}\n\t\t\t\t\tvnlp_from_hook->vnl_modtime = time(NULL);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\t} else if (hook_fail_action & HOOK_FAIL_ACTION_CLEAR_VNODES) {\n\t\t\t\t/* no hook error */\n\t\t\t\tvnl_t *vnlp_tmp = NULL;\n\n\t\t\t\t/* of vnlp_from_hook */\n\t\t\t\tif (vnl_alloc(&vnlp_tmp) == NULL) {\n\t\t\t\t\tlog_err(PBSE_SYSTEM, __func__,\n\t\t\t\t\t\t\"vnl_alloc failed\");\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tret = vn_addvnr(vnlp_tmp, mom_short_name,\n\t\t\t\t\t\tVNATTR_HOOK_OFFLINE_VNODES, \"0\", 0,\n\t\t\t\t\t\t0, NULL);\n\t\t\t\tif (ret != 0) {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"Failed to add to \"\n\t\t\t\t\t\t \"vnlp_tmp: %s=%s\",\n\t\t\t\t\t\t 
VNATTR_HOOK_OFFLINE_VNODES,\n\t\t\t\t\t\t hook_buf);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG2,\n\t\t\t\t\t\t  PBS_EVENTCLASS_HOOK, LOG_INFO,\n\t\t\t\t\t\t  last_phook->hook_name,\n\t\t\t\t\t\t  log_buffer);\n\t\t\t\t\tvnl_free(vnlp_tmp);\n\t\t\t\t\tvnlp_tmp = NULL;\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tif (vnlp_from_hook->vnl_used > 0) {\n\t\t\t\t\t/* the clear offline_vnodes action ,*/\n\t\t\t\t\t/* as stored in 'vnlp_tmp' */\n\t\t\t\t\t/* must appear before other vn */\n\t\t\t\t\t/* actions (currently in\n\t\t\t\t\t * vnlp_from_hook),  since it would be */\n\t\t\t\t\t/* clearing the states of all vnodes */\n\t\t\t\t\t/* and their comments. vnlp_from_hook */\n\t\t\t\t\t/* may contain vnode state and */\n\t\t\t\t\t/* comment changes, and we would not */\n\t\t\t\t\t/* want to override that. */\n\t\t\t\t\tvn_merge(vnlp_tmp, vnlp_from_hook,\n\t\t\t\t\t\t NULL);\n\t\t\t\t\tvnl_free(vnlp_from_hook);\n\t\t\t\t\tvnlp_from_hook = vnlp_tmp;\n\t\t\t\t} else {\n\t\t\t\t\tvn_merge(vnlp_from_hook, vnlp_tmp,\n\t\t\t\t\t\t NULL);\n\t\t\t\t\tvnl_free(vnlp_tmp);\n\t\t\t\t}\n\t\t\t\tvnlp_tmp = NULL;\n\t\t\t}\n\t}\n\n\tmom_vnlp_report(vnlp_from_hook, \"vnlp_from_hook\");\n\n\tif (vnlp_from_hook->vnl_used == 0) {\n\t\tvnl_free(vnlp_from_hook);\n\t\tvnlp_from_hook = NULL;\n\t}\n\n\tif (vnlp == NULL)\n\t\treturn;\n\n\t/*\n\t *\tCheck that there are no bad combinations of sharing values\n\t *\tacross the vnodes.\n\t */\n\tif ((temp_idx = pbs_idx_create(0, 0)) == NULL) {\n\t\tlog_err(-1, __func__, \"Failed to create index for checking sharing value on vnodes\");\n\t\tdie(0);\n\t}\n\n\tfor (i = 0; i < vnlp->vnl_used; i++) {\n\t\tvnal_t *vnrlp = VNL_NODENUM(vnlp, i);\n\t\tchar *host = attr_exist(vnrlp, \"resources_available.host\");\n\t\tchar *share;\n\t\tchar *exclhost = none;\n\t\tchar *exclhost_frmidx = none;\n\t\tenum vnode_sharing shareval;\n\n\t\tif (host == NULL)\n\t\t\t/* mom_host and mom_short_name are different!! 
*/\n\t\t\t/* use mom short name by default */\n\t\t\thost = mom_short_name;\n\n\t\tshare = attr_exist(vnrlp, \"sharing\");\n\t\tshareval = str_to_vnode_sharing(share);\n\t\tif (shareval != VNS_UNSET)\n\t\t\texclhost = vnode_sharing_to_str(shareval);\n\n\t\t/* search for host */\n\t\tif (pbs_idx_find(temp_idx, (void **) &host, (void **) &exclhost_frmidx, NULL) != PBS_IDX_RET_OK) {\n\t\t\tif (pbs_idx_insert(temp_idx, host, (void *) exclhost) != PBS_IDX_RET_OK) {\n\t\t\t\tlog_errf(errno, __func__, \"Failed to add exechost = %s for host %s in index\", exclhost, host);\n\t\t\t\tdie(0);\n\t\t\t}\n\t\t\tcontinue;\n\t\t}\n\n\t\t/* the host exists, check if the saved value is the same */\n\t\tif (exclhost_frmidx == exclhost)\n\t\t\tcontinue;\n\n\t\t/* they are different, now check if it is a bad combo */\n\t\thostval = str_to_vnode_sharing(exclhost_frmidx);\n\t\tif (hostval == VNS_DFLT_EXCLHOST ||\n\t\t    hostval == VNS_FORCE_EXCLHOST ||\n\t\t    shareval == VNS_DFLT_EXCLHOST ||\n\t\t    shareval == VNS_FORCE_EXCLHOST) {\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"It is erroneous to mix sharing=%s \"\n\t\t\t\t\"for vnode %s with sharing=%s which \"\n\t\t\t\t\"is set for other vnodes on host %s\",\n\t\t\t\texclhost, vnrlp->vnal_id,\n\t\t\t\texclhost_frmidx, host);\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_NODE,\n\t\t\t\t  LOG_NOTICE, __func__, log_buffer);\n\t\t\tdie(0);\n\t\t}\n\t}\n\n\tpbs_idx_destroy(temp_idx);\n\n\tif (joinjob_alarm_time == -1)\n\t\tjoinjob_alarm_time = DEFAULT_JOINJOB_ALARM;\n\n\tif (job_launch_delay == -1)\n\t\tjob_launch_delay = DEFAULT_JOB_LAUNCH_DELAY;\n\n\ttime_delta_hellosvr(MOM_DELTA_RESET);\n}\n\n/**\n * @brief\n *\tCheck for fatal memory allocation error.\n *\n * @param[in]  buf - reallocated memory\n *\n * @return Void\n *\n */\nvoid\nmemcheck(char *buf)\n{\n\tif (buf)\n\t\treturn;\n\tlog_err(-1, \"memcheck\", \"memory allocation failed\");\n\tdie(0);\n}\n\n/**\n * @brief\n *\tCheck the ret_string buffer to make sure that there is\n *\tenough 
room starting at *spot to hold len characters more.\n *\tIf not, realloc the buffer and make *spot point to\n *\tthe corresponding place that it used to point to in\n *\tthe old buffer.\n *\n * @param[in] spot - buffer\n * @param[in] len - buffer len\n *\n * @return Void\n *\n */\nvoid\ncheckret(char **spot, int len)\n{\n\tchar *hold;\n\n\tif ((*spot - ret_string) < (ret_size - len))\n\t\treturn;\n\n\tret_size += len * 2; /* new buf size */\n\tsprintf(log_buffer, \"size increased to %d\", ret_size);\n\tlog_event(PBSEVENT_SYSTEM, 0, LOG_DEBUG, __func__, log_buffer);\n\thold = realloc(ret_string, ret_size); /* new buf */\n\tmemcheck(hold);\n\t*spot = *spot - ret_string + hold; /* new spot in buf */\n\tret_string = hold;\n}\n\n/**\n * @brief\n *\tskipwhite - process the string to make it blank free\n *\n * @param[in] str - string to be processed\n *\n * @return\tstring\n * @retval\tstring with no blanks\n *\n */\nchar *\nskipwhite(char *str)\n{\n\tfor (; *str; str++) {\n\t\tif (!isspace(*str))\n\t\t\tbreak;\n\t}\n\treturn str;\n}\n\n/**\n * @brief\n *\tcopies string in str to string tok\n *\n * @param[in] str - string to be copied\n * @param[in] tok - destination string to be copied to\n * @param[in] len - size of string str\n *\n * @return\tstring\n * @retval\tdestination string \"tok\"\n *\n */\nchar *\ntokcpy(char *str, char *tok, size_t len)\n{\n\tsize_t i;\n\n\tfor (i = 0; *str && (i < len); str++, tok++, i++) {\n\t\tif (!isalnum(*str) && *str != ':' && *str != '_' && check_spl_ch(*str))\n\t\t\tbreak;\n\t\t*tok = *str;\n\t}\n\t*tok = '\\0';\n\treturn str;\n}\n\n#define TOKCPY(a, b) tokcpy(a, b, sizeof(b))\n\n/**\n * @brief\n *\tremoves new line from str\n *\n * @param[in] str - string to be processed\n *\n * @return Void\n *\n */\nvoid\nrmnl(char *str)\n{\n\tint i;\n\n\ti = strlen(str);\n\twhile (--i) {\n\t\tif ((*(str + i) != '\\n') && !isspace((int) *(str + i)))\n\t\t\tbreak;\n\t\t*(str + i) = '\\0';\n\t}\n}\n\n/**\n * @brief\n *\tSimilar to tokcpy() with only 
whitespace as the delimiting characters\n *\n * @param[in] str - string to be copied\n * @param[in] tok - destination string to be copied to\n * @param[in] len - size of string str\n *\n * @return string\n * @retval processed destination string \"tok\"\n *\n */\nchar *\nwtokcpy(char *str, char *tok, int len)\n{\n\tint i;\n\tfor (i = 0; *str && (i < len); str++, tok++, i++) {\n\t\tif (isspace((int) *str))\n\t\t\tbreak;\n\t\t*tok = *str;\n\t}\n\t*tok = '\\0';\n\treturn str;\n}\n\n/**\n * @brief\n *\tmalloc memory and make a copy of the path input string that does not\n *\tcontain any double quote marks\n *\n * @param[in] path - char pointer holding input path\n *\n * @return string\n * @retval processed path string\n *\n */\nchar *\nremove_quotes(char *path)\n{\n\tchar *dp, *dup;\n\n\tif (!path || !(dup = strdup(path)))\n\t\treturn NULL;\n\telse\n\t\tdp = dup;\n\n\tdo {\n\t\tif (*path != '\"')\n\t\t\t*dp++ = *path;\n\t} while (*path++);\n\n\treturn dup;\n}\n\n#ifdef WIN32\nextern void stop_pbs_interactive();\n#endif\n\n/**\n * @brief\n *\tadd_mom_action - Parse mom action command from mom config file and add\n *\tinto mom_action array\n *\n * @param[in]\tstr -\tline from mom config file which contain action\n *\t\t\tcommand for mom\n *\n * @return\thandler_ret_t\n * @retval\tHANDLER_FAIL\t- on fail\n * @retval\tHANDLER_SUCCESS\t- on success\n *\n */\nstatic handler_ret_t\nadd_mom_action(char *str)\n{\n\tchar arg[_POSIX_PATH_MAX + 1];\n\tint i;\n\tint count;\n\tchar *pc;\n\tint na;\n\tchar **pargs;\n\tchar *scp;\n\tint tout;\n\tint white;\n\n\tif (*str == '\\0')\n\t\treturn HANDLER_FAIL;\n\n\t/* first token is name of event */\n\tstr = TOKCPY(str, arg);\n\tstr = skipwhite(str);\n\tif (*str == '\\0')\n\t\treturn HANDLER_FAIL;\n\tfor (na = 0; na < (int) LastAction; na++) {\n\t\tif (strcmp(arg, mom_action[na].ma_name) == 0) {\n\t\t\t/* have a valid event name */\n\t\t\tbreak;\n\t\t}\n\t}\n\tif (na >= (int) LastAction)\n\t\treturn HANDLER_FAIL;\n\n\t/* next should come 
the time out value */\n\tstr = TOKCPY(str, arg);\n\tstr = skipwhite(str);\n\tif (*str == '\\0')\n\t\treturn HANDLER_FAIL;\n\tif (!isdigit((int) *arg))\n\t\treturn HANDLER_FAIL;\n\ttout = atoi(arg);\n\n\t/* next is the action verb: a script or some keyword */\n\tif (*str == '!') {\n\n\t\t/* script specified */\n\t\tstr = process_string(++str, arg, _POSIX_PATH_MAX);\n\t\tstr = skipwhite(str);\n\n\t\tif (is_full_path(arg)) {\n\n\t\t\tscp = malloc(strlen(arg) + 1);\n\t\t\tif (scp == NULL) {\n\t\t\t\treturn HANDLER_FAIL;\n\t\t\t}\n\t\t\tstrcpy(scp, arg);\n\t\t} else {\n\t\t\t/* convert relative path to an absolute */\n\t\t\t/* path based on PBS_HOME/mom_priv      */\n\n\t\t\tscp = malloc(strlen(arg) + strlen(mom_home) + 2);\n\t\t\tif (scp == NULL) {\n\t\t\t\treturn HANDLER_FAIL;\n\t\t\t}\n\t\t\tstrcpy(scp, mom_home);\n\t\t\tstrcat(scp, \"/\");\n\t\t\tstrcat(scp, arg);\n\t\t}\n\n\t\t/* now count up the number of args */\n\n\t\twhite = -1;\n\t\tcount = 0;\n\t\tpargs = 0;\n\t\tpc = str;\n\t\twhile (*pc) {\n\t\t\tif (isspace((int) *pc)) {\n\t\t\t\tif (white != 1)\n\t\t\t\t\twhite = 1;\n\t\t\t} else {\n\t\t\t\tif (white != 0) {\n\t\t\t\t\twhite = 0;\n\t\t\t\t\tcount++;\n\t\t\t\t}\n\t\t\t}\n\t\t\tpc++;\n\t\t}\n\t\tpargs = (char **) malloc((count + 1) * sizeof(char *));\n\t\tif (pargs == NULL) {\n\t\t\tfree(scp);\n\t\t\treturn HANDLER_FAIL;\n\t\t}\n\n\t\t/* now we know how many and have space, copy each arg */\n\n\t\tfor (i = 0; i < count; i++) {\n\t\t\tstr = wtokcpy(str, arg, _POSIX_PATH_MAX);\n\t\t\tstr = skipwhite(str);\n\t\t\tif ((*(pargs + i) = strdup(arg)) == NULL) {\n\t\t\t\tfor (; i >= 0; i--) {\n\t\t\t\t\tfree(*(pargs + i));\n\t\t\t\t}\n\t\t\t\tfree(scp);\n\t\t\t\tfree(pargs);\n\t\t\t\treturn HANDLER_FAIL;\n\t\t\t}\n\t\t}\n\t\t*(pargs + i) = NULL;\n\n\t\t/* now we can set the action array member */\n\n\t\tmom_action[na].ma_verb = Script;\n\t\tmom_action[na].ma_timeout = tout;\n\t\tmom_action[na].ma_script = scp;\n\t\tmom_action[na].ma_args = pargs;\n\t\tgoto 
done;\n\t}\n\n\t/* not a script, must be a recognized verb */\n\n\tif (strcmp(arg, \"requeue\") == 0) {\n\n\t\t/* Requeue Verb */\n\n\t\tmom_action[na].ma_verb = Requeue;\n\t\tmom_action[na].ma_timeout = tout;\n\t\tmom_action[na].ma_script = NULL;\n\t\tmom_action[na].ma_args = NULL;\n\n\t} else\n\t\treturn HANDLER_FAIL; /* error */\n\ndone:\n\tif (mom_action[na].ma_verb == Script)\n\t\tsprintf(log_buffer, \"%s: %s\", mom_action[na].ma_name,\n\t\t\tmom_action[na].ma_script);\n\telse\n\t\tsprintf(log_buffer, \"%s: %s\", mom_action[na].ma_name, arg);\n\n\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t  \"action\", log_buffer);\n\treturn HANDLER_SUCCESS;\n}\n\n/**\n * @brief\n *\tadds client by name.\n *\n * @param[in] name - name of host\n *\n * @return\tu_long\n * @retval\t0\t\tFailure\n * @retval\tipaddr of host\tSuccess\n *\n */\nstatic u_long\naddclient_byname(char *name)\n{\n\tstruct hostent *host;\n\tstruct in_addr saddr;\n\tu_long ipaddr = 0;\n\tint i;\n\n\tif ((host = gethostbyname(name)) == NULL) {\n\t\tsprintf(log_buffer, \"host %s not found\", name);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\treturn 0;\n\t}\n\n\tfor (i = 0; host->h_addr_list[i]; i++) {\n\t\tmemcpy((char *) &saddr, host->h_addr_list[i], host->h_length);\n\t\tipaddr = ntohl(saddr.s_addr);\n\t\taddrinsert(ipaddr);\n\t}\n\treturn ipaddr;\n}\n\n/**\n * @brief\n *\twrapper func for addclient_byname.\n *\n * @param[in] name - name of host\n *\n * @return\thandler_ret_t (return value)\n * @retval      HANDLER_FAIL(0)\t\t\tFailure\n * @retval\tHANDLER_SUCCESS(1)\t\tSuccess\n *\n */\n\nstatic handler_ret_t\naddclient(char *name)\n{\n\tif (addclient_byname(name) == 0)\n\t\treturn HANDLER_FAIL;\n\telse\n\t\treturn HANDLER_SUCCESS;\n}\n\n/**\n * @brief\n *\tsets the log event\n *\n * @param[in] value - log value\n *\n * @return      handler_ret_t (return value)\n * @retval      HANDLER_FAIL(0)                 Failure\n * @retval      HANDLER_SUCCESS(1)              Success\n *\n 
*/\n\nstatic handler_ret_t\nsetlogevent(char *value)\n{\n\tchar *bad;\n\n\t*log_event_mask = strtol(value, &bad, 0);\n\ttpp_set_logmask(*log_event_mask);\n\tif ((*bad == '\\0') || isspace((int) *bad))\n\t\treturn HANDLER_SUCCESS;\n\telse\n\t\treturn HANDLER_FAIL;\n}\n\n/**\n * @brief\n *\tSet the configuration flag that defines whether the hook files/scripts\n *\tor job scripts to be run under root are rejected by mom.\n *\n * @param[in] value - boolean value string\n *\n * @retval 0 failure\n * @retval 1 success\n *\n */\nstatic handler_ret_t\nset_reject_root_scripts(char *value)\n{\n\treturn (set_boolean(__func__, value, &reject_root_scripts));\n}\n\n/**\n * @brief\n *\tSet the configuration flag that tells the mom to send the checksums\n *\tof the hooks it knows about.\n *\n * @param[in] value - boolean value string\n *\n * @retval 0 failure\n * @retval 1 success\n *\n */\nstatic handler_ret_t\nset_report_hook_checksums(char *value)\n{\n\treturn (set_boolean(__func__, value, &report_hook_checksums));\n}\n\n/**\n * @brief\n *\tadds a host to the list of restricted clients.\n *\n * @param[in] name - name of host\n *\n * @return\thandler_ret_t\n * @retval\tHANDLER_FAIL(0)\t\tFailure\n * @retval\tHANDLER_SUCCESS\t\tSuccess\n *\n */\n\nstatic handler_ret_t\nrestricted(char *name)\n{\n\tint i;\n\n\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_DEBUG, __func__, name);\n\tif (mask_max == 0) {\n\t\tmaskclient = (char **) calloc(4, sizeof(char *));\n\t\tif (maskclient == NULL)\n\t\t\treturn HANDLER_FAIL; /* error */\n\t\tmask_max = 4;\n\t}\n\tif ((maskclient[mask_num] = strdup(name)) == NULL) {\n\t\tfor (i = 0; i < mask_num; i++)\n\t\t\tfree(maskclient[i]);\n\t\tmask_num = 0;\n\t\treturn HANDLER_FAIL;\n\t}\n\tmask_num++;\n\n\tif (mask_num == mask_max) {\n\t\tchar **tmcl;\n\t\ttmcl = (char **) realloc(maskclient,\n\t\t\t\t\t 2 * mask_max * 
sizeof(char *));\n\t\tif (tmcl == NULL)\n\t\t\treturn HANDLER_FAIL; /* error */\n\t\tmaskclient = tmcl;\n\t\tmask_max *= 2;\n\t}\n\treturn HANDLER_SUCCESS;\n}\n\n/**\n * @brief\n *\tsets the cputfactor value\n *\n * @param[in] value - value for cputfactor\n *\n * @return      handler_ret_t (return value)\n * @retval      HANDLER_FAIL(0)                 Failure\n * @retval      HANDLER_SUCCESS(1)              Success\n *\n */\n\nstatic handler_ret_t\ncputmult(char *value)\n{\n\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_DEBUG, __func__, value);\n\tif ((cputfactor = atof(value)) == 0.0)\n\t\treturn HANDLER_FAIL; /* error */\n\treturn HANDLER_SUCCESS;\n}\n\n/**\n * @brief\n *\tsets wallfactor\n *\n * @param[in] value - value for wallfactor\n *\n * @return      handler_ret_t\n * @retval      HANDLER_FAIL(0)         Failure\n * @retval      HANDLER_SUCCESS         Success\n *\n */\n\nstatic handler_ret_t\nwallmult(char *value)\n{\n\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_DEBUG, __func__, value);\n\tif ((wallfactor = atof(value)) == 0.0)\n\t\treturn HANDLER_FAIL; /* error */\n\treturn HANDLER_SUCCESS;\n}\n\n/**\n * @brief\n *      adds a $usecp host/path mapping\n *\n * @param[in] value - specification of the form \"hostspec:src_path dst_path\"\n *\n * @return      handler_ret_t\n * @retval      HANDLER_FAIL(0)         Failure\n * @retval      HANDLER_SUCCESS         Success\n *\n */\n\nstatic handler_ret_t\nusecp(char *value)\n{\n\tchar *pnxt;\n\tstatic int cphosts_max = 0;\n\n\tif (cphosts_max == 0) {\n\t\tpcphosts = malloc(2 * sizeof(struct cphosts));\n\t\tif (pcphosts == NULL) {\n\t\t\treturn HANDLER_FAIL;\n\t\t}\n\t\tcphosts_max = 2;\n\t} else if (cphosts_max == cphosts_num) {\n\n\t\tstruct cphosts *tmppcphosts;\n\t\ttmppcphosts = realloc(pcphosts,\n\t\t\t\t      (cphosts_max + 2) * sizeof(struct cphosts));\n\t\tif (tmppcphosts == NULL) {\n\t\t\t/* keep the old array valid; freeing it here would leave pcphosts dangling */\n\t\t\treturn HANDLER_FAIL;\n\t\t}\n\t\tpcphosts = tmppcphosts;\n\t\tcphosts_max += 2;\n\t}\n\tpnxt = strchr(value, (int) ':');\n\tif (pnxt == 
NULL) {\n\t\tsprintf(log_buffer, \"invalid host specification: %s\", value);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\treturn HANDLER_FAIL;\n\t}\n\t*pnxt++ = '\\0';\n#ifdef NAS /* localmod 009 */\n\t/* support $usecp rules that exclude a pattern, look for hostname\n\t * that starts with ! */\n\tif (value[0] == '!') {\n\t\t(pcphosts + cphosts_num)->cph_exclude = 1;\n\t\tvalue++;\n\t} else {\n\t\t(pcphosts + cphosts_num)->cph_exclude = 0;\n\t}\n#endif /* localmod 009 */\n\n\tif (((pcphosts + cphosts_num)->cph_hosts = strdup(value)) == NULL)\n\t\treturn HANDLER_FAIL;\n\tvalue = pnxt; /* now ptr to path */\n\twhile (*pnxt && !isspace((int) *pnxt))\n\t\tpnxt++;\n\tif (*pnxt == '\\0') {\n\t\t/* no destination path follows the source path */\n\t\tsprintf(log_buffer, \"invalid path specification: %s\", value);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\treturn HANDLER_FAIL;\n\t}\n\t*pnxt++ = '\\0';\n\tif (((pcphosts + cphosts_num)->cph_from = strdup(value)) == NULL)\n\t\treturn HANDLER_FAIL;\n\n\tif (((pcphosts + cphosts_num)->cph_to = strdup(skipwhite(pnxt))) == NULL)\n\t\treturn HANDLER_FAIL;\n\n\tfix_path((pcphosts + cphosts_num)->cph_from, 1);\n\tfix_path((pcphosts + cphosts_num)->cph_to, 1);\n\n\tcphosts_num++;\n\n\treturn HANDLER_SUCCESS;\n}\n\n/**\n * @brief\n *      sets prolog alarm\n *\n * @param[in] value - value for prolog alarm\n *\n * @return      handler_ret_t\n * @retval      HANDLER_FAIL(0)         Failure\n * @retval      HANDLER_SUCCESS         Success\n *\n */\n\nstatic handler_ret_t\nprologalarm(char *value)\n{\n\tint i;\n\textern unsigned int pe_alarm_time;\n\n\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t  \"prolog alarm\", value);\n\ti = atoi(value);\n\tif (i <= 0)\n\t\treturn HANDLER_FAIL; /* error */\n\tpe_alarm_time = (unsigned int) i;\n\treturn HANDLER_SUCCESS;\n}\n\n/**\n * @brief\n *\tHandler function for the $sister_join_job_alarm config option.\n *\n * @param[in]\tvalue - the input given in config file.\n *\n * @return handler_ret_t\n * @retval HANDLER_SUCCESS\n * @retval HANDLER_FAIL\n */\nstatic handler_ret_t\nset_joinjob_alarm(char *value)\n{\n\tlong i;\n\tchar *endp;\n\n\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, 
LOG_NOTICE,\n\t\t  \"sister_join_job_alarm\", value);\n\ti = strtol(value, &endp, 10);\n\tif ((*endp != '\\0') || (i <= 0) || (i == LONG_MIN) || (i == LONG_MAX))\n\t\treturn HANDLER_FAIL; /* error */\n\tjoinjob_alarm_time = i;\n\treturn HANDLER_SUCCESS;\n}\n\n/**\n * @brief\n *\tHandler function for the $job_launch_delay config option.\n *\n * @param[in]\tvalue - the input given in config file.\n *\n * @return handler_ret_t\n * @retval HANDLER_SUCCESS\n * @retval HANDLER_FAIL\n */\nstatic handler_ret_t\nset_job_launch_delay(char *value)\n{\n\tlong i;\n\tchar *endp;\n\n\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t  \"job_launch_delay\", value);\n\ti = strtol(value, &endp, 10);\n\n\tif ((*endp != '\\0') || (i <= 0) || (i == LONG_MIN) || (i == LONG_MAX))\n\t\treturn HANDLER_FAIL; /* error */\n\tjob_launch_delay = i;\n\treturn HANDLER_SUCCESS;\n}\n\n#ifdef WIN32\n\n/**\n * @brief\n *      sets nrun_factor\n *\n * @param[in] value - value for nrun_factor\n *\n * @return      handler_ret_t\n * @retval      HANDLER_FAIL(0)         Failure\n * @retval      HANDLER_SUCCESS         Success\n *\n */\n\nstatic handler_ret_t\nset_nrun_factor(char *value)\n{\n\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_DEBUG, __func__, value);\n\tif ((nrun_factor = atoi(value)) == 0)\n\t\treturn HANDLER_FAIL; /* error */\n\telse\n\t\treturn HANDLER_SUCCESS;\n}\n\n/**\n * @brief\n *      Terminates the shell escape process tree identified by the saved handle <shell_escape_handle>.\n *\n * @return\tVoid\n *\n */\n\nstatic HANDLE shell_escape_handle = INVALID_HANDLE_VALUE;\nstatic void\nshell_escape_timeout(void)\n{\n\tint ret = 0;\n\tint err_no = 0;\n\tchar log_buf[LOG_BUF_SIZE] = \"\";\n\tif (shell_escape_handle != INVALID_HANDLE_VALUE) {\n\t\tret = processtree_op_by_handle(shell_escape_handle, TERMINATE, 13); /* Terminated process would have exit code 13 */\n\t\tif (ret == -1) {\n\t\t\terr_no = GetLastError();\n\t\t\tsprintf(log_buf, 
\"could not terminate shell escape process tree, pid=%d\", GetProcessId(shell_escape_handle));\n\t\t\tlog_err(err_no, \"shell_escape_timeout\", log_buf);\n\t\t} else {\n\t\t\tSetLastError(0);\n\t\t}\n\t\tlog_err(-1, \"shell_escape_timeout\", \"terminate shell escape\");\n\t}\n}\n#endif /* WIN32 */\n\n/**\n * @brief\n * read and set values used in enforcement of cpupercent calculation and\n * other limit enforcement\n *\n * In the form of:\n * $enforce NAME VALUE\n *\n *\twhere \"NAME is     default\t   range of values\t*/\n\nint delta_percent_over = 50;\t  /* 0   <= I <= 100\t*/\ndouble delta_cpufactor = 1.05;\t  /* 1.0 <= D\t\t*/\ndouble delta_weightup = 0.4;\t  /* 0.0 <= D <= 1.0\t*/\ndouble delta_weightdown = 0.1;\t  /* 0.0 <= D <= 1.0\t*/\nint average_percent_over = 50;\t  /* 0   <= I <= 100\t*/\ndouble average_cpufactor = 1.025; /* 1.0 <= D\t\t*/\nint average_trialperiod = 120;\t  /* 0   <= I\t\t*/\n/*\n * or the form of:  $enforce [!]NAME\n * where NAME is:\n */\n/* cpuburst */\nint cpuburst = 0; /* 1 or 0\t\t*/\n/* cpuaverage */\nint cpuaverage = 0; /* 1 or 0\t\t*/\n/* mem */\nint enforce_mem = 0; /* on, value ignored\t*/\n/* complexmem\t*/\nint complex_mem_calc = 0; /* 1 or 0\t\t*/\n\nstatic handler_ret_t\nset_enforcement(char *str)\n{\n\tchar arg[80];\n\tint on = 1;\n\n\tif (!str)\n\t\treturn HANDLER_FAIL;\n\n\t/* if current token starts with !, then set value off and skip ! 
*/\n\tif (*str == '!') {\n\t\ton = 0; /* set off */\n\t\tstr++;\n\t}\n\n\tstr = TOKCPY(str, arg);\n\tstr = skipwhite(str);\n\n\tif (strcmp(arg, \"delta_percent_over\") == 0) {\n\t\tif (*str == '\\0')\n\t\t\treturn HANDLER_FAIL;\n\t\tdelta_percent_over = atoi(str);\n\t} else if (strcmp(arg, \"delta_cpufactor\") == 0) {\n\t\tif (*str == '\\0')\n\t\t\treturn HANDLER_FAIL;\n\t\tdelta_cpufactor = atof(str);\n\t} else if (strcmp(arg, \"delta_weightup\") == 0) {\n\t\tif (*str == '\\0')\n\t\t\treturn HANDLER_FAIL;\n\t\tdelta_weightup = atof(str);\n\t} else if (strcmp(arg, \"delta_weightdown\") == 0) {\n\t\tif (*str == '\\0')\n\t\t\treturn HANDLER_FAIL;\n\t\tdelta_weightdown = atof(str);\n\t} else if (strcmp(arg, \"average_percent_over\") == 0) {\n\t\tif (*str == '\\0')\n\t\t\treturn HANDLER_FAIL;\n\t\taverage_percent_over = atoi(str);\n\t} else if (strcmp(arg, \"average_cpufactor\") == 0) {\n\t\tif (*str == '\\0')\n\t\t\treturn HANDLER_FAIL;\n\t\taverage_cpufactor = atof(str);\n\t} else if (strcmp(arg, \"average_trialperiod\") == 0) {\n\t\tif (*str == '\\0')\n\t\t\treturn HANDLER_FAIL;\n\t\taverage_trialperiod = atoi(str);\n\t} else if (strcmp(arg, \"cpuburst\") == 0) {\n\t\tcpuburst = on; /* may be off */\n\t} else if (strcmp(arg, \"cpuaverage\") == 0) {\n\t\tcpuaverage = on; /* may be off */\n\t} else if (strcmp(arg, \"mem\") == 0) {\n\t\tenforce_mem = on; /* may be off */\n\t} else if (strcmp(arg, \"complexmem\") == 0) {\n\t\tcomplex_mem_calc = on; /* may be off */\n\t} else {\n\t\treturn HANDLER_FAIL;\n\t}\n\treturn HANDLER_SUCCESS;\n}\n\n/**\n * @brief\n *\tcheck for the type of action to be done on a certain event\n *\n * @param[in] ae - enum val for action_event\n *\n * @return\tthe Action_Verb enum value, see mom_func.h:\n * @retval\tDefault no directive to change the action for the event\n * @retval\tScript defined in see mom_func.h\n * @retval\tRequeue defined in see mom_func.h\n *\n */\nenum Action_Verb\nchk_mom_action(enum Action_Event ae)\n{\n\tassert((0 <= 
ae) && (ae < (int) LastAction));\n\n\treturn mom_action[ae].ma_verb;\n}\n\n/**\n * @brief\n *\tif there is an external script defined for this\n *\taction, do it and return values:\n *\n * @retval\t1\tscript running in child process\n * @retval\t0\tscript ran with no error\n * @retval\t-1\terror, script did not run correctly\n * @retval\t-2\terror, no script - do normal default action\n *\n *\tThe \"post\" function is called out of scan_for_terminated() when the\n *\tchild process (script) exits.  It is called with the pointer to the job\n *\tand the script exit value.   If the script does not complete in the\n *\tspecified timeout value,  the \"post\" function will be called with the\n *\terror value of -1.\n *\n *\tThe action taken by the \"post\" function on an error depends on the\n *\tfunction itself.   Usually it should perform the \"default\" action\n *\tfor that action.\n *\n */\nint\ndo_mom_action_script(int ae,\t      /* index into action table */\n\t\t     job *pjob,\t      /* ptr to job */\n\t\t     pbs_task *ptask, /* ptr to task */\n\t\t     char *path,\n\t\t     void (*post)(job *p, int e)) /* post action func */\n{\n\tchar **args = NULL;\n\tchar buf[MAXPATHLEN + 1];\n\tint i;\n\tint nargs;\n\tchar **pargs;\n\tstruct stat sb;\n\tstruct passwd *pwdp;\n\tint rc = -1;\n\tstruct mom_action *ma;\n\tint transmog = 0;\n\tint j;\n\tint pipes[2], kid_read = -1, kid_write = -1;\n\tint parent_read = -1, parent_write = -1;\n\tstruct startjob_rtn sjr;\n\tpid_t child;\n\n\tmemset(&sjr, 0, sizeof(sjr));\n\n\tassert((0 <= ae) && (ae < (int) LastAction));\n\n\tma = &mom_action[ae];\n\tif (ma == NULL || ma->ma_script == NULL)\n\t\treturn -2;\n\n\t/* does script really exist? 
*/\n\tif (stat(ma->ma_script, &sb) == -1) {\n\t\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid,\n\t\t\t   \"action %s script %s does not exist\", ma->ma_name, ma->ma_script);\n\t\treturn -1;\n\t} else if (sb.st_uid != 0 || sb.st_gid > 10 || (sb.st_mode & S_IXUSR) != S_IXUSR || (sb.st_mode & S_IWOTH) != 0) {\n\t\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid,\n\t\t\t   \"action %s script %s cannot be executed due to permissions\", ma->ma_name, ma->ma_script);\n\t\treturn -1;\n\t}\n\n\tif (ptask == NULL)\n\t\tptask = (pbs_task *) GET_NEXT(pjob->ji_tasks);\n\n\tif (ptask == NULL) {\n\t\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid,\n\t\t\t   \"action %s script %s cannot run because job has no tasks\", ma->ma_name, ma->ma_script);\n\t\treturn -1;\n\t}\n\n\t/*\n\t ** If we are going to leave the script running in the background,\n\t ** the ji_momsubt field has to be free to track the pid.\n\t */\n\tif (post != NULL && pjob->ji_momsubt) {\n\t\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid,\n\t\t\t   \"action %s script %s cannot be run due to existing subtask\", ma->ma_name, ma->ma_script);\n\t\treturn -1;\n\t}\n\n\tif ((pwdp = check_pwd(pjob)) == NULL) {\n\t\tlog_event(PBSEVENT_JOB | PBSEVENT_SECURITY, PBS_EVENTCLASS_JOB, LOG_ERR, pjob->ji_qs.ji_jobid, log_buffer);\n\t\treturn -1;\n\t}\n\n\t/* build up args to script */\n\tfor (nargs = 0, pargs = ma->ma_args; pargs && *pargs; pargs++)\n\t\tnargs++;\n\t/* Add one for the command itself */\n\tnargs++;\n\targs = calloc((nargs + 1), sizeof(char *));\n\tif (args == NULL)\n\t\treturn -1;\n\n\t/* set args[0] to script */\n\targs[0] = strdup(ma->ma_script);\n\tif (args[0] == NULL) {\n\t\tfree(args);\n\t\treturn -1;\n\t}\n\n\tpargs = ma->ma_args;\n\tfor (i = 1; i < nargs; i++, pargs++) {\n\t\tif (**pargs == '%') {\n\t\t\tif (strcmp(*pargs + 1, \"jobid\") == 0)\n\t\t\t\tstrcpy(buf, 
pjob->ji_qs.ji_jobid);\n\t\t\telse if (strcmp(*pargs + 1, \"sid\") == 0)\n\t\t\t\tsprintf(buf, \"%d\", ptask->ti_qs.ti_sid);\n\t\t\telse if (strcmp(*pargs + 1, \"taskid\") == 0)\n\t\t\t\tsprintf(buf, \"%d\", ptask->ti_qs.ti_task);\n\t\t\telse if (strcmp(*pargs + 1, \"uid\") == 0) {\n\t\t\t\tsprintf(buf, \"%d\", pjob->ji_qs.ji_un.ji_momt.ji_exuid);\n\t\t\t} else if (strcmp(*pargs + 1, \"gid\") == 0) {\n\t\t\t\tsprintf(buf, \"%d\", pjob->ji_qs.ji_un.ji_momt.ji_exgid);\n\t\t\t} else if (strcmp(*pargs + 1, \"login\") == 0)\n\t\t\t\tstrcpy(buf, get_jattr_str(pjob, JOB_ATR_euser));\n\t\t\telse if (strcmp(*pargs + 1, \"owner\") == 0)\n\t\t\t\tstrcpy(buf, get_jattr_str(pjob, JOB_ATR_job_owner));\n\t\t\telse if (strcmp(*pargs + 1, \"globid\") == 0)\n\t\t\t\tstrcpy(buf, \"NULL\");\n\t\t\telse if (strcmp(*pargs + 1, \"auxid\") == 0) {\n\t\t\t\tchar *value;\n\t\t\t\tif ((value = get_jattr_str(pjob, JOB_ATR_altid)) != NULL)\n\t\t\t\t\tpbs_strncpy(buf, value, sizeof(buf));\n\t\t\t\telse\n\t\t\t\t\tstrcpy(buf, \"NULL\");\n\t\t\t} else if (strcmp(*pargs + 1, \"path\") == 0) {\n\t\t\t\tif (path != NULL)\n\t\t\t\t\tpbs_strncpy(buf, path, sizeof(buf));\n\t\t\t\telse\n\t\t\t\t\tstrcpy(buf, \"NULL\");\n\t\t\t} else {\n\t\t\t\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid,\n\t\t\t\t\t   \"action %s script %s cannot be run due to unknown parameter %s\",\n\t\t\t\t\t   ma->ma_name, ma->ma_script, *pargs);\n\t\t\t\tgoto done;\n\t\t\t}\n\t\t} else\n\t\t\tpbs_strncpy(buf, *pargs, sizeof(buf));\n\n\t\t*(args + i) = strdup(buf);\n\t\tif (*(args + i) == NULL)\n\t\t\tgoto done; /* frees the args built so far */\n\t}\n\n\t/*\n\t * Special case for restart_transmogrify.\n\t * The script is going to morph into the task so we have to\n\t * set up pipes just like in start_process()\n\t */\n\tif ((transmog = (ae == RestartAction) && restart_transmogrify)) {\n\t\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid,\n\t\t\t   \"action %s script %s preparing to transmogrify task 
%8.8X\",\n\t\t\t   ma->ma_name, ma->ma_script, ptask->ti_qs.ti_task);\n\n\t\tif (pipe(pipes) == -1)\n\t\t\tgoto done;\n\t\tif (pipes[1] < 3) {\n\t\t\tkid_write = fcntl(pipes[1], F_DUPFD, 3);\n\t\t\tclose(pipes[1]);\n\t\t} else\n\t\t\tkid_write = pipes[1];\n\t\tparent_read = pipes[0];\n\n\t\tif (pipe(pipes) == -1) {\n\t\t\tclose(kid_write);\n\t\t\tclose(parent_read);\n\t\t\tgoto done;\n\t\t}\n\t\tif (pipes[0] < 3) {\n\t\t\tkid_read = fcntl(pipes[0], F_DUPFD, 3);\n\t\t\tclose(pipes[0]);\n\t\t} else\n\t\t\tkid_read = pipes[0];\n\t\tparent_write = pipes[1];\n\t} else if (ae == RestartAction)\n\t\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid,\n\t\t\t   \"action %s script %s preparing to restart task %8.8X\",\n\t\t\t   ma->ma_name, ma->ma_script, ptask->ti_qs.ti_task);\n\n\tif ((child = fork_me(-1)) == 0) { /* child */\n\t\textern char *variables_else[];\n\t\tchar *shell;\n\n\t\t/* unprotect the child process which becomes the job */\n\t\tdaemon_protect(0, PBS_DAEMON_PROTECT_OFF);\n\n\t\tshell = set_shell(pjob, pwdp); /* machine dependent */\n\t\tvtable.v_ensize = 30;\n\t\tvtable.v_used = 0;\n\t\tvtable.v_envp = (char **) calloc(vtable.v_ensize, sizeof(char *));\n\t\tif (vtable.v_envp == NULL) {\n\t\t\tfree(args);\n\t\t\tlog_err(errno, \"setup environment\", \"out of memory\");\n\t\t\treturn -1;\n\t\t}\n\t\t/* setup environment */\n\t\t/* UID */\n\t\tsprintf(buf, \"%d\", pjob->ji_qs.ji_un.ji_momt.ji_exuid);\n\t\tbld_env_variables(&vtable, \"UID\", buf);\n\t\t/* GID */\n\t\tsprintf(buf, \"%d\", pjob->ji_qs.ji_un.ji_momt.ji_exgid);\n\t\tbld_env_variables(&vtable, \"GID\", buf);\n\t\t/* HOME */\n\t\tbld_env_variables(&vtable, variables_else[0], pwdp->pw_dir);\n\t\t/* LOGNAME */\n\t\tbld_env_variables(&vtable, variables_else[1], pwdp->pw_name);\n\t\t/* PBS_JOBNAME */\n\t\tbld_env_variables(&vtable, variables_else[2], get_jattr_str(pjob, JOB_ATR_jobname));\n\t\t/* PBS_JOBID */\n\t\tbld_env_variables(&vtable, variables_else[3], 
pjob->ji_qs.ji_jobid);\n\t\t/* PBS_QUEUE */\n\t\tbld_env_variables(&vtable, variables_else[4], get_jattr_str(pjob, JOB_ATR_in_queue));\n\t\t/* SHELL */\n\t\tbld_env_variables(&vtable, variables_else[5], shell);\n\t\t/* USER */\n\t\tbld_env_variables(&vtable, variables_else[6], pwdp->pw_name);\n\t\t/* PBS_JOBCOOKIE */\n\t\tbld_env_variables(&vtable, variables_else[7], get_jattr_str(pjob, JOB_ATR_Cookie));\n\t\t/* PBS_NODENUM */\n\t\tsprintf(buf, \"%d\", pjob->ji_nodeid);\n\t\tbld_env_variables(&vtable, variables_else[8], buf);\n\t\t/* PBS_TASKNUM */\n\t\tsprintf(buf, \"%ld\", (long) ptask->ti_qs.ti_task);\n\t\tbld_env_variables(&vtable, variables_else[9], buf);\n\t\t/* PBS_MOMPORT */\n\t\tsprintf(buf, \"%d\", pbs_rm_port);\n\t\tbld_env_variables(&vtable, variables_else[10], buf);\n\t\t/* PBS_NODEFILE */\n\t\tsprintf(buf, \"%s/aux/%s\", pbs_conf.pbs_home_path, pjob->ji_qs.ji_jobid);\n\t\tbld_env_variables(&vtable, variables_else[11], buf);\n\t\t/* PBS_SID */\n\t\tsprintf(buf, \"%d\", ptask->ti_qs.ti_sid);\n\t\tbld_env_variables(&vtable, \"PBS_SID\", buf);\n\t\t/* PBS_JOBDIR */\n\t\tif (is_jattr_set(pjob, JOB_ATR_sandbox) && strcasecmp(get_jattr_str(pjob, JOB_ATR_sandbox), \"PRIVATE\") == 0) {\n\t\t\tbld_env_variables(&vtable, \"PBS_JOBDIR\",\n\t\t\t\t\t  jobdirname(pjob->ji_qs.ji_jobid, pjob->ji_grpcache->gc_homedir));\n\t\t} else\n\t\t\tbld_env_variables(&vtable, \"PBS_JOBDIR\", pjob->ji_grpcache->gc_homedir);\n\t\tmom_unnice();\n\n\t\t/*\n\t\t ** Do the same operations as start_process() but we don't\n\t\t ** need to reset the global ID.\n\t\t */\n\t\tif (transmog) {\n\t\t\tclose(parent_read);\n\t\t\tclose(parent_write);\n\n#if MOM_ALPS\n\t\t\t/*\n\t\t\t * ALPS jobs need a new PAGG when\n\t\t\t * being restarted.\n\t\t\t */\n\t\t\tmemset(pjob->ji_extended.ji_ext.ji_jid, 0, sizeof(pjob->ji_extended.ji_ext.ji_jid));\n#endif\n\t\t\tj = set_job(pjob, &sjr);\n\t\t\tif (j < 0) {\n\t\t\t\tif (j == -1)\n\t\t\t\t\tstrcpy(log_buffer, \"Unable to set task 
session\");\n\t\t\t\tDBPRT((\"%s: %s\\n\", __func__, log_buffer))\n\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_NOTICE, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\tif (j == -3)\n\t\t\t\t\tj = JOB_EXEC_FAIL2;\n\t\t\t\telse\n\t\t\t\t\tj = JOB_EXEC_RETRY;\n\t\t\t\tstarter_return(kid_write, kid_read, j, &sjr);\n\t\t\t}\n\t\t\tptask->ti_qs.ti_sid = sjr.sj_session;\n\t\t\ti = mom_set_limits(pjob, SET_LIMIT_SET);\n\t\t\tif (i != PBSE_NONE) {\n\t\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_WARNING, pjob->ji_qs.ji_jobid, \"Unable to set limits, err=%d\", i);\n\t\t\t\tif (i == PBSE_RESCUNAV)\n\t\t\t\t\tj = JOB_EXEC_RETRY;\n\t\t\t\telse\n\t\t\t\t\tj = JOB_EXEC_FAIL2;\n\t\t\t\tstarter_return(kid_write, kid_read, j, &sjr);\n\t\t\t}\n\t\t\tlog_close(0);\n\t\t\tstarter_return(kid_write, kid_read, JOB_EXEC_OK, &sjr);\n\t\t} else { /* just close down anything hanging */\n\t\t\tclose(0);\n\t\t\tclose(1);\n\t\t\tclose(2);\n\t\t}\n\n\t\texecve(ma->ma_script, args, vtable.v_envp);\n\t\texit(254);\n\t}\n\n\tif (child == -1) {\n\t\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid, \"action script %s cannot be run due to fork failure %d\", ma->ma_script, errno);\n\t\tgoto done;\n\t}\n\n\tif (post != NULL) { /* post func means we do not wait */\n\t\trc = 1;\n\t\tpjob->ji_momsubt = child;\n\t\tpjob->ji_mompost = post;\n\t\tif (ma->ma_timeout)\n\t\t\tpjob->ji_actalarm = time_now + ma->ma_timeout;\n\t\telse\n\t\t\tpjob->ji_actalarm = 0;\n\t\tgoto done;\n\t}\n\n\tif (transmog) { /* setup new task */\n\t\tclose(kid_read);\n\t\tclose(kid_write);\n\n\t\t/* read sid */\n\t\ti = readpipe(parent_read, &sjr, sizeof(sjr));\n\t\tj = errno;\n\t\tclose(parent_read);\n\t\tif (i != sizeof(sjr)) {\n\t\t\tlog_errf(j, __func__, \"read of pipe for pid job %s got %d not %d\", pjob->ji_qs.ji_jobid, i, (int) sizeof(sjr));\n\t\t\tclose(parent_write);\n\t\t\tgoto done;\n\t\t}\n\t\t/* send info back as an acknowlegment */\n\t\twritepipe(parent_write, &sjr, 
sizeof(sjr));\n\t\tclose(parent_write);\n\t\tDBPRT((\"%s: read start return %d %d\\n\", __func__, sjr.sj_code, sjr.sj_session))\n\t\t/* update system specific ids and information from set_job() */\n\t\tset_globid(pjob, &sjr);\n\t\tif (sjr.sj_code < 0) {\n\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_NOTICE, pjob->ji_qs.ji_jobid,\n\t\t\t\t   \"task %8.8X not started, %s %d\", (unsigned int) ptask->ti_qs.ti_task, (sjr.sj_code == JOB_EXEC_RETRY) ? \"Retry\" : \"Failure\", sjr.sj_code);\n\t\t\tgoto done;\n\t\t}\n\t\tptask->ti_qs.ti_sid = sjr.sj_session;\n\t\tptask->ti_qs.ti_status = TI_STATE_RUNNING;\n\t\t(void) task_save(ptask);\n\t\t/* update the job with the new session id */\n\t\tset_jattr_l_slim(pjob, JOB_ATR_session_id, sjr.sj_session, SET);\n\t\tif (!check_job_substate(pjob, JOB_SUBSTATE_RUNNING)) {\n\t\t\tset_job_state(pjob, JOB_STATE_LTR_RUNNING);\n\t\t\tset_job_substate(pjob, JOB_SUBSTATE_RUNNING);\n\t\t\tjob_save(pjob);\n\t\t}\n\n\t\trc = 0;\n\t\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid, \"task %8.8X transmogrified\", (unsigned int) ptask->ti_qs.ti_task);\n\t\tenqueue_update_for_send(pjob, IS_RESCUSED);\n\t} else { /* wait for script */\n\t\tDBPRT((\"action: setting alarm %d\\n\", ma->ma_timeout))\n\t\talarm(ma->ma_timeout);\n\t\trc = 0;\n\t\tif (waitpid(child, &rc, 0) == -1) {\n\t\t\tlog_eventf(PBSEVENT_SYSTEM, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid, \"%s script %s: wait failed %d\", ma->ma_name, ma->ma_script, errno);\n\t\t\t(void) kill(child, SIGKILL);\n\t\t\t(void) waitpid(child, &rc, 0);\n\t\t}\n\t\talarm(0);\n\t\tif (WIFEXITED(rc)) {\n\t\t\trc = WEXITSTATUS(rc);\n\t\t\tlog_eventf(PBSEVENT_SYSTEM, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid, \"%s script %s: exit code %d\", ma->ma_name, ma->ma_script, rc);\n\t\t\tif (rc != 0)\n\t\t\t\trc = -1;\n\t\t} else if (WIFSIGNALED(rc)) {\n\t\t\trc = WTERMSIG(rc);\n\t\t\tlog_eventf(PBSEVENT_SYSTEM, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid, 
\"%s script %s: got signal %d\", ma->ma_name, ma->ma_script, rc);\n\t\t\trc = -1;\n\t\t} else {\n\t\t\tlog_eventf(PBSEVENT_SYSTEM, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid, \"%s script %s: exited abnormally\", ma->ma_name, ma->ma_script);\n\t\t\trc = -1;\n\t\t}\n\t}\n\ndone:\n\t/* free args arrays */\n\tfor (pargs = args; *pargs; pargs++)\n\t\t(void) free(*pargs);\n\t(void) free(args);\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\tset the suspend (and resume) signal used\n *\n * @param[in] str - signal name\n *\n * @return\thandler_ret_t\n * @retval\tHANDLER_FAIL\t\tFailure\n * @retval\tHANDLER_SUCCESS\t\tSuccess\n *\n */\nstatic handler_ret_t\nset_suspend_signal(char *str)\n{\n\tchar tok[80];\n\n\tif ((str == 0) || (*str == '\\0'))\n\t\treturn HANDLER_FAIL;\n\n\tstr = TOKCPY(str, tok);\n\tstr = skipwhite(str);\n\n\tsuspend_signal = atoi(tok);\n\n\tif (*str != '\\0')\n\t\tresume_signal = atoi(str);\n\n\treturn HANDLER_SUCCESS;\n}\n\n/**\n * @brief\n *\tAdd static resource or shell escape line from config file.\n *\tThis is a support routine for read_config().\n *\n * @param[in] str - string holding resource name\n * @param[in] file - filename\n * @param[in] linenum - line number in file\n *\n * @return int\n *\n * @retval  1 - In case of error\n * @retval  0 - In case of success\n *\n */\nstatic int\nadd_static(char *str, char *file, int linenum)\n{\n\tint i;\n\tchar name[256];\n\tstruct config_list *cp;\n\tint perm;\n\n\tstr = TOKCPY(str, name); /* resource name */\n\tstr = skipwhite(str);\t /* resource value */\n\tif (*str == '!') {\t /* shell escape command */\n\t\tint err;\n\t\tchar *filename;\n\t\trmnl(str);\n\t\tfilename = get_script_name(&str[1]);\n\t\tif (filename == NULL)\n\t\t\treturn 1;\n\t\tperm = get_permission(\"write\");\n\t\terr = tmp_file_sec(filename, 0, 1, perm, 1);\n\n\t\tif (err != 0) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"error: %s file has a non-secure file access, errno: %d\", filename, 
err);\n\t\t\tlog_event(PBSEVENT_SECURITY, PBS_EVENTCLASS_SERVER, LOG_ERR, __func__, log_buffer);\n\t\t\tfree(filename);\n\t\t\treturn 1;\n\t\t}\n\t\tfree(filename);\n\t} else { /* get the value */\n\t\ti = strlen(str);\n\t\twhile (--i) { /* strip trailing blanks */\n\t\t\tif (!isspace((int) *(str + i)))\n\t\t\t\tbreak;\n\t\t\t*(str + i) = '\\0';\n\t\t}\n\t}\n\n\tcp = (struct config_list *) malloc(sizeof(struct config_list));\n\tmemcheck((char *) cp);\n\n\tcp->c_link = config_list;\n\tcp->c.c_name = strdup(name);\n\tmemcheck(cp->c.c_name);\n\tcp->c.c_u.c_value = strdup(str);\n\tmemcheck(cp->c.c_u.c_value);\n\n\tsnprintf(log_buffer, sizeof(log_buffer), \"%s[%d] add name %s value %s\",\n\t\t file, linenum, name, str);\n\tlog_event(PBSEVENT_DEBUG, 0, LOG_DEBUG, \"add_static\", log_buffer);\n\n\tconfig_list = cp;\n\treturn 0;\n}\n\n/**\n * @brief\n *\tsets ideal load\n *\n * @param[in] value - value for ideal load\n *\n * @return      handler_ret_t\n * @retval      HANDLER_FAIL            Failure\n * @retval      HANDLER_SUCCESS         Success\n *\n */\n\nstatic handler_ret_t\nsetidealload(char *value)\n{\n\tchar newstr[50] = \"ideal_load \";\n\tfloat val;\n\n\tval = (float) atof(value);\n\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_DEBUG,\n\t\t  \"ideal_load\", value);\n\tif (val < 0.0)\n\t\treturn HANDLER_FAIL; /* error */\n\tideal_load_val = val;\n\tif (max_load_val < 0.0)\n\t\tmax_load_val = val; /* set a default */\n\t(void) strcat(newstr, value);\n\tif (add_static(newstr, \"config\", 0))\n\t\treturn HANDLER_FAIL;\n\tnconfig++;\n\treturn HANDLER_SUCCESS;\n}\n\n/**\n * @brief\n *      sets maximum load\n *\n * @param[in] value - value for maximum load\n *\n * @return      handler_ret_t\n * @retval      HANDLER_FAIL            Failure\n * @retval      HANDLER_SUCCESS         Success\n *\n */\n\nstatic handler_ret_t\nsetmaxload(char *value)\n{\n\tchar newstr[50] = \"max_load \";\n\tchar *endptr;\n\tfloat val;\n\n\tendptr = value;\n\twhile ((!isspace((int) 
*endptr)) && *endptr)\n\t\tendptr++;\n\tval = (float) atof(value);\n\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_DEBUG,\n\t\t  \"max_load\", value);\n\tif (val < 0.0)\n\t\treturn HANDLER_FAIL; /* error */\n\tmax_load_val = val;\n\tif (ideal_load_val < 0.0)\n\t\tideal_load_val = val;\n\t(void) strncat(newstr, value, 40);\n\tif (add_static(newstr, \"config\", 0))\n\t\treturn HANDLER_FAIL;\n\tnconfig++;\n\n\tif (*endptr != '\\0') {\n\t\tif (strstr(endptr, \"suspend\"))\n\t\t\tidle_on_maxload = 1;\n\t}\n\treturn HANDLER_SUCCESS;\n}\n\n/**\n * process $max_poll_downtime directive in config file:\n *\t$max_poll_downtime 300\n */\nstatic handler_ret_t\nset_max_poll_downtime(char *value)\n{\n\tchar *sbuf;\n\tchar *ebuf;\n\n\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER,\n\t\t  LOG_INFO, \"max_poll_downtime\", value);\n\tsbuf = value;\n\tmax_poll_downtime_val = (time_t) strtol(sbuf, &ebuf, 10);\n\tif (max_poll_downtime_val <= 0)\n\t\treturn HANDLER_FAIL; /* error */\n\n\treturn HANDLER_SUCCESS;\n}\n\n/**\n * @brief\n *\tprocess $kbd_idle directive in config file:\n *\t$kbd_idle avail [busy [poll]]\n *\n * @param[in] value - value for kb idle\n *\n * @return      handler_ret_t\n * @retval      HANDLER_FAIL            Failure\n * @retval      HANDLER_SUCCESS         Success\n *\n */\n\nstatic handler_ret_t\nset_kbd_idle(char *value)\n{\n\tchar *sbuf;\n\tchar *ebuf;\n\n\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER,\n\t\t  LOG_INFO, \"idle_avail\", value);\n\tsbuf = value;\n\tidle_avail = (time_t) strtol(sbuf, &ebuf, 10);\n\tif (idle_avail <= 0)\n\t\treturn HANDLER_FAIL; /* error */\n\n\tidle_check = 1;\n\tcycle_harvester = 1;\n\tsbuf = ebuf;\n\twhile (isspace((int) *sbuf))\n\t\t++sbuf;\n\tif (*sbuf == '\\0')\n\t\tgoto chk_for_interactive; /* no idle_busy, but that is ok */\n\n\tidle_busy = (time_t) strtol(sbuf, &ebuf, 10);\n\tif (idle_busy <= 0)\n\t\treturn HANDLER_FAIL; /* error */\n\n\tsbuf = ebuf;\n\twhile (isspace((int) *sbuf))\n\t\t++sbuf;\n\tif (*sbuf == 
'\\0')\n\t\tgoto chk_for_interactive; /* no idle_poll, but that is ok */\n\n\tidle_poll = (time_t) strtol(sbuf, &ebuf, 10);\n\tif (idle_poll <= 0)\n\t\treturn HANDLER_FAIL; /* error */\n\n\t/* check whether PBS_INTERACTIVE service is registered or not? */\nchk_for_interactive:\n\treturn check_interactive_service();\n}\n\n/**\n * @brief\n *\tsets temporary directory\n *\n * @param[in] value - value for temp directory\n *\n * @return      handler_ret_t\n * @retval      HANDLER_FAIL            Failure\n * @retval      HANDLER_SUCCESS         Success\n *\n */\n\nstatic handler_ret_t\nset_tmpdir(char *value)\n{\n\tchar *cleaned_value;\n\tint i;\n\n\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER,\n\t\t  LOG_INFO, \"tmpdir\", value);\n\tcleaned_value = remove_quotes(value); /* remove quotes if any present */\n\tif (cleaned_value == NULL)\n\t\treturn HANDLER_FAIL;\n\n\t/* Remove trailing separator */\n\tfor (i = (strlen(cleaned_value) - 1); i >= 0; i--) {\n\t\tif (cleaned_value[i] != TRAILING_CHAR)\n\t\t\tbreak;\n\t\tcleaned_value[i] = '\\0';\n\t}\n\n\tif (strlen(cleaned_value) > sizeof(pbs_tmpdir) - 1) {\n\t\tfree(cleaned_value);\n\t\treturn HANDLER_FAIL;\n\t}\n\n#if !defined(DEBUG) && !defined(NO_SECURITY_CHECK)\n\tif (verify_dir(cleaned_value, 1, 1, 0, 1)) {\n\t\tfree(cleaned_value);\n\t\treturn HANDLER_FAIL; /* error */\n\t}\n#endif /* NO_SECURITY_CHECK */\n\n\tstrcpy(pbs_tmpdir, cleaned_value);\n\tfree(cleaned_value);\n\treturn HANDLER_SUCCESS;\n}\n\n/**\n * @brief\n *      sets job directory\n *\n * @param[in] value - value for job directory\n *\n * @return      handler_ret_t\n * @retval      HANDLER_FAIL            Failure\n * @retval      HANDLER_SUCCESS         Success\n *\n */\n\nstatic handler_ret_t\nset_jobdir_root(char *value)\n{\n\tchar *cleaned_value;\n\tchar *savep = NULL;\n\tchar *directive;\n\tchar *p;\n\n\tp = value;\n\tvalue = strtok_r(p, \" \", &savep);\n\tdirective = strtok_r(NULL, \" \", &savep);\n\tlog_event(PBSEVENT_SYSTEM, 
PBS_EVENTCLASS_SERVER,\n\t\t  LOG_INFO, __func__, value);\n\tcleaned_value = remove_quotes(value); /* remove quotes if any present */\n\tif (cleaned_value == NULL)\n\t\treturn HANDLER_FAIL;\n\n\tif (strlen(cleaned_value) > sizeof(pbs_jobdir_root) - 1) {\n\t\tfree(cleaned_value);\n\t\treturn HANDLER_FAIL;\n\t}\n\n#if !defined(DEBUG) && !defined(NO_SECURITY_CHECK)\n\tif ((strcmp(cleaned_value, JOBDIR_DEFAULT) != 0) && (verify_dir(cleaned_value, 1, 1, 0, 1))) {\n\t\tfree(cleaned_value);\n\t\treturn HANDLER_FAIL;\n\t}\n#endif /* NO_SECURITY_CHECK */\n\n\tstrcpy(pbs_jobdir_root, cleaned_value);\n\tfree(cleaned_value);\n\n\tif (directive != NULL) {\n\t\tif (strcmp(directive, \"shared\") == 0)\n\t\t\tpbs_jobdir_root_shared = TRUE;\n\t}\n\treturn HANDLER_SUCCESS;\n}\n\n/**\n * @brief\n *\tsets boolean value\n *\n * @param[in] id - function name\n * @param[in] value - value\n * @param[in] flag - configuration flag\n *\n * @return      handler_ret_t\n * @retval      HANDLER_FAIL            Failure\n * @retval      HANDLER_SUCCESS         Success\n *\n */\n\nhandler_ret_t\nset_boolean(const char *id, char *value, int *flag)\n{\n\tif (value == NULL || *value == '\\0') {\n\t\tsprintf(log_buffer, \"No value specified, no action taken.\");\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t  id, log_buffer);\n\t\treturn HANDLER_FAIL; /* error */\n\t}\n\n\tif ((strcasecmp(value, \"no\") == 0) ||\n\t    (strcasecmp(value, \"false\") == 0) ||\n\t    (strcasecmp(value, \"off\") == 0) ||\n\t    (strcmp(value, \"0\") == 0)) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t  id, \"false\");\n\t\t*flag = FALSE;\n\t} else if ((strcasecmp(value, \"yes\") == 0) ||\n\t\t   (strcasecmp(value, \"true\") == 0) ||\n\t\t   (strcasecmp(value, \"on\") == 0) ||\n\t\t   (strcmp(value, \"1\") == 0)) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t  id, \"true\");\n\t\t*flag = TRUE;\n\t} else 
{\n\t\tsprintf(log_buffer,\n\t\t\t\"Illegal value \\\"%s\\\", no action taken.\", value);\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t  id, log_buffer);\n\t\treturn HANDLER_FAIL; /* error */\n\t}\n\treturn HANDLER_SUCCESS; /* success */\n}\n\nstatic handler_ret_t\nset_int(const char *id, char *value, int *var)\n{\n\tchar *left;\n\tint val;\n\n\tif (value == NULL || *value == '\\0') {\n\t\tsprintf(log_buffer, \"No value specified, no action taken.\");\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t  id, log_buffer);\n\t\treturn HANDLER_FAIL; /* error */\n\t}\n\n\tval = (int) strtol(value, &left, 0);\n\tif (*left != '\\0' || val <= 0) {\n\t\tsprintf(log_buffer, \"bad value \\\"%s\\\"\", value);\n\t\tlog_event(PBSEVENT_SYSTEM, 0, LOG_ERR, id, log_buffer);\n\t\treturn HANDLER_FAIL; /* error */\n\t}\n\t*var = val;\n\n\tsprintf(log_buffer, \"setting %d\", val);\n\tlog_event(PBSEVENT_SYSTEM, 0, LOG_DEBUG, id, log_buffer);\n\n\treturn HANDLER_SUCCESS;\n}\n\n/**\n * @brief\n *\tset float value\n *\n * @param[in] id - function name\n * @param[in] value - value\n * @param[out] var - output float value\n *\n * @return      handler_ret_t\n * @retval      HANDLER_FAIL            Failure\n * @retval      HANDLER_SUCCESS         Success\n *\n */\nhandler_ret_t\nset_float(const char *id, char *value, float *var)\n{\n\tchar *left;\n\tfloat val;\n\n\tif (value == NULL || *value == '\\0') {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t  id, \"No value specified, no action taken.\");\n\t\treturn HANDLER_FAIL; /* error */\n\t}\n\n\tval = strtod(value, &left);\n\tif (left == value || val <= 0) {\n\t\tsprintf(log_buffer, \"bad value \\\"%s\\\"\", value);\n\t\tlog_event(PBSEVENT_SYSTEM, 0, LOG_ERR, id, log_buffer);\n\t\treturn HANDLER_FAIL; /* error */\n\t}\n\t*var = val;\n\n\tsnprintf(log_buffer, sizeof(log_buffer), \"setting %f\", val);\n\tlog_event(PBSEVENT_SYSTEM, 0, LOG_DEBUG, id, 
log_buffer);\n\n\treturn HANDLER_SUCCESS;\n}\n\n/**\n * @brief\n *\tSet the configuration flag that defines whether the restart function\n *\toccurs in the background.\n *\n * @retval 0 failure\n * @retval 1 success\n *\n */\nstatic handler_ret_t\nset_restart_background(char *value)\n{\n\treturn (set_boolean(__func__, value, &restart_background));\n}\n\n/**\n * @brief\n *\tSet the configuration flag that defines whether the restart function\n *\ttransmogrifies into a task.\n *\n * @retval 0 failure\n * @retval 1 success\n *\n */\nstatic handler_ret_t\nset_restart_transmogrify(char *value)\n{\n\treturn (set_boolean(__func__, value, &restart_transmogrify));\n}\n\n/**\n * @brief\n *\tSet the configuration flag that defines whether a call to tm_attach\n *\tis allowed.\n *\n * @retval 0 failure\n * @retval 1 success\n *\n */\nstatic handler_ret_t\nset_attach_allow(char *value)\n{\n\treturn (set_boolean(__func__, value, &attach_allow));\n}\n\n#if MOM_ALPS\n/**\n * @brief\n *\tSet the path used for invoking the ALPS BASIL client.\n *\n * @retval 0 failure\n * @retval 1 success\n *\n */\nstatic handler_ret_t\nset_alps_client(char *value)\n{\n\tchar *p;\n\n\tif (!value) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t  __func__, \"Unsetting alps_client.\");\n\t\tif (alps_client) {\n\t\t\tfree(alps_client);\n\t\t\talps_client = NULL;\n\t\t}\n\t\treturn HANDLER_SUCCESS;\n\t}\n\tif (alps_client && strcmp(value, alps_client) == 0) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t  __func__, \"alps_client unchanged, values identical.\");\n\t\treturn HANDLER_SUCCESS;\n\t}\n\tif (strlen(value) > _POSIX_PATH_MAX) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t  __func__, \"alps_client unchanged, new value too long.\");\n\t\treturn HANDLER_FAIL;\n\t}\n\tif (*value != '/') {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t  __func__, \"alps_client unchanged, must be full 
path.\");\n\t\treturn HANDLER_FAIL; /* Must be full path. */\n\t}\n#if !defined(DEBUG) && !defined(NO_SECURITY_CHECK)\n\tif (chk_file_sec(value, 0, 0, S_IWGRP | S_IWOTH, 0) != 0) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t  __func__, \"alps_client unchanged, security check failed.\");\n\t\treturn HANDLER_FAIL;\n\t}\n#else\n\t{\n\t\tstruct stat sb;\n\t\tif (stat(value, &sb) == -1) {\n\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER,\n\t\t\t\t  LOG_NOTICE, __func__,\n\t\t\t\t  \"alps_client unchanged, cannot stat file.\");\n\t\t\treturn HANDLER_FAIL;\n\t\t}\n\t\tif (!S_ISREG(sb.st_mode)) {\n\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER,\n\t\t\t\t  LOG_NOTICE, __func__,\n\t\t\t\t  \"alps_client unchanged, not a regular file.\");\n\t\t\treturn HANDLER_FAIL;\n\t\t}\n\t}\n#endif\n\tp = strdup(value);\n\tif (!p) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t  __func__, \"alps_client unchanged, out of memory.\");\n\t\treturn HANDLER_FAIL;\n\t}\n\tif (alps_client)\n\t\tfree(alps_client);\n\talps_client = p;\n\t(void) sprintf(log_buffer, \"%s %s\",\n\t\t       \"alps_client now set to\", alps_client);\n\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t  __func__, log_buffer);\n\treturn HANDLER_SUCCESS;\n}\n\n/**\n * @brief\n *\tSet the timeout value in seconds when we will stop checking for\n *\tALPS SWITCH response to change from \"EMPTY\".\n *\tIn order to work around a situation where we must poll on \"EMPTY\" in\n *\tcase it changes.  
After the timeout, we can proceed with the suspend.\n *\n * @par\n *\tIt is best if this value is not too large, since PBS will be\n *\tblocked until the timeout is reached or the response changes from \"EMPTY\".\n *\n * @retval 0 failure\n * @retval 1 success\n */\nstatic handler_ret_t\nset_alps_confirm_empty_timeout(char *value)\n{\n\treturn (set_int(__func__, value, &alps_confirm_empty_timeout));\n}\n\n/**\n * @brief\n *\tSet the time out value in seconds when we will stop checking for\n *\tALPS SWITCH to complete.  PBS will basically give up trying.\n *\n * @par\n *\tIt is best if this value is not too large, since PBS will be\n *\tblocked until the timeout is reached or the SWITCH completes.\n *\n * @retval 0 failure\n * @retval 1 success\n */\nstatic handler_ret_t\nset_alps_confirm_switch_timeout(char *value)\n{\n\treturn (set_int(__func__, value, &alps_confirm_switch_timeout));\n}\n\n/**\n * @brief\n * Set the configuration flag that defines vnode creation behavior\n * on a Cray.\n *\n * @par\n * If vnode_per_numa_node is set to TRUE, then\n * PBS will create one vnode per NUMA node (i.e. per segment).\n * Thus there will be multiple vnodes per host.\n *\n * If vnode_per_numa_node is set to FALSE, then\n * PBS will create one vnode for the compute node (i.e. 
it will\n * cover all segment information in one vnode).\n *\n * @retval 0 failure\n * @retval 1 success\n */\nstatic handler_ret_t\nset_vnode_per_numa_node(char *value)\n{\n\treturn (set_boolean(__func__, value, &vnode_per_numa_node));\n}\n\n/**\n * @brief\n *\tSet the alps_release_wait_time in microseconds to wait between ALPS release\n *\treservation requests\n *\n * @return returns value of set_float()\n *\n */\nstatic handler_ret_t\nset_alps_release_wait_time(char *value)\n{\n\tfloat tmp;\n\thandler_ret_t ret;\n\tdouble fract, integ;\n\tret = set_float(__func__, value, &tmp);\n\tif (ret == HANDLER_SUCCESS) {\n\t\tfract = modf(tmp, &integ);\n\t\talps_release_wait_time = (useconds_t) integ * 1000000 + (fract * 1000000);\n\t}\n\treturn (ret);\n}\n\n/**\n * @brief\n *\tSet the alps_release_jitter value in microseconds.\n *\n * @par\n *\tPBS will randomly generate how many\n *\tmicroseconds to add to the alps_release_wait_time value\n *\n * @return returns value of set_float()\n *\n */\nstatic handler_ret_t\nset_alps_release_jitter(char *value)\n{\n\tfloat tmp;\n\thandler_ret_t ret;\n\tdouble fract, integ;\n\tret = set_float(__func__, value, &tmp);\n\tif (ret == HANDLER_SUCCESS) {\n\t\tfract = modf(tmp, &integ);\n\t\talps_release_jitter = (useconds_t) integ * 1000000 + (fract * 1000000);\n\t}\n\treturn (ret);\n}\n\n/**\n * @brief\n *\tSet the time out value in seconds when we will stop making ALPS release\n *\treservation requests.  
PBS will basically give up trying.\n *\n * @par\n * It is best if this value is greater than the Cray node health\n * value for \"suspectbegin\" (configurable by admins)\n *\n * @retval 0 failure\n * @retval 1 success\n *\n */\nstatic handler_ret_t\nset_alps_release_timeout(char *value)\n{\n\treturn (set_int(__func__, value, &alps_release_timeout));\n}\n#endif /* MOM_ALPS */\n\n/**\n * @brief\n *\tSet the base directory used for checkpoint/restart functions.\n *\n * @retval 0 failure\n * @retval 1 success\n */\nstatic handler_ret_t\nset_checkpoint_path(char *value)\n{\n\tint rc = 0;\n\tchar newpath[_POSIX_PATH_MAX] = \"\\0\";\n\n\t/*\n\t * If value and path_checkpoint both contain the same address,\n\t * then we have nothing to do.\n\t */\n\tif (value == path_checkpoint && path_checkpoint)\n\t\treturn HANDLER_SUCCESS;\n\t/*\n\t * Try setting path_checkpoint in the following order:\n\t * 1. command line argument\n\t * 2. environment variable\n\t * 3. mom config file value\n\t * 4. PBS_HOME/checkpoint\n\t *\n\t * Only alter path_checkpoint if we succeed.\n\t */\n\tif (path_checkpoint_from_getopt) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t  __func__, \"Using checkpoint path from command line.\");\n\t\tpbs_strncpy(newpath, path_checkpoint_from_getopt, sizeof(newpath));\n\t} else if (path_checkpoint_from_getenv) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t  __func__, \"Using checkpoint path from environment.\");\n\t\tpbs_strncpy(newpath, path_checkpoint_from_getenv, sizeof(newpath));\n\t} else if (value) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t  __func__, \"Using checkpoint path from config file.\");\n\t\tpbs_strncpy(newpath, value, sizeof(newpath));\n\t} else {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t  __func__, \"Using default checkpoint path.\");\n\t\tpbs_strncpy(newpath, path_checkpoint_default, sizeof(newpath));\n\t}\n\tif 
(strlen(newpath) == 0) {\n\t\t/* Bad mojo, fall back to existing or default. */\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t  __func__, \"Empty checkpoint path specified, ignoring.\");\n\t\tif (*path_checkpoint == '\\0')\n\t\t\t/* path_checkpoint is never allocated memory, it points to path_checkpoint_buf */\n\t\t\tpbs_strncpy(path_checkpoint, path_checkpoint_default, sizeof(path_checkpoint_buf));\n\t\treturn HANDLER_FAIL; /* error */\n\t}\n\tif (*(newpath + strlen(newpath) - 1) != '/') {\n\t\tstrcat(newpath, \"/\");\n\t}\n#if !defined(DEBUG) && !defined(NO_SECURITY_CHECK)\n\tint perm;\n\tperm = get_permission(\"write\");\n\trc = chk_file_sec(newpath, 1, 0, perm, 0);\n#else\n\t{\n\t\tstruct stat sb;\n\t\tif (stat(newpath, &sb) == -1)\n\t\t\trc = errno;\n\t\telse\n\t\t\trc = 0;\n\t}\n#endif /* !DEBUG && !NO_SECURITY_CHECK */\n\tif (rc == 0) {\n\t\t(void) sprintf(log_buffer, \"%s %s\",\n\t\t\t       \"Setting checkpoint path to\", newpath);\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t  __func__, log_buffer);\n\t\tstrcpy(path_checkpoint_buf, newpath);\n\t\tif (!path_checkpoint)\n\t\t\tpath_checkpoint = path_checkpoint_buf;\n\t\treturn HANDLER_SUCCESS; /* success */\n\t}\n\t(void) sprintf(log_buffer, \"%s %s\",\n\t\t       \"Error encountered setting checkpoint path to\", newpath);\n\tlog_err(rc, __func__, log_buffer);\n\treturn HANDLER_FAIL; /* error */\n}\n\n/**\n * @brief\n *\tsets mom name\n *\n * @param[in] value - value for mom name\n *\n * @return\thandler_ret_t\n * @retval\tHANDLER_SUCCESS\t\tsuccess\n * @retval\tHANDLER_FAIL\t\tFailure\n *\n */\nstatic handler_ret_t\nset_momname(char *value)\n{\n\tif (strlen(value) > 63)\n\t\treturn HANDLER_FAIL;\n\t(void) strcpy(mom_short_name, value);\n\treturn HANDLER_SUCCESS;\n}\n\n/**\n * @brief\n *      sets mom port\n *\n * @param[in] value - value for mom port\n *\n * @return      handler_ret_t\n * @retval      HANDLER_SUCCESS         success\n * @retval      
HANDLER_FAIL            Failure\n *\n */\n\nstatic handler_ret_t\nset_momport(char *value)\n{\n\tchar *ebuf;\n\tchar *sbuf;\n\n\tsbuf = value;\n\tpbs_mom_port = (unsigned int) strtol(sbuf, &ebuf, 10);\n\tif (pbs_mom_port == 0)\n\t\treturn HANDLER_FAIL;\n\tpbs_rm_port = pbs_mom_port + 1; /* assume next port for RM */\n\n\treturn HANDLER_SUCCESS;\n}\n\n/**\n * @brief\n *      sets maximum poll checks\n *\n * @param[in] value - max poll check\n *\n * @return      handler_ret_t\n * @retval      HANDLER_SUCCESS         success\n * @retval      HANDLER_FAIL            Failure\n *\n */\n\nstatic handler_ret_t\nset_max_check_poll(char *value)\n{\n\tstatic char id[] = \"max_check_poll\";\n\n\treturn (set_int(id, value, &max_check_poll));\n}\n\n/**\n * @brief\n *      sets minimum poll checks\n *\n * @param[in] value - min poll checks\n *\n * @return      handler_ret_t\n * @retval      HANDLER_SUCCESS         success\n * @retval      HANDLER_FAIL            Failure\n *\n */\n\nstatic handler_ret_t\nset_min_check_poll(char *value)\n{\n\tstatic char id[] = \"min_check_poll\";\n\n\treturn (set_int(id, value, &min_check_poll));\n}\n\n/**\n * @brief\n *      sets alien to be attached\n *\n * @param[in] value - value alien\n *\n * @return      handler_ret_t\n * @retval      HANDLER_SUCCESS         success\n * @retval      HANDLER_FAIL            Failure\n *\n */\n\nstatic handler_ret_t\nset_alien_attach(char *value)\n{\n\treturn (set_boolean(__func__, value, &alien_attach));\n}\n\n/**\n * @brief\n *      sets alien kill value\n *\n * @param[in] value - value for alien kill\n *\n * @return      handler_ret_t\n * @retval      HANDLER_SUCCESS         success\n * @retval      HANDLER_FAIL            Failure\n *\n */\n\nstatic handler_ret_t\nset_alien_kill(char *value)\n{\n\treturn (set_boolean(__func__, value, &alien_kill));\n}\n\n/**\n * @brief\n *      sets value for restrict user\n *\n * @param[in] value - value for restrict user\n *\n * @return      handler_ret_t\n * @retval      
HANDLER_SUCCESS         success\n * @retval      HANDLER_FAIL            Failure\n *\n */\n\nstatic handler_ret_t\nset_restrict_user(char *value)\n{\n\treturn (set_boolean(__func__, value, &restrict_user));\n}\n\n/**\n * @brief\n *      sets value for restrict maxsys user\n *\n * @param[in] value - value for restrict maxsys user\n *\n * @return      handler_ret_t\n * @retval      HANDLER_SUCCESS         success\n * @retval      HANDLER_FAIL            Failure\n *\n */\n\nstatic handler_ret_t\nset_restrict_user_maxsys(char *value)\n{\n\treturn (set_int(__func__, value, &restrict_user_maxsys));\n}\n\n/**\n * @brief\n *\tExempt users from the restrict user feature.  The restrict_user_exempt_uids\n *\tarray holds the uids of the exempted users.\n *\n * @param[in]\tuser_list\tcomma separated string of usernames\n *\n * @return      handler_ret_t\n * @retval      HANDLER_SUCCESS         success\n * @retval      HANDLER_FAIL            Failure\n *\n */\nstatic handler_ret_t\nset_restrict_user_exceptions(char *user_list)\n{\n\tchar *db_admins = NULL;\n\tchar *p2 = NULL;\n\tstruct passwd *pwent;\n\tint i;\n\n\tdb_admins = strdup(user_list);\n\tif (db_admins == NULL) {\n\t\tlog_err(errno, __func__, \"strdup failed\");\n\t\treturn HANDLER_FAIL;\n\t}\n\n\tp2 = strtok(db_admins, \", \");\n\n\ti = 0;\n\twhile (p2 != NULL) { /* each part */\n\n\t\tif (i == NUM_RESTRICT_USER_EXEMPT_UIDS) {\n\t\t\tsprintf(log_buffer, \"Reached limit on # of uids exempted from dorestrict_user = %d\", NUM_RESTRICT_USER_EXEMPT_UIDS);\n\t\t\tlog_event(PBSEVENT_SYSTEM, 0, LOG_DEBUG, __func__,\n\t\t\t\t  log_buffer);\n\n\t\t\tbreak;\n\t\t}\n\n\t\tif (((pwent = getpwnam(p2)) == NULL)) {\n\t\t\tsprintf(log_buffer, \"user %s doesn't exist\", p2);\n\t\t\tlog_event(PBSEVENT_SYSTEM, 0, LOG_DEBUG, __func__,\n\t\t\t\t  log_buffer);\n\t\t} else if (pwent->pw_uid == 0) {\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"user %s ignored because uid=0\", p2);\n\t\t\tlog_event(PBSEVENT_SYSTEM, 0, LOG_DEBUG, __func__,\n\t\t\t\t  
log_buffer);\n\t\t} else {\n\t\t\trestrict_user_exempt_uids[i] = pwent->pw_uid;\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"restrict_user_exempt_uids[%d]=%d (user %s)\",\n\t\t\t\ti, restrict_user_exempt_uids[i], p2);\n\t\t\tlog_event(PBSEVENT_SYSTEM, 0, LOG_DEBUG, __func__,\n\t\t\t\t  log_buffer);\n\t\t\ti++;\n\t\t}\n\t\tp2 = strtok(NULL, \", \");\n\t} /* while */\n\n\t/* terminate the list of uids */\n\tif (i < NUM_RESTRICT_USER_EXEMPT_UIDS) {\n\t\trestrict_user_exempt_uids[i] = 0;\n\t}\n\n\t(void) free(db_admins);\n\n\treturn HANDLER_SUCCESS;\n}\n\n/**\n * @brief\n *      sets value for gen_nodefile_on_sister_mom\n *\n * @param[in] value - value for gen_nodefile_on_sister_mom\n *\n * @return      handler_ret_t\n * @retval      HANDLER_SUCCESS         success\n * @retval      HANDLER_FAIL            Failure\n *\n */\n\nstatic handler_ret_t\nset_gen_nodefile_on_sister_mom(char *value)\n{\n\treturn (set_boolean(__func__, value, &gen_nodefile_on_sister_mom));\n}\n\n/**\n * @brief\n *\tSet the configuration flag that defines whether to get rid of\n *\tall vnode defs when reading the config files.\n *\n * @return\thandler_ret_t.\n * @retval\t0\tFailure\n * @retval\t1\tSuccess\n *\n */\nstatic handler_ret_t\nset_vnode_additive(char *value)\n{\n\tstatic char id[] = \"set_vnodedef_additive\";\n\n\treturn (set_boolean(id, value, &vnode_additive));\n}\n\n/**\n * @brief\n *\tparse the mom config file\n *\n * @param[in] file - filename\n *\n * @return      handler_ret_t\n * @retval      HANDLER_SUCCESS         success\n * @retval      HANDLER_FAIL            Failure\n *\n */\n\nstatic handler_ret_t\nparse_config(char *file)\n{\n\tFILE *conf;\n\tchar line[512];\n\tchar name[256], *str;\n\tint linenum, i;\n\tint err = 0;\n\tint num_newstaticdefs;\n\thandler_ret_t handler_ret = HANDLER_SUCCESS; /* init to success */\n\n\tif ((conf = fopen(file, \"r\")) == NULL) {\n\t\tsprintf(log_buffer, \"fopen: %s\", file);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\treturn HANDLER_FAIL;\n\t} 
else {\n\t\tsprintf(log_buffer, \"file %s\", file);\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_SERVER, LOG_DEBUG,\n\t\t\t  __func__, log_buffer);\n\t}\n\n\tnum_newstaticdefs = 0;\n\tlinenum = 0;\n\twhile (fgets(line, sizeof(line), conf)) {\n\t\tlinenum++;\n\t\tif (line[0] == '#') /* comment */\n\t\t\tcontinue;\n\t\tstr = skipwhite(line); /* pass over initial whitespace */\n\t\tif (*str == '\\0')\n\t\t\tcontinue;\n\n\t\tskip_trailing_spcl_char(line, 13);\n\n\t\tif (*str == '$') {\t\t   /* special command */\n\t\t\tstr = TOKCPY(++str, name); /* resource name */\n\t\t\tfor (i = 0; special[i].name; i++) {\n\t\t\t\tif (strcmp(name, special[i].name) == 0)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\tif (special[i].name == NULL) { /* didn't find it */\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"command name %s not found\", name);\n\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\terr = 1;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tstr = skipwhite(str); /* command param */\n\t\t\trmnl(str);\n\n\t\t\tif ((handler_ret = special[i].handler(str)) ==\n\t\t\t    HANDLER_FAIL) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"%s[%d] command \\\"$%s %s\\\" failed, aborting\",\n\t\t\t\t\tfile, linenum, name, str);\n\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\terr = 1;\n\t\t\t} else if (handler_ret == HANDLER_REPARSE) {\n\t\t\t\t/*\n\t\t\t\t *\thandler() asked us to pass parsing off\n\t\t\t\t *\tto the function that understands how to\n\t\t\t\t *\tread new-style configuration files.\n\t\t\t\t *\tAs a result, this function will not\n\t\t\t\t *\tcontinue processing this file.  
As an\n\t\t\t\t *\tadditional check, we require that this\n\t\t\t\t *\tredirection occur at line 1 of the file\n\t\t\t\t *\twe're currently parsing.\n\t\t\t\t */\n\t\t\t\tif (linenum != 1) {\n\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\"%s:  handler REPARSE at line %d\",\n\t\t\t\t\t\tfile, linenum);\n\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t\thandler_ret = HANDLER_FAIL;\n\t\t\t\t\terr = 1;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\thandler_ret = config_versionhandler(str, file,\n\t\t\t\t\t\t\t\t    conf);\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tcontinue;\n\t\t}\n\n\t\tif (add_static(str, file, linenum))\n\t\t\tcontinue;\n\t\tnum_newstaticdefs++;\n\t}\n\tnconfig += num_newstaticdefs;\n\n\t(void) fclose(conf);\n\tif (err)\n\t\treturn HANDLER_FAIL;\n\treturn handler_ret;\n}\n\n/**\n * @brief\n *\tCheck the version number supplied to the $configversion directive to\n *\tmake sure it's within the range of versions we support.\n *\n * @param[in] value - value for config version check\n *\n * @return      handler_ret_t\n * @retval      HANDLER_SUCCESS         success\n * @retval      HANDLER_FAIL            Failure\n *\n */\n\nstatic handler_ret_t\nconfig_verscheck(char *value)\n{\n\tint vers;\n\tchar *err = \"$configversion value is out of range\"\n\t\t    \" (min %d, max %d) - will treat as old-style\";\n\n\tvers = atoi(value);\n\tif ((vers < CONFIG_MINVERS) || (vers > CONFIG_MAXVERS)) {\n\t\t(void) sprintf(log_buffer, err, CONFIG_MINVERS, CONFIG_MAXVERS);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\treturn HANDLER_FAIL;\n\t}\n\n\treturn HANDLER_REPARSE;\n}\n\n/**\n * @brief\n *\tversion handler function for config file\n *\n * @param[in] value - value for config version check\n *\n * @return      handler_ret_t\n * @retval      HANDLER_SUCCESS         success\n * @retval      HANDLER_FAIL            Failure\n *\n */\n\nstatic handler_ret_t\nconfig_versionhandler(char *value, const char *filename, FILE *fp)\n{\n\tint vers;\n\tvnl_t *nv;\n\textern callfunc_t 
vn_callback;\n\n\tswitch (vers = atoi(value)) {\n\t\tcase CONFIG_VNODEVERS:\n\t\t\tif ((nv = vn_parse_stream(fp, vn_callback)) == NULL) {\n\t\t\t\t(void) sprintf(log_buffer,\n\t\t\t\t\t       \"error(s) parsing vnode definitions file %s\",\n\t\t\t\t\t       filename);\n\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\treturn HANDLER_FAIL;\n\t\t\t}\n\n\t\t\tif (vnlp == NULL)\n\t\t\t\tvnlp = nv;\n\t\t\telse {\n\t\t\t\tvn_merge(vnlp, nv, vn_callback);\n\t\t\t\tvnl_free(nv);\n\t\t\t}\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\t(void) sprintf(log_buffer,\n\t\t\t\t       \"unhandled config file version (%d) in file %s\",\n\t\t\t\t       vers, filename);\n\t\t\treturn HANDLER_FAIL;\n\t}\n\n\treturn HANDLER_SUCCESS;\n}\n\n/**\n * @brief\n *\tcompares two directories from directory list\n *\n * @param[in] s1 - directory 1\n * @param[in] s2 - directory 2\n *\n * @return int\n * @retval 0 Failure\n * @retval 1 Success\n *\n */\n\nstatic int\ndirsort(const void *s1, const void *s2)\n{\n\treturn (strcmp(*((const char **) s1), *((const char **) s2)));\n}\n\n/**\n * @brief\n *\tfrees the directory list\n *\n * @param[in] list - pointer to pointer directory list\n *\n * @return Void\n *\n */\n\nstatic void\nfree_dirlist(char **list)\n{\n\tchar **p;\n\n\tif (list != NULL) {\n\t\tfor (p = list; (p != NULL) && (*p != NULL); p++)\n\t\t\tfree(*p);\n\t\tfree(list);\n\t}\n}\n\n/**\n * @brief\n *\tGiven a directory name, read its contents.\n *\n * @param[in] dirname\tdirectory name to read\n * @param[out] mod\tmodified time of the directory\n *\n * @return a sorted, NULL-terminated list of strings (excluding \".\" and \"..\")\n * @retval NULL\t\t\t\t\tFailure\n * @retval NULL-terminated list of strings\tSuccess\n *\n */\nstatic char **\ndo_readdir(char *dirname, time_t *mod)\n{\n\tDIR *dirp;\n\tstruct dirent *dp;\n\tchar **list = NULL;\n\tchar **newlist;\n\tsize_t nelem = 0;\n\n\tif ((dirp = opendir(dirname)) == NULL) {\n\t\tperror(dirname);\n\t\treturn NULL;\n\t}\n\t/*\n\t * Get mod time if 
requested.\n\t */\n\tif (mod != NULL) {\n\t\tstruct stat sb;\n\n\t\tif (stat(dirname, &sb) == -1) {\n\t\t\tperror(dirname);\n\t\t\treturn NULL;\n\t\t}\n\t\t*mod = sb.st_mtime;\n\t}\n\n\twhile (errno = 0, (dp = readdir(dirp)) != NULL) {\n\t\tchar *s;\n\n\t\tif ((strcmp(dp->d_name, \".\") == 0) ||\n\t\t    (strcmp(dp->d_name, \"..\") == 0))\n\t\t\tcontinue;\n\t\tnewlist = realloc(list, (nelem + 2) * sizeof(char *));\n\t\tif (newlist == NULL) {\n\t\t\tlog_err(errno, __func__, \"realloc\");\n\t\t\tfree_dirlist(list);\n\t\t\t(void) closedir(dirp);\n\t\t\treturn NULL;\n\t\t} else\n\t\t\tlist = newlist;\n\t\tif ((s = strdup(dp->d_name)) == NULL) {\n\t\t\tlog_err(errno, __func__, \"strdup\");\n\t\t\tfree_dirlist(list);\n\t\t\t(void) closedir(dirp);\n\t\t\treturn NULL;\n\t\t} else {\n\t\t\t/*\n\t\t\t *\tN.B.:  the free_dirlist() function above\n\t\t\t *\tdepends on the list always being NULL-\n\t\t\t *\tterminated.\n\t\t\t */\n\t\t\tlist[nelem++] = s;\n\t\t\tlist[nelem] = NULL;\n\t\t}\n\t}\n\tif (errno != 0 && errno != ENOENT) {\n\t\tfree_dirlist(list);\n\t\tperror(dirname);\n\t\t(void) closedir(dirp);\n\t\treturn NULL;\n\t}\n\t(void) closedir(dirp);\n\n\tif (list)\n\t\tqsort(list, nelem, sizeof(*list), dirsort);\n\treturn (list);\n}\n\n/**\n * @brief\n *\tWe make two passes through the list of additional configuration\n *\tfiles, first executing those whose names begin with a special\n *\tprefix (see path_addconfigs_reserved_prefix above) which marks\n *\tthem as reserved for our use, then the rest (which customers\n *\tmay have added).  Files reserved for our use may have special\n *\thandling.  
See the addspecial[] list above.\n *\n *\tDepending on the error encountered, we may either return\n *\timmediately (return HANDLER_FAIL) or set a return value\n *\tthat will cause the overall config file reading process\n *\tto fail, but allow us to continue parsing other config\n *\tfiles (ret = HANDLER_FAIL).\n *\n * @return      handler_ret_t\n * @retval      HANDLER_SUCCESS         success\n * @retval      HANDLER_FAIL            Failure\n *\n */\n\nstatic handler_ret_t\ndo_addconfigs(void)\n{\n\tstruct stat sb;\n\tlong pathlen;\n\tchar *namebuf = NULL;\n\tchar **list; /* ... of config files */\n\tchar **listhead;\n\tunsigned int i;\n\ttime_t modtime;\n\thandler_ret_t ret = HANDLER_SUCCESS; /* accumulated return value */\n\n\tif (stat(path_addconfigs, &sb) == -1) /* no work to do */\n\t\treturn HANDLER_SUCCESS;\n\n\tif ((pathlen = pathconf(mom_home, _PC_PATH_MAX)) == -1) {\n\t\tlog_err(errno, __func__, \"pathconf\");\n\t\treturn HANDLER_FAIL;\n\t} else if ((namebuf = malloc(pathlen)) == NULL) {\n\t\tlog_err(errno, __func__, \"malloc\");\n\t\treturn HANDLER_FAIL;\n\t}\n\n\tif ((listhead = do_readdir(path_addconfigs, &modtime)) == NULL) {\n\t\tfree(namebuf);\n\t\treturn HANDLER_SUCCESS; /* no work to do */\n\t}\n\tfor (list = listhead; list != NULL && *list != NULL; list++) {\n\t\tif (strstr(*list, path_addconfigs_reserved_prefix) == *list) {\n\t\t\tif (snprintf(namebuf, pathlen, \"%s/%s\", path_addconfigs,\n\t\t\t\t     *list) >= pathlen) {\n\t\t\t\tsprintf(log_buffer, \"%s/%s\", path_addconfigs,\n\t\t\t\t\t*list);\n\t\t\t\tlog_err(ENAMETOOLONG, __func__, log_buffer);\n\t\t\t\tfree(namebuf);\n\t\t\t\tfree_dirlist(listhead);\n\t\t\t\treturn HANDLER_FAIL;\n\t\t\t}\n\t\t\tfor (i = 0; addspecial[i].name; i++) {\n\t\t\t\tif (strcmp(*list, addspecial[i].name) == 0)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\tif (addspecial[i].name == NULL) {\n\t\t\t\t/* no special handling */\n\t\t\t\tif (parse_config(namebuf) == HANDLER_FAIL)\n\t\t\t\t\tret = HANDLER_FAIL;\n\t\t\t} else 
{\n\t\t\t\tif (addspecial[i].handler(namebuf) ==\n\t\t\t\t    HANDLER_FAIL)\n\t\t\t\t\tret = HANDLER_FAIL;\n\t\t\t}\n\t\t}\n\t}\n\tfor (list = listhead; list != NULL && *list != NULL; list++) {\n\t\tif (strstr(*list, path_addconfigs_reserved_prefix) != *list) {\n\t\t\tif (snprintf(namebuf, pathlen, \"%s/%s\", path_addconfigs,\n\t\t\t\t     *list) >= pathlen) {\n\t\t\t\tsprintf(log_buffer, \"%s/%s\", path_addconfigs,\n\t\t\t\t\t*list);\n\t\t\t\tlog_err(ENAMETOOLONG, __func__, log_buffer);\n\t\t\t\tfree(namebuf);\n\t\t\t\tfree_dirlist(listhead);\n\t\t\t\treturn HANDLER_FAIL;\n\t\t\t}\n\t\t\tif (parse_config(namebuf) == HANDLER_FAIL)\n\t\t\t\tret = HANDLER_FAIL;\n\t\t}\n\t}\n\n\tfree(namebuf);\n\tfree_dirlist(listhead);\n\n\tif (vnlp != NULL && modtime > vnlp->vnl_modtime)\n\t\tvnlp->vnl_modtime = modtime;\n\n\treturn (ret);\n}\n\n/**\n * @brief\n *\tOpen and read the config file.  Save information in a linked\n *\tlist.  After reading the file, create an array, copy the list\n *\telements to the array and free the list.\n *\n * @param[in] file - filename\n *\n * @return int\n * @retval 0 success\n * @retval 1 error\n *\n */\nint\nread_config(char *file)\n{\n\tstruct config_list *cp;\n\tstruct config *ap;\n\tint i, j;\n\tint addconfig_ret;\n\n\t/*\tinitialize variable that can be set by config entries in case\t*/\n\t/*\tthey are removed and we are HUPped\t\t\t\t*/\n\n\tfor (i = 0; i < mask_num; i++)\n\t\tfree(maskclient[i]);\n\tmask_num = 0;\n\n\taverage_percent_over = 50;\n\taverage_cpufactor = 1.025;\n\taverage_trialperiod = 120;\n\tcomplex_mem_calc = 0;\n\tcpuburst = 0;\n\tcpuaverage = 0;\n\tcputfactor = 1.0;\n\tdelta_percent_over = 50;\n\tdelta_cpufactor = 1.05;\n\tdelta_weightup = 0.4;\n\tdelta_weightdown = 0.1;\n\tenforce_mem = 0;\n\tideal_load_val = -1.0;\n\tmax_load_val = -1.0;\n\tidle_avail = 0;\n\tidle_busy = 10;\n\tidle_check = -1;\n\tidle_poll = 1;\n\tidle_on_maxload = 0;\n\tcycle_harvester = 0;\n\twallfactor = 1.0;\n\tsuspend_signal = 
SIGSTOP;\n\tresume_signal = SIGCONT;\n\trestart_background = FALSE;\n\treject_root_scripts = FALSE;\n\treport_hook_checksums = TRUE;\n\trestart_transmogrify = FALSE;\n\tattach_allow = TRUE;\n\tmax_check_poll = MAX_CHECK_POLL_TIME;\n\tmin_check_poll = MIN_CHECK_POLL_TIME;\n\tvnode_additive = 1; /* keep vnodes on HUP */\n\tjoinjob_alarm_time = -1;\n\tjob_launch_delay = -1;\n#ifdef NAS\t       /* localmod 015 */\n\tspoolsize = 0; /* unlimited by default */\n#endif\t\t       /* localmod 015 */\n\n#if MOM_ALPS\n\talps_release_wait_time = ALPS_REL_WAIT_TIME_DFLT;\n\talps_release_jitter = ALPS_REL_JITTER_DFLT;\n\tvnode_per_numa_node = FALSE;\n\talps_release_timeout = ALPS_REL_TIMEOUT;\n\talps_confirm_empty_timeout = ALPS_CONF_EMPTY_TIMEOUT;\n\talps_confirm_switch_timeout = ALPS_CONF_SWITCH_TIMEOUT;\n\tset_alps_client(NULL);\n#endif /* MOM_ALPS */\n\n\tstrcpy(pbs_jobdir_root, \"\");\n\trestrict_user = 0;\n\trestrict_user_maxsys = 999;\n\tgen_nodefile_on_sister_mom = TRUE;\n\tfor (j = 0; j < NUM_RESTRICT_USER_EXEMPT_UIDS; j++)\n\t\tif (restrict_user_exempt_uids[j] != 0)\n\t\t\trestrict_user_exempt_uids[j] = 0;\n\n\tfor (i = 0; i < (int) LastAction; i++) {\n\t\tif (mom_action[i].ma_script) {\n\t\t\tfree(mom_action[i].ma_script);\n\t\t\tmom_action[i].ma_script = NULL;\n\t\t}\n\t\tif (mom_action[i].ma_args) {\n\t\t\tfor (j = 0; mom_action[i].ma_args[j]; j++)\n\t\t\t\tfree(mom_action[i].ma_args[j]);\n\t\t\tfree(mom_action[i].ma_args);\n\t\t\tmom_action[i].ma_args = NULL;\n\t\t}\n\t\tmom_action[i].ma_timeout = 0;\n\t}\n\n\tif (file == NULL)\n\t\tfile = config_file;\n\tif (file[0] == '\\0')\n\t\treturn 0; /* no config file */\n\n\tif (access(file, F_OK) == -1) {\n\t\tsprintf(log_buffer, \"access: %s\", file);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\tif (config_file_specified)\n\t\t\treturn 1; /* file given and not there = error */\n\t\telse\n\t\t\treturn 0; /* ok for \"config\" not to be there  */\n\t}\n#if !defined(DEBUG) && !defined(NO_SECURITY_CHECK)\n\tint 
perm;\n\tperm = get_permission(\"write\");\n\tif (chk_file_sec(file, 0, 0, perm, FULLPATH)) {\n\t\tsprintf(log_buffer,\n\t\t\t\"warning: %s file has a non-secure file access mask\", file);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\treturn 1;\n\t}\n#endif /* NO_SECURITY_CHECK */\n\n\tnconfig = 0;\n\tif (parse_config(file) == HANDLER_FAIL)\n\t\treturn 1;\n\n\taddconfig_ret = do_addconfigs();\n\n\t/* check for any bad combinations */\n\tif (min_check_poll > max_check_poll) {\n\t\tsprintf(log_buffer, \"min_check_poll(%u) > max_check_poll(%u)\",\n\t\t\tmin_check_poll, max_check_poll);\n\t\tlog_event(PBSEVENT_SYSTEM, 0, LOG_ERR, __func__, log_buffer);\n\n\t\tmax_check_poll = MAX_CHECK_POLL_TIME;\n\t\tmin_check_poll = MIN_CHECK_POLL_TIME;\n\t}\n\tsprintf(log_buffer, \"max_check_poll = %u, min_check_poll = %u\",\n\t\tmax_check_poll, min_check_poll);\n\tlog_event(PBSEVENT_SYSTEM, 0, LOG_DEBUG, __func__, log_buffer);\n\n\tinc_check_poll = (max_check_poll - min_check_poll + 19) / 20;\n\n\tif (restart_transmogrify) {\n\t\tif (mom_action[RestartAction].ma_script == NULL) {\n\t\t\tsprintf(log_buffer, \"restart_transmogrify \"\n\t\t\t\t\t    \"value is TRUE but there is no restart script;\"\n\t\t\t\t\t    \" This is unsupported\");\n\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\treturn 1;\n\t\t} else if (!restart_background) {\n\t\t\tsprintf(log_buffer, \"WARNING: restart_background \"\n\t\t\t\t\t    \"value is FALSE but restart_transmogrify is \"\n\t\t\t\t\t    \"TRUE; this type of restart takes place in \"\n\t\t\t\t\t    \"the background regardless of the setting \"\n\t\t\t\t\t    \"of restart_background\");\n\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t}\n\t} else if (!restart_background &&\n\t\t   mom_action[RestartAction].ma_script != NULL) {\n\t\tsprintf(log_buffer, \"WARNING: restart_background value \"\n\t\t\t\t    \"is FALSE but restart is being done by a script; \"\n\t\t\t\t    \"restart_background forced TRUE\");\n\t\tlog_err(-1, __func__, 
log_buffer);\n\t}\n\n\t/* Create a new config_array[] */\n\tif (config_array) {\n\t\tfor (ap = config_array; ap->c_name; ap++) {\n\t\t\tfree(ap->c_name);\n\t\t\tfree(ap->c_u.c_value);\n\t\t}\n\t\tfree(config_array);\n\t}\n\tconfig_array = (struct config *) calloc(nconfig + 1,\n\t\t\t\t\t\tsizeof(struct config));\n\tmemcheck((char *) config_array);\n\n\t/* copy information from config_list to config_array[] */\n\tfor (i = 0, ap = config_array; i < nconfig; i++, ap++) {\n\t\t*ap = config_list->c;\n\t\tcp = config_list->c_link;\n\t\tfree(config_list); /* don't free name and value strings */\n\t\tconfig_list = cp;  /* they carry over from the list */\n\t}\n\tap->c_name = NULL; /* config_array[] is NULL-terminated */\n\n\tif (addconfig_ret == HANDLER_FAIL)\n\t\treturn (1);\n\telse {\n\t\tif (joinjob_alarm_time == -1)\n\t\t\tupdate_joinjob_alarm_time = 1;\n\t\telse\n\t\t\tupdate_joinjob_alarm_time = 0;\n\n\t\tif (job_launch_delay == -1)\n\t\t\tupdate_job_launch_delay = 1;\n\t\telse\n\t\t\tupdate_job_launch_delay = 0;\n\n\t\treturn (0);\n\t}\n}\n\n/**\n * @brief\n *\tInserts the config file\n *\n * @param[in] name - name of file\n * @param[in] input - input for file\n *\n * @return Void\n *\n */\n\nvoid\ndoconfig_insert(char *name, char *input)\n{\n\tstruct stat sb;\n\tssize_t nread;\n\tint fdin, fdout;\n\tlong pathlen;\n\tchar *namebuf;\n\tchar iobuf[BUFSIZ];\n\n\tsprintf(log_buffer, \"name %s, input %s\", name, input);\n\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_SERVER, LOG_DEBUG,\n\t\t  __func__, log_buffer);\n\n\tif (stat(path_addconfigs, &sb) == -1) {\n\t\t/*\n\t\t *\tFirst time through, we will need to make the directory\n\t\t *\tthat holds new config files.\n\t\t */\n\n\t\tif (mkdir(path_addconfigs, S_IRWXU) == -1) {\n\t\t\tsprintf(log_buffer, \"mkdir %s\", path_addconfigs);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\texit(1);\n\t\t}\n\t} else if (!S_ISDIR(sb.st_mode)) {\n\t\tsprintf(log_buffer, \"%s is not a directory (0%o)\",\n\t\t\tpath_addconfigs, 
sb.st_mode);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\texit(1);\n\t}\n\n\tif ((pathlen = pathconf(mom_home, _PC_PATH_MAX)) == -1) {\n\t\tlog_err(errno, __func__, \"pathconf\");\n\t\texit(1);\n\t} else if ((namebuf = malloc(pathlen)) == NULL) {\n\t\tlog_err(errno, __func__, \"malloc\");\n\t\texit(1);\n\t}\n\n\tif (snprintf(namebuf, pathlen, \"%s/%s\", path_addconfigs, name) >=\n\t    pathlen) {\n\t\tsprintf(log_buffer, \"%s/%s\", path_addconfigs, name);\n\t\tlog_err(ENAMETOOLONG, __func__, log_buffer);\n\t\texit(1);\n\t}\n\tif (stat(namebuf, &sb) == 0) {\n\t\tsprintf(log_buffer, \"attempt to add existing config file %s\",\n\t\t\tnamebuf);\n\t\tlog_err(EEXIST, __func__, log_buffer);\n\t\texit(1);\n\t}\n\tif (strstr(name, path_addconfigs_reserved_prefix) == name) {\n\t\tsprintf(log_buffer, \"config file may not start with \\\"%s\\\"\",\n\t\t\tpath_addconfigs_reserved_prefix);\n\t\tlog_err(EPERM, __func__, log_buffer);\n\t\texit(1);\n\t}\n\n\tfdin = fdout = -1; /* avoid accidentally close()ing an open fd */\n\tif (access(input, R_OK) == -1) {\n\t\tsprintf(log_buffer, \"access R_OK %s\", input);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\texit(1);\n\t} else if ((fdin = open(input, O_RDONLY)) == -1) {\n\t\tsprintf(log_buffer, \"open %s\", input);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\texit(1);\n\t} else if ((fdout = open(namebuf, O_WRONLY | O_CREAT, 0600)) == -1) {\n\t\tsprintf(log_buffer, \"open %s\", namebuf);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\texit(1);\n\t} else {\n\t\twhile ((nread = read(fdin, iobuf, sizeof(iobuf))) != 0) {\n\t\t\tif (nread == -1) {\n\t\t\t\tlog_err(errno, __func__, \"read\");\n\t\t\t\texit(1);\n\t\t\t} else if (write(fdout, iobuf, nread) != nread) {\n\t\t\t\tlog_err(errno, __func__, \"write\");\n\t\t\t\texit(1);\n\t\t\t}\n\t\t}\n\t}\n\n\t(void) close(fdout);\n\t(void) close(fdin);\n\tfree(namebuf);\n}\n\n/**\n * @brief\n *\tRemoves config files\n *\n * @param[in] name - name of file\n *\n * @return Void\n *\n 
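* The reserved-prefix guard applied before anything is unlinked can be
* sketched in isolation as follows (a minimal stand-alone sketch; the
* helper name and the "PBS" prefix used in the checks are illustrative
* assumptions, the real prefix is path_addconfigs_reserved_prefix):

```c
#include <assert.h>
#include <string.h>

// Sketch of the reserved-prefix guard: a config file whose name begins
// with the reserved prefix may not be removed.  The code in this file
// tests the prefix with strstr(name, prefix) == name; strncmp() over
// the prefix length is an equivalent formulation.
static int
removable(const char *name, const char *reserved_prefix)
{
	return strncmp(name, reserved_prefix, strlen(reserved_prefix)) != 0;
}
```

* A name such as "site.conf" passes the guard, while a name starting
* with the reserved prefix is refused and logged with EPERM.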
*/\n\nvoid\ndoconfig_remove(char *name)\n{\n\tstruct stat sb;\n\tlong pathlen;\n\tchar *namebuf;\n\n\tsprintf(log_buffer, \"name %s\", name);\n\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_SERVER, LOG_DEBUG,\n\t\t  __func__, log_buffer);\n\n\tif ((pathlen = pathconf(mom_home, _PC_PATH_MAX)) == -1) {\n\t\tlog_err(errno, __func__, \"pathconf\");\n\t\texit(1);\n\t} else if ((namebuf = malloc(pathlen)) == NULL) {\n\t\tlog_err(errno, __func__, \"malloc\");\n\t\texit(1);\n\t}\n\n\tif (strstr(name, path_addconfigs_reserved_prefix) == name) {\n\t\tsprintf(log_buffer, \"file begins with reserved prefix \\\"%s\\\"\"\n\t\t\t\t    \" and may not be removed\",\n\t\t\tpath_addconfigs_reserved_prefix);\n\t\tlog_err(EPERM, __func__, log_buffer);\n\t\texit(1);\n\t}\n\n\tif (snprintf(namebuf, pathlen, \"%s/%s\", path_addconfigs, name) >=\n\t    pathlen) {\n\t\tsprintf(log_buffer, \"%s/%s\", path_addconfigs, name);\n\t\tlog_err(ENAMETOOLONG, __func__, log_buffer);\n\t\texit(1);\n\t}\n\tif (stat(namebuf, &sb) == -1) {\n\t\tlog_err(errno, __func__, namebuf);\n\t\texit(1);\n\t}\n\n\tif (S_ISREG(sb.st_mode)) {\n\t\tif (unlink(namebuf) == -1) {\n\t\t\tlog_err(errno, __func__, namebuf);\n\t\t\texit(1);\n\t\t}\n\t} else {\n\t\t/*\n\t\t *\tWe were asked to remove something that was not a regular\n\t\t *\tfile, and refuse.  
We use the same error (EPERM) that\n\t\t *\tunlink() returns if ``The file named by path is a\n\t\t *\tdirectory, and either the calling process does not\n\t\t *\thave appropriate privileges, or the implementation\n\t\t *\tprohibits using unlink() on directories.''\n\t\t */\n\t\tsprintf(log_buffer, \"%s/%s\", path_addconfigs, name);\n\t\tlog_err(EPERM, __func__, log_buffer);\n\t\texit(1);\n\t}\n\n\tfree(namebuf);\n}\n\n/**\n * @brief\n *\tLists the config files by reading the directory that holds them\n *\n * @return Void\n *\n */\n\nvoid\ndoconfig_list(void)\n{\n\tstruct stat sb;\n\tchar **list;\n\tchar **listhead;\n\n\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_SERVER, LOG_DEBUG, __func__, \"\");\n\n\tif (stat(path_addconfigs, &sb) == -1)\n\t\treturn;\n\tif ((listhead = do_readdir(path_addconfigs, NULL)) == NULL)\n\t\treturn; /* no work to do */\n\n\tfor (list = listhead; list != NULL && *list != NULL; list++)\n\t\tif (strstr(*list, path_addconfigs_reserved_prefix) == *list)\n\t\t\tprintf(\"%s\\n\", *list);\n\tfor (list = listhead; list != NULL && *list != NULL; list++)\n\t\tif (strstr(*list, path_addconfigs_reserved_prefix) != *list)\n\t\t\tprintf(\"%s\\n\", *list);\n\n\tfree_dirlist(listhead);\n}\n\n/**\n * @brief\n *\tWrites the contents of the named config file to standard output.\n *\n * @param[in] name - name of file\n *\n * @return Void\n *\n */\nvoid\ndoconfig_show(char *name)\n{\n\tssize_t nread;\n\tint fdin;\n\tlong pathlen;\n\tchar *namebuf;\n\tchar iobuf[BUFSIZ];\n\n\tsprintf(log_buffer, \"name %s\", name);\n\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_SERVER, LOG_DEBUG,\n\t\t  __func__, log_buffer);\n\n\tif ((pathlen = pathconf(mom_home, _PC_PATH_MAX)) == -1) {\n\t\tlog_err(errno, __func__, \"pathconf\");\n\t\texit(1);\n\t} else if ((namebuf = malloc(pathlen)) == NULL) {\n\t\tlog_err(errno, __func__, \"malloc\");\n\t\texit(1);\n\t}\n\n\tif (snprintf(namebuf, pathlen, \"%s/%s\", path_addconfigs, name) >=\n\t    pathlen) {\n\t\tsprintf(log_buffer, \"%s/%s\", path_addconfigs, name);\n\t\tlog_err(ENAMETOOLONG, __func__, log_buffer);\n\t\texit(1);\n\t}\n\tfdin = -1; /* avoid accidentally close()ing an 
open fd */\n\tif (access(namebuf, R_OK) == -1) {\n\t\tsprintf(log_buffer, \"access R_OK %s\", namebuf);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\texit(1);\n\t} else if ((fdin = open(namebuf, O_RDONLY)) == -1) {\n\t\tsprintf(log_buffer, \"open %s\", namebuf);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\texit(1);\n\t} else {\n\t\twhile ((nread = read(fdin, iobuf, sizeof(iobuf))) != 0) {\n\t\t\tif (nread == -1) {\n\t\t\t\tlog_err(errno, __func__, \"read\");\n\t\t\t\texit(1);\n\t\t\t} else if (write(1, iobuf, nread) != nread) {\n\t\t\t\tlog_err(errno, __func__, \"write\");\n\t\t\t\texit(1);\n\t\t\t}\n\t\t}\n\t}\n\n\t(void) close(fdin);\n\tfree(namebuf);\n}\n\n/**\n * @brief\n *\twrapper function for different operations on config file\n *\n * @param[in] action - action name\n * @param[in] name - name of config file\n * @param[in] input - input for config file\n *\n * @return Void\n *\n */\n\nvoid\ndo_configs(char *action, char *name, char *input)\n{\n\n\tif (strcmp(action, \"insert\") == 0)\n\t\tdoconfig_insert(name, input);\n\telse if (strcmp(action, \"remove\") == 0)\n\t\tdoconfig_remove(name);\n\telse if (strcmp(action, \"show\") == 0)\n\t\tdoconfig_show(name);\n\telse if (strcmp(action, \"list\") == 0)\n\t\tdoconfig_list();\n\telse {\n\t\tsprintf(log_buffer, \"internal error:  unexpected action %s\",\n\t\t\taction);\n\t\tlog_err(-1, __func__, log_buffer);\n\t}\n\n\texit(0);\n}\n\n/**\n * @brief\n *\tGet an rm_attribute structure from a string.  
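* The bracket scan used to extract the value can be sketched as a
* stand-alone helper (a simplified version; the real code writes into a
* fixed static buffer, so the size handling here is an assumption):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

// Simplified sketch of the value scan: copy characters until the ']'
// that closes the attribute, allowing nested '[' ... ']' pairs inside
// the value.  Returns 0 on success, -1 if no closing bracket is found.
static int
scan_value(const char *s, char *valu, size_t vsz)
{
	int level = 0;
	size_t i = 0;

	for (; *s && i + 1 < vsz; s++) {
		if (*s == '[')
			level++;
		else if (*s == ']') {
			if (level == 0) {
				valu[i] = '\0';
				return 0;	// found the matching ']'
			}
			level--;
		}
		valu[i++] = *s;
	}
	return -1;	// string ended before the closing bracket
}

// Convenience wrapper used for checking results.
static int
scan_eq(const char *s, const char *expect)
{
	char b[128];
	return scan_value(s, b, sizeof(b)) == 0 && strcmp(b, expect) == 0;
}
```
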
If a NULL is passed\n *\tfor the string, use the previously remembered string.\n *\n * @param[in] str - string holding info of attributes structure\n *\n * @return structure handle\n * @retval\tpointer to rm_attribute structure\tSuccess\n * @retval\tNULL\t\t\t\t\tFailure\n *\n */\nstruct rm_attribute *\nmomgetattr(char *str)\n{\n\tstatic char cookie[] = \"tag:\"; /* rm_attribute to ignore */\n\tstatic char *hold = NULL;\n\tstatic char qual[256] = \"\";\n\tstatic char valu[4096] = \"\";\n\tstatic struct rm_attribute attr = {qual, valu};\n\tint level, i;\n\n\tif (str == NULL) /* if NULL is passed, used prev value */\n\t\tstr = hold;\n\n\tdo {\n\t\tstr = skipwhite(str);\n\t\tif (*str++ != '[')\n\t\t\treturn NULL;\n\n\t\tstr = skipwhite(str); /* copy qualifier */\n\t\tstr = TOKCPY(str, qual);\n\t\tstr = skipwhite(str);\n\n\t\tif (*str++ != '=')\n\t\t\treturn NULL;\n\n\t\tlevel = 0;\n\t\tfor (i = 0; *str; str++, i++) {\n\t\t\tif (*str == '[')\n\t\t\t\tlevel++;\n\t\t\telse if (*str == ']') {\n\t\t\t\tif (level == 0)\n\t\t\t\t\tbreak;\n\t\t\t\tlevel--;\n\t\t\t}\n\t\t\tvalu[i] = *str;\n\t\t}\n\t\tif (*str++ != ']')\n\t\t\treturn NULL;\n\n\t\tvalu[i] = '\\0';\n\t\tDBPRT((\"momgetattr: found %s = %s\\n\", qual, valu))\n\t} while (strncmp(qual, cookie, sizeof(cookie) - 1) == 0);\n\thold = str;\n\tDBPRT((\"momgetattr: passing back %s = %s\\n\", qual, valu))\n\treturn &attr;\n}\n\n/**\n * @brief\n *\tCheck the request against the format of the line read from\n *\tthe config file.  If it is a static value, there should be\n *\tno params.  
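* The %-parameter substitution step can be sketched as a stand-alone
* helper (a simplified single-parameter sketch; the real code handles up
* to RM_NPARM parameters, extracts a whole token with TOKCPY rather than
* matching a prefix, and flags unused parameters as errors):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

// Simplified sketch of %name substitution in a command template.
// Note: this matches the name as a simple prefix after '%', whereas the
// real code tokenizes the name first.  Returns 0 on success, -1 if the
// output buffer would overflow.
static int
subst(const char *tmpl, const char *name, const char *value,
      char *out, size_t osz)
{
	size_t nlen = strlen(name);
	size_t o = 0;

	while (*tmpl) {
		if (*tmpl == '%' && strncmp(tmpl + 1, name, nlen) == 0) {
			size_t vlen = strlen(value);
			if (o + vlen >= osz)
				return -1;
			memcpy(out + o, value, vlen);
			o += vlen;
			tmpl += 1 + nlen;	// skip "%name"
		} else {
			if (o + 1 >= osz)
				return -1;
			out[o++] = *tmpl++;
		}
	}
	out[o] = '\0';
	return 0;
}

// Convenience wrapper used for checking results.
static int
subst_eq(const char *tmpl, const char *name, const char *value,
	 const char *expect)
{
	char b[256];
	return subst(tmpl, name, value, b, sizeof(b)) == 0 &&
	       strcmp(b, expect) == 0;
}
```
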
If it is a shell escape, the parameters (if any)\n *\tshould match the command line for the system call.\n *\n *\n */\nchar *\nconf_res(char *s, struct rm_attribute *attr)\n{\n\tchar *name[RM_NPARM];\n\tchar *value[RM_NPARM];\n\tint used[RM_NPARM];\n\tchar param[256], *d;\n\tint i, len;\n\tchar *filename = NULL;\n#ifdef WIN32\n\tpio_handles child;\n#else\n\tFILE *child;\n\tint fd;\n#endif\n\tchar *child_spot;\n\tint child_len;\n\tint secondalarm = 0;\n\tint err;\n\tint perm;\n\n\tif (*s != '!') { /* static value */\n\t\tif (attr) {\n\t\t\tsprintf(ret_string, \"? %d\", RM_ERR_BADPARAM);\n\t\t\treturn ret_string;\n\t\t} else\n\t\t\treturn s;\n\t}\n\n\tif (restrictrm) /* no restricted shell escape */\n\t\treturn \"?\";\n\n\t/*\n\t **\tFrom here on we are going to put together a shell command\n\t **\tto do the requestor's bidding.  Parameter substitution\n\t **\tis the first step.\n\t */\n\tfor (i = 0; i < RM_NPARM; i++) { /* remember params */\n\t\tif (attr == NULL)\n\t\t\tbreak;\n\t\tname[i] = strdup(attr->a_qualifier);\n\t\tmemcheck(name[i]);\n\t\tvalue[i] = strdup(attr->a_value);\n\t\tmemcheck(value[i]);\n\t\tused[i] = 0;\n\t\tattr = momgetattr(NULL);\n\t}\n\tif (attr) { /* too many params */\n\t\tlog_err(-1, __func__, \"too many parms\");\n\t\tsprintf(ret_string, \"? 
%d\", RM_ERR_BADPARAM);\n\t\tgoto done;\n\t}\n\tname[i] = NULL;\n\n\tfor (d = ret_string, s++; *s;) { /* scan command */\n\t\tif (*s == '%') {\t /* possible token */\n\t\t\tchar *hold;\n\n\t\t\thold = TOKCPY(s + 1, param);\n\t\t\tfor (i = 0; name[i]; i++) {\n\t\t\t\tif (strcmp(param, name[i]) == 0)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\tif (name[i]) { /* found a match */\n\t\t\t\tchar *x = value[i];\n\t\t\t\twhile (*x)\n\t\t\t\t\t*d++ = *x++;\n\t\t\t\ts = hold;\n\t\t\t\tused[i] = 1;\n\t\t\t} else\n\t\t\t\t*d++ = *s++;\n\t\t} else\n\t\t\t*d++ = *s++;\n\t}\n\tfor (i = 0; name[i]; i++) {\n\t\tif (!used[i]) { /* parameter sent but not used */\n\t\t\tlog_err(-1, __func__, \"unused parameters\");\n\t\t\tsprintf(ret_string, \"? %d\", RM_ERR_BADPARAM);\n\t\t\tgoto done;\n\t\t}\n\t}\n\n\t*d = '\\0';\n\tDBPRT((\"command: %s\\n\", ret_string))\n\n\tfilename = get_script_name(ret_string);\n\tif (filename == NULL)\n\t\treturn NULL;\n\t/* Make sure file does not have open permissions */\n\tperm = get_permission(\"write\");\n\terr = tmp_file_sec(filename, 0, 1, perm, 1);\n\n\tif (err != 0) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"error: %s file has a non-secure file access, errno: %d\", filename, err);\n\t\tlog_event(PBSEVENT_SECURITY, PBS_EVENTCLASS_SERVER, LOG_ERR, __func__, log_buffer);\n\t\tgoto done;\n\t}\n\n#ifdef WIN32\n\tif (!win_popen(ret_string, \"w\", &child, NULL)) {\n\t\terrno = GetLastError();\n\t\tlog_err(errno, __func__, \"popen\");\n\t\tsprintf(ret_string, \"? %d\", RM_ERR_SYSTEM);\n\t\tgoto done;\n\t}\n#else\n\tif ((child = pbs_popen(ret_string, \"r\")) == NULL) {\n\t\tlog_err(errno, __func__, \"popen\");\n\t\tsprintf(ret_string, \"? 
%d\", RM_ERR_SYSTEM);\n\t\tgoto done;\n\t}\n#endif /* WIN32 */\n\n#ifdef WIN32\n\tshell_escape_handle = child.pi.hProcess;\n\t(void) win_alarm(alarm_time, shell_escape_timeout);\n#else\n\tfd = fileno(child);\n#endif /* WIN32 */\n\tchild_spot = ret_string;\n\tchild_len = 0;\n\tchild_spot[0] = '\\0';\n\twhile (child_len < ret_size) {\n\n#ifdef WIN32\n\t\tif ((len = win_pread(&child, child_spot, ret_size - child_len)) > 0)\n#else\n\t\tif ((len = read(fd, child_spot, ret_size - child_len)) > 0)\n#endif\n\t\t{\n\n\t\t\tfor (i = 0; i < len; i++) {\n#ifdef WIN32\n\t\t\t\t/* match \\r\\n in windows */\n\t\t\t\tif ((child_spot[i] == '\\n') || (child_spot[i] == '\\r'))\n#else\n\t\t\t\tif (child_spot[i] == '\\n')\n#endif\n\t\t\t\t{\n\t\t\t\t\tchild_spot[i] = '\\0';\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (i < len) /* found newline */\n\t\t\t\tbreak;\n\n\t\t\tchild_len += len;\n\t\t\tif (child_len >= ret_size) {\n\t\t\t\tlog_err(-1, __func__, \"line too long\");\n\t\t\t\tsprintf(ret_string, \"? %d\", RM_ERR_SYSTEM);\n\t\t\t\tbreak;\n\t\t\t}\n\n\t\t\tchild_spot += len;\n\t\t\tcheckret(&child_spot, len);\n\t\t} else if (len == 0) {\n#ifdef WIN32\n\t\t\tif (GetLastError() == ERROR_BROKEN_PIPE) {\n\t\t\t\tlog_err(errno, __func__, \"resource read\");\n\t\t\t\tsprintf(ret_string, \"? %d\", RM_ERR_SYSTEM);\n\t\t\t}\n#endif\n\t\t\tbreak;\n\t\t} else if ((len == -1) && (errno == EINTR)) {\n\t\t\tlog_err(errno, __func__, \"resource read alarm\");\n\t\t\tif (secondalarm) {\n#ifndef WIN32\n\t\t\t\tpbs_pkill(child, SIGKILL);\n#endif\n\t\t\t\tsprintf(ret_string, \"? %d\", RM_ERR_SYSTEM);\n\t\t\t\tbreak;\n\t\t\t} else {\n#ifndef WIN32\n\t\t\t\tpbs_pkill(child, SIGINT);\n\t\t\t\t(void) alarm(alarm_time);\n#endif\n\t\t\t\tsecondalarm = 1;\n\t\t\t}\n\t\t} else {\n\t\t\tlog_err(errno, __func__, \"resource read\");\n\t\t\tsprintf(ret_string, \"? 
%d\", RM_ERR_SYSTEM);\n\t\t\tbreak;\n\t\t}\n\t}\n\n#ifdef WIN32\n\n\tif (shell_escape_handle != INVALID_HANDLE_VALUE) {\n\t\tclose_valid_handle(&(shell_escape_handle));\n\t\tchild.pi.hThread = INVALID_HANDLE_VALUE;\n\t\tchild.pi.hProcess = INVALID_HANDLE_VALUE;\n\t}\n\t(void) win_alarm(0, NULL);\n\twin_pclose(&child);\n#else\n\tpbs_pclose(child);\n#endif /* WIN32 */\n\ndone:\n\tfor (i = 0; name[i]; i++) { /* free up params */\n\t\tfree(name[i]);\n\t\tfree(value[i]);\n\t}\n\tfree(filename);\n\treturn ret_string;\n}\n#ifndef WIN32\nextern void process_hup(void);\n#endif\n\n#ifdef DEBUG\n/**\n * @brief\n *\tcreates logs event\n *\n * @param[in] id - function name\n * @param[in] buf - msg\n * @param[in] len - length of msg\n *\n * @return Void\n *\n */\nvoid\nlog_verbose(char *id, char *buf, int len)\n{\n\tint i;\n\tchar *cp;\n\n\tlen = MIN(len, 50);\n\tcp = log_buffer;\n\tfor (i = 0; i < len; i++) {\n\t\tint c = buf[i];\n\n\t\tif (isprint(c))\n\t\t\t*cp++ = c;\n\t\telse {\n\t\t\tsprintf(cp, \"(%d)\", c);\n\t\t\tcp += strlen(cp);\n\t\t}\n\t}\n\t*cp = '\\0';\n\tlog_event(PBSEVENT_DEBUG, 0, LOG_DEBUG, id, log_buffer);\n}\n#else\n#define log_verbose(a, b, c)\n#endif\n\n/**\n * @brief\n *\tSee if an IP address matches any names stored as \"restricted\"\n *\taccess hosts.  
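* The comparison walks both strings backwards; in isolation the idea can
* be sketched like this (a minimal sketch; the real code additionally
* pre-checks the lengths and resolves the host name from its address
* before comparing):

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

// Sketch of the reverse, case-insensitive host match: compare from the
// ends of both strings; succeed when the whole pattern is consumed, or
// when only a leading '*' wildcard remains unmatched.
static int
host_matches(const char *host, const char *pat)
{
	int i = (int) strlen(host) - 1;
	int j = (int) strlen(pat) - 1;

	while (i >= 0 && j >= 0 &&
	       tolower((unsigned char) host[i]) ==
		       tolower((unsigned char) pat[j])) {
		i--;
		j--;
	}
	return j == -1 || (j == 0 && pat[0] == '*');
}
```
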
Return 0 if a name matches, 1 if not.\n *\n * @param[in] ipadd - ip address of host\n *\n * @return\tint\n * @retval\t0\ta name matches (host is an allowed restricted-access client)\n * @retval\t1\tno match (access refused)\n *\n */\nint\nbad_restrict(u_long ipadd)\n{\n\tstruct hostent *host;\n\tstruct in_addr in;\n\tint i, len1, len2;\n\tchar *cp1, *cp2;\n\n\tin.s_addr = htonl(ipadd);\n\tif ((host = gethostbyaddr((void *) &in,\n\t\t\t\t  sizeof(struct in_addr), AF_INET)) == NULL)\n\t\treturn 1;\n\tlen1 = strlen(host->h_name) - 1;\n\n\tfor (i = 0; i < mask_num; i++) {\n\t\tlen2 = strlen(maskclient[i]) - 1;\n\t\tif (len1 < len2)\n\t\t\tcontinue;\n\t\tcp1 = &host->h_name[len1];\n\t\tcp2 = &maskclient[i][len2];\n\t\twhile (len2 >= 0 && tolower(*cp1) == tolower(*cp2)) {\n\t\t\tcp1--;\n\t\t\tcp2--;\n\t\t\tlen2--;\n\t\t}\n\t\tif ((len2 == 0 && *cp2 == '*') || len2 == -1)\n\t\t\treturn 0;\n\t}\n\treturn 1;\n}\n\n/**\n * @brief\n *\tProcess a request for the resource monitor. The i/o\n *\twill take place using DIS over a TPP stream.\n *\n * @param[in] iochan - i/o channel to indicate stream or fd\n * @param[in] version - protocol version\n *\n * @return int\n * @retval\t0\tSuccess\n * @retval\t-1\tFailure\n *\n */\n\nint\nrm_request(int iochan, int version)\n{\n\tchar name[256];\n\tstatic char *output = NULL;\n\tstatic int output_size = 0;\n\tint len;\n\tint command, ret;\n\tchar *curr, *value, *cp, *body;\n\tstruct config *ap;\n\tstruct rm_attribute *attr;\n\tstruct sockaddr_in *addr;\n\tu_long ipadd = 0;\n\tu_short port = 0;\n\tvoid (*close_io)(int) = NULL;\n\n\terrno = 0;\n\tif (!output) {\n\t\toutput = (char *) malloc(BUFSIZ);\n\t\tif (!output) {\n\t\t\tlog_err(errno, __func__, \"malloc\");\n\t\t\tgoto bad;\n\t\t}\n\t\toutput_size = BUFSIZ;\n\t}\n\t(void) memset(output, 0, output_size);\n\n\taddr = tpp_getaddr(iochan);\n\tif (addr == NULL) {\n\t\tsprintf(log_buffer, \"Sender unknown\");\n\t\tgoto bad;\n\t}\n\n\tipadd = ntohl(addr->sin_addr.s_addr);\n\tport = ntohs((unsigned short) 
addr->sin_port);\n\tclose_io = (void (*)(int)) & tpp_close;\n\n\tif (version != RM_PROTOCOL_VER) {\n\t\tsprintf(log_buffer, \"protocol version %d unknown\", version);\n\t\tgoto bad;\n\t}\n\n\trestrictrm = 0;\n\tif (!addrfind(ipadd)) {\n\t\tif (bad_restrict(ipadd)) {\n\t\t\tsprintf(log_buffer, \"bad attempt to connect\");\n\t\t\tgoto bad;\n\t\t}\n\t\trestrictrm = 1;\n\t}\n\n\t/* looks okay, find out what command it is */\n\tcommand = disrsi(iochan, &ret);\n\tif (ret != DIS_SUCCESS) {\n\t\tsprintf(log_buffer, \"no command %s\", dis_emsg[ret]);\n\t\tgoto bad;\n\t}\n\n\tswitch (command) {\n\n\t\tcase RM_CMD_CLOSE: /* no response to this */\n\t\t\tclose_io(iochan);\n\t\t\treturn 1;\n\n\t\tcase RM_CMD_REQUEST:\n\t\t\treqnum++;\n\t\t\tret = diswsi(iochan, RM_RSP_OK);\n\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"write request response failed: %s\",\n\t\t\t\t\tdis_emsg[ret]);\n\t\t\t\tgoto bad;\n\t\t\t}\n\n\t\t\tfor (;;) {\n\t\t\t\tcp = disrst(iochan, &ret);\n\t\t\t\tif (ret == DIS_EOD)\n\t\t\t\t\tbreak;\n\t\t\t\telse if (ret != DIS_SUCCESS) {\n\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\"problem with request line: %s\",\n\t\t\t\t\t\tdis_emsg[ret]);\n\t\t\t\t\tgoto bad;\n\t\t\t\t}\n\t\t\t\tcurr = skipwhite(cp);\n\t\t\t\tcurr = TOKCPY(curr, name);\n\t\t\t\tif (strlen(name) == 0) { /* no name */\n\t\t\t\t\tsprintf(output, \"%s=? 
%d\",\n\t\t\t\t\t\tcp, RM_ERR_UNKNOWN);\n\t\t\t\t} else {\n\t\t\t\t\tap = rm_search(config_array, name);\n\t\t\t\t\tattr = momgetattr(curr);\n\n#ifndef WIN32\n\t\t\t\t\t(void) alarm(alarm_time);\n#endif\n\t\t\t\t\trm_errno = PBSE_NONE;\n\t\t\t\t\tif (ap) /* static */\n\t\t\t\t\t\tvalue = conf_res(ap->c_u.c_value, attr);\n\t\t\t\t\telse /* dynamic */\n\t\t\t\t\t\tvalue = dependent(name, attr);\n#ifndef WIN32\n\t\t\t\t\t(void) alarm(0);\n#endif\n\t\t\t\t\tif (value) {\n#if MOM_ALPS\n\t\t\t\t\t\tint lim = 1 << 20;\n#else\n\t\t\t\t\t\tint lim = 65536;\n#endif\n\n\t\t\t\t\t\tlen = strlen(cp) + strlen(value) + 2;\n\t\t\t\t\t\tif (len >= lim) {\n\t\t\t\t\t\t\tsprintf(output, \"%s=? %d\",\n\t\t\t\t\t\t\t\tcp, PBSE_BADATVAL);\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tchar *hold;\n\t\t\t\t\t\t\tif (len > output_size) {\n\t\t\t\t\t\t\t\thold = (char *) realloc(output, len);\n\t\t\t\t\t\t\t\tif (!hold) {\n\t\t\t\t\t\t\t\t\tlog_err(errno, __func__, \"realloc\");\n\t\t\t\t\t\t\t\t\tgoto bad;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\toutput = hold;\n\t\t\t\t\t\t\t\toutput_size = len;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tsprintf(output, \"%s=%s\", cp, value);\n\t\t\t\t\t\t}\n\t\t\t\t\t} else { /* not found anywhere */\n\t\t\t\t\t\tsprintf(output, \"%s=? 
%d\", cp, rm_errno);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tfree(cp);\n\t\t\t\tret = diswst(iochan, output);\n\t\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\"write string failed %s\",\n\t\t\t\t\t\tdis_emsg[ret]);\n\t\t\t\t\tgoto bad;\n\t\t\t\t}\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase RM_CMD_CONFIG:\n\t\t\tif (restrictrm) {\n\t\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t  LOG_NOTICE | LOG_AUTH, __func__,\n\t\t\t\t\t  \"restricted configure attempt\");\n\t\t\t\tgoto bad;\n\t\t\t}\n\n\t\t\tlog_event(PBSEVENT_SYSTEM, 0, LOG_INFO, __func__, \"configure\");\n\t\t\tbody = disrst(iochan, &ret);\n\t\t\tif (ret == DIS_EOD)\n\t\t\t\tbody = NULL;\n\t\t\telse if (ret != DIS_SUCCESS) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"problem with config body %s\",\n\t\t\t\t\tdis_emsg[ret]);\n\t\t\t\tgoto bad;\n\t\t\t}\n\t\t\tlen = read_config(body);\n\n\t\t\tret = diswsi(iochan, len ? RM_RSP_ERROR : RM_RSP_OK);\n\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"write config response failed %s\",\n\t\t\t\t\tdis_emsg[ret]);\n\t\t\t\tif (len == 0)\t  /* config was okay but reply */\n\t\t\t\t\tgoto bad; /* didn't work */\n\t\t\t}\n\n\t\t\t/* check if read_config failed */\n\t\t\tif (len != 0) {\n\t\t\t\tcleanup();\n\t\t\t\tlog_close(1);\n\t\t\t\ttpp_shutdown();\n#ifdef WIN32\n\t\t\t\tExitThread(1);\n#else\n\t\t\t\texit(1);\n#endif\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase RM_CMD_SHUTDOWN:\n\t\t\tif (restrictrm) {\n\t\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t  LOG_NOTICE | LOG_AUTH, __func__,\n\t\t\t\t\t  \"restricted shutdown attempt\");\n\t\t\t\tgoto bad;\n\t\t\t}\n\n\t\t\tlog_event(PBSEVENT_SYSTEM, 0, LOG_NOTICE, __func__, \"shutdown\");\n\t\t\tret = diswsi(iochan, RM_RSP_OK);\n\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"write shutdown response failed %s\",\n\t\t\t\t\tdis_emsg[ret]);\n\t\t\t\tlog_err(-1, __func__, 
log_buffer);\n\t\t\t}\n\t\t\tdis_flush(iochan);\n\t\t\tclose_io(iochan);\n\t\t\tcleanup();\n\t\t\tlog_close(1);\n\t\t\ttpp_shutdown();\n#ifdef WIN32\n\t\t\tExitThread(0);\n#else\n\t\t\texit(0);\n#endif\n\n\t\tdefault:\n\t\t\tsprintf(log_buffer, \"unknown command %d\", command);\n\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\tret = diswsi(iochan, RM_RSP_ERROR);\n\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"write default response failed %s\",\n\t\t\t\t\tdis_emsg[ret]);\n\t\t\t\tgoto bad;\n\t\t\t}\n\t\t\tret = diswst(iochan, log_buffer);\n\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"write string failed %s\",\n\t\t\t\t\tdis_emsg[ret]);\n\t\t\t\tgoto bad;\n\t\t\t}\n\t\t\tbreak;\n\t}\n\tif (dis_flush(iochan) == -1) {\n\t\tlog_err(errno, __func__, \"flush\");\n\t\tgoto bad;\n\t}\n\treturn 0;\n\nbad:\n\tsprintf(output,\n\t\t\"\\n\\tmessage refused from port %d addr %ld.%ld.%ld.%ld\", port,\n\t\t(ipadd & 0xff000000) >> 24,\n\t\t(ipadd & 0x00ff0000) >> 16,\n\t\t(ipadd & 0x0000ff00) >> 8,\n\t\t(ipadd & 0x000000ff));\n\tstrcat(log_buffer, output);\n\tlog_err(errno, __func__, log_buffer);\n\n\t/*\n\t ** This is a special case, if the malloc fails for the 'output'\n\t ** buffer, then 'close_io' function pointer won't get a chance\n\t ** to be initialized. 
So, Initialize accordingly before use.\n\t */\n\tif (close_io == NULL)\n\t\tclose_io = (void (*)(int)) & tpp_close;\n\n\tclose_io(iochan);\n\treturn -1;\n}\n\n/**\n * @brief\n *\tRead a message from an TPP stream, figure out if it is a\n *\tResource Monitor request or an InterMom message.\n *\n * @param[in] stream - TPP stream\n *\n * @return Void\n *\n */\nvoid\ndo_tpp(int stream)\n{\n\tint ret, proto, version;\n\tvoid im_request(int stream, int version);\n\tvoid is_request(int stream, int version);\n\tvoid im_eof(int stream, int ret);\n\n\tDIS_tpp_funcs();\n\tproto = disrsi(stream, &ret);\n\tif (ret != DIS_SUCCESS) {\n\t\tim_eof(stream, ret);\n\t\treturn;\n\t}\n\tversion = disrsi(stream, &ret);\n\tif (ret != DIS_SUCCESS) {\n\t\tDBPRT((\"%s: no protocol version number %s\\n\",\n\t\t       __func__, dis_emsg[ret]))\n\t\tim_eof(stream, ret);\n\t\treturn;\n\t}\n\n\tswitch (proto) {\n\t\tcase RM_PROTOCOL:\n\t\t\tDBPRT((\"%s: got a resource monitor request\\n\", __func__))\n\t\t\tif (rm_request(stream, version) == 0)\n\t\t\t\ttpp_eom(stream);\n\t\t\tbreak;\n\n\t\tcase IM_PROTOCOL:\n\t\t\tDBPRT((\"%s: got an internal task manager request\\n\", __func__))\n\t\t\tim_request(stream, version);\n\t\t\tbreak;\n\n\t\tcase IS_PROTOCOL:\n\t\t\tDBPRT((\"%s: got an inter-server request\\n\", __func__))\n\t\t\tis_request(stream, version);\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\tDBPRT((\"%s: unknown request %d\\n\", __func__, proto))\n\t\t\ttpp_close(stream);\n\t\t\tbreak;\n\t}\n}\n\n/* ARGSUSED */\n\n/**\n * @brief\n *\twrapper function for do_tpp\n *\n * @param[in] fd - file descriptor\n *\n * @return Void\n *\n */\nvoid\ntpp_request(int fd)\n{\n\tint stream;\n\tint i;\n\t/* To reduce tpp process storm reducing max do_tpp processing to MAX_TPP_LOOPS times */\n\tfor (i = 0; i < MAX_TPP_LOOPS; i++) {\n\t\tif ((stream = tpp_poll()) == -1) {\n#ifdef WIN32\n\t\t\tif (errno != 10054)\n#endif\n\t\t\t\tlog_err(errno, __func__, \"tpp_poll\");\n\t\t\tbreak;\n\t\t}\n\t\tif (stream == 
-2)\n\t\t\tbreak;\n\t\tdo_tpp(stream);\n\t}\n}\n\n/**\n * @brief\n *      Read a TCP message from fd, figure out if it is a\n *      Resource Monitor request or an InterMom message.\n *      Serve only the IM message\n *\n * @param[in] fd - tcp msg\n *\n * @return\tint\n * @retval\t0\tSuccess\n * @retval\t!0\tFailure\n *\n */\n\nint\ndo_tcp(int fd)\n{\n\tint ret, proto, version;\n\tint tm_request(int stream, int version);\n\n\tpbs_tcp_timeout = 0;\n\tproto = disrsi(fd, &ret);\n\tpbs_tcp_timeout = PBS_DIS_TCP_TIMEOUT_SHORT;\n\n\tswitch (ret) {\n\t\tcase DIS_SUCCESS: /* worked */\n\t\t\tbreak;\n\t\tcase DIS_EOF: /* closed */\n\t\t\tclose_conn(fd);\n\t\tcase DIS_EOD: /* still open */\n\t\t\treturn 1;\n\t\tdefault:\n\t\t\tsprintf(log_buffer, \"no protocol number: %s\",\n\t\t\t\tdis_emsg[ret]);\n\t\t\tgoto bad;\n\t}\n\n\tversion = disrsi(fd, &ret);\n\tif (ret != DIS_SUCCESS) {\n\t\tDBPRT((\"%s: no protocol version number %s\\n\",\n\t\t       __func__, dis_emsg[ret]))\n\t\tgoto bad;\n\t}\n\n\tswitch (proto) {\n\t\tcase TM_PROTOCOL:\n\t\t\tDBPRT((\"%s: got an internal task manager request\\n\", __func__))\n\t\t\tret = tm_request(fd, version);\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\tDBPRT((\"%s: unknown request %d\\n\", __func__, proto))\n\t\t\tgoto bad;\n\t}\n\treturn ret;\n\nbad:\n\tclose_conn(fd);\n\treturn -1;\n}\n\n/**\n * @brief\n *      wrapper function for do_tcp which calls infinitely\n *\n * @param[in] fd - file descriptor\n *\n * @return Void\n *\n */\n\nvoid\ntcp_request(int fd)\n{\n\tint c;\n\tlong ipadd;\n\tchar address[80];\n\tconn_t *conn = get_conn(fd);\n\tif (!conn) {\n\t\tsprintf(log_buffer, \"could not find fd=%d in connection table\",\n\t\t\tfd);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\tclosesocket(fd);\n\t\treturn;\n\t}\n\n\tipadd = conn->cn_addr;\n\n\tsprintf(address, \"%ld.%ld.%ld.%ld:%d\",\n\t\t(ipadd & 0xff000000) >> 24,\n\t\t(ipadd & 0x00ff0000) >> 16,\n\t\t(ipadd & 0x0000ff00) >> 8,\n\t\t(ipadd & 
0x000000ff),\n\t\tntohs(conn->cn_port));\n\tDBPRT((\"%s: fd %d addr %s\\n\", __func__, fd, address))\n\tDIS_tcp_funcs();\n\tif (!addrfind(ipadd)) {\n\t\tsprintf(log_buffer, \"bad connect from %s\", address);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\tclose_conn(fd);\n\t\treturn;\n\t}\n\tlog_buffer[0] = '\\0';\n\tfor (c = 0;; c++) {\n\t\tDIS_tcp_funcs();\n\n\t\tif (do_tcp(fd))\n\t\t\tbreak;\n\t}\n\tDBPRT((\"%s: processed %d\\n\", __func__, c))\n}\n\n/**\n * @brief\n *\tKill a job.\n *\n * @param[in]\tpjob - pointer to job\n * @param[in]\tsig - signal number\n *\n * @return      int\n * @retval      on Windows: 1 on success, 0 on failure\n * @retval      on *NIX: number of tasks signalled (may be 0)\n *\n */\nint\nkill_job(job *pjob, int sig)\n{\n\tpbs_task *ptask = NULL;\n\tint ct = 0;\n\tint tsk_ct;\n\n#ifdef WIN32\n\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t  pjob->ji_qs.ji_jobid, \"kill_job\");\n\n\tif (pjob->ji_hJob == NULL)\n\t\treturn 0;\n\n\t/* Normal process termination, top command shell termination will result in exit codes < 256.     */\n\t/* To differentiate a process termination by signals, add BASE_SIGEXIT_CODE to sig, and the       */\n\t/* value (BASE_SIGEXIT_CODE + sig) will be assigned as the exit code for that terminated process. 
*/\n\tif (TerminateJobObject(pjob->ji_hJob, BASE_SIGEXIT_CODE + sig) == 0) {\n\t\tlog_err(-1, __func__, \"TerminateJobObject\");\n\t\treturn 0;\n\t}\n\t/*\n\t * for any external processes that got attached to the PBS job via pbs_attach\n\t * but are not part of the job object, kill the individual tasks\n\t */\n\tfor (ptask = (pbs_task *) GET_NEXT(pjob->ji_tasks);\n\t     ptask;\n\t     ptask = (pbs_task *) GET_NEXT(ptask->ti_jobtask)) {\n\t\tDBPRT((\"%s: task %8.8X status %d\\n\", __func__,\n\t\t       ptask->ti_qs.ti_task, ptask->ti_qs.ti_status))\n\t\tif (ptask->ti_qs.ti_status != TI_STATE_RUNNING)\n\t\t\tcontinue;\n\t\tct += kill_task(ptask, sig, 0);\n\t}\n\treturn 1;\n#else\n\n\tDBPRT((\"%s: entered %s\\n\", __func__, pjob->ji_qs.ji_jobid))\n\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t  pjob->ji_qs.ji_jobid, __func__);\n\n\tfor (ptask = (pbs_task *) GET_NEXT(pjob->ji_tasks);\n\t     ptask;\n\t     ptask = (pbs_task *) GET_NEXT(ptask->ti_jobtask)) {\n\t\tDBPRT((\"%s: task %8.8X status %d\\n\", __func__,\n\t\t       ptask->ti_qs.ti_task, ptask->ti_qs.ti_status))\n\t\tif (ptask->ti_qs.ti_status != TI_STATE_RUNNING)\n\t\t\tcontinue;\n\t\ttsk_ct = kill_task(ptask, sig, 0);\n\t\tct += tsk_ct;\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\t\tif (sig == SIGKILL) { /* only stop afslog when the task is finally dying */\n\t\t\tAFSLOG_TERM(ptask);\n\t\t}\n#endif\n\n\t\t/*\n\t\t ** If this is an orphan task, force it to be EXITED\n\t\t ** since it will not be seen by scan_for_terminated.\n\t\t **\n\t\t ** Also set the task status to EXITED if the count of\n\t\t ** processes in this task == 0. 
This is to allow them\n\t\t ** to properly transition to TI_DEAD in scan_for_exiting.\n\t\t */\n\t\tif (((sig == SIGKILL) && (ptask->ti_flags & TI_FLAGS_ORPHAN)) || (tsk_ct == 0)) {\n\t\t\tptask->ti_qs.ti_status = TI_STATE_EXITED;\n\t\t\ttask_save(ptask);\n\t\t\tsprintf(log_buffer, \"task %8.8X force exited\",\n\t\t\t\tptask->ti_qs.ti_task);\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB,\n\t\t\t\t  LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t/*\n\t\t\t ** If it is the parent task that became an orphan by\n\t\t\t ** losing the top shell, then set exiting_tasks.\n\t\t\t */\n\t\t\tif (ptask->ti_qs.ti_parenttask == TM_NULL_TASK)\n\t\t\t\texiting_tasks = 1;\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\t\t\tAFSLOG_TERM(ptask);\n#endif\n\t\t}\n\t}\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5) && 0\n\tif (cred_by_job(pjob, CRED_DESTROY) != PBS_KRB5_OK) {\n\t\tlog_record(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid,\n\t\t\t   \"failed to destroy credentials\");\n\t}\n#endif\n\tDBPRT((\"%s: done %s killed %d\\n\", __func__, pjob->ji_qs.ji_jobid, ct))\n\treturn ct;\n#endif /* WIN32 */\n}\n\n/**\n * @brief\n *\tsize decoding routine.\n *\tAccepts a resource pointer and returns the decoded value in kb.\n *\n * @param[in] pres - pointer to resource\n *\n * @note  This will return only up to ULONG_MAX kb (i.e. return value is\n *\t  unsigned long). Even though the new size data type has been expanded\n *\t  to hold up to ULLONG_MAX kb (i.e. unsigned long long max), this\n *\t  function will still only return up to the ULONG_MAX kb. 
Extra care\n *\t  has been taken to make sure bit shifts in this function don't go past\n *\t  the # of bits of an unsigned long; otherwise, we might get some\n *\t  unexpected results.\n *\n * @note  The size of a word is a constant shared between daemons to ensure\n *        consistency over correctness.\n *\n * @return\tu_long\n * @retval\tdecoded value for size\n *\n */\nu_long\ngetsize(resource *pres)\n{\n\tu_Long value;\n\tu_long shift;\n\n\tif (pres->rs_value.at_type != ATR_TYPE_SIZE)\n\t\treturn (0);\n\tvalue = pres->rs_value.at_val.at_size.atsv_num;\n\tshift = pres->rs_value.at_val.at_size.atsv_shift;\n\n\tif (pres->rs_value.at_val.at_size.atsv_units ==\n\t    ATR_SV_WORDSZ) {\n\t\tif (value > ULONG_MAX / SIZEOF_WORD)\n\t\t\treturn (0);\n\t\tvalue *= SIZEOF_WORD;\n\t}\n\tif (shift > 10) {\n\t\tshift -= 10;\n\t\tif (shift >= (sizeof(ULONG_MAX) * CHAR_BIT)) {\n\t\t\treturn (0);\n\t\t}\n\t\treturn ((u_long) (value << shift));\n\t} else { /* in kb or < 1 kb */\n\t\tu_Long avalue;\n\n\t\tshift = 10 - shift;\n\t\tavalue = (value >> shift);\n\t\t/* if value is < 1kb but !0, then round up to 1kb */\n\t\tif ((value % (1 << shift)) > 0) { /* any remainder, round UP */\n\t\t\tavalue++;\n\t\t}\n\t\treturn ((u_long) avalue);\n\t}\n}\n\n/**\n * @brief\n *\ttime decoding routine.\n *\n *\tAccepts a resource pointer and returns the decoded value of time\n *\tin seconds.\n *\n * @param[in] pres - pointer to resource structure\n *\n * @return\tu_long\n * @retval\tdecoded value for time\n *\n */\nu_long\ngettime(resource *pres)\n{\n\n\tif (pres->rs_value.at_type != ATR_TYPE_LONG)\n\t\treturn (0);\n\tif (pres->rs_value.at_val.at_long < 0)\n\t\treturn (0);\n\treturn ((u_long) pres->rs_value.at_val.at_long);\n}\n\n/**\n * @brief\n *\tInternal size decoding routine.\n *\n *\tAccepts a resource pointer and a pointer to the unsigned long integer\n *\tto receive the decoded value.  
It returns a PBS error code, and the\n *\tdecoded value in the unsigned long integer.\n *\n * @param[in] pres - pointer to resource\n *\n * @note  On platforms other than sgi, the *ret value is only up to\n *\t  ULONG_MAX bytes (i.e. return value is unsigned long). Even though the\n *\t  new size data type has been expanded to hold up to ULLONG_MAX bytes\n *\t  (i.e. unsigned long long max), this function will still only return\n *\t  up to the ULONG_MAX bytes. Extra care has been taken to make sure bit\n *\t  shifts in this function don't go past the # of bits of an unsigned\n *\t  long; otherwise, we might get unexpected results.\n *\n * @retval  ZERO(PBSE_NONE) SUCCESS\n * @retval  NON-ZERO PBS error code indicates failure\n *\n */\nint\nlocal_getsize(resource *pres, u_long *ret)\n{\n\tu_Long value;\n#define PBS_RLIM_MAX ULONG_MAX\n#define PBS_RLIM_TYPE u_long\n\n\t/*\n\t * If the resource pointer (pres) is NULL, then just\n\t * return with error code PBSE_UNKRESC.\n\t */\n\tif (pres == NULL)\n\t\treturn (PBSE_UNKRESC);\n\n\tif (pres->rs_value.at_type != ATR_TYPE_SIZE)\n\t\treturn (PBSE_ATTRTYPE);\n\tvalue = pres->rs_value.at_val.at_size.atsv_num;\n\tif (pres->rs_value.at_val.at_size.atsv_units == ATR_SV_WORDSZ) {\n\t\tif (value > (ULONG_MAX / SIZEOF_WORD))\n\t\t\treturn (PBSE_BADATVAL);\n\t\tvalue *= SIZEOF_WORD;\n\t}\n\tif ((pres->rs_value.at_val.at_size.atsv_shift >=\n\t     (sizeof(PBS_RLIM_MAX) * CHAR_BIT)) ||\n\t    (value >\n\t     (PBS_RLIM_MAX >> pres->rs_value.at_val.at_size.atsv_shift))) {\n\t\treturn (PBSE_BADATVAL);\n\t}\n\t*ret = (PBS_RLIM_TYPE) (value << pres->rs_value.at_val.at_size.atsv_shift);\n\n\treturn (PBSE_NONE);\n}\n\n/**\n * @brief\n *\tInternal time decoding routine.\n *\n *\tAccepts a resource pointer and a pointer to the unsigned long integer\n *\tto receive the decoded value.  
It returns a PBS error code, and the\n *\tdecoded value of time in seconds in the unsigned long integer.\n *\n * @param[in] pres - pointer to resource\n * @param[out] ret - pointer to u_long to hold decoded value\n *\n * @return\terror number\n * @retval\tPBSE_NONE\tno error\n * @retval\tPBSE_UNKRESC\tUnknown resource\n * @retval\tPBSE_ATTRTYPE\tincompatible queue attribute type\n * @retval\tPBSE_BADATVAL\tbad attribute value\n *\n *\n */\nint\nlocal_gettime(resource *pres, u_long *ret)\n{\n\t/*\n\t * If pres is NULL, then just return with PBSE_UNKRESC.\n\t */\n\tif (pres == NULL)\n\t\treturn (PBSE_UNKRESC);\n\n\tif (pres->rs_value.at_type != ATR_TYPE_LONG)\n\t\treturn (PBSE_ATTRTYPE);\n\tif (pres->rs_value.at_val.at_long < 0)\n\t\treturn (PBSE_BADATVAL);\n\t*ret = pres->rs_value.at_val.at_long;\n\n\treturn (PBSE_NONE);\n}\n\n/**\n * @brief\n *\tInternal long decoding routine.\n *\n *\tAccepts a resource pointer and a pointer to the unsigned long integer\n *\tto receive the decoded value.  
It returns a PBS error code, and the\n *\tdecoded value in the unsigned long integer.\n *\n * @param[in] pres - pointer to resource\n * @param[out] ret - pointer to u_long to hold decoded value\n *\n * @return\terror numbers\n *\n */\nint\ngetlong(resource *pres, u_long *ret)\n{\n\treturn (local_gettime(pres, ret));\n}\n\n/**\n * @brief\n *\tCalculate a moving weighted average percentage of cpus used by job.\n *\t100% = 1 cpu full time.\n *\n * @param\t\tpjob\t\tpointer to job structure\n * @param\t\toldcput\t\tcpu time from previous sample\n * @param\t\tnewcput\t\tcpu time from current sample\n * @param\t\tsampletime\ttime stamp of current sample\n *\n * @return\t\tNothing\n *\n */\nvoid\ncalc_cpupercent(job *pjob, u_long oldcput, u_long newcput, time_t sampletime)\n{\n\tattribute *at_used;\n\tu_long *lp;\n\tlong ncpus_req;\n\tdouble new_sample_weight;\n\tu_long percent;\n\tresource *pres;\n\tresource *pres_req;\n\tresource *preswalltime;\n\tresource_def *rd;\n\tlong dur;\n\n\t/* if job started after last sample skip calculation */\n\tif (pjob->ji_qs.ji_stime > sampletime)\n\t\treturn;\n\n\tncpus_req = 0;\n\trd = &svr_resc_def[RESC_NCPUS];\n\tpres_req = find_resc_entry(get_jattr(pjob, JOB_ATR_resource), rd);\n\tif (pres_req != NULL)\n\t\tncpus_req = MAX(0, pres_req->rs_value.at_val.at_long);\n\n\trd = &svr_resc_def[RESC_CPUPERCENT];\n\tat_used = get_jattr(pjob, JOB_ATR_resc_used);\n\tpres = find_resc_entry(at_used, rd);\n\tif (pres == NULL)\n\t\treturn;\n\n\tlp = (u_long *) &pres->rs_value.at_val.at_long;\n\tif (pjob->ji_sampletim == 0) {\n\t\tdur = MAX(1, sampletime - pjob->ji_qs.ji_stime);\n\t} else {\n\t\tdur = MAX(1, sampletime - pjob->ji_sampletim);\n\t}\n\tpercent = ((newcput - oldcput) * 100) / dur;\n\n\tif ((*lp) == 0 && percent != 0 && (ncpus_req > 0))\n\t\t/* set old sample used in averaging to sane value at start */\n\t\t*lp = MIN(percent, ncpus_req * 100);\n\n\tif (percent >= (*lp)) { /* moving cpupercent up */\n\t\tnew_sample_weight = delta_weightup 
*\n\t\t\t\t    MIN((double) 1.0, (double) dur / (double) max_check_poll);\n\t} else { /* moving cpupercent down */\n\t\t/*\n\t\t * clamp sample that would push cpupercent down again\n\t\t * so as to move back to the greater of\n\t\t *   - average over entire run or\n\t\t *   - ncpus*100\n\t\t * rather than back to zero in the face of no cpu\n\t\t * utilisation\n\t\t *\n\t\t * sample going down -\n\t\t *   never allow percent to rise above (*lp)\n\t\t */\n\t\tlong wallt = -1;\n\t\trd = &svr_resc_def[RESC_WALLTIME];\n\t\tpreswalltime = find_resc_entry(at_used, rd);\n\t\tif ((preswalltime != NULL) &&\n\t\t    ((is_attr_set(&preswalltime->rs_value)) != 0)) {\n\t\t\twallt = preswalltime->rs_value.at_val.at_long;\n\t\t}\n\t\tif (wallt <= 0)\n\t\t\treturn;\n\n\t\tpercent = MIN((double) (*lp),\n\t\t\t      MAX((double) percent,\n\t\t\t\t  MAX((double) ncpus_req,\n\t\t\t\t      (double) oldcput / (double) wallt) *\n\t\t\t\t\t  100.0));\n\t\t/* note wallt above corresponds to the old cput\n\t\t * -- resource is only set to time_now further down\n\t\t *    in the mom_set_use() routine\n\t\t */\n\n\t\tnew_sample_weight = delta_weightdown *\n\t\t\t\t    (MIN((double) 1.0, (double) dur / (double) max_check_poll));\n\t}\n\n\t*lp = (u_long) (percent * new_sample_weight + (*lp) * (1.0 - new_sample_weight));\n\n\tDBPRT((\"cpu%% : ses %ld (new %lu - old %lu)/delta %ld = %lu%% or %ld%% weighted\\n\", get_jattr_long(pjob, JOB_ATR_session_id), newcput, oldcput, dur, percent, *lp))\n}\n\n#ifdef NAS /* localmod 015 */\n/* functions for spool_usage limit */\n\n/**\n * @brief\n *\tsets spool size for job\n *\n * @param[in] value - spool size\n *\n * @return\thandler_ret_t\n * @retval\tHANDLER_FAIL\tFailure\n * @retval\tHANDLER_SUCCESS\tSuccess\n *\n */\n\nstatic handler_ret_t\nset_spoolsize(char *value)\n\n{\n\tchar newstr[50] = \"spool_size \";\n\tu_long val;\n\tstruct size_value psize;\n\n\tpsize.atsv_shift = 0;\n\tpsize.atsv_num = 0;\n\tpsize.atsv_units = 
ATR_SV_BYTESZ;\n\n\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_DEBUG,\n\t\t  \"spool_size\", value);\n\tif (to_size(value, &psize) != 0) {\n\t\tsprintf(log_buffer, \"invalid spool size specification %s\", value);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\treturn HANDLER_FAIL;\n\t}\n\n\t/*\n\t * Convert psize to kilobytes\n\t */\n\n\tval = psize.atsv_num;\n\n\t/* Special-case 0 */\n\n\tif (val != 0) {\n\t\tif (psize.atsv_units == ATR_SV_WORDSZ)\n\t\t\tval *= sizeof(int);\n\t\tif (psize.atsv_shift == 0)\n\t\t\tval = (val + 1023) >> 10;\n\t\telse\n\t\t\tval = val << (psize.atsv_shift - 10);\n\t}\n\n\tspoolsize = val;\n\t/* leave room for the terminating NUL to avoid a one-byte overflow */\n\t(void) strncat(newstr, value, sizeof(newstr) - strlen(newstr) - 1);\n\tif (add_static(newstr, \"config\", 0))\n\t\treturn HANDLER_FAIL;\n\tnconfig++;\n\treturn HANDLER_SUCCESS;\n}\n\n/**\n * @brief\n *\tspool_usage - compute a job's spool usage (in KB)\n *\n * Returns the sum of the lengths of stdout and stderr in KB, iff they\n * are being written to the $PBS_HOME/spool directory. In all other cases\n * (e.g. 
interactive jobs, jobs running in a sandbox), returns zero.\n *\n * @param[in] pjob - pointer to job\n *\n * @return\tunsigned long\n * @retval\tlength of output written by job into spool dir\n * @retval\t0\tnot writing into spool dir\n *\n */\n\nunsigned long\nspool_usage(job *pjob)\n{\n\tunsigned long outsize = 0;\n\tunsigned long errsize = 0;\n\tchar *outpath;\n\tchar *errpath;\n\tstruct stat buf; /* return buffer for stat(2) */\n\tsize_t spool_path_len = strlen(path_spool);\n\tint keeping;\n\n\t/* Return 0 if job is interactive */\n\n\tif (is_jattr_set(pjob, JOB_ATR_interactive) && (get_jattr_long(pjob, JOB_ATR_interactive) > 0)) {\n\t\tDBPRT((\"job is interactive\\n\"));\n\t\treturn (0L);\n\t}\n\n\t/* Job is not interactive */\n\n\t/* Get full pathname of stdout file */\n\n\toutpath = std_file_name(pjob, StdOut, &keeping);\n\tDBPRT((\"file = %s\\n\", outpath));\n\tif (strncmp(outpath, path_spool, spool_path_len) == 0) {\n\n\t\t/* stdout file resides under the spool directory (if it exists). */\n\n\t\tif (stat(outpath, &buf) == 0) { /* file exists */\n\t\t\toutsize = (unsigned long) buf.st_size;\n\t\t\tDBPRT((\"size = %lu\\n\", outsize));\n\t\t}\n\t}\n\n\t/* Get full pathname of stderr file */\n\n\terrpath = std_file_name(pjob, StdErr, &keeping);\n\tDBPRT((\"file = %s\\n\", errpath));\n\tif (strncmp(errpath, path_spool, spool_path_len) == 0) {\n\n\t\t/* stderr file resides under the spool directory (if it exists). 
*/\n\n\t\tif (stat(errpath, &buf) == 0) { /* file exists */\n\t\t\terrsize = (unsigned long) buf.st_size;\n\t\t\tDBPRT((\"size = %lu\\n\", errsize));\n\t\t}\n\t}\n\n\treturn ((outsize + errsize) >> 10);\n}\n\n/**\n * @brief\n *\tspool_over_limit - check job's spool usage against limit\n *\n * @param[in] pjob - pointer to job\n *\n * @return\tint\n * @retval\t1\tiff job's spool usage exceeds spool limit in config file\n * @retval\t0\totherwise.\n *\n * A spool limit <= 0 is considered to be \"unlimited\".\n *\n */\n\nint\nspool_over_limit(job *pjob)\n{\n\tif (spoolsize <= 0)\n\t\treturn 0;\n\n\treturn (spool_usage(pjob) > spoolsize);\n}\n#endif /* localmod 015 */\n\n/**\n * @brief\n *\tMeasure job resource usage and compare with its limits.\n *\n *\tncpus, mem, vmem are checked against the node specific limit\n *\testablished by the job's select directive and passed via the\n *\texec_vnode string to job_nodes() into the ji_hosts array.\n *\n *\tJob level resource limits, such as cput, walltime, ... 
are\n *\tchecked also as no single node can exceed the total\n *\n * @param[in] pjob - pointer to job\n *\n * @return Bool\n * @retval TRUE If any well-formed polled limit has been exceeded\n * @retval FALSE no polled limit has been exceeded\n *\n */\nint\nmom_over_limit(job *pjob)\n{\n\tchar *pname;\n\tint retval;\n\tu_long llvalue, llnum;\n\tu_long value, num;\n\tresource *pres;\n\tresource *used;\n\tattribute *uattr = get_jattr(pjob, JOB_ATR_resc_used);\n\tresource_def *rd;\n\n\tassert(pjob != NULL);\n\tassert((get_jattr(pjob, JOB_ATR_resource))->at_type == ATR_TYPE_RESC);\n\tpres = (resource *) GET_NEXT(get_jattr_list(pjob, JOB_ATR_resource));\n\n\tDBPRT((\"%s: entered\\n\", __func__))\n\n\t/* check ncpus usage locally */\n\n\tvalue = pjob->ji_hosts[pjob->ji_nodeid].hn_nrlimit.rl_ncpus;\n\tif (value != 0) { /* ignore cpuusage check when ncpus=0 */\n\t\tattribute *at;\n\t\tresource *prescpup;\n\t\tresource *prescput;\n\t\tresource *preswalltime;\n\t\tu_long cput_sum;\n\t\tu_long walltime_sum;\n\n\t\tat = get_jattr(pjob, JOB_ATR_resc_used);\n\t\tassert(at->at_type == ATR_TYPE_RESC);\n\n\t\trd = &svr_resc_def[RESC_CPUPERCENT];\n\t\tprescpup = find_resc_entry(at, rd);\n\t\tif ((prescpup != NULL) &&\n\t\t    ((is_attr_set(&prescpup->rs_value)) != 0)) {\n\t\t\tnum = prescpup->rs_value.at_val.at_long;\n\t\t\tif ((float) num >\n\t\t\t    (value * 100 * delta_cpufactor + delta_percent_over)) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"ncpus %.1f exceeded limit %lu (burst)\",\n\t\t\t\t\t(float) num / 100.0, value);\n\t\t\t\tif (cpuburst) { /* abort job */\n\t\t\t\t\tpjob->ji_qs.ji_un.ji_momt.ji_exitstat = JOB_EXEC_KILL_NCPUS_BURST;\n\t\t\t\t\treturn (TRUE);\n\t\t\t\t} else if ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_cpuperc) == 0) {\n\t\t\t\t\t/* just log it */\n\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t\t  LOG_INFO, pjob->ji_qs.ji_jobid,\n\t\t\t\t\t\t  log_buffer);\n\t\t\t\t\tpjob->ji_qs.ji_svrflags |= 
JOB_SVFLG_cpuperc;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\trd = &svr_resc_def[RESC_WALLTIME];\n\t\t\tpreswalltime = find_resc_entry(at, rd);\n\t\t\tif ((preswalltime != NULL) &&\n\t\t\t    ((is_attr_set(&preswalltime->rs_value)) != 0)) {\n\t\t\t\twalltime_sum = preswalltime->rs_value.at_val.at_long;\n\t\t\t\tif (walltime_sum > average_trialperiod) {\n\t\t\t\t\trd = &svr_resc_def[RESC_CPUT];\n\t\t\t\t\tprescput = find_resc_entry(at, rd);\n\t\t\t\t\tif ((prescput != NULL) &&\n\t\t\t\t\t    ((is_attr_set(&prescput->rs_value)) != 0)) {\n\t\t\t\t\t\tcput_sum = prescput->rs_value.at_val.at_long;\n\t\t\t\t\t\t/* \"value\" is from ncpus */\n\t\t\t\t\t\tif (((double) cput_sum / (double) walltime_sum) >\n\t\t\t\t\t\t    (value * average_cpufactor + average_percent_over / 100.0)) {\n\t\t\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\t\t\"ncpus %.2f exceeded limit %lu (sum)\",\n\t\t\t\t\t\t\t\t(double) cput_sum / (double) walltime_sum,\n\t\t\t\t\t\t\t\tvalue);\n\t\t\t\t\t\t\tif (cpuaverage) { /* abort job */\n\t\t\t\t\t\t\t\tpjob->ji_qs.ji_un.ji_momt.ji_exitstat = JOB_EXEC_KILL_NCPUS_SUM;\n\t\t\t\t\t\t\t\treturn (TRUE);\n\t\t\t\t\t\t\t} else if ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_cpuperc) == 0) {\n\t\t\t\t\t\t\t\t/* just log it */\n\t\t\t\t\t\t\t\tlog_event(PBSEVENT_JOB,\n\t\t\t\t\t\t\t\t\t  PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t\t\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t\t\t\t\t\t\t  log_buffer);\n\t\t\t\t\t\t\t\tpjob->ji_qs.ji_svrflags |= JOB_SVFLG_cpuperc;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t/* check vmem usage locally */\n\tllvalue = pjob->ji_hosts[pjob->ji_nodeid].hn_nrlimit.rl_vmem << 10;\n\tif (llvalue != 0) {\n\t\trd = &svr_resc_def[RESC_VMEM];\n\t\tused = find_resc_entry(uattr, rd);\n\t\tretval = local_getsize(used, &llnum);\n\t\tif (retval == PBSE_NONE) {\n\t\t\tif (llnum > llvalue) {\n\t\t\t\tsprintf(log_buffer, \"vmem %lukb exceeded limit %lukb\", llnum / 1024, llvalue / 1024);\n\t\t\t\tpjob->ji_qs.ji_un.ji_momt.ji_exitstat = 
JOB_EXEC_KILL_VMEM;\n\t\t\t\treturn (TRUE);\n\t\t\t}\n\t\t}\n\t}\n\n\t/* check mem usage locally */\n\tllvalue = pjob->ji_hosts[pjob->ji_nodeid].hn_nrlimit.rl_mem << 10;\n\tif (llvalue != 0) {\n\t\trd = &svr_resc_def[RESC_MEM];\n\t\tused = find_resc_entry(uattr, rd);\n\t\tretval = local_getsize(used, &llnum);\n\t\tif (retval == PBSE_NONE) {\n\t\t\tif ((llnum > llvalue) && enforce_mem) {\n\t\t\t\tsprintf(log_buffer, \"mem %lukb exceeded limit %lukb\", llnum / 1024, llvalue / 1024);\n\t\t\t\tpjob->ji_qs.ji_un.ji_momt.ji_exitstat = JOB_EXEC_KILL_MEM;\n\t\t\t\treturn (TRUE);\n\t\t\t}\n\t\t}\n\t}\n\n\tpres = (resource *) GET_NEXT(get_jattr_list(pjob, JOB_ATR_resource));\n\n\tfor (; pres != NULL; pres = (resource *) GET_NEXT(pres->rs_link)) {\n\t\tassert(pres->rs_defin != NULL);\n\t\tpname = pres->rs_defin->rs_name;\n\t\tused = find_resc_entry(uattr, pres->rs_defin);\n\t\tassert(pname != NULL);\n\t\tassert(*pname != '\\0');\n\n\t\t/* The checks for cput and walltime (job wide limits) should\n\t\t * only be done on the MS.  
We are leaving the Cray specific\n\t\t * mppe and mppssp to be checked on all nodes though\n\t\t */\n\n\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) != 0) {\n\t\t\tif (strcmp(pname, \"cput\") == 0) {\n\t\t\t\tretval = local_gettime(pres, &value);\n\t\t\t\tif (retval != PBSE_NONE)\n\t\t\t\t\tcontinue;\n\t\t\t\tretval = local_gettime(used, &num);\n\t\t\t\tif (retval != PBSE_NONE)\n\t\t\t\t\tcontinue;\n\t\t\t\tif (num > value) {\n\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\"cput %lu exceeded limit %lu\",\n\t\t\t\t\t\tnum, value);\n\t\t\t\t\tpjob->ji_qs.ji_un.ji_momt.ji_exitstat = JOB_EXEC_KILL_CPUT;\n\t\t\t\t\treturn (TRUE);\n\t\t\t\t}\n\t\t\t} else if (strcmp(pname, \"walltime\") == 0) {\n\t\t\t\tretval = local_gettime(pres, &value);\n\t\t\t\tif (retval != PBSE_NONE)\n\t\t\t\t\tcontinue;\n\t\t\t\t/* use the resources_used.walltime value */\n\t\t\t\tretval = local_gettime(used, &num);\n\t\t\t\tif (retval != PBSE_NONE)\n\t\t\t\t\tcontinue;\n\t\t\t\t/* add time that has not been accumulated */\n\t\t\t\tnum += (time_now - pjob->ji_walltime_stamp) * wallfactor;\n\t\t\t\tif (num > value) {\n\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\"walltime %lu exceeded limit %lu\",\n\t\t\t\t\t\tnum, value);\n\t\t\t\t\tpjob->ji_qs.ji_un.ji_momt.ji_exitstat = JOB_EXEC_KILL_WALLTIME;\n\t\t\t\t\treturn (TRUE);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif (strcmp(pname, \"mppe\") == 0) {\n\t\t\tretval = getlong(pres, &value);\n\t\t\tif (retval != PBSE_NONE)\n\t\t\t\tcontinue;\n\t\t\tretval = getlong(used, &num);\n\t\t\tif (retval != PBSE_NONE)\n\t\t\t\tcontinue;\n\t\t\tif (num > value) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"mppe %lu exceeded limit %lu\",\n\t\t\t\t\tnum, value);\n\t\t\t\treturn (TRUE);\n\t\t\t}\n\t\t} else if (strcmp(pname, \"mppssp\") == 0) {\n\t\t\tretval = getlong(pres, &value);\n\t\t\tif (retval != PBSE_NONE)\n\t\t\t\tcontinue;\n\t\t\tretval = getlong(used, &num);\n\t\t\tif (retval != PBSE_NONE)\n\t\t\t\tcontinue;\n\t\t\tif (num > value) 
{\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"mppssp %lu exceeded limit %lu\",\n\t\t\t\t\tnum, value);\n\t\t\t\treturn (TRUE);\n\t\t\t}\n\t\t}\n\t}\n\n\treturn (FALSE);\n}\n\n/**\n * @brief\n *\tcheck attr value limits of job\n *\n * @param[in] pjob - pointer to job\n * @param[in] recover - recovering mode for MoM\n *\n * @return\tint\n * @retval\t1\tjob exceeded a limit (or a node asked for it to die)\n * @retval\t0\tjob is within its limits\n *\n */\n\nint\njob_over_limit(job *pjob, int recover)\n{\n\tattribute *used;\n\tresource *limresc;\n\tresource *useresc;\n\tstruct resource_def *rd;\n\tu_long total_cpu, total_mem;\n\tu_long *total;\n\tint i;\n\tu_long limit;\n\tchar *units;\n\n\tif (mom_over_limit(pjob)) {\t\t     /* check my own limits */\n\t\tpjob->ji_nodekill = pjob->ji_nodeid; /* no more POLL's */\n\t\treturn 1;\n\t}\n\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0) /* not MS */\n\t\treturn 0;\n\n\tif (pjob->ji_nodekill >= pjob->ji_numnodes) {\n\t\tchar *msgbuf = NULL;\n\n\t\tpbs_asprintf(&msgbuf,\n\t\t\t     \"warning: job %s ji_nodekill=%d >= ji_numnodes=%d\",\n\t\t\t     pjob->ji_qs.ji_jobid, pjob->ji_nodekill, pjob->ji_numnodes);\n\t\tlog_err(-1, __func__, msgbuf);\n\t\tfree(msgbuf);\n\t} else if (pjob->ji_nodekill != TM_ERROR_NODE) {\n\t\thnodent *pnode = &pjob->ji_hosts[pjob->ji_nodekill];\n\n\t\t/* special case EOF */\n\t\tif (pnode->hn_sister == SISTER_EOF) {\n\t\t\tif ((reliable_job_node_find(&pjob->ji_failed_node_list, pnode->hn_host) != NULL) || (do_tolerate_node_failures(pjob)) || recover == 2) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"ignoring node EOF %d from failed mom %s as job is tolerant of node failures\", pjob->ji_nodekill, pnode->hn_host ? 
pnode->hn_host : \"\");\n\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\treturn 0;\n\t\t\t} else {\n\t\t\t\tsprintf(log_buffer, \"node EOF %d (%s)\",\n\t\t\t\t\tpjob->ji_nodekill,\n\t\t\t\t\tpnode->hn_host);\n\t\t\t\tlog_event(PBSEVENT_JOB | PBSEVENT_FORCE,\n\t\t\t\t\t  PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\tpjob->ji_qs.ji_un.ji_momt.ji_exitstat = JOB_EXEC_RERUN_SIS_FAIL;\n\t\t\t\t(void) kill_job(pjob, SIGKILL);\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t}\n\t\tsprintf(log_buffer, \"node %d (%s) requested job die, code %d\",\n\t\t\tpjob->ji_nodekill, pnode->hn_host, pnode->hn_sister);\n\t\treturn 1;\n\t}\n\n#ifdef NAS /* localmod 015 */\n\t/* Check spool usage on the MS */\n\tif (spool_over_limit(pjob)) {\n\t\tsprintf(log_buffer, \"spool usage job total %luKB exceeded limit %luKB\",\n\t\t\tspool_usage(pjob), spoolsize);\n\t\tpjob->ji_nodekill = pjob->ji_nodeid;\n\t\treturn 1;\n\t}\n#endif /* localmod 015 */\n\n\t/* sum up cput and mem for all nodes */\n\ttotal_cpu = 0;\n\ttotal_mem = 0;\n\tfor (i = 0; i < pjob->ji_numnodes - 1; i++) {\n\t\tnoderes *nr = &pjob->ji_resources[i];\n\n\t\ttotal_cpu += nr->nr_cput;\n\t\ttotal_mem += nr->nr_mem;\n\t}\n\n\tused = get_jattr(pjob, JOB_ATR_resc_used);\n\tfor (limresc = (resource *) GET_NEXT(get_jattr_list(pjob, JOB_ATR_resource));\n\t     limresc != NULL;\n\t     limresc = (resource *) GET_NEXT(limresc->rs_link)) {\n\n\t\tif (!is_attr_set(&limresc->rs_value))\n\t\t\tcontinue;\n\n\t\trd = limresc->rs_defin;\n\t\t/* this is so we don't make extra calls to find_resc_entry */\n\t\tif (strcmp(rd->rs_name, \"cput\") != 0 &&\n\t\t    strcmp(rd->rs_name, \"mem\") != 0)\n\t\t\tcontinue;\n\n\t\tuseresc = find_resc_entry(used, rd);\n\t\tif (useresc == NULL)\n\t\t\tcontinue;\n\t\tif (!is_attr_set(&useresc->rs_value))\n\t\t\tcontinue;\n\n\t\tif (strcmp(rd->rs_name, \"cput\") == 0) {\n\t\t\ttotal_cpu += gettime(useresc);\n\t\t\tlimit = 
gettime(limresc);\n\n\t\t\tunits = \"secs\";\n\t\t\ttotal = &total_cpu;\n\t\t\tif (limit < total_cpu)\n\t\t\t\tbreak;\n\t\t} else if (strcmp(rd->rs_name, \"mem\") == 0) {\n\t\t\ttotal_mem += getsize(useresc);\n\t\t\tlimit = getsize(limresc);\n\n\t\t\tunits = \"kb\";\n\t\t\ttotal = &total_mem;\n\t\t\tif (enforce_mem && (limit < total_mem))\n\t\t\t\tbreak;\n\t\t}\n\t}\n\tif (limresc == NULL)\n\t\treturn 0;\n\n\tsprintf(log_buffer, \"%s job total %lu %s exceeded limit %lu %s\",\n\t\trd->rs_name, *total, units, limit, units);\n\tpjob->ji_nodekill = pjob->ji_nodeid;\n\treturn 1;\n}\n\n#ifdef NAS_UNKILL /* localmod 011 */\n\n/**\n * @brief\n *\tfree_kp_list_entries(head) - delete_link() and free() entries of a linked\n *\tlist\n *\n * @param[in] head - pointer to pbs_list_head\n *\n * @return\tVoid\n *\n */\nvoid\nfree_kp_list_entries(pbs_list_head *head)\n{\n\tkp *entry;\n\tkp *next;\n\n\tentry = (kp *) GET_NEXT(*head);\n\twhile (entry) {\n\t\tnext = (kp *) GET_NEXT(entry->kp_link);\n\t\tdelete_link(&entry->kp_link);\n\t\tfree(entry);\n\t\tentry = next;\n\t}\n\tCLEAR_HEAD((*head));\n}\n\n/**\n * @brief\n *\tkp_comment_node() - Set a node comment regarding the presence of unkillable\n *\tprocesses. 
Based on Altair's offline_job_vnodes().\n *\n * @return\tVoid\n *\n */\nvoid\nkp_comment_node(void)\n{\n\tstatic char id[] = \"offline_node\";\n\tstatic char *cmdbuf = NULL;\n\tstatic char *cmdprefix = \"qmgr -c 'set node \";\n\tstatic char *cmdmidfix = \"comment = \\\"\";\n\tstatic char *cmdsuffix = \"unkillable process\\\"'\";\n\tchar linebuf[_POSIX_ARG_MAX];\n\tlong execmax = _POSIX_ARG_MAX;\n\tsize_t linebufmax = sizeof(linebuf);\n\ttime_t now;\n\n\t/*\n\t ** Prepare cmdbuf with prefix up to before the node name\n\t */\n\tif (cmdbuf == NULL) {\n\t\tcmdbuf = malloc(execmax);\n\t\tif (cmdbuf == NULL) {\n\t\t\tlog_err(errno, id, \"cmdbuf malloc\");\n\t\t\treturn;\n\t\t}\n\t}\n\tif (snprintf(cmdbuf, execmax, \"%s/bin/%s\",\n\t\t     pbs_conf.pbs_exec_path, cmdprefix) >= execmax) {\n\t\tlog_err(-1, id, \"cmdbuf overflow\");\n\t\treturn;\n\t}\n\n\t/*\n\t ** Write out the rest of the command\n\t */\n\tnow = time(0);\n\tif (snprintf(linebuf, linebufmax, \"%s %s%.24s: %s\",\n\t\t     mom_short_name, cmdmidfix,\n\t\t     ctime(&now), cmdsuffix) >= linebufmax) {\n\t\tlog_err(-1, id, \"overflow of linebuf\");\n\t\treturn;\n\t}\n\n\t/* cmdbuf length + linebuf length + terminating character */\n\tif (strlen(cmdbuf) + strlen(linebuf) + 1 > execmax) {\n\t\tlog_err(-1, id, \"cmdbuf overflow\");\n\t\treturn;\n\t}\n\t(void) strcat(cmdbuf, linebuf);\n\n\tif (system(cmdbuf) == -1)\n\t\tlog_err(errno, id, \"attempt to set node comment failed\");\n}\n#endif /* localmod 011 */\n\nvoid\ndorestrict_user(void)\n{\n\tstatic char id[] = \"restrict_user\";\n\tpid_t *allpids(void);\n\tpid_t *pids = NULL;\n\tstatic pid_t mom_sid = -1;\n\tint i = 0;\n\tjob *pjob = NULL, *hjob = NULL;\n\tpbs_task *ptask = NULL;\n\tstatic resource_def *prsdef = NULL;\n\tresource *pplace = NULL;\n\tint j = 0;\n\tint found_exempt = 0;\n\tstruct passwd *pwent = NULL;\n\tstatic uid_t uid_dataservice = -1;\n\tchar errmsg[PBS_MAX_DB_ERR];\n\tchar *usr = NULL;\n\n#ifdef NAS_UNKILL /* localmod 011 */\n\tpbs_list_head 
new_killed_procs;\n\tkp *current_kp, *prev_kp;\n\n\tCLEAR_HEAD(new_killed_procs);\n#endif /* localmod 011 */\n\n\tif (!restrict_user)\n\t\treturn;\n\n\tif (pbs_conf.start_server && uid_dataservice == -1) {\n\t\t/* setting uid_dataservice to 0 to prevent infinite logging of error message if the call to\n\t\t *  pbs_get_dataservice_usr fails\n\t\t */\n\t\tuid_dataservice = 0;\n\n\t\t/* Database user must be an exempted user\n\t\t *\n\t\t * errmsg is set to a default value and passed to the pbs_get_dataservice_usr API. It may be\n\t\t * overwritten inside the pbs_get_dataservice_usr API by some other erroneous condition.\n\t\t * The max possible length of the error message is set to PBS_MAX_DB_ERR.\n\t\t *\n\t\t * On success the dataservice username is returned, which should be freed by the caller to\n\t\t * prevent a mem leak.\n\t\t */\n\t\tif ((usr = pbs_get_dataservice_usr(errmsg, PBS_MAX_DB_ERR)) != NULL) {\n\t\t\t/* usr now contains the dataservice user name. Get the uid of the dataservice\n\t\t\t * user name using the getpwnam() API\n\t\t\t */\n\n\t\t\tif (((pwent = getpwnam(usr)) == NULL)) {\n\t\t\t\tsprintf(log_buffer, \"user %s doesn't exist\", usr);\n\t\t\t\tlog_event(PBSEVENT_SYSTEM, 0, LOG_DEBUG, id,\n\t\t\t\t\t  log_buffer);\n\t\t\t} else {\n\t\t\t\tuid_dataservice = pwent->pw_uid;\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"Dataservice user %s is an exempted user \",\n\t\t\t\t\tusr);\n\t\t\t\tlog_event(PBSEVENT_SYSTEM, 0, LOG_DEBUG, id,\n\t\t\t\t\t  log_buffer);\n\t\t\t}\n\t\t\tfree(usr);\n\t\t}\n\t}\n\n\treqnum++;\n\tif ((pids = allpids()) == NULL)\n\t\treturn;\n\n\tif (prsdef == NULL)\n\t\tprsdef = &svr_resc_def[RESC_PLACE];\n\n#ifndef WIN32\n\tif (mom_sid == -1) {\n\t\tmom_sid = getsid(0);\n\t\tDBPRT((\"%s: set mom sid %d\\n\", id, mom_sid))\n\t}\n#endif\n\tfor (i = 0; pids[i] != -1; i++) {\n\t\tpid_t pid = pids[i];\n\t\tint ret;\n\t\tpid_t procsid;\n\t\tuid_t uid;\n\t\tchar comm[30];\n#ifdef WIN32\n\t\tchar *uname = NULL; /* dummy */\n\t\tret = 
dep_procinfo(pid, &procsid, &uid, uname, 0, comm, sizeof(comm));\n#else\n\t\tret = dep_procinfo(pid, &procsid, &uid, comm, sizeof(comm));\n#endif\n\t\tif (ret != TM_OKAY) {\n\t\t\tDBPRT((\"%s: no info pid %d\\n\", id, pid))\n\t\t\tcontinue;\n\t\t}\n\n\t\t/*\n\t\t ** Ignore processes within MOM's session so we do not\n\t\t ** kill stagein procs where a job does not yet exist.\n\t\t */\n\t\tif (procsid == mom_sid) {\n\t\t\tDBPRT((\"%s: MOM session pid %d uid %d comm %s\\n\",\n\t\t\t       id, pid, uid, comm))\n\t\t\tcontinue;\n\t\t}\n\n\t\t/* Ignore system processes. */\n\t\tif (uid <= (uid_t) restrict_user_maxsys)\n\t\t\tcontinue;\n\t\t/* Ignore the postgres process */\n\t\tif (uid == uid_dataservice)\n\t\t\tcontinue;\n\n\t\tfound_exempt = 0;\n\t\tfor (j = 0; j < NUM_RESTRICT_USER_EXEMPT_UIDS; j++) {\n\t\t\tif (restrict_user_exempt_uids[j] == 0)\n\t\t\t\tbreak;\n\n\t\t\tif (uid == restrict_user_exempt_uids[j]) {\n\t\t\t\tfound_exempt = 1;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tif (found_exempt)\n\t\t\tcontinue;\n\n\t\tfor (pjob = (job *) GET_NEXT(svr_alljobs);\n\t\t     pjob;\n\t\t     pjob = (job *) GET_NEXT(pjob->ji_alljobs)) {\n\t\t\tif (pjob->ji_qs.ji_un.ji_momt.ji_exuid == uid)\n\t\t\t\tbreak;\n\t\t}\n\n\t\tif (pjob == NULL) /* no job with same uid */\n\t\t\tgoto badguy;\n\n\t\t/*\n\t\t ****************************************************\n\t\t ** WARNING\n\t\t ** THIS IS PROTOTYPE CODE ... 
DO NOT USE\n\t\t ****************************************************\n\t\t ** Job with same uid exists but we are not doing any\n\t\t ** special handling of aliens, so it doesn't matter if\n\t\t ** this is part of a job or not.\n\t\t */\n\t\tif (alien_attach == 0 && alien_kill == 0)\n\t\t\tcontinue;\n\n\t\thjob = pjob; /* save matching job */\n\n\t\tfor (pjob = (job *) GET_NEXT(svr_alljobs);\n\t\t     pjob;\n\t\t     pjob = (job *) GET_NEXT(pjob->ji_alljobs)) {\n\t\t\tif (pjob->ji_qs.ji_un.ji_momt.ji_exuid != uid)\n\t\t\t\tcontinue;\n\t\t\t/* job should be running */\n\t\t\tif (!check_job_substate(pjob, JOB_SUBSTATE_RUNNING) && !check_job_substate(pjob, JOB_SUBSTATE_PRERUN))\n\t\t\t\tcontinue;\n\n\t\t\tfor (ptask = (pbs_task *) GET_NEXT(pjob->ji_tasks);\n\t\t\t     ptask != NULL;\n\t\t\t     ptask = (pbs_task *)\n\t\t\t\t     GET_NEXT(ptask->ti_jobtask)) {\n\t\t\t\tpid_t tasksid;\n\n\t\t\t\ttasksid = ptask->ti_qs.ti_sid;\n\t\t\t\t/* DEAD task */\n\t\t\t\tif (tasksid <= 1)\n\t\t\t\t\tcontinue;\n\n\t\t\t\tif (procsid == tasksid)\n\t\t\t\t\tbreak;\n\t\t\t}\n\n\t\t\tif (ptask != NULL)\n\t\t\t\tbreak;\n\t\t}\n\t\t/*\n\t\t ** If pjob is not NULL, the process is part of a job.\n\t\t ** We do not want to touch it.\n\t\t */\n\t\tif (pjob != NULL)\n\t\t\tcontinue;\n\n\t\t/*\n\t\t ** We are not going to attach the alien; here we check\n\t\t ** to see if the job has the node \"excl\".  If so, we\n\t\t ** leave it alone.  
If the job is not \"excl\", we kill\n\t\t ** the alien.\n\t\t */\n\t\tif (alien_kill) {\n\t\t\tpplace = find_resc_entry(get_jattr(hjob, JOB_ATR_resource), prsdef);\n\t\t\tif (pplace && pplace->rs_value.at_val.at_str) {\n\t\t\t\tif (strstr(pplace->rs_value.at_val.at_str,\n\t\t\t\t\t   \"excl\"))\n\t\t\t\t\tcontinue;\n\t\t\t}\n\t\t\t/* fall through to \"badguy\" */\n\t\t}\n\n\t\t/*\n\t\t ** From here on, we are looking at an 'alien' process, i.e.\n\t\t ** a process that is not part of a job.\n\t\t */\n\t\tif (alien_attach && /* attach the alien */\n\t\t    procsid > 1) {  /* only if sid is good */\n\t\t\t/*\n\t\t\t **\tCreate a new task for the session.\n\t\t\t */\n\t\t\tptask = momtask_create(hjob);\n\t\t\tif (ptask == NULL) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"%s: task create failed\", id);\n\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t  LOG_NOTICE, hjob->ji_qs.ji_jobid,\n\t\t\t\t\t  log_buffer);\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tstrcpy(ptask->ti_qs.ti_parentjobid,\n\t\t\t       hjob->ji_qs.ji_jobid);\n\t\t\t/*\n\t\t\t **\tThe parent and self virtual nodes are not known.\n\t\t\t */\n\t\t\tptask->ti_qs.ti_parentnode = TM_ERROR_NODE;\n\t\t\tptask->ti_qs.ti_myvnode = TM_ERROR_NODE;\n\t\t\tptask->ti_qs.ti_parenttask = TM_INIT_TASK;\n\t\t\tptask->ti_qs.ti_sid = procsid;\n\t\t\tptask->ti_qs.ti_status = TI_STATE_RUNNING;\n\t\t\tptask->ti_flags |= TI_FLAGS_ORPHAN;\n\t\t\t(void) task_save(ptask);\n\n\t\t\tif (!check_job_substate(hjob, JOB_SUBSTATE_RUNNING)) {\n\t\t\t\tset_job_state(hjob, JOB_STATE_LTR_RUNNING);\n\t\t\t\tset_job_substate(hjob, JOB_SUBSTATE_RUNNING);\n\t\t\t\tjob_save(hjob);\n\t\t\t}\n\n\t\t\t/*\n\t\t\t ** Add to list of polled jobs if it isn't\n\t\t\t ** already there.\n\t\t\t */\n\t\t\tif (is_linked(&mom_polljobs,\n\t\t\t\t      &hjob->ji_jobque) == 0) {\n\t\t\t\tappend_link(&mom_polljobs,\n\t\t\t\t\t    &hjob->ji_jobque, hjob);\n\t\t\t}\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"%s: pid %d sid %d cmd %s attached \"\n\t\t\t\t\"as task 
%8.8X\",\n\t\t\t\tid,\n\t\t\t\tpid, procsid, comm, ptask->ti_qs.ti_task);\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  hjob->ji_qs.ji_jobid, log_buffer);\n\n\t\t\t/*\n\t\t\t ** Do any dependent attach operation.\n\t\t\t */\n\t\t\tif (dep_attach(ptask) != TM_OKAY) {\n\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t  LOG_NOTICE, hjob->ji_qs.ji_jobid,\n\t\t\t\t\t  log_buffer);\n\t\t\t}\n\t\t\tcontinue;\n\t\t}\n\n\t\t/*\n\t\t ** We are not going to attach the alien, here we check\n\t\t ** to see if the job has the node \"excl\".  If so, we\n\t\t ** leave it alone.  If the job is not \"excl\", we kill\n\t\t ** the alien.\n\t\t */\n\t\tif (alien_kill) {\n\t\t\tpplace = find_resc_entry(get_jattr(hjob, JOB_ATR_resource), prsdef);\n\t\t\tif (pplace && pplace->rs_value.at_val.at_str) {\n\t\t\t\tif (strstr(pplace->rs_value.at_val.at_str,\n\t\t\t\t\t   \"excl\"))\n\t\t\t\t\tcontinue;\n\t\t\t}\n\t\t\t/* fall through to \"badguy\" */\n\t\t}\n\n\tbadguy:\n\n#ifdef WIN32\n\t\tlog_err(-1, id, \"not supported function\");\n\n#else\n#ifdef NAS_UNKILL /* localmod 011 */\n\t\tif ((current_kp = (kp *) malloc(sizeof(kp))) == NULL) {\n\t\t\tlog_err(errno, id, \"malloc\");\n\t\t\texit(1);\n\t\t}\n\n\t\t/*\n\t\t ** Gather indentifying information for the process that should\n\t\t ** be killed\n\t\t */\n\t\tcurrent_kp->pid = pid;\n\t\ttime(&current_kp->kill_time);\n\t\tCLEAR_LINK(current_kp->kp_link);\n\t\tret = kill_procinfo(pid, &current_kp->ppid, &current_kp->start_time);\n\t\tif (ret != TM_OKAY) {\n\t\t\t/*\n\t\t\t ** Give up on maintaining a kill history for this\n\t\t\t ** process, but still continue with kill attempts\n\t\t\t */\n\t\t\tsprintf(log_buffer, \"unable to gather additional info for pid %d(%s) to track kill history\",\n\t\t\t\tpid, comm);\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_SERVER,\n\t\t\t\t  LOG_DEBUG, id, log_buffer);\n\n\t\t\tfree(current_kp);\n\t\t\tcurrent_kp = NULL;\n\t\t} else {\n\t\t\t/*\n\t\t\t ** Search 
through the list of processes we have already\n\t\t\t ** tried to kill.\n\t\t\t */\n\t\t\tfor (prev_kp = (kp *) GET_NEXT(killed_procs);\n\t\t\t     prev_kp;\n\t\t\t     prev_kp = (kp *) GET_NEXT(prev_kp->kp_link)) {\n\t\t\t\tif (current_kp->pid == prev_kp->pid &&\n\t\t\t\t    current_kp->ppid == prev_kp->ppid &&\n\t\t\t\t    current_kp->start_time == prev_kp->start_time) {\n\t\t\t\t\tif (time(0) - prev_kp->kill_time > KP_WAIT_TIME) {\n\t\t\t\t\t\t/*\n\t\t\t\t\t\t ** We've determined we have an\n\t\t\t\t\t\t ** unkillable process - set a\n\t\t\t\t\t\t ** server comment for this\n\t\t\t\t\t\t ** node, log a message, then\n\t\t\t\t\t\t ** set flags to have the mom\n\t\t\t\t\t\t ** quickly exit. For good\n\t\t\t\t\t\t ** measure we cleanup the\n\t\t\t\t\t\t ** memory we've malloc'd\n\t\t\t\t\t\t */\n\t\t\t\t\t\tfree(current_kp);\n\t\t\t\t\t\tfree_kp_list_entries(&killed_procs);\n\t\t\t\t\t\tfree_kp_list_entries(&new_killed_procs);\n\t\t\t\t\t\tkp_comment_node();\n\n\t\t\t\t\t\tsprintf(log_buffer, \"SEC_EVENT |unkillable process|host %s\", mom_short_name);\n\t\t\t\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE, id, log_buffer);\n\n\t\t\t\t\t\tmom_run_state = 0;\n\t\t\t\t\t\tnext_sample_time = 1;\n\n\t\t\t\t\t\treturn;\n\t\t\t\t\t} else {\n\t\t\t\t\t\t/*\n\t\t\t\t\t\t ** Again attempt to kill the\n\t\t\t\t\t\t ** process, keeping the\n\t\t\t\t\t\t ** previous identifying info\n\t\t\t\t\t\t */\n\t\t\t\t\t\tfree(current_kp);\n\t\t\t\t\t\tcurrent_kp = prev_kp;\n\t\t\t\t\t\tdelete_link(&prev_kp->kp_link);\n\t\t\t\t\t}\n\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t/*\n\t\t ** Build a new list of processes we've attempted to kill\n\t\t */\n\t\tif (current_kp != NULL)\n\t\t\tappend_link(&new_killed_procs, &current_kp->kp_link, current_kp);\n#endif\t\t  /* localmod 011 */\n\t\tDBPRT((\"%s: KILL pid %d sid %d\\n\", id, pid, procsid))\n\t\tif (kill(pid, SIGKILL) == 0) {\n\t\t\tsprintf(log_buffer, \"killed uid %d pid %d(%s)\",\n\t\t\t\tuid, pid, 
comm);\n\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER,\n\t\t\t\t  LOG_NOTICE, id, log_buffer);\n\t\t} else {\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"failed kill uid %d pid %d(%s)\",\n\t\t\t\tuid, pid, comm);\n\t\t\tlog_err(errno, id, log_buffer);\n\t\t}\n#endif\n\t}\n#ifdef NAS_UNKILL /* localmod 011 */\n\t/*\n\t ** Our new list is now complete and includes processes we attempted to\n\t ** kill previously but which are not dead yet. Discard the old list and\n\t ** replace with the new\n\t */\n\tfree_kp_list_entries(&killed_procs);\n\tlist_move(&new_killed_procs, &killed_procs);\n#endif /* localmod 011 */\n}\n\n/*\n * @brief\n *\tFunction called by the Libtpp layer when the network connection to\n *\tthe pbs_comm router is restored. This is the implementation for mom.\n *\n * @param[in] data - currently unused\n *\n * @return\tVoid\n *\n */\nvoid\nnet_restore_handler(void *data)\n{\n\tmom_net_up = 1;\n\tmom_net_up_time = time(0);\n\ttime_delta_hellosvr(MOM_DELTA_RESET);\n\n\tlog_event(PBSEVENT_ERROR | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER, LOG_ALERT, __func__, \"net restore handler called\");\n}\n\n/*\n * @brief\n *\tFunction called by the Libtpp layer when the network connection to\n *\tthe pbs_comm router is down. In this implementation for the mom,\n *\tthe down handler closes the server stream. 
It then goes over the\n *\tlist of jobs and closes all streams to its sisterhood.\n *\n * @param[in] data - currently unused\n *\n * @return\tVoid\n *\n */\nvoid\nnet_down_handler(void *data)\n{\n\tint num;\n\thnodent *np;\n\tjob *pjob;\n\n\tlog_event(PBSEVENT_ERROR | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER, LOG_ALERT, __func__, \"net down handler called\");\n\tif (server_stream >= 0) {\n\t\tlog_eventf(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE, msg_daemonname,\n\t\t\t   \"Closing existing server stream %d\", server_stream);\n\t\tdis_flush(server_stream);\n\t\ttpp_close(server_stream);\n\t\tserver_stream = -1;\n\t}\n\n\t/* close streams to the sisterhood for all jobs */\n\tfor (pjob = (job *) GET_NEXT(svr_alljobs);\n\t     pjob != NULL;\n\t     pjob = (job *) GET_NEXT(pjob->ji_alljobs)) {\n\n\t\tfor (num = 0, np = pjob->ji_hosts;\n\t\t     num < pjob->ji_numnodes;\n\t\t     num++, np++) {\n\t\t\tif (np->hn_stream >= 0) {\n\t\t\t\tdis_flush(np->hn_stream);\n\t\t\t\ttpp_close(np->hn_stream);\n\t\t\t\tnp->hn_stream = -1;\n\t\t\t}\n\t\t}\n\t}\n\tmom_net_up = 0;\n\tmom_net_up_time = 0;\n}\n\n/**\n * @brief\n *      This function returns a time delta that produces\n * \tshort bursts followed by longer intervals.\n *\n * \tThis is used to determine how long mom should wait before sending the next hello (in secs).\n * \tIt can be used for any similar backoff scenario.\n *\n * @param[in] mode -\tMOM_DELTA_RESET brings the delta back to bursting mode\n *\n * @return int\n * @retval >0 : time delta\n * @retval 0 : only in case of reset mode.\n */\nint\ntime_delta_hellosvr(int mode)\n{\n\tstatic int delta = 1;\n\tstatic int cnt = 1;\n\tint max_delta = 1 << 6; /* max interval will be 64s */\n\n\tDBPRT((\"%s: mode= %d, delta= %d, cnt= %d\", __func__, mode, delta, cnt))\n\n\tif (mode == MOM_DELTA_RESET) {\n\t\tdelta = 1;\n\t\tcnt = 1;\n\t\treturn 0;\n\t}\n\n\tif (cnt == 0) {\n\t\tif (delta == max_delta)\n\t\t\treturn delta;\n\n\t\tdelta <<= 1;\n\t\tcnt = delta << 1;\n\t} else\n\t\tcnt--;\n\n\treturn 
delta;\n}\n\n/**\n * @brief\n * \tResume multinode job after one or more sisters have been restarted\n *\n * @param[in] pjob - job pointer\n *\n * @return\tVoid\n *\n */\n\nvoid\nresume_multinode(job *pjob)\n{\n\tif (pjob->ji_hosts == NULL)\n\t\treturn;\n\n\tint com = IM_JOIN_RECOV_JOB;\n\thnodent *np = NULL;\n\teventent *ep = NULL;\n\tint i;\n\tfor (i = 1; i < pjob->ji_numnodes; i++) {\n\t\tnp = &pjob->ji_hosts[i];\n\n\t\tif (i == 1)\n\t\t\tep = event_alloc(pjob, com, -1, np, TM_NULL_EVENT, TM_NULL_TASK);\n\t\telse\n\t\t\tep = event_dup(ep, pjob, np);\n\n\t\tif (ep == NULL) {\n\t\t\texec_bail(pjob, JOB_EXEC_FAIL1, NULL);\n\t\t\treturn;\n\t\t}\n\n\t\tint stream = np->hn_stream;\n\t\tim_compose(stream, pjob->ji_qs.ji_jobid,\n\t\t\t   get_jattr_str(pjob, JOB_ATR_Cookie),\n\t\t\t   com, ep->ee_event, TM_NULL_TASK, IM_OLD_PROTOCOL_VER);\n\t\t(void) diswsi(stream, pjob->ji_numnodes);\n\t\t(void) diswsi(stream, pjob->ji_ports[0]);\n\t\t(void) diswsi(stream, pjob->ji_ports[1]);\n\t\tdis_flush(stream);\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\t\tsend_cred_sisters(pjob);\n#endif\n\t}\n}\n\n#ifdef WIN32\n/**\n * @brief\n *\tmain - the main program of MOM\n */\nDWORD WINAPI\n\tmain_thread(pv) void *pv;\n#else\nint\nmain(int argc, char *argv[])\n#endif\n{\n\t/* both Win32 and Unix */\n\tint rc;\n\tchar *nodename;\n\tstruct tpp_config tpp_conf;\n\tint errflg, c;\n\tint stalone = 0;\n\tint i;\n\tchar *ptr;\n\tchar *servername;\n\tunsigned int serverport;\n\tint recover = 0;\n\ttime_t time_state_update = 0;\n\tint tppfd; /* fd for rm and im comm */\n\tdouble myla;\n\ttime_t time_next_hello = 0;\n\tjob *nxpjob;\n\tjob *pjob;\n\textern time_t wait_time;\n\ttime_t getkbdtime();\n\tvoid activate_jobs();\n\tvoid idle_jobs();\n\tchar *configscriptaction = NULL;\n\tchar *inputfile = NULL;\n\tchar *scriptname = NULL;\n\tresource *prscput;\n\tresource *prswall;\n\tchar *getopt_str;\n\tint fd;\n\tu_long ipaddr;\n\tint optindinc = 0;\n\tmom_hook_input_t hook_input;\n\tchar 
path_hooks_rescdef[MAXPATHLEN + 1];\n\tint sock_bind_rm;\n\tint sock_bind_mom;\n#ifdef WIN32\n\t/* Win32 only */\n\tstruct arg_param *p = (struct arg_param *) pv;\n\tint argc;\n\tchar **argv;\n\tSERVICE_STATUS ss;\n\tint pmode = S_IREAD | S_IWRITE;\n\tstruct _timeb tval;\n\tchar *pwst = NULL;\n\tchar winsta_name[MAXPATHLEN + 1];\n\tchar desktop_name[MAXPATHLEN + 1];\n\tHWINSTA old_winsta = NULL;\n\tHWINSTA pbs_winsta = NULL;\n\tHDESK pbs_desktop = NULL;\n\tchar *pch = NULL;\n\textern char *pbs_conf_env;\n#else\n\t/* Unix only */\n\tint pmode = S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH;\n\tstruct timeval tval;\n\tstruct sigaction act;\n\tgid_t mygid;\n\textern char *optarg;\n\textern int optind;\n#endif /* WIN32 */\n\n#ifdef _POSIX_MEMLOCK\n\tint do_mlockall = 0;\n#endif\n\n#ifdef WIN32\n\t_fcloseall(); /* Close any inherited extra files, leaving stdin-err open */\n#else\n\t/* Close any inherited extra files, leaving stdin-err open */\n\tc = sysconf(_SC_OPEN_MAX);\n\twhile (--c > 2)\n\t\t(void) close(c); /* close any file desc left open by parent */\n\n\t/* the real deal or version and exit? 
*/\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n#endif\n\n\t/* If we are not run with real and effective uid of 0, forget it */\n#ifdef WIN32\n\t_set_fmode(_O_BINARY);\n\targc = p->argc;\n\targv = p->argv;\n\n\tZeroMemory(&ss, sizeof(ss));\n\tss.dwCheckPoint = 0;\n\tss.dwServiceType = SERVICE_WIN32_OWN_PROCESS | SERVICE_INTERACTIVE_PROCESS;\n\tss.dwCurrentState = g_dwCurrentState;\n\tss.dwControlsAccepted = SERVICE_ACCEPT_STOP | SERVICE_ACCEPT_SHUTDOWN;\n\tss.dwWaitHint = 3000;\n\t/*If this is a multi-instance Mom, it needs the corresponding PBS_CONF_FILE environment.*/\n\tif ((strlen(argv[0]) != strlen(\"PBS_MOM\")) && (pch = strstr(argv[0], \"PBS_MOM\"))) {\n\t\tchar *pbsconf_temp = \"PBS_CONF_FILE\";\n\t\tpch = pch + strlen(\"PBS_MOM\");\n\t\tif ((pbs_conf_env = (char *) malloc(strlen(pbsconf_temp) + strlen(pch) + 1)) != NULL) {\n\t\t\tmemset(pbs_conf_env, 0, strlen(pbsconf_temp) + strlen(pch) + 1);\n\t\t\tpbs_strncpy(pbs_conf_env, pbsconf_temp, strlen(pbsconf_temp) + strlen(pch) + 1);\n\t\t\tpbs_conf_env = strcat(pbs_conf_env, pch);\n\t\t} else {\n\t\t\tg_dwCurrentState = SERVICE_STOPPED;\n\t\t\tss.dwCurrentState = g_dwCurrentState;\n\t\t\tss.dwWin32ExitCode = ERROR_BAD_CONFIGURATION;\n\t\t\tif (g_ssHandle != 0)\n\t\t\t\tSetServiceStatus(g_ssHandle, &ss);\n\t\t\treturn (1);\n\t\t}\n\t}\n#endif\n\n\t/* set single threaded mode */\n\tpbs_client_thread_set_single_threaded_mode();\n\t/* disable attribute verification */\n\tset_no_attribute_verification();\n\n#ifdef WIN32\n\n\tif (g_ssHandle != 0)\n\t\tSetServiceStatus(g_ssHandle, &ss);\n\t/* load the pbs conf file */\n\tif (pbs_loadconf(0) == 0) {\n\t\tg_dwCurrentState = SERVICE_STOPPED;\n\t\tss.dwCurrentState = g_dwCurrentState;\n\t\tss.dwWin32ExitCode = ERROR_BAD_CONFIGURATION;\n\t\tif (g_ssHandle != 0)\n\t\t\tSetServiceStatus(g_ssHandle, &ss);\n\t\treturn (1);\n\t}\n\n\tset_log_conf(pbs_conf.pbs_leaf_name, pbs_conf.pbs_mom_node_name,\n\t\t     pbs_conf.locallog, pbs_conf.syslogfac,\n\t\t     pbs_conf.syslogsvr, 
pbs_conf.pbs_log_highres_timestamp);\n\n\tif (!isAdminPrivilege(getlogin())) {\n\t\tg_dwCurrentState = SERVICE_STOPPED;\n\t\tss.dwCurrentState = g_dwCurrentState;\n\t\tss.dwWin32ExitCode = CO_E_LAUNCH_PERMSSION_DENIED;\n\t\tif (g_ssHandle != 0)\n\t\t\tSetServiceStatus(g_ssHandle, &ss);\n\t\tfprintf(stderr, \"%s: Must be run as root\\n\", argv[0]);\n\t\treturn (1);\n\t}\n\n\tif (!has_privilege(SE_DEBUG_NAME))\n\t\tena_privilege(SE_DEBUG_NAME);\n#else\n#ifndef DEBUG\n\tif ((getuid() != 0) || (geteuid() != 0)) {\n\t\tfprintf(stderr, \"%s: Must be run as root\\n\", argv[0]);\n\t\treturn (1);\n\t}\n#endif\n#endif /* WIN32 */\n\n\t/* initialize the thread context */\n\tif (pbs_client_thread_init_thread_context() != 0) {\n#ifdef WIN32\n\t\tg_dwCurrentState = SERVICE_STOPPED;\n\t\tss.dwCurrentState = g_dwCurrentState;\n\t\tss.dwWin32ExitCode = ERROR_OUTOFMEMORY;\n\t\tif (g_ssHandle != 0)\n\t\t\tSetServiceStatus(g_ssHandle, &ss);\n#else\n\t\tfprintf(stderr, \"%s: Unable to initialize thread context\\n\",\n\t\t\targv[0]);\n\t\treturn (1);\n#endif /* WIN32 */\n\t}\n\n\tif (set_msgdaemonname(\"pbs_mom\")) {\n\t\tfprintf(stderr, \"Out of memory\\n\");\n\t\treturn 1;\n\t}\n#ifndef WIN32\n\tif (pbs_loadconf(0) == 0) {\n\t\treturn (1);\n\t}\n\tset_log_conf(pbs_conf.pbs_leaf_name, pbs_conf.pbs_mom_node_name,\n\t\t     pbs_conf.locallog, pbs_conf.syslogfac,\n\t\t     pbs_conf.syslogsvr, pbs_conf.pbs_log_highres_timestamp);\n#endif\n\tpbsgroup = getgid();\n\n\t/* Get our default service port */\n\n\tpbs_mom_port = pbs_conf.mom_service_port;\n\tdefault_server_port = pbs_conf.batch_service_port;\n\tpbs_rm_port = pbs_conf.manager_service_port;\n\n\t/* Is an alternate Mom Home path specified in pbs.conf ? 
*/\n\n\tif (pbs_conf.pbs_mom_home) {\n\t\tif (pbs_conf.pbs_home_path != NULL)\n\t\t\tfree(pbs_conf.pbs_home_path);\n#ifdef WIN32\n\t\tpbs_conf.pbs_home_path =\n\t\t\tshorten_and_cleanup_path(pbs_conf.pbs_mom_home);\n\t\tif (pbs_conf.pbs_environment) {\n\t\t\tfree(pbs_conf.pbs_environment);\n\t\t\tif ((pbs_conf.pbs_environment =\n\t\t\t\t     malloc(strlen(pbs_conf.pbs_home_path) + 17))) {\n\t\t\t\tsprintf(pbs_conf.pbs_environment, \"%s/pbs_environment\",\n\t\t\t\t\tpbs_conf.pbs_home_path);\n\t\t\t\tfix_path(pbs_conf.pbs_environment, 1);\n\t\t\t}\n\t\t}\n#else\n\t\tif ((pbs_conf.pbs_home_path = strdup(pbs_conf.pbs_mom_home)) == NULL) {\n\t\t\tfprintf(stderr, \"Unable to allocate Memory!\\n\");\n\t\t\treturn (1);\n\t\t}\n#endif\n\t}\n\n\t/*\n\t * Set the default tmp directory, which may get overridden by $tmpdir in\n\t * mom_priv/config later on.\n\t */\n\tif (set_tmpdir(pbs_conf.pbs_tmpdir) != HANDLER_SUCCESS) {\n\t\tfprintf(stderr, \"%s: Unable to configure temporary directory.\\n\", argv[0]);\n\t\treturn (1);\n\t}\n\n\terrflg = 0;\n\tgetopt_str = \"d:c:M:mNS:R:lL:a:xC:prs:n:Q:-:\";\n\twhile ((c = getopt(argc, argv, getopt_str)) != -1) {\n\t\tswitch (c) {\n\t\t\tcase 'N': /* stand alone (win), no fork (others) */\n\t\t\t\tstalone = 1;\n\t\t\t\tbreak;\n\t\t\tcase 'm':\n#ifdef WIN32\n\t\t\t\tfprintf(stderr, \"-m option not supported for Windows\\n\");\n\t\t\t\tg_dwCurrentState = SERVICE_STOPPED;\n\t\t\t\tss.dwCurrentState = g_dwCurrentState;\n\t\t\t\tss.dwWin32ExitCode = ERROR_INVALID_PARAMETER;\n\t\t\t\tif (g_ssHandle != 0)\n\t\t\t\t\tSetServiceStatus(g_ssHandle, &ss);\n\t\t\t\treturn 1;\n#endif\n\t\t\t\tmock_run = 1;\n\t\t\t\tbreak;\n\t\t\tcase 'd': /* directory */\n\t\t\t\tif (pbs_conf.pbs_home_path != NULL)\n\t\t\t\t\tfree(pbs_conf.pbs_home_path);\n\t\t\t\tpbs_conf.pbs_home_path = optarg;\n\t\t\t\tbreak;\n\t\t\tcase 'c': /* config file */\n\t\t\t\tconfig_file_specified = 1;\n\t\t\t\tpbs_strncpy(config_file, optarg, sizeof(config_file)); /* remember name 
*/\n\t\t\t\tbreak;\n\t\t\tcase 'M':\n\t\t\t\tpbs_mom_port = (unsigned int) atoi(optarg);\n\t\t\t\tif (pbs_mom_port == 0) {\n\t\t\t\t\tfprintf(stderr, \"Bad MOM port value %s\\n\",\n\t\t\t\t\t\toptarg);\n#ifdef WIN32\n\t\t\t\t\tg_dwCurrentState = SERVICE_STOPPED;\n\t\t\t\t\tss.dwCurrentState = g_dwCurrentState;\n\t\t\t\t\tss.dwWin32ExitCode = ERROR_INVALID_PARAMETER;\n\t\t\t\t\tif (g_ssHandle != 0)\n\t\t\t\t\t\tSetServiceStatus(g_ssHandle, &ss);\n#endif\n\t\t\t\t\treturn (1);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'S':\n\t\t\t\tdefault_server_port = (unsigned int) atoi(optarg);\n\t\t\t\tif (default_server_port == 0) {\n\t\t\t\t\tfprintf(stderr, \"Bad Server port value %s\\n\",\n\t\t\t\t\t\toptarg);\n#ifdef WIN32\n\t\t\t\t\tg_dwCurrentState = SERVICE_STOPPED;\n\t\t\t\t\tss.dwCurrentState = g_dwCurrentState;\n\t\t\t\t\tss.dwWin32ExitCode = ERROR_INVALID_PARAMETER;\n\t\t\t\t\tif (g_ssHandle != 0)\n\t\t\t\t\t\tSetServiceStatus(g_ssHandle, &ss);\n#endif\n\t\t\t\t\treturn (1);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'R':\n\t\t\t\tpbs_rm_port = (unsigned int) atoi(optarg);\n\t\t\t\tif (pbs_rm_port == 0) {\n\t\t\t\t\tfprintf(stderr, \"Bad RM port value %s\\n\",\n\t\t\t\t\t\toptarg);\n#ifdef WIN32\n\t\t\t\t\tg_dwCurrentState = SERVICE_STOPPED;\n\t\t\t\t\tss.dwCurrentState = g_dwCurrentState;\n\t\t\t\t\tss.dwWin32ExitCode = ERROR_INVALID_PARAMETER;\n\t\t\t\t\tif (g_ssHandle != 0)\n\t\t\t\t\t\tSetServiceStatus(g_ssHandle, &ss);\n#endif\n\t\t\t\t\treturn (1);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'l':\n#ifdef _POSIX_MEMLOCK\n\t\t\t\tdo_mlockall = 1;\n#else\n\t\t\t\tfprintf(stderr, \"-l option - mlockall not supported\\n\");\n#endif /* _POSIX_MEMLOCK */\n\t\t\t\tbreak;\n\t\t\tcase 'L':\n\t\t\t\tlog_file = optarg;\n\t\t\t\tbreak;\n\t\t\tcase 'a':\n\t\t\t\talarm_time = (int) strtol(optarg, &ptr, 10);\n\t\t\t\tif (alarm_time <= 0 || *ptr != '\\0') {\n\t\t\t\t\tfprintf(stderr,\n\t\t\t\t\t\t\"%s: bad alarm time\\n\", optarg);\n\t\t\t\t\terrflg = 
1;\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'x':\n\t\t\t\tport_care = 0;\n\t\t\t\tbreak;\n\t\t\tcase 'C':\n\t\t\t\tpath_checkpoint_from_getopt = optarg;\n\t\t\t\tbreak;\n\t\t\tcase 'p':\n\t\t\t\tif (recover == 0)\n\t\t\t\t\trecover = 2;\n\t\t\t\telse\n\t\t\t\t\terrflg = 1;\n\t\t\t\tbreak;\n\t\t\tcase 'r':\n\t\t\t\tif (recover == 0)\n\t\t\t\t\trecover = 1;\n\t\t\t\telse\n\t\t\t\t\terrflg = 1;\n\t\t\t\tbreak;\n\t\t\tcase 's':\n\t\t\t\tconfigscriptaction = optarg;\n\t\t\t\tif (strcmp(optarg, \"insert\") == 0) {\n\t\t\t\t\tif (optind == argc - 2) {\n\t\t\t\t\t\tscriptname = argv[optind];\n\t\t\t\t\t\tinputfile = argv[optind + 1];\n\t\t\t\t\t\toptindinc += 2;\n\t\t\t\t\t} else\n\t\t\t\t\t\terrflg = 1;\n\t\t\t\t} else if ((strcmp(optarg, \"remove\") == 0) ||\n\t\t\t\t\t   (strcmp(optarg, \"show\") == 0)) {\n\t\t\t\t\tif (optind == argc - 1) {\n\t\t\t\t\t\tscriptname = argv[optind];\n\t\t\t\t\t\toptindinc++;\n\t\t\t\t\t} else\n\t\t\t\t\t\terrflg = 1;\n\t\t\t\t} else if (strcmp(optarg, \"list\") != 0)\n\t\t\t\t\terrflg = 1;\n\t\t\t\tbreak;\n\t\t\tcase 'n':\n\t\t\t\tnice_val = (int) strtol(optarg, &ptr, 10);\n\t\t\t\tif ((nice_val < PRIO_MIN) ||\n\t\t\t\t    (nice_val > PRIO_MAX) ||\n\t\t\t\t    (*ptr != '\\0')) {\n\t\t\t\t\tfprintf(stderr,\n\t\t\t\t\t\t\"%s: bad nice value\\n\", optarg);\n\t\t\t\t\terrflg = 1;\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'Q':\n\t\t\t\tfprintf(stderr, \"Warning, this option is for QA testing only,  it should never be used by a production site\\n\");\n\t\t\t\tQA_testing = atol(optarg);\n\t\t\t\tbreak;\n\t\t\tcase '?':\n\t\t\tdefault:\n\t\t\t\terrflg = 1;\n\t\t}\n\t}\n\toptind += optindinc;\n\n\tif (errflg || optind != argc) {\n#ifdef WIN32\n\t\tg_dwCurrentState = SERVICE_STOPPED;\n\t\tss.dwCurrentState = g_dwCurrentState;\n\t\tss.dwWin32ExitCode = ERROR_INVALID_PARAMETER;\n\t\tif (g_ssHandle != 0)\n\t\t\tSetServiceStatus(g_ssHandle, &ss);\n\t\tusage2(argv[0]); /* exits */\n\t\treturn (1);\n#else\n\t\tusage(argv[0]); /* exits */\n#endif /* WIN32 
*/\n\t}\n\n\tumask(022);\n\n#ifdef WIN32\n\tsave_env();\n#endif\n\t/*\n\t * The following is code to reduce security risks\n\t * start out with standard umask, system resource limit infinite\n\t */\n\tif ((num_var_env = setup_env(pbs_conf.pbs_environment)) == -1) {\n#ifdef WIN32\n\t\tg_dwCurrentState = SERVICE_STOPPED;\n\t\tss.dwCurrentState = g_dwCurrentState;\n\t\tss.dwWin32ExitCode = ERROR_INVALID_ENVIRONMENT;\n\t\tif (g_ssHandle != 0)\n\t\t\tSetServiceStatus(g_ssHandle, &ss);\n\t\treturn (1);\n#else\n\t\texit(1);\n#endif /* WIN32 */\n\t}\n\n#ifndef WIN32 /* ---- UNIX ------------------------------------------*/\n\tmygid = getgid();\n\t(void) setgroups(1, &mygid); /* secure suppl. groups */\n\n#if defined(RLIM64_INFINITY)\n\t{\n\t\tstruct rlimit64 rlimit;\n\t\tint curerror;\n\n\t\trlimit.rlim_cur = RLIM64_INFINITY;\n\t\trlimit.rlim_max = RLIM64_INFINITY;\n\n\t\t(void) setrlimit64(RLIMIT_CPU, &rlimit);\n\t\t(void) setrlimit64(RLIMIT_FSIZE, &rlimit);\n\t\t(void) setrlimit64(RLIMIT_DATA, &rlimit);\n\t\tif (getrlimit64(RLIMIT_STACK, &orig_stack_size) != -1) {\n\t\t\tif ((orig_stack_size.rlim_cur != RLIM64_INFINITY) && (orig_stack_size.rlim_cur < MIN_STACK_LIMIT)) {\n\t\t\t\trlimit.rlim_cur = MIN_STACK_LIMIT;\n\t\t\t\trlimit.rlim_max = MIN_STACK_LIMIT;\n\t\t\t\tif (setrlimit64(RLIMIT_STACK, &rlimit) == -1) {\n\t\t\t\t\tchar msgbuf[] = \"Stack limit setting failed\";\n\t\t\t\t\tcurerror = errno;\n\t\t\t\t\tlog_err(curerror, __func__, msgbuf);\n\t\t\t\t\tsprintf(log_buffer, \"%s errno=%d\", msgbuf, curerror);\n\t\t\t\t\tlog_record(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER, LOG_ERR, (char *) __func__, log_buffer);\n\t\t\t\t\texit(1);\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\tchar msgbuf[] = \"Getting current Stack limit failed\";\n\n\t\t\tcurerror = errno;\n\t\t\tlog_err(curerror, __func__, msgbuf);\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"%s errno=%d\", msgbuf, curerror);\n\t\t\tlog_record(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER, LOG_ERR, (char *) __func__, 
log_buffer);\n\t\t\texit(1);\n\t\t}\n\n\t\trlimit.rlim_cur = RLIM64_INFINITY;\n\t\trlimit.rlim_max = RLIM64_INFINITY;\n\n#ifdef RLIMIT_NPROC\n\t\t(void) getrlimit64(RLIMIT_NPROC, &orig_nproc_limit); /* get for later */\n\t\tif (setrlimit64(RLIMIT_NPROC, &rlimit) == -1) {\t     /* set unlimited */\n\t\t\tchar msgbuf[] = \"setrlimit NPROC setting failed\";\n\t\t\tcurerror = errno;\n\t\t\tlog_err(curerror, __func__, msgbuf);\n\t\t}\n#endif /* RLIMIT_NPROC */\n#ifdef RLIMIT_RSS\n\t\t(void) setrlimit64(RLIMIT_RSS, &rlimit);\n#endif /* RLIMIT_RSS */\n#ifdef RLIMIT_VMEM\n\t\t(void) setrlimit64(RLIMIT_VMEM, &rlimit);\n#endif /* RLIMIT_VMEM */\n#ifdef RLIMIT_MEMLOCK\n\t\t(void) setrlimit64(RLIMIT_MEMLOCK, &rlimit);\n#endif /* RLIMIT_MEMLOCK */\n\t}\n\n#else /* set rlimit 32 bit */\n\t{\n\t\tstruct rlimit rlimit;\n\t\tint curerror;\n\n\t\trlimit.rlim_cur = RLIM_INFINITY;\n\t\trlimit.rlim_max = RLIM_INFINITY;\n\t\t(void) setrlimit(RLIMIT_CPU, &rlimit);\n#ifdef RLIMIT_NPROC\n\t\t(void) getrlimit(RLIMIT_NPROC, &orig_nproc_limit); /* get for later */\n\t\tif (setrlimit(RLIMIT_NPROC, &rlimit) == -1) {\t   /* set unlimited */\n\t\t\tchar msgbuf[] = \"setrlimit NPROC setting failed\";\n\t\t\tcurerror = errno;\n\t\t\tlog_err(curerror, __func__, msgbuf);\n\t\t}\n#endif /* RLIMIT_NPROC */\n#ifdef RLIMIT_RSS\n\t\t(void) setrlimit(RLIMIT_RSS, &rlimit);\n#endif /* RLIMIT_RSS */\n#ifdef RLIMIT_VMEM\n\t\t(void) setrlimit(RLIMIT_VMEM, &rlimit);\n#endif /* RLIMIT_VMEM */\n#ifdef RLIMIT_MEMLOCK\n\t\t(void) setrlimit(RLIMIT_MEMLOCK, &rlimit);\n#endif /* RLIMIT_MEMLOCK */\n#ifndef linux\n\t\t(void) setrlimit(RLIMIT_FSIZE, &rlimit);\n\t\t(void) setrlimit(RLIMIT_DATA, &rlimit);\n\t\t(void) getrlimit(RLIMIT_STACK, &orig_stack_size); /* get for later */\n\t\t(void) setrlimit(RLIMIT_STACK, &rlimit);\n#else\n\t\tif (getrlimit(RLIMIT_STACK, &orig_stack_size) != -1) {\n\t\t\tif ((orig_stack_size.rlim_cur != RLIM_INFINITY) && (orig_stack_size.rlim_cur < MIN_STACK_LIMIT)) {\n\t\t\t\trlimit.rlim_cur = 
MIN_STACK_LIMIT;\n\t\t\t\trlimit.rlim_max = MIN_STACK_LIMIT;\n\t\t\t\tif (setrlimit(RLIMIT_STACK, &rlimit) == -1) {\n\t\t\t\t\tchar msgbuf[] = \"Stack limit setting failed\";\n\t\t\t\t\tcurerror = errno;\n\t\t\t\t\tlog_err(curerror, __func__, msgbuf);\n\t\t\t\t\tsprintf(log_buffer, \"%s errno=%d\", msgbuf, curerror);\n\t\t\t\t\tlog_record(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER, LOG_ERR, (char *) __func__, log_buffer);\n\t\t\t\t\texit(1);\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\tchar msgbuf[] = \"Getting current Stack limit failed\";\n\t\t\tcurerror = errno;\n\t\t\tlog_err(curerror, __func__, msgbuf);\n\t\t\tsprintf(log_buffer, \"%s errno=%d\", msgbuf, curerror);\n\t\t\tlog_record(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER, LOG_ERR, (char *) __func__, log_buffer);\n\t\t\texit(1);\n\t\t}\n#endif /* not linux */\n\t}\n#endif /* !RLIM64_INFINITY */\n\n#endif /* !WIN32 */\n\n\tif ((job_attr_idx = cr_attrdef_idx(job_attr_def, JOB_ATR_LAST)) == NULL) {\n\t\tlog_err(errno, __func__, \"Failed creating job attribute search index\");\n\t\treturn (-1);\n\t}\n\tif (cr_rescdef_idx(svr_resc_def, svr_resc_size) != 0) {\n\t\tlog_err(errno, __func__, \"Failed creating resc definition search index\");\n\t\treturn (-1);\n\t}\n\n\t/* initialize pointers in resource_def array */\n\tfor (i = 0; i < (svr_resc_size - 1); ++i)\n\t\tsvr_resc_def[i].rs_next = &svr_resc_def[i + 1];\n\t/* last entry is left with null pointer */\n\n\t/* set up and validate home paths    */\n\n\tc = 0;\n\tmom_home = mk_dirs(\"mom_priv\");\n\tpath_jobs = mk_dirs(\"mom_priv/jobs/\");\n\tpath_hooks = mk_dirs(\"mom_priv/hooks/\");\n\tpath_hooks_workdir = mk_dirs(\"mom_priv/hooks/tmp/\");\n#ifdef WIN32\n\tpath_epilog = mk_dirs(\"mom_priv/epilogue.bat\");\n\tpath_prolog = mk_dirs(\"mom_priv/prologue.bat\");\n#else\n\tpath_epilog = mk_dirs(\"mom_priv/epilogue\");\n\tpath_prolog = mk_dirs(\"mom_priv/prologue\");\n#endif\n\tpath_log = mk_dirs(\"mom_logs\");\n\tpath_spool = mk_dirs(\"spool/\");\n\tpath_undeliv = 
mk_dirs(\"undelivered/\");\n\tpath_addconfigs = mk_dirs(\"mom_priv/config.d\");\n\n\t/* open log file while std in,out,err still open, forces to fd 4 */\n#ifdef WIN32\n\t/* Don't worry about return value of log_open() like */\n\t/* in the server code.  A -1 is returned in log_open() for all */\n\t/* failure cases so it's hard to know if a corrupt log file is the */\n\t/* culprit, which happens occasionally under Windows. */\n\n\t/*\n\t * let SCM wait 60 seconds for log_open() to complete\n\t * as it does a network interface query, which can take time\n\t */\n\n\tss.dwCheckPoint++;\n\tss.dwWaitHint = 60000;\n\tif (g_ssHandle != 0)\n\t\tSetServiceStatus(g_ssHandle, &ss);\n\tlog_open(log_file, path_log);\n\n\t/* moved the two functions here as they now use the log functions,\n\tand these functions can be used after calling log_open() */\n\n\t/* let SCM wait 20 seconds for secure_misc_files() to complete */\n\tss.dwCheckPoint++;\n\tss.dwWaitHint = 20000;\n\tif (g_ssHandle != 0)\n\t\tSetServiceStatus(g_ssHandle, &ss);\n\n\tsecure_misc_files();\n\n\t/* let SCM wait 30 seconds for secure_mom_files() to complete */\n\tss.dwCheckPoint++;\n\tss.dwWaitHint = 30000;\n\tif (g_ssHandle != 0)\n\t\tSetServiceStatus(g_ssHandle, &ss);\n\n\tsecure_mom_files();\n\n#else\n\tif ((c = log_open(log_file, path_log)) != 0) { /* use given name */\n\t\tfprintf(stderr, \"pbs_mom: Unable to open logfile\\n\");\n\t\treturn (1);\n\t}\n#endif /* WIN32 */\n\n\tif (QA_testing != 0) {\n\t\tsprintf(log_buffer, \"Warning QA_testing option set to %lu\",\n\t\t\tQA_testing);\n\t\tlog_event(PBSEVENT_ADMIN | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_WARNING, msg_daemonname, log_buffer);\n\t}\n\n\tif (configscriptaction != NULL) {\n\t\t/* must precede chdir(mom_home) */\n\t\tdo_configs(configscriptaction, scriptname, inputfile);\n\t\t/* NOTREACHED */\n\t}\n\n\t/* change working directory to mom home (mom_priv) */\n\n\tif (chdir(mom_home) == -1) {\n\t\tperror(\"pbs_mom unable to change working 
directory to mom home\");\n#ifdef WIN32\n\t\tg_dwCurrentState = SERVICE_STOPPED;\n\t\tss.dwCurrentState = g_dwCurrentState;\n\t\tss.dwWin32ExitCode = ERROR_DIRECTORY;\n\t\tif (g_ssHandle != 0)\n\t\t\tSetServiceStatus(g_ssHandle, &ss);\n#endif\n\t\treturn (1);\n\t}\n\n#if !defined(DEBUG) && !defined(NO_SECURITY_CHECK)\n#ifdef WIN32\n\t/* For windows, don't check full path. Let system put in default */\n\t/* permissions for top-level directories */\n\tc |= chk_file_sec(path_jobs, 1, 0, WRITES_MASK ^ FILE_WRITE_EA, 0);\n\tc |= chk_file_sec(path_hooks, 1, 0, WRITES_MASK ^ FILE_WRITE_EA, 0);\n\tc |= chk_file_sec(path_hooks_workdir, 1, 0, WRITES_MASK ^ FILE_WRITE_EA, 0);\n\tc |= chk_file_sec(path_spool, 1, 1, 0, 0);\n\tc |= chk_file_sec(pbs_conf.pbs_environment, 0, 0, WRITES_MASK ^ FILE_WRITE_EA, 0);\n#else\n\tc |= chk_file_sec(path_jobs, 1, 0, S_IWGRP | S_IWOTH, 1);\n\tc |= chk_file_sec(path_hooks, 1, 0, S_IWGRP | S_IWOTH, 1);\n\tc |= chk_file_sec(path_hooks_workdir, 1, 0, S_IWGRP | S_IWOTH, 1);\n\tc |= chk_file_sec(path_spool, 1, 1, 0, 0);\n\tc |= chk_file_sec(pbs_conf.pbs_environment, 0, 0, S_IWGRP | S_IWOTH, 0);\n#endif /* WIN32 */\n\tif (c) {\n\t\tsprintf(log_buffer,\n\t\t\t\"Warning: one of chk_file_sec failed: %s, %s, %s, %s, %s\",\n\t\t\tpath_jobs, path_spool, pbs_conf.pbs_environment,\n\t\t\tpath_hooks, path_hooks_workdir);\n\t\tlog_err(0, \"mom_main\", log_buffer);\n\n#ifdef WIN32\n\t\tg_dwCurrentState = SERVICE_STOPPED;\n\t\tss.dwCurrentState = g_dwCurrentState;\n\t\tss.dwWin32ExitCode = ERROR_ACCESS_DENIED;\n\t\tif (g_ssHandle != 0)\n\t\t\tSetServiceStatus(g_ssHandle, &ss);\n#endif\n\t\treturn (3);\n\t}\n\n#endif /* not DEBUG and not NO_SECURITY_CHECK */\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n#ifdef WIN32\n\t/* Under WIN32, create structure that will be used to track child processes. 
*/\n\tif (initpids() == 0) {\n\t\tlog_err(-1, \"pbsd_init\", \"Creating pid handles table failed!\");\n\t\treturn (-1);\n\t}\n\n\t/* Let's do an extra validity check */\n\n\tif (check_executor() == 1) { /* failed on check for root */\n\t\tlog_err(-1, msg_daemonname, winlog_buffer);\n\t\treturn (3);\n\t}\n\tif (strlen(winlog_buffer) > 0) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_WARNING, msg_daemonname, winlog_buffer);\n\t}\n#endif\n\n\t/*\n\t * Set mom_host via gethostname(); if gethostname() fails, use PBS_MOM_NODE_NAME\n\t * if it is defined and complies with RFC 952/1123\n\t */\n\tc = gethostname(mom_host, (sizeof(mom_host) - 1));\n\tif (c != 0) {\n\t\t/*\n\t\t * backup plan:\n\t\t * use PBS_MOM_NODE_NAME as the hostname if it is defined and complies with RFC 952/1123\n\t\t */\n\t\tc = 0;\n\t\tif (pbs_conf.pbs_mom_node_name) {\n\t\t\tpbs_strncpy(mom_host, pbs_conf.pbs_mom_node_name, sizeof(mom_host));\n\t\t\tptr = mom_host;\n\t\t\t/* First character must be alphanumeric */\n\t\t\tif (isalnum((int) *ptr)) {\n\t\t\t\t/* Subsequent characters may also be dots or dashes */\n\t\t\t\tfor (ptr++; (c == 0) && (*ptr != '\\0'); ptr++) {\n\t\t\t\t\tif (*ptr == '.') {\n\t\t\t\t\t\t/* Disallow two dots in a row or a trailing dot */\n\t\t\t\t\t\tif (*(ptr + 1) == '.' 
|| *(ptr + 1) == '\\0')\n\t\t\t\t\t\t\tc = -1;\n\t\t\t\t\t} else if ((*ptr != '-') && !isalnum((int) *ptr)) {\n\t\t\t\t\t\tc = -1;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tc = -1;\n\t\t\t}\n\t\t} else {\n\t\t\tc = -1;\n\t\t}\n\t\tif (c != 0) {\n\t\t\tlog_err(-1, msg_daemonname, \"Unable to obtain my host name\");\n\t\t\treturn (-1);\n\t\t}\n\t}\n\n\t/*\n\t * Set mom_short_name to PBS_MOM_NODE_NAME if it is defined.\n\t * Otherwise, set mom_short_name to the return value of\n\t * gethostname(), truncated to first dot.\n\t */\n\tif (pbs_conf.pbs_mom_node_name)\n\t\t/* mom_short_name was specified explicitly using PBS_MOM_NODE_NAME */\n\t\tpbs_strncpy(mom_short_name, pbs_conf.pbs_mom_node_name, sizeof(mom_short_name));\n\telse {\n\t\t/* use gethostname(), truncated to first dot */\n\t\tpbs_strncpy(mom_short_name, mom_host, sizeof(mom_short_name));\n\t\tif ((ptr = strchr(mom_short_name, (int) '.')) != NULL)\n\t\t\t*ptr = '\\0'; /* terminate at first dot */\n\t}\n\n\t/*\n\t * Now get mom_host, which determines resources_available.host\n\t * and also the interface used to register to pbs_comm if\n\t * PBS_LEAF_NAME is unset. 
Note that mom_host will be overridden\n\t * by PBS_LEAF_NAME later on if TPP is enabled (search\n\t * pbs_conf.pbs_leaf_name to find code section).\n\t */\n\tc = get_fullhostname(mom_host, mom_host, (sizeof(mom_host) - 1));\n\tif (c == -1) {\n\t\tlog_err(-1, msg_daemonname, \"Unable to resolve my host name\");\n#ifdef WIN32\n\t\tg_dwCurrentState = SERVICE_STOPPED;\n\t\tss.dwCurrentState = g_dwCurrentState;\n\t\tss.dwWin32ExitCode = ERROR_INVALID_COMPUTERNAME;\n\t\tif (g_ssHandle != 0)\n\t\t\tSetServiceStatus(g_ssHandle, &ss);\n#endif\n\t\treturn (-1);\n\t}\n\n\tlockfds = open(\"mom.lock\", O_CREAT | O_WRONLY, pmode);\n\tif (lockfds < 0) {\n\t\t(void) strcpy(log_buffer, \"Unable to open lock file\");\n\t\tlog_err(-1, msg_daemonname, log_buffer);\n\t\t(void) strcat(log_buffer, \"\\n\");\n\t\t(void) fprintf(stderr, \"%s\", log_buffer);\n#ifdef WIN32\n\t\tg_dwCurrentState = SERVICE_STOPPED;\n\t\tss.dwCurrentState = g_dwCurrentState;\n\t\tss.dwWin32ExitCode = ERROR_LOCK_FAILED;\n\t\tif (g_ssHandle != 0)\n\t\t\tSetServiceStatus(g_ssHandle, &ss);\n#endif\n\t\treturn (1);\n\t}\n\n#ifdef WIN32\n\tsecure_file(\"mom.lock\", \"Administrators\",\n\t\t    READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED);\n#endif\n\tif (lock_file(lockfds, F_WRLCK, \"mom.lock\", 1, NULL, 0)) { /* See if other MOMs are running */\n\t\tlog_errf(errno, msg_daemonname, \"pbs_mom: another mom running\");\n\t\tfprintf(stderr, \"%s\\n\", \"pbs_mom: another mom running\");\n\t\texit(1);\n\t}\n\n\tif (read_config(NULL)) {\n\t\tfprintf(stderr, \"%s: config file(s) parsing failed\\n\", argv[0]);\n#ifdef WIN32\n\t\tg_dwCurrentState = SERVICE_STOPPED;\n\t\tss.dwCurrentState = g_dwCurrentState;\n\t\tss.dwWin32ExitCode = ERROR_FILE_INVALID;\n\t\tif (g_ssHandle != 0)\n\t\t\tSetServiceStatus(g_ssHandle, &ss);\n#endif /* WIN32 */\n\t\treturn (1);\n\t}\n\tif (pbs_rm_port != (pbs_mom_port + 1)) {\n\t\tfprintf(stderr, \"Mom RM port must be one greater than the Mom Service port\\n\");\n#ifdef 
WIN32\n\t\tg_dwCurrentState = SERVICE_STOPPED;\n\t\tss.dwCurrentState = g_dwCurrentState;\n\t\tss.dwWin32ExitCode = ERROR_FILE_INVALID;\n\t\tif (g_ssHandle != 0)\n\t\t\tSetServiceStatus(g_ssHandle, &ss);\n#endif /* WIN32 */\n\t\treturn (1);\n\t}\n\n#if MOM_ALPS /* ALPS needs libjob support */\n\t/*\n\t * This needs to be called after the config file is read and before MOM\n\t * forks so the exit value can be seen if there is a bad flag combination.\n\t */\n\tck_acct_facility_present();\n#endif /* MOM_ALPS */\n\n\t/* initialize the network interface */\n\n\tif ((sock_bind_mom = init_network(pbs_mom_port)) < 0) {\n#ifdef WIN32\n\t\terrno = WSAGetLastError();\n#endif\n\t\tc = errno;\n\t\t(void) sprintf(log_buffer,\n\t\t\t       \"server port = %u, errno = %d\",\n\t\t\t       pbs_mom_port, c);\n#ifdef WIN32\n\t\tif (c == WSAEADDRINUSE)\n#else\n\t\tif (c == EADDRINUSE)\n#endif\n\t\t\t(void) strcat(log_buffer, \", already in use\");\n\t\tlog_err(-1, msg_daemonname, log_buffer);\n\t\t(void) strcat(log_buffer, \"\\n\");\n\t\t(void) fprintf(stderr, \"%s\", log_buffer);\n#ifdef WIN32\n\t\tg_dwCurrentState = SERVICE_STOPPED;\n\t\tss.dwCurrentState = g_dwCurrentState;\n\t\tss.dwWin32ExitCode = ERROR_NETWORK_ACCESS_DENIED;\n\t\tif (g_ssHandle != 0)\n\t\t\tSetServiceStatus(g_ssHandle, &ss);\n#endif\n\t\treturn (3);\n\t}\n\n\tif ((sock_bind_rm = init_network(pbs_rm_port)) < 0) {\n\n#ifdef WIN32\n\t\terrno = WSAGetLastError();\n#endif\n\t\tc = errno;\n\t\t(void) sprintf(log_buffer,\n\t\t\t       \"resource (tcp) port = %u, errno = %d\",\n\t\t\t       pbs_rm_port, c);\n#ifdef WIN32\n\t\tif (c == WSAEADDRINUSE)\n#else\n\t\tif (c == EADDRINUSE)\n#endif\n\t\t\t(void) strcat(log_buffer, \", already in use\");\n\t\tlog_err(-1, msg_daemonname, log_buffer);\n\t\t(void) strcat(log_buffer, \"\\n\");\n\t\t(void) fprintf(stderr, \"%s\", log_buffer);\n\n#ifdef WIN32\n\t\tg_dwCurrentState = SERVICE_STOPPED;\n\t\tss.dwCurrentState = g_dwCurrentState;\n\t\tss.dwWin32ExitCode = 
ERROR_NETWORK_ACCESS_DENIED;\n\t\tif (g_ssHandle != 0)\n\t\t\tSetServiceStatus(g_ssHandle, &ss);\n#endif /* WIN32 */\n\t\treturn (3);\n\t}\n\n\t/*Initialize security library's internal data structures*/\n\tif (load_auths(AUTH_SERVER)) {\n\t\tlog_err(-1, __func__, \"Failed to load auth lib\");\n\t\texit(3);\n\t}\n\n\t{\n\t\tint csret;\n\n\t\t/* allow Libsec to log errors if part of PBS daemon code */\n\t\tp_cslog = log_err;\n\n\t\tif ((csret = CS_server_init()) != CS_SUCCESS) {\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"Problem initializing security library (%d)\", csret);\n\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\texit(3);\n\t\t}\n\t}\n\n#ifndef DEBUG\n\tif (stalone != 1) {\n\t\t/* go into the background and become own session/process group */\n\t\tif (lock_file(lockfds, F_UNLCK, \"mom.lock\", 1, NULL, 0)) { /* unlock so child can relock */\n\t\t\tlog_errf(errno, msg_daemonname, \"failed to unlock mom.lock file\");\n\t\t\tfprintf(stderr, \"%s\\n\", \"failed to unlock mom.lock file\");\n\t\t\texit(1);\n\t\t}\n\n\t\tif (fork() > 0)\n\t\t\treturn (0); /* parent goes away */\n\n\t\tif ((setsid() == -1) && (errno != ENOSYS)) {\n\t\t\tlog_err(errno, msg_daemonname, \"setsid failed\");\n\t\t\treturn (2);\n\t\t}\n\t\tif (lock_file(lockfds, F_WRLCK, \"mom.lock\", 1, NULL, 0)) { /* lock out other MOMs */\n\t\t\tlog_errf(errno, msg_daemonname, \"pbs_mom: another mom running\");\n\t\t\tfprintf(stderr, \"%s\\n\", \"pbs_mom: another mom running\");\n\t\t\texit(1);\n\t\t}\n\t}\n\tif (freopen(NULL_DEVICE, \"r\", stdin) == NULL) \n\t\tlog_errf(-1, __func__, \"freopen failed. ERR : %s\", strerror(errno));\n\tif (freopen(NULL_DEVICE, \"w\", stdout) == NULL) \n\t\tlog_errf(-1, __func__, \"freopen failed. ERR : %s\", strerror(errno));\t\n\tif (freopen(NULL_DEVICE, \"w\", stderr) == NULL) \n\t\tlog_errf(-1, __func__, \"freopen failed. 
ERR : %s\", strerror(errno));\t\t\n#else  /* DEBUG */\n\tsetvbuf(stdout, NULL, _IONBF, 0);\n\tsetvbuf(stderr, NULL, _IONBF, 0);\n\tif (stalone != 1) {\n\t\tlog_record(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_INFO,\n\t\t\t   __func__, \"Debug build does not fork.\");\n\t}\n#endif /* DEBUG */\n\n\tmom_pid = getpid();\n\tsprintf(log_buffer, \"%d\\n\", mom_pid);\n\tif (ftruncate(lockfds, 0) == -1) \n\t\tlog_errf(-1, __func__, \"ftruncate failed. ERR : %s\", strerror(errno));\t\t\n\tif (write(lockfds, log_buffer, strlen(log_buffer)) == -1) \n\t\tlog_errf(-1, __func__, \"write failed. ERR : %s\", strerror(errno));\t\t\n\n#ifndef WIN32 /* ------------------------------------------------------------*/\n\n\tdaemon_protect(0, PBS_DAEMON_PROTECT_ON);\n#ifdef _POSIX_MEMLOCK\n\tif (do_mlockall == 1) {\n\t\tif (mlockall(MCL_CURRENT | MCL_FUTURE) == -1) {\n\t\t\tlog_err(errno, __func__, \"mlockall failed\");\n\t\t}\n\t}\n#endif /* _POSIX_MEMLOCK */\n\n\tsigemptyset(&allsigs);\n\tsigaddset(&allsigs, SIGHUP);  /* remember to block these */\n\tsigaddset(&allsigs, SIGINT);  /* during critical sections */\n\tsigaddset(&allsigs, SIGTERM); /* so we don't get confused */\n\tsigaddset(&allsigs, SIGCHLD);\n\tsigaddset(&allsigs, SIGUSR1);\n\tsigaddset(&allsigs, SIGUSR2);\n\tsigaddset(&allsigs, SIGPIPE);\n#ifdef SIGINFO\n\tsigaddset(&allsigs, SIGINFO);\n#endif\n\n\tact.sa_flags = 0;\n\tact.sa_mask = allsigs;\n\n\t/*\n\t **\tWe want to abort system calls\n\t **\tand call a function.\n\t */\n#ifdef SA_INTERRUPT\n\tact.sa_flags |= SA_INTERRUPT; /* don't restart system calls */\n#endif\n\tact.sa_handler = catch_USR2;\n\tsigaction(SIGUSR2, &act, NULL);\n\n\tact.sa_handler = catch_child; /* set up to catch Death of Child */\n\tsigaction(SIGCHLD, &act, NULL);\n\tact.sa_handler = catch_hup; /* do a restart on SIGHUP */\n\tsigaction(SIGHUP, &act, NULL);\n\n\tact.sa_handler = toolong; /* handle an alarm call */\n\tsigaction(SIGALRM, &act, NULL);\n\n\tact.sa_handler = stop_me; /* shutdown for these 
*/\n\tsigaction(SIGINT, &act, NULL);\n\tsigaction(SIGTERM, &act, NULL);\n#ifdef SIGXCPU\n\tsigaction(SIGXCPU, &act, NULL);\n#endif\n#ifdef SIGXFSZ\n\tsigaction(SIGXFSZ, &act, NULL);\n#endif\n#ifdef SIGCPULIM\n\tsigaction(SIGCPULIM, &act, NULL);\n#endif\n#ifdef SIGSHUTDN\n\tsigaction(SIGSHUTDN, &act, NULL);\n#endif\n\t/*\n\t **\tThese signals will be sent to 'stop_me' which will just\n\t **\treturn so they will be ignored.  This is so any process\n\t **\tthat is exec'ed will not have SIG_IGN set for anything.\n\t */\n\tsigaction(SIGPIPE, &act, NULL);\n\tsigaction(SIGUSR1, &act, NULL);\n#ifdef SIGINFO\n\tsigaction(SIGINFO, &act, NULL);\n#endif\n#endif /* ! WIN32 end -------------------------------------------------------*/\n\n\t/* initialize variables */\n\n\tif ((jobs_idx = pbs_idx_create(0, 0)) == NULL) {\n\t\tlog_err(-1, __func__, \"Creating jobs index failed!\");\n\t\tfprintf(stderr, \"Creating jobs index failed!\\n\");\n\t\treturn (-1);\n\t}\n\n\tCLEAR_HEAD(mom_pending_ruu);\n\n\tCLEAR_HEAD(svr_newjobs);\n\tCLEAR_HEAD(svr_alljobs);\n\tCLEAR_HEAD(mom_polljobs);\n\tCLEAR_HEAD(svr_requests);\n\tCLEAR_HEAD(mom_deadjobs);\n\n#ifdef NAS_UNKILL /* localmod 011 */\n\tCLEAR_HEAD(killed_procs);\n#endif /* localmod 011 
*/\n\n\tCLEAR_HEAD(svr_allhooks);\n\tCLEAR_HEAD(svr_queuejob_hooks);\n\tCLEAR_HEAD(svr_postqueuejob_hooks);\n\tCLEAR_HEAD(svr_modifyjob_hooks);\n\tCLEAR_HEAD(svr_resvsub_hooks);\n\tCLEAR_HEAD(svr_modifyresv_hooks);\n\tCLEAR_HEAD(svr_movejob_hooks);\n\tCLEAR_HEAD(svr_runjob_hooks);\n\tCLEAR_HEAD(svr_jobobit_hooks);\n\tCLEAR_HEAD(svr_management_hooks);\n\tCLEAR_HEAD(svr_modifyvnode_hooks);\n\tCLEAR_HEAD(svr_periodic_hooks);\n\tCLEAR_HEAD(svr_provision_hooks);\n\tCLEAR_HEAD(svr_resv_confirm_hooks);\n\tCLEAR_HEAD(svr_resv_begin_hooks);\n\tCLEAR_HEAD(svr_resv_end_hooks);\n\tCLEAR_HEAD(svr_hook_job_actions);\n\tCLEAR_HEAD(svr_hook_vnl_actions);\n\n\tCLEAR_HEAD(svr_execjob_begin_hooks);\n\tCLEAR_HEAD(svr_execjob_prologue_hooks);\n\tCLEAR_HEAD(svr_execjob_epilogue_hooks);\n\tCLEAR_HEAD(svr_execjob_preterm_hooks);\n\tCLEAR_HEAD(svr_execjob_launch_hooks);\n\tCLEAR_HEAD(svr_execjob_end_hooks);\n\tCLEAR_HEAD(svr_exechost_periodic_hooks);\n\tCLEAR_HEAD(svr_exechost_startup_hooks);\n\tCLEAR_HEAD(svr_execjob_attach_hooks);\n\tCLEAR_HEAD(svr_execjob_resize_hooks);\n\tCLEAR_HEAD(svr_execjob_abort_hooks);\n\tCLEAR_HEAD(svr_execjob_postsuspend_hooks);\n\tCLEAR_HEAD(svr_execjob_preresume_hooks);\n\n\tCLEAR_HEAD(task_list_immed);\n\tCLEAR_HEAD(task_list_timed);\n\tCLEAR_HEAD(task_list_event);\n\tCLEAR_HEAD(task_list_interleave);\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\tCLEAR_HEAD(svr_allcreds);\n#endif\n\n#ifdef WIN32\n\tCLEAR_HEAD(mom_copyreqs_list);\n\n\t_ftime_s(&tval);\n\ttime_now = tval.time;\n\tsrand(tval.millitm);\n#else\n\tmaxtm = time(0);\n#endif\n\n#ifndef WIN32\n\t/* block signals while we do things */\n\tif (sigprocmask(SIG_BLOCK, &allsigs, NULL) == -1)\n\t\tlog_err(errno, __func__, \"sigprocmask(BLOCK)\");\n\n\tgettimeofday(&tval, NULL);\n\ttime_now = tval.tv_sec;\n\n\tsrandom(tval.tv_usec);\n#endif /* !WIN32 */\n\n\tret_size = 4096;\n\tif ((ret_string = malloc(ret_size)) == NULL) {\n\t\tperror(\"malloc\");\n#ifdef WIN32\n\t\tg_dwCurrentState = 
SERVICE_STOPPED;\n\t\tss.dwCurrentState = g_dwCurrentState;\n\t\tss.dwWin32ExitCode = ERROR_OUTOFMEMORY;\n\t\tif (g_ssHandle != 0)\n\t\t\tSetServiceStatus(g_ssHandle, &ss);\n#endif /* WIN32 */\n\t\treturn (1);\n\t}\n\n\ttpp_fd = -1;\n\tif (init_network_add(sock_bind_mom, auth_handler, process_request) != 0) {\n\n\t\tc = SOCK_ERRNO;\n\t\t(void) sprintf(log_buffer,\n\t\t\t       \"server port = %u, errno = %d\",\n\t\t\t       pbs_mom_port, c);\n\n\t\tif (c == EADDRINUSE)\n\t\t\t(void) strcat(log_buffer, \", already in use\");\n\t\tlog_err(-1, msg_daemonname, log_buffer);\n\t\t(void) strcat(log_buffer, \"\\n\");\n#ifdef WIN32\n\t\tg_dwCurrentState = SERVICE_STOPPED;\n\t\tss.dwCurrentState = g_dwCurrentState;\n\t\tss.dwWin32ExitCode = ERROR_NETWORK_ACCESS_DENIED;\n\t\tif (g_ssHandle != 0)\n\t\t\tSetServiceStatus(g_ssHandle, &ss);\n#endif\n\t\treturn (3);\n\t}\n\n\tif (init_network_add(sock_bind_rm, NULL, tcp_request) != 0) {\n\n\t\tc = SOCK_ERRNO;\n\t\t(void) sprintf(log_buffer,\n\t\t\t       \"resource (tcp) port = %u, errno = %d\",\n\t\t\t       pbs_rm_port, c);\n\n\t\tif (c == EADDRINUSE)\n\t\t\t(void) strcat(log_buffer, \", already in use\");\n\t\tlog_err(-1, msg_daemonname, log_buffer);\n\t\t(void) strcat(log_buffer, \"\\n\");\n\n#ifdef WIN32\n\t\tg_dwCurrentState = SERVICE_STOPPED;\n\t\tss.dwCurrentState = g_dwCurrentState;\n\t\tss.dwWin32ExitCode = ERROR_NETWORK_ACCESS_DENIED;\n\t\tif (g_ssHandle != 0)\n\t\t\tSetServiceStatus(g_ssHandle, &ss);\n#endif /* WIN32 */\n\t\treturn (3);\n\t}\n\n\tsprintf(log_buffer, \"Out of memory\");\n\tif (pbs_conf.pbs_leaf_name) {\n\t\tchar *p;\n\t\tnodename = strdup(pbs_conf.pbs_leaf_name);\n\n\t\t/* reset pbs_leaf_name to only the first leaf name with port */\n\t\tp = strchr(pbs_conf.pbs_leaf_name, ','); /* keep only the first leaf name */\n\t\tif (p)\n\t\t\t*p = '\\0';\n\t\tp = strchr(pbs_conf.pbs_leaf_name, ':'); /* cut out the port */\n\t\tif (p)\n\t\t\t*p = '\\0';\n\t} else {\n\t\tnodename = get_all_ips(mom_host, log_buffer, 
sizeof(log_buffer) - 1);\n\t}\n\tif (!nodename) {\n\t\tlog_err(-1, __func__, log_buffer);\n\t\tfprintf(stderr, \"%s\\n\", \"Unable to determine TPP node name\");\n\t\treturn (1);\n\t}\n\n\tservername = get_servername(&serverport);\n\tlocaladdr = addclient_byname(LOCALHOST_SHORTNAME);\n\t(void) addclient_byname(mom_host);\n\tif (gethostname(ret_string, ret_size) == 0)\n\t\t(void) addclient_byname(ret_string);\n\t(void) addclient_byname(servername);\n\tif (pbs_conf.pbs_secondary) {\n\t\tservername = parse_servername(pbs_conf.pbs_secondary, &serverport);\n\t\t(void) addclient_byname(servername);\n\t}\n\n\t/* locate cput resource definition, needed for checking chkpt time */\n\trdcput = &svr_resc_def[RESC_CPUT];\n\trdwall = &svr_resc_def[RESC_WALLTIME];\n\t/* locate the checkpoint path */\n\tpath_checkpoint_from_getenv = getenv(\"PBS_CHECKPOINT_PATH\");\n\tpath_checkpoint_default = mk_dirs(\"checkpoint/\");\n#define MAX_CHECKPOINT_DIR_RETRIES 5\n\tfor (i = 0; i < MAX_CHECKPOINT_DIR_RETRIES; i++) {\n\t\terrno = 0;\n\t\tif ((c = set_checkpoint_path(path_checkpoint)) == 1)\n\t\t\tbreak;\n\t\tif (errno == ENOENT) {\n\t\t\t/* Skip the sleep() if this is the last try. */\n\t\t\tif (i >= (MAX_CHECKPOINT_DIR_RETRIES - 1))\n\t\t\t\tbreak;\n\t\t\t(void) sprintf(log_buffer, \"%s %s %s %s\",\n\t\t\t\t       \"PBS checkpoint directory\",\n\t\t\t\t       path_checkpoint ? 
path_checkpoint : \"NULL\",\n\t\t\t\t       \"does not exist or is not NFS mounted.\",\n#ifdef WIN32\n\t\t\t\t       \"Retrying in 2 secs.\"\n#else\n\t\t\t\t       \"Retrying in 1 minute.\"\n#endif\n\t\t\t);\n\t\t\tlog_err(errno, msg_daemonname, log_buffer);\n\n#ifdef WIN32\n\t\t\tsleep(2);\n#else\n\t\t\tsleep(60);\n#endif\n\t\t\tcontinue;\n\t\t}\n\t\tbreak;\n\t}\n\tif (c == 0) {\n\t\t(void) sprintf(log_buffer, \"%s %s %s %d %s\",\n\t\t\t       \"Error configuring PBS checkpoint directory\",\n\t\t\t       path_checkpoint, \"; Giving up after\", i, \"attempts.\");\n\t\tlog_err(errno, msg_daemonname, log_buffer);\n\n#ifdef WIN32\n\t\tg_dwCurrentState = SERVICE_STOPPED;\n\t\tss.dwCurrentState = g_dwCurrentState;\n\t\tss.dwWin32ExitCode = ERROR_FILE_CORRUPT;\n\t\tif (g_ssHandle != 0)\n\t\t\tSetServiceStatus(g_ssHandle, &ss);\n#endif /* WIN32 */\n\t\treturn (3);\n\t}\n\n#ifndef WIN32\n\tif (!mock_run)\n\t\tmom_nice();\n#endif\n\t/*\n\t * Recover the hooks.\n\t *\n\t */\n\tif (chdir(path_hooks) != 0) {\n\t\t(void) snprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\tmsg_init_chdir, path_hooks);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\treturn (-1);\n\t}\n\thook_suf_len = strlen(hook_suffix);\n\n\tdir = opendir(\".\");\n\tif (dir == NULL) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_DEBUG, msg_daemonname,\n\t\t\t  \"Could not open hooks dir\");\n\t} else {\n\t\t/* Now, for each hook found ... 
*/\n\t\twhile (errno = 0, (pdirent = readdir(dir)) != NULL) {\n\t\t\t/* recover the hooks */\n\t\t\tbaselen = strlen(pdirent->d_name) - hook_suf_len;\n\t\t\tif (baselen < 0)\n\t\t\t\tcontinue;\n\t\t\tpsuffix = pdirent->d_name + baselen;\n\t\t\tif (strcmp(psuffix, hook_suffix)) {\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tif ((phook = hook_recov(pdirent->d_name, NULL, hook_msg,\n\t\t\t\t\t\tsizeof(hook_msg), python_script_alloc,\n\t\t\t\t\t\tpython_script_free)) == NULL) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"hook_recov(%s): can't recover - %s\",\n\t\t\t\t\tpdirent->d_name, hook_msg);\n\t\t\t\tlog_event(PBSEVENT_SYSTEM,\n\t\t\t\t\t  PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t\t\t  msg_daemonname, log_buffer);\n\t\t\t} else {\n\t\t\t\tsprintf(log_buffer, \"Found hook %s type=%s\",\n\t\t\t\t\tphook->hook_name,\n\t\t\t\t\t((phook->type == HOOK_SITE) ? \"site\" : \"pbs\"));\n\t\t\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_ADMIN |\n\t\t\t\t\t\t  PBSEVENT_DEBUG,\n\t\t\t\t\t  PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t  LOG_INFO, msg_daemonname, log_buffer);\n\n\t\t\t\tif (update_joinjob_alarm_time &&\n\t\t\t\t    (phook->enabled == TRUE) &&\n\t\t\t\t    ((phook->event & HOOK_EVENT_EXECJOB_BEGIN) != 0)) {\n\t\t\t\t\tif (joinjob_alarm_time == -1)\n\t\t\t\t\t\tjoinjob_alarm_time = 0;\n\t\t\t\t\tjoinjob_alarm_time += phook->alarm;\n\t\t\t\t}\n\t\t\t\tif (update_job_launch_delay &&\n\t\t\t\t    (phook->enabled == TRUE) &&\n\t\t\t\t    ((phook->event & HOOK_EVENT_EXECJOB_PROLOGUE) != 0)) {\n\t\t\t\t\tif (job_launch_delay == -1)\n\t\t\t\t\t\tjob_launch_delay = 0;\n\t\t\t\t\tjob_launch_delay += phook->alarm;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif (errno != 0 && errno != ENOENT)\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER,\n\t\t\t\t  LOG_DEBUG, msg_daemonname,\n\t\t\t\t  \"Could not read hooks dir\");\n\n\t\t(void) closedir(dir);\n\t}\n\n\tsnprintf(path_hooks_rescdef, MAXPATHLEN, \"%s%s\", path_hooks,\n\t\t PBS_RESCDEF);\n\thooks_rescdef_checksum = 
crc_file(path_hooks_rescdef);\n\n\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t \"hooks_rescdef_checksum(%s)=%lu\",\n\t\t path_hooks_rescdef, hooks_rescdef_checksum);\n\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK, LOG_INFO,\n\t\t  PBS_RESCDEF, log_buffer);\n\tpath_rescdef = (char *) path_hooks_rescdef;\n\n\t/* Need to go back to mom's working directory since when recovering */\n\t/* hooks, we temporarily chdir-ed to the hooks directory. */\n\tif (chdir(mom_home) == -1) {\n\t\tlog_err(errno, __func__, \"pbs_mom unable to change working directory to mom home\");\n\t\treturn (-1);\n\t}\n\n\tprint_hooks(0);\n\tprint_hooks(HOOK_EVENT_EXECJOB_BEGIN);\n\tprint_hooks(HOOK_EVENT_EXECJOB_PROLOGUE);\n\tprint_hooks(HOOK_EVENT_EXECJOB_LAUNCH);\n\tprint_hooks(HOOK_EVENT_EXECJOB_EPILOGUE);\n\tprint_hooks(HOOK_EVENT_EXECJOB_PRETERM);\n\tprint_hooks(HOOK_EVENT_EXECJOB_END);\n\tprint_hooks(HOOK_EVENT_EXECHOST_PERIODIC);\n\tprint_hooks(HOOK_EVENT_EXECHOST_STARTUP);\n\tprint_hooks(HOOK_EVENT_EXECJOB_ATTACH);\n\tprint_hooks(HOOK_EVENT_EXECJOB_RESIZE);\n\tprint_hooks(HOOK_EVENT_EXECJOB_ABORT);\n\tprint_hooks(HOOK_EVENT_EXECJOB_POSTSUSPEND);\n\tprint_hooks(HOOK_EVENT_EXECJOB_PRERESUME);\n\n\t/* cleanup the hooks work directory */\n\tcleanup_hooks_workdir(0);\n\tcleanup_hooks_in_path_spool(0);\n\n#ifdef PYTHON\n#ifdef WIN32\n\tset_py_progname();\n\tPy_NoSiteFlag = 1;\n\tPy_FrozenFlag = 1;\n\tPy_OptimizeFlag = 2;\n\tPy_IgnoreEnvironmentFlag = 1;\n\tPy_InitializeEx(0);\n#else\n\tchar *python_binpath = NULL;\n\tstatic wchar_t w_python_binpath[MAXPATHLEN + 1] = {'\\0'};\n\tPyStatus py_status;\n\tPyConfig py_config;\n\n\tPyConfig_InitPythonConfig(&py_config);\n\n\tpy_config._install_importlib = 1;\n\tpy_config.use_environment = 0;\n\tpy_config.optimization_level = 2;\n\tpy_config.isolated = 1;\n\tpy_config.site_import = 0;\n\tpy_config.install_signal_handlers = 0;\n        if (w_python_binpath[0] == '\\0') {\n                if (get_py_progname(&python_binpath))\n                        
log_err(-1, __func__, \"Failed to find python binary path!\");\n                mbstowcs(w_python_binpath, python_binpath, MAXPATHLEN + 1);\n                free(python_binpath);\n        }\n\n\tpy_status = PyConfig_SetString(&py_config, &py_config.program_name, w_python_binpath);\n\tif (PyStatus_Exception(py_status))\n\t\tlog_err(-1, __func__, \"Failed to set python binary path!\");\n\n\tpy_status = Py_InitializeFromConfig(&py_config);\n\tif (PyStatus_Exception(py_status)) {\n\t\tlog_err(-1, \"Py_InitializeFromConfig\",\n\t\t\t\"--> Failed to initialize Python interpreter <--\");\n\t\tPyConfig_Clear(&py_config);  // Clear the configuration object\n\t}\n#endif\n#endif\n\n#ifndef WIN32\n\tinitialize(); /* init RM code */\n#endif\n\n\trc = set_tpp_config(&pbs_conf, &tpp_conf, nodename, pbs_rm_port, pbs_conf.pbs_leaf_routers);\n\tfree(nodename);\n\n\tif (rc == -1) {\n\t\t(void) sprintf(log_buffer, \"Error setting TPP config\");\n\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_ERR, msg_daemonname, log_buffer);\n\t\tfprintf(stderr, \"%s\", log_buffer);\n\t\treturn (3);\n\t}\n\n\ttpp_set_app_net_handler(net_down_handler, net_restore_handler);\n\n\tif ((tppfd = tpp_init(&tpp_conf)) == -1) {\n\t\t(void) sprintf(log_buffer, \"tpp_init failed\");\n\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_ERR, msg_daemonname, log_buffer);\n\t\tfprintf(stderr, \"%s\", log_buffer);\n\t\treturn (3);\n\t}\n\t(void) add_conn(tppfd, TppComm, (pbs_net_t) 0, 0, NULL, tpp_request);\n\n\t/* initialize machine dependent polling routines */\n\tif ((c = mom_open_poll()) != PBSE_NONE) {\n\t\tlog_err(c, msg_daemonname, \"pre_poll failed\");\n\n#ifdef WIN32\n\t\tg_dwCurrentState = SERVICE_STOPPED;\n\t\tss.dwCurrentState = g_dwCurrentState;\n\t\tss.dwWin32ExitCode = ERROR_NETWORK_ACCESS_DENIED;\n\t\tif (g_ssHandle != 0)\n\t\t\tSetServiceStatus(g_ssHandle, &ss);\n#endif /* WIN32 */\n\t\treturn (3);\n\t}\n\n\t/* recover & abort Jobs 
which were under MOM's control */\n// clang-format off\n#ifdef\tWIN32\n\told_winsta = GetProcessWindowStation();\n\n\tstrcpy(winsta_name, PBS_DESKTOP_NAME);\n\tstrcpy(desktop_name, PBS_DESKTOP_NAME);\n\n\tif ((pwst = strrchr(winsta_name, '\\\\'))) {\n\t\t*pwst = '\\0';\n\t\tstrcpy(desktop_name, pwst+1);\n\t}\n\t/*\n\t * Only members of the Administrators group are allowed\n\t * to specify a name of windows station. If lpwinsta is\n\t * NULL or an empty string the system forms a window station\n\t * name using the logon session identifier for the calling process.\n\t */\n\n\tpbs_winsta = CreateWindowStation(winsta_name, 0,\n\t\tWINSTA_ALL_ACCESS, NULL);\n\n\tif (pbs_winsta == NULL) {\n\t\t(void)sprintf(log_buffer, \"CreateWindowStation failed! error=%d\",\n\t\t\tGetLastError());\n\t\tlog_err(errno, msg_daemonname, log_buffer);\n\t\tg_dwCurrentState = SERVICE_STOPPED;\n\t\tss.dwCurrentState = g_dwCurrentState;\n\t\tss.dwWin32ExitCode = ERROR_INVALID_HANDLE;\n\t\tif (g_ssHandle != 0) SetServiceStatus(g_ssHandle, &ss);\n\n\t\treturn (3);\n\t}\n\n\tsprintf(log_buffer, \"Created window station=%s\", winsta_name);\n\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER, LOG_NOTICE, __func__, log_buffer);\n\n\tSetProcessWindowStation(pbs_winsta);\n\n\tpbs_desktop = CreateDesktop(desktop_name, NULL, NULL,\n\t\tDF_ALLOWOTHERACCOUNTHOOK,\n\t\tREAD_CONTROL|\n\t\tWRITE_DAC|\n\t\tDESKTOP_READOBJECTS|\n\t\tDESKTOP_CREATEWINDOW|\n\t\tDESKTOP_CREATEMENU|\n\t\tDESKTOP_HOOKCONTROL|\n\t\tDESKTOP_JOURNALRECORD|\n\t\tDESKTOP_JOURNALPLAYBACK|\n\t\tDESKTOP_ENUMERATE|\n\t\tDESKTOP_WRITEOBJECTS|\n\t\tDESKTOP_SWITCHDESKTOP, NULL);\n\n\tif (pbs_desktop == NULL) {\n\t\t(void)sprintf(log_buffer, \"CreateDesktop failed! 
error=%d\",\n\t\t\tGetLastError());\n\t\tlog_err(errno, msg_daemonname, log_buffer);\n\t\tg_dwCurrentState = SERVICE_STOPPED;\n\t\tss.dwCurrentState = g_dwCurrentState;\n\t\tss.dwWin32ExitCode = ERROR_INVALID_HANDLE;\n\t\tif (g_ssHandle != 0) SetServiceStatus(g_ssHandle, &ss);\n\n\t\treturn (3);\n\t}\n\tsprintf(log_buffer, \"Created desktop %s in window station=%s\",\n\t\tdesktop_name, winsta_name);\n\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER, LOG_NOTICE, __func__, log_buffer);\n\n\tSetProcessWindowStation(old_winsta);\n\n#endif\t/* WIN32 */\n\n\t// clang-format on\n\n\t/* recover vnode to host map from file in case Server is not yet up */\n\tif ((c = recover_vmap()) != 0) {\n\t\tlog_err(c, msg_daemonname, \"unable to recover vnode to host mapping\");\n\t}\n\n\tpbs_list_link multinode_jobs;\n\n\t/* recover & abort Jobs which were under MOM's control */\n\tinit_abort_jobs(recover, &multinode_jobs);\n\n\t/* deploy periodic hooks */\n\tmom_hook_input_init(&hook_input);\n\thook_input.vnl = (vnl_t *) vnlp;\n\thook_input.jobs_list = &svr_alljobs;\n\n\t(void) mom_process_hooks(HOOK_EVENT_EXECHOST_PERIODIC,\n\t\t\t\t PBS_MOM_SERVICE_NAME, mom_host, &hook_input,\n\t\t\t\t NULL, NULL, 0, 0);\n\n\t/* record the fact that we are up and running */\n\t(void) sprintf(log_buffer, msg_startup1, PBS_VERSION, recover);\n\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_ADMIN | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER,\n\t\t  LOG_NOTICE, msg_daemonname, log_buffer);\n\t(void) sprintf(log_buffer,\n\t\t       \"Mom pid = %d ready, using ports Server:%d MOM:%d RM:%d\",\n\t\t       mom_pid, default_server_port, pbs_mom_port, pbs_rm_port);\n\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER,\n\t\t  LOG_NOTICE, msg_daemonname, log_buffer);\n\n\t/* tell server we have restarted */\n#ifdef WIN32\n\tss.dwCheckPoint = 0;\n\tg_dwCurrentState = SERVICE_RUNNING;\n\tss.dwCurrentState = g_dwCurrentState;\n\tif (g_ssHandle != 0)\n\t\tSetServiceStatus(g_ssHandle, 
&ss);\n\n\t/* put here to minimize chance of hanging up or delaying mom startup */\n\tinitialize();\n#endif /* WIN32 */\n\n#ifdef PMIX\n\tpbs_pmix_server_init(msg_daemonname);\n#endif\n\n\t/*\n\t * Now at last, we are ready to do some work, the following section\n\t * constitutes the \"main\" loop of MOM\n\t */\n\tfor (; mom_run_state; finish_loop(wait_time)) {\n\n#ifndef WIN32\n\t\tif (call_hup != HUP_CLEAR) {\n\t\t\tprocess_hup();\n\t\t\tinternal_state_update = UPDATE_MOM_STATE;\n\t\t}\n#endif\n\n\t\ttime_now = time(NULL);\n\t\tif (server_stream == -1) {\n\t\t\tif (time_now > time_next_hello) {\n\t\t\t\tsend_hellosvr(server_stream);\n\t\t\t\ttime_next_hello = time_now + time_delta_hellosvr(MOM_DELTA_NORMAL);\n\t\t\t\tif (server_stream != -1) {\n\t\t\t\t\tjob *m_job;\n\t\t\t\t\tfor (m_job = (job *) GET_NEXT(multinode_jobs); m_job;\n\t\t\t\t\t     m_job = (job *) GET_NEXT(m_job->ji_multinodejobs)) {\n\t\t\t\t\t\tif (m_job->ji_qs.ji_svrflags & JOB_SVFLG_HERE) {\n\t\t\t\t\t\t\t/* I am MS */\n\t\t\t\t\t\t\tresume_multinode(m_job);\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t/* I am sister */\n\t\t\t\t\t\t\tsend_sisters(m_job, IM_RECONNECT_TO_MS, NULL);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tCLEAR_HEAD(multinode_jobs);\n\t\t\t\t}\n\t\t\t}\n\t\t} else\n\t\t\tsend_pending_updates();\n\n\t\twait_time = default_next_task();\n#ifdef WIN32\n\t\tend_proc();\n#endif\n\n\t\tdorestrict_user();\n\n\t\t/* check on User Activity */\n\t\tif (server_stream != -1) {\n\t\t\tif (idle_check > 0) {\n\t\t\t\ttime_t lastkey, idletime;\n\n\t\t\t\t/*\n\t\t\t\t * cycle stealing is turned on, monitor\n\t\t\t\t * keyboard/mouse activity\n\t\t\t\t *\n\t\t\t\t * This impacts the \"wait_time\" above,  it must be\n\t\t\t\t * about the same time of the next activity check\n\t\t\t\t */\n\t\t\t\tlastkey = getkbdtime();\n\t\t\t\tif (lastkey > time_now)\n\t\t\t\t\tidletime = 0;\n\t\t\t\telse\n\t\t\t\t\tidletime = time_now - lastkey;\n\n\t\t\t\tif (internal_state & MOM_STATE_BUSYKB) {\n\t\t\t\t\t/* currently 
busy keyboard */\n\t\t\t\t\tif (idletime >= idle_avail) {\n\t\t\t\t\t\t/* no longer busy */\n\t\t\t\t\t\tinternal_state &= ~MOM_STATE_BUSYKB;\n\t\t\t\t\t\tinternal_state_update = UPDATE_MOM_STATE;\n\t\t\t\t\t\twent_busy = 0;\n\t\t\t\t\t\twait_time = idle_poll;\n\t\t\t\t\t\tactivate_jobs();\n\t\t\t\t\t}\n\t\t\t\t} else if (internal_state & MOM_STATE_INBYKB) {\n\t\t\t\t\tif (lastkey > (went_busy + idle_busy)) {\n\t\t\t\t\t\tinternal_state &= ~MOM_STATE_INBYKB;\n\t\t\t\t\t\tinternal_state |= MOM_STATE_BUSYKB;\n\t\t\t\t\t\twait_time = 10;\n\t\t\t\t\t} else if (idletime > idle_busy) {\n\t\t\t\t\t\t/* can resume jobs */\n\t\t\t\t\t\tinternal_state &= ~MOM_STATE_INBYKB;\n\t\t\t\t\t\tinternal_state_update = UPDATE_MOM_STATE;\n\t\t\t\t\t\twent_busy = 0;\n\t\t\t\t\t\twait_time = idle_poll;\n\t\t\t\t\t\tactivate_jobs();\n\t\t\t\t\t}\n\t\t\t\t\tprior_key = lastkey;\n\t\t\t\t} else {\n\t\t\t\t\t/* not currently busy */\n\t\t\t\t\tif ((idletime < idle_avail) &&\n\t\t\t\t\t    (went_busy == 0) &&\n\t\t\t\t\t    (lastkey != prior_key)) {\n\t\t\t\t\t\twent_busy = lastkey;\n\t\t\t\t\t\tprior_key = lastkey;\n\t\t\t\t\t\tinternal_state |= MOM_STATE_INBYKB;\n\t\t\t\t\t\tinternal_state_update = UPDATE_MOM_STATE;\n\t\t\t\t\t\twait_time = (idle_busy + 1) >> 1;\n\t\t\t\t\t\tidle_jobs();\n\t\t\t\t\t\t/* suspend jobs */\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else if (idle_check == -1) {\n\t\t\t\t/*\n\t\t\t\t * need to activate jobs that may have been idled before\n\t\t\t\t * restart or SIGHUP\n\t\t\t\t */\n\t\t\t\tactivate_jobs();\n\t\t\t\tif (update_state_flag) {\n\t\t\t\t\tinternal_state &= ~(MOM_STATE_INBYKB | MOM_STATE_BUSYKB | INUSE_BUSY);\n\t\t\t\t\tinternal_state_update = UPDATE_MOM_STATE;\n\t\t\t\t}\n\t\t\t\tidle_check = 0;\n\t\t\t}\n\t\t}\n\n\t\t/*\n\t\t * Is it time to update internal state?\n\t\t * This is done at a more leisurely pace\n\t\t */\n\t\tif (time_now > time_state_update) {\n\t\t\ttime_state_update = time_now + STATE_UPDATE_TIME;\n\n\t\t\t/*\n\t\t\t * If required, update node 
state info to Server\n\t\t\t * check if loadave means we should be \"busy\"\n\t\t\t */\n\t\t\tif (max_load_val > 0.0) {\n\t\t\t\t(void) get_la(&myla);\n\t\t\t\t/* check if need to update busy state */\n\t\t\t\tcheck_busy(myla);\n\t\t\t}\n\t\t}\n\n\t\t/*\n\t\t * if needed, update the server with my state; the state change\n\t\t * can be made in check_busy() or query_adp()\n\t\t */\n\t\tif (internal_state_update) {\n\t\t\tstate_to_server(UPDATE_VNODES, 0);\n\n\t\t\t(void) send_hook_vnl(vnlp_from_hook);\n\t\t\t/*\n\t\t\t * send_hook_vnl() saves 'vnlp_from_hook' internally, to be freed\n\t\t\t * later when server acks the request.\n\t\t\t */\n\t\t\tvnlp_from_hook = NULL;\n\t\t}\n\n\t\t/*\n\t\t * Have we any jobs that can be purged now?\n\t\t * They would be added to this list if the purge is ready to\n\t\t * be done but the code is in the middle of a loop where\n\t\t * purging the job would mess up the linked list over which\n\t\t * the loop is running, or for some other reason it is not\n\t\t * desirable to actually purge the job then.  
For example,\n\t\t * see chk_del_job() in mom_comm.c\n\t\t * If any are found, then call dorestrict_user()\n\t\t */\n\t\ti = 0;\n\t\twhile ((pjob = (job *) GET_NEXT(mom_deadjobs)) != NULL) {\n\t\t\t/* sometimes this purge is happening earlier than\n\t\t\t * IS_DISCARD_JOB, which then does not get the pjob\n\t\t\t * pointer to call kill_job().\n\t\t\t *\n\t\t\t * Fixed by adding a kill_job here, which should do\n\t\t\t * no harm anyway.\n\t\t\t */\n\t\t\t(void) kill_job(pjob, SIGKILL);\n\t\t\tjob_purge_mom(pjob);\n\t\t\t++i;\n\t\t}\n\t\tif (i > 0)\n\t\t\tdorestrict_user();\n\n\t\t/*\n\t\t * Have we any external (script) actions running for jobs\n\t\t * that have gone longer than their alarm cutoff time?\n\t\t * If so, call the post action routine with an error of -1\n\t\t *\n\t\t * Also, if this platform supports checkpoint/restart we\n\t\t * want to minimize the wait time for qhold by using the\n\t\t * minimum update time if a checkpoint is active.\n\t\t */\n\t\tfor (pjob = (job *) GET_NEXT(svr_alljobs);\n\t\t     pjob;\n\t\t     pjob = (job *) GET_NEXT(pjob->ji_alljobs)) {\n\t\t\tif ((pjob->ji_momsubt != 0) &&\n\t\t\t    (pjob->ji_actalarm != 0) &&\n\t\t\t    (pjob->ji_actalarm < time_now)) {\n\t\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t  LOG_INFO, pjob->ji_qs.ji_jobid,\n\t\t\t\t\t  \"Action alarm time exceeded\");\n\t\t\t\tkill(pjob->ji_momsubt, SIGKILL);\n\t\t\t\tpjob->ji_momsubt = 0;\n\t\t\t\tpjob->ji_mompost(pjob, -1);\n\t\t\t\tpjob->ji_mompost = NULL;\n\t\t\t}\n\n\t\t\tif (do_tolerate_node_failures(pjob) &&\n\t\t\t    (check_job_substate(pjob, JOB_SUBSTATE_WAITING_JOIN_JOB)) &&\n\t\t\t    (pjob->ji_joinalarm != 0) &&\n\t\t\t    (pjob->ji_joinalarm < time_now)) {\n\t\t\t\tint rcode;\n\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"sister_join_job_alarm wait time %ld secs exceeded\", joinjob_alarm_time);\n\t\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t  LOG_INFO, pjob->ji_qs.ji_jobid, 
log_buffer);\n\t\t\t\tset_job_substate(pjob, JOB_SUBSTATE_PRERUN);\n\n\t\t\t\trcode = pre_finish_exec(pjob, 1);\n\t\t\t\tif (rcode == PRE_FINISH_SUCCESS)\n\t\t\t\t\tfinish_exec(pjob);\n\t\t\t\telse if (rcode != PRE_FINISH_SUCCESS_JOB_SETUP_SEND)\n\t\t\t\t\texec_bail(pjob, JOB_EXEC_RETRY, \"pre_finish_exec failure\");\n\t\t\t\tpjob->ji_joinalarm = 0;\n\t\t\t}\n\n\t\t\tif (pjob->ji_flags & MOM_CHKPT_ACTIVE)\n\t\t\t\tnext_sample_time = min_check_poll;\n\t\t}\n\n\t\t/*\n\t\t * Is it time to update resources used for jobs?\n\t\t * This is done fairly slowly.  This rate applies to\n\t\t * everything from here on in the main loop.\n\t\t */\n\n\t\tif (time_now < (time_resc_updated + next_sample_time))\n\t\t\tcontinue;\n\n\t\t/*\n\t\t * the time to the next resources check is set to MIN when a new\n\t\t * job runs, and increments upward until it reaches the MAX time\n\t\t */\n\n\t\tif (server_stream == -1)\n\t\t\tnext_sample_time = max_check_poll;\n\t\telse if ((next_sample_time += inc_check_poll) > max_check_poll)\n\t\t\tnext_sample_time = max_check_poll;\n\t\tDBPRT((\"next_sample_time = %d\\n\", next_sample_time))\n\n\t\t/* are there any jobs? 
No - then don't bother with Resources */\n\n\t\tif ((pjob = (job *) GET_NEXT(svr_alljobs)) == NULL)\n\t\t\tcontinue;\n\n\t\t/* there are jobs so update status\t */\n\t\t/* if we just got a sample, don't bother */\n\t\tif (time_now > time_last_sample) {\n\t\t\tif (mom_get_sample() != PBSE_NONE)\n\t\t\t\tcontinue;\n\t\t}\n\n\t\ttime_resc_updated = time_now;\n\t\tfor (pjob = (job *) GET_NEXT(svr_alljobs);\n\t\t     pjob != NULL;\n\t\t     pjob = nxpjob) {\n\n\t\t\t/* next job pointer in case job is purged */\n\t\t\tnxpjob = (job *) GET_NEXT(pjob->ji_alljobs);\n\n\t\t\t/* check for job stuck waiting for Svr to ack obit */\n\t\t\tif (!pjob->ji_hook_running_bg_on && check_job_substate(pjob, JOB_SUBSTATE_OBIT) &&\n\t\t\t    pjob->ji_sampletim < time_now - 45) {\n\t\t\t\tsend_obit(pjob, 0); /* resend obit */\n\t\t\t}\n\t\t\t/* check for job stuck waiting for sister to deljob */\n\t\t\tif ((check_job_substate(pjob, JOB_SUBSTATE_DELJOB)) &&\n\t\t\t    (pjob->ji_sampletim < (time_now - 2 * MAX_CHECK_POLL_TIME))) {\n\t\t\t\t/* just delete the job and let server deal */\n\t\t\t\tif (pjob->ji_preq) {\n\t\t\t\t\treq_reject(PBSE_SISCOMM, 0, pjob->ji_preq);\n\t\t\t\t\tpjob->ji_preq = NULL;\n\t\t\t\t}\n\t\t\t\tjob_purge_mom(pjob);\n\t\t\t\tdorestrict_user();\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tc = pjob->ji_qs.ji_svrflags;\n\t\t\tif (c & (JOB_SVFLG_OVERLMT1 | JOB_SVFLG_OVERLMT2 |\n\t\t\t\t JOB_SVFLG_TERMJOB)) {\n\t\t\t\t/* job is over a limit, if it is not already  */\n\t\t\t\t/* being terminated by action script, kill it */\n\t\t\t\tif ((!check_job_substate(pjob, JOB_SUBSTATE_TERM)) && (time_now >= pjob->ji_overlmt_timestamp)) {\n\t\t\t\t\t/* Unset the TERMJOB flag for KILL signal */\n\t\t\t\t\tpjob->ji_qs.ji_svrflags &= ~JOB_SVFLG_TERMJOB;\n\t\t\t\t\t(void) kill_job(pjob, SIGKILL);\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (!check_job_substate(pjob, JOB_SUBSTATE_RUNNING))\n\t\t\t\tcontinue;\n\n\t\t\t/* update information for my tasks */\n\t\t\t(void) 
mom_set_use(pjob);\n\n\t\t\t/* see if we need to checkpoint any job */\n\t\t\tif (pjob->ji_chkpttype == PBS_CHECKPOINT_CPUT) {\n\t\t\t\t/* checkpoint on cputime used */\n\t\t\t\tprscput = find_resc_entry(\n\t\t\t\t\tget_jattr(pjob, JOB_ATR_resc_used),\n\t\t\t\t\trdcput);\n\t\t\t\tif (pjob->ji_chkptnext > prscput->rs_value.at_val.at_long)\n\t\t\t\t\tcontinue;\n\n\t\t\t\tpjob->ji_chkptnext = prscput->rs_value.at_val.at_long + pjob->ji_chkpttime;\n\n\t\t\t} else if (pjob->ji_chkpttype == PBS_CHECKPOINT_WALLT) {\n\t\t\t\t/* checkpoint on walltime */\n\t\t\t\tprswall = find_resc_entry(\n\t\t\t\t\tget_jattr(pjob, JOB_ATR_resc_used),\n\t\t\t\t\trdwall);\n\t\t\t\tif (pjob->ji_chkptnext > prswall->rs_value.at_val.at_long)\n\t\t\t\t\tcontinue;\n\n\t\t\t\tpjob->ji_chkptnext = prswall->rs_value.at_val.at_long + pjob->ji_chkpttime;\n\n\t\t\t} else {\n\t\t\t\tcontinue; /* no checkpoint to do */\n\t\t\t}\n\t\t\t/* now do the actual checkpoint */\n\t\t\tif ((c = start_checkpoint(pjob, 0, 0)) == PBSE_NONE)\n\t\t\t\tcontinue;\n\t\t\tif (c == PBSE_NOSUP)\n\t\t\t\tcontinue;\n\n\t\t\t/* getting here means something bad happened */\n\t\t\t(void) sprintf(log_buffer,\n\t\t\t\t       \"Checkpoint failed, error %d\", c);\n\t\t\t(void) message_job(pjob, StdErr, log_buffer);\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_NOTICE,\n\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t}\n\n\t\t/* don't try to send updates to the server or polls to the sisters\n\t\t * until the server has connected; before that happens we do not\n\t\t * have the IM_CLUSTERS information\n\t\t */\n\t\tif (server_stream >= 0) {\n\t\t\t/* send updated resource usage info to server */\n\t\t\tupdate_jobs_status();\n\t\t}\n#ifdef NAS /* localmod 153 */\n\t\tint rc_qflag = access(quiesce_mom_flag_file, F_OK);\n\n\t\tif (rc_qflag != 0 && mom_should_quiesce != 0) {\n\t\t\tlog_event(PBSEVENT_SYSTEM, 0, LOG_NOTICE, __func__, \"mom is no longer quiesced\");\n\t\t\tmom_should_quiesce = 0;\n\t\t} 
else if (rc_qflag == 0 && mom_should_quiesce == 0) {\n\t\t\tlog_event(PBSEVENT_SYSTEM, 0, LOG_NOTICE, __func__, \"mom will now quiesce\");\n\t\t\tmom_should_quiesce = 1;\n\t\t}\n#endif /* localmod 153 */\n\n\t\tif (mom_recvd_ip_cluster_addrs) {\n\t\t\tint num;\n\t\t\thnodent *np;\n\n\t\t\t/* check on over limit condition for polled jobs */\n\t\t\tfor (pjob = (job *) GET_NEXT(mom_polljobs); pjob;\n\t\t\t     pjob = (job *) GET_NEXT(pjob->ji_jobque)) {\n#ifdef NAS /* localmod 153 */\n\t\t\t\tif (mom_should_quiesce) {\n\t\t\t\t\tbreak;\n\t\t\t\t}\n#endif /* localmod 153 */\n\t\t\t\tif (!check_job_substate(pjob, JOB_SUBSTATE_RUNNING))\n\t\t\t\t\tcontinue;\n\t\t\t\t/*\n\t\t\t\t ** Send message to get info from other MOMs\n\t\t\t\t ** if I am Mother Superior for the job and\n\t\t\t\t ** it is not being killed.\n\t\t\t\t */\n\t\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) &&\n\t\t\t\t    (pjob->ji_nodekill == TM_ERROR_NODE)) {\n\t\t\t\t\tint err_flag = 0;\n\t\t\t\t\t/*\n\t\t\t\t\t ** If we can't send a poll to everybody, the\n\t\t\t\t\t ** time has come to die.\n\t\t\t\t\t */\n\t\t\t\t\tif (send_sisters(pjob, IM_POLL_JOB, NULL) !=\n\t\t\t\t\t    pjob->ji_numnodes - 1) {\n\n\t\t\t\t\t\tfor (num = 0, np = pjob->ji_hosts; num < pjob->ji_numnodes; num++, np++) {\n\t\t\t\t\t\t\tif (reliable_job_node_find(&pjob->ji_failed_node_list, np->hn_host) != NULL) {\n\t\t\t\t\t\t\t\tsprintf(log_buffer, \"ignoring lost communication with %s for reliable job startup\", np->hn_host);\n\t\t\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\t\t\t\t\terr_flag = 1;\n\t\t\t\t\t\t\t} else if ((time_now - np->hn_eof_ts) <= max_poll_downtime_val) {\n\t\t\t\t\t\t\t\tpjob->ji_nodekill = TM_ERROR_NODE; /* send poll failed, but don't kill job */\n\t\t\t\t\t\t\t\tsprintf(log_buffer, \"lost communication with %s, not killing job yet\", np->hn_host);\n\t\t\t\t\t\t\t\tlog_joberr(-1, __func__, log_buffer, 
pjob->ji_qs.ji_jobid);\n\t\t\t\t\t\t\t\terr_flag = 1;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (err_flag == 0) {\n\t\t\t\t\t\t\tlog_event(PBSEVENT_JOB | PBSEVENT_FORCE,\n\t\t\t\t\t\t\t\t  PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t\t\t\t\t\t  \"send POLL failed\");\n\t\t\t\t\t\t\tif (!is_comm_up(COMM_MATURITY_TIME)) {\n\t\t\t\t\t\t\t\tpjob->ji_nodekill = TM_ERROR_NODE; /* send poll failed, but don't kill job */\n\t\t\t\t\t\t\t\tsprintf(log_buffer, \"Connection to pbs_comm down/recently established, not killing job\");\n\t\t\t\t\t\t\t\tlog_joberr(-1, __func__, log_buffer, pjob->ji_qs.ji_jobid);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tlog_buffer[0] = '\\0';\n\t\t\t\tc = pjob->ji_qs.ji_svrflags;\n\n\t\t\t\t/*\n\t\t\t\t * Do not update job's overlimit timestamp\n\t\t\t\t * when the following flags are set\n\t\t\t\t */\n\t\t\t\tif (c & (JOB_SVFLG_OVERLMT1 | JOB_SVFLG_OVERLMT2 | JOB_SVFLG_TERMJOB))\n\t\t\t\t\tcontinue;\n\n\t\t\t\tif (job_over_limit(pjob, recover)) {\n\n\t\t\t\t\tchar *kill_msg;\n\t\t\t\t\tlog_event(PBSEVENT_JOB | PBSEVENT_FORCE,\n\t\t\t\t\t\t  PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\n\t\t\t\t\tkill_msg = malloc(80 + strlen(log_buffer));\n\t\t\t\t\tif (kill_msg != NULL) {\n\t\t\t\t\t\tsprintf(kill_msg, \"=>> PBS: job killed: %s\\n\", log_buffer);\n\t\t\t\t\t\tif (c & JOB_SVFLG_HERE) {\n\t\t\t\t\t\t\tmessage_job(pjob, StdErr, kill_msg);\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t/* Multi-mom scenario - adding a connection to demux for reporting error */\n\n\t\t\t\t\t\t\tstruct sockaddr_in *ap;\n\t\t\t\t\t\t\t/* We always have a stream open to MS at node 0 */\n\t\t\t\t\t\t\ti = pjob->ji_hosts[0].hn_stream;\n\t\t\t\t\t\t\tif ((ap = tpp_getaddr(i)) == NULL) {\n\t\t\t\t\t\t\t\tlog_joberr(-1, \"over_limit_message\",\n\t\t\t\t\t\t\t\t\t   \"cannot write to job stderr because there is no stream to MS\",\n\t\t\t\t\t\t\t\t\t   pjob->ji_qs.ji_jobid);\n\t\t\t\t\t\t\t} 
else {\n\t\t\t\t\t\t\t\tipaddr = ap->sin_addr.s_addr;\n\t\t\t\t\t\t\t\tif ((fd = open_demux(ipaddr, pjob->ji_stderr)) == -1) {\n\t\t\t\t\t\t\t\t\t(void) sprintf(log_buffer,\n\t\t\t\t\t\t\t\t\t\t       \"over_limit_message: cannot write to job stderr because open_demux failed\");\n\t\t\t\t\t\t\t\t\tlog_event(PBSEVENT_JOB | PBSEVENT_FORCE,\n\t\t\t\t\t\t\t\t\t\t  PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t\t\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\t\tif (write(fd, get_jattr_str(pjob, JOB_ATR_Cookie),\n\t\t\t\t\t\t\t\t\t      strlen(get_jattr_str(pjob, JOB_ATR_Cookie))) == -1)\n\t\t\t\t\t\t\t\t\t\tlog_errf(-1, __func__, \"write failed. ERR : %s\", strerror(errno));\n\t\t\t\t\t\t\t\t\tif (write(fd, kill_msg, strlen(kill_msg)) == -1)\n\t\t\t\t\t\t\t\t\t\tlog_errf(-1, __func__, \"write failed. ERR : %s\", strerror(errno));\n\t\t\t\t\t\t\t\t\t(void) close(fd);\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\tfree(kill_msg);\n\t\t\t\t\t}\n\n\t\t\t\t\t(void) terminate_job(pjob, 1);\n\t\t\t\t}\n\t\t\t} /* for pjob in mom_polljobs */\n\t\t}\t  /* if mom_recvd_ip_cluster_addrs */\n\t}\t\t  /* Mom main loop */\n\n\t/* if kill_jobs_on_exit set, kill any running/suspended jobs */\n\n\tif (kill_jobs_on_exit) {\n\t\tpjob = (job *) GET_NEXT(svr_alljobs);\n\t\twhile (pjob) {\n\t\t\tif (check_job_substate(pjob, JOB_SUBSTATE_RUNNING) || check_job_substate(pjob, JOB_SUBSTATE_SUSPEND) || check_job_substate(pjob, JOB_SUBSTATE_SCHSUSP))\n\t\t\t\t(void) kill_job(pjob, SIGKILL);\n\t\t\telse\n\t\t\t\tterm_job(pjob);\n\n\t\t\tpjob = (job *) GET_NEXT(pjob->ji_alljobs);\n\t\t}\n\t}\n\n#ifndef WIN32\n\tif (termin_child)\n\t\tscan_for_terminated();\n#endif\n\n\tif (exiting_tasks)\n\t\tscan_for_exiting();\n\t(void) mom_close_poll();\n\tsend_pending_updates();\n\n\tnet_close(-1); /* close all network connections */\n\ttpp_shutdown();\n\n\t/* Have we any jobs that can be purged before we go away? 
*/\n\n\twhile ((pjob = (job *) GET_NEXT(mom_deadjobs)) != NULL)\n\t\tjob_purge_mom(pjob);\n\n\t{\n\t\tint csret;\n\t\tif ((csret = CS_close_app()) != CS_SUCCESS) {\n\t\t\t/*had some problem closing the security library*/\n\n\t\t\tsprintf(log_buffer, \"problem closing security library (%d)\", csret);\n\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t}\n\t}\n\n\tcleanup();\n\n#ifdef PMIX\n\tPMIx_server_finalize();\n#endif\n\n\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER,\n\t\t  LOG_NOTICE, msg_daemonname, \"Is down\");\n\tpbs_idx_destroy(jobs_idx);\n\tunload_auths();\n\tif (lock_file(lockfds, F_UNLCK, \"mom.lock\", 1, NULL, 0))\n\t\tlog_errf(errno, msg_daemonname, \"failed to unlock mom.lock file\");\n\tlog_close(1);\n\tclose(lockfds);\n\tunlink(\"mom.lock\");\n#ifdef WIN32\n\tCloseDesktop(pbs_desktop);\n\tCloseWindowStation(pbs_winsta);\n#endif\n\n#ifdef PYTHON\n\tPy_Finalize();\n#endif\n\treturn (0);\n}\n\n/**\n * @brief\n *\tmake the directory names used by MOM\n *\n * @param[in] base - base path\n *\n * @return\tstring\n * @retval\tstring holding directory path\n *\n */\n\nstatic char *\nmk_dirs(char *base)\n{\n\tchar *pn;\n\tint ltop = strlen(pbs_conf.pbs_home_path);\n\n\tpn = malloc(ltop + strlen(base) + 2);\n\tif (pn == NULL)\n#ifdef WIN32\n\t\tExitThread(2);\n#else\n\t\texit(2);\n#endif\n\n\t(void) strcpy(pn, pbs_conf.pbs_home_path);\n#ifdef WIN32\n\tif (strchr(pn, '\\\\')) {\n\t\tif (*(pbs_conf.pbs_home_path + ltop - 1) != '\\\\')\n\t\t\t(void) strcat(pn, \"\\\\\");\n\t} else {\n\t\tif (*(pbs_conf.pbs_home_path + ltop - 1) != '/')\n\t\t\t(void) strcat(pn, \"/\");\n\t}\n#else\n\tif (*(pbs_conf.pbs_home_path + ltop - 1) != '/')\n\t\t(void) strcat(pn, \"/\");\n#endif /* WIN32 */\n\t(void) strcat(pn, base);\n\treturn (pn);\n}\n\n#ifdef WIN32\nint\nmain(int argc, char *argv[])\n{\n\tint reg = 0;\n\tint unreg = 0;\n\tint stalone = 0;\n\tSC_HANDLE schManager;\n\tSC_HANDLE schSelf;\n\tTCHAR szFileName[MAX_PATH];\n\n\t/*the real deal or version and 
exit?*/\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tif (argc > 1) {\n\t\tif (strcmp(argv[1], \"-R\") == 0)\n\t\t\treg = 1;\n\t\telse if (strcmp(argv[1], \"-U\") == 0)\n\t\t\tunreg = 1;\n\t\telse if (strcmp(argv[1], \"-N\") == 0)\n\t\t\tstalone = 1;\n\t\telse\n\t\t\tusage2(argv[0]);\n\t}\n\n\tif (reg || unreg) {\n\n\t\tschManager = OpenSCManager(0, 0, SC_MANAGER_ALL_ACCESS);\n\t\tif (schManager == 0) {\n\t\t\tErrorMessage(\"OpenSCManager\");\n\t\t\treturn 1;\n\t\t}\n\n\t\tif (reg) {\n\t\t\tGetModuleFileName(0, szFileName,\n\t\t\t\t\t  sizeof(szFileName) / sizeof(*szFileName));\n\n\t\t\tprintf(\"Installing service %s\\n\", g_PbsMomName);\n\t\t\tschSelf =\n\t\t\t\tCreateService(schManager, g_PbsMomName,\n\t\t\t\t\t      __TEXT(\"PBS_MOM\"),\n\t\t\t\t\t      SERVICE_ALL_ACCESS,\n\t\t\t\t\t      SERVICE_WIN32_OWN_PROCESS | SERVICE_INTERACTIVE_PROCESS,\n\t\t\t\t\t      SERVICE_AUTO_START, SERVICE_ERROR_NORMAL,\n\t\t\t\t\t      replace_space(szFileName, \"\"), 0, 0, 0, 0, 0);\n\n\t\t\tif (schSelf) {\n\t\t\t\tprintf(\"Service %s installed successfully!\\n\",\n\t\t\t\t       g_PbsMomName);\n\n\t\t\t} else {\n\t\t\t\tErrorMessage(\"CreateService\");\n\t\t\t\treturn 1;\n\t\t\t}\n\n\t\t\tif (schSelf != 0)\n\t\t\t\tCloseServiceHandle(schSelf);\n\n\t\t} else if (unreg) {\n\t\t\tschSelf = OpenService(schManager, g_PbsMomName, DELETE);\n\n\t\t\tif (schSelf) {\n\t\t\t\tif (DeleteService(schSelf)) {\n\t\t\t\t\tprintf(\"Service %s uninstalled successfully!\\n\", g_PbsMomName);\n\t\t\t\t} else {\n\t\t\t\t\tErrorMessage(\"DeleteService\");\n\t\t\t\t\treturn 1;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tErrorMessage(\"OpenService failed\");\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t\tif (schSelf != 0)\n\t\t\t\tCloseServiceHandle(schSelf);\n\t\t}\n\n\t\tif (schManager != 0)\n\t\t\tCloseServiceHandle(schManager);\n\n\t} else if (stalone) {\n\n\t\tstruct arg_param *pap;\n\t\tint i, j;\n\t\tpap = create_arg_param();\n\t\tif (pap == NULL) {\n\t\t\tErrorMessage(\"create_arg_param\");\n\t\t\treturn 
1;\n\t\t}\n\t\tpap->argc = argc - 1; /* don't pass the second argument */\n\t\tfor (i = j = 0; i < argc; i++) {\n\t\t\tif (i == 1)\n\t\t\t\tcontinue;\n\t\t\tif ((pap->argv[j] = strdup(argv[i])) == NULL) {\n\t\t\t\tfree_arg_param(pap);\n\t\t\t\tErrorMessage(\"strdup\");\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t\tj++;\n\t\t}\n\t\tmain_thread((void *) pap);\n\t\tif (cycle_harvester && interactive_svc_avail) {\n\t\t\tstop_pbs_interactive();\n\t\t}\n\t\tfree_arg_param(pap);\n\n\t} else { /* run as service */\n\t\tSERVICE_TABLE_ENTRY ServiceTable[] = {\n\t\t\t{(TCHAR *) g_PbsMomName, PbsMomMain},\n\t\t\t{0}};\n\n\t\tif (getenv(\"PBS_CONF_FILE\") == NULL) {\n\t\t\tchar conf_path[80];\n\t\t\tchar conf_env[80];\n\t\t\tchar *p;\n\t\t\tchar psave;\n\t\t\tstruct stat sbuf;\n\n\t\t\tif (p = strstr(argv[0], \"exec\")) {\n\t\t\t\tpsave = *p;\n\t\t\t\t*p = '\\0';\n\t\t\t\t_snprintf(conf_path, 79, \"%spbs.conf\", argv[0]);\n\t\t\t\t*p = psave;\n\t\t\t\tif (stat(conf_path, &sbuf) == 0) {\n\t\t\t\t\tsetenv(\"PBS_CONF_FILE\", conf_path, 1);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\thStop = CreateMutex(NULL, TRUE, NULL);\n\t\tif (!StartServiceCtrlDispatcher(ServiceTable)) {\n\t\t\tlog_err(-1, \"main\", \"StartServiceCtrlDispatcher\");\n\t\t\tErrorMessage(\"StartServiceCntrlDispatcher\");\n\t\t\treturn 1;\n\t\t}\n\t\tCloseHandle(hStop);\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n *\tEntry point for the service\n *\n * @param[in] dwArgc - Number of arguments in the rgszArgv array\n * @param[in] rgszArgv - Array of strings. 
The first string is the name of\n *\t\t\t the service and subsequent strings are passed by the process\n *\t\t\t that called the StartService function to start the service.\n *\n * @return Void\n *\n */\nvoid\n\tWINAPI\n\tPbsMomMain(DWORD dwArgc, LPTSTR *rgszArgv)\n{\n\tDWORD dwTID;\n\tDWORD dwWait;\n\tSERVICE_STATUS ss;\n\tDWORD i;\n\n\tstruct arg_param *pap;\n\n\tg_ssHandle = RegisterServiceCtrlHandler(g_PbsMomName, PbsMomHandler);\n\tif (g_ssHandle == 0) {\n\t\tErrorMessage(\"RegisterServiceCtrlHandler\");\n\t\treturn;\n\t}\n\n\tpap = create_arg_param();\n\tif (pap == NULL)\n\t\treturn;\n\tpap->argc = dwArgc;\n\n\tfor (i = 0; i < dwArgc; i++) {\n\t\tif ((pap->argv[i] = strdup(rgszArgv[i])) == NULL) {\n\t\t\tfree_arg_param(pap);\n\t\t\tErrorMessage(\"strdup\");\n\t\t\treturn;\n\t\t}\n\t}\n\n\tg_hthreadMain = (HANDLE) _beginthreadex(0, 0, main_thread, pap, 0, &dwTID);\n\tif (g_hthreadMain == 0) {\n\t\t(void) free_arg_param(pap);\n\t\tErrorMessage(\"CreateThread\");\n\t\treturn;\n\t}\n\n\tdwWait = WaitForSingleObject(g_hthreadMain, INFINITE);\n\tif (dwWait != WAIT_OBJECT_0) {\n\t\t(void) free_arg_param(pap);\n\t\tErrorMessage(\"WaitForSingleObject\");\n\t\treturn;\n\t}\n\n\t// NOTE: Update the global service state variable to indicate\n\t//      that the server has STOPPED. 
Use this to ACK the SCM\n\t//      that the service has stopped using SetServiceStatus.\n\tZeroMemory(&ss, sizeof(ss));\n\tss.dwServiceType = SERVICE_WIN32_OWN_PROCESS | SERVICE_INTERACTIVE_PROCESS;\n\tss.dwCurrentState = SERVICE_STOPPED;\n\tss.dwControlsAccepted = SERVICE_ACCEPT_STOP | SERVICE_ACCEPT_SHUTDOWN;\n\n\tif (g_ssHandle != 0)\n\t\tSetServiceStatus(g_ssHandle, &ss);\n\n\tfree_arg_param(pap);\n}\n\n/**\n * @brief\n *\tHandler function accepts shutdown and releases mutex so the\n *\tPbsMomMain() can know it's time to exit.\n *\n * @param[in] dwControl - control code\n *\n * @return Void\n *\n */\nvoid\n\tWINAPI\n\tPbsMomHandler(DWORD dwControl)\n{\n\tSERVICE_STATUS ss;\n\n\tZeroMemory(&ss, sizeof(ss));\n\tss.dwServiceType = SERVICE_WIN32_OWN_PROCESS | SERVICE_INTERACTIVE_PROCESS;\n\tss.dwCurrentState = g_dwCurrentState;\n\tss.dwControlsAccepted = SERVICE_ACCEPT_STOP | SERVICE_ACCEPT_SHUTDOWN;\n\n\tswitch (dwControl) {\n\t\tcase SERVICE_CONTROL_STOP:\n\t\tcase SERVICE_CONTROL_SHUTDOWN:\n\t\t\t// DONE: When you receive a stop request, update the global state\n\t\t\t//      variable to indicate that a STOP is pending. You need\n\t\t\t//      to then ACK the SCM by calling SetServiceStatus. 
Set\n\t\t\t//      the check point to 1 and the wait hint to 1 second,\n\t\t\t//      since we are going to wait for the server to shutdown.\n\n\t\t\tg_dwCurrentState = SERVICE_STOP_PENDING;\n\t\t\tss.dwCurrentState = g_dwCurrentState;\n\t\t\tss.dwCheckPoint = 1;\n\t\t\tss.dwWaitHint = 1000;\n\t\t\tif (g_ssHandle != 0)\n\t\t\t\tSetServiceStatus(g_ssHandle, &ss);\n\n\t\t\t/*\n\t\t\t log_close(1);\n\t\t\t TerminateThread(g_hthreadMain, 0);\n\t\t\t */\n\t\t\tReleaseMutex(hStop);\n\t\t\tkill_jobs_on_exit = 1;\n\n\t\t\t/* if cycle harvesting configured and PBS_INTERACTIVE service is registered and started, then stop PBS_INTERACTIVE service */\n\t\t\tif (cycle_harvester && interactive_svc_avail) {\n\t\t\t\t_beginthreadex(0, 0, (void *) stop_pbs_interactive, NULL, 0, 0);\n\t\t\t}\n\n\t\t\tCloseHandle(g_hthreadMain);\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\tif (g_ssHandle != 0)\n\t\t\t\tSetServiceStatus(g_ssHandle, &ss);\n\t\t\tbreak;\n\t}\n}\n#else /* WIN32 */\n\n#endif /* WIN32 */\n\n/* the following is used in support of the getkbdtime() function */\n\nstatic int check_idle_daemon = 1;\nstruct input_dev_list {\n\tint idl_here;\n\tchar *idl_name;\n} input_dev_list[] = {\n\t{1, \"/dev/mouse\"},\n\t{1, \"/dev/kbd\"},\n\t{1, \"/dev/keybd\"},\n\t{1, \"/dev/kbd0\"},\n\t{1, \"/dev/kbd1\"},\n\t{1, \"/dev/hid/mouse_000\"},\n\t{1, \"/dev/hid/kbd_000\"},\n\t{0, NULL}};\n\n/**\n * @brief\n *\tset most recent access time found for dev/file\n *      maxtm set if the dev/file st_atime is more recent than current value\n *\n * @param[in] dev - dev/file\n *\n * @retval  -1 if dev/file does not exist\n * @retval   0 if dev/file st_atime not most recent\n * @retval  +1 if dev/file st_atime is most recent (so far)\n *\n */\nstatic int\nsetmax(char *dev)\n{\n\tstruct stat sb;\n\n\tif (stat(dev, &sb) == -1)\n\t\treturn -1;\n\tif (maxtm < sb.st_atime) {\n\t\tmaxtm = sb.st_atime;\n\t\treturn 1;\n\t}\n\n\treturn 0;\n}\n\n/**\n * get the most recent access time for mouse/keyboard/...\n *\twe assume 
that \"time_now\" is the current time\n */\n#ifdef WIN32\n/**\n * @brief\n *\tgets most recent access time found for idle_touch file\n *\n * @return\ttime_t\n * @retval\t-1\t\tFailure\n * @retval\tmax time\tSuccess\n *\n */\n\ntime_t\ngetkbdtime()\n{\n\tchar idle_touch[MAX_PATH];\n\n\t/* Create full path of idle_touch file */\n\tsnprintf(idle_touch, MAX_PATH, \"%s/spool/idle_touch\", pbs_conf.pbs_home_path);\n\n\t/* set most recent access time found for idle_touch file */\n\tif (setmax(idle_touch) == -1) {\n\t\t/* idle_touch file not found; log this event and disable cycle harvesting */\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"Cycle Harvesting Failed, Please contact Admin\");\n\t\tlog_event(PBSEVENT_SYSTEM, 0, LOG_WARNING, \"Cycle_Harvesting\", log_buffer);\n\t\tcycle_harvester = 0;\n\t\tidle_check = -1;\n\t}\n\treturn (maxtm);\n}\n#else\n\ntime_t\ngetkbdtime(void)\n{\n\tDIR *dp;\n\tstruct dirent *de;\n\tstatic char idle_dir[MAXPATHLEN + 1] = {'\\0'};\n\tchar *idle_file = NULL;\n\tstruct input_dev_list *pl = &input_dev_list[0];\n\tint i;\n\tint checked = 0;\n\tchar *ptsname;\n\n\t/* since we call this function so often, only want to set this once */\n\tif (idle_dir[0] == '\\0')\n\t\tsnprintf(idle_dir, sizeof(idle_dir), \"%s/spool/idledir\", pbs_conf.pbs_home_path);\n\n\tif (check_idle_daemon) {\n\t\tif ((dp = opendir(idle_dir)) != NULL) {\n\t\t\twhile ((de = readdir(dp)) != NULL) {\n\t\t\t\tptsname = de->d_name;\n\n\t\t\t\tif (maxtm >= time_now)\n\t\t\t\t\tbreak;\n\t\t\t\tif (*ptsname == '.')\n\t\t\t\t\tcontinue;\n\t\t\t\tpbs_asprintf(&idle_file, \"%s/%s\", idle_dir, ptsname);\n\t\t\t\tif (setmax(idle_file) > 0)\n\t\t\t\t\tchecked = 1;\n\t\t\t\tfree(idle_file);\n\t\t\t}\n\t\t\tclosedir(dp);\n\t\t} else\n\t\t\tcheck_idle_daemon = 0;\n\t}\n\n\tif (checked == 0) {\n\t\t/* look at list of known keyboard/mouse devices */\n\t\tfor (i = 0; (pl + i)->idl_name; ++i) {\n\t\t\tif ((pl + i)->idl_here && (pl + i)->idl_name) {\n\t\t\t\tif (setmax((pl + i)->idl_name) == 
-1)\n\t\t\t\t\t(pl + i)->idl_here = 0; /* ignore this from now on */\n\t\t\t}\n\t\t}\n\t}\n\n\treturn (maxtm);\n}\n#endif /* WIN32 */\n\n/**\n * @brief\n *\treturns the idle time of keyboard\n *\n * @param[in] attrib - pointer to rm_attribute structure\n *\n * @return\tstring\n * @retval\tNULL\tFailure\n * @retval\tstring\tSuccess\n *\n */\nchar *\nidletime(struct rm_attribute *attrib)\n{\n\ttime_t idle;\n\ttime_t lastkey;\n\n\tif (attrib) {\n\t\trm_errno = RM_ERR_BADPARAM;\n\t\treturn NULL;\n\t}\n\n\ttime_now = time(0);\n\n\tlastkey = getkbdtime();\n\tif (lastkey > time_now)\n\t\tidle = 0;\n\telse\n\t\tidle = time_now - lastkey;\n\tsprintf(ret_string, \"%lu\", (u_long) idle);\n\treturn ret_string;\n}\n\n/**\n * @brief\n *\tsuspend/resume jobs based on keyboard being active/idle\n *\tThis function is like susp_resum() in requests.c except that a\n *\tdifferent flag is set\n *\n * @param[in] pjob - pointer to job\n * @param[in] which is 1 to suspend, 0 to resume\n *\n * @return Void\n *\n */\nstatic void\nactive_idle(job *pjob, int which)\n{\n\tDBPRT((\"active_idle (keyboard): %s job %s\\n\",\n\t       which == 1 ? 
\"suspending\" : \"resuming\", pjob->ji_qs.ji_jobid))\n\tif (((which == 1) && ((pjob->ji_qs.ji_svrflags &\n\t\t\t       (JOB_SVFLG_Suspend | JOB_SVFLG_Actsuspd)) == 0)) ||\n\t    ((which == 0) && ((pjob->ji_qs.ji_svrflags &\n\t\t\t       (JOB_SVFLG_Suspend | JOB_SVFLG_Actsuspd)) ==\n\t\t\t      JOB_SVFLG_Actsuspd))) {\n\t\tif (do_susres(pjob, which) < 0)\n\t\t\treturn;\n\t}\n\n\ttime_now = time(0);\n\tif (which == 1) { /* suspend */\n\t\tset_job_substate(pjob, JOB_SUBSTATE_SUSPEND);\n\t\tpjob->ji_qs.ji_svrflags |= JOB_SVFLG_Actsuspd;\n\t\tsend_wk_job_idle(pjob->ji_qs.ji_jobid, which);\n\t\tif ((pjob->ji_qs.ji_svrflags &\n\t\t     (JOB_SVFLG_Suspend | JOB_SVFLG_Actsuspd)) == 0) {\n\t\t\tstop_walltime(pjob);\n\t\t}\n\t} else { /* resume */\n\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_Suspend) == 0) {\n\t\t\tstart_walltime(pjob);\n\t\t\tset_job_substate(pjob, JOB_SUBSTATE_RUNNING);\n\t\t}\n\t\tpjob->ji_qs.ji_svrflags &= ~JOB_SVFLG_Actsuspd;\n\t\tsend_wk_job_idle(pjob->ji_qs.ji_jobid, which);\n\t}\n\tjob_save(pjob);\n}\n\nvoid\ndo_multinodebusy(job *pjob, int which)\n{\n\tint stream;\n\n\tDBPRT((\"multinodebusy: dealing with job %s\\n\", pjob->ji_qs.ji_jobid))\n\n\tif (chk_mom_action(MultiNodeBusy) == Requeue) {\n\n\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) != 0) {\n\n\t\t\t/* this is Mother Superior, kill and requeue job */\n\n\t\t\tpjob->ji_qs.ji_un.ji_momt.ji_exitstat = JOB_EXEC_RERUN;\n\t\t\t(void) kill_job(pjob, SIGKILL);\n\t\t\t/* Server will decide if job is rerunnable or not */\n\t\t} else {\n\n\t\t\t/* I am a sister node, send requeue message to Mom Superior */\n\n\t\t\tstream = pjob->ji_hosts[0].hn_stream;\n\t\t\tim_compose(stream, pjob->ji_qs.ji_jobid,\n\t\t\t\t   get_jattr_str(pjob, JOB_ATR_Cookie),\n\t\t\t\t   IM_REQUEUE, TM_NULL_EVENT, TM_NULL_TASK, IM_OLD_PROTOCOL_VER);\n\t\t\tdis_flush(stream);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\tidle all running jobs on keyboard active\n *\tEven jobs that have been suspended on \"suspend\" signal request are 
idled\n *\tAt the current time, only single node jobs are suspended,\n *\tmultinode jobs are ignored or requeued.\n *\n * @return\tVoid\n *\n */\nvoid\nidle_jobs(void)\n{\n\tjob *pjob;\n\tupdate_state_flag = 0;\n\tfor (pjob = (job *) GET_NEXT(svr_alljobs);\n\t     pjob;\n\t     pjob = (job *) GET_NEXT(pjob->ji_alljobs)) {\n\t\tif (check_job_state(pjob, JOB_STATE_LTR_RUNNING) &&\n\t\t    ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_Actsuspd) == 0)) {\n\t\t\t/* right now we can only handle one node jobs */\n\t\t\tif (pjob->ji_numnodes == 1) {\n\t\t\t\tupdate_state_flag = 1;\n\t\t\t\tactive_idle(pjob, 1);\n\t\t\t} else {\n\t\t\t\tdo_multinodebusy(pjob, 1);\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\tactivate all idle jobs on keyboard going idle\n *\tEven jobs that have been suspended on \"suspend\" signal request are\n *\t\"activated\", though they will remain in suspend state.\n *\tAt the current time, only single node jobs are handled.\n *\n * @return\tVoid\n *\n */\n\nvoid\nactivate_jobs(void)\n{\n\tjob *pjob;\n\tupdate_state_flag = 0;\n\tfor (pjob = (job *) GET_NEXT(svr_alljobs);\n\t     pjob;\n\t     pjob = (job *) GET_NEXT(pjob->ji_alljobs)) {\n\t\tif (check_job_state(pjob, JOB_STATE_LTR_RUNNING) &&\n\t\t    ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_Actsuspd) != 0)) {\n\t\t\tif (pjob->ji_numnodes == 1) {\n\t\t\t\tupdate_state_flag = 1;\n\t\t\t\tactive_idle(pjob, 0);\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\tIf current load average >= max_load_val and busy not already set\n *\t\tset it\n *\tIf current load average lt ideal_load_val and busy currently set\n *\t\tunset it\n */\nstatic void\ncheck_busy(double mla)\n{\n\textern int internal_state;\n\textern float ideal_load_val;\n\textern float max_load_val;\n\n\tif ((mla >= max_load_val) && ((internal_state & INUSE_BUSY) == 0)) {\n\t\tinternal_state |= INUSE_BUSY;\n\t\tinternal_state_update = UPDATE_MOM_STATE;\n\t\tif (idle_on_maxload)\n\t\t\tidle_jobs();\n\t} else if ((mla < ideal_load_val) && ((internal_state & 
INUSE_BUSY) != 0)) {\n\t\tinternal_state = (internal_state & ~INUSE_BUSY);\n\t\tinternal_state_update = UPDATE_MOM_STATE;\n\t\tif (idle_on_maxload)\n\t\t\tactivate_jobs();\n\t}\n}\n\n/**\n * @fn mom_topology\n * @brief\n *\tcompute and export platform-dependent topology information\n *\n * @return\tvoid\n *\n * @par MT-Safe:\tno\n * @par Side Effects:\n *\tNone\n *\n * @par Note:\tnominally, we use the Open-MPI hardware locality (a.k.a. hwloc)\n *\t\tfunctions to export the topology information that it generates,\n *\t\tbut the case for the Cray is different.\n *\n *\t\tAlso note that whenever we want the topology node attribute to\n *\t\tcontain a different type of information, this function will need\n *\t\tto change.\n *\n *\t\tOn Windows we use native Windows API's to discover the topology\n *\n * @see\tdep_topology()\n *\n */\nvoid\nmom_topology(void)\n{\n\textern char mom_short_name[];\n\textern callfunc_t vn_callback;\n\tint ret = -1;\n\tchar *xmlbuf = NULL;\n\tint xmllen = 0;\n\tvnl_t *vtp = NULL;\n\tchar *topology_type;\n\tint fd[2];\n\tint pid;\n\n#ifndef WIN32\n\tif (pipe(fd) == -1) \n\t\tlog_errf(-1, __func__, \"pipe API failed. 
ERR : %s\", strerror(errno));\n\n\tif ((pid = fork()) == -1) {\n\t\tlog_err(PBSE_SYSTEM, __func__, \"fork failed\");\n\t\treturn;\n\t}\n\n\tif (pid == 0) {\n\t\thwloc_topology_t topology;\n\t\tret = 0;\n\n\t\tclose(fd[0]);\n\n\t\tret = hwloc_topology_init(&topology);\n\t\tif (ret == 0)\n#if HWLOC_API_VERSION < 0x00020000\n\t\t\tret = hwloc_topology_set_flags(topology,\n\t\t\t\t\t\t       HWLOC_TOPOLOGY_FLAG_WHOLE_SYSTEM |\n\t\t\t\t\t\t\t       HWLOC_TOPOLOGY_FLAG_IO_DEVICES);\n#else\n\t\t\tret = hwloc_topology_set_io_types_filter(topology,\n\t\t\t\t\t\t\t\t HWLOC_TYPE_FILTER_KEEP_ALL);\n#endif\n\t\tif (ret == 0)\n\t\t\tret = hwloc_topology_load(topology);\n\t\tif (ret == 0)\n#if HWLOC_API_VERSION < 0x00020000\n\t\t\tret = hwloc_topology_export_xmlbuffer(topology,\n\t\t\t\t\t\t\t      &xmlbuf, &xmllen);\n#else\n\t\t\tret = hwloc_topology_export_xmlbuffer(topology,\n\t\t\t\t\t\t\t      &xmlbuf, &xmllen,\n\t\t\t\t\t\t\t      HWLOC_TOPOLOGY_EXPORT_XML_FLAG_V1);\n#endif\n\t\tif (ret != 0)\n\t\t\tret = -1;\n\n\t\tif (write(fd[1], &ret, (sizeof(ret))) == -1) \n\t\t\tlog_errf(-1, __func__, \"write failed. ERR : %s\", strerror(errno));\n\t\tif (write(fd[1], &xmllen, (sizeof(xmllen))) == -1) \n\t\t\tlog_errf(-1, __func__, \"write failed. ERR : %s\", strerror(errno));\t\t\t\n\t\tif (write(fd[1], xmlbuf, xmllen) == -1) \n\t\t\tlog_errf(-1, __func__, \"write failed. ERR : %s\", strerror(errno));\t\t\t\n\n\t\thwloc_free_xmlbuffer(topology, xmlbuf);\n\t\thwloc_topology_destroy(topology);\n\n\t\texit(0);\n\t} else {\n\t\tclose(fd[1]);\n\n\t\tif (read(fd[0], &ret, sizeof(ret)) == -1) \n\t\t\tlog_errf(-1, __func__, \"read failed. ERR : %s\", strerror(errno));\n\t\tif (read(fd[0], &xmllen, sizeof(xmllen)) == -1) \n\t\t\tlog_errf(-1, __func__, \"read failed. 
ERR : %s\", strerror(errno));\t\t\t\n\t\tif ((xmlbuf = malloc(xmllen + 1)) == NULL) {\n\t\t\tlog_err(PBSE_SYSTEM, __func__, \"malloc failed\");\n\t\t\treturn;\n\t\t}\n\t\txmlbuf[xmllen] = '\\0';\n\t\tif (read(fd[0], xmlbuf, xmllen) == -1) \n\t\t\tlog_errf(-1, __func__, \"read failed. ERR : %s\", strerror(errno));\n\n\t\tclose(fd[0]);\n\n\t\twaitpid(pid, NULL, 0);\n\t}\n\tif (ret < 0) {\n\t\t/* on any failure above, issue log message */\n\t\tlog_err(PBSE_SYSTEM, __func__, \"topology init/load/export failed\");\n\t\treturn;\n\t} else\n#endif\n\t{\n\t\tchar *lbuf;\n\t\tint lbuflen = xmllen + 1024;\n#ifdef WIN32\n\t\tint no_of_sockets = 0;\n#endif\n\n\t\t/*\n\t\t *\txmlbuf is almost certain to overflow log_buffer's size,\n\t\t *\tso for logging this information, we allocate one large\n\t\t *\tenough to hold it\n\t\t */\n\t\tif ((lbuf = malloc(lbuflen)) == NULL) {\n\t\t\tsprintf(log_buffer, \"malloc logbuf (%d) failed\",\n\t\t\t\tlbuflen);\n\t\t\tgoto bad;\n\t\t} else {\n\t\t\tsprintf(lbuf, \"allocated log buffer, len %d\", lbuflen);\n\t\t\tlog_event(PBSEVENT_DEBUG4, PBS_EVENTCLASS_NODE,\n\t\t\t\t  LOG_DEBUG, __func__, lbuf);\n\t\t}\n\t\tlog_event(PBSEVENT_DEBUG4,\n\t\t\t  PBS_EVENTCLASS_NODE,\n\t\t\t  LOG_DEBUG, __func__, \"topology exported\");\n\t\tif (vnl_alloc(&vtp) == NULL) {\n\t\t\tlog_err(PBSE_SYSTEM, __func__, \"vnl_alloc failed\");\n\t\t\tfree(lbuf);\n\t\t\tgoto bad;\n\t\t}\n#ifndef WIN32\n\t\ttopology_type = NODE_TOPOLOGY_TYPE_HWLOC;\n\t\tsprintf(lbuf, \"%s%s\", topology_type, xmlbuf);\n#else\n\t\ttopology_type = NODE_TOPOLOGY_TYPE_WIN;\n\t\tno_of_sockets = count_sockets();\n\t\tif (no_of_sockets == -1)\n\t\t\tgoto bad;\n\t\tsprintf(lbuf, \"%ssockets:%d,gpus:%d,mics:%d\", topology_type,\n\t\t\tno_of_sockets, count_gpus(), count_mics());\n#endif\n\t\tif ((ret = vn_addvnr(vtp, mom_short_name, ATTR_NODE_TopologyInfo,\n\t\t\t\t     lbuf, ATR_TYPE_STR, READ_ONLY,\n\t\t\t\t     NULL)) != 0) {\n\t\t\tlog_err(PBSE_SYSTEM, __func__, \"vnl_addvnr 
failed\");\n\t\t\tvnl_free(vtp);\n\t\t\tfree(lbuf);\n\t\t\tgoto bad;\n\t\t} else {\n#ifndef WIN32\n\t\t\tsprintf(lbuf, \"attribute '%s = %s%s' added\", ATTR_NODE_TopologyInfo,\n\t\t\t\ttopology_type, xmlbuf);\n\t\t\tlog_event(PBSEVENT_DEBUG4, PBS_EVENTCLASS_NODE, LOG_DEBUG, __func__, lbuf);\n#else\n\t\t\tsprintf(log_buffer, \"attribute '%s = %s' added\", ATTR_NODE_TopologyInfo,\n\t\t\t\tlbuf);\n\t\t\tlog_event(PBSEVENT_DEBUG4, PBS_EVENTCLASS_NODE, LOG_DEBUG, __func__,\n\t\t\t\t  log_buffer);\n#endif\n\t\t}\n\t\tif (vnlp == NULL) {\n\t\t\t/*\n\t\t\t *\tWe must create a natural vnode to hold the\n\t\t\t *\tATTR_NODE_TopologyInfo attribute.  This natural\n\t\t\t *\tvnode must also include available memory and\n\t\t\t *\tncpus information.\n\t\t\t */\n\n\t\t\tchar attrbuf[1024];\n\t\t\tchar valbuf[1024];\n\t\t\tchar *memstr = physmem(NULL);\n\n\t\t\tsprintf(attrbuf, \"%s.%s\", ATTR_rescavail, \"mem\");\n\t\t\tsprintf(valbuf, \"%s\", memstr != NULL ? memstr : \"0\");\n\t\t\tif ((ret = vn_addvnr(vtp, mom_short_name, attrbuf,\n\t\t\t\t\t     valbuf, 0, 0, NULL)) != 0) {\n\t\t\t\tlog_err(PBSE_SYSTEM, __func__, \"vnl_alloc failed\");\n\t\t\t\tvnl_free(vtp);\n\t\t\t\tfree(lbuf);\n\t\t\t\tgoto bad;\n\t\t\t} else {\n\t\t\t\tsprintf(lbuf, \"resource '%s = %s' added\",\n\t\t\t\t\tattrbuf, valbuf);\n\t\t\t\tlog_event(PBSEVENT_DEBUG4,\n\t\t\t\t\t  PBS_EVENTCLASS_NODE,\n\t\t\t\t\t  LOG_DEBUG, __func__, lbuf);\n\t\t\t}\n\n\t\t\tsprintf(attrbuf, \"%s.%s\", ATTR_rescavail, \"ncpus\");\n\t\t\tsprintf(valbuf, \"%d\", num_acpus);\n\t\t\tif ((ret = vn_addvnr(vtp, mom_short_name, attrbuf,\n\t\t\t\t\t     valbuf, 0, 0, NULL)) != 0) {\n\t\t\t\tlog_err(PBSE_SYSTEM, __func__, \"vnl_alloc failed\");\n\t\t\t\tvnl_free(vtp);\n\t\t\t\tfree(lbuf);\n\t\t\t\tgoto bad;\n\t\t\t} else {\n\t\t\t\tsprintf(lbuf, \"resource '%s = %s' added\",\n\t\t\t\t\tattrbuf, valbuf);\n\t\t\t\tlog_event(PBSEVENT_DEBUG4,\n\t\t\t\t\t  PBS_EVENTCLASS_NODE,\n\t\t\t\t\t  LOG_DEBUG, __func__, 
lbuf);\n\t\t\t}\n\t\t\tvtp->vnl_modtime = time(NULL);\n\t\t\tvnlp = vtp;\n\n\t\t} else {\n\t\t\tvn_merge(vnlp, vtp, vn_callback);\n\t\t\tvnl_free(vtp);\n\t\t}\n\t\tfree(lbuf);\n\t}\nbad:\n#ifndef WIN32\n\tfree(xmlbuf);\n#else\n    ;\n#endif\n}\n"
  },
  {
    "path": "src/resmom/mom_pmix.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tmom_pmix.c\n */\n\n#include <pbs_config.h>\n\n#ifdef PMIX\n\n#include <stdio.h>\n#include <string.h>\n#include <time.h>\n#include <errno.h>\n#include <sys/types.h>\n#include <sys/wait.h>\n#include <sys/stat.h>\n#include <fcntl.h>\n#include <errno.h>\n#include <pthread.h>\n#include \"mom_pmix.h\"\n#include \"mom_func.h\"\n#include \"list_link.h\"\n#include \"log.h\"\n#include \"tm.h\"\n\nextern char *log_file;\nextern char *path_log;\nextern char mom_short_name[];\nextern pbs_list_head svr_alljobs;\n\n#if 0\n\n/*\n * Some operations like spawn and fence are not atomic and occur\n * over a series of steps. In some cases, data needs to be retained\n * and used in subsequent steps. We might choose to define a data\n * structure that houses both tracking information (e.g. namespace,\n * operation type, etc.) together with an opaque union to house the\n * underlying data. 
That is what is being suggested (but not yet\n * implemented) here.\n */\n\n/* Enumerate operations that require PBS to call back */\ntypedef enum pbs_pmix_oper_type {\n\tPBS_PMIX_OPER_NONE,\n\tPBS_PMIX_FENCE,\n\tPBS_PMIX_SPAWN,\n\t/* Add new operations before PBS_PMIX_OPER_UNDEFINED */\n\tPBS_PMIX_OPER_UNDEFINED\n} pbs_pmix_oper_type_t;\n\n/* Data structure to house auxiliary PMIx operation data */\ntypedef struct pbs_pmix_oper {\n\tpbs_pmix_oper_type_t type;\n\tjob *pjob;\n\tstruct pbs_pmix_oper *next;\n\tunion {\n\t\tpbs_pmix_fence_data_t fence;\n\t\tpbs_pmix_spawn_data_t spawn;\n\t} data;\n} pbs_pmix_oper_t;\n\n#endif\n\n/* Locking structure and macros */\ntypedef struct {\n\tpthread_mutex_t mutex;\n\tpthread_cond_t cond;\n\tvolatile bool active;\n\tpmix_status_t status;\n} pbs_pmix_lock_t;\n\n#define PBS_PMIX_CONSTRUCT_LOCK(l)                     \\\n\tdo {                                           \\\n\t\tpthread_mutex_init(&(l)->mutex, NULL); \\\n\t\tpthread_cond_init(&(l)->cond, NULL);   \\\n\t\t(l)->active = true;                    \\\n\t\t(l)->status = PMIX_SUCCESS;            \\\n\t} while (0)\n\n#define PBS_PMIX_DESTRUCT_LOCK(l)                   \\\n\tdo {                                        \\\n\t\tpthread_mutex_destroy(&(l)->mutex); \\\n\t\tpthread_cond_destroy(&(l)->cond);   \\\n\t} while (0)\n\n#define PBS_PMIX_WAIT_THREAD(lck)                                       \\\n\tdo {                                                            \\\n\t\tpthread_mutex_lock(&(lck)->mutex);                      \\\n\t\twhile ((lck)->active) {                                 \\\n\t\t\tpthread_cond_wait(&(lck)->cond, &(lck)->mutex); \\\n\t\t}                                                       \\\n\t\tpthread_mutex_unlock(&(lck)->mutex);                    \\\n\t} while (0)\n\n#define PBS_PMIX_WAKEUP_THREAD(lck)                   \\\n\tdo {                                          \\\n\t\tpthread_mutex_lock(&(lck)->mutex);    \\\n\t\t(lck)->active = false;    
             \\\n\t\tpthread_cond_broadcast(&(lck)->cond); \\\n\t\tpthread_mutex_unlock(&(lck)->mutex);  \\\n\t} while (0)\n\n/**\n * @brief\n * This callback function is invoked by the PMIx library after it\n * has been notified a process has exited.\n *\n * @param[in] status - exit status of the process\n * @param[in] cbdata - opaque callback data passed to the caller\n *\n * @retval void\n *\n * @note\n * This function may be superfluous, in which case the call to\n * PMIx_Notify_event() should be passed NULL in pbs_pmix_notify_exit()\n * for its callback and this function removed. It has been left in for\n * the time being so that the log shows it being called.\n */\nstatic void\npbs_pmix_notify_exit_cb(pmix_status_t status, void *cbdata)\n{\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"called\");\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"returning\");\n}\n\n/**\n * @brief\n * Notify PMIx that a task has exited by constructing a PMIx info\n * array and passing it to PMIx_Notify_event.\n *\n * @param[in] pjob - pointer to the job structure for the exited task\n * @param[in] exitstat - numeric exit status of the task\n * @param[in] msg - optional message supplied by the caller\n *\n * @retval void\n */\nvoid\npbs_pmix_notify_exit(job *pjob, int exitstat, char *msg)\n{\n\tpmix_status_t status;\n\tpmix_info_t *pinfo;\n\tsize_t ninfo;\n\tpmix_proc_t procname, procsrc;\n\tbool flag = true;\n\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"called\");\n\tif (!pjob) {\n\t\tlog_event(PBSEVENT_DEBUG, 0, LOG_ERR, __func__,\n\t\t\t  \"No job supplied, returning\");\n\t\treturn;\n\t}\n\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t  pjob->ji_qs.ji_jobid,\n\t\t  \"Setting up the info array for termination\");\n\t/* Info array will contain three entries */\n\tninfo = 3;\n\t/* Add one if a message was provided */\n\tif (msg)\n\t\tninfo++;\n\t/* Create the info array */\n\tPMIX_INFO_CREATE(pinfo, ninfo);\n\t/* Ensure this only goes to 
the job terminated event handler */\n\tPMIX_INFO_LOAD(&pinfo[0], PMIX_EVENT_NON_DEFAULT, &flag, PMIX_BOOL);\n\t/* Provide the exit status of the application */\n\tPMIX_INFO_LOAD(&pinfo[1], PMIX_JOB_TERM_STATUS, &exitstat, PMIX_STATUS);\n\t/* Provide the rank */\n\tPMIX_LOAD_PROCID(&procname, pjob->ji_qs.ji_jobid, PMIX_RANK_WILDCARD);\n\tPMIX_INFO_LOAD(&pinfo[2], PMIX_EVENT_AFFECTED_PROC, &procname, PMIX_PROC);\n\t/* Provide the message if provided */\n\tif (msg)\n\t\tPMIX_INFO_LOAD(&pinfo[3], PMIX_EVENT_TEXT_MESSAGE, msg, PMIX_STRING);\n\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t  pjob->ji_qs.ji_jobid, \"Info array populated\");\n\t/*\n\t * The source of the event may not be mother superior because it\n\t * will cause the PMIx server to upcall recursively. Use an\n\t * undefined rank as the source.\n\t */\n\tPMIX_LOAD_PROCID(&procsrc, pjob->ji_qs.ji_jobid, PMIX_RANK_UNDEF);\n\tstatus = PMIx_Notify_event(PMIX_ERR_JOB_TERMINATED, &procsrc,\n\t\t\t\t   PMIX_RANGE_SESSION, pinfo, ninfo, pbs_pmix_notify_exit_cb,\n\t\t\t\t   NULL);\n\tswitch (status) {\n\t\t/* The first four status cases are documented */\n\t\tcase PMIX_SUCCESS:\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t\t  \"Exit notification pending callback\");\n\t\t\tbreak;\n\t\tcase PMIX_OPERATION_SUCCEEDED:\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t\t  \"Exit notification successful\");\n\t\t\tbreak;\n\t\tcase PMIX_ERR_BAD_PARAM:\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t\t  \"Exit notification contains bad parameter\");\n\t\t\tbreak;\n\t\tcase PMIX_ERR_NOT_SUPPORTED:\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t\t  \"Exit notification not supported\");\n\t\t\tbreak;\n\t\tdefault:\n\t\t\t/* An undocumented error type was encountered */\n\t\t\tlog_eventf(PBSEVENT_JOB, 
PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t   pjob->ji_qs.ji_jobid,\n\t\t\t\t   \"Exit notification failed: %s\",\n\t\t\t\t   PMIx_Error_string(status));\n\t\t\tbreak;\n\t}\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"returning\");\n}\n\n/**\n * @brief\n * Client called PMIx_server_register_client\n *\n * @param[in] proc - client handle\n * @param[in] server_object - value provided by caller\n * @param[in] cbfunc - optional callback function\n * @param[in] cbdata - opaque data provided to cbfunc\n *\n * @return pmix_status_t\n * @retval PMIX_SUCCESS - request in progress, cbfunc should not be called\n *                        here but will be called later by PMIx\n * @retval PMIX_OPERATION_SUCCEEDED - request immediately processed and\n *                                    successful, cbfunc will not be called\n * @retval PMIX_ERR_BAD_PARAM - one of the provided parameters was invalid\n * @retval PMIX_ERR_NOT_IMPLEMENTED - function not implemented\n */\nstatic pmix_status_t\npbs_pmix_client_connected(\n\tconst pmix_proc_t *proc,\n\tvoid *server_object,\n\tpmix_op_cbfunc_t cbfunc,\n\tvoid *cbdata)\n{\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"called\");\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"returning\");\n\treturn PMIX_OPERATION_SUCCEEDED;\n}\n\n/**\n * @brief\n * Client called PMIx_Finalize\n *\n * @param[in] proc - client handle\n * @param[in] server_object - value provided by caller\n * @param[in] cbfunc - optional callback function\n * @param[in] cbdata - opaque data provided to cbfunc\n *\n * @return pmix_status_t\n * @retval PMIX_SUCCESS - request in progress, cbfunc should not be called\n *                        here but will be called later by PMIx\n * @retval PMIX_OPERATION_SUCCEEDED - request immediately processed and\n *                                    successful, cbfunc will not be called\n * @retval PMIX_ERR_BAD_PARAM - one of the provided parameters was invalid\n * @retval PMIX_ERR_NOT_IMPLEMENTED - function 
not implemented\n */\nstatic pmix_status_t\npbs_pmix_client_finalized(\n\tconst pmix_proc_t *proc,\n\tvoid *server_object,\n\tpmix_op_cbfunc_t cbfunc,\n\tvoid *cbdata)\n{\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"called\");\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"returning\");\n\treturn PMIX_OPERATION_SUCCEEDED;\n}\n\n/**\n * @brief\n * Client called PMIx_Abort\n *\n * @param[in] proc - client handle\n * @param[in] server_object - value provided by caller\n * @param[in] status - client status\n * @param[in] msg - client status message\n * @param[in] procs - array of client handle pointers\n * @param[in] nprocs - number of client handle pointers\n * @param[in] cbfunc - optional callback function\n * @param[in] cbdata - opaque data provided to cbfunc\n *\n * @return pmix_status_t\n * @retval PMIX_SUCCESS - request in progress, cbfunc should not be called\n *                        here but will be called later by PMIx\n * @retval PMIX_OPERATION_SUCCEEDED - request immediately processed and\n *                                    successful, cbfunc will not be called\n * @retval PMIX_ERR_BAD_PARAM - one of the provided parameters was invalid\n * @retval PMIX_ERR_NOT_IMPLEMENTED - function not implemented\n */\nstatic pmix_status_t\npbs_pmix_abort(\n\tconst pmix_proc_t *proc,\n\tvoid *server_object,\n\tint status,\n\tconst char msg[],\n\tpmix_proc_t procs[],\n\tsize_t nprocs,\n\tpmix_op_cbfunc_t cbfunc,\n\tvoid *cbdata)\n{\n\tint i;\n\tjob *pjob;\n\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"called\");\n\tif (!proc) {\n\t\tlog_err(-1, __func__, \"pmix_proc_t parameter is NULL\");\n\t\tlog_event(PBSEVENT_DEBUG, 0, LOG_DEBUG, __func__, \"returning\");\n\t\treturn PMIX_ERROR;\n\t}\n\tif (!proc->nspace || (*proc->nspace == '\\0')) {\n\t\tlog_err(-1, __func__, \"Invalid PMIx namespace\");\n\t\tlog_event(PBSEVENT_DEBUG, 0, LOG_DEBUG, __func__, \"returning\");\n\t\treturn PMIX_ERROR;\n\t}\n\tpjob = find_job((char *) 
proc->nspace);\n\tif (!pjob) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"Job not found: %s\", proc->nspace);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\tlog_event(PBSEVENT_DEBUG, 0, LOG_DEBUG, __func__, \"returning\");\n\t\treturn PMIX_ERROR;\n\t}\n\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t   pjob->ji_qs.ji_jobid, \"abort status: %d\", status);\n\tif (msg && *msg) {\n\t\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t   pjob->ji_qs.ji_jobid, \"abort message: %s\", msg);\n\t}\n\tif (!procs) {\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t  pjob->ji_qs.ji_jobid, \"All processes to be aborted\");\n\t} else {\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t  pjob->ji_qs.ji_jobid, \"Following processes to be aborted:\");\n\t\tfor (i = 0; i < nprocs; i++) {\n\t\t\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\t   pjob->ji_qs.ji_jobid, \"namespace/rank: %s/%u\",\n\t\t\t\t   procs[i].nspace, procs[i].rank);\n\t\t}\n\t}\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"returning\");\n\treturn PMIX_ERR_NOT_IMPLEMENTED;\n}\n\n/**\n * @brief\n * At least one client called PMIx_Fence (blocking) or\n * PMIx_Fence_nb (non-blocking)\n *\n * @param[in] procs - array of client handle pointers\n * @param[in] nprocs - number of client handle pointers\n * @param[in] info - PMIx info array (parameters provided by caller)\n * @param[in] ninfo - number of info array entries\n * @param[in] data - data (string) to aggregate\n * @param[in] ndata - length of data\n * @param[in] cbfunc - optional callback function\n * @param[in] cbdata - opaque data provided to cbfunc\n *\n * @return pmix_status_t\n * @retval PMIX_SUCCESS - request in progress, cbfunc should not be called\n *                        here but will be called later by PMIx\n * @retval PMIX_OPERATION_SUCCEEDED - request immediately processed and\n *                                    successful, cbfunc will not be 
called\n * @retval PMIX_ERR_BAD_PARAM - one of the provided parameters was invalid\n * @retval PMIX_ERR_NOT_IMPLEMENTED - function not implemented\n *\n * @note\n * Required attributes:\n * PMIX_COLLECT_DATA\n * Optional attributes:\n * PMIX_TIMEOUT\n * PMIX_COLLECTIVE_ALGO\n * PMIX_COLLECTIVE_ALGO_REQD\n */\nstatic pmix_status_t\npbs_pmix_fence_nb(\n\tconst pmix_proc_t proc[],\n\tsize_t nproc,\n\tconst pmix_info_t info[],\n\tsize_t ninfo,\n\tchar *data,\n\tsize_t ndata,\n\tpmix_modex_cbfunc_t cbfunc,\n\tvoid *cbdata)\n{\n\tint i;\n\tjob *pjob;\n\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"called\");\n\tif (!proc) {\n\t\tlog_err(-1, __func__, \"pmix_proc_t parameter is NULL\");\n\t\treturn PMIX_ERROR;\n\t}\n\tif (!proc->nspace || (*proc->nspace == '\\0')) {\n\t\tlog_err(-1, __func__, \"Invalid PMIx namespace\");\n\t\treturn PMIX_ERROR;\n\t}\n\tpjob = find_job((char *) proc->nspace);\n\tif (!pjob) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"Job not found: %s\", proc->nspace);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\treturn PMIX_ERROR;\n\t}\n\tfor (i = 0; i < nproc; i++) {\n\t\tlog_eventf(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__,\n\t\t\t   \"proc[%d].nspace = %s\", i, proc[i].nspace);\n\t\tlog_eventf(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__,\n\t\t\t   \"proc[%d].rank = %u\", i, (unsigned int) proc[i].rank);\n\t}\n\tfor (i = 0; i < ninfo; i++) {\n\t\tlog_eventf(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__,\n\t\t\t   \"info[%d].key = %s\", i, info[i].key);\n\t}\n\tlog_eventf(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__,\n\t\t   \"There are %lu data entries\", ndata);\n\t/*\n\t * If MS, find/create the barrier for this job. Otherwise, send\n\t * a message to MS that a fence has been encountered. Once all\n\t * ranks have been accounted for, invoke the callback function.\n\t */\n\tlog_eventf(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__,\n\t\t   \"cbfunc %s NULL\", cbfunc ? 
\"is not\" : \"is\");\n\tlog_eventf(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__,\n\t\t   \"cbdata %s NULL\", cbdata ? \"is not\" : \"is\");\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"returning\");\n\treturn PMIX_OPERATION_SUCCEEDED;\n}\n\n/**\n * @brief\n * PMIx server on local host is requesting information from remote\n * node hosting provided proc handle\n *\n * @param[in] proc - client handle pointer\n * @param[in] info - PMIx info array (parameters provided by caller)\n * @param[in] ninfo - number of info array entries\n * @param[in] cbfunc - required callback function\n * @param[in] cbdata - opaque data provided to cbfunc\n *\n * @return pmix_status_t\n * @retval PMIX_SUCCESS - request in progress, cbfunc should not be called\n *                        here but will be called later by PMIx\n * @retval PMIX_ERR_BAD_PARAM - one of the provided parameters was invalid\n * @retval PMIX_ERR_NOT_IMPLEMENTED - function not implemented\n */\nstatic pmix_status_t\npbs_pmix_direct_modex(\n\tconst pmix_proc_t *proc,\n\tconst pmix_info_t info[],\n\tsize_t ninfo,\n\tpmix_modex_cbfunc_t cbfunc,\n\tvoid *cbdata)\n{\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"called\");\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"returning\");\n\treturn PMIX_ERR_NOT_IMPLEMENTED;\n}\n\n/**\n * @brief\n * Caller is requesting data be published per the PMIx API spec\n *\n * @param[in] proc - client handle pointer\n * @param[in] info - PMIx info array (parameters provided by caller)\n * @param[in] ninfo - number of info array entries\n * @param[in] cbfunc - optional callback function\n * @param[in] cbdata - opaque data provided to cbfunc\n *\n * @return pmix_status_t\n * @retval PMIX_SUCCESS - request in progress, cbfunc should not be called\n *                        here but will be called later by PMIx\n * @retval PMIX_OPERATION_SUCCEEDED - request immediately processed and\n *                                    successful, cbfunc will not be called\n * @retval 
PMIX_ERR_NOT_IMPLEMENTED - function not implemented\n */\nstatic pmix_status_t\npbs_pmix_publish(\n\tconst pmix_proc_t *proc,\n\tconst pmix_info_t info[],\n\tsize_t ninfo,\n\tpmix_op_cbfunc_t cbfunc,\n\tvoid *cbdata)\n{\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"called\");\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"returning\");\n\treturn PMIX_ERR_NOT_IMPLEMENTED;\n}\n\n/**\n * @brief\n * Caller is requesting published data be looked up\n *\n * @param[in] proc - client handle pointer\n * @param[in] keys - array of strings to lookup\n * @param[in] info - PMIx info array (parameters provided by caller)\n * @param[in] ninfo - number of info array entries\n * @param[in] cbfunc - optional callback function\n * @param[in] cbdata - opaque data provided to cbfunc\n *\n * @return pmix_status_t\n * @retval PMIX_SUCCESS - request in progress, cbfunc should not be called\n *                        here but will be called later by PMIx\n * @retval PMIX_OPERATION_SUCCEEDED - request immediately processed and\n *                                    successful, cbfunc will not be called\n * @retval PMIX_ERR_BAD_PARAM - one of the provided parameters was invalid\n * @retval PMIX_ERR_NOT_IMPLEMENTED - function not implemented\n */\nstatic pmix_status_t\npbs_pmix_lookup(\n\tconst pmix_proc_t *proc,\n\tchar **keys,\n\tconst pmix_info_t info[],\n\tsize_t ninfo,\n\tpmix_lookup_cbfunc_t cbfunc,\n\tvoid *cbdata)\n{\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"called\");\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"returning\");\n\treturn PMIX_ERR_NOT_IMPLEMENTED;\n}\n\n/**\n * @brief\n * Delete previously published data from the data store\n *\n * @param[in] proc - client handle pointer\n * @param[in] keys - array of strings to lookup\n * @param[in] info - PMIx info array (parameters provided by caller)\n * @param[in] ninfo - number of info array entries\n * @param[in] cbfunc - optional callback function\n * @param[in] cbdata - opaque data 
provided to cbfunc\n *\n * @return pmix_status_t\n * @retval PMIX_SUCCESS - request in progress, cbfunc should not be called\n *                        here but will be called later by PMIx\n * @retval PMIX_OPERATION_SUCCEEDED - request immediately processed and\n *                                    successful, cbfunc will not be called\n * @retval PMIX_ERR_BAD_PARAM - one of the provided parameters was invalid\n * @retval PMIX_ERR_NOT_IMPLEMENTED - function not implemented\n */\nstatic pmix_status_t\npbs_pmix_unpublish(\n\tconst pmix_proc_t *proc,\n\tchar **keys,\n\tconst pmix_info_t info[],\n\tsize_t ninfo,\n\tpmix_op_cbfunc_t cbfunc,\n\tvoid *cbdata)\n{\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"called\");\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"returning\");\n\treturn PMIX_ERR_NOT_IMPLEMENTED;\n}\n\n/**\n * @brief\n * Client called PMIx_Spawn\n *\n * @param[in] proc - client handle pointer\n * @param[in] info - PMIx info array (parameters provided by caller)\n * @param[in] ninfo - number of info array entries\n * @param[in] apps - array of application handles\n * @param[in] napps - number of application handles\n * @param[in] cbfunc - optional callback function\n * @param[in] cbdata - opaque data provided to cbfunc\n *\n * @return pmix_status_t\n * @retval PMIX_SUCCESS - request in progress, cbfunc should not be called\n *                        here but will be called later by PMIx\n * @retval PMIX_OPERATION_SUCCEEDED - request immediately processed and\n *                                    successful, cbfunc will not be called\n * @retval PMIX_ERR_BAD_PARAM - one of the provided parameters was invalid\n * @retval PMIX_ERR_NOT_IMPLEMENTED - function not implemented\n *\n * @note\n * The PMIx spec refers to the info parameter as job_info. 
PMIx refers to\n * an application or client as a job, whereas a job refers to a batch job\n * in PBS nomenclature.\n */\nstatic pmix_status_t\npbs_pmix_spawn(\n\tconst pmix_proc_t *proc,\n\tconst pmix_info_t info[],\n\tsize_t ninfo,\n\tconst pmix_app_t apps[],\n\tsize_t napps,\n\tpmix_spawn_cbfunc_t cbfunc,\n\tvoid *cbdata)\n{\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"called\");\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"returning\");\n\treturn PMIX_ERR_NOT_IMPLEMENTED;\n}\n\n/**\n * @brief\n * Record process(es) as connected\n *\n * @param[in] procs - array of client handle pointers\n * @param[in] nprocs - number of client handle pointers\n * @param[in] info - PMIx info array (parameters provided by caller)\n * @param[in] ninfo - number of info array entries\n * @param[in] cbfunc - optional callback function\n * @param[in] cbdata - opaque data provided to cbfunc\n *\n * @return pmix_status_t\n * @retval PMIX_SUCCESS - request in progress, cbfunc should not be called\n *                        here but will be called later by PMIx\n * @retval PMIX_OPERATION_SUCCEEDED - request immediately processed and\n *                                    successful, cbfunc will not be called\n * @retval PMIX_ERR_BAD_PARAM - one of the provided parameters was invalid\n * @retval PMIX_ERR_NOT_IMPLEMENTED - function not implemented\n */\nstatic pmix_status_t\npbs_pmix_connect(\n\tconst pmix_proc_t procs[],\n\tsize_t nprocs,\n\tconst pmix_info_t info[],\n\tsize_t ninfo,\n\tpmix_op_cbfunc_t cbfunc,\n\tvoid *cbdata)\n{\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"called\");\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"returning\");\n\treturn PMIX_ERR_NOT_IMPLEMENTED;\n}\n\n/**\n * @brief\n * Record process(es) as disconnected\n *\n * @param[in] procs - array of client handle pointers\n * @param[in] nprocs - number of client handle pointers\n * @param[in] info - PMIx info array (parameters provided by caller)\n * @param[in] ninfo - 
number of info array entries\n * @param[in] cbfunc - optional callback function\n * @param[in] cbdata - opaque data provided to cbfunc\n *\n * @return pmix_status_t\n * @retval PMIX_SUCCESS - request in progress, cbfunc should not be called\n *                        here but will be called later by PMIx\n * @retval PMIX_OPERATION_SUCCEEDED - request immediately processed and\n *                                    successful, cbfunc will not be called\n * @retval PMIX_ERR_BAD_PARAM - one of the provided parameters was invalid\n * @retval PMIX_ERR_NOT_IMPLEMENTED - function not implemented\n */\nstatic pmix_status_t\npbs_pmix_disconnect(\n\tconst pmix_proc_t procs[],\n\tsize_t nprocs,\n\tconst pmix_info_t info[],\n\tsize_t ninfo,\n\tpmix_op_cbfunc_t cbfunc,\n\tvoid *cbdata)\n{\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"called\");\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"returning\");\n\treturn PMIX_ERR_NOT_IMPLEMENTED;\n}\n\n/**\n * @brief\n * Register to receive event notifications\n *\n * @param[in] codes - array of status codes to register for\n * @param[in] ncodes - number of codes in the array\n * @param[in] info - PMIx info array (parameters provided by caller)\n * @param[in] ninfo - number of info array entries\n * @param[in] cbfunc - optional callback function\n * @param[in] cbdata - opaque data provided to cbfunc\n *\n * @return pmix_status_t\n * @retval PMIX_SUCCESS - request in progress, cbfunc should not be called\n *                        here but will be called later by PMIx\n * @retval PMIX_OPERATION_SUCCEEDED - request immediately processed and\n *                                    successful, cbfunc will not be called\n * @retval PMIX_ERR_BAD_PARAM - one of the provided parameters was invalid\n * @retval PMIX_ERR_NOT_IMPLEMENTED - function not implemented\n */\nstatic pmix_status_t\npbs_pmix_register_events(\n\tpmix_status_t *codes,\n\tsize_t ncodes,\n\tconst pmix_info_t info[],\n\tsize_t ninfo,\n\tpmix_op_cbfunc_t 
cbfunc,\n\tvoid *cbdata)\n{\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"called\");\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"returning\");\n\treturn PMIX_ERR_NOT_IMPLEMENTED;\n}\n\n/**\n * @brief\n * Deregister from event notifications\n *\n * @param[in] codes - array of status codes to deregister\n * @param[in] ncodes - number of codes in the array\n * @param[in] cbfunc - optional callback function\n * @param[in] cbdata - opaque data provided to cbfunc\n *\n * @return pmix_status_t\n * @retval PMIX_SUCCESS - request in progress, cbfunc should not be called\n *                        here but will be called later by PMIx\n * @retval PMIX_OPERATION_SUCCEEDED - request immediately processed and\n *                                    successful, cbfunc will not be called\n * @retval PMIX_ERR_BAD_PARAM - one of the provided parameters was invalid\n * @retval PMIX_ERR_NOT_IMPLEMENTED - function not implemented\n */\nstatic pmix_status_t\npbs_pmix_deregister_events(\n\tpmix_status_t *codes,\n\tsize_t ncodes,\n\tpmix_op_cbfunc_t cbfunc,\n\tvoid *cbdata)\n{\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"called\");\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"returning\");\n\treturn PMIX_ERR_NOT_IMPLEMENTED;\n}\n\n/**\n * @brief\n * Initialize the PMIx server\n *\n * @param[in] name - name of daemon (used for logging)\n *\n * @return void\n *\n * @note\n * The PMIx library spawns threads from pbs_mom to act as the PMIx server\n * for applications (PMIx clients) assigned to this vnode. 
The pbs_mom acts\n * as the PMIx server even though all it does is call PMIx library functions.\n * It also means that if pbs_mom exits, any PMIx clients (PMIx-enabled\n * applications) will lose their local server and fail.\n */\nvoid\npbs_pmix_server_init(char *name)\n{\n\tpmix_status_t pstat;\n\tpmix_server_module_t pbs_pmix_server_module = {\n\t\t/* v1x interfaces */\n\t\t.client_connected = pbs_pmix_client_connected,\n\t\t.client_finalized = pbs_pmix_client_finalized,\n\t\t.abort = pbs_pmix_abort,\n\t\t.fence_nb = pbs_pmix_fence_nb,\n\t\t.direct_modex = pbs_pmix_direct_modex,\n\t\t.publish = pbs_pmix_publish,\n\t\t.lookup = pbs_pmix_lookup,\n\t\t.unpublish = pbs_pmix_unpublish,\n\t\t.spawn = pbs_pmix_spawn,\n\t\t.connect = pbs_pmix_connect,\n\t\t.disconnect = pbs_pmix_disconnect,\n\t\t.register_events = pbs_pmix_register_events,\n\t\t.deregister_events = pbs_pmix_deregister_events,\n\t\t.listener = NULL\n#if PMIX_VERSION_MAJOR > 1\n\t\t,\n\t\t/* v2x interfaces */\n\t\t.notify_event = NULL,\n\t\t.query = NULL,\n\t\t.tool_connected = NULL,\n\t\t.log = NULL,\n\t\t.allocate = NULL,\n\t\t.job_control = NULL,\n\t\t.monitor = NULL\n#endif\n#if PMIX_VERSION_MAJOR > 2\n\t\t,\n\t\t/* v3x interfaces */\n\t\t.get_credential = NULL,\n\t\t.validate_credential = NULL,\n\t\t.iof_pull = NULL,\n\t\t.push_stdin = NULL\n#endif\n#if PMIX_VERSION_MAJOR > 3\n\t\t,\n\t\t/* v4x interfaces */\n\t\t.group = NULL\n#endif\n\t};\n\n\tpstat = PMIx_server_init(&pbs_pmix_server_module, NULL, 0);\n\tif (pstat != PMIX_SUCCESS) {\n\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER,\n\t\t\t   LOG_ERR, name,\n\t\t\t   \"Could not initialize PMIx server: %s\",\n\t\t\t   PMIx_Error_string(pstat));\n\t} else {\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_DEBUG, name, \"PMIx server initialized\");\n\t}\n}\n\n/**\n * @brief\n * Generic callback used to wake up a locked thread\n *\n * @param[in] status - status of locked thread\n * @param[in] cbdata - callback data 
(pbs_pmix_lock_t *)\n *\n * @return void\n */\nstatic void\npbs_pmix_wait_cb(pmix_status_t status, void *cbdata)\n{\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"called\");\n\tif (!cbdata) {\n\t\tlog_err(-1, __func__, \"cbdata may not be NULL, returning\");\n\t\treturn;\n\t}\n\tlog_eventf(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__,\n\t\t   \"Setting thread status to %s\", PMIx_Error_string(status));\n\t((pbs_pmix_lock_t *) cbdata)->status = status;\n\tPBS_PMIX_WAKEUP_THREAD((pbs_pmix_lock_t *) cbdata);\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"returning\");\n}\n\n/**\n * @brief\n * Register the PMIx client and adjust the environment so the child will\n * be able to phone home\n *\n * @param[in] pjob - pointer to job structure\n * @param[in] tvnodeid - the global rank of the task\n * @param[in/out] envpp - environment array to modify\n *\n * @return void\n */\nvoid\npbs_pmix_register_client(job *pjob, int tvnodeid, char ***envpp)\n{\n\tchar **ep;\n\tpmix_status_t pstat;\n\tpmix_proc_t pproc;\n\tint before, after;\n\tpbs_pmix_lock_t pmix_lock;\n\n\tif (!pjob) {\n\t\tlog_err(-1, __func__, \"Invalid job pointer\");\n\t\treturn;\n\t}\n\tif (!envpp) {\n\t\tlog_err(-1, __func__, \"Invalid environment pointer\");\n\t\treturn;\n\t}\n\t/* Register the PMIx client */\n\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t   pjob->ji_qs.ji_jobid,\n\t\t   \"Registering PMIx client %d\", tvnodeid);\n\t/* Rank is based on tvnodeid */\n\tPMIX_LOAD_PROCID(&pproc, pjob->ji_qs.ji_jobid, tvnodeid);\n\tPBS_PMIX_CONSTRUCT_LOCK(&pmix_lock);\n\tpstat = PMIx_server_register_client(&pproc,\n\t\t\t\t\t    pjob->ji_qs.ji_un.ji_momt.ji_exuid,\n\t\t\t\t\t    pjob->ji_qs.ji_un.ji_momt.ji_exgid,\n\t\t\t\t\t    NULL, pbs_pmix_wait_cb, (void *) &pmix_lock);\n\tPBS_PMIX_WAIT_THREAD(&pmix_lock);\n\tPBS_PMIX_DESTRUCT_LOCK(&pmix_lock);\n\tif (pstat != PMIX_SUCCESS) {\n\t\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t   pjob->ji_qs.ji_jobid,\n\t\t\t   
\"Failed to register PMIx client: %s\",\n\t\t\t   PMIx_Error_string(pstat));\n\t\treturn;\n\t}\n\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t   pjob->ji_qs.ji_jobid,\n\t\t   \"PMIx client %d registered\", tvnodeid);\n\t/* Setup for the PMIx fork */\n\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t   pjob->ji_qs.ji_jobid,\n\t\t   \"Setting up PMIx fork for client %d\", tvnodeid);\n\t/* Allow PMIx to add required environment variables */\n\tfor (before = 0, ep = *envpp; ep && *ep; before++, ep++) {\n\t}\n\tpstat = PMIx_server_setup_fork(&pproc, envpp);\n\tif (pstat != PMIX_SUCCESS) {\n\t\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t   pjob->ji_qs.ji_jobid,\n\t\t\t   \"Failed to setup PMIx server fork: %s\",\n\t\t\t   PMIx_Error_string(pstat));\n\t\treturn;\n\t}\n\tfor (after = 0, ep = *envpp; ep && *ep; after++, ep++) {\n\t}\n\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t   pjob->ji_qs.ji_jobid,\n\t\t   \"PMIx server setup fork added %d env var(s)\",\n\t\t   (after - before));\n\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t  pjob->ji_qs.ji_jobid, \"PMIx server setup fork complete\");\n}\n\n/**\n * @brief\n * Calculate the number of characters required to print an integer\n *\n * @param[in] val - integer value\n *\n * @return int\n * @retval length of string to print provided integer\n */\nstatic int\nintlen(int val)\n{\n\tint i = 1;\n\n\tif (val < 0) {\n\t\ti++;\n\t\tval *= -1;\n\t}\n\tfor (; val > 9; i++)\n\t\tval /= 10;\n\treturn i;\n}\n\n/*\n * @brief\n * Construct a map of the vnodes and ranks that will be provided to PMIx.\n *\n * @param[in] pjob - pointer to job structure\n * @param[out] nodelist - comma separated list of node names\n * @param[out] nodect - number of nodes in list\n * @param[out] nodeid - numeric rank of the local node\n * @param[out] ppnlist - list of ranks on all nodes\n * @param[out] ppnlocal - list of ranks on this node\n *\n * @return int\n * @retval 0 - 
success\n * @retval -1 - failure\n *\n * @note\n * The node list looks like: host0,host1,...\n * There are no duplicates in the node list\n * The ppn list looks like: 0,100,200;1,101,201;...\n * Order matches that of the node list with same number of entries\n * The ppnlocal list is the list of ranks on the local node\n */\nstatic int\npbs_pmix_gen_map(\n\tjob *pjob,\n\tchar **nodelist,\n\tuint32_t *nodect,\n\tuint32_t *nodeid,\n\tchar **ppnlist,\n\tchar **ppnlocal)\n{\n\ttypedef struct {\n\t\tvmpiprocs *pmpiproc;\n\t\tint nextrank;\n\t\tint locrank;\n\t} pbs_pmix_map_t;\n\tpbs_pmix_map_t *map;\n\tint i, j, ilen, jlen, nodelen, ppnlen, ppnloclen, msnlen, locrank;\n\tchar *iname, *jname, *pn, *pp, *ploc, *pdot;\n\n\tif (nodelist)\n\t\t*nodelist = NULL;\n\tif (nodeid)\n\t\t*nodeid = 0;\n\tif (nodect)\n\t\t*nodect = 0;\n\tif (ppnlist)\n\t\t*ppnlist = NULL;\n\tif (ppnlocal)\n\t\t*ppnlocal = NULL;\n\tif (!nodelist || !nodeid || !nodect || !ppnlist || !ppnlocal)\n\t\treturn -1;\n\tif (!pjob)\n\t\treturn -1;\n\tif (pjob->ji_numvnod < 1)\n\t\treturn -1;\n\tmap = calloc(pjob->ji_numvnod, sizeof(pbs_pmix_map_t));\n\tif (!map)\n\t\treturn -1;\n\tfor (i = 0; i < pjob->ji_numvnod; i++) {\n\t\tmap[i].pmpiproc = &pjob->ji_vnods[i];\n\t\tmap[i].nextrank = -1;\n\t\tmap[i].locrank = -1;\n\t}\n\tmsnlen = strlen(mom_short_name);\n\t/*\n\t * The following loop calculates the length of the node and\n\t * PPN lists. It also sets up the map array so that it will\n\t * be easier to construct the lists.\n\t */\n\tfor (nodelen = ppnlen = ppnloclen = i = 0; i < pjob->ji_numvnod; i++) {\n\t\tif (map[i].locrank >= 0)\n\t\t\tcontinue;\n\t\tmap[i].locrank = locrank = 0;\n\t\tiname = map[i].pmpiproc->vn_hname ? 
map[i].pmpiproc->vn_hname : map[i].pmpiproc->vn_host->hn_host;\n\t\tif ((pdot = strchr(iname, '.')) != NULL)\n\t\t\tilen = pdot - iname;\n\t\telse\n\t\t\tilen = strlen(iname);\n\t\tnodelen += ilen + 1;\n\t\tppnlen += intlen(i) + 1;\n\t\tif (ilen == msnlen) {\n\t\t\tif (strncmp(mom_short_name, iname, ilen) == 0)\n\t\t\t\tppnloclen += intlen(i) + 1;\n\t\t}\n\t\t/* Add additional ranks on this node */\n\t\tfor (j = i + 1; j < pjob->ji_numvnod; j++) {\n\t\t\tjname = map[j].pmpiproc->vn_hname ? map[j].pmpiproc->vn_hname : map[j].pmpiproc->vn_host->hn_host;\n\t\t\tif ((pdot = strchr(jname, '.')) != NULL)\n\t\t\t\tjlen = pdot - jname;\n\t\t\telse\n\t\t\t\tjlen = strlen(jname);\n\t\t\tif (ilen == jlen) {\n\t\t\t\tif (strncmp(iname, jname, jlen) == 0) {\n\t\t\t\t\tmap[i].nextrank = j;\n\t\t\t\t\tmap[j].locrank = ++locrank;\n\t\t\t\t\tppnlen += intlen(j) + 1;\n\t\t\t\t\tif (jlen == msnlen) {\n\t\t\t\t\t\tif (strncmp(mom_short_name, jname, jlen) == 0) {\n\t\t\t\t\t\t\tppnloclen += intlen(map[j].locrank) + 1;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\t/* Perform some sanity checks */\n\tif (nodelen < 1) {\n\t\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t   pjob->ji_qs.ji_jobid,\n\t\t\t   \"%s: zero length node list\", __func__);\n\t\tfree(map);\n\t\treturn -1;\n\t}\n\tif (ppnlen < 1) {\n\t\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t   pjob->ji_qs.ji_jobid,\n\t\t\t   \"%s: zero length ppn list\", __func__);\n\t\tfree(map);\n\t\treturn -1;\n\t}\n\tif (ppnloclen < 1) {\n\t\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t   pjob->ji_qs.ji_jobid,\n\t\t\t   \"%s: zero length local ppn list\", __func__);\n\t\tfree(map);\n\t\treturn -1;\n\t}\n\t/* Allocate the memory for the lists themselves */\n\t*nodelist = malloc(nodelen);\n\tif (!*nodelist) {\n\t\tfree(map);\n\t\treturn -1;\n\t}\n\t*ppnlist = malloc(ppnlen);\n\tif (!*ppnlist) {\n\t\tfree(map);\n\t\tfree(*nodelist);\n\t\t*nodelist = NULL;\n\t\treturn 
-1;\n\t}\n\t*ppnlocal = malloc(ppnloclen);\n\tif (!*ppnlocal) {\n\t\tfree(map);\n\t\tfree(*nodelist);\n\t\t*nodelist = NULL;\n\t\tfree(*ppnlist);\n\t\t*ppnlist = NULL;\n\t\treturn -1;\n\t}\n\t/* Construct the node and PPN lists using the map array */\n\tpn = *nodelist;\n\t*pn = '\\0';\n\tpp = *ppnlist;\n\t*pp = '\\0';\n\tploc = *ppnlocal;\n\t*ploc = '\\0';\n\tfor (i = 0, j = 0; i < pjob->ji_numvnod; i++) {\n\t\tbool localnode;\n\t\tint next;\n\n\t\tif (map[i].locrank != 0)\n\t\t\tcontinue;\n\t\tiname = map[i].pmpiproc->vn_hname ? map[i].pmpiproc->vn_hname : map[i].pmpiproc->vn_host->hn_host;\n\t\tif ((pdot = strchr(iname, '.')) != NULL)\n\t\t\tilen = pdot - iname;\n\t\telse\n\t\t\tilen = strlen(iname);\n\t\t/* Append to the node list */\n\t\tif (pn != *nodelist) {\n\t\t\tsprintf(pn, \",\");\n\t\t\tpn++;\n\t\t}\n\t\tsnprintf(pn, ilen + 1, \"%s\", iname);\n\t\tpn += ilen;\n\t\t(*nodect)++;\n\t\t/* Determine if this rank is on the local node */\n\t\tlocalnode = false;\n\t\tif (ilen == msnlen) {\n\t\t\tif (strncmp(mom_short_name, iname, ilen) == 0)\n\t\t\t\tlocalnode = true;\n\t\t}\n\t\tif (localnode)\n\t\t\t*nodeid = j;\n\t\t/* Append to the PPN and local PPN lists */\n\t\tif (pp != *ppnlist) {\n\t\t\tsprintf(pp, \";\");\n\t\t\tpp++;\n\t\t}\n\t\tsprintf(pp, \"%d\", i);\n\t\tpp += intlen(i);\n\t\tif (localnode) {\n\t\t\tsprintf(ploc, \"%d\", i);\n\t\t\tploc += intlen(i);\n\t\t}\n\t\tfor (next = map[i].nextrank; next >= 0; next = map[next].nextrank) {\n\t\t\tsprintf(pp, \",%d\", next);\n\t\t\tpp += intlen(next) + 1;\n\t\t\tif (localnode) {\n\t\t\t\tsprintf(ploc, \",%d\", next);\n\t\t\t\tploc += intlen(next) + 1;\n\t\t\t}\n\t\t}\n\t\tj++;\n\t}\n\tfree(map);\n\treturn 0;\n}\n\n/**\n * @brief\n * Register the PMIx namespace on the local node\n *\n * @param[in] pjob - pointer to job structure\n *\n * @return void\n *\n * @note\n * Populate a PMIx info array and pass it to PMIx_server_register_nspace().\n * This function relies on pbs_pmix_gen_map() to construct the data in 
the\n * info array.\n */\nstatic void\npbs_pmix_register_namespace(job *pjob)\n{\n\tpmix_info_t *pinfo;\n\tpmix_status_t pstat;\n\tpmix_rank_t rank;\n\tpbs_pmix_lock_t pmix_lock;\n\tchar *pmix_node_list = NULL;\n\tchar *pmix_ppn_list = NULL;\n\tchar *pmix_ppn_local = NULL;\n\tchar *pmix_node_regex;\n\tchar *pmix_ppn_regex;\n\tint loc_size;\n\tint msnlen;\n\tint rc;\n\tint i, n, ninfo;\n\tuint32_t ui, pmix_node_ct, pmix_node_idx;\n\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"called\");\n\tif (!pjob) {\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t  NULL, \"Invalid job pointer\");\n\t\treturn;\n\t}\n\trc = pbs_pmix_gen_map(pjob, &pmix_node_list,\n\t\t\t      &pmix_node_ct, &pmix_node_idx,\n\t\t\t      &pmix_ppn_list, &pmix_ppn_local);\n\tif (rc != 0) {\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t  \"Failed to generate PMIx mapping\");\n\t\treturn;\n\t}\n\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t   pjob->ji_qs.ji_jobid,\n\t\t   \"PMIX nodes: %s\", pmix_node_list);\n\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t   pjob->ji_qs.ji_jobid,\n\t\t   \"PMIX ppn: %s\", pmix_ppn_list);\n\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t   pjob->ji_qs.ji_jobid,\n\t\t   \"PMIX local ppn: %s\", pmix_ppn_local);\n\t/* Generate the regex */\n\tPMIx_generate_regex(pmix_node_list, &pmix_node_regex);\n\tPMIx_generate_ppn(pmix_ppn_list, &pmix_ppn_regex);\n\tmsnlen = strlen(mom_short_name);\n\t/* Count the number of ranks assigned to this node */\n\tfor (loc_size = i = 0; i < pjob->ji_numvnod; i++) {\n\t\tchar *pdot, *hname;\n\t\tint hlen;\n\n\t\thname = pjob->ji_vnods[i].vn_hname ? 
pjob->ji_vnods[i].vn_hname : pjob->ji_vnods[i].vn_host->hn_host;\n\t\tif (!hname || (*hname == '\\0'))\n\t\t\tcontinue;\n\t\tpdot = strchr(hname, '.');\n\t\tif (pdot)\n\t\t\thlen = pdot - hname;\n\t\telse\n\t\t\thlen = strlen(hname);\n\t\tif (hlen != msnlen)\n\t\t\tcontinue;\n\t\tif (strncmp(mom_short_name, hname, hlen) != 0)\n\t\t\tcontinue;\n\t\tloc_size++;\n\t}\n\tfree(pmix_node_list);\n\tpmix_node_list = NULL;\n\tfree(pmix_ppn_list);\n\tpmix_ppn_list = NULL;\n\tninfo = 14;\n\tPMIX_INFO_CREATE(pinfo, ninfo);\n\tn = 0;\n\t/*\n\t * INFO #1: Universe size\n\t */\n\tui = pjob->ji_numvnod;\n\t/*\n\t * Do not increment n in the PMIX_INFO_LOAD macro call!\n\t * The macro references the first parameter multiple\n\t * times and would thereby increment it multiple times.\n\t */\n\tPMIX_INFO_LOAD(&pinfo[n], PMIX_UNIV_SIZE, &ui, PMIX_UINT32);\n\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t   pjob->ji_qs.ji_jobid,\n\t\t   \"%d. PMIX_UNIV_SIZE: %u\", ++n, ui);\n\t/*\n\t * INFO #2: Maximum number of processes the user is allowed\n\t * to start within this allocation - usually the same as\n\t * univ_size\n\t */\n\tui = pjob->ji_numvnod;\n\tPMIX_INFO_LOAD(&pinfo[n], PMIX_MAX_PROCS, &ui, PMIX_UINT32);\n\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t   pjob->ji_qs.ji_jobid,\n\t\t   \"%d. PMIX_MAX_PROCS: %u\", ++n, ui);\n\t/*\n\t * INFO #3: Number of processes being spawned in this job\n\t * Note that job refers to a PMIx job (i.e. application)\n\t * Note that this again is a value PMIx could compute from\n\t * the proc_map\n\t */\n\tui = pjob->ji_numvnod;\n\tPMIX_INFO_LOAD(&pinfo[n], PMIX_JOB_SIZE, &ui, PMIX_UINT32);\n\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t   pjob->ji_qs.ji_jobid,\n\t\t   \"%d. 
PMIX_JOB_SIZE: %u\", ++n, ui);\n\t/*\n\t * INFO #4: Node map\n\t */\n\tPMIX_INFO_LOAD(&pinfo[n], PMIX_NODE_MAP, pmix_node_regex, PMIX_STRING);\n\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t   pjob->ji_qs.ji_jobid,\n\t\t   \"%d. PMIX_NODE_MAP: %s\", ++n, pmix_node_regex);\n\t/*\n\t * INFO #5: Process map\n\t */\n\tPMIX_INFO_LOAD(&pinfo[n], PMIX_PROC_MAP, pmix_ppn_regex, PMIX_STRING);\n\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t   pjob->ji_qs.ji_jobid,\n\t\t   \"%d. PMIX_PROC_MAP: %s\", ++n, pmix_ppn_regex);\n\t/*\n\t * INFO #6: This process was not created by PMIx_Spawn()\n\t */\n\tui = 0;\n\tPMIX_INFO_LOAD(&pinfo[n], PMIX_SPAWNED, &ui, PMIX_UINT32);\n\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t   pjob->ji_qs.ji_jobid,\n\t\t   \"%d. PMIX_SPAWNED: %u\", ++n, ui);\n\t/*\n\t * INFO #7: Number of local ranks for this application\n\t * Note: This could be smaller than the number allocated\n\t *       if the application is not utilizing them all.\n\t */\n\tui = loc_size;\n\tPMIX_INFO_LOAD(&pinfo[n], PMIX_LOCAL_SIZE, &ui, PMIX_UINT32);\n\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t   pjob->ji_qs.ji_jobid,\n\t\t   \"%d. PMIX_LOCAL_SIZE: %u\", ++n, ui);\n\t/*\n\t * INFO #8: Number of local ranks for this allocation\n\t */\n\tui = loc_size;\n\tPMIX_INFO_LOAD(&pinfo[n], PMIX_NODE_SIZE, &ui, PMIX_UINT32);\n\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t   pjob->ji_qs.ji_jobid,\n\t\t   \"%d. PMIX_NODE_SIZE: %u\", ++n, ui);\n\t/*\n\t * INFO #9: Number of nodes for the entire job\n\t */\n\tui = pmix_node_ct;\n\tPMIX_INFO_LOAD(&pinfo[n], PMIX_NUM_NODES, &ui, PMIX_UINT32);\n\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t   pjob->ji_qs.ji_jobid,\n\t\t   \"%d. 
PMIX_NUM_NODES: %u\", ++n, ui);\n\t/*\n\t * INFO #10: Comma delimited list of ranks on local node\n\t */\n\tPMIX_INFO_LOAD(&pinfo[n], PMIX_LOCAL_PEERS, pmix_ppn_local, PMIX_STRING);\n\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t   pjob->ji_qs.ji_jobid,\n\t\t   \"%d. PMIX_LOCAL_PEERS: %s\", ++n, pmix_ppn_local);\n\t/*\n\t * INFO #11: Process leader on local node (first rank)\n\t */\n\tif (sscanf(pmix_ppn_local, \"%u\", &rank) != 1) {\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t  \"Invalid rank in local ppn list\");\n\t\t/* Punt and set it to zero */\n\t\trank = 0;\n\t}\n\tPMIX_INFO_LOAD(&pinfo[n], PMIX_LOCALLDR, &rank, PMIX_PROC_RANK);\n\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t   pjob->ji_qs.ji_jobid,\n\t\t   \"%d. PMIX_LOCALLDR: %u\", ++n, rank);\n\tfree(pmix_ppn_local);\n\tpmix_ppn_local = NULL;\n\t/*\n\t * INFO #12: Index of the local node in the node map\n\t */\n\tui = pmix_node_idx;\n\tPMIX_INFO_LOAD(&pinfo[n], PMIX_NODEID, &ui, PMIX_UINT32);\n\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t   pjob->ji_qs.ji_jobid,\n\t\t   \"%d. PMIX_NODEID: %u\", ++n, ui);\n\t/*\n\t * INFO #13: The job ID string\n\t */\n\tPMIX_INFO_LOAD(&pinfo[n], PMIX_JOBID, pjob->ji_qs.ji_jobid, PMIX_STRING);\n\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t   pjob->ji_qs.ji_jobid,\n\t\t   \"%d. PMIX_JOBID: %s\", ++n, pjob->ji_qs.ji_jobid);\n\t/*\n\t * INFO #14: Number of different executables in this PMIx job\n\t */\n\tui = 1;\n\tPMIX_INFO_LOAD(&pinfo[n], PMIX_JOB_NUM_APPS, &ui, PMIX_UINT32);\n\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t   pjob->ji_qs.ji_jobid,\n\t\t   \"%d. 
PMIX_JOB_NUM_APPS: %u\", ++n, ui);\n\t/* Grab the lock and register the PMIx namespace */\n\tPBS_PMIX_CONSTRUCT_LOCK(&pmix_lock);\n\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t  pjob->ji_qs.ji_jobid, \"Registering PMIx namespace\");\n\tpstat = PMIx_server_register_nspace(pjob->ji_qs.ji_jobid, loc_size,\n\t\t\t\t\t    pinfo, ninfo, pbs_pmix_wait_cb, (void *) &pmix_lock);\n\tPBS_PMIX_WAIT_THREAD(&pmix_lock);\n\tPBS_PMIX_DESTRUCT_LOCK(&pmix_lock);\n\tPMIX_INFO_FREE(pinfo, ninfo);\n\tif (pstat != PMIX_SUCCESS) {\n\t\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t   pjob->ji_qs.ji_jobid,\n\t\t\t   \"Failed to register PMIx namespace: %s\",\n\t\t\t   PMIx_Error_string(pstat));\n\t} else {\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid, \"PMIx namespace registered\");\n\t}\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"returning\");\n}\n\n/*\n * @brief\n * Deregister the PMIx namespace for a job on the local node\n *\n * @param[in] pjob - pointer to job structure\n *\n * @return void\n */\nstatic void\npbs_pmix_deregister_namespace(job *pjob)\n{\n\tpbs_pmix_lock_t pmix_lock;\n\n\t/* Grab the lock and deregister the PMIx namespace */\n\tPBS_PMIX_CONSTRUCT_LOCK(&pmix_lock);\n\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t  pjob->ji_qs.ji_jobid, \"Deregistering PMIx namespace\");\n\tPMIx_server_deregister_nspace(pjob->ji_qs.ji_jobid,\n\t\t\t\t      pbs_pmix_wait_cb, (void *) &pmix_lock);\n\tPBS_PMIX_WAIT_THREAD(&pmix_lock);\n\tPBS_PMIX_DESTRUCT_LOCK(&pmix_lock);\n}\n\n/*\n * @brief\n * Extra processing required when spawning a TM task with PMIx enabled\n *\n * @param[in] pjob - pointer to job structure\n * @param[in] pnode - pointer to node entry for local node\n *\n * @return int\n * @retval 0 - success\n * @retval -1 - failure\n */\nint\npbs_pmix_job_join_extra(job *pjob, hnodent *pnode)\n{\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, 
\"called\");\n\tpbs_pmix_register_namespace(pjob);\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"returning\");\n\treturn 0;\n}\n\n/*\n * @brief\n * Extra processing required when reaping a TM task with PMIx enabled\n *\n * @param[in] pjob - pointer to job structure\n *\n * @return int\n * @retval 0 - success\n * @retval -1 - failure\n */\nint\npbs_pmix_job_clean_extra(job *pjob)\n{\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"called\");\n\tpbs_pmix_deregister_namespace(pjob);\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, \"returning\");\n\treturn 0;\n}\n\n#endif /* PMIX */\n"
  },
  {
    "path": "src/resmom/mom_pmix.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tmom_pmix.h\n */\n\n#ifndef _MOM_PMIX_H\n#define _MOM_PMIX_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n#include <pbs_config.h>\n\n#ifdef PMIX\n\n#include <pmix_server.h>\n#include \"job.h\"\n\nextern void\npbs_pmix_server_init(char *);\n\nextern void\npbs_pmix_register_client(job *, int, char ***);\n\nextern void\npbs_pmix_notify_exit(job *pjob, int exitstat, char *msg);\n\nextern int\npbs_pmix_job_join_extra(job *, hnodent *);\n\nextern int\npbs_pmix_job_clean_extra(job *);\n\n#endif /* PMIX */\n\n#ifdef __cplusplus\n}\n#endif\n#endif /* _MOM_PMIX_H */\n"
  },
  {
    "path": "src/resmom/mom_server.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tmom_server.c\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <unistd.h>\n#include <netdb.h>\n#include <netinet/in.h>\n#include <sys/param.h>\n#include <sys/times.h>\n#include <sys/time.h>\n\n#include <assert.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <ctype.h>\n#include <errno.h>\n#include <time.h>\n#include <limits.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <signal.h>\n\n#include \"portability.h\"\n#include \"list_link.h\"\n#include \"pbs_ifl.h\"\n#include \"server_limits.h\"\n#include \"pbs_error.h\"\n#include \"attribute.h\"\n#include \"log.h\"\n#include \"net_connect.h\"\n#include \"tpp.h\"\n#include \"dis.h\"\n#include \"pbs_nodes.h\"\n#include \"placementsets.h\"\n#include \"resmon.h\"\n#include \"mom_server.h\"\n#include \"svrfunc.h\"\n#include \"server_limits.h\"\n#include \"credential.h\"\n#include \"ticket.h\"\n#include \"libpbs.h\"\n#include \"batch_request.h\"\n#include \"pbs_version.h\"\n#define MOM_MACH 1 /* don't include the dependent header */\n#include \"mom_func.h\"\n#include \"mom_hook_func.h\"\n\n/* Global Data Items */\nextern u_Long av_phy_mem; /* physical memory in KB */\nextern unsigned int default_server_port;\nextern char mom_host[];\nextern int mom_run_state;\nextern char *msg_daemonname;\nextern int 
num_acpus;\nextern int num_pcpus;\nextern char *path_jobs;\nextern int pbs_errno;\nextern int next_sample_time;\nextern int min_check_poll;\nextern unsigned int pbs_mom_port;\nextern unsigned int pbs_rm_port;\nextern unsigned int pbs_tm_port;\nextern time_t time_now;\nextern int internal_state;\nextern int internal_state_update;\nextern int cycle_harvester;\nextern char *mom_home;\nextern unsigned long hook_action_id;\nextern pbs_list_head svr_alljobs;\nextern pbs_list_head svr_hook_job_actions;\nextern pbs_list_head svr_hook_vnl_actions;\nextern pbs_list_head svr_allhooks;\nextern int mom_recvd_ip_cluster_addrs;\n\nextern int server_stream;\nextern int enable_exechost2;\nextern vnl_t *vnlp;\t      /* vnode list */\nextern vnl_t *vnlp_from_hook; /* vnode list updates from hook */\nextern char *msg_request;\n\nextern void req_commit(struct batch_request *preq);\nextern void req_quejob(struct batch_request *preq);\nextern void req_jobscript(struct batch_request *preq);\nextern void mom_vnlp_report(vnl_t *vnl, char *header);\nextern char *path_hooks;\nextern unsigned long hooks_rescdef_checksum;\nextern int report_hook_checksums;\n\n/*\n * Tree search generalized from Knuth (6.2.2) Algorithm T just like\n * the AT&T man page says.\n *\n * The node_t structure is for internal use only, lint doesn't grok it.\n *\n * Written by reading the System V Interface Definition, not the code.\n *\n * Totally public domain.\n */\n/*LINTLIBRARY*/\n\n/*\n **\tModified by Tom Proett <proett@nas.nasa.gov> for PBS.\n */\n\ntypedef struct node_t {\n\tu_long key;\n\tstruct node_t *left, *right;\n} node;\nnode *okclients = NULL; /* tree of ip addrs */\n\n/**\n * @brief\n *\tfind value in tree.\n *\n * @param[in]  key - value to be found in tree\n *\n * @return \terror code\n * @retval  \t1     if found,\n * @retval \t0     if not\n */\nint\n\taddrfind(const u_long key)\n{\n\tnode **rootp = &okclients; /* address of tree root */\n\n#ifdef NAS_CLUSTER /* localmod 024 */\n\treturn 1;\n#endif 
/* localmod 024 */\n\n\twhile (*rootp != NULL) {\t\t\t\t  /* Knuth's T1: */\n\t\tif (key == (*rootp)->key)\t\t\t  /* T2: */\n\t\t\treturn 1;\t\t\t\t  /* we found it! */\n\t\trootp = (key < (*rootp)->key) ? &(*rootp)->left : /* T3: follow left branch */\n\t\t\t\t&(*rootp)->right;\t\t  /* T4: follow right branch */\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \tinsert value into tree\n *\n * @param[in] key - value to be inserted\n *\n * @return Void\n *\n */\nvoid\naddrinsert(const u_long key)\n{\n\tregister node *q;\n\tnode **rootp = &okclients; /* address of tree root */\n\n\twhile (*rootp != NULL) {\t\t\t\t  /* Knuth's T1: */\n\t\tif (key == (*rootp)->key)\t\t\t  /* T2: */\n\t\t\treturn;\t\t\t\t\t  /* we found it! */\n\t\trootp = (key < (*rootp)->key) ? &(*rootp)->left : /* T3: follow left branch */\n\t\t\t\t&(*rootp)->right;\t\t  /* T4: follow right branch */\n\t}\n\tq = (node *) malloc(sizeof(node)); /* T5: key not found */\n\tif (q != NULL) {\t\t   /* make new node */\n\t\t*rootp = q;\t\t   /* link new node to old */\n\t\tq->key = key;\t\t   /* initialize new node */\n\t\tq->left = q->right = NULL;\n\t\tsprintf(log_buffer,\n\t\t\t\"Adding IP address %ld.%ld.%ld.%ld as authorized\",\n\t\t\t(key & 0xff000000) >> 24,\n\t\t\t(key & 0x00ff0000) >> 16,\n\t\t\t(key & 0x0000ff00) >> 8,\n\t\t\t(key & 0x000000ff));\n#ifdef NAS /* localmod 094 */\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SERVER, LOG_DEBUG,\n\t\t\t  msg_daemonname, log_buffer);\n#else\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_DEBUG,\n\t\t\t  msg_daemonname, log_buffer);\n#endif /* localmod 094 */\n\n\t} else\n\t\tlog_err(errno, __func__, \"Failed to allocate memory for new node in tree\");\n\treturn;\n}\n\n/**\n * @brief\n *\tfree the value in tree\n *\n * @param[in] rootp - pointer to root node\n *\n * @return Void\n *\n */\nvoid\naddrfree(node **rootp)\n{\n\tif (rootp == NULL || *rootp == 
NULL)\n\t\treturn;\n\taddrfree(&(*rootp)->left);\n\taddrfree(&(*rootp)->right);\n\tfree(*rootp);\n\t*rootp = NULL;\n}\n\n/**\n * @brief\n *\tfree_vnodemap - free the mominfo_array entries and mommap_array\n *\n * @return Void\n *\n */\nstatic void\nfree_vnodemap(void)\n{\n\tint i;\n\n\tif (mominfo_array) {\n\t\tfor (i = 0; i < mominfo_array_size; ++i) {\n\t\t\tif (mominfo_array[i]) {\n\t\t\t\tdelete_mom_entry(mominfo_array[i]);\n\t\t\t\tmominfo_array[i] = NULL;\n\t\t\t}\n\t\t}\n\t}\n\n\tif (mommap_array) {\n\t\tfor (i = 0; i < mommap_array_size; ++i) {\n\t\t\tif (mommap_array[i]) {\n\t\t\t\tdelete_momvmap_entry(mommap_array[i]);\n\t\t\t\tmommap_array[i] = NULL;\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\treply to server\n *\n * @param[in] stream - connection stream\n * @param[in] combine_msg - combine message in the caller\n *\n * @return int\n * @retval\t0: success\n * @retval\t!0: error code\n *\n */\nstatic int\nregistermom(int stream, int combine_msg)\n{\n\tint count = 0;\n\tint ret;\n\tjob *pjob;\n\n\t/* how many jobs are present */\n\tfor (pjob = (job *) GET_NEXT(svr_alljobs);\n\t     pjob; pjob = (job *) GET_NEXT(pjob->ji_alljobs)) {\n\t\t++count;\n\t}\n\n\t/* Now that all of the options data items are set, send */\n\t/* the option set, followed by the optional data if any */\n\t/* Please note,  the options MUST be sent in the order  */\n\t/* that they are defined, least significant bit to most */\n\n\tif (!combine_msg)\n\t\tif ((ret = is_compose(stream, IS_REGISTERMOM)) != DIS_SUCCESS)\n\t\t\tgoto err;\n\n\t/* if there are running jobs, report them to the Server */\n\t/*\n\t\t* Add to the REGISTERMOM the count of jobs and the\n\t\t* following per running job:\n\t\t*   string  - job id\n\t\t*   int     - job substate\n\t\t*   long    - run version (count)\n\t\t*   int     - Node Id  (0 if Mother Superior)\n\t\t*   string  - exec_vnode string\n\t*/\n\n\tif ((ret = diswui(stream, count)) != DIS_SUCCESS)\n\t\tgoto err;\n\tfor (pjob = (job *) 
GET_NEXT(svr_alljobs);\n\t     pjob && (count > 0);\n\t     pjob = (job *) GET_NEXT(pjob->ji_alljobs)) {\n\n\t\t--count;\n\n\t\tif ((ret = diswst(stream, pjob->ji_qs.ji_jobid)) != DIS_SUCCESS)\n\t\t\tgoto err;\n\t\tif ((ret = diswsi(stream, get_job_substate(pjob))) != DIS_SUCCESS)\n\t\t\tgoto err;\n\n\t\tif (is_jattr_set(pjob, JOB_ATR_run_version))\n\t\t\tret = diswsl(stream, get_jattr_long(pjob, JOB_ATR_run_version));\n\t\telse\n\t\t\tret = diswsl(stream, get_jattr_long(pjob, JOB_ATR_runcount));\n\n\t\tif (ret != DIS_SUCCESS)\n\t\t\tgoto err;\n\t\t/* send Node Id */\n\t\tif ((ret = diswsi(stream, pjob->ji_nodeid)) != DIS_SUCCESS)\n\t\t\tgoto err;\n\t\tif ((ret = diswst(stream, get_jattr_str(pjob, JOB_ATR_exec_vnode))) != DIS_SUCCESS)\n\t\t\tgoto err;\n\n\t\tif (ret != DIS_SUCCESS)\n\t\t\tgoto err;\n\t}\n\n\tif (!combine_msg)\n\t\tdis_flush(stream);\n\treturn 0;\n\nerr:\n\tsprintf(log_buffer, \"%s for %s\", dis_emsg[ret], \"HELLO\");\n#ifdef WIN32\n\n\tif (errno != 10054)\n#endif\n\t\tlog_err(errno, __func__, log_buffer);\n\ttpp_close(stream);\n\treturn ret;\n}\n\n/**\n * @brief\n *\tSend one or the entire set of unacknowledged hook_job_actions\n *\tto the server.   
If called with a non-null pointer to an action,\n *\tthat one is sent;  otherwise all in the list are sent.\n *\n *\tIf only sending one (non-null argument), please note that that item\n *\thas already been linked into the list headed by svr_hook_job_actions.\n *\n * @param[in] phjba - specific action to send or null for all\n *\n * @return none\n *\n */\nvoid\nsend_hook_job_action(struct hook_job_action *phjba)\n{\n\tstruct hook_job_action *pka;\n\tunsigned int count;\n\tint ret;\n\n\tif (server_stream == -1) {\n\t\t/* no stream to server, ok as item already queued to resend */\n\t\treturn;\n\t}\n\n\tif (phjba != NULL) {\n\t\t/* single new item to send */\n\t\tpka = phjba;\n\t\tcount = 1;\n\t} else {\n\t\t/* resend queued up list of items */\n\t\tpka = GET_NEXT(svr_hook_job_actions);\n\t\tif (pka == NULL)\n\t\t\treturn; /* none in the list to send */\n\t\tcount = 0;\n\t\twhile (pka) {\n\t\t\t++count;\n\t\t\tpka = GET_NEXT(pka->hja_link);\n\t\t}\n\t\tpka = GET_NEXT(svr_hook_job_actions);\n\t}\n\n\tif ((ret = is_compose(server_stream, IS_HOOK_JOB_ACTION)) != DIS_SUCCESS)\n\t\tgoto err;\n\n\tret = diswui(server_stream, count);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto err;\n\twhile (count--) {\n\t\tret = diswst(server_stream, pka->hja_jid);\n\t\tif (ret != DIS_SUCCESS)\n\t\t\tgoto err;\n\t\tret = diswul(server_stream, pka->hja_actid);\n\t\tif (ret != DIS_SUCCESS)\n\t\t\tgoto err;\n\t\tret = diswsi(server_stream, pka->hja_runct);\n\t\tif (ret != DIS_SUCCESS)\n\t\t\tgoto err;\n\t\tret = diswsi(server_stream, pka->hja_action);\n\t\tif (ret != DIS_SUCCESS)\n\t\t\tgoto err;\n\t\tret = diswui(server_stream, pka->hja_huser);\n\t\tif (ret != DIS_SUCCESS)\n\t\t\tgoto err;\n\t\tpka = GET_NEXT(pka->hja_link);\n\t}\n\tdis_flush(server_stream);\n\treturn;\n\nerr:\n\tlog_err(errno, \"send_hook_job_action\", (char *) dis_emsg[ret]);\n\treturn;\n}\n/**\n *  @brief\n * \tSend the vnode changes in 'vnl' to the server via\n * \thook_requests_to_server() function call, and also\n * 
\trequests saving 'vnl' onto the svr_hook_vnl_actions list.\n * \tThis list is tracked for an ack from the server; when the ack\n * \tarrives, 'vnl' is deleted from the svr_hook_vnl_actions list and\n * \tfreed.\n * \tIf there's no ack from the server, and communication with the\n * \tserver is interrupted, the 'vnl' request would be sent again.\n *\n * @note\n *\tBe sure to NULL the value of 'vnl' upon return from this function,\n *\tso it is not referenced again if it later gets freed.\n *\n * @param[in]\tvnl\t- vnode changes to send.\n *\t\t\t  This 'vnl' is saved internally inside\n *\t\t\t  hook_requests_to_server(), to be freed later in\n *\t\t\t  is_request() under IS_HOOK_ACTION_ACK request\n *\t\t\t  on an IS_UPDATE_FROM_HOOK/IS_UPDATE_FROM_HOOK2\n *\t\t\t  acknowledgement.\n * @return\tint\n * \t\tDIS_SUCCESS\t- for successful operations.\n * \t\t!= DIS_SUCCESS\t- for failure encountered\n *\n */\n\nint\nsend_hook_vnl(void *vnl)\n{\n\tstruct hook_vnl_action *pvna;\n\tpbs_list_head pvnalist;\n\tint ret;\n\tvnl_t *the_vnlp = vnl;\n\n\tif ((the_vnlp == NULL) || (the_vnlp->vnl_used == 0))\n\t\t/* nothing to send */\n\t\treturn DIS_SUCCESS;\n\n\tpvna = malloc(sizeof(struct hook_vnl_action));\n\tif (pvna == NULL) {\n\t\tlog_err(errno, __func__, \"malloc\");\n\t\treturn DIS_NOMALLOC;\n\t}\n\tCLEAR_HEAD(pvnalist);\n\tCLEAR_LINK(pvna->hva_link);\n\tpvna->hva_euser[0] = '\\0';\n\tpvna->hva_actid = hook_action_id++;\n\tpvna->hva_vnl = the_vnlp;\n\tpvna->hva_update_cmd = IS_UPDATE_FROM_HOOK;\n\tappend_link(&pvnalist, &pvna->hva_link, pvna);\n\n\t/* The argument of 1 means to save action to */\n\t/* svr_vnl_actions list for possible resend. 
*/\n\tret = hook_requests_to_server(&pvnalist);\n\tvna_list_free(pvnalist);\n\treturn (ret);\n}\n\n/**\n * @brief\n *\tSend a checksum report of the various hooks known to the current mom,\n *\tif the configuration flag 'report_hook_checksums' is TRUE.\n *\n * @return\tint\n * @retval\tDIS_SUCCESS\t- for successful operation\n * @retval\t!= DIS_SUCCESS\t- for failure encountered\n *\n */\nstatic int\nsend_hook_checksums(void)\n{\n\tunsigned int count;\n\thook *phook;\n\tint ret;\n\n\tif (!report_hook_checksums)\n\t\treturn DIS_SUCCESS;\n\n\tif (server_stream == -1) {\n\t\t/* no stream to server...ok */\n\t\treturn DIS_SUCCESS;\n\t}\n\n\tphook = (hook *) GET_NEXT(svr_allhooks);\n\tcount = 0;\n\twhile (phook) {\n\t\tphook = (hook *) GET_NEXT(phook->hi_allhooks);\n\t\tcount++;\n\t}\n\n\tif ((ret = is_compose(server_stream, IS_HOOK_CHECKSUMS)) != DIS_SUCCESS)\n\t\tgoto err;\n\n\tret = diswui(server_stream, count);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto err;\n\n\tphook = (hook *) GET_NEXT(svr_allhooks);\n\twhile (count--) {\n\t\tret = diswst(server_stream, phook->hook_name);\n\t\tif (ret != DIS_SUCCESS)\n\t\t\tgoto err;\n\t\tret = diswul(server_stream, phook->hook_control_checksum);\n\t\tif (ret != DIS_SUCCESS)\n\t\t\tgoto err;\n\t\tret = diswul(server_stream, phook->hook_script_checksum);\n\t\tif (ret != DIS_SUCCESS)\n\t\t\tgoto err;\n\t\tret = diswul(server_stream, phook->hook_config_checksum);\n\t\tif (ret != DIS_SUCCESS)\n\t\t\tgoto err;\n\t\tphook = (hook *) GET_NEXT(phook->hi_allhooks);\n\t}\n\n\tret = diswul(server_stream, hooks_rescdef_checksum);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto err;\n\n\t(void) dis_flush(server_stream);\n\n\treturn DIS_SUCCESS;\n\nerr:\n\tlog_err(errno, \"send_hook_checksums\", (char *) dis_emsg[ret]);\n\treturn (ret);\n}\n\n/**\n * @brief\n *\tThis function will process the cluster addresses from the server stream.\n *\n * @param[in]\tstream - the communication stream\n *\n * @return\tint\n * @retval\t0: success\n * @retval\t!0: Error code\n 
*/\nstatic int\nprocess_cluster_addrs(int stream)\n{\n\tu_long ipaddr;\n\tint i;\n\tint tot = 0;\n\tint ret = 0;\n\tu_long ipdepth = 0;\n\tu_long counter = 0;\n\n\tDBPRT((\"%s: IS_CLUSTER_ADDRS\\n\", __func__))\n\tenable_exechost2 = 1;\n\n\ttot = disrui(stream, &ret);\n\tif (ret != DIS_SUCCESS)\n\t\treturn ret;\n\n\tfor (i = 0; i < tot; i++) {\n\t\tipaddr = disrul(stream, &ret);\n\t\tif (ret != DIS_SUCCESS)\n\t\t\tbreak;\n\t\tipdepth = disrul(stream, &ret);\n\t\tif (ret != DIS_SUCCESS)\n\t\t\tbreak;\n\t\tcounter = ipaddr;\n\t\twhile (counter <= ipaddr + ipdepth) {\n\t\t\tDBPRT((\"%s:\\t%ld.%ld.%ld.%ld\", __func__,\n\t\t\t       (counter & 0xff000000) >> 24,\n\t\t\t       (counter & 0x00ff0000) >> 16,\n\t\t\t       (counter & 0x0000ff00) >> 8,\n\t\t\t       (counter & 0x000000ff)))\n\t\t\taddrinsert(counter++);\n\t\t\tDBPRT((\"ipdepth: %lu\\n\", ipdepth))\n\t\t}\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tThis handles input coming from another server over DIS on a TPP stream.\n *\tRead the stream to get an Inter-Server request.\n *\n * @param[in]\tstream - the tpp stream\n * @param[in]\tversion - protocol version of the incoming connection\n *\n */\nvoid\nis_request(int stream, int version)\n{\n\tint command = 0;\n\tint n;\n\tint ret = DIS_SUCCESS;\n\tu_long ipaddr;\n\tchar *jobid = NULL;\n\tstruct sockaddr_in *addr;\n\tvoid init_addrs();\n\tjob *pjob;\n\tFILE *filen = 0;\n\textern vnl_t *vnlp;\t      /* vnode list */\n\textern vnl_t *vnlp_from_hook; /* vnode list updates from hook */\n\tint hktype;\n\tunsigned long hkseq;\n\tstruct hook_job_action *phjba;\n\tstruct hook_vnl_action *phvna;\n\tint need_inv;\n\tmom_hook_input_t *phook_input = NULL;\n\tmom_hook_output_t *phook_output = NULL;\n\n\tDBPRT((\"%s: stream %d version %d\\n\", __func__, stream, version))\n\tif (version != IS_PROTOCOL_VER) {\n\t\tsprintf(log_buffer, \"protocol version %d unknown\", version);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\ttpp_close(stream);\n\t\treturn;\n\t}\n\n\t/* check that 
machine is okay to be a server */\n\taddr = tpp_getaddr(stream);\n\tif (addr == NULL) {\n\t\tsprintf(log_buffer, \"Sender unknown\");\n\t\tlog_err(-1, __func__, log_buffer);\n\t\ttpp_close(stream);\n\t\treturn;\n\t}\n\tipaddr = ntohl(addr->sin_addr.s_addr);\n\n\tif (!addrfind(ipaddr)) {\n\t\tsprintf(log_buffer, \"bad connect from %s\", netaddr(addr));\n\t\tlog_err(PBSE_BADHOST, __func__, log_buffer);\n\t\ttpp_close(stream);\n\t\treturn;\n\t}\n\n\t/* Server can reach out to mom with requests even before mom sends a hello exchange.\n\t   This is one such occasion. So trigger hello exchange now */\n\tif (server_stream == -1)\n\t\tsend_hellosvr(stream);\n\n\tcommand = disrsi(stream, &ret);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto err;\n\n\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SERVER, LOG_DEBUG, msg_daemonname, \"Received request: %d\", command);\n\n\tswitch (command) {\n\n\t\tcase IS_REPLYHELLO: /* servers return greeting to IS_HELLOSVR */\n\n\t\t\tDBPRT((\"%s: IS_REPLYHELLO, state=0x%x stream=%d\\n\", __func__,\n\t\t\t       internal_state, stream))\n\n\t\t\ttime_delta_hellosvr(MOM_DELTA_RESET);\n\n\t\t\tneed_inv = disrsi(stream, &ret);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\n\t\t\tret = process_cluster_addrs(stream);\n\t\t\tif (ret != 0)\n\t\t\t\tgoto err;\n\n\t\t\t/* return an IS_REGISTERMOM followed by an UPDATE or UPDATE2 */\n\n\t\t\tnext_sample_time = min_check_poll;\n\t\t\tif ((ret = is_compose(stream, IS_REGISTERMOM)) != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\t\t\tif ((ret = registermom(stream, 1)) != 0)\n\t\t\t\tgoto err;\n\t\t\tinternal_state_update = UPDATE_MOM_STATE;\n\n\t\t\tif (need_inv) {\n\t\t\t\tif ((ret = state_to_server(UPDATE_VNODES, 1)) != DIS_SUCCESS)\n\t\t\t\t\tgoto err;\n\t\t\t\tsprintf(log_buffer, \"ReplyHello from server at %s\", netaddr(addr));\n\t\t\t} else {\n\t\t\t\tif ((ret = state_to_server(UPDATE_MOM_ONLY, 1)) != DIS_SUCCESS)\n\t\t\t\t\tgoto err;\n\t\t\t\tsprintf(log_buffer, \"ReplyHello (no inventory required) from server 
at %s\", netaddr(addr));\n\t\t\t}\n\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_DEBUG,\n\t\t\t\t  msg_daemonname, log_buffer);\n\t\t\tdis_flush(server_stream);\n\n\t\t\tif (send_hook_checksums() != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\n\t\t\t/* send any unacknowledged hook job and vnl action requests */\n\t\t\tsend_hook_job_action(NULL);\n\t\t\thook_requests_to_server(&svr_hook_vnl_actions);\n\n\t\t\t/* send any vnode changes made by */\n\t\t\t/* exechost_startup hook */\n\t\t\tmom_vnlp_report(vnlp_from_hook, \"VNLP_FROM_HOOK\");\n\t\t\t(void) send_hook_vnl(vnlp_from_hook);\n\t\t\t/* send_hook_vnl() saves 'vnlp_from_hook' internally, */\n\t\t\t/* to be freed later when server acks the request. */\n\t\t\tvnlp_from_hook = NULL;\n\t\t\tmom_recvd_ip_cluster_addrs = 1;\n\t\t\tbreak;\n\n\t\tcase IS_CLUSTER_ADDRS:\n\t\t\tret = process_cluster_addrs(stream);\n\t\t\tif (ret != 0 && ret != DIS_EOD)\n\t\t\t\tgoto err;\n\t\t\tbreak;\n\n\t\tcase IS_OBITREPLY: {\n\t\t\tint njobs = 0;\n\n\t\t\tnjobs = disrui(stream, &ret); /* number of acks in reply */\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\n\t\t\tDBPRT((\"%s: IS_OBITREPLY ack njobs: %d\\n\", __func__, njobs))\n\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, __func__, \"received ack obits = %d\", njobs);\n\n\t\t\twhile (njobs-- > 0) {\n\t\t\t\tjobid = disrst(stream, &ret);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto err;\n\n\t\t\t\tpjob = find_job(jobid);\n\t\t\t\tif (pjob) {\n\t\t\t\t\t/* note: see on_job_exit() for more info */\n\t\t\t\t\tif (!has_stage(pjob) && num_eligible_hooks(HOOK_EVENT_EXECJOB_END) == 0) {\n\t\t\t\t\t\tmom_deljob(pjob);\n\t\t\t\t\t} else {\n\t\t\t\t\t\tset_job_substate(pjob, JOB_SUBSTATE_EXITED);\n\t\t\t\t\t\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_CHKPT) {\n\t\t\t\t\t\t\t/*\n\t\t\t\t\t\t\t* if checkpointed, save state to disk, otherwise\n\t\t\t\t\t\t\t* leave unchanged on disk so recovery will resend\n\t\t\t\t\t\t\t* obit to 
server\n\t\t\t\t\t\t\t*/\n\t\t\t\t\t\t\tjob_save(pjob);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tfree(jobid);\n\t\t\t\tjobid = NULL;\n\t\t\t}\n\n\t\t\tnjobs = disrui(stream, &ret); /* number of rejects in reply */\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\n\t\t\tDBPRT((\"%s: IS_OBITREPLY reject njobs: %d\\n\", __func__, njobs))\n\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, __func__, \"received reject obits = %d\", njobs);\n\n\t\t\twhile (njobs-- > 0) {\n\t\t\t\tjobid = disrst(stream, &ret);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto err;\n\n\t\t\t\tpjob = find_job(jobid);\n\t\t\t\t/*\n\t\t\t\t * Allowing deletion only of a job that has actually\n\t\t\t\t * started (i.e. not in JOB_SUBSTATE_PRERUN) would\n\t\t\t\t * avoid the race condition resulting in a hung job:\n\t\t\t\t * server force reruns a job which is lingering in\n\t\t\t\t * PRERUN state, and an Obit request for the previous\n\t\t\t\t * instance of the job is received by the server and\n\t\t\t\t * rejected, causing mom to delete the new instance of\n\t\t\t\t * the job. 
If the job has passed the PRERUN stage,\n\t\t\t\t * then it would have already synced up with the server\n\t\t\t\t * on status, and not end up in this race condition.\n\t\t\t\t */\n\t\t\t\tif (pjob && !pjob->ji_hook_running_bg_on && !check_job_substate(pjob, JOB_SUBSTATE_PRERUN)) {\n\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_NOTICE, jobid, \"Job removed, Server rejected Obit\");\n\t\t\t\t\tmom_deljob(pjob);\n\t\t\t\t}\n\t\t\t\tfree(jobid);\n\t\t\t\tjobid = NULL;\n\t\t\t}\n\t\t} break;\n\n\t\tcase IS_SHUTDOWN:\n\t\t\tDBPRT((\"%s: IS_SHUTDOWN\\n\", __func__))\n\t\t\tmom_run_state = 0;\n\t\t\tbreak;\n\n\t\tcase IS_DISCARD_JOB:\n\t\t\tjobid = disrst(stream, &ret);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\t\t\tDBPRT((\"%s: IS_DISCARD_JOB %s\\n\", __func__, jobid))\n\t\t\tn = disrsi(stream, &ret); /* job's run_version */\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tn = -1; /* default to -1 */\n\t\t\tpjob = find_job(jobid);\n\t\t\tif (pjob) {\n\t\t\t\tlong runver;\n\n\t\t\t\tif (is_jattr_set(pjob, JOB_ATR_run_version))\n\t\t\t\t\trunver = get_jattr_long(pjob, JOB_ATR_run_version);\n\t\t\t\telse\n\t\t\t\t\trunver = get_jattr_long(pjob, JOB_ATR_runcount);\n\t\t\t\t/* a run_version of -1 means any version is to be discarded */\n\t\t\t\tif ((n == -1) || (runver == n)) {\n\t\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t\t  LOG_NOTICE,\n\t\t\t\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t\t\t\t  \"Job discarded at request of Server\");\n\t\t\t\t\tif (pjob->ji_hook_running_bg_on) {\n\t\t\t\t\t\tfree(jobid);\n\t\t\t\t\t\tjobid = NULL;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\t(void) kill_job(pjob, SIGKILL);\n\t\t\t\t\tphook_input = (mom_hook_input_t *) malloc(sizeof(mom_hook_input_t));\n\t\t\t\t\tif (phook_input == NULL) {\n\t\t\t\t\t\tlog_err(errno, __func__, MALLOC_ERR_MSG);\n\t\t\t\t\t\tgoto err;\n\t\t\t\t\t}\n\t\t\t\t\tmom_hook_input_init(phook_input);\n\t\t\t\t\tphook_input->pjob = pjob;\n\t\t\t\t\tif ((phook_output = (mom_hook_output_t *) 
malloc(\n\t\t\t\t\t\t     sizeof(mom_hook_output_t))) == NULL) {\n\t\t\t\t\t\tlog_err(errno, __func__, MALLOC_ERR_MSG);\n\t\t\t\t\t\tgoto err;\n\t\t\t\t\t}\n\t\t\t\t\tmom_hook_output_init(phook_output);\n\n\t\t\t\t\tif ((phook_output->reject_errcode =\n\t\t\t\t\t\t     (int *) malloc(sizeof(int))) == NULL) {\n\t\t\t\t\t\tlog_err(errno, __func__, MALLOC_ERR_MSG);\n\t\t\t\t\t\tfree(phook_output);\n\t\t\t\t\t\tgoto err;\n\t\t\t\t\t}\n\t\t\t\t\t*(phook_output->reject_errcode) = 0;\n\n\t\t\t\t\tif (mom_process_hooks(HOOK_EVENT_EXECJOB_END,\n\t\t\t\t\t\t\t      PBS_MOM_SERVICE_NAME, mom_host,\n\t\t\t\t\t\t\t      phook_input, phook_output, NULL, 0, 1) == HOOK_RUNNING_IN_BACKGROUND) {\n\t\t\t\t\t\tpjob->ji_hook_running_bg_on = BG_IS_DISCARD_JOB;\n\t\t\t\t\t\tif (pjob->ji_qs.ji_svrflags &\n\t\t\t\t\t\t    JOB_SVFLG_HERE) /* MS */\n\t\t\t\t\t\t\t(void) send_sisters(pjob,\n\t\t\t\t\t\t\t\t\t    IM_DELETE_JOB, NULL);\n\t\t\t\t\t\tfree(jobid);\n\t\t\t\t\t\tjobid = NULL;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\tmom_deljob(pjob);\n\t\t\t\t\tfree(phook_output->reject_errcode);\n\t\t\t\t\tfree(phook_output);\n\t\t\t\t\tfree(phook_input);\n\t\t\t\t}\n\t\t\t}\n\t\t\tif ((ret = is_compose(server_stream, IS_DISCARD_DONE)) != DIS_SUCCESS) {\n\t\t\t\tfree(jobid);\n\t\t\t\tjobid = NULL;\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t\tif ((ret = diswst(server_stream, jobid)) != DIS_SUCCESS) {\n\t\t\t\tfree(jobid);\n\t\t\t\tjobid = NULL;\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t\tfree(jobid); /* can be freed now */\n\t\t\tjobid = NULL;\n\t\t\tif ((ret = diswsi(server_stream, n)) != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\t\t\tdis_flush(server_stream);\n\t\t\tbreak;\n\n\t\tcase IS_CMD:\n\t\t\tDBPRT((\"%s: IS_CMD\\n\", __func__))\n\t\t\tprocess_IS_CMD(stream);\n\t\t\tbreak;\n\n\t\tcase IS_HOOK_ACTION_ACK: {\n\t\t\tint nacks = 0;\n\t\t\tstatic char **vnl_allow_attrs = NULL;\n\n\t\t\t/*\n\t\t\t * the Server is sending an acknowledgement that it received\n\t\t\t * and processed an IS_HOOK_JOB_ACTION request for a 
job.\n\t\t\t * The Server will send one such per job\n\t\t\t */\n\n\t\t\thktype = disrsi(stream, &ret);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\t\t\tnacks = disrsi(stream, &ret);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\t\t\twhile (nacks--) {\n\t\t\t\thkseq = disrsi(stream, &ret);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto err;\n\t\t\t\tif (hktype == IS_HOOK_JOB_ACTION) {\n\t\t\t\t\tphjba = GET_NEXT(svr_hook_job_actions);\n\t\t\t\t\tfor (; phjba; phjba = GET_NEXT(phjba->hja_link)) {\n\t\t\t\t\t\tif (hkseq == phjba->hja_actid) {\n\t\t\t\t\t\t\tdelete_link(&phjba->hja_link);\n\t\t\t\t\t\t\tfree(phjba);\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t} else if (hktype == IS_UPDATE_FROM_HOOK || hktype == IS_UPDATE_FROM_HOOK2) {\n\t\t\t\t\tif (vnl_allow_attrs == NULL) {\n\t\t\t\t\t\tvnl_allow_attrs = break_delimited_str(HOOK_VNL_PERSISTENT_ATTRIBS, ' ');\n\t\t\t\t\t\tif (vnl_allow_attrs == NULL)\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t\tphvna = GET_NEXT(svr_hook_vnl_actions);\n\t\t\t\t\tfor (; phvna; phvna = GET_NEXT(phvna->hva_link)) {\n\t\t\t\t\t\tif (hkseq == phvna->hva_actid) {\n\t\t\t\t\t\t\tdelete_link(&phvna->hva_link);\n\t\t\t\t\t\t\t/* save admin vnode changes done by various hooks */\n\t\t\t\t\t\t\tif (phvna->hva_euser[0] == '\\0') {\n\t\t\t\t\t\t\t\tif (vnlp != NULL || vnl_alloc(&vnlp) != NULL) {\n\t\t\t\t\t\t\t\t\tvnlp->vnl_modtime = time(NULL);\n\t\t\t\t\t\t\t\t\tvn_merge2(vnlp, phvna->hva_vnl, vnl_allow_attrs, NULL);\n\t\t\t\t\t\t\t\t\tmom_vnlp_report(vnlp, \"vnlp\");\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tvnl_free(phvna->hva_vnl);\n\t\t\t\t\t\t\tfree(phvna);\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t} break;\n\n\t\tdefault:\n\t\t\tsprintf(log_buffer, \"unknown command %d sent\", command);\n\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\tgoto err;\n\t}\n\n\ttpp_eom(stream);\n\treturn;\n\nerr:\n\t/*\n\t ** We come here if we got a DIS read error or a protocol\n\t ** 
element is missing.\n\t */\n\tsprintf(log_buffer, \"%s from %s\", dis_emsg[ret], netaddr(addr));\n\tlog_err(-1, __func__, log_buffer);\n\ttpp_close(stream);\n\tif (filen)\n\t\tfclose(filen);\n\tif (jobid)\n\t\tfree(jobid);\n\n\treturn;\n}\n\n/**\n * @brief\n *\tSends any pending requests to the server related to hooks on the TPP stream\n *\n * @par\n *\tMay be called with:\n *\t1. a new linked list in which case each vnl entry is sent to the\n *\t   Server and the list entry is relinked into svr_hook_vnl_actions\n *\t   where it remains until the update is acknowledged by the Server; OR\n *\t2. svr_hook_vnl_actions which is done when a new TPP stream is opened\n *\t   by a server on restart or reestablished communications.  In this\n *\t   case the entries in svr_hook_vnl_actions are only resent.\n * @Note\n *\tUpdate is sent if the list of vnl changes is not empty.\n *\tUpon any error, the connection to the server_stream is not closed.\n *\n * @param[in]\tplist - pointer to head of list of vnl actions to send to Server\n *\n * @return\tint\n * \t\tDIS_SUCCESS\t- for successful operations.\n * \t\t!= DIS_SUCCESS\t- for failure encountered\n *\n */\nint\nhook_requests_to_server(pbs_list_head *plist)\n{\n\tint resending = 0;\n\tint ret;\n\tstruct hook_vnl_action *nxt;\n\tstruct hook_vnl_action *pvna;\n\tvnl_t *pvnlph;\n\textern const char *dis_emsg[];\n\n\tif (plist == NULL)\n\t\treturn (0); /* nothing to send */\n\n\tif (server_stream < 0) {\n\t\t/* log but keep going to link the changes to be sent later */\n\t\tlog_err(errno, __func__, \"warning: unable to send hook requests to server: No server_stream! 
(to be retried)\");\n\t}\n\n\tif (plist == &svr_hook_vnl_actions) {\n\t\t/* we are resending the vnl lists on svr_hook_vnl_actions */\n\t\t/* so we don't need to update modtime or to relink        */\n\t\tresending = 1;\n\t}\n\n\tpvna = (struct hook_vnl_action *) GET_NEXT(*plist);\n\twhile (pvna != NULL) {\n\n\t\tnxt = (struct hook_vnl_action *) GET_NEXT(pvna->hva_link);\n\n\t\tif ((pvnlph = pvna->hva_vnl) == NULL) {\n\t\t\t/* nothing to send, get rid of it */\n\t\t\tdelete_link(&pvna->hva_link);\n\t\t\tfree(pvna);\n\t\t\tpvna = nxt;\n\t\t\tcontinue;\n\t\t}\n\n\t\t/* We have hook changes to the vnodes to send to the Server */\n\n\t\tif (resending == 0) {\n\n\t\t\t/* relink them into the main list of \"outstanding\" */\n\t\t\t/* changes sent to the server */\n\t\t\tdelete_link(&pvna->hva_link);\n\t\t\tappend_link(&svr_hook_vnl_actions, &pvna->hva_link, pvna);\n\t\t\tpvna->hva_actid = ++hook_action_id;\n\n\t\t\t/*\n\t\t\t * Put in a legit vnl_modtime value; otherwise, garbage\n\t\t\t * value could be sent, causing pbs_server to panic with\n\t\t\t * \"Input value too large\" upon vn_decode_DIS()\n\t\t\t */\n\t\t\tpvnlph->vnl_modtime = time(NULL);\n\t\t}\n\n\t\t/* Now send each update to the Server if we can */\n\t\tif (server_stream == -1) {\n\t\t\tpvna = nxt; /* next set of vnl changes */\n\t\t\tcontinue;\n\t\t}\n\n\t\tret = is_compose(server_stream, pvna->hva_update_cmd);\n\t\tif (ret != DIS_SUCCESS)\n\t\t\tgoto hook_requests_to_server_err;\n\n\t\tret = diswul(server_stream, pvna->hva_actid);\n\t\tif (ret != DIS_SUCCESS)\n\t\t\tgoto hook_requests_to_server_err;\n\n\t\tret = diswst(server_stream, pvna->hva_euser);\n\t\tif (ret != DIS_SUCCESS)\n\t\t\tgoto hook_requests_to_server_err;\n\n\t\tret = vn_encode_DIS(server_stream, pvnlph); /* vnode list */\n\t\tif (ret != DIS_SUCCESS)\n\t\t\tgoto hook_requests_to_server_err;\n\n\t\tdis_flush(server_stream);\n\n\t\tpvna = nxt; /* next set of vnl changes */\n\t}\n\n\treturn 0;\n\nhook_requests_to_server_err:\n\tlog_err(errno, 
__func__, (char *) dis_emsg[ret]);\n\treturn (ret);\n}\n\n/**\n * @brief\n * \tstate_to_server() - if UPDATE_MOM_STATE is set, send state update message to\n *\tthe server.\n *\n * @param[in]\twhat_to_update - defines what to update\n * \t\tUPDATE_VNODES - update all the vnodes\n *\t\tUPDATE_MOM_ONLY - update only the info about the mom\n * @param[in]\tcombine_msg\t- combine message in the caller.\n *\n *\tIf we have placement set information to send, we use IS_UPDATE2;\n *\totherwise, we fall back to IS_UPDATE.\n *\n * @return int\n * @retval\t0: success\n * @retval\t1: failure\n *\n */\nint\nstate_to_server(int what_to_update, int combine_msg)\n{\n\tint i, ret;\n\textern const char *dis_emsg[];\n\textern vnl_t *vnlp; /* vnode list */\n\tchar *pv;\n\tint use_UPDATE2 = 0;\n\tint cmd = IS_UPDATE;\n\n\tif (internal_state_update == 0)\n\t\treturn 0;\n\n\tif (server_stream < 0)\n\t\treturn -1;\n\n\tif (av_phy_mem == 0)\n\t\tav_phy_mem = strTouL(physmem(0), &pv, 10);\n\n\ti = internal_state & MOM_STATE_MASK;\n\tif (internal_state & (MOM_STATE_BUSYKB | MOM_STATE_INBYKB))\n\t\ti |= MOM_STATE_BUSY;\n\tif (cycle_harvester == 1)\n\t\ti |= MOM_STATE_CONF_HARVEST;\n\n\tDBPRT((\"updating state 0x%x to server\\n\", i))\n\n\tif ((vnlp != NULL) && (what_to_update == UPDATE_VNODES)) {\n\t\tuse_UPDATE2 = 1;\n\t\tcmd = IS_UPDATE2;\n\t}\n\n\tif (!combine_msg)\n\t\tif ((ret = is_compose(server_stream, cmd)) != DIS_SUCCESS)\n\t\t\tgoto err;\n\n\tif ((ret = diswui(server_stream, i)) != DIS_SUCCESS) /* node state */\n\t\tgoto err;\n\tif ((ret = diswui(server_stream, num_pcpus)) != DIS_SUCCESS) /* phy cpus */\n\t\tgoto err;\n\tif ((ret = diswui(server_stream, num_acpus)) != DIS_SUCCESS) /* avail cpus */\n\t\tgoto err;\n\tif ((ret = diswull(server_stream, av_phy_mem)) != DIS_SUCCESS) /* phy mem */\n\t\tgoto err;\n\tif ((ret = diswst(server_stream, arch(0))) != DIS_SUCCESS) /* arch type */\n\t\tgoto err;\n\n\tif (use_UPDATE2) {\n#if MOM_ALPS\n\t\t/*\n\t\t * This is a workaround for a problem 
with the reporting of\n\t\t * vnodes by multiple MoMs:  the \"check_other_moms_time\"\n\t\t * variable's value being nonzero results in the vnl_modtime\n\t\t * for additional MoMs' vnodes being set to match the modtime\n\t\t * for the first one to report.  This in turn causes the call\n\t\t * to update2_to_vnode() to be skipped in the case of additional\n\t\t * MoMs because they are still reporting the old time (the one\n\t\t * recorded when inventory_to_vnodes() created the vnodes).\n\t\t *\n\t\t * The fact that update2_to_vnode() is skipped means that the\n\t\t * ATTR_NODE_TopologyInfo action function is not called and as\n\t\t * a result, the other MoMs don't acquire socket licenses.\n\t\t *\n\t\t * This workaround makes sure that in response to an IS_HELLO\n\t\t * from the server, a Cray always reports current time as the\n\t\t * vnode mod time.\n\t\t */\n\t\tvnlp->vnl_modtime = time(0);\n#endif /* MOM_ALPS */\n\n\t\tif ((ret = vn_encode_DIS(server_stream, vnlp)) != DIS_SUCCESS) /* vnode list */\n\t\t\tgoto err;\n\t}\n\n\tif ((ret = diswst(server_stream, PBS_VERSION)) != DIS_SUCCESS) /* pbs_version */\n\t\tgoto err;\n\n\tif (!combine_msg)\n\t\tdis_flush(server_stream);\n\tinternal_state_update = 0;\n\treturn 0;\n\nerr:\n\tlog_err(errno, \"state_to_server\", (char *) dis_emsg[ret]);\n\ttpp_close(server_stream);\n\tserver_stream = -1;\n\treturn ret;\n}\n\n/**\n * @brief\n * \tsend_wk_job_idle - send IDLE message to server for each job suspended/resumed\n *\ton the workstation going busy/idle.\n *\n * @param[in] idle   suspend/resume (1/0)\n * @param[in] jobid  job id\n *\n * @return Void\n *\n */\nvoid\nsend_wk_job_idle(char *jobid, int idle)\n{\n\tint ret;\n\n\tif (server_stream < 0)\n\t\treturn;\n\n\tret = is_compose(server_stream, IS_IDLE);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto err;\n\n\tret = diswui(server_stream, idle);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto err;\n\tret = diswst(server_stream, jobid);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto 
err;\n\tdis_flush(server_stream);\n\treturn;\n\nerr:\n\tsprintf(log_buffer, \"%s for %d\", dis_emsg[ret], idle);\n\tlog_err(errno, \"send_wk_job_idle\", log_buffer);\n\ttpp_close(server_stream);\n\tserver_stream = -1;\n\treturn;\n}\n\n/**\n * @brief\n * \trecover_vmap - recover the vnode to host mapping data from\n *\tthe mom_priv/vnodemap file.   See resmom/mom_server.c\n *\tis_request() function where it is written.\n *\n *\tFormat of the file is:\n *\t\tinteger time stamp\n *\t\thostname port num_of_vnodes\n *\t\t\tvnode_name vhost no_task_value\n *\t\t\tvnode_name vhost no_task_value\n *\t\t\t...\n *\t\thostname ...\n *\t\t\t...\n *\tNote:  if \"vhost\" is '-', then we use the hostname of Mom, \"hostname\"\n *\n * @return   int\n * @retval   errno  Failure\n * @retval   0      Success\n *\n */\nint\nrecover_vmap(void)\n{\n\tchar buf[PBS_MAXHOSTNAME + 64];\n\tchar *endp;\n\tmominfo_time_t maptime = {0, 0};\n\tint n;\n\tchar name[PBS_MAXHOSTNAME + 1];\n\tint notask;\n\tmominfo_t *pmom;\n\tunsigned short port;\n\tchar *str;\n\tFILE *vmf;\n\tchar vmapfile[MAXPATHLEN + 1];\n\tchar vhost[PBS_MAXHOSTNAME + 1];\n\textern char *skipwhite(char *);\n\textern char *wtokcpy(char *, char *, int);\n\n\tsprintf(vmapfile, \"%s/%s\", mom_home, VNODE_MAP);\n\tvmf = fopen(vmapfile, \"r\");\n\tif (vmf == NULL) {\n\t\tif (errno == ENOENT)\n\t\t\treturn 0;\n\t\telse\n\t\t\treturn errno;\n\t}\n\n\tif (fgets(buf, sizeof(buf), vmf) == NULL)\n\t\treturn 0;\n\tstr = buf;\n\twhile (isdigit(*str))\n\t\tstr++;\n\tif ((*str != '\\n') && (*str != '.')) {\n\t\tfclose(vmf);\n\t\treturn PBSE_BADTSPEC;\n\t}\n\t/* record time stamp of vmap data */\n\tsscanf(buf, \"%lu.%d\", &maptime.mit_time, &maptime.mit_gen);\n\n\twhile (fgets(buf, sizeof(buf), vmf)) {\n\t\tstr = skipwhite(buf); /* pass over initial whitespace */\n\t\tif (*str == '\\0')\n\t\t\tcontinue;\n\t\tstr = wtokcpy(str, name, sizeof(name));\n\t\tstr = skipwhite(str);\n\t\tif (*str == '\\0')\n\t\t\tcontinue;\n\t\tport = (unsigned short) 
strtol(str, &endp, 10);\n\t\tstr = skipwhite(endp);\n\t\tif (*str == '\\0')\n\t\t\tcontinue;\n\t\tn = (int) strtol(str, &endp, 10);\n\n\t\tpmom = create_mom_entry(name, (unsigned int) port);\n\n\t\twhile (n--) {\n\t\t\tif (fgets(buf, sizeof(buf), vmf) == NULL)\n\t\t\t\tbreak;\n\t\t\tstr = skipwhite(buf);\n\t\t\tif (*str == '\\0')\n\t\t\t\tbreak;\n\t\t\tstr = wtokcpy(str, name, sizeof(name));\n\t\t\tstr = skipwhite(str);\n\t\t\tif (*str == '\\0')\n\t\t\t\tbreak;\n\t\t\tstr = wtokcpy(str, vhost, sizeof(vhost));\n\t\t\tstr = skipwhite(str);\n\t\t\tnotask = (int) strtol(str, &endp, 10);\n\n\t\t\tif ((vhost[0] == '-') && (vhost[1] == '\\0'))\n\t\t\t\tvhost[0] = '\\0'; /* make null str */\n\t\t\tif (create_mommap_entry(name, vhost, pmom, notask) == NULL)\n\t\t\t\tbreak;\n\t\t}\n\t\tif (n > 0) {\n\t\t\tfree_vnodemap();\n\t\t\tfclose(vmf);\n\t\t\treturn PBSE_INTERNAL;\n\t\t}\n\t}\n\tmominfo_time = maptime;\n\tfclose(vmf);\n\treturn 0;\n}\n\n/**\n * @brief\n *\tSend a message on the TPP stream to the Server asking that it tell the Scheduler\n *\tto restart its scheduling cycle.\n * @par\n *\tIf this message is lost due to a closed stream to the Server, so be it.\n *\tThe world will likely have changed by the time a new connection\n *\tis established.\n *\n * @param[in] hook_user - effective user running hook,  null string if PBSADMIN\n *\n * @return int\n * @retval 0 - message queued on stream.\n * @retval non-zero - DIS error, stream may be closed.\n *\n */\nint\nsend_sched_recycle(char *hook_user)\n{\n\tint ret;\n\tret = is_compose(server_stream, IS_HOOK_SCHEDULER_RESTART_CYCLE);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto recycle_err;\n\tret = diswst(server_stream, hook_user);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto recycle_err;\n\tret = dis_flush(server_stream);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto recycle_err;\n\treturn (0);\n\nrecycle_err:\n\tsprintf(log_buffer, \"%s error %s\",\n\t\t\"Failed to contact server for sched recycle\",\n\t\tdis_emsg[ret]);\n\tlog_err(-1, 
__func__, log_buffer);\n\treturn (ret);\n}\n"
  },
  {
    "path": "src/resmom/mom_updates_bundle.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * This file contains functions that generate resc used updates,\n * bundle them, and send them to the server\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include <pbs_python_private.h>\n#include <Python.h>\n#include <time.h>\n#include \"resource.h\"\n#include \"job.h\"\n#include \"mom_func.h\"\n#include \"mom_server.h\"\n#include \"hook.h\"\n#include \"tpp.h\"\n\nextern pbs_list_head mom_pending_ruu;\nextern int resc_access_perm;\nextern int server_stream;\nextern time_t time_now;\n\nstatic void bundle_ruu(int *r_cnt, ruu **prused, int *rh_cnt, ruu **prhused, int *o_cnt, ruu **obits);\nstatic ruu *get_job_update(job *pjob);\nstatic PyObject *json_loads(char *value, char *msg, size_t msg_len);\nstatic char *json_dumps(PyObject *py_val, char *msg, size_t msg_len);\nstatic void encode_used(job *pjob, pbs_list_head *phead);\n\nstatic PyObject *py_json_name = NULL;\nstatic PyObject *py_json_module = NULL;\nstatic PyObject *py_json_dict = NULL;\nstatic PyObject *py_json_func_loads = NULL;\nstatic PyObject *py_json_func_dumps = NULL;\n\n#ifdef PYTHON\n/**\n * @brief\n * \tReturns the Python dictionary representation of a string\n *\tspecifying a JSON object.\n *\n * @param[in]  value   - string of JSON-object format\n * @param[out] msg     - error message buffer\n * @param[in]  msg_len - size of 
'msg' buffer\n *\n * @return PyObject *\n * @retval !NULL - dictionary representation of 'value'\n * @retval NULL  - if not successful, filling out 'msg' with the actual error message.\n */\nstatic PyObject *\njson_loads(char *value, char *msg, size_t msg_len)\n{\n\tPyObject *py_value = NULL;\n\tPyObject *py_result = NULL;\n\n\tif (value == NULL)\n\t\treturn NULL;\n\n\tif (msg != NULL) {\n\t\tif (msg_len <= 0)\n\t\t\treturn NULL;\n\t\tmsg[0] = '\\0';\n\t}\n\n\tif (py_json_name == NULL) {\n\t\tpy_json_name = PyUnicode_FromString(\"json\");\n\t\tif (py_json_name == NULL) {\n\t\t\tif (msg != NULL)\n\t\t\t\tsnprintf(msg, msg_len, \"failed to construct json name\");\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\tif (py_json_module == NULL) {\n\t\tpy_json_module = PyImport_Import(py_json_name);\n\t\tif (py_json_module == NULL) {\n\t\t\tif (msg != NULL)\n\t\t\t\tsnprintf(msg, msg_len, \"failed to import json\");\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\tif (py_json_dict == NULL) {\n\t\tpy_json_dict = PyModule_GetDict(py_json_module);\n\t\tif (py_json_dict == NULL) {\n\t\t\tif (msg != NULL)\n\t\t\t\tsnprintf(msg, msg_len, \"failed to get json module dictionary\");\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\tif (py_json_func_loads == NULL) {\n\t\tpy_json_func_loads = PyDict_GetItemString(py_json_dict, (char *) \"loads\");\n\t\tif ((py_json_func_loads == NULL) || !PyCallable_Check(py_json_func_loads)) {\n\t\t\tif (msg != NULL)\n\t\t\t\tsnprintf(msg, msg_len, \"did not find json.loads() function\");\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\tpy_value = Py_BuildValue(\"(z)\", (char *) value);\n\tif (py_value == NULL) {\n\t\tif (msg != NULL)\n\t\t\tsnprintf(msg, msg_len, \"failed to build python arg %s\", value);\n\t\treturn NULL;\n\t}\n\n\tPyErr_Clear(); /* clear any exception */\n\tpy_result = PyObject_CallObject(py_json_func_loads, py_value);\n\n\tif (PyErr_Occurred()) {\n\t\tif (msg != NULL) {\n\t\t\tPyObject *exc_string = NULL;\n\t\t\tPyObject *exc_type = NULL;\n\t\t\tPyObject *exc_value = 
NULL;\n\t\t\tPyObject *exc_traceback = NULL;\n\n\t\t\tPyErr_Fetch(&exc_type, &exc_value, &exc_traceback);\n\n\t\t\t/* get the exception */\n\t\t\tif (exc_type != NULL && (exc_string = PyObject_Str(exc_type)) != NULL && PyUnicode_Check(exc_string))\n\t\t\t\tsnprintf(msg, msg_len, \"%s\", PyUnicode_AsUTF8(exc_string));\n\t\t\tPy_XDECREF(exc_string);\n\t\t\tPy_XDECREF(exc_type);\n\t\t\tPy_XDECREF(exc_value);\n#if !defined(WIN32)\n\t\t\tPy_XDECREF(exc_traceback);\n#elif !defined(_DEBUG)\n\t\t\t/* for some reason this crashes on Windows Debug */\n\t\t\tPy_XDECREF(exc_traceback);\n#endif\n\t\t}\n\t\tgoto json_loads_fail;\n\t} else if (!PyDict_Check(py_result)) {\n\t\tif (msg != NULL)\n\t\t\tsnprintf(msg, msg_len, \"value is not a dictionary\");\n\t\tgoto json_loads_fail;\n\t}\n\n\tPy_XDECREF(py_value);\n\treturn (py_result);\n\njson_loads_fail:\n\tPy_XDECREF(py_value);\n\tPy_XDECREF(py_result);\n\treturn NULL;\n}\n\n/**\n * @brief\n * \tReturns a JSON-formatted string representing the Python object 'py_val'.\n *\n * @param[in]  py_val  - Python object\n * @param[out] msg     - error message buffer\n * @param[in]  msg_len - size of 'msg' buffer\n *\n * @return char *\n * @retval !NULL - the returned JSON-formatted string\n * @retval NULL  - if not successful, filling out 'msg' with the actual error message.\n *\n * @note\n *\tThe returned string is malloced space that must be freed later when no longer needed.\n */\nstatic char *\njson_dumps(PyObject *py_val, char *msg, size_t msg_len)\n{\n\tPyObject *py_value = NULL;\n\tPyObject *py_result = NULL;\n\tconst char *tmp_str = NULL;\n\tchar *ret_string = NULL;\n\tint slen;\n\n\tif (py_val == NULL)\n\t\treturn NULL;\n\n\tif (msg != NULL) {\n\t\tif (msg_len <= 0)\n\t\t\treturn NULL;\n\t\tmsg[0] = '\\0';\n\t}\n\n\tif (py_json_name == NULL) {\n\t\tpy_json_name = PyUnicode_FromString(\"json\");\n\t\tif (py_json_name == NULL) {\n\t\t\tif (msg != NULL)\n\t\t\t\tsnprintf(msg, msg_len, \"failed to construct json name\");\n\t\t\treturn 
NULL;\n\t\t}\n\t}\n\n\tif (py_json_module == NULL) {\n\t\tpy_json_module = PyImport_Import(py_json_name);\n\t\tif (py_json_module == NULL) {\n\t\t\tif (msg != NULL)\n\t\t\t\tsnprintf(msg, msg_len, \"failed to import json\");\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\tif (py_json_dict == NULL) {\n\t\tpy_json_dict = PyModule_GetDict(py_json_module);\n\t\tif (py_json_dict == NULL) {\n\t\t\tif (msg != NULL)\n\t\t\t\tsnprintf(msg, msg_len, \"failed to get json module dictionary\");\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\tif (py_json_func_dumps == NULL) {\n\t\tpy_json_func_dumps = PyDict_GetItemString(py_json_dict, (char *) \"dumps\");\n\t\tif ((py_json_func_dumps == NULL) || !PyCallable_Check(py_json_func_dumps)) {\n\t\t\tif (msg != NULL)\n\t\t\t\tsnprintf(msg, msg_len, \"did not find json.dumps() function\");\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\tpy_value = Py_BuildValue(\"(O)\", py_val);\n\tif (py_value == NULL) {\n\t\tif (msg != NULL)\n\t\t\tsnprintf(msg, msg_len, \"failed to build python arg %p\", (void *) py_val);\n\t\treturn NULL;\n\t}\n\tPyErr_Clear(); /* clear any exception */\n\tpy_result = PyObject_CallObject(py_json_func_dumps, py_value);\n\n\tif (PyErr_Occurred()) {\n\t\tif (msg != NULL) {\n\t\t\tPyObject *exc_string = NULL;\n\t\t\tPyObject *exc_type = NULL;\n\t\t\tPyObject *exc_value = NULL;\n\t\t\tPyObject *exc_traceback = NULL;\n\n\t\t\tPyErr_Fetch(&exc_type, &exc_value, &exc_traceback);\n\n\t\t\t/* get the exception */\n\t\t\tif (exc_type != NULL && (exc_string = PyObject_Str(exc_type)) != NULL && PyUnicode_Check(exc_string))\n\t\t\t\tsnprintf(msg, msg_len, \"%s\", PyUnicode_AsUTF8(exc_string));\n\t\t\tPy_XDECREF(exc_string);\n\t\t\tPy_XDECREF(exc_type);\n\t\t\tPy_XDECREF(exc_value);\n#if !defined(WIN32)\n\t\t\tPy_XDECREF(exc_traceback);\n#elif !defined(_DEBUG)\n\t\t\t/* for some reason this crashes on Windows Debug */\n\t\t\tPy_XDECREF(exc_traceback);\n#endif\n\t\t}\n\t\tgoto json_dumps_fail;\n\t} else if (!PyUnicode_Check(py_result)) {\n\t\tif (msg != 
NULL)\n\t\t\tsnprintf(msg, msg_len, \"value is not a string\");\n\t\tgoto json_dumps_fail;\n\t}\n\n\ttmp_str = PyUnicode_AsUTF8(py_result);\n\t/* returned tmp_str points to an internal buffer of 'py_result' */\n\tif (tmp_str == NULL) {\n\t\tif (msg != NULL)\n\t\t\tsnprintf(msg, msg_len, \"PyUnicode_AsUTF8 failed\");\n\t\tgoto json_dumps_fail;\n\t}\n\tslen = strlen(tmp_str) + 3; /* for null character + 2 single quotes */\n\tret_string = (char *) malloc(slen);\n\tif (ret_string == NULL) {\n\t\tif (msg != NULL)\n\t\t\tsnprintf(msg, msg_len, \"malloc of ret_string failed\");\n\t\tgoto json_dumps_fail;\n\t}\n\tsnprintf(ret_string, slen, \"'%s'\", tmp_str);\n\tPy_XDECREF(py_value);\n\tPy_XDECREF(py_result);\n\treturn (ret_string);\n\njson_dumps_fail:\n\tPy_XDECREF(py_value);\n\tPy_XDECREF(py_result);\n\treturn NULL;\n}\n#endif\n\n/**\n * @brief\n * \t encode_used - encode resources used by a job to be returned to the server\n *\n * @param[in] pjob - pointer to job structure\n * @param[in] phead - pointer to pbs_list_head structure\n *\n * @return Void\n *\n */\nstatic void\nencode_used(job *pjob, pbs_list_head *phead)\n{\n\tattribute_def *ad;\n\tattribute_def *ad3;\n\tresource *rs;\n\tresource_def *rd;\n\tint include_resc_used_update = 0;\n\n\tad = &job_attr_def[JOB_ATR_resc_used];\n\tif (!is_jattr_set(pjob, JOB_ATR_resc_used))\n\t\treturn;\n\n\tad3 = &job_attr_def[JOB_ATR_resc_used_update];\n\tif (pjob->ji_updated || (is_jattr_set(pjob, JOB_ATR_relnodes_on_stageout) && (get_jattr_long(pjob, JOB_ATR_relnodes_on_stageout) != 0)))\n\t\tinclude_resc_used_update = 1;\n\n\trs = (resource *) GET_NEXT(get_jattr_list(pjob, JOB_ATR_resc_used));\n\tfor (; rs != NULL; rs = (resource *) GET_NEXT(rs->rs_link)) {\n\n\t\tint i;\n\t\tattribute val;\t/* holds the final accumulated resources_used values from Moms including those released from the job */\n\t\tattribute val3; /* holds the final accumulated resources_used values from Moms, which does not include the released moms from job 
*/\n\t\tPyObject *py_jvalue;\n\t\tchar *sval;\n\t\tchar *dumps;\n\t\tchar emsg[HOOK_BUF_SIZE];\n\t\tattribute tmpatr = {0};\n\t\tattribute tmpatr3 = {0};\n\n\t\trd = rs->rs_defin;\n\t\tif ((rd->rs_flags & resc_access_perm) == 0)\n\t\t\tcontinue;\n\n\t\tval = val3 = rs->rs_value; /* copy resource attribute */\n\n\t\t/* NOTE: presence of pjob->ji_resources means a multinode job (i.e. pjob->ji_numnodes > 1) */\n\t\tif (pjob->ji_resources != NULL) {\n\t\t\t/* count up sisterhood too */\n\t\t\tunsigned long lnum = 0;\n\t\t\tunsigned long lnum3 = 0;\n\t\t\tnoderes *nr;\n\n\t\t\tif (strcmp(rd->rs_name, \"cput\") == 0) {\n\t\t\t\tfor (i = 0; i < pjob->ji_numrescs; i++) {\n\t\t\t\t\tnr = &pjob->ji_resources[i];\n\t\t\t\t\tlnum += nr->nr_cput;\n\t\t\t\t\tif (nr->nr_status != PBS_NODERES_DELETE)\n\t\t\t\t\t\tlnum3 += nr->nr_cput;\n\t\t\t\t}\n\t\t\t\tval.at_val.at_long += lnum;\n\t\t\t\tval3.at_val.at_long += lnum3;\n\t\t\t} else if (strcmp(rd->rs_name, \"mem\") == 0) {\n\t\t\t\tfor (i = 0; i < pjob->ji_numrescs; i++) {\n\t\t\t\t\tnr = &pjob->ji_resources[i];\n\t\t\t\t\tlnum += nr->nr_mem;\n\t\t\t\t\tif (nr->nr_status != PBS_NODERES_DELETE)\n\t\t\t\t\t\tlnum3 += nr->nr_mem;\n\t\t\t\t}\n\t\t\t\tval.at_val.at_long += lnum;\n\t\t\t\tval3.at_val.at_long += lnum3;\n\t\t\t} else if (strcmp(rd->rs_name, \"cpupercent\") == 0) {\n\t\t\t\tfor (i = 0; i < pjob->ji_numrescs; i++) {\n\t\t\t\t\tnr = &pjob->ji_resources[i];\n\t\t\t\t\tlnum += nr->nr_cpupercent;\n\t\t\t\t\tif (nr->nr_status != PBS_NODERES_DELETE)\n\t\t\t\t\t\tlnum3 += nr->nr_cpupercent;\n\t\t\t\t}\n\t\t\t\tval.at_val.at_long += lnum;\n\t\t\t\tval3.at_val.at_long += lnum3;\n\t\t\t}\n#ifdef PYTHON\n\t\t\telse if (strcmp(rd->rs_name, RESOURCE_UNKNOWN) != 0 &&\n\t\t\t\t (val.at_type == ATR_TYPE_LONG ||\n\t\t\t\t  val.at_type == ATR_TYPE_FLOAT ||\n\t\t\t\t  val.at_type == ATR_TYPE_SIZE ||\n\t\t\t\t  val.at_type == ATR_TYPE_STR)) {\n\n\t\t\t\tPyObject *py_accum = NULL;  /* holds accum resources_used values from all moms (including 
the released sister moms from job) */\n\t\t\t\tPyObject *py_accum3 = NULL; /* holds accum resources_used values from all moms (NOT including the released sister moms from job) */\n\n\t\t\t\t/* The following 2 temp variables will be set to 1\n\t\t\t\t * if there's an error accumulating resources_used\n\t\t\t\t * values from all sister moms including those that\n\t\t\t\t * have been released from the job (fail) or from\n\t\t\t\t * all sister moms NOT including the released nodes\n\t\t\t\t * from job (fail2).\n\t\t\t\t */\n\t\t\t\tint fail = 0;\n\t\t\t\tint fail2 = 0;\n\n\t\t\t\tpy_jvalue = NULL;\n\t\t\t\ttmpatr.at_type = tmpatr3.at_type = val.at_type;\n\n\t\t\t\tif (val.at_type != ATR_TYPE_STR) {\n\t\t\t\t\trd->rs_set(&tmpatr, &val, SET);\n\t\t\t\t\trd->rs_set(&tmpatr3, &val, SET);\n\t\t\t\t} else {\n\t\t\t\t\tpy_accum = PyDict_New();\n\t\t\t\t\tif (py_accum == NULL) {\n\t\t\t\t\t\tlog_err(-1, __func__, \"error creating accumulation dictionary\");\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t\tpy_accum3 = PyDict_New();\n\t\t\t\t\tif (py_accum3 == NULL) {\n\t\t\t\t\t\tlog_err(-1, __func__, \"error creating accumulation dictionary 3\");\n\t\t\t\t\t\tPy_CLEAR(py_accum);\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\t/* accumulating resources_used values from sister\n\t\t\t\t * moms into tmpatr (from all sisters including released\n\t\t\t\t * moms) and tmpatr3 (from sisters that have not been\n\t\t\t\t * released from the job).\n\t\t\t\t */\n\t\t\t\tfor (i = 0; i < pjob->ji_numrescs; i++) {\n\t\t\t\t\tchar mom_hname[PBS_MAXHOSTNAME + 1];\n\t\t\t\t\tchar *p = NULL;\n\t\t\t\t\tattribute *at2;\n\t\t\t\t\tresource *rs2;\n\n\t\t\t\t\tif (pjob->ji_resources[i].nodehost == NULL)\n\t\t\t\t\t\tcontinue;\n\n\t\t\t\t\tat2 = &pjob->ji_resources[i].nr_used;\n\t\t\t\t\tif ((at2->at_flags & ATR_VFLAG_SET) == 0)\n\t\t\t\t\t\tcontinue;\n\n\t\t\t\t\tpbs_strncpy(mom_hname, pjob->ji_resources[i].nodehost, sizeof(mom_hname));\n\t\t\t\t\tmom_hname[PBS_MAXHOSTNAME] = 
'\\0';\n\t\t\t\t\tp = strchr(mom_hname, '.');\n\t\t\t\t\tif (p != NULL)\n\t\t\t\t\t\t*p = '\\0';\n\n\t\t\t\t\tfail = fail2 = 0;\n\t\t\t\t\trs2 = (resource *) GET_NEXT(at2->at_val.at_list);\n\t\t\t\t\tfor (; rs2 != NULL; rs2 = (resource *) GET_NEXT(rs2->rs_link)) {\n\n\t\t\t\t\t\tattribute val2; /* temp variable for accumulating resources_used from sis Moms */\n\t\t\t\t\t\tresource_def *rd2;\n\n\t\t\t\t\t\trd2 = rs2->rs_defin;\n\t\t\t\t\t\tval2 = rs2->rs_value; /* copy resource attribute */\n\t\t\t\t\t\tif ((val2.at_flags & ATR_VFLAG_SET) == 0 || strcmp(rd2->rs_name, rd->rs_name) != 0)\n\t\t\t\t\t\t\tcontinue;\n\n\t\t\t\t\t\tif (val2.at_type == ATR_TYPE_STR) {\n\t\t\t\t\t\t\tsval = val2.at_val.at_str;\n\t\t\t\t\t\t\tpy_jvalue = json_loads(sval, emsg, HOOK_BUF_SIZE - 1);\n\t\t\t\t\t\t\tif (py_jvalue == NULL) {\n\t\t\t\t\t\t\t\tlog_errf(-1, __func__,\n\t\t\t\t\t\t\t\t\t \"Job %s resources_used.%s cannot be accumulated: value '%s' from mom %s not JSON-format: %s\",\n\t\t\t\t\t\t\t\t\t pjob->ji_qs.ji_jobid, rd2->rs_name, sval, mom_hname, emsg);\n\t\t\t\t\t\t\t\tfail = 1;\n\t\t\t\t\t\t\t} else if (PyDict_Merge(py_accum, py_jvalue, 1) != 0) {\n\t\t\t\t\t\t\t\tlog_errf(-1, __func__,\n\t\t\t\t\t\t\t\t\t \"Job %s resources_used.%s cannot be accumulated: value '%s' from mom %s: error merging values\",\n\t\t\t\t\t\t\t\t\t pjob->ji_qs.ji_jobid, rd2->rs_name, sval, mom_hname);\n\t\t\t\t\t\t\t\tPy_CLEAR(py_jvalue);\n\t\t\t\t\t\t\t\tfail = 1;\n\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\tif (pjob->ji_resources[i].nr_status != PBS_NODERES_DELETE) {\n\t\t\t\t\t\t\t\t\tif (PyDict_Merge(py_accum3, py_jvalue, 1) != 0) {\n\t\t\t\t\t\t\t\t\t\tlog_errf(-1, __func__,\n\t\t\t\t\t\t\t\t\t\t\t \"Job %s resources_used.%s cannot be accumulated: value '%s' from mom %s: error merging values\",\n\t\t\t\t\t\t\t\t\t\t\t pjob->ji_qs.ji_jobid, rd2->rs_name, sval, mom_hname);\n\t\t\t\t\t\t\t\t\t\tfail2 = 1;\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\tPy_CLEAR(py_jvalue);\n\t\t\t\t\t\t\t\t} else 
{\n\t\t\t\t\t\t\t\t\tPy_CLEAR(py_jvalue);\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\trd->rs_set(&tmpatr, &val2, INCR);\n\t\t\t\t\t\t\tif (pjob->ji_resources[i].nr_status != PBS_NODERES_DELETE)\n\t\t\t\t\t\t\t\trd->rs_set(&tmpatr3, &val2, INCR);\n\t\t\t\t\t\t}\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\t/* accumulating the resources_used values from MS mom */\n\n\t\t\t\tif (val.at_type == ATR_TYPE_STR) {\n\n\t\t\t\t\tif (fail) {\n\t\t\t\t\t\tPy_CLEAR(py_accum);\n\t\t\t\t\t\tPy_CLEAR(py_accum3);\n\t\t\t\t\t\t/* unset resc */\n\t\t\t\t\t\t(void) add_to_svrattrl_list(phead, ad->at_name, rd->rs_name, \"\", SET, NULL);\n\t\t\t\t\t\t/* go to next resource to encode_used */\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\n\t\t\t\t\tif (fail2) {\n\t\t\t\t\t\tPy_CLEAR(py_accum);\n\t\t\t\t\t\tPy_CLEAR(py_accum3);\n\t\t\t\t\t\t/* unset resc */\n\t\t\t\t\t\t(void) add_to_svrattrl_list(phead, ad3->at_name, rd->rs_name, \"\", SET, NULL);\n\t\t\t\t\t\t/* go to next resource to encode_used */\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\n\t\t\t\t\tsval = val.at_val.at_str;\n\t\t\t\t\tif (PyDict_Size(py_accum) == 0) {\n\t\t\t\t\t\t/* no other values seen\n\t\t\t\t\t\t * except from MS...use as is\n\t\t\t\t\t\t * don't JSONify\n\t\t\t\t\t\t */\n\t\t\t\t\t\trd->rs_decode(&tmpatr, ATTR_used, rd->rs_name, sval);\n\t\t\t\t\t\tPy_CLEAR(py_accum);\n\t\t\t\t\t\tPy_CLEAR(py_accum3);\n\t\t\t\t\t} else if ((py_jvalue = json_loads(sval, emsg, HOOK_BUF_SIZE - 1)) == NULL) {\n\t\t\t\t\t\tlog_errf(-1, __func__,\n\t\t\t\t\t\t\t \"Job %s resources_used.%s cannot be accumulated: value '%s' from mom %s not JSON-format: %s\",\n\t\t\t\t\t\t\t pjob->ji_qs.ji_jobid, rd->rs_name, sval, mom_short_name, emsg);\n\t\t\t\t\t\tPy_CLEAR(py_accum);\n\t\t\t\t\t\tPy_CLEAR(py_accum3);\n\t\t\t\t\t\t/* unset resc */\n\t\t\t\t\t\t(void) add_to_svrattrl_list(phead, ad->at_name, rd->rs_name, \"\", SET, NULL);\n\t\t\t\t\t\t/* go to next resource to encode */\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t} 
else if (PyDict_Merge(py_accum, py_jvalue, 1) != 0) {\n\t\t\t\t\t\tlog_errf(-1, __func__,\n\t\t\t\t\t\t\t \"Job %s resources_used.%s cannot be accumulated: value '%s' from mom %s: error merging values\",\n\t\t\t\t\t\t\t pjob->ji_qs.ji_jobid, rd->rs_name, sval, mom_short_name);\n\t\t\t\t\t\tPy_CLEAR(py_jvalue);\n\t\t\t\t\t\tPy_CLEAR(py_accum);\n\t\t\t\t\t\tPy_CLEAR(py_accum3);\n\t\t\t\t\t\t/* unset resc */\n\t\t\t\t\t\t(void) add_to_svrattrl_list(phead, ad->at_name, rd->rs_name, \"\", SET, NULL);\n\t\t\t\t\t\t/* go to next resource to encode */\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tdumps = json_dumps(py_accum, emsg, HOOK_BUF_SIZE - 1);\n\t\t\t\t\t\tif (dumps == NULL) {\n\t\t\t\t\t\t\tlog_errf(-1, __func__,\n\t\t\t\t\t\t\t\t \"Job %s resources_used.%s cannot be accumulated: %s\",\n\t\t\t\t\t\t\t\t pjob->ji_qs.ji_jobid, rd->rs_name, emsg);\n\t\t\t\t\t\t\tPy_CLEAR(py_jvalue);\n\t\t\t\t\t\t\tPy_CLEAR(py_accum);\n\t\t\t\t\t\t\tPy_CLEAR(py_accum3);\n\t\t\t\t\t\t\t/* unset resc */\n\t\t\t\t\t\t\t(void) add_to_svrattrl_list(phead, ad->at_name, rd->rs_name, \"\", SET, NULL);\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\trd->rs_decode(&tmpatr, ATTR_used, rd->rs_name, dumps);\n\t\t\t\t\t\tPy_CLEAR(py_accum);\n\t\t\t\t\t\tfree(dumps);\n\n\t\t\t\t\t\tif (PyDict_Merge(py_accum3, py_jvalue, 1) != 0) {\n\t\t\t\t\t\t\tlog_errf(-1, __func__,\n\t\t\t\t\t\t\t\t \"Job %s resources_used_update.%s cannot be accumulated: value '%s' from mom %s: error merging values\",\n\t\t\t\t\t\t\t\t pjob->ji_qs.ji_jobid, rd->rs_name, sval, mom_short_name);\n\t\t\t\t\t\t\tPy_CLEAR(py_jvalue);\n\t\t\t\t\t\t\tPy_CLEAR(py_accum3);\n\t\t\t\t\t\t\t/* unset resc */\n\t\t\t\t\t\t\t(void) add_to_svrattrl_list(phead, ad3->at_name, rd->rs_name, \"\", SET, NULL);\n\t\t\t\t\t\t\t/* go to next resource to encode */\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t} else if ((dumps = json_dumps(py_accum3, emsg, HOOK_BUF_SIZE - 1)) == NULL) {\n\t\t\t\t\t\t\tlog_errf(-1, __func__,\n\t\t\t\t\t\t\t\t \"Job 
%s resources_used_update.%s cannot be accumulated: %s\",\n\t\t\t\t\t\t\t\t pjob->ji_qs.ji_jobid, rd->rs_name, emsg);\n\t\t\t\t\t\t\tPy_CLEAR(py_jvalue);\n\t\t\t\t\t\t\tPy_CLEAR(py_accum3);\n\t\t\t\t\t\t\t/* unset resc */\n\t\t\t\t\t\t\t(void) add_to_svrattrl_list(phead, ad3->at_name, rd->rs_name, \"\", SET, NULL);\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\trd->rs_decode(&tmpatr3, ATTR_used_update, rd->rs_name, dumps);\n\t\t\t\t\t\t\tPy_CLEAR(py_jvalue);\n\t\t\t\t\t\t\tPy_CLEAR(py_accum3);\n\t\t\t\t\t\t\tfree(dumps);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tval = tmpatr;\n\t\t\t\tval3 = tmpatr3;\n\t\t\t}\n#endif\n\t\t\t/* no resource to accumulate and yet a multinode job */\n\t\t}\n\n\t\tif (val.at_type != ATR_TYPE_STR || pjob->ji_numnodes == 1 || pjob->ji_resources != NULL) {\n\t\t\t/* for string values, set value if single node job\n\t\t\t * (i.e. pjob->ji_numnodes == 1), or\n\t\t\t * if the value is accumulated from the various\n\t\t\t * values obtained from sister nodes\n\t\t\t * (i.e. 
pjob->ji_resources != NULL).\n\t\t\t */\n\t\t\tif (val.at_type == ATR_TYPE_STR && pjob->ji_numnodes == 1) {\n\t\t\t\t/* check if string value is a valid json string,\n\t\t\t\t * if it is then set the resource string within\n\t\t\t\t * single quotes.\n\t\t\t\t */\n\n\t\t\t\tsval = val.at_val.at_str;\n\t\t\t\tif ((py_jvalue = json_loads(sval, emsg, HOOK_BUF_SIZE - 1)) != NULL) {\n\t\t\t\t\tdumps = json_dumps(py_jvalue, emsg, HOOK_BUF_SIZE - 1);\n\t\t\t\t\tif (dumps == NULL)\n\t\t\t\t\t\tPy_CLEAR(py_jvalue);\n\t\t\t\t\telse {\n\t\t\t\t\t\trd->rs_decode(&tmpatr, ATTR_used, rd->rs_name, dumps);\n\t\t\t\t\t\tval = tmpatr;\n\t\t\t\t\t\tPy_CLEAR(py_jvalue);\n\t\t\t\t\t\tfree(dumps);\n\t\t\t\t\t\tdumps = NULL;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (rd->rs_encode(&val, phead, ad->at_name, rd->rs_name, ATR_ENCODE_CLIENT, NULL) < 0)\n\t\t\t\tgoto encode_used_exit;\n\n\t\t\tif (include_resc_used_update) {\n\t\t\t\tif (rd->rs_encode(&val3, phead, ad3->at_name, rd->rs_name, ATR_ENCODE_CLIENT, NULL) < 0)\n\t\t\t\t\tgoto encode_used_exit;\n\t\t\t}\n\t\t}\n\n\tencode_used_exit:\n\t\tif ((tmpatr.at_flags & ATR_VFLAG_SET) != 0 && tmpatr.at_type == ATR_TYPE_STR)\n\t\t\trd->rs_free(&tmpatr);\n\t\tif ((tmpatr3.at_flags & ATR_VFLAG_SET) != 0 && tmpatr3.at_type == ATR_TYPE_STR)\n\t\t\trd->rs_free(&tmpatr3);\n\t}\n}\n\n/**\n * @brief\n * \tgenerate new resc used update based on given job information\n *\n * @param[in] pjob - pointer to job\n *\n * @return ruu *\n *\n * @retval NULL  - failure\n * @retval !NULL - success\n *\n * @warning\n * \treturned pointer should be free'd using FREE_RUU() when not needed\n */\nstatic ruu *\nget_job_update(job *pjob)\n{\n\t/*\n\t * the following is a list of attributes to be returned to the server\n\t * for a newly executing job. They are returned only if they have\n\t * been modified by Mom.  
Note that JOB_ATR_session_id and JOB_ATR_resc_used\n\t * are always returned;\n\t */\n\tstatic enum job_atr mom_rtn_list[] = {\n\t\tJOB_ATR_errpath,\n\t\tJOB_ATR_outpath,\n\t\tJOB_ATR_altid,\n\t\tJOB_ATR_acct_id,\n\t\tJOB_ATR_jobdir,\n\t\tJOB_ATR_exectime,\n\t\tJOB_ATR_hold,\n\t\tJOB_ATR_variables,\n\t\tJOB_ATR_runcount,\n\t\tJOB_ATR_exec_vnode,\n\t\tJOB_ATR_SchedSelect,\n\t\tJOB_ATR_LAST};\n\truu *prused;\n\tint i;\n\tint nth;\n\tattribute *at;\n\tattribute_def *ad;\n\n\tprused = (ruu *) calloc(1, sizeof(ruu));\n\tif (prused == NULL) {\n\t\tlog_joberr(errno, __func__, \"Out of memory while encoding stat update\", pjob->ji_qs.ji_jobid);\n\t\treturn NULL;\n\t}\n\tCLEAR_LINK(prused->ru_pending);\n\tCLEAR_HEAD(prused->ru_attr);\n\tprused->ru_created_at = time(0);\n\tprused->ru_pjobid = strdup(pjob->ji_qs.ji_jobid);\n\tif (prused->ru_pjobid == NULL) {\n\t\tFREE_RUU(prused);\n\t\tlog_joberr(errno, __func__, \"Out of memory while encoding jobid in stat update\", pjob->ji_qs.ji_jobid);\n\t\treturn NULL;\n\t}\n\n\tresc_access_perm = ATR_DFLAG_MGRD;\n\n\tif (is_jattr_set(pjob, JOB_ATR_run_version))\n\t\tprused->ru_hop = get_jattr_long(pjob, JOB_ATR_run_version);\n\telse\n\t\tprused->ru_hop = get_jattr_long(pjob, JOB_ATR_runcount);\n#ifdef WIN32\n\tif (is_jattr_set(pjob, JOB_ATR_Comment)) {\n\t\tprused->ru_comment = strdup(get_jattr_str(pjob, JOB_ATR_Comment));\n\t\tif (prused->ru_comment == NULL)\n\t\t\tlog_joberr(errno, __func__, \"Out of memory while encoding comment in stat update\", pjob->ji_qs.ji_jobid);\n\t}\n#endif\n\tif ((at = get_jattr(pjob, JOB_ATR_session_id))->at_flags & ATR_VFLAG_MODIFY) {\n\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, \"SID is: %ld\", get_jattr_long(pjob, JOB_ATR_session_id));\n\t\tjob_attr_def[JOB_ATR_session_id].at_encode(at, &prused->ru_attr,\n\t\t\t\t\t\t\t   job_attr_def[JOB_ATR_session_id].at_name,\n\t\t\t\t\t\t\t   NULL, ATR_ENCODE_CLIENT, NULL);\n\t}\n\n\tif (mock_run) {\n\t\t/* Also add substate 
& state to the attrs sent to servers since we don't have a session id */\n\t\tjob_attr_def[JOB_ATR_state].at_encode(get_jattr(pjob, JOB_ATR_state), &prused->ru_attr,\n\t\t\t\t\t\t      job_attr_def[JOB_ATR_state].at_name, NULL, ATR_ENCODE_CLIENT, NULL);\n\t\tjob_attr_def[JOB_ATR_substate].at_encode(get_jattr(pjob, JOB_ATR_substate), &prused->ru_attr,\n\t\t\t\t\t\t\t job_attr_def[JOB_ATR_substate].at_name, NULL, ATR_ENCODE_CLIENT, NULL);\n\t}\n\n\t/* sister moms update of used resources\n\t * is done by send_resc_used_to_ms(); do not send it here\n\t */\n\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE)) {\n\t\t/*\n\t\t * The update_walltime() ensures the walltime is always set before encode_used().\n\t\t * A job without used walltime could occur in the first seconds of a job.\n\t\t * This ensures the used walltime is set even if the elapsed time is zero.\n\t\t */\n\t\tupdate_walltime(pjob);\n\n\t\tencode_used(pjob, &prused->ru_attr);\n\t}\n\n\t/* Now add certain others as required for updating at the Server */\n\tfor (i = 0; mom_rtn_list[i] != JOB_ATR_LAST; ++i) {\n\t\tnth = mom_rtn_list[i];\n\t\tat = get_jattr(pjob, nth);\n\t\tad = &job_attr_def[nth];\n\n\t\tif ((at->at_flags & ATR_VFLAG_MODIFY) ||\n\t\t    (at->at_flags & ATR_VFLAG_HOOK) ||\n\t\t    (pjob->ji_pending_ruu != NULL && find_svrattrl_list_entry(&(((ruu *) pjob->ji_pending_ruu)->ru_attr), ad->at_name, NULL) != NULL)) {\n\t\t\tad->at_encode(at, &prused->ru_attr, ad->at_name, NULL, ATR_ENCODE_CLIENT, NULL);\n\t\t\tif (at->at_flags & ATR_VFLAG_MODIFY)\n\t\t\t\tat->at_flags &= ~ATR_VFLAG_MODIFY;\n\t\t}\n\t}\n\n\treturn prused;\n}\n\n/**\n * @brief\n * \tgenerate resc used update for given job and put it in queue\n * \tto send to server\n *\n * @param[in] pjob - pointer to job\n * @param[in] cmd  - cmd to be send along with resc used update to server\n *\n * @return int\n *\n * @retval 1 - failure\n * @retval 0 - success\n *\n */\nint\nenqueue_update_for_send(job *pjob, int cmd)\n{\n\truu *prused = 
get_job_update(pjob);\n\tif (prused == NULL)\n\t\treturn 1; /* get_job_update has done error logging */\n\n\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0) {\n\t\t/* If sister node of job, send update right away */\n\t\tsend_resc_used(cmd, 1, prused);\n\t\tFREE_RUU(prused);\n\t\treturn 0;\n\t}\n\n\tif (pjob->ji_pending_ruu != NULL) {\n\t\truu *x = (ruu *) (pjob->ji_pending_ruu);\n\t\tFREE_RUU(x);\n\t}\n\tprused->ru_cmd = cmd;\n\tprused->ru_pjob = pjob;\n\tpjob->ji_pending_ruu = prused;\n\tif (cmd == IS_JOBOBIT)\n\t\tprused->ru_status = pjob->ji_qs.ji_un.ji_momt.ji_exitstat;\n\n\t/* link in global pending ruu update list */\n\tappend_link(&mom_pending_ruu, &prused->ru_pending, (void *) prused);\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \tcreate bundles of pending updates in queue based on their cmds;\n * \tif the cmd of an update is not IS_JOBOBIT, check the creation time\n * \tof the update and, based on that, decide whether to send the update\n * \tnow or delay it by rescused_send_delay\n *\n * @param[out] r_cnt    - number of updates in prused bundle\n * @param[out] prused   - bundle of IS_RESCUSED updates\n * @param[out] rh_cnt   - number of updates in prhused bundle\n * @param[out] prhused  - bundle of IS_RESCUSED_FROM_HOOK updates\n * @param[out] obits_cnt - number of updates in obits bundle\n * @param[out] obits    - bundle of IS_JOBOBIT updates\n *\n * @return void\n *\n */\nstatic void\nbundle_ruu(int *r_cnt, ruu **prused, int *rh_cnt, ruu **prhused, int *obits_cnt, ruu **obits)\n{\n\tstatic int rescused_send_delay = 2;\n\truu *cur;\n\truu *next;\n\n\t*r_cnt = 0;\n\t*prused = NULL;\n\t*rh_cnt = 0;\n\t*prhused = NULL;\n\t*obits_cnt = 0;\n\t*obits = NULL;\n\n\tcur = (ruu *) GET_NEXT(mom_pending_ruu);\n\twhile (cur != NULL) {\n\t\tnext = (ruu *) GET_NEXT(cur->ru_pending);\n\t\tif (cur->ru_cmd == IS_JOBOBIT) {\n\t\t\tcur->ru_next = *obits;\n\t\t\t*obits = cur;\n\t\t\t(*obits_cnt)++;\n\t\t} else if (time_now >= (cur->ru_created_at + rescused_send_delay)) {\n\t\t\tif 
(cur->ru_cmd == IS_RESCUSED) {\n\t\t\t\tcur->ru_next = *prused;\n\t\t\t\t*prused = cur;\n\t\t\t\t(*r_cnt)++;\n\t\t\t} else if (cur->ru_cmd == IS_RESCUSED_FROM_HOOK) {\n\t\t\t\tcur->ru_next = *prhused;\n\t\t\t\t*prhused = cur;\n\t\t\t\t(*rh_cnt)++;\n\t\t\t}\n\t\t}\n\t\tcur = next;\n\t}\n}\n\n/**\n * @brief\n * \tSend the amount of resources used by jobs to the server.\n * \tThis function is used to encode and send the data for IS_RESCUSED,\n * \tIS_JOBOBIT, IS_RESCUSED_FROM_HOOK.\n *\n * @param[in] cmd   - communication command to use\n * @param[in] count - number of jobs to update.\n * @param[in] rud   - input structure containing info about the jobs, resources used, etc...\n *\n * @note\n * \tIf cmd is IS_RESCUSED_FROM_HOOK and there's an error communicating\n * \tto the server, the server_stream connection is not closed automatically.\n * \tIt's possible it could be a transient error, and this function may\n * \thave been called from a child mom. Closing the server_stream would\n * \tcause the server to see mom as down.\n *\n * @return void\n *\n */\nvoid\nsend_resc_used(int cmd, int count, ruu *rud)\n{\n\tint ret;\n\n\tif (count == 0 || rud == NULL || server_stream < 0)\n\t\treturn;\n\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_NODE, LOG_DEBUG, \"\",\n\t\t   \"send_resc_used update to server on stream %d\\n\", server_stream);\n\n\tret = is_compose(server_stream, cmd);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto err;\n\n\tret = diswui(server_stream, count);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto err;\n\n\twhile (rud) {\n\t\tret = diswst(server_stream, rud->ru_pjobid);\n\t\tif (ret != DIS_SUCCESS)\n\t\t\tgoto err;\n\n\t\tif (rud->ru_comment) {\n\t\t\t/* non-null comment: send \"1\" followed by comment */\n\t\t\tret = diswsi(server_stream, 1);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\t\t\tret = diswst(server_stream, rud->ru_comment);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\t\t} else {\n\t\t\t/* null comment: send \"0\" */\n\t\t\tret = 
diswsi(server_stream, 0);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\t\t}\n\t\tret = diswsi(server_stream, rud->ru_status);\n\t\tif (ret != DIS_SUCCESS)\n\t\t\tgoto err;\n\n\t\tret = diswsi(server_stream, rud->ru_hop);\n\t\tif (ret != DIS_SUCCESS)\n\t\t\tgoto err;\n\n\t\tret = encode_DIS_svrattrl(server_stream, (svrattrl *) GET_NEXT(rud->ru_attr));\n\t\tif (ret != DIS_SUCCESS)\n\t\t\tgoto err;\n\n\t\trud = rud->ru_next;\n\t}\n\n\tif (dis_flush(server_stream) != 0)\n\t\tgoto err;\n\n\treturn;\n\nerr:\n\tsprintf(log_buffer, \"%s for %d\", dis_emsg[ret], cmd);\n#ifdef WIN32\n\tif (errno != 10054)\n#endif\n\t\tlog_err(errno, \"send_resc_used\", log_buffer);\n\n\tif (cmd != IS_RESCUSED_FROM_HOOK) {\n\t\ttpp_close(server_stream);\n\t\tserver_stream = -1;\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \tGenerate the pending update bundles and send them to the server.\n *\n * @return void\n */\nvoid\nsend_pending_updates(void)\n{\n\tint r_cnt;\n\truu *prused;\n\tint rh_cnt;\n\truu *prhused;\n\tint obits_cnt;\n\truu *obits;\n\truu *next;\n\n\tbundle_ruu(&r_cnt, &prused, &rh_cnt, &prhused, &obits_cnt, &obits);\n\tif (r_cnt > 0) {\n\t\tsend_resc_used(IS_RESCUSED, r_cnt, prused);\n\t\twhile (prused != NULL) {\n\t\t\tnext = prused->ru_next;\n\t\t\tFREE_RUU(prused);\n\t\t\tprused = next;\n\t\t}\n\t}\n\tif (rh_cnt > 0) {\n\t\tsend_resc_used(IS_RESCUSED_FROM_HOOK, rh_cnt, prhused);\n\t\twhile (prhused != NULL) {\n\t\t\tnext = prhused->ru_next;\n\t\t\tFREE_RUU(prhused);\n\t\t\tprhused = next;\n\t\t}\n\t}\n\tif (obits_cnt > 0) {\n\t\tsend_resc_used(IS_JOBOBIT, obits_cnt, obits);\n\t\twhile (obits != NULL) {\n\t\t\tnext = obits->ru_next;\n\t\t\t/*\n\t\t\t * Reply to the outstanding request;\n\t\t\t * this must come after the obit is sent.\n\t\t\t */\n\t\t\tif (obits->ru_pjob && obits->ru_pjob->ji_preq) {\n\t\t\t\treply_ack(obits->ru_pjob->ji_preq);\n\t\t\t\tobits->ru_pjob->ji_preq = NULL;\n\t\t\t}\n\t\t\tFREE_RUU(obits);\n\t\t\tobits = next;\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/resmom/mom_vnode.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tmom_vnode.c\n */\n#include <sys/types.h>\n#include <assert.h>\n#include <errno.h>\n#include <stdio.h>\n#include <unistd.h>\n#include <limits.h>\n#include \"pbs_config.h\"\n#include \"pbs_internal.h\"\n#include \"libpbs.h\"\n#include \"log.h\"\n#include \"server_limits.h\"\n#include \"attribute.h\"\n#include \"placementsets.h\"\n#include \"resource.h\"\n#include \"pbs_nodes.h\"\n#include \"mom_mach.h\"\n#include \"mom_vnode.h\"\n#include \"libutil.h\"\n#include \"hook_func.h\"\n\n#ifndef _POSIX_ARG_MAX\n#define _POSIX_ARG_MAX 4096 /* largest value standards guarantee */\n#endif\n\nextern char mom_host[];\nextern unsigned int pbs_mom_port;\nextern unsigned int pbs_rm_port;\nextern vnl_t *vnlp; /* vnode list */\nstatic void *cpuctx;\nstatic void cpu_inuse(unsigned int, job *, int);\nstatic void *new_ctx(void);\nstatic mom_vninfo_t *new_vnid(const char *, void *);\nstatic void truncate_and_log(int, const char *, char *, int);\nstatic mom_vninfo_t *vnid2mominfo(const char *, const void *);\nenum res_op { RES_DECR,\n\t      RES_INCR,\n\t      RES_SET };\n\n/**\n * @brief\n *\tLog debugging information pertaining to each CPU that we are managing.\n *\tEach CPU may be in one of three states:  free for use, in use by a job,\n *\tor in use but not assigned to a job (the last of these is used for CPUs\n *\tdeclared unusable by 
cpunum_outofservice()).\n *\n * @return Void\n *\n */\nvoid\nmom_CPUs_report(void)\n{\n\tint ret;\n\tchar *p;\n\tchar reportbuf[LOG_BUF_SIZE];\n\tint bufspace; /* space remaining in reportbuf[] */\n\tvoid *idx_ctx = NULL;\n\tmominfo_t *mip = NULL;\n\tint log_ev = PBSEVENT_DEBUG3;\n\n\tif (cpuctx == NULL || !will_log_event(log_ev))\n\t\treturn;\n\n\twhile (pbs_idx_find(cpuctx, NULL, (void **) &mip, &idx_ctx) == PBS_IDX_RET_OK) {\n\t\tunsigned int i;\n\t\tint first;\n\t\tmom_vninfo_t *mvp;\n\n\t\tassert(mip != NULL);\n\t\tassert(mip->mi_data != NULL);\n\n\t\tmvp = (mom_vninfo_t *) mip->mi_data;\n\t\tp = reportbuf;\n\t\tbufspace = sizeof(reportbuf);\n\t\tret = snprintf(p, bufspace, \"%s:  cpus = \", mvp->mvi_id);\n\t\tif (ret >= bufspace) {\n\t\t\ttruncate_and_log(log_ev, __func__, reportbuf, sizeof(reportbuf));\n\t\t\tcontinue;\n\t\t}\n\t\tp += ret;\n\t\tbufspace -= ret;\n\t\tfor (i = 0, first = 1; i < mvp->mvi_ncpus; i++) {\n\t\t\tif (first)\n\t\t\t\tfirst = 0;\n\t\t\telse {\n\t\t\t\tif (bufspace < 1) {\n\t\t\t\t\ttruncate_and_log(log_ev, __func__, reportbuf, sizeof(reportbuf));\n\t\t\t\t\tgoto line_done;\n\t\t\t\t}\n\t\t\t\tsprintf(p, \",\");\n\t\t\t\tp++;\n\t\t\t\tbufspace--;\n\t\t\t}\n\n\t\t\tret = snprintf(p, bufspace, \"%d\", mvp->mvi_cpulist[i].mvic_cpunum);\n\t\t\tif (ret >= bufspace) {\n\t\t\t\ttruncate_and_log(log_ev, __func__, reportbuf, sizeof(reportbuf));\n\t\t\t\tgoto line_done;\n\t\t\t}\n\t\t\tp += ret;\n\t\t\tbufspace -= ret;\n\n\t\t\tif (MVIC_CPUISFREE(mvp, i))\n\t\t\t\tret = snprintf(p, bufspace, \" (free)\");\n\t\t\telse {\n\t\t\t\tif (mvp->mvi_cpulist[i].mvic_job == NULL)\n\t\t\t\t\tret = snprintf(p, bufspace, \" (inuse, no job)\");\n\t\t\t\telse\n\t\t\t\t\tret = snprintf(p, bufspace, \" (inuse, job %s)\", mvp->mvi_cpulist[i].mvic_job->ji_qs.ji_jobid);\n\t\t\t}\n\t\t\tif (ret >= bufspace) {\n\t\t\t\ttruncate_and_log(log_ev, __func__, reportbuf, sizeof(reportbuf));\n\t\t\t\tgoto line_done;\n\t\t\t}\n\t\t\tp += ret;\n\t\t\tbufspace -= 
ret;\n\t\t}\n\t\tlog_event(log_ev, 0, LOG_DEBUG, __func__, reportbuf);\n\tline_done:;\n\t}\n\n\tpbs_idx_free_ctx(idx_ctx);\n}\n\n/**\n * @brief\n *\tIn case of buffer overflow, we log what we can and indicate with an\n *\tellipsis at the end that the line overflowed.\n *\n * @param[in] log_ev - log event\n * @param[in] id - id for log msg\n * @param[in] buf - buffer holding log msg\n * @param[in] bufsize - buffer size\n *\n * @return Void\n *\n */\nstatic void\ntruncate_and_log(int log_ev, const char *id, char *buf, int bufsize)\n{\n\tbuf[bufsize - 4] = buf[bufsize - 3] = buf[bufsize - 2] = '.';\n\tbuf[bufsize - 1] = '\\0';\n\tlog_event(log_ev, 0, LOG_DEBUG, id, buf);\n}\n\n/**\n * @brief\n *\tLog debugging information containing a description of the vnode list\n *\tin 'vnl'.\n *\t(a rather complicated structure described in \"placementsets.h\").\n *\n * @param[in]\tvnl  - vnode list we're interested in\n * @param[in] \theader - heading of the log message\n *\n * @return void\n *\n */\nvoid\nmom_vnlp_report(vnl_t *vnl, char *header)\n{\n\tint i;\n\tchar reportbuf[LOG_BUF_SIZE + 1];\n\tchar *p = NULL;\n\tvnl_t *vp;\n\tchar attrprefix[] = \", attrs[]:  \";\n\tint bytes_left;\n\tint log_ev = PBSEVENT_DEBUG3;\n\n\tif (vnl == NULL || !will_log_event(log_ev))\n\t\treturn;\n\n\tvp = vnl;\n\n\tfor (i = 0; i < vp->vnl_used; i++) {\n\t\tvnal_t *vnalp;\n\t\tint j, k;\n\n\t\tvnalp = VNL_NODENUM(vp, i);\n\t\tbytes_left = LOG_BUF_SIZE;\n\t\tp = reportbuf;\n\t\tk = snprintf(p, bytes_left, \"vnode %s:  nelem %lu\", vnalp->vnal_id, vnalp->vnal_used);\n\t\tif (k < 0)\n\t\t\tbreak;\n\t\tbytes_left -= k;\n\t\tif (bytes_left <= 0)\n\t\t\tbreak;\n\t\tp += k;\n\t\tif (vnalp->vnal_used > 0) {\n\t\t\tif (bytes_left < sizeof(attrprefix))\n\t\t\t\tbreak;\n\t\t\tstrcat(p, attrprefix);\n\t\t\tbytes_left -= sizeof(attrprefix);\n\t\t\tif (bytes_left <= 0)\n\t\t\t\tbreak;\n\t\t\tp += sizeof(attrprefix);\n\t\t}\n\t\tfor (j = 0; j < vnalp->vnal_used; j++) {\n\t\t\tvna_t *vnap;\n\n\t\t\tvnap = 
VNAL_NODENUM(vnalp, j);\n\t\t\tif (j > 0) {\n\t\t\t\tstrcat(p, \", \");\n\t\t\t\tbytes_left -= 2;\n\t\t\t\tif (bytes_left <= 0)\n\t\t\t\t\tbreak;\n\t\t\t\tp += 2;\n\t\t\t}\n\t\t\tk = snprintf(p, bytes_left, \"\\\"%s\\\" = \\\"%s\\\"\", vnap->vna_name, vnap->vna_val);\n\t\t\tif (k < 0)\n\t\t\t\tbreak;\n\t\t\tbytes_left -= k;\n\t\t\tif (bytes_left <= 0)\n\t\t\t\tbreak;\n\t\t\tp += k;\n\t\t}\n\t\tlog_event(log_ev, 0, LOG_DEBUG, header ? header : __func__, reportbuf);\n\t\tp = NULL;\n\t}\n\tif (p != NULL) { /* log any remaining item */\n\t\tlog_event(log_ev, 0, LOG_DEBUG, header ? header : __func__, reportbuf);\n\t}\n}\n\n/**\n * @brief\n *\tAdd a range of CPUs (an element of the form M or M-N where M and N are\n *\tnonnegative integers) to the given mvi.  If any CPUs are already present\n *\tin mvp->mvi_cpulist[], they are preserved and their state is unchanged.\n *\n * @param[in] mvp - pointer to mom_vninfo_t\n * @param[in] cpurange - cpu range\n *\n * @return Void\n *\n */\nstatic void\nadd_CPUrange(mom_vninfo_t *mvp, char *cpurange)\n{\n\tchar *p;\n\tunsigned int from, to;\n\tunsigned int cpunum;\n\tunsigned int i, ncpus;\n\tstatic int chunknum = 0;\n\n\tif ((p = strchr(cpurange, '-')) != NULL) {\n\n\t\t*p = '\\0';\n\t\tfrom = strtoul(cpurange, NULL, 0);\n\t\tto = strtoul(p + 1, NULL, 0);\n\t\tif (from > to) {\n\t\t\tsprintf(log_buffer, \"chunk %d:  lhs (%u) > rhs (%u)\",\n\t\t\t\tchunknum, from, to);\n\t\t\tlog_err(PBSE_SYSTEM, __func__, log_buffer);\n\t\t\treturn;\n\t\t}\n\t} else {\n\t\tfrom = to = strtoul(cpurange, NULL, 0);\n\t\tchunknum++;\n\t}\n\n\tfor (cpunum = from, ncpus = mvp->mvi_ncpus; cpunum <= to; cpunum++) {\n#ifdef DEBUG\n\t\t/*\n\t\t *\tIt's not obvious in reading the code that when we get\n\t\t *\tto the \"for\" loop below, mvp->mvi_cpulist is non-NULL.\n\t\t *\tIt happens because the first time we add any CPUs to\n\t\t *\tthis vnode's CPU list, ncpus will be 0 and we do a\n\t\t *\trealloc(NULL, ...) 
to allocate the initial storage.\n\t\t */\n\t\tif (ncpus > 0)\n\t\t\tassert(mvp->mvi_cpulist != NULL);\n#endif /* DEBUG */\n\t\tfor (i = 0; i < ncpus; i++)\n\t\t\tif (cpunum == mvp->mvi_cpulist[i].mvic_cpunum)\n\t\t\t\tbreak;\n\n\t\tif (i >= ncpus) { /* CPU cpunum not in mvi_cpulist[] */\n\t\t\tmom_mvic_t *l;\n\n\t\t\tl = realloc(mvp->mvi_cpulist,\n\t\t\t\t    (ncpus + 1) * sizeof(mom_mvic_t));\n\t\t\tif (l == NULL) {\n\t\t\t\tlog_err(errno, __func__, \"malloc failure\");\n\t\t\t\treturn;\n\t\t\t} else\n\t\t\t\tmvp->mvi_cpulist = l;\n\t\t\tmvp->mvi_cpulist[ncpus].mvic_cpunum = cpunum;\n\t\t\tcpuindex_free(mvp, ncpus);\n\t\t\tmvp->mvi_ncpus++;\n\t\t\tmvp->mvi_acpus++;\n\t\t\tncpus = mvp->mvi_ncpus;\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\tcpuindex_free() and cpuindex_inuse() are ``context-sensitive'' functions\n *\tthat mark as free or busy a CPU which is referred to by an index that is\n *\trelative to the vnode to which it's attached.  That is, physical CPU 17\n *\tmay be referred to as index 3 relative to vnode \"foo\".\n *\n * @param[in] mvp - pointer to mom_vninfo_t structure\n * @param[in] cpuindex - cpu index\n *\n * @return Void\n *\n */\nvoid\ncpuindex_free(mom_vninfo_t *mvp, unsigned int cpuindex)\n{\n\tchar buf[BUFSIZ];\n\n\tassert(mvp != NULL);\n\tassert(cpuindex <= mvp->mvi_ncpus);\n\tsprintf(buf, \"vnode %s:  mark CPU %u free\", mvp->mvi_id,\n\t\tmvp->mvi_cpulist[cpuindex].mvic_cpunum);\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, buf);\n\n\tmvp->mvi_cpulist[cpuindex].mvic_flags = MVIC_FREE;\n\tmvp->mvi_cpulist[cpuindex].mvic_job = NULL;\n}\n\n/**\n * @brief\n *      cpuindex_free() and cpuindex_inuse() are ``context-sensitive'' functions\n *      that mark as free or busy a CPU which is referred to by an index that is\n *      relative to the vnode to which it's attached.  
That is, physical CPU 17\n *      may be referred to as index 3 relative to vnode \"foo\".\n *\n * @param[in] mvp - pointer to mom_vninfo_t structure\n * @param[in] cpuindex - cpu index\n * @param[in] pjob - pointer to job structure\n *\n * @return Void\n *\n */\nvoid\ncpuindex_inuse(mom_vninfo_t *mvp, unsigned int cpuindex, job *pjob)\n{\n\tchar buf[BUFSIZ];\n\n\tassert(mvp != NULL);\n\tassert(cpuindex <= mvp->mvi_ncpus);\n\tif (pjob == NULL) {\n\t\tsprintf(buf, \"vnode %s:  mark CPU %u inuse\", mvp->mvi_id,\n\t\t\tmvp->mvi_cpulist[cpuindex].mvic_cpunum);\n\t} else {\n\t\tsprintf(buf, \"vnode %s:  mark CPU %u inuse by job %s\",\n\t\t\tmvp->mvi_id, mvp->mvi_cpulist[cpuindex].mvic_cpunum,\n\t\t\tpjob->ji_qs.ji_jobid);\n\t}\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, buf);\n\n\tmvp->mvi_cpulist[cpuindex].mvic_flags = MVIC_ASSIGNED;\n\tmvp->mvi_cpulist[cpuindex].mvic_job = pjob;\n}\n\n/**\n * @brief\n *\tFind the vnode with ID vnid and adjust (decrement, increment, or set)\n *\tthe value of resource res by the amount adjval\n *\n * @param[in] vp - pointer to vnl_t structure\n * @param[in] vnid - vnode id\n * @param[in] res - resource name\n * @param[in] op - enum for resource option\n *\n * @return Void\n *\n */\nstatic void\nresadj(vnl_t *vp, const char *vnid, const char *res, enum res_op op,\n       unsigned int adjval)\n{\n\tint i, j;\n\tchar *vna_newval;\n\n\tsprintf(log_buffer, \"vnode %s, resource %s, res_op %d, adjval %u\",\n\t\tvnid, res, (int) op, adjval);\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, log_buffer);\n\tfor (i = 0; i < vp->vnl_used; i++) {\n\t\tvnal_t *vnalp;\n\n\t\tvnalp = VNL_NODENUM(vp, i);\n\t\tif (strcmp(vnalp->vnal_id, vnid) != 0)\n\t\t\tcontinue;\n\t\tfor (j = 0; j < vnalp->vnal_used; j++) {\n\t\t\tvna_t *vnap;\n\n\t\t\tvnap = VNAL_NODENUM(vnalp, j);\n\t\t\tif (strcmp(vnap->vna_name, res) == 0) {\n\t\t\t\tunsigned int resval;\n\t\t\t\tchar valbuf[BUFSIZ];\n\n\t\t\t\tresval = strtoul(vnap->vna_val, NULL, 
0);\n\t\t\t\tswitch ((int) op) {\n\t\t\t\t\tcase RES_DECR:\n\t\t\t\t\t\tresval -= adjval;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tcase RES_INCR:\n\t\t\t\t\t\tresval += adjval;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tcase RES_SET:\n\t\t\t\t\t\tresval = adjval;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tdefault:\n\t\t\t\t\t\tsprintf(log_buffer, \"unknown res_op %d\",\n\t\t\t\t\t\t\t(int) op);\n\t\t\t\t\t\tlog_event(PBSEVENT_ERROR, 0, LOG_ERR, __func__,\n\t\t\t\t\t\t\t  log_buffer);\n\t\t\t\t\t\treturn;\n\t\t\t\t}\n\n\t\t\t\t/*\n\t\t\t\t *\tDeal with two things that should never\n\t\t\t\t *\thappen:  first, the result of adjusting\n\t\t\t\t *\tthe resource value should never be\n\t\t\t\t *\tnegative.  Second, BUFSIZ should always\n\t\t\t\t *\tbe sufficient to hold any unsigned\n\t\t\t\t *\tquantity PBS deals with.\n\t\t\t\t */\n\t\t\t\tif (((int) resval) < 0) {\n\t\t\t\t\tlog_event(PBSEVENT_ERROR, 0, LOG_ERR,\n\t\t\t\t\t\t  __func__, \"res underflow\");\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tif (snprintf(valbuf, sizeof(valbuf), \"%u\",\n\t\t\t\t\t     resval) >= sizeof(valbuf)) {\n\t\t\t\t\tlog_event(PBSEVENT_ERROR, 0, LOG_ERR,\n\t\t\t\t\t\t  __func__, \"res overflow\");\n\t\t\t\t\treturn;\n\t\t\t\t}\n\n\t\t\t\t/*\n\t\t\t\t *\tWe now replace the current value with\n\t\t\t\t *\tthe adjusted one.  
This may involve\n\t\t\t\t *\tsurgery on the vna_t.\n\t\t\t\t */\n\t\t\t\tvna_newval = strdup(valbuf);\n\t\t\t\tif (vna_newval != NULL) {\n\t\t\t\t\tfree(vnap->vna_val);\n\t\t\t\t\tvnap->vna_val = vna_newval;\n\t\t\t\t} else\n\t\t\t\t\tlog_err(PBSE_SYSTEM, __func__, \"vna_newval strdup failed\");\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t}\n\n\tsprintf(log_buffer, \"vnode %s, resource %s not found\", vnid, res);\n\tlog_event(PBSEVENT_DEBUG, 0, LOG_DEBUG, __func__, log_buffer);\n}\n\n/**\n * @brief\n *\tcpunum_outofservice() is a ``context-free'' function that marks a CPU\n *\t(which is referred to by its physical CPU number) as being unusable.\n *\n * @param[in] cpunum - number of cpu\n *\n * @return Void\n *\n */\nvoid\ncpunum_outofservice(unsigned int cpunum)\n{\n\tchar buf[BUFSIZ];\n\n\tsprintf(buf, \"mark CPU %u out of service\", cpunum);\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, buf);\n\n\tcpu_inuse(cpunum, NULL, 1);\n}\n\n/**\n * @brief\n *\tCommon code for cpunum_inuse() and cpunum_outofservice():  to find\n *\tthe given CPU in our list of CPUs per vnode, we walk the list of\n *\tmom_vninfo_t structures and for each of those, the attached CPU lists\n *\tlooking for a match.  
If taking a CPU out of service, cpu_inuse() must\n *\talso adjust the \"resources_available.ncpus\" for the vnode that contains\n *\tthe CPU being taken out of service.\n *\n * @param[in] cpunum - number of cpu\n * @param[in] pjob - pointer to job structure\n * @param[in] outofserviceflag - flag value to indicate whether cpu out of service\n *\n * @return - Void\n *\n */\nstatic void\ncpu_inuse(unsigned int cpunum, job *pjob, int outofserviceflag)\n{\n\tstatic char ra_ncpus[] = \"resources_available.ncpus\";\n\tvoid *idx_ctx = NULL;\n\tmominfo_t *mip = NULL;\n\n\tif (cpuctx == NULL)\n\t\treturn;\n\n\twhile (pbs_idx_find(cpuctx, NULL, (void **) &mip, &idx_ctx) == PBS_IDX_RET_OK) {\n\t\tunsigned int i;\n\t\tmom_vninfo_t *mvp;\n\n\t\tassert(mip != NULL);\n\t\tassert(mip->mi_data != NULL);\n\n\t\tmvp = (mom_vninfo_t *) mip->mi_data;\n\t\tfor (i = 0; i < mvp->mvi_ncpus; i++) {\n\t\t\tif (mvp->mvi_cpulist[i].mvic_cpunum == cpunum) {\n\t\t\t\tif (MVIC_CPUISFREE(mvp, i)) {\n\t\t\t\t\tcpuindex_inuse(mvp, i, pjob);\n\t\t\t\t\tif (outofserviceflag != 0) {\n\t\t\t\t\t\tassert(vnlp != NULL);\n\t\t\t\t\t\tassert(mvp->mvi_id != NULL);\n\t\t\t\t\t\tresadj(vnlp, mvp->mvi_id, ra_ncpus, RES_DECR, 1);\n\t\t\t\t\t\tmvp->mvi_acpus--;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tpbs_idx_free_ctx(idx_ctx);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t}\n\n\tpbs_idx_free_ctx(idx_ctx);\n\n\t/*\n\t *\tIf we get here, we didn't find the CPU in question.\n\t *\tRequests to mark a CPU for which we have no record\n\t *\tout of service may be benign;  we may never have\n\t *\tknown about it because we were never told about it\n\t *\tin a vnode definitions file, and the caller may\n\t *\tsimply not have checked first.  So, we silently\n\t *\tignore those requests.  
However, if we're asked\n\t *\tto mark a CPU in use but haven't heard of it, that's\n\t *\tan error.\n\t */\n\tif (outofserviceflag == 0) {\n\t\tsprintf(log_buffer, \"CPU %u not found in cpuctx\", cpunum);\n\t\tlog_err(PBSE_SYSTEM, __func__, log_buffer);\n\t}\n}\n\n/**\n * @brief\n *\tAdd a list of CPUs (one or more elements separated by ',' and of the\n *\tform M or M-N where M and N are nonnegative integers) to the given mvi.\n *\n * @param[in] mvp - pointer to mom_vninfo_t structure which holds per-mom info\n * @param[in] cpulist - cpu list separated by ','.\n *\n * @return Void\n *\n */\nstatic void\nadd_CPUlist(mom_vninfo_t *mvp, char *cpulist)\n{\n\tchar *p;\n\n\tif ((p = strtok(cpulist, \",\")) != NULL) {\n\t\tadd_CPUrange(mvp, p);\n\t\twhile ((p = strtok(NULL, \",\")) != NULL)\n\t\t\tadd_CPUrange(mvp, p);\n\t}\n}\n\n/**\n * @brief\n *\tAdd the given wad of data (really a mominfo_t structure) to the given\n *\tvnode ID, returning 0 if successful or 1 on failure.  An entry with the\n *\tgiven vnode ID should not already be present;  users of this function\n *\tshould first check this using find_mominfo(), calling add_mominfo()\n *\tonly if find_mominfo() returned NULL.\n *\n * @param[in] ctx - vnode map context\n * @param[in] vnid - vnode id\n * @param[in] data - info about vnode\n *\n * @return int\n * @retval 1 Failure\n * @retval 0 Success\n *\n */\nstatic int\nadd_mominfo(void *ctx, const char *vnid, void *data)\n{\n\n\tsprintf(log_buffer, \"ctx %p, vnid %s, data %p\", ctx, vnid, data);\n\tlog_event(PBSEVENT_DEBUG3, 0, LOG_DEBUG, __func__, log_buffer);\n\n\tassert(find_vmapent_byID(ctx, vnid) == NULL);\n\n\tif (add_vmapent_byID(ctx, vnid, data) == 0)\n\t\treturn (0);\n\telse\n\t\treturn (1);\n}\n\n/**\n * @brief\n *\tReturn a pointer to the mominfo_t data associated with a given vnode ID,\n *\tor NULL if no vnode with the given ID is present.\n *\n * @param[in] vnid - vnode id\n *\n * @return structure\n * @retval structure handle to mominfo_t\n *\n */\nmominfo_t 
*\nfind_mominfo(const char *vnid)\n{\n\tif (cpuctx == NULL) {\n\t\tlog_err(PBSE_SYSTEM, __func__, \"CPU context not initialized\");\n\t\treturn NULL;\n\t} else\n\t\treturn (find_vmapent_byID(cpuctx, vnid));\n}\n\n/**\n * @brief\n *\tThis function is called from vn_addvnr() before vn_addvnr() inserts a\n *\tnew name/value pair.  If we return zero, the insertion of the given\n *\t<ID, name, value> tuple will not occur (but processing of the file\n *\twill continue normally);  if we return nonzero, the insertion of\n *\tthe given tuple will occur (and again, processing continues normally).\n *\n *\tCurrently we use this function to perform these actions:\n *\n *\t\tfor the \"cpus\" attribute, build a list of the CPUs belonging\n *\t\tto given vnodes\n *\n *\t\tfor the \"mems\" attribute, to record the memory node number of\n *\t\tthe memory board belonging to a given vnode (note that in\n *\t\tcontrast to CPUs, of which there may be more than one, the\n *\t\tmodel for memory is that of a single (logical) memory board\n *\t\tper vnode)\n *\n *\t\tfor the \"sharing\" attribute, we simply remember the attribute\n *\t\tvalue for later use\n *\n *\t\tfor the \"resources_available.mem\" attribute, set a flag that\n *\t\ttells us to remember to do the memreserved adjustment\n *\n * @param[in] vnid - vnode id\n * @param[in] attr - attributes\n * @param[in] attrval - attribute value\n *\n * @return int\n * @retval -1     Failure\n * @retval  0,1   Success\n *\n */\nint\nvn_callback(const char *vnid, char *attr, char *attrval)\n{\n\tstatic void *ctx = NULL;\n\n\tif (strcmp(attr, \"cpus\") == 0) {\n\t\tmom_vninfo_t *mvp;\n\n\t\tsprintf(log_buffer, \"vnid %s, attr %s, val %s\",\n\t\t\tvnid, attr, attrval);\n\t\tlog_event(PBSEVENT_DEBUG3, 0, 0, __func__, log_buffer);\n\n\t\tif ((ctx == NULL) && ((ctx = new_ctx()) == NULL))\n\t\t\treturn (-1);\n\t\tif ((mvp = vnid2mominfo(vnid, ctx)) == NULL)\n\t\t\treturn (0);\n\n\t\tadd_CPUlist(mvp, attrval);\n\t\treturn (0);\n\t} else if 
(strcmp(attr, \"mems\") == 0) {\n\t\tmom_vninfo_t *mvp;\n\n\t\tsprintf(log_buffer, \"vnid %s, attr %s, val %s\",\n\t\t\tvnid, attr, attrval);\n\t\tlog_event(PBSEVENT_DEBUG3, 0, 0, __func__, log_buffer);\n\n\t\tif ((ctx == NULL) && ((ctx = new_ctx()) == NULL))\n\t\t\treturn (-1);\n\t\tif ((mvp = vnid2mominfo(vnid, ctx)) == NULL)\n\t\t\treturn (0);\n\n\t\tmvp->mvi_memnum = atoi(attrval);\n\t\treturn (0);\n\t} else if (strcmp(attr, \"sharing\") == 0) {\n\t\tmom_vninfo_t *mvp;\n\n\t\tif ((ctx == NULL) && ((ctx = new_ctx()) == NULL))\n\t\t\treturn (-1);\n\t\tif ((mvp = vnid2mominfo(vnid, ctx)) == NULL)\n\t\t\treturn (0);\n\n\t\tmvp->mvi_sharing = str_to_vnode_sharing(attrval);\n\t\treturn (1);\n\n\t} else\n\t\treturn (1);\n}\n\n/**\n * @brief\n *\tCreate (if necessary) the vnode map context and return it.\n *\n * @return vnode map context on Success or NULL on failure\n *\n */\nstatic void *\nnew_ctx(void)\n{\n\tif (!create_vmap(&cpuctx)) {\n\t\tlog_err(PBSE_SYSTEM, __func__, \"create_vmap failed\");\n\t\treturn NULL;\n\t} else\n\t\treturn (cpuctx);\n}\n\n/**\n * @brief\n *\tReturn a pointer to the vnode info (mom_vninfo_t) for the given vnode\n *\tID, creating a new entry if none exists yet.\n *\n * @param[in] vnid - vnode id\n * @param[in] ctx - vnode map context\n *\n * @return structure handle\n * @retval pointer to mom_vninfo_t\n *\n */\nstatic mom_vninfo_t *\nvnid2mominfo(const char *vnid, const void *ctx)\n{\n\tmominfo_t *mip;\n\tmom_vninfo_t *mvp;\n\n\tassert(vnid != NULL);\n\tassert(ctx != NULL);\n\n\tif ((mip = find_vmapent_byID((void *) ctx, vnid)) != NULL) {\n\t\tsprintf(log_buffer, \"found vnid %s\", vnid);\n\t\tlog_event(PBSEVENT_DEBUG3, 0, 0, __func__, log_buffer);\n\n\t\tmvp = mip->mi_data;\n\t\tassert(mvp != NULL);\n\t} else if ((mvp = new_vnid(vnid, (void *) ctx)) == NULL)\n\t\treturn NULL;\n\n\treturn (mvp);\n}\n\n/**\n * @brief\n *\tCreate a new vnode info entry for the given vnode ID.\n *\n * @param[in] vnid - vnode id\n * @param[in] ctx - vnode map context\n *\n * @return structure handle\n * @retval pointer to mom_vninfo_t\n *\n */\nstatic mom_vninfo_t *\nnew_vnid(const char *vnid, void *ctx)\n{\n\tmominfo_t 
*mip;\n\tmom_vninfo_t *mvp;\n\tchar *newid;\n\n\tsprintf(log_buffer, \"no vnid %s - creating\", vnid);\n\tlog_event(PBSEVENT_DEBUG3, 0, 0, __func__, log_buffer);\n\n\tif ((mip = malloc(sizeof(mominfo_t))) == NULL) {\n\t\tlog_err(errno, __func__, \"malloc mominfo_t\");\n\t\treturn NULL;\n\t}\n\tif ((mvp = malloc(sizeof(mom_vninfo_t))) == NULL) {\n\t\tfree(mip);\n\t\tlog_err(errno, __func__, \"malloc vninfo_t\");\n\t\treturn NULL;\n\t}\n\tif ((newid = strdup(vnid)) == NULL) {\n\t\tfree(mvp);\n\t\tfree(mip);\n\t\tlog_err(errno, __func__, \"strdup vnid\");\n\t\treturn NULL;\n\t}\n\n\tsnprintf(mip->mi_host, sizeof(mip->mi_host), \"%s\", mom_host);\n\tmip->mi_port = pbs_mom_port;\n\tmip->mi_rmport = pbs_rm_port;\n\tmip->mi_data = mvp;\n\tmvp->mvi_id = newid;\n\tmvp->mvi_ncpus = mvp->mvi_acpus = 0;\n\tmvp->mvi_cpulist = NULL;\n\tmvp->mvi_memnum = (unsigned int) -1; /* uninitialized data marker */\n\n\tif (add_mominfo(ctx, vnid, mip) != 0) {\n\t\tlog_errf(PBSE_SYSTEM, __func__, \"add_mominfo %s failed\", vnid);\n\t\treturn NULL;\n\t}\n\n\treturn (mvp);\n}\n"
  },
  {
    "path": "src/resmom/mom_walltime.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n#include <pbs_config.h>\n\n#include \"attribute.h\"\n#include \"job.h\"\n#include \"pbs_assert.h\"\n#include \"resource.h\"\n#include <stdlib.h>\n#include <sys/types.h>\n#include <time.h>\n\ntime_t time_now = 0;\ndouble wallfactor = 1.00;\n\n/**\n * @brief\n *\n *\t\tstart_walltime() starts counting the walltime of a job.\n *\n * @param[in] \tpjob\t    - pointer to the job\n *\n * @return\tvoid\n *\n * @par MT-safe: No\n */\nvoid\nstart_walltime(job *pjob)\n{\n\tif (NULL == pjob)\n\t\treturn;\n\t/*\n\t * time_now is global and should have positive value at this time\n\t * if not, set it to current time\n\t */\n\tif (0 == time_now)\n\t\ttime_now = time(NULL);\n\n\tpjob->ji_walltime_stamp = time_now;\n}\n\n/**\n * @brief\n *\n *\tupdate_walltime() updates the walltime of a job. 
If walltime is\n *\tnot in resources_used then update_walltime() creates a new entry\n *\tfor it.\n *\n * @param[in] \tpjob\t    - pointer to the job\n *\n * @return\tvoid\n *\n * @par MT-safe: No\n */\nvoid\nupdate_walltime(job *pjob)\n{\n\tattribute *resources_used;\n\tresource_def *walltime_def;\n\tresource *used_walltime;\n\n\tresources_used = get_jattr(pjob, JOB_ATR_resc_used);\n\tassert(resources_used != NULL);\n\twalltime_def = &svr_resc_def[RESC_WALLTIME];\n\tused_walltime = find_resc_entry(resources_used, walltime_def);\n\n\t/* if walltime entry is not created yet, create it */\n\tif (NULL == used_walltime) {\n\t\tused_walltime = add_resource_entry(resources_used, walltime_def);\n\t\tmark_attr_set(&used_walltime->rs_value);\n\t\tused_walltime->rs_value.at_type = ATR_TYPE_LONG;\n\t\tused_walltime->rs_value.at_val.at_long = 0;\n\t}\n\n\tif (0 != (used_walltime->rs_value.at_flags & ATR_VFLAG_HOOK)) {\n\t\t/* walltime is set by hook so do not update here */\n\t\treturn;\n\t}\n\n\tif (0 != pjob->ji_walltime_stamp) {\n\t\t/* walltime counting is not stopped so update it */\n\t\tset_attr_l(&used_walltime->rs_value, (long) ((time_now - pjob->ji_walltime_stamp) * wallfactor), INCR);\n\t\tpjob->ji_walltime_stamp = time_now;\n\t}\n}\n\n/**\n * @brief\n *\n *\t\tstop_walltime() stops counting the walltime of a job.\n *\n * @param[in] \tpjob\t    - pointer to the job\n *\n * @return\tvoid\n *\n * @par MT-safe: No\n */\nvoid\nstop_walltime(job *pjob)\n{\n\tif (NULL == pjob)\n\t\treturn;\n\t/*\n\t * time_now is global and should have positive value at this time\n\t * if not, set it to current time\n\t */\n\tif (0 == time_now)\n\t\ttime_now = time(NULL);\n\n\t/* update walltime and stop accumulating */\n\tupdate_walltime(pjob);\n\tpjob->ji_walltime_stamp = 0;\n}\n\n/**\n * @brief\n *\n *\t\trecover_walltime() tries to recover the used walltime of a job.\n *\n * @param[in] \tpjob\t    - pointer to the job\n *\n * @return\tvoid\n *\n * @par MT-safe: No\n 
*/\nvoid\nrecover_walltime(job *pjob)\n{\n\tattribute *resources_used;\n\tresource_def *walltime_def;\n\tresource *used_walltime;\n\n\tif (NULL == pjob)\n\t\treturn;\n\n\tif (0 == pjob->ji_qs.ji_stime)\n\t\treturn;\n\n\tif (0 == time_now)\n\t\ttime_now = time(NULL);\n\n\tresources_used = get_jattr(pjob, JOB_ATR_resc_used);\n\tassert(resources_used != NULL);\n\twalltime_def = &svr_resc_def[RESC_WALLTIME];\n\tassert(walltime_def != NULL);\n\tused_walltime = find_resc_entry(resources_used, walltime_def);\n\n\t/*\n\t* if the used walltime is not set, try to recover it.\n\t*/\n\tif (NULL == used_walltime) {\n\t\tused_walltime = add_resource_entry(resources_used, walltime_def);\n\t\tmark_attr_set(&used_walltime->rs_value);\n\t\tused_walltime->rs_value.at_type = ATR_TYPE_LONG;\n\t\tused_walltime->rs_value.at_val.at_long = (long) ((double) (time_now - pjob->ji_qs.ji_stime) * wallfactor);\n\t}\n}\n"
  },
  {
    "path": "src/resmom/popen.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * Copyright (c) 1988, 1993\n *\tThe Regents of the University of California.  All rights reserved.\n *\n * This code is derived from software written by Ken Arnold and\n * published in UNIX Review, Vol. 6, No. 
8.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions\n * are met:\n * 1. Redistributions of source code must retain the above copyright\n *    notice, this list of conditions and the following disclaimer.\n * 2. Redistributions in binary form must reproduce the above copyright\n *    notice, this list of conditions and the following disclaimer in the\n *    documentation and/or other materials provided with the distribution.\n * 3. All advertising materials mentioning features or use of this software\n *    must display the following acknowledgement:\n *\tThis product includes software developed by the University of\n *\tCalifornia, Berkeley and its contributors.\n * 4. Neither the name of the University nor the names of its contributors\n *    may be used to endorse or promote products derived from this software\n *    without specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND\n * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED.  
IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE\n * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS\n * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\n * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\n * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\n * SUCH DAMAGE.\n *\n * $FreeBSD: src/lib/libc/gen/popen.c,v 1.14 2000/01/27 23:06:19 jasone Exp $\n */\n/**\n * @file\tpopen.c\n */\n#include <stdio.h>\n#include <stdlib.h>\n#include <signal.h>\n#include <errno.h>\n#include <unistd.h>\n#include <string.h>\n\n#include <sys/param.h>\n#include <sys/wait.h>\n#include \"log.h\"\n\nextern pid_t fork_me(int sock);\nextern int kill_session(pid_t pid, int sig, int dir);\n\nextern char **environ;\n\nstatic struct pid {\n\tstruct pid *next;\n\tFILE *fp;\n\tpid_t pid;\n} * pidlist;\n\n/**\n * @brief\n *\t-implementation of the PBS pipe open call.\n *\n * @param[in] command - command to run via \"/bin/sh -c\"\n * @param[in] type - mode of pipe\n *\n * @return\tFILE pointer\n * @retval\tFILE pointer\tsuccess\n * @retval\tNULL\t\terror\n *\n */\n\nFILE *\npbs_popen(const char *command, const char *type)\n{\n\tstruct pid *cur;\n\tFILE *iop;\n\tint pdes[2], pid, twoway;\n\tchar *argv[4];\n\tstruct pid *p;\n\n\t/*\n\t * Lite2 introduced two-way popen() pipes using socketpair().\n\t * FreeBSD's pipe() is bidirectional, so we use that.\n\t */\n\tif (strchr(type, '+')) {\n\t\ttwoway = 1;\n\t\ttype = \"r+\";\n\t} else {\n\t\ttwoway = 0;\n\t\tif ((*type != 'r' && *type != 'w') || type[1])\n\t\t\treturn NULL;\n\t}\n\tif (pipe(pdes) < 0)\n\t\treturn NULL;\n\n\tif ((cur = malloc(sizeof(struct pid))) == NULL) {\n\t\tlog_err(errno, __func__, \"Could not allocate memory for new file descriptor\");\n\t\t(void) close(pdes[0]);\n\t\t(void) close(pdes[1]);\n\t\treturn 
NULL;\n\t}\n\n\targv[0] = \"sh\";\n\targv[1] = \"-c\";\n\targv[2] = (char *) command;\n\targv[3] = NULL;\n\n\tswitch (pid = fork_me(-1)) {\n\t\tcase -1: /* Error. */\n\t\t\t(void) close(pdes[0]);\n\t\t\t(void) close(pdes[1]);\n\t\t\tfree(cur);\n\t\t\treturn NULL;\n\t\t\t/* NOTREACHED */\n\t\tcase 0: /* Child. */\n\t\t\t/* create a new session */\n\t\t\tif (setsid() == -1)\n\t\t\t\t_exit(127);\n\n\t\t\tif (*type == 'r') {\n\t\t\t\t/*\n\t\t\t\t * The dup2() to STDIN_FILENO is repeated to avoid\n\t\t\t\t * writing to pdes[1], which might corrupt the\n\t\t\t\t * parent's copy.  This isn't good enough in\n\t\t\t\t * general, since the _exit() is no return, so\n\t\t\t\t * the compiler is free to corrupt all the local\n\t\t\t\t * variables.\n\t\t\t\t */\n\t\t\t\t(void) close(pdes[0]);\n\t\t\t\tif (pdes[1] != STDOUT_FILENO) {\n\t\t\t\t\t(void) dup2(pdes[1], STDOUT_FILENO);\n\t\t\t\t\t(void) close(pdes[1]);\n\t\t\t\t\tif (twoway)\n\t\t\t\t\t\t(void) dup2(STDOUT_FILENO, STDIN_FILENO);\n\t\t\t\t} else if (twoway && (pdes[1] != STDIN_FILENO))\n\t\t\t\t\t(void) dup2(pdes[1], STDIN_FILENO);\n\t\t\t} else {\n\t\t\t\tif (pdes[0] != STDIN_FILENO) {\n\t\t\t\t\t(void) dup2(pdes[0], STDIN_FILENO);\n\t\t\t\t\t(void) close(pdes[0]);\n\t\t\t\t}\n\t\t\t\t(void) close(pdes[1]);\n\t\t\t}\n\t\t\tfor (p = pidlist; p; p = p->next) {\n\t\t\t\t(void) close(fileno(p->fp));\n\t\t\t}\n\t\t\texecve(\"/bin/sh\", argv, environ);\n\t\t\t_exit(127);\n\t\t\t/* NOTREACHED */\n\t}\n\n\t/* Parent; assume fdopen can't fail. */\n\tif (*type == 'r') {\n\t\tiop = fdopen(pdes[0], type);\n\t\t(void) close(pdes[1]);\n\t} else {\n\t\tiop = fdopen(pdes[1], type);\n\t\t(void) close(pdes[0]);\n\t}\n\n\t/* Link into list of file descriptors. 
*/\n\tcur->fp = iop;\n\tcur->pid = pid;\n\tcur->next = pidlist;\n\tpidlist = cur;\n\n\treturn (iop);\n}\n\n/**\n * @brief\n * \t-pbs_pkill Send a signal to the child process started by pbs_popen.\n *\n * @param[in] iop - file pointer\n * @param[in] sig - signal number\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\t-1\terror\n *\n */\nint\npbs_pkill(FILE *iop, int sig)\n{\n\tregister struct pid *cur;\n\tint ret;\n\n\t/* Find the appropriate file pointer. */\n\tfor (cur = pidlist; cur; cur = cur->next) {\n\t\tif (cur->fp == iop)\n\t\t\tbreak;\n\t}\n\tif (cur == NULL)\n\t\treturn -1;\n\n\tret = kill_session(cur->pid, sig, 0);\n\treturn ret;\n}\n\n/**\n * @brief\n * \t-pbs_pclose -- close fds related to the opened pipe\n *\n * @par\tpbs_pclose returns -1 if the stream is not associated with a `popened' command,\n *\tif already `pclosed', or if waitpid returns an error.\n *\n * @param[in] iop - file pointer\n *\n * @return \tint\n * @retval\t0\tsuccess\n * @retval\t-1\terror\n *\n */\nint\npbs_pclose(FILE *iop)\n{\n\tregister struct pid *cur, *last;\n\tint pstat;\n\tpid_t pid;\n\n\t/* Find the appropriate file pointer. */\n\tfor (last = NULL, cur = pidlist; cur; last = cur, cur = cur->next) {\n\t\tif (cur->fp == iop)\n\t\t\tbreak;\n\t}\n\tif (cur == NULL)\n\t\treturn (-1);\n\n\t(void) fclose(iop);\n\t(void) kill_session(cur->pid, SIGKILL, 0);\n\n\tdo {\n\t\tpid = waitpid(cur->pid, &pstat, 0);\n\t} while (pid == -1 && errno == EINTR);\n\n\t/* Remove the entry from the linked list. */\n\tif (last == NULL)\n\t\tpidlist = cur->next;\n\telse\n\t\tlast->next = cur->next;\n\tfree(cur);\n\n\treturn (pid == -1 ? -1 : pstat);\n}\n"
  },
  {
    "path": "src/resmom/prolog.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n/**\n * @file prolog.c\n */\n#define PBS_MOM 1\n#include <stdio.h>\n#include <unistd.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <signal.h>\n#include <sys/types.h>\n\n#include <unistd.h>\n#include <pwd.h>\n#include <sys/wait.h>\n#include <sys/stat.h>\n#include \"libpbs.h\"\n#include \"list_link.h\"\n#include \"server_limits.h\"\n#include \"attribute.h\"\n#include \"job.h\"\n#include \"log.h\"\n#include \"mom_mach.h\"\n#include \"mom_func.h\"\n\n#define PBS_PROLOG_TIME 30\n\nunsigned int pe_alarm_time = PBS_PROLOG_TIME;\nstatic pid_t child;\nstatic int run_exit;\n\nextern int pe_input(char *jobid);\n\n/**\n * @brief\n *\tconvert resources_[list or used] values to a single\n *\tcomma-separated string.\n *\n * @param[in] pjob - the job holding the attribute to convert.\n * @param[in] attr_idx - index of the attribute to convert.\n * @param[in][out] buf - the buffer into which to convert.\n * @param[in] buflen - the length of the above buffer.\n *\n * @return pointer to 'buf', and also a modified 'buf'.\n * @retval = string\n * @retval = empty buffer on malloc failure under windows only\n *\n * @note     \tThis function may not concatenate all resources and their values as one string if buf size is insufficient.\n *\t\tThat means the returned buf may list fewer resources than the actual list.\n *\t\tThis effect may occur in 
both windows and Linux. No indication of this error is given.\n *\t\tAlso, for windows only, this function returns empty buf if malloc fails.\n *\n */\n\nstatic char *\nresc_to_string(job *pjob, int attr_idx, char *buf, int buflen)\n{\n\tint need;\n\tsvrattrl *patlist;\n\tpbs_list_head svlist;\n\n\tCLEAR_HEAD(svlist);\n\t*buf = '\\0';\n\n\tif (encode_resc(get_jattr(pjob, attr_idx), &svlist, \"x\", NULL, ATR_ENCODE_CLIENT, NULL) <= 0)\n\t\treturn (buf);\n\n\tpatlist = (svrattrl *) GET_NEXT(svlist);\n\twhile (patlist) {\n\t\tneed = strlen(patlist->al_resc) + strlen(patlist->al_value) + 3;\n\t\tif (need < buflen) {\n\t\t\t(void) strcat(buf, patlist->al_resc);\n\t\t\t(void) strcat(buf, \"=\");\n\t\t\t(void) strcat(buf, patlist->al_value);\n\t\t\tbuflen -= need;\n\t\t}\n\t\tpatlist = (svrattrl *) GET_NEXT(patlist->al_link);\n\t\tif (patlist)\n\t\t\t(void) strcat(buf, \",\");\n\t}\n\treturn (buf);\n}\n\n/**\n * @brief\n * \tpelog_err - record error for run_pelog()\n *\n * @param[in] pjob - pointer to job structure\n * @param[in] file - file name\n * @param[in] n - error/exit number to record and return\n * @param[in] text - error message\n *\n * @return int\n * @retval error number\n *\n */\n\nstatic int\npelog_err(job *pjob, char *file, int n, char *text)\n{\n\tsprintf(log_buffer, \"pro/epilogue failed, file: %s, exit: %d, %s\",\n\t\tfile, n, text);\n\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_WARNING,\n\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\treturn (n);\n}\n\n/**\n * @brief\n *\tpelogalm() - alarm handler for run_pelog()\n *\n * @param[in] sig - signal number\n *\n * @return Void\n *\n */\nstatic void\npelogalm(int sig)\n{\n\trun_exit = -4;\n}\n\n/**\n * @brief\n *\trun_pelog() - Run the Prologue/Epilogue script\n * @par\n *\tThe script is run under the uid of root; both the prologue and the epilogue have:\n *\t\t- argv[1] is the jobid\n *\t\t- argv[2] is the user's name\n *\t\t- argv[3] is the user's group name\n *\t\t- the input file is an architecture-dependent file\n *\t\t- the output and error are the job's output and error\n 
#ifdef NAS localmod 095\n *\tThe prologue also has:\n *\t\t- argv[4] is the list of resource limits specified\n #endif localmod 095\n *\tThe epilogue also has:\n *\t\t- argv[4] is the job name\n *\t\t- argv[5] is the session id\n *\t\t- argv[6] is the list of resource limits specified\n *\t\t- argv[7] is the list of resources used\n *\t\t- argv[8] is the queue in which the job resides\n *\t\t- argv[9] is the account under which the job runs\n *\t\t- argv[10] is the job exit code\n *\n * @param[in] which - Script type (PE_PROLOGUE or PE_EPILOGUE)\n * @param[in] pelog - Path to the script file\n * @param[in] pjob - Pointer to the associated job structure\n * @param[in] pe_io_type - Output specifier (PE_IO_TYPE_NULL, PE_IO_TYPE_ASIS, or PE_IO_TYPE_STD)\n *\n * @return - Exit code\n * @retval  -2 - no prologue/epilogue input file\n * @retval  -1 - permission error\n * @retval   0 - success (also returned if the script does not exist)\n * @retval  >0 - exit status returned from script\n *\n */\n\nint\nrun_pelog(int which, char *pelog, job *pjob, int pe_io_type)\n{\n\tchar *arg[12];\n\tchar exitcode[20];\n\tchar resc_list[2048];\n\tchar resc_used[2048];\n\tstruct stat sbuf;\n\tchar sid[20];\n\tstruct sigaction act;\n\tint fd_input;\n\tint waitst;\n\tchar buf[MAXNAMLEN + MAXPATHLEN + 2];\n\n\tif (stat(pelog, &sbuf) == -1) {\n\t\tif (errno == ENOENT)\n\t\t\treturn (0);\n\t\telse\n\t\t\treturn (pelog_err(pjob, pelog, errno, \"cannot stat\"));\n\t} else if ((sbuf.st_uid != 0) ||\n\t\t   (!S_ISREG(sbuf.st_mode)) ||\n\t\t   ((sbuf.st_mode & (S_IRUSR | S_IXUSR)) !=\n\t\t    (S_IRUSR | S_IXUSR)) ||\n\t\t   (sbuf.st_mode & (S_IWGRP | S_IWOTH)))\n\t\treturn (pelog_err(pjob, pelog, -1, \"Permission Error\"));\n\n\tfd_input = pe_input(pjob->ji_qs.ji_jobid);\n\tif (fd_input < 0) {\n\t\treturn (pelog_err(pjob, pelog, -2,\n\t\t\t\t  \"no pro/epilogue input file\"));\n\t}\n\n\trun_exit = 0;\n\tchild = fork();\n\tif (child > 0) { /* parent */\n\t\t(void) close(fd_input);\n\t\tsprintf(log_buffer, \"running %s\",\n\t\t\twhich == PE_PROLOGUE ? 
\"prologue\" : \"epilogue\");\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\n\t\tact.sa_handler = pelogalm;\n\t\tsigemptyset(&act.sa_mask);\n\t\tact.sa_flags = 0;\n\t\tsigaction(SIGALRM, &act, 0);\n\t\talarm(pe_alarm_time);\n\t\twhile (wait(&waitst) < 0) {\n\t\t\tif (errno != EINTR) { /* continue loop on signal */\n\t\t\t\trun_exit = -3;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tkill(-child, SIGKILL);\n\t\t}\n\t\talarm(0);\n\t\tact.sa_handler = SIG_DFL;\n\t\tsigaction(SIGALRM, &act, 0);\n\t\tkill(-child, SIGKILL);\n\t\tif (run_exit == 0) {\n\t\t\tif (WIFEXITED(waitst)) {\n\t\t\t\trun_exit = WEXITSTATUS(waitst);\n\t\t\t} else if (WIFSIGNALED(waitst)) {\n\t\t\t\trun_exit = -3;\n\t\t\t}\n\t\t} else {\n\t\t\tsprintf(log_buffer, \"completed %s, exit=%d\",\n\t\t\t\twhich == PE_PROLOGUE ? \"prologue\" : \"epilogue\",\n\t\t\t\trun_exit);\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t}\n\n\t} else { /* child */\n\t\t/*\n\t\t * For sanity's sake we make sure the following are defined.\n\t\t * They should have been defined in unistd.h\n\t\t */\n#ifndef STDIN_FILENO\n#define STDIN_FILENO 0\n#endif\n#ifndef STDOUT_FILENO\n#define STDOUT_FILENO 1\n#endif\n#ifndef STDERR_FILENO\n#define STDERR_FILENO 2\n#endif\n\n\t\t/*\n\t\t ** As these fd variables (fds1 and fds2) are used only in the\n\t\t ** child process, they are defined here.\n\t\t */\n\t\tint fds1 = -1;\n\t\tint fds2 = -1;\n\n\t\tif (fd_input != 0) {\n\t\t\t(void) close(STDIN_FILENO);\n\t\t\tif (dup(fd_input) == -1)\n\t\t\t\tlog_errf(-1, __func__, \"dup failed. ERR : %s\", strerror(errno));\n\t\t\t(void) close(fd_input);\n\t\t}\n\n\t\t/* unprotect from kernel killers (such as oom) */\n\t\tdaemon_protect(0, PBS_DAEMON_PROTECT_OFF);\n\n\t\t/*\n\t\t * If PE_IO_TYPE_ASIS, leave stdout/stderr alone as they\n\t\t * are already open to the job. 
Otherwise, set FDs 1 and 2\n\t\t * appropriately being careful to join them where\n\t\t * necessary.\n\t\t */\n\t\tif (pe_io_type == PE_IO_TYPE_NULL) {\n\t\t\t/* Close any existing stdout/stderr. */\n\t\t\t(void) close(STDOUT_FILENO);\n\t\t\t(void) close(STDERR_FILENO);\n\t\t\t/* No output, force to /dev/null */\n\t\t\tfds1 = open(\"/dev/null\", O_WRONLY, 0600);\n\t\t\tfds2 = dup(fds1);\n\t\t} else if (pe_io_type == PE_IO_TYPE_STD) {\n\t\t\tint join_method;\n\t\t\t/* Close any existing stdout/stderr. */\n\t\t\t(void) close(STDOUT_FILENO);\n\t\t\t(void) close(STDERR_FILENO);\n\t\t\t/*\n\t\t\t * Do not open an output file unless it will be used.\n\t\t\t * Otherwise, it will be left behind in spool.\n\t\t\t */\n\t\t\tjoin_method = is_joined(pjob);\n\t\t\t/* Open job stdout/stderr. */\n\t\t\tif (join_method < 0) { /* joined as stderr */\n\t\t\t\tfds1 = open(\"/dev/null\", O_WRONLY, 0600);\n\t\t\t\tfds2 = open_std_file(pjob, StdErr, O_WRONLY | O_APPEND,\n\t\t\t\t\t\t     pjob->ji_qs.ji_un.ji_momt.ji_exgid);\n\t\t\t\t(void) close(fds1);\n\t\t\t\tfds1 = dup(fds2);\n\t\t\t} else if (join_method > 0) { /* joined as stdout */\n\t\t\t\tfds1 = open_std_file(pjob, StdOut, O_WRONLY | O_APPEND,\n\t\t\t\t\t\t     pjob->ji_qs.ji_un.ji_momt.ji_exgid);\n\t\t\t\tfds2 = dup(fds1);\n\t\t\t} else { /* not joined */\n\t\t\t\tfds1 = open_std_file(pjob, StdOut, O_WRONLY | O_APPEND,\n\t\t\t\t\t\t     pjob->ji_qs.ji_un.ji_momt.ji_exgid);\n\t\t\t\tfds2 = open_std_file(pjob, StdErr, O_WRONLY | O_APPEND,\n\t\t\t\t\t\t     pjob->ji_qs.ji_un.ji_momt.ji_exgid);\n\t\t\t}\n\t\t\tif (fds1 == -1 || fds2 == -1) {\n\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_WARNING,\n\t\t\t\t\t  pjob->ji_qs.ji_jobid, \"problem opening job output file(s)\");\n\t\t\t}\n\t\t}\n\n\t\t/* for both prologue and epilogue */\n\n\t\targ[0] = pelog;\n\t\targ[1] = pjob->ji_qs.ji_jobid;\n\t\targ[2] = get_jattr_str(pjob, JOB_ATR_euser);\n\t\targ[3] = get_jattr_str(pjob, JOB_ATR_egroup);\n\n\t\t/* for epilogue only 
*/\n\n\t\tif (which == PE_EPILOGUE) {\n\t\t\targ[4] = get_jattr_str(pjob, JOB_ATR_jobname);\n\t\t\tsprintf(sid, \"%ld\", get_jattr_long(pjob, JOB_ATR_session_id));\n\t\t\targ[5] = sid;\n\t\t\targ[6] = resc_to_string(pjob, JOB_ATR_resource, resc_list, 2048);\n\t\t\targ[7] = resc_to_string(pjob, JOB_ATR_resc_used, resc_used, 2048);\n\t\t\targ[8] = get_jattr_str(pjob, JOB_ATR_in_queue);\n\t\t\tif ((is_jattr_set(pjob, JOB_ATR_account)) && (strlen(get_jattr_str(pjob, JOB_ATR_account)) > 0))\n\t\t\t\targ[9] = get_jattr_str(pjob, JOB_ATR_account);\n\t\t\telse\n\t\t\t\targ[9] = \"null\";\n\t\t\tsprintf(exitcode, \"%d\", pjob->ji_qs.ji_un.ji_momt.ji_exitstat);\n\t\t\targ[10] = exitcode;\n\t\t\targ[11] = 0;\n\n\t\t} else {\n#ifdef NAS /* localmod 095 */\n\t\t\targ[4] = resc_to_string(pjob, JOB_ATR_resource, resc_list, 2048);\n\t\t\targ[5] = NULL;\n#else\n\t\t\targ[4] = NULL;\n#endif /* localmod 095 */\n\t\t}\n\n\t\t(void) setsid();\n\n\t\t/* Add PBS_JOBDIR to the current process environment */\n\t\tif (pjob->ji_grpcache) {\n\t\t\tif ((is_jattr_set(pjob, JOB_ATR_sandbox)) && (strcasecmp(get_jattr_str(pjob, JOB_ATR_sandbox), \"PRIVATE\") == 0)) {\n\t\t\t\t/* set PBS_JOBDIR to the per-job staging and execution directory */\n\t\t\t\tsnprintf(buf, sizeof(buf), \"%s\",\n\t\t\t\t\tjobdirname(pjob->ji_qs.ji_jobid,\n\t\t\t\t\t\t   pjob->ji_grpcache->gc_homedir));\n\t\t\t} else {\n\t\t\t\t/* set PBS_JOBDIR to user HOME */\n\t\t\t\tsnprintf(buf, sizeof(buf), \"%s\", pjob->ji_grpcache->gc_homedir);\n\t\t\t}\n\t\t\tif (setenv(\"PBS_JOBDIR\", buf, 1) != 0)\n\t\t\t\tlog_err(-1, \"run_pelog\", \"could not set environment variable PBS_JOBDIR\");\n\t\t}\n\n\t\texecv(pelog, arg);\n\n\t\tlog_err(errno, \"run_pelog\", \"execv of pro/epilogue script failed\");\n\t\texit(255);\n\t}\n\n\tif (run_exit)\n\t\t(void) pelog_err(pjob, pelog, run_exit, \"nonzero p/e exit status\");\n\n\treturn (run_exit);\n}\n"
  },
  {
    "path": "src/resmom/renew_creds.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\n#include \"renew_creds.h\"\n\n#include \"log.h\"\n#include \"mom_func.h\"\n\n#include <errno.h>\n#include <string.h>\n#include <stdlib.h>\n#include <stdio.h>\n#include <pwd.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <sys/fcntl.h>\n#include <signal.h>\n#include <unistd.h>\n#include <math.h>\n\n#include \"libpbs.h\"\n#include \"portability.h\"\n#include \"list_link.h\"\n#include \"server_limits.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"resmon.h\"\n#include \"libutil.h\"\n\n#include \"tpp.h\"\n#include \"pbs_error.h\"\n\n#include \"net_connect.h\"\n#include \"dis.h\"\n#include \"batch_request.h\"\n#include \"resource.h\"\n\n#include \"work_task.h\"\n\n#if defined(HAVE_LIBKAFS)\n#include <kafs.h>\n#include <grp.h>\n#elif defined(HAVE_LIBKOPENAFS)\n#include <kopenafs.h>\n#include <grp.h>\n#endif\n\n#include <krb5.h>\n#include <com_err.h>\n\ntypedef struct eexec_job_info_t {\n\ttime_t endtime;\t    /* tickets expiration time */\n\tkrb5_creds *creds;  /* User's TGT */\n\tkrb5_ccache ccache; /* User's credentials cache */\n\tuid_t job_uid;\n\tchar *username;\n\tchar *krb_principal;\n\tchar *jobid;\n\tchar *ccache_name;\n\tkrb5_principal client;\n} eexec_job_info_t, 
*eexec_job_info;\n\nstruct krb_holder {\n\tint got_ticket;\n\teexec_job_info_t job_info_;\n\teexec_job_info job_info;\n\tkrb5_context context;\n};\n\npbs_list_head svr_allcreds; /* all credentials received from server */\n\nstruct svrcred_data {\n\tpbs_list_link cr_link;\n\tchar *cr_jobid;\n\tchar *cr_credid;\n\tint cr_type;\n\tkrb5_data *cr_data;\n\tchar *cr_data_base64; /* used for sending to sis moms*/\n\tlong cr_validity;\n};\ntypedef struct svrcred_data svrcred_data;\n\nconst char *str_cred_actions[] = {\"singleshot\", \"renewal\", \"setenv\", \"close\", \"destroy\"};\n\nextern char *path_jobs; /* job directory path */\nextern struct var_table vtable;\nextern time_t time_now;\nextern int decode_block_base64(unsigned char *ascii_data, ssize_t ascii_len, unsigned char *bin_data, ssize_t *p_bin_len, char *msg, size_t msg_len);\n\nextern char *log_file;\nextern char *path_log;\n\nstatic int get_job_info_from_job(const job *pjob, const task *ptask, eexec_job_info job_info);\nstatic int get_job_info_from_principal(const char *principal, const char *jobid, eexec_job_info job_info);\nstatic krb5_error_code get_ticket_from_storage(struct krb_holder *ticket, char *errbuf, size_t errbufsz);\nstatic krb5_error_code get_ticket_from_ccache(struct krb_holder *ticket, char *errbuf, size_t errbufsz);\nstatic krb5_error_code get_renewed_creds(struct krb_holder *ticket, char *errbuf, size_t errbufsz);\nstatic int init_ticket(struct krb_holder *ticket, job *pjob, int cred_action);\n\nstatic svrcred_data *find_cred_data_by_jobid(char *jobid);\n\n/**\n * @brief\n * \tinit_ticket_from_req - Initialize a kerberos ticket from request\n *\n * @param[in] principal - kerberos principal\n * @param[in] jobid - job id associated with request\n * @param[out] ticket - Kerberos ticket for initialization\n * @param[in] cred_action - credentials action type\n *\n * @return \tint\n * @retval\t0 on success\n * @retval\t!= 0 on error\n */\nint\ninit_ticket_from_req(char *principal, char *jobid, 
struct krb_holder *ticket, int cred_action)\n{\n\tint ret;\n\tchar buf[LOG_BUF_SIZE];\n\n\tif ((ret = get_job_info_from_principal(principal, jobid, ticket->job_info)) != 0) {\n\t\tsnprintf(buf, sizeof(buf), \"Could not fetch GSSAPI information from principal (get_job_info_from_principal returned %d).\", ret);\n\t\tlog_err(errno, __func__, buf);\n\t\treturn ret;\n\t}\n\n\tret = init_ticket(ticket, NULL, cred_action);\n\tif (ret == 0) {\n\t\tticket->got_ticket = 1;\n\t}\n\n\treturn ret;\n}\n\n/**\n * @brief\n * \tinit_ticket_from_job - Initialize a kerberos ticket from job\n *\n * @param[in] pjob - job structure\n * @param[in] ptask - optional - ptask associated with job\n * @param[out] ticket - Kerberos ticket for initialization\n * @param[in] cred_action - credentials action type\n *\n * @return \tint\n * @retval\t0 on success\n * @retval\t!= 0 on error\n */\nint\ninit_ticket_from_job(job *pjob, const task *ptask, struct krb_holder *ticket, int cred_action)\n{\n\tint ret;\n\tchar buf[LOG_BUF_SIZE];\n\n\tif ((ret = get_job_info_from_job(pjob, ptask, ticket->job_info)) != 0) {\n\t\tsnprintf(buf, sizeof(buf), \"Could not fetch GSSAPI information from job (get_job_info_from_job returned %d).\", ret);\n\t\tlog_err(errno, __func__, buf);\n\t\treturn ret;\n\t}\n\n\tret = init_ticket(ticket, pjob, cred_action);\n\tif (ret == 0) {\n\t\tticket->got_ticket = 1;\n\t}\n\n\treturn ret;\n}\n\n/**\n * @brief\n * \tinit_ticket_from_ccache - Initialize a kerberos ticket from ccache file\n *\n * @param[in] pjob - job structure\n * @param[in] ptask - optional - ptask associated with job\n * @param[out] ticket - Kerberos ticket for initialization\n *\n * @return \tint\n * @retval\t0 on success\n * @retval\t!= 0 on error\n */\nint\ninit_ticket_from_ccache(job *pjob, const task *ptask, struct krb_holder *ticket)\n{\n\tint ret;\n\tchar buf[LOG_BUF_SIZE];\n\n\tif ((ret = get_job_info_from_job(pjob, ptask, ticket->job_info)) != 0) {\n\t\tsnprintf(buf, sizeof(buf), \"Could not fetch GSSAPI 
information from job (get_job_info_from_job returned %d).\", ret);\n\t\tlog_err(errno, __func__, buf);\n\t\treturn ret;\n\t}\n\n\tif ((ret = krb5_init_context(&ticket->context)) != 0) {\n\t\tlog_err(ret, __func__, \"Failed to initialize kerberos context.\");\n\t\treturn PBS_KRB5_ERR_CONTEXT_INIT;\n\t}\n\n\tif ((ret = get_ticket_from_ccache(ticket, buf, LOG_BUF_SIZE))) {\n\t\tlog_err(ret, __func__, buf);\n\n\t\tsnprintf(buf, sizeof(buf), \"Could not get ticket: %s.\", error_message(ret));\n\t\tlog_err(errno, __func__, buf);\n\t\treturn ret;\n\t}\n\n\tif (ret == 0) {\n\t\tticket->got_ticket = 1;\n\t}\n\n\treturn ret;\n}\n\n/**\n * @brief\n * \tinit_ticket - Initialize a kerberos ticket\n *\n * @param[out] ticket - Kerberos ticket for initialization\n * @param[in] cred_action - credentials action type\n *\n * @return \tint\n * @retval\tPBS_KRB5_OK on success\n * @retval\t!= PBS_KRB5_OK on error\n */\nstatic int\ninit_ticket(struct krb_holder *ticket, job *pjob, int cred_action)\n{\n\tint ret;\n\tchar buf[LOG_BUF_SIZE * 2];\n\n\tif ((ret = krb5_init_context(&ticket->context)) != 0) {\n\t\tlog_err(ret, __func__, \"Failed to initialize context.\");\n\t\treturn PBS_KRB5_ERR_CONTEXT_INIT;\n\t}\n\n\tswitch (cred_action) {\n\t\tcase CRED_SINGLESHOT:\n\t\tcase CRED_RENEWAL:\n\t\t\tif ((ret = get_renewed_creds(ticket, buf, sizeof(buf))) != 0) {\n\t\t\t\tchar buf2[LOG_BUF_SIZE * 3];\n\n\t\t\t\tkrb5_free_context(ticket->context);\n\t\t\t\tsnprintf(buf2, sizeof(buf2), \"get_renewed_creds returned %d, %s\", ret, buf);\n\t\t\t\tlog_err(errno, __func__, buf2);\n\t\t\t\treturn PBS_KRB5_ERR_GET_CREDS;\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase CRED_DESTROY:\n\t\t\tif ((ret = krb5_cc_resolve(ticket->context, ticket->job_info->ccache_name, &ticket->job_info->ccache))) {\n\t\t\t\tsnprintf(buf, sizeof(buf), \"Could not resolve ccache name \\\"krb5_cc_resolve()\\\" : %s.\", error_message(ret));\n\t\t\t\tlog_err(errno, __func__, buf);\n\t\t\t\treturn (ret);\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase 
CRED_SETENV:\n\t\tcase CRED_CLOSE:\n\t\t\tbreak;\n\t}\n\n\tif (pjob != NULL && pjob->ji_env.v_envp != NULL)\n\t\tbld_env_variables(&(pjob->ji_env), \"KRB5CCNAME\", ticket->job_info->ccache_name);\n\telse\n\t\tsetenv(\"KRB5CCNAME\", ticket->job_info->ccache_name, 1);\n\n\treturn PBS_KRB5_OK;\n}\n\n/**\n * @brief\n * \tstore_ticket - store the credentials into the ccache file\n *\n * @param[in] ticket - ticket with credential\n * @param[out] errbuf - buffer to be filled on error\n * @param[in] errbufsz - size of error buffer\n *\n * @return \tkrb5_error_code\n * @retval\t0 on success\n * @retval\terror code on error\n */\nstatic krb5_error_code\nstore_ticket(struct krb_holder *ticket, char *errbuf, size_t errbufsz)\n{\n\tkrb5_error_code ret;\n\n\tif ((ret = krb5_cc_resolve(ticket->context, ticket->job_info->ccache_name, &ticket->job_info->ccache))) {\n\t\tsnprintf(errbuf, errbufsz, \"%s - Could not resolve cache name \\\"krb5_cc_resolve()\\\" : %s.\", __func__, error_message(ret));\n\t\treturn (ret);\n\t}\n\n\tif ((ret = krb5_cc_initialize(ticket->context, ticket->job_info->ccache, ticket->job_info->creds->client))) {\n\t\tsnprintf(errbuf, errbufsz, \"%s - Could not initialize cache \\\"krb5_cc_initialize()\\\" : %s.\", __func__, error_message(ret));\n\t\treturn (ret);\n\t}\n\n\tif ((ret = krb5_cc_store_cred(ticket->context, ticket->job_info->ccache, ticket->job_info->creds))) {\n\t\tsnprintf(errbuf, errbufsz, \"%s - Could not store credentials \\\"krb5_cc_store_cred()\\\" : %s.\", __func__, error_message(ret));\n\t\treturn (ret);\n\t}\n\n\treturn (0);\n}\n\n/**\n * @brief\n * \tget_renewed_creds - Get and store renewed credentials for a given ticket.\n *\tThe credentials are obtained from storage (mom's memory) and stored\n *\tinto the ccache file.\n *\n * @param[in] ticket - ticket for which to get and store credentials\n * @param[out] errbuf - buffer to be filled on error\n * @param[in] errbufsz - size of error buffer\n * @param[in] cred_action - 
type of action with credentials\n *\n * @return \tkrb5_error_code\n * @retval\t0 on success\n * @retval\terror code on error\n */\nstatic krb5_error_code\nget_renewed_creds(struct krb_holder *ticket, char *errbuf, size_t errbufsz)\n{\n\tkrb5_error_code ret;\n\tchar strerrbuf[LOG_BUF_SIZE];\n\n\t/* Get TGT for user */\n\tif ((ret = get_ticket_from_storage(ticket, errbuf, errbufsz)) != 0) {\n\t\tlog_record(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG, ticket->job_info->jobid, \"no credentials supplied\");\n\t\treturn ret;\n\t}\n\n\t/* Go user */\n\tif (seteuid(ticket->job_info->job_uid) < 0) {\n\t\tstrerror_r(errno, strerrbuf, LOG_BUF_SIZE);\n\t\tsnprintf(errbuf, errbufsz, \"%s - Could not set effective uid using \\\"seteuid()\\\": %s.\", __func__, strerrbuf);\n\t\treturn errno;\n\t}\n\n\t/* Store TGT */\n\tif ((ret = store_ticket(ticket, errbuf, errbufsz))) {\n\t\tseteuid(0);\n\t\treturn ret;\n\t}\n\n\t/* Go root */\n\tif (seteuid(0) < 0) {\n\t\tsnprintf(errbuf, errbufsz, \"%s - Could not reset root privileges.\", __func__);\n\t\treturn errno;\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \tget_ticket_from_storage - Acquire a user ticket. 
The credentials are\n *\texpected to be stored in the mom's memory (supplied by the pbs server).\n *\n * @param[in] ticket - ticket to be filled with credentials\n * @param[out] errbuf - buffer to be filled on error\n * @param[in] errbufsz - size of error buffer\n *\n * @return \tkrb5_error_code\n * @retval\t0 on success\n * @retval\terror code on error\n */\nstatic krb5_error_code\nget_ticket_from_storage(struct krb_holder *ticket, char *errbuf, size_t errbufsz)\n{\n\tkrb5_error_code ret = 0;\n\tkrb5_auth_context auth_context = NULL;\n\tkrb5_creds **creds = NULL;\n\tkrb5_data *datatmp;\n\tkrb5_data *data;\n\tint32_t flags;\n\tsvrcred_data *cred_data;\n\n\tcred_data = find_cred_data_by_jobid(ticket->job_info->jobid);\n\tif (cred_data == NULL || (datatmp = cred_data->cr_data) == NULL) {\n\t\tsnprintf(errbuf, errbufsz, \"find_cred_by_jobid failed; no credentials supplied for job: %s\", ticket->job_info->jobid);\n\t\tret = KRB5_NOCREDS_SUPPLIED;\n\t\tgoto out;\n\t}\n\n\tif ((ret = krb5_copy_data(ticket->context, datatmp, &data))) {\n\t\tconst char *krb5_err = krb5_get_error_message(ticket->context, ret);\n\t\tsnprintf(errbuf, errbufsz, \"krb5_copy_data failed; Error text: %s\", krb5_err);\n\t\tkrb5_free_error_message(ticket->context, krb5_err);\n\t\tgoto out;\n\t}\n\n\tif ((ret = krb5_auth_con_init(ticket->context, &auth_context))) {\n\t\tconst char *krb5_err = krb5_get_error_message(ticket->context, ret);\n\t\tsnprintf(errbuf, errbufsz, \"krb5_auth_con_init failed; Error text: %s\", krb5_err);\n\t\tkrb5_free_error_message(ticket->context, krb5_err);\n\t\tgoto out;\n\t}\n\n\tkrb5_auth_con_getflags(ticket->context, auth_context, &flags);\n\t/* We disable timestamps in the message so the expected message could be cached\n\t * and re-sent. The following flag needs to be also set by the tool\n\t * that supplies credential.\n\t * N.B. The semantics of KRB5_AUTH_CONTEXT_DO_TIME applied in\n\t * krb5_fwd_tgt_creds() seems to differ between Heimdal and MIT. 
MIT uses\n\t * it to (also) enable replay cache checks (that are useless and\n\t * troublesome for us). Heimdal uses it to just specify whether or not the\n\t * timestamp is included in the forwarded message. */\n\tflags &= ~(KRB5_AUTH_CONTEXT_DO_TIME);\n\tkrb5_auth_con_setflags(ticket->context, auth_context, flags);\n\n\tif ((ret = krb5_rd_cred(ticket->context, auth_context, data, &creds, NULL))) {\n\t\tconst char *krb5_err = krb5_get_error_message(ticket->context, ret);\n\t\tsnprintf(errbuf, errbufsz, \"krb5_rd_cred - reading credentials; Error text: %s\", krb5_err);\n\t\tkrb5_free_error_message(ticket->context, krb5_err);\n\t\tgoto out;\n\t}\n\n\tif ((ret = krb5_auth_con_free(ticket->context, auth_context))) {\n\t\tconst char *krb5_err = krb5_get_error_message(ticket->context, ret);\n\t\tsnprintf(errbuf, errbufsz, \"krb5_auth_con_free - freeing authentication context; Error text: %s\", krb5_err);\n\t\tkrb5_free_error_message(ticket->context, krb5_err);\n\t\tgoto out;\n\t}\n\n\tkrb5_free_data(ticket->context, data);\n\n\tticket->job_info->creds = creds[0];\n\tfree(creds);\n\n\tticket->job_info->endtime = ticket->job_info->creds->times.endtime;\n\n\tif (ticket->job_info->endtime < time_now)\n\t\treturn KRB5_NOCREDS_SUPPLIED;\n\nout:\n\treturn ret;\n}\n\n/**\n * @brief\n * \tget_ticket_from_ccache - Acquire a user ticket. 
The credentials are\n *\tread from ccache file.\n *\n * @param[in] ticket - ticket to be filled with credentials\n * @param[out] errbuf - buffer to be filled on error\n * @param[in] errbufsz - size of error buffer\n *\n * @return \tkrb5_error_code\n * @retval\t0 on success\n * @retval\terror code on error\n */\nstatic krb5_error_code\nget_ticket_from_ccache(struct krb_holder *ticket, char *errbuf, size_t errbufsz)\n{\n\tkrb5_error_code ret = 0;\n\tkrb5_creds *mcreds = NULL;\n\tint32_t flags;\n\tkrb5_auth_context auth_context = NULL;\n\n\tif ((mcreds = malloc(sizeof(krb5_creds))) == NULL) {\n\t\tlog_err(errno, __func__, \"Unable to allocate Memory!\\n\");\n\t\treturn KRB5KRB_ERR_GENERIC;\n\t}\n\tmemset(mcreds, 0, sizeof(krb5_creds));\n\n\tif ((ret = krb5_copy_principal(ticket->context, ticket->job_info->client, &mcreds->client))) {\n\t\tconst char *krb5_err = krb5_get_error_message(ticket->context, ret);\n\t\tsnprintf(errbuf, errbufsz, \"krb5_get_ticket - couldn't copy client principal - (%s)\", krb5_err);\n\t\tkrb5_free_error_message(ticket->context, krb5_err);\n\t\tgoto out;\n\t}\n\n\tif ((ret = krb5_cc_resolve(ticket->context, ticket->job_info->ccache_name, &ticket->job_info->ccache))) {\n\t\tconst char *krb5_err = krb5_get_error_message(ticket->context, ret);\n\t\tsnprintf(errbuf, errbufsz, \"krb5_cc_resolve failed; Error text: %s\", krb5_err);\n\t\tkrb5_free_error_message(ticket->context, krb5_err);\n\t\tgoto out;\n\t}\n\n\tif ((ret = krb5_auth_con_init(ticket->context, &auth_context))) {\n\t\tconst char *krb5_err = krb5_get_error_message(ticket->context, ret);\n\t\tsnprintf(errbuf, errbufsz, \"krb5_auth_con_init failed; Error text: %s\", krb5_err);\n\t\tkrb5_free_error_message(ticket->context, krb5_err);\n\t\tgoto out;\n\t}\n\n\tkrb5_auth_con_getflags(ticket->context, auth_context, &flags);\n\tflags &= ~(KRB5_AUTH_CONTEXT_DO_TIME);\n\tkrb5_auth_con_setflags(ticket->context, auth_context, flags);\n\n\tif ((ticket->job_info->creds = malloc(sizeof(krb5_creds))) 
== NULL) {\n\t\tlog_err(errno, __func__, \"Unable to allocate Memory!\\n\");\n\t\tret = KRB5KRB_ERR_GENERIC;\n\t\tgoto out;\n\t}\n\tmemset(ticket->job_info->creds, 0, sizeof(krb5_creds));\n\n\tif ((ret = krb5_cc_retrieve_cred(ticket->context, ticket->job_info->ccache, 0, mcreds, ticket->job_info->creds))) {\n\t\tconst char *krb5_err = krb5_get_error_message(ticket->context, ret);\n\t\tsnprintf(errbuf, errbufsz, \"krb5_cc_retrieve_cred failed; Error text: %s\", krb5_err);\n\t\tkrb5_free_error_message(ticket->context, krb5_err);\n\t\tgoto out;\n\t}\n\n\tticket->job_info->endtime = ticket->job_info->creds->times.endtime;\n\nout:\n\tkrb5_free_creds(ticket->context, mcreds);\n\treturn ret;\n}\n\n/**\n * @brief\n * \tget_ticket_ccname - Get ccname file name from ticket\n *\n * @param[in] ticket\n *\n * @return \tchar\n * @retval\tccache file\n * @retval\tNULL on error\n */\nchar *\nget_ticket_ccname(struct krb_holder *ticket)\n{\n\tif (ticket == NULL || ticket->job_info == NULL)\n\t\treturn NULL;\n\n\treturn ticket->job_info->ccache_name;\n}\n\n/**\n * @brief\n * \talloc_ticket - Allocate a new krb_holder structure (ticket)\n *\n * @return \tkrb_holder\n * @retval\tstructure - on success\n * @retval\tNULL - otherwise\n */\nstruct krb_holder *\nalloc_ticket()\n{\n\tstruct krb_holder *ticket = (struct krb_holder *) malloc(sizeof(struct krb_holder));\n\tif (ticket == NULL)\n\t\treturn NULL;\n\n\tticket->job_info = &ticket->job_info_;\n\tticket->job_info->creds = NULL;\n\tticket->job_info->ccache_name = NULL;\n\tticket->job_info->krb_principal = NULL;\n\tticket->job_info->username = NULL;\n\tticket->job_info->jobid = NULL;\n\tticket->got_ticket = 0;\n\n\treturn ticket;\n}\n\n/**\n * @brief\n * \tfree_ticket - Free a Kerberos ticket. 
Distinguishes whether the\n *\tcredentials should also be destroyed (ccache removed) or only\n *\tthe structures freed.\n *\n * @param[in] ticket - Ticket with context and job info information\n * @param[in] cred_action - requested action\n *\n * @return void\n */\nvoid\nfree_ticket(struct krb_holder *ticket, int cred_action)\n{\n\tkrb5_error_code ret = 0;\n\n\tif (ticket == NULL)\n\t\treturn;\n\n\tif (ticket->got_ticket) {\n\t\tswitch (cred_action) {\n\t\t\tcase CRED_SINGLESHOT:\n\t\t\tcase CRED_RENEWAL:\n\t\t\tcase CRED_CLOSE:\n\t\t\t\tif ((ret = krb5_cc_close(ticket->context, ticket->job_info->ccache))) {\n\t\t\t\t\tconst char *krb5_err = krb5_get_error_message(ticket->context, ret);\n\t\t\t\t\tlog_err(ret, __func__, krb5_err);\n\t\t\t\t\tkrb5_free_error_message(ticket->context, krb5_err);\n\t\t\t\t}\n\n\t\t\t\tbreak;\n\n\t\t\tcase CRED_DESTROY:\n#if defined(HAVE_LIBKAFS) || defined(HAVE_LIBKOPENAFS)\n\t\t\t\tif (k_hasafs())\n\t\t\t\t\tk_unlog();\n#endif\n\t\t\t\tif ((ret = krb5_cc_destroy(ticket->context, ticket->job_info->ccache))) {\n\t\t\t\t\tconst char *krb5_err = krb5_get_error_message(ticket->context, ret);\n\t\t\t\t\tlog_err(ret, __func__, krb5_err);\n\t\t\t\t\tkrb5_free_error_message(ticket->context, krb5_err);\n\t\t\t\t}\n\n\t\t\t\tunlink(ticket->job_info->ccache_name);\n\n\t\t\t\tbreak;\n\n\t\t\tcase CRED_SETENV:\n\t\t\t\tbreak;\n\t\t}\n\n\t\tkrb5_free_creds(ticket->context, ticket->job_info->creds);\n\t\tkrb5_free_principal(ticket->context, ticket->job_info->client);\n\t\tkrb5_free_context(ticket->context);\n\t}\n\n\tfree(ticket->job_info->ccache_name);\n\tfree(ticket->job_info->krb_principal);\n\tfree(ticket->job_info->username);\n\tfree(ticket->job_info->jobid);\n\n\tfree(ticket);\n}\n\n/**\n * @brief\n * \tget_job_info_from_job - Fill in job info from job structure\n *\n * @param[in] pjob - job structure\n * @param[in] ptask - optional ptask associated with job process\n * @param[out] job_info - filled job information\n *\n * @return \tint\n * 
@retval\tPBS_KRB5_OK - on success\n * @retval\t!= PBS_KRB5_OK - on error\n */\nstatic int\nget_job_info_from_job(const job *pjob, const task *ptask, eexec_job_info job_info)\n{\n\tchar *krb_principal = NULL;\n\tsize_t len;\n\tchar *ccname = NULL;\n\n\tif (is_jattr_set(pjob, JOB_ATR_cred_id))\n\t\tkrb_principal = strdup(get_jattr_str(pjob, JOB_ATR_cred_id));\n\telse {\n\t\tlog_err(-1, __func__, \"No ticket found on job.\");\n\t\treturn PBS_KRB5_ERR_NO_KRB_PRINC;\n\t}\n\n\tif (krb_principal == NULL) /* memory allocation error */\n\t\treturn PBS_KRB5_ERR_INTERNAL;\n\n\tif (ptask == NULL) {\n\t\tlen = snprintf(NULL, 0, \"FILE:/tmp/krb5cc_pbsjob_%s\", pjob->ji_qs.ji_jobid);\n\t\tccname = (char *) (malloc(len + 1));\n\t\tif (ccname != NULL)\n\t\t\tsnprintf(ccname, len + 1, \"FILE:/tmp/krb5cc_pbsjob_%s\", pjob->ji_qs.ji_jobid);\n\t} else {\n\t\tlen = snprintf(NULL, 0, \"FILE:/tmp/krb5cc_pbsjob_%s_%ld\", pjob->ji_qs.ji_jobid, (long) ptask->ti_qs.ti_task);\n\t\tccname = (char *) (malloc(len + 1));\n\t\tif (ccname != NULL)\n\t\t\tsnprintf(ccname, len + 1, \"FILE:/tmp/krb5cc_pbsjob_%s_%ld\", pjob->ji_qs.ji_jobid, (long) ptask->ti_qs.ti_task);\n\t}\n\n\tif (ccname == NULL) { /* memory allocation error */\n\t\tfree(krb_principal);\n\t\treturn PBS_KRB5_ERR_INTERNAL;\n\t}\n\n\tif (get_jattr_str(pjob, JOB_ATR_euser) == NULL) {\n\t\tfree(krb_principal);\n\t\tfree(ccname);\n\t\treturn PBS_KRB5_ERR_NO_USERNAME;\n\t}\n\n\tchar *username = strdup(get_jattr_str(pjob, JOB_ATR_euser));\n\tif (username == NULL) {\n\t\tfree(krb_principal);\n\t\tfree(ccname);\n\t\treturn PBS_KRB5_ERR_INTERNAL;\n\t}\n\n\tkrb5_context context;\n\tkrb5_init_context(&context);\n\tkrb5_parse_name(context, krb_principal, &job_info->client);\n\n\tkrb5_free_context(context);\n\n\tjob_info->krb_principal = krb_principal;\n\tjob_info->ccache_name = ccname;\n\tjob_info->username = username;\n\tjob_info->job_uid = pjob->ji_qs.ji_un.ji_momt.ji_exuid;\n\n\tjob_info->jobid = strdup(pjob->ji_qs.ji_jobid);\n\tif 
(job_info->jobid == NULL) {\n\t\tfree(krb_principal);\n\t\tfree(ccname);\n\t\treturn PBS_KRB5_ERR_INTERNAL;\n\t}\n\n\treturn PBS_KRB5_OK;\n}\n\n/**\n * @brief\n * \tget_job_info_from_principal - Fill in job info from a principal\n *\n * @param[in] principal - Principal for which to construct job info\n * @param[in] jobid - Job ID for which to construct job info\n * @param[out] job_info - filled job information\n *\n * @return \tint\n * @retval\tPBS_KRB5_OK - on success\n * @retval\t!= PBS_KRB5_OK - on error\n */\nstatic int\nget_job_info_from_principal(const char *principal, const char *jobid, eexec_job_info job_info)\n{\n\tstruct passwd pwd;\n\tstruct passwd *result;\n\tchar *buf;\n\tlong int bufsize;\n\n\tif (principal == NULL) {\n\t\tlog_err(-1, __func__, \"No principal provided.\");\n\t\treturn PBS_KRB5_ERR_NO_KRB_PRINC;\n\t}\n\n\tchar *krb_principal = strdup(principal);\n\tif (krb_principal == NULL)\n\t\treturn PBS_KRB5_ERR_INTERNAL;\n\n\tchar login[PBS_MAXUSER + 1];\n\tpbs_strncpy(login, principal, PBS_MAXUSER + 1);\n\tchar *c = strchr(login, '@');\n\tif (c != NULL)\n\t\t*c = '\\0';\n\n\t/* get the user's uid */\n\tbufsize = sysconf(_SC_GETPW_R_SIZE_MAX);\n\tif (bufsize == -1)\t /* Value was indeterminate */\n\t\tbufsize = 16384; /* Should be more than enough */\n\n\tif ((buf = (char *) (malloc(bufsize))) == NULL) {\n\t\tfree(krb_principal);\n\t\treturn PBS_KRB5_ERR_INTERNAL;\n\t}\n\n\tint ret = getpwnam_r(login, &pwd, buf, bufsize, &result);\n\n\tif (result == NULL) {\n\t\tfree(krb_principal);\n\t\tfree(buf);\n\t\tif (ret == 0)\n\t\t\treturn PBS_KRB5_ERR_CANT_OPEN_FILE;\n\t\telse\n\t\t\treturn PBS_KRB5_ERR_INTERNAL;\n\t}\n\n\tuid_t uid = pwd.pw_uid;\n\tfree(buf);\n\n\tchar *username = strdup(login);\n\tif (username == NULL) {\n\t\tfree(krb_principal);\n\t\treturn PBS_KRB5_ERR_INTERNAL;\n\t}\n\n\tsize_t len;\n\tchar *ccname;\n\tlen = snprintf(NULL, 0, \"FILE:/tmp/krb5cc_pbsjob_%s\", jobid);\n\tccname = (char *) (malloc(len + 1));\n\tif (ccname != 
NULL)\n\t\tsnprintf(ccname, len + 1, \"FILE:/tmp/krb5cc_pbsjob_%s\", jobid);\n\n\tif (ccname == NULL) {\n\t\tfree(krb_principal);\n\t\tfree(username);\n\t\treturn PBS_KRB5_ERR_INTERNAL;\n\t}\n\n\tkrb5_context context;\n\tkrb5_init_context(&context);\n\tkrb5_parse_name(context, principal, &job_info->client);\n\n\tkrb5_free_context(context);\n\n\tjob_info->krb_principal = krb_principal;\n\tjob_info->job_uid = uid;\n\tjob_info->username = username;\n\tjob_info->ccache_name = ccname;\n\n\tjob_info->jobid = strdup(jobid);\n\tif (job_info->jobid == NULL) {\n\t\tfree(krb_principal);\n\t\tfree(ccname);\n\t\tfree(username);\n\t\treturn PBS_KRB5_ERR_INTERNAL;\n\t}\n\n\treturn PBS_KRB5_OK;\n}\n\n/**\n * @brief\n * \tcred_by_job - renew/create or destroy credential associated with job id\n *\n * @param[in] pjob - job structure\n * @param[in] cred_action - type of action (renewal, destroy)\n *\n * @return \tint\n * @retval\tPBS_KRB5_OK - on success\n * @retval\t!= PBS_KRB5_OK - on error\n */\nint\ncred_by_job(job *pjob, int cred_action)\n{\n\tstruct krb_holder *ticket = NULL;\n\tint ret;\n\n\tticket = alloc_ticket();\n\tif (ticket == NULL)\n\t\treturn PBS_KRB5_ERR_INTERNAL;\n\n\tret = init_ticket_from_job(pjob, NULL, ticket, cred_action);\n\tif (ret == PBS_KRB5_ERR_NO_KRB_PRINC) {\n\t\t/* job without a principal\n\t\t * not an error, but do nothing */\n\t\treturn PBS_KRB5_OK;\n\t}\n\n\tif (ret == PBS_KRB5_OK) {\n\t\tsprintf(log_buffer, \"credential %s for %s succeeded\",\n\t\t\tstr_cred_actions[cred_action],\n\t\t\tticket->job_info->ccache_name);\n\t\tlog_record(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\t} else {\n\t\tsprintf(log_buffer, \"credential %s for %s failed with error: %d\",\n\t\t\tstr_cred_actions[cred_action],\n\t\t\tticket->job_info->ccache_name,\n\t\t\tret);\n\t\tlog_record(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\t}\n\n\tfree_ticket(ticket, cred_action);\n\n\treturn ret;\n}\n\n/**\n * @brief\n * \trenew_job_cred - 
renew credentials for job and also do the AFS log.\n *\n * @param[in] pjob - job structure\n *\n * @return void\n */\nvoid\nrenew_job_cred(job *pjob)\n{\n\tint ret = 0;\n\n\tif ((ret = cred_by_job(pjob, CRED_RENEWAL)) != PBS_KRB5_OK) {\n\t\tsprintf(log_buffer, \"renewal failed, error: %d\", ret);\n\t\tlog_record(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t   pjob->ji_qs.ji_jobid, log_buffer);\n\t}\n#if defined(HAVE_LIBKAFS) || defined(HAVE_LIBKOPENAFS)\n\tpbs_task *ptask;\n\n\tfor (ptask = (pbs_task *) GET_NEXT(pjob->ji_tasks);\n\t     ptask;\n\t     ptask = (pbs_task *) GET_NEXT(ptask->ti_jobtask)) {\n\t\tif (ptask->ti_job == pjob && ptask->ti_qs.ti_status == TI_STATE_RUNNING) {\n\t\t\tret = signal_afslog(ptask, SIGHUP);\n\t\t\tif (ret) {\n\t\t\t\tsprintf(log_buffer, \"afslog SIGHUP failed for task: %8.8X\",\n\t\t\t\t\tptask->ti_qs.ti_task);\n\t\t\t\tlog_record(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t\t\t   ptask->ti_job->ji_qs.ji_jobid, log_buffer);\n\t\t\t}\n\t\t}\n\t}\n#endif\n\t/* we don't want mom to have ccache of some user... */\n\tunsetenv(\"KRB5CCNAME\");\n}\n\n/**\n * @brief\n * \tstore_or_update_cred - save received credentials into mom's memory\n *\n * @param[in] jobid - Job ID\n * @param[in] princ - cred id (e.g. 
principal)\n * @param[in] data - the credentials itself\n * @param[in] data_base64 - the credentials in base64\n *\n * @return void\n */\nvoid\nstore_or_update_cred(char *jobid, char *credid, int cred_type, krb5_data *data, char *data_base64, long validity)\n{\n\tsvrcred_data *cred_data;\n\n\tcred_data = (svrcred_data *) GET_NEXT(svr_allcreds);\n\twhile (cred_data) {\n\t\tif (strcmp(cred_data->cr_jobid, jobid) == 0) {\n\t\t\tfree(cred_data->cr_data->data);\n\t\t\tfree(cred_data->cr_data);\n\t\t\tif (cred_data->cr_data_base64)\n\t\t\t\tfree(cred_data->cr_data_base64);\n\n\t\t\tcred_data->cr_type = cred_type;\n\t\t\tcred_data->cr_data = data;\n\t\t\tcred_data->cr_data_base64 = data_base64;\n\t\t\tcred_data->cr_validity = validity;\n\t\t\treturn;\n\t\t}\n\t\tcred_data = (svrcred_data *) GET_NEXT(cred_data->cr_link);\n\t}\n\n\tif ((cred_data = (svrcred_data *) malloc(sizeof(svrcred_data))) == NULL) {\n\t\tlog_err(errno, __func__, \"Unable to allocate Memory!\\n\");\n\t\treturn;\n\t}\n\n\tCLEAR_LINK(cred_data->cr_link);\n\n\tcred_data->cr_jobid = strdup(jobid);\n\tcred_data->cr_credid = strdup(credid);\n\tcred_data->cr_type = cred_type;\n\tcred_data->cr_data = data;\n\tcred_data->cr_data_base64 = data_base64;\n\tcred_data->cr_validity = validity;\n\n\tappend_link(&svr_allcreds, &cred_data->cr_link, cred_data);\n}\n\n/**\n * @brief\n * \tdelete_cred - delete credentials associated with job id from the mom's\n *\tmemory\n *\n * @param[in] jobid - Job ID\n *\n * @return void\n */\nvoid\ndelete_cred(char *jobid)\n{\n\tsvrcred_data *cred_data;\n\n\tcred_data = (svrcred_data *) GET_NEXT(svr_allcreds);\n\twhile (cred_data) {\n\t\tif (strcmp(cred_data->cr_jobid, jobid) == 0) {\n\t\t\tfree(cred_data->cr_jobid);\n\t\t\tfree(cred_data->cr_credid);\n\t\t\tfree(cred_data->cr_data->data);\n\t\t\tfree(cred_data->cr_data);\n\t\t\tif 
(cred_data->cr_data_base64)\n\t\t\t\tfree(cred_data->cr_data_base64);\n\n\t\t\tdelete_link(&cred_data->cr_link);\n\t\t\tfree(cred_data);\n\t\t\treturn;\n\t\t}\n\t\tcred_data = (svrcred_data *) GET_NEXT(cred_data->cr_link);\n\t}\n}\n\n/**\n * @brief\n * \tfind_cred_data_by_jobid - try to find credentials in mom's memory by the jobid\n *\n * @param[in] jobid - Job ID\n *\n * @return \tsvrcred_data\n * @retval\tcredentials data on success\n * @retval\tNULL otherwise\n */\nstatic svrcred_data *\nfind_cred_data_by_jobid(char *jobid)\n{\n\tsvrcred_data *cred_data;\n\n\tcred_data = (svrcred_data *) GET_NEXT(svr_allcreds);\n\twhile (cred_data) {\n\t\tif (strcmp(cred_data->cr_jobid, jobid) == 0)\n\t\t\treturn cred_data;\n\n\t\tcred_data = (svrcred_data *) GET_NEXT(cred_data->cr_link);\n\t}\n\treturn NULL;\n}\n\n/**\n * @brief\n * \tim_cred_send - send job's credentials from superior mom to sister mom.\n *\tFind the credentials in base64 in superior mom's memory and send them\n *\tvia IM protocol. This function is meant to be run by send_sisters, and\n *\tit shouldn't be sent via mcast because mcast can't be wrapped by GSS.\n *\n * @param[in] pjob - job structure\n * @param[in] xp - node structure\n * @param[in] stream - tpp channel\n *\n * @return \tint\n * @retval\tDIS_SUCCESS on success\n * @retval\t!= DIS_SUCCESS otherwise\n */\nint\nim_cred_send(job *pjob, hnodent *xp, int stream)\n{\n\tint ret;\n\tsvrcred_data *cred_data;\n\tchar *data_base64;\n\n\tcred_data = find_cred_data_by_jobid(pjob->ji_qs.ji_jobid);\n\n\tif (cred_data == NULL || (data_base64 = cred_data->cr_data_base64) == NULL) {\n\t\tret = DIS_PROTO;\n\t\tgoto done;\n\t}\n\n\tret = diswui(stream, cred_data->cr_type);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto done;\n\n\tret = diswst(stream, data_base64);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto done;\n\n\tret = diswul(stream, cred_data->cr_validity);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto done;\n\n\treturn DIS_SUCCESS;\n\ndone:\n\tsprintf(log_buffer, \"dis err %d (%s)\", ret, dis_emsg[ret]);\n\tDBPRT((\"%s: %s\\n\", __func__, 
log_buffer))\n\tlog_joberr(-1, __func__, log_buffer, pjob->ji_qs.ji_jobid);\n\treturn ret;\n}\n\n/**\n * @brief\n * \tim_cred_read - read received (via IM) credentials in base64 on sister\n *\tmom, store the credentials in mom's memory and renew the credentials for\n *\tassociated job.\n *\n * @param[in] pjob - job structure\n * @param[in] np - node structure\n * @param[in] stream - tpp channel\n *\n * @return \tint\n * @retval\tDIS_SUCCESS on success\n * @retval\t!= DIS_SUCCESS otherwise\n */\nint\nim_cred_read(job *pjob, hnodent *np, int stream)\n{\n\tint ret;\n\tchar *data_base64;\n\tunsigned char out_data[CRED_DATA_SIZE];\n\tssize_t out_len = 0;\n\tchar buf[LOG_BUF_SIZE];\n\tkrb5_data *data;\n\tint cred_type;\n\tlong validity;\n\n\tDBPRT((\"%s: entry\\n\", __func__))\n\n\tcred_type = disrui(stream, &ret);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto err;\n\n\tdata_base64 = disrst(stream, &ret);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto err;\n\n\tvalidity = disrul(stream, &ret);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto err;\n\n\tif (decode_block_base64((unsigned char *) data_base64, strlen(data_base64), out_data, &out_len, buf, LOG_BUF_SIZE) != 0) {\n\t\tlog_err(errno, __func__, buf);\n\t\tret = DIS_PROTO;\n\t\tgoto err;\n\t}\n\n\tfree(data_base64);\n\n\tif ((data = (krb5_data *) malloc(sizeof(krb5_data))) == NULL) {\n\t\tlog_err(errno, __func__, \"Unable to allocate Memory!\\n\");\n\t\tret = DIS_NOMALLOC;\n\t\tgoto err;\n\t}\n\n\tif ((data->data = (char *) malloc(sizeof(unsigned char) * out_len)) == NULL) {\n\t\tlog_err(errno, __func__, \"Unable to allocate Memory!\\n\");\n\t\tret = DIS_NOMALLOC;\n\t\tgoto err;\n\t}\n\n\tdata->length = out_len;\n\tmemcpy(data->data, out_data, out_len);\n\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB,\n\t\t  LOG_INFO,\n\t\t  pjob->ji_qs.ji_jobid,\n\t\t  \"credentials from superior mom received\");\n\n\tstore_or_update_cred(pjob->ji_qs.ji_jobid,\n\t\t\t     get_jattr_str(pjob, JOB_ATR_cred_id),\n\t\t\t     cred_type,\n\t\t\t     data,\n\t\t\t   
  NULL,\n\t\t\t     validity);\n\n\t/* I am the sister and new cred has been received - lets renew creds */\n\trenew_job_cred(pjob);\n\n\treturn DIS_SUCCESS;\n\nerr:\n\t/* Getting here means we had a read failure. */\n\tsprintf(log_buffer, \"dis err %d (%s)\", ret, dis_emsg[ret]);\n\tDBPRT((\"%s: %s\\n\", __func__, log_buffer))\n\tlog_joberr(-1, __func__, log_buffer, pjob->ji_qs.ji_jobid);\n\treturn ret;\n}\n\n/**\n * @brief\n * \tsend_cred_sisters - send credentials from superior mom to all sister moms\n *\n * @param[in] pjob - job structure\n *\n * @return void\n */\nvoid\nsend_cred_sisters(job *pjob)\n{\n\tint i;\n\n\tif (pjob->ji_numnodes > 1) {\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB,\n\t\t\t  LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t  \"sending credentials to sisters\");\n\n\t\ti = send_sisters(pjob, IM_CRED, im_cred_send);\n\n\t\tif (i != pjob->ji_numnodes - 1)\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t\t  \"could not send credentials to sisters\");\n\t\t/* If send_sisters() fails, the job is probably doomed anyway.\n\t\t\t * Should we resend credentials on fail?\n\t\t\t * If yes, here is the right place*/\n\t}\n}\n\n#if defined(HAVE_LIBKAFS) || defined(HAVE_LIBKOPENAFS)\nstatic volatile sig_atomic_t rec_signal = 0;\nstatic volatile struct krb_holder *afslog_ticket;\n\n#define AFSLOG_TIME_SLEEP 60 /* 1 minute */\n\n/**\n * @brief\n * \tdo_afslog - tests the presence of AFS and do the AFS log if the test is true\n *\n * @param[in] context - GSS context\n * @param[in] job_info - eexec_job_info\n *\n * @return \tkrb5_error_code\n * @retval\t0\n */\nstatic krb5_error_code\ndo_afslog(krb5_context context, eexec_job_info job_info)\n{\n\tkrb5_error_code ret = 0;\n\n\tif (k_hasafs() && (ret = krb5_afslog(context, job_info->ccache, NULL, NULL)) != 0) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"krb5_afslog failed, error: %d\", ret);\n\t\tlog_record(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, 
LOG_ERR, job_info->jobid, log_buffer);\n\n\t\t/* ignore this error */\n\t\tret = 0;\n\t}\n\n\treturn (ret);\n}\n\n/**\n * @brief\n * \tsingleshot_afslog - do an immediate afslog, no process started\n *\n * @param[in] ticket - kerberos ticket\n *\n * @return void\n */\nvoid\nsingleshot_afslog(struct krb_holder *ticket)\n{\n\tchar buf[LOG_BUF_SIZE * 2];\n\tchar errbuf[LOG_BUF_SIZE];\n\n\tif (k_hasafs()) {\n\t\t/* Go user */\n\t\tif (seteuid(ticket->job_info->job_uid) < 0) {\n\t\t\tstrerror_r(errno, errbuf, sizeof(errbuf));\n\t\t\tsnprintf(buf, sizeof(buf), \"Could not set uid using \\\"setuid()\\\": %s.\", errbuf);\n\t\t\tlog_err(errno, __func__, buf);\n\n\t\t\treturn;\n\t\t}\n\n\t\tk_setpag();\n\t\tdo_afslog(ticket->context, ticket->job_info);\n\n\t\t/* Go root */\n\t\tif (seteuid(0) < 0) {\n\t\t\tstrerror_r(errno, errbuf, sizeof(errbuf));\n\t\t\tsnprintf(buf, sizeof(buf), \"Could not reset root privileges: %s.\", errbuf);\n\t\t\tlog_err(errno, __func__, buf);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \tdo_afslog_on_signal - AFS log signal handler\n *\n * @param[in] signal - received signal\n *\n * @return void\n */\nstatic void\ndo_afslog_on_signal(int signal)\n{\n\tif (signal == SIGHUP) {\n\t\tif (do_afslog(afslog_ticket->context, afslog_ticket->job_info))\n\t\t\treturn;\n\t} else {\n\t\trec_signal = signal;\n\t}\n}\n\n/**\n * @brief\n * \twait_afslog - in the AFS log process, wait for SIGHUP to do the AFS log\n *\tor for SIGTERM to terminate\n *\n * @return void\n */\nstatic void\nwait_afslog()\n{\n\t/* initialize signal catcher */\n\tstruct sigaction sa;\n\tmemset(&sa, 0, sizeof(sa));\n\tsa.sa_handler = do_afslog_on_signal;\n\tsigaction(SIGTERM, &sa, NULL);\n\tsa.sa_handler = do_afslog_on_signal;\n\tsigaction(SIGHUP, &sa, NULL);\n\n\twhile (rec_signal == 0) {\n\t\tsleep(AFSLOG_TIME_SLEEP);\n\t}\n}\n\n/**\n * @brief\n * \tstart_afslog - fork AFS log process for a specific job task,\n *\tsave PID file and do AFS log for the task. 
The AFS log needs to be done\n *\tin a child of the original process that set the PAG.\n *\n * @param[in] ptask - job task\n * @param[in] ticket - kerberos ticket associated with process\n * @param[in] fd1 - read socket - needs to be closed in forked process\n * @param[in] fd2 - write socket - needs to be closed in forked process\n *\n * @return \tint\n * @retval\tPBS_KRB5_OK on success\n * @retval\t!= PBS_KRB5_OK otherwise\n */\nint\nstart_afslog(const task *ptask, struct krb_holder *ticket, int fd1, int fd2)\n{\n\tchar buf[LOG_BUF_SIZE * 2];\n\tchar errbuf[LOG_BUF_SIZE];\n\tint ret = PBS_KRB5_OK;\n\tchar pid_file[MAXPATHLEN];\n\tint local_ticket = 0;\n\tjob *pjob;\n\tint fd;\n\tint pid;\n\n\tif (!k_hasafs())\n\t\treturn PBS_KRB5_OK;\n\n\tif (ptask == NULL)\n\t\treturn PBS_KRB5_ERR_INTERNAL;\n\n\tpjob = ptask->ti_job;\n\n\tif (*pjob->ji_qs.ji_fileprefix != '\\0')\n\t\tsnprintf(pid_file, sizeof(pid_file), \"%s%s_afslog_%8.8X.pid\", path_jobs, pjob->ji_qs.ji_fileprefix, (unsigned int) ptask->ti_qs.ti_task);\n\telse\n\t\tsnprintf(pid_file, sizeof(pid_file), \"%s%s_afslog_%8.8X.pid\", path_jobs, pjob->ji_qs.ji_jobid, (unsigned int) ptask->ti_qs.ti_task);\n\n\tfd = open(pid_file, O_CREAT | O_EXCL | O_WRONLY, 0600);\n\tif (fd == -1) {\n\t\t/* another afslog process is running? 
*/\n\t\tsnprintf(buf, sizeof(buf), \"opening PID file for afslog process (%s) uid = %d\", pid_file, getuid());\n\t\tlog_err(errno, __func__, buf);\n\t\treturn PBS_KRB5_ERR_CANT_OPEN_FILE;\n\t}\n\n\t/* Go user */\n\tif (seteuid(pjob->ji_qs.ji_un.ji_momt.ji_exuid) < 0) {\n\t\tstrerror_r(errno, errbuf, sizeof(errbuf));\n\t\tsnprintf(buf, sizeof(buf), \"Could not set uid using \\\"setuid()\\\": %s.\", errbuf);\n\t\tlog_err(errno, __func__, buf);\n\n\t\treturn PBS_KRB5_ERR_INTERNAL;\n\t}\n\n\tif (ticket == NULL) {\n\t\tticket = alloc_ticket();\n\t\tif (ticket == NULL) {\n\t\t\t/* Go root on error */\n\t\t\tif (seteuid(0) < 0) {\n\t\t\t\tstrerror_r(errno, errbuf, sizeof(errbuf));\n\t\t\t\tsnprintf(buf, sizeof(buf), \"Could not reset root privileges: %s.\", errbuf);\n\t\t\t\tlog_err(errno, __func__, buf);\n\t\t\t}\n\n\t\t\treturn PBS_KRB5_ERR_INTERNAL;\n\t\t}\n\n\t\tif ((ret = init_ticket_from_ccache(pjob, NULL, ticket)) != PBS_KRB5_OK) {\n\t\t\tif (local_ticket)\n\t\t\t\tfree_ticket(ticket, CRED_RENEWAL);\n\n\t\t\t/* Go root */\n\t\t\tif (seteuid(0) < 0) {\n\t\t\t\tstrerror_r(errno, errbuf, sizeof(errbuf));\n\t\t\t\tsnprintf(buf, sizeof(buf), \"Could not reset root privileges: %s.\", errbuf);\n\t\t\t\tlog_err(errno, __func__, buf);\n\n\t\t\t\treturn PBS_KRB5_ERR_INTERNAL;\n\t\t\t}\n\n\t\t\t/* job without a principal */\n\t\t\t/* not an error, but do nothing */\n\t\t\tif (ret == PBS_KRB5_ERR_NO_KRB_PRINC)\n\t\t\t\treturn PBS_KRB5_OK;\n\n\t\t\treturn ret;\n\t\t}\n\n\t\tlocal_ticket = 1;\n\t}\n\n\tafslog_ticket = ticket;\n\n\tk_setpag();\n\n\tdo_afslog(afslog_ticket->context, afslog_ticket->job_info);\n\n\tpid = fork();\n\tif (pid < 0) {\n\t\t/* Go root on error */\n\t\tif (seteuid(0) < 0) {\n\t\t\tstrerror_r(errno, errbuf, sizeof(errbuf));\n\t\t\tsnprintf(buf, sizeof(buf), \"Could not reset root privileges: %s.\", errbuf);\n\t\t\tlog_err(errno, __func__, buf);\n\n\t\t\treturn PBS_KRB5_ERR_INTERNAL;\n\t\t}\n\n\t\tlog_err(errno, __func__, \"fork() 
failed\");\n\t\tclose(fd);\n\n\t\treturn PBS_KRB5_ERR_INTERNAL;\n\t}\n\n\tif (pid > 0) {\n\t\t/* Go root in parent */\n\t\tif (seteuid(0) < 0) {\n\t\t\tstrerror_r(errno, errbuf, sizeof(errbuf));\n\t\t\tsnprintf(buf, sizeof(buf), \"Could not reset root privileges: %s.\", errbuf);\n\t\t\tlog_err(errno, __func__, buf);\n\n\t\t\treturn PBS_KRB5_ERR_INTERNAL;\n\t\t}\n\n\t\tsnprintf(buf, sizeof(buf), \"%d\\n\", pid);\n\t\tret = write(fd, buf, strlen(buf));\n\t\tif (ret == -1) {\n\t\t\tlog_err(errno, __func__, \"writing pid failed\");\n\t\t\tgoto out;\n\t\t}\n\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"afslog for task %8.8X started, pid: %d\",\n\t\t\t ptask->ti_qs.ti_task, pid);\n\t\tlog_record(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\n\t\tret = PBS_KRB5_OK;\n\n\tout:\n\t\tif (fd != -1) {\n\t\t\tfsync(fd);\n\t\t\tclose(fd);\n\t\t}\n\n\t\tif (local_ticket)\n\t\t\tfree_ticket(ticket, CRED_RENEWAL);\n\n\t\treturn ret;\n\t}\n\n\tclose(fd);\n\n\tlog_close(0);\n\tpbs_conf.locallog = 0;\n\tlog_open(log_file, path_log);\n\n\tif (fd1 >= 0)\n\t\tclose(fd1);\n\tif (fd2 >= 0)\n\t\tclose(fd2);\n\n\tif (setsid() == -1)\n\t\tlog_record(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR, pjob->ji_qs.ji_jobid, \"afslog could not setsid\");\n\n\twait_afslog();\n\n\tif (local_ticket)\n\t\tfree_ticket(ticket, CRED_RENEWAL);\n\n\texit(0);\n}\n\n/**\n * @brief\n * \tsignal_afslog - send signal to the specific job task in order to do\n *\tAFS log or terminate the process on job exit.\n *\n * @param[in] ptask - job task\n * @param[in] signal - signal to send\n *\n * @return \tint\n * @retval\tPBS_KRB5_OK on success\n * @retval\t!= PBS_KRB5_OK otherwise\n */\nint\nsignal_afslog(const task *ptask, int signal)\n{\n\tchar buf[LOG_BUF_SIZE];\n\tchar pid_file[MAXPATHLEN];\n\tFILE *fd;\n\tstruct stat cache_info;\n\tjob *pjob;\n\n\tif (ptask == NULL)\n\t\treturn PBS_KRB5_ERR_INTERNAL;\n\n\tpjob = ptask->ti_job;\n\tif (pjob == NULL)\n\t\treturn PBS_KRB5_ERR_INTERNAL;\n\n\tif 
(*pjob->ji_qs.ji_fileprefix != '\\0')\n\t\tsnprintf(pid_file, sizeof(pid_file), \"%s%s_afslog_%8.8X.pid\", path_jobs, pjob->ji_qs.ji_fileprefix, (unsigned int) ptask->ti_qs.ti_task);\n\telse\n\t\tsnprintf(pid_file, sizeof(pid_file), \"%s%s_afslog_%8.8X.pid\", path_jobs, pjob->ji_qs.ji_jobid, (unsigned int) ptask->ti_qs.ti_task);\n\n\tfd = fopen(pid_file, \"r\");\n\tif (fd == NULL) {\n\t\tsnprintf(buf, sizeof(buf), \"Failed to open pidfile: %s\", pid_file);\n\t\tlog_err(errno, __func__, buf);\n\n\t\treturn PBS_KRB5_ERR_CANT_OPEN_FILE;\n\t}\n\n\tint pid = 0;\n\tif (fscanf(fd, \"%d\", &pid) < 1) {\n\t\tpid = -1;\n\t}\n\n\tfclose(fd);\n\n\tif (pid >= 0) {\n\t\tif (kill(pid, signal) != 0) {\n\t\t\tsnprintf(buf, sizeof(buf), \"afslog for task %8.8X could not send signal %d to PID %d.\",\n\t\t\t\t (unsigned int) ptask->ti_qs.ti_task, signal, pid);\n\t\t\tlog_err(errno, __func__, buf);\n\n\t\t\treturn PBS_KRB5_ERR_KILL_PROCESS;\n\t\t} else {\n\t\t\tsprintf(log_buffer, \"afslog for task %8.8X, signal %d sent to pid %d\",\n\t\t\t\t(unsigned int) ptask->ti_qs.ti_task, signal, pid);\n\t\t\tlog_record(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t}\n\t} else {\n\t\tsnprintf(buf, sizeof(buf), \"afslog for task %8.8X failed to get pid from pidfile: %s\",\n\t\t\t (unsigned int) ptask->ti_qs.ti_task, pid_file);\n\t\tlog_err(errno, __func__, buf);\n\n\t\treturn PBS_KRB5_ERR_KILL_PROCESS;\n\t}\n\n\tif (signal != SIGHUP && stat(pid_file, &cache_info) == 0) {\n\t\tunlink(pid_file);\n\t}\n\n\treturn PBS_KRB5_OK;\n}\n\n/**\n * @brief\n * \tgetpag - recognize afs pag in groups and return the pag.\n *\n * @return \tint32_t\n * @retval\tpag > 0 on success\n * @retval\t0 otherwise\n */\nint32_t\ngetpag()\n{\n\tgid_t *grplist = NULL;\n\tint i;\n\tint numsup;\n\tstatic int maxgroups = 0;\n\tint32_t pag = 0;\n\n\tif (k_hasafs() == 0)\n\t\treturn 0;\n\n\tmaxgroups = (int) sysconf(_SC_NGROUPS_MAX);\n\n\tgrplist = calloc((size_t) maxgroups, 
sizeof(gid_t));\n\tif (grplist == NULL)\n\t\treturn 0;\n\n\tnumsup = getgroups(maxgroups, grplist);\n\n\tfor (i = 0; i < numsup; ++i) {\n\t\t/* last (4th) byte in pag is char 'A' */\n\t\tif ((grplist[i] >> 24) == 'A') {\n\t\t\tpag = grplist[i];\n\t\t\tbreak;\n\t\t}\n\t}\n\n\tif (grplist)\n\t\tfree(grplist);\n\n\treturn pag;\n}\n#endif\n#endif\n"
  },
  {
    "path": "src/resmom/renew_creds.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef RENEW_CREDS_H\n#define RENEW_CREDS_H\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\n#include <sys/types.h>\n\n#include \"list_link.h\"\n#include \"pbs_ifl.h\"\n#include \"attribute.h\"\n#include \"job.h\"\n#include \"mom_mach.h\"\n#include \"work_task.h\"\n\n#include <krb5.h>\n\nstruct krb_holder;\n\n/* cred actions */\n#define CRED_SINGLESHOT 0\n#define CRED_RENEWAL 1\n#define CRED_SETENV 2\n#define CRED_CLOSE 3\n#define CRED_DESTROY 4\n\n#define CRED_DATA_SIZE 4096\n\nenum PBS_KRB5_ERRORS {\n\tPBS_KRB5_OK = 0,\n\tPBS_KRB5_ERR_INTERNAL,\n\tPBS_KRB5_ERR_CONTEXT_INIT,\n\tPBS_KRB5_ERR_GET_CREDS,\n\tPBS_KRB5_ERR_NO_KRB_PRINC,\n\tPBS_KRB5_ERR_NO_USERNAME,\n\tPBS_KRB5_ERR_USER_NOT_FOUND,\n\tPBS_KRB5_ERR_CANT_OPEN_FILE,\n\tPBS_KRB5_ERR_KILL_PROCESS,\n\tPBS_KRB5_LAST\n};\n\nstruct krb_holder *alloc_ticket();\nint init_ticket_from_job(job *pjob, const task *ptask, struct krb_holder *ticket, int cred_action);\nint init_ticket_from_req(char *principal, char *jobid, struct krb_holder *ticket, int cred_action);\nvoid free_ticket(struct krb_holder *ticket, int cred_action);\nchar *get_ticket_ccname(struct krb_holder *ticket);\n\nint cred_by_job(job *pjob, int cred_action);\nvoid renew_job_cred(job *pjob);\n\n/* storage functions */\nvoid store_or_update_cred(char 
*jobid, char *credid, int cred_type, krb5_data *data, char *data_base64, long validity);\nvoid delete_cred(char *jobid);\n\nvoid send_cred_sisters(job *pjob);\n\nint im_cred_send(job *pjob, hnodent *xp, int stream);\nint im_cred_read(job *pjob, hnodent *np, int stream);\n\n#if defined(HAVE_LIBKAFS) || defined(HAVE_LIBKOPENAFS)\nvoid singleshot_afslog(struct krb_holder *ticket);\nint start_afslog(const task *ptask, struct krb_holder *ticket, int, int);\nint signal_afslog(const task *ptask, int signal);\n\nint32_t getpag();\n\n#define AFSLOG_TERM(x)                                                                                                                                  \\\n\t{                                                                                                                                               \\\n\t\tif (signal_afslog(x, SIGTERM))                                                                                                          \\\n\t\t\tlog_record(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR, x->ti_job->ji_qs.ji_jobid, \"sending SIGTERM to afslog process failed\"); \\\n\t}\n#else\n#define AFSLOG_TERM(x) \\\n\t{              \\\n\t}\n#endif /* OpenAFS */\n\n#endif\n\n#endif /* RENEW_CREDS_H */\n"
  },
  {
    "path": "src/resmom/requests.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <ctype.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <limits.h>\n#include <signal.h>\n#include <grp.h>\n#include <pwd.h>\n#include <unistd.h>\n#include <sys/wait.h>\n#include <dirent.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <assert.h>\n#include <sys/stat.h>\n#include <sys/types.h>\n#include <time.h>\n#include \"dis.h\"\n#include \"libpbs.h\"\n#include \"pbs_error.h\"\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"ticket.h\"\n#include \"credential.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"job.h\"\n#include \"batch_request.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"mom_mach.h\"\n#include \"mom_func.h\"\n#include \"mom_server.h\"\n#include \"net_connect.h\"\n#include \"log.h\"\n#include \"tpp.h\"\n#include \"hook.h\"\n#include \"pbs_python.h\"\n#include \"mom_hook_func.h\"\n#include \"work_task.h\"\n#include \"placementsets.h\"\n#include \"pbs_internal.h\"\n#include \"portability.h\"\n\n#ifdef __SANITIZE_ADDRESS__\n#include <sanitizer/common_interface_defs.h>\npid_t pid_sanitizer = -1;\n#endif\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n#include \"renew_creds.h\"\n#include <krb5.h>\nextern int decode_block_base64(unsigned char *ascii_data, 
ssize_t ascii_len, unsigned char *bin_data, ssize_t *p_bin_len, char *msg, size_t msg_len);\n#endif\n\n/**\n * @file\trequests.c\n */\n/* External Global Data Items */\n\nextern unsigned int default_server_port;\nextern int exiting_tasks;\nextern pbs_list_head svr_alljobs;\nextern char mom_host[];\n#ifdef WIN32\nextern char *mom_home;\n#endif\nextern char *msg_err_unlink;\nextern char *msg_mom_reject_root_scripts;\nextern int reject_root_scripts;\nextern char *path_spool;\nextern char *path_undeliv;\nextern attribute_def job_attr_def[];\nextern char *msg_jobmod;\nextern char *msg_manager;\nextern time_t time_now;\nextern time_t time_resc_updated;\nextern int resc_access_perm; /* see encode_resc() */\n/* in attr_fn_resc.c */\nextern int suspend_signal;\nextern int resume_signal;\nextern char *path_checkpoint;\nextern int restart_background;\nextern int reject_hook_scripts;\nextern int restart_transmogrify;\nextern char task_fmt[];\nextern char *msg_noloopbackif;\nextern char *msg_stageremote;\nextern char *path_hooks;\nextern char *msg_hookfile_open;\nextern char *msg_hookfile_write;\nextern unsigned long hooks_rescdef_checksum;\nextern char *path_rescdef;\n#if MOM_ALPS\nextern int alps_confirm_empty_timeout;\nextern int alps_confirm_switch_timeout;\n#endif\n\nextern long joinjob_alarm_time;\nextern long job_launch_delay;\nextern int update_joinjob_alarm_time;\nextern int update_job_launch_delay;\nextern pbs_list_head svr_allhooks;\n/* External Functions */\nextern int is_direct_write(job *, enum job_file, char *, int *);\nextern unsigned char pbs_aes_key[][16];\nextern unsigned char pbs_aes_iv[][16];\n\n/* Local Data Items */\nchar rcperr[MAXPATHLEN] = {'\\0'}; /* file to contain rcp error */\nchar *pbs_jobdir = NULL;\t  /* path to staging and execution dir of current job */\nchar *cred_buf = NULL;\nsize_t cred_len = 0;\nint cred_type = 0;\nint cred_pipe = -1;\nchar *pwd_buf = NULL;\n\n#ifndef WIN32\nstatic void post_cpyfile(job *pjob, int ev);\n#else\nextern char 
*save_actual_homedir(struct passwd *, job *);\nextern char *set_homedir_to_local_default(job *, char *);\n#endif\n\n#ifndef TRUE\n#define TRUE 1\n#define FALSE 0\n#endif\n\n#define STAGEOUT_FAILURE 65\n#define SUSPEND 1\n#define RESUME 0\n\n#ifdef __SANITIZE_ADDRESS__\n/**\n * @brief\n * \t__lsan_is_turned_off() - disable leak sanitizer for running process\n *  based on pid_sanitizer. If pid_sanitizer is zero, lsan is disabled.\n *  This is used for disabling lsan in child process in case of changing\n *  uid/gid of the child process. The sanitizer can not handle functions\n *  like setuid() and fails with not being able to connect to the thread.\n *\n * @return \tint\n * @retval\t1\tdisable LSAN\n * @retval\t0\tenable LSAN\n *\n */\nint __attribute__((used))\n__lsan_is_turned_off(void)\n{\n\tif (pid_sanitizer)\n\t\treturn 0;\n\n\treturn 1;\n}\n#endif\n\n/**\n * @brief\n * \tis_file_same() - are two paths pointing to the same file\n * @param[in] file1 - path1\n * @param[in] file2 - path2\n *\n * @return \tint\n * @retval\t1\tif are the same\n * @retval\t0\tif not the same (or cannot tell)\n *\n */\n\nstatic int\nis_file_same(char *file1, char *file2)\n{\n#ifndef WIN32\n\tstruct stat sb1, sb2;\n\n\tif ((stat(file1, &sb1) == 0) && (stat(file2, &sb2) == 0)) {\n\t\tif ((sb1.st_dev == sb2.st_dev) && (sb1.st_ino == sb2.st_ino))\n\t\t\treturn 1;\n\t}\n#endif\n\treturn 0;\n}\n\n/**\n * @brief\n * \tfork_to_user - fork mom and go to user's home directory\n *\talso sets up the global useruid and usergid in the child\n *\n *\tWARNING: valid only if called when preq points to a cpyfiles structure\n *\n * @param[in] preq - pointer to batch_request structure\n * @param[in] pjob - pointer to job structure (can be null)\n *\n * @return \tHANDLE\n * @retval\n * @retval\t!INVALID_HANDLE_VALUE - success\n * @retval\tINVALID_HANDLE_VALUE - failure\n */\n#ifdef WIN32\nstatic HANDLE\nfork_to_user(struct batch_request *preq, job *pjob)\n{\n\tstruct passwd *pwdp = NULL;\n\tstruct 
rq_cpyfile *rqcpf;\n\tstatic char buf[MAXPATHLEN + 1];\n\tchar lpath[MAXPATHLEN + 1];\n\n\t/* Need to look up the uid, gid, and home directory */\n\tif (preq->rq_type == PBS_BATCH_CopyFiles_Cred || preq->rq_type == PBS_BATCH_DelFiles_Cred) {\n\t\trqcpf = &preq->rq_ind.rq_cpyfile_cred.rq_copyfile;\n\t\tcred_buf = preq->rq_ind.rq_cpyfile_cred.rq_pcred;\n\t\tcred_len = preq->rq_ind.rq_cpyfile_cred.rq_credlen;\n\t} else {\n\t\trqcpf = &preq->rq_ind.rq_cpyfile;\n\t\tcred_buf = NULL;\n\t\tcred_len = 0;\n\t}\n\n\tif (pjob)\n\t\tpwdp = getpwnam(get_jattr_str(pjob, JOB_ATR_euser));\n\n\t/* we're trying to reuse old pw_userlogin since a mapped UNC */\n\t/* path maybe hanging off it. With pbs_mom running under     */\n\t/* SERVICE_ACCOUNT, we have to map drives under user session */\n\t/* Only the session that mapped the drive can unmap it.      */\n\tlog_buffer[0] = '\\0';\n\tif ((pwdp == NULL || pwdp->pw_userlogin == INVALID_HANDLE_VALUE) &&\n\t    (pwdp = logon_pw(preq->rq_ind.rq_cpyfile.rq_user, cred_buf, cred_len, pbs_decrypt_pwd, 0, log_buffer)) == NULL) {\n\t\tlog_err(-1, __func__, log_buffer);\n\t\treturn (INVALID_HANDLE_VALUE);\n\t}\n\n\tif (strlen(log_buffer) > 0)\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG, __func__, log_buffer);\n\n\tif (pwdp->pw_userlogin != INVALID_HANDLE_VALUE) {\n\t\tif (!impersonate_user(pwdp->pw_userlogin)) {\n\t\t\tlog_err(-1, \"fork_to_user2\", \"ImpersonateLoggedOnUser\");\n\t\t\treturn (INVALID_HANDLE_VALUE);\n\t\t}\n\t} else\n\t\treturn (INVALID_HANDLE_VALUE);\n\n\tpbs_strncpy(lpath, save_actual_homedir(pwdp, pjob), sizeof(lpath));\n\tCreateDirectory(lpath, 0); /* user homedir may not exist yet */\n\tif (chdir(lpath) == -1) {\n\t\tpbs_strncpy(lpath, set_homedir_to_local_default(pjob, preq->rq_ind.rq_cpyfile.rq_user), sizeof(lpath));\n\t\tCreateDirectory(lpath, 0); /* user homedir may not exist yet */\n\t\t(void) chdir(lpath);\n\t}\n\n\tsetenv(\"PBS_EXEC\", pbs_conf.pbs_exec_path, 1);\n\n\treturn 
(pwdp->pw_userlogin);\n}\n\n#else\n\n/**\n * @brief\n * \tfrk_err - error function for \"fork_to_user()\".\n *\tCall this if there is an error reply needed for the batch request\n *\tin the child process.  The error is returned to the Server and the\n *\tchild process exits.\n *\n * @param[in] err - error number\n * @param[in] preq - pointer to batch_request structure\n *\n * @return \tVoid\n *\n */\n\nstatic void\nfrk_err(int err, struct batch_request *preq)\n{\n\treq_reject(err, 0, preq);\n\texit(0);\n}\n\n/**\n * @brief\n *\tfork_to_user - fork mom and go to user's home directory\n *\t\t  also sets up the global useruid and usergid in the child\n *\n *\tWARNING: valid only if called when preq points to a cpyfiles structure\n *\t\t or a cpyfiles_cred structure\n *\n * @param[in] preq - pointer to batch_request structure\n * @param[in] pjob - pointer to job structure (can be null)\n *\n * @return\tpid_t\n * @retval\t>0 - success\n * @retval\t<0 - failure\n */\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\nstatic pid_t\nfork_to_user(struct batch_request *preq, job *pjob, struct krb_holder *ticket)\n#else\nstatic pid_t\nfork_to_user(struct batch_request *preq, job *pjob)\n#endif\n{\n\tstruct group *grpp;\n\tpid_t pid;\n\tstruct passwd *pwdp;\n\tuid_t useruid;\n\tgid_t usergid;\n\tgid_t user_rgid;\n\tint fds[2];\n\tstruct rq_cpyfile *rqcpf;\n\tstatic char buf[MAXPATHLEN + 1];\n\n\tpid = fork_me(preq->rq_conn);\n\n#ifdef __SANITIZE_ADDRESS__\n\t/* see the comment of __lsan_is_turned_off() */\n\tpid_sanitizer = pid;\n#endif\n\n\tif (pid > 0) {\n\t\tif (preq->prot == PROT_TCP)\n\t\t\tfree_br(preq); /* parent - note leave connection open   */\n\t\treturn pid;\n\t} else if (pid < 0)\n\t\treturn (-PBSE_SYSTEM);\n\n\t/* The Child */\n\n\tif (preq->rq_type == PBS_BATCH_CopyFiles_Cred || preq->rq_type == PBS_BATCH_DelFiles_Cred)\n\t\trqcpf = &preq->rq_ind.rq_cpyfile_cred.rq_copyfile;\n\telse\n\t\trqcpf = &preq->rq_ind.rq_cpyfile;\n\n\t/* create a PBS_EXEC env 
entry */\n\tsetenv(\"PBS_EXEC\", pbs_conf.pbs_exec_path, 1);\n\n\tif (pjob != NULL && pjob->ji_grpcache != NULL) {\n\t\t/* use the values cached in the job structure */\n\t\tuseruid = pjob->ji_qs.ji_un.ji_momt.ji_exuid;\n\t\tusergid = pjob->ji_qs.ji_un.ji_momt.ji_exgid;\n\t\tif (chdir(pjob->ji_grpcache->gc_homedir) == -1)\n\t\t\tlog_errf(-1, __func__, \"chdir failed. ERR : %s\", strerror(errno));\n\t\tuser_rgid = pjob->ji_grpcache->gc_rgid;\n\t\t/* Account ID used to be set here for Cray via acctid(). */\n\t} else {\n\t\t/* Need to look up the uid, gid, and home directory */\n\t\tif ((pwdp = getpwnam(rqcpf->rq_user)) == NULL)\n\t\t\tfrk_err(PBSE_BADUSER, preq); /* no return */\n\t\tuseruid = pwdp->pw_uid;\n\t\tuser_rgid = pwdp->pw_gid;\n\n\t\tif (rqcpf->rq_group[0] == '\\0')\n\t\t\tusergid = pwdp->pw_gid; /* default to login group */\n\t\telse {\n\t\t\tif ((grpp = getgrnam(rqcpf->rq_group)) == NULL)\n\t\t\t\tfrk_err(PBSE_BADUSER, preq); /* no return */\n\t\t\tusergid = grpp->gr_gid;\n\t\t}\n\t\tif (chdir(pwdp->pw_dir) == -1) /* change to user's home directory */\n\t\t\tlog_errf(-1, __func__, \"chdir failed. 
ERR : %s\", strerror(errno));\n\t}\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\t/* singleshot ticket, without renewal */\n\tif (pjob != NULL)\n\t\tinit_ticket_from_job(pjob, NULL, ticket, CRED_SINGLESHOT);\n\telse\n\t\tinit_ticket_from_req(preq->rq_extend, preq->rq_ind.rq_cpyfile.rq_jobid, ticket, CRED_SINGLESHOT);\n\n#if defined(HAVE_LIBKAFS) || defined(HAVE_LIBKOPENAFS)\n\tsingleshot_afslog(ticket);\n#endif\n#endif\n\n\tif (preq->rq_type == PBS_BATCH_CopyFiles_Cred || preq->rq_type == PBS_BATCH_DelFiles_Cred) {\n\n\t\tcred_buf = preq->rq_ind.rq_cpyfile_cred.rq_pcred;\n\t\tcred_len = preq->rq_ind.rq_cpyfile_cred.rq_credlen;\n\n\t\tswitch (preq->rq_ind.rq_cpyfile_cred.rq_credtype) {\n\t\t\tcase PBS_CREDTYPE_NONE:\n\t\t\t\tif (becomeuser_args(rqcpf->rq_user, useruid, usergid, user_rgid) == -1) {\n\t\t\t\t\tlog_err(errno, __func__, \"set privilege as user\");\n\t\t\t\t\tfrk_err(PBSE_SYSTEM, preq); /* no return */\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase PBS_CREDTYPE_AES:\n\t\t\t\tif (becomeuser_args(rqcpf->rq_user, useruid, usergid, user_rgid) == -1) {\n\t\t\t\t\tlog_err(errno, __func__, \"set privilege as user\");\n\t\t\t\t\tfrk_err(PBSE_SYSTEM, preq); /* no return */\n\t\t\t\t}\n\t\t\t\tif (pbs_decrypt_pwd(cred_buf, PBS_CREDTYPE_AES, cred_len, &pwd_buf, (const unsigned char *) pbs_aes_key, (const unsigned char *) pbs_aes_iv) != 0) {\n\t\t\t\t\tlog_joberr(-1, __func__, \"decrypt_pwd\", rqcpf->rq_jobid);\n\t\t\t\t\tfrk_err(PBSE_BADCRED, preq); /* no return */\n\t\t\t\t}\n\t\t\t\tif (pipe(fds) == -1) {\n\t\t\t\t\tlog_err(errno, __func__, \"pipe\");\n\t\t\t\t\tfrk_err(PBSE_SYSTEM, preq); /* no return */\n\t\t\t\t}\n\t\t\t\tcred_pipe = fds[1];\n\n\t\t\t\tsprintf(buf, \"%d\", fds[0]);\n\t\t\t\tsetenv(\"PBS_PWPIPE\", buf, 1);\n\t\t\t\tfcntl(cred_pipe, F_SETFD, 1); /* close on exec */\n\n\t\t\t\tbreak;\n\n\t\t\tdefault:\n\t\t\t\tlog_err(errno, __func__, \"unknown credential type\");\n\t\t\t\tbreak;\n\t\t}\n\t} else { /* no cred */\n\t\tif 
(becomeuser_args(rqcpf->rq_user, useruid, usergid, user_rgid) == -1) {\n\t\t\tlog_err(errno, __func__, \"set privilege as user\");\n\t\t\tfrk_err(PBSE_SYSTEM, preq); /* no return */\n\t\t}\n\t}\n\n\treturn pid;\n}\n#endif /* WIN32 */\n\n#define RT_BLK_SZ 65536\n/**\n * @brief\n * \tCalled when a job is rerun (qrerun) to copy the job's standard out/error\n * \tfiles back to the Server until job is rescheduled.  Function opens\n * \tthe StdOut or StdErr file for the job, reading in blocks of it and\n * \tships the blocks to the Server.\n * \tIf the file is shipped back to the Server successfully and it was in\n * \tPBS_HOME/spool, it is then deleted.\n *\n * @see\n * @param[in] pjob  - Accepts a job pointer.\n * @param[in] which - enum for standard job files\n * @param[in] sock  - socket descriptor value\n *\n * @return\tint\n * @retval\t 0  - success.\n * @retval\t-1  - failure.\n *\n */\nstatic int\nreturn_file(job *pjob, enum job_file which, int sock)\n{\n\tint amt;\n\tchar buf[RT_BLK_SZ];\n\tint fds;\n\tstruct batch_request *prq;\n\tint rc = 0;\n\tint seq = 0;\n\tint direct_write_possible = 1;\n\n\tchar path[MAXPATHLEN + 1]; /* needed by is_direct_write */\n\n\tif ((is_jattr_set(pjob, JOB_ATR_interactive)) && (get_jattr_long(pjob, JOB_ATR_interactive) > 0)) {\n\t\treturn (0); /* interactive job, no file to copy */\n\t}\n\n\t/* Check for direct write of this file - direct write files are */\n\t/* not copied back to the server on rerun.                      */\n\tif (is_direct_write(pjob, which, path, &direct_write_possible)) {\n\t\tsprintf(log_buffer,\n\t\t\t\"Skipping copy of directly written %s file of job %s\",\n\t\t\t(which == StdOut) ? 
\"STDOUT\" : \"STDERR\", pjob->ji_qs.ji_jobid);\n\t\tlog_event(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\treturn (0); /* Direct write, no copy done */\n\t}\n\n\tfds = open_std_file(pjob, which, O_RDONLY,\n\t\t\t    pjob->ji_qs.ji_un.ji_momt.ji_exgid);\n\tif (fds < 0)\n\t\treturn (0);\n\n\t/* Build a \"request\" to the Server which will contain */\n\t/* a block of the file and send it on its way         */\n\n\tprq = alloc_br(PBS_BATCH_MvJobFile);\n\tif (prq == NULL) {\n\t\tclose(fds);\n\t\treturn (-1);\n\t}\n\n\t(void) strcpy(prq->rq_host, mom_host);\n\t(void) strcpy(prq->rq_ind.rq_jobfile.rq_jobid, pjob->ji_qs.ji_jobid);\n\n\twhile ((amt = read(fds, buf, RT_BLK_SZ)) > 0) {\n\t\t/* prq->rq_ind.rq_jobfile.rq_sequence = seq++; */\n\t\t/* prq->rq_ind.rq_jobfile.rq_type = (int)which; */\n\t\t/* prq->rq_ind.rq_jobfile.rq_size = amt; */\n\t\t/* prq->rq_ind.rq_jobfile.rq_data = buf; */\n\n\t\tDIS_tcp_funcs();\n\t\tif ((rc = encode_DIS_ReqHdr(sock, PBS_BATCH_MvJobFile,\n\t\t\t\t\t    pbs_current_user)) ||\n\t\t    (rc = encode_DIS_JobFile(sock, seq++, buf, amt,\n\t\t\t\t\t     pjob->ji_qs.ji_jobid, which)) ||\n\t\t    (rc = encode_DIS_ReqExtend(sock, NULL))) {\n\t\t\tbreak;\n\t\t}\n\n\t\tdis_flush(sock);\n\n\t\tif ((DIS_reply_read(sock, &prq->rq_reply, 0) != 0) ||\n\t\t    (prq->rq_reply.brp_code != 0)) {\n\t\t\trc = -1;\n\t\t\tbreak;\n\t\t}\n\t}\n\tfree_br(prq);\n\t(void) close(fds);\n\n\tif (rc == 0) {\n\t\tint keeping;\n\t\tchar *path;\n\n\t\t/* get path of file and if \"keeping\" indicates file is in */\n\t\t/* the job's working directory, don't bother to delete it */\n\t\t/* it will be deleted when the sandbox is removed or left */\n\t\t/* in the user's home to be replaced when job rerun.      
*/\n\t\tpath = std_file_name(pjob, which, &keeping);\n\t\tif (keeping == 0)\n\t\t\t(void) unlink(path);\n\t\treturn (0);\n\t} else\n\t\treturn (-1);\n}\n\n/**\n * Delete job request\n * @brief\n * \tDelete job request - wait for the sisters to finish, clean up, and respond back to\n *\tthe server.\n *\tIn the case of Cray, the response to the server will be done\n *\tat a lower level in del_job_hw() to allow the MOM to cancel the\n *\tALPS reservation.  In Cray's case, the job will remain in the\n *\t\"E\" state until the MOM responds to the deletejob request from\n *\tthe server.\n *\n * @param[in] batch_request structure for the job\n *\n * @return Void\n *\n */\nvoid\nreq_deletejob(struct batch_request *preq)\n{\n\tjob *pjob;\n\tmom_hook_input_t *hook_input = NULL;\n\tmom_hook_output_t *hook_output = NULL;\n\tchar *jobid = NULL;\n\n\tjobid = preq->rq_ind.rq_delete.rq_objname;\n\tpjob = find_job(jobid);\n\n\tif (!pjob) {\n\t\treq_reject(PBSE_UNKJOBID, 0, preq);\n\t\treturn;\n\t}\n\n\tif (pjob->ji_hook_running_bg_on)\n\t\t/* This is a duplicate request; just return from here. 
*/\n\t\treturn;\n\n\t\t/*\n\t\t* check to see if there is any copy request pending\n\t\t* for this job\n\t\t*/\n#ifdef WIN32\n\tif (get_copyinfo_from_list(jobid) != NULL)\n#else\n\tif (pjob->ji_momsubt != 0 && pjob->ji_mompost == post_cpyfile)\n#endif\n\t{\n\t\t/*\n\t\t * we have a copy request pending, so we\n\t\t * need to first process the post_cpyfile\n\t\t * request before starting this one.\n\t\t * Tell the server to try again later.\n\t\t */\n\t\treq_reject(PBSE_TRYAGAIN, 0, preq);\n\t\treturn;\n\t}\n\t/*\n\t * mom_deljob_wait() sets substate to\n\t * prevent sending more OBIT messages\n\t */\n\tpjob->ji_preq = preq;\n\tif ((hook_input = (mom_hook_input_t *) malloc(\n\t\t     sizeof(mom_hook_input_t))) == NULL) {\n\t\tlog_err(errno, __func__, MALLOC_ERR_MSG);\n\t\treturn;\n\t}\n\tmom_hook_input_init(hook_input);\n\thook_input->pjob = pjob;\n\n\tif ((hook_output = (mom_hook_output_t *) malloc(\n\t\t     sizeof(mom_hook_output_t))) == NULL) {\n\t\tlog_err(errno, __func__, MALLOC_ERR_MSG);\n\t\tfree(hook_input);\n\t\treturn;\n\t}\n\tmom_hook_output_init(hook_output);\n\tif ((hook_output->reject_errcode =\n\t\t     (int *) malloc(sizeof(int))) == NULL) {\n\t\tlog_err(errno, __func__, MALLOC_ERR_MSG);\n\t\tfree(hook_input);\n\t\tfree(hook_output);\n\t\treturn;\n\t}\n\t*(hook_output->reject_errcode) = 0;\n\n\tif (mom_process_hooks(HOOK_EVENT_EXECJOB_END,\n\t\t\t      PBS_MOM_SERVICE_NAME, mom_host, hook_input,\n\t\t\t      hook_output, NULL, 0, 1) == HOOK_RUNNING_IN_BACKGROUND) {\n\n\t\tpjob->ji_hook_running_bg_on = BG_PBS_BATCH_DeleteJob;\n\n\t\t/*\n\t\t* save number of nodes in sisterhood in case\n\t\t* job is deleted in send_sisters_deljob()\n\t\t*/\n\t\tif (pjob->ji_numnodes > 1) {\n\t\t\tif (send_sisters_deljob_wait(pjob) == 0) {\n\t\t\t\tsprintf(log_buffer, \"Unable to send delete job \"\n\t\t\t\t\t\t    \"request to one or more sisters\");\n\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t  LOG_ERR, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\t/*\n\t\t\t\t* no messages sent, but there are 
sisters\n\t\t\t\t* must be all down\n\t\t\t\t*/\n\t\t\t\tpjob->ji_hook_running_bg_on = BG_PBSE_SISCOMM;\n\t\t\t}\n\t\t}\n\t\t/*\n\t\t* Hook is running in background reply to the batch\n\t\t* request will be taken care of in mom_process_background_hooks\n\t\t* function\n\t\t*/\n\t\treturn;\n\t}\n\tmom_deljob_wait2(pjob);\n\tfree(hook_output->reject_errcode);\n\tfree(hook_output);\n\tfree(hook_input);\n}\n\n/**\n * @brief\n * \treq_holdjob - checkpoint and terminate job\n *\n * @param[in] batch_request structure for the job\n *\n * @return Void\n *\n */\n\nvoid\nreq_holdjob(struct batch_request *preq)\n{\n\tjob *pjob;\n\n\tpjob = find_job(preq->rq_ind.rq_hold.rq_orig.rq_objname);\n\tif (pjob == NULL) {\n\t\treq_reject(PBSE_UNKJOBID, 0, preq);\n\t\treturn;\n\t}\n\n\tif (pjob->ji_flags & MOM_CHKPT_ACTIVE) {\n\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\tsprintf(log_buffer, \"req_holdjob failed: Checkpoint active.\");\n\t} else if (pjob->ji_flags & MOM_RESTART_ACTIVE) {\n\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\tsprintf(log_buffer, \"req_holdjob failed: Restart active.\");\n\t} else if (!check_job_substate(pjob, JOB_SUBSTATE_RUNNING) &&\n\t\t   !check_job_substate(pjob, JOB_SUBSTATE_SUSPEND)) {\n\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\tsprintf(log_buffer,\n\t\t\t\"req_holdjob failed: Job not running or suspended.\");\n\t} else {\n\t\tstart_checkpoint(pjob, 1, preq);\n\t\tsprintf(log_buffer, \"req_holdjob: Checkpoint initiated.\");\n\t}\n\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\n\t/* note, normally the reply to the server is in start_checkpoint() */\n}\n\n/**\n * @brief\n *\tWrite text into a job's output file,\n *\tReturn a PBS error code.\n *\n * @param[in] pjob - pointer to job structure\n * @param[in] jft  - enum for job_file\n * @param[in] text - message to be written into job's o/p file\n *\n * @return \tPBS error code\n * @retval\tPBSE_NONE\tNo error\n * @retval\tPBSE_MOMREJECT\n * 
@retval\tPBSE_UNKJOBID\tUnknown Job Identifier\n * @retval\tPBSE_MOMREJECT\tRequest to MOM failed\n * @retval\tPBSE_INTERNAL\tinternal server error occurred\n *\n */\nint\nmessage_job(job *pjob, enum job_file jft, char *text)\n{\n\tchar *pstr = NULL;\n\tint len;\n\tint fds = -1;\n\tssize_t bytes_written = 0;\n\tssize_t total_bytes_written = 0;\n\n\tif (pjob == NULL)\n\t\treturn PBSE_UNKJOBID;\n\n\t/* must be Mother Superior for this to make sense */\n\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0)\n\t\treturn PBSE_MOMREJECT;\n\n\tlen = is_joined(pjob);\n\tif (len == -1)\n\t\tjft = StdErr; /* only have stderr open */\n\telse if (len == 1)\n\t\tjft = StdOut; /* only have stdout open */\n\n#ifdef WIN32\n\tif ((fds = open_std_file(pjob, jft, O_WRONLY | O_APPEND,\n\t\t\t\t pjob->ji_qs.ji_un.ji_momt.ji_exgid)) < 0)\n\t\treturn PBSE_MOMREJECT;\n\n\t/* set to append mode */\n\tSetFilePointer((HANDLE) _get_osfhandle(fds), (LONG) NULL,\n\t\t       (PLONG) NULL, FILE_END);\n#else\n\tint i;\n\tunsigned int usecs = 250 * 1000; /* 250 milliseconds */\n\tfor (i = 0; i < 3; i++) {\n\t\tfds = open_std_file(pjob, jft, O_WRONLY | O_APPEND | O_NONBLOCK,\n\t\t\t\t    pjob->ji_qs.ji_un.ji_momt.ji_exgid);\n\t\tif (fds < 0)\n\t\t\tif (errno == EAGAIN || errno == EWOULDBLOCK)\n\t\t\t\tusleep(usecs);\n\t\t\telse\n\t\t\t\treturn PBSE_MOMREJECT;\n\t\telse\n\t\t\tbreak;\n\t}\n\n\tif (fds < 0)\n\t\treturn PBSE_MOMREJECT;\n#endif\n\tlen = strlen(text);\n\tif (text[len - 1] != '\\n') {\n\t\tif ((pstr = malloc(len + 2)) == NULL) {\n\t\t\t(void) close(fds); /* avoid leaking the descriptor */\n\t\t\treturn PBSE_INTERNAL;\n\t\t}\n\n\t\t(void) strcpy(pstr, text);\n\t\tpstr[len++] = '\\n'; /* append new-line */\n\t\ttext = pstr;\n\t}\n#ifdef WIN32\n\ttotal_bytes_written = write(fds, text, len);\n\t(void) _commit(fds);\n#else\n\tfor (i = 0; i < 3; i++) {\n\t\tbytes_written = write(fds, text, len - total_bytes_written);\n\t\tif (bytes_written <= 0) {\n\t\t\tif (errno == EAGAIN || errno == EWOULDBLOCK)\n\t\t\t\tusleep(usecs);\n\t\t\telse {\n\t\t\t\t(void) 
close(fds);\n\t\t\t\tfree(pstr);\n\t\t\t\treturn PBSE_MOMREJECT;\n\t\t\t}\n\t\t} else {\n\t\t\ttext += bytes_written;\n\t\t\ttotal_bytes_written += bytes_written;\n\t\t\tif (total_bytes_written == len)\n\t\t\t\tbreak;\n\t\t}\n\t}\n#endif\n\t(void) close(fds);\n\tif (pstr)\n\t\tfree(pstr);\n\n\tif (total_bytes_written == len)\n\t\treturn PBSE_NONE;\n\telse\n\t\treturn PBSE_MOMREJECT;\n}\n\n/**\n * @brief\n * \treq_messagejob - Append message to job's output/error file\n *\n * @param[in] preq - pointer to batch_request structure\n *\n * @return \tVoid\n *\n */\n\nvoid\nreq_messagejob(struct batch_request *preq)\n{\n\tint ret = 0;\n\tjob *pjob;\n\n\tpjob = find_job(preq->rq_ind.rq_message.rq_jid);\n\tif ((preq->rq_ind.rq_message.rq_file == PBS_BATCH_FileOpt_Default) ||\n\t    (preq->rq_ind.rq_message.rq_file & PBS_BATCH_FileOpt_OFlg)) {\n\t\tret = message_job(pjob, StdOut, preq->rq_ind.rq_message.rq_text);\n\t}\n\n\tif ((preq->rq_ind.rq_message.rq_file & PBS_BATCH_FileOpt_EFlg) &&\n\t    (ret == 0)) {\n\t\tret = message_job(pjob, StdErr,\n\t\t\t\t  preq->rq_ind.rq_message.rq_text);\n\t}\n\n\tif (ret == PBSE_NONE)\n\t\treply_ack(preq);\n\telse\n\t\treq_reject(ret, 0, preq);\n\treturn;\n}\n\n/**\n * @brief\n *\tSpawn a Python process.\n *\n * @param[in] preq - pointer to batch_request structure\n *\n * @return      Void\n *\n */\nvoid\nreq_py_spawn(struct batch_request *preq)\n{\n\tstatic char pypath[MAXPATHLEN + 1];\n\tchar *allargs;\n\tint allarglen = 1;\n\tstruct stat sbuf;\n\tint ret, i;\n\tjob *pjob;\n\tpbs_task *ptask;\n\tchar **argv;\n\tobitent *op;\n\n\tpjob = find_job(preq->rq_ind.rq_py_spawn.rq_jid);\n\tif (pjob == NULL) {\n\t\treq_reject(PBSE_UNKJOBID, 0, preq);\n\t\treturn;\n\t}\n\n\tif (pypath[0] == '\\0') { /* initialize pbs_python path */\n\t\tsprintf(pypath, \"%s/bin/pbs_python\", pbs_conf.pbs_exec_path);\n#ifdef WIN32\n\t\tstrncat(pypath, \".exe\", sizeof(pypath) - strlen(pypath) - 1);\n#endif\n\t}\n\n\t/* return error if python is not found in 
PBS_EXEC/bin */\n\tif (stat(pypath, &sbuf) == -1) {\n\t\tif (errno == ENOENT) {\n\t\t\tsprintf(log_buffer, \"%s: %s not installed\",\n\t\t\t\t__func__, pypath);\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB,\n\t\t\t\t  LOG_WARNING, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t} else {\n\t\t\tlog_err(errno, __func__, pypath);\n\t\t}\n\t\treq_reject(PBSE_MOMREJECT, 0, preq);\n\t\treturn;\n\t}\n\n\t/* check to see it is a regular file that is executable */\n\tif (((sbuf.st_mode & S_IFMT) != S_IFREG) ||\n\t    (sbuf.st_mode & (S_IXUSR | S_IXGRP | S_IXOTH)) !=\n\t\t    (S_IXUSR | S_IXGRP | S_IXOTH)) {\n\t\tsprintf(log_buffer, \"%s: %s not executable\",\n\t\t\t__func__, pypath);\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB,\n\t\t\t  LOG_WARNING, pjob->ji_qs.ji_jobid, log_buffer);\n\t\treq_reject(PBSE_PERM, 0, preq);\n\t\treturn;\n\t}\n\n\t/* count the number of args (2 for space and null) */\n\tfor (i = 0; preq->rq_ind.rq_py_spawn.rq_argv[i] != NULL; i++)\n\t\tallarglen += strlen(preq->rq_ind.rq_py_spawn.rq_argv[i]) + 2;\n\n\targv = (char **) calloc(i + 2, sizeof(char *));\n\tif (argv == NULL) {\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\treturn;\n\t}\n\tallargs = (char *) malloc(allarglen);\n\tif (allargs == NULL) {\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\tfree(argv);\n\t\treturn;\n\t}\n\top = (obitent *) malloc(sizeof(obitent));\n\tif (op == NULL) {\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\tfree(argv);\n\t\tfree(allargs);\n\t\treturn;\n\t}\n\n\t/* fill argv array and create arg string */\n\t/* allargs will have a trailing blank */\n\targv[0] = pypath;\n\tallargs[0] = '\\0';\n\tfor (i = 0; preq->rq_ind.rq_py_spawn.rq_argv[i] != NULL; i++) {\n\t\targv[i + 1] = preq->rq_ind.rq_py_spawn.rq_argv[i];\n\t\tstrcat(allargs, preq->rq_ind.rq_py_spawn.rq_argv[i]);\n\t\tstrcat(allargs, \" \");\n\t}\n\targv[i + 1] = NULL;\n\n\tptask = momtask_create(pjob);\n\tif (ptask == NULL) {\n\t\treq_reject(PBSE_INTERNAL, 0, 
preq);\n\t\tfree(argv);\n\t\tfree(allargs);\n\t\tfree(op);\n\t\treturn;\n\t}\n\n\tstrcpy(ptask->ti_qs.ti_parentjobid, preq->rq_ind.rq_py_spawn.rq_jid);\n\tptask->ti_qs.ti_parentnode = TM_ERROR_NODE;\n\tptask->ti_qs.ti_myvnode = TM_ERROR_NODE;\n\tptask->ti_qs.ti_parenttask = TM_INIT_TASK;\n\t(void) task_save(ptask);\n\n\t/* start the task with no demux option */\n\tret = start_process(ptask, argv, preq->rq_ind.rq_py_spawn.rq_envp, true);\n\tfree(argv);\n\tif (ret != PBSE_NONE) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"%s: FAILED %stask %8.8X err %d\", __func__,\n\t\t\t allargs, ptask->ti_qs.ti_task, ret);\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_WARNING,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\n\t\treq_reject(ret, 0, preq);\n\t\tfree(allargs);\n\t\tfree(op);\n\t\treturn;\n\t}\n\n\tsnprintf(log_buffer, sizeof(log_buffer), \"%s: args %stask %8.8X\", __func__,\n\t\t allargs, ptask->ti_qs.ti_task);\n\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\tfree(allargs);\n\n\tCLEAR_LINK(op->oe_next);\n\tappend_link(&ptask->ti_obits, &op->oe_next, op);\n\top->oe_type = OBIT_TYPE_BREVENT;\n\top->oe_u.oe_preq = preq;\n\n\treturn;\n}\n\n/**\n * @brief\n * \treq_modifyjob - service the Modify Job Request\n *\tThis request modifies a job's attributes.\n *\n * @param[in] preq - pointer to batch_request structure\n *\n * @return \tVoid\n *\n */\n\nvoid\nreq_modifyjob(struct batch_request *preq)\n{\n\tint bad = 0;\n\tint i;\n\tattribute newattr[(int) JOB_ATR_LAST];\n\tattribute *pattr;\n\tjob *pjob;\n\tsvrattrl *plist;\n\tint rc;\n\tint recreate_nodes = 0;\n\tchar *new_peh = NULL;\n\n\tpjob = find_job(preq->rq_ind.rq_modify.rq_objname);\n\tif (pjob == NULL) {\n\t\treq_reject(PBSE_UNKJOBID, 0, preq);\n\t\treturn;\n\t}\n\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_modify.rq_attr);\n\tif (plist == NULL) { /* nothing to do */\n\t\treply_ack(preq);\n\t\treturn;\n\t}\n\n\t/* modify the job's attributes 
*/\n\n\tbad = 0;\n\tpattr = pjob->ji_wattr;\n\n\t/* call attr_atomic_set to decode and set a copy of the attributes */\n\n\trc = attr_atomic_set(plist, pattr, newattr, job_attr_idx, job_attr_def, JOB_ATR_LAST, -1, ATR_DFLAG_MGWR | ATR_DFLAG_MOM, &bad);\n\tif (rc) {\n\t\t/* leave old values, free the new ones */\n\t\tfor (i = 0; i < JOB_ATR_LAST; i++)\n\t\t\tfree_attr(job_attr_def, &newattr[i], i);\n\t\treq_reject(rc, 0, preq);\n\t\treturn;\n\t}\n\n\t/* OK, now copy the new values into the job attribute array */\n\n\tfor (i = 0; i < JOB_ATR_LAST; i++) {\n\t\tif (newattr[i].at_flags & ATR_VFLAG_MODIFY) {\n\n\t\t\tif (job_attr_def[i].at_action)\n\t\t\t\t(void) job_attr_def[i].at_action(&newattr[i],\n\t\t\t\t\t\t\t\t pjob, ATR_ACTION_ALTER);\n\t\t\tfree_attr(job_attr_def, &pattr[i], i);\n\t\t\tif ((newattr[i].at_type == ATR_TYPE_LIST) ||\n\t\t\t    (newattr[i].at_type == ATR_TYPE_RESC)) {\n\t\t\t\tlist_move(&newattr[i].at_val.at_list,\n\t\t\t\t\t  &(pattr + i)->at_val.at_list);\n\t\t\t} else {\n\t\t\t\t*(pattr + i) = newattr[i];\n\t\t\t}\n\t\t\t(pattr + i)->at_flags = newattr[i].at_flags;\n\t\t\tif ((i == JOB_ATR_exec_vnode) ||\n\t\t\t    (i == JOB_ATR_exec_host) ||\n\t\t\t    (i == JOB_ATR_exec_host2) ||\n\t\t\t    (i == JOB_ATR_SchedSelect) ||\n\t\t\t    (i == JOB_ATR_resource)) {\n\t\t\t\t/* all of the above attributes must */\n\t\t\t\t/*  be received (recreate_nodes == 5) */\n\t\t\t\t/*  in order to trigger recreation of */\n\t\t\t\t/*  job nodes and PBS_NODEFILE */\n\t\t\t\trecreate_nodes++;\n\t\t\t\tif (recreate_nodes == 5)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\n\tif (get_jattr_str(pjob, JOB_ATR_exec_host2) != NULL) /* Mom got information from new server */\n\t\tnew_peh = get_jattr_str(pjob, JOB_ATR_exec_host2);\n\telse\n\t\tnew_peh = get_jattr_str(pjob, JOB_ATR_exec_host);\n\n\tif (recreate_nodes == 5) {\n\n\t\t/* Send IM_DELETE_JOB2 request to the sister moms not in */\n\t\t/* 'new_peh', to kill the job on that sister and */\n\t\t/* report resources_used 
info. */\n\t\t(void) send_sisters_inner(pjob, IM_DELETE_JOB2,\n\t\t\t\t\t  NULL, new_peh);\n\n\t\tif ((rc = job_nodes(pjob)) != 0) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer) - 1,\n\t\t\t\t \"failed updating internal nodes data (rc=%d)\", rc);\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB,\n\t\t\t\t  LOG_NOTICE, pjob->ji_qs.ji_jobid,\n\t\t\t\t  log_buffer);\n\t\t\treply_text(preq, rc, log_buffer);\n\t\t\treturn;\n\t\t}\n\n\t\tif (generate_pbs_nodefile(pjob, NULL, 0,\n\t\t\t\t\t  log_buffer, LOG_BUF_SIZE - 1) != 0) {\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB,\n\t\t\t\t  LOG_NOTICE, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\treply_text(preq, rc, log_buffer);\n\t\t\treturn;\n\t\t}\n\t\tsend_sisters_job_update(pjob);\n\t\tpjob->ji_updated = 1;\n\t}\n\t/* note, the newattr[] attributes are on the stack, they go away automatically */\n\n\tif (rc == 0)\n\t\trc = mom_set_limits(pjob, SET_LIMIT_ALTER);\n\tif (rc) {\n\t\treq_reject(rc, bad, preq);\n\t\treturn;\n\t}\n\n\t(void) job_save(pjob);\n\t(void) sprintf(log_buffer, msg_manager, msg_jobmod,\n\t\t       preq->rq_user, preq->rq_host);\n\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\treply_ack(preq);\n}\n\n/**\n * @brief\n *\treq_shutdown - reject a Shutdown request; MoM does not support it,\n *\tso a reject reply (PBSE_NOSUP) is created and sent back.\n *\n * @param[in] preq - pointer to batch_request structure\n *\n * @return \tVoid\n *\n */\n\nvoid\nreq_shutdown(struct batch_request *preq)\n{\n\treq_reject(PBSE_NOSUP, 0, preq);\n}\n\n/**\n * @brief\n * \tSee if there are any events of type event_com left to wait for.\n *\n * @param[in] pjob - pointer to job\n * @param[in] event_com - inter mom request\n *\n * @return \tint\n * @retval\t1\tif an event exists\n * @retval\t0\tif no events are left\n *\n */\n\nstatic int\neventleft(job *pjob, int event_com)\n{\n\tint i;\n\teventent *ep;\n\thnodent *np;\n\n\tDBPRT((\"eventleft: %s com %d\\n\", pjob->ji_qs.ji_jobid, event_com))\n\n\tfor (i = 0; i < pjob->ji_numnodes; i++) {\n\t\tnp = 
&pjob->ji_hosts[i];\n\t\tep = (eventent *) GET_NEXT(np->hn_events);\n\t\twhile (ep) {\n\t\t\tif (ep->ee_command == event_com)\n\t\t\t\tbreak;\n\t\t\tep = (eventent *) GET_NEXT(ep->ee_next);\n\t\t}\n\t\tif (ep != NULL)\n\t\t\treturn 1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tSend the saved event reply back to Mother Superior, then clear the\n *\tsaved event state in the job structure.\n *\n * @param[in] pjob - pointer to job\n * @param[in] err - exit value\n *\n * @return \tVoid\n *\n */\nvoid\npost_reply(job *pjob, int err)\n{\n\tint stream;\n\tchar *cookie;\n\tchar *jobid;\n\tint im_compose(int, char *, char *, int, tm_event_t, tm_task_id, int);\n\n\tif (pjob->ji_postevent == TM_NULL_EVENT) /* no event */\n\t\treturn;\n\n\tif (pjob->ji_hosts == NULL) { /* No one to talk to */\n\t\tpjob->ji_postevent = TM_NULL_EVENT;\n\t\tpjob->ji_taskid = TM_NULL_TASK;\n\t\treturn;\n\t}\n\n\tstream = pjob->ji_hosts[0].hn_stream; /* MS stream */\n\tcookie = get_jattr_str(pjob, JOB_ATR_Cookie);\n\tjobid = pjob->ji_qs.ji_jobid;\n\n\t/*\n\t **\tI'm a sister and the reply needs to be sent back\n\t **\tto MS for this operation.\n\t */\n\n\tif (err == 0) {\n\t\t(void) im_compose(stream, jobid, cookie, IM_ALL_OKAY,\n\t\t\t\t  pjob->ji_postevent, pjob->ji_taskid, IM_OLD_PROTOCOL_VER);\n\t} else {\n\t\t(void) im_compose(stream, jobid, cookie, IM_ERROR,\n\t\t\t\t  pjob->ji_postevent, pjob->ji_taskid, IM_OLD_PROTOCOL_VER);\n\t\t(void) diswsi(stream, err);\n\t}\n\t(void) dis_flush(stream);\n\n\tpjob->ji_postevent = TM_NULL_EVENT;\n\tpjob->ji_taskid = TM_NULL_TASK;\n}\n\n/**\n * @brief\n * \tDo all the operations common to the post_* action functions.\n * \tFor MS, if there are more things to wait for, return 1, else return 0.\n * \tFor a sister, send reply back to MS and return 0.\n *\n * @param[in] pjob - pointer to job\n * @param[in] event_com - inter mom request\n * @param[in] err - exit value\n *\n * @return \tint\n * @retval\t1\tif there are more things to wait for\n * @retval\t0\totherwise\n *\n */\n\nint\npost_action(job *pjob, int 
event_com, int err)\n{\n\tDBPRT((\"post_action: %s com %d err %d\\n\", pjob->ji_qs.ji_jobid,\n\t       event_com, err))\n\n\tif (err != 0) {\n\t\tpjob->ji_flags |= MOM_SISTER_ERR;\n\t\tif (pjob->ji_preq) {\n\t\t\treq_reject(err, 0, pjob->ji_preq);\n\t\t\tpjob->ji_preq = NULL;\n\t\t}\n\t}\n\n\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) {\n\n\t\tif (pjob->ji_momsubt != 0) /* child running */\n\t\t\treturn 1;\n\n\t\t/*\n\t\t **\tIf I'm MS, I need to check for events\n\t\t **\tto see if processing is still going on.\n\t\t */\n\t\tif (eventleft(pjob, event_com))\n\t\t\treturn 1;\n\n\t\t/*\n\t\t ** No more operations are waiting.\n\t\t ** This will be the final call to whoever called me.\n\t\t */\n\t\tif (pjob->ji_preq) {\n\t\t\treply_ack(pjob->ji_preq);\n\t\t\tpjob->ji_preq = NULL;\n\t\t}\n\t} else\n\t\tpost_reply(pjob, err);\n\n\t/*\n\t ** Everything is done, now is the time to clear ji_mompost.\n\t */\n\tpjob->ji_mompost = NULL;\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \tpost_suspend - post exit of child for suspending a job\n *\n * @param[in] pjob - pointer to job\n * @param[in] err - exit value\n *\n * @return\tVoid\n *\n */\n\nvoid\npost_suspend(job *pjob, int err)\n{\n\tDBPRT((\"post_suspend: %s err %d\\n\", pjob->ji_qs.ji_jobid, err))\n\n\tif (post_action(pjob, IM_SUSPEND, err))\n\t\treturn;\n\n\tif ((pjob->ji_flags & MOM_SISTER_ERR) == 0) {\n\t\tstop_walltime(pjob);\n\n\t\tpjob->ji_polltime = 0; /* don't check polling */\n\t\tif (get_job_substate(pjob) < JOB_SUBSTATE_EXITING) {\n\t\t\tmom_hook_input_t hook_input;\n\t\t\tmom_hook_output_t hook_output;\n\t\t\tchar hook_msg[HOOK_MSG_SIZE + 1];\n\t\t\thook *last_phook = NULL;\n\t\t\tunsigned int hook_fail_action = 0;\n\t\t\tint reject_errcode = 0;\n\n\t\t\tset_job_substate(pjob, JOB_SUBSTATE_SUSPEND);\n\t\t\tpjob->ji_qs.ji_svrflags |= JOB_SVFLG_Suspend;\n\t\t\t(void) job_save(pjob);\n\n\t\t\tmom_hook_input_init(&hook_input);\n\t\t\thook_input.pjob = 
pjob;\n\n\t\t\tmom_hook_output_init(&hook_output);\n\t\t\thook_output.reject_errcode = &reject_errcode;\n\t\t\thook_output.last_phook = &last_phook;\n\t\t\thook_output.fail_action = &hook_fail_action;\n\n\t\t\tif (mom_process_hooks(HOOK_EVENT_EXECJOB_POSTSUSPEND,\n\t\t\t\t\t      PBS_MOM_SERVICE_NAME, mom_host, &hook_input,\n\t\t\t\t\t      &hook_output, hook_msg, sizeof(hook_msg), 1) == 0) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"execjob_postsuspend hook rejected request: %s\", hook_msg);\n\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t}\n\t\t} else {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"Job cannot be suspended since it is in substate %ld\", get_job_substate(pjob));\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid,\n\t\t\t\t  log_buffer);\n\t\t}\n\t} else\n\t\tpjob->ji_flags &= ~MOM_SISTER_ERR;\n}\n\n/**\n * @brief\n * \tpost_resume - post exit of child for a resume of a job\n *\n * @param[in] pjob - pointer to job\n * @param[in] err - exit value\n *\n * @return      Void\n *\n */\n\nvoid\npost_resume(job *pjob, int err)\n{\n\tDBPRT((\"post_resume: %s err %d\\n\", pjob->ji_qs.ji_jobid, err))\n\n\tif (post_action(pjob, IM_RESUME, err))\n\t\treturn;\n\n\tif ((pjob->ji_flags & MOM_SISTER_ERR) == 0) {\n\n\t\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_Suspend) {\n\t\t\tstart_walltime(pjob);\n\t\t}\n\t\t/* if I'm not MS, start to check for polling again */\n\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0)\n\t\t\tpjob->ji_polltime = time_now;\n\t\tset_job_substate(pjob, JOB_SUBSTATE_RUNNING);\n\t\tpjob->ji_qs.ji_svrflags &= ~JOB_SVFLG_Suspend;\n\t\t(void) job_save(pjob);\n\t} else\n\t\tpjob->ji_flags &= ~MOM_SISTER_ERR;\n}\n\n#if MOM_ALPS\n\n/*\n * Try to minimize latency of suspend/resume. 
Wait half a second before\n * the first check, and then poll every tenth of a second.\n */\n\n#define ALPS_SWITCH_SLEEP_USECS_LONG (500000)\n#define ALPS_SWITCH_SLEEP_USECS_SHORT (100000)\n\n/**\n * On a Cray, make the requested switch, and confirm it\n * @param[in]\tpjob\tjob of interest\n * @param[in]\twhich\tSUSPEND/RESUME\n * @retval\tPBSE_NONE\tno error\n * @retval\tPBSE_ALPS_SWITCH_ERR\n */\nstatic int\ndo_cray_susres_conf(job *pjob, int which)\n{\n\t/**\n\t * On a Cray, we need to send an ALPS SWITCH request to move the jobs\n\t * to a suspend or resume state.\n\t */\n\tbasil_switch_action_t action;\n\tint i;\n\tint rc = 0;\n\ttime_t total_time = 0;\n\ttime_t begin_time = 0;\n\ttime_t end_time = 0;\n\tint timeout_val = alps_confirm_switch_timeout;\n\tint first_status;\n\tint first_status_was_empty = 0;\n\tint first_sleep;\n\n\t/* Check if there is an ALPS reservation to act on.\n\t * If not, just return PBSE_NONE.\n\t */\n\tif (pjob->ji_extended.ji_ext.ji_reservation <= 0) {\n\t\treturn PBSE_NONE;\n\t}\n\n\tif (which == SUSPEND)\n\t\taction = basil_switch_action_out;\n\telse\n\t\taction = basil_switch_action_in;\n\n\trc = alps_suspend_resume_reservation(pjob, action);\n\tif (rc < 0) {\n\t\tsprintf(log_buffer, \"Fatal ALPS SWITCH request.\");\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\treturn (PBSE_ALPS_SWITCH_ERR);\n\t}\n\tif (rc > 0) {\n\t\tsprintf(log_buffer, \"Transient ALPS SWITCH error, the \"\n\t\t\t\t    \"prior SWITCH method has not yet completed.\");\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\treturn (PBSE_ALPS_SWITCH_ERR);\n\t}\n\n\t/* The call to ALPS SWITCH was successful\n\t * Now we have to poll to confirm the SWITCH happens\n\t * We will assume that the ALPS suspend happens \"relatively quickly\"\n\t * as per the Cray ALPS folks, and we will poll for a successful\n\t * suspend state here.  
If the confirmation takes a while, then PBS\n\t * may need to somehow poll for the confirmation without tying up\n\t * the mom.\n\t *\n\t * Keep trying in this process (don't fork a child) until the SWITCH\n\t * completes, or a hard error is returned, or\n\t * alps_confirm_switch_timeout is reached.\n\t * alps_confirm_switch_timeout is set by default to\n\t * ALPS_CONF_SWITCH_TIMEOUT\n\t * NOTE:  The MOM, server and scheduler are blocked while we poll\n\t * ALPS and wait for the SWITCH.\n\t */\n\tfirst_status = 1;\n\tfirst_sleep = 1;\n\tbegin_time = time(NULL);\n\tend_time = begin_time;\n\ti = 0;\n\tdo {\n\t\ti++;\n\t\tif ((rc = alps_confirm_suspend_resume(pjob, action)) <= 0)\n\t\t\tbreak;\n\t\tif (rc == 2) {\n\t\t\t/* we got a response of \"EMPTY\" */\n\t\t\tif (first_status) {\n\t\t\t\t/*\n\t\t\t\t * Alps may report EMPTY reservation for the first\n\t\t\t\t * time we query it. So for the first time we\n\t\t\t\t * need to poll for alps_confirm_empty_timeout\n\t\t\t\t * time in hopes of letting the Cray race\n\t\t\t\t * condition work itself out.  Once we hit the\n\t\t\t\t * timeout we assume that there are no ALPS\n\t\t\t\t * claims (i.e. apruns) on the reservation and\n\t\t\t\t * the suspend can proceed.\n\t\t\t\t */\n\t\t\t\ttimeout_val = alps_confirm_empty_timeout;\n\t\t\t\tfirst_status_was_empty = 1;\n\t\t\t}\n\t\t\tif (!first_status_was_empty) {\n\t\t\t\t/* The first status response wasn't EMPTY and\n\t\t\t\t * the status is now EMPTY.  
According to\n\t\t\t\t * Cray this means PBS can proceed\n\t\t\t\t * as if the switch request was successful.\n\t\t\t\t */\n\t\t\t\tbreak;\n\t\t\t}\n\t\t} else {\n\t\t\t/* Reset the timeout_val if we get anything besides\n\t\t\t * \"EMPTY\"\n\t\t\t */\n\t\t\ttimeout_val = alps_confirm_switch_timeout;\n\t\t}\n\t\t/* Getting a transient error, sleep then retry */\n\t\tif (first_sleep) {\n\t\t\tusleep(ALPS_SWITCH_SLEEP_USECS_LONG);\n\t\t\tfirst_sleep = 0;\n\t\t} else {\n\t\t\tusleep(ALPS_SWITCH_SLEEP_USECS_SHORT);\n\t\t}\n\t\tend_time = time(NULL);\n\t\tfirst_status = 0;\n\t} while ((total_time = end_time - begin_time) < timeout_val);\n\tif (rc == 1) {\n\t\tsprintf(log_buffer, \"Timed out after %d attempts over %ld \"\n\t\t\t\t    \"seconds of attempting to confirm the ALPS \"\n\t\t\t\t    \"SWITCH completed.\",\n\t\t\ti, total_time);\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\treturn (PBSE_ALPS_SWITCH_ERR);\n\t} else if ((rc == 2) && first_status_was_empty) {\n\t\tsprintf(log_buffer, \"Timed out after %d attempts over \"\n\t\t\t\t    \"%ld seconds of waiting for a status of EMPTY \"\n\t\t\t\t    \"to change.  
Proceeding as if the SWITCH \"\n\t\t\t\t    \"succeeded.\",\n\t\t\ti, total_time);\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB,\n\t\t\t  LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\t\treturn (PBSE_NONE);\n\t} else if (rc < 0) {\n\t\tsprintf(log_buffer, \"Fatal ALPS QUERY of status.\");\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\treturn (PBSE_ALPS_SWITCH_ERR);\n\t} else {\n\t\t/* the SWITCH has completed successfully */\n\t\tsprintf(log_buffer, \"The SWITCH was confirmed after a total \"\n\t\t\t\t    \"of %d attempts, and %ld seconds.\",\n\t\t\ti, total_time);\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t}\n\n\treturn PBSE_NONE;\n}\n#endif /* MOM_ALPS */\n\n/**\n * @brief\n *\tresponsible for suspend/resume job.\n *\n * @param[in] pjob - pointer to job\n * @param[in] which - indication for whether SUSPEND/RESUME\n *\n * @return\tint PBSE error number\n * @retval\tPBSE_NONE\tno error\n * @retval\tPBSE_SYSTEM\t system error occurred\n *\n */\n\nint\ndo_susres(job *pjob, int which)\n{\n\tpbs_task *ptask;\n\tint rc = 0;\n\tint err;\n\n\tif (pjob == NULL) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_JOB,\n\t\t\t  LOG_ERR, \"do_susres\", \"The job information is NULL\");\n\t\treturn (PBSE_SYSTEM);\n\t}\n\n#if MOM_ALPS\n\t/* if we're trying to suspend, then ask ALPS to suspend, before\n\t * we send the signal to the processes\n\t */\n\tif (which == SUSPEND) {\n\t\tif ((rc = do_cray_susres_conf(pjob, which)) != PBSE_NONE) {\n\t\t\t/* We failed to do the suspend */\n\t\t\treturn PBSE_ALPS_SWITCH_ERR;\n\t\t}\n\t}\n\n\t/* Continue on through the code.  Let the signal get sent to the job */\n\n#endif /* MOM_ALPS */\n\n\tfor (ptask = (pbs_task *) GET_NEXT(pjob->ji_tasks);\n\t     ptask != NULL;\n\t     ptask = (pbs_task *) GET_NEXT(ptask->ti_jobtask)) {\n\n\t\trc = (which == SUSPEND) ? 
kill_task(ptask, suspend_signal, 1) : kill_task(ptask, resume_signal, 0);\n\t\tDBPRT((\"%s: %s of task %8.8X rc %d\\n\", __func__,\n\t\t       (which == SUSPEND) ? \"suspend\" : \"resume\",\n\t\t       ptask->ti_qs.ti_task, rc))\n\t}\n\tif (rc < 0) {\n\t\t/* error recovery, set things back */\n\t\terr = errno;\n\t\tfor (ptask = (pbs_task *) GET_NEXT(pjob->ji_tasks);\n\t\t     ptask != NULL;\n\t\t     ptask = (pbs_task *) GET_NEXT(ptask->ti_jobtask)) {\n\t\t\tif (which == SUSPEND)\n\t\t\t\tkill_task(ptask, resume_signal, 0);\n\t\t\telse\n\t\t\t\tkill_task(ptask, suspend_signal, 1);\n\t\t}\n\t\terrno = err;\n\t\treturn PBSE_SYSTEM;\n\t}\n\n#if MOM_ALPS\n\t/*\n\t * We're trying to resume, we already sent the signal to the processes\n\t * now we tell ALPS to resume the ALPS reservation\n\t */\n\tif (which == RESUME) {\n\t\tif ((rc = do_cray_susres_conf(pjob, which)) != PBSE_NONE) {\n\t\t\t/* We failed to do the resume */\n\t\t\treturn PBSE_ALPS_SWITCH_ERR;\n\t\t}\n\t}\n#endif /* MOM_ALPS */\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * \tDo the common things needed for both suspend and resume\n * \tfor tasks that are local.\n *\n * @param[in]\tpjob\tjob of interest\n * @param[in]\twhich\tSUSPEND/RESUME\n * @param[in]\tpreq\tbatch request\n *\n * @return \tint\n * @retval\t0\tno error\n * @retval\t!0\tPBS error code\n *\n */\nint\nlocal_supres(job *pjob, int which, struct batch_request *preq)\n{\n\tint rc;\n\n\tDBPRT((\"%s: %s %s %s request\\n\", __func__, pjob->ji_qs.ji_jobid,\n\t       which == SUSPEND ? \"suspend\" : \"resume\",\n\t       preq == NULL ? 
\"no\" : \"with\"))\n\n\tif (which == RESUME) {\n\t\tmom_hook_input_t hook_input;\n\t\tmom_hook_output_t hook_output;\n\t\tchar hook_msg[HOOK_MSG_SIZE + 1];\n\t\thook *last_phook = NULL;\n\t\tunsigned int hook_fail_action = 0;\n\t\tint reject_errcode = 0;\n\n\t\tmom_hook_input_init(&hook_input);\n\t\thook_input.pjob = pjob;\n\n\t\tmom_hook_output_init(&hook_output);\n\t\thook_output.reject_errcode = &reject_errcode;\n\t\thook_output.last_phook = &last_phook;\n\t\thook_output.fail_action = &hook_fail_action;\n\n\t\tif (mom_process_hooks(HOOK_EVENT_EXECJOB_PRERESUME,\n\t\t\t\t      PBS_MOM_SERVICE_NAME, mom_host, &hook_input,\n\t\t\t\t      &hook_output, hook_msg, sizeof(hook_msg), 1) == 0) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"execjob_preresume hook rejected request: %s\", hook_msg);\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\terrno = reject_errcode;\n\t\t\treturn (PBSE_MOMREJECT);\n\t\t}\n\t}\n\t/*\n\t ** Check to see if something is already going on.\n\t */\n\tif (pjob->ji_momsubt != 0 ||\n\t    pjob->ji_mompost != NULL)\n\t\treturn PBSE_MOMREJECT;\n\n\trc = do_susres(pjob, which);\n\n\treturn rc;\n}\n\n/**\n * @brief\n * \tsusp_resum - the suspend/resume function\n *\n * @param[in] pjob - pointer to job\n * @param[in] which - SUSPEND/RESUME\n * @param[in] preq - pointer to batch_request structure\n *\n * @return \tVoid\n *\n */\n\nstatic void\nsusp_resum(job *pjob, int which, struct batch_request *preq)\n{\n\tint rc;\n\n\tDBPRT((\"susp_resum: %s %s %s request\\n\", pjob->ji_qs.ji_jobid,\n\t       which == SUSPEND ? \"suspend\" : \"resume\",\n\t       preq == NULL ? 
\"no\" : \"with\"))\n\n\t/* if already suspended for keyboard activity, just set/clear flag */\n\tif (which == SUSPEND) {\n\t\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_Actsuspd) {\n\t\t\t/* already suspended for keyboard activity */\n\t\t\tpjob->ji_qs.ji_svrflags |= JOB_SVFLG_Suspend;\n\t\t\t(void) job_save(pjob);\n\t\t\treply_ack(preq);\n\t\t\treturn;\n\t\t}\n\t} else {\n\t\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_Actsuspd) {\n\t\t\t/* keep suspended for keyboard activity */\n\t\t\tpjob->ji_qs.ji_svrflags &= ~JOB_SVFLG_Suspend;\n\t\t\t(void) job_save(pjob);\n\t\t\treply_ack(preq);\n\t\t\treturn;\n\t\t}\n\t}\n\n\t/* do suspend/resume of local tasks */\n\n\tif ((rc = local_supres(pjob, which, preq)) != PBSE_NONE) {\n\t\treq_reject(rc, errno, preq);\n\t\treturn;\n\t}\n\n\t/*\n\t ** If there is a sisterhood, send command.\n\t */\n\tif (pjob->ji_numnodes > 1) {\n\t\tint i;\n\n\t\ti = send_sisters(pjob,\n\t\t\t\t (which == SUSPEND) ? IM_SUSPEND : IM_RESUME, NULL);\n\n\t\tif (i > 0) {\n\t\t\tpjob->ji_mompost = (which == SUSPEND) ? 
post_suspend : post_resume;\n\t\t}\n\t\tif (i != (pjob->ji_numnodes - 1)) {\n\t\t\tpjob->ji_flags |= MOM_SISTER_ERR;\n\t\t\treq_reject(PBSE_SYSTEM, errno, preq);\n\t\t\treturn;\n\t\t}\n\t\tpjob->ji_preq = preq;\n\t}\n\n\tif (pjob->ji_mompost != NULL) /* later action */\n\t\treturn;\n\n\tif (which == SUSPEND) /* local */\n\t\tpost_suspend(pjob, 0);\n\telse\n\t\tpost_resume(pjob, 0);\n\n\treply_ack(preq);\n\treturn;\n}\n\n/**\n * @brief\n *\tfunction for post termination of job\n *\n * @param[in] pjob - pointer to job\n * @param[in] err - exit value\n *\n * @return \tVoid\n *\n */\nvoid\npost_terminate(job *pjob, int err)\n{\n\tif (err) {\n#ifdef WIN32\n\t\tif (err == -1) {\n\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t\t  \"Terminate script failed to exit in allocated time\");\n\t\t} else {\n\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t\t  \"Terminate script exited with non-zero status\");\n\t\t}\n\t\t/* assume that terminate action processes are hooked to */\n#else\n\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid, \"Terminate action failed\");\n#endif\n\t\t/* kill job */\n\t\tif (kill_job(pjob, SIGKILL) == 0) {\n\t\t\t/* no processes around, force into exiting */\n\t\t\tset_job_substate(pjob, JOB_SUBSTATE_EXITING);\n\t\t\texiting_tasks = 1;\n\t\t}\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \tterminate_job - terminate a job\n *\tIf there is a site supplied script as given by \"$action terminate\"\n *\tthen run it and place job in special exiting state:\n *\tNote: This function is invoked on Mother Superior only.\n *\n *\tIf no script, or error,  do the normal termination: kill_job()\n *\twith SIGTERM.\n *\n * @param[in] pjob - pointer to job\n * @param[in] internal - 1 if Mom terminating job for being overlimit\n *\t\t         0 if Server terminating job\n *\t\t\t ( in both cases, SIGTERM is used )\n *\t\t        
-1 internal and use SIGKILL\n *\n * @return \tint\n * @retval\t1\tif script running (see do_mom_action_script())\n * @retval\t-1 \tif error\n * @retval\t-2 \tno script\n *\n */\n\nint\nterminate_job(job *pjob, int internal)\n{\n\tint i;\n\tint s;\n\n\tif (internal == 1)\n\t\tpjob->ji_qs.ji_svrflags |= JOB_SVFLG_OVERLMT1;\n\n\t/* set overlimit time stamp by adding kill_delay and time_now. */\n\tpjob->ji_overlmt_timestamp = time_now + get_jattr_long(pjob, JOB_ATR_job_kill_delay);\n\tif ((chk_mom_action(TerminateAction) == Script) &&\n\t    ((i = do_mom_action_script(TerminateAction, pjob, NULL, NULL,\n\t\t\t\t       post_terminate)) == 1)) {\n\t\tset_job_state(pjob, JOB_STATE_LTR_EXITING);\n\t\tset_job_substate(pjob, JOB_SUBSTATE_TERM);\n\t} else {\n\t\tif (internal == -1)\n\t\t\ts = SIGKILL;\n\t\telse {\n\t\t\textern int next_sample_time;\n\t\t\textern int min_check_poll;\n\n\t\t\t/* The job is going to be terminated by TERM */\n\t\t\ts = SIGTERM;\n\t\t\t/* set the TERMJOB flag */\n\t\t\tpjob->ji_qs.ji_svrflags |= JOB_SVFLG_TERMJOB;\n\t\t\t/* poll ASAP in case job ignores SIGTERM */\n\t\t\tnext_sample_time = min_check_poll;\n\t\t}\n\t\tif (kill_job(pjob, s) == 0) {\n\t\t\t/* no processes around, time to exit */\n\t\t\texiting_tasks = 1;\n\t\t}\n\t\ti = -2;\n\t}\n\t(void) job_save(pjob);\n\treturn i;\n}\n\n/**\n * @brief\n *\tIssue a specified signal to a job.\n *\n * @par Functionality:\n *\tServer has requested that a real or pseudo (made up for PBS) signal\n *\tbe issued to a job. 
Real signals (see qsig command) may be specified\n *\tby number (numeric string), or by name with or without the \"SIG\" prefix.\n *\tAdditional processing may be required depending on the signal.\n *\n * @param[in]\tpreq  - pointer to the batch request structure which contains\n *\t\tjobid and signal.\n *\n * @return\tnone\n */\n\nvoid\nreq_signaljob(struct batch_request *preq)\n{\n\tjob *pjob;\n\tpbs_task *ptask;\n\tint sig;\n\tchar *sname;\n\tstruct sig_tbl *psigt;\n\textern struct sig_tbl sig_tbl[];\n\tmom_hook_input_t hook_input;\n\tmom_hook_output_t hook_output;\n\tchar hook_msg[HOOK_MSG_SIZE + 1];\n\thook *last_phook = NULL;\n\tunsigned int hook_fail_action = 0;\n\tint reject_errcode = 0;\n\n\tsname = preq->rq_ind.rq_signal.rq_signame;\n\tpjob = find_job(preq->rq_ind.rq_signal.rq_jid);\n\n\tif (pjob == NULL) {\n\t\treq_reject(PBSE_UNKJOBID, 0, preq);\n\t\treturn;\n\t}\n\n\tsprintf(log_buffer, \"signal job with %s\", sname);\n\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\n\t/**\n\t *\tApparently the Server didn't receive or process an Obit sent earlier.\n\t *\tJust force a resend of the obit.\n\t */\n\tif (check_job_substate(pjob, JOB_SUBSTATE_OBIT)) {\n\t\tsend_obit(pjob, 0);\n\t\tif (strcmp(sname, SIG_RESUME) == 0)\n\t\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\telse\n\t\t\treply_ack(preq);\n\t\treturn;\n\t} else if ((check_job_substate(pjob, JOB_SUBSTATE_RUNEPILOG)) &&\n\t\t   (strcmp(sname, \"SIGKILL\") != 0)) {\n\t\t/* If epilogue is running and signal is not SIGKILL, */\n\t\t/* disallow request;  note SIGKILL sent on qdel -w force */\n\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\treturn;\n\t}\n\n\tif ((strcmp(sname, SIG_TermJob) == 0) ||\n\t    (strcmp(sname, SIG_RERUN) == 0)) {\n\n\t\tif (strcmp(sname, SIG_TermJob) == 0) {\n\t\t\tmom_hook_input_init(&hook_input);\n\t\t\thook_input.pjob = pjob;\n\n\t\t\tmom_hook_output_init(&hook_output);\n\t\t\thook_output.reject_errcode = 
&reject_errcode;\n\t\t\thook_output.last_phook = &last_phook;\n\t\t\thook_output.fail_action = &hook_fail_action;\n\n\t\t\tif (mom_process_hooks(HOOK_EVENT_EXECJOB_PRETERM,\n\t\t\t\t\t      PBS_MOM_SERVICE_NAME, mom_host,\n\t\t\t\t\t      &hook_input, &hook_output,\n\t\t\t\t\t      hook_msg, sizeof(hook_msg), 1) == 0) {\n\t\t\t\treply_text(preq, PBSE_HOOK_REJECT, hook_msg);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t\t/**\n\t\t *\t\tPBS pseudo signal for either:\n\t\t *\t\tjob termination, sent when a qdel is issued on a running job; or\n\t\t *\t\trerunning (requeuing) a job, sent on qrerun.\n\t\t */\n\t\tif (strcmp(sname, SIG_RERUN) == 0) {\n\t\t\t/* Set RERUN exit value */\n\t\t\tpjob->ji_qs.ji_un.ji_momt.ji_exitstat = JOB_EXEC_RERUN;\n\t\t}\n\n\t\t/**\n\t\t *\t\tFor both terminate and rerun terminate the job either via\n\t\t *\t\trunning the terminate action script or by sending  a\n\t\t *\t\tSIGTERM-delay-SIGKILL sequence\n\t\t */\n\t\tif (terminate_job(pjob, 0) == 1) {\n\t\t\t/* let server know (via ..._TERM) that a site\n\t\t\t * script is being run\n\t\t\t *\n\t\t\t * The reason for the req_reject(PBSE_NONE, ..) 
call\n\t\t\t * was that req_reject (unlike reply_ack()) allows MOM\n\t\t\t * to pass back JOB_SUBSTATE_TERM to the server.\n\t\t\t */\n\t\t\treq_reject(PBSE_NONE, JOB_SUBSTATE_TERM, preq);\n\t\t} else {\n\t\t\treply_ack(preq);\n\t\t}\n\t\treturn;\n\t} else if (strcmp(sname, SIG_SUSPEND) == 0 || strcmp(sname, SIG_ADMIN_SUSPEND) == 0) {\n\t\t/**\n\t\t *\t\tPBS pseudo signal to suspend a running job.\n\t\t *\t\tJob must be actually running.\n\t\t */\n\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_Suspend) != 0) {\n\t\t\tsprintf(log_buffer, \"suspend failed: %s\",\n\t\t\t\t\"server indicates job is already suspended\");\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\t\treturn;\n\t\t}\n\t\tswitch (get_job_substate(pjob)) {\n\t\t\tcase JOB_SUBSTATE_RUNNING:\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tsprintf(log_buffer, \"suspend failed, job substate = %ld\",\n\t\t\t\t\tget_job_substate(pjob));\n\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\t\t\treturn;\n\t\t}\n\t\tsusp_resum(pjob, 1, preq);\n\t\treturn;\n\t} else if (strcmp(sname, SIG_RESUME) == 0 || strcmp(sname, SIG_ADMIN_RESUME) == 0) {\n\t\t/**\n\t\t *\t\tPBS pseudo signal to resume a suspended job.\n\t\t */\n\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_Suspend) == 0) {\n\t\t\tsprintf(log_buffer, \"resume failed: %s\",\n\t\t\t\t\"server indicates job is not suspended\");\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\t\treturn;\n\t\t}\n\t\tswitch (get_job_substate(pjob)) {\n\t\t\tcase JOB_SUBSTATE_SUSPEND:\n\t\t\tcase JOB_SUBSTATE_SCHSUSP:\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tsprintf(log_buffer, \"resume failed, job substate = 
%ld\",\n\t\t\t\t\tget_job_substate(pjob));\n\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\t\t\treturn;\n\t\t}\n\t\tsusp_resum(pjob, 0, preq);\n\t\treturn;\n\t}\n\n\t/**\n\t *\tFrom here on, we are dealing with a \"real\" signal.  It is sent to all\n\t *\tprocesses in the job.\n\t */\n\tif (isdigit((int) *sname))\n\t\tsig = atoi(sname);\n\telse {\n\t\tif (!strncmp(\"SIG\", sname, 3))\n\t\t\tsname += 3;\n\t\tpsigt = sig_tbl;\n\t\twhile (psigt->sig_name) {\n\t\t\tif (!strcmp(sname, psigt->sig_name))\n\t\t\t\tbreak;\n\t\t\tpsigt++;\n\t\t}\n\t\tsig = psigt->sig_val;\n\t}\n\tif (sig < 0) {\n\t\treq_reject(PBSE_UNKSIG, 0, preq);\n\t\treturn;\n\t}\n#ifdef SIGKILL\n\tif ((sig != SIGKILL) &&\n\t    (!check_job_substate(pjob, JOB_SUBSTATE_RUNNING)))\n#else\n\tif (!check_job_substate(pjob, JOB_SUBSTATE_RUNNING))\n#endif\n\t{\n\t\tsprintf(log_buffer, \"cannot signal job, job substate = %ld\",\n\t\t\tget_job_substate(pjob));\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\treturn;\n\t}\n\t/* Now, send signal to the MOM's child process */\n\tif (kill_job(pjob, sig) == 0) {\n\t\tif ((get_job_substate(pjob) <= JOB_SUBSTATE_EXITING) ||\n\t\t    (check_job_substate(pjob, JOB_SUBSTATE_TERM))) {\n\t\t\t/* No procs found, force job to exiting */\n\t\t\t/* force issue of (another) job obit */\n\t\t\t(void) sprintf(log_buffer,\n\t\t\t\t       \"Job recycled into exiting on signal from substate %ld\",\n\t\t\t\t       get_job_substate(pjob));\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\tset_job_substate(pjob, JOB_SUBSTATE_EXITING);\n\t\t\tptask = GET_NEXT(pjob->ji_tasks);\n\t\t\tif (ptask)\n\t\t\t\tptask->ti_qs.ti_status = TI_STATE_EXITED;\n\t\t\texiting_tasks = 
1;\n\t\t}\n\t}\n\n\treply_ack(preq);\n\treturn;\n}\n\n/**\n * @brief\n *\tRemove a file which is specified by path and owned by user.\n *\n * @param[in] path - path for file to be deleted\n * @param[in] user - user name\n * @param[in] prmt - remote (source) path, used to check whether source and destination are the same local file\n * @param[in] bad_list - pointer to bad file list\n *\n * @return\tint\n * @retval\t0\tsuccess\n * @retval\terrno\tfailure.\n *\n */\n\nstatic int\ndelete_file(char *path, char *user, char *prmt, char **bad_list)\n{\n\tint rc;\n\n\tDBPRT((\"%s: path %s\\n\", __func__, path))\n\tfix_path(prmt, 3);\n\tfix_path(path, 3);\n\tif (local_or_remote(&prmt) == 0) {\n\t\t/* local file, is the source == destination? */\n\t\t/* if so, don't delete it\t\t     */\n\t\tif (is_file_same(prmt, path) == 1) {\n\t\t\tDBPRT((\"%s: path same as %s\\n\", __func__, prmt))\n\t\t\treturn 0;\n\t\t}\n\t}\n\n\trc = remtree(path);\n\tif (rc == -1 && errno == ENOENT)\n\t\trc = 0;\n\n\tif (rc != 0) {\n\t\tsprintf(log_buffer,\n\t\t\t\"Unable to delete file %s for user %s, error = %d\",\n\t\t\tpath, user, errno);\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_REQUEST,\n\t\t\t  LOG_INFO, __func__, log_buffer);\n\t\tadd_bad_list(bad_list, log_buffer, 2);\n\t\trc = errno;\n\t} else {\n\t\tDBPRT((\"%s: Deleted file %s\\n\", __func__, path))\n\t}\n\treturn rc;\n}\n\n/**\n * @brief\n *\tdelete the files in a copy files or delete files request\n *\tWARNING: fork_to_user() must be called first so that useruid/gid is set up\n *\n * @param[in] rqcpf - pointer to file list structure from request\n * @param[in] pjob - pointer to job structure (can be null)\n * @param[out] pbadfile - pointer to bad file list\n *\n * @return int\n * @retval 0 - success\n * @retval errno - failure.\n *\n */\nstatic int\ndel_files(struct rq_cpyfile *rqcpf, job *pjob, char **pbadfile)\n{\n\tstruct rqfpair *pair = NULL;\n\tint rc = 0;\n\tint ret = 0;\n\tchar path[MAXPATHLEN + 1] = {'\\0'};\n\tstruct stat sb = {0};\n\tchar dname[MAXPATHLEN + 1] = {'\\0'};\n\tchar matched[MAXPATHLEN + 1] = {'\\0'};\n\tchar 
rmt_file[MAXPATHLEN + 1] = {'\\0'};\n\tchar local_file[MAXPATHLEN + 1] = {'\\0'};\n\tchar *ps = NULL;\n\tchar *pp = NULL;\n\tDIR *dirp = NULL;\n\tstruct dirent *pdirent = NULL;\n\tchar prmt[MAXPATHLEN + 1] = {'\\0'};\n\tint sandbox_private = 0;\n\n\tDBPRT((\"%s: entered\\n\", __func__))\n\t/*\n\t * Should be running in the user's home directory.\n\t * Build up path of file using local name only, then unlink it.\n\t * The first set of files may have the STDJOBFILE\n\t * flag set, which we need to unlink as root, the others as the user.\n\t * This is changed from the past.  We no longer delete\n\t * checkpoint files here.\n\t */\n\tsandbox_private = (rqcpf->rq_dir & STAGE_JOBDIR) ? TRUE : FALSE;\n\t/* When sandbox=private, chdir to job directory */\n\tif (sandbox_private) {\n\t\tif (!pjob) {\n\t\t\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_REQUEST, LOG_INFO, __func__, \"%s: no job information\", rqcpf->rq_jobid);\n\t\t\treturn -1;\n\t\t}\n\t\tif (pjob->ji_grpcache)\n\t\t\tpbs_jobdir = jobdirname(rqcpf->rq_jobid, pjob->ji_grpcache->gc_homedir);\n\t\telse\n\t\t\tpbs_jobdir = jobdirname(rqcpf->rq_jobid, NULL);\n\t\tif (chdir(pbs_jobdir) == -1) \n\t\t\tlog_errf(-1, __func__, \"chdir failed. 
ERR : %s\", strerror(errno));\t\t\n\t}\n\n\tpair = (struct rqfpair *) GET_NEXT(rqcpf->rq_pair);\n\tfor (; pair; pair = (struct rqfpair *) GET_NEXT(pair->fp_link)) {\n\n\t\treplace(pair->fp_rmt, \"\\\\,\", \",\", rmt_file);\n\t\tif (*rmt_file != '\\0')\n\t\t\tstrcpy(prmt, rmt_file);\n\t\telse\n\t\t\tpbs_strncpy(prmt, pair->fp_rmt, sizeof(prmt));\n\t\tpath[0] = '\\0';\n\t\tif (pair->fp_flag == STDJOBFILE) { /* standard out or error */\n#ifndef NO_SPOOL_OUTPUT\n\t\t\tif (!sandbox_private) {\n\t\t\t\tDBPRT((\"%s:, STDJOBFILE in %s\\n\", __func__, path_spool))\n\t\t\t\tpbs_strncpy(path, path_spool, sizeof(path));\n\t\t\t}\n#endif /* NO_SPOOL_OUTPUT */\n\t\t}\n\t\treplace(pair->fp_local, \"\\\\,\", \",\", local_file);\n\t\tif (*local_file != '\\0')\n\t\t\t(void) strcat(path, local_file);\n\t\telse\n\t\t\t(void) strcat(path, pair->fp_local);\n\t\tDBPRT((\"%s: path %s\\n\", __func__, path))\n\n\t\t/* will have to fix this for O_WORKDIR - or change O_WORKDIR behavior */\n\t\t/* O_WORKDIR behavior should match the behavior of HOME */\n\t\t/* and delete files one by one */\n\t\tif (sandbox_private) {\n\t\t\tif (is_child_path(pbs_jobdir, path) == 1) {\n\t\t\t\t/* file is under staging and execution dir, */\n\t\t\t\t/* so defer its removal until staging and */\n\t\t\t\t/* execution dir removal time */\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t}\n\n#ifdef WIN32\n\t\tif (stat_uncpath(path, &sb) == 0)\n#else\n\t\tif (stat(path, &sb) == 0)\n#endif\n\t\t{\n\t\t\tif (S_ISDIR(sb.st_mode)) {\n\n\t\t\t\t/* have a directory, must append last segment */\n\t\t\t\t/* of source name to it for  the unlink\t      */\n\n\t\t\t\t(void) strcat(path, \"/\");\n\t\t\t\tpp = strrchr(prmt, '/');\n\n\t\t\t\tif (pp && *(pp + 1) == '\\0') {\n\t\t\t\t\t/* reduce /dir/dir/  case to /dir/dir */\n\t\t\t\t\t*pp = '\\0';\n\t\t\t\t\tpp = strrchr(prmt, '/');\n\t\t\t\t}\n\t\t\t\tif (pp)\n\t\t\t\t\t++pp;\n\t\t\t\telse if ((pp = strrchr(prmt, ':')) != NULL)\n\t\t\t\t\t++pp;\n\t\t\t\telse\n\t\t\t\t\tpp = 
prmt;\n\n\t\t\t\t(void) strcat(path, pp);\n\n\t\t\t\tDBPRT((\"%s: append segment to path %s\\n\", __func__, path))\n\t\t\t}\n\t\t} else {\n\t\t\tif (errno != ENOENT)\n\t\t\t\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_REQUEST, LOG_INFO, __func__, \"cannot stat(%s): %s\", path, strerror(errno));\n\t\t}\n\n\t\t/*\n\t\t * If the wildcard \"*\" is given to delete every file\n\t\t * in the homedir, don't do it.\n\t\t */\n\t\tif (strcmp(path, \"./*\") == 0) {\n\t\t\tDBPRT((\"%s: wildcard delete of all files skipped\\n\", __func__))\n\t\t\tcontinue;\n\t\t}\n\n\t\tps = strrchr(path, (int) '/');\n\t\tif (ps) {\n\t\t\t/* has prefix path, save parent directory name */\n\t\t\tint len = (int) (ps - path) + 1;\n\n\t\t\tpbs_strncpy(dname, path, len);\n\t\t\tps++;\n\t\t} else { /* no prefix path */\n\t\t\t/*\n\t\t\t * If the wildcard \"*\" is given to delete every file\n\t\t\t * in the homedir, don't do it.\n\t\t\t */\n\t\t\tif (strcmp(path, \"*\") == 0) {\n\t\t\t\tDBPRT((\"%s: wildcard delete of all files skipped\\n\", __func__))\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tdname[0] = '.';\n\t\t\tdname[1] = '/';\n\t\t\tdname[2] = '\\0';\n\t\t\tps = path;\n\t\t}\n\n\t\t/* if there are no wildcards we don't need to search */\n\t\tif (strchr(ps, '*') == NULL && strchr(ps, '?') == NULL) {\n\t\t\tDBPRT((\"%s: path has no wildcards\\n\", __func__))\n\t\t\trc = delete_file(path, rqcpf->rq_user, prmt, pbadfile);\n\t\t\tif (rc != 0)\n\t\t\t\tret = rc;\n\t\t\tcontinue;\n\t\t}\n\n\t\tdirp = opendir(dname);\n\t\tif (dirp == NULL) { /* dir cannot be opened, just call delete_file */\n\t\t\tDBPRT((\"%s: cannot open dir %s\\n\", __func__, dname))\n\t\t\trc = delete_file(path, rqcpf->rq_user, prmt, pbadfile);\n\t\t\tif (rc != 0)\n\t\t\t\tret = rc;\n\t\t\tcontinue;\n\t\t}\n\n\t\twhile (errno = 0, (pdirent = readdir(dirp)) != NULL) {\n\t\t\tif (pdirent->d_name[0] == '.') {\n\t\t\t\tif (pdirent->d_name[1] == '\\0' || (pdirent->d_name[1] == '.' 
&& pdirent->d_name[2] == '\\0'))\n\t\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tif (pbs_glob(pdirent->d_name, ps) != 0) {\n\t\t\t\t/* name matches */\n\t\t\t\tstrcpy(matched, dname);\n\t\t\t\tstrcat(matched, pdirent->d_name);\n\t\t\t\tDBPRT((\"%s: match %s\\n\", __func__, matched))\n\t\t\t\trc = delete_file(matched, rqcpf->rq_user, prmt, pbadfile);\n\t\t\t\tif (rc != 0)\n\t\t\t\t\tret = rc;\n\t\t\t}\n\t\t}\n\t\tif (errno != 0 && errno != ENOENT) { /* dir cannot be read, just call delete_file */\n\t\t\tDBPRT((\"%s: cannot read dir %s\\n\", __func__, dname))\n\t\t\trc = delete_file(path, rqcpf->rq_user, prmt, pbadfile);\n\t\t\tif (rc != 0)\n\t\t\t\tret = rc;\n\t\t}\n\n\t\t(void) closedir(dirp);\n\t}\n\treturn (ret);\n}\n\n/**\n * @brief\n * \tDo post rerunjob processing and cleanup for both tcp\n * \tand tpp requests\n *\n * @param[in]\tptask - Work task\n *\n * @return \tnone\n *\n */\n\nstatic void\npost_rerunjob(struct work_task *ptask)\n{\n\tstruct batch_request *preq = ptask->wt_parm1;\n\tif (preq == NULL)\n\t\treturn;\n\n\tif (ptask->wt_aux != 0)\n\t\treq_reject(-ptask->wt_aux, 0, preq);\n\telse\n\t\treply_ack(preq);\n}\n\n/**\n * @brief\n *\trequest to rerun job.\n *\n * @param[in] preq - pointer to batch_request structure\n *\n * @return \tVoid\n *\n */\n\nvoid\nreq_rerunjob(struct batch_request *preq)\n{\n\tjob *pjob;\n\tunsigned int port;\n\tint rc;\n\tint sock;\n\tchar *svrport;\n\tstruct work_task *wtask = NULL;\n\tpid_t child;\n\n\tpjob = find_job(preq->rq_ind.rq_rerun);\n\tif (pjob == NULL) {\n\t\treq_reject(PBSE_UNKJOBID, 0, preq);\n\t\treturn;\n\t}\n\n\t/* try fork to send files back */\n\n\tif ((child = fork_me(preq->rq_conn)) > 0) {\n\t\twtask = set_task(WORK_Deferred_Child, child, post_rerunjob, preq);\n\t\tif (!wtask) {\n\t\t\tlog_err(errno, NULL, \"Failed to create deferred work task, Out of memory\");\n\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\treturn;\n\t\t}\n\n\t\t/* change substate so Mom doesn't send another obit     */\n\t\t/* do not record to 
disk, so Obit is resent on recovery */\n\t\tif (check_job_substate(pjob, JOB_SUBSTATE_OBIT))\n\t\t\tset_job_substate(pjob, JOB_SUBSTATE_EXITED);\n\t\treturn;\n\t} else if ((child < 0) && (errno != ENOSYS)) {\n\t\treq_reject(-child, 0, preq);\n\t\treturn;\n\t}\n\n\t/* Child process ... if fork is available; otherwise we continue in the foreground */\n\t/* and send the Job Files request(s). */\n\n\trc = 0;\n\tsvrport = strchr(get_jattr_str(pjob, JOB_ATR_at_server),\n\t\t\t (int) ':');\n\tif (svrport)\n\t\tport = atoi(svrport + 1);\n\telse\n\t\tport = default_server_port;\n\n\tsock = client_to_svr(pjob->ji_qs.ji_un.ji_momt.ji_svraddr, port, B_RESERVED);\n\n\tif (pbs_errno == PBSE_NOLOOPBACKIF)\n\t\tlog_err(PBSE_NOLOOPBACKIF, \"client_to_svr\", msg_noloopbackif);\n\n\tif (sock < 0) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_REQUEST, LOG_WARNING,\n\t\t\t  \"req_rerun\", \"no contact with the server\");\n\t\tif (child) {\n\t\t\t/* TPP streams cannot be inherited.\n\t\t\t * So, we need to reject the request here itself if in foreground.\n\t\t\t */\n\t\t\treq_reject(PBSE_NOSERVER, 0, preq);\n\t\t}\n\t\trc = 1;\n\t}\n\n\tif (rc == 0) {\n\t\tif (((rc = return_file(pjob, StdOut, sock)) != 0) ||\n\t\t    ((rc = return_file(pjob, StdErr, sock)) != 0)) {\n\t\t\t/* TPP streams cannot be inherited.\n\t\t\t * So, we need to reject/ack the request here itself if in foreground.\n\t\t\t */\n\t\t\tif (child)\n\t\t\t\treq_reject(rc, 0, preq);\n\t\t\telse\n\t\t\t\trc = 1;\n\t\t} else if (child)\n\t\t\treply_ack(preq);\n\t}\n\n\tclosesocket(sock);\n\tif (!child)\n\t\texit(rc);\n\treturn;\n}\n\n#ifdef WIN32 /* WIN32 ------------------------------------------------------ */\n/**\n * @brief\n * \tDo post cpyfile processing and cleanup\n * \tCalled when child process started in req_cpyfile()\n *\tIf it had a major failure, resend obit to server, otherwise set\n *\tsubstate back to OBIT\n *\n * @param[in]\tpwt - work task whose wt_parm1 holds the copy info and whose wt_aux is the exit value of the child 
process\n *\n * @return \tnone\n *\n */\n\nvoid\npost_cpyfile(struct work_task *pwt)\n{\n\tstruct batch_request *preq = NULL;\n\tpio_handles *pio = NULL;\n\tstruct rq_cpyfile *rqcpf = NULL;\n\tcopy_info *cpyinfo = NULL;\n\tint ecode = -1;\n\tjob *pjob = NULL;\n\tchar buf[CPY_PIPE_BUFSIZE] = {'\\0'};\n\tint buflen = 0;\n\tchar *jobid = NULL;\n\n\tif ((pwt == NULL) || (pwt->wt_parm1 == NULL))\n\t\treturn;\n\n\tcpyinfo = pwt->wt_parm1;\n\tif (cpyinfo->preq == NULL || cpyinfo->jobid == NULL)\n\t\treturn;\n\tpreq = cpyinfo->preq;\n\tpio = &cpyinfo->pio;\n\tjobid = cpyinfo->jobid;\n\tpjob = cpyinfo->pjob;\n\tecode = pwt->wt_aux;\n\n\tDBPRT((\"%s: entered %s\\n\", __func__, jobid))\n\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG, jobid, \"%s: entered %s\", __func__, jobid);\n\n\tswitch (ecode) {\n\t\tcase STAGEFILE_OK:\n\t\t\tif (pjob) {\n\t\t\t\t/*\n\t\t\t\t * reset substate to OBIT,  if server doesn't move\n\t\t\t\t * on to next step in End of Job processing quickly\n\t\t\t\t * we will resend obit, see mom_main.c\n\t\t\t\t */\n\t\t\t\tset_job_substate(pjob, JOB_SUBSTATE_OBIT);\n\t\t\t\tpjob->ji_sampletim = time(0);\n\t\t\t}\n\t\t\treply_ack(preq);\n\t\t\tbreak;\n\t\tcase STAGEFILE_NOCOPYFILE:\n\t\t\t(void) win_pread(pio, buf, sizeof(buf));\n\t\t\tbuflen = strlen(buf);\n\t\t\tif (buflen > 0)\n\t\t\t\tbuf[buflen - 1] = '\\0';\n\t\t\t(void) reply_text(preq, PBSE_NOCOPYFILE, buf);\n\t\t\tif (pjob) {\n\t\t\t\tpjob->ji_qs.ji_svrflags |= JOB_SVFLG_StgoFal;\n\t\t\t\t(void) job_save(pjob);\n\t\t\t}\n\t\t\tbreak;\n\t\tcase STAGEFILE_FATAL:\n\t\t\t(void) win_pread(pio, buf, sizeof(buf));\n\t\t\tbuflen = strlen(buf);\n\t\t\tif (buflen > 0)\n\t\t\t\tbuf[buflen - 1] = '\\0';\n\t\t\t(void) snprintf(log_buffer, sizeof(log_buffer), \"file copy failed for jobid %s with fatal error: %s\", jobid, buf);\n\t\t\tlog_err(PBSE_NOCOPYFILE, __func__, log_buffer);\n\t\t\t(void) reply_text(preq, PBSE_NOCOPYFILE, log_buffer);\n\t\t\tbreak;\n\t\tcase STAGEFILE_BADUSER:\n\t\t\t(void) 
snprintf(log_buffer, sizeof(log_buffer), \"file copy failed for jobid %s with baduser\", jobid);\n\t\t\tlog_err(PBSE_BADUSER, __func__, log_buffer);\n\t\t\treq_reject(PBSE_BADUSER, 0, preq);\n\t\t\tbreak;\n\t\tdefault:\n\t\t\t(void) snprintf(log_buffer, sizeof(log_buffer), \"file copy failed for job %s with error %d\", jobid, ecode);\n\t\t\tlog_err(PBSE_NOCOPYFILE, __func__, log_buffer);\n\n\t\t\tif (pjob) {\n\t\t\t\t/*\n\t\t\t\t * child that was doing file copies had a major error,\n\t\t\t\t * was killed, or crashed; resend obit to restart\n\t\t\t\t */\n\t\t\t\tsend_obit(pjob, 0);\n\t\t\t}\n\n\t\t\t(void) reply_text(preq, PBSE_NOCOPYFILE, log_buffer);\n\t\t\tbreak;\n\t}\n\n\twin_pclose2(pio);\n\tDBPRT((\"%s: done %s\\n\", __func__, jobid))\n\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG, jobid, \"%s: done %s\", __func__, jobid);\n\n\tdelete_link(&cpyinfo->al_link);\n\tfree(cpyinfo->jobid);\n\tcpyinfo->jobid = NULL;\n\tfree(cpyinfo);\n\tcpyinfo = NULL;\n}\n\n/**\n * @brief\n *\tfind and return copy information saved in global list mom_copyreqs_list\n *\tfor given <jobid>\n *\n * @param[in]\tjobid - job id\n *\n * @return\tcopy_info *\n * @retval\tpointer to copy_info\tif copy info found\n * @retval\tNULL\t\t\tif copy info not found\n *\n */\ncopy_info *\nget_copyinfo_from_list(char *jobid)\n{\n\tcopy_info *cpyinfo = NULL;\n\n\tif (jobid == NULL || *jobid == '\\0')\n\t\treturn NULL;\n\n\tcpyinfo = GET_NEXT(mom_copyreqs_list);\n\twhile (cpyinfo) {\n\t\tif (!strncmp(cpyinfo->jobid, jobid, strlen(jobid))) {\n\t\t\treturn cpyinfo;\n\t\t}\n\t\tcpyinfo = GET_NEXT(cpyinfo->al_link);\n\t}\n\n\treturn NULL;\n}\n\n/**\n * @brief\n *\tprocess the Copy Files request from the server to dispose\n *\tof output from the job.  This is done by a child of MOM since it\n *\tmight take time.\n *\t<Windows version>\n *\n * @param[in]\tpreq - pointer to batch request for copy file\n * @return\tvoid\n *\n * NOTE: The supplied PBS means of moving the file is by \"rcp\". 
A site may wish to change this.\n *\n */\n\nvoid\nreq_cpyfile(struct batch_request *preq)\n{\n\tint dir = -1;\n\tjob *pjob = NULL;\n\tint rc = -1;\n\tstruct rq_cpyfile *rqcpf = NULL;\n\tstruct passwd *pw = NULL;\n\tchar actual_homedir[MAXPATHLEN + 1] = {'\\0'};\n\tcpy_files stage_inout = {0};\n\tchar cmdline[PBS_CMDLINE_LENGTH + 1] = {'\\0'};\n\tchar buf[CPY_PIPE_BUFSIZE + 1] = {'\\0'};\n\tstruct work_task *ptask = NULL;\n\tcopy_info *cpyinfo = NULL;\n\tstruct proc_ctrl proc_info;\n\textern char *path_log;\n\textern char *log_file;\n\textern pbs_list_head task_list_event;\n\tint is_network_drive = 0;\n\tchar current_dir[MAX_PATH + 1] = {'\\0'};\n\tint direct_write = 0;\n\n\tif (preq->rq_type == PBS_BATCH_CopyFiles_Cred)\n\t\trqcpf = &preq->rq_ind.rq_cpyfile_cred.rq_copyfile;\n\telse\n\t\trqcpf = &preq->rq_ind.rq_cpyfile;\n\n\tpjob = find_job(rqcpf->rq_jobid);\n\tif (pjob != NULL) {\n\t\t/*\n\t\t * Once a job starts file processing, the checkpoint\n\t\t * flags need to be turned off so a restart cannot\n\t\t * send us back to the future.\n\t\t */\n\t\tif (pjob->ji_qs.ji_svrflags &\n\t\t    (JOB_SVFLG_CHKPT | JOB_SVFLG_ChkptMig)) {\n\t\t\tpjob->ji_qs.ji_svrflags &=\n\t\t\t\t~(JOB_SVFLG_CHKPT | JOB_SVFLG_ChkptMig);\n\t\t\t(void) job_save(pjob);\n\t\t}\n\t\t/*\n\t\t * change substate so Mom doesn't send another obit\n\t\t * do not record to disk, so Obit is resent on recovery\n\t\t */\n\t\tif (check_job_substate(pjob, JOB_SUBSTATE_OBIT))\n\t\t\tset_job_substate(pjob, JOB_SUBSTATE_EXITED);\n\t}\n\n\tdir = (rqcpf->rq_dir & STAGE_DIRECTION) ? STAGE_DIR_OUT : STAGE_DIR_IN;\n\tstage_inout.sandbox_private = (rqcpf->rq_dir & STAGE_JOBDIR) ? 
TRUE : FALSE;\n\tif (pjob != NULL && (dir == STAGE_DIR_OUT)) {\n\t\tdirect_write = direct_write_requested(pjob);\n\t}\n\n\t/*\n\t * In Windows, we need to be the user in order to call\n\t * map_unc_path() therefore we will fork_to_user before\n\t * calling getpwnam()\n\t */\n\tif (fork_to_user(preq, pjob) == INVALID_HANDLE_VALUE) {\n\t\treq_reject(PBSE_BADUSER, 0, preq);\n\t\treturn;\n\t}\n\n\tif (pjob == NULL) {\n\t\t/*\n\t\t * no homedir can be cached in job's gc_homedir/altid\n\t\t * attribute, so we call map_unc_path to get it now\n\t\t */\n\t\tif ((pw = getpwnam(preq->rq_ind.rq_cpyfile.rq_user)) != NULL) {\n\t\t\tpbs_strncpy(actual_homedir,\n\t\t\t\t    map_unc_path(pw->pw_dir, pw), sizeof(actual_homedir));\n\t\t\tpbs_jobdir = jobdirname(rqcpf->rq_jobid, actual_homedir);\n\t\t} else {\n\t\t\tsprintf(log_buffer, \"unable to find a password entry for %s\", preq->rq_ind.rq_cpyfile.rq_user);\n\t\t\tlog_err(errno, \"req_cpyfile\", log_buffer);\n\t\t\treq_reject(PBSE_BADUSER, 0, preq);\n\t\t\treturn;\n\t\t}\n\t} else {\n\t\t/*\n\t\t * stage out will have the pjob already set, and the\n\t\t * home directory should have also been set.\n\t\t * Find the pbs_jobdir based off the user home info\n\t\t * stored in the pjob\n\t\t */\n\t\tif (pjob->ji_grpcache)\n\t\t\tpbs_jobdir = jobdirname(pjob->ji_qs.ji_jobid, pjob->ji_grpcache->gc_homedir);\n\t\telse\n\t\t\tpbs_jobdir = jobdirname(pjob->ji_qs.ji_jobid, NULL);\n\t}\n\n\t/*\n\t * revert to ADMIN to do stuff like\n\t * create the job directory as PBS, so it has\n\t * the same permissions as TMPDIR\n\t */\n\t(void) revert_impersonated_user();\n\n\tif ((dir == STAGE_DIR_IN) && (stage_inout.sandbox_private)) {\n\t\t/* Create PBS_JOBDIR */\n\t\trc = mkjobdir(rqcpf->rq_jobid, pbs_jobdir, preq->rq_ind.rq_cpyfile.rq_user, (pjob != NULL && pjob->ji_user != NULL) ? 
pjob->ji_user->pw_userlogin : INVALID_HANDLE_VALUE);\n\t\tif (rc != 0) {\n\t\t\tsprintf(log_buffer, \"unable to create the job directory %s\", pbs_jobdir);\n\t\t\tlog_err(errno, \"req_cpyfile\", log_buffer);\n\t\t\treq_reject(PBSE_MOMREJECT, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tsnprintf(cmdline, sizeof(cmdline), \"%s/sbin/pbs_stage_file.exe\", pbs_conf.pbs_exec_path);\n\n\tif ((cpyinfo = (copy_info *) malloc(sizeof(copy_info))) == NULL) {\n\t\t(void) snprintf(log_buffer, sizeof(log_buffer), \"unable to allocate memory for copy_info for job %s\", rqcpf->rq_jobid);\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_ERR, rqcpf->rq_jobid, log_buffer);\n\t\treq_reject(PBSE_MOMREJECT, 0, preq);\n\t\treturn;\n\t}\n\n\tmemset(cpyinfo, 0, sizeof(copy_info));\n\tCLEAR_LINK(cpyinfo->al_link);\n\n\tif ((cpyinfo->jobid = strdup(rqcpf->rq_jobid)) == NULL) {\n\t\t(void) snprintf(log_buffer, sizeof(log_buffer), \"unable to allocate memory for copy_info->jobid for job %s\", rqcpf->rq_jobid);\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_ERR, rqcpf->rq_jobid, log_buffer);\n\t\tfree(cpyinfo);\n\t\tcpyinfo = NULL;\n\t\treq_reject(PBSE_MOMREJECT, 0, preq);\n\t\treturn;\n\t}\n\n\tcpyinfo->pjob = pjob;\n\tcpyinfo->preq = preq;\n\tproc_info.bInheritHandle = TRUE;\n\tproc_info.bnowait = 0;\n\tproc_info.buse_cmd = TRUE;\n\tproc_info.need_ptree_termination = TRUE;\n#ifndef DEBUG\n\tproc_info.flags = 0;\n#else\n\tproc_info.flags = CREATE_NO_WINDOW | CREATE_BREAKAWAY_FROM_JOB;\n#endif\n\t/* win_popen() doesn't launch the process if the current directory is a mapped path in the user session */\n\tcurrent_dir[0] = '\\0';\n\t_getcwd(current_dir, MAX_PATH + 1);\n\tif ((pjob != NULL) && (pjob->ji_user != NULL) && impersonate_user(pjob->ji_user->pw_userlogin) == 0) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer) - 1, \"req_cpyfile: failed to impersonate user %s error=%d\",\n\t\t\t pjob->ji_user->pw_name, GetLastError());\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB,\n\t\t\t  LOG_DEBUG, 
pjob->ji_qs.ji_jobid, log_buffer);\n\t\treturn;\n\t}\n\tis_network_drive = is_network_drive_path(current_dir);\n\tproc_info.is_current_path_network = is_network_drive;\n\t(void) revert_impersonated_user();\n\n\tif (win_popen(cmdline, \"w\", &cpyinfo->pio, &proc_info) == 0) {\n\t\terrno = GetLastError();\n\t\tpbs_errno = errno;\n\t\t(void) snprintf(log_buffer, sizeof(log_buffer) - 1, \"executing %s for job %s failed errno=%d\", cmdline, rqcpf->rq_jobid, errno);\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_ERR, rqcpf->rq_jobid, log_buffer);\n\t\twin_pclose(&cpyinfo->pio);\n\t\tfree(cpyinfo->jobid);\n\t\tcpyinfo->jobid = NULL;\n\t\tfree(cpyinfo);\n\t\tcpyinfo = NULL;\n\t\treq_reject(PBSE_MOMREJECT, 0, preq);\n\t\treturn;\n\t}\n\n\tptask = set_task(WORK_Deferred_Child, (long) cpyinfo->pio.pi.hProcess, post_cpyfile, cpyinfo);\n\tif (!ptask) {\n\t\terrno = ENOMEM;\n\t\tpbs_errno = errno;\n\t\t(void) snprintf(log_buffer, sizeof(log_buffer) - 1, \"unable to set task for cpyreq for job %s\", rqcpf->rq_jobid);\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_ERR, rqcpf->rq_jobid, log_buffer);\n\t\twin_pclose(&cpyinfo->pio);\n\t\tfree(cpyinfo->jobid);\n\t\tcpyinfo->jobid = NULL;\n\t\tfree(cpyinfo);\n\t\tcpyinfo = NULL;\n\t\treq_reject(PBSE_MOMREJECT, 0, preq);\n\t\treturn;\n\t}\n\n\tcpyinfo->ptask = ptask;\n\tappend_link(&mom_copyreqs_list, &cpyinfo->al_link, cpyinfo);\n\n\taddpid(cpyinfo->pio.pi.hProcess);\n\n\tsnprintf(buf, sizeof(buf) - 1, \"path_log=%s\\n\", path_log);\n\tcheck_err(__func__, buf, win_pwrite(&cpyinfo->pio, buf, strlen(buf)));\n\n\tsnprintf(buf, sizeof(buf) - 1, \"path_spool=%s\\n\", path_spool);\n\tcheck_err(__func__, buf, win_pwrite(&cpyinfo->pio, buf, strlen(buf)));\n\n\tsnprintf(buf, sizeof(buf) - 1, \"path_undeliv=%s\\n\", path_undeliv);\n\tcheck_err(__func__, buf, win_pwrite(&cpyinfo->pio, buf, strlen(buf)));\n\n\tsnprintf(buf, sizeof(buf) - 1, \"path_checkpoint=%s\\n\", path_checkpoint);\n\tcheck_err(__func__, buf, 
win_pwrite(&cpyinfo->pio, buf, strlen(buf)));\n\n\tsnprintf(buf, sizeof(buf) - 1, \"pbs_jobdir=%s\\n\", pbs_jobdir);\n\tcheck_err(__func__, buf, win_pwrite(&cpyinfo->pio, buf, strlen(buf)));\n\n\tsnprintf(buf, sizeof(buf) - 1, \"actual_homedir=%s\\n\",\n\t\t (pjob ? (pjob->ji_grpcache ? pjob->ji_grpcache->gc_homedir : \"\") : actual_homedir));\n\tcheck_err(__func__, buf, win_pwrite(&cpyinfo->pio, buf, strlen(buf)));\n\n\tsnprintf(buf, sizeof(buf) - 1, \"mom_host=%s\\n\", mom_host);\n\tcheck_err(__func__, buf, win_pwrite(&cpyinfo->pio, buf, strlen(buf)));\n\n\tsnprintf(buf, sizeof(buf) - 1, \"log_file=%s\\n\", (log_file ? log_file : \"\"));\n\tcheck_err(__func__, buf, win_pwrite(&cpyinfo->pio, buf, strlen(buf)));\n\n\tsnprintf(buf, sizeof(buf) - 1, \"log_event_mask=%ld\\n\", *log_event_mask);\n\tcheck_err(__func__, buf, win_pwrite(&cpyinfo->pio, buf, strlen(buf)));\n\n\tsnprintf(buf, sizeof(buf) - 1, \"direct_write=%d\\n\", direct_write);\n\tcheck_err(__func__, buf, win_pwrite(&cpyinfo->pio, buf, strlen(buf)));\n\n\tsend_pcphosts(&cpyinfo->pio, pcphosts);\n\n\tif (!send_rq_cpyfile_cred(&cpyinfo->pio, rqcpf)) {\n\t\tlog_err(-1, __func__, \"Failed to send data\");\n\t}\n\n\tsnprintf(buf, sizeof(buf) - 1, \"quit\\n\");\n\tcheck_err(__func__, buf, win_pwrite(&cpyinfo->pio, buf, strlen(buf)));\n\n\tchdir(mom_home);\n}\n\n/**\n * @brief\n *\tdelete the specified output/staged files\n *\t<Windows version>\n *\n * @param[in]\tpreq - pointer to batch request for delete file\n *\n * @return\tvoid\n *\n */\n\nvoid\nreq_delfile(struct batch_request *preq)\n{\n\tint rc = 0;\n\tstruct rq_cpyfile *rqcpf = NULL;\n\tjob *pjob = NULL;\n\tchar *bad_list = NULL;\n\tHANDLE hUser = INVALID_HANDLE_VALUE;\n\n\tif (preq->rq_type == PBS_BATCH_DelFiles_Cred)\n\t\trqcpf = &preq->rq_ind.rq_cpyfile_cred.rq_copyfile;\n\telse\n\t\trqcpf = &preq->rq_ind.rq_cpyfile;\n\n\tpjob = find_job(rqcpf->rq_jobid);\n\tif (pjob) {\n\t\t/*\n\t\t * check to see if there is any copy request pending\n\t\t * for this 
job ?\n\t\t */\n\t\tif (get_copyinfo_from_list(rqcpf->rq_jobid) != NULL) {\n\t\t\t/*\n\t\t\t * we have copy request pending so we\n\t\t\t * need to first process the post_cpyfile\n\t\t\t * request before starting this one.\n\t\t\t * Tell the server to try again later.\n\t\t\t */\n\t\t\treq_reject(PBSE_TRYAGAIN, 0, preq);\n\t\t\treturn;\n\t\t}\n\n\t\t/*\n\t\t * Once a job starts file processing, the checkpoint\n\t\t * flags need to be turned off so a restart cannot\n\t\t * send us back to the future.\n\t\t */\n\t\tif (pjob->ji_qs.ji_svrflags & (JOB_SVFLG_CHKPT | JOB_SVFLG_ChkptMig)) {\n\t\t\tpjob->ji_qs.ji_svrflags &= ~(JOB_SVFLG_CHKPT | JOB_SVFLG_ChkptMig);\n\t\t\t(void) job_save(pjob);\n\t\t}\n\n\t\tif (check_job_substate(pjob, JOB_SUBSTATE_OBIT)) {\n\t\t\t/* change substate so Mom doesn't send another obit\n\t\t\t * do not record to disk, so Obit is resent on recovery\n\t\t\t */\n\t\t\tset_job_substate(pjob, JOB_SUBSTATE_EXITED);\n\t\t}\n\t}\n\n\thUser = fork_to_user(preq, pjob);\n\tif (hUser == INVALID_HANDLE_VALUE) {\n\t\treq_reject(PBSE_BADUSER, 0, preq);\n\t\treturn;\n\t}\n\n\t/* Child process ... 
delete the files */\n\n\tif ((rc = del_files(rqcpf, pjob, &bad_list)) != 0) {\n\t\treply_text(preq, rc, bad_list);\n\t\tif (bad_list != NULL) {\n\t\t\tfree(bad_list);\n\t\t\tbad_list = NULL;\n\t\t}\n\t} else\n\t\treply_ack(preq);\n\n\t(void) revert_impersonated_user();\n\tchdir(mom_home);\n}\n\n#else /* UNIX---------------------------------------------------------------*/\n/**\n * @brief\n * \tDo post cpyfile processing and cleanup in case of tpp connection\n * \tand for stagein when the job is not yet available at the server\n *\n * @param[in]\tptask - Work task\n *\n * @return \tnone\n *\n */\n\nstatic void\npost_cpyfile_nojob(struct work_task *ptask)\n{\n\tstruct batch_request *preq = ptask->wt_parm1;\n\tif (preq == NULL)\n\t\treturn;\n\n\tif (ptask->wt_aux != 0)\n\t\treq_reject(PBSE_NOCOPYFILE, 0, preq);\n\telse\n\t\treply_ack(preq);\n}\n\n/**\n * @brief\n * \tDo post cpyfile processing and cleanup\n * @par\n * \tCalled when child process started in req_cpyfile() on\n *\tstageout only.\n *\tIf it had a major failure, resend obit to server, otherwise set\n *\tsubstate back to OBIT\n *\n * @param[in]\tpjob - pointer to the job structure\n * @param[in]\tev - exit value of the child process\n *\n * @return \tnone\n *\n */\n\nstatic void\npost_cpyfile(job *pjob, int ev)\n{\n\tif (pjob == NULL)\n\t\treturn;\n\n\tpjob->ji_mompost = NULL;\n\tif (ev != 0) {\n\t\tif (pjob->ji_preq)\n\t\t\treq_reject(PBSE_NOCOPYFILE, 0, pjob->ji_preq);\n\t\tpjob->ji_preq = NULL;\n\t\tif ((is_jattr_set(pjob, JOB_ATR_sandbox)) &&\n\t\t    (strcasecmp(get_jattr_str(pjob, JOB_ATR_sandbox), \"PRIVATE\") == 0) &&\n\t\t    (ev == STAGEOUT_FAILURE)) {\n\t\t\t/* We are in sandbox=private mode and there was */\n\t\t\t/* a stageout failure */\n\t\t\t/* Set the flag to show the stageout failure */\n\t\t\tpjob->ji_qs.ji_svrflags |= JOB_SVFLG_StgoFal;\n\t\t} else {\n\t\t\t/* child that was doing file copies had major error */\n\t\t\t/* was killed or crashed,  resend obit to restart   
*/\n\t\t\tsend_obit(pjob, 0);\n\t\t\treturn;\n\t\t}\n\t} else {\n\t\tif (pjob->ji_preq)\n\t\t\treply_ack(pjob->ji_preq);\n\t\tpjob->ji_preq = NULL;\n\t\t/* reset substate to OBIT,  if server doesn't move  */\n\t\t/* on to next step in End of Job processing quickly */\n\t\t/* we will resend obit, see mom_main.c              */\n\t\tset_job_substate(pjob, JOB_SUBSTATE_OBIT);\n\t\tpjob->ji_sampletim = time(0);\n\t}\n}\n\n/**\n * @brief\n * \treq_cpyfile - process the Copy Files request from the server to dispose\n *\tof output from the job.  This is done by a child of MOM since it\n *\tmight take time.\n *\n *\tUNIX version\n *\n *\tThe supplied PBS means of moving the file is by \"rcp\".\n * \tA site may wish to change this.\n *\n * @param[in] preq - pointer to batch_request structure\n *\n * @return\tVoid\n *\n */\n\nvoid\nreq_cpyfile(struct batch_request *preq)\n{\n\tjob *pjob;\n\tstruct rq_cpyfile *rqcpf;\n\ttime_t copy_start;\n\ttime_t copy_stop;\n\tint num_copies = 0;\n\tint dir;\n\tstruct passwd *pwdp;\n\tstruct group *grpp;\n\tuid_t useruid = 0;\n\tgid_t usergid = 0;\n\tint rc;\n\tpid_t pid;\n\tstruct rqfpair *pair;\n\tint rmtflag;\n\tcpy_files stage_inout;\n\tchar *prmt;\n\tchar dup_rqcpf_jobid[PBS_MAXSVRJOBID + 1];\n\tstruct work_task *wtask = NULL;\n\tint tot_copies = 0;\n\tbool copy_failed = FALSE;\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\tstruct krb_holder *ticket = NULL;\n\tchar *krbccname = NULL;\n#endif\n\n\tif (mock_run) {\n\t\t/*\n\t\t * in mock run we don't have any files to copy back,\n\t\t * so just ack request to make server happy\n\t\t * and return\n\t\t */\n\t\treply_ack(preq);\n\t\treturn;\n\t}\n\n\tDBPRT((\"%s: entered\\n\", __func__))\n\n\tif (preq->rq_type == PBS_BATCH_CopyFiles_Cred)\n\t\trqcpf = &preq->rq_ind.rq_cpyfile_cred.rq_copyfile;\n\telse\n\t\trqcpf = &preq->rq_ind.rq_cpyfile;\n\n\tstage_inout.stageout_failed = FALSE;\n\tstage_inout.bad_files = 0;\n\tstage_inout.file_num = 0;\n\tstage_inout.file_max = 
0;\n\tstage_inout.file_list = NULL;\n\tstage_inout.bad_list = NULL;\n\tpjob = find_job(rqcpf->rq_jobid);\n\tif (pjob) {\n\t\t/*\n\t\t ** Once a job starts file processing, the checkpoint\n\t\t ** flags need to be turned off so a restart cannot\n\t\t ** send us back to the future.\n\t\t */\n\t\tif (pjob->ji_qs.ji_svrflags &\n\t\t    (JOB_SVFLG_CHKPT | JOB_SVFLG_ChkptMig)) {\n\t\t\tpjob->ji_qs.ji_svrflags &=\n\t\t\t\t~(JOB_SVFLG_CHKPT | JOB_SVFLG_ChkptMig);\n\t\t\t(void) job_save(pjob);\n\t\t}\n\t\t/* change substate so Mom doesn't send another obit     */\n\t\t/* do not record to disk, so Obit is resent on recovery */\n\t\tif (check_job_substate(pjob, JOB_SUBSTATE_OBIT))\n\t\t\tset_job_substate(pjob, JOB_SUBSTATE_EXITED);\n\t}\n\n\tdir = (rqcpf->rq_dir & STAGE_DIRECTION) ? STAGE_DIR_OUT : STAGE_DIR_IN;\n\tstage_inout.sandbox_private = (rqcpf->rq_dir & STAGE_JOBDIR) ? TRUE : FALSE;\n\n\t/* Call getpwnam for user info */\n\tpwdp = getpwnam(rqcpf->rq_user);\n\tif (pwdp != NULL) {\n\t\tpbs_jobdir = jobdirname(rqcpf->rq_jobid, pwdp->pw_dir);\n\t} else {\n\t\tsprintf(log_buffer, \"unable to find a password entry\");\n\t\tlog_joberr(errno, __func__, log_buffer, rqcpf->rq_jobid);\n\t\treq_reject(PBSE_BADUSER, 0, preq);\n\t\treturn;\n\t}\n\n\tif ((dir == STAGE_DIR_IN) && stage_inout.sandbox_private) {\n\t\t/* Need to look up the uid, gid */\n\t\tif (pwdp == NULL) {\n\t\t\treq_reject(PBSE_BADUSER, 0, preq);\n\t\t\treturn;\n\t\t}\n\t\tuseruid = pwdp->pw_uid;\n\n\t\tif (rqcpf->rq_group[0] == '\\0') {\n\t\t\tusergid = pwdp->pw_gid; /* default to login group */\n\t\t} else {\n\t\t\tif ((grpp = getgrnam(rqcpf->rq_group)) == NULL) {\n\t\t\t\treq_reject(PBSE_BADUSER, 0, preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tusergid = grpp->gr_gid;\n\t\t}\n\t\t/* Create PBS_JOBDIR  and no change of environment */\n\t\trc = mkjobdir(rqcpf->rq_jobid, pbs_jobdir, useruid, usergid);\n\n\t\tif (rc != 0) {\n\t\t\tsprintf(log_buffer, \"unable to create the job directory %s\", 
pbs_jobdir);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\treq_reject(PBSE_MOMREJECT, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tif ((pjob != NULL) && (dir == STAGE_DIR_OUT) && direct_write_requested(pjob))\n\t\tstage_inout.direct_write = 1;\n\telse\n\t\tstage_inout.direct_write = 0;\n\n\t\t/* Become the user */\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\tticket = alloc_ticket();\n\tpid = fork_to_user(preq, pjob, ticket);\n#else\n\tpid = fork_to_user(preq, pjob);\n#endif\n\trc = (int) pid;\n\tif (pid > 0) {\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\t\tfree_ticket(ticket, CRED_CLOSE);\n#endif\n\n\t\tif (pjob) {\n\t\t\t/* change substate so Mom doesn't send another obit     */\n\t\t\t/* do not record to disk, so Obit is resent on recovery */\n\t\t\tif (check_job_substate(pjob, JOB_SUBSTATE_OBIT))\n\t\t\t\tset_job_substate(pjob, JOB_SUBSTATE_EXITED);\n\t\t\tpjob->ji_momsubt = pid;\n\t\t\tpjob->ji_mompost = post_cpyfile;\n\t\t\tif (preq->prot == PROT_TPP)\n\t\t\t\tpjob->ji_preq = preq; /* keep the batch request pointer */\n\t\t} else {\n\t\t\tif (preq->prot == PROT_TPP) {\n\t\t\t\t/* there is no job yet, so cant hang this post function to job\n\t\t\t\t * but this is tpp based connection, so we cannot reply in child\n\t\t\t\t * lets hang the preq in a work task\n\t\t\t\t */\n\t\t\t\twtask = set_task(WORK_Deferred_Child, pid, post_cpyfile_nojob, preq);\n\t\t\t\tif (!wtask) {\n\t\t\t\t\tlog_err(errno, __func__, \"Failed to create deferred work task, Out of memory\");\n\t\t\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\treturn; /* parent - continue with someother task */\n\t} else if (rc < 0) {\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\t\tfree_ticket(ticket, CRED_DESTROY);\n#endif\n\n\t\treq_reject(-rc, 0, preq);\n\t\treturn;\n\t}\n\n\t/* chdir to job pbs_jobdir directory if \"sandbox=PRIVATE\" mode is requested */\n\tif (stage_inout.sandbox_private) {\n\t\tif (chdir(pbs_jobdir) == -1) 
\n\t\t\tlog_errf(-1, __func__, \"chdir failed. ERR : %s\", strerror(errno));\n\t}\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\tkrbccname = get_ticket_ccname(ticket);\n\tif (krbccname != NULL)\n\t\tsetenv(\"KRB5CCNAME\", krbccname, 1);\n#endif\n\n\t/*\n\t * Child process ...\n\t * Now running in the user's home or job staging and execution directory as the user.\n\t * Build up cp/rcp command(s), one per file pair\n\t */\n\n\tcopy_start = time(0);\n\tfor (pair = (struct rqfpair *) GET_NEXT(rqcpf->rq_pair);\n\t     pair != 0;\n\t     pair = (struct rqfpair *) GET_NEXT(pair->fp_link), tot_copies++) {\n\t\tif (copy_failed)\n\t\t\tcontinue;\n\t\tDBPRT((\"%s: local %s remote %s\\n\", __func__, pair->fp_local, pair->fp_rmt))\n\n\t\tstage_inout.from_spool = 0;\n\t\tprmt = pair->fp_rmt;\n\n\t\tif (local_or_remote(&prmt) == 0) {\n\t\t\t/* destination host is this host, use cp */\n\t\t\trmtflag = 0;\n\t\t} else {\n\t\t\t/* destination host is another, use (pbs_)rcp */\n\t\t\trmtflag = 1;\n\t\t}\n\n\t\trc = stage_file(dir, rmtflag, rqcpf->rq_owner,\n\t\t\t\tpair, preq->rq_conn, &stage_inout, prmt, rqcpf->rq_jobid);\n\t\t/*\n\t\t ** Here we stop copying on error; the remaining pairs are\n\t\t ** skipped, so tot_copies still counts the full list.\n\t\t ** This will only happen on a stagein failure.\n\t\t */\n\t\tif (rc != 0) {\n\t\t\tcopy_failed = TRUE;\n\t\t\tcontinue;\n\t\t}\n\t\tnum_copies++;\n\t}\n\tcopy_stop = time(0);\n\n\t/* If there was a stage in failure, remove the job directory.\n\t * There is no guarantee we'll run on this mom again,\n\t * so we need to clean up.\n\t */\n\tif ((dir == STAGE_DIR_IN) && stage_inout.sandbox_private && stage_inout.bad_files) {\n\t\t/* cd to user's home to be out of   */\n\t\t/* the sandbox so it can be deleted */\n\t\tif (chdir(pwdp->pw_dir) == -1)\n\t\t\tlog_errf(-1, __func__, \"chdir failed. 
ERR : %s\", strerror(errno));\t\t\t\n\t\trmjobdir(rqcpf->rq_jobid, pbs_jobdir, useruid, usergid, 0);\n\t}\n\n\tpbs_strncpy(dup_rqcpf_jobid, rqcpf->rq_jobid, sizeof(dup_rqcpf_jobid));\n\tif (preq->prot == PROT_TCP) {\n\t\tif (stage_inout.bad_files) {\n\t\t\treply_text(preq, PBSE_NOCOPYFILE, stage_inout.bad_list);\n\t\t} else {\n\t\t\treply_ack(preq);\n\t\t}\n\t} else {\n\t\tif (stage_inout.bad_files) {\n\t\t\tchar *token = NULL;\n\t\t\tchar *rest = stage_inout.bad_list;\n\t\t\tchar *save_ptr = NULL;\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\t  dup_rqcpf_jobid, \"Job files not copied:---->>>>\");\n\t\t\ttoken = strtok_r(rest, \"\\n\", &save_ptr);\n\t\t\twhile (token != NULL) {\n\t\t\t\tchar *temp_buff = NULL;\n\t\t\t\tif ((pbs_asprintf(&temp_buff, \"%s\\n\", token)) != -1) {\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\t\t\t  dup_rqcpf_jobid, temp_buff);\n\t\t\t\t\tfree(temp_buff);\n\t\t\t\t\ttemp_buff = NULL;\n\t\t\t\t} else\n\t\t\t\t\tbreak;\n\t\t\t\ttoken = strtok_r(NULL, \"\\n\", &save_ptr);\n\t\t\t}\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\t  dup_rqcpf_jobid, \"---->>>>\");\n\t\t}\n\t}\n\n\t/* log the number of files/directories copied and the time it took */\n\tcopy_stop = copy_stop - copy_start;\n\n#ifdef NAS /* localmod 005 */\n\tsprintf(log_buffer, \"Staged %d/%d items %s over %ld:%02ld:%02ld\",\n\t\tnum_copies, tot_copies, (dir == STAGE_DIR_OUT) ? \"out\" : \"in\",\n\t\t(long) copy_stop / 3600, ((long) copy_stop % 3600) / 60,\n\t\t(long) copy_stop % 60);\n#else\n\tsprintf(log_buffer, \"Staged %d/%d items %s over %d:%02d:%02d\",\n\t\tnum_copies, tot_copies, (dir == STAGE_DIR_OUT) ? 
\"out\" : \"in\",\n\t\t(int) copy_stop / 3600, ((int) copy_stop % 3600) / 60,\n\t\t(int) copy_stop % 60);\n#endif /* localmod 005 */\n\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t  dup_rqcpf_jobid, log_buffer);\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\tfree_ticket(ticket, CRED_DESTROY);\n#endif\n\n\tif (preq->prot == PROT_TPP && stage_inout.bad_files)\n\t\texit(STAGEOUT_FAILURE);\n\n\tif (stage_inout.sandbox_private && stage_inout.stageout_failed) {\n\t\texit(STAGEOUT_FAILURE);\n\t}\n\n\texit(0); /* remember, we are the child, exit not return */\n}\n\n/**\n * @brief\n * \tDo post delete file processing and cleanup.\n *\n * @par\n * \tCalled when the child process started in req_delfile() exits.\n * \tIf it had a major failure, resend obit to server, otherwise\n * \tset substate back to OBIT\n *\tThis is only called in the UNIX code\n *\n * @param[in]\tpjob - pointer to the job structure\n * @param[in]\tev   - exit value of the chile process\n *\n * @return\tnone\n *\n */\nstatic void\npost_delfile(job *pjob, int ev)\n{\n\tif (pjob == NULL)\n\t\treturn;\n\tpjob->ji_mompost = NULL;\n\n\tif (pjob->ji_preq)\n\t\treply_ack(pjob->ji_preq);\n\tpjob->ji_preq = NULL;\n\n\tif (ev == 0) {\n\t\t/* reset substate to OBIT,  if server doesn't move  */\n\t\t/* on to next step in End of Job processing quickly */\n\t\t/* we will resend obit, see mom_main.c              */\n\t\tset_job_substate(pjob, JOB_SUBSTATE_OBIT);\n\t\tpjob->ji_sampletim = time(0);\n\t} else {\n\t\t/* child that was doing file copies had major error */\n\t\t/* was killed or crashed,  resend obit to restart   */\n\t\tsend_obit(pjob, 0);\n\t}\n}\n\n/**\n * @brief\n * \treq_delfile - delete the specifled output/staged files\n *\n * UNIX version\n *\n * @param[in] preq - pointer to batch_request structure\n *\n * @return\tVoid\n *\n */\nvoid\nreq_delfile(struct batch_request *preq)\n{\n\tint rc;\n\tpid_t pid;\n\tjob *pjob;\n\tchar *bad_list = NULL;\n#if defined(PBS_SECURITY) && 
(PBS_SECURITY == KRB5)\n\tstruct krb_holder *ticket = NULL;\n\tchar *krbccname = NULL;\n#endif\n\n\tif (mock_run) {\n\t\t/*\n\t\t * in a mock run we don't have any files to delete,\n\t\t * so just ack the request to make the server happy\n\t\t * and return\n\t\t */\n\t\treply_ack(preq);\n\t\treturn;\n\t}\n\n\tpjob = find_job(preq->rq_ind.rq_cpyfile.rq_jobid);\n\tif (pjob) {\n\t\t/*\n\t\t * Check to see if the post_cpyfile has already been\n\t\t * processed.  If it has been processed, momsubt == 0\n\t\t */\n\t\tif (pjob->ji_momsubt != 0 && pjob->ji_mompost == post_cpyfile) {\n\t\t\t/* Need to first process the post_cpyfile\n\t\t\t * request before starting this one.\n\t\t\t * Tell the server to try again later.\n\t\t\t */\n\t\t\treq_reject(PBSE_TRYAGAIN, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tif (pjob) {\n\t\tpjob->ji_preq = NULL;\n\t\tif (preq->prot == PROT_TPP)\n\t\t\tpjob->ji_preq = preq; /* keep the batch request pointer */\n\t}\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\tticket = alloc_ticket();\n\tif ((pid = fork_to_user(preq, pjob, ticket)) > 0)\n#else\n\tif ((pid = fork_to_user(preq, pjob)) > 0)\n#endif\n\t{\n\t\t/* parent */\n\t\tif (pjob) {\n\t\t\tpjob->ji_momsubt = pid;\n\t\t\tpjob->ji_mompost = post_delfile;\n\t\t\tpjob->ji_sampletim = time(0);\n\t\t\tset_job_substate(pjob, JOB_SUBSTATE_EXITED);\n\t\t}\n\t\treturn; /* parent - continue with some other task */\n\t} else if (pid < 0) {\n\t\treq_reject(-(int) pid, 0, preq);\n\t\treturn;\n\t}\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\tkrbccname = get_ticket_ccname(ticket);\n\tif (krbccname != NULL)\n\t\tsetenv(\"KRB5CCNAME\", krbccname, 1);\n#endif\n\n\t/* Child process ... 
delete the files */\n\n\trc = del_files(&(preq->rq_ind.rq_cpyfile), pjob, &bad_list);\n\tif (rc != 0) {\n\t\tif (preq->prot == PROT_TCP) {\n\t\t\treply_text(preq, rc, bad_list);\n\t\t} else {\n\t\t\tchar *token = NULL;\n\t\t\tchar *rest = bad_list;\n\t\t\tchar *save_ptr = NULL;\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, preq->rq_ind.rq_cpyfile.rq_jobid, \"Job files not deleted:---->>>>\");\n\t\t\ttoken = strtok_r(rest, \"\\n\", &save_ptr);\n\t\t\twhile (token != NULL) {\n\t\t\t\tchar *temp_buff = NULL;\n\t\t\t\tif ((pbs_asprintf(&temp_buff, \"%s\\n\", token)) != -1) {\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, preq->rq_ind.rq_cpyfile.rq_jobid, temp_buff);\n\t\t\t\t\tfree(temp_buff);\n\t\t\t\t\ttemp_buff = NULL;\n\t\t\t\t} else\n\t\t\t\t\tbreak;\n\t\t\t\ttoken = strtok_r(NULL, \"\\n\", &save_ptr);\n\t\t\t}\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, preq->rq_ind.rq_cpyfile.rq_jobid, \"---->>>>\");\n\t\t}\n\t} else {\n\t\tif (preq->prot == PROT_TCP)\n\t\t\treply_ack(preq);\n\t}\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\tfree_ticket(ticket, CRED_DESTROY);\n#endif\n\n\texit(0); /* remember, we are the child, exit not return */\n}\n#endif /* WIN32/UNIX ------------------------------------------------------- */\n\n/**\n * @brief\n * \tCheckpoint the job.\n *\n *\tIf abort is TRUE, kill it too.  
Return a PBS error code.\n *\tDone in a child of MOM.\n *\n * @param[in] pjob - job pointer\n * @param[in] abort - indiacation whether abort true or false\n *\n * @return\tPBSerrorcode\n * @retval\t0\t\tno error\n * @retval\t!0\t\terror\n *\n */\n\nint\nmom_checkpoint_job(job *pjob, int abort)\n{\n\tint hasold = 0;\n\tint ckerr = ENOENT;\n\tstruct stat statbuf;\n\tchar path[MAXPATHLEN + 1];\n\tchar oldp[MAXPATHLEN + 1];\n\tchar file[MAXPATHLEN + 1], *name;\n\tint filelen;\n\tpbs_task *ptask;\n\tchar *cwdname = NULL;\n\tstruct passwd *pwdp = NULL;\n\n\tassert(pjob != NULL);\n\n\tDBPRT((\"mom_checkpoint_job: %s %s abort\\n\", pjob->ji_qs.ji_jobid,\n\t       abort ? \"with\" : \"no\"))\n\n\tpbs_strncpy(path, path_checkpoint, sizeof(path));\n\tif (*pjob->ji_qs.ji_fileprefix != '\\0')\n\t\tstrcat(path, pjob->ji_qs.ji_fileprefix);\n\telse\n\t\tstrcat(path, pjob->ji_qs.ji_jobid);\n\tstrcat(path, JOB_CKPT_SUFFIX);\n\n\tif (stat(path, &statbuf) == 0) {\n\t\t(void) strcpy(oldp, path); /* file already exists, rename it */\n\t\t(void) strcat(oldp, \".old\");\n\t\tif (rename(path, oldp) < 0)\n#ifdef WIN32\n\t\t\treturn 73;\n#else\n\t\t\treturn errno;\n#endif\n\t\thasold = 1;\n\t}\n\n\tif (mkdir(path, 0755) == -1) {\n\t\tckerr = errno;\n\t\tgoto checkpoint_fail;\n\t}\n\n\tfilelen = strlen(path);\n\tstrcpy(file, path);\n\tname = &file[filelen];\n\n\t/* Change to user's home to pick up .cpr */\n#ifdef WIN32\n\tif ((cwdname = getcwd(NULL, _MAX_PATH + 2)) != NULL) {\n\t\tif ((is_jattr_set(pjob, JOB_ATR_sandbox)) &&\n\t\t    (strcasecmp(get_jattr_str(pjob, JOB_ATR_sandbox), \"PRIVATE\") == 0)) {\n\t\t\t/* \"sandbox=PRIVATE\" mode is enabled, so restart job in PBS_JOBDIR */\n\t\t\tpwdp = getpwnam(get_jattr_str(pjob, JOB_ATR_euser));\n\t\t\tif (pwdp != NULL) {\n\t\t\t\t(void) chdir(jobdirname(pjob->ji_qs.ji_jobid,\n\t\t\t\t\t\t\tsave_actual_homedir(pwdp, pjob)));\n\t\t\t}\n\t\t} else {\n\t\t\tpwdp = getpwnam(get_jattr_str(pjob, JOB_ATR_euser));\n\t\t\tif (pwdp != NULL)\n\t\t\t\t(void) 
chdir(save_actual_homedir(pwdp, pjob));\n\t\t}\n\t}\n#else\n\tif ((cwdname = getcwd(NULL, 0)) != NULL) {\n\t\tif ((is_jattr_set(pjob, JOB_ATR_sandbox)) &&\n\t\t    (strcasecmp(get_jattr_str(pjob, JOB_ATR_sandbox), \"PRIVATE\") == 0)) {\n\t\t\t/* \"sandbox=PRIVATE\" mode is enabled, so restart job in PBS_JOBDIR */\n\t\t\tpwdp = getpwnam(get_jattr_str(pjob, JOB_ATR_euser));\n\t\t\tif (pwdp != NULL) {\n\t\t\t\tif (chdir(jobdirname(pjob->ji_qs.ji_jobid, pwdp->pw_dir)) == -1) \n\t\t\t\t\tlog_errf(-1, __func__, \"chdir failed. ERR : %s\", strerror(errno));\t\t\t\t\t\n\t\t\t}\n\t\t} else {\n\t\t\tpwdp = getpwnam(get_jattr_str(pjob, JOB_ATR_euser));\n\t\t\tif (pwdp != NULL)\n\t\t\t\tif (chdir(pwdp->pw_dir) == -1) \n\t\t\t\t\tlog_errf(-1, __func__, \"chdir failed. ERR : %s\", strerror(errno));\n\t\t}\n\t}\n#endif\n\n\terrno = 0;\n\tfor (ptask = (pbs_task *) GET_NEXT(pjob->ji_tasks);\n\t     ptask != NULL;\n\t     ptask = (pbs_task *) GET_NEXT(ptask->ti_jobtask)) {\n\t\tint i;\n\n\t\tif (ptask->ti_qs.ti_status != TI_STATE_RUNNING)\n\t\t\tcontinue;\n\t\tsprintf(name, task_fmt, ptask->ti_qs.ti_task);\n\n\t\t/*\n\t\t **\tTry action script with no post function.\n\t\t */\n\t\ti = do_mom_action_script(abort ? ChkptAbtAction : ChkptAction,\n\t\t\t\t\t pjob, ptask, file, NULL);\n\t\tif (i != 0) { /* script didn't work */\n\t\t\t/* if there is no script, try native support */\n\t\t\tif (i == -2)\n\t\t\t\ti = mach_checkpoint(ptask, file, abort);\n\t\t\tif (i != 0) /* nothing worked */\n\t\t\t\tgoto checkpoint_fail;\n\t\t}\n\t\tif (stat(file, &statbuf) == -1) { /* no file created */\n\t\t\tint fd;\n\n\t\t\t/*\n\t\t\t ** create a zero len file to mark checkpoint\n\t\t\t */\n\t\t\tfd = open(file, O_CREAT | O_TRUNC | O_WRONLY, 0600);\n\t\t\tif (fd == -1)\n\t\t\t\tgoto errout;\n\t\t\tclose(fd);\n\t\t}\n\t}\n\n\t/* Checkpoint successful */\n\t/* return to MOM's rightful lair */\n\tif (cwdname) {\n\t\tif (chdir(cwdname) == -1) \n\t\t\tlog_errf(-1, __func__, \"chdir failed. 
ERR : %s\", strerror(errno));\t\t\n\t\tfree(cwdname);\n\t}\n\n\tsprintf(log_buffer, \"checkpointed to %s\", path);\n\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\tif (hasold)\n\t\t(void) remtree(oldp);\n\n\treturn 0;\n\ncheckpoint_fail:\n\tswitch (errno) {\n#ifdef ERFLOCK\n\t\tcase ERFLOCK:\n#endif\n#ifdef EQUSR\n\t\tcase EQUSR:\n#endif\n#ifdef EQGRP\n\t\tcase EQGRP:\n#endif\n#ifdef EQACT\n\t\tcase EQACT:\n#endif\n#ifdef ENOSDS\n\t\tcase ENOSDS:\n#endif\n\t\tcase EAGAIN:\n\t\tcase ENOMEM:\n\t\tcase ENOLCK:\n\t\tcase ENOSPC:\n\t\tcase ENFILE:\n\t\tcase EDEADLK:\n\t\tcase EBUSY:\n\t\t\tckerr = EAGAIN;\n\t\t\tbreak;\n\t}\n\nerrout:\n\t/*\n\t ** A checkpoint has failed.  Log and return error.\n\t */\n\tsprintf(log_buffer, \"checkpoint failed: errno=%d\", errno);\n\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\n\t/* return to MOM's rightful lair */\n\tif (cwdname) {\n\t\tif (chdir(cwdname) == -1) \n\t\t\tlog_errf(-1, __func__, \"chdir failed. ERR : %s\", strerror(errno));\t\t\n\t\tfree(cwdname);\n\t}\n\n\t/*\n\t ** See if any checkpoints worked and abort is set.\n\t ** If so, we need to restart these tasks so the whole job is\n\t ** still running.  
This has to wait until we reap the\n\t ** aborted task(s).\n\t */\n\tif (abort)\n\t\treturn ckerr;\n\n\t/*\n\t ** Clean up files.\n\t */\n\t(void) remtree(path);\n\tif (hasold) {\n\t\tif (rename(oldp, path) == -1) {\n\t\t\tpjob->ji_qs.ji_svrflags &= ~JOB_SVFLG_CHKPT;\n\t\t\t(void) job_save(pjob);\n\t\t}\n\t}\n\treturn ckerr;\n}\n\n/**\n * @brief\n * \tpost processor for start_checkpoint()\n *\n * Called from scan_for_terminated() when found in ji_mompost;\n * This sets the \"has checkpoint image\" bit in the job.\n *\n * @param[in]\tpjob\tjob pointer\n * @param[in]\tev\texit value of checkpoint process\n *\n * @return \tVoid\n *\n */\n\nvoid\npost_chkpt(job *pjob, int ev)\n{\n\tchar path[MAXPATHLEN + 1];\n\tchar oldname[MAXPATHLEN + 1];\n\tstruct stat statbuf;\n\tDIR *dir;\n\tstruct dirent *pdir;\n\ttm_task_id tid;\n\tpbs_task *ptask;\n\tint i;\n\tint abort = pjob->ji_flags & MOM_CHKPT_ACTIVE;\n\n\tDBPRT((\"%s: %s %s abort err %d\\n\", __func__, pjob->ji_qs.ji_jobid,\n\t       abort ? \"with\" : \"no\", ev))\n\n\tif (ev != 0) {\n\t\t/* checkpoint action exited with an error, set error flag */\n\t\tpjob->ji_flags |= MOM_SISTER_ERR;\n\t\tif (pjob->ji_preq) {\n\t\t\t/* as there is request waiting, reply with error */\n\t\t\treq_reject(PBSE_CKPSHORT, ev, pjob->ji_preq);\n\t\t\t/* and clear request pointer */\n\t\t\tpjob->ji_preq = NULL;\n\t\t}\n\t}\n\n\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) {\n\t\t/*\n\t\t **\tIf I'm MS, I need to check for checkpoint events\n\t\t **\tto see if non-local processing is still going on.\n\t\t */\n\n\t\tif (pjob->ji_momsubt != 0) /* child running */\n\t\t\treturn;\n\n\t\t/*\n\t\t ** See if there are any checkpoint events left\n\t\t ** to wait for.\n\t\t */\n\t\tfor (i = 0; i < pjob->ji_numnodes; i++) {\n\t\t\thnodent *np;\n\t\t\teventent *ep;\n\n\t\t\tnp = &pjob->ji_hosts[i];\n\t\t\tep = (eventent *) GET_NEXT(np->hn_events);\n\t\t\twhile (ep) {\n\t\t\t\tif (ep->ee_command == IM_CHECKPOINT)\n\t\t\t\t\tbreak;\n\t\t\t\tif 
(ep->ee_command == IM_CHECKPOINT_ABORT)\n\t\t\t\t\tbreak;\n\t\t\t\tep = (eventent *) GET_NEXT(ep->ee_next);\n\t\t\t}\n\t\t\tif (ep != NULL)\n\t\t\t\treturn;\n\t\t}\n\t} else\n\t\tpost_reply(pjob, ev);\n\n\t/*\n\t ** No more operations are waiting.\n\t ** Now is the time to clear ji_mompost.\n\t ** Wait to turn off MOM_CHKPT_ACTIVE until scan_for_exiting is done.\n\t */\n\tpjob->ji_mompost = NULL;\n\n\t/*\n\t ** Set the TI_FLAGS_CHKPT flag for each task that was checkpointed.\n\t */\n\tpbs_strncpy(path, path_checkpoint, sizeof(path));\n\tif (*pjob->ji_qs.ji_fileprefix != '\\0')\n\t\tstrcat(path, pjob->ji_qs.ji_fileprefix);\n\telse\n\t\tstrcat(path, pjob->ji_qs.ji_jobid);\n\tstrcat(path, JOB_CKPT_SUFFIX);\n\n\tdir = opendir(path);\n\tif (dir != NULL) {\n\t\twhile (errno = 0, (pdir = readdir(dir)) != NULL) {\n\t\t\tif (pdir->d_name[0] == '.')\n\t\t\t\tcontinue;\n\t\t\ttid = strtoul(pdir->d_name, NULL, 16);\n\t\t\tif (tid == 0)\n\t\t\t\tcontinue;\n\t\t\tptask = task_find(pjob, tid);\n\t\t\tif (ptask == NULL)\n\t\t\t\tcontinue;\n\t\t\tptask->ti_flags |= TI_FLAGS_CHKPT;\n\t\t}\n\t\tif (errno != 0 && errno != ENOENT) {\n\t\t\tsprintf(log_buffer, \"readdir failed for directory :%s\", path);\n\t\t\tlog_joberr(errno, __func__, log_buffer, pjob->ji_qs.ji_jobid);\n\t\t}\n\t\tclosedir(dir);\n\t}\n\n\tif ((pjob->ji_flags & MOM_SISTER_ERR) == 0) {\n\t\t/*\n\t\t **\tEverything worked.  The checkpoint process\n\t\t **\tis done and no IM_CHECKPOINT events are\n\t\t **\toutstanding.  
Any resources owned by the\n\t\t **\tjob should be cleaned up here.\n\t\t **\tIf abort is set the job's tasks will be killed\n\t\t **\tand should be picked up in scan_for_exiting().\n\t\t **\tAny saved batch request will be acked after the\n\t\t **\tobit is sent.\n\t\t */\n\t\tif (abort) {\n\t\t\tmom_hook_input_t *hook_input = NULL;\n\n\t\t\thook_input = (mom_hook_input_t *) malloc(sizeof(mom_hook_input_t));\n\t\t\tif (hook_input) {\n\t\t\t\tmom_hook_input_init(hook_input);\n\t\t\t\thook_input->pjob = pjob;\n\t\t\t}\n\t\t\tif ((hook_input != NULL) && (mom_process_hooks(HOOK_EVENT_EXECJOB_END, PBS_MOM_SERVICE_NAME, mom_host, hook_input, NULL, NULL, 0, 1) == HOOK_RUNNING_IN_BACKGROUND)) {\n\t\t\t\tpjob->ji_hook_running_bg_on = BG_CHECKPOINT_ABORT;\n\t\t\t} else {\n\t\t\t\tfree(hook_input);\n\t\t\t\texiting_tasks = 1;\n\t\t\t\tterm_job(pjob);\n\t\t\t}\n\t\t} else if (pjob->ji_preq) {\n\t\t\t/*\n\t\t\t **\tIf abort is not set and there is a request\n\t\t\t **\tsaved, do an ack for it.  This will be\n\t\t\t **\tthe response for the hold.\n\t\t\t */\n\t\t\treply_ack(pjob->ji_preq);\n\t\t\tpjob->ji_preq = NULL;\n\t\t}\n\t\t/*\n\t\t ** Turn off TI_FLAGS_SAVECKP.\n\t\t */\n\t\tfor (ptask = (pbs_task *) GET_NEXT(pjob->ji_tasks);\n\t\t     ptask != NULL;\n\t\t     ptask = (pbs_task *) GET_NEXT(ptask->ti_jobtask)) {\n\t\t\tptask->ti_flags &= ~TI_FLAGS_SAVECKP;\n\t\t}\n\t\tpjob->ji_qs.ji_svrflags |= JOB_SVFLG_CHKPT;\n\t\t(void) job_save(pjob);\n\t\treturn;\n\t}\n\tpjob->ji_flags &= ~MOM_SISTER_ERR;\n\n\t/*\n\t ** If we get here, an error happened.  
Only try to recover\n\t ** if we had abort set.\n\t */\n\tif (pjob->ji_qs.ji_un.ji_momt.ji_exitstat == JOB_EXEC_CHKP)\n\t\tpjob->ji_qs.ji_un.ji_momt.ji_exitstat = 0;\n\n\t/*\n\t ** If abort is on, I'm MS and there is a sisterhood, send restart.\n\t */\n\tif (abort && (pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) &&\n\t    pjob->ji_numnodes > 1) {\n\t\tint i;\n\n\t\tpjob->ji_mompost = post_restart;\n\t\ti = send_sisters(pjob, IM_RESTART, NULL);\n\n\t\tif (i != (pjob->ji_numnodes - 1)) {\n\t\t\tlog_joberr(errno, __func__, \"could not send restart\",\n\t\t\t\t   pjob->ji_qs.ji_jobid);\n\t\t\t(void) kill_job(pjob, SIGKILL);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tfor (ptask = (pbs_task *) GET_NEXT(pjob->ji_tasks);\n\t     ptask != NULL;\n\t     ptask = (pbs_task *) GET_NEXT(ptask->ti_jobtask)) {\n\t\tif (ptask->ti_flags & TI_FLAGS_CHKPT)\n\t\t\tbreak;\n\t}\n\n\t/*\n\t ** If any tasks were checkpointed and abort was set, set a flag for\n\t ** scan_for_exiting() to be able to deal with a failed checkpoint.\n\t */\n\tif (ptask != NULL && abort) {\n\t\tpjob->ji_flags |= MOM_CHKPT_POST;\n\t\treturn;\n\t}\n\n\t/*\n\t ** No tasks were checkpointed.\n\t ** Get rid of incomplete checkpoint directory and\n\t ** move old chkpt dir back to regular if it exists.\n\t */\n\t(void) remtree(path);\n\tstrcpy(oldname, path);\n\tstrcat(oldname, \".old\");\n\tif (stat(oldname, &statbuf) == 0) {\n\t\tif (rename(oldname, path) == -1) {\n\t\t\tpjob->ji_qs.ji_svrflags &= ~JOB_SVFLG_CHKPT;\n\t\t}\n\t}\n\n\t/*\n\t ** Set TI_FLAGS_CHKPT back on if it was on before this attempt\n\t ** started.  
Turn off TI_FLAGS_SAVECKP.\n\t */\n\tfor (ptask = (pbs_task *) GET_NEXT(pjob->ji_tasks);\n\t     ptask != NULL;\n\t     ptask = (pbs_task *) GET_NEXT(ptask->ti_jobtask)) {\n\t\tif (ptask->ti_flags & TI_FLAGS_SAVECKP)\n\t\t\tptask->ti_flags |= TI_FLAGS_CHKPT;\n\t\tptask->ti_flags &= ~TI_FLAGS_SAVECKP;\n\t}\n\n\t/* clear checkpoint active flag so a following checkpoint can happen */\n\tpjob->ji_flags &= ~MOM_CHKPT_ACTIVE;\n\t(void) job_save(pjob);\n\treturn;\n}\n\nint\nlocal_checkpoint(job *pjob, int abort, struct batch_request *preq) /* may be null */\n{\n\tsvrattrl *pal;\n\tint rc;\n\tattribute tmph;\n\tpbs_task *ptask;\n\tint hok = 1;\n\tpid_t pid;\n\n\tDBPRT((\"local_checkpoint: %s %s abort %s request\\n\",\n\t       pjob->ji_qs.ji_jobid,\n\t       abort ? \"with\" : \"no\", preq ? \"with\" : \"no\"))\n\n\t/* no checkpoint, reject request */\n\trc = (int) (abort ? ChkptAbtAction : ChkptAction);\n\tif ((mom_does_chkpnt == 0) &&\n\t    (mom_action[rc].ma_script == NULL))\n\t\treturn PBSE_NOSUP;\n\n\t/*\n\t **\tCheck to see if anything is going on.\n\t */\n\tif (pjob->ji_momsubt != 0 ||\n\t    pjob->ji_mompost != NULL)\n\t\treturn PBSE_MOMREJECT;\n\n\t/*\n\t **\tReset TI_FLAGS_CHKPT flag for this attempt.\n\t */\n\tfor (ptask = (pbs_task *) GET_NEXT(pjob->ji_tasks);\n\t     ptask != NULL;\n\t     ptask = (pbs_task *) GET_NEXT(ptask->ti_jobtask)) {\n\t\tif (ptask->ti_flags & TI_FLAGS_CHKPT)\n\t\t\tptask->ti_flags |= TI_FLAGS_SAVECKP;\n\t\tptask->ti_flags &= ~TI_FLAGS_CHKPT;\n\t}\n\n\t/* now try to set up as a child of MOM */\n\tpid = fork_me(-1);\n\tif ((pid < 0) && (errno != ENOSYS))\n\t\treturn PBSE_SYSTEM; /* error on fork */\n\n\tif (pid > 0) {\n\t\t/* parent, record pid in job for when child terminates */\n\n\t\tDBPRT((\"local_checkpoint: %s pid %d\\n\", pjob->ji_qs.ji_jobid, pid))\n\t\tpjob->ji_momsubt = pid;\n\t\tpjob->ji_mompost = post_chkpt;\n\t\tpjob->ji_actalarm = 0;\n\n\t\t/*\n\t\t** If we are going to have tasks dying, set a flag.\n\t\t*/\n\t\tif (abort) 
{\n\t\t\tpjob->ji_flags |= MOM_CHKPT_ACTIVE;\n\t\t\tpjob->ji_qs.ji_un.ji_momt.ji_exitstat = JOB_EXEC_CHKP;\n\t\t}\n\t\t(void) job_save(pjob);\n\n\t\treturn PBSE_NONE; /* parent return */\n\t}\n\n\t/* If fork is available, the child does the checkpoint; otherwise it runs in the foreground. */\n\n\tclear_attr(&tmph, &job_attr_def[(int) JOB_ATR_hold]);\n\tif (preq) {\n\t\tpal = (svrattrl *) GET_NEXT(preq->rq_ind.rq_hold.rq_orig.rq_attr);\n\t\tif (pal)\n\t\t\thok = set_attr_generic(&tmph, &job_attr_def[(int) JOB_ATR_hold], pal->al_value, NULL, INTERNAL);\n\t}\n\trc = mom_checkpoint_job(pjob, abort);\n\tif ((rc == 0) && (hok == 0))\n\t\trc = site_mom_postchk(pjob, (int) tmph.at_val.at_long);\n\tif (pid) {\n\t\tpjob->ji_preq = preq;\n\t\tif (abort) {\n\t\t\tpjob->ji_flags |= MOM_CHKPT_ACTIVE;\n\t\t\tpjob->ji_qs.ji_un.ji_momt.ji_exitstat = JOB_EXEC_CHKP;\n\t\t}\n\n\t\t(void) job_save(pjob);\n\n\t\tpost_chkpt(pjob, rc);\n\n\t\treturn (rc);\n\t} else\n\t\texit(rc); /* zero exit tells main chkpnt ok */\n}\n\n/**\n * @brief\n * \tstart_checkpoint - start a checkpoint going\n *\n *\tcheckpoint done from a child because it takes a while\n *\n * @param[in] pjob - pointer to job\n * @param[in] abort - indication of whether to abort (true or false)\n * @param[in] preq - pointer to batch_request structure\n *\n * @return \tint\n * @retval\t0 \tsuccess\n * @retval\t!0\tError\n *\n */\nint\nstart_checkpoint(job *pjob,\n\t\t int abort,\n\t\t struct batch_request *preq) /* may be null */\n{\n\tint rc;\n\n\tDBPRT((\"start_checkpoint: %s %s abort %s request\\n\",\n\t       pjob->ji_qs.ji_jobid,\n\t       abort ? \"with\" : \"no\", preq ? \"with\" : \"no\"))\n\n\tif ((rc = local_checkpoint(pjob, abort, preq)) != PBSE_NONE) {\n\t\treq_reject(rc, errno, preq);\n\t\treturn rc;\n\t}\n\n\t/*\n\t ** If there is a sisterhood, send command.\n\t */\n\tif (pjob->ji_numnodes > 1) {\n\t\tint i;\n\n\t\ti = send_sisters(pjob, abort ? 
IM_CHECKPOINT_ABORT : IM_CHECKPOINT, NULL);\n\n\t\tif (i != (pjob->ji_numnodes - 1)) {\n\t\t\tpjob->ji_flags |= MOM_SISTER_ERR;\n\t\t\treq_reject(PBSE_SYSTEM, errno, preq);\n\t\t\treturn PBSE_SYSTEM;\n\t\t}\n\t}\n\tpjob->ji_preq = preq;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tRestart the job.\n *\tMay be done in a child of MOM.\n *\n * @param[in] pjob - pointer to job\n *\n * @return \tint\n * @retval\t0\tno error\n * @retval\t!0\terror\n *\n */\n\nint\nmom_restart_job(job *pjob)\n{\n\tint i;\n\tint rserr = ENOENT;\n\tchar path[MAXPATHLEN + 1];\n\tchar *filnam;\n\ttm_task_id taskid;\n\tpbs_task *ptask;\n\tint tcount = 0;\n\tstruct stat sbuf;\n\textern pid_t mom_pid;\n\n\t/* changing directory to job user's home */\n\tchar *cwdname = NULL;\n\tstruct passwd *pwdp = NULL;\n\n\tassert(pjob != NULL);\n\tDBPRT((\"%s: %s\\n\", __func__, pjob->ji_qs.ji_jobid))\n\n\t/* perform any site required setup before restart */\n\tif ((i = site_mom_prerst(pjob)) != 0) {\n\t\tsprintf(log_buffer, \"Pre-restart failed: return=%d errno=%d\",\n\t\t\ti, errno);\n\t\trserr = errno;\n\t\tgoto done;\n\t}\n\n\tpbs_strncpy(path, path_checkpoint, sizeof(path));\n\tif (*pjob->ji_qs.ji_fileprefix != '\\0')\n\t\tstrcat(path, pjob->ji_qs.ji_fileprefix);\n\telse\n\t\tstrcat(path, pjob->ji_qs.ji_jobid);\n\tstrcat(path, JOB_CKPT_SUFFIX);\n\n\ti = strlen(path);\n\tfilnam = &path[i];\n\n\t/* Change to user's home or PBS_JOBDIR to pick up .cpr */\n#ifdef WIN32\n\tif ((cwdname = getcwd(NULL, _MAX_PATH + 2)) != NULL) {\n\t\tif ((is_jattr_set(pjob, JOB_ATR_sandbox)) &&\n\t\t    (strcasecmp(get_jattr_str(pjob, JOB_ATR_sandbox), \"PRIVATE\") == 0)) {\n\t\t\t/* \"sandbox=PRIVATE\" mode is enabled, so restart job in PBS_JOBDIR */\n\t\t\tpwdp = getpwnam(get_jattr_str(pjob, JOB_ATR_euser));\n\t\t\tif (pwdp != NULL) {\n\t\t\t\tif (chdir(jobdirname(pjob->ji_qs.ji_jobid,\n\t\t\t\t\tsave_actual_homedir(pwdp, pjob))) == -1) \n\t\t\t\t\tlog_errf(-1, __func__, \"chdir failed. 
ERR : %s\", strerror(errno));\t\t\t\t\n\t\t\t}\n\t\t} else {\n\t\t\tpwdp = getpwnam(get_jattr_str(pjob, JOB_ATR_euser));\n\t\t\tif (pwdp != NULL)\n\t\t\t\tif (chdir(save_actual_homedir(pwdp, pjob)) == -1) \n\t\t\t\t\tlog_errf(-1, __func__, \"chdir failed. ERR : %s\", strerror(errno));\n\t\t}\n\t}\n#else\n\tif ((cwdname = getcwd(NULL, 0)) != NULL) {\n\t\tif ((is_jattr_set(pjob, JOB_ATR_sandbox)) &&\n\t\t    (strcasecmp(get_jattr_str(pjob, JOB_ATR_sandbox), \"PRIVATE\") == 0)) {\n\t\t\t/* \"sandbox=PRIVATE\" mode is enabled, so restart job in PBS_JOBDIR */\n\t\t\tpwdp = getpwnam(get_jattr_str(pjob, JOB_ATR_euser));\n\t\t\tif (pwdp != NULL)\n\t\t\t\tif (chdir(jobdirname(pjob->ji_qs.ji_jobid, pwdp->pw_dir)) == -1) \n\t\t\t\t\tlog_errf(-1, __func__, \"chdir failed. ERR : %s\", strerror(errno));\t\t\n\t\t} else {\n\t\t\tpwdp = getpwnam(get_jattr_str(pjob, JOB_ATR_euser));\n\t\t\tif (pwdp != NULL)\n\t\t\t\tif (chdir(pwdp->pw_dir) == -1) {\n\t\t\t\t\tlog_errf(-1, __func__, \"chdir failed. ERR : %s\", strerror(errno));\t\t\t\n\t\t\t}\n\t\t}\n\t}\n#endif\n\n\tfor (ptask = (pbs_task *) GET_NEXT(pjob->ji_tasks);\n\t     ptask != NULL;\n\t     ptask = (pbs_task *) GET_NEXT(ptask->ti_jobtask)) {\n\n\t\ttaskid = ptask->ti_qs.ti_task;\n\t\tsprintf(filnam, task_fmt, taskid);\n\n\t\t/* check to see if checkpoint file exists */\n\t\tif (stat(path, &sbuf) == -1) {\n\t\t\tif (errno == ENOENT)\n\t\t\t\tcontinue;\n\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"checkpoint path %s stat failed %d\",\n\t\t\t\tpath, errno);\n\t\t\tgoto done;\n\t\t}\n\n\t\t/*\n\t\t **\tTry action script with no post function.\n\t\t */\n\t\ti = do_mom_action_script(RestartAction, pjob, ptask,\n\t\t\t\t\t path, NULL);\n\t\tif (i == 0) { /* script worked */\n\t\t\ttcount++;\n\t\t\tcontinue;\n\t\t}\n\n\t\t/* if there is no script, try native support */\n\t\tif (i == -2) {\n\t\t\ti = mach_restart(ptask, path);\n\t\t\tif (i != -1) { /* it worked */\n\t\t\t\ttcount++;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\t/*\n\t\t\t ** Look to 
see if errno is any of the set\n\t\t\t ** of values that should cause us to return\n\t\t\t ** EAGAIN.  Don't need to do this for action\n\t\t\t ** script since it runs in a child.\n\t\t\t */\n\t\t\tswitch (errno) {\n#ifdef ERFLOCK\n\t\t\t\tcase ERFLOCK:\n#endif\n#ifdef EQUSR\n\t\t\t\tcase EQUSR:\n#endif\n#ifdef EQGRP\n\t\t\t\tcase EQGRP:\n#endif\n#ifdef EQACT\n\t\t\t\tcase EQACT:\n#endif\n#ifdef ENOSDS\n\t\t\t\tcase ENOSDS:\n#endif\n\t\t\t\tcase EAGAIN:\n\t\t\t\tcase ENOMEM:\n\t\t\t\tcase ENOLCK:\n\t\t\t\tcase ENOSPC:\n\t\t\t\tcase ENFILE:\n\t\t\t\tcase EDEADLK:\n\t\t\t\tcase EBUSY:\n\t\t\t\t\trserr = EAGAIN;\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\n\t\tsprintf(log_buffer,\n\t\t\t\"restart of task %8.8X from file %s failed\",\n\t\t\ttaskid, path);\n\t\tgoto done;\n\t}\n\n\tsprintf(log_buffer, \"Restarted %d task(s)\", tcount);\n\trserr = PBSE_NONE;\n\ndone:\n\t/* log if not the main mom */\n\tif (getpid() != mom_pid) {\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t}\n\n\t/* return to MOM's rightful lair */\n\tif (cwdname) {\n\t\tif (chdir(cwdname) == -1) \n\t\t\tlog_errf(-1, __func__, \"chdir failed. 
ERR : %s\", strerror(errno));\t\t\n\t\tfree(cwdname);\n\t}\n\treturn rserr;\n}\n\n/**\n * @brief\n * \tpost_restart - post processor for start_restart()\n *\n *\tCalled from catch_child() when found in ji_mompost.\n *\n * @param[in] pjob - pointer to job\\\n * @param[in] ev -  exit value of the child process\n *\n * @return\tVoid\n *\n */\nvoid\npost_restart(job *pjob, int ev)\n{\n\tpbs_task *ptask;\n\n\tDBPRT((\"post_restart: %s err %d\\n\", pjob->ji_qs.ji_jobid, ev))\n\n\tif (post_action(pjob, IM_RESTART, ev))\n\t\treturn;\n\n\t/*\n\t ** No more operations are waiting.\n\t ** Now is the time to clear ji_mompost.\n\t */\n\tpjob->ji_mompost = NULL;\n\tpjob->ji_flags &= ~MOM_RESTART_ACTIVE;\n\n\tif (pjob->ji_flags & MOM_SISTER_ERR) {\n\t\t/*\n\t\t ** If we get here, an error happened.\n\t\t */\n\t\tset_job_substate(pjob, JOB_SUBSTATE_EXITING);\n\t\texiting_tasks = 1;\n\t\treturn;\n\t}\n\n\t/*\n\t **\tThe restart worked.\n\t */\n\tpjob->ji_flags &= ~MOM_SISTER_ERR;\n\n\t/* reset sample time for cpupercent, to start over */\n\tpjob->ji_sampletim = 0;\n\n\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_Suspend) == 0) {\n\t\t/*\n\t\t ** Set all checkpointed tasks running.\n\t\t */\n\t\tfor (ptask = (pbs_task *) GET_NEXT(pjob->ji_tasks);\n\t\t     ptask != NULL;\n\t\t     ptask = (pbs_task *) GET_NEXT(ptask->ti_jobtask)) {\n\t\t\tif (ptask->ti_flags & TI_FLAGS_CHKPT) {\n\t\t\t\tptask->ti_qs.ti_status = TI_STATE_RUNNING;\n\t\t\t\t/*\n\t\t\t\t * KLUDGE\n\t\t\t\t * The sid for the task is saved as a negative value in\n\t\t\t\t * scan_for_exiting() when it goes into DEAD state. 
We\n\t\t\t\t * need to keep it for the restarted task if a new sid\n\t\t\t\t * has not been generated.\n\t\t\t\t */\n\t\t\t\tif (ptask->ti_qs.ti_sid < 0) {\n\t\t\t\t\tptask->ti_qs.ti_sid =\n\t\t\t\t\t\t-ptask->ti_qs.ti_sid;\n\t\t\t\t}\n\t\t\t\t(void) task_save(ptask);\n\t\t\t}\n\t\t}\n\n\t\tset_job_substate(pjob, JOB_SUBSTATE_RUNNING);\n\t\tstart_walltime(pjob);\n\n\t\tif (mom_get_sample() != PBSE_NONE) {\n\t\t\ttime_resc_updated = time_now;\n\t\t\t(void) mom_set_use(pjob);\n\t\t}\n\t} else {\n\t\tset_job_substate(pjob, JOB_SUBSTATE_SUSPEND);\n\t\tstop_walltime(pjob);\n\t}\n\tif (pjob->ji_preq) {\n\t\t/*\n\t\t **\tIf there is a request saved, do an ack\n\t\t **\tfor it.  This will be the response for\n\t\t **\tthe release.\n\t\t */\n\t\treply_ack(pjob->ji_preq);\n\t\tpjob->ji_preq = NULL;\n\t}\n\n\treturn;\n}\n\nint\nlocal_restart(job *pjob,\n\t      struct batch_request *preq) /* may be null */\n{\n\tpid_t pid;\n\tint rc;\n\tint background = restart_background;\n\n\tDBPRT((\"local_restart: %s %s request\\n\",\n\t       pjob->ji_qs.ji_jobid, preq ? \"with\" : \"without\"))\n\n\t/* no restart, reject request */\n\tif ((mom_does_chkpnt == 0) &&\n\t    (mom_action[RestartAction].ma_script == NULL))\n\t\treturn PBSE_NOSUP;\n\n\t/*\n\t ** If a script is going to transmogrify, don't run in the\n\t ** background, otherwise, do run in the background.\n\t */\n\tif (mom_action[RestartAction].ma_script != NULL)\n\t\tbackground = (restart_transmogrify ? 
FALSE : TRUE);\n\n\tif ((pjob->ji_mompost != NULL) && (pjob->ji_mompost != post_restart))\n\t\treturn PBSE_CKPBSY;\n\n\t/*\n\t * If restart_background is NOT enabled, perform the restart\n\t * in the foreground.\n\t */\n\tif (background == FALSE) {\n\t\trc = mom_restart_job(pjob);\n\n\t\t/* retry for any kind of changeable condition */\n\t\tswitch (rc) {\n\t\t\tcase PBSE_NONE:\n\t\t\t\tbreak;\n\t\t\tcase 75:\n\t\t\t\trc = PBSE_CKPBSY;\n\t\t\t\tpjob->ji_qs.ji_un.ji_momt.ji_exitstat = JOB_EXEC_RETRY;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\trc = PBSE_SYSTEM;\n\t\t\t\tpjob->ji_qs.ji_un.ji_momt.ji_exitstat =\n\t\t\t\t\tJOB_EXEC_BADRESRT;\n\t\t\t\tbreak;\n\t\t}\n\t\t/* post_restart gets called from start_exec or finish_exec */\n\t\treturn rc;\n\t}\n\n\t/*\n\t **\tCheck to see if anything is going on.\n\t */\n\tif (pjob->ji_momsubt != 0)\n\t\treturn PBSE_CKPBSY;\n\n\t/*\n\t * If we get to this point, restart_background is enabled, perform\n\t * the restart as a subtask of MOM.\n\t */\n\tpid = fork_me(-1);\n\tif ((pid < 0) && (errno != ENOSYS))\n\t\treturn PBSE_SYSTEM; /* error on fork */\n\n\tif (pid > 0) {\n\t\t/* parent, record pid in job for when child terminates */\n\n\t\tDBPRT((\"local_restart: %s pid %d\\n\", pjob->ji_qs.ji_jobid, pid))\n\t\tpjob->ji_momsubt = pid;\n\t\tpjob->ji_mompost = post_restart;\n\t\tpjob->ji_actalarm = 0;\n\t\tpjob->ji_flags |= MOM_RESTART_ACTIVE;\n\t\t(void) job_save(pjob);\n\t\treturn PBSE_NONE; /* parent return */\n\t}\n\t/* child - does the restart if fork is available; otherwise it runs in the foreground */\n\trc = mom_restart_job(pjob);\n\tif (pid) {\n\t\tpjob->ji_preq = preq;\n\t\tpost_restart(pjob, rc);\n\t\treturn (rc);\n\t} else\n\t\texit(rc); /* zero exit tells main restart ok */\n}\n\n/**\n * @brief\n *\tParse a resourcedef file and return an array of resource names\n *\n * @param[in] path - Path to a resourcedef file\n *\n * @par The return value is to be freed by the caller using a call to\n * free_str_array()\n *\n * @return \tan array of resource 
names.\n * @retval\tarray of resource names\n * @retval\tNULL on failure\n *\n */\nstatic char **\nget_resources_from_file(char *path)\n{\n\tFILE *fp;\n\tchar line[256];\n\tchar *n;\n\tchar **resources;\n\tint numlines;\n\tint i;\n\n\t/* Assume no resources */\n\tresources = malloc(sizeof(char *));\n\tif (resources == NULL) {\n\t\tlog_err(errno, __func__, MALLOC_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\tresources[0] = NULL;\n\n\t/* Note that the absence of a file is not an error, it means that\n\t * there are no resources defined\n\t */\n\tif ((fp = fopen(path, \"r\")) == NULL) {\n\t\treturn resources;\n\t}\n\n\tfor (numlines = 0; fgets(line, sizeof(line), fp); numlines++)\n\t\t;\n\n\t(void) fseek(fp, 0, SEEK_SET);\n\n\t/* now that we have the number of lines, we allocate an\n\t * array large enough to hold the resource names\n\t */\n\tfree(resources);\n\tresources = malloc((numlines + 1) * sizeof(char *));\n\tif (resources == NULL) {\n\t\tlog_err(errno, __func__, MALLOC_ERR_MSG);\n\t\tfclose(fp);\n\t\treturn NULL;\n\t}\n\n\tfor (i = 0; fgets(line, sizeof(line), fp);) {\n\t\tn = strtok(line, \" \\t\\n\");\n\t\tif ((n != NULL) && (n[0] != '#')) {\n\t\t\tresources[i] = strdup(n);\n\t\t\tif (resources[i] == NULL) {\n\t\t\t\tlog_err(errno, __func__, MALLOC_ERR_MSG);\n\t\t\t\tfree_str_array(resources);\n\t\t\t\tfclose(fp);\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\ti++;\n\t\t}\n\t}\n\tresources[i] = NULL;\n\tfclose(fp);\n\n\treturn resources;\n}\n\n/**\n * @brief\n *\tReturns an array of names of resources that were deleted based on\n * \tthe comparison between an 'old' (r1) and 'new' (r2) set of names.\n *\n * @param[in] r1 - The array of 'old' resource names. 
i.e., prior to update\n * @param[in] r2 - The array of 'new' resource names.\n *\n * @par The returned array is to be freed by the caller using free();\n * its entries point into r1 and must not be freed individually.\n *\n * @return An array of names of resources that were deleted, i.e., that were\n * in r1 but are not in r2.\n * @retval\tarray of resource names\tSuccess\n * @retval\tNULL\t\t\tFailure\n *\n */\n\nstatic char **\nget_deleted_resources(char **r1, char **r2)\n{\n\tchar **deleted_resources;\n\tint i, j;\n\tint k = 0;\n\n\tfor (i = 0; r1[i] != NULL; i++)\n\t\t;\n\n\t/* worst case is that all resources in r1 were deleted */\n\tdeleted_resources = malloc((i + 1) * sizeof(char *));\n\tif (deleted_resources == NULL) {\n\t\tlog_err(errno, __func__, MALLOC_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\tfor (i = 0; r1[i] != NULL; i++) {\n\t\tfor (j = 0; r2[j] != NULL; j++) {\n\t\t\tif (strcmp(r1[i], r2[j]) == 0) {\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\t/* r1[i] is no longer reported in r2 */\n\t\tif (r2[j] == NULL) {\n\t\t\tdeleted_resources[k++] = r1[i];\n\t\t}\n\t}\n\tdeleted_resources[k] = NULL;\n\n\treturn deleted_resources;\n}\n\n/**\n * @brief\n *\tUpdate vnodes when resource definitions have changed\n *\n * @param deleted_resources - Array of deleted resources\n *\n * @return\tVoid\n *\n */\nstatic void\nupdate_vnodes_on_resourcedef_change(char **deleted_resources)\n{\n\textern vnl_t *vnlp;\n\tint i, j, k;\n\tchar *attr;\n\tint attrlen;\n\tchar *attrprefix = \"resources_available.\";\n\tvnl_t *nv = NULL;\n\tint mod_vnlp = 0; /* track whether vnode list was modified */\n\n\tif (vnlp == NULL)\n\t\treturn;\n\n\t/* The deleted resources may have a single NULL entry if no resources\n\t * are defined\n\t */\n\tif ((deleted_resources == NULL) || (deleted_resources[0] == NULL)) {\n\t\treturn;\n\t}\n\n\tif (vnl_alloc(&nv) == NULL) {\n\t\tlog_err(errno, __func__, \"vnl_alloc failed!\");\n\t\treturn;\n\t}\n\n\tattrlen = strlen(attrprefix) + 20; /* for resources_available.<some arbitrary resource> (will be extended as needed) */\n\tattr = 
malloc(attrlen);\n\tif (attr == NULL) {\n\t\tvnl_free(nv);\n\t\tlog_err(errno, __func__, MALLOC_ERR_MSG);\n\t\treturn;\n\t}\n\n\tfor (i = 0; i < vnlp->vnl_used; i++) {\n\t\tvnal_t *vnrlp = VNL_NODENUM(vnlp, i);\n\t\tfor (j = 0; j < vnrlp->vnal_used; j++) {\n\t\t\tvna_t *vnrp = VNAL_NODENUM(vnrlp, j);\n\t\t\tfor (k = 0; deleted_resources[k] != NULL; k++) {\n\t\t\t\tstrcpy(attr, attrprefix);\n\t\t\t\t(void) pbs_strcat(&attr, &attrlen, deleted_resources[k]);\n\t\t\t\tif (strcmp(vnrp->vna_name, attr) == 0) {\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\t/* attribute not found in deleted resources list, add\n\t\t\t * it to new vnode list */\n\t\t\tif (deleted_resources[k] == NULL) {\n\t\t\t\tvn_addvnr(nv, vnrlp->vnal_id, vnrp->vna_name, vnrp->vna_val, vnrp->vna_type, vnrp->vna_flag, NULL);\n\t\t\t} else {\n\t\t\t\t/* attribute was deleted, do not add it to new\n\t\t\t\t * vnode list, keep track of modification */\n\t\t\t\tmod_vnlp = 1;\n\t\t\t}\n\t\t}\n\t}\n\n\tfree(attr);\n\tif (mod_vnlp) {\n\t\tvnl_free(vnlp);\n\t\tnv->vnl_modtime = time(0);\n\t\tvnlp = nv;\n\t} else {\n\t\tvnl_free(nv);\n\t}\n}\n\n/**\n * @brief\n *\tReceive a hook-related file.\n *\n *  @param[in] \tpreq - Pointer to batch request structure for a Copy Hook\n *\t\t\trequest.\n *\t\t\tThe 'preq' parameter holds:\n * \t\t\t- preq->rq_ind.rq_hookfile.rq_filename - the basename\n *\t\t\t  (including suffix) of the target hook file.\n *\t\t\t- preq->rq_ind.rq_hookfile.rq_data contains the hook\n *\t\t\t  data.\n *\t\t\t- preq->rq_ind.rq_hookfile.rq_size is the size of\n *\t\t\t  rq_data.\n * @note\n *\tThe idea is to put contents (preq->rq_ind.rq_hookfile.rq_data) into\n *\t[PATH_HOOKS]/<preq->rq_ind.rq_hookfile.rq_filename>\n *\n *\tIf the file received is for a periodic hook, then an attempt is made\n *\tto instantiate the hook if none is queued up for execution.\n *\n *\tThis function expects 4 types of files:\n *\t\t<file_name>HOOK_FILE_SUFFIX\n *\t\t<file_name>HOOK_CONFIG_SUFFIX\n 
*\t\t<file_name>HOOK_SCRIPT_SUFFIX\n *\t\tPBS_RESCDEF\n *\n * @return \tVoid\n *\n */\n\nvoid\nreq_copy_hookfile(struct batch_request *preq) /* ptr to the decoded request   */\n{\n\tint filemode = 0700;\n\tint fds;\n\tchar namebuf[MAXPATHLEN + 1];\n\tchar *p;\n\tint is_hook_cntrl_file = 0;\n\tint is_hook_config_file = 0;\n\tint is_hook_script_file = 0;\n\tint is_hook_resourcedef_file = 0;\n\thook *phook = NULL;\n\tchar *hook_name;\n\tchar hook_msg[HOOK_MSG_SIZE + 1];\n\tint oflag;\n\tchar **prev_resources = NULL;\n\n\tif (reject_root_scripts == TRUE) {\n\t\tlog_err(-1, __func__, msg_mom_reject_root_scripts);\n\t\treq_reject(PBSE_MOM_REJECT_ROOT_SCRIPTS, 0, preq);\n\t\treturn;\n\t}\n\n\tp = strstr(preq->rq_ind.rq_hookfile.rq_filename, HOOK_FILE_SUFFIX);\n\tif ((p != NULL) && (strcmp(p, HOOK_FILE_SUFFIX) == 0)) {\n\t\tis_hook_cntrl_file = 1;\n\t}\n\tif (!is_hook_cntrl_file) {\n\t\tp = strstr(preq->rq_ind.rq_hookfile.rq_filename, HOOK_SCRIPT_SUFFIX);\n\t\tif ((p != NULL) && (strcmp(p, HOOK_SCRIPT_SUFFIX) == 0))\n\t\t\tis_hook_script_file = 1;\n\t}\n\n\tif (!is_hook_cntrl_file && !is_hook_script_file) {\n\t\tp = strstr(preq->rq_ind.rq_hookfile.rq_filename, HOOK_CONFIG_SUFFIX);\n\t\tif ((p != NULL) && (strcmp(p, HOOK_CONFIG_SUFFIX) == 0))\n\t\t\tis_hook_config_file = 1;\n\t}\n\n\tif (!is_hook_cntrl_file && !is_hook_script_file &&\n\t    !is_hook_config_file) {\n\t\tp = strstr(preq->rq_ind.rq_hookfile.rq_filename, PBS_RESCDEF);\n\t\tif ((p == NULL) || (strcmp(p, PBS_RESCDEF) != 0)) {\n\t\t\tlog_err(errno, __func__, \"malformed request\");\n\t\t\treq_reject(PBSE_INTERNAL, 0, preq);\n\t\t\treturn;\n\t\t}\n\t\tis_hook_resourcedef_file = 1;\n\t}\n\n\tsnprintf(namebuf, sizeof(namebuf), \"%s%s\", path_hooks,\n\t\t preq->rq_ind.rq_hookfile.rq_filename);\n\n\t/* Resources prior to update of the resourcedef file */\n\tif (is_hook_resourcedef_file) {\n\t\tprev_resources = get_resources_from_file(namebuf);\n\t}\n\n\tif (preq->rq_ind.rq_hookfile.rq_sequence == 0) { /* 1st chunk of 
data */\n\t\toflag = O_TRUNC | O_RDWR | O_CREAT | O_Sync;\n\t} else {\n\t\toflag = O_RDWR | O_APPEND | O_CREAT | O_Sync;\n\t}\n\n\tfds = open(namebuf, oflag, filemode);\n\n\tif (fds < 0) {\n\t\tlog_err(errno, __func__, msg_hookfile_open);\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\tfree_str_array(prev_resources);\n\t\treturn;\n\t}\n\n#ifdef WIN32\n\tsecure_file2(namebuf, \"Administrators\", READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED, \"Everyone\", READS_MASK | READ_CONTROL);\n\tsetmode(fds, O_BINARY);\n#endif /* WIN32 */\n\n\tif (write(fds, preq->rq_ind.rq_hookfile.rq_data,\n\t\t  (unsigned) preq->rq_ind.rq_hookfile.rq_size) !=\n\t    preq->rq_ind.rq_hookfile.rq_size) {\n\t\tlog_err(errno, __func__, msg_hookfile_write);\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t(void) close(fds);\n\t\tfree_str_array(prev_resources);\n\t\treturn;\n\t}\n\n\tif (is_hook_cntrl_file) {\n\t\tFILE *fp = NULL;\n\n\t\t(void) lseek(fds, 0L, SEEK_SET);\n\n\t\tfp = fdopen(fds, \"r\");\n\n\t\t/* Load the contents of hook control file into memory */\n\t\t/* passed fp to hook_recov()  so file does not need to be */\n\t\t/* reopened */\n\t\tif ((phook = hook_recov(namebuf, fp, hook_msg, HOOK_MSG_SIZE,\n\t\t\t\t\tpython_script_alloc, python_script_free)) == NULL) {\n\t\t\tlog_err(-1, __func__, hook_msg);\n\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\tif (fp != NULL)\n\t\t\t\t(void) fclose(fp);\n\t\t\telse\n\t\t\t\t(void) close(fds);\n\t\t\treturn;\n\t\t} else {\n\n\t\t\thook *phook2;\n\t\t\tint i;\n\t\t\tint j;\n\n\t\t\tif ((phook->event & HOOK_EVENT_EXECHOST_PERIODIC) &&\n\t\t\t    !has_task_by_parm1(phook)) {\n\t\t\t\trun_periodic_hook_bg(phook);\n\t\t\t}\n\n\t\t\tphook2 = (hook *) GET_NEXT(svr_allhooks);\n\t\t\ti = j = 0;\n\t\t\tfor (phook2 = (hook *) GET_NEXT(svr_allhooks); phook2 != NULL;\n\t\t\t     phook2 = (hook *) GET_NEXT(phook2->hi_allhooks)) {\n\t\t\t\tif (update_joinjob_alarm_time &&\n\t\t\t\t    (phook2->enabled == TRUE) &&\n\t\t\t\t    ((phook2->event & 
HOOK_EVENT_EXECJOB_BEGIN) != 0)) {\n\t\t\t\t\tif (i == 0)\n\t\t\t\t\t\tjoinjob_alarm_time = 0;\n\t\t\t\t\tjoinjob_alarm_time += phook2->alarm;\n\t\t\t\t\ti++;\n\t\t\t\t} else if (update_job_launch_delay &&\n\t\t\t\t\t   (phook2->enabled == TRUE) &&\n\t\t\t\t\t   ((phook2->event & HOOK_EVENT_EXECJOB_PROLOGUE) != 0)) {\n\t\t\t\t\tif (j == 0)\n\t\t\t\t\t\tjob_launch_delay = 0;\n\t\t\t\t\tjob_launch_delay += phook2->alarm;\n\t\t\t\t\tj++;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (i > 0) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"joinjob_alarm_time updated to %ld\", joinjob_alarm_time);\n\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t}\n\t\t\tif (j > 0) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"job_launch_delay updated to %ld\", job_launch_delay);\n\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t}\n\t\t}\n\t\tif (fp != NULL)\n\t\t\t(void) fclose(fp);\n\t\telse\n\t\t\t(void) close(fds);\n\n\t} else { /* a hook script, resourcedef, or hook config  file*/\n\n\t\tphook = NULL;\n\t\tif (is_hook_script_file)\n\t\t\tp = strstr(namebuf, HOOK_SCRIPT_SUFFIX);\n\t\telse if (is_hook_config_file)\n\t\t\tp = strstr(namebuf, HOOK_CONFIG_SUFFIX);\n\t\telse\n\t\t\tp = NULL;\n\n\t\tif (p != NULL) {\n\t\t\t*p = '\\0';\n\t\t\thook_name = strrchr(namebuf, '/');\n\t\t} else {\n\t\t\thook_name = NULL;\n\t\t}\n\t\tif (hook_name != NULL) { /* hook related */\n\t\t\thook_name++;\n\n\t\t\tif ((phook = find_hook(hook_name)) != NULL) {\n\t\t\t\tif (is_hook_script_file) {\n\t\t\t\t\tstrcat(p, HOOK_SCRIPT_SUFFIX);\n\t\t\t\t\tif (python_script_alloc(namebuf,\n\t\t\t\t\t\t\t\t(struct python_script **) &phook->script) == -1) {\n\t\t\t\t\t\tlog_err(-1, __func__, \"python_script_alloc call failed!\");\n\t\t\t\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\t\t\t\t(void) close(fds);\n\t\t\t\t\t\treturn;\n\t\t\t\t\t}\n\t\t\t\t} else if (is_hook_config_file) {\n\t\t\t\t\tstrcat(p, HOOK_CONFIG_SUFFIX);\n\t\t\t\t}\n\t\t\t\tif ((phook->event & HOOK_EVENT_EXECHOST_PERIODIC) &&\n\t\t\t\t    
!has_task_by_parm1(phook)) {\n\t\t\t\t\trun_periodic_hook_bg(phook);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\t(void) close(fds);\n\n\t\tif (is_hook_resourcedef_file) {\n\t\t\t/* check if any deleted resources were set on the\n\t\t\t * vnodes attribute list, and if so, update the vnodes\n\t\t\t */\n\t\t\tchar **new_resources;\n\t\t\tchar **deleted_resources;\n\n\t\t\tnew_resources = get_resources_from_file(namebuf);\n\t\t\tif ((prev_resources != NULL) && (new_resources != NULL)) {\n\t\t\t\tdeleted_resources = get_deleted_resources(prev_resources, new_resources);\n\t\t\t\tif (deleted_resources != NULL) {\n\t\t\t\t\tupdate_vnodes_on_resourcedef_change(deleted_resources);\n\t\t\t\t\tfree(deleted_resources);\n\t\t\t\t}\n\t\t\t}\n\t\t\tfree_str_array(new_resources);\n\t\t\tfree_str_array(prev_resources);\n\n\t\t\t/* Call setup_resc() only if the received\n\t\t\t * resourcedef file is the one for path_rescdef,\n\t\t\t * which is set up at mom startup and used when\n\t\t\t * HUP-ed.\n\t\t\t */\n\t\t\tif ((path_rescdef != NULL) &&\n\t\t\t    (strcmp(path_rescdef, namebuf) == 0) &&\n\t\t\t    (setup_resc(1) != 0)) {\n\t\t\t\t/* log_buffer set in setup_resc */\n\t\t\t\tlog_err(-1, \"setup_resc\",\n\t\t\t\t\t\"warning: failed to setup resourcedef\");\n\t\t\t}\n\t\t}\n\t}\n\t/* obtain new checksums after file is closed/flushed */\n\tif (is_hook_cntrl_file) {\n\t\tif (phook != NULL) {\n\t\t\tphook->hook_control_checksum = crc_file(namebuf);\n\t\t}\n\t} else if (is_hook_script_file) {\n\t\tif (phook != NULL) {\n\t\t\tphook->hook_script_checksum = crc_file(namebuf);\n\t\t}\n\t} else if (is_hook_config_file) {\n\t\tif (phook != NULL) {\n\t\t\tphook->hook_config_checksum = crc_file(namebuf);\n\t\t}\n\t} else if (is_hook_resourcedef_file) {\n\t\thooks_rescdef_checksum = crc_file(namebuf);\n\t}\n\n\treply_ack(preq);\n}\n\n/**\n * @brief\n *\tReceive a request to delete a hook-related file.\n *\n *  @param[in]\tpreq - pointer to batch request structure for a Delete Hook\n *\t\t\trequest.\n *\t\t     - contains preq->rq_ind.rq_hookfile.rq_filename which\n *\t\t\tis the hook-related 
filename to delete. If this\n *\t\t\tmatches the hook control file (*.HK suffix), then\n *\t\t\tbefore deleting the file, the associated hook in\n *\t\t\tmemory is also purged.\n *\n * @return\tVoid\n *\n */\n\nvoid\nreq_del_hookfile(struct batch_request *preq) /* ptr to the decoded request   */\n{\n\n\tchar namebuf[MAXPATHLEN + 1];\n\tchar *p;\n\tchar hook_name[MAXPATHLEN + 1];\n\thook *phook;\n\tjob *pjob = NULL;\n\tint hook_running = 0;\n\n\tp = strstr(preq->rq_ind.rq_hookfile.rq_filename, HOOK_FILE_SUFFIX);\n\tif ((p == NULL) || (strcmp(p, HOOK_FILE_SUFFIX) != 0)) {\n\t\tp = strstr(preq->rq_ind.rq_hookfile.rq_filename,\n\t\t\t   HOOK_SCRIPT_SUFFIX);\n\t} else {\n\t\t*p = '\\0';\n\t\tsnprintf(hook_name, sizeof(hook_name), \"%s\",\n\t\t\t preq->rq_ind.rq_hookfile.rq_filename);\n\t\tstrcat(p, HOOK_FILE_SUFFIX);\n\t\tif ((phook = find_hook(hook_name)) != NULL) {\n#ifndef WIN32\n\t\t\tpjob = (job *) GET_NEXT(svr_alljobs);\n\t\t\twhile (pjob) {\n\t\t\t\t/* See if any asynchronous hook is running */\n\t\t\t\tif (pjob->ji_hook_running_bg_on) {\n\t\t\t\t\thook_running = 1;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tpjob = (job *) GET_NEXT(pjob->ji_alljobs);\n\t\t\t}\n\t\t\tif (hook_running && (phook->event & HOOK_EVENT_EXECJOB_END)) {\n\t\t\t\t/**\n\t\t\t\t * This event runs the hook in the background,\n\t\t\t\t * and its deferred task, created while\n\t\t\t\t * running the hook, is required for a graceful\n\t\t\t\t * exit of the job.\n\t\t\t\t */\n\t\t\t\treply_ack(preq);\n\t\t\t\treturn;\n\t\t\t}\n#endif\n\t\t\tdelete_task_by_parm1_func(phook, NULL, DELETE_ONE);\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_INFO, phook->hook_name,\n\t\t\t\t  \"deleted any hook task entry\");\n\t\t\t/* inside hook_purge() is where the hook control */\n\t\t\t/* file is deleted */\n\t\t\thook_purge(phook, python_script_free);\n\t\t}\n\t\treply_ack(preq);\n\t\treturn;\n\t}\n\tif (((p == NULL) || (strcmp(p, HOOK_SCRIPT_SUFFIX) != 0)) &&\n\t    
(strcmp(preq->rq_ind.rq_hookfile.rq_filename,\n\t\t    PBS_RESCDEF) != 0)) {\n\t\tp = strstr(preq->rq_ind.rq_hookfile.rq_filename, HOOK_CONFIG_SUFFIX);\n\t\tif ((p == NULL) || (strcmp(p, HOOK_CONFIG_SUFFIX) != 0)) {\n\t\t\tlog_err(-1, __func__, \"malformed request\");\n\t\t\treq_reject(PBSE_INTERNAL, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tsnprintf(namebuf, sizeof(namebuf), \"%s%s\", path_hooks,\n\t\t preq->rq_ind.rq_hookfile.rq_filename);\n\n\tif (unlink(namebuf) < 0) {\n\t\tif (errno != ENOENT) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"Failed to delete hook file %s\",\n\t\t\t\t namebuf);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\treq_reject(PBSE_INTERNAL, 0, preq);\n\t\t\tmark_hook_file_bad(namebuf);\n\t\t\treturn;\n\t\t}\n\t} else {\n\t\tif (!strcmp(preq->rq_ind.rq_hookfile.rq_filename, PBS_RESCDEF))\n\t\t\thooks_rescdef_checksum = 0LU;\n\t}\n\n\treply_ack(preq);\n}\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\nvoid\nreq_cred(struct batch_request *preq) /* ptr to the decoded request */\n{\n\tunsigned char out_data[CRED_DATA_SIZE];\n\tssize_t out_len = 0;\n\tchar buf[LOG_BUF_SIZE];\n\tkrb5_data *data;\n\tchar *data_base64 = NULL;\n\tjob *pjob;\n\n\tif (decode_block_base64((unsigned char *) preq->rq_ind.rq_cred.rq_cred_data, preq->rq_ind.rq_cred.rq_cred_size, out_data, &out_len, buf, LOG_BUF_SIZE) != 0) {\n\t\tlog_err(errno, __func__, buf);\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\treturn;\n\t}\n\n\tif ((data = (krb5_data *) malloc(sizeof(krb5_data))) == NULL) {\n\t\tlog_err(errno, __func__, \"Unable to allocate Memory!\\n\");\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\treturn;\n\t}\n\n\tif ((data->data = (char *) malloc(sizeof(unsigned char) * out_len)) == NULL) {\n\t\tlog_err(errno, __func__, \"Unable to allocate Memory!\\n\");\n\t\tfree(data);\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\treturn;\n\t}\n\n\tdata->length = out_len;\n\tmemcpy(data->data, out_data, out_len);\n\n\tdata_base64 = 
strdup(preq->rq_ind.rq_cred.rq_cred_data);\n\tif (data_base64 == NULL) {\n\t\tlog_err(errno, __func__, \"Unable to allocate Memory!\\n\");\n\t\tfree(data->data);\n\t\tfree(data);\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\treturn;\n\t}\n\n\tstore_or_update_cred(preq->rq_ind.rq_cred.rq_jobid, preq->rq_ind.rq_cred.rq_credid, preq->rq_ind.rq_cred.rq_cred_type, data, data_base64, preq->rq_ind.rq_cred.rq_cred_validity);\n\n\t/* renew ticket for the job */\n\tif ((pjob = find_job(preq->rq_ind.rq_cred.rq_jobid)) != NULL) {\n\t\t/* send cred to sisters too */\n\t\tsend_cred_sisters(pjob);\n\n\t\t/* new creds received - let's renew the cred */\n\t\trenew_job_cred(pjob);\n\t}\n\n\treply_ack(preq);\n}\n#endif\n"
  },
  {
    "path": "src/resmom/rm_dep.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n **\tCommon resource names for dependent code.  
All machines\n **\tsupported by the resource monitor should include at least\n **\tthese resources.\n */\n\nstatic char *cput(struct rm_attribute *attrib);\n#ifndef WIN32\nstatic char *mem(struct rm_attribute *attrib);\nstatic char *sessions(struct rm_attribute *attrib);\nstatic char *pids(struct rm_attribute *attrib);\nstatic char *nsessions(struct rm_attribute *attrib);\nstatic char *nusers(struct rm_attribute *attrib);\n#endif\nstatic char *size(struct rm_attribute *attrib);\nextern char *idletime(struct rm_attribute *attrib);\n\nextern char *nullproc(struct rm_attribute *attrib);\n\nstruct config standard_config[] = {\n\t{\"cput\", {cput}},\n#ifndef WIN32\n\t{\"mem\", {mem}},\n\t{\"sessions\", {sessions}},\n\t{\"pids\", {pids}},\n\t{\"nsessions\", {nsessions}},\n\t{\"nusers\", {nusers}},\n#endif\n\t{\"size\", {size}},\n\t{\"idletime\", {idletime}},\n\t{NULL, {nullproc}},\n};\n"
  },
  {
    "path": "src/resmom/stage_func.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <assert.h>\n#include <ctype.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <stdio.h>\n#include <limits.h>\n#include <stdlib.h>\n#include <string.h>\n#include <time.h>\n#include <sys/wait.h>\n#include <dirent.h>\n#include \"tpp.h\"\n#include \"pbs_ifl.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"job.h\"\n#include \"ticket.h\"\n#include \"libpbs.h\"\n#include \"batch_request.h\"\n#include \"pbs_nodes.h\"\n#include \"mom_func.h\"\n\n/**\n * @file\tstage_func.c\n */\nextern char *path_spool;\t/* path to spool directory */\nextern char *path_undeliv;\t/* path to undelivered directory */\nextern char *path_checkpoint;\t/* path to checkpoint directory */\nextern char *msg_err_unlink;\t/* unlink error message (see pbs_messages.c) */\nextern char rcperr[MAXPATHLEN]; /* path to rcp errfile for current copy */\nextern char *pbs_jobdir;\t/* path to staging and execution dir of current job */\nextern char *cred_buf;\t\t/* cred buffer */\nextern size_t cred_len;\t\t/* length of cred buffer */\n#ifndef WIN32\nextern int cred_pipe;\nextern char *pwd_buf;\n#endif\nextern char mom_host[PBS_MAXHOSTNAME + 1]; /* MoM host name */\n\nint stage_file(int, int, char *, struct rqfpair *, int, cpy_files *, char *, 
char *);\nstatic int sys_copy(int, int, char *, char *, struct rqfpair *, int, char *, char *);\n\n/**\n * A path in Windows is not case sensitive, so define PATHCMP\n * to do the right comparison.\n */\n#ifdef WIN32\n#define PATHCMP strncasecmp\n#else\n#define PATHCMP strncmp\n#endif\n\n#ifdef WIN32\n/**\n * @brief\n * check_err - Report an error in the log file if the length of the actual\n *             data buffer and the length of the data written differ.\n *\n * @param[in]\t\tfunc_name\t\t-\tName of the caller function\n * @param[in]\t\tbuffer\t\t\t-\tActual data buffer\n * @param[in]\t\twritten_len\t\t-\tTotal length of the written data\n *\n * @return void\n */\nvoid\ncheck_err(const char *func_name, char *buffer, int written_len)\n{\n\tif ((size_t) written_len != strlen(buffer)) {\n\t\tDWORD ecode = GetLastError();\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"Failed to write or wrote partial data to the pipe: data=[%s], total_len=%zu, written_len=%d, ecode=%lu\",\n\t\t\t buffer, strlen(buffer), written_len, (unsigned long) ecode);\n\t\tlog_err(-1, func_name, log_buffer);\n\t}\n}\n#endif\n\n/**\n * @brief\n *\tadd_bad_list\n *\tadd new bad file message to bad file messages list\n *\n * @param[in/out]\tpbl\t-\tlist of bad file messages\n * @param[in]\t\tnewtext\t-\tnew bad file message\n * @param[in]\t\tnl\t-\tnumber of prefix new-lines\n *\n * @return\tvoid\n *\n * @note: if <pbl> is non-NULL then it will be reallocated based on the length\n *        of <newtext>; if <pbl> is NULL then it will be allocated.\n *        So, the caller should free <pbl> after use\n *\n */\nvoid\nadd_bad_list(char **pbl, char *newtext, int nl)\n{\n\tint needed = 0;\n\tchar *pnew = NULL;\n\n\tif (*pbl) {\n\t\tneeded += strlen(*pbl) + strlen(newtext) + nl + 1;\n\t\tpnew = realloc(*pbl, needed);\n\t} else {\n\t\tneeded += strlen(newtext) + nl + 1;\n\t\tpnew = malloc(needed);\n\t\tif (pnew)\n\t\t\t*pnew = '\\0';\n\t}\n\tif (pnew == NULL) {\n\t\tlog_err(errno, __func__, \"Failed to allocate 
memory\");\n\t\treturn;\n\t}\n\n\t*pbl = pnew;\n\twhile (nl--) /* prefix new-lines */\n\t\t(void) strcat(*pbl, \"\\n\");\n\t(void) strcat(*pbl, newtext);\n\treturn;\n}\n\n/**\n * @brief\n *\tis_child_path\n *\t\tcheck if the provided <path> specifies a location under the specified directory <dir>\n *\n * @param[in]\tdir\t-\tdirectory path\n * @param[in]\tpath\t-\tpath to check\n *\n * @return\tint\n * @retval\t1 - given path is a child of dir\n * @retval\t0 - given path is not a child of dir\n * @retval -1 - error encountered\n */\nint\nis_child_path(char *dir, char *path)\n{\n\tchar fullpath[2 * MAXPATHLEN + 2] = {'\\0'};\n\tchar *dir_real = NULL;\n\tchar *fullpath_real = NULL;\n\tchar *pos = NULL;\n\tint return_value = 0;\n\n\t/* if file path is relative, combine it with directory */\n\tif (!is_full_path(path)) {\n\t\tsnprintf(fullpath, sizeof(fullpath), \"%s/%s\", dir, path);\n\t} else {\n\t\tsnprintf(fullpath, sizeof(fullpath), \"%s\", path);\n\t}\n\n#ifdef WIN32\n\tdir_real = malloc(sizeof(char) * (MAXPATHLEN + 1));\n\tfullpath_real = malloc(sizeof(char) * (2 * MAXPATHLEN + 2));\n#else\n\tdir_real = realpath(dir, NULL);\n\tfullpath_real = realpath(fullpath, NULL);\n#endif\n\n\tif (dir_real == NULL || fullpath_real == NULL) {\n\t\tlog_err(errno, __func__, \"failed to resolve path or allocate memory\");\n\t\treturn_value = -1;\n\t\tgoto error_exit;\n\t}\n\n\t/* even if the file path is relative to some directory, */\n\t/* it may use /../../ notation and escape that directory, */\n\t/* so always perform full check of parent-child directory relation */\n#ifdef WIN32\n\tfix_path(fullpath, 3);\n\tpbs_strncpy(dir_real, lpath2short(dir), MAXPATHLEN + 1);\n\tpbs_strncpy(fullpath_real, lpath2short(fullpath), 2 * MAXPATHLEN + 2);\n#endif\n\t/* check that fullpath_real begins with dir_real */\n\tif (strlen(dir_real) && strlen(fullpath_real)) {\n\t\tpos = strstr(fullpath_real, dir_real);\n\t\tif (pos == fullpath_real) {\n\t\t\treturn_value = 
1;\n\t\t}\n\t}\n\nerror_exit:\n\tfree(dir_real);\n\tfree(fullpath_real);\n\treturn return_value;\n}\n\n/**\n * @brief\twchost_match\n *\t\tWild card host name match.\n *\t\tDo a case insensitive compare since hostnames are not case sensitive.\n *\n * @param[in]\tcan\t-\tstage request hostname\n * @param[in]\tmaster\t-\tname from usecp list (may be wild carded at beginning)\n *\n * @return\tint\n * @retval\t1 - candidate (can) matches master name\n * @retval\t0 - not a match\n *\n */\nint\nwchost_match(const char *can, const char *master)\n{\n\tconst char *pc;\n\tconst char *pm;\n\n\tif ((can == NULL) || (master == NULL))\n\t\treturn 0;\n\n\tpc = can + strlen(can) - 1;\n\tpm = master + strlen(master) - 1;\n\twhile ((pc > can) && (pm > master)) {\n\t\tif (tolower(*pc) != tolower(*pm))\n\t\t\treturn 0;\n\t\tpc--;\n\t\tpm--;\n\t}\n\n\t/* comparison of one or both reached the start of the string */\n\tif (pm == master) {\n\t\tif (*pm == '*')\n\t\t\treturn 1;\n\t\telse if ((pc == can) && (tolower(*pc) == tolower(*pm)))\n\t\t\treturn 1;\n\t}\n\treturn 0;\n}\n\n#ifdef NAS /* localmod 009 */\n/**\n * @brief\n *\tWild card suffix host name match.  
Do a case insensitive compare since\n * \thostnames are not case sensitive.\n *\n * @param can           stage request hostname\n * @param master        name from usecp list (may be wild carded at end)\n *\n * @return int\n * @retval 1    candidate matches master name\n * @retval 0    not a match\n *\n */\nstatic int\nwcsuffix_host_match(const char *can, const char *master)\n{\n\tint cmp_length;\n\tint master_length = strlen(master);\n\n\tif ((master_length > 0) && (master[master_length - 1] == '*')) {\n\t\tcmp_length = master_length - 1;\n\t} else {\n\t\tcmp_length = master_length;\n\t}\n\n\tif (strncasecmp(can, master, cmp_length) == 0)\n\t\treturn 1;\n\n\treturn 0;\n}\n#endif /* localmod 009 */\n\n/**\n * @brief\n *\ttold_to_cp - Check a stage file request against the saved set of \"usecp\" paths.\n *\n * @param[in]\thost\t-\tstage request hostname\n * @param[in]\toldpath\t-\tstage request path\n * @param[out]\tnewpath\t-\tpointer to \"usecp\" path\n *\n * @return\tint\n * @retval\t1 - file matched, newpath is updated\n * @retval\t0 - no match, newpath is unchanged\n *\n */\nint\ntold_to_cp(char *host, char *oldpath, char **newpath)\n{\n\tint i = 0;\n\tint nh = 0;\n\tstatic char newp[MAXPATHLEN + 1] = {'\\0'};\n\n#ifdef NAS /* localmod 009 */\n\textern struct cphosts *pcphosts;\n\tint match_found = 0;\n\tfor (nh = 0; nh < cphosts_num; nh++) {\n\t\tif (wchost_match(host, (pcphosts + nh)->cph_hosts) ||\n\t\t    wcsuffix_host_match(host, (pcphosts + nh)->cph_hosts)) {\n\t\t\ti = strlen((pcphosts + nh)->cph_from);\n\t\t\tif (PATHCMP((pcphosts + nh)->cph_from, oldpath, i) == 0) {\n\t\t\t\tif ((pcphosts + nh)->cph_exclude)\n\t\t\t\t\treturn 0;\n\n\t\t\t\tmatch_found = 1;\n\t\t\t\tpbs_strncpy(newp, (pcphosts + nh)->cph_to, sizeof(newp));\n\t\t\t\t(void) strcat(newp, oldpath + i);\n\t\t\t}\n\t\t}\n\t}\n\n\tif (match_found) {\n\t\t*newpath = newp;\n\t}\n\n\treturn match_found;\n#else\n\tfor (nh = 0; nh < cphosts_num; nh++) {\n\t\tif (wchost_match(host, (pcphosts + 
nh)->cph_hosts)) {\n\t\t\ti = strlen((pcphosts + nh)->cph_from);\n\t\t\tif (PATHCMP((pcphosts + nh)->cph_from, oldpath, i) == 0) {\n\t\t\t\t(void) strcpy(newp, (pcphosts + nh)->cph_to);\n\t\t\t\t(void) strcat(newp, oldpath + i);\n\t\t\t\t*newpath = newp;\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t}\n\t}\n\treturn 0;\n#endif /* localmod 009 */\n}\n\n/**\n * @brief\n *\tlocal_or_remote - Decide if the specified path is to a local or remote file.\n *\n * @param[in]\tpath\t-\tpointer to stage path string\n *\n * @return\tint\n * @retval\t1 - if given path is remote path\n * @retval\t0 - if given path is local path\n *\n * @note\tThis function updates the path pointer to just the path name if local.\n *\n */\nint\nlocal_or_remote(char **path)\n{\n\tint len = 0;\n\tchar *pcolon = NULL;\n\n\tpcolon = strchr(*path, (int) ':');\n\tif (pcolon == NULL)\n\t\treturn 0;\n\n\t*pcolon = '\\0';\n\tlen = strlen(*path);\n#ifdef WIN32\n\tif (IS_UNCPATH(pcolon + 1)) {\n\t\t/*\n\t\t * UNC path found\n\t\t * treat as local path and\n\t\t * remove given hostname part from path\n\t\t */\n\t\t*pcolon = ':';\n\t\t*path = pcolon + 1;\n\t\treturn 0;\n\t} else if (told_to_cp(*path, pcolon + 1, path))\n#else\n\tif (told_to_cp(*path, pcolon + 1, path))\n#endif\n\t{\n\t\t/* path updated in told_to_cp() */\n\t\t*pcolon = ':';\n\t\treturn 0;\n\t} else if ((strcasecmp(\"localhost\", *path) == 0) ||\n\t\t   ((strncasecmp(mom_host, *path, len) == 0) &&\n\t\t    ((mom_host[len] == '\\0') || (mom_host[len] == '.')))) {\n\t\t/* we have a host match, file is local */\n\t\t*pcolon = ':';\n\t\t*path = pcolon + 1;\n\t\treturn 0;\n\t} else {\n\t\t/* remote file */\n\t\t*pcolon = ':';\n\t\treturn 1;\n\t}\n}\n\n/**\n * @brief\n *\tSetup for direct write of spool file\n * @par\n *\tDetermines if a spool file is to be directly written to its final destination, i.e.:\n *\t1. Direct write of spool files has been requested by the job, and\n *\t2. 
Final destination of the file maps to a locally-mounted directory, either because it is\n *\t   explicitly mapped by $usecp, or the destination hostname is Mom's host.\n *\n * @param[in]  pjob - pointer to job structure\n * @param[in]  which - identifies which file: StdOut, StdErr, or Chkpt.\n * @param[out]  path -  pointer to array of size MAXPATHLEN+1 into which the final path of the file is to be stored.\n * @param[out]  direct_write_possible -  set to 0 when direct write is not possible.\n *                                         Useful to decide whether to write a warning message to the stderr file\n *                                          after multiple invocations of this function.\n *\n * @return int\n *\n * @retval\t0 if file is not to be directly written,\n * @retval\t1 otherwise.\n *\n * @par\n *\tIf the file is not to be directly written (return zero), the contents of *path are unchanged.\n * @par\n *\tDirect write of checkpoint files is not currently supported.\n *\n * @par MT-safe: No\n *\n */\nint\nis_direct_write(job *pjob, enum job_file which, char *path, int *direct_write_possible)\n{\n\tchar working_path[MAXPATHLEN + PBS_MAXSVRJOBID + 3 + 1];\n\tchar *p = working_path;\n\n\tif (which == Chkpt)\n\t\treturn (0); /* direct write of checkpoint not supported */\n\n\t/* Check if direct_write requested. 
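The keep-flag test performed here (the job's Keep_Files value must contain 'd' together with 'o' or 'e' for the matching stream) can be sketched standalone. This is a minimal illustration, not the PBS API: keep_wants_direct_write is a hypothetical helper, while the real code reads the attribute through is_jattr_set()/get_jattr_str().

```c
#include <string.h>

/* Illustrative helper (not part of PBS): direct write applies only when
 * the keep string contains 'd' together with the stream flag ('o' for
 * stdout, 'e' for stderr). */
static int
keep_wants_direct_write(const char *keep, char which)
{
	if (keep == NULL || strchr(keep, 'd') == NULL)
		return 0;			/* 'd' not requested */
	return strchr(keep, which) != NULL;	/* stream must also be kept */
}
```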
*/\n\tif (!((is_jattr_set(pjob, JOB_ATR_keep)) &&\n\t      (strchr(get_jattr_str(pjob, JOB_ATR_keep), 'd'))))\n\t\treturn (0);\n\n\t/* Figure out what the final destination path is */\n\tswitch (which) {\n\t\tcase StdOut:\n\t\t\tif (!strchr(get_jattr_str(pjob, JOB_ATR_keep), 'o'))\n\t\t\t\treturn (0);\n\t\t\telse\n\t\t\t\t/* Make local working copy of path for call to local_or_remote */\n\t\t\t\tsnprintf(working_path, MAXPATHLEN + 1, \"%s\", get_jattr_str(pjob, JOB_ATR_outpath));\n\t\t\tif (\n#ifdef WIN32\n\t\t\t\tworking_path[strlen(working_path) - 1] == '\\\\'\n#else\n\t\t\t\tworking_path[strlen(working_path) - 1] == '/'\n#endif\n\t\t\t) {\n\t\t\t\tstrcat(working_path, pjob->ji_qs.ji_jobid);\n\t\t\t\tstrcat(working_path, JOB_STDOUT_SUFFIX);\n\t\t\t}\n\t\t\tbreak;\n\t\tcase StdErr:\n\t\t\tif (!strchr(get_jattr_str(pjob, JOB_ATR_keep), 'e'))\n\t\t\t\treturn (0);\n\t\t\telse\n\t\t\t\t/* Make local working copy of path for call to local_or_remote */\n\t\t\t\tsnprintf(working_path, MAXPATHLEN + 1, \"%s\", get_jattr_str(pjob, JOB_ATR_errpath));\n\t\t\tif (\n#ifdef WIN32\n\t\t\t\tworking_path[strlen(working_path) - 1] == '\\\\'\n#else\n\t\t\t\tworking_path[strlen(working_path) - 1] == '/'\n#endif\n\t\t\t) {\n\t\t\t\tstrcat(working_path, pjob->ji_qs.ji_jobid);\n\t\t\t\tstrcat(working_path, JOB_STDERR_SUFFIX);\n\t\t\t}\n\t\t\tbreak;\n\t\tdefault:\n\t\t\treturn (0);\n\t}\n\n\tif (local_or_remote(&p) == 1) {\n\t\t*direct_write_possible = 0;\n\t\tif (pjob->ji_hosts != NULL) {\n\t\t\tlog_eventf(PBSEVENT_DEBUG3,\n\t\t\t\t   PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid,\n\t\t\t\t   \"Direct write is requested for job: %s, but the destination: %s is not usecp-able from %s\",\n\t\t\t\t   pjob->ji_qs.ji_jobid, p,\n\t\t\t\t   pjob->ji_hosts[pjob->ji_nodeid].hn_host);\n\t\t} else {\n\t\t\t/* When a job is requeued and later run, the ji_hosts\n\t\t\t * information is not available when this function is\n\t\t\t * called as part of req_mvjobfile\n\t\t\t 
*/\n\t\t\tlog_eventf(PBSEVENT_DEBUG3,\n\t\t\t\t   PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid,\n\t\t\t\t   \"Direct write is requested for job: %s, but the destination: %s is not usecp-able\",\n\t\t\t\t   pjob->ji_qs.ji_jobid, p);\n\t\t}\n\t\treturn (0);\n\t}\n\n\tif (strlen(p) > MAXPATHLEN) {\n\t\t*direct_write_possible = 0;\n\t\tsprintf(log_buffer,\n\t\t\t\"Direct write is requested for job: %s, but the destination path is longer than %d\",\n\t\t\tpjob->ji_qs.ji_jobid, MAXPATHLEN);\n\t\tlog_event(PBSEVENT_DEBUG3,\n\t\t\t  PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\t\treturn (0);\n\t}\n\n\t/* Destination maps to local directory - final path is in working_path. */\n\tsnprintf(path, MAXPATHLEN + 1, \"%s\", p);\n\treturn (1);\n}\n\n#ifdef WIN32\n/**\n * @brief\n *\tremtree - wrapper around remdir() to remove a tree (or single file), with support for UNC paths\n *\n * @param[in]\tdirname\t-\tpath of single file or dir to remove\n *\n * @return\tint\n * @retval\t0 - success\n * @retval\t-1 - failure\n *\n */\nint\nremtree(char *dirname)\n{\n\tint rtnv = 0;\n\tchar unipath[MAXPATHLEN + 1] = {'\\0'};\n\tint unmap = 0;\n\tchar map_drive[MAXPATHLEN + 1] = {'\\0'};\n\n\tif (dirname != NULL && *dirname != '\\0') {\n\t\treplace(dirname, \"\\\\ \", \" \", unipath);\n\t\tfix_path(unipath, 3);\n\t} else {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_FILE, LOG_ERR, __func__, \"directory or file path is NULL\");\n\t\treturn -1;\n\t}\n\n\tunmap = get_localpath(unipath, map_drive);\n\n\trtnv = remdir(unipath);\n\n\tif (unmap)\n\t\tunmap_unc_path(map_drive);\n\n\treturn rtnv;\n}\n\n/**\n * @brief\n *\tremove a tree (or single file)\n *\n * @param[in]\tdirname - path of single file or dir to remove\n *\n * @return \tint\n * @retval \t0 on success\n * @retval\t-1 on failure\n *\n */\nint\nremdir(char *dirname)\n#else /* WIN32 */\nint\nremtree(char *dirname)\n#endif\n{\n#ifdef WIN32\n\tstatic char id[] = \"remdir\";\n#else\n\tstatic char id[] = 
\"remtree\";\n#endif\n\tDIR *dir = NULL;\n\tstruct dirent *pdir = NULL;\n\tchar namebuf[MAXPATHLEN] = {'\\0'};\n\tchar *filnam = NULL;\n\tint i = 0;\n\tint rtnv = 0;\n\tstruct stat sb = {0};\n\n\tif (lstat(dirname, &sb) == -1) {\n\t\tif (errno != ENOENT) {\n\t\t\tsprintf(log_buffer, \"lstat: %s\", dirname);\n\t\t\tlog_err(errno, id, log_buffer);\n\t\t}\n\t\treturn -1;\n\t}\n\n\tif (S_ISDIR(sb.st_mode)) {\n\t\tif ((dir = opendir(dirname)) == NULL) {\n\t\t\tif (errno != ENOENT) {\n\t\t\t\tsprintf(log_buffer, \"opendir: %s\", dirname);\n\t\t\t\tlog_err(errno, id, log_buffer);\n\t\t\t}\n\t\t\treturn -1;\n\t\t}\n\n\t\tpbs_strncpy(namebuf, dirname, sizeof(namebuf));\n#ifdef WIN32\n\t\t(void) strcat(namebuf, \"\\\\\");\n#else\n\t\t(void) strcat(namebuf, \"/\");\n#endif\n\t\ti = strlen(namebuf);\n\t\tfilnam = &namebuf[i];\n\n\t\twhile (errno = 0, (pdir = readdir(dir)) != NULL) {\n\t\t\tif (pdir->d_name[0] == '.') {\n\t\t\t\tif (pdir->d_name[1] == '\\0' ||\n\t\t\t\t    (pdir->d_name[1] == '.' &&\n\t\t\t\t     pdir->d_name[2] == '\\0'))\n\t\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tpbs_strncpy(filnam, pdir->d_name, sizeof(namebuf) - (filnam - namebuf));\n#ifdef WIN32\n\t\t\trtnv = remdir(namebuf);\n#else\n\t\t\trtnv = remtree(namebuf);\n#endif\n\t\t\tif (rtnv == -1) {\n\t\t\t\terrno = 0;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tif (errno != 0 && errno != ENOENT) {\n\t\t\tsprintf(log_buffer, \"readdir: %s\", dirname);\n\t\t\tlog_err(errno, id, log_buffer);\n\t\t\t(void) closedir(dir);\n\t\t\treturn -1;\n\t\t}\n\t\t(void) closedir(dir);\n\n\t\tif (rmdir(dirname) < 0) {\n\t\t\tif (errno != ENOENT) {\n\t\t\t\tsprintf(log_buffer, \"rmdir: %s\", dirname);\n\t\t\t\tlog_err(errno, id, log_buffer);\n\t\t\t\trtnv = -1;\n\t\t\t}\n\t\t}\n\t} else if (unlink(dirname) < 0) {\n\t\tsprintf(log_buffer, \"unlink: %s\", dirname);\n\t\tlog_err(errno, id, log_buffer);\n\t\trtnv = -1;\n\t}\n\treturn rtnv;\n}\n\n/**\n * @brief\n *\tpbs_glob - check whether given pattern matches in filename\n *\tpattern 
matches:\n *\t*\tmatches zero or more characters\n *\t?\tmatches any single character\n *\tchar\tmatches itself except where char is '*' or '?'\n *\n * @param[in]\tfilen\t-\tfilename\n * @param[in]\tpat\t-\tpattern to match\n *\n * @return\tBoolean\n * @retval\tTRUE - if pattern matches\n * @retval\tFALSE - if no match\n *\n */\nint\npbs_glob(char *filen, char *pat)\n{\n\tint c = 0;\n\n\twhile (*pat) {\n\t\tif (!*filen && *pat != '*')\n\t\t\treturn FALSE;\n\n\t\tc = *pat++;\n\t\tswitch (c) {\n\n\t\t\tcase '*':\n\t\t\t\twhile (*pat == '*')\n\t\t\t\t\tpat++;\n\n\t\t\t\tif (!*pat)\n\t\t\t\t\treturn TRUE;\n\n\t\t\t\tif (*pat != '?') {\n\t\t\t\t\twhile (*filen && *pat != *filen)\n\t\t\t\t\t\tfilen++;\n\t\t\t\t}\n\n\t\t\t\twhile (*filen) {\n\t\t\t\t\tif (pbs_glob(filen, pat))\n\t\t\t\t\t\treturn TRUE;\n\t\t\t\t\tfilen++;\n\t\t\t\t}\n\t\t\t\treturn FALSE;\n\n\t\t\tcase '?':\n\t\t\t\tif (*filen)\n\t\t\t\t\tbreak;\n\t\t\t\treturn FALSE;\n\n\t\t\tdefault:\n\t\t\t\tif (c != *filen)\n\t\t\t\t\treturn FALSE;\n\t\t\t\tbreak;\n\t\t}\n\t\tfilen++;\n\t}\n\n\treturn !*filen;\n}\n\n/**\n * @brief\n *\tcopy_file - Do a single staging file copy.\n *\n * @param[in]\t\tdir\t\t-\tdirection of copy\n *\t\t\t\t\t\tSTAGE_DIR_IN - for stage in request\n *\t\t\t\t\t\tSTAGE_DIR_OUT - for stageout request\n * @param[in]\t\trmtflag\t\t-\tis remote file copy\n * @param[in]\t\towner\t\t-\tusername for owner of copy request\n * @param[in]\t\tsrc\t\t-\tpath to source if stageout else local file name\n * @param[in]\t\tpair\t\t-\tlist of file pair\n * @param[in]\t\tconn\t\t-\tsocket on which request is received\n * @param[in/out]\tstage_inout\t-\tpointer to cpy_files struct\n * @param[in]\t\tprmt\t\t-\tpath to destination if stageout else source path\n * @param[in]\t\tjobid\t\t- \tjob ID\n *\n * @return\tint\n * @retval\t0 - all OK\n * @retval\t!0 - error\n *\n */\nint\ncopy_file(int dir, int rmtflag, char *owner, char *src, struct rqfpair *pair, int conn, cpy_files *stage_inout, char *prmt, char 
*jobid)\n{\n\tint rc = 0;\n\tint ret = 0;\n\tint len = 0;\n\tstruct stat buf = {0};\n\tchar dest[MAXPATHLEN + 1] = {'\\0'};\n\tchar src_file[MAXPATHLEN + 1] = {'\\0'};\n\n\t/*\n\t ** The destination is calculated for a stagein so it can\n\t ** be used later.  It does not need to be passed to sys_copy.\n\t */\n\tif (dir == STAGE_DIR_IN) {\n\t\t/* if destination is a directory, append filename */\n#ifdef WIN32\n\t\tif (stat_uncpath(pair->fp_local, &buf) == 0 && S_ISDIR(buf.st_mode))\n#else\n\t\tif (stat(pair->fp_local, &buf) == 0 && S_ISDIR(buf.st_mode))\n#endif\n\t\t{\n\t\t\tchar *slash = strrchr(src, '/');\n\n\t\t\tpbs_strncpy(dest, pair->fp_local, sizeof(dest));\n\t\t\tstrcat(dest, \"/\");\n\t\t\tstrcat(dest, (slash != NULL) ? slash + 1 : src);\n\t\t} else\n\t\t\tpbs_strncpy(dest, pair->fp_local, sizeof(dest));\n\t}\n\n\tret = sys_copy(dir, rmtflag, owner, src, pair, conn, prmt, jobid);\n\n\tif (ret == 0) {\n\t\t/*\n\t\t ** Copy worked.  If old behavior is used, a stageout file\n\t\t ** is deleted now.  
New behavior of waiting to delete\n\t\t ** everything could be achieved by adding the file to\n\t\t ** a list to delete later.\n\t\t */\n\t\tif (dir == STAGE_DIR_OUT) {\n\t\t\t/*\n\t\t\t ** have copied out, may need to remove local file\n\t\t\t ** if sandbox=private then the file is not removed here\n\t\t\t ** it will be removed when the sandbox directory is removed\n\t\t\t */\n\n\t\t\tif (!(stage_inout->sandbox_private &&\n\t\t\t      is_child_path(pbs_jobdir, src) == 1)) {\n\t\t\t\t/* Check if the local file path has a comma in it; if\n\t\t\t\t * found, the escape character prefixing it will be\n\t\t\t\t * removed\n\t\t\t\t */\n\t\t\t\treplace(src, \"\\\\,\", \",\", src_file);\n\t\t\t\tif (*src_file == '\\0')\n\t\t\t\t\tpbs_strncpy(src_file, src, sizeof(src_file));\n\n\t\t\t\tif (remtree(src_file) < 0) {\n\t\t\t\t\tif (errno == ENOENT) {\n\t\t\t\t\t\tlog_event(PBSEVENT_ADMIN,\n\t\t\t\t\t\t\t  PBS_EVENTCLASS_FILE,\n\t\t\t\t\t\t\t  LOG_INFO, src,\n\t\t\t\t\t\t\t  \"previously removed\");\n\t\t\t\t\t} else {\n\t\t\t\t\t\tchar temp[80 + MAXPATHLEN];\n\n\t\t\t\t\t\tsnprintf(temp, sizeof(temp),\n\t\t\t\t\t\t\t msg_err_unlink,\n\t\t\t\t\t\t\t \"stage out\", src);\n\t\t\t\t\t\tlog_err(errno, \"req_cpyfile\",\n\t\t\t\t\t\t\ttemp);\n\t\t\t\t\t\tadd_bad_list(&(stage_inout->bad_list),\n\t\t\t\t\t\t\t     temp, 2);\n\t\t\t\t\t\tstage_inout->bad_files = 1;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\t/*\n\t\t\t ** Add destination (local) filename to list so it can\n\t\t\t ** be deleted on later failure.\n\t\t\t */\n\t\t\tchar temp[80 + MAXPATHLEN];\n\t\t\tchar **stage_file_list_temp = NULL;\n\t\t\tif (stage_inout->file_max == stage_inout->file_num) { /* need to extend list */\n\t\t\t\tstage_inout->file_max += 10;\n\t\t\t\tif ((stage_file_list_temp = (char **) realloc(stage_inout->file_list, stage_inout->file_max * sizeof(char *))) == NULL) {\n\t\t\t\t\tsnprintf(temp, sizeof(temp), \"Out of Memory!\");\n\t\t\t\t\tlog_err(ENOMEM, \"req_cpyfile\", temp);\n\t\t\t\t\treturn 
-1;\n\t\t\t\t} else {\n\t\t\t\t\tstage_inout->file_list = stage_file_list_temp;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tDBPRT((\"%s: listadd %s\\n\", __func__, dest))\n\t\t\tif ((stage_inout->file_list[stage_inout->file_num++] = strdup(dest)) == NULL) {\n\t\t\t\tsnprintf(temp, sizeof(temp), \"Out of Memory!\");\n\t\t\t\tlog_err(ENOMEM, \"req_cpyfile\", temp);\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t}\n\t} else { /* failure */\n\n\t\tFILE *fp = NULL;\n\t\tDBPRT((\"%s: sys_copy failed, error = %d\\n\", __func__, ret))\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"Job %s: sys_copy failed, return value=%d\", jobid, ret);\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_FILE, LOG_ERR, __func__, log_buffer);\n\t\tstage_inout->bad_files = 1;\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"Unable to copy file %s %s %s\",\n\t\t\t (dir == STAGE_DIR_IN) ? dest : src,\n\t\t\t (dir == STAGE_DIR_IN) ? \"from\" : \"to\",\n\t\t\t (dir == STAGE_DIR_IN) ? src : pair->fp_rmt);\n\t\tadd_bad_list(&(stage_inout->bad_list), log_buffer, 2);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_FILE, LOG_INFO,\n\t\t\t  pair->fp_local, log_buffer);\n\n\t\t/* copy message from rcp as well */\n\t\tif ((fp = fopen(rcperr, \"r\")) != NULL) {\n\t\t\tadd_bad_list(&(stage_inout->bad_list), \">>> error from copy\", 1);\n\t\t\twhile (fgets(log_buffer, LOG_BUF_SIZE, fp) != NULL) {\n\t\t\t\tlen = strlen(log_buffer) - 1;\n\n\t\t\t\t/* strip up to 2 line endings */\n\t\t\t\tif (len >= 0) {\n\t\t\t\t\tif (log_buffer[len] == '\\n' ||\n\t\t\t\t\t    log_buffer[len] == '\\r')\n\t\t\t\t\t\tlog_buffer[len] = '\\0';\n\t\t\t\t\tlen--;\n\t\t\t\t\tif (len >= 0) {\n\t\t\t\t\t\tif (log_buffer[len] == '\\n' ||\n\t\t\t\t\t\t    log_buffer[len] == '\\r')\n\t\t\t\t\t\t\tlog_buffer[len] = '\\0';\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tadd_bad_list(&(stage_inout->bad_list), log_buffer, 1);\n\t\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_FILE,\n\t\t\t\t\t  LOG_INFO, pair->fp_local, 
log_buffer);\n\t\t\t}\n\t\t\tfclose(fp);\n\t\t\tadd_bad_list(&(stage_inout->bad_list), \">>> end error output\", 1);\n\t\t} else {\n\t\t\tlog_errf(errno, __func__, \"Failed to open %s file\", rcperr);\n\t\t}\n\n\t\trc = -1;\n\t\tif (dir == STAGE_DIR_OUT) {\n#ifndef NO_SPOOL_OUTPUT\n\t\t\tif (stage_inout->from_spool == 1) { /* copy out of spool */\n\t\t\t\tchar undelname[MAXPATHLEN + 1];\n\n\t\t\t\tlen = strlen(path_spool);\n\t\t\t\tpbs_strncpy(undelname, path_undeliv, sizeof(undelname));\n\t\t\t\tstrcat(undelname, src + len); /* src path begins with spool */\n\n\t\t\t\tif (rename(src, undelname) == 0) { /* move file to undelivered */\n\t\t\t\t\tadd_bad_list(&(stage_inout->bad_list), \"Output retained on that host in: \", 1);\n\t\t\t\t\tadd_bad_list(&(stage_inout->bad_list), undelname, 0);\n\t\t\t\t} else {\n\t\t\t\t\tchar temp[80 + 2 * MAXPATHLEN];\n\n\t\t\t\t\tsprintf(temp, \"Unable to rename %s to %s\",\n\t\t\t\t\t\tsrc, undelname);\n\t\t\t\t\tlog_err(errno, \"req_cpyfile\", temp);\n\t\t\t\t}\n\t\t\t}\n#endif /* NO_SPOOL_OUTPUT */\n\n\t\t\tif (is_child_path(pbs_jobdir, src) == 1)\n\t\t\t\tstage_inout->stageout_failed = TRUE;\n\t\t}\n\t}\n\n\tunlink(rcperr);\n\treturn rc;\n}\n\n/**\n * @brief\n *\tstage_file - Handle file stage pair. 
The source could have a wildcard\n *\twhich would result in multiple copies.\n *\n * @param[in]\t\tdir\t\t-\tdirection of copy\n *\t\t\t\t\t\t\tSTAGE_DIR_IN - for stage in request\n *\t\t\t\t\t\t\tSTAGE_DIR_OUT - for stageout request\n * @param[in]\t\trmtflag\t\t-\tis remote file copy\n * @param[in]\t\towner\t\t-\tusername for owner of copy request\n * @param[in]\t\tpair\t\t-\tlist of file pair\n * @param[in]\t\tconn\t\t-\tsocket on which request is received\n * @param[in/out]\tstage_inout\t-\tpointer to cpy_files struct\n * @param[in]\t\tprmt\t\t-\tpath to destination if stageout else source path\n * @param[in]\t\tjobid\t\t- \tjob ID\n *\n * @return\tint\n * @retval\t0 - all OK\n * @retval \t!0 - error\n *\n */\nint\nstage_file(int dir, int rmtflag, char *owner, struct rqfpair *pair, int conn, cpy_files *stage_inout, char *prmt, char *jobid)\n{\n\tchar *ps = NULL;\n\tint i = 0;\n\tint rc = 0;\n\tint len = 0;\n\tchar dname[MAXPATHLEN + 1] = {'\\0'};\n\tchar source[MAXPATHLEN + 1] = {'\\0'};\n\tchar matched[MAXPATHLEN + 1] = {'\\0'};\n\tDIR *dirp = NULL;\n\tstruct dirent *pdirent = NULL;\n\tstruct stat statbuf;\n\n\tDBPRT((\"%s: entered local %s remote %s\\n\", __func__, pair->fp_local, prmt))\n\n\t/*\n\t * figure out the source path\n\t */\n\tif (dir == STAGE_DIR_OUT) {\n\t\tsource[0] = '\\0';\n\t\tif (pair->fp_flag == STDJOBFILE) {\n#ifndef NO_SPOOL_OUTPUT\n\t\t\t/* stdout | stderr from MOM's spool area */\n\n\t\t\tif (!(stage_inout->sandbox_private)) {\n\t\t\t\tDBPRT((\"%s: STDJOBFILE from %s\\n\", __func__, path_spool))\n\t\t\t\tpbs_strncpy(source, path_spool, sizeof(source));\n\t\t\t\tstage_inout->from_spool = 1; /* flag as being in spool dir */\n\t\t\t}\n\n\t\t\t/*\n\t\t\t * note, if NO_SPOOL_OUTPUT is defined, the\n\t\t\t * output is in the user's home directory or job directory where\n\t\t\t * we currently are.\n\t\t\t */\n#endif /* NO_SPOOL_OUTPUT */\n\n\t\t} else if (pair->fp_flag == JOBCKPFILE) {\n\t\t\tDBPRT((\"%s: JOBCKPFILE from %s\\n\", __func__, 
path_checkpoint))\n\t\t\tpbs_strncpy(source, path_checkpoint, sizeof(source));\n\t\t}\n\t\tstrcat(source, pair->fp_local);\n\n\t\t/* Staging out. Check to see if file is being staged out from spool directory (i.e., is stdout or stderr). If so,\n\t\t * skip file if it doesn't exist in the spool directory, since it may have been directly written.\n\t\t */\n\n\t\tif (stage_inout->from_spool && stage_inout->direct_write && !rmtflag) {\n\t\t\tif (stat(source, &statbuf) == -1) {\n\t\t\t\tif (errno == ENOENT) {\n\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\"Skipping directly written/absent spool file %s\",\n\t\t\t\t\t\tsource);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t\t  LOG_DEBUG, __func__, log_buffer);\n\t\t\t\t\treturn 0;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t} else { /* in bound (stage-in) file */\n\t\t/* take (remote) source name from request */\n\t\tpbs_strncpy(source, prmt, sizeof(source));\n\t}\n\tDBPRT((\"%s: source %s\\n\", __func__, source))\n\n\tps = strrchr(source, (int) '/');\n\tif (ps) {\n\t\t/* has prefix path, save parent directory name */\n\t\tlen = (int) (ps - source) + 1;\n\t\tpbs_strncpy(dname, source, len + 1);\n\t\tps++;\n\t} else {\n\t\t/* no prefix path, go with \"./source\" */\n\t\tdname[0] = '.';\n\t\tdname[1] = '/';\n\t\tdname[2] = '\\0';\n\t\tps = source;\n\t}\n\n\tif ((rmtflag != 0) && (dir == STAGE_DIR_IN)) { /* no need to check for wildcards */\n\t\tDBPRT((\"%s: simple copy, remote/stagein\\n\", __func__))\n\t\trc = copy_file(dir, rmtflag, owner, source,\n\t\t\t       pair, conn, stage_inout, prmt, jobid);\n\t\tif (rc != 0) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"Job %s: remote stagein failed for %s from %s to %s\",\n\t\t\t\t jobid, owner, source, pair->fp_local);\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_FILE, LOG_ERR, __func__, log_buffer);\n\t\t\tgoto error;\n\t\t}\n\t\treturn 0;\n\t}\n\n\t/* if there are no wildcards we don't need to search */\n\tif ((strchr(ps, '*') == NULL) && (strchr(ps, '?') 
== NULL)) {\n\t\tDBPRT((\"%s: simple copy, no wildcards\\n\", __func__))\n\t\trc = copy_file(dir, rmtflag, owner, source,\n\t\t\t       pair, conn, stage_inout, prmt, jobid);\n\t\tif (rc != 0) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"Job %s: no wildcards:%s stage%s failed for %s from %s to %s\",\n\t\t\t\t jobid, (rmtflag == 1) ? \"remote\" : \"local\", (dir == STAGE_DIR_OUT) ? \"out\" : \"in\", owner, source,\n\t\t\t\t (dir == STAGE_DIR_OUT) ? pair->fp_rmt : pair->fp_local);\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_FILE, LOG_ERR, __func__, log_buffer);\n\t\t\tgoto error;\n\t\t}\n\t\treturn 0;\n\t}\n\n\tdirp = opendir(dname);\n\tif (dirp == NULL) { /* dir cannot be opened, just call copy_file */\n\t\tDBPRT((\"%s: cannot open dir %s\\n\", __func__, dname))\n\t\trc = copy_file(dir, rmtflag, owner, source,\n\t\t\t       pair, conn, stage_inout, prmt, jobid);\n\t\tif (rc != 0) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"Job %s: Cannot open directory:%s stage%s failed for %s from %s to %s\",\n\t\t\t\t jobid, (rmtflag == 1) ? \"remote\" : \"local\", (dir == STAGE_DIR_OUT) ? \"out\" : \"in\", owner, source,\n\t\t\t\t (dir == STAGE_DIR_OUT) ? pair->fp_rmt : pair->fp_local);\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_FILE, LOG_ERR, __func__, log_buffer);\n\t\t\tgoto error;\n\t\t}\n\t\treturn 0;\n\t}\n\n\twhile (errno = 0, (pdirent = readdir(dirp)) != NULL) {\n#ifdef WIN32\n\t\tDWORD fa = 0;\n\n\t\tif (strcmp(pdirent->d_name, \".\") == 0 ||\n\t\t    strcmp(pdirent->d_name, \"..\") == 0)\n\t\t\tcontinue;\n\n\t\t/* get Windows file attributes */\n\t\tpbs_strncpy(matched, dname, sizeof(matched));\n\t\tstrcat(matched, pdirent->d_name);\n\t\tfa = GetFileAttributes(matched);\n\n\t\t/* skip windows HIDDEN or SYSTEM files */\n\t\tif (fa == INVALID_FILE_ATTRIBUTES)\n\t\t\tcontinue;\n\t\tif (fa == FILE_ATTRIBUTE_HIDDEN)\n\t\t\tcontinue;\n\t\tif (fa == FILE_ATTRIBUTE_SYSTEM)\n\t\t\tcontinue;\n\n#else\n\t\t/* skip unix files that begin with '.' 
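Directory entries that survive this filter are matched against the pattern with pbs_glob(), defined earlier in this file ('*' matches zero or more characters, '?' exactly one). The copy below restates that matcher standalone, under the illustrative name glob_demo, so its behavior can be exercised in isolation.

```c
/* Standalone restatement of the pbs_glob() wildcard matcher:
 * returns 1 when pat matches filen, 0 otherwise. */
static int
glob_demo(const char *filen, const char *pat)
{
	while (*pat) {
		if (!*filen && *pat != '*')
			return 0;
		int c = *pat++;
		switch (c) {
		case '*':
			while (*pat == '*')
				pat++;
			if (!*pat)
				return 1;	/* trailing '*' matches the rest */
			if (*pat != '?') {
				/* skip ahead to the next possible literal match */
				while (*filen && *pat != *filen)
					filen++;
			}
			while (*filen) {	/* try every remaining suffix */
				if (glob_demo(filen, pat))
					return 1;
				filen++;
			}
			return 0;
		case '?':
			if (*filen)
				break;		/* any single character */
			return 0;
		default:
			if (c != *filen)
				return 0;
			break;
		}
		filen++;
	}
	return !*filen;
}
```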
*/\n\t\tif (pdirent->d_name[0] == '.')\n\t\t\tcontinue;\n#endif\n\t\tif (pbs_glob(pdirent->d_name, ps) != 0) {\n\t\t\t/* name matches */\n\n\t\t\tpbs_strncpy(matched, dname, sizeof(matched));\n\t\t\tstrcat(matched, pdirent->d_name);\n\t\t\tDBPRT((\"%s: match %s\\n\", __func__, matched))\n\t\t\trc = copy_file(dir, rmtflag, owner, matched,\n\t\t\t\t       pair, conn, stage_inout, prmt, jobid);\n\t\t\tif (rc != 0) {\n\t\t\t\t(void) closedir(dirp);\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"Job %s: Pattern matched:%s stage%s failed for %s from %s to %s\",\n\t\t\t\t\t jobid, (rmtflag == 1) ? \"remote\" : \"local\", (dir == STAGE_DIR_OUT) ? \"out\" : \"in\", owner, source,\n\t\t\t\t\t (dir == STAGE_DIR_OUT) ? pair->fp_rmt : pair->fp_local);\n\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_FILE, LOG_ERR, __func__, log_buffer);\n\t\t\t\tgoto error;\n\t\t\t}\n\t\t}\n\t}\n\tif (errno != 0 && errno != ENOENT) { /* dir cannot be read, just call copy_file */\n\t\tDBPRT((\"%s: cannot read dir %s\\n\", __func__, dname))\n\t\trc = copy_file(dir, rmtflag, owner, source,\n\t\t\t       pair, conn, stage_inout, prmt, jobid);\n\t\t(void) closedir(dirp);\n\t\tif (rc != 0) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"Job %s: Cannot read directory:%s stage%s failed for %s from %s to %s\",\n\t\t\t\t jobid, (rmtflag == 1) ? \"remote\" : \"local\", (dir == STAGE_DIR_OUT) ? \"out\" : \"in\", owner, source,\n\t\t\t\t (dir == STAGE_DIR_OUT) ? 
pair->fp_rmt : pair->fp_local);\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_FILE, LOG_ERR, __func__, log_buffer);\n\t\t\tgoto error;\n\t\t}\n\t\treturn 0;\n\t}\n\n\t(void) closedir(dirp);\n\treturn 0;\n\nerror:\n\t/* delete all the files in the list */\n\tfor (i = 0; i < stage_inout->file_num; i++) {\n\t\tDBPRT((\"%s: delete %s\\n\", __func__, stage_inout->file_list[i]))\n\t\tif (remtree(stage_inout->file_list[i]) != 0 && errno != ENOENT) {\n\t\t\tchar temp[80 + MAXPATHLEN];\n\n\t\t\tsprintf(temp, msg_err_unlink, \"stage in\", stage_inout->file_list[i]);\n\t\t\tlog_err(errno, \"req_cpyfile\", temp);\n\t\t\tadd_bad_list(&(stage_inout->bad_list), temp, 2);\n\t\t}\n\t}\n\treturn rc;\n}\n\n/**\n * @brief\n *\trmjobdir - Remove the staging and execution directory and any files\n *\twithin it.\n *\n * @param[in] jobid  - the job id string, i.e. \"123.server\"\n * @param[in] jobdir - the full path to the sandbox (working directory) to remove\n * @param[in] uid    - the user id of the user under which the job will run.\n *\t\t       Currently this parameter is only used in *nix, so for\n *\t\t       Windows this parameter should be NULL\n * @param[in] gid    - the group id of the user under which the job will run.\n *\t\t       Currently this parameter is only used in *nix, so for\n *\t\t       Windows this parameter should be NULL\n * @param[in] check_shared - if set to '1', test if the staging and execution\n *\t\t\tdirectory is sitting on a shared location, and if so,\n *\t\t\tdo not remove any files in it.\n * @return void\n *\n * @note\tThis may take a while so the task is forked and execed to another\n *\t\tprocess. 
In *nix, as with mkjobdir(), the actions must be done\n *\t\tas the User or as root depending on the location of the sandbox.\n *\n */\nvoid\nrmjobdir(char *jobid, char *jobdir, uid_t uid, gid_t gid, int check_shared)\n{\n\tstatic char rmdir_buf[MAXPATHLEN + 1] = {'\\0'};\n\tstruct stat sb = {0};\n\tchar *newdir = NULL;\n\tchar *nameptr = NULL;\n#ifdef WIN32\n\tstruct pio_handles pio = {0};\n\tchar cmdbuf[MAXPATHLEN + 1] = {'\\0'};\n\tchar sep = '\\\\';\n#else\n\tpid_t pid = -1;\n\tchar *rm = \"/bin/rm\";\n\tchar *rf = \"-rf\";\n\tchar sep = '/';\n#endif\n\n\tif (jobdir == NULL)\n\t\treturn;\n\n\tif (check_shared &&\n\t    (pbs_jobdir_root[0] != '\\0') &&\n\t    pbs_jobdir_root_shared &&\n\t    ((strcmp(pbs_jobdir_root, JOBDIR_DEFAULT) == 0) ||\n\t     (strncmp(pbs_jobdir_root, jobdir, strlen(pbs_jobdir_root)) == 0))) {\n\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, jobid ? jobid : \"\", \"shared jobdir %s to be removed by primary mom\", jobdir);\n\n\t\treturn;\n\t}\n\n#ifndef WIN32\n\tif ((pbs_jobdir_root[0] == '\\0') || (strcmp(pbs_jobdir_root, JOBDIR_DEFAULT) == 0)) {\n\t\t/* In user's home, need to be user */\n\t\t/* The rest must be done as the User */\n\t\tif (impersonate_user(uid, gid) == -1)\n\t\t\treturn;\n\t}\n#endif\n\n\t/* Hello, is anybody there? */\n\tif (stat(jobdir, &sb) == -1) {\n#ifndef WIN32\n\t\tif ((pbs_jobdir_root[0] == '\\0') || (strcmp(pbs_jobdir_root, JOBDIR_DEFAULT) == 0)) {\n\t\t\t/* oops, have to go back to being root */\n\t\t\trevert_from_user();\n\t\t}\n#endif\n\t\tif (errno != ENOENT) {\n\t\t\tsprintf(log_buffer, \"stat: %s\", jobdir);\n\t\t\tlog_joberr(errno, __func__, log_buffer, jobid);\n\t\t}\n\t\treturn;\n\t}\n\n\t/*\n\t * The job path should be \"pbs.<jobid>.x8z\"\n\t * if the string \"pbs.\" is not found in the basename of\n\t * the given path, then there is no staging and execution\n\t * directory.  
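The basename check described here can be sketched on its own: find the last path separator with strrchr (basename() is not assumed available) and require the "pbs." prefix before removing anything. looks_like_jobdir is an illustrative name, not a PBS function.

```c
#include <string.h>

/* Illustrative helper: extract the basename via strrchr and require the
 * "pbs." prefix, as rmjobdir() does before deleting anything. */
static int
looks_like_jobdir(const char *jobdir, char sep)
{
	const char *nameptr = strrchr(jobdir, sep);

	/* past the separator, or the whole string if none */
	nameptr = (nameptr != NULL) ? nameptr + 1 : jobdir;
	return strncmp(nameptr, "pbs.", 4) == 0;
}
```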
So don't delete anything, we don't want\n\t * to remove too much!\n\t * There is no need to check for trailing separator at the\n\t * end of the job path, because jobdirname() does not add\n\t * a trailing separator.\n\t * NOTE: basename() wasn't available on all supported architectures,\n\t * so instead we look for the last separator in the path name,\n\t * and advance the pointer one space (past the\n\t * separator) to get the basename.\n\t */\n\tif ((nameptr = strrchr(jobdir, sep)) != NULL) {\n\t\tnameptr++;\n\t} else {\n\t\t/* we already have the basename */\n\t\tnameptr = jobdir;\n\t}\n\n\tif (strncmp(nameptr, \"pbs.\", 4) != 0) {\n#ifndef WIN32\n\t\tif ((pbs_jobdir_root[0] == '\\0') || (strcmp(pbs_jobdir_root, JOBDIR_DEFAULT) == 0))\n\t\t\trevert_from_user();\n#endif\n\t\tsprintf(log_buffer, \"%s is not a staging and execution directory\", jobdir);\n\t\tlog_joberr(-1, __func__, log_buffer, jobid);\n\t\treturn;\n\t}\n\n\tsnprintf(rmdir_buf, sizeof(rmdir_buf) - 1, \"%s_remove\", jobdir);\n\tnewdir = rmdir_buf;\n\n#ifdef WIN32\n\tmake_dir_files_service_account_read(jobdir);\n\n\tif (!MoveFile(jobdir, newdir)) {\n\t\terrno = GetLastError();\n\t\tsprintf(log_buffer, \"rename: %s %s\", jobdir, newdir);\n\t\tlog_joberr(errno, __func__, log_buffer, jobid);\n\t\tnewdir = jobdir;\n\t}\n\n\tsprintf(cmdbuf, \"rmdir /S /Q \\\"%s\\\"\", newdir);\n\tif (!win_popen(cmdbuf, \"w\", &pio, NULL)) {\n\t\terrno = GetLastError();\n\t\tlog_joberr(errno, __func__, \"win_popen\", jobid);\n\t}\n\twin_pclose2(&pio);\n\tclose_valid_handle(&(pio.pi.hProcess));\n#else\n\tif (rename(jobdir, newdir) == -1) {\n\t\tsprintf(log_buffer, \"rename: %s %s\", jobdir, newdir);\n\t\tlog_joberr(errno, __func__, log_buffer, jobid);\n\t\tnewdir = jobdir;\n\t}\n\n\t/* fork and exec the cleantmp process */\n\tpid = fork();\n\tif (pid != 0) { /* parent or error */\n\t\tint err = errno;\n\t\tif ((pbs_jobdir_root[0] == '\\0') || (strcmp(pbs_jobdir_root, JOBDIR_DEFAULT) == 
0))\n\t\t\trevert_from_user();\n\n\t\tif (pid < 0)\n\t\t\tlog_err(err, __func__, \"fork\");\n\t\treturn;\n\t}\n\n\ttpp_terminate();\n\texecl(rm, \"pbs_cleandir\", rf, newdir, NULL);\n\tlog_err(errno, __func__, \"execl\");\n\texit(21);\n#endif\n}\n\n#ifndef WIN32\n\n/**\n * @brief\n *\tcopy string quoting whitespace by prefixing with back-slash\n *\n * @par Functionality:\n *\tCopy a source string into a buffer.  Any whitespace as defined by\n *\t\"isspace()\" is prefixed by a back-slash '\\'.\n *\n * @par Note\n *\tthis is compiled for Unix/Linux only.  Windows has its own version\n *\tin lib/Libwin.\n *\n * @param[out]\tpd  - output buffer into which the copying is done\n * @param[in]\tps  - input string\n * @param[in]\tsz  - length of output buffer\n *\n * @return\tint\n * @retval\t0 : \tcopied successfully\n * @retval\t1 :\tbuffer would have overflowed, pd is not null terminated\n *\n */\n\nstatic int\nquote_and_copy_white(char *pd, char *ps, ssize_t sz)\n{\n\n\t/* Copy the file path escaping white space with a back-slash */\n\n\twhile (*ps) {\n\t\tif (isspace((int) *ps)) {\n\t\t\t*pd++ = '\\\\';\n\t\t\tif (--sz < 1)\n\t\t\t\treturn 1;\n\t\t}\n\t\t*pd++ = *ps++; /* copy the character */\n\t\tif (--sz < 1)\n\t\t\treturn 1;\n\t}\n\t*pd = '\\0';\n\treturn 0;\n}\n#else\n/**\n * @brief\n *\tis_scp_path - check whether scp_path in pbs_conf is set and contains \"scp\"\n *\n * @return\tint\n * @retval\t0 - either scp_path is not set or does not contain \"scp\"\n * @retval\t1 - scp_path is set and contains \"scp\"\n *\n */\nstatic int\nis_scp_path(void)\n{\n\tif (pbs_conf.scp_path &&\n\t    (strstr(pbs_conf.scp_path, \"scp\") ||\n\t     strstr(pbs_conf.scp_path, \"SCP\") ||\n\t     strstr(pbs_conf.scp_path, \"Scp\"))) {\n\t\treturn (1);\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n *\tis_pbs_rcp_path - check whether rcp_path in pbs_conf is set and contains \"pbs_rcp\"\n *\n * @return\tint\n * @retval\t0 - either rcp_path is not set or does not contain \"pbs_rcp\"\n * @retval\t1 
- rcp_path is set and contains \"pbs_rcp\"\n *\n */\nstatic int\nis_pbs_rcp_path(void)\n{\n\tif (pbs_conf.rcp_path &&\n\t    (strstr(pbs_conf.rcp_path, \"pbs_rcp\") ||\n\t     strstr(pbs_conf.rcp_path, \"PBS_RCP\") ||\n\t     strstr(pbs_conf.rcp_path, \"Pbs_rcp\"))) {\n\t\treturn (1);\n\t}\n\treturn (0);\n}\n#endif\n/**\n * @brief\n *\tsys_copy\n *\tissue system call to copy file\n *\tAlso check error of copy process and retry as required\n *\n *\tIn Windows, use \"xcopy\" (for directory) or \"copy\" (for single file)\n *\tfor local copy and \"pbs_rcp\" for remote copy.\n *\tIf there is an error in the copy and pbs_rcp is used, it will try with scp.\n *\n *\tIn *nix, use \"cp\" for local copy and \"scp\"/\"rcp\" for remote copy.\n *\tIf there is an error in the copy and scp is used, it will try with rcp.\n *\n *\tIf there is an error, the copy will be retried 3 additional times.\n *\n * @param[in]\t\tdir\t\t-\tdirection of copy\n *\t\t\t\t\t\tSTAGE_DIR_IN - for stage in request\n *\t\t\t\t\t\tSTAGE_DIR_OUT - for stageout request\n * @param[in]\t\trmtflag\t\t-\tis remote file copy\n * @param[in]\t\towner\t\t-\tusername for owner of copy request\n * @param[in]\t\tsrc\t\t-\tpath to source if stageout else local file name\n * @param[in]\t\tpair\t\t-\tlist of file pair\n * @param[in]\t\tconn\t\t-\tsocket on which request is received\n * @param[in]\t\tprmt\t\t-\tpath to destination if stageout else source path\n * @param[in]\t\tjobid\t\t-\tJob ID\n *\n * @return\tint\n * @retval\t0 - successful copy\n * @retval \t!0 - copy failed\n *\n * @note\n * Use of the arguments is somewhat confusing, in summary the files copied are:\n *\tSTAGE_DIR_OUT\n *\t\t\"src\"  local source file to copy\n *\t\t\"prmt\" local or remote destination file\n *\tSTAGE_DIR_IN\n *\t\tif remote - \"prmt\" is the remote source file\n *\t\tif local  - \"src\"  is the local source file\n *\t\t\"pair->fp_local\" is the local destination path in both cases\n *\n */\nstatic int\nsys_copy(int dir, int 
rmtflg, char *owner, char *src, struct rqfpair *pair, int conn, char *prmt, char *jobid)\n{\n\tchar *ag0 = NULL;\n\tchar *ag1 = NULL;\n\tchar ag2[MAXPATHLEN + 1] = {'\\0'};\n\tchar ag3[MAXPATHLEN + 1] = {'\\0'};\n\tchar local_file[MAXPATHLEN + 1] = {'\\0'};\n\tchar rmt_file[MAXPATHLEN + 1] = {'\\0'};\n\tint loop = 0;\n\tint rc = 0;\n\ttime_t original = 0;\n\tstruct stat sb = {0};\n#ifdef WIN32\n\tchar str_buf[MAXPATHLEN + 1] = {'\\0'};\n\tchar *sp = NULL;\n\tchar *dp = NULL;\n\tint fd = -1;\n\tSTARTUPINFO si = {0};\n\tPROCESS_INFORMATION pi = {0};\n\tint flags = CREATE_DEFAULT_ERROR_MODE | CREATE_NEW_CONSOLE |\n\t\t    CREATE_NEW_PROCESS_GROUP;\n\tchar cmd_line[PBS_CMDLINE_LENGTH] = {'\\0'};\n\tchar ag2_path[MAXPATHLEN + 1] = {'\\0'};\n\tchar ag3_path[MAXPATHLEN + 1] = {'\\0'};\n\tchar wdir[MAXPATHLEN + 1] = {'\\0'};\n\tHANDLE readp = INVALID_HANDLE_VALUE;\n\tHANDLE writep = INVALID_HANDLE_VALUE;\n\tSECURITY_ATTRIBUTES sa = {0};\n\tstruct passwd *pw = NULL;\n#else\n\tint i;\n\tssize_t len;\n#endif\n\n\tDBPRT((\"%s: %s %s copy %s of %s\\n\", __func__, owner,\n\t       rmtflg ? \"remote\" : \"local\",\n\t       (dir == STAGE_DIR_OUT) ? 
\"out\" : \"in\", src))\n\n#ifdef WIN32\n\tZeroMemory(&si, sizeof(SYSTEM_INFO));\n\tsa.nLength = sizeof(sa);\n\tsa.lpSecurityDescriptor = NULL;\n\tsa.bInheritHandle = TRUE;\n\n\tstrcpy(wdir, \"C:\\\\\");\n\n\tif (getcwd(wdir, MAXPATHLEN + 1) == NULL) {\n\t\tlog_errf(-1, __func__, \"Failed to get the current working directory %s\", wdir);\n\t}\n\n\tsi.cb = sizeof(si);\n\tsi.lpDesktop = PBS_DESKTOP_NAME;\n\n\tif ((pw = getpwnam(owner)) == NULL) {\n\t\tlog_errf(-1, __func__, \"Failed to get %s password\", owner);\n\t\trc = PBSE_BADUSER;\n\t\tgoto sys_copy_end;\n\t}\n#endif\n\tsprintf(rcperr, \"%srcperr.%d\", path_spool, getpid());\n\n\tif (dir == STAGE_DIR_OUT) {\n#ifndef WIN32\n\t\tif (*src == '-')\n\t\t\tstrcpy(ag2, \"./\"); /* prefix leading '-' with \"./\" */\n#endif\n\t\treplace(src, \"\\\\,\", \",\", local_file);\n\t\tif (*local_file != '\\0')\n\t\t\tstrcpy(ag2, local_file);\n\t\telse\n\t\t\tpbs_strncpy(ag2, src, sizeof(ag2));\n\n#ifndef WIN32\n\t\t/* Is the file there?  If not, don`t try copy */\n\t\tif (access(ag2, F_OK | R_OK) < 0)\n#else\n\t\tif (access_uncpath(ag2, F_OK | R_OK) < 0)\n#endif\n\t\t{\n\t\t\tif (errno == ENOENT)\n\t\t\t\treturn 1;\n\t\t}\n\n\t\t/* take (remote) destination name from request */\n\t\tif (rmtflg) {\n#ifdef WIN32\n\t\t\t/* using rcp, need to prepend the owner name */\n\t\t\tif (is_scp_path() || is_pbs_rcp_path()) {\n\t\t\t\tstrcat(ag3, owner);\n\t\t\t\tstrcat(ag3, \"@\");\n\t\t\t\treplace(prmt, \"\\\\,\", \",\", rmt_file);\n\t\t\t\tif (*rmt_file != '\\0')\n\t\t\t\t\tstrcat(ag3, rmt_file);\n\t\t\t\telse\n\t\t\t\t\tstrcat(ag3, prmt);\n\t\t\t} else {\n\t\t\t\tfor (dp = ag3, sp = prmt; *sp; dp++, sp++) {\n\t\t\t\t\tif (*sp == ':')\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t*dp = *sp;\n\t\t\t\t}\n\t\t\t\t*dp++ = '.';\n\t\t\t\tstrcpy(dp, owner);\n\t\t\t\tstrcat(dp, \":\");\n\t\t\t\tstrcat(dp, sp + 1);\n\t\t\t}\n#else\n\t\t\t/* using scp/rcp, need to prepend the owner name */\n\t\t\tstrcat(ag3, owner);\n\t\t\tstrcat(ag3, \"@\");\n\t\t\tlen = 
strlen(ag3);\n\t\t\tif (quote_and_copy_white(ag3 + len, prmt, MAXPATHLEN - len) != 0)\n\t\t\t\treturn 1;\n#endif\n\t\t} else {\n#ifndef WIN32\n\t\t\tif (*prmt == '-')\n\t\t\t\tstrcat(ag3, \"./\"); /* prefix '-' with \"./\" */\n#endif\n\t\t\treplace(prmt, \"\\\\,\", \",\", rmt_file);\n\t\t\tif (*rmt_file != '\\0')\n\t\t\t\tstrcat(ag3, rmt_file);\n\t\t\telse\n\t\t\t\tstrcat(ag3, prmt);\n\t\t}\n\t} else { /* in bound (stage-in) file */\n\t\t/* take (remote) source name from request */\n\t\tif (rmtflg) {\n#ifdef WIN32\n\t\t\t/* using rcp, need to prepend the owner name */\n\t\t\tif (is_scp_path() || is_pbs_rcp_path()) {\n\t\t\t\tstrcat(ag2, owner);\n\t\t\t\tstrcat(ag2, \"@\");\n\t\t\t\treplace(prmt, \"\\\\,\", \",\", rmt_file);\n\t\t\t\tif (*rmt_file != '\\0')\n\t\t\t\t\tstrcat(ag2, rmt_file);\n\t\t\t\telse\n\t\t\t\t\tstrcat(ag2, prmt);\n\t\t\t} else {\n\t\t\t\tfor (dp = ag2, sp = prmt; *sp; dp++, sp++) {\n\t\t\t\t\tif (*sp == ':')\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t*dp = *sp;\n\t\t\t\t}\n\t\t\t\t*dp++ = '.';\n\t\t\t\tstrcpy(dp, owner);\n\t\t\t\tstrcat(dp, \":\");\n\t\t\t\tstrcat(dp, sp + 1);\n\t\t\t}\n#else\n\t\t\t/* using scp/rcp, need to prepend the owner name */\n\t\t\tpbs_strncpy(ag2, owner, sizeof(ag2));\n\t\t\tstrcat(ag2, \"@\");\n\t\t\tlen = strlen(ag2);\n\t\t\tif (quote_and_copy_white(ag2 + len, prmt, MAXPATHLEN - len) != 0)\n\t\t\t\treturn 1;\n#endif\n\t\t} else {\n#ifndef WIN32\n\t\t\tif (*src == '-')\n\t\t\t\tstrcpy(ag2, \"./\"); /* prefix '-' with \"./\" */\n#endif\n\t\t\treplace(src, \"\\\\,\", \",\", rmt_file);\n\t\t\tif (*rmt_file != '\\0')\n\t\t\t\tstrcat(ag2, rmt_file);\n\t\t\telse\n\t\t\t\tstrcat(ag2, src);\n\t\t}\n#ifndef WIN32\n\t\tif (*pair->fp_local == '-')\n\t\t\tstrcpy(ag3, \"./\"); /* prefix leading '-' with \"./\" */\n#endif\n\t\treplace(pair->fp_local, \"\\\\,\", \",\", local_file);\n\t\tif (*local_file != '\\0')\n\t\t\tstrcpy(ag3, local_file);\n\t\telse\n\t\t\tpbs_strncpy(ag3, pair->fp_local, sizeof(ag3));\n\t}\n\n#ifndef WIN32\n\tfor (loop = 
1; loop < 5; ++loop) {\n\t\toriginal = 0;\n\t\tif (rmtflg == 0) { /* local copy */\n\t\t\tag0 = pbs_conf.cp_path;\n\t\t\tif (strcmp(ag3, \"/dev/null\") == 0)\n\t\t\t\treturn (0); /* don't need to copy, just return zero */\n\t\t\telse\n\t\t\t\tag1 = \"-rp\";\n\n\t\t\t/* remote, try scp */\n\t\t} else if (pbs_conf.scp_path != NULL && (loop % 2) == 1) {\n\t\t\tag0 = pbs_conf.scp_path;\n\t\t\tif (pbs_conf.scp_args != NULL) {\n\t\t\t\tag1 = pbs_conf.scp_args;\n\t\t\t} else {\n\t\t\t\tag1 = \"-Brvp\";\n\t\t\t}\n\t\t} else {\n\t\t\tag0 = pbs_conf.rcp_path;\n\t\t\tag1 = \"-rp\";\n\t\t}\n\n\t\t/*\n\t\t * There is a problem where scp can return zero indicating success even\n\t\t * though the copy did not work.  In the case of a remote stagein,\n\t\t * where the file already exists, save the ctime of an existing file so\n\t\t * we can check the ctime after the copy to be sure it worked.  ctime\n\t\t * is used instead of mtime because scp may reset mtime.\n\t\t */\n\t\tif ((rmtflg != 0) && (dir == STAGE_DIR_IN)) {\n\t\t\tif (stat(ag3, &sb) != -1)\n\t\t\t\toriginal = sb.st_ctime;\n\t\t}\n\n\t\tDBPRT((\"%s: %s %s %s %s\\n\", __func__, ag0, ag1, ag2, ag3))\n\n\t\tif ((rc = fork()) > 0) {\n\n\t\t\t/* Parent */\n\t\t\tif (cred_pipe != -1) {\n\t\t\t\tif (write(cred_pipe, pwd_buf, cred_len) != cred_len) {\n\t\t\t\t\tlog_err(errno, __func__, \"pipe write\");\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/* wait for copy to complete */\n\t\t\twhile (((i = wait(&rc)) < 0) && (errno == EINTR))\n\t\t\t\t;\n\t\t\tif (i == -1) {\n\t\t\t\trc = (20000 + errno); /* 200xx is error on wait */\n\t\t\t} else if (WIFEXITED(rc)) {\n\t\t\t\tif ((rc = WEXITSTATUS(rc)) == 0) {\n\t\t\t\t\tif ((rmtflg != 0) && (dir == STAGE_DIR_IN)) {\n\t\t\t\t\t\tif ((stat(ag3, &sb) == -1) ||\n\t\t\t\t\t\t    (original == sb.st_ctime))\n\t\t\t\t\t\t\trc = 13;\n\t\t\t\t\t}\n\t\t\t\t\treturn (rc); /* good,  stop now */\n\t\t\t\t}\n\t\t\t} else if (WIFSTOPPED(rc)) {\n\t\t\t\trc = (30000 + WSTOPSIG(rc)); 
/* 300xx is stopped */\n\t\t\t} else if (WIFSIGNALED(rc)) {\n\t\t\t\trc = (40000 + WTERMSIG(rc)); /* 400xx is signaled */\n\t\t\t}\n\n\t\t} else if (rc < 0) {\n\n\t\t\trc = errno + 10000; /* error on fork (100xx), retry */\n\n\t\t} else {\n\n\t\t\tint fd;\n\n\t\t\t/* child - exec the copy command */\n\n\t\t\t(void) close(conn);\n\n\t\t\t/* redirect stderr to make error from rcp available to MOM */\n\t\t\tif ((fd = open(rcperr, O_RDWR | O_CREAT, 0644)) < 0) {\n\t\t\t\t(void) sprintf(log_buffer, \"can't open %s, error = %d\",\n\t\t\t\t\t       rcperr, errno);\n\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_FILE,\n\t\t\t\t\t  LOG_DEBUG, __func__, log_buffer);\n\t\t\t\texit(12);\n\t\t\t};\n\t\t\tif (fd != 2) {\n\t\t\t\t(void) dup2(fd, 2);\n\t\t\t\t(void) close(fd);\n\t\t\t}\n\t\t\t/*\n\t\t\t * In order to fix a timing problem where the copy may have\n\t\t\t * succeeded in less than a second, the child process may have to\n\t\t\t * sleep for 1 second before doing the copy.  This will ensure the\n\t\t\t * ctime will be different. This is needed because the exit value\n\t\t\t * of the copy agent can be zero even when the copy did not work\n\t\t\t * correctly.\n\t\t\t *\n\t\t\t * The value of original will be 0 or the previously existing\n\t\t\t * file's ctime.  If this ctime is \"right now\", then we have to\n\t\t\t * sleep a second so we can tell if the copy worked.  
sleep() can\n\t\t\t * be terminated early if a signal is received, so loop until the\n\t\t\t * current time is different than the initial ctime of the file.\n\t\t\t */\n\t\t\twhile (original == time(NULL)) {\n\t\t\t\tsleep(1);\n\t\t\t}\n\n\t\t\texecl(ag0, ag0, ag1, ag2, ag3, NULL);\n\t\t\tsprintf(log_buffer, \"command: %s %s %s %s execl failed %d\", ag0, ag1, ag2, ag3, errno);\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_FILE, LOG_DEBUG, __func__, log_buffer);\n\t\t\texit(13); /* 13, an unlucky number */\n\t\t}\n\n\t\t/* copy did not work, try again */\n\n\t\tsprintf(log_buffer, \"command: %s %s %s %s status=%d, try=%d\", ag0, ag1, ag2, ag3, rc, loop);\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_FILE, LOG_DEBUG, __func__, log_buffer);\n\t\tif ((loop % 2) == 0) /* don't sleep between scp and rcp */\n\t\t\tsleep(loop / 2 * 10 + 1);\n\t}\n#else\n\tfor (loop = 1; loop < 5; ++loop) {\n\t\toriginal = 0;\n\t\tif (rmtflg == 0) { /* local copy */\n\t\t\tag0 = pbs_conf.cp_path;\n\t\t\tag1 = \"/e/i/q/y\";\n\t\t\t/* remote, try scp */\n\t\t} else if (pbs_conf.scp_path != NULL && (loop % 2) == 1) {\n\t\t\tag0 = pbs_conf.scp_path;\n\t\t\tstruct stat local_sb = {0};\n\t\t\tchar ag0_cpy[MAXPATHLEN + 1] = {'\\0'};\n\t\t\tpbs_strncpy(ag0_cpy, ag0, sizeof(ag0_cpy));\n\t\t\tif (stat_uncpath(ag0_cpy, &local_sb) == -1) {\n\t\t\t\tlog_errf(errno, __func__, \"%s\", ag0_cpy);\n\t\t\t\t/* let's try to copy using pbs_rcp now */\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tif (pbs_conf.scp_args != NULL) {\n\t\t\t\tag1 = pbs_conf.scp_args;\n\t\t\t} else {\n\t\t\t\tag1 = \"-Brv\";\n\t\t\t}\n\t\t} else {\n\t\t\tag0 = pbs_conf.rcp_path;\n\t\t\tif ((cred_buf != NULL) && (cred_len != 0)) {\n\t\t\t\tint clen;\n\t\t\t\tint wrote;\n\n\t\t\t\tag1 = \"-E -r\";\n\n\t\t\t\tif (CreatePipe(&readp, &writep, &sa, 0) == 0) {\n\t\t\t\t\tlog_err(-1, __func__, \"Unable to create pipe\");\n\t\t\t\t\trc = 22;\n\t\t\t\t\tgoto sys_copy_end;\n\t\t\t\t}\n\t\t\t\tclen = 
sizeof(size_t);\n\t\t\t\tWriteFile(writep, &cred_len, clen, &wrote, NULL);\n\n\t\t\t\tif (wrote != clen) {\n\t\t\t\t\tlog_errf(-1, __func__, \"sending cred_len: wrote %d should be %d\", wrote, clen);\n\t\t\t\t\trc = 22;\n\t\t\t\t\tgoto sys_copy_end;\n\t\t\t\t}\n\t\t\t\tWriteFile(writep, cred_buf, cred_len, &wrote, NULL);\n\t\t\t\tif (wrote != cred_len) {\n\t\t\t\t\tlog_errf(-1, __func__, \"sending cred_buf: wrote %d should be %d\", wrote, cred_len);\n\t\t\t\t\trc = 22;\n\t\t\t\t\tgoto sys_copy_end;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tag1 = \"-r\";\n\t\t\t}\n\t\t}\n\n\t\t/*\n\t\t ** There is a problem where scp can return zero\n\t\t ** indicating success even though the copy did\n\t\t ** not work.  In the case of a remote stagein,\n\t\t ** save the mtime of an existing file so we can\n\t\t ** check the mtime after the copy to be sure it\n\t\t ** worked.\n\t\t */\n\t\tif ((rmtflg != 0) && (dir == STAGE_DIR_IN)) {\n\t\t\tif (stat_uncpath(ag3, &sb) != -1)\n\t\t\t\toriginal = sb.st_mtime;\n\t\t}\n\n\t\t/* redirect stderr to make error from rcp available to MOM */\n\t\tif ((fd = open(rcperr, O_RDWR | O_CREAT, 0644)) < 0) {\n\t\t\tlog_errf(errno, __func__, \"can't open %s\", rcperr);\n\t\t\trc = 12;\n\t\t} else {\n\t\t\terrno = 0;\n\n\t\t\tif (rmtflg == 0) {\n\t\t\t\t/*\n\t\t\t\t *  Xcopy Implementation.\n\t\t\t\t */\n\t\t\t\treplace(ag2, \"\\\\ \", \" \", str_buf);\n\t\t\t\tsnprintf(ag2_path, sizeof(ag2_path),\n\t\t\t\t\t \"%s\", str_buf);\n\t\t\t\tfix_path(ag2_path, 3);\n\n\t\t\t\tif (stat_uncpath(ag2, &sb) != -1) {\n\t\t\t\t\tif (S_ISREG(sb.st_mode)) {\n\t\t\t\t\t\treplace(ag3, \"\\\\ \", \" \", str_buf);\n\t\t\t\t\t\tsnprintf(ag3_path, sizeof(ag3_path), \"%s\",\n\t\t\t\t\t\t\t str_buf);\n\t\t\t\t\t\tfix_path(ag3_path, 3);\n\n\t\t\t\t\t\t/* if file, use copy with /y option */\n\t\t\t\t\t\t/* the option /y will suppress any file\n\t\t\t\t\t\t overwrite messages */\n\t\t\t\t\t\tag0 = \"copy\";\n\t\t\t\t\t\tag1 = \"/y\";\n\t\t\t\t\t\tsnprintf(cmd_line, 
sizeof(cmd_line),\n\t\t\t\t\t\t\t \"cmd /c %s %s \\\"%s\\\" \\\"%s\\\"\",\n\t\t\t\t\t\t\t ag0, ag1, ag2_path, ag3_path);\n\t\t\t\t\t} else {\n\t\t\t\t\t\tchar ag3_tmp[MAXPATHLEN + 1];\n\t\t\t\t\t\tchar *ss, *ts;\n\t\t\t\t\t\tsize_t len;\n\n\t\t\t\t\t\tpbs_strncpy(ag3_tmp, ag3, sizeof(ag3_tmp));\n\t\t\t\t\t\tlen = strlen(ag3_tmp);\n\n\t\t\t\t\t\t/*\n\t\t\t\t\t\t * Handle case where destination path has\n\t\t\t\t\t\t * a trailing slash.\n\t\t\t\t\t\t */\n\t\t\t\t\t\tif (len > 0 && ag3_tmp[len - 1] == '/') {\n\t\t\t\t\t\t\tlen--;\n\t\t\t\t\t\t\tag3_tmp[len] = '\\0';\n\t\t\t\t\t\t}\n\t\t\t\t\t\t/*\n\t\t\t\t\t\t * Compare the last component of the source\n\t\t\t\t\t\t * to the last component of the target to\n\t\t\t\t\t\t * see if they match.\n\t\t\t\t\t\t */\n\t\t\t\t\t\tss = strrchr(ag2, '/');\n\t\t\t\t\t\tss = ss ? ss + 1 : ag2;\n\t\t\t\t\t\tts = strrchr(ag3_tmp, '/');\n\t\t\t\t\t\tts = ts ? ts + 1 : ag3_tmp;\n\n\t\t\t\t\t\tif (strcmp(ss, ts) != 0) {\n\t\t\t\t\t\t\t/*\n\t\t\t\t\t\t\t * The last path components are different\n\t\t\t\t\t\t\t * so add last component of source directory\n\t\t\t\t\t\t\t * to target since xcopy will not do it.\n\t\t\t\t\t\t\t */\n\t\t\t\t\t\t\tstrncat(ag3_tmp, \"/\",\n\t\t\t\t\t\t\t\tsizeof(ag3_tmp) - len - 1);\n\t\t\t\t\t\t\tstrncat(ag3_tmp, ss,\n\t\t\t\t\t\t\t\tsizeof(ag3_tmp) - len - 2);\n\t\t\t\t\t\t}\n\t\t\t\t\t\treplace(ag3_tmp, \"\\\\ \", \" \", str_buf);\n\t\t\t\t\t\tsnprintf(ag3_path, sizeof(ag3_path), \"%s\",\n\t\t\t\t\t\t\t str_buf);\n\t\t\t\t\t\tfix_path(ag3_path, 3);\n\t\t\t\t\t\tsnprintf(cmd_line, sizeof(cmd_line),\n\t\t\t\t\t\t\t \"cmd /c %s %s \\\"%s\\\" \\\"%s\\\"\",\n\t\t\t\t\t\t\t ag0, ag1, ag2_path, ag3_path);\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tlog_errf(errno, __func__, \"%s\", ag2);\n\t\t\t\t\trc = errno;\n\t\t\t\t\tgoto sys_copy_end;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tsi.dwFlags = STARTF_USESTDHANDLES;\n\n\t\t\t\tif (readp != INVALID_HANDLE_VALUE)\n\t\t\t\t\tsi.hStdInput = 
readp;\n\t\t\t\telse\n\t\t\t\t\tsi.hStdInput = INVALID_HANDLE_VALUE;\n\n\t\t\t\tsi.hStdOutput = INVALID_HANDLE_VALUE;\n\t\t\t\tsi.hStdError = (HANDLE) _get_osfhandle(fd);\n\n\t\t\t\tif (ag2[1] == ':') { /* has drive info */\n\n\t\t\t\t\tsprintf(wdir, \"%c:\\\\\", toupper(ag2[0]));\n\t\t\t\t\tif (_getdcwd(toupper(ag2[0]) - 'A' + 1, wdir, sizeof(wdir)) == NULL) {\n\t\t\t\t\t\tlog_errf(errno, __func__, \"Failed to get the full path of the current working directory on the specified drive %s\", wdir);\n\t\t\t\t\t}\n\t\t\t\t\tstrcpy(ag2_path, replace_space(ag2 + 2, \"\"));\n\t\t\t\t\tfix_path(ag2_path, 3);\n\t\t\t\t} else if (strchr(ag2, ':')) {\n\t\t\t\t\t/* replace \"\\ \" with \" \" so \"\\\" not forwarded */\n\t\t\t\t\treplace(ag2, \"\\\\ \", \" \", ag2_path);\n\t\t\t\t\tfix_path(ag2_path, 1);\n\t\t\t\t\tsprintf(ag2_path, \"\\\"%s\\\"\",\n\t\t\t\t\t\treplace_space(ag2_path, \"\\\\ \"));\n\t\t\t\t} else {\n\t\t\t\t\tsprintf(ag2_path, \"\\\"%s\\\"\", ag2);\n\t\t\t\t\tfix_path(ag2_path, 3);\n\t\t\t\t}\n\n\t\t\t\tif (ag3[1] == ':') { /* has drive info */\n\n\t\t\t\t\tsprintf(wdir, \"%c:\\\\\", toupper(ag3[0]));\n\n\t\t\t\t\tstrcpy(ag3_path, replace_space(ag3 + 2, \"\"));\n\t\t\t\t\tfix_path(ag3_path, 3);\n\t\t\t\t} else if (strchr(ag3, ':')) {\n\t\t\t\t\t/* replace \"\\ \" with \" \" so \"\\\" not forwarded */\n\t\t\t\t\treplace(ag3, \"\\\\ \", \" \", ag3_path);\n\t\t\t\t\tfix_path(ag3_path, 1);\n\t\t\t\t\tsprintf(ag3_path, \"\\\"%s\\\"\",\n\t\t\t\t\t\treplace_space(ag3_path, \"\\\\ \"));\n\t\t\t\t} else {\n\t\t\t\t\tsprintf(ag3_path, \"\\\"%s\\\"\", ag3);\n\t\t\t\t\tfix_path(ag3_path, 3);\n\t\t\t\t}\n\n\t\t\t\tsprintf(cmd_line, \"%s %s %s %s\", ag0, ag1,\n\t\t\t\t\tag2_path, ag3_path);\n\t\t\t}\n\n\t\t\t/*\n\t\t\t ** In order to fix a timing problem where the copy may\n\t\t\t ** have succeeded in less than a second, the child\n\t\t\t ** process may have to sleep for 1 second before doing\n\t\t\t ** the copy.  
This will ensure the mtime will be different.\n\t\t\t ** This is needed because the exit value of\n\t\t\t ** the copy agent can be zero even when the copy\n\t\t\t ** did not work correctly.\n\t\t\t **\n\t\t\t ** The value of original will be 0 or the previously\n\t\t\t ** existing file's mtime.  If this mtime is \"right now\",\n\t\t\t ** then we have to sleep a second so we can tell if\n\t\t\t ** the copy worked.\n\t\t\t */\n\t\t\tif (original == time(NULL))\n\t\t\t\tsleep(1);\n\t\t\t/* Re-initialize errno to zero */\n\t\t\terrno = 0;\n\n\t\t\t/* do we need 'AsUser' here? */\n\t\t\tif ((pw->pw_userlogin == INVALID_HANDLE_VALUE) && (strcmpi(owner, getlogin()) == 0)) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"Job %s: CreateProcess(%s) under acct %s wdir=%s\",\n\t\t\t\t\t jobid, cmd_line, owner, wdir);\n\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG, __func__, log_buffer);\n\t\t\t\trc = CreateProcess(NULL, cmd_line,\n\t\t\t\t\t\t   NULL, NULL, TRUE, flags,\n\t\t\t\t\t\t   NULL, wdir, &si, &pi);\n\t\t\t\tif (rc == 0) {\n\t\t\t\t\terrno = GetLastError();\n\t\t\t\t\tlog_err(-1, __func__, \"CreateProcess failed\");\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"Job %s: CreateProcessAsUser(%d, %s) under acct %s wdir=%s\",\n\t\t\t\t\t jobid, pw->pw_userlogin, cmd_line, getlogin(), wdir);\n\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG, __func__, log_buffer);\n\t\t\t\trc = CreateProcessAsUser(pw->pw_userlogin, NULL, cmd_line,\n\t\t\t\t\t\t\t NULL, NULL, TRUE, flags,\n\t\t\t\t\t\t\t NULL, wdir, &si, &pi);\n\t\t\t\tif (rc == 0) {\n\t\t\t\t\terrno = GetLastError();\n\t\t\t\t\tlog_err(-1, __func__, \"CreateProcessAsUser failed\");\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (rc == 0) {\n\t\t\t\terrno = GetLastError();\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"Job %s: process creation failed errno %lu\", jobid, errno);\n\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t}\n\n\t\t\tclose(fd);\n\t\t\tfd = -1; 
/* already done the close */\n\n\t\t\tif (errno != 0) {\n\t\t\t\trc = errno + 10000; /* error on fork (100xx), retry */\n\t\t\t} else {\n\t\t\t\tint ret = 0;\n\t\t\t\tret = WaitForSingleObject(pi.hProcess, INFINITE);\n\t\t\t\tif (ret != WAIT_OBJECT_0) {\n\t\t\t\t\tif (ret != WAIT_FAILED) {\n\t\t\t\t\t\tsprintf(log_buffer, \"Job %s: WaitForSingleObject failed with status=%d\", jobid, ret);\n\t\t\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_FILE, LOG_ERR, __func__, log_buffer);\n\t\t\t\t\t} else {\n\t\t\t\t\t\tsprintf(log_buffer, \"Job %s: WaitForSingleObject failed with status=%d\", jobid, ret);\n\t\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (!GetExitCodeProcess(pi.hProcess, &rc)) {\n\t\t\t\t\tsprintf(log_buffer, \"Job %s: GetExitCodeProcess failed\", jobid);\n\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t}\n\t\t\t\tif (rc != 0) {\n\t\t\t\t\tsprintf(log_buffer, \"Job %s: GetExitCodeProcess return code=%d\", jobid, rc);\n\t\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR, __func__, log_buffer);\n\t\t\t\t}\n\t\t\t\tCloseHandle(pi.hProcess);\n\t\t\t\tCloseHandle(pi.hThread);\n\n\t\t\t\tif (errno != 0) {\n\t\t\t\t\trc = (20000 + errno); /* 200xx is error on wait */\n\t\t\t\t} else {\n\t\t\t\t\tif ((rmtflg != 0) && (dir == STAGE_DIR_IN)) {\n\t\t\t\t\t\tif ((stat_uncpath(ag3, &sb) == -1) ||\n\t\t\t\t\t\t    (original == sb.st_mtime))\n\t\t\t\t\t\t\trc = 13;\n\t\t\t\t\t}\n\t\t\t\t\tgoto sys_copy_end;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t/* copy did not work, try again */\n\t\t(void) sprintf(log_buffer,\n\t\t\t       \"command: %s %s %s %s status=%d, try=%d\",\n\t\t\t       ag0, ag1, ag2, ag3, rc, loop);\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_FILE, LOG_DEBUG,\n\t\t\t  __func__, log_buffer);\n\t\tif ((loop % 2) == 0) /* don't sleep between scp and rcp */\n\t\t\tSleep(1000 * (loop / 2 * 10 + 1));\n\t}\n\nsys_copy_end:\n\tif (readp != INVALID_HANDLE_VALUE) {\n\t\tCloseHandle(readp);\n\t}\n\tif (writep != 
INVALID_HANDLE_VALUE) {\n\t\tCloseHandle(writep);\n\t}\n\n\tif (fd != -1)\n\t\t(void) close(fd);\n#endif\n\treturn (rc); /* tried a bunch of times, just give up */\n}\n\n#ifdef WIN32\n/**\n * @brief\n *\tfree_pcphosts - Free allocated cphosts struct (i.e. pcphosts)\n *\n * @return void\n *\n */\nstatic void\nfree_pcphosts(void)\n{\n\tint nh = 0;\n\n\tif (pcphosts != NULL && cphosts_num > 0) {\n\t\tfor (nh = 0; nh < cphosts_num; nh++) {\n\t\t\tif ((pcphosts + nh)->cph_hosts != NULL) {\n\t\t\t\tfree((pcphosts + nh)->cph_hosts);\n\t\t\t\t(pcphosts + nh)->cph_hosts = NULL;\n\t\t\t}\n\n\t\t\tif ((pcphosts + nh)->cph_from != NULL) {\n\t\t\t\tfree((pcphosts + nh)->cph_from);\n\t\t\t\t(pcphosts + nh)->cph_from = NULL;\n\t\t\t}\n\n\t\t\tif ((pcphosts + nh)->cph_to != NULL) {\n\t\t\t\tfree((pcphosts + nh)->cph_to);\n\t\t\t\t(pcphosts + nh)->cph_to = NULL;\n\t\t\t}\n\t\t}\n\n\t\tfree(pcphosts);\n\t\tpcphosts = NULL;\n\t\tcphosts_num = 0;\n\t}\n}\n\n/**\n * @brief\n *\trecv_pcphosts - Receive cphosts struct through stdin sent by parent MoM\n *\n * @return\tint\n * @retval\t0 - on Error\n * @retval\t1 - on Success\n *\n * @note\tThis function will put received info in\n *\t\tglobal struct pcphosts and cphosts_num variable\n *\n */\nint\nrecv_pcphosts(void)\n{\n\tchar buf[CPY_PIPE_BUFSIZE] = {'\\0'};\n\tint nh = 0;\n\n\tif (fgets(buf, sizeof(int), stdin) == NULL) {\n\t\tlog_err(-1, __func__, \"Failed to read cphosts_num\");\n\t\treturn 0;\n\t} else {\n\t\tbuf[strlen(buf) - 1] = '\\0';\n\t\tcphosts_num = (unsigned) atoi(buf);\n\t}\n\n\tif (cphosts_num <= 0) {\n\t\treturn 1;\n\t} else {\n\t\tif ((pcphosts = malloc(cphosts_num * sizeof(struct cphosts))) == NULL) {\n\t\t\tlog_err(errno, __func__, \"malloc failed\");\n\t\t\treturn 0;\n\t\t} else {\n\t\t\tmemset(pcphosts, 0, (cphosts_num * sizeof(struct cphosts)));\n\t\t}\n\t}\n\n\tfor (nh = 0; nh < cphosts_num; nh++) {\n\n\t\tif (fgets(buf, sizeof(buf), stdin) == NULL) {\n\t\t\tfree_pcphosts();\n\t\t\tlog_err(-1, __func__, \"Failed 
to read cph_hosts\");\n\t\t\treturn 0;\n\t\t} else {\n\t\t\tbuf[strlen(buf) - 1] = '\\0';\n\t\t\tif (((pcphosts + nh)->cph_hosts = strdup(buf)) == NULL) {\n\t\t\t\tlog_err(-1, __func__, \"Failed to allocate data to cph_hosts\");\n\t\t\t\tfree_pcphosts();\n\t\t\t\treturn 0;\n\t\t\t} else\n\t\t\t\tmemset(buf, 0, sizeof(buf));\n\t\t}\n\n\t\tif (fgets(buf, sizeof(buf), stdin) == NULL) {\n\t\t\tfree_pcphosts();\n\t\t\tlog_err(-1, __func__, \"Failed to read cph_from\");\n\t\t\treturn 0;\n\t\t} else {\n\t\t\tbuf[strlen(buf) - 1] = '\\0';\n\t\t\tif (((pcphosts + nh)->cph_from = strdup(buf)) == NULL) {\n\t\t\t\tlog_err(-1, __func__, \"Failed to allocate data to cph_from\");\n\t\t\t\tfree_pcphosts();\n\t\t\t\treturn 0;\n\t\t\t} else\n\t\t\t\tmemset(buf, 0, sizeof(buf));\n\t\t}\n\n\t\tif (fgets(buf, sizeof(buf), stdin) == NULL) {\n\t\t\tlog_err(-1, __func__, \"Failed to read cph_to\");\n\t\t\tfree_pcphosts();\n\t\t\treturn 0;\n\t\t} else {\n\t\t\tbuf[strlen(buf) - 1] = '\\0';\n\t\t\tif (((pcphosts + nh)->cph_to = strdup(buf)) == NULL) {\n\t\t\t\tlog_err(-1, __func__, \"Failed to allocate data to cph_to\");\n\t\t\t\tfree_pcphosts();\n\t\t\t\treturn 0;\n\t\t\t} else\n\t\t\t\tmemset(buf, 0, sizeof(buf));\n\t\t}\n\t}\n\treturn 1;\n}\n\n/**\n * @brief\n *\tfree_rq_cpyfile_cred - Free allocated rq_cpyfile and cred info (if any)\n *\n * @param[in] pcf - pointer to already allocated rq_cpyfile struct\n *\n * @return void\n *\n */\nstatic void\nfree_rq_cpyfile_cred(struct rq_cpyfile *pcf)\n{\n\tif (pcf != NULL) {\n\t\tstruct rqfpair *ppair = NULL;\n\n\t\twhile ((ppair = (struct rqfpair *) GET_NEXT(pcf->rq_pair)) != NULL) {\n\t\t\tdelete_link(&ppair->fp_link);\n\t\t\tif (ppair->fp_local)\n\t\t\t\t(void) free(ppair->fp_local);\n\t\t\tif (ppair->fp_rmt)\n\t\t\t\t(void) free(ppair->fp_rmt);\n\t\t\t(void) free(ppair);\n\t\t}\n\t}\n\n\tif (cred_buf != NULL) {\n\t\tmemset(cred_buf, 0, cred_len);\n\t\tfree(cred_buf);\n\t\tcred_buf = NULL;\n\t\tcred_len = 0;\n\t}\n}\n\n/**\n * @brief\n 
*\trecv_rq_cpyfile_cred - Receive rq_cpyfile struct and cred_info (if any) through stdin sent by parent MoM\n *\n * @param[in/out] pcf - pointer to already allocated rq_cpyfile struct\n *\n * @return\tint\n * @retval\t0 - on Error\n * @retval\t1 - on Success\n *\n * @note\tThis function will put received cred info (if any) in\n *\t\tglobal variable cred_buf and cred_len\n *\n */\nint\nrecv_rq_cpyfile_cred(struct rq_cpyfile *pcf)\n{\n\tint pair_ct = 0;\n\tstruct rqfpair *ppair = NULL;\n\tchar buf[CPY_PIPE_BUFSIZE] = {'\\0'};\n\n\tCLEAR_HEAD(pcf->rq_pair);\n\n\tif (fgets(pcf->rq_jobid, PBS_MAXJOBNAME, stdin) == NULL) {\n\t\tlog_err(-1, __func__, \"Failed to read rq_jobid\");\n\t\treturn 0;\n\t} else {\n\t\tpcf->rq_jobid[strlen(pcf->rq_jobid) - 1] = '\\0';\n\t}\n\n\tif (fgets(pcf->rq_owner, PBS_MAXUSER, stdin) == NULL) {\n\t\tlog_err(-1, __func__, \"Failed to read rq_owner\");\n\t\treturn 0;\n\t} else {\n\t\tpcf->rq_owner[strlen(pcf->rq_owner) - 1] = '\\0';\n\t}\n\n\tif (fgets(pcf->rq_user, PBS_MAXUSER, stdin) == NULL) {\n\t\tlog_err(-1, __func__, \"Failed to read rq_user\");\n\t\treturn 0;\n\t} else {\n\t\tpcf->rq_user[strlen(pcf->rq_user) - 1] = '\\0';\n\t}\n\n\tif (fgets(pcf->rq_group, PBS_MAXGRPN, stdin) == NULL) {\n\t\tlog_err(-1, __func__, \"Failed to read rq_group\");\n\t\treturn 0;\n\t} else {\n\t\tpcf->rq_group[strlen(pcf->rq_group) - 1] = '\\0';\n\t}\n\n\tif (fgets(buf, sizeof(int), stdin) == NULL) {\n\t\tlog_err(-1, __func__, \"Failed to read rq_dir\");\n\t\treturn 0;\n\t} else {\n\t\tbuf[strlen(buf) - 1] = '\\0';\n\t\tpcf->rq_dir = (unsigned) atoi(buf);\n\t}\n\n\tif (fgets(buf, sizeof(int), stdin) == NULL) {\n\t\tlog_err(-1, __func__, \"Failed to read pair_ct\");\n\t\treturn 0;\n\t} else {\n\t\tbuf[strlen(buf) - 1] = '\\0';\n\t\tpair_ct = (unsigned) atoi(buf);\n\t}\n\n\twhile (pair_ct--) {\n\n\t\tppair = (struct rqfpair *) malloc(sizeof(struct rqfpair));\n\t\tif (ppair == NULL) {\n\t\t\tlog_err(-1, __func__, \"rqfpair: malloc 
failed\");\n\t\t\tfree_rq_cpyfile_cred(pcf);\n\t\t\treturn 0;\n\t\t}\n\t\tCLEAR_LINK(ppair->fp_link);\n\t\tppair->fp_flag = 0;\n\t\tppair->fp_local = NULL;\n\t\tppair->fp_rmt = NULL;\n\n\t\tif (fgets(buf, sizeof(int), stdin) == NULL) {\n\t\t\tfree(ppair);\n\t\t\tppair = NULL;\n\t\t\tfree_rq_cpyfile_cred(pcf);\n\t\t\tlog_err(-1, __func__, \"Failed to read fp_flag\");\n\t\t\treturn 0;\n\t\t} else {\n\t\t\tbuf[strlen(buf) - 1] = '\\0';\n\t\t\tppair->fp_flag = atoi(buf);\n\t\t}\n\n\t\tif ((ppair->fp_local = (char *) malloc(MAXPATHLEN + 1)) == NULL) {\n\t\t\tfree(ppair);\n\t\t\tppair = NULL;\n\t\t\tfree_rq_cpyfile_cred(pcf);\n\t\t\tlog_err(-1, __func__, \"Failed to read fp_local\");\n\t\t\treturn 0;\n\t\t} else {\n\t\t\tmemset(ppair->fp_local, 0, MAXPATHLEN + 1);\n\t\t\tif (fgets(ppair->fp_local, MAXPATHLEN, stdin) == NULL) {\n\t\t\t\tfree(ppair->fp_local);\n\t\t\t\tppair->fp_local = NULL;\n\t\t\t\tfree(ppair);\n\t\t\t\tppair = NULL;\n\t\t\t\tfree_rq_cpyfile_cred(pcf);\n\t\t\t\treturn 0;\n\t\t\t} else {\n\t\t\t\tppair->fp_local[strlen(ppair->fp_local) - 1] = '\\0';\n\t\t\t}\n\t\t}\n\n\t\tif ((ppair->fp_rmt = (char *) malloc(MAXPATHLEN + 1)) == NULL) {\n\t\t\tfree(ppair->fp_local);\n\t\t\tppair->fp_local = NULL;\n\t\t\tfree(ppair);\n\t\t\tppair = NULL;\n\t\t\tfree_rq_cpyfile_cred(pcf);\n\t\t\tlog_err(errno, __func__, \"fp_rmt: malloc failed\");\n\t\t\treturn 0;\n\t\t} else {\n\t\t\tmemset(ppair->fp_rmt, 0, MAXPATHLEN + 1);\n\t\t\tif (fgets(ppair->fp_rmt, MAXPATHLEN, stdin) == NULL) {\n\t\t\t\tfree(ppair->fp_rmt);\n\t\t\t\tppair->fp_rmt = NULL;\n\t\t\t\tfree(ppair->fp_local);\n\t\t\t\tppair->fp_local = NULL;\n\t\t\t\tfree(ppair);\n\t\t\t\tppair = NULL;\n\t\t\t\tfree_rq_cpyfile_cred(pcf);\n\t\t\t\tlog_err(-1, __func__, \"Failed to read fp_rmt\");\n\t\t\t\treturn 0;\n\t\t\t} else {\n\t\t\t\tppair->fp_rmt[strlen(ppair->fp_rmt) - 1] = '\\0';\n\t\t\t}\n\t\t}\n\t\tappend_link(&pcf->rq_pair, &ppair->fp_link, ppair);\n\t}\n\n\tif (fgets(buf, sizeof(buf), stdin) == NULL) 
{\n\t\tfree_rq_cpyfile_cred(pcf);\n\t\tlog_err(-1, __func__, \"Failed to read pcf\");\n\t\treturn 0;\n\t} else {\n\t\tbuf[strlen(buf) - 1] = '\\0';\n\t\tif (strcmp(buf, \"cred_info\") == 0) {\n\t\t\tDWORD a_cnt = 0;\n\n\t\t\tif (fgets(buf, sizeof(buf), stdin) == NULL) {\n\t\t\t\tfree_rq_cpyfile_cred(pcf);\n\t\t\t\tlog_err(-1, __func__, \"Failed to read cred_length\");\n\t\t\t\treturn 0;\n\t\t\t} else {\n\t\t\t\tbuf[strlen(buf) - 1] = '\\0';\n\t\t\t\tcred_len = atoi(buf);\n\t\t\t}\n\n\t\t\tif (fgets(buf, sizeof(buf), stdin) == NULL) {\n\t\t\t\tfree_rq_cpyfile_cred(pcf);\n\t\t\t\tlog_err(-1, __func__, \"Failed to read cred_buf\");\n\t\t\t\treturn 0;\n\t\t\t} else {\n\t\t\t\tbuf[strlen(buf) - 1] = '\\0';\n\t\t\t\tcred_buf = NULL;\n\t\t\t\tif (decode_from_base64(buf, &cred_buf, &a_cnt)) {\n\t\t\t\t\tsprintf(log_buffer, \"Failed to decode buf: %s, len: %d\", buf, strlen(buf));\n\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t\treturn 0;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (cred_len != a_cnt) {\n\t\t\t\tsprintf(log_buffer, \"Data mismatch, cred_len: %d, decoded_cred_len: %d\", cred_len, a_cnt);\n\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\tfree_rq_cpyfile_cred(pcf);\n\t\t\t\tfree(cred_buf);\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t} else if (strcmp(buf, \"no_cred_info\") == 0) {\n\t\t\tcred_buf = NULL;\n\t\t\tcred_len = 0;\n\t\t}\n\t}\n\treturn 1;\n}\n\n/**\n * @brief\n *\tsend_pcphosts - Send cphosts structure (information about usecp from mom config file)\n *\tand cphosts_num (no. 
of usecp entries in cphosts struct) to\n *\tgiven pipe handles <pio>\n *\n * @param[in]\tpio      -\tpio_handles struct which includes\n *\t\t\t\thandle to pipe on which info will be sent\n * @param[in]\tpcphosts -\tpointer to cphosts struct which will be sent to pipe\n *\n * @return\tvoid\n *\n */\nvoid\nsend_pcphosts(pio_handles *pio, struct cphosts *pcphosts)\n{\n\tchar buf[CPY_PIPE_BUFSIZE + 1] = {'\\0'};\n\tint nh = 0;\n\n\tif ((cphosts_num <= 0) || (pcphosts == NULL))\n\t\t/* nothing to send, just return */\n\t\treturn;\n\n\tsnprintf(buf, sizeof(buf) - 1, \"%s\\n\", \"pcphosts=\");\n\tcheck_err(__func__, buf, win_pwrite(pio, buf, strlen(buf)));\n\n\tsnprintf(buf, sizeof(buf) - 1, \"%d\\n\", cphosts_num);\n\tcheck_err(__func__, buf, win_pwrite(pio, buf, strlen(buf)));\n\n\tfor (nh = 0; nh < cphosts_num; nh++) {\n\t\tsnprintf(buf, sizeof(buf) - 1, \"%s\\n\", (pcphosts + nh)->cph_hosts);\n\t\tcheck_err(__func__, buf, win_pwrite(pio, buf, strlen(buf)));\n\n\t\tsnprintf(buf, sizeof(buf) - 1, \"%s\\n\", (pcphosts + nh)->cph_from);\n\t\tcheck_err(__func__, buf, win_pwrite(pio, buf, strlen(buf)));\n\n\t\tsnprintf(buf, sizeof(buf) - 1, \"%s\\n\", (pcphosts + nh)->cph_to);\n\t\tcheck_err(__func__, buf, win_pwrite(pio, buf, strlen(buf)));\n\t}\n}\n\n/**\n * @brief\n *\tsend_rq_cpyfile_cred - Send given rq_cpyfile structure and cred info (if any) to\n *\tgiven pipe handles <pio>\n *\n * @param[in]\tpio\t-\tpio_handles struct which includes\n *\t\t\t\thandle to pipe on which info will be sent\n * @param[in]\tpcf\t-\tpointer to rq_cpyfile struct which will be sent to pipe\n *\n * @return\tint\n * @retval\t0 - on Error\n * @retval\t1 - on Success\n *\n * @note\tThis function will send cred info from global variables cred_buf and cred_len\n *\n */\nint\nsend_rq_cpyfile_cred(pio_handles *pio, struct rq_cpyfile *pcf)\n{\n\tchar buf[CPY_PIPE_BUFSIZE + 1] = {'\\0'};\n\tint pair_ct = 0;\n\tstruct rqfpair *ppair = NULL;\n\n\tif ((pio == NULL) || (pcf == NULL)) {\n\t\treturn 
0;\n\t}\n\n\tppair = (struct rqfpair *) GET_NEXT(pcf->rq_pair);\n\twhile (ppair) {\n\t\t++pair_ct;\n\t\tppair = (struct rqfpair *) GET_NEXT(ppair->fp_link);\n\t}\n\n\tsnprintf(buf, sizeof(buf) - 1, \"%s\\n\", \"rq_cpyfile=\");\n\tcheck_err(__func__, buf, win_pwrite(pio, buf, strlen(buf)));\n\n\tsnprintf(buf, sizeof(buf) - 1, \"%s\\n\", pcf->rq_jobid);\n\tcheck_err(__func__, buf, win_pwrite(pio, buf, strlen(buf)));\n\n\tsnprintf(buf, sizeof(buf) - 1, \"%s\\n\", pcf->rq_owner);\n\tcheck_err(__func__, buf, win_pwrite(pio, buf, strlen(buf)));\n\n\tsnprintf(buf, sizeof(buf) - 1, \"%s\\n\", pcf->rq_user);\n\tcheck_err(__func__, buf, win_pwrite(pio, buf, strlen(buf)));\n\n\tsnprintf(buf, sizeof(buf) - 1, \"%s\\n\", pcf->rq_group);\n\tcheck_err(__func__, buf, win_pwrite(pio, buf, strlen(buf)));\n\n\tsnprintf(buf, sizeof(buf) - 1, \"%d\\n\", pcf->rq_dir);\n\tcheck_err(__func__, buf, win_pwrite(pio, buf, strlen(buf)));\n\n\tsnprintf(buf, sizeof(buf) - 1, \"%d\\n\", pair_ct);\n\tcheck_err(__func__, buf, win_pwrite(pio, buf, strlen(buf)));\n\n\tppair = (struct rqfpair *) GET_NEXT(pcf->rq_pair);\n\twhile (ppair) {\n\t\tsnprintf(buf, sizeof(buf) - 1, \"%d\\n\", ppair->fp_flag);\n\t\tcheck_err(__func__, buf, win_pwrite(pio, buf, strlen(buf)));\n\n\t\tsnprintf(buf, sizeof(buf) - 1, \"%s\\n\", ppair->fp_local);\n\t\tcheck_err(__func__, buf, win_pwrite(pio, buf, strlen(buf)));\n\n\t\tsnprintf(buf, sizeof(buf) - 1, \"%s\\n\", ppair->fp_rmt);\n\t\tcheck_err(__func__, buf, win_pwrite(pio, buf, strlen(buf)));\n\n\t\tppair = (struct rqfpair *) GET_NEXT(ppair->fp_link);\n\t}\n\n\tif (cred_buf != NULL && cred_len != 0) {\n\t\tchar *str = NULL;\n\t\tsnprintf(buf, sizeof(buf) - 1, \"cred_info\\n\");\n\t\tcheck_err(__func__, buf, win_pwrite(pio, buf, strlen(buf)));\n\n\t\tsnprintf(buf, sizeof(buf) - 1, \"%d\\n\", cred_len);\n\t\tcheck_err(__func__, buf, win_pwrite(pio, buf, strlen(buf)));\n\n\t\tif (encode_to_base64(cred_buf, cred_len, &str)) {\n\t\t\tsprintf(log_buffer, \"Failed to encode 
cred_buf: %s, len: %d\", cred_buf, cred_len);\n\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\treturn 0;\n\t\t}\n\t\tsnprintf(buf, sizeof(buf) - 1, \"%s\\n\", str);\n\t\tfree(str);\n\t\tcheck_err(__func__, buf, win_pwrite(pio, buf, strlen(buf)));\n\t} else {\n\t\tsnprintf(buf, sizeof(buf) - 1, \"no_cred_info\\n\");\n\t\tcheck_err(__func__, buf, win_pwrite(pio, buf, strlen(buf)));\n\t}\n\treturn 1;\n}\n#endif\n"
  },
  {
    "path": "src/resmom/start_exec.c",
    "content": "\t/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <dirent.h>\n#include <ctype.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <pwd.h>\n#include <grp.h>\n#include <string.h>\n#include <limits.h>\n#include <assert.h>\n#include <signal.h>\n#include <termios.h>\n#include <sys/param.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <sys/time.h>\n#include <sys/resource.h>\n#include <sys/socket.h>\n#include <sys/wait.h>\n#include <netinet/in.h>\n#include <fcntl.h>\n\n#if defined(__osf__)\n#include <stropts.h>\n#endif\n\n#include \"libpbs.h\"\n#include \"portability.h\"\n#include \"list_link.h\"\n#include \"server_limits.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"job.h\"\n#include \"log.h\"\n#include \"tpp.h\"\n#include \"dis.h\"\n#include \"pbs_nodes.h\"\n#include \"mom_mach.h\"\n#include \"pbs_error.h\"\n#include \"net_connect.h\"\n#include \"batch_request.h\"\n#include \"mom_func.h\"\n#include \"pbs_ifl.h\"\n#include \"port_forwarding.h\"\n\n#include \"credential.h\"\n#include \"ticket.h\"\n#include \"svrfunc.h\"\n#include \"libsec.h\"\n#include \"mom_hook_func.h\"\n#include \"mom_server.h\"\n#include \"placementsets.h\"\n#include \"pbs_internal.h\"\n#include \"pbs_reliable.h\"\n\n#include 
\"renew_creds.h\"\n\n#include \"mock_run.h\"\n\n#define PIPE_READ_TIMEOUT 5\n#define EXTRA_ENV_PTRS 32\n\n/* Global Variables */\n\nextern char mom_host[];\nextern int num_var_env;\nextern char **environ;\nextern int exiting_tasks;\nextern u_long localaddr;\nextern int lockfds;\nextern pbs_list_head mom_polljobs;\nextern int next_sample_time;\nextern int min_check_poll;\nextern char *path_jobs;\nextern char *path_prolog;\nextern char *path_spool;\nextern unsigned int pbs_rm_port;\nextern gid_t pbsgroup;\nextern int server_stream;\nextern unsigned int pbs_mom_port;\nextern time_t time_now;\nextern time_t time_resc_updated;\nextern char *path_hooks_workdir;\nextern long joinjob_alarm_time;\nextern long job_launch_delay;\nint mom_reader_go; /* see catchinter() & mom_writer() */\n\nextern int x11_reader_go;\nextern int enable_exechost2;\nextern char *msg_err_malloc;\nextern unsigned char pbs_aes_key[][16];\nextern unsigned char pbs_aes_iv[][16];\n\nint ptc = -1; /* fd for master pty */\n#include <poll.h>\n#ifdef RLIM64_INFINITY\nextern struct rlimit64 orig_nproc_limit;\nextern struct rlimit64 orig_core_limit;\n#else\nextern struct rlimit orig_nproc_limit;\nextern struct rlimit orig_core_limit;\n#endif /* RLIM64... 
*/\n\nextern eventent *event_dup(eventent *ep, job *pjob, hnodent *pnode);\nextern void send_join_job_restart_mcast(int mtfd, int com, eventent *ep, int nth, job *pjob, pbs_list_head *phead);\n\n/* Local Variables */\n\nstatic int script_in;\t/* script file, will be stdin\t  */\nstatic pid_t writerpid; /* writer side of interactive job */\nstatic pid_t shellpid;\t/* shell part of interactive job  */\nstatic size_t cred_len;\nstatic char *cred_buf;\n\nchar *variables_else[] = {/* variables to add, value computed */\n\t\t\t  \"HOME\",\n\t\t\t  \"LOGNAME\",\n\t\t\t  \"PBS_JOBNAME\",\n\t\t\t  \"PBS_JOBID\",\n\t\t\t  \"PBS_QUEUE\",\n\t\t\t  \"SHELL\",\n\t\t\t  \"USER\",\n\t\t\t  \"PBS_JOBCOOKIE\",\n\t\t\t  \"PBS_NODENUM\",\n\t\t\t  \"PBS_TASKNUM\",\n\t\t\t  \"PBS_MOMPORT\",\n\t\t\t  \"PBS_NODEFILE\",\n\t\t\t  \"OMP_NUM_THREADS\",\n\t\t\t  \"PBS_ACCOUNT\",\n\t\t\t  \"PBS_ARRAY_INDEX\",\n\t\t\t  \"PBS_ARRAY_ID\"};\n\nstatic int num_var_else = sizeof(variables_else) / sizeof(char *);\nstatic void catchinter(int);\n\nextern int is_direct_write(job *, enum job_file, char *, int *);\nstatic int direct_write_possible = 1;\n\nvoid\nstarter_return(int upfds, int downfds, int code,\n\t       struct startjob_rtn *);\n\n#define FDMOVE(fd)                                \\\n\tif (fd < 3) {                             \\\n\t\tint hold = fcntl(fd, F_DUPFD, 3); \\\n\t\t(void) close(fd);                 \\\n\t\tfd = hold;                        \\\n\t}\n\n/**\n * @brief\n * \tInternal error routine.\n *\n * @param[in] string - error related to\n * @param[in] value - error number\n *\n * @return \tint\n * @retval\terror number\n *\n */\nint\nerror(char *string, int value)\n{\n\tint i = 0;\n\tchar *message;\n\textern char *msg_momsetlim;\n\textern struct pbs_err_to_txt pbs_err_to_txt[];\n\n\tassert(string != NULL);\n\tassert(*string != '\\0');\n\tassert(value > PBSE_);\t\t  /* minimum PBS error number */\n\tassert(value <= PBSE_NOSYNCMSTR); /* maximum PBS error number 
*/\n\tassert(pbs_err_to_txt[i].err_no != 0);\n\n\tdo {\n\t\tif (pbs_err_to_txt[i].err_no == value)\n\t\t\tbreak;\n\t} while (pbs_err_to_txt[++i].err_no != 0);\n\n\tassert(pbs_err_to_txt[i].err_txt != NULL);\n\tmessage = *pbs_err_to_txt[i].err_txt;\n\tassert(message != NULL);\n\tassert(*message != '\\0');\n\n\tif (value == PBSE_SYSTEM) {\n\t\tstrcpy(log_buffer, message);\n\t\tstrcat(log_buffer, strerror(errno));\n\t\tmessage = log_buffer;\n\t}\n\t(void) fprintf(stderr, msg_momsetlim, string, message);\n\t(void) fflush(stderr);\n\n\treturn value;\n}\n\n/**\n * @brief\n * \tno_hang() - interrupt handler for alarm() around attempt to connect\n *\tto qsub for interactive jobs.   If qsub hung or suspended or if the\n *\tnetwork is fouled up, mom cannot afford to wait forever.\n *\n * @param[in] sig - signal number\n *\n * @return \tVoid\n *\n */\n\nstatic void\nno_hang(int sig)\n{\n\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_REQUEST, LOG_DEBUG, \" \",\n\t\t  \"alarm timed-out connect to qsub\");\n}\n\n/**\n * @brief\n *\tvalidate credentials of user for job.\n *\n * @param[in] pjob - job pointer\n *\n * @return\tpointer to structure\n * @retval\tstructure handle to passwd\n *\n */\n\nstruct passwd *\ncheck_pwd(job *pjob)\n{\n\tstruct passwd *pwdp;\n\tstruct group *grpp;\n\tstruct stat sb;\n\tattribute *jb_group;\n\n\tpwdp = getpwnam(get_jattr_str(pjob, JOB_ATR_euser));\n\tif (pwdp == NULL) {\n\t\t(void) sprintf(log_buffer, \"No Password Entry for User %s\",\n\t\t\t       get_jattr_str(pjob, JOB_ATR_euser));\n\t\treturn NULL;\n\t}\n\t/* check that home directory is valid */\n\tif (*pwdp->pw_dir == '\\0') {\n\t\tsprintf(log_buffer, \"null home directory\");\n\t\treturn NULL;\n\t}\n\tif (pjob->ji_grpcache == NULL) {\n\t\tpjob->ji_grpcache = malloc(sizeof(struct grpcache) +\n\t\t\t\t\t   strlen(pwdp->pw_dir) + 1);\n\t\tif (pjob->ji_grpcache == NULL) {\n\t\t\tsprintf(log_buffer, \"Malloc failed\");\n\t\t\treturn NULL;\n\t\t}\n\t\tif (stat(pwdp->pw_dir, &sb) == -1) 
{\n\t\t\tsprintf(log_buffer, \"%s: home directory: %s\",\n\t\t\t\tpjob->ji_qs.ji_jobid, pwdp->pw_dir);\n\t\t\tlog_err(errno, \"check_pwd\", log_buffer);\n\t\t}\n\t\tstrcpy(pjob->ji_grpcache->gc_homedir, pwdp->pw_dir);\n\t}\n\n\tpjob->ji_grpcache->gc_uid = pwdp->pw_uid;  /* execution uid */\n\tpjob->ji_grpcache->gc_rgid = pwdp->pw_gid; /* real user gid */\n\n\t/* get the group and supplementary groups under which the job is to be run */\n\n\tjb_group = get_jattr(pjob, JOB_ATR_egroup);\n\tif ((jb_group->at_flags & (ATR_VFLAG_SET | ATR_VFLAG_DEFLT)) == ATR_VFLAG_SET) {\n\n\t\t/* execution group specified - not defaulting to login group */\n\n\t\tgrpp = getgrnam(get_jattr_str(pjob, JOB_ATR_egroup));\n\t\tif (grpp == NULL) {\n\t\t\t(void) sprintf(log_buffer, \"No Group Entry for Group %s\",\n\t\t\t\t       get_jattr_str(pjob, JOB_ATR_egroup));\n\t\t\treturn NULL;\n\t\t}\n\t\tif (grpp->gr_gid != pwdp->pw_gid) {\n\t\t\tchar **pgnam;\n\n\t\t\tpgnam = grpp->gr_mem;\n\t\t\twhile (*pgnam) {\n\t\t\t\tif (!strcmp(*pgnam, pwdp->pw_name))\n\t\t\t\t\tbreak;\n\t\t\t\t++pgnam;\n\t\t\t}\n\t\t\tif (*pgnam == 0) {\n\t\t\t\t(void) sprintf(log_buffer, \"user not in group\");\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t}\n\t\tpjob->ji_grpcache->gc_gid = grpp->gr_gid;\n\t} else {\n\t\t/* default to login group */\n\t\tpjob->ji_grpcache->gc_gid = pwdp->pw_gid;\n\t}\n\n\t/* perform site specific check on validity of account */\n\tif (site_mom_chkuser(pjob))\n\t\treturn NULL;\n\n\treturn pwdp;\n}\n\n/**\n * @brief\n *\twritepipe() - writes to pipe\n *\n * @param[in] pfd - file descriptor\n * @param[in] vptr - content to be written\n * @param[in] nbytes - length of content\n *\n * @return\tssize_t\n * @retval\t-1\t\t\t\t\terror\n * @retval\tnumber of bytes written to pipe\t\tsuccess\n *\n */\n\nssize_t\nwritepipe(int pfd, void *vptr, size_t nbytes)\n{\n\tsize_t nleft;\n\tchar *ptr = vptr; /* so we can do pointer arithmetic */\n\n\tnleft = nbytes;\n\twhile (nleft > 0) {\n\t\tssize_t nwritten;\n\t\tnwritten 
= write(pfd, ptr, nleft);\n\t\tif (nwritten == -1) {\n\t\t\tif (errno == EINTR)\n\t\t\t\tcontinue;\n\t\t\telse\n\t\t\t\treturn -1;\n\t\t}\n\n\t\tnleft -= nwritten;\n\t\tptr += nwritten;\n\t}\n\treturn nbytes;\n}\n\n/**\n * @brief\n *      readpipe() - reads from pipe\n *\n * @param[in] pfd - file descriptor\n * @param[in] vptr - buffer to read content into\n * @param[in] nbytes - length of content\n *\n * @return      ssize_t\n * @retval      -1                                      error\n * @retval      number of bytes read from pipe       success\n *\n */\n\nssize_t\nreadpipe(int pfd, void *vptr, size_t nbytes)\n{\n\tsize_t nleft;\n\tchar *ptr = vptr; /* so we can do pointer arithmetic */\n\n\tnleft = nbytes;\n\twhile (nleft > 0) {\n\t\tssize_t nread;\n\t\tnread = read(pfd, ptr, nleft);\n\t\tif (nread == -1) {\n\t\t\tif (errno == EINTR)\n\t\t\t\tcontinue;\n\t\t\telse\n\t\t\t\treturn -1;\n\t\t}\n\t\tif (nread == 0)\n\t\t\tbreak;\n\n\t\tnleft -= nread;\n\t\tptr += nread;\n\t}\n\treturn (nbytes - nleft);\n}\n\n/**\n * @brief\n *\texec_bail - called to clean up when the start of a job fails\n *\n * @par Functionality:\n *\tLogs the message if one is passed in.\n *\tSends IM_ABORT_JOB to the sisters.\n *\tSets the job's substate to JOB_SUBSTATE_EXITING, sets the job's\n *\texit code and sets exiting_tasks so an obit is sent for the job.\n *\tThe job's standard out/err are closed and then resources are released.\n *\n * @param[in]\tpjob - pointer to job structure\n * @param[in]\tcode - the error code for the exit value, typically JOB_EXEC_*\n * @param[in]\ttxt  - a message to log or NULL if none or already logged\n *\n * @return\tNone\n *\n * @par MT-safe: likely no\n *\n */\n\nvoid\nexec_bail(job *pjob, int code, char *txt)\n{\n\tint nodes;\n\tmom_hook_input_t hook_input;\n\tmom_hook_output_t hook_output;\n\tint hook_errcode = 0;\n\thook *last_phook = NULL;\n\tunsigned int hook_fail_action = 0;\n\tchar hook_msg[HOOK_MSG_SIZE + 1];\n\n\t/* log message passed in if one was 
*/\n\tif (txt != NULL) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t  pjob->ji_qs.ji_jobid, txt);\n\t}\n\n\tmom_hook_input_init(&hook_input);\n\thook_input.pjob = pjob;\n\n\tmom_hook_output_init(&hook_output);\n\thook_output.reject_errcode = &hook_errcode;\n\thook_output.last_phook = &last_phook;\n\thook_output.fail_action = &hook_fail_action;\n\n\t(void) mom_process_hooks(HOOK_EVENT_EXECJOB_ABORT,\n\t\t\t\t PBS_MOM_SERVICE_NAME, mom_host,\n\t\t\t\t &hook_input, &hook_output, hook_msg,\n\t\t\t\t sizeof(hook_msg), 1);\n\n\tnodes = send_sisters(pjob, IM_ABORT_JOB, NULL);\n\tif (nodes != pjob->ji_numnodes - 1) {\n\t\tsprintf(log_buffer,\n\t\t\t\"sent %d ABORT requests, should be %d\",\n\t\t\tnodes, pjob->ji_numnodes - 1);\n\t\tlog_joberr(-1, __func__, log_buffer, pjob->ji_qs.ji_jobid);\n\t}\n\tset_job_substate(pjob, JOB_SUBSTATE_EXITING);\n\tpjob->ji_qs.ji_un.ji_momt.ji_exitstat = code;\n\texiting_tasks = 1;\n\tif (pjob->ji_stdout > 0)\n\t\t(void) close(pjob->ji_stdout);\n\tif (pjob->ji_stderr > 0)\n\t\t(void) close(pjob->ji_stderr);\n\n\tif (job_clean_extra != NULL) {\n\t\t(void) job_clean_extra(pjob);\n\t}\n}\n\n#define RETRY 3\n\n/**\n * @brief\n *\topens the demux\n *\n * @param[in] addr - ip address\n * @param[in] port - port number\n *\n * @return \tint\n * @retval\t-1\t\tError\n * @retval\tsocket number\tSuccess\n *\n */\n\nint\nopen_demux(u_long addr, int port)\n{\n\tint sock;\n\tint i;\n\tstruct sockaddr_in remote;\n\n\tremote.sin_addr.s_addr = addr;\n\tremote.sin_port = htons((unsigned short) port);\n\tremote.sin_family = AF_INET;\n\n\tif ((sock = socket(AF_INET, SOCK_STREAM, 0)) == -1) {\n\t\tsprintf(log_buffer, \"%s: socket %s\", __func__, netaddr(&remote));\n\t\tlog_err(errno, __func__, log_buffer);\n\t\treturn -1;\n\t}\n\n\tfor (i = 0; i < RETRY; i++) {\n\t\tif (connect(sock, (struct sockaddr *) &remote,\n\t\t\t    sizeof(remote)) == 0)\n\t\t\treturn sock;\n\n\t\tswitch (errno) {\n\n\t\t\tcase EINTR:\n\t\t\tcase 
EADDRINUSE:\n\t\t\tcase ETIMEDOUT:\n\t\t\tcase ECONNREFUSED:\n\t\t\t\tsleep(2);\n\t\t\t\tcontinue;\n\n\t\t\tdefault:\n\t\t\t\tbreak;\n\t\t}\n\t\tbreak;\n\t}\n\tsprintf(log_buffer, \"%s: connect %s\", __func__, netaddr(&remote));\n\tlog_err(errno, __func__, log_buffer);\n\t(void) close(sock);\n\treturn -1;\n}\n\n/**\n * @brief\n * \topen_pty - open slave side of master/slave pty\n *\n * @param[in] pjob - job pointer\n *\n * @retval\tint\n * @retval \tpty descriptor\tSuccess\n *\n */\n\nstatic int\nopen_pty(job *pjob)\n{\n\tchar *name;\n\tint pts;\n\n\t/* Open the slave pty as the controlling tty */\n\n\tname = get_jattr_str(pjob, JOB_ATR_outpath);\n\n\tif ((pts = open(name, O_RDWR, 0600)) < 0) {\n\t\tsprintf(log_buffer, \"open_pty(%s): cannot open slave\", name);\n\t\tlog_err(errno, \"open_pty\", log_buffer);\n\t} else {\n\n\t\tFDMOVE(pts);\n\n\t\tif (fchmod(pts, 0620) == -1) \n\t\t\tlog_errf(-1, __func__, \"fchmod failed. ERR : %s\",strerror(errno));\t\t\t\t\n\t\tif (fchown(pts, pjob->ji_qs.ji_un.ji_momt.ji_exuid,\n\t\t\t      pjob->ji_qs.ji_un.ji_momt.ji_exgid) == -1) \n\t\t\tlog_errf(-1, __func__, \"fchown failed. 
ERR : %s\",strerror(errno));\t\t\n#if defined(__osf__)\n\t\t(void) ioctl(pts, TIOCSCTTY, 0); /* make controlling */\n#endif\n\t}\n\treturn (pts);\n}\n\n/**\n * @brief\n * \tis_joined - determine if standard out and standard error are joined together\n *\t(-j option) and if so which is first\n *\n * @param[in] pjob - job pointer\n *\n * @return \tint\n * @retval\t0\tno join, separate files\n * @retval  \t+1\tjoined as stdout\n * @retval  \t-1 \tjoined as stderr\n *\n */\nint\nis_joined(job *pjob)\n{\n\tchar *join;\n\n\tif (is_jattr_set(pjob, JOB_ATR_join)) {\n\t\tjoin = get_jattr_str(pjob, JOB_ATR_join);\n\t\tif (join[0] != 'n') {\n\t\t\tif (join[0] == 'o' && strchr(join, (int) 'e') != 0)\n\t\t\t\treturn 1;\n\t\t\telse if (join[0] == 'e' && strchr(join, (int) 'o') != 0)\n\t\t\t\treturn -1;\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \topen_std_out_err - open standard out and err to files\n *\n * @param[in] pjob - job pointer\n *\n * @return\tint\n * @retval\t0\tSuccess\n * @retval\t-1\tError\n *\n */\n\nstatic int\nopen_std_out_err(job *pjob)\n{\n\tint i;\n\tint file_out = -2;\n\tint file_err = -2;\n\tint filemode = O_CREAT | O_WRONLY | O_APPEND;\n\tdirect_write_possible = 1;\n\n\t/* if std out/err joined (set and !=\"n\"), which file is first */\n\n\ti = is_joined(pjob);\n\tif (i == 1) {\n\t\tfile_out = open_std_file(pjob, StdOut, filemode,\n\t\t\t\t\t pjob->ji_qs.ji_un.ji_momt.ji_exgid);\n\t\tfile_err = dup(file_out);\n\t} else if (i == -1) {\n\t\tfile_err = open_std_file(pjob, StdErr, filemode,\n\t\t\t\t\t pjob->ji_qs.ji_un.ji_momt.ji_exgid);\n\t\tfile_out = dup(file_err);\n\t}\n\n\tif (file_out == -2)\n\t\tfile_out = open_std_file(pjob, StdOut, filemode,\n\t\t\t\t\t pjob->ji_qs.ji_un.ji_momt.ji_exgid);\n\tif (file_err == -2)\n\t\tfile_err = open_std_file(pjob, StdErr, filemode,\n\t\t\t\t\t pjob->ji_qs.ji_un.ji_momt.ji_exgid);\n\tif ((file_out < 0 || file_err < 0)) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_NOTICE,\n\t\t\t  
pjob->ji_qs.ji_jobid,\n\t\t\t  \"Unable to open standard output/error\");\n\t\treturn -1;\n\t}\n\n\tif (!direct_write_possible && direct_write_requested(pjob)) {\n\t\tsprintf(log_buffer,\n\t\t\t\"Direct write is requested for job: %s, but the destination is not usecp-able from %s\\n\",\n\t\t\tpjob->ji_qs.ji_jobid, pjob->ji_hosts[pjob->ji_nodeid].hn_host);\n\t\tif (write(file_err, log_buffer, strlen(log_buffer)) == -1) \n\t\t\tlog_errf(-1, __func__, \"write failed. ERR : %s\",strerror(errno));\t\t\t\n\t}\n\n\tFDMOVE(file_out); /* make sure descriptor > 2       */\n\tFDMOVE(file_err); /* so don't clobber stdin/out/err */\n\tif (file_out != 1) {\n\t\t(void) close(1);\n\t\tif (dup(file_out) == -1) \n\t\t\tlog_errf(-1, __func__, \"dup failed. ERR : %s\",strerror(errno));\t\t\n\t\t(void) close(file_out);\n\t}\n\tif (file_err != 2) {\n\t\t(void) close(2);\n\t\tif (dup(file_err) == -1) \n\t\t\tlog_errf(-1, __func__, \"dup failed. ERR : %s\",strerror(errno));\t\t\n\t\t(void) close(file_err);\n\t}\n\treturn 0;\n}\n#ifdef NAS /* localmod 010 */\n\n/**\n * @brief\n * \tNAS_tmpdirname - build NAS version of temporary directory name\n * \tModifies pbs_tmpdir to append user name\n *\n * @param[in] pjob - job pointer\n *\n * @return \tstring\n * @retval\ttemp directory name\n *\n */\n\nchar *\nNAS_tmpdirname(job *pjob)\n{\n\tchar *ss;\n\n\tss = strstr(pbs_tmpdir, \"//\");\n\tif (ss != NULL)\n\t\tstrcpy(ss + 2, get_jattr_str(pjob, JOB_ATR_euser));\n\n\treturn tmpdirname(pjob->ji_qs.ji_jobid);\n}\n#endif /* localmod 010 */\n\n/**\n * @brief\n * \ttmpdirname - build a temporary directory name\n *\n * @param[in] sequence - directory name\n *\n * @return \tstring\n * @retval\tdirectory name\n *\n */\n\nchar *\ntmpdirname(char *sequence)\n{\n\tstatic char tmpdir[MAXPATHLEN + 1];\n\n\tsprintf(tmpdir, \"%s/pbs.%s\", pbs_tmpdir, sequence);\n\treturn tmpdir;\n}\n\n/**\n * @brief\n *\tjobdirname - build the staging and execution directory name\n *\twith a random number tagged onto the end\n 
*\n * @param[in] sequence - directory name\n * @param[in] homedir - home directory\n *\n * @return\tstring\n * @retval\tjob directory name\tsuccess\n *\n */\n\nchar *\njobdirname(char *sequence, char *homedir)\n{\n\tstatic char dir[MAXPATHLEN + 1];\n\n\t/* this will be implemented/used in phase II\n\t **\tstatic char unique[5+1];\n\t **\tunsigned int seed = (unsigned int)time(NULL);\n\t **\n\t **\tfor (i = 0; i < 5; i++) {\n\t **\t\tsrandom(seed);\n\t **\t\ttempnum = random(NULL);\n\t **\t\tseed = tempnum;\n\t **\t\tunique[i] = (tempnum%26) + '0';\n\t **\t}\n\t **\n\t **\tsprintf(dir, \"%s/%s.%s\", pbs_jobdir_root, sequence, unique);\n\t */\n\n\tif ((pbs_jobdir_root[0] != '\\0') && (strcmp(pbs_jobdir_root, JOBDIR_DEFAULT) != 0)) {\n\t\tsprintf(dir, \"%s/pbs.%s.%s\", pbs_jobdir_root, sequence, FAKE_RANDOM);\n\t} else if ((homedir != NULL) && (*homedir != '\\0')) {\n\t\t/*\n\t\t * jobdir_root was not set in mom_priv/config file\n\t\t * so use the given homedir\n\t\t */\n\t\tsprintf(dir, \"%s/pbs.%s.%s\", homedir, sequence, FAKE_RANDOM);\n\t} else {\n\t\t/* last resort, use default tmp dir */\n\t\tsprintf(dir, \"%s/pbs.%s.%s\", pbs_tmpdir, sequence, FAKE_RANDOM);\n\t}\n\n\treturn dir;\n}\n\n/**\n * @brief\n * \tmktmpdir - make temporary directory(s)\n *\tA temporary directory is created and the name is\n *\tplaced in an environment variable.\n *\n * @param[in] jobid - job id\n * @param[in] uid - user id\n * @param[in] gid - group id\n * @param[in] vtab - pointer to variable table\n *\n * @return\tint\n * @retval\t0\t\tSuccess\n * @retval \tJOB_EXEC_FAIL1 \tfailure to make directory\n *\n */\n\nint\nmktmpdir(char *jobid, uid_t uid, gid_t gid, struct var_table *vtab)\n{\n\tchar *tmpdir;\n\n\ttmpdir = tmpdirname(jobid);\n\terrno = 0;\n\tif (mkdir(tmpdir, 0700) == -1) {\n\t\tint tmp_errno;\n\n\t\tif ((tmp_errno = errno) != EEXIST) {\n\t\t\tsprintf(log_buffer, \"mkdir: %s\", tmpdir);\n\t\t\tlog_joberr(tmp_errno, __func__, log_buffer, jobid);\n\t\t\treturn 
JOB_EXEC_FAIL1;\n\t\t} else if (tmp_errno == EEXIST) {\n\t\t\tstruct stat statbuf;\n\t\t\tif (lstat(tmpdir, &statbuf) == -1) {\n\t\t\t\tsprintf(log_buffer, \"%s: lstat : %s\", jobid, tmpdir);\n\t\t\t\tlog_joberr(tmp_errno, __func__, log_buffer, jobid);\n\t\t\t\treturn JOB_EXEC_FAIL1;\n\t\t\t}\n\t\t\tif (!S_ISDIR(statbuf.st_mode)) { /* Not a directory */\n\t\t\t\tsprintf(log_buffer, \"mkdir: %s is already available: possible attempted security breach by %d:%d(uid:gid of job tmpdir)\",\n\t\t\t\t\ttmpdir, statbuf.st_uid, statbuf.st_gid);\n\t\t\t\tlog_joberr(tmp_errno, __func__, log_buffer, jobid);\n\t\t\t\treturn JOB_EXEC_FAIL_SECURITY;\n\t\t\t} else if (!((statbuf.st_uid == uid || statbuf.st_uid == 0) && (statbuf.st_gid == gid || statbuf.st_gid == 0))) {\n\t\t\t\tsprintf(log_buffer, \"mkdir: %s is already available: possible attempted security breach by %d:%d(uid:gid of job tmpdir)\",\n\t\t\t\t\ttmpdir, statbuf.st_uid, statbuf.st_gid);\n\t\t\t\tlog_joberr(tmp_errno, __func__, log_buffer, jobid);\n\t\t\t\treturn JOB_EXEC_FAIL_SECURITY;\n\t\t\t}\n\t\t}\n\t}\n\t/* Explicitly call chmod because umask affects mkdir() */\n\tif (chmod(tmpdir, 0700) == -1) {\n\t\tsprintf(log_buffer, \"chmod: %s\", tmpdir);\n\t\tlog_joberr(errno, __func__, log_buffer, jobid);\n\t\treturn JOB_EXEC_FAIL1;\n\t}\n\tif (chown(tmpdir, uid, gid) == -1) {\n\t\tsprintf(log_buffer, \"chown: %s\", tmpdir);\n\t\tlog_joberr(errno, __func__, log_buffer, jobid);\n\t\treturn JOB_EXEC_FAIL1;\n\t}\n\t/* Only set TMPDIR if everything succeeded to this point. 
*/\n\tif (vtab) {\n\t\tbld_env_variables(vtab, \"TMPDIR\", tmpdir);\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tMake the staging and execution directory with whatever\n *\tprivileges are currently set, may be root or may be user.\n *\tThis function is a helper task for mkjobdir() below.\n *\n * @param[in] jobid - the job id string\n * @param[in] jobdir - the full path to the sandbox (working directory) to make\n *\n * @return int\n * @retval JOB_EXEC_FAIL1 failure to make directory\n * @retval  0 success\n *\n */\nstatic int\ninternal_mkjobdir(char *jobid, char *jobdir)\n{\n\tif (mkdir(jobdir, 0700) == -1) {\n\t\tif (errno == EEXIST) {\n\t\t\tsprintf(log_buffer, \"the staging and execution directory %s already exists\", jobdir);\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_INFO, jobid, log_buffer);\n\t\t} else {\n\t\t\tsprintf(log_buffer, \"mkdir: %s\", jobdir);\n\t\t\tlog_joberr(errno, __func__, log_buffer, jobid);\n\t\t\treturn JOB_EXEC_FAIL1;\n\t\t}\n\t}\n\tif (chmod(jobdir, 0700) == -1) {\n\t\tsprintf(log_buffer, \"unable to change permissions on staging and execution directory %s\", jobdir);\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_INFO, jobid, log_buffer);\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \tImpersonate the user by changing effective uid and gid.\n *\n * @param[in] uid - user id\n * @param[in] gid - group id\n *\n * @return \tint\n * @retval\t0\tSuccess\n * @retval\t-1\tError\n *\n */\n\nint\nimpersonate_user(uid_t uid, gid_t gid)\n{\n#if defined(HAVE_GETPWUID) && defined(HAVE_INITGROUPS)\n\tstruct passwd *pwd = getpwuid(uid);\n\tif (pwd == NULL)\n\t\treturn -1;\n\n\tif ((geteuid() != uid) &&\n\t    (initgroups(pwd->pw_name, gid) == -1)) {\n\t\treturn -1;\n\t}\n\n#endif\n#if defined(HAVE_SETEUID) && defined(HAVE_SETEGID)\n\t/* most systems */\n\tif ((setegid(gid) == -1) ||\n\t    (seteuid(uid) == -1)) {\n\t\tif (setegid(pbsgroup) == -1) \n\t\t\tlog_errf(-1, __func__, \"setegid to pbs group failed. 
ERR : %s\",strerror(errno));\t\t\n\t\treturn -1;\n\t}\n#elif defined(HAVE_SETRESUID) && defined(HAVE_SETRESGID)\n\tif ((setresgid(-1, gid, -1) == -1) ||\n\t    (setresuid(-1, uid, -1) == -1)) {\n\t\t(void) setresgid(-1, pbsgroup, -1);\n\t\treturn -1;\n\t}\n#else\n#error No function to change effective UID or GID\n#endif\n\treturn 0;\n}\n\nvoid\nrevert_from_user(void)\n{\n#if defined(HAVE_SETEUID)\n\t/* most systems */\n\tif (seteuid(0) == -1) \n\t\tlog_errf(-1, __func__, \"seteuid failed. ERR : %s\",strerror(errno));\t\n#elif defined(HAVE_SETRESUID)\n\t(void) setresuid(-1, 0, -1);\n#else\n#error No function to change effective UID\n#endif\n#if defined(HAVE_INITGROUPS)\n\t(void) initgroups(\"root\", pbsgroup);\n#endif\n#if defined(HAVE_SETEGID)\n\tif (setegid(pbsgroup) == -1) \n\t\tlog_errf(-1, __func__, \"setegid to pbs group failed. ERR : %s\",strerror(errno));\t\t\n#elif defined(HAVE_SETRESGID)\n\t(void) setresgid(-1, pbsgroup, -1);\n#else\n#error No function to change effective GID\n#endif\n}\n\n/**\n * @brief\n *\tMake the staging and execution directory for the job.\n *\n * @par If the root of the working directory (sandbox) is the User's home\n *\tdirectory, make the directory as the User in case root has no access.\n *\tOtherwise, it is being made in a admin specified secure root owned\n *\tlocation, \"job_dir_root\" and should be made as root and then the\n *\township changed.\n * @par The global character array pbs_jobdir_root is set to a non null string\n *\tif the base for the sandbox is not to be the User's home directory.\n *\n * @param[in] jobid - the job id string, i.e. 
\"123.server\"\n * @param[in] jobdir - the full path to the sandbox (working directory) to make\n * @param[in] uid    - the user id of the user under which the job will run\n * @param[in] gid    - the group id of the user under which the job will run\n *\n * @return int\n * @retval -1 failure to make directory\n * @retval  0 success\n *\n */\n\n/**\n * @brief\n * \tmkjobdir - make the staging and execution directory\n *\tA per-job staging and execution directory is created.\n *\tIf the parent of the directory is the user's home, it is made while\n *\toperating with the user's privilege.  Otherwise, it is made as root\n *\tand then changed as it would be in \"job_dir_root\" which is root owned.\n *\n * @param[in] jobid - job id\n * @param[in] jobdir - job directory\n * @param[in] uid - user id\n * @param[in] gid - group id\n *\n * @return \tint\n * @retval\t0\tSuccess\n * @retval\t-1\tError\n *\n */\n\nint\nmkjobdir(char *jobid, char *jobdir, uid_t uid, gid_t gid)\n{\n\tint rc;\n\n\tif ((pbs_jobdir_root[0] != '\\0') && (strcmp(pbs_jobdir_root, JOBDIR_DEFAULT) != 0)) {\n\n\t\t/* making the directory as root in a secure root owned dir */\n\n\t\tif ((rc = internal_mkjobdir(jobid, jobdir)) != 0)\n\t\t\treturn (rc);\n\n\t\t/* now change ownership to the user */\n\t\tif (chown(jobdir, uid, gid) == -1) {\n\t\t\tsprintf(log_buffer, \"chown: %s\", jobdir);\n\t\t\tlog_joberr(errno, __func__, log_buffer, jobid);\n\t\t\treturn JOB_EXEC_FAIL1;\n\t\t}\n\t} else {\n\n\t\t/* making the directory in the user's home, do it as user */\n\n\t\tif (impersonate_user(uid, gid) == -1)\n\t\t\treturn -1;\n\n\t\t/* make the directory */\n\t\trc = internal_mkjobdir(jobid, jobdir);\n\n\t\t/* go back to being root */\n\n\t\trevert_from_user();\n\n\t\tif (rc != 0)\n\t\t\treturn (rc);\n\t}\n\n\t/*\n\t * success.  
log a message that shows the name of the\n\t * staging and execution dir\n\t */\n\tsprintf(log_buffer, \"created the job directory %s\", jobdir);\n\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, jobid, log_buffer);\n\treturn 0;\n}\n\n/**\n * @brief\n * \trmtmpdir - remove the temporary directory\n *\tThis may take awhile so the task is forked and execed to another\n *\tprocess.\n *\n * @param[in] jobid - job id\n *\n * @return\tVoid\n *\n */\n\nvoid\nrmtmpdir(char *jobid)\n{\n\tstatic char rmdir[MAXPATHLEN + 1];\n\tstruct stat sb;\n\tpid_t pid;\n\tchar *rm = \"/bin/rm\";\n\tchar *rf = \"-rf\";\n\tchar *tmpdir;\n\tchar *newdir = rmdir;\n\n\t/* Hello, is any body there? */\n\ttmpdir = tmpdirname(jobid);\n\tif (stat(tmpdir, &sb) == -1) {\n\t\tif (errno != ENOENT) {\n\t\t\tsprintf(log_buffer, \"stat: %s\", tmpdir);\n\t\t\tlog_joberr(errno, __func__, log_buffer, jobid);\n\t\t}\n\t\treturn;\n\t}\n\n\tsprintf(rmdir, \"%s/pbs_remove.%s\", pbs_tmpdir, jobid);\n\tif (rename(tmpdir, newdir) == -1) {\n\t\tchar *msgbuf;\n\n\t\tpbs_asprintf(&msgbuf, \"%s %s\", tmpdir, newdir);\n\t\tlog_joberr(errno, __func__, msgbuf, jobid);\n\t\tfree(msgbuf);\n\t\tnewdir = tmpdir;\n\t}\n\n\t/* fork and exec the cleantmp process */\n\tpid = fork();\n\tif (pid < 0) {\n\t\tlog_err(errno, __func__, \"fork\");\n\t\treturn;\n\t}\n\n\tif (pid > 0) /* parent */\n\t\treturn;\n\n\ttpp_terminate();\n\texecl(rm, \"pbs_cleandir\", rf, newdir, NULL);\n\tlog_err(errno, __func__, \"execl\");\n\texit(21);\n}\n\n/**\n * @brief\n *\treturns shell name\n *\n * @param[in] shell - shellname\n *\n * @return \tstring\n * @retval\tshell name\n *\n */\n\nchar *\nlastname(char *shell)\n{\n\tchar *shellname;\n\n\tshellname = strrchr(shell, '/');\n\tif (shellname)\n\t\tshellname++; /* go past last '/' */\n\telse\n\t\tshellname = shell;\n\treturn shellname;\n}\n\n/**\n * @brief\n *\tBecome the user with specified user name, uid, and gids.\n *\tObtains the current supplement group list and if necessary adds\n *\tthe 
user's login group to it,  then changes to the specified group,\n *\tnew group list, and the specified uid.\n *\n * @param[in] eusrname - the execution user name\n * @param[in] euid     - the execution uid\n * @param[in] egid     - the execution gid\n * @param[in] rgid     - the login (or real) gid of the user\n *\n * @return int\n * @retval 0  - success\n * @retval -1 - failure to change\n *\n */\nint\nbecomeuser_args(char *eusrname, uid_t euid, gid_t egid, gid_t rgid)\n{\n\tgid_t *grplist = NULL;\n\tstatic int maxgroups = 0;\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n#if defined(HAVE_LIBKAFS) || defined(HAVE_LIBKOPENAFS)\n\tint32_t pag = 0;\n\tpag = getpag();\n#endif\n#endif\n\n\t/* obtain the maximum number of groups possible in the list */\n\tif (maxgroups == 0)\n\t\tmaxgroups = (int) sysconf(_SC_NGROUPS_MAX);\n\n\tif (initgroups(eusrname, egid) != -1) {\n\t\tint numsup;\n\t\tint i;\n\n\t\t/* allocate an array for the group list */\n\t\tgrplist = calloc((size_t) maxgroups, sizeof(gid_t));\n\t\tif (grplist == NULL)\n\t\t\treturn -1;\n\t\t/* get the current list of groups */\n\t\tnumsup = getgroups(maxgroups, grplist);\n\t\tfor (i = 0; i < numsup; ++i) {\n\t\t\tif (grplist[i] == rgid)\n\t\t\t\tbreak;\n\t\t}\n\t\tif (i == numsup) {\n\t\t\t/* need to add primary group to list */\n\t\t\tif (numsup == maxgroups) {\n\t\t\t\t/* cannot, list already at max size */\n\t\t\t\tfree(grplist);\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\tgrplist[numsup++] = rgid;\n\t\t}\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n#if defined(HAVE_LIBKAFS) || defined(HAVE_LIBKOPENAFS)\n\t\tif (pag)\n\t\t\tgrplist[numsup++] = pag;\n#endif\n#endif\n\n\t\tif ((setgroups((size_t) numsup, grplist) != -1) &&\n\t\t    (setgid(egid) != -1) &&\n\t\t    (setuid(euid) != -1)) {\n\t\t\tfree(grplist);\n\t\t\treturn 0;\n\t\t}\n\t}\n\tif (grplist)\n\t\tfree(grplist);\n\treturn -1;\n}\n\n/**\n * @brief\n *\tBecome the user using information sent with the job and in the cached\n *\tpassword 
information in the job structure if available.\n *\n *\tPicks up the execution user name from the euser attribute, the euid\n *\tand egid from the mom subarea of the job structure and the login gid\n *\tfrom the cached password info if that has been set.  Otherwise use\n *\tthe egid.\n *\n *\tThe real work is done by passing the above to becomeuser_args().\n * @see\tbecomeuser_args\n *\n * @param[in] pjob - pointer to the job structure\n * @return int\n * @retval 0  - success\n * @retval -1 - failure to change\n *\n */\n\nint\nbecomeuser(job *pjob)\n{\n\tgid_t rgid;\n\tif (pjob->ji_grpcache != NULL)\n\t\trgid = pjob->ji_grpcache->gc_rgid;\n\telse\n\t\trgid = pjob->ji_qs.ji_un.ji_momt.ji_exgid;\n\tif (becomeuser_args(get_jattr_str(pjob, JOB_ATR_euser), pjob->ji_qs.ji_un.ji_momt.ji_exuid, pjob->ji_qs.ji_un.ji_momt.ji_exgid, rgid) == -1) {\n\t\tfprintf(stderr, \"unable to set user privileges, errno = %d\\n\",\n\t\t\terrno);\n\t\treturn -1;\n\t} else\n\t\treturn 0;\n}\n\n/**\n * @brief\n * \tExpects the current process will invoke some external program,\n * \tand this sets the process to have the special credential\n * \tstored in the job, along with 'shell', arguments array (argarray),\n * \tand 'pjob->ji_env' values.\n *\n * @param[in] pjob - job in question\n * @param[out] shell - if not NULL, filled in with shell to use for future\n *\t\t\t\texternal program invocations.\n * @param[out] argarray - if not NULL, filled in with argument array to be\n *\t\t\t\tused for future external program invocations.\n *\n *\tDo the right thing for the type of credential the job has.\n *\tWe are in a child process which will become a task.\n *\n * @return\tint\n * @retval\t-1\terror\n * @retval\t0\tSuccess\n *\n */\nint\nset_credential(job *pjob, char **shell, char ***argarray)\n{\n\tchar **argv;\n\tstatic char buf[MAXPATHLEN + 1];\n\tint ret = 0;\n\tchar *prog = NULL; /* possible new shell */\n\tchar *name;\n\tint i = 0;\n\tint j;\n\tint num = 0;\n\tint fds[2];\n\n\tif ((argarray 
!= NULL) && (*argarray != NULL)) {\n\t\twhile ((*argarray)[num] != NULL) {\n\t\t\tnum++;\n\t\t}\n\t}\n\tcred_buf = NULL;\n\n\tswitch (pjob->ji_extended.ji_ext.ji_credtype) {\n\n\t\tcase PBS_CREDTYPE_NONE:\n\t\t\targv = (char **) calloc(2 + num, sizeof(char *));\n\t\t\tassert(argv != NULL);\n\n\t\t\t/* construct argv array */\n\t\t\tif (shell != NULL) {\n\t\t\t\tprog = *shell;\n\t\t\t\tname = lastname(*shell);\n\t\t\t\targv[i] = malloc(strlen(name) + 2);\n\t\t\t\tassert(argv[i] != NULL);\n\t\t\t\tstrcpy(argv[i], \"-\");\n\t\t\t\tstrcat(argv[i++], name);\n\t\t\t\t/* copy remaining command line args 1..end, skip 0 */\n\t\t\t\tif (num >= 2) { /* num=# of !NULL argarray entries */\n\t\t\t\t\tfor (j = 1; (*argarray)[j]; j++)\n\t\t\t\t\t\targv[i++] = (*argarray)[j];\n\t\t\t\t}\n\t\t\t}\n\t\t\tret = becomeuser(pjob);\n\t\t\tbreak;\n\n\t\tcase PBS_CREDTYPE_AES:\n\t\t\t/* there are 3 set argv[] entries below, so need to alloc */\n\t\t\t/* 3+1 initial slots (+1 for the terminating NULL entry) */\n\t\t\targv = (char **) calloc(4 + num, sizeof(char *));\n\t\t\tassert(argv != NULL);\n\n\t\t\tif (read_cred(pjob, &cred_buf, &cred_len) != 0)\n\t\t\t\tbreak;\n\n\t\t\tret = becomeuser(pjob);\n\n\t\t\tif (pipe(fds) == -1) {\n\t\t\t\tlog_err(errno, __func__, \"pipe\");\n\t\t\t\tbreak;\n\t\t\t}\n\n\t\t\tname = NULL;\n\t\t\tif (pbs_decrypt_pwd(cred_buf, PBS_CREDTYPE_AES, cred_len, &name, (const unsigned char *) pbs_aes_key, (const unsigned char *) pbs_aes_iv) != 0) {\n\t\t\t\tlog_joberr(-1, __func__, \"decrypt_pwd\", pjob->ji_qs.ji_jobid);\n\t\t\t\tclose(fds[0]);\n\t\t\t} else if (writepipe(fds[1], name, cred_len) != cred_len) {\n\t\t\t\tlog_err(errno, __func__, \"pipe write\");\n\t\t\t\tclose(fds[0]);\n\t\t\t} else {\n\t\t\t\tsprintf(buf, \"%d\", fds[0]);\n\t\t\t\tbld_env_variables(&pjob->ji_env, \"PBS_PWPIPE\", buf);\n\t\t\t}\n\t\t\tif (name != NULL) {\n\t\t\t\tmemset(name, 0, cred_len);\n\t\t\t\tfree(name);\n\t\t\t}\n\t\t\tclose(fds[1]);\n\n\t\t\t/* construct argv array */\n\t\t\tif 
(shell != NULL) {\n\t\t\t\tprog = *shell;\n\t\t\t\tname = lastname(*shell);\n\t\t\t\targv[i] = malloc(strlen(name) + 2);\n\t\t\t\tif (argv[i] == NULL)\n\t\t\t\t\tbreak;\n\t\t\t\tstrcpy(argv[i], \"-\");\n\t\t\t\tstrcat(argv[i++], name);\n\t\t\t}\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\tlog_err(errno, __func__, \"unknown credential type\");\n\t\t\treturn -1;\n\t}\n\n\tif (shell == NULL ||  /* only args OR */\n\t    prog != *shell) { /* we added a program */\n\t\t/* copy remaining command line args */\n\t\tif (argarray != NULL) {\n\t\t\tif (*argarray != NULL) {\n\t\t\t\targv[i++] = (shell == NULL) ? (*argarray)[0] : *shell;\n\t\t\t\tif (num >= 2) { /* num=# of !NULL argarray entries */\n\t\t\t\t\tfor (j = 1; (*argarray)[j]; j++) {\n\t\t\t\t\t\targv[i++] = (*argarray)[j];\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\targv[i++] = (shell == NULL) ? NULL : *shell;\n\t\t\t}\n\t\t}\n\t\tif (shell != NULL)\n\t\t\t*shell = prog;\n\t}\n\targv[i++] = NULL;\n\n\tif (argarray != NULL) {\n\t\t*argarray = argv;\n\t} else {\n\t\tfree_str_array(argv);\n\t}\n\n\tif (cred_buf) {\n\t\tfree(cred_buf);\n\t\tcred_buf = NULL;\n\t}\n\treturn ret;\n}\n\n/** @brief\n * \tget_index_and_parent - from the job id of a subjob, return the parent array\n *\tjob jobid and the index for this subjob. 
The two returned strings are\n *\tin static buffers and must be copied before this is called again.\n *\n * @param[in] jobid - job id\n * @param[out] pparent - parent array job\n * @param[out] pindex - index for subjob\n *\n * @return\tVoid\n *\n */\n\nvoid\nget_index_and_parent(char *jobid, char **pparent, char **pindex)\n{\n\tchar *pd;\n\tchar *pi;\n\tchar *ps;\n\tstatic char parent[PBS_MAXSVRJOBID + 1];\n\tstatic char index[20];\n\n\tps = jobid;\n\tpd = parent;\n\tpi = index;\n\twhile (*ps != '[') /* copy first part of job id */\n\t\t*pd++ = *ps++;\n\t*pd++ = *ps++;\t   /* copy in '[' */\n\twhile (*ps != ']') /* copy index  */\n\t\t*pi++ = *ps++;\n\t*pi = '\\0';\n\twhile (*ps)\n\t\t*pd++ = *ps++;\n\t*pd = '\\0';\n\t*pparent = parent;\n\t*pindex = index;\n}\n\n/**\n * @brief\n *\tcreates set up for job.\n *\n * @param[in] pjob - job pointer\n * @param[in] pwdparm - pointer to passwd structure\n *\n * @return\tint\n * @retval\tJOB_EXEC_FAILUID(-10)\tError\n * @retval\tJOB_EXEC_RETRY(-3)\tError\n * @retval\tJOB_EXEC_OK(0)\t\tSuccess\n *\n */\n\nstatic int\njob_setup(job *pjob, struct passwd **pwdparm)\n{\n\tstruct passwd *pwdp;\n\tchar *chkpnt;\n\n\t/*\n\t * get the password entry for the user under which the job is to be run\n\t * we do this now to save a few things in the job structure\n\t */\n\tpwdp = check_pwd(pjob);\n\tif (pwdparm != NULL)\n\t\t*pwdparm = pwdp;\n\n\tif (pwdp == NULL) {\n\t\tlog_event(PBSEVENT_JOB | PBSEVENT_SECURITY, PBS_EVENTCLASS_JOB,\n\t\t\t  LOG_ERR, pjob->ji_qs.ji_jobid, log_buffer);\n\t\tpjob->ji_qs.ji_stime = time_now; /* for walltime */\n\t\tset_jattr_l_slim(pjob, JOB_ATR_stime, time_now, SET);\n\t\treturn JOB_EXEC_FAILUID;\n\t}\n\tpjob->ji_qs.ji_un.ji_momt.ji_exuid = pjob->ji_grpcache->gc_uid;\n\tpjob->ji_qs.ji_un.ji_momt.ji_exgid = pjob->ji_grpcache->gc_gid;\n\n\t/*\n\t ** Call job_setup_final if it is available.\n\t ** The stream parameter is not used by mother superior.\n\t */\n\tif (job_setup_final != NULL) {\n\t\tif 
(job_setup_final(pjob, -1) != PBSE_NONE)\n\t\t\treturn JOB_EXEC_RETRY;\n\t}\n\n\t/*\n\t * if certain resource limits require that the job usage be\n\t * polled or it is a multinode job, we link the job to mom_polljobs.\n\t *\n\t * NOTE: we overload the job field ji_jobque for this as it\n\t * is not used otherwise by MOM\n\t */\n\n\tif (pjob->ji_numnodes > 1 || mom_do_poll(pjob))\n\t\tif (is_linked(&mom_polljobs, &pjob->ji_jobque) == 0)\n\t\t\tappend_link(&mom_polljobs, &pjob->ji_jobque, pjob);\n\n\t/* Is the job to be periodic checkpointed */\n\n\tpjob->ji_chkpttype = PBS_CHECKPOINT_NONE;\n\tif (is_jattr_set(pjob, JOB_ATR_chkpnt)) {\n\t\tchkpnt = get_jattr_str(pjob, JOB_ATR_chkpnt);\n\t\tif ((*chkpnt == 'c') && (*(chkpnt + 1) == '=')) {\n\t\t\t/* has cpu checkpoint time in minutes, convert to seconds */\n\t\t\tpjob->ji_chkpttype = PBS_CHECKPOINT_CPUT;\n\t\t\tpjob->ji_chkpttime = atoi(chkpnt + 2) * 60;\n\t\t} else if ((*chkpnt == 'w') && (*(chkpnt + 1) == '=')) {\n\t\t\t/* has checkpoint walltime in minutes, convert to seconds */\n\t\t\tpjob->ji_chkpttype = PBS_CHECKPOINT_WALLT;\n\t\t\tpjob->ji_chkpttime = atoi(chkpnt + 2) * 60;\n\t\t}\n\t\tpjob->ji_chkptnext = pjob->ji_chkpttime;\n\t}\n\treturn JOB_EXEC_OK;\n}\n\n/**\n * @brief\n *\trecord_finish_exec - record the results of finish_exec()\n *\tprimarily the session id of the started job.\n *\n * @par Functionality:\n *\tFind the connection table entry associated with the pipe file\n *\tdescriptor.  This leads to the task and from there to the job\n *\tbeing started.  
The starter return information is read from the pipe.\n *\tIf the read fails, log the fact and requeue the job.\n *\tOtherwise, record that the job is now running:\n *\t- the session id and global id (if one)\n *\t- set the state/substate to RUNNING\n *\t- get a first sample of usage for this job and\n *\t  return a status update to the Server so it knows the job is going.\n *\n * @param[in]\tsd - file descriptor of the pipe on which the job starter\n *\t\tprocess has written the session id and other info\n *\n * @return\tNone\n *\n * @par MT-safe: likely no\n *\n */\nstatic void\nrecord_finish_exec(int sd)\n{\n\tconn_t *conn = NULL;\n\tint i;\n\tint j;\n\tjob *pjob = NULL;\n\tpbs_task *ptask;\n\tstruct startjob_rtn sjr;\n\n\tif ((conn = get_conn(sd)) == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"unable to find pipe\");\n\t\treturn;\n\t}\n\n\tptask = (pbs_task *) conn->cn_data;\n\tif (ptask != NULL)\n\t\tpjob = ptask->ti_job;\n\telse {\n\t\t/*\n\t\t * Job has been deleted before recording session id.\n\t\t * Read the session information and kill the process.\n\t\t */\n\t\tmemset(&sjr, 0, sizeof(sjr));\n\t\ti = readpipe(sd, &sjr, sizeof(sjr));\n\t\tj = errno;\n\t\t(void) close_conn(sd);\n\n\t\tif (i == sizeof(sjr))\n\t\t\tkill_session(sjr.sj_session, SIGKILL, 0);\n\t\telse {\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"read of pipe for session information got %d not %d\",\n\t\t\t\ti, (int) sizeof(sjr));\n\t\t\tlog_err(j, __func__, log_buffer);\n\t\t}\n\n\t\treturn;\n\t}\n\n\tif (pjob == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\"no job task associated with connection\");\n\t\treturn;\n\t}\n\n\t/* now we read the session id or error */\n\tmemset(&sjr, 0, sizeof(sjr));\n\ti = readpipe(pjob->ji_jsmpipe, &sjr, sizeof(sjr));\n\tj = errno;\n\n\tif (i != sizeof(sjr)) {\n\t\tsprintf(log_buffer,\n\t\t\t\"read of pipe for pid job %s got %d not %d\",\n\t\t\tpjob->ji_qs.ji_jobid,\n\t\t\ti, (int) sizeof(sjr));\n\t\tlog_err(j, __func__, log_buffer);\n\t\t(void) 
close_conn(pjob->ji_jsmpipe);\n\t\tpjob->ji_jsmpipe = -1;\n\t\t(void) close(pjob->ji_mjspipe);\n\t\tpjob->ji_mjspipe = -1;\n\n\t\tif (pjob->ji_jsmpipe2 != -1) {\n\t\t\t(void) close_conn(pjob->ji_jsmpipe2);\n\t\t\tpjob->ji_jsmpipe2 = -1;\n\t\t}\n\n\t\tif (pjob->ji_mjspipe2 != -1) {\n\t\t\t(void) close(pjob->ji_mjspipe2);\n\t\t\tpjob->ji_mjspipe2 = -1;\n\t\t}\n\n\t\tif (pjob->ji_child2parent_job_update_pipe != -1) {\n\t\t\t(void) close_conn(pjob->ji_child2parent_job_update_pipe);\n\t\t\tpjob->ji_child2parent_job_update_pipe = -1;\n\t\t}\n\n\t\tif (pjob->ji_parent2child_job_update_pipe != -1) {\n\t\t\t(void) close(pjob->ji_parent2child_job_update_pipe);\n\t\t\tpjob->ji_parent2child_job_update_pipe = -1;\n\t\t}\n\n\t\tif (pjob->ji_parent2child_job_update_status_pipe != -1) {\n\t\t\t(void) close(pjob->ji_parent2child_job_update_status_pipe);\n\t\t\tpjob->ji_parent2child_job_update_status_pipe = -1;\n\t\t}\n\n\t\tif (pjob->ji_parent2child_moms_status_pipe != -1) {\n\t\t\t(void) close(pjob->ji_parent2child_moms_status_pipe);\n\t\t\tpjob->ji_parent2child_moms_status_pipe = -1;\n\t\t}\n\t\t(void) sprintf(log_buffer, \"start failed, improper sid\");\n\t\texec_bail(pjob, JOB_EXEC_RETRY, log_buffer);\n\t\treturn;\n\t}\n\n#if MOM_ALPS\n\tif (sjr.sj_code == JOB_EXEC_UPDATE_ALPS_RESV_ID) {\n\t\tpjob->ji_extended.ji_ext.ji_pagg = sjr.sj_pagg;\n\t\tpjob->ji_extended.ji_ext.ji_reservation = sjr.sj_reservation;\n\t\t(void) writepipe(pjob->ji_mjspipe, &sjr, sizeof(sjr));\n\t\treturn;\n\t}\n#endif\n\n\t/* send back as an acknowledgement that MOM got it */\n\t(void) writepipe(pjob->ji_mjspipe, &sjr, sizeof(sjr));\n\t(void) close_conn(pjob->ji_jsmpipe);\n\tpjob->ji_jsmpipe = -1;\n\t(void) close(pjob->ji_mjspipe);\n\tpjob->ji_mjspipe = -1;\n\n\tif (pjob->ji_jsmpipe2 != -1) {\n\t\t(void) close_conn(pjob->ji_jsmpipe2);\n\t\tpjob->ji_jsmpipe2 = -1;\n\t}\n\n\tif (pjob->ji_mjspipe2 != -1) {\n\t\t(void) close(pjob->ji_mjspipe2);\n\t\tpjob->ji_mjspipe2 = -1;\n\t}\n\n\tif 
(pjob->ji_child2parent_job_update_pipe != -1) {\n\t\t(void) close_conn(pjob->ji_child2parent_job_update_pipe);\n\t\tpjob->ji_child2parent_job_update_pipe = -1;\n\t}\n\n\tif (pjob->ji_parent2child_job_update_pipe != -1) {\n\t\t(void) close(pjob->ji_parent2child_job_update_pipe);\n\t\tpjob->ji_parent2child_job_update_pipe = -1;\n\t}\n\n\tif (pjob->ji_parent2child_job_update_status_pipe != -1) {\n\t\t(void) close(pjob->ji_parent2child_job_update_status_pipe);\n\t\tpjob->ji_parent2child_job_update_status_pipe = -1;\n\t}\n\n\tif (pjob->ji_parent2child_moms_status_pipe != -1) {\n\t\t(void) close(pjob->ji_parent2child_moms_status_pipe);\n\t\tpjob->ji_parent2child_moms_status_pipe = -1;\n\t}\n\n\tDBPRT((\"%s: read start return %d %d\\n\", __func__,\n\t       sjr.sj_code, sjr.sj_session))\n\n\t/* update pjob with values set from a prologue/launch hook\n\t * since these are hooks that are executing in a child process\n\t * and changes inside the child will not be reflected in main\n\t * mom\n\t */\n\tif ((num_eligible_hooks(HOOK_EVENT_EXECJOB_PROLOGUE) > 0) ||\n\t    (num_eligible_hooks(HOOK_EVENT_EXECJOB_LAUNCH) > 0)) {\n\t\tchar hook_outfile[MAXPATHLEN + 1];\n\t\tstruct stat stbuf;\n\t\tint reject_rerunjob = 0;\n\t\tint reject_deletejob = 0;\n\n\t\tsnprintf(hook_outfile, MAXPATHLEN, FMT_HOOK_JOB_OUTFILE,\n\t\t\t path_hooks_workdir, pjob->ji_qs.ji_jobid);\n\t\tif (stat(hook_outfile, &stbuf) == 0) {\n\t\t\tpbs_list_head vnl_changes;\n\n\t\t\tCLEAR_HEAD(vnl_changes);\n\t\t\tif (sjr.sj_code == JOB_EXEC_HOOKERROR) {\n\n\t\t\t\tchar hook_buf2[HOOK_BUF_SIZE];\n\t\t\t\tint fd;\n\t\t\t\tchar *hook_name = NULL;\n\t\t\t\tint rd_size = stbuf.st_size;\n\n\t\t\t\tif (rd_size >= HOOK_BUF_SIZE) {\n\t\t\t\t\trd_size = HOOK_BUF_SIZE - 1;\n\t\t\t\t}\n\n\t\t\t\tfd = open(hook_outfile, O_RDONLY);\n\t\t\t\thook_buf2[0] = '\\0';\n\t\t\t\tif (fd != -1) {\n\t\t\t\t\tif (read(fd, hook_buf2, rd_size) == rd_size) {\n\t\t\t\t\t\thook_buf2[rd_size] = '\\0';\n\t\t\t\t\t\tif (hook_buf2[rd_size - 1] == 
'\\n') {\n\t\t\t\t\t\t\thook_buf2[rd_size - 1] = '\\0';\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\thook_name = strchr(hook_buf2, '=');\n\t\t\t\t\t\tif (hook_name != NULL)\n\t\t\t\t\t\t\thook_name++;\n\t\t\t\t\t}\n\n\t\t\t\t\tclose(fd);\n\t\t\t\t\tunlink(hook_outfile);\n\t\t\t\t}\n\t\t\t\tif (hook_name != NULL) {\n\t\t\t\t\tsend_hook_fail_action(find_hook(hook_name));\n\t\t\t\t}\n\n\t\t\t} else if (get_hook_results(hook_outfile, NULL, NULL, NULL, 0,\n\t\t\t\t\t\t    &reject_rerunjob, &reject_deletejob, NULL,\n\t\t\t\t\t\t    NULL, 0, &vnl_changes, pjob,\n\t\t\t\t\t\t    NULL, 0, NULL) != 0) {\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_ERR, __func__, \"Failed to get prologue hook results\");\n\t\t\t\tvna_list_free(vnl_changes);\n\t\t\t\t/* important to unlink this file here */\n\t\t\t\t/* as this file is usually opened in append */\n\t\t\t\t/* mode under mom_process_hooks() */\n\t\t\t\tunlink(hook_outfile);\n\t\t\t} else {\n\t\t\t\t/* Delete job or reject job actions */\n\t\t\t\t/* NOTE: Must appear here before vnode changes, */\n\t\t\t\t/* since this action will be sent whether or not */\n\t\t\t\t/* hook script executed by PBSADMIN or not. 
*/\n\t\t\t\tif (reject_deletejob) {\n\t\t\t\t\t/* deletejob takes precedence */\n\t\t\t\t\tnew_job_action_req(pjob, HOOK_PBSADMIN, JOB_ACT_REQ_DELETE);\n\t\t\t\t} else if (reject_rerunjob) {\n\t\t\t\t\tnew_job_action_req(pjob, HOOK_PBSADMIN, JOB_ACT_REQ_REQUEUE);\n\t\t\t\t}\n\n\t\t\t\t/* Whether or not we accept or reject, we'll make */\n\t\t\t\t/* job changes, vnode changes, job actions */\n\t\t\t\tenqueue_update_for_send(pjob, IS_RESCUSED_FROM_HOOK);\n\n\t\t\t\t/* Push vnl hook changes to server */\n\t\t\t\thook_requests_to_server(&vnl_changes);\n\n\t\t\t\tunlink(hook_outfile);\n\t\t\t}\n\t\t}\n\t}\n\n\t/*\n\t ** Set the global id before exiting on error so any\n\t ** information can be put into the job struct first.\n\t */\n\tset_globid(pjob, &sjr);\n\tif (sjr.sj_code < 0) {\n#if MOM_ALPS\n\t\t/* we couldn't get a reservation so refresh the inventory */\n\t\tif (sjr.sj_reservation == -1)\n\t\t\tcall_hup = HUP_INIT;\n#endif\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\t\tAFSLOG_TERM(ptask);\n#endif\n\t\t(void) sprintf(log_buffer, \"job not started, %s %d\",\n\t\t\t       (sjr.sj_code == JOB_EXEC_RETRY) ? 
\"Retry\" : \"Failure\", sjr.sj_code);\n\t\texec_bail(pjob, sjr.sj_code, log_buffer);\n\t\treturn;\n\t}\n\n\tptask->ti_qs.ti_sid = sjr.sj_session;\n\tptask->ti_qs.ti_status = TI_STATE_RUNNING;\n\n\tstrcpy(ptask->ti_qs.ti_parentjobid, pjob->ji_qs.ji_jobid);\n\tif (task_save(ptask) == -1) {\n\t\t(void) sprintf(log_buffer, \"Task save failed\");\n\t\texec_bail(pjob, JOB_EXEC_RETRY, log_buffer);\n\t\treturn;\n\t}\n\n\t/*\n\t * return from the starter indicated the job is a go ...\n\t * record the start time and session/process id\n\t */\n\n\tstart_walltime(pjob);\n\n\tset_jattr_l_slim(pjob, JOB_ATR_session_id, sjr.sj_session, SET);\n\n\tset_job_state(pjob, JOB_STATE_LTR_RUNNING);\n\tset_job_substate(pjob, JOB_SUBSTATE_RUNNING);\n\tjob_save(pjob);\n\n\tif (mom_get_sample() == PBSE_NONE) {\n\t\ttime_resc_updated = time_now;\n\t\t(void) mom_set_use(pjob);\n\t}\n\t/*\n\t * these are set so that it will\n\t * return them to the Server on the first update below\n\t */\n\t(get_jattr(pjob, JOB_ATR_errpath))->at_flags |= ATR_VFLAG_MODIFY;\n\t(get_jattr(pjob, JOB_ATR_outpath))->at_flags |= ATR_VFLAG_MODIFY;\n\t(get_jattr(pjob, JOB_ATR_session_id))->at_flags |= ATR_VFLAG_MODIFY;\n\t(get_jattr(pjob, JOB_ATR_altid))->at_flags |= ATR_VFLAG_MODIFY;\n\t(get_jattr(pjob, JOB_ATR_state))->at_flags |= ATR_VFLAG_MODIFY;\n\t(get_jattr(pjob, JOB_ATR_substate))->at_flags |= ATR_VFLAG_MODIFY;\n\t(get_jattr(pjob, JOB_ATR_jobdir))->at_flags |= ATR_VFLAG_MODIFY;\n\t(get_jattr(pjob, JOB_ATR_altid2))->at_flags |= ATR_VFLAG_MODIFY;\n\t(get_jattr(pjob, JOB_ATR_acct_id))->at_flags |= ATR_VFLAG_MODIFY;\n\n\tenqueue_update_for_send(pjob, IS_RESCUSED);\n\tnext_sample_time = min_check_poll;\n\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid, \"Started, pid = %d\", sjr.sj_session);\n\n\treturn;\n}\n\n/**\n * @brief\n *\tRegenerate the PBS_NODEFILE of a job based on internal\n *\tnodes-related data.\n * @param[in]\tpjob\t- the job whose PBS_NODEFILE is to be generated.\n * 
@param[out]\tnodefile - buffer to hold the path to PBS_NODEFILE\n *\t\t\t  that got regenerated.\n *\t\t\t  NOTE: OK for this to be NULL, which means\n *\t\t\t  don't save nodefile path.\n *\n * @param[in] nodefile_sz - size of the 'nodefile' buffer.\n * @param[out] err_msg\t- buffer to hold the error message if this\n *\t\t\t function returns a failure.\n * @param[in]\terr_msg_sz - size of the 'err_msg' buffer.\n *\n * @return int\n * @retval  0\tsuccess\n * @retval < 0\tfailure\n *\n */\nint\ngenerate_pbs_nodefile(job *pjob, char *nodefile, int nodefile_sz,\n\t\t      char *err_msg, int err_msg_sz)\n{\n\tFILE *nhow;\n\tint j, vnodenum;\n\tchar pbs_nodefile[MAXPATHLEN + 1];\n\n\tif (pjob == NULL) {\n\t\tif ((err_msg != NULL) && (err_msg_sz > 0))\n\t\t\tsnprintf(err_msg, err_msg_sz, \"bad pjob param\");\n\t\treturn (-1);\n\t}\n\n\tif ((err_msg != NULL) && (err_msg_sz > 0))\n\t\terr_msg[0] = '\\0';\n\n\tsnprintf(pbs_nodefile, sizeof(pbs_nodefile) - 1, \"%s/aux/%s\",\n\t\t pbs_conf.pbs_home_path, pjob->ji_qs.ji_jobid);\n\n\tif ((nhow = fopen(pbs_nodefile, \"w\")) == NULL) {\n\t\tif ((err_msg != NULL) && (err_msg_sz > 0)) {\n\t\t\tsnprintf(err_msg, err_msg_sz,\n\t\t\t\t \"cannot open %s\", pbs_nodefile);\n\t\t}\n\t\treturn (-1);\n\t}\n\t/*\n\t **\tThe file must be owned by root and readable by\n\t **\tthe user.  
We take the easy way out and make\n\t **\tit readable by anyone.\n\t */\n\tif (fchmod(fileno(nhow), 0644) == -1) {\n\t\tif ((err_msg != NULL) && (err_msg_sz > 0)) {\n\t\t\tsnprintf(err_msg, err_msg_sz, \"cannot chmod %s\",\n\t\t\t\t pbs_nodefile);\n\t\t}\n\t\tfclose(nhow);\n\t\t(void) unlink(pbs_nodefile);\n\t\treturn (-1);\n\t}\n\n\t/* write each node name out once per vnode entry */\n\tvnodenum = pjob->ji_numvnod;\n\tfor (j = 0; j < vnodenum; j++) {\n\t\tif (pjob->ji_vnods[j].vn_hname == NULL) {\n\t\t\tsize_t len;\n\t\t\tchar *pdot;\n\n\t\t\t/* we want to write just the short name of the host */\n\t\t\tif ((pdot = strchr(pjob->ji_vnods[j].vn_host->hn_host, '.')) != NULL)\n\t\t\t\tlen = (size_t) (pdot - pjob->ji_vnods[j].vn_host->hn_host);\n\t\t\telse\n\t\t\t\tlen = strlen(pjob->ji_vnods[j].vn_host->hn_host);\n\t\t\tfprintf(nhow, \"%.*s\\n\", (int) len,\n\t\t\t\tpjob->ji_vnods[j].vn_host->hn_host);\n\t\t} else\n\t\t\tfprintf(nhow, \"%s\\n\", pjob->ji_vnods[j].vn_hname);\n\t}\n\tfclose(nhow);\n\n\tif ((nodefile != NULL) && (nodefile_sz > 0))\n\t\tpbs_strncpy(nodefile, pbs_nodefile, nodefile_sz);\n\n\treturn (0);\n}\n\n/**\n * @brief\n *\tRead a piece of data from 'downfds' pipe of size 'data_size'.\n *\n * @param[in]\tdownfds - the pipe descriptor to read from.\n * @param[in]\tdata_size - the size of data to read.\n * @param[in]\twait_sec - # of seconds to wait for data to arrive.\n *\n * @return void *\n * @retval <opaque_data>\t- pointer to some data that is in a fixed\n *\t\t\t\t  memory area that must not be freed and can\n *\t\t\t\t  get overwritten on a next call to this\n *\t\t\t\t  function.\n * @retval NULL\t\t\t- if no data was found or error encountered.\n * @note\n *\tThe read time is timed out using the $job_launch_delay mom config\n *\toption value.\n */\nvoid *\nread_pipe_data(int downfds, int data_size, int wait_sec)\n{\n\tstatic char *buf = NULL;\n\tstatic int buf_size = 0;\n\tint ret;\n\tint nread = 0;\n\tstruct pollfd pollfds[1];\n\tint timeout = (int) (wait_sec * 1000); /* milliseconds */\n\tpollfds[0].fd = downfds;\n\tpollfds[0].events = POLLIN;\n\tpollfds[0].revents = 0;\n\n\tret = poll(pollfds, 1, timeout);\n\n\tif (ret == -1) {\n\t\tlog_err(errno, __func__, \"error on monitoring pipe\");\n\t\treturn NULL;\n\t} else if (ret == 0) {\n\t\t/* poll timed out */\n\t\treturn NULL;\n\t}\n\n\tif (data_size > buf_size) {\n\t\tchar *tpbuf;\n\n\t\ttpbuf = realloc(buf, data_size);\n\t\tif (tpbuf == NULL) {\n\t\t\tlog_err(-1, __func__, \"realloc failure\");\n\t\t\treturn NULL;\n\t\t}\n\t\tbuf = tpbuf;\n\t\tbuf_size = data_size;\n\t}\n\tmemset(buf, 0, buf_size);\n\n\tnread = readpipe(downfds, buf, data_size);\n\n\tif (data_size != nread) {\n\t\tlog_err(-1, __func__, \"did not receive all data\");\n\t\treturn NULL;\n\t}\n\treturn (buf);\n}\n\n/**\n * @brief\n *\tWrite a piece of data of size 'data_size' into pipe descriptors\n *\t'upfds' (data write) and 'downfds' (data ack).\n *\n * @param[in]\tupfds - pipe descriptor upstream.\n * @param[in]\tdownfds - pipe descriptor downstream\n * @param[in]\tdata - the data to write\n * @param[in]\tdata_size - the size of 'data'\n *\n * @return int\n * @retval  0\t- for success\n * @retval  1\t- for failure\n */\nint\nwrite_pipe_data_ack(int upfds, int downfds, void *data, size_t data_size)\n{\n\tvoid *data_recv = NULL;\n\tint nwrite = 0;\n\tsize_t data_size_recv;\n\n\tif ((data == NULL) || (data_size == 0)) {\n\t\treturn (1);\n\t}\n\n\t/* write the data */\n\tnwrite = writepipe(upfds, data, data_size);\n\tif (nwrite != data_size) {\n\t\tlog_err(-1, __func__, \"failed to write data to pipe\");\n\t\treturn (1);\n\t}\n\n\t/* wait for acknowledgement */\n\tdata_recv = read_pipe_data(downfds, sizeof(size_t), PIPE_READ_TIMEOUT);\n\tif (data_recv == NULL) {\n\t\tlog_err(-1, __func__, \"failed to get ack from pipe\");\n\t\treturn (1);\n\t}\n\n\tmemcpy(&data_size_recv, data_recv, sizeof(size_t));\n\tif (data_size_recv != data_size) {\n\t\tlog_err(-1, 
__func__, \"received data does not match sent data\");\n\t\treturn (1);\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n *\tWrite a piece of data of size 'data_size' into pipe 'upfds'.\n *\n * @param[in]\tupfds - pipe descriptor upstream.\n * @param[in]\tdata - the data to write\n * @param[in]\tdata_size - the size of 'data'\n *\n * @return int\n * @retval  0\t- for success\n * @retval  1\t- for failure\n */\nint\nwrite_pipe_data(int upfds, void *data, int data_size)\n{\n\tint nwrite = 0;\n\n\tif ((data == NULL) || (data_size <= 0)) {\n\t\treturn (1);\n\t}\n\n\tnwrite = writepipe(upfds, data, data_size);\n\tif (nwrite != data_size) {\n\t\tlog_err(-1, __func__, \"partial write detected\");\n\t\treturn (1);\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n *\tWrite 'r_size' first, and then the actual data 'r_buf' into pipe\n *\tdescriptors 'upfds' (data write) and 'downfds' (data ack).\n *\n * @param[in]\tupfds - pipe descriptor upstream.\n * @param[in]\tdownfds - pipe descriptor downstream\n * @param[in]\tr_buf - the data to write\n * @param[in]\tr_size - the size of 'r_buf'\n *\n * @return int\n * @retval  0\t- for success\n * @retval  1\t- for failure\n *\n */\nint\nsend_string_data(int upfds, int downfds, void *r_buf, size_t r_size)\n{\n\t/* send new string size */\n\tif (write_pipe_data_ack(upfds, downfds, &r_size, sizeof(size_t)) != 0) {\n\t\treturn (1);\n\t}\n\t/* now send the actual string data */\n\tif (write_pipe_data_ack(upfds, downfds, r_buf, r_size) != 0) {\n\t\treturn (1);\n\t}\n\n\treturn (0);\n}\n\n/**\n * @brief\n *\tRead a string of data from the 'downfds' pipe descriptor, using\n *\t'upfds' for acknowledgement.\n *\n * @param[in]\tdownfds - the pipe descriptor to read from.\n * @param[in]\tupfds - the pipe descriptor to use for acks.\n *\n * @return char *\n * @retval <string_of_data>\t- pointer to some string data that is in a\n *\t\t\t\t  fixed memory area that must not be freed and\n *\t\t\t\t  can get overwritten on a next call to this\n *\t\t\t\t  
function.\n * @retval NULL\t\t\t- if no data was found or error encountered.\n * @note\n *\tThe read time is timed out using the $job_launch_delay mom config\n *\toption value.\n */\nchar *\nreceive_string_data(int downfds, int upfds)\n{\n\tchar *r_buf;\n\tsize_t r_size;\n\tsize_t ack_size;\n\n\t/* get size of buffer to receive */\n\tr_buf = read_pipe_data(downfds, sizeof(size_t), PIPE_READ_TIMEOUT);\n\tif (r_buf == NULL) {\n\t\treturn (NULL);\n\t}\n\tmemcpy(&r_size, r_buf, sizeof(size_t));\n\t/* ack that we got the r_size */\n\tack_size = sizeof(size_t);\n\tif (write_pipe_data(upfds, &ack_size, sizeof(size_t)) != 0) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"write of length %lu bytes to pipe failed\", (unsigned long) ack_size);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\treturn (NULL);\n\t}\n\n\t/* now get the actual string data */\n\tr_buf = read_pipe_data(downfds, r_size, PIPE_READ_TIMEOUT);\n\tif (r_buf == NULL) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"read of pipe of size %lu bytes failed\", (unsigned long) r_size);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\treturn (NULL);\n\t}\n\t/* send back as an acknowledgement that MOM got it */\n\tack_size = r_size;\n\tif (write_pipe_data(upfds, &ack_size, sizeof(size_t)) != 0) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"write of length %lu bytes to pipe failed\", (unsigned long) r_size);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\treturn (NULL);\n\t}\n\treturn (r_buf);\n}\n\n/**\n * @brief\n *\tSend a command 'cmd' request using the pipes given.\n *\n * @param[in]\tupfds - upstream pipe\n * @param[in]\tdownfds - downstream pipe\n * @param[in]\tcmd - command request to send (e.g. 
IM_EXEC_PROLOGUE)\n *\n * @return int\n * @retval 0\t- success\n * @retval 1\t- fail\n */\nint\nsend_pipe_request(int upfds, int downfds, int cmd)\n{\n\tchar *r_buf;\n\tint cmd_read;\n\n\tif (write_pipe_data(upfds, &cmd, sizeof(int)) != 0) {\n\t\tlog_err(-1, __func__, \"bad write to pipe\");\n\t\treturn (1);\n\t}\n\n\t/* wait for acknowledgement */\n\tr_buf = read_pipe_data(downfds, sizeof(int), PIPE_READ_TIMEOUT);\n\tif (r_buf == NULL) {\n\t\tlog_err(-1, __func__, \"bad read from pipe\");\n\t\treturn (1);\n\t}\n\n\tmemcpy(&cmd_read, r_buf, sizeof(int));\n\tif (cmd != cmd_read) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"wrote %d got %d\", cmd, cmd_read);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\treturn (1);\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n *\tReturns 1 (true) if sister moms have all replied IM_ALL_OKAY status\n *\tregarding execution of remote prologue hooks.\n *\n * @param[in,out]\tpjob\t - job being operated on.\n * @param[in]\t\tpipefd\t- pipe to mother superior to get status info.\n * @return int\n * @retval 1\t- for true\n * @retval 0\t- for false\n */\nint\nprologue_hook_all_okay_from_sisters_moms(job *pjob, int pipefd)\n{\n\tint cmd_ack = 0;\n\tchar *r_buf = NULL;\n\n\tif (pipefd == -1)\n\t\treturn (0);\n\n\t/* get cmd_ack from parent that it received IM_ALL_OKAY status from\n\t * all sister moms regarding execution of remote prologue hooks.\n\t */\n\tr_buf = read_pipe_data(pipefd, sizeof(int), 0);\n\tif (r_buf != NULL)\n\t\tmemcpy(&cmd_ack, r_buf, sizeof(int));\n\n\tif ((r_buf == NULL) || (cmd_ack != IM_ALL_OKAY))\n\t\treturn (0);\n\treturn (1);\n}\n\n/**\n * @brief\n *\tWait/read from 'pipefd' pipe for node names of unhealthy moms, and\n *\tupdate the job 'pjob''s ji_node_list and ji_failed_node_list accordingly.\n *\tReturn in 'vnl_fails' those entries in job's exec_vnode where the\n *\tvnodes are managed by parent moms appearing in pjob->ji_failed_node_list.\n *\n * @param[in,out]\tpjob\t - job being operated on.\n * 
@param[in]\t\tpipefd\t- pipe to mother superior to get data.\n * @param[in]\t\tprolo_pipefd\t- pipe to mother superior to get info\n *\t\t\t\t   about remote prologue hook execution\n * @param[out]\t\tvnl_fails - fill in with the list of vnodes and their\n *\t\t\t\t   resources with non-healthy parent moms.\n * @param[out]\t\tvnl_good - fill in with the list of vnodes and their\n *\t\t\t\t   resources with functional parent moms.\n * @param[in]\t\ttimeout\t- # of seconds to spend waiting for the list of failed\n *\t\t\t\t   mom hosts.\n * @return int\n * @retval 0\t- for success\n * @retval 1\t- for failure\n */\nint\nget_failed_moms_and_vnodes(job *pjob, int pipefd, int prolo_pipefd, vnl_t **vnl_fails, vnl_t **vnl_good, unsigned int timeout)\n{\n\tsize_t r_size = 0;\n\tchar *r_buf = NULL;\n\tint timer;\n\tchar err_msg[LOG_BUF_SIZE];\n\tint prolo_okay = 0;\n\n\tif (pjob == NULL)\n\t\treturn (1);\n\n\t/* Get failed mom hosts, and update the job's node_list and failed_node_list */\n\ttimer = timeout;\n\tdo {\n\t\t/* get size of buffer to receive */\n\t\tr_buf = read_pipe_data(pipefd, sizeof(size_t), 1);\n\t\tif (r_buf != NULL) {\n\t\t\tmemcpy(&r_size, r_buf, sizeof(size_t));\n\t\t\t/* now get the actual string data */\n\t\t\tr_buf = read_pipe_data(pipefd, r_size, 0);\n\t\t\tif (r_buf != NULL) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"received from parent mom that node's host %s is not healthy\", r_buf);\n\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\treliable_job_node_add(&pjob->ji_failed_node_list, r_buf);\n\t\t\t\treliable_job_node_delete(&pjob->ji_node_list, r_buf);\n\t\t\t}\n\t\t}\n\t\tif (prolo_pipefd != -1)\n\t\t\tprolo_okay = prologue_hook_all_okay_from_sisters_moms(pjob, prolo_pipefd);\n\t\ttimer--;\n\t} while ((timer >= 0) && !prolo_okay);\n\n\tif ((prolo_pipefd != -1) && !prolo_okay)\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, \"not all prologue 
hooks to sister moms completed, but job will proceed to execute\");\n\n\t/* now prune_exec_vnode taking away vnodes managed by moms\n\t * in job's node_fail_list, and also satisfy the original\n\t * job schedselect\n\t */\n\tif (prune_exec_vnode(pjob, NULL, vnl_fails, vnl_good, err_msg, LOG_BUF_SIZE) != 0) {\n\t\treturn (1);\n\t}\n\n\treturn (0);\n}\n\n/**\n * @brief\n *\tThis is called by a job child process telling parent mom of job attribute\n *\tupdates, using the communication pipes:'pipefd_write', 'pipefd_ack',\n *\t'pipefd_status'.\n *\n * @param[in,out]\tpjob\t - job being operated on.\n * @param[in]\t\tpipefd_write - for sending the job update request.\n * @param[in]\t\tpipefd_ack - for receiving the ack from parent mom that it\n *\t\t\t\t     has received the job update request.\n * @param[in]\t\tpipefd_status - for child to get the result from parent mom\n *\t\t\t\t\ton the job update.\n * @retval 0\t- for success\n * @retval 1\t- for failure\n */\nint\nsend_update_job(job *pjob, int pipefd_write, int pipefd_ack, int pipefd_status)\n{\n\tint exec_vnode_hookset;\n\tint schedselect_hookset;\n\tint exec_host_hookset;\n\tint exec_host2_hookset;\n\tsize_t r_size = 0;\n\tchar *r_buf = NULL;\n\tint cmd_ack = IM_ALL_OKAY;\n\n\tif (pjob == NULL)\n\t\treturn (1);\n\n\texec_vnode_hookset = (get_jattr(pjob, JOB_ATR_exec_vnode))->at_flags & ATR_VFLAG_HOOK;\n\tschedselect_hookset = (get_jattr(pjob, JOB_ATR_SchedSelect))->at_flags & ATR_VFLAG_HOOK;\n\texec_host_hookset = (get_jattr(pjob, JOB_ATR_exec_host))->at_flags & ATR_VFLAG_HOOK;\n\texec_host2_hookset = (get_jattr(pjob, JOB_ATR_exec_host2))->at_flags & ATR_VFLAG_HOOK;\n\tif (!exec_vnode_hookset || !schedselect_hookset ||\n\t    (!exec_host_hookset && !exec_host2_hookset)) {\n\t\treturn (1);\n\t}\n\n\t/* now that we pruned exec_vnode, need to send the\n\t * update to the parent mom\n\t */\n\tif (send_pipe_request(pipefd_write, pipefd_ack, IM_UPDATE_JOB) != 0) {\n\t\tlog_err(-1, __func__, \"send of IM_UPDATE_JOB to 
parent mom failed\");\n\t\treturn (1);\n\t}\n\n\t/* first send the new exec_vnode */\n\tr_buf = get_jattr_str(pjob, JOB_ATR_exec_vnode);\n\tr_size = strlen(r_buf) + 1;\n\n\tif (send_string_data(pipefd_write, pipefd_ack, r_buf, r_size) != 0) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"failed to send_string_data %s to parent mom\", r_buf);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\treturn (1);\n\t}\n\n\t/* now send new exec_host or exec_host2 */\n\tif (is_jattr_set(pjob, JOB_ATR_exec_host2))\n\t\tr_buf = get_jattr_str(pjob, JOB_ATR_exec_host2);\n\telse if (is_jattr_set(pjob, JOB_ATR_exec_host)) /* send new exec_host */\n\t\tr_buf = get_jattr_str(pjob, JOB_ATR_exec_host);\n\telse {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"job %s has unset exec_host and exec_host2\", pjob->ji_qs.ji_jobid);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\treturn (1);\n\t}\n\tr_size = strlen(r_buf) + 1;\n\n\tif (send_string_data(pipefd_write, pipefd_ack, r_buf, r_size) != 0) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"failed to send_string_data %s to parent mom\", r_buf);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\treturn (1);\n\t}\n\n\t/* now send schedselect */\n\tr_buf = get_jattr_str(pjob, JOB_ATR_SchedSelect);\n\n\tr_size = strlen(r_buf) + 1;\n\tif (send_string_data(pipefd_write, pipefd_ack, r_buf, r_size) != 0) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"failed to send_string_data %s to parent mom\", r_buf);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\treturn (1);\n\t}\n\n\t/* clear the hook set flag since we've sent the update */\n\t(get_jattr(pjob, JOB_ATR_exec_vnode))->at_flags &= ~ATR_VFLAG_HOOK;\n\tif (exec_host2_hookset)\n\t\t(get_jattr(pjob, JOB_ATR_exec_host2))->at_flags &= ~ATR_VFLAG_HOOK;\n\telse\n\t\t(get_jattr(pjob, JOB_ATR_exec_host))->at_flags &= ~ATR_VFLAG_HOOK;\n\t(get_jattr(pjob, JOB_ATR_SchedSelect))->at_flags &= ~ATR_VFLAG_HOOK;\n\n\tif (pjob->ji_numnodes > 1) {\n\t\t/* get cmd_ack from parent that it received\n\t\t * and 
acted upon the job updates from sister moms\n\t\t */\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"waiting up to %d secs for job update acks from sister moms\", PIPE_READ_TIMEOUT);\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\t\tr_buf = read_pipe_data(pipefd_status, sizeof(int), PIPE_READ_TIMEOUT);\n\t\tif (r_buf != NULL)\n\t\t\tmemcpy(&cmd_ack, r_buf, sizeof(int));\n\t\tif ((r_buf == NULL) || (cmd_ack != IM_ALL_OKAY)) {\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, \"not all job updates to sister moms completed\");\n\t\t}\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n *\tGet/set job pjob's exec_vnode, exec_host, schedselect from the job's\n *\t3rd pipe, and communicate the new exec_vnode and schedselect\n *\tvalues to the server (no need to send exec_host as the server will\n *\tjust recreate it on its end). The sister moms whose vnodes have been\n *\tleft out of the new exec_vnode will get an IM_DELETE_JOB2 request.\n * @param[in]\tpjob - job whose exec_vnode/exec_host/schedselect is being\n *\t\t\tobtained.\n * @param[out]\tmsg - fill in with the error message received if this function\n *\t\t\tencounters a failure.\n * @param[in]\tmsg_size - size of 'msg'.\n *\n * @return int\n * @retval 0\t- for success\n * @retval 1\t- for non-success due to pipe failure\n * @retval 2\t- for no data found\n * @retval -1\t- for non-success due to internal error.\n */\nint\nget_new_exec_vnode_host_schedselect(job *pjob, char *msg, size_t msg_size)\n{\n\tchar *new_exec_vnode = NULL;\n\tchar *new_exec_host = NULL;\n\tchar *new_schedselect = NULL;\n\tchar *r_buf;\n\tint rc = 0;\n\n\t/* get exec_vnode; don't close pipes */\n\tr_buf = receive_string_data(pjob->ji_child2parent_job_update_pipe,\n\t\t\t\t    pjob->ji_parent2child_job_update_pipe);\n\tif (r_buf == NULL) {\n\t\t(void) snprintf(msg, msg_size, \"failed to obtain new exec_vnode\");\n\t\treturn (1);\n\t}\n\tnew_exec_vnode = strdup(r_buf);\n\tif 
(new_exec_vnode == NULL) {\n\t\t(void) snprintf(msg, msg_size, \"%s: new exec_vnode strdup error\", __func__);\n\t\treturn (1);\n\t}\n\n\t/* get exec_host */\n\tr_buf = receive_string_data(pjob->ji_child2parent_job_update_pipe, pjob->ji_parent2child_job_update_pipe);\n\tif (r_buf == NULL) {\n\t\t(void) snprintf(msg, msg_size, \"failed to obtain new exec_host\");\n\t\tfree(new_exec_vnode);\n\t\treturn (1);\n\t}\n\n\tnew_exec_host = strdup(r_buf);\n\tif (new_exec_host == NULL) {\n\t\t(void) snprintf(msg, msg_size, \"failed to strdup new exec_host\");\n\t\tfree(new_exec_vnode);\n\t\treturn (1);\n\t}\n\t/* get schedselect */\n\tr_buf = receive_string_data(pjob->ji_child2parent_job_update_pipe, pjob->ji_parent2child_job_update_pipe);\n\tif (r_buf == NULL) {\n\t\t(void) snprintf(msg, msg_size, \"failed to obtain new schedselect\");\n\t\tfree(new_exec_vnode);\n\t\tfree(new_exec_host);\n\t\treturn (1);\n\t}\n\n\tnew_schedselect = strdup(r_buf);\n\tif (new_schedselect == NULL) {\n\t\t(void) snprintf(msg, msg_size, \"failed to strdup new schedselect\");\n\t\tfree(new_exec_vnode);\n\t\tfree(new_exec_host);\n\t\treturn (1);\n\t}\n\n\t/* set job's exec_vnode */\n\tsnprintf(log_buffer, sizeof(log_buffer), \"pruned from exec_vnode=%s\",\n\t\t get_jattr_str(pjob, JOB_ATR_exec_vnode));\n\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t \"pruned to exec_vnode=%s\", new_exec_vnode);\n\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\n\tset_jattr_str_slim(pjob, JOB_ATR_exec_vnode, new_exec_vnode, NULL);\n\n\t(void) update_resources_list(pjob, ATTR_l,\n\t\t\t\t     JOB_ATR_resource, new_exec_vnode, INCR, 0,\n\t\t\t\t     JOB_ATR_resource_orig);\n\n\tif (is_jattr_set(pjob, JOB_ATR_exec_host2))\n\t\tset_jattr_str_slim(pjob, JOB_ATR_exec_host2, new_exec_host, NULL);\n\telse if (is_jattr_set(pjob, 
JOB_ATR_exec_host))\n\t\tset_jattr_str_slim(pjob, JOB_ATR_exec_host, new_exec_host, NULL);\n\n\t/* Send DELETE_JOB2 request to the sister moms not in\n\t * 'new_exec_host', to kill the job on that sister and\n\t * report resources_used info.\n\t */\n\t(void) send_sisters_inner(pjob, IM_DELETE_JOB2, NULL, new_exec_host);\n\n\tset_jattr_str_slim(pjob, JOB_ATR_SchedSelect, new_schedselect, NULL);\n\n\tfree(new_exec_vnode);\n\tfree(new_exec_host);\n\tfree(new_schedselect);\n\n\tif ((rc = job_nodes(pjob)) != 0) {\n\t\tsnprintf(msg, msg_size, \"failed updating internal nodes data (rc=%d)\", rc);\n\t\treturn (-1);\n\t}\n\tif (generate_pbs_nodefile(pjob, NULL, 0, msg, msg_size) != 0) {\n\t\treturn (-1);\n\t}\n\n\tjob_save(pjob);\n\t/* set modify flag on the job attributes that will be sent to the server */\n\t(get_jattr(pjob, JOB_ATR_exec_vnode))->at_flags |= ATR_VFLAG_MODIFY;\n\t(get_jattr(pjob, JOB_ATR_SchedSelect))->at_flags |= ATR_VFLAG_MODIFY;\n\tenqueue_update_for_send(pjob, IS_RESCUSED);\n\n\treturn (0);\n}\n\n/**\n * @brief\n *\tA task that will report failed node hosts due to\n *\tunsuccessful execjob_prologue hook execution.\n *\n * @param[in]\tptask - task to process.\n *\n * @return none\n *\n */\nstatic void\nreport_failed_node_hosts_task(struct work_task *ptask)\n{\n\tjob *pjob = (job *) ptask->wt_parm1;\n\treliable_job_node *rjn, *rjn_next;\n\n\tif (pjob == NULL) {\n\t\tlog_err(-1, __func__, \"task structure contains reference to NULL job\");\n\t\treturn;\n\t}\n\tif (pjob->ji_report_task)\n\t\tpjob->ji_report_task = NULL;\n\n\tif (!check_job_state(pjob, JOB_STATE_LTR_RUNNING) ||\n\t    !check_job_substate(pjob, JOB_SUBSTATE_PRERUN))\n\t\treturn; /* job no longer waiting for healthy moms */\n\n\tfor (rjn = (reliable_job_node *) GET_NEXT(pjob->ji_node_list); rjn != NULL; rjn = rjn_next) {\n\t\trjn_next = (reliable_job_node *) GET_NEXT(rjn->rjn_link);\n\t\tif (strcmp(rjn->rjn_host, mom_host) == 0)\n\t\t\tcontinue;\n\n\t\tif (!rjn->prologue_hook_success) 
{\n\t\t\treliable_job_node_add(&pjob->ji_failed_node_list, rjn->rjn_host);\n\t\t\tif (pjob->ji_parent2child_moms_status_pipe != -1) {\n\t\t\t\tsize_t r_size;\n\t\t\t\tr_size = strlen(rjn->rjn_host) + 1;\n\t\t\t\tif (write_pipe_data(pjob->ji_parent2child_moms_status_pipe, &r_size, sizeof(size_t)) == 0)\n\t\t\t\t\t(void) write_pipe_data(pjob->ji_parent2child_moms_status_pipe, rjn->rjn_host, r_size);\n\t\t\t\telse\n\t\t\t\t\tlog_err(errno, __func__, \"failed to write\");\n\t\t\t}\n\t\t\tdelete_link(&rjn->rjn_link);\n\t\t\tfree(rjn);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \tReceive a special request from the pipe represented by descriptor\n *\t'sd'.\n * @param[in]\tsd - connection descriptor\n *\n * @return none\n *\n */\nstatic void\nreceive_pipe_request(int sd)\n{\n\tconn_t *conn;\n\tint i;\n\tjob *pjob = NULL;\n\tpbs_task *ptask;\n\tint cmd;\n\tchar msg[LOG_BUF_SIZE];\n\n\tif ((conn = get_conn(sd)) == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"unable to find pipe\");\n\t\treturn;\n\t}\n\n\tptask = (pbs_task *) conn->cn_data;\n\tif (ptask == NULL)\n\t\treturn;\n\n\tpjob = ptask->ti_job;\n\n\tif (pjob == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"no job task associated with connection\");\n\t\treturn;\n\t}\n\n\t/* now we read the cmd or error */\n\ti = readpipe(pjob->ji_jsmpipe2, &cmd, sizeof(int));\n\n\tif (i != sizeof(int)) {\n\t\treturn;\n\t}\n\n\t/* send back as an acknowledgement that MOM got it */\n\t(void) writepipe(pjob->ji_mjspipe2, &cmd, sizeof(int));\n\n\tif (cmd == IM_EXEC_PROLOGUE) {\n\n\t\tif (send_sisters(pjob, IM_EXEC_PROLOGUE, NULL) != pjob->ji_numnodes - 1) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"warning: %s: IM_EXEC_PROLOGUE requests \"\n\t\t\t\t \"could not reach some sister moms\",\n\t\t\t\t pjob->ji_qs.ji_jobid);\n\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t}\n\n\t\tif (do_tolerate_node_failures(pjob)) {\n\t\t\tlong delay_value;\n\t\t\t/* execute report task 'delay_value' seconds from\n\t\t\t * now, where the value 
is 95% of the job_launch_delay\n\t\t\t * value. This allows the waiting child mom to\n\t\t\t * capture the failed node host values before it times\n\t\t\t * out on job_launch_delay.\n\t\t\t */\n\t\t\tdelay_value = 0.95 * job_launch_delay;\n\t\t\tpjob->ji_report_task = set_task(WORK_Timed, time_now + delay_value, report_failed_node_hosts_task, pjob);\n\t\t}\n\t} else {\n\t\tsnprintf(msg, sizeof(msg), \"ignoring unknown cmd %d\", cmd);\n\t\tlog_err(-1, __func__, msg);\n\t}\n}\n\n/**\n * @brief\n *\tClose various pipes touched by a job update.\n *\n * @param[in] pjob - structure handle to job\n *\n * @return Void\n *\n */\nvoid\nclose_update_pipes(job *pjob)\n{\n\tif (pjob == NULL)\n\t\treturn;\n\n\t(void) close_conn(pjob->ji_child2parent_job_update_pipe);\n\tpjob->ji_child2parent_job_update_pipe = -1;\n\t(void) close(pjob->ji_parent2child_job_update_pipe);\n\tpjob->ji_parent2child_job_update_pipe = -1;\n\n\tif (pjob->ji_jsmpipe2 != -1) {\n\t\t(void) close(pjob->ji_jsmpipe2);\n\t\tpjob->ji_jsmpipe2 = -1;\n\t}\n\n\tif (pjob->ji_mjspipe2 != -1) {\n\t\t(void) close(pjob->ji_mjspipe2);\n\t\tpjob->ji_mjspipe2 = -1;\n\t}\n\n\t(void) close_conn(pjob->ji_jsmpipe);\n\tpjob->ji_jsmpipe = -1;\n\t(void) close(pjob->ji_mjspipe);\n\tpjob->ji_mjspipe = -1;\n}\n\n/**\n * @brief\n * \tReceive a special request from the pipe represented by descriptor\n *\t'sd'.\n * @param[in]\tsd - connection descriptor\n *\n * @return none\n *\n */\nstatic void\nreceive_job_update_request(int sd)\n{\n\tconn_t *conn = NULL;\n\tint i;\n\tjob *pjob = NULL;\n\tpbs_task *ptask;\n\tint cmd;\n\tchar msg[LOG_BUF_SIZE];\n\n\tif ((conn = get_conn(sd)) == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"unable to find pipe\");\n\t\treturn;\n\t}\n\n\tptask = (pbs_task *) conn->cn_data;\n\n\tif (ptask == NULL)\n\t\treturn;\n\n\tpjob = ptask->ti_job;\n\n\tif (pjob == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"no job task associated with connection\");\n\t\treturn;\n\t}\n\n\t/* now we read the cmd or error 
*/\n\ti = readpipe(pjob->ji_child2parent_job_update_pipe, &cmd, sizeof(int));\n\n\tif (i != sizeof(int)) {\n\t\tsnprintf(msg, sizeof(msg),\n\t\t\t \"read of pipe for pid job %s got %d not %d: errno %s\",\n\t\t\t pjob->ji_qs.ji_jobid, i, (int) sizeof(int), strerror(errno));\n\n\t\tclose_update_pipes(pjob);\n\t\texec_bail(pjob, JOB_EXEC_RETRY, msg);\n\t\treturn;\n\t}\n\n\t/* send back as an acknowledgement that MOM got it */\n\t(void) writepipe(pjob->ji_parent2child_job_update_pipe, &cmd, sizeof(int));\n\n\tif (cmd == IM_UPDATE_JOB) {\n\t\tmom_hook_input_t hook_input;\n\t\tmom_hook_output_t hook_output;\n\t\tchar hook_msg[HOOK_MSG_SIZE + 1];\n\t\tint hook_errcode = 0;\n\t\thook *last_phook;\n\t\tunsigned int hook_fail_action = 0;\n\n\t\tif (get_new_exec_vnode_host_schedselect(pjob, msg, LOG_BUF_SIZE) != 0) {\n\t\t\tclose_update_pipes(pjob);\n\t\t\texec_bail(pjob, JOB_EXEC_RETRY, msg);\n\t\t\treturn;\n\t\t}\n\n\t\tmom_hook_input_init(&hook_input);\n\t\thook_input.pjob = pjob;\n\n\t\tmom_hook_output_init(&hook_output);\n\t\thook_output.reject_errcode = &hook_errcode;\n\t\thook_output.last_phook = &last_phook;\n\t\thook_output.fail_action = &hook_fail_action;\n\t\tif (mom_process_hooks(HOOK_EVENT_EXECJOB_RESIZE,\n\t\t\t\t      PBS_MOM_SERVICE_NAME, mom_host, &hook_input,\n\t\t\t\t      &hook_output,\n\t\t\t\t      hook_msg, sizeof(hook_msg), 1) == 0) {\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_NOTICE, pjob->ji_qs.ji_jobid, \"execjob_resize hook rejected request\");\n\t\t\tclose_update_pipes(pjob);\n\t\t\texec_bail(pjob, JOB_EXEC_RETRY, hook_msg);\n\t\t\treturn;\n\t\t}\n\t\t(void) send_sisters_job_update(pjob);\n\t\tpjob->ji_updated = 1;\n\t} else {\n\t\tsnprintf(msg, sizeof(msg), \"ignoring unknown cmd %d\", cmd);\n\t\tlog_err(-1, __func__, msg);\n\t}\n}\n\n/**\n *\n * @brief\n * \tUsed by MOM superior to start the shell process for 'pjob'\n *\n * @param[in]\tpjob - pointer to the job whose initial shell is\n *\t\tbeing spawned.\n *\n * @return\tVoid\n 
*\n */\nvoid\nfinish_exec(job *pjob)\n{\n\tchar **argv = NULL;\n\tchar buf[(2 * MAXPATHLEN) + 5];\n\tpid_t cpid;\n\tstruct passwd *pwdp; /* for uid, shell, home dir */\n\tint i, j, k;\n\tpbs_socklen_t len;\n\tint is_interactive = 0;\n\tint numthreads;\n#if SHELL_INVOKE == 1\n\tint pipe_script[] = {-1, -1};\n#endif\n\tchar *pts_name; /* name of slave pty */\n\tchar *shell;\n\tint jsmpipe[] = {-1, -1};\t\t       /* job starter to MOM for sid */\n\tint jsmpipe2[] = {-1, -1};\t\t       /* job starter to MOM */\n\tint child2parent_job_update_pipe[] = {-1, -1}; /* job starter to MOM */\n\tint child2parent_job_update_pipe_w = -1;\n\tint upfds = -1;\t\t\t\t       /* init to invalid fd */\n\tint upfds2 = -1;\t\t\t       /* init to invalid fd */\n\tint mjspipe[] = {-1, -1};\t\t       /* MOM to job starter for ack */\n\tint mjspipe2[] = {-1, -1};\t\t       /* MOM to job starter */\n\tint parent2child_job_update_pipe[] = {-1, -1}; /* MOM to job starter */\n\tint parent2child_job_update_pipe_r = -1;\n\tint parent2child_job_update_status_pipe[] = {-1, -1}; /* MOM to job starter */\n\tint parent2child_job_update_status_pipe_r = -1;\t      /* init to invalid fd */\n\tint downfds = -1;\t\t\t\t      /* init to invalid fd */\n\tint downfds2 = -1;\t\t\t\t      /* init to invalid fd */\n\tint parent2child_moms_status_pipe[] = {-1, -1};\t      /* MOM to job starter */\n\tint parent2child_moms_status_pipe_r = -1;\t      /* init to invalid fd */\n\tint port_out, port_err;\n\tstruct startjob_rtn sjr;\n#if MOM_ALPS\n\tstruct startjob_rtn ack;\n#endif\n\tpbs_task *ptask;\n\tstruct array_strings *vstrs;\n\tstruct sockaddr_in saddr;\n\tint nodemux = 0;\n\tchar *pbs_jobdir; /* staging and execution directory of this job */\n\tint sandbox_private = 0;\n\tint display_number = 0, n = 0;\n\tstruct pfwdsock *socks = NULL;\n#ifdef NAS /* localmod 020 */\n\tchar *schedselect;\n#endif /* localmod 020 */\n\tchar hook_msg[HOOK_MSG_SIZE + 1];\n\tint hook_rc;\n\tint prolo_hooks = 0; /*# of runnable 
prologue hooks*/\n\tchar *progname = NULL;\n\tpbs_list_head argv_list;\n\tchar *the_progname;\n\tchar **the_argv;\n\tchar **the_env;\n\tchar **res_env;\n\thook *last_phook = NULL;\n\tunsigned int hook_fail_action = 0;\n\tint hook_errcode = 0;\n\tmom_hook_input_t hook_input;\n\tmom_hook_output_t hook_output;\n\tint job_has_executable;\n\tFILE *temp_stderr = stderr;\n\tvnl_t *vnl_fails = NULL;\n\tvnl_t *vnl_good = NULL;\n\tstruct sigaction tmp_act_hup;\n\tstruct sigaction tmp_act_int;\n\tstruct sigaction tmp_act_quit;\n\tstruct sigaction tmp_act_stp;\n\n\tptc = -1; /* No current master pty */\n\n\tmemset(&sjr, 0, sizeof(sjr));\n\tif (is_jattr_set(pjob, JOB_ATR_nodemux))\n\t\tnodemux = get_jattr_long(pjob, JOB_ATR_nodemux);\n\n\tif ((i = job_setup(pjob, &pwdp)) != JOB_EXEC_OK) {\n\t\texec_bail(pjob, i, NULL);\n\t\treturn;\n\t}\n\n\t/* wait until after job_setup to call jobdirname(), we need the user's home info */\n\tpbs_jobdir = jobdirname(pjob->ji_qs.ji_jobid, pjob->ji_grpcache->gc_homedir);\n\n\tif ((is_jattr_set(pjob, JOB_ATR_sandbox)) &&\n\t    (strcasecmp(get_jattr_str(pjob, JOB_ATR_sandbox), \"PRIVATE\") == 0)) {\n\t\t/* set local variable sandbox_private */\n\t\tsandbox_private = 1;\n\t}\n\n\t/* If job has been checkpointed, restart from the checkpoint image */\n\n\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_CHKPT) ||\n\t    (pjob->ji_qs.ji_svrflags & JOB_SVFLG_ChkptMig)) {\n\t\tif ((i = local_restart(pjob, NULL)) != 0) {\n\t\t\tpost_restart(pjob, i);\n\t\t\texec_bail(pjob, (i == PBSE_CKPBSY) ? JOB_EXEC_RETRY : JOB_EXEC_FAIL2, NULL);\n\t\t}\n\t\treturn;\n\t}\n\n\tif (pjob->ji_numnodes == 1 || nodemux) {\n\t\tport_out = -1;\n\t\tport_err = -1;\n\t} else {\n\t\t/*\n\t\t ** Get port numbers from file descriptors in job struct.  
The\n\t\t ** sockets are stored there so they can be closed later as\n\t\t ** Main MOM will not need them after the job is going.\n\t\t */\n\t\tlen = sizeof(saddr);\n\t\tif (getsockname(pjob->ji_stdout,\n\t\t\t\t(struct sockaddr *) &saddr, &len) == -1) {\n\t\t\t(void) sprintf(log_buffer, \"getsockname on stdout\");\n\t\t\texec_bail(pjob, JOB_EXEC_RETRY, log_buffer);\n\t\t\treturn;\n\t\t}\n\t\tport_out = (int) ntohs(saddr.sin_port);\n\n\t\tlen = sizeof(saddr);\n\t\tif (getsockname(pjob->ji_stderr,\n\t\t\t\t(struct sockaddr *) &saddr, &len) == -1) {\n\t\t\t(void) sprintf(log_buffer, \"getsockname on stderr\");\n\t\t\texec_bail(pjob, JOB_EXEC_RETRY, log_buffer);\n\t\t\treturn;\n\t\t}\n\t\tport_err = (int) ntohs(saddr.sin_port);\n\t}\n\n\tif (is_jattr_set(pjob, JOB_ATR_interactive) && get_jattr_long(pjob, JOB_ATR_interactive) != 0) {\n\n\t\tis_interactive = 1;\n\n\t\t/*\n\t\t * open a master pty, need to do it here before we fork,\n\t\t * to save the slave name in the master's job structure\n\t\t */\n\n\t\tif ((ptc = open_master(&pts_name)) < 0) {\n\t\t\tlog_err(errno, __func__, \"cannot open master pty\");\n\t\t\texec_bail(pjob, JOB_EXEC_RETRY, NULL);\n\t\t\treturn;\n\t\t}\n\t\tFDMOVE(ptc)\n\n\t\t/* save pty name in job output/error file name */\n\t\tset_jattr_str_slim(pjob, JOB_ATR_outpath, pts_name, NULL);\n\t\tset_jattr_str_slim(pjob, JOB_ATR_errpath, pts_name, NULL);\n\n#if SHELL_INVOKE == 1\n\t} else {\n\t\t/* need a pipe on which to write the shell script \t*/\n\t\t/* file name to the input of the shell\t\t\t*/\n\n\t\tif (pipe(pipe_script) == -1) {\n\t\t\t(void) sprintf(log_buffer,\n\t\t\t\t       \"Failed to create shell name pipe\");\n\t\t\texec_bail(pjob, JOB_EXEC_RETRY, log_buffer);\n\t\t\treturn;\n\t\t}\n#endif /* SHELL_INVOKE */\n\t}\n\n\t/* create pipes between MOM and the job starter    */\n\t/* fork the job starter which will become the job */\n\n\tif ((pipe(mjspipe) == -1) || (pipe(jsmpipe) == -1)) {\n\t\ti = -1;\n\n\t} else {\n\n\t\ti = 0;\n\n\t\t/* 
make sure pipe file descriptors are above 2 */\n\n\t\tif (jsmpipe[1] < 3) {\n\t\t\tupfds = fcntl(jsmpipe[1], F_DUPFD, 3);\n\t\t\t(void) close(jsmpipe[1]);\n\t\t\tjsmpipe[1] = -1;\n\t\t} else {\n\t\t\tupfds = jsmpipe[1];\n\t\t}\n\t\tif (mjspipe[0] < 3) {\n\t\t\tdownfds = fcntl(mjspipe[0], F_DUPFD, 3);\n\t\t\t(void) close(mjspipe[0]);\n\t\t\tmjspipe[0] = -1;\n\t\t} else {\n\t\t\tdownfds = mjspipe[0];\n\t\t}\n\t}\n\tif ((i == -1) || (upfds < 3) || (downfds < 3)) {\n\t\tif (upfds != -1)\n\t\t\t(void) close(upfds);\n\t\tif (downfds != -1)\n\t\t\t(void) close(downfds);\n\t\tif (jsmpipe[0] != -1)\n\t\t\t(void) close(jsmpipe[0]);\n\t\tif (mjspipe[1] != -1)\n\t\t\t(void) close(mjspipe[1]);\n\t\t(void) sprintf(log_buffer, \"Failed to create communication pipe\");\n\t\texec_bail(pjob, JOB_EXEC_RETRY, log_buffer);\n\t\treturn;\n\t}\n\tif ((ptask = momtask_create(pjob)) == NULL) {\n\t\tif (upfds != -1)\n\t\t\t(void) close(upfds);\n\t\tif (downfds != -1)\n\t\t\t(void) close(downfds);\n\t\tif (jsmpipe[0] != -1)\n\t\t\t(void) close(jsmpipe[0]);\n\t\tif (mjspipe[1] != -1)\n\t\t\t(void) close(mjspipe[1]);\n\t\t(void) sprintf(log_buffer, \"Task creation failed\");\n\t\texec_bail(pjob, JOB_EXEC_RETRY, log_buffer);\n\t\treturn;\n\t}\n\n\tprolo_hooks = num_eligible_hooks(HOOK_EVENT_EXECJOB_PROLOGUE);\n\n\t/* create 2nd set of pipes between MOM and the job starter */\n\t/* if there are prologue hooks */\n\tif (prolo_hooks > 0) {\n\t\tif ((pipe(mjspipe2) == -1) || (pipe(jsmpipe2) == -1)) {\n\t\t\ti = -1;\n\n\t\t} else {\n\n\t\t\ti = 0;\n\n\t\t\t/* make sure pipe file descriptors are above 2 */\n\n\t\t\tif (jsmpipe2[1] < 3) {\n\t\t\t\tupfds2 = fcntl(jsmpipe2[1], F_DUPFD, 3);\n\t\t\t\t(void) close(jsmpipe2[1]);\n\t\t\t\tjsmpipe2[1] = -1;\n\t\t\t} else {\n\t\t\t\tupfds2 = jsmpipe2[1];\n\t\t\t}\n\n\t\t\tif (mjspipe2[0] < 3) {\n\t\t\t\tdownfds2 = fcntl(mjspipe2[0], F_DUPFD, 3);\n\t\t\t\t(void) close(mjspipe2[0]);\n\t\t\t\tmjspipe2[0] = -1;\n\t\t\t} else {\n\t\t\t\tdownfds2 = 
mjspipe2[0];\n\t\t\t}\n\t\t}\n\t\tif ((i == -1) || (upfds2 < 3) || (downfds2 < 3)) {\n\t\t\tif (upfds2 != -1)\n\t\t\t\t(void) close(upfds2);\n\t\t\tif (downfds2 != -1)\n\t\t\t\t(void) close(downfds2);\n\t\t\tif (jsmpipe2[0] != -1)\n\t\t\t\t(void) close(jsmpipe2[0]);\n\t\t\tif (mjspipe2[1] != -1)\n\t\t\t\t(void) close(mjspipe2[1]);\n\t\t\t(void) snprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\"Failed to create communication pipe\");\n\t\t\texec_bail(pjob, JOB_EXEC_RETRY, log_buffer);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tif (do_tolerate_node_failures(pjob)) {\n\t\t/* create 3rd set of pipes between MOM and the job starter\n\t\t * fork the job starter which will become the job\n\t\t */\n\n\t\tif ((pipe(parent2child_job_update_pipe) == -1) || (pipe(child2parent_job_update_pipe) == -1)) {\n\t\t\ti = -1;\n\t\t} else {\n\t\t\ti = 0;\n\t\t\t/* make sure pipe file descriptors are above 2 */\n\t\t\tif (child2parent_job_update_pipe[1] < 3) {\n\t\t\t\tchild2parent_job_update_pipe_w = fcntl(child2parent_job_update_pipe[1], F_DUPFD, 3);\n\t\t\t\t(void) close(child2parent_job_update_pipe[1]);\n\t\t\t\tchild2parent_job_update_pipe[1] = -1;\n\t\t\t} else {\n\t\t\t\tchild2parent_job_update_pipe_w = child2parent_job_update_pipe[1];\n\t\t\t}\n\t\t\tif (parent2child_job_update_pipe[0] < 3) {\n\t\t\t\tparent2child_job_update_pipe_r = fcntl(parent2child_job_update_pipe[0], F_DUPFD, 3);\n\t\t\t\t(void) close(parent2child_job_update_pipe[0]);\n\t\t\t\tparent2child_job_update_pipe[0] = -1;\n\t\t\t} else {\n\t\t\t\tparent2child_job_update_pipe_r = parent2child_job_update_pipe[0];\n\t\t\t}\n\t\t}\n\t\tif ((i == -1) || (child2parent_job_update_pipe_w < 3) || (parent2child_job_update_pipe_r < 3)) {\n\t\t\tif (child2parent_job_update_pipe_w != -1)\n\t\t\t\t(void) close(child2parent_job_update_pipe_w);\n\t\t\tif (parent2child_job_update_pipe_r != -1)\n\t\t\t\t(void) close(parent2child_job_update_pipe_r);\n\t\t\tif (child2parent_job_update_pipe[0] != -1)\n\t\t\t\t(void) 
close(child2parent_job_update_pipe[0]);\n\t\t\tif (parent2child_job_update_pipe[1] != -1)\n\t\t\t\t(void) close(parent2child_job_update_pipe[1]);\n\t\t\t(void) sprintf(log_buffer,\n\t\t\t\t       \"Failed to create communication pipe\");\n\t\t\texec_bail(pjob, JOB_EXEC_RETRY, log_buffer);\n\t\t\treturn;\n\t\t}\n\n\t\t/* create 4th set of pipes between MOM and the job starter\n\t\t * fork the job starter which will become the job\n\t\t */\n\n\t\tif (pipe(parent2child_job_update_status_pipe) == -1) {\n\t\t\ti = -1;\n\t\t} else {\n\t\t\ti = 0;\n\t\t\t/* make sure pipe file descriptors are above 2 */\n\t\t\tif (parent2child_job_update_status_pipe[0] < 3) {\n\t\t\t\tparent2child_job_update_status_pipe_r = fcntl(parent2child_job_update_status_pipe[0], F_DUPFD, 3);\n\t\t\t\t(void) close(parent2child_job_update_status_pipe[0]);\n\t\t\t\tparent2child_job_update_status_pipe[0] = -1;\n\t\t\t} else {\n\t\t\t\tparent2child_job_update_status_pipe_r = parent2child_job_update_status_pipe[0];\n\t\t\t}\n\t\t}\n\t\tif ((i == -1) || (parent2child_job_update_status_pipe_r < 3)) {\n\t\t\tif (parent2child_job_update_status_pipe_r != -1)\n\t\t\t\t(void) close(parent2child_job_update_status_pipe_r);\n\t\t\tif (parent2child_job_update_status_pipe[1] != -1)\n\t\t\t\t(void) close(parent2child_job_update_status_pipe[1]);\n\t\t\t(void) sprintf(log_buffer,\n\t\t\t\t       \"Failed to create communication pipe\");\n\t\t\texec_bail(pjob, JOB_EXEC_RETRY, log_buffer);\n\t\t\treturn;\n\t\t}\n\n\t\tif (pipe(parent2child_moms_status_pipe) == -1) {\n\t\t\ti = -1;\n\n\t\t} else {\n\n\t\t\ti = 0;\n\n\t\t\t/* make sure pipe file descriptors are above 2 */\n\t\t\tif (parent2child_moms_status_pipe[0] < 3) {\n\t\t\t\tparent2child_moms_status_pipe_r = fcntl(parent2child_moms_status_pipe[0], F_DUPFD, 3);\n\t\t\t\t(void) close(parent2child_moms_status_pipe[0]);\n\t\t\t\tparent2child_moms_status_pipe[0] = -1;\n\t\t\t} else {\n\t\t\t\tparent2child_moms_status_pipe_r = 
parent2child_moms_status_pipe[0];\n\t\t\t}\n\t\t}\n\n\t\tif ((i == -1) || (parent2child_moms_status_pipe_r < 3)) {\n\t\t\tif (parent2child_moms_status_pipe_r != -1)\n\t\t\t\t(void) close(parent2child_moms_status_pipe_r);\n\t\t\tif (parent2child_moms_status_pipe[1] != -1)\n\t\t\t\t(void) close(parent2child_moms_status_pipe[1]);\n\t\t\t(void) sprintf(log_buffer,\n\t\t\t\t       \"Failed to create communication pipe\");\n\t\t\texec_bail(pjob, JOB_EXEC_RETRY, log_buffer);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tpjob->ji_qs.ji_stime = time_now;\n\tset_jattr_l_slim(pjob, JOB_ATR_stime, time_now, SET);\n\tpjob->ji_sampletim = time_now;\n\n\t/*\n\t * Fork the child process that will become the job.\n\t */\n\tcpid = fork_me(-1);\n\tif (cpid > 0) {\n\t\tconn_t *conn = NULL;\n\n\t\t/* the parent side, still the main man, uhh that is MOM */\n\n\t\t(void) close(upfds);\n\t\t(void) close(downfds);\n\n\t\t(void) close(upfds2);\n\t\t(void) close(downfds2);\n\n\t\t(void) close(child2parent_job_update_pipe_w);\n\t\t(void) close(parent2child_job_update_pipe_r);\n\n\t\t(void) close(parent2child_job_update_status_pipe_r);\n\t\t(void) close(parent2child_moms_status_pipe_r);\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\t\tDIS_tcp_funcs();\n#endif\n\n\t\t/* add the pipe to the connection table so we can poll it */\n\n\t\tif ((conn = add_conn(jsmpipe[0], ChildPipe, (pbs_net_t) 0,\n\t\t\t\t     (unsigned int) 0, NULL, record_finish_exec)) == NULL) {\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t\t  \"Unable to start job, communication connection table is full\");\n\t\t\t(void) close(jsmpipe[0]);\n\t\t\t(void) close(mjspipe[1]);\n\t\t\t(void) close(jsmpipe2[0]);\n\t\t\t(void) close(mjspipe2[1]);\n\n\t\t\t(void) close(child2parent_job_update_pipe[0]);\n\t\t\t(void) close(parent2child_job_update_pipe[1]);\n\n\t\t\t(void) close(parent2child_job_update_status_pipe[1]);\n\t\t\t(void) close(parent2child_moms_status_pipe[1]);\n#if 
SHELL_INVOKE == 1\n\t\t\tif (pipe_script[0] != -1)\n\t\t\t\t(void) close(pipe_script[0]);\n\t\t\tif (pipe_script[1] != -1)\n\t\t\t\t(void) close(pipe_script[1]);\n#endif\n\t\t\texec_bail(pjob, JOB_EXEC_RETRY, NULL);\n\t\t\treturn;\n\t\t}\n\t\tconn->cn_data = ptask;\n\t\tpjob->ji_jsmpipe = jsmpipe[0];\n\t\tpjob->ji_mjspipe = mjspipe[1];\n\t\tpjob->ji_jsmpipe2 = jsmpipe2[0];\n\t\tpjob->ji_mjspipe2 = mjspipe2[1];\n\n\t\t/*\n\t\t * at this point, parent mom writes to\n\t\t * pjob->ji_mjspipe2, and parent reads from\n\t\t * pjob->ji_jsmpipe2\n\t\t */\n\n\t\t/*\n\t\t * if there are prologue hooks to run\n\t\t * add the pipe to the connection table so we can poll it\n\t\t */\n\t\tif (prolo_hooks > 0) {\n\t\t\tif ((conn = add_conn(jsmpipe2[0], ChildPipe,\n\t\t\t\t\t     (pbs_net_t) 0, (unsigned int) 0, NULL,\n\t\t\t\t\t     receive_pipe_request)) == NULL) {\n\t\t\t\tlog_event(PBSEVENT_ERROR,\n\t\t\t\t\t  PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t\t\t  \"Unable to start job, 
communication \"\n\t\t\t\t\t  \"connection table is full\");\n\t\t\t\t(void) close(jsmpipe2[0]);\n\t\t\t\t(void) close(mjspipe2[1]);\n\n\t\t\t\t(void) close(jsmpipe[0]);\n\t\t\t\t(void) close(mjspipe[1]);\n\n\t\t\t\tif (pipe_script[0] != -1)\n\t\t\t\t\t(void) close(pipe_script[0]);\n\t\t\t\tif (pipe_script[1] != -1)\n\t\t\t\t\t(void) close(pipe_script[1]);\n\t\t\t\texec_bail(pjob, JOB_EXEC_RETRY, NULL);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tconn->cn_data = ptask;\n\t\t}\n\n\t\t/*\n\t\t * if the job tolerates node failures,\n\t\t * add the job update pipe to the connection table so we can poll it\n\t\t */\n\t\tif (do_tolerate_node_failures(pjob)) {\n\n\t\t\tif ((conn = add_conn(child2parent_job_update_pipe[0], ChildPipe,\n\t\t\t\t\t     (pbs_net_t) 0, (unsigned int) 0, NULL,\n\t\t\t\t\t     receive_job_update_request)) == NULL) {\n\t\t\t\tlog_event(PBSEVENT_ERROR,\n\t\t\t\t\t  PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t\t\t  \"Unable to start job, communication connection table is full\");\n\t\t\t\t(void) close(child2parent_job_update_pipe[0]);\n\t\t\t\t(void) close(parent2child_job_update_pipe[1]);\n\n\t\t\t\t(void) close(jsmpipe2[0]);\n\t\t\t\t(void) close(mjspipe2[1]);\n\n\t\t\t\t(void) close(jsmpipe[0]);\n\t\t\t\t(void) close(mjspipe[1]);\n\n\t\t\t\tif (pipe_script[0] != -1)\n\t\t\t\t\t(void) close(pipe_script[0]);\n\t\t\t\tif (pipe_script[1] != -1)\n\t\t\t\t\t(void) close(pipe_script[1]);\n\t\t\t\texec_bail(pjob, JOB_EXEC_RETRY, NULL);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tconn->cn_data = ptask;\n\n\t\t\tpjob->ji_child2parent_job_update_pipe = child2parent_job_update_pipe[0];\n\t\t\tpjob->ji_parent2child_job_update_pipe = parent2child_job_update_pipe[1];\n\n\t\t\tpjob->ji_parent2child_job_update_status_pipe = parent2child_job_update_status_pipe[1];\n\t\t\tpjob->ji_parent2child_moms_status_pipe = parent2child_moms_status_pipe[1];\n\t\t}\n\n\t\tif (ptc >= 0) {\n\t\t\t(void) close(ptc);\n\t\t\tptc = -1;\n\t\t}\n\n#if SHELL_INVOKE == 1\n\t\tif (is_interactive 
== 0) {\n\t\t\tchar *s;\n\t\t\tchar *d;\n\t\t\tchar holdbuf[(2 * MAXPATHLEN) + 5];\n\t\t\tint k;\n\n\t\t\tif (*pjob->ji_qs.ji_fileprefix != '\\0')\n\t\t\t\tsprintf(buf, \"%s%s%s\", path_jobs,\n\t\t\t\t\tpjob->ji_qs.ji_fileprefix, JOB_SCRIPT_SUFFIX);\n\t\t\telse\n\t\t\t\tsprintf(buf, \"%s%s%s\", path_jobs,\n\t\t\t\t\tpjob->ji_qs.ji_jobid, JOB_SCRIPT_SUFFIX);\n\n\t\t\tif (chown(buf, pjob->ji_qs.ji_un.ji_momt.ji_exuid,\n\t\t\t\t\tpjob->ji_qs.ji_un.ji_momt.ji_exgid) == -1)\n\t\t\t\t\tlog_errf(-1, __func__, \"chown failed. ERR : %s\", strerror(errno));\n\n\t\t\t/* add escape in front of brackets; leave room for the escape and trailing nul */\n\t\t\tfor (s = buf, d = holdbuf; *s && ((d - holdbuf) < (int) sizeof(holdbuf) - 2); s++, d++) {\n\t\t\t\tif (*s == '[' || *s == ']')\n\t\t\t\t\t*d++ = '\\\\';\n\t\t\t\t*d = *s;\n\t\t\t}\n\t\t\t*d = '\\0';\n\t\t\tsnprintf(buf, sizeof(buf), \"%s\", holdbuf);\n\t\t\tDBPRT((\"shell: %s\\n\", buf))\n\n\t\t\t/* pass name of shell script on pipe\t*/\n\t\t\t/* will be stdin of shell \t\t*/\n\n\t\t\t(void) close(pipe_script[0]);\n\n\t\t\t/* if in \"sandbox=PRIVATE\" mode, prepend the script name on the pipe */\n\t\t\t/* with \"cd $PBS_JOBDIR;\" command */\n\t\t\tif (sandbox_private) {\n\t\t\t\tsnprintf(buf, sizeof(buf), \"cd %s;%.*s\", pbs_jobdir,\n\t\t\t\t\t (int) (sizeof(buf) - strlen(pbs_jobdir) - 5), holdbuf);\n\t\t\t}\n\n\t\t\t(void) strcat(buf, \"\\n\"); /* setup above */\n\t\t\ti = strlen(buf);\n\t\t\tj = 0;\n\t\t\twhile (j < i) {\n\t\t\t\tif ((k = write(pipe_script[1], buf + j, i - j)) < 0) {\n\t\t\t\t\tif (errno == EINTR)\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tj += k;\n\t\t\t}\n\t\t\t(void) close(pipe_script[1]);\n\t\t}\n\n\t\tif (pjob->ji_numnodes > 1 && !nodemux) {\n\t\t\t/*\n\t\t\t * Put port numbers into job struct and close sockets.\n\t\t\t * The job uses them to talk to demux, but main MOM\n\t\t\t * doesn't need them.   
The port numbers are stored\n\t\t\t * here for use in start_process(), to connect to\n\t\t\t * pbs_demux.\n\t\t\t */\n\t\t\t(void) close(pjob->ji_stdout);\n\t\t\tpjob->ji_stdout = port_out;\n\t\t\t(void) close(pjob->ji_stderr);\n\t\t\tpjob->ji_stderr = port_err;\n\t\t}\n\n\t\t/* record job working directory in jobdir attribute */\n\t\tset_jattr_str_slim(pjob, JOB_ATR_jobdir, sandbox_private ? pbs_jobdir : pwdp->pw_dir, NULL);\n#endif /* SHELL_INVOKE */\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\t\tif (is_jattr_set(pjob, JOB_ATR_cred_id))\n\t\t\tsend_cred_sisters(pjob);\n#endif\n\n\t\treturn;\n\n\t} else if (cpid < 0) {\n#if SHELL_INVOKE == 1\n\t\tif (pipe_script[0] != -1)\n\t\t\t(void) close(pipe_script[0]);\n\t\tif (pipe_script[1] != -1)\n\t\t\t(void) close(pipe_script[1]);\n#endif /* SHELL_INVOKE */\n\t\tif (upfds != -1)\n\t\t\t(void) close(upfds);\n\t\tif (downfds != -1)\n\t\t\t(void) close(downfds);\n\t\tif (jsmpipe[0] != -1)\n\t\t\t(void) close(jsmpipe[0]);\n\t\tif (mjspipe[1] != -1)\n\t\t\t(void) close(mjspipe[1]);\n\t\tif (upfds2 != -1)\n\t\t\t(void) close(upfds2);\n\t\tif (downfds2 != -1)\n\t\t\t(void) close(downfds2);\n\t\tif (child2parent_job_update_pipe_w != -1)\n\t\t\t(void) close(child2parent_job_update_pipe_w);\n\t\tif (parent2child_job_update_pipe_r != -1)\n\t\t\t(void) close(parent2child_job_update_pipe_r);\n\t\tif (parent2child_job_update_status_pipe_r != -1)\n\t\t\t(void) close(parent2child_job_update_status_pipe_r);\n\t\tif (parent2child_moms_status_pipe_r != -1)\n\t\t\t(void) close(parent2child_moms_status_pipe_r);\n\n\t\tif (jsmpipe2[0] != -1)\n\t\t\t(void) close(jsmpipe2[0]);\n\t\tif (mjspipe2[1] != -1)\n\t\t\t(void) close(mjspipe2[1]);\n\t\tif (child2parent_job_update_pipe[0] != -1)\n\t\t\t(void) close(child2parent_job_update_pipe[0]);\n\t\tif (parent2child_job_update_pipe[1] != -1)\n\t\t\t(void) close(parent2child_job_update_pipe[1]);\n\t\tif (parent2child_job_update_status_pipe[1] != -1)\n\t\t\t(void) 
close(parent2child_job_update_status_pipe[1]);\n\t\tif (parent2child_moms_status_pipe[1] != -1)\n\t\t\t(void) close(parent2child_moms_status_pipe[1]);\n\n\t\t(void) sprintf(log_buffer, \"Fork failed in %s: %d\",\n\t\t\t       __func__, errno);\n\t\texec_bail(pjob, JOB_EXEC_RETRY, log_buffer);\n\t\treturn;\n\t}\n\t/************************************************/\n\t/*\t\t\t\t\t\t*/\n\t/* The child process - will become THE JOB\t*/\n\t/*\t\t\t\t\t\t*/\n\t/************************************************/\n\n\tif (jsmpipe[0] != -1)\n\t\t(void) close(jsmpipe[0]);\n\n\tif (mjspipe[1] != -1)\n\t\t(void) close(mjspipe[1]);\n\n\tif (jsmpipe2[0] != -1)\n\t\t(void) close(jsmpipe2[0]);\n\n\tif (mjspipe2[1] != -1)\n\t\t(void) close(mjspipe2[1]);\n\n\tif (child2parent_job_update_pipe[0] != -1)\n\t\t(void) close(child2parent_job_update_pipe[0]);\n\n\tif (parent2child_job_update_pipe[1] != -1)\n\t\t(void) close(parent2child_job_update_pipe[1]);\n\n\tif (parent2child_job_update_status_pipe[1] != -1)\n\t\t(void) close(parent2child_job_update_status_pipe[1]);\n\n\tif (parent2child_moms_status_pipe[1] != -1)\n\t\t(void) close(parent2child_moms_status_pipe[1]);\n\n\tCLR_SJR(sjr) /* clear structure used to return info to parent */\n\n\t/* unprotect the job from the vagaries of the kernel */\n\tdaemon_protect(0, PBS_DAEMON_PROTECT_OFF);\n\n\t/* set system core limit */\n#if defined(RLIM64_INFINITY)\n\t(void) setrlimit64(RLIMIT_CORE, &orig_core_limit);\n#else  /* set rlimit 32 bit */\n\t(void) setrlimit(RLIMIT_CORE, &orig_core_limit);\n#endif /* RLIM64_INFINITY */\n\n\t/*\n\t * find which shell to use, one specified or the login shell\n\t */\n\tshell = set_shell(pjob, pwdp); /* in the machine dependent section */\n\n\tprolo_hooks = num_eligible_hooks(HOOK_EVENT_EXECJOB_PROLOGUE);\n\n\t/*\n\t * set up the Environmental Variables to be given to the job\n\t */\n\tvstrs = get_jattr_arst(pjob, JOB_ATR_variables);\n\tpjob->ji_env.v_ensize = vstrs->as_usedptr + num_var_else + num_var_env 
+\n\t\t\t\tEXTRA_ENV_PTRS;\n\tpjob->ji_env.v_used = 0;\n\tpjob->ji_env.v_envp = (char **) calloc(pjob->ji_env.v_ensize, sizeof(char *));\n\tif (pjob->ji_env.v_envp == NULL) {\n\t\tlog_err(ENOMEM, __func__, \"out of memory\");\n\t\tstarter_return(upfds, downfds, JOB_EXEC_FAIL1, &sjr);\n\t}\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\tif (cred_by_job(ptask->ti_job, CRED_RENEWAL) != PBS_KRB5_OK) {\n\t\tstarter_return(upfds, downfds, JOB_EXEC_FAIL_KRB5, &sjr);\n\t}\n\n#if defined(HAVE_LIBKAFS) || defined(HAVE_LIBKOPENAFS)\n\tif (start_afslog(ptask, NULL, pipe_script[0], pipe_script[1]) != PBS_KRB5_OK) {\n\t\tsprintf(log_buffer, \"afslog for task %8.8X not started\",\n\t\t\tptask->ti_qs.ti_task);\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t}\n#endif\n#endif\n\n\t/*  First variables from the local environment */\n\n\tfor (j = 0; j < num_var_env; ++j)\n\t\tbld_env_variables(&(pjob->ji_env), environ[j], NULL);\n\n\t/* Second, the variables passed with the job.  They may */\n\t/* be overwritten with new correct values for this job\t*/\n\n\tfor (j = 0; j < vstrs->as_usedptr; ++j) {\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\t\t\t/* never set KRB5CCNAME; it would rewrite the correct value */\n\t\t\tif (strncmp(vstrs->as_string[j], \"KRB5CCNAME\", strlen(\"KRB5CCNAME\")) == 0)\n\t\t\t\tcontinue;\n#endif\n\t\tbld_env_variables(&(pjob->ji_env), vstrs->as_string[j], NULL);\n\t}\n\n\t/* .. Next the critical variables: home, path, logname, ... 
*/\n\t/* these may replace some passed in with the job\t    */\n\n\t/* HOME */\n\tbld_env_variables(&(pjob->ji_env), variables_else[0], pwdp->pw_dir); /* HOME */\n\n\t/* LOGNAME */\n\tbld_env_variables(&(pjob->ji_env), variables_else[1], pwdp->pw_name);\n\n\t/* PBS_JOBNAME */\n\tbld_env_variables(&(pjob->ji_env), variables_else[2], get_jattr_str(pjob, JOB_ATR_jobname));\n\n\t/* PBS_JOBID */\n\tbld_env_variables(&(pjob->ji_env), variables_else[3], pjob->ji_qs.ji_jobid);\n\n\t/* PBS_QUEUE */\n\tbld_env_variables(&(pjob->ji_env), variables_else[4], get_jattr_str(pjob, JOB_ATR_in_queue));\n\n\t/* SHELL */\n\tbld_env_variables(&(pjob->ji_env), variables_else[5], shell);\n\n\t/* USER, for compatibility */\n\tbld_env_variables(&(pjob->ji_env), variables_else[6], pwdp->pw_name);\n\n\t/* PBS_JOBCOOKIE */\n\tbld_env_variables(&(pjob->ji_env), variables_else[7], get_jattr_str(pjob, JOB_ATR_Cookie));\n\n\t/* PBS_NODENUM */\n\tsprintf(buf, \"%d\", pjob->ji_nodeid);\n\tbld_env_variables(&(pjob->ji_env), variables_else[8], buf);\n\n\t/* PBS_TASKNUM */\n\tsprintf(buf, \"%u\", ptask->ti_qs.ti_task);\n\tbld_env_variables(&(pjob->ji_env), variables_else[9], buf);\n\n\t/* PBS_MOMPORT */\n\tsprintf(buf, \"%u\", pbs_rm_port);\n\tbld_env_variables(&(pjob->ji_env), variables_else[10], buf);\n\n\t/* OMP_NUM_THREADS and NCPUS equal to number of cpus */\n\n\tnumthreads = pjob->ji_vnods[0].vn_threads;\n\tsprintf(buf, \"%d\", numthreads);\n#ifdef NAS /* localmod 020 */\n\t/*\n\t * If ompthreads specified, use it to set OMP_NUM_THREADS, else\n\t * set OMP_NUM_THREADS=1\n\t * (Cannot just leave it unset because then the MKL sparse solvers\n\t * use every CPU in the system.)\n\t */\n\tschedselect = get_jattr_str(pjob, JOB_ATR_SchedSelect);\n\tif (schedselect && strstr(schedselect, OMPTHREADS) != NULL)\n\t\tbld_env_variables(&(pjob->ji_env), variables_else[12], buf);\n\telse\n\t\tbld_env_variables(&(pjob->ji_env), variables_else[12], \"1\");\n#else\n\tbld_env_variables(&(pjob->ji_env), 
variables_else[12], buf);\n#endif /* localmod 020 */\n\tbld_env_variables(&(pjob->ji_env), \"NCPUS\", buf);\n\n\t/* PBS_NODEFILE */\n\n\tif (generate_pbs_nodefile(pjob, buf, sizeof(buf) - 1, log_buffer, LOG_BUF_SIZE - 1) == 0)\n\t\tbld_env_variables(&(pjob->ji_env), variables_else[11], buf);\n\telse {\n\t\tlog_err(errno, __func__, log_buffer);\n\t\tstarter_return(upfds, downfds, JOB_EXEC_FAIL1, &sjr);\n\t}\n\n\t/* PBS_ACCOUNT */\n\tif (is_jattr_set(pjob, JOB_ATR_account))\n\t\tbld_env_variables(&(pjob->ji_env), variables_else[13], get_jattr_str(pjob, JOB_ATR_account));\n\n\t/* If a subjob of an array job, put in the index */\n\n\tif (strchr(pjob->ji_qs.ji_jobid, (int) '[') != NULL) {\n\t\tchar *pparent;\n\t\tchar *pindex;\n\n\t\tget_index_and_parent(pjob->ji_qs.ji_jobid, &pparent, &pindex);\n\t\tbld_env_variables(&(pjob->ji_env), variables_else[14], pindex);\n\t\tbld_env_variables(&(pjob->ji_env), variables_else[15], pparent);\n\t}\n\n\t/* if user specified umask for job, set it */\n\tif (is_jattr_set(pjob, JOB_ATR_umask)) {\n\t\tsprintf(buf, \"%ld\", get_jattr_long(pjob, JOB_ATR_umask));\n\t\tsscanf(buf, \"%o\", &j);\n\t\tumask(j);\n\t} else\n\t\tumask(077);\n\n\t\t/* Add TMPDIR to environment */\n#ifdef NAS /* localmod 010 */\n\t(void) NAS_tmpdirname(pjob);\n#endif /* localmod 010 */\n\tj = mktmpdir(pjob->ji_qs.ji_jobid,\n\t\t     pjob->ji_qs.ji_un.ji_momt.ji_exuid,\n\t\t     pjob->ji_qs.ji_un.ji_momt.ji_exgid,\n\t\t     &(pjob->ji_env));\n\tif (j != 0)\n\t\tstarter_return(upfds, downfds, j, &sjr);\n\n\t/* set PBS_JOBDIR */\n\tif (sandbox_private) {\n\t\t/* Add PBS_JOBDIR if it doesn't already exist */\n\t\tj = mkjobdir(pjob->ji_qs.ji_jobid,\n\t\t\t     pbs_jobdir,\n\t\t\t     pjob->ji_qs.ji_un.ji_momt.ji_exuid,\n\t\t\t     pjob->ji_qs.ji_un.ji_momt.ji_exgid);\n\t\tif (j != 0) {\n\t\t\tsprintf(log_buffer, \"unable to create the job directory %s\",\n\t\t\t\tpbs_jobdir);\n\t\t\tlog_joberr(errno, __func__, log_buffer, 
pjob->ji_qs.ji_jobid);\n\t\t\tstarter_return(upfds, downfds, j, &sjr); /* exits */\n\t\t}\n\t\tbld_env_variables(&(pjob->ji_env), \"PBS_JOBDIR\", pbs_jobdir);\n\t} else {\n\t\tbld_env_variables(&(pjob->ji_env), \"PBS_JOBDIR\", pwdp->pw_dir);\n\t}\n\n\tmom_unnice();\n\n\tif (is_interactive) {\n\t\tstruct sigaction act;\n\t\tchar *termtype;\n\t\tchar *phost;\n\t\tchar *auth_method;\n\t\tchar *encrypt_method;\n\t\tint qsub_sock;\n\t\tint old_qsub_sock;\n\t\tint pts; /* fd for slave pty */\n\t\tint ret = 0;\n\n\t\t/*************************************************************************/\n\t\t/*\t\tWe have an \"interactive\" job, connect the standard\t */\n\t\t/*\t\tstreams to a socket connected to qsub.\t\t\t */\n\t\t/*************************************************************************/\n\n\t\tsigemptyset(&act.sa_mask);\n\t\t/* prevent user from interrupting start of the job */\n\t\tact.sa_flags = 0;\n\t\tact.sa_handler = SIG_IGN;\n\t\t(void) sigaction(SIGHUP, &act, &tmp_act_hup);\n\t\t(void) sigaction(SIGINT, &act, &tmp_act_int);\n\t\t(void) sigaction(SIGQUIT, &act, &tmp_act_quit);\n\t\t(void) sigaction(SIGTSTP, &act, &tmp_act_stp);\n\n#ifdef SA_INTERRUPT\n\t\tact.sa_flags = SA_INTERRUPT;\n#else\n\t\tact.sa_flags = 0;\n#endif /* SA_INTERRUPT */\n\t\tact.sa_handler = no_hang;\n\t\t(void) sigaction(SIGALRM, &act, NULL);\n\t\talarm(30);\n\n\t\t/* Set environment to reflect interactive */\n\n\t\tbld_env_variables(&(pjob->ji_env), \"PBS_ENVIRONMENT\", \"PBS_INTERACTIVE\");\n\n\t\t/* get host where qsub resides */\n\n\t\tphost = arst_string(\"PBS_O_HOST\", get_jattr(pjob, JOB_ATR_variables));\n\t\tif ((phost == NULL) ||\n\t\t    ((phost = strchr(phost, (int) '=')) == NULL)) {\n\t\t\tlog_joberr(-1, __func__, \"PBS_O_HOST not set\",\n\t\t\t\t   pjob->ji_qs.ji_jobid);\n\t\t\tstarter_return(upfds, downfds, JOB_EXEC_FAIL1, &sjr);\n\t\t}\n\n\t\t/* get qsub preferred auth method */\n\n\t\tauth_method = arst_string(\"PBS_O_INTERACTIVE_AUTH_METHOD\", get_jattr(pjob, 
JOB_ATR_variables));\n\t\tif ((auth_method == NULL) ||\n\t\t    ((auth_method = strchr(auth_method, (int) '=')) == NULL)) {\n\t\t\tlog_joberr(-1, __func__, \"PBS_O_INTERACTIVE_AUTH_METHOD not set\",\n\t\t\t\t   pjob->ji_qs.ji_jobid);\n\t\t\tstarter_return(upfds, downfds, JOB_EXEC_FAIL1, &sjr);\n\t\t}\n\n\t\t/* get qsub preferred encrypt method */\n\n\t\tencrypt_method = arst_string(\"PBS_O_INTERACTIVE_ENCRYPT_METHOD\", get_jattr(pjob, JOB_ATR_variables));\n\t\tif ((encrypt_method == NULL) ||\n\t\t    ((encrypt_method = strchr(encrypt_method, (int) '=')) == NULL)) {\n\t\t\tlog_joberr(-1, __func__, \"PBS_O_INTERACTIVE_ENCRYPT_METHOD not set\",\n\t\t\t\t   pjob->ji_qs.ji_jobid);\n\t\t\tstarter_return(upfds, downfds, JOB_EXEC_FAIL1, &sjr);\n\t\t}\n\n\t\t/* Verify preferred auth is supported */\n\t\tif (is_string_in_arr(pbs_conf.supported_auth_methods, auth_method+1)) {\n\t\t\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid,\n\t\t\t\t\"interactive authentication method = %s\", auth_method+1);\n\t\t\tif (strcmp(auth_method+1, AUTH_RESVPORT_NAME) == 0)\n\t\t\t\tqsub_sock = conn_qsub_resvport(phost + 1, get_jattr_long(pjob, JOB_ATR_interactive));\n\t\t\telse\n\t\t\t\tqsub_sock = conn_qsub(phost + 1, get_jattr_long(pjob, JOB_ATR_interactive));\n\t\t} else {\n\t\t\tqsub_sock = -1;\n\t\t\tsprintf(log_buffer, \"interactive authentication method %s not supported\", auth_method+1);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t}\n\n\t\tif (qsub_sock < 0) {\n\t\t\tsprintf(log_buffer, \"cannot open qsub sock for %s\",\n\t\t\t\tpjob->ji_qs.ji_jobid);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\tstarter_return(upfds, downfds, JOB_EXEC_FAIL1, &sjr);\n\t\t}\n\n\t\told_qsub_sock = qsub_sock;\n\t\tFDMOVE(qsub_sock);\n\n\t\tif (get_jattr_str(pjob, JOB_ATR_X11_cookie)) {\n\t\t\tchar display[X_DISPLAY_LEN];\n\n\t\t\tif ((socks = calloc(sizeof(struct pfwdsock), NUM_SOCKS)) == NULL) {\n\t\t\t\t/* FAILURE - cannot alloc memory */\n\t\t\t\tlog_err(errno, 
__func__, \"ERROR: could not calloc!\\n\");\n\t\t\t\tstarter_return(upfds, downfds, JOB_EXEC_FAIL1, &sjr);\n\t\t\t}\n\t\t\tdisplay_number = init_x11_display(socks, 1, /* use localhost only */\n\t\t\t\t\t\t\t  display, pjob->ji_grpcache->gc_homedir,\n\t\t\t\t\t\t\t  get_jattr_str(pjob, JOB_ATR_X11_cookie));\n\n\t\t\tif (display_number >= 0) {\n\t\t\t\tbld_env_variables(&(pjob->ji_env), \"DISPLAY\", display);\n\t\t\t} else {\n\t\t\t\tlog_err(errno, __func__, \"PBS: X11 forwarding init failed\\n\");\n\t\t\t\tstarter_return(upfds, downfds, JOB_EXEC_FAIL1, &sjr);\n\t\t\t}\n\t\t}\n\n\t\tif (qsub_sock != old_qsub_sock) {\n\n\t\t\tif (CS_remap_ctx(old_qsub_sock, qsub_sock) != CS_SUCCESS) {\n\n\t\t\t\t(void) CS_close_socket(old_qsub_sock);\n\t\t\t\tstarter_return(upfds, downfds, JOB_EXEC_FAIL1, &sjr);\n\t\t\t}\n\t\t}\n\n\t\tret = auth_with_qsub(qsub_sock, get_jattr_long(pjob, JOB_ATR_interactive),\n\t\t\t\t     phost + 1, auth_method + 1, encrypt_method + 1, pjob->ji_qs.ji_jobid);\n\t\tif (ret != INTERACTIVE_AUTH_SUCCESS) {\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR, pjob->ji_qs.ji_jobid, \"Failed to authenticate with qsub\");\n\t\t\tstarter_return(upfds, downfds, JOB_EXEC_FAIL1, &sjr);\n\t\t}\n\n\t\t/* send job id as validation to qsub */\n\t\tif (transport_chan_get_ctx_status(qsub_sock, FOR_ENCRYPT) == (int) AUTH_STATUS_CTX_READY) {\n\t\t\tif (transport_send_pkt(qsub_sock, AUTH_ENCRYPTED_DATA, pjob->ji_qs.ji_jobid, PBS_MAXSVRJOBID + 1) < 0) {\n\t\t\t\tlog_err(errno, __func__, \"cannot write jobid\");\n\t\t\t\tstarter_return(upfds, downfds, JOB_EXEC_FAIL1, &sjr);\n\t\t\t}\n\t\t} else {\n\t\t\tif (CS_write(qsub_sock, pjob->ji_qs.ji_jobid, PBS_MAXSVRJOBID + 1) !=\n\t\t\t\tPBS_MAXSVRJOBID + 1) {\n\t\t\t\tlog_err(errno, __func__, \"cannot write jobid\");\n\t\t\t\tstarter_return(upfds, downfds, JOB_EXEC_FAIL1, &sjr);\n\t\t\t}\n\t\t}\n\n\t\t/* receive terminal type and window size */\n\n\t\tif ((termtype = rcvttype(qsub_sock)) == 
NULL)\n\t\t\tstarter_return(upfds, downfds, JOB_EXEC_FAIL1, &sjr);\n\n\t\tbld_env_variables(&(pjob->ji_env), termtype, NULL);\n\n\t\tif (rcvwinsize(qsub_sock) == -1)\n\t\t\tstarter_return(upfds, downfds, JOB_EXEC_FAIL1, &sjr);\n\n\t\t/* turn off alarm set around qsub connect activities */\n\n\t\talarm(0);\n\t\tact.sa_handler = SIG_DFL;\n\t\tact.sa_flags = 0;\n\t\t(void) sigaction(SIGALRM, &act, NULL);\n\n\t\t/* set up the Job session */\n\n\t\tj = set_job(pjob, &sjr);\n\t\tif (j < 0) {\n\t\t\tif (j == -1) {\n\t\t\t\t/* set_job didn't leave message in log_buffer */\n\t\t\t\t(void) strcpy(log_buffer, \"Unable to set session\");\n\t\t\t}\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB,\n\t\t\t\t  LOG_NOTICE, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\tstarter_return(upfds, downfds, JOB_EXEC_FAIL1, &sjr);\n\t\t}\n#if MOM_ALPS\n\t\tsjr.sj_code = JOB_EXEC_UPDATE_ALPS_RESV_ID;\n\t\t(void) writepipe(upfds, &sjr, sizeof(sjr));\n\n\t\t/* wait for acknowledgement */\n\t\t(void) readpipe(downfds, &ack, sizeof(ack));\n#endif\n\n\t\t/* Open the slave pty as the controlling tty */\n\n\t\tif ((pts = open_pty(pjob)) < 0) {\n\t\t\tlog_err(errno, __func__, \"cannot open slave\");\n\t\t\tstarter_return(upfds, downfds, JOB_EXEC_FAIL1, &sjr);\n\t\t}\n\n\t\tact.sa_handler = SIG_IGN; /* setup to ignore SIGTERM */\n\n\t\twriterpid = fork();\n\t\tif (writerpid == 0) {\n\t\t\t/* child is \"writer\" process */\n\n\t\t\t(void) sigaction(SIGTERM, &act, NULL);\n\n\t\t\t(void) close(upfds);\n\t\t\t(void) close(downfds);\n\t\t\t(void) close(upfds2);\n\t\t\t(void) close(downfds2);\n\t\t\t(void) close(child2parent_job_update_pipe_w);\n\t\t\t(void) close(parent2child_job_update_pipe_r);\n\t\t\t(void) close(parent2child_job_update_status_pipe_r);\n\t\t\t(void) close(parent2child_moms_status_pipe_r);\n\t\t\t(void) close(pts);\n\t\t\t/*Closing the inherited post forwarded listening socket  */\n\t\t\tif (get_jattr_str(pjob, JOB_ATR_X11_cookie)) {\n\t\t\t\tfor (n = 0; n < NUM_SOCKS; n++) {\n\t\t\t\t\tif 
(socks[n].active)\n\t\t\t\t\t\tclose(socks[n].sock);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tint res;\n\t\t\tif (transport_chan_get_ctx_status(qsub_sock, FOR_ENCRYPT) == (int) AUTH_STATUS_CTX_READY) {\n\t\t\t\tres = mom_writer_pkt(qsub_sock, ptc);\n\t\t\t} else {\n\t\t\t\tres = mom_writer(qsub_sock, ptc);\n\t\t\t}\n\n\t\t\t/* Inside mom_writer, if read is successful and write fails then it is an error and hence logging here as error for -1 */\n\t\t\tif (res == -1)\n\t\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR, pjob->ji_qs.ji_jobid, \"CS_write failed with errno %d\", errno);\n\t\t\telse if (res == -2)\n\t\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, \"read failed with errno %d\", errno);\n\n\t\t\tshutdown(qsub_sock, 2);\n\t\t\tdis_destroy_chan(qsub_sock);\n\t\t\texit(0);\n\n\t\t} else if (writerpid > 0) {\n\t\t\t/*\n\t\t\t ** parent -- it first runs the prolog then forks\n\t\t\t ** again.  the child becomes the job while the\n\t\t\t ** parent becomes the reader.\n\t\t\t */\n\n\t\t\t(void) close(1);\n\t\t\t(void) close(2);\n\t\t\t(void) dup2(pts, 1);\n\t\t\t(void) dup2(pts, 2);\n\t\t\tfflush(stdout);\n\t\t\tfflush(stderr);\n\t\t\tset_termcc(pts);\t/* set terminal control char */\n\t\t\t(void) setwinsize(pts); /* set window size to qsub's */\n\t\t\tif (do_tolerate_node_failures(pjob) && (get_failed_moms_and_vnodes(pjob, parent2child_moms_status_pipe_r, -1, &vnl_fails, &vnl_good, 1) != 0)) {\n\t\t\t\tFREE_VNLS(vnl_fails, vnl_good);\n\t\t\t\tstarter_return(upfds, downfds,\n\t\t\t\t\t       JOB_EXEC_RETRY, &sjr);\n\t\t\t}\n\n\t\t\t/* run prolog */\n\t\t\tif (prolo_hooks > 0) {\n\n\t\t\t\tmom_hook_input_init(&hook_input);\n\t\t\t\thook_input.pjob = pjob;\n\t\t\t\tif (do_tolerate_node_failures(pjob)) {\n\t\t\t\t\thook_input.vnl_fail = (vnl_t *) vnl_fails;\n\t\t\t\t\thook_input.failed_mom_list = &pjob->ji_failed_node_list;\n\t\t\t\t\thook_input.succeeded_mom_list = 
&pjob->ji_node_list;\n\t\t\t\t}\n\n\t\t\t\tmom_hook_output_init(&hook_output);\n\t\t\t\thook_output.reject_errcode = &hook_errcode;\n\t\t\t\thook_output.last_phook = &last_phook;\n\t\t\t\thook_output.fail_action = &hook_fail_action;\n\n\t\t\t\thook_rc =\n\t\t\t\t\tmom_process_hooks(HOOK_EVENT_EXECJOB_PROLOGUE,\n\t\t\t\t\t\t\t  PBS_MOM_SERVICE_NAME,\n\t\t\t\t\t\t\t  mom_host, &hook_input, &hook_output,\n\t\t\t\t\t\t\t  hook_msg, sizeof(hook_msg), 0);\n\t\t\t} else { /* no runnable hooks */\n\t\t\t\t/* don't execute any prologue hook */\n\t\t\t\t/* as no prologue hooks are runnable */\n\t\t\t\thook_rc = 2;\n\t\t\t}\n\n\t\t\tswitch (hook_rc) {\n\n\t\t\t\tcase 0: /* explicit reject */\n\t\t\t\t\tif (hook_errcode == PBSE_HOOK_REJECT_DELETEJOB) {\n\t\t\t\t\t\tstarter_return(upfds, downfds,\n\t\t\t\t\t\t\t       JOB_EXEC_FAILHOOK_DELETE, &sjr);\n\t\t\t\t\t} else if (hook_errcode == PBSE_HOOKERROR) {\n\t\t\t\t\t\tstarter_return(upfds, downfds,\n\t\t\t\t\t\t\t       JOB_EXEC_HOOKERROR, &sjr);\n\t\t\t\t\t} else {\n\t\t\t\t\t\t/* rerun is the default in prologue */\n\t\t\t\t\t\tstarter_return(upfds, downfds,\n\t\t\t\t\t\t\t       JOB_EXEC_FAILHOOK_RERUN, &sjr);\n\t\t\t\t\t}\n\t\t\t\t\treturn;\n\t\t\t\tcase 1: /* explicit accept */\n\t\t\t\t\tif (send_pipe_request(upfds2, downfds2, IM_EXEC_PROLOGUE) != 0) {\n\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t\t\t  LOG_INFO, pjob->ji_qs.ji_jobid,\n\t\t\t\t\t\t\t  \"warning: send of IM_EXEC_PROLOGUE to parent mom failed\");\n\t\t\t\t\t}\n\t\t\t\t\tif (do_tolerate_node_failures(pjob))\n\t\t\t\t\t\tsend_update_job(pjob, child2parent_job_update_pipe_w, parent2child_job_update_pipe_r, parent2child_job_update_status_pipe_r);\n\t\t\t\t\tbreak;\n\t\t\t\tcase 2:\n\t\t\t\t\t/* no hook script executed - execute old-style prologue */\n\t\t\t\t\tif (run_pelog(PE_PROLOGUE,\n\t\t\t\t\t\t      path_prolog, pjob,\n\t\t\t\t\t\t      PE_IO_TYPE_ASIS) != 0) {\n\t\t\t\t\t\t(void) fprintf(stderr,\n\t\t\t\t\t\t\t       \"Could not 
run prolog: %s\\n\",\n\t\t\t\t\t\t\t       log_buffer);\n\t\t\t\t\t\tstarter_return(upfds, downfds,\n\t\t\t\t\t\t\t       JOB_EXEC_FAIL2, &sjr);\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t\t  LOG_INFO, \"\",\n\t\t\t\t\t\t  \"prologue hook event: accept req by default\");\n\t\t\t\t\tif (send_pipe_request(upfds2, downfds2, IM_EXEC_PROLOGUE) != 0) {\n\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t\t\t  LOG_INFO, pjob->ji_qs.ji_jobid,\n\t\t\t\t\t\t\t  \"warning: send of IM_EXEC_PROLOGUE to parent mom failed\");\n\t\t\t\t\t}\n\t\t\t\t\tif (do_tolerate_node_failures(pjob))\n\t\t\t\t\t\tsend_update_job(pjob, child2parent_job_update_pipe_w, parent2child_job_update_pipe_r, parent2child_job_update_status_pipe_r);\n\t\t\t}\n\n\t\t\tshellpid = fork();\n\t\t\tif (shellpid == 0) {\n\n\t\t\t\t/*********************************************/\n\t\t\t\t/* child - this will be the interactive job  */\n\t\t\t\t/* i/o is to slave tty\t\t\t     */\n\t\t\t\t/*********************************************/\n\n\t\t\t\t(void) close(0);\n\t\t\t\t(void) dup2(pts, 0);\n\t\t\t\tfflush(stdin);\n\n\t\t\t\t(void) close(ptc); /* close master side */\n\t\t\t\tptc = -1;\n\t\t\t\t(void) close(pts); /* dup'ed above */\n\t\t\t\t(void) close(qsub_sock);\n\n\t\t\t\t/* continue setting up and exec-ing shell */\n\n\t\t\t} else {\n\t\t\t\tif (shellpid > 0) {\n\t\t\t\t\t/* fork, parent is \"reader\" process  */\n\t\t\t\t\t(void) sigaction(SIGTERM, &act, NULL);\n\n\t\t\t\t\tif (pts != -1)\n\t\t\t\t\t\t(void) close(pts);\n\t\t\t\t\tif (upfds != -1)\n\t\t\t\t\t\t(void) close(upfds);\n\t\t\t\t\tif (downfds != -1)\n\t\t\t\t\t\t(void) close(downfds);\n\t\t\t\t\tif (upfds2 != -1)\n\t\t\t\t\t\t(void) close(upfds2);\n\t\t\t\t\tif (downfds2 != -1)\n\t\t\t\t\t\t(void) close(downfds2);\n\t\t\t\t\tif (child2parent_job_update_pipe_w != -1)\n\t\t\t\t\t\t(void) close(child2parent_job_update_pipe_w);\n\t\t\t\t\tif 
(parent2child_job_update_pipe_r != -1)\n\t\t\t\t\t\t(void) close(parent2child_job_update_pipe_r);\n\t\t\t\t\tif (parent2child_job_update_status_pipe_r != -1)\n\t\t\t\t\t\t(void) close(parent2child_job_update_status_pipe_r);\n\t\t\t\t\tif (parent2child_moms_status_pipe_r != -1)\n\t\t\t\t\t\t(void) close(parent2child_moms_status_pipe_r);\n\t\t\t\t\t(void) close(1);\n\t\t\t\t\t(void) close(2);\n\n\t\t\t\t\tsigemptyset(&act.sa_mask);\n\t\t\t\t\tact.sa_flags = SA_NOCLDSTOP;\n\t\t\t\t\tact.sa_handler = catchinter;\n\t\t\t\t\t(void) sigaction(SIGCHLD, &act,\n\t\t\t\t\t\t\t NULL);\n\n\t\t\t\t\tmom_reader_go = 1;\n\t\t\t\t\t/* prepare shell command \"cd $PBS_JOBDIR\" if in sandbox=PRIVATE mode */\n\t\t\t\t\tif (sandbox_private) {\n\t\t\t\t\t\tsprintf(buf, \"cd %s\\n\", pbs_jobdir);\n\t\t\t\t\t} else {\n\t\t\t\t\t\tbuf[0] = '\\0';\n\t\t\t\t\t}\n\t\t\t\t\tif ((is_interactive == TRUE) &&\n\t\t\t\t\t    get_jattr_str(pjob, JOB_ATR_X11_cookie)) {\n\t\t\t\t\t\tif (sandbox_private) {\n\t\t\t\t\t\t\t/* Change to $PBS_JOBDIR before\n\t\t\t\t\t\t\t blocking waiting for data */\n\t\t\t\t\t\t\tif (setcurrentworkdir(buf)) {\n\t\t\t\t\t\t\t\tlog_err(errno, __func__,\n\t\t\t\t\t\t\t\t\t\"Setting Private Sandbox directory Failed\");\n\t\t\t\t\t\t\t\tstarter_return(upfds, downfds,\n\t\t\t\t\t\t\t\t\t       JOB_EXEC_FAIL2, &sjr);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (strcmp(auth_method+1, AUTH_RESVPORT_NAME) == 0) {\n\t\t\t\t\t\t\tport_forwarder(socks, conn_qsub_resvport, phost + 1,\n\t\t\t\t\t\t\t\t\tget_jattr_long(pjob, JOB_ATR_X11_port),\n\t\t\t\t\t\t\t\t\tqsub_sock, mom_get_reader_Xjob,\n\t\t\t\t\t\t\t\t\tlog_mom_portfw_msg,\n\t\t\t\t\t\t\t\t\tEXEC_HOST_SIDE, auth_method+1, encrypt_method+1, pjob->ji_qs.ji_jobid);\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tport_forwarder(socks, conn_qsub, phost + 1,\n\t\t\t\t\t\t\t\t\tget_jattr_long(pjob, JOB_ATR_X11_port),\n\t\t\t\t\t\t\t\t\tqsub_sock, mom_get_reader_Xjob,\n\t\t\t\t\t\t\t\t\tlog_mom_portfw_msg,\n\t\t\t\t\t\t\t\t\tEXEC_HOST_SIDE, 
auth_method+1, encrypt_method+1, pjob->ji_qs.ji_jobid);\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tint res;\n\t\t\t\t\t\tif (transport_chan_get_ctx_status(qsub_sock, FOR_ENCRYPT) == (int) AUTH_STATUS_CTX_READY) {\n\t\t\t\t\t\t\tres = mom_reader_pkt(qsub_sock, ptc, buf);\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tres = mom_reader(qsub_sock, ptc, buf);\n\t\t\t\t\t\t}\n\t\t\t\t\t\t/* Inside mom_reader, if read is successful and write fails then it is an error and hence logging here as error for -1 */\n\t\t\t\t\t\tif (res == -1)\n\t\t\t\t\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR, pjob->ji_qs.ji_jobid, \"Write failed with errno %d\", errno);\n\t\t\t\t\t\telse if (res == -2)\n\t\t\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, \"CS_read failed with errno %d\", errno);\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tlog_err(errno, __func__,\n\t\t\t\t\t\t\"can't fork reader\");\n\t\t\t\t}\n\n\t\t\t\t/* make sure qsub gets EOF */\n\n\t\t\t\tshutdown(qsub_sock, 2);\n\t\t\t\tdis_destroy_chan(qsub_sock);\n\n\t\t\t\t/* change pty back to available after */\n\t\t\t\t/* job is done */\n\t\t\t\tif (chmod(pts_name, 0666) == -1)\n\t\t\t\t\tlog_errf(-1, __func__, \"chmod failed. ERR : %s\", strerror(errno));\n\t\t\t\tif (chown(pts_name, 0, 0) == -1)\n\t\t\t\t\tlog_errf(-1, __func__, \"chown failed. ERR : %s\", strerror(errno));\n\t\t\t\texit(0);\n\t\t\t}\n\t\t} else { /* error */\n\t\t\tlog_err(errno, __func__, \"cannot fork nanny\");\n\n\t\t\t/* change pty back to available */\n\t\t\tif (chmod(pts_name, 0666) == -1)\n\t\t\t\tlog_errf(-1, __func__, \"chmod failed. ERR : %s\", strerror(errno));\n\t\t\tif (chown(pts_name, 0, 0) == -1)\n\t\t\t\tlog_errf(-1, __func__, \"chown failed. 
ERR : %s\",strerror(errno));\t\t\t\n\n\t\t\tstarter_return(upfds, downfds, JOB_EXEC_RETRY, &sjr);\n\t\t}\n\n\t} else {\n\n\t\t/*************************************************************************/\n\t\t/*\t\tWe have a \"normal\" batch job, connect the standard\t */\n\t\t/*\t\tstreams to files\t\t\t\t\t */\n\t\t/*************************************************************************/\n\n\t\t/* set Environment to reflect batch */\n\n\t\tbld_env_variables(&(pjob->ji_env), \"PBS_ENVIRONMENT\", \"PBS_BATCH\");\n\t\tbld_env_variables(&(pjob->ji_env), \"ENVIRONMENT\", \"BATCH\");\n\n#if SHELL_INVOKE == 1\n\t\t/* if passing script file name as input to shell */\n\n\t\t(void) close(pipe_script[1]);\n\t\tscript_in = pipe_script[0];\n#else  /* SHELL_INVOKE == 0 */\n\t\t/* if passing script itself as input to shell */\n\n\t\t(void) strcpy(buf, path_jobs);\n\t\tif (*pjob->ji_qs.ji_fileprefix != '\\0')\n\t\t\t(void) strcat(buf, pjob->ji_qs.ji_fileprefix);\n\t\telse\n\t\t\t(void) strcat(buf, pjob->ji_qs.ji_jobid);\n\t\t(void) strcat(buf, JOB_SCRIPT_SUFFIX);\n\t\tif ((script_in = open(buf, O_RDONLY, 0)) < 0) {\n\t\t\tif (errno == ENOENT)\n\t\t\t\tscript_in = open(\"/dev/null\", O_RDONLY, 0);\n\t\t}\n#endif /* SHELL_INVOKE */\n\t\tif (!is_jattr_set(pjob, JOB_ATR_executable)) {\n\t\t\t/*\n\t\t\t * user has passed executable and argument list as\n\t\t\t * as command-line options to qsub (i.e after -- flag\n\t\t\t * so, no need to check for script file)\n\t\t\t */\n\t\t\tif (script_in < 0) {\n\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t\t\t  \"Unable to open script\");\n\t\t\t\tstarter_return(upfds, downfds, JOB_EXEC_FAIL1, &sjr);\n\t\t\t}\n\t\t\tFDMOVE(script_in); /* make sure descriptor > 2       */\n\t\t\tif (script_in != 0) {\n\t\t\t\tclose(0);\n\t\t\t\tif (dup(script_in) == -1) \n\t\t\t\t\tlog_errf(-1, __func__, \"dup failed. 
ERR : %s\",strerror(errno));\t\t\t\n\t\t\t\tclose(script_in);\n\t\t\t}\n\t\t}\n\n\t\tif (open_std_out_err(pjob) == -1) {\n\t\t\tstarter_return(upfds, downfds, JOB_EXEC_RETRY, &sjr);\n\t\t}\n\n\t\t/* After the error is redirected, stderr does not have a valid FILE* */\n\t\ttemp_stderr = fdopen(STDERR_FILENO, \"w\");\n\t\t/* If we could not get the valid FILE*, let temp_stderr point to stderr to avoid\n\t\t * a possible crash in subsequent calls to output functions like printf/fprintf */\n\t\tif (!temp_stderr)\n\t\t\ttemp_stderr = stderr;\n\t\t/* set up the Job session */\n\n\t\tj = set_job(pjob, &sjr);\n\t\tif (j < 0) {\n\t\t\tif (j == -1) {\n\t\t\t\t/* set_job didn't leave message in log_buffer */\n\t\t\t\t(void) strcpy(log_buffer, \"Unable to set session\");\n\t\t\t}\n\t\t\t/* set_job leaves message in log_buffer */\n\t\t\t(void) fprintf(temp_stderr, \"%s\\n\", log_buffer);\n\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_NOTICE,\n\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\n\t\t\tif (j == -3)\n\t\t\t\tj = JOB_EXEC_FAIL2;\n\t\t\telse\n\t\t\t\tj = JOB_EXEC_RETRY;\n\t\t\tstarter_return(upfds, downfds, j, &sjr);\n\t\t}\n\t\tif (do_tolerate_node_failures(pjob) &&\n\t\t    (get_failed_moms_and_vnodes(pjob, downfds2, -1, &vnl_fails, &vnl_good, 1) != 0)) {\n\t\t\tFREE_VNLS(vnl_fails, vnl_good);\n\t\t\tstarter_return(upfds, downfds, JOB_EXEC_RETRY, &sjr);\n\t\t}\n\t\t/* run prologue hooks */\n\n\t\tif (prolo_hooks > 0) {\n\t\t\tmom_hook_input_init(&hook_input);\n\t\t\thook_input.pjob = pjob;\n\t\t\tif (do_tolerate_node_failures(pjob)) {\n\t\t\t\thook_input.vnl_fail = (vnl_t *) vnl_fails;\n\t\t\t\thook_input.failed_mom_list = &pjob->ji_failed_node_list;\n\t\t\t\thook_input.succeeded_mom_list = &pjob->ji_node_list;\n\t\t\t}\n\n\t\t\tmom_hook_output_init(&hook_output);\n\t\t\thook_output.reject_errcode = &hook_errcode;\n\t\t\thook_output.last_phook = &last_phook;\n\t\t\thook_output.fail_action = &hook_fail_action;\n\n\t\t\thook_rc 
=\n\t\t\t\tmom_process_hooks(HOOK_EVENT_EXECJOB_PROLOGUE,\n\t\t\t\t\t\t  PBS_MOM_SERVICE_NAME,\n\t\t\t\t\t\t  mom_host, &hook_input, &hook_output,\n\t\t\t\t\t\t  hook_msg, sizeof(hook_msg), 0);\n\t\t} else { /* no runnable hooks */\n\t\t\t/* don't execute any prologue hook */\n\t\t\t/* as no prologue hooks are runnable */\n\t\t\thook_rc = 2;\n\t\t}\n\n\t\tswitch (hook_rc) {\n\n\t\t\tcase 0: /* explicit reject */\n\t\t\t\tif (hook_errcode == PBSE_HOOK_REJECT_DELETEJOB) {\n\t\t\t\t\tstarter_return(upfds, downfds,\n\t\t\t\t\t\t       JOB_EXEC_FAILHOOK_DELETE, &sjr);\n\t\t\t\t} else if (hook_errcode == PBSE_HOOKERROR) {\n\t\t\t\t\tstarter_return(upfds, downfds,\n\t\t\t\t\t\t       JOB_EXEC_HOOKERROR, &sjr);\n\t\t\t\t} else { /* rerun is the default */\n\t\t\t\t\tstarter_return(upfds, downfds,\n\t\t\t\t\t\t       JOB_EXEC_FAILHOOK_RERUN, &sjr);\n\t\t\t\t}\n\t\t\tcase 1: /* explicit accept */\n\t\t\t\tif (send_pipe_request(upfds2, downfds2, IM_EXEC_PROLOGUE) != 0) {\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t\t  LOG_INFO, pjob->ji_qs.ji_jobid,\n\t\t\t\t\t\t  \"warning: send of IM_EXEC_PROLOGUE to parent mom failed\");\n\t\t\t\t}\n\t\t\t\tif (do_tolerate_node_failures(pjob))\n\t\t\t\t\tsend_update_job(pjob, child2parent_job_update_pipe_w, parent2child_job_update_pipe_r, parent2child_job_update_status_pipe_r);\n\t\t\t\tbreak;\n\t\t\tcase 2:\n\t\t\t\t/* no hook script executed - execute old-style prologue */\n\t\t\t\tif ((j = run_pelog(PE_PROLOGUE,\n\t\t\t\t\t\t   path_prolog, pjob, PE_IO_TYPE_ASIS)) == 1) {\n\t\t\t\t\t/* abort job */\n\t\t\t\t\t(void) fprintf(temp_stderr,\n\t\t\t\t\t\t       \"Could not run prolog: %s\\n\", log_buffer);\n\t\t\t\t\tstarter_return(upfds, downfds, JOB_EXEC_FAIL2,\n\t\t\t\t\t\t       &sjr);\n\t\t\t\t} else if (j != 0) {\n\t\t\t\t\t/* requeue job */\n\t\t\t\t\tstarter_return(upfds, downfds, JOB_EXEC_RETRY,\n\t\t\t\t\t\t       &sjr);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, 
PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t  LOG_INFO, \"\",\n\t\t\t\t\t  \"prologue hook event: accept req by default\");\n\t\t\t\tif (send_pipe_request(upfds2, downfds2, IM_EXEC_PROLOGUE) != 0) {\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t\t  LOG_INFO, pjob->ji_qs.ji_jobid,\n\t\t\t\t\t\t  \"warning: send of IM_EXEC_PROLOGUE to parent mom failed\");\n\t\t\t\t}\n\t\t\t\tif (do_tolerate_node_failures(pjob))\n\t\t\t\t\tsend_update_job(pjob, child2parent_job_update_pipe_w, parent2child_job_update_pipe_r, parent2child_job_update_status_pipe_r);\n\t\t}\n\t}\n\n\t/*************************************************************************/\n\t/*\tSet resource limits\t\t\t\t \t\t */\n\t/*\tBoth normal batch and interactive job come through here \t */\n\t/*************************************************************************/\n\n\tset_jattr_l_slim(pjob, JOB_ATR_session_id, sjr.sj_session, SET);\n\tif (site_job_setup(pjob) != 0) {\n\t\tstarter_return(upfds, downfds,\n\t\t\t       JOB_EXEC_FAIL2, &sjr); /* exits */\n\t}\n\n\ti = 0;\n\n\t/* if RLIMIT_NPROC is defined, the value set when Mom was */\n\t/* invoked was saved; reset that limit for the job\t    */\n#ifdef RLIMIT_NPROC\n#ifdef RLIM64_INFINITY\n\tif ((i = setrlimit64(RLIMIT_NPROC, &orig_nproc_limit)) == -1) {\n\t\t(void) sprintf(log_buffer,\n\t\t\t       \"Unable to restore NPROC limits, err=%d\", errno);\n\t}\n#else  /* RLIM64... */\n\tif ((i = setrlimit(RLIMIT_NPROC, &orig_nproc_limit)) == -1) {\n\t\t(void) sprintf(log_buffer,\n\t\t\t       \"Unable to restore NPROC limits, err=%d\", errno);\n\t}\n#endif /* RLIM64... 
*/\n#endif /* RLIMIT_NPROC */\n\tif (i == 0) {\n\t\t/* now set all other kernel enforced limits on the job */\n\t\tif ((i = mom_set_limits(pjob, SET_LIMIT_SET)) != PBSE_NONE) {\n\t\t\t(void) sprintf(log_buffer, \"Unable to set limits, err=%d\", i);\n\t\t}\n\t}\n\tif (i != 0) {\n\t\t/* if we had a setlimit error, fail the job */\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\tif (i == PBSE_RESCUNAV) { /* resource temp unavailable */\n\t\t\tif (is_interactive)\n\t\t\t\tj = JOB_EXEC_FAIL2;\n\t\t\telse\n\t\t\t\tj = JOB_EXEC_RETRY;\n\t\t} else\n\t\t\tj = JOB_EXEC_FAIL2;\n\t\tstarter_return(upfds, downfds, j, &sjr); /* exits */\n\t}\n\tendpwent();\n\n\tjob_has_executable = 0;\n\tif (is_jattr_set(pjob, JOB_ATR_executable)) {\n\t\t/*\n\t\t * Call decode_xml_arg_list to decode XML string\n\t\t * and store executable in shell and argument list in argv.\n\t\t */\n\t\tif (decode_xml_arg_list(get_jattr_str(pjob, JOB_ATR_executable),\n\t\t\t\t\tget_jattr_str(pjob, JOB_ATR_Arglist), &shell, &argv) != 0) {\n\t\t\tstarter_return(upfds, downfds, JOB_EXEC_FAIL2, &sjr);\n\t\t}\n\t\tjob_has_executable = 1;\n\t}\n\n\tif (do_tolerate_node_failures(pjob) && (prolo_hooks > 0)) {\n\n\t\t/* free up from previous execjob_prologue hook */\n\t\tFREE_VNLS(vnl_fails, vnl_good);\n\n\t\tif (pjob->ji_numnodes > 1) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"waiting up to %ld secs ($job_launch_delay) for mom hosts status and prologue hooks ack\", job_launch_delay);\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t/* a filled-in log_buffer could be mistaken for an error message */\n\t\t\tlog_buffer[0] = '\\0';\n\n\t\t\tif (get_failed_moms_and_vnodes(pjob, parent2child_moms_status_pipe_r, (prolo_hooks > 0) ? 
downfds2 : -1, &vnl_fails, &vnl_good, job_launch_delay) != 0) {\n\t\t\t\tFREE_VNLS(vnl_fails, vnl_good);\n\t\t\t\tstarter_return(upfds, downfds, JOB_EXEC_RETRY, &sjr);\n\t\t\t}\n\t\t}\n\t}\n\n\tthe_progname = shell;\n\tthe_argv = argv;\n\n\t/* NULL terminate the envp array */\n\t*((pjob->ji_env).v_envp + (pjob->ji_env).v_used) = NULL;\n\tthe_env = (pjob->ji_env).v_envp;\n\n\tmom_hook_input_init(&hook_input);\n\thook_input.pjob = pjob;\n\thook_input.progname = the_progname;\n\thook_input.argv = the_argv;\n\thook_input.env = the_env;\n\n\tif (do_tolerate_node_failures(pjob)) {\n\t\thook_input.vnl_fail = (vnl_t *) vnl_fails;\n\t\thook_input.failed_mom_list = &pjob->ji_failed_node_list;\n\t\thook_input.succeeded_mom_list = &pjob->ji_node_list;\n\t}\n\n\tmom_hook_output_init(&hook_output);\n\thook_output.reject_errcode = &hook_errcode;\n\thook_output.last_phook = &last_phook;\n\thook_output.fail_action = &hook_fail_action;\n\thook_output.progname = &progname;\n\tCLEAR_HEAD(argv_list);\n\thook_output.argv = &argv_list;\n\n\tswitch (mom_process_hooks(HOOK_EVENT_EXECJOB_LAUNCH,\n\t\t\t\t  PBS_MOM_SERVICE_NAME,\n\t\t\t\t  mom_host, &hook_input, &hook_output,\n\t\t\t\t  hook_msg, sizeof(hook_msg), 0)) {\n\n\t\tcase 0: /* explicit reject */\n\t\t\tfree(progname);\n\t\t\tfree_attrlist(&argv_list);\n\t\t\tfree_str_array(hook_output.env);\n\t\t\tif (do_tolerate_node_failures(pjob))\n\t\t\t\tFREE_VNLS(vnl_fails, vnl_good);\n\n\t\t\tif (hook_errcode == PBSE_HOOK_REJECT_RERUNJOB) {\n\t\t\t\tstarter_return(upfds, downfds,\n\t\t\t\t\t       JOB_EXEC_FAILHOOK_RERUN, &sjr);\n\t\t\t} else {\n\t\t\t\tstarter_return(upfds, downfds,\n\t\t\t\t\t       JOB_EXEC_FAILHOOK_DELETE, &sjr);\n\t\t\t}\n\t\tcase 1: /* explicit accept */\n\t\t\tif (progname != NULL)\n\t\t\t\tthe_progname = progname;\n\n\t\t\tthe_argv = svrattrl_to_str_array(&argv_list);\n\t\t\tif (the_argv == NULL) {\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t  LOG_INFO, \"\",\n\t\t\t\t\t  \"execjob_launch 
hook returned NULL argv!\");\n\t\t\t\tfree(progname);\n\t\t\t\tfree_attrlist(&argv_list);\n\t\t\t\tfree_str_array(hook_output.env);\n\t\t\t\tif (do_tolerate_node_failures(pjob))\n\t\t\t\t\tFREE_VNLS(vnl_fails, vnl_good);\n\n\t\t\t\tstarter_return(upfds, downfds,\n\t\t\t\t\t       JOB_EXEC_FAILHOOK_DELETE, &sjr);\n\t\t\t}\n\t\t\tres_env = hook_output.env;\n\n\t\t\tif (res_env == NULL) {\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t  LOG_INFO, \"\",\n\t\t\t\t\t  \"execjob_launch hook NULL env!\");\n\t\t\t\tfree(progname);\n\t\t\t\tfree_attrlist(&argv_list);\n\t\t\t\tfree_str_array(the_argv);\n\t\t\t\tif (do_tolerate_node_failures(pjob))\n\t\t\t\t\tFREE_VNLS(vnl_fails, vnl_good);\n\n\t\t\t\tstarter_return(upfds, downfds,\n\t\t\t\t\t       JOB_EXEC_FAILHOOK_DELETE, &sjr);\n\t\t\t}\n\n\t\t\t/* clear the env array */\n\t\t\t(pjob->ji_env).v_used = 0;\n\t\t\t(pjob->ji_env).v_envp[0] = NULL;\n\n\t\t\t/* need to also set vtable as that would */\n\t\t\t/* get appended to later in the code */\n\t\t\t/* vtable holds the environmnent variables */\n\t\t\t/* and their values that are going to be */\n\t\t\t/* part of the job. 
*/\n\t\t\tk = 0;\n\t\t\twhile (res_env[k]) {\n\t\t\t\tchar *n, *v, *p;\n\t\t\t\tif ((p = strchr(res_env[k], '=')) != NULL) {\n\t\t\t\t\t*p = '\\0';\n\t\t\t\t\tn = res_env[k];\n\t\t\t\t\tv = p + 1;\n\t\t\t\t\tbld_env_variables(&(pjob->ji_env),\n\t\t\t\t\t\t\t  n, v);\n\t\t\t\t\t*p = '=';\n\t\t\t\t}\n\t\t\t\tk++;\n\t\t\t}\n\t\t\tthe_env = pjob->ji_env.v_envp;\n\t\t\tif (do_tolerate_node_failures(pjob))\n\t\t\t\tsend_update_job(pjob, child2parent_job_update_pipe_w, parent2child_job_update_pipe_r, parent2child_job_update_status_pipe_r);\n\n\t\t\tbreak;\n\t\tcase 2: /* no hook script executed - go ahead and accept event */\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_INFO, \"\",\n\t\t\t\t  \"execjob_launch hook event: accept req by default\");\n\t}\n\n\tif (do_tolerate_node_failures(pjob))\n\t\tFREE_VNLS(vnl_fails, vnl_good);\n\n\t/* if job has executable (submitted as qsub -- <progname> <argv>), then */\n\t/* <progname> and <argv> take precedence so they must not be passed to */\n\t/* set_credential(), which would modify them. */\n\tif (set_credential(pjob, job_has_executable ? NULL : &the_progname,\n\t\t\t   job_has_executable ? NULL : &the_argv) == -1) {\n\t\tstarter_return(upfds, downfds,\n\t\t\t       JOB_EXEC_FAIL2, &sjr); /* exits */\n\t}\n\n\t/* include any new env settings added by set_credential. 
*/\n\tthe_env = pjob->ji_env.v_envp;\n\t*(pjob->ji_env.v_envp + pjob->ji_env.v_used) = NULL;\n\n\t/*\n\t * If JOB_ATR_executable is set, and job is in \"sandbox=PRIVATE\" mode,\n\t * change working directory to PBS_JOBDIR and run the executable.\n\t * If JOB_ATR_executable attribute is unset,\n\t * change working directory to User's Home.\n\t * If in \"sandbox=PRIVATE\" mode, it is preferable to start in User's HOME\n\t * in order to process user's \"dot\" files in the login shell,\n\t * but if user's Home does not exist, start in PBS_JOBDIR.\n\t *\n\t * Note that even while job process is started in user's Home,\n\t * when \"sandbox\" is \"PRIVATE\", \"cd $PBS_JOBDIR\" is prepended to the job script name,\n\t * so job script is executed in $PBS_JOBDIR after \"dot\" files from user's Home are processed.\n\t * See the code for the forked parent (about 700 lines above), look for the comment:\n\t * \"the parent side, still the main man, uhh that is MOM\"\n\t */\n\tif (is_jattr_set(pjob, JOB_ATR_executable) && sandbox_private) {\n\t\tif (!pbs_jobdir || chdir(pbs_jobdir) == -1) {\n\t\t\tlog_event(PBSEVENT_JOB | PBSEVENT_SECURITY, PBS_EVENTCLASS_JOB,\n\t\t\t\t  LOG_ERR, pjob->ji_qs.ji_jobid,\n\t\t\t\t  \"sandbox=PRIVATE mode: Could not chdir to job directory\\n\");\n\t\t\tstarter_return(upfds, downfds, JOB_EXEC_FAIL2, &sjr);\n\t\t\treturn;\n\t\t}\n\t} else if (chdir(pwdp->pw_dir) == -1) {\n\t\tlog_event(PBSEVENT_JOB | PBSEVENT_SECURITY, PBS_EVENTCLASS_JOB,\n\t\t\t  LOG_ERR, pjob->ji_qs.ji_jobid,\n\t\t\t  \"Could not chdir to Home directory\");\n\t\t(void) fprintf(temp_stderr, \"Could not chdir to home directory\\n\");\n\t\t/* check if \"qsub -k[oe]\" was specified */\n\t\tif (((is_jattr_set(pjob, JOB_ATR_keep)) &&\n\t\t     ((strchr(get_jattr_str(pjob, JOB_ATR_keep), 'o')) ||\n\t\t      (strchr(get_jattr_str(pjob, JOB_ATR_keep), 'e')))) &&\n\t\t    !sandbox_private) {\n\t\t\t/* user Home is required for job output if \"qsub -k[oe]\" was specified\n\t\t\t * and not in 
sandbox=private mode, so error out.\n\t\t\t */\n\t\t\tstarter_return(upfds, downfds, JOB_EXEC_FAIL2, &sjr);\n\t\t\treturn;\n\t\t} else if (sandbox_private) {\n\t\t\t/* \"sandbox=PRIVATE\" mode is active, so job can be started in PBS_JOBDIR instead of user Home */\n\t\t\tif ((!pbs_jobdir) || (chdir(pbs_jobdir) == -1)) {\n\t\t\t\tlog_event(PBSEVENT_JOB | PBSEVENT_SECURITY, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t  LOG_ERR, pjob->ji_qs.ji_jobid,\n\t\t\t\t\t  \"sandbox=PRIVATE mode: Could not chdir to job directory\\n\");\n\t\t\t\tstarter_return(upfds, downfds, JOB_EXEC_FAIL2, &sjr);\n\t\t\t}\n\t\t\t/* an else case for O_WORKDIR should be added here */\n\t\t} else {\n\t\t\t/* nothing special specified, so job must be started in user Home  */\n\t\t\tstarter_return(upfds, downfds, JOB_EXEC_FAIL2, &sjr);\n\t\t\treturn;\n\t\t}\n\t}\n\n\t/* tell mom we are going */\n\tstarter_return(upfds, downfds, JOB_EXEC_OK, &sjr);\n\tlog_close(0);\n\n\tif ((pjob->ji_numnodes == 1) || nodemux || ((cpid = fork()) > 0)) {\n\t\t/* parent does the shell */\n\t\tFILE *f;\n\n\t\t/* close sockets that child uses */\n\t\t(void) close(pjob->ji_stdout);\n\t\t(void) close(pjob->ji_stderr);\n\t\tif ((is_interactive == TRUE) &&\n\t\t    get_jattr_str(pjob, JOB_ATR_X11_cookie)) {\n\t\t\tchar auth_display[X_DISPLAY_LEN];\n\t\t\tchar cmd[X_DISPLAY_LEN];\n\t\t\tchar format[X_DISPLAY_LEN];\n\t\t\tchar x11proto[X_DISPLAY_LEN];\n\t\t\tchar x11data[X_DISPLAY_LEN];\n\t\t\tchar x11authstr[X_DISPLAY_LEN];\n\t\t\tunsigned int x11screen;\n\t\t\tint ret;\n\n\t\t\tx11proto[0] = x11data[0] = '\\0';\n\t\t\tformat[0] = '\\0';\n\n\t\t\tsprintf(format, \" %%%d[^:]: %%%d[^:]: %%u\",\n\t\t\t\tX_DISPLAY_LEN - 1, X_DISPLAY_LEN - 1);\n\n\t\t\t/*getting the cookie data from the job attributes*/\n\t\t\tstrcpy(x11authstr,\n\t\t\t       get_jattr_str(pjob, JOB_ATR_X11_cookie));\n\n\t\t\t/**\n\t\t\t * parsing cookie to get X11 protocol,\n\t\t\t * hex data and screen number\n\t\t\t */\n\t\t\tif ((n = sscanf(x11authstr, 
format,\n\t\t\t\t\tx11proto,\n\t\t\t\t\tx11data,\n\t\t\t\t\t&x11screen)) != 3) {\n\t\t\t\tsprintf(log_buffer, \"sscanf(%s)=%d failed: %s\\n\",\n\t\t\t\t\tx11authstr,\n\t\t\t\t\tn,\n\t\t\t\t\tstrerror(errno));\n\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\tlog_close(0);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tret = snprintf(auth_display, sizeof(auth_display),\n\t\t\t\t       \"unix:%d.%u\",\n\t\t\t\t       display_number,\n\t\t\t\t       x11screen);\n\t\t\tif (ret >= sizeof(auth_display)) {\n\t\t\t\tlog_err(-1, __func__, \" auth_display overflow\");\n\t\t\t\tlog_close(0);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tif (!sandbox_private) {\n\t\t\t\t/*Fetching XAUTHORITY from job environment if present*/\n\t\t\t\tint xauth_index;\n\t\t\t\tif ((xauth_index = find_env_slot(&(pjob->ji_env), \"XAUTHORITY=\")) != -1) {\n\t\t\t\t\tchar *xauth_file = strchr(pjob->ji_env.v_envp[xauth_index], (int) '=') + 1;\n\t\t\t\t\tret = snprintf(cmd, sizeof(cmd), \"%s -f %s -q -\", XAUTH_BINARY, xauth_file);\n\t\t\t\t} else {\n\t\t\t\t\tret = snprintf(cmd, sizeof(cmd), \"%s -q -\",\n\t\t\t\t\t\t       XAUTH_BINARY);\n\t\t\t\t}\n\t\t\t\tif (ret >= sizeof(cmd)) {\n\t\t\t\t\tlog_err(-1, __func__, \" cmd overflow \");\n\t\t\t\t\tlog_close(0);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tchar var[MAXPATHLEN + 1];\n\t\t\t\tsprintf(var, \"%s/.Xauthority\", pbs_jobdir);\n\t\t\t\tret = snprintf(cmd, sizeof(cmd),\n\t\t\t\t\t       \"%s -f %s/.Xauthority -q -\",\n\t\t\t\t\t       XAUTH_BINARY, pbs_jobdir);\n\t\t\t\tif (ret >= sizeof(cmd)) {\n\t\t\t\t\tlog_err(-1, __func__, \" cmd overflow \");\n\t\t\t\t\tlog_close(0);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tbld_env_variables(&(pjob->ji_env), \"XAUTHORITY\", var);\n\t\t\t}\n\t\t\tf = popen(cmd, \"w\");\n\t\t\tif (f != NULL) {\n\t\t\t\t/**\n\t\t\t\t *  executing commands to add new display\n\t\t\t\t *  in Xauthority file\n\t\t\t\t */\n\t\t\t\tfprintf(f, \"remove %s\\n \", auth_display);\n\t\t\t\tfprintf(f, \"add %s %s %s\\n\", 
auth_display,\n\t\t\t\t\tx11proto,\n\t\t\t\t\tx11data);\n\t\t\t\tpclose(f);\n\t\t\t} else {\n\t\t\t\tsprintf(log_buffer, \"could not run %s\\n\", cmd);\n\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\tlog_close(0);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\n\t\t/* include any new env settings added. */\n\t\tthe_env = pjob->ji_env.v_envp;\n\t\t*(pjob->ji_env.v_envp + pjob->ji_env.v_used) = NULL;\n\n\t\t/* the user was prevented from interrupting; it is safe to revert now */\n\t\t(void) sigaction(SIGHUP, &tmp_act_hup, NULL);\n\t\t(void) sigaction(SIGINT, &tmp_act_int, NULL);\n\t\t(void) sigaction(SIGQUIT, &tmp_act_quit, NULL);\n\t\t(void) sigaction(SIGTSTP, &tmp_act_stp, NULL);\n\n\t\texecve(the_progname, the_argv, the_env);\n\t\tfree(progname);\n\t\tfree_attrlist(&argv_list);\n\t\tfree_str_array(the_argv);\n\t\tthe_argv = NULL;\n\t\tfree_str_array(hook_output.env);\n\t\tfree_str_array(the_env);\n\t\tthe_env = NULL;\n\t} else if (cpid == 0) { /* child does demux */\n\t\tchar *arg[2];\n\t\tchar *shellname;\n\n\t\t/* setup descriptors 3 and 4 */\n\t\t(void) dup2(pjob->ji_stdout, 3);\n\t\tif (pjob->ji_stdout > 3)\n\t\t\tclose(pjob->ji_stdout);\n\t\t(void) dup2(pjob->ji_stderr, 4);\n\t\tif (pjob->ji_stderr > 4)\n\t\t\tclose(pjob->ji_stderr);\n\n\t\t/* construct argv array */\n\t\tshell = pbs_conf.pbs_demux_path;\n\t\tshellname = strrchr(shell, '/');\n\t\tif (shellname)\n\t\t\t++shellname; /* go past last '/' */\n\t\telse\n\t\t\tshellname = shell;\n\t\targ[0] = shellname;\n\t\targ[1] = NULL;\n\n\t\t/* the user was prevented from interrupting; it is safe to revert now */\n\t\t(void) sigaction(SIGHUP, &tmp_act_hup, NULL);\n\t\t(void) sigaction(SIGINT, &tmp_act_int, NULL);\n\t\t(void) sigaction(SIGQUIT, &tmp_act_quit, NULL);\n\t\t(void) sigaction(SIGTSTP, &tmp_act_stp, NULL);\n\n\t\t/* we're purposely not calling log_close() here */\n\t\t/* because it causes a side-effect. 
log_close() would */\n\t\t/* do an fclose(<logfile>), but its file position */\n\t\t/* is still shared with the parent mom, which */\n\t\t/* could be writing to the <logfile>. */\n\t\texecve(shell, arg, pjob->ji_env.v_envp);\n\t}\n\tfprintf(temp_stderr, \"pbs_mom, exec of %s failed with error: %s\\n\",\n\t\tshell, strerror(errno));\n\texit(254); /* should never, ever get here */\n}\n\n/**\n * @brief\n * \tStart a process for a spawn request.  This will be different from\n * \ta job's initial shell task in that the environment will be specified\n * \tand no interactive code need be included.\n *\n * @param[in] ptask - pointer to task structure\n * @param[in] argv - argument list\n * @param[in] envp - pointer to environment variable list\n * @param[in] nodemux - false if the task process needs demux, true otherwise\n\n *\n * @return\tint\n * @retval\tPBSE_NONE (0) if success\n * @retval\tPBSE_* on error.\n *\n */\nint\nstart_process(task *ptask, char **argv, char **envp, bool nodemux)\n{\n\tjob *pjob = ptask->ti_job;\n\tint ebsize;\n\tchar buf[MAXPATHLEN + 2];\n\tpid_t pid;\n\tint pipes[2], kid_read, kid_write, parent_read, parent_write;\n\tint pts;\n\tint i, j, k;\n\tint fd;\n\tu_long ipaddr;\n\tstruct array_strings *vstrs;\n\tstruct startjob_rtn sjr;\n\tchar *pbs_jobdir; /* staging and execution directory of this job */\n\tint hook_errcode = 0;\n\tchar hook_msg[HOOK_MSG_SIZE + 1];\n\tchar *progname = NULL;\n\tpbs_list_head argv_list;\n\tmom_hook_input_t hook_input;\n\tmom_hook_output_t hook_output;\n\tchar *the_progname;\n\tchar **the_argv;\n\tchar **the_env;\n\tchar **res_env;\n\thook *last_phook = NULL;\n\tunsigned int hook_fail_action = 0;\n\tFILE *temp_stderr = stderr;\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\tint cred_action;\n#endif\n\n\tpbs_jobdir = jobdirname(pjob->ji_qs.ji_jobid, pjob->ji_grpcache->gc_homedir);\n\tmemset(&sjr, 0, sizeof(sjr));\n\tif (pipe(pipes) == -1)\n\t\treturn PBSE_SYSTEM;\n\tif (pipes[1] < 3) {\n\t\tkid_write = 
fcntl(pipes[1], F_DUPFD, 3);\n\t\t(void) close(pipes[1]);\n\t} else\n\t\tkid_write = pipes[1];\n\tparent_read = pipes[0];\n\n\tif (pipe(pipes) == -1) {\n\t\tclose(kid_write);\n\t\tclose(parent_read);\n\t\treturn PBSE_SYSTEM;\n\t}\n\tif (pipes[0] < 3) {\n\t\tkid_read = fcntl(pipes[0], F_DUPFD, 3);\n\t\t(void) close(pipes[0]);\n\t} else\n\t\tkid_read = pipes[0];\n\tparent_write = pipes[1];\n\n\t/*\n\t ** Get ipaddr to Mother Superior.\n\t */\n\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) /* I'm MS */\n\t\tipaddr = htonl(localaddr);\n\telse {\n\t\tstruct sockaddr_in *ap;\n\n\t\t/*\n\t\t ** We always have a stream open to MS at node 0.\n\t\t */\n\t\ti = pjob->ji_hosts[0].hn_stream;\n\t\tif ((ap = tpp_getaddr(i)) == NULL) {\n\t\t\tlog_joberr(-1, __func__, \"no stream to MS\",\n\t\t\t\t   pjob->ji_qs.ji_jobid);\n\t\t\treturn PBSE_SYSTEM;\n\t\t}\n\t\tipaddr = ap->sin_addr.s_addr;\n\t}\n\n\t/*\n\t ** Begin a new process for the fledgling task.\n\t */\n\tif ((pid = fork_me(-1)) == -1)\n\t\treturn PBSE_SYSTEM;\n\telse if (pid != 0) { /* parent */\n\t\t(void) close(kid_read);\n\t\t(void) close(kid_write);\n\n\t\t/* read sid */\n\t\ti = readpipe(parent_read, &sjr, sizeof(sjr));\n\t\tj = errno;\n\t\t(void) close(parent_read);\n\t\tif (i != sizeof(sjr)) {\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"read of pipe for pid job %s got %d not %d\",\n\t\t\t\tpjob->ji_qs.ji_jobid, i, (int) sizeof(sjr));\n\t\t\tlog_err(j, __func__, log_buffer);\n\t\t\t(void) close(parent_write);\n\t\t\treturn PBSE_SYSTEM;\n\t\t}\n\t\t(void) writepipe(parent_write, &sjr, sizeof(sjr));\n\t\t(void) close(parent_write);\n\t\tDBPRT((\"%s: read start return %d %d\\n\", __func__,\n\t\t       sjr.sj_code, sjr.sj_session))\n\n\t\t/*\n\t\t ** Set the global id before exiting on error so any\n\t\t ** information can be put into the job struct first.\n\t\t */\n\t\tset_globid(pjob, &sjr);\n\t\tif (sjr.sj_code < 0) {\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\t\t\tAFSLOG_TERM(ptask);\n#endif\n\t\t\t(void) 
sprintf(log_buffer, \"task not started, %s %s %d\",\n\t\t\t\t       (sjr.sj_code == JOB_EXEC_RETRY) ? \"Retry\" : \"Failure\",\n\t\t\t\t       argv[0],\n\t\t\t\t       sjr.sj_code);\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB,\n\t\t\t\t  LOG_NOTICE, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\treturn PBSE_SYSTEM;\n\t\t}\n\n\t\tptask->ti_qs.ti_sid = sjr.sj_session;\n\t\tptask->ti_qs.ti_status = TI_STATE_RUNNING;\n\n\t\t(void) task_save(ptask);\n\t\tif (!check_job_substate(pjob, JOB_SUBSTATE_RUNNING)) {\n\t\t\tset_job_state(pjob, JOB_STATE_LTR_RUNNING);\n\t\t\tset_job_substate(pjob, JOB_SUBSTATE_RUNNING);\n\t\t\tjob_save(pjob);\n\t\t}\n\t\t(void) sprintf(log_buffer, \"task %8.8X started, %s\",\n\t\t\t       ptask->ti_qs.ti_task, argv[0]);\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\n\t\treturn PBSE_NONE;\n\t}\n\n\t/************************************************/\n\t/* The child process - will become the TASK\t*/\n\t/************************************************/\n\t(void) close(parent_read);\n\t(void) close(parent_write);\n\n\t/* unprotect the job from the vagaries of the kernel */\n\tdaemon_protect(0, PBS_DAEMON_PROTECT_OFF);\n\n\t/*\n\t * set up the Environmental Variables to be given to the job\n\t */\n\n\tfor (j = 0, ebsize = 0; envp[j]; j++)\n\t\tebsize += strlen(envp[j]);\n\tvstrs = get_jattr_arst(pjob, JOB_ATR_variables);\n\tpjob->ji_env.v_ensize = vstrs->as_usedptr + num_var_else + num_var_env +\n\t\t\t\tj + EXTRA_ENV_PTRS;\n\tpjob->ji_env.v_used = 0;\n\tpjob->ji_env.v_envp = (char **) malloc(pjob->ji_env.v_ensize * sizeof(char *));\n\tif (pjob->ji_env.v_envp == NULL) {\n\t\treturn PBSE_SYSTEM;\n\t}\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\tif (pjob->ji_tasks.ll_prior == pjob->ji_tasks.ll_next) { /* create only on first task */\n\t\tcred_action = CRED_RENEWAL;\n\t} else {\n\t\tcred_action = CRED_SETENV;\n\t}\n\n\tif (cred_by_job(pjob, cred_action) != PBS_KRB5_OK) 
{\n\t\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_ERR, pjob->ji_qs.ji_jobid,\n\t\t\t   \"failed to set credentials for task %8.8X\",\n\t\t\t   ptask->ti_qs.ti_task);\n\t}\n\n#if defined(HAVE_LIBKAFS) || defined(HAVE_LIBKOPENAFS)\n\tif (start_afslog(ptask, NULL, kid_write, kid_read) != PBS_KRB5_OK) {\n\t\tsprintf(log_buffer, \"afslog for task %8.8X not started\",\n\t\t\tptask->ti_qs.ti_task);\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t}\n#endif\n#endif\n\n\t/* First variables from the local environment */\n\tfor (j = 0; j < num_var_env; ++j)\n\t\tbld_env_variables(&(pjob->ji_env), environ[j], NULL);\n\n\t/* Next, the variables passed with the job.  They may   */\n\t/* be overwritten with new correct values for this job\t*/\n\n\tfor (j = 0; j < vstrs->as_usedptr; ++j) {\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\t\t\t/* never set KRB5CCNAME; it would rewrite the correct value */\n\t\t\tif (strncmp(vstrs->as_string[j], \"KRB5CCNAME\", strlen(\"KRB5CCNAME\")) == 0)\n\t\t\t\tcontinue;\n#endif\n\t\tbld_env_variables(&(pjob->ji_env), vstrs->as_string[j], NULL);\n\t}\n\n\t/* HOME */\n\tbld_env_variables(&(pjob->ji_env), variables_else[0],\n\t\t\t  pjob->ji_grpcache->gc_homedir);\n\n\t/* PBS_JOBNAME */\n\tbld_env_variables(&(pjob->ji_env), variables_else[2],\n\t\t\t  get_jattr_str(pjob, JOB_ATR_jobname));\n\n\t/* PBS_JOBID */\n\tbld_env_variables(&(pjob->ji_env), variables_else[3], pjob->ji_qs.ji_jobid);\n\n\t/* PBS_QUEUE */\n\tbld_env_variables(&(pjob->ji_env), variables_else[4],\n\t\t\t  get_jattr_str(pjob, JOB_ATR_in_queue));\n\n\t/* PBS_JOBCOOKIE */\n\tbld_env_variables(&(pjob->ji_env), variables_else[7],\n\t\t\t  get_jattr_str(pjob, JOB_ATR_Cookie));\n\n\t/* PBS_NODENUM */\n\tsprintf(buf, \"%d\", pjob->ji_nodeid);\n\tbld_env_variables(&(pjob->ji_env), variables_else[8], buf);\n\n\t/* PBS_TASKNUM */\n\tsprintf(buf, \"%8.8X\", ptask->ti_qs.ti_task);\n\tbld_env_variables(&(pjob->ji_env), 
variables_else[9], buf);\n\n\t/* PBS_MOMPORT */\n\tsprintf(buf, \"%d\", pbs_rm_port);\n\tbld_env_variables(&(pjob->ji_env), variables_else[10], buf);\n\n\t/* OMP_NUM_THREADS and NCPUS eq to number of cpus */\n\tsprintf(buf, \"%d\", pjob->ji_vnods[ptask->ti_qs.ti_myvnode].vn_threads);\n#ifdef NAS /* localmod 020 */\n\t/* Force OMP_NUM_THREADS=1 on Columbia.\n\t * If you've ever seen a 256 process MPI program try to start 256\n\t * threads for each process, you'd know why.\n\t */\n\tbld_env_variables(&(pjob->ji_env), variables_else[12], \"1\");\n#else\n\tbld_env_variables(&(pjob->ji_env), variables_else[12], buf);\n#endif /* localmod 020 */\n\tbld_env_variables(&(pjob->ji_env), \"NCPUS\", buf);\n\n\t/* PBS_ACCOUNT */\n\tif (is_jattr_set(pjob, JOB_ATR_account))\n\t\tbld_env_variables(&(pjob->ji_env), variables_else[13],\n\t\t\t\t  get_jattr_str(pjob, JOB_ATR_account));\n\n\tif (is_jattr_set(pjob, JOB_ATR_umask)) {\n\t\tsprintf(buf, \"%ld\", get_jattr_long(pjob, JOB_ATR_umask));\n\t\tsscanf(buf, \"%o\", &j);\n\t\tumask(j);\n\t} else {\n\t\tumask(077);\n\t}\n\n\tmom_unnice();\n\n\t/* set Environment to reflect batch */\n\tbld_env_variables(&(pjob->ji_env), \"PBS_ENVIRONMENT\", \"PBS_BATCH\");\n\tbld_env_variables(&(pjob->ji_env), \"ENVIRONMENT\", \"BATCH\");\n\n\tfor (i = 0; envp[i]; i++)\n\t\tbld_env_variables(&(pjob->ji_env), envp[i], NULL);\n\n\t\t/* Add TMPDIR to environment */\n#ifdef NAS /* localmod 010 */\n\t(void) NAS_tmpdirname(pjob);\n#endif /* localmod 010 */\n\tj = mktmpdir(pjob->ji_qs.ji_jobid,\n\t\t     pjob->ji_qs.ji_un.ji_momt.ji_exuid,\n\t\t     pjob->ji_qs.ji_un.ji_momt.ji_exgid,\n\t\t     &(pjob->ji_env));\n\tif (j != 0) {\n\t\tstarter_return(kid_write, kid_read, j, &sjr);\n\t}\n\n\t/* set PBS_JOBDIR */\n\tif ((is_jattr_set(pjob, JOB_ATR_sandbox)) &&\n\t    (strcasecmp(get_jattr_str(pjob, JOB_ATR_sandbox), \"PRIVATE\") == 0)) {\n\t\tbld_env_variables(&(pjob->ji_env), \"PBS_JOBDIR\", pbs_jobdir);\n\t} else {\n\t\tbld_env_variables(&(pjob->ji_env), 
\"PBS_JOBDIR\", pjob->ji_grpcache->gc_homedir);\n\t}\n\n\tj = set_job(pjob, &sjr);\n\tif (j < 0) {\n\t\tif (j == -1) {\n\t\t\t/* set_job didn't leave message in log_buffer */\n\t\t\t(void) strcpy(log_buffer, \"Unable to set task session\");\n\t\t}\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_NOTICE,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\tif (j == -3)\n\t\t\tj = JOB_EXEC_FAIL2;\n\t\telse\n\t\t\tj = JOB_EXEC_RETRY;\n\t\tstarter_return(kid_write, kid_read, j, &sjr);\n\t}\n\tptask->ti_qs.ti_sid = sjr.sj_session;\n\tif ((i = mom_set_limits(pjob, SET_LIMIT_SET)) != PBSE_NONE) {\n\t\t(void) sprintf(log_buffer, \"Unable to set limits, err=%d\", i);\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_WARNING,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\tif (i == PBSE_RESCUNAV) /* resource temp unavailable */\n\t\t\tj = JOB_EXEC_RETRY;\n\t\telse\n\t\t\tj = JOB_EXEC_FAIL2;\n\t\tstarter_return(kid_write, kid_read, j, &sjr);\n\t}\n\n\tthe_progname = argv[0];\n\tthe_argv = argv;\n\n\t*(pjob->ji_env.v_envp + pjob->ji_env.v_used) = NULL;\n\tthe_env = pjob->ji_env.v_envp;\n\n\tmom_hook_input_init(&hook_input);\n\thook_input.pjob = pjob;\n\thook_input.progname = the_progname;\n\thook_input.argv = the_argv;\n\thook_input.env = the_env;\n\n\tmom_hook_output_init(&hook_output);\n\thook_output.reject_errcode = &hook_errcode;\n\thook_output.last_phook = &last_phook;\n\thook_output.fail_action = &hook_fail_action;\n\thook_output.progname = &progname;\n\tCLEAR_HEAD(argv_list);\n\thook_output.argv = &argv_list;\n\n\tswitch (mom_process_hooks(HOOK_EVENT_EXECJOB_LAUNCH,\n\t\t\t\t  PBS_MOM_SERVICE_NAME,\n\t\t\t\t  mom_host, &hook_input, &hook_output,\n\t\t\t\t  hook_msg, sizeof(hook_msg), 0)) {\n\n\t\tcase 0: /* explicit reject */\n\t\t\tfree(progname);\n\t\t\tfree_attrlist(&argv_list);\n\t\t\tfree_str_array(hook_output.env);\n\t\t\tstarter_return(kid_write, kid_read,\n\t\t\t\t       JOB_EXEC_FAILHOOK_DELETE, &sjr);\n\t\tcase 1: /* explicit accept */\n\t\t\tif 
(progname != NULL)\n\t\t\t\tthe_progname = progname;\n\n\t\t\tthe_argv = svrattrl_to_str_array(&argv_list);\n\t\t\tif (the_argv == NULL) {\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t  LOG_INFO, \"\",\n\t\t\t\t\t  \"execjob_launch hook returned NULL argv!\");\n\t\t\t\tfree(progname);\n\t\t\t\tfree_attrlist(&argv_list);\n\t\t\t\tfree_str_array(hook_output.env);\n\t\t\t\tstarter_return(kid_write, kid_read,\n\t\t\t\t\t       JOB_EXEC_FAILHOOK_DELETE, &sjr);\n\t\t\t}\n\t\t\tres_env = hook_output.env;\n\t\t\tif (res_env == NULL) {\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t  LOG_INFO, \"\",\n\t\t\t\t\t  \"execjob_launch hook NULL env!\");\n\t\t\t\tfree(progname);\n\t\t\t\tfree_attrlist(&argv_list);\n\t\t\t\tfree_str_array(the_argv);\n\t\t\t\tstarter_return(kid_write, kid_read,\n\t\t\t\t\t       JOB_EXEC_FAILHOOK_DELETE, &sjr);\n\t\t\t}\n\n\t\t\t/* clear the env array */\n\t\t\tpjob->ji_env.v_used = 0;\n\t\t\tpjob->ji_env.v_envp[0] = NULL;\n\n\t\t\t/* need to also set vtable as that would */\n\t\t\t/* get appended to later in the code */\n\t\t\t/* vtable holds the environment variables */\n\t\t\t/* and their values that are going to be */\n\t\t\t/* part of the job. 
*/\n\t\t\tk = 0;\n\t\t\twhile (res_env[k]) {\n\t\t\t\tchar *n, *v, *p;\n\t\t\t\tif ((p = strchr(res_env[k], '=')) != NULL) {\n\t\t\t\t\t*p = '\\0';\n\t\t\t\t\tn = res_env[k];\n\t\t\t\t\tv = p + 1;\n\t\t\t\t\tbld_env_variables(&(pjob->ji_env),\n\t\t\t\t\t\t\t  n, v);\n\t\t\t\t\t*p = '=';\n\t\t\t\t}\n\t\t\t\tk++;\n\t\t\t}\n\t\t\tthe_env = pjob->ji_env.v_envp;\n\n\t\t\tbreak;\n\t\tcase 2: /* no hook script executed - go ahead and accept event */\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_INFO, \"\",\n\t\t\t\t  \"execjob_launch hook event: accept req by default\");\n\t}\n\tif (set_credential(pjob, NULL, &the_argv) == -1) {\n\t\tstarter_return(kid_write, kid_read,\n\t\t\t       JOB_EXEC_FAIL2, &sjr); /* exits */\n\t}\n\n\t/* Pick up any env settings added by set_credential(), and NULL */\n\t/* terminate the envp array. */\n\t*(pjob->ji_env.v_envp + pjob->ji_env.v_used) = NULL;\n\tthe_env = pjob->ji_env.v_envp;\n\n\t/* change working directory to PBS_JOBDIR or to User's Home */\n\tif ((is_jattr_set(pjob, JOB_ATR_sandbox)) &&\n\t    (strcasecmp(get_jattr_str(pjob, JOB_ATR_sandbox), \"PRIVATE\") == 0)) {\n\t\tif ((!pbs_jobdir) || (chdir(pbs_jobdir) == -1)) {\n\t\t\tlog_event(PBSEVENT_JOB | PBSEVENT_SECURITY, PBS_EVENTCLASS_JOB,\n\t\t\t\t  LOG_ERR, pjob->ji_qs.ji_jobid,\n\t\t\t\t  \"Could not chdir to PBS_JOBDIR directory\");\n\t\t\t(void) fprintf(stderr, \"sandbox=PRIVATE mode: could not chdir to job directory\\n\");\n\t\t\tstarter_return(kid_write, kid_read, JOB_EXEC_FAIL2, &sjr);\n\t\t}\n\t} else {\n\t\tif (chdir(pjob->ji_grpcache->gc_homedir) == -1) {\n\t\t\tlog_event(PBSEVENT_JOB | PBSEVENT_SECURITY, PBS_EVENTCLASS_JOB,\n\t\t\t\t  LOG_ERR, pjob->ji_qs.ji_jobid,\n\t\t\t\t  \"Could not chdir to Home directory\");\n\t\t\t(void) fprintf(stderr, \"Could not chdir to home directory\\n\");\n\t\t\tstarter_return(kid_write, kid_read, JOB_EXEC_FAIL2, &sjr);\n\t\t}\n\t}\n\n\t/*\n\t ** Set up stdin.\n\t */\n\tif ((fd = 
open(\"/dev/null\", O_RDONLY)) == -1) {\n\t\tlog_err(errno, __func__, \"could not open devnull\");\n\t\t(void) close(0);\n\t} else {\n\t\t(void) dup2(fd, 0);\n\t\tif (fd > 0)\n\t\t\t(void) close(fd);\n\t}\n\n\t/* If nodemux is not already set by the caller, check job's JOB_ATR_nodemux attribute. */\n\tif (!nodemux && (is_jattr_set(pjob, JOB_ATR_nodemux)))\n\t\tnodemux = get_jattr_long(pjob, JOB_ATR_nodemux);\n\n\tif (pjob->ji_numnodes > 1) {\n\t\tif (nodemux) {\n\t\t\t/*\n\t\t\t ** Open /dev/null for stdout and stderr.\n\t\t\t */\n\t\t\tif ((fd = open(\"/dev/null\", O_RDONLY)) == -1) {\n\t\t\t\tlog_err(errno, __func__, \"could not open devnull\");\n\t\t\t\t(void) close(1);\n\t\t\t\t(void) close(2);\n\t\t\t} else {\n\t\t\t\tif (fd != 1)\n\t\t\t\t\t(void) dup2(fd, 1);\n\t\t\t\tif (fd != 2)\n\t\t\t\t\t(void) dup2(fd, 2);\n\t\t\t\tif (fd > 2)\n\t\t\t\t\t(void) close(fd);\n\t\t\t}\n\t\t} else {\n\t\t\t/*\n\t\t\t ** Open sockets to demux proc for stdout and stderr.\n\t\t\t */\n\t\t\tif ((fd = open_demux(ipaddr, pjob->ji_stdout)) == -1)\n\t\t\t\tstarter_return(kid_write, kid_read, JOB_EXEC_FAIL2, &sjr);\n\t\t\t(void) dup2(fd, 1);\n\t\t\tif (fd > 1)\n\t\t\t\t(void) close(fd);\n\t\t\tif ((fd = open_demux(ipaddr, pjob->ji_stderr)) == -1)\n\t\t\t\tstarter_return(kid_write, kid_read, JOB_EXEC_FAIL2, &sjr);\n\t\t\t(void) dup2(fd, 2);\n\t\t\tif (fd > 2)\n\t\t\t\t(void) close(fd);\n\n\t\t\tif (write(1, get_jattr_str(pjob, JOB_ATR_Cookie),\n\t\t\t      strlen(get_jattr_str(pjob, JOB_ATR_Cookie))) == -1)\n\t\t\t\tlog_errf(-1, __func__, \"write failed, err: %s\", strerror(errno));\n\t\t\tif (write(2, get_jattr_str(pjob, JOB_ATR_Cookie),\n\t\t\t      strlen(get_jattr_str(pjob, JOB_ATR_Cookie))) == -1)\n\t\t\t\tlog_errf(-1, __func__, \"write failed, err: %s\", strerror(errno));\n\t\t}\n\t} else if (is_jattr_set(pjob, JOB_ATR_interactive) && get_jattr_long(pjob, JOB_ATR_interactive) > 0) {\n\t\t/* interactive job, single node, write to pty */\n\t\tif ((pts = open_pty(pjob)) < 0) {\n\t\t\tlog_err(errno, __func__, \"cannot open slave\");\n\t\t\tstarter_return(kid_write, kid_read, JOB_EXEC_FAIL1, &sjr);\n\t\t}\n\t\t(void) dup2(pts, 1);\n\t\t(void) dup2(pts, 2);\n\n\t} else {\n\t\t/* normal batch job, single node, write straight to files */\n\t\tif (open_std_out_err(pjob) == -1) {\n\t\t\tstarter_return(kid_write, kid_read,\n\t\t\t\t       JOB_EXEC_RETRY, &sjr);\n\t\t} else {\n\t\t\t/* After the error is redirected, stderr does not have a valid FILE* */\n\t\t\ttemp_stderr = fdopen(STDERR_FILENO, \"w\");\n\n\t\t\t/* If we could not get a valid FILE*, let temp_stderr point to stderr to avoid\n\t\t\t * a possible crash in subsequent calls to output functions like printf/fprintf */\n\t\t\tif (!temp_stderr)\n\t\t\t\ttemp_stderr = stderr;\n\t\t}\n\t}\n\n\tlog_close(0);\n\tstarter_return(kid_write, kid_read, JOB_EXEC_OK, &sjr);\n\n\tenviron = the_env;\n\texecvp(the_progname, the_argv);\n\tfree(progname);\n\tfree_attrlist(&argv_list);\n\tfree_str_array(the_argv);\n\tfree_str_array(hook_output.env);\n\tfree_str_array(the_env);\n#if 0\n\t/*\n\t ** This is for a shell to run the command.\n\t */\n\tif (argv[0][0] == '/')\t\t/* full path exe */\n\t\texecve(argv[0], argv, pjob->ji_env.v_envp);\n\telse {\n\t\tstruct\tpasswd\t*pwent;\n\t\tchar\t*shell = \"/bin/sh\";\n\t\tchar\t*shname;\n\t\tchar\t*args[4];\n\t\tsize_t\tlen;\n\n\t\tpwent = getpwuid(pjob->ji_qs.ji_un.ji_momt.ji_exuid);\n\t\tif (pwent != NULL && pwent->pw_shell[0] == '/')\n\t\t\tshell = pwent->pw_shell;\n\t\tshname = strrchr(shell, '/') + 1;\t/* one past slash */\n\t\t/* \"-\" prefix marks a login shell */\n\t\targs[0] = malloc(strlen(shname) + 2);\n\t\tstrcpy(args[0], \"-\");\n\t\tstrcat(args[0], shname);\n\n\t\targs[1] = \"-c\";\n\n\t\t/* size the command string first so the strcat calls below cannot overflow */\n\t\tlen = strlen(argv[0]) + 1;\n\t\tfor (i = 1; argv[i] != NULL; i++)\n\t\t\tlen += strlen(argv[i]) + 1;\n\t\targs[2] = malloc(len);\n\t\tstrcpy(args[2], argv[0]);\n\t\tfor (i = 1; argv[i] != NULL; i++) {\n\t\t\tstrcat(args[2], \" \");\n\t\t\tstrcat(args[2], 
argv[i]);\n\t\t}\n\n\t\targs[3] = NULL;\n\n\t\tprintf(\"%s %s %s\\n\", args[0], args[1], args[2]);\n\t\texecve(shell, args, pjob->ji_env.v_envp);\n\t}\n#endif\n\tfprintf(temp_stderr, \"%s: %s\\n\", argv[0], strerror(errno));\n\texit(254);\n\treturn PBSE_SYSTEM; /* not reached */\n}\n\n/**\n * @brief\n *\tFree the ji_hosts and ji_vnods arrays for a job.  If any events are\n *\tattached to an array element, free them as well.\n *\n * @param[in] pj - job pointer\n *\n * @return Void\n *\n */\n\nvoid\nnodes_free(job *pj)\n{\n\tint i;\n\tvmpiprocs *vp;\n\n\tif (pj->ji_vnods) {\n\t\tvp = pj->ji_vnods;\n\t\tfor (i = 0; i < pj->ji_numvnod; i++, vp++) {\n\t\t\tif (vp->vn_hname)\n\t\t\t\tfree(vp->vn_hname);\n\t\t\tif (vp->vn_vname)\n\t\t\t\tfree(vp->vn_vname);\n\t\t}\n\t\t(void) free(pj->ji_vnods);\n\t\tpj->ji_vnods = NULL;\n\t}\n\n\tif (pj->ji_assn_vnodes) {\n\t\tvp = pj->ji_assn_vnodes;\n\t\tfor (i = 0; i < pj->ji_num_assn_vnodes; i++, vp++) {\n\t\t\tif (vp->vn_hname)\n\t\t\t\tfree(vp->vn_hname);\n\t\t\tif (vp->vn_vname)\n\t\t\t\tfree(vp->vn_vname);\n\t\t}\n\t\t(void) free(pj->ji_assn_vnodes);\n\t\tpj->ji_assn_vnodes = NULL;\n\t\tpj->ji_num_assn_vnodes = 0;\n\t}\n\n\tif (pj->ji_hosts) {\n\t\thnodent *np;\n\n\t\tnp = pj->ji_hosts;\n\t\tfor (i = 0; i < pj->ji_numnodes; i++, np++) {\n\t\t\teventent *ep = (eventent *) GET_NEXT(np->hn_events);\n\n\t\t\tif (np->hn_host)\n\t\t\t\tfree(np->hn_host);\n\t\t\tif (np->hn_vlist)\n\t\t\t\tfree(np->hn_vlist);\n\n\t\t\t/* don't close stream in case another job uses it */\n\t\t\twhile (ep) {\n\n\t\t\t\tif (ep->ee_argv)\n\t\t\t\t\tarrayfree(ep->ee_argv);\n\t\t\t\tif (ep->ee_envp)\n\t\t\t\t\tarrayfree(ep->ee_envp);\n\t\t\t\tdelete_link(&ep->ee_next);\n\t\t\t\tfree(ep);\n\t\t\t\tep = (eventent *) GET_NEXT(np->hn_events);\n\t\t\t}\n\t\t\t/*\n\t\t\t ** Here we free any dependent structure(s) from hn_setup.\n\t\t\t */\n\t\t\tif (job_free_node != NULL)\n\t\t\t\tjob_free_node(pj, np);\n\t\t}\n\t\tfree(pj->ji_hosts);\n\t\tpj->ji_hosts = 
NULL;\n\t}\n}\n\n/**\n * @brief\n *\tAdd a mom to a job, if the mom is not already present\n *\n * @param[in] pjob - job pointer\n * @param[in] mname - mom name to add\n * @param[in] port - mom port\n * @param[in/out] mi - The last used index in the ji_hosts array\n * @param[out] mynp - Return pointer to a match with this host\n *\n * @return hnodent\n * @retval - The hnodent structure matching the mname, port\n * @retval - NULL - failure to add (get_fullhostname failed)\n *\n */\nhnodent *\nadd_mom_to_job(job *pjob, char *mname, int port, int *mi, hnodent **mynp)\n{\n\tint j;\n\tint momindex = *mi;\n\thnodent *hp = NULL;\n\n\t/*\n\t * for the natural vnode in a set that satisfies a chunk,\n\t * see if we have a hnodent entry for the parent Mom;\n\t * if not, add an entry\n\t */\n\n\t/* see if we already have this mom */\n\tfor (j = 0; j < momindex; ++j) {\n\t\tif ((strcmp(mname, pjob->ji_hosts[j].hn_host) == 0) && (port == pjob->ji_hosts[j].hn_port))\n\t\t\tbreak;\n\t}\n\thp = &pjob->ji_hosts[j];\n\tif (j == momindex) {\n\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, \"Adding mom %s:%d to job\", mname, port);\n\t\t/* need to add entry */\n\t\thp->hn_node = momindex++;\n\t\thp->hn_host = strdup(mname);\n\t\tif (hp->hn_host == NULL)\n\t\t\treturn (NULL);\n\t\thp->hn_port = port;\n\t\thp->hn_stream = -1;\n\t\thp->hn_eof_ts = 0; /* reset eof timestamp */\n\t\thp->hn_sister = SISTER_OKAY;\n\t\thp->hn_nprocs = 0;\n\t\thp->hn_vlnum = 0;\n\t\thp->hn_vlist = NULL;\n\t\tmemset(&hp->hn_nrlimit, 0, sizeof(resc_limit_t));\n\t\tCLEAR_HEAD(hp->hn_events);\n\t\t/* mark next slot as the (current) end */\n\t\tpjob->ji_hosts[momindex].hn_node = TM_ERROR_NODE;\n\n\t\tif (hp->hn_port == pbs_rm_port) {\n\t\t\tint hostmatch = 0;\n\t\t\tstatic char node_name[PBS_MAXHOSTNAME + 1] = {'\\0'};\n\t\t\tstatic char canonical_name[PBS_MAXHOSTNAME + 1] = {'\\0'};\n\n\t\t\t/*\n\t\t\t* The following block 
prevents us from having to employ\n\t\t\t* yet another global variable to represent the hostname\n\t\t\t* of the local node.\n\t\t\t*/\n\t\t\tif (pbs_conf.pbs_leaf_name) {\n\t\t\t\tif (strcmp(pbs_conf.pbs_leaf_name, node_name) != 0) {\n\t\t\t\t\t/* PBS_LEAF_NAME has changed or node_name is uninitialized */\n\t\t\t\t\tstrncpy(node_name, pbs_conf.pbs_leaf_name, PBS_MAXHOSTNAME);\n\t\t\t\t\tnode_name[PBS_MAXHOSTNAME] = '\\0';\n\t\t\t\t\t/* Need to canonicalize PBS_LEAF_NAME */\n\t\t\t\t\tif (get_fullhostname(node_name, canonical_name, (sizeof(canonical_name) - 1)) != 0) {\n\t\t\t\t\t\tlog_errf(errno, __func__, \"Failed to get fullhostname from %s for job %s\", node_name, pjob->ji_qs.ji_jobid);\n\t\t\t\t\t\tnode_name[0] = '\\0';\n\t\t\t\t\t\tcanonical_name[0] = '\\0';\n\t\t\t\t\t\treturn (NULL);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif (strcmp(mom_host, node_name) != 0) {\n\t\t\t\t\t/* mom_host has changed or node_name is uninitialized */\n\t\t\t\t\tstrncpy(node_name, mom_host, PBS_MAXHOSTNAME);\n\t\t\t\t\tnode_name[PBS_MAXHOSTNAME] = '\\0';\n\t\t\t\t\t/* mom_host contains the canonical name */\n\t\t\t\t\tstrncpy(canonical_name, mom_host, PBS_MAXHOSTNAME);\n\t\t\t\t\tcanonical_name[PBS_MAXHOSTNAME] = '\\0';\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (strcmp(hp->hn_host, node_name) == 0)\n\t\t\t\thostmatch = 1;\n\t\t\telse {\n\t\t\t\tchar namebuf[PBS_MAXHOSTNAME + 1];\n\t\t\t\tif (get_fullhostname(hp->hn_host, namebuf, (sizeof(namebuf) - 1)) != 0) {\n\t\t\t\t\tlog_errf(errno, __func__, \"Failed to get fullhostname from %s for job %s\", hp->hn_host, pjob->ji_qs.ji_jobid);\n\t\t\t\t\treturn (NULL);\n\t\t\t\t}\n\t\t\t\tif (strcmp(namebuf, canonical_name) == 0)\n\t\t\t\t\thostmatch = 1;\n\t\t\t}\n\n\t\t\tif (hostmatch) {\n\t\t\t\tpjob->ji_nodeid = hp->hn_node;\n\t\t\t\tif (mynp)\n\t\t\t\t\t*mynp = hp;\n\t\t\t}\n\t\t}\n\t}\n\t*mi = momindex;\n\treturn hp;\n}\n\n/**\n * @brief\n *\tGet the next \"chunk\" from the exechost(2) string\n *\n * @param[in] enable_exechost2 - is 
exec_host2 available?\n * @param[in/out] ppeh - pointer to the current location in exechost(2) string\n * @param[out] pport - pointer to the integer port variable to be returned\n *\n * @return char *\n * @retval - the mom name\n * @retval - NULL - failure\n *\n */\nstatic char *\nget_next_exechost2(int enable_exechost2, char **ppeh, int *pport)\n{\n\tstatic char *mname;\n\tint port;\n\tchar *peh = *ppeh;\n\tint n = 0;\n\tstatic char natvnodename[PBS_MAXNODENAME + 1];\n\tstatic char momname[PBS_MAXNODENAME + 1];\n\tstatic char momport[10] = {0};\n\tmomvmap_t *pnat = NULL;\n\n\tif (enable_exechost2 == 0) {\n\t\twhile ((*peh != '/') && (*peh != '\\0') &&\n\t\t       (n < PBS_MAXNODENAME)) {\n\t\t\tnatvnodename[n++] = *peh++;\n\t\t}\n\t\tnatvnodename[n] = '\\0';\n\t} else {\n\t\tmomport[0] = '\\0';\n\t\twhile ((*peh != ':') && (*peh != '/') && (*peh != '\\0') &&\n\t\t       (n < PBS_MAXNODENAME)) {\n\t\t\tmomname[n++] = *peh++;\n\t\t}\n\t\tmomname[n] = '\\0';\n\t\t/* check if peh is colon, if so parse out port */\n\t\tn = 0;\n\t\tif (*peh == ':') {\n\t\t\tpeh++; /* skip first ':' character to get port number */\n\t\t\t/* leave room for the terminating NUL below */\n\t\t\twhile ((*peh != '/') && (*peh != '\\0') && (n < sizeof(momport) - 1))\n\t\t\t\tmomport[n++] = *peh++;\n\t\t}\n\t\tmomport[n] = '\\0';\n\t}\n\n\t/* advance past the \"+\" to the next host */\n\twhile (*peh != '\\0') {\n\t\tif (*peh++ == '+')\n\t\t\tbreak;\n\t}\n\n\tif (enable_exechost2 == 0) {\n\t\tpnat = find_vmap_entry(natvnodename);\n\t\tif (pnat != NULL) {\n\t\t\t/* found a map entry */\n\t\t\tmname = pnat->mvm_mom->mi_host;\n\t\t\tport = pnat->mvm_mom->mi_port + 1; /* RM port */\n\t\t} else {\n\t\t\t/* no map entry, assume same vnode name is */\n\t\t\t/* the host name and the port is standard  */\n\t\t\tmname = natvnodename;\n\t\t\tport = pbs_mom_port + 1; /* RM port */\n\t\t}\n\t} else {\n\t\tmname = momname;\n\t\tif (strlen(momport) > 0) {\n\t\t\tport = atol(momport) + 1;\n\t\t} else {\n\t\t\tport = pbs_mom_port + 1; /* RM port 
*/\n\t}\n\t}\n\n\t*pport = port;\n\t*ppeh = peh;\n\treturn mname;\n}\n\n/**\n * @brief\n *\tjob_nodes - process schedselect and exec_vnode to build mapping between\n *\tchunks and allocated nodes/resources.\n *\n * @par Functionality:\n *\tLoops through the schedselect attribute and concurrently the exec_vnode\n *\tand exec_host attributes, creating two arrays of structures:\n *\t    hnodent - one per Mom regardless of the number of vnodes\n *\t\tallocated from that Mom.  For the local Mom's entry, indexed\n *\t\tby pjob->ji_nodeid, the hnodent will also contain a sub-array\n *\t\tof host_vlist_t with one entry per vnode allocated on this host.\n *\t\tThis sub-array's length is given by hn_vlnum.\n *\t    vmpiprocs - one per task/mpi process to be created; there is one\n *\t\tline per entry written into PBS_NODEFILE by Mom\n *\tBoth the hnodent and vmpiprocs arrays are terminated by an entry\n *\twhere the id (hn_node or vn_node) is set to TM_ERROR_NODE.\n *\tAdditionally, this function determines the ji_nodeid of the job\n *\tby matching the mom's name and port with the exechost list.\n *\n * @param[in]\tpjob - pointer to job structure for job to be run\n * @param[out]\tmynp - pointer to hnodent structure to be filled with the\n *                     hnodent for the node matching the current mom:port\n *\n * @return\tint\n * @retval\tPBSE_NONE (0) if success\n * @retval\tPBSE_* on error.\n *\n * @par Side Effects:\n *\tpjob->ji_vnods, pjob->ji_assn_vnodes, and pjob->ji_hosts are set,\n *\tarrays in the heap\n *\n * @par MT-safe: likely no\n *\n */\nint\njob_nodes_inner(struct job *pjob, hnodent **mynp)\n{\n\tchar *execvnode;\n\tchar *schedselect;\n\tint i, j, k;\n\thnodent *hp = NULL;\n\tint hpn;\n\tint momindex;\n\tchar *mname;\n\tint nmoms;\n\tvmpiprocs *vmp;\n\tmomvmap_t *pmm = NULL;\n\tmominfo_t *pmom;\n\n\tchar *peh;\n\tint port;\n\tint nprocs;\n\tint n_chunks;\n\tint procindex;\n\tint rc;\n\tlong long sz;\n\tchar *tpc;\n\tresc_limit_t have;\n\tresc_limit_t 
need;\n\tint naccels = 0;\t  /* naccelerators count */\n\tint need_accel = 0;\t  /* accelerator needed in subchunk? */\n\tlong long accel_mem = 0;  /* accel mem per exec_vnode key-value pair */\n\tchar *accel_model = NULL; /* accelerator model if set */\n\n\t/* variables used in parsing the \"exec_vnode\" string */\n\tint stop_on_paren;\n\tchar *pndspec;\n\tchar *elast;\n\tint enelma;\n\tchar *nodep;\n\tstatic int ebuf_len = 0;\n\tstatic char *ebuf = NULL;\n\tstatic int enelmt = 0;\n\tstatic key_value_pair *enkv = NULL;\n\n\t/* variables used in parsing the \"schedselect\" string */\n\tchar *psubspec;\n\tchar *slast;\n\tint snc;\n\tint snelma;\n\tstatic int sbuf_len = 0;\n\tstatic char *sbuf = NULL;\n\tstatic int snelmt = 0;\n\tstatic key_value_pair *skv = NULL;\n\tchar *save_ptr; /* posn for strtok_r() */\n\tint n_assn_vnodes;\n\tint assn_index;\n\tchar *tmp_str;\n\tchar *evnode;\n\n\tif (pjob == NULL)\n\t\treturn (PBSE_INTERNAL);\n\tif (!(is_jattr_set(pjob, JOB_ATR_exec_vnode)))\n\t\treturn (PBSE_INTERNAL);\n\tif (!(is_jattr_set(pjob, JOB_ATR_SchedSelect)))\n\t\treturn (PBSE_INTERNAL);\n\n\t/* free what might have been done before if job is restarted */\n\tnodes_free(pjob);\n\n\texecvnode = get_jattr_str(pjob, JOB_ATR_exec_vnode);\n\tif (execvnode == NULL)\n\t\treturn (PBSE_INTERNAL);\n\n\tschedselect = get_jattr_str(pjob, JOB_ATR_SchedSelect);\n\tif (schedselect == NULL)\n\t\treturn (PBSE_INTERNAL);\n\n\tif (get_jattr_str(pjob, JOB_ATR_exec_host2) != NULL) {\n\t\t/* Mom got information from new server */\n\t\tenable_exechost2 = 1;\n\t\tpeh = get_jattr_str(pjob, JOB_ATR_exec_host2);\n\t} else {\n\t\tpeh = get_jattr_str(pjob, JOB_ATR_exec_host);\n\t}\n\tif (peh == NULL)\n\t\treturn (PBSE_INTERNAL);\n\n\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, \"execvnode=%s\", execvnode);\n\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, \"schedselect=%s\", schedselect);\n\tlog_eventf(PBSEVENT_DEBUG3, 
PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, \"%s=%s\", enable_exechost2 ? \"exechost2\" : \"exechost\", peh);\n\n\t/* make sure parsing buffers are long enough */\n\tif ((i = strlen(execvnode)) >= ebuf_len) {\n\t\ttpc = (char *) realloc(ebuf, i + 100);\n\t\tif (tpc == NULL)\n\t\t\treturn (PBSE_SYSTEM);\n\t\tebuf = tpc;\n\t\tebuf_len = i + 100;\n\t}\n\tif ((i = strlen(schedselect)) >= sbuf_len) {\n\t\ttpc = (char *) realloc(sbuf, i + 100);\n\t\tif (tpc == NULL)\n\t\t\treturn (PBSE_SYSTEM);\n\t\tsbuf = tpc;\n\t\tsbuf_len = i + 100;\n\t}\n\n\tstrcpy(sbuf, schedselect);\n\n\t/* First, go parse schedselect and count up number of chunks and */\n\t/* total number of mpiprocs;   assuming one Mom per chunk and    */\n\t/* one mpiproc structure per mpiproc, this is used to obtain a   */\n\t/* maximum number of each for allocating the array               */\n\n\tnmoms = 0;    /* num of mom (struct hnodent) entries needed    */\n\tnprocs = 0;   /* num of vmpiproc entries needed                */\n\tn_chunks = 0; /* number of chunks */\n\n\tpsubspec = parse_plus_spec_r(sbuf, &slast, &hpn);\n\t/* hpn set to 1 if open paren found, -1 if close paren found, or */\n\t/* 0 if neither or both found\t\t\t\t\t */\n\n\twhile (psubspec) {\n\t\tDBPRT((\"\\tsubspec: %s\\n\", psubspec))\n\t\trc = parse_chunk_r(psubspec, &snc, &snelma, &snelmt, &skv, NULL);\n\t\t/* snc is the number (repeat factor) of chunks */\n\t\tif (rc != 0)\n\t\t\treturn (rc);\n\n\t\tnmoms += snc; /* num of Moms, one per chunk */\n\t\tk = 1;\t      /* default number of mpiprocs */\n\t\tfor (j = 0; j < snelma; ++j) {\n\t\t\tif (strcmp(skv[j].kv_keyw, \"mpiprocs\") == 0) {\n\t\t\t\tk = atol(skv[j].kv_val);\n\t\t\t}\n\t\t}\n#ifdef NAS /* localmod 020 */\n\t\t/*\n\t\t * At NAS, if only ncpus is specified and not mpiprocs or\n\t\t * ompthreads, assume mpiprocs = ncpus.\n\t\t */\n\t\t{\n\t\t\tint ncpusidx = -1;\n\t\t\tfor (j = 0; j < snelma; ++j) {\n\t\t\t\tif (strcmp(skv[j].kv_keyw, \"ncpus\") == 0) {\n\t\t\t\t\tncpusidx 
= j;\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\tif (strcmp(skv[j].kv_keyw, MPIPROCS) == 0) {\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tif (strcmp(skv[j].kv_keyw, OMPTHREADS) == 0) {\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (ncpusidx >= 0 && j >= snelma) {\n\t\t\t\tk = atol(skv[ncpusidx].kv_val);\n\t\t\t}\n\t\t}\n#endif\t\t\t\t   /* localmod 020 */\n\t\tnprocs += snc * k; /* mpiprocs * num of chunks */\n\t\tn_chunks += snc;\n\t\tpsubspec = parse_plus_spec_r(slast, &slast, &hpn);\n\t}\n\n\tDBPRT((\"- allocating %d hosts and %d procs\\n\", nmoms, nprocs))\n\tpjob->ji_hosts = (hnodent *) calloc(nmoms + 1, sizeof(hnodent));\n\tpjob->ji_vnods = (vmpiprocs *) calloc(nprocs + 1, sizeof(vmpiprocs));\n\n\tn_assn_vnodes = 0;\n\tevnode = strdup(execvnode);\n\tif (evnode == NULL) {\n\t\tlog_err(errno, __func__, \"strdup failed\");\n\t\treturn (PBSE_SYSTEM);\n\t}\n\tfor (tmp_str = strtok_r(evnode, \"+\", &save_ptr); tmp_str != NULL; tmp_str = strtok_r(NULL, \"+\", &save_ptr)) {\n\t\tn_assn_vnodes++;\n\t}\n\n\tif (n_assn_vnodes == 0)\n\t\tn_assn_vnodes = 1;\n\n\tfree(evnode);\n\n\tpjob->ji_assn_vnodes = (vmpiprocs *) calloc(n_assn_vnodes + 1, sizeof(vmpiprocs));\n\n\tif ((pjob->ji_hosts == NULL) || (pjob->ji_vnods == NULL) ||\n\t    (pjob->ji_assn_vnodes == NULL)) {\n\t\tlog_err(errno, \"job_nodes\", \"calloc failed\");\n\t\treturn (PBSE_SYSTEM);\n\t}\n\n\tfor (i = 0; i <= nmoms; ++i) {\n\t\tpjob->ji_hosts[i].hn_node = TM_ERROR_NODE;\n\t\tCLEAR_HEAD(pjob->ji_hosts[i].hn_events);\n\t}\n\tfor (i = 0; i <= nprocs; ++i)\n\t\tpjob->ji_vnods[i].vn_node = TM_ERROR_NODE;\n\n\tfor (i = 0; i <= n_assn_vnodes; ++i)\n\t\tpjob->ji_assn_vnodes[i].vn_node = TM_ERROR_NODE;\n\n\t/* Now parse schedselect and exec_vnode at same time to map mpiprocs */\n\t/* onto the corresponding Mom and sum up the resources allocated     */\n\t/* from each Mom\t\t\t\t\t\t     */\n\n\tstrcpy(ebuf, execvnode);\n\tstrcpy(sbuf, schedselect);\n\n\tmomindex = 0;\n\tprocindex = 0;\n\tassn_index = 0;\n\n\telast = 
ebuf;\n\n\t/*\n\t * Next we parse the select spec to look at the next chunk that was\n\t * requested by the user.  For each chunk we\n\t * 1. parse the subspecs from the exec_vnode that were allocated for\n\t *    that chunk.  Then\n\t *    a. for the first vnode, get the Mom/host and setup the hnodent\n\t *    b. for my hnodent, for each vnode, add a host_vlist entry to\n\t *       the hnodent entry\n\t * 2. setup the number of \"mpiprocs\" (from the chunk) vmpiprocs\n\t */\n\n\t/* (1) parse chunk from select spec */\n\n\tpsubspec = parse_plus_spec_r(sbuf, &slast, &hpn);\n\twhile (psubspec) {\n\t\tint nthreads;\n\t\tint numprocs;\n\n\t\tDBPRT((\"\\tsubspec: %s\\n\", psubspec))\n\t\tnthreads = -1;\n\t\tnumprocs = -1;\n\t\trc = parse_chunk_r(psubspec, &snc, &snelma, &snelmt, &skv, NULL);\n\t\t/* snc = number of chunks */\n\t\tif (rc != 0) {\n\t\t\treturn (rc);\n\t\t}\n\n\t\tfor (i = 0; i < snc; ++i) { /* for each chunk in schedselect.. */\n\t\t\tneed_accel = 0;\n\t\t\taccel_model = NULL;\n\n\t\t\t/* clear \"need\" counts */\n\t\t\tmemset(&need, 0, sizeof(need));\n\n\t\t\t/* clear \"have\" counts */\n\t\t\tmemset(&have, 0, sizeof(have));\n\n\t\t\t/* figure out what is \"need\"ed */\n\t\t\tfor (j = 0; j < snelma; ++j) {\n\t\t\t\tif (strcmp(skv[j].kv_keyw, \"ncpus\") == 0)\n\t\t\t\t\tneed.rl_ncpus = atol(skv[j].kv_val);\n\t\t\t\telse if (strcmp(skv[j].kv_keyw, \"mem\") == 0)\n\t\t\t\t\tneed.rl_mem = to_kbsize(skv[j].kv_val);\n\t\t\t\telse if (strcmp(skv[j].kv_keyw, \"vmem\") == 0)\n\t\t\t\t\tneed.rl_vmem = to_kbsize(skv[j].kv_val);\n\t\t\t\telse if (strcmp(skv[j].kv_keyw, \"mpiprocs\") == 0)\n\t\t\t\t\tnumprocs = atol(skv[j].kv_val);\n\t\t\t\telse if (strcmp(skv[j].kv_keyw, \"ompthreads\") == 0)\n\t\t\t\t\tnthreads = atol(skv[j].kv_val);\n\t\t\t\telse if (strcmp(skv[j].kv_keyw, \"accelerator\") == 0) {\n\t\t\t\t\tif (strcmp(skv[j].kv_val, \"True\") == 0)\n\t\t\t\t\t\tneed_accel = 1;\n\t\t\t\t\telse\n\t\t\t\t\t\tneed_accel = 0;\n\t\t\t\t} else if (strcmp(skv[j].kv_keyw, 
\"naccelerators\") == 0)\n\t\t\t\t\tneed.rl_naccels = atol(skv[j].kv_val);\n\t\t\t\telse if (strcmp(skv[j].kv_keyw, \"accelerator_model\") == 0) {\n\t\t\t\t\taccel_model = skv[j].kv_val;\n\t\t\t\t\tneed_accel = 1;\n\t\t\t\t}\n\t\t\t}\n#ifdef NAS /* localmod 020 */\n\t\t\tif (nthreads == -1 && numprocs == -1 && need.rl_ncpus != 0) {\n\t\t\t\tnumprocs = need.rl_ncpus;\n\t\t\t}\n#endif /* localmod 020 */\n\t\t\tif (nthreads == -1)\n#ifdef NAS_NCPUS1 /* localmod 020 */\n\t\t\t\tnthreads = 1;\n#else\n\t\t\t\tnthreads = need.rl_ncpus;\n#endif /* localmod 020 */\n\n\t\t\tif (numprocs == -1) {\n\t\t\t\tif (need.rl_ncpus == 0) {\n\t\t\t\t\tnumprocs = 0;\n\t\t\t\t} else {\n\t\t\t\t\tnumprocs = 1;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tDBPRT((\"\\tchunk: %d, need %d ncpus and %lu mem\\n\", i,\n\t\t\t       need.rl_ncpus, (unsigned long) need.rl_mem))\n\n\t\t\t/*\n\t\t\t* The \"natural\" vnode for the Mom who is managing\n\t\t\t* this chunk of resources can be determined by the\n\t\t\t* corresponding entry in exec_host.  
We have to know which\n\t\t\t* Mom in case of multiple-Moms for the allocated vnodes\n\t\t\t*/\n\t\t\tmname = get_next_exechost2(enable_exechost2, &peh, &port);\n\t\t\thp = add_mom_to_job(pjob, mname, port, &momindex, mynp);\n\t\t\tif (hp == NULL) {\n\t\t\t\tlog_err(errno, __func__, \"Failed to add mom to job\");\n\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t}\n\n\t\t\t/* now parse exec_vnode to match up alloc-ed with needed */\n\t\t\tstop_on_paren = 0;\n\n\t\t\twhile ((pndspec = parse_plus_spec_r(elast, &elast, &hpn)) != NULL) {\n\t\t\t\tint vnncpus = 0;\n\t\t\t\tlong long ndmem = 0;\n\n\t\t\t\tif (hpn > 0) /* found open paren '(' */\n\t\t\t\t\tstop_on_paren = 1;\n\n\t\t\t\trc = parse_node_resc_r(pndspec, &nodep, &enelma, &enelmt, &enkv);\n\n\t\t\t\t/* if no resources specified, skip it */\n\t\t\t\tif (enelma == 0) {\n\t\t\t\t\tstop_on_paren = 0;\n\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, \"Ignoring vnode %s without resources\", nodep);\n\t\t\t\t\tmname = get_next_exechost2(enable_exechost2, &peh, &port);\n\t\t\t\t\thp = add_mom_to_job(pjob, mname, port, &momindex, mynp);\n\t\t\t\t\tif (hp == NULL) {\n\t\t\t\t\t\tlog_err(errno, __func__, \"Failed to add mom to job\");\n\t\t\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t\t\t}\n\t\t\t\t\tcontinue; /* check next piece */\n\t\t\t\t}\n\n\t\t\t\t/* nodep = vnode name */\n\t\t\t\tif (rc != 0) {\n\t\t\t\t\treturn (rc);\n\t\t\t\t}\n\t\t\t\tDBPRT((\"\\t\\tusing vnode %s\\n\", nodep))\n\n\t\t\t\t/* find the Mom who manages the vnode */\n\t\t\t\tpmm = (momvmap_t *) find_vmap_entry(nodep);\n\t\t\t\tif (pmm == NULL) {\n\t\t\t\t\t/* Did not find a vmap entry for this vnode */\n\t\t\t\t\t/* assume it is host and add it w/ std port */\n\n\t\t\t\t\tif (enable_exechost2) {\n\t\t\t\t\t\t/* In case mom connected with newer server  */\n\t\t\t\t\t\tpmom = create_mom_entry(mname, port - 1);\n\t\t\t\t\t} else {\n\t\t\t\t\t\tpmom = create_mom_entry(nodep, pbs_mom_port);\n\t\t\t\t\t}\n\t\t\t\t\tif (pmom == 
NULL)\n\t\t\t\t\t\treturn PBSE_SYSTEM;\n#ifdef NAS /* localmod 123 */\n\t\t\t\t\t/* call create_mommap_entry() in a */\n\t\t\t\t\t/* way that populates the job      */\n\t\t\t\t\t/* nodefile with short names       */\n\t\t\t\t\t/* (e.g. r169i0n0)                 */\n\t\t\t\t\tif (0)\n#else\n\t\t\t\t\tif (enable_exechost2)\n#endif /* localmod 123 */\n\t\t\t\t\t{\n\t\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, \"Creating entry for vnode=%s, mom=%s\", nodep, mname);\n\t\t\t\t\t\tpmm = create_mommap_entry(nodep, mname, pmom, 0);\n\t\t\t\t\t} else {\n\t\t\t\t\t\tpmm = create_mommap_entry(nodep, NULL, pmom, 0);\n\t\t\t\t\t}\n\n\t\t\t\t\tif (pmm == NULL) {\n\t\t\t\t\t\tdelete_mom_entry(pmom);\n\t\t\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t\t\t}\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_NODE, LOG_DEBUG, nodep, \"implicitly added host to vmap\");\n\t\t\t\t} else {\n\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid,\n\t\t\t\t\t\t   \"Found! 
vnodemap entry for vnode=%s: mom-name=%s, hostname=%s\", nodep, pmm->mvm_name, pmm->mvm_hostn);\n\t\t\t\t}\n\n\t\t\t\t/* for the allocated resc, add to hnodent resc_limit */\n\t\t\t\t/* which is used for limit enforcement while running */\n\n\t\t\t\tfor (j = 0; j < enelma; ++j) {\n\t\t\t\t\tif (strcmp(enkv[j].kv_keyw, \"ncpus\") == 0) {\n\t\t\t\t\t\tvnncpus = atoi(enkv[j].kv_val);\n\t\t\t\t\t\thave.rl_ncpus += vnncpus;\n\t\t\t\t\t\thp->hn_nrlimit.rl_ncpus += vnncpus;\n\t\t\t\t\t} else if (strcmp(enkv[j].kv_keyw, \"mem\") == 0) {\n\t\t\t\t\t\tndmem = to_kbsize(enkv[j].kv_val);\n\t\t\t\t\t\thave.rl_mem += ndmem;\n\t\t\t\t\t\thp->hn_nrlimit.rl_mem += ndmem;\n\t\t\t\t\t} else if (strcmp(enkv[j].kv_keyw, \"vmem\") == 0) {\n\t\t\t\t\t\tsz = to_kbsize(enkv[j].kv_val);\n\t\t\t\t\t\thave.rl_vmem += sz;\n\t\t\t\t\t\thp->hn_nrlimit.rl_vmem += sz;\n\t\t\t\t\t} else if (strcmp(enkv[j].kv_keyw, \"ssinodes\") == 0) {\n\t\t\t\t\t\thp->hn_nrlimit.rl_ssi += atoi(enkv[j].kv_val);\n\t\t\t\t\t} else if (strcmp(enkv[j].kv_keyw, \"naccelerators\") == 0) {\n\t\t\t\t\t\tnaccels = atoi(enkv[j].kv_val);\n\t\t\t\t\t\thave.rl_naccels += naccels;\n\t\t\t\t\t\thp->hn_nrlimit.rl_naccels += naccels;\n\t\t\t\t\t} else if (strcmp(enkv[j].kv_keyw, \"accelerator_memory\") == 0) {\n\t\t\t\t\t\taccel_mem = to_kbsize(enkv[j].kv_val);\n\t\t\t\t\t\thave.rl_accel_mem += accel_mem;\n\t\t\t\t\t\thp->hn_nrlimit.rl_accel_mem += accel_mem;\n\t\t\t\t\t\tneed_accel = 1;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\t/* (1b)if this mom is me, add vnode to  host_vlist */\n\t\t\t\tif (hp->hn_node == pjob->ji_nodeid) {\n\t\t\t\t\thost_vlist_t *phv;\n\t\t\t\t\tphv = (host_vlist_t *) realloc(hp->hn_vlist,\n\t\t\t\t\t\t\t\t       (hp->hn_vlnum + 1) * sizeof(host_vlist_t));\n\t\t\t\t\tif (phv == NULL) {\n\t\t\t\t\t\treturn (PBSE_INTERNAL);\n\t\t\t\t\t}\n\t\t\t\t\thp->hn_vlist = phv;\n\t\t\t\t\tpbs_strncpy(phv[hp->hn_vlnum].hv_vname, nodep,\n\t\t\t\t\t\t    sizeof(phv[hp->hn_vlnum].hv_vname));\n\t\t\t\t\tphv[hp->hn_vlnum].hv_ncpus = 
vnncpus;\n\t\t\t\t\tphv[hp->hn_vlnum].hv_mem = ndmem;\n\t\t\t\t\thp->hn_vlnum++;\n\t\t\t\t}\n\n\t\t\t\tvmp = &pjob->ji_assn_vnodes[assn_index];\n\t\t\t\tvmp->vn_node = assn_index++;\n\t\t\t\tif (hp != NULL)\n\t\t\t\t\tvmp->vn_host = hp;\n\t\t\t\tif (pmm != NULL)\n\t\t\t\t\tvmp->vn_vname = strdup(pmm->mvm_name);\n\t\t\t\tif (vmp->vn_vname == NULL) {\n\t\t\t\t\tif (vmp->vn_hname != NULL) {\n\t\t\t\t\t\tfree(vmp->vn_hname);\n\t\t\t\t\t\tvmp->vn_hname = NULL;\n\t\t\t\t\t}\n\t\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t\t}\n\t\t\t\tvmp->vn_cpus = vnncpus;\n\t\t\t\tvmp->vn_mem = ndmem;\n\t\t\t\t/* mark next entry as the (current) end */\n\t\t\t\tpjob->ji_assn_vnodes[assn_index].vn_node = TM_ERROR_NODE;\n\n\t\t\t\tif (stop_on_paren == 0)\n\t\t\t\t\tbreak;\n\t\t\t\telse if (hpn < 0)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\t/* validate the pointers before using  - SAFER */\n\t\t\tassert((hp != NULL) && (pmm != NULL));\n\t\t\thp->hn_nprocs += numprocs;\n\n\t\t\t/* (2) setup the number of vmpiprocs entries based */\n\t\t\t/* on the number of procs, numprocs, in this chunk */\n\n\t\t\tfor (k = 0; k < numprocs; ++k) {\n\t\t\t\tvmp = &pjob->ji_vnods[procindex];\n\t\t\t\tvmp->vn_node = procindex++;\n\t\t\t\tvmp->vn_host = hp;\n\t\t\t\tif (pmm->mvm_hostn) {\n\t\t\t\t\t/* copy the true host name */\n\t\t\t\t\tvmp->vn_hname = strdup(pmm->mvm_hostn);\n\t\t\t\t\tif (vmp->vn_hname == NULL)\n\t\t\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t\t} else {\n\t\t\t\t\t/* set null and we will use the Mom name */\n\t\t\t\t\tvmp->vn_hname = NULL;\n\t\t\t\t}\n\t\t\t\tvmp->vn_vname = strdup(pmm->mvm_name);\n\t\t\t\tif (vmp->vn_vname == NULL) {\n\t\t\t\t\tif (vmp->vn_hname != NULL) {\n\t\t\t\t\t\tfree(vmp->vn_hname);\n\t\t\t\t\t\tvmp->vn_hname = NULL;\n\t\t\t\t\t}\n\t\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t\t}\n\t\t\t\tvmp->vn_cpus = have.rl_ncpus;\n\t\t\t\tvmp->vn_mem = have.rl_mem;\n\t\t\t\tvmp->vn_vmem = have.rl_vmem;\n\t\t\t\tvmp->vn_mpiprocs = numprocs;\n\t\t\t\tvmp->vn_threads = nthreads;\n\t\t\t\tvmp->vn_naccels = 
have.rl_naccels;\n\t\t\t\tvmp->vn_need_accel = need_accel;\n\t\t\t\tif (vmp->vn_need_accel || (vmp->vn_naccels > 0)) {\n\t\t\t\t\tif (accel_model) {\n\t\t\t\t\t\tvmp->vn_accel_model = strdup(accel_model);\n\t\t\t\t\t}\n\t\t\t\t\tvmp->vn_accel_mem = have.rl_accel_mem;\n\t\t\t\t}\n\n\t\t\t\t/* mark next entry as the (current) end */\n\t\t\t\tpjob->ji_vnods[procindex].vn_node = TM_ERROR_NODE;\n\t\t\t}\n\t\t}\n\n\t\t/* do next section of schedselect */\n\t\tpsubspec = parse_plus_spec_r(slast, &slast, &hpn);\n\t}\n\n\tpjob->ji_numnodes = momindex;\n\tpjob->ji_numvnod = procindex;\n\tpjob->ji_num_assn_vnodes = assn_index;\n\n\treturn (0);\n}\n\n/**\n * @brief\n * \twrapper function that calls job_nodes_inner with a NULL parameter\n * \tfor the \"mynodeid\" parameter\n *\n * @param[in] pjob - job pointer\n *\n * @return \tint\n * @retval\tPBSE_NONE (0) Success\n * @retval      PBSE_* on error.\n *\n */\n\nint\njob_nodes(struct job *pjob)\n{\n\treturn job_nodes_inner(pjob, NULL);\n}\n\n/**\n * @brief\n * \tstart_exec() - start execution of a job\n *\n * @param[in] pjob - job pointer\n *\n * @return\tVoid\n *\n */\nvoid\nstart_exec(job *pjob)\n{\n\teventent *ep = NULL;\n\tint i, nodenum;\n\tpbs_socklen_t len;\n\tint socks[2];\n\tstruct sockaddr_in saddr;\n\thnodent *np;\n\tpbs_list_head phead;\n\tmom_hook_input_t hook_input;\n\tmom_hook_output_t hook_output;\n\tint hook_errcode = 0;\n\tint hook_rc = 0;\n\tchar hook_msg[HOOK_MSG_SIZE];\n\thook *last_phook = NULL;\n\tunsigned int hook_fail_action = 0;\n\n\t/* make sure we have an open tpp stream back to the server */\n\tif (server_stream == -1)\n\t\tsend_hellosvr(server_stream);\n\n#if MOM_ALPS\n\t/* set ALPS reservation id to -1 to indicate there isn't one yet */\n\tpjob->ji_extended.ji_ext.ji_reservation = -1;\n#endif\n\n\tif (pjob->ji_mompost) { /* fail until activity is done */\n\t\tlog_joberr(-1, __func__, \"waiting for worktask completion\",\n\t\t\t   pjob->ji_qs.ji_jobid);\n\t\texec_bail(pjob, JOB_EXEC_RETRY, 
NULL);\n\t\treturn;\n\t}\n\n\t/*\n\t * Ensure we have a cookie for the job. The cookie consists of a\n\t * string of 32 hex characters plus a null terminator. The machine\n\t * architecture needs to be considered when populating the string\n\t * because random() and lrand48() return a long int.\n\t */\n\n\tif (!(is_jattr_set(pjob, JOB_ATR_Cookie))) {\n\t\tchar tt[33];\n\t\tint i;\n\n\t\tfor (i = 0; i < 33; i += sizeof(long)) {\n\t\t\tsnprintf(&tt[i], 33 - i, \"%.*lX\", (int) sizeof(long), (unsigned long) random());\n\t\t}\n\t\tset_jattr_str_slim(pjob, JOB_ATR_Cookie, tt, NULL);\n\t\tDBPRT((\"===== COOKIE %s\\n\", tt))\n\t}\n\n\tif ((i = job_nodes(pjob)) != 0) {\n\t\tsprintf(log_buffer, \"job_nodes failed with error %d\", i);\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_NOTICE,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\tnodes_free(pjob);\n\t\texec_bail(pjob, JOB_EXEC_RETRY, NULL);\n\t\treturn;\n\t}\n\tpjob->ji_nodeid = 0; /* I'm MS */\n\tnodenum = pjob->ji_numnodes;\n\n\tif (do_tolerate_node_failures(pjob))\n\t\treliable_job_node_add(&pjob->ji_node_list, mom_host);\n\n\tif (mock_run) {\n\t\tpjob->ji_ports[0] = -1;\n\t\tpjob->ji_ports[1] = -1;\n\t\tpjob->ji_stdout = -1;\n\t\tpjob->ji_stderr = -1;\n\n\t\tmock_run_finish_exec(pjob);\n\t\treturn;\n\t}\n\n\tif (nodenum > 1) {\n\t\tint nodemux = 0;\n\t\tint mtfd = -1;\n\t\tint com;\n\n\t\tpjob->ji_resources = (noderes *) calloc(nodenum - 1,\n\t\t\t\t\t\t\tsizeof(noderes));\n\t\tassert(pjob->ji_resources != NULL);\n\t\tpjob->ji_numrescs = nodenum - 1;\n\n\t\t/* pjob->ji_numrescs is the number of entries in pjob->ji_resources array,\n\t\t * which houses the resources obtained from the SISTER moms attached to the\n\t\t * job. 
So pjob->ji_resources[0] is actually the resources from sister mom #1,\n\t\t * pjob->ji_resources[1] is the resources from sister mom #2, and so on.\n\t\t * Correlating this to the pjob->ji_hosts array,\n\t\t * pjob->ji_hosts[0] refers to the MS entry which won't have an entry in the\n\t\t * pjob->ji_resources array since that is for sisters only.\n\t\t * pjob->ji_hosts[1] is sister #1 whose resources obtained for the job\n\t\t * is in pjob->ji_resources[0],\n\t\t * pjob->ji_hosts[2] is sister #2 whose resources obtained for the job is in\n\t\t * pjob->ji_resources[1], and so on.\n\t\t * This is why pjob->ji_numnodes = pjob->numrescs + 1.\n\t\t */\n\t\tCLEAR_HEAD(phead);\n\t\tfor (i = 0; i < (int) JOB_ATR_LAST; i++) {\n\t\t\t(void) (job_attr_def + i)->at_encode(get_jattr(pjob, i), &phead, (job_attr_def + i)->at_name, NULL, ATR_ENCODE_MOM, NULL);\n\t\t}\n\t\tattrl_fixlink(&phead);\n\t\t/*\n\t\t **\t\tOpen streams to the sisterhood.\n\t\t */\n\t\tif (pbs_conf.pbs_use_mcast == 1) {\n\t\t\t/* open the tpp mcast channel here */\n\t\t\tif ((mtfd = tpp_mcast_open()) == -1) {\n\t\t\t\tsprintf(log_buffer, \"mcast open failed\");\n\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\texec_bail(pjob, JOB_EXEC_FAIL1, NULL);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\n\t\tfor (i = 1; i < nodenum; i++) {\n\t\t\tnp = &pjob->ji_hosts[i];\n\n\t\t\tnp->hn_stream = tpp_open(np->hn_host, np->hn_port);\n\t\t\tif (np->hn_stream < 0) {\n\t\t\t\tsprintf(log_buffer, \"tpp_open failed on %s:%d\",\n\t\t\t\t\tnp->hn_host, np->hn_port);\n\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\texec_bail(pjob, JOB_EXEC_FAIL1, NULL);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tif (pbs_conf.pbs_use_mcast == 1) {\n\t\t\t\t/* add each of the tpp streams to the tpp mcast channel */\n\t\t\t\tif (tpp_mcast_add_strm(mtfd, np->hn_stream, FALSE) == -1) {\n\t\t\t\t\ttpp_close(np->hn_stream);\n\t\t\t\t\tnp->hn_stream = -1;\n\t\t\t\t\ttpp_mcast_close(mtfd);\n\t\t\t\t\tsprintf(log_buffer, \"mcast add 
failed\");\n\t\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\t\texec_bail(pjob, JOB_EXEC_FAIL1, NULL);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif (is_jattr_set(pjob, JOB_ATR_nodemux))\n\t\t\tnodemux = get_jattr_long(pjob, JOB_ATR_nodemux);\n\n\t\t/*\n\t\t **\t\tSend out a JOIN_JOB/RESTART message to all the MOM's in\n\t\t **\t\tthe sisterhood.\n\t\t */\n\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_CHKPT) ||\n\t\t    (pjob->ji_qs.ji_svrflags & JOB_SVFLG_ChkptMig)) {\n\n\t\t\t/*\n\t\t\t * NULL value passed to hook_input.vnl means to assign\n\t\t\t * vnode list using pjob->ji_host[].\n\t\t\t */\n\t\t\tmom_hook_input_init(&hook_input);\n\t\t\thook_input.pjob = pjob;\n\n\t\t\tmom_hook_output_init(&hook_output);\n\t\t\thook_output.reject_errcode = &hook_errcode;\n\t\t\thook_output.last_phook = &last_phook;\n\t\t\thook_output.fail_action = &hook_fail_action;\n\n\t\t\tswitch ((hook_rc = mom_process_hooks(HOOK_EVENT_EXECJOB_BEGIN,\n\t\t\t\t\t\t\t     PBS_MOM_SERVICE_NAME, mom_host,\n\t\t\t\t\t\t\t     &hook_input, &hook_output,\n\t\t\t\t\t\t\t     hook_msg, sizeof(hook_msg), 1))) {\n\t\t\t\tcase 1: /* explicit accept */\n\t\t\t\t\tbreak;\n\t\t\t\tcase 2: /* no hook script executed - go ahead and accept event*/\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\t/* a value of '0' means explicit reject encountered. */\n\t\t\t\t\tif (hook_rc != 0) {\n\t\t\t\t\t\t/*\n\t\t\t\t\t\t * we've hit an internal error (malloc error, full disk, etc...), so\n\t\t\t\t\t\t * treat this now like a  hook error so hook fail_action will be\n\t\t\t\t\t\t * consulted. 
Before, behavior of an internal error was to ignore it!\n\t\t\t\t\t\t */\n\t\t\t\t\t\thook_errcode = PBSE_HOOKERROR;\n\t\t\t\t\t\tsend_hook_fail_action(last_phook);\n\t\t\t\t\t}\n\t\t\t\t\texec_bail(pjob, JOB_EXEC_FAIL1, NULL);\n\t\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tif ((i = job_setup(pjob, NULL)) != JOB_EXEC_OK) {\n\t\t\t\texec_bail(pjob, i, NULL);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\t/* new tasks can't talk to demux anymore */\n\t\t\tnodemux = 0;\n\n\t\t\tcom = IM_RESTART;\n\t\t\tpjob->ji_mompost = post_restart;\n\n\t\t\tif ((i = local_restart(pjob, NULL)) != 0) {\n\t\t\t\tpost_restart(pjob, i);\n\t\t\t\texec_bail(pjob, (i == PBSE_CKPBSY) ? JOB_EXEC_RETRY : JOB_EXEC_FAIL2, NULL);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else\n\t\t\tcom = IM_JOIN_JOB;\n\n\t\tif (nodemux) {\n\t\t\tpjob->ji_ports[0] = -1;\n\t\t\tpjob->ji_ports[1] = -1;\n\t\t\tpjob->ji_stdout = -1;\n\t\t\tpjob->ji_stderr = -1;\n\t\t} else {\n\t\t\t/*\n\t\t\t * Open two sockets for use by demux program later.\n\t\t\t */\n\t\t\tfor (i = 0; i < 2; i++)\n\t\t\t\tsocks[i] = -1;\n\t\t\tfor (i = 0; i < 2; i++) {\n\t\t\t\tif ((socks[i] = socket(AF_INET,\n\t\t\t\t\t\t       SOCK_STREAM, 0)) == -1)\n\t\t\t\t\tbreak;\n\n\t\t\t\tmemset(&saddr, '\\0', sizeof(saddr));\n\t\t\t\tsaddr.sin_addr.s_addr = INADDR_ANY;\n\t\t\t\tsaddr.sin_family = AF_INET;\n\t\t\t\tif (bind(socks[i], (struct sockaddr *) &saddr,\n\t\t\t\t\t sizeof(saddr)) == -1)\n\t\t\t\t\tbreak;\n\n\t\t\t\tlen = sizeof(saddr);\n\t\t\t\tif (getsockname(socks[i],\n\t\t\t\t\t\t(struct sockaddr *) &saddr,\n\t\t\t\t\t\t&len) == -1)\n\t\t\t\t\tbreak;\n\t\t\t\tpjob->ji_ports[i] = (int) ntohs(saddr.sin_port);\n\t\t\t}\n\t\t\tif (i < 2) {\n\t\t\t\tlog_err(errno, __func__, \"stdout/err socket\");\n\t\t\t\tfor (i = 0; i < 2; i++) {\n\t\t\t\t\tif (socks[i] != -1)\n\t\t\t\t\t\tclose(socks[i]);\n\t\t\t\t}\n\t\t\t\texec_bail(pjob, JOB_EXEC_FAIL1, NULL);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tpjob->ji_stdout = socks[0];\n\t\t\tpjob->ji_stderr = 
socks[1];\n\t\t\tpjob->ji_extended.ji_ext.ji_stdout = pjob->ji_ports[0];\n\t\t\tpjob->ji_extended.ji_ext.ji_stderr = pjob->ji_ports[1];\n\t\t}\n\n\t\tfor (i = 1; i < nodenum; i++) {\n\t\t\tnp = &pjob->ji_hosts[i];\n\n\t\t\tif (i == 1)\n\t\t\t\tep = event_alloc(pjob, com, -1, np,\n\t\t\t\t\t\t TM_NULL_EVENT, TM_NULL_TASK);\n\t\t\telse\n\t\t\t\tep = event_dup(ep, pjob, np);\n\n\t\t\tif (ep == NULL) {\n\t\t\t\texec_bail(pjob, JOB_EXEC_FAIL1, NULL);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tif (pbs_conf.pbs_use_mcast == 0)\n\t\t\t\tsend_join_job_restart(com, ep, i, pjob, &phead);\n\t\t}\n\t\tif (pbs_conf.pbs_use_mcast == 1) {\n\t\t\tsend_join_job_restart_mcast(mtfd, com, ep, i, pjob, &phead);\n\t\t\ttpp_mcast_close(mtfd);\n\t\t}\n\n\t\tfree_attrlist(&phead);\n\t\tif (do_tolerate_node_failures(pjob)) {\n\t\t\tif (!check_job_substate(pjob, JOB_SUBSTATE_WAITING_JOIN_JOB)) {\n\t\t\t\tset_job_substate(pjob, JOB_SUBSTATE_WAITING_JOIN_JOB);\n\t\t\t\tpjob->ji_joinalarm = time_now + joinjob_alarm_time;\n\t\t\t\tsprintf(log_buffer, \"job waiting up to %ld secs ($sister_join_job_alarm) for all sister moms to join\", joinjob_alarm_time);\n\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\tlog_buffer[0] = '\\0';\n\t\t\t}\n\t\t}\n\t} else { /* no sisters */\n\t\tpjob->ji_ports[0] = -1;\n\t\tpjob->ji_ports[1] = -1;\n\t\tpjob->ji_stdout = -1;\n\t\tpjob->ji_stderr = -1;\n\n\t\t/*\n\t\t * All the JOIN messages have come in.\n\t\t */\n\t\tswitch (pre_finish_exec(pjob, 0)) {\n\t\t\tcase PRE_FINISH_SUCCESS:\n\t\t\t\tfinish_exec(pjob);\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\texec_bail(pjob, JOB_EXEC_RETRY, NULL);\n\t\t\t\treturn;\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\tForks a child process, with the parent process returning the child\n *\tprocess id, while the child shuts down tpp, closes\n *\tnetwork descriptors, and turns off alarm.\n *\n * @param[in]\tconn\t- connection file descriptor to NOT close in the child.\n * @note\n * 
\tIf 'conn' is the impossible -1, then ALL connection descriptors will\n *\tbe closed.\n *\n * @return \tpid_t\n * @retval\tchild process id\tSuccess\n *\n */\n\npid_t\nfork_me(int conn)\n{\n\tstruct sigaction act;\n\tpid_t pid;\n\n\tfflush(stdout);\n\tfflush(stderr);\n\n\tpid = fork();\n\tif (pid == 0) {\n\t\t/* now the child */\n\n\t\t/* Turn off alarm if it should happen to be on */\n\t\talarm(0);\n\t\ttpp_terminate();\n\t\t(void) close(lockfds);\n\n\t\t/* Reset signal actions for most to SIG_DFL */\n\t\tsigemptyset(&act.sa_mask);\n\t\tact.sa_flags = 0;\n\t\tact.sa_handler = SIG_DFL;\n\t\t(void) sigaction(SIGCHLD, &act, NULL);\n\t\t(void) sigaction(SIGINT, &act, NULL);\n\t\t(void) sigaction(SIGTERM, &act, NULL);\n\t\tact.sa_handler = SIG_IGN;\n\t\t(void) sigaction(SIGHUP, &act, NULL);\n\n\t\t/* Reset signal mask */\n\t\t(void) sigprocmask(SIG_SETMASK, &act.sa_mask, NULL);\n\n\t\t(void) mom_close_poll();\n\t\tnet_close(conn); /* close all but for the current */\n\t} else if (pid < 0)\n\t\tlog_err(errno, __func__, \"fork failed\");\n\n\treturn (pid);\n}\n\n/**\n * @brief\n * \tstarter_return - return starter value,\n *\texit if negative\n *\n */\n\nvoid\nstarter_return(int upfds, int downfds, int code, struct startjob_rtn *sjrtn)\n{\n\tstruct startjob_rtn ack;\n\n\tsjrtn->sj_code = code;\n\t(void) writepipe(upfds, sjrtn, sizeof(*sjrtn));\n\n\t/* wait for acknowledgement */\n\t(void) readpipe(downfds, &ack, sizeof(ack));\n\t(void) close(upfds);\n\t(void) close(downfds);\n\tif (code < 0) {\n\t\texit(254);\n\t}\n}\n\n/**\n * @brief\tOpen a file as the user.\n *\n * @par\tPurpose\n *\tThis is done to prevent a security problem where in \"root\" opens a file\n *\tin a public directory and then changes ownership.   
If a symlink\n *\talready existed, we could have a problem.\n *\n * @param[in] path  - full path of file to open/create\n * @param[in] oflag - access mode, see oflag of open(2)\n * @param[in] mode  - file creation mode (permissions), see mode of open(2)\n * @param[in] exuid - effective uid of user as which to open/create file\n * @param[in] exgid - effective gid of user as which to open/create file\n *\n * @return int\n * @retval -1  - error on open/create or impersonating the user\n * @retval >=0 - the opened file descriptor\n *\n */\n\nint\nopen_file_as_user(char *path, int oflag, mode_t mode, uid_t exuid, gid_t exgid)\n{\n\tint fds;\n\tint open_errno = 0;\n\textern gid_t pbsgroup;\n\n\t/* must open or create file as the user */\n\n\tif (impersonate_user(exuid, exgid) == -1)\n\t\treturn -1;\n\n\tif ((fds = open(path, oflag, mode)) == -1)\n\t\topen_errno = errno;\n\n\trevert_from_user();\n\n\tif (open_errno)\n\t\terrno = open_errno;\n\n\treturn (fds);\n}\n\n/**\n * @brief\n *\tcreate_file_securely - create job's output/error file in a secure\n *\tmanner, but as the user.\n *\n * @par Functionality:\n *\tThis function is used when we need to create the job's output/error.\n *\tIn the spool directory, anyone can create a file there, so we need to\n *\topen in a manner that a hacker cannot replace the file with a link\n *\tand not open it to create it as a link may be there and we don't wish\n *\tto create the link's target.\n *\n *\tUses mkstemp() to create a file with a name that does not\n *\tcurrently exist, and then rename this file to the correct name for\n *\tthe job.  
Then set correct permissions based on user's umask for the\n *\tjob and set the O_APPEND flag so that qmsg can write without having\n *\tits text over written by the job.\n *\n * @param[in]\tpath  - path to the file to be created\n * @param[in]\texuid - uid for file owner\n * @param[in]\texgid - gid for file owner\n *\n * @return\tint\n * @retval\tfile descriptor (>=0) if success\n * @retval\t-1 on error.\n *\n * @par MT-safe: likely no\n *\n */\nstatic int\ncreate_file_securely(char *path, uid_t exuid, gid_t exgid)\n{\n\tchar buf[MAXPATHLEN + 1];\n\tchar *pc;\n\tmode_t cur_mask;\n\tint fds;\n\n\t/* create a uniquely named file using mkstemp() */\n\t/* for that we need to setup the template       */\n\n\tpbs_strncpy(buf, path, sizeof(buf));\n\tpc = strrchr(buf, '/'); /* last slash in path */\n\tif (pc == NULL)\n\t\treturn (-1);\n\t++pc;\n\tif ((pc - buf) > (MAXPATHLEN - 6))\n\t\treturn (-1);  /* path too long, unlikely to happen */\n\tstrcpy(pc, \"XXXXXX\"); /* mkstemp template */\n\n\t/* The user's umask has already been set.  Get it for later */\n\n\tcur_mask = umask(022);\n\t(void) umask(cur_mask); /* reset it back */\n\n\t/*\n\t * become the user, i.e. 
set effective privileges\n\t * IMPORTANT - once we have changed to the user privilege,\n\t * DO NOT return until have changed back to root\n\t */\n\n\tif (impersonate_user(exuid, exgid) == -1)\n\t\treturn -1;\n\n\tfds = mkstemp(buf); /* create file */\n\tif (fds != -1) {\n\n\t\t/* change permissions based on user's umask */\n\t\t/* because mkstemp() ignores umask; then    */\n\t\t/* rename to what we want for the job file  */\n\n\t\tif ((fchmod(fds, 0666 & ~cur_mask) == -1) ||\n\t\t    (rename(buf, path) == -1)) {\n\t\t\t(void) close(fds);\n\t\t\t(void) unlink(buf);\n\t\t\tfds = -1;\n\t\t} else {\n\t\t\tint acc;\n\n\t\t\t/* add O_APPEND to the file descriptor so that lines */\n\t\t\t/* written by qmsg are not overwritten by the job    */\n\n\t\t\tacc = fcntl(fds, F_GETFL);\n\t\t\tif (acc == -1) {\n\t\t\t\t(void) close(fds);\n\t\t\t\t(void) unlink(path);\n\t\t\t\tfds = -1;\n\t\t\t}\n\t\t\tacc = (acc & ~O_ACCMODE) | O_APPEND;\n\t\t\tif (fcntl(fds, F_SETFL, acc) == -1) {\n\t\t\t\t(void) close(fds);\n\t\t\t\t(void) unlink(path);\n\t\t\t\tfds = -1;\n\t\t\t}\n\t\t}\n\t}\n\n\t/* return to having full administrative (root) privileges */\n\trevert_from_user();\n\n\t/* now we can return */\n\treturn (fds);\n}\n\n/**\n * @brief\n *\tGenerate the fully qualified path name for a job's standard stream\n *\n * @par Functionality:\n *\tCreates a fully qualified path name for the output/error file for the following cases:\n *\t1.  Interactive PBS job, just return the output attribute value,  it won't be used.\n *\t2.  qsub -k option was specified for the file, retain in User's Home directory unless\n *\t    sandbox=PRIVATE, in which case it goes there.  The file name is the default of\n *\t    job_name.a|eo<sequence number>\n *\t3.  If sandbox=PRIVATE, the file is placed there.\n *\t4.  
If direct_write is specified and the final destination of the file is mapped to a local directory by $usecp,\n *\t    create the file in its final destination directory and set the \"keeping\" flag so it will not be staged.\n *\t5.  Else, the file path is created to put the file in PBS_HOME/spool\n * @param[in]  pjob - pointer to job structure\n * @param[in]  which - identifies which file: StdOut, StdErr, or Chkpt.\n * @param[out] keeping - set true if file to reside in User's Home or sandbox, false if in spool.\n *\n * @return char * - pointer to path which is in a static array.\n *\n * @par MT-safe: No\n */\n\nchar *\nstd_file_name(job *pjob, enum job_file which, int *keeping)\n{\n\tstatic char path[MAXPATHLEN + 1];\n\tchar key = '\\001'; /* should never be found */\n\tint len;\n\tchar *pd;\n\tchar *suffix = NULL;\n\n\tif (is_jattr_set(pjob, JOB_ATR_interactive) && (get_jattr_long(pjob, JOB_ATR_interactive) > 0)) {\n\n\t\t/* interactive job, name of pty is in outpath */\n\n\t\t*keeping = 0;\n\t\treturn (get_jattr_str(pjob, JOB_ATR_outpath));\n\t}\n\n\tswitch (which) {\n\t\tcase StdOut:\n\t\t\tkey = 'o';\n\t\t\tsuffix = JOB_STDOUT_SUFFIX;\n\t\t\tbreak;\n\n\t\tcase StdErr:\n\t\t\tkey = 'e';\n\t\t\tsuffix = JOB_STDERR_SUFFIX;\n\t\t\tbreak;\n\n\t\tcase Chkpt:\n\t\t\tsuffix = JOB_CKPT_SUFFIX;\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\tbreak;\n\t}\n\n\tif (pjob->ji_grpcache == NULL)\n\t\treturn (\"\"); /* needs to be non-NULL for figuring out homedir path; otherwise, mom will crash! 
*/\n\n\t/* check if file is to be directly written to its final destination */\n\tif (is_direct_write(pjob, which, path, &direct_write_possible)) {\n\t\t*keeping = 1; /* inhibit staging */\n\t\treturn (path);\n\t}\n\n\t/* Is file to be kept?, if so use default name in Home directory */\n\telse if ((is_jattr_set(pjob, JOB_ATR_keep)) &&\n\t\t strchr(get_jattr_str(pjob, JOB_ATR_keep), key) && !strchr(get_jattr_str(pjob, JOB_ATR_keep), 'd')) {\n\t\t/* sandbox=private mode set the path to be the path to the */\n\t\t/* staging and execution directory\t\t\t   */\n\t\tif ((is_jattr_set(pjob, JOB_ATR_sandbox)) && (strcasecmp(get_jattr_str(pjob, JOB_ATR_sandbox), \"PRIVATE\") == 0)) {\n\t\t\tstrcpy(path, jobdirname(pjob->ji_qs.ji_jobid, pjob->ji_grpcache->gc_homedir));\n\t\t\t*keeping = 1;\n\t\t} else\n\t\t\tstrcpy(path, pjob->ji_grpcache->gc_homedir);\n\n\t\tpd = strrchr(get_jattr_str(pjob, JOB_ATR_jobname), '/');\n\t\tif (pd == NULL) {\n\t\t\tpd = get_jattr_str(pjob, JOB_ATR_jobname);\n\t\t\tstrcat(path, \"/\");\n\t\t}\n\n\t\tstrcat(path, pd); /* start with the job name */\n\t\tlen = strlen(path);\n\t\t*(path + len++) = '.';\t   /* the dot        */\n\t\t*(path + len++) = key;\t   /* the letter     */\n\t\tpd = pjob->ji_qs.ji_jobid; /* the seq_number */\n\t\twhile (isdigit((int) *pd))\n\t\t\t*(path + len++) = *pd++;\n\t\t*(path + len) = '\\0';\n\t\tif (is_jattr_set(pjob, JOB_ATR_array_index)) {\n\t\t\t/* this is a sub job of an Array Job, append .index */\n\t\t\tstrcat(path, \".\");\n\t\t\tstrcat(path, get_jattr_str(pjob, JOB_ATR_array_index));\n\t\t}\n\t\t*keeping = 1;\n\t} else {\n\t\t/* put into spool directory unless NO_SPOOL_OUTPUT is defined */\n\n#ifdef NO_SPOOL_OUTPUT\n\t\t/* sandbox=PRIVATE mode puts output in job staging and execution directory */\n\t\tif (is_jattr_set(pjob, JOB_ATR_sandbox) &&\n\t\t    (strcasecmp(get_jattr_str(pjob, JOB_ATR_sandbox), \"PRIVATE\") == 0)) {\n\t\t\tstrcpy(path, jobdirname(pjob->ji_qs.ji_jobid, 
pjob->ji_grpcache->gc_homedir));\n\t\t\tstrcat(path, \"/\");\n\t\t} else /* force all output to user's HOME */\n\t\t\tstrcpy(path, pjob->ji_grpcache->gc_homedir);\n\n\t\tstrcat(path, \"/\");\n\t\t*keeping = 1;\n#else  /* NO_SPOOL_OUTPUT */\n\t\t/* sandbox=PRIVATE mode puts output in job staging and execution directory */\n\t\tif (is_jattr_set(pjob, JOB_ATR_sandbox) &&\n\t\t    (strcasecmp(get_jattr_str(pjob, JOB_ATR_sandbox), \"PRIVATE\") == 0)) {\n\t\t\tstrcpy(path, jobdirname(pjob->ji_qs.ji_jobid, pjob->ji_grpcache->gc_homedir));\n\t\t\tstrcat(path, \"/\");\n\t\t\t*keeping = 1;\n\t\t} else {\n\t\t\tstrcpy(path, path_spool);\n\t\t\t*keeping = 0;\n\t\t}\n#endif /* NO_SPOOL_OUTPUT */\n\t\tif (*pjob->ji_qs.ji_fileprefix != '\\0')\n\t\t\t(void) strcat(path, pjob->ji_qs.ji_fileprefix);\n\t\telse\n\t\t\t(void) strcat(path, pjob->ji_qs.ji_jobid);\n\t\tif (suffix)\n\t\t\t(void) strcat(path, suffix);\n\t}\n\treturn (path);\n}\n\n/**\n * @brief\n *\tOpen (create) either standard output or standard error for the job.\n * @par\n *\tOpen, likely creating, the job file in a secure manner.\n *\tIf the job is interactive, connecting to a pseudo terminal, or the file is being opened\n *\tin the User's Home or sandbox,  it is opened as the user.\n *\tIn spool, it is a bit more complex; must make sure an attacker cannot try and slip in a\n *\tsymbolic link which would cause the file to overwrite something else.\n *\n * @param[in] pjob  - pointer to job structure\n * @param[in] which - which file to create, StdOut, StdErr, or Chkpt\n * @param[in] mode  - file open oflag (O_CREAT, O_WRONLY, ...)\n * @param[in] exgid - User's gid\n *\n * @return \tint\n * @retval\tfd\tOn success\n * @retval\t-1\ton failure\n *\n */\n\nint\nopen_std_file(job *pjob, enum job_file which, int mode, gid_t exgid)\n{\n\tuid_t exuid;\n\tint fds;\n\tint keeping = 0;\n\tchar *path;\n\tstruct stat sb;\n\n\tif (!pjob)\n\t\treturn (-1);\n\tif (!pjob->ji_grpcache)\n\t\treturn (-1);\n\texuid = 
pjob->ji_grpcache->gc_uid;\n\tpath = std_file_name(pjob, which, &keeping);\n\n\t/* must open or create file as the user */\n\n\t/*\n\t * If the job is interactive, the tty device file is \"safe\" in a\n\t * protected directory,  otherwise check others for security.\n\t */\n\n\tif (is_jattr_set(pjob, JOB_ATR_interactive) != 0 && get_jattr_long(pjob, JOB_ATR_interactive) > 0)\n\t\tfds = open_file_as_user(path, mode, 0644, exuid, exgid);\n\telse if (keeping) {\n\t\t/* The user is keeping the file in his Home directory or sandbox, */\n\t\t/* both are safe and the file can be opened directly.             */\n\t\tfds = open_file_as_user(path, mode, 0644, exuid, exgid);\n\t} else {\n\n\t\t/* File going into the spool area... */\n\n\t\tint lrc;\n\n\t\t/*\n\t\t * If the file does not already exist (the typical case for a\n\t\t * job running the first time) or exists, but is a regular file\n\t\t * owned by the right user (if job has run before), then we\n\t\t * will open it.  BUT, we need to recheck because there is a\n\t\t * (very) small window in which it could be changed.\n\t\t *\n\t\t * If the file exists, but isn't a regular file owned by the\n\t\t * right user, there is a problem.  
In this case, (see \"else\")\n\t\t * we don't want to open it as this could create a file as the\n\t\t * target of a link,  we make it using a more secure manner\n\t\t * using mkstemp().\n\t\t */\n\t\tif (((lrc = lstat(path, &sb)) != -1) && ((sb.st_mode & S_IFMT) == S_IFREG) && (sb.st_nlink == 1) && (sb.st_uid == exuid) && (sb.st_gid == exgid)) {\n\n\t\t\t/* at this point all is ok, go open it */\n\t\t\tfds = open_file_as_user(path, mode, 0644, exuid, exgid);\n\t\t\tif (fds == -1)\n\t\t\t\treturn (-1);\n\n\t\t\t/* Recheck what is opened, it might have  */\n\t\t\t/* changed between the check and the open */\n\n\t\t\tif ((fstat(fds, &sb) == -1) || ((sb.st_mode & S_IFMT) != S_IFREG) || (sb.st_nlink != 1) || (sb.st_uid != exuid) || (sb.st_gid != exgid)) {\n\n\t\t\t\t/* Its bad now,  log it, leaving */\n\t\t\t\t/* it in place as evidence       */\n\t\t\t\tif (fds != -1)\n\t\t\t\t\tclose(fds);\n\n\t\t\t\tlog_suspect_file(__func__, \"bad type or owner\", path, &sb);\n\t\t\t\treturn (-1);\n\t\t\t}\n\t\t} else {\n\n\t\t\tif ((lrc != -1) || (errno != ENOENT)) {\n\t\t\t\t/* file exists but is suspect */\n\t\t\t\tlog_suspect_file(__func__,\n\t\t\t\t\t\t \"bad type or owner, attempting to remove file\", path, &sb);\n\t\t\t\t(void) unlink(path);\n\t\t\t}\n\n\t\t\t/* file does not exist or is not correct */\n\t\t\t/* create the file in a secure manner    */\n\n\t\t\tif ((fds = create_file_securely(path, exuid, exgid)) == -1) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"secure create of file failed for job %s for user %u\",\n\t\t\t\t\tpjob->ji_qs.ji_jobid, exuid);\n\t\t\t\tif (stat(path, &sb) != -1) {\n\t\t\t\t\tstrcat(log_buffer, \", file exists\");\n\t\t\t\t\tlog_suspect_file(__func__, log_buffer, path, &sb);\n\t\t\t\t} else {\n\t\t\t\t\tlog_record(PBSEVENT_SECURITY, PBS_EVENTCLASS_FILE, LOG_CRIT,\n\t\t\t\t\t\t   path, log_buffer);\n\t\t\t\t}\n\t\t\t\treturn (-1);\n\t\t\t}\n\t\t}\n\t}\n\n\treturn (fds);\n}\n\n/**\n * @brief\n *\tcatchinter = catch death of writer child of 
interactive job\n *\tand kill off the shell child.\n *\n * @param[in] sig - signal number\n *\n * @return\tVoid\n *\n */\n\nstatic void\ncatchinter(int sig)\n{\n\tint status;\n\tpid_t pid;\n\n\tpid = waitpid(-1, &status, WNOHANG);\n\tif (pid == 0)\n\t\treturn;\n\tif (pid == writerpid) {\n\t\tkill(shellpid, SIGKILL);\n#if defined(HAVE_LIBKAFS) || defined(HAVE_LIBKOPENAFS)\n\t\twaitpid(shellpid, &status, WNOHANG);\n#else\n\t\t(void) wait(&status);\n#endif\n\t\tmom_reader_go = 0;\n\t\tx11_reader_go = 0;\n\t}\n}\n/**\n * @brief\n *\tlog_mom_portfw_msg - used to log a port forwarding error message to\n *\tMOM logs\n *\n * @param[in]\tmsg -  pointer to error message\n *\n * @return\tNone\n *\n */\nvoid\nlog_mom_portfw_msg(char *msg)\n{\n\tstrcpy(log_buffer, msg);\n\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR, __func__, log_buffer);\n}\n"
  },
  {
    "path": "src/resmom/vnode_storage.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include \"pbs_config.h\"\n\n#include <stdlib.h>\n#include <string.h>\n#include <stdio.h>\n#include <errno.h>\n#include <assert.h>\n#include \"libpbs.h\"\n#include \"log.h\"\n#include \"server_limits.h\"\n#include \"attribute.h\"\n#include \"placementsets.h\"\n#include \"resource.h\"\n#include \"pbs_nodes.h\"\n\n#ifdef DEBUG\nextern void mom_CPUs_report();\n#endif /* DEBUG */\n\n/**\n * @file\n */\n/**\n * @brief\n *\tcreates vnode map\n *\n * @param[in] ctxp - pointer to pointer to vnodes\n *\n * @return \tint\n * @retval \t1\tSuccess\n * @retval \t0 \tFailure\n *\n */\n\nint\ncreate_vmap(void **ctxp)\n{\n\tassert(ctxp != NULL);\n\n\tif (*ctxp == NULL) {\n\t\tif ((*ctxp = pbs_idx_create(0, 0)) == NULL) {\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER, LOG_ERR, __func__, \"Failed to create vnode map\");\n\t\t\treturn 0;\n\t\t}\n\t}\n\n\treturn (1);\n}\n\n/**\n * @brief\n *\tdestroys the vnode map\n *\n * @param[in] ctx - char pointer to node\n *\n * @return Void\n *\n */\nvoid\ndestroy_vmap(void *ctx)\n{\n\tassert(ctx != NULL);\n\tpbs_idx_destroy(ctx);\n\tctx = NULL;\n}\n\n/**\n * @brief\n *\tfinds vnode map entry by vnode id\n *\n * @param[in] ctx - char pointer to vnode\n * @param[in] vnid - vnode id\n *\n * @return \tstructure handle\n * @retval  \tpointer to mominfo_t structure on success else NULL\n *\n */\nmominfo_t 
*\nfind_vmapent_byID(void *ctx, const char *vnid)\n{\n\tmominfo_t *p;\n\n\tif (pbs_idx_find(ctx, (void **) &vnid, (void **) &p, NULL) == PBS_IDX_RET_OK)\n\t\treturn p;\n\treturn NULL;\n}\n\n/**\n * @brief\n *\tadds vnode to vnode map by vnode id.\n *\n * @param[in] ctx - char pointer to vnode\n * @param[in] vnid - vnode id\n * @param[in] data - information about vnode\n *\n * @return \tint\n * @retval \t0\tSuccess\n * @retval \t1\tFailure\n *\n */\nint\nadd_vmapent_byID(void *ctx, const char *vnid, void *data)\n{\n\tif (pbs_idx_insert(ctx, (void *) vnid, data) != PBS_IDX_RET_OK) {\n\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER, LOG_DEBUG, __func__, \"Failed to add vnode %s in vnodemap\", vnid);\n\t\treturn 1;\n\t}\n\treturn 0;\n}\n"
  },
  {
    "path": "src/scheduler/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nlib_LIBRARIES = libpbs_sched.a\nnoinst_LIBRARIES = libschedharness.a\n\ncommon_cflags = \\\n\t-I$(top_srcdir)/src/include \\\n\t@libz_inc@ \\\n\t-pthread \\\n\t@PYTHON_INCLUDES@ \\\n\t@KRB5_CFLAGS@\n\ncommon_libs = \\\n\t$(builddir)/libpbs_sched.a \\\n\t$(builddir)/libschedharness.a \\\n\t$(top_builddir)/src/lib/Libpbs/libpbs.la \\\n\t$(top_builddir)/src/lib/Libutil/libutil.a \\\n\t$(top_builddir)/src/lib/Liblog/liblog.a \\\n\t$(top_builddir)/src/lib/Libnet/libnet.a \\\n\t$(top_builddir)/src/lib/Libsec/libsec.a \\\n\t@KRB5_LIBS@ \\\n\t@PYTHON_LDFLAGS@ \\\n\t@PYTHON_LIBS@ \\\n\t@libz_lib@ \\\n\t@libical_lib@\n\nlibpbs_sched_a_CPPFLAGS = ${common_cflags}\n\nlibschedharness_a_CPPFLAGS = ${common_cflags}\nlibschedharness_a_SOURCES = \\\n\tpbs_sched_utils.cpp\n\nlibpbs_sched_a_SOURCES = \\\n\t$(top_builddir)/src/lib/Libpython/shared_python_utils.c \\\n\tbuckets.cpp \\\n\tbuckets.h \\\n\tcheck.cpp \\\n\tcheck.h \\\n\tconfig.h \\\n\tconstant.h \\\n\tdata_types.h \\\n\tdedtime.cpp \\\n\tdedtime.h \\\n\tfairshare.cpp \\\n\tfairshare.h \\\n\tfifo.cpp \\\n\tfifo.h \\\n\tget_4byte.cpp \\\n\tglobals.cpp \\\n\tglobals.h \\\n\tjob_info.cpp \\\n\tjob_info.h \\\n\tlimits.cpp \\\n\tlimits_if.h \\\n\tmisc.cpp \\\n\tmisc.h \\\n\tmulti_threading.cpp \\\n\tmulti_threading.h \\\n\tnode_info.cpp \\\n\tnode_info.h \\\n\tnode_partition.cpp \\\n\tnode_partition.h \\\n\tparse.cpp 
\\\n\tparse.h \\\n\tpbs_bitmap.cpp \\\n\tpbs_bitmap.h \\\n\tprev_job_info.cpp \\\n\tprev_job_info.h \\\n\tprime.cpp \\\n\tprime.h \\\n\tqueue.cpp \\\n\tqueue.h \\\n\tqueue_info.cpp \\\n\tqueue_info.h \\\n\tresource.cpp \\\n\tresource.h \\\n\tresource_resv.cpp \\\n\tresource_resv.h \\\n\tresv_info.cpp \\\n\tresv_info.h \\\n\tsched_exception.cpp \\\n\tsched_ifl_wrappers.cpp \\\n\tserver_info.cpp \\\n\tserver_info.h \\\n\tsimulate.cpp \\\n\tsimulate.h \\\n\tsort.cpp \\\n\tsort.h \\\n\tstate_count.cpp \\\n\tstate_count.h \\\n\tsite_code.cpp \\\n\tsite_code.h \\\n\tsite_data.h\n\nsbin_PROGRAMS = pbs_sched pbsfs\nnoinst_PROGRAMS = pbs_sched_bare\n\npbs_sched_CPPFLAGS = ${common_cflags}\npbs_sched_LDADD = ${common_libs}\npbs_sched_SOURCES = pbs_sched.cpp\n\npbs_sched_bare_CPPFLAGS = ${common_cflags}\npbs_sched_bare_LDADD = ${common_libs}\npbs_sched_bare_SOURCES = pbs_sched_bare.cpp\n\npbsfs_CPPFLAGS = ${common_cflags}\npbsfs_LDADD = ${common_libs}\npbsfs_SOURCES = pbsfs.cpp\n\ndist_sysconf_DATA = \\\n\tpbs_dedicated \\\n\tpbs_holidays \\\n\tpbs_holidays.2017 \\\n\tpbs_resource_group \\\n\tpbs_sched_config\n"
  },
  {
    "path": "src/scheduler/buckets.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <errno.h>\n#include \"data_types.h\"\n#include \"pbs_bitmap.h\"\n#include \"node_info.h\"\n#include \"server_info.h\"\n#include \"buckets.h\"\n#include \"globals.h\"\n#include \"resource.h\"\n#include \"resource_resv.h\"\n#include \"simulate.h\"\n#include \"misc.h\"\n#include \"sort.h\"\n#include \"node_partition.h\"\n#include \"check.h\"\n#include <log.h>\n#include \"pbs_internal.h\"\n\n/* bucket_bitpool constructor */\nbucket_bitpool *\nnew_bucket_bitpool()\n{\n\tbucket_bitpool *bp;\n\n\tbp = static_cast<bucket_bitpool *>(calloc(1, sizeof(bucket_bitpool)));\n\tif (bp == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\tbp->truth = pbs_bitmap_alloc(NULL, 1);\n\tif (bp->truth == NULL) {\n\t\tfree_bucket_bitpool(bp);\n\t\treturn NULL;\n\t}\n\tbp->truth_ct = 0;\n\n\tbp->working = pbs_bitmap_alloc(NULL, 1);\n\tif (bp->working == NULL) {\n\t\tfree_bucket_bitpool(bp);\n\t\treturn NULL;\n\t}\n\tbp->working_ct = 0;\n\n\treturn bp;\n}\n\n/* bucket_bitpool destructor */\nvoid\nfree_bucket_bitpool(bucket_bitpool *bp)\n{\n\tif (bp == NULL)\n\t\treturn;\n\n\tpbs_bitmap_free(bp->truth);\n\tpbs_bitmap_free(bp->working);\n\n\tfree(bp);\n}\n\n/* bucket_bitpool copy constructor */\nbucket_bitpool *\ndup_bucket_bitpool(bucket_bitpool 
*obp)\n{\n\tbucket_bitpool *nbp;\n\n\tnbp = new_bucket_bitpool();\n\tif (nbp == NULL)\n\t\treturn NULL;\n\n\tif (pbs_bitmap_assign(nbp->truth, obp->truth) == 0) {\n\t\tfree_bucket_bitpool(nbp);\n\t\treturn NULL;\n\t}\n\tnbp->truth_ct = obp->truth_ct;\n\n\tif (pbs_bitmap_assign(nbp->working, obp->working) == 0) {\n\t\tfree_bucket_bitpool(nbp);\n\t\treturn NULL;\n\t}\n\tnbp->working_ct = obp->working_ct;\n\n\treturn nbp;\n}\n\n/* node_bucket constructor */\nnode_bucket *\nnew_node_bucket(int new_pools)\n{\n\tnode_bucket *nb;\n\n\tnb = static_cast<node_bucket *>(calloc(1, sizeof(node_bucket)));\n\tif (nb == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\tif (new_pools) {\n\t\tnb->busy_pool = new_bucket_bitpool();\n\t\tif (nb->busy_pool == NULL) {\n\t\t\tfree_node_bucket(nb);\n\t\t\treturn NULL;\n\t\t}\n\n\t\tnb->busy_later_pool = new_bucket_bitpool();\n\t\tif (nb->busy_later_pool == NULL) {\n\t\t\tfree_node_bucket(nb);\n\t\t\treturn NULL;\n\t\t}\n\n\t\tnb->free_pool = new_bucket_bitpool();\n\t\tif (nb->free_pool == NULL) {\n\t\t\tfree_node_bucket(nb);\n\t\t\treturn NULL;\n\t\t}\n\t} else {\n\t\tnb->busy_pool = NULL;\n\t\tnb->busy_later_pool = NULL;\n\t\tnb->free_pool = NULL;\n\t}\n\tnb->bkt_nodes = pbs_bitmap_alloc(NULL, 1);\n\tif (nb->bkt_nodes == NULL) {\n\t\tfree_node_bucket(nb);\n\t\treturn NULL;\n\t}\n\n\tnb->res_spec = NULL;\n\tnb->queue = NULL;\n\tnb->priority = 0;\n\tnb->total = 0;\n\n\treturn nb;\n}\n\n/* node_bucket copy constructor */\nnode_bucket *\ndup_node_bucket(node_bucket *onb, server_info *nsinfo)\n{\n\tnode_bucket *nnb;\n\n\tnnb = new_node_bucket(0);\n\tif (nnb == NULL)\n\t\treturn NULL;\n\n\tnnb->busy_pool = dup_bucket_bitpool(onb->busy_pool);\n\tif (nnb->busy_pool == NULL) {\n\t\tfree_node_bucket(nnb);\n\t\treturn NULL;\n\t}\n\n\tnnb->busy_later_pool = dup_bucket_bitpool(onb->busy_later_pool);\n\tif (nnb->busy_later_pool == NULL) {\n\t\tfree_node_bucket(nnb);\n\t\treturn NULL;\n\t}\n\n\tnnb->free_pool = dup_bucket_bitpool(onb->free_pool);\n\tif 
(nnb->free_pool == NULL) {\n\t\tfree_node_bucket(nnb);\n\t\treturn NULL;\n\t}\n\n\tpbs_bitmap_assign(nnb->bkt_nodes, onb->bkt_nodes);\n\tnnb->res_spec = dup_resource_list(onb->res_spec);\n\tif (nnb->res_spec == NULL) {\n\t\tfree_node_bucket(nnb);\n\t\treturn NULL;\n\t}\n\n\tif (onb->queue != NULL)\n\t\tnnb->queue = find_queue_info(nsinfo->queues, onb->queue->name);\n\n\tif (onb->name != NULL)\n\t\tnnb->name = string_dup(onb->name);\n\tnnb->total = onb->total;\n\tnnb->priority = onb->priority;\n\n\treturn nnb;\n}\n\n/* node_bucket array copy constructor */\nnode_bucket **\ndup_node_bucket_array(node_bucket **old, server_info *nsinfo)\n{\n\tnode_bucket **nnb;\n\tint i;\n\tif (old == NULL)\n\t\treturn NULL;\n\n\tnnb = static_cast<node_bucket **>(malloc((count_array(old) + 1) * sizeof(node_bucket *)));\n\tif (nnb == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\tfor (i = 0; old[i] != NULL; i++) {\n\t\tnnb[i] = dup_node_bucket(old[i], nsinfo);\n\t\tif (nnb[i] == NULL) {\n\t\t\tfree_node_bucket_array(nnb);\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\tnnb[i] = NULL;\n\n\treturn nnb;\n}\n\n/* node_bucket destructor */\nvoid\nfree_node_bucket(node_bucket *nb)\n{\n\tif (nb == NULL)\n\t\treturn;\n\n\tfree_bucket_bitpool(nb->busy_pool);\n\tfree_bucket_bitpool(nb->busy_later_pool);\n\tfree_bucket_bitpool(nb->free_pool);\n\n\tfree_resource_list(nb->res_spec);\n\n\tpbs_bitmap_free(nb->bkt_nodes);\n\n\tfree(nb->name);\n\tfree(nb);\n}\n\n/* node bucket array destructor */\nvoid\nfree_node_bucket_array(node_bucket **buckets)\n{\n\tint i;\n\n\tif (buckets == NULL)\n\t\treturn;\n\n\tfor (i = 0; buckets[i] != NULL; i++)\n\t\tfree_node_bucket(buckets[i]);\n\n\tfree(buckets);\n}\n\n/* node_bucket_count constructor */\nnode_bucket_count *\nnew_node_bucket_count()\n{\n\tnode_bucket_count *nbc;\n\n\tnbc = static_cast<node_bucket_count *>(malloc(sizeof(node_bucket_count)));\n\tif (nbc == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn 
NULL;\n\t}\n\tnbc->bkt = NULL;\n\tnbc->chunk_count = 0;\n\n\treturn nbc;\n}\n\nvoid\nfree_node_bucket_count(node_bucket_count *nbc)\n{\n\tif (nbc == NULL)\n\t\treturn;\n\n\tfree(nbc);\n}\n\nvoid\nfree_node_bucket_count_array(node_bucket_count **nbc_array)\n{\n\tint i;\n\tif (nbc_array == NULL)\n\t\treturn;\n\n\tfor (i = 0; nbc_array[i] != NULL; i++)\n\t\tfree_node_bucket_count(nbc_array[i]);\n\n\tfree(nbc_array);\n}\n\n/**\n * @brief find the index into an array of node_buckets based on resources, queue, and priority\n * @param[in] buckets - the node_bucket array to search\n * @param[in] rl - the resource list of the node bucket\n * @param[in] qinfo - the queue of the node bucket\n * @param[in] priority - the priority of the node bucket\n * @return int\n * @retval index of array if found\n * @retval -1 if not found or on error\n */\nint\nfind_node_bucket_ind(node_bucket **buckets, schd_resource *rl, queue_info *qinfo, int priority)\n{\n\tint i;\n\tif (buckets == NULL || rl == NULL)\n\t\treturn -1;\n\n\tfor (i = 0; buckets[i] != NULL; i++) {\n\t\tif (buckets[i]->queue == qinfo && buckets[i]->priority == priority &&\n\t\t    compare_resource_avail_list(buckets[i]->res_spec, rl))\n\t\t\treturn i;\n\t}\n\treturn -1;\n}\n\n/**\n * @brief create a name for a node bucket based on resource names, priority, and queue\n *\n * @return char *\n * @retval name of bucket\n * @retval NULL - error\n */\nchar *\ncreate_node_bucket_name(status *policy, node_bucket *nb)\n{\n\tchar *name;\n\tint len;\n\n\tif (policy == NULL || nb == NULL)\n\t\treturn NULL;\n\n\tname = create_resource_signature(nb->res_spec, policy->resdef_to_check_no_hostvnode, ADD_ALL_BOOL);\n\tif (name == NULL)\n\t\treturn NULL;\n\n\tlen = strlen(name);\n\n\tif (nb->priority != 0) {\n\t\tchar buf[20];\n\t\tif (pbs_strcat(&name, &len, \":priority=\") == NULL) {\n\t\t\tfree(name);\n\t\t\treturn NULL;\n\t\t}\n\t\tsnprintf(buf, sizeof(buf), \"%d\", nb->priority);\n\t\tif (pbs_strcat(&name, &len, buf) == NULL) 
{\n\t\t\tfree(name);\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\tif (nb->queue != NULL) {\n\t\tif (pbs_strcat(&name, &len, \":queue=\") == NULL) {\n\t\t\tfree(name);\n\t\t\treturn NULL;\n\t\t}\n\t\tif (pbs_strcat(&name, &len, nb->queue->name.c_str()) == NULL) {\n\t\t\tfree(name);\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\treturn name;\n}\n\n/**\n * @brief create node buckets from an array of nodes\n * @param[in] policy - policy info\n * @param[in] nodes - the nodes to create buckets from\n * @param[in] queues - the queues the nodes may be associated with.  Must not be empty\n * @param[in] flags - flags to control creation of buckets\n * \t\t\t\t\t\tUPDATE_BUCKET_IND - update the bucket_ind member on the node_info structure\n * \t\t\t\t\t\tNO_PRINT_BUCKETS - do not print that a bucket has been created\n * @return node_bucket **\n * @retval array of node buckets\n * @retval NULL on error\n */\nnode_bucket **\ncreate_node_buckets(status *policy, node_info **nodes, std::vector<queue_info *> &queues, unsigned int flags)\n{\n\tint i;\n\tint j = 0;\n\tnode_bucket **buckets = NULL;\n\tnode_bucket **tmp;\n\tint node_ct;\n\n\tif (policy == NULL || nodes == NULL || queues.empty())\n\t\treturn NULL;\n\n\tnode_ct = count_array(nodes);\n\n\tbuckets = static_cast<node_bucket **>(calloc((node_ct + 1), sizeof(node_bucket *)));\n\tif (buckets == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\tfor (i = 0; i < node_ct; i++) {\n\t\tnode_bucket *nb = NULL;\n\t\tint bkt_ind;\n\t\tqueue_info *qinfo = NULL;\n\t\tint node_ind = nodes[i]->node_ind;\n\n\t\tif (nodes[i]->is_down || nodes[i]->is_offline || node_ind == -1 || nodes[i]->lic_lock == 0)\n\t\t\tcontinue;\n\n\t\tif (!nodes[i]->queue_name.empty())\n\t\t\tqinfo = find_queue_info(queues, nodes[i]->queue_name);\n\n\t\tbkt_ind = find_node_bucket_ind(buckets, nodes[i]->res, qinfo, nodes[i]->priority);\n\t\tif (flags & UPDATE_BUCKET_IND) {\n\t\t\tif (bkt_ind == -1)\n\t\t\t\tnodes[i]->bucket_ind = 
j;\n\t\t\telse\n\t\t\t\tnodes[i]->bucket_ind = bkt_ind;\n\t\t}\n\t\tif (bkt_ind != -1)\n\t\t\tnb = buckets[bkt_ind];\n\n\t\tif (nb == NULL) { /* no bucket found, need to add one*/\n\t\t\tschd_resource *cur_res;\n\t\t\tbuckets[j] = new_node_bucket(1);\n\n\t\t\tif (buckets[j] == NULL) {\n\t\t\t\tfree_node_bucket_array(buckets);\n\t\t\t\treturn NULL;\n\t\t\t}\n\n\t\t\tbuckets[j]->res_spec = dup_selective_resource_list(nodes[i]->res, policy->resdef_to_check_no_hostvnode,\n\t\t\t\t\t\t\t\t\t   (ADD_UNSET_BOOLS_FALSE | ADD_ALL_BOOL));\n\n\t\t\tif (buckets[j]->res_spec == NULL) {\n\t\t\t\tfree_node_bucket_array(buckets);\n\t\t\t\treturn NULL;\n\t\t\t}\n\n\t\t\tif (qinfo != NULL)\n\t\t\t\tbuckets[j]->queue = qinfo;\n\n\t\t\tbuckets[j]->priority = nodes[i]->priority;\n\n\t\t\tfor (cur_res = buckets[j]->res_spec; cur_res != NULL; cur_res = cur_res->next)\n\t\t\t\tif (cur_res->type.is_consumable)\n\t\t\t\t\tcur_res->assigned = 0;\n\n\t\t\tbuckets[j]->busy_later_pool->truth_ct = 0;\n\t\t\tbuckets[j]->free_pool->truth_ct = 0;\n\t\t\tbuckets[j]->busy_pool->truth_ct = 0;\n\n\t\t\tbuckets[j]->total = 0;\n\n\t\t\tbuckets[j]->name = create_node_bucket_name(policy, buckets[j]);\n\t\t\tif (buckets[j]->name == NULL) {\n\t\t\t\tfree_node_bucket_array(buckets);\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tif (!(flags & NO_PRINT_BUCKETS))\n\t\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_NODE, LOG_DEBUG, __func__, \"Created node bucket %s\", buckets[j]->name);\n\n\t\t\tnb = buckets[j];\n\t\t\tj++;\n\t\t}\n\t\tpbs_bitmap_bit_on(nb->bkt_nodes, node_ind);\n\t\tnb->total++;\n\t\tif (nodes[i]->is_free && nodes[i]->num_jobs == 0 && nodes[i]->num_run_resv == 0) {\n\t\t\tif (nodes[i]->node_events != NULL) {\n\t\t\t\tpbs_bitmap_bit_on(nb->busy_later_pool->truth, node_ind);\n\t\t\t\tnb->busy_later_pool->truth_ct++;\n\t\t\t} else {\n\t\t\t\tpbs_bitmap_bit_on(nb->free_pool->truth, node_ind);\n\t\t\t\tnb->free_pool->truth_ct++;\n\t\t\t}\n\t\t} else {\n\t\t\tpbs_bitmap_bit_on(nb->busy_pool->truth, 
node_ind);\n\t\t\tnb->busy_pool->truth_ct++;\n\t\t}\n\t}\n\n\tif (j == 0) {\n\t\tfree(buckets);\n\t\treturn NULL;\n\t}\n\n\ttmp = static_cast<node_bucket **>(realloc(buckets, (j + 1) * sizeof(node_bucket *)));\n\tif (tmp != NULL)\n\t\tbuckets = tmp;\n\telse {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\tfree_node_bucket_array(buckets);\n\t\treturn NULL;\n\t}\n\treturn buckets;\n}\n\n/* chunk_map constructor */\nchunk_map *\nnew_chunk_map()\n{\n\tchunk_map *cmap;\n\tcmap = static_cast<chunk_map *>(malloc(sizeof(chunk_map)));\n\tif (cmap == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\tcmap->chk = NULL;\n\tcmap->bkt_cnts = NULL;\n\tcmap->node_bits = pbs_bitmap_alloc(NULL, 1);\n\tif (cmap->node_bits == NULL) {\n\t\tfree_chunk_map(cmap);\n\t\treturn NULL;\n\t}\n\n\treturn cmap;\n}\n\n/* chunk_map destructor */\nvoid\nfree_chunk_map(chunk_map *cmap)\n{\n\tif (cmap == NULL)\n\t\treturn;\n\n\tfree_node_bucket_count_array(cmap->bkt_cnts);\n\tpbs_bitmap_free(cmap->node_bits);\n\tfree(cmap);\n}\n\n/* chunk_map array destructor */\nvoid\nfree_chunk_map_array(chunk_map **cmap_arr)\n{\n\tint i;\n\tif (cmap_arr == NULL)\n\t\treturn;\n\n\tfor (i = 0; cmap_arr[i] != NULL; i++)\n\t\tfree_chunk_map(cmap_arr[i]);\n\n\tfree(cmap_arr);\n}\n\n/**\n * @brief log a summary of a chunk_map array\n * @param[in] resresv - the job we are logging about\n * @param[in] cmap - the chunk_map to log\n *\n * @return nothing\n */\nvoid\nlog_chunk_map_array(resource_resv *resresv, chunk_map **cmap)\n{\n\tint i;\n\tint j;\n\n\tif (resresv == NULL || cmap == NULL)\n\t\treturn;\n\n\tfor (i = 0; cmap[i] != NULL; i++) {\n\t\tint total_chunks = 0;\n\n\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, resresv->name, \"Chunk: %s\", cmap[i]->chk->str_chunk);\n\n\t\tfor (j = 0; cmap[i]->bkt_cnts[j] != NULL; j++) {\n\t\t\tint chunk_count;\n\t\t\tnode_bucket_count *nbc = cmap[i]->bkt_cnts[j];\n\t\t\tchunk_count = (nbc->bkt->free_pool->truth_ct + 
nbc->bkt->busy_later_pool->truth_ct) * nbc->chunk_count;\n\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, resresv->name, \"Bucket %s can fit %d chunks\", nbc->bkt->name, chunk_count);\n\t\t\ttotal_chunks += chunk_count;\n\t\t}\n\t\tif (total_chunks < cmap[i]->chk->num_chunks)\n\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, resresv->name,\n\t\t\t\t   \"Found %d out of %d chunks needed\", total_chunks, cmap[i]->chk->num_chunks);\n\t}\n}\n\n/**\n * @brief set working buckets = truth buckets\n * @param[in,out] nb - node bucket to set\n */\nvoid\nset_working_bucket_to_truth(node_bucket *nb)\n{\n\tif (nb == NULL)\n\t\treturn;\n\tif (nb->busy_pool == NULL || nb->busy_later_pool == NULL || nb->free_pool == NULL)\n\t\treturn;\n\n\tpbs_bitmap_assign(nb->busy_pool->working, nb->busy_pool->truth);\n\tnb->busy_pool->working_ct = nb->busy_pool->truth_ct;\n\n\tpbs_bitmap_assign(nb->busy_later_pool->working, nb->busy_later_pool->truth);\n\tnb->busy_later_pool->working_ct = nb->busy_later_pool->truth_ct;\n\n\tpbs_bitmap_assign(nb->free_pool->working, nb->free_pool->truth);\n\tnb->free_pool->working_ct = nb->free_pool->truth_ct;\n}\n\n/**\n * @brief map job to nodes in buckets and allocate nodes to job\n * @param[in, out] cmap - mapping between chunks and buckets for the job\n * @param[in] resresv - the job\n * @param[out] err - error structure\n * @return int\n * @retval 1 - success\n * @retval 0 - failure\n */\nint\nbucket_match(chunk_map **cmap, resource_resv *resresv, schd_error *err)\n{\n\tint i;\n\tint j;\n\tint k;\n\tstatic pbs_bitmap *zeromap = NULL;\n\tserver_info *sinfo;\n\n\tif (cmap == NULL || resresv == NULL || resresv->select == NULL)\n\t\treturn 0;\n\n\tif (zeromap == NULL) {\n\t\tzeromap = pbs_bitmap_alloc(NULL, 1);\n\t\tif (zeromap == NULL)\n\t\t\treturn 0;\n\t}\n\n\tsinfo = resresv->server;\n\n\tfor (i = 0; cmap[i] != NULL; i++) {\n\t\tif (cmap[i]->bkt_cnts != NULL) {\n\t\t\tfor (j = 0; cmap[i]->bkt_cnts[j] != NULL; j++) 
{\n\t\t\t\tset_working_bucket_to_truth(cmap[i]->bkt_cnts[j]->bkt);\n\t\t\t\tpbs_bitmap_assign(cmap[i]->node_bits, zeromap);\n\t\t\t}\n\t\t}\n\t}\n\n\tfor (i = 0; cmap[i] != NULL; i++) {\n\t\tint num_chunks_needed = cmap[i]->chk->num_chunks;\n\n\t\tif (cmap[i]->bkt_cnts == NULL)\n\t\t\tbreak;\n\n\t\tfor (j = 0; cmap[i]->bkt_cnts[j] != NULL && num_chunks_needed > 0; j++) {\n\t\t\tnode_bucket *bkt = cmap[i]->bkt_cnts[j]->bkt;\n\t\t\tint chunks_added = 0;\n\n\t\t\tfor (k = pbs_bitmap_first_on_bit(bkt->busy_later_pool->working);\n\t\t\t     num_chunks_needed > chunks_added && k >= 0;\n\t\t\t     k = pbs_bitmap_next_on_bit(bkt->busy_later_pool->working, k)) {\n\t\t\t\tclear_schd_error(err);\n\t\t\t\tif (resresv->aoename != NULL) {\n\t\t\t\t\tif (sinfo->unordered_nodes[k]->current_aoe == NULL ||\n\t\t\t\t\t    strcmp(sinfo->unordered_nodes[k]->current_aoe, resresv->aoename) != 0)\n\t\t\t\t\t\tif (is_provisionable(sinfo->unordered_nodes[k], resresv, err) == NOT_PROVISIONABLE) {\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (node_can_fit_job_time(k, resresv)) {\n\t\t\t\t\tpbs_bitmap_bit_off(bkt->busy_later_pool->working, k);\n\t\t\t\t\tbkt->busy_later_pool->working_ct--;\n\t\t\t\t\tpbs_bitmap_bit_on(bkt->busy_pool->working, k);\n\t\t\t\t\tbkt->busy_pool->working_ct++;\n\t\t\t\t\tpbs_bitmap_bit_on(cmap[i]->node_bits, k);\n\t\t\t\t\tchunks_added += cmap[i]->bkt_cnts[j]->chunk_count;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tfor (k = pbs_bitmap_first_on_bit(bkt->free_pool->working);\n\t\t\t     num_chunks_needed > chunks_added && k >= 0;\n\t\t\t     k = pbs_bitmap_next_on_bit(bkt->free_pool->working, k)) {\n\t\t\t\tclear_schd_error(err);\n\t\t\t\tif (resresv->aoename != NULL) {\n\t\t\t\t\tif (sinfo->unordered_nodes[k]->current_aoe == NULL ||\n\t\t\t\t\t    strcmp(sinfo->unordered_nodes[k]->current_aoe, resresv->aoename) != 0)\n\t\t\t\t\t\tif (is_provisionable(sinfo->unordered_nodes[k], resresv, err) == NOT_PROVISIONABLE) 
{\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tpbs_bitmap_bit_off(bkt->free_pool->working, k);\n\t\t\t\tbkt->free_pool->working_ct--;\n\t\t\t\tpbs_bitmap_bit_on(bkt->busy_pool->working, k);\n\t\t\t\tbkt->busy_pool->working_ct++;\n\t\t\t\tpbs_bitmap_bit_on(cmap[i]->node_bits, k);\n\t\t\t\tchunks_added += cmap[i]->bkt_cnts[j]->chunk_count;\n\t\t\t}\n\n\t\t\tif (chunks_added > 0)\n\t\t\t\tnum_chunks_needed -= chunks_added;\n\t\t}\n\t\t/* Couldn't find buckets to satisfy all the chunks */\n\t\tif (num_chunks_needed > 0)\n\t\t\treturn 0;\n\t}\n\n\treturn 1;\n}\n\n/**\n * @brief Determine if a job can fit in time before a node becomes busy\n * @param[in] node_ind - index into sinfo->unordered_nodes of the node\n * @param[in] resresv - the job\n * @return yes/no\n * @retval 1 - yes\n * @retval 0 - no\n */\nint\nnode_can_fit_job_time(int node_ind, resource_resv *resresv)\n{\n\tte_list *tel;\n\ttime_t end;\n\tserver_info *sinfo;\n\n\tif (resresv == NULL)\n\t\treturn 0;\n\n\tsinfo = resresv->server;\n\tend = sinfo->server_time + calc_time_left(resresv, 0);\n\ttel = sinfo->unordered_nodes[node_ind]->node_events;\n\tif (tel != NULL && tel->event != NULL)\n\t\tif (tel->event->event_time < end)\n\t\t\treturn 0;\n\n\treturn 1;\n}\n\n/**\n * @brief convert a chunk into an nspec for a job on a node\n * @param policy - policy info\n * @param chk - the chunk\n * @param node - the node\n * @param aoename - the aoe requested by the job\n * @return nspec*\n * @retval the nspec\n * @retval NULL on error\n */\nnspec *\nchunk_to_nspec(status *policy, chunk *chk, node_info *node, char *aoename)\n{\n\tnspec *ns;\n\tresource_req *prev_req;\n\tresource_req *req;\n\tresource_req *cur_req;\n\n\tif (policy == NULL || chk == NULL || node == NULL)\n\t\treturn NULL;\n\n\tns = new nspec();\n\n\tns->ninfo = node;\n\tns->seq_num = get_sched_rank();\n\tns->end_of_chunk = 1;\n\tprev_req = NULL;\n\tif (aoename != NULL) {\n\t\tif (node->current_aoe == NULL || strcmp(aoename, node->current_aoe) != 0) 
{\n\t\t\tns->go_provision = 1;\n\t\t\treq = create_resource_req(\"aoe\", aoename);\n\t\t\tif (req == NULL) {\n\t\t\t\tdelete ns;\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tns->resreq = req;\n\t\t\tprev_req = req;\n\t\t}\n\t}\n\tfor (cur_req = chk->req; cur_req != NULL; cur_req = cur_req->next) {\n\t\tif (cur_req->def->type.is_consumable && policy->resdef_to_check.find(cur_req->def) != policy->resdef_to_check.end()) {\n\t\t\treq = dup_resource_req(cur_req);\n\t\t\tif (req == NULL) {\n\t\t\t\tdelete ns;\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tif (prev_req == NULL)\n\t\t\t\tns->resreq = req;\n\t\t\telse\n\t\t\t\tprev_req->next = req;\n\t\t\tprev_req = req;\n\t\t}\n\t}\n\n\treturn ns;\n}\n\n/**\n * @brief convert a chunk_map->node_bits into an nspec array\n * @param[in] policy - policy info\n * @param[in] cb_map - chunk_map->node_bits are the nodes to allocate\n * @param resresv - the job\n * @return vector<nspec*>\n * @retval nspec array to run the job on\n * @retval empty vector on error\n */\nstd::vector<nspec *>\nbucket_to_nspecs(status *policy, chunk_map **cb_map, resource_resv *resresv)\n{\n\tint i;\n\tint j;\n\tint k;\n\tint cnt = 1;\n\tint n = 0;\n\tstd::vector<nspec *> ns_arr;\n\tserver_info *sinfo;\n\n\tif (policy == NULL || cb_map == NULL || resresv == NULL)\n\t\treturn {};\n\n\tsinfo = resresv->server;\n\tns_arr.reserve(resresv->select->total_chunks);\n\n\tfor (i = 0; cb_map[i] != NULL; i++) {\n\t\tint chunks_needed = cb_map[i]->chk->num_chunks;\n\t\tfor (j = pbs_bitmap_first_on_bit(cb_map[i]->node_bits); j >= 0;\n\t\t     j = pbs_bitmap_next_on_bit(cb_map[i]->node_bits, j)) {\n\t\t\t/* Find the bucket the node is in */\n\t\t\tif (cb_map[i]->bkt_cnts != NULL) {\n\t\t\t\tfor (k = 0; cb_map[i]->bkt_cnts[k] != NULL; k++)\n\t\t\t\t\tif (pbs_bitmap_get_bit(cb_map[i]->bkt_cnts[k]->bkt->bkt_nodes, j)) {\n\t\t\t\t\t\tcnt = cb_map[i]->bkt_cnts[k]->chunk_count;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t} else {\n\t\t\t\t/* Error case(shouldn't happen): the bkt_cnts is NULL.  
Only assign one chunk.\n\t\t\t\t * This could cause us not to allocate enough chunks in free placement\n\t\t\t\t */\n\t\t\t\tcnt = 1;\n\t\t\t}\n\t\t\t/* Allocate the chunks.  For all but the final chunk, we need to allocate cnt chunks;\n\t\t\t * for the final chunk, we might allocate fewer.\n\t\t\t */\n\t\t\tfor (; cnt > 0 && chunks_needed > 0; cnt--, chunks_needed--, n++) {\n\t\t\t\tauto ns = chunk_to_nspec(policy, cb_map[i]->chk, sinfo->unordered_nodes[j], resresv->aoename);\n\t\t\t\tif (ns == NULL) {\n\t\t\t\t\tfree_nspecs(ns_arr);\n\t\t\t\t\treturn {};\n\t\t\t\t}\n\t\t\t\tns_arr.push_back(ns);\n\t\t\t}\n\t\t}\n\t}\n\n\treturn ns_arr;\n}\n\n/**\n * @brief decide if a job should use the node bucket algorithm\n * @param resresv - the job\n * @return int\n * @retval 1 if the job should use the bucket algorithm\n * @retval 0 if not\n */\nint\njob_should_use_buckets(resource_resv *resresv)\n{\n\tif (resresv == NULL)\n\t\treturn 0;\n\n\t/* nodes are bucketed, so they can't be sorted by unused */\n\tif (conf.node_sort_unused)\n\t\treturn 0;\n\n\t/* Bucket algorithm doesn't support avoid_provisioning */\n\tif (conf.provision_policy == AVOID_PROVISION)\n\t\treturn 0;\n\n\t/* qrun uses the standard path */\n\tif (resresv == resresv->server->qrun_job)\n\t\treturn 0;\n\n\t/* Jobs in reservations use the standard path */\n\tif (resresv->job != NULL) {\n\t\tif (resresv->job->resv != NULL)\n\t\t\treturn 0;\n\t}\n\n\t/* Only excl jobs use buckets */\n\tif (resresv->place_spec->share)\n\t\treturn 0;\n\tif (!resresv->place_spec->excl)\n\t\treturn 0;\n\n\t/* place=pack jobs do not use buckets */\n\tif (resresv->place_spec->pack)\n\t\treturn 0;\n\n\t/* multi-vnoded systems are incompatible */\n\tif (resresv->server->has_multi_vnode)\n\t\treturn 0;\n\n\t/* Jobs requesting specific hosts or vnodes use the standard path */\n\tconst auto &defs = resresv->select->defs;\n\tif (defs.find(allres[\"host\"]) != defs.end())\n\t\treturn 0;\n\tif (defs.find(allres[\"vnode\"]) != 
defs.end())\n\t\treturn 0;\n\t/* If a job has an execselect, it means it's requesting vnodes */\n\tif (resresv->execselect != NULL)\n\t\treturn 0;\n\n\treturn 1;\n}\n\n/*\n * @brief - create a mapping of chunks to the buckets they can run in.\n * \t    The mapping will be of the chunks to all the buckets that can satisfy them.\n * \t    This may be way more nodes than are required to run the job.\n * \t    If we can't find enough nodes in the buckets, we know we can never run.\n *\n * @param[in] policy - policy info\n * @param[in] buckets - buckets to check\n * @param[in] resresv - resresv to check\n * @param[out] err - error structure to return failure\n *\n * @return chunk map\n * @retval NULL - for the following reasons:\n *\t\t- if no buckets are found for one chunk\n *\t\t- if there aren't enough nodes in all buckets found for one chunk\n *\t\t- on malloc() failure\n */\nchunk_map **\nfind_correct_buckets(status *policy, node_bucket **buckets, resource_resv *resresv, schd_error *err)\n{\n\tint bucket_ct;\n\tint chunk_ct;\n\tint i, j;\n\tint can_run = 1;\n\tchunk_map **cb_map;\n\tstatic struct schd_error *failerr = NULL;\n\n\tif (policy == NULL || buckets == NULL || resresv == NULL || resresv->select == NULL || resresv->select->chunks == NULL || err == NULL)\n\t\treturn NULL;\n\n\tif (failerr == NULL) {\n\t\tfailerr = new_schd_error();\n\t\tif (failerr == NULL) {\n\t\t\tset_schd_error_codes(err, NOT_RUN, SCHD_ERROR);\n\t\t\treturn NULL;\n\t\t}\n\t} else\n\t\tclear_schd_error(failerr);\n\n\tbucket_ct = count_array(buckets);\n\tchunk_ct = count_array(resresv->select->chunks);\n\n\tcb_map = static_cast<chunk_map **>(calloc((chunk_ct + 1), sizeof(chunk_map *)));\n\tif (cb_map == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\tfor (i = 0; resresv->select->chunks[i] != NULL; i++) {\n\t\tint total = 0;\n\t\tint b = 0;\n\t\tcb_map[i] = new_chunk_map();\n\t\tif (cb_map[i] == NULL) {\n\t\t\tfree_chunk_map_array(cb_map);\n\t\t\treturn 
NULL;\n\t\t}\n\t\tcb_map[i]->chk = resresv->select->chunks[i];\n\t\tcb_map[i]->bkt_cnts = static_cast<node_bucket_count **>(calloc(bucket_ct + 1, sizeof(node_bucket_count *)));\n\t\tif (cb_map[i]->bkt_cnts == NULL) {\n\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\tfree_chunk_map_array(cb_map);\n\t\t\tset_schd_error_codes(err, NOT_RUN, SCHD_ERROR);\n\t\t\treturn NULL;\n\t\t}\n\t\tfor (j = 0; buckets[j] != NULL && can_run; j++) {\n\t\t\tqueue_info *qinfo = NULL;\n\n\t\t\tif (resresv->job != NULL && resresv->job->queue->nodes != NULL)\n\t\t\t\tqinfo = resresv->job->queue;\n\n\t\t\tif (buckets[j]->queue == qinfo) {\n\t\t\t\tint c;\n\t\t\t\tc = check_avail_resources(buckets[j]->res_spec, resresv->select->chunks[i]->req,\n\t\t\t\t\t\t\t  (CHECK_ALL_BOOLS | COMPARE_TOTAL | UNSET_RES_ZERO),\n\t\t\t\t\t\t\t  policy->resdef_to_check_no_hostvnode, INSUFFICIENT_RESOURCE, err);\n\t\t\t\tif (c > 0) {\n\t\t\t\t\tif (resresv->place_spec->scatter || resresv->place_spec->vscatter)\n\t\t\t\t\t\tc = 1;\n\n\t\t\t\t\tcb_map[i]->bkt_cnts[b] = new_node_bucket_count();\n\t\t\t\t\tif (cb_map[i]->bkt_cnts[b] == NULL) {\n\t\t\t\t\t\tfree_chunk_map_array(cb_map);\n\t\t\t\t\t\tset_schd_error_codes(err, NOT_RUN, SCHD_ERROR);\n\t\t\t\t\t\treturn NULL;\n\t\t\t\t\t}\n\t\t\t\t\tcb_map[i]->bkt_cnts[b]->bkt = buckets[j];\n\t\t\t\t\tcb_map[i]->bkt_cnts[b++]->chunk_count = c;\n\t\t\t\t\ttotal += buckets[j]->total * c;\n\t\t\t\t} else {\n\t\t\t\t\tif (failerr->status_code == SCHD_UNKWN)\n\t\t\t\t\t\tmove_schd_error(failerr, err);\n\t\t\t\t}\n\t\t\t\tclear_schd_error(err);\n\t\t\t}\n\t\t}\n\n\t\t/* No buckets match or not enough nodes in the buckets: the job can't run */\n\t\tif (b == 0 || total < cb_map[i]->chk->num_chunks)\n\t\t\tcan_run = 0;\n\t}\n\tcb_map[i] = NULL;\n\n\tlog_chunk_map_array(resresv, cb_map);\n\n\tif (can_run == 0) {\n\t\tif (err->status_code == SCHD_UNKWN && failerr->status_code != SCHD_UNKWN)\n\t\t\tmove_schd_error(err, failerr);\n\t\terr->status_code = 
NEVER_RUN;\n\t\tfree_chunk_map_array(cb_map);\n\t\treturn NULL;\n\t}\n\n\treturn cb_map;\n}\n\n/**\n * @brief entry point into the node bucket algorithm.  If placement sets are\n * \tin use, choose the right pool and call map_buckets() on each.  If placement\n * \tsets are not in use, just call map_buckets()\n * @param[in] policy - policy info\n * @param[in] sinfo - the server info universe\n * @param[in] qinfo - the queue the job is in\n * @param[in] resresv - the job\n * @param[out] err - schd_error structure to return reason why the job can't run\n * @return vector<nspec *>\n * @retval place job can run\n * @retval empty vector if job can't run\n */\nstd::vector<nspec *>\ncheck_node_buckets(status *policy, server_info *sinfo, queue_info *qinfo, resource_resv *resresv, schd_error *err)\n{\n\tnode_partition **nodepart = NULL;\n\n\tif (policy == NULL || sinfo == NULL || resresv == NULL || err == NULL)\n\t\treturn {};\n\n\tif (resresv->is_job && qinfo == NULL)\n\t\treturn {};\n\n\tif (resresv->is_job && qinfo->nodepart != NULL)\n\t\tnodepart = qinfo->nodepart;\n\telse if (sinfo->nodepart != NULL)\n\t\tnodepart = sinfo->nodepart;\n\telse\n\t\tnodepart = NULL;\n\n\t/* job's place=group=res replaces server or queue node grouping\n\t * We'll search the node partition cache for the job's pool of node partitions\n\t * If it doesn't exist, we'll create it and add it to the cache\n\t */\n\tif (resresv->place_spec->group != NULL) {\n\t\tstd::vector<std::string> groupvec{resresv->place_spec->group};\n\t\tnp_cache *npc = NULL;\n\t\tnode_info **ninfo_arr;\n\n\t\tif (qinfo->has_nodes)\n\t\t\tninfo_arr = qinfo->nodes;\n\t\telse\n\t\t\tninfo_arr = sinfo->unassoc_nodes;\n\n\t\tnpc = find_alloc_np_cache(policy, sinfo->npc_arr, groupvec, ninfo_arr, cmp_placement_sets);\n\t\tif (npc != NULL)\n\t\t\tnodepart = npc->nodepart;\n\t}\n\tif (nodepart != NULL) {\n\t\tint i;\n\t\tint can_run = 0;\n\t\tstatic schd_error *failerr = NULL;\n\t\tif (failerr == NULL) {\n\t\t\tfailerr = 
new_schd_error();\n\t\t\tif (failerr == NULL)\n\t\t\t\treturn {};\n\t\t} else\n\t\t\tclear_schd_error(failerr);\n\n\t\tfor (i = 0; nodepart[i] != NULL; i++) {\n\t\t\tstd::vector<nspec *> nspecs;\n\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, resresv->name,\n\t\t\t\t   \"Evaluating placement set: %s\", nodepart[i]->name);\n\n\t\t\tclear_schd_error(err);\n\t\t\tnspecs = map_buckets(policy, nodepart[i]->bkts, resresv, err);\n\t\t\tif (!nspecs.empty())\n\t\t\t\treturn nspecs;\n\t\t\tif (err->status_code == NOT_RUN) {\n\t\t\t\tif (failerr->status_code == SCHD_UNKWN)\n\t\t\t\t\tcopy_schd_error(failerr, err);\n\t\t\t\tcan_run = 1;\n\t\t\t}\n\t\t}\n\t\t/* If we can't fit in any placement set, span over all of them */\n\t\tif (can_run == 0) {\n\t\t\tif (sc_attrs.do_not_span_psets) {\n\t\t\t\tset_schd_error_codes(err, NEVER_RUN, CANT_SPAN_PSET);\n\t\t\t\treturn {};\n\t\t\t} else {\n\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, resresv->name, \"Request won't fit into any placement sets, will use all nodes\");\n\t\t\t\treturn map_buckets(policy, sinfo->buckets, resresv, err);\n\t\t\t}\n\t\t} else\n\t\t\t/* There is a possibility that the job might fit in one of the placement sets,\n\t\t\t * use that error code\n\t\t\t */\n\t\t\tmove_schd_error(err, failerr);\n\t}\n\n\treturn map_buckets(policy, sinfo->buckets, resresv, err);\n}\n\n/*\n * @brief check to see if a resresv can fit on the nodes using buckets\n *\n * @param[in] policy - policy info\n * @param[in] bkts - buckets to search\n * @param[in] resresv - resresv to see if it can fit\n * @param[out] err - error structure to return failure\n *\n * @return place resresv can run or an empty vector if it can't\n */\nstd::vector<nspec *>\nmap_buckets(status *policy, node_bucket **bkts, resource_resv *resresv, schd_error *err)\n{\n\tchunk_map **cmap;\n\n\tif (policy == NULL || bkts == NULL || resresv == NULL || err == NULL)\n\t\treturn {};\n\n\tcmap = find_correct_buckets(policy, bkts, resresv, 
err);\n\tif (cmap == NULL)\n\t\treturn {};\n\n\tclear_schd_error(err);\n\tif (bucket_match(cmap, resresv, err) == 0) {\n\t\tif (err->status_code == SCHD_UNKWN)\n\t\t\tset_schd_error_codes(err, NOT_RUN, NO_NODE_RESOURCES);\n\n\t\tfree_chunk_map_array(cmap);\n\t\treturn {};\n\t}\n\n\tauto ns_arr = bucket_to_nspecs(policy, cmap, resresv);\n\n\tfree_chunk_map_array(cmap);\n\treturn ns_arr;\n}\n"
  },
  {
    "path": "src/scheduler/buckets.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _BUCKETS_H\n#define _BUCKETS_H\n\n#include \"data_types.h\"\n\n/* bucket_bitpool constructor, copy constructor, destructor */\nbucket_bitpool *new_bucket_bitpool();\nvoid free_bucket_bitpool(bucket_bitpool *bp);\nbucket_bitpool *dup_bucket_bitpool(bucket_bitpool *obp);\n\n/* node_bucket constructor, copy constructor, destructor */\nnode_bucket *new_node_bucket(int new_pools);\nnode_bucket *dup_node_bucket(node_bucket *onb, server_info *nsinfo);\nnode_bucket **dup_node_bucket_array(node_bucket **old, server_info *nsinfo);\nvoid free_node_bucket(node_bucket *nb);\nvoid free_node_bucket_array(node_bucket **buckets);\n\n/* find index of node_bucket in an array */\nint find_node_bucket_ind(node_bucket **buckets, schd_resource *rl, queue_info *qinfo, int priority);\n\n/* create node_buckets an array of nodes */\nnode_bucket **create_node_buckets(status *policy, node_info **nodes, std::vector<queue_info *> &queues, unsigned int flags);\n\n/* Create a name for the node bucket based on resources, queue, and priority */\nchar *create_node_bucket_name(status *policy, node_bucket *nb);\n\n/* match job's request to buckets and allocate */\nint bucket_match(chunk_map **cmap, resource_resv *resresv, schd_error *err);\n/* convert chunk_map->node_bits into nspec array */\nstd::vector<nspec *> bucket_to_nspecs(status *policy, chunk_map **cb_map, 
resource_resv *resresv);\n\n/* can a job completely fit on a node before it is busy */\nint node_can_fit_job_time(int node_ind, resource_resv *resresv);\n\n/* bucket version of a = b */\nvoid set_working_bucket_to_truth(node_bucket *nb);\nvoid set_chkpt_bucket_to_working(node_bucket *nb);\nvoid set_working_bucket_to_chkpt(node_bucket *nb);\nvoid set_chkpt_bucket_to_truth(node_bucket *nb);\nvoid set_truth_bucket_to_chkpt(node_bucket *nb);\n\n/* chunk_map constructor, copy constructor, destructor */\nchunk_map *new_chunk_map();\nchunk_map *dup_chunk_map(chunk_map *ocmap);\nvoid free_chunk_map(chunk_map *cmap);\nvoid free_chunk_map_array(chunk_map **cmap_arr);\nchunk_map **dup_chunk_map_array(chunk_map **ocmap_arr);\n\n/* decide if a job should use the node_bucket path */\nint job_should_use_buckets(resource_resv *resresv);\n\n/* Log a summary of a chunk_map array */\nvoid log_chunk_map_array(resource_resv *resresv, chunk_map **cmap);\n\n/* Check to see if a job can run on nodes via the node_bucket codepath */\nstd::vector<nspec *> check_node_buckets(status *policy, server_info *sinfo, queue_info *qinfo, resource_resv *resresv, schd_error *err);\nstd::vector<nspec *> map_buckets(status *policy, node_bucket **bkts, resource_resv *resresv, schd_error *err);\n\n/* map job to buckets that can satisfy */\nchunk_map **find_correct_buckets(status *policy, node_bucket **buckets, resource_resv *resresv, schd_error *err);\n\n#endif /* _BUCKETS_H */\n"
  },
  {
    "path": "src/scheduler/check.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    check.c\n *\n * Functions included are:\n *\tis_ok_to_run_queue()\n *\ttime_to_ded_boundary()\n *\ttime_to_prime_boundary()\n *\tshrink_to_boundary()\n *\tshrink_to_minwt()\n *\tshrink_to_run_event()\n *\tshrink_job_algorithm()\n *\tis_ok_to_run_STF()\n *\tis_ok_to_run()\n *\tcheck_avail_resources()\n *\tdynamic_avail()\n *\tfind_counts_elm()\n *\tcheck_ded_time_boundary()\n *\tdedtime_conflict()\n *\tcheck_nodes()\n *\tcheck_ded_time_queue()\n *\tcheck_prime_queue()\n *\tcheck_nonprime_queue()\n *\tcheck_prime_boundary()\n *\tfalse_res()\n *\tunset_str_res()\n *\tzero_res()\n *\n */\n\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <errno.h>\n#include <pbs_ifl.h>\n#include <pbs_internal.h>\n#include <log.h>\n#include <libutil.h>\n#include \"check.h\"\n#include \"config.h\"\n#include \"server_info.h\"\n#include \"queue_info.h\"\n#include \"job_info.h\"\n#include \"misc.h\"\n#include \"constant.h\"\n#include \"globals.h\"\n#include \"dedtime.h\"\n#include \"node_info.h\"\n#include \"fifo.h\"\n#include \"resource_resv.h\"\n#ifdef NAS\n#include \"site_code.h\"\n#endif\n#include \"node_partition.h\"\n#include \"sort.h\"\n#include \"server_info.h\"\n#include \"queue_info.h\"\n#include \"limits_if.h\"\n#include \"simulate.h\"\n#include \"resource.h\"\n#include 
\"buckets.h\"\n#include \"pbs_bitmap.h\"\n\n/**\n *\n * @brief\n *\t\tcheck to see if jobs can be run in a queue\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tqinfo\t-\tqueue in question\n *\n * @return\tenum sched_error_code\n * @retval\tSUCCESS\t: on success or\n * @retval\tscheduler failure code\t: jobs can't run in queue\n *\n * @note\n * \t\tThis function will be run once per queue every scheduling cycle\n *\n */\n\nenum sched_error_code\nis_ok_to_run_queue(status *policy, queue_info *qinfo)\n{\n\tenum sched_error_code rc = SE_NONE; /* Return Code */\n\n\tif (!qinfo->is_exec)\n\t\treturn QUEUE_NOT_EXEC;\n\n\tif (!qinfo->is_started)\n\t\treturn QUEUE_NOT_STARTED;\n\n\tif ((rc = check_ded_time_queue(qinfo)))\n\t\treturn rc;\n\n\tif ((rc = check_prime_queue(policy, qinfo)))\n\t\treturn rc;\n\n\tif ((rc = check_nonprime_queue(policy, qinfo)))\n\t\treturn rc;\n\n\tif (rc == SE_NONE)\n\t\treturn SUCCESS;\n\n\treturn rc;\n}\n\n/**\n *\n * @brief\n * \t\tTime before dedicated time boundary if the job is hitting the boundary.\n *\n *\t@param[in]\tpolicy\t-\tpolicy structure\n *\t@param[in]\tnjob\t-\tresource resv\n *\n *\t@return\ttime duration up to dedicated boundary\n *\t@retval sch_resource_t:\ttime duration up to dedicated boundary\n *\t\t\t\t\t\t\tor full duration of the job if not\n *\t\t\t\t\t\t\thitting dedicated boundary.\n *\t@retval UNSPECIFIED\t: if job's min duration is hitting dedicated boundary\n *\t@retval -3\t: on error\n *\n */\nsch_resource_t\ntime_to_ded_boundary(status *policy, resource_resv *njob)\n{\n\tsch_resource_t min_time_left = UNSPECIFIED;\n\tsch_resource_t end = UNSPECIFIED;\n\tsch_resource_t min_end = UNSPECIFIED;\n\n\tif (njob == NULL || policy == NULL)\n\t\treturn -3; /* error */\n\n\tsch_resource_t duration = njob->duration;\n\ttimegap ded_time = find_next_dedtime(njob->server->server_time);\n\tbool ded = is_ded_time(njob->server->server_time);\n\tsch_resource_t time_left = calc_time_left_STF(njob, &min_time_left);\n\n\tif 
(!ded) {\n\t\tsch_resource_t start = UNSPECIFIED;\n\n\t\tif (njob->start == UNSPECIFIED && njob->end == UNSPECIFIED) {\n\t\t\tstart = njob->server->server_time;\n\t\t\tmin_end = start + min_time_left;\n\t\t\tend = start + time_left;\n\t\t} else if (njob->start == UNSPECIFIED || njob->end == UNSPECIFIED) {\n\t\t\treturn -3; /* error */\n\t\t} else {\n\t\t\tstart = njob->start;\n\t\t\tend = njob->end;\n\t\t\tmin_end = njob->start + njob->min_duration;\n\t\t}\n\t\t/* Currently not dedicated time. The job can not complete its\n\t\t * maximum duration before dedicated time would start.\n\t\t * See if it can complete its minimum duration before the start of\n\t\t * dedicated time. If yes, set duration up to the start of the dedicated time.\n\t\t */\n\t\tif (end > ded_time.from && end < ded_time.to) {\n\t\t\tmin_end = start + min_time_left;\n\t\t\tif (min_end > ded_time.from && min_end < ded_time.to)\n\t\t\t\tduration = UNSPECIFIED;\n\t\t\telse\n\t\t\t\tduration = ded_time.from - start;\n\t\t}\n\t\t/* Long job -- one which includes dedicated time.  In other words,\n\t\t * it starts at or before dedicated time starts and\n\t\t * it ends at or after dedicated time ends, if run for maximum duration.\n\t\t * Check whether the job can be run for minimum duration without hitting dedicated time.\n\t\t * If yes, set duration up to the start of the dedicated time.\n\t\t */\n\t\tif (start <= ded_time.from && end >= ded_time.to) {\n\t\t\tif (min_end >= ded_time.from)\n\t\t\t\tduration = UNSPECIFIED;\n\t\t\telse\n\t\t\t\tduration = ded_time.from - start;\n\t\t}\n\t} else /* Dedicated time */ {\n\t\tmin_end = njob->server->server_time + min_time_left;\n\t\tend = njob->server->server_time + time_left;\n\t\t/* See if job's minimum duration can be completed without hitting\n\t\t * dedicated time boundary. If yes, see if job's complete duration\n\t\t * too can be satisfied. 
If not, set duration to the end of the dedicated time.\n\t\t */\n\t\tif (min_end > ded_time.to)\n\t\t\tduration = UNSPECIFIED;\n\t\t/* Set duration only if it is hitting */\n\t\telse if (end > ded_time.to)\n\t\t\tduration = ded_time.to - njob->server->server_time;\n\t}\n\treturn (duration);\n}\n\n/**\n *\n *\t@brief\n *\t\tTime to prime time boundary if the job is hitting prime/non-prime boundary\n *\n *\n *\t@param[in]\tpolicy\t-\tpolicy structure\n *\t@param[in]\tnjob\t-\tresource resv\n *\n *\t@return\ttime duration up to prime/non-prime boundary\n *\t@retval sch_resource_t\t: time duration up to prime/non-prime boundary\n *\t\t\t\t\t\t\t\tor full duration of the job if not hitting\n *\t@retval UNSPECIFIED\t: if job's minimum duration is hitting prime/non-prime boundary\n *\t@retval -3\t: if njob is NULL or policy is NULL\n *\n */\nsch_resource_t\ntime_to_prime_boundary(status *policy, resource_resv *njob)\n{\n\tsch_resource_t time_left = UNSPECIFIED;\n\tsch_resource_t min_time_left = UNSPECIFIED; /* time left for minimum duration */\n\tsch_resource_t duration = UNSPECIFIED;\n\n\tif (njob == NULL || policy == NULL)\n\t\treturn -3; /* error */\n\n\tduration = njob->duration;\n\t/* If backfill_prime is not set to true or if prime status never ends return full duration of the job */\n\tif (policy->prime_status_end == SCHD_INFINITY || !(policy->backfill_prime))\n\t\treturn duration;\n\n\ttime_left = calc_time_left_STF(njob, &min_time_left);\n\t/* If not hitting, return full duration */\n\tif (njob->server->server_time + time_left < policy->prime_status_end + policy->prime_spill)\n\t\treturn duration;\n\n\t/* Job can be shrunk to time available before prime/non-prime time boundary */\n\tif (njob->server->server_time + min_time_left < policy->prime_status_end + policy->prime_spill)\n\t\t/* Shrink the job's duration to prime time boundary */\n\t\tduration = (policy->prime_status_end + policy->prime_spill) - (njob->server->server_time);\n\telse\n\t\tduration = 
UNSPECIFIED;\n\treturn (duration);\n}\n\n/**\n *\t@brief\n *\t\tShrink job to dedicated/prime time boundary(Job's duration will be set),\n *\t\tif hitting and see if job can run. If job is not hitting a boundary see if it\n *\t\tcan run with full duration.\n *\t\tJob duration may be set inside this function and it is caller's responsibility\n *\t\tto keep track of the earlier value of job duration  if needed.\n *\n *\t@param[in]\tpolicy\t-\tpolicy structure\n *\t@param[in]\tsinfo\t-\tserver info\n *\t@param[in]\tqinfo\t-\tqueue info\n *\t@param[in]\tresresv -\tresource resv\n *\t@param[in]\tflags\t\tflags for is_ok_to_run() @see is_ok_to_run()\n *\t@param[in,out]\terr\t-\terror reply structure\n *\n *\t@par NOTE\n *\t\t\treturn value is required to be freed by caller\n *\n *\t@return\tnode solution of where job will run - more info in err\n *\t@retval\tnspec**\t: array\n *\t@retval NULL\t: if job/resv can not run/error\n *\n */\nstd::vector<nspec *>\nshrink_to_boundary(status *policy, server_info *sinfo,\n\t\t   queue_info *qinfo, resource_resv *njob, unsigned int flags, schd_error *err)\n{\n\tstd::vector<nspec *> ns_arr;\n\tif (njob == NULL || policy == NULL || sinfo == NULL || err == NULL)\n\t\treturn {};\n\t/* No need to shrink the job to prime/dedicated boundary,\n\t * if it is not hitting */\n\tif (err->error_code == CROSS_PRIME_BOUNDARY ||\n\t    err->error_code == CROSS_DED_TIME_BOUNDRY) {\n\t\tauto orig_duration = njob->duration;\n\t\tauto time_to_dedboundary = time_to_ded_boundary(policy, njob);\n\t\tif (time_to_dedboundary == UNSPECIFIED)\n\t\t\treturn {};\n\n\t\tauto time_to_primeboundary = time_to_prime_boundary(policy, njob);\n\t\tif (time_to_primeboundary == UNSPECIFIED)\n\t\t\treturn {};\n\t\tclear_schd_error(err);\n\t\t/* Shrink job to prime/ded boundary if hitting,\n\t\t * If both prime and ded boundaries are getting hit\n\t\t * shrink job to the nearest of the two\n\t\t */\n\t\tnjob->duration = time_to_dedboundary < time_to_primeboundary ? 
time_to_dedboundary : time_to_primeboundary;\n\t\tns_arr = is_ok_to_run(policy, sinfo, qinfo, njob, flags, err);\n\t\tif (!ns_arr.empty() && orig_duration > njob->duration) {\n\t\t\tchar timebuf[TIMEBUF_SIZE];\n\t\t\tconvert_duration_to_str(njob->duration, timebuf, TIMEBUF_SIZE);\n\t\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_JOB, LOG_NOTICE, njob->name,\n\t\t\t\t   \"Considering shrinking job to duration=%s, due to a prime/dedicated time conflict\", timebuf);\n\t\t}\n\t}\n\treturn ns_arr;\n}\n\n/**\n *\n *\t@brief\n *\t\tShrink the job to its minimum duration and see if it can run\n *\t\t(Job's duration will be set to minimum duration)\n *\t\tJob duration may be set inside this function and it is caller's responsibility\n *\t\tto keep track of the earlier value of job duration if needed.\n *\n *\t@param[in]\tpolicy\t-\tpolicy structure\n *\t@param[in]\tsinfo\t-\tserver info\n *\t@param[in]\tqinfo\t-\tqueue info\n *\t@param[in]\tresresv\t-\tresource resv\n *\t@param[in]\tflags\t\tflags for is_ok_to_run() @see is_ok_to_run()\n *\t@param[out]\terr\t-\terror reply structure\n *\t@par NOTE\n *\t\treturn value is required to be freed by caller\n *\t@return\tnode solution of where job will run - more info in err\n *\t@retval\tvector<nspec *> array\n *\t@retval NULL\t: if job/resv can not run/error\n **/\nstd::vector<nspec *>\nshrink_to_minwt(status *policy, server_info *sinfo,\n\t\tqueue_info *qinfo, resource_resv *njob, unsigned int flags, schd_error *err)\n{\n\tif (njob == NULL || policy == NULL || sinfo == NULL || err == NULL)\n\t\treturn {};\n\tnjob->duration = njob->min_duration;\n\treturn is_ok_to_run(policy, sinfo, qinfo, njob, flags, err);\n}\n\n/**\n *\n *\t@brief\n *\t\tShrink up to a run event and see if it can run\n *\t\tTry only up to SHRINK_MAX_RETRY=5 events.\n *\t\tInitially retry_count=SHRINK_MAX_RETRY.\n *\t@par Algorithm:\n *\t\tIn each iteration:\n *\t\t1.\tCalculate job's possible_shrunk_duration. 
This should be\n *\t\t\tthe duration between min_end_time and last tried event's event_time.\n *\t\t\tIf it is the first event to be tried, possible_shrunk_duration should be\n *\t\t\tthe duration between min_end_time and farthest_event's event_time.\n *\t\t2.\tDivide the possible_shrunk_duration into retry_count equal segments.\n *\t\t3.\ttry shrinking to the last event of the last segment.\n *\t\t4.\tIf job still can't run, traverse backwards and skip rest of the events in that segment.\n *\t\t\tand try last event of the next segment.\n *\t\t5.\treduce the retry_count by 1.\n *\t\tRepeat these iterations until either retry_count==0 or job is ok to run.\n *\n *\t\tSo what this algorithm does, is:\n *\t\tFirst try shrinking to the farthest event. If it fails, divide the\n *\t\tpossible_shrunk_duration(duration between min_end_time and this event's event_time)\n *\t\tinto 5 equal segments. Skip rest of the events in the 5th segment.\n *\t\tTry last event of the 4th segment. If it fails, recalculate possible_shrunk_duration and divide it\n *\t\tinto 4 equal segments. Skip rest of the events in the 4th segment.\n *\t\tTry last event of the 3rd segment. If it fails, recalculate possible_shrunk_duration and divide it\n *\t\tinto 3 equal segments. Skip rest of the events in the 3rd segment.\n *\t\tTry last event of the 2nd segment. If it fails, recalculate possible_shrunk_duration and divide it\n *\t\tinto 2 equal segments. Skip rest of the events in the 2nd segment.\n *\t\tTry last event of the 1st segment.\n *\t\tExample:\n *\t\tThe farthest event within job's duration is 100 hours after min_end_time.\n *\t\tTry shrinking to this event's start time i.e. 100 hours.\n *\t\tLet's say shrinking fails, now divide 100 hours into 5 equal segments\n *\t\tof 20 hours each. Skip rest of the events of the last(5th) segment, since\n *\t\twe have tried one event in this segment already. 
We keep traversing\n *\t\tand skipping events until we find an event that falls in the\n *\t\t4th segment e.g. within (100-20=80)hours.\n *\t\tTry shrinking to this event's start time say it is: 56 hours.\n *\t\tLet's say shrinking fails, divide 56 hours into 4 equal segments\n *\t\tof 14 hours each. Skip rest of the events of the last(4th) segment, since\n *\t\twe have tried one event in this segment already. We keep traversing\n *\t\tand skipping events until we find an event that falls in the\n *\t\t3rd segment e.g. within (56-14=42)hours.\n *\t\tTry shrinking to this event's start time say it is: 36 hours.\n *\t\tLet's say shrinking fails, divide 36 hours into 3 equal segments\n *\t\tof 12 hours each. Skip rest of the events of the last(3rd) segment, since\n *\t\twe have tried one event in this segment already. We keep traversing\n *\t\tand skipping events until we find an event that falls in the\n *\t\t2nd segment e.g. within (36-12=24)hours.\n *\t\tTry shrinking to this event's start time say it is: 20 hours.\n *\t\tLet's say shrinking fails, divide 20 hours into 2 equal segments\n *\t\tof 10 hours each. Skip rest of the events of the last(2nd) segment, since\n *\t\twe have tried one event in this segment already. We keep traversing\n *\t\tand skipping events until we find an event that falls in the\n *\t\t1st segment e.g. 
within (20-10=10)hours.\n *\t\tTry shrinking to this event's start time say it is: 6 hours.\n *\t\tIf job still can't run, indicate failure.\n *\n *\t@param[in]\tpolicy\t-\tpolicy structure\n *\t@param[in]\tsinfo\t-\tserver info\n *\t@param[in]\tqinfo   -\tqueue info\n *\t@param[in]\tresresv -\tresource resv\n *\t@param[in]\tflags\t\tflags for is_ok_to_run() @see is_ok_to_run()\n *\t@param[in,out]\terr\t-\terror reply structure\n *\n *\t@par NOTE:\n *\t\treturn value is required to be freed by caller\n *\n *\t@return\tnode solution of where job will run - more info in err\n *\t@retval\tvector<nspec*> array\n *\t@retval\tNULL\t: if job/resv can not run/error\n *\n */\nstd::vector<nspec *>\nshrink_to_run_event(status *policy, server_info *sinfo,\n\t\t    queue_info *qinfo, resource_resv *njob, unsigned int flags, schd_error *err)\n{\n\tstd::vector<nspec *> ns_arr;\n\ttimed_event *te = NULL;\n\ttimed_event *initial_event = NULL;\n\ttimed_event *farthest_event = NULL;\n\tunsigned int event_mask = TIMED_RUN_EVENT;\n\n\tif (njob == NULL || policy == NULL || sinfo == NULL || err == NULL)\n\t\treturn {};\n\n\tauto orig_duration = njob->duration;\n\tauto servertime_now = sinfo->server_time;\n\tauto end_time = servertime_now + njob->duration;\n\tauto min_end_time = servertime_now + njob->min_duration;\n\t/* Go till farthest event in the event list between job's min and max duration */\n\tte = get_next_event(sinfo->calendar);\n\t/* Get the front pointer of the event list. It may not always be NULL. 
*/\n\tif (te != NULL)\n\t\tinitial_event = te->prev;\n\tfor (te = find_init_timed_event(te, IGNORE_DISABLED_EVENTS, event_mask);\n\t     te != NULL && te->event_time < end_time;\n\t     te = find_next_timed_event(te, IGNORE_DISABLED_EVENTS, event_mask)) {\n\t\tfarthest_event = te;\n\t}\n\tclear_schd_error(err);\n\t/* If no events between job's min and max duration, try running with complete duration */\n\tif (farthest_event == NULL || farthest_event->event_time < min_end_time)\n\t\tns_arr = is_ok_to_run(policy, sinfo, qinfo, njob, flags, err);\n\telse {\n\t\t/* try shrinking upto the farthest event */\n\t\ttime_t last_tried_event_time = 0;\n\t\tint retry_count = SHRINK_MAX_RETRY;\n\t\ttimed_event *last_skipped_event = NULL;\n\t\tend_time = farthest_event->event_time;\n\n\t\t/* Now, go backwards in the events list */\n\t\tfor (te = farthest_event; retry_count != 0;\n\t\t     te = find_prev_timed_event(te, IGNORE_DISABLED_EVENTS, event_mask)) {\n\t\t\tif (te == NULL) {\n\t\t\t\t/* If we've reached the end of the list, we're done */\n\t\t\t\tif (last_skipped_event == NULL)\n\t\t\t\t\tbreak;\n\t\t\t\tte = last_skipped_event;\n\t\t\t\tlast_skipped_event = NULL;\n\t\t\t\t/* No events left, this is the last time through the loop */\n\t\t\t\tretry_count = 1;\n\t\t\t\t/* If we have reached the front of event list or if the event is falling before min end time, break. 
*/\n\t\t\t} else if (te == initial_event || te->event_time < min_end_time)\n\t\t\t\tbreak;\n\t\t\t/* If no events in this segment, then try last skipped event of the previous segment\n\t\t\t * Skip events that fall in the previous segment or if the event time is already tried\n\t\t\t */\n\t\t\telse if (te->event_time > end_time || te->event_time == last_tried_event_time) {\n\t\t\t\tlast_skipped_event = te;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\t/* Shrink job to the start of this event */\n\t\t\tnjob->duration = te->event_time - servertime_now;\n\t\t\tclear_schd_error(err);\n\t\t\tns_arr = is_ok_to_run(policy, sinfo, qinfo, njob, flags, err);\n\t\t\t/* break if success */\n\t\t\tif (!ns_arr.empty())\n\t\t\t\tbreak;\n\t\t\tlast_skipped_event = NULL; /* This event does not get skipped */\n\t\t\tlast_tried_event_time = te->event_time;\n\t\t\t/* Shrink end_time to the next segment */\n\t\t\tend_time = min_end_time + (njob->duration - njob->min_duration) * (retry_count - 1) / retry_count;\n\t\t\tretry_count--;\n\t\t}\n\t}\n\tif (!ns_arr.empty() && njob->duration == njob->min_duration)\n\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_JOB, LOG_NOTICE, njob->name,\n\t\t\t  \"Considering shrinking job to its minimum walltime\");\n\telse if (!ns_arr.empty() && orig_duration > njob->duration) {\n\t\tchar timebuf[TIMEBUF_SIZE];\n\t\tconvert_duration_to_str(njob->duration, timebuf, TIMEBUF_SIZE);\n\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_JOB, LOG_NOTICE, njob->name,\n\t\t\t   \"Considering shrinking job to duration=%s, due to a reservation/top job conflict\", timebuf);\n\t}\n\treturn ns_arr;\n}\n\n/**\n *\n *\t@brief\n *\t\tGeneric algorithm for shrinking a job\n *\n *\t@param[in]\tpolicy\t-\tpolicy structure\n *\t@param[in]\tpbs_sd\t-\tthe connection descriptor to the pbs_server\n *\t@param[in]\tsinfo\t-\tserver info\n *\t@param[in]\tqinfo\t-\tqueue info\n *\t@param[in]\tresresv\t-\tresource resv\n *\t@param[in]\tflags\t\tflags for is_ok_to_run() @see is_ok_to_run()\n
*\t@param[in,out]\terr\t-\terror reply structure\n *\n *\t@par NOTE:\n *\t\treturn value is required to be freed by caller\n *\n *\t@return\tnode solution of where job will run - more info in err\n *\t@retval\tvector<nspec*> array\n *\t@retval NULL\t: if job/resv can not run/error\n **/\nstd::vector<nspec *>\nshrink_job_algorithm(status *policy, server_info *sinfo,\n\t\t     queue_info *qinfo, resource_resv *njob, unsigned int flags, schd_error *err)\n{\n\tstd::vector<nspec *> ns_arr; /* node solution for job */\n\ttime_t transient_duration;\n\n\tif (njob == NULL || policy == NULL || sinfo == NULL || err == NULL)\n\t\treturn {};\n\t/* We are here because job could not run with full duration, check the error code\n\t * and see if dedicated/prime conflict was found, if yes, try shrinking to boundary\n\t */\n\tif (err->error_code == CROSS_PRIME_BOUNDARY ||\n\t    err->error_code == CROSS_DED_TIME_BOUNDRY) {\n\t\t/* Return ns_arr on success */\n\t\t/* err will be cleared inside shrink_to_boundary if min walltime is not hitting the\n\t\t * prime/dedicated boundary. If min walltime is still hitting prime/dedicated\n\t\t * boundary, the err will not be cleared.\n\t\t */\n\t\tns_arr = shrink_to_boundary(policy, sinfo, qinfo, njob, flags, err);\n\t\tif (!ns_arr.empty())\n\t\t\treturn ns_arr;\n\t}\n\t/* Inside shrink_to_boundary(), job's duration would be set to time upto the\n\t * prime/dedicated boundary if hitting. 
If the job could still not run,\n\t * we need to see if the job can be run by shrinking further within the boundary.\n\t * If err is set to CROSS_PRIME_BOUNDARY or CROSS_DED_TIME_BOUNDRY, no need to try further\n\t * since we know that minimum duration of the job, itself is hitting boundary.\n\t */\n\ttransient_duration = njob->duration;\n\tif (ns_arr.empty() &&\n\t    err->error_code != CROSS_PRIME_BOUNDARY &&\n\t    err->error_code != CROSS_DED_TIME_BOUNDRY) {\n\t\t/* Try with lesser time durations */\n\t\t/* Clear any scheduling errors we got during earlier shrink attempts. */\n\t\tclear_schd_error(err);\n\t\tauto ns_arr_minwt = shrink_to_minwt(policy, sinfo, qinfo, njob, flags, err);\n\t\t/* Return NULL if job can't run at all */\n\t\tif (ns_arr_minwt.empty())\n\t\t\treturn {};\n\t\telse { /* If success with min walltime, try running with a bigger walltime possible */\n\t\t\tnjob->duration = transient_duration;\n\t\t\tclear_schd_error(err);\n\t\t\tns_arr = shrink_to_run_event(policy, sinfo, qinfo, njob, flags, err);\n\t\t\t/* If job still could not be run, should be run with min_duration */\n\t\t\tif (ns_arr.empty()) {\n\t\t\t\tns_arr = ns_arr_minwt;\n\t\t\t\tnjob->duration = njob->min_duration;\n\t\t\t} else\n\t\t\t\tfree_nspecs(ns_arr_minwt);\n\t\t}\n\t}\n\treturn ns_arr;\n}\n\n/**\n *\n *\t@brief\n *\t\tcheck to see if the STF job is OK to run.\n *\n *\t@param[in]\tpolicy\t-\tpolicy structure\n *\t@param[in]\tsinfo\t-\tserver info\n *\t@param[in]\tqinfo\t-\tqueue info\n *\t@param[in]\tresresv\t-\tresource resv\n *\t@param[in]\tflags\t\tflags for is_ok_to_run() @see is_ok_to_run()\n *\t@param[out]\terr\t-\terror reply structure\n *\t@par NOTE:\n *\t\treturn value is required to be freed by caller\n *\n *\t@return\tnode solution of where job will run - more info in err\n *\t@retval\tnspec** array\n *\t@retval NULL\t: if job/resv can not run/error\n */\nstd::vector<nspec *>\nis_ok_to_run_STF(status *policy, server_info *sinfo,\n\t\t queue_info *qinfo, 
resource_resv *njob, unsigned int flags, schd_error *err,\n\t\t std::vector<nspec *> (*shrink_heuristic)(status *policy, server_info *sinfo,\n\t\t\t\t\t\t\t  queue_info *qinfo, resource_resv *njob, unsigned int flags, schd_error *err))\n{\n\tstd::vector<nspec *> ns_arr; /* node solution for job */\n\tsch_resource_t orig_duration;\n\n\tif (njob == NULL || policy == NULL || sinfo == NULL || err == NULL)\n\t\treturn {};\n\n\torig_duration = njob->duration;\n\n\t/* First see if it can run with full walltime */\n\tns_arr = is_ok_to_run(policy, sinfo, qinfo, njob, flags, err);\n\t/* If the job can not run for non-calendar reasons, return NULL */\n\tif (!ns_arr.empty())\n\t\treturn ns_arr;\n\n\tif (err->error_code == DED_TIME ||\n\t    err->error_code == PRIME_ONLY ||\n\t    err->error_code == NONPRIME_ONLY)\n\t\treturn {};\n\t/* Apply the shrink heuristic and try running the job after shrinking it */\n\tns_arr = shrink_heuristic(policy, sinfo, qinfo, njob, flags, err);\n\t/* Reset the job duration on failure */\n\tif (ns_arr.empty())\n\t\tnjob->duration = orig_duration;\n\telse\n\t\tnjob->hard_duration = njob->duration;\n\treturn ns_arr;\n}\n\n/**\n *\n *  @brief\n *  \tCheck to see if the resresv can fit within the system limits\n *\t  \tUsed for both job to run and confirming/running of reservations.\n *\n *  @par the err structure can be set in two ways:\n *\t1. For simple check functions, the error code comes from the function.\n *\t   We set the error code into err within is_ok_to_run()\n *\t2. 
For more complex check functions, we pass in err by reference.\n *\t   The err will be completed inside the check function.\n *\t* As an extension of #2, even more complex check functions may construct\n *\t  a list of error structures.\n *\n * @param[in] policy\t-\tpolicy info\n * @param[in] sinfo\t-\tserver info\n * @param[in] qinfo\t-\tqueue info\n * @param[in] resresv\t-\tresource resv\n * @param[in] flags\t-\tRETURN_ALL_ERR - Return all reasons why the job\n * \t\t\t\t\tcan not run, not just the first.  @warning: may be expensive.\n *\t\t\t\t\tThis flag will ignore equivalence classes\n *\t\t\t\tIGNORE_EQUIV_CLASS - Ignore job equivalence class feature.\n *\t\t\t\t\tIf a job equivalence class has been seen before and marked\n *\t\t\t\t\tcan_not_run, the job will still be evaluated normally.\n *\t\t\t\tUSE_BUCKETS - use bucket code path\n *\t\t\t\tNO_ALLPART - do not use the allpart\n * @param[in,out]\tperr\t-\tpointer to error structure or NULL.\n *\n * @par NOTE:\n *\t\treturn value is required to be freed by caller (using free_nspecs())\n *\n * @return\tnode solution of where job/resv will run - more info in err\n * @retval\tnspec** array\n * @retval\tNULL\t: if job/resv can not run/error\n *\n *\n */\nstd::vector<nspec *>\nis_ok_to_run(status *policy, server_info *sinfo,\n\t     queue_info *qinfo, resource_resv *resresv, unsigned int flags, schd_error *perr)\n{\n\tenum sched_error_code rc = SE_NONE; /* Return Code */\n\tschd_resource *res = NULL;\t    /* resource list to check */\n\tint endtime = 0;\t\t    /* end time of job if started now */\n\tnode_partition *allpart = NULL;\t    /* all partition to use (queue's or servers) */\n\tschd_error *prev_err = NULL;\n\tschd_error *err;\n\tresource_req *resreq = NULL;\n\n\tif (sinfo == NULL || resresv == NULL || perr == NULL)\n\t\treturn {};\n\n\tif (resresv->is_job && qinfo == NULL)\n\t\treturn {};\n\n\terr = perr;\n\n\tif (resresv->is_job && sinfo->equiv_classes != NULL &&\n\t    !(flags & (IGNORE_EQUIV_CLASS | 
RETURN_ALL_ERR)) &&\n\t    resresv->ec_index != UNSPECIFIED &&\n\t    sinfo->equiv_classes[resresv->ec_index]->can_not_run) {\n\t\tcopy_schd_error(err, sinfo->equiv_classes[resresv->ec_index]->err);\n\t\treturn {};\n\t}\n\n\tif (resresv->is_job) {\n\t\tif (qinfo == NULL) {\n\t\t\tset_schd_error_codes(err, NOT_RUN, SCHD_ERROR);\n\t\t\tadd_err(&prev_err, err);\n\n\t\t\tif (!(flags & RETURN_ALL_ERR))\n\t\t\t\treturn {};\n\t\t\telse {\n\t\t\t\terr = new_schd_error();\n\t\t\t\tif (err == NULL)\n\t\t\t\t\treturn {};\n\t\t\t}\n\t\t}\n\t}\n\n\tif (!in_runnable_state(resresv)) {\n\t\tif (resresv->is_job) {\n\t\t\tset_schd_error_codes(err, NOT_RUN, NOT_QUEUED);\n\t\t\tadd_err(&prev_err, err);\n\n\t\t\tif (!(flags & RETURN_ALL_ERR))\n\t\t\t\treturn {};\n\n\t\t\terr = new_schd_error();\n\t\t\tif (err == NULL)\n\t\t\t\treturn {};\n\t\t}\n\n\t\t/* There are 3 [sub]states a reservation is in that can be confirmed\n\t\t * 1) state = RESV_UNCONFIRMED\n\t\t * 2) state = RESV_BEING_ALTERED\n\t\t * 3) substate = RESV_DEGRADED\n\t\t */\n\t\tif (resresv->is_resv && resresv->resv != NULL) {\n\t\t\tint rstate = resresv->resv->resv_state;\n\t\t\tint rsubstate = resresv->resv->resv_substate;\n\t\t\tif (rstate != RESV_UNCONFIRMED && rstate != RESV_BEING_ALTERED && rsubstate != RESV_DEGRADED) {\n\t\t\t\tset_schd_error_codes(err, NOT_RUN, NOT_QUEUED);\n\t\t\t\tadd_err(&prev_err, err);\n\n\t\t\t\tif (!(flags & RETURN_ALL_ERR))\n\t\t\t\t\treturn {};\n\n\t\t\t\terr = new_schd_error();\n\t\t\t\tif (err == NULL)\n\t\t\t\t\treturn {};\n\t\t\t}\n\t\t}\n\t}\n\n\t/* If the pset metadata is stale, update it now for the allpart */\n\tif (sinfo->pset_metadata_stale && !(flags & NO_ALLPART))\n\t\tupdate_all_nodepart(policy, sinfo, NO_FLAGS);\n\n\t/* quick check to see if there are enough consumable resources over all nodes\n\t * on the system to see if the resresv can possibly fit.\n\t * This check is bypassed for jobs in reservations.  
They have their own\n\t * universe of nodes\n\t */\n\tif (flags & NO_ALLPART)\n\t\tallpart = NULL;\n\telse if (resresv->is_job && resresv->job != NULL &&\n\t\t resresv->job->resv != NULL)\n\t\tallpart = NULL;\n\telse if (qinfo != NULL && qinfo->has_nodes)\n\t\tallpart = qinfo->allpart;\n\telse\n\t\tallpart = sinfo->allpart;\n\n\tif (allpart != NULL) {\n\t\tif (resresv_can_fit_nodepart(policy, allpart, resresv, flags, err) == 0) {\n\t\t\tschd_error *toterr;\n\t\t\ttoterr = new_schd_error();\n\t\t\tif (toterr == NULL) {\n\t\t\t\tif (err != perr)\n\t\t\t\t\tfree_schd_error(err);\n\t\t\t\treturn {};\n\t\t\t}\n\t\t\t/* We can't fit now, lets see if we can ever fit */\n\t\t\tif (resresv_can_fit_nodepart(policy, allpart, resresv, flags | COMPARE_TOTAL, toterr) == 0) {\n\t\t\t\tmove_schd_error(err, toterr);\n\t\t\t\terr->status_code = NEVER_RUN;\n\t\t\t}\n\n\t\t\tadd_err(&prev_err, err);\n\t\t\tif (!(flags & RETURN_ALL_ERR)) {\n\t\t\t\tfree_schd_error(toterr);\n\t\t\t\treturn {};\n\t\t\t}\n\t\t\t/* reuse toterr since we've already allocated it*/\n\t\t\terr = toterr;\n\t\t\tclear_schd_error(err);\n\t\t}\n\t}\n\n\t/* override these limits if we were issued a qrun request */\n\tif (sinfo->qrun_job == NULL) {\n#ifdef NAS_HWY149 /* localmod 033 */\n\t\tif (resresv->job == NULL || resresv->job->priority != NAS_HWY149)\n#endif\t\t  /* localmod 033 */\n#ifdef NAS_HWY101 /* localmod 032 */\n\t\t\tif (resresv->job == NULL || resresv->job->priority != NAS_HWY101)\n#endif /* localmod 032 */\n\t\t\t\tif (resresv->is_job) {\n\t\t\t\t\tif ((rc = static_cast<sched_error_code>(check_limits(sinfo, qinfo, resresv, err, flags | CHECK_LIMIT)))) {\n\n\t\t\t\t\t\tadd_err(&prev_err, err);\n\t\t\t\t\t\tif (rc == SCHD_ERROR)\n\t\t\t\t\t\t\treturn {};\n\t\t\t\t\t\tif (!(flags & RETURN_ALL_ERR))\n\t\t\t\t\t\t\treturn {};\n\t\t\t\t\t\terr = new_schd_error();\n\t\t\t\t\t\tif (err == NULL)\n\t\t\t\t\t\t\treturn {};\n\t\t\t\t\t}\n\t\t\t\t\t/* check for max_run_subjobs limits only when its not a qrun job 
*/\n\t\t\t\t\tif (resresv->job->is_array && (resresv->job->max_run_subjobs != UNSPECIFIED) &&\n\t\t\t\t\t    (resresv->job->running_subjobs >= resresv->job->max_run_subjobs)) {\n\t\t\t\t\t\tset_schd_error_codes(err, NOT_RUN, MAX_RUN_SUBJOBS);\n\t\t\t\t\t\tadd_err(&prev_err, err);\n\n\t\t\t\t\t\tif (!(flags & RETURN_ALL_ERR))\n\t\t\t\t\t\t\treturn {};\n\t\t\t\t\t\telse {\n\t\t\t\t\t\t\terr = new_schd_error();\n\t\t\t\t\t\t\tif (err == NULL)\n\t\t\t\t\t\t\t\treturn {};\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tif (check_prime_boundary(policy, resresv, err) != SE_NONE) {\n\t\t\t\t\t\t/* err is set inside check_prime_boundary() */\n\t\t\t\t\t\tadd_err(&prev_err, err);\n\t\t\t\t\t\tif (!(flags & RETURN_ALL_ERR))\n\t\t\t\t\t\t\treturn {};\n\n\t\t\t\t\t\terr = new_schd_error();\n\t\t\t\t\t\tif (err == NULL)\n\t\t\t\t\t\t\treturn {};\n\t\t\t\t\t}\n\n\t\t\t\t\tif ((rc = check_ded_time_queue(qinfo))) {\n\t\t\t\t\t\tset_schd_error_codes(err, NOT_RUN, rc);\n\t\t\t\t\t\tadd_err(&prev_err, err);\n\t\t\t\t\t\tif (!(flags & RETURN_ALL_ERR))\n\t\t\t\t\t\t\treturn {};\n\n\t\t\t\t\t\terr = new_schd_error();\n\t\t\t\t\t\tif (err == NULL)\n\t\t\t\t\t\t\treturn {};\n\t\t\t\t\t}\n\n\t\t\t\t\tif ((rc = check_prime_queue(policy, qinfo))) {\n\t\t\t\t\t\tset_schd_error_codes(err, NOT_RUN, rc);\n\t\t\t\t\t\tadd_err(&prev_err, err);\n\t\t\t\t\t\tif (!(flags & RETURN_ALL_ERR))\n\t\t\t\t\t\t\treturn {};\n\n\t\t\t\t\t\terr = new_schd_error();\n\t\t\t\t\t\tif (err == NULL)\n\t\t\t\t\t\t\treturn {};\n\t\t\t\t\t}\n\n\t\t\t\t\tif ((rc = check_nonprime_queue(policy, qinfo))) {\n\t\t\t\t\t\tenum schd_err_status scode;\n\t\t\t\t\t\tif (policy->prime_status_end == SCHD_INFINITY) /* only primetime and we're in a non-prime queue*/\n\t\t\t\t\t\t\tscode = NEVER_RUN;\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\tscode = NOT_RUN;\n\t\t\t\t\t\tset_schd_error_codes(err, scode, rc);\n\t\t\t\t\t\tadd_err(&prev_err, err);\n\t\t\t\t\t\tif (!(flags & RETURN_ALL_ERR))\n\t\t\t\t\t\t\treturn {};\n\n\t\t\t\t\t\terr = 
new_schd_error();\n\t\t\t\t\t\tif (err == NULL)\n\t\t\t\t\t\t\treturn {};\n\t\t\t\t\t}\n#ifdef NAS /* localmod 034 */\n\t\t\t\t\tif ((rc = site_check_cpu_share(sinfo, policy, resresv))) {\n\t\t\t\t\t\tset_schd_error_codes(err, NOT_RUN, rc);\n\t\t\t\t\t\tadd_err(&prev_err, err);\n\t\t\t\t\t\tif (!(flags & RETURN_ALL_ERR))\n\t\t\t\t\t\t\treturn {};\n\n\t\t\t\t\t\terr = new_schd_error();\n\t\t\t\t\t\tif (err == NULL)\n\t\t\t\t\t\t\treturn {};\n\t\t\t\t\t}\n#endif /* localmod 034 */\n\t\t\t\t}\n\t}\n\tif (resresv->is_job || (resresv->is_resv && !conf.resv_conf_ignore)) {\n\t\tif ((rc = check_ded_time_boundary(resresv))) {\n\t\t\tset_schd_error_codes(err, NOT_RUN, rc);\n\t\t\tadd_err(&prev_err, err);\n\t\t\tif (!(flags & RETURN_ALL_ERR))\n\t\t\t\treturn {};\n\t\t\terr = new_schd_error();\n\t\t\tif (err == NULL)\n\t\t\t\treturn {};\n\t\t}\n\t}\n\n\tif (exists_resv_event(sinfo->calendar, sinfo->server_time + resresv->hard_duration))\n\t\tendtime = sinfo->server_time + calc_time_left(resresv, 1);\n\telse\n\t\tendtime = sinfo->server_time + calc_time_left(resresv, 0);\n\n\tif (resresv->is_job) {\n\t\tif (qinfo->qres != NULL) {\n\t\t\tif (resresv->job->resv == NULL) {\n\t\t\t\tres = simulate_resmin(qinfo->qres, endtime, sinfo->calendar,\n\t\t\t\t\t\t      qinfo->jobs, resresv);\n\t\t\t} else\n#ifdef NAS /* localmod 036 */\n\t\t\t{\n\t\t\t\tif (resresv->job->resv->resv->is_standing) {\n\t\t\t\t\tresource_req *req = find_resource_req(resresv->resreq, allres[\"min_walltime\"]);\n\n\t\t\t\t\tif (req != NULL) {\n\t\t\t\t\t\tint resv_time_left = calc_time_left(resresv->job->resv, 0);\n\t\t\t\t\t\tif (req->amount > resv_time_left) {\n\t\t\t\t\t\t\tset_schd_error_codes(err, NOT_RUN, INSUFFICIENT_RESOURCE);\n\t\t\t\t\t\t\tadd_err(&prev_err, err);\n\t\t\t\t\t\t\tif (!(flags & RETURN_ALL_ERR))\n\t\t\t\t\t\t\t\treturn {};\n\n\t\t\t\t\t\t\terr = new_schd_error();\n\t\t\t\t\t\t\tif (err == NULL)\n\t\t\t\t\t\t\t\treturn {};\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n#endif /* localmod 036 
*/\n\t\t\t\tres = qinfo->qres;\n#ifdef NAS /* localmod 036 */\n\t\t\t}\n#endif /* localmod 036 */\n\t\t\t/* If job already has a list of resources released, use that list\n\t\t\t * check for available resources\n\t\t\t */\n\t\t\tif ((resresv->job != NULL) && (resresv->job->resreq_rel != NULL))\n\t\t\t\tresreq = resresv->job->resreq_rel;\n\t\t\telse\n\t\t\t\tresreq = resresv->resreq;\n\t\t\tif (check_avail_resources(res, resreq,\n\t\t\t\t\t\t  flags, policy->resdef_to_check, INSUFFICIENT_QUEUE_RESOURCE, err) == 0) {\n\t\t\t\tstruct schd_error *toterr;\n\t\t\t\ttoterr = new_schd_error();\n\t\t\t\tif (toterr == NULL) {\n\t\t\t\t\tif (err != perr)\n\t\t\t\t\t\tfree_schd_error(err);\n\t\t\t\t\treturn {};\n\t\t\t\t}\n\t\t\t\t/* We can't fit now, lets see if we can ever fit */\n\t\t\t\tif (check_avail_resources(res, resreq,\n\t\t\t\t\t\t\t  flags | COMPARE_TOTAL, policy->resdef_to_check, INSUFFICIENT_QUEUE_RESOURCE, toterr) == 0) {\n\t\t\t\t\tmove_schd_error(err, toterr);\n\t\t\t\t\terr->status_code = NEVER_RUN;\n\t\t\t\t}\n\n\t\t\t\tadd_err(&prev_err, err);\n\t\t\t\terr = toterr;\n\t\t\t\tclear_schd_error(err);\n\n\t\t\t\tif (!(flags & RETURN_ALL_ERR)) {\n\t\t\t\t\tif (err != perr)\n\t\t\t\t\t\tfree_schd_error(err);\n\t\t\t\t\treturn {};\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t/* Don't check the server resources if a job is in a reservation.  
This is\n\t * because the server resources_assigned will already reflect the entire\n\t * resource amount for the reservation\n\t */\n\tif (sinfo->res != NULL) {\n\t\tif (resresv->is_resv ||\n\t\t    (resresv->is_job && resresv->job != NULL && resresv->job->resv == NULL)) {\n\t\t\tres = simulate_resmin(sinfo->res, endtime, sinfo->calendar, NULL, resresv);\n\t\t\tif ((resresv->job != NULL) && (resresv->job->resreq_rel != NULL))\n\t\t\t\tresreq = resresv->job->resreq_rel;\n\t\t\telse\n\t\t\t\tresreq = resresv->resreq;\n\t\t\tif (check_avail_resources(res, resreq, flags,\n\t\t\t\t\t\t  policy->resdef_to_check, INSUFFICIENT_SERVER_RESOURCE, err) == 0) {\n\t\t\t\tstruct schd_error *toterr;\n\t\t\t\ttoterr = new_schd_error();\n\t\t\t\tif (toterr == NULL) {\n\t\t\t\t\tif (err != perr)\n\t\t\t\t\t\tfree_schd_error(err);\n\t\t\t\t\treturn {};\n\t\t\t\t}\n\t\t\t\t/* We can't fit now, lets see if we can ever fit */\n\t\t\t\tif (check_avail_resources(res, resreq,\n\t\t\t\t\t\t\t  flags | COMPARE_TOTAL, policy->resdef_to_check,\n\t\t\t\t\t\t\t  INSUFFICIENT_SERVER_RESOURCE, toterr) == 0) {\n\t\t\t\t\ttoterr->status_code = NEVER_RUN;\n\t\t\t\t\tmove_schd_error(err, toterr);\n\t\t\t\t}\n\n\t\t\t\tadd_err(&prev_err, err);\n\t\t\t\terr = toterr;\n\t\t\t\tclear_schd_error(err);\n\n\t\t\t\tif (!(flags & RETURN_ALL_ERR)) {\n\t\t\t\t\tif (err != perr)\n\t\t\t\t\t\tfree_schd_error(err);\n\t\t\t\t\treturn {};\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tauto ns_arr = check_nodes(policy, sinfo, qinfo, resresv, flags, err);\n\n\tif (err->error_code != SUCCESS)\n\t\tadd_err(&prev_err, err);\n\n\t/* If any more checks are added after check_nodes(),\n\t * the RETURN_ALL_ERR case must be added here */\n\n\t/* This is the case where we allocated a error structure for use, but\n\t * didn't end up using it.  
We have to check against perr, so we don't\n\t * free the caller's memory.\n\t */\n\telse if (err != perr)\n\t\tfree_schd_error(err);\n\n\treturn ns_arr;\n}\n\n/**\n * @brief find the resources associated with the resource_req's def\n * @param[in] reslist - schd_resource list to search in\n * @param[in] resreq - requested resource\n * @param[in] flags to modify behavior (@see check_avail_resources())\n * @return schd_resource\n * @retval found resource\n * @retval fres/zres/ustr if not found\n * @retval if indirect, point to the real resource\n * @retval NULL if resource is to be ignored\n */\nschd_resource *\nfind_check_resource(schd_resource *reslist, resource_req *resreq, unsigned int flags)\n{\n\tschd_resource *res;\n\tschd_resource *fres = false_res();\n\tschd_resource *zres = zero_res();\n\tschd_resource *ustr = unset_str_res();\n\n\tres = find_resource(reslist, resreq->def);\n\n\tif (res == NULL || res->orig_str_avail == NULL) {\n\t\t/* if resources_assigned.res is unset and resources is in\n\t\t * resource_unset_infinite, ignore the check and assume a match\n\t\t */\n\t\tif (conf.ignore_res.find(resreq->name) != conf.ignore_res.end())\n\t\t\treturn NULL;\n\t}\n\n\tif (res == NULL) {\n\t\tif (!(flags & UNSET_RES_ZERO))\n\t\t\treturn NULL;\n\n\t\tif (resreq->type.is_boolean)\n\t\t\tres = fres;\n\t\telse if (resreq->type.is_num)\n\t\t\tres = zres;\n\t\telse if (resreq->type.is_string)\n\t\t\tres = ustr;\n\t\telse /* ignore check: effect is resource is infinite */\n\t\t\treturn NULL;\n\n\t\tres->name = resreq->name;\n\t\tres->def = resreq->def;\n\t}\n\n\tif (res->indirect_res != NULL) {\n\t\tres = res->indirect_res;\n\t}\n\treturn res;\n}\n\n/**\n * @brief do resource matching between a resource_req and a schd_resource\n * @param[in] res - schd_resource to match\n * @param[in] resreq - resource_req to match\n * @param[in] flags to modify behavior (@see check_avail_resources())\n * @param[in] fail_code - fail code to use in schd_error if resources don't match\n 
* @param[out] err - if resources don't match, reason not matched\n * @return long long\n * @retval number of chunks matched if matched and consumable\n * @retval SCHD_INFINITY if matched and non-consumable\n * @retval 0 if resources failed to match\n */\n\nlong long\nmatch_resource(schd_resource *res, resource_req *resreq, unsigned int flags, enum sched_error_code fail_code, schd_error *err)\n{\n\tsch_resource_t avail; /* amount of available resource */\n\tlong long num_chunk = SCHD_INFINITY;\n\tlong long cur_chunk = 0;\n\n\tchar resbuf1[MAX_LOG_SIZE];\n\tchar resbuf2[MAX_LOG_SIZE];\n\tchar resbuf3[MAX_LOG_SIZE];\n\t/*\n\t * buf must be large enough to hold the three resbuf buffers plus a\n\t * small amount of text... (R: resbuf1 A: resbuf2 T: resbuf3)\n\t */\n\tchar buf[(MAX_LOG_SIZE * 3) + 16];\n\n\tif (res->type.is_non_consumable && !(flags & ONLY_COMP_CONS)) {\n\t\tif (!compare_non_consumable(res, resreq)) {\n\t\t\tnum_chunk = 0;\n\t\t\tif (err != NULL) {\n\t\t\t\tconst char *requested;\n\t\t\t\tset_schd_error_codes(err, NOT_RUN, fail_code);\n\t\t\t\terr->rdef = res->def;\n\t\t\t\trequested = res_to_str_r(resreq, RF_REQUEST, resbuf1, sizeof(resbuf1));\n\t\t\t\tsnprintf(buf, sizeof(buf), \"(%s != %s)\",\n\t\t\t\t\t requested,\n\t\t\t\t\t res_to_str_r(res, RF_AVAIL, resbuf2, sizeof(resbuf2)));\n\t\t\t\tset_schd_error_arg(err, ARG1, buf);\n\t\t\t\t/* Set arg2 for vnode/host resource. 
In case of preemption, arg2 is used to cull\n\t\t\t\t * the list of running jobs\n\t\t\t\t */\n\t\t\t\tif (res->def == allres[\"host\"] || (res->def == allres[\"vnode\"]))\n\t\t\t\t\tset_schd_error_arg(err, ARG2, requested);\n\t\t\t}\n\t\t}\n\t} else if (res->type.is_consumable && !(flags & ONLY_COMP_NONCONS)) {\n\t\tif (flags & COMPARE_TOTAL)\n\t\t\tavail = res->avail;\n\t\telse\n\t\t\tavail = dynamic_avail(res);\n\n\t\tif (avail == SCHD_INFINITY_RES && (flags & UNSET_RES_ZERO))\n\t\t\tavail = 0;\n\n\t\t/*\n\t\t * if there is an infinite amount available or we are requesting\n\t\t * 0 amount of the resource, we do not need to check if any is\n\t\t * available\n\t\t */\n\t\tif (avail != SCHD_INFINITY_RES && resreq->amount != 0) {\n\t\t\tif (avail < resreq->amount) {\n\t\t\t\tnum_chunk = 0;\n\t\t\t\tif (err != NULL) {\n\t\t\t\t\tset_schd_error_codes(err, NOT_RUN, fail_code);\n\t\t\t\t\terr->rdef = res->def;\n\n\t\t\t\t\tres_to_str_r(resreq, RF_REQUEST, resbuf1, sizeof(resbuf1));\n\t\t\t\t\tres_to_str_c(avail, res->def, RF_AVAIL, resbuf2, sizeof(resbuf2));\n\t\t\t\t\tif ((flags & UNSET_RES_ZERO) && res->avail == SCHD_INFINITY_RES)\n\t\t\t\t\t\tres_to_str_c(0, res->def, RF_AVAIL, resbuf3, sizeof(resbuf3));\n\t\t\t\t\telse\n\t\t\t\t\t\tres_to_str_r(res, RF_AVAIL, resbuf3, sizeof(resbuf3));\n\t\t\t\t\tsnprintf(buf, sizeof(buf), \"(R: %s A: %s T: %s)\", resbuf1, resbuf2, resbuf3);\n\t\t\t\t\tset_schd_error_arg(err, ARG1, buf);\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tcur_chunk = avail / resreq->amount;\n\t\t\t\tif (cur_chunk < num_chunk || num_chunk == SCHD_INFINITY)\n\t\t\t\t\tnum_chunk = cur_chunk;\n\t\t\t}\n\t\t}\n\t}\n\n\treturn num_chunk;\n}\n\n/**\n *\n * @brief\n * \t\tThis function will calculate the number of\n *\t\tmultiples of the requested resources in reqlist\n *\t\twhich can be satisfied by the resources\n *\t\tavailable in the reslist for the resources in checklist\n *\n * @param[in]\treslist\t-\tresources list\n * @param[in]\treqlist\t-\tthe list of resources 
requested\n * @param[in]\tflags\t-\tvalid flags:\n *\t\t\t\t\t\t\tCHECK_ALL_BOOLS - always check all boolean resources\n *\t\t\t\t\t\t\tUNSET_RES_ZERO - a resource which is unset defaults to 0\n *\t\t\t\t\t\t\tCOMPARE_TOTAL - do comparisons against resource total rather\n *\t\t\t\t\t\t\tthan what is currently available\n *\t\t\t\t\t\t\tONLY_COMP_NONCONS - only compare non-consumable resources\n *\t\t\t\t\t\t\tONLY_COMP_CONS - only compare consumable resources\n * @param[in]\tchecklist\t-\tset of resources to check\n * @param[in]\tfail_code\t-\terror code if resource request is rejected\n * @param[out]\tperr\t-\tif not NULL, the reason the request is not\n *\t\t\t\t\t\t\tsatisfiable (i.e. the resource there is not\n *\t\t\t\t\t\t\tenough of).  If perr is NULL, no error reason is\n *\t\t\t\t\t\t\treturned.\n *\n * @return\tlong long\n * @retval\tnumber of chunks which can be allocated\n * @retval\t-1\t: on error\n *\n */\nlong long\ncheck_avail_resources(schd_resource *reslist, resource_req *reqlist,\n\t\t      unsigned int flags, std::unordered_set<resdef *> &checklist,\n\t\t      enum sched_error_code fail_code, schd_error *perr)\n{\n\tlong long num_chunk = SCHD_INFINITY;\n\tlong long match_chunk = SCHD_INFINITY;\n\n\tint any_fail = 0;\n\tschd_error *prev_err = NULL;\n\tschd_error *err;\n\n\tif (reslist == NULL || reqlist == NULL) {\n\t\tif (perr != NULL)\n\t\t\tset_schd_error_codes(perr, NOT_RUN, SCHD_ERROR);\n\n\t\treturn -1;\n\t}\n\n\terr = perr;\n\n\tfor (resource_req *resreq = reqlist; resreq != NULL; resreq = resreq->next) {\n\t\tif (((flags & CHECK_ALL_BOOLS) && resreq->type.is_boolean) ||\n\t\t    (checklist.find(resreq->def) != checklist.end())) {\n\n\t\t\tschd_resource *res = find_check_resource(reslist, resreq, flags);\n\t\t\tif (res == NULL)\n\t\t\t\tcontinue;\n\n\t\t\tmatch_chunk = match_resource(res, resreq, flags, fail_code, err);\n\n\t\t\tif (num_chunk == SCHD_INFINITY)\n\t\t\t\tnum_chunk = match_chunk;\n\t\t\telse if (match_chunk != SCHD_INFINITY && 
match_chunk < num_chunk)\n\t\t\t\tnum_chunk = match_chunk;\n\n\t\t\tif (num_chunk == 0) {\n\t\t\t\tany_fail = 1;\n\t\t\t\tif (flags & RETURN_ALL_ERR) {\n\t\t\t\t\tif (err != NULL) {\n\t\t\t\t\t\terr->next = new_schd_error();\n\t\t\t\t\t\tif (err->next == NULL)\n\t\t\t\t\t\t\treturn 0;\n\t\t\t\t\t\tprev_err = err;\n\t\t\t\t\t\terr = err->next;\n\t\t\t\t\t}\n\t\t\t\t} else\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\n\tif (any_fail)\n\t\tnum_chunk = 0;\n\n\tif (prev_err != NULL && (flags & RETURN_ALL_ERR)) {\n\t\tif (prev_err != NULL) {\n\t\t\tfree_schd_error(err);\n\t\t\tprev_err->next = NULL;\n\t\t}\n\t}\n\n\treturn num_chunk;\n}\n\n/** @brief overloaded version of check_avail_resources() which matches all resources.  \n * @see other function for argument description\n*/\nlong long\ncheck_avail_resources(schd_resource *reslist, resource_req *reqlist,\n\t\t      unsigned int flags, enum sched_error_code fail_code, schd_error *perr)\n{\n\tlong long num_chunk = SCHD_INFINITY;\n\tlong long match_chunk = SCHD_INFINITY;\n\n\tint any_fail = 0;\n\tschd_error *prev_err = NULL;\n\tschd_error *err;\n\n\tif (reslist == NULL || reqlist == NULL) {\n\t\tif (perr != NULL)\n\t\t\tset_schd_error_codes(perr, NOT_RUN, SCHD_ERROR);\n\n\t\treturn -1;\n\t}\n\n\terr = perr;\n\n\tfor (resource_req *resreq = reqlist; resreq != NULL; resreq = resreq->next) {\n\t\tschd_resource *res = find_check_resource(reslist, resreq, flags);\n\t\tif (res == NULL)\n\t\t\tcontinue;\n\n\t\tmatch_chunk = match_resource(res, resreq, flags, fail_code, err);\n\n\t\tif (num_chunk == SCHD_INFINITY)\n\t\t\tnum_chunk = match_chunk;\n\t\telse if (match_chunk != SCHD_INFINITY && match_chunk < num_chunk)\n\t\t\tnum_chunk = match_chunk;\n\n\t\tif (num_chunk == 0) {\n\t\t\tany_fail = 1;\n\t\t\tif (flags & RETURN_ALL_ERR) {\n\t\t\t\tif (err != NULL) {\n\t\t\t\t\terr->next = new_schd_error();\n\t\t\t\t\tif (err->next == NULL)\n\t\t\t\t\t\treturn 0;\n\t\t\t\t\tprev_err = err;\n\t\t\t\t\terr = err->next;\n\t\t\t\t}\n\t\t\t} 
else\n\t\t\t\tbreak;\n\t\t}\n\t}\n\n\tif (any_fail)\n\t\tnum_chunk = 0;\n\n\tif (prev_err != NULL && (flags & RETURN_ALL_ERR)) {\n\t\tif (prev_err != NULL) {\n\t\t\tfree_schd_error(err);\n\t\t\tprev_err->next = NULL;\n\t\t}\n\t}\n\n\treturn num_chunk;\n}\n\n/**\n * @brief\n *\t\tdynamic_avail - find out how much of a resource is available on a\n *\t\t\tserver.  If the resources_available attribute is\n *\t\t\tset, use that, else use resources_max.\n *\n * @param[in]\tres\t-\tthe resource to check\n *\n * @return\tavailable amount of the resource\n *\n */\n\nsch_resource_t\ndynamic_avail(schd_resource *res)\n{\n\tif (res->avail == SCHD_INFINITY_RES)\n\t\treturn SCHD_INFINITY_RES;\n\telse if ((res->avail - res->assigned) <= 0)\n\t\treturn 0;\n\telse\n\t\treturn res->avail - res->assigned;\n}\n\n/**\n *\t@brief\n *\t\tfind a element of a counts structure by name.\n *\t\t  If res arg is NULL return 'running' element.\n *\t\t  otherwise return named resource\n *\n * @param[in]\tcts_list\t-\tcounts list to search\n * @param[in]\tname\t-\tname of counts structure to find\n * @param[in]\trdef\t-\tresource definition to find or if NULL,\n *\t\t\t\treturn number of running\n * @param[out]  cnt\t-\taddress of the counts structure found in the list\n * @param[out]  rcount\t-\taddress of matching resource count structure\n *\n * @return\tresource amount\n */\nsch_resource_t\nfind_counts_elm(counts_umap &cts_list, const std::string &name, resdef *rdef, counts **cnt, resource_count **rcount)\n{\n\tresource_count *res_lim;\n\tcounts *cts;\n\n\tif (name.empty())\n\t\treturn 0;\n\n\tif ((cts = find_counts(cts_list, name)) != NULL) {\n\t\tif (cnt != NULL)\n\t\t\t*cnt = cts;\n\t\tif (rdef == NULL)\n\t\t\treturn cts->running;\n\t\telse if ((res_lim = find_resource_count(cts->rescts, rdef)) != NULL) {\n\t\t\tif (rcount != NULL)\n\t\t\t\t*rcount = res_lim;\n\t\t\treturn res_lim->amount;\n\t\t}\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tcheck to see if a resource resv will cross\n 
*\t\t  into dedicated time\n *\n * @param[in]\tresresv\t-\tthe resource resv to check\n *\n * @retval\tSE_NONE\t: will not cross a ded time boundary\n * @retval\tCROSS_DED_TIME_BOUNDRY\t: will cross a ded time boundary\n */\nenum sched_error_code\ncheck_ded_time_boundary(resource_resv *resresv)\n{\n\tif (resresv == NULL)\n\t\treturn SE_NONE;\n\n\ttimegap ded_time = find_next_dedtime(resresv->server->server_time);\n\n\t/* we have no dedicated time */\n\tif (ded_time.from == 0 && ded_time.to == 0)\n\t\treturn SE_NONE;\n\n\tauto ded = is_ded_time(resresv->server->server_time);\n\n\tif (!ded) {\n\t\tif (dedtime_conflict(resresv)) /* has conflict or has no duration */\n\t\t\treturn CROSS_DED_TIME_BOUNDRY;\n\t} else {\n\t\tauto time_left = calc_time_left(resresv, 0);\n\t\tauto finish_time = resresv->server->server_time + time_left;\n\n\t\tif (finish_time > ded_time.to)\n\t\t\treturn CROSS_DED_TIME_BOUNDRY;\n\t}\n\treturn SE_NONE;\n}\n\n/**\n * @brief\n *\t\tdedtime_conflict - check for dedtime conflicts\n *\n * @param[in]\tresresv\t-\tresource resv to check for conflicts\n *\n * @return\tint\n * @retval\t1\t: the reservation conflicts\n * @retval\t0\t: the reservation doesn't conflict\n * @retval\t-1\t: error\n *\n */\nint\ndedtime_conflict(resource_resv *resresv)\n{\n\ttime_t start;\n\ttime_t end;\n\n\tif (resresv == NULL)\n\t\treturn -1;\n\n\tif (resresv->start == UNSPECIFIED && resresv->end == UNSPECIFIED) {\n\t\tauto duration = calc_time_left(resresv, 0);\n\n\t\tstart = resresv->server->server_time;\n\t\tend = start + duration;\n\t} else if (resresv->start == UNSPECIFIED || resresv->end == UNSPECIFIED)\n\t\treturn -1;\n\telse {\n\t\tstart = resresv->start;\n\t\tend = resresv->end;\n\t}\n\n\ttimegap ded_time = find_next_dedtime(start);\n\n\t/* no ded time */\n\tif (ded_time.from == 0 && ded_time.to == 0)\n\t\treturn 0;\n\n\t/* it is currently dedicated time */\n\tif (start > ded_time.from && start < ded_time.to)\n\t\treturn 1;\n\n\t/* currently not dedicated time, but 
job would not\n\t * complete before dedicated time would start\n\t */\n\tif (end > ded_time.from && end < ded_time.to)\n\t\treturn 1;\n\n\t/* Long job -- one which includes dedicated time.  In other words,\n\t *             it starts at or before dedicated time starts and\n\t *             it ends at or after dedicated time ends\n\t */\n\tif (start <= ded_time.from && end >= ded_time.to)\n\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief check to see if a resresv can run on nodes using either node search code path\n  * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tsinfo\t-\tserver associated with job/resv\n * @param[in]\tqinfo\t-\tqueue associated with job (NULL if resv)\n * @param[in]\tresresv\t-\tresource resv to check\n * @param[in]\tflags   -\tflags to change functions behavior\n *\t\t\t\t\tEVAL_OKBREAK - ok to break chunk up across vnodes\n *\t\t\t\t\tEVAL_EXCLSET - allocate entire nodelist exclusively\n *\t\t\t\t\tNO_ALLPART - don't update allpart when updating meta data\n *\t\t\t\t\tUSE_BUCKETS - use the bucket code path\n * @param[out]\terr\t-\terror structure on why job/resv can't run\n *\n * @return\tvector<nspec *>\n * @retval\tnode solution of where the job/resv will run\n * @retval\tNULL\t: if the job/resv can't run now\n\n */\nstd::vector<nspec *>\ncheck_nodes(status *policy, server_info *sinfo, queue_info *qinfo, resource_resv *resresv, unsigned int flags, schd_error *err)\n{\n\tstd::vector<nspec *> ns_arr;\n\n\tif (sinfo->pset_metadata_stale)\n\t\tupdate_all_nodepart(policy, sinfo, (flags & NO_ALLPART));\n\n\tif (flags & USE_BUCKETS)\n\t\tns_arr = check_node_buckets(policy, sinfo, qinfo, resresv, err);\n\telse\n\t\tns_arr = check_normal_node_path(policy, sinfo, qinfo, resresv, flags, err);\n\n\treturn ns_arr;\n}\n\n/**\n *\t@brief\n *\t\tcheck to see if there is sufficient nodes available to run a job/resv\n *\t\tusing the normal node search code path.\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tsinfo\t-\tserver associated with 
job/resv\n * @param[in]\tqinfo\t-\tqueue associated with job (NULL if resv)\n * @param[in]\tresresv\t-\tresource resv to check\n * @param[in]\tflags   -\tflags to change functions behavior\n *\t\t\t\t\tEVAL_OKBREAK - ok to break chunk up across vnodes\n *\t\t\t\t\tEVAL_EXCLSET - allocate entire nodelist exclusively\n * @param[out]\terr\t-\terror structure on why job/resv can't run\n *\n * @return\tvector<nspec *>\n * @retval\tnode solution of where the job/resv will run\n * @retval\tNULL\t: if the job/resv can't run now\n *\n */\nstd::vector<nspec *>\ncheck_normal_node_path(status *policy, server_info *sinfo, queue_info *qinfo, resource_resv *resresv, unsigned int flags, schd_error *err)\n{\n\tstd::vector<nspec *> nspec_arr;\n\tselspec *spec = NULL;\n\tplace *pl = NULL;\n\tint rc = 0;\n\tnp_cache *npc = NULL;\n\tint error = 0;\n\tnode_partition **nodepart = NULL;\n\tnode_info **ninfo_arr = NULL;\n\n\tif (!sc_attrs.do_not_span_psets)\n\t\tflags |= SPAN_PSETS;\n\n\tif (sinfo == NULL || resresv == NULL || err == NULL) {\n\t\tif (err != NULL)\n\t\t\tset_schd_error_codes(err, NOT_RUN, SCHD_ERROR);\n\t\treturn {};\n\t}\n\n\tif (resresv->is_job) {\n\t\tif (qinfo == NULL)\n\t\t\treturn {};\n\n\t\tif (resresv->job == NULL)\n\t\t\treturn {};\n\n\t\tif (resresv->job->resv != NULL && resresv->job->resv->resv == NULL)\n\t\t\treturn {};\n\t}\n\n\tget_resresv_spec(resresv, &spec, &pl);\n\n\t/* Sets of nodes:\n\t   * 1. job is in a reservation - use reservation nodes\n\t   * 2. job or reservation has nodes -- use them\n\t   * 3. queue job is in has nodes associated with it - use queue's nodes\n\t   * 4. 
catchall - either the job is being run on nodes not associated with\n\t   * any queue, or we're node grouping and the job can't fit into any\n\t   * node partition, therefore it falls in here\n\t   */\n\n\tif (resresv->is_job && resresv->job->resv != NULL) {\n\t\t/* if we're in a reservation, only check nodes assigned to the resv\n\t\t * and not worry about node grouping since the nodes for the reservation\n\t\t * are already in a group\n\t\t */\n\t\tninfo_arr = resresv->job->resv->resv->resv_nodes;\n\t\tnodepart = NULL;\n\t} else if (resresv->ninfo_arr != NULL) {\n\t\t/* if we have nodes, use them\n\t\t * don't care about node grouping because nodes are already assigned\n\t\t * to the job.  We won't need to search for them.\n\t\t */\n\t\tninfo_arr = resresv->ninfo_arr;\n\t\tnodepart = NULL;\n\t} else {\n\t\tif (resresv->is_job && qinfo->nodepart != NULL)\n\t\t\tnodepart = qinfo->nodepart;\n\t\telse if (sinfo->nodepart != NULL)\n\t\t\tnodepart = sinfo->nodepart;\n\t\telse\n\t\t\tnodepart = NULL;\n\n\t\tif (resresv->is_job) {\n\t\t\t/* if there are nodes assigned to the queue, then check those */\n\t\t\tif (qinfo->has_nodes)\n\t\t\t\tninfo_arr = qinfo->nodes;\n\t\t}\n\t}\n\n\tif (ninfo_arr == NULL)\n\t\tninfo_arr = sinfo->unassoc_nodes;\n\n\tif (resresv->node_set_str != NULL) {\n\t\t/* Note that jobs inside reservations have their node_set\n\t\t * created in query_reservations()\n\t\t */\n\t\tif (resresv->node_set == NULL) {\n\t\t\tresresv->node_set = create_node_array_from_str(\n\t\t\t\tqinfo->num_nodes > 0 ? 
qinfo->nodes : sinfo->unassoc_nodes,\n\t\t\t\tresresv->node_set_str);\n\t\t}\n\t\tninfo_arr = resresv->node_set;\n\t\tnodepart = NULL;\n\t}\n\n\t/* job's place=group=res replaces server or queue node grouping\n\t * We'll search the node partition cache for the job's pool of node partitions\n\t * If it doesn't exist, we'll create it and add it to the cache\n\t */\n\tif (resresv->place_spec->group != NULL) {\n\t\tstd::vector<std::string> grouparr{resresv->place_spec->group};\n\t\tnpc = find_alloc_np_cache(policy, sinfo->npc_arr, grouparr, ninfo_arr, cmp_placement_sets);\n\t\tif (npc != NULL)\n\t\t\tnodepart = npc->nodepart;\n\t\telse\n\t\t\terror = 1;\n\t}\n\n\tif (ninfo_arr == NULL || error)\n\t\treturn {};\n\n\tnspec_arr.reserve(spec->total_chunks);\n\n\trc = eval_selspec(policy, spec, pl, ninfo_arr, nodepart, resresv, flags, nspec_arr, err);\n\n\t/* We can run, yippie! */\n\tif (rc > 0)\n\t\treturn nspec_arr;\n\n\t/* We were not told why the resresv can't run: Use generic reason */\n\tif (err->status_code == SCHD_UNKWN)\n\t\tset_schd_error_codes(err, NOT_RUN, NO_NODE_RESOURCES);\n\n\tfree_nspecs(nspec_arr);\n\n\treturn {};\n}\n\n/**\n * @brief\n *\t\tcheck_ded_time_queue - check if it is the appropriate time to run jobs\n *\t\t\t\t\tin a dedtime queue\n *\n * @param[in]\tqinfo\t-\tthe queue\n *\n * @return\tint\n * @retval\tSE_NONE\t: if it is dedtime and qinfo is a dedtime queue or\n *\t     \t\t\tif it is not dedtime and qinfo is not a dedtime queue\n * @retval\tDED_TIME\t: if jobs can not run in queue because of dedtime restrictions\n * @retval\tSCHD_ERROR\t: An error has occurred.\n *\n */\nenum sched_error_code\ncheck_ded_time_queue(queue_info *qinfo)\n{\n\tenum sched_error_code rc = SE_NONE; /* return code */\n\n\tif (qinfo == NULL || qinfo->server == NULL)\n\t\treturn SCHD_ERROR;\n\n\tif (is_ded_time(qinfo->server->server_time)) {\n\t\tif (qinfo->is_ded_queue)\n\t\t\trc = SE_NONE;\n\t\telse\n\t\t\trc = DED_TIME;\n\t} else {\n\t\tif 
(qinfo->is_ded_queue)\n\t\t\trc = DED_TIME;\n\t\telse\n\t\t\trc = SE_NONE;\n\t}\n\treturn rc;\n}\n\n/**\n *\n *\t@brief\n *\t\tCheck primetime status of the queue.  If the queue\n *\t\t    is a primetime queue and it is primetime or if the\n *\t\t    queue is an anytime queue, jobs can run in it.\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tqinfo\t-\tthe queue to check\n *\n * @retval\tSE_NONE\t: if the queue is an anytime queue or if it is a primetime\n * \t\t\t\t\tqueue and it is currently primetime\n * @retval\tPRIME_ONLY\t: it's a primetime queue and it's not primetime\n * @retval\tSCHD_ERROR\terror\n *\n */\nenum sched_error_code\ncheck_prime_queue(status *policy, queue_info *qinfo)\n{\n\tif (policy == NULL || qinfo == NULL)\n\t\treturn SCHD_ERROR;\n\t/* if the queue is an anytime queue, allow jobs to run */\n\tif (!qinfo->is_prime_queue && !qinfo->is_nonprime_queue)\n\t\treturn SE_NONE;\n\n\tif (!policy->is_prime && qinfo->is_prime_queue)\n\t\treturn PRIME_ONLY;\n\n\treturn SE_NONE;\n}\n\n/**\n * @brief\n * \t\tCheck nonprime status of the queue.  
If the\n *\t\t\t       queue is a nonprime queue and it is nonprimetime\n *\t\t\t       or the queue is an anytime queue, jobs can run\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tqinfo\t-\tthe queue to check\n *\n * @return\tint\n * @retval\tSE_NONE\t: if the queue is an anytime queue or if it is nonprimetime\n * \t             \tand the queue is a nonprimetime queue\n * @retval\tNONPRIME_ONLY\t: it's a nonprime queue and it's primetime\n *\n */\nenum sched_error_code\ncheck_nonprime_queue(status *policy, queue_info *qinfo)\n{\n\t/* if the queue is an anytime queue, allow jobs to run */\n\tif (!qinfo->is_prime_queue && !qinfo->is_nonprime_queue)\n\t\treturn SE_NONE;\n\n\tif (policy->is_prime && qinfo->is_nonprime_queue)\n\t\treturn NONPRIME_ONLY;\n\n\treturn SE_NONE;\n}\n\n/**\n * @brief\n * \t\tcheck to see if the resource resv can run before\n *\t\tthe prime status changes (from primetime to nonprime etc)\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tresresv\t-\tthe resource_resv to check\n * @param[out]\terr     -\terror structure to return\n *\n * @retval\tCROSS_PRIME_BOUNDARY\t: if the resource resv crosses the prime boundary\n * @retval\tSE_NONE\t: if it doesn't\n * @retval\tSCHD_ERROR\t: on error\n *\n */\nenum sched_error_code\ncheck_prime_boundary(status *policy, resource_resv *resresv, struct schd_error *err)\n{\n\n\tif (resresv == NULL || policy == NULL) {\n\t\tset_schd_error_codes(err, NOT_RUN, SCHD_ERROR);\n\t\treturn SCHD_ERROR;\n\t}\n\n\t/*\n\t *   If the job is not in a prime or non-prime queue, we do not\n\t *   need to check the prime boundary.\n\t */\n\tif (resresv->is_job) {\n\t\tif (conf.prime_exempt_anytime_queues &&\n\t\t    !resresv->job->queue->is_nonprime_queue &&\n\t\t    !resresv->job->queue->is_prime_queue)\n\t\t\treturn SE_NONE;\n\t}\n\n\t/* prime status never ends */\n\tif (policy->prime_status_end == SCHD_INFINITY)\n\t\treturn SE_NONE;\n\n\tif (policy->backfill_prime) {\n\t\tauto time_left = calc_time_left(resresv, 
0);\n\n\t\t/*\n\t\t *   Job has no walltime requested.  Let's be conservative and assume the\n\t\t *   job will conflict with primetime.\n\t\t */\n\t\tif (time_left < 0) {\n\t\t\tset_schd_error_codes(err, NOT_RUN, CROSS_PRIME_BOUNDARY);\n\t\t\tset_schd_error_arg(err, ARG1, policy->is_prime ? NONPRIMESTR : PRIMESTR);\n\t\t\treturn CROSS_PRIME_BOUNDARY;\n\t\t}\n\n\t\tif (resresv->server->server_time + time_left >\n\t\t    policy->prime_status_end + policy->prime_spill) {\n\t\t\tset_schd_error_codes(err, NOT_RUN, CROSS_PRIME_BOUNDARY);\n\t\t\tset_schd_error_arg(err, ARG1, policy->is_prime ? NONPRIMESTR : PRIMESTR);\n\t\t\treturn CROSS_PRIME_BOUNDARY;\n\t\t}\n\t}\n\treturn SE_NONE;\n}\n\n/**\n * @brief\n * \t\treturn a boolean resource that is False\n *         It is up to the caller to set the name and def fields\n *\n * @return\tschd_resource * (set to False)\n *\n * @par MT-safe: No\n */\nschd_resource *\nfalse_res()\n{\n\tstatic schd_resource *res = NULL;\n\n\tif (res == NULL) {\n\t\tres = new_resource();\n\t\tif (res != NULL) {\n\t\t\tres->type.is_non_consumable = 1;\n\t\t\tres->type.is_boolean = 1;\n\t\t\tres->orig_str_avail = string_dup(ATR_FALSE);\n\t\t\tres->avail = 0;\n\t\t} else\n\t\t\treturn NULL;\n\t}\n\n\tres->def = NULL;\n\tres->name = NULL;\n\n\treturn res;\n}\n\n/**\n * @brief\n * \t\treturn a string resource that is \"unset\" (set to \"\")\n *         It is up to the caller to set the name and def fields\n *\n * @return\tschd_resource *\n * @retval\tNULL\t: fail\n *\n * @par MT-safe: No\n */\nschd_resource *\nunset_str_res()\n{\n\tstatic schd_resource *res = NULL;\n\n\tif (res == NULL) {\n\t\tres = new_resource();\n\t\tif (res == NULL)\n\t\t\treturn NULL;\n\t\tif ((res->str_avail = static_cast<char **>(malloc(sizeof(char *) * 2))) != NULL) {\n\t\t\tres->str_avail[0] = string_dup(\"\");\n\t\t\tres->str_avail[1] = NULL;\n\t\t\tif (res->str_avail[0] == NULL) {\n\t\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\t\tfree_resource(res);\n\t\t\t\treturn 
NULL;\n\t\t\t}\n\t\t\tres->type.is_non_consumable = 1;\n\t\t\tres->type.is_string = 1;\n\t\t\tres->orig_str_avail = string_dup(\"\");\n\t\t\tres->avail = 0;\n\t\t} else\n\t\t\treturn NULL;\n\t}\n\n\tres->name = NULL;\n\tres->def = NULL;\n\n\treturn res;\n}\n/**\n * @brief\n * \t\treturn a numeric resource that is 0\n *         It is up to the caller to set the name and def fields\n *\n * @return\tschd_resource *\n * @retval\tNULL\t: fail\n */\nschd_resource *\nzero_res()\n{\n\tstatic schd_resource *res = NULL;\n\n\tif (res == NULL) {\n\t\tres = new_resource();\n\t\tif (res != NULL) {\n\t\t\tres->type.is_consumable = 1;\n\t\t\tres->type.is_num = 1;\n\t\t\tres->orig_str_avail = string_dup(\"0\");\n\t\t\tres->avail = 0;\n\t\t} else\n\t\t\treturn NULL;\n\t}\n\n\tres->name = NULL;\n\tres->def = NULL;\n\n\treturn res;\n}\n\n/**\n * @brief get_resresv_spec - this function returns the correct value of select and\n *\t    place to be used for node searching.\n * @param[in]  *resresv resources reservation object\n * @param[out] **spec output select specification\n * @param[out] **pl  output placement specification\n *\n * @par MT-Safe: No\n * @return void\n */\nvoid\nget_resresv_spec(resource_resv *resresv, selspec **spec, place **pl)\n{\n\tstatic place place_spec;\n\tif (resresv->is_job && resresv->job != NULL) {\n\t\tif (resresv->execselect != NULL) {\n\t\t\t*spec = resresv->execselect;\n\t\t\tplace_spec = *resresv->place_spec;\n\n\t\t\t/* Placement was handled the first time.  Don't let it get in the way */\n\t\t\tplace_spec.scatter = place_spec.vscatter = place_spec.pack = 0;\n\t\t\tplace_spec.free = 1;\n\t\t\t*pl = &place_spec;\n\t\t} else {\n\t\t\t*pl = resresv->place_spec;\n\t\t\t*spec = resresv->select;\n\t\t}\n\t} else if (resresv->is_resv && resresv->resv != NULL) {\n\t\t/* The execselect should be used when the resv is running.  
We can't\n\t\t * trust the state/substate to be RESV_RUNNING when a reservation is both\n\t\t * RESV_DEGRADED and RESV_BEING_ALTERED and is running.\n\t\t */\n\t\tif (resresv->resv->is_running)\n\t\t\t*spec = resresv->execselect;\n\t\telse\n\t\t\t*spec = resresv->select;\n\t\tplace_spec = *resresv->place_spec;\n\t\t*pl = &place_spec;\n\t}\n}\n"
  },
  {
    "path": "src/scheduler/check.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _CHECK_H\n#define _CHECK_H\n\n#include <unordered_set>\n\n#include \"server_info.h\"\n#include \"queue_info.h\"\n#include \"job_info.h\"\n\n/*\n *\tis_ok_to_run_in_queue - check to see if jobs can be run in queue\n */\nenum sched_error_code is_ok_to_run_queue(status *policy, queue_info *qinfo);\n\n/*\n *\tis_ok_to_run - check to see if it ok to run a job on the server\n */\nstd::vector<nspec *>\nis_ok_to_run(status *policy, server_info *sinfo,\n\t     queue_info *qinfo, resource_resv *resresv, unsigned int flags, schd_error *perr);\n\n/**\n *\n *\tis_ok_to_run_STF - check to see if the STF job is OK to run.\n *\n */\nstd::vector<nspec *>\nis_ok_to_run_STF(status *policy, server_info *sinfo,\n\t\t queue_info *qinfo, resource_resv *njob, unsigned int flags, schd_error *err,\n\t\t std::vector<nspec *> (*shrink_heuristic)(status *policy, server_info *sinfo,\n\t\t\t\t\t\t\t  queue_info *qinfo, resource_resv *njob, unsigned int flags, schd_error *err));\n/*\n * shrink_job_algorithm - generic algorithm for shrinking a job\n */\nstd::vector<nspec *>\nshrink_job_algorithm(status *policy, server_info *sinfo,\n\t\t     queue_info *qinfo, resource_resv *njob, unsigned int flags, schd_error *err); /* Generic shrinking heuristic */\n\n/*\n * shrink_to_boundary - Shrink job to dedicated/prime time boundary\n */\nstd::vector<nspec 
*>\nshrink_to_boundary(status *policy, server_info *sinfo,\n\t\t   queue_info *qinfo, resource_resv *njob, unsigned int flags, schd_error *err);\n\n/*\n * shrink_to_minwt - Shrink job to its minimum walltime\n */\nstd::vector<nspec *>\nshrink_to_minwt(status *policy, server_info *sinfo,\n\t\tqueue_info *qinfo, resource_resv *njob, unsigned int flags, schd_error *err);\n\n/*\n * shrink_to_run_event - Shrink job before reservation or top job.\n */\nstd::vector<nspec *>\nshrink_to_run_event(status *policy, server_info *sinfo,\n\t\t    queue_info *qinfo, resource_resv *njob, unsigned int flags, schd_error *err);\n\n/*\n *      check_avail_resources - This function will calculate the number of\n *\t\t\t\tmultiples of the requested resources in reqlist\n *\t\t\t\twhich can be satisfied by the resources\n *\t\t\t\tavailable in the reslist for the resources in\n *\t\t\t\tchecklist\n *\n *\n *      returns number of chunks which can be allocated or -1 on error\n *\n */\nlong long\ncheck_avail_resources(schd_resource *reslist, resource_req *reqlist,\n\t\t      unsigned int flags, std::unordered_set<resdef *> &checklist,\n\t\t      enum sched_error_code fail_code, schd_error *perr);\nlong long\ncheck_avail_resources(schd_resource *reslist, resource_req *reqlist,\n\t\t      unsigned int flags, enum sched_error_code fail_code, schd_error *perr);\n\n/*\n *\tdynamic_avail - find out how much of a resource is available on a\n */\nsch_resource_t dynamic_avail(schd_resource *res);\n\n/*\n *\tfind_counts_elm - find an element of a counts structure by name.\n *\t\t\t  If rdef arg is NULL return number of running jobs\n *\t\t\t  otherwise return named resource\n *\n *\tcts_list - counts map to search\n *\tname     - name of counts structure to find\n *\trdef     - resource to find; if NULL, return number of running jobs\n *\trcount\t- output param for address of the matching counts structure\n *\trreq\t- output param for address of the matching resource_count 
structure\n */\nsch_resource_t find_counts_elm(counts_umap &cts_list, const std::string &name, resdef *rdef, counts **rcount, resource_count **rreq);\n\n/*\n *      check_nodes - check to see if there are sufficient nodes available to\n *                    run a job/resv.\n */\nstd::vector<nspec *> check_nodes(status *policy, server_info *sinfo, queue_info *qinfo, resource_resv *resresv, unsigned int flags, schd_error *err);\n\n/* Normal node searching algorithm */\nstd::vector<nspec *>\ncheck_normal_node_path(status *policy, server_info *sinfo, queue_info *qinfo, resource_resv *resresv, unsigned int flags, schd_error *err);\n\n/*\n *      is_node_available - determine whether there is a node available to run\n *                          the job\n */\nint is_node_available(resource_resv *job, node_info **ninfo_arr);\n\n/*\n *      check_ded_time_queue - check if it is the appropriate time to run jobs\n *                             in a dedtime queue\n */\nenum sched_error_code check_ded_time_queue(queue_info *qinfo);\n\n/*\n *      dedtime_conflict - check for dedtime conflicts\n */\nint dedtime_conflict(resource_resv *resresv);\n\n/*\n *      check_ded_time_boundary  - check to see if a job would cross into\n *                                 dedicated time\n */\nenum sched_error_code check_ded_time_boundary(resource_resv *resresv);\n\n/*\n *      check_prime_queue - Check primetime status of the queue.  If the queue\n *                          is a primetime queue and it is primetime or if the\n *                          queue is an anytime queue, jobs can run in it.\n */\nenum sched_error_code check_prime_queue(status *policy, queue_info *qinfo);\n\n/*\n *      check_nonprime_queue - Check nonprime status of the queue.  
If the\n *                             queue is a nonprime queue and it is nonprimetime\n *                             or the queue is an anytime queue, jobs can run\n */\nenum sched_error_code check_nonprime_queue(status *policy, queue_info *qinfo);\n\n/*\n *      check_prime_boundary - check to see if the job can run before the prime\n *                            status changes (from primetime to nonprime etc)\n */\nenum sched_error_code check_prime_boundary(status *policy, resource_resv *resresv, struct schd_error *err);\n\n/*\n *      check_node_resources - check to see if resources are available on\n *                             timesharing nodes for a job to run\n */\nint check_node_resources(resource_resv *resresv, node_info **ninfo_arr);\n\n/*\n *\tfalse_res - return a static struct of resource which is a boolean\n *\t\t    set to false\n */\nschd_resource *false_res(void);\n\n/*\n *\n *\tzero_res -  return a static struct of resource which is numeric and\n *\t\t    consumable set to 0\n *\treturns zero resource ptr\n *\n */\nschd_resource *zero_res(void);\n\n/*\n *\tunset_str_res - return a static struct of resource which is a string\n *\t\t    set to \"\"\n *\treturns unset string resource ptr\n *\n */\nschd_resource *unset_str_res(void);\n\n/*\n *\tget_resresv_spec - gets the correct select and placement specification\n *\n *\treturns void\n */\nvoid get_resresv_spec(resource_resv *resresv, selspec **spec, place **pl);\n#endif /* _CHECK_H */\n"
  },
  {
    "path": "src/scheduler/config.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _CONFIG_H\n#define _CONFIG_H\n\n#include \"constant.h\"\n\n/* the level of schd_priority that a suspended job has */\n#define SUSPEND_PRIO 500000\n\n/* resources can get too large for a 32 bit number, so the ability to use the\n * nonstandard type long long is necessary.\n */\n#define RESOURCE_TYPE double\n\n/* name of config file */\n#define CONFIG_FILE \"sched_config\"\n#define USAGE_FILE \"usage\"\n#define USAGE_TOUCH USAGE_FILE \".touch\"\n#define HOLIDAYS_FILE \"holidays\"\n#define RESGROUP_FILE \"resource_group\"\n#define DEDTIME_FILE \"dedicated_time\"\n\n/* usage file \"magic number\" - needs to be 8 chars */\n#define USAGE_MAGIC \"PBS_MAG!\"\n#define USAGE_VERSION 2\n#define USAGE_NAME_MAX 50\n\n#define UNKNOWN_GROUP_NAME \"unknown\"\n\n/* preempt priority values */\n#define PREEMPT_PRIORITY_HIGH 100000\n#define PREEMPT_PRIORITY_STEP 1000\n\n#define PREEMPT_ORDER_MAX 20\n\n/* name of root node in the fairshare tree */\n#define FAIRSHARE_ROOT_NAME \"TREEROOT\"\n\n/* Estimate on how long it will take exiting jobs to end */\n#define EXITING_TIME 1\n\n/* Estimate of time that a job past its walltime will be running */\n#define FINISH_TIME 120\n\n/* maximum number of sort keys */\n#define MAX_SORTS 21\n\n/* maximum number of scheduling cycle restarts in event of job-run failure */\n#define MAX_RESTART_CYCLECNT 5\n\n/* estimate 
of how long a node will take to provision - used in simulation */\n#define PROVISION_DURATION 600\n\n/* Maximum number of events (reservations or top jobs)\n * around which shrinking of an STF job would be attempted.\n */\n#define SHRINK_MAX_RETRY 5\n\n/* parsing -\n * names that appear on the left hand side in the sched config file\n */\n#define PARSE_ROUND_ROBIN \"round_robin\"\n#define PARSE_BY_QUEUE \"by_queue\"\n#define PARSE_FAIR_SHARE \"fair_share\"\n#define PARSE_HALF_LIFE \"half_life\"\n#define PARSE_UNKNOWN_SHARES \"unknown_shares\"\n#define PARSE_LOG_FILTER \"log_filter\"\n#define PARSE_DEDICATED_PREFIX \"dedicated_prefix\"\n#define PARSE_HELP_STARVING_JOBS \"help_starving_jobs\"\n#define PARSE_MAX_STARVE \"max_starve\"\n#define PARSE_SORT_QUEUES \"sort_queues\"\n#define PARSE_BACKFILL \"backfill\"\n#define PARSE_PRIMETIME_PREFIX \"primetime_prefix\"\n#define PARSE_NONPRIMETIME_PREFIX \"nonprimetime_prefix\"\n#define PARSE_BACKFILL_PRIME \"backfill_prime\"\n#define PARSE_PRIME_EXEMPT_ANYTIME_QUEUES \"prime_exempt_anytime_queues\"\n#define PARSE_PRIME_SPILL \"prime_spill\"\n#define PARSE_RESOURCES \"resources\"\n#define PARSE_MOM_RESOURCES \"mom_resources\"\n#define PARSE_SMP_CLUSTER_DIST \"smp_cluster_dist\"\n#define PARSE_PREEMPT_QUEUE_PRIO \"preempt_queue_prio\"\n#define PARSE_PREEMPT_SUSPEND \"preempt_suspend\"\n#define PARSE_PREEMPT_CHKPT \"preempt_checkpoint\"\n#define PARSE_PREEMPT_REQUEUE \"preempt_requeue\"\n#define PARSE_PREEMPIVE_SCHED \"preemptive_sched\"\n#define PARSE_FAIRSHARE_RES \"fairshare_usage_res\"\n#define PARSE_FAIRSHARE_ENT \"fairshare_entity\"\n#define PARSE_FAIRSHARE_DECAY_FACTOR \"fairshare_decay_factor\"\n#define PARSE_FAIRSHARE_DECAY_TIME \"fairshare_decay_time\"\n#define PARSE_SUSP_THRESHOLD \"susp_threshold\"\n#define PARSE_PREEMPT_PRIO \"preempt_prio\"\n#define PARSE_PREEMPT_ORDER \"preempt_order\"\n#define PARSE_PREEMPT_SORT \"preempt_sort\"\n#define PARSE_JOB_SORT_KEY \"job_sort_key\"\n#define PARSE_NODE_SORT_KEY 
\"node_sort_key\"\n#define PARSE_SORT_NODES \"sort_nodes\"\n#define PARSE_SERVER_DYN_RES \"server_dyn_res\"\n#define PARSE_PEER_QUEUE \"peer_queue\"\n#define PARSE_PEER_TRANSLATION \"peer_translation\"\n#define PARSE_NODE_GROUP_KEY \"node_group_key\"\n#define PARSE_ENFORCE_NO_SHARES \"fairshare_enforce_no_shares\"\n#define PARSE_STRICT_ORDERING \"strict_ordering\"\n#define PARSE_RES_UNSET_INFINITE \"resource_unset_infinite\"\n#define PARSE_SELECT_PROVISION \"provision_policy\"\n\n#ifdef NAS\n/* localmod 034 */\n#define PARSE_MAX_BORROW \"max_borrow\"\n#define PARSE_SHARES_TRACK_ONLY \"shares_track_only\"\n#define PARSE_PER_SHARE_DEPTH \"per_share_depth\" /* old name */\n#define PARSE_PER_SHARE_TOPJOBS \"per_share_topjobs\"\n\n/* localmod 038 */\n#define PARSE_PER_QUEUES_TOPJOBS \"per_queues_topjobs\"\n\n/* localmod 030 */\n#define PARSE_MIN_INTERRUPTED_CYCLE_LENGTH \"min_interrupted_cycle_length\"\n#define PARSE_MAX_CONS_INTERRUPTED_CYCLES \"max_cons_interrupted_cycles\"\n#endif\n\n/* undocumented */\n#define PARSE_MAX_JOB_CHECK \"max_job_check\"\n#define PARSE_PREEMPT_ATTEMPTS \"preempt_attempts\"\n#define PARSE_UPDATE_COMMENTS \"update_comments\"\n#define PARSE_RESV_CONFIRM_IGNORE \"resv_confirm_ignore\"\n#define PARSE_ALLOW_AOE_CALENDAR \"allow_aoe_calendar\"\n\n/* deprecated */\n#define PARSE_STRICT_FIFO \"strict_fifo\"\n\n/* max sizes */\n#define MAX_HOLIDAY_SIZE 50\n#define MAX_DEDTIME_SIZE 50\n#define MAX_SERVER_DYN_RES 201 /* 200 elements + 1 sentinel */\n#define MAX_LOG_SIZE 1024\n#define MAX_RES_NAME_SIZE 256\n#define MAX_RES_RET_SIZE 256\n#define NUM_PPRIO 20\n#define NUM_PEERS 50\n#define MAX_DEF_REPLY 5\n#define MAX_PTIME_SIZE 64\n\n/* resource names for sorting special cases */\n#define SORT_FAIR_SHARE \"fair_share_perc\"\n#define SORT_PREEMPT \"preempt_priority\"\n#define SORT_PRIORITY \"sort_priority\"\n#define SORT_JOB_PRIORITY \"job_priority\"\n#define SORT_USED_TIME \"last_used_time\"\n\n#ifdef NAS\n/* localmod 039 */\n#define SORT_QPRI 
\"qpri\"\n\n/* localmod 034 */\n#define SORT_ALLOC \"cpu_alloc\"\n\n/* localmod 040 */\n#define SORT_NODECT \"nodect\"\n#endif\n\n/* max num of retries for preemption */\n#define MAX_PREEMPT_RETRIES 5\n\n/* provisioning policy */\n#define PROVPOLICY_AVOID \"avoid_provision\"\n#define PROVPOLICY_AGGRESSIVE \"aggressive_provision\"\n\n/* Job Comment Prefixes */\n#define JOB_COMMENT_NOT_RUN_NOW \"Not Running\"\n#define JOB_COMMENT_NEVER_RUN \"Can Never Run\"\n\n#define BF_OFF 0\n#define BF_LOW 60\n#define BF_MED 600\n#define BF_HIGH 3600\n#define BF_DEFAULT BF_LOW\n\n#define SCH_CYCLE_LEN_DFLT 1200\n\n#ifdef NAS /* attributes we may define in the server's resourcedef file */\n/* localmod 040 */\n#define ATTR_ignore_nodect_sort \"ignore_nodect_sort\"\n\n/* localmod 038 */\n#define ATTR_topjob_setaside \"topjob_set_aside\"\n#endif\n\n#endif /* _CONFIG_H */\n"
  },
  {
    "path": "src/scheduler/constant.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _CONSTANT_H\n#define _CONSTANT_H\n\n#include <math.h>\n\n/* macro to turn a value from enum preempt into it's bit for the bitfield */\n#define PREEMPT_TO_BIT(X) (1 << (X))\n\n/* bitcount macro for up to 16 bits */\n#define BX16_(x) ((x) - (((x) >> 1) & 0x7777) - (((x) >> 2) & 0x3333) - (((x) >> 3) & 0x1111))\n#define BITCOUNT16(x) (((BX16_(x) + (BX16_(x) >> 4)) & 0x0F0F) % 255)\n\n/* max between 0 or a number: basically don't let a number drop below 0 */\n#define IF_NEG_THEN_ZERO(a) (((a) >= (0)) ? 
(a) : (0))\n\n/* multipliers [bw] means either byte or word */\n#define KILO 1024L\t       /* number of [bw] in a kilo[bw] */\n#define MEGATOKILO 1024L       /* number of kilo[bw] in a mega[bw] */\n#define GIGATOKILO 1048576L    /* number of kilo[bw] in a giga[bw] */\n#define TERATOKILO 1073741824L /* number of kilo[bw] in a tera[bw] */\n\n/* extra constants */\n#define FREE_DEEP 1 /* constant to pass to free_*_list */\n#define INITIALIZE -1\n\n/* Constants used as flags to pass to next_job() function\n * The decision to sort jobs is made on the basis of these constants */\nenum sort_status {\n\tDONT_SORT_JOBS,\t  /* If there is no need to sort in next_job() */\n\tMAY_RESORT_JOBS,  /* used to resort all jobs whenever needed */\n\tMUST_RESORT_JOBS, /* used to resort all jobs mandatorily */\n\tSORTED\t\t  /* job list is already sorted */\n};\n\n/* enum used to find out what to skip while searching for the next job to schedule.  Values are bits in a bitfield */\n\nenum skip {\n\tSKIP_NOTHING,\n\t/* Value used to know whether reservations are already scheduled or not */\n\tSKIP_RESERVATIONS = 1,\n\t/* Value used to know whether express, preempted, are already scheduled or not */\n\tSKIP_NON_NORMAL_JOBS = 2\n};\n\n/* return value of select_index_to_preempt function */\nenum select_job_status {\n\tNO_JOB_FOUND = -1, /* fails to find a job to preempt */\n\tERR_IN_SELECT = -2 /* error while selecting a job to preempt */\n};\n\n#define INIT_ARR_SIZE 2048\n\n/* We need two sets of UNSPECIFIED/SCHD_INFINITY constants.  One for resources\n * which can be negative, and one for positive integer values.  
While we could\n * use some numbers near -LONG_MAX, that would mean every integer used in the\n * scheduler would need to be a long, when an int or smaller type is fine.\n */\n\n/* Unspecified resource value */\n#define UNSPECIFIED_RES -HUGE_VAL\n#define UNSPECIFIED_STR \"UNSPECIFIED\"\n/* infinity value for resources */\n#define SCHD_INFINITY_RES HUGE_VAL\n#define SCHD_INFINITY_STR \"SCHD_INFINITY\"\n\n#define UNSPECIFIED -1\n#define SCHD_INFINITY -2\n\n/* infinity walltime value for forever job. This is 5 years(=60 * 60 * 24 * 365 * 5 seconds) */\n#define JOB_INFINITY (60 * 60 * 24 * 365 * 5)\n\n/* for filter functions */\n#define FILTER_FULL 1 /* leave new array the full size */\n\n/* for update_jobs_cant run */\n#define START_BEFORE_JOB -1\n#define START_WITH_JOB 0\n#define START_AFTER_JOB 1\n\n/* Error message when we fail to allocate memory */\n#define MEM_ERR_MSG \"Unable to allocate memory (malloc error)\"\n\n/* accrue types for update_accruetype */\n#define ACCRUE_INIT \"0\"\n#define ACCRUE_INEL \"1\"\n#define ACCRUE_ELIG \"2\"\n#define ACCRUE_RUNN \"3\"\n#define ACCRUE_EXIT \"4\"\n\n/* operational modes for update_accruetype */\nenum update_accruetype_mode {\n\tACCRUE_CHECK_ERR = 0,\n\tACCRUE_MAKE_INELIGIBLE,\n\tACCRUE_MAKE_ELIGIBLE\n};\n\n/* Default values for datatype resource */\n#define RES_DEFAULT_AVAIL SCHD_INFINITY_RES\n#define RES_DEFAULT_ASSN 0\n\n#define PREEMPT_QUEUE_SERVER_SOFTLIMIT (1 << (PREEMPT_OVER_QUEUE_LIMIT) | 1 << (PREEMPT_OVER_SERVER_LIMIT))\n\n/* strings for prime and non-prime */\n#define PRIMESTR \"primetime\"\n#define NONPRIMESTR \"non-primetime\"\n\n/* dedtime_change */\n#define DEDTIME_START \"DEDTIME_START\"\n#define DEDTIME_END \"DEDTIME_END\"\n\n/* comment prefixes */\n#define NOT_RUN_PREFIX \"Not Running\"\n#define NEVER_RUN_PREFIX \"Can Never Run\"\n\n/* Time in seconds for 5 years */\n#define FIVE_YRS 157680000\n\n#define PREEMPT_NONE 1\n\n/* resource comparison flag values */\nenum resval_cmpflag 
{\n\tCMP_CASE,\n\tCMP_CASELESS\n};\n\nenum thread_task_type {\n\tTS_IS_ND_ELIGIBLE,\n\tTS_DUP_ND_INFO,\n\tTS_QUERY_ND_INFO,\n\tTS_FREE_ND_INFO,\n\tTS_DUP_RESRESV,\n\tTS_QUERY_JOB_INFO,\n\tTS_FREE_RESRESV\n};\n\n/* return codes for is_ok_to_run_* functions\n * codes less than RET_BASE are standard PBSE error codes\n * NOTE: RET_BASE MUST be greater than the highest PBSE error code\n */\nenum sched_error_code {\n\tSE_NONE = 0,\n\tRET_BASE = 16300,\n\tSUCCESS = RET_BASE + 1,\n\tSCHD_ERROR = RET_BASE + 2,\n\tNOT_QUEUED = RET_BASE + 3,\n\tQUEUE_NOT_STARTED = RET_BASE + 4,\n\tQUEUE_NOT_EXEC = RET_BASE + 5,\n\tQUEUE_JOB_LIMIT_REACHED = RET_BASE + 6,\n\tSERVER_JOB_LIMIT_REACHED = RET_BASE + 7,\n\tSERVER_USER_LIMIT_REACHED = RET_BASE + 8,\n\tQUEUE_USER_LIMIT_REACHED = RET_BASE + 9,\n\tSERVER_GROUP_LIMIT_REACHED = RET_BASE + 10,\n\tQUEUE_GROUP_LIMIT_REACHED = RET_BASE + 11,\n\tDED_TIME = RET_BASE + 12,\n\tCROSS_DED_TIME_BOUNDRY = RET_BASE + 13,\n\tNO_AVAILABLE_NODE = RET_BASE + 14, /* unused */\n\tNOT_ENOUGH_NODES_AVAIL = RET_BASE + 15,\n\tBACKFILL_CONFLICT = RET_BASE + 16,\n\tRESERVATION_INTERFERENCE = RET_BASE + 17,\n\tPRIME_ONLY = RET_BASE + 18,\n\tNONPRIME_ONLY = RET_BASE + 19,\n\tCROSS_PRIME_BOUNDARY = RET_BASE + 20,\n\tNODE_NONEXISTENT = RET_BASE + 21,\n\tNO_NODE_RESOURCES = RET_BASE + 22,\n\tCANT_PREEMPT_ENOUGH_WORK = RET_BASE + 23,\n\tQUEUE_USER_RES_LIMIT_REACHED = RET_BASE + 24,\n\tSERVER_USER_RES_LIMIT_REACHED = RET_BASE + 25,\n\tQUEUE_GROUP_RES_LIMIT_REACHED = RET_BASE + 26,\n\tSERVER_GROUP_RES_LIMIT_REACHED = RET_BASE + 27,\n\tNO_FAIRSHARES = RET_BASE + 28,\n\tINVALID_NODE_STATE = RET_BASE + 29,\n\tINVALID_NODE_TYPE = RET_BASE + 30,\n\tNODE_NOT_EXCL = RET_BASE + 31,\n\tNODE_JOB_LIMIT_REACHED = RET_BASE + 32,\n\tNODE_USER_LIMIT_REACHED = RET_BASE + 33,\n\tNODE_GROUP_LIMIT_REACHED = RET_BASE + 34,\n\tNODE_NO_MULT_JOBS = RET_BASE + 35,\n\tNODE_UNLICENSED = RET_BASE + 36,\n\tNOT_USED37 = RET_BASE + 37, /* unused */\n\tNO_SMALL_CPUSETS = RET_BASE + 
38,\n\tINSUFFICIENT_RESOURCE = RET_BASE + 39,\n\tRESERVATION_CONFLICT = RET_BASE + 40,\n\tNODE_PLACE_PACK = RET_BASE + 41,\n\tNODE_RESV_ENABLE = RET_BASE + 42,\n\tSTRICT_ORDERING = RET_BASE + 43,\n\tMAKE_ELIGIBLE = RET_BASE + 44,\t /* unused */\n\tMAKE_INELIGIBLE = RET_BASE + 45, /* unused */\n\tINSUFFICIENT_QUEUE_RESOURCE = RET_BASE + 46,\n\tINSUFFICIENT_SERVER_RESOURCE = RET_BASE + 47,\n\tQUEUE_BYGROUP_JOB_LIMIT_REACHED = RET_BASE + 48,\n\tQUEUE_BYUSER_JOB_LIMIT_REACHED = RET_BASE + 49,\n\tSERVER_BYGROUP_JOB_LIMIT_REACHED = RET_BASE + 50,\n\tSERVER_BYUSER_JOB_LIMIT_REACHED = RET_BASE + 51,\n\tSERVER_BYGROUP_RES_LIMIT_REACHED = RET_BASE + 52,\n\tSERVER_BYUSER_RES_LIMIT_REACHED = RET_BASE + 53,\n\tQUEUE_BYGROUP_RES_LIMIT_REACHED = RET_BASE + 54,\n\tQUEUE_BYUSER_RES_LIMIT_REACHED = RET_BASE + 55,\n\tQUEUE_RESOURCE_LIMIT_REACHED = RET_BASE + 56,\n\tSERVER_RESOURCE_LIMIT_REACHED = RET_BASE + 57,\n\tPROV_DISABLE_ON_SERVER = RET_BASE + 58,\n\tPROV_DISABLE_ON_NODE = RET_BASE + 59,\n\tAOE_NOT_AVALBL = RET_BASE + 60,\n\tEOE_NOT_AVALBL = RET_BASE + 61,\n\tPROV_BACKFILL_CONFLICT = RET_BASE + 62, /* unused */\n\tIS_MULTI_VNODE = RET_BASE + 63,\n\tPROV_RESRESV_CONFLICT = RET_BASE + 64,\n\tRUN_FAILURE = RET_BASE + 65,\n\tSET_TOO_SMALL = RET_BASE + 66,\n\tCANT_SPAN_PSET = RET_BASE + 67,\n\tNO_FREE_NODES = RET_BASE + 68,\n\tSERVER_PROJECT_LIMIT_REACHED = RET_BASE + 69,\n\tSERVER_PROJECT_RES_LIMIT_REACHED = RET_BASE + 70,\n\tSERVER_BYPROJECT_RES_LIMIT_REACHED = RET_BASE + 71,\n\tSERVER_BYPROJECT_JOB_LIMIT_REACHED = RET_BASE + 72,\n\tQUEUE_PROJECT_LIMIT_REACHED = RET_BASE + 73,\n\tQUEUE_PROJECT_RES_LIMIT_REACHED = RET_BASE + 74,\n\tQUEUE_BYPROJECT_RES_LIMIT_REACHED = RET_BASE + 75,\n\tQUEUE_BYPROJECT_JOB_LIMIT_REACHED = RET_BASE + 76,\n\tNO_TOTAL_NODES = RET_BASE + 77,\n\tINVALID_RESRESV = RET_BASE + 78,\n\tJOB_UNDER_THRESHOLD = RET_BASE + 79,\n\tMAX_RUN_SUBJOBS = RET_BASE + 80,\n#ifdef NAS\n\t/* localmod 034 */\n\tGROUP_CPU_SHARE = RET_BASE + 81,\n\tGROUP_CPU_INSUFFICIENT = 
RET_BASE + 82,\n\t/* localmod 998 */\n\tRESOURCES_INSUFFICIENT = RET_BASE + 83,\n#endif\n\tERR_SPECIAL = RET_BASE + 1000\n};\n\nenum schd_err_status {\n\tSCHD_UNKWN,\n\tNOT_RUN,\n\tNEVER_RUN,\n\tSCHD_STATUS_HIGH\n};\n\n/* for SORT_BY */\nenum sort_type {\n\tNO_SORT,\n\tSHORTEST_JOB_FIRST,\n\tLONGEST_JOB_FIRST,\n\tSMALLEST_MEM_FIRST,\n\tLARGEST_MEM_FIRST,\n\tHIGH_PRIORITY_FIRST,\n\tLOW_PRIORITY_FIRST,\n\tLARGE_WALLTIME_FIRST,\n\tSHORT_WALLTIME_FIRST,\n\tFAIR_SHARE,\n\tPREEMPT_PRIORITY,\n\tMULTI_SORT\n};\n\n#ifdef FALSE\n#undef FALSE\n#endif\n\n#ifdef TRUE\n#undef TRUE\n#endif\n\n/* Reservation related constants */\n#define MAXVNODELIST 100\n\nenum resv_conf {\n\tRESV_CONFIRM_FAIL = -1,\n\tRESV_CONFIRM_VOID,\n\tRESV_CONFIRM_SUCCESS,\n\tRESV_CONFIRM_RETRY\n};\n\n/* job substate meaning suspended by scheduler */\n#define SUSP_BY_SCHED_SUBSTATE \"45\"\n\n/* job substate meaning node is provisioning */\n#define PROVISIONING_SUBSTATE \"71\"\n\n/* job substate meaning job is in the pre-running state */\n#define PRERUNNING_SUBSTATE \"41\"\n\n/* TRUE_FALSE indicates both true and false for collections of resources */\nenum { FALSE,\n       TRUE,\n       TRUE_FALSE };\n\nenum { RUN_JOBS_SORTED = 1,\n       SIM_RUN_JOB = 2 };\nenum { SIMULATE_SD = -1 };\n\nenum fairshare_flags {\n\tFS_TRIM = 1\n};\n\n#define FAIRSHARE_MIN_USAGE 1\n\n/* flags used for copy constructors - bit field */\nenum dup_flags {\n\tDUP_LOW = 0,\n\tDUP_INDIRECT = 1\n\t/* next flag 2, then 4, then 8... 
*/\n};\n\n/* an enum of one-off names */\nenum misc_constants {\n\tNO_FLAGS = 0,\n\tIGNORE_DISABLED_EVENTS = 1,\n\tFORCE_SCHED,\n\tSET_RESRESV_INDEX = 4,\n\tDETECT_GHOST_JOBS = 8,\n\tALL_MASK = 0xffffffff\n};\n\nenum advance {\n\tDONT_ADVANCE,\n\tADVANCE\n};\n\n/* resource list flags is a bitfield = 0, 1, 2, 4, 8...*/\nenum add_resource_list_flags {\n\tNO_UPDATE_NON_CONSUMABLE = 1,\n\tUSE_RESOURCE_LIST = 2,\n\tADD_UNSET_BOOLS_FALSE = 4,\n\tADD_AVAIL_ASSIGNED = 8,\n\tADD_ALL_BOOL = 16\n\t/* next flag 32 */\n};\n\nenum incr_decr {\n\tSCHD_INCR,\n\tSCHD_DECR\n};\n\n/* run update resresv flags is a bitfield = 0, 1, 2, 4, 8, ...*/\nenum run_update_resresv_flags {\n\tRURR_NO_FLAGS = 0,\n\tRURR_ADD_END_EVENT = 1, /* add end events to calendar for job */\n\tRURR_NOPRINT = 2\t/* don't print messages */\n\t\t\t\t/* next value 4 */\n};\n\nenum delete_event_flags {\n\tDE_NO_FLAGS = 0,\n\tDE_UNLINK = 1\n\t/* next flag 2, 4, 8, 16, ...*/\n};\n\nenum res_print_flags {\n\tPRINT_INT_CONST = 1,\n\tNOEXPAND = 2\n\t/* next flag 4, 8, 16, ...*/\n};\n\nenum is_provisionable_ret {\n\tNOT_PROVISIONABLE,\n\tNO_PROVISIONING_NEEDED,\n\tPROVISIONING_NEEDED\n};\n\nenum sort_order {\n\tNO_SORT_ORDER,\n\tDESC, /* descending i.e. 4 3 2 1 */\n\tASC   /* ascending i.e. 
1 2 3 4 */\n};\n\nenum cmptype {\n\tCMPAVAIL,\n\tCMPTOTAL\n};\n\nenum match_string_array_ret {\n\tSA_NO_MATCH,\t  /* no match */\n\tSA_PARTIAL_MATCH, /* at least one match */\n\tSA_SUB_MATCH,\t  /* one array is a subset of the other */\n\tSA_FULL_MATCH\t  /* both arrays are the same size and match */\n};\n\nenum prime_time {\n\tNON_PRIME = 0,\n\tPRIME = 1,\n\tPT_ALL,\n\tPT_NONE\n};\n\nenum days {\n\tSUNDAY,\n\tMONDAY,\n\tTUESDAY,\n\tWEDNESDAY,\n\tTHURSDAY,\n\tFRIDAY,\n\tSATURDAY,\n\tWEEKDAY,\n\tHIGH_DAY,\n};\n\nenum smp_cluster_dist {\n\tSMP_NODE_PACK,\n\tSMP_ROUND_ROBIN,\n\tHIGH_SMP_DIST\n};\n\n/*\n *\tWhen adding entries to this enum, be sure to initialize a matching\n *\tentry in prempt_prio_info[] (globals.c).\n */\nenum preempt {\n\tPREEMPT_NORMAL,\t\t   /* normal priority jobs */\n\tPREEMPT_OVER_FS_LIMIT,\t   /* jobs over their fairshare of the machine */\n\tPREEMPT_OVER_QUEUE_LIMIT,  /* jobs over queue run limits (maxrun etc) */\n\tPREEMPT_OVER_SERVER_LIMIT, /* jobs over server run limits */\n\tPREEMPT_EXPRESS,\t   /* jobs in express queue */\n\tPREEMPT_QRUN,\t\t   /* job is being qrun */\n\tPREEMPT_ERR,\t\t   /* error occurred during preempt computation */\n\tPREEMPT_HIGH\n};\n\nenum schd_simulate_cmd {\n\tSIM_NONE,\n\tSIM_NEXT_EVENT,\n\tSIM_TIME\n};\n\nenum timed_event_types {\n\tTIMED_NOEVENT = 1,\n\tTIMED_ERROR = 2,\n\tTIMED_RUN_EVENT = 4,\n\tTIMED_END_EVENT = 8,\n\tTIMED_POLICY_EVENT = 16,\n\tTIMED_DED_START_EVENT = 32,\n\tTIMED_DED_END_EVENT = 64,\n\tTIMED_NODE_DOWN_EVENT = 128,\n\tTIMED_NODE_UP_EVENT = 256\n};\n\nenum resource_fields {\n\tRF_NONE,\n\tRF_AVAIL,\t /* resources_available - if indirect, resolve */\n\tRF_DIRECT_AVAIL, /* resources_available - if indirect, return @vnode */\n\tRF_ASSN,\n\tRF_REQUEST,\n\tRF_UNUSED /* meta field: RF_AVAIL - RF_ASSN: used for sorting */\n};\n\n/* bit fields */\nenum node_eval {\n\tEVAL_LOW = 0,\n\tEVAL_OKBREAK = 1, /* OK to break chunk up across placement set */\n\tEVAL_EXCLSET = 2  /* allocate entire placement 
set exclusively */\n\t\t\t  /* next 4, then 8, etc */\n};\n\nenum nodepart {\n\tNP_NONE = 0,\n\tNP_IGNORE_EXCL = 1,\n\tNP_CREATE_REST = 2,\n\tNP_NO_ADD_NP_ARR = 4\n\t/* next 8, 16, etc */\n};\n\n/* It is used to identify the provisioning policy set on scheduler */\nenum provision_policy_types {\n\tAGGRESSIVE_PROVISION = 0,\n\tAVOID_PROVISION = 1\n};\n\nenum sort_obj_type {\n\tSOBJ_JOB,\n\tSOBJ_NODE,\n\tSOBJ_PARTITION,\n\tSOBJ_BUCKET\n};\n\nenum update_sort_defs {\n\tSD_FREE,\n\tSD_UPDATE\n};\n\nenum update_attr_flags {\n\tUPDATE_FLAGS_LOW = 0,\n\tUPDATE_LATER = 1,\n\tUPDATE_NOW = 2,\n\t/* Bit Field, next 4, then 8 */\n};\n\n/* static indexes into the allres resdef array for built in resources.  It is\n * likely that the query_rsc() API call will return the resources in the order\n * of the server's resc_def_all array.  It is marginally faster if we try and\n * keep this array in the same order.  There is no dependency on this ordering\n */\nenum resource_index {\n\tRES_CPUT,\n\tRES_MEM,\n\tRES_WALLTIME,\n\tRES_SOFT_WALLTIME,\n\tRES_NCPUS,\n\tRES_ARCH,\n\tRES_HOST,\n\tRES_VNODE,\n\tRES_AOE,\n\tRES_EOE,\n\tRES_MIN_WALLTIME,\n\tRES_MAX_WALLTIME,\n\tRES_PREEMPT_TARGETS,\n\tRES_HIGH\n};\n\n/* Flags for is_ok_to_run() and the check functions called by it */\nenum check_flags {\n\tCHECK_FLAGS_LOW,\n\tRETURN_ALL_ERR = 1,\n\tCHECK_LIMIT = 2,\t    /* for check_limits */\n\tCHECK_CUMULATIVE_LIMIT = 4, /* for check_limits */\n\tCHECK_ALL_BOOLS = 8,\n\tUNSET_RES_ZERO = 16,\n\tCOMPARE_TOTAL = 32,\n\tONLY_COMP_NONCONS = 64,\n\tONLY_COMP_CONS = 128,\n\tIGNORE_EQUIV_CLASS = 256,\n\tUSE_BUCKETS = 512,\n\tNO_ALLPART = 1024,\n\tSPAN_PSETS = 2048\n\t/* next flag 4096 */\n};\n\nenum schd_error_args {\n\tARG1,\n\tARG2,\n\tARG3,\n\tSPECMSG\n};\n\nenum bucket_flags {\n\tUPDATE_BUCKET_IND = 1,\n\tNO_PRINT_BUCKETS\n};\n\nenum sort_info_type {\n\tPRIME_SORT,\n\tNON_PRIME_SORT,\n\tPRIME_NODE_SORT,\n\tNON_PRIME_NODE_SORT\n};\n\nenum runjob_mode 
{\n\tRJ_NOWAIT,\n\tRJ_RUNJOB_HOOK,\n\tRJ_EXECJOB_HOOK\n};\n\nenum preempt_sort_vals {\n\tPS_MIN_T_SINCE_START,\n\tPS_PREEMPT_PRIORITY,\n\tPS_HIGH\n};\n\nenum nscr_vals {\n\tNSCR_NONE = 0,\n\tNSCR_VISITED = 1,\n\tNSCR_SCATTERED = 2,\n\tNSCR_INELIGIBLE = 4,\n\tNSCR_CYCLE_INELIGIBLE = 8\n};\n\n#endif /* _CONSTANT_H */\n"
  },
  {
    "path": "src/scheduler/data_types.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * A quick explanation of the scheduler's data model:\n * To free an object, use the object’s destructor (e.g., free_node_info())\n * To free an array of objects, you need to know if you own the objects yourself\n * or are an array of references\n * If you own the objects (e.g., sinfo->nodes), you call the multi-object\n * destructor (e.g., free_nodes())\n * If you are an array of references (e.g., sinfo->queues[i]->nodes), you call\n * free().  
You are just an array of pointers that are referencing objects.\n */\n\n// clang-format off\n\n#ifndef\t_DATA_TYPES_H\n#define\t_DATA_TYPES_H\n\n#include <string>\n#include <unordered_map>\n#include <unordered_set>\n#include <vector>\n\n#include <time.h>\n#include <pbs_ifl.h>\n#include <libutil.h>\n#include \"constant.h\"\n#include \"config.h\"\n#include \"pbs_bitmap.h\"\n#include \"pbs_share.h\"\n#include \"range.h\"\n#ifdef NAS\n#include \"site_queue.h\"\n#endif\n\nclass server_info;\nstruct job_info;\nstruct schd_resource;\nstruct resource_req;\nstruct resource_count;\nstruct holiday;\nstruct prev_job_info;\nclass group_info;\nstruct usage_info;\nclass counts;\nclass nspec;\nstruct node_partition;\nstruct range;\nstruct place;\nstruct schd_error;\nclass np_cache;\nstruct chunk;\nclass selspec;\nclass resdef;\nstruct event_list;\nstruct status;\nclass fairshare_head;\nstruct node_scratch;\nstruct te_list;\nstruct node_bucket;\nstruct bucket_bitpool;\nstruct chunk_map;\nstruct node_bucket_count;\nstruct preempt_job_st;\nstruct config;\nstruct sort_info;\nclass resource_resv;\nclass node_info;\nclass queue_info;\nclass sched_exception;\n\n\ntypedef struct state_count state_count;\ntypedef struct job_info job_info;\ntypedef struct schd_resource schd_resource;\ntypedef struct resource_req resource_req;\ntypedef struct resource_count resource_count;\ntypedef struct usage_info usage_info;\ntypedef struct resv_info resv_info;\ntypedef struct node_partition node_partition;\ntypedef struct place place;\ntypedef struct schd_error schd_error;\ntypedef struct chunk chunk;\ntypedef struct timed_event timed_event;\ntypedef struct event_list event_list;\ntypedef struct node_scratch node_scratch;\ntypedef struct resresv_set resresv_set;\ntypedef struct te_list te_list;\ntypedef struct node_bucket node_bucket;\ntypedef struct bucket_bitpool bucket_bitpool;\ntypedef struct chunk_map chunk_map;\ntypedef struct node_bucket_count node_bucket_count;\ntypedef struct preempt_job_st 
preempt_job_st;\ntypedef struct th_task_info th_task_info;\ntypedef struct th_data_nd_eligible th_data_nd_eligible;\ntypedef struct th_data_dup_nd_info th_data_dup_nd_info;\ntypedef struct th_data_query_ninfo th_data_query_ninfo;\ntypedef struct th_data_free_ninfo th_data_free_ninfo;\ntypedef struct th_data_dup_resresv th_data_dup_resresv;\ntypedef struct th_data_query_jinfo th_data_query_jinfo;\ntypedef struct th_data_free_resresv th_data_free_resresv;\n\nusing counts_umap = std::unordered_map<std::string, counts *>;\n#ifdef NAS\n/* localmod 034 */\n/*\n * site_j_share_type - How jobs interact with CPU shares\n */\nenum    site_j_share_type {\n\tJ_TYPE_ignore =   0\t\t/* share ignored when scheduling */\n\t, J_TYPE_limited =  1\t\t/* jobs limited to share */\n\t, J_TYPE_borrow =   2\t\t/* jobs can borrow from other shares */\n};\n#define\tJ_TYPE_COUNT\t(J_TYPE_borrow+1)\n\nstruct share_head;\ntypedef struct share_head share_head;\nstruct share_info;\ntypedef struct share_info share_info;\ntypedef int sh_amt;\n/* localmod 053 */\nstruct\tsite_user_info;\ntypedef struct site_user_info site_user_info;\n#endif\n\ntypedef RESOURCE_TYPE sch_resource_t;\n/* since resource values and usage values are linked */\ntypedef sch_resource_t usage_t;\n\ntypedef void event_ptr_t;\ntypedef int (*event_func_t)(event_ptr_t*, void *);\n\nstruct th_task_info\n{\n\tint task_id;\t\t\t\t\t\t\t/* task id, should be set by main thread */\n\tenum thread_task_type task_type;\t\t/* task type */\n\tvoid *thread_data;\t\t\t\t\t/* data for the worker thread to execute the task */\n};\n\nstruct th_data_nd_eligible\n{\n\tresource_resv *resresv;\n\tplace *pl;\n\tschd_error *err;\n\tnode_info **ninfo_arr;\n\tint sidx;\n\tint eidx;\n};\n\nstruct th_data_dup_nd_info\n{\n\tbool error:1;\n\tnode_info **onodes;\n\tnode_info **nnodes;\n\tserver_info *nsinfo;\n\tunsigned int flags;\n\tint sidx;\n\tint eidx;\n};\n\nstruct th_data_query_ninfo\n{\n\tbool error:1;\n\tstruct batch_status *nodes;\n\tserver_info 
*sinfo;\n\tnode_info **oarr;\n\tint sidx;\n\tint eidx;\n};\n\nstruct th_data_free_ninfo\n{\n\tnode_info **ninfo_arr;\n\tint sidx;\n\tint eidx;\n};\n\nstruct th_data_dup_resresv\n{\n\tbool error:1;\n\tresource_resv **oresresv_arr;\n\tresource_resv **nresresv_arr;\n\tserver_info *nsinfo;\n\tqueue_info *nqinfo;\n\tint sidx;\n\tint eidx;\n};\n\nstruct th_data_query_jinfo\n{\n\tbool error:1;\n\tstruct batch_status *jobs;\n\tserver_info *sinfo;\n\tqueue_info *qinfo;\n\tresource_resv **oarr;\n\tstatus *policy;\n\tint pbs_sd;\n\tint sidx;\n\tint eidx;\n};\n\nstruct th_data_free_resresv\n{\n\tresource_resv **resresv_arr;\n\tint sidx;\n\tint eidx;\n};\n\nstruct schd_error\n{\n\tenum sched_error_code error_code;\t/* scheduler error code (see constant.h) */\n\tenum schd_err_status status_code; /* error status */\n\tresdef *rdef;\t\t\t/* resource def if error contains a resource*/\n\tchar *arg1;\t\t\t/* buffer for error code specific string */\n\tchar *arg2;\t\t\t/* buffer for error code specific string */\n\tchar *arg3;\t\t\t/* buffer for error code specific string */\n\tchar *specmsg;\t\t\t/* buffer to override static error msg */\n\tschd_error *next;\n};\n\nstruct state_count\n{\n\tint running;\t\t\t/* number of jobs in the running state*/\n\tint queued;\t\t\t/* number of jobs in the queued state */\n\tint held;\t\t\t/* number of jobs in the held state */\n\tint transit;\t\t\t/* number of jobs in the transit state */\n\tint waiting;\t\t\t/* number of jobs in the waiting state */\n\tint exiting;\t\t\t/* number of jobs in the exiting state */\n\tint suspended;\t\t\t/* number of jobs in the suspended state */\n\tint userbusy;\t\t\t/* number of jobs in the userbusy state */\n\tint begin;\t\t\t/* number of job arrays in begin state */\n\tint expired;\t\t\t/* expired jobs which are no longer running */\n\tint invalid;\t\t\t/* number of invalid jobs */\n\tint total;\t\t\t/* total number of jobs in all states */\n};\n\nstruct place\n{\n\tbool free:1;\t\t/* free placement */\n\tbool 
pack:1;\t\t/* pack placement */\n\tbool scatter:1;\t\t/* scatter placement */\n\tbool vscatter:1;\t/* scatter by vnode */\n\tbool excl:1;\t\t/* need nodes exclusively */\n\tbool exclhost:1;\t/* need whole hosts exclusively */\n\tbool share:1;\t\t/* will share nodes */\n\n\tchar *group;\t\t\t/* resource to node group by */\n};\n\nstruct chunk\n{\n\tchar *str_chunk;\t\t/* chunk in string form */\n\tint num_chunks;\t\t\t/* the number of chunks needed */\n\tint seq_num;\t\t\t/* the chunk sequence number */\n\tresource_req *req;\t\t/* the resources in resource_req form */\n};\n\nclass selspec\n{\n\tpublic:\n\tint total_chunks;\n\tint total_cpus;\t\t\t/* # of cpus requested in this select spec */\n\tstd::unordered_set<resdef *> defs;\t\t\t/* the resources requested by this select spec*/\n\tchunk **chunks;\n\tselspec();\n\tselspec(const selspec&);\n\tselspec& operator=(const selspec&);\n\tvirtual ~selspec();\n};\n\n/* for description of these bits, check the PBS admin guide or scheduler IDS */\nstruct status\n{\n\tbool round_robin:1;\t\t/* Round robin around queues */\n\tbool by_queue:1;\t\t/* schedule per-queue */\n\tbool strict_fifo:1;\t\t/* deprecated */\n\tbool strict_ordering:1;\n\tbool fair_share:1;\n\tbool backfill:1;\n\tbool sort_nodes:1;\n\tbool backfill_prime:1;\n\tbool preempting:1;\n#ifdef NAS /* localmod 034 */\n\tbool shares_track_only:1;\n#endif /* localmod 034 */\n\n\tbool is_prime:1;\n\tbool is_ded_time:1;\n\n\tstd::vector<sort_info> *sort_by;\t\t/* job sorting */\n\tstd::vector<sort_info> *node_sort;\t\t/* node sorting */\n\tenum smp_cluster_dist smp_dist;\n\n\tunsigned prime_spill;\t\t\t/* the amount of time a job can spill into the next prime state */\n\tunsigned int backfill_depth;\t\t/* number of top jobs to backfill around */\n\n\tstd::unordered_set<resdef *> resdef_to_check;\t\t/* resources to match as definitions */\n\tstd::unordered_set<resdef *> resdef_to_check_no_hostvnode;\t/* resdef_to_check without host/vnode*/\n\tstd::unordered_set<resdef 
*> resdef_to_check_rassn;\t\t/* resdef_to_check intersects res_rassn */\n\tstd::unordered_set<resdef *> resdef_to_check_rassn_select;\t/* resdef_to_check intersects res_rassn and host level resource */\n\tstd::unordered_set<resdef *> resdef_to_check_noncons;\t/* non-consumable resources to match */\n\tstd::unordered_set<resdef *> equiv_class_resdef;\t\t/* resources to consider for job equiv classes */\n\n\n\ttime_t prime_status_end;\t\t/* the end of prime or nonprime */\n\n\tstd::unordered_set<resdef *> rel_on_susp;\t    /* resources to release on suspend */\n\n\t/* not really policy... but kinda just left over here */\n\ttime_t current_time;\t\t\t/* current time in the cycle */\n\ttime_t cycle_start;\t\t\t/* cycle start in real time */\n\n\tunsigned int order;\t\t\t/* used to assign an ordering to objs */\n\tint preempt_attempts;\t\t\t/* number of jobs attempted to preempt */\n};\n\n/*\n * All attributes of the qmgr sched object\n * Don't need log_events here, we use log_event_mask from Liblog/log_event.c\n */\nstruct schedattrs\n{\n\tbool do_not_span_psets:1;\n\tbool only_explicit_psets:1;\n\tbool preempt_targets_enable:1;\n\tbool sched_preempt_enforce_resumption:1;\n\tbool throughput_mode:1;\n\tlong attr_update_period;\n\tchar *comment;\n\tchar *job_sort_formula;\n\tdouble job_sort_formula_threshold;\n\tint opt_backfill_fuzzy;\n\tchar *partition;\n\tlong preempt_queue_prio;\n\tunsigned int preempt_prio[NUM_PPRIO][2];\n\tstruct preempt_ordering preempt_order[PREEMPT_ORDER_MAX + 1];\n\tenum preempt_sort_vals preempt_sort;\n\tenum runjob_mode runjob_mode; /* set to a numeric version of job_run_wait attribute value */\n\tlong sched_cycle_length;\n\tchar *sched_log;\n\tchar *sched_priv;\n\tlong server_dyn_res_alarm;\n};\n\nclass server_info\n{\n\tpublic:\n\tbool has_soft_limit:1;\t/* server has a soft user/grp limit set */\n\tbool has_hard_limit:1;\t/* server has a hard user/grp limit set */\n\tbool has_mult_express:1;\t/* server has multiple express queues */\n\tbool 
has_user_limit:1;\t/* server has user hard or soft limit */\n\tbool has_grp_limit:1;\t/* server has group hard or soft limit */\n\tbool has_proj_limit:1;\t/* server has project hard or soft limit */\n\tbool has_all_limit:1;\t/* server has PBS_ALL limits set on it */\n\tbool has_prime_queue:1;\t/* server has a primetime queue */\n\tbool has_ded_queue:1;\t/* server has a dedtime queue */\n\tbool has_nonprime_queue:1;\t/* server has a non primetime queue */\n\tbool node_group_enable:1;\t/* is node grouping enabled */\n\tbool has_nodes_assoc_queue:1; /* nodes are associated with queues */\n\tbool has_multi_vnode:1;\t/* server has at least one multi-vnoded MOM  */\n\tbool has_runjob_hook:1;\t/* server has at least 1 runjob hook enabled */\n\tbool eligible_time_enable:1; /* controls if we accrue eligible_time  */\n\tbool provision_enable:1;\t/* controls if provisioning occurs */\n\tbool power_provisioning:1;\t/* controls if power provisioning occurs */\n\tbool has_nonCPU_licenses:1;\t/* server has non-CPU (e.g. 
socket-based) licenses */\n\tbool use_hard_duration:1;\t/* use hard duration when creating the calendar */\n\tbool pset_metadata_stale:1;\t/* The placement set meta data is stale and needs to be regenerated before the next use */\n\tstd::string name;\t\t/* name of server */\n\tstruct schd_resource *res;\t/* list of resources */\n\tvoid *liminfo;\t\t\t/* limit storage information */\n\tint num_nodes;\t\t\t/* number of nodes associated with the server */\n\tint num_resvs;\t\t\t/* number of reservations on the server */\n\tint num_preempted;\t\t/* number of jobs currently preempted */\n\tstd::vector<std::string> node_group_key;\t\t/* the node grouping resources */\n\tstate_count sc;\t\t\t/* number of jobs in each state */\n\tstd::vector<queue_info *> queues;\t\t/* array of queues */\n\tqueue_info ***queue_list;\t/* 3 dimensional array, used to order jobs in round_robin */\n\tnode_info **nodes;\t\t/* array of nodes associated with the server */\n\tnode_info **unassoc_nodes;\t/* array of nodes not associated with queues */\n\tresource_resv **resvs;\t\t/* the reservations on the server */\n\tresource_resv **running_jobs;\t/* array of jobs which are in state R */\n\tresource_resv **exiting_jobs;\t/* array of jobs which are in state E */\n\tresource_resv **jobs;\t\t/* all the jobs in the server */\n\tresource_resv **all_resresv;\t/* a list of all jobs and adv resvs */\n\tevent_list *calendar;\t\t/* the calendar of events */\n\tchar *job_sort_formula;\t/* set via the JSF attribute of either the sched, or the server */\n\n\ttime_t server_time;\t\t/* The time the server is at.  
Could be in the\n\t\t\t\t\t * future if we're simulating\n\t\t\t\t\t */\n\t/* the number of running jobs in each preempt level\n\t * all jobs in preempt_count[NUM_PPRIO] are unknown preempt status's\n\t */\n\tint preempt_count[NUM_PPRIO + 1];\n\n\tcounts_umap group_counts;\t\t/* group resource and running counts */\n\tcounts_umap project_counts;\t\t/* project resource and running counts */\n\tcounts_umap user_counts;\t\t/* user resource and running counts */\n\tcounts_umap alljobcounts;\t\t/* overall resource and running counts */\n\n\t/*\n\t * Resource/Run counts list to store counts for all jobs which\n\t * are running/queued/suspended.\n\t */\n\tcounts_umap total_group_counts;\n\tcounts_umap total_project_counts;\n\tcounts_umap total_user_counts;\n\tcounts_umap total_alljobcounts;\n\n\tnode_partition **nodepart;\t/* array pointers to node partitions */\n\tint num_parts;\t\t\t/* number of node partitions(node_group_key) */\n\tnode_partition *allpart;\t/* node partition for all nodes */\n\tint num_hostsets;\t\t/* the size of hostsets */\n\tnode_partition **hostsets;\t/* partitions for vnodes on a host */\n\n\tchar **nodesigs;\t\t/* node signatures from server nodes */\n\n\t/* cache of node partitions we created.  We cache them all here and\n\t * will attempt to find one when we need to use it.  This cache will not\n\t * be duplicated.  It would be difficult to duplicate correctly, and it is\n\t * just a cache.  It will be regenerated when needed\n\t */\n\tstd::vector<np_cache *> npc_arr;\n\n\tresource_resv *qrun_job;\t/* used if running a job via qrun request */\n\t/* policy structure for the server.  This is an easy storage location for\n\t * the policy struct.  
The policy struct will be passed around separately\n\t */\n\tstatus *policy;\n\tfairshare_head *fstree;\t/* root of fairshare tree */\n\tresresv_set **equiv_classes;\n\tnode_bucket **buckets;\t\t/* node bucket array */\n\tnode_info **unordered_nodes;\n\tstd::unordered_map<std::string, node_partition *> svr_to_psets;\n#ifdef NAS\n\t/* localmod 034 */\n\tshare_head *share_head;\t/* root of share info */\n#endif\n\t// Class methods\n\texplicit server_info(const char *);\n\tserver_info() = delete;\n\tserver_info(const server_info &);\n\tvirtual ~server_info();\n\tserver_info & operator=(const server_info &);\n\n\tprivate:\n\tvoid init_server_info();\n\tvoid free_server_info();\n\tvoid free_server_psets();\n\tvoid dup_server_psets(const std::unordered_map<std::string, node_partition*>& spsets);\n};\n\nclass queue_info\n{\n\tpublic:\n\tbool is_started:1;\t\t/* is queue started */\n\tbool is_exec:1;\t\t/* is the queue an execution queue */\n\tbool is_route:1;\t\t/* is the queue a routing queue */\n\tbool is_ok_to_run:1;\t/* is it ok to run jobs in this queue */\n\tbool is_ded_queue:1;\t/* only jobs in dedicated time */\n\tbool is_prime_queue:1;\t/* only run jobs in primetime */\n\tbool is_nonprime_queue:1;\t/* only run jobs in nonprimetime */\n\tbool has_nodes:1;\t\t/* does this queue have nodes assoc with it */\n\tbool has_soft_limit:1;\t/* queue has a soft user/grp limit set */\n\tbool has_hard_limit:1;\t/* queue has a hard user/grp limit set */\n\tbool is_peer_queue:1;\t/* queue is a peer queue */\n\tbool has_resav_limit:1;\t/* queue has resources_available limits */\n\tbool has_user_limit:1;\t/* queue has user hard or soft limit */\n\tbool has_grp_limit:1;\t/* queue has group hard or soft limit */\n\tbool has_proj_limit:1;\t/* queue has project hard or soft limit */\n\tbool has_all_limit:1;\t/* queue has PBS_ALL limits set on it */\n\tstruct server_info *server;\t/* server where queue resides */\n\tconst std::string name;\t\t/* queue name */\n\tstate_count sc;\t\t\t/* 
number of jobs in different states */\n\tvoid *liminfo;\t\t\t/* limit storage information */\n\tint priority;\t\t\t/* priority of queue */\n#ifdef NAS\n\t/* localmod 046 */\n\ttime_t max_starve;\t\t/* eligible job marked starving after this */\n\t/* localmod 034 */\n\ttime_t max_borrow;\t\t/* longest job that can borrow CPUs */\n\t/* localmod 038 */\n\tbool is_topjob_set_aside:1; /* draws topjobs from per_queues_topjobs */\n\t/* localmod 040 */\n\tbool ignore_nodect_sort:1; /* job_sort_key nodect ignored in this queue */\n#endif\n\tint num_nodes;\t\t/* number of nodes associated with queue */\n\tstruct schd_resource *qres;\t/* list of resources on the queue */\n\tresource_resv *resv;\t\t/* the resv if this is a resv queue */\n\tresource_resv **jobs;\t\t/* array of jobs that reside in queue */\n\tresource_resv **running_jobs;\t/* array of jobs in the running state */\n\tnode_info **nodes;\t\t/* array of nodes associated with the queue */\n\tcounts_umap group_counts;\t\t/* group resource and running counts */\n\tcounts_umap project_counts;\t\t/* project resource and running counts */\n\tcounts_umap user_counts;\t\t/* user resource and running counts */\n\tcounts_umap alljobcounts;\t\t/* overall resource and running counts */\n\t/*\n\t * Resource/Run counts list to store counts for all jobs which\n\t * are running/queued/suspended.\n\t */\n\tcounts_umap total_group_counts;\n\tcounts_umap total_project_counts;\n\tcounts_umap total_user_counts;\n\tcounts_umap total_alljobcounts;\n\n\tstd::vector<std::string> node_group_key;\t\t/* node grouping resources */\n\tstruct node_partition **nodepart; /* array of pointers to node partitions */\n\tstruct node_partition *allpart;   /* partition w/ all nodes assoc with queue */\n\tint num_parts;\t\t\t/* number of node partitions (node_group_key) */\n\tint num_topjobs;\t\t/* current number of top jobs in this queue */\n\tint backfill_depth;\t\t/* total allowable topjobs in this queue */\n\tchar *partition;\t\t/* partition to which the queue 
belongs */\n\n\texplicit queue_info(const char *);\n\tqueue_info(queue_info&, server_info *);\n\tvirtual ~queue_info();\n};\n\nstruct job_info\n{\n\tbool is_queued:1;\t\t/* state booleans */\n\tbool is_running:1;\n\tbool is_held:1;\n\tbool is_waiting:1;\n\tbool is_transit:1;\n\tbool is_exiting:1;\n\tbool is_suspended:1;\n\tbool is_susp_sched:1;\t/* job is suspended by scheduler */\n\tbool is_userbusy:1;\n\tbool is_begin:1;\t\t/* job array 'B' state */\n\tbool is_expired:1;\t\t/* 'X' pseudo state for simulated job end */\n\tbool is_checkpointed:1;\t/* job has been checkpointed */\n\n\tbool can_not_preempt:1;\t/* this job can not be preempted */\n\n\tbool can_checkpoint:1;\t/* this job can be checkpointed */\n\tbool can_requeue:1;\t/* this job can be requeued */\n\tbool can_suspend:1;\t/* this job can be suspended */\n\n\tbool is_array:1;\t\t/* is the job a job array object */\n\tbool is_subjob:1;\t\t/* is a subjob of a job array */\n\n\tbool is_provisioning:1;\t/* job is provisioning */\n\tbool is_preempted:1;\t/* job is preempted */\n\tbool is_prerunning:1;\t\t/* Job in prerunning substate */\n\tbool is_topjob:1;\t\t/* job is the top job */\n\tbool topjob_ineligible:1;\t/* Job is ineligible to be a top job */\n\n\tchar *job_name;\t\t\t/* job name attribute (qsub -N) */\n\tchar *comment;\t\t\t/* comment field of job */\n\tchar *resv_id;\t\t\t/* identifier of reservation job is in */\n\tchar *alt_id;\t\t\t/* vendor assigned job identifier */\n\tqueue_info *queue;\t\t/* queue where job resides */\n\tresource_resv *resv;\t\t/* the reservation the job is part of */\n\tint priority;\t\t\t/* PBS priority of job */\n\ttime_t etime;\t\t\t/* the time the job went to the queued state */\n\ttime_t stime;\t\t\t/* the time the job was started */\n\ttime_t est_start_time;\t\t/* scheduler estimated start time of job */\n\ttime_t time_preempted;\t\t/* time when the job was preempted */\n\tchar *est_execvnode;\t\t/* scheduler estimated execvnode of job */\n\tunsigned int 
preempt_status;\t/* preempt levels (bitfield) */\n\tunsigned int preempt;\t\t\t/* preempt priority */\n\tint peer_sd;\t\t\t/* connection descriptor to peer server */\n\tresource_req *resused;\t\t/* a list of resources used */\n\tgroup_info *ginfo;\t\t/* the fair share node for the owner */\n\n\t/* subjob information */\n\tstd::string array_id;\t\t/* job id of job array if we are a subjob */\n\tint array_index;\t\t/* array index if we are a subjob */\n\tresource_resv *parent_job;\t/* pointer to the parent array job */\n\n\t/* job array information */\n\trange *queued_subjobs;\t\t/* a list of ranges of queued subjob indices */\n\tlong max_run_subjobs;\t\t/* Max number of running subjobs at any time */\n\tlong running_subjobs;\t\t/* number of currently running subjobs */\n\n\tint accrue_type;\t\t/* type of time job should accrue */\n\ttime_t eligible_time;\t\t/* eligible time accrued until last cycle */\n\n\tstruct attrl *attr_updates;\t/* used to federate all attr updates to server */\n\tfloat formula_value;\t\t/* evaluated job sort formula value */\n\tstd::vector<nspec *> resreleased;\t\t/* list of resources released by the job on each node */\n\tresource_req *resreq_rel;\t/* list of resources released */\n\tchar *depend_job_str;\t\t/* dependent jobs in a ':' separated string */\n\tresource_resv **dependent_jobs; /* dependent jobs with runone dependency */\n\n#ifdef NAS\n\t/* localmod 045 */\n\tint\t\tNAS_pri;\t/* NAS version of priority */\n\t/* localmod 034 */\n\tsh_amt\t*sh_amts;\t/* Amount of each type job is requesting */\n\tshare_info\t*sh_info;\t/* Info about share group job belongs to */\n\tsch_resource_t accrue_rate;\t/* rate at which job uses share resources */\n\t/* localmod 040 */\n\tint\t\tnodect;\t\t/* Node count for sorting jobs by */\n\t/* localmod 031 */\n\tchar\t\t*schedsel;\t/* schedselect field of job */\n\t/* localmod 053 */\n\tsite_user_info *u_info;\t/* User associated with job */\n#endif\n};\n\nclass node_info\n{\n\tpublic:\n\tbool 
is_down:1;\t\t/* node is down */\n\tbool is_free:1;\t\t/* node is free to run a job */\n\tbool is_offline:1;\t/* node is off-line */\n\tbool is_unknown:1;\t/* node is in an unknown state */\n\tbool is_exclusive:1;\t/* node is running in exclusive mode */\n\tbool is_job_exclusive:1;\t/* node is running in job-exclusive mode */\n\tbool is_resv_exclusive:1;\t/* node is reserved exclusively */\n\tbool is_sharing:1;\t/* node is running in job-sharing mode */\n\tbool is_busy:1;\t\t/* load on node is too high to schedule */\n\tbool is_job_busy:1;\t/* ntype = cluster all vp's allocated */\n\tbool is_stale:1;\t\t/* node is unknown by mom */\n\tbool is_maintenance:1;\t/* node is in maintenance */\n\n\t/* license types */\n\tbool lic_lock:1;\t\t/* node has a node locked license */\n\n\tbool has_hard_limit:1;\t/* node has a hard user/grp limit set */\n\tbool no_multinode_jobs:1;\t/* do not run multinode jobs on this node */\n\n\tbool resv_enable:1;\t/* is this node available for reservations */\n\tbool provision_enable:1;\t/* is this node available for provisioning */\n\n\tbool is_provisioning:1;\t/* node is provisioning */\n\t/* a node in wait-provision is considered a node in the provisioning state\n\t * nodes in provisioning and wait-provisioning states cannot run jobs\n\t * NOTE:\n\t * If node is provisioning an aoe and job needs this aoe then it could have\n\t * run on this node. However, within the same cycle, this cannot be handled\n\t * since we can't make the other job wait. 
In another cycle, the node is\n\t * either free or provisioning, and the case is clear.\n\t */\n\tbool is_multivnoded:1;\t/* multi vnode */\n\tbool power_provisioning:1;\t/* can this node power provision */\n\tbool is_sleeping:1;\t\t/* node put to sleep through power on/off or ramp rate limit */\n\tbool has_ghost_job:1;\t/* race condition occurred: recalculate resources_assigned */\n\n\t/* sharing */\n\tenum vnode_sharing sharing;\t/* default or forced sharing/excl of the node */\n\n\tconst std::string name;\t\t/* name of the node */\n\tchar *mom;\t\t\t/* host name on which mom resides */\n\n\tchar **jobs;\t\t\t/* the names of the jobs currently on the node */\n\tchar **resvs;\t\t\t/* the names of the reservations currently on the node */\n\tresource_resv **job_arr;\t/* ptrs to structs of the jobs on the node */\n\tresource_resv **run_resvs_arr;\t/* ptrs to structs of resvs holding resources on the node */\n\n\t/* This element is the server the node is associated with.  In the case\n\t * of a node which is part of an advanced reservation, the nodes are\n\t * a copy of the real nodes with the resources modified to what the\n\t * reservation gets.  This element points to the server the non-duplicated\n\t * nodes do.  
This means ninfo is not part of ninfo -> server -> nodes.\n\t */\n\tserver_info *server;\n\tstd::string queue_name;\t\t/* the queue the node is associated with */\n\n\tint num_jobs;\t\t\t/* number of jobs running on the node */\n\tint num_run_resv;\t\t/* number of running advanced reservations */\n\tint num_susp_jobs;\t\t/* number of suspended jobs on the node */\n\n\tint priority;\t\t\t/* node priority */\n\n\tcounts_umap group_counts;\t/* group resource and running counts */\n\tcounts_umap user_counts;\t\t/* user resource and running counts */\n\n\tint max_running;\t\t/* max number of jobs on the node */\n\tint max_user_run;\t\t/* max number of jobs running by a user */\n\tint max_group_run;\t\t/* max number of jobs running by a UNIX group */\n\n\tschd_resource *res;\t\t/* list of resources max/current usage */\n\n\tint rank;\t\t\t/* unique numeric identifier for node */\n\n#ifdef NAS\n\t/* localmod 034 */\n\tint\tsh_cls;\t\t\t/* Share class supplied by node */\n\tint\tsh_type;\t\t/* Share type of node */\n#endif\n\n\tchar *current_aoe;\t\t/* AOE name instantiated on node */\n\tchar *current_eoe;\t\t/* EOE name instantiated on node */\n\tchar *nodesig;\t\t\t/* resource signature */\n\tint nodesig_ind;\t\t/* resource signature index in server array */\n\tnode_info *svr_node;\t\t/* ptr to svr's node if we're a resv node */\n\tnode_partition *hostset;\t/* other vnodes on the same host */\n\tunsigned int nscr;\t\t/* scratch space local to node search code */\n\tchar *partition;\t\t/* partition to which the node belongs */\n\ttime_t last_state_change_time;\t/* time stamp of the last node state change */\n\ttime_t last_used_time;\t\t/* Node was last active at this time */\n\tte_list *node_events;\t\t/* list of run events that affect the node */\n\tint bucket_ind;\t\t\t/* index in server's bucket array */\n\tint node_ind;\t\t\t/* node's index into sinfo->unordered_nodes */\n\tnode_partition **np_arr;\t/* array of node partitions node is in */\n\n\texplicit node_info(const 
std::string& nname);\n\tvirtual ~node_info();\n};\n\nstruct resv_info\n{\n\tbool is_standing:1;\t\t/* set to 1 for a standing reservation */\n\tbool is_running:1;\t\t/* the reservation is running (not necessarily in the running state) */\n\tchar *queuename;\t\t/* the name of the queue */\n\tchar *rrule;\t\t\t/* recurrence rule for standing reservations */\n\tchar *execvnodes_seq;\t\t/* sequence of execvnodes for standing resvs */\n\ttime_t *occr_start_arr;\t\t/* occurrence start times */\n\tchar *timezone;\t\t\t/* timezone associated with a reservation */\n\tint resv_idx;\t\t\t/* the index of standing resv occurrence */\n\tint count;\t\t\t/* the total number of occurrences */\n\ttime_t req_start;\t\t/* user requested start time of resv */\n\ttime_t req_start_orig;\t\t/* For altered reservations, this has the original start time */\n\ttime_t req_start_standing;\t\t/* For standing reservations, this will be used to get start time of future occurrences */\n\ttime_t req_end;\t\t\t/* user requested end time of resv */\n\ttime_t req_duration;\t\t/* user requested duration of resv */\n\ttime_t req_duration_orig;\t\t/* For altered reservations, this has the original duration */\n\ttime_t req_duration_standing;\t/* For standing reservations, this will be used to get duration of future occurrences */\n\ttime_t retry_time;\t\t/* time at which a reservation is to be reconfirmed */\n\tenum resv_states resv_state;\t/* reservation state */\n\tenum resv_states resv_substate;\t/* reservation substate */\n\tqueue_info *resv_queue;\t\t/* general resv: queue which is owned by resv */\n\tnode_info **resv_nodes;\t\t/* node universe for reservation */\n\tchar *partition;\t\t/* name of the partition in which the reservation was confirmed */\n\tselspec *select_orig;\t\t/* original schedselect pre-alter */\n\tselspec *select_standing;\t/* original schedselect for standing reservations */\n\tstd::vector<nspec *> orig_nspec_arr;\t\t/* original non-shrunk exec_vnode with exec_vnode chunk mapped 
to select chunk */\n};\n\n/* resource reservation - used for both jobs and advanced reservations */\nclass resource_resv\n{\n\tpublic:\n\tbool can_not_run:1;   /* res resv can not run this cycle */\n\tbool can_never_run:1; /* res resv can never run and will be deleted */\n\tbool can_not_fit:1;   /* res resv can not fit into node group */\n\tbool is_invalid:1;    /* res resv is invalid and will be ignored */\n\tbool is_peer_ob:1;    /* res resv came from a peer server */\n\n\tbool is_job:1;\t       /* res resv is a job */\n\tbool is_prov_needed:1;   /* res resv requires provisioning */\n\tbool is_shrink_to_fit:1; /* res resv is a shrink-to-fit job */\n\tbool is_resv:1;\t       /* res resv is an advanced reservation */\n\n\tbool will_use_multinode:1;\t/* res resv will use multiple nodes */\n\n\tconst std::string name;\t\t/* name of res resv */\n\tstd::string user;\t\t/* username of the owner of the res resv */\n\tstd::string group;\t\t/* exec group of owner of res resv */\n\tstd::string project;\t\t/* exec project of owner of res resv */\n\tchar *nodepart_name;\t\t/* name of node partition to run res resv in */\n\n\tlong sch_priority;\t\t/* scheduler priority of res resv */\n\tint rank;\t\t\t/* unique numeric identifier for resource_resv */\n\tint ec_index;\t\t\t/* Index into server's job_set array */\n\n\ttime_t qtime;\t\t\t/* time res resv was submitted */\n\tlong long qrank;\t\t/* time on which we might need to stabilize the sort */\n\ttime_t start;\t\t\t/* start time (UNDEFINED means no start time) */\n\ttime_t end;\t\t\t/* end time (UNDEFINED means no end time) */\n\ttime_t duration;\t\t/* duration of resource resv request */\n\ttime_t hard_duration;\t\t/* hard duration of resource resv request */\n\ttime_t min_duration;\t\t/* minimum duration of STF job */\n\n\tresource_req *resreq;\t\t/* list of resources requested */\n\tselspec *select;\t\t/* select spec */\n\tselspec *execselect;\t\t/* select spec from exec_vnode and resv_nodes */\n\tplace *place_spec;\t\t/* 
placement spec */\n\n\tserver_info *server;\t\t/* pointer to server which owns res resv */\n\tnode_info **ninfo_arr; \t\t/* nodes belonging to res resv */\n\tstd::vector<nspec *> nspec_arr;\t\t/* exec vnode of object in internal sched form (one nspec per node) */\n\n\tjob_info *job;\t\t\t/* pointer to job specific structure */\n\tresv_info *resv;\t\t/* pointer to reservation specific structure */\n\n\tchar *aoename;\t\t\t   /* store name of aoe if requested */\n\tchar *eoename;\t\t\t   /* store name of eoe if requested */\n\tchar **node_set_str;\t\t   /* user specified node string */\n\tnode_info **node_set;\t\t   /* node array specified by node_set_str */\n#ifdef NAS\t\t\t\t   /* localmod 034 */\n\tenum site_j_share_type share_type; /* How resv counts against group share */\n#endif\t\t\t\t\t   /* localmod 034 */\n\tint resresv_ind;\t\t   /* resource_resv index in all_resresv array */\n\ttimed_event *run_event;\t\t   /* run event in calendar */\n\ttimed_event *end_event;\t\t   /* end event in calendar */\n\n\texplicit resource_resv(const std::string& rname);\n\tvirtual ~resource_resv();\n};\n\nclass resource_type\n{\n\tpublic:\n\t/* non consumable - used for selection only (e.g. 
arch) */\n\tbool is_non_consumable:1;\n\tbool is_string:1;\n\tbool is_boolean:1; /* value == 1 for true and 0 for false */\n\n\t/* consumable - numeric resource which is consumed and may have a max limit */\n\tbool is_consumable:1;\n\tbool is_num:1;\n\tbool is_long:1;\n\tbool is_float:1;\n\tbool is_size:1;\t/* all sizes are converted into kb */\n\tbool is_time:1;\n\tresource_type();\n};\n\nstruct schd_resource\n{\n\tconst char *name;\t\t\t/* name of the resource - reference to the definition name */\n\tresource_type type;\t/* resource type */\n\n\tchar *orig_str_avail;\t\t/* original resources_available string */\n\n\tchar *indirect_vnode_name;\t/* name of the vnode to get the value from */\n\tschd_resource *indirect_res;\t/* ptr to indirect resource */\n\n\tsch_resource_t avail;\t\t/* available amount of the resource */\n\tchar **str_avail;\t\t/* the string form of avail */\n\tsch_resource_t assigned;\t/* amount of the resource assigned */\n\tchar *str_assigned;\t\t/* the string form of assigned */\n\n\tresdef *def;\t\t\t/* resource definition */\n\n\tstruct schd_resource *next;\t/* next resource in list */\n};\n\nstruct resource_req\n{\n\tconst char *name;\t\t\t/* name of the resource - reference to the definition name */\n\tresource_type type;\t/* resource type information */\n\n\tsch_resource_t amount;\t\t/* numeric value of resource */\n\tchar *res_str;\t\t\t/* string value of resource */\n\tresdef *def;\t\t\t/* definition of resource */\n\tstruct resource_req *next;\t/* next resource_req in list */\n};\n\nclass resdef\n{\n\tpublic:\n\tconst std::string name;\t/* name of resource */\n\tresource_type type;\t/* resource type */\n\tunsigned int flags;\t/* resource flags (see pbs_ifl.h) */\n\tresdef(char *rname, unsigned int rflags, resource_type rtype) : name(rname), type(rtype), flags(rflags) {}\n};\n\nclass prev_job_info\n{\n\tpublic:\n\tstd::string name;\t/* name of job */\n\tstd::string entity_name;\t/* fair share entity of job */\n\tresource_req *resused;\t/* 
resources used by the job */\n\tprev_job_info(const std::string& pname, const std::string& ename, resource_req *rused);\n\tprev_job_info(const prev_job_info &);\n\tprev_job_info(prev_job_info &&);\n\tprev_job_info& operator=(const prev_job_info&);\n\tvirtual ~prev_job_info();\n};\n\nclass counts\n{\n\tpublic:\n\tstd::string name;\t\t/* name of entity */\n\tint running;\t\t\t/* count of running jobs in object */\n\tint soft_limit_preempt_bit;\t/* Place to store preempt bit if entity is over limits */\n\tresource_count *rescts;\t\t/* resources used */\n\texplicit counts(const std::string &);\n\tcounts(const counts &);\n\tcounts& operator=(const counts &);\n\tvirtual ~counts();\n};\n\nstruct resource_count\n{\n\tconst char *name;\t\t    /* resource name */\n\tresdef *def;\t\t    /* definition of resource */\n\tsch_resource_t amount;\t    /* amount of resource used */\n\tint soft_limit_preempt_bit; /* Place to store preempt bit if resource of an entity is over limits */\n\tstruct resource_count *next;\n};\n\n/* global data types */\n\n/* fairshare head structure */\nclass fairshare_head\n{\n\tpublic:\n\tgroup_info *root;\t\t\t/* root of fairshare tree */\n\ttime_t last_decay;\t\t\t/* last time tree was decayed */\n\tfairshare_head();\n\tfairshare_head(fairshare_head&);\n\tfairshare_head& operator=(fairshare_head&);\n\tvirtual ~fairshare_head();\n};\n\nclass group_info\n{\n\tpublic:\n\tstd::string name;\t\t\t\t/* name of user/group */\n\tint resgroup;\t\t\t\t/* resgroup the group is in */\n\tint cresgroup;\t\t\t\t/* resgroup of the children of group */\n\tint shares;\t\t\t\t/* number of shares this group has */\n\tfloat tree_percentage;\t\t\t/* overall percentage the group has */\n\tfloat group_percentage;\t\t\t/* percentage within fairshare group (i.e., shares/group_shares) */\n\n\t/* There are two usage elements per entity.  The usage element is used to\n\t * hold the real usage for the entity.  The temp_usage is more of a scratch\n\t * variable.  
At the beginning of the cycle, usage is copied into temp_usage\n\t * and from then on, only temp_usage is consulted for fairshare usage\n\t */\n\tusage_t usage;\t\t\t\t/* calculated usage info */\n\tusage_t temp_usage;\t\t\t/* usage plus any temporary usage */\n\tfloat usage_factor;\t\t\t/* usage calculation taking parent's usage into account: number between 0 and 1 */\n\n\tstd::vector<group_info *> gpath;\t/* path from the root of the tree */\n\n\tgroup_info *parent;\t\t\t/* parent node */\n\tgroup_info *sibling;\t\t\t/* sibling node */\n\tgroup_info *child;\t\t\t/* child node */\n\texplicit group_info(const std::string& gname);\n\tgroup_info(group_info&);\n\tgroup_info &operator=(const group_info &);\n};\n\n/**\n * Set of equivalent resresvs.  It is used to track that if one member can't run,\n * the rest can't either.  The set is defined by a number of attributes of the resresv.\n * If an attribute does not matter, it is not used and is set to NULL\n * @see create_resresv_set_by_resresv() for reasons why members can be NULL\n */\nstruct resresv_set\n{\n\tbool can_not_run:1;\t\t/* set can not run */\n\tschd_error *err;\t\t/* reason why set can not run */\n\tchar *user;\t\t\t/* user of set, can be NULL */\n\tchar *group;\t\t\t/* group of set, can be NULL */\n\tchar *project;\t\t\t/* project of set, can be NULL */\n\tselspec *select_spec;\t\t/* select spec of set */\n\tplace *place_spec;\t\t/* place spec of set */\n\tresource_req *req;\t\t/* ATTR_L (qsub -l) resources of set.  
Only contains resources on the resources line */\n\tqueue_info *qinfo;\t\t/* The queue the resresv is in if the queue has nodes associated */\n};\n\nstruct node_partition\n{\n\tbool ok_break:1;\t/* OK to break up chunks on this node part */\n\tbool excl:1;\t\t/* partition should be allocated exclusively */\n\tchar *name;\t\t/* res_name=res_val */\n\t/* name of resource and value which define the node partition */\n\tresdef *def;\n\tchar *res_val;\n\tint tot_nodes;\t\t/* the total number of nodes  */\n\tint free_nodes;\t\t/* the number of nodes in state Free  */\n\tschd_resource *res;\t/* total amount of resources in node part */\n\tnode_info **ninfo_arr;\t/* array of pointers to node structures  */\n\tnode_bucket **bkts;\t/* node buckets for node part */\n\tint rank;\t\t/* unique numeric identifier for node partition */\n};\n\nclass np_cache\n{\n\tpublic:\n\tnode_info **ninfo_arr;\t\t/* ptr to array of nodes used to create pools */\n\tstd::vector<std::string> resnames;\t\t/* resource names used to create partitions */\n\tnode_partition **nodepart;\t/* node partitions */\n\tint num_parts;\t\t\t/* number of partitions in nodepart */\n\tnp_cache();\n\tnp_cache(node_info **, const std::vector<std::string>&, node_partition **, int);\n\tnp_cache(const np_cache &) = delete;\n\tnp_cache& operator=(const np_cache &) = delete;\n\tvirtual ~np_cache();\n};\n\n/* header to usage file.  Needs to be EXACTLY the same size as a\n * group_node_usage for backwards compatibility\n * tag defined in config.h\n */\nstruct group_node_header\n{\n\tchar tag[9];\t\t/* usage file \"magic number\" */\n\tusage_t version;\t/* usage file version number */\n};\n\n/* This structure is used to write out the usage to disk\n * Version 1 was just successive group_node_usage structures written to disk\n * with no header or anything.\n */\nstruct group_node_usage_v1\n{\n\tchar name[9];\n\tusage_t usage;\n};\n\n/* This is the second attempt at a good usage file.  
The first became obsolete\n * when users became entities and entities were no longer constrained by the\n * 8 characters of usernames.  Usage file version 2 also contains the last\n * decay time so it can be saved over restarts of the scheduler\n */\nstruct group_node_usage_v2\n{\n\tchar name[USAGE_NAME_MAX];\n\tusage_t usage;\n};\n\nstruct usage_info\n{\n\tchar *name;\t\t\t/* name of the user */\n\tstruct resource_req *reslist;\t/* list of resources */\n\tint computed_value;\t\t/* value computed from usage info */\n};\n\nstruct t\n{\n\tunsigned int hour;\n\tunsigned int min;\n\tunsigned int none;\n\tunsigned int all;\n};\n\nstruct sort_info\n{\n\tstd::string res_name;\t\t/* Name of sorting resource */\n\tresdef *def;\t\t\t/* Definition of sorting resource */\n\tenum sort_order order;\t\t/* Ascending or Descending sort */\n\tenum resource_fields res_type;\t/* resources_available, resources_assigned, etc */\n};\n\nstruct sort_conv\n{\n\tconst char *config_name;\n\tconst char *res_name;\n\tenum sort_order order;\n};\n\n/* structure to convert an enum to a string or back again */\nstruct enum_conv\n{\n\tint value;\n\tconst char *str;\n};\n\nstruct timegap\n{\n\ttime_t from;\n\ttime_t to;\n\ttimegap(time_t tfrom, time_t tto): from(tfrom), to(tto) {}\n};\n\nstruct dyn_res\n{\n\tstd::string res;\n\tstd::string command_line;\n\tstd::string script_name;\n\tdyn_res(const char *resource, const char *cmdline, const char *fname): res(resource), command_line(cmdline), script_name(fname) {}\n};\n\nstruct peer_queue\n{\n\tconst std::string local_queue;\n\tconst std::string remote_queue;\n\tconst std::string remote_server;\n\tint peer_sd;\n\tpeer_queue(const char *lqueue, const char *rqueue, const char *rserver): local_queue(lqueue), remote_queue(rqueue), remote_server(rserver) {peer_sd = -1;}\n};\n\nclass 
nspec\n{\n\tpublic:\n\tbool end_of_chunk:1; /* used for putting parens into the execvnode */\n\tbool go_provision:1; /* used to mark a node to be provisioned */\n\tint seq_num;\t\t\t/* sequence number of chunk */\n\tint sub_seq_num;\t\t/* sub sequence number for sort stabilization */\n\tnode_info *ninfo;\n\tresource_req *resreq;\n\tchunk *chk;\n\n\tnspec();\n\tnspec(const nspec &, node_info **, selspec *);\n\t~nspec();\n\t/* We need to have the copy constructor dup everything inside the nspec\n\t * We can't have the default copy constructor copy everything, because\n\t * we'd end up with a pointer to the same resreq\n\t */\n\tnspec(const nspec&) = delete;\n\tnspec &operator=(const nspec &) = delete;\n};\n\nstruct nameval\n{\n\tbool is_set:1;\n\tchar *str;\n\tint value;\n};\n\nstruct config\n{\n\t/* these bits control the scheduling policy\n\t * prime_* is the prime time setting\n\t * non_prime_* is the non-prime setting\n\t */\n\tbool prime_rr:1;\t\t/* round robin through queues*/\n\tbool non_prime_rr:1;\n\tbool prime_bq:1;\t\t/* by queue */\n\tbool non_prime_bq:1;\n\tbool prime_sf:1;\t\t/* strict fifo */\n\tbool non_prime_sf:1;\n\tbool prime_so:1;\t\t/* strict ordering */\n\tbool non_prime_so:1;\n\tbool prime_fs:1;\t\t/* fair share */\n\tbool non_prime_fs:1;\n\tbool prime_bf:1;\t\t/* back filling */\n\tbool non_prime_bf:1;\n\tbool prime_sn:1;\t\t/* sort nodes by priority */\n\tbool non_prime_sn:1;\n\tbool prime_bp:1;\t\t/* backfill around prime time */\n\tbool non_prime_bp:1;\t/* backfill around non prime time */\n\tbool prime_pre:1;\t\t/* preemptive scheduling */\n\tbool non_prime_pre:1;\n\tbool update_comments:1;\t/* should we update comments or not */\n\tbool prime_exempt_anytime_queues:1; /* backfill affects anytime queues */\n\tbool enforce_no_shares:1;\t/* jobs with 0 shares don't run */\n\tbool node_sort_unused:1;\t/* node sorting by unused/assigned is used */\n\tbool resv_conf_ignore:1;\t/* if we want to ignore dedicated time when confirming reservations.  
Move to enum if ever expanded */\n\tbool allow_aoe_calendar:1;\t/* allow jobs requesting aoe in calendar */\n#ifdef NAS /* localmod 034 */\n\tbool prime_sto:1;\t/* shares_track_only--no enforce shares */\n\tbool non_prime_sto:1;\n#endif /* localmod 034 */\n\n\tstd::vector<sort_info> prime_sort;\t/* prime time sort */\n\tstd::vector<sort_info> non_prime_sort;\t/* non-prime time sort */\n\n\tenum smp_cluster_dist prime_smp_dist;\t/* how to dist jobs during prime */\n\tenum smp_cluster_dist non_prime_smp_dist;/* how to dist jobs during nonprime */\n\ttime_t prime_spill;\t\t\t/* the amount of time a job can\n\t\t\t\t\t\t * spill into primetime\n\t\t\t\t\t\t */\n\ttime_t nonprime_spill;\t\t\t/* vice versa for prime_spill */\n\ttime_t decay_time;\t\t\t/* time in seconds for the decay period */\n\tstruct t prime[HIGH_DAY][2];\t\t/* prime time start and prime time end */\n\tstd::vector<int> holidays;\t\t/* holidays in Julian date */\n\tint holiday_year;\t\t\t/* the year the holidays are for */\n\tstd::vector<struct timegap> ded_time;\t/* dedicated times */\n\tint unknown_shares;\t\t\t/* unknown group shares */\n\tint max_preempt_attempts;\t\t/* max num of preempt attempts per cyc */\n\tint max_jobs_to_check;\t\t\t/* max number of jobs to check in cyc */\n\tstd::string ded_prefix;\t\t\t/* prefix to dedicated queues */\n\tstd::string pt_prefix;\t\t\t/* prefix to primetime queues */\n\tstd::string npt_prefix;\t\t\t/* prefix to non primetime queues */\n\tstd::string fairshare_res;\t\t/* resource to calc fairshare usage */\n\tfloat fairshare_decay_factor;\t\t/* decay factor used when decaying fairshare tree */\n\tstd::string fairshare_ent;\t\t\t/* job attribute to use as fs entity */\n\tstd::unordered_set<std::string> res_to_check;\t\t/* the resources to schedule on */\n\tstd::unordered_set<resdef *> resdef_to_check;\t\t/* the res to schedule on in def form */\n\tstd::unordered_set<std::string> ignore_res;\t\t/* resources - unset implies infinite */\n\t/* order to preempt jobs 
*/\n\tstd::vector<sort_info> prime_node_sort;\t/* node sorting primetime */\n\tstd::vector<sort_info> non_prime_node_sort;\t/* node sorting non primetime */\n\tstd::vector<dyn_res> dynamic_res; /* for server_dyn_res */\n\tstd::vector<peer_queue> peer_queues;/* peer local -> remote queue map */\n#ifdef NAS\n\t/* localmod 034 */\n\ttime_t max_borrow;\t\t\t/* job share borrowing limit */\n\tint per_share_topjobs;\t\t/* per share group guaranteed top jobs*/\n\t/* localmod 038 */\n\tint per_queues_topjobs;\t\t/* per queues guaranteed top jobs */\n\t/* localmod 030 */\n\tint min_intrptd_cycle_length;\t\t/* min length of interrupted cycle */\n\tint max_intrptd_cycles;\t\t/* max consecutive interrupted cycles */\n#endif\n\n\t/* selection criteria of nodes for provisioning */\n\tenum provision_policy_types provision_policy;\n\tconfig();\n};\n\nstruct rescheck\n{\n\tchar *name;\n\tchar *comment_msg;\n\tchar *debug_msg;\n};\n\nstruct event_list\n{\n\tbool eol:1;\t\t/* we've reached the end of time */\n\ttimed_event *events;\t\t/* the calendar of events */\n\ttimed_event *next_event;\t/* the next event to be performed */\n\ttimed_event *first_run_event;\t/* The first run event in the calendar */\n\ttime_t *current_time;\t\t/* [reference] current time in the calendar */\n};\n\nstruct timed_event\n{\n\tbool disabled:1;\t/* event is disabled - skip it in simulation */\n\tstd::string name;\t\n\tenum timed_event_types event_type;\n\ttime_t event_time;\n\tevent_ptr_t *event_ptr;\n\tevent_func_t event_func;\n\tvoid *event_func_arg;\t\t/* optional argument to function - not freed */\n\ttimed_event *next;\n\ttimed_event *prev;\n};\n\nstruct te_list {\n\tte_list *next;\n\ttimed_event *event;\n};\n\nstruct bucket_bitpool {\n\tpbs_bitmap *truth;\t\t/* The actual bits.  This only changes if the bitmaps are changing */\n\tint truth_ct;\t\t\t/* number of 1 bits in truth bitmap*/\n\tpbs_bitmap *working;\t\t/* Used for short lived operations.  Usually truth is copied into working. 
*/\n\tint working_ct;\t\t\t/* number of 1 bits in working bitmap */\n};\n\nstruct node_bucket {\n\tchar *name;\t\t\t/* Name of bucket: resource spec + queue + priority */\n\tschd_resource *res_spec;\t/* resources that describe the bucket */\n\tqueue_info *queue;\t\t/* queue that nodes in the bucket are associated with */\n\tint priority;\t\t\t/* priority of nodes in the bucket */\n\tpbs_bitmap *bkt_nodes;\t\t/* bitmap of the nodes in the bucket */\n\tbucket_bitpool *free_pool;\t/* bit pool of nodes that are free */\n\tbucket_bitpool *busy_later_pool;/* bit pool of nodes that are free now, but are busy later */\n\tbucket_bitpool *busy_pool;\t/* bit pool of nodes that are busy now */\n\tint total;\t\t\t/* total number of nodes in bucket */\n};\n\nstruct node_bucket_count {\n\tnode_bucket *bkt;\t\t/* node bucket */\n\tint chunk_count;\t\t/* number of chunks bucket can satisfy */\n};\n\nstruct chunk_map {\n\tchunk *chk;\n\tnode_bucket_count **bkt_cnts;\t/* buckets job can run in and chunk counts */\n\tpbs_bitmap *node_bits;\t\t/* assignment of nodes from buckets */\n};\n\nstruct resresv_filter {\n\tresource_resv *job;\n\tschd_error *err;\t\t/* reason why set can not run */\n};\n\nclass sched_exception: public std::exception\n{\n\tpublic:\n\tsched_exception(const sched_exception &e);\n\tsched_exception &operator=(const sched_exception &err);\n\tsched_exception(const std::string &str, const enum sched_error_code e);\n\tconst char *what();\n\tenum sched_error_code get_error_code() const;\n\tconst std::string& get_message() const;\n\tsched_exception() = delete;\n\n\tprivate:\n\tstd::string message;\n\tenum sched_error_code error_code;\n\n};\n#endif\t/* _DATA_TYPES_H */\n\n// clang-format on"
  },
  {
    "path": "src/scheduler/dedtime.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    dedtime.cpp\n *\n * @brief\n * \t\tdedtime.cpp - This file contains functions related to dedicated time.\n *\n * Functions included are:\n * \tparse_ded_file()\n * \tcmp_ded_time()\n * \tis_ded_time()\n * \tfind_next_dedtime()\n *\n */\n\n#include <algorithm>\n\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <time.h>\n#include <memory.h>\n#include <errno.h>\n\n#include <log.h>\n\n#include \"data_types.h\"\n#include \"misc.h\"\n#include \"dedtime.h\"\n#include \"globals.h\"\n\n/**\n * @brief\n *\t\tparse_ded_file - read in dedicated times from file\n *\n * @param[in]\tfilename\t-\tfilename of dedicated time file\n *\n * @return\t0 on success, non-zero on failure\n *\n * @par NOTE:\n * \t\tmodifies conf data structure\n *\n * @note\n *\tFORMAT:      start         finish\n *\t\tMM/DD/YYYY HH:MM MM/DD/YYYY HH:MM\n */\nint\nparse_ded_file(const char *filename)\n{\n\tFILE *fp;\t\t  /* file pointer for the dedtime file */\n\tchar line[256];\t\t  /* a buffer for a line from the file */\n\tint error = 0;\t\t  /* boolean: is there an error? 
*/\n\tstruct tm tm_from, tm_to; /* tm structs used to convert to time_t */\n\ttime_t from, to;\t  /* time_t values for dedtime start - end */\n\n\tif ((fp = fopen(filename, \"r\")) == NULL) {\n\t\tsprintf(log_buffer, \"Error opening file %s\", filename);\n\t\tlog_err(errno, \"parse_ded_file\", log_buffer);\n\t\treturn 1;\n\t}\n\n\t// We are rereading the dedtime file.  The current dedtime might not exist any more.\n\tcstat.is_ded_time = false;\n\tconf.ded_time.clear();\n\n\twhile (fgets(line, 256, fp) != NULL) {\n\t\tif (!skip_line(line)) {\n\t\t\t/* mktime() will figure out if it is dst or not if tm_isdst == -1 */\n\t\t\tmemset(&tm_from, 0, sizeof(struct tm));\n\t\t\ttm_from.tm_isdst = -1;\n\n\t\t\tmemset(&tm_to, 0, sizeof(struct tm));\n\t\t\ttm_to.tm_isdst = -1;\n\n\t\t\tif (sscanf(line, \"%d/%d/%d %d:%d %d/%d/%d %d:%d\", &tm_from.tm_mon, &tm_from.tm_mday, &tm_from.tm_year, &tm_from.tm_hour, &tm_from.tm_min, &tm_to.tm_mon, &tm_to.tm_mday, &tm_to.tm_year, &tm_to.tm_hour, &tm_to.tm_min) != 10)\n\t\t\t\terror = 1;\n\t\t\telse {\n\t\t\t\t/* tm_mon starts at 0, where the file will start at 1 */\n\t\t\t\ttm_from.tm_mon--;\n\n\t\t\t\t/* the MM/DD/YY is the wrong date format, but we will accept it anyway */\n\t\t\t\t/* if year is less than 90, assume year is > 2000 */\n\t\t\t\tif (tm_from.tm_year < 90)\n\t\t\t\t\ttm_from.tm_year += 100;\n\n\t\t\t\t/* MM/DD/YYYY is the correct date format */\n\t\t\t\tif (tm_from.tm_year > 1900)\n\t\t\t\t\ttm_from.tm_year -= 1900;\n\t\t\t\tfrom = mktime(&tm_from);\n\n\t\t\t\ttm_to.tm_mon--;\n\t\t\t\t/* apply the same year adjustments to the end date (was wrongly adjusting tm_from twice) */\n\t\t\t\tif (tm_to.tm_year < 90)\n\t\t\t\t\ttm_to.tm_year += 100;\n\t\t\t\tif (tm_to.tm_year > 1900)\n\t\t\t\t\ttm_to.tm_year -= 1900;\n\t\t\t\tto = mktime(&tm_to);\n\n\t\t\t\t/* ignore all dedtime which has passed */\n\t\t\t\tif (!(from < cstat.current_time && to < cstat.current_time))\n\t\t\t\t\tconf.ded_time.emplace_back(from, to);\n\n\t\t\t\tif (from > to) {\n\t\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"From date is greater than To date 
in the line - '%s'.\", line);\n\t\t\t\t\tlog_err(-1, \"Dedicated Time Conflict\", log_buffer);\n\t\t\t\t\terror = 1;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (error) {\n\t\t\t\tprintf(\"Error: %s\\n\", line);\n\t\t\t\terror = 0;\n\t\t\t}\n\t\t}\n\t}\n\t/* sort dedtime in ascending order with all 0 elements at the end */\n\tstd::sort(conf.ded_time.begin(), conf.ded_time.end(), cmp_ded_time);\n\tfclose(fp);\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tcmp_ded_time - compare function used to sort the ded time array\n *\n * @param[in]\tt1\t-\tfirst timegap\n * @param[in]\tt2\t-\tsecond timegap\n *\n * @par\n *\t  Sort Keys:\n *\t    - zero elements to the end of the array\n *\t    - ascending by the start time\n *\n */\nbool\ncmp_ded_time(const timegap &t1, const timegap &t2)\n{\n\tif (t1.from == 0 && t2.from != 0)\n\t\treturn false;\n\telse if (t2.from == 0 && t1.from != 0)\n\t\treturn true;\n\n\treturn t1.from < t2.from;\n}\n\n/**\n * @brief\n * \t\tchecks if it is dedicated time at time t\n *\n * @param[in]\tt\t-\tthe time to check\n *\n * @return\tbool\n * @retval\ttrue if it is currently ded time\n * @retval\tfalse if it is not ded time\n *\n */\nbool\nis_ded_time(time_t t)\n{\n\tif (t == 0)\n\t\tt = cstat.current_time;\n\n\tstruct timegap ded = find_next_dedtime(t);\n\n\tif (t >= ded.from && t < ded.to)\n\t\treturn true;\n\telse\n\t\treturn false;\n}\n\n/**\n * @brief\n * \t\tfind the next dedtime after time t\n *\n * @param[in]\tt\t-\ta time to find the next dedtime after\n *\n * @return\tthe next dedtime or empty timegap if no dedtime\n */\nstruct timegap\nfind_next_dedtime(time_t t)\n{\n\tfor (const auto &dt : conf.ded_time)\n\t\tif (dt.to >= t)\n\t\t\treturn dt;\n\n\treturn {0, 0};\n}\n"
  },
  {
    "path": "src/scheduler/dedtime.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _DEDTIME_H\n#define _DEDTIME_H\n\n#include <time.h>\n\n/*\n *      parse_ded_file - read in dedicated times from file\n *\n *      FORMAT: start - finish\n *              MM/DD/YYYY HH:MM MM/DD/YYYY HH:MM\n */\nint parse_ded_file(const char *filename);\n\n/*\n *\n *      cmp_ded_time - compare function for std::sort for the ded time array\n *\n */\nbool cmp_ded_time(const timegap &t1, const timegap &t2);\n\n/*\n *      is_ded_time - checks if it is currently dedicated time\n */\nbool is_ded_time(time_t t);\n\n/*\n *\n *\tfind_next_dedtime - find the next dedtime.  If t is specified\n *\t\t\t    find the next dedtime after time t\n *\n *\t  t - a time to find the next dedtime after\n *\n *\treturn the next dedtime or empty timegap if no dedtime\n */\nstruct timegap find_next_dedtime(time_t t);\n\n#endif /* _DEDTIME_H */\n"
  },
  {
    "path": "src/scheduler/fairshare.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    fairshare.cpp\n *\n * @brief\n * \t\tfairshare.cpp - This file contains functions related to fairshare scheduling.\n *\n * Functions included are:\n * \tadd_child()\n * \tadd_unknown()\n * \tfind_group_info()\n * \tfind_alloc_ginfo()\n * \tnew_group_info()\n * \tparse_group()\n * \tpreload_tree()\n * \tcount_shares()\n * \tcalc_fair_share_perc()\n * \ttest_perc()\n * \tupdate_usage_on_run()\n * \tdecay_fairshare_tree()\n * \tcompare_path()\n * \tprint_fairshare()\n * \twrite_usage()\n * \trec_write_usage()\n * \tread_usage()\n * \tread_usage_v1()\n * \tread_usage_v2()\n * \tover_fs_usage()\n * \tdup_fairshare_tree()\n * \tfree_fairshare_tree()\n * \treset_temp_usage()\n *\n */\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <errno.h>\n\n#include <log.h>\n\n#include \"data_types.h\"\n#include \"job_info.h\"\n#include \"constant.h\"\n#include \"fairshare.h\"\n#include \"globals.h\"\n#include \"misc.h\"\n#include \"config.h\"\n#include \"log.h\"\n#include \"fifo.h\"\n#include \"resource_resv.h\"\n#include \"resource.h\"\n#ifdef NAS /* localmod 041 */\n#include \"sort.h\"\n#endif\n\nextern time_t last_decay;\n\n/**\n * @brief\n *\t\tadd_child - add a group_info to the resource group tree\n *\n * @param[out]\tginfo\t-\tginfo to add to the tree\n * 
@param[in,out]\tparent\t-\tparent ginfo\n *\n * @return\tnothing\n *\n */\nvoid\nadd_child(group_info *ginfo, group_info *parent)\n{\n\tif (parent != NULL) {\n\t\tginfo->sibling = parent->child;\n\t\tparent->child = ginfo;\n\t\tginfo->parent = parent;\n\t\tginfo->resgroup = parent->cresgroup;\n\t\tginfo->gpath = create_group_path(ginfo);\n\t}\n}\n\n/**\n * @brief\n * \t\tadd a ginfo to the \"unknown\" group\n *\n * @param[in]\tginfo\t-\tginfo to add\n * @param[in]\troot\t-\troot of fairshare tree\n *\n * @return\tnothing\n *\n */\nvoid\nadd_unknown(group_info *ginfo, group_info *root)\n{\n\tgroup_info *unknown; /* ptr to the \"unknown\" group */\n\n\tunknown = find_group_info(\"unknown\", root);\n\tadd_child(ginfo, unknown);\n\tcalc_fair_share_perc(unknown->child, UNSPECIFIED);\n}\n\n/**\n * @brief\n *\t\tfind_group_info - recursive function to find a group_info in the\n *\t\t\t  resgroup tree\n *\n * @param[in]\tname\t-\tname of the ginfo to find\n * @param[in]\troot\t-\tthe root of the current sub-tree\n *\n * @return\tthe found group_info or NULL\n *\n */\ngroup_info *\nfind_group_info(const std::string &name, group_info *root)\n{\n\tgroup_info *ginfo; /* the found group */\n\tif (root == NULL || name == root->name)\n\t\treturn root;\n\n\tginfo = find_group_info(name, root->sibling);\n\tif (ginfo == NULL)\n\t\tginfo = find_group_info(name, root->child);\n\n\treturn ginfo;\n}\n\n/**\n * @brief\n *\t\tfind_alloc_ginfo - tries to find a ginfo in the fair share tree.  
If it\n *\t\t\t  can not find the ginfo, then allocate a new one and\n *\t\t\t  add it to the \"unknown\" group\n *\n * @param[in]\tname\t-\tname of the ginfo to find\n * @param[in]\troot\t-\troot of the fairshare tree\n *\n * @return\tthe found ginfo or the newly allocated ginfo\n *\n */\ngroup_info *\nfind_alloc_ginfo(const std::string &name, group_info *root)\n{\n\tgroup_info *ginfo; /* the found group or allocated group */\n\n\tif (root == NULL)\n\t\treturn NULL;\n\n\tginfo = find_group_info(name, root);\n\n\tif (ginfo == NULL) {\n\t\tif ((ginfo = new group_info(name)) == NULL)\n\t\t\treturn NULL;\n\n\t\tginfo->shares = 1;\n\t\tadd_unknown(ginfo, root);\n\t}\n\treturn ginfo;\n}\n\n/**\n * @brief\n * \t\tparse the resource group file\n *\n * @param[in]\tfname\t-\tname of the file\n * @param[in]\troot\t-\troot of fairshare tree\n *\n * @return\tsuccess/failure\n *\n *\n * @par FORMAT:   name\tcresgrp\t\tgrpname\t\tshares\n *\t  name    - name of user/grp\n *\t  cresgrp - resource group of the children of this group (if group)\n *\t  grpname - resource group of this user/group\n *\t  shares  - the amount of shares the user/group has in its resgroup\n *\n */\nint\nparse_group(const char *fname, group_info *root)\n{\n\tgroup_info *ginfo;     /* ptr to parent group */\n\tgroup_info *new_ginfo; /* used to add each new group */\n\tchar buf[256];\t       /* used to read each line from the file */\n\tchar *nametok;\t       /* strtok: name of new group */\n\tchar *grouptok;\t       /* strtok: parent group name */\n\tchar *cgrouptok;       /* strtok: resgrp of the children of newgrp */\n\tchar *sharestok;       /* strtok: the amount of shares for newgrp */\n\tFILE *fp;\t       /* file pointer to the resource group file */\n\tchar error = 0;\t       /* boolean: is there an error ? 
*/\n\tint shares;\t       /* number of shares for the new group */\n\tint cgroup;\t       /* resource group of the children of the grp */\n\tchar *endp;\t       /* used for strtol() */\n\tint linenum = 0;       /* current line number in the file */\n\n\tif ((fp = fopen(fname, \"r\")) == NULL) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"Error opening file %s\", fname);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_FILE, LOG_NOTICE, \"\", \"Warning: resource group file error, fair share will not work\");\n\t\treturn 0;\n\t}\n\n\twhile (fgets(buf, 256, fp) != NULL) {\n\t\tif (buf[strlen(buf) - 1] == '\\n')\n\t\t\tbuf[strlen(buf) - 1] = '\\0';\n\t\tlinenum++;\n\t\tif (!skip_line(buf)) {\n\t\t\tnametok = strtok(buf, \" \\t\");\n\t\t\tcgrouptok = strtok(NULL, \" \\t\");\n\t\t\tgrouptok = strtok(NULL, \" \\t\");\n\t\t\tsharestok = strtok(NULL, \" \\t\");\n\n\t\t\tif (nametok == NULL || cgrouptok == NULL ||\n\t\t\t    grouptok == NULL || sharestok == NULL) {\n\t\t\t\terror = 1;\n\t\t\t} else if (find_group_info(nametok, root) != NULL) {\n\t\t\t\terror = 1;\n\t\t\t\tsprintf(log_buffer, \"entity %s is not unique\", nametok);\n\t\t\t\tfprintf(stderr, \"%s\\n\", log_buffer);\n\t\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_FILE, LOG_NOTICE,\n\t\t\t\t\t  \"fairshare\", log_buffer);\n\t\t\t} else {\n\t\t\t\tif (!strcmp(grouptok, \"root\"))\n\t\t\t\t\tginfo = find_group_info(FAIRSHARE_ROOT_NAME, root);\n\t\t\t\telse\n\t\t\t\t\tginfo = find_group_info(grouptok, root);\n\n\t\t\t\tif (ginfo != NULL) {\n\t\t\t\t\tshares = strtol(sharestok, &endp, 10);\n\t\t\t\t\tif (*endp == '\\0') {\n\t\t\t\t\t\tcgroup = strtol(cgrouptok, &endp, 10);\n\t\t\t\t\t\tif (*endp == '\\0') {\n\t\t\t\t\t\t\tif ((new_ginfo = new group_info(nametok)) == NULL)\n\t\t\t\t\t\t\t\treturn 0;\n\t\t\t\t\t\t\tnew_ginfo->resgroup = ginfo->cresgroup;\n\t\t\t\t\t\t\tnew_ginfo->cresgroup = cgroup;\n\t\t\t\t\t\t\tnew_ginfo->shares = 
shares;\n\t\t\t\t\t\t\tadd_child(new_ginfo, ginfo);\n\t\t\t\t\t\t} else\n\t\t\t\t\t\t\terror = 1;\n\t\t\t\t\t} else\n\t\t\t\t\t\terror = 1;\n\t\t\t\t} else {\n\t\t\t\t\terror = 1;\n\t\t\t\t\tsprintf(log_buffer, \"Parent ginfo of %s doesn't exist.\", nametok);\n\t\t\t\t\tfprintf(stderr, \"%s\\n\", log_buffer);\n\t\t\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_FILE, LOG_NOTICE,\n\t\t\t\t\t\t  \"fairshare\", log_buffer);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (error) {\n\t\t\t\tsprintf(log_buffer, \"resgroup: error on line %d.\", linenum);\n\t\t\t\tfprintf(stderr, \"%s\\n\", log_buffer);\n\t\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_FILE, LOG_NOTICE,\n\t\t\t\t\t  \"fairshare\", log_buffer);\n\t\t\t}\n\n\t\t\terror = 0;\n\t\t}\n\t}\n\tfclose(fp);\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tload the root node into the fair share tree\n *\t\tthe root node is the entire machine.  Also load\n *\t\tthe \"unknown\" group.  This group is for any user that\n *\t\tis not specified in the resource group file.\n *\n * @return\tnew head and root of a fairshare tree\n *\n */\nfairshare_head *\npreload_tree()\n{\n\tfairshare_head *head;\n\tgroup_info *root;\n\tgroup_info *unknown; /* pointer to the \"unknown\" group */\n\n\tif ((head = new fairshare_head()) == NULL)\n\t\treturn 0;\n\n\troot = new group_info(FAIRSHARE_ROOT_NAME);\n\n\thead->root = root;\n\n\troot->resgroup = -1;\n\troot->cresgroup = 0;\n\troot->tree_percentage = 1.0;\n\n\tif ((unknown = new group_info(UNKNOWN_GROUP_NAME)) == NULL) {\n\t\tdelete head;\n\t\treturn NULL;\n\t}\n\n\tunknown->shares = conf.unknown_shares;\n\tunknown->resgroup = 0;\n\tunknown->cresgroup = 1;\n\tunknown->parent = root;\n\tadd_child(unknown, root);\n\treturn head;\n}\n\n/**\n * @brief\n *\t\tcount_shares - count the shares in a resource group\n *\t\t       a resource group is a group_info and all of its\n *\t\t       siblings\n *\n * @param[in]\tgrp\t-\tThe start of a sibling chain\n *\n * @return\tthe number of shares\n *\n 
*/\nint\ncount_shares(group_info *grp)\n{\n\tint shares = 0;\t     /* accumulator to count the shares */\n\tgroup_info *cur_grp; /* the current group in a sibling chain */\n\n\tcur_grp = grp;\n\n\twhile (cur_grp != NULL) {\n\t\tshares += cur_grp->shares;\n\t\tcur_grp = cur_grp->sibling;\n\t}\n\n\treturn shares;\n}\n\n/**\n * @brief\n *\t\tcalc_fair_share_perc - walk the fair share group tree and calculate\n *\t\t\t       the overall percentage of the machine a user/\n *\t\t\t       group gets if all usage is equal\n *\n * @param[in,out]\troot\t-\tthe root of the current subtree\n * @param[in]\tshares\t-\tthe number of total shares in the group\n *\n * @return\tsuccess/failure\n *\n */\nint\ncalc_fair_share_perc(group_info *root, int shares)\n{\n\tint cur_shares; /* total number of shares in the resgrp */\n\n\tif (root == NULL)\n\t\treturn 0;\n\n\tif (shares == UNSPECIFIED)\n\t\tcur_shares = count_shares(root);\n\telse\n\t\tcur_shares = shares;\n\n\tif (cur_shares * root->parent->tree_percentage == 0) {\n\t\troot->group_percentage = 0;\n\t\troot->tree_percentage = 0;\n\t} else {\n\t\troot->group_percentage = (float) root->shares / cur_shares;\n\t\troot->tree_percentage = root->group_percentage * root->parent->tree_percentage;\n\t}\n\n\tcalc_fair_share_perc(root->sibling, cur_shares);\n\tcalc_fair_share_perc(root->child, UNSPECIFIED);\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tUpdate the usage of a fairshare entity for a job.  
The process\n *         of updating an entity's usage causes the full usage of the job\n *\t       to be accrued to the entity and all groups on the path from the\n *\t       entity to the root of the fairshare tree.\n *\n * @param[in]\tresresv\t-\tthe job to accrue usage\n *\n * @return nothing\n *\n */\nvoid\nupdate_usage_on_run(resource_resv *resresv)\n{\n\tusage_t u;\n\n\tif (resresv == NULL)\n\t\treturn;\n\n\tif (!resresv->is_job || resresv->job == NULL)\n\t\treturn;\n\n\tu = formula_evaluate(conf.fairshare_res.c_str(), resresv, resresv->resreq);\n\tif (resresv->job->ginfo != NULL) {\n\t\tfor (auto &g : resresv->job->ginfo->gpath)\n\t\t\tg->temp_usage += u;\n\t} else\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, resresv->name,\n\t\t\t  \"Job doesn't have a group_info ptr set, usage not updated.\");\n}\n\n/**\n * @brief\n *\t\tdecay_fairshare_tree - decay the usage information kept in the fair\n *\t\t\t       share tree\n *\n * @param[in,out]\troot\t-\tthe root of the fairshare tree\n *\n * @return nothing\n *\n */\nvoid\ndecay_fairshare_tree(group_info *root)\n{\n\tif (root == NULL)\n\t\treturn;\n\n\tdecay_fairshare_tree(root->sibling);\n\tdecay_fairshare_tree(root->child);\n\n\troot->usage *= conf.fairshare_decay_factor;\n\tif (root->usage < FAIRSHARE_MIN_USAGE)\n\t\troot->usage = FAIRSHARE_MIN_USAGE;\n}\n\n/**\n * @brief\n *\t\tcompare_path - compare two group paths and see which is more\n *\t\t\t\tdeserving to run\n * @par\n *\t\tcomparison: usage / priority\n *\n * @param[in]\tgp1\t-\tgroup path 1\n * @param[in]\tgp2\t-\tgroup path 2\n *\n * @return\tint\n * @retval\t-1\t: gp1 is more deserving\n * @retval\t0\t: both are equal\n * @retval\t1\t: gp2 is more deserving\n *\n */\nint\ncompare_path(std::vector<group_info *> &gp1, std::vector<group_info *> &gp2)\n{\n\tdouble curval1, curval2;\n\tint rc = 0;\n\tint len;\n\n\tif (gp1.size() > gp2.size())\n\t\tlen = gp2.size();\n\telse\n\t\tlen = gp1.size();\n\n\tfor (int i = 0; rc == 0 && i < len; 
i++) {\n\t\tif (gp1[i] != gp2[i]) {\n\t\t\tif (gp1[i]->tree_percentage <= 0 && gp2[i]->tree_percentage > 0)\n\t\t\t\treturn 1;\n\t\t\tif (gp1[i]->tree_percentage > 0 && gp2[i]->tree_percentage <= 0)\n\t\t\t\treturn -1;\n\t\t\tif (gp1[i]->tree_percentage <= 0 && gp2[i]->tree_percentage <= 0)\n\t\t\t\treturn 0;\n\n\t\t\tcurval1 = gp1[i]->temp_usage / gp1[i]->tree_percentage;\n\t\t\tcurval2 = gp2[i]->temp_usage / gp2[i]->tree_percentage;\n\n\t\t\tif (curval1 < curval2)\n\t\t\t\trc = -1;\n\t\t\telse if (curval2 < curval1)\n\t\t\t\trc = 1;\n\t\t}\n\t}\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\t\twrite_usage - write the usage information to the usage file\n *\t\t      This function uses a recursive helper function\n *\n * @param[in]\tfilename\t-\tusage file\n * @param[in]\tfhead\t-\tPointer to fairshare_head structure.\n *\n * @return\tsuccess/failure\n *\n */\nint\nwrite_usage(const char *filename, fairshare_head *fhead)\n{\n\tFILE *fp; /* file pointer to usage file */\n\tstruct group_node_header head;\n\n\tif (fhead == NULL)\n\t\treturn 0;\n\n\tif (filename == NULL)\n\t\tfilename = USAGE_FILE;\n\n\tif ((fp = fopen(filename, \"wb\")) == NULL) {\n\t\tsprintf(log_buffer, \"Error opening file %s\", filename);\n\t\tlog_err(errno, \"write_usage\", log_buffer);\n\t\treturn 0;\n\t}\n\n\t/* version 2:\n\t * header\n\t * last_decay\n\t * group_node_usage_v2\n\t * group_node_usage_v2\n\t * ...\n\t */\n\n\tmemset(&head, 0, sizeof(struct group_node_header));\n\n\tpbs_strncpy(head.tag, USAGE_MAGIC, sizeof(head.tag));\n\thead.version = USAGE_VERSION;\n\tfwrite(&head, sizeof(struct group_node_header), 1, fp);\n\tfwrite(&fhead->last_decay, sizeof(time_t), 1, fp);\n\n\trec_write_usage(fhead->root, fp);\n\tfclose(fp);\n\treturn 1;\n}\n\n/**\n * @brief\n *\t\trec_write_usage - recursive helper function which will write out all\n *\t\t\t  the group_info structs of the resgroup tree\n *\n * @param[in]\troot\t-\tthe root of the current subtree\n * @param[in]\tfp\t-\tthe file to write the 
ginfo out to\n *\n * @return nothing\n *\n */\nvoid\nrec_write_usage(group_info *root, FILE *fp)\n{\n\tstruct group_node_usage_v2 grp; /* used to write out usage info */\n\n\tif (root == NULL)\n\t\treturn;\n\n\t/* only write out leaves of the tree (fairshare entities)\n\t * usage defaults to 1 so don't bother writing those out either\n\t * It is possible that the unknown group is empty.  Don't want to write it out\n\t */\n\tif (root->usage != 1 && root->child == NULL && root->name != UNKNOWN_GROUP_NAME) {\n\t\tmemset(&grp, 0, sizeof(struct group_node_usage_v2));\n\t\tsnprintf(grp.name, sizeof(grp.name), \"%s\", root->name.c_str());\n\t\tgrp.usage = root->usage;\n\n\t\tfwrite(&grp, sizeof(struct group_node_usage_v2), 1, fp);\n\t}\n\n\trec_write_usage(root->sibling, fp);\n\trec_write_usage(root->child, fp);\n}\n\n/**\n * @brief\n *\t\tread_usage - read the usage information and load it into the\n *\t\t     resgroup tree.\n *\n * @param[in]\tfilename\t-\tThe file which stores the usage information.\n * @param[in]\tflags\t-\tflags to check whether to trim or not.\n * @param[in]\tfhead\t-\tpointer to fairshare_head struct.\n *\n * @return void\n *\n */\nvoid\nread_usage(const char *filename, int flags, fairshare_head *fhead)\n{\n\tFILE *fp;\t\t       /* file pointer to usage file */\n\tstruct group_node_header head; /* usage file header */\n\ttime_t last;\t\t       /* read the last sync from the file */\n\n\tif (fhead == NULL || fhead->root == NULL)\n\t\treturn;\n\n\tif (filename == NULL)\n\t\tfilename = USAGE_FILE;\n\n\tif ((fp = fopen(filename, \"r\")) == NULL) {\n\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_FILE, LOG_WARNING, \"fairshare usage\",\n\t\t\t  \"Creating usage database for fairshare\");\n\t\tfprintf(stderr, \"Creating usage database for fairshare.\\n\");\n\t\treturn;\n\t}\n\tmemset(&head, 0, sizeof(struct group_node_header));\n\t/* read header */\n\tif (fread(&head, sizeof(struct group_node_header), 1, fp) != 0) {\n\t\tif (!strcmp(head.tag, USAGE_MAGIC)) 
{ /* this is a header */\n\t\t\tint error = 0;\n\n\t\t\tif (head.version == 2) {\n\t\t\t\tif (fread(&last, sizeof(time_t), 1, fp) != 0) {\n\t\t\t\t\t/* 946713600 = 1/1/2000 00:00 - before usage version 2 existed */\n\t\t\t\t\tif (last == 0 || last > 946713600)\n\t\t\t\t\t\tfhead->last_decay = last;\n\t\t\t\t\telse\n\t\t\t\t\t\terror = 1;\n\t\t\t\t}\n\t\t\t\tif (!error)\n\t\t\t\t\tread_usage_v2(fp, flags, fhead->root);\n\t\t\t} else\n\t\t\t\terror = 1;\n\n\t\t\tif (error)\n\t\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_FILE, LOG_WARNING,\n\t\t\t\t\t  \"fairshare usage\", \"Invalid usage file header\");\n\n\t\t} else { /* original headerless usage file */\n\t\t\trewind(fp);\n\t\t\tread_usage_v1(fp, fhead->root);\n\t\t}\n\t}\n\n\tfclose(fp);\n}\n\n/**\n * @brief\n * \t\tread version 1 usage file\n *\n * @param[in]\tfp\t-\tthe file pointer to the open file\n * @param[in]\troot\t-\troot of the fairshare tree\n *\n * @return\tint\n *\t@retval\t1\t: success\n *\t@retval\t0\t: failure\n *\n */\nint\nread_usage_v1(FILE *fp, group_info *root)\n{\n\tstruct group_node_usage_v1 grp;\n\tgroup_info *ginfo;\n\n\tif (fp == NULL)\n\t\treturn 0;\n\tmemset(&grp, 0, sizeof(struct group_node_usage_v1));\n\twhile (fread(&grp, sizeof(struct group_node_usage_v1), 1, fp)) {\n\t\tif (grp.usage >= 0 && is_valid_pbs_name(grp.name, USAGE_NAME_MAX)) {\n\t\t\tginfo = find_alloc_ginfo(grp.name, root);\n\t\t\tif (ginfo != NULL) {\n\t\t\t\tginfo->usage = grp.usage;\n\t\t\t\tginfo->temp_usage = grp.usage;\n\t\t\t\tif (ginfo->child == NULL) {\n\t\t\t\t\t/* add usage down the path from the root to our parent */\n\t\t\t\t\tfor (auto &g : ginfo->gpath) {\n\t\t\t\t\t\tif (g == ginfo)\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\tg->usage += grp.usage;\n\t\t\t\t\t\tg->temp_usage += grp.usage;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t} else\n\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_FILE, LOG_WARNING,\n\t\t\t\t  \"fairshare usage\", \"Invalid entity\");\n\t}\n\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tread version 
2 usage file\n *\n * @param[in]\tfp\t- the file pointer to the open file\n * @param[in]\tflags\t- flags to check whether to trim or not.\n * @param[in]\troot\t- root of the fairshare tree\n *\n *\t@retval 1 success\n *\t@retval 0 failure\n *\n */\nint\nread_usage_v2(FILE *fp, int flags, group_info *root)\n{\n\tstruct group_node_usage_v2 grp;\n\tgroup_info *ginfo;\n\n\tif (fp == NULL)\n\t\treturn 0;\n\n\tmemset(&grp, 0, sizeof(struct group_node_usage_v2));\n\twhile (fread(&grp, sizeof(struct group_node_usage_v2), 1, fp)) {\n\t\tif (grp.usage >= 0 && is_valid_pbs_name(grp.name, USAGE_NAME_MAX)) {\n\t\t\t/* if we're trimming the tree, don't add any new nodes which are not\n\t\t\t * already in the resource_group file\n\t\t\t */\n\t\t\tif (flags & FS_TRIM)\n\t\t\t\tginfo = find_group_info(grp.name, root);\n\t\t\telse\n\t\t\t\tginfo = find_alloc_ginfo(grp.name, root);\n\n\t\t\tif (ginfo != NULL) {\n\t\t\t\tginfo->usage = grp.usage;\n\t\t\t\tginfo->temp_usage = grp.usage;\n\t\t\t\tif (ginfo->child == NULL) {\n\t\t\t\t\t/* add usage down the path from the root to our parent */\n\t\t\t\t\tfor (auto &g : ginfo->gpath) {\n\t\t\t\t\t\tif (g == ginfo)\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\tg->usage += grp.usage;\n\t\t\t\t\t\tg->temp_usage += grp.usage;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t} else\n\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_FILE, LOG_WARNING,\n\t\t\t\t  \"fairshare usage\", \"Invalid entity\");\n\t}\n\n\treturn 1;\n}\n\n/**\n * @brief\n *\t\tcreate_group_path - create a path from the root to the leaf of the tree\n *\n * @param[in]\tginfo - the group_root to create the path from\n *\n * @return path\n *\n */\nstd::vector<group_info *>\ncreate_group_path(group_info *ginfo)\n{\n\tstruct group_info *cur;\n\tstd::vector<group_info *> gpath;\n\n\tif (ginfo == NULL)\n\t\treturn {};\n\n\tfor (cur = ginfo; cur != NULL; cur = cur->parent)\n\t\tgpath.insert(gpath.begin(), cur);\n\n\treturn gpath;\n}\n\n/**\n * @brief\n *\t\tover_fs_usage - return true if an entity has used more than its\n *\t\t\tfairshare of the machine.  Overusage is defined as\n *\t\t\ta user using more than their strict percentage of the\n *\t\t\ttotal usage used (the usage of the root node)\n *\n * @param[in]\tginfo\t-\tthe entity to check\n *\n * @return\ttrue/false\n * @retval\t-\ttrue\t: if the user is over their usage\n * @retval\t-\tfalse\t: if under\n *\n */\nint\nover_fs_usage(group_info *ginfo)\n{\n\treturn ginfo->gpath[0]->usage * ginfo->tree_percentage < ginfo->usage;\n}\n\n/**\n * @brief\n *\t\tgroup_info constructor - initialize a new group_info\n *\n */\ngroup_info::group_info(const std::string &gname) : name(gname)\n{\n\tresgroup = UNSPECIFIED;\n\tcresgroup = UNSPECIFIED;\n\tshares = UNSPECIFIED;\n\ttree_percentage = 0.0;\n\tgroup_percentage = 0.0;\n\tusage = FAIRSHARE_MIN_USAGE;\n\ttemp_usage = FAIRSHARE_MIN_USAGE;\n\tusage_factor = 0.0;\n\tparent = NULL;\n\tsibling = NULL;\n\tchild = NULL;\n}\n\ngroup_info::group_info(group_info &oginfo) : name(oginfo.name)\n{\n\tresgroup = oginfo.resgroup;\n\tcresgroup = oginfo.cresgroup;\n\tshares = oginfo.shares;\n\ttree_percentage = oginfo.tree_percentage;\n\tgroup_percentage = oginfo.group_percentage;\n\tusage = oginfo.usage;\n\tusage_factor = oginfo.usage_factor;\n\ttemp_usage = oginfo.temp_usage;\n\tsibling = NULL;\n\tchild = NULL;\n\tparent = NULL;\n}\n\ngroup_info &\ngroup_info::operator=(const group_info &oginfo)\n{\n\tresgroup = oginfo.resgroup;\n\tcresgroup = oginfo.cresgroup;\n\tshares = oginfo.shares;\n\ttree_percentage = oginfo.tree_percentage;\n\tgroup_percentage = oginfo.group_percentage;\n\tusage = oginfo.usage;\n\tusage_factor = oginfo.usage_factor;\n\ttemp_usage = oginfo.temp_usage;\n\tsibling = NULL;\n\tchild = NULL;\n\tparent = NULL;\n\n\treturn *this;\n}\n\n/**\n * @brief\n * \t\tdup_fairshare_tree - duplicate a fairshare tree\n *\n * @param[in]\troot\t-\troot of the tree\n * @param[in] nparent\t-\tthe parent of the root in the new dup'd 
tree\n *\n * @return\tduplicated fairshare tree\n */\ngroup_info *\ndup_fairshare_tree(group_info *root, group_info *nparent)\n{\n\tgroup_info *nroot;\n\tif (root == NULL)\n\t\treturn NULL;\n\n\tnroot = new group_info(*root);\n\n\tif (nroot == NULL)\n\t\treturn NULL;\n\n\tadd_child(nroot, nparent);\n\n\tnroot->sibling = dup_fairshare_tree(root->sibling, nparent);\n\tnroot->child = dup_fairshare_tree(root->child, nroot);\n\n\treturn nroot;\n}\n\n/**\n *\t@brief\n *\t\tfree the entire fairshare tree\n *\n * @param[in]\troot\t-\troot of the tree\n */\nvoid\nfree_fairshare_tree(group_info *root)\n{\n\tif (root == NULL)\n\t\treturn;\n\n\tfree_fairshare_tree(root->sibling);\n\tfree_fairshare_tree(root->child);\n\tdelete root;\n}\n\n/**\n * @brief\n * \t\tconstructor for fairshare head\n */\n\nfairshare_head::fairshare_head()\n{\n\troot = NULL;\n\tlast_decay = 0;\n}\n\n/**\n * @brief\n *\t\tcopy constructor for fairshare_head\n *\n * @param[in]\tofhead\t-\tfairshare_head to dup\n *\n * @return\tduplicated fairshare_head\n * @retval\tNULL\t: fail \n */\nfairshare_head::fairshare_head(fairshare_head &ofhead)\n{\n\tlast_decay = ofhead.last_decay;\n\troot = dup_fairshare_tree(ofhead.root, NULL);\n}\n\n/**\n * @brief copy assignment operator for fairshare_head\n */\nfairshare_head &\nfairshare_head::operator=(fairshare_head &ofhead)\n{\n\tfree_fairshare_tree(root);\n\tlast_decay = ofhead.last_decay;\n\troot = dup_fairshare_tree(ofhead.root, NULL);\n\treturn *this;\n}\n\n/**\n * @brief\n * \t\tdestructor for fairshare_head\n *\n * @param[in]\tfhead\t-\tfairshare_head to be freed.\n */\nfairshare_head::~fairshare_head()\n{\n\tfree_fairshare_tree(root);\n}\n\n/**\n * @brief\n * \t\trecursively walk the fairshare tree resetting temp_usage = usage\n *\n * @param[in]\thead\t-\tfairshare node to reset\n *\n * @return\tvoid\n */\nvoid\nreset_temp_usage(group_info *head)\n{\n\tif (head == NULL)\n\t\treturn;\n\n\thead->temp_usage = 
head->usage;\n\treset_temp_usage(head->sibling);\n\treset_temp_usage(head->child);\n}\n\n/**\n * @brief\n *\t\trecursive helper function to calc_usage_factor()\n * @param root\t- parent fairshare tree node\n * @param ginfo\t- child fairshare tree node\n *\n * @return void\n */\nstatic void\ncalc_usage_factor_rec(group_info *root, group_info *ginfo)\n{\n\tfloat usage;\n\n\tif (root == NULL || ginfo == NULL)\n\t\treturn;\n\n\tusage = ginfo->usage / root->usage;\n\tginfo->usage_factor = usage + ((ginfo->parent->usage_factor - usage) * ginfo->group_percentage);\n\n\tcalc_usage_factor_rec(root, ginfo->sibling);\n\tcalc_usage_factor_rec(root, ginfo->child);\n}\n\n/**\n * @brief\n *\t\tcalculate usage_factor numbers for entire tree.\n *\t\tThe usage_factor is a number that takes the node's usage\n *\t\tplus part of its parent's usage_factor into account.  This\n *\t\twill give a number that is comparable across the tree.\n *\n * @param[in] tree - fairshare tree\n *\n * @return void\n */\nvoid\ncalc_usage_factor(fairshare_head *tree)\n{\n\tgroup_info *ginfo;\n\tgroup_info *root;\n\n\tif (tree == NULL)\n\t\treturn;\n\n\troot = tree->root;\n\t/* Root's children use their real usage as their arbitrary usage */\n\tfor (ginfo = root->child; ginfo != NULL; ginfo = ginfo->sibling) {\n\t\tginfo->usage_factor = ginfo->usage / root->usage;\n\t\tcalc_usage_factor_rec(root, ginfo->child);\n\t}\n}\n\n/**\n * @brief reset the usage of the fairshare tree so the usage can be reread.\n *\tIf the usage is not reset first, any entity that is no longer in the\n *\tfairshare usage file will retain their original usage.\n * @param node - the fairshare node\n */\nvoid\nreset_usage(group_info *node)\n{\n\tif (node == NULL)\n\t\treturn;\n\treset_usage(node->sibling);\n\treset_usage(node->child);\n\tnode->usage = 1;\n\tnode->temp_usage = 1;\n}\n"
  },
  {
    "path": "src/scheduler/fairshare.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _FAIRSHARE_H\n#define _FAIRSHARE_H\n\n#include \"data_types.h\"\n/*\n *      add_child - add a ginfo to the resource group tree\n */\nvoid add_child(group_info *ginfo, group_info *parent);\n\n/*\n *      find_group_info - recursive function to find a ginfo in the\n *      resgroup tree\n */\ngroup_info *find_group_info(const std::string &name, group_info *root);\n\n/*\n *      find_alloc_ginfo - tries to find a ginfo in the fair share tree.  If it\n *                        cannot find the ginfo, then allocate a new one and\n *                        add it to the \"unknown\" group\n */\ngroup_info *find_alloc_ginfo(const std::string &name, group_info *root);\n\n/*\n *\n *\tparse_group - parse the resource group file\n *\n *\t  fname - name of the file\n *\t  root  - root of fairshare tree\n *\n *\treturn success/failure\n *\n *\n *\tFORMAT:   name\tcresgrp\t\tgrpname\t\tshares\n *\n *\t  name    - name of user/grp\n *\t  cresgrp - resource group of the children of this group (if group)\n *\t  grpname - resource group of this user/group\n *\t  shares  - the amount of shares the user/group has in its resgroup\n *\n */\nint parse_group(const char *fname, group_info *root);\n\n/*\n *\n *\tpreload_tree -  load the root node into the fair share tree\n *\t\t\tthe root node is the entire machine.  Also load\n *\t\t\tthe \"unknown\" group.  
This group is for any user that\n *\t\t\tis not specified in the resource group file.\n *\n *\treturn new head and root of a fairshare tree\n *\n */\nfairshare_head *preload_tree(void);\n\n/*\n *      count_shares - count the shares in the current resource group\n */\nint count_shares(group_info *grp);\n\n/*\n *      calc_fair_share_perc - walk the fair share group tree and calculate\n *                             the overall percentage of the machine a user/\n *                             group gets if all usage is equal\n */\nint calc_fair_share_perc(group_info *root, int shares);\n\n/*\n *      update_usage_on_run - update a user's usage information when a\n *                            job is run\n */\nvoid update_usage_on_run(resource_resv *resresv);\n\n/*\n *      decay_fairshare_tree - decay the usage information kept in the fair\n *                             share tree\n */\nvoid decay_fairshare_tree(group_info *root);\n\n/*\n *      write_usage - write the usage information to the usage file\n *                    This function uses a recursive helper function\n */\nint write_usage(const char *filename, fairshare_head *fhead);\n\n/*\n *      rec_write_usage - recursive helper function which will write out all\n *                        the group_info structs of the resgroup tree\n */\nvoid rec_write_usage(group_info *root, FILE *fp);\n\n/*\n *      read_usage - read the usage information and load it into the\n *                   resgroup tree.\n */\nvoid read_usage(const char *filename, int flags, fairshare_head *fhead);\n\n/*\n *      read_usage_v1 - read version 1 usage file\n */\nint read_usage_v1(FILE *fp, group_info *root);\n\n/*\n *      read_usage_v2 - read version 2 usage file\n */\nint read_usage_v2(FILE *fp, int flags, group_info *root);\n\n/*\n *      create_group_path - create a path from the root to the leaf of the tree\n */\nstd::vector<group_info *> create_group_path(group_info *ginfo);\n\n/*\n *      compare_path - compare two 
group paths and see which is more\n *                     deserving to run\n */\nint compare_path(std::vector<group_info *> &gp1, std::vector<group_info *> &gp2);\n\n/*\n *      over_fs_usage - return true if an entity has used more than its\n *                      fairshare of the machine.  Overusage is defined as\n *                      a user using more than its strict percentage of the\n *                      total usage used (the usage of the root node)\n */\nint over_fs_usage(group_info *ginfo);\n\n/*\n *\tdup_fairshare_tree\n *\n *\t  root - root of the tree\n *\t  nparent - the parent of the root in the \"new\" duplicated tree\n *\n *\treturn duplicated fairshare tree\n */\ngroup_info *dup_fairshare_tree(group_info *root, group_info *nparent);\n\n/*\n *\tfree_fairshare_tree - free the entire fairshare tree\n */\nvoid free_fairshare_tree(group_info *root);\n\n/*\n *\n *\tadd_unknown - add a ginfo to the \"unknown\" group\n *\n *\t  ginfo - ginfo to add\n *\t  root  - root of fairshare tree\n *\n *\treturn nothing\n *\n */\nvoid add_unknown(group_info *ginfo, group_info *root);\n\n/*\n * \treset_temp_usage - walk the fairshare tree resetting temp_usage = usage\n *\n * \t  head - fairshare node to reset\n *\n * \treturn nothing\n */\nvoid reset_temp_usage(group_info *head);\n\n/* reset the tree to 1 usage */\nvoid reset_usage(group_info *node);\n\n/* Calculate the arbitrary usage of the tree */\nvoid calc_usage_factor(fairshare_head *tree);\n\n#endif /* _FAIRSHARE_H */\n
  },
  {
    "path": "src/scheduler/fifo.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * This file contains functions related to scheduling\n */\n\n#include <pbs_config.h>\n\n#ifdef PYTHON\n#include <pbs_python_private.h>\n#include <Python.h>\n#include <pythonrun.h>\n#include <wchar.h>\n#endif\n\n#include <algorithm>\n\n#include \"buckets.h\"\n#include \"check.h\"\n#include \"config.h\"\n#include \"constant.h\"\n#include \"dedtime.h\"\n#include \"fairshare.h\"\n#include \"fifo.h\"\n#include \"globals.h\"\n#include \"job_info.h\"\n#include \"libpbs.h\"\n#include \"limits_if.h\"\n#include \"misc.h\"\n#include \"multi_threading.h\"\n#include \"node_info.h\"\n#include \"node_partition.h\"\n#include \"parse.h\"\n#include \"pbs_internal.h\"\n#include \"pbs_python.h\"\n#include \"pbs_share.h\"\n#include \"pbs_version.h\"\n#include \"prev_job_info.h\"\n#include \"prime.h\"\n#include \"queue_info.h\"\n#include \"range.h\"\n#include \"resource.h\"\n#include \"resource_resv.h\"\n#include \"resv_info.h\"\n#include \"server_info.h\"\n#include \"simulate.h\"\n#include \"sort.h\"\n#include <errno.h>\n#include <fcntl.h>\n#include <libutil.h>\n#include <log.h>\n#include <pbs_error.h>\n#include <pbs_ifl.h>\n#include <pwd.h>\n#include <sched_cmds.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <sys/stat.h>\n#include <sys/types.h>\n#include <time.h>\n#include <unistd.h>\n\n#ifdef NAS\n#include 
\"site_code.h\"\n#endif\n\n/**\n * @brief\n * \t\tinitialize conf struct and parse conf files\n *\n * @param[in]\tnthreads - number of worker threads to launch, < 1 to use num cores\n *\n * @return\tSuccess/Failure\n * @retval\t0\t: success\n * @retval\t!= 0\t: failure\n */\nint\nschedinit(int nthreads)\n{\n\tchar zone_dir[MAXPATHLEN];\n\n#ifdef PYTHON\n\tconst char *errstr;\n\tPyObject *module;\n\tPyObject *obj;\n\tPyObject *dict;\n#endif\n\n\tconf = parse_config(CONFIG_FILE);\n\n\tparse_holidays(HOLIDAYS_FILE);\n\ttime(&(cstat.current_time));\n\n\tif (is_prime_time(cstat.current_time))\n\t\tinit_prime_time(&cstat, NULL);\n\telse\n\t\tinit_non_prime_time(&cstat, NULL);\n\n\tif (conf.holiday_year != 0) {\n\t\tauto tmptr = localtime(&cstat.current_time);\n\t\tif ((tmptr != NULL) && ((tmptr->tm_year + 1900) > conf.holiday_year))\n\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_FILE, LOG_NOTICE, HOLIDAYS_FILE,\n\t\t\t\t  \"The holiday file is out of date; please update it.\");\n\t}\n\n\tparse_ded_file(DEDTIME_FILE);\n\n\tif (fstree != NULL)\n\t\tdelete fstree;\n\t/* preload the static members to the fairshare tree */\n\tfstree = preload_tree();\n\tif (fstree != NULL) {\n\t\tparse_group(RESGROUP_FILE, fstree->root);\n\t\tcalc_fair_share_perc(fstree->root->child, UNSPECIFIED);\n\t\tread_usage(USAGE_FILE, 0, fstree);\n\n\t\tif (fstree->last_decay == 0)\n\t\t\tfstree->last_decay = cstat.current_time;\n\t}\n#ifdef NAS /* localmod 034 */\n\tsite_parse_shares(SHARE_FILE);\n#endif /* localmod 034 */\n\n\t/* set the zoneinfo directory to $PBS_EXEC/zoneinfo.\n\t * This is used for standing reservations user of libical */\n\tsprintf(zone_dir, \"%s%s\", pbs_conf.pbs_exec_path, ICAL_ZONEINFO_DIR);\n\tset_ical_zoneinfo(zone_dir);\n\n#ifdef PYTHON\n#ifdef WIN32\n\tPy_NoSiteFlag = 1;\n\tPy_FrozenFlag = 1;\n\tPy_IgnoreEnvironmentFlag = 1;\n\tset_py_progname();\n\tPy_InitializeEx(0);\n#else\n\tstatic wchar_t w_python_binpath[MAXPATHLEN + 1] = {'\\0'};\n\tchar *python_binpath = 
NULL;\n\tPyStatus py_status;\n\tPyConfig py_config;\n\n\tPyConfig_InitPythonConfig(&py_config);\n\n\tpy_config._install_importlib = 1;\n\tpy_config.use_environment = 0;\n\tpy_config.optimization_level = 2;\n\tpy_config.isolated = 1;\n\tpy_config.site_import = 0;\n\tpy_config.install_signal_handlers = 0;\n\n\tif (w_python_binpath[0] == '\\0') {\n\t\tif (get_py_progname(&python_binpath)) {\n\t\t\tlog_err(-1, __func__, \"Failed to find python binary path!\");\n\t\t\treturn -1;\n\t\t}\n\t\tmbstowcs(w_python_binpath, python_binpath, MAXPATHLEN + 1);\n\t\tfree(python_binpath);\n\t}\n\n\tpy_status = PyConfig_SetString(&py_config, &py_config.program_name, w_python_binpath);\n\tif (PyStatus_Exception(py_status))\n\t\treturn -1;\n\n\tpy_status = Py_InitializeFromConfig(&py_config);\n\tif (PyStatus_Exception(py_status)) {\n\t\tlog_err(-1, \"Py_InitializeFromConfig\",\n\t\t\t\"--> Failed to initialize Python interpreter <--\");\n\t\tPyConfig_Clear(&py_config);  /* Clear the configuration object */\n\t\treturn -1;\n\t}\n#endif\n\tPyRun_SimpleString(\n\t\t\"_err =\\\"\\\"\\n\"\n\t\t\"ex = None\\n\"\n\t\t\"try:\\n\"\n\t\t\"\\tfrom math import *\\n\"\n\t\t\"except ImportError as ex:\\n\"\n\t\t\"\\t_err = str(ex)\");\n\n\tmodule = PyImport_AddModule(\"__main__\");\n\tdict = PyModule_GetDict(module);\n\n\terrstr = NULL;\n\tobj = PyMapping_GetItemString(dict, \"_err\");\n\tif (obj != NULL) {\n\t\terrstr = PyUnicode_AsUTF8(obj);\n\t\tif (errstr != NULL) {\n\t\t\tif (strlen(errstr) > 0)\n\t\t\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_SCHED, LOG_WARNING,\n\t\t\t\t\t   \"PythonError\", \" %s. 
Python is unlikely to work properly.\", errstr);\n\t\t}\n\t\tPy_XDECREF(obj);\n\t}\n\n#endif\n\n\t/* (Re-)Initialize multithreading */\n\tif (num_threads == 0 || (nthreads > 0 && nthreads != num_threads)) {\n\t\tif (init_multi_threading(nthreads) != 1) {\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_REQUEST, LOG_DEBUG,\n\t\t\t\t  \"\", \"Error initializing pthreads\");\n\t\t\treturn -1;\n\t\t}\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tupdate global status structure which holds\n *\t\tstatus information used by the scheduler which\n *\t\tcan change from cycle to cycle\n *\n * @param[in]\tpolicy\t-\tstatus structure to update\n * @param[in]\tcurrent_time\t-\tcurrent time or 0 to call time()\n *\n * @return nothing\n *\n */\nvoid\nupdate_cycle_status(status &policy, time_t current_time)\n{\n\tbool dedtime;\t       /* is it dedtime? */\n\tenum prime_time prime; /* current prime time status */\n\tconst char *primetime;\n\n\tif (current_time == 0)\n\t\ttime(&policy.current_time);\n\telse\n\t\tpolicy.current_time = current_time;\n\n\t/* cycle start in real time -- can be used for time deltas */\n\tpolicy.cycle_start = time(NULL);\n\n\tdedtime = is_ded_time(policy.current_time);\n\n\t/* it was dedtime last scheduling cycle, and it is not dedtime now */\n\tif (policy.is_ded_time && !dedtime)\n\t\tconf.ded_time.erase(conf.ded_time.begin());\n\n\tpolicy.is_ded_time = dedtime;\n\n\t/* if we have changed from prime to nonprime or nonprime to prime\n\t * init the status respectively\n\t */\n\tprime = is_prime_time(policy.current_time);\n\tif (prime == PRIME && !policy.is_prime)\n\t\tinit_prime_time(&policy, NULL);\n\telse if (prime == NON_PRIME && policy.is_prime)\n\t\tinit_non_prime_time(&policy, NULL);\n\n\tif (conf.holiday_year != 0) {\n\t\tauto tmptr = localtime(&policy.current_time);\n\t\tif ((tmptr != NULL) && ((tmptr->tm_year + 1900) > conf.holiday_year))\n\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_FILE, LOG_NOTICE,\n\t\t\t\t  HOLIDAYS_FILE, \"The holiday file 
is out of date; please update it.\");\n\t}\n\tpolicy.prime_status_end = end_prime_status(policy.current_time);\n\n\tprimetime = prime == PRIME ? \"primetime\" : \"non-primetime\";\n\tif (policy.prime_status_end == SCHD_INFINITY)\n\t\tlog_eventf(PBSEVENT_DEBUG2, PBS_EVENTCLASS_SERVER, LOG_DEBUG, \"\", \"It is %s.  It will never end\", primetime);\n\telse {\n\t\tauto ptm = localtime(&(policy.prime_status_end));\n\t\tif (ptm != NULL) {\n\t\t\tlog_eventf(PBSEVENT_DEBUG2, PBS_EVENTCLASS_SERVER, LOG_DEBUG, \"\",\n\t\t\t\t   \"It is %s.  It will end in %ld seconds at %02d/%02d/%04d %02d:%02d:%02d\",\n\t\t\t\t   primetime, policy.prime_status_end - policy.current_time,\n\t\t\t\t   ptm->tm_mon + 1, ptm->tm_mday, ptm->tm_year + 1900,\n\t\t\t\t   ptm->tm_hour, ptm->tm_min, ptm->tm_sec);\n\t\t} else\n\t\t\tlog_eventf(PBSEVENT_DEBUG2, PBS_EVENTCLASS_SERVER, LOG_DEBUG, \"\",\n\t\t\t\t   \"It is %s.  It will end at <UNKNOWN>\", primetime);\n\t}\n\n\t// Will be set in query_server()\n\tpolicy.resdef_to_check_no_hostvnode.clear();\n\tpolicy.resdef_to_check_noncons.clear();\n\tpolicy.resdef_to_check_rassn.clear();\n\tpolicy.resdef_to_check_rassn_select.clear();\n\tpolicy.backfill_depth = UNSPECIFIED;\n\n\tpolicy.order = 0;\n\tpolicy.preempt_attempts = 0;\n}\n\n/**\n * @brief\n * \t\tprep the scheduling cycle.  Do tasks that have to happen prior\n *\t\tto the consideration of the first job.  
This includes any\n *\t\tperiodic upkeep (like fairshare), or any prep to the queried\n *\t\tdata that needed to happen post query_server() (like preemption)\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tpbs_sd\t\tconnection descriptor to pbs_server\n * @param[in]\tsinfo\t-\tthe server\n *\n * @return\tint\n * @retval\t1\t: success\n * @retval\t0\t: failure\n *\n * @note\n * \t\tfailure of this function will cause schedule() to exit\n */\n\nint\ninit_scheduling_cycle(status *policy, int pbs_sd, server_info *sinfo)\n{\n\tgroup_info *user = NULL; /* the user for the running jobs of the last cycle */\n\tstatic schd_error *err;\n\n\tif (err == NULL) {\n\t\terr = new_schd_error();\n\t\tif (err == NULL)\n\t\t\treturn 0;\n\t}\n\n\tif ((policy->fair_share || sinfo->job_sort_formula != NULL) && sinfo->fstree != NULL) {\n\t\tFILE *fp;\n\t\tbool decayed = false;\n\t\tbool resort = false;\n\t\tif ((fp = fopen(USAGE_TOUCH, \"r\")) != NULL) {\n\t\t\tfclose(fp);\n\t\t\treset_usage(fstree->root);\n\t\t\tread_usage(USAGE_FILE, NO_FLAGS, fstree);\n\t\t\tif (fstree->last_decay == 0)\n\t\t\t\tfstree->last_decay = policy->current_time;\n\t\t\tremove(USAGE_TOUCH);\n\t\t\tresort = true;\n\t\t}\n\t\tif (!last_running.empty() && sinfo->running_jobs != NULL) {\n\t\t\t/* add the usage which was accumulated between the last cycle and this\n\t\t\t * one and calculate a new value\n\t\t\t */\n\n\t\t\tfor (const auto &lj : last_running) {\n\t\t\t\tuser = find_alloc_ginfo(lj.entity_name, sinfo->fstree->root);\n\n\t\t\t\tif (user != NULL) {\n\t\t\t\t\tauto rj = find_resource_resv(sinfo->running_jobs, lj.name);\n\n\t\t\t\t\tif (rj != NULL && rj->job != NULL && !rj->job->is_prerunning) {\n\t\t\t\t\t\t/* just in case the delta is negative just add 0 */\n\t\t\t\t\t\tauto delta = formula_evaluate(conf.fairshare_res.c_str(), rj, rj->job->resused) -\n\t\t\t\t\t\t\t     formula_evaluate(conf.fairshare_res.c_str(), rj, lj.resused);\n\n\t\t\t\t\t\tdelta = 
IF_NEG_THEN_ZERO(delta);\n\n\t\t\t\t\t\tfor (auto &g : user->gpath)\n\t\t\t\t\t\t\tg->usage += delta;\n\n\t\t\t\t\t\tresort = true;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t/* The half life for the fair share tree might have passed since the last\n\t\t * scheduling cycle.  For that matter, several half lives could have\n\t\t * passed.  If this is the case, perform as many decays as necessary\n\t\t */\n\n\t\tauto t = policy->current_time;\n\t\twhile (conf.decay_time != SCHD_INFINITY &&\n\t\t       (t - sinfo->fstree->last_decay) > conf.decay_time) {\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_SERVER, LOG_DEBUG,\n\t\t\t\t  \"Fairshare\", \"Decaying Fairshare Tree\");\n\t\t\tif (fstree != NULL)\n\t\t\t\tdecay_fairshare_tree(sinfo->fstree->root);\n\t\t\tt -= conf.decay_time;\n\t\t\tdecayed = true;\n\t\t\tresort = true;\n\t\t}\n\n\t\tif (decayed) {\n\t\t\t/* set the time to the actual time the half-life should have occurred */\n\t\t\tif (fstree != NULL)\n\t\t\t\tfstree->last_decay =\n\t\t\t\t\tpolicy->current_time - (policy->current_time -\n\t\t\t\t\t\t\t\tsinfo->fstree->last_decay) %\n\t\t\t\t\t\t\t\t       conf.decay_time;\n\t\t}\n\n\t\tif (decayed || !last_running.empty()) {\n\t\t\twrite_usage(USAGE_FILE, sinfo->fstree);\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_SERVER, LOG_DEBUG,\n\t\t\t\t  \"Fairshare\", \"Usage Sync\");\n\t\t}\n\t\treset_temp_usage(sinfo->fstree->root);\n\t\tcalc_usage_factor(sinfo->fstree);\n\t\tif (resort)\n\t\t\tsort_jobs(policy, sinfo);\n\t}\n\n\t/* set all the jobs' preempt priorities.  It is done here instead of when\n\t * the jobs were created for several reasons.\n\t * 1. fairshare usage is not updated\n\t * 2. 
we need all the jobs to be created and up to date for soft run limits\n\t */\n\n\t/* Before setting preempt priorities on all jobs, make sure that entity's preempt bit\n\t * is updated for all running jobs\n\t */\n\tif ((sinfo->running_jobs != NULL) && (policy->preempting)) {\n\t\tfor (int i = 0; sinfo->running_jobs[i] != NULL; i++) {\n\t\t\tif (sinfo->running_jobs[i]->job->resv_id == NULL)\n\t\t\t\tupdate_soft_limits(sinfo, sinfo->running_jobs[i]->job->queue, sinfo->running_jobs[i]);\n\t\t}\n\t}\n\tif (sinfo->jobs != NULL) {\n\t\tfor (int i = 0; sinfo->jobs[i] != NULL; i++) {\n\t\t\tresource_resv *resresv = sinfo->jobs[i];\n\t\t\tif (resresv->job != NULL) {\n\t\t\t\tif (policy->preempting) {\n\t\t\t\t\tset_preempt_prio(resresv, resresv->job->queue, sinfo);\n\t\t\t\t\tif (resresv->job->is_running)\n\t\t\t\t\t\tif (!resresv->job->can_not_preempt)\n\t\t\t\t\t\t\tsinfo->preempt_count[preempt_level(resresv->job->preempt)]++;\n\t\t\t\t}\n\t\t\t\tif (sinfo->job_sort_formula != NULL) {\n\t\t\t\t\tdouble threshold = sc_attrs.job_sort_formula_threshold;\n\t\t\t\t\tresresv->job->formula_value = formula_evaluate(sinfo->job_sort_formula, resresv, resresv->resreq);\n\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, resresv->name, \"Formula Evaluation = %.*f\",\n\t\t\t\t\t\t   float_digits(resresv->job->formula_value, FLOAT_NUM_DIGITS), resresv->job->formula_value);\n\n\t\t\t\t\tif (!resresv->can_not_run && resresv->job->formula_value <= threshold) {\n\t\t\t\t\t\tset_schd_error_codes(err, NOT_RUN, JOB_UNDER_THRESHOLD);\n\t\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, resresv->name, \"Job's formula value %.*f is under threshold %.*f\",\n\t\t\t\t\t\t\t   float_digits(resresv->job->formula_value, FLOAT_NUM_DIGITS), resresv->job->formula_value, float_digits(threshold, 2), threshold);\n\t\t\t\t\t\tif (err->error_code != SUCCESS) {\n\t\t\t\t\t\t\tupdate_job_can_not_run(pbs_sd, resresv, 
err);\n\t\t\t\t\t\t\tclear_schd_error(err);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tnext_job(policy, sinfo, INITIALIZE);\n#ifdef NAS /* localmod 034 */\n\t(void) site_pick_next_job(NULL);\n\t(void) site_is_share_king(policy);\n#endif /* localmod 034 */\n\n\treturn 1; /* SUCCESS */\n}\n\n/**\n * @brief\n *\t\tschedule - this function gets called to start each scheduling cycle\n *\t\t   It will handle the different cases that caused a\n *\t\t   scheduling cycle\n *\n * @param[in]\tsd\t-\tprimary socket descriptor to the server pool\n *\n * @return\tint\n * @retval\t0\t: continue calling scheduling cycles\n * @retval\t1\t: exit scheduler\n */\nint\nschedule(int sd, const sched_cmd *cmd)\n{\n\tswitch (cmd->cmd) {\n\t\tcase SCH_SCHEDULE_NULL:\n\t\tcase SCH_RULESET:\n\t\t\t/* ignore and end cycle */\n\t\t\tbreak;\n\n\t\tcase SCH_SCHEDULE_FIRST:\n\t\t\t/*\n\t\t\t * on the first cycle after the server restarts custom resources\n\t\t\t * may have been added.\n\t\t\t */\n\t\t\tupdate_resource_defs(sd);\n\n\t\t\t/* Get config from the qmgr sched object */\n\t\t\tif (!set_validate_sched_attrs(sd))\n\t\t\t\treturn 0;\n\n\t\tcase SCH_SCHEDULE_NEW:\n\t\tcase SCH_SCHEDULE_TERM:\n\t\tcase SCH_SCHEDULE_CMD:\n\t\tcase SCH_SCHEDULE_TIME:\n\t\tcase SCH_SCHEDULE_JOBRESV:\n\t\tcase SCH_SCHEDULE_STARTQ:\n\t\tcase SCH_SCHEDULE_MVLOCAL:\n\t\tcase SCH_SCHEDULE_ETE_ON:\n\t\tcase SCH_SCHEDULE_RESV_RECONFIRM:\n\t\t\treturn intermediate_schedule(sd, cmd);\n\t\tcase SCH_SCHEDULE_AJOB:\n\t\t\treturn intermediate_schedule(sd, cmd);\n\t\tcase SCH_CONFIGURE:\n\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_SCHED, LOG_INFO,\n\t\t\t\t  \"reconfigure\", \"Scheduler is reconfiguring\");\n\t\t\t/* Get config from sched_priv/ files */\n\t\t\tif (schedinit(-1) != 0)\n\t\t\t\treturn 0;\n\n\t\t\tupdate_resource_defs(sd);\n\n\t\t\t/* Get config from the qmgr sched object */\n\t\t\tif (!set_validate_sched_attrs(sd))\n\t\t\t\treturn 0;\n\t\t\tbreak;\n\t\tcase SCH_QUIT:\n#ifdef 
PYTHON\n\t\t\tPy_Finalize();\n#endif\n\t\t\treturn 1; /* have the scheduler exit nicely */\n\t\tdefault:\n\t\t\treturn 0;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tintermediate_schedule - responsible for starting/restarting scheduling\n *\t\tcycle.\n *\n * @param[in]\tsd\t-\tprimary socket descriptor to the server pool\n *\n * returns 0\n *\n */\nint\nintermediate_schedule(int sd, const sched_cmd *cmd)\n{\n\tint ret;\t   /* to reschedule or not */\n\tint cycle_cnt = 0; /* count of cycles run */\n\n\tdo {\n\t\tret = scheduling_cycle(sd, cmd);\n\n\t\t/* don't restart the cycle if: */\n\n\t\t/* 1) qrun request, we don't want to keep trying same job */\n\t\tif (cmd->jid != NULL)\n\t\t\tbreak;\n\n\t\t/* Note that a qrun request receiving batch protocol error or any other\n\t\t error will not restart scheduling cycle.\n\t\t */\n\n\t\t/* 2) broken pipe, server connection lost */\n\t\tif (got_sigpipe)\n\t\t\tbreak;\n\n\t\t/* 3) max allowed number of cycles have already been run,\n\t\t *    there can be total of 1 + MAX_RESTART_CYCLECNT cycles\n\t\t */\n\t\tif (cycle_cnt > (MAX_RESTART_CYCLECNT - 1))\n\t\t\tbreak;\n\n\t\tcycle_cnt++;\n\t} while (ret == -1);\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tscheduling_cycle - the controlling function of the scheduling cycle\n *\n * @param[in]\tsd\t-\tprimary socket descriptor to the server pool\n *\n * @return\tint\n * @retval\t0\t: success/normal return\n * @retval\t-1\t: failure\n *\n */\n\nint\nscheduling_cycle(int sd, const sched_cmd *cmd)\n{\n\tserver_info *sinfo;\t    /* ptr to the server/queue/job/node info */\n\tint rc = SUCCESS;\t    /* return code from main_sched_loop() */\n\tchar log_msg[MAX_LOG_SIZE]; /* used to log the message why a job can't run */\n\tint error = 0;\t\t    /* error happened, don't run main loop */\n\tstatus *policy;\t\t    /* policy structure used for cycle */\n\tschd_error *err = NULL;\n\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_REQUEST, LOG_DEBUG,\n\t\t  \"\", \"Starting Scheduling 
Cycle\");\n\n\t/* Decide whether we need to send \"can't run\" type updates this cycle */\n\tif (time(NULL) - last_attr_updates >= sc_attrs.attr_update_period)\n\t\tsend_job_attr_updates = 1;\n\telse\n\t\tsend_job_attr_updates = 0;\n\n\tupdate_cycle_status(cstat, 0);\n\n#ifdef NAS /* localmod 030 */\n\tdo_soft_cycle_interrupt = 0;\n\tdo_hard_cycle_interrupt = 0;\n#endif /* localmod 030 */\n\t/* create the server / queue / job / node structures */\n\tif ((sinfo = query_server(&cstat, sd)) == NULL) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t  \"\", \"Problem with creating server data structure\");\n\t\tend_cycle_tasks(sinfo);\n\t\treturn 0;\n\t}\n\tpolicy = sinfo->policy;\n\n\t/* don't confirm reservations if we're handling a qrun request */\n\tif (cmd->jid == NULL) {\n\t\tint rc;\n\t\trc = check_new_reservations(policy, sd, sinfo->resvs, sinfo);\n\t\tif (rc) {\n\t\t\t/* Check if there are new reservations.  If there are, we can't go any\n\t\t\t * further in the scheduling cycle since we don't have the up to date\n\t\t\t * information about the newly confirmed reservations\n\t\t\t */\n\t\t\tend_cycle_tasks(sinfo);\n\t\t\t/* Problem occurred confirming reservation, retry cycle */\n\t\t\tif (rc < 0)\n\t\t\t\treturn -1;\n\n\t\t\treturn 0;\n\t\t}\n\t}\n\n\t/* jobid will not be NULL if we received a qrun request */\n\tif (cmd->jid != NULL) {\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, cmd->jid, \"Received qrun request\");\n\t\tif (is_job_array(cmd->jid) > 1) /* is a single subjob or a range */\n\t\t\tmodify_job_array_for_qrun(sinfo, cmd->jid);\n\t\telse\n\t\t\tsinfo->qrun_job = find_resource_resv(sinfo->jobs, cmd->jid);\n\n\t\tif (sinfo->qrun_job == NULL) { /* something went wrong */\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, cmd->jid, \"Could not find job to qrun.\");\n\t\t\terror = 1;\n\t\t\trc = SCHD_ERROR;\n\t\t\tsprintf(log_msg, \"PBS Error: Scheduler can not find job\");\n\t\t}\n\t}\n\n\tif 
(init_scheduling_cycle(policy, sd, sinfo) == 0) {\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER, LOG_DEBUG, sinfo->name, \"init_scheduling_cycle failed.\");\n\t\tend_cycle_tasks(sinfo);\n\t\treturn 0;\n\t}\n\n\tif (sinfo->qrun_job != NULL) {\n\t\tsinfo->qrun_job->can_not_run = 0;\n\t\tif (sinfo->qrun_job->job != NULL) {\n\t\t\tif (sinfo->qrun_job->job->is_waiting ||\n\t\t\t    sinfo->qrun_job->job->is_held) {\n\t\t\t\tset_job_state(\"Q\", sinfo->qrun_job->job);\n\t\t\t}\n\t\t}\n\t}\n\n\t/* run loop run */\n\tif (error == 0)\n\t\trc = main_sched_loop(policy, sd, sinfo, &err);\n\n\tif (cmd->jid != NULL) {\n\t\tint def_rc = -1;\n\t\tint i;\n\n\t\tfor (i = 0; i < MAX_DEF_REPLY && def_rc != 0; i++) {\n\t\t\t/* smooth sailing, the job ran */\n\t\t\tif (rc == SUCCESS)\n\t\t\t\tdef_rc = pbs_defschreply(sd, SCH_SCHEDULE_AJOB, cmd->jid, 0, NULL, NULL);\n\n\t\t\t/* we thought the job should run, but the server had other ideas */\n\t\t\telse {\n\t\t\t\tif (err != NULL) {\n\t\t\t\t\ttranslate_fail_code(err, NULL, log_msg);\n\t\t\t\t\tif (err->error_code < RET_BASE) {\n\t\t\t\t\t\terror = err->error_code;\n\t\t\t\t\t} else {\n\t\t\t\t\t\t/* everything else... unfortunately our ret codes don't nicely match up to\n\t\t\t\t\t\t * the rest of PBS's PBSE codes, so we return resources unavailable.  This\n\t\t\t\t\t\t * doesn't really matter, because we're returning a message\n\t\t\t\t\t\t */\n\t\t\t\t\t\terror = PBSE_RESCUNAV;\n\t\t\t\t\t}\n\t\t\t\t} else\n\t\t\t\t\terror = PBSE_RESCUNAV;\n\t\t\t\tdef_rc = pbs_defschreply(sd, SCH_SCHEDULE_AJOB, cmd->jid, error, log_msg, NULL);\n\t\t\t}\n\t\t\tif (def_rc != 0) {\n\t\t\t\tchar *pbs_errmsg;\n\t\t\t\tpbs_errmsg = pbs_geterrmsg(sd);\n\n\t\t\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_SCHED, LOG_WARNING, cmd->jid, \"Error in deferred reply: %s\", pbs_errmsg == NULL ? 
\"\" : pbs_errmsg);\n\t\t\t}\n\t\t}\n\t\tif (i == MAX_DEF_REPLY && def_rc != 0) {\n\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_SCHED, LOG_WARNING, cmd->jid, \"Max deferred reply count reached; giving up.\");\n\t\t}\n\t}\n\n#ifdef NAS\n\t/* localmod 064 */\n\tsite_list_jobs(sinfo, sinfo->jobs);\n\t/* localmod 034 */\n\tsite_list_shares(stdout, sinfo, \"eoc_\", 1);\n#endif\n\tend_cycle_tasks(sinfo);\n\n\tfree_schd_error(err);\n\tif (rc < 0)\n\t\treturn -1;\n\n\treturn 0;\n}\n\n/**\n * @brief check whether any server sent us super high priority command\n *        return cmd if we have it\n *\n * @param[out] is_conn_lost - did we lost connection to server?\n *                            1 - yes, 0 - no\n * @param[in,out] high_prior_cmd - contains the high priority command received\n *\n * @return sched_cmd *\n * @retval 0 - no super high priority command is received\n * @retval 1 - super high priority command is received\n *\n */\nstatic int\nget_high_prio_cmd(int *is_conn_lost, sched_cmd *high_prior_cmd)\n{\n\tsched_cmd cmd;\n\tint rc;\n\n\trc = get_sched_cmd_noblk(clust_secondary_sock, &cmd);\n\tif (rc == -2) {\n\t\t*is_conn_lost = 1;\n\t\treturn 0;\n\t}\n\tif (rc != 1)\n\t\treturn 0;\n\n\tif (cmd.cmd == SCH_SCHEDULE_RESTART_CYCLE) {\n\t\t*high_prior_cmd = cmd;\n\t\treturn 1;\n\t} else {\n\t\tif (cmd.cmd == SCH_SCHEDULE_AJOB)\n\t\t\tqrun_list[qrun_list_size++] = cmd;\n\t\telse {\n\t\t\t/* Index of the array is the command recevied. Put the value as 1 which indicates that\n\t\t\t * the command is received. If we receive same commands from multiple servers they\n\t\t\t * are overwritten which is what we want i.e. 
it allows us to eliminate duplicate commands.\n\t\t\t */\n\t\t\tif (cmd.cmd >= SCH_SCHEDULE_NULL && cmd.cmd < SCH_CMD_HIGH)\n\t\t\t\tsched_cmds[cmd.cmd] = 1;\n\t\t}\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tthe main scheduler loop\n *\t\tLoop until njob = next_job() returns NULL\n *\t\tif njob can run now, run it\n *\t\tif not, attempt preemption\n *\t\tif successful, run njob\n *\t\tnjob can't run:\n *\t\tif we can backfill\n *\t\tadd job to calendar\n *\t\tdeal with normal job can't run stuff\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tsd\t-\tprimary socket descriptor to the server pool\n * @param[in]\tsinfo\t-\tpbs universe we're going to loop over\n * @param[out]\trerr\t-\terror bits from the last job considered\n *\n *\t@return return code of last job scheduled\n *\t@retval -1\t: on error\n */\nint\nmain_sched_loop(status *policy, int sd, server_info *sinfo, schd_error **rerr)\n{\n\tresource_resv *njob;\t     /* ptr to the next job to see if it can run */\n\tint rc = 0;\t\t     /* return code to the function */\n\tint num_topjobs = 0;\t     /* number of jobs we've added to the calendar */\n\tint end_cycle = 0;\t     /* boolean  - end main cycle loop */\n\tchar log_msg[MAX_LOG_SIZE];  /* used to log a message about a job */\n\tchar comment[MAX_LOG_SIZE];  /* used to update comment of job */\n\ttime_t cycle_start_time;     /* the time the cycle starts */\n\ttime_t cycle_end_time;\t     /* the time when the current cycle should end */\n\ttime_t cur_time;\t     /* the current time via time() */\n\tstd::vector<nspec *> ns_arr; /* node solution for job */\n\tint i;\n\tint sort_again = DONT_SORT_JOBS;\n\tschd_error *err;\n\tschd_error *chk_lim_err;\n\n\tif (policy == NULL || sinfo == NULL || rerr == NULL)\n\t\treturn -1;\n\n\ttime(&cycle_start_time);\n\t/* calculate the time at which we've been in the cycle too long */\n\tcycle_end_time = cycle_start_time + sc_attrs.sched_cycle_length;\n\n\tchk_lim_err = new_schd_error();\n\tif (chk_lim_err == 
NULL)\n\t\treturn -1;\n\terr = new_schd_error();\n\tif (err == NULL) {\n\t\tfree_schd_error(chk_lim_err);\n\t\treturn -1;\n\t}\n\n\t/* main scheduling loop */\n#ifdef NAS\n\t/* localmod 030 */\n\tinterrupted_cycle_start_time = cycle_start_time;\n\t/* localmod 038 */\n\tnum_topjobs_per_queues = 0;\n\t/* localmod 064 */\n\tsite_list_jobs(sinfo, sinfo->jobs);\n#endif\n\tfor (i = 0; !end_cycle &&\n\t\t    (njob = next_job(policy, sinfo, sort_again)) != NULL;\n\t     i++) {\n\t\tint should_use_buckets;\t       /* Should use node buckets for a job */\n\t\tunsigned int flags = NO_FLAGS; /* flags to is_ok_to_run @see is_ok_to_run() */\n\t\tauto qinfo = njob->job->queue;\n\n#ifdef NAS /* localmod 030 */\n\t\tif (check_for_cycle_interrupt(1)) {\n\t\t\tbreak;\n\t\t}\n#endif /* localmod 030 */\n\n\t\trc = 0;\n\t\tcomment[0] = '\\0';\n\t\tlog_msg[0] = '\\0';\n\t\tsort_again = SORTED;\n\n\t\tclear_schd_error(err);\n\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t  njob->name, \"Considering job to run\");\n\n\t\tshould_use_buckets = job_should_use_buckets(njob);\n\t\tif (should_use_buckets)\n\t\t\tflags = USE_BUCKETS;\n\n\t\tif (njob->is_shrink_to_fit) {\n\t\t\t/* Pass the suitable heuristic for shrinking */\n\t\t\tns_arr = is_ok_to_run_STF(policy, sinfo, qinfo, njob, flags, err, shrink_job_algorithm);\n\t\t} else\n\t\t\tns_arr = is_ok_to_run(policy, sinfo, qinfo, njob, flags, err);\n\n\t\tif (err->status_code == NEVER_RUN)\n\t\t\tnjob->can_never_run = 1;\n\n\t\tif (!ns_arr.empty()) { /* success! 
*/\n\t\t\tif (rc != SCHD_ERROR) {\n\t\t\t\tif (run_update_job(policy, sd, sinfo, qinfo, njob, ns_arr, RURR_ADD_END_EVENT, err)) {\n\t\t\t\t\trc = SUCCESS;\n\t\t\t\t\tif (sinfo->has_soft_limit || qinfo->has_soft_limit)\n\t\t\t\t\t\tsort_again = MUST_RESORT_JOBS;\n\t\t\t\t\telse\n\t\t\t\t\t\tsort_again = MAY_RESORT_JOBS;\n\t\t\t\t} else {\n\t\t\t\t\t/* if run_update_job() returns 0 and pbs_errno == PBSE_HOOKERROR,\n\t\t\t\t\t * then this job is required to be ignored in this scheduling cycle\n\t\t\t\t\t */\n\t\t\t\t\trc = err->error_code;\n\t\t\t\t\tsort_again = SORTED;\n\t\t\t\t}\n\t\t\t} else\n\t\t\t\tfree_nspecs(ns_arr);\n\t\t} else if (policy->preempting && in_runnable_state(njob) && (!njob->can_never_run)) {\n\t\t\tif (find_and_preempt_jobs(policy, sd, njob, sinfo, err) > 0) {\n\t\t\t\trc = SUCCESS;\n\t\t\t\tsort_again = MUST_RESORT_JOBS;\n\t\t\t} else\n\t\t\t\tsort_again = SORTED;\n\t\t}\n\n#ifdef NAS /* localmod 034 */\n\t\tif (rc == SUCCESS && !site_is_queue_topjob_set_aside(njob)) {\n\t\t\tsite_bump_topjobs(njob);\n\t\t}\n\t\tif (rc == SUCCESS) {\n\t\t\tsite_resort_jobs(njob);\n\t\t}\n#endif /* localmod 034 */\n\n\t\t/* if run_update_job() returns an error, it's generally pretty serious.\n\t\t * let's bail out of the cycle now\n\t\t */\n\t\tif (rc == SCHD_ERROR || rc == PBSE_PROTOCOL || got_sigpipe) {\n\t\t\tend_cycle = 1;\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_WARNING, njob->name, \"Leaving scheduling cycle because of an internal error.\");\n\t\t} else if (rc != SUCCESS && rc != RUN_FAILURE) {\n#ifdef NAS /* localmod 034 */\n\t\t\tint bf_rc;\n\t\t\tif ((bf_rc = site_should_backfill_with_job(policy, sinfo, njob, num_topjobs, num_topjobs_per_queues, err))) {\n#else\n\t\t\tif (should_backfill_with_job(policy, sinfo, njob, num_topjobs) != 0) {\n#endif\n\t\t\t\tauto cal_rc = add_job_to_calendar(sd, policy, sinfo, njob, should_use_buckets);\n\n\t\t\t\tif (cal_rc > 0) { /* Success! 
*/\n#ifdef NAS\t\t\t\t\t  /* localmod 034 */\n\t\t\t\t\tswitch (bf_rc) {\n\t\t\t\t\t\tcase 1:\n\t\t\t\t\t\t\tnum_topjobs++;\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\tcase 2: /* localmod 038 */\n\t\t\t\t\t\t\tnum_topjobs_per_queues++;\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\tcase 3:\n\t\t\t\t\t\t\tsite_bump_topjobs(njob, 0.0);\n\t\t\t\t\t\t\tnum_topjobs++;\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\tcase 4:\n\t\t\t\t\t\t\tif (!njob->job->is_preempted) {\n\t\t\t\t\t\t\t\tsite_bump_topjobs(njob, delta);\n\t\t\t\t\t\t\t\tnum_topjobs++;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n#else\n\t\t\t\t\tsort_again = MAY_RESORT_JOBS;\n\t\t\t\t\tif (njob->job->is_preempted == 0 || sc_attrs.sched_preempt_enforce_resumption == 0) { /* preempted jobs don't increase top jobs count */\n\t\t\t\t\t\tif (qinfo->backfill_depth == UNSPECIFIED)\n\t\t\t\t\t\t\tnum_topjobs++;\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\tqinfo->num_topjobs++;\n\t\t\t\t\t}\n#endif\t\t\t\t\t\t\t   /* localmod 034 */\n\t\t\t\t} else if (cal_rc == -1) { /* recycle */\n\t\t\t\t\tend_cycle = 1;\n\t\t\t\t\trc = -1;\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER, LOG_DEBUG,\n\t\t\t\t\t\t  njob->name, \"Error in add_job_to_calendar\");\n\t\t\t\t}\n\t\t\t\t/* else cal_rc == 0: failed to add to calendar - continue on */\n\t\t\t} else {\n\t\t\t\tif (njob->job->is_topjob) {\n\t\t\t\t\t/* the job is not a top job anymore */\n\t\t\t\t\tupdate_job_attr(sd, njob, ATTR_topjob, NULL, const_cast<char *>(\"False\"), NULL, UPDATE_NOW);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/* Need to set preemption status so that soft limits can be checked\n\t\t\t * before updating accrue_type.\n\t\t\t */\n\t\t\tif (sinfo->eligible_time_enable == 1) {\n\t\t\t\tstruct schd_error *update_accrue_err = err;\n\t\t\t\tset_preempt_prio(njob, qinfo, sinfo);\n\t\t\t\t/*\n\t\t\t\t * A temporary schd_error location where errors from check_limits\n\t\t\t\t * will be stored (when called to check CUMULATIVE limits)\n\t\t\t\t */\n\t\t\t\tclear_schd_error(chk_lim_err);\n\t\t\t\tif 
(sinfo->qrun_job == NULL) {\n\t\t\t\t\tchk_lim_err->error_code = static_cast<enum sched_error_code>(check_limits(sinfo,\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t  qinfo, njob, chk_lim_err, CHECK_CUMULATIVE_LIMIT));\n\t\t\t\t\tif (chk_lim_err->error_code != 0) {\n\t\t\t\t\t\tupdate_accrue_err = chk_lim_err;\n\t\t\t\t\t}\n\t\t\t\t\t/* Update total_*_counts in server_info and queue_info\n\t\t\t\t\t * based on the number of jobs that are either running or\n\t\t\t\t\t * are considered to run.\n\t\t\t\t\t */\n\t\t\t\t\tupdate_total_counts(sinfo, qinfo, njob, ALL);\n\t\t\t\t}\n\t\t\t\tupdate_accruetype(sd, sinfo, ACCRUE_CHECK_ERR, update_accrue_err->error_code, njob);\n\t\t\t}\n\n\t\t\tnjob->can_not_run = 1;\n\t\t}\n\n\t\tif ((rc != SUCCESS) && (err->error_code != 0)) {\n\t\t\ttranslate_fail_code(err, comment, log_msg);\n\t\t\tif (comment[0] != '\\0' &&\n\t\t\t    (!njob->job->is_array || !njob->job->is_begin))\n\t\t\t\tupdate_job_comment(sd, njob, comment);\n\t\t\tif (log_msg[0] != '\\0')\n\t\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t  LOG_INFO, njob->name, log_msg);\n\n\t\t\t/* If this job couldn't run, mark the equiv class so the rest of the jobs are discarded quickly. */\n\t\t\t/* Note: for MAX_RUN_SUBJOBS this concerns only this array, not the equivalence class! */\n\t\t\tif (sinfo->equiv_classes != NULL && njob->ec_index != UNSPECIFIED &&\n\t\t\t    err->error_code != MAX_RUN_SUBJOBS) {\n\t\t\t\tresresv_set *ec = sinfo->equiv_classes[njob->ec_index];\n\t\t\t\tif (rc != RUN_FAILURE && !ec->can_not_run) {\n\t\t\t\t\tec->can_not_run = 1;\n\t\t\t\t\tec->err = dup_schd_error(err);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif (njob->can_never_run) {\n\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_JOB, LOG_WARNING,\n\t\t\t\t  njob->name, \"Job will never run with the resources currently configured in the complex\");\n\t\t}\n\t\tif ((rc != SUCCESS) && njob->job->resv == NULL) {\n\t\t\t/* jobs in reservations are outside of the law... 
they don't cause\n\t\t\t * the rest of the system to idle waiting for them\n\t\t\t */\n\t\t\tif (policy->strict_fifo) {\n\t\t\t\tset_schd_error_codes(err, NOT_RUN, STRICT_ORDERING);\n\t\t\t\tupdate_jobs_cant_run(sd, qinfo->jobs, NULL, err, START_WITH_JOB);\n\t\t\t} else if (!policy->backfill && policy->strict_ordering) {\n\t\t\t\tset_schd_error_codes(err, NOT_RUN, STRICT_ORDERING);\n\t\t\t\tupdate_jobs_cant_run(sd, sinfo->jobs, NULL, err, START_WITH_JOB);\n\t\t\t} else if (policy->backfill && policy->strict_ordering && qinfo->backfill_depth == 0) {\n\t\t\t\tset_schd_error_codes(err, NOT_RUN, STRICT_ORDERING);\n\t\t\t\tupdate_jobs_cant_run(sd, qinfo->jobs, NULL, err, START_WITH_JOB);\n\t\t\t}\n\t\t}\n\n\t\ttime(&cur_time);\n\t\tif (cur_time >= cycle_end_time) {\n\t\t\tend_cycle = 1;\n\t\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_SCHED, LOG_NOTICE, \"toolong\",\n\t\t\t\t   \"Leaving the scheduling cycle: Cycle duration of %ld seconds has exceeded %s of %ld seconds\",\n\t\t\t\t   (long) (cur_time - cycle_start_time), ATTR_sched_cycle_len, sc_attrs.sched_cycle_length);\n\t\t}\n\t\tif (conf.max_jobs_to_check != SCHD_INFINITY && (i + 1) >= conf.max_jobs_to_check) {\n\t\t\t/* i begins with 0, hence i + 1 */\n\t\t\tend_cycle = 1;\n\t\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_JOB, LOG_INFO, \"\",\n\t\t\t\t   \"Bailed out of main job loop after checking to see if %d jobs could run.\", (i + 1));\n\t\t}\n\t\tif (!end_cycle) {\n\t\t\tsched_cmd cmd;\n\t\t\tint is_conn_lost = 0;\n\t\t\tint rc = get_high_prio_cmd(&is_conn_lost, &cmd);\n\n\t\t\tif (is_conn_lost) {\n\t\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_JOB, LOG_WARNING,\n\t\t\t\t\t  njob->name, \"We lost connection with the server, leaving scheduling cycle\");\n\t\t\t\tend_cycle = 1;\n\t\t\t} else if ((rc == 1) && (cmd.cmd == SCH_SCHEDULE_RESTART_CYCLE)) {\n\t\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_JOB, LOG_WARNING,\n\t\t\t\t\t  njob->name, \"Leaving scheduling cycle as requested by 
server.\");\n\t\t\t\tend_cycle = 1;\n\t\t\t}\n\t\t}\n\n#ifdef NAS /* localmod 030 */\n\t\tif (check_for_cycle_interrupt(0)) {\n\t\t\tconsecutive_interrupted_cycles++;\n\t\t} else {\n\t\t\tconsecutive_interrupted_cycles = 0;\n\t\t}\n#endif /* localmod 030 */\n\n\t\t/* send any attribute updates to server that we've collected */\n\t\tsend_job_updates(sd, njob);\n\t}\n\n\t*rerr = err;\n\n\tfree_schd_error(chk_lim_err);\n\treturn rc;\n}\n\n/**\n * @brief\n *\t\tend_cycle_tasks - stuff which needs to happen at the end of a cycle\n *\n * @param[in]\tsinfo\t-\tthe server structure\n *\n * @return\tnothing\n *\n */\nvoid\nend_cycle_tasks(server_info *sinfo)\n{\n\t/* keep track of update used resources for fairshare */\n\tif (sinfo != NULL && sinfo->policy->fair_share)\n\t\tcreate_prev_job_info(sinfo->running_jobs);\n\n\t/* we copied in the global fairshare into sinfo at the start of the cycle,\n\t * we don't want to free it now, or we'd lose all fairshare data\n\t */\n\tif (sinfo != NULL) {\n\t\tsinfo->fstree = NULL;\n\t\tdelete sinfo; /* free server and queues and jobs */\n\t}\n\n\t/* close any open connections to peers */\n\tfor (auto &pq : conf.peer_queues) {\n\t\tif (pq.peer_sd >= 0) {\n\t\t\t/* When peering \"local\", do not disconnect server */\n\t\t\tif (!pq.remote_server.empty())\n\t\t\t\tpbs_disconnect(pq.peer_sd);\n\t\t\tpq.peer_sd = -1;\n\t\t}\n\t}\n\n\t/* free cmp_aoename */\n\tif (cmp_aoename != NULL) {\n\t\tfree(cmp_aoename);\n\t\tcmp_aoename = NULL;\n\t}\n\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_REQUEST, LOG_DEBUG,\n\t\t  \"\", \"Leaving Scheduling Cycle\");\n}\n\n/**\n * @brief\n *\t\tupdate_job_can_not_run - do post job 'can't run' processing\n *\t\t\t\t mark it 'can_not_run'\n *\t\t\t\t update the job comment and log the reason why\n *\t\t\t\t take care of deleting a 'can_never_run job\n *\n * @param[in]\tpbs_sd\t-\tthe connection descriptor to the server\n * @param[in,out]\tjob\t-\tthe job to update\n * @param[in]\terr\t-\tthe error structure for why 
the job can't run\n *\n * @return\tint\n * @retval\t1\t: success\n * @retval\t0\t: failure.\n *\n */\nint\nupdate_job_can_not_run(int pbs_sd, resource_resv *job, schd_error *err)\n{\n\tchar comment_buf[MAX_LOG_SIZE]; /* buffer for comment message */\n\tchar log_buf[MAX_LOG_SIZE];\t/* buffer for log message */\n\tint ret = 1;\t\t\t/* return code for function */\n\n\tif ((job == NULL) || (err == NULL) || (job->job == NULL))\n\t\treturn ret;\n\n\tjob->can_not_run = 1;\n\n\tif (translate_fail_code(err, comment_buf, log_buf)) {\n\t\t/* don't attempt to update the comment on a remote job and on an array job */\n\t\tif (!job->is_peer_ob && (!job->job->is_array || !job->job->is_begin))\n\t\t\tupdate_job_comment(pbs_sd, job, comment_buf);\n\n\t\t/* not attempting to update accrue type on a remote job */\n\t\tif (!job->is_peer_ob) {\n\t\t\tif (job->job != NULL)\n\t\t\t\tset_preempt_prio(job, job->job->queue, job->server);\n\t\t\tupdate_accruetype(pbs_sd, job->server, ACCRUE_CHECK_ERR, err->error_code, job);\n\t\t}\n\n\t\tif (log_buf[0] != '\\0')\n\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  job->name, log_buf);\n\n\t\t/* We won't be looking at this job in main_sched_loop()\n\t\t * and we just updated some attributes just above.  
Send Now.\n\t\t */\n\t\tsend_job_updates(pbs_sd, job);\n\t} else\n\t\tret = 0;\n\n\treturn ret;\n}\n\n/**\n * @brief move a peer job locally\n * \n * @param[in] rr - the job\n * @return int\n * @retval return value from pbs_movejob()\n */\nint\nmove_peer_job(resource_resv *rr)\n{\n\tint rc = 0;\n\n\tif (rr == NULL)\n\t\treturn 1;\n\tif (rr->is_peer_ob) {\n\t\tchar buf[100]; /* used to assemble queue@localserver */\n\n\t\tif (rr->server->name.find(':') == std::string::npos) {\n#ifdef NAS /* localmod 005 */\n\t\t\tsprintf(buf, \"%s@%s:%u\", rr->job->queue->name.c_str(),\n#else\n\t\t\tsprintf(buf, \"%s@%s:%d\", rr->job->queue->name.c_str(),\n#endif /* localmod 005 */\n\t\t\t\trr->server->name.c_str(), pbs_conf.batch_service_port);\n\t\t} else {\n\t\t\tsprintf(buf, \"%s@%s\", rr->job->queue->name.c_str(),\n\t\t\t\trr->server->name.c_str());\n\t\t}\n\n\t\trc = pbs_movejob(rr->job->peer_sd, const_cast<char *>(rr->name.c_str()), buf, NULL);\n\n\t\t/*\n\t\t * After successful transfer of the peer job to local server,\n\t\t * reset the peer job flag i.e. is_peer_ob to 0, as it became\n\t\t * a local job.\n\t\t */\n\t\tif (rc == 0)\n\t\t\trr->is_peer_ob = 0;\n\t}\n\treturn rc;\n}\n\n/**\n * @brief\n * \t\trun_job - handle the running of a pbs job.  
If it's a peer job\n *\t\t\tfirst move it to the local server and then run it.\n *\t\t\tif it's a local job, just run it.\n *\n * @param[in]\tpbs_sd\t-\tpbs connection descriptor to the server\n * @param[in]\trr\t-\tthe job to run\n * @param[in]\tns_arr\t\twhere to run the job\n * @param[out]\terr\t-\terror struct to return errors\n *\n *\n * @retval true - success\n * @retval false - failure\n * \n */\nbool\nrun_job(status *policy, int pbs_sd, resource_resv *rr, std::vector<nspec *> &ns_arr, schd_error *err)\n{\n\tbool ret = true;\n\tint pbsrc = 0; /* Return code from IFL call, 0 success, 1 failure */\n\n\tif (rr == NULL || rr->job == NULL || err == NULL)\n\t\treturn false;\n\n\t/* Server most likely crashed */\n\tif (got_sigpipe) {\n\t\tset_schd_error_codes(err, NEVER_RUN, SCHD_ERROR);\n\t\treturn false;\n\t}\n\n#ifdef RESC_SPEC /* Hack to make rescspec work with new select code */\n\tif (rr->is_job && rr->job->rspec != NULL && ns[0] != NULL) {\n\t\tstruct batch_status *bs; /* used for rescspec res assignment */\n\t\tstruct attrl *attrp;\t /* used for rescspec res assignment */\n\t\tresource_req *req;\n\t\tbs = rescspec_get_assignments(rr->job->rspec);\n\t\tif (bs != NULL) {\n\t\t\tattrp = bs->attribs;\n\t\t\twhile (attrp != NULL) {\n\t\t\t\treq = find_alloc_resource_req_by_str(ns[0]->resreq, attrp->resource);\n\t\t\t\tif (req != NULL)\n\t\t\t\t\tset_resource_req(req, attrp->value);\n\n\t\t\t\tif (rr->resreq == NULL)\n\t\t\t\t\trr->resreq = req;\n\t\t\t\tattrp = attrp->next;\n\t\t\t}\n\t\t\tpbs_statfree(bs);\n\t\t}\n\t}\n#endif\n\n\tif (rr->is_peer_ob)\n\t\tpbsrc = move_peer_job(rr);\n\n\tif (!pbsrc) {\n\t\tauto execvnode = create_execvnode(ns_arr);\n\n#ifdef NAS /* localmod 031 */\n\t\t/* debug dpr - Log vnodes assigned to job */\n\t\ttime_t tm = time(NULL);\n\t\tstruct tm *ptm = localtime(&tm);\n\t\tprintf(\"%04d-%02d-%02d %02d:%02d:%02d %s %s %s\\n\",\n\t\t       ptm->tm_year + 1900, ptm->tm_mon + 1, ptm->tm_mday,\n\t\t       ptm->tm_hour, ptm->tm_min, 
ptm->tm_sec,\n\t\t       \"Running\", rr->name.c_str(),\n\t\t       execvnode != NULL ? execvnode : \"(NULL)\");\n\t\tfflush(stdout);\n#endif /* localmod 031 */\n\n\t\tif (rr->is_shrink_to_fit) {\n\t\t\tchar timebuf[TIMEBUF_SIZE] = {0};\n\t\t\tauto rc = 1;\n\t\t\t/* The job is set to run; update its walltime only if it is not a forever job */\n\t\t\tif (rr->duration != JOB_INFINITY) {\n\t\t\t\tconvert_duration_to_str(rr->duration, timebuf, TIMEBUF_SIZE);\n\t\t\t\trc = update_job_attr(pbs_sd, rr, ATTR_l, \"walltime\", timebuf, NULL, UPDATE_NOW);\n\t\t\t}\n\t\t\tif (rc > 0) {\n\t\t\t\tif (strlen(timebuf) > 0)\n\t\t\t\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_JOB, LOG_NOTICE, rr->name,\n\t\t\t\t\t\t   \"Job will run for duration=%s\", timebuf);\n\t\t\t\tpbsrc = send_run_job(pbs_sd, rr->server->has_runjob_hook, rr->name, execvnode);\n\t\t\t} else\n\t\t\t\tpbsrc = 1;\n\t\t} else\n\t\t\tpbsrc = send_run_job(pbs_sd, rr->server->has_runjob_hook, rr->name, execvnode);\n\t}\n\n#ifdef NAS_CLUSTER /* localmod 125 */\n\tret = translate_runjob_return_code(pbsrc, rr);\n#else\n\tif (pbsrc)\n\t\tret = false;\n#endif /* localmod 125 */\n\n\tif (!ret) {\n\t\t/* received 'batch protocol error' */\n\t\tif (pbs_errno == PBSE_PROTOCOL) {\n\t\t\tset_schd_error_codes(err, NOT_RUN, static_cast<enum sched_error_code>(PBSE_PROTOCOL));\n\t\t\treturn false;\n\t\t} else {\n\t\t\tconst char *errbuf; /* comes from pbs_geterrmsg() */\n\t\t\tchar buf[MAX_LOG_SIZE];\n\n\t\t\tset_schd_error_codes(err, NOT_RUN, RUN_FAILURE);\n\t\t\terrbuf = pbs_geterrmsg(pbs_sd);\n\t\t\tif (errbuf == NULL)\n\t\t\t\terrbuf = \"\";\n\t\t\tset_schd_error_arg(err, ARG1, errbuf);\n\t\t\tsnprintf(buf, sizeof(buf), \"%d\", pbs_errno);\n\t\t\tset_schd_error_arg(err, ARG2, buf);\n#ifdef NAS /* localmod 031 */\n\t\t\tset_schd_error_arg(err, ARG3, rr->name);\n#endif /* localmod 031 */\n\t\t}\n\t}\n\trr->can_not_run = true;\n\tif (rr->job->parent_job != NULL && range_next_value(rr->job->parent_job->job->queued_subjobs, 
-1) < 0)\n\t\trr->job->parent_job->can_not_run = true;\n\n\treturn ret;\n}\n\n#ifdef NAS_CLUSTER /* localmod 125 */\n/**\n * @brief\n * \t\tcheck the run_job return code and decide whether to\n *        consider this job as running or not.\n *\n * @param[in]\tpbsrc\t-\treturn code of run_job\n * @param[in]\tbjob\t-\tjob structure\n *\n * @return\tint\n * @retval\t1\t-\tJob ran successfully\n * @retval\t2\t-\tJob may not be running, ignoring error.\n * @retval\t0\t-\tJob did not run\n * @retval\t-1\t-\tInvalid function parameter\n */\nstatic int\ntranslate_runjob_return_code(int pbsrc, resource_resv *bjob)\n{\n\tif ((bjob == NULL) || (pbsrc == PBSE_PROTOCOL))\n\t\treturn -1;\n\tif (pbsrc == 0)\n\t\treturn 1;\n\tswitch (pbsrc) {\n\t\tcase PBSE_HOOKERROR:\n\t\t\treturn 0;\n\t\tdefault:\n\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_WARNING, bjob->name,\n\t\t\t\t   \"Transient job warning.  Job may get held if issue persists:%d\", pbsrc);\n\t\t\treturn 2;\n\t}\n}\n#endif /* localmod 125 */\n\n/**\n * @brief resume a suspended job\n * \n * @param[in] policy - policy info\n * @param[in] pbs_sd - PBS connection descriptor\n * @param[in] rr - the job\n * @param[in] flags - flags to pass to update_universe_on_run()\n * @param[out] err - error structure to return errors\n * @return true job resumed successfully\n * @return false job didn't resume\n */\nbool\nresume_job(status *policy, int pbs_sd, resource_resv *rr, unsigned int flags, schd_error *err)\n{\n\tauto pbsrc = send_sigjob(pbs_sd, rr, \"resume\", NULL);\n\tif (pbsrc) {\n\t\tchar buf[COMMENT_BUF_SIZE] = {'\\0'}; /* generic buffer - comments & logging*/\n\t\tconst char *err_txt = pbse_to_txt(pbsrc);\n\t\tif (err_txt == NULL)\n\t\t\terr_txt = \"\";\n\t\tclear_schd_error(err);\n\t\tset_schd_error_codes(err, NOT_RUN, RUN_FAILURE);\n\t\tset_schd_error_arg(err, ARG1, err_txt);\n\t\tsnprintf(buf, sizeof(buf), \"%d\", pbsrc);\n\t\tset_schd_error_arg(err, ARG2, buf);\n\t\treturn 
false;\n\t}\n\tupdate_universe_on_run(policy, pbs_sd, rr, flags);\n\n\treturn true;\n}\n\n/**\n * @brief\n * \t\trun a job and update the local cache if it was successfully run\n * \t\tThis overload should be used when the ns_arr is unknown or \n * \t\tthe job is being rerun on its own ns_arr\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tpbs_sd\t-\tconnection descriptor to pbs_server\n * @param[in]\tsinfo\t-\tserver job is on\n * @param[in]\tqinfo\t-\tqueue job resides in\n * @param[in]\tresresv\t-\tthe job/reservation to run\n * @param[in]\tflags\t-\tflags to modify procedure\n *\t\t\t\t\t\t\tRURR_ADD_END_EVENT - add an end event to calendar for this job\n * \t\t\t\t\t\t\tRURR_NOPRINT - Don't print anything\n * @param[out]\terr\t-\terror struct to return errors\n *\n * @retval\ttrue\t: success\n * @retval\tfalse\t: failure (see err for more info)\n */\nbool\nrun_update_job(status *policy, int pbs_sd, server_info *sinfo,\n\t       queue_info *qinfo, resource_resv *rr,\n\t       unsigned int flags, schd_error *err)\n{\n\tif (rr == NULL || sinfo == NULL || qinfo == NULL || !rr->is_job || rr->job == NULL) {\n\t\tclear_schd_error(err);\n\t\tset_schd_error_codes(err, NOT_RUN, SCHD_ERROR);\n\t\treturn false;\n\t}\n\tif (!is_resource_resv_valid(rr, err))\n\t\treturn false;\n\n\tif (rr->job->is_suspended)\n\t\treturn resume_job(policy, pbs_sd, rr, flags, err);\n\n\tif (rr->nspec_arr.empty()) {\n\t\tauto nspec_arr = check_nodes(policy, sinfo, qinfo, rr, NO_FLAGS, err);\n\t\tif (nspec_arr.empty()) {\n\t\t\t/* Theoretically we've already made sure we can run, so this shouldn't happen */\n\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_JOB, LOG_NOTICE, rr->name,\n\t\t\t\t  \"Could not find node solution in run_update_job()\");\n\t\t\tset_schd_error_codes(err, NOT_RUN, SCHD_ERROR);\n\t\t\treturn false;\n\t\t}\n\t\treturn run_update_job(policy, pbs_sd, sinfo, qinfo, rr, nspec_arr, flags, err);\n\t} else {\n\t\t/* We're going to use the job's nspec_arr, so just pass an 
empty vector that will be ignored */\n\t\tstd::vector<nspec *> empty_ns;\n\t\treturn run_update_job(policy, pbs_sd, sinfo, qinfo, rr, empty_ns, flags, err);\n\t}\n}\n\n/**\n * @brief\n * \t\trun a job and update the local cache if it was successfully run\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tpbs_sd\t-\tconnection descriptor to pbs_server\n * @param[in]\tsinfo\t-\tserver job is on\n * @param[in]\tqinfo\t-\tqueue job resides in\n * @param[in]\tresresv\t-\tthe job/reservation to run\n * @param[in]\tns_arr\t-\tnode solution of where job should run.  \n * \t\t\t\tThis will either be owned by the job or freed before we return.\n * @param[in]\tflags\t-\tflags to modify procedure\n *\t\t\t\t\t\t\tRURR_ADD_END_EVENT - add an end event to calendar for this job\n * \t\t\t\t\t\t\tRURR_NOPRINT - Don't print anything\n * @param[out]\terr\t-\terror struct to return errors\n *\n * @retval\ttrue\t: success\n * @retval\tfalse\t: failure (see err for more info)\n *\n */\nbool\nrun_update_job(status *policy, int pbs_sd, server_info *sinfo,\n\t       queue_info *qinfo, resource_resv *resresv, std::vector<nspec *> &nspec_arr,\n\t       unsigned int flags, schd_error *err)\n{\n\tbool ret;\n\tresource_resv *rr;\n\n\tif (resresv == NULL || sinfo == NULL || qinfo == NULL || !resresv->is_job) {\n\t\tclear_schd_error(err);\n\t\tset_schd_error_codes(err, NOT_RUN, SCHD_ERROR);\n\t\tfree_nspecs(nspec_arr);\n\t\treturn false;\n\t}\n\n\tif (!is_resource_resv_valid(resresv, err)) {\n\t\tschdlogerr(PBSEVENT_DEBUG2, PBS_EVENTCLASS_SCHED, LOG_DEBUG, (char *) __func__, \"Request not valid:\", err);\n\t\tfree_nspecs(nspec_arr);\n\t\treturn false;\n\t}\n\n\tif (resresv->job->is_array)\n\t\trr = queue_subjob(resresv, sinfo, qinfo);\n\telse\n\t\trr = resresv;\n\n\tif (rr->job->is_suspended) {\n\t\tfree_nspecs(nspec_arr);\n\t\treturn resume_job(policy, pbs_sd, rr, flags, err);\n\t}\n\t/* If the job/resv already has a location to run, use that */\n\tif (!rr->nspec_arr.empty()) {\n\t\t/* 
We're not using this, so free it */\n\t\tfree_nspecs(nspec_arr);\n\t\tret = run_job(policy, pbs_sd, rr, rr->nspec_arr, err);\n\t\tif (ret) {\n\t\t\tret = update_universe_on_run(policy, pbs_sd, rr, flags);\n\t\t\tif (!ret)\n\t\t\t\tset_schd_error_codes(err, NOT_RUN, SCHD_ERROR);\n\t\t}\n\t} else {\n\t\tstd::sort(nspec_arr.begin(), nspec_arr.end(), cmp_nspec);\n\t\tret = run_job(policy, pbs_sd, rr, nspec_arr, err);\n\t\tif (!ret)\n\t\t\tfree_nspecs(nspec_arr);\n\t\telse {\n\t\t\tret = update_universe_on_run(policy, pbs_sd, rr, nspec_arr, flags);\n\t\t\tif (!ret)\n\t\t\t\tset_schd_error_codes(err, NOT_RUN, SCHD_ERROR);\n\t\t}\n\t}\n\treturn ret;\n}\n\n/**\n * @brief\n * \t\tsimulate the running of a resource resv\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tresresv\t-\tthe resource resv to simulate running\n * @param[in]\tns_arr  -\tnode solution of where a job/resv should run\n * @param[in]\tflags\t-\tflags to modify procedure\n *\t\t\t\t\t\t\tRURR_ADD_END_EVENT - add an end event to calendar for this job\n *\n * @retval\ttrue\t: success\n * @retval\tfalse\t: failure\n *\n */\nbool\nsim_run_update_resresv(status *policy, resource_resv *resresv, std::vector<nspec *> &ns_arr, unsigned int flags)\n{\n\tbool ret = true;\n\tresource_resv *rr;\n\n\tif (resresv == NULL) {\n\t\tfree_nspecs(ns_arr);\n\t\treturn false;\n\t}\n\n\tif (!is_resource_resv_valid(resresv, NULL)) {\n\t\tfree_nspecs(ns_arr);\n\t\treturn false;\n\t}\n\n\tif (resresv->is_job && resresv->job->is_array)\n\t\trr = queue_subjob(resresv, resresv->server, resresv->job->queue);\n\telse\n\t\trr = resresv;\n\n\tif (rr->nspec_arr.empty())\n\t\tret = update_universe_on_run(policy, SIMULATE_SD, rr, ns_arr, (flags | RURR_NOPRINT));\n\telse\n\t\tret = update_universe_on_run(policy, SIMULATE_SD, rr, (flags | RURR_NOPRINT));\n\n\tif (!ret)\n\t\tfree_nspecs(ns_arr);\n\n\treturn ret;\n}\n\n/**\n * @brief\n * \t\tsimulate the running of a resource resv without ns_arr.  
This is \n * \t\tused when we want to rerun the job on the same nodes in its own ns_arr\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tresresv\t-\tthe resource resv to simulate running\n * @param[in]\tflags\t-\tflags to modify procedure\n *\t\t\t\t\t\t\tRURR_ADD_END_EVENT - add an end event to calendar for this job\n *\n * @retval\ttrue\t: success\n * @retval\tfalse\t: failure\n */\nbool\nsim_run_update_resresv(status *policy, resource_resv *resresv, unsigned int flags)\n{\n\tstd::vector<nspec *> empty_ns;\n\treturn sim_run_update_resresv(policy, resresv, empty_ns, flags);\n}\n\n/**\n * @brief\n * \t\tshould we call add_job_to_calendar() with a job\n *\n * @param[in]\tpolicy\t-\tpolicy structure\n * @param[in]\tsinfo   -\tserver where job resides\n * @param[in]\tresresv -\tthe job to check\n * @param[in]\tnum_topjobs\t-\tnumber of topjobs added to the calendar\n *\n * @return\tint\n * @retval\t1\t: we should backfill\n * @retval\t0\t: we should not\n *\n */\nint\nshould_backfill_with_job(status *policy, server_info *sinfo, resource_resv *resresv, int num_topjobs)\n{\n\tif (policy == NULL || sinfo == NULL || resresv == NULL)\n\t\treturn 0;\n\n\tif (resresv->job == NULL)\n\t\treturn 0;\n\n\tif (!policy->backfill)\n\t\treturn 0;\n\n\t/* jobs in reservations are not eligible for backfill */\n\tif (resresv->job->resv != NULL)\n\t\treturn 0;\n\n#ifndef NAS /* localmod 038 */\n\tif (!resresv->job->is_preempted) {\n\t\tqueue_info *qinfo = resresv->job->queue;\n\t\tint bf_depth;\n\t\tint num_tj;\n\n\t\t/* If job is in a queue with a backfill_depth, use it */\n\t\tif (qinfo->backfill_depth != UNSPECIFIED) {\n\t\t\tbf_depth = qinfo->backfill_depth;\n\t\t\tnum_tj = qinfo->num_topjobs;\n\t\t} /* else check for a server bf depth */\n\t\telse if (policy->backfill_depth != static_cast<unsigned int>(UNSPECIFIED)) {\n\t\t\tbf_depth = policy->backfill_depth;\n\t\t\tnum_tj = num_topjobs;\n\t\t} else { /* 
lastly use the server's default of 1 */\n\t\t\tbf_depth = 1;\n\t\t\tnum_tj = num_topjobs;\n\t\t}\n\n\t\tif (num_tj >= bf_depth)\n\t\t\treturn 0;\n\t}\n#endif /* localmod 038 */\n\n\t/* jobs with AOE are not eligible for backfill unless specifically allowed */\n\tif (!conf.allow_aoe_calendar && resresv->aoename != NULL)\n\t\treturn 0;\n\n\t/* If we know we can never run the job, we shouldn't try to backfill */\n\tif (resresv->can_never_run)\n\t\treturn 0;\n\n\t/* Job is preempted and we're helping preempted jobs resume -- add to the calendar */\n\tif (resresv->job->is_preempted && sc_attrs.sched_preempt_enforce_resumption && (resresv->job->preempt >= preempt_normal))\n\t\treturn 1;\n\n\t/* Admin settable flag - don't add to calendar */\n\tif (resresv->job->topjob_ineligible)\n\t\treturn 0;\n\n\tif (policy->strict_ordering)\n\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tFind the start time of the top job and init\n *       all the necessary variables in sinfo to correctly backfill\n *       around it.  If no start time can be found, the job is not added\n *\t     to the calendar.\n *\n * @param[in]\tpbs_sd\t-\tconnection descriptor to pbs server\n * @param[in]\tpolicy\t-\tpolicy structure\n * @param[in]\tsinfo\t-\tthe server to find the topjob in\n * @param[in]\ttopjob\t-\tthe job we want to backfill around\n * @param[in]\tuse_buckets\t-\tuse the bucket algorithm to add the job to the calendar\n *\n * @retval\t1\t: success\n * @retval\t0\t: failure\n * @retval\t-1\t: error\n *\n * @par Side-Effect:\n * \t\t\tUse caution when returning failure from this function.\n *\t\t    It will have the effect of exiting the cycle and possibly\n *\t\t    stalling scheduling.  
It should only be done for important\n *\t\t    reasons like jobs can't be added to the calendar.\n */\nint\nadd_job_to_calendar(int pbs_sd, status *policy, server_info *sinfo,\n\t\t    resource_resv *topjob, int use_buckets)\n{\n\tserver_info *nsinfo; /* dup'd universe to simulate in */\n\tresource_resv *njob; /* the topjob in the dup'd universe */\n\tresource_resv *bjob; /* job pointer which becomes the topjob*/\n\tresource_resv *tjob; /* temporary job pointer for job arrays */\n\ttime_t start_time;   /* calculated start time of topjob */\n\n\tif (policy == NULL || sinfo == NULL ||\n\t    topjob == NULL || topjob->job == NULL)\n\t\treturn 0;\n\n\tif (sinfo->calendar != NULL) {\n\t\t/* if the job is in the calendar, then there is nothing to do\n\t\t * Note: We only ever look from now into the future\n\t\t */\n\t\tauto nexte = get_next_event(sinfo->calendar);\n\t\tif (find_timed_event(nexte, topjob->name, IGNORE_DISABLED_EVENTS, TIMED_NOEVENT, 0) != NULL)\n\t\t\treturn 1;\n\t}\n\ttry {\n\t\tnsinfo = new server_info(*sinfo);\n\t} catch (std::exception &e) {\n\t\treturn 0;\n\t}\n\n\tif ((njob = find_resource_resv_by_indrank(nsinfo->jobs, topjob->resresv_ind, topjob->rank)) == NULL) {\n\t\tdelete nsinfo;\n\t\treturn 0;\n\t}\n\n#ifdef NAS /* localmod 031 */\n\tlog_eventf(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t   topjob->name, \"Estimating the start time for a top job (q=%s schedselect=%.1000s).\", topjob->job->queue->name, topjob->job->schedsel);\n#else\n\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t  topjob->name, \"Estimating the start time for a top job.\");\n#endif /* localmod 031 */\n\tif (use_buckets)\n\t\tstart_time = calc_run_time(njob->name, nsinfo, SIM_RUN_JOB | USE_BUCKETS);\n\telse\n\t\tstart_time = calc_run_time(njob->name, nsinfo, SIM_RUN_JOB);\n\n\tif (start_time > 0) {\n\t\tchar *exec; /* used to hold execvnode for topjob */\n\t\tchar log_buf[MAX_LOG_SIZE];\n\n\t\t/* If our top job is a job array, we don't backfill 
around the\n\t\t * parent array... rather a subjob.  Normally subjobs don't actually\n\t\t * exist until they are started.  In our case here, we need to create\n\t\t * the subjob so we can backfill around it.\n\t\t */\n\t\tif (topjob->job->is_array) {\n\t\t\ttjob = queue_subjob(topjob, sinfo, topjob->job->queue);\n\t\t\tif (tjob == NULL) {\n\t\t\t\tdelete nsinfo;\n\t\t\t\treturn 0;\n\t\t\t}\n\n\t\t\t/* Can't search by rank, we just created tjob and it has a new rank*/\n\t\t\tnjob = find_resource_resv(nsinfo->jobs, tjob->name);\n\t\t\tif (njob == NULL) {\n\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG, __func__,\n\t\t\t\t\t  \"Can't find new subjob in simulated universe\");\n\t\t\t\tdelete nsinfo;\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t\t/* The subjob is just for the calendar, not for running */\n\t\t\ttjob->can_not_run = 1;\n\t\t\tbjob = tjob;\n\t\t} else\n\t\t\tbjob = topjob;\n\n\t\texec = create_execvnode(njob->nspec_arr);\n\t\tif (exec != NULL) {\n\t\t\tfree_nspecs(bjob->nspec_arr);\n\t\t\tbjob->nspec_arr = parse_execvnode(exec, sinfo, NULL);\n\t\t\tif (!bjob->nspec_arr.empty()) {\n\t\t\t\tstd::string selectspec;\n\t\t\t\tif (bjob->ninfo_arr != NULL)\n\t\t\t\t\tfree(bjob->ninfo_arr);\n\t\t\t\tbjob->ninfo_arr =\n\t\t\t\t\tcreate_node_array_from_nspec(bjob->nspec_arr);\n\t\t\t\tselectspec = create_select_from_nspec(bjob->nspec_arr);\n\t\t\t\tif (!selectspec.empty()) {\n\t\t\t\t\tdelete bjob->execselect;\n\t\t\t\t\tbjob->execselect = parse_selspec(selectspec);\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tdelete nsinfo;\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t} else {\n\t\t\tdelete nsinfo;\n\t\t\treturn 0;\n\t\t}\n\n\t\tif (bjob->job->est_execvnode != NULL)\n\t\t\tfree(bjob->job->est_execvnode);\n\t\tbjob->job->est_execvnode = string_dup(exec);\n\t\tbjob->job->est_start_time = start_time;\n\t\tbjob->start = start_time;\n\t\tbjob->end = start_time + bjob->duration;\n\n\t\tauto te_start = create_event(TIMED_RUN_EVENT, bjob->start, bjob, NULL, NULL);\n\t\tif (te_start == 
NULL) {\n\t\t\tdelete nsinfo;\n\t\t\treturn 0;\n\t\t}\n\t\tadd_event(sinfo->calendar, te_start);\n\n\t\tauto te_end = create_event(TIMED_END_EVENT, bjob->end, bjob, NULL, NULL);\n\t\tif (te_end == NULL) {\n\t\t\tdelete nsinfo;\n\t\t\treturn 0;\n\t\t}\n\t\tadd_event(sinfo->calendar, te_end);\n\n\t\tif (update_estimated_attrs(pbs_sd, bjob, bjob->job->est_start_time,\n\t\t\t\t\t   bjob->job->est_execvnode, 0) < 0) {\n\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_SCHED, LOG_WARNING,\n\t\t\t\t  bjob->name, \"Failed to update estimated attrs.\");\n\t\t}\n\n\t\tfor (auto ns : bjob->nspec_arr) {\n\t\t\tint ind = ns->ninfo->node_ind;\n\t\t\tadd_te_list(&(ns->ninfo->node_events), te_start);\n\n\t\t\tif (ind != -1 && sinfo->unordered_nodes[ind]->bucket_ind != -1) {\n\t\t\t\tnode_bucket *bkt;\n\n\t\t\t\tbkt = sinfo->buckets[sinfo->unordered_nodes[ind]->bucket_ind];\n\t\t\t\tif (pbs_bitmap_get_bit(bkt->free_pool->truth, ind)) {\n\t\t\t\t\tpbs_bitmap_bit_off(bkt->free_pool->truth, ind);\n\t\t\t\t\tbkt->free_pool->truth_ct--;\n\t\t\t\t\tpbs_bitmap_bit_on(bkt->busy_later_pool->truth, ind);\n\t\t\t\t\tbkt->busy_later_pool->truth_ct++;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif (policy->fair_share) {\n\t\t\t/* update the fairshare usage of this job.  This only modifies the\n\t\t\t * temporary usage used for this cycle.  Updating this will help the\n\t\t\t * problem of backfilling other jobs which will affect the fairshare\n\t\t\t * priority of the top job.  
If the priority changes too much\n\t\t\t * before it is run, the current top job may change in subsequent\n\t\t\t * cycles\n\t\t\t */\n\t\t\tupdate_usage_on_run(bjob);\n\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG, bjob->name,\n\t\t\t\t   \"Fairshare usage of entity %s increased due to job becoming a top job.\", bjob->job->ginfo->name.c_str());\n\t\t}\n\n\t\tsprintf(log_buf, \"Job is a top job and will run at %s\",\n\t\t\tctime(&bjob->start));\n\n\t\tlog_buf[strlen(log_buf) - 1] = '\\0'; /* ctime adds a \\n */\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG, bjob->name, log_buf);\n\t} else if (start_time == 0) {\n\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_JOB, LOG_WARNING, topjob->name,\n\t\t\t  \"Error in calculation of start time of top job\");\n\t\tdelete nsinfo;\n\t\treturn 0;\n\t}\n\tdelete nsinfo;\n\n\treturn 1;\n}\n\n/**\n * @brief\n *\t\tfind_ready_resv_job - find a job in a reservation which can run\n *\n * @param[in]\tresvs\t-\trunning resvs\n *\n * @return\tthe first job whose reservation is running\n * @retval\tNULL\t: if there are not any\n *\n */\nresource_resv *\nfind_ready_resv_job(resource_resv **resvs)\n{\n\tint i;\n\tint ind;\n\tresource_resv *rjob = NULL;\n\n\tif (resvs == NULL)\n\t\treturn NULL;\n\n\tfor (i = 0; resvs[i] != NULL && rjob == NULL; i++) {\n\t\tif (resvs[i]->resv != NULL) {\n\t\t\tif (resvs[i]->resv->is_running) {\n\t\t\t\tif (resvs[i]->resv->resv_queue != NULL) {\n\t\t\t\t\tind = find_runnable_resresv_ind(resvs[i]->resv->resv_queue->jobs, 0);\n\t\t\t\t\tif (ind != -1)\n\t\t\t\t\t\trjob = resvs[i]->resv->resv_queue->jobs[ind];\n\t\t\t\t\telse\n\t\t\t\t\t\trjob = NULL;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\treturn rjob;\n}\n\n/**\n * @brief\n *\t\tfind the index of the next runnable resource resv in an array\n *\n * @param[in]\tresresv_arr\t-\tarray of resource resvs to search\n * @param[in]\tstart_index\t-\tindex of array to start from\n *\n * @return\tthe index of the next resource resv to run\n * 
@retval\t-1\t: if there are not any\n *\n */\nint\nfind_runnable_resresv_ind(resource_resv **resresv_arr, int start_index)\n{\n#ifdef NAS /* localmod 034 */\n\treturn site_find_runnable_res(resresv_arr);\n#else\n\tint i;\n\n\tif (resresv_arr == NULL)\n\t\treturn -1;\n\n\tfor (i = start_index; resresv_arr[i] != NULL; i++) {\n\t\tif (!resresv_arr[i]->can_not_run && in_runnable_state(resresv_arr[i]))\n\t\t\treturn i;\n\t}\n\treturn -1;\n#endif /* localmod 034 */\n}\n\n/**\n * @brief\n *\t\tfind the index of the next runnable express or preempted job.\n * @par\n * \t\tASSUMPTION: express jobs will be sorted to the front of the list, followed by preempted jobs\n *\n * @param[in]\tjobs\t-\tthe array of jobs\n * @param[in]\tstart_index\t-\tthe index to start from\n *\n * @return\tthe index of the first runnable job\n * @retval\t-1\t: if there aren't any\n *\n */\nint\nfind_non_normal_job_ind(resource_resv **jobs, int start_index)\n{\n\tint i;\n\n\tif (jobs == NULL)\n\t\treturn -1;\n\n\tfor (i = start_index; jobs[i] != NULL; i++) {\n\t\tif (jobs[i]->job != NULL) {\n\t\t\tif ((jobs[i]->job->preempt_status & PREEMPT_TO_BIT(PREEMPT_EXPRESS)) || (jobs[i]->job->is_preempted)) {\n\t\t\t\tif (!jobs[i]->can_not_run)\n\t\t\t\t\treturn i;\n\t\t\t} else if (jobs[i]->job->preempt_status & PREEMPT_TO_BIT(PREEMPT_NORMAL))\n\t\t\t\treturn -1;\n\t\t}\n\t}\n\treturn -1;\n}\n\n/**\n * @brief\n * \t\tfind the next job to be considered to run by the scheduler\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tsinfo\t-\tthe server the jobs are on\n * @param[in]\tflag\t-\twhether or not to initialize, sort/re-sort jobs.\n *\n * @return\tresource_resv *\n * @retval\tthe next job to consider\n * @retval\tNULL\t: on error or if there are no more jobs to run\n *\n * @par MT-safe: No\n *\n */\nresource_resv *\nnext_job(status *policy, server_info *sinfo, int flag)\n{\n\t/* last_queue is the index into a queue array of the last time\n\t * the function was called\n\t */\n\tstatic unsigned int 
last_queue;\n\tstatic int last_queue_index;\n\tstatic int last_job_index;\n\n\t/* skip is used to mark that we're done looking for qrun, reservation jobs and\n\t * preempted jobs (while using the by_queue policy).\n\t * In each scheduling cycle skip is reset to 0. Jobs are selected in the following\n\t * order.\n\t * 1. qrun job\n\t * 2. jobs in reservation\n\t * 3. High priority preempting jobs\n\t * 4. Preempted jobs\n\t * 5. Normal jobs\n\t */\n\tstatic int skip = SKIP_NOTHING;\n\tstatic int sort_status = MAY_RESORT_JOBS; /* to decide whether to sort jobs or not */\n\tstatic int queue_list_size;\t\t  /* Count of number of priority levels in queue_list */\n\tresource_resv *rjob = NULL;\t\t  /* the job to return */\n\tint ind = -1;\n\n\tif ((policy == NULL) || (sinfo == NULL))\n\t\treturn NULL;\n\n\tif (flag == INITIALIZE) {\n\t\tif (policy->round_robin) {\n\t\t\tlast_queue = 0;\n\t\t\tlast_queue_index = 0;\n\t\t\tqueue_list_size = count_array(sinfo->queue_list);\n\n\t\t} else if (policy->by_queue)\n\t\t\tlast_queue = 0;\n\t\tskip = SKIP_NOTHING;\n\t\tsort_jobs(policy, sinfo);\n\t\tsort_status = SORTED;\n\t\tlast_job_index = 0;\n\t\treturn NULL;\n\t}\n\n\tif (sinfo->qrun_job != NULL) {\n\t\tif (!sinfo->qrun_job->can_not_run &&\n\t\t    in_runnable_state(sinfo->qrun_job)) {\n\t\t\trjob = sinfo->qrun_job;\n\t\t}\n\t\treturn rjob;\n\t}\n\tif (!(skip & SKIP_RESERVATIONS)) {\n\t\trjob = find_ready_resv_job(sinfo->resvs);\n\t\tif (rjob != NULL)\n\t\t\treturn rjob;\n\t\telse\n\t\t\tskip |= SKIP_RESERVATIONS;\n\t}\n\n\tif ((sort_status != SORTED) || ((flag == MAY_RESORT_JOBS) && policy->fair_share) || (flag == MUST_RESORT_JOBS)) {\n\t\tsort_jobs(policy, sinfo);\n\t\tsort_status = SORTED;\n\t\tlast_job_index = 0;\n\t}\n\tif (policy->round_robin) {\n\t\t/* Below is a pictorial representation of what queue_list\n\t\t * looks like when policy is set to round_robin.\n\t\t * Each column represents the queues which are at a given priority level.\n\t\t * Priorities are also sorted in 
descending order i.e.\n\t\t * Priority 1 > Priority 2 > Priority 3 ...\n\t\t * We make use of each column, traverse through each queue at the same priority\n\t\t * level and run jobs from these queues in round_robin fashion. For example:\n\t\t * If queue 1 has J1,J3 : queue 2 has J2, J5 : queue 4 has J4, J6 then the order\n\t\t * these jobs will be picked would be J1 -> J2 -> J4 -> J3 -> J5 -> J6\n\t\t * When we are finished running all jobs from one priority column, we move onto\n\t\t * next column and repeat the procedure there.\n\t\t */\n\t\t/****************************************************\n\t\t *    --------------------------------------------\n\t\t *    | Priority 1 | Priority 2 | .............. |\n\t\t *    --------------------------------------------\n\t\t *    | queue 1    | queue 3    | ........ | NULL|\n\t\t *    --------------------------------------------\n\t\t *    | queue 2    | queue 5    | ........ | NULL|\n\t\t *    --------------------------------------------\n\t\t *    | queue 4    | NULL       | ........ | NULL|\n\t\t *    --------------------------------------------\n\t\t *    | NULL       |            | ........ 
| NULL|\n\t\t *    --------------------------------------------\n\t\t ****************************************************/\n\t\t/* last_queue_index refers to a priority level as shown in the diagram\n\t\t * above.\n\t\t */\n\t\tint i = last_queue_index;\n\n\t\twhile ((rjob == NULL) && (i < queue_list_size)) {\n\t\t\t/* Calculate the number of queues at this priority level */\n\t\t\tunsigned int queue_index_size = count_array(sinfo->queue_list[i]);\n\t\t\tunsigned int queues_finished = 0;\n\n\t\t\tfor (unsigned int j = last_queue; j < queue_index_size; j++) {\n\t\t\t\tind = find_runnable_resresv_ind(sinfo->queue_list[i][j]->jobs, 0);\n\t\t\t\tif (ind != -1)\n\t\t\t\t\trjob = sinfo->queue_list[i][j]->jobs[ind];\n\t\t\t\telse\n\t\t\t\t\trjob = NULL;\n\t\t\t\tlast_queue++;\n\t\t\t\t/* If all queues have been traversed, move back to the first queue */\n\t\t\t\tif (last_queue == queue_index_size)\n\t\t\t\t\tlast_queue = 0;\n\t\t\t\t/* Count how many times we've reached the end of a queue.\n\t\t\t\t * If we've reached the end of all the queues, we're done.\n\t\t\t\t * If we find a job, reset our counter.\n\t\t\t\t */\n\t\t\t\tif (rjob == NULL) {\n\t\t\t\t\tqueues_finished++;\n\t\t\t\t\tif (queues_finished == queue_index_size)\n\t\t\t\t\t\tbreak;\n\t\t\t\t} else {\n\t\t\t\t\t/* If we are able to get one job from any of the queues,\n\t\t\t\t\t * set queues_finished to 0\n\t\t\t\t\t */\n\t\t\t\t\tqueues_finished = 0;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\t/* If all queues at the given priority level have been traversed,\n\t\t\t * move on to the next index and set last_queue to 0, so as to\n\t\t\t * start from the first queue of the next index\n\t\t\t */\n\t\t\tif (queues_finished == queue_index_size) {\n\t\t\t\tlast_queue = 0;\n\t\t\t\tlast_queue_index++;\n\t\t\t\ti++;\n\t\t\t}\n\t\t}\n\t} else if (policy->by_queue) {\n\t\tif (!(skip & SKIP_NON_NORMAL_JOBS)) {\n\t\t\tind = find_non_normal_job_ind(sinfo->jobs, last_job_index);\n\t\t\tif (ind == -1) {\n\t\t\t\t/* No more preempted jobs 
*/\n\t\t\t\tskip |= SKIP_NON_NORMAL_JOBS;\n\t\t\t\tlast_job_index = 0;\n\t\t\t} else {\n\t\t\t\trjob = sinfo->jobs[ind];\n\t\t\t\tlast_job_index = ind;\n\t\t\t}\n\t\t}\n\t\tif (skip & SKIP_NON_NORMAL_JOBS) {\n\t\t\twhile (last_queue < sinfo->queues.size() &&\n\t\t\t       ((ind = find_runnable_resresv_ind(sinfo->queues[last_queue]->jobs, last_job_index)) == -1)) {\n\t\t\t\tlast_queue++;\n\t\t\t\tlast_job_index = 0;\n\t\t\t}\n\t\t\tif (last_queue < sinfo->queues.size() && ind != -1) {\n\t\t\t\trjob = sinfo->queues[last_queue]->jobs[ind];\n\t\t\t\tlast_job_index = ind;\n\t\t\t} else\n\t\t\t\trjob = NULL;\n\t\t}\n\t} else { /* treat the entire system as one large queue */\n\t\tind = find_runnable_resresv_ind(sinfo->jobs, last_job_index);\n\t\tif (ind != -1) {\n\t\t\trjob = sinfo->jobs[ind];\n\t\t\tlast_job_index = ind;\n\t\t} else\n\t\t\trjob = NULL;\n\t}\n\treturn rjob;\n}\n\n/**\n * @brief\tInitialize sc_attrs\n */\nstatic void\ninit_sc_attrs(void)\n{\n\tfree(sc_attrs.comment);\n\tfree(sc_attrs.job_sort_formula);\n\tfree(sc_attrs.partition);\n\tfree(sc_attrs.sched_log);\n\tfree(sc_attrs.sched_priv);\n\n\tsc_attrs.attr_update_period = 0;\n\tsc_attrs.comment = NULL;\n\tsc_attrs.do_not_span_psets = 0;\n\tsc_attrs.job_sort_formula = NULL;\n\tsc_attrs.job_sort_formula_threshold = INT_MIN;\n\tsc_attrs.only_explicit_psets = 0;\n\tsc_attrs.partition = NULL;\n\tsc_attrs.preempt_queue_prio = 0;\n\tsc_attrs.preempt_sort = PS_MIN_T_SINCE_START;\n\tsc_attrs.runjob_mode = RJ_NOWAIT;\n\tsc_attrs.preempt_targets_enable = 1;\n\tsc_attrs.sched_cycle_length = SCH_CYCLE_LEN_DFLT;\n\tsc_attrs.sched_log = NULL;\n\tsc_attrs.sched_preempt_enforce_resumption = 0;\n\tsc_attrs.sched_priv = NULL;\n\tsc_attrs.server_dyn_res_alarm = 0;\n\tsc_attrs.throughput_mode = 1;\n\tsc_attrs.opt_backfill_fuzzy = BF_DEFAULT;\n}\n\n/**\n * @brief\tParse and cache sched object batch_status\n *\n * @param[in] status - populated batch_status after stating this scheduler from server\n *\n * @retval\n * @return 0 
- Failure\n * @return  1 - Success\n *\n * @mt-safe: No\n * @par Side Effects:\n *\tNone\n */\nstatic int\nparse_sched_obj(int connector, struct batch_status *status)\n{\n\tstruct attrl *attrp;\n\tchar *tmp_priv_dir = NULL;\n\tchar *tmp_log_dir = NULL;\n\tstatic char *priv_dir = NULL;\n\tstatic char *log_dir = NULL;\n\tstruct attropl *attribs;\n\tchar *tmp_comment = NULL;\n\tint clear_comment = 0;\n\tint ret = 0;\n\tlong num;\n\tchar *endp;\n\tchar *tok;\n\tchar *save_ptr;\n\tint i;\n\tint j;\n\tlong prev_attr_u_period = sc_attrs.attr_update_period;\n\n\tattrp = status->attribs;\n\n\tinit_sc_attrs();\n\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_REQUEST, LOG_DEBUG,\n\t\t  \"\", \"Updating scheduler attributes\");\n\n\t/* resetting the following before fetching from batch_status. */\n\twhile (attrp != NULL) {\n\t\tif (!strcmp(attrp->name, ATTR_sched_cycle_len)) {\n\t\t\tsc_attrs.sched_cycle_length = res_to_num(attrp->value, NULL);\n\t\t} else if (!strcmp(attrp->name, ATTR_attr_update_period)) {\n\t\t\tlong newval;\n\n\t\t\tnewval = res_to_num(attrp->value, NULL);\n\t\t\tsc_attrs.attr_update_period = newval;\n\t\t\tif (newval != prev_attr_u_period)\n\t\t\t\tlast_attr_updates = 0;\n\t\t} else if (!strcmp(attrp->name, ATTR_partition)) {\n\t\t\tfree(sc_attrs.partition);\n\t\t\tsc_attrs.partition = string_dup(attrp->value);\n\t\t} else if (!strcmp(attrp->name, ATTR_do_not_span_psets)) {\n\t\t\tsc_attrs.do_not_span_psets = res_to_num(attrp->value, NULL);\n\t\t} else if (!strcmp(attrp->name, ATTR_only_explicit_psets)) {\n\t\t\tsc_attrs.only_explicit_psets = res_to_num(attrp->value, NULL);\n\t\t} else if (!strcmp(attrp->name, ATTR_sched_preempt_enforce_resumption)) {\n\t\t\tif (!strcasecmp(attrp->value, ATR_FALSE))\n\t\t\t\tsc_attrs.sched_preempt_enforce_resumption = 0;\n\t\t\telse\n\t\t\t\tsc_attrs.sched_preempt_enforce_resumption = 1;\n\t\t} else if (!strcmp(attrp->name, ATTR_preempt_targets_enable)) {\n\t\t\tif (!strcasecmp(attrp->value, 
ATR_FALSE))\n\t\t\t\tsc_attrs.preempt_targets_enable = 0;\n\t\t\telse\n\t\t\t\tsc_attrs.preempt_targets_enable = 1;\n\t\t} else if (!strcmp(attrp->name, ATTR_job_sort_formula_threshold)) {\n\t\t\tsc_attrs.job_sort_formula_threshold = res_to_num(attrp->value, NULL);\n\t\t} else if (!strcmp(attrp->name, ATTR_throughput_mode)) {\n\t\t\tsc_attrs.throughput_mode = res_to_num(attrp->value, NULL);\n\t\t} else if (!strcmp(attrp->name, ATTR_opt_backfill_fuzzy)) {\n\t\t\tnum = strtol(attrp->value, &endp, 10);\n\t\t\tif (*endp == '\\0')\n\t\t\t\tsc_attrs.opt_backfill_fuzzy = num;\n\t\t\telse if (!strcasecmp(attrp->value, \"off\"))\n\t\t\t\tsc_attrs.opt_backfill_fuzzy = BF_OFF;\n\t\t\telse if (!strcasecmp(attrp->value, \"low\"))\n\t\t\t\tsc_attrs.opt_backfill_fuzzy = BF_LOW;\n\t\t\telse if (!strcasecmp(attrp->value, \"med\") || !strcasecmp(attrp->value, \"medium\"))\n\t\t\t\tsc_attrs.opt_backfill_fuzzy = BF_MED;\n\t\t\telse if (!strcasecmp(attrp->value, \"high\"))\n\t\t\t\tsc_attrs.opt_backfill_fuzzy = BF_HIGH;\n\t\t\telse\n\t\t\t\tsc_attrs.opt_backfill_fuzzy = BF_DEFAULT;\n\t\t} else if (!strcmp(attrp->name, ATTR_job_run_wait)) {\n\t\t\tif (!strcmp(attrp->value, RUN_WAIT_NONE))\n\t\t\t\tsc_attrs.runjob_mode = RJ_NOWAIT;\n\t\t\telse if (!strcmp(attrp->value, RUN_WAIT_RUNJOB_HOOK)) {\n\t\t\t\tsc_attrs.runjob_mode = RJ_RUNJOB_HOOK;\n\t\t\t} else\n\t\t\t\tsc_attrs.runjob_mode = RJ_EXECJOB_HOOK;\n\t\t} else if (!strcmp(attrp->name, ATTR_sched_preempt_order)) {\n\t\t\ttok = strtok_r(attrp->value, \"\\t \", &save_ptr);\n\n\t\t\tif (tok != NULL && !isdigit(tok[0])) {\n\t\t\t\t/* unset the defaults */\n\t\t\t\tsc_attrs.preempt_order[0].order[0] = PREEMPT_METHOD_LOW;\n\t\t\t\tsc_attrs.preempt_order[0].order[1] = PREEMPT_METHOD_LOW;\n\t\t\t\tsc_attrs.preempt_order[0].order[2] = PREEMPT_METHOD_LOW;\n\n\t\t\t\tsc_attrs.preempt_order[0].high_range = 100;\n\t\t\t\ti = 0;\n\t\t\t\tdo {\n\t\t\t\t\tif (isdigit(tok[0])) {\n\t\t\t\t\t\tnum = strtol(tok, &endp, 10);\n\t\t\t\t\t\tif (*endp != 
'\\0')\n\t\t\t\t\t\t\tgoto cleanup;\n\t\t\t\t\t\tsc_attrs.preempt_order[i].low_range = num + 1;\n\t\t\t\t\t\ti++;\n\t\t\t\t\t\tsc_attrs.preempt_order[i].high_range = num;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tfor (j = 0; tok[j] != '\\0'; j++) {\n\t\t\t\t\t\t\tswitch (tok[j]) {\n\t\t\t\t\t\t\t\tcase 'S':\n\t\t\t\t\t\t\t\t\tsc_attrs.preempt_order[i].order[j] = PREEMPT_METHOD_SUSPEND;\n\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t\tcase 'C':\n\t\t\t\t\t\t\t\t\tsc_attrs.preempt_order[i].order[j] = PREEMPT_METHOD_CHECKPOINT;\n\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t\tcase 'R':\n\t\t\t\t\t\t\t\t\tsc_attrs.preempt_order[i].order[j] = PREEMPT_METHOD_REQUEUE;\n\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t\tcase 'D':\n\t\t\t\t\t\t\t\t\tsc_attrs.preempt_order[i].order[j] = PREEMPT_METHOD_DELETE;\n\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\ttok = strtok_r(NULL, \"\\t \", &save_ptr);\n\t\t\t\t} while (tok != NULL && i < PREEMPT_ORDER_MAX);\n\n\t\t\t\tsc_attrs.preempt_order[i].low_range = 0;\n\t\t\t}\n\t\t} else if (!strcmp(attrp->name, ATTR_sched_preempt_queue_prio)) {\n\t\t\tsc_attrs.preempt_queue_prio = strtol(attrp->value, &endp, 10);\n\t\t\tif (*endp != '\\0')\n\t\t\t\tgoto cleanup;\n\t\t} else if (!strcmp(attrp->name, ATTR_sched_preempt_prio)) {\n\t\t\tlong prio;\n\t\t\tchar **list;\n\n\t\t\tprio = PREEMPT_PRIORITY_HIGH;\n\t\t\tlist = break_comma_list(attrp->value);\n\t\t\tif (list != NULL) {\n\t\t\t\tmemset(sc_attrs.preempt_prio, 0, sizeof(sc_attrs.preempt_prio));\n\t\t\t\tsc_attrs.preempt_prio[0][0] = PREEMPT_TO_BIT(PREEMPT_QRUN);\n\t\t\t\tsc_attrs.preempt_prio[0][1] = prio;\n\t\t\t\tprio -= PREEMPT_PRIORITY_STEP;\n\t\t\t\tfor (i = 0; list[i] != NULL; i++) {\n\t\t\t\t\tnum = preempt_bit_field(list[i]);\n\t\t\t\t\tif (num >= 0) {\n\t\t\t\t\t\tsc_attrs.preempt_prio[i + 1][0] = num;\n\t\t\t\t\t\tsc_attrs.preempt_prio[i + 1][1] = prio;\n\t\t\t\t\t\tprio -= PREEMPT_PRIORITY_STEP;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\t/* sc_attrs.preempt_prio is an int array 
of size[NUM_PPRIO][2] */\n\t\t\t\tqsort(sc_attrs.preempt_prio, NUM_PPRIO, sizeof(int) * 2, preempt_cmp);\n\n\t\t\t\t/* cache preemption priority for normal jobs */\n\t\t\t\tfor (i = 0; i < NUM_PPRIO && sc_attrs.preempt_prio[i][1] != 0; i++) {\n\t\t\t\t\tif (sc_attrs.preempt_prio[i][0] == PREEMPT_TO_BIT(PREEMPT_NORMAL)) {\n\t\t\t\t\t\tpreempt_normal = sc_attrs.preempt_prio[i][1];\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tfree_string_array(list);\n\t\t\t}\n\t\t} else if (!strcmp(attrp->name, ATTR_sched_preempt_sort)) {\n\t\t\tif (strcasecmp(attrp->value, \"min_time_since_start\") == 0)\n\t\t\t\tsc_attrs.preempt_sort = PS_MIN_T_SINCE_START;\n\t\t\telse\n\t\t\t\tsc_attrs.preempt_sort = PS_PREEMPT_PRIORITY;\n\t\t} else if (!strcmp(attrp->name, ATTR_job_sort_formula)) {\n\t\t\tfree(sc_attrs.job_sort_formula);\n\t\t\tsc_attrs.job_sort_formula = read_formula();\n\t\t\tif (!conf.prime_sort.empty() || !conf.non_prime_sort.empty())\n\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_SCHED, LOG_DEBUG, __func__,\n\t\t\t\t\t  \"Job sorting formula and job_sort_key are incompatible.  
\"\n\t\t\t\t\t  \"The job sorting formula will be used.\");\n\t\t} else if (!strcmp(attrp->name, ATTR_sched_server_dyn_res_alarm)) {\n\t\t\tnum = strtol(attrp->value, &endp, 10);\n\t\t\tif (*endp != '\\0')\n\t\t\t\tgoto cleanup;\n\n\t\t\tsc_attrs.server_dyn_res_alarm = num;\n\t\t} else if (!strcmp(attrp->name, ATTR_sched_priv) && !dflt_sched) {\n\t\t\tif ((tmp_priv_dir = string_dup(attrp->value)) == NULL)\n\t\t\t\tgoto cleanup;\n\t\t} else if (!strcmp(attrp->name, ATTR_sched_log) && !dflt_sched) {\n\t\t\tif ((tmp_log_dir = string_dup(attrp->value)) == NULL)\n\t\t\t\tgoto cleanup;\n\t\t} else if (!strcmp(attrp->name, ATTR_comment) && !dflt_sched) {\n\t\t\tif ((tmp_comment = string_dup(attrp->value)) == NULL)\n\t\t\t\tgoto cleanup;\n\t\t} else if (!strcmp(attrp->name, ATTR_logevents)) {\n\t\t\tchar *endp;\n\t\t\tlong mask;\n\t\t\tmask = strtol(attrp->value, &endp, 10);\n\t\t\tif (*endp != '\\0')\n\t\t\t\tgoto cleanup;\n\t\t\t*log_event_mask = mask;\n\t\t}\n\t\tattrp = attrp->next;\n\t}\n\n\tif (!dflt_sched) {\n\t\tint err;\n\t\tint priv_dir_update_fail = 0;\n\t\tint validate_log_dir = 0;\n\t\tint validate_priv_dir = 0;\n\t\tstruct attropl *patt;\n\t\tchar comment[MAX_LOG_SIZE] = {0};\n\n\t\tif (log_dir == NULL)\n\t\t\tvalidate_log_dir = 1;\n\t\telse if (tmp_log_dir != NULL && strcmp(log_dir, tmp_log_dir) != 0)\n\t\t\tvalidate_log_dir = 1;\n\n\t\tif (priv_dir == NULL)\n\t\t\tvalidate_priv_dir = 1;\n\t\telse if (tmp_priv_dir != NULL && strcmp(priv_dir, tmp_priv_dir) != 0)\n\t\t\tvalidate_priv_dir = 1;\n\n\t\tif (!validate_log_dir && !validate_priv_dir && tmp_comment != NULL)\n\t\t\tclear_comment = 1;\n\n\t\tif (validate_log_dir) {\n\t\t\tlog_close(1);\n\t\t\tif (log_open(logfile, tmp_log_dir) == -1) {\n\t\t\t\t/* update the sched comment attribute with the reason for failure */\n\t\t\t\tattribs = static_cast<attropl *>(calloc(2, sizeof(struct attropl)));\n\t\t\t\tif (attribs == NULL) {\n\t\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SCHED, LOG_ERR, __func__, 
MEM_ERR_MSG);\n\t\t\t\t\tgoto cleanup;\n\t\t\t\t}\n\t\t\t\tstrcpy(comment, \"Unable to change the sched_log directory\");\n\t\t\t\tpatt = attribs;\n\t\t\t\tpatt->name = const_cast<char *>(ATTR_comment);\n\t\t\t\tpatt->value = comment;\n\t\t\t\tpatt->next = patt + 1;\n\t\t\t\tpatt++;\n\t\t\t\tpatt->name = const_cast<char *>(ATTR_scheduling);\n\t\t\t\tpatt->value = const_cast<char *>(\"0\");\n\t\t\t\tpatt->next = NULL;\n\n\t\t\t\terr = pbs_manager(connector,\n\t\t\t\t\t\t  MGR_CMD_SET, MGR_OBJ_SCHED,\n\t\t\t\t\t\t  const_cast<char *>(sc_name), attribs, NULL);\n\t\t\t\tfree(attribs);\n\t\t\t\tif (err) {\n\t\t\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_SCHED, LOG_ERR, __func__,\n\t\t\t\t\t\t   \"Failed to update scheduler comment %s at the server\", comment);\n\t\t\t\t}\n\t\t\t\tgoto cleanup;\n\t\t\t} else {\n\t\t\t\tif (tmp_comment != NULL)\n\t\t\t\t\tclear_comment = 1;\n\t\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SCHED, LOG_DEBUG,\n\t\t\t\t\t   \"reconfigure\", \"scheduler log directory is changed to %s\", tmp_log_dir);\n\t\t\t\tfree(log_dir);\n\t\t\t\tif ((log_dir = string_dup(tmp_log_dir)) == NULL) {\n\t\t\t\t\treturn 0;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif (validate_priv_dir) {\n\t\t\tint c;\n#if !defined(DEBUG) && !defined(NO_SECURITY_CHECK)\n\t\t\tc = chk_file_sec_user(tmp_priv_dir, 1, 0, S_IWGRP | S_IWOTH, 1, getuid());\n\t\t\tc |= chk_file_sec_user(pbs_conf.pbs_environment, 0, 0, S_IWGRP | S_IWOTH, 0, getuid());\n\t\t\tif (c != 0) {\n\t\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_SCHED, LOG_ERR, __func__,\n\t\t\t\t\t   \"PBS failed validation checks for directory %s\", tmp_priv_dir);\n\t\t\t\tstrcpy(comment, \"PBS failed validation checks for sched_priv directory\");\n\t\t\t\tpriv_dir_update_fail = 1;\n\t\t\t}\n#else /* not DEBUG and not NO_SECURITY_CHECK */\n\t\t\tc = 0;\n#endif\n\t\t\tif (c == 0) {\n\t\t\t\tif (tmp_priv_dir == NULL || chdir(tmp_priv_dir) == -1) {\n\t\t\t\t\tstrcpy(comment, \"PBS failed validation checks for sched_priv 
directory\");\n\t\t\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_SCHED, LOG_ERR, __func__,\n\t\t\t\t\t\t   \"PBS failed validation checks for directory %s\", tmp_priv_dir);\n\t\t\t\t\tpriv_dir_update_fail = 1;\n\t\t\t\t} else {\n\t\t\t\t\tint lockfds;\n\t\t\t\t\tlockfds = open(\"sched.lock\", O_CREAT | O_WRONLY, 0644);\n\t\t\t\t\tif (lockfds < 0) {\n\t\t\t\t\t\tstrcpy(comment, \"PBS failed validation checks for sched_priv directory\");\n\t\t\t\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_SCHED, LOG_ERR, __func__,\n\t\t\t\t\t\t\t   \"PBS failed validation checks for directory %s\", tmp_priv_dir);\n\t\t\t\t\t\tpriv_dir_update_fail = 1;\n\t\t\t\t\t} else {\n\t\t\t\t\t\t/* write the scheduler's pid into the lockfile */\n\t\t\t\t\t\tif (ftruncate(lockfds, (off_t) 0) == -1)\n\t\t\t\t\t\t\tlog_errf(-1, __func__, \"ftruncate failed. ERR : %s\",strerror(errno));\n\t\t\t\t\t\t(void) sprintf(log_buffer, \"%d\\n\", getpid());\n\t\t\t\t\t\tif (write(lockfds, log_buffer, strlen(log_buffer)) == -1)\n\t\t\t\t\t\t\tlog_errf(-1, __func__, \"write failed. 
ERR : %s\",strerror(errno));\n\t\t\t\t\t\tclose(lockfds);\n\t\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SCHED, LOG_DEBUG, \"reconfigure\",\n\t\t\t\t\t\t\t   \"scheduler priv directory has changed to %s\", tmp_priv_dir);\n\t\t\t\t\t\tif (tmp_comment != NULL)\n\t\t\t\t\t\t\tclear_comment = 1;\n\t\t\t\t\t\tfree(priv_dir);\n\t\t\t\t\t\tif ((priv_dir = string_dup(tmp_priv_dir)) == NULL) {\n\t\t\t\t\t\t\treturn 0;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif (priv_dir_update_fail) {\n\t\t\t/* update the sched comment attribute with the reason for failure */\n\t\t\tattribs = static_cast<attropl *>(calloc(2, sizeof(struct attropl)));\n\t\t\tif (attribs == NULL) {\n\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SCHED, LOG_ERR, __func__, MEM_ERR_MSG);\n\t\t\t\tstrcpy(comment, \"Unable to change the sched_priv directory\");\n\t\t\t\tgoto cleanup;\n\t\t\t}\n\t\t\tpatt = attribs;\n\t\t\tpatt->name = const_cast<char *>(ATTR_comment);\n\t\t\tpatt->value = comment;\n\t\t\tpatt->next = patt + 1;\n\t\t\tpatt++;\n\t\t\tpatt->name = const_cast<char *>(ATTR_scheduling);\n\t\t\tpatt->value = const_cast<char *>(\"0\");\n\t\t\tpatt->next = NULL;\n\t\t\terr = pbs_manager(connector,\n\t\t\t\t\t  MGR_CMD_SET, MGR_OBJ_SCHED,\n\t\t\t\t\t  const_cast<char *>(sc_name), attribs, NULL);\n\t\t\tfree(attribs);\n\t\t\tif (err) {\n\t\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_SCHED, LOG_ERR, __func__,\n\t\t\t\t\t   \"Failed to update scheduler comment %s at the server\", comment);\n\t\t\t}\n\t\t\tgoto cleanup;\n\t\t}\n\t}\n\tif (clear_comment) {\n\t\tint err;\n\t\tstruct attropl *patt;\n\n\t\tattribs = static_cast<attropl *>(calloc(1, sizeof(struct attropl)));\n\t\tif (attribs == NULL) {\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SCHED, LOG_ERR, __func__, MEM_ERR_MSG);\n\t\t\tgoto cleanup;\n\t\t}\n\n\t\tpatt = attribs;\n\t\tpatt->name = const_cast<char *>(ATTR_comment);\n\t\tpatt->value = static_cast<char *>(malloc(1));\n\t\tif (patt->value == NULL) 
{\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SCHED, LOG_ERR, __func__,\n\t\t\t\t  \"can't update scheduler attribs, malloc failed\");\n\t\t\tfree(attribs);\n\t\t\tgoto cleanup;\n\t\t}\n\t\tpatt->value[0] = '\\0';\n\t\tpatt->next = NULL;\n\t\terr = pbs_manager(connector,\n\t\t\t\t  MGR_CMD_UNSET, MGR_OBJ_SCHED,\n\t\t\t\t  const_cast<char *>(sc_name), attribs, NULL);\n\t\tfree(attribs->value);\n\t\tfree(attribs);\n\t\tif (err) {\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SCHED, LOG_ERR, __func__,\n\t\t\t\t  \"Failed to update scheduler comment at the server\");\n\t\t\tgoto cleanup;\n\t\t}\n\t}\n\tret = 1;\ncleanup:\n\tfree(tmp_log_dir);\n\tfree(tmp_priv_dir);\n\tfree(tmp_comment);\n\treturn ret;\n}\n\n/**\n * @brief\n *\tSet and validate the sched object attributes queried from Server\n *\n * @param[in] connector - socket descriptor to server\n *\n * @retval Error code\n * @return 0 - Failure\n * @return 1 - Success\n *\n * @par Side Effects:\n *\tNone\n *\n *\n */\nint\nset_validate_sched_attrs(int connector)\n{\n\tstruct batch_status *ss = NULL;\n\tstruct batch_status *all_ss = NULL;\n\n\tif (connector < 0)\n\t\treturn 0;\n\n\t/* Stat the scheduler to get details of sched */\n\n\tall_ss = send_statsched(connector, NULL, NULL);\n\tss = bs_find(all_ss, sc_name);\n\n\tif (ss == NULL) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"Unable to retrieve the scheduler attributes from server\");\n\t\tlog_err(-1, __func__, log_buffer);\n\t\tpbs_statfree(all_ss);\n\t\treturn 0;\n\t}\n\tif (!parse_sched_obj(connector, ss)) {\n\t\tpbs_statfree(all_ss);\n\t\treturn 0;\n\t}\n\n\tpbs_statfree(all_ss);\n\n\treturn 1;\n}\n\n/**\n * @brief Validate running user.\n * If PBS_DAEMON_SERVICE_USER is set, and user is root, change user to it.\n *\n * @param[in] exename - name of executable (argv[0])\n *\n * @retval Error code\n * @return 0 - Failure\n * @return 1 - Success\n *\n * @par Side Effects:\n *\tNone\n */\nint\nvalidate_running_user(char *exename)\n{\n\tchar 
buf[128];\n\tif (pbs_conf.pbs_daemon_service_user) {\n\t\tstruct passwd *user = getpwnam(pbs_conf.pbs_daemon_service_user);\n\t\tif (user == NULL) {\n\t\t\tsnprintf(buf, sizeof(buf), \"%s: PBS_DAEMON_SERVICE_USER [%s] does not exist\\n\", exename, pbs_conf.pbs_daemon_service_user);\n\t\t\tperror(buf);\n\t\t\treturn 0;\n\t\t}\n\n\t\tint rc = setgid(user->pw_gid);\n\t\tif (rc != 0) {\n\t\t\tsnprintf(buf, sizeof(buf), \"%s: Can't change group to PBS_DAEMON_SERVICE_USER's group [%d], setgid() failed.\", exename, user->pw_gid);\n\t\t\tperror(buf);\n\t\t\treturn 0;\n\t\t}\n\n\t\tif (geteuid() == 0) {\n\t\t\tint rc = setuid(user->pw_uid);\n\t\t\tif (rc != 0) {\n\t\t\t\tsnprintf(buf, sizeof(buf), \"%s: Can't change to PBS_DAEMON_SERVICE_USER [%s], setuid() failed.\", exename, pbs_conf.pbs_daemon_service_user);\n\t\t\t\tperror(buf);\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t\tpbs_strncpy(pbs_current_user, pbs_conf.pbs_daemon_service_user, PBS_MAXUSER);\n\t\t}\n\n\t\tif (user->pw_uid != getuid()) {\n\t\t\tsnprintf(buf, sizeof(buf), \"%s: Must be run by PBS_DAEMON_SERVICE_USER [%s]\\n\", exename, pbs_conf.pbs_daemon_service_user);\n\t\t\tperror(buf);\n\t\t\treturn 0;\n\t\t}\n\t} else if ((geteuid() != 0) || getuid() != 0) {\n\t\tsnprintf(buf, sizeof(buf), \"%s: Must be run by PBS_DAEMON_SERVICE_USER if set or root if not set\\n\", exename);\n\t\tperror(buf);\n\t\treturn 0;\n\t}\n\n\treturn 1;\n}\n"
  },
  {
    "path": "src/scheduler/fifo.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _FIFO_H\n#define _FIFO_H\n\n#include <string>\n\n#include <limits.h>\n#include \"data_types.h\"\n#include \"sched_cmds.h\"\n\n/**\n * @brief Gets the Scheduler Command sent by the Server\n *\n * @param[in]     sock - secondary connection to the server\n * @param[in,out] cmd  - pointer to sched cmd to be filled with received cmd\n *\n * @return\tint\n * @retval\t0\t: for EOF\n * @retval\t+1\t: for success\n * @retval\t-1\t: for error\n */\nint get_sched_cmd(int sock, sched_cmd *cmd);\n\n/**\n * @brief This is non-blocking version of get_sched_cmd()\n *\n * @param[in]     sock - secondary connection to the server\n * @param[in,out] cmd  - pointer to sched cmd to be filled with received cmd\n *\n * @return\tint\n * @retval\t0\tno super high priority command\n * @retval\t+1\tfor success\n * @retval\t-1\tfor error\n * @retval\t-2\tfor EOF\n *\n * @note this function uses different return code (-2) for EOF than get_sched_cmd() (-1)\n */\nint get_sched_cmd_noblk(int sock, sched_cmd *cmd);\n\n/*\n *      schedinit - initialize conf struct and parse conf files\n */\nint schedinit(int nthreads);\n\n/*\n *\tintermediate_schedule - responsible for starting/restarting scheduling\n *\t\t\t\tcycle\n */\n\nint intermediate_schedule(int sd, const sched_cmd *cmd);\n\n/*\n *      scheduling_cycle - the controling function of the scheduling cycle\n 
*/\n\nint scheduling_cycle(int sd, const sched_cmd *cmd);\n\n/*\n *\tinit_scheduling_cycle - run things that need to be set up every\n *\t\t\t\tscheduling cycle\n *\tNOTE: failure of this function will cause schedule() to exit\n */\nint init_scheduling_cycle(status *policy, int pbs_sd, server_info *sinfo);\n\n/*\n *\n *      next_job - find the next job to be run by the scheduler\n *\n *        policy - policy info.\n *        sinfo - the server the jobs are on\n *        flag - whether or not to initialize and sort jobs.\n *\n *      returns the next job to run or NULL when there are no more jobs\n *              to run, or on error\n *\n */\n\nresource_resv *next_job(status *policy, server_info *sinfo, int flag);\n\n/*\n *      find_runnable_resresv_ind - find the index of the next runnable job/reservation in an array\n *  \t\tJobs are runnable if:\n *\t   \tin state 'Q'\n *\t\tsuspended by the scheduler\n *\t\tis job array in state 'B' and there is a queued subjob\n *\n *\t\tReservations are runnable if they are in state RESV_CONFIRMED\n */\nint find_runnable_resresv_ind(resource_resv **resresv_arr, int start_index);\n\n/*\n *\tfind_non_normal_job_ind - find the index of the next runnable express or preempted job\n */\nint find_non_normal_job_ind(resource_resv **jobs, int start_index);\n\n/*\n *\n *      sim_run_update_resresv - simulate the running of a job\n */\nbool sim_run_update_resresv(status *policy, resource_resv *resresv, std::vector<nspec *> &ns_arr, unsigned int flags);\nbool sim_run_update_resresv(status *policy, resource_resv *resresv, unsigned int flags);\n\n/*\n *\n *\trun_update_job - run a resource_resv (job or reservation) and\n *\t\t\t\tupdate the local cache if it was successfully\n *\t\t\t\trun.  
Currently we only simulate the running\n *\t\t\t\tof reservations.\n *\n *\t  pbs_sd - connection descriptor to pbs_server or\n *\t\t\t  SIMULATE_SD if we're simulating\n *\t  sinfo  - server job is on\n *\t  qinfo  - queue job resides in or NULL if reservation\n *\t  rresv  - the job/reservation to run\n *\t  flags  - flags to modify procedure\n *\t\tRURR_ADD_END_EVENT - add an end event to calendar for this job\n *\n *\treturn true for success\n *\treturn false for failure or error (see pbs_errno for more info)\n *\n */\nbool run_update_job(status *policy, int pbs_sd, server_info *sinfo, queue_info *qinfo,\n\t\t    resource_resv *resresv, std::vector<nspec *> &nspec_arr, unsigned int flags, schd_error *err);\n\nbool\nrun_update_job(status *policy, int pbs_sd, server_info *sinfo, queue_info *qinfo,\n\t       resource_resv *rr, unsigned int flags, schd_error *err);\n\n/*\n *\tupdate_job_can_not_run - do post job 'can't run' processing\n *\t\t\t\t mark it 'can_not_run'\n *\t\t\t\t update the job comment and log the reason why\n *\t\t\t\t take care of deleting a 'can_never_run' job\n */\nint update_job_can_not_run(int pbs_sd, resource_resv *job, schd_error *err);\n\n/*\n *\tend_cycle_tasks - stuff which needs to happen at the end of a cycle\n */\nvoid end_cycle_tasks(server_info *sinfo);\n\n/*\n *\tadd_job_to_calendar - find the topmost job and init all the\n *\t\tcorrect variables in sinfo to correctly backfill around it\n */\nint add_job_to_calendar(int pbs_sd, status *policy, server_info *sinfo, resource_resv *topjob, int use_buckets);\n\n/*\n * \trun_job - handle the running of a pbs job.  
If it's a peer job\n *\t       first move it to the local server and then run it.\n *\t       If it's a local job, just run it.\n */\nint run_job(int pbs_sd, resource_resv *rjob, char *execvnode, schd_error *err);\n\n/*\n *\tshould_backfill_with_job - should we call add_job_to_calendar() with job\n *\treturns 1: we should backfill, 0: we should not\n */\nint should_backfill_with_job(status *policy, server_info *sinfo, resource_resv *resresv, int num_topjobs);\n\n/*\n *\n *\tupdate_cycle_status - update global status structure which holds\n *\t\t\t      status information used by the scheduler which\n *\t\t\t      can change from cycle to cycle\n *\n *\t  policy - status structure to update\n *\t  current_time - current time or 0 to call time()\n *\n *\treturn nothing\n *\n */\nvoid update_cycle_status(status &policy, time_t current_time);\n\n/*\n *\n *\tthe main scheduler loop\n *\t\t\t  Loop until njob = next_job() returns NULL\n *\t\t\t   if njob can run now, run it\n *\t\t\t   if not, attempt preemption\n *\t\t\t\tif successful, run njob\n *\t\t\t   njob can't run:\n *\t\t\t     if we can backfill\n *\t\t\t\tadd job to calendar\n *\t\t\t     deal with normal job can't run stuff\n *\n */\nint main_sched_loop(status *policy, int sd, server_info *sinfo, schd_error **rerr);\n\n/*\n *\n *\tscheduler_simulation_task - offline simulation task to calculate\n *\t\t       estimated information for all jobs in the system.\n *\n *\t  pbs_sd - connection descriptor to pbs server\n *\n *\treturn 1 on success or 0 on error\n */\nint scheduler_simulation_task(int pbs_sd, int debug);\n\nint set_validate_sched_attrs(int);\n\nint validate_running_user(char *exename);\n\nint send_run_job(int virtual_sd, int has_runjob_hook, const std::string &jobid, char *execvnode);\n\nstruct batch_status *send_statsched(int virtual_fd, struct attrl *attrib, char *extend);\n\n#endif /* _FIFO_H */\n"
  },
  {
    "path": "src/scheduler/get_4byte.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * contains functions related to receiving commands sent by the Server\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include <stdlib.h>\n#include \"dis.h\"\n#include \"sched_cmds.h\"\n#include \"data_types.h\"\n#include \"fifo.h\"\n\n#include <sys/types.h>\n#include <sys/time.h>\n#include <unistd.h>\n\n#if defined(FD_SET_IN_SYS_SELECT_H)\n#include <sys/select.h>\n#endif\n\n/**\n * @brief Gets the Scheduler Command sent by the Server\n *\n * @param[in]     sock - secondary connection to the server\n * @param[in,out] cmd  - pointer to sched cmd to be filled with received cmd\n *\n * @return\tint\n * @retval\t0\t: for EOF\n * @retval\t+1\t: for success\n * @retval\t-1\t: for error\n */\nint\nget_sched_cmd(int sock, sched_cmd *cmd)\n{\n\tint i;\n\tint rc = 0;\n\tchar *jobid = NULL;\n\n\ti = disrsi(sock, &rc);\n\tif (rc != 0)\n\t\tgoto err;\n\tif (i == SCH_SCHEDULE_AJOB) {\n\t\tjobid = disrst(sock, &rc);\n\t\tif (rc != 0)\n\t\t\tgoto err;\n\t}\n\n\tcmd->cmd = i;\n\tcmd->jid = jobid;\n\treturn 1;\n\nerr:\n\tif (rc == DIS_EOF)\n\t\treturn 0;\n\telse\n\t\treturn -1;\n}\n\n/**\n * @brief This is the non-blocking version of get_sched_cmd()\n *\n * @param[in]     sock - secondary connection to the server\n * @param[in,out] cmd  - pointer to sched cmd to be filled with received cmd\n *\n * @return\tint\n * @retval\t0\tno 
command to read\n * @retval\t+1\tfor success\n * @retval\t-1\tfor error\n * @retval\t-2\tfor EOF\n *\n * @note this function uses a different return code (-2) for EOF than get_sched_cmd() (which returns 0)\n */\nint\nget_sched_cmd_noblk(int sock, sched_cmd *cmd)\n{\n\tstruct timeval timeout;\n\tfd_set fdset;\n\ttimeout.tv_usec = 0;\n\ttimeout.tv_sec = 0;\n\n\tFD_ZERO(&fdset);\n\tFD_SET(sock, &fdset);\n\n\tif (select(FD_SETSIZE, &fdset, NULL, NULL, &timeout) != -1 && FD_ISSET(sock, &fdset)) {\n\t\tint rc = get_sched_cmd(sock, cmd);\n\t\tif (rc == 0)\n\t\t\treturn -2;\n\t\treturn rc;\n\t}\n\treturn 0;\n}\n"
  },
  {
    "path": "src/scheduler/globals.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <pthread.h>\n#include <limits.h>\n\n#include \"globals.h\"\n#include \"constant.h\"\n#include \"sort.h\"\n#include \"config.h\"\n#include \"data_types.h\"\n#include \"queue.h\"\n\n/**\n * @file    globals.cpp\n *\n * @brief\n *\tsort_convert[] - holds information about all the different ways you\n *\t\t\t can sort the jobs\n * @par\n *\tFormat: { config_name, res_name, order }\n * @par\n *\tconfig_name  : the name which appears in the scheduling policy config\n *\t\t         file (sched_config)\n *\tres_name     : the resource (or special sort key) to sort on\n *\torder        : the sort order (ASC or DESC)\n *\n */\n\nconst struct sort_conv sort_convert[] =\n\t{\n\t\t{\"shortest_job_first\", \"cput\", ASC},\n\t\t{\"longest_job_first\", \"cput\", DESC},\n\t\t{\"smallest_memory_first\", \"mem\", ASC},\n\t\t{\"largest_memory_first\", \"mem\", DESC},\n\t\t{\"high_priority_first\", SORT_PRIORITY, DESC},\n\t\t{\"low_priority_first\", SORT_PRIORITY, ASC},\n\t\t{\"large_walltime_first\", \"walltime\", DESC},\n\t\t{\"short_walltime_first\", \"walltime\", ASC},\n\t\t{\"fair_share\", SORT_FAIR_SHARE, ASC},\n\t\t{\"preempt_priority\", SORT_PREEMPT, DESC},\n\t\t{NULL, NULL, NO_SORT_ORDER}};\n\n/*\n * \tsmp_cluster_info - used to convert parsed values into 
enum\n */\nconst struct enum_conv smp_cluster_info[] =\n\t{\n\t\t{SMP_NODE_PACK, \"pack\"},\n\t\t{SMP_ROUND_ROBIN, \"round_robin\"},\n\t\t{HIGH_SMP_DIST, \"\"}};\n\n/*\n *\tpreempt_prio_info - used to convert parsed values into enum values\n *\t\t\t   for preemption priority levels\n *\n */\nconst struct enum_conv preempt_prio_info[] =\n\t{\n\t\t{PREEMPT_NORMAL, \"normal_jobs\"},\n\t\t{PREEMPT_OVER_FS_LIMIT, \"fairshare\"},\n\t\t{PREEMPT_OVER_QUEUE_LIMIT, \"queue_softlimits\"},\n\t\t{PREEMPT_OVER_SERVER_LIMIT, \"server_softlimits\"},\n\t\t{PREEMPT_EXPRESS, \"express_queue\"},\n\t\t{PREEMPT_ERR, \"\"}, /* no corresponding config file value */\n\t\t{PREEMPT_HIGH, \"\"}};\n\n/* Well known resources: If these aren't queried, we return an error.\n * Any resource you want to directly index into allres should be in this list\n */\nconst std::vector<std::string> well_known_res{\n\t\"cput\",\n\t\"mem\",\n\t\"walltime\",\n\t\"soft_walltime\",\n\t\"ncpus\",\n\t\"arch\",\n\t\"host\",\n\t\"vnode\",\n\t\"aoe\",\n\t\"eoe\",\n\t\"min_walltime\",\n\t\"max_walltime\",\n\t\"preempt_targets\",\n};\n\nstruct config conf;\nstruct status cstat;\n\n/* to make references happy */\nint got_sigpipe;\n\n/* Each index of the array is a sched command. 
Store 1 as a value to indicate that we received a command */\nint sched_cmds[SCH_CMD_HIGH];\n\n/* This list stores SCH_SCHEDULE_AJOB commands */\nsched_cmd *qrun_list;\nint qrun_list_size;\n\nvoid *poll_context = NULL;\n\n/* Stuff needed for multi-threading */\npthread_mutex_t general_lock;\npthread_mutex_t work_lock;\npthread_mutex_t result_lock;\npthread_cond_t work_cond;\npthread_cond_t result_cond;\nds_queue *work_queue = NULL;\nds_queue *result_queue = NULL;\npthread_t *threads = NULL;\nint threads_die = 0;\nint num_threads = 0;\npthread_key_t th_id_key;\npthread_once_t key_once = PTHREAD_ONCE_INIT;\n\n/* resource definitions from the server */\n\n/* all resources */\nstd::unordered_map<std::string, resdef *> allres;\n/* consumable resources */\nstd::unordered_set<resdef *> consres;\n/* boolean resources*/\nstd::unordered_set<resdef *> boolres;\n\n/* AOE name used to compare nodes, free when exit cycle */\nchar *cmp_aoename = NULL;\n\nconst char *sc_name = NULL;\nchar *logfile = NULL;\n\nunsigned int preempt_normal; /* preempt priority of normal_jobs */\n\nchar path_log[_POSIX_PATH_MAX];\nint dflt_sched = 0;\n\nstruct schedattrs sc_attrs;\n\ntime_t last_attr_updates = 0;\n\nint send_job_attr_updates = 1;\n\n/* primary socket descriptor to the server pool */\nint clust_primary_sock = -1;\n\n/* secondary socket descriptor to the server pool */\nint clust_secondary_sock = -1;\n\n/* a list of running jobs from the last scheduling cycle */\nstd::vector<prev_job_info> last_running;\n\n/* fairshare tree */\nfairshare_head *fstree;\n"
  },
  {
    "path": "src/scheduler/globals.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _GLOBALS_H\n#define _GLOBALS_H\n#include <pthread.h>\n#include <limits.h>\n\n#include \"data_types.h\"\n#include \"queue.h\"\n#include \"sched_cmds.h\"\n\nextern void *poll_context;\n\n/* Each index of the array is a sched command. 
Store 1 as a value to indicate that we received a command */\nextern int sched_cmds[SCH_CMD_HIGH];\n\n/* This list stores SCH_SCHEDULE_AJOB commands */\nextern sched_cmd *qrun_list;\nextern int qrun_list_size;\n\n/* resources to check */\nextern const struct rescheck res_to_check[];\n\n/* information about sorting */\nextern const struct sort_conv sort_convert[];\n\n/* used to convert string into enum in parsing */\nextern const struct enum_conv smp_cluster_info[];\nextern const struct enum_conv preempt_prio_info[];\n\n/* info to get from mom */\nextern const char *res_to_get[];\n\n/* programs to run to replace specific resources_assigned values */\nextern const char *res_assn[];\n\nextern struct config conf;\nextern struct status cstat;\n\nextern const int num_resget;\n\n/* Variables from pbs_sched code */\nextern int got_sigpipe;\n\nextern const std::vector<std::string> well_known_res;\n/* Stuff needed for multi-threading */\nextern pthread_mutex_t general_lock;\nextern pthread_mutex_t work_lock;\nextern pthread_cond_t work_cond;\nextern pthread_mutex_t result_lock;\nextern pthread_cond_t result_cond;\nextern ds_queue *work_queue;\nextern ds_queue *result_queue;\nextern pthread_t *threads;\nextern int threads_die;\nextern int num_threads;\nextern pthread_key_t th_id_key;\nextern pthread_once_t key_once;\n\nextern std::unordered_map<std::string, resdef *> allres;\nextern std::unordered_set<resdef *> consres;\nextern std::unordered_set<resdef *> boolres;\n\nextern const char *sc_name;\nextern char *logfile;\n\nextern unsigned int preempt_normal; /* preempt priority of normal_jobs */\n\nextern char path_log[_POSIX_PATH_MAX];\nextern int dflt_sched;\n\nextern struct schedattrs sc_attrs;\n\nextern time_t last_attr_updates; /* timestamp of the last time attr updates were sent */\n\nextern int send_job_attr_updates;\n\nextern int clust_primary_sock;\n\nextern int clust_secondary_sock;\n\n/* a list of running jobs from the last scheduling cycle */\nextern 
std::vector<prev_job_info> last_running;\n\n/**\n * @brief\n * It is used as a placeholder to store an AOE name. This AOE name will be\n * used by the sorting routine to compare with a vnode's current aoe.\n */\nextern char *cmp_aoename;\n\nextern fairshare_head *fstree;\n\n#endif /* _GLOBALS_H */\n"
  },
  {
    "path": "src/scheduler/job_info.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    job_info.c\n *\n * @brief\n * \t\tjob_info.c - This file contains functions related to job_info structure.\n *\n * Functions included are:\n * \tquery_jobs()\n * \tquery_job()\n * \tnew_job_info()\n * \tfree_job_info()\n * \tset_job_state()\n * \tupdate_job_attr()\n * \tsend_job_updates()\n * \tsend_attr_updates()\n * \tunset_job_attr()\n * \tupdate_job_comment()\n * \tupdate_jobs_cant_run()\n * \ttranslate_fail_code()\n * \tdup_job_info()\n * \tpreempt_job_set_filter()\n * \tget_preemption_order()\n * \tpreempt_job()\n * \tfind_and_preempt_jobs()\n * \tfind_jobs_to_preempt()\n * \tselect_index_to_preempt()\n * \tpreempt_level()\n * \tset_preempt_prio()\n * \tcreate_subjob_name()\n * \tcreate_subjob_from_array()\n * \tupdate_array_on_run()\n * \tis_job_array()\n * \tmodify_job_array_for_qrun()\n * \tqueue_subjob()\n * \tformula_evaluate()\n * \tmake_eligible()\n * \tmake_ineligible()\n * \tupdate_accruetype()\n * \tgetaoename()\n * \tupdate_estimated_attrs()\n * \tcheck_preempt_targets_for_none()\n * \tis_finished_job()\n * \tpreemption_similarity()\n * \tgeteoename()\n *\n */\n\n#include <pbs_config.h>\n\n#ifdef PYTHON\n#include <pbs_python_private.h>\n#include <Python.h>\n#endif\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <errno.h>\n#include <time.h>\n#include <unistd.h>\n#include 
<sys/types.h>\n#include <math.h>\n#include <pbs_ifl.h>\n#include <log.h>\n#include <libutil.h>\n#include <pbs_share.h>\n#include <pbs_internal.h>\n#include <pbs_error.h>\n#include \"queue_info.h\"\n#include \"job_info.h\"\n#include \"resv_info.h\"\n#include \"constant.h\"\n#include \"misc.h\"\n#include \"config.h\"\n#include \"globals.h\"\n#include \"fairshare.h\"\n#include \"node_info.h\"\n#include \"check.h\"\n#include \"sort.h\"\n#include \"fifo.h\"\n#include \"range.h\"\n#include \"resource_resv.h\"\n#include \"limits_if.h\"\n#include \"simulate.h\"\n#include \"resource.h\"\n#include \"server_info.h\"\n#include \"attribute.h\"\n#include \"multi_threading.h\"\n#include \"libpbs.h\"\n\n#ifdef NAS\n#include \"site_code.h\"\n#include \"site_queue.h\"\n#endif\n\nextern char *pbse_to_txt(int err);\n\n/**\n *\tThis table contains job comment and information messages that correspond\n *\tto the sched_error_code enums in \"constant.h\".  The order of the strings in\n *\tthe table must match the numeric order of the sched_error_code enum values.\n *\tThe length of the resultant strings (including any arguments inserted\n *\tvia % formatting directives by translate_fail_code(), q.v.) must not\n *\texceed the dimensions of the schd_error elements.  
See data_types.h.\n */\nstruct fc_translation_table {\n\tconst char *fc_comment; /**< job comment string */\n\tconst char *fc_info;\t/**< job error string */\n};\nstatic struct fc_translation_table fctt[] = {\n\t{\"\",\n\t \"\"},\n\t{/* SUCCESS */\n\t \"\",\n\t \"\"},\n\t{/* SCHD_ERROR */\n\t \"Internal Scheduling Error\",\n\t \"A scheduling error has occurred\"},\n\t{/* NOT_QUEUED */\n\t \"Job not in queued state\",\n\t \"Job is not in queued state\"},\n\t{/* QUEUE_NOT_STARTED */\n\t \"Queue not started.\",\n\t \"Queue not started\"},\n\t{/* QUEUE_NOT_EXEC */\n\t \"Queue not an execution queue.\",\n\t \"Queue not an execution queue\"},\n\t{/* QUEUE_JOB_LIMIT_REACHED */\n\t \"Queue %s job limit has been reached.\",\n\t \"Queue %s job limit reached\"},\n\t{/* SERVER_JOB_LIMIT_REACHED */\n\t \"Server job limit has been reached.\",\n\t \"Server job limit reached\"},\n\t{/* SERVER_USER_LIMIT_REACHED */\n\t \"User has reached server running job limit.\",\n\t \"Server per-user job limit reached\"},\n\t{/* QUEUE_USER_LIMIT_REACHED */\n\t \"User has reached queue %s running job limit.\",\n\t \"Queue %s per-user job limit reached\"},\n\t{/* SERVER_GROUP_LIMIT_REACHED */\n\t \"Group has reached server running limit.\",\n\t \"Server per-group limit reached\"},\n\t{/* QUEUE_GROUP_LIMIT_REACHED */\n\t \"Group has reached queue %s running limit.\",\n\t \"Queue %s per-group job limit reached\"},\n\t{/* DED_TIME */\n\t \"Dedicated time conflict\",\n\t \"Dedicated Time\"},\n\t{/* CROSS_DED_TIME_BOUNDRY */\n\t \"Job would cross dedicated time boundary\",\n\t \"Job would not finish before dedicated time\"},\n\t{/* NO_AVAILABLE_NODE */\n\t \"\",\n\t \"\"},\n\t{/* NOT_ENOUGH_NODES_AVAIL */\n\t \"Not enough of the right type of nodes are available\",\n\t \"Not enough of the right type of nodes available\"},\n\t{/* BACKFILL_CONFLICT */\n\t \"Job would interfere with a top job\",\n\t \"Job would interfere with a top job\"},\n\t{/* RESERVATION_INTERFERENCE */\n\t \"Job would interfere with 
a confirmed reservation\",\n\t \"Job would interfere with a reservation\"},\n\t{/* PRIME_ONLY */\n\t \"Job will run in primetime only\",\n\t \"Job only runs in primetime\"},\n\t{/* NONPRIME_ONLY */\n\t \"Job will run in nonprimetime only\",\n\t \"Job only runs in nonprimetime\"},\n\t{/* CROSS_PRIME_BOUNDARY */\n\t \"Job will cross into %s\",\n\t \"Job would cross into %s\"},\n\t{/* NODE_NONEXISTENT */\n\t \"Specified %s does not exist: %s\",\n\t \"Specified %s does not exist: %s\"},\n\t{/* NO_NODE_RESOURCES */\n\t \"No available resources on nodes\",\n\t \"No available resources on nodes\"},\n\t{/* CANT_PREEMPT_ENOUGH_WORK */\n\t \"Can't preempt enough work to run job\",\n\t \"Can't preempt enough work to run job\"},\n\t{/* QUEUE_USER_RES_LIMIT_REACHED */\n\t \"Queue %s per-user limit reached on resource %s\",\n\t \"Queue %s per-user limit reached on resource %s\"},\n\t{/* SERVER_USER_RES_LIMIT_REACHED */\n\t \"Server per-user limit reached on resource %s\",\n\t \"Server per-user limit reached on resource %s\"},\n\t{/* QUEUE_GROUP_RES_LIMIT_REACHED */\n\t \"Queue %s per-group limit reached on resource %s\",\n\t \"Queue %s per-group limit reached on resource %s\"},\n\t{/* SERVER_GROUP_RES_LIMIT_REACHED */\n\t \"Server per-group limit reached on resource %s\",\n\t \"Server per-group limit reached on resource %s\"},\n\t{/* NO_FAIRSHARES */\n\t \"Job has zero shares for fairshare\",\n\t \"Job has zero shares for fairshare\"},\n\t{/* INVALID_NODE_STATE */\n\t \"Node is in an ineligible state: %s\",\n#ifdef NAS /* localmod 031 */\n\t \"Node is in an ineligible state: %s: %s\"\n#else\n\t \"Node is in an ineligible state: %s\"\n#endif /* localmod 031 */\n\t},\n\t{/* INVALID_NODE_TYPE */\n\t \"Node is of an ineligible type: %s\",\n\t \"Node is of an ineligible type: %s\"},\n\t{  /* NODE_NOT_EXCL */\n#ifdef NAS /* localmod 031 */\n\t \"Nodes not available\",\n#else\n\t \"%s is requesting an exclusive node and node is in use\",\n#endif /* localmod 031 */\n\t \"%s is 
requesting an exclusive node and node is in use\"},\n\t{/* NODE_JOB_LIMIT_REACHED */\n\t \"Node has reached job run limit\",\n\t \"Node has reached job run limit\"},\n\t{/* NODE_USER_LIMIT_REACHED */\n\t \"Node has reached user run limit\",\n\t \"Node has reached user run limit\"},\n\t{/* NODE_GROUP_LIMIT_REACHED */\n\t \"Node has reached group run limit\",\n\t \"Node has reached group run limit\"},\n\t{/* NODE_NO_MULT_JOBS */\n\t \"Node can't satisfy a multi-node job\",\n\t \"Node can't satisfy a multi-node job\"},\n\t{/* NODE_UNLICENSED */\n\t \"Node has no PBS license\",\n\t \"Node has no PBS license\"},\n\t{/* UNUSED37 */\n\t \"\",\n\t \"\"},\n\t{/* NO_SMALL_CPUSETS */\n\t \"Max number of small cpusets has been reached\",\n\t \"Max number of small cpusets has been reached\"},\n\t{/* INSUFFICIENT_RESOURCE */\n\t \"Insufficient amount of resource: %s %s\",\n\t \"Insufficient amount of resource: %s %s\"},\n\t{/* RESERVATION_CONFLICT */\n\t \"Job would conflict with reservation or top job\",\n\t \"Job would conflict with reservation or top job\"},\n\t{/* NODE_PLACE_PACK */\n\t \"Node ineligible because job requested pack placement and won't fit on node\",\n\t \"Node ineligible because job requested pack placement and won't fit on node\"},\n\t{/* NODE_RESV_ENABLE */\n\t \"Node not eligible for advance reservation\",\n\t \"Node not eligible for advance reservation\"},\n\t{/* STRICT_ORDERING */\n\t \"Job would break strict sorted order\",\n\t \"Job would break strict sorted order\"},\n\t{/* MAKE_ELIGIBLE */\n\t \"\",\n\t \"\"},\n\t{/* MAKE_INELIGIBLE */\n\t \"\",\n\t \"\"},\n\t{/* INSUFFICIENT_QUEUE_RESOURCE */\n\t \"Insufficient amount of queue resource: %s %s\",\n\t \"Insufficient amount of queue resource: %s %s\"},\n\t{/* INSUFFICIENT_SERVER_RESOURCE */\n\t \"Insufficient amount of server resource: %s %s\",\n\t \"Insufficient amount of server resource: %s %s\"},\n\t{/* QUEUE_BYGROUP_JOB_LIMIT_REACHED */\n\t \"Queue %s job limit reached for group %s\",\n\t \"Queue 
%s job limit reached for group %s\"},\n\t{/* QUEUE_BYUSER_JOB_LIMIT_REACHED */\n\t \"Queue %s job limit reached for user %s\",\n\t \"Queue %s job limit reached for user %s\"},\n\t{/* SERVER_BYGROUP_JOB_LIMIT_REACHED */\n\t \"Server job limit reached for group %s\",\n\t \"Server job limit reached for group %s\"},\n\t{/* SERVER_BYUSER_JOB_LIMIT_REACHED */\n\t \"Server job limit reached for user %s\",\n\t \"Server job limit reached for user %s\"},\n\t{/* SERVER_BYGROUP_RES_LIMIT_REACHED */\n\t \"would exceed group %s's limit on resource %s in complex\",\n\t \"would exceed group %s's limit on resource %s in complex\"},\n\t{/* SERVER_BYUSER_RES_LIMIT_REACHED */\n\t \"would exceed user %s's limit on resource %s in complex\",\n\t \"would exceed user %s's limit on resource %s in complex\"},\n\t{/* QUEUE_BYGROUP_RES_LIMIT_REACHED */\n\t \"would exceed group %s's limit on resource %s in queue %s\",\n\t \"would exceed group %s's limit on resource %s in queue %s\"},\n\t{/* QUEUE_BYUSER_RES_LIMIT_REACHED */\n\t \"would exceed user %s's limit on resource %s in queue %s\",\n\t \"would exceed user %s's limit on resource %s in queue %s\"},\n\t{/* QUEUE_RESOURCE_LIMIT_REACHED */\n\t \"would exceed overall limit on resource %s in queue %s\",\n\t \"would exceed overall limit on resource %s in queue %s\"},\n\t{/* SERVER_RESOURCE_LIMIT_REACHED */\n\t \"would exceed overall limit on resource %s in complex\",\n\t \"would exceed overall limit on resource %s in complex\"},\n\t{/* PROV_DISABLE_ON_SERVER */\n\t \"Cannot provision, provisioning disabled on server\",\n\t \"Cannot provision, provisioning disabled on server\"},\n\t{/* PROV_DISABLE_ON_NODE */\n\t \"Cannot provision, provisioning disabled on vnode\",\n\t \"Cannot provision, provisioning disabled on vnode\"},\n\t{/* AOE_NOT_AVALBL */\n\t \"Cannot provision, requested AOE %s not available on vnode\",\n\t \"Cannot provision, requested AOE %s not available on vnode\"},\n\t{/* EOE_NOT_AVALBL */\n\t \"Cannot provision, requested EOE %s 
not available on vnode\",\n\t \"Cannot provision, requested EOE %s not available on vnode\"},\n\t{/* PROV_BACKFILL_CONFLICT */\n\t \"Provisioning for job would interfere with backfill job\",\n\t \"Provisioning for job would interfere with backfill job\"},\n\t{/* IS_MULTI_VNODE */\n\t \"Cannot provision, host has multiple vnodes\",\n\t \"Cannot provision, host has multiple vnodes\"},\n\t{/* PROV_RESRESV_CONFLICT */\n\t \"Provision conflict with existing job/reservation\",\n\t \"Provision conflict with existing job/reservation\"},\n\t{/* RUN_FAILURE */\n\t \"PBS Error: %s\",\n\t \"Failed to run: %s (%s)\"},\n\t{/* SET_TOO_SMALL */\n\t \"%s set %s has too few free resources\",\n\t \"%s set %s has too few free resources or is too small\"},\n\t{/* CANT_SPAN_PSET */\n\t \"can't fit in the largest placement set, and can't span psets\",\n\t \"Can't fit in the largest placement set, and can't span placement sets\"},\n\t{/* NO_FREE_NODES */\n\t \"Not enough free nodes available\",\n\t \"Not enough free nodes available\"},\n\t{/* SERVER_PROJECT_LIMIT_REACHED */\n\t \"Project has reached server running limit.\",\n\t \"Server per-project limit reached\"},\n\t{/* SERVER_PROJECT_RES_LIMIT_REACHED */\n\t \"Server per-project limit reached on resource %s\",\n\t \"Server per-project limit reached on resource %s\"},\n\t{/* SERVER_BYPROJECT_RES_LIMIT_REACHED */\n\t \"would exceed project %s's limit on resource %s in complex\",\n\t \"would exceed project %s's limit on resource %s in complex\"},\n\t{/* SERVER_BYPROJECT_JOB_LIMIT_REACHED */\n\t \"Server job limit reached for project %s\",\n\t \"Server job limit reached for project %s\"},\n\t{/* QUEUE_PROJECT_LIMIT_REACHED */\n\t \"Project has reached queue %s's running limit.\",\n\t \"Queue %s per-project job limit reached\"},\n\t{/* QUEUE_PROJECT_RES_LIMIT_REACHED */\n\t \"Queue %s per-project limit reached on resource %s\",\n\t \"Queue %s per-project limit reached on resource %s\"},\n\t{/* QUEUE_BYPROJECT_RES_LIMIT_REACHED */\n\t 
\"would exceed project %s's limit on resource %s in queue %s\",\n\t \"would exceed project %s's limit on resource %s in queue %s\"},\n\t{/* QUEUE_BYPROJECT_JOB_LIMIT_REACHED */\n\t \"Queue %s job limit reached for project %s\",\n\t \"Queue %s job limit reached for project %s\"},\n\t{/* NO_TOTAL_NODES */\n\t \"Not enough total nodes available\",\n\t \"Not enough total nodes available\"},\n\t{/* INVALID_RESRESV */\n\t \"Invalid Job/Resv %s\",\n\t \"Invalid Job/Resv %s\"},\n\t{\n\t\t/* JOB_UNDER_THRESHOLD */\n\t\t\"Job is under job_sort_formula threshold value\",\n\t\t\"Job is under job_sort_formula threshold value\",\n\t},\n\t{\n\t\t/* MAX_RUN_SUBJOBS */\n\t\t\"Number of concurrent running subjobs limit reached\",\n\t\t\"Number of concurrent running subjobs limit reached\",\n#ifdef NAS\n\t},\n\t/* localmod 034 */\n\t{\n\t\t/* GROUP_CPU_SHARE */\n\t\t\"Job would exceed mission CPU share\",\n\t\t\"Job would exceed mission CPU share\",\n\t},\n\t{\n\t\t/* GROUP_CPU_INSUFFICIENT */\n\t\t\"Job exceeds total mission share\",\n\t\t\"Job exceeds total mission share\",\n\t},\n\t/* localmod 998 */\n\t{\n\t\t/* RESOURCES_INSUFFICIENT */\n\t\t\"Too few free resources\",\n\t\t\"Too few free resources\",\n#endif\n\t},\n};\n\n#define ERR2COMMENT(code) (fctt[(code) -RET_BASE].fc_comment)\n#define ERR2INFO(code) (fctt[(code) -RET_BASE].fc_info)\n\n/**\n * @brief\tpthread routine for querying a chunk of jobs\n *\n * @param[in,out]\tdata - th_data_query_jinfo object for the querying\n *\n * @return void\n */\nvoid\nquery_jobs_chunk(th_data_query_jinfo *data)\n{\n\tstruct batch_status *jobs;\n\tresource_resv **resresv_arr;\n\tserver_info *sinfo;\n\tqueue_info *qinfo;\n\tint sidx;\n\tint eidx;\n\tint num_jobs_chunk;\n\tint i;\n\tint jidx;\n\tstruct batch_status *cur_job;\n\tschd_error *err;\n\tint pbs_sd;\n\tstatus *policy;\n\n\tjobs = data->jobs;\n\tsinfo = data->sinfo;\n\tqinfo = data->qinfo;\n\tpbs_sd = data->pbs_sd;\n\tpolicy = data->policy;\n\tsidx = data->sidx;\n\teidx = 
data->eidx;\n\tnum_jobs_chunk = eidx - sidx + 1;\n\n\terr = new_schd_error();\n\tif (err == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\tdata->error = 1;\n\t\treturn;\n\t}\n\n\tresresv_arr = static_cast<resource_resv **>(malloc(sizeof(resource_resv *) * (num_jobs_chunk + 1)));\n\tif (resresv_arr == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\tdata->error = 1;\n\t\treturn;\n\t}\n\tresresv_arr[0] = NULL;\n\n\t/* Move to the linked list item corresponding to the 'start' index */\n\tfor (cur_job = jobs, i = 0; i < sidx && cur_job != NULL; cur_job = cur_job->next, i++)\n\t\t;\n\n\tfor (i = sidx, jidx = 0; i <= eidx && cur_job != NULL; cur_job = cur_job->next, i++) {\n\t\tresource_resv *resresv;\n\n\t\tif ((resresv = query_job(pbs_sd, cur_job, sinfo, qinfo, err)) == NULL) {\n\t\t\tdata->error = 1;\n\t\t\tfree_schd_error(err);\n\t\t\tfree_resource_resv_array(resresv_arr);\n\t\t\treturn;\n\t\t}\n\n\t\t/* do a validity check to see if the job is sane.  If we're peering and\n\t\t * we're not a manager at the remote host, we won't have necessary attribs\n\t\t * like euser and egroup\n\t\t */\n\t\tif (resresv->is_invalid || !is_resource_resv_valid(resresv, err)) {\n\t\t\tschdlogerr(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG, resresv->name,\n\t\t\t\t   \"Job is invalid - ignoring for this cycle\", err);\n\t\t\t/* The job is invalid, so we skip it and continue adding valid\n\t\t\t * jobs to our array.  
We're\n\t\t\t * freeing what we allocated and ignoring this job completely.\n\t\t\t */\n\t\t\tdelete resresv;\n\t\t\tcontinue;\n\t\t}\n\n\t\t/* Make sure scheduler does not process a subjob in undesirable state*/\n\t\tif (resresv->job->is_subjob && !resresv->job->is_running && !resresv->job->is_exiting &&\n\t\t    !resresv->job->is_suspended && !resresv->job->is_provisioning) {\n\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_RESV, LOG_DEBUG,\n\t\t\t\t  resresv->name, \"Subjob found in undesirable state, ignoring this job\");\n\t\t\tdelete resresv;\n\t\t\tcontinue;\n\t\t}\n\n\t\t/* if the job's fairshare entity has no percentage of the machine,\n\t\t * the job can not run if enforce_no_shares is set\n\t\t */\n\t\tif (policy->fair_share && conf.enforce_no_shares) {\n\t\t\tif (resresv->job->ginfo != NULL &&\n\t\t\t    resresv->job->ginfo->tree_percentage == 0) {\n\t\t\t\tset_schd_error_codes(err, NEVER_RUN, NO_FAIRSHARES);\n\t\t\t}\n\t\t}\n\n#ifdef NAS /* localmod 034 */\n\t\tsite_set_job_share(resresv);\n#endif /* localmod 034 */\n\n\t\t/* Don't consider a job not in a queued state as runnable */\n\t\tif (!in_runnable_state(resresv))\n\t\t\tresresv->can_not_run = 1;\n\n#ifdef RESC_SPEC\n\t\t/* search_for_rescspec() sets jinfo->rspec */\n\t\tif (!search_for_rescspec(resresv, qinfo->server->nodes))\n\t\t\tset_schd_error_codes(err, NOT_RUN, NO_NODE_RESOURCES);\n#endif\n\n\t\tif (err->error_code != SUCCESS) {\n\t\t\tupdate_job_can_not_run(pbs_sd, resresv, err);\n\t\t\tclear_schd_error(err);\n\t\t}\n\t\tresresv_arr[jidx++] = resresv;\n\t}\n\n\tresresv_arr[jidx] = NULL;\n\tdata->oarr = resresv_arr;\n\n\tfree_schd_error(err);\n}\n\n/**\n * @brief\tAllocates th_data_query_jinfo for multi-threading of query_jobs\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tpbs_sd\t-\tconnection to pbs_server\n * @param[in]\tjobs\t-\tbatch_status of jobs\n * @param[in]\tqinfo\t-\tqueue to get jobs from\n * @param[in]\tsidx\t-\tstart index for the jobs list for the thread\n * 
@param[in]\teidx\t-\tend index for the jobs list for the thread\n *\n * @return th_data_query_jinfo *\n * @retval a newly allocated th_data_query_jinfo object\n * @retval NULL for malloc error\n */\nstatic inline th_data_query_jinfo *\nalloc_tdata_jquery(status *policy, int pbs_sd, struct batch_status *jobs, queue_info *qinfo,\n\t\t   int sidx, int eidx)\n{\n\tth_data_query_jinfo *tdata;\n\n\ttdata = static_cast<th_data_query_jinfo *>(malloc(sizeof(th_data_query_jinfo)));\n\tif (tdata == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\ttdata->error = 0;\n\ttdata->jobs = jobs;\n\ttdata->oarr = NULL; /* Will be filled by the thread routine */\n\ttdata->sinfo = qinfo->server;\n\ttdata->qinfo = qinfo;\n\ttdata->pbs_sd = pbs_sd;\n\ttdata->policy = policy;\n\ttdata->sidx = sidx;\n\ttdata->eidx = eidx;\n\n\treturn tdata;\n}\n\n/**\n * @brief\n * \t\tcreate an array of jobs in a specified queue\n *\n * @par NOTE:\n * \t\tanything reservation related needs to happen in\n *\t\tquery_reservations().  Since it is called after us,\n *\t\treservations aren't available at this point.\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tpbs_sd\t-\tconnection to pbs_server\n * @param[in]\tqinfo\t-\tqueue to get jobs from\n * @param[in]\tpjobs   -\tpossible job array to add to\n * @param[in]\tqueue_name\t-\tthe name of the queue to query (local/remote)\n *\n * @return\tpointer to the head of a list of jobs\n * @par MT-safe: No\n */\nresource_resv **\nquery_jobs(status *policy, int pbs_sd, queue_info *qinfo, resource_resv **pjobs, const std::string &queue_name)\n{\n\t/* pbs_selstat() takes a linked list of attropl structs which tell it\n\t * what information about what jobs to return.  
We want all jobs which are\n\t * in a specified queue\n\t */\n\tstruct attropl opl = {NULL, const_cast<char *>(ATTR_q), NULL, NULL, EQ};\n\tstatic struct attropl opl2[2] = {{&opl2[1], const_cast<char *>(ATTR_state), NULL, const_cast<char *>(\"Q\"), EQ},\n\t\t\t\t\t {NULL, const_cast<char *>(ATTR_array), NULL, const_cast<char *>(\"True\"), NE}};\n\tstatic struct attrl *attrib = NULL;\n\n\t/* linked list of jobs returned from pbs_selstat() */\n\tstruct batch_status *jobs;\n\n\t/* current job in jobs linked list */\n\tstruct batch_status *cur_job;\n\n\t/* array of internal scheduler structures for jobs */\n\tresource_resv **resresv_arr;\n\n\t/* number of jobs in resresv_arr */\n\tint num_jobs = 0;\n\t/* number of jobs in pjobs */\n\tint num_prev_jobs;\n\tint num_new_jobs;\n\n\t/* for multi-threading */\n\tint jidx;\n\tth_data_query_jinfo *tdata = NULL;\n\tth_task_info *task = NULL;\n\tresource_resv ***jinfo_arrs_tasks;\n\tint tid;\n\n\tif (policy == NULL || qinfo == NULL || queue_name.empty())\n\t\treturn pjobs;\n\n\topl.value = const_cast<char *>(queue_name.c_str());\n\n\tif (qinfo->is_peer_queue)\n\t\topl.next = &opl2[0];\n\n\tif (attrib == NULL) {\n\t\tconst char *jobattrs[] = {\n\t\t\tATTR_p,\n\t\t\tATTR_qtime,\n\t\t\tATTR_qrank,\n\t\t\tATTR_etime,\n\t\t\tATTR_stime,\n\t\t\tATTR_N,\n\t\t\tATTR_state,\n\t\t\tATTR_substate,\n\t\t\tATTR_sched_preempted,\n\t\t\tATTR_comment,\n\t\t\tATTR_released,\n\t\t\tATTR_euser,\n\t\t\tATTR_egroup,\n\t\t\tATTR_project,\n\t\t\tATTR_resv_ID,\n\t\t\tATTR_altid,\n\t\t\tATTR_SchedSelect,\n\t\t\tATTR_array_id,\n\t\t\tATTR_node_set,\n\t\t\tATTR_array,\n\t\t\tATTR_array_index,\n\t\t\tATTR_topjob,\n\t\t\tATTR_topjob_ineligible,\n\t\t\tATTR_array_indices_remaining,\n\t\t\tATTR_execvnode,\n\t\t\tATTR_l,\n\t\t\tATTR_rel_list,\n\t\t\tATTR_used,\n\t\t\tATTR_accrue_type,\n\t\t\tATTR_eligible_time,\n\t\t\tATTR_estimated,\n\t\t\tATTR_c,\n\t\t\tATTR_r,\n\t\t\tATTR_depend,\n\t\t\tATTR_A,\n\t\t\tATTR_max_run_subjobs,\n\t\t\tNULL};\n\n\t\tfor (int i = 
0; jobattrs[i] != NULL; i++) {\n\t\t\tstruct attrl *temp_attrl;\n\n\t\t\ttemp_attrl = new_attrl();\n\t\t\ttemp_attrl->name = strdup(jobattrs[i]);\n\t\t\ttemp_attrl->next = attrib;\n\t\t\ttemp_attrl->value = const_cast<char *>(\"\");\n\t\t\tattrib = temp_attrl;\n\t\t}\n\t}\n\n\t/* get jobs from PBS server */\n\tif ((jobs = send_selstat(pbs_sd, &opl, attrib, const_cast<char *>(\"S\"))) == NULL) {\n\t\tif (pbs_errno > 0) {\n\t\t\tconst char *errmsg = pbs_geterrmsg(pbs_sd);\n\t\t\tif (errmsg == NULL)\n\t\t\t\terrmsg = \"\";\n\t\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_JOB, LOG_NOTICE, \"job_info\",\n\t\t\t\t   \"pbs_selstat failed: %s (%d)\", errmsg, pbs_errno);\n\t\t}\n\t\treturn pjobs;\n\t}\n\n\t/* count the number of new jobs */\n\tcur_job = jobs;\n\twhile (cur_job != NULL) {\n\t\tnum_jobs++;\n\t\tcur_job = cur_job->next;\n\t}\n\tnum_new_jobs = num_jobs;\n\n\t/* if there are previous jobs, count those too */\n\tnum_prev_jobs = count_array(pjobs);\n\tnum_jobs += num_prev_jobs;\n\n\t/* allocate enough space for all the jobs and the NULL sentinel */\n\tresresv_arr = static_cast<resource_resv **>(realloc(pjobs, sizeof(resource_resv *) * (num_jobs + 1)));\n\n\tif (resresv_arr == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\tpbs_statfree(jobs);\n\t\treturn NULL;\n\t}\n\tresresv_arr[num_prev_jobs] = NULL;\n\n\ttid = *((int *) pthread_getspecific(th_id_key));\n\tif (tid != 0 || num_threads <= 1) {\n\t\t/* don't use multi-threading if I am a worker thread or num_threads is 1 */\n\t\ttdata = alloc_tdata_jquery(policy, pbs_sd, jobs, qinfo, 0, num_new_jobs - 1);\n\t\tif (tdata == NULL) {\n\t\t\tfree_resource_resv_array(resresv_arr);\n\t\t\tpbs_statfree(jobs);\n\t\t\treturn NULL;\n\t\t}\n\t\tquery_jobs_chunk(tdata);\n\n\t\tif (tdata->error || tdata->oarr == NULL) {\n\t\t\tfree_resource_resv_array(resresv_arr);\n\t\t\tpbs_statfree(jobs);\n\t\t\tfree(tdata->oarr);\n\t\t\tfree(tdata);\n\t\t\treturn NULL;\n\t\t}\n\n\t\tjidx = num_prev_jobs;\n\t\tfor (int j = 0; 
tdata->oarr[j] != NULL; j++) {\n\t\t\tresresv_arr[jidx++] = tdata->oarr[j];\n\t\t}\n\t\tfree(tdata->oarr);\n\t\tfree(tdata);\n\t\tresresv_arr[jidx] = NULL;\n\t} else {\n\t\tint chunk_size = num_new_jobs / num_threads;\n\t\tint th_err = 0;\n\t\tint num_tasks = 0;\n\n\t\tchunk_size = (chunk_size > MT_CHUNK_SIZE_MIN) ? chunk_size : MT_CHUNK_SIZE_MIN;\n\t\tchunk_size = (chunk_size < MT_CHUNK_SIZE_MAX) ? chunk_size : MT_CHUNK_SIZE_MAX;\n\t\tfor (int j = 0; num_new_jobs > 0;\n\t\t     num_tasks++, j += chunk_size, num_new_jobs -= chunk_size) {\n\t\t\ttdata = alloc_tdata_jquery(policy, pbs_sd, jobs, qinfo, j, j + chunk_size - 1);\n\t\t\tif (tdata == NULL) {\n\t\t\t\tth_err = 1;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\ttask = static_cast<th_task_info *>(malloc(sizeof(th_task_info)));\n\t\t\tif (task == NULL) {\n\t\t\t\tfree(tdata);\n\t\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\t\tth_err = 1;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\ttask->task_id = num_tasks;\n\t\t\ttask->task_type = TS_QUERY_JOB_INFO;\n\t\t\ttask->thread_data = (void *) tdata;\n\n\t\t\tpthread_mutex_lock(&work_lock);\n\t\t\tds_enqueue(work_queue, (void *) task);\n\t\t\tpthread_cond_signal(&work_cond);\n\t\t\tpthread_mutex_unlock(&work_lock);\n\t\t}\n\t\tjinfo_arrs_tasks = static_cast<resource_resv ***>(malloc(num_tasks * sizeof(resource_resv **)));\n\t\tif (jinfo_arrs_tasks == NULL) {\n\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\tth_err = 1;\n\t\t}\n\t\t/* Get results from worker threads */\n\t\tfor (int i = 0; i < num_tasks;) {\n\t\t\tpthread_mutex_lock(&result_lock);\n\t\t\twhile (ds_queue_is_empty(result_queue))\n\t\t\t\tpthread_cond_wait(&result_cond, &result_lock);\n\t\t\twhile (!ds_queue_is_empty(result_queue)) {\n\t\t\t\ttask = static_cast<th_task_info *>(ds_dequeue(result_queue));\n\t\t\t\ttdata = static_cast<th_data_query_jinfo *>(task->thread_data);\n\t\t\t\tif (tdata->error)\n\t\t\t\t\tth_err = 1;\n\t\t\t\tjinfo_arrs_tasks[task->task_id] = 
tdata->oarr;\n\t\t\t\tfree(tdata);\n\t\t\t\tfree(task);\n\t\t\t\ti++;\n\t\t\t}\n\t\t\tpthread_mutex_unlock(&result_lock);\n\t\t}\n\t\tif (th_err) {\n\t\t\tpbs_statfree(jobs);\n\t\t\tfree_resource_resv_array(resresv_arr);\n\t\t\tfree(jinfo_arrs_tasks);\n\t\t\treturn NULL;\n\t\t}\n\t\t/* Assemble job info objects from various threads into the resresv_arr */\n\t\tjidx = num_prev_jobs;\n\t\tfor (int i = 0; i < num_tasks; i++) {\n\t\t\tif (jinfo_arrs_tasks[i] != NULL) {\n\t\t\t\tfor (int j = 0; jinfo_arrs_tasks[i][j] != NULL; j++) {\n\t\t\t\t\tresresv_arr[jidx++] = jinfo_arrs_tasks[i][j];\n\t\t\t\t}\n\t\t\t\tfree(jinfo_arrs_tasks[i]);\n\t\t\t}\n\t\t}\n\t\tresresv_arr[jidx] = NULL;\n\t\tfree(jinfo_arrs_tasks);\n\t}\n\n\tpbs_statfree(jobs);\n\n\treturn resresv_arr;\n}\n\n/**\n * @brief\n *\t\tquery_job - takes info from a batch_status about a job and\n *\t\t\t converts it into a resource_resv struct\n *\n * \t  @param[in] pbs_sd - connection descriptor to the server\n *\t  @param[in] job - batch_status struct of job\n *\t  @param[in] sinfo - server where the job resides\n *\t  @param[in] qinfo - queue where job resides\n *\t  @param[out] err - returns error info\n *\n *\t@return resource_resv\n *\t@retval job (may be invalid, if so, err will report why)\n *\t@retval NULL on error\n */\n\nresource_resv *\nquery_job(int pbs_sd, struct batch_status *job, server_info *sinfo, queue_info *qinfo, schd_error *err)\n{\n\tresource_resv *resresv; /* converted job */\n\tstruct attrl *attrp;\t/* list of attributes returned from server */\n\tlong count;\t\t/* long used in string->long conversion */\n\tchar *endp;\t\t/* used for strtol() */\n\tresource_req *resreq;\t/* resource_req list for resources requested  */\n\n\tif ((resresv = new resource_resv(job->name)) == NULL)\n\t\treturn NULL;\n\n\tif ((resresv->job = new_job_info()) == NULL) {\n\t\tdelete resresv;\n\t\treturn NULL;\n\t}\n\n\tresresv->rank = get_sched_rank();\n\n\tattrp = job->attribs;\n\n\tresresv->server = sinfo;\n\tresresv->job->queue = qinfo;\n\n\tresresv->is_job = 
true;\n\n\tresresv->job->can_checkpoint = true; /* default can be checkpointed */\n\tresresv->job->can_requeue = true;    /* default can be requeued */\n\tresresv->job->can_suspend = true;    /* default can be suspended */\n\n\twhile (attrp != NULL && !resresv->is_invalid) {\n\t\tclear_schd_error(err);\n\t\tif (conf.fairshare_ent == attrp->name) {\n\t\t\tif (sinfo->fstree != NULL) {\n#ifdef NAS /* localmod 059 */\n\t\t\t\t/* This is a hack to allow -A specification for testing, but\n\t\t\t\t * ignore most incorrect user -A values\n\t\t\t\t */\n\t\t\t\tif (strchr(attrp->value, ':') != NULL) {\n\t\t\t\t\t/* moved to query_jobs() in order to include the queue name\n\t\t\t\t\t resresv->job->ginfo = find_alloc_ginfo( attrp->value,\n\t\t\t\t\t sinfo->fstree->root );\n\t\t\t\t\t */\n\t\t\t\t\t/* localmod 034 */\n\t\t\t\t\tresresv->job->sh_info = site_find_alloc_share(sinfo, attrp->value);\n\t\t\t\t}\n#else\n\t\t\t\tresresv->job->ginfo = find_alloc_ginfo(attrp->value, sinfo->fstree->root);\n#endif /* localmod 059 */\n\t\t\t} else\n\t\t\t\tresresv->job->ginfo = NULL;\n\t\t}\n\t\tif (!strcmp(attrp->name, ATTR_p)) { /* priority */\n\t\t\tcount = strtol(attrp->value, &endp, 10);\n\t\t\tif (*endp == '\\0')\n\t\t\t\tresresv->job->priority = count;\n\t\t\telse\n\t\t\t\tresresv->job->priority = -1;\n#ifdef NAS /* localmod 045 */\n\t\t\tresresv->job->NAS_pri = resresv->job->priority;\n#endif\t\t\t\t\t\t\t       /* localmod 045 */\n\t\t} else if (!strcmp(attrp->name, ATTR_qtime)) { /* queue time */\n\t\t\tcount = strtol(attrp->value, &endp, 10);\n\t\t\tif (*endp == '\\0')\n\t\t\t\tresresv->qtime = count;\n\t\t\telse\n\t\t\t\tresresv->qtime = -1;\n\t\t} else if (!strcmp(attrp->name, ATTR_qrank)) { /* queue rank */\n\t\t\tlong long qrank;\n\t\t\tqrank = strtoll(attrp->value, &endp, 10);\n\t\t\tif (*endp == '\\0')\n\t\t\t\tresresv->qrank = qrank;\n\t\t\telse\n\t\t\t\tresresv->qrank = -1;\n\t\t} else if (!strcmp(attrp->name, ATTR_etime)) { /* eligible time */\n\t\t\tcount = 
strtol(attrp->value, &endp, 10);\n\t\t\tif (*endp == '\\0')\n\t\t\t\tresresv->job->etime = count;\n\t\t\telse\n\t\t\t\tresresv->job->etime = -1;\n\t\t} else if (!strcmp(attrp->name, ATTR_stime)) { /* job start time */\n\t\t\tcount = strtol(attrp->value, &endp, 10);\n\t\t\tif (*endp == '\\0')\n\t\t\t\tresresv->job->stime = count;\n\t\t\telse\n\t\t\t\tresresv->job->stime = -1;\n\t\t} else if (!strcmp(attrp->name, ATTR_N)) /* job name (qsub -N) */\n\t\t\tresresv->job->job_name = string_dup(attrp->value);\n\t\telse if (!strcmp(attrp->name, ATTR_state)) { /* state of job */\n\t\t\tif (set_job_state(attrp->value, resresv->job) == 0) {\n\t\t\t\tset_schd_error_codes(err, NEVER_RUN, ERR_SPECIAL);\n\t\t\t\tset_schd_error_arg(err, SPECMSG, \"Job is in an invalid state\");\n\t\t\t\tresresv->is_invalid = 1;\n\t\t\t}\n\t\t} else if (!strcmp(attrp->name, ATTR_substate)) {\n\t\t\tif (!strcmp(attrp->value, SUSP_BY_SCHED_SUBSTATE))\n\t\t\t\tresresv->job->is_susp_sched = true;\n\t\t\tif (!strcmp(attrp->value, PROVISIONING_SUBSTATE))\n\t\t\t\tresresv->job->is_provisioning = true;\n\t\t\tif (!strcmp(attrp->value, PRERUNNING_SUBSTATE))\n\t\t\t\tresresv->job->is_prerunning = true;\n\t\t} else if (!strcmp(attrp->name, ATTR_sched_preempted)) {\n\t\t\tcount = strtol(attrp->value, &endp, 10);\n\t\t\tif (*endp == '\\0') {\n\t\t\t\tresresv->job->time_preempted = count;\n\t\t\t\tresresv->job->is_preempted = true;\n\t\t\t}\n\t\t} else if (!strcmp(attrp->name, ATTR_comment)) /* job comment */\n\t\t\tresresv->job->comment = string_dup(attrp->value);\n\t\telse if (!strcmp(attrp->name, ATTR_released)) /* resources_released */\n\t\t\tresresv->job->resreleased = parse_execvnode(attrp->value, sinfo, NULL);\n\t\telse if (!strcmp(attrp->name, ATTR_euser)) /* account name */\n\t\t\tresresv->user = attrp->value;\n\t\telse if (!strcmp(attrp->name, ATTR_egroup)) /* group name */\n\t\t\tresresv->group = attrp->value;\n\t\telse if (!strcmp(attrp->name, ATTR_project)) /* project name */\n\t\t\tresresv->project 
= attrp->value;\n\t\telse if (!strcmp(attrp->name, ATTR_resv_ID)) /* reserve_ID */\n\t\t\tresresv->job->resv_id = string_dup(attrp->value);\n\t\telse if (!strcmp(attrp->name, ATTR_altid)) /* vendor ID */\n\t\t\tresresv->job->alt_id = string_dup(attrp->value);\n\t\telse if (!strcmp(attrp->name, ATTR_SchedSelect))\n#ifdef NAS /* localmod 031 */\n\t\t{\n\t\t\tresresv->job->schedsel = string_dup(attrp->value);\n#endif /* localmod 031 */\n\n\t\t\tresresv->select = parse_selspec(attrp->value);\n#ifdef NAS /* localmod 031 */\n\t\t}\n#endif /* localmod 031 */\n\t\telse if (!strcmp(attrp->name, ATTR_array_id))\n\t\t\tresresv->job->array_id = attrp->value;\n\t\telse if (!strcmp(attrp->name, ATTR_node_set))\n\t\t\tresresv->node_set_str = break_comma_list(attrp->value);\n\t\telse if (!strcmp(attrp->name, ATTR_array)) { /* array */\n\t\t\tif (!strcmp(attrp->value, ATR_TRUE))\n\t\t\t\tresresv->job->is_array = true;\n\t\t} else if (!strcmp(attrp->name, ATTR_array_index)) { /* array_index */\n\t\t\tcount = strtol(attrp->value, &endp, 10);\n\t\t\tif (*endp == '\\0')\n\t\t\t\tresresv->job->array_index = count;\n\t\t\telse\n\t\t\t\tresresv->job->array_index = -1;\n\n\t\t\tresresv->job->is_subjob = true;\n\t\t} else if (!strcmp(attrp->name, ATTR_topjob)) {\n\t\t\tif (!strcmp(attrp->value, ATR_TRUE))\n\t\t\t\tresresv->job->is_topjob = true;\n\t\t} else if (!strcmp(attrp->name, ATTR_topjob_ineligible)) {\n\t\t\tif (!strcmp(attrp->value, ATR_TRUE))\n\t\t\t\tresresv->job->topjob_ineligible = true;\n\t\t}\n\t\t/* array_indices_remaining */\n\t\telse if (!strcmp(attrp->name, ATTR_array_indices_remaining))\n\t\t\tresresv->job->queued_subjobs = range_parse(attrp->value);\n\t\telse if (!strcmp(attrp->name, ATTR_max_run_subjobs)) {\n\t\t\tcount = strtol(attrp->value, &endp, 10);\n\t\t\tif (*endp == '\\0')\n\t\t\t\tresresv->job->max_run_subjobs = count;\n\t\t} else if (!strcmp(attrp->name, ATTR_execvnode)) {\n\t\t\tauto tmp_nspec_arr = parse_execvnode(attrp->value, sinfo, 
NULL);\n\t\t\tresresv->nspec_arr = combine_nspec_array(tmp_nspec_arr);\n\t\t\tfree_nspecs(tmp_nspec_arr);\n\n\t\t\tresresv->ninfo_arr = create_node_array_from_nspec(resresv->nspec_arr);\n\t\t} else if (!strcmp(attrp->name, ATTR_l)) { /* resources requested*/\n\t\t\tresreq = find_alloc_resource_req_by_str(resresv->resreq, attrp->resource);\n\t\t\tif (resreq == NULL) {\n\t\t\t\tdelete resresv;\n\t\t\t\treturn NULL;\n\t\t\t}\n\n\t\t\tif (set_resource_req(resreq, attrp->value) != 1) {\n\t\t\t\tset_schd_error_codes(err, NEVER_RUN, ERR_SPECIAL);\n\t\t\t\tset_schd_error_arg(err, SPECMSG, \"Bad requested resource data\");\n\t\t\t\tresresv->is_invalid = true;\n\t\t\t\treturn resresv;\n\t\t\t} else {\n\t\t\t\tif (resresv->resreq == NULL)\n\t\t\t\t\tresresv->resreq = resreq;\n#ifdef NAS\n\t\t\t\tif (!strcmp(attrp->resource, \"nodect\")) { /* nodect for sort */\n\t\t\t\t\t/* localmod 040 */\n\t\t\t\t\tcount = strtol(attrp->value, &endp, 10);\n\t\t\t\t\tif (*endp == '\\0')\n\t\t\t\t\t\tresresv->job->nodect = count;\n\t\t\t\t\telse\n\t\t\t\t\t\tresresv->job->nodect = 0;\n\t\t\t\t\t/* localmod 034 */\n\t\t\t\t\tresresv->job->accrue_rate = resresv->job->nodect; /* XXX should be SBU rate */\n\t\t\t\t}\n#endif\n\t\t\t\tif (!strcmp(attrp->resource, \"place\")) {\n\t\t\t\t\tresresv->place_spec = parse_placespec(attrp->value);\n\t\t\t\t\tif (resresv->place_spec == NULL) {\n\t\t\t\t\t\tset_schd_error_codes(err, NEVER_RUN, ERR_SPECIAL);\n\t\t\t\t\t\tset_schd_error_arg(err, SPECMSG, \"invalid placement spec\");\n\t\t\t\t\t\tresresv->is_invalid = true;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t} else if (!strcmp(attrp->name, ATTR_rel_list)) {\n\t\t\tresreq = find_alloc_resource_req_by_str(resresv->job->resreq_rel, attrp->resource);\n\t\t\tif (resreq != NULL)\n\t\t\t\tset_resource_req(resreq, attrp->value);\n\t\t\tif (resresv->job->resreq_rel == NULL)\n\t\t\t\tresresv->job->resreq_rel = resreq;\n\t\t} else if (!strcmp(attrp->name, ATTR_used)) { /* resources used */\n\t\t\tresreq 
=\n\t\t\t\tfind_alloc_resource_req_by_str(resresv->job->resused, attrp->resource);\n\t\t\tif (resreq != NULL)\n\t\t\t\tset_resource_req(resreq, attrp->value);\n\t\t\tif (resresv->job->resused == NULL)\n\t\t\t\tresresv->job->resused = resreq;\n\t\t} else if (!strcmp(attrp->name, ATTR_accrue_type)) {\n\t\t\tcount = strtol(attrp->value, &endp, 10);\n\t\t\tif (*endp == '\\0')\n\t\t\t\tresresv->job->accrue_type = count;\n\t\t\telse\n\t\t\t\tresresv->job->accrue_type = 0;\n\t\t} else if (!strcmp(attrp->name, ATTR_eligible_time))\n\t\t\tresresv->job->eligible_time = (time_t) res_to_num(attrp->value, NULL);\n\t\telse if (!strcmp(attrp->name, ATTR_estimated)) {\n\t\t\tif (!strcmp(attrp->resource, \"start_time\")) {\n\t\t\t\tresresv->job->est_start_time =\n\t\t\t\t\t(time_t) res_to_num(attrp->value, NULL);\n\t\t\t} else if (!strcmp(attrp->resource, \"execvnode\"))\n\t\t\t\tresresv->job->est_execvnode = string_dup(attrp->value);\n\t\t} else if (!strcmp(attrp->name, ATTR_c)) { /* checkpoint allowed? */\n\t\t\tif (strcmp(attrp->value, \"n\") == 0)\n\t\t\t\tresresv->job->can_checkpoint = false;\n\t\t} else if (!strcmp(attrp->name, ATTR_r)) { /* requeue allowed? */\n\t\t\tif (strcmp(attrp->value, ATR_FALSE) == 0)\n\t\t\t\tresresv->job->can_requeue = false;\n\t\t} else if (!strcmp(attrp->name, ATTR_depend)) {\n\t\t\tresresv->job->depend_job_str = string_dup(attrp->value);\n\t\t}\n\n\t\tattrp = attrp->next;\n\t}\n\n#ifdef NAS /* localmod 040 */\n\t/* we modify nodect to be the same value for all jobs in queues that are\n\t * configured to ignore nodect key sorting, for two reasons:\n\t * 1. obviously to accomplish ignoring of nodect key sorting\n\t * 2. 
maintain stability of qsort when comparing with a job in a queue that\n\t *    does not require nodect key sorting\n\t * note that this assumes nodect is used only for sorting\n\t */\n\tif (qinfo->ignore_nodect_sort)\n\t\tresresv->job->nodect = 999999;\n#endif /* localmod 040 */\n\n\tif (qinfo->is_peer_queue) {\n\t\tresresv->is_peer_ob = 1;\n\t\tresresv->job->peer_sd = pbs_sd;\n\t}\n\n\tif ((resresv->aoename = getaoename(resresv->select)) != NULL)\n\t\tresresv->is_prov_needed = true;\n\tif ((resresv->eoename = geteoename(resresv->select)) != NULL) {\n\t\t/* job with a power profile can't be checkpointed or suspended */\n\t\tresresv->job->can_checkpoint = false;\n\t\tresresv->job->can_suspend = false;\n\t}\n\n\tif (resresv->select != NULL && resresv->select->chunks != NULL) {\n\t\t/*\n\t\t * Job is invalid if there are no resources in a chunk.  Usually\n\t\t * happens because we strip out resources not in conf.res_to_check\n\t\t */\n\t\tint k;\n\t\tfor (k = 0; resresv->select->chunks[k] != NULL; k++)\n\t\t\tif (resresv->select->chunks[k]->req == NULL) {\n\t\t\t\tset_schd_error_codes(err, NEVER_RUN, INVALID_RESRESV);\n\t\t\t\tset_schd_error_arg(err, ARG1, \"invalid chunk in select\");\n\t\t\t\tresresv->is_invalid = true;\n\t\t\t\treturn resresv;\n\t\t\t}\n\t}\n\n\tif (resresv->place_spec->scatter &&\n\t    resresv->select->total_chunks > 1)\n\t\tresresv->will_use_multinode = true;\n\n\tif (resresv->job->is_queued && !resresv->nspec_arr.empty())\n\t\tresresv->job->is_checkpointed = true;\n\n\t/* If we did not wait for mom to start the job (throughput mode),\n\t * it is possible that we’re seeing a running job without a start time set.\n\t * The stime is set when the mom reports back to the server to say the job is running.\n\t */\n\tif ((resresv->job->is_running) && (resresv->job->stime == UNSPECIFIED))\n\t\tresresv->job->stime = sinfo->server_time + 1;\n\n\t/* For jobs that have an exec_vnode, we create a \"select\" based\n\t * on its exec_vnode.  
We do this so if we ever need to run the job\n\t * again, we will replace the job on the exact vnodes/resources it originally used.\n\t */\n\tstd::string selectspec;\n\tif (resresv->job->is_suspended && !resresv->job->resreleased.empty())\n\t\t/* For jobs that are suspended and have resource_released, the \"select\"\n\t\t * we create is based off of resources_released instead of the exec_vnode.\n\t\t */\n\t\tselectspec = create_select_from_nspec(resresv->job->resreleased);\n\telse if (!resresv->nspec_arr.empty())\n\t\tselectspec = create_select_from_nspec(resresv->nspec_arr);\n\n\tif (!selectspec.empty())\n\t\tresresv->execselect = parse_selspec(selectspec);\n\n\tset_job_times(pbs_sd, resresv, sinfo->server_time);\n\n\t/* Add Resource_List resources after resource_used on the job.  This is \n\t * mainly for fairshare.  This allows us to use custom resources only found in \n\t * Resource_List in our fairshare formula.  If a resource appears in both\n\t * lists, we'll find the one in resources_used first and use that.\n\t */\n\tauto req = resresv->job->resused;\n\n\tif (req != NULL) {\n\t\twhile (req->next != NULL)\n\t\t\treq = req->next;\n\n\t\treq->next = dup_resource_req_list(resresv->resreq);\n\t}\n\n\t/* if the fairshare entity was not set by query_job(), then check\n\t * if it's 'queue' and if so, set the group info to the queue name\n\t */\n\tif (conf.fairshare_ent == \"queue\") {\n\t\tif (sinfo->fstree != NULL) {\n\t\t\tresresv->job->ginfo =\n\t\t\t\tfind_alloc_ginfo(qinfo->name, sinfo->fstree->root);\n\t\t} else\n\t\t\tresresv->job->ginfo = NULL;\n\t}\n\n\t/* if fairshare_ent is invalid or the job doesn't have one, give a default\n\t * of something most likely unique - egroup:euser\n\t */\n\n\tif (resresv->job->ginfo == NULL) {\n\t\tchar fairshare_name[100];\n#ifdef NAS /* localmod 058 */\n\t\tsprintf(fairshare_name, \"%s:%s:%s\", resresv->group.c_str(), resresv->user.c_str(),\n\t\t\tqinfo->name.c_str());\n#else\n\t\tsprintf(fairshare_name, \"%s:%s\", 
resresv->group.c_str(), resresv->user.c_str());\n#endif /* localmod 058 */\n\t\tif (resresv->server->fstree != NULL) {\n\t\t\tresresv->job->ginfo = find_alloc_ginfo(fairshare_name, sinfo->fstree->root);\n\t\t} else\n\t\t\tresresv->job->ginfo = NULL;\n\t}\n#ifdef NAS /* localmod 034 */\n\tif (resresv->job->sh_info == NULL) {\n\t\tsprintf(fairshare_name, \"%s:%s\", resresv->group.c_str(), resresv->user.c_str());\n\t\tresresv->job->sh_info = site_find_alloc_share(sinfo,\n\t\t\t\t\t\t\t      fairshare_name);\n\t}\n\tsite_set_share_type(sinfo, resresv);\n#endif /* localmod 034 */\n\n\treturn resresv;\n}\n\n/**\n *\t@brief\n *\t\tjob_info constructor\n *\n * @return job_info *\n */\njob_info *\nnew_job_info()\n{\n\tjob_info *jinfo;\n\n\tif ((jinfo = new job_info()) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\tjinfo->is_queued = 0;\n\tjinfo->is_running = 0;\n\tjinfo->is_held = 0;\n\tjinfo->is_waiting = 0;\n\tjinfo->is_transit = 0;\n\tjinfo->is_exiting = 0;\n\tjinfo->is_suspended = 0;\n\tjinfo->is_susp_sched = 0;\n\tjinfo->is_userbusy = 0;\n\tjinfo->is_begin = 0;\n\tjinfo->is_expired = 0;\n\tjinfo->is_checkpointed = 0;\n\tjinfo->accrue_type = 0;\n\tjinfo->eligible_time = 0;\n\tjinfo->can_not_preempt = 0;\n\tjinfo->is_topjob = false;\n\tjinfo->topjob_ineligible = 0;\n\n\tjinfo->is_array = 0;\n\tjinfo->is_subjob = 0;\n\n\tjinfo->can_checkpoint = 1; /* default can be checkpointed */\n\tjinfo->can_requeue = 1;\t   /* default can be requeued */\n\tjinfo->can_suspend = 1;\t   /* default can be suspended */\n\n\tjinfo->is_provisioning = 0;\n\tjinfo->is_preempted = 0;\n\tjinfo->is_prerunning = 0;\n\n\tjinfo->job_name = NULL;\n\tjinfo->comment = NULL;\n\tjinfo->resv_id = NULL;\n\tjinfo->alt_id = NULL;\n\tjinfo->queue = NULL;\n\tjinfo->resv = NULL;\n\tjinfo->priority = 0;\n\tjinfo->etime = UNSPECIFIED;\n\tjinfo->stime = UNSPECIFIED;\n\tjinfo->preempt = 0;\n\tjinfo->preempt_status = 0;\n\tjinfo->peer_sd = 
-1;\n\tjinfo->est_start_time = UNSPECIFIED;\n\tjinfo->time_preempted = UNSPECIFIED;\n\tjinfo->est_execvnode = NULL;\n\tjinfo->resused = NULL;\n\tjinfo->ginfo = NULL;\n\n\tjinfo->array_index = UNSPECIFIED;\n\tjinfo->parent_job = NULL;\n\tjinfo->queued_subjobs = NULL;\n\tjinfo->max_run_subjobs = UNSPECIFIED;\n\tjinfo->running_subjobs = 0;\n\tjinfo->attr_updates = NULL;\n\tjinfo->resreq_rel = NULL;\n\tjinfo->depend_job_str = NULL;\n\tjinfo->dependent_jobs = NULL;\n\n\tjinfo->formula_value = 0.0;\n\n#ifdef RESC_SPEC\n\tjinfo->rspec = NULL;\n#endif\n\n#ifdef NAS\n\t/* localmod 045 */\n\tjinfo->NAS_pri = 0;\n\t/* localmod 034 */\n\tjinfo->sh_amts = NULL;\n\tjinfo->sh_info = NULL;\n\tjinfo->accrue_rate = 0;\n\t/* localmod 040 */\n\tjinfo->nodect = 0;\n\t/* localmod 031 */\n\tjinfo->schedsel = NULL;\n\t/* localmod 053 */\n\tjinfo->u_info = NULL;\n#endif\n\n\treturn jinfo;\n}\n\n/**\n *\t@brief\n *\t\tjob_info destructor\n *\n * @param[in,out]\tjinfo\t-\tJob Info structure to be freed.\n */\n\nvoid\nfree_job_info(job_info *jinfo)\n{\n\tif (jinfo == NULL)\n\t\treturn;\n\n\tif (jinfo->comment != NULL)\n\t\tfree(jinfo->comment);\n\n\tif (jinfo->job_name != NULL)\n\t\tfree(jinfo->job_name);\n\n\tif (jinfo->resv_id != NULL)\n\t\tfree(jinfo->resv_id);\n\n\tif (jinfo->alt_id != NULL)\n\t\tfree(jinfo->alt_id);\n\n\tif (jinfo->est_execvnode != NULL)\n\t\tfree(jinfo->est_execvnode);\n\n\tif (jinfo->queued_subjobs != NULL)\n\t\tfree_range_list(jinfo->queued_subjobs);\n\n\tif (jinfo->depend_job_str != NULL)\n\t\tfree(jinfo->depend_job_str);\n\n\tif (jinfo->dependent_jobs != NULL)\n\t\tfree(jinfo->dependent_jobs);\n\n\tfree_resource_req_list(jinfo->resused);\n\n\tfree_attrl_list(jinfo->attr_updates);\n\n\tfree_resource_req_list(jinfo->resreq_rel);\n\n\tfree_nspecs(jinfo->resreleased);\n\n#ifdef RESC_SPEC\n\tfree_rescspec(jinfo->rspec);\n#endif\n#ifdef NAS\n\t/* localmod 034 */\n\tif (jinfo->sh_amts)\n\t\tfree(jinfo->sh_amts);\n\n\t/* localmod 031 */\n\tif 
(jinfo->schedsel)\n\t\tfree(jinfo->schedsel);\n#endif\n\tdelete jinfo;\n}\n\n/**\n * @brief\n *\t\tset_job_state - set the state flag in a job_info structure\n *\t\t\ti.e. the is_* bit\n *\n * @param[in]\tstate\t-\tthe state\n * @param[in,out]\tjinfo\t-\tthe job info structure\n *\n * @return\t1\t-\tif state is successfully set\n * @return\t0\t-\tif state is not set\n */\nint\nset_job_state(const char *state, job_info *jinfo)\n{\n\tif (state == NULL || jinfo == NULL)\n\t\treturn 0;\n\n\t/* turn off all state bits first to make sure only 1 is set at the end */\n\tjinfo->is_queued = jinfo->is_running = jinfo->is_transit =\n\t\tjinfo->is_held = jinfo->is_waiting = jinfo->is_exiting =\n\t\t\tjinfo->is_suspended = jinfo->is_userbusy = jinfo->is_begin =\n\t\t\t\tjinfo->is_expired = 0;\n\n\tswitch (state[0]) {\n\t\tcase 'Q':\n\t\t\tjinfo->is_queued = 1;\n\t\t\tbreak;\n\n\t\tcase 'R':\n\t\t\tjinfo->is_running = 1;\n\t\t\tbreak;\n\n\t\tcase 'T':\n\t\t\tjinfo->is_transit = 1;\n\t\t\tbreak;\n\n\t\tcase 'H':\n\t\t\tjinfo->is_held = 1;\n\t\t\tbreak;\n\n\t\tcase 'W':\n\t\t\tjinfo->is_waiting = 1;\n\t\t\tbreak;\n\n\t\tcase 'E':\n\t\t\tjinfo->is_exiting = 1;\n\t\t\tbreak;\n\n\t\tcase 'S':\n\t\t\tjinfo->is_suspended = 1;\n\t\t\tbreak;\n\n\t\tcase 'U':\n\t\t\tjinfo->is_userbusy = 1;\n\t\t\tbreak;\n\n\t\tcase 'B':\n\t\t\tjinfo->is_begin = 1;\n\t\t\tbreak;\n\n\t\tcase 'X':\n\t\t\tjinfo->is_expired = 1;\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\treturn 0;\n\t}\n\treturn 1;\n}\n\n/**\n * @brief\tCheck whether it's ok to send attribute updates to server\n *\n * @param[in]\tattrname - name of the attribute\n *\n * @return\tint\n * @retval\t0 for No\n * @retval\t1 for Yes\n */\nstatic int\ncan_send_update(const char *attrname)\n{\n\tconst char *attrs_to_throttle[] = {ATTR_comment, ATTR_estimated, NULL};\n\tint i;\n\n\tif (send_job_attr_updates)\n\t\treturn 1;\n\n\t/* Check to see if the attr being updated is eligible for throttling */\n\tfor (i = 0; attrs_to_throttle[i] != NULL; i++) {\n\t\tif 
(strcmp(attrs_to_throttle[i], attrname) == 0)\n\t\t\treturn 0;\n\t}\n\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tupdate job attributes on the server\n *\n * @param[in]\tpbs_sd     -\tconnection to the pbs_server\n * @param[in]\tresresv    -\tjob to update\n * @param[in]\tattr_name  -\tthe name of the attribute to alter\n * @param[in]\tattr_resc  -\tresource part of the attribute (if any)\n * @param[in]\tattr_value -\tthe value of the attribute to alter (as a str)\n * @param[in]\textra\t-\textra attrl attributes to tag along on the alterjob\n * @param[in]\tflags\t-\tUPDATE_NOW - call send_attr_updates() to update the attribute now\n *\t\t\t     \t\t\tUPDATE_LATER - attach attribute change to job to be sent all at once\n *\t\t\t\t\t\t\tfor the job.  NOTE: Only the jobs that are part\n *\t\t\t\t\t\t\tof the server in main_sched_loop() will be updated in this way.\n *\n * @retval\t1\tattributes were updated or successfully attached to job\n * @retval\t0\tno attributes were updated for a valid reason\n * @retval -1\tno attributes were updated for an error\n *\n */\nint\nupdate_job_attr(int pbs_sd, resource_resv *resresv, const char *attr_name,\n\t\tconst char *attr_resc, const char *attr_value, struct attrl *extra, unsigned int flags)\n{\n\tstruct attrl *pattr = NULL;\n\tstruct attrl *pattr2 = NULL;\n\tstruct attrl *end;\n\n\tif (resresv == NULL ||\n\t    (attr_name == NULL && attr_value == NULL && extra == NULL))\n\t\treturn -1;\n\n\tif (extra == NULL && (attr_name == NULL || attr_value == NULL))\n\t\treturn -1;\n\n\tif (!resresv->is_job)\n\t\treturn 0;\n\n\t/* if running in simulation then don't update but simulate that we have */\n\tif (pbs_sd == SIMULATE_SD)\n\t\treturn 1;\n\n\t/* don't try and update attributes for jobs on peer servers */\n\tif (resresv->is_peer_ob)\n\t\treturn 0;\n\n\t/* if we've received a SIGPIPE, it means our connection to the server\n\t * has gone away.  
No need to attempt to contact again\n\t */\n\tif (got_sigpipe)\n\t\treturn -1;\n\n\tif (attr_name == NULL && attr_value == NULL) {\n\t\tend = pattr = dup_attrl_list(extra);\n\t\tif (pattr == NULL)\n\t\t\treturn -1;\n\t} else {\n\t\tif ((flags & UPDATE_NOW) && !can_send_update(attr_name)) {\n\t\t\tstruct attrl *iter_attrl = NULL;\n\t\t\tint attr_elig = 0;\n\n\t\t\t/* Check if any of the extra attrs are eligible to be sent */\n\t\t\tfor (iter_attrl = extra; iter_attrl != NULL; iter_attrl = iter_attrl->next) {\n\t\t\t\tif (can_send_update(iter_attrl->name)) {\n\t\t\t\t\tattr_elig = 1;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (!attr_elig)\n\t\t\t\treturn 0;\n\t\t}\n\n\t\tpattr = new_attrl();\n\n\t\tif (pattr == NULL)\n\t\t\treturn -1;\n\t\tpattr->name = string_dup(attr_name);\n\t\tpattr->value = string_dup(attr_value);\n\t\tpattr->resource = string_dup(attr_resc);\n\t\tend = pattr;\n\t\tif (extra != NULL) {\n\t\t\tpattr2 = dup_attrl_list(extra);\n\t\t\tif (pattr2 == NULL) {\n\t\t\t\tfree_attrl(pattr);\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\tpattr->next = pattr2;\n\t\t\t/* extra may have been a list, let's find the end */\n\t\t\tfor (end = pattr2; end->next != NULL; end = end->next)\n\t\t\t\t;\n\t\t}\n\t}\n\n\tif (flags & UPDATE_LATER) {\n\t\tend->next = resresv->job->attr_updates;\n\t\tresresv->job->attr_updates = pattr;\n\t}\n\n\tif (pattr != NULL && (flags & UPDATE_NOW)) {\n\t\tint rc;\n\t\trc = send_attr_updates(pbs_sd, resresv, pattr);\n\t\tfree_attrl_list(pattr);\n\t\treturn rc;\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tsend delayed job attribute updates for job using send_attr_updates().\n *\n * @par\n * \t\tThe main reason to use this function over a direct send_attr_update()\n *      call is so that the job's attr_updates list gets free'd and NULL'd.\n *      We don't want to send the attr updates multiple times\n *\n * @param[in]\tpbs_sd\t-\tserver connection descriptor\n * @param[in]\tjob\t-\tjob to send attributes to\n *\n * @return\tint(ret val 
from send_attr_updates)\n * @retval\t1\t- success\n * @retval\t0\t- failure to update\n */\nint\nsend_job_updates(int pbs_sd, resource_resv *job)\n{\n\tint rc;\n\tstruct attrl *iter_attr = NULL;\n\n\tif (job == NULL)\n\t\treturn 0;\n\n\tif (!send_job_attr_updates) {\n\t\tint send = 0;\n\t\tfor (iter_attr = job->job->attr_updates; iter_attr != NULL; iter_attr = iter_attr->next) {\n\t\t\tif (can_send_update(iter_attr->name)) {\n\t\t\t\tsend = 1;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tif (!send)\n\t\t\treturn 0;\n\t}\n\n\trc = send_attr_updates(pbs_sd, job, job->job->attr_updates);\n\n\tfree_attrl_list(job->job->attr_updates);\n\tjob->job->attr_updates = NULL;\n\treturn rc;\n}\n\n/**\n *\t@brief\n *\t\tunset job attributes on the server\n *\n * @param[in]\tpbs_sd\t-\tconnection to the pbs_server\n * @param[in]\tresresv\t-\tjob to update\n * @param[in]\tattr_name\t-\tthe name of the attribute to unset\n * @param[in]\tflags\t-\tUPDATE_NOW : call send_attr_updates() to update the attribute now\n *\t\t\t     \t\t\tUPDATE_LATER - attach attribute change to job to be sent all at once\n *\t\t\t\t\t\t\tfor the job.  NOTE: Only the jobs that are part\n *\t\t\t\t\t\t\tof the server in main_sched_loop() will be updated in this way.\n *\n *\t@retval\t1\t: attributes were unset\n *\t@retval\t0\t: no attributes were unset for a valid reason\n *\t@retval -1\t: no attributes were unset for an error\n *\n */\nint\nunset_job_attr(int pbs_sd, resource_resv *resresv, const char *attr_name, unsigned int flags)\n{\n\treturn (update_job_attr(pbs_sd, resresv, attr_name, NULL, \"\", NULL, flags));\n}\n\n/**\n * @brief\n *\t\tupdate_job_comment - update a job's comment attribute.  
If the job's\n *\t\t\t     comment attr is identical, don't update\n *\n * @param[in]\tpbs_sd\t-\tpbs connection descriptor\n * @param[in]\tresresv -\tthe job to update\n * @param[in]\tcomment -\tthe comment string\n *\n * @return\tint\n * @retval\t1\t: if the comment was updated\n * @retval\t0\t: if not\n *\n */\nint\nupdate_job_comment(int pbs_sd, resource_resv *resresv, char *comment)\n{\n\tint rc = 0;\n\n\tif (resresv == NULL || comment == NULL)\n\t\treturn 0;\n\n\tif (!resresv->is_job || resresv->job == NULL)\n\t\treturn 0;\n\n\t/* no need to update the job comment if it is the same */\n\tif (resresv->job->comment == NULL ||\n\t    strcmp(resresv->job->comment, comment)) {\n\t\tif (conf.update_comments) {\n\t\t\trc = update_job_attr(pbs_sd, resresv, ATTR_comment, NULL, comment, NULL, UPDATE_LATER);\n\t\t\tif (rc > 0) {\n\t\t\t\tif (resresv->job->comment != NULL)\n\t\t\t\t\tfree(resresv->job->comment);\n\t\t\t\tresresv->job->comment = string_dup(comment);\n\t\t\t}\n\t\t}\n\t}\n\treturn rc;\n}\n\n/**\n * @brief\n *\t\tupdate_jobs_cant_run - update an array of jobs which can not run\n *\n * @param[in]\tpbs_sd\t-\tconnection to the PBS server\n * @param[in,out]\tresresv_arr\t-\tthe array to update\n * @param[in]\tstart\t-\tthe job which couldn't run\n * @param[in]\terr\t-\tthe error structure to translate into the jobs' comment and log message\n * @param[in]\tstart_where\t-\twhere to start updating relative to start (e.g., START_BEFORE_JOB or START_AFTER_JOB)\n *\n * @return nothing\n *\n */\nvoid\nupdate_jobs_cant_run(int pbs_sd, resource_resv **resresv_arr,\n\t\t     resource_resv *start, struct schd_error *err, int start_where)\n{\n\tint i = 0;\n\n\tif (resresv_arr == NULL)\n\t\treturn;\n\n\t/* If we are not starting at the front of the array, we need to find the\n\t * element to start with.\n\t */\n\tif (start != NULL) {\n\t\tfor (; resresv_arr[i] != NULL && resresv_arr[i] != start; i++)\n\t\t\t;\n\t} else\n\t\ti = 0;\n\n\tif (resresv_arr[i] != NULL) {\n\t\tif (start_where == START_BEFORE_JOB)\n\t\t\ti--;\n\t\telse if (start_where == 
START_AFTER_JOB)\n\t\t\ti++;\n\n\t\tfor (; resresv_arr[i] != NULL; i++) {\n\t\t\tif (!resresv_arr[i]->can_not_run) {\n\t\t\t\tupdate_job_can_not_run(pbs_sd, resresv_arr[i], err);\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\t\ttranslate failure codes into a comment and log message\n *\n * @param[in]\terr\t-\terror reply structure to translate\n * @param[out]\tcomment_msg\t-\ttranslated comment (may be NULL)\n * @param[out]\tlog_msg\t-\ttranslated log message (may be NULL)\n *\n * @return\tint\n * @retval\t1\t: comment and log messages were set\n * @retval\t0\t: comment and log messages were not set\n *\n */\nint\ntranslate_fail_code(schd_error *err, char *comment_msg, char *log_msg)\n{\n\tint rc = 1;\n\tchar commentbuf[MAX_LOG_SIZE];\n\tconst char *arg1;\n\tconst char *arg2;\n\tconst char *arg3;\n\tconst char *spec;\n\n\tif (err == NULL)\n\t\treturn 0;\n\n\tif (err->status_code == SCHD_UNKWN) {\n\t\tif (comment_msg != NULL)\n\t\t\tcomment_msg[0] = '\\0';\n\t\tif (log_msg != NULL)\n\t\t\tlog_msg[0] = '\\0';\n\t\treturn 0;\n\t}\n\n\tif (err->error_code < RET_BASE) {\n\t\tconst char *pbse;\n\n\t\tif (err->specmsg != NULL)\n\t\t\tpbse = err->specmsg;\n\t\telse\n\t\t\tpbse = pbse_to_txt(err->error_code);\n\n\t\tif (pbse == NULL)\n\t\t\tpbse = \"\";\n\n\t\tif (comment_msg != NULL)\n\t\t\tsnprintf(commentbuf, sizeof(commentbuf), \"%s\", pbse);\n\t\tif (log_msg != NULL)\n\t\t\tsnprintf(log_msg, MAX_LOG_SIZE, \"%s\", pbse);\n\t}\n\n\targ1 = err->arg1;\n\targ2 = err->arg2;\n\targ3 = err->arg3;\n\tspec = err->specmsg;\n\tif (arg1 == NULL)\n\t\targ1 = \"\";\n\tif (arg2 == NULL)\n\t\targ2 = \"\";\n\tif (arg3 == NULL)\n\t\targ3 = \"\";\n\tif (spec == NULL)\n\t\tspec = \"\";\n\n\tswitch (err->error_code) {\n\t\tcase ERR_SPECIAL:\n\n\t\t\tif (comment_msg != NULL)\n\t\t\t\tsnprintf(commentbuf, sizeof(commentbuf), \"%s\", spec);\n\t\t\tif (log_msg != NULL)\n\t\t\t\tsnprintf(log_msg, MAX_LOG_SIZE, \"%s\", spec);\n\t\t\tbreak;\n\n\t\t/* codes using no args */\n\t\tcase 
MAX_RUN_SUBJOBS:\n\t\tcase BACKFILL_CONFLICT:\n\t\tcase CANT_PREEMPT_ENOUGH_WORK:\n\t\tcase CROSS_DED_TIME_BOUNDRY:\n\t\tcase DED_TIME:\n\t\tcase NODE_GROUP_LIMIT_REACHED:\n\t\tcase NODE_JOB_LIMIT_REACHED:\n\t\tcase NODE_NO_MULT_JOBS:\n\t\tcase NODE_PLACE_PACK:\n\t\tcase NODE_RESV_ENABLE:\n\t\tcase NODE_UNLICENSED:\n\t\tcase NODE_USER_LIMIT_REACHED:\n\t\tcase NONPRIME_ONLY:\n\t\tcase NOT_ENOUGH_NODES_AVAIL:\n\t\tcase NO_FAIRSHARES:\n\t\tcase NO_NODE_RESOURCES:\n\t\tcase NO_SMALL_CPUSETS:\n\t\tcase PRIME_ONLY:\n\t\tcase QUEUE_NOT_STARTED:\n\t\tcase RESERVATION_CONFLICT:\n\t\tcase SCHD_ERROR:\n\t\tcase SERVER_GROUP_LIMIT_REACHED:\n\t\tcase SERVER_PROJECT_LIMIT_REACHED:\n\t\tcase SERVER_JOB_LIMIT_REACHED:\n\t\tcase SERVER_USER_LIMIT_REACHED:\n\t\tcase STRICT_ORDERING:\n\t\tcase PROV_DISABLE_ON_SERVER:\n\t\tcase PROV_DISABLE_ON_NODE:\n\t\tcase PROV_BACKFILL_CONFLICT:\n\t\tcase CANT_SPAN_PSET:\n\t\tcase IS_MULTI_VNODE:\n\t\tcase PROV_RESRESV_CONFLICT:\n\t\tcase NO_FREE_NODES:\n\t\tcase NO_TOTAL_NODES:\n\t\tcase JOB_UNDER_THRESHOLD:\n#ifdef NAS\n\t\t\t/* localmod 034 */\n\t\tcase GROUP_CPU_SHARE:\n\t\tcase GROUP_CPU_INSUFFICIENT:\n\t\t\t/* localmod 998 */\n\t\tcase RESOURCES_INSUFFICIENT:\n#endif\n\t\t\tif (comment_msg != NULL)\n\t\t\t\tsnprintf(commentbuf, sizeof(commentbuf), \"%s\", ERR2COMMENT(err->error_code));\n\t\t\tif (log_msg != NULL)\n\t\t\t\tsnprintf(log_msg, MAX_LOG_SIZE, \"%s\", ERR2INFO(err->error_code));\n\t\t\tbreak;\n\t\t\t/* codes using arg1  */\n#ifndef NAS /* localmod 031 */\n\t\tcase INVALID_NODE_STATE:\n#endif /* localmod 031 */\n\t\tcase INVALID_NODE_TYPE:\n\t\tcase NODE_NOT_EXCL:\n\t\tcase QUEUE_GROUP_LIMIT_REACHED:\n\t\tcase QUEUE_PROJECT_LIMIT_REACHED:\n\t\tcase QUEUE_JOB_LIMIT_REACHED:\n\t\tcase QUEUE_USER_LIMIT_REACHED:\n\t\tcase SERVER_BYGROUP_JOB_LIMIT_REACHED:\n\t\tcase SERVER_BYPROJECT_JOB_LIMIT_REACHED:\n\t\tcase SERVER_BYUSER_JOB_LIMIT_REACHED:\n\t\tcase AOE_NOT_AVALBL:\n\t\tcase EOE_NOT_AVALBL:\n\t\tcase CROSS_PRIME_BOUNDARY:\n\t\tcase 
INVALID_RESRESV:\n\t\t\tif (comment_msg != NULL)\n\t\t\t\tsnprintf(commentbuf, sizeof(commentbuf), ERR2COMMENT(err->error_code), arg1);\n\t\t\tif (log_msg != NULL)\n\t\t\t\tsnprintf(log_msg, MAX_LOG_SIZE, ERR2INFO(err->error_code), arg1);\n\t\t\tbreak;\n\n\t\t\t/* codes using two arguments */\n#ifdef NAS /* localmod 031 */\n\t\tcase INVALID_NODE_STATE:\n#endif /* localmod 031 */\n\t\tcase QUEUE_BYGROUP_JOB_LIMIT_REACHED:\n\t\tcase QUEUE_BYPROJECT_JOB_LIMIT_REACHED:\n\t\tcase QUEUE_BYUSER_JOB_LIMIT_REACHED:\n\t\tcase RUN_FAILURE:\n\t\tcase NODE_NONEXISTENT:\n\t\tcase SET_TOO_SMALL:\n\t\t\tif (comment_msg != NULL) {\n\t\t\t\tsnprintf(commentbuf, sizeof(commentbuf), ERR2COMMENT(err->error_code), arg1, arg2);\n\t\t\t}\n\t\t\tif (log_msg != NULL) {\n\t\t\t\tsnprintf(log_msg, MAX_LOG_SIZE, ERR2INFO(err->error_code), arg1, arg2);\n\t\t\t}\n\t\t\tbreak;\n\t\t/* codes using a resource definition and arg1 */\n\t\tcase QUEUE_GROUP_RES_LIMIT_REACHED:\n\t\tcase QUEUE_PROJECT_RES_LIMIT_REACHED:\n\t\tcase QUEUE_USER_RES_LIMIT_REACHED:\n\t\t\tif (comment_msg != NULL && err->rdef != NULL)\n\t\t\t\tsnprintf(commentbuf, sizeof(commentbuf), ERR2COMMENT(err->error_code), arg1, err->rdef->name.c_str());\n\t\t\tif (log_msg != NULL && err->rdef != NULL)\n\t\t\t\tsnprintf(log_msg, MAX_LOG_SIZE, ERR2INFO(err->error_code), arg1, err->rdef->name.c_str());\n\t\t\tbreak;\n\n\t\t/* codes using resource definition in error structure */\n\t\tcase SERVER_GROUP_RES_LIMIT_REACHED:\n\t\tcase SERVER_PROJECT_RES_LIMIT_REACHED:\n\t\tcase SERVER_USER_RES_LIMIT_REACHED:\n\t\tcase SERVER_RESOURCE_LIMIT_REACHED:\n\t\t\tif (comment_msg != NULL && err->rdef != NULL)\n\t\t\t\tsnprintf(commentbuf, sizeof(commentbuf), ERR2COMMENT(err->error_code), err->rdef->name.c_str());\n\t\t\tif (log_msg != NULL && err->rdef != NULL)\n\t\t\t\tsnprintf(log_msg, MAX_LOG_SIZE, ERR2INFO(err->error_code), err->rdef->name.c_str());\n\t\t\tbreak;\n\n\t\t/* codes using a resource definition and arg1 in a different order */\n\t\tcase 
QUEUE_RESOURCE_LIMIT_REACHED:\n\t\tcase INSUFFICIENT_QUEUE_RESOURCE:\n\t\tcase INSUFFICIENT_SERVER_RESOURCE:\n\t\tcase INSUFFICIENT_RESOURCE:\n\t\t\tif (comment_msg != NULL && err->rdef != NULL)\n\t\t\t\tsnprintf(commentbuf, sizeof(commentbuf), ERR2COMMENT(err->error_code), err->rdef->name.c_str(),\n\t\t\t\t\t arg1 == NULL ? \"\" : arg1);\n\t\t\tif (log_msg != NULL && err->rdef != NULL)\n\t\t\t\tsnprintf(log_msg, MAX_LOG_SIZE, ERR2INFO(err->error_code), err->rdef->name.c_str(),\n\t\t\t\t\t arg1 == NULL ? \"\" : arg1);\n\t\t\tbreak;\n\n\t\t/* codes using arg1, arg3 and resource definition (in a weird order) */\n\t\tcase QUEUE_BYGROUP_RES_LIMIT_REACHED:\n\t\tcase QUEUE_BYPROJECT_RES_LIMIT_REACHED:\n\t\tcase QUEUE_BYUSER_RES_LIMIT_REACHED:\n\t\t\tif (comment_msg != NULL && err->rdef != NULL)\n\t\t\t\tsnprintf(commentbuf, sizeof(commentbuf), ERR2COMMENT(err->error_code), arg3, err->rdef->name.c_str(), arg1);\n\t\t\tif (log_msg != NULL && err->rdef != NULL)\n\t\t\t\tsnprintf(log_msg, MAX_LOG_SIZE, ERR2INFO(err->error_code), arg3, err->rdef->name.c_str(), arg1);\n\t\t\tbreak;\n\n\t\t/* codes using resource definition and arg2 */\n\t\tcase SERVER_BYGROUP_RES_LIMIT_REACHED:\n\t\tcase SERVER_BYPROJECT_RES_LIMIT_REACHED:\n\t\tcase SERVER_BYUSER_RES_LIMIT_REACHED:\n\t\t\tif (comment_msg != NULL && err->rdef != NULL)\n\t\t\t\tsnprintf(commentbuf, sizeof(commentbuf), ERR2COMMENT(err->error_code), arg2,\n\t\t\t\t\t err->rdef->name.c_str());\n\t\t\tif (log_msg != NULL && err->rdef != NULL)\n\t\t\t\tsnprintf(log_msg, MAX_LOG_SIZE, ERR2INFO(err->error_code), arg2,\n\t\t\t\t\t err->rdef->name.c_str());\n\t\t\tbreak;\n\n\t\tcase RESERVATION_INTERFERENCE:\n\t\t\tif (*arg1 != '\\0') {\n\t\t\t\tif (comment_msg != NULL) {\n\t\t\t\t\tsnprintf(commentbuf, sizeof(commentbuf), \"%s: %s\",\n\t\t\t\t\t\t ERR2COMMENT(err->error_code), arg1);\n\t\t\t\t}\n\t\t\t\tif (log_msg != NULL) {\n\t\t\t\t\tsnprintf(log_msg, MAX_LOG_SIZE, \"%s: %s\",\n\t\t\t\t\t\t ERR2INFO(err->error_code), arg1);\n\t\t\t\t}\n\t\t\t} else 
{\n\t\t\t\tif (comment_msg != NULL)\n\t\t\t\t\tsnprintf(commentbuf, sizeof(commentbuf), \"%s\", ERR2COMMENT(err->error_code));\n\t\t\t\tif (log_msg != NULL)\n\t\t\t\t\tsnprintf(log_msg, MAX_LOG_SIZE, \"%s\", ERR2INFO(err->error_code));\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase NOT_QUEUED:\n\t\tdefault:\n\t\t\trc = 0;\n\t\t\tif (comment_msg != NULL)\n\t\t\t\tcommentbuf[0] = '\\0';\n\t\t\tif (log_msg != NULL)\n\t\t\t\tlog_msg[0] = '\\0';\n\t}\n\n\tif (comment_msg != NULL) {\n\t\t/* snprintf() uses MAX_LOG_SIZE because all calls to this function\n\t\t * pass in comment_msg buffers of size MAX_LOG_SIZE.  This needs to be\n\t\t * fixed by passing in the size of comment_msg and log_msg (SPID268659)\n\t\t */\n\t\tswitch (err->status_code) {\n\t\t\tcase SCHD_UNKWN:\n\t\t\tcase NOT_RUN:\n\t\t\t\tsnprintf(comment_msg, MAX_LOG_SIZE, \"%s: %.*s\",\n\t\t\t\t\t NOT_RUN_PREFIX,\n\t\t\t\t\t (int) (MAX_LOG_SIZE - strlen(NOT_RUN_PREFIX) - 3),\n\t\t\t\t\t commentbuf);\n\t\t\t\tbreak;\n\t\t\tcase NEVER_RUN:\n\t\t\t\tsnprintf(comment_msg, MAX_LOG_SIZE, \"%s: %.*s\",\n\t\t\t\t\t NEVER_RUN_PREFIX,\n\t\t\t\t\t (int) (MAX_LOG_SIZE - strlen(NEVER_RUN_PREFIX) - 3),\n\t\t\t\t\t commentbuf);\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tsnprintf(comment_msg, MAX_LOG_SIZE, \"%s\", commentbuf);\n\t\t}\n\t}\n\n\treturn rc;\n}\n\n/**\n * @brief resresv_set constructor\n */\nresresv_set *\nnew_resresv_set(void)\n{\n\tresresv_set *rset;\n\n\trset = static_cast<resresv_set *>(malloc(sizeof(resresv_set)));\n\tif (rset == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\trset->can_not_run = 0;\n\trset->err = NULL;\n\trset->user = NULL;\n\trset->group = NULL;\n\trset->project = NULL;\n\trset->place_spec = NULL;\n\trset->req = NULL;\n\trset->select_spec = NULL;\n\trset->qinfo = NULL;\n\n\treturn rset;\n}\n/**\n * @brief resresv_set destructor\n */\nvoid\nfree_resresv_set(resresv_set *rset)\n{\n\tif (rset == 
NULL)\n\t\treturn;\n\n\tfree_schd_error(rset->err);\n\tfree(rset->user);\n\tfree(rset->group);\n\tfree(rset->project);\n\tdelete rset->select_spec;\n\tfree_place(rset->place_spec);\n\tfree_resource_req_list(rset->req);\n\tfree(rset);\n}\n/**\n *  @brief resresv_set array destructor\n */\nvoid\nfree_resresv_set_array(resresv_set **rsets)\n{\n\tint i;\n\n\tif (rsets == NULL)\n\t\treturn;\n\n\tfor (i = 0; rsets[i] != NULL; i++)\n\t\tfree_resresv_set(rsets[i]);\n\n\tfree(rsets);\n}\n\n/**\n * @brief resresv_set copy constructor\n */\nresresv_set *\ndup_resresv_set(resresv_set *oset, server_info *nsinfo)\n{\n\tresresv_set *rset;\n\n\tif (oset == NULL || nsinfo == NULL)\n\t\treturn NULL;\n\n\trset = new_resresv_set();\n\tif (rset == NULL)\n\t\treturn NULL;\n\n\trset->can_not_run = oset->can_not_run;\n\n\trset->err = dup_schd_error(oset->err);\n\tif (oset->err != NULL && rset->err == NULL) {\n\t\tfree_resresv_set(rset);\n\t\treturn NULL;\n\t}\n\n\trset->user = string_dup(oset->user);\n\tif (oset->user != NULL && rset->user == NULL) {\n\t\tfree_resresv_set(rset);\n\t\treturn NULL;\n\t}\n\trset->group = string_dup(oset->group);\n\tif (oset->group != NULL && rset->group == NULL) {\n\t\tfree_resresv_set(rset);\n\t\treturn NULL;\n\t}\n\trset->project = string_dup(oset->project);\n\tif (oset->project != NULL && rset->project == NULL) {\n\t\tfree_resresv_set(rset);\n\t\treturn NULL;\n\t}\n\trset->select_spec = new selspec(*oset->select_spec);\n\tif (rset->select_spec == NULL) {\n\t\tfree_resresv_set(rset);\n\t\treturn NULL;\n\t}\n\trset->place_spec = dup_place(oset->place_spec);\n\tif (rset->place_spec == NULL) {\n\t\tfree_resresv_set(rset);\n\t\treturn NULL;\n\t}\n\trset->req = dup_resource_req_list(oset->req);\n\tif (oset->req != NULL && rset->req == NULL) {\n\t\tfree_resresv_set(rset);\n\t\treturn NULL;\n\t}\n\tif (oset->qinfo != NULL)\n\t\trset->qinfo = find_queue_info(nsinfo->queues, oset->qinfo->name);\n\n\treturn rset;\n}\n/**\n * @brief resresv_set array copy 
constructor\n */\nresresv_set **\ndup_resresv_set_array(resresv_set **osets, server_info *nsinfo)\n{\n\tint i;\n\tint len;\n\tresresv_set **rsets;\n\tif (osets == NULL || nsinfo == NULL)\n\t\treturn NULL;\n\n\tlen = count_array(osets);\n\n\trsets = static_cast<resresv_set **>(malloc((len + 1) * sizeof(resresv_set *)));\n\tif (rsets == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\tfor (i = 0; osets[i] != NULL; i++) {\n\t\trsets[i] = dup_resresv_set(osets[i], nsinfo);\n\t\tif (rsets[i] == NULL) {\n\t\t\tfree_resresv_set_array(rsets);\n\t\t\treturn NULL;\n\t\t}\n\t}\n\trsets[i] = NULL;\n\treturn rsets;\n}\n\n/**\n * @brief should a resresv_set use the user\n * @param sinfo - server info\n * @param qinfo - queue info\n * @retval 1 - yes\n * @retval 0 - no\n */\nint\nresresv_set_use_user(server_info *sinfo, queue_info *qinfo)\n{\n\tif ((sinfo != NULL) && (sinfo->has_user_limit))\n\t\treturn 1;\n\tif ((qinfo != NULL) && (qinfo->has_user_limit))\n\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief should a resresv_set use the group\n * @param sinfo - server info\n * @param qinfo - queue info\n * @retval 1 - yes\n * @retval 0 - no\n */\nint\nresresv_set_use_grp(server_info *sinfo, queue_info *qinfo)\n{\n\tif ((sinfo != NULL) && (sinfo->has_grp_limit))\n\t\treturn 1;\n\tif ((qinfo != NULL) && (qinfo->has_grp_limit))\n\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief should a resresv_set use the project\n * @param sinfo - server info\n * @param qinfo - queue info\n * @retval 1 - yes\n * @retval 0 - no\n */\nint\nresresv_set_use_proj(server_info *sinfo, queue_info *qinfo)\n{\n\tif ((sinfo != NULL) && (sinfo->has_proj_limit))\n\t\treturn 1;\n\tif ((qinfo != NULL) && (qinfo->has_proj_limit))\n\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief should a resresv_set use the queue\n * \tA resresv_set should use queue for the following reasons:\n * \tHard limits\tmax_run_res, etc\n * \tSoft Limits\tmax_run_res_soft, etc\n * \tNodes\t\tQueue has 
nodes(e.g., node's queue attribute)\n * \tDedtime queue \tQueue is a dedicated time queue\n * \tPrimetime\tQueue is a primetime queue\n * \tNon-primetime\tQueue is a non-primetime queue\n * \tResource limits\tQueue has resources_available limits\n * \tReservation\tQueue is a reservation queue\n *\n * @param qinfo - the queue\n * @retval 1 - yes\n * @retval 0 - no\n */\nint\nresresv_set_use_queue(queue_info *qinfo)\n{\n\tif (qinfo == NULL)\n\t\treturn 0;\n\n\tif (qinfo->has_hard_limit || qinfo->has_soft_limit || qinfo->has_nodes ||\n\t    qinfo->is_ded_queue || qinfo->is_prime_queue || qinfo->is_nonprime_queue ||\n\t    qinfo->has_resav_limit || qinfo->resv != NULL)\n\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief determine which selspec to use from a resource_resv for a resresv_set\n *\n * @par Jobs that have an execselect are either running or need to be placed\n *\tback on the nodes they were originally running on (e.g., suspended jobs).\n *\tWe need to put them in their own set because they are no longer\n *\trequesting the same resources as jobs with the same select spec.\n *\tThey are requesting the resources on each vnode they are running on.\n *\tWe don't care about running jobs because the only time they will be\n *\tlooked at is if they are requeued.  
At that point they are back in\n *\tthe queued state and have the same select spec as they originally did.\n *\n * @return selspec *\n * @retval selspec to use\n * @retval NULL on error\n */\nselspec *\nresresv_set_which_selspec(resource_resv *resresv)\n{\n\tif (resresv == NULL)\n\t\treturn NULL;\n\n\tif (resresv->job != NULL && !resresv->job->is_running && resresv->execselect != NULL)\n\t\treturn resresv->execselect;\n\n\treturn resresv->select;\n}\n\n/**\n * @brief create the list of resources to consider when creating the resresv sets\n * @param policy[in] - policy info\n * @return std::unordered_set<resdef *>\n * @retval set of resdefs for creating resresv_set's resources.\n * @retval empty set on error\n */\nstd::unordered_set<resdef *>\ncreate_resresv_sets_resdef(status *policy)\n{\n\tstd::unordered_set<resdef *> defs;\n\tschd_resource *limres;\n\n\tif (policy == NULL)\n\t\treturn {};\n\n\tlimres = query_limres();\n\n\tdefs = policy->resdef_to_check;\n\tdefs.insert(allres[\"cput\"]);\n\tdefs.insert(allres[\"walltime\"]);\n\tdefs.insert(allres[\"max_walltime\"]);\n\tdefs.insert(allres[\"min_walltime\"]);\n\tif (sc_attrs.preempt_targets_enable)\n\t\tdefs.insert(allres[\"preempt_targets\"]);\n\n\tfor (auto cur_res = limres; cur_res != NULL; cur_res = cur_res->next)\n\t\tdefs.insert(cur_res->def);\n\n\treturn defs;\n}\n\n/**\n * @brief create a resresv_set based on a resource_resv\n *\n * @param[in] policy - policy info\n * @param[in] sinfo - server info\n * @param[in] resresv - resresv to create resresv set from\n *\n * @return resresv_set *\n * @retval newly created resresv_set\n * @retval NULL on error\n */\nresresv_set *\ncreate_resresv_set_by_resresv(status *policy, server_info *sinfo, resource_resv *resresv)\n{\n\tresresv_set *rset;\n\tif (policy == NULL || resresv == NULL)\n\t\treturn NULL;\n\n\trset = new_resresv_set();\n\tif (rset == NULL)\n\t\treturn NULL;\n\n\tif (resresv->is_job && resresv->job != NULL) {\n\t\tif 
(resresv_set_use_queue(resresv->job->queue))\n\t\t\trset->qinfo = resresv->job->queue;\n\t}\n\n\tif (resresv_set_use_user(sinfo, rset->qinfo))\n\t\trset->user = string_dup(resresv->user.c_str());\n\tif (resresv_set_use_grp(sinfo, rset->qinfo))\n\t\trset->group = string_dup(resresv->group.c_str());\n\tif (resresv_set_use_proj(sinfo, rset->qinfo))\n\t\trset->project = string_dup(resresv->project.c_str());\n\n\trset->select_spec = new selspec(*resresv_set_which_selspec(resresv));\n\tif (rset->select_spec == NULL) {\n\t\tfree_resresv_set(rset);\n\t\treturn NULL;\n\t}\n\trset->place_spec = dup_place(resresv->place_spec);\n\tif (rset->place_spec == NULL) {\n\t\tfree_resresv_set(rset);\n\t\treturn NULL;\n\t}\n\t/* rset->req may be NULL if the intersection of resresv->resreq and policy->equiv_class_resdef is the NULL set */\n\trset->req = dup_selective_resource_req_list(resresv->resreq, policy->equiv_class_resdef);\n\n\treturn rset;\n}\n\n/**\n * @brief find the index of a resresv_set by its component parts\n * @par qinfo, user, group, project, or req can be NULL if the resresv_set does not have one\n * @param[in] policy - policy info\n * @param[in] rsets - resresv_sets to search\n * @param[in] user - user name\n * @param[in] group - group name\n * @param[in] project - project name\n * @param[in] sel - select spec\n * @param[in] pl - place spec\n * @param[in] req - list of resources (i.e., qsub -l)\n * @param[in] qinfo - queue\n * @return int\n * @retval index of resresv if found\n * @retval -1 if not found or on error\n */\nint\nfind_resresv_set(status *policy, resresv_set **rsets, const char *user, const char *group, const char *project, selspec *sel, place *pl, resource_req *req, queue_info *qinfo)\n{\n\tint i;\n\n\tif (rsets == NULL)\n\t\treturn -1;\n\n\tfor (i = 0; rsets[i] != NULL; i++) {\n\t\tif ((qinfo != NULL && rsets[i]->qinfo == NULL) || (qinfo == NULL && rsets[i]->qinfo != NULL))\n\t\t\tcontinue;\n\t\tif ((qinfo != NULL && rsets[i]->qinfo != NULL) && 
qinfo->name != rsets[i]->qinfo->name)\n\t\t\tcontinue;\n\t\tif ((user != NULL && rsets[i]->user == NULL) || (user == NULL && rsets[i]->user != NULL))\n\t\t\tcontinue;\n\t\tif (user != NULL && cstrcmp(user, rsets[i]->user) != 0)\n\t\t\tcontinue;\n\n\t\tif ((group != NULL && rsets[i]->group == NULL) || (group == NULL && rsets[i]->group != NULL))\n\t\t\tcontinue;\n\t\tif (group != NULL && cstrcmp(group, rsets[i]->group) != 0)\n\t\t\tcontinue;\n\n\t\tif ((project != NULL && rsets[i]->project == NULL) || (project == NULL && rsets[i]->project != NULL))\n\t\t\tcontinue;\n\t\tif (project != NULL && cstrcmp(project, rsets[i]->project) != 0)\n\t\t\tcontinue;\n\n\t\tif (compare_selspec(rsets[i]->select_spec, sel) == 0)\n\t\t\tcontinue;\n\t\tif (compare_place(rsets[i]->place_spec, pl) == 0)\n\t\t\tcontinue;\n\t\tif (compare_resource_req_list(rsets[i]->req, req, policy->equiv_class_resdef) == 0)\n\t\t\tcontinue;\n\t\t/* If we got here, we have found our set */\n\t\treturn i;\n\t}\n\treturn -1;\n}\n\n/**\n * @brief find the index of a resresv_set by a resresv inside it\n * @param[in] policy - policy info\n * @param[in] rsets - resresv_set array to search\n * @param[in] resresv - resresv to search for\n * @return int\n * @retval index of resresv's set\n * @retval -1 if not found or on error\n */\nint\nfind_resresv_set_by_resresv(status *policy, resresv_set **rsets, resource_resv *resresv)\n{\n\tconst char *user = NULL;\n\tconst char *grp = NULL;\n\tconst char *proj = NULL;\n\tqueue_info *qinfo = NULL;\n\tselspec *sspec;\n\n\tif (policy == NULL || rsets == NULL || resresv == NULL)\n\t\treturn -1;\n\n\tif (resresv->is_job && resresv->job != NULL)\n\t\tif (resresv_set_use_queue(resresv->job->queue))\n\t\t\tqinfo = resresv->job->queue;\n\n\tif (resresv_set_use_user(resresv->server, qinfo))\n\t\tuser = resresv->user.c_str();\n\n\tif (resresv_set_use_grp(resresv->server, qinfo))\n\t\tgrp = resresv->group.c_str();\n\n\tif (resresv_set_use_proj(resresv->server, qinfo))\n\t\tproj = resresv->project.c_str();\n\n\tsspec = 
resresv_set_which_selspec(resresv);\n\n\treturn find_resresv_set(policy, rsets, user, grp, proj, sspec, resresv->place_spec, resresv->resreq, qinfo);\n}\n\n/**\n * @brief create equivalence classes based on an array of resresvs\n * @param[in] policy - policy info\n * @param[in] sinfo - server universe\n * @return array of equivalence classes (resresv_sets)\n */\nresresv_set **\ncreate_resresv_sets(status *policy, server_info *sinfo)\n{\n\tint i;\n\tint j = 0;\n\tint len;\n\tint rset_len;\n\tresource_resv **resresvs;\n\tresresv_set **rsets;\n\tresresv_set **tmp_rset_arr;\n\tresresv_set *cur_rset;\n\n\tif (policy == NULL || sinfo == NULL)\n\t\treturn NULL;\n\n\tresresvs = sinfo->jobs;\n\n\tlen = count_array(resresvs);\n\trsets = static_cast<resresv_set **>(malloc((len + 1) * sizeof(resresv_set *)));\n\tif (rsets == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\trsets[0] = NULL;\n\n\tfor (i = 0; resresvs[i] != NULL; i++) {\n\t\tauto cur_ind = find_resresv_set_by_resresv(policy, rsets, resresvs[i]);\n\n\t\t/* Didn't find the set, create it.*/\n\t\tif (cur_ind == -1) {\n\t\t\tcur_rset = create_resresv_set_by_resresv(policy, sinfo, resresvs[i]);\n\t\t\tif (cur_rset == NULL) {\n\t\t\t\tfree_resresv_set_array(rsets);\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tcur_ind = j;\n\t\t\trsets[j++] = cur_rset;\n\t\t\trsets[j] = NULL;\n\t\t}\n\t\tresresvs[i]->ec_index = cur_ind;\n\t}\n\n\ttmp_rset_arr = static_cast<resresv_set **>(realloc(rsets, (j + 1) * sizeof(resresv_set *)));\n\tif (tmp_rset_arr != NULL)\n\t\trsets = tmp_rset_arr;\n\trset_len = count_array(rsets);\n\tif (rset_len > 0) {\n\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SCHED, LOG_DEBUG, __func__,\n\t\t\t   \"Number of job equivalence classes: %d\", rset_len);\n\t}\n\n\treturn rsets;\n}\n\n/**\n * @brief\n * \t\tjob_info copy constructor\n *\n * @param[in]\tojinfo\t-\tPointer to JobInfo structure\n * @param[in]\tnqinfo\t-\tQueue Info\n * @param[in]\tsinfo\t-\tServer Info\n *\n * 
@return\tjob_info *\n * @retval\tNULL\t:\twhen function fails to duplicate job_info\n * @retval\t!NULL\t:\tduplicated job_info structure pointer\n */\njob_info *\ndup_job_info(job_info *ojinfo, queue_info *nqinfo, server_info *nsinfo)\n{\n\tjob_info *njinfo;\n\n\tif ((njinfo = new_job_info()) == NULL)\n\t\treturn NULL;\n\n\tnjinfo->queue = nqinfo;\n\tnjinfo->is_queued = ojinfo->is_queued;\n\tnjinfo->is_running = ojinfo->is_running;\n\tnjinfo->is_held = ojinfo->is_held;\n\tnjinfo->is_waiting = ojinfo->is_waiting;\n\tnjinfo->is_transit = ojinfo->is_transit;\n\tnjinfo->is_exiting = ojinfo->is_exiting;\n\tnjinfo->is_userbusy = ojinfo->is_userbusy;\n\tnjinfo->is_begin = ojinfo->is_begin;\n\tnjinfo->is_expired = ojinfo->is_expired;\n\tnjinfo->is_suspended = ojinfo->is_suspended;\n\tnjinfo->is_susp_sched = ojinfo->is_susp_sched;\n\tnjinfo->is_array = ojinfo->is_array;\n\tnjinfo->is_subjob = ojinfo->is_subjob;\n\tnjinfo->is_prerunning = ojinfo->is_prerunning;\n\tnjinfo->can_not_preempt = ojinfo->can_not_preempt;\n\tnjinfo->is_topjob = ojinfo->is_topjob;\n\tnjinfo->topjob_ineligible = ojinfo->topjob_ineligible;\n\tnjinfo->is_checkpointed = ojinfo->is_checkpointed;\n\tnjinfo->is_provisioning = ojinfo->is_provisioning;\n\n\tnjinfo->can_checkpoint = ojinfo->can_checkpoint;\n\tnjinfo->can_requeue = ojinfo->can_requeue;\n\tnjinfo->can_suspend = ojinfo->can_suspend;\n\n\tnjinfo->priority = ojinfo->priority;\n\tnjinfo->etime = ojinfo->etime;\n\tnjinfo->stime = ojinfo->stime;\n\tnjinfo->preempt = ojinfo->preempt;\n\tnjinfo->preempt_status = ojinfo->preempt_status;\n\tnjinfo->peer_sd = ojinfo->peer_sd;\n\tnjinfo->est_start_time = ojinfo->est_start_time;\n\tnjinfo->formula_value = ojinfo->formula_value;\n\tnjinfo->est_execvnode = string_dup(ojinfo->est_execvnode);\n\tnjinfo->job_name = string_dup(ojinfo->job_name);\n\tnjinfo->comment = string_dup(ojinfo->comment);\n\tnjinfo->resv_id = string_dup(ojinfo->resv_id);\n\tnjinfo->alt_id = string_dup(ojinfo->alt_id);\n\n\tif (ojinfo->resv 
!= NULL) {\n\t\tnjinfo->resv = find_resource_resv_by_indrank(nqinfo->server->resvs,\n\t\t\t\t\t\t\t     ojinfo->resv->resresv_ind, ojinfo->resv->rank);\n\t}\n\n\tnjinfo->resused = dup_resource_req_list(ojinfo->resused);\n\n\tnjinfo->array_index = ojinfo->array_index;\n\tnjinfo->array_id = ojinfo->array_id;\n\tnjinfo->queued_subjobs = dup_range_list(ojinfo->queued_subjobs);\n\tnjinfo->max_run_subjobs = ojinfo->max_run_subjobs;\n\n\tnjinfo->resreleased = dup_nspecs(ojinfo->resreleased, nsinfo->nodes, NULL);\n\tnjinfo->resreq_rel = dup_resource_req_list(ojinfo->resreq_rel);\n\n\tif (nqinfo->server->fstree != NULL) {\n\t\tnjinfo->ginfo = find_group_info(ojinfo->ginfo->name,\n\t\t\t\t\t\tnqinfo->server->fstree->root);\n\t} else\n\t\tnjinfo->ginfo = NULL;\n\n\tnjinfo->depend_job_str = string_dup(ojinfo->depend_job_str);\n\n#ifdef RESC_SPEC\n\tnjinfo->rspec = dup_rescspec(ojinfo->rspec);\n#endif\n\n#ifdef NAS\n\t/* localmod 045 */\n\tnjinfo->NAS_pri = ojinfo->NAS_pri;\n\t/* localmod 034 */\n\tnjinfo->sh_amts = site_dup_share_amts(ojinfo->sh_amts);\n\tnjinfo->sh_info = ojinfo->sh_info;\n\tnjinfo->accrue_rate = ojinfo->accrue_rate;\n\t/* localmod 040 */\n\tnjinfo->nodect = ojinfo->nodect;\n\t/* localmod 031 */\n\tnjinfo->schedsel = string_dup(ojinfo->schedsel);\n\t/* localmod 053 */\n\tnjinfo->u_info = ojinfo->u_info;\n#endif\n\n\treturn njinfo;\n}\n\n/**\n * @brief\n * \t\tfilter function used with resource_resv_filter to\n *        create a limited running job set for use with preemption.\n *        If there are multiple resources found in preempt_targets\n *        the scheduler will select a preemptable job which satisfies\n *        any one of them.\n *\n * @see\tresource_resv_filter()\n *\n * @param[in]\tjob\t-\tjob to consider for inclusion\n * @param[in]\targ\t-\tattribute=value pairs criteria for inclusion\n *\n * @retval\tint\n * @return\t1\t: If job falls into one of the preempt_targets\n * @return\t0\t: If job does not fall into any of the preempt_targets\n 
*/\nint\npreempt_job_set_filter(resource_resv *job, const void *arg)\n{\n\tresource_req *req;\n\tchar **arglist;\n\tchar *p;\n\tchar *dot;\n\tint i;\n\n\tif (job == NULL || arg == NULL || job->job == NULL ||\n\t    job->job->queue == NULL || job->job->is_running != 1)\n\t\treturn 0;\n\n\targlist = (char **) arg;\n\n\tfor (i = 0; arglist[i] != NULL; i++) {\n\t\tp = strpbrk(arglist[i], \".=\");\n\t\tif (p != NULL) {\n\t\t\t/* two valid attributes: queue and Resource_List.<res> */\n\t\t\tif (!strncasecmp(arglist[i], ATTR_queue, p - arglist[i])) {\n\t\t\t\tif (job->job->queue->name == p + 1)\n\t\t\t\t\treturn 1;\n\t\t\t} else if (!strncasecmp(arglist[i], ATTR_l, p - arglist[i])) {\n\t\t\t\tdot = p;\n\t\t\t\tp = strpbrk(arglist[i], \"=\");\n\t\t\t\tif (p == NULL)\n\t\t\t\t\treturn 0;\n\t\t\t\telse {\n\t\t\t\t\t*p = '\\0';\n\t\t\t\t\treq = find_resource_req_by_str(job->resreq, dot + 1);\n\t\t\t\t\t*p = '=';\n\t\t\t\t\tif (req != NULL) {\n\t\t\t\t\t\tif (!strcmp(req->res_str, p + 1))\n\t\t\t\t\t\t\treturn 1;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *  get_job_req_used_time - get a running job's req and used time for preemption\n *\n * @param[in]\tpjob - the job in question\n * @param[out]\trtime - return pointer to the requested time\n * @param[out]\tutime - return pointer to the used time\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t1 for error\n */\nstatic int\nget_job_req_used_time(resource_resv *pjob, int *rtime, int *utime)\n{\n\tresource_req *req;  /* the jobs requested soft_walltime/walltime/cput */\n\tresource_req *used; /* the amount of the walltime/cput used */\n\n\tif (pjob == NULL || pjob->job == NULL || !pjob->job->is_running || rtime == NULL || utime == NULL)\n\t\treturn 1;\n\n\treq = find_resource_req(pjob->resreq, allres[\"soft_walltime\"]);\n\n\tif (req == NULL)\n\t\treq = find_resource_req(pjob->resreq, allres[\"walltime\"]);\n\n\tif (req == NULL) {\n\t\treq = find_resource_req(pjob->resreq, 
allres[\"cput\"]);\n\t\tused = find_resource_req(pjob->job->resused, allres[\"cput\"]);\n\t} else\n\t\tused = find_resource_req(pjob->job->resused, allres[\"walltime\"]);\n\n\tif (req != NULL && used != NULL) {\n\t\t*rtime = req->amount;\n\t\t*utime = used->amount;\n\t} else {\n\t\t*rtime = -1;\n\t\t*utime = -1;\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n *  schd_get_preempt_order - deduce the preemption ordering to be used for a job\n *\n * @param[in]\tresresv\t-\tthe job to preempt\n *\n * @return\tstruct preempt_ordering *\t: array containing the preemption order\n *\n */\nstruct preempt_ordering *\nschd_get_preempt_order(resource_resv *resresv)\n{\n\tstruct preempt_ordering *po = NULL;\n\tint req = -1;\n\tint used = -1;\n\n\tif (get_job_req_used_time(resresv, &req, &used) != 0)\n\t\treturn NULL;\n\n\tpo = get_preemption_order(sc_attrs.preempt_order, req, used);\n\n\treturn po;\n}\n\n/**\n * @brief\n * \t\tfind the jobs to preempt and then preempt them\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tpbs_sd\t-\tcommunication descriptor to the PBS server\n * @param[in]\thjob\t-\tthe high priority job\n * @param[in]\tsinfo\t-\tthe server to find jobs to preempt\n * @param[out]\terr\t-\tschd_error to return error from runjob\n *\n * @return\tint\n * @retval\t1\t: success\n * @retval  0\t: failure\n * @retval -1\t: error\n *\n */\nint\nfind_and_preempt_jobs(status *policy, int pbs_sd, resource_resv *hjob, server_info *sinfo, schd_error *err)\n{\n\n\tint i = 0;\n\tint *jobs = NULL;\n\tresource_resv *job = NULL;\n\tint done = 0;\n\tint rc = 1;\n\tint *preempted_list = NULL;\n\tint preempted_count = 0;\n\tint *fail_list = NULL;\n\tint fail_count = 0;\n\tint num_tries = 0;\n\tint no_of_jobs = 0;\n\tchar **preempt_jobs_list = NULL;\n\tpreempt_job_info *preempt_jobs_reply = NULL;\n\n\t/* jobs with AOE cannot preempt (at least for now) */\n\tif (hjob->aoename != NULL)\n\t\treturn 0;\n\n\t/* using calloc - 
saves the trouble of putting NULL at the end of the list */\n\tif ((preempted_list = static_cast<int *>(calloc((sinfo->sc.running + 1), sizeof(int)))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn -1;\n\t}\n\n\tif ((fail_list = static_cast<int *>(calloc((sinfo->sc.running + 1), sizeof(int)))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\tfree(preempted_list);\n\t\treturn -1;\n\t}\n\n\t/* loop until done is true, i.e., all selected jobs are truly preempted,\n\t * or we can't find enough jobs to preempt,\n\t * or the maximum number of tries has been exhausted\n\t */\n\twhile (!done &&\n\t       ((jobs = find_jobs_to_preempt(policy, hjob, sinfo, fail_list, &no_of_jobs)) != NULL) &&\n\t       num_tries < MAX_PREEMPT_RETRIES) {\n\t\tdone = 1;\n\n\t\tif ((preempt_jobs_list = static_cast<char **>(calloc(no_of_jobs + 1, sizeof(char *)))) == NULL) {\n\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\tfree(preempted_list);\n\t\t\tfree(fail_list);\n\t\t\treturn -1;\n\t\t}\n\n\t\tfor (i = 0; i < no_of_jobs; i++) {\n\t\t\tjob = find_resource_resv_by_indrank(sinfo->running_jobs, -1, jobs[i]);\n\t\t\tif (job != NULL) {\n\t\t\t\tif ((preempt_jobs_list[i] = string_dup(job->name.c_str())) == NULL) {\n\t\t\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\t\t\t/* free_string_array() frees the array itself as well as its elements */\n\t\t\t\t\tfree_string_array(preempt_jobs_list);\n\t\t\t\t\tfree(preempted_list);\n\t\t\t\t\tfree(fail_list);\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif ((preempt_jobs_reply = send_preempt_jobs(pbs_sd, preempt_jobs_list)) == NULL) {\n\t\t\tfree_string_array(preempt_jobs_list);\n\t\t\tfree(preempted_list);\n\t\t\tfree(fail_list);\n\t\t\treturn -1;\n\t\t}\n\n\t\tfor (i = 0; i < no_of_jobs; i++) {\n\t\t\tjob = find_resource_resv(sinfo->running_jobs, preempt_jobs_reply[i].job_id);\n\t\t\tif (job == NULL) {\n\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG, preempt_jobs_reply[i].job_id,\n\t\t\t\t\t  \"Server replied to preemption request with a job which does 
not exist.\");\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tif (preempt_jobs_reply[i].order[0] == '0') {\n\t\t\t\tdone = 0;\n\t\t\t\tfail_list[fail_count++] = job->rank;\n\t\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_JOB, LOG_INFO, job->name, \"Job failed to be preempted\");\n\t\t\t} else {\n\t\t\t\tint update_accrue_type = 1;\n\t\t\t\tpreempted_list[preempted_count++] = job->rank;\n\t\t\t\tif (preempt_jobs_reply[i].order[0] == 'S') {\n\t\t\t\t\tif (!policy->rel_on_susp.empty()) {\n\t\t\t\t\t\t/* Set resources_released and execselect on the job */\n\t\t\t\t\t\tcreate_res_released(policy, job);\n\t\t\t\t\t}\n\t\t\t\t\tupdate_universe_on_end(policy, job, \"S\", NO_FLAGS);\n\t\t\t\t\tjob->job->is_susp_sched = 1;\n\t\t\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t\t  job->name, \"Job preempted by suspension\");\n\t\t\t\t\t/* Since suspended job is not part of its current equivalence class,\n\t\t\t\t\t * break the job's association with its equivalence class.\n\t\t\t\t\t */\n\t\t\t\t\tjob->ec_index = UNSPECIFIED;\n\t\t\t\t} else if (preempt_jobs_reply[i].order[0] == 'C') {\n\t\t\t\t\tjob->job->is_checkpointed = 1;\n\t\t\t\t\tupdate_universe_on_end(policy, job, \"Q\", NO_FLAGS);\n\t\t\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t\t  job->name, \"Job preempted by checkpointing\");\n\t\t\t\t\t/* Since checkpointed job is not part of its current equivalence class,\n\t\t\t\t\t * break the job's association with its equivalence class.\n\t\t\t\t\t */\n\t\t\t\t\tjob->ec_index = UNSPECIFIED;\n\t\t\t\t} else if (preempt_jobs_reply[i].order[0] == 'Q') {\n\t\t\t\t\tupdate_universe_on_end(policy, job, \"Q\", NO_FLAGS);\n\t\t\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t\t  job->name, \"Job preempted by requeuing\");\n\t\t\t\t} else {\n\t\t\t\t\tupdate_universe_on_end(policy, job, \"X\", NO_FLAGS);\n\t\t\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t\t  job->name, \"Job preempted by 
deletion\");\n\t\t\t\t\tjob->can_not_run = 1;\n\t\t\t\t\tupdate_accrue_type = 0;\n\t\t\t\t}\n\t\t\t\tif (update_accrue_type)\n\t\t\t\t\tupdate_accruetype(pbs_sd, sinfo, ACCRUE_MAKE_ELIGIBLE, SUCCESS, job);\n\t\t\t\tjob->job->is_preempted = 1;\n\t\t\t\tjob->job->time_preempted = sinfo->server_time;\n\t\t\t\tsinfo->num_preempted++;\n\t\t\t}\n\t\t}\n\n\t\tfree(jobs);\n\t\tfree_string_array(preempt_jobs_list);\n\t\tfree(preempt_jobs_reply);\n\t\tnum_tries++;\n\t}\n\n\tif (done) {\n\t\tclear_schd_error(err);\n\t\tauto ret = run_update_job(policy, pbs_sd, sinfo, hjob->job->queue, hjob, RURR_ADD_END_EVENT, err);\n\n\t\t/* oops... we screwed up.. the high priority job didn't run.  Forget about\n\t\t * running it now and resume preempted work\n\t\t */\n\t\tif (!ret) {\n\t\t\tschd_error *serr;\n\t\t\tserr = new_schd_error();\n\t\t\tif (serr == NULL) {\n\t\t\t\tfree(preempted_list);\n\t\t\t\tfree(fail_list);\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG, hjob->name,\n\t\t\t\t  \"Preempted work didn't run job - rerun it\");\n\t\t\tfor (i = 0; i < preempted_count; i++) {\n\t\t\t\tjob = find_resource_resv_by_indrank(sinfo->jobs, -1, preempted_list[i]);\n\t\t\t\tif (job != NULL && !job->job->is_running) {\n\t\t\t\t\tclear_schd_error(serr);\n\t\t\t\t\tif (run_update_job(policy, pbs_sd, sinfo, job->job->queue, job, RURR_NO_FLAGS, serr) == 0) {\n\t\t\t\t\t\tschdlogerr(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG, job->name, \"Failed to rerun job:\", serr);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\trc = 0;\n\t\t\tfree_schd_error_list(serr);\n\t\t}\n\t} else if (num_tries == MAX_PREEMPT_RETRIES) {\n\t\trc = 0;\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG, hjob->name,\n\t\t\t  \"Maximum number of preemption tries exceeded - cannot run job\");\n\t} else\n\t\trc = 0;\n\n\tfree(preempted_list);\n\tfree(fail_list);\n\treturn rc;\n}\n\n/**\n * @brief\n * \t\tfind jobs to preempt in order to run a high priority job.\n *        
First we'll check whether preempting work can help the reason the\n *        job can't run (e.g., it can't if the job won't run because of dedtime);\n *        then we'll simulate preempting jobs to find a list which will work.\n *        We will then go back through the list to find if any work doesn't\n *        need to be preempted.  Finally we'll return the list if we found\n *        one, NULL if not.\n *\n * @param[in]\tpolicy\t\t-\tpolicy info\n * @param[in]\thjob\t\t-\tthe high priority job\n * @param[in]\tsinfo\t\t-\tthe server of the jobs to preempt\n * @param[in]\tfail_list\t-\tlist of jobs for which preemption has failed;\n *\t\t\t\t \tdo not attempt to preempt them again\n * @param[out]\tno_of_jobs\t-\tnumber of jobs in the list being returned\n *\n * @return\tint *\n * @retval\tarray of job ranks to preempt\n * @retval\tNULL\t: error/no jobs\n * @par NOTE:\treturned array is allocated with malloc() --  needs freeing\n *\n */\nint *\nfind_jobs_to_preempt(status *policy, resource_resv *hjob, server_info *sinfo, int *fail_list, int *no_of_jobs)\n{\n\tint i;\n\tint j = 0;\n\tint has_lower_jobs = 0; /* there are jobs of a lower preempt priority */\n\tunsigned int prev_prio; /* jinfo's preempt field before simulation */\n\tserver_info *nsinfo;\n\tstatus *npolicy;\n\tresource_resv **rjobs = NULL; /* the running jobs to choose from */\n\tresource_resv **pjobs = NULL; /* jobs to preempt */\n\tresource_resv **rjobs_subset = NULL;\n\tint *pjobs_list = NULL;\t     /* list of job ids */\n\tresource_resv *nhjob = NULL; /* pointer to high priority job from duplicated universe */\n\tresource_resv *pjob = NULL;\n\tint rc = 0;\n\tint retval = 0;\n\tchar log_buf[MAX_LOG_SIZE];\n\tschd_error *err = NULL;\n\n\tenum sched_error_code old_errorcode = SUCCESS;\n\tresdef *old_rdef = NULL;\n\tlong indexfound;\n\tlong skipto;\n\tint filter_again = 0;\n\n\tschd_error *full_err = NULL;\n\tschd_error *cur_err = NULL;\n\n\tresource_req *preempt_targets_req = NULL;\n\tchar **preempt_targets_list = 
NULL;\n\tresource_resv **prjobs = NULL;\n\tint rjobs_count = 0;\n\n\t*no_of_jobs = 0;\n\tif (hjob == NULL || sinfo == NULL)\n\t\treturn NULL;\n\n\t/* if the job is in an express queue and there are multiple express queues,\n\t * we need to see if there are any running jobs that we can preempt.  All\n\t * express queues fall into the same preempt level but have different\n\t * preempt priorities.\n\t */\n\tif ((hjob->job->preempt_status & PREEMPT_TO_BIT(PREEMPT_EXPRESS)) &&\n\t    sinfo->has_mult_express) {\n\t\tfor (i = 0; sinfo->running_jobs[i] != NULL && !has_lower_jobs; i++)\n\t\t\tif (sinfo->running_jobs[i]->job->preempt < hjob->job->preempt)\n\t\t\t\thas_lower_jobs = TRUE;\n\t} else {\n\t\tfor (i = 0; i < NUM_PPRIO && !has_lower_jobs; i++)\n\t\t\tif (sc_attrs.preempt_prio[i][1] < hjob->job->preempt &&\n\t\t\t    sinfo->preempt_count[i] > 0)\n\t\t\t\thas_lower_jobs = TRUE;\n\t}\n\n\tif (has_lower_jobs == FALSE)\n\t\treturn NULL;\n\n\t/* we increment cstat.preempt_attempts when we check; if we only checked\n\t * cstat.preempt_attempts > conf.max_preempt_attempts, we would actually\n\t * attempt to preempt conf.max_preempt_attempts + 1 times\n\t */\n\tif (conf.max_preempt_attempts != SCHD_INFINITY) {\n\t\tif (cstat.preempt_attempts >= conf.max_preempt_attempts) {\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, hjob->name,\n\t\t\t\t  \"Not attempting to preempt: over max cycle preempt limit\");\n\t\t\treturn NULL;\n\t\t} else\n\t\t\tcstat.preempt_attempts++;\n\t}\n\n\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, hjob->name,\n\t\t  \"Employing preemption to try and run high priority job.\");\n\n\t/* Let's get all the reasons the job won't run now.\n\t * This will help us find the set of jobs to preempt\n\t */\n\n\tfull_err = new_schd_error();\n\tif (full_err == NULL) {\n\t\treturn NULL;\n\t}\n\n\tauto ns_arr = is_ok_to_run(policy, sinfo, hjob->job->queue, hjob, RETURN_ALL_ERR, full_err);\n\n\t/* This should be NULL, but just in case 
*/\n\tfree_nspecs(ns_arr);\n\n\t/* If a job can't run due to any of these reasons, no amount of preemption will help */\n\tfor (cur_err = full_err; cur_err != NULL; cur_err = cur_err->next) {\n\t\tint cant_preempt = 0;\n\t\tswitch ((int) cur_err->error_code) {\n\t\t\tcase SCHD_ERROR:\n\t\t\tcase NOT_QUEUED:\n\t\t\tcase QUEUE_NOT_STARTED:\n\t\t\tcase QUEUE_NOT_EXEC:\n\t\t\tcase DED_TIME:\n\t\t\tcase CROSS_DED_TIME_BOUNDRY:\n\t\t\tcase PRIME_ONLY:\n\t\t\tcase NONPRIME_ONLY:\n\t\t\tcase CROSS_PRIME_BOUNDARY:\n\t\t\tcase NODE_NONEXISTENT:\n\t\t\tcase CANT_SPAN_PSET:\n\t\t\tcase RESERVATION_INTERFERENCE:\n\t\t\tcase PROV_DISABLE_ON_SERVER:\n\t\t\tcase MAX_RUN_SUBJOBS:\n\t\t\t\tcant_preempt = 1;\n\t\t\t\tbreak;\n\t\t}\n\t\tif (cur_err->status_code == NEVER_RUN)\n\t\t\tcant_preempt = 1;\n\t\tif (cant_preempt) {\n\t\t\ttranslate_fail_code(cur_err, NULL, log_buf);\n\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG, hjob->name,\n\t\t\t\t   \"Preempt: Can not preempt to run job: %s\", log_buf);\n\t\t\tfree_schd_error_list(full_err);\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\tif ((pjobs = static_cast<resource_resv **>(malloc(sizeof(resource_resv *) * (sinfo->sc.running + 1)))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\tfree_schd_error_list(full_err);\n\t\treturn NULL;\n\t}\n\tpjobs[0] = NULL;\n\n\tif (sc_attrs.preempt_targets_enable) {\n\t\tpreempt_targets_req = find_resource_req(hjob->resreq, allres[\"preempt_targets\"]);\n\t\tif (preempt_targets_req != NULL) {\n\n\t\t\tpreempt_targets_list = break_comma_list(preempt_targets_req->res_str);\n\t\t\tretval = check_preempt_targets_for_none(preempt_targets_list);\n\t\t\tif (retval == PREEMPT_NONE) {\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, hjob->name,\n\t\t\t\t\t  \"No preemption set specified for the job: Job will not preempt\");\n\t\t\t\tfree_schd_error_list(full_err);\n\t\t\t\tfree(pjobs);\n\t\t\t\tfree_string_array(preempt_targets_list);\n\t\t\t\treturn 
NULL;\n\t\t\t}\n\t\t}\n\t}\n\n\t/* use locally dup'd copy of sinfo so we don't modify the original */\n\ttry {\n\t\tnsinfo = new server_info(*sinfo);\n\t} catch (std::exception &e) {\n\t\tfree_schd_error_list(full_err);\n\t\tfree(pjobs);\n\t\tfree_string_array(preempt_targets_list);\n\t\treturn NULL;\n\t}\n\tnpolicy = nsinfo->policy;\n\tnhjob = find_resource_resv_by_indrank(nsinfo->jobs, hjob->resresv_ind, hjob->rank);\n\tprev_prio = nhjob->job->preempt;\n\n\tif (sc_attrs.preempt_targets_enable) {\n\t\tif (preempt_targets_req != NULL) {\n\t\t\tprjobs = resource_resv_filter(nsinfo->running_jobs,\n\t\t\t\t\t\t      count_array(nsinfo->running_jobs),\n\t\t\t\t\t\t      preempt_job_set_filter,\n\t\t\t\t\t\t      (void *) preempt_targets_list, NO_FLAGS);\n\t\t\tfree_string_array(preempt_targets_list);\n\t\t}\n\t}\n\n\tif (prjobs != NULL) {\n\t\trjobs = prjobs;\n\t\trjobs_count = count_array(prjobs);\n\t\tif (rjobs_count > 0) {\n\t\t\tlog_eventf(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, nhjob->name,\n\t\t\t\t   \"Limited running jobs used for preemption from %d to %d\", nsinfo->sc.running, rjobs_count);\n\t\t} else {\n\t\t\tlog_eventf(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, nhjob->name,\n\t\t\t\t   \"Limited running jobs used for preemption from %d to 0: No jobs to preempt\", nsinfo->sc.running);\n\t\t\tpjobs_list = NULL;\n\t\t\tgoto cleanup;\n\t\t}\n\n\t} else {\n\t\trjobs = nsinfo->running_jobs;\n\t\trjobs_count = nsinfo->sc.running;\n\t}\n\n\t/* sort jobs in ascending preemption priority and starttime... we want to preempt them\n\t * from lowest prio to highest\n\t */\n\tif (sc_attrs.preempt_sort == PS_MIN_T_SINCE_START)\n\t\tqsort(rjobs, rjobs_count, sizeof(job_info *), cmp_preempt_stime_asc);\n\telse {\n\t\t/* sort jobs in ascending preemption priority... 
we want to preempt them\n\t\t * from lowest prio to highest\n\t\t */\n\t\tqsort(rjobs, rjobs_count, sizeof(job_info *), cmp_preempt_priority_asc);\n\t}\n\n\terr = dup_schd_error(full_err); /* only first element */\n\tif (err == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\tpjobs_list = NULL;\n\t\tgoto cleanup;\n\t}\n\n\trjobs_subset = filter_preemptable_jobs(rjobs, nhjob, err);\n\tif (rjobs_subset == NULL) {\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_INFO, nhjob->name, \"Found no preemptable candidates\");\n\t\tpjobs_list = NULL;\n\t\tgoto cleanup;\n\t}\n\n\tskipto = 0;\n\twhile ((indexfound = select_index_to_preempt(npolicy, nhjob, rjobs_subset, skipto, err, fail_list)) != NO_JOB_FOUND) {\n\t\tstruct preempt_ordering *po;\n\t\tint dont_preempt_job = 0;\n\t\tint ind = 0;\n\n\t\tif (indexfound == ERR_IN_SELECT) {\n\t\t\t/* System error occurred, no need to proceed */\n\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\tpjobs_list = NULL;\n\t\t\tgoto cleanup;\n\t\t}\n\t\tpjob = rjobs_subset[indexfound];\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->name,\n\t\t\t  \"Simulation: preempting job\");\n\n\t\tpo = schd_get_preempt_order(pjob);\n\t\tif (po != NULL) {\n\t\t\tif (!policy->rel_on_susp.empty() && po->order[0] == PREEMPT_METHOD_SUSPEND && pjob->job->can_suspend) {\n\t\t\t\tpjob->job->resreleased = create_res_released_array(npolicy, pjob);\n\t\t\t\tpjob->job->resreq_rel = create_resreq_rel_list(npolicy, pjob);\n\t\t\t}\n\t\t}\n\n\t\tupdate_universe_on_end(npolicy, pjob, \"S\", NO_ALLPART);\n\t\trjobs_count--;\n\t\t/* Check if any of the previously preempted jobs increased its preemption priority to be more than that of the\n\t\t * high priority job\n\t\t */\n\t\tfor (ind = 0; pjobs[ind] != NULL; ind++) {\n\t\t\tif (pjobs[ind]->job->preempt > nhjob->job->preempt) {\n\t\t\t\tdont_preempt_job = 1;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\t/* Check if the job we just ended increases its preemption priority to be more than the high 
priority job.\n\t\t * If so, don't preempt this job\n\t\t */\n\t\tif (dont_preempt_job || pjob->job->preempt > nhjob->job->preempt) {\n\t\t\tremove_resresv_from_array(rjobs_subset, pjob);\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->name,\n\t\t\t\t  \"Preempting job will escalate its priority or priority of other jobs, not preempting it\");\n\t\t\tif (sim_run_update_resresv(npolicy, pjob, ns_arr, NO_ALLPART) != 1) {\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_INFO, nhjob->name,\n\t\t\t\t\t  \"Trouble finding preemptable candidates\");\n\t\t\t\tpjobs_list = NULL;\n\t\t\t\tgoto cleanup;\n\t\t\t}\n\t\t\tif (indexfound > 0)\n\t\t\t\tskipto = indexfound - 1;\n\t\t\telse\n\t\t\t\tskipto = 0;\n\t\t\tcontinue;\n\t\t}\n\n\t\tif (pjob->end_event != NULL)\n\t\t\tdelete_event(nsinfo, pjob->end_event);\n\n\t\tpjobs[j++] = pjob;\n\t\tpjobs[j] = NULL;\n\n\t\told_errorcode = err->error_code;\n\t\tif (err->rdef != NULL) {\n\t\t\told_rdef = err->rdef;\n\t\t} else\n\t\t\told_rdef = NULL;\n\n\t\tclear_schd_error(err);\n\t\tns_arr = is_ok_to_run(npolicy, nsinfo, nhjob->job->queue, nhjob, NO_ALLPART, err);\n\t\tif (!ns_arr.empty()) {\n\t\t\t/* Normally when running a subjob, we do not care about the subjob. We just care that it successfully runs.\n\t\t\t * We allow run_update_job() to enqueue and run the subjob.  In this case, we need to act upon the\n\t\t\t * subjob after it runs.  
To handle this case, we enqueue it first, then we run it.\n\t\t\t */\n\t\t\tif (nhjob->job->is_array) {\n\t\t\t\tresource_resv *nj;\n\t\t\t\tnj = queue_subjob(nhjob, nsinfo, nhjob->job->queue);\n\n\t\t\t\tif (nj == NULL) {\n\t\t\t\t\tpjobs_list = NULL;\n\t\t\t\t\tgoto cleanup;\n\t\t\t\t}\n\t\t\t\tnhjob = nj;\n\t\t\t}\n\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, nhjob->name,\n\t\t\t\t  \"Simulation: Preempted enough work to run job\");\n\t\t\trc = sim_run_update_resresv(npolicy, nhjob, ns_arr, NO_ALLPART);\n\t\t\tbreak;\n\t\t} else if (old_errorcode == err->error_code) {\n\t\t\tif (err->rdef != NULL) {\n\t\t\t\t/* If the error code matches, make sure the resource definition is also matching.\n\t\t\t\t * If the definition does not match, that means the error is because of some other\n\t\t\t\t * resource and we need to filter again.\n\t\t\t\t * Otherwise, we just set skipto to the index of the job that was last seen\n\t\t\t\t * by the select_index_to_preempt function. This will make select_index_to_preempt\n\t\t\t\t * start looking at the jobs from the place it left off in the previous call.\n\t\t\t\t */\n\t\t\t\tif (old_rdef != err->rdef)\n\t\t\t\t\tfilter_again = 1;\n\t\t\t\telse {\n\t\t\t\t\tskipto = indexfound;\n\t\t\t\t\tfilter_again = 0;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tskipto = indexfound;\n\t\t\t\tfilter_again = 0;\n\t\t\t}\n\t\t} else {\n\t\t\t/* error changed, so we need to revisit jobs discarded as preemption candidates earlier */\n\t\t\tfilter_again = 1;\n\t\t}\n\n\t\tif (filter_again == 1) {\n\t\t\tfree(rjobs_subset);\n\t\t\trjobs_subset = filter_preemptable_jobs(rjobs, nhjob, err);\n\t\t\tif (rjobs_subset == NULL) {\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_INFO, nhjob->name, \"Found no preemptable candidates\");\n\t\t\t\tpjobs_list = NULL;\n\t\t\t\tgoto cleanup;\n\t\t\t}\n\t\t\tfilter_again = 0;\n\t\t\tskipto = 0;\n\t\t}\n\n\t\ttranslate_fail_code(err, NULL, log_buf);\n\t\tlog_eventf(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, 
LOG_DEBUG, nhjob->name,\n\t\t\t   \"Simulation: not enough work preempted: %s\", log_buf);\n\t}\n\n\tpjobs[j] = NULL;\n\tif (rjobs_subset != NULL)\n\t\tfree(rjobs_subset);\n\n\t/* check to see if we lowered our preempt priority in our simulation.\n\t * If we have, then punt and don't run the job.\n\t */\n\tif (prev_prio > nhjob->job->preempt) {\n\t\trc = 0;\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, nhjob->name,\n\t\t\t  \"Job not run because it would immediately be preemptable.\");\n\t}\n\n\t/* Right now we have a list of jobs we know will create enough space.  It\n\t * might preempt too much work.  We need to determine if each job is\n\t * still needed.\n\t *\n\t * We look to see if jobs are similar to the high priority job (preemption_similarity())\n\t * or we try to rerun them in the simulated universe.\n\t * If we can run them or the jobs aren't similar, then we don't have to\n\t * preempt them.  We will go backwards from the end of the list because we\n\t * started preempting with the lowest priority jobs.\n\t */\n\n\tif (rc > 0) {\n\t\tif ((pjobs_list = static_cast<int *>(calloc((j + 1), sizeof(int)))) == NULL) {\n\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\tgoto cleanup;\n\t\t}\n\n\t\tfor (j--, i = 0; j >= 0; j--) {\n\t\t\tint remove_job = 0;\n\t\t\tclear_schd_error(err);\n\t\t\tif (preemption_similarity(nhjob, pjobs[j], full_err) == 0) {\n\t\t\t\tremove_job = 1;\n\t\t\t} else {\n\t\t\t\tns_arr = is_ok_to_run(npolicy, nsinfo, pjobs[j]->job->queue, pjobs[j], NO_ALLPART, err);\n\t\t\t\tif (!ns_arr.empty()) {\n\t\t\t\t\tremove_job = 1;\n\t\t\t\t\tsim_run_update_resresv(npolicy, pjobs[j], ns_arr, NO_ALLPART);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (remove_job) {\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\t\t  pjobs[j]->name, \"Simulation: preemption of job not needed.\");\n\t\t\t\tremove_resresv_from_array(pjobs, pjobs[j]);\n\n\t\t\t} else {\n\t\t\t\tpjobs_list[i] = 
pjobs[j]->rank;\n\t\t\t\ti++;\n\t\t\t}\n\t\t}\n\n\t\tpjobs_list[i] = 0;\n\t\t/* i == 0 means we removed all the jobs: Should not happen */\n\t\tif (i == 0) {\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, nhjob->name,\n\t\t\t\t  \"Simulation Error: All jobs removed from preemption list\");\n\t\t} else\n\t\t\t*no_of_jobs = i;\n\t}\ncleanup:\n\tdelete nsinfo;\n\tfree(pjobs);\n\tfree(prjobs);\n\tfree_schd_error_list(full_err);\n\tfree_schd_error(err);\n\n\treturn pjobs_list;\n}\n\n/**\n * @brief\n *\t\tselect a good candidate for preemption\n *\n * @param[in] policy - policy info\n * @param[in] hjob - the high priority job to preempt for\n * @param[in] rjobs - the list of running jobs to select from\n * @param[in] skipto - Index from where we need to start looking into rjobs\n * @param[in] err    - reason the high prio job isn't running\n * @param[in] fail_list - list of jobs to skip. They previously failed to be preempted.\n *\t\t\t  Do not select them again.\n *\n * @return long\n * @retval index of the job to preempt\n * @retval NO_JOB_FOUND nothing can be selected for preemption\n * @retval ERR_IN_SELECT error\n */\nlong\nselect_index_to_preempt(status *policy, resource_resv *hjob,\n\t\t\tresource_resv **rjobs, long skipto, schd_error *err,\n\t\t\tint *fail_list)\n{\n\tint i, j, k;\n\tint good = 1; /* good boolean: Is job eligible to be preempted */\n\tstruct preempt_ordering *po;\n\n\tif (err == NULL || hjob == NULL || hjob->job == NULL ||\n\t    rjobs == NULL || rjobs[0] == NULL)\n\t\treturn NO_JOB_FOUND;\n\n\t/* This shouldn't happen, but you can never be too paranoid */\n\tif (hjob->job->is_running && hjob->ninfo_arr == NULL)\n\t\treturn NO_JOB_FOUND;\n\n\t/* if we find a good job, we'll break out at the bottom.\n\t * We can't break out up here since i will have been incremented by this point\n\t * and we'd be returning the job AFTER the one we want.\n\t */\n\tfor (i = skipto; rjobs[i] != NULL; i++) {\n\t\t/* Does the running job have any 
resource we need? */\n\t\tint node_good = 1;\n\n\t\t/* let's be optimistic... we'll start off assuming this is a good candidate */\n\t\tgood = 1;\n\n\t\t/* if hjob hit a hard limit, check if the candidate job has requested that resource;\n\t\t * if the reason is different, then set the flag as if the resource was found\n\t\t */\n\n\t\tif (rjobs[i]->job == NULL || rjobs[i]->ninfo_arr == NULL)\n\t\t\tcontinue; /* we have problems... */\n\n\t\tif (!rjobs[i]->job->is_running)\n\t\t\t/* Only running jobs have resources allocated to them.\n\t\t\t * Only they are eligible to be preempted.\n\t\t\t */\n\t\t\tgood = 0;\n\n\t\tif (rjobs[i]->job->is_provisioning)\n\t\t\tgood = 0; /* provisioning job cannot be preempted */\n\n\t\tif (good) {\n\t\t\tif (rjobs[i]->job->can_not_preempt ||\n\t\t\t    rjobs[i]->job->preempt >= hjob->job->preempt)\n\t\t\t\tgood = 0;\n\t\t}\n\n\t\tif (good) {\n\t\t\tfor (j = 0; fail_list[j] != 0; j++) {\n\t\t\t\tif (fail_list[j] == rjobs[i]->rank) {\n\t\t\t\t\tgood = 0;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif (good) {\n\t\t\t/* get the preemption order to be used for this job */\n\t\t\tpo = schd_get_preempt_order(rjobs[i]);\n\n\t\t\t/* check whether chosen order is enabled for this job */\n\t\t\tfor (j = 0; j < PREEMPT_METHOD_HIGH; j++) {\n\t\t\t\tif (po->order[j] == PREEMPT_METHOD_SUSPEND &&\n\t\t\t\t    rjobs[i]->job->can_suspend)\n\t\t\t\t\tbreak; /* suspension is always allowed */\n\n\t\t\t\tif (po->order[j] == PREEMPT_METHOD_CHECKPOINT &&\n\t\t\t\t    rjobs[i]->job->can_checkpoint)\n\t\t\t\t\tbreak; /* choose if checkpoint is allowed */\n\n\t\t\t\tif (po->order[j] == PREEMPT_METHOD_REQUEUE &&\n\t\t\t\t    rjobs[i]->job->can_requeue)\n\t\t\t\t\tbreak; /* choose if requeue is allowed */\n\t\t\t\tif (po->order[j] == PREEMPT_METHOD_DELETE)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\tif (j == PREEMPT_METHOD_HIGH) /* no preemption method good */\n\t\t\t\tgood = 0;\n\t\t}\n\n\t\tif (good) {\n\t\t\tfor (j = 0; good && rjobs[i]->ninfo_arr[j] != NULL; j++) {\n\t\t\t\tif 
(rjobs[i]->ninfo_arr[j]->is_down || rjobs[i]->ninfo_arr[j]->is_offline)\n\t\t\t\t\tgood = 0;\n\t\t\t}\n\t\t}\n\n\t\t/* if the high priority job is suspended then make sure we only\n\t\t * select jobs from the node the job is currently suspended on\n\t\t */\n\n\t\tif (good) {\n\t\t\tif (hjob->ninfo_arr != NULL) {\n\t\t\t\tfor (j = 0; hjob->ninfo_arr[j] != NULL; j++) {\n\t\t\t\t\tif (find_node_by_rank(rjobs[i]->ninfo_arr,\n\t\t\t\t\t\t\t      hjob->ninfo_arr[j]->rank) != NULL)\n\t\t\t\t\t\tbreak;\n\t\t\t\t}\n\n\t\t\t\t/* if we made it all the way through the list, then rjobs[i] has no useful\n\t\t\t\t * nodes for us to use... don't select it, unless it's not node resources we're after\n\t\t\t\t */\n\n\t\t\t\tif (hjob->ninfo_arr[j] == NULL)\n\t\t\t\t\tgood = 0;\n\t\t\t}\n\t\t}\n\t\tif (good) {\n\t\t\tschd_error *err;\n\t\t\tnode_good = 0;\n\n\t\t\terr = new_schd_error();\n\t\t\tif (err == NULL)\n\t\t\t\treturn NO_JOB_FOUND;\n\n\t\t\tfor (j = 0; rjobs[i]->ninfo_arr[j] != NULL && !node_good; j++) {\n\t\t\t\tnode_info *node = rjobs[i]->ninfo_arr[j];\n\t\t\t\tbool only_check_noncons = false;\n\n\t\t\t\tif (node->is_multivnoded) {\n\t\t\t\t\t/* It is unsafe to consider vnodes from multivnoded hosts \"no good\" when \"not enough\" of some\n\t\t\t\t\t * consumable resource can be found in the vnode, since the rest may be provided by other vnodes\n\t\t\t\t\t * on the same host.  Restrict the check on these vnodes to non-consumable resources only.\n\t\t\t\t\t */\n\t\t\t\t\tif (policy->resdef_to_check_noncons.empty()) {\n\t\t\t\t\t\tfor (const auto &rtc : policy->resdef_to_check) {\n\t\t\t\t\t\t\tif (rtc->type.is_non_consumable)\n\t\t\t\t\t\t\t\tpolicy->resdef_to_check_noncons.insert(rtc);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tonly_check_noncons = true;\n\t\t\t\t}\n\t\t\t\tfor (k = 0; hjob->select->chunks[k] != NULL; k++) {\n\t\t\t\t\tlong num_chunks_returned = 0;\n\t\t\t\t\tunsigned int flags = COMPARE_TOTAL | CHECK_ALL_BOOLS | UNSET_RES_ZERO;\n\t\t\t\t\t/* if only non consumables 
are checked, infinite number of chunks can be satisfied,\n\t\t\t\t\t * and SCHD_INFINITY is negative, so don't be tempted to check on positive value\n\t\t\t\t\t */\n\t\t\t\t\tclear_schd_error(err);\n\t\t\t\t\tif (only_check_noncons) {\n\t\t\t\t\t\tif (!policy->resdef_to_check_noncons.empty())\n\t\t\t\t\t\t\tnum_chunks_returned = check_avail_resources(node->res, hjob->select->chunks[k]->req,\n\t\t\t\t\t\t\t\t\t\t\t\t    flags, policy->resdef_to_check_noncons, INSUFFICIENT_RESOURCE, err);\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\tnum_chunks_returned = SCHD_INFINITY;\n\t\t\t\t\t} else\n\t\t\t\t\t\tnum_chunks_returned = check_avail_resources(node->res, hjob->select->chunks[k]->req,\n\t\t\t\t\t\t\t\t\t\t\t    flags, INSUFFICIENT_RESOURCE, err);\n\n\t\t\t\t\tif ((num_chunks_returned > 0) || (num_chunks_returned == SCHD_INFINITY)) {\n\t\t\t\t\t\tnode_good = 1;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tfree_schd_error(err);\n\t\t}\n\n\t\tif (node_good == 0)\n\t\t\tgood = 0;\n\n\t\tif (good)\n\t\t\tbreak;\n\t}\n\n\tif (good && rjobs[i] != NULL)\n\t\treturn i;\n\n\treturn NO_JOB_FOUND;\n}\n\n/**\n * @brief\n *\t\tpreempt_level - take a preemption priority and return a preemption\n *\t\t\tlevel\n *\n * @param[in]\tprio\t-\tthe preemption priority\n *\n * @return\tthe preemption level\n *\n */\nint\npreempt_level(unsigned int prio)\n{\n\tint level = NUM_PPRIO;\n\tint i;\n\n\tfor (i = 0; i < NUM_PPRIO && level == NUM_PPRIO; i++)\n\t\tif (sc_attrs.preempt_prio[i][1] == prio)\n\t\t\tlevel = i;\n\n\treturn level;\n}\n\n/**\n * @brief\n *\t\tset_preempt_prio - set a job's preempt field to the correct value\n *\n * @param[in]\tjob\t-\tthe job to set\n * @param[in]\tqinfo\t-\tthe queue the job is in\n * @param[in]\tsinfo\t-\tthe job's server\n *\n * @return\tnothing\n *\n */\nvoid\nset_preempt_prio(resource_resv *job, queue_info *qinfo, server_info *sinfo)\n{\n\tint i;\n\tjob_info *jinfo;\n\tint rc;\n\n\tif (job == NULL || job->job == NULL || qinfo == NULL || sinfo == 
NULL)\n\t\treturn;\n\n\tjinfo = job->job;\n\n\t/* in the case of resetting the value, we need to clear them first */\n\tjinfo->preempt = 0;\n\tjinfo->preempt_status = 0;\n\n\tif (sinfo->qrun_job != NULL) {\n\t\tif (job == sinfo->qrun_job ||\n\t\t    (jinfo->is_subjob && jinfo->array_id == sinfo->qrun_job->name))\n\t\t\tjinfo->preempt_status |= PREEMPT_TO_BIT(PREEMPT_QRUN);\n\t}\n\n\tif (sc_attrs.preempt_queue_prio != SCHD_INFINITY &&\n\t    qinfo->priority >= sc_attrs.preempt_queue_prio)\n\t\tjinfo->preempt_status |= PREEMPT_TO_BIT(PREEMPT_EXPRESS);\n\n\tif (over_fs_usage(jinfo->ginfo))\n\t\tjinfo->preempt_status |= PREEMPT_TO_BIT(PREEMPT_OVER_FS_LIMIT);\n\n\tif ((rc = check_soft_limits(sinfo, qinfo, job)) != 0) {\n\t\tif ((rc & PREEMPT_TO_BIT(PREEMPT_ERR)) != 0) {\n\t\t\tjob->can_not_run = 1;\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR, job->name,\n\t\t\t\t  \"job marked as not runnable due to check_soft_limits internal error\");\n\t\t\treturn;\n\t\t} else\n\t\t\tjinfo->preempt_status |= rc;\n\t}\n\n\t/* we haven't set it yet, therefore it's a normal job */\n\tif (jinfo->preempt_status == 0)\n\t\tjinfo->preempt_status = PREEMPT_TO_BIT(PREEMPT_NORMAL);\n\n\t/* now that we've set all the possible preempt statuses on the job, let's\n\t * set its priority compared to those statuses.  The statuses are sorted\n\t * by number of bits first, and priority second.  
We need to just search\n\t * through the list once and set the priority to the first one we find\n\t */\n\n\tfor (i = 0; i < NUM_PPRIO && sc_attrs.preempt_prio[i][0] != 0 &&\n\t\t    jinfo->preempt == 0;\n\t     i++) {\n\t\tif ((jinfo->preempt_status & sc_attrs.preempt_prio[i][0]) == sc_attrs.preempt_prio[i][0]) {\n\t\t\tjinfo->preempt = sc_attrs.preempt_prio[i][1];\n\t\t\t/* if the express bit is on, then we'll add the priority of that\n\t\t\t * queue into our priority to allow for multiple express queues\n\t\t\t */\n\t\t\tif (sc_attrs.preempt_prio[i][0] & PREEMPT_TO_BIT(PREEMPT_EXPRESS))\n\t\t\t\tjinfo->preempt += jinfo->queue->priority;\n\t\t}\n\t}\n\t/* we didn't find our preemption level -- this means we're a normal job */\n\tif (jinfo->preempt == 0) {\n\t\tjinfo->preempt_status = PREEMPT_TO_BIT(PREEMPT_NORMAL);\n\t\tjinfo->preempt = preempt_normal;\n\t}\n}\n\n/**\n * @brief\n * \t\tcreate subjob name from subjob id and array name\n *\n * @param[in]\tarray_id\t-\tthe parent array name\n * @param[in]\tindex\t-\tsubjob index\n *\n * @return\tcreated subjob name\n */\nstd::string\ncreate_subjob_name(const std::string &array_id, int index)\n{\n\tstd::string subjob_id;\n\tstd::size_t brackets;\n\n\tsubjob_id = array_id;\n\tbrackets = subjob_id.find(\"[]\");\n\tif (brackets == std::string::npos)\n\t\treturn std::string(\"\");\n\tsubjob_id.insert(brackets + 1, std::to_string(index));\n\n\treturn subjob_id;\n}\n\n/**\n * @brief\n *\t\tcreate_subjob_from_array - create a resource_resv structure for a\n *\t\t\t\t   subjob from a job array structure.  
The\n *\t\t\t\t   subjob will be in state 'Q'\n *\n * @param[in]\tarray\t-\tthe job array\n * @param[in]\tindex\t-\tthe subjob index\n * @param[in]\tsubjob_name\t-\tname of subjob @see create_subjob_name()\n *\n * @return\tthe new subjob\n * @retval\tNULL\t: on error\n *\n */\nresource_resv *\ncreate_subjob_from_array(resource_resv *array, int index, const std::string &subjob_name)\n{\n\tresource_resv *subjob; /* job_info structure for new subjob */\n\trange *tmp;\t       /* a tmp ptr to hold the queued_indices ptr */\n\tschd_error *err;\n\n\tif (array == NULL || array->job == NULL)\n\t\treturn NULL;\n\n\tif (!array->job->is_array)\n\t\treturn NULL;\n\n\terr = new_schd_error();\n\tif (err == NULL)\n\t\treturn NULL;\n\n\t/* so we don't dup the queued_indices for the subjob */\n\ttmp = array->job->queued_subjobs;\n\tarray->job->queued_subjobs = NULL;\n\n\tsubjob = dup_resource_resv(array, array->server, array->job->queue, subjob_name);\n\n\tif (subjob == NULL) {\n\t\tfree_schd_error(err);\n\t\treturn NULL;\n\t}\n\n\t/* make a copy of dependent jobs */\n\tsubjob->job->depend_job_str = string_dup(array->job->depend_job_str);\n\tsubjob->job->dependent_jobs = (resource_resv **) dup_array(array->job->dependent_jobs);\n\n\tarray->job->queued_subjobs = tmp;\n\n\tsubjob->job->is_begin = 0;\n\tsubjob->job->is_array = 0;\n\n\tsubjob->job->is_queued = 1;\n\tsubjob->job->is_subjob = 1;\n\tsubjob->job->array_index = index;\n\tsubjob->job->array_id = array->name;\n\n\tsubjob->rank = get_sched_rank();\n\n\tfree_schd_error(err);\n\treturn subjob;\n}\n\n/**\n * @brief\n *\t\tupdate_array_on_run - update a job array object when a subjob is run\n *\n * @param[in]\tarray\t-\tthe job array to update\n * @param[in]\tsubjob\t-\tthe subjob which was run\n *\n * @return\tsuccess or failure\n *\n */\nint\nupdate_array_on_run(job_info *array, job_info *subjob)\n{\n\tif (array == NULL || subjob == NULL)\n\t\treturn 0;\n\n\trange_remove_value(&array->queued_subjobs, subjob->array_index);\n\n\tif 
(array->is_queued) {\n\t\tarray->is_begin = 1;\n\t\tarray->is_queued = 0;\n\t}\n\n\treturn 1;\n}\n\n/**\n * @brief\n *\t\tis_job_array - is a job name a job array range\n *\t\t\t  valid_form: 1234[]\n *\t\t\t  valid_form: 1234[N]\n *\t\t\t  valid_form: 1234[N-M]\n *\n * @param[in]\tjobname\t-\tjobname to check\n *\n * @return int\n * @retval\t1\t: if jobname is a job array\n * @retval\t2\t: if jobname is a subjob\n * @retval\t3\t: if jobname is a range\n * @retval\t0\t: if it is not a job array\n */\nint\nis_job_array(char *jobname)\n{\n\tchar *bracket;\n\tint ret = 0;\n\n\tif (jobname == NULL)\n\t\treturn 0;\n\n\tbracket = strchr(jobname, (int) '[');\n\n\tif (bracket != NULL) {\n\t\tif (*(bracket + 1) == ']')\n\t\t\tret = 1;\n\t\telse if (strchr(bracket, (int) '-') != NULL)\n\t\t\tret = 3;\n\t\telse\n\t\t\tret = 2;\n\t}\n\n\treturn ret;\n}\n\n/**\n * @brief\n *\t\tmodify_job_array_for_qrun - modify a job array for qrun -\n *\t\t\t\t    set queued_subjobs to just the\n *\t\t\t\t    range which is being run\n *\t\t\t\t    set qrun_job on server\n *\n * @param[in,out]\tsinfo\t-\tserver to modify job array\n * @param[in]\tjobid\t-\tstring job name\n *\n * @return\tint\n * @retval\t1\t: on success\n * @retval\t0\t: on failure\n * @retval\t-1\t: on error\n *\n */\nint\nmodify_job_array_for_qrun(server_info *sinfo, char *jobid)\n{\n\tchar name[128];\n\tchar rest[128];\n\tchar *rangestr;\n\tchar *ptr;\n\trange *r, *r2;\n\tint len;\n\n\tresource_resv *job;\n\n\tif (sinfo == NULL || jobid == NULL)\n\t\treturn -1;\n\n\tpbs_strncpy(name, jobid, sizeof(name));\n\n\tif ((ptr = strchr(name, (int) '[')) == NULL)\n\t\treturn 0;\n\n\t*ptr = '\\0';\n\trangestr = ptr + 1;\n\n\tif ((ptr = strchr(rangestr, ']')) == NULL)\n\t\treturn 0;\n\n\tpbs_strncpy(rest, ptr, sizeof(rest));\n\n\t*ptr = '\\0';\n\n\t/* now rangestr should be the subjob index or range of indices */\n\tif ((r = range_parse(rangestr)) == NULL)\n\t\treturn 0;\n\n\t/* now that we've converted the subjob index or range 
into a range list\n\t * we can munge our original name to find the job array\n\t */\n\tlen = strlen(name);\n\tname[len] = '[';\n\tname[len + 1] = '\\0';\n\tstrcat(name, rest);\n\n\tjob = find_resource_resv(sinfo->jobs, name);\n\n\tif (job != NULL) {\n\t\t/* lets only run the jobs which were requested */\n\t\tr2 = range_intersection(r, job->job->queued_subjobs);\n\t\tif (r2 != NULL) {\n\t\t\tfree_range_list(job->job->queued_subjobs);\n\t\t\tjob->job->queued_subjobs = r2;\n\t\t} else {\n\t\t\tfree_range_list(r);\n\t\t\treturn 0;\n\t\t}\n\t} else {\n\t\tfree_range_list(r);\n\t\treturn 0;\n\t}\n\n\tsinfo->qrun_job = job;\n\tfree_range_list(r);\n\treturn 1;\n}\n\n/**\n * @brief\n *\t\tcreate a subjob from a job array and queue it\n *\n * @param[in]\tarray\t-\tjob array to create the next subjob from\n * @param[in]\tsinfo\t-\tthe server the job array is in\n * @param[in]\tqinfo\t-\tthe queue the job array is in\n *\n * @return\tresource_resv *\n * @retval\tnew subjob\n * @retval\tNULL\t: on error\n *\n * @note\n * \t\tsubjob will be attached to the server/queue job lists\n *\n */\nresource_resv *\nqueue_subjob(resource_resv *array, server_info *sinfo,\n\t     queue_info *qinfo)\n{\n\tint subjob_index;\n\tstd::string subjob_name;\n\tresource_resv *rresv = NULL;\n\tresource_resv **tmparr = NULL;\n\n\tif (array == NULL || array->job == NULL || sinfo == NULL || qinfo == NULL)\n\t\treturn NULL;\n\n\tif (!array->job->is_array)\n\t\treturn NULL;\n\n\tsubjob_index = range_next_value(array->job->queued_subjobs, -1);\n\tif (subjob_index >= 0) {\n\t\tsubjob_name = create_subjob_name(array->name, subjob_index);\n\t\tif (!subjob_name.empty()) {\n\t\t\tif ((rresv = find_resource_resv(sinfo->jobs, subjob_name)) != NULL) {\n\t\t\t\t/* Set tmparr to something so we're not considered an error */\n\t\t\t\ttmparr = sinfo->jobs;\n\t\t\t} else if ((rresv = create_subjob_from_array(array, subjob_index, subjob_name)) != NULL) {\n\t\t\t\t/* add_resresv_to_array calls realloc, so we need to treat 
this call\n\t\t\t\t * as a call to realloc.  Put it into a temp variable to check for NULL\n\t\t\t\t */\n\t\t\t\ttmparr = add_resresv_to_array(sinfo->jobs, rresv, NO_FLAGS);\n\t\t\t\tif (tmparr != NULL) {\n\t\t\t\t\tsinfo->jobs = tmparr;\n\t\t\t\t\tsinfo->sc.queued++;\n\t\t\t\t\tsinfo->sc.total++;\n\n\t\t\t\t\ttmparr = add_resresv_to_array(sinfo->all_resresv, rresv, SET_RESRESV_INDEX);\n\t\t\t\t\tif (tmparr != NULL) {\n\t\t\t\t\t\tsinfo->all_resresv = tmparr;\n\t\t\t\t\t\ttmparr = add_resresv_to_array(qinfo->jobs, rresv, NO_FLAGS);\n\t\t\t\t\t\tif (tmparr != NULL) {\n\t\t\t\t\t\t\tqinfo->jobs = tmparr;\n\t\t\t\t\t\t\tqinfo->sc.queued++;\n\t\t\t\t\t\t\tqinfo->sc.total++;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\trresv->job->parent_job = array;\n\t\t\t}\n\t\t}\n\t}\n\n\tif (tmparr == NULL || rresv == NULL) {\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG, array->name,\n\t\t\t  \"Unable to create new subjob for job array\");\n\t\treturn NULL;\n\t}\n\n\treturn rresv;\n}\n\n/**\n * @brief\n * \t\tevaluate a math formula for jobs based on their resources\n *\t\tNOTE: currently done through embedded python interpreter\n *\n * @param[in]\tformula\t-\tformula to evaluate\n * @param[in]\tresresv\t-\tjob for special case key words\n * @param[in]\tresreq\t-\tresources to use when evaluating\n *\n * @return\tevaluated formula answer or 0 on exception\n *\n */\n\n#ifdef PYTHON\nsch_resource_t\nformula_evaluate(const char *formula, resource_resv *resresv, resource_req *resreq)\n{\n\tchar buf[1024];\n\tchar *globals;\n\tint globals_size = 1024; /* initial size... 
will grow if needed */\n\tsch_resource_t ans = 0;\n\tchar *formula_buf;\n\tint formula_buf_len;\n\n\tPyObject *module;\n\tPyObject *dict;\n\tPyObject *obj;\n\n\tif (formula == NULL || resresv == NULL ||\n\t    resresv->job == NULL)\n\t\treturn 0;\n\n\tformula_buf_len = sizeof(buf) + strlen(formula) + 1;\n\n\tformula_buf = static_cast<char *>(malloc(formula_buf_len));\n\tif (formula_buf == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn 0;\n\t}\n\n\tif ((globals = static_cast<char *>(malloc(globals_size * sizeof(char)))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\tfree(formula_buf);\n\t\treturn 0;\n\t}\n\n\tglobals[0] = '\\0';\n\n\tif (pbs_strcat(&globals, &globals_size, \"globals_dict = {\") == NULL) {\n\t\tfree(globals);\n\t\tfree(formula_buf);\n\t\treturn 0;\n\t}\n\n\tfor (const auto &cr : consres) {\n\t\tauto req = find_resource_req(resreq, cr);\n\n\t\tif (req != NULL)\n\t\t\tsprintf(buf, \"\\'%s\\':%.*f,\", cr->name.c_str(),\n\t\t\t\tfloat_digits(req->amount, FLOAT_NUM_DIGITS), req->amount);\n\t\telse\n\t\t\tsprintf(buf, \"\\'%s\\' : 0,\", cr->name.c_str());\n\n\t\tif (pbs_strcat(&globals, &globals_size, buf) == NULL) {\n\t\t\tfree(globals);\n\t\t\tfree(formula_buf);\n\t\t\treturn 0;\n\t\t}\n\t}\n\n\t/* special cases */\n\tsprintf(buf, \"\\'%s\\':%ld,\\'%s\\':%d,\\'%s\\':%d,\\'%s\\':%f, \\'%s\\': %f, \\'%s\\': %f, \\'%s\\': %f, \\'%s\\':%d}\",\n\t\tFORMULA_ELIGIBLE_TIME, resresv->job->eligible_time,\n\t\tFORMULA_QUEUE_PRIO, resresv->job->queue->priority,\n\t\tFORMULA_JOB_PRIO, resresv->job->priority,\n\t\tFORMULA_FSPERC, resresv->job->ginfo->tree_percentage,\n\t\tFORMULA_FSPERC_DEP, resresv->job->ginfo->tree_percentage,\n\t\tFORMULA_TREE_USAGE, resresv->job->ginfo->usage_factor,\n\t\tFORMULA_FSFACTOR, resresv->job->ginfo->tree_percentage == 0 ? 
0 : pow(2, -(resresv->job->ginfo->usage_factor / resresv->job->ginfo->tree_percentage)),\n\t\tFORMULA_ACCRUE_TYPE, resresv->job->accrue_type);\n\tif (pbs_strcat(&globals, &globals_size, buf) == NULL) {\n\t\tfree(globals);\n\t\tfree(formula_buf);\n\t\treturn 0;\n\t}\n\n\tPyRun_SimpleString(globals);\n\tfree(globals);\n\n\t/* now that we've set all the values, let's calculate the answer */\n\tsnprintf(formula_buf, formula_buf_len,\n\t\t \"_PBS_PYTHON_EXCEPTIONSTR_=\\\"\\\"\\n\"\n\t\t \"ex = None\\n\"\n\t\t \"try:\\n\"\n\t\t \"\\t_FORMANS_ = eval(\\\"%s\\\", globals_dict, locals())\\n\"\n\t\t \"except Exception as ex:\"\n\t\t \"\\t_PBS_PYTHON_EXCEPTIONSTR_=str(ex)\\n\",\n\t\t formula);\n\n\tPyRun_SimpleString(formula_buf);\n\tfree(formula_buf);\n\n\tmodule = PyImport_AddModule(\"__main__\");\n\tdict = PyModule_GetDict(module);\n\tobj = PyMapping_GetItemString(dict, \"_FORMANS_\");\n\n\tif (obj != NULL) {\n\t\tans = PyFloat_AsDouble(obj);\n\t\tPy_XDECREF(obj);\n\t}\n\n\tobj = PyMapping_GetItemString(dict, \"_PBS_PYTHON_EXCEPTIONSTR_\");\n\tif (obj != NULL) {\n\t\tauto str = PyUnicode_AsUTF8(obj);\n\t\tif (str != NULL) {\n\t\t\tif (strlen(str) > 0) { /* exception happened */\n\t\t\t\tlog_eventf(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, resresv->name,\n\t\t\t\t\t   \"Formula evaluation for job had an error.  
Zero value will be used: %s\", str);\n\t\t\t\tans = 0;\n\t\t\t}\n\t\t}\n\t\tPy_XDECREF(obj);\n\t}\n\n\treturn ans;\n}\n#else\nsch_resource_t\nformula_evaluate(char *formula, resource_resv *resresv, resource_req *resreq)\n{\n\treturn 0;\n}\n#endif\n\n/**\n * @brief\n * \t\tSet the job accrue type to eligible time.\n *\n * @param[in]\tpbs_sd\t-\tconnection to pbs_server\n * @param[out]\tresresv\t-\tpointer to job\n *\n * @return\tvoid\n *\n */\nstatic void\nmake_eligible(int pbs_sd, resource_resv *resresv)\n{\n\tif (resresv == NULL || resresv->job == NULL)\n\t\treturn;\n\tif (resresv->job->accrue_type != JOB_ELIGIBLE) {\n\t\tupdate_job_attr(pbs_sd, resresv, ATTR_accrue_type, NULL, ACCRUE_ELIG, NULL, UPDATE_LATER);\n\t\tresresv->job->accrue_type = JOB_ELIGIBLE;\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \t\tSet the job accrue type to ineligible time.\n *\n * @param[in]\tpbs_sd\t-\tconnection to pbs_server\n * @param[out]\tresresv\t-\tpointer to job\n *\n * @return\tvoid\n */\nstatic void\nmake_ineligible(int pbs_sd, resource_resv *resresv)\n{\n\tif (resresv == NULL || resresv->job == NULL)\n\t\treturn;\n\tif (resresv->job->accrue_type != JOB_INELIGIBLE) {\n\t\tupdate_job_attr(pbs_sd, resresv, ATTR_accrue_type, NULL, ACCRUE_INEL, NULL, UPDATE_LATER);\n\t\tresresv->job->accrue_type = JOB_INELIGIBLE;\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \t\tUpdates accrue_type of job on server. The accrue_type is determined\n * \t\tfrom the values of mode and err_code. If resresv is a job array, special\n * \t\taction is taken. 
If mode is set to something other than ACCRUE_CHECK_ERR\n * \t\tthen the value of err_code is ignored unless it is set to SCHD_ERROR.\n *\n * @param[in]\tpbs_sd\t-\tconnection to pbs_server\n * @param[in]\tsinfo\t-\tpointer to server\n * @param[in]\tmode\t-\tmode of operation\n * @param[in]\terr_code\t-\tsched_error_code value\n * @param[in,out]\tresresv\t-\tpointer to job\n *\n * @return void\n *\n */\nvoid\nupdate_accruetype(int pbs_sd, server_info *sinfo,\n\t\t  enum update_accruetype_mode mode, enum sched_error_code err_code,\n\t\t  resource_resv *resresv)\n{\n\tif (sinfo == NULL || resresv == NULL || resresv->job == NULL)\n\t\treturn;\n\n\t/* if SCHD_ERROR, don't change accrue type */\n\tif (err_code == SCHD_ERROR)\n\t\treturn;\n\n\tif (sinfo->eligible_time_enable == 0)\n\t\treturn;\n\n\t/* If we're simulating, don't update the job on the server */\n\tif (pbs_sd == SIMULATE_SD)\n\t\treturn;\n\n\t/* behavior of job array's eligible_time calc differs from jobs/subjobs,\n\t *  it depends on:\n\t *    1) job array is empty - accrues ineligible time\n\t *    2) job array has instantiated all subjobs - accrues ineligible time\n\t *    3) job array has at least one subjob to run - accrues eligible time\n\t */\n\n\tif (resresv->job->is_array && resresv->job->is_begin &&\n\t    (range_next_value(resresv->job->queued_subjobs, -1) < 0)) {\n\t\tmake_ineligible(pbs_sd, resresv);\n\t\treturn;\n\t}\n\n\tif ((resresv->job->preempt_status & PREEMPT_QUEUE_SERVER_SOFTLIMIT) > 0) {\n\t\tmake_ineligible(pbs_sd, resresv);\n\t\treturn;\n\t}\n\n\tif (mode == ACCRUE_MAKE_INELIGIBLE) {\n\t\tmake_ineligible(pbs_sd, resresv);\n\t\treturn;\n\t}\n\n\tif (mode == ACCRUE_MAKE_ELIGIBLE) {\n\t\tmake_eligible(pbs_sd, resresv);\n\t\treturn;\n\t}\n\n\t/* determine accruetype from err code */\n\tswitch (err_code) {\n\n\t\tcase SUCCESS:\n\t\t\t/* server updates accrue_type to RUNNING, hence, simply move out */\n\t\t\t/* accrue type is set to running in update_resresv_on_run() 
*/\n\t\t\tbreak;\n\n\t\tcase SERVER_BYUSER_JOB_LIMIT_REACHED:\n\t\tcase SERVER_BYUSER_RES_LIMIT_REACHED:\n\t\tcase SERVER_USER_LIMIT_REACHED:\n\t\tcase SERVER_USER_RES_LIMIT_REACHED:\n\t\tcase SERVER_BYGROUP_JOB_LIMIT_REACHED:\n\t\tcase SERVER_BYPROJECT_JOB_LIMIT_REACHED:\n\t\tcase SERVER_BYGROUP_RES_LIMIT_REACHED:\n\t\tcase SERVER_BYPROJECT_RES_LIMIT_REACHED:\n\t\tcase SERVER_GROUP_LIMIT_REACHED:\n\t\tcase SERVER_GROUP_RES_LIMIT_REACHED:\n\t\tcase SERVER_PROJECT_LIMIT_REACHED:\n\t\tcase SERVER_PROJECT_RES_LIMIT_REACHED:\n\t\tcase QUEUE_BYUSER_JOB_LIMIT_REACHED:\n\t\tcase QUEUE_BYUSER_RES_LIMIT_REACHED:\n\t\tcase QUEUE_USER_LIMIT_REACHED:\n\t\tcase QUEUE_USER_RES_LIMIT_REACHED:\n\t\tcase QUEUE_BYGROUP_JOB_LIMIT_REACHED:\n\t\tcase QUEUE_BYPROJECT_JOB_LIMIT_REACHED:\n\t\tcase QUEUE_BYGROUP_RES_LIMIT_REACHED:\n\t\tcase QUEUE_BYPROJECT_RES_LIMIT_REACHED:\n\t\tcase QUEUE_GROUP_LIMIT_REACHED:\n\t\tcase QUEUE_GROUP_RES_LIMIT_REACHED:\n\t\tcase QUEUE_PROJECT_LIMIT_REACHED:\n\t\tcase QUEUE_PROJECT_RES_LIMIT_REACHED:\n\t\tcase NODE_GROUP_LIMIT_REACHED:\n\t\tcase JOB_UNDER_THRESHOLD:\n\t\t\tmake_ineligible(pbs_sd, resresv);\n\t\t\tbreak;\n\n\t\t\t/*\n\t\t\t * The list of ineligible cases must be complete; the remainder are eligible.\n\t\t\t * Some eligible cases include:\n\t\t\t * - SERVER_JOB_LIMIT_REACHED\n\t\t\t * - QUEUE_JOB_LIMIT_REACHED\n\t\t\t * - CROSS_PRIME_BOUNDARY\n\t\t\t * - CROSS_DED_TIME_BOUNDRY\n\t\t\t * - ERR_SPECIAL\n\t\t\t * - NO_NODE_RESOURCES\n\t\t\t * - INSUFFICIENT_RESOURCE\n\t\t\t * - BACKFILL_CONFLICT\n\t\t\t * - RESERVATION_INTERFERENCE\n\t\t\t * - PRIME_ONLY\n\t\t\t * - NONPRIME_ONLY\n\t\t\t * - DED_TIME\n\t\t\t * - INSUFFICIENT_QUEUE_RESOURCE\n\t\t\t * - INSUFFICIENT_SERVER_RESOURCE\n\t\t\t */\n\t\tdefault:\n\t\t\tmake_eligible(pbs_sd, resresv);\n\t\t\tbreak;\n\t}\n\n\treturn;\n}\n\n/**\n * @brief\n *\t\tGet AOE name from select of job/reservation.\n *\n * @see\n *\t\tquery_jobs\n *\t\tquery_reservations\n *\n * @param[in]\tselect\t-\tselect of 
job/reservation\n *\n * @return\tchar *\n * @retval\tNULL\t: no AOE requested\n * @retval\taoe\t: AOE requested\n *\n * @par Side Effects:\n *\t\tNone\n *\n * @par\tMT-safe: Yes\n *\n */\nchar *\ngetaoename(selspec *select)\n{\n\tint i = 0;\n\n\tif (select == NULL)\n\t\treturn NULL;\n\n\tfor (i = 0; select->chunks[i] != NULL; i++) {\n\t\tauto req = find_resource_req(select->chunks[i]->req, allres[\"aoe\"]);\n\t\tif (req != NULL)\n\t\t\treturn string_dup(req->res_str);\n\t}\n\n\treturn NULL;\n}\n\n/**\n * @brief\n *\tGet EOE name from select of job/reservation.\n *\n * @see\n *\tquery_jobs\n *\tquery_reservations\n *\n * @param[in]\tselect\t  -\tselect of job/reservation\n *\n * @return\tchar *\n * @retval\tNULL : no EOE requested\n * @retval\teoe  : EOE requested\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nchar *\ngeteoename(selspec *select)\n{\n\tresource_req *req;\n\n\tif (select == NULL || select->chunks == NULL || select->chunks[0] == NULL)\n\t\treturn NULL;\n\n\t/* we only need to look at the 1st chunk since either all chunks\n\t * request eoe or none do.\n\t */\n\treq = find_resource_req(select->chunks[0]->req, allres[\"eoe\"]);\n\tif (req != NULL)\n\t\treturn string_dup(req->res_str);\n\n\treturn NULL;\n}\n\n/**\n * @brief\n * \t\tupdate the estimated.start_time and\n *\t\t estimated.exec_vnode attributes on a job\n *\n * @param[in]\tpbs_sd\t-\tconnection descriptor to pbs server\n * @param[in]\tjob\t-\tjob to update\n * @param[in]\tstart_time\t-\tstart time of job\n * @param[in]\texec_vnode\t-\texec_vnode of job or NULL to create it from\n *\t\t\t      \t\t\t\tjob->nspec_arr\n * @param[in]\tforce\t-\tforces attributes to update now -- no checks\n *\n * @return\tint\n * @retval\t1\t: if attributes were successfully updated\n * @retval\t0\t: if attributes were not updated for a valid reason\n * @retval\t-1\t: if attributes were not updated for an error\n */\nint\nupdate_estimated_attrs(int pbs_sd, resource_resv *job,\n\t\t       
time_t start_time, char *exec_vnode, int force)\n{\n\tstruct attrl attr = {0};\n\tchar timebuf[128];\n\tresource_resv *array = NULL;\n\tresource_resv *resresv;\n\tenum update_attr_flags aflags;\n\n\tif (job == NULL)\n\t\treturn -1;\n\n\tif (job->is_job && job->job == NULL)\n\t\treturn -1;\n\n\tif (!force) {\n\t\tif (job->job->is_subjob) {\n\t\t\tarray = find_resource_resv(job->server->jobs, job->job->array_id);\n\t\t\tif (array != NULL) {\n\t\t\t\tif (job->job->array_index !=\n\t\t\t\t    range_next_value(array->job->queued_subjobs, -1)) {\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t} else\n\t\t\t\treturn -1;\n\t\t}\n\t\taflags = UPDATE_LATER;\n\t} else {\n\t\taflags = UPDATE_NOW;\n\t\tif (!job->job->array_id.empty())\n\t\t\tarray = find_resource_resv(job->server->jobs, job->job->array_id);\n\t}\n\n\tif (job->job->is_topjob == false) {\n\t\tupdate_job_attr(pbs_sd, job, ATTR_topjob, NULL, const_cast<char *>(\"True\"), NULL, aflags);\n\t}\n\n\t/* create attrl for estimated.exec_vnode to be passed as the 'extra' field\n\t * to update_job_attr().  
This will cause both attributes to be updated\n\t * in one call to pbs_asyalterjob()\n\t */\n\tattr.name = const_cast<char *>(ATTR_estimated);\n\tattr.resource = const_cast<char *>(\"exec_vnode\");\n\tif (exec_vnode == NULL)\n\t\tattr.value = create_execvnode(job->nspec_arr);\n\telse\n\t\tattr.value = exec_vnode;\n\n\tsnprintf(timebuf, 128, \"%ld\", (long) start_time);\n\n\tif (array)\n\t\tresresv = array;\n\telse\n\t\tresresv = job;\n\n\treturn update_job_attr(pbs_sd, resresv, ATTR_estimated, \"start_time\",\n\t\t\t       timebuf, &attr, aflags);\n}\n\n/**\n * @brief\n * \t\tThis function checks if the preemption set has been configured as TARGET_NONE.\n *          If it finds preempt_targets = TARGET_NONE, this function returns PREEMPT_NONE.\n * @param[in]\tres_list\t-\tlist of resources created from a comma-separated resource list.\n *\n * @return\tint\n * @retval\tPREEMPT_NONE\t: If preemption set is set to TARGET_NONE\n * @retval\t0\t: If preemption set is not set as TARGET_NONE\n */\nint\ncheck_preempt_targets_for_none(char **res_list)\n{\n\tchar *arg = NULL;\n\tint i = 0;\n\tif (res_list == NULL)\n\t\treturn 0;\n\n\tfor (arg = res_list[i]; arg != NULL; i++, arg = res_list[i]) {\n\t\tif (!strcasecmp(arg, TARGET_NONE)) {\n\t\t\treturn PREEMPT_NONE;\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tchecks whether the IFL interface failed because it was a finished job\n *\n * @param[in]\terror\t-\tpbs_errno set by server\n *\n * @retval\t1\t: if job is a finished job\n * @retval\t0\t: if job is not a finished job\n */\nint\nis_finished_job(int error)\n{\n\tswitch (error) {\n\t\tcase PBSE_UNKJOBID:\n\t\tcase PBSE_HISTJOBID:\n\t\t\treturn (1);\n\t\tdefault:\n\t\t\treturn (0);\n\t}\n}\n\n/**\n * @brief\n * \t\tpreemption_similarity - Compare two running jobs to see if they have\n *\t\t\toverlap.  Overlap is defined in terms of preemption:\n *\t\t\tcan preempting pjob help hjob run?  
In doing this\n *\t\t\twe look at the full list of reasons hjob can not run and\n *\t\t\trun a similarity heuristic against the two jobs to see if\n *\t\t\tthey are alike.\n *\n * @param[in]\thjob\t-\thigh priority job\n * @param[in]\tpjob\t-\tjob to see if it is similar to the high priority job\n * @param[in]\tfull_err\t-\tlist of reasons why hjob can not run right now.  It gets\n *                      \t\tcreated by passing the RETURN_ALL_ERRS to is_ok_to_run()\n * @return\tint\n * @retval\t1\t: jobs are similar\n * @retval\t0\t: jobs are not similar\n */\nint\npreemption_similarity(resource_resv *hjob, resource_resv *pjob, schd_error *full_err)\n{\n\tschd_error *cur_err;\n\tint match = 0;\n\tschd_resource *res;\n\tint j;\n\n\tfor (cur_err = full_err; match == 0 && cur_err != NULL; cur_err = cur_err->next) {\n\t\tswitch (cur_err->error_code) {\n\t\t\tcase QUEUE_JOB_LIMIT_REACHED:\n\t\t\tcase QUEUE_RESOURCE_LIMIT_REACHED:\n\t\t\t\tif (pjob->job->queue == hjob->job->queue)\n\t\t\t\t\tmatch = 1;\n\t\t\t\tbreak;\n\t\t\tcase SERVER_USER_LIMIT_REACHED:\n\t\t\tcase SERVER_USER_RES_LIMIT_REACHED:\n\t\t\tcase SERVER_BYUSER_JOB_LIMIT_REACHED:\n\t\t\tcase SERVER_BYUSER_RES_LIMIT_REACHED:\n\t\t\t\tif (pjob->user == hjob->user)\n\t\t\t\t\tmatch = 1;\n\t\t\t\tbreak;\n\t\t\tcase QUEUE_USER_LIMIT_REACHED:\n\t\t\tcase QUEUE_USER_RES_LIMIT_REACHED:\n\t\t\tcase QUEUE_BYUSER_JOB_LIMIT_REACHED:\n\t\t\tcase QUEUE_BYUSER_RES_LIMIT_REACHED:\n\t\t\t\tif (pjob->job->queue == hjob->job->queue &&\n\t\t\t\t    (pjob->user == hjob->user))\n\t\t\t\t\tmatch = 1;\n\n\t\t\t\tbreak;\n\t\t\tcase SERVER_GROUP_LIMIT_REACHED:\n\t\t\tcase SERVER_GROUP_RES_LIMIT_REACHED:\n\t\t\tcase SERVER_BYGROUP_JOB_LIMIT_REACHED:\n\t\t\tcase SERVER_BYGROUP_RES_LIMIT_REACHED:\n\t\t\t\tif (pjob->group == hjob->group)\n\t\t\t\t\tmatch = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase QUEUE_GROUP_LIMIT_REACHED:\n\t\t\tcase QUEUE_GROUP_RES_LIMIT_REACHED:\n\t\t\tcase QUEUE_BYGROUP_JOB_LIMIT_REACHED:\n\t\t\tcase 
QUEUE_BYGROUP_RES_LIMIT_REACHED:\n\t\t\t\tif (pjob->job->queue == hjob->job->queue &&\n\t\t\t\t    (pjob->group == hjob->group))\n\t\t\t\t\tmatch = 1;\n\t\t\t\tbreak;\n\t\t\tcase SERVER_PROJECT_LIMIT_REACHED:\n\t\t\tcase SERVER_PROJECT_RES_LIMIT_REACHED:\n\t\t\tcase SERVER_BYPROJECT_RES_LIMIT_REACHED:\n\t\t\tcase SERVER_BYPROJECT_JOB_LIMIT_REACHED:\n\t\t\t\tif (pjob->project == hjob->project)\n\t\t\t\t\tmatch = 1;\n\t\t\t\tbreak;\n\t\t\tcase QUEUE_PROJECT_LIMIT_REACHED:\n\t\t\tcase QUEUE_PROJECT_RES_LIMIT_REACHED:\n\t\t\tcase QUEUE_BYPROJECT_RES_LIMIT_REACHED:\n\t\t\tcase QUEUE_BYPROJECT_JOB_LIMIT_REACHED:\n\t\t\t\tif (pjob->job->queue == hjob->job->queue &&\n\t\t\t\t    (pjob->project == hjob->project))\n\t\t\t\t\tmatch = 1;\n\t\t\t\tbreak;\n\t\t\tcase SERVER_JOB_LIMIT_REACHED:\n\t\t\tcase SERVER_RESOURCE_LIMIT_REACHED:\n\t\t\t\tmatch = 1;\n\t\t\t\tbreak;\n\t\t\t/* Codes from check_nodes(): check_nodes() returns a code for one node.\n\t\t\t * The code itself doesn't really help us.  What it does do is signal us\n\t\t\t * that we searched the nodes and didn't find a match.  
We need to check if\n\t\t\t * there are nodes in the exec_vnodes that are similar\n\t\t\t */\n\t\t\tcase NO_AVAILABLE_NODE:\n\t\t\tcase NOT_ENOUGH_NODES_AVAIL:\n\t\t\tcase NO_NODE_RESOURCES:\n\t\t\tcase INVALID_NODE_STATE:\n\t\t\tcase INVALID_NODE_TYPE:\n\t\t\tcase NODE_JOB_LIMIT_REACHED:\n\t\t\tcase NODE_USER_LIMIT_REACHED:\n\t\t\tcase NODE_GROUP_LIMIT_REACHED:\n\t\t\tcase NODE_NO_MULT_JOBS:\n\t\t\tcase NODE_UNLICENSED:\n\t\t\tcase INSUFFICIENT_RESOURCE:\n\t\t\tcase AOE_NOT_AVALBL:\n\t\t\tcase PROV_RESRESV_CONFLICT:\n\t\t\tcase NO_FREE_NODES:\n\t\t\tcase NO_TOTAL_NODES:\n\t\t\tcase NODE_NOT_EXCL:\n\t\t\tcase CANT_SPAN_PSET:\n\t\t\tcase IS_MULTI_VNODE:\n\t\t\tcase RESERVATION_CONFLICT:\n\t\t\tcase SET_TOO_SMALL:\n\n\t\t\t\tif (hjob->ninfo_arr != NULL && pjob->ninfo_arr != NULL) {\n\t\t\t\t\tfor (j = 0; hjob->ninfo_arr[j] != NULL && !match; j++) {\n\t\t\t\t\t\tif (find_node_by_rank(pjob->ninfo_arr, hjob->ninfo_arr[j]->rank) != NULL)\n\t\t\t\t\t\t\tmatch = 1;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tbreak;\n\t\t\tcase INSUFFICIENT_QUEUE_RESOURCE:\n\t\t\t\tif (hjob->job->queue == pjob->job->queue) {\n\t\t\t\t\tfor (res = hjob->job->queue->qres; res != NULL; res = res->next) {\n\t\t\t\t\t\tif (res->avail != SCHD_INFINITY_RES)\n\t\t\t\t\t\t\tif (find_resource_req(pjob->resreq, res->def) != NULL)\n\t\t\t\t\t\t\t\tmatch = 1;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase INSUFFICIENT_SERVER_RESOURCE:\n\t\t\t\tfor (res = hjob->server->res; res != NULL; res = res->next) {\n\t\t\t\t\tif (res->avail != SCHD_INFINITY_RES)\n\t\t\t\t\t\tif (find_resource_req(pjob->resreq, res->def) != NULL)\n\t\t\t\t\t\t\tmatch = 1;\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\t/* Something we didn't expect, err on the side of caution */\n\t\t\t\tmatch = 1;\n\t\t\t\tbreak;\n\t\t}\n\t}\n\treturn match;\n}\n\n/**\n * @brief Create the resources_released and resource_released_list for a job\n *\t    and also set execselect on the job based on resources_released\n *\n * @param[in] policy - 
policy object\n * @param[in] pjob - Job for which the resources_released string is created\n *\n * @return void\n */\nvoid\ncreate_res_released(status *policy, resource_resv *pjob)\n{\n\tstd::string selectspec;\n\tif (pjob->job->resreleased.empty()) {\n\t\tpjob->job->resreleased = create_res_released_array(policy, pjob);\n\t\tif (pjob->job->resreleased.empty()) {\n\t\t\treturn;\n\t\t}\n\t\tpjob->job->resreq_rel = create_resreq_rel_list(policy, pjob);\n\t}\n\tselectspec = create_select_from_nspec(pjob->job->resreleased);\n\tdelete pjob->execselect;\n\tpjob->execselect = parse_selspec(selectspec);\n\treturn;\n}\n\n/**\n * @brief This function populates the resreleased structure for a particular job.\n *\t  It does so by duplicating the job's exec_vnode and only keeping the\n *\t  consumable resources in policy->rel_on_susp\n *\n * @param[in] policy - policy object\n * @param[in] resresv - Job to create resources_released for\n *\n * @return std::vector<nspec *>\n * @retval nspec array of released resources\n * @retval empty vector on error\n *\n */\nstd::vector<nspec *>\ncreate_res_released_array(status *policy, resource_resv *resresv)\n{\n\tresource_req *req;\n\n\tif ((resresv == NULL) || (resresv->nspec_arr.empty()) || (resresv->ninfo_arr == NULL))\n\t\treturn {};\n\n\tauto nspec_arr = dup_nspecs(resresv->nspec_arr, resresv->ninfo_arr, NULL);\n\tif (nspec_arr.empty())\n\t\treturn {};\n\tif (!policy->rel_on_susp.empty()) {\n\t\tfor (auto ns : nspec_arr) {\n\t\t\tfor (req = ns->resreq; req != NULL; req = req->next) {\n\t\t\t\tauto &ros = policy->rel_on_susp;\n\t\t\t\tif (req->type.is_consumable == 1 && ros.find(req->def) == ros.end())\n\t\t\t\t\treq->amount = 0;\n\t\t\t}\n\t\t}\n\t}\n\treturn nspec_arr;\n}\n\n/**\n * @brief create a resource_rel array for a job by accumulating all of the RASSN\n *\t    resources in a resources_released nspec array.\n *\n * @note only uses RASSN resources on the sched_config resources line\n *\n * @param policy - policy info\n * @param pjob -  
resource reservation structure\n * @return resource_req *\n * @retval newly created resreq_rel array\n * @retval NULL on error\n */\nresource_req *\ncreate_resreq_rel_list(status *policy, resource_resv *pjob)\n{\n\tresource_req *resreq_rel = NULL;\n\tresource_req *rel;\n\tresource_req *req;\n\tif (policy == NULL || pjob == NULL)\n\t\treturn NULL;\n\n\tfor (req = pjob->resreq; req != NULL; req = req->next) {\n\t\tconst auto &rdc = policy->resdef_to_check_rassn;\n\t\tif (rdc.find(req->def) != rdc.end()) {\n\t\t\tconst auto &ros = policy->rel_on_susp;\n\t\t\tif (!ros.empty() && ros.find(req->def) == ros.end())\n\t\t\t\tcontinue;\n\t\t\trel = find_alloc_resource_req(resreq_rel, req->def);\n\t\t\tif (rel != NULL) {\n\t\t\t\trel->amount += req->amount;\n\t\t\t\tif (resreq_rel == NULL)\n\t\t\t\t\tresreq_rel = rel;\n\t\t\t}\n\t\t}\n\t}\n\treturn resreq_rel;\n}\n\n/**\n * @brief extend the soft walltime of a job.  A job's soft_walltime will be extended by 100% of its\n * \t\toriginal soft_walltime.  
If this extension would go past the job's normal walltime,\n * \t\tthe soft_walltime is set to the normal walltime.\n * @param[in] resresv - job to extend soft walltime\n * @param[in] server_time - current server time\n * @return extended soft walltime duration\n */\nlong\nextend_soft_walltime(resource_resv *resresv, time_t server_time)\n{\n\tresource_req *walltime_req;\n\tresource_req *soft_walltime_req;\n\n\tint extension = 0;\n\tint num_ext_over;\n\n\tlong job_duration = UNSPECIFIED;\n\tlong extended_duration = UNSPECIFIED;\n\n\tif (resresv == NULL)\n\t\treturn UNSPECIFIED;\n\n\tsoft_walltime_req = find_resource_req(resresv->resreq, allres[\"soft_walltime\"]);\n\twalltime_req = find_resource_req(resresv->resreq, allres[\"walltime\"]);\n\n\tif (soft_walltime_req == NULL) { /* Nothing to extend */\n\t\tif (walltime_req != NULL)\n\t\t\treturn walltime_req->amount;\n\t\telse\n\t\t\treturn JOB_INFINITY;\n\t}\n\n\tjob_duration = soft_walltime_req->amount;\n\n\t/* number of times the job has been extended */\n\tnum_ext_over = (server_time - resresv->job->stime) / job_duration;\n\n\textension = num_ext_over * job_duration;\n\textended_duration = job_duration + extension;\n\tif (walltime_req != NULL) {\n\t\tif (extended_duration > walltime_req->amount) {\n\t\t\textended_duration = walltime_req->amount;\n\t\t}\n\t}\n\treturn extended_duration;\n}\n\n/**\n * @brief   This function is used as a callback with resource_resv_filter(). 
It\tfinds out whether or not\n *\t    the job in question is appropriate to be preempted.\n * @param[in] job - job that is being analyzed\n * @param[in] arg - a pointer to resresv_filter structure which contains the job that could not run\n *\t\t    and a sched error structure specifying the reason why it could not run.\n *\n * @return - integer\n * @retval - 0 if job is not valid for preemption\n * @retval - 1 if the job is valid for preemption\n */\nstatic int\ncull_preemptible_jobs(resource_resv *job, const void *arg)\n{\n\tstruct resresv_filter *inp;\n\tresource_req *req_scan;\n\n\tif (arg == NULL || job == NULL)\n\t\treturn 0;\n\tinp = (struct resresv_filter *) arg;\n\tif (inp->job == NULL)\n\t\treturn 0;\n\n\t/* make sure that only running jobs are looked at */\n\tif (job->job->is_running == 0)\n\t\treturn 0;\n\n\tif (job->job->preempt >= inp->job->job->preempt)\n\t\treturn 0;\n\n\tswitch (inp->err->error_code) {\n\t\tcase SERVER_USER_RES_LIMIT_REACHED:\n\t\tcase SERVER_BYUSER_RES_LIMIT_REACHED:\n\t\t\tif ((job->user == inp->job->user) &&\n\t\t\t    find_resource_req(job->resreq, inp->err->rdef) != NULL)\n\t\t\t\treturn 1;\n\t\t\tbreak;\n\t\tcase QUEUE_USER_RES_LIMIT_REACHED:\n\t\tcase QUEUE_BYUSER_RES_LIMIT_REACHED:\n\t\t\tif ((job->job->queue == inp->job->job->queue) &&\n\t\t\t    (job->user == inp->job->user) &&\n\t\t\t    find_resource_req(job->resreq, inp->err->rdef) != NULL)\n\t\t\t\treturn 1;\n\t\t\tbreak;\n\t\tcase SERVER_GROUP_RES_LIMIT_REACHED:\n\t\tcase SERVER_BYGROUP_RES_LIMIT_REACHED:\n\t\t\tif ((job->group == inp->job->group) &&\n\t\t\t    find_resource_req(job->resreq, inp->err->rdef) != NULL)\n\t\t\t\treturn 1;\n\t\t\tbreak;\n\t\tcase QUEUE_GROUP_RES_LIMIT_REACHED:\n\t\tcase QUEUE_BYGROUP_RES_LIMIT_REACHED:\n\t\t\tif ((job->job->queue == inp->job->job->queue) &&\n\t\t\t    (job->group == inp->job->group) &&\n\t\t\t    find_resource_req(job->resreq, inp->err->rdef) != NULL)\n\t\t\t\treturn 1;\n\t\t\tbreak;\n\t\tcase 
SERVER_PROJECT_RES_LIMIT_REACHED:\n\t\tcase SERVER_BYPROJECT_RES_LIMIT_REACHED:\n\t\t\tif ((job->project == inp->job->project) &&\n\t\t\t    find_resource_req(job->resreq, inp->err->rdef) != NULL)\n\t\t\t\treturn 1;\n\t\t\tbreak;\n\t\tcase QUEUE_PROJECT_RES_LIMIT_REACHED:\n\t\tcase QUEUE_BYPROJECT_RES_LIMIT_REACHED:\n\t\t\tif ((job->job->queue == inp->job->job->queue) &&\n\t\t\t    (job->project == inp->job->project) &&\n\t\t\t    find_resource_req(job->resreq, inp->err->rdef) != NULL)\n\t\t\t\treturn 1;\n\t\t\tbreak;\n\t\tcase QUEUE_JOB_LIMIT_REACHED:\n\t\t\tif (job->job->queue == inp->job->job->queue)\n\t\t\t\treturn 1;\n\t\t\tbreak;\n\t\tcase SERVER_USER_LIMIT_REACHED:\n\t\tcase SERVER_BYUSER_JOB_LIMIT_REACHED:\n\t\t\tif (job->user == inp->job->user)\n\t\t\t\treturn 1;\n\t\t\tbreak;\n\t\tcase QUEUE_USER_LIMIT_REACHED:\n\t\tcase QUEUE_BYUSER_JOB_LIMIT_REACHED:\n\t\t\tif ((job->job->queue == inp->job->job->queue) &&\n\t\t\t    (job->user == inp->job->user))\n\t\t\t\treturn 1;\n\t\t\tbreak;\n\t\tcase SERVER_GROUP_LIMIT_REACHED:\n\t\tcase SERVER_BYGROUP_JOB_LIMIT_REACHED:\n\t\t\tif (job->group == inp->job->group)\n\t\t\t\treturn 1;\n\t\t\tbreak;\n\t\tcase QUEUE_GROUP_LIMIT_REACHED:\n\t\tcase QUEUE_BYGROUP_JOB_LIMIT_REACHED:\n\t\t\tif ((job->job->queue == inp->job->job->queue) &&\n\t\t\t    (job->group == inp->job->group))\n\t\t\t\treturn 1;\n\t\t\tbreak;\n\t\tcase SERVER_PROJECT_LIMIT_REACHED:\n\t\tcase SERVER_BYPROJECT_JOB_LIMIT_REACHED:\n\t\t\tif (job->project == inp->job->project)\n\t\t\t\treturn 1;\n\t\t\tbreak;\n\t\tcase QUEUE_PROJECT_LIMIT_REACHED:\n\t\tcase QUEUE_BYPROJECT_JOB_LIMIT_REACHED:\n\t\t\tif ((job->job->queue == inp->job->job->queue) &&\n\t\t\t    (job->project == inp->job->project))\n\t\t\t\treturn 1;\n\t\t\tbreak;\n\t\tcase SERVER_JOB_LIMIT_REACHED:\n\t\t\treturn 1;\n\n\t\tcase QUEUE_RESOURCE_LIMIT_REACHED:\n\t\t\tif (job->job->queue != inp->job->job->queue)\n\t\t\t\treturn 0;\n\t\t\t/* fall through */\n\t\tcase SERVER_RESOURCE_LIMIT_REACHED:\n\t\t\tfor (req_scan = job->resreq; 
req_scan != NULL; req_scan = req_scan->next) {\n\t\t\t\tif (req_scan->def == inp->err->rdef && req_scan->amount > 0)\n\t\t\t\t\treturn 1;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase INSUFFICIENT_RESOURCE:\n\t\t\t/* special check for vnode and host resource because those resources\n\t\t\t * do not get into chunk level resources. So in such a case we\n\t\t\t * compare the resource name with the chunk name\n\t\t\t */\n\t\t\tif (inp->err->rdef == allres[\"vnode\"]) {\n\t\t\t\tif (inp->err->arg2 != NULL && find_node_info(job->ninfo_arr, inp->err->arg2) != NULL)\n\t\t\t\t\treturn 1;\n\t\t\t} else if (inp->err->rdef == allres[\"host\"]) {\n\t\t\t\tif (inp->err->arg2 != NULL && find_node_by_host(job->ninfo_arr, inp->err->arg2) != NULL)\n\t\t\t\t\treturn 1;\n\t\t\t} else {\n\t\t\t\tif (inp->err->rdef->type.is_non_consumable) {\n\t\t\t\t\t/* In the non-consumable case, we need to pass the job on.\n\t\t\t\t\t * There is a case when a job requesting a non-specific\n\t\t\t\t\t * select is allocated a node with a non-consumable.\n\t\t\t\t\t * We will check nodes to see if they are useful in\n\t\t\t\t\t * select_index_to_preempt\n\t\t\t\t\t */\n\t\t\t\t\treturn 1;\n\t\t\t\t}\n\t\t\t\tfor (int index = 0; job->select->chunks[index] != NULL; index++) {\n\t\t\t\t\tfor (req_scan = job->select->chunks[index]->req; req_scan != NULL; req_scan = req_scan->next) {\n\t\t\t\t\t\tif (req_scan->def == inp->err->rdef) {\n\t\t\t\t\t\t\tif (req_scan->type.is_non_consumable ||\n\t\t\t\t\t\t\t    req_scan->amount > 0) {\n\t\t\t\t\t\t\t\treturn 1;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tbreak;\n\t\tcase INSUFFICIENT_QUEUE_RESOURCE:\n\t\t\tif (job->job->queue != inp->job->job->queue)\n\t\t\t\treturn 0;\n\t\t\t/* fall through */\n\t\tcase INSUFFICIENT_SERVER_RESOURCE:\n\t\t\tif (find_resource_req(job->resreq, inp->err->rdef))\n\t\t\t\treturn 1;\n\t\t\tbreak;\n\t\tdefault:\n\t\t\treturn 0;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief   This function looks at a list of running jobs and creates a subset of 
preemptable candidates\n *\t    according to the high priority job and the reason why it couldn't run\n * @param[in] arr - List of running jobs (preemptable candidates)\n * @param[in] job - high priority job that couldn't run\n * @param[in] err - error structure that has the reason why high priority job couldn't run\n *\n * @return - resource_resv **\n * @retval - a newly allocated list of preemptable candidates\n * @retval - NULL if no jobs can be preempted\n */\nresource_resv **\nfilter_preemptable_jobs(resource_resv **arr, resource_resv *job, schd_error *err)\n{\n\tstruct resresv_filter arg;\n\tresource_resv **temp = NULL;\n\tint i;\n\tint arr_length;\n\n\tif ((arr == NULL) || (job == NULL) || (err == NULL))\n\t\treturn NULL;\n\n\tarr_length = count_array(arr);\n\n\tswitch (err->error_code) {\n\t\t/* list of resources we care about */\n\t\tcase SERVER_USER_RES_LIMIT_REACHED:\n\t\tcase SERVER_BYUSER_RES_LIMIT_REACHED:\n\t\tcase QUEUE_USER_RES_LIMIT_REACHED:\n\t\tcase QUEUE_BYUSER_RES_LIMIT_REACHED:\n\n\t\tcase SERVER_GROUP_RES_LIMIT_REACHED:\n\t\tcase SERVER_BYGROUP_RES_LIMIT_REACHED:\n\t\tcase QUEUE_GROUP_RES_LIMIT_REACHED:\n\t\tcase QUEUE_BYGROUP_RES_LIMIT_REACHED:\n\n\t\tcase SERVER_PROJECT_RES_LIMIT_REACHED:\n\t\tcase SERVER_BYPROJECT_RES_LIMIT_REACHED:\n\t\tcase QUEUE_PROJECT_RES_LIMIT_REACHED:\n\t\tcase QUEUE_BYPROJECT_RES_LIMIT_REACHED:\n\n\t\tcase SERVER_JOB_LIMIT_REACHED:\n\t\tcase SERVER_RESOURCE_LIMIT_REACHED:\n\t\tcase QUEUE_JOB_LIMIT_REACHED:\n\t\tcase QUEUE_RESOURCE_LIMIT_REACHED:\n\n\t\tcase SERVER_USER_LIMIT_REACHED:\n\t\tcase SERVER_BYUSER_JOB_LIMIT_REACHED:\n\t\tcase QUEUE_USER_LIMIT_REACHED:\n\t\tcase QUEUE_BYUSER_JOB_LIMIT_REACHED:\n\n\t\tcase SERVER_GROUP_LIMIT_REACHED:\n\t\tcase SERVER_BYGROUP_JOB_LIMIT_REACHED:\n\t\tcase QUEUE_GROUP_LIMIT_REACHED:\n\t\tcase QUEUE_BYGROUP_JOB_LIMIT_REACHED:\n\n\t\tcase SERVER_PROJECT_LIMIT_REACHED:\n\t\tcase SERVER_BYPROJECT_JOB_LIMIT_REACHED:\n\t\tcase QUEUE_PROJECT_LIMIT_REACHED:\n\t\tcase 
QUEUE_BYPROJECT_JOB_LIMIT_REACHED:\n\n\t\tcase INSUFFICIENT_RESOURCE:\n\t\tcase INSUFFICIENT_QUEUE_RESOURCE:\n\t\tcase INSUFFICIENT_SERVER_RESOURCE:\n\t\t\targ.job = job;\n\t\t\targ.err = err;\n\t\t\ttemp = resource_resv_filter(arr, arr_length, cull_preemptible_jobs, &arg, 0);\n\t\t\tif (temp == NULL)\n\t\t\t\treturn NULL;\n\t\t\tif (temp[0] == NULL) {\n\t\t\t\tfree(temp);\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\treturn temp;\n\t\tdefault:\n\t\t\t/* For all other errors, return a copy of the list */\n\t\t\ttemp = static_cast<resource_resv **>(malloc((arr_length + 1) * sizeof(resource_resv *)));\n\t\t\tif (temp == NULL) {\n\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG, __func__, MEM_ERR_MSG);\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tfor (i = 0; arr[i] != NULL; i++)\n\t\t\t\ttemp[i] = arr[i];\n\t\t\ttemp[i] = NULL;\n\t\t\treturn temp;\n\t}\n\treturn NULL;\n}\n\n/**\n * @brief   This function looks at the job's depend attribute string and creates\n *\t    an array of job ids having runone dependency.\n * @param[in] depend_val - job's dependency string\n *\n * @return - char **\n * @retval - a newly allocated list of jobs with runone dependency\n * @retval - NULL in case of error\n */\nstatic char **\nparse_runone_job_list(char *depend_val)\n{\n\tchar *start;\n\tconst char *depend_type = \"runone:\";\n\tint i;\n\tint len = 1;\n\tchar *r;\n\tchar **ret = NULL;\n\tchar *depend_str = NULL;\n\n\tif (depend_val == NULL)\n\t\treturn NULL;\n\telse\n\t\tdepend_str = string_dup(depend_val);\n\n\tstart = strstr(depend_str, depend_type);\n\tif (start == NULL) {\n\t\tfree(depend_str);\n\t\treturn NULL;\n\t}\n\n\tr = start + strlen(depend_type);\n\tfor (i = 0; r[i] != '\\0'; i++) {\n\t\tif (r[i] == ':')\n\t\t\tlen++;\n\t}\n\n\tret = static_cast<char **>(calloc(len + 1, sizeof(char *)));\n\tif (ret == NULL) {\n\t\tfree(depend_str);\n\t\treturn NULL;\n\t}\n\tfor (i = 0; i < len; i++) {\n\t\tauto job_delim = strcspn(r, \":\");\n\t\tr[job_delim] = '\\0';\n\t\tauto 
svr_delim = strcspn(r, \"@\");\n\t\tr[svr_delim] = '\\0';\n\t\tret[i] = string_dup(r);\n\t\tif (ret[i] == NULL) {\n\t\t\tfree_ptr_array(ret);\n\t\t\tfree(depend_str);\n\t\t\treturn NULL;\n\t\t}\n\t\tr = r + job_delim + 1;\n\t}\n\tfree(depend_str);\n\treturn ret;\n}\n\n/**\n * @brief   This function processes every job's depend attribute and\n *\t    associates jobs with a runone dependency to their dependent_jobs lists.\n * @param[in] sinfo - server info structure\n *\n * @return - void\n */\nvoid\nassociate_dependent_jobs(server_info *sinfo)\n{\n\tint i;\n\tchar **job_arr = NULL;\n\n\tif (sinfo == NULL)\n\t\treturn;\n\tfor (i = 0; sinfo->jobs[i] != NULL; i++) {\n\t\tif (sinfo->jobs[i]->job->depend_job_str != NULL) {\n\t\t\tjob_arr = parse_runone_job_list(sinfo->jobs[i]->job->depend_job_str);\n\t\t\tif (job_arr != NULL) {\n\t\t\t\tint j;\n\t\t\t\tint len = count_array(job_arr);\n\t\t\t\tsinfo->jobs[i]->job->dependent_jobs = static_cast<resource_resv **>(calloc((len + 1), sizeof(resource_resv *)));\n\t\t\t\tif (sinfo->jobs[i]->job->dependent_jobs == NULL) {\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG, __func__, MEM_ERR_MSG);\n\t\t\t\t\tfree_ptr_array(job_arr);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tsinfo->jobs[i]->job->dependent_jobs[len] = NULL;\n\t\t\t\tfor (j = 0; job_arr[j] != NULL; j++) {\n\t\t\t\t\tresource_resv *jptr;\n\t\t\t\t\tjptr = find_resource_resv(sinfo->jobs, job_arr[j]);\n\t\t\t\t\tif (jptr != NULL)\n\t\t\t\t\t\tsinfo->jobs[i]->job->dependent_jobs[j] = jptr;\n\t\t\t\t\tfree(job_arr[j]);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif (job_arr != NULL) {\n\t\t\tfree(job_arr);\n\t\t\tjob_arr = NULL;\n\t\t}\n\t}\n\treturn;\n}\n\n/**\n * @brief This function associates the subjob passed in to its parent job.\n *\n * @param[in] pjob\tThe subjob that needs association\n * @param[in] sinfo\tserver info structure\n *\n * @return int\n * @retval 1 - Failure\n * @retval 0 - Success\n */\nint\nassociate_array_parent(resource_resv *pjob, server_info *sinfo)\n{\n\tresource_resv *parent = NULL;\n\n\tif (pjob == NULL || sinfo == NULL || !pjob->job->is_subjob)\n\t\treturn 1;\n\n\tparent = find_resource_resv(sinfo->jobs, pjob->job->array_id);\n\tif (parent == 
NULL)\n\t\treturn 1;\n\n\tpjob->job->parent_job = parent;\n\tparent->job->running_subjobs++;\n\n\treturn 0;\n}\n\n/**\n *\t@brief set job's start, end, duration, and STF attributes if needed\n *\n * \t@param[in] pbs_sd - used to set estimated.soft_walltime\n * \t@param[in] resresv - the job\n * \t@param[in] server_time - current time in cycle\n * \n * \t@return void\n */\nvoid\nset_job_times(int pbs_sd, resource_resv *resresv, time_t server_time)\n{\n\tlong duration;\n\tresource_req *soft_walltime_req = NULL;\n\tresource_req *walltime_req = NULL;\n\t/* Find out if it is a shrink-to-fit job.\n\t * If yes, set the duration to max walltime.\n\t */\n\tauto req = find_resource_req(resresv->resreq, allres[\"min_walltime\"]);\n\tif (req != NULL) {\n\t\tresresv->is_shrink_to_fit = true;\n\t\t/* Set the min duration */\n\t\tresresv->min_duration = (time_t) req->amount;\n\t\treq = find_resource_req(resresv->resreq, allres[\"max_walltime\"]);\n\n#ifdef NAS /* localmod 026 */\n\t\t/* if no max_walltime is set then we want to look at what walltime\n\t\t * is (if it's set at all) - it may be user-specified, queue default,\n\t\t * queue max, or server max.\n\t\t */\n\t\tif (req == NULL) {\n\t\t\treq = find_resource_req(resresv->resreq, allres[\"walltime\"]);\n\n\t\t\t/* if walltime is set, use it if it's greater than min_walltime */\n\t\t\tif (req != NULL && resresv->min_duration > req->amount) {\n\t\t\t\treq = find_resource_req(resresv->resreq, allres[\"min_walltime\"]);\n\t\t\t}\n\t\t}\n#endif /* localmod 026 */\n\t}\n\n\tif ((req == NULL) || (resresv->job->is_running == true)) {\n\t\tsoft_walltime_req = find_resource_req(resresv->resreq, allres[\"soft_walltime\"]);\n\t\twalltime_req = find_resource_req(resresv->resreq, allres[\"walltime\"]);\n\t\tif (soft_walltime_req != NULL)\n\t\t\treq = soft_walltime_req;\n\t\telse\n\t\t\treq = walltime_req;\n\t}\n\n\tif (req != NULL)\n\t\tduration = (long) req->amount;\n\telse /* set to virtual job infinity: 5 years */\n\t\tduration = 
JOB_INFINITY;\n\n\tif (walltime_req != NULL)\n\t\tresresv->hard_duration = (long) walltime_req->amount;\n\telse if (resresv->min_duration != UNSPECIFIED)\n\t\tresresv->hard_duration = resresv->min_duration;\n\telse\n\t\tresresv->hard_duration = JOB_INFINITY;\n\n\tif (resresv->job->stime != UNSPECIFIED &&\n\t    !(resresv->job->is_queued || resresv->job->is_suspended) &&\n\t    resresv->ninfo_arr != NULL) {\n\t\tauto start = resresv->job->stime;\n\t\ttime_t end;\n\n\t\t/* if a job is exiting, then its end time can be more closely\n\t\t\t * estimated by setting it to now + EXITING_TIME\n\t\t\t */\n\t\tif (resresv->job->is_exiting)\n\t\t\tend = server_time + EXITING_TIME;\n\t\t/* Normal Case: Job's end is start + duration and it ends in the future */\n\t\telse if (start + duration >= server_time)\n\t\t\tend = start + duration;\n\t\t/* Duration has been exceeded - either extend soft_walltime or expect the job to be killed */\n\t\telse {\n\t\t\tif (soft_walltime_req != NULL) {\n\t\t\t\tduration = extend_soft_walltime(resresv, server_time);\n\t\t\t\tif (duration > soft_walltime_req->amount) {\n\t\t\t\t\tchar timebuf[128];\n\t\t\t\t\tconvert_duration_to_str(duration, timebuf, 128);\n\t\t\t\t\tupdate_job_attr(pbs_sd, resresv, ATTR_estimated, \"soft_walltime\", timebuf, NULL, UPDATE_NOW);\n\t\t\t\t}\n\t\t\t} else\n\t\t\t\t/* Job has exceeded its walltime.  It'll soon be killed and be put into the exiting state.\n\t\t\t\t * Change the duration of the job to match the current situation and assume it will end in\n\t\t\t\t * now + EXITING_TIME\n\t\t\t\t */\n\t\t\t\tduration = server_time - start + EXITING_TIME;\n\t\t\tend = start + duration;\n\t\t}\n\t\tresresv->start = start;\n\t\tresresv->end = end;\n\t}\n\tresresv->duration = duration;\n}\n"
  },
  {
    "path": "src/scheduler/job_info.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _JOB_INFO_H\n#define _JOB_INFO_H\n\n#include <pbs_ifl.h>\n#include \"data_types.h\"\n\n/*\n *\tquery_job - takes info from a batch_status about a job and puts\n */\nresource_resv *query_job(int pbs_sd, struct batch_status *job, server_info *sinfo, queue_info *qinfo, schd_error *err);\n\n/*\n * pthread routine for querying a chunk of jobs\n */\nvoid query_jobs_chunk(th_data_query_jinfo *data);\n\n/* create an array of jobs for a particular queue */\nresource_resv **query_jobs(status *policy, int pbs_sd, queue_info *qinfo, resource_resv **pjobs, const std::string &queue_name);\n\n/*\n *\tnew_job_info  - allocate and initialize new job_info structure\n */\n#ifdef NAS /* localmod 005 */\njob_info *new_job_info(void);\n#else\njob_info *new_job_info();\n#endif /* localmod 005 */\n\n/*\n *\tfree_job_info - free all the memory used by a job_info structure\n */\n\nvoid free_job_info(job_info *jinfo);\n\n/*\n *      set_job_state - set the state flag in a job_info structure\n *                      i.e. 
the is_* bit\n */\nint set_job_state(const char *state, job_info *jinfo);\n\n/* update_job_attr - update job attributes on the server */\nint\nupdate_job_attr(int pbs_sd, resource_resv *resresv, const char *attr_name,\n\t\tconst char *attr_resc, const char *attr_value, struct attrl *extra, unsigned int flags);\n\n/* send delayed job attribute updates for job using send_attr_updates() */\nint send_job_updates(int pbs_sd, resource_resv *job);\n\n/* send delayed attributes to the server for a job */\nint send_attr_updates(int virtual_sd, resource_resv *resresv, struct attrl *pattr);\n\npreempt_job_info *send_preempt_jobs(int virtual_sd, char **preempt_jobs_list);\n\nint send_sigjob(int virtual_sd, resource_resv *resresv, const char *signal, char *extend);\n\nstruct batch_status *send_selstat(int virtual_fd, struct attropl *attrib, struct attrl *rattrib, char *extend);\n\n/*\n *\n *      unset_job_attr - unset job attributes on the server\n *\n *\t  pbs_sd     - connection to the pbs_server\n *\t  resresv    - job to update\n *\t  attr_name  - the name of the attribute to unset\n *\t  flags - UPDATE_NOW - call send_attr_updates() to update the attribute now\n *\t\t  UPDATE_LATER - attach attribute change to job to be sent all at once\n *\t\t\t\tfor the job.  NOTE: Only the jobs that are part\n *\t\t\t\tof the server in main_sched_loop() will be updated in this way.\n *\n *      returns\n *              1: attribute was unset\n *              0: no attribute was unset\n *\n */\nint\nunset_job_attr(int pbs_sd, resource_resv *resresv, const char *attr_name, unsigned int flags);\n\n/*\n *      update_jobs_cant_run - update an array of jobs which can not run\n */\nvoid\nupdate_jobs_cant_run(int pbs_sd, resource_resv **resresv_arr,\n\t\t     resource_resv *start, struct schd_error *err, int start_where);\n\n/*\n *\tupdate_job_comment - update a job's comment attribute.  
If the job's\n *\t\t\t     comment attr is identical, don't update\n *\n *\t  pbs_sd - pbs connection descriptor\n *\t  resresv - the job to update\n *\t  comment - the comment string\n *\n *\treturns 1 if the comment was updated\n *\t\t0 if not\n */\nint update_job_comment(int pbs_sd, resource_resv *resresv, char *comment);\n\n/*\n *\ttranslate_fail_code - translate Scheduler failure code into\n *\t\t\t\ta comment and log message\n */\nint translate_fail_code(schd_error *err, char *comment_msg, char *log_msg);\n\n/*\n *      preempt_job - preempt a job to allow another job to run.  First the\n *                    job will try to be suspended, then checkpointed and\n *                    finally forcibly requeued\n */\nint preempt_job(status *policy, int pbs_sd, resource_resv *jinfo, server_info *sinfo);\n\n/*\n *      find_and_preempt_jobs - find the jobs to preempt and then preempt them\n */\nint find_and_preempt_jobs(status *policy, int pbs_sd, resource_resv *hjob, server_info *sinfo, schd_error *err);\n\n/*\n *      find_jobs_to_preempt - find jobs to preempt in order to run a high\n *                             priority job\n */\nint *\nfind_jobs_to_preempt(status *policy, resource_resv *hjob,\n\t\t     server_info *sinfo, int *fail_list, int *no_of_jobs);\n\n/*\n *      select_index_to_preempt - select the best candidate out of the running\n *                              jobs to preempt\n */\nlong\nselect_index_to_preempt(status *policy, resource_resv *hjob,\n\t\t\tresource_resv **rjobs, long skipto, schd_error *err,\n\t\t\tint *fail_list);\n\n/*\n *      preempt_level - take a preemption priority and return a preemption\n *                      level\n */\nint preempt_level(unsigned int prio);\n\n/*\n *      set_preempt_prio - set a job's preempt field to the correct value\n */\nvoid set_preempt_prio(resource_resv *job, queue_info *qinfo, server_info *sinfo);\n\n/*\n * create subjob name from subjob id and array name\n *\n *\tarray_id - the parent array 
name\n *\tindex    - subjob index\n *\n * return created subjob name\n */\nstd::string create_subjob_name(const std::string &array_id, int index);\n\n/*\n *\tcreate_subjob_from_array - create a resource_resv structure for a subjob\n *\t\t\t\t   from a job array structure.  The subjob\n *\t\t\t\t   will be in state 'Q'\n */\nresource_resv *\ncreate_subjob_from_array(resource_resv *array, int index, const std::string &subjob_name);\n\n/*\n *\tupdate_array_on_run - update a job array object when a subjob is run\n */\nint update_array_on_run(job_info *array, job_info *subjob);\n/*\n *\tdup_job_info - duplicate the information in a job_info structure\n */\njob_info *dup_job_info(job_info *ojinfo, queue_info *nqinfo, server_info *nsinfo);\n\n/*\n *\tis_job_array - is a job name a job array range\n *\t\t\t  valid_form: 1234[]\n *\t\t\t  valid_form: 1234[N]\n *\t\t\t  valid_form: 1234[N-M]\n *\n *\treturns 1 if jobname is a job array\n *\t\t2 if jobname is a subjob\n *\t\t3 if jobname is a range\n *\t\t0 if it is not a job array\n */\nint is_job_array(char *jobname);\n\n/*\n *\n *\tmodify_job_array_for_qrun - modify a job array for qrun -\n * \t\t\t\t    set queued_subjobs to just the\n *\t\t\t\t    range which is being run\n *\t\t\t\t    set qrun_job on server\n */\nint modify_job_array_for_qrun(server_info *sinfo, char *jobid);\n\n/*\n *\tqueue_subjob - create a subjob from a job array and queue it\n *\n *\t  array - job array to create the next subjob from\n *\t  sinfo - the server the job array is in\n *\t  qinfo - the queue the job array is in\n *\n *\treturns new subjob or NULL on error\n *\tNOTE: subjob will be attached to the server/queue job lists\n */\nresource_resv *\nqueue_subjob(resource_resv *array, server_info *sinfo,\n\t     queue_info *qinfo);\n/*\n *\tformula_evaluate - evaluate a math formula for jobs based on their resources\n *\t\tNOTE: currently done through embedded python interpreter\n */\n\nsch_resource_t formula_evaluate(const char *formula, 
resource_resv *resresv, resource_req *resreq);\n\n/*\n *\n *      update_accruetype - Updates accrue_type of job on server.\n *                          The accrue_type is determined from the values\n *                          of err_code and mode. If resresv is a job\n *                          array, special action is taken.\n *\n *        pbs_sd   - connection to pbs_server\n *        sinfo    - pointer to server\n *        mode     - mode of operation\n *        err_code - error code to evaluate\n *        resresv  - pointer to job\n *\n */\n\nvoid update_accruetype(int pbs_sd, server_info *sinfo, enum update_accruetype_mode mode, enum sched_error_code err_code, resource_resv *resresv);\n\n/**\n * @brief\n *\treturn aoe from select spec\n *\n * @param[in]\tselect - select spec of job/reservation\n *\n * @return\tchar*\n * @retval\tNULL - no aoe found or failure encountered\n * @retval\taoe name string\n */\nchar *getaoename(selspec *select);\n\n/**\n * @brief\n *\treturn eoe from select spec\n *\n * @param[in]\tselect - select spec of job/reservation\n *\n * @return\tchar*\n * @retval\tNULL - no eoe found or failure encountered\n * @retval\teoe name string\n */\nchar *geteoename(selspec *select);\n\n/*\n *\n *\tupdate_estimated_attrs - update the estimated.start_time and\n *\t\t\t\t estimated.exec_vnode attributes on a job\n *\n *\t  \\param pbs_sd     - connection descriptor to pbs server\n *\t  \\param job        - job to update\n *\t  \\param start_time - start time of job\n *\t  \\param exec_vnode - exec_vnode of job or NULL to create it from\n *\t\t\t      job -> nspec_arr\n *\n *\t\\return 1 if attributes were successfully updated 0 if not\n */\nint\nupdate_estimated_attrs(int pbs_sd, resource_resv *job,\n\t\t       time_t start_time, char *exec_vnode, int force);\n\n/*\n *\n *\tcheck_preempt_targets_for_none - This function checks if preemption set has been configured as \"NONE\"\n *\t\t\t\t\tIf it's found that resources_default.preempt_targets = 
NONE\n *\t\t\t\t\tthen this function returns PREEMPT_NONE.\n *\tres_list - list of resources created from a comma-separated resource list.\n *\n *\treturn - int\n *\tretval - PREEMPT_NONE :If preemption set is set to \"NONE\"\n *\tretval - 0 :If preemption set is not set as \"NONE\"\n */\nint check_preempt_targets_for_none(char **res_list);\n\n/*\n *\n *  @brief checks whether the IFL interface failed because it was a finished job\n *\n *  @param[in] error - pbs_errno set by server\n *\n *  @retval 1 if job is a finished job\n *  @retval 0 if job is not a finished job\n */\nint is_finished_job(int error);\n\n/*\n * compare two jobs to see if they overlap, using a complete err list as\n * similarity criteria.\n */\nint preemption_similarity(resource_resv *hjob, resource_resv *pjob, schd_error *full_err);\n\n/* Equivalence class functions */\nresresv_set *new_resresv_set(void);\nvoid free_resresv_set(resresv_set *rset);\nvoid free_resresv_set_array(resresv_set **rsets);\nresresv_set *dup_resresv_set(resresv_set *oset, server_info *nsinfo);\nresresv_set **dup_resresv_set_array(resresv_set **osets, server_info *nsinfo);\n\n/* create a resresv_set with a resresv as a template */\nresresv_set *create_resresv_set_by_resresv(status *policy, server_info *sinfo, resource_resv *resresv);\n\n/* find a resresv_set by its internal components */\nint find_resresv_set(status *policy, resresv_set **rsets, const char *user, const char *group, const char *project, selspec *sel, place *pl, resource_req *req, queue_info *qinfo);\n\n/* find a resresv_set with a resresv as a template */\nint find_resresv_set_by_resresv(status *policy, resresv_set **rsets, resource_resv *resresv);\n\n/* create the array of resdef's to use to create resresv->req */\nstd::unordered_set<resdef *> create_resresv_sets_resdef(status *policy);\n\n/* Create an array of resresv_sets based on sinfo */\nresresv_set **create_resresv_sets(status *policy, server_info *sinfo);\n/*\n * This function creates a string 
and updates the resources_released job\n *  attribute.\n *  The string created will be similar to how exec_vnode is presented,\n *  for example: (node1:ncpus=8)+(node2:ncpus=8)\n */\nvoid create_res_released(status *policy, resource_resv *pjob);\n\n/*\n * This function populates the resreleased job structure for a particular job.\n */\nstd::vector<nspec *> create_res_released_array(status *policy, resource_resv *resresv);\n\n/*\n * @brief create a resource_rel array for a job by accumulating all of the RASSN\n *\t    resources in a resources_released nspec array.\n */\nresource_req *create_resreq_rel_list(status *policy, resource_resv *pjob);\n\n/* Returns the extended duration of a job that has exceeded its soft_walltime */\nlong extend_soft_walltime(resource_resv *resresv, time_t server_time);\n\n/* Returns a list of preemptable candidates */\nresource_resv **filter_preemptable_jobs(resource_resv **arr, resource_resv *job, schd_error *err);\n\n/*\n * This function processes every job's depend attribute and\n * associates jobs with a runone dependency to their dependent_jobs lists.\n */\nvoid associate_dependent_jobs(server_info *sinfo);\n\n/* This function associates the job passed in with its parent job */\nint associate_array_parent(resource_resv *pjob, server_info *sinfo);\n\n/* Set start, end, duration, and possibly STF parts of the job */\nvoid set_job_times(int pbs_sd, resource_resv *resresv, time_t server_time);\n\n#endif /* _JOB_INFO_H */\n
  },
  {
    "path": "src/scheduler/limits.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    limits.c\n *\n * @brief\n * \t\tlimits.c - This file contains functions related to limit information.\n *\n * Functions included are:\n * \tlim_alloc_liminfo()\n * \tlim_dup_liminfo()\n * \tlim_free_liminfo()\n * \tis_reslimattr()\n * \tis_runlimattr()\n * \tis_oldlimattr()\n * \tlim_setlimits()\n * \thas_hardlimits()\n * \thas_softlimits()\n * \tcheck_limits()\n * \tcheck_soft_limits()\n * \tcheck_server_max_user_run()\n * \tcheck_server_max_group_run()\n * \tcheck_server_max_user_res()\n * \tcheck_server_max_group_res()\n * \tcheck_queue_max_user_run()\n * \tcheck_queue_max_group_run()\n * \tcheck_queue_max_user_res()\n * \tcheck_queue_max_group_res()\n * \tcheck_queue_max_res()\n * \tcheck_server_max_res()\n * \tcheck_server_max_run()\n * \tcheck_queue_max_run()\n * \tcheck_queue_max_run_soft()\n * \tcheck_queue_max_user_run_soft()\n * \tcheck_queue_max_group_run_soft()\n * \tcheck_queue_max_user_res_soft()\n * \tcheck_queue_max_group_res_soft()\n * \tcheck_server_max_run_soft()\n * \tcheck_server_max_user_run_soft()\n * \tcheck_server_max_group_run_soft()\n * \tcheck_server_max_user_res_soft()\n * \tcheck_server_max_group_res_soft()\n * \tcheck_server_max_res_soft()\n * \tcheck_queue_max_res_soft()\n * \tcheck_max_group_res()\n * \tcheck_max_group_res_soft()\n * \tcheck_max_user_res()\n * \tcheck_max_user_res_soft()\n * 
\tlim_setreslimits()\n * \tclear_limres()\n * \tlim_setrunlimits()\n * \tlim_setoldlimits()\n * \tlim_dup_ctx()\n * \tis_hardlimit()\n * \tlim_gengroupreskey()\n * \tlim_genprojectreskey()\n * \tlim_genuserreskey()\n * \tlim_callback()\n * \tlim_get()\n * \tschderr_args_q()\n * \tschderr_args_q_res()\n * \tschderr_args_server()\n * \tschderr_args_server_res()\n * \tcheck_max_project_res()\n * \tcheck_max_project_res_soft()\n * \tcheck_server_max_project_res()\n * \tcheck_server_max_project_run_soft()\n * \tcheck_server_max_project_res_soft()\n * \tcheck_queue_max_project_res()\n * \tcheck_queue_max_project_run_soft()\n * \tcheck_queue_max_project_res_soft()\n * \tcheck_server_max_project_run()\n * \tcheck_queue_max_project_run()\n *\n */\n#include <unistd.h>\n#include <stdlib.h>\n#include <errno.h>\n#include <stdio.h>\n#include <string.h>\n#include <assert.h>\n#include \"pbs_config.h\"\n#include \"pbs_ifl.h\"\n#include \"data_types.h\"\n#include \"resource_resv.h\"\n#include \"misc.h\"\n#include \"log.h\"\n#include \"check.h\"\n#include \"limits_if.h\"\n#include \"simulate.h\"\n#include \"resource.h\"\n#include \"globals.h\"\n\nclass limcounts {\n      public:\n\tcounts_umap user;\n\tcounts_umap group;\n\tcounts_umap project;\n\tcounts_umap all;\n\tlimcounts() = delete;\n\tlimcounts(const counts_umap &ruser,\n\t\t  const counts_umap &rgroup,\n\t\t  const counts_umap &rproject,\n\t\t  const counts_umap &rall);\n\tlimcounts(const limcounts &);\n\tlimcounts &operator=(const limcounts &);\n\t~limcounts();\n};\n\nstatic int\ncheck_max_group_res(resource_resv *, counts_umap &,\n\t\t    resdef **, void *);\nstatic int\ncheck_max_project_res(resource_resv *, counts_umap &,\n\t\t      resdef **, void *);\nstatic int\ncheck_max_user_res(resource_resv *, counts_umap &,\n\t\t   resdef **, void *);\nstatic int\ncheck_max_group_res_soft(resource_resv *,\n\t\t\t counts_umap &, void *, int);\nstatic int\ncheck_max_project_res_soft(resource_resv *,\n\t\t\t   counts_umap &, void *, 
int);\nstatic int\ncheck_max_user_res_soft(resource_resv **, resource_resv *,\n\t\t\tcounts_umap &, void *, int);\nstatic int\ncheck_server_max_user_run(server_info *, queue_info *,\n\t\t\t  resource_resv *, limcounts *, limcounts *, schd_error *);\nstatic int\ncheck_server_max_group_run(server_info *, queue_info *,\n\t\t\t   resource_resv *, limcounts *, limcounts *, schd_error *);\nstatic int\ncheck_server_max_project_run(server_info *, queue_info *,\n\t\t\t     resource_resv *, limcounts *, limcounts *, schd_error *);\nstatic int\ncheck_server_max_user_res(server_info *, queue_info *,\n\t\t\t  resource_resv *, limcounts *, limcounts *, schd_error *);\nstatic int\ncheck_server_max_group_res(server_info *, queue_info *,\n\t\t\t   resource_resv *, limcounts *, limcounts *, schd_error *);\nstatic int\ncheck_server_max_project_res(server_info *, queue_info *,\n\t\t\t     resource_resv *, limcounts *, limcounts *, schd_error *);\nstatic int\ncheck_queue_max_user_run(server_info *, queue_info *,\n\t\t\t resource_resv *, limcounts *, limcounts *, schd_error *);\nstatic int\ncheck_queue_max_group_run(server_info *, queue_info *,\n\t\t\t  resource_resv *, limcounts *, limcounts *, schd_error *);\nstatic int\ncheck_queue_max_project_run(server_info *, queue_info *,\n\t\t\t    resource_resv *, limcounts *, limcounts *, schd_error *);\nstatic int\ncheck_queue_max_user_res(server_info *, queue_info *,\n\t\t\t resource_resv *, limcounts *, limcounts *, schd_error *);\nstatic int\ncheck_queue_max_group_res(server_info *, queue_info *,\n\t\t\t  resource_resv *, limcounts *, limcounts *, schd_error *);\nstatic int\ncheck_queue_max_project_res(server_info *, queue_info *,\n\t\t\t    resource_resv *, limcounts *, limcounts *, schd_error *);\nstatic int\ncheck_server_max_res(server_info *, queue_info *,\n\t\t     resource_resv *, limcounts *, limcounts *, schd_error *);\nstatic int\ncheck_queue_max_res(server_info *, queue_info *,\n\t\t    resource_resv *, limcounts *, limcounts *, 
schd_error *);\nstatic int\ncheck_server_max_run(server_info *, queue_info *,\n\t\t     resource_resv *, limcounts *, limcounts *, schd_error *);\nstatic int\ncheck_queue_max_run(server_info *, queue_info *,\n\t\t    resource_resv *, limcounts *, limcounts *, schd_error *);\nstatic int\ncheck_server_max_user_run_soft(server_info *, queue_info *,\n\t\t\t       resource_resv *);\nstatic int\ncheck_server_max_group_run_soft(server_info *, queue_info *,\n\t\t\t\tresource_resv *);\nstatic int\ncheck_server_max_project_run_soft(server_info *, queue_info *,\n\t\t\t\t  resource_resv *);\nstatic int\ncheck_server_max_user_res_soft(server_info *, queue_info *,\n\t\t\t       resource_resv *);\nstatic int\ncheck_server_max_group_res_soft(server_info *, queue_info *,\n\t\t\t\tresource_resv *);\nstatic int\ncheck_server_max_project_res_soft(server_info *, queue_info *,\n\t\t\t\t  resource_resv *);\nstatic int\ncheck_server_max_res_soft(server_info *, queue_info *,\n\t\t\t  resource_resv *);\nstatic int\ncheck_queue_max_res_soft(server_info *, queue_info *,\n\t\t\t resource_resv *);\nstatic int\ncheck_queue_max_user_run_soft(server_info *, queue_info *,\n\t\t\t      resource_resv *);\nstatic int\ncheck_queue_max_group_run_soft(server_info *, queue_info *,\n\t\t\t       resource_resv *);\nstatic int\ncheck_queue_max_project_run_soft(server_info *, queue_info *,\n\t\t\t\t resource_resv *);\nstatic int\ncheck_queue_max_user_res_soft(server_info *, queue_info *,\n\t\t\t      resource_resv *);\nstatic int\ncheck_queue_max_group_res_soft(server_info *, queue_info *,\n\t\t\t       resource_resv *);\nstatic int\ncheck_queue_max_project_res_soft(server_info *, queue_info *,\n\t\t\t\t resource_resv *);\nstatic int\ncheck_server_max_run_soft(server_info *, queue_info *,\n\t\t\t  resource_resv *);\nstatic int\ncheck_queue_max_run_soft(server_info *, queue_info *,\n\t\t\t resource_resv *);\n\n/**\n * @typedef\n * \t\tint (*limfunc_t)(server_info *, queue_info *, resource_resv *, schd_error 
*);\n * @brief\n * \t\teach hard limit function we call has this interface\n * @par\n * \t\tWhen adding a new hard limit, be sure to address the following:\n * \t\t\tadd the function that does limit enforcement to this\n *\t\t\t\tlist\n * \t\t\tadd a new error enum to sched_error\n * \t\t\tadd new log and comment formatting strings to the\n *\t\t\t\tfc_translation_table\n * \t\t\tformat the error string the scheduler will report (use\n *\t\t\t\tone of the existing schderr_args_*() functions below or\n *\t\t\t\tcreate a new one)\n * \t\t\tadd the new error case to translate_fail_code()\n * \t\t\tif the limit applies to a job's owner or group, add the\n *\t\t\t\tnew error case to update_accruetype() so that the job is\n *\t\t\t\tmarked as ineligible\n *\n * @see\tsched_error_code in constant.h\n * @see\tfc_translation_table in job_info.c\n */\ntypedef int (*limfunc_t)(server_info *, queue_info *, resource_resv *,\n\t\t\t limcounts *, limcounts *, schd_error *);\n\nstatic limfunc_t limfuncs[] = {\n\tcheck_queue_max_group_run,\n\tcheck_queue_max_project_run,\n\tcheck_queue_max_run,\n\tcheck_queue_max_user_run,\n\tcheck_server_max_group_run,\n\tcheck_server_max_project_run,\n\tcheck_server_max_run,\n\tcheck_server_max_user_run,\n\tcheck_queue_max_group_res,\n\tcheck_queue_max_project_res,\n\tcheck_queue_max_res,\n\tcheck_queue_max_user_res,\n\tcheck_server_max_group_res,\n\tcheck_server_max_project_res,\n\tcheck_server_max_res,\n\tcheck_server_max_user_res,\n};\n\n/**\n * @typedef\n * \t\tint (*softlimfunc_t)(server_info *, queue_info *, resource_resv *);\n * @brief\n * \t\teach soft limit function we call has this interface\n */\ntypedef int (*softlimfunc_t)(server_info *, queue_info *, resource_resv *);\nstatic softlimfunc_t softlimfuncs[] = 
{\n\tcheck_queue_max_run_soft,\n\tcheck_queue_max_user_run_soft,\n\tcheck_queue_max_group_run_soft,\n\tcheck_queue_max_project_run_soft,\n\tcheck_server_max_run_soft,\n\tcheck_server_max_user_run_soft,\n\tcheck_server_max_group_run_soft,\n\tcheck_server_max_project_run_soft,\n\tcheck_queue_max_user_res_soft,\n\tcheck_queue_max_group_res_soft,\n\tcheck_queue_max_project_res_soft,\n\tcheck_server_max_user_res_soft,\n\tcheck_server_max_group_res_soft,\n\tcheck_server_max_project_res_soft,\n\tcheck_server_max_res_soft,\n\tcheck_queue_max_res_soft,\n};\n\n/**\n * @struct\tlim_old2new\n * @brief\n * \t\tMaps between old-style limit attribute names and new-style params\n * @par\n *\t\tThis structure holds information needed to map old-style limit attribute\n *\t\tnames to new-style parameterized keys (\"param\"s).\n *\n * @param[in]\tlim_attr\t-\tthe (old) attribute name\n * @param[in]\tlim_param\t-\tthe (new-style) entity parameter\n * @param[in]\tlim_isreslim\t-\t1 if this attribute is a resource limit, else 0\n */\nstruct lim_old2new {\n\tconst char *lim_attr;\n\tconst char *lim_param;\n\tint lim_isreslim;\n};\nstatic struct lim_old2new old2new[] = {\n\t{ATTR_maxgroupres, \"g:\" PBS_GENERIC_ENTITY, 1},\n\t{ATTR_maxgrprun, \"g:\" PBS_GENERIC_ENTITY, 0},\n\t{ATTR_maxrun, \"o:\" PBS_ALL_ENTITY, 0},\n\t{ATTR_maxuserres, \"u:\" PBS_GENERIC_ENTITY, 1},\n\t{ATTR_maxuserrun, \"u:\" PBS_GENERIC_ENTITY, 0}};\nstatic struct lim_old2new old2new_soft[] = {\n\t{ATTR_max_run_soft, \"o:\" PBS_ALL_ENTITY, 0},\n\t{ATTR_max_run_res_soft, \"o:\" PBS_ALL_ENTITY, 1},\n\t{ATTR_maxgroupressoft, \"g:\" PBS_GENERIC_ENTITY, 1},\n\t{ATTR_maxgrprunsoft, \"g:\" PBS_GENERIC_ENTITY, 0},\n\t{ATTR_maxuserressoft, \"u:\" PBS_GENERIC_ENTITY, 1},\n\t{ATTR_maxuserrunsoft, \"u:\" PBS_GENERIC_ENTITY, 0}};\n\nstatic const char allparam[] = PBS_ALL_ENTITY;\nstatic const char genparam[] = PBS_GENERIC_ENTITY;\n\nstatic int is_hardlimit(const struct attrl *);\nstatic int\nlim_callback(void *, enum lim_keytypes, 
char *, char *,\n\t     char *, char *);\nstatic void *lim_dup_ctx(void *);\nstatic char *lim_gengroupreskey(const char *);\nstatic char *lim_genprojectreskey(const char *);\nstatic char *lim_genuserreskey(const char *);\nstatic void schderr_args_q(const std::string &, const char *, schd_error *);\nstatic void schderr_args_q(const std::string &, const std::string &, schd_error *);\nstatic void schderr_args_q_res(const std::string &, const char *, char *, schd_error *);\nstatic void schderr_args_q_res(const std::string &, const std::string &, char *, schd_error *);\nstatic void schderr_args_server(const char *, schd_error *);\nstatic void schderr_args_server(const std::string &, schd_error *);\nstatic void schderr_args_server_res(std::string &, const char *, schd_error *);\nstatic sch_resource_t lim_get(const char *, void *);\nstatic int lim_setoldlimits(const struct attrl *, void *);\nstatic int lim_setreslimits(const struct attrl *, void *);\nstatic int lim_setrunlimits(const struct attrl *, void *);\n\n/**\n * @struct\tlimit_info\n * @brief\n * \t\tinternal structure of stored limit information\n *\n * @param[in]\tli_ctxh\t-\tlimit context for storing (hard) resource and run limits\n * @param[in]\tli_ctxs\t-\tlimit context for storing (soft) resource and run limits\n */\nstruct limit_info {\n\tvoid *li_ctxh;\n\tvoid *li_ctxs;\n};\n#define LI2RESCTX(li) (((struct limit_info *) li)->li_ctxh)\n#define LI2RESCTXSOFT(li) (((struct limit_info *) li)->li_ctxs)\n#define LI2RUNCTX(li) (((struct limit_info *) li)->li_ctxh)\n#define LI2RUNCTXSOFT(li) (((struct limit_info *) li)->li_ctxs)\n\n/**\n * @var\tresource *limres\n *\n * @brief\n * \t\tlist of resources that have limits\n * @par\n *\t\tWe record in this list only those resources that have had limits set.\n *\t\tThis is done in lim_setreslimits() and lim_setoldlimits() and used in\n *\t\tthe resource checking functions.  These latter functions loop over\n *\t\tonly those resources that appear in this list.  
We do not maintain a\n *\t\tseparate list of per-queue or per-server resources because each limit\n *\t\tchecking function uses a limit evaluation context that narrows the\n *\t\tlimit search to the proper context.\n * @note\n *\t\tNote that we do not free and rebuild this list for each scheduling cycle.\n *\t\tInstead, we assume that the number of resources with limits is small and\n *\t\tthe index tree limit fetching code is sufficiently fast that this isn't an\n *\t\tissue.\n */\nstatic schd_resource *limres; /* list of resources that have limits */\n/**\n * @brief\n * \t\tallocate and initialize a limit_info structure.\n * @note\n * \t\tWe currently store both resource and run limits in a\n * \t\tsingle member of the limit_info structure.  That might\n * \t\tchange some day, and the assertions below are here in order\n * \t\tto protect against accidental violations of this assumption:\n * \t\tif the run limit contexts are ever NULL after allocation\n * \t\tof the resource limit contexts, the assumption has been\n * \t\tviolated.\n *\n * @return\tallocated limit_info structure\n * @retval\tNULL\t: failed to allocate memory\n */\nvoid *\nlim_alloc_liminfo(void)\n{\n\tstruct limit_info *lip;\n\n\tif ((lip = static_cast<limit_info *>(calloc(1, sizeof(struct limit_info)))) == NULL)\n\t\treturn NULL;\n\telse {\n\t\tvoid *ctx;\n\n\t\tif ((ctx = entlim_initialize_ctx()) == NULL) {\n\t\t\tlim_free_liminfo(lip);\n\t\t\treturn NULL;\n\t\t} else\n\t\t\tLI2RESCTX(lip) = ctx;\n\t\tif ((ctx = entlim_initialize_ctx()) == NULL) {\n\t\t\tlim_free_liminfo(lip);\n\t\t\treturn NULL;\n\t\t} else\n\t\t\tLI2RESCTXSOFT(lip) = ctx;\n\n\t\tassert(LI2RUNCTX(lip) != NULL);\n\t\tassert(LI2RUNCTXSOFT(lip) != NULL);\n\n\t\treturn (lip);\n\t}\n}\n/**\n * @brief\n * \t\tduplicate the passed limit info structure and return the duplicate\n *\n * @param[in]\tp\t-\told limit info structure\n *\n * @return\tduplicate of old limit info structure\n * @retval\tNULL\t: failure\n */\nvoid *\nlim_dup_liminfo(void *p)\n{\n\tstruct limit_info *oldlip = 
static_cast<limit_info *>(p);\n\tstruct limit_info *newlip;\n\n\tif ((oldlip == NULL) ||\n\t    (LI2RESCTX(oldlip) == NULL) ||\n\t    (LI2RESCTXSOFT(oldlip) == NULL))\n\t\treturn NULL;\n\n\tif ((newlip = static_cast<limit_info *>(calloc(1, sizeof(struct limit_info)))) == NULL)\n\t\treturn NULL;\n\telse {\n\t\tvoid *ctx;\n\n\t\tif ((ctx = lim_dup_ctx(LI2RESCTX(oldlip))) == NULL) {\n\t\t\tlim_free_liminfo(newlip);\n\t\t\treturn NULL;\n\t\t} else\n\t\t\tLI2RESCTX(newlip) = ctx;\n\t\tif ((ctx = lim_dup_ctx(LI2RESCTXSOFT(oldlip))) == NULL) {\n\t\t\tlim_free_liminfo(newlip);\n\t\t\treturn NULL;\n\t\t} else\n\t\t\tLI2RESCTXSOFT(newlip) = ctx;\n\n\t\t/*\n\t\t *\tWe currently store both resource and run limits in a\n\t\t *\tsingle member of the limit_info structure.  That might\n\t\t *\tchange some day, and these assertions are here in order\n\t\t *\tto protect accidental violations of this assumption:\n\t\t *\tif the run limit contexts are ever NULL after allocation\n\t\t *\tof the resource limit contexts, the assumption has been\n\t\t *\tviolated.\n\t\t */\n\t\tassert(LI2RUNCTX(newlip) != NULL);\n\t\tassert(LI2RUNCTXSOFT(newlip) != NULL);\n\n\t\treturn (newlip);\n\t}\n}\n/**\n * @brief\n * \t\tfree the limit info structure\n *\n * @param[in]\tp\t-\tlimit info structure to be freed.\n */\nvoid\nlim_free_liminfo(void *p)\n{\n\tstruct limit_info *lip = static_cast<limit_info *>(p);\n\n\tif (lip == NULL)\n\t\treturn;\n\n\tif (LI2RESCTX(lip) != NULL) {\n\t\t(void) entlim_free_ctx(LI2RESCTX(lip), free);\n\t\tLI2RESCTX(lip) = NULL;\n\t}\n\tif (LI2RESCTXSOFT(lip) != NULL) {\n\t\t(void) entlim_free_ctx(LI2RESCTXSOFT(lip), free);\n\t\tLI2RESCTXSOFT(lip) = NULL;\n\t}\n\tif (LI2RUNCTX(lip) != NULL) {\n\t\t(void) entlim_free_ctx(LI2RUNCTX(lip), free);\n\t\tLI2RUNCTX(lip) = NULL;\n\t}\n\tif (LI2RUNCTXSOFT(lip) != NULL) {\n\t\t(void) entlim_free_ctx(LI2RUNCTXSOFT(lip), free);\n\t\tLI2RUNCTXSOFT(lip) = NULL;\n\t}\n\tfree(lip);\n}\n/**\n * @brief\n * \t\tcheck attribute has max run result 
as name\n *\n * @param[in]\ta\t-\tattribute list structure\n *\n * @return\tint\n * @retval\t1\t: Yes\n * @retval\t0\t: No\n */\nint\nis_reslimattr(const struct attrl *a)\n{\n\tif (!strcmp(a->name, ATTR_max_run_res) ||\n\t    !strcmp(a->name, ATTR_max_run_res_soft))\n\t\treturn (1);\n\telse\n\t\treturn (0);\n}\n/**\n * @brief\n * \t\tcheck whether the attribute name is a run limit attribute\n *\n * @param[in]\ta\t-\tattribute list structure\n *\n * @return\tint\n * @retval\t1\t: Yes\n * @retval\t0\t: No\n */\nint\nis_runlimattr(const struct attrl *a)\n{\n\tif (!strcmp(a->name, ATTR_max_run) ||\n\t    !strcmp(a->name, ATTR_max_run_soft))\n\t\treturn (1);\n\telse\n\t\treturn (0);\n}\n/**\n * @brief\n * \t\tconvert an old limit attribute name to the new one\n *\t\t@see old2new[]\n *\t\t@see old2new_soft[]\n *\n * @param[in]\ta\t-\tattribute list structure\n *\n * @return char *\n * @retval !NULL\t: new-style limit parameter corresponding to the old attribute\n * @retval NULL\t\t: attribute is not an old limit attribute\n *\n */\nconst char *\nconvert_oldlim_to_new(const struct attrl *a)\n{\n\tsize_t i;\n\n\tfor (i = 0; i < sizeof(old2new) / sizeof(old2new[0]); i++)\n\t\tif (!strcmp(a->name, old2new[i].lim_attr))\n\t\t\treturn old2new[i].lim_param;\n\tfor (i = 0; i < sizeof(old2new_soft) / sizeof(old2new_soft[0]); i++)\n\t\tif (!strcmp(a->name, old2new_soft[i].lim_attr))\n\t\t\treturn old2new_soft[i].lim_param;\n\n\treturn NULL;\n}\n\n/**\n * @brief\n *\t\tReturn true if attribute is an old limit attribute\n * @param[in] a\t\tattribute list structure\n *\n * @return\tint\n * @retval\t1\t: Yes\n * @retval\t0\t: No\n */\nint\nis_oldlimattr(const struct attrl *a)\n{\n\treturn (convert_oldlim_to_new(a) != NULL);\n}\n\n/**\n * @brief\n * \t\tassign the resource-limits to the limit context based on the limit type.\n *\n * @param[in]\ta\t-\tattribute list structure\n * @param[in]\tlt\t-\tlimit type.\n * @param[in]\tp\t-\tpointer to limit_info structure\n *\n * @return\tint\n * @retval\t1\t: Yes\n * @retval\t0\t: No\n 
*/\nint\nlim_setlimits(const struct attrl *a, enum limtype lt, void *p)\n{\n\tstruct limit_info *lip = static_cast<limit_info *>(p);\n\n\tswitch (lt) {\n\t\tcase LIM_RES:\n\t\t\tif (is_hardlimit(a))\n\t\t\t\treturn (lim_setreslimits(a, LI2RESCTX(lip)));\n\t\t\telse\n\t\t\t\treturn (lim_setreslimits(a, LI2RESCTXSOFT(lip)));\n\t\tcase LIM_RUN:\n\t\t\tif (is_hardlimit(a))\n\t\t\t\treturn (lim_setrunlimits(a, LI2RUNCTX(lip)));\n\t\t\telse\n\t\t\t\treturn (lim_setrunlimits(a, LI2RUNCTXSOFT(lip)));\n\t\tcase LIM_OLD:\n\t\t\treturn (lim_setoldlimits(a, lip));\n\t\tdefault:\n\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_SCHED, LOG_ERR, __func__,\n\t\t\t\t   \"attribute %s not a limit attribute\", a->name);\n\t\t\treturn (1);\n\t}\n}\n/**\n * @brief\n * \t\tcheck whether the limit info structure has at least one hard resource\n * \t\tor run limit set.\n *\n * @param[in]\tp\t-\tlimit info structure which needs to be checked.\n *\n * @return\tint\n * @retval\t1\t: At least one hard limit is present.\n * @retval\t0\t: No hard limits are present.\n */\nint\nhas_hardlimits(void *p)\n{\n\tstruct limit_info *lip = static_cast<limit_info *>(p);\n\tchar *k = NULL;\n\n\tif (entlim_get_next(LI2RESCTX(lip), (void **) &k) != NULL) /* at least one hard resource limit present */\n\t\treturn (1);\n\n\t/* run limit already checked? 
*/\n\tif (LI2RUNCTX(lip) == LI2RESCTX(lip))\n\t\treturn (0);\n\tk = NULL;\n\tif (entlim_get_next(LI2RUNCTX(lip), (void **) &k) != NULL) /* at least one hard run limit present */\n\t\treturn (1);\n\n\treturn (0);\n}\n/**\n * @brief\n * \t\tcheck whether the limit info structure has at least one soft resource\n * \t\tor run limit set.\n *\n * @param[in]\tp\t-\tlimit info structure which needs to be checked.\n *\n * @return\tint\n * @retval\t1\t: At least one soft limit is present.\n * @retval\t0\t: No soft limits are present.\n */\nint\nhas_softlimits(void *p)\n{\n\tstruct limit_info *lip = static_cast<limit_info *>(p);\n\tchar *k = NULL;\n\n\tif (entlim_get_next(LI2RESCTXSOFT(lip), (void **) &k) != NULL) /* at least one soft resource limit present */\n\t\treturn (1);\n\n\t/* run limit already checked? */\n\tif (LI2RUNCTXSOFT(lip) == LI2RESCTXSOFT(lip))\n\t\treturn (0);\n\tk = NULL;\n\tif (entlim_get_next(LI2RUNCTXSOFT(lip), (void **) &k) != NULL) /* at least one soft run limit present */\n\t\treturn (1);\n\n\treturn (0);\n}\n/**\n * @brief\n *\t\tlimcounts class constructor.\n */\n// Parameterized Constructor\nlimcounts::limcounts(const counts_umap &ruser,\n\t\t     const counts_umap &rgroup,\n\t\t     const counts_umap &rproject,\n\t\t     const counts_umap &rall)\n{\n\tuser = dup_counts_umap(ruser);\n\tgroup = dup_counts_umap(rgroup);\n\tproject = dup_counts_umap(rproject);\n\tall = dup_counts_umap(rall);\n}\n\n// Copy Constructor\nlimcounts::limcounts(const limcounts &rlimit)\n{\n\tuser = dup_counts_umap(rlimit.user);\n\tgroup = dup_counts_umap(rlimit.group);\n\tproject = dup_counts_umap(rlimit.project);\n\tall = dup_counts_umap(rlimit.all);\n}\n\n// Assignment operator\nlimcounts &\nlimcounts::operator=(const limcounts &rlimit)\n{\n\tuser = dup_counts_umap(rlimit.user);\n\tgroup = dup_counts_umap(rlimit.group);\n\tproject = dup_counts_umap(rlimit.project);\n\tall = dup_counts_umap(rlimit.all);\n\treturn *this;\n}\n\n// 
destructor\nlimcounts::~limcounts()\n{\n\tfree_counts_list(user);\n\tfree_counts_list(group);\n\tfree_counts_list(project);\n\tfree_counts_list(all);\n}\n\n/**\n * @brief\n *\t\tcheck_limits - hard limit checking function.\n *\t\tThis is table-driven limit checking, driven by the limfuncs[]\n *\t\tarray.\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n * @param[out]\terr\t-\tschd_error structure to return error information\n * @param[in]\tflags\t-\tCHECK_LIMIT - check real limits\n *                      CHECK_CUMULATIVE_LIMIT - check limits against total counts\n *                      RETURN_ALL_ERR - check all limits and return an err for all failed limits\n *\n * @return\tinteger indicating the failing limit test if a limit is exceeded,\n *\t\t\t\talong with error 'err'.\n * @retval\t0\t: if no limit is exceeded.\n */\n\nint\ncheck_limits(server_info *si, queue_info *qi, resource_resv *rr, schd_error *err, unsigned int flags)\n{\n\tenum sched_error_code rc;\n\tint any_fail_rc = 0;\n\tsize_t i;\n\tlimcounts *svr_counts = NULL;\n\tlimcounts *que_counts = NULL;\n\tlimcounts *svr_counts_max = NULL;\n\tlimcounts *que_counts_max = NULL;\n\tlimcounts *server_lim = NULL;\n\tlimcounts *queue_lim = NULL;\n\tschd_error *prev_err = NULL;\n\n\tif (si == NULL || qi == NULL || rr == NULL)\n\t\treturn SE_NONE;\n\n\t/*\n\t * The check for CHECK_CUMULATIVE_LIMIT is needed because we must have\n\t * already run through the same loop when check_limits was called\n\t * from is_ok_to_run.\n\t * We do not need to run through the same loop again.\n\t */\n\tif (si->calendar != NULL && !(flags & CHECK_CUMULATIVE_LIMIT)) {\n\t\tlong time_left;\n\t\tif (rr->duration != rr->hard_duration &&\n\t\t    exists_resv_event(si->calendar, si->server_time + rr->hard_duration))\n\t\t\ttime_left = calc_time_left(rr, 
1);\n\t\telse\n\t\t\ttime_left = calc_time_left(rr, 0);\n\n\t\tauto end = si->server_time + time_left;\n\n\t\tif (exists_run_event(si->calendar, end)) {\n\t\t\tif (si->has_hard_limit) {\n\t\t\t\tsvr_counts_max = new limcounts(si->user_counts,\n\t\t\t\t\t\t\t       si->group_counts,\n\t\t\t\t\t\t\t       si->project_counts,\n\t\t\t\t\t\t\t       si->alljobcounts);\n\n\t\t\t\tsvr_counts = new limcounts(si->user_counts,\n\t\t\t\t\t\t\t   si->group_counts,\n\t\t\t\t\t\t\t   si->project_counts,\n\t\t\t\t\t\t\t   si->alljobcounts);\n\t\t\t}\n\n\t\t\tif (qi->has_hard_limit) {\n\t\t\t\tque_counts_max = new limcounts(qi->user_counts,\n\t\t\t\t\t\t\t       qi->group_counts,\n\t\t\t\t\t\t\t       qi->project_counts,\n\t\t\t\t\t\t\t       qi->alljobcounts);\n\n\t\t\t\tque_counts = new limcounts(qi->user_counts,\n\t\t\t\t\t\t\t   qi->group_counts,\n\t\t\t\t\t\t\t   qi->project_counts,\n\t\t\t\t\t\t\t   qi->alljobcounts);\n\t\t\t}\n\n\t\t\tauto te = get_next_event(si->calendar);\n\t\t\tconst auto event_mask = TIMED_RUN_EVENT | TIMED_END_EVENT;\n\t\t\tbool error = false;\n\t\t\tfor (te = find_init_timed_event(te, IGNORE_DISABLED_EVENTS, event_mask);\n\t\t\t     te != NULL && te->event_time < end;\n\t\t\t     te = find_next_timed_event(te, IGNORE_DISABLED_EVENTS, event_mask)) {\n\t\t\t\tauto te_rr = static_cast<resource_resv *>(te->event_ptr);\n\t\t\t\tif ((te_rr != rr) && te_rr->is_job) {\n\t\t\t\t\tif (te->event_type == TIMED_RUN_EVENT) {\n\t\t\t\t\t\tif (svr_counts != NULL) {\n\t\t\t\t\t\t\tauto cts = find_alloc_counts(svr_counts->user, te_rr->user);\n\t\t\t\t\t\t\tupdate_counts_on_run(cts, te_rr->resreq);\n\t\t\t\t\t\t\tcounts_max(svr_counts_max->user, cts);\n\t\t\t\t\t\t\tif (svr_counts_max->user.size() == 0) {\n\t\t\t\t\t\t\t\terror = 1;\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\tcts = find_alloc_counts(svr_counts->group, te_rr->group);\n\t\t\t\t\t\t\tupdate_counts_on_run(cts, te_rr->resreq);\n\t\t\t\t\t\t\tcounts_max(svr_counts_max->group, 
cts);\n\t\t\t\t\t\t\tif (svr_counts_max->group.size() == 0) {\n\t\t\t\t\t\t\t\terror = 1;\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\tcts = find_alloc_counts(svr_counts->project, te_rr->project);\n\t\t\t\t\t\t\tupdate_counts_on_run(cts, te_rr->resreq);\n\t\t\t\t\t\t\tcounts_max(svr_counts_max->project, cts);\n\t\t\t\t\t\t\tif (svr_counts_max->project.size() == 0) {\n\t\t\t\t\t\t\t\terror = 1;\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\tcts = find_alloc_counts(svr_counts->all, PBS_ALL_ENTITY);\n\t\t\t\t\t\t\tupdate_counts_on_run(cts, te_rr->resreq);\n\t\t\t\t\t\t\tcounts_max(svr_counts_max->all, svr_counts->all);\n\t\t\t\t\t\t\tif (svr_counts_max->all.size() == 0) {\n\t\t\t\t\t\t\t\terror = 1;\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tif (que_counts != NULL) {\n\t\t\t\t\t\t\tif (te_rr->is_job && te_rr->job != NULL) {\n\t\t\t\t\t\t\t\tif (te_rr->job->queue == qi) {\n\t\t\t\t\t\t\t\t\tauto cts = find_alloc_counts(que_counts->user, te_rr->user);\n\t\t\t\t\t\t\t\t\tupdate_counts_on_run(cts, te_rr->resreq);\n\t\t\t\t\t\t\t\t\tcounts_max(que_counts_max->user, cts);\n\t\t\t\t\t\t\t\t\tif (que_counts_max->user.size() == 0) {\n\t\t\t\t\t\t\t\t\t\terror = 1;\n\t\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t\t\tcts = find_alloc_counts(que_counts->group, te_rr->group);\n\t\t\t\t\t\t\t\t\tupdate_counts_on_run(cts, te_rr->resreq);\n\t\t\t\t\t\t\t\t\tcounts_max(que_counts_max->group, cts);\n\t\t\t\t\t\t\t\t\tif (que_counts_max->group.size() == 0) {\n\t\t\t\t\t\t\t\t\t\terror = 1;\n\t\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t\t\tcts = find_alloc_counts(que_counts->project, te_rr->project);\n\t\t\t\t\t\t\t\t\tupdate_counts_on_run(cts, te_rr->resreq);\n\t\t\t\t\t\t\t\t\tcounts_max(que_counts_max->project, cts);\n\t\t\t\t\t\t\t\t\tif (que_counts_max->project.size() == 0) {\n\t\t\t\t\t\t\t\t\t\terror = 1;\n\t\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\t\t\tcts = 
find_alloc_counts(que_counts->all, PBS_ALL_ENTITY);\n\t\t\t\t\t\t\t\t\tupdate_counts_on_run(cts, te_rr->resreq);\n\t\t\t\t\t\t\t\t\tcounts_max(que_counts_max->all, que_counts->all);\n\t\t\t\t\t\t\t\t\tif (que_counts_max->all.size() == 0) {\n\t\t\t\t\t\t\t\t\t\terror = 1;\n\t\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t} else if (te->event_type == TIMED_END_EVENT) {\n\t\t\t\t\t\tif (svr_counts != NULL) {\n\t\t\t\t\t\t\tauto cts = find_alloc_counts(svr_counts->user, te_rr->user);\n\t\t\t\t\t\t\tupdate_counts_on_end(cts, te_rr->resreq);\n\t\t\t\t\t\t\tcts = find_alloc_counts(svr_counts->group, te_rr->group);\n\t\t\t\t\t\t\tupdate_counts_on_end(cts, te_rr->resreq);\n\t\t\t\t\t\t\tcts = find_alloc_counts(svr_counts->project, te_rr->project);\n\t\t\t\t\t\t\tupdate_counts_on_end(cts, te_rr->resreq);\n\t\t\t\t\t\t\tcts = find_alloc_counts(svr_counts->all, PBS_ALL_ENTITY);\n\t\t\t\t\t\t\tupdate_counts_on_end(cts, te_rr->resreq);\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (que_counts != NULL) {\n\t\t\t\t\t\t\tif (te_rr->is_job && te_rr->job != NULL) {\n\t\t\t\t\t\t\t\tif (te_rr->job->queue == qi) {\n\t\t\t\t\t\t\t\t\tauto cts = find_alloc_counts(que_counts->user, te_rr->user);\n\t\t\t\t\t\t\t\t\tupdate_counts_on_end(cts, te_rr->resreq);\n\t\t\t\t\t\t\t\t\tcts = find_alloc_counts(que_counts->group, te_rr->group);\n\t\t\t\t\t\t\t\t\tupdate_counts_on_end(cts, te_rr->resreq);\n\t\t\t\t\t\t\t\t\tcts = find_alloc_counts(que_counts->project, te_rr->project);\n\t\t\t\t\t\t\t\t\tupdate_counts_on_end(cts, te_rr->resreq);\n\t\t\t\t\t\t\t\t\tcts = find_alloc_counts(que_counts->all, PBS_ALL_ENTITY);\n\t\t\t\t\t\t\t\t\tupdate_counts_on_end(cts, te_rr->resreq);\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tdelete svr_counts;\n\t\t\tdelete que_counts;\n\t\t\tif (error) {\n\t\t\t\tdelete svr_counts_max;\n\t\t\t\tdelete que_counts_max;\n\t\t\t\treturn SE_NONE;\n\t\t\t}\n\t\t}\n\t}\n\tif ((flags & 
CHECK_LIMIT)) {\n\t\tif (svr_counts_max != NULL) {\n\t\t\tserver_lim = svr_counts_max;\n\t\t} else {\n\t\t\tserver_lim = new limcounts(si->user_counts,\n\t\t\t\t\t\t   si->group_counts,\n\t\t\t\t\t\t   si->project_counts,\n\t\t\t\t\t\t   si->alljobcounts);\n\t\t}\n\t\tif (que_counts_max != NULL) {\n\t\t\tqueue_lim = que_counts_max;\n\t\t} else {\n\t\t\tqueue_lim = new limcounts(qi->user_counts,\n\t\t\t\t\t\t  qi->group_counts,\n\t\t\t\t\t\t  qi->project_counts,\n\t\t\t\t\t\t  qi->alljobcounts);\n\t\t}\n\t} else if ((flags & CHECK_CUMULATIVE_LIMIT)) {\n\t\tif (!si->has_hard_limit && !qi->has_hard_limit)\n\t\t\treturn SE_NONE;\n\t\tserver_lim = new limcounts(si->total_user_counts,\n\t\t\t\t\t   si->total_group_counts,\n\t\t\t\t\t   si->total_project_counts,\n\t\t\t\t\t   si->total_alljobcounts);\n\t\tqueue_lim = new limcounts(qi->total_user_counts,\n\t\t\t\t\t  qi->total_group_counts,\n\t\t\t\t\t  qi->total_project_counts,\n\t\t\t\t\t  qi->total_alljobcounts);\n\t}\n\tfor (i = 0; i < sizeof(limfuncs) / sizeof(limfuncs[0]); i++) {\n\t\trc = static_cast<enum sched_error_code>((limfuncs[i])(si, qi, rr, server_lim, queue_lim, err));\n\t\tif (rc != SE_NONE) {\n\t\t\tif ((flags & RETURN_ALL_ERR)) {\n\t\t\t\tif (any_fail_rc == 0)\n\t\t\t\t\tany_fail_rc = rc;\n\t\t\t\tset_schd_error_codes(err, NOT_RUN, rc);\n\t\t\t\terr->next = new_schd_error();\n\t\t\t\tprev_err = err;\n\t\t\t\terr = err->next;\n\t\t\t\tif (err == NULL) {\n\t\t\t\t\tdelete server_lim;\n\t\t\t\t\tdelete queue_lim;\n\t\t\t\t\treturn SCHD_ERROR;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tset_schd_error_codes(err, NOT_RUN, rc);\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\n\tdelete server_lim;\n\tdelete queue_lim;\n\n\tif (flags & RETURN_ALL_ERR) {\n\t\tif (prev_err != NULL) {\n\t\t\tfree_schd_error(err);\n\t\t\tprev_err->next = NULL;\n\t\t}\n\t}\n\n\tif (any_fail_rc)\n\t\treturn any_fail_rc;\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\t\tupdate_soft_limits - update soft limit usage counts using the soft limit functions.\n *\n * 
@param[in]\tsi\t-\tserver info.\n * @param[in]\tqi\t-\tqueue info\n * @param[in]\trr\t-\tResource reservation structure\n *\n * @return\tvoid\n */\nvoid\nupdate_soft_limits(server_info *si, queue_info *qi, resource_resv *rr)\n{\n\tsize_t i;\n\tfor (i = 0; i < sizeof(softlimfuncs) / sizeof(softlimfuncs[0]); i++)\n\t\tsoftlimfuncs[i](si, qi, rr);\n\treturn;\n}\n\n/**\n * @brief\tfind the value of preempt bit with matching entity and resource in\n *\t\tthe counts structure\n * @param[in]\tentity_counts\t-   Counts structure where entity information is stored\n * @param[in]\tentity_name\t-   Name of the entity\n * @param[in]\trr\t\t-   job structure\n *\n * @return\tint\n * @retval\tAccumulated preempt_bits matching the entity\n */\nint\nfind_preempt_bits(counts_umap &entity_counts, const std::string &entity_name, resource_resv *rr)\n{\n\tcounts *cnt = NULL;\n\tresource_count *res_c;\n\tint rc = 0;\n\n\tif (entity_name.empty())\n\t\treturn rc;\n\n\tfind_counts_elm(entity_counts, entity_name, NULL, &cnt, NULL);\n\tif (cnt == NULL)\n\t\treturn rc;\n\n\trc |= cnt->soft_limit_preempt_bit;\n\tfor (res_c = cnt->rescts; res_c != NULL; res_c = res_c->next) {\n\t\tauto req = find_resource_req(rr->resreq, res_c->def);\n\t\tif (req != NULL)\n\t\t\trc |= res_c->soft_limit_preempt_bit;\n\t}\n\treturn rc;\n}\n\n/**\n * @brief\n * \t\tcheck_soft_limits - check the soft limit using soft limit function.\n *\n * @param[in]\tsi\t-\tserver info.\n * @param[in]\tqi\t-\tqueue info\n * @param[in]\trr\t-\tResource reservation structure\n *\n * @return\treturn code for soft limits\n */\nint\ncheck_soft_limits(server_info *si, queue_info *qi, resource_resv *rr)\n{\n\tint rc = 0;\n\n\tif (si == NULL || qi == NULL || rr == NULL)\n\t\treturn 0;\n\n\tif (si->has_soft_limit) {\n\t\tif (si->has_user_limit)\n\t\t\trc |= find_preempt_bits(si->user_counts, rr->user, rr);\n\t\tif (si->has_grp_limit)\n\t\t\trc |= find_preempt_bits(si->group_counts, rr->group, rr);\n\t\tif (si->has_proj_limit)\n\t\t\trc |= 
find_preempt_bits(si->project_counts, rr->project, rr);\n\t\tif (si->has_all_limit)\n\t\t\trc |= find_preempt_bits(si->alljobcounts, PBS_ALL_ENTITY, rr);\n\t}\n\tif (qi->has_soft_limit) {\n\t\tif (qi->has_user_limit)\n\t\t\trc |= find_preempt_bits(qi->user_counts, rr->user, rr);\n\t\tif (qi->has_grp_limit)\n\t\t\trc |= find_preempt_bits(qi->group_counts, rr->group, rr);\n\t\tif (qi->has_proj_limit)\n\t\t\trc |= find_preempt_bits(qi->project_counts, rr->project, rr);\n\t\tif (qi->has_all_limit)\n\t\t\trc |= find_preempt_bits(qi->alljobcounts, PBS_ALL_ENTITY, rr);\n\t}\n\n\treturn (rc);\n}\n\n/**\n * @brief\n *\t\tcheck_server_max_user_run\thard limit checking function for\n *\t\t\t\t\tuser server run limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n * @param[in]\tsc\t-\tlimcounts struct for server count/total_count maxes over job run\n * @param[in]\tqc\t-\tlimcounts struct for queue count/total_count maxes over job run\n * @param[out]\terr\t-\tschd_error structure to return error information\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tsched_error_code enum\t: if limit is exceeded\n * @retval\tSCHD_ERROR\t: on error\n *\n * @see\t#sched_error_code in constant.h\n */\nstatic int\ncheck_server_max_user_run(server_info *si, queue_info *qi, resource_resv *rr,\n\t\t\t  limcounts *sc, limcounts *qc, schd_error *err)\n{\n\tchar *key;\n\tstd::string user;\n\tint used;\n\tint max_user_run, max_genuser_run;\n\n\tif ((si == NULL) || (rr == NULL) || (rr->user.empty()) || (sc == NULL))\n\t\treturn (SCHD_ERROR);\n\n\tif (!si->has_user_limit)\n\t\treturn (0);\n\n\tuser = rr->user;\n\n\tauto &cts = sc->user;\n\n\tif ((key = entlim_mk_runkey(LIM_USER, user.c_str())) == NULL)\n\t\treturn (SCHD_ERROR);\n\tmax_user_run = 
(int) lim_get(key, LI2RUNCTX(si->liminfo));\n\tfree(key);\n\n\tif ((key = entlim_mk_runkey(LIM_USER, genparam)) == NULL)\n\t\treturn (SCHD_ERROR);\n\tmax_genuser_run = (int) lim_get(key, LI2RUNCTX(si->liminfo));\n\tfree(key);\n\n\tif ((max_user_run == SCHD_INFINITY) &&\n\t    (max_genuser_run == SCHD_INFINITY))\n\t\treturn (0);\n\n\t/* at this point, we know a generic or individual limit is set */\n\tused = find_counts_elm(cts, user, NULL, NULL, NULL);\n\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t   \"user %s max_*user_run (%d, %d), used %d\",\n\t\t   user.c_str(), max_user_run, max_genuser_run, used);\n\n\tif (max_user_run != SCHD_INFINITY) {\n\t\tif (max_user_run <= used) {\n\t\t\tschderr_args_server(user, err);\n\t\t\treturn (SERVER_BYUSER_JOB_LIMIT_REACHED);\n\t\t} else\n\t\t\treturn (0); /* ignore a generic limit */\n\t} else if (max_genuser_run <= used) {\n\t\tschderr_args_server(NULL, err);\n\t\treturn (SERVER_USER_LIMIT_REACHED);\n\t} else\n\t\treturn (0);\n}\n\n/**\n * @brief\n *\t\tcheck_server_max_group_run\thard limit checking function for\n *\t\t\t\t\tgroup server run limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n * @param[in]\tsc\t-\tlimcounts struct for server count/total_count maxes over job run\n * @param[in]\tqc\t-\tlimcounts struct for queue count/total_count maxes over job run\n * @param[out]\terr\t-\tschd_error structure to return error information\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tsched_error_code enum\t: if limit is exceeded\n * @retval\tSCHD_ERROR\t: on error\n *\n * @see\t#sched_error_code in constant.h\n */\nstatic int\ncheck_server_max_group_run(server_info *si, queue_info *qi, resource_resv *rr,\n\t\t\t   limcounts *sc, 
limcounts *qc, schd_error *err)\n{\n\tchar *key;\n\tstd::string group;\n\tint used;\n\tint max_group_run, max_gengroup_run;\n\n\tif ((si == NULL) || (rr == NULL) || (rr->group.empty()) || (sc == NULL))\n\t\treturn (SCHD_ERROR);\n\n\tif (!si->has_grp_limit)\n\t\treturn (0);\n\n\tgroup = rr->group;\n\n\tauto &cts = sc->group;\n\n\tif ((key = entlim_mk_runkey(LIM_GROUP, group.c_str())) == NULL)\n\t\treturn (SCHD_ERROR);\n\tmax_group_run = (int) lim_get(key, LI2RUNCTX(si->liminfo));\n\tfree(key);\n\n\tif ((key = entlim_mk_runkey(LIM_GROUP, genparam)) == NULL)\n\t\treturn (SCHD_ERROR);\n\tmax_gengroup_run = (int) lim_get(key, LI2RUNCTX(si->liminfo));\n\tfree(key);\n\n\tif ((max_group_run == SCHD_INFINITY) &&\n\t    (max_gengroup_run == SCHD_INFINITY))\n\t\treturn (0);\n\n\t/* at this point, we know a generic or individual limit is set */\n\tused = find_counts_elm(cts, group, NULL, NULL, NULL);\n\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t   \"group %s max_*group_run (%d, %d), used %d\",\n\t\t   group.c_str(), max_group_run, max_gengroup_run, used);\n\n\tif (max_group_run != SCHD_INFINITY) {\n\t\tif (max_group_run <= used) {\n\t\t\tschderr_args_server(group, err);\n\t\t\treturn (SERVER_BYGROUP_JOB_LIMIT_REACHED);\n\t\t} else\n\t\t\treturn (0); /* ignore a generic limit */\n\t} else if (max_gengroup_run <= used) {\n\t\tschderr_args_server(NULL, err);\n\t\treturn (SERVER_GROUP_LIMIT_REACHED);\n\t} else\n\t\treturn (0);\n}\n\n/**\n * @brief\n *\t\tcheck_server_max_user_res\thard limit checking function for\n *\t\t\t\t\tuser server resource limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n * @param[in]\tsc\t-\tlimcounts struct for server count/total_count maxes over job run\n * @param[in]\tqc\t-\tlimcounts struct for queue count/total_count maxes over job run\n * 
@param[out]\terr\t-\tschd_error structure to return error information\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tsched_error_code enum\t: if limit is exceeded\n * @retval\tSCHD_ERROR\t: on error\n *\n * @see\t#sched_error_code in constant.h\n */\nstatic int\ncheck_server_max_user_res(server_info *si, queue_info *qi, resource_resv *rr,\n\t\t\t  limcounts *sc, limcounts *qc, schd_error *err)\n{\n\tint ret;\n\tresdef *rdef = NULL;\n\n\tif ((si == NULL) || (rr == NULL) || (sc == NULL))\n\t\treturn (SCHD_ERROR);\n\n\tif (!si->has_user_limit)\n\t\treturn (0);\n\n\tauto &cts = sc->user;\n\n\tret = check_max_user_res(rr, cts, &rdef,\n\t\t\t\t LI2RESCTX(si->liminfo));\n\tif (ret != 0)\n\t\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t\t   \"check_max_user_res returned %d\", ret);\n\tswitch (ret) {\n\t\tdefault:\n\t\tcase -1:\n\t\t\treturn (SCHD_ERROR);\n\t\tcase 0:\n\t\t\treturn (0);\n\t\tcase 1: /* generic user limit exceeded */\n\t\t\terr->rdef = rdef;\n\t\t\treturn (SERVER_USER_RES_LIMIT_REACHED);\n\t\tcase 2: /* individual user limit exceeded */\n\t\t\tschderr_args_server_res(rr->user, NULL, err);\n\t\t\terr->rdef = rdef;\n\t\t\treturn (SERVER_BYUSER_RES_LIMIT_REACHED);\n\t}\n}\n\n/**\n * @brief\n *\t\tcheck_server_max_group_res\thard limit checking function for\n *\t\t\t\t\tgroup server resource limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n * @param[in]\tsc\t-\tlimcounts struct for server count/total_count maxes over job run\n * @param[in]\tqc\t-\tlimcounts struct for queue count/total_count maxes over job run\n * @param[out]\terr\t-\tschd_error structure to return error information\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * 
@retval\t0\t: if limit is not exceeded\n * @retval\tsched_error_code enum\t: if limit is exceeded\n * @retval\tSCHD_ERROR\t: on error\n *\n * @see\t\t#sched_error_code in constant.h\n */\nstatic int\ncheck_server_max_group_res(server_info *si, queue_info *qi, resource_resv *rr,\n\t\t\t   limcounts *sc, limcounts *qc, schd_error *err)\n{\n\tint ret;\n\tresdef *rdef = NULL;\n\n\tif ((si == NULL) || (rr == NULL) || (sc == NULL))\n\t\treturn (SCHD_ERROR);\n\n\tif (!si->has_grp_limit)\n\t\treturn (0);\n\n\tauto &cts = sc->group;\n\n\tret = check_max_group_res(rr, cts,\n\t\t\t\t  &rdef, LI2RESCTX(si->liminfo));\n\tif (ret != 0)\n\t\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t\t   \"check_max_group_res returned %d\", ret);\n\tswitch (ret) {\n\t\tdefault:\n\t\tcase -1:\n\t\t\treturn (SCHD_ERROR);\n\t\tcase 0:\n\t\t\treturn (0);\n\t\tcase 1: /* generic group limit exceeded */\n\t\t\terr->rdef = rdef;\n\t\t\treturn (SERVER_GROUP_RES_LIMIT_REACHED);\n\t\tcase 2: /* individual group limit exceeded */\n\t\t\tschderr_args_server_res(rr->group, NULL, err);\n\t\t\terr->rdef = rdef;\n\t\t\treturn (SERVER_BYGROUP_RES_LIMIT_REACHED);\n\t}\n}\n\n/**\n * @brief\n *\t\tcheck_queue_max_user_run\thard limit checking function for\n *\t\t\t\t\tuser queue run limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n * @param[in]\tsc\t-\tlimcounts struct for server count/total_count maxes over job run\n * @param[in]\tqc\t-\tlimcounts struct for queue count/total_count maxes over job run\n * @param[out]\terr\t-\tschd_error structure to return error information\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t-\tif limit is not exceeded\n * @retval\tsched_error_code enum\t-\tif limit is exceeded\n * @retval\tSCHD_ERROR\t-\ton error\n * 
@see\t#sched_error_code in constant.h\n */\nstatic int\ncheck_queue_max_user_run(server_info *si, queue_info *qi, resource_resv *rr,\n\t\t\t limcounts *sc, limcounts *qc, schd_error *err)\n{\n\tchar *key;\n\tstd::string user;\n\tint used;\n\tint max_user_run, max_genuser_run;\n\n\tif ((qi == NULL) || (rr == NULL) || (rr->user.empty()) || (qc == NULL))\n\t\treturn (SCHD_ERROR);\n\n\tif (!qi->has_user_limit)\n\t\treturn (0);\n\n\tuser = rr->user;\n\n\tauto &cts = qc->user;\n\n\tif ((key = entlim_mk_runkey(LIM_USER, user.c_str())) == NULL)\n\t\treturn (SCHD_ERROR);\n\tmax_user_run = (int) lim_get(key, LI2RUNCTX(qi->liminfo));\n\tfree(key);\n\n\tif ((key = entlim_mk_runkey(LIM_USER, genparam)) == NULL)\n\t\treturn (SCHD_ERROR);\n\tmax_genuser_run = (int) lim_get(key, LI2RUNCTX(qi->liminfo));\n\tfree(key);\n\n\tif ((max_user_run == SCHD_INFINITY) &&\n\t    (max_genuser_run == SCHD_INFINITY))\n\t\treturn (0);\n\n\t/* at this point, we know a generic or individual limit is set */\n\tused = find_counts_elm(cts, user, NULL, NULL, NULL);\n\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t   \"user %s max_*user_run (%d, %d), used %d\",\n\t\t   user.c_str(), max_user_run, max_genuser_run, used);\n\n\tif (max_user_run != SCHD_INFINITY) {\n\t\tif (max_user_run <= used) {\n\t\t\tschderr_args_q(qi->name, user, err);\n\t\t\treturn (QUEUE_BYUSER_JOB_LIMIT_REACHED);\n\t\t} else\n\t\t\treturn (0); /* ignore a generic limit */\n\t} else if (max_genuser_run <= used) {\n\t\tschderr_args_q(qi->name, NULL, err);\n\t\treturn (QUEUE_USER_LIMIT_REACHED);\n\t} else\n\t\treturn (0);\n}\n\n/**\n * @brief\n *\t\tcheck_queue_max_group_run\thard limit checking function for\n *\t\t\t\t\tgroup queue run limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n * @param[in]\tsc\t-\tlimcounts struct for 
server count/total_count maxes over job run\n * @param[in]\tqc\t-\tlimcounts struct for queue count/total_count maxes over job run\n * @param[out]\terr\t-\tschd_error structure to return error information\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tsched_error_code enum\t: if limit is exceeded\n * @retval\tSCHD_ERROR\t: on error\n *\n * @see\t#sched_error_code in constant.h\n */\nstatic int\ncheck_queue_max_group_run(server_info *si, queue_info *qi, resource_resv *rr,\n\t\t\t  limcounts *sc, limcounts *qc, schd_error *err)\n{\n\tchar *key;\n\tstd::string group;\n\tint used;\n\tint max_group_run, max_gengroup_run;\n\n\tif ((qi == NULL) || (rr == NULL) || (rr->group.empty()) || (qc == NULL))\n\t\treturn (SCHD_ERROR);\n\n\tif (!qi->has_grp_limit)\n\t\treturn (0);\n\n\tgroup = rr->group;\n\n\tauto &cts = qc->group;\n\n\tif ((key = entlim_mk_runkey(LIM_GROUP, group.c_str())) == NULL)\n\t\treturn (SCHD_ERROR);\n\tmax_group_run = (int) lim_get(key, LI2RUNCTX(qi->liminfo));\n\tfree(key);\n\n\tif ((key = entlim_mk_runkey(LIM_GROUP, genparam)) == NULL)\n\t\treturn (SCHD_ERROR);\n\tmax_gengroup_run = (int) lim_get(key, LI2RUNCTX(qi->liminfo));\n\tfree(key);\n\n\tif ((max_group_run == SCHD_INFINITY) &&\n\t    (max_gengroup_run == SCHD_INFINITY))\n\t\treturn (0);\n\n\t/* at this point, we know a generic or individual limit is set */\n\tused = find_counts_elm(cts, group, NULL, NULL, NULL);\n\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t   \"group %s max_*group_run (%d, %d), used %d\",\n\t\t   group.c_str(), max_group_run, max_gengroup_run, used);\n\n\tif (max_group_run != SCHD_INFINITY) {\n\t\tif (max_group_run <= used) {\n\t\t\tschderr_args_q(qi->name, group, err);\n\t\t\treturn (QUEUE_BYGROUP_JOB_LIMIT_REACHED);\n\t\t} else\n\t\t\treturn (0); /* ignore a generic limit */\n\t} else if (max_gengroup_run <= used) {\n\t\tschderr_args_q(qi->name, NULL, err);\n\t\treturn 
(QUEUE_GROUP_LIMIT_REACHED);\n\t} else\n\t\treturn (0);\n}\n\n/**\n * @brief\n *\t\tcheck_queue_max_user_res\thard limit checking function for\n *\t\t\t\t\tuser queue resource limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n * @param[in]\tsc\t-\tlimcounts struct for server count/total_count maxes over job run\n * @param[in]\tqc\t-\tlimcounts struct for queue count/total_count maxes over job run\n * @param[out]\terr\t-\tschd_error structure to return error information\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tsched_error_code enum\t: if limit is exceeded\n * @retval\tSCHD_ERROR\t: on error\n *\n * @see\t#sched_error_code in constant.h\n */\nstatic int\ncheck_queue_max_user_res(server_info *si, queue_info *qi, resource_resv *rr,\n\t\t\t limcounts *sc, limcounts *qc, schd_error *err)\n{\n\tint ret;\n\tresdef *rdef = NULL;\n\n\tif ((qi == NULL) || (rr == NULL) || (qc == NULL))\n\t\treturn (SCHD_ERROR);\n\n\tif (!qi->has_user_limit)\n\t\treturn (0);\n\n\tauto &cts = qc->user;\n\n\tret = check_max_user_res(rr, cts, &rdef, LI2RESCTX(qi->liminfo));\n\tif (ret != 0)\n\t\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t\t   \"check_max_user_res returned %d\", ret);\n\n\tswitch (ret) {\n\t\tdefault:\n\t\tcase -1:\n\t\t\treturn (SCHD_ERROR);\n\t\tcase 0:\n\t\t\treturn (0);\n\t\tcase 1: /* generic user limit exceeded */\n\t\t\tschderr_args_q_res(qi->name, NULL, NULL, err);\n\t\t\terr->rdef = rdef;\n\t\t\treturn (QUEUE_USER_RES_LIMIT_REACHED);\n\t\tcase 2: /* individual user limit exceeded */\n\t\t\tschderr_args_q_res(qi->name, rr->user, NULL, err);\n\t\t\terr->rdef = rdef;\n\t\t\treturn (QUEUE_BYUSER_RES_LIMIT_REACHED);\n\t}\n}\n\n/**\n * @brief\n 
*\t\tcheck_queue_max_group_res\thard limit checking function for\n *\t\t\t\t\tgroup queue resource limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n * @param[in]\tsc\t-\tlimcounts struct for server count/total_count maxes over job run\n * @param[in]\tqc\t-\tlimcounts struct for queue count/total_count maxes over job run\n * @param[out]\terr\t-\tschd_error structure to return error information\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tsched_error_code enum\t: if limit is exceeded\n * @retval\tSCHD_ERROR\t: on error\n *\n * @see\t#sched_error_code in constant.h\n */\nstatic int\ncheck_queue_max_group_res(server_info *si, queue_info *qi, resource_resv *rr,\n\t\t\t  limcounts *sc, limcounts *qc, schd_error *err)\n{\n\tint ret;\n\tresdef *rdef = NULL;\n\n\tif ((qi == NULL) || (rr == NULL) || (qc == NULL))\n\t\treturn (SCHD_ERROR);\n\n\tif (!qi->has_grp_limit)\n\t\treturn (0);\n\n\tauto &cts = qc->group;\n\n\tret = check_max_group_res(rr, cts, &rdef, LI2RESCTX(qi->liminfo));\n\tif (ret != 0)\n\t\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t\t   \"check_max_group_res returned %d\", ret);\n\n\tswitch (ret) {\n\t\tdefault:\n\t\tcase -1:\n\t\t\treturn (SCHD_ERROR);\n\t\tcase 0:\n\t\t\treturn (0);\n\t\tcase 1: /* generic group limit exceeded */\n\t\t\tschderr_args_q_res(qi->name, NULL, NULL, err);\n\t\t\terr->rdef = rdef;\n\t\t\treturn (QUEUE_GROUP_RES_LIMIT_REACHED);\n\t\tcase 2: /* individual group limit exceeded */\n\t\t\tschderr_args_q_res(qi->name, rr->group, NULL, err);\n\t\t\terr->rdef = rdef;\n\t\t\treturn (QUEUE_BYGROUP_RES_LIMIT_REACHED);\n\t}\n}\n\n/**\n * @brief\n *\t\tcheck_queue_max_res\thard limit checking function for overall queue\n *\t\t\t\tresource limits\n 
*\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n * @param[in]\tsc\t-\tlimcounts struct for server count/total_count maxes over job run\n * @param[in]\tqc\t-\tlimcounts struct for queue count/total_count maxes over job run\n * @param[out]\terr\t-\tschd_error structure to return error information\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tsched_error_code enum\t: if limit is exceeded\n * @retval\tSCHD_ERROR\t: on error\n *\n * @see\t#sched_error_code in constant.h\n */\nstatic int\ncheck_queue_max_res(server_info *si, queue_info *qi, resource_resv *rr,\n\t\t    limcounts *sc, limcounts *qc, schd_error *err)\n{\n\tchar *reskey;\n\tsch_resource_t max_res;\n\tsch_resource_t used;\n\tschd_resource *res;\n\tresource_count *used_res;\n\tcounts *c;\n\n\tif ((qi == NULL) || (rr == NULL))\n\t\treturn (SCHD_ERROR);\n\n\tif (qc == NULL)\n\t\treturn (0);\n\n\tauto &cts = qc->all;\n\n\tc = find_counts(cts, PBS_ALL_ENTITY);\n\tif (c == NULL)\n\t\treturn (0);\n\n\tfor (res = limres; res != NULL; res = res->next) {\n\t\tresource_req *req;\n\t\tif ((req = find_resource_req(rr->resreq, res->def)) == NULL)\n\t\t\tcontinue;\n\n\t\tif ((reskey = entlim_mk_reskey(LIM_OVERALL, allparam,\n\t\t\t\t\t       res->name)) == NULL)\n\t\t\treturn (SCHD_ERROR);\n\t\tmax_res = lim_get(reskey, LI2RESCTX(qi->liminfo));\n\t\tfree(reskey);\n\n\t\tif (max_res == SCHD_INFINITY)\n\t\t\tcontinue;\n\n\t\tif ((used_res = find_resource_count(c->rescts, res->def)) == NULL)\n\t\t\tused = 0;\n\t\telse\n\t\t\tused = used_res->amount;\n\n\t\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t\t   \"max_res.%s %.1lf, used %.1lf\", res->name, max_res, used);\n\t\tif (used + req->amount > max_res) {\n\t\t\tschderr_args_q_res(qi->name, 
NULL, NULL, err);\n\t\t\terr->rdef = res->def;\n\t\t\treturn (QUEUE_RESOURCE_LIMIT_REACHED);\n\t\t}\n\t}\n\n\treturn (0);\n}\n\n/**\n * @brief\n *\t\tcheck_server_max_res\thard limit checking function for overall server\n *\t\t\t\tresource limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n * @param[in]\tsc\t-\tlimcounts struct for server count/total_count maxes over job run\n * @param[in]\tqc\t-\tlimcounts struct for queue count/total_count maxes over job run\n * @param[out]\terr\t-\tschd_error structure to return error information\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tsched_error_code enum\t: if limit is exceeded\n * @retval\tSCHD_ERROR\t: on error\n *\n * @see\t#sched_error_code in constant.h\n */\nstatic int\ncheck_server_max_res(server_info *si, queue_info *qi, resource_resv *rr,\n\t\t     limcounts *sc, limcounts *qc, schd_error *err)\n{\n\tchar *reskey;\n\tsch_resource_t max_res;\n\tsch_resource_t used;\n\tschd_resource *res;\n\tresource_count *used_res;\n\tcounts *c;\n\n\tif ((si == NULL) || (rr == NULL))\n\t\treturn (SCHD_ERROR);\n\n\tif (sc == NULL)\n\t\treturn (0);\n\n\tauto &cts = sc->all;\n\n\tc = find_counts(cts, PBS_ALL_ENTITY);\n\tif (c == NULL)\n\t\treturn (0);\n\n\tfor (res = limres; res != NULL; res = res->next) {\n\t\tresource_req *req;\n\n\t\tif ((req = find_resource_req(rr->resreq, res->def)) == NULL)\n\t\t\tcontinue;\n\n\t\tif ((reskey = entlim_mk_reskey(LIM_OVERALL, allparam,\n\t\t\t\t\t       res->name)) == NULL)\n\t\t\treturn (SCHD_ERROR);\n\t\tmax_res = lim_get(reskey, LI2RESCTX(si->liminfo));\n\t\tfree(reskey);\n\n\t\tif (max_res == SCHD_INFINITY)\n\t\t\tcontinue;\n\n\t\tif ((used_res = find_resource_count(c->rescts, res->def)) == NULL)\n\t\t\tused = 
0;\n\t\telse\n\t\t\tused = used_res->amount;\n\n\t\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t\t   \"max_res.%s %.1lf, used %.1lf\", res->name, max_res, used);\n\t\tif (used + req->amount > max_res) {\n\t\t\terr->rdef = res->def;\n\t\t\treturn (SERVER_RESOURCE_LIMIT_REACHED);\n\t\t}\n\t}\n\n\treturn (0);\n}\n\n/**\n * @brief\n *\t\tcheck_server_max_run\thard limit checking function for\n *\t\t\t\tserver run limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n * @param[in]\tsc\t-\tlimcounts struct for server count/total_count maxes over job run\n * @param[in]\tqc\t-\tlimcounts struct for queue count/total_count maxes over job run\n * @param[out]\terr\t-\tschd_error structure to return error information\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tsched_error_code enum\t: if limit is exceeded\n * @retval\tSCHD_ERROR\t: on error\n *\n * @see\t#sched_error_code in constant.h\n */\nstatic int\ncheck_server_max_run(server_info *si, queue_info *qi, resource_resv *rr,\n\t\t     limcounts *sc, limcounts *qc, schd_error *err)\n{\n\tint max_running;\n\tchar *key;\n\tint running;\n\n\tif (si == NULL)\n\t\treturn (SCHD_ERROR);\n\n\tif (sc == NULL)\n\t\treturn (0);\n\n\tauto &cts = sc->all;\n\n\tif ((key = entlim_mk_runkey(LIM_OVERALL, allparam)) == NULL)\n\t\treturn (SCHD_ERROR);\n\tmax_running = (int) lim_get(key, LI2RUNCTX(si->liminfo));\n\tfree(key);\n\n\trunning = find_counts_elm(cts, PBS_ALL_ENTITY, NULL, NULL, NULL);\n\n\tif ((max_running == SCHD_INFINITY) ||\n\t    (max_running > running))\n\t\treturn (0);\n\telse {\n\t\tschderr_args_server(NULL, err);\n\t\treturn (SERVER_JOB_LIMIT_REACHED);\n\t}\n}\n\n/**\n * @brief\n *\t\tcheck_queue_max_run\thard limit checking function 
for\n *\t\t\t\tqueue run limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n * @param[in]\tsc\t-\tlimcounts struct for server count/total_count maxes over job run\n * @param[in]\tqc\t-\tlimcounts struct for queue count/total_count maxes over job run\n * @param[out]\terr\t-\tschd_error structure to return error information\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tsched_error_code enum\t: if limit is exceeded\n * @retval\tSCHD_ERROR\t: on error\n *\n * @see\t#sched_error_code in constant.h\n */\nstatic int\ncheck_queue_max_run(server_info *si, queue_info *qi, resource_resv *rr,\n\t\t    limcounts *sc, limcounts *qc, schd_error *err)\n{\n\tint max_running;\n\tchar *key;\n\tint running;\n\n\tif (qi == NULL)\n\t\treturn (SCHD_ERROR);\n\n\tif (qc == NULL)\n\t\treturn (0);\n\n\tauto &cts = qc->all;\n\n\tif ((key = entlim_mk_runkey(LIM_OVERALL, allparam)) == NULL)\n\t\treturn (SCHD_ERROR);\n\n\tmax_running = (int) lim_get(key, LI2RUNCTX(qi->liminfo));\n\tfree(key);\n\n\trunning = find_counts_elm(cts, PBS_ALL_ENTITY, NULL, NULL, NULL);\n\n\tif ((max_running == SCHD_INFINITY) ||\n\t    (max_running > running))\n\t\treturn (0);\n\telse {\n\t\tschderr_args_q(qi->name, NULL, err);\n\t\treturn (QUEUE_JOB_LIMIT_REACHED);\n\t}\n}\n\n/**\n * @brief\n *\t\tcheck_queue_max_run_soft\tsoft limit checking function for\n *\t\t\t\t\tqueue run limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tPREEMPT_TO_BIT(preempt 
enum)\t: if limit is exceeded\n * @retval\tPREEMPT_TO_BIT(PREEMPT_ERR)\t: on error\n *\n * @see\t\t#preempt enum in constant.h\n */\nstatic int\ncheck_queue_max_run_soft(server_info *si, queue_info *qi, resource_resv *rr)\n{\n\tint max_running;\n\tchar *key;\n\tcounts *cnt = NULL;\n\tint used = 0;\n\n\tif (qi == NULL)\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\n\tif (!qi->has_all_limit)\n\t\treturn (0);\n\n\tif ((key = entlim_mk_runkey(LIM_OVERALL, allparam)) == NULL)\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\tmax_running = (int) lim_get(key, LI2RUNCTXSOFT(qi->liminfo));\n\tfree(key);\n\n\t/* at this point, we know a limit is set for PBS_ALL*/\n\tused = find_counts_elm(qi->alljobcounts, PBS_ALL_ENTITY, NULL, &cnt, NULL);\n\tif (max_running != SCHD_INFINITY && used > max_running) {\n\t\tif (cnt != NULL)\n\t\t\tcnt->soft_limit_preempt_bit = PREEMPT_TO_BIT(PREEMPT_OVER_QUEUE_LIMIT);\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_OVER_QUEUE_LIMIT));\n\t} else {\n\t\tif (cnt != NULL)\n\t\t\tcnt->soft_limit_preempt_bit = 0;\n\t\treturn (0);\n\t}\n}\n\n/**\n * @brief\n *\t\tcheck_queue_max_user_run_soft\tsoft limit checking function for\n *\t\t\t\t\tuser queue run limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tPREEMPT_TO_BIT(preempt enum)\t: if limit is exceeded\n * @retval\tPREEMPT_TO_BIT(PREEMPT_ERR)\t: on error\n *\n * @see\t#preempt enum in constant.h\n */\nstatic int\ncheck_queue_max_user_run_soft(server_info *si, queue_info *qi, resource_resv *rr)\n{\n\tchar *key;\n\tstd::string user;\n\tint used;\n\tint max_user_run_soft, max_genuser_run_soft;\n\tcounts *cnt = NULL;\n\n\tif ((qi == NULL) || (rr == NULL) || (rr->user.empty()))\n\t\treturn 
(PREEMPT_TO_BIT(PREEMPT_ERR));\n\n\tif (!qi->has_user_limit)\n\t\treturn (0);\n\n\tuser = rr->user;\n\n\tif ((key = entlim_mk_runkey(LIM_USER, user.c_str())) == NULL)\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\tmax_user_run_soft = (int) lim_get(key, LI2RUNCTXSOFT(qi->liminfo));\n\tfree(key);\n\n\tif ((key = entlim_mk_runkey(LIM_USER, genparam)) == NULL)\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\tmax_genuser_run_soft = (int) lim_get(key, LI2RUNCTXSOFT(qi->liminfo));\n\tfree(key);\n\n\tif ((max_user_run_soft == SCHD_INFINITY) &&\n\t    (max_genuser_run_soft == SCHD_INFINITY))\n\t\treturn (0);\n\n\t/* at this point, we know a generic or individual limit is set */\n\tused = find_counts_elm(qi->user_counts, user, NULL, &cnt, NULL);\n\n\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t   \"user %s max_*user_run_soft (%d, %d), used %d\",\n\t\t   user.c_str(), max_user_run_soft, max_genuser_run_soft, used);\n\n\tif (max_user_run_soft != SCHD_INFINITY) {\n\t\tif (max_user_run_soft < used) {\n\t\t\tif (cnt != NULL)\n\t\t\t\tcnt->soft_limit_preempt_bit = PREEMPT_TO_BIT(PREEMPT_OVER_QUEUE_LIMIT);\n\t\t\treturn (PREEMPT_TO_BIT(PREEMPT_OVER_QUEUE_LIMIT));\n\t\t} else {\n\t\t\tif (cnt != NULL)\n\t\t\t\tcnt->soft_limit_preempt_bit = 0;\n\t\t\treturn (0); /* ignore a generic limit */\n\t\t}\n\t} else if (max_genuser_run_soft < used) {\n\t\tif (cnt != NULL)\n\t\t\tcnt->soft_limit_preempt_bit = PREEMPT_TO_BIT(PREEMPT_OVER_QUEUE_LIMIT);\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_OVER_QUEUE_LIMIT));\n\t} else {\n\t\tif (cnt != NULL)\n\t\t\tcnt->soft_limit_preempt_bit = 0;\n\t\treturn (0);\n\t}\n}\n\n/**\n * @brief\n *\t\tcheck_queue_max_group_run_soft\tsoft limit checking function for\n *\t\t\t\t\tgroup queue run limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n *\n * 
@return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tPREEMPT_TO_BIT(preempt enum)\t: if limit is exceeded\n * @retval\tPREEMPT_TO_BIT(PREEMPT_ERR)\t: on error\n *\n * @see\t#preempt enum in constant.h\n */\nstatic int\ncheck_queue_max_group_run_soft(server_info *si, queue_info *qi,\n\t\t\t       resource_resv *rr)\n{\n\tchar *key;\n\tstd::string group;\n\tint used;\n\tint max_group_run_soft, max_gengroup_run_soft;\n\tcounts *cnt = NULL;\n\n\tif ((qi == NULL) || (rr == NULL) || (rr->group.empty()))\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\n\tif (!qi->has_grp_limit)\n\t\treturn (0);\n\n\tgroup = rr->group;\n\n\tif ((key = entlim_mk_runkey(LIM_GROUP, group.c_str())) == NULL)\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\tmax_group_run_soft = (int) lim_get(key, LI2RUNCTXSOFT(qi->liminfo));\n\tfree(key);\n\n\tif ((key = entlim_mk_runkey(LIM_GROUP, genparam)) == NULL)\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\tmax_gengroup_run_soft = (int) lim_get(key, LI2RUNCTXSOFT(qi->liminfo));\n\tfree(key);\n\n\tif ((max_group_run_soft == SCHD_INFINITY) &&\n\t    (max_gengroup_run_soft == SCHD_INFINITY))\n\t\treturn (0);\n\n\tused = find_counts_elm(qi->group_counts, group, NULL, &cnt, NULL);\n\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t   \"group %s max_*group_run_soft (%d, %d), used %d\",\n\t\t   group.c_str(), max_group_run_soft, max_gengroup_run_soft, used);\n\n\tif (max_group_run_soft != SCHD_INFINITY) {\n\t\tif (max_group_run_soft < used) {\n\t\t\tif (cnt != NULL)\n\t\t\t\tcnt->soft_limit_preempt_bit = PREEMPT_TO_BIT(PREEMPT_OVER_QUEUE_LIMIT);\n\t\t\treturn (PREEMPT_TO_BIT(PREEMPT_OVER_QUEUE_LIMIT));\n\t\t} else {\n\t\t\tif (cnt != NULL)\n\t\t\t\tcnt->soft_limit_preempt_bit = 0;\n\t\t\treturn (0); /* ignore a generic limit */\n\t\t}\n\t} else if (max_gengroup_run_soft < used) {\n\t\tif (cnt != NULL)\n\t\t\tcnt->soft_limit_preempt_bit = 
PREEMPT_TO_BIT(PREEMPT_OVER_QUEUE_LIMIT);\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_OVER_QUEUE_LIMIT));\n\t} else {\n\t\tif (cnt != NULL)\n\t\t\tcnt->soft_limit_preempt_bit = 0;\n\t\treturn (0);\n\t}\n}\n\n/**\n * @brief\n *\t\tcheck_queue_max_user_res_soft\tsoft limit checking function for\n *\t\t\t\t\tuser queue resource limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tPREEMPT_TO_BIT(preempt enum)\t: if limit is exceeded\n * @retval\tPREEMPT_TO_BIT(PREEMPT_ERR)\t: on error\n *\n * @see\t\t#preempt enum in constant.h\n */\nstatic int\ncheck_queue_max_user_res_soft(server_info *si, queue_info *qi, resource_resv *rr)\n{\n\tif ((qi == NULL) || (rr == NULL))\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\n\tif (!qi->has_user_limit)\n\t\treturn (0);\n\n\treturn (check_max_user_res_soft(qi->running_jobs, rr, qi->user_counts,\n\t\t\t\t\tLI2RESCTXSOFT(qi->liminfo), PREEMPT_TO_BIT(PREEMPT_OVER_QUEUE_LIMIT)));\n}\n\n/**\n * @brief\n *\t\tcheck_queue_max_group_res_soft\tsoft limit checking function for\n *\t\t\t\t\tgroup queue resource limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tPREEMPT_TO_BIT(preempt enum)\t: if limit is exceeded\n * @retval\tPREEMPT_TO_BIT(PREEMPT_ERR)\t: on error\n *\n * @see\t#preempt enum in constant.h\n */\nstatic int\ncheck_queue_max_group_res_soft(server_info *si, queue_info *qi,\n\t\t\t       resource_resv *rr)\n{\n\tif ((qi == NULL) || 
(rr == NULL))\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\n\tif (!qi->has_grp_limit)\n\t\treturn (0);\n\n\treturn (check_max_group_res_soft(rr, qi->group_counts,\n\t\t\t\t\t LI2RESCTXSOFT(qi->liminfo), PREEMPT_TO_BIT(PREEMPT_OVER_QUEUE_LIMIT)));\n}\n\n/**\n * @brief\n *\t\tcheck_server_max_run_soft\tsoft limit checking function for\n *\t\t\t\t\tserver run limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tPREEMPT_TO_BIT(preempt enum)\t: if limit is exceeded\n * @retval\tPREEMPT_TO_BIT(PREEMPT_ERR)\t: on error\n *\n * @see\t#preempt enum in constant.h\n */\nstatic int\ncheck_server_max_run_soft(server_info *si, queue_info *qi, resource_resv *rr)\n{\n\tint max_running;\n\tchar *key;\n\tcounts *cnt = NULL;\n\tint used = 0;\n\n\tif (si == NULL)\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\n\tif (!si->has_all_limit)\n\t\treturn (0);\n\n\tif ((key = entlim_mk_runkey(LIM_OVERALL, allparam)) == NULL)\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\tmax_running = (int) lim_get(key, LI2RUNCTXSOFT(si->liminfo));\n\tfree(key);\n\n\t/* at this point, we know a limit is set for PBS_ALL*/\n\tused = find_counts_elm(si->alljobcounts, PBS_ALL_ENTITY, NULL, &cnt, NULL);\n\tif (max_running != SCHD_INFINITY && used > max_running) {\n\t\tif (cnt != NULL)\n\t\t\tcnt->soft_limit_preempt_bit = PREEMPT_TO_BIT(PREEMPT_OVER_SERVER_LIMIT);\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_OVER_SERVER_LIMIT));\n\t} else {\n\t\tif (cnt != NULL)\n\t\t\tcnt->soft_limit_preempt_bit = 0;\n\t\treturn (0);\n\t}\n}\n\n/**\n * @brief\n *\t\tcheck_server_max_user_run_soft\tsoft limit checking function for\n *\t\t\t\t\tuser server run limits\n *\n * @param [in]\tsi\t-\tserver_info structure to use for limit 
evaluation\n * @param [in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param [in]\trr\t-\tresource_resv structure to use for limit evaluation\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tPREEMPT_TO_BIT(preempt enum)\t: if limit is exceeded\n * @retval\tPREEMPT_TO_BIT(PREEMPT_ERR)\t: on error\n *\n * @see\t#preempt enum in constant.h\n */\nstatic int\ncheck_server_max_user_run_soft(server_info *si, queue_info *qi,\n\t\t\t       resource_resv *rr)\n{\n\tchar *key;\n\tstd::string user;\n\tint used;\n\tint max_user_run_soft, max_genuser_run_soft;\n\tcounts *cnt = NULL;\n\n\tif ((si == NULL) || (rr == NULL) || (rr->user.empty()))\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\n\tif (!si->has_user_limit)\n\t\treturn (0);\n\n\tuser = rr->user;\n\n\tif ((key = entlim_mk_runkey(LIM_USER, user.c_str())) == NULL)\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\tmax_user_run_soft = (int) lim_get(key, LI2RUNCTXSOFT(si->liminfo));\n\tfree(key);\n\n\tif ((key = entlim_mk_runkey(LIM_USER, genparam)) == NULL)\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\tmax_genuser_run_soft = (int) lim_get(key, LI2RUNCTXSOFT(si->liminfo));\n\tfree(key);\n\n\tif ((max_user_run_soft == SCHD_INFINITY) &&\n\t    (max_genuser_run_soft == SCHD_INFINITY))\n\t\treturn (0);\n\n\t/* at this point, we know a generic or individual limit is set */\n\tused = find_counts_elm(si->user_counts, user, NULL, &cnt, NULL);\n\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t   \"user %s max_*user_run_soft (%d, %d), used %d\",\n\t\t   user.c_str(), max_user_run_soft, max_genuser_run_soft, used);\n\n\tif (max_user_run_soft != SCHD_INFINITY) {\n\t\tif (max_user_run_soft < used) {\n\t\t\tif (cnt != NULL)\n\t\t\t\tcnt->soft_limit_preempt_bit = PREEMPT_TO_BIT(PREEMPT_OVER_SERVER_LIMIT);\n\t\t\treturn (PREEMPT_TO_BIT(PREEMPT_OVER_SERVER_LIMIT));\n\t\t} else {\n\t\t\tif (cnt != 
NULL)\n\t\t\t\tcnt->soft_limit_preempt_bit = 0;\n\t\t\treturn (0); /* ignore a generic limit */\n\t\t}\n\t} else if (max_genuser_run_soft < used) {\n\t\tif (cnt != NULL)\n\t\t\tcnt->soft_limit_preempt_bit = PREEMPT_TO_BIT(PREEMPT_OVER_SERVER_LIMIT);\n\t\treturn ((PREEMPT_TO_BIT(PREEMPT_OVER_SERVER_LIMIT)));\n\t} else {\n\t\tif (cnt != NULL)\n\t\t\tcnt->soft_limit_preempt_bit = 0;\n\t\treturn (0);\n\t}\n}\n\n/**\n * @brief\n *\t\tcheck_server_max_group_run_soft\tsoft limit checking function for\n *\t\t\t\t\tgroup server run limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tPREEMPT_TO_BIT(preempt enum)\t: if limit is exceeded\n * @retval\tPREEMPT_TO_BIT(PREEMPT_ERR)\t: on error\n *\n * @see\t#preempt enum in constant.h\n */\nstatic int\ncheck_server_max_group_run_soft(server_info *si, queue_info *qi,\n\t\t\t\tresource_resv *rr)\n{\n\tchar *key;\n\tstd::string group;\n\tint used;\n\tint max_group_run_soft, max_gengroup_run_soft;\n\tcounts *cnt = NULL;\n\n\tif ((si == NULL) || (rr == NULL) || (rr->group.empty()))\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\n\tif (!si->has_grp_limit)\n\t\treturn (0);\n\n\tgroup = rr->group;\n\n\tif ((key = entlim_mk_runkey(LIM_GROUP, group.c_str())) == NULL)\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\tmax_group_run_soft = (int) lim_get(key, LI2RUNCTXSOFT(si->liminfo));\n\tfree(key);\n\n\tif ((key = entlim_mk_runkey(LIM_GROUP, genparam)) == NULL)\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\tmax_gengroup_run_soft = (int) lim_get(key, LI2RUNCTXSOFT(si->liminfo));\n\tfree(key);\n\n\tif ((max_group_run_soft == SCHD_INFINITY) &&\n\t    (max_gengroup_run_soft == SCHD_INFINITY))\n\t\treturn (0);\n\n\t/* at this point, we know a 
generic or individual limit is set */\n\tused = find_counts_elm(si->group_counts, group, NULL, &cnt, NULL);\n\n\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t   \"group %s max_*group_run_soft (%d, %d), used %d\",\n\t\t   group.c_str(), max_group_run_soft, max_gengroup_run_soft, used);\n\n\tif (max_group_run_soft != SCHD_INFINITY) {\n\t\tif (max_group_run_soft < used) {\n\t\t\tif (cnt != NULL)\n\t\t\t\tcnt->soft_limit_preempt_bit = PREEMPT_TO_BIT(PREEMPT_OVER_SERVER_LIMIT);\n\t\t\treturn (PREEMPT_TO_BIT(PREEMPT_OVER_SERVER_LIMIT));\n\t\t} else {\n\t\t\tif (cnt != NULL)\n\t\t\t\tcnt->soft_limit_preempt_bit = 0;\n\t\t\treturn (0); /* ignore a generic limit */\n\t\t}\n\t} else if (max_gengroup_run_soft < used) {\n\t\tif (cnt != NULL)\n\t\t\tcnt->soft_limit_preempt_bit = PREEMPT_TO_BIT(PREEMPT_OVER_SERVER_LIMIT);\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_OVER_SERVER_LIMIT));\n\t} else {\n\t\tif (cnt != NULL)\n\t\t\tcnt->soft_limit_preempt_bit = 0;\n\t\treturn (0);\n\t}\n}\n\n/**\n * @brief\n *\t\tcheck_server_max_user_res_soft\tsoft limit checking function for\n *\t\t\t\t\tuser server resource limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param [in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param [in]\trr\t-\tresource_resv structure to use for limit evaluation\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tPREEMPT_TO_BIT(preempt enum)\t: if limit is exceeded\n * @retval\tPREEMPT_TO_BIT(PREEMPT_ERR)\t: on error\n *\n * @see\t#preempt enum in constant.h\n */\nstatic int\ncheck_server_max_user_res_soft(server_info *si, queue_info *qi,\n\t\t\t       resource_resv *rr)\n{\n\tif ((si == NULL) || (rr == NULL))\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\n\tif (!si->has_user_limit)\n\t\treturn (0);\n\n\treturn (check_max_user_res_soft(si->running_jobs, rr, 
si->user_counts,\n\t\t\t\t\tLI2RESCTXSOFT(si->liminfo), PREEMPT_TO_BIT(PREEMPT_OVER_SERVER_LIMIT)));\n}\n\n/**\n * @brief\n * \t\tcheck_server_max_group_res_soft\tsoft limit checking function for\n *\t\t\t\t\tgroup server resource limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tPREEMPT_TO_BIT(preempt enum)\t: if limit is exceeded\n * @retval\tPREEMPT_TO_BIT(PREEMPT_ERR)\t: on error\n *\n * @see\t#preempt enum in constant.h\n */\nstatic int\ncheck_server_max_group_res_soft(server_info *si, queue_info *qi,\n\t\t\t\tresource_resv *rr)\n{\n\tif ((si == NULL) || (rr == NULL))\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\n\tif (!si->has_grp_limit)\n\t\treturn (0);\n\n\treturn (check_max_group_res_soft(rr, si->group_counts,\n\t\t\t\t\t LI2RESCTXSOFT(si->liminfo), PREEMPT_TO_BIT(PREEMPT_OVER_SERVER_LIMIT)));\n}\n\n/**\n * @brief\n *\t\tcheck_server_max_res_soft\tsoft limit checking function for overall\n *\t\t\t\t\tserver resource limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tPREEMPT_TO_BIT(preempt enum)\t: if limit is exceeded\n * @retval\tPREEMPT_TO_BIT(PREEMPT_ERR)\t: on error\n * @see\t#preempt enum in constant.h\n */\nstatic int\ncheck_server_max_res_soft(server_info *si, queue_info *qi, resource_resv *rr)\n{\n\tchar *reskey;\n\tsch_resource_t max_res_soft;\n\tsch_resource_t used;\n\tschd_resource *res;\n\tresource_count *used_res;\n\tcounts 
*c;\n\n\tif ((si == NULL) || (rr == NULL))\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\n\tc = find_counts(si->alljobcounts, PBS_ALL_ENTITY);\n\tif (c == NULL)\n\t\treturn (0);\n\n\tfor (res = limres; res != NULL; res = res->next) {\n\t\t/* If the job is not requesting the limit resource, it is not over its soft limit*/\n\t\tif (find_resource_req(rr->resreq, res->def) == NULL)\n\t\t\tcontinue;\n\n\t\tif ((reskey = entlim_mk_reskey(LIM_OVERALL, allparam,\n\t\t\t\t\t       res->name)) == NULL)\n\t\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\t\tmax_res_soft = lim_get(reskey, LI2RESCTXSOFT(si->liminfo));\n\t\tfree(reskey);\n\n\t\tif (max_res_soft == SCHD_INFINITY)\n\t\t\tcontinue;\n\n\t\tif ((used_res = find_resource_count(c->rescts, res->def)) == NULL)\n\t\t\tused = 0;\n\t\telse\n\t\t\tused = used_res->amount;\n\n\t\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t\t   \"max_res_soft.%s %.1lf, used %.1lf\",\n\t\t\t   res->name, max_res_soft, used);\n\n\t\tif (max_res_soft < used) {\n\t\t\tif (used_res != NULL)\n\t\t\t\tused_res->soft_limit_preempt_bit = PREEMPT_TO_BIT(PREEMPT_OVER_SERVER_LIMIT);\n\t\t\treturn (PREEMPT_TO_BIT(PREEMPT_OVER_SERVER_LIMIT));\n\t\t} else {\n\t\t\tif (used_res != NULL)\n\t\t\t\tused_res->soft_limit_preempt_bit = 0;\n\t\t}\n\t}\n\n\treturn (0);\n}\n\n/**\n * @brief\n *\t\tcheck_queue_max_res_soft\tsoft limit checking function for overall\n *\t\t\t\t\tqueue resource limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tPREEMPT_TO_BIT(preempt enum)\t: if limit is exceeded\n * @retval\tPREEMPT_TO_BIT(PREEMPT_ERR)\t: on error\n *\n * @see\t#preempt enum in constant.h\n */\nstatic 
int\ncheck_queue_max_res_soft(server_info *si, queue_info *qi, resource_resv *rr)\n{\n\tchar *reskey;\n\tsch_resource_t max_res_soft;\n\tsch_resource_t used;\n\tschd_resource *res;\n\tresource_count *used_res;\n\tcounts *c;\n\n\tif ((qi == NULL) || (rr == NULL))\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\n\tc = find_counts(qi->alljobcounts, PBS_ALL_ENTITY);\n\tif (c == NULL)\n\t\treturn (0);\n\n\tfor (res = limres; res != NULL; res = res->next) {\n\t\t/* If the job is not requesting the limit resource, it is not over its soft limit */\n\t\tif (find_resource_req(rr->resreq, res->def) == NULL)\n\t\t\tcontinue;\n\n\t\tif ((reskey = entlim_mk_reskey(LIM_OVERALL, allparam,\n\t\t\t\t\t       res->name)) == NULL)\n\t\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\t\tmax_res_soft = lim_get(reskey, LI2RESCTXSOFT(qi->liminfo));\n\t\tfree(reskey);\n\n\t\tif (max_res_soft == SCHD_INFINITY)\n\t\t\tcontinue;\n\n\t\tif ((used_res = find_resource_count(c->rescts, res->def)) == NULL)\n\t\t\tused = 0;\n\t\telse\n\t\t\tused = used_res->amount;\n\n\t\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t\t   \"max_res_soft.%s %.1lf, used %.1lf\",\n\t\t\t   res->name, max_res_soft, used);\n\n\t\tif (max_res_soft < used) {\n\t\t\tif (used_res != NULL)\n\t\t\t\tused_res->soft_limit_preempt_bit = PREEMPT_TO_BIT(PREEMPT_OVER_QUEUE_LIMIT);\n\t\t\treturn (PREEMPT_TO_BIT(PREEMPT_OVER_QUEUE_LIMIT));\n\t\t} else {\n\t\t\tif (used_res != NULL)\n\t\t\t\tused_res->soft_limit_preempt_bit = 0;\n\t\t}\n\t}\n\n\treturn (0);\n}\n\n/**\n * @brief\n *\t\tcheck_max_group_res\tcheck to see whether the user can run a\n *\t\t\t\tresource resv and still be within group max\n *\t\t\t\tresource limits\n *\n * @param[in]\trr\t-\tresource_resv to run\n * @param[in]\tcts_list\t-\tthe group counts list\n * @param[out]  rdef -\t\tresource definition of resource exceeding a limit\n * @param[in]\tlimitctx\t-\tthe limit storage context\n *\n * @return\tint\n * @retval\t0\t: if the group would be under 
or at its limits\n * @retval\t1\t: if a generic group limit would be exceeded\n * @retval\t2\t: if an individual group limit would be exceeded\n * @retval\t-1\t: on error\n */\nstatic int\ncheck_max_group_res(resource_resv *rr, counts_umap &cts_list,\n\t\t    resdef **rdef, void *limitctx)\n{\n\tchar *groupreskey;\n\tchar *gengroupreskey;\n\tstd::string group;\n\tschd_resource *res;\n\tsch_resource_t max_group_res;\n\tsch_resource_t max_gengroup_res;\n\tsch_resource_t used = 0;\n\n\tif (rr == NULL)\n\t\treturn (-1);\n\tif ((limres == NULL) || (rr->resreq == NULL))\n\t\treturn (0);\n\n\tgroup = rr->group;\n\n\tfor (res = limres; res != NULL; res = res->next) {\n\t\tresource_req *req;\n\t\tif ((req = find_resource_req(rr->resreq, res->def)) == NULL)\n\t\t\tcontinue;\n\n\t\t/* individual group limit check */\n\t\tif ((groupreskey = entlim_mk_reskey(LIM_GROUP, group.c_str(),\n\t\t\t\t\t\t    res->name)) == NULL)\n\t\t\treturn (-1);\n\t\tmax_group_res = lim_get(groupreskey, limitctx);\n\t\tfree(groupreskey);\n\n\t\t/* generic group limit check */\n\t\tif ((gengroupreskey = lim_gengroupreskey(res->name)) == NULL)\n\t\t\treturn (-1);\n\t\tmax_gengroup_res = lim_get(gengroupreskey, limitctx);\n\t\tfree(gengroupreskey);\n\n\t\tif ((max_group_res == SCHD_INFINITY) &&\n\t\t    (max_gengroup_res == SCHD_INFINITY))\n\t\t\tcontinue;\n\n\t\t/* at this point, we know a generic or individual limit is set */\n\t\tused = find_counts_elm(cts_list, group, res->def, NULL, NULL);\n\n\t\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t\t   \"group %s max_*group_res.%s (%.1lf, %.1lf), used %.1lf\",\n\t\t\t   group.c_str(), res->name, max_group_res, max_gengroup_res, used);\n\n\t\tif (max_group_res != SCHD_INFINITY) {\n\t\t\tif (used + req->amount > max_group_res) {\n\t\t\t\t*rdef = res->def;\n\t\t\t\treturn (2);\n\t\t\t} else\n\t\t\t\tcontinue; /* ignore a generic limit */\n\t\t} else if (used + req->amount > max_gengroup_res) {\n\t\t\t*rdef = 
res->def;\n\t\t\treturn (1);\n\t\t}\n\t}\n\n\treturn (0);\n}\n\n/**\n * @brief\n *\t\tcheck_max_group_res_soft\tcheck to see whether the user can run a\n *\t\t\t\t\tresource resv and still be within group\n *\t\t\t\t\tmax resource limits\n *\n * @param[in]\trr\t-\tresource_resv to run\n * @param[in]\tcts_list\t-\tthe group counts list\n * @param[in]\tlimitctx\t-\tthe limit storage context\n * @param[in]\tpreempt_bit\t-\tpreempt bit value to set and return if limit is exceeded\n *\n * @return\tint\n * @retval\t0\t: if the group would be under or at its soft limits\n * @retval\tpreempt_bit\t: if an individual or generic group soft limit is exceeded\n * @retval\t-1\t: on error\n */\nstatic int\ncheck_max_group_res_soft(resource_resv *rr, counts_umap &cts_list, void *limitctx, int preempt_bit)\n{\n\tchar *groupreskey;\n\tchar *gengroupreskey;\n\tstd::string group;\n\tschd_resource *res;\n\tsch_resource_t max_group_res_soft;\n\tsch_resource_t max_gengroup_res_soft;\n\tsch_resource_t used = 0;\n\tresource_count *rescts;\n\tint rc = 0;\n\n\tif (rr == NULL)\n\t\treturn (-1);\n\tif ((limres == NULL) || (rr->resreq == NULL))\n\t\treturn (0);\n\n\tgroup = rr->group;\n\n\tfor (res = limres; res != NULL; res = res->next) {\n\t\t/* If the job is not requesting the limit resource, it is not over its soft limit */\n\t\tif (find_resource_req(rr->resreq, res->def) == NULL)\n\t\t\tcontinue;\n\n\t\t/* individual group limit check */\n\t\tif ((groupreskey = entlim_mk_reskey(LIM_GROUP, group.c_str(),\n\t\t\t\t\t\t    res->name)) == NULL)\n\t\t\treturn (-1);\n\t\tmax_group_res_soft = lim_get(groupreskey, limitctx);\n\t\tfree(groupreskey);\n\n\t\t/* generic group limit check */\n\t\tif ((gengroupreskey = lim_gengroupreskey(res->name)) == NULL)\n\t\t\treturn (-1);\n\t\tmax_gengroup_res_soft = lim_get(gengroupreskey, limitctx);\n\t\tfree(gengroupreskey);\n\n\t\tif ((max_group_res_soft == SCHD_INFINITY) &&\n\t\t    (max_gengroup_res_soft == SCHD_INFINITY))\n\t\t\tcontinue;\n\n\t\trescts = NULL;\n\t\t/* at this point, we 
know a generic or individual limit is set */\n\t\tused = find_counts_elm(cts_list, group, res->def, NULL, &rescts);\n\t\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t\t   \"group %s max_*group_res_soft.%s (%.1lf, %.1lf), used %.1lf\",\n\t\t\t   group.c_str(), res->name, max_group_res_soft, max_gengroup_res_soft, used);\n\n\t\tif (max_group_res_soft != SCHD_INFINITY) {\n\t\t\tif (max_group_res_soft < used) {\n\t\t\t\tif (rescts != NULL)\n\t\t\t\t\trescts->soft_limit_preempt_bit = preempt_bit;\n\t\t\t\trc = preempt_bit;\n\t\t\t} else {\n\t\t\t\tif (rescts != NULL)\n\t\t\t\t\trescts->soft_limit_preempt_bit = 0;\n\t\t\t\tcontinue; /* ignore a generic limit */\n\t\t\t}\n\t\t} else if (max_gengroup_res_soft < used) {\n\t\t\tif (rescts != NULL)\n\t\t\t\trescts->soft_limit_preempt_bit = preempt_bit;\n\t\t\trc = preempt_bit;\n\t\t} else {\n\t\t\t/* usage is under generic group soft limit, reset the preempt bit */\n\t\t\tif (rescts != NULL)\n\t\t\t\trescts->soft_limit_preempt_bit = 0;\n\t\t}\n\t}\n\n\treturn (rc);\n}\n\n/**\n * @brief\n *\t\tcheck_max_user_res\tcheck to see whether the user can run a job\n *\t\t\t\tand still be within max resource limits\n *\n * @param[in]\trr\t-\tresource_resv to run\n * @param[in]\tcts_list\t-\tthe user counts list\n * @param[out]  rdef -\t\tresource definition of resource exceeding a limit\n * @param [in]\tlimitctx\t-\tthe limit storage context\n *\n * @return\tint\n * @retval\t0\t: if the user would be under or at its limits\n * @retval\t1\t: if a generic user limit would be exceeded\n * @retval\t2\t: if an individual user limit would be exceeded\n * @retval\t-1\t: on error\n */\nstatic int\ncheck_max_user_res(resource_resv *rr, counts_umap &cts_list, resdef **rdef,\n\t\t   void *limitctx)\n{\n\tchar *userreskey;\n\tchar *genuserreskey;\n\tstd::string user;\n\tschd_resource *res;\n\tsch_resource_t max_user_res;\n\tsch_resource_t max_genuser_res;\n\tsch_resource_t used = 0;\n\n\tif (rr == NULL)\n\t\treturn 
(-1);\n\tif ((limres == NULL) || (rr->resreq == NULL))\n\t\treturn (0);\n\n\tuser = rr->user;\n\n\tfor (res = limres; res != NULL; res = res->next) {\n\t\tresource_req *req;\n\n\t\tif ((req = find_resource_req(rr->resreq, res->def)) == NULL)\n\t\t\tcontinue;\n\n\t\t/* individual user limit check */\n\t\tif ((userreskey = entlim_mk_reskey(LIM_USER, user.c_str(),\n\t\t\t\t\t\t   res->name)) == NULL)\n\t\t\treturn (-1);\n\t\tmax_user_res = lim_get(userreskey, limitctx);\n\t\tfree(userreskey);\n\n\t\t/* generic user limit check */\n\t\tif ((genuserreskey = lim_genuserreskey(res->name)) == NULL)\n\t\t\treturn (-1);\n\t\tmax_genuser_res = lim_get(genuserreskey, limitctx);\n\t\tfree(genuserreskey);\n\n\t\tif ((max_user_res == SCHD_INFINITY) &&\n\t\t    (max_genuser_res == SCHD_INFINITY))\n\t\t\tcontinue;\n\n\t\t/* at this point, we know a generic or individual limit is set */\n\t\tused = find_counts_elm(cts_list, user, res->def, NULL, NULL);\n\n\t\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t\t   \"user %s max_*user_res.%s (%.1lf, %.1lf), used %.1lf\",\n\t\t\t   user.c_str(), res->name, max_user_res, max_genuser_res, used);\n\n\t\tif (max_user_res != SCHD_INFINITY) {\n\t\t\tif (used + req->amount > max_user_res) {\n\t\t\t\t*rdef = res->def;\n\t\t\t\treturn (2);\n\t\t\t} else\n\t\t\t\tcontinue; /* ignore a generic limit */\n\t\t} else if (used + req->amount > max_genuser_res) {\n\t\t\t*rdef = res->def;\n\t\t\treturn (1);\n\t\t}\n\t}\n\n\treturn (0);\n}\n\n/**\n * @brief\n *\t\tcheck_max_user_res_soft\t\tcheck to see whether the user can run a\n *\t\t\t\t\tresource resv and still be within max\n *\t\t\t\t\tuser resource limits\n *\n * @param[in]\trr_arr\t-\tresource_resv array to count\n * @param[in]\trr\t-\tresource_resv to run\n * @param[in]\tcts_list\t-\tthe user counts list\n * @param[in]\tlimitctx\t-\tthe limit storage context\n * @param[in]\tpreempt_bit\t-\tpreempt bit value to set and return if limit is exceeded\n *\n * @return\tint\n * 
@retval\t0\t: if the user would be under or at its soft limits\n * @retval\tpreempt_bit\t: if an individual or generic user soft limit is exceeded\n * @retval\t-1\t: on error\n */\nstatic int\ncheck_max_user_res_soft(resource_resv **rr_arr, resource_resv *rr,\n\t\t\tcounts_umap &cts_list, void *limitctx, int preempt_bit)\n{\n\tchar *userreskey;\n\tchar *genuserreskey;\n\tstd::string user;\n\tschd_resource *res;\n\tsch_resource_t max_user_res_soft;\n\tsch_resource_t max_genuser_res_soft;\n\tsch_resource_t used = 0;\n\tresource_count *rescts;\n\tint rc = 0;\n\n\tif (rr == NULL)\n\t\treturn (-1);\n\tif ((limres == NULL) || (rr->resreq == NULL))\n\t\treturn (0);\n\n\tuser = rr->user;\n\n\tfor (res = limres; res != NULL; res = res->next) {\n\t\t/* If the job is not requesting the limit resource, it is not over its soft limit */\n\t\tif (find_resource_req(rr->resreq, res->def) == NULL)\n\t\t\tcontinue;\n\n\t\t/* individual user limit check */\n\t\tif ((userreskey = entlim_mk_reskey(LIM_USER, user.c_str(),\n\t\t\t\t\t\t   res->name)) == NULL)\n\t\t\treturn (-1);\n\t\tmax_user_res_soft = lim_get(userreskey, limitctx);\n\t\tfree(userreskey);\n\n\t\t/* generic user limit check */\n\t\tif ((genuserreskey = lim_genuserreskey(res->name)) == NULL)\n\t\t\treturn (-1);\n\t\tmax_genuser_res_soft = lim_get(genuserreskey, limitctx);\n\t\tfree(genuserreskey);\n\n\t\tif ((max_user_res_soft == SCHD_INFINITY) &&\n\t\t    (max_genuser_res_soft == SCHD_INFINITY))\n\t\t\tcontinue;\n\n\t\trescts = NULL;\n\t\t/* at this point, we know a generic or individual limit is set */\n\t\tused = find_counts_elm(cts_list, user, res->def, NULL, &rescts);\n\n\t\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t\t   \"user %s max_*user_res_soft.%s (%.1lf, %.1lf), used %.1lf\",\n\t\t\t   user.c_str(), res->name, max_user_res_soft, max_genuser_res_soft, used);\n\n\t\tif (max_user_res_soft != SCHD_INFINITY) {\n\t\t\tif (max_user_res_soft < used) {\n\t\t\t\tif (rescts != NULL)\n\t\t\t\t\trescts->soft_limit_preempt_bit = 
preempt_bit;\n\t\t\t\trc = preempt_bit;\n\t\t\t} else {\n\t\t\t\tif (rescts != NULL)\n\t\t\t\t\trescts->soft_limit_preempt_bit = 0;\n\t\t\t\tcontinue; /* ignore a generic limit */\n\t\t\t}\n\t\t} else if (max_genuser_res_soft < used) {\n\t\t\tif (rescts != NULL)\n\t\t\t\trescts->soft_limit_preempt_bit = preempt_bit;\n\t\t\trc = preempt_bit;\n\t\t} else {\n\t\t\t/* usage is under generic user soft limit, reset the preempt bit */\n\t\t\tif (rescts != NULL)\n\t\t\t\trescts->soft_limit_preempt_bit = 0;\n\t\t}\n\t}\n\n\treturn (rc);\n}\n\n/**\n * @brief\n *\t\tlim_setreslimits\tset new-style resource limits\n * @par\n *\t\tFor the given attribute value and context, parse and set any limit\n *\t\tdirectives found therein.  We expect the attribute's value to be a\n *\t\tnumber.\n *\n * @param[in]\ta\t-\tpointer to the attribute, whose value is a new-style\n *\t\t\t\t\t\t\tlimit attribute\n * @param[in]\tctx\t-\tthe limit context into which the limits should be stored\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t1\t: failure\n */\nstatic int\nlim_setreslimits(const struct attrl *a, void *ctx)\n{\n\tschd_resource *r;\n\n\t/* remember resources that appear in a limit */\n\tr = find_alloc_resource_by_str(limres, a->resource);\n\tif (limres == NULL)\n\t\tlimres = r;\n\n\tif (entlim_parse(a->value, a->resource, ctx, lim_callback) == 0)\n\t\treturn (0);\n\telse {\n\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_SCHED, LOG_DEBUG, __func__,\n\t\t\t   \"entlim_parse(%s, %s) failed\", a->value, a->resource);\n\t\treturn (1);\n\t}\n}\n\n/**\n * @brief\n * \t\tfree and clear saved limit resources.  Must be called whenever\n *\t\tresource definitions are updated.\n *\n * @return void\n */\nvoid\nclear_limres(void)\n{\n\tfree_resource_list(limres);\n\tlimres = NULL;\n}\n\n/**\n *      @brief returns a linked list of resources being limited.\n *\n *      @par This is to be treated as read-only.  
Modifying this will adversely\n *           affect the limits code\n *\n *      @return schd_resource *\n */\nschd_resource *\nquery_limres(void)\n{\n\treturn limres;\n}\n\n/**\n * @brief\n *\t\tlim_setrunlimits\tset new-style run limits\n * @par\n *\t\tFor the given attribute value and context, parse and set any limit\n *\t\tdirectives found therein.  We expect the attribute's value to be a\n *\t\tnumber.\n *\n * @param[in]\ta\t-\tpointer to the attribute, whose value is a new-style\n *\t\t\t\t\t\tlimit attribute\n * @param [in]\tctx\t-\tthe limit context into which the limits should be stored\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t1\t: failure\n */\nstatic int\nlim_setrunlimits(const struct attrl *a, void *ctx)\n{\n\n\tif (entlim_parse(a->value, NULL, ctx, lim_callback) == 0)\n\t\treturn (0);\n\telse {\n\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_SCHED, LOG_DEBUG, __func__, \"entlim_parse(%s) failed\", a->value);\n\t\treturn (1);\n\t}\n}\n\n/**\n * @brief\n *\t\tlim_setoldlimits\tset old-style run limits\n * @par\n *\t\tFor the given attribute value and context, parse and set any limit\n *\t\tdirectives found therein.  We expect the attribute's value to be a\n *\t\tnumber.\n *\n * @param[in]\ta\t-\tpointer to the attribute, whose value is an old-style\n *\t\t\t\t\t\tlimit attribute\n * @param[in]\tctx\t-\tthe limit context into which the limits should be stored\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t1\t: failure\n * @retval\t-1\t: indicates an internal error (bad lim_param in old2new_soft[])\n */\nstatic int\nlim_setoldlimits(const struct attrl *a, void *ctx)\n{\n\tsize_t i;\n\tstruct lim_old2new *avalue = NULL;\n\tenum lim_keytypes kt;\n\tconst char *p;\n\tconst char *e;\n\n\t/* first try soft limits ... 
*/\n\tfor (i = 0; i < sizeof(old2new_soft) / sizeof(old2new_soft[0]); i++) {\n\t\tif (!strcmp(a->name, old2new_soft[i].lim_attr)) {\n\t\t\tavalue = &old2new_soft[i];\n\n\t\t\tp = avalue->lim_param;\n\t\t\tif (*p == 'g')\n\t\t\t\tkt = LIM_GROUP;\n\t\t\telse if (*p == 'o')\n\t\t\t\tkt = LIM_OVERALL;\n\t\t\telse if (*p == 'u')\n\t\t\t\tkt = LIM_USER;\n\t\t\telse\n\t\t\t\treturn (-1);\n\n\t\t\t/* e is PBS_GENERIC_ENTITY or PBS_ALL_ENTITY */\n\t\t\te = p + 2;\n\t\t\tif (avalue->lim_isreslim) {\n\t\t\t\tschd_resource *r;\n\n\t\t\t\t/* remember resources that appear in a limit */\n\t\t\t\tr = find_alloc_resource_by_str(limres, a->resource);\n\t\t\t\tif (limres == NULL)\n\t\t\t\t\tlimres = r;\n\n\t\t\t\treturn (lim_callback(LI2RESCTXSOFT(ctx),\n\t\t\t\t\t\t     kt, const_cast<char *>(avalue->lim_param),\n\t\t\t\t\t\t     const_cast<char *>(e),\n\t\t\t\t\t\t     a->resource, a->value));\n\t\t\t} else\n\t\t\t\treturn (lim_callback(LI2RUNCTXSOFT(ctx),\n\t\t\t\t\t\t     kt, const_cast<char *>(avalue->lim_param),\n\t\t\t\t\t\t     const_cast<char *>(e),\n\t\t\t\t\t\t     NULL, a->value));\n\t\t}\n\t}\n\n\t/* ... 
then hard */\n\tfor (i = 0; i < sizeof(old2new) / sizeof(old2new[0]); i++) {\n\t\tif (!strcmp(a->name, old2new[i].lim_attr)) {\n\t\t\tavalue = &old2new[i];\n\n\t\t\tp = avalue->lim_param;\n\t\t\tif (*p == 'g')\n\t\t\t\tkt = LIM_GROUP;\n\t\t\telse if (*p == 'o')\n\t\t\t\tkt = LIM_OVERALL;\n\t\t\telse if (*p == 'u')\n\t\t\t\tkt = LIM_USER;\n\t\t\telse\n\t\t\t\treturn (-1);\n\n\t\t\t/* e is PBS_GENERIC_ENTITY or PBS_ALL_ENTITY */\n\t\t\te = p + 2;\n\t\t\tif (avalue->lim_isreslim) {\n\t\t\t\tschd_resource *r;\n\n\t\t\t\t/* remember resources that appear in a limit */\n\t\t\t\tr = find_alloc_resource_by_str(limres, a->resource);\n\t\t\t\tif (limres == NULL)\n\t\t\t\t\tlimres = r;\n\n\t\t\t\treturn (lim_callback(LI2RESCTX(ctx),\n\t\t\t\t\t\t     kt, const_cast<char *>(avalue->lim_param),\n\t\t\t\t\t\t     const_cast<char *>(e),\n\t\t\t\t\t\t     a->resource,\n\t\t\t\t\t\t     a->value));\n\t\t\t} else\n\t\t\t\treturn (lim_callback(LI2RUNCTX(ctx),\n\t\t\t\t\t\t     kt, const_cast<char *>(avalue->lim_param),\n\t\t\t\t\t\t     const_cast<char *>(e),\n\t\t\t\t\t\t     a->resource,\n\t\t\t\t\t\t     a->value));\n\t\t}\n\t}\n\n\treturn (1); /* attribute name not found in translation table */\n}\n/**\n * @brief\n *\t\tlim_dup_ctx\tduplicate all entries in a limit storage context\n *\n * @param[in]\tctx\t-\tthe limit storage context\n *\n * @return\tvoid *\n * @retval\tthe newly-allocated storage context\t: on success\n * @retval\tNULL\t: on error\n */\nstatic void *\nlim_dup_ctx(void *ctx)\n{\n\tvoid *newctx;\n\tchar *key = NULL;\n\tchar *value = NULL;\n\n\tif ((newctx = entlim_initialize_ctx()) == NULL) {\n\t\tlog_err(errno, __func__, \"malloc failed\");\n\t\treturn (NULL);\n\t}\n\n\twhile ((value = static_cast<char *>(entlim_get_next(ctx, (void **) &key))) != NULL) {\n\t\tconst char *newval;\n\t\tif ((newval = strdup(value)) == NULL) {\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_SCHED, LOG_ERR, __func__, \"strdup value failed\");\n\t\t\t(void) entlim_free_ctx(newctx, 
free);\n\t\t\treturn NULL;\n\t\t} else if (entlim_add(key, newval, newctx) != 0) {\n\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_SCHED, LOG_ERR, __func__, \"entlim_add(%s) failed\", key);\n\t\t\t/*\n\t\t\t *\tOne might think that we should free newval\n\t\t\t *\ton error as well.  We don't, only because we\n\t\t\t *\tare uncertain whether the underlying entlim code\n\t\t\t *\tmight have remembered the location of the key's\n\t\t\t *\tvalue in spite of having returned failure indication.\n\t\t\t *\tWe choose to leak a small amount of memory (in\n\t\t\t *\twhat we hope to be rare circumstances) rather\n\t\t\t *\tthan have the scheduler crash when the system\n\t\t\t *\tmemory allocation code detects and aborts due\n\t\t\t *\tto twice-freed memory.\n\t\t\t */\n\t\t\t(void) entlim_free_ctx(newctx, free);\n\t\t\treturn NULL;\n\t\t}\n\t}\n\treturn newctx;\n}\n\n/**\n * @brief\n *\t\tis_hardlimit\tis the named attribute a new-style hard limit?\n *\n * @param[in]\ta\t-\tpointer to the attribute, whose value is a limit attribute\n *\n * @return\tint\n * @retval\t0\t: if the attrl pointer does not represent a hard limit\n * @retval\t1\t: if the attrl pointer represents a hard limit\n */\nstatic int\nis_hardlimit(const struct attrl *a)\n{\n\tif (!strcmp(a->name, ATTR_max_run) ||\n\t    !strcmp(a->name, ATTR_max_run_res))\n\t\treturn (1);\n\telse\n\t\treturn (0);\n}\n\n/**\n * @brief\n *\t\tlim_gengroupreskey\tspecial-purpose shortcut function to construct a\n *\t\t\t\tgeneric group resource key\n *\n * @param[in]\tres\t-\tthe resource\n *\n * @return\ta resource limit key if successful\n * @retval\tNULL\t: if not\n */\nstatic char *\nlim_gengroupreskey(const char *res)\n{\n\treturn (entlim_mk_reskey(LIM_GROUP, genparam, res));\n}\n\n/**\n * @brief\n *\t\tlim_genprojectreskey\tspecial-purpose shortcut function to construct a\n *\t\t\t\tgeneric project resource key\n *\n * @param[in]\tres\t-\tthe resource\n *\n * @return\ta resource limit key if successful\n * @retval\tNULL\t: 
if not\n */\nstatic char *\nlim_genprojectreskey(const char *res)\n{\n\treturn (entlim_mk_reskey(LIM_PROJECT, genparam, res));\n}\n\n/**\n * @brief\n *\t\tlim_genuserreskey\tspecial-purpose shortcut function to construct a\n *\t\t\t\tgeneric user resource key\n *\n * @param[in]\tres\t-\tthe resource\n *\n * @return\ta resource limit key on success\n * @retval\tNULL\t: on failure\n */\nstatic char *\nlim_genuserreskey(const char *res)\n{\n\treturn (entlim_mk_reskey(LIM_USER, genparam, res));\n}\n\n/**\n * @brief\n *\t\tlim_callback install a new key of the given type and value\n *\n * @param[in]\tctx\t\tthe limit storage context\n * @param[in]\tkt\t\tthe key type\n * @param[in]\tparam\t\tentity type (unused - see entlim_parse_one())\n * @param[in]\tnamestring\tentity name (see entlim_parse_one())\n * @param[in]\tres\t\tthe limit resource for which a limit is being set\n * @param[in]\tval\t\tthe value of the limit\n *\n * @return\tint\n * @retval\t0 if limit is successfully set\n * @retval\t-1 on error.\n */\nstatic int\nlim_callback(void *ctx, enum lim_keytypes kt, char *param, char *namestring,\n\t     char *res, char *val)\n{\n\tchar *key = NULL;\n\tchar *v = NULL;\n\n\tif (res != NULL)\n\t\tkey = entlim_mk_reskey(kt, namestring, res);\n\telse\n\t\tkey = entlim_mk_runkey(kt, namestring);\n\tif (key == NULL) {\n\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_SCHED, LOG_ERR, __func__,\n\t\t\t   \"key construction %d %s failed\", (int) kt, namestring);\n\t\treturn (-1);\n\t}\n\n\tif ((v = strdup(val)) == NULL) {\n\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_SCHED, LOG_ERR, __func__,\n\t\t\t   \"strdup %s %s %s failed\", key, res, val);\n\t\tfree(key);\n\t\treturn (-1);\n\t}\n\n\tif (entlim_add(key, v, ctx) != 0) {\n\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_SCHED, LOG_ERR, __func__,\n\t\t\t   \"limit set %s %s %s failed\", key, res, val);\n\t\tfree(v);\n\t\tfree(key);\n\t\treturn (-1);\n\t} else {\n\t\tif (res != NULL)\n\t\t\tlog_eventf(PBSEVENT_DEBUG4, 
PBS_EVENTCLASS_SCHED, LOG_DEBUG, __func__,\n\t\t\t\t   \"limit set %s %s %s\", key, res, val);\n\t\telse\n\t\t\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_SCHED, LOG_DEBUG, __func__,\n\t\t\t\t   \"limit set %s NULL %s\", key, val);\n\t\tfree(key);\n\t\treturn (0);\n\t}\n}\n\n/**\n * @brief\n *\t\tlim_get\tfetch a limit value\n *\n * @param[in]\tparam\t-\tthe requested limit\n * @param[in]\tctx\t-\tthe limit storage context\n *\n * @return\tsch_resource_t\n * @retval\tthe value of the limit, if no error occurs fetching it\n * @retval\tSCHD_INFINITY if no such limit exists in the named context\n */\nstatic sch_resource_t\nlim_get(const char *param, void *ctx)\n{\n\tchar *retptr;\n\n\tretptr = static_cast<char *>(entlim_get(param, ctx));\n\tif (retptr != NULL) {\n\t\tsch_resource_t v;\n\n\t\tv = res_to_num(retptr, NULL);\n\t\treturn (v);\n\t} else {\n\t\treturn (SCHD_INFINITY);\n\t}\n}\n\n/**\n * @brief\n *\t\tschderr_args_q\tset error arguments for a queue-related run limit exceeded message\n *\n * @param[in]\tqname\t-\tname of the queue\n * @param[in]\tentity\t-\tname of the group or user, or NULL if unneeded by fmt\n * @param[out]\terr\t-\tschd_error structure to return error information\n */\nstatic void\nschderr_args_q(const std::string &qname, const char *entity, schd_error *err)\n{\n\tset_schd_error_arg(err, ARG1, qname.c_str());\n\tif (entity != NULL)\n\t\tset_schd_error_arg(err, ARG2, entity);\n}\n// overloaded\nstatic void\nschderr_args_q(const std::string &qname, const std::string &entity, schd_error *err)\n{\n\tset_schd_error_arg(err, ARG1, qname.c_str());\n\tif (!entity.empty())\n\t\tset_schd_error_arg(err, ARG2, entity.c_str());\n}\n\n/**\n * @brief\n * \t\tschderr_args_q_res\tset error arguments for a queue-related resource limit exceeded message\n *\n * @param[in]\tqname\t-\tname of the queue\n * @param[in]\tentity\t-\tname of the group or user, or NULL if unneeded by fmt\n * @param[in]\tres\t-\tname of the resource\n * @param[out]\terr\t-\tschd_error structure to return error 
information\n */\nstatic void\nschderr_args_q_res(const std::string &qname, const char *entity, char *res,\n\t\t   schd_error *err)\n{\n\tset_schd_error_arg(err, ARG1, qname.c_str());\n\tset_schd_error_arg(err, ARG2, res);\n\tif (entity != NULL)\n\t\tset_schd_error_arg(err, ARG3, entity);\n}\n// overloaded\nstatic void\nschderr_args_q_res(const std::string &qname, const std::string &entity, char *res,\n\t\t   schd_error *err)\n{\n\tset_schd_error_arg(err, ARG1, qname.c_str());\n\tset_schd_error_arg(err, ARG2, res);\n\tif (!entity.empty())\n\t\tset_schd_error_arg(err, ARG3, entity.c_str());\n}\n\n/**\n * @brief\n *\t\tschderr_args_server\tset error arguments for a server-related run limit exceeded message\n *\n * @param[in]\tentity\t-\tname of the group or user, or NULL if unneeded\n * @param[out]\terr\t-\tschd_error structure to return error information\n */\nstatic void\nschderr_args_server(const char *entity, schd_error *err)\n{\n\tif (entity != NULL)\n\t\tset_schd_error_arg(err, ARG1, entity);\n}\n// overloaded\nstatic void\nschderr_args_server(const std::string &entity, schd_error *err)\n{\n\tset_schd_error_arg(err, ARG1, entity.c_str());\n}\n\n/**\n * @brief\n *\t\tschderr_args_server_res\tset error arguments for a server-related resource\n *\t\t\t\tlimit exceeded message\n *\n * @param[in]\tentity\t-\tname of the group or user, or NULL if unneeded by fmt\n * @param[in]\tres\t-\tname of the resource\n * @param[out]\terr\t-\tschd_error structure to return error information\n */\nstatic void\nschderr_args_server_res(std::string &entity, const char *res, schd_error *err)\n{\n\tset_schd_error_arg(err, ARG1, res);\n\tif (!entity.empty())\n\t\tset_schd_error_arg(err, ARG2, entity.c_str());\n}\n\n/**\n * @brief\n *\t\tcheck_max_project_res\tcheck to see whether the user can run a\n *\t\t\t\tresource resv and still be within project max\n *\t\t\t\tresource limits\n *\n * @param[in]\trr\t-\tresource_resv to run\n * @param[in]\tcts_list\t-\tthe user counts list\n * @param[out]\trdef\t-\tresource definition of resource exceeding a 
limit\n * @param[in]\tlimitctx\t-\tthe limit storage context\n *\n * @return\tint\n * @retval\t0\t: if the project would be under or at its limits\n * @retval\t1\t: if a generic project limit would be exceeded\n * @retval\t2\t: if an individual project limit would be exceeded\n * @retval\t-1\t: on error\n */\nstatic int\ncheck_max_project_res(resource_resv *rr, counts_umap &cts_list,\n\t\t      resdef **rdef, void *limitctx)\n{\n\tchar *projectreskey;\n\tchar *genprojectreskey;\n\tschd_resource *res;\n\tstd::string project;\n\tsch_resource_t max_project_res;\n\tsch_resource_t max_genproject_res;\n\tsch_resource_t used = 0;\n\n\tif (rr == NULL)\n\t\treturn (-1);\n\tif ((limres == NULL) || (rr->resreq == NULL) || (rr->project.empty()))\n\t\treturn (0);\n\n\tproject = rr->project;\n\tfor (res = limres; res != NULL; res = res->next) {\n\t\tresource_req *req;\n\t\tif ((req = find_resource_req(rr->resreq, res->def)) == NULL)\n\t\t\tcontinue;\n\n\t\t/* individual project limit check */\n\t\tif ((projectreskey = entlim_mk_reskey(LIM_PROJECT,\n\t\t\t\t\t\t      project.c_str(), res->name)) == NULL)\n\t\t\treturn (-1);\n\t\tmax_project_res = lim_get(projectreskey, limitctx);\n\t\tfree(projectreskey);\n\n\t\t/* generic project limit check */\n\t\tif ((genprojectreskey = lim_genprojectreskey(res->name)) == NULL)\n\t\t\treturn (-1);\n\t\tmax_genproject_res = lim_get(genprojectreskey, limitctx);\n\t\tfree(genprojectreskey);\n\n\t\tif ((max_project_res == SCHD_INFINITY) &&\n\t\t    (max_genproject_res == SCHD_INFINITY))\n\t\t\tcontinue;\n\n\t\t/* at this point, we know a generic or individual limit is set */\n\t\tused = find_counts_elm(cts_list, project, res->def, NULL, NULL);\n\n\t\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t\t   \"project %s max_*project_res.%s (%.1lf, %.1lf), used %.1lf\",\n\t\t\t   project.c_str(), res->name, max_project_res, max_genproject_res, used);\n\n\t\tif (max_project_res != SCHD_INFINITY) {\n\t\t\tif (used + req->amount 
> max_project_res) {\n\t\t\t\t*rdef = res->def;\n\t\t\t\treturn (2);\n\t\t\t} else\n\t\t\t\tcontinue; /* ignore a generic limit */\n\t\t} else if (used + req->amount > max_genproject_res) {\n\t\t\t*rdef = res->def;\n\t\t\treturn (1);\n\t\t}\n\t}\n\n\treturn (0);\n}\n\n/**\n * @brief\n *\t\tcheck_max_project_res_soft\tcheck to see whether the user can run a\n *\t\t\t\t\tresource resv and still be within project\n *\t\t\t\t\tmax resource soft limits\n *\n * @param[in]\trr\t-\tresource_resv to run\n * @param[in]\tcts_list\t-\tthe user counts list\n * @param[in]\tlimitctx\t-\tthe limit storage context\n * @param[in]\tpreempt_bit\t-\tpreempt bit value to set and return if limit is exceeded\n *\n * @return\tint\n * @retval\t0\t: if the project would be under or at its limits\n * @retval\t1\t: if a generic or individual project limit would be exceeded\n * @retval\t-1\t: on error\n */\nstatic int\ncheck_max_project_res_soft(resource_resv *rr, counts_umap &cts_list, void *limitctx, int preempt_bit)\n{\n\tchar *projectreskey;\n\tchar *genprojectreskey;\n\tstd::string project;\n\tschd_resource *res;\n\tsch_resource_t max_project_res_soft;\n\tsch_resource_t max_genproject_res_soft;\n\tsch_resource_t used = 0;\n\tresource_count *rescts;\n\tint rc = 0;\n\n\tif (rr == NULL)\n\t\treturn (-1);\n\tif ((limres == NULL) || (rr->resreq == NULL) || (rr->project.empty()))\n\t\treturn (0);\n\n\tproject = rr->project;\n\tfor (res = limres; res != NULL; res = res->next) {\n\t\t/* If the job is not requesting the limit resource, it is not over its soft limit*/\n\t\tif (find_resource_req(rr->resreq, res->def) == NULL)\n\t\t\tcontinue;\n\n\t\t/* individual project limit check */\n\t\tif ((projectreskey = entlim_mk_reskey(LIM_PROJECT, project.c_str(),\n\t\t\t\t\t\t      res->name)) == NULL)\n\t\t\treturn (-1);\n\t\tmax_project_res_soft = lim_get(projectreskey, limitctx);\n\t\tfree(projectreskey);\n\n\t\t/* generic project limit check */\n\t\tif ((genprojectreskey = 
lim_genprojectreskey(res->name)) == NULL)\n\t\t\treturn (-1);\n\t\tmax_genproject_res_soft = lim_get(genprojectreskey, limitctx);\n\t\tfree(genprojectreskey);\n\n\t\tif ((max_project_res_soft == SCHD_INFINITY) &&\n\t\t    (max_genproject_res_soft == SCHD_INFINITY))\n\t\t\tcontinue;\n\n\t\trescts = NULL;\n\t\t/* at this point, we know a generic or individual limit is set */\n\t\tused = find_counts_elm(cts_list, project, res->def, NULL, &rescts);\n\t\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t\t   \"project %s max_*project_res_soft.%s (%.1lf, %.1lf), used %.1lf\",\n\t\t\t   project.c_str(), res->name, max_project_res_soft, max_genproject_res_soft, used);\n\n\t\tif (max_project_res_soft != SCHD_INFINITY) {\n\t\t\tif (max_project_res_soft < used) {\n\t\t\t\tif (rescts != NULL)\n\t\t\t\t\trescts->soft_limit_preempt_bit = preempt_bit;\n\t\t\t\trc = preempt_bit;\n\t\t\t} else {\n\t\t\t\tif (rescts != NULL)\n\t\t\t\t\trescts->soft_limit_preempt_bit = 0;\n\t\t\t\tcontinue; /* ignore a generic limit */\n\t\t\t}\n\t\t} else if (max_genproject_res_soft < used) {\n\t\t\tif (rescts != NULL)\n\t\t\t\trescts->soft_limit_preempt_bit = preempt_bit;\n\t\t\trc = preempt_bit;\n\t\t} else {\n\t\t\t/* usage is under generic project soft limit, reset the preempt bit */\n\t\t\tif (rescts != NULL)\n\t\t\t\trescts->soft_limit_preempt_bit = 0;\n\t\t}\n\t}\n\n\treturn (rc);\n}\n\n/**\n * @brief\n *\t\tcheck_server_max_project_res\thard limit checking function for\n *\t\t\t\t\tproject server resource limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n * @param[in]\tsc\t-\tlimcounts struct for server count/total_count maxes over job run\n * @param[in]\tqc\t-\tlimcounts struct for queue count/total_count maxes over job run\n * @param[out]\terr\t-\tschd_error structure to return error 
information\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tsched_error_code enum\t: if limit is exceeded\n * @retval\tSCHD_ERROR\t: on error\n *\n * @see\t#sched_error_code in constant.h\n */\nstatic int\ncheck_server_max_project_res(server_info *si, queue_info *qi, resource_resv *rr,\n\t\t\t     limcounts *sc, limcounts *qc, schd_error *err)\n{\n\tint ret;\n\tresdef *rdef = NULL;\n\n\tif ((si == NULL) || (rr == NULL) || (sc == NULL))\n\t\treturn (SCHD_ERROR);\n\n\tif (rr->project.empty())\n\t\treturn 0;\n\n\tif (!si->has_proj_limit)\n\t\treturn (0);\n\n\tauto &cts = sc->project;\n\n\tret = check_max_project_res(rr, cts,\n\t\t\t\t    &rdef, LI2RESCTX(si->liminfo));\n\tif (ret != 0) {\n\t\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t\t   \"check_max_project_res returned %d\", ret);\n\t}\n\tswitch (ret) {\n\t\tdefault:\n\t\tcase -1:\n\t\t\treturn (SCHD_ERROR);\n\t\tcase 0:\n\t\t\treturn (0);\n\t\tcase 1: /* generic project limit exceeded */\n\t\t\terr->rdef = rdef;\n\t\t\treturn (SERVER_PROJECT_RES_LIMIT_REACHED);\n\t\tcase 2: /* individual project limit exceeded */\n\t\t\tschderr_args_server_res(rr->project, NULL, err);\n\t\t\terr->rdef = rdef;\n\t\t\treturn (SERVER_BYPROJECT_RES_LIMIT_REACHED);\n\t}\n}\n\n/**\n * @brief\n *\t\tcheck_server_max_project_run_soft\tsoft limit checking function for\n *\t\t\t\t\tproject server run limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tPREEMPT_TO_BIT(preempt enum)\t: if limit is exceeded\n * @retval\tPREEMPT_TO_BIT(PREEMPT_ERR)\t: on error\n *\n * @see\t#preempt enum in constant.h\n */\nstatic 
int\ncheck_server_max_project_run_soft(server_info *si, queue_info *qi,\n\t\t\t\t  resource_resv *rr)\n{\n\tchar *key;\n\tstd::string project;\n\tint used;\n\tint max_project_run_soft, max_genproject_run_soft;\n\tcounts *cnt = NULL;\n\n\tif (si == NULL || rr == NULL)\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\n\tif (rr->project.empty())\n\t\treturn 0;\n\n\tif (!si->has_proj_limit)\n\t\treturn (0);\n\n\tproject = rr->project;\n\tif ((key = entlim_mk_runkey(LIM_PROJECT, project.c_str())) == NULL)\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\tmax_project_run_soft = (int) lim_get(key, LI2RUNCTXSOFT(si->liminfo));\n\tfree(key);\n\n\tif ((key = entlim_mk_runkey(LIM_PROJECT, genparam)) == NULL)\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\tmax_genproject_run_soft = (int) lim_get(key, LI2RUNCTXSOFT(si->liminfo));\n\tfree(key);\n\n\tif ((max_project_run_soft == SCHD_INFINITY) &&\n\t    (max_genproject_run_soft == SCHD_INFINITY))\n\t\treturn (0);\n\n\t/* at this point, we know a generic or individual limit is set */\n\tused = find_counts_elm(si->project_counts, project, NULL, &cnt, NULL);\n\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t   \"project %s max_*project_run_soft (%d, %d), used %d\",\n\t\t   project.c_str(), max_project_run_soft, max_genproject_run_soft, used);\n\n\tif (max_project_run_soft != SCHD_INFINITY) {\n\t\tif (max_project_run_soft < used) {\n\t\t\tif (cnt != NULL)\n\t\t\t\tcnt->soft_limit_preempt_bit = PREEMPT_TO_BIT(PREEMPT_OVER_SERVER_LIMIT);\n\t\t\treturn (PREEMPT_TO_BIT(PREEMPT_OVER_SERVER_LIMIT));\n\t\t} else {\n\t\t\tif (cnt != NULL)\n\t\t\t\tcnt->soft_limit_preempt_bit = 0;\n\t\t\treturn (0); /* ignore a generic limit */\n\t\t}\n\t} else if (max_genproject_run_soft < used) {\n\t\tif (cnt != NULL)\n\t\t\tcnt->soft_limit_preempt_bit = PREEMPT_TO_BIT(PREEMPT_OVER_SERVER_LIMIT);\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_OVER_SERVER_LIMIT));\n\t} else {\n\t\tif (cnt != NULL)\n\t\t\tcnt->soft_limit_preempt_bit = 0;\n\t\treturn 
(0);\n\t}\n}\n\n/**\n * @brief\n *\t\tcheck_server_max_project_res_soft\tsoft limit checking function for\n *\t\t\t\t\tproject server resource limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tPREEMPT_TO_BIT(preempt enum)\t: if limit is exceeded\n * @retval\tPREEMPT_TO_BIT(PREEMPT_ERR)\t: on error\n *\n * @see\t#preempt enum in constant.h\n */\nstatic int\ncheck_server_max_project_res_soft(server_info *si, queue_info *qi,\n\t\t\t\t  resource_resv *rr)\n{\n\tif ((si == NULL) || (rr == NULL))\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\n\tif (!si->has_proj_limit)\n\t\treturn (0);\n\n\treturn (check_max_project_res_soft(rr, si->project_counts,\n\t\t\t\t\t   LI2RESCTXSOFT(si->liminfo), PREEMPT_TO_BIT(PREEMPT_OVER_SERVER_LIMIT)));\n}\n\n/**\n * @brief\n *\t\tcheck_queue_max_project_res\thard limit checking function for\n *\t\t\t\t\tproject queue resource limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n * @param[in]\tsc\t-\tlimcounts struct for server count/total_count maxes over job run\n * @param[in]\tqc\t-\tlimcounts struct for queue count/total_count maxes over job run\n * @param[out]\terr\t-\tschd_error structure to return error information\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tsched_error_code enum\t: if limit is exceeded\n * @retval\tSCHD_ERROR\t: on error\n *\n * @see\t#sched_error_code in constant.h\n */\nstatic int\ncheck_queue_max_project_res(server_info *si, queue_info *qi, 
resource_resv *rr,\n\t\t\t    limcounts *sc, limcounts *qc, schd_error *err)\n{\n\tint ret;\n\tresdef *rdef = NULL;\n\n\tif ((qi == NULL) || (rr == NULL) || (qc == NULL))\n\t\treturn (SCHD_ERROR);\n\n\tif (rr->project.empty())\n\t\treturn 0;\n\n\tif (!qi->has_proj_limit)\n\t\treturn (0);\n\n\tauto &cts = qc->project;\n\n\tret = check_max_project_res(rr, cts, &rdef, LI2RESCTX(qi->liminfo));\n\tif (ret != 0)\n\t\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t\t   \"check_max_project_res returned %d\", ret);\n\tswitch (ret) {\n\t\tdefault:\n\t\tcase -1:\n\t\t\treturn (SCHD_ERROR);\n\t\tcase 0:\n\t\t\treturn (0);\n\t\tcase 1: /* generic project limit exceeded */\n\t\t\tschderr_args_q_res(qi->name, NULL, NULL, err);\n\t\t\terr->rdef = rdef;\n\t\t\treturn (QUEUE_PROJECT_RES_LIMIT_REACHED);\n\t\tcase 2: /* individual project limit exceeded */\n\t\t\tschderr_args_q_res(qi->name, rr->project, NULL, err);\n\t\t\terr->rdef = rdef;\n\t\t\treturn (QUEUE_BYPROJECT_RES_LIMIT_REACHED);\n\t}\n}\n\n/**\n * @brief\n *\t\tcheck_queue_max_project_run_soft\tsoft limit checking function for\n *\t\t\t\t\tproject queue run limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tPREEMPT_TO_BIT(preempt enum)\t: if limit is exceeded\n * @retval\tPREEMPT_TO_BIT(PREEMPT_ERR)\t: on error\n *\n * @see\t#preempt enum in constant.h\n */\nstatic int\ncheck_queue_max_project_run_soft(server_info *si, queue_info *qi,\n\t\t\t\t resource_resv *rr)\n{\n\tchar *key;\n\tstd::string project;\n\tint used;\n\tint max_project_run_soft, max_genproject_run_soft;\n\tcounts *cnt = NULL;\n\n\tif ((qi == NULL) || (rr == NULL))\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\n\tif 
(rr->project.empty())\n\t\treturn 0;\n\n\tif (!qi->has_proj_limit)\n\t\treturn (0);\n\n\tproject = rr->project;\n\tif ((key = entlim_mk_runkey(LIM_PROJECT, project.c_str())) == NULL)\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\tmax_project_run_soft = (int) lim_get(key, LI2RUNCTXSOFT(qi->liminfo));\n\tfree(key);\n\n\tif ((key = entlim_mk_runkey(LIM_PROJECT, genparam)) == NULL)\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\tmax_genproject_run_soft = (int) lim_get(key, LI2RUNCTXSOFT(qi->liminfo));\n\tfree(key);\n\n\tif ((max_project_run_soft == SCHD_INFINITY) &&\n\t    (max_genproject_run_soft == SCHD_INFINITY))\n\t\treturn (0);\n\n\tused = find_counts_elm(qi->project_counts, project, NULL, &cnt, NULL);\n\n\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t   \"project %s max_*project_run_soft (%d, %d), used %d\",\n\t\t   project.c_str(), max_project_run_soft, max_genproject_run_soft, used);\n\n\tif (max_project_run_soft != SCHD_INFINITY) {\n\t\tif (max_project_run_soft < used) {\n\t\t\tif (cnt != NULL)\n\t\t\t\tcnt->soft_limit_preempt_bit = PREEMPT_TO_BIT(PREEMPT_OVER_QUEUE_LIMIT);\n\t\t\treturn (PREEMPT_TO_BIT(PREEMPT_OVER_QUEUE_LIMIT));\n\t\t} else {\n\t\t\tif (cnt != NULL)\n\t\t\t\tcnt->soft_limit_preempt_bit = 0;\n\t\t\treturn (0); /* ignore a generic limit */\n\t\t}\n\t} else if (max_genproject_run_soft < used) {\n\t\tif (cnt != NULL)\n\t\t\tcnt->soft_limit_preempt_bit = PREEMPT_TO_BIT(PREEMPT_OVER_QUEUE_LIMIT);\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_OVER_QUEUE_LIMIT));\n\t} else {\n\t\tif (cnt != NULL)\n\t\t\tcnt->soft_limit_preempt_bit = 0;\n\t\treturn (0);\n\t}\n}\n\n/**\n * @brief\n *\t\tcheck_queue_max_project_res_soft\tsoft limit checking function for\n *\t\t\t\t\tproject queue resource limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n *\n * 
@return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tPREEMPT_TO_BIT(preempt enum)\t: if limit is exceeded\n * @retval\tPREEMPT_TO_BIT(PREEMPT_ERR)\t: on error\n *\n * @see\t#preempt enum in constant.h\n */\nstatic int\ncheck_queue_max_project_res_soft(server_info *si, queue_info *qi,\n\t\t\t\t resource_resv *rr)\n{\n\tif ((qi == NULL) || (rr == NULL))\n\t\treturn (PREEMPT_TO_BIT(PREEMPT_ERR));\n\n\tif (!qi->has_proj_limit)\n\t\treturn (0);\n\n\treturn (check_max_project_res_soft(rr, qi->project_counts,\n\t\t\t\t\t   LI2RESCTXSOFT(qi->liminfo), PREEMPT_TO_BIT(PREEMPT_OVER_QUEUE_LIMIT)));\n}\n\n/**\n * @brief\n *\t\tcheck_server_max_project_run\thard limit checking function for\n *\t\t\t\t\tproject server resource limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n * @param[in]\tsc\t-\tlimcounts struct for server count/total_count maxes over job run\n * @param[in]\tqc\t-\tlimcounts struct for queue count/total_count maxes over job run\n * @param[out]\terr\t-\tschd_error structure to return error information\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tsched_error_code enum\t: if limit is exceeded\n * @retval\tSCHD_ERROR\t: on error\n *\n * @see\t#sched_error_code in constant.h\n */\nstatic int\ncheck_server_max_project_run(server_info *si, queue_info *qi, resource_resv *rr,\n\t\t\t     limcounts *sc, limcounts *qc, schd_error *err)\n{\n\tchar *key;\n\tstd::string project;\n\tint used;\n\tint max_project_run, max_genproject_run;\n\n\tif ((si == NULL) || (rr == NULL) || (sc == NULL))\n\t\treturn (SCHD_ERROR);\n\n\tauto &cts = sc->project;\n\n\tif (rr->project.empty())\n\t\treturn 0;\n\n\tif (!si->has_proj_limit)\n\t\treturn 
(0);\n\n\tproject = rr->project;\n\tif ((key = entlim_mk_runkey(LIM_PROJECT, project.c_str())) == NULL)\n\t\treturn (SCHD_ERROR);\n\tmax_project_run = (int) lim_get(key, LI2RUNCTX(si->liminfo));\n\tfree(key);\n\n\tif ((key = entlim_mk_runkey(LIM_PROJECT, genparam)) == NULL)\n\t\treturn (SCHD_ERROR);\n\tmax_genproject_run = (int) lim_get(key, LI2RUNCTX(si->liminfo));\n\tfree(key);\n\n\tif ((max_project_run == SCHD_INFINITY) &&\n\t    (max_genproject_run == SCHD_INFINITY))\n\t\treturn (0);\n\n\t/* at this point, we know a generic or individual limit is set */\n\tused = find_counts_elm(cts, project, NULL, NULL, NULL);\n\n\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t   \"project %s max_*project_run (%d, %d), used %d\",\n\t\t   project.c_str(), max_project_run, max_genproject_run, used);\n\n\tif (max_project_run != SCHD_INFINITY) {\n\t\tif (max_project_run <= used) {\n\t\t\tschderr_args_server(project, err);\n\t\t\treturn (SERVER_BYPROJECT_JOB_LIMIT_REACHED);\n\t\t} else\n\t\t\treturn (0); /* ignore a generic limit */\n\t} else if (max_genproject_run <= used) {\n\t\tschderr_args_server(NULL, err);\n\t\treturn (SERVER_PROJECT_LIMIT_REACHED);\n\t} else\n\t\treturn (0);\n}\n\n/**\n * @brief\n *\t\tcheck_queue_max_project_run\thard limit checking function for\n *\t\t\t\t\tproject queue run limits\n *\n * @param[in]\tsi\t-\tserver_info structure to use for limit evaluation\n * @param[in]\tqi\t-\tqueue_info structure to use for limit evaluation\n * @param[in]\trr\t-\tresource_resv structure to use for limit evaluation\n * @param[in]\tsc\t-\tlimcounts struct for server count/total_count maxes over job run\n * @param[in]\tqc\t-\tlimcounts struct for queue count/total_count maxes over job run\n * @param[out]\terr\t-\tschd_error structure to return error information\n *\n * @return\tinteger indicating failing limit test if limit is exceeded\n * @retval\t0\t: if limit is not exceeded\n * @retval\tsched_error_code enum\t: if limit is exceeded\n * 
@retval\tSCHD_ERROR\t: on error\n *\n * @see\t#sched_error_code in constant.h\n */\nstatic int\ncheck_queue_max_project_run(server_info *si, queue_info *qi, resource_resv *rr,\n\t\t\t    limcounts *sc, limcounts *qc, schd_error *err)\n{\n\tchar *key;\n\tstd::string project;\n\tint used;\n\tint max_project_run, max_genproject_run;\n\n\tif (qi == NULL || (rr == NULL) || (qc == NULL))\n\t\treturn (SCHD_ERROR);\n\n\tauto &cts = qc->project;\n\n\tproject = rr->project;\n\tif (project.empty())\n\t\treturn 0;\n\n\tif (!qi->has_proj_limit)\n\t\treturn (0);\n\n\tif ((key = entlim_mk_runkey(LIM_PROJECT, project.c_str())) == NULL)\n\t\treturn (SCHD_ERROR);\n\tmax_project_run = (int) lim_get(key, LI2RUNCTX(qi->liminfo));\n\tfree(key);\n\n\tif ((key = entlim_mk_runkey(LIM_PROJECT, genparam)) == NULL)\n\t\treturn (SCHD_ERROR);\n\tmax_genproject_run = (int) lim_get(key, LI2RUNCTX(qi->liminfo));\n\tfree(key);\n\n\tif ((max_project_run == SCHD_INFINITY) &&\n\t    (max_genproject_run == SCHD_INFINITY))\n\t\treturn (0);\n\n\t/* at this point, we know a generic or individual limit is set */\n\tused = find_counts_elm(cts, project, NULL, NULL, NULL);\n\n\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, rr->name,\n\t\t   \"project %s max_*project_run (%d, %d), used %d\",\n\t\t   project.c_str(), max_project_run, max_genproject_run, used);\n\n\tif (max_project_run != SCHD_INFINITY) {\n\t\tif (max_project_run <= used) {\n\t\t\tschderr_args_q(qi->name, project, err);\n\t\t\treturn (QUEUE_BYPROJECT_JOB_LIMIT_REACHED);\n\t\t} else\n\t\t\treturn (0); /* ignore a generic limit */\n\t} else if (max_genproject_run <= used) {\n\t\tschderr_args_q(qi->name, NULL, err);\n\t\treturn (QUEUE_PROJECT_LIMIT_REACHED);\n\t} else\n\t\treturn (0);\n}\n"
  },
  {
    "path": "src/scheduler/limits_if.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _LIMITS_IF_H\n#define _LIMITS_IF_H\n#include \"pbs_ifl.h\"\n#include \"pbs_entlim.h\"\n\nenum limtype {\n\tLIM_RES, /* new-style resource limit */\n\tLIM_RUN, /* new-style run (i.e. 
job count) limit */\n\tLIM_OLD\t /* old-style run limit */\n};\n\n/**\t@fn void *lim_alloc_liminfo(void)\n *\t@brief\tallocate and return storage for recording limit information\n *\n *\t@par MT-safe:\tNo\n */\nvoid *lim_alloc_liminfo(void);\n\n/**\t@fn void *lim_dup_liminfo(void *p)\n *\t@brief\tduplicate limit information allocated by lim_alloc_liminfo()\n *\n *\t@param\tp\tthe data to be cloned\n *\n *\t@return\t\tpointer to cloned data on success or NULL on failure\n *\n *\t@par MT-safe:\tNo\n */\nvoid *lim_dup_liminfo(void *);\n\n/**\t@fn void lim_free_liminfo(void *p)\n *\t@brief\tfree limit information allocated by lim_alloc_liminfo()\n *\n *\t@param\tp\tthe data to be freed\n *\n *\t@par MT-safe:\tNo\n */\nvoid lim_free_liminfo(void *);\n\n/**\t@fn int has_hardlimits(void *p)\n *\t@brief\tare any hard limits set?\n *\n *\t@param p\tthe limit storage to test\n *\n *\t@retval 0\tno hard limits set\n *\t@retval 1\tat least one hard limit set\n *\n *\t@par MT-safe:\tNo\n */\nint has_hardlimits(void *);\n\n/**\t@fn int has_softlimits(void *p)\n *\t@brief\tare any soft limits set?\n *\n *\t@param p\tthe limit storage to test\n *\n *\t@retval 0\tno soft limits set\n *\t@retval 1\tat least one soft limit set\n *\n *\t@par MT-safe:\tNo\n */\nint has_softlimits(void *);\n\n/**\t@fn int is_reslimattr(const struct attrl *a)\n *\t@brief\tis the given attribute a new-style resource limit attribute?\n *\n *\t@param a\tpointer to the attribute\n *\n *\t@retval 0\tthe named attribute is not a new-style resource limit\n *\t@retval 1\tthe named attribute is a new-style resource limit\n *\n *\t@par MT-safe:\tYes\n */\nint is_reslimattr(const struct attrl *);\n\n/**\t@fn int is_runlimattr(const struct attrl *a)\n *\t@brief\tis the given attribute a new-style run limit attribute?\n *\n *\t@param a\tpointer to the attribute\n *\n *\t@retval 0\tthe named attribute is not a new-style run limit\n *\t@retval 1\tthe named attribute is a new-style run limit\n *\n *\t@par 
MT-safe:\tYes\n */\nint is_runlimattr(const struct attrl *);\n\n/**\t@fn int is_oldlimattr(const struct attrl *a)\n *\t@brief\tis the given attribute an old-style limit attribute?\n *\n *\t@param a\tpointer to the attribute\n *\n *\t@retval 0\tthe named attribute is not an old-style limit attribute\n *\t@retval 1\tthe named attribute is an old-style limit attribute\n *\n *\t@par MT-safe:\tYes\n */\nint is_oldlimattr(const struct attrl *);\n\n/**\n * @brief\n * \t\tconvert an old limit attribute name to the new one\n *\n * @param[in]\ta\t-\tattribute list structure\n *\n * @return char *\n * @retval !NULL\t: old limit attribute name\n * @retval NULL\t\t: attribute value is not an old limit attribute\n *\n */\nconst char *convert_oldlim_to_new(const struct attrl *a);\n\n/**\t@fn int lim_setlimits(const struct attrl *a, enum limtype lt, void *p)\n *\t@brief set resource or run-time limits\n *\n *\tFor the given attribute value and limit data, parse and set any limit\n *\tdirectives found therein.\n *\n *\t@param a\tpointer to the attribute, whose value is a new-style\n *\t\t\tparam and may contain multiple limits to set\n *\t@param lt\tthe class of limit being set\n *\t@param p\tthe place to store limit information\n *\n *\t@retval 0\tlimits were successfully set\n *\t@retval nonzero\tlimits were not successfully set\n *\n *\t@par MT-safe:\tNo\n */\nint lim_setlimits(const struct attrl *, enum limtype, void *);\n\n/**\t@fn int check_limits(server_info *si, queue_info *qi, resource_resv *rr,\n *                          schd_error *err, int mode)\n *\t@brief\tcheck all run-time hard limits to see whether a job may run\n *\n *\t@param si\tserver_info structure (for server resources, list of\n *\t\t\trunning jobs, limit information, ...)\n *\t@param qi\tqueue_info structure (for server resources, list of\n *\t\t\trunning jobs, limit information, ...)\n *\t@param rr\tresource_resv structure (array of assigned resources to\n *\t\t\tcount against resource limits, group/user 
name, ...)\n *\t@param err\tschd_error structure to return error information\n *\n *\t@param mode specifies the mode in which limits need to be checked\n *\n *\t@retval 0\tjob exceeds no run-time hard limits\n *\t@retval nonzero\tjob exceeds at least one run-time hard limit\n *\n *\t@par MT-safe:\tNo\n */\nint check_limits(server_info *, queue_info *, resource_resv *,\n\t\t schd_error *, unsigned int);\n\n/**\t@fn int check_soft_limits(server_info *si, queue_info *qi, resource_resv *rr)\n *\t@brief\tcheck to see whether a job exceeds its run-time soft limits\n *\n *\t@param si\tserver_info structure (for server resources, list of\n *\t\t\trunning jobs, limit information, ...)\n *\t@param qi\tqueue_info structure (for server resources, list of\n *\t\t\trunning jobs, limit information, ...)\n *\t@param rr\tresource_resv structure (array of assigned resources to\n *\t\t\tcount against resource limits, group/user name, ...)\n *\n *\t@retval 0\tjob exceeds no run-time soft limits\n *\t@retval PREEMPT_TO_BIT(enum preempt) otherwise\n *\n *\t@par MT-safe:\tNo\n *\n *\t@see\t\t#preempt in constant.h\n */\nint check_soft_limits(server_info *, queue_info *, resource_resv *);\n\n/**\n *      @fn void clear_limres()\n *      @brief free and clear saved limit resources.  Must be called whenever\n *             resource definitions are updated.\n *      @return void\n */\nvoid clear_limres(void);\n\n/**\n * \t@fn schd_resource query_limres()\n * \t@brief returns a linked list of resources being limited.\n * \t@par This is to be treated as read-only.  
Modifying this will adversely\n * \t     affect the limits code\n *\n * \t@return schd_resource *\n */\nschd_resource *query_limres(void);\n\n/**\n  *  @brief check the soft limit using soft limit function.\n  *\n  *  @return void\n  */\nvoid update_soft_limits(server_info *, queue_info *, resource_resv *);\n\n/**\n * @brief\tfind the value of preempt bit with matching entity and resource in\n *\t\tthe counts structure\n * @return\tint\n */\nint find_preempt_bits(counts *, std::string &, resource_resv *);\n#endif /* _LIMITS_IF_H */\n"
  },
  {
    "path": "src/scheduler/list_order.awk",
    "content": "#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\nBEGIN { FS=\";\" }\n$6 == \"Starting Scheduling Cycle\" { st = $1; jobs = \"\" }\n\n$6 == \"Considering job to run\" { jobs = sprintf(\"%s%s\\n\", jobs, $5) }\n\n$6 == \"Leaving Scheduling Cycle\" { en = $1 }\n\nEND {\n  split(en, arr, \" \");\n  printf(\"Time: %s - %s\\nOrder:%s\\n\\n\", st, arr[2], jobs);\n}\n"
  },
  {
    "path": "src/scheduler/misc.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * Miscellaneous functions of scheduler.\n */\n#include <pbs_config.h>\n\n#include \"config.h\"\n#include \"constant.h\"\n#include \"fairshare.h\"\n#include \"globals.h\"\n#include \"misc.h\"\n#include \"resource.h\"\n#include \"resource_resv.h\"\n#include <algorithm>\n#include <ctype.h>\n#include <errno.h>\n#include <libpbs.h>\n#include <libutil.h>\n#include <log.h>\n#include <math.h>\n#include <pbs_error.h>\n#include <pbs_ifl.h>\n#include <pbs_internal.h>\n#include <pbs_share.h>\n#include <sstream>\n#include <stdarg.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <time.h>\n\n/**\n * @brief\n *\t\tstring_dup - duplicate a string\n *\n * @param[in]\tstr\t-\tstring to duplicate\n *\n * @return\tnewly allocated string\n *\n */\n\nchar *\nstring_dup(const char *str)\n{\n\tchar *newstr;\n\tsize_t len;\n\n\tif (str == NULL)\n\t\treturn NULL;\n\n\tlen = strlen(str) + 1;\n\tif ((newstr = static_cast<char *>(malloc(len))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\tpbs_strncpy(newstr, str, len);\n\n\treturn newstr;\n}\n\n/**\n * @brief\n * \t\tadd a string to a string array only if it is unique\n *\n * @param[in,out]\tstr_arr\t-\tarray of strings of unique values\n * @param[in]\tstr\t-\tstring to add\n * @return\tint\n * @retval\tindex in array if success\t: string is added to 
array\n * @retval\t-1 failure\t: string could not be added to the array\n */\nint\nadd_str_to_unique_array(char ***str_arr, char *str)\n{\n\tint ind;\n\tif (str_arr == NULL || str == NULL)\n\t\treturn -1;\n\n\tind = find_string_idx(*str_arr, str);\n\tif (ind >= 0) /* found it! */\n\t\treturn ind;\n\n\treturn add_str_to_array(str_arr, str);\n}\n\n/**\n * @brief\n * \t\tadd string to string array\n *\n * @param[in]\tstr_arr\t-\tpointer to an array of strings to be added to(i.e. char ***)\n * @param[in]\tstr\t-\tstring to add to array\n *\n * @return\tint\n * @retval\tindex\t-\tindex of string on success\n * @retval\t-1\t: failure\n */\nint\nadd_str_to_array(char ***str_arr, char *str)\n{\n\tchar **tmp_arr;\n\tint cnt;\n\n\tif (str_arr == NULL || str == NULL)\n\t\treturn -1;\n\n\tif (*str_arr == NULL)\n\t\tcnt = 0;\n\telse\n\t\tcnt = count_array(*str_arr);\n\n\ttmp_arr = static_cast<char **>(realloc(*str_arr, (cnt + 2) * sizeof(char *)));\n\tif (tmp_arr == NULL)\n\t\treturn -1;\n\n\ttmp_arr[cnt] = string_dup(str);\n\ttmp_arr[cnt + 1] = NULL;\n\n\t*str_arr = tmp_arr;\n\n\treturn cnt;\n}\n\n/**\n * @brief\n * \t\tres_to_num - convert a resource string to a numeric sch_resource_t\n *\n * @param[in]\tres_str\t-\tthe resource string\n * @param[out]\ttype\t-\tthe type of the resource\n *\n * @return\tsch_resource_t\n * @retval\ta number in kilobytes or seconds\n * @retval\t0 for False, if type is boolean\n * @retval\t1 for True, if type is boolean\n * @retval\tSCHD_INFINITY_RES\t: if not a number\n *\n */\nsch_resource_t\nres_to_num(const char *res_str, struct resource_type *type)\n{\n\tsch_resource_t count = SCHD_INFINITY_RES;  /* convert string resource to numeric */\n\tsch_resource_t count2 = SCHD_INFINITY_RES; /* convert string resource to numeric */\n\tchar *endp;\t\t\t\t   /* used for strtol() */\n\tchar *endp2;\t\t\t\t   /* used for strtol() */\n\tlong multiplier = 1;\t\t\t   /* multiplier to count */\n\tint is_size = 0;\t\t\t   /* resource value is a size type 
*/\n\tint is_time = 0;\t\t\t   /* resource value is a time spec */\n\n\tif (res_str == NULL)\n\t\treturn SCHD_INFINITY_RES;\n\n\tif (!strcasecmp(ATR_TRUE, res_str)) {\n\t\tif (type != NULL) {\n\t\t\ttype->is_boolean = 1;\n\t\t\ttype->is_non_consumable = 1;\n\t\t}\n\t\tcount = 1;\n\t} else if (!strcasecmp(ATR_FALSE, res_str)) {\n\t\tif (type != NULL) {\n\t\t\ttype->is_boolean = 1;\n\t\t\ttype->is_non_consumable = 1;\n\t\t}\n\t\tcount = 0;\n\t} else if (!is_num(res_str)) {\n\t\tif (type != NULL) {\n\t\t\ttype->is_string = 1;\n\t\t\ttype->is_non_consumable = 1;\n\t\t}\n\t\tcount = SCHD_INFINITY_RES;\n\t} else {\n\t\tcount = (sch_resource_t) strtod(res_str, &endp);\n\n\t\tif (*endp == ':') { /* time resource -> convert to seconds */\n\t\t\tcount2 = (sch_resource_t) strtod(endp + 1, &endp2);\n\t\t\tif (*endp2 == ':') { /* form of HH:MM:SS */\n\t\t\t\tcount *= 3600;\n\t\t\t\tcount += count2 * 60;\n\t\t\t\tcount += strtol(endp2 + 1, &endp, 10);\n\t\t\t\tif (*endp != '\\0')\n\t\t\t\t\tcount = SCHD_INFINITY_RES;\n\t\t\t} else { /* form of MM:SS */\n\t\t\t\tcount *= 60;\n\t\t\t\tcount += count2;\n\t\t\t}\n\t\t\tmultiplier = 1;\n\t\t\tis_time = 1;\n\t\t} else if (*endp == 'k' || *endp == 'K') {\n\t\t\tmultiplier = 1;\n\t\t\tis_size = 1;\n\t\t} else if (*endp == 'm' || *endp == 'M') {\n\t\t\tmultiplier = MEGATOKILO;\n\t\t\tis_size = 1;\n\t\t} else if (*endp == 'g' || *endp == 'G') {\n\t\t\tmultiplier = GIGATOKILO;\n\t\t\tis_size = 1;\n\t\t} else if (*endp == 't' || *endp == 'T') {\n\t\t\tmultiplier = TERATOKILO;\n\t\t\tis_size = 1;\n\t\t} else if (*endp == 'b' || *endp == 'B') {\n\t\t\tcount = ceil(count / KILO);\n\t\t\tmultiplier = 1;\n\t\t\tis_size = 1;\n\t\t} else if (*endp == 'w') {\n\t\t\tcount = ceil(count / KILO);\n\t\t\tmultiplier = SIZEOF_WORD;\n\t\t\tis_size = 1;\n\t\t} else /* catch all */\n\t\t\tmultiplier = 1;\n\n\t\tif (*endp != '\\0' && *(endp + 1) == 'w')\n\t\t\tmultiplier *= SIZEOF_WORD;\n\n\t\tif (type != NULL) {\n\t\t\ttype->is_consumable = 1;\n\t\t\tif 
(is_size)\n\t\t\t\ttype->is_size = 1;\n\t\t\telse if (is_time)\n\t\t\t\ttype->is_time = 1;\n\t\t\telse\n\t\t\t\ttype->is_num = 1;\n\t\t}\n\t}\n\n\treturn count * multiplier;\n}\n\n/**\n * @brief\n *      skip_line - find if the line of the config file needs to be skipped\n *                  due to it being a comment or other means\n *\n * @param[in]\tline\t-\tthe line from the config file\n *\n * @return\ttrue:1/false:0\n * @retval\ttrue\t: if the line should be skipped\n * @retval\tfalse\t: if it should be parsed\n *\n */\nint\nskip_line(char *line)\n{\n\tint skip = 0; /* whether or not to skip the line */\n\n\tif (line != NULL) {\n\t\twhile (isspace((int) *line))\n\t\t\tline++;\n\n\t\t/* '#' is comment in config files and '*' is comment in holidays file */\n\t\tif (line[0] == '\\0' || line[0] == '#' || line[0] == '*')\n\t\t\tskip = 1;\n\t}\n\n\treturn skip;\n}\n\n/**\n *\t@brief  combination of log_event() and translate_fail_code()\n *\t\tIf we're actually going to log a message, translate\n *\t\terr into a message and then log it.  
The translated\n *\t\terror will be printed after the message\n *\n *\t@param[in] event - the event type\n *\t@param[in] event_class - the event class\n *\t@param[in] sev   - the severity of the log message\n *\t@param[in] name  - the name of the object\n *\t@param[in] text  - the text of the message\n *\t\t\tif NULL, only print translated error text\n *\t@param[in] err   - schderr error structure to be translated\n *\n *\t@return nothing\n */\nvoid\nschdlogerr(int event, int event_class, int sev, const std::string &name, const char *text,\n\t   schd_error *err)\n{\n\n\tif (err == NULL)\n\t\treturn;\n\n\tif (will_log_event(event)) {\n\t\tchar logbuf[MAX_LOG_SIZE];\n\n\t\ttranslate_fail_code(err, NULL, logbuf);\n\t\tif (text == NULL)\n\t\t\tlog_event(event, event_class, sev, name, logbuf);\n\t\telse\n\t\t\tlog_eventf(event, event_class, sev, name, \"%s %s\", text, logbuf);\n\t}\n}\n\n/**\n * @brief\n * \tlog_eventf - a combination of log_event() and printf()\n *\n * @param[in] eventtype - event type\n * @param[in] objclass - event object class\n * @param[in] sev - severity; used only if syslogging is enabled\n * @param[in] objname - object name stating log msg related to which object\n * @param[in] fmt - format string\n * @param[in] ... - arguments to format string\n *\n * @return void\n */\nvoid\nlog_eventf(int eventtype, int objclass, int sev, const std::string &objname, const char *fmt, ...)\n{\n\tva_list args;\n\tva_start(args, fmt);\n\tdo_log_eventf(eventtype, objclass, sev, objname.c_str(), fmt, args);\n\tva_end(args);\n}\n\n/**\n * @brief\n * \tlog_event - log a server event to the log file\n *\n *\tChecks to see if the event type is being recorded.  
If they are,\n *\tpass off to log_record().\n *\n *\tThe caller should ensure proper formatting of the message if \"text\"\n *\tis to contain \"continuation lines\".\n *\n * @param[in] eventtype - event type\n * @param[in] objclass - event object class\n * @param[in] sev - severity; used only if syslogging is enabled\n * @param[in] objname - object name stating log msg related to which object\n * @param[in] text - log msg to be logged.\n *\n *\tNote, \"sev\" or severity is used only if syslogging is enabled,\n *\tsee syslog(3) and log_record.c for details.\n */\n\nvoid\nlog_event(int eventtype, int objclass, int sev, const std::string &objname, const char *text)\n{\n\tif (will_log_event(eventtype))\n\t\tlog_record(eventtype, objclass, sev, objname.c_str(), text);\n}\n\n/**\n * @brief\n * \t\ttake a generic NULL terminated pointer array and return a\n *             filtered specialized array based on calling filter_func() on every\n *             member.  This can be used with any standard scheduler array\n *\t       like resource_resv or node_info or resdef\n *\n * @param[in] ptrarr\t-\tthe array to filter\n * @param[in] filter_func\t-\tpointer to a function that will filter the array\n * @param[in] arg\t-\tan optional arg passed to filter_func\n * @param[in] flags\t-\tcontrol how ptrs are filtered\n *\t\t\t\t\t\tFILTER_FULL - leave the filtered array full size\n *\n * @par\n * \t   filter_func prototype: @fn int func( void *, void * )\n *                                           object   arg\n *\t\t  object - specific member of ptrarr[]\n *\t\t  arg    - arg parameter\n *               - returns 1: ptr will be added to filtered array\n *               - returns 0: ptr will NOT be added to filtered array\n *\n * @return\tvoid ** filtered array.\n */\nvoid **\nfilter_array(void **ptrarr, int (*filter_func)(void *, void *),\n\t     void *arg, int flags)\n{\n\tvoid **new_arr = NULL; /* the filtered array */\n\tvoid **tmp;\n\tint i, j;\n\tint size;\n\n\tif 
(ptrarr == NULL || filter_func == NULL)\n\t\treturn NULL;\n\n\tsize = count_array(ptrarr);\n\n\tif ((new_arr = static_cast<void **>(malloc((size + 1) * sizeof(void *)))) == NULL) {\n\t\tlog_err(errno, __func__, \"Error allocating memory\");\n\t\treturn NULL;\n\t}\n\n\tfor (i = 0, j = 0; i < size; i++) {\n\t\tif (filter_func(ptrarr[i], arg)) {\n\t\t\tnew_arr[j] = ptrarr[i];\n\t\t\tj++;\n\t\t}\n\t}\n\tnew_arr[j] = NULL;\n\n\tif (!(flags & FILTER_FULL)) {\n\t\tif ((tmp = static_cast<void **>(realloc(new_arr, (j + 1) * sizeof(void *)))) == NULL) {\n\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\tfree(new_arr);\n\t\t\treturn NULL;\n\t\t}\n\t\tnew_arr = tmp;\n\t}\n\treturn new_arr;\n}\n\n/**\n * @brief\n * \t\tmatch two NULL terminated string arrays\n *\n * @param[in]\tstrarr\t-\tthe string array to search\n * @param[in]\tstr\t-\tthe string to find\n *\n * @return\tenum match_string_array_ret\n * @retval\tSA_FULL_MATCH\t: full match\n * @retval\tSA_SUB_MATCH\t\t: full match of one array and it is\n *\t\t\t\t\t\t\t\t\ta subset of the other\n * @retval\tSA_PARTIAL_MATCH\t: at least one match but not all\n * @retval\tSA_NO_MATCH\t: no match\n *\n */\nenum match_string_array_ret\nmatch_string_array(const char *const *strarr1, const char *const *strarr2)\n{\n\tint match = 0;\n\tint i;\n\tint strarr2_len;\n\n\tif (strarr1 == NULL || strarr2 == NULL)\n\t\treturn SA_NO_MATCH;\n\n\tstrarr2_len = count_array(strarr2);\n\n\tfor (i = 0; strarr1[i] != NULL; i++) {\n\t\tif (is_string_in_arr(const_cast<char **>(strarr2), strarr1[i]))\n\t\t\tmatch++;\n\t}\n\n\t/* i is the length of strarr1 since we just looped through the whole array */\n\tif (match == i && match == strarr2_len)\n\t\treturn SA_FULL_MATCH;\n\n\tif (match == i || match == strarr2_len)\n\t\treturn SA_SUB_MATCH;\n\n\tif (match)\n\t\treturn SA_PARTIAL_MATCH;\n\n\treturn SA_NO_MATCH;\n}\n// overloaded\nenum match_string_array_ret\nmatch_string_array(const std::vector<std::string> &strarr1, const std::vector<std::string> 
&strarr2)\n{\n\tunsigned int match = 0;\n\n\tif (strarr1.empty() || strarr2.empty())\n\t\treturn SA_NO_MATCH;\n\n\tfor (auto &str1 : strarr1) {\n\t\tif (std::find(strarr2.begin(), strarr2.end(), str1) != strarr2.end())\n\t\t\tmatch++;\n\t}\n\n\tif (match == strarr1.size() && match == strarr2.size())\n\t\treturn SA_FULL_MATCH;\n\n\tif (match == strarr1.size() || match == strarr2.size())\n\t\treturn SA_SUB_MATCH;\n\n\tif (match)\n\t\treturn SA_PARTIAL_MATCH;\n\n\treturn SA_NO_MATCH;\n}\n\n/**\n * @brief\n * \t\tconvert a string array into a printable string\n *\n * @param[in]\tstrarr\t-\tstring array to convert\n *\n * @return\tconverted string stored in local static ptr (no need to free)\n *\n * @par MT-safe:\tyes\n *\n */\nchar *\nstring_array_to_str(char **strarr)\n{\n\tchar *arrbuf = NULL;\n\tint len = 0;\n\tint i;\n\n\tif (strarr == NULL)\n\t\treturn NULL;\n\n\tif (strarr[0] == NULL)\n\t\treturn NULL;\n\n\tfor (i = 0; strarr[i] != NULL; i++)\n\t\tlen += strlen(strarr[i]);\n\tlen += i; /* added space for the commas */\n\n\tarrbuf = static_cast<char *>(malloc(len + 1));\n\tif (arrbuf == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\tarrbuf[0] = '\\0';\n\n\tfor (i = 0; strarr[i] != NULL; i++) {\n\t\tstrcat(arrbuf, strarr[i]);\n\t\tstrcat(arrbuf, \",\");\n\t}\n\tarrbuf[strlen(arrbuf) - 1] = '\\0';\n\n\treturn arrbuf;\n}\n\n/**\n * @brief\n * \t\tcalc_used_walltime - calculate the  used amount of a resource resv\n *\n * @param[in]\tresresv\t-\tthe resource resv to calculate\n *\n * @return\tused amount of the resource resv\n * @retval\t0\t: if resresv starts in the future\n *\t\t\t\t\tor if the resource used for walltime is NULL\n */\n\ntime_t\ncalc_used_walltime(resource_resv *resresv)\n{\n\ttime_t used_amount = 0;\n\tresource_req *used = NULL;\n\n\tif (resresv == NULL)\n\t\treturn 0;\n\n\tif (resresv->is_job && resresv->job != NULL) {\n\t\tused = find_resource_req(resresv->job->resused, allres[\"walltime\"]);\n\n\t\t/* If we can't 
find the used structure, we will just assume no usage */\n\t\tif (used == NULL)\n\t\t\tused_amount = 0;\n\t\telse\n\t\t\tused_amount = (time_t) used->amount;\n\t} else {\n\t\tif (resresv->server->server_time > resresv->start)\n\t\t\tused_amount = resresv->server->server_time - resresv->start;\n\t\telse\n\t\t\tused_amount = 0;\n\t}\n\treturn used_amount;\n}\n/**\n * @brief\n * \t\tcalc_time_left_STF - calculate the amount of time left\n *  \tfor minimum duration and maximum duration of a STF resource resv\n *\n * \t@param[in]\tresresv\t\t-\tthe resource resv to calculate\n * \t@param[out]\tmin_time_left\t-\ttime left to complete minimum duration\n *\n * \t@return\ttime left to complete maximum duration of the job\n * \t@retval\t0\t: if used amount is greater than duration\n * \t@retval\t-1\t: on error\n */\nint\ncalc_time_left_STF(resource_resv *resresv, sch_resource_t *min_time_left)\n{\n\ttime_t used_amount = 0;\n\n\tif (min_time_left == NULL || resresv->duration == UNSPECIFIED)\n\t\treturn -1;\n\n\tused_amount = calc_used_walltime(resresv);\n\t*min_time_left = IF_NEG_THEN_ZERO((resresv->min_duration - used_amount));\n\n\treturn IF_NEG_THEN_ZERO((resresv->duration - used_amount));\n}\n/**\n * @brief\n *\t\tcalc_time_left - calculate the remaining time of a resource resv\n *\n * @param[in]\tresresv\t-\tthe resource resv to calculate\n * @param[in]\tuse_hard_duration - use the resresv's hard_duration instead of normal duration\n *\n * @return\ttime left on job\n * @retval\t-1\t: on error\n *\n */\nint\ncalc_time_left(resource_resv *resresv, int use_hard_duration)\n{\n\ttime_t used_amount = 0;\n\tlong duration;\n\n\tif (use_hard_duration && resresv->hard_duration == UNSPECIFIED)\n\t\treturn -1;\n\n\telse if (!use_hard_duration && resresv->duration == UNSPECIFIED)\n\t\treturn -1;\n\n\tif (use_hard_duration)\n\t\tduration = resresv->hard_duration;\n\telse\n\t\tduration = resresv->duration;\n\n\tused_amount = calc_used_walltime(resresv);\n\n\treturn 
IF_NEG_THEN_ZERO(duration - used_amount);\n}\n\n/**\n * @brief\n *\t\tcstrcmp\t- check string compare - compares two strings but doesn't bomb\n *\t\t  if either one is null\n *\n * @param[in]\ts1\t-\tstring one\n * @param[in]\ts2\t-\tstring two\n *\n * @return\tint\n * @retval\t-1\t: if s1 < s2\n * @retval\t0\t: if s1 == s2\n * @retval\t1\t: if s1 > s2\n *\n */\nint\ncstrcmp(const char *s1, const char *s2)\n{\n\tif (s1 == NULL && s2 == NULL)\n\t\treturn 0;\n\n\telse if (s1 == NULL && s2 != NULL)\n\t\treturn -1;\n\n\telse if (s1 != NULL && s2 == NULL)\n\t\treturn 1;\n\n\treturn strcmp(s1, s2);\n}\n\n/**\n * @brief\n *\t\tis_num - checks to see if the string is a number, size, float\n *\t\t or time in string form\n *\n * @param[in]\tstr\t-\tthe string to test\n *\n * @return\tint\n * @retval\t1\t: if str is a number\n * @retval\t0\t: if str is not a number\n * @retval\t-1\t: on error\n *\n */\nint\nis_num(const char *str)\n{\n\tint i;\n\tint colon_count = 0;\n\tint str_len = -1;\n\n\tif (str == NULL)\n\t\treturn -1;\n\n\tif (str[0] == '-' || str[0] == '+')\n\t\tstr++;\n\n\tstr_len = strlen(str);\n\tfor (i = 0; i < str_len && (isdigit(str[i]) || str[i] == ':'); i++) {\n\t\tif (str[i] == ':')\n\t\t\tcolon_count++;\n\t}\n\n\t/* is the string completely numeric or a time(HH:MM:SS or MM:SS) */\n\tif (i == str_len && colon_count <= 2)\n\t\treturn 1;\n\n\t/* is the string a size type resource like 'mem' */\n\tif ((i == (str_len - 2)) || (i == (str_len - 1))) {\n\t\tauto c = tolower(str[i]);\n\t\tif (c == 'k' || c == 'm' || c == 'g' || c == 't') {\n\t\t\tc = tolower(str[i + 1]);\n\t\t\tif (c == 'b' || c == 'w' || c == '\\0')\n\t\t\t\treturn 1;\n\t\t} else if (i == (str_len - 1)) {\n\t\t\t/* catch the case of \"b\" or \"w\" */\n\t\t\tif (c == 'b' || c == 'w')\n\t\t\t\treturn 1;\n\t\t}\n\t}\n\n\t/* last but not least, make sure we didn't stop on a decimal point */\n\tif (str[i] == '.') {\n\t\tfor (i++; i < str_len && isdigit(str[i]); i++)\n\t\t\t;\n\n\t\t/* number is a float 
*/\n\t\tif (i == str_len)\n\t\t\treturn 1;\n\t}\n\n\t/* the string is not a number or a size or time */\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tcount_array - count the number of elements in a NULL terminated array\n *\t\t      of pointers\n *\n * @param[in]\tarr\tthe array to count\n *\n * @return\tnumber of elements in the array\n *\n */\nint\ncount_array(const void *arr)\n{\n\tint i;\n\tvoid **ptr_arr;\n\n\tif (arr == NULL)\n\t\treturn 0;\n\n\tptr_arr = (void **) arr;\n\n\tfor (i = 0; ptr_arr[i] != NULL; i++)\n\t\t;\n\n\treturn i;\n}\n\n/**\n * @brief\n *  dup_array - make a shallow copy of elements in a NULL terminated array of pointers.\n *\n * @param[in]\tarr\t-\tthe array to copy\n *\n * @return\tarray of pointers\n *\n */\nvoid **\ndup_array(void *ptr)\n{\n\tvoid **ret;\n\tvoid **arr;\n\tint len = 0;\n\n\tarr = (void **) ptr;\n\tif (arr == NULL)\n\t\treturn NULL;\n\n\tlen = count_array(arr);\n\tret = static_cast<void **>(malloc((len + 1) * sizeof(void *)));\n\tif (ret == NULL)\n\t\treturn NULL;\n\tmemcpy(ret, arr, len * sizeof(void *));\n\tret[len] = NULL;\n\treturn ret;\n}\n\n/**\n * @brief\n *\t\tremove_ptr_from_array - remove a pointer from a ptr list and move\n *\t\t\t\tthe rest of the pointers up to fill the hole\n *\t\t\t\tPointer array size will not change - an extra\n *\t\t\t\tNULL is added to the end\n *\n *\t  arr - pointer array\n *\t  ptr - pointer to remove from array\n *\n *\treturns non-zero if the ptr was successfully removed from the array\n *\t\tzero if the array has not been modified\n *\n */\nint\nremove_ptr_from_array(void *arr, void *ptr)\n{\n\tint i;\n\tvoid **parr;\n\n\tif (arr == NULL || ptr == NULL)\n\t\treturn 0;\n\n\tparr = (void **) arr;\n\n\tfor (i = 0; parr[i] != NULL && parr[i] != ptr; i++)\n\t\t;\n\n\tif (parr[i] != NULL) {\n\t\tfor (int j = i; parr[j] != NULL; j++)\n\t\t\tparr[j] = parr[j + 1];\n\t\treturn 1;\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief add pointer to NULL terminated pointer array\n * @param[in] ptr_arr - 
pointer array to add to\n * @param[in] ptr - pointer to add\n *\n * @return void *\n * @retval pointer array with new element added\n * @retval NULL on error\n */\nvoid *\nadd_ptr_to_array(void *ptr_arr, void *ptr)\n{\n\tvoid **arr;\n\tint cnt;\n\n\tcnt = count_array(ptr_arr);\n\n\tif (cnt == 0) {\n\t\tarr = static_cast<void **>(malloc(sizeof(void *) * 2));\n\t\tif (arr == NULL) {\n\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\treturn NULL;\n\t\t}\n\t\tarr[0] = ptr;\n\t\tarr[1] = NULL;\n\t} else {\n\t\t/* need room for cnt existing elements, the new element, and the NULL terminator */\n\t\tarr = static_cast<void **>(realloc(ptr_arr, (cnt + 2) * sizeof(void *)));\n\t\tif (arr == NULL) {\n\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\treturn NULL;\n\t\t}\n\t\tarr[cnt] = ptr;\n\t\tarr[cnt + 1] = NULL;\n\t}\n\treturn arr;\n}\n\n/**\n * @brief\n *\t\tis_valid_pbs_name - is str a valid pbs username (POSIX.1 + ' ')\n *\t\t\t    a valid name is: alpha numeric '-' '_' '.' ' '\n * \t\t\t    For fairshare entities: colon ':'\n *\n * @param[in]\tstr\t-\tstring to check validity\n * @param[in]\tlen\t-\tlength of str buffer or -1 if length is unknown\n *\n * @return\tint\n * @retval\t1\t: valid PBS username\n * @retval\t0\t: not valid username\n */\nint\nis_valid_pbs_name(char *str, int len)\n{\n\tint i;\n\tint valid = 1;\n\n\tif (str == NULL)\n\t\treturn 0;\n\n\t/* if str is not null terminated, this could cause problems */\n\tif (len < 0)\n\t\tlen = strlen(str) + 1;\n\n\tfor (i = 0; i < len && valid; i++) {\n\t\tif (str[i] == '\\0')\n\t\t\tbreak;\n\t\tif (!(isalpha(str[i]) || isdigit(str[i]) || str[i] == '.' 
||\n\t\t      str[i] == '-' || str[i] == '_' || str[i] == ' ' || str[i] == ':')) {\n\t\t\tvalid = 0;\n\t\t}\n\t}\n\n\tif (i == len)\n\t\tvalid = 0;\n\n\treturn valid;\n}\n\n/**\n * @brief\n * \t\tclear an schd_error structure for reuse\n *\n * @param[in]\terr\t-\terror structure to clear\n *\n * @return\tvoid\n */\nvoid\nclear_schd_error(schd_error *err)\n{\n\tif (err == NULL)\n\t\treturn;\n\n\tset_schd_error_codes(err, SCHD_UNKWN, SUCCESS);\n\tset_schd_error_arg(err, ARG1, NULL);\n\tset_schd_error_arg(err, ARG2, NULL);\n\tset_schd_error_arg(err, ARG3, NULL);\n\tset_schd_error_arg(err, SPECMSG, NULL);\n\terr->rdef = NULL;\n\terr->next = NULL;\n}\n\n/**\n * @brief\n *  \tconstructor schd_error\n *\n * @return\tnew schd_error structure\n * @retval\tNULL\t: Error\n */\nschd_error *\nnew_schd_error()\n{\n\tschd_error *err;\n\tif ((err = static_cast<schd_error *>(calloc(1, sizeof(schd_error)))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\tclear_schd_error(err);\n\treturn err;\n}\n\n/**\n * @brief\n * \t\tcopy constructor for schd_error\n *\n * @param[in]\toerr\t-\toriginal error to duplicate.\n *\n * @return\tnew schd_error structure\n * @retval\tNULL\t: Error\n */\nschd_error *\ndup_schd_error(schd_error *oerr)\n{\n\tschd_error *nerr;\n\tif (oerr == NULL)\n\t\treturn NULL;\n\n\tnerr = new_schd_error();\n\tif (nerr == NULL)\n\t\treturn NULL;\n\n\tnerr->rdef = oerr->rdef;\n\tset_schd_error_codes(nerr, oerr->status_code, oerr->error_code);\n\tset_schd_error_arg(nerr, ARG1, oerr->arg1);\n\tset_schd_error_arg(nerr, ARG2, oerr->arg2);\n\tset_schd_error_arg(nerr, ARG3, oerr->arg3);\n\tset_schd_error_arg(nerr, SPECMSG, oerr->specmsg);\n\n\treturn nerr;\n}\n\n/**\n * @brief\n * \t\tmake a shallow copy of a schd_error and move all argument data\n *\t\tto err.\n *\n * @param[in]\terr\t-\tschd_error to move TO\n * @param[in]\toerr-\tschd_error to move FROM\n *\n * @return\tnothing\n */\nvoid\nmove_schd_error(schd_error *err, schd_error *oerr)\n{\n\tif (oerr 
== NULL || err == NULL)\n\t\treturn;\n\n\t/* we're about to overwrite these, free just in case so we don't leak */\n\tfree(err->arg1);\n\tfree(err->arg2);\n\tfree(err->arg3);\n\tfree(err->specmsg);\n\tfree_schd_error_list(err->next);\n\n\tmemcpy(err, oerr, sizeof(schd_error));\n\n\t/* Now that err has taken over the memory for the pointers,\n\t * NULL them on the original so we don't accidentally free them\n\t */\n\toerr->arg1 = NULL;\n\toerr->arg2 = NULL;\n\toerr->arg3 = NULL;\n\toerr->specmsg = NULL;\n\toerr->next = NULL;\n\tclear_schd_error(oerr);\n}\n\n/**\n * @brief\n *\t\tdeep copy oerr into err.  This will allocate memory\n *\t\tfor members of err, but not a new structure itself (like\n *\t\tdup_schd_error() would).\n * @param[out] err\n * @param[in] oerr\n */\nvoid\ncopy_schd_error(schd_error *err, schd_error *oerr)\n{\n\tset_schd_error_codes(err, oerr->status_code, oerr->error_code);\n\tset_schd_error_arg(err, ARG1, oerr->arg1);\n\tset_schd_error_arg(err, ARG2, oerr->arg2);\n\tset_schd_error_arg(err, ARG3, oerr->arg3);\n\tset_schd_error_arg(err, SPECMSG, oerr->specmsg);\n\terr->rdef = oerr->rdef;\n}\n\n/**\n * @brief\n * \t\tsafely set the schd_error arg buffers without worrying about leaking\n *\n * @param[in,out]\terr\t-\tobject to set\n * @param[in]\targ_field\t-\targ buffer to set\n * @param[in]\targ\t-\tstring to set arg to\n *\n * @return\tnothing\n */\nvoid\nset_schd_error_arg(schd_error *err, enum schd_error_args arg_field, const char *arg)\n{\n\n\tif (err == NULL)\n\t\treturn;\n\n\tswitch (arg_field) {\n\t\tcase ARG1:\n\t\t\tfree(err->arg1);\n\t\t\tif (arg != NULL)\n\t\t\t\terr->arg1 = string_dup(arg);\n\t\t\telse\n\t\t\t\terr->arg1 = NULL;\n\t\t\tbreak;\n\t\tcase ARG2:\n\t\t\tfree(err->arg2);\n\t\t\tif (arg != NULL)\n\t\t\t\terr->arg2 = string_dup(arg);\n\t\t\telse\n\t\t\t\terr->arg2 = NULL;\n\t\t\tbreak;\n\t\tcase ARG3:\n\t\t\tfree(err->arg3);\n\t\t\tif (arg != NULL)\n\t\t\t\terr->arg3 = string_dup(arg);\n\t\t\telse\n\t\t\t\terr->arg3 = 
NULL;\n\t\t\tbreak;\n\t\tcase SPECMSG:\n\t\t\tfree(err->specmsg);\n\t\t\tif (arg != NULL)\n\t\t\t\terr->specmsg = string_dup(arg);\n\t\t\telse\n\t\t\t\terr->specmsg = NULL;\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SCHED, LOG_DEBUG, __func__, \"Invalid schd_error arg message type\");\n\t}\n}\n\n/**\n * @brief\n * \t\tset the status code and error code of a schd_error structure\n *\n *\t@note\n *\t\tthis ensures both codes are set together\n *\n * @param[in,out]\terr\t-\terror structure to set\n * @param[in]\tstatus_code\t-\tstatus code\n * @param[in]\terror_code\t-\terror code\n *\n * @return\tnothing\n */\nvoid\nset_schd_error_codes(schd_error *err, enum schd_err_status status_code, enum sched_error_code error_code)\n{\n\tif (err == NULL)\n\t\treturn;\n\tif (status_code < SCHD_UNKWN || status_code >= SCHD_STATUS_HIGH)\n\t\treturn;\n\tif (error_code < PBSE_NONE || error_code > ERR_SPECIAL)\n\t\treturn;\n\n\terr->status_code = status_code;\n\terr->error_code = error_code;\n}\n\n/**\n * @brief\n * \t\tdestructor of schd_error: Free a single schd_error structure\n *\n * @param[in]\terr\t-\tError structure\n */\nvoid\nfree_schd_error(schd_error *err)\n{\n\tif (err == NULL)\n\t\treturn;\n\n\tfree(err->arg1);\n\tfree(err->arg2);\n\tfree(err->arg3);\n\tfree(err->specmsg);\n\n\terr->next = NULL; /* just in case people try to access freed memory */\n\n\tfree(err);\n}\n\n/**\n * @brief\n * \t\tlist schd_error destructor: Free multiple schd_error's in a list\n *\n * @param[in]\t err_list\t-\tError list.\n */\nvoid\nfree_schd_error_list(schd_error *err_list)\n{\n\tschd_error *err, *tmp;\n\n\terr = err_list;\n\twhile (err != NULL) {\n\t\ttmp = err->next;\n\t\tfree_schd_error(err);\n\t\terr = tmp;\n\t}\n}\n\n/**\n * @brief\n * \t\tcreate a simple schd_error with no arguments\n *\n * @param[in]\terror_code\t-\terror code for new schd_error\n * @param[in]\tstatus_code -\tstatus code for new schd_error\n *\n * @return\tnew schd_error\n 
*/\nschd_error *\ncreate_schd_error(enum sched_error_code error_code, enum schd_err_status status_code)\n{\n\tschd_error *nse;\n\tnse = new_schd_error();\n\tif (nse == NULL)\n\t\treturn NULL;\n\tset_schd_error_codes(nse, status_code, error_code);\n\treturn nse;\n}\n\n/**\n * @brief\n *\t\tcreate a schd_error complete with arguments\n * @par\n *\tschd_error fields: error_code, status_code, arg1, arg2, arg3 and specmsg.\n *\n * @param[in]\terror_code\t-\tError Code.\n * @param[in]\tstatus_code\t-\tStatus Code.\n * @param[in]\targ1\t-\tArgument 1\n * @param[in]\targ2\t-\tArgument 2\n * @param[in]\targ3\t-\tArgument 3\n * @param[in]\tspecmsg\t-\tSpecial message\n *\n * @return\tnew schd_error\n */\nschd_error *\ncreate_schd_error_complex(enum sched_error_code error_code, enum schd_err_status status_code, char *arg1, char *arg2, char *arg3, char *specmsg)\n{\n\tschd_error *nse;\n\n\tnse = create_schd_error(error_code, status_code);\n\tif (nse == NULL)\n\t\treturn NULL;\n\n\tif (arg1 != NULL)\n\t\tset_schd_error_arg(nse, ARG1, arg1);\n\n\tif (arg2 != NULL)\n\t\tset_schd_error_arg(nse, ARG2, arg2);\n\n\tif (arg3 != NULL)\n\t\tset_schd_error_arg(nse, ARG3, arg3);\n\n\tif (specmsg != NULL)\n\t\tset_schd_error_arg(nse, SPECMSG, specmsg);\n\n\treturn nse;\n}\n\n/**\n * @brief\n * \t\tadd a schd_error to a linked list of schd_errors.\n *\t\tThe way this works: head of the schd_error list is already created and\n *\t\tpassed into the caller (e.g., from main_sched_loop() -> is_ok_to_run()).  The caller will maintain\n *\t\ta 'prev_err' pointer.  The address of prev_err (i.e., &prev_err) is passed into this function.\n *\t\tOn the first call, we add the head.  
Each additional call, we set up the next pointers.\n *\t\tIf err->next is not NULL, we assume we're adding a sublist of schd_error's to the main list.\n *\n *\t@example\n *\t\tmain_sched_loop(): foo_err = new_schd_error()\n *\t\tmain_sched_loop(): is_ok_to_run(..., foo_err)\n *\t\tis_ok_to_run(): schd_error *prev_err = NULL;\n *\t\tis_ok_to_run(): add_err(&prev_err, err);\n *\t\tis_ok_to_run(): err = new_schd_error()\n *\t\tis_ok_to_run(): add_err(&prev_err, err)\n *\t\tNote: main_sched_loop() did not pass the address of foo_err into is_ok_to_run()\n *\t\tNote2: main_sched_loop() holds the head of the list, so we don't return the list\n *\n * @param[in]\tprev_err\t-\taddress of the pointer previous to the end of the list (i.e. (*prev_err)->next->next == NULL)\n * @param[in]\terr\t-\tschd_error to add to the list (may be a list of schd_error's)\n *\n * @return\tnothing\n *\n * @note\tnothing stops duplicate entries from being added\n */\nvoid\nadd_err(schd_error **prev_err, schd_error *err)\n{\n\tschd_error *cur = NULL;\n\n\tif (err == NULL || prev_err == NULL)\n\t\treturn;\n\n\tif (*prev_err == NULL)\n\t\t(*prev_err) = err;\n\telse\n\t\t(*prev_err)->next = err;\n\n\tif (err->next != NULL) {\n\t\tfor (cur = err; cur->next != NULL; cur = cur->next)\n\t\t\t;\n\t\t(*prev_err) = cur;\n\t} else\n\t\t(*prev_err) = err;\n}\n\n/**\n * @brief\n * \t\tturn a resource/resource_req into a string for printing using an\n *\t\tinternal static buffer\n *\n * @param[in]\tp\t-\tpointer to resource/req\n * @param[in] fld\t-\tthe field of the resource to print\n *\n * @return\tchar *\n * @retval\tthe resource in string format (in internal static string)\n * @retval\t\"\" on error\n *\n * @par\tMT-Safe: No\n *\n * @note\n * \t\tThis function cannot be used more than once in a printf() type call\n *\t \tThe static string will be overwritten before printing and you will get\n *\t \tthe same value for all calls.  
Use res_to_str_r().\n */\nchar *\nres_to_str(void *p, enum resource_fields fld)\n{\n\tstatic char *resbuf = NULL;\n\tstatic int resbuf_size = 1024;\n\n\tif (resbuf == NULL) {\n\t\tif ((resbuf = static_cast<char *>(malloc(resbuf_size))) == NULL)\n\t\t\treturn const_cast<char *>(\"\");\n\t}\n\n\treturn res_to_str_re(p, fld, &resbuf, &resbuf_size, NO_FLAGS);\n}\n\n/**\n * @brief\n * \t\tconvert a number that is a resource into a string with a provided\n * \t\tnon-expandable buffer.  This is useful for size types or scheduler constants\n *\n * @param[in]\tamount\t-\tamount of resource\n * @param[in]\tdef\t-\tresource definition of amount\n * @param[in]\tfld\t-\tthe field of the resource to print - Should be RF_REQUEST or\n *              \t\tRF_AVAIL\n * @param[in,out]\tbuf\t-\tbuffer for res to str conversion -- not expandable\n * @param[in]\tbufsize\t-\tsize of buf\n *\n * @return\tchar *\n * @retval\tthe resource in string format (in provided buffer)\n * @retval\t\"\"\t: on error\n */\nchar *\nres_to_str_c(sch_resource_t amount, resdef *def, enum resource_fields fld,\n\t     char *buf, int bufsize)\n{\n\tschd_resource res = {0};\n\tresource_req req = {0};\n\tconst char *unknown[] = {\"unknown\", NULL};\n\n\tif (buf == NULL)\n\t\treturn const_cast<char *>(\"\");\n\n\tbuf[0] = '\\0';\n\n\tif (def == NULL)\n\t\treturn const_cast<char *>(\"\");\n\n\tswitch (fld) {\n\t\tcase RF_REQUEST:\n\t\t\treq.amount = amount;\n\t\t\treq.def = def;\n\t\t\treq.name = def->name.c_str();\n\t\t\treq.type = def->type;\n\t\t\treq.res_str = const_cast<char *>(\"unknown\");\n\t\t\treturn res_to_str_re(((void *) &req), fld, &buf, &bufsize, NOEXPAND);\n\t\t\tbreak;\n\t\tcase RF_AVAIL:\n\t\tdefault:\n\t\t\tres.avail = amount;\n\t\t\tres.assigned = amount;\n\t\t\tres.def = def;\n\t\t\tres.name = def->name.c_str();\n\t\t\tres.type = def->type;\n\t\t\tres.orig_str_avail = const_cast<char *>(\"unknown\");\n\t\t\tres.str_avail = const_cast<char **>(unknown);\n\t\t\tres.str_assigned = const_cast<char 
*>(\"unknown\");\n\t\t\treturn res_to_str_re(((void *) &res), fld, &buf, &bufsize, NOEXPAND);\n\t}\n\treturn const_cast<char *>(\"\");\n}\n\n/**\n * @brief\n * \t\tconvert a resource to string with a non-expandable buffer\n *\n * @param[in]\tp\t-\tpointer to resource/req\n * @param[in]\tfld\t-\tthe field of the resource to print\n * @param[in]\tbuf\t-\tbuffer for res to str conversion -- not expandable\n * @param[in,out]\tbufsize\t-\tsize of buf\n *\n * @return\tchar *\n * @retval\tthe resource in string format (in provided buffer)\n * @retval\t\"\"\t: on error\n */\nchar *\nres_to_str_r(void *p, enum resource_fields fld, char *buf, int bufsize)\n{\n\treturn res_to_str_re(p, fld, &buf, &bufsize, NOEXPAND);\n}\n\n/**\n * @brief\n * \t\tturn a resource (resource/resource_req) into\n *      a string for printing.  If the buffer needs expanding, it\n *\t\twill be expanded (except when specified by flags).\n *\n * @param[in]\tp\t-\ta pointer to a resource/req\n * @param[in]\tfld\t-\tThe field of the resource type to print.  
This is\n *\t\t\t\t\t\tused to determine if p is a resource or a resource_req\n * @param[in,out]\tbuf\t-\tbuffer to copy string into\n * @param[in,out]\tbufsize\t-\tsize of buffer\n * @param[in]\tflags\t-\tflags to control printing\n *\t\t\t\t\t\t\tPRINT_INT_CONST - print internal constant names\n *\t\t\t\t\t\t\tNOEXPAND - don't expand the buffer, just fill up to the max size\n *\n * @return\tchar *\n * @retval\tthe resource in string format (in buf)\n * @retval\t\"\"  on error\n */\nchar *\nres_to_str_re(void *p, enum resource_fields fld, char **buf,\n\t      int *bufsize, unsigned int flags)\n{\n\tschd_resource *res = NULL;\n\tresource_req *req = NULL;\n\tstruct resource_type *rt;\n\tchar *str;\n\tint free_str = 0;\n\tsch_resource_t amount;\n\n\tchar localbuf[1024];\n\tchar *ret;\n\tresource_type rtype;\n\n\tif (buf == NULL || bufsize == NULL)\n\t\treturn const_cast<char *>(\"\");\n\n\tif (*bufsize > 0)\n\t\t**buf = '\\0';\n\n\tif (p == NULL)\n\t\treturn const_cast<char *>(\"\");\n\n\tret = *buf;\n\n\tif (*bufsize <= 0) {\n\t\tif (!(flags & NOEXPAND)) {\n\t\t\tif ((*buf = static_cast<char *>(malloc(1024))) == NULL) {\n\t\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\t\treturn const_cast<char *>(\"\");\n\t\t\t} else\n\t\t\t\t*bufsize = 1024;\n\t\t}\n\t}\n\n\t/* empty string */\n\t**buf = '\\0';\n\n\tswitch (fld) {\n\t\tcase RF_REQUEST:\n\t\t\treq = static_cast<resource_req *>(p);\n\t\t\trt = &(req->type);\n\t\t\tstr = req->res_str;\n\t\t\tamount = req->amount;\n\t\t\tbreak;\n\n\t\tcase RF_DIRECT_AVAIL:\n\t\t\tres = static_cast<schd_resource *>(p);\n\t\t\tif (res->indirect_res != NULL) {\n\t\t\t\tif (flags & NOEXPAND)\n\t\t\t\t\tsnprintf(*buf, *bufsize, \"@\");\n\t\t\t\telse {\n\t\t\t\t\tret = pbs_strcat(buf, bufsize, \"@\");\n\t\t\t\t\tif (ret == NULL)\n\t\t\t\t\t\treturn const_cast<char *>(\"\");\n\t\t\t\t}\n\n\t\t\t\tstr = res->indirect_vnode_name;\n\t\t\t\trtype.is_string = 1;\n\t\t\t\trtype.is_non_consumable = 1;\n\t\t\t\trt = &rtype;\n\t\t\t\tamount = 
0;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\t/* if not indirect, fall through normally */\n\t\tcase RF_AVAIL:\n\t\t\tres = static_cast<schd_resource *>(p);\n\t\t\tif (res->indirect_res != NULL)\n\t\t\t\tres = res->indirect_res;\n\t\t\trt = &(res->type);\n\t\t\tstr = string_array_to_str(res->str_avail);\n\t\t\tif (str == NULL)\n\t\t\t\tstr = const_cast<char *>(\"\");\n\t\t\telse\n\t\t\t\tfree_str = 1;\n\t\t\tamount = res->avail;\n\t\t\tbreak;\n\n\t\tcase RF_ASSN:\n\t\t\tres = static_cast<schd_resource *>(p);\n\t\t\trt = &(res->type);\n\t\t\tstr = res->str_assigned;\n\t\t\tamount = res->assigned;\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\treturn const_cast<char *>(\"\");\n\t}\n\n\t/* convert the value to a string based on the resource type */\n\tif (rt->is_string) {\n\t\tif (flags & NOEXPAND)\n\t\t\tsnprintf(*buf, *bufsize, \"%s\", str);\n\t\telse\n\t\t\tret = pbs_strcat(buf, bufsize, str);\n\t} else if (rt->is_boolean) {\n\t\tif (flags & NOEXPAND)\n\t\t\tsnprintf(*buf, *bufsize, \"%s\", amount ? ATR_TRUE : ATR_FALSE);\n\t\telse\n\t\t\tret = pbs_strcat(buf, bufsize, amount ? 
ATR_TRUE : ATR_FALSE);\n\t} else if (rt->is_size) {\n\t\tif (amount == 0) /* need to special case 0 or it falls into tb case */\n\t\t\tsnprintf(localbuf, sizeof(localbuf), \"0kb\");\n\t\telse if (((long) amount % TERATOKILO) == 0)\n\t\t\tsnprintf(localbuf, sizeof(localbuf), \"%ldtb\", (long) (amount / TERATOKILO));\n\t\telse if (((long) amount % GIGATOKILO) == 0)\n\t\t\tsnprintf(localbuf, sizeof(localbuf), \"%ldgb\", (long) (amount / GIGATOKILO));\n\t\telse if (((long) amount % MEGATOKILO) == 0)\n\t\t\tsnprintf(localbuf, sizeof(localbuf), \"%ldmb\", (long) (amount / MEGATOKILO));\n\t\telse\n\t\t\tsnprintf(localbuf, sizeof(localbuf), \"%ldkb\", (long) amount);\n\t\tif (flags & NOEXPAND)\n\t\t\tsnprintf(*buf, *bufsize, \"%s\", localbuf);\n\t\telse\n\t\t\tret = pbs_strcat(buf, bufsize, localbuf);\n\t} else if (rt->is_num) {\n\t\tint const_print = 0;\n\t\tif (amount == UNSPECIFIED_RES) {\n\t\t\tif (flags & PRINT_INT_CONST) {\n\t\t\t\tif (flags & NOEXPAND)\n\t\t\t\t\tsnprintf(*buf, *bufsize, UNSPECIFIED_STR);\n\t\t\t\telse\n\t\t\t\t\tret = pbs_strcat(buf, bufsize, UNSPECIFIED_STR);\n\n\t\t\t\tconst_print = 1;\n\t\t\t}\n\t\t} else if (amount == SCHD_INFINITY_RES) {\n\t\t\tif (flags & PRINT_INT_CONST) {\n\t\t\t\tif (flags & NOEXPAND)\n\t\t\t\t\tsnprintf(*buf, *bufsize, SCHD_INFINITY_STR);\n\t\t\t\telse\n\t\t\t\t\tret = pbs_strcat(buf, bufsize, SCHD_INFINITY_STR);\n\n\t\t\t\tconst_print = 1;\n\t\t\t}\n\t\t}\n\n\t\tif (const_print == 0) {\n\t\t\tif (rt->is_float)\n\t\t\t\tsnprintf(localbuf, sizeof(localbuf), \"%.*f\",\n\t\t\t\t\t float_digits(amount, FLOAT_NUM_DIGITS), (double) amount);\n\t\t\telse\n\t\t\t\tsnprintf(localbuf, sizeof(localbuf), \"%ld\", (long) amount);\n\t\t\tif (flags & NOEXPAND)\n\t\t\t\tsnprintf(*buf, *bufsize, \"%s\", localbuf);\n\t\t\telse\n\t\t\t\tret = pbs_strcat(buf, bufsize, localbuf);\n\t\t}\n\t} else if (rt->is_time) {\n\t\tchar resbuf[1024];\n\t\tconvert_duration_to_str((long) amount, resbuf, sizeof(resbuf));\n\t\t/* copy the converted duration into the output buffer; previously it was discarded */\n\t\tif (flags & NOEXPAND)\n\t\t\tsnprintf(*buf, *bufsize, \"%s\", resbuf);\n\t\telse\n\t\t\tret = pbs_strcat(buf, bufsize, resbuf);\n\t}\n\n\tif 
(free_str)\n\t\tfree(str);\n\n\tif (ret == NULL)\n\t\treturn const_cast<char *>(\"\");\n\treturn *buf;\n}\n\n/*\n * @brief   helper function to free an array of pointers\n *\n * @param[in] inp - array of pointers\n * @return void\n */\nvoid\nfree_ptr_array(void *inp)\n{\n\tint i;\n\tvoid **arr;\n\n\tif (inp == NULL)\n\t\treturn;\n\n\tarr = (void **) inp;\n\n\tfor (i = 0; arr[i] != NULL; i++)\n\t\tfree(arr[i]);\n\tfree(arr);\n}\n\n/**\n *\n *\t@brief break apart a comma delimited string into an array of strings.\n *\t       It's an overloaded function of break_comma_list in libutils\n *\n *\t@param[in] strlist - the comma delimited string\n *\n *\t@return std::vector<std::string>\n *\n */\nstd::vector<std::string>\nbreak_comma_list(const std::string &strlist)\n{\n\tstd::stringstream sstream(strlist);\n\tstd::string str;\n\tstd::vector<std::string> ret;\n\twhile (std::getline(sstream, str, ','))\n\t\tret.push_back(str);\n\treturn ret;\n}\n"
  },
  {
    "path": "src/scheduler/misc.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _MISC_H\n#define _MISC_H\n\n#include <string>\n\n#include \"data_types.h\"\n#include \"server_info.h\"\n#include \"queue_info.h\"\n#include \"job_info.h\"\n#include \"sched_cmds.h\"\n\n/*\n *\tstring_dup - duplicate a string\n */\nchar *string_dup(const char *str);\n\n/*\n *      res_to_num - convert a resource string to an integer in the lowest\n *                      form of resource on the machine (byte/word)\n *                      example: 1kb -> 1024 or 1kw -> 1024\n */\nsch_resource_t res_to_num(const char *res_str, struct resource_type *type);\n\n/*\n *      skip_line - find if the line of the config file needs to be skipped\n *                  due to it being a comment or other means\n */\nint skip_line(char *line);\n\n/*\n *      schdlogerr - combination of log_event() and translate_fail_code()\n *                   If we're actually going to log a message, translate\n *                   err into a message and then log it.  The translated\n *                   error will be printed after the message\n */\nvoid\nschdlogerr(int event, int event_class, int sev, const std::string &name, const char *text,\n\t   schd_error *err);\n\n/*\n *\n *      take a generic NULL terminated pointer array and return a\n *      filtered specialized array based on calling filter_func() on every\n *      member.  
This can be used with any standard scheduler array\n *\tlike resource_resv or node_info or resdef\n *\n *      filter_func prototype: int func( void *, void * )\n *                                       object   arg\n *\t\t  object - specific member of ptrarr[]\n *\t\t  arg    - arg parameter\n *              - returns 1: ptr will be added to filtered array\n *              - returns 0: ptr will NOT be added to filtered array\n */\nvoid **\nfilter_array(void **ptrarr, int (*filter_func)(void *, void *),\n\t     void *arg, int flags);\n\n/**\n * \tcalc_time_left_STF - calculate the amount of time left\n *  for minimum duration and maximum duration of a STF resource resv\n *\n */\nint calc_time_left_STF(resource_resv *resresv, sch_resource_t *min_time_left);\n\n/*\n *\n *\tmatch_string_array - match two NULL terminated string arrays\n *\n *\treturns: SA_FULL_MATCH\t\t: full match\n *\t\t SA_SUB_MATCH\t\t: one array is a subset of the other\n *\t\t SA_PARTIAL_MATCH\t: at least one match but not all\n *\t\t SA_NO_MATCH\t\t: no match\n *\n */\nenum match_string_array_ret match_string_array(const char *const *strarr1, const char *const *strarr2);\nenum match_string_array_ret match_string_array(const std::vector<std::string> &strarr1, const std::vector<std::string> &strarr2);\n\n/*\n * convert a string array into a printable string\n */\nchar *string_array_to_str(char **strarr);\n\n/*\n *      calc_time_left - calculate the remaining time of a job\n */\nint calc_time_left(resource_resv *resresv, int use_hard_duration);\n\n/*\n *      cstrcmp - check string compare - compares two strings but doesn't bomb\n *                if either one is null\n */\nint cstrcmp(const char *s1, const char *s2);\n\n/*\n *      is_num - checks to see if the string is a number, size, float\n *               or time in string form\n */\nint is_num(const char *str);\n\n/*\n *\tcount_array - count the number of elements in a NULL terminated array\n *\t\t      of pointers\n */\nint 
count_array(const void *arr);\n\n/*\n *\tdup_array - make a shallow copy of elements in a NULL terminated array of pointers.\n */\nvoid **dup_array(void *ptr);\n\n/*\n *\tremove_ptr_from_array - remove a pointer from a ptr list and move\n *\t\t\t\tthe rest of the pointers up to fill the hole\n *\t\t\t\tPointer array size will not change - an extra\n *\t\t\t\tNULL is added to the end\n *\n *\treturns non-zero if the ptr was successfully removed from the array\n *\t\tzero if the array has not been modified\n */\nint remove_ptr_from_array(void *arr, void *ptr);\n\n/**\n * @brief add pointer to NULL terminated pointer array\n * @param[in] ptr_arr - pointer array to add to\n * @param[in] ptr - pointer to add\n *\n * @return void *\n * @retval pointer array with new element added\n * @retval NULL on error\n */\nvoid *add_ptr_to_array(void *ptr_arr, void *ptr);\n\n/*\n *      is_valid_pbs_name - is str a valid pbs username (POSIX.1 + ' ')\n *                          a valid name is: alpha numeric '-' '_' '.' or ' '\n */\nint is_valid_pbs_name(char *str, int len);\n\n/*\n *\n *      res_to_str - turn a resource (resource/resource_req) into\n *                   a string for printing.\n *      returns the resource in string format.  It is returned in a static\n *              buffer\n *              a null string (\"\") is returned on error\n */\nchar *res_to_str(void *p, enum resource_fields fld);\n\n/*\n *\n *    turn a resource/req into a string for printing (reentrant)\n *\n */\nchar *res_to_str_r(void *p, enum resource_fields fld, char *buf, int bufsize);\n\n/**\n * convert a number that is a resource into a string with a non-expandable\n * buffer. This is useful for size types or scheduler constants\n */\nchar *\nres_to_str_c(sch_resource_t amount, resdef *def, enum resource_fields fld,\n\t     char *buf, int bufsize);\n\n/**\n *\n *    @brief turn a resource (resource/resource_req) into\n *                 a string for printing.  
If the buffer needs expanding, it\n *\t\t   will be expanded based on flags\n * flags:\n * \t\tNOPRINT_INT_CONST - print \"\" instead of internal sched constants\n *\t\tNOEXPAND - don't expand the buffer, just fill to the max size\n */\nchar *\nres_to_str_re(void *p, enum resource_fields fld, char **buf,\n\t      int *bufsize, unsigned int flags);\n\n/*\n * clear schd_error structure for reuse\n */\nvoid clear_schd_error(schd_error *err);\n\n/* schd_error constructor */\nschd_error *new_schd_error(void);\n\n/* schd_error copy constructor */\nschd_error *dup_schd_error(schd_error *oerr);\n\n/* does a shallow copy err = oerr safely moving all argument data to err */\nvoid move_schd_error(schd_error *err, schd_error *oerr);\n\n/* do a deep copy of oerr into err, but don't dup the structure itself. */\nvoid copy_schd_error(schd_error *err, schd_error *oerr);\n\n/* safely set the schd_error arg buffers without worrying about leaking */\nvoid set_schd_error_arg(schd_error *err, enum schd_error_args arg_field, const char *arg);\n\n/* set the status code and error code of a schd_error structure to ensure both are set together  */\nvoid set_schd_error_codes(schd_error *err, enum schd_err_status status_code, enum sched_error_code error_code);\n\n/* schd_error destructor */\nvoid\nfree_schd_error(schd_error *err);\nvoid\nfree_schd_error_list(schd_error *err_list);\n\n/* helper functions to create schd_errors */\nschd_error *\ncreate_schd_error(enum sched_error_code error_code, enum schd_err_status status_code);\nschd_error *\ncreate_schd_error_complex(enum sched_error_code error_code, enum schd_err_status status_code, char *arg1, char *arg2, char *arg3, char *specmsg);\n\n/* add schd_errors to linked list */\nvoid\nadd_err(schd_error **prev_err, schd_error *err);\n\n/*\n * add string to NULL terminated string array\n */\nint\nadd_str_to_array(char ***str_arr, char *str);\n\n/*\n * add a string to a string array only if it is unique\n */\nint\nadd_str_to_unique_array(char ***str_arr, char 
*str);\n\n/*\n * helper function to free an array of pointers\n */\nvoid free_ptr_array(void *inp);\n\nvoid log_eventf(int eventtype, int objclass, int sev, const std::string &objname, const char *fmt, ...);\nvoid log_event(int eventtype, int objclass, int sev, const std::string &objname, const char *text);\n\n/*\n * overloaded break_comma_list function\n */\nstd::vector<std::string> break_comma_list(const std::string &strlist);\n#endif /* _MISC_H */\n"
  },
  {
    "path": "src/scheduler/multi_threading.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <pthread.h>\n#include <errno.h>\n#include <signal.h>\n\n#include \"log.h\"\n#include \"pbs_idx.h\"\n\n#include \"constant.h\"\n#include \"misc.h\"\n#include \"data_types.h\"\n#include \"globals.h\"\n#include \"node_info.h\"\n#include \"queue.h\"\n#include \"fifo.h\"\n#include \"resource_resv.h\"\n#include \"multi_threading.h\"\n\n/**\n * @brief\tcreate the thread id key & set it for the main thread\n *\n * @param\tvoid\n *\n * @return\tvoid\n */\nstatic void\ncreate_id_key(void)\n{\n\tint *mainid;\n\n\tmainid = static_cast<int *>(malloc(sizeof(int)));\n\tif (mainid == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn;\n\t}\n\t*mainid = 0;\n\n\tpthread_key_create(&th_id_key, free);\n\tpthread_setspecific(th_id_key, (void *) mainid);\n}\n\n/**\n * @brief\tconvenience function to kill worker threads\n *\n * @param\tvoid\n *\n * @return\tvoid\n */\nvoid\nkill_threads(void)\n{\n\tint i;\n\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_REQUEST, LOG_DEBUG,\n\t\t  \"\", \"Killing worker threads\");\n\n\tthreads_die = 1;\n\tpthread_mutex_lock(&work_lock);\n\tpthread_cond_broadcast(&work_cond);\n\tpthread_mutex_unlock(&work_lock);\n\n\t/* Wait for all threads to finish */\n\tfor (i = 0; i < num_threads; i++) {\n\t\tpthread_join(threads[i], 
NULL);\n\t}\n\tpthread_mutex_destroy(&work_lock);\n\tpthread_cond_destroy(&work_cond);\n\tpthread_mutex_destroy(&result_lock);\n\tpthread_cond_destroy(&result_cond);\n\tpthread_mutex_destroy(&general_lock);\n\tfree(threads);\n\tfree_ds_queue(work_queue);\n\tfree_ds_queue(result_queue);\n\tthreads = NULL;\n\tnum_threads = 0;\n\twork_queue = NULL;\n\tresult_queue = NULL;\n}\n\n/**\n * @brief\tinitialize multi-threading\n *\n * @param[in]\tnthreads - number of threads to create, or -1 to use default\n *\n * @return\tint\n * @retval\t1 for success\n * @retval\t0 for malloc error\n */\nint\ninit_multi_threading(int nthreads)\n{\n\tint i;\n\tint num_cores;\n\tpthread_mutexattr_t attr;\n\n\t/* Kill any existing worker threads */\n\tif (num_threads > 1)\n\t\tkill_threads();\n\n\tthreads_die = 0;\n\tif (pthread_cond_init(&work_cond, NULL) != 0) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SCHED, LOG_ERR, __func__,\n\t\t\t  \"pthread_cond_init failed\");\n\t\treturn 0;\n\t}\n\tif (pthread_cond_init(&result_cond, NULL) != 0) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SCHED, LOG_ERR, __func__,\n\t\t\t  \"pthread_cond_init failed\");\n\t\treturn 0;\n\t}\n\n\tif (init_mutex_attr_recursive(&attr) != 0) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SCHED, LOG_ERR, __func__,\n\t\t\t  \"init_mutex_attr_recursive failed\");\n\t\treturn 0;\n\t}\n\n\tpthread_mutex_init(&work_lock, &attr);\n\tpthread_mutex_init(&result_lock, &attr);\n\tpthread_mutex_init(&general_lock, &attr);\n\n\tnum_cores = sysconf(_SC_NPROCESSORS_ONLN);\n\tif (nthreads < 1 && num_cores > 2)\n\t\t/* Create as many threads as half the number of cores */\n\t\tnum_threads = num_cores / 2;\n\telse\n\t\tnum_threads = nthreads;\n\n\tif (num_threads <= 1) {\n\t\tnum_threads = 1;\n\t\tpthread_once(&key_once, create_id_key);\n\t\treturn 1; /* main thread will act as the only worker thread */\n\t}\n\n\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_REQUEST, LOG_DEBUG,\n\t\t   \"\", \"Launching %d worker threads\", 
num_threads);\n\n\tthreads = static_cast<pthread_t *>(malloc(num_threads * sizeof(pthread_t)));\n\tif (threads == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn 0;\n\t}\n\n\t/* Create task and result queues */\n\twork_queue = new_ds_queue();\n\tif (work_queue == NULL) {\n\t\tfree(threads);\n\t\treturn 0;\n\t}\n\tresult_queue = new_ds_queue();\n\tif (result_queue == NULL) {\n\t\tfree(threads);\n\t\tfree_ds_queue(work_queue);\n\t\twork_queue = NULL;\n\t\treturn 0;\n\t}\n\n\tpthread_once(&key_once, create_id_key);\n\tfor (i = 0; i < num_threads; i++) {\n\t\tint *thid;\n\n\t\tthid = static_cast<int *>(malloc(sizeof(int)));\n\t\tif (thid == NULL) {\n\t\t\tfree(threads);\n\t\t\tfree_ds_queue(work_queue);\n\t\t\tfree_ds_queue(result_queue);\n\t\t\twork_queue = NULL;\n\t\t\tresult_queue = NULL;\n\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\treturn 0;\n\t\t}\n\t\t*thid = i + 1;\n\t\tpthread_create(&(threads[i]), NULL, &worker, (void *) thid);\n\t}\n\n\treturn 1;\n}\n\n/**\n * @brief\tMain pthread routine for worker threads\n *\n * @param[in]\ttid  - thread id of the thread\n *\n * @return void\n */\nvoid *\nworker(void *tid)\n{\n\tth_task_info *work = NULL;\n\tsigset_t set;\n\tint ntid;\n\tchar buf[1024];\n\n\tpthread_setspecific(th_id_key, tid);\n\tntid = *(int *) tid;\n\n\t/* Add HUP to the list of signals to block, if we ever unblock this, we'll need to modify 'restart()' to handle MT */\n\tsigemptyset(&set);\n\tsigaddset(&set, SIGHUP);\n\n\tif (pthread_sigmask(SIG_BLOCK, &set, NULL) != 0) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SCHED, LOG_ERR, __func__,\n\t\t\t  \"pthread_sigmask failed\");\n\t\tpthread_exit(NULL);\n\t}\n\n\twhile (!threads_die) {\n\t\t/* Get the next work task from work queue */\n\t\tpthread_mutex_lock(&work_lock);\n\t\twhile (ds_queue_is_empty(work_queue) && !threads_die) {\n\t\t\tpthread_cond_wait(&work_cond, &work_lock);\n\t\t}\n\t\twork = static_cast<th_task_info 
*>(ds_dequeue(work_queue));\n\t\tpthread_mutex_unlock(&work_lock);\n\n\t\t/* find out what task we need to do */\n\t\tif (work != NULL) {\n\t\t\tswitch (work->task_type) {\n\t\t\t\tcase TS_IS_ND_ELIGIBLE:\n\t\t\t\t\tsnprintf(buf, sizeof(buf), \"Thread %d calling check_node_eligibility_chunk()\", ntid);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SCHED, LOG_DEBUG, __func__, buf);\n\t\t\t\t\tcheck_node_eligibility_chunk(static_cast<th_data_nd_eligible *>(work->thread_data));\n\t\t\t\t\tbreak;\n\t\t\t\tcase TS_DUP_ND_INFO:\n\t\t\t\t\tsnprintf(buf, sizeof(buf), \"Thread %d calling dup_node_info_chunk()\", ntid);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SCHED, LOG_DEBUG, __func__, buf);\n\t\t\t\t\tdup_node_info_chunk(static_cast<th_data_dup_nd_info *>(work->thread_data));\n\t\t\t\t\tbreak;\n\t\t\t\tcase TS_QUERY_ND_INFO:\n\t\t\t\t\tsnprintf(buf, sizeof(buf), \"Thread %d calling query_node_info_chunk()\", ntid);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SCHED, LOG_DEBUG, __func__, buf);\n\t\t\t\t\tquery_node_info_chunk(static_cast<th_data_query_ninfo *>(work->thread_data));\n\t\t\t\t\tbreak;\n\t\t\t\tcase TS_FREE_ND_INFO:\n\t\t\t\t\tsnprintf(buf, sizeof(buf), \"Thread %d calling free_node_info_chunk()\", ntid);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SCHED, LOG_DEBUG, __func__, buf);\n\t\t\t\t\tfree_node_info_chunk(static_cast<th_data_free_ninfo *>(work->thread_data));\n\t\t\t\t\tbreak;\n\t\t\t\tcase TS_DUP_RESRESV:\n\t\t\t\t\tsnprintf(buf, sizeof(buf), \"Thread %d calling dup_resource_resv_array_chunk()\", ntid);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SCHED, LOG_DEBUG, __func__, buf);\n\t\t\t\t\tdup_resource_resv_array_chunk(static_cast<th_data_dup_resresv *>(work->thread_data));\n\t\t\t\t\tbreak;\n\t\t\t\tcase TS_QUERY_JOB_INFO:\n\t\t\t\t\tsnprintf(buf, sizeof(buf), \"Thread %d calling query_jobs_chunk()\", ntid);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SCHED, LOG_DEBUG, __func__, 
buf);\n\t\t\t\t\tquery_jobs_chunk(static_cast<th_data_query_jinfo *>(work->thread_data));\n\t\t\t\t\tbreak;\n\t\t\t\tcase TS_FREE_RESRESV:\n\t\t\t\t\tsnprintf(buf, sizeof(buf), \"Thread %d calling free_resource_resv_array_chunk()\", ntid);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SCHED, LOG_DEBUG, __func__, buf);\n\t\t\t\t\tfree_resource_resv_array_chunk(static_cast<th_data_free_resresv *>(work->thread_data));\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SCHED, LOG_ERR, __func__,\n\t\t\t\t\t\t  \"Invalid task type passed to worker thread\");\n\t\t\t}\n\n\t\t\t/* Post results */\n\t\t\tpthread_mutex_lock(&result_lock);\n\t\t\tds_enqueue(result_queue, (void *) work);\n\t\t\tpthread_cond_signal(&result_cond);\n\t\t\tpthread_mutex_unlock(&result_lock);\n\t\t}\n\t}\n\n\tpthread_exit(NULL);\n}\n\n/**\n * @brief\tConvenience function to queue up work for worker threads\n *\n * @param[in]\ttask - the task to queue up\n *\n * @return void\n */\nvoid\nqueue_work_for_threads(th_task_info *task)\n{\n\tpthread_mutex_lock(&work_lock);\n\tds_enqueue(work_queue, (void *) task);\n\tpthread_cond_signal(&work_cond);\n\tpthread_mutex_unlock(&work_lock);\n}\n"
  },
  {
    "path": "src/scheduler/multi_threading.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef SRC_SCHEDULER_MULTI_THREADING_H_\n#define SRC_SCHEDULER_MULTI_THREADING_H_\n\n#include \"data_types.h\"\n\n#define MT_CHUNK_SIZE_MIN 1024\n#define MT_CHUNK_SIZE_MAX 8192\n\nint init_multi_threading(int nthreads);\nvoid kill_threads(void);\nvoid *worker(void *);\nvoid queue_work_for_threads(th_task_info *task);\n\n#endif /* SRC_SCHEDULER_MULTI_THREADING_H_ */\n"
  },
  {
    "path": "src/scheduler/node_info.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    node_info.cpp\n *\n * @brief\n * \t\tnode_info.cpp - This file contains functions related to the node_info structure.\n *\n * Functions included are:\n * \tquery_nodes()\n * \tquery_node_info()\n * \tfree_nodes()\n * \tset_node_info_state()\n * \tremove_node_state()\n * \tadd_node_state()\n * \tnode_filter()\n * \tfind_node_info()\n * \tfind_node_by_host()\n * \tdup_nodes()\n * \tdup_node_info()\n * \tcopy_node_ptr_array()\n * \tcollect_resvs_on_nodes()\n * \tcollect_jobs_on_nodes()\n * \tupdate_node_on_run()\n * \tupdate_node_on_end()\n * \tcheck_rescspec()\n * \tsearch_for_rescspec()\n * \tdup_nspec()\n * \tfree_nspecs()\n * \tfind_nspec()\n * \tfind_nspec_by_rank()\n * \teval_selspec()\n * \teval_placement()\n * \teval_complex_selspec()\n * \teval_simple_selspec()\n * \tis_vnode_eligible()\n * \tis_vnode_eligible_chunk()\n * \tresources_avail_on_vnode()\n * \tcheck_resources_for_node()\n * \tparse_placespec()\n * \tparse_selspec()\n * \tcreate_execvnode()\n * \tparse_execvnode()\n * \tnode_state_to_str()\n * \tcombine_nspec_array()\n * \tcreate_node_array_from_nspec()\n * \treorder_nodes()\n * \treorder_nodes_set()\n * \tok_break_chunk()\n * \tis_excl()\n * \talloc_rest_nodepart()\n * \tcan_fit_on_vnode()\n * \tis_provisionable()\n * \tnode_up_event()\n * \tnode_down_event()\n * \tcreate_node_array_from_str()\n * 
\tfind_node_by_rank()\n * \tnew_node_scratch()\n * \tfree_node_scratch()\n * \tsim_exclhost()\n * \tsim_exclhost_func()\n * \tset_current_aoe()\n * \tis_exclhost()\n * \tcheck_node_array_eligibility()\n * \tis_powerok()\n * \tis_eoe_avail_on_vnode()\n * \tset_current_eoe()\n *\n */\n\n#include <unordered_map>\n\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <math.h>\n#include <ctype.h>\n#include <sys/types.h>\n#include <errno.h>\n#include <math.h>\n#include <errno.h>\n#include <time.h>\n#include <pbs_ifl.h>\n#include <log.h>\n#include <grunt.h>\n#include <libutil.h>\n#include <pbs_internal.h>\n#include \"attribute.h\"\n#include \"node_info.h\"\n#include \"server_info.h\"\n#include \"job_info.h\"\n#include \"misc.h\"\n#include \"globals.h\"\n#include \"check.h\"\n#include \"constant.h\"\n#include \"config.h\"\n#include \"resource_resv.h\"\n#include \"simulate.h\"\n#include \"sort.h\"\n#include \"node_partition.h\"\n#include \"resource.h\"\n#include \"pbs_internal.h\"\n#include \"server_info.h\"\n#include \"pbs_share.h\"\n#include \"pbs_bitmap.h\"\n#include \"pbs_license.h\"\n#include \"multi_threading.h\"\n#ifdef NAS\n#include \"site_code.h\"\n#endif\n\n/* name of the last node a job ran on - used in smp_dist = round robin */\nstatic char last_node_name[PBS_MAXSVRJOBID];\n\nvoid\nquery_node_info_chunk(th_data_query_ninfo *data)\n{\n\tstruct batch_status *nodes;\n\tstruct batch_status *cur_node;\n\tnode_info **ninfo_arr;\n\tserver_info *sinfo;\n\tnode_info *ninfo;\n\tint i;\n\tint nidx;\n\tint start;\n\tint end;\n\tint num_nodes_chunk;\n\n\tnodes = data->nodes;\n\tsinfo = data->sinfo;\n\tstart = data->sidx;\n\tend = data->eidx;\n\tnum_nodes_chunk = end - start + 1;\n\n\tif ((ninfo_arr = static_cast<node_info **>(malloc((num_nodes_chunk + 1) * sizeof(node_info *)))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\tdata->error = 1;\n\t\treturn;\n\t}\n\tninfo_arr[0] = NULL;\n\n\t/* Move to the linked list 
item corresponding to the 'start' index */\n\tfor (cur_node = nodes, i = 0; i < start && cur_node != NULL; cur_node = cur_node->next, i++)\n\t\t;\n\n\tfor (i = start, nidx = 0; i <= end && cur_node != NULL; cur_node = cur_node->next, i++) {\n\t\t/* get node info from the batch_status */\n\t\tif ((ninfo = query_node_info(cur_node, sinfo)) == NULL) {\n\t\t\tfree_nodes(ninfo_arr);\n\t\t\tdata->error = 1;\n\t\t\treturn;\n\t\t}\n\n\t\tif (node_in_partition(ninfo, sc_attrs.partition)) {\n\t\t\tninfo_arr[nidx++] = ninfo;\n\t\t} else\n\t\t\tdelete ninfo;\n\t}\n\tninfo_arr[nidx] = NULL;\n\n\tdata->oarr = ninfo_arr;\n}\n\n/**\n * @brief\tAllocates th_data_query_ninfo for multi-threading of query_nodes\n *\n * @param[in]\tnodes\t-\tbatch_status of nodes queried from server\n * @param[in]\tsinfo\t-\tserver information\n * @param[in]\tsidx\t-\tstart index for the nodes list for the thread\n * @param[in]\teidx\t-\tend index for the nodes list for the thread\n *\n * @return th_data_query_ninfo *\n * @retval a newly allocated th_data_query_ninfo object\n * @retval NULL for malloc error\n */\nstatic inline th_data_query_ninfo *\nalloc_tdata_nd_query(struct batch_status *nodes, server_info *sinfo, int sidx, int eidx)\n{\n\tth_data_query_ninfo *tdata;\n\n\ttdata = static_cast<th_data_query_ninfo *>(malloc(sizeof(th_data_query_ninfo)));\n\tif (tdata == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\ttdata->error = 0;\n\ttdata->nodes = nodes;\n\ttdata->oarr = NULL; /* Will be filled by the thread routine */\n\ttdata->sinfo = sinfo;\n\ttdata->sidx = sidx;\n\ttdata->eidx = eidx;\n\n\treturn tdata;\n}\n\n/**\n * @brief\n *      query_nodes - query all the nodes associated with a server\n *\n * @param[in]\tpbs_sd\t-\tcommunication descriptor with the pbs server\n * @param[in,out]\tsinfo\t-\tserver information\n *\n * @return\tarray of nodes associated with server\n *\n */\nnode_info **\nquery_nodes(int pbs_sd, server_info *sinfo)\n{\n\tstruct batch_status *nodes; 
   /* nodes returned from the server */\n\tstruct batch_status *cur_node; /* used to cycle through nodes */\n\tnode_info **ninfo_arr;\t       /* array of nodes for scheduler's use */\n\tint num_nodes = 0;\t       /* the number of nodes */\n\tint nidx = 0;\n\tstatic struct attrl *attrib = NULL;\n\tth_data_query_ninfo *tdata = NULL;\n\tth_task_info *task = NULL;\n\tnode_info ***ninfo_arrs_tasks = NULL;\n\tint tid;\n\n\tif (attrib == NULL) {\n\t\tconst char *nodeattrs[] = {\n\t\t\tATTR_NODE_state,\n\t\t\tATTR_NODE_Mom,\n\t\t\tATTR_NODE_Port,\n\t\t\tATTR_partition,\n\t\t\tATTR_NODE_jobs,\n\t\t\tATTR_NODE_ntype,\n\t\t\tATTR_maxrun,\n\t\t\tATTR_maxuserrun,\n\t\t\tATTR_maxgrprun,\n\t\t\tATTR_queue,\n\t\t\tATTR_p,\n\t\t\tATTR_NODE_Sharing,\n\t\t\tATTR_NODE_License,\n\t\t\tATTR_rescavail,\n\t\t\tATTR_rescassn,\n\t\t\tATTR_NODE_NoMultiNode,\n\t\t\tATTR_ResvEnable,\n\t\t\tATTR_NODE_ProvisionEnable,\n\t\t\tATTR_NODE_current_aoe,\n\t\t\tATTR_NODE_power_provisioning,\n\t\t\tATTR_NODE_current_eoe,\n\t\t\tATTR_NODE_in_multivnode_host,\n\t\t\tATTR_NODE_last_state_change_time,\n\t\t\tATTR_NODE_last_used_time,\n\t\t\tATTR_NODE_resvs,\n\t\t\tNULL};\n\n\t\tfor (int i = 0; nodeattrs[i] != NULL; i++) {\n\t\t\tstruct attrl *temp_attrl;\n\n\t\t\ttemp_attrl = new_attrl();\n\t\t\ttemp_attrl->name = strdup(nodeattrs[i]);\n\t\t\ttemp_attrl->next = attrib;\n\t\t\ttemp_attrl->value = const_cast<char *>(\"\");\n\t\t\tattrib = temp_attrl;\n\t\t}\n\t}\n\n\t/* get nodes from PBS server */\n\tif ((nodes = send_statvnode(pbs_sd, NULL, attrib, NULL)) == NULL) {\n\t\tauto err = pbs_geterrmsg(pbs_sd);\n\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_NODE, LOG_INFO, \"\", \"Error getting nodes: %s\", err);\n\t\treturn NULL;\n\t}\n\n\tcur_node = nodes;\n\twhile (cur_node != NULL) {\n\t\tnum_nodes++;\n\t\tcur_node = cur_node->next;\n\t}\n\n\ttid = *((int *) pthread_getspecific(th_id_key));\n\tif (tid != 0 || num_threads <= 1) {\n\t\t/* don't use multi-threading if I am a worker thread or num_threads is 1 
*/\n\t\ttdata = alloc_tdata_nd_query(nodes, sinfo, 0, num_nodes - 1);\n\t\tif (tdata == NULL) {\n\t\t\tpbs_statfree(nodes);\n\t\t\treturn NULL;\n\t\t}\n\t\tquery_node_info_chunk(tdata);\n\t\tninfo_arr = tdata->oarr;\n\t\tfree(tdata);\n\n\t\tfor (nidx = 0; ninfo_arr[nidx] != NULL; nidx++)\n\t\t\tninfo_arr[nidx]->rank = get_sched_rank();\n\n\t\tninfo_arr[nidx] = NULL;\n\t} else {\n\t\tint chunk_size = num_nodes / num_threads;\n\t\tint th_err = 0;\n\t\tint j;\n\t\tint num_tasks;\n\t\tif ((ninfo_arr = static_cast<node_info **>(malloc((num_nodes + 1) * sizeof(node_info *)))) == NULL) {\n\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\tpbs_statfree(nodes);\n\t\t\treturn NULL;\n\t\t}\n\t\tninfo_arr[0] = NULL;\n\t\tchunk_size = (chunk_size > MT_CHUNK_SIZE_MIN) ? chunk_size : MT_CHUNK_SIZE_MIN;\n\t\tfor (j = 0, num_tasks = 0; num_nodes > 0;\n\t\t     j += chunk_size, num_tasks++, num_nodes -= chunk_size) {\n\t\t\ttdata = alloc_tdata_nd_query(nodes, sinfo, j, j + chunk_size - 1);\n\t\t\tif (tdata == NULL) {\n\t\t\t\tth_err = 1;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\ttask = static_cast<th_task_info *>(malloc(sizeof(th_task_info)));\n\t\t\tif (task == NULL) {\n\t\t\t\tfree(tdata);\n\t\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\t\tth_err = 1;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\ttask->task_id = num_tasks;\n\t\t\ttask->task_type = TS_QUERY_ND_INFO;\n\t\t\ttask->thread_data = (void *) tdata;\n\n\t\t\tqueue_work_for_threads(task);\n\t\t}\n\t\tninfo_arrs_tasks = static_cast<node_info ***>(malloc(num_tasks * sizeof(node_info **)));\n\t\tif (ninfo_arrs_tasks == NULL) {\n\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\tth_err = 1;\n\t\t}\n\t\t/* Get results from worker threads */\n\t\tfor (int i = 0; i < num_tasks;) {\n\t\t\tpthread_mutex_lock(&result_lock);\n\t\t\twhile (ds_queue_is_empty(result_queue))\n\t\t\t\tpthread_cond_wait(&result_cond, &result_lock);\n\t\t\twhile (!ds_queue_is_empty(result_queue)) {\n\t\t\t\ttask = static_cast<th_task_info 
*>(ds_dequeue(result_queue));\n\t\t\t\ttdata = static_cast<th_data_query_ninfo *>(task->thread_data);\n\t\t\t\tif (tdata->error)\n\t\t\t\t\tth_err = 1;\n\t\t\t\tninfo_arrs_tasks[task->task_id] = tdata->oarr;\n\t\t\t\tfree(tdata);\n\t\t\t\tfree(task);\n\t\t\t\ti++;\n\t\t\t}\n\t\t\tpthread_mutex_unlock(&result_lock);\n\t\t}\n\t\tif (th_err) {\n\t\t\tpbs_statfree(nodes);\n\t\t\tfree_nodes(ninfo_arr);\n\t\t\treturn NULL;\n\t\t}\n\t\t/* Assemble node info objects from various threads into the ninfo_arr */\n\t\tfor (int i = 0; i < num_tasks; i++) {\n\t\t\tif (ninfo_arrs_tasks[i] != NULL) {\n\t\t\t\tnode_info *ninfo;\n\n\t\t\t\tfor (int j = 0; (ninfo = ninfo_arrs_tasks[i][j]) != NULL; j++) {\n\t\t\t\t\tninfo->rank = get_sched_rank();\n\t\t\t\t\tninfo_arr[nidx++] = ninfo;\n\t\t\t\t}\n\t\t\t\tfree(ninfo_arrs_tasks[i]);\n\t\t\t}\n\t\t}\n\t\tninfo_arr[nidx] = NULL;\n\t\tfree(ninfo_arrs_tasks);\n\t}\n\n\tif (nidx == 0) {\n\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_SERVER, LOG_INFO, __func__,\n\t\t\t  \"No nodes found in partitions serviced by scheduler\");\n\t\tpbs_statfree(nodes);\n\t\tfree(ninfo_arr);\n\t\treturn NULL;\n\t}\n\n#ifdef NAS /* localmod 062 */\n\tsite_vnode_inherit(ninfo_arr);\n#endif /* localmod 062 */\n\tresolve_indirect_resources(ninfo_arr);\n\tsinfo->num_nodes = nidx;\n\tpbs_statfree(nodes);\n\treturn ninfo_arr;\n}\n\n/**\n * @brief\n *      query_node_info\t- collect information from a batch_status and\n *      put it in a node_info struct for easier access\n *\n * @param[in]\tnode\t-\ta node returned from a pbs_statvnode() call\n * @param[in,out]\tsinfo\t-\tserver information\n *\n * @return\ta node_info filled with information from node\n *\n */\nnode_info *\nquery_node_info(struct batch_status *node, server_info *sinfo)\n{\n\tnode_info *ninfo;     /* the new node_info */\n\tstruct attrl *attrp;  /* used to cycle though attribute list */\n\tschd_resource *res;   /* used to set resources in res list */\n\tsch_resource_t count; /* used to convert str->num 
*/\n\tchar *endp;\t      /* end pointer for strtol */\n\tint check_expiry = 0;\n\ttime_t expiry = 0;\n\n\tif ((ninfo = new node_info(node->name)) == NULL)\n\t\treturn NULL;\n\n\tattrp = node->attribs;\n\n\tninfo->server = sinfo;\n\n\twhile (attrp != NULL) {\n\t\t/* Node State... i.e. offline down free etc */\n\t\tif (!strcmp(attrp->name, ATTR_NODE_state))\n\t\t\tset_node_info_state(ninfo, attrp->value);\n\n\t\t/* Host name */\n\t\telse if (!strcmp(attrp->name, ATTR_NODE_Mom)) {\n\t\t\tif (ninfo->mom)\n\t\t\t\tfree(ninfo->mom);\n\t\t\tif ((ninfo->mom = string_dup(attrp->value)) == NULL) {\n\t\t\t\tdelete ninfo;\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t} else if (!strcmp(attrp->name, ATTR_partition)) {\n\t\t\tninfo->partition = string_dup(attrp->value);\n\t\t\tif (ninfo->partition == NULL) {\n\t\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\t\tdelete ninfo;\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t} else if (!strcmp(attrp->name, ATTR_NODE_jobs))\n\t\t\tninfo->jobs = break_comma_list(attrp->value);\n\t\telse if (!strcmp(attrp->name, ATTR_maxrun)) {\n\t\t\tcount = strtol(attrp->value, &endp, 10);\n\t\t\tif (*endp == '\\0')\n\t\t\t\tninfo->max_running = count;\n\t\t} else if (!strcmp(attrp->name, ATTR_maxuserrun)) {\n\t\t\tcount = strtol(attrp->value, &endp, 10);\n\t\t\tif (*endp == '\\0')\n\t\t\t\tninfo->max_user_run = count;\n\t\t\tninfo->has_hard_limit = 1;\n\t\t} else if (!strcmp(attrp->name, ATTR_maxgrprun)) {\n\t\t\tcount = strtol(attrp->value, &endp, 10);\n\t\t\tif (*endp == '\\0')\n\t\t\t\tninfo->max_group_run = count;\n\t\t\tninfo->has_hard_limit = 1;\n\t\t} else if (!strcmp(attrp->name, ATTR_queue))\n\t\t\tninfo->queue_name = attrp->value;\n\t\telse if (!strcmp(attrp->name, ATTR_p)) {\n\t\t\tcount = strtol(attrp->value, &endp, 10);\n\t\t\tif (*endp == '\\0')\n\t\t\t\tninfo->priority = count;\n\t\t} else if (!strcmp(attrp->name, ATTR_NODE_Sharing)) {\n\t\t\tninfo->sharing = str_to_vnode_sharing(attrp->value);\n\t\t\tif (ninfo->sharing == VNS_UNSET) 
{\n\t\t\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_NODE, LOG_INFO, ninfo->name,\n\t\t\t\t\t   \"Unknown sharing type: %s using default shared\", attrp->value);\n\t\t\t\tninfo->sharing = VNS_DFLT_SHARED;\n\t\t\t}\n\t\t} else if (!strcmp(attrp->name, ATTR_NODE_License)) {\n\t\t\tswitch (attrp->value[0]) {\n\t\t\t\tcase ND_LIC_TYPE_locked:\n\t\t\t\t\tninfo->lic_lock = 1;\n\t\t\t\t\tbreak;\n\t\t\t\tcase ND_LIC_TYPE_cloud:\n\t\t\t\t\tcheck_expiry = 1;\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_NODE, LOG_INFO,\n\t\t\t\t\t\t   ninfo->name, \"Unknown license type: %c\", attrp->value[0]);\n\t\t\t}\n\t\t} else if (!strcmp(attrp->name, ATTR_rescavail)) {\n\t\t\tif (!strcmp(attrp->resource, ND_RESC_LicSignature)) {\n\t\t\t\texpiry = strtol(attrp->value, &endp, 10);\n\t\t\t}\n\t\t\tres = find_alloc_resource_by_str(ninfo->res, attrp->resource);\n\n\t\t\tif (res != NULL) {\n\t\t\t\tif (ninfo->res == NULL)\n\t\t\t\t\tninfo->res = res;\n\n\t\t\t\tif (set_resource(res, attrp->value, RF_AVAIL) == 0) {\n\t\t\t\t\tdelete ninfo;\n\t\t\t\t\tninfo = NULL;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\n\t\t\t\t/* Round memory off to the nearest megabyte */\n\t\t\t\tif (res->def == allres[\"mem\"])\n\t\t\t\t\tres->avail -= (long) res->avail % 1024;\n#ifdef NAS /* localmod 034 */\n\t\t\t\tsite_set_node_share(ninfo, res);\n#endif /* localmod 034 */\n\t\t\t}\n\t\t} else if (!strcmp(attrp->name, ATTR_rescassn)) {\n\t\t\tres = find_alloc_resource_by_str(ninfo->res, attrp->resource);\n\n\t\t\tif (ninfo->res == NULL)\n\t\t\t\tninfo->res = res;\n\t\t\tif (res != NULL) {\n\t\t\t\tif (set_resource(res, attrp->value, RF_ASSN) == 0) {\n\t\t\t\t\tdelete ninfo;\n\t\t\t\t\tninfo = NULL;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t} else if (!strcmp(attrp->name, ATTR_NODE_NoMultiNode)) {\n\t\t\tif (!strcmp(attrp->value, ATR_TRUE))\n\t\t\t\tninfo->no_multinode_jobs = 1;\n\t\t} else if (!strcmp(attrp->name, ATTR_ResvEnable)) {\n\t\t\tif (!strcmp(attrp->value, 
ATR_TRUE))\n\t\t\t\tninfo->resv_enable = 1;\n\t\t} else if (!strcmp(attrp->name, ATTR_NODE_ProvisionEnable)) {\n\t\t\tif (!strcmp(attrp->value, ATR_TRUE))\n\t\t\t\tninfo->provision_enable = 1;\n\t\t} else if (!strcmp(attrp->name, ATTR_NODE_current_aoe)) {\n\t\t\tif (attrp->value != NULL)\n\t\t\t\tset_current_aoe(ninfo, attrp->value);\n\t\t} else if (!strcmp(attrp->name, ATTR_NODE_power_provisioning)) {\n\t\t\tif (!strcmp(attrp->value, ATR_TRUE))\n\t\t\t\tninfo->power_provisioning = 1;\n\t\t} else if (!strcmp(attrp->name, ATTR_NODE_current_eoe)) {\n\t\t\tif (attrp->value != NULL)\n\t\t\t\tset_current_eoe(ninfo, attrp->value);\n\t\t} else if (!strcmp(attrp->name, ATTR_NODE_in_multivnode_host)) {\n\t\t\tif (attrp->value != NULL) {\n\t\t\t\tcount = strtol(attrp->value, &endp, 10);\n\t\t\t\tif (*endp == '\\0')\n\t\t\t\t\tninfo->is_multivnoded = count;\n\t\t\t\tif ((!sinfo->has_multi_vnode) && (count != 0))\n\t\t\t\t\tsinfo->has_multi_vnode = 1;\n\t\t\t}\n\t\t} else if (!strcmp(attrp->name, ATTR_NODE_last_state_change_time)) {\n\t\t\tcount = strtol(attrp->value, &endp, 10);\n\t\t\tif (*endp == '\\0')\n\t\t\t\tninfo->last_state_change_time = count;\n\t\t} else if (!strcmp(attrp->name, ATTR_NODE_last_used_time)) {\n\t\t\tcount = strtol(attrp->value, &endp, 10);\n\t\t\tif (*endp == '\\0')\n\t\t\t\tninfo->last_used_time = count;\n\t\t} else if (!strcmp(attrp->name, ATTR_NODE_resvs)) {\n\t\t\tninfo->resvs = break_comma_list(attrp->value);\n\t\t}\n\t\tattrp = attrp->next;\n\t}\n\tif (check_expiry) {\n\t\tif (time(NULL) < expiry)\n\t\t\tninfo->lic_lock = 1;\n\t}\n\n\tif (ninfo->lic_lock != 1)\n\t\tninfo->nscr |= NSCR_CYCLE_INELIGIBLE;\n\n\treturn ninfo;\n}\n\n/**\n * @brief\n *\tnode_info constructor\n */\nnode_info::node_info(const std::string &nname) : name(nname)\n{\n\tis_down = 0;\n\tis_free = 0;\n\tis_offline = 0;\n\tis_unknown = 0;\n\tis_exclusive = 0;\n\tis_job_exclusive = 0;\n\tis_resv_exclusive = 0;\n\tis_sharing = 0;\n\tis_busy = 0;\n\tis_job_busy = 0;\n\tis_stale = 
0;\n\tis_maintenance = 0;\n\tis_provisioning = 0;\n\tis_sleeping = 0;\n\tis_multivnoded = 0;\n\thas_ghost_job = 0;\n\n\tlic_lock = 0;\n\n\thas_hard_limit = 0;\n\tno_multinode_jobs = 0;\n\tresv_enable = 0;\n\tprovision_enable = 0;\n\tpower_provisioning = 0;\n\n\tsharing = VNS_DFLT_SHARED;\n\n\tnum_jobs = 0;\n\tnum_run_resv = 0;\n\tnum_susp_jobs = 0;\n\n\tpriority = 0;\n\n\trank = 0;\n\n\tnodesig_ind = -1;\n\n\tmom = NULL;\n\tjobs = NULL;\n\tresvs = NULL;\n\tjob_arr = NULL;\n\trun_resvs_arr = NULL;\n\tres = NULL;\n\tserver = NULL;\n\n\tmax_running = SCHD_INFINITY;\n\tmax_user_run = SCHD_INFINITY;\n\tmax_group_run = SCHD_INFINITY;\n\n\tcurrent_aoe = NULL;\n\tcurrent_eoe = NULL;\n\tnodesig = NULL;\n\tlast_state_change_time = 0;\n\tlast_used_time = 0;\n\n\tsvr_node = NULL;\n\thostset = NULL;\n\n\tnode_events = NULL;\n\tbucket_ind = -1;\n\tnode_ind = -1;\n\n\tnscr = NSCR_NONE;\n\n#ifdef NAS\n\t/* localmod 034 */\n\tsh_type = 0;\n\tsh_cls = 0;\n#endif\n\tpartition = NULL;\n\tnp_arr = NULL;\n}\n\n/**\n * @brief\tpthread routine for freeing up a node_info array\n *\n * @param[in,out]\tdata - th_data_free_ninfo wrapper for the ninfo array\n *\n * @return void\n */\nvoid\nfree_node_info_chunk(th_data_free_ninfo *data)\n{\n\tnode_info **ninfo_arr;\n\tint start;\n\tint end;\n\tint i;\n\n\tninfo_arr = data->ninfo_arr;\n\tstart = data->sidx;\n\tend = data->eidx;\n\n\tfor (i = start; i <= end && ninfo_arr[i] != NULL; i++) {\n\t\tdelete ninfo_arr[i];\n\t}\n}\n\n/**\n * @brief\tAllocates th_data_free_ninfo for multi-threading of free_nodes\n *\n * @param[in,out]\tninfo_arr\t-\tthe node array to free\n * @param[in]\tsidx\t-\tstart index for the nodes array for the thread\n * @param[in]\teidx\t-\tend index for the nodes array for the thread\n *\n * @return th_data_free_ninfo *\n * @retval a newly allocated th_data_free_ninfo object\n * @retval NULL for malloc error\n */\nstatic inline th_data_free_ninfo *\nalloc_tdata_free_nodes(node_info **ninfo_arr, int sidx, int 
eidx)\n{\n\tth_data_free_ninfo *tdata;\n\n\ttdata = static_cast<th_data_free_ninfo *>(malloc(sizeof(th_data_free_ninfo)));\n\tif (tdata == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\ttdata->ninfo_arr = ninfo_arr;\n\ttdata->sidx = sidx;\n\ttdata->eidx = eidx;\n\n\treturn tdata;\n}\n\n/**\n * @brief\n *\t\tfree_nodes - free all the nodes in a node_info array\n *\n * @param[in,out]\tninfo_arr - the node info array\n *\n * @return\tnothing\n *\n */\nvoid\nfree_nodes(node_info **ninfo_arr)\n{\n\tint i;\n\tint chunk_size;\n\tth_data_free_ninfo *tdata = NULL;\n\tth_task_info *task = NULL;\n\tint num_tasks;\n\tint num_nodes;\n\tint tid;\n\n\tif (ninfo_arr == NULL)\n\t\treturn;\n\n\tnum_nodes = count_array(ninfo_arr);\n\n\ttid = *((int *) pthread_getspecific(th_id_key));\n\tif (tid != 0 || num_threads <= 1) {\n\t\t/* don't use multi-threading if I am a worker thread or num_threads is 1 */\n\t\ttdata = alloc_tdata_free_nodes(ninfo_arr, 0, num_nodes - 1);\n\t\tif (tdata == NULL)\n\t\t\treturn;\n\n\t\tfree_node_info_chunk(tdata);\n\t\tfree(tdata);\n\t\tfree(ninfo_arr);\n\t\treturn;\n\t}\n\tchunk_size = num_nodes / num_threads;\n\tchunk_size = (chunk_size > MT_CHUNK_SIZE_MIN) ? 
chunk_size : MT_CHUNK_SIZE_MIN;\n\tfor (i = 0, num_tasks = 0; num_nodes > 0;\n\t     num_tasks++, i += chunk_size, num_nodes -= chunk_size) {\n\t\ttdata = alloc_tdata_free_nodes(ninfo_arr, i, i + chunk_size - 1);\n\t\tif (tdata == NULL)\n\t\t\tbreak;\n\n\t\ttask = static_cast<th_task_info *>(malloc(sizeof(th_task_info)));\n\t\tif (task == NULL) {\n\t\t\tfree(tdata);\n\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\tbreak;\n\t\t}\n\t\ttask->task_type = TS_FREE_ND_INFO;\n\t\ttask->thread_data = (void *) tdata;\n\n\t\tqueue_work_for_threads(task);\n\t}\n\n\t/* Get results from worker threads */\n\tfor (i = 0; i < num_tasks;) {\n\t\tpthread_mutex_lock(&result_lock);\n\t\twhile (ds_queue_is_empty(result_queue))\n\t\t\tpthread_cond_wait(&result_cond, &result_lock);\n\t\twhile (!ds_queue_is_empty(result_queue)) {\n\t\t\ttask = static_cast<th_task_info *>(ds_dequeue(result_queue));\n\t\t\ttdata = static_cast<th_data_free_ninfo *>(task->thread_data);\n\t\t\tfree(tdata);\n\t\t\tfree(task);\n\t\t\ti++;\n\t\t}\n\t\tpthread_mutex_unlock(&result_lock);\n\t}\n\tfree(ninfo_arr);\n}\n\n/**\n * @brief\n *      node_info destructor\n */\nnode_info::~node_info()\n{\n\tfree(mom);\n\tfree_string_array(jobs);\n\tfree_string_array(resvs);\n\tfree(job_arr);\n\tfree(run_resvs_arr);\n\tfree_resource_list(res);\n\tfree_counts_list(group_counts);\n\tfree_counts_list(user_counts);\n\tfree(current_aoe);\n\tfree(current_eoe);\n\tfree(nodesig);\n\tfree_te_list(node_events);\n\tfree(partition);\n\tfree(np_arr);\n}\n\n/**\n * @brief\n * \t\tset the node state info bits from a single or comma separated list of\n * \t\tstates.\n *\n * @param[in]\tninfo\t-\tthe node to set the state\n * @param[in]\tstate\t-\tthe state string from the server\n *\n * @retval\t0\t: on success\n * @retval\t1\t: on failure\n */\nint\nset_node_info_state(node_info *ninfo, const char *state)\n{\n\tif (ninfo != NULL && state != NULL) {\n\t\tchar statebuf[256]; /* used to strtok() node states */\n\t\tchar *tok;\t    /* used 
with strtok() */\n\t\tchar *saveptr;\n\n\t\t/* clear all states */\n\t\tninfo->is_down = ninfo->is_free = ninfo->is_unknown = 0;\n\t\tninfo->is_sharing = ninfo->is_busy = ninfo->is_job_busy = 0;\n\t\tninfo->is_stale = ninfo->is_provisioning = ninfo->is_exclusive = 0;\n\t\tninfo->is_resv_exclusive = ninfo->is_job_exclusive = 0;\n\t\tninfo->is_sleeping = ninfo->is_maintenance = 0;\n\n\t\tstrcpy(statebuf, state);\n\t\ttok = strtok_r(statebuf, \",\", &saveptr);\n\n\t\twhile (tok != NULL) {\n\t\t\twhile (isspace((int) *tok))\n\t\t\t\ttok++;\n\n\t\t\tif (add_node_state(ninfo, tok) == 1)\n\t\t\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_NODE, LOG_INFO,\n\t\t\t\t\t   ninfo->name, \"Unknown Node State: %s\", tok);\n\n\t\t\ttok = strtok_r(NULL, \",\", &saveptr);\n\t\t}\n\t\treturn 0;\n\t}\n\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tRemove a node state\n *\n * @param[in]\tnode\t-\tThe node being considered\n * @param[in]\tstate\t-\tThe state to remove\n *\n * @par Side-Effects:\n * \t\tHandles node exclusivity in a special way.\n * \t\tIf handling resv-exclusive, unset is_exclusive only if job-exclusive isn't\n * \t\tset.\n * \t\tIf handling job-exclusive, unset is_exclusive only if resv-exclusive isn't\n * \t\tset.\n *\n * @retval\t0\t: on success\n * @retval\t1\t: on failure\n */\nint\nremove_node_state(node_info *ninfo, const char *state)\n{\n\tif (ninfo == NULL)\n\t\treturn 1;\n\n\tif (!strcmp(state, ND_down))\n\t\tninfo->is_down = 0;\n\telse if (!strcmp(state, ND_free))\n\t\tninfo->is_free = 0;\n\telse if (!strcmp(state, ND_offline))\n\t\tninfo->is_offline = 0;\n\telse if (!strcmp(state, ND_state_unknown))\n\t\tninfo->is_unknown = 0;\n\telse if (!strcmp(state, ND_job_exclusive)) {\n\t\tninfo->is_job_exclusive = 0;\n\t\tif (ninfo->is_resv_exclusive == 0)\n\t\t\tninfo->is_exclusive = 0;\n\t} else if (!strcmp(state, ND_resv_exclusive)) {\n\t\tninfo->is_resv_exclusive = 0;\n\t\tif (ninfo->is_job_exclusive == 0)\n\t\t\tninfo->is_exclusive = 0;\n\t} else if (!strcmp(state, 
ND_job_sharing))\n\t\tninfo->is_sharing = 0;\n\telse if (!strcmp(state, ND_busy))\n\t\tninfo->is_busy = 0;\n\telse if (!strcmp(state, ND_jobbusy))\n\t\tninfo->is_job_busy = 0;\n\telse if (!strcmp(state, ND_Stale))\n\t\tninfo->is_stale = 0;\n\telse if (!strcmp(state, ND_prov))\n\t\tninfo->is_provisioning = 0;\n\telse if (!strcmp(state, ND_wait_prov))\n\t\tninfo->is_provisioning = 0;\n\telse if (!strcmp(state, ND_maintenance))\n\t\tninfo->is_maintenance = 0;\n\telse {\n\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_NODE, LOG_INFO,\n\t\t\t   ninfo->name, \"Unknown Node State: %s on remove operation\", state);\n\t\treturn 1;\n\t}\n\n\t/* If all state bits are turned off, the node state must be free */\n\tif (!ninfo->is_free && !ninfo->is_busy && !ninfo->is_exclusive && !ninfo->is_job_exclusive && !ninfo->is_resv_exclusive && !ninfo->is_offline && !ninfo->is_job_busy && !ninfo->is_stale && !ninfo->is_provisioning && !ninfo->is_sharing && !ninfo->is_unknown && !ninfo->is_down && !ninfo->is_maintenance)\n\t\tninfo->is_free = 1;\n\n\tif (ninfo->is_free)\n\t\tninfo->nscr &= ~NSCR_CYCLE_INELIGIBLE;\n\telse\n\t\tninfo->nscr |= NSCR_CYCLE_INELIGIBLE;\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tAdd a node state\n *\n * @param[in]\tnode\t-\tThe node being considered\n * @param[in]\tstate\t-\tThe state to add\n *\n * @par Side-Effects:\n * \t\tHandle node exclusivity in a special way.\n * \t\tIf handling resv-exclusive or job-exclusive, turn is_exclusive bit on.\n *\n * @retval\t0\t: on success\n * @retval\t1\t: on failure\n */\nint\nadd_node_state(node_info *ninfo, const char *state)\n{\n\tint set_free = 0;\n\n\tif (ninfo == NULL)\n\t\treturn 1;\n\n\tif (!strcmp(state, ND_down))\n\t\tninfo->is_down = 1;\n\telse if (!strcmp(state, ND_free)) {\n\t\tninfo->is_free = 1;\n\t\tset_free = 1;\n\t} else if (!strcmp(state, ND_offline))\n\t\tninfo->is_offline = 1;\n\telse if (!strcmp(state, ND_state_unknown) || !strcmp(state, ND_unresolvable))\n\t\tninfo->is_unknown = 1;\n\telse if 
(!strcmp(state, ND_job_exclusive)) {\n\t\tninfo->is_job_exclusive = 1;\n\t\tninfo->is_exclusive = 1;\n\t} else if (!strcmp(state, ND_resv_exclusive)) {\n\t\tninfo->is_resv_exclusive = 1;\n\t\tninfo->is_exclusive = 1;\n\t} else if (!strcmp(state, ND_job_sharing))\n\t\tninfo->is_sharing = 1;\n\telse if (!strcmp(state, ND_busy))\n\t\tninfo->is_busy = 1;\n\telse if (!strcmp(state, ND_jobbusy))\n\t\tninfo->is_job_busy = 1;\n\telse if (!strcmp(state, ND_Stale))\n\t\tninfo->is_stale = 1;\n\telse if (!strcmp(state, ND_prov))\n\t\tninfo->is_provisioning = 1;\n\telse if (!strcmp(state, ND_wait_prov))\n\t\tninfo->is_provisioning = 1;\n\telse if (!strcmp(state, ND_maintenance))\n\t\tninfo->is_maintenance = 1;\n\telse if (!strcmp(state, ND_sleep)) {\n\t\tif (ninfo->server->power_provisioning)\n\t\t\tninfo->is_sleeping = 1;\n\t} else {\n\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_NODE, LOG_INFO,\n\t\t\t   ninfo->name, \"Unknown Node State: %s on add operation\", state);\n\t\treturn 1;\n\t}\n\n\t/* Remove the free state unless it was specifically the state being added */\n\tif (!set_free) {\n\t\tninfo->is_free = 0;\n\t\tninfo->nscr |= NSCR_CYCLE_INELIGIBLE;\n\t} else\n\t\tninfo->nscr &= ~NSCR_CYCLE_INELIGIBLE;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tnode_filter - filter a node array and return a new filtered array\n *\n * @param[in]\tnodes\t-\tthe array to filter\n * @param[in]\tsize\t-\tsize of nodes (<0 for function to figure it out)\n * @param[in]\tfilter_func\t-\tpointer to a function that will filter the nodes\n *\t\t\t\t\t\t\t\t- returns 1: node will be added to filtered array\n *\t\t\t\t\t\t\t\t- returns 0: node will NOT be added to filtered array\n * @param[in]\targ - an optional arg passed to filter_func\n * @param[in]\tflags - describe how nodes are filtered\n *\n * @return pointer to filtered array\n *\n * @par\n * filter_func prototype: int func( node_info *, void * )\n *\n */\nnode_info **\nnode_filter(node_info **nodes, int size,\n\t    int 
(*filter_func)(node_info *, void *), void *arg, int flags)\n{\n\tnode_info **new_nodes = NULL; /* the new node array */\n\tnode_info **new_nodes_tmp = NULL;\n\tint i, j;\n\n\tif (size < 0)\n\t\tsize = count_array(nodes);\n\n\tif ((new_nodes = static_cast<node_info **>(malloc((size + 1) * sizeof(node_info *)))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\tfor (i = 0, j = 0; i < size; i++) {\n\t\tif (filter_func(nodes[i], arg)) {\n\t\t\tnew_nodes[j] = nodes[i];\n\t\t\tj++;\n\t\t}\n\t}\n\tnew_nodes[j] = NULL;\n\n\tif (!(flags & FILTER_FULL)) {\n\t\tif ((new_nodes_tmp = static_cast<node_info **>(realloc(new_nodes, (j + 1) * sizeof(node_info *)))) == NULL)\n\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\telse\n\t\t\tnew_nodes = new_nodes_tmp;\n\t}\n\treturn new_nodes;\n}\n\n/**\n * @brief find a node by string\n * @param[in] ninfo_arr - node array to search\n * @param[in] nodename - name of node to search for\n * @return node_info *\n * @retval found node\n * @retval NULL if not found or on error\n */\nnode_info *\nfind_node_info(node_info **ninfo_arr, const std::string &nodename)\n{\n\tint i;\n\n\tif (ninfo_arr == NULL)\n\t\treturn NULL;\n\n\tfor (i = 0; ninfo_arr[i] != NULL && nodename != ninfo_arr[i]->name; i++)\n\t\t;\n\n\treturn ninfo_arr[i];\n}\n\n/**\n * @brief\n *\t\tfind_node_by_host - find a node by its host resource rather than\n *\t\t\t\tits name -- will return first found vnode\n *\n * @param[in]\tninfo_arr\t-\tarray of nodes to search\n * @param[in]\thost\t-\thost of node to find\n *\n * @return\tfound node\n * @retval\tNULL\t: not found\n *\n */\nnode_info *\nfind_node_by_host(node_info **ninfo_arr, char *host)\n{\n\tint i;\n\n\tif (ninfo_arr == NULL || host == NULL)\n\t\treturn NULL;\n\n\tfor (i = 0; ninfo_arr[i] != NULL; i++) {\n\t\tauto res = find_resource(ninfo_arr[i]->res, allres[\"host\"]);\n\t\tif (res != NULL) {\n\t\t\tif (compare_res_to_str(res, host, CMP_CASELESS))\n\t\t\t\tbreak;\n\t\t}\n\t}\n\n\treturn 
ninfo_arr[i];\n}\n\n/**\n * @brief\tpthread routine to dup a chunk of nodes\n *\n * @param[in,out]\tdata - data associated with duping of the nodes\n *\n * @return void\n */\nvoid\ndup_node_info_chunk(th_data_dup_nd_info *data)\n{\n\tint i;\n\tint start;\n\tint end;\n\tnode_info **onodes;\n\tnode_info **nnodes;\n\tserver_info *nsinfo;\n\tunsigned int flags;\n\n\tstart = data->sidx;\n\tend = data->eidx;\n\tonodes = data->onodes;\n\tnnodes = data->nnodes;\n\tnsinfo = data->nsinfo;\n\tdata->error = 0;\n\tflags = data->flags;\n\n\tfor (i = start; i <= end && data->onodes[i] != NULL; i++) {\n\t\tif ((nnodes[i] = dup_node_info(onodes[i], nsinfo, flags)) == NULL) {\n\t\t\tdata->error = 1;\n\t\t\treturn;\n\t\t}\n\t}\n}\n\n/**\n * @brief\tAllocates th_data_dup_nd_info for multi-threading of dup_nodes\n *\n * @param[in]\tflags\t-\tflags passed to dup_nodes\n * @param[in]\tnsinfo\t-\tthe new server\n * @param[in]\tonodes\t-\tthe array to duplicate\n * @param[out]\tnnodes\t-\tthe duplicated array\n * @param[in]\tsidx\t-\tstart index for the nodes list for the thread\n * @param[in]\teidx\t-\tend index for the nodes list for the thread\n *\n * @return th_data_dup_nd_info *\n * @retval a newly allocated th_data_dup_nd_info object\n * @retval NULL for malloc error\n */\nstatic inline th_data_dup_nd_info *\nalloc_tdata_dup_nodes(unsigned int flags, server_info *nsinfo, node_info **onodes, node_info **nnodes,\n\t\t      int sidx, int eidx)\n{\n\tth_data_dup_nd_info *tdata;\n\n\ttdata = static_cast<th_data_dup_nd_info *>(malloc(sizeof(th_data_dup_nd_info)));\n\tif (tdata == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\ttdata->flags = flags;\n\ttdata->nsinfo = nsinfo;\n\ttdata->onodes = onodes;\n\ttdata->nnodes = nnodes;\n\ttdata->sidx = sidx;\n\ttdata->eidx = eidx;\n\n\treturn tdata;\n}\n\n/**\n * @brief\n *\t\tdup_nodes - duplicate an array of nodes\n *\n * @param[in]\tonodes\t-\tthe array to duplicate\n * @param[in]\tnsinfo\t-\tthe new server\n * 
@param[in]\tflags\t-\tDUP_INDIRECT - duplicate\n * \t\t\t\t \t\t\ttarget resources, not indirect\n *\n * @return the duplicated array\n * @retval\tNULL\t: on error\n *\n */\nnode_info **\ndup_nodes(node_info **onodes, server_info *nsinfo, unsigned int flags)\n{\n\tnode_info **nnodes;\n\tint num_nodes;\n\tint thread_node_ct_left;\n\tschd_resource *nres = NULL;\n\tschd_resource *ores = NULL;\n\tschd_resource *tres = NULL;\n\tnode_info *ninfo = NULL;\n\tth_data_dup_nd_info *tdata = NULL;\n\tth_task_info *task = NULL;\n\tint th_err = 0;\n\tint tid;\n\n\tif (onodes == NULL || nsinfo == NULL)\n\t\treturn NULL;\n\n\tnum_nodes = thread_node_ct_left = count_array(onodes);\n\n\tif ((nnodes = static_cast<node_info **>(malloc((num_nodes + 1) * sizeof(node_info *)))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\ttid = *((int *) pthread_getspecific(th_id_key));\n\tif (tid != 0 || num_threads <= 1) {\n\t\t/* don't use multi-threading if I am a worker thread or num_threads is 1 */\n\t\ttdata = alloc_tdata_dup_nodes(flags, nsinfo, onodes, nnodes, 0, num_nodes - 1);\n\t\tif (tdata == NULL) {\n\t\t\tfree_nodes(nnodes);\n\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\treturn NULL;\n\t\t}\n\n\t\tdup_node_info_chunk(tdata);\n\t\tth_err = tdata->error;\n\t\tfree(tdata);\n\t} else { /* We are multithreading */\n\t\tint j;\n\t\tint num_tasks;\n\t\tint chunk_size = num_nodes / num_threads;\n\t\tchunk_size = (chunk_size > MT_CHUNK_SIZE_MIN) ? 
chunk_size : MT_CHUNK_SIZE_MIN;\n\t\tfor (j = 0, num_tasks = 0; thread_node_ct_left > 0;\n\t\t     num_tasks++, j += chunk_size, thread_node_ct_left -= chunk_size) {\n\t\t\ttdata = alloc_tdata_dup_nodes(flags, nsinfo, onodes, nnodes, j, j + chunk_size - 1);\n\t\t\tif (tdata == NULL) {\n\t\t\t\tth_err = 1;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\ttask = static_cast<th_task_info *>(malloc(sizeof(th_task_info)));\n\t\t\tif (task == NULL) {\n\t\t\t\tfree(tdata);\n\t\t\t\tth_err = 1;\n\t\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\ttask->task_type = TS_DUP_ND_INFO;\n\t\t\ttask->thread_data = (void *) tdata;\n\n\t\t\tqueue_work_for_threads(task);\n\t\t}\n\n\t\t/* Get results from worker threads */\n\t\tfor (int i = 0; i < num_tasks;) {\n\t\t\tpthread_mutex_lock(&result_lock);\n\t\t\twhile (ds_queue_is_empty(result_queue))\n\t\t\t\tpthread_cond_wait(&result_cond, &result_lock);\n\t\t\twhile (!ds_queue_is_empty(result_queue)) {\n\t\t\t\ttask = static_cast<th_task_info *>(ds_dequeue(result_queue));\n\t\t\t\ttdata = static_cast<th_data_dup_nd_info *>(task->thread_data);\n\t\t\t\tif (tdata->error)\n\t\t\t\t\tth_err = 1;\n\t\t\t\tfree(tdata);\n\t\t\t\tfree(task);\n\t\t\t\ti++;\n\t\t\t}\n\t\t\tpthread_mutex_unlock(&result_lock);\n\t\t}\n\t}\n\n\tif (th_err) {\n\t\tfree_nodes(nnodes);\n\t\treturn NULL;\n\t}\n\tnnodes[num_nodes] = NULL;\n\n\tif (!(flags & DUP_INDIRECT)) {\n\t\tfor (int i = 0; nnodes[i] != NULL; i++) {\n\t\t\t/* since the node list we're duplicating may have indirect resources\n\t\t\t * which point to resources not in our node list, we need to detect it\n\t\t\t * if we are in this case, we'll redirect them locally\n\t\t\t */\n\t\t\tnres = nnodes[i]->res;\n\t\t\twhile (nres != NULL) {\n\t\t\t\tif (nres->indirect_vnode_name != NULL) {\n\t\t\t\t\tninfo = find_node_info(nnodes, nres->indirect_vnode_name);\n\t\t\t\t\t/* we found the problem -- first time we see it, we set the value\n\t\t\t\t\t * of THIS node to the indirect value.  
We'll then set all the rest\n\t\t\t\t\t * to point to THIS node.\n\t\t\t\t\t */\n\t\t\t\t\tif (ninfo == NULL) {\n\t\t\t\t\t\tninfo = find_node_info(onodes, nnodes[i]->name);\n\t\t\t\t\t\tores = find_resource(ninfo->res, nres->def);\n\t\t\t\t\t\tif (ores->indirect_res != NULL) {\n\t\t\t\t\t\t\tchar namebuf[1024];\n\n\t\t\t\t\t\t\tsprintf(namebuf, \"@%s\", nnodes[i]->name.c_str());\n\t\t\t\t\t\t\tfor (int j = i + 1; nnodes[j] != NULL; j++) {\n\t\t\t\t\t\t\t\ttres = find_resource(nnodes[j]->res, nres->def);\n\t\t\t\t\t\t\t\tif (tres != NULL) {\n\t\t\t\t\t\t\t\t\tif (tres->indirect_vnode_name != NULL &&\n\t\t\t\t\t\t\t\t\t    !strcmp(tres->indirect_vnode_name,\n\t\t\t\t\t\t\t\t\t\t    nres->indirect_vnode_name)) {\n\t\t\t\t\t\t\t\t\t\tif (set_resource(tres, namebuf, RF_AVAIL) == 0) {\n\t\t\t\t\t\t\t\t\t\t\tfree_nodes(nnodes);\n\t\t\t\t\t\t\t\t\t\t\treturn NULL;\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif (set_resource(nres,\n\t\t\t\t\t\t\t\t\t ores->indirect_res->orig_str_avail, RF_AVAIL) == 0) {\n\t\t\t\t\t\t\t\tfree_nodes(nnodes);\n\t\t\t\t\t\t\t\treturn NULL;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tnres->assigned = ores->indirect_res->assigned;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tnres = nres->next;\n\t\t\t}\n\t\t}\n\t}\n\n\tif (resolve_indirect_resources(nnodes) == 0) {\n\t\tfree_nodes(nnodes);\n\t\treturn NULL;\n\t}\n\treturn nnodes;\n}\n\n/**\n * @brief\n *\t\tdup_node_info - duplicate a node by creating a new one and copying all\n *\t\t        the data into the new\n *\n * @param[in]\tonode\t-\tthe node to dup\n * @param[in]\tnsinfo\t-\tthe NEW server (i.e. 
duplicated)\n * @param[in]\tflags\t-\tDUP_INDIRECT - duplicate target resources, not indirect\n *\n * @return\tnewly allocated and duped node\n *\n */\nnode_info *\ndup_node_info(node_info *onode, server_info *nsinfo, unsigned int flags)\n{\n\tnode_info *nnode;\n\n\tif (onode == NULL)\n\t\treturn NULL;\n\n\tif ((nnode = new node_info(onode->name)) == NULL)\n\t\treturn NULL;\n\n\tnnode->server = nsinfo;\n\tnnode->mom = string_dup(onode->mom);\n\tnnode->queue_name = onode->queue_name;\n\n\tnnode->is_down = onode->is_down;\n\tnnode->is_free = onode->is_free;\n\tnnode->is_offline = onode->is_offline;\n\tnnode->is_unknown = onode->is_unknown;\n\tnnode->is_exclusive = onode->is_exclusive;\n\tnnode->is_job_exclusive = onode->is_job_exclusive;\n\tnnode->is_resv_exclusive = onode->is_resv_exclusive;\n\tnnode->is_sharing = onode->is_sharing;\n\tnnode->is_busy = onode->is_busy;\n\tnnode->is_job_busy = onode->is_job_busy;\n\tnnode->is_stale = onode->is_stale;\n\tnnode->is_maintenance = onode->is_maintenance;\n\tnnode->is_provisioning = onode->is_provisioning;\n\tnnode->is_multivnoded = onode->is_multivnoded;\n\n\tnnode->sharing = onode->sharing;\n\n\tnnode->lic_lock = onode->lic_lock;\n\n\tnnode->rank = onode->rank;\n\n\tnnode->has_hard_limit = onode->has_hard_limit;\n\tnnode->no_multinode_jobs = onode->no_multinode_jobs;\n\tnnode->resv_enable = onode->resv_enable;\n\tnnode->provision_enable = onode->provision_enable;\n\tnnode->power_provisioning = onode->power_provisioning;\n\n\tnnode->num_jobs = onode->num_jobs;\n\tnnode->num_run_resv = onode->num_run_resv;\n\tnnode->num_susp_jobs = onode->num_susp_jobs;\n\n\tnnode->priority = onode->priority;\n\n\tnnode->jobs = dup_string_arr(onode->jobs);\n\tnnode->resvs = dup_string_arr(onode->resvs);\n\tif (flags & DUP_INDIRECT)\n\t\tnnode->res = dup_ind_resource_list(onode->res);\n\telse\n\t\tnnode->res = dup_resource_list(onode->res);\n\n\tnnode->max_running = onode->max_running;\n\tnnode->max_user_run = 
onode->max_user_run;\n\tnnode->max_group_run = onode->max_group_run;\n\n\tnnode->group_counts = dup_counts_umap(onode->group_counts);\n\tnnode->user_counts = dup_counts_umap(onode->user_counts);\n\n\tset_current_aoe(nnode, onode->current_aoe);\n\tset_current_eoe(nnode, onode->current_eoe);\n\tnnode->nodesig = string_dup(onode->nodesig);\n\tnnode->nodesig_ind = onode->nodesig_ind;\n\tnnode->last_state_change_time = onode->last_state_change_time;\n\tnnode->last_used_time = onode->last_used_time;\n\n\tif (onode->svr_node != NULL)\n\t\tnnode->svr_node = find_node_by_indrank(nsinfo->nodes, onode->node_ind, onode->rank);\n\n\t/* Duplicate list of jobs and running reservations.\n\t * If caller is server_info's copy constructor then nsinfo->resvs/jobs should be NULL,\n\t * but running reservations and jobs are collected later in the caller.\n\t * Otherwise, we collect running reservations or jobs here.\n\t */\n\tnnode->run_resvs_arr = copy_resresv_array(onode->run_resvs_arr, nsinfo->resvs);\n\tnnode->job_arr = copy_resresv_array(onode->job_arr, nsinfo->jobs);\n\n\t/* If we are called from dup_server(), nsinfo->hostsets are NULL.\n\t * They are not created yet.  
Hostsets will be attached in dup_server()\n\t */\n\tif (onode->hostset != NULL)\n\t\tnnode->hostset = find_node_partition_by_rank(nsinfo->hostsets,\n\t\t\t\t\t\t\t     onode->hostset->rank);\n\n\tnnode->bucket_ind = onode->bucket_ind;\n\tnnode->node_ind = onode->node_ind;\n\n\tnnode->nscr = onode->nscr;\n\n\tif (onode->partition != NULL) {\n\t\tnnode->partition = string_dup(onode->partition);\n\t\tif (nnode->partition == NULL) {\n\t\t\tdelete nnode;\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\treturn nnode;\n}\n\n/**\n * @brief\n *\t\tcopy_node_ptr_array - copy an array of nodes using a different set\n *\t\t\t      of node pointers (same nodes, different array).\n *\t\t\t      This means we have to use the names from the\n *\t\t\t      first array and find them in the second array\n *\n *\n * @param[in]\toarr\t-\tthe old array (filtered array)\n * @param[in]\tnarr\t-\tthe new array (entire node array)\n *\n * @return\tcopied array\n * @retval\tNULL\t: on error\n *\n */\nnode_info **\ncopy_node_ptr_array(node_info **oarr, node_info **narr)\n{\n\tint i;\n\tnode_info **ninfo_arr;\n\tnode_info *ninfo;\n\n\tif (oarr == NULL || narr == NULL)\n\t\treturn NULL;\n\n\tfor (i = 0; oarr[i] != NULL; i++)\n\t\t;\n\n\tif ((ninfo_arr = static_cast<node_info **>(malloc(sizeof(node_info *) * (i + 1)))) == NULL)\n\t\treturn NULL;\n\n\tfor (i = 0; oarr[i] != NULL; i++) {\n\t\tninfo = find_node_by_indrank(narr, oarr[i]->node_ind, oarr[i]->rank);\n\n\t\tif (ninfo == NULL) {\n\t\t\tfree(ninfo_arr);\n\t\t\treturn NULL;\n\t\t}\n\t\tninfo_arr[i] = ninfo;\n\t}\n\tninfo_arr[i] = NULL;\n\n\treturn ninfo_arr;\n}\n\n/**\n * @brief\n *\t\tcollect_resvs_on_nodes - collect all the running resvs from resv array\n *\t\t\t\ton the nodes\n *\n * @param[in]\tninfo\t-\tthe nodes to collect for\n * @param[in]\tresresv_arr\t-\tthe array of resvs to consider\n * @param[in]\tsize\t-\tthe size (in number of pointers) of the resv array\n *\n * @return\tint\n * @retval\t1\t: success\n * @retval\t0\t: failure\n *\n 
*/\nint\ncollect_resvs_on_nodes(node_info **ninfo_arr, resource_resv **resresv_arr, int size)\n{\n\tint i;\n\n\tif (ninfo_arr == NULL || ninfo_arr[0] == NULL)\n\t\treturn 0;\n\n\tfor (i = 0; ninfo_arr[i] != NULL; i++) {\n\t\tninfo_arr[i]->run_resvs_arr = resource_resv_filter(resresv_arr, size,\n\t\t\t\t\t\t\t\t   check_resv_running_on_node, ninfo_arr[i]->name.c_str(), 0);\n\t\t/* the count of running resvs on the node is set in query_reservations */\n\t}\n\treturn 1;\n}\n\n/**\n * @brief\n *\t\tcollect_jobs_on_nodes - collect all the jobs in the job array on the\n *\t\t\t\tnodes\n *\n * @param[in]\tninfo\t-\tthe nodes to collect for\n * @param[in]\tresresv_arr\t-\tthe array of jobs to consider\n * @param[in]\tsize\t-\tthe size (in number of pointers) of the job arrays\n * @param[in]\tflags\t-\tto indicate whether to do ghost job detection\n *\n * @retval\t1\t: upon success\n * @retval\t2\t: if a job reported on nodes was not found in the job arrays\n * @retval\t0\t: upon failure\n *\n */\nint\ncollect_jobs_on_nodes(node_info **ninfo_arr, resource_resv **resresv_arr, int size, int flags)\n{\n\tchar *ptr;\t\t\t  /* used to find the '/' in the jobs array */\n\tresource_resv *job;\t\t  /* find the job from the jobs array */\n\tresource_resv **susp_jobs = NULL; /* list of suspended jobs */\n\tnode_info *node;\t\t  /* used to store pointer of node in ninfo_arr */\n\tresource_resv **temp_ninfo_arr = NULL;\n\n\tif (ninfo_arr == NULL || ninfo_arr[0] == NULL)\n\t\treturn 0;\n\n\tfor (int i = 0; ninfo_arr[i] != NULL; i++) {\n\t\tif ((ninfo_arr[i]->job_arr = static_cast<resource_resv **>(malloc((size + 1) * sizeof(resource_resv *)))) == NULL) {\n\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\treturn 0;\n\t\t}\n\t\tninfo_arr[i]->job_arr[0] = NULL;\n\t}\n\n\tfor (int i = 0; ninfo_arr[i] != NULL; i++) {\n\t\tif (ninfo_arr[i]->jobs != NULL) {\n\t\t\t/* If there are no running jobs in the list and node reports a running job,\n\t\t\t * mark that the node has ghost job\n\t\t\t 
*/\n\t\t\tif (size == 0 && (flags & DETECT_GHOST_JOBS)) {\n\t\t\t\tninfo_arr[i]->has_ghost_job = 1;\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_NODE, LOG_DEBUG, ninfo_arr[i]->name,\n\t\t\t\t\t  \"Jobs reported running on node no longer exist or are not in running state\");\n\t\t\t}\n\n\t\t\tint k = 0;\n\t\t\tfor (int j = 0; ninfo_arr[i]->jobs[j] != NULL && k < size; j++) {\n\t\t\t\t/* jobs are in the format of node_name/sub_node.  We don't care about\n\t\t\t\t * the subnode... we just want to populate the jobs on our node\n\t\t\t\t * structure\n\t\t\t\t */\n\t\t\t\tptr = strchr(ninfo_arr[i]->jobs[j], '/');\n\t\t\t\tif (ptr != NULL)\n\t\t\t\t\t*ptr = '\\0';\n\n\t\t\t\tjob = find_resource_resv(resresv_arr, ninfo_arr[i]->jobs[j]);\n\t\t\t\tif ((job != NULL) && (!job->nspec_arr.empty())) {\n\t\t\t\t\t/* if a distributed job has more than one instance on this node\n\t\t\t\t\t * it'll show up more than once.  If this is the case, we only\n\t\t\t\t\t * want to have the job in our array once.\n\t\t\t\t\t */\n\t\t\t\t\tif (find_resource_resv_by_indrank(ninfo_arr[i]->job_arr, -1, job->rank) == NULL) {\n\t\t\t\t\t\tif (ninfo_arr[i]->has_hard_limit) {\n\t\t\t\t\t\t\tcounts *cts;\n\t\t\t\t\t\t\tcts = find_alloc_counts(ninfo_arr[i]->group_counts,\n\t\t\t\t\t\t\t\t\t\tjob->group);\n\t\t\t\t\t\t\tupdate_counts_on_run(cts, job->resreq);\n\n\t\t\t\t\t\t\tcts = find_alloc_counts(ninfo_arr[i]->user_counts,\n\t\t\t\t\t\t\t\t\t\tjob->user);\n\t\t\t\t\t\t\tupdate_counts_on_run(cts, job->resreq);\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tninfo_arr[i]->job_arr[k] = job;\n\t\t\t\t\t\tk++;\n\t\t\t\t\t\t/* make the job array searchable with find_resource_resv */\n\t\t\t\t\t\tninfo_arr[i]->job_arr[k] = NULL;\n\t\t\t\t\t}\n\t\t\t\t} else if (flags & DETECT_GHOST_JOBS) {\n\t\t\t\t\t/* Race Condition occurred: nodes were queried when a job existed.\n\t\t\t\t\t * Jobs were queried when the job no longer existed.  
Make note\n\t\t\t\t\t * of it on the job so the node's resources_assigned values can be\n\t\t\t\t\t * recalculated later.\n\t\t\t\t\t */\n\t\t\t\t\tninfo_arr[i]->has_ghost_job = 1;\n\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG2, PBS_EVENTCLASS_NODE, LOG_DEBUG, ninfo_arr[i]->name,\n\t\t\t\t\t\t   \"Job %s reported running on node no longer exists or is not in running state\",\n\t\t\t\t\t\t   ninfo_arr[i]->jobs[j]);\n\t\t\t\t}\n\t\t\t}\n\t\t\tninfo_arr[i]->num_jobs = k;\n\t\t}\n\t}\n\n\tfor (int i = 0; ninfo_arr[i] != NULL; i++) {\n\t\ttemp_ninfo_arr = static_cast<resource_resv **>(realloc(ninfo_arr[i]->job_arr, (ninfo_arr[i]->num_jobs + 1) * sizeof(resource_resv *)));\n\t\tif (temp_ninfo_arr == NULL) {\n\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\treturn 0;\n\t\t} else {\n\t\t\tninfo_arr[i]->job_arr = temp_ninfo_arr;\n\t\t}\n\t\tninfo_arr[i]->job_arr[ninfo_arr[i]->num_jobs] = NULL;\n\t}\n\n\tsusp_jobs = resource_resv_filter(resresv_arr,\n\t\t\t\t\t count_array(resresv_arr), check_susp_job, NULL, 0);\n\tif (susp_jobs == NULL)\n\t\treturn 0;\n\n\tfor (int i = 0; susp_jobs[i] != NULL; i++) {\n\t\tif (susp_jobs[i]->ninfo_arr != NULL) {\n\t\t\tfor (int j = 0; susp_jobs[i]->ninfo_arr[j] != NULL; j++) {\n\t\t\t\t/* resresv->ninfo_arr is merely a new list with pointers to server nodes.\n\t\t\t\t * resresv->resv->resv_nodes is a new list with pointers to resv nodes\n\t\t\t\t */\n\t\t\t\tnode = find_node_info(ninfo_arr,\n\t\t\t\t\t\t      susp_jobs[i]->ninfo_arr[j]->name);\n\t\t\t\tif (node != NULL)\n\t\t\t\t\tnode->num_susp_jobs++;\n\t\t\t}\n\t\t}\n\t}\n\tfree(susp_jobs);\n\n\treturn 1;\n}\n\n/**\n * @brief\n *\t\tupdate_node_on_run - update internal scheduler node data when a\n *\t\t\t     resource resv is run.\n *\n * @param[in]\tnspec\t-\tthe nspec for the node allocation\n * @param[in]\tresresv -\tthe resource resv which ran\n * @param[in]  job_state -\tthe old state of a job if resresv is a job\n *\t\t\t\tIf the old_state is found to be suspended\n *\t\t\t\tthen only 
resources that were released\n *\t\t\t\tduring suspension will be accounted.\n *\n * @return\tnothing\n *\n */\nvoid\nupdate_node_on_run(nspec *ns, resource_resv *resresv, const char *job_state)\n{\n\tresource_req *resreq;\n\tschd_resource *ncpusres = NULL;\n\tresource_resv **tmp_arr;\n\tnode_info *ninfo;\n\n\tif (ns == NULL || resresv == NULL)\n\t\treturn;\n\n\tninfo = ns->ninfo;\n\n\t/* Don't account for resources of a node that is unavailable */\n\tif (ninfo->is_offline || ninfo->is_down)\n\t\treturn;\n\n\tif (resresv->is_job) {\n\t\tninfo->num_jobs++;\n\t\tif (find_resource_resv_by_indrank(ninfo->job_arr, resresv->resresv_ind, resresv->rank) == NULL) {\n\t\t\ttmp_arr = add_resresv_to_array(ninfo->job_arr, resresv, NO_FLAGS);\n\t\t\tif (tmp_arr == NULL)\n\t\t\t\treturn;\n\n\t\t\tninfo->job_arr = tmp_arr;\n\t\t}\n\n\t} else if (resresv->is_resv) {\n\t\tninfo->num_run_resv++;\n\t\tif (find_resource_resv_by_indrank(ninfo->run_resvs_arr, resresv->resresv_ind, resresv->rank) == NULL) {\n\t\t\ttmp_arr = add_resresv_to_array(ninfo->run_resvs_arr, resresv, NO_FLAGS);\n\t\t\tif (tmp_arr == NULL)\n\t\t\t\treturn;\n\n\t\t\tninfo->run_resvs_arr = tmp_arr;\n\t\t}\n\t}\n\n\tresreq = ns->resreq;\n\tif ((job_state != NULL) && (*job_state == 'S')) {\n\t\tif (!resresv->job->resreleased.empty()) {\n\t\t\tnspec *temp = find_nspec_by_rank(resresv->job->resreleased, ninfo->rank);\n\t\t\tif (temp != NULL)\n\t\t\t\tresreq = temp->resreq;\n\t\t}\n\t}\n\twhile (resreq != NULL) {\n\t\tif (resreq->type.is_consumable) {\n\t\t\tschd_resource *res;\n\n\t\t\tres = find_resource(ninfo->res, resreq->def);\n\n\t\t\tif (res != NULL) {\n\t\t\t\tif (res->indirect_res != NULL)\n\t\t\t\t\tres = res->indirect_res;\n\n\t\t\t\tres->assigned += resreq->amount;\n\n\t\t\t\tif (res->def == allres[\"ncpus\"]) {\n\t\t\t\t\tncpusres = res;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tresreq = resreq->next;\n\t}\n\n\tif (ninfo->has_hard_limit && resresv->is_job) {\n\t\tcounts *cts;\n\n\t\tcts = 
find_alloc_counts(ninfo->group_counts, resresv->group);\n\t\tupdate_counts_on_run(cts, ns->resreq);\n\n\t\tcts = find_alloc_counts(ninfo->user_counts, resresv->user);\n\t\tupdate_counts_on_run(cts, ns->resreq);\n\t}\n\n\t/* if we're a cluster node and we have no cpus available, we're job_busy */\n\tif (ncpusres == NULL)\n\t\tncpusres = find_resource(ninfo->res, allres[\"ncpus\"]);\n\n\tif (ncpusres != NULL) {\n\t\tif (dynamic_avail(ncpusres) == 0)\n\t\t\tset_node_info_state(ninfo, ND_jobbusy);\n\t}\n\n\t/* if node selected for provisioning, this node is no longer available */\n\tif (ns->go_provision == 1) {\n\t\tset_node_info_state(ninfo, ND_prov);\n\n\t\t/* for jobs inside reservation, update the server's node info as well */\n\t\tif (resresv->job != NULL && resresv->job->resv != NULL &&\n\t\t    ninfo->svr_node != NULL) {\n\t\t\tset_node_info_state(ninfo->svr_node, ND_prov);\n\t\t}\n\n\t\tset_current_aoe(ninfo, resresv->aoename);\n\t}\n\n\t/* if job has eoe setting this node gets current_eoe set */\n\tif (resresv->is_job && resresv->eoename != NULL)\n\t\tset_current_eoe(ninfo, resresv->eoename);\n\n\tif (is_excl(resresv->place_spec, ninfo->sharing)) {\n\t\tif (resresv->is_resv) {\n\t\t\tadd_node_state(ninfo, ND_resv_exclusive);\n\t\t} else {\n\t\t\tadd_node_state(ninfo, ND_job_exclusive);\n\t\t\tif (ninfo->svr_node != NULL)\n\t\t\t\tadd_node_state(ninfo->svr_node, ND_job_exclusive);\n\t\t}\n\t}\n\n\tif (resresv->run_event != NULL)\n\t\tremove_te_list(&ninfo->node_events, resresv->run_event);\n\n\tif (ninfo->node_ind != -1 && ninfo->bucket_ind != -1) {\n\t\tnode_bucket *bkt = ninfo->server->buckets[ninfo->bucket_ind];\n\t\tint ind = ninfo->node_ind;\n\n\t\tif (pbs_bitmap_get_bit(bkt->free_pool->truth, ind)) {\n\t\t\tpbs_bitmap_bit_off(bkt->free_pool->truth, ind);\n\t\t\tbkt->free_pool->truth_ct--;\n\t\t} else {\n\t\t\tpbs_bitmap_bit_off(bkt->busy_later_pool->truth, 
ind);\n\t\t\tbkt->busy_later_pool->truth_ct--;\n\t\t}\n\n\t\tpbs_bitmap_bit_on(bkt->busy_pool->truth, ind);\n\t\tbkt->busy_pool->truth_ct++;\n\t}\n}\n\n/**\n * @brief\n *\t\tupdate_node_on_end - update a node when a resource resv ends\n *\n * @param[in]\tninfo\t-\tthe node where the job was running\n * @param[in]\tresresv -\tthe resource resv which is ending\n * @param[in]\tjob_state -\tthe old state of a job if resresv is a job\n *\t\t\t\tIf the old_state is found to be suspended\n *\t\t\t\tthen only resources that were released\n *\t\t\t\tduring suspension will be accounted.\n *\n * @return\tnothing\n *\n */\nvoid\nupdate_node_on_end(node_info *ninfo, resource_resv *resresv, const char *job_state)\n{\n\tresource_req *resreq = NULL;\n\tschd_resource *res = NULL;\n\tcounts *cts;\n\tint ind;\n\n\tif (ninfo == NULL || resresv == NULL)\n\t\treturn;\n\n\t/* Don't account for resources of a node that is unavailable */\n\tif (ninfo->is_offline || ninfo->is_down)\n\t\treturn;\n\n\tif (resresv->is_job) {\n\t\tninfo->num_jobs--;\n\t\tif (ninfo->num_jobs < 0)\n\t\t\tninfo->num_jobs = 0;\n\n\t\tremove_resresv_from_array(ninfo->job_arr, resresv);\n\t} else if (resresv->is_resv) {\n\t\tninfo->num_run_resv--;\n\t\tif (ninfo->num_run_resv < 0)\n\t\t\tninfo->num_run_resv = 0;\n\n\t\tremove_resresv_from_array(ninfo->run_resvs_arr, resresv);\n\t}\n\n\tif (ninfo->is_job_busy)\n\t\tremove_node_state(ninfo, ND_jobbusy);\n\tif (is_excl(resresv->place_spec, ninfo->sharing)) {\n\t\tif (resresv->is_resv)\n\t\t\tremove_node_state(ninfo, ND_resv_exclusive);\n\t\telse {\n\t\t\tremove_node_state(ninfo, ND_job_exclusive);\n\t\t\tif (ninfo->svr_node != NULL)\n\t\t\t\tremove_node_state(ninfo->svr_node, ND_job_exclusive);\n\t\t}\n\t}\n\n\tfor (auto ns : resresv->nspec_arr) {\n\t\tif (ns->ninfo == ninfo) {\n\t\t\tresreq = ns->resreq;\n\t\t\tif ((job_state != NULL) && (*job_state == 'S')) {\n\t\t\t\tif (!resresv->job->resreleased.empty()) {\n\t\t\t\t\tnspec *temp = 
find_nspec_by_rank(resresv->job->resreleased, ninfo->rank);\n\t\t\t\t\tif (temp != NULL)\n\t\t\t\t\t\tresreq = temp->resreq;\n\t\t\t\t}\n\t\t\t}\n\t\t\twhile (resreq != NULL) {\n\t\t\t\tif (resreq->type.is_consumable) {\n\t\t\t\t\tres = find_resource(ninfo->res, resreq->def);\n\t\t\t\t\tif (res != NULL) {\n\t\t\t\t\t\tif (res->indirect_res != NULL)\n\t\t\t\t\t\t\tres = res->indirect_res;\n\t\t\t\t\t\tres->assigned -= resreq->amount;\n\t\t\t\t\t\tif (res->assigned < 0) {\n\t\t\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_DEBUG, ninfo->name,\n\t\t\t\t\t\t\t\t   \"%s turned negative %.2lf, setting it to 0\", res->name, res->assigned);\n\t\t\t\t\t\t\tres->assigned = 0;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tresreq = resreq->next;\n\t\t\t}\n\t\t\t/* no soft limits on nodes... just hard limits */\n\t\t\tif (ninfo->has_hard_limit && resresv->is_job) {\n\t\t\t\tcts = find_counts(ninfo->group_counts, resresv->group);\n\n\t\t\t\tif (cts != NULL)\n\t\t\t\t\tupdate_counts_on_end(cts, ns->resreq);\n\n\t\t\t\tcts = find_counts(ninfo->user_counts, resresv->user);\n\n\t\t\t\tif (cts != NULL)\n\t\t\t\t\tupdate_counts_on_end(cts, ns->resreq);\n\t\t\t}\n\t\t}\n\t}\n\n\tind = ninfo->node_ind;\n\tif (ind != -1 && ninfo->bucket_ind != -1 && ninfo->num_jobs == 0) {\n\t\tnode_bucket *bkt = ninfo->server->buckets[ninfo->bucket_ind];\n\n\t\tif (ninfo->node_events == NULL) {\n\t\t\tpbs_bitmap_bit_on(bkt->free_pool->truth, ind);\n\t\t\tbkt->free_pool->truth_ct++;\n\t\t} else {\n\t\t\tpbs_bitmap_bit_on(bkt->busy_later_pool->truth, ind);\n\t\t\tbkt->busy_later_pool->truth_ct++;\n\t\t}\n\t\tpbs_bitmap_bit_off(bkt->busy_pool->truth, ind);\n\t\tbkt->busy_pool->truth_ct--;\n\t}\n}\n\n// constructor\nnspec::nspec()\n{\n\tend_of_chunk = 0;\n\tseq_num = 0;\n\tsub_seq_num = 0;\n\tgo_provision = 0;\n\tninfo = NULL;\n\tresreq = NULL;\n\tchk = NULL;\n}\n\n// destructor\nnspec::~nspec()\n{\n\tfree_resource_req_list(resreq);\n}\n\n// copy constructor\nnspec::nspec(const nspec &ons, 
node_info **ninfo_arr, selspec *sel)\n{\n\tend_of_chunk = ons.end_of_chunk;\n\tseq_num = ons.seq_num;\n\tsub_seq_num = ons.sub_seq_num;\n\tgo_provision = ons.go_provision;\n\tninfo = find_node_by_indrank(ninfo_arr, ons.ninfo->node_ind, ons.ninfo->rank);\n\tresreq = dup_resource_req_list(ons.resreq);\n\tif (sel != NULL)\n\t\tchk = find_chunk_by_seq_num(sel->chunks, ons.seq_num);\n}\n\n/**\n * @brief\n * \t\tdup_nspecs - duplicate an array of nspecs\n *\n * @param[in]\tonspecs\t\t- the nspecs to duplicate\n * @param[in]\tninfo_arr\t- the nodes corresponding to the nspecs\n * @param[in]\tsel\t\t- select spec to map nspecs to\n * @return\tduplicated nspec array\n *\n */\nstd::vector<nspec *>\ndup_nspecs(const std::vector<nspec *> &onspecs, node_info **ninfo_arr, selspec *sel)\n{\n\tstd::vector<nspec *> nnspecs;\n\n\tif (onspecs.empty() || ninfo_arr == NULL)\n\t\treturn {};\n\n\tfor (const auto &ns : onspecs)\n\t\tnnspecs.push_back(new nspec(*ns, ninfo_arr, sel));\n\n\treturn nnspecs;\n}\n\n/**\n * @brief\n * \t\tfree_nspecs - free a nspec array\n *\n * @param[in,out]\tns\t-\tthe nspec array\n *\n * @return\tnothing\n *\n */\nvoid\nfree_nspecs(std::vector<nspec *> &nspec_arr)\n{\n\tfor (auto ns : nspec_arr)\n\t\tdelete ns;\n\tnspec_arr.clear();\n}\n\n/**\n * @brief\n *\t\tfind_nspec - find an nspec in an array\n *\n * @param[in]\tnspec_arr\t-\tthe array of nspecs to search\n * @param[in]\tninfo\t-\tthe node_info to find\n *\n * @return\tthe found nspec\n * @retval\tNULL\n *\n */\nnspec *\nfind_nspec(std::vector<nspec *> &nspec_arr, node_info *ninfo)\n{\n\tif (ninfo == NULL)\n\t\treturn NULL;\n\tfor (const auto &ns : nspec_arr)\n\t\tif (ns->ninfo == ninfo)\n\t\t\treturn ns;\n\n\treturn NULL;\n}\n\n/**\n * @brief\n * \t\tfind an nspec in an array by rank\n *\n * @param[in]\tnspec_arr\t-\tthe array of nspecs to search\n * @param[in]\trank\t-\tthe unique integer identifier of the nspec/node to search for\n *\n * @return\tthe found nspec\n * @retval\tNULL\t: Error\n *\n 
*/\nnspec *\nfind_nspec_by_rank(std::vector<nspec *> &nspec_arr, int rank)\n{\n\tfor (const auto &ns : nspec_arr)\n\t\tif (ns->ninfo->rank == rank)\n\t\t\treturn ns;\n\n\treturn NULL;\n}\n\n/**\n *\t@brief\n *\t\teval a select spec to see if it is satisfiable\n *\n * @param[in]\tpolicy\t  -\tpolicy info\n * @param[in]\tspec\t  -\tthe select spec\n * @param[in]\tplacespec -\tthe placement spec (-l place)\n * @param[in]\tninfo_arr - \tarray of nodes to satisfy the spec\n * @param[in]\tnodepart  -\tthe node partition array for node grouping\n *\t\t \t \tif NULL, we're not doing node grouping\n * @param[in]\tresresv\t  -\tthe resource resv the spec is from\n * @param[in]\tflags\t  -\tflags to change functions behavior\n *\t      \t\t\tEVAL_OKBREAK - ok to break chunk up across vnodes\n *\t      \t\t\tEVAL_EXCLSET - allocate entire nodelist exclusively\n * @param[out]\tnspec_arr -\tthe node solution\n * @param[out]\terr\t  -\terror structure to return error information\n *\n * @return\tbool\n * @retval\ttrue\t  : \tif the nodespec can be satisfied\n * @retval\tfalse\t  : \tif not\n *\n */\nbool\neval_selspec(status *policy, selspec *spec, place *placespec,\n\t     node_info **ninfo_arr, node_partition **nodepart, resource_resv *resresv,\n\t     unsigned int flags, std::vector<nspec *> &nspec_arr, schd_error *err)\n{\n\tplace *pl;\n\tint can_fit = 0;\n\tbool rc = false; /* 1 if resources are available, 0 if not */\n\tint pass_flags = NO_FLAGS;\n\tchar reason[MAX_LOG_SIZE] = {0};\n\tint i = 0;\n\tstatic struct schd_error *failerr = NULL;\n\n\tif (spec == NULL || ninfo_arr == NULL || resresv == NULL || placespec == NULL)\n\t\treturn false;\n\t/* Unsetting RETURN_ALL_ERR flag, because with this flag set resresv_can_fit_nodepart can return\n\t * with multiple errors and the function only needs to see the first error it encounters.\n\t */\n\tflags &= ~RETURN_ALL_ERR;\n\n\tif (failerr == NULL) {\n\t\tfailerr = new_schd_error();\n\t\tif (failerr == NULL) 
{\n\t\t\tset_schd_error_codes(err, NOT_RUN, SCHD_ERROR);\n\t\t\treturn false;\n\t\t}\n\t} else\n\t\tclear_schd_error(failerr);\n\n\t/* Remove visited, scattered and ineligible bits from ncsr for node searching\n\t * Since statebusy is a per cycle bit, it shouldn't be changed\n\t */\n\tfor (i = 0; ninfo_arr[i] != NULL; i++)\n\t\tninfo_arr[i]->nscr &= ~(NSCR_VISITED | NSCR_SCATTERED | NSCR_INELIGIBLE);\n\n\tpl = placespec;\n\n\tif (flags != NO_FLAGS)\n\t\tpass_flags = flags;\n\n\tcheck_node_array_eligibility(ninfo_arr, resresv, pl, err);\n\n\tif (failerr->status_code == SCHD_UNKWN)\n\t\tmove_schd_error(failerr, err);\n\tclear_schd_error(err);\n\n\t/* If we are not node grouping or we only have 1 chunk packed onto a single\n\t * host, then we should try and satisfy over all nodes in the list\n\t *\n\t * NOTE: pack && total_chunks == 1 may not be valid once chunks are\n\t * broken into vchunks\n\t */\n\tif (nodepart == NULL) {\n\t\tif (resresv->server->has_multi_vnode && ok_break_chunk(resresv, ninfo_arr))\n\t\t\tpass_flags |= EVAL_OKBREAK;\n\n\t\trc = eval_placement(policy, spec, ninfo_arr, pl, resresv, pass_flags, nspec_arr, err);\n\t\tif (!rc)\n\t\t\tfree_nspecs(nspec_arr);\n\n\t\tif (pass_flags & EVAL_EXCLSET)\n\t\t\talloc_rest_nodepart(nspec_arr, ninfo_arr);\n\n\t\tif (err->status_code == SCHD_UNKWN && failerr->status_code != SCHD_UNKWN)\n\t\t\tmove_schd_error(err, failerr);\n\n\t\treturn rc;\n\t}\n\n\t/* Otherwise we're node grouping... 
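* In sketch, the loop below does the following:\n\t *   for each placement set nodepart[i], until the spec is satisfied:\n\t *     if resresv_can_fit_nodepart(nodepart[i]) says the request can fit,\n\t *       try eval_placement() on just that partition's nodes\n\t *   if no placement set worked and SPAN_PSETS is set, retry over all nodes\n\t 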
*/\n\n\tfor (i = 0; nodepart[i] != NULL && rc == 0; i++) {\n\t\tclear_schd_error(err);\n\t\tif (resresv_can_fit_nodepart(policy, nodepart[i], resresv, flags, err)) {\n\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, resresv->name,\n\t\t\t\t   \"Evaluating placement set: %s\", nodepart[i]->name);\n\t\t\tif (nodepart[i]->ok_break)\n\t\t\t\tpass_flags |= EVAL_OKBREAK;\n\n\t\t\tif (nodepart[i]->excl)\n\t\t\t\tpass_flags |= EVAL_EXCLSET;\n\n\t\t\trc = eval_placement(policy, spec, nodepart[i]->ninfo_arr, pl,\n\t\t\t\t\t    resresv, pass_flags, nspec_arr, err);\n\t\t\tif (rc) {\n\t\t\t\tif (resresv->nodepart_name != NULL)\n\t\t\t\t\tfree(resresv->nodepart_name);\n\t\t\t\tresresv->nodepart_name = string_dup(nodepart[i]->name);\n\t\t\t\tcan_fit = 1;\n\t\t\t\tif (nodepart[i]->excl)\n\t\t\t\t\talloc_rest_nodepart(nspec_arr, nodepart[i]->ninfo_arr);\n\t\t\t} else {\n\t\t\t\tfree_nspecs(nspec_arr);\n\t\t\t\tif (failerr->status_code == SCHD_UNKWN)\n\t\t\t\t\tcopy_schd_error(failerr, err);\n\t\t\t}\n\t\t} else {\n\t\t\ttranslate_fail_code(err, NULL, reason);\n\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, resresv->name,\n\t\t\t\t   \"Placement set %s is too small: %s\", nodepart[i]->name, reason);\n\t\t\tset_schd_error_codes(err, NOT_RUN, SET_TOO_SMALL);\n\t\t\tset_schd_error_arg(err, ARG1, \"Placement\");\n#ifdef NAS /* localmod 031 */\n\t\t\tset_schd_error_arg(err, ARG2, \"for resource model\");\n#else\n\t\t\tset_schd_error_arg(err, ARG2, nodepart[i]->name);\n#endif /* localmod 031 */\n\t\t\tif (failerr->status_code == SCHD_UNKWN)\n\t\t\t\tcopy_schd_error(failerr, err);\n\t\t}\n\n\t\tif (!can_fit && !rc &&\n\t\t    resresv_can_fit_nodepart(policy, nodepart[i], resresv, flags | COMPARE_TOTAL, err)) {\n\t\t\tcan_fit = 1;\n\t\t}\n\t\tpass_flags = NO_FLAGS;\n\t}\n\n\tif (!can_fit) {\n\t\tif (flags & SPAN_PSETS) {\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, resresv->name,\n\t\t\t\t  \"Request won't fit into any placement sets, 
will use all nodes\");\n\t\t\tresresv->can_not_fit = 1;\n\t\t\tif (resresv->server->has_multi_vnode && ok_break_chunk(resresv, ninfo_arr))\n\t\t\t\tpass_flags |= EVAL_OKBREAK;\n\n\t\t\trc = eval_placement(policy, spec, ninfo_arr, pl, resresv, pass_flags, nspec_arr, err);\n\t\t} else {\n\t\t\tset_schd_error_codes(err, NEVER_RUN, CANT_SPAN_PSET);\n\t\t\t/* CANT_SPAN_PSET is more important than any other error we may have encountered -- keep it*/\n\t\t\tclear_schd_error(failerr);\n\t\t\tcopy_schd_error(failerr, err);\n\t\t}\n\t}\n\n\tif (!rc)\n\t\tfree_nspecs(nspec_arr);\n\n\tif (err->status_code == SCHD_UNKWN && failerr->status_code != SCHD_UNKWN)\n\t\tmove_schd_error(err, failerr);\n\n\treturn rc;\n}\n\n/**\n * @brief\n * \t\thandle the place spec for vnode placement of chunks\n *\n * @param[in] policy     - policy info\n * @param[in] spec       - the select spec\n * @param[in] ninfo_arr  - array of nodes to satisfy the spec\n * @param[in] pl         - parsed placement spec\n * @param[in] resresv    - the resource resv the spec if from\n * @param[in] flags\t-\tflags to change function's behavior\n *\t      \t\t\t\tEVAL_OKBREAK - ok to break chunk up across vnodes\n * @param[out]\tnspec_arr\t-\tthe node solution will be allocated and\n *\t\t\t\t   \t\t\t\treturned by this pointer by reference\n * @param[out]\terr\t-\terror structure to return error information\n *\n * @return\tbool\n * @retval\ttrue\t: if the selspec can be satisfied\n * @retval\tfalse\t: if not\n *\n */\nbool\neval_placement(status *policy, selspec *spec, node_info **ninfo_arr, place *pl,\n\t       resource_resv *resresv, unsigned int flags,\n\t       std::vector<nspec *> &nspec_arr, schd_error *err)\n{\n\tnp_cache *npc = NULL;\n\tnode_partition **hostsets = NULL;\n\tint tot = 0;\n\tresource_req *req = NULL;\n\tschd_resource *res = NULL;\n\tselspec *dselspec = NULL;\n\tnode_info **nptr = NULL;\n\tstatic schd_error *failerr = NULL;\n\n\tif (spec == NULL || ninfo_arr == NULL || pl == NULL || resresv 
== NULL)\n\t\treturn false;\n\n\tif (failerr == NULL) {\n\t\tfailerr = new_schd_error();\n\t\tif (failerr == NULL) {\n\t\t\tset_schd_error_codes(err, NOT_RUN, SCHD_ERROR);\n\t\t\treturn false;\n\t\t}\n\t} else\n\t\tclear_schd_error(failerr);\n\n\t/* reorder nodes for smp_cluster_dist or avoid_provision.\n\t *\n\t * remark: reorder_nodes doesn't reorder in place, returns\n\t *         a ptr to a reordered static array\n\t */\n\tif ((pl->pack && spec->total_chunks == 1) ||\n\t    (conf.provision_policy == AVOID_PROVISION && resresv->aoename != NULL))\n\t\tnptr = reorder_nodes(ninfo_arr, resresv);\n\n\tif (nptr == NULL)\n\t\tnptr = ninfo_arr;\n\n\t/*\n\t * eval_complex_selspec() handles placement for single-vnoded systems.\n\t * It should be merged into this function in an optimized way, but until\n\t * then we short-circuit this function and fall into it.  This function doesn't\n\t * handle multi-chunk pack.  We still fall into it in the case of a single\n\t * chunk.\n\t */\n\tif (!resresv->server->has_multi_vnode &&\n\t    (!resresv->place_spec->pack || spec->total_chunks == 1)) {\n\t\treturn eval_complex_selspec(policy, spec, nptr, pl, resresv, flags, nspec_arr, err);\n\t}\n\n\t/* get a pool of node partitions based on host.  If we're using the\n\t * server's nodes, we can use the pre-created host sets.\n\t */\n\tif (nptr == resresv->server->nodes)\n\t\thostsets = resresv->server->hostsets;\n\n\tif (hostsets == NULL) {\n\t\tstd::vector<std::string> host_arr{\"host\"};\n\t\tnpc = find_alloc_np_cache(policy, resresv->server->npc_arr, host_arr, nptr, NULL);\n\t\tif (npc != NULL)\n\t\t\thostsets = npc->nodepart;\n\t}\n\n\tif (hostsets != NULL) {\n\t\tif (pl->scatter || pl->vscatter || pl->free) {\n\t\t\tdselspec = new selspec(*spec);\n\t\t\tif (dselspec == NULL)\n\t\t\t\treturn false;\n\t\t}\n\n\t\tfor (int i = 0; hostsets[i] != NULL && tot != spec->total_chunks; i++) {\n\t\t\t/* if one vnode on a host is set to force/dflt exclhost\n\t\t\t * then they all are.  
The mom makes sure of this\n\t\t\t */\n\t\t\tstd::vector<nspec *> nsa;\n\t\t\tnode_info **dninfo_arr = hostsets[i]->ninfo_arr;\n\t\t\tenum vnode_sharing sharing = VNS_DFLT_SHARED;\n\t\t\tbool do_exclhost = false;\n\t\t\tbool rc = false; /* true if current chunk was successfully allocated */\n\t\t\t/* true if any vnode is allocated from a host - used in exclhost allocation */\n\t\t\tbool any_succ_rc = false;\n\n\t\t\tif (dninfo_arr[0] != NULL)\n\t\t\t\tsharing = dninfo_arr[0]->sharing;\n\n\t\t\tflags &= ~EVAL_EXCLSET;\n\t\t\tif (sharing == VNS_FORCE_EXCLHOST ||\n\t\t\t    (sharing == VNS_DFLT_EXCLHOST && pl->excl == 0 && pl->share == 0) || pl->exclhost) {\n\t\t\t\tdo_exclhost = true;\n\t\t\t\tflags |= EVAL_EXCLSET;\n\t\t\t}\n\n\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_NODE, LOG_DEBUG,\n\t\t\t\t   resresv->name, \"Evaluating host %s\", hostsets[i]->res_val);\n\n\t\t\t/* Pack on One Host Placement:\n\t\t\t * place all chunks on one host.  This is done with a call to\n\t\t\t * to eval_complex_selspec().\n\t\t\t */\n\t\t\tif (pl->pack) {\n\t\t\t\trc = eval_complex_selspec(policy, spec, dninfo_arr, pl,\n\t\t\t\t\t\t\t  resresv, flags | EVAL_OKBREAK, nsa, err);\n\t\t\t\tif (rc) {\n\t\t\t\t\tany_succ_rc = true;\n\t\t\t\t\ttot = spec->total_chunks;\n\t\t\t\t} else {\n\t\t\t\t\tfree_nspecs(nsa);\n\t\t\t\t\tif (failerr->status_code == SCHD_UNKWN)\n\t\t\t\t\t\tmove_schd_error(failerr, err);\n\t\t\t\t\tclear_schd_error(err);\n\t\t\t\t}\n\t\t\t}\n\t\t\t/* Scatter by Vnode Placement:\n\t\t\t * place at most one chunk on any one vnode.  This is done by successive\n\t\t\t * calls to to eval_simple_selspec().  If it returns true, we remove the\n\t\t\t * vnode from dninfo_arr[] so we don't allocate it again.\n\t\t\t */\n\t\t\telse if (pl->vscatter) {\n\t\t\t\tfor (int c = 0; dselspec->chunks[c] != NULL; c++) {\n\t\t\t\t\t/* setting rc=1 forces at least 1 loop of the while.  
This should be\n\t\t\t\t\t * be rewritten in the do/while() style seen in the free block below\n\t\t\t\t\t */\n\t\t\t\t\trc = true;\n\t\t\t\t\tif ((hostsets[i]->free_nodes > 0) && (check_avail_resources(hostsets[i]->res,\n\t\t\t\t\t\t\t\t\t\t\t\t    dselspec->chunks[c]->req, UNSET_RES_ZERO, INSUFFICIENT_RESOURCE, err))) {\n\t\t\t\t\t\tfor (int k = 0; dninfo_arr[k] != NULL; k++)\n\t\t\t\t\t\t\tdninfo_arr[k]->nscr &= ~NSCR_VISITED;\n\t\t\t\t\t\twhile (rc && dselspec->chunks[c]->num_chunks > 0) {\n\t\t\t\t\t\t\tstd::vector<nspec *> ns_chunk;\n\t\t\t\t\t\t\trc = eval_simple_selspec(policy, spec->chunks[c], dninfo_arr, pl,\n\t\t\t\t\t\t\t\t\t\t resresv, flags, ns_chunk, err);\n\n\t\t\t\t\t\t\tif (rc) {\n\t\t\t\t\t\t\t\tany_succ_rc = true;\n\t\t\t\t\t\t\t\ttot++;\n\t\t\t\t\t\t\t\tdselspec->chunks[c]->num_chunks--;\n\n\t\t\t\t\t\t\t\tfor (const auto &ns : ns_chunk) {\n\t\t\t\t\t\t\t\t\tauto vn = find_node_by_rank(dninfo_arr, ns->ninfo->rank);\n\t\t\t\t\t\t\t\t\tif (vn != NULL)\n\t\t\t\t\t\t\t\t\t\tvn->nscr |= NSCR_SCATTERED;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tnsa.insert(nsa.end(), ns_chunk.begin(), ns_chunk.end());\n\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\tfree_nspecs(ns_chunk);\n\t\t\t\t\t\t\t\tif (failerr->status_code == SCHD_UNKWN)\n\t\t\t\t\t\t\t\t\tmove_schd_error(failerr, err);\n\t\t\t\t\t\t\t\tclear_schd_error(err);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tchar reason[MAX_LOG_SIZE] = {0};\n\n\t\t\t\t\t\tif (hostsets[i]->free_nodes == 0)\n\t\t\t\t\t\t\tstrcpy(reason, \"No free nodes available\");\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\ttranslate_fail_code(err, NULL, reason);\n\n\t\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\t\t\t\t   resresv->name, \"Insufficient host-level resources %s\", reason);\n\n\t\t\t\t\t\t/* don't be so specific in the comment since it's only for a single host */\n\t\t\t\t\t\tset_schd_error_arg(err, ARG1, NULL);\n\n\t\t\t\t\t\tif (failerr->status_code == 
SCHD_UNKWN)\n\t\t\t\t\t\t\tmove_schd_error(failerr, err);\n\t\t\t\t\t\tclear_schd_error(err);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\t/* Scatter By Host Placement:\n\t\t\t* place at most one chunk on any one host.  We make a call to\n\t\t\t* eval_simple_selspec() for each sub-chunk (things between the +'s).\n\t\t\t* If it returns success (i.e. rc=1) we are done for this host\n\t\t\t*/\n\t\t\telse if (pl->scatter) {\n\t\t\t\tfor (int c = 0; dselspec->chunks[c] != NULL && rc == 0; c++) {\n\t\t\t\t\tif ((hostsets[i]->free_nodes > 0) && (check_avail_resources(hostsets[i]->res,\n\t\t\t\t\t\t\t\t\t\t\t\t    dselspec->chunks[c]->req, UNSET_RES_ZERO, INSUFFICIENT_RESOURCE, err))) {\n\t\t\t\t\t\tstd::vector<nspec *> ns_chunk;\n\n\t\t\t\t\t\tif (dselspec->chunks[c]->num_chunks > 0) {\n\t\t\t\t\t\t\tfor (int k = 0; dninfo_arr[k] != NULL; k++)\n\t\t\t\t\t\t\t\tdninfo_arr[k]->nscr &= ~NSCR_VISITED;\n\n\t\t\t\t\t\t\trc = eval_simple_selspec(policy, spec->chunks[c],\n\t\t\t\t\t\t\t\t\t\t dninfo_arr, pl, resresv, flags | EVAL_OKBREAK,\n\t\t\t\t\t\t\t\t\t\t ns_chunk, err);\n\n\t\t\t\t\t\t\tif (rc) {\n\t\t\t\t\t\t\t\tany_succ_rc = 1;\n\t\t\t\t\t\t\t\ttot++;\n\t\t\t\t\t\t\t\tdselspec->chunks[c]->num_chunks--;\n\t\t\t\t\t\t\t\tnsa.insert(nsa.end(), ns_chunk.begin(), ns_chunk.end());\n\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\tfree_nspecs(ns_chunk);\n\n\t\t\t\t\t\t\t\tif (failerr->status_code == SCHD_UNKWN)\n\t\t\t\t\t\t\t\t\tmove_schd_error(failerr, err);\n\t\t\t\t\t\t\t\tclear_schd_error(err);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tchar reason[MAX_LOG_SIZE] = {0};\n\n\t\t\t\t\t\tif (hostsets[i]->free_nodes == 0)\n\t\t\t\t\t\t\tstrcpy(reason, \"No free nodes available\");\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\ttranslate_fail_code(err, NULL, reason);\n\n\t\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\t\t\t\t   resresv->name, \"Insufficient host-level resources %s\", reason);\n\n\t\t\t\t\t\t/* don't be so specific in the comment since it's 
only for a single host */\n\t\t\t\t\t\tset_schd_error_arg(err, ARG1, NULL);\n\n\t\t\t\t\t\tif (failerr->status_code == SCHD_UNKWN)\n\t\t\t\t\t\t\tmove_schd_error(failerr, err);\n\t\t\t\t\t\tclear_schd_error(err);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\t/* Free Placement:\n\t\t\t * Place as many chunks as possible on vnodes as they can hold.\n\t\t\t * We do this by duplicating both the nodes and the select spec.  The\n\t\t\t * resources and chunks counts are decremented in the duplicated select\n\t\t\t * when they are allocated.  The assigned resources on the nodes are\n\t\t\t * increased when resources are allocated.  When the select spec has no\n\t\t\t *  more resources left, we've successfully fulfilled the request.  If we\n\t\t\t * run out of nodes to search, then we've failed to fulfill the request.\n\t\t\t */\n\t\t\telse if (pl->free) {\n\t\t\t\tnode_info **dup_ninfo_arr;\n\t\t\t\tdup_ninfo_arr = dup_nodes(hostsets[i]->ninfo_arr,\n\t\t\t\t\t\t\t  resresv->server, NO_FLAGS);\n\t\t\t\tif (dup_ninfo_arr == NULL) {\n\t\t\t\t\tdelete dselspec;\n\t\t\t\t\treturn 0;\n\t\t\t\t}\n\n\t\t\t\tfor (int c = 0; dselspec->chunks[c] != NULL; c++) {\n\t\t\t\t\tif ((hostsets[i]->free_nodes > 0) && (check_avail_resources(hostsets[i]->res,\n\t\t\t\t\t\t\t\t\t\t\t\t    dselspec->chunks[c]->req, UNSET_RES_ZERO, INSUFFICIENT_RESOURCE, err))) {\n\t\t\t\t\t\tif (dselspec->chunks[c]->num_chunks > 0) {\n\t\t\t\t\t\t\tfor (int k = 0; dup_ninfo_arr[k] != NULL; k++)\n\t\t\t\t\t\t\t\tdup_ninfo_arr[k]->nscr &= ~NSCR_VISITED;\n\t\t\t\t\t\t\tdo {\n\t\t\t\t\t\t\t\tstd::vector<nspec *> ns_chunk;\n\t\t\t\t\t\t\t\trc = eval_simple_selspec(policy, dselspec->chunks[c], dup_ninfo_arr,\n\t\t\t\t\t\t\t\t\t\t\t pl, resresv, flags | EVAL_OKBREAK, ns_chunk, err);\n\n\t\t\t\t\t\t\t\tif (rc) {\n\t\t\t\t\t\t\t\t\tany_succ_rc = true;\n\t\t\t\t\t\t\t\t\ttot++;\n\t\t\t\t\t\t\t\t\tdselspec->chunks[c]->num_chunks--;\n\n\t\t\t\t\t\t\t\t\tfor (const auto &ns : ns_chunk) {\n\t\t\t\t\t\t\t\t\t\treq = 
ns->resreq;\n\t\t\t\t\t\t\t\t\t\twhile (req != NULL) {\n\t\t\t\t\t\t\t\t\t\t\tif (req->type.is_consumable) {\n\t\t\t\t\t\t\t\t\t\t\t\tres = find_resource(ns->ninfo->res, req->def);\n\t\t\t\t\t\t\t\t\t\t\t\tif (res != NULL) {\n\t\t\t\t\t\t\t\t\t\t\t\t\tif (res->indirect_res != NULL)\n\t\t\t\t\t\t\t\t\t\t\t\t\t\tres = res->indirect_res;\n\n\t\t\t\t\t\t\t\t\t\t\t\t\tres->assigned += req->amount;\n\t\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\t\treq = req->next;\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t\tns->ninfo = find_node_by_indrank(nptr, ns->ninfo->node_ind, ns->ninfo->rank);\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\tnsa.insert(nsa.end(), ns_chunk.begin(), ns_chunk.end());\n\t\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\t\tfree_nspecs(ns_chunk);\n\n\t\t\t\t\t\t\t\t\tif (failerr->status_code == SCHD_UNKWN)\n\t\t\t\t\t\t\t\t\t\tmove_schd_error(failerr, err);\n\t\t\t\t\t\t\t\t\tclear_schd_error(err);\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t} while (rc && dselspec->chunks[c]->num_chunks > 0);\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tchar reason[MAX_LOG_SIZE] = {0};\n\n\t\t\t\t\t\tif (hostsets[i]->free_nodes == 0)\n\t\t\t\t\t\t\tstrcpy(reason, \"No free nodes available\");\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\ttranslate_fail_code(err, NULL, reason);\n\n\t\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\t\t\t\t   resresv->name, \"Insufficient host-level resources %s\", reason);\n#ifdef NAS /* localmod 998 */\n\t\t\t\t\t\tset_schd_error_codes(err, NOT_RUN, RESOURCES_INSUFFICIENT);\n\t\t\t\t\t\tset_schd_error_arg(err, ARG1, \"Host\");\n\t\t\t\t\t\tset_schd_error_arg(err, ARG2, hostsets[i]->name);\n#endif /* localmod 998 */\n\t\t\t\t\t\t/* don't be so specific in the comment since it's only for a single host */\n\t\t\t\t\t\tset_schd_error_arg(err, ARG1, NULL);\n\n\t\t\t\t\t\tif (failerr->status_code == SCHD_UNKWN)\n\t\t\t\t\t\t\tmove_schd_error(failerr, 
err);\n\t\t\t\t\t\tclear_schd_error(err);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tfree_nodes(dup_ninfo_arr);\n\t\t\t} else {\n\t\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_DEBUG, resresv->name,\n\t\t\t\t\t   \"Unexpected Placement: not %s, %s, %s, or %s\",\n\t\t\t\t\t   PLACE_Scatter, PLACE_VScatter, PLACE_Pack, PLACE_Free);\n\t\t\t}\n\t\t\tif (any_succ_rc) {\n\t\t\t\tif (do_exclhost) {\n\t\t\t\t\t/* If we're in a placement set, dninfo_arr might not contain the whole hostset.\n\t\t\t\t\t * We need to grab the hostset from the node\n\t\t\t\t\t */\n\t\t\t\t\tif (dninfo_arr[0]->hostset != NULL)\n\t\t\t\t\t\talloc_rest_nodepart(nsa, dninfo_arr[0]->hostset->ninfo_arr);\n\t\t\t\t\telse\n\t\t\t\t\t\talloc_rest_nodepart(nsa, dninfo_arr);\n\t\t\t\t}\n\t\t\t\tnspec_arr.insert(nspec_arr.end(), nsa.begin(), nsa.end());\n\t\t\t}\n\t\t}\n\t} else\n\t\tset_schd_error_codes(err, NOT_RUN, SCHD_ERROR);\n\n\tif (dselspec != NULL)\n\t\tdelete dselspec;\n\n\tif (tot == spec->total_chunks)\n\t\treturn true;\n\n\tif (err->status_code == SCHD_UNKWN && failerr->status_code != SCHD_UNKWN)\n\t\tmove_schd_error(err, failerr);\n\n\treturn false;\n}\n\n/**\n * @brief\n * \t\thandle a complex (plus'd) select spec\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tspec\t-\tthe select spec\n * @param[in]\tninfo_arr\t-\tarray of nodes to satisfy the spec\n * @param[in]\tpl\t-\tparsed placement spec\n * @param[in]\tresresv\t-\tthe resource resv the spec is from\n * @param[in]\tflags\t-\tflags to change the function's behavior\n *\t      \t\t\t\t\tEVAL_OKBREAK - ok to break chunk up across vnodes\n *\t      \t\t\t\t\tEVAL_EXCLSET - allocate entire nodelist exclusively\n * @param[out]\tnspec_arr\t-\tthe node solution\n * @param[out]\terr\t-\terror structure to return error information\n *\n * @retval\ttrue\t: if the selspec can be satisfied\n * @retval\tfalse\t: if not\n *\n */\nbool\neval_complex_selspec(status *policy, selspec *spec, node_info **ninfo_arr, place *pl,\n\t\t     
resource_resv *resresv, unsigned int flags, std::vector<nspec *> &nspec_arr, schd_error *err)\n{\n\tstd::vector<nspec *> nsa; /* the nspec array to hold node solution */\n\tnode_info **nodes;\t  /* nodes to search through (possibly duplicated) */\n\tint rc = 1;\t\t  /* used as a return code in the complex spec case */\n\tint tot_nodes;\t\t  /* total number of nodes on the server */\n\tint num_nodes_used = 0;\t  /* number of nodes used to satisfy spec */\n\n\t/* number of nodes used with the no_multinode_job flag set */\n\tint num_no_multi_nodes = 0;\n\n\tint chunks_needed = 0;\n\n\tint k;\n\tint n;\n\tint c;\n\tresource_req *req;\n\tschd_resource *res;\n\n\tif (spec == NULL || ninfo_arr == NULL)\n\t\treturn false;\n\n\t/* we have a simple selspec... just pass it along */\n\tif (spec->total_chunks == 1)\n\t\treturn eval_simple_selspec(policy, spec->chunks[0], ninfo_arr,\n\t\t\t\t\t   pl, resresv, flags, nspec_arr, err);\n\n\ttot_nodes = count_array(ninfo_arr);\n\n\t/* we have a complex select spec\n\t * This makes things more complicated now... we can make a single pass\n\t * through our nodes and will possibly come up with a solution... 
but\n\t * there is always the chance we could swap out nodes depending on one\n\t * node being able to satisfy multiple simple specs.\n\t * This recursion could take very long to finish.\n\t *\n\t * We'll go through the nodes once since it'll probably be fine for most\n\t * cases.\n\t */\n\n\tif (pl->scatter || pl->vscatter) {\n\t\tnodes = ninfo_arr;\n\t\tfor (k = 0; nodes[k] != 0; k++)\n\t\t\tnodes[k]->nscr &= ~NSCR_SCATTERED;\n\t} else {\n\t\tif ((nodes = dup_nodes(ninfo_arr, resresv->server, NO_FLAGS)) == NULL) {\n\t\t\t/* only free array if we allocated it locally */\n\t\t\treturn false;\n\t\t}\n\t}\n\n\tn = -1;\n\tfor (c = 0, chunks_needed = 0; c < spec->total_chunks && rc > 0; c++) {\n\t\tstd::vector<nspec *> ns_chunk;\n\t\tif (chunks_needed == 0) {\n\t\t\tn++;\n\t\t\tchunks_needed = spec->chunks[n]->num_chunks;\n\t\t\tfor (k = 0; nodes[k] != 0; k++)\n\t\t\t\tnodes[k]->nscr &= ~NSCR_VISITED;\n\t\t}\n\n\t\trc = eval_simple_selspec(policy, spec->chunks[n], nodes, pl, resresv, flags, ns_chunk, err);\n\n\t\tif (rc) {\n\t\t\tfor (auto &ns : ns_chunk) {\n\t\t\t\tnum_nodes_used++;\n\t\t\t\tif (ns->ninfo->no_multinode_jobs)\n\t\t\t\t\tnum_no_multi_nodes++;\n\n\t\t\t\tif (pl->scatter || pl->vscatter)\n\t\t\t\t\tns->ninfo->nscr |= NSCR_SCATTERED;\n\t\t\t\telse {\n\t\t\t\t\treq = ns->resreq;\n\t\t\t\t\twhile (req != NULL) {\n\t\t\t\t\t\tres = find_resource(ns->ninfo->res, req->def);\n\t\t\t\t\t\tif (res != NULL)\n\t\t\t\t\t\t\tres->assigned += req->amount;\n\n\t\t\t\t\t\treq = req->next;\n\t\t\t\t\t}\n\t\t\t\t\t/* replace the dup'd node with the real one */\n\t\t\t\t\tns->ninfo = find_node_by_indrank(ninfo_arr, ns->ninfo->node_ind, ns->ninfo->rank);\n\t\t\t\t}\n\n\t\t\t\t/* if policy is avoid provision, continue to use aoe-sorted list\n\t\t\t\t * of nodes.\n\t\t\t\t */\n\t\t\t\tif (conf.provision_policy != AVOID_PROVISION &&\n\t\t\t\t    !cstat.node_sort->empty() && conf.node_sort_unused)\n\t\t\t\t\tqsort(nodes, tot_nodes, sizeof(node_info *), 
multi_node_sort);\n\t\t\t}\n\t\t\tchunks_needed--;\n\t\t\tnsa.insert(nsa.end(), ns_chunk.begin(), ns_chunk.end());\n\t\t}\n\t}\n\n\tif (rc) {\n\t\tnspec_arr.insert(nspec_arr.end(), nsa.begin(), nsa.end());\n\t\tnsa.clear();\n\t} else\n\t\tfree_nspecs(nsa);\n\n\tif (!(pl->scatter || pl->vscatter))\n\t\tfree_nodes(nodes);\n\n\tif (num_no_multi_nodes == 0 ||\n\t    (num_no_multi_nodes == 1 && num_nodes_used == 1))\n\t\treturn rc;\n\n\t/* if we've reached this point we're a multi-node job and have selected\n\t * a node which requested to not be used for multi-node jobs.  We'll\n\t * mark the job as a job which will use multiple nodes and use tail\n\t * recursion to resatisfy the job without the nodes which are marked\n\t * as no multi-node jobs\n\t */\n\tresresv->will_use_multinode = 1;\n\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, resresv->name,\n\t\t  \"Used multiple nodes with no_multinode_job=true: Resatisfy\");\n\n\tif (!nspec_arr.empty())\n\t\tfree_nspecs(nspec_arr);\n\n\treturn eval_complex_selspec(policy, spec, ninfo_arr, pl, resresv, flags, nspec_arr, err);\n}\n\n/**\n * @brief\n * \t\teval a non-plused select spec for satisfiability\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tchk\t-\tthe chunk to satisfy\n * @param[in]\tpninfo_arr\t-\tthe array of nodes\n * @param[in]\tpl\t-\tplacement information (from -l place)\n * @param[in]\tresresv\t-\tthe job the spec is from - needed for resvs\n * @param[in]\tflags\t-\tflags to change the function's behavior\n *\t      \t\t\t\t\tEVAL_OKBREAK - ok to break chunk up across vnodes\n *\t      \t\t\t\t\tEVAL_EXCLSET - allocate entire nodelist exclusively\n * @param[out]\tnspec_arr\t-\tthe node solution\n * @param[out]\terr\t-\terror structure to return error information\n *\n * @return\tbool\n * @retval\ttrue\t: if the select spec is satisfiable\n * @retval\tfalse\t: if not\n *\n */\nbool\neval_simple_selspec(status *policy, chunk *chk, node_info **pninfo_arr,\n\t\t    place *pl, resource_resv 
*resresv, unsigned int flags,\n\t\t    std::vector<nspec *> &nspec_arr, schd_error *err)\n{\n\tint chunks_found = 0;\t\t      /* number of nodes found to satisfy a subspec */\n\tresource_req *specreq_noncons = NULL; /* non-consumable resources requested by spec */\n\tresource_req *specreq_cons = NULL;    /* consumable resources requested by spec */\n\tresource_req *req = NULL;\t      /* used to determine if we're done */\n\tresource_req *prevreq = NULL;\t      /* used to determine if we're done */\n\tresource_req *tmpreq = NULL;\t      /* used to unlink and free */\n\tbool need_new_nspec = false;\t      /* need to allocate a new nspec for node solution */\n\n\tbool allocated = false; /* did we allocate resources to a vnode */\n\tint i = 0;\n\tint k = 0;\n\n\tchar *str_chunk = NULL; /* ptr to after the number of chunks in the str_chunk */\n\n\tnode_info **ninfo_arr = NULL;\n\n\tstatic schd_error *failerr = NULL;\n\n\tresource_req *aoereq = NULL;\n\tnspec *ns = NULL;\n\n\tstd::vector<nspec *> nsa;\n\n\tif (chk == NULL || pninfo_arr == NULL || resresv == NULL || pl == NULL)\n\t\treturn false;\n\n\tif (failerr == NULL) {\n\t\tfailerr = new_schd_error();\n\t\tif (failerr == NULL) {\n\t\t\tset_schd_error_codes(err, NOT_RUN, SCHD_ERROR);\n\t\t\treturn false;\n\t\t}\n\t}\n\n\t/* if it's OK to break across vnodes, but we can fully fit on one\n\t * vnode, then let's do that rather than possibly breaking across multiple\n\t */\n\tif ((flags & EVAL_OKBREAK) &&\n\t    can_fit_on_vnode(chk->req, pninfo_arr)) {\n\t\tflags &= ~EVAL_OKBREAK;\n\t}\n\n\t/* we need to dup the nodes to handle indirect resources which need to be\n\t * accounted for between nodes allocated to the job.  The only time we\n\t * need to account for this is when we're breaking a chunk across\n\t * vnodes.  
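(When we dup, each allocated nspec's ninfo points into the duplicated\n\t * array; it is mapped back to the matching node in pninfo_arr once the\n\t * chunk is placed.)  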
Otherwise the entire chunk is going onto 1 vnode.\n\t */\n\tif (flags & EVAL_OKBREAK) {\n\t\tninfo_arr = dup_nodes(pninfo_arr, resresv->server, NO_FLAGS);\n\t\tif (ninfo_arr == NULL) {\n\t\t\tset_schd_error_codes(err, NOT_RUN, SCHD_ERROR);\n\t\t\treturn false;\n\t\t}\n\t} else\n\t\tninfo_arr = pninfo_arr;\n\n\t/* find the requested resources part of a chunk, not the number requested */\n\tfor (i = 0; isdigit(chk->str_chunk[i]); i++)\n\t\t;\n\n\tif (chk->str_chunk[i] == ':')\n\t\ti++;\n\n\tstr_chunk = &chk->str_chunk[i];\n\n\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_NODE, LOG_DEBUG,\n\t\t   resresv->name, \"Evaluating subchunk: %s\", str_chunk);\n\n\t/* We're duplicating the entire list here.  This list is organized so that\n\t * all non-consumable resources come before the consumable ones.  After\n\t * duplicating, we split it into the consumable and non-consumable lists.\n\t */\n\tspecreq_noncons = dup_resource_req_list(chk->req);\n\tclear_schd_error(failerr);\n\n\tif (specreq_noncons == NULL) {\n\t\tset_schd_error_codes(err, NOT_RUN, SCHD_ERROR);\n\n\t\tif (flags & EVAL_OKBREAK)\n\t\t\tfree_nodes(ninfo_arr);\n\t\treturn false;\n\t}\n\n\tif (resresv->aoename != NULL) {\n\t\tresresv->is_prov_needed = 1;\n\t\taoereq = find_resource_req(specreq_noncons, allres[\"aoe\"]);\n\t\t/* Provisionable node needed only if placement=pack or\n\t\t * subchunk consists aoe resource request.\n\t\t */\n\t\tif (!pl->pack && aoereq == NULL)\n\t\t\tresresv->is_prov_needed = 0;\n\t}\n\n\tfor (req = specreq_noncons; req != NULL && req->type.is_non_consumable;\n\t     prevreq = req, req = req->next)\n\t\t;\n\n\tspecreq_cons = req;\n\tif (prevreq != NULL)\n\t\tprevreq->next = NULL;\n\telse\n\t\tspecreq_noncons = NULL; /* no non-consumable resources */\n\n\tns = new nspec();\n\n\tfor (i = 0; ninfo_arr[i] != NULL && chunks_found == 0; i++) {\n\t\tif (ninfo_arr[i]->nscr)\n\t\t\tcontinue;\n\n\t\tallocated = false;\n\t\tclear_schd_error(err);\n\t\tif (ninfo_arr[i]->lic_lock) {\n\n\t\t\tif 
(need_new_nspec) {\n\t\t\t\tneed_new_nspec = false;\n\t\t\t\tns = new nspec();\n\t\t\t}\n\n\t\t\tif (is_vnode_eligible_chunk(specreq_noncons, ninfo_arr[i], resresv, err)) {\n\t\t\t\tif (specreq_cons != NULL)\n\t\t\t\t\tallocated = resources_avail_on_vnode(specreq_cons, ninfo_arr[i],\n\t\t\t\t\t\t\t\t\t     pl, resresv, flags, ns, err);\n\t\t\t\tif (allocated) {\n\t\t\t\t\tnsa.push_back(ns);\n\t\t\t\t\tneed_new_nspec = true;\n\t\t\t\t\tns->seq_num = chk->seq_num;\n\t\t\t\t\tns->sub_seq_num = get_sched_rank();\n\n\t\t\t\t\tif (flags & EVAL_OKBREAK) {\n\t\t\t\t\t\t/* search through requested consumable resources for resources we've\n\t\t\t\t\t\t * completely allocated.  We'll unlink and free the resource_req\n\t\t\t\t\t\t */\n\t\t\t\t\t\tprevreq = NULL;\n\t\t\t\t\t\treq = specreq_cons;\n\t\t\t\t\t\twhile (req != NULL) {\n\t\t\t\t\t\t\tif (req->amount == 0) {\n\t\t\t\t\t\t\t\ttmpreq = req;\n\t\t\t\t\t\t\t\tif (prevreq == NULL)\n\t\t\t\t\t\t\t\t\treq = specreq_cons = req->next;\n\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t\treq = prevreq->next = req->next;\n\n\t\t\t\t\t\t\t\tfree_resource_req(tmpreq);\n\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\tprevreq = req;\n\t\t\t\t\t\t\t\treq = req->next;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (specreq_cons == NULL) {\n\t\t\t\t\t\t\tchunks_found = 1;\n\t\t\t\t\t\t\t/* we found our solution, we don't need any more nspec's */\n\t\t\t\t\t\t\tneed_new_nspec = false;\n\t\t\t\t\t\t\tns->end_of_chunk = 1;\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\t/* Replace the dup'd node with the real one, but only if we dup'd the nodes */\n\t\t\t\t\t\tif (pninfo_arr != ninfo_arr) {\n\t\t\t\t\t\t\t/* Need to call find_node_by_rank() over indrank since eval_placement might dup the nodes */\n\t\t\t\t\t\t\tns->ninfo = find_node_by_rank(pninfo_arr, ns->ninfo->rank);\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tchunks_found = 1;\n\t\t\t\t\t\t/* we found our solution, we don't need any more nspec's */\n\t\t\t\t\t\tneed_new_nspec = false;\n\t\t\t\t\t\tns->end_of_chunk = 
1;\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tninfo_arr[i]->nscr |= NSCR_VISITED;\n\t\t\t\t\tif (failerr->status_code == SCHD_UNKWN)\n\t\t\t\t\t\tcopy_schd_error(failerr, err);\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tninfo_arr[i]->nscr |= NSCR_VISITED;\n\t\t\t\tif (failerr->status_code == SCHD_UNKWN)\n\t\t\t\t\tcopy_schd_error(failerr, err);\n\t\t\t}\n\n\t\t} else\n\t\t\tset_schd_error_codes(err, NOT_RUN, NODE_UNLICENSED);\n\n\t\tif (err->error_code != SUCCESS) {\n\t\t\tschdlogerr(PBSEVENT_DEBUG3, PBS_EVENTCLASS_NODE, LOG_DEBUG,\n\t\t\t\t   ninfo_arr[i]->name, NULL, err);\n\t\t\t/* Since this node is not eligible, check if it is ever eligible.\n\t\t\t * If it is never eligible, mark all nodes like it as visited.\n\t\t\t * If we can break, don't bother with equivalence classes\n\t\t\t * because the chunk is pretty much equivalent to ncpus=1 at that point\n\t\t\t */\n\t\t\tif (ninfo_arr[i]->nodesig_ind >= 0 && !(flags & EVAL_OKBREAK)) {\n\t\t\t\tif (check_avail_resources(ninfo_arr[i]->res, chk->req,\n\t\t\t\t\t\t\t  COMPARE_TOTAL | UNSET_RES_ZERO | CHECK_ALL_BOOLS,\n\t\t\t\t\t\t\t  policy->resdef_to_check_no_hostvnode,\n\t\t\t\t\t\t\t  INSUFFICIENT_RESOURCE, err) == 0) {\n\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_NODE, LOG_DEBUG,\n\t\t\t\t\t\t   \"\", \"Marking nodes with signature %s ineligible\", ninfo_arr[i]->nodesig);\n\t\t\t\t\tfor (k = 0; ninfo_arr[k] != NULL; k++) {\n\t\t\t\t\t\tif (ninfo_arr[k]->nodesig_ind == ninfo_arr[i]->nodesig_ind) {\n\t\t\t\t\t\t\tninfo_arr[k]->nscr |= NSCR_VISITED;\n\t\t\t\t\t\t\tif (i != k)\n\t\t\t\t\t\t\t\tschdlogerr(PBSEVENT_DEBUG3, PBS_EVENTCLASS_NODE, LOG_DEBUG,\n\t\t\t\t\t\t\t\t\t   ninfo_arr[k]->name, NULL, err);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t} else {\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_NODE, LOG_DEBUG,\n\t\t\t\t  ninfo_arr[i]->name, \"Node allocated to job\");\n\t\t}\n\t}\n\n\tif (specreq_cons != NULL)\n\t\tfree_resource_req_list(specreq_cons);\n\tif (specreq_noncons != 
NULL)\n\t\tfree_resource_req_list(specreq_noncons);\n\n\tif (flags & EVAL_OKBREAK)\n\t\tfree_nodes(ninfo_arr);\n\n\tif (chunks_found) {\n\t\tnspec_arr.insert(nspec_arr.end(), nsa.begin(), nsa.end());\n\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_NODE, LOG_DEBUG,\n\t\t\t   resresv->name, \"Allocated one subchunk: %s\", str_chunk);\n\t\tclear_schd_error(err);\n\t\treturn true;\n\t}\n\n\t/* If we didn't allocate any nodes, we need to clean up any nspecs\n\t * we've allocated.  This is either the one we allocated at the front of\n\t * the main loop above, or all the nodes we allocated and now need to\n\t * \"deallocate\" for a reason such as exclhost\n\t */\n\tfree_nspecs(nsa);\n\tnsa.clear();\n\n\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_NODE, LOG_DEBUG, resresv->name,\n\t\t   \"Failed to satisfy subchunk: %s\", chk->str_chunk);\n\n\t/* If the last node we looked at was fine, err would be empty.\n\t * Actually return a real error */\n\tif (err->status_code == SCHD_UNKWN && failerr->status_code != SCHD_UNKWN)\n\t\tmove_schd_error(err, failerr);\n\t/* don't be so specific in the comment since it's only for a single node */\n\tset_schd_error_arg(err, ARG1, NULL);\n\treturn false;\n}\n\n/**\n * @brief\n * \t\tevaluate one node to see if it is statically\n *\t\t\t    eligible to run the request\n *\n * @param[in]\tnode\t-\tthe node to evaluate\n * @param[in]\tresresv\t-\tresource resv which is requesting\n * @param[in]\tpl\t-\tplace spec for request\n * @param[out]\terr\t-\terror status if node is ineligible\n *\n * @par NOTE:\n * \t\tall resources in the chunk will be honored regardless of\n *      whether they are in conf.res_to_check or not due to the fact\n *\t\tthat chunk resources only contain 
such resources.\n *\n * @retval\ttrue\t: if node is statically eligible to run the request\n * @retval\tfalse\t: if node is ineligible\n *\n */\nbool\nis_vnode_eligible(node_info *node, resource_resv *resresv,\n\t\t  struct place *pl, schd_error *err)\n{\n\tif (node == NULL || resresv == NULL || pl == NULL || err == NULL)\n\t\treturn false;\n\n\tbool job_in_maintenance_resv = false;\n\tif (resresv->job && resresv->job->resv)\n\t\tif (resresv->job->resv->name[0] == 'M') /* maintenance resv IDs begin with 'M' */\n\t\t\tjob_in_maintenance_resv = true;\n\n\t/* A node is invalid for an exclusive job if jobs/resvs are running on it,\n\t * except when the job is in a maintenance reservation\n\t * NOTE: this check must be the first check or exclhost may break\n\t */\n\tif (!job_in_maintenance_resv && is_excl(pl, node->sharing) &&\n\t    (node->num_jobs > 0 || node->num_run_resv > 0)) {\n\t\tset_schd_error_codes(err, NOT_RUN, NODE_NOT_EXCL);\n\t\tset_schd_error_arg(err, ARG1, resresv->is_job ? \"Job\" : \"Reservation\");\n\t\treturn false;\n\t}\n\n\t/* Does this chunk have EOE? */\n\tif (resresv->eoename != NULL) {\n\t\tif (!is_eoe_avail_on_vnode(node, resresv)) {\n\t\t\tset_schd_error_codes(err, NOT_RUN, EOE_NOT_AVALBL);\n\t\t\tset_schd_error_arg(err, ARG1, resresv->eoename);\n\t\t\treturn false;\n\t\t}\n\t}\n\n\tif (!node->is_free) {\n\t\tset_schd_error_codes(err, NOT_RUN, INVALID_NODE_STATE);\n\t\tset_schd_error_arg(err, ARG1, (char *) node_state_to_str(node));\n#ifdef NAS /* localmod 031 */\n\t\tset_schd_error_arg(err, ARG2, node->name);\n#endif /* localmod 031 */\n\t\treturn false;\n\t}\n\n\t/*\n\t * If we are in the reservation's universe, we need to check the state of\n\t * the node in the server's universe.  
We may have provisioned the node\n\t * and it could be down.\n\t */\n\tif (resresv->job != NULL && resresv->job->resv != NULL) {\n\t\tif (node->svr_node != NULL) {\n\t\t\tif (node->svr_node->is_provisioning) {\n\t\t\t\tset_schd_error_codes(err, NOT_RUN, INVALID_NODE_STATE);\n#ifdef NAS /* localmod 031 */\n\t\t\t\tset_schd_error_arg(err, ARG1, node->name);\n\t\t\t\tset_schd_error_arg(err, ARG2, node_state_to_str(node->svr_node));\n#else\n\t\t\t\tset_schd_error_arg(err, ARG1, (char *) node_state_to_str(node->svr_node));\n#endif /* localmod 031 */\n\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\t}\n\n\tif (resresv->is_resv && !node->resv_enable) {\n\t\tset_schd_error_codes(err, NOT_RUN, NODE_RESV_ENABLE);\n\t\treturn false;\n\t}\n\n\tif (resresv->is_job) {\n\t\t/* don't enforce max run limits if the job is being qrun */\n\t\tif (resresv->server->qrun_job == NULL) {\n\t\t\tif (node->max_running != SCHD_INFINITY &&\n\t\t\t    node->max_running <= node->num_jobs) {\n\t\t\t\tset_schd_error_codes(err, NOT_RUN, NODE_JOB_LIMIT_REACHED);\n\t\t\t\treturn false;\n\t\t\t}\n\n\t\t\tif (node->max_user_run != SCHD_INFINITY &&\n\t\t\t    node->max_user_run <= find_counts_elm(node->user_counts, resresv->user, NULL, NULL, NULL)) {\n\t\t\t\tset_schd_error_codes(err, NOT_RUN, NODE_USER_LIMIT_REACHED);\n\t\t\t\treturn false;\n\t\t\t}\n\n\t\t\tif (node->max_group_run != SCHD_INFINITY &&\n\t\t\t    node->max_group_run <= find_counts_elm(node->group_counts, resresv->group, NULL, NULL, NULL)) {\n\t\t\t\tset_schd_error_codes(err, NOT_RUN, NODE_GROUP_LIMIT_REACHED);\n\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\t}\n\n\tif (node->no_multinode_jobs && resresv->will_use_multinode) {\n\t\tset_schd_error_codes(err, NOT_RUN, NODE_NO_MULT_JOBS);\n\t\treturn false; /* multi-node jobs/resvs are not allowed on this node */\n\t}\n\n\treturn true;\n}\n/**\n * @brief\n * \t\tcheck if a vnode is eligible for a chunk\n *\n * @param[in]\tspecreq\t-\tresources from chunk\n * @param[in]\tnode\t-\tvnode to check\n * @param[in]\tresresv\t-\tresource resv which is requesting\n * 
@param[out]\terr\t-\terror structure\n *\n * @return\tbool\n * @retval\ttrue\t: eligible\n * @retval\tfalse\t: not eligible\n */\nbool\nis_vnode_eligible_chunk(resource_req *specreq, node_info *node,\n\t\t\tresource_resv *resresv, schd_error *err)\n{\n\tif (resresv != NULL) {\n\t\tif (node->no_multinode_jobs && resresv->will_use_multinode) {\n\t\t\tset_schd_error_codes(err, NOT_RUN, NODE_NO_MULT_JOBS);\n\t\t\treturn false; /* multi-node jobs/resvs are not allowed on this node */\n\t\t}\n\t}\n\n\tif (specreq != NULL) {\n\t\tif (check_avail_resources(node->res, specreq,\n\t\t\t\t\t  CHECK_ALL_BOOLS | ONLY_COMP_NONCONS | UNSET_RES_ZERO,\n\t\t\t\t\t  INSUFFICIENT_RESOURCE, err) == 0) {\n\t\t\treturn false;\n\t\t}\n\t}\n\n\treturn true;\n}\n\n/**\n * @brief\n *\tChecks if a vnode is eligible for power operations.\n *  Based on is_provisionable.\n *\n * @par Functionality:\n *\tThis function checks if a vnode is eligible for power operations.\n *\tA vnode is eligible for power operations if it satisfies all of the\n *\tfollowing conditions:\n *\t(1) Server has power_provisioning True,\n *\t(2) Vnode has power_provisioning True,\n *\t(3) No conflicts with reservations already running on the Vnode\n *\t(4) No conflicts with jobs already running on the Vnode\n *\n * @param[in]\t\tnode\t-\tpointer to node_info\n * @param[in]\t\tresresv\t-\tpointer to resource_resv\n * @param[in,out]\terr\t\t-\tpointer to schd_error\n *\n * @return\tint\n * @retval\t NO_PROVISIONING_NEEDED : resresv doesn't request eoe\n *\t\t\tor resresv is not a job\n * @retval\t PROVISIONING_NEEDED : vnode doesn't have current_eoe set\n *          or it doesn't match job eoe\n * @retval\t NOT_PROVISIONABLE  : vnode is not provisionable\n *\t\t\t(see err for more details)\n *\n * @par Side Effects:\n *\tUnknown\n *\n * @par MT-safe: No\n *\n */\nint\nis_powerok(node_info *node, resource_resv *resresv, schd_error *err)\n{\n\tint ret = NO_PROVISIONING_NEEDED;\n\n\tif (!resresv->is_job)\n\t\treturn 
NO_PROVISIONING_NEEDED;\n\tif (resresv->eoename == NULL)\n\t\treturn NO_PROVISIONING_NEEDED;\n\tif (!resresv->server->power_provisioning) {\n\t\terr->error_code = PROV_DISABLE_ON_SERVER;\n\t\treturn NOT_PROVISIONABLE;\n\t}\n\tif (!node->power_provisioning) {\n\t\terr->error_code = PROV_DISABLE_ON_NODE;\n\t\treturn NOT_PROVISIONABLE;\n\t}\n\n\t/* node doesn't have eoe or it doesn't match job eoe */\n\tif (node->current_eoe == NULL ||\n\t    strcmp(resresv->eoename, node->current_eoe) != 0) {\n\t\tret = PROVISIONING_NEEDED;\n\n\t\t/* there can't be any jobs on the node */\n\t\tif ((node->num_susp_jobs > 0) || (node->num_jobs > 0)) {\n\t\t\terr->error_code = PROV_RESRESV_CONFLICT;\n\t\t\treturn NOT_PROVISIONABLE;\n\t\t}\n\t}\n\n\t/* node cannot be shared between a running reservation without EOE\n\t * and a job with EOE\n\t */\n\tif (node->run_resvs_arr) {\n\t\tfor (int i = 0; node->run_resvs_arr[i]; i++) {\n\t\t\tif (node->run_resvs_arr[i]->eoename == NULL) {\n\t\t\t\terr->error_code = PROV_RESRESV_CONFLICT;\n\t\t\t\treturn NOT_PROVISIONABLE;\n\t\t\t}\n\t\t}\n\t}\n\n\treturn ret;\n}\n\n/**\n * @brief\n * \t\tcheck to see if there are enough\n *\t\tconsumable resources on a vnode to make it\n *\t\teligible for a request\n *\t\tNote: This function will allocate <= 1 chunk\n *\n * @param[in,out] specreq_cons - IN : requested consumable resources\n *\t\t\t\t  OUT: requested - allocated resources\n * @param[in]\tnode\t-\tthe node to evaluate\n * @param[in]\tpl\t-\tplace spec for request\n * @param[in]\tresresv\t-\tresource resv which is requesting\n * @param[in]\tflags\t-\tflags to change behavior of function\n *              \t\t\tEVAL_OKBREAK - OK to break chunk across vnodes\n * @param[out]\terr\t-\terror status if node is ineligible\n *\n *\t@retval\ttrue\t: if resources were allocated from the node\n *\t@retval false\t: if sufficient resources are not available (err is set)\n */\nbool\nresources_avail_on_vnode(resource_req *specreq_cons, node_info *node,\n\t\t\t place 
*pl, resource_resv *resresv, unsigned int flags,\n\t\t\t nspec *ns, schd_error *err)\n{\n\t/* used for allocating partial chunks */\n\tresource_req tmpreq = {0};\n\tresource_req *req;\n\tresource_req *newreq, *aoereq;\n\tlong long num_chunks = 0;\n\tint is_p;\n\n\tif (specreq_cons == NULL || node == NULL ||\n\t    resresv == NULL || pl == NULL || err == NULL)\n\t\treturn false;\n\n\tif (flags & EVAL_OKBREAK) {\n\t\tbool allocated = false;\n\n\t\t/* req is the first consumable resource at this point */\n\t\tfor (req = specreq_cons; req != NULL; req = req->next) {\n\t\t\tif (req->type.is_consumable) {\n\t\t\t\tauto num = req->amount;\n\t\t\t\ttmpreq.amount = 1;\n\n\t\t\t\ttmpreq.name = req->name;\n\t\t\t\ttmpreq.type = req->type;\n\t\t\t\ttmpreq.res_str = req->res_str;\n\t\t\t\ttmpreq.def = req->def;\n\t\t\t\ttmpreq.next = NULL;\n\t\t\t\tnum_chunks = check_resources_for_node(&tmpreq, node, resresv, err);\n\n\t\t\t\tif (num_chunks > 0) {\n\t\t\t\t\tis_p = is_provisionable(node, resresv, err);\n\t\t\t\t\tif (is_p == NOT_PROVISIONABLE) {\n\t\t\t\t\t\tallocated = false;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t} else if (is_p == PROVISIONING_NEEDED) {\n\t\t\t\t\t\tif (ns != NULL)\n\t\t\t\t\t\t\tns->go_provision = 1;\n\t\t\t\t\t\t/* Do not set current aoe/eoe on the node when placement is scatter/vscatter\n\t\t\t\t\t\t * because in case of scatter/vscatter placement we are not working on duplicate\n\t\t\t\t\t\t * copy of nodes.\n\t\t\t\t\t\t */\n\t\t\t\t\t\tif (resresv->select->total_chunks > 1 && pl->scatter != 1 && pl->vscatter != 1)\n\t\t\t\t\t\t\tset_current_aoe(node, resresv->aoename);\n\t\t\t\t\t\tif (resresv->is_job) {\n\t\t\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_NOTICE, resresv->name,\n\t\t\t\t\t\t\t\t   \"Vnode %s selected for provisioning with AOE %s\", node->name.c_str(), resresv->aoename);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\t/* check if power eoe needs to be considered */\n\t\t\t\t\tis_p = is_powerok(node, resresv, err);\n\t\t\t\t\tif (is_p == 
NOT_PROVISIONABLE) {\n\t\t\t\t\t\tallocated = false;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t} else if (is_p == PROVISIONING_NEEDED) {\n\t\t\t\t\t\tif (resresv->select->total_chunks > 1 && pl->scatter != 1 && pl->vscatter != 1)\n\t\t\t\t\t\t\tset_current_eoe(node, resresv->eoename);\n\n\t\t\t\t\t\tif (resresv->is_job)\n\t\t\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_NOTICE, resresv->name,\n\t\t\t\t\t\t\t\t   \"Vnode %s selected for power with EOE %s\", node->name.c_str(), resresv->eoename);\n\t\t\t\t\t}\n\n\t\t\t\t\tif (num_chunks > num)\n\t\t\t\t\t\tnum_chunks = num;\n\n\t\t\t\t\tif (ns != NULL) {\n\t\t\t\t\t\tnewreq = dup_resource_req(req);\n\t\t\t\t\t\tif (newreq == NULL)\n\t\t\t\t\t\t\treturn false;\n\n\t\t\t\t\t\tnewreq->amount = num_chunks;\n\n\t\t\t\t\t\tif (ns->ninfo == NULL) /* check if this is the first res */\n\t\t\t\t\t\t\tns->ninfo = node;\n\n\t\t\t\t\t\tnewreq->next = ns->resreq;\n\t\t\t\t\t\tns->resreq = newreq;\n\t\t\t\t\t}\n\n\t\t\t\t\t/* now that we have allocated some resources to this node, we need\n\t\t\t\t\t * to remove them from the requested amount\n\t\t\t\t\t */\n\t\t\t\t\treq->amount -= num_chunks;\n\n\t\t\t\t\tauto res = find_resource(node->res, req->def);\n\t\t\t\t\tif (res != NULL) {\n\t\t\t\t\t\tif (res->indirect_res != NULL)\n\t\t\t\t\t\t\tres->indirect_res->assigned += num_chunks;\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\tres->assigned += num_chunks;\n\t\t\t\t\t}\n\n\t\t\t\t\t/* use tmpreq to wrap the amount so we can use res_to_str */\n\t\t\t\t\ttmpreq.amount = num_chunks;\n\n\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_NODE, LOG_DEBUG, node->name,\n\t\t\t\t\t\t   \"vnode allocated %s=%s\", req->name, res_to_str(&tmpreq, RF_REQUEST));\n\n\t\t\t\t\tallocated = true;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif (allocated) {\n\t\t\tif (ns != NULL && ns->go_provision != 0) {\n\t\t\t\taoereq = create_resource_req(\"aoe\", resresv->aoename);\n\t\t\t\tif (aoereq != NULL) {\n\t\t\t\t\taoereq->next = ns->resreq;\n\t\t\t\t\tns->resreq = 
aoereq;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (pl->pack && num_chunks == 1 && cstat.smp_dist == SMP_ROUND_ROBIN)\n\t\t\t\tpbs_strncpy(last_node_name, node->name.c_str(), sizeof(last_node_name));\n\t\t\treturn true;\n\t\t}\n\t} else {\n\t\tnum_chunks = check_resources_for_node(specreq_cons, node, resresv, err);\n\t\tif (num_chunks > 0) {\n\t\t\tis_p = is_provisionable(node, resresv, err);\n\t\t\tif (is_p == NOT_PROVISIONABLE)\n\t\t\t\treturn false;\n\n\t\t\telse if (is_p == PROVISIONING_NEEDED) {\n\t\t\t\tif (ns != NULL)\n\t\t\t\t\tns->go_provision = 1;\n\t\t\t\tif (resresv->select->total_chunks > 1 && pl->scatter != 1 && pl->vscatter != 1) {\n\t\t\t\t\tset_current_aoe(node, resresv->aoename);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tis_p = is_powerok(node, resresv, err);\n\t\t\tif (is_p == NOT_PROVISIONABLE)\n\t\t\t\treturn false;\n\t\t\telse if (is_p == PROVISIONING_NEEDED) {\n\t\t\t\tif (resresv->select->total_chunks > 1 && pl->scatter != 1 && pl->vscatter != 1) {\n\t\t\t\t\tset_current_eoe(node, resresv->eoename);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t/* if we have infinite amount of resources, the node has been allocated */\n\t\tif (num_chunks == SCHD_INFINITY)\n\t\t\tnum_chunks = 1;\n\n\t\tif (ns != NULL && num_chunks != 0) {\n\t\t\tns->ninfo = node;\n\t\t\tns->resreq = dup_resource_req_list(specreq_cons);\n\n\t\t\tif (ns->go_provision != 0) {\n\t\t\t\taoereq = create_resource_req(\"aoe\", resresv->aoename);\n\t\t\t\tif (aoereq != NULL) {\n\t\t\t\t\taoereq->next = ns->resreq;\n\t\t\t\t\tns->resreq = aoereq;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (pl->pack && cstat.smp_dist == SMP_ROUND_ROBIN)\n\t\t\t\tpbs_strncpy(last_node_name, node->name.c_str(), sizeof(last_node_name));\n\t\t}\n\t\treturn num_chunks ? 
true : false;\n\t}\n\n\treturn false;\n}\n\n/**\n * @brief\n * \t\tcheck to see how many chunks can fit on a\n *\t\tnode looking at both resources available\n *\t\tnow and future advanced reservations\n *\n * @param[in]\tresreq\t-\trequested resources\n * @param[in]\tnode    -\tnode to check for\n * @param[in]\tresresv -\tthe resource resv to check for\n * @param[out]\terr    -\tschd_error reply if there aren't enough resources\n *\n * @par NOTE:\n * \t\tall resources in resreq will be honored regardless of\n *      whether they are in conf.res_to_check or not due to the fact\n *\t\tthat chunk resources only contain such resources.\n *\n * @retval\n * \t\tnumber of chunks which can be satisfied during the duration\n * @retval\t-1\t: on error\n */\nlong long\ncheck_resources_for_node(resource_req *resreq, node_info *ninfo,\n\t\t\t resource_resv *resresv, schd_error *err)\n{\n\t/* the minimum number of chunks which can be satisfied for the duration\n\t * of the request\n\t */\n\tlong long min_chunks = UNSPECIFIED;\n\tlong long chunks = UNSPECIFIED; /*number of chunks which can be satisfied*/\n\n\ttime_t end_time;\n\n\ttimed_event *event;\n\n\tif (resreq == NULL || ninfo == NULL || err == NULL || resresv == NULL)\n\t\treturn -1;\n\n\tauto noderes = ninfo->res;\n\n\tmin_chunks = check_avail_resources(noderes, resreq,\n\t\t\t\t\t   CHECK_ALL_BOOLS | UNSET_RES_ZERO, INSUFFICIENT_RESOURCE, err);\n\n\tif (chunks != UNSPECIFIED && (min_chunks == SCHD_INFINITY || chunks < min_chunks))\n\t\tmin_chunks = chunks;\n\n\tauto calendar = ninfo->server->calendar;\n\tauto cur_time = ninfo->server->server_time;\n\tif (resresv->duration != resresv->hard_duration &&\n\t    exists_resv_event(calendar, cur_time + resresv->hard_duration))\n\t\tend_time = cur_time + calc_time_left(resresv, 1);\n\telse\n\t\tend_time = cur_time + calc_time_left(resresv, 0);\n\n\t/* check if there are any timed events to check for conflicts with. 
We do not\n\t * need to check for timed conflicts if the current object is a job inside a\n\t * reservation.\n\t */\n\tif (min_chunks > 0 && exists_run_event(calendar, end_time) && !(resresv->job != NULL && resresv->job->resv != NULL)) {\n\t\t/* Check for possible conflicts with timed events by walking the sorted\n\t\t * event list that was created in eval_selspec. This runs a simulation\n\t\t * forward in time to account for timed events consuming and/or releasing\n\t\t * resources.\n\t\t *\n\t\t * For example, if a resource_resv such as a reservation is consuming n cpus\n\t\t * from t1 to t2, then the resources should be taken out at t1 and returned\n\t\t * at t2.\n\t\t */\n\t\tauto nres = dup_ind_resource_list(noderes);\n\t\tauto resresv_excl = is_excl(resresv->place_spec, ninfo->sharing);\n\n\t\tif (nres != NULL) {\n\t\t\t/* Walk the event list by time such that the start of an event always\n\t\t\t * precedes the end of it. The event type (start or end event) is\n\t\t\t * determined, and the resources are consumed if a start event, and\n\t\t\t * released if an end event.\n\t\t\t */\n\t\t\tevent = get_next_event(calendar);\n\t\t\tconst auto event_mask = TIMED_RUN_EVENT | TIMED_END_EVENT;\n\n\t\t\tfor (event = find_init_timed_event(event, IGNORE_DISABLED_EVENTS, event_mask);\n\t\t\t     event != NULL && min_chunks > 0;\n\t\t\t     event = find_next_timed_event(event, IGNORE_DISABLED_EVENTS, event_mask)) {\n\t\t\t\tauto event_time = event->event_time;\n\t\t\t\tauto resc_resv = static_cast<resource_resv *>(event->event_ptr);\n\t\t\t\tnspec *ns;\n\n\t\t\t\tif (event_time < cur_time)\n\t\t\t\t\tcontinue;\n\t\t\t\tif (resc_resv->job != NULL && resc_resv->job->resv != NULL)\n\t\t\t\t\tcontinue;\n\n\t\t\t\tif (!resc_resv->nspec_arr.empty())\n\t\t\t\t\tns = find_nspec_by_rank(resc_resv->nspec_arr, ninfo->rank);\n\t\t\t\telse {\n\t\t\t\t\tns = NULL;\n\t\t\t\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_SCHED, LOG_WARNING, resresv->name,\n\t\t\t\t\t\t   \"Event %s is a 
run/end event w/o nspec array, ignoring event\", event->name.c_str());\n\t\t\t\t}\n\n\t\t\t\tauto is_run_event = (event->event_type == TIMED_RUN_EVENT);\n\n\t\t\t\tif ((event_time < end_time) && resresv != resc_resv && ns != NULL) {\n\t\t\t\t\t/* One event will need provisioning while the other will not;\n\t\t\t\t\t * they cannot coexist at the same time.\n\t\t\t\t\t */\n\t\t\t\t\tif (resresv->aoename != NULL && resc_resv->aoename == NULL) {\n\t\t\t\t\t\tset_schd_error_codes(err, NOT_RUN, PROV_RESRESV_CONFLICT);\n\t\t\t\t\t\tmin_chunks = 0;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\n\t\t\t\t\tif (is_excl(resc_resv->place_spec, ninfo->sharing) || resresv_excl) {\n\t\t\t\t\t\tmin_chunks = 0;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tfor (auto cur_res = nres; cur_res != NULL; cur_res = cur_res->next) {\n\t\t\t\t\t\t\tif (cur_res->type.is_consumable) {\n\t\t\t\t\t\t\t\tauto req = find_resource_req(ns->resreq, cur_res->def);\n\t\t\t\t\t\t\t\tif (req != NULL) {\n\t\t\t\t\t\t\t\t\tcur_res->assigned += is_run_event ? 
req->amount : -req->amount;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (is_run_event) {\n\t\t\t\t\t\t\tchunks = check_avail_resources(nres, resreq,\n\t\t\t\t\t\t\t\t\t\t       CHECK_ALL_BOOLS | UNSET_RES_ZERO, INSUFFICIENT_RESOURCE, err);\n\t\t\t\t\t\t\tif (chunks < min_chunks)\n\t\t\t\t\t\t\t\tmin_chunks = chunks;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tfree_resource_list(nres);\n\t\t} else {\n\t\t\tset_schd_error_codes(err, NOT_RUN, SCHD_ERROR);\n\t\t\treturn -1;\n\t\t}\n\n\t\tif (min_chunks == 0) {\n\t\t\tif (err->error_code != PROV_RESRESV_CONFLICT)\n\t\t\t\tset_schd_error_codes(err, NOT_RUN, RESERVATION_CONFLICT);\n\t\t}\n\t}\n\n\treturn min_chunks;\n}\n\n/**\n * @brief compare two place specs to see if they are equal\n * @param[in] pl1 - place spec 1\n * @param[in] pl2 - place spec 2\n * @return  1 if equal 0 if not\n */\nint\ncompare_place(place *pl1, place *pl2)\n{\n\tif (pl1 == NULL && pl2 == NULL)\n\t\treturn 1;\n\telse if (pl1 == NULL || pl2 == NULL)\n\t\treturn 0;\n\n\tif (pl1->excl != pl2->excl)\n\t\treturn 0;\n\n\tif (pl1->exclhost != pl2->exclhost)\n\t\treturn 0;\n\n\tif (pl1->share != pl2->share)\n\t\treturn 0;\n\n\tif (pl1->free != pl2->free)\n\t\treturn 0;\n\n\tif (pl1->pack != pl2->pack)\n\t\treturn 0;\n\n\tif (pl1->scatter != pl2->scatter)\n\t\treturn 0;\n\n\tif (pl1->vscatter != pl2->vscatter)\n\t\treturn 0;\n\n\tif (pl1->group != NULL && pl2->group != NULL) {\n\t\tif (strcmp(pl1->group, pl2->group))\n\t\t\treturn 0;\n\t} else if (pl1->group != NULL || pl2->group != NULL)\n\t\treturn 0;\n\n\treturn 1;\n}\n\n/**\n * @brief\n *\t\tparse_placespec - allocate a new place structure and parse\n *\t\ta placement spec (-l place)\n *\n * @param[in]\tplace_str\t-\tplacespec as a string\n *\n * @return\tnewly allocated place\n * @retval\tNULL\t: invalid placement spec\n *\n */\nplace *\nparse_placespec(char *place_str)\n{\n\t/* copy place spec into - max log size should be big enough */\n\tchar 
str[MAX_LOG_SIZE];\n\tchar *tok;\n\tchar *tokptr;\n\tint invalid = 0;\n\tplace *pl;\n\n\tif (place_str == NULL)\n\t\treturn NULL;\n\n\tpl = new_place();\n\n\tif (pl == NULL)\n\t\treturn NULL;\n\n\tpbs_strncpy(str, place_str, sizeof(str));\n\n\ttok = string_token(str, \":\", &tokptr);\n\n\twhile (tok != NULL && !invalid) {\n\t\tif (!strcmp(tok, PLACE_Pack))\n\t\t\tpl->pack = 1;\n\t\telse if (!strcmp(tok, PLACE_Scatter))\n\t\t\tpl->scatter = 1;\n\t\telse if (!strcmp(tok, PLACE_Excl))\n\t\t\tpl->excl = 1;\n\t\telse if (!strcmp(tok, PLACE_Free))\n\t\t\tpl->free = 1;\n\t\telse if (!strcmp(tok, PLACE_Shared))\n\t\t\tpl->share = 1;\n\t\telse if (!strcmp(tok, PLACE_VScatter))\n\t\t\tpl->vscatter = 1;\n\t\telse if (!strcmp(tok, PLACE_ExclHost)) {\n\t\t\tpl->exclhost = 1;\n\t\t\tpl->excl = 1;\n\t\t} else if (!strncmp(tok, PLACE_Group, 5)) {\n\t\t\t/* format: group=res */\n\t\t\tif (tok[5] == '=') {\n\t\t\t\t/* \"group=\" is 6 characters so tok[6] should be the first character of\n\t\t\t\t * the resource\n\t\t\t\t */\n\t\t\t\tpl->group = string_dup(&tok[6]);\n\t\t\t} else\n\t\t\t\tinvalid = 1;\n\t\t} else\n\t\t\tinvalid = 1;\n\n\t\ttok = string_token(NULL, \":\", &tokptr);\n\t}\n\n\t/* pack, scatter, vscatter, and free are all mutually exclusive */\n\tif (pl->pack + pl->scatter + pl->free + pl->vscatter > 1)\n\t\tinvalid = 1;\n\n\t/* if no scatter, vscatter, pack, or free given, default to free */\n\tif (pl->pack + pl->scatter + pl->free + pl->vscatter == 0)\n\t\tpl->free = 1;\n\n\tif (invalid) {\n\t\tfree_place(pl);\n\t\treturn NULL;\n\t}\n\n\treturn pl;\n}\n\n/**\n * @brief\n * \t\tparse a select spec into a selspec structure with\n *\t\ta dependent array of chunks.  
Non-consumable resources\n *\t\tare sorted first in the chunk resource list\n *\n * @param[in]\tselspec\t-\tthe select spec to parse\n *\n * @return\tselspec*\n * @retval\tpointer to a selspec obtained by parsing the select spec\n *\t\t\tof the job/resv.\n * @retval\tNULL\t: on error or invalid spec\n *\n * @par MT-safe: Yes\n */\nselspec *\nparse_selspec(const std::string &sspec)\n{\n\t/* select specs can be large.  We need to allocate a buffer large enough\n\t * to hold the spec; it is freed before we return.\n\t */\n\tchar *specbuf = NULL;\n\tchar *tmpptr;\n\n\tselspec *spec;\n\tint num_plus;\n\tconst char *p;\n\n\tchar *tok;\n\tchar *endp = NULL;\n\n\tresource_req *req_head = NULL;\n\tresource_req *req_end = NULL;\n\tresource_req *req;\n\tint invalid = 0;\n\n\tint num_kv;\n\tstruct key_value_pair *kv = NULL;\n\tint nkvelements = 0;\n\n\tint num_chunks;\n\tint num_cpus = 0;\n\n\tint i;\n\tint n = 0;\n\n\tconst char *select_spec = sspec.c_str();\n\n\tif ((spec = new selspec()) == NULL)\n\t\treturn NULL;\n\n\tfor (num_plus = 0, p = select_spec; *p != '\\0'; p++) {\n\t\tif (*p == '+')\n\t\t\tnum_plus++;\n\t}\n\n\t/* num_plus + 2: 1 for the initial chunk 1 for the NULL ptr */\n\tif ((spec->chunks = static_cast<chunk **>(calloc(num_plus + 2, sizeof(chunk *)))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\tdelete spec;\n\t\treturn NULL;\n\t}\n\n\tspecbuf = string_dup(select_spec);\n\tif (specbuf == NULL) {\n\t\tdelete spec;\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\ttok = string_token(specbuf, \"+\", &endp);\n\n\ttmpptr = NULL;\n\twhile (tok != NULL && !invalid) {\n\t\ttmpptr = string_dup(tok);\n\t\tauto ret = parse_chunk_r(tok, &num_chunks, &num_kv, &nkvelements, &kv, NULL);\n\n\t\tif (!ret) {\n\t\t\tfor (i = 0; i < num_kv && !invalid; i++) {\n\t\t\t\treq = create_resource_req(kv[i].kv_keyw, kv[i].kv_val);\n\n\t\t\t\tif (req == NULL)\n\t\t\t\t\tinvalid = 1;\n\t\t\t\telse 
{\n\t\t\t\t\tif (strcmp(req->name, \"ncpus\") == 0) {\n\t\t\t\t\t\t/* Given: -l select=nchunk1:ncpus=Y + nchunk2:ncpus=Z +... */\n\t\t\t\t\t\t/* Then: # of cpus = (nchunk1 * Y) + (nchunk2 * Z) + ... */\n\t\t\t\t\t\tnum_cpus += (num_chunks * req->amount);\n\t\t\t\t\t}\n\t\t\t\t\tconst auto &rtc = conf.res_to_check;\n\t\t\t\t\tif (!invalid && (req->type.is_boolean || rtc.empty() || rtc.find(kv[i].kv_keyw) != rtc.end())) {\n\t\t\t\t\t\tspec->defs.insert(req->def);\n\t\t\t\t\t\tif (req_head == NULL)\n\t\t\t\t\t\t\treq_end = req_head = req;\n\t\t\t\t\t\telse {\n\t\t\t\t\t\t\tif (req->type.is_consumable) {\n\t\t\t\t\t\t\t\treq_end->next = req;\n\t\t\t\t\t\t\t\treq_end = req;\n\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\treq->next = req_head;\n\t\t\t\t\t\t\t\treq_head = req;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t} else\n\t\t\t\t\t\tfree_resource_req(req);\n\t\t\t\t}\n\t\t\t}\n\t\t\tspec->chunks[n] = new_chunk();\n\t\t\tif (spec->chunks[n] != NULL) {\n\t\t\t\tspec->chunks[n]->num_chunks = num_chunks;\n\t\t\t\tspec->chunks[n]->seq_num = get_sched_rank();\n\t\t\t\tspec->total_chunks += num_chunks;\n\t\t\t\tspec->total_cpus = num_cpus;\n\t\t\t\tspec->chunks[n]->req = req_head;\n\t\t\t\tspec->chunks[n]->str_chunk = tmpptr;\n\t\t\t\ttmpptr = NULL;\n\t\t\t\treq_head = NULL;\n\t\t\t\treq_end = NULL;\n\t\t\t\tn++;\n\t\t\t} else\n\t\t\t\tinvalid = 1;\n\t\t} else\n\t\t\tinvalid = 1;\n\n\t\ttok = string_token(NULL, \"+\", &endp);\n\t}\n\tfree(kv);\n\n\tif (invalid) {\n\t\tdelete spec;\n\t\tif (tmpptr != NULL)\n\t\t\tfree(tmpptr);\n\n\t\tfree(specbuf);\n\t\treturn NULL;\n\t}\n\n\tfree(specbuf);\n\n\treturn spec;\n}\n\n/**\n *\t@brief compare two chunks for equality\n *\t@param[in] c1 - first chunk\n * \t@param[in] c2 - second chunk\n *\n *\t@return int\n *\t@retval 1 if chunks are equal\n *\t@retval 0 if chunks are not equal\n */\nint\ncompare_chunk(chunk *c1, chunk *c2)\n{\n\tif (c1 == NULL && c2 == NULL)\n\t\treturn 1;\n\tif (c1 == NULL || c2 == NULL)\n\t\treturn 0;\n\n\tif 
(c1->num_chunks != c2->num_chunks)\n\t\treturn 0;\n\tif (compare_resource_req_list(c1->req, c2->req, conf.resdef_to_check) == 0)\n\t\treturn 0;\n\n\treturn 1;\n}\n\n/**\n *\t@brief compare two selspecs for equality\n *\t@param[in] s1 - first selspec\n *\t@param[in] s2 - second selspec\n *\n *\t@returns int\n *\t@retval 1 if selspecs are equal\n *\t@retval 0 if not equal\n */\nint\ncompare_selspec(selspec *s1, selspec *s2)\n{\n\tint ret = 1;\n\n\tif (s1 == NULL && s2 == NULL)\n\t\treturn 1;\n\telse if (s1 == NULL || s2 == NULL)\n\t\treturn 0;\n\n\tif (s1->total_chunks != s2->total_chunks)\n\t\treturn 0;\n\n\tif (s1->chunks != NULL && s2->chunks != NULL) {\n\t\tfor (int i = 0; ret && s1->chunks[i] != NULL; i++) {\n\t\t\tif (compare_chunk(s1->chunks[i], s2->chunks[i]) == 0)\n\t\t\t\tret = 0;\n\t\t}\n\t} else\n\t\tret = 0;\n\treturn ret;\n}\n\n/**\n * @brief\n * \t\tcreate an execvnode from a node solution array\n *\n * @param[in]\tns\t-\tthe nspec struct with the chosen nodes to run the job on\n *\n * @par MT-safe:\tno\n *\n * @return\texecvnode in static memory\n *\n */\nchar *\ncreate_execvnode(std::vector<nspec *> &ns_arr)\n{\n\tstatic char *execvnode = NULL;\n\tstatic int execvnode_size = 0;\n\tstatic char *buf = NULL;\n\tstatic int bufsize = 0;\n\tchar buf2[128];\n\tresource_req *req;\n\tbool end_of_chunk = true;\n\n\tif (execvnode == NULL) {\n\t\texecvnode = static_cast<char *>(malloc(INIT_ARR_SIZE + 1));\n\n\t\tif (execvnode == NULL) {\n\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\treturn NULL;\n\t\t}\n\t\texecvnode_size = INIT_ARR_SIZE;\n\t}\n\tif (buf == NULL) {\n\t\tbuf = static_cast<char *>(malloc(INIT_ARR_SIZE + 1));\n\t\tif (buf == NULL) {\n\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\treturn NULL;\n\t\t}\n\t\tbufsize = INIT_ARR_SIZE;\n\t}\n\texecvnode[0] = '\\0';\n\n\tbool need_plus = false;\n\tfor (const auto &ns : ns_arr) {\n\t\tif (need_plus)\n\t\t\tstrcpy(buf, \"+\");\n\t\telse {\n\t\t\tneed_plus = true;\n\t\t\tbuf[0] = 
'\\0';\n\t\t}\n\n\t\tif (end_of_chunk) {\n\t\t\tif (pbs_strcat(&buf, &bufsize, \"(\") == NULL)\n\t\t\t\treturn NULL;\n\t\t}\n\n\t\tif (pbs_strcat(&buf, &bufsize, ns->ninfo->name.c_str()) == NULL)\n\t\t\treturn NULL;\n\n\t\tend_of_chunk = ns->end_of_chunk;\n\n\t\treq = ns->resreq;\n\t\twhile (req != NULL) {\n\t\t\tif (req->type.is_consumable) {\n\t\t\t\tif (pbs_strcat(&buf, &bufsize, \":\") == NULL)\n\t\t\t\t\treturn NULL;\n\t\t\t\tif (pbs_strcat(&buf, &bufsize, req->name) == NULL)\n\t\t\t\t\treturn NULL;\n\t\t\t\tif (req->type.is_float)\n\t\t\t\t\tsprintf(buf2, \"=%.*f\", float_digits(req->amount, FLOAT_NUM_DIGITS), req->amount);\n\t\t\t\telse\n\n\t\t\t\t\tsprintf(buf2, \"=%.0lf%s\", ceil(req->amount),\n\t\t\t\t\t\treq->type.is_size ? \"kb\" : \"\");\n\t\t\t\tif (pbs_strcat(&buf, &bufsize, buf2) == NULL)\n\t\t\t\t\treturn NULL;\n\t\t\t} else if (ns->go_provision && strcmp(req->name, \"aoe\") == 0) {\n\t\t\t\tstrcpy(buf2, \":aoe=\");\n\t\t\t\tif (pbs_strcat(&buf, &bufsize, buf2) == NULL)\n\t\t\t\t\treturn NULL;\n\t\t\t\tif (pbs_strcat(&buf, &bufsize, req->res_str) == NULL)\n\t\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\treq = req->next;\n\t\t}\n\t\tif (end_of_chunk)\n\t\t\tif (pbs_strcat(&buf, &bufsize, \")\") == NULL)\n\t\t\t\treturn NULL;\n\n\t\tif (pbs_strcat(&execvnode, &execvnode_size, buf) == NULL)\n\t\t\treturn NULL;\n\t}\n\n\treturn execvnode;\n}\n\n/**\n * @brief\n *\t\tparse_execvnode - parse an execvnode into an nspec array\n *\n * @param[in]\texecvnode\t-\tthe execvnode to parse\n * @param[in]\tsinfo\t\t-\tserver to get the nodes from\n * @param[in]\tsel\t\t\t- select to map\n *\n * @return\ta newly allocated nspec array for the execvnode\n *\n */\nstd::vector<nspec *>\nparse_execvnode(char *execvnode, server_info *sinfo, selspec *sel)\n{\n\tchar *simplespec;\n\tchar *excvndup;\n\tchar *node_name;\n\tint num_el;\n\tstruct key_value_pair *kv = NULL;\n\n\tstd::vector<nspec *> nspec_arr;\n\tnode_info *ninfo;\n\tresource_req *req;\n\tint i, j;\n\n\tbool invalid = 
false;\n\n\tint num_chunk;\n\tint nlkv = 0;\n\tchar *p;\n\tchar *tailptr = NULL;\n\tint hp;\n\tint cur_chunk_num = 0;\n\tint cur_tot_chunks = 0;\n\tint chunks_ind;\n\tint num_paren = 0;\n\tint in_superchunk = 0;\n\n\tif (execvnode == NULL || sinfo == NULL)\n\t\treturn {};\n\n\tp = execvnode;\n\n\t/* number of chunks is number of pluses + 1 */\n\tnum_chunk = 1;\n\n\twhile (p != NULL && *p != '\\0') {\n\t\tif (*p == '+')\n\t\t\tnum_chunk++;\n\t\tif (*p == '(')\n\t\t\tnum_paren++;\n\n\t\tp++;\n\t}\n\n\t/* Number of chunks in exec_vnode doesn't match selspec; don't map chunks */\n\tif (sel != NULL && num_paren != sel->total_chunks)\n\t\tsel = NULL;\n\n\tif ((excvndup = string_dup(execvnode)) == NULL)\n\t\treturn {};\n\n\tsimplespec = parse_plus_spec_r(excvndup, &tailptr, &hp);\n\tif (hp > 0) /* simplespec starts with '(' but doesn't close */\n\t\tin_superchunk = 1;\n\n\tif (simplespec == NULL)\n\t\tinvalid = true;\n\telse if (parse_node_resc_r(simplespec, &node_name, &num_el, &nlkv, &kv) != 0)\n\t\tinvalid = true;\n\n\tif (sel != NULL) {\n\t\tcur_tot_chunks = sel->chunks[0]->num_chunks;\n\t\tchunks_ind = 0;\n\t}\n\tfor (i = 0; i < num_chunk && !invalid && simplespec != NULL; i++) {\n\t\tauto ns = new nspec();\n\t\tnspec_arr.push_back(ns);\n\t\tninfo = find_node_info(sinfo->nodes, node_name);\n\t\tif (ninfo != NULL) {\n\t\t\tns->ninfo = ninfo;\n\t\t\tfor (j = 0; j < num_el; j++) {\n\t\t\t\treq = create_resource_req(kv[j].kv_keyw, kv[j].kv_val);\n\t\t\t\tif (req != NULL) {\n\t\t\t\t\tif (ns->resreq == NULL)\n\t\t\t\t\t\tns->resreq = req;\n\t\t\t\t\telse {\n\t\t\t\t\t\treq->next = ns->resreq;\n\t\t\t\t\t\tns->resreq = req;\n\t\t\t\t\t}\n\t\t\t\t} else\n\t\t\t\t\tinvalid = true;\n\t\t\t}\n\t\t} else {\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG, node_name,\n\t\t\t\t  \"Exechost contains a node that does not exist.\");\n\t\t\tinvalid = true;\n\t\t}\n\t\tif (sel != NULL) {\n\t\t\t/* This shouldn't happen since we checked above to make sure we could map properly */\n\t\t\tif (sel->chunks[chunks_ind] == NULL) {\n\t\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_NODE, LOG_WARNING, __func__, \"Select spec and exec_vnode/resv_nodes cannot be mapped\");\n\t\t\t\tfree_nspecs(nspec_arr);\n\t\t\t\treturn {};\n\t\t\t}\n\t\t\tns->chk = sel->chunks[chunks_ind];\n\t\t\tns->seq_num = ns->chk->seq_num;\n\t\t}\n\t\tif (!in_superchunk || hp < 0) {\n\t\t\tns->end_of_chunk = 1;\n\t\t\tif (sel != NULL) {\n\t\t\t\tcur_chunk_num++;\n\t\t\t\tif (cur_chunk_num == cur_tot_chunks) {\n\t\t\t\t\tchunks_ind++;\n\t\t\t\t\tif (sel->chunks[chunks_ind] != NULL) {\n\t\t\t\t\t\tcur_tot_chunks = sel->chunks[chunks_ind]->num_chunks;\n\t\t\t\t\t\tcur_chunk_num = 0;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif (!invalid) {\n\t\t\tsimplespec = parse_plus_spec_r(tailptr, &tailptr, &hp);\n\t\t\tif (simplespec != NULL) {\n\t\t\t\tint ret;\n\n\t\t\t\tif (hp > 0) /* simplespec starts with '(' but doesn't end with ')' */\n\t\t\t\t\tin_superchunk = 1;\n\t\t\t\telse if (hp < 0) /* simplespec ends with ')' but does not start with '(' */\n\t\t\t\t\tin_superchunk = 0;\n\t\t\t\t/* hp == 0: simplespec either starts and ends with '(' ')' or has neither */\n\n\t\t\t\tret = parse_node_resc_r(simplespec, &node_name, &num_el, &nlkv, &kv);\n\t\t\t\tif (ret < 0)\n\t\t\t\t\tinvalid = true;\n\t\t\t}\n\t\t}\n\t}\n\n\tfree(kv);\n\tfree(excvndup);\n\n\tif (invalid) {\n\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_NODE, LOG_WARNING, __func__,\n\t\t\t   \"Failed to parse execvnode: %s\", execvnode);\n\t\tfree_nspecs(nspec_arr);\n\t\treturn {};\n\t}\n\n\treturn nspec_arr;\n}\n\n/**\n * @brief\n *\t\tnode_state_to_str - convert a node's state into a string for printing\n *\n * @param[in]\tninfo\t-\tthe node\n *\n * @return\tstatic string of node state\n *\n */\nconst char *\nnode_state_to_str(node_info *ninfo)\n{\n\tif (ninfo == NULL)\n\t\treturn \"\";\n\n\tif (ninfo->is_job_busy)\n\t\treturn ND_jobbusy;\n\n\tif (ninfo->is_free)\n\t\treturn ND_free;\n\n\tif 
(ninfo->is_down)\n\t\treturn ND_down;\n\n\tif (ninfo->is_offline)\n\t\treturn ND_offline;\n\n\tif (ninfo->is_resv_exclusive)\n\t\treturn ND_resv_exclusive;\n\n\tif (ninfo->is_job_exclusive)\n\t\treturn ND_job_exclusive;\n\n\tif (ninfo->is_busy)\n\t\treturn ND_busy;\n\n\tif (ninfo->is_stale)\n\t\treturn ND_Stale;\n\n\tif (ninfo->is_provisioning)\n\t\treturn ND_prov;\n\n\tif (ninfo->is_sleeping)\n\t\treturn ND_sleep;\n\n\tif (ninfo->is_maintenance)\n\t\treturn ND_maintenance;\n\n\t/* default */\n\treturn ND_state_unknown;\n}\n\n/**\n * @brief\n *\t\tcombine_nspec_array - find and combine any nspec's for the same node\n *\t\tin an nspec array.  Because nspecs no longer map to the original chunks\n *\t\tthey came from, seq_num and chk no longer have meaning.  They are cleared.\n *\n * @param[in]\tnspec_arr\t-\tarray to combine\n *\n * @return\tvector<nspec *>\n * @retval\tcombined nspec array (up to caller to free)\n * @retval\tempty vector if nspec_arr is empty\n *\n */\nstd::vector<nspec *>\ncombine_nspec_array(const std::vector<nspec *> &nspec_arr)\n{\n\tstd::vector<nspec *> new_nspec_arr;\n\tnspec *new_ns;\n\tstd::unordered_map<int, nspec *> nspec_umap;\n\n\tif (nspec_arr.empty())\n\t\treturn {};\n\n\tfor (const auto &ns : nspec_arr) {\n\t\tauto numap = nspec_umap.find(ns->ninfo->rank);\n\t\tif (numap == nspec_umap.end()) {\n\t\t\tnew_ns = new nspec();\n\n\t\t\tnspec_umap[ns->ninfo->rank] = new_ns;\n\n\t\t\tnew_ns->end_of_chunk = true;\n\t\t\tnew_ns->ninfo = ns->ninfo;\n\t\t\tnew_ns->resreq = dup_resource_req_list(ns->resreq);\n\t\t\tnew_nspec_arr.push_back(new_ns);\n\t\t} else {\n\t\t\tnew_ns = numap->second;\n\t\t\tfor (auto req_j = ns->resreq; req_j != NULL; req_j = req_j->next) {\n\t\t\t\tauto req_i = find_resource_req(new_ns->resreq, req_j->def);\n\t\t\t\tif (req_i != NULL) {\n\t\t\t\t\t/* we assume that if the resource is a boolean or a string\n\t\t\t\t\t * the value is either the same, or doesn't exist\n\t\t\t\t\t * so we don't need to do validity checking\n\t\t\t\t\t 
*/\n\t\t\t\t\tif (req_j->type.is_consumable)\n\t\t\t\t\t\treq_i->amount += req_j->amount;\n\t\t\t\t\telse if (req_j->type.is_string && req_i->res_str == NULL)\n\t\t\t\t\t\treq_i->res_str = string_dup(req_j->res_str);\n\t\t\t\t} else { /* This is the first time we've seen this resource */\n\t\t\t\t\tresource_req *tmpreq;\n\t\t\t\t\ttmpreq = dup_resource_req(req_j);\n\t\t\t\t\ttmpreq->next = new_ns->resreq;\n\t\t\t\t\tnew_ns->resreq = tmpreq;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\treturn new_nspec_arr;\n}\n\n/**\n * @brief\n *\t\tcreate a node_info array by copying the ninfo pointers out of a nspec array\n *\n * @param[in]\tnspec_arr\t-\tsource nspec array\n *\n * @return\tnew node_info array\n * @retval\tNULL\t: on error\n *\n */\nnode_info **\ncreate_node_array_from_nspec(std::vector<nspec *> &nspec_arr)\n{\n\tstd::unordered_map<std::string, node_info *> node_umap;\n\tnode_info **ninfo_arr;\n\tint j = 0;\n\n\tif (nspec_arr.empty())\n\t\treturn NULL;\n\n\tif ((ninfo_arr = static_cast<node_info **>(calloc(nspec_arr.size() + 1, sizeof(node_info *)))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\tfor (const auto &ns : nspec_arr) {\n\t\tif (node_umap.find(ns->ninfo->name) == node_umap.end())\n\t\t\tnode_umap[ns->ninfo->name] = ns->ninfo;\n\t}\n\n\tfor (const auto &numap : node_umap)\n\t\tninfo_arr[j++] = numap.second;\n\n\treturn ninfo_arr;\n}\n\n/**\n * @brief\n *\treorder the nodes for the avoid_provision or smp_cluster_dist policies\n *\tor when the reservation is being altered, without changing the source\n * \tarray.  We do so by holding our own static array of node pointers that\n * \twe will sort for the different policies.\n *\n * @param[in]\tnodes\t: nodes to reorder\n * @param[in]\tresresv : job or reservation for which reorder is done\n *\n * @see\tlast_node_name - the last allocated node - used for round_robin\n *\n * @return\tnode_info **\n * @retval\treordered list of nodes if needed\n * @retval\t'nodes' parameter if nodes do not need reordering\n * @retval\tNULL\t: on error\n *\n * @par Side Effects:\n *      local variable node_array holds onto memory in the heap for reuse.\n *      The caller should not free the returned array.\n *\n * @par MT-safe:\tNo\n */\nnode_info **\nreorder_nodes(node_info **nodes, resource_resv *resresv)\n{\n\tstatic node_info **node_array = NULL;\n\tstatic int node_array_size = 0;\n\tnode_info **nptr = NULL;\n\tnode_info **tmparr = NULL;\n\tschd_resource *hostres = NULL;\n\tschd_resource *cur_hostres = NULL;\n\tint nsize = 0;\n\tint i = 0;\n\tint j = 0;\n\tint k = 0;\n\n\tif (nodes == NULL)\n\t\treturn NULL;\n\n\tif (resresv == NULL && conf.provision_policy == AVOID_PROVISION)\n\t\treturn NULL;\n\n\tnsize = count_array(nodes);\n\n\tif ((node_array_size < nsize + 1) || node_array == NULL) {\n\t\ttmparr = static_cast<node_info **>(realloc(node_array, sizeof(node_info *) * (nsize + 1)));\n\t\tif (tmparr == NULL) {\n\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\treturn NULL;\n\t\t}\n\n\t\tnode_array = tmparr;\n\t\tnode_array_size = nsize + 1;\n\t}\n\ttmparr = NULL;\n\n\tnode_array[0] = NULL;\n\tnptr = node_array;\n\n\tif (last_node_name[0] == '\\0' && nodes[0] != NULL)\n\t\tpbs_strncpy(last_node_name, nodes[0]->name.c_str(), sizeof(last_node_name));\n\n\tif (resresv != NULL) {\n\t\tif (resresv->aoename != NULL && conf.provision_policy == AVOID_PROVISION) {\n\t\t\tmemcpy(nptr, nodes, (nsize + 1) * sizeof(node_info *));\n\n\t\t\tif (cmp_aoename != NULL)\n\t\t\t\tfree(cmp_aoename);\n\n\t\t\tcmp_aoename = 
string_dup(resresv->aoename);\n\t\t\tqsort(nptr, nsize, sizeof(node_info *), cmp_aoe);\n\n\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, resresv->name,\n\t\t\t\t   \"Re-sorted the nodes on aoe %s, since aoe was requested\", resresv->aoename);\n\n\t\t\treturn nptr;\n\t\t}\n\t}\n\n\tswitch (cstat.smp_dist) {\n\t\tcase SMP_NODE_PACK:\n\t\t\tnptr = nodes;\n\t\t\tbreak;\n\n\t\tcase SMP_ROUND_ROBIN:\n\n\t\t\tif ((tmparr = static_cast<node_info **>(calloc(node_array_size, sizeof(node_info *)))) == NULL) {\n\t\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\t\treturn NULL;\n\t\t\t}\n\n\t\t\tmemcpy(tmparr, nodes, nsize * sizeof(node_info *));\n\t\t\tqsort(tmparr, nsize, sizeof(node_info *), cmp_node_host);\n\n\t\t\tfor (i = 0; i < nsize && tmparr[i]->name != last_node_name; i++)\n\t\t\t\t;\n\n\t\t\tif (i < nsize) {\n\t\t\t\t/* find the vnode of the next host or the end of the list since the\n\t\t\t\t * beginning will definitely be a different host because of our sort\n\t\t\t\t */\n\t\t\t\thostres = find_resource(tmparr[i]->res, allres[\"host\"]);\n\t\t\t\tif (hostres != NULL) {\n\t\t\t\t\tfor (; i < nsize; i++) {\n\t\t\t\t\t\tcur_hostres = find_resource(tmparr[i]->res, allres[\"host\"]);\n\t\t\t\t\t\tif (cur_hostres != NULL) {\n\t\t\t\t\t\t\tif (!compare_res_to_str(cur_hostres, hostres->str_avail[0], CMP_CASELESS))\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/* copy from our last location to the end */\n\t\t\tfor (j = 0, k = i; k < nsize; j++, k++)\n\t\t\t\tnptr[j] = tmparr[k];\n\n\t\t\t/* copy from the beginning to our last location */\n\t\t\tfor (k = 0; k < i; j++, k++)\n\t\t\t\tnptr[j] = tmparr[k];\n\n\t\t\tnptr[j] = NULL;\n\n\t\t\tfree(tmparr);\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\tnptr = nodes;\n\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_FILE, LOG_NOTICE, \"\", \"Invalid smp_cluster_dist value\");\n\t}\n\n\treturn nptr;\n}\n\n/**\n * @brief\n *\t\tok_break_chunk - is it OK to break up a chunk on a list of 
nodes?\n *\n * @param[in]\tresresv\t-\tthe requestor (unused for the moment)\n * @param[in]\tnodes   -\tthe list of nodes to check\n *\n * @return\tint\n * @retval\t1\t: if it's OK to break up chunks across the nodes\n * @retval\t0\t: if it is not\n *\n */\nint\nok_break_chunk(resource_resv *resresv, node_info **nodes)\n{\n\tint i;\n\tschd_resource *hostres = NULL;\n\tschd_resource *res;\n\n\tif (resresv == NULL || nodes == NULL)\n\t\treturn 0;\n\n\tfor (i = 0; nodes[i] != NULL; i++) {\n\t\tres = find_resource(nodes[i]->res, allres[\"host\"]);\n\t\tif (res != NULL) {\n\t\t\tif (hostres == NULL)\n\t\t\t\thostres = res;\n\t\t\telse {\n\t\t\t\tif (match_string_array(hostres->str_avail, res->str_avail) != SA_FULL_MATCH) {\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_NODE, LOG_WARNING,\n\t\t\t\t  nodes[i]->name, \"Node has no host resource\");\n\t\t}\n\t}\n\n\tif (nodes[i] == NULL)\n\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tis_excl - is a request/node combination exclusive?  
This is based\n *\t\t  on both the place directive of the request and the\n *\t\t  sharing attribute of the node\n *\n * @param[in]\tpl\t-\tplace directive of the request\n * @param[in]\tsharing\t-\tsharing attribute of the node\n *\n * @return\tint\n * @retval\t1\t: if exclusive\n * @retval\t0\t: if not\n *\n * @note\n *\t\tAssumes if pl is NULL, no excl/shared request was given\n */\nint\nis_excl(place *pl, enum vnode_sharing sharing)\n{\n\tif (sharing == VNS_FORCE_EXCL || sharing == VNS_FORCE_EXCLHOST)\n\t\treturn 1;\n\n\tif (sharing == VNS_IGNORE_EXCL)\n\t\treturn 0;\n\n\tif (pl != NULL) {\n\t\tif (pl->excl)\n\t\t\treturn 1;\n\n\t\tif (pl->share)\n\t\t\treturn 0;\n\t}\n\n\tif (sharing == VNS_DFLT_EXCL || sharing == VNS_DFLT_EXCLHOST)\n\t\treturn 1;\n\n\tif (sharing == VNS_DFLT_SHARED)\n\t\treturn 0;\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\ttake a node solution and extend it by allocating the rest of\n *\t\ta node array to it.\n *\n * @param[in,out]\tnsa\t-\tcurrently allocated node solution to be\n *\t\t\t\t\t\t\textended with ninfo_arr\n * @param[in] ninfo_arr - node array to allocate to nspec array\n *\n * @note\n *\t\tnsa must be allocated by the caller\n *\n * @return\tint\n * @retval\t1\t: on success\n * @retval\t0\t: on error (nsa may have been modified)\n *\n */\nint\nalloc_rest_nodepart(std::vector<nspec *> &nsa, node_info **ninfo_arr)\n{\n\tint max_seq_num = 0;\n\n\tif (ninfo_arr == NULL)\n\t\treturn 0;\n\n\t/* find the end of our current node solution.  While we're searching, find\n\t * the highest nspec sequence number.  
We will use this for the sequence\n\t * number for the rest of the nodepart\n\t */\n\tfor (const auto &ns : nsa)\n\t\tif (ns->seq_num > max_seq_num)\n\t\t\tmax_seq_num = ns->seq_num;\n\n\tfor (int i = 0; ninfo_arr[i] != NULL; i++) {\n\t\tconst auto ns = find_nspec(nsa, ninfo_arr[i]);\n\n\t\t/* node not part of solution */\n\t\tif (ns == NULL) {\n\t\t\tauto nns = new nspec();\n\t\t\tnns->ninfo = ninfo_arr[i];\n\t\t\tnns->end_of_chunk = 1;\n\t\t\tnns->seq_num = max_seq_num;\n\n\t\t\tnns->sub_seq_num = get_sched_rank();\n\t\t\tnsa.push_back(nns);\n\t\t}\n\t}\n\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tdetermine if a chunk can fit on one vnode in node list\n *\n * @param[in]\treq\t-\trequested resources to compare to nodes\n * @param[in]\tninfo_arr\t-\tnode array\n *\n * @par NOTE:\n * \t\tall resources in req will be honored regardless of\n * \t    whether they are in conf.res_to_check or not due to the fact\n *\t\tthat chunk resources only contain such resources\n *\n * @return\tint\n * @retval\t1\t: chunk can fit in 1 vnode\n * @retval\t0\t: chunk can not fit / error\n *\n */\nint\ncan_fit_on_vnode(resource_req *req, node_info **ninfo_arr)\n{\n\tint i;\n\tstatic schd_error *dumperr = NULL;\n\n\tif (req == NULL || ninfo_arr == NULL)\n\t\treturn 0;\n\n\tif (dumperr == NULL) {\n\t\tdumperr = new_schd_error();\n\t\tif (dumperr == NULL) {\n\t\t\tset_schd_error_codes(dumperr, NOT_RUN, SCHD_ERROR);\n\t\t\treturn 0;\n\t\t}\n\t}\n\n\tfor (i = 0; ninfo_arr[i] != NULL; i++) {\n\t\tclear_schd_error(dumperr);\n\n\t\tif (is_vnode_eligible_chunk(req, ninfo_arr[i], NULL, dumperr)) {\n\t\t\tif (check_avail_resources(ninfo_arr[i]->res, req,\n\t\t\t\t\t\t  UNSET_RES_ZERO, INSUFFICIENT_RESOURCE, NULL))\n\t\t\t\treturn 1;\n\t\t}\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tChecks if an EOE is available on a vnode\n *\n * @par Functionality:\n *\tThis function checks if an EOE is available on a vnode.\n *\n * @see\n *\n * @param[in]\tninfo\t\t-\tpointer to node_info\n * 
@param[in]\tresresv\t\t-\tpointer to resource_resv\n *\n * @return\tint\n * @retval\t 0 : EOE not available\n * @retval\t 1 : EOE available\n *\n * @par Side Effects:\n *\tUnknown\n *\n * @par MT-safe: No\n *\n */\nint\nis_eoe_avail_on_vnode(node_info *ninfo, resource_resv *resresv)\n{\n\tschd_resource *resp;\n\n\tif (ninfo == NULL || resresv == NULL)\n\t\treturn 0;\n\n\tif (resresv->eoename == NULL)\n\t\treturn 0;\n\n\tif ((resp = find_resource(ninfo->res, allres[\"eoe\"])) != NULL)\n\t\treturn is_string_in_arr(resp->str_avail, resresv->eoename);\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tChecks if a vnode is eligible to be provisioned\n *\n * @par Functionality:\n *\t\tThis function checks if a vnode is eligible to be provisioned.\n *\t\tA vnode is eligible to be provisioned if it satisfies all of the\n *\t\tfollowing conditions:-\n *\t\t\t(1) AOE instantiated on the vnode is different from AOE requested by the\n *\t\t\tthe job or reservation,\n *\t\t\t(2) Server has provisioning enabled,\n *\t\t\t(3) Vnode has provisioning enabled,\n *\t\t\t(4) Vnode does not have suspended jobs\n *  \t\t(5) Vnode does not have any running jobs (inside / outside resvs)\n *\t\t\t(6) No conflicts with reservations already running on the Vnode\n *\t\t\t(7) No conflicts with jobs already running on the Vnode\n *\n * @see\n *\t\tshould_backfill_with_job\n *\n * @param[in]\tnode\t\t-\tpointer to node_info\n * @param[in]\tresresv\t\t-\tpointer to resource_resv\n * @param[in]\terr\t\t-\tpointer to schd_error\n *\n * @return\tint\n * @retval\tNO_PROVISIONING_NEEDED\t: resresv doesn't request aoe\n *\t\t\tor node doesn't need provisioning\n * @retval\tPROVISIONING_NEEDED : vnode is provisionable and needs\n *\t\t\tprovisioning\n * @retval\tNOT_PROVISIONABLE  : vnode is not provisionable\n *\t\t\t(see err for more details)\n *\n * @par Side Effects:\n *\t\tUnknown\n *\n * @par MT-safe:\tNo\n *\n */\nint\nis_provisionable(node_info *node, resource_resv *resresv, schd_error *err)\n{\n\tint 
i;\n\tint ret = NO_PROVISIONING_NEEDED;\n\n\tif ((resresv->aoename == NULL && resresv->is_job) ||\n\t    !resresv->is_prov_needed)\n\t\treturn NO_PROVISIONING_NEEDED;\n\n\t/* Perform checks if job is going to provision now or if reservation has aoe. */\n\tif ((resresv->is_job && (node->current_aoe == NULL || strcmp(resresv->aoename, node->current_aoe))) ||\n\t    (resresv->is_resv && resresv->aoename != NULL)) {\n\t\t/* we are inside, it means node requires provisioning... */\n\t\tret = PROVISIONING_NEEDED;\n\n\t\tif (node->is_multivnoded) {\n\t\t\tset_schd_error_codes(err, NOT_RUN, IS_MULTI_VNODE);\n\t\t\treturn NOT_PROVISIONABLE;\n\t\t}\n\n\t\t/* PROV_DISABLE_ON_SERVER is NOT_RUN instead of NEVER RUN.\n\t\t * Even though we can't provision any nodes, there might be\n\t\t * enough nodes in the correct aoe to run the job.\n\t\t */\n\t\tif (!resresv->server->provision_enable) {\n\t\t\tset_schd_error_codes(err, NOT_RUN, PROV_DISABLE_ON_SERVER);\n\t\t\treturn NOT_PROVISIONABLE;\n\t\t}\n\n\t\tif (!node->provision_enable) {\n\t\t\tset_schd_error_codes(err, NOT_RUN, PROV_DISABLE_ON_NODE);\n\t\t\treturn NOT_PROVISIONABLE;\n\t\t}\n\n\t\t/* Provisioning disallowed if there are suspended jobs on the node */\n\t\tif (node->num_susp_jobs > 0) {\n\t\t\tset_schd_error_codes(err, NOT_RUN, PROV_RESRESV_CONFLICT);\n\t\t\treturn NOT_PROVISIONABLE;\n\t\t}\n\n\t\t/* if there are running jobs, inside or outside a resv,\n\t\t * disallow prov.\n\t\t */\n\t\tif (node->num_jobs > 0) {\n\t\t\tset_schd_error_codes(err, NOT_RUN, PROV_RESRESV_CONFLICT);\n\t\t\treturn NOT_PROVISIONABLE;\n\t\t}\n\t}\n\n\t/* node cannot be shared between running reservation without AOE\n\t * and job with AOE\n\t */\n\tif (resresv->is_job && node->run_resvs_arr) {\n\t\tfor (i = 0; node->run_resvs_arr[i]; i++) {\n\t\t\tif (node->run_resvs_arr[i]->aoename == NULL) {\n\t\t\t\tset_schd_error_codes(err, NOT_RUN, PROV_RESRESV_CONFLICT);\n\t\t\t\treturn NOT_PROVISIONABLE;\n\t\t\t}\n\t\t}\n\t}\n\n\t/* node cannot be 
shared between running job with AOE\n\t * and reservation without AOE\n\t */\n\tif (resresv->is_resv && resresv->aoename == NULL &&\n\t    node->job_arr) {\n\t\tfor (i = 0; node->job_arr[i]; i++) {\n\t\t\tif (node->job_arr[i]->aoename != NULL) {\n\t\t\t\tset_schd_error_codes(err, NOT_RUN, PROV_RESRESV_CONFLICT);\n\t\t\t\treturn NOT_PROVISIONABLE;\n\t\t\t}\n\t\t}\n\t}\n\n\treturn ret;\n}\n\n/**\n * @brief\n *\t\thandles everything which happens to a node when it comes back up\n *\n * @param[in]\tnode\t-\tthe node to bring back up\n * @param[in]\targ\t\t-\tNULL param\n *\n * @par Side Effects:\n * \t\tSets the resv-exclusive state if the node had it\n * \t\tpreviously set.\n *\n * @par MT-safe:\tUnknown\n *\n * @retval\t1 - node was successfully brought up\n * @retval\t0 - node couldn't be brought up\n *\n */\nint\nnode_up_event(node_info *node, void *arg)\n{\n\tserver_info *sinfo;\n\n\tif (node == NULL)\n\t\treturn 0;\n\n\t/* Preserve the resv-exclusive state when previously set */\n\tif (node->is_resv_exclusive)\n\t\tset_node_info_state(node, ND_resv_exclusive);\n\telse\n\t\tset_node_info_state(node, ND_free);\n\n\tsinfo = node->server;\n\tif (sinfo->node_group_enable && !sinfo->node_group_key.empty()) {\n\t\tnode_partition_update_array(sinfo->policy, sinfo->nodepart);\n\t\tqsort(sinfo->nodepart, sinfo->num_parts,\n\t\t      sizeof(node_partition *), cmp_placement_sets);\n\t}\n\tupdate_all_nodepart(sinfo->policy, sinfo, NO_ALLPART);\n\n\treturn 1;\n}\n\n/**\n * @brief\n *\t\thandles everything which happens to a node when it goes down\n *\n * @param[in]\tnode\t-\tnode to bring down\n * @param[in]\targ\t\t-\tNULL param\n *\n * @par Side Effects: None\n * @par MT-safe: Unknown\n *\n * @return\tint\n * @retval\t1\t: node was successfully brought down\n * @retval\t0\t: node couldn't be brought down\n *\n */\nint\nnode_down_event(node_info *node, void *arg)\n{\n\tserver_info *sinfo;\n\n\tif (node == NULL)\n\t\treturn 0;\n\n\tsinfo = node->server;\n\tif (node->job_arr != 
NULL) {\n\t\tfor (int i = 0; node->job_arr[i] != NULL; i++) {\n\t\t\tconst char *job_state;\n\n\t\t\tif (node->job_arr[i]->job->can_requeue)\n\t\t\t\tjob_state = \"Q\";\n\t\t\telse\n\t\t\t\tjob_state = \"X\";\n\t\t\tupdate_universe_on_end(sinfo->policy, node->job_arr[i], job_state, NO_ALLPART);\n\t\t}\n\t}\n\n\tset_node_info_state(node, ND_down);\n\n\tif (sinfo->node_group_enable && !sinfo->node_group_key.empty()) {\n\t\tnode_partition_update_array(sinfo->policy, sinfo->nodepart);\n\t\tqsort(sinfo->nodepart, sinfo->num_parts,\n\t\t      sizeof(node_partition *), cmp_placement_sets);\n\t}\n\tupdate_all_nodepart(sinfo->policy, sinfo, NO_ALLPART);\n\n\treturn 1;\n}\n\n/**\n * @brief\n *\t\tcreate an array of unique nodes from the names in a string array\n *\n * @param[in]\tnodes\t-\tnodes to create array from\n * @param[in]\tstrnodes\t-\tstring array of vnode names\n *\n * @return node array\n *\n */\nnode_info **\ncreate_node_array_from_str(node_info **nodes, char **strnodes)\n{\n\tint i, j;\n\tnode_info **ninfo_arr;\n\tint cnt;\n\n\tif (nodes == NULL || strnodes == NULL)\n\t\treturn NULL;\n\n\tcnt = count_array(strnodes);\n\n\tif ((ninfo_arr = static_cast<node_info **>(malloc((cnt + 1) * sizeof(node_info *)))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\tninfo_arr[0] = NULL;\n\n\tfor (i = 0, j = 0; strnodes[i] != NULL; i++) {\n\t\tif (find_node_info(ninfo_arr, strnodes[i]) == NULL) {\n\t\t\tninfo_arr[j] = find_node_info(nodes, strnodes[i]);\n\t\t\tif (ninfo_arr[j] != NULL) {\n\t\t\t\tj++;\n\t\t\t\tninfo_arr[j] = NULL;\n\t\t\t} else\n\t\t\t\tlog_eventf(PBSEVENT_DEBUG2, PBS_EVENTCLASS_NODE, LOG_DEBUG, __func__,\n\t\t\t\t\t   \"Node %s not found in list.\", strnodes[i]);\n\t\t}\n\t}\n\n\treturn ninfo_arr;\n}\n\n/**\n * @brief find a node in the array and return its index\n * @param ninfo_arr - node array\n * @param rank - rank of node to search for\n * @return int\n * @retval 0+ - index in array of node\n * @retval -1 - node not 
found\n */\nint\nfind_node_ind(node_info **ninfo_arr, int rank)\n{\n\tint i;\n\tif (ninfo_arr == NULL)\n\t\treturn -1;\n\n\tfor (i = 0; ninfo_arr[i] != NULL && ninfo_arr[i]->rank != rank; i++)\n\t\t;\n\n\tif (ninfo_arr[i] == NULL)\n\t\treturn -1;\n\n\treturn i;\n}\n\n/**\n * @brief\n * \t\tfind a node by its unique rank\n *\n * @param[in]\tninfo_arr\t-\tnode array to search\n * @param[in] rank\t-\tunique numeric identifier for node\n *\n * @return\tnode_info *\n * @retval\tfound node\n * @retval\tNULL\t: if node is not found\n */\nnode_info *\nfind_node_by_rank(node_info **ninfo_arr, int rank)\n{\n\tint ind;\n\tif (ninfo_arr == NULL)\n\t\treturn NULL;\n\n\tind = find_node_ind(ninfo_arr, rank);\n\tif (ind == -1)\n\t\treturn NULL;\n\treturn ninfo_arr[ind];\n}\n\n/**\n * @brief find a node by indexing into sinfo->unordered_nodes O(1) or\n * \t  by searching for its unique rank O(N) if sinfo->unordered_nodes is unavailable\n *\n * @param[in] ninfo_arr - array of nodes to search\n * @param[in] ind - index into sinfo->unordered_nodes\n * @param[in] rank - node's unique rank\n *\n * @return node_info *\n * @retval found node\n * @retval NULL if node is not found\n */\nnode_info *\nfind_node_by_indrank(node_info **ninfo_arr, int ind, int rank)\n{\n\tif (ninfo_arr == NULL || *ninfo_arr == NULL)\n\t\treturn NULL;\n\n\tif (ninfo_arr[0] == NULL || ninfo_arr[0]->server == NULL || ninfo_arr[0]->server->unordered_nodes == NULL || ind == -1)\n\t\treturn find_node_by_rank(ninfo_arr, rank);\n\n\treturn ninfo_arr[0]->server->unordered_nodes[ind];\n}\n\n/**\n * @brief\n * \t\tdetermine if resresv conflicts based on exclhost state of the\n *\t\tfuture events on this node.\n *\n * @param\tcalendar - server's calendar of events\n * @param\tresresv  - job or resv to check\n * @param\tninfo    - node to check\n *\n * @return int\n * @retval 1\t: no excl conflict\n * @retval 0\t: future excl conflict\n */\nint\nsim_exclhost(event_list *calendar, resource_resv *resresv, node_info 
*ninfo)\n{\n\ttime_t end;\n\n\tif (calendar == NULL || resresv == NULL || ninfo == NULL)\n\t\treturn 1;\n\n\tif (resresv->duration != resresv->hard_duration &&\n\t    exists_resv_event(calendar, resresv->hard_duration))\n\t\tend = resresv->server->server_time + calc_time_left(resresv, 1);\n\telse\n\t\tend = resresv->server->server_time + calc_time_left(resresv, 0);\n\n\treturn generic_sim(calendar, TIMED_RUN_EVENT,\n\t\t\t   end, 1, sim_exclhost_func, (void *) resresv, (void *) ninfo);\n}\n/**\n * @brief\n * \t\thelper function for generic_sim() to check if an event has an exclhost\n *\t  conflict with a job/resv on a node.  We either find a conflict or\n *\t  continue looping.\n *\n * @param[in]\tte\t-\tfuture event\n * @param[in]\targ1\t-\tjob/reservation\n * @param[in]\targ2\t- node\n *\n * @return\tint\n * @retval\t1\t: no excl conflict\n * @retval\t0\t: continue looping\n * @retval\t-1\t: excl conflict\n */\nint\nsim_exclhost_func(timed_event *te, void *arg1, void *arg2)\n{\n\tresource_resv *resresv;\n\tresource_resv *future_resresv;\n\tnode_info *ninfo;\n\n\tif (te == NULL || arg1 == NULL || arg2 == NULL)\n\t\treturn 0;\n\n\tresresv = static_cast<resource_resv *>(arg1);\n\tninfo = static_cast<node_info *>(arg2);\n\tfuture_resresv = static_cast<resource_resv *>(te->event_ptr);\n\tif (find_nspec_by_rank(future_resresv->nspec_arr, ninfo->rank) == NULL)\n\t\treturn 0; /* event does not affect the node */\n\n\tif (is_exclhost(future_resresv->place_spec, ninfo->sharing) ||\n\t    is_exclhost(resresv->place_spec, ninfo->sharing)) {\n\t\treturn -1;\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tset current_aoe on a node.  
Free existing value if set\n *\n * @param[in]\tnode\t-\tnode to set\n * @param[in]\taoe\t-\taoe to set on node\n *\n * @return void\n */\nvoid\nset_current_aoe(node_info *node, char *aoe)\n{\n\tif (node == NULL)\n\t\treturn;\n\tif (node->current_aoe != NULL)\n\t\tfree(node->current_aoe);\n\tif (aoe == NULL)\n\t\tnode->current_aoe = NULL;\n\telse\n\t\tnode->current_aoe = string_dup(aoe);\n}\n\n/**\n * @brief set current_eoe on a node.  Free existing value if set\n * @param[in] node - node to set\n * @param[in] eoe - eoe to set on node\n * @return void\n */\nvoid\nset_current_eoe(node_info *node, char *eoe)\n{\n\tif (node == NULL)\n\t\treturn;\n\tif (node->current_eoe != NULL)\n\t\tfree(node->current_eoe);\n\tif (eoe == NULL)\n\t\tnode->current_eoe = NULL;\n\telse\n\t\tnode->current_eoe = string_dup(eoe);\n}\n\n/**\n * @brief\n * \t\tshould we exclhost this job - a function of node sharing and job place\n *\n * @param[in]\tpl\t-\tjob place attribute\n * @param[in]\tsharing\t-\tthe node's sharing attribute value\n *\n * @return\tint\n * @retval\t1\t: do exclhost\n * @retval\t0\t: don't do exclhost (or invalid input)\n */\nint\nis_exclhost(place *pl, enum vnode_sharing sharing)\n{\n\t/* if the node forces exclhost, we don't care about the place */\n\tif (sharing == VNS_FORCE_EXCLHOST)\n\t\treturn 1;\n\n\t/* if the node ignores exclusiveness, we don't care about the place */\n\tif (sharing == VNS_IGNORE_EXCL)\n\t\treturn 0;\n\n\t/* invalid input */\n\tif (pl == NULL)\n\t\treturn 0;\n\n\t/* Node defaults to exclhost and the job doesn't disagree */\n\tif (sharing == VNS_DFLT_EXCLHOST &&\n\t    pl->excl == 0 && pl->share == 0)\n\t\treturn 1;\n\n\t/* If the job is requesting exclhost */\n\tif (pl->exclhost)\n\t\treturn 1;\n\n\t/* otherwise we're not doing exclhost */\n\treturn 0;\n}\n\n/**\n * @brief\tpthread routine to check eligibility for a chunk of nodes\n *\n * @param[in,out]\tdata - th_data_nd_eligible object\n *\n * @return void\n 
*/\nvoid\ncheck_node_eligibility_chunk(th_data_nd_eligible *data)\n{\n\tint i;\n\tint start, end;\n\tschd_error *err;\n\tschd_error *misc_err;\n\tresource_resv *resresv;\n\tplace *pl;\n\tnode_info **ninfo_arr;\n\n\tif (data == NULL)\n\t\treturn;\n\n\terr = new_schd_error();\n\tif (err == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn;\n\t}\n\tmisc_err = new_schd_error();\n\tif (misc_err == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\tfree_schd_error(err);\n\t\treturn;\n\t}\n\n\tstart = data->sidx;\n\tend = data->eidx;\n\tresresv = data->resresv;\n\tpl = data->pl;\n\tninfo_arr = data->ninfo_arr;\n\n\tfor (i = start; i <= end && ninfo_arr[i] != NULL; i++) {\n\t\tnode_info *node;\n\n\t\tnode = ninfo_arr[i];\n\t\tif (!node->nscr) {\n\t\t\tif (is_vnode_eligible(node, resresv, pl, err) == 0) {\n\t\t\t\tnode->nscr |= NSCR_INELIGIBLE;\n\t\t\t\tif (node->hostset != NULL) {\n\t\t\t\t\tif ((err->error_code == NODE_NOT_EXCL && is_exclhost(pl, node->sharing)) || sim_exclhost(resresv->server->calendar, resresv, node) == 0) {\n\t\t\t\t\t\tint j;\n\n\t\t\t\t\t\tfor (j = 0; node->hostset->ninfo_arr[j] != NULL; j++) {\n\t\t\t\t\t\t\tnode_info *n = node->hostset->ninfo_arr[j];\n\t\t\t\t\t\t\tn->nscr |= NSCR_INELIGIBLE;\n\t\t\t\t\t\t\tset_schd_error_codes(misc_err, NOT_RUN, NODE_NOT_EXCL);\n\t\t\t\t\t\t\tschdlogerr(PBSEVENT_DEBUG3, PBS_EVENTCLASS_NODE, LOG_DEBUG, n->name,\n\t\t\t\t\t\t\t\t   NULL, misc_err);\n\t\t\t\t\t\t\tclear_schd_error(misc_err);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (err->status_code != SCHD_UNKWN) {\n\t\t\t\t\tif (misc_err->status_code == SCHD_UNKWN)\n\t\t\t\t\t\tcopy_schd_error(misc_err, err);\n\t\t\t\t\tschdlogerr(PBSEVENT_DEBUG3, PBS_EVENTCLASS_NODE, LOG_DEBUG, node->name, NULL, err);\n\t\t\t\t}\n\t\t\t\tclear_schd_error(err);\n\t\t\t}\n\t\t}\n\t}\n\n\tfree_schd_error(err);\n\tdata->err = misc_err;\n}\n\n/**\n * @brief\t Allocates th_data_nd_eligible for multi-threading of check_node_array_eligibility\n *\n * @param[in]\tpl\t-\tthe placement 
object\n * @param[in]\tresresv\t-\tresresv to check to place on nodes\n * @param[in]\tninfo_arr\t-\tarray to check\n * @param[in]\tsidx\t-\tthe start index in ninfo_arr for the thread\n * @param[in]\teidx\t-\tthe end index in ninfo_arr for the thread\n *\n * @return th_data_nd_eligible *\n * @retval a newly allocated th_data_nd_eligible object\n * @retval NULL for malloc error\n */\nstatic inline th_data_nd_eligible *\nalloc_tdata_nd_eligible(place *pl, resource_resv *resresv, node_info **ninfo_arr,\n\t\t\tint sidx, int eidx)\n{\n\tth_data_nd_eligible *tdata;\n\n\ttdata = static_cast<th_data_nd_eligible *>(malloc(sizeof(th_data_nd_eligible)));\n\tif (tdata == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\ttdata->err = NULL;\n\ttdata->pl = pl;\n\ttdata->resresv = resresv;\n\ttdata->ninfo_arr = ninfo_arr;\n\ttdata->sidx = sidx;\n\ttdata->eidx = eidx;\n\n\treturn tdata;\n}\n\n/**\n * @brief\n * \t\tcheck nodes for eligibility and mark them ineligible if not\n *\n * @param[in]\tninfo_arr\t-\tarray to check\n * @param[in]\tresresv\t-\tresresv to check to place on nodes\n * @param[in]\tpl\t-\tplacement spec of resresv\n * @param[out]\terr\t-\terror structure\n *\n * @warning\n * \t\tIf an error occurs in this function, no indication will be returned.\n *\t\tThis is not a huge concern because it will just cause more work to be done.\n *\n * @return\tvoid\n */\nvoid\ncheck_node_array_eligibility(node_info **ninfo_arr, resource_resv *resresv, place *pl, schd_error *err)\n{\n\tth_data_nd_eligible *tdata = NULL;\n\tth_task_info *task = NULL;\n\tint tid;\n\tint num_nodes;\n\n\tif (ninfo_arr == NULL || resresv == NULL || pl == NULL || err == NULL)\n\t\treturn;\n\n\tnum_nodes = count_array(ninfo_arr);\n\n\ttid = *((int *) pthread_getspecific(th_id_key));\n\tif (tid != 0 || num_threads <= 1) {\n\t\t/* don't use multi-threading if I am a worker thread or num_threads is 1 */\n\t\ttdata = alloc_tdata_nd_eligible(pl, resresv, ninfo_arr, 0, num_nodes - 1);\n\t\tif (tdata == NULL)\n\t\t\treturn;\n\t\tcheck_node_eligibility_chunk(tdata);\n\t\tcopy_schd_error(err, tdata->err);\n\t\tfree_schd_error(tdata->err);\n\t\tfree(tdata);\n\t} else { /* We are multithreading */\n\t\tint j;\n\t\tint num_tasks;\n\t\tint chunk_size = num_nodes / num_threads;\n\t\tchunk_size = (chunk_size > MT_CHUNK_SIZE_MIN) ? 
chunk_size : MT_CHUNK_SIZE_MIN;\n\t\tfor (j = 0, num_tasks = 0; num_nodes > 0;\n\t\t     num_tasks++, j += chunk_size, num_nodes -= chunk_size) {\n\t\t\ttdata = alloc_tdata_nd_eligible(pl, resresv, ninfo_arr, j, j + chunk_size - 1);\n\t\t\tif (tdata == NULL)\n\t\t\t\tbreak;\n\n\t\t\ttask = static_cast<th_task_info *>(malloc(sizeof(th_task_info)));\n\t\t\tif (task == NULL) {\n\t\t\t\tfree(tdata);\n\t\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\ttask->task_type = TS_IS_ND_ELIGIBLE;\n\t\t\ttask->thread_data = (void *) tdata;\n\n\t\t\tqueue_work_for_threads(task);\n\t\t}\n\n\t\t/* Get results from worker threads */\n\t\tfor (int i = 0; i < num_tasks;) {\n\t\t\tpthread_mutex_lock(&result_lock);\n\t\t\twhile (ds_queue_is_empty(result_queue))\n\t\t\t\tpthread_cond_wait(&result_cond, &result_lock);\n\t\t\twhile (!ds_queue_is_empty(result_queue)) {\n\t\t\t\ttask = static_cast<th_task_info *>(ds_dequeue(result_queue));\n\t\t\t\ttdata = static_cast<th_data_nd_eligible *>(task->thread_data);\n\t\t\t\tif (err->status_code == SCHD_UNKWN && tdata->err->status_code != SCHD_UNKWN)\n\t\t\t\t\tcopy_schd_error(err, tdata->err);\n\n\t\t\t\tfree_schd_error(tdata->err);\n\t\t\t\tfree(tdata);\n\t\t\t\tfree(task);\n\t\t\t\ti++;\n\t\t\t}\n\t\t\tpthread_mutex_unlock(&result_lock);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\tnode_in_partition\t-  Tells whether the given node belongs to this scheduler\n *\n * @param[in]\tninfo\t\t-  node information\n * @param[in]\tpartition\t-  partition associated to scheduler\n *\n *\n * @return\tint\n * @retval\t1\t: if success\n * @retval\t0\t: if failure\n */\nint\nnode_in_partition(node_info *ninfo, char *partition)\n{\n\tif (dflt_sched) {\n\t\tif (ninfo->partition == NULL)\n\t\t\treturn 1;\n\t\telse\n\t\t\treturn 0;\n\t}\n\tif (ninfo->partition == NULL)\n\t\treturn 0;\n\n\tif (strcmp(partition, ninfo->partition) == 0)\n\t\treturn 1;\n\telse\n\t\treturn 0;\n}\n\n/**\n * @brief add an event to all the nodes associated to a 
calendar event\n * @param te - event\n * @param nspecs - nspecs[i]->node is the node to add the event to\n * @return bool\n * @retval true success\n * @retval false error\n */\nbool\nadd_event_to_nodes(timed_event *te, std::vector<nspec *> &nspecs)\n{\n\tif (te == NULL)\n\t\treturn false;\n\n\tfor (auto ns : nspecs) {\n\t\tte_list *tel;\n\t\tte_list *pre_tel = NULL;\n\t\tte_list *cur_tel;\n\t\ttel = new_te_list();\n\t\tif (tel == NULL)\n\t\t\treturn false;\n\t\ttel->event = te;\n\t\tfor (cur_tel = ns->ninfo->node_events; cur_tel != NULL && cur_tel->event->event_time <= te->event_time; cur_tel = cur_tel->next)\n\t\t\tpre_tel = cur_tel;\n\t\tif (pre_tel != NULL)\n\t\t\tpre_tel->next = tel;\n\t\telse {\n\t\t\ttel->next = ns->ninfo->node_events;\n\t\t\tns->ninfo->node_events = tel;\n\t\t}\n\t}\n\treturn true;\n}\n\n/**\n * @brief function pointer argument to generic_sim() to add an event to nodes\n * @param te - event\n * @param arg1 - unused\n * @param arg2 - unused\n * @return @see generic_sim()\n */\nint\nadd_node_events(timed_event *te, void *arg1, void *arg2)\n{\n\tif (!te->disabled) {\n\t\tauto nspecs = (static_cast<resource_resv *>(te->event_ptr))->nspec_arr;\n\n\t\tif (add_event_to_nodes(te, nspecs) == 0)\n\t\t\treturn -1;\n\t}\n\n\treturn 0;\n}\n"
  },
  {
    "path": "src/scheduler/node_info.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _NODE_INFO_H\n#define _NODE_INFO_H\n\n#include \"data_types.h\"\n#include <pbs_ifl.h>\n\nvoid query_node_info_chunk(th_data_query_ninfo *data);\n\n/*\n *      query_nodes - query all the nodes associated with a server\n */\nnode_info **query_nodes(int pbs_sd, server_info *sinfo);\n\n/*\n *      query_node_info - collect information from a batch_status and\n *                        put it in a node_info struct for easier access\n */\nnode_info *query_node_info(struct batch_status *node, server_info *sinfo);\n\n/*\n * pthread routine for freeing up a node_info array\n */\nvoid\nfree_node_info_chunk(th_data_free_ninfo *data);\n\n/*\n *      free_nodes - free all the nodes in a node_info array\n */\nvoid free_nodes(node_info **ninfo_arr);\n\n/*\n *      set_node_info_state - set a node state\n */\nint set_node_info_state(node_info *ninfo, const char *state);\n\n/*\n *      remove_node_state\n */\nint remove_node_state(node_info *ninfo, const char *state);\n\n/*\n *      add_node_state\n */\nint add_node_state(node_info *ninfo, const char *state);\n\n/*\n *      node_filter - filter a node array and return a new filterd array\n */\nnode_info **\nnode_filter(node_info **nodes, int size,\n\t    int (*filter_func)(node_info *, void *), void *arg, int flags);\n\n/*\n *      is_node_timeshared - check if a node is timeshared\n */\nint 
is_node_timeshared(node_info *node, void *arg);\n\n/*\n *      find_node_info - find a node in the node array\n */\nnode_info *find_node_info(node_info **ninfo_arr, const std::string &nodename);\n\n/*\n *      dup_node - duplicate a node by creating a new one and copying all\n *                 the data into the new one\n */\nnode_info *dup_node(node_info *oninfo, server_info *nsinfo);\n\nvoid dup_node_info_chunk(th_data_dup_nd_info *data);\n\n/*\n *      dup_nodes - duplicate an array of nodes\n */\nnode_info **dup_nodes(node_info **onodes, server_info *nsinfo, unsigned int flags);\n\n/*\n *      collect_jobs_on_nodes - collect all the jobs in the job array on the\n *                              nodes\n */\nint collect_jobs_on_nodes(node_info **ninfo_arr, resource_resv **resresv_arr, int size, int flags);\n\n/*\n *      collect_resvs_on_nodes - collect all the running resvs in the resv array\n *                              on the nodes\n */\nint collect_resvs_on_nodes(node_info **ninfo_arr, resource_resv **resresv_arr, int size);\n\n/*\n *      is_node_eligible - is this node eligible to run the job\n */\nint is_node_eligible(resource_resv *job, node_info *ninfo, char *reason);\n\n/*\n *      find_eligible_nodes - find the eligible nodes in an array of nodes\n *                            that a job can run on.\n */\nnode_info **find_eligible_nodes(resource_resv *job, node_info **ninfo_arr, int node_size);\n\n/*\n *      ssinode_reqlist - create a duplicate reqlist for a job for a node's\n *                        ssinode nodeboard mem/proc configuration\n */\nresource_req *ssinode_reqlist(resource_req *reqlist, node_info *ninfo);\n\n/*\n *      update_node_on_run - update internal scheduler node data when a job\n *                           is run.\n */\nvoid update_node_on_run(nspec *ns, resource_resv *resresv, const char *job_state);\n\n/*\n *      node_queue_cmp - used with node_filter to filter nodes attached to a\n *                       specific 
queue\n */\nint node_queue_cmp(node_info *ninfo, void *arg);\n\n/*\n *      update_node_on_end - update a node when a job ends\n */\nvoid update_node_on_end(node_info *ninfo, resource_resv *resresv, const char *job_state);\n\n/*\n *      copy_node_ptr_array - copy an array of nodes using a different set\n *                            of node pointers (same nodes, different array).\n *                            This means we have to use the names from the\n *                            first array and find them in the second array\n */\nnode_info **copy_node_ptr_array(node_info **oarr, node_info **narr);\n\n/*\n *      create_execvnode - create an execvnode to run a multi-node job\n */\nchar *create_execvnode(std::vector<nspec *> &ns_arr);\n\n/*\n *      parse_execvnode - parse an execvnode into an nspec array\n */\nstd::vector<nspec *> parse_execvnode(char *execvnode, server_info *sinfo, selspec *sel);\n\n/*\n *      dup_nspecs - duplicate an array of nspecs\n */\nstd::vector<nspec *> dup_nspecs(const std::vector<nspec *> &onspecs, node_info **ninfo_arr, selspec *sel);\n\n/* find a chunk by a sequence number */\nchunk *find_chunk_by_seq_num(chunk **chunks, int seq_num);\n\n/*\n *      free_nspecs - free a nspec array\n */\nvoid free_nspecs(std::vector<nspec *> &nspec_arr);\n\n/*\n *      find_nspec - find an nspec in an array\n */\nnspec *find_nspec(std::vector<nspec *> &nspec_arr, node_info *ninfo);\n\n/*\n *      update_nodes_for_resvs - take a node array and apply resource effects\n *                               to it for reservations with respect to a job.\n *                               This function is for jobs outside of resvs\n */\nint\nupdate_nodes_for_resvs(node_info **ninfo_arr, server_info *sinfo,\n\t\t       resource_resv *job);\n\n/*\n *      dup_node_info - duplicate a node by creating a new one and copying all\n *                      the data into the new one\n */\nnode_info *dup_node_info(node_info *onode, server_info *nsinfo, unsigned int 
flags);\n\n/*\n *      find_nspec_by_rank - find an nspec in an array by unique rank\n */\nnspec *find_nspec_by_rank(std::vector<nspec *> &nspec_arr, int rank);\n\n/* find node by unique rank and return index into ninfo_arr */\nint find_node_ind(node_info **ninfo_arr, int rank);\n\n/*\n *\n *      update_nodes_for_running_resvs - update resource_assigned values\n *                                       to reflect running resvs\n */\nvoid update_nodes_for_running_resvs(resource_resv **resvs, node_info **nodes);\n\n/*\n *\tglobal_spec_size - calculate how large the spec will be\n *\t\t\t   after the global\n *\t\t\t   '#' operator has been expanded\n */\nint global_spec_size(char *spec, int ncpu_size);\n\n/*\n *\tnode_state_to_str - convert a node's state into a string for printing\n *\treturns static string of node state\n */\nconst char *node_state_to_str(node_info *ninfo);\n\n/*\n *\tparse_placespec - allocate a new place structure and parse\n *\t\t\t\ta placement spec (-l place)\n *\treturns newly allocated place\n *\t\tNULL: invalid placement spec\n *\n */\nplace *parse_placespec(char *place_str);\n\n/* compare two place specs to see if they are equal */\nint compare_place(place *pl1, place *pl2);\n\n/*\n *\tparse_selspec - parse a simple select spec into requested resources\n *\n *\t  IN: sspec - the select spec to parse\n *\t  OUT: numchunks - the number of chunks\n *\n *\treturns requested resource list (& number of chunks in numchunks)\n *\t\tNULL on error or invalid spec\n */\nselspec *parse_selspec(const std::string &sspec);\n\n/* compare two selspecs to see if they are equal */\nint compare_selspec(selspec *s1, selspec *s2);\n\n/*\n *\tcombine_nspec_array - find and combine any nspec's for the same node\n *\t\t\t\tin an nspec array\n */\nstd::vector<nspec *> combine_nspec_array(const std::vector<nspec *> &nspec_arr);\n\n/*\n *\teval_selspec - eval a select spec to see if it is satisfiable\n *\n *\t  IN: spec - the nodespec\n *\t  IN: placespec - the 
placement spec (-l place)\n *\t  IN: ninfo_arr - array of nodes to satisfy the spec\n *\t  IN: nodepart - the node partition array for node grouping\n *\t\t \t if NULL, we're not doing node grouping\n *\t  IN: resresv - the resource resv the spec is from\n *\t  IN: flags - flags to change the function's behavior\n *\t      EVAL_OKBREAK - ok to break chunk up across vnodes\n *\t      EVAL_EXCLSET - allocate entire nodelist exclusively\n *\t  OUT: nspec_arr - the node solution\n *\n *\treturns true if the nodespec can be satisfied\n *\t\tfalse if not\n */\nbool\neval_selspec(status *policy, selspec *spec, place *placespec,\n\t     node_info **ninfo_arr, node_partition **nodepart,\n\t     resource_resv *resresv, unsigned int flags,\n\t     std::vector<nspec *> &nspec_arr, schd_error *err);\n\n/*\n *\n *\teval_placement - handle the place spec for node placement\n *\n *\t  IN: spec       - the select spec\n *\t  IN: ninfo_arr  - array of nodes to satisfy the spec\n *\t  IN: pl         - parsed placement spec\n *\t  IN: resresv    - the resource resv the spec is from\n *\t  IN: flags - flags to change the function's behavior\n *\t      EVAL_OKBREAK - ok to break chunk up across vnodes\n *\t  OUT: nspec_arr - the node solution\n *\n *\treturns true if the selspec can be satisfied\n *\t\tfalse if not\n *\n */\nbool\neval_placement(status *policy, selspec *spec, node_info **ninfo_arr, place *pl,\n\t       resource_resv *resresv, unsigned int flags, std::vector<nspec *> &nspec_arr, schd_error *err);\n/*\n *\teval_complex_selspec - handle a complex (plus'd) select spec\n *\n *\t  IN: spec       - the select spec\n *\t  IN: ninfo_arr  - array of nodes to satisfy the spec\n *\t  IN: pl         - parsed placement spec\n *\t  IN: resresv    - the resource resv the spec is from\n *\t  IN: flags - flags to change the function's behavior\n *\t      EVAL_OKBREAK - ok to break chunk up across vnodes\n *\t  OUT: nspec_arr - the node solution\n *\n *\treturns true if the selspec can be 
satisfied\n *\t\tfalse if not\n */\nbool\neval_complex_selspec(status *policy, selspec *spec, node_info **ninfo_arr, place *pl,\n\t\t     resource_resv *resresv, unsigned int flags, std::vector<nspec *> &nspec_arr, schd_error *err);\n\n/*\n * \teval_simple_selspec - eval a non-plus'd select spec for satisfiability\n *\n *        IN: chk - the chunk to satisfy\n * \t  IN: ninfo_arr - the array of nodes\n *\t  IN: pl - placement information (from -l place)\n *\t  IN: resresv - the job the spec is from (needed for reservations)\n *        IN: flags - flags to change the function's behavior\n *            EVAL_OKBREAK - ok to break chunk up across vnodes\n *\t  OUT: nspec_arr - array of struct nspec's describing the chosen nodes\n *\n * \treturns true if the select spec is satisfiable\n * \t\tfalse if not\n */\nbool\neval_simple_selspec(status *policy, chunk *chk, node_info **pninfo_arr,\n\t\t    place *pl, resource_resv *resresv, unsigned int flags,\n\t\t    std::vector<nspec *> &nspec_arr, schd_error *err);\n\n/* evaluate one node to see if it is eligible at the job/resv level */\nbool\nis_vnode_eligible(node_info *node, resource_resv *resresv,\n\t\t  struct place *pl, schd_error *err);\n\n/* check if a vnode is eligible for a chunk */\nbool\nis_vnode_eligible_chunk(resource_req *specreq, node_info *node,\n\t\t\tresource_resv *resresv, schd_error *err);\n\n/*\n *\tresources_avail_on_vnode - check to see if there are enough\n *\t\t\t\tconsumable resources on a vnode to make it\n *\t\t\t\teligible for a request\n *\t\t\t\tNote: This function will allocate <= 1 chunk\n *\n *\t  specreq_cons - requested consumable resources\n *        IN: node - the node to evaluate\n *        IN: pl - place spec for request\n *        IN: resresv - resource resv which is requesting\n *        IN: flags - flags to change behavior of function\n *              EVAL_OKBREAK - OK to break chunk across vnodes\n *        OUT: err - error status if node is ineligible\n *\n *\treturns true if resources 
were allocated from the node\n *\t\tfalse if sufficient resources are not available (err is set)\n */\nbool\nresources_avail_on_vnode(resource_req *specreq_cons, node_info *node,\n\t\t\t place *pl, resource_resv *resresv, unsigned int flags,\n\t\t\t nspec *ns, schd_error *err);\n\n/*\n *\tcheck_resources_for_node - check to see how many chunks can fit on a\n *\t\t\t\t   node looking at both resources available\n *\t\t\t\t   now and future advanced reservations\n *\n *\n *\t  IN: resreq     - requested resources\n *\t  IN: node       - node to check for\n *\t  IN: resresv    - the resource resv to check for\n *\t  OUT: err       - schd_error reply if there aren't enough resources\n *\n *\treturns number of chunks which can be satisfied during the duration\n *\t\t-1 on error\n */\nlong long\ncheck_resources_for_node(resource_req *resreq, node_info *ninfo,\n\t\t\t resource_resv *resresv, schd_error *err);\n\n/*\n *\tcreate_node_array_from_nspec - create a node_info array by copying the\n *\t\t\t\t       ninfo pointers out of a nspec array\n *\treturns new node_info array or NULL on error\n */\nnode_info **create_node_array_from_nspec(std::vector<nspec *> &nspec_arr);\n\n/*\n *\treorder_nodes - reorder nodes for smp_cluster_dist or\n *\t\t\t\tprovision_policy_types\n *\tNOTE: uses global last_node_name for round_robin\n *\treturns pointer to static buffer of nodes (reordered appropriately)\n */\nnode_info **reorder_nodes(node_info **nodes, resource_resv *resresv);\n\n/*\n *\tok_break_chunk - is it OK to break up a chunk on a list of nodes?\n *\t  resresv - the requestor (unused for the moment)\n *\t  nodes   - the list of nodes to check\n *\n *\treturns 1 if it's OK to break up chunks across the nodes\n *\t\t0 if not\n */\nint ok_break_chunk(resource_resv *resresv, node_info **nodes);\n\n/*\n *\tis_excl - is a request/node combination exclusive?  
This is based\n *\t\t  on both the place directive of the request and the\n *\t\t  sharing attribute of the node\n *\n *\t  pl      - place directive of the request\n *\t  sharing - sharing attribute of the node\n *\n *\treturns 1 if exclusive\n *\t\t0 if not\n *\n *\tAssumes if pl is NULL, no excl/shared request was given\n */\nint is_excl(place *pl, enum vnode_sharing sharing);\n/* similar to is_excl but for exclhost */\nint is_exclhost(place *pl, enum vnode_sharing sharing);\n\n/*\n *\talloc_rest_nodepart - allocate the rest of a node partition to a\n *\t\t\t      nspec array\n *\n *\t  IN,OUT: nsa - node solution to be filled out -- allocated by the\n *\t\t        caller with enough space for the entire solution\n *\t  IN: ninfo_arr - node array to allocate\n *\n *\treturns 1 on success\n *\t\t0 on error -- nsa will be modified\n *\n */\nint alloc_rest_nodepart(std::vector<nspec *> &nsa, node_info **ninfo_arr);\n\n/*\n *\tcan_fit_on_vnode - see if a chunk can fit on one vnode in the node list\n *\n *\t  req - requested resources to compare to nodes\n *\t  ninfo_arr - node array\n *\n *\treturns 1: chunk can fit in 1 vnode\n *\t\t0: chunk cannot fit / error\n */\nint can_fit_on_vnode(resource_req *req, node_info **ninfo_arr);\n\n/*\n *      is_eoe_avail_on_vnode - check if eoe is available in the node's\n *                              available list\n *\n *      return : 0 if eoe not available on node\n *             : 1 if eoe available\n *\n */\nint is_eoe_avail_on_vnode(node_info *ninfo, resource_resv *resresv);\n\n/*\n *      is_provisionable - checks if a vnode is eligible to be provisioned\n *\n *      return NO_PROVISIONING_NEEDED : resresv doesn't request aoe\n *                                      or node doesn't need provisioning\n *             PROVISIONING_NEEDED : vnode is provisionable and needs\n *                                   provisioning\n *             NOT_PROVISIONABLE  : vnode is not provisionable\n *\n *\n */\nint 
is_provisionable(node_info *node, resource_resv *job, schd_error *err);\n\n/*\n *\thandles everything which happens to a node when it comes back up\n */\nint node_up_event(node_info *node, void *arg);\n\n/*\n *\thandles everything which happens to a node when it goes down\n */\nint node_down_event(node_info *node, void *arg);\n\n/*\n *\tcreate a node_info array from a list of nodes in a string array\n */\nnode_info **create_node_array_from_str(node_info **nodes, char **strnodes);\n\n/*\n *      find node by unique rank\n */\nnode_info *find_node_by_rank(node_info **ninfo_arr, int rank);\n\n/* find node by index into sinfo->unordered_nodes or by unique rank */\nnode_info *find_node_by_indrank(node_info **ninfo_arr, int ind, int rank);\n\n/* determine if resresv conflicts with future events on ninfo based on the exclhost state */\nint sim_exclhost(event_list *calendar, resource_resv *resresv, node_info *ninfo);\n\n/*\n * helper function for generic_sim() to check if an event has an exclhost\n * conflict with a job/resv on a node\n */\nint sim_exclhost_func(timed_event *te, void *arg1, void *arg2);\n\n/*\n *  get the node resource list from the node object.  If there is a\n *  scratch resource list, return that one first.\n */\n\n/**\n * set current_aoe on a node.  Free existing value if set\n */\nvoid set_current_aoe(node_info *node, char *aoe);\n\n/**\n * set current_eoe on a node.  
Free existing value if set\n */\nvoid set_current_eoe(node_info *node, char *eoe);\n\n/*\n * Check eligibility for a chunk of nodes, a supplementary function to check_node_array_eligibility\n */\nvoid\ncheck_node_eligibility_chunk(th_data_nd_eligible *data);\n\n/* check nodes for eligibility and mark them ineligible if not */\nvoid check_node_array_eligibility(node_info **ninfo_arr, resource_resv *resresv, place *pl, schd_error *err);\n\nint node_in_partition(node_info *ninfo, char *partition);\n/* add a node to a node array*/\nnode_info **add_node_to_array(node_info **ninfo_arr, node_info *node);\n\nbool add_event_to_nodes(timed_event *te, std::vector<nspec *> &nspecs);\n\nint add_node_events(timed_event *te, void *arg1, void *arg2);\n\nstruct batch_status *send_statvnode(int virtual_fd, char *id, struct attrl *attrib, char *extend);\n\n/*\n * Find a node by its hostname\n */\nnode_info *find_node_by_host(node_info **ninfo_arr, char *host);\n#endif /* _NODE_INFO_H */\n"
  },
  {
    "path": "src/scheduler/node_partition.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    node_partition.c\n *\n * @brief\n * node_partition.c - contains functions to related to node_partition structure.\n *\n * Functions included are:\n * \tnew_node_partition()\n * \tfree_node_partition_array()\n * \tfree_node_partition()\n * \tdup_node_partition_array()\n * \tdup_node_partition()\n * \tfind_node_partition()\n * \tfind_node_partition_by_rank()\n * \tcreate_node_partitions()\n * \tnode_partition_update_array()\n * \tnode_partition_update()\n * \tfree_np_cache_array()\n * \tfind_alloc_np_cache()\n * \tresresv_can_fit_nodepart()\n * \tcreate_specific_nodepart()\n * \tcreate_placement_sets()\n *\n */\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <time.h>\n#include \"errno.h\"\n\n#include <log.h>\n#include <pbs_ifl.h>\n#include <pbs_internal.h>\n#include <libpbs.h>\n\n#include \"config.h\"\n#include \"constant.h\"\n#include \"data_types.h\"\n#include \"server_info.h\"\n#include \"queue_info.h\"\n#include \"node_info.h\"\n#include \"resource_resv.h\"\n#include \"resource.h\"\n#include \"misc.h\"\n#include \"node_partition.h\"\n#include \"check.h\"\n#include \"globals.h\"\n#include \"sort.h\"\n#include \"buckets.h\"\n#include <vector>\n\n/**\n * @brief\n *\t\tnew_node_partition - allocate and initialize a node_partition\n *\n * @return new node partition\n * 
@retval\tNULL\t: on error\n *\n */\nnode_partition *\nnew_node_partition()\n{\n\tnode_partition *np;\n\n\tif ((np = static_cast<node_partition *>(malloc(sizeof(node_partition)))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\tnp->ok_break = false;\n\tnp->excl = false;\n\tnp->name = NULL;\n\tnp->def = NULL;\n\tnp->res_val = NULL;\n\tnp->tot_nodes = 0;\n\tnp->free_nodes = 0;\n\tnp->res = NULL;\n\tnp->ninfo_arr = NULL;\n\tnp->bkts = NULL;\n\n\tnp->rank = -1;\n\n\treturn np;\n}\n\n/**\n * @brief\n *\t\tfree_node_partition_array - free an array of node_partitions\n *\n * @param[in]\tnp_arr\t-\tnode partition array to free\n *\n * @return\tnothing\n *\n */\nvoid\nfree_node_partition_array(node_partition **np_arr)\n{\n\tint i;\n\n\tif (np_arr == NULL)\n\t\treturn;\n\n\tfor (i = 0; np_arr[i] != NULL; i++)\n\t\tfree_node_partition(np_arr[i]);\n\n\tfree(np_arr);\n}\n\n/**\n * @brief\n *\t\tfree_node_partition - free a node_partition structure\n *\n * @param[in]\tnp\t-\tthe node_partition to free\n *\n */\nvoid\nfree_node_partition(node_partition *np)\n{\n\tif (np == NULL)\n\t\treturn;\n\n\tif (np->name != NULL)\n\t\tfree(np->name);\n\n\tnp->def = NULL;\n\n\tif (np->res_val != NULL)\n\t\tfree(np->res_val);\n\n\tif (np->res != NULL)\n\t\tfree_resource_list(np->res);\n\n\tif (np->ninfo_arr != NULL)\n\t\tfree(np->ninfo_arr);\n\n\tif (np->bkts != NULL)\n\t\tfree_node_bucket_array(np->bkts);\n\n\tfree(np);\n}\n\n/**\n * @brief\n *\t\tdup_node_partition_array - duplicate a node_partition array\n *\n * @param[in]\tonp_arr\t-\tthe node_partition array to duplicate\n * @param[in]\tnsinfo\t-\tserver for the new node partition\n *\n * @return\tduplicated array\n * @retval\tNULL\t: on error\n *\n */\nnode_partition **\ndup_node_partition_array(node_partition **onp_arr, server_info *nsinfo)\n{\n\tint i;\n\tnode_partition **nnp_arr;\n\tif (onp_arr == NULL)\n\t\treturn NULL;\n\n\tfor (i = 0; onp_arr[i] != NULL; i++)\n\t\t;\n\n\tif ((nnp_arr = 
static_cast<node_partition **>(malloc((i + 1) * sizeof(node_partition *)))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\tfor (i = 0; onp_arr[i] != NULL; i++) {\n\t\tnnp_arr[i] = dup_node_partition(onp_arr[i], nsinfo);\n\t\tif (nnp_arr[i] == NULL) {\n\t\t\tfree_node_partition_array(nnp_arr);\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\tnnp_arr[i] = NULL;\n\n\treturn nnp_arr;\n}\n\n/**\n * @brief\n *\t\tdup_node_partition - duplicate a node_partition structure\n *\n * @param[in]\tonp\t-\tthe node_partition structure to duplicate\n * @param[in]\tnsinfo\t-\tserver for the new node partiton (the nodes are needed)\n *\n * @return\tduplicated node_partition\n * @retval\tNULL\t: on error\n *\n */\nnode_partition *\ndup_node_partition(node_partition *onp, server_info *nsinfo)\n{\n\tnode_partition *nnp;\n\n\tif (onp == NULL)\n\t\treturn NULL;\n\n\tif ((nnp = new_node_partition()) == NULL)\n\t\treturn NULL;\n\n\tif (onp->name != NULL)\n\t\tnnp->name = string_dup(onp->name);\n\n\tif (onp->def != NULL)\n\t\tnnp->def = onp->def;\n\n\tif (onp->res_val != NULL)\n\t\tnnp->res_val = string_dup(onp->res_val);\n\n\tnnp->ok_break = onp->ok_break;\n\tnnp->excl = onp->excl;\n\tnnp->tot_nodes = onp->tot_nodes;\n\tnnp->free_nodes = onp->free_nodes;\n\tnnp->res = dup_resource_list(onp->res);\n\tnnp->ninfo_arr = copy_node_ptr_array(onp->ninfo_arr, nsinfo->nodes);\n\n\tnnp->bkts = dup_node_bucket_array(onp->bkts, nsinfo);\n\tnnp->rank = onp->rank;\n\n\t/* validity check */\n\tif (onp->name == NULL || onp->res_val == NULL ||\n\t    nnp->res == NULL || nnp->ninfo_arr == NULL) {\n\t\tfree_node_partition(nnp);\n\t\treturn NULL;\n\t}\n\treturn nnp;\n}\n\n/**\n * @brief copy a node partition array from pointers out of another.\n * @param[in] onp_arr - old node partition array\n * @param[in] new_nps - node partition array with new pointers\n *\n * @return node_partition **\n */\nnode_partition **\ncopy_node_partition_ptr_array(node_partition **onp_arr, node_partition 
**new_nps)\n{\n\tint cnt;\n\tint i;\n\tnode_partition **nnp_arr;\n\n\tif (onp_arr == NULL || new_nps == NULL)\n\t\treturn NULL;\n\n\tcnt = count_array(onp_arr);\n\tif ((nnp_arr = static_cast<node_partition **>(malloc((cnt + 1) * sizeof(node_partition *)))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\tfor (i = 0; i < cnt; i++)\n\t\tnnp_arr[i] = find_node_partition_by_rank(new_nps, onp_arr[i]->rank);\n\tnnp_arr[i] = NULL;\n\n\treturn nnp_arr;\n}\n\n/**\n * @brief\n *\t\tfind_node_partition - find a node partition by (resource_name=value)\n *\t\t\t      partition name from a pool of partitions\n *\n * @param[in]\tnp_arr\t-\tarray of node partitions to search name\n *\n * @return\tfound node partition\n * @retval\tNULL\t: if not found\n *\n */\nnode_partition *\nfind_node_partition(node_partition **np_arr, const std::string &name)\n{\n\tint i;\n\tif (np_arr == NULL || name.empty())\n\t\treturn NULL;\n\n\tfor (i = 0; np_arr[i] != NULL && strcmp(np_arr[i]->name, name.c_str()); i++)\n\t\t;\n\n\treturn np_arr[i];\n}\n\n/**\n * @brief\n * \t\tfind node partition by unique rank\n *\n * @param[in]\tnp_arr\t-\tarray of node partitions to search\n * @param[in]\trank\t-\tunique rank of node partition\n *\n * @return\tnode_partition **\n * @retval\tfound node partition\n * @retval\tNULL\t: if node partition isn't found or on error\n */\n\nnode_partition *\nfind_node_partition_by_rank(node_partition **np_arr, int rank)\n{\n\tint i;\n\tif (np_arr == NULL)\n\t\treturn NULL;\n\n\tfor (i = 0; np_arr[i] != NULL && np_arr[i]->rank != rank; i++)\n\t\t;\n\n\treturn np_arr[i];\n}\n\n/**\n * @brief\n * \t\tbreak apart nodes into partitions\n *\t\tA possible side-effect of this function when multiple identical\n *\t\tresources are defined on an attribute, is that the node\n *\t\tpartitions accounting for this node may count this node in the\n *\t\ttotal count of \"free_nodes\" for that partition (if the node was\n *\t\tfree in the first place). 
Due to this incorrect accounting,\n *\t\tthe metadata of the node partition may cause eval_selspec to\n *\t\tdescend into the node matching code instead of bailing out right\n *\t\taway due to the fact that the node partition has insufficient\n *\t\tresources.\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tnodes\t-\tthe nodes from which to create partitions\n * @param[in]\tresnames\t-\tnode grouping resource names\n * @param[in]\tflags\t-\tflags which change operations of node partition creation\n *\t\t\t\t\t\t\tNP_IGNORE_EXCL - ignore vnodes marked excl\n *\t \t\t\t\t\t\tNP_CREATE_REST - create a part for vnodes w/ no np resource\n * @param[out]\tnum_parts\t-\tthe number of partitions created\n *\n * @return\tnode_partition ** (NULL terminated node_partition array)\n * @retval\t: created node_partition array\n * @retval\t: NULL on error\n *\n */\nnode_partition **\ncreate_node_partitions(status *policy, node_info **nodes, const std::vector<std::string> &resnames, unsigned int flags, int *num_parts)\n{\n\tnode_partition **np_arr;\n\tnode_partition *np;\n\tnode_partition **tmp_arr;\n\tint np_arr_size = 0;\n\tstatic schd_resource *res;\n\n\tint num_nodes;\n\n\tschd_resource *tmpres;\n\n\tint val_i;  /* index of placement set resource value */\n\tint node_i; /* index into nodes array */\n\tint np_i;   /* index into node partition array we are creating */\n\n\tstatic schd_resource *unset_res = NULL;\n\n\tstd::vector<queue_info *> queues;\n\n\tif (nodes == NULL || resnames.empty())\n\t\treturn NULL;\n\n\tif (nodes[0] != NULL && nodes[0]->server != NULL)\n\t\tqueues = nodes[0]->server->queues;\n\n\tnum_nodes = count_array(nodes);\n\n\tif ((np_arr = static_cast<node_partition **>(malloc((num_nodes + 1) * sizeof(node_partition *)))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\tnp_arr_size = num_nodes;\n\n\tnp_i = 0;\n\tnp_arr[0] = NULL;\n\n\tif (flags & NP_CREATE_REST && unset_res == NULL)\n\t\tunset_res = 
new_resource();\n\n\tfor (auto &res_i : resnames) {\n\t\tauto def = find_resdef(res_i);\n\t\tfor (node_i = 0; nodes[node_i] != NULL; node_i++) {\n\t\t\tif (nodes[node_i]->is_stale)\n\t\t\t\tcontinue;\n\n\t\t\tres = find_resource(nodes[node_i]->res, def);\n\n\t\t\tif (res == NULL && (flags & NP_CREATE_REST)) {\n\t\t\t\tunset_res->name = res_i.c_str();\n\t\t\t\tif (set_resource(unset_res, \"\\\"\\\"\", RF_AVAIL) == 0) {\n\t\t\t\t\tfree_node_partition_array(np_arr);\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n\t\t\t\tres = unset_res;\n\t\t\t}\n\t\t\tif (res != NULL) {\n\t\t\t\t/* In case of an indirect resource, point it to the right place */\n\t\t\t\tif (res->indirect_res != NULL)\n\t\t\t\t\tres = res->indirect_res;\n\t\t\t\tfor (val_i = 0; res->str_avail[val_i] != NULL; val_i++) {\n\t\t\t\t\tstd::string str = res_i + \"=\" + res->str_avail[val_i];\n\t\t\t\t\t/* If we find the partition, we've already created it - add the node\n\t\t\t\t\t * to the existing partition.  If we don't find it, we create it.\n\t\t\t\t\t */\n\t\t\t\t\tnp = find_node_partition(np_arr, str);\n\t\t\t\t\tif (np == NULL) {\n\t\t\t\t\t\tif (np_i >= np_arr_size) {\n\t\t\t\t\t\t\ttmp_arr = static_cast<node_partition **>(realloc(np_arr,\n\t\t\t\t\t\t\t\t\t\t\t\t\t (np_arr_size * 2 + 1) * sizeof(node_partition *)));\n\t\t\t\t\t\t\tif (tmp_arr == NULL) {\n\t\t\t\t\t\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\t\t\t\t\t\tfree_node_partition_array(np_arr);\n\t\t\t\t\t\t\t\treturn NULL;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tnp_arr = tmp_arr;\n\t\t\t\t\t\t\tnp_arr_size *= 2;\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tnp_arr[np_i] = new_node_partition();\n\t\t\t\t\t\tif (np_arr[np_i] != NULL) {\n\t\t\t\t\t\t\tnp_arr[np_i]->name = string_dup(str.c_str());\n\t\t\t\t\t\t\tnp_arr[np_i]->def = def;\n\t\t\t\t\t\t\tnp_arr[np_i]->res_val = string_dup(res->str_avail[val_i]);\n\t\t\t\t\t\t\tnp_arr[np_i]->tot_nodes = 1;\n\t\t\t\t\t\t\tif (nodes[node_i]->is_free)\n\t\t\t\t\t\t\t\tnp_arr[np_i]->free_nodes = 1;\n\t\t\t\t\t\t\tnp_arr[np_i]->rank 
= get_sched_rank();\n\n\t\t\t\t\t\t\tif (np_arr[np_i]->res_val == NULL) {\n\t\t\t\t\t\t\t\tnp_arr[np_i + 1] = NULL;\n\t\t\t\t\t\t\t\tfree_node_partition_array(np_arr);\n\t\t\t\t\t\t\t\treturn NULL;\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\tnp_i++;\n\t\t\t\t\t\t\tnp_arr[np_i] = NULL;\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tfree_node_partition_array(np_arr);\n\t\t\t\t\t\t\treturn NULL;\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tnp->tot_nodes++;\n\t\t\t\t\t\tif (nodes[node_i]->is_free)\n\t\t\t\t\t\t\tnp->free_nodes++;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\t/* else we ignore nodes without the node partition resource set\n\t\t\t * unless the NP_CREATE_REST flag is set\n\t\t\t */\n\t\t}\n\t}\n\n\t/* now that we have a list of node partitions and number of nodes in each\n\t * let's allocate a node array and fill it\n\t */\n\n\tfor (np_i = 0; np_arr[np_i] != NULL; np_i++) {\n\t\tint i = 0;\n\t\tnp_arr[np_i]->ok_break = 1;\n\t\tschd_resource *hostres = NULL;\n\n\t\tnp_arr[np_i]->ninfo_arr =\n\t\t\tstatic_cast<node_info **>(malloc((np_arr[np_i]->tot_nodes + 1) * sizeof(node_info *)));\n\n\t\tif (np_arr[np_i]->ninfo_arr == NULL) {\n\t\t\tfree_node_partition_array(np_arr);\n\t\t\treturn NULL;\n\t\t}\n\n\t\tnp_arr[np_i]->ninfo_arr[0] = NULL;\n\n\t\tfor (node_i = 0; nodes[node_i] != NULL && i < np_arr[np_i]->tot_nodes; node_i++) {\n\t\t\tif (nodes[node_i]->is_stale)\n\t\t\t\tcontinue;\n\n\t\t\tres = find_resource(nodes[node_i]->res, np_arr[np_i]->def);\n\t\t\tif (res == NULL && (flags & NP_CREATE_REST)) {\n\t\t\t\tset_resource(unset_res, \"\\\"\\\"\", RF_AVAIL);\n\t\t\t\tres = unset_res;\n\t\t\t}\n\t\t\tif (res != NULL) {\n\t\t\t\t/* In case of an indirect resource, point it to the right place */\n\t\t\t\tif (res->indirect_res != NULL)\n\t\t\t\t\tres = res->indirect_res;\n\t\t\t\tif (compare_res_to_str(res, np_arr[np_i]->res_val, CMP_CASE)) {\n\t\t\t\t\tif (np_arr[np_i]->ok_break) {\n\t\t\t\t\t\ttmpres = find_resource(nodes[node_i]->res, allres[\"host\"]);\n\t\t\t\t\t\tif (tmpres != NULL) 
{\n\t\t\t\t\t\t\tif (hostres == NULL)\n\t\t\t\t\t\t\t\thostres = tmpres;\n\t\t\t\t\t\t\telse {\n\t\t\t\t\t\t\t\tif (!compare_res_to_str(hostres, tmpres->str_avail[0], CMP_CASELESS))\n\t\t\t\t\t\t\t\t\tnp_arr[np_i]->ok_break = 0;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tif (!(NP_NO_ADD_NP_ARR & flags)) {\n\t\t\t\t\t\ttmp_arr = static_cast<node_partition **>(add_ptr_to_array(nodes[node_i]->np_arr, np_arr[np_i]));\n\t\t\t\t\t\tif (tmp_arr == NULL) {\n\t\t\t\t\t\t\tfree_node_partition_array(np_arr);\n\t\t\t\t\t\t\treturn NULL;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tnodes[node_i]->np_arr = tmp_arr;\n\t\t\t\t\t}\n\n\t\t\t\t\tnp_arr[np_i]->ninfo_arr[i] = nodes[node_i];\n\t\t\t\t\ti++;\n\t\t\t\t\tnp_arr[np_i]->ninfo_arr[i] = NULL;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\t/* if multiple resource values are present, tot_nodes may be incorrect.\n\t\t * recalculating tot_nodes for each node partition.\n\t\t */\n\t\tnp_arr[np_i]->tot_nodes = count_array(np_arr[np_i]->ninfo_arr);\n\t\tnp_arr[np_i]->bkts = create_node_buckets(policy, np_arr[np_i]->ninfo_arr, queues, NO_PRINT_BUCKETS);\n\t\tnode_partition_update(policy, np_arr[np_i]);\n\t}\n\n\t*num_parts = np_i;\n\treturn np_arr;\n}\n\n/**\n * @brief update the node buckets associated with a node\n *\n *  @param[in] bkts - the buckets to update\n *  @param[in] ninfo - the node of the job/resv\n */\nvoid\nupdate_buckets_for_node(node_bucket **bkts, node_info *ninfo)\n{\n\tint i;\n\n\tif (bkts == NULL || ninfo == NULL)\n\t\treturn;\n\n\tfor (i = 0; bkts[i] != NULL; i++) {\n\t\tint node_ind = ninfo->node_ind;\n\n\t\t/* Is this node in the bucket? 
*/\n\t\tif (pbs_bitmap_get_bit(bkts[i]->bkt_nodes, node_ind)) {\n\t\t\t/* First turn off the current bit */\n\t\t\tif (pbs_bitmap_get_bit(bkts[i]->free_pool->truth, node_ind)) {\n\t\t\t\tpbs_bitmap_bit_off(bkts[i]->free_pool->truth, node_ind);\n\t\t\t\tbkts[i]->free_pool->truth_ct--;\n\t\t\t} else if (pbs_bitmap_get_bit(bkts[i]->busy_later_pool->truth, node_ind)) {\n\t\t\t\tpbs_bitmap_bit_off(bkts[i]->busy_later_pool->truth, node_ind);\n\t\t\t\tbkts[i]->busy_later_pool->truth_ct--;\n\t\t\t} else if (pbs_bitmap_get_bit(bkts[i]->busy_pool->truth, node_ind)) {\n\t\t\t\tpbs_bitmap_bit_off(bkts[i]->busy_pool->truth, node_ind);\n\t\t\t\tbkts[i]->busy_pool->truth_ct--;\n\t\t\t}\n\n\t\t\t/* Next, turn on the correct bit */\n\t\t\tif (ninfo->num_jobs > 0 || ninfo->num_run_resv > 0) {\n\t\t\t\tpbs_bitmap_bit_on(bkts[i]->busy_pool->truth, node_ind);\n\t\t\t\tbkts[i]->busy_pool->truth_ct++;\n\t\t\t} else {\n\t\t\t\tif (ninfo->node_events != NULL) {\n\t\t\t\t\tpbs_bitmap_bit_on(bkts[i]->busy_later_pool->truth, node_ind);\n\t\t\t\t\tbkts[i]->busy_later_pool->truth_ct++;\n\t\t\t\t} else {\n\t\t\t\t\tpbs_bitmap_bit_on(bkts[i]->free_pool->truth, node_ind);\n\t\t\t\t\tbkts[i]->free_pool->truth_ct++;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief update the node buckets associated with a node partition on\n *        job/resv run/end\n *\n *  @param[in] bkts - the buckets to update\n *  @param[in] ninfo_arr - the nodes of the job/resv\n */\nvoid\nupdate_buckets_for_node_array(node_bucket **bkts, node_info **ninfo_arr)\n{\n\tint i;\n\n\tif (bkts == NULL || ninfo_arr == NULL)\n\t\treturn;\n\n\tfor (i = 0; ninfo_arr[i] != NULL; i++)\n\t\tupdate_buckets_for_node(bkts, ninfo_arr[i]);\n}\n\n/**\n * @brief\n * \t\tupdate metadata for an entire array of node partitions\n *\n * @param[in] policy\t-\tpolicy info\n * @param[in] nodepart\t-\tpartition array to update\n * @param[in] ninfo_arr - \tnodes being updated (may be NULL)\n *\n * @return\tint\n * @retval\t1\t: on all success\n * 
@retval\t0\t: on any failure\n *\n * @note\n * \t\tThis is not an atomic operation -- this means that if this\n *\t\tfunction fails, some node partitions may have been updated and\n *\t\tothers not.\n *\n */\nint\nnode_partition_update_array(status *policy, node_partition **nodepart)\n{\n\tint rc = 1;\n\n\tif (policy == NULL || nodepart == NULL)\n\t\treturn 0;\n\n\tfor (int i = 0; nodepart[i] != NULL; i++) {\n\t\tauto cur_rc = node_partition_update(policy, nodepart[i]);\n\t\tif (cur_rc == 0)\n\t\t\trc = 0;\n\t\tupdate_buckets_for_node_array(nodepart[i]->bkts, nodepart[i]->ninfo_arr);\n\t}\n\n\treturn rc;\n}\n\n/**\n * @brief\n * \t\tupdate the meta data about a node partition\n *\t\t\tlike free_nodes and consumable resources in res\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tnp\t-\tthe node partition to update\n * @param[in]\tnodes\t-\tthe entire node array used to create these node partitions\n *\n * @return\tint\n * @retval\t1\t: on success\n * @retval\t0\t: on failure\n *\n */\nint\nnode_partition_update(status *policy, node_partition *np)\n{\n\tint i;\n\tint rc = 1;\n\tschd_resource *res;\n\tunsigned int arl_flags = USE_RESOURCE_LIST | ADD_ALL_BOOL;\n\n\tif (np == NULL)\n\t\treturn 0;\n\n\t/* if res is not NULL, we are updating.  
Clear the meta data for the update */\n\tif (np->res != NULL) {\n\t\tarl_flags |= NO_UPDATE_NON_CONSUMABLE;\n\t\tfor (res = np->res; res != NULL; res = res->next) {\n\t\t\tif (res->type.is_consumable) {\n\t\t\t\tres->assigned = 0;\n\t\t\t\tres->avail = 0;\n\t\t\t}\n\t\t}\n\t} else\n\t\tarl_flags |= ADD_UNSET_BOOLS_FALSE;\n\n\tnp->free_nodes = 0;\n\n\tfor (i = 0; i < np->tot_nodes; i++) {\n\t\tif (np->ninfo_arr[i]->is_free) {\n\t\t\tnp->free_nodes++;\n\t\t\tarl_flags &= ~ADD_AVAIL_ASSIGNED;\n\t\t} else\n\t\t\tarl_flags |= ADD_AVAIL_ASSIGNED;\n\n\t\tif (np->res == NULL)\n\t\t\tnp->res = dup_selective_resource_list(np->ninfo_arr[i]->res,\n\t\t\t\t\t\t\t      policy->resdef_to_check, arl_flags);\n\t\telse if (!add_resource_list(policy, np->res, np->ninfo_arr[i]->res, arl_flags)) {\n\t\t\trc = 0;\n\t\t\tbreak;\n\t\t}\n\t}\n\n\tif (!policy->node_sort->empty() && conf.node_sort_unused) {\n\t\t/* Resort the nodes in the partition so that selection works correctly. */\n\t\tqsort(np->ninfo_arr, np->tot_nodes, sizeof(node_info *),\n\t\t      multi_node_sort);\n\t}\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\t\tnp_cache - constructor\n *\n * @return\tnew np_cache object\n */\nnp_cache::np_cache()\n{\n\tninfo_arr = NULL;\n\tnodepart = NULL;\n\tnum_parts = UNSPECIFIED;\n}\n// overloaded\nnp_cache::np_cache(node_info **rninfo_arr, const std::vector<std::string> &rresnames, node_partition **rnodepart, int rnum_parts) : ninfo_arr(rninfo_arr), resnames(rresnames), nodepart(rnodepart), num_parts(rnum_parts) {}\n// Destructor\nnp_cache::~np_cache()\n{\n\tif (nodepart != NULL)\n\t\tfree_node_partition_array(nodepart);\n\n\t/* reference to an array of nodes, the owner will free */\n\tninfo_arr = NULL;\n}\n\n/**\n * @brief\n *\t\tfree_np_cache_array - destructor for array\n *\n * @param[in,out]\tnpc_arr\t-\tnp cache array.\n */\nvoid\nfree_np_cache_array(std::vector<np_cache *> &npc_arr)\n{\n\tif (npc_arr.empty())\n\t\treturn;\n\n\tfor (auto elem : npc_arr)\n\t\tdelete 
elem;\n\tnpc_arr.clear();\n\treturn;\n}\n\n/**\n * @brief\n *\t\tfind_np_cache - find a np_cache by the array of resource names and\n *\t\t\tnodes which created it.\n *\n * @param[in]\tnpc_arr\t-\tthe array to search\n * @param[in]\tresnames\t-\tthe list of names\n * @param[in]\tninfo_arr\t-\tarray of nodes\n *\n * @par NOTE:\n * \t\tfunction does node_info pointer comparison to save time\n *\n * @return\tthe found np_cache\n * @retval\tNULL\t: if not found (or on error)\n *\n */\nnp_cache *\nfind_np_cache(const std::vector<np_cache *> &npc_arr,\n\t      const std::vector<std::string> &resnames, node_info **ninfo_arr)\n{\n\tif (npc_arr.empty() || resnames.empty() || ninfo_arr == NULL)\n\t\treturn NULL;\n\n\tfor (auto elem : npc_arr) {\n\t\tif (elem->ninfo_arr == ninfo_arr &&\n\t\t    match_string_array(elem->resnames, resnames) == SA_FULL_MATCH)\n\t\t\treturn elem;\n\t}\n\n\treturn NULL;\n}\n\n/**\n * @brief\n * \t\tfind a np_cache by the array of resource names\n *\t\tand nodes which created it.  
If the np_cache\n *\t\tdoes not exist, create it and add it to the list\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in,out]\tpnpc_arr\t-\tpointer to np_cache array -- if *npc_arr == NULL\n *\t\t \t           \t\t\t\ta np_cache will be created and it will be set\n *\t\t\t           \t\t\t\tExample: you pass &(sinfo->npc_arr)\n * @param[in]\tresnames\t-\tthe names used to create the pool of node parts\n * @param[in]\tninfo_arr\t-\tthe node array used to create the pool of node_parts\n * @param[in]\tsort_func\t-\tsort function to sort placement sets.\n *\t\t\t\t  \t\t\t\tsets are only sorted when they are created.\n *\t\t\t\t  \t\t\t\tIf NULL is passed in, no sorting is done\n *\n * @return\tnp_cache *\n * @retval\tfound\t: created np_cache\n * @retval \tNULL\t: on error\n *\n */\nnp_cache *\nfind_alloc_np_cache(status *policy, std::vector<np_cache *> &pnpc_arr,\n\t\t    const std::vector<std::string> &resnames, node_info **ninfo_arr,\n\t\t    int (*sort_func)(const void *, const void *))\n{\n\tnode_partition **nodepart = NULL;\n\tint num_parts;\n\tnp_cache *npc = NULL;\n\tint error = 0;\n\n\tif (resnames.empty() || ninfo_arr == NULL)\n\t\treturn NULL;\n\n\tnpc = find_np_cache(pnpc_arr, resnames, ninfo_arr);\n\n\tif (npc == NULL) {\n\t\tint flags = NP_NO_ADD_NP_ARR;\n\n\t\tif (sc_attrs.only_explicit_psets == 0)\n\t\t\tflags |= NP_CREATE_REST;\n\n\t\t/* didn't find node partition cache, need to allocate and create */\n\t\tnodepart = create_node_partitions(policy, ninfo_arr, resnames, flags, &num_parts);\n\t\tif (nodepart != NULL) {\n\t\t\tif (sort_func != NULL)\n\t\t\t\tqsort(nodepart, num_parts, sizeof(node_partition *), sort_func);\n\t\t\ttry {\n\t\t\t\tnpc = new np_cache(ninfo_arr, resnames, nodepart, num_parts);\n\t\t\t\tpnpc_arr.push_back(npc);\n\t\t\t} catch (std::bad_alloc &e) {\n\t\t\t\tfree_node_partition_array(nodepart);\n\t\t\t\terror = 1;\n\t\t\t}\n\t\t} else\n\t\t\terror = 1;\n\t}\n\n\tif (error)\n\t\treturn NULL;\n\n\treturn npc;\n}\n\n/**\n * 
@brief\n * \t\tdo an initial check to see if a resresv can fit into a node partition\n *        based on the meta data we keep.\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tnp\t-\tnode partition to check\n * @param[in]\tresresv\t-\tjob/resv to see if it can fit\n * @param[in]\tflags\t-\tcheck_flags\n *\t\t\t\t\t\t\tCOMPARE_TOTAL - compare with resources_available value\n *\t\t\t\t\t\t\tRETURN_ALL_ERR - return all the errors, not just the first failure\n * @param[in]\terr\t-\tschd_error structure to return why job/resv can't fit\n *\n * @return\tint\n * @retval\t1\t: can fit\n * @retval\t0\t: can't fit\n * @retval\t-1\t: on error\n */\nint\nresresv_can_fit_nodepart(status *policy, node_partition *np, resource_resv *resresv,\n\t\t\t unsigned int flags, schd_error *err)\n{\n\tint i;\n\tschd_error *prev_err = NULL;\n\tint can_fit = 1;\n\tint pass_flags;\n\tresource_req *req;\n\tselspec *spec = NULL;\n\tplace *pl = NULL;\n\n\tif (policy == NULL || np == NULL || resresv == NULL || err == NULL)\n\t\treturn -1;\n\n\tpass_flags = flags | UNSET_RES_ZERO;\n\n\t/* Check 1: Based on the flag check if there are any nodes available or if\n\t * they are free.\n\t */\n\tif (flags & COMPARE_TOTAL) {\n\t\t/* Check that node partition must have one or more nodes inside it */\n\t\tif (np->tot_nodes == 0) {\n\t\t\tset_schd_error_codes(err, NEVER_RUN, NO_TOTAL_NODES);\n\t\t\tif ((flags & RETURN_ALL_ERR)) {\n\t\t\t\tcan_fit = 0;\n\t\t\t\terr->next = new_schd_error();\n\t\t\t\tprev_err = err;\n\t\t\t\terr = err->next;\n\t\t\t} else\n\t\t\t\treturn 0;\n\t\t}\n\t} else {\n\t\t/* Check is there at least 1 node in the free state */\n\t\tif (np->free_nodes == 0) {\n\t\t\tset_schd_error_codes(err, NOT_RUN, NO_FREE_NODES);\n\t\t\tif ((flags & RETURN_ALL_ERR)) {\n\t\t\t\tcan_fit = 0;\n\t\t\t\terr->next = new_schd_error();\n\t\t\t\tprev_err = err;\n\t\t\t\terr = err->next;\n\t\t\t} else\n\t\t\t\treturn 0;\n\t\t}\n\t}\n\n\t/* Check 2: v/scatter - If we're scattering or requesting 
exclusive nodes\n\t * we know we need at least as many nodes as requested chunks */\n\tif (resresv->place_spec->scatter || resresv->place_spec->vscatter) {\n\t\tint nodect;\n\t\tenum sched_error_code error_code;\n\t\tenum schd_err_status status_code;\n\t\tif ((flags & COMPARE_TOTAL)) {\n\t\t\tnodect = np->tot_nodes;\n\t\t\terror_code = NO_TOTAL_NODES;\n\t\t\tstatus_code = NEVER_RUN;\n\t\t} else {\n\t\t\tnodect = np->free_nodes;\n\t\t\terror_code = NO_FREE_NODES;\n\t\t\tstatus_code = NOT_RUN;\n\t\t}\n\n\t\tif (nodect < resresv->select->total_chunks) {\n\t\t\tset_schd_error_codes(err, status_code, error_code);\n\t\t\tif ((flags & RETURN_ALL_ERR)) {\n\t\t\t\tcan_fit = 0;\n\t\t\t\terr->next = new_schd_error();\n\t\t\t\tprev_err = err;\n\t\t\t\terr = err->next;\n\t\t\t} else\n\t\t\t\treturn 0;\n\t\t}\n\t}\n\n\t/* Check 3: Job Wide RASSN resources(e.g., ncpus, mem).  We only check\n\t * resources that the server has summed over the select statement.  We\n\t * know these came from the nodes so should be checked on the nodes.  Other\n\t * resources are for server/queue and so we ignore them here\n\t */\n\tif (resresv->is_job && resresv->job != NULL && resresv->job->resreq_rel != NULL)\n\t\treq = resresv->job->resreq_rel;\n\telse\n\t\treq = resresv->resreq;\n\tif (check_avail_resources(np->res, req,\n\t\t\t\t  pass_flags, policy->resdef_to_check_rassn_select,\n\t\t\t\t  INSUFFICIENT_RESOURCE, err) == 0) {\n\t\tif ((flags & RETURN_ALL_ERR)) {\n\t\t\tcan_fit = 0;\n\t\t\tfor (; err->next != NULL; err = err->next)\n\t\t\t\t;\n\t\t\terr->next = new_schd_error();\n\t\t\tprev_err = err;\n\t\t\terr = err->next;\n\t\t} else\n\t\t\treturn 0;\n\t}\n\n\t/* Check 4: Chunk level resources: Check each chunk compared to the meta data\n\t *          This is mostly for non-consumables.  Booleans are always honored\n\t *\t      on nodes regardless if they are in the resources line.  
This is a\n\t *\t      grandfathering in from the old nodespec properties.\n\t */\n\t/* The call to get_resresv_spec is needed here because we are checking for resources on each\n\t * chunk. For jobs that already have execselect specification defined we only need to\n\t * traverse through those chunks.\n\t * get_resresv_spec sets the spec value to execselect/select depending on whether execselect\n\t * was set or not.\n\t */\n\tget_resresv_spec(resresv, &spec, &pl);\n\tfor (i = 0; spec->chunks[i] != NULL; i++) {\n\t\tif (check_avail_resources(np->res, spec->chunks[i]->req,\n\t\t\t\t\t  pass_flags | CHECK_ALL_BOOLS, policy->resdef_to_check,\n\t\t\t\t\t  INSUFFICIENT_RESOURCE, err) == 0) {\n\t\t\tif ((flags & RETURN_ALL_ERR)) {\n\t\t\t\tcan_fit = 0;\n\t\t\t\tfor (; err->next != NULL; err = err->next)\n\t\t\t\t\t;\n\t\t\t\terr->next = new_schd_error();\n\t\t\t\tprev_err = err;\n\t\t\t\terr = err->next;\n\t\t\t} else\n\t\t\t\treturn 0;\n\t\t}\n\t}\n\tif ((flags & RETURN_ALL_ERR)) {\n\t\tif (prev_err != NULL) {\n\t\t\tprev_err->next = NULL;\n\t\t\tfree(err);\n\t\t}\n\t\treturn can_fit;\n\t}\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tcreate_specific_nodepart - create a node partition with specific\n *\t\t\t\t          nodes, rather than from a placement\n *\t\t\t\t          set resource=value\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tname\t-\tthe name of the node partition\n * @param[in]\tnodes\t-\tthe nodes to create the placement set with\n * @param[in]\tflags\t-\tflags which change operations of node partition creation\n *\n * @return\tnode_partition * - the node partition\n * @NULL\t: on error\n */\nnode_partition *\ncreate_specific_nodepart(status *policy, const char *name, node_info **nodes, int flags)\n{\n\tnode_partition *np;\n\tint i, j;\n\tint cnt;\n\tnode_partition **tmp_arr;\n\n\tif (name == NULL || nodes == NULL)\n\t\treturn NULL;\n\n\tnp = new_node_partition();\n\tif (np == NULL)\n\t\treturn NULL;\n\n\tcnt = count_array(nodes);\n\n\tnp->name 
= string_dup(name);\n\tnp->def = NULL;\n\tnp->res_val = string_dup(\"none\");\n\tnp->rank = get_sched_rank();\n\n\tnp->ninfo_arr = static_cast<node_info **>(malloc((cnt + 1) * sizeof(node_info *)));\n\tif (np->ninfo_arr == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\tfree_node_partition(np);\n\t\treturn NULL;\n\t}\n\n\tj = 0;\n\tfor (i = 0; i < cnt; i++) {\n\t\tif (!nodes[i]->is_stale) {\n\t\t\tif (!(flags & NP_NO_ADD_NP_ARR)) {\n\t\t\t\ttmp_arr = static_cast<node_partition **>(add_ptr_to_array(nodes[i]->np_arr, np));\n\t\t\t\tif (tmp_arr == NULL) {\n\t\t\t\t\tfree_node_partition(np);\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n\t\t\t\tnodes[i]->np_arr = tmp_arr;\n\t\t\t}\n\n\t\t\tnp->ninfo_arr[j] = nodes[i];\n\t\t\tj++;\n\t\t}\n\t}\n\tnp->tot_nodes = j;\n\n\tnp->ninfo_arr[np->tot_nodes] = NULL;\n\n\tif (node_partition_update(policy, np) == 0) {\n\t\tfree_node_partition(np);\n\t\treturn NULL;\n\t}\n\n\treturn np;\n}\n\n/**\n * @brief\n * \t\tcreate the placement sets for the server and queues\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tsinfo\t-\tthe server\n *\n * @return\tint\n * @retval\ttrue\t: success\n * @retval\tfalse\t: failure\n */\nbool\ncreate_placement_sets(status *policy, server_info *sinfo)\n{\n\tbool is_success = true;\n\n\tsinfo->allpart = create_specific_nodepart(policy, \"all\", sinfo->unassoc_nodes, NO_FLAGS);\n\tif (sinfo->has_multi_vnode) {\n\t\tconst std::vector<std::string> resstr{\"host\"};\n\t\tint num;\n\t\tsinfo->hostsets = create_node_partitions(policy, sinfo->nodes,\n\t\t\t\t\t\t\t resstr, sc_attrs.only_explicit_psets ? 
NP_NONE : NP_CREATE_REST, &num);\n\t\tif (sinfo->hostsets != NULL) {\n\t\t\tsinfo->num_hostsets = num;\n\t\t\tfor (int i = 0; sinfo->nodes[i] != NULL; i++) {\n\t\t\t\tschd_resource *hostres;\n\t\t\t\tchar hostbuf[256];\n\n\t\t\t\thostres = find_resource(sinfo->nodes[i]->res, allres[\"host\"]);\n\t\t\t\tif (hostres != NULL) {\n\t\t\t\t\tsnprintf(hostbuf, sizeof(hostbuf), \"host=%s\", hostres->str_avail[0]);\n\t\t\t\t\tsinfo->nodes[i]->hostset =\n\t\t\t\t\t\tfind_node_partition(sinfo->hostsets, hostbuf);\n\t\t\t\t} else {\n\t\t\t\t\tsnprintf(hostbuf, sizeof(hostbuf), \"host=\\\"\\\"\");\n\t\t\t\t\tsinfo->nodes[i]->hostset =\n\t\t\t\t\t\tfind_node_partition(sinfo->hostsets, hostbuf);\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER, LOG_DEBUG, \"\",\n\t\t\t\t  \"Failed to create host sets for server\");\n\t\t\tis_success = false;\n\t\t}\n\t}\n\n\tif (sinfo->node_group_enable && !sinfo->node_group_key.empty()) {\n\t\tsinfo->nodepart = create_node_partitions(policy, sinfo->unassoc_nodes,\n\t\t\t\t\t\t\t sinfo->node_group_key,\n\t\t\t\t\t\t\t sc_attrs.only_explicit_psets ? 
NP_NONE : NP_CREATE_REST,\n\t\t\t\t\t\t\t &sinfo->num_parts);\n\n\t\tif (sinfo->nodepart != NULL) {\n\t\t\tqsort(sinfo->nodepart, sinfo->num_parts,\n\t\t\t      sizeof(node_partition *), cmp_placement_sets);\n\t\t} else {\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER, LOG_DEBUG, \"\",\n\t\t\t\t  \"Failed to create node partitions for server\");\n\t\t\tis_success = false;\n\t\t}\n\t}\n\n\tfor (auto qinfo : sinfo->queues) {\n\n\t\tif (qinfo->has_nodes)\n\t\t\tqinfo->allpart = create_specific_nodepart(policy, \"all\", qinfo->nodes, NO_FLAGS);\n\n\t\tif (sinfo->node_group_enable && (qinfo->has_nodes || !qinfo->node_group_key.empty())) {\n\t\t\tnode_info **ngroup_nodes;\n\t\t\tstd::vector<std::string> ngkey;\n\n\t\t\tif (qinfo->has_nodes)\n\t\t\t\tngroup_nodes = qinfo->nodes;\n\t\t\telse\n\t\t\t\tngroup_nodes = sinfo->unassoc_nodes;\n\n\t\t\tif (!qinfo->node_group_key.empty())\n\t\t\t\tngkey = qinfo->node_group_key;\n\t\t\telse\n\t\t\t\tngkey = sinfo->node_group_key;\n\n\t\t\tqinfo->nodepart = create_node_partitions(policy, ngroup_nodes,\n\t\t\t\t\t\t\t\t ngkey, sc_attrs.only_explicit_psets ? 
NP_NONE : NP_CREATE_REST,\n\t\t\t\t\t\t\t\t &(qinfo->num_parts));\n\t\t\tif (qinfo->nodepart != NULL) {\n\t\t\t\tqsort(qinfo->nodepart, qinfo->num_parts,\n\t\t\t\t      sizeof(node_partition *), cmp_placement_sets);\n\t\t\t} else {\n\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_QUEUE, LOG_DEBUG, qinfo->name,\n\t\t\t\t\t  \"Failed to create node partitions for queue.\");\n\t\t\t\tis_success = false;\n\t\t\t}\n\t\t}\n\t}\n\treturn is_success;\n}\n\n/**\n * @brief sort all placement sets (server's psets, queue's psets, and hostsets)\n * @param[in] policy - policy info\n * @param[in] sinfo - server universe\n * @return void\n */\nvoid\nsort_all_nodepart(status *policy, server_info *sinfo)\n{\n\tif (policy == NULL || sinfo == NULL)\n\t\treturn;\n\n\tif (sinfo->node_group_enable && !sinfo->node_group_key.empty())\n\t\tqsort(sinfo->nodepart, sinfo->num_parts,\n\t\t      sizeof(node_partition *), cmp_placement_sets);\n\n\tfor (auto qinfo : sinfo->queues) {\n\n\t\tif (sinfo->node_group_enable && !qinfo->node_group_key.empty())\n\t\t\tqsort(qinfo->nodepart, qinfo->num_parts,\n\t\t\t      sizeof(node_partition *), cmp_placement_sets);\n\t}\n\tif (!policy->node_sort->empty() && conf.node_sort_unused && sinfo->hostsets != NULL) {\n\t\t/* Resort the nodes in host sets to correctly reflect unused resources */\n\t\tqsort(sinfo->hostsets, sinfo->num_hostsets, sizeof(node_partition *), multi_nodepart_sort);\n\t}\n}\n\n/**\n *\n *\t@brief update all node partitions of all queues on the server\n *\t@note Call update_all_nodepart() after all nodes have been processed\n *\t\tby update_node_on_end/update_node_on_run\n *\n *\t  @param[in] policy - policy info\n *\t  @param[in] sinfo - server info\n *\t  @param[in] flags - flags to modify behavior\n *\t  \t\t\tNO_ALLPART - do not update the metadata in the allpart.\n *\t  \t\t\t\t     There are circumstances (e.g., calendaring) where\n *\t  \t\t\t\t     the allpart provides limited use and will constantly\n *\t  \t\t\t\t     be updated. 
 It is best to just skip it.\n *\n *\t@return nothing\n *\n */\nvoid\nupdate_all_nodepart(status *policy, server_info *sinfo, unsigned int flags)\n{\n\tif (sinfo == NULL)\n\t\treturn;\n\n\tif (sinfo->allpart == NULL)\n\t\treturn;\n\n\tif (sinfo->node_group_enable && !sinfo->node_group_key.empty())\n\t\tnode_partition_update_array(policy, sinfo->nodepart);\n\n\t/* Update and resort the placement sets on the queues */\n\tfor (auto qinfo : sinfo->queues) {\n\n\t\tif (sinfo->node_group_enable && !qinfo->node_group_key.empty())\n\t\t\tnode_partition_update_array(policy, qinfo->nodepart);\n\n\t\tif ((flags & NO_ALLPART) == 0) {\n\t\t\tif (qinfo->allpart != NULL && qinfo->allpart->res == NULL)\n\t\t\t\tnode_partition_update(policy, qinfo->allpart);\n\t\t}\n\t}\n\n\t/* Update and resort the hostsets */\n\tnode_partition_update_array(policy, sinfo->hostsets);\n\n\tif ((flags & NO_ALLPART) == 0)\n\t\tnode_partition_update(policy, sinfo->allpart);\n\n\tsort_all_nodepart(policy, sinfo);\n\n\tsinfo->pset_metadata_stale = 0;\n}\n"
  },
  {
    "path": "src/scheduler/node_partition.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _NODE_PARTITION_H\n#define _NODE_PARTITION_H\n\n#include \"data_types.h\"\n#include <pbs_ifl.h>\n\n/*\n *\n *      new_node_partition - allocate and initialize a node_partition\n *\n *      returns new node partition or NULL on error\n *\n */\n#ifdef NAS /* localmod 005 */\nnode_partition *new_node_partition(void);\n#else\nnode_partition *new_node_partition();\n#endif /* localmod 005 */\n\n/*\n *\n *      free_node_partition_array - free an array of node_partitions\n *\n *        np_arr - node partition array to free\n *\n *      returns nothing\n *\n */\nvoid free_node_partition_array(node_partition **np_arr);\n\n/*\n *\n *      free_node_partition - free a node_partition structure\n *\n *        np - the node_partition to free\n *\n */\nvoid free_node_partition(node_partition *np);\n\n/*\n *\n *      dup_node_partition_array - duplicate a node_partition array\n *\n *        onp_arr - the node_partition array to duplicate\n *        nsinfo - server for the new node partition\n *\n *      returns duplicated array or NULL on error\n *\n */\nnode_partition **dup_node_partition_array(node_partition **onp_arr, server_info *nsinfo);\n\n/*\n *\n *      dup_node_partition - duplicate a node_partition structure\n *\n *        onp - the node_partition structure to duplicate\n *        nsinfo - server for the new node partiton (the nodes are 
needed)\n *\n *      returns duplicated node_partition or NULL on error\n *\n */\nnode_partition *dup_node_partition(node_partition *onp, server_info *nsinfo);\n\n/* copy a node partition array from pointers out of another.*/\nnode_partition **copy_node_partition_ptr_array(node_partition **onp_arr, node_partition **new_nps);\n\n/*\n *\n *      create_node_partitions - break apart nodes into partitions\n *\n *         IN: policy - policy info\n *         IN: nodes  -  the nodes to create partitions from\n *\t   IN: resname - node grouping resource name\n *         IN: flags - flags which change operations of node partition creation\n *\t  OUT: num_parts - the number of node partitions created\n *\n *      returns node_partition array or NULL on error\n *              num_parts set to the number of partitions created if not NULL\n *\n *\n */\nnode_partition **create_node_partitions(status *policy, node_info **nodes, const std::vector<std::string> &resnames,\n\t\t\t\t\tunsigned int flags, int *num_parts);\n\n/*\n *\n *      find_node_partition - find a node partition by name in an array\n *\n *        np_arr - array of node partitions to search\n *        name\n *\n *      returns found node partition or NULL if not found\n *\n */\nnode_partition *find_node_partition(node_partition **np_arr, const std::string &name);\n\n/* find node partition by unique rank */\n\nnode_partition *find_node_partition_by_rank(node_partition **np_arr, int rank);\n\n/*\n *\tnode_partition_update_array - update an entire array of node partitions\n *\t  nodepart - the array of node partition to update\n *\n *\treturns 1 on all success, 0 on any failure\n *\tNote: This is not an atomic operation\n */\nint node_partition_update_array(status *policy, node_partition **nodepart);\n\n/*\n *\tnode_partition_update - update the meta data about a node partition\n *\t\t\tlike free_nodes and res\n *\n *\t  np - the node partition to update\n *\n *\treturns 1 on success, 0 on failure\n */\nint 
node_partition_update(status *policy, node_partition *np);\n\n/*\n *\tfree_np_cache_array - destructor for array\n */\nvoid free_np_cache_array(std::vector<np_cache *> &npc_arr);\n\n/*\n *\tfree_np_cache - destructor\n */\nvoid free_np_cache(np_cache *npc);\n\n/*\n *\tfind_np_cache - find a np_cache by the array of resource names and\n *\t\t\tnodes which created it.\n *\n *\t  npc_arr - the array to search\n *\t  resnames - the list of names\n *\t  ninfo_arr - array of nodes\n *\n *\tNOTE: function does node_info pointer comparison to save time\n *\n *\treturns the found np_cache or NULL if not found (or on error)\n *\n */\nnp_cache *\nfind_np_cache(const std::vector<np_cache *> &npc_arr,\n\t      const std::vector<std::string> &resnames, node_info **ninfo_arr);\n/*\n *\tfind_alloc_np_cache - find a np_cache by the array of resource names\n *\t\t\t      and nodes which created it.  If the np_cache\n *\t\t\t      does not exist, create it and add it to the list\n */\nnp_cache *\nfind_alloc_np_cache(status *policy, std::vector<np_cache *> &pnpc_arr,\n\t\t    const std::vector<std::string> &resnames, node_info **ninfo_arr,\n\t\t    int (*sort_func)(const void *, const void *));\n\n/*\n * do an initial check to see if a resresv can fit into a node partition\n * based on the meta data we keep\n */\nint resresv_can_fit_nodepart(status *policy, node_partition *np, resource_resv *resresv, unsigned int flags, schd_error *err);\n\n/*\n *\tcreate_specific_nodepart - create a node partition with specific\n *\t\t\t\t   nodes, rather than from a placement\n *\t\t\t\t   set resource=value\n */\nnode_partition *create_specific_nodepart(status *policy, const char *name, node_info **nodes, int flags);\n/* create the placement sets for the server and queues */\nbool create_placement_sets(status *policy, server_info *sinfo);\n\n/* Update placement sets and allparts */\nvoid update_all_nodepart(status *policy, server_info *sinfo, unsigned int flags);\n\n/* Sort all placement sets (server's psets, 
queue's psets, and hostsets) */\nvoid sort_all_nodepart(status *policy, server_info *sinfo);\n\n/*\n * update the node buckets associated with a node\n */\nvoid update_buckets_for_node(node_bucket **bkts, node_info *ninfo);\n\n/*\n * update the node buckets associated with a node partition on\n * job/resv run/end\n */\nvoid update_buckets_for_node_array(node_bucket **bkts, node_info **ninfo_arr);\n\n#endif /* _NODE_PARTITION_H */\n"
  },
  {
    "path": "src/scheduler/parse.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    parse.c\n *\n * @brief\n * \tparse.c - contains functions related to parsing the config file.\n *\n * Functions included are:\n * \tparse_config()\n * \tis_speccase_sort()\n * \tinit_config()\n * \tscan()\n * \tpreempt_bit_field()\n * \tpreempt_cmp()\n *\n */\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <string.h>\n#include <stdlib.h>\n#include <errno.h>\n#include <ctype.h>\n#include <log.h>\n#include <libutil.h>\n#include <unistd.h>\n#include \"data_types.h\"\n#include \"parse.h\"\n#include \"constant.h\"\n#include \"config.h\"\n#include \"misc.h\"\n#include \"globals.h\"\n#include \"fairshare.h\"\n#include \"prime.h\"\n#include \"node_info.h\"\n#include \"resource.h\"\n#include \"pbs_internal.h\"\n\nconfig::config() : fairshare_res(\"cput\"), fairshare_ent(\"euser\")\n{\n\tprime_rr = 0;\n\tnon_prime_rr = 0;\n\tprime_bq = 0;\n\tnon_prime_bq = 0;\n\tprime_sf = 0;\n\tnon_prime_sf = 0;\n\tprime_so = 0;\n\tnon_prime_so = 0;\n\tprime_fs = 0;\n\tnon_prime_fs = 0;\n\tprime_bf = 1;\n\tnon_prime_bf = 1;\n\tprime_sn = 0;\n\tnon_prime_sn = 0;\n\tprime_bp = 0;\n\tnon_prime_bp = 0;\n\tprime_pre = 0;\n\tnon_prime_pre = 0;\n\tupdate_comments = 1;\n\tprime_exempt_anytime_queues = 0;\n\tenforce_no_shares = 1;\n\tnode_sort_unused = 0;\n\tresv_conf_ignore = 0;\n\tallow_aoe_calendar = 0;\n#ifdef NAS /* localmod 034 
*/\n\tprime_sto = 0;\n\tnon_prime_sto = 0;\n#endif /* localmod 034 */\n\n\tprime_smp_dist = SMP_NODE_PACK;\n\tnon_prime_smp_dist = SMP_NODE_PACK;\n\tprime_spill = 0;\n\tnonprime_spill = 0;\n\tdecay_time = 86400;\n\tignore_res.insert(\"mpiprocs\");\n\tignore_res.insert(\"ompthreads\");\n\tmemset(prime, 0, sizeof(prime));\n\tholiday_year = 0;\t\t      /* the year the holidays are for */\n\tunknown_shares = 0;\t\t      /* unknown group shares */\n\tmax_preempt_attempts = SCHD_INFINITY; /* max num of preempt attempts per cyc*/\n\tmax_jobs_to_check = SCHD_INFINITY;    /* max number of jobs to check in cyc*/\n\tfairshare_decay_factor = .5;\t      /* decay factor used when decaying fairshare tree */\n#ifdef NAS\n\t/* localmod 034 */\n\tmax_borrow = 0;\t       /* job share borrowing limit */\n\tper_share_topjobs = 0; /* per share group guaranteed top jobs*/\n\t/* localmod 038 */\n\tper_queues_topjobs = 0; /* per queues guaranteed top jobs */\n\t/* localmod 030 */\n\tmin_intrptd_cycle_length = 0; /* min length of interrupted cycle */\n\tmax_intrptd_cycles = 0;\t      /* max consecutive interrupted cycles */\n#endif\n\n\t/* selection criteria of nodes for provisioning */\n\tprovision_policy = AGGRESSIVE_PROVISION;\n}\n\n/* strtok delimiters for parsing the sched_config file are space and tab */\n#define DELIM \"\\t \"\n\n/**\n * @brief\n * \t\tparse the config file into the global struct config conf\n *\n * @param[in]\tfname\t-\tfile name of the config file\n *\n * @see\tGLOBAL:\tconf  - config structure\n *\n * @return struct config\n * \n * @par\n *\tFILE FORMAT:\n *\tconfig_name [white space] : [white space] config_value [prime_value]\n *\tEX: sort_by: shortest_job_first prime\n */\nconfig\nparse_config(const char *fname)\n{\n\tFILE *fp; /* file pointer to config file */\n\tchar *buf = NULL;\n\tint buf_size = 0;\n\tint linenum = 0; /* the current line number in the file */\n\n\t/* resource type for validity checking */\n\tstruct resource_type type;\n\n\tstruct config 
tmpconf;\n\n\tif ((fp = fopen(fname, \"r\")) == NULL) {\n\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_FILE, LOG_NOTICE,\n\t\t\t   fname, \"Can not open file: %s\", fname);\n\t\treturn config();\n\t}\n\n#ifdef NAS\n\t/* localmod 034 */\n\ttmpconf.max_borrow = UNSPECIFIED;\n\ttmpconf.per_share_topjobs = 0;\n\t/* localmod 038 */\n\ttmpconf.per_queues_topjobs = 0;\n\t/* localmod 030 */\n\ttmpconf.min_intrptd_cycle_length = 30;\n\ttmpconf.max_intrptd_cycles = 1;\n#endif\n\n\t/* auto-set any internally needed config values before reading the file */\n\twhile (pbs_fgets_extend(&buf, &buf_size, fp) != NULL) {\n\t\tbool error = false;\n\t\tconst char *obsolete[2] = {0};\n\t\tauto prime = PT_ALL;\n\t\tchar errbuf[1024];\n\t\terrbuf[0] = '\\0';\n\t\tlinenum++;\n\n\t\t/* skip blank lines and comments */\n\t\tif (!skip_line(buf)) {\n\t\t\tauto config_name = scan(buf, ':');\n\t\t\tauto config_value = scan(NULL, 0);\n\t\t\tauto prime_value = scan(NULL, 0);\n\t\t\tchar *endp;\n\t\t\tif (config_name != NULL && config_value != NULL) {\n\t\t\t\tlong num = -1;\n\t\t\t\tif (strcasecmp(config_value, \"true\") == 0) {\n\t\t\t\t\t/* value is true */\n\t\t\t\t\tnum = 1;\n\t\t\t\t} else if (strcasecmp(config_value, \"false\") == 0) {\n\t\t\t\t\t/* value is false */\n\t\t\t\t\tnum = 0;\n\t\t\t\t} else if (isdigit((int) config_value[0])) {\n\t\t\t\t\t/* value is number */\n\t\t\t\t\tnum = strtol(config_value, &endp, 10);\n\t\t\t\t}\n\n\t\t\t\tif (prime_value != NULL) {\n\t\t\t\t\tif (!strcmp(prime_value, \"prime\") || !strcmp(prime_value, \"PRIME\"))\n\t\t\t\t\t\tprime = PRIME;\n\t\t\t\t\telse if (!strcmp(prime_value, \"non_prime\") ||\n\t\t\t\t\t\t !strcmp(prime_value, \"NON_PRIME\"))\n\t\t\t\t\t\tprime = NON_PRIME;\n\t\t\t\t\telse if (!strcmp(prime_value, \"all\") || !strcmp(prime_value, \"ALL\"))\n\t\t\t\t\t\tprime = PT_ALL;\n\t\t\t\t\telse if (!strcmp(prime_value, \"none\") || !strcmp(prime_value, \"NONE\"))\n\t\t\t\t\t\tprime = PT_NONE;\n\t\t\t\t\telse 
{\n\t\t\t\t\t\tsnprintf(errbuf, sizeof(errbuf), \"Invalid prime keyword: %s\", prime_value);\n\t\t\t\t\t\terror = true;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif (!strcmp(config_name, PARSE_ROUND_ROBIN)) {\n\t\t\t\t\tif (prime == PRIME || prime == PT_ALL)\n\t\t\t\t\t\ttmpconf.prime_rr = num ? 1 : 0;\n\t\t\t\t\tif (prime == NON_PRIME || prime == PT_ALL)\n\t\t\t\t\t\ttmpconf.non_prime_rr = num ? 1 : 0;\n\t\t\t\t} else if (!strcmp(config_name, PARSE_BY_QUEUE)) {\n\t\t\t\t\tif (prime == PRIME || prime == PT_ALL)\n\t\t\t\t\t\ttmpconf.prime_bq = num ? 1 : 0;\n\t\t\t\t\tif (prime == NON_PRIME || prime == PT_ALL)\n\t\t\t\t\t\ttmpconf.non_prime_bq = num ? 1 : 0;\n\t\t\t\t} else if (!strcmp(config_name, PARSE_STRICT_FIFO)) {\n\t\t\t\t\tif (prime == PRIME || prime == PT_ALL)\n\t\t\t\t\t\ttmpconf.prime_sf = num ? 1 : 0;\n\t\t\t\t\tif (prime == NON_PRIME || prime == PT_ALL)\n\t\t\t\t\t\ttmpconf.non_prime_sf = num ? 1 : 0;\n\t\t\t\t\tobsolete[0] = config_name;\n\t\t\t\t\tobsolete[1] = \"strict_ordering\";\n\t\t\t\t} else if (!strcmp(config_name, PARSE_STRICT_ORDERING)) {\n\t\t\t\t\tif (prime == PRIME || prime == PT_ALL)\n\t\t\t\t\t\ttmpconf.prime_so = num ? 1 : 0;\n\t\t\t\t\tif (prime == NON_PRIME || prime == PT_ALL)\n\t\t\t\t\t\ttmpconf.non_prime_so = num ? 1 : 0;\n\t\t\t\t} else if (!strcmp(config_name, PARSE_FAIR_SHARE)) {\n\t\t\t\t\tif (prime == PRIME || prime == PT_ALL)\n\t\t\t\t\t\ttmpconf.prime_fs = num ? 1 : 0;\n\t\t\t\t\tif (prime == NON_PRIME || prime == PT_ALL)\n\t\t\t\t\t\ttmpconf.non_prime_fs = num ? 1 : 0;\n\t\t\t\t} else if (!strcmp(config_name, PARSE_HELP_STARVING_JOBS)) {\n\t\t\t\t\tobsolete[0] = config_name;\n\t\t\t\t\tobsolete[1] = \"use eligible_time in job_sort_formula\";\n\t\t\t\t} else if (!strcmp(config_name, PARSE_BACKFILL)) {\n\t\t\t\t\tif (prime == PRIME || prime == PT_ALL)\n\t\t\t\t\t\ttmpconf.prime_bf = num ? 1 : 0;\n\t\t\t\t\tif (prime == NON_PRIME || prime == PT_ALL)\n\t\t\t\t\t\ttmpconf.non_prime_bf = num ? 
1 : 0;\n\n\t\t\t\t\tobsolete[0] = config_name;\n\t\t\t\t\tobsolete[1] = \"server's backfill_depth=0\";\n\t\t\t\t} else if (!strcmp(config_name, PARSE_SORT_QUEUES)) {\n\t\t\t\t\tobsolete[0] = config_name;\n\t\t\t\t} else if (!strcmp(config_name, PARSE_UPDATE_COMMENTS)) {\n\t\t\t\t\ttmpconf.update_comments = num ? 1 : 0;\n\t\t\t\t} else if (!strcmp(config_name, PARSE_BACKFILL_PRIME)) {\n\t\t\t\t\tif (prime == PRIME || prime == PT_ALL)\n\t\t\t\t\t\ttmpconf.prime_bp = num ? 1 : 0;\n\t\t\t\t\tif (prime == NON_PRIME || prime == PT_ALL)\n\t\t\t\t\t\ttmpconf.non_prime_bp = num ? 1 : 0;\n\t\t\t\t} else if (!strcmp(config_name, PARSE_PREEMPIVE_SCHED)) {\n\t\t\t\t\tif (prime == PRIME || prime == PT_ALL)\n\t\t\t\t\t\ttmpconf.prime_pre = num ? 1 : 0;\n\t\t\t\t\tif (prime == NON_PRIME || prime == PT_ALL)\n\t\t\t\t\t\ttmpconf.non_prime_pre = num ? 1 : 0;\n\t\t\t\t} else if (!strcmp(config_name, PARSE_PRIME_EXEMPT_ANYTIME_QUEUES))\n\t\t\t\t\ttmpconf.prime_exempt_anytime_queues = num ? 1 : 0;\n\t\t\t\telse if (!strcmp(config_name, PARSE_ENFORCE_NO_SHARES))\n\t\t\t\t\ttmpconf.enforce_no_shares = num ? 
1 : 0;\n\t\t\t\telse if (!strcmp(config_name, PARSE_ALLOW_AOE_CALENDAR))\n\t\t\t\t\ttmpconf.allow_aoe_calendar = 1;\n\t\t\t\telse if (!strcmp(config_name, PARSE_PRIME_SPILL)) {\n\t\t\t\t\tif (prime == PRIME || prime == PT_ALL)\n\t\t\t\t\t\ttmpconf.prime_spill = res_to_num(config_value, &type);\n\t\t\t\t\tif (prime == NON_PRIME || prime == PT_ALL)\n\t\t\t\t\t\ttmpconf.nonprime_spill = res_to_num(config_value, &type);\n\n\t\t\t\t\tif (!type.is_time) {\n\t\t\t\t\t\tsnprintf(errbuf, sizeof(errbuf), \"Invalid time %s\", config_value);\n\t\t\t\t\t\terror = true;\n\t\t\t\t\t}\n\t\t\t\t} else if (!strcmp(config_name, PARSE_MAX_STARVE)) {\n\t\t\t\t\tobsolete[0] = config_name;\n\t\t\t\t\tobsolete[1] = \"use eligible_time in job_sort_formula\";\n\t\t\t\t} else if (!strcmp(config_name, PARSE_HALF_LIFE) || !strcmp(config_name, PARSE_FAIRSHARE_DECAY_TIME)) {\n\t\t\t\t\tif (!strcmp(config_name, PARSE_HALF_LIFE)) {\n\t\t\t\t\t\tobsolete[0] = PARSE_HALF_LIFE;\n\t\t\t\t\t\tobsolete[1] = PARSE_FAIRSHARE_DECAY_TIME \" and \" PARSE_FAIRSHARE_DECAY_FACTOR \" instead\";\n\t\t\t\t\t}\n\t\t\t\t\ttmpconf.decay_time = res_to_num(config_value, &type);\n\t\t\t\t\tif (!type.is_time) {\n\t\t\t\t\t\tsnprintf(errbuf, sizeof(errbuf), \"Invalid time %s\", config_value);\n\t\t\t\t\t\terror = true;\n\t\t\t\t\t}\n\t\t\t\t} else if (!strcmp(config_name, PARSE_UNKNOWN_SHARES))\n\t\t\t\t\ttmpconf.unknown_shares = num;\n\t\t\t\telse if (!strcmp(config_name, PARSE_FAIRSHARE_DECAY_FACTOR)) {\n\t\t\t\t\tfloat fnum;\n\t\t\t\t\tfnum = strtod(config_value, &endp);\n\t\t\t\t\tif (*endp == '\\0') {\n\t\t\t\t\t\tif (fnum <= 0 || fnum >= 1) {\n\t\t\t\t\t\t\tsprintf(errbuf, \"%s: Invalid value: %.*f.  
Valid values are between 0 and 1.\", PARSE_FAIRSHARE_DECAY_FACTOR, float_digits(fnum, 2), fnum);\n\t\t\t\t\t\t\terror = true;\n\t\t\t\t\t\t} else\n\t\t\t\t\t\t\ttmpconf.fairshare_decay_factor = fnum;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tpbs_strncpy(errbuf, \"Invalid \" PARSE_FAIRSHARE_DECAY_FACTOR, sizeof(errbuf));\n\t\t\t\t\t\terror = true;\n\t\t\t\t\t}\n\t\t\t\t} else if (!strcmp(config_name, PARSE_FAIRSHARE_RES)) {\n\t\t\t\t\ttmpconf.fairshare_res = config_value;\n\t\t\t\t} else if (!strcmp(config_name, PARSE_FAIRSHARE_ENT)) {\n\t\t\t\t\tif (strcmp(config_value, ATTR_euser) &&\n\t\t\t\t\t    strcmp(config_value, ATTR_egroup) &&\n\t\t\t\t\t    strcmp(config_value, ATTR_A) &&\n\t\t\t\t\t    strcmp(config_value, \"queue\") &&\n\t\t\t\t\t    strcmp(config_value, \"egroup:euser\")) {\n\t\t\t\t\t\terror = true;\n\t\t\t\t\t\tsprintf(errbuf, \"%s %s is erroneous (or deprecated).\",\n\t\t\t\t\t\t\tPARSE_FAIRSHARE_ENT, config_value);\n\t\t\t\t\t}\n\t\t\t\t\ttmpconf.fairshare_ent = config_value;\n\t\t\t\t} else if (!strcmp(config_name, PARSE_NODE_GROUP_KEY)) {\n\t\t\t\t\tobsolete[0] = PARSE_NODE_GROUP_KEY;\n\t\t\t\t\tobsolete[1] = \"nothing - set via qmgr\";\n\t\t\t\t} else if (!strcmp(config_name, PARSE_LOG_FILTER)) {\n\t\t\t\t\tobsolete[0] = PARSE_LOG_FILTER;\n\t\t\t\t\tobsolete[1] = \"nothing - set log_events via qmgr\";\n\t\t\t\t} else if (!strcmp(config_name, PARSE_PREEMPT_QUEUE_PRIO)) {\n\t\t\t\t\tobsolete[0] = PARSE_PREEMPT_QUEUE_PRIO;\n\t\t\t\t\tobsolete[1] = \"nothing - set via qmgr\";\n\t\t\t\t} else if (!strcmp(config_name, PARSE_RES_UNSET_INFINITE)) {\n\t\t\t\t\tchar **strarr;\n\n\t\t\t\t\t// mpiprocs and ompthreads are added in the constructor\n\t\t\t\t\tstrarr = break_comma_list(config_value);\n\t\t\t\t\tfor (int i = 0; strarr[i] != NULL; i++)\n\t\t\t\t\t\ttmpconf.ignore_res.insert(strarr[i]);\n\t\t\t\t\tfree_string_array(strarr);\n\n\t\t\t\t} else if (!strcmp(config_name, PARSE_RESV_CONFIRM_IGNORE)) {\n\t\t\t\t\tif (!strcmp(config_value, 
\"dedicated_time\"))\n\t\t\t\t\t\ttmpconf.resv_conf_ignore = 1;\n\t\t\t\t\telse if (!strcmp(config_value, \"none\"))\n\t\t\t\t\t\ttmpconf.resv_conf_ignore = 0;\n\t\t\t\t\telse {\n\t\t\t\t\t\terror = true;\n\t\t\t\t\t\tsprintf(errbuf, \"%s valid values: dedicated_time or none\",\n\t\t\t\t\t\t\tPARSE_RESV_CONFIRM_IGNORE);\n\t\t\t\t\t}\n\t\t\t\t} else if (!strcmp(config_name, PARSE_RESOURCES)) {\n\t\t\t\t\tbool need_host = false;\n\t\t\t\t\tbool need_vnode = false;\n\t\t\t\t\tchar **strarr;\n\t\t\t\t\t/* hack: add in \"host\" into resources list because this was\n\t\t\t\t\t * done by default prior to 7.1.\n\t\t\t\t\t */\n\t\t\t\t\tif (strstr(config_value, \"host\") == NULL)\n\t\t\t\t\t\tneed_host = true;\n\n\t\t\t\t\t/* hack: add in \"vnode\" in 8.0 */\n\t\t\t\t\tif (strstr(config_value, \"vnode\") == NULL)\n\t\t\t\t\t\tneed_vnode = true;\n\n\t\t\t\t\tstrarr = break_comma_list(config_value);\n\t\t\t\t\tfor (int i = 0; strarr[i] != NULL; i++)\n\t\t\t\t\t\ttmpconf.res_to_check.insert(strarr[i]);\n\t\t\t\t\tfree_string_array(strarr);\n\n\t\t\t\t\tif (need_host)\n\t\t\t\t\t\ttmpconf.res_to_check.insert(\"host\");\n\t\t\t\t\tif (need_vnode)\n\t\t\t\t\t\ttmpconf.res_to_check.insert(\"vnode\");\n\t\t\t\t} else if (!strcmp(config_name, PARSE_DEDICATED_PREFIX))\n\t\t\t\t\ttmpconf.ded_prefix = config_value;\n\t\t\t\telse if (!strcmp(config_name, PARSE_PRIMETIME_PREFIX))\n\t\t\t\t\ttmpconf.pt_prefix = config_value;\n\t\t\t\telse if (!strcmp(config_name, PARSE_NONPRIMETIME_PREFIX)) {\n\t\t\t\t\ttmpconf.npt_prefix = config_value;\n\t\t\t\t} else if (!strcmp(config_name, PARSE_SMP_CLUSTER_DIST)) {\n\t\t\t\t\tfor (int i = 0; i < HIGH_SMP_DIST; i++)\n\t\t\t\t\t\tif (!strcmp(smp_cluster_info[i].str, config_value)) {\n\t\t\t\t\t\t\tif (prime == PRIME || prime == PT_ALL)\n\t\t\t\t\t\t\t\ttmpconf.prime_smp_dist = (enum smp_cluster_dist) smp_cluster_info[i].value;\n\t\t\t\t\t\t\tif (prime == NON_PRIME || prime == PT_ALL)\n\t\t\t\t\t\t\t\ttmpconf.non_prime_smp_dist = (enum 
smp_cluster_dist) smp_cluster_info[i].value;\n\t\t\t\t\t\t}\n\t\t\t\t} else if (!strcmp(config_name, PARSE_PREEMPT_PRIO)) {\n\t\t\t\t\tobsolete[0] = PARSE_PREEMPT_PRIO;\n\t\t\t\t\tobsolete[1] = \"nothing - set via qmgr\";\n\t\t\t\t} else if (!strcmp(config_name, PARSE_PREEMPT_ORDER)) {\n\t\t\t\t\tobsolete[0] = PARSE_PREEMPT_ORDER;\n\t\t\t\t\tobsolete[1] = \"nothing - set via qmgr\";\n\t\t\t\t} else if (!strcmp(config_name, PARSE_PREEMPT_SORT)) {\n\t\t\t\t\tobsolete[0] = PARSE_PREEMPT_SORT;\n\t\t\t\t\tobsolete[1] = \"nothing - set via qmgr\";\n\t\t\t\t} else if (!strcmp(config_name, PARSE_JOB_SORT_KEY)) {\n\t\t\t\t\tsort_info si;\n\n\t\t\t\t\tauto tok = strtok(config_value, DELIM);\n\n\t\t\t\t\tif (tok != NULL) {\n\t\t\t\t\t\tsi.res_name = tok;\n\t\t\t\t\t\tauto f = allres.find(tok);\n\t\t\t\t\t\tif (f != allres.end())\n\t\t\t\t\t\t\tsi.def = f->second;\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\tsi.def = NULL;\n\n\t\t\t\t\t\ttok = strtok(NULL, DELIM);\n\t\t\t\t\t\tif (tok != NULL) {\n\t\t\t\t\t\t\tif (!strcmp(tok, \"high\") || !strcmp(tok, \"HIGH\") ||\n\t\t\t\t\t\t\t    !strcmp(tok, \"High\")) {\n\t\t\t\t\t\t\t\tsi.order = DESC;\n\t\t\t\t\t\t\t} else if (!strcmp(tok, \"low\") || !strcmp(tok, \"LOW\") ||\n\t\t\t\t\t\t\t\t   !strcmp(tok, \"Low\")) {\n\t\t\t\t\t\t\t\tsi.order = ASC;\n\t\t\t\t\t\t\t} else\n\t\t\t\t\t\t\t\terror = true;\n\t\t\t\t\t\t} else\n\t\t\t\t\t\t\terror = true;\n\n\t\t\t\t\t\tif (!error) {\n\t\t\t\t\t\t\tif (si.res_name == SORT_PRIORITY) {\n\t\t\t\t\t\t\t\tobsolete[0] = SORT_PRIORITY \" in \" PARSE_JOB_SORT_KEY;\n\t\t\t\t\t\t\t\tobsolete[1] = SORT_JOB_PRIORITY;\n\t\t\t\t\t\t\t\tsi.res_name = SORT_JOB_PRIORITY;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif (prime == PRIME || prime == PT_ALL)\n\t\t\t\t\t\t\t\ttmpconf.prime_sort.push_back(si);\n\n\t\t\t\t\t\t\tif (prime == NON_PRIME || prime == PT_ALL)\n\t\t\t\t\t\t\t\ttmpconf.non_prime_sort.push_back(si);\n\t\t\t\t\t\t} else\n\t\t\t\t\t\t\tpbs_strncpy(errbuf, \"Invalid job_sort_key\", 
sizeof(errbuf));\n\t\t\t\t\t}\n\t\t\t\t} else if (!strcmp(config_name, PARSE_NODE_SORT_KEY)) {\n\t\t\t\t\tsort_info si;\n\t\t\t\t\tauto tok = strtok(config_value, DELIM);\n\n\t\t\t\t\tif (tok != NULL) {\n\t\t\t\t\t\tsi.res_name = tok;\n\t\t\t\t\t\tauto f = allres.find(tok);\n\t\t\t\t\t\tif (f != allres.end())\n\t\t\t\t\t\t\tsi.def = f->second;\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\tsi.def = NULL;\n\n\t\t\t\t\t\ttok = strtok(NULL, DELIM);\n\t\t\t\t\t\tif (tok != NULL) {\n\t\t\t\t\t\t\tif (!strcmp(tok, \"high\") || !strcmp(tok, \"HIGH\") ||\n\t\t\t\t\t\t\t    !strcmp(tok, \"High\")) {\n\t\t\t\t\t\t\t\tsi.order = DESC;\n\t\t\t\t\t\t\t} else if (!strcmp(tok, \"low\") || !strcmp(tok, \"LOW\") ||\n\t\t\t\t\t\t\t\t   !strcmp(tok, \"Low\")) {\n\t\t\t\t\t\t\t\tsi.order = ASC;\n\t\t\t\t\t\t\t} else\n\t\t\t\t\t\t\t\terror = true;\n\t\t\t\t\t\t} else\n\t\t\t\t\t\t\terror = true;\n\n\t\t\t\t\t\tif (!error) {\n\t\t\t\t\t\t\ttok = strtok(NULL, DELIM);\n\t\t\t\t\t\t\tif (tok == NULL)\n\t\t\t\t\t\t\t\tsi.res_type = RF_AVAIL;\n\t\t\t\t\t\t\telse {\n\t\t\t\t\t\t\t\tif (!strcmp(tok, \"total\"))\n\t\t\t\t\t\t\t\t\tsi.res_type = RF_AVAIL;\n\t\t\t\t\t\t\t\telse if (!strcmp(tok, \"assigned\"))\n\t\t\t\t\t\t\t\t\tsi.res_type = RF_ASSN;\n\t\t\t\t\t\t\t\telse if (!strcmp(tok, \"unused\"))\n\t\t\t\t\t\t\t\t\tsi.res_type = RF_UNUSED;\n\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t\terror = true;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tif (!error) {\n\t\t\t\t\t\t\tif (prime == PRIME || prime == PT_ALL) {\n\t\t\t\t\t\t\t\ttmpconf.prime_node_sort.push_back(si);\n\t\t\t\t\t\t\t\tif (si.res_type == RF_UNUSED || si.res_type == RF_ASSN)\n\t\t\t\t\t\t\t\t\ttmpconf.node_sort_unused = 1;\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\tif (prime == NON_PRIME || prime == PT_ALL) {\n\t\t\t\t\t\t\t\ttmpconf.non_prime_node_sort.push_back(si);\n\t\t\t\t\t\t\t\tif (si.res_type == RF_UNUSED || si.res_type == RF_ASSN)\n\t\t\t\t\t\t\t\t\ttmpconf.node_sort_unused = 1;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t} 
else\n\t\t\t\t\t\t\tpbs_strncpy(errbuf, \"Invalid node_sort_key\", sizeof(errbuf));\n\t\t\t\t\t}\n\t\t\t\t} else if (!strcmp(config_name, PARSE_SERVER_DYN_RES)) {\n\t\t\t\t\t/* get the resource name */\n\t\t\t\t\tauto tok = strtok(config_value, DELIM);\n\t\t\t\t\tif (tok != NULL) {\n\t\t\t\t\t\tauto res = tok;\n\n\t\t\t\t\t\t/* tok is the rest of the config_value string - the program */\n\t\t\t\t\t\ttok = strtok(NULL, \"\");\n\t\t\t\t\t\twhile (tok != NULL && isspace(*tok))\n\t\t\t\t\t\t\ttok++;\n\n\t\t\t\t\t\tif (tok != NULL && tok[0] == '!') {\n\t\t\t\t\t\t\ttok++;\n\t\t\t\t\t\t\tauto command_line = tok;\n\t\t\t\t\t\t\tauto filename = get_script_name(tok);\n\t\t\t\t\t\t\tif (filename == NULL) {\n\t\t\t\t\t\t\t\tsnprintf(errbuf, sizeof(errbuf), \"server_dyn_res script %s does not exist\", tok);\n\t\t\t\t\t\t\t\terror = true;\n\t\t\t\t\t\t\t} else {\n#if !defined(DEBUG) && !defined(NO_SECURITY_CHECK)\n\t\t\t\t\t\t\t\tint err;\n\t\t\t\t\t\t\t\terr = tmp_file_sec_user(filename, 0, 1, S_IWGRP | S_IWOTH, 1, getuid());\n\t\t\t\t\t\t\t\tif (err != 0) {\n\t\t\t\t\t\t\t\t\tsnprintf(errbuf, sizeof(errbuf),\n\t\t\t\t\t\t\t\t\t\t \"error: %s file has a non-secure file access, errno: %d\", filename, err);\n\t\t\t\t\t\t\t\t\terror = true;\n\t\t\t\t\t\t\t\t}\n#endif\n\t\t\t\t\t\t\t\ttmpconf.dynamic_res.emplace_back(res, command_line, filename);\n\t\t\t\t\t\t\t\tfree(filename);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tpbs_strncpy(errbuf, \"Invalid server_dyn_res\", sizeof(errbuf));\n\t\t\t\t\t\t\terror = true;\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tpbs_strncpy(errbuf, \"Invalid server_dyn_res\", sizeof(errbuf));\n\t\t\t\t\t\terror = true;\n\t\t\t\t\t}\n\t\t\t\t} else if (!strcmp(config_name, PARSE_SORT_NODES)) {\n\t\t\t\t\tobsolete[0] = config_name;\n\t\t\t\t\tobsolete[1] = PARSE_NODE_SORT_KEY;\n\t\t\t\t\tsort_info si;\n\n\t\t\t\t\tsi.res_name = SORT_PRIORITY;\n\t\t\t\t\tsi.order = DESC;\n\n\t\t\t\t\tif (prime == PRIME || prime == 
PT_ALL)\n\t\t\t\t\t\ttmpconf.prime_node_sort.push_back(si);\n\t\t\t\t\tif (prime == NON_PRIME || prime == PT_ALL)\n\t\t\t\t\t\ttmpconf.non_prime_node_sort.push_back(si);\n\t\t\t\t} else if (!strcmp(config_name, PARSE_PEER_QUEUE)) {\n\t\t\t\t\tauto lqueue = strtok(config_value, DELIM);\n\t\t\t\t\tif (lqueue != NULL) {\n\t\t\t\t\t\tauto rqueue = strtok(NULL, \"@\");\n\t\t\t\t\t\tif (rqueue != NULL) {\n\t\t\t\t\t\t\twhile (isspace(*rqueue))\n\t\t\t\t\t\t\t\trqueue++;\n\t\t\t\t\t\t\tconst char *rserver = strtok(NULL, DELIM);\n\t\t\t\t\t\t\tif (rserver == NULL)\n\t\t\t\t\t\t\t\trserver = \"\";\n\t\t\t\t\t\t\tif (!error)\n\t\t\t\t\t\t\t\ttmpconf.peer_queues.emplace_back(lqueue, rqueue, rserver);\n\n\t\t\t\t\t\t} else\n\t\t\t\t\t\t\terror = true;\n\t\t\t\t\t} else\n\t\t\t\t\t\terror = true;\n\n\t\t\t\t\tif (error)\n\t\t\t\t\t\tsprintf(errbuf, \"Invalid peer queue\");\n\n\t\t\t\t} else if (!strcmp(config_name, PARSE_PREEMPT_ATTEMPTS))\n\t\t\t\t\ttmpconf.max_preempt_attempts = num;\n\t\t\t\telse if (!strcmp(config_name, PARSE_MAX_JOB_CHECK)) {\n\t\t\t\t\tif (!strcmp(config_value, \"ALL_JOBS\"))\n\t\t\t\t\t\ttmpconf.max_jobs_to_check = SCHD_INFINITY;\n\t\t\t\t\telse\n\t\t\t\t\t\ttmpconf.max_jobs_to_check = num;\n\t\t\t\t} else if (!strcmp(config_name, PARSE_SELECT_PROVISION)) {\n\t\t\t\t\tif (!strcmp(config_value, PROVPOLICY_AVOID))\n\t\t\t\t\t\ttmpconf.provision_policy = AVOID_PROVISION;\n\t\t\t\t}\n#ifdef NAS\n\t\t\t\t/* localmod 034 */\n\t\t\t\telse if (!strcmp(config_name, PARSE_MAX_BORROW)) {\n\t\t\t\t\ttmpconf.max_borrow = res_to_num(config_value, &type);\n\t\t\t\t\tif (!type.is_time)\n\t\t\t\t\t\terror = true;\n\t\t\t\t} else if (!strcmp(config_name, PARSE_SHARES_TRACK_ONLY)) {\n\t\t\t\t\tif (prime == PRIME || prime == PT_ALL)\n\t\t\t\t\t\ttmpconf.prime_sto = num ? 1 : 0;\n\t\t\t\t\tif (prime == NON_PRIME || prime == PT_ALL)\n\t\t\t\t\t\ttmpconf.non_prime_sto = num ? 
1 : 0;\n\t\t\t\t} else if (!strcmp(config_name, PARSE_PER_SHARE_DEPTH) ||\n\t\t\t\t\t   !strcmp(config_name, PARSE_PER_SHARE_TOPJOBS)) {\n\t\t\t\t\ttmpconf.per_share_topjobs = num;\n\t\t\t\t}\n\t\t\t\t/* localmod 038 */\n\t\t\t\telse if (!strcmp(config_name, PARSE_PER_QUEUES_TOPJOBS)) {\n\t\t\t\t\ttmpconf.per_queues_topjobs = num;\n\t\t\t\t}\n\t\t\t\t/* localmod 030 */\n\t\t\t\telse if (!strcmp(config_name, PARSE_MIN_INTERRUPTED_CYCLE_LENGTH)) {\n\t\t\t\t\ttmpconf.min_intrptd_cycle_length = num;\n\t\t\t\t} else if (!strcmp(config_name, PARSE_MAX_CONS_INTERRUPTED_CYCLES)) {\n\t\t\t\t\ttmpconf.max_intrptd_cycles = num;\n\t\t\t\t}\n#endif\n\t\t\t\telse {\n\t\t\t\t\tpbs_strncpy(errbuf, \"Unknown config parameter\", sizeof(errbuf));\n\t\t\t\t\terror = true;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tpbs_strncpy(errbuf, \"Config line invalid\", sizeof(errbuf));\n\t\t\t\terror = true;\n\t\t\t}\n\t\t}\n\n\t\tif (error)\n\t\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_FILE, LOG_NOTICE, fname,\n\t\t\t\t   \"Error reading line %d: %s\", linenum, errbuf);\n\n\t\tif (obsolete[0] != NULL) {\n\t\t\tif (obsolete[1] != NULL)\n\t\t\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_FILE, LOG_NOTICE, fname,\n\t\t\t\t\t   \"Obsolete config name %s, instead use %s\", obsolete[0], obsolete[1]);\n\t\t\telse\n\t\t\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_FILE, LOG_NOTICE, fname,\n\t\t\t\t\t   \"Obsolete config name %s\", obsolete[0]);\n\t\t}\n\t}\n\tfclose(fp);\n\n\tif (buf != NULL)\n\t\tfree(buf);\n\n\tif ((tmpconf.prime_smp_dist != SMP_NODE_PACK || tmpconf.non_prime_smp_dist != SMP_NODE_PACK) && tmpconf.node_sort_unused) {\n\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_FILE, LOG_WARNING, \"\", \"smp_cluster_dist and node sorting by unused/assigned resources are not compatible.  
The smp_cluster_dist option is being set to pack.\");\n\t\ttmpconf.prime_smp_dist = tmpconf.non_prime_smp_dist = SMP_NODE_PACK;\n\t}\n\n\treturn tmpconf;\n}\n\n/**\n * @brief\n * \t\tCheck if sort_res is a valid special case sorting string\n *\n * @param[in]\tsort_res\t-\tsorting keyword\n * @param[in]\tsort_type\t-\tsorting object type (job or node)\n *\n * @return\tint\n * @retval\t1\t: is special case sort\n * @retval\t0\t: not special case sort\n */\nint\nis_speccase_sort(const std::string &sort_res, int sort_type)\n{\n\tif (sort_type == SOBJ_JOB) {\n\t\tif (sort_res == SORT_JOB_PRIORITY)\n\t\t\treturn 1;\n#ifdef NAS\n\t\t/* localmod 034 */\n\t\tif (sort_res == SORT_ALLOC)\n\t\t\treturn 1;\n\t\t/* localmod 039 */\n\t\tif (sort_res == SORT_QPRI)\n\t\t\treturn 1;\n#endif\n\t\telse\n\t\t\treturn 0;\n\t} else if (sort_type == SOBJ_NODE) {\n\t\tif (sort_res == SORT_PRIORITY || sort_res == SORT_FAIR_SHARE || sort_res == SORT_PREEMPT)\n\t\t\treturn 1;\n\t\telse\n\t\t\treturn 0;\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tscan - Scan through the string looking for a white space delimited\n *\t       word or quoted string.  If the target parameter is not 0, then\n *\t       use that as a delimiter as well.\n *\n * @param[in]\tstr\t-\tthe string to scan through.  If NULL, start where we left\n *\t\t   \t\t\t\toff last time.\n * @param[in]\ttarget\t-\tif target is 0, set it to a space.  It is already a delimiter\n *\n * @return\tscanned string or NULL\n *\n */\nchar *\nscan(char *str, char target)\n{\n\tstatic char *isp = NULL; /* internal state pointer used if a NULL is\n\t\t\t\t\t * passed in to str\n\t\t\t\t\t */\n\tchar *ptr;\t\t /* pointer used to search through the str */\n\tchar *start;\n\n\tif (str == NULL && isp == NULL)\n\t\treturn NULL;\n\n\tif (str == NULL)\n\t\tptr = isp;\n\telse\n\t\tptr = str;\n\n\t/* if target is 0, set it to a space.  
It is already a delimiter */\n\tif (target == 0)\n\t\ttarget = ' ';\n\n\twhile (isspace(*ptr) || *ptr == target)\n\t\tptr++;\n\n\tstart = ptr;\n\n\tif (*ptr != '\\0') {\n\t\tif (*ptr == '\\\"' || *ptr == '\\'') {\n\t\t\tauto quote = *ptr;\n\t\t\tstart = ++ptr;\n\t\t\twhile (*ptr != '\\0' && *ptr != quote)\n\t\t\t\tptr++;\n\t\t} else {\n\t\t\twhile (*ptr != '\\0' && !isspace(*ptr) && *ptr != target)\n\t\t\t\tptr++;\n\t\t}\n\t\tif (*ptr == '\\0')\n\t\t\tisp = NULL;\n\t\telse {\n\t\t\t*ptr = '\\0';\n\t\t\tisp = ptr + 1;\n\t\t}\n\t\treturn start;\n\t}\n\n\tisp = NULL;\n\treturn NULL;\n}\n\n/**\n * @brief\n *\t\tpreempt_bit_field - take list of preempt names separated by +'s and\n * \t\t\t    create a bitfield representing it.  The bitfield\n *\t\t\t    is created by taking the name in the preempt enum\n *\t\t\t    and shifting a bit into that position.\n *\n * @param[in]\tplist\t-\ta preempt list\n *\n * @return\ta bitfield, or -1 on error\n *\n */\nint\npreempt_bit_field(char *plist)\n{\n\tint bitfield = 0;\n\tint i;\n\tchar *tok;\n\n\ttok = strtok(plist, \"+\");\n\n\twhile (tok != NULL) {\n\t\tauto obitfield = bitfield;\n\t\tfor (i = 0; i < PREEMPT_HIGH; i++) {\n\t\t\tif (!strcmp(preempt_prio_info[i].str, tok))\n\t\t\t\tbitfield |= PREEMPT_TO_BIT(preempt_prio_info[i].value);\n\t\t}\n\n\t\t/* invalid preempt string */\n\t\tif (obitfield == bitfield) {\n\t\t\tbitfield = -1;\n\t\t\tbreak;\n\t\t}\n\n\t\ttok = strtok(NULL, \"+\");\n\t}\n\n\treturn bitfield;\n}\n\n/**\n * @brief\n * \tsort compare function for preempt statuses\n * \tsort by descending number of bits in the bitfields (most number of preempt\n * \tstatuses at the top) and then priorities\n *\n * @param[in]\tp1\t-\tpreempt status 1\n * @param[in]\tp2\t-\tpreempt status 2\n *\n * @return\tint\n * @retval\t1\t: p1 < p2\n * @retval\t-1\t: p1 > p2\n * @retval\t0\t: Equal\n */\nint\npreempt_cmp(const void *p1, const void *p2)\n{\n\tint *i1, *i2;\n\n\ti1 = (int *) p1;\n\ti2 = (int *) p2;\n\n\tif (BITCOUNT16(*i1) < 
BITCOUNT16(*i2))\n\t\treturn 1;\n\telse if (BITCOUNT16(*i1) > BITCOUNT16(*i2))\n\t\treturn -1;\n\telse {\n\t\tif (*(i1 + 1) < *(i2 + 1))\n\t\t\treturn 1;\n\t\telse if (*(i1 + 1) > *(i2 + 1))\n\t\t\treturn -1;\n\t\telse\n\t\t\treturn 0;\n\t}\n}"
  },
  {
    "path": "src/scheduler/parse.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _PARSE_H\n#define _PARSE_H\n\n#include \"data_types.h\"\n#include \"globals.h\"\n\n/*\n *\tparse_config - parse the config file and set a struct config\n *\n *\tFILE FORMAT:\n *\tconfig_name [white space] : [white space] config_value\n */\nconfig parse_config(const char *fname);\n\n/*\n *      scan - Scan through the string looking for a white space delimited word\n *             or quoted string.\n */\nchar *scan(char *str, char target);\n\n/*\n * sort compare function for preempt statuses\n * sort by descending number of bits in the bitfields (most number of preempt\n * statuses at the top) and then priorities\n */\nint preempt_cmp(const void *p1, const void *p2);\n\n/*\n *      preempt_bit_field - take list of preempt names separated by +'s and\n *                          create a bitfield representing it.  The bitfield\n *                          is created by taking the name in the preempt enum\n *                          and shifting a bit into that position.\n */\nint preempt_bit_field(char *plist);\n\n/* Check if string is a valid special case sorting string */\nint is_speccase_sort(const std::string &, int sort_type);\n\n#endif /* _PARSE_H */\n"
  },
  {
    "path": "src/scheduler/pbs_bitmap.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <stdio.h>\n#include <stdlib.h>\n\n#include \"pbs_bitmap.h\"\n\n#define BYTES_TO_BITS(x) ((x) *8)\n\n/**\n * @brief allocate space for a pbs_bitmap (and possibly the bitmap itself)\n * @param pbm - bitmap to allocate space for.  
NULL to allocate a new bitmap\n * @param num_bits - number of bits to allocate\n * @return pbs_bitmap *\n * @retval bitmap which was allocated\n * @retval NULL on error\n */\npbs_bitmap *\npbs_bitmap_alloc(pbs_bitmap *pbm, unsigned long num_bits)\n{\n\tpbs_bitmap *bm;\n\tunsigned long *tmp_bits;\n\tlong prev_longs;\n\n\tif (num_bits == 0)\n\t\treturn NULL;\n\n\tif (pbm == NULL) {\n\t\tbm = static_cast<pbs_bitmap *>(calloc(1, sizeof(pbs_bitmap)));\n\t\tif (bm == NULL)\n\t\t\treturn NULL;\n\t} else\n\t\tbm = pbm;\n\n\t/* shrinking bitmap, clear previously used bits */\n\tif (num_bits < bm->num_bits) {\n\t\tlong i;\n\t\ti = num_bits / BYTES_TO_BITS(sizeof(unsigned long)) + 1;\n\t\tfor (; static_cast<unsigned long>(i) < bm->num_longs; i++)\n\t\t\tbm->bits[i] = 0;\n\t\tfor (i = pbs_bitmap_next_on_bit(bm, num_bits); i != -1; i = pbs_bitmap_next_on_bit(bm, i))\n\t\t\tpbs_bitmap_bit_off(bm, i);\n\t}\n\n\t/* If we have enough unused bits available, we don't need to allocate */\n\tif (bm->num_longs * BYTES_TO_BITS(sizeof(unsigned long)) >= num_bits) {\n\t\tbm->num_bits = num_bits;\n\t\treturn bm;\n\t}\n\n\tprev_longs = bm->num_longs;\n\n\tbm->num_bits = num_bits;\n\tbm->num_longs = num_bits / BYTES_TO_BITS(sizeof(unsigned long));\n\tif (num_bits % BYTES_TO_BITS(sizeof(unsigned long)) > 0)\n\t\tbm->num_longs++;\n\ttmp_bits = static_cast<unsigned long *>(calloc(bm->num_longs, sizeof(unsigned long)));\n\tif (tmp_bits == NULL) {\n\t\tif (pbm == NULL) /* we allocated the memory */\n\t\t\tpbs_bitmap_free(bm);\n\t\treturn NULL;\n\t}\n\n\tif (bm->bits != NULL) {\n\t\tint i;\n\t\tfor (i = 0; i < prev_longs; i++)\n\t\t\ttmp_bits[i] = bm->bits[i];\n\n\t\tfree(bm->bits);\n\t}\n\tbm->bits = tmp_bits;\n\n\treturn bm;\n}\n\n/* pbs_bitmap destructor */\nvoid\npbs_bitmap_free(pbs_bitmap *bm)\n{\n\tif (bm == NULL)\n\t\treturn;\n\tfree(bm->bits);\n\tfree(bm);\n}\n\n/**\n * @brief turn a bit on for a bitmap\n * @param pbm - the bitmap\n * @param bit - which bit to turn on\n * @return int\n * @retval 1 success\n * @retval 0 failure\n 
*/\nint\npbs_bitmap_bit_on(pbs_bitmap *pbm, unsigned long bit)\n{\n\tlong long_ind;\n\tunsigned long b;\n\n\tif (pbm == NULL)\n\t\treturn 0;\n\n\tif (bit >= pbm->num_bits) {\n\t\tif (pbs_bitmap_alloc(pbm, bit + 1) == NULL)\n\t\t\treturn 0;\n\t}\n\n\tlong_ind = bit / BYTES_TO_BITS(sizeof(unsigned long));\n\tb = 1UL << (bit % BYTES_TO_BITS(sizeof(unsigned long)));\n\n\tpbm->bits[long_ind] |= b;\n\treturn 1;\n}\n\n/**\n * @brief turn a bit off for a bitmap\n * @param pbm - the bitmap\n * @param bit - the bit to turn off\n * @return int\n * @retval 1 success\n * @retval 0 failure\n */\nint\npbs_bitmap_bit_off(pbs_bitmap *pbm, unsigned long bit)\n{\n\tlong long_ind;\n\tunsigned long b;\n\n\tif (pbm == NULL)\n\t\treturn 0;\n\n\tif (bit >= pbm->num_bits) {\n\t\tif (pbs_bitmap_alloc(pbm, bit + 1) == NULL)\n\t\t\treturn 0;\n\t}\n\n\tlong_ind = bit / BYTES_TO_BITS(sizeof(unsigned long));\n\tb = 1UL << (bit % BYTES_TO_BITS(sizeof(unsigned long)));\n\n\tpbm->bits[long_ind] &= ~b;\n\treturn 1;\n}\n\n/**\n * @brief get the value of a bit\n * @param pbm - the bitmap\n * @param bit - which bit to get the value of\n * @return int\n * @retval 1 if the bit is on\n * @retval 0 if the bit is off\n */\nint\npbs_bitmap_get_bit(pbs_bitmap *pbm, unsigned long bit)\n{\n\tlong long_ind;\n\tunsigned long b;\n\n\tif (pbm == NULL)\n\t\treturn 0;\n\n\tif (bit >= pbm->num_bits)\n\t\treturn 0;\n\n\tlong_ind = bit / BYTES_TO_BITS(sizeof(unsigned long));\n\tb = 1UL << (bit % BYTES_TO_BITS(sizeof(unsigned long)));\n\n\treturn (pbm->bits[long_ind] & b) ? 
1 : 0;\n}\n\n/**\n * @brief starting at a bit, get the next on bit\n * @param pbm - the bitmap\n * @param start_bit - which bit to start from\n * @return int\n * @retval number of next on bit\n * @retval -1 if there isn't a next on bit\n */\nint\npbs_bitmap_next_on_bit(pbs_bitmap *pbm, unsigned long start_bit)\n{\n\tunsigned long long_ind;\n\tlong bit;\n\tsize_t i;\n\n\tif (pbm == NULL)\n\t\treturn -1;\n\n\tif (start_bit >= pbm->num_bits)\n\t\treturn -1;\n\n\tlong_ind = start_bit / BYTES_TO_BITS(sizeof(unsigned long));\n\tbit = start_bit % BYTES_TO_BITS(sizeof(unsigned long));\n\n\t/* special case - look at first long that contains start_bit */\n\tif (pbm->bits[long_ind] != 0) {\n\t\tfor (i = bit + 1; i < BYTES_TO_BITS(sizeof(unsigned long)); i++) {\n\t\t\tif (pbm->bits[long_ind] & (1UL << i)) {\n\t\t\t\treturn (long_ind * BYTES_TO_BITS(sizeof(unsigned long)) + i);\n\t\t\t}\n\t\t}\n\n\t\t/* didn't find an on bit after start_bit_index in the long that contained start_bit */\n\t\tif (long_ind < pbm->num_longs) {\n\t\t\tlong_ind++;\n\t\t}\n\t}\n\n\tfor (; long_ind < pbm->num_longs && pbm->bits[long_ind] == 0; long_ind++)\n\t\t;\n\n\tif (long_ind == pbm->num_longs)\n\t\treturn -1;\n\n\tfor (i = 0; i < BYTES_TO_BITS(sizeof(unsigned long)); i++) {\n\t\tif (pbm->bits[long_ind] & (1UL << i)) {\n\t\t\treturn (long_ind * BYTES_TO_BITS(sizeof(unsigned long)) + i);\n\t\t}\n\t}\n\n\treturn -1;\n}\n\n/**\n * @brief get the first on bit\n * @param bm - the bitmap\n * @return int\n * @retval the bit number of the first on bit\n * @retval -1 on error\n */\nint\npbs_bitmap_first_on_bit(pbs_bitmap *bm)\n{\n\tif (pbs_bitmap_get_bit(bm, 0))\n\t\treturn 0;\n\n\treturn pbs_bitmap_next_on_bit(bm, 0);\n}\n\n/**\n * @brief pbs_bitmap version of L = R\n * @param L - bitmap lvalue\n * @param R - bitmap rvalue\n * @return int\n * @retval 1 success\n * @retval 0 failure\n */\nint\npbs_bitmap_assign(pbs_bitmap *L, pbs_bitmap *R)\n{\n\tunsigned long i;\n\n\tif (L == NULL || R == NULL)\n\t\treturn 
0;\n\n\t/* In the case where R is longer than L, we need to allocate more space for L\n\t * Instead of using R->num_bits, we call pbs_bitmap_alloc() with the\n\t * full number of bits required for its num_longs.  This is because it\n\t * is possible that R has more space allocated to it than required for its num_bits.\n\t * This happens if it had a previous call to pbs_bitmap_assign() with a shorter bitmap.\n\t */\n\tif (R->num_longs > L->num_longs)\n\t\tif (pbs_bitmap_alloc(L, BYTES_TO_BITS(R->num_longs * sizeof(unsigned long))) == NULL)\n\t\t\treturn 0;\n\n\tfor (i = 0; i < R->num_longs; i++)\n\t\tL->bits[i] = R->bits[i];\n\tif (R->num_longs < L->num_longs)\n\t\tfor (; i < L->num_longs; i++)\n\t\t\tL->bits[i] = 0;\n\n\tL->num_bits = R->num_bits;\n\treturn 1;\n}\n\n/**\n * @brief pbs_bitmap version of L == R\n * @param L - bitmap lvalue\n * @param R - bitmap rvalue\n * @return int\n * @retval 1 bitmaps are equal\n * @retval 0 bitmaps are not equal\n */\nint\npbs_bitmap_is_equal(pbs_bitmap *L, pbs_bitmap *R)\n{\n\tunsigned long i;\n\n\tif (L == NULL || R == NULL)\n\t\treturn 0;\n\n\tif (L->num_bits != R->num_bits)\n\t\treturn 0;\n\n\tfor (i = 0; i < L->num_longs; i++)\n\t\tif (L->bits[i] != R->bits[i])\n\t\t\treturn 0;\n\n\treturn 1;\n}\n"
  },
  {
    "path": "src/scheduler/pbs_bitmap.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _PBS_BITMASK_H\n#define _PBS_BITMASK_H\n\nstruct pbs_bitmap {\n\tunsigned long *bits;\t /* bit storage */\n\tunsigned long num_longs; /* number of longs in the bits array */\n\tunsigned long num_bits;\t /* number of bits that are in use (both 1's and 0's) */\n};\n\ntypedef struct pbs_bitmap pbs_bitmap;\n\n/* Allocate bits to a bitmap (and possibly the bitmap itself) */\npbs_bitmap *pbs_bitmap_alloc(pbs_bitmap *pbm, unsigned long num_bits);\n\n/* Destructor */\nvoid pbs_bitmap_free(pbs_bitmap *bm);\n\n/* Turn a bit on */\nint pbs_bitmap_bit_on(pbs_bitmap *pbm, unsigned long bit);\n\n/* Turn a bit off */\nint pbs_bitmap_bit_off(pbs_bitmap *pbm, unsigned long bit);\n\n/* Get a bit */\nint pbs_bitmap_get_bit(pbs_bitmap *pbm, unsigned long bit);\n\n/* Get the first on bit in a bitmap */\nint pbs_bitmap_first_on_bit(pbs_bitmap *bm);\n\n/* Starting at start_bit get the next on bit */\nint pbs_bitmap_next_on_bit(pbs_bitmap *pbm, unsigned long start_bit);\n\n/* pbs_bitmap's version of L = R */\nint pbs_bitmap_assign(pbs_bitmap *L, pbs_bitmap *R);\n\n/* pbs_bitmap's version of L == R */\nint pbs_bitmap_is_equal(pbs_bitmap *L, pbs_bitmap *R);\n\n#endif /* _PBS_BITMASK_H */\n"
  },
  {
    "path": "src/scheduler/pbs_dedicated",
    "content": "# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\n# FORMAT: \t\tFROM\t\tTO\n#\t\t\t----\t\t--\n#\t\tMM/DD/YYYY HH:MM MM/DD/YYYY HH:MM\n#\n#  For example\n04/15/1998 12:00 04/15/1998 15:30\n\n4/15/1998 16:00 4/15/1998 16:40\n"
  },
  {
    "path": "src/scheduler/pbs_holidays",
    "content": "#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n#\n# UNCOMMENT AND CHANGE THIS TO THE CURRENT YEAR\n#YEAR  1970\n#\n# Prime/Nonprime Table\n#\n#   Prime Non-Prime\n# Day   Start Start\n#\n# UNCOMMENT AND SET THE REQUIRED PRIME/NON-PRIME START TIMES\n#  weekday 0600  1730\n#  saturday  none  all\n#  sunday  none  all\n#\n# Day of  Calendar  Company\n# Year    Date    Holiday\n#\n#\n# UNCOMMENT AND ADD CALENDAR HOLIDAYS TO BE CONSIDERED AS NON-PRIME DAYS\n#    1           Jan 1           New Year's Day\n#  359           Dec 25          Christmas Day\n"
  },
  {
    "path": "src/scheduler/pbs_holidays.2017",
    "content": "*\n* Copyright (C) 1994-2021 Altair Engineering, Inc.\n* For more information, contact Altair at www.altair.com.\n*\n* This file is part of both the OpenPBS software (\"OpenPBS\")\n* and the PBS Professional (\"PBS Pro\") software.\n*\n* Open Source License Information:\n*\n* OpenPBS is free software. You can redistribute it and/or modify it under\n* the terms of the GNU Affero General Public License as published by the\n* Free Software Foundation, either version 3 of the License, or (at your\n* option) any later version.\n*\n* OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n* FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n* License for more details.\n*\n* You should have received a copy of the GNU Affero General Public License\n* along with this program.  If not, see <http://www.gnu.org/licenses/>.\n*\n* Commercial License Information:\n*\n* PBS Pro is commercially licensed software that shares a common core with\n* the OpenPBS software.  
For a copy of the commercial license terms and\n* conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n* Altair Legal Department.\n*\n* Altair's dual-license business model allows companies, individuals, and\n* organizations to create proprietary derivative works of OpenPBS and\n* distribute them - whether embedded or bundled with other software -\n* under a commercial license agreement.\n*\n* Use of Altair's trademarks, including but not limited to \"PBS™\",\n* \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n* subject to Altair's trademark licensing policies.\n*\n\n*\nYEAR  2017\n*\n* Prime/Nonprime Table\n*\n*   Prime Non-Prime\n* Day   Start Start\n*\n  weekday 0600  1730\n  saturday  none  all\n  sunday  none  all\n*\n* Day of  Calendar  Company\n* Year    Date    Holiday\n*\n\n* if a holiday falls on a saturday, it is observed on the friday before\n* if a holiday falls on a sunday, it is observed on the monday after\n\n* Jan 1\n    1           Jan 1           New Year's Day\n* Third Monday of Jan\n   16           Jan 16          Martin Luther King Day\n* Third Monday in Feb\n   51           Feb 20          President's Day\n* Last Monday in May\n  149           May 29          Memorial Day\n* July 4th\n  185           Jul 4           Independence Day\n* First Monday in Sept\n  247           Sep 4           Labor Day\n* Second Monday in Oct\n  282           Oct 9          Columbus Day\n* Nov 11\n  315           Nov 11          Veterans Day\n* Fourth Thursday in Nov\n  327           Nov 23          Thanksgiving\n* Dec 25\n  359           Dec 25          Christmas Day\n"
  },
  {
    "path": "src/scheduler/pbs_resource_group",
    "content": "# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\n#grp1\t50\troot\t10\n#grp2\t51\troot\t20\n#grp3\t52\troot\t10\n#grp4\t53\tgrp1\t20\n#grp5 \t54\tgrp1\t10\n#grp6\t55\tgrp2\t20\n#usr1\t60\troot\t5\n#usr2\t61\tgrp1\t10\n#usr3\t62\tgrp2\t10\n#usr4\t63\tgrp6\t10\n#usr5\t64\tgrp6\t10\n#usr6\t65\tgrp6\t20\n#usr7\t66\tgrp3\t10\n#usr8\t67\tgrp4\t10\n#usr9\t68\tgrp4\t10\n#usr10\t69\tgrp5\t10\n"
  },
  {
    "path": "src/scheduler/pbs_sched.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * contains functions related to PBS scheduler\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n\n#include \"fifo.h\"\n#include \"log.h\"\n#include \"sched_cmds.h\"\n\nextern int sched_main(int argc, char *argv[], int (*schedule_func)(int, const sched_cmd *));\n\nint\nmain(int argc, char *argv[])\n{\n\tif (set_msgdaemonname(const_cast<char *>(\"pbs_sched\"))) {\n\t\tfprintf(stderr, \"Out of memory\\n\");\n\t\treturn (1);\n\t}\n\n\treturn sched_main(argc, argv, schedule);\n}\n"
  },
  {
    "path": "src/scheduler/pbs_sched_bare.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * contains functions related to PBS scheduler\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n\n#ifdef PYTHON\n#include <pbs_python_private.h>\n#include <Python.h>\n#include <pythonrun.h>\n#include <wchar.h>\n#endif\n\n#include \"check.h\"\n#include \"constant.h\"\n#include \"data_types.h\"\n#include \"fifo.h\"\n#include \"globals.h\"\n#include \"libpbs.h\"\n#include \"log.h\"\n#include \"resource.h\"\n#include \"server_info.h\"\n\nextern int sched_main(int argc, char *argv[], int (*schedule_func)(int, const sched_cmd *));\n\n/**\n * @brief\tPerform scheduling in \"mock run\" mode\n *\n * @param[in]\tsd\t-\tprimary socket descriptor to the server pool\n * @param[in]\tsinfo\t-\tpbs universe we're going to loop over\n *\n * @return\tint\n * @retval\tSUCCESS\n */\nint\nmain_sched_loop_bare(int sd, server_info *sinfo)\n{\n\tnode_info **nodes = sinfo->nodes;\n\tresource_resv **jobs = sinfo->jobs;\n\tint ij;\n\tint in = 0;\n\tchar execvnode[PBS_MAXHOSTNAME + strlen(\"(:ncpus=1)\") + 1];\n\n\t/* Algorithm:\n\t * - Loop over all jobs, assume that they need just 1 ncpu to run, and\n\t *   choose the next free node for it\n\t */\n\tfor (ij = 0; 
jobs[ij] != NULL; ij++) {\n\t\texecvnode[0] = '\\0';\n\n\t\t/* Find the first free node and fill it */\n\t\tfor (; nodes[in] != NULL; in++) {\n\t\t\tnode_info *node = nodes[in];\n\t\t\tschd_resource *ncpures = NULL;\n\n\t\t\tif (node->is_job_busy)\n\t\t\t\tcontinue;\n\n\t\t\tncpures = find_resource(node->res, allres[\"ncpus\"]);\n\t\t\tif (ncpures == NULL)\n\t\t\t\tcontinue;\n\n\t\t\t/* Assign a cpu on this node */\n\t\t\tncpures->assigned += 1;\n\t\t\tif (dynamic_avail(ncpures) == 0) {\n\t\t\t\tnode->is_job_busy = 1;\n\t\t\t\tnode->is_free = 0;\n\t\t\t}\n\n\t\t\t/* Create the exec_node for the job */\n\t\t\tsnprintf(execvnode, sizeof(execvnode), \"(%s:ncpus=1)\", node->name.c_str());\n\n\t\t\t/* Send the run request */\n\t\t\tsend_run_job(sd, 0, jobs[ij]->name, execvnode);\n\n\t\t\tbreak;\n\t\t}\n\t\tif (execvnode[0] == '\\0') {\n\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_JOB, LOG_NOTICE, \"\",\n\t\t\t\t  \"No free nodes available, won't consider any more jobs\");\n\t\t\tbreak;\n\t\t}\n\t}\n\treturn SUCCESS;\n}\n\n/**\n * @brief\n *\t\tscheduling_cycle_bare - the controlling function of the scheduling cycle\n *\n * @param[in]\tsd\t-\tprimary socket descriptor to the server pool\n * @param[in]\tcmd\t-\tscheduler command for this cycle\n *\n * @return\tint\n * @retval\t0 for normal operation\n * @retval\t1 for sched exit\n */\nstatic int\nscheduling_cycle_bare(int sd, const sched_cmd *cmd)\n{\n\tserver_info *sinfo; /* ptr to the server/queue/job/node info */\n\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_REQUEST, LOG_DEBUG,\n\t\t  \"\", \"Starting Scheduling Cycle\");\n\n\t/* Decide whether we need to send \"can't run\" type updates this cycle */\n\tif (time(NULL) - last_attr_updates >= sc_attrs.attr_update_period)\n\t\tsend_job_attr_updates = 1;\n\telse\n\t\tsend_job_attr_updates = 0;\n\n\tupdate_cycle_status(cstat, 0);\n\n\t/* create the server / queue / job / node structures */\n\tif ((sinfo = query_server(&cstat, sd)) == NULL) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t  \"\", \"Problem 
with creating server data structure\");\n\t\tend_cycle_tasks(sinfo);\n\t\treturn 1;\n\t}\n\n\tmain_sched_loop_bare(sd, sinfo);\n\n\tend_cycle_tasks(sinfo);\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tschedule_bare - this function gets called to start each scheduling cycle\n *\t\t   It will handle the different cases that cause a\n *\t\t   scheduling cycle\n *\n * @param[in]\tsd\t-\tprimary socket descriptor to the server pool\n *\n * @return\tint\n * @retval\t0\t: continue calling scheduling cycles\n * @retval\t1\t: exit scheduler\n */\nstatic int\nschedule_bare(int sd, const sched_cmd *cmd)\n{\n\tswitch (cmd->cmd) {\n\t\tcase SCH_SCHEDULE_NULL:\n\t\tcase SCH_RULESET:\n\t\t\t/* ignore and end cycle */\n\t\t\tbreak;\n\n\t\tcase SCH_SCHEDULE_FIRST:\n\t\t\t/*\n\t\t\t * on the first cycle after the server restarts custom resources\n\t\t\t * may have been added.  Dump what we have so we'll requery them.\n\t\t\t */\n\t\t\tupdate_resource_defs(sd);\n\n\t\t\t/* Get config from the qmgr sched object */\n\t\t\tif (!set_validate_sched_attrs(sd))\n\t\t\t\treturn 0;\n\n\t\t\t/* fall through */\n\t\tcase SCH_SCHEDULE_NEW:\n\t\tcase SCH_SCHEDULE_TERM:\n\t\tcase SCH_SCHEDULE_CMD:\n\t\tcase SCH_SCHEDULE_TIME:\n\t\tcase SCH_SCHEDULE_JOBRESV:\n\t\tcase SCH_SCHEDULE_STARTQ:\n\t\tcase SCH_SCHEDULE_MVLOCAL:\n\t\tcase SCH_SCHEDULE_ETE_ON:\n\t\tcase SCH_SCHEDULE_RESV_RECONFIRM:\n\t\t\treturn scheduling_cycle_bare(sd, cmd);\n\t\tcase SCH_SCHEDULE_AJOB:\n\t\t\treturn scheduling_cycle_bare(sd, cmd);\n\t\tcase SCH_CONFIGURE:\n\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_SCHED, LOG_INFO,\n\t\t\t\t  \"reconfigure\", \"Scheduler is reconfiguring\");\n\t\t\tupdate_resource_defs(sd);\n\n\t\t\t/* Get config from sched_priv/ files */\n\t\t\tif (schedinit(-1) != 0)\n\t\t\t\treturn 0;\n\n\t\t\t/* Get config from the qmgr sched object */\n\t\t\tif (!set_validate_sched_attrs(sd))\n\t\t\t\treturn 0;\n\t\t\tbreak;\n\t\tcase SCH_QUIT:\n#ifdef PYTHON\n\t\t\tPy_Finalize();\n#endif\n\t\t\treturn 1; /* have the scheduler exit nicely 
*/\n\t\tdefault:\n\t\t\treturn 0;\n\t}\n\treturn 0;\n}\n\nint\nmain(int argc, char *argv[])\n{\n\tif (set_msgdaemonname(const_cast<char *>(\"pbs_sched_bare\"))) {\n\t\tfprintf(stderr, \"Out of memory\\n\");\n\t\treturn (1);\n\t}\n\n\treturn sched_main(argc, argv, schedule_bare);\n}\n"
  },
  {
    "path": "src/scheduler/pbs_sched_config",
    "content": "# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\n\n# This is the config file for the scheduling policy\n# FORMAT:  option: value prime_option\n#\toption \t\t- the name of what we are changing defined in config.h\n#\tvalue  \t\t- can be boolean/string/numeric depending on the option\n#\tprime_option\t- can be prime/non_prime/all ONLY FOR SOME OPTIONS\n\n#### OVERALL SCHEDULING OPTIONS\n\n#\n# round_robin\n#\tRun a job from each queue before running a second job from the\n#\tfirst queue.\n#\n#\tPRIME OPTION\n\nround_robin: False\tall\n\n\n#\n# by_queue\n#\tRun jobs by queues.  If both round_robin and by_queue are not set,\n#\tthe scheduler will look at all the jobs on the server as one\n#\tlarge queue, and ignore the queues set by the administrator.\n#\n#\tPRIME OPTION\n\nby_queue: True\t\tprime\nby_queue: True\t\tnon_prime\n\n\n#\n# strict_ordering\n#\n#\tRun jobs exactly in the order determined by the scheduling option\n#\tsettings, so that we run the \"most deserving\" job as soon as possible\n#\twhile adhering to site policy.  
Note that strict_ordering can result\n#\tin significant idle time unless you use backfilling, which runs smaller\n#\t\"less deserving\" jobs provided that they do not delay the start time\n#\tof the \"most deserving\" job.\n#\n#\tPRIME OPTION\n\nstrict_ordering: false\tALL\n\n#### PRIMETIME OPTIONS:\n\n# NOTE: to set primetime/nonprimetime see $PBS_HOME/sched_priv/holidays file\n\n#\n# backfill_prime\n#\n#\tWhen backfill_prime is turned on, jobs are not allowed to cross from\n#\tprimetime into non-primetime or vice versa.  This option is not related\n#\tto the backfill_depth server parameter.\n#\n#\tPRIME OPTION\n\nbackfill_prime:\tfalse\tALL\n\n#\n# prime_exempt_anytime_queues\n#\n#\tAvoid backfill on queues that are not defined as primetime\n#\tor nonprimetime.\n#\tNOTE: This is not related to backfill for strict_ordering.\n#\n#\tNO PRIME OPTION\n\nprime_exempt_anytime_queues:\tfalse\n\n#\n# prime_spill\n#\n#\tTime duration to allow jobs to cross-over or 'spill' into primetime or\n#  \tnonprimetime.\n#\tNOTE: This is used in conjunction with backfill_prime.\n#\n#\tUsage: prime_spill: \"HH:MM:SS\"\n#\n#\tExamples:\n#\tprime_spill: \"2:00:00\" \tPRIME\n#\tThis means primetime jobs can spill into nonprimetime by up to 2 hours\n#\n#\tprime_spill: \"1:00:00\" ALL\n#\tThis will allow jobs to spill into nonprimetime or into primetime by\n#\tup to an hour.\n#\n#\tPRIME OPTION\n#\n\n#prime_spill: 1:00:00\tALL\n\n#\n# primetime_prefix\n# \tPrefix to define a primetime queue.\n#\tJobs in primetime queues will only be run in primetime\n#\n#\tNO PRIME OPTION\n\nprimetime_prefix: p_\n\n#\n# nonprimetime_prefix\n#\tPrefix to define non-primetime queues.\n# \tJobs in non-primetime queues will only be run in non-primetime\n#\n#\tNO PRIME OPTION\nnonprimetime_prefix: np_\n\n#### SORTING OPTIONS:\n\n# job_sort_key\n#\n#\tSort jobs by any resource known to PBS.\n#\tjob_sort_key allows jobs to be sorted by any resource.  This\n#\tincludes any admin-defined resources.  
The sort can be\n#\tascending (low to high) or descending (high to low).\n#\n#\tUsage: job_sort_key: \"resource_name HIGH | LOW\"\n#\n#\tAllowable non-resource keys:\n#\t\tfair_share_perc, preempt_priority, job_priority\n#\n#\n#\tExamples:\n#\n#\tjob_sort_key: \"ncpus HIGH\"\n#\tjob_sort_key: \"mem LOW\"\n#\n#\tThis gives a two-key sort, descending by ncpus and ascending by mem\n#\n#\tPRIME OPTION\n#\n\n#job_sort_key: \"cput LOW\"\tALL\n\n# node_sort_key\n#\n#\tSort nodes by any resource known to PBS.\n#\tnode_sort_key is similar to job_sort_key but works for nodes.\n#\tNodes can be sorted on any resource.  The resource value\n#\tsorted on is the resources_available amount.\n#\n#\tUsage: node_sort_key: \"resource_name HIGH | LOW\"\n#\n#\tnon-resource key: sort_priority\n#\n#\tExample:\n#\n#\tnode_sort_key: \"sort_priority HIGH\"\n#\tnode_sort_key: \"ncpus HIGH\"\n#\n#\tPRIME OPTION\n#\n\nnode_sort_key: \"sort_priority HIGH\"\tALL\n\n#\n# provision_policy\n#\n#\tDetermines how the scheduler selects nodes to satisfy a job's\n#\tprovisioning request.\n#\n#\t\"avoid_provision\" sorts vnodes by requested AOE.\n#\tNodes with the same AOE are sorted on node_sort_key.\n#\n#\t\"aggressive_provision\" lets the scheduler select nodes first and then\n#\tprovision if necessary. This is the default policy.\n#\n#  \tExample:\n#\n#\tprovision_policy: \"avoid_provision\"\n#\n#\tNO PRIME OPTION\n\nprovision_policy: \"aggressive_provision\"\n\n#### SMP JOB OPTIONS:\n\n#\n# resources\n#\n#\tDefine resource limits to be honored by PBS.\n#\tThe scheduler will not allow a job to run if the amount of assigned\n#\tresources exceeds the available amount.\n#\n#\tNOTE: you need to encase the comma separated list of resources in\n#\t      double quotes (\")\n#  \tExample:\n#\n#  \tresources: \"ncpus, mem, arch\"\n#\n#  \tThis schedules jobs based ONLY on available ncpus, mem, and arch\n#  \twithin the cluster. 
Other resources requested by the job will not be\n#  \tevaluated for availability.\n#\n#  \tNOTE: Define new resources within\n#\t\t$PBS_HOME/server_priv/resourcedef file.\n#\n#\tNO PRIME OPTION\n\nresources: \"ncpus, mem, arch, host, vnode, aoe, eoe\"\n\n#\n# smp_cluster_dist\n#\n#\tThis option allows you to decide how to distribute jobs to all the\n#\tnodes on your systems.\n#\n#\tpack \t    - pack as many jobs onto a node as will fit before\n#\t\t      running on another node\n#\tround_robin - run one job on each node in a cycle\n#\n#\tPRIME OPTION\n\nsmp_cluster_dist: pack\n\n#### FAIRSHARE OPTIONS\n\n# NOTE: to define fairshare tree see $PBS_HOME/sched_priv/resources_group file\n\n#\n# fair_share\n#\tSchedule jobs based on usage and share values\n#\n#\tPRIME OPTION\n#\n\nfair_share: false\tALL\n\n#\n# unknown_shares\n#\tThe number of shares for the \"unknown\" group\n#\n#\tNO PRIME OPTION\n#\n#\tNOTE: To turn on fairshare and give everyone equal shares,\n#\t      uncomment this line (and turn on fair_share above)\n\n#unknown_shares: 10\n\n\n#\n# fairshare_usage_res\n#\tThis specifies the resource from which fairshare usage is collected.\n#\tThe scheduler will collect timing information pertaining to the\n#\tutilization of a particular resource to schedule jobs\n#\n#  \tExample:\n#  \tfairshare_usage_res: cput\n#\n#\tThis collects the cput (cputime) resource.\n#\tNOTE: ONLY one resource can be collected.\n#\n#\tNO PRIME OPTION\nfairshare_usage_res: cput\n\n#\n# fairshare_entity\n#\tThis is a job attribute which will be used for fairshare entities.\n#\tThis can be anything from the username (euser) to the group (egroup)\n#\tetc.  
It can also be \"queue\" for the queue name of the job\n#\n#\tNO PRIME OPTION\n\nfairshare_entity: euser\n\n#\n# fairshare_decay_time\n#\tThe interval at which the scheduler decays the fairshare tree\n#\n#\tNO PRIME OPTION\n\nfairshare_decay_time: 24:00:00\n\n#\n# fairshare_decay_factor\n# \tThe factor by which the fairshare tree is decayed each time it is decayed\n# \tExample: 0.5 would mean a half-life\n#\nfairshare_decay_factor: 0.5\n\n#\n# fairshare_enforce_no_shares\n#\n#\tAny fairshare entity with zero shares will never run.  If an\n#\tentity is in a group with zero shares, it will still not run.\n#\n# \tUsage: fairshare_enforce_no_shares: TRUE|FALSE\n#\n#\tNO PRIME OPTION\n\n# fairshare_enforce_no_shares: TRUE\n\n#### PREEMPTIVE SCHEDULING OPTIONS\n\n#\n# preemptive_sched\n#\n#\tEnables preemptive scheduling.\n#\tThis will allow the scheduler to preempt lower priority\n#\twork to run higher priority jobs.\n#\n#\tPRIME OPTION\n\npreemptive_sched: true\tALL\n\n#### PEER SCHEDULING OPTIONS\n\n#\n# peer_queue\n#\n#\tDefines and enables the scheduler to obtain work from other PBS\n#\tclusters.\n#\n#\tPeer scheduling works by mapping a queue on a remote server to a\n#\tqueue on the local server.  Only one mapping can be made per line,\n#\tbut multiple peer_queue lines can appear in this file. More than\n#\tone mapping can be made to the same queue.  
The scheduler will\n#\tsee the union of all the jobs of the multiple queues (local and remote).\n#\n#\tUsage: peer_queue: \"local_queue\t\tremote_queue@remote_server\"\n#\n#\tExamples:\n#\tpeer_queue: \"workq\t\tworkq@svr1\"\n#\tpeer_queue: \"workq\t\tworkq@svr2\"\n#\tpeer_queue: \"remote_work\tworkq@svr3\"\n#\n#\tNO PRIME OPTION\n\n#### DYNAMIC RESOURCE OPTIONS\n\n#\n# mom_resources\n#\n#\tNOTE: The mom_resources option is deprecated and will likely go away in a future release.\n#             Use an exechost_periodic hook to set the resource values\n#\n#\tDefines Dynamic Consumable Resources on a per-node basis.\n#\n#\tThe mom_resources option is used to query the MOMs to\n#\tset the value of resources_available.res where res is a site-defined\n#\tresource.  Each mom is queried with the resource name and the\n#\treturn value is used to replace resources_available.res on that node.\n#\n#\tNOTE: this is internal to the scheduler, these values will not\n#\t      be visible outside of the scheduler.\n#\n#\tUsage: mom_resources: \"res1, res2, res3, ... resN\"\n#\n#\tExample:\n#\tmom_resources: \"foo\"\n#\n#\tNO PRIME OPTION\n\n# server_dyn_res\n#\n#\tDefines Dynamic Consumable Resources on a per-job basis.\n#\n#\tserver_dyn_res allows the values of resources to be replaced by running\n#\ta program and taking the first line of output as the new value. For\n#\tinstance, querying a licensing server for the available licenses.\n#\n#\tNOTE: this value MUST be quoted (i.e. server_dyn_res: \" ... 
\" )\n#\n#\tExamples:\n#\tserver_dyn_res: \"mem !/bin/get_mem\"\n#\tserver_dyn_res: \"ncpus !/bin/get_ncpus\"\n#\n#\tNO PRIME OPTION\n\n#### DEDICATED TIME OPTIONS\n\n# NOTE: to set dedicated time see $PBS_HOME/sched_priv/dedicated_time file\n\n#\n# dedicated_prefix\n#\n#\tPrefix to define dedicated time queues.\n# \tAll queues starting with this value are dedicated time queues and\n#\tjobs within these queues will only run during dedicated time.\n#\n# \tExample:\n#\n#\tdedicated_prefix: ded\n#\n#\tdedtime or dedicated time would be dedicated time queues\n#\t(along with anything else starting with ded).\n#\n#\tNO PRIME OPTION\ndedicated_prefix: ded\n"
  },
  {
    "path": "src/scheduler/pbs_sched_utils.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * contains functions related to PBS scheduler\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <vector>\n\n#include <arpa/inet.h>\n#include <ctype.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <grp.h>\n#include <limits.h>\n#include <netdb.h>\n#include <netinet/in.h>\n#include <pwd.h>\n#include <signal.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <sys/resource.h>\n#include <sys/socket.h>\n#include <sys/stat.h>\n#include <sys/time.h>\n#include <sys/types.h>\n#include <unistd.h>\n#ifdef _POSIX_MEMLOCK\n#include <sys/mman.h>\n#endif /* _POSIX_MEMLOCK */\n\n#if defined(FD_SET_IN_SYS_SELECT_H)\n#include <sys/select.h>\n#endif\n#include <sys/resource.h>\n\n#include \"auth.h\"\n#include \"config.h\"\n#include \"fifo.h\"\n#include \"globals.h\"\n#include \"libpbs.h\"\n#include \"libsec.h\"\n#include \"list_link.h\"\n#include \"log.h\"\n#include \"misc.h\"\n#include \"multi_threading.h\"\n#include \"net_connect.h\"\n#include \"pbs_ecl.h\"\n#include \"pbs_error.h\"\n#include \"pbs_ifl.h\"\n#include \"pbs_share.h\"\n#include \"pbs_version.h\"\n#include \"portability.h\"\n#include \"rm.h\"\n#include \"sched_cmds.h\"\n#include \"server_limits.h\"\n#include \"tpp.h\"\n#include \"libutil.h\"\n\n#define START_CLIENTS 2\t\t\t   /* minimum number of clients */\nauto okclients = 
std::vector<pbs_net_t>(); /* accept connections from */\nchar *configfile = NULL;\t\t   /* name of file containing\n\t\t\t\t\t\t client names to be added */\n\nextern char *msg_daemonname;\nchar **glob_argv;\nchar usage[] = \"[-d home][-L logfile][-p file][-I schedname][-n][-N][-c clientsfile][-t num threads]\";\nstruct sockaddr_in saddr;\nsigset_t allsigs;\nsigset_t oldsigs;\n\nint sigstoblock[] = {SIGHUP, SIGINT, SIGTERM, SIGUSR1};\n\n/* if we received a sigpipe, this probably means the server went away. */\n\n/* used in segv restart */\ntime_t segv_start_time;\ntime_t segv_last_time;\n\n#ifdef NAS /* localmod 030 */\nextern int do_soft_cycle_interrupt;\nextern int do_hard_cycle_interrupt;\n#endif /* localmod 030 */\n\nextern char *msg_startup1;\n\npthread_mutex_t cleanup_lock;\n\nstatic void reconnect_server(void);\nvoid sched_svr_init(void);\nvoid open_server_conns();\nstatic int schedule_wrapper(sched_cmd *cmd, int opt_no_restart);\nvoid close_server_conns(void);\n\ntypedef int (*schedule_func)(int, const sched_cmd *);\n\nstatic schedule_func schedule_ptr;\n\n/**\n * @brief\n * \t\tcleanup after a segv and re-exec.  Trust as little global mem\n * \t\tas possible... 
we don't know if it could be corrupt\n *\n * @param[in]\tsig\t-\tsignal\n */\nvoid\non_segv(int sig)\n{\n\tint ret_lock;\n\n\t/* We want any other threads to block here, we want them alive until abort() is called\n\t * as it dumps core for all threads\n\t */\n\tret_lock = pthread_mutex_lock(&cleanup_lock);\n\tif (ret_lock != 0)\n\t\tpthread_exit(NULL);\n\n\t/* we crashed less than 5 minutes ago, let's not restart ourselves */\n\tif ((segv_last_time - segv_start_time) < 300) {\n\t\tlog_record(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_INFO, __func__,\n\t\t\t   \"received a sigsegv within 5 minutes of start: aborting.\");\n\n\t\t/* Not unlocking mutex on purpose, we need to hold on to it until the process is killed */\n\t\tabort();\n\t}\n\n\tlog_record(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_INFO, __func__,\n\t\t   \"received segv and restarting\");\n\n\tif (fork() > 0) {  /* the parent rexec's itself */\n\t\tsleep(10); /* allow the child to die */\n\t\texecv(glob_argv[0], glob_argv);\n\t\texit(3);\n\t} else {\n\t\tabort(); /* allow to core and exit */\n\t}\n}\n\n/**\n * @brief\n * \t\tsignal function for receiving a sigpipe - set flag so we know not to talk\n * \t\tto the server any more and leave the cycle as soon as possible\n *\n * @param[in]\tsig\t-\tsigpipe\n */\nvoid\nsigfunc_pipe(int sig)\n{\n\tlog_record(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_INFO, \"sigfunc_pipe\", \"We've received a sigpipe: The server probably died.\");\n\tgot_sigpipe = 1;\n}\n\n/**\n * @brief\tcleanup routine for scheduler exit\n *\n * @param\tvoid\n *\n * @return void\n */\nstatic void\nschedexit(void)\n{\n\t/* close any open connections to peers */\n\tfor (auto &pq : conf.peer_queues) {\n\t\tif (pq.peer_sd >= 0) {\n\t\t\t/* When peering \"local\", do not disconnect server */\n\t\t\tif (!pq.remote_server.empty())\n\t\t\t\tpbs_disconnect(pq.peer_sd);\n\t\t\tpq.peer_sd = -1;\n\t\t}\n\t}\n\n\t/* Kill all worker threads */\n\tif (num_threads > 1) {\n\t\tint *thid;\n\n\t\tthid = (int 
*) pthread_getspecific(th_id_key);\n\n\t\tif (*thid == 0) {\n\t\t\tkill_threads();\n\t\t\tclose_server_conns();\n\t\t\treturn;\n\t\t}\n\t}\n\n\tclose_server_conns();\n}\n\n/**\n * @brief\n *       Clean up after a signal.\n *\n *  @param[in]\tsig\t-\tsignal\n */\nvoid\ndie(int sig)\n{\n\tint ret_lock;\n\n\tret_lock = pthread_mutex_trylock(&cleanup_lock);\n\tif (ret_lock != 0)\n\t\tpthread_exit(NULL);\n\n\tif (sig > 0)\n\t\tlog_eventf(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_INFO, __func__, \"caught signal %d\", sig);\n\telse\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_INFO, __func__, \"abnormal termination\");\n\n\tschedexit();\n\n\t{\n\t\tint csret;\n\t\tif ((csret = CS_close_app()) != CS_SUCCESS) {\n\t\t\t/* had some problem closing the security library */\n\n\t\t\tsprintf(log_buffer, \"problem closing security library (%d)\", csret);\n\t\t\tlog_err(-1, \"pbs_sched\", log_buffer);\n\t\t}\n\t}\n\n\tunload_auths();\n\n\tlog_close(1);\n\texit(1);\n}\n\n/**\n * @brief\n * \t\tadd a new client to the list of clients.\n *\n * @param[in]\tname\t-\tClient name.\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t-1\t: host not found\n */\nint\naddclient(const char *name)\n{\n\tint i;\n\tstruct hostent *host;\n\tstruct in_addr saddr;\n\n\tif ((host = gethostbyname(name)) == NULL) {\n\t\tsprintf(log_buffer, \"host %s not found\", name);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\treturn -1;\n\t}\n\n\tfor (i = 0; host->h_addr_list[i]; i++) {\n\t\tmemcpy((char *) &saddr, host->h_addr_list[i], host->h_length);\n\t\tokclients.push_back(saddr.s_addr);\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tread_config - read and process the configuration file (see -c option)\n * @par\n *\t\tCurrently, the only statement is $clienthost to specify which systems\n *\t\tcan contact the scheduler.\n *\n * @param[in]\tfile\t-\tconfiguration file\n *\n * @return\tint\n * @retval\t0\t: Ok\n * @retval\t-1\t: Error\n */\n#define CONF_LINE_LEN 120\n\nstatic int\nread_config(char *file)\n{\n\tFILE *conf;\n\tint i;\n\tchar line[CONF_LINE_LEN];\n\tchar 
*token;\n\tstruct specialconfig {\n\t\tconst char *name;\n\t\tint (*handler)(const char *);\n\t} special[] = {\n\t\t{\"clienthost\", addclient},\n\t\t{NULL, NULL}};\n\n#if !defined(DEBUG) && !defined(NO_SECURITY_CHECK)\n\tif (chk_file_sec_user(file, 0, 0, S_IWGRP | S_IWOTH, 1, getuid()))\n\t\treturn (-1);\n#endif\n\n\tif ((conf = fopen(file, \"r\")) == NULL) {\n\t\tlog_err(errno, __func__, \"cannot open config file\");\n\t\treturn (-1);\n\t}\n\twhile (fgets(line, CONF_LINE_LEN, conf)) {\n\n\t\tif ((line[0] == '#') || (line[0] == '\\n'))\n\t\t\tcontinue;\t   /* ignore comment & null line */\n\t\telse if (line[0] == '$') { /* special */\n\n\t\t\tif ((token = strtok(line, \" \\t\")) == NULL)\n\t\t\t\ttoken = const_cast<char *>(\"\");\n\t\t\tfor (i = 0; special[i].name; i++) {\n\t\t\t\tif (strcmp(token + 1, special[i].name) == 0)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\tif (special[i].name == NULL) {\n\t\t\t\tsprintf(log_buffer, \"config name %s not known\",\n\t\t\t\t\ttoken);\n\t\t\t\tlog_record(PBSEVENT_ERROR,\n\t\t\t\t\t   PBS_EVENTCLASS_SERVER, LOG_INFO,\n\t\t\t\t\t   msg_daemonname, log_buffer);\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\ttoken = strtok(NULL, \" \\t\");\n\t\t\tif (token == NULL) {\n\t\t\t\t/* statement with no argument: strtok() returned NULL */\n\t\t\t\tlog_record(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t   LOG_INFO, msg_daemonname,\n\t\t\t\t\t   \"missing value in config file\");\n\t\t\t\tfclose(conf);\n\t\t\t\treturn (-1);\n\t\t\t}\n\t\t\tif (*(token + strlen(token) - 1) == '\\n')\n\t\t\t\t*(token + strlen(token) - 1) = '\\0';\n\t\t\tif (special[i].handler(token)) {\n\t\t\t\tfclose(conf);\n\t\t\t\treturn (-1);\n\t\t\t}\n\n\t\t} else {\n\t\t\tlog_record(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER,\n\t\t\t\t   LOG_INFO, msg_daemonname,\n\t\t\t\t   \"invalid line in config file\");\n\t\t\tfclose(conf);\n\t\t\treturn (-1);\n\t\t}\n\t}\n\tfclose(conf);\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\trestart on signal\n *\n * @param[in]\tsig\t-\tsignal\n */\nvoid\nrestart(int sig)\n{\n\tconst sched_cmd cmd = {SCH_CONFIGURE, NULL};\n\n\tif (sig) {\n\t\tlog_close(1);\n\t\tlog_open(logfile, path_log);\n\t\tsprintf(log_buffer, \"restart on signal %d\", sig);\n\t} else {\n\t\tsprintf(log_buffer, \"restart command\");\n\t}\n\tlog_record(PBSEVENT_SYSTEM, 
PBS_EVENTCLASS_SERVER, LOG_INFO, __func__, log_buffer);\n\tif (configfile) {\n\t\tif (read_config(configfile) != 0)\n\t\t\tdie(0);\n\t}\n\tschedule_ptr(clust_primary_sock, &cmd);\n}\n\n#ifdef NAS /* localmod 030 */\n/**\n * @brief\n * \t\tmake soft cycle interrupt active\n *\n * @param[in]\tsig\t-\tsignal\n */\nvoid\nsoft_cycle_interrupt(int sig)\n{\n\tdo_soft_cycle_interrupt = 1;\n}\n/**\n * @brief\n * \t\tmake hard cycle interrupt active\n *\n * @param[in]\tsig\t-\tsignal\n */\nvoid\nhard_cycle_interrupt(int sig)\n{\n\tdo_hard_cycle_interrupt = 1;\n}\n#endif /* localmod 030 */\n\n/**\n * @brief\n * \t\tlock_out - lock out other daemons from this directory.\n *\n * @param[in]\tfds\t-\tfile descriptor\n * @param[in]\top\t-\tF_WRLCK  or  F_UNLCK\n *\n * @return\tvoid\n */\n\nstatic void\nlock_out(int fds, int op)\n{\n\tstruct flock flock;\n\n\t(void) lseek(fds, (off_t) 0, SEEK_SET);\n\tflock.l_type = op;\n\tflock.l_whence = SEEK_SET;\n\tflock.l_start = 0;\n\tflock.l_len = 0; /* whole file */\n\tif (fcntl(fds, F_SETLK, &flock) < 0) {\n\t\tlog_err(errno, msg_daemonname, \"another scheduler running\");\n\t\tfprintf(stderr, \"pbs_sched: another scheduler running\\n\");\n\t\texit(1);\n\t}\n}\n\n/**\n * @brief\n * \t\tare_we_primary - are we on the primary Server host\n *\t\tIf either the only configured Server or the Primary in a failover\n *\t\tconfiguration - return true\n *\n * @return\tint\n * @retval\t0\t: we are the secondary\n * @retval\t-1\t: we are neither (error)\n * @retval\t1\t: we are the listed primary\n */\nstatic int\nare_we_primary()\n{\n\tchar server_host[PBS_MAXHOSTNAME + 1];\n\tchar hn1[PBS_MAXHOSTNAME + 1];\n\n\tif (pbs_conf.pbs_leaf_name) {\n\t\tchar *endp;\n\t\tsnprintf(server_host, sizeof(server_host), \"%s\", pbs_conf.pbs_leaf_name);\n\t\tendp = strchr(server_host, ','); /* find the first name */\n\t\tif (endp)\n\t\t\t*endp = '\\0';\n\t\tendp = strchr(server_host, ':'); /* cut out the port */\n\t\tif (endp)\n\t\t\t*endp = '\\0';\n\t} else if 
((gethostname(server_host, (sizeof(server_host) - 1)) == -1) ||\n\t\t   (get_fullhostname(server_host, server_host, (sizeof(server_host) - 1)) == -1)) {\n\t\tlog_err(-1, __func__, \"Unable to get my host name\");\n\t\treturn -1;\n\t}\n\n\t/* both secondary and primary should be set or neither set */\n\tif ((pbs_conf.pbs_secondary == NULL) && (pbs_conf.pbs_primary == NULL))\n\t\treturn 1;\n\tif ((pbs_conf.pbs_secondary == NULL) || (pbs_conf.pbs_primary == NULL))\n\t\treturn -1;\n\n\tif (get_fullhostname(pbs_conf.pbs_primary, hn1, (sizeof(hn1) - 1)) == -1) {\n\t\tlog_err(-1, __func__, \"Unable to get full host name of primary\");\n\t\treturn -1;\n\t}\n\n\tif (strcmp(hn1, server_host) == 0)\n\t\treturn 1; /* we are the listed primary */\n\n\tif (get_fullhostname(pbs_conf.pbs_secondary, hn1, (sizeof(hn1) - 1)) == -1) {\n\t\tlog_err(-1, __func__, \"Unable to get full host name of secondary\");\n\t\treturn -1;\n\t}\n\tif (strcmp(hn1, server_host) == 0)\n\t\treturn 0; /* we are the secondary */\n\n\treturn -1; /* we are neither: error */\n}\n\n/**\n * @brief close connections to server\n *\n * @return void\n */\nvoid\nclose_server_conns(void)\n{\n\tint i;\n\n\tpbs_disconnect(clust_primary_sock);\n\tpbs_disconnect(clust_secondary_sock);\n\n\ttpp_em_destroy(poll_context);\n\tpoll_context = NULL;\n\n\t/* free qrun_list */\n\tfor (i = 0; i < qrun_list_size; i++) {\n\t\tif (qrun_list[i].jid != NULL)\n\t\t\tfree(qrun_list[i].jid);\n\t}\n\tfree(qrun_list);\n\tqrun_list = NULL;\n\n\tclust_primary_sock = -1;\n\tclust_secondary_sock = -1;\n}\n\n/**\n * @brief Connect to server. 
Also add secondary connection to the poll list\n *\n *\n * @return void\n */\nvoid\nopen_server_conns(void)\n{\n\tsigset_t prevsigs;\n\tsigset_t tmpsigs;\n\n\twhile (1) {\n\t\tsigemptyset(&prevsigs);\n\t\t/*\n\t\t * Connecting to server may potentially fork for reserve port authentication.\n\t\t * Call to connect to server must be protected from signals because it can cause\n\t\t * scheduler to go into a deadlock on logging mutex.\n\t\t */\n\t\tif (sigprocmask(SIG_BLOCK, &allsigs, &prevsigs) == -1)\n\t\t\tlog_err(errno, __func__, \"sigprocmask(SIG_BLOCK)\");\n\n\t\tif (clust_primary_sock < 0) {\n\t\t\tclust_primary_sock = pbs_connect(NULL);\n\t\t\tif (clust_primary_sock < 0)\n\t\t\t\tgoto unmask_continue;\n\t\t}\n\t\tclust_secondary_sock = pbs_connect(NULL);\n\t\tif (clust_secondary_sock < 0 || clust_primary_sock == clust_secondary_sock)\n\t\t\tgoto unmask_continue;\n\n\t\tif (pbs_register_sched(sc_name, clust_primary_sock, clust_secondary_sock) != 0) {\n\t\t\tlog_errf(pbs_errno, __func__, \"Couldn't register the scheduler %s with connected server\", sc_name);\n\t\t\tgoto unmask_continue;\n\t\t}\n\n\t\t/* Reaching here means everything succeeded, so break out of the loop */\n\t\tif (sigprocmask(SIG_SETMASK, &prevsigs, NULL) == -1)\n\t\t\tlog_err(errno, __func__, \"sigprocmask(SIG_SETMASK)\");\n\t\tbreak;\n\n\tunmask_continue:\n\t\ttmpsigs = prevsigs;\n\t\t/* allow blocked signals while waiting for connection */\n\t\tfor (auto &sig : sigstoblock) {\n\t\t\tsigdelset(&tmpsigs, sig);\n\t\t}\n\n\t\tif (sigprocmask(SIG_SETMASK, &tmpsigs, NULL) == -1)\n\t\t\tlog_err(errno, __func__, \"sigprocmask(SIG_SETMASK)\");\n\n\t\t/* wait 2s so as not to burn too much CPU, then retry the connection */\n\t\tsleep(2);\n\t\tif (sigprocmask(SIG_SETMASK, &prevsigs, NULL) == -1)\n\t\t\tlog_err(errno, __func__, \"sigprocmask(SIG_SETMASK)\");\n\t\tcontinue;\n\t}\n\tlog_eventf(PBSEVENT_ADMIN | PBSEVENT_FORCE, PBS_EVENTCLASS_SCHED,\n\t\t   LOG_INFO, msg_daemonname, \"Connected 
to the server\");\n\n\tsched_svr_init();\n\n\tif (tpp_em_add_fd(poll_context, clust_secondary_sock, EM_IN | EM_HUP | EM_ERR) < 0) {\n\t\tlog_errf(errno, __func__, \"Couldn't add secondary connection to poll list\");\n\t\tdie(-1);\n\t}\n}\n\n/**\n * @brief Initialises event poll context for the server and also sched commands queue\n *\n * @return void\n */\nvoid\nsched_svr_init(void)\n{\n\tif (poll_context == NULL) {\n\t\tpoll_context = tpp_em_init(1);\n\t\tif (poll_context == NULL) {\n\t\t\tlog_err(errno, __func__, \"Failed to init cmd connections context\");\n\t\t\tdie(-1);\n\t\t}\n\t}\n\n\tqrun_list = static_cast<sched_cmd *>(malloc(2 * sizeof(sched_cmd)));\n\tif (qrun_list == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\tdie(0);\n\t}\n}\n\n/**\n * @brief reconnect to all the servers configured\n *\n * @return void\n */\nstatic void\nreconnect_server(void)\n{\n\tclose_server_conns();\n\topen_server_conns();\n\treturn;\n}\n\n/**\n * @brief read incoming command from given secondary connection\n *        and add it into sched_cmds array\n *\n * @param[in]  sock   - secondary connection to server\n *\n * @return int\n * @retval -2 - failure due to a failed memory operation\n * @retval -1 - failure while reading command\n * @retval  0 - no cmd, server might have closed connection\n * @retval  1 - success, read at least one command\n */\nstatic int\nread_sched_cmd(int sock)\n{\n\tint rc;\n\tsched_cmd cmd;\n\n\trc = get_sched_cmd(sock, &cmd);\n\tif (rc != 1)\n\t\treturn rc;\n\telse {\n\t\tsched_cmd cmd_prio;\n\t\t/*\n\t\t * There is a possibility that the server has sent a\n\t\t * priority command after the first non-priority command,\n\t\t * while we were in schedule()\n\t\t *\n\t\t * so try to read it in non-blocking mode, but don't\n\t\t * return any failure if the read fails, as we have\n\t\t * successfully enqueued the first command\n\t\t *\n\t\t * and if we get a priority command then just ignore it\n\t\t * since we are not yet in the middle of a schedule cycle\n\t\t */\n\t\tint 
rc_prio = get_sched_cmd_noblk(sock, &cmd_prio);\n\t\tif (rc_prio == -2)\n\t\t\treturn 0;\n\t}\n\n\tif (cmd.cmd != SCH_SCHEDULE_RESTART_CYCLE) {\n\t\tif (cmd.cmd == SCH_SCHEDULE_AJOB)\n\t\t\tqrun_list[qrun_list_size++] = cmd;\n\t\telse {\n\t\t\tif (cmd.cmd >= SCH_SCHEDULE_NULL && cmd.cmd < SCH_CMD_HIGH)\n\t\t\t\tsched_cmds[cmd.cmd] = 1;\n\t\t}\n\t}\n\n\treturn rc;\n}\n\n/**\n * @brief wait for commands from servers\n *\n * @return void\n */\nstatic void\nwait_for_cmds()\n{\n\tint i;\n\tem_event_t *events;\n\tint hascmd = 0;\n\tsigset_t emptyset;\n\n\tqrun_list_size = 0;\n\n\twhile (!hascmd) {\n\t\tsigemptyset(&emptyset);\n\t\tauto nsocks = tpp_em_pwait(poll_context, &events, -1, &emptyset);\n\t\tauto err = errno;\n\n\t\tif (nsocks < 0) {\n\t\t\tif (!(err == EINTR || err == EAGAIN || err == 0)) {\n\t\t\t\tlog_errf(err, __func__, \"tpp_em_pwait() error, errno=%d\", err);\n\t\t\t\tsleep(1); /* wait 1s so as not to burn too much CPU */\n\t\t\t}\n\t\t} else {\n\t\t\tfor (i = 0; i < nsocks; i++) {\n\t\t\t\tint sock = EM_GET_FD(events, i);\n\t\t\t\terr = read_sched_cmd(sock);\n\t\t\t\tif (err != 1) {\n\t\t\t\t\t/* if memory error ignore, else reconnect server */\n\t\t\t\t\tif (err != -2)\n\t\t\t\t\t\treconnect_server();\n\t\t\t\t} else\n\t\t\t\t\thascmd = 1;\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief\tHelper function for send_cycle_end, just sends the message out\n *\n * @param\tsec_conn - secondary conn sd to the server\n *\n * @return\tbool\n * @retval\ttrue for success\n * @retval\tfalse for failure/error\n */\nbool\nsend_cycle_end_msg(int sec_conn)\n{\n\tstatic int cycle_end_marker = 0;\n\n\tif (diswsi(sec_conn, cycle_end_marker) != DIS_SUCCESS) {\n\t\tlog_err(errno, __func__, \"Not able to send end of cycle\");\n\t\treturn false;\n\t}\n\n\tif (dis_flush(sec_conn) != 0) {\n\t\tlog_err(errno, __func__, \"Not able to send end of cycle\");\n\t\treturn false;\n\t}\n\n\treturn true;\n}\n\n/**\n *\n * @brief sends end of cycle indication to all the servers configured\n *\n 
* @return void\n */\nstatic void\nsend_cycle_end(void)\n{\n\tif (!send_cycle_end_msg(clust_secondary_sock))\n\t\tgoto reconnect;\n\n\tif (got_sigpipe)\n\t\tgoto reconnect;\n\n\treturn;\n\nreconnect:\n\treconnect_server();\n\tgot_sigpipe = 0;\n}\n\nint\nsched_main(int argc, char *argv[], schedule_func sched_ptr)\n{\n\tint go;\n\tint c;\n\tint errflg = 0;\n\tint lockfds;\n\tpid_t pid;\n\tchar host[PBS_MAXHOSTNAME + 1];\n#ifndef DEBUG\n\tconst char *dbfile = \"sched_out\";\n#endif\n\tstruct sigaction act;\n\tint opt_no_restart = 0;\n\tint stalone = 0;\n#ifdef _POSIX_MEMLOCK\n\tint do_mlockall = 0;\n#endif /* _POSIX_MEMLOCK */\n\tint nthreads = -1;\n\tint num_cores;\n\tchar *endp = NULL;\n\tpthread_mutexattr_t attr;\n\n\t/*the real deal or show version and exit?*/\n\n\tschedule_ptr = sched_ptr;\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tnum_cores = sysconf(_SC_NPROCESSORS_ONLN);\n\n\tif (pbs_loadconf(0) == 0)\n\t\treturn (1);\n\n\tif (validate_running_user(argv[0]) == 0)\n\t\treturn (1);\n\n\t/* disable attribute verification */\n\tset_no_attribute_verification();\n\n\t/* initialize the thread context */\n\tif (pbs_client_thread_init_thread_context() != 0) {\n\t\tfprintf(stderr, \"%s: Unable to initialize thread context\\n\",\n\t\t\targv[0]);\n\t\treturn (1);\n\t}\n\n\tset_log_conf(pbs_conf.pbs_leaf_name, pbs_conf.pbs_mom_node_name,\n\t\t     pbs_conf.locallog, pbs_conf.syslogfac,\n\t\t     pbs_conf.syslogsvr, pbs_conf.pbs_log_highres_timestamp);\n\n\tnthreads = pbs_conf.pbs_sched_threads;\n\n\tglob_argv = argv;\n\tsegv_start_time = segv_last_time = time(NULL);\n\n\topterr = 0;\n\twhile ((c = getopt(argc, argv, \"lL:NI:d:p:c:nt:\")) != EOF) {\n\t\tswitch (c) {\n\t\t\tcase 'l':\n#ifdef _POSIX_MEMLOCK\n\t\t\t\tdo_mlockall = 1;\n#else\n\t\t\t\tfprintf(stderr, \"-l option - mlockall not supported\\n\");\n#endif /* _POSIX_MEMLOCK */\n\t\t\t\tbreak;\n\t\t\tcase 'L':\n\t\t\t\tlogfile = 
optarg;\n\t\t\t\tbreak;\n\t\t\tcase 'N':\n\t\t\t\tstalone = 1;\n\t\t\t\tbreak;\n\t\t\tcase 'I':\n\t\t\t\tsc_name = optarg;\n\t\t\t\tbreak;\n\t\t\tcase 'd':\n\t\t\t\tif (pbs_conf.pbs_home_path != NULL)\n\t\t\t\t\tfree(pbs_conf.pbs_home_path);\n\t\t\t\tpbs_conf.pbs_home_path = optarg;\n\t\t\t\tbreak;\n\t\t\tcase 'p':\n#ifndef DEBUG\n\t\t\t\tdbfile = optarg;\n#endif\n\t\t\t\tbreak;\n\t\t\tcase 'c':\n\t\t\t\tconfigfile = optarg;\n\t\t\t\tbreak;\n\t\t\tcase 'n':\n\t\t\t\topt_no_restart = 1;\n\t\t\t\tbreak;\n\t\t\tcase 't':\n\t\t\t\tnthreads = strtol(optarg, &endp, 10);\n\t\t\t\tif (*endp != '\\0') {\n\t\t\t\t\tfprintf(stderr, \"%s: bad num threads value\\n\", optarg);\n\t\t\t\t\terrflg = 1;\n\t\t\t\t}\n\t\t\t\tif (nthreads < 1) {\n\t\t\t\t\tfprintf(stderr, \"%s: bad num threads value (should be in range 1-99999)\\n\", optarg);\n\t\t\t\t\terrflg = 1;\n\t\t\t\t}\n\t\t\t\tif (nthreads > num_cores) {\n\t\t\t\t\tfprintf(stderr, \"%s: cannot be larger than number of cores %d, using number of cores instead\\n\",\n\t\t\t\t\t\toptarg, num_cores);\n\t\t\t\t\tnthreads = num_cores;\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\terrflg = 1;\n\t\t\t\tbreak;\n\t\t}\n\t}\n\n\tif (sc_name == NULL) {\n\t\tsc_name = PBS_DFLT_SCHED_NAME;\n\t\tdflt_sched = 1;\n\t}\n\n\tif (errflg) {\n\t\tfprintf(stderr, \"usage: %s %s\\n\", argv[0], usage);\n\t\tfprintf(stderr, \"       %s --version\\n\", argv[0]);\n\t\texit(1);\n\t}\n\n\tif (dflt_sched) {\n\t\t(void) sprintf(log_buffer, \"%s/sched_priv\", pbs_conf.pbs_home_path);\n\t} else {\n\t\t(void) sprintf(log_buffer, \"%s/sched_priv_%s\", pbs_conf.pbs_home_path, sc_name);\n\t}\n#if !defined(DEBUG) && !defined(NO_SECURITY_CHECK)\n\tc = chk_file_sec_user(log_buffer, 1, 0, S_IWGRP | S_IWOTH, 1, getuid());\n\tc |= chk_file_sec(pbs_conf.pbs_environment, 0, 0, S_IWGRP | S_IWOTH, 0);\n\tif (c != 0)\n\t\texit(1);\n#endif /* not DEBUG and not NO_SECURITY_CHECK */\n\tif (chdir(log_buffer) == -1) {\n\t\tperror(\"chdir\");\n\t\texit(1);\n\t}\n\tif 
(dflt_sched) {\n\t\t(void) sprintf(path_log, \"%s/sched_logs\", pbs_conf.pbs_home_path);\n\t} else {\n\t\t(void) sprintf(path_log, \"%s/sched_logs_%s\", pbs_conf.pbs_home_path, sc_name);\n\t}\n\tif (log_open(logfile, path_log) == -1) {\n\t\tfprintf(stderr, \"%s: logfile could not be opened\\n\", argv[0]);\n\t\texit(1);\n\t}\n\n\t/* The following is code to reduce security risks                */\n\t/* start out with standard umask, system resource limit infinite */\n\n\tumask(022);\n\tif (setup_env(pbs_conf.pbs_environment) == -1)\n\t\texit(1);\n\tc = getgid();\n\t(void) setgroups(1, (gid_t *) &c); /* secure suppl. groups */\n\n\tset_proc_limits(pbs_conf.pbs_core_limit, 0); /* set_proc_limits can call log_record, so call only after opening log file */\n\n\tif (gethostname(host, (sizeof(host) - 1)) == -1) {\n\t\tlog_err(errno, __func__, \"gethostname\");\n\t\tdie(0);\n\t}\n\n\t/*Initialize security library's internal data structures*/\n\tif (load_auths(AUTH_SERVER)) {\n\t\tlog_err(-1, \"pbs_sched\", \"Failed to load auth lib\");\n\t\tdie(0);\n\t}\n\n\t{\n\t\tint csret;\n\n\t\t/* let Libsec do logging if part of PBS daemon code */\n\t\tp_cslog = log_err;\n\n\t\tif ((csret = CS_server_init()) != CS_SUCCESS) {\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"Problem initializing security library (%d)\", csret);\n\t\t\tlog_err(-1, \"pbs_sched\", log_buffer);\n\t\t\tdie(0);\n\t\t}\n\t}\n\n\taddclient(\"localhost\"); /* who has permission to call MOM */\n\taddclient(host);\n\tif (pbs_conf.pbs_server_name)\n\t\taddclient(pbs_conf.pbs_server_name);\n\tif (pbs_conf.pbs_primary && pbs_conf.pbs_secondary) {\n\t\t/* Failover is configured when both primary and secondary are set. */\n\t\taddclient(pbs_conf.pbs_primary);\n\t\taddclient(pbs_conf.pbs_secondary);\n\t} else if (pbs_conf.pbs_server_host_name) {\n\t\t/* Failover is not configured, but PBS_SERVER_HOST_NAME is. 
*/\n\t\taddclient(pbs_conf.pbs_server_host_name);\n\t}\n\tif (pbs_conf.pbs_leaf_name)\n\t\taddclient(pbs_conf.pbs_leaf_name);\n\n\tif (configfile) {\n\t\tif (read_config(configfile) != 0)\n\t\t\tdie(0);\n\t}\n\n\tif ((c = are_we_primary()) == 1) {\n\t\tlockfds = open(\"sched.lock\", O_CREAT | O_WRONLY, 0644);\n\t} else if (c == 0) {\n\t\tlockfds = open(\"sched.lock.secondary\", O_CREAT | O_WRONLY, 0644);\n\t} else {\n\t\tlog_err(-1, \"pbs_sched\", \"neither primary or secondary server\");\n\t\texit(1);\n\t}\n\tif (lockfds < 0) {\n\t\tlog_err(errno, __func__, \"open lock file\");\n\t\texit(1);\n\t}\n\n\tif (sigemptyset(&allsigs) == -1) {\n\t\tperror(\"sigemptyset\");\n\t\texit(1);\n\t}\n\tif (sigprocmask(SIG_SETMASK, &allsigs, NULL) == -1) { /* unblock */\n\t\tperror(\"sigprocmask\");\n\t\texit(1);\n\t}\n\tact.sa_flags = 0;\n\n\t/* remember to block these during critical sections so we don't get confused */\n\tfor (auto &sig : sigstoblock) {\n\t\tsigaddset(&allsigs, sig);\n\t}\n\tact.sa_mask = allsigs;\n\n\tact.sa_handler = restart; /* do a restart on SIGHUP */\n\tsigaction(SIGHUP, &act, NULL);\n\n#ifdef NAS\t\t\t\t       /* localmod 030 */\n\tact.sa_handler = soft_cycle_interrupt; /* do a cycle interrupt on */\n\t\t\t\t\t       /* SIGUSR1, subject to     */\n\t\t\t\t\t       /* configurable parameters */\n\tsigaction(SIGUSR1, &act, NULL);\n\tact.sa_handler = hard_cycle_interrupt; /* do a cycle interrupt on */\n\t\t\t\t\t       /* SIGUSR2                 */\n\tsigaction(SIGUSR2, &act, NULL);\n#endif /* localmod 030 */\n\n\tact.sa_handler = die; /* bite the biscuit for all following */\n\tsigaction(SIGINT, &act, NULL);\n\tsigaction(SIGTERM, &act, NULL);\n\n\tact.sa_handler = sigfunc_pipe;\n\tsigaction(SIGPIPE, &act, NULL);\n\n\tif (!opt_no_restart) {\n\t\tact.sa_handler = on_segv;\n\t\tsigaction(SIGSEGV, &act, NULL);\n\t\tsigaction(SIGBUS, &act, NULL);\n\t}\n\n#ifndef DEBUG\n\tif (stalone != 1) {\n\t\tif ((pid = fork()) == -1) { /* error on fork 
*/\n\t\t\tperror(\"fork\");\n\t\t\texit(1);\n\t\t} else if (pid > 0) /* parent exits */\n\t\t\texit(0);\n\n\t\tif (setsid() == -1) {\n\t\t\tperror(\"setsid\");\n\t\t\texit(1);\n\t\t}\n\t}\n\tlock_out(lockfds, F_WRLCK);\n\tif (freopen(dbfile, \"a\", stdout) == NULL) \n\t\tlog_errf(-1, __func__, \"freopen failed. ERR : %s\",strerror(errno));\n\tsetvbuf(stdout, NULL, _IOLBF, 0);\n\tdup2(fileno(stdout), fileno(stderr));\n#else\n\tif (stalone != 1) {\n\t\t(void) sprintf(log_buffer, \"Debug build does not fork.\");\n\t\tlog_record(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_INFO,\n\t\t\t   __func__, log_buffer);\n\t}\n\tlock_out(lockfds, F_WRLCK);\n\tsetvbuf(stdout, NULL, _IOLBF, 0);\n\tsetvbuf(stderr, NULL, _IOLBF, 0);\n#endif\n\tpid = getpid();\n\tdaemon_protect(0, PBS_DAEMON_PROTECT_ON);\n\tif (freopen(\"/dev/null\", \"r\", stdin) == NULL) \n\t\tlog_errf(-1, __func__, \"freopen failed. ERR : %s\",strerror(errno));\n\n\t/* write schedulers pid into lockfile */\n\tif (ftruncate(lockfds, (off_t) 0) == -1) \n\t\tlog_errf(-1, __func__, \"ftruncate failed. ERR : %s\",strerror(errno));\n\t(void) sprintf(log_buffer, \"%ld\\n\", (long) pid);\n\tif (write(lockfds, log_buffer, strlen(log_buffer)) == -1) \n\t\tlog_errf(-1, __func__, \"write failed. 
ERR : %s\",strerror(errno));\n\n#ifdef _POSIX_MEMLOCK\n\tif (do_mlockall == 1) {\n\t\tif (mlockall(MCL_CURRENT | MCL_FUTURE) == -1) {\n\t\t\tlog_err(errno, __func__, \"mlockall failed\");\n\t\t}\n\t}\n#endif /* _POSIX_MEMLOCK */\n\n\t(void) sprintf(log_buffer, msg_startup1, PBS_VERSION, 0);\n\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_ADMIN | PBSEVENT_FORCE,\n\t\t  LOG_NOTICE, PBS_EVENTCLASS_SERVER, msg_daemonname, log_buffer);\n\n\tsprintf(log_buffer, \"%s startup pid %ld\", argv[0], (long) pid);\n\tlog_record(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_INFO, __func__, log_buffer);\n\n\t/*\n\t *  Local initialization stuff\n\t */\n\t/* Set the signal mask temporarily for thread initialization */\n\tif (sigprocmask(SIG_BLOCK, &allsigs, &oldsigs) == -1)\n\t\tlog_err(errno, __func__, \"sigprocmask(SIG_BLOCK)\");\n\tif (schedinit(nthreads)) {\n\t\t(void) sprintf(log_buffer,\n\t\t\t       \"local initialization failed, terminating\");\n\t\tlog_record(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_INFO,\n\t\t\t   __func__, log_buffer);\n\t\texit(1);\n\t}\n\tif (sigprocmask(SIG_SETMASK, &oldsigs, NULL) == -1)\n\t\tlog_err(errno, __func__, \"sigprocmask(SIG_SETMASK)\");\n\n\t/* Initialize cleanup lock */\n\tif (init_mutex_attr_recursive(&attr) != 0)\n\t\tdie(0);\n\n\tpthread_mutex_init(&cleanup_lock, &attr);\n\n\topen_server_conns();\n\n\tfor (go = 1; go;) {\n\t\tint i;\n\n\t\twait_for_cmds();\n\n\t\t/* First walk through the qrun_list as this has high priority followed by the other commands */\n\t\tfor (i = 0; i < qrun_list_size; i++) {\n\t\t\tif (schedule_wrapper(&qrun_list[i], opt_no_restart) == 1) {\n\t\t\t\tgo = 0;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tfree(qrun_list[i].jid);\n\t\t\tqrun_list[i].jid = NULL;\n\t\t}\n\n\t\tfor (i = 0; go && (i < SCH_CMD_HIGH); i++) {\n\t\t\tsched_cmd cmd;\n\n\t\t\tif (sched_cmds[i] == 0)\n\t\t\t\tcontinue;\n\n\t\t\t/* index itself is the command */\n\t\t\tcmd.cmd = i;\n\n\t\t\t/* jid is always NULL since this list does not contain SCHEDULE_AJOB 
commands */\n\t\t\tcmd.jid = NULL;\n\n\t\t\t/* clear the entry of sched_cmds[i] as we are going to process this command now */\n\t\t\tsched_cmds[i] = 0;\n\n\t\t\tif (schedule_wrapper(&cmd, opt_no_restart) == 1) {\n\t\t\t\tgo = 0;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\n\tschedexit();\n\n\tlog_eventf(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_INFO, __func__, \"%s normal finish pid %ld\", argv[0], (long) pid);\n\tlock_out(lockfds, F_UNLCK);\n\n\tunload_auths();\n\tclose_server_conns();\n\tlog_close(1);\n\texit(0);\n}\n\n/**\n * @brief\n *\tschedule_wrapper - Wrapper function for calling schedule\n *\n * @param[in] cmd - pointer to schedulig command\n * @param[in] opt_no_restart - value of opt_no_restart\n *\n * @return\tint\n * @retval\t0\t: continue calling scheduling cycles\n * @retval\t1\t: exit scheduler\n */\nstatic int\nschedule_wrapper(sched_cmd *cmd, int opt_no_restart)\n{\n\ttime_t now;\n\n\tif (sigprocmask(SIG_BLOCK, &allsigs, &oldsigs) == -1)\n\t\tlog_err(errno, __func__, \"sigprocmask(SIG_BLOCK)\");\n\n\t/* Keep track of time to use in SIGSEGV handler */\n\tnow = time(NULL);\n\tif (!opt_no_restart)\n\t\tsegv_last_time = now;\n\n#ifdef DEBUG\n\t{\n\t\tstrftime(log_buffer, sizeof(log_buffer), \"%Y-%m-%d %H:%M:%S\", localtime(&now));\n\t\tDBPRT((\"%s Scheduler received command %d\\n\", log_buffer, cmd->cmd));\n\t}\n#endif\n\n\tif (schedule_ptr(clust_primary_sock, cmd)) /* magic happens here */\n\t\treturn 1;\n\telse\n\t\tsend_cycle_end();\n\n\tif (sigprocmask(SIG_SETMASK, &oldsigs, NULL) == -1)\n\t\tlog_err(errno, __func__, \"sigprocmask(SIG_SETMASK)\");\n\n\treturn 0;\n}\n"
  },
  {
    "path": "src/scheduler/pbsfs.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    pbsfs.cpp\n *\n * @brief\n * \t\tpbsfs.cpp - contains functions which are related to PBS fairshare.\n *\n * Functions included are:\n * \tmain()\n * \tprint_fairshare_entity()\n * \tprint_fairshare()\n *\n */\n#include <stdio.h>\n#include <string.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <libpbs.h>\n#include <pbs_ifl.h>\n#include \"data_types.h\"\n#include \"constant.h\"\n#include \"config.h\"\n#include \"fairshare.h\"\n#include \"parse.h\"\n#include \"pbs_version.h\"\n#include \"sched_cmds.h\"\n#include \"log.h\"\n#include \"fifo.h\"\n\n/* prototypes */\nstatic void print_fairshare_entity(group_info *ginfo);\nstatic void print_fairshare(group_info *root, int level);\n\n/* flags */\n#define FS_GET 1\n#define FS_SET 2\n#define FS_PRINT 4\n#define FS_PRINT_TREE 8\n#define FS_DECAY 16\n#define FS_COMP 32\n#define FS_TRIM_TREE 64\n#define FS_WRITE_FILE 128\n\n/**\n * @brief\n * \t\tThe entry point of pbsfs\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t1\t: something is wrong!\n */\nint\nmain(int argc, char *argv[])\n{\n\tchar path_buf[256] = {0};\n\tchar sched_name[PBS_MAXSCHEDNAME + 1] = \"default\";\n\tgroup_info *ginfo;\n\tgroup_info *ginfo2;\n\tint c;\n\tint flags = FS_PRINT;\n\tint flag1 = 0;\n\tdouble val;\n\tchar *endp;\n\tchar *testp;\n\n\t/* the real deal or output version and exit?
*/\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\tset_msgdaemonname(const_cast<char *>(\"pbsfs\"));\n\n\tif (pbs_loadconf(0) <= 0)\n\t\texit(1);\n\n\twhile ((c = getopt(argc, argv, \"sgptdceI:-:\")) != -1)\n\t\tswitch (c) {\n\t\t\tcase 'g':\n\t\t\t\tflags = FS_GET;\n\t\t\t\tbreak;\n\t\t\tcase 's':\n\t\t\t\tflags = FS_SET | FS_WRITE_FILE;\n\t\t\t\tbreak;\n\t\t\tcase 'p':\n\t\t\t\tflags = FS_PRINT;\n\t\t\t\tbreak;\n\t\t\tcase 't':\n\t\t\t\tflags = FS_PRINT_TREE;\n\t\t\t\tbreak;\n\t\t\tcase 'd':\n\t\t\t\tflags = FS_DECAY | FS_WRITE_FILE;\n\t\t\t\tbreak;\n\t\t\tcase 'c':\n\t\t\t\tflags = FS_COMP;\n\t\t\t\tbreak;\n\t\t\tcase 'e':\n\t\t\t\tflags = FS_TRIM_TREE | FS_WRITE_FILE;\n\t\t\t\tbreak;\n\t\t\tcase 'I':\n\t\t\t\tsnprintf(sched_name, sizeof(sched_name), \"%s\", optarg);\n\t\t\t\tbreak;\n\t\t\tcase '-':\n\t\t\t\tflag1 = 1;\n\t\t\t\tbreak;\n\t\t}\n\n\tif (flag1 == 1) {\n\t\tfprintf(stderr, \"Usage: pbsfs --version\\n\");\n\t\texit(1);\n\t}\n\n\tif (validate_running_user(argv[0]) == 0) {\n\t\texit(1);\n\t}\n\n\tif ((flags & (FS_PRINT | FS_PRINT_TREE)) && (argc - optind) != 0) {\n\t\tfprintf(stderr, \"Usage: pbsfs -[ptdgcs] [-I sched_name]\\n\");\n\t\texit(1);\n\t} else if ((flags & FS_GET) && (argc - optind) != 1) {\n\t\tfprintf(stderr, \"Usage: pbsfs [-I sched_name] -g <fairshare_entity>\\n\");\n\t\texit(1);\n\t} else if ((flags & FS_SET) && (argc - optind) != 2) {\n\t\tfprintf(stderr, \"Usage: pbsfs [-I sched_name] -s <fairshare_entity> <usage>\\n\");\n\t\texit(1);\n\t} else if ((flags & FS_COMP) && (argc - optind) != 2) {\n\t\tfprintf(stderr, \"Usage: pbsfs [-I sched_name] -c <entity1> <entity2>\\n\");\n\t\texit(1);\n\t}\n\n\tif (strcmp(sched_name, \"default\") != 0) {\n\t\tint pbs_sd;\n\t\tstruct batch_status *bs;\n\t\tstruct batch_status *cur_bs;\n\t\tpbs_sd = pbs_connect(NULL);\n\t\tif (pbs_sd < 0) {\n\t\t\tfprintf(stderr, \"Can't connect to the server\\n\");\n\t\t\texit(1);\n\t\t}\n\t\tbs = pbs_statsched(pbs_sd, NULL, NULL);\n\n\t\tfor (cur_bs = bs; cur_bs != NULL; 
cur_bs = cur_bs->next) {\n\t\t\tif (strcmp(cur_bs->name, sched_name) == 0) {\n\t\t\t\tstruct attrl *cur_attrl;\n\t\t\t\tfor (cur_attrl = cur_bs->attribs; cur_attrl != NULL; cur_attrl = cur_attrl->next) {\n\t\t\t\t\tif (strcmp(cur_attrl->name, ATTR_sched_priv) == 0) {\n\t\t\t\t\t\tpbs_strncpy(path_buf, cur_attrl->value, sizeof(path_buf));\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (cur_attrl == NULL) {\n\t\t\t\t\tfprintf(stderr, \"Scheduler %s does not have its sched_priv set\\n\", sched_name);\n\t\t\t\t\texit(1);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tif (cur_bs == NULL) {\n\t\t\tfprintf(stderr, \"Scheduler %s does not exist\\n\", sched_name);\n\t\t\texit(1);\n\t\t}\n\t\tpbs_disconnect(pbs_sd);\n\n\t} else\n\t\tsnprintf(path_buf, sizeof(path_buf), \"%s/sched_priv/\", pbs_conf.pbs_home_path);\n\n\tif (chdir(path_buf) == -1) {\n\t\tperror(\"Unable to access fairshare data\");\n\t\texit(1);\n\t}\n\tconf = parse_config(CONFIG_FILE);\n\tif ((fstree = preload_tree()) == NULL) {\n\t\tfprintf(stderr, \"Error in preloading fairshare information\\n\");\n\t\treturn 1;\n\t}\n\tif (parse_group(RESGROUP_FILE, fstree->root) == 0)\n\t\treturn 1;\n\n\tif (flags & FS_TRIM_TREE) {\n\t\tread_usage(USAGE_FILE, FS_TRIM, fstree);\n\t\tfstree->last_decay = time(NULL);\n\t} else\n\t\tread_usage(USAGE_FILE, 0, fstree);\n\n\tcalc_fair_share_perc(fstree->root->child, UNSPECIFIED);\n\tcalc_usage_factor(fstree);\n\n\tif (flags & FS_PRINT_TREE)\n\t\tprint_fairshare(fstree->root, 0);\n\telse if (flags & FS_PRINT) {\n\t\tprintf(\"Fairshare usage units are in: %s\\n\", conf.fairshare_res.c_str());\n\t\tprint_fairshare(fstree->root, -1);\n\t} else if (flags & FS_DECAY) {\n\t\tdecay_fairshare_tree(fstree->root);\n\t\tfstree->last_decay = time(NULL);\n\t} else if (flags & (FS_GET | FS_SET | FS_COMP)) {\n\t\tginfo = find_group_info(argv[optind], fstree->root);\n\n\t\tif (ginfo == NULL) {\n\t\t\tfprintf(stderr, \"Fairshare Entity %s does not exist.\\n\", 
argv[optind]);\n\t\t\treturn 1;\n\t\t}\n\t\tif (flags & FS_COMP) {\n\t\t\tginfo2 = find_group_info(argv[optind + 1], fstree->root);\n\n\t\t\tif (ginfo2 == NULL) {\n\t\t\t\tfprintf(stderr, \"Fairshare Entity %s does not exist.\\n\", argv[optind + 1]);\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t\tswitch (compare_path(ginfo->gpath, ginfo2->gpath)) {\n\t\t\t\tcase -1:\n\t\t\t\t\tprintf(\"%s\\n\", ginfo->name.c_str());\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase 0:\n\t\t\t\t\tprintf(\"%s == %s\\n\", ginfo->name.c_str(), ginfo2->name.c_str());\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase 1:\n\t\t\t\t\tprintf(\"%s\\n\", ginfo2->name.c_str());\n\t\t\t}\n\t\t} else if (flags & FS_GET)\n\t\t\tprint_fairshare_entity(ginfo);\n\t\telse {\n\t\t\ttestp = argv[optind + 1];\n\t\t\tval = strtod(testp, &endp);\n\n\t\t\tif (*endp == '\\0')\n\t\t\t\tginfo->usage = val;\n\t\t}\n\t}\n\n\tif (flags & FS_WRITE_FILE) {\n\t\tFILE *fp;\n\t\t/* make backup of database file */\n\t\tremove(USAGE_FILE \".bak\");\n\t\tif (rename(USAGE_FILE, USAGE_FILE \".bak\") < 0)\n\t\t\tperror(\"Could not backup usage database.\");\n\t\twrite_usage(USAGE_FILE, fstree);\n\t\tif ((fp = fopen(USAGE_TOUCH, \"w\")) != NULL)\n\t\t\tfclose(fp);\n\t}\n\n\treturn 0;\n}\n/**\n * @brief\n * \t\tprint the group info structure.\n *\n * @param[in]\tginfo\t-\tgroup info structure.\n */\nstatic void\nprint_fairshare_entity(group_info *ginfo)\n{\n\tprintf(\n\t\t\"fairshare entity: %s\\n\"\n\t\t\"Resgroup\t\t: %d\\n\"\n\t\t\"cresgroup\t\t: %d\\n\"\n\t\t\"Shares\t\t\t: %d\\n\"\n\t\t\"Percentage\t\t: %f%%\\n\"\n\t\t\"fairshare_tree_usage\t: %f\\n\"\n\t\t\"usage\t\t\t: %.0lf (%s)\\n\"\n\t\t\"usage/perc\t\t: %.0lf\\n\",\n\t\tginfo->name.c_str(),\n\t\tginfo->resgroup,\n\t\tginfo->cresgroup,\n\t\tginfo->shares,\n\t\tginfo->tree_percentage * 100,\n\t\tginfo->usage_factor,\n\t\tginfo->usage, conf.fairshare_res.c_str(),\n\t\tginfo->tree_percentage == 0 ? 
-1 : ginfo->usage / ginfo->tree_percentage);\n\n\tprintf(\"Path from root: \\n\");\n\tfor (const auto &gp : ginfo->gpath) {\n\t\tprintf(\"%-10s: %5d %10.0f / %5.3f = %.0f\\n\",\n\t\t       gp->name.c_str(), gp->cresgroup,\n\t\t       gp->usage, gp->tree_percentage,\n\t\t       gp->tree_percentage == 0 ? -1 : gp->usage / gp->tree_percentage);\n\t}\n}\n\n/**\n * @brief\n *\t\tprint_fairshare - print out the fair share tree\n *\n * @param[in]\troot\t-\troot of subtree\n * @param[in]\tlevel\t-\t-1\t: print long version\n *\t\t\t\t 0\t: print brief but hierarchical tree\n *\n * @return nothing\n *\n */\nstatic void\nprint_fairshare(group_info *root, int level)\n{\n\tif (root == NULL)\n\t\treturn;\n\n\tif (level < 0) {\n\t\tprintf(\n\t\t\t\"%-10s: Grp: %-5d  cgrp: %-5d\"\n\t\t\t\"Shares: %-6d Usage: %-6.0lf Perc: %6.3f%%\\n\",\n\t\t\troot->name.c_str(), root->resgroup, root->cresgroup, root->shares,\n\t\t\troot->usage, (root->tree_percentage * 100));\n\t} else\n\t\tprintf(\"%*s%s(%d)\\n\", level, \" \", root->name.c_str(), root->cresgroup);\n\n\tprint_fairshare(root->child, level >= 0 ? level + 5 : -1);\n\tprint_fairshare(root->sibling, level);\n}\n"
  },
  {
    "path": "src/scheduler/prev_job_info.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    prev_job_info.c\n *\n * @brief\n * \t\tprev_job_info.c -  contains functions which are related to prev_job_info array.\n *\n * Functions included are:\n * \tcreate_prev_job_info()\n * \tfree_prev_job_info()\n * \tfree_pjobs()\n */\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <errno.h>\n#include <log.h>\n#include \"prev_job_info.h\"\n#include \"job_info.h\"\n#include \"misc.h\"\n#include \"resource_resv.h\"\n#include \"globals.h\"\n\n/**\n * @brief\n *\t\tcreate_prev_job_info - create the prev_job_info array from an array of jobs\n *\n * @param[in]\tjobs\t-\tjob array\n *\n * @par\tNOTE: jinfo_arr is modified\n *\n */\nvoid\ncreate_prev_job_info(resource_resv **jobs)\n{\n\tint i;\n\n\tlast_running.clear();\n\n\tfor (i = 0; jobs[i] != NULL; i++) {\n\t\tif (jobs[i]->job != NULL) {\n\t\t\tprev_job_info pjinfo(jobs[i]->name, jobs[i]->job->ginfo->name, jobs[i]->job->resused);\n\n\t\t\t/* resused is shallow copied, NULL it so it doesn't get freed at the end of the cycle */\n\t\t\tjobs[i]->job->resused = NULL;\n\n\t\t\tlast_running.push_back(pjinfo);\n\t\t}\n\t}\n}\n\nprev_job_info::prev_job_info(const std::string &pname, const std::string &ename, resource_req *rused) : name(pname), entity_name(ename), resused(rused)\n{\n}\n\nprev_job_info::prev_job_info(const prev_job_info &opj) : name(opj.name), 
entity_name(opj.entity_name)\n{\n\tresused = dup_resource_req_list(opj.resused);\n}\n\nprev_job_info::prev_job_info(prev_job_info &&opj) : name(std::move(opj.name)), entity_name(std::move(opj.entity_name))\n{\n\tresused = opj.resused;\n\topj.resused = NULL;\n}\n\nprev_job_info &\nprev_job_info::operator=(const prev_job_info &opj)\n{\n\tif (this == &opj)\n\t\treturn *this;\n\n\tname = opj.name;\n\tentity_name = opj.entity_name;\n\n\t/* free our old resused list before duplicating the source's so it is not leaked */\n\tfree_resource_req_list(resused);\n\tresused = dup_resource_req_list(opj.resused);\n\n\treturn *this;\n}\n\n/**\n * @brief\n *\t\tprev_job_info destructor\n *\n */\nprev_job_info::~prev_job_info()\n{\n\tfree_resource_req_list(resused);\n}\n"
  },
  {
    "path": "src/scheduler/prev_job_info.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _PREV_JOB_INFO_H\n#define _PREV_JOB_INFO_H\n\n#include \"data_types.h\"\n\n/*\n *      create_prev_job_info - create the prev_job_info array from an array\n *                              of jobs\n */\nvoid create_prev_job_info(resource_resv **jobs);\n\n#endif /* _PREV_JOB_INFO_H */\n"
  },
  {
    "path": "src/scheduler/prime.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    prime.c\n *\n * @brief\n * \t\tprime.c -  contains functions which are related to prime time.\n *\n * Functions included are:\n * \tis_prime_time()\n * \tcheck_prime()\n * \tis_holiday()\n * \tparse_holidays()\n * \tload_day()\n * \tend_prime_status_rec()\n * \tend_prime_status()\n * \tinit_prime_time()\n * \tinit_non_prime_time()\n *\n */\n\n#include <algorithm>\n\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <time.h>\n#include <string.h>\n#include <ctype.h>\n#include <errno.h>\n\n#include <pbs_ifl.h>\n#include <log.h>\n\n#include \"constant.h\"\n#include \"globals.h\"\n#include \"prime.h\"\n#include \"misc.h\"\n\n/**\n * @brief\n * \t\treturn the status of primetime\n *\n * @param[in]\tdate\t-\tthe date to check if its primetime\n *\n * @return\tint\n * @retval \tPRIME\t: if prime time\n * @retval\tNON_PRIME\t: if non_prime\n *\n * @par NOTE: Holidays are considered non-prime\n *\n */\nenum prime_time\nis_prime_time(time_t date)\n{\n\tenum prime_time ret = PRIME; /* return code */\n\tstruct tm *tmptr;\t     /* current time in a struct tm */\n\n\ttmptr = localtime(&date);\n\n\t/* check for holiday: Holiday == non_prime */\n\tif (conf.holiday_year != 0) { /* year == 0: no prime-time */\n\n\t\t/* tm_yday starts at 0, and Julian date starts at 1 */\n\t\tif (is_holiday(tmptr->tm_yday + 
1))\n\t\t\tret = NON_PRIME;\n\n\t\t/* is_holiday() calls localtime() which returns a static ptr.  Our tmptr\n\t\t * now no longer points to what we think it points to\n\t\t */\n\t\ttmptr = localtime(&date);\n\n\t\t/* if ret still equals PRIME then it is not a holiday, we need to check\n\t\t * and see if we are in non-prime or prime\n\t\t */\n\t\tif (ret == PRIME) {\n\t\t\tif (tmptr->tm_wday == 0)\n\t\t\t\tret = check_prime(SUNDAY, tmptr);\n\t\t\telse if (tmptr->tm_wday == 1)\n\t\t\t\tret = check_prime(MONDAY, tmptr);\n\t\t\telse if (tmptr->tm_wday == 2)\n\t\t\t\tret = check_prime(TUESDAY, tmptr);\n\t\t\telse if (tmptr->tm_wday == 3)\n\t\t\t\tret = check_prime(WEDNESDAY, tmptr);\n\t\t\telse if (tmptr->tm_wday == 4)\n\t\t\t\tret = check_prime(THURSDAY, tmptr);\n\t\t\telse if (tmptr->tm_wday == 5)\n\t\t\t\tret = check_prime(FRIDAY, tmptr);\n\t\t\telse if (tmptr->tm_wday == 6)\n\t\t\t\tret = check_prime(SATURDAY, tmptr);\n\t\t\telse\n\t\t\t\tret = check_prime(WEEKDAY, tmptr);\n\t\t}\n\t}\n\n\treturn ret;\n}\n\n/**\n * @brief\n *\t\tcheck_prime - check if it is prime time for a particular day\n *\n * @param[in]\td\t-\tdays\n * @param[in]\tt\t-\ttime represented as tm structure.\n *\n * @return\tPRIME if it is in primetime\n * @retval\tNON_PRIME\t: if not\n */\nenum prime_time\ncheck_prime(enum days d, struct tm *t)\n{\n\tenum prime_time prime = NON_PRIME; /* return code */\n\n\t/* Nonprime, prime, and current Times are transformed into military time for easier comparison */\n\tint npt = conf.prime[d][NON_PRIME].hour * 100 + conf.prime[d][NON_PRIME].min;\n\tint pt = conf.prime[d][PRIME].hour * 100 + conf.prime[d][PRIME].min;\n\tint ct = (t->tm_hour) * 100 + (t->tm_min);\n\n\t/* Case 1: all primetime today */\n\tif (conf.prime[d][PRIME].all)\n\t\tprime = PRIME;\n\n\t/* case 2: all nonprime time today */\n\telse if (conf.prime[d][NON_PRIME].all)\n\t\tprime = NON_PRIME;\n\n\t/* case 3: no primetime today */\n\telse if (conf.prime[d][PRIME].none)\n\t\tprime = 
NON_PRIME;\n\n\t/* case 4: no nonprime time today */\n\telse if (conf.prime[d][NON_PRIME].none)\n\t\tprime = PRIME;\n\t/* There are two more cases to handle, if we represent the 24 hours day as:\n\t *          0000 ------------------------2400\n\t * case 5 is when PRIME starts before NON_PRIME\n\t *          0000 -------P----NP----------2400\n\t * in which case the current time is PRIME only between P and NP\n\t */\n\telse if (npt > pt) {\n\t\tif (pt <= ct && ct < npt)\n\t\t\tprime = PRIME;\n\t\telse\n\t\t\tprime = NON_PRIME;\n\t}\n\t/*  case 6 is when NON_PRIME starts before PRIME\n\t *          0000 -------NP----P----------2400\n\t *  in which case the current time is NONPRIME only between NP and P\n\t *  This case also captures the setting of identical Prime and Nonprime times in the \"holidays\" file\n\t */\n\telse if (npt < pt) {\n\t\tif (npt <= ct && ct < pt)\n\t\t\tprime = NON_PRIME;\n\t\telse\n\t\t\tprime = PRIME;\n\t}\n\t/* Catchall case is NON_PRIME */\n\telse\n\t\tprime = NON_PRIME;\n\n\treturn prime;\n}\n\n/**\n * @brief\n *\t\tis_holiday - returns true if 'date' is a holiday\n *\n * @param[in]\tdate\t-\tamount of days since the beginning of the year\n *\t\t\t\t\t\t\tstarting with Jan 1 == 1 or a time_t.\n *\n * @return\tTRUE/FALSE\n * @retval\tTRUE\t: if today is a holiday\n * @retval\tFALSE\t: if not\n *\n */\nint\nis_holiday(long date)\n{\n\tint jdate;\n\tstruct tm *tmptr;\n\n\tif (date > 366) {\n\t\ttmptr = localtime((time_t *) &date);\n\t\tjdate = tmptr->tm_yday + 1;\n\t} else\n\t\tjdate = date;\n\n\treturn std::find(conf.holidays.begin(), conf.holidays.end(), jdate) != conf.holidays.end();\n}\n\n/**\n * @brief\tSet conf.prime values to reflect \"ALL PRIME\" before we start parsing\n * \t\t\tthe holidays file\n *\n * @param\tvoid\n *\n * @return void\n */\nstatic void\nhandle_missing_prime_info(void)\n{\n\tint d;\n\n\tfor (d = SUNDAY; d < HIGH_DAY; d++) {\n\t\tif (conf.prime[d][PRIME].all + conf.prime[d][PRIME].none + conf.prime[d][PRIME].hour + 
conf.prime[d][PRIME].min + conf.prime[d][NON_PRIME].hour + conf.prime[d][NON_PRIME].min == 0) {\n\t\t\tconf.prime[d][PRIME].all = TRUE;\n\t\t\tconf.prime[d][PRIME].none = FALSE;\n\t\t\tconf.prime[d][PRIME].hour = static_cast<unsigned int>(UNSPECIFIED);\n\t\t\tconf.prime[d][PRIME].min = static_cast<unsigned int>(UNSPECIFIED);\n\t\t\tconf.prime[d][NON_PRIME].none = TRUE;\n\t\t\tconf.prime[d][NON_PRIME].all = FALSE;\n\t\t\tconf.prime[d][NON_PRIME].hour = static_cast<unsigned int>(UNSPECIFIED);\n\t\t\tconf.prime[d][NON_PRIME].min = static_cast<unsigned int>(UNSPECIFIED);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\t\tparse_holidays - parse the holidays file.  It should be in UNICOS 8\n *\t\t\t format.\n *\n * @param[in]\tfname\t-\tname of holidays file\n *\n * @return\tsuccess/failure\n *\n */\nint\nparse_holidays(const char *fname)\n{\n\tFILE *fp;\t   /* file pointer to holidays file */\n\tchar buf[256];\t   /* buffer to read lines of the file into */\n\tchar *config_name; /* the first word of the line */\n\tchar *tok;\t   /* used with strtok() to parse the rest of the line */\n\tchar *endp;\t   /* used with strtol() */\n\tint num;\t   /* used to convert string -> integer */\n\tchar error = 0;\t   /* boolean: is there an error ? */\n\tint linenum = 0;   /* the current line number */\n\n\tif ((fp = fopen(fname, \"r\")) == NULL) {\n\t\tsprintf(log_buffer, \"Error opening file %s\", fname);\n\t\tlog_err(errno, \"parse_holidays\", log_buffer);\n\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_FILE, LOG_NOTICE, HOLIDAYS_FILE,\n\t\t\t  \"Warning: cannot open holidays file; assuming 24hr primetime\");\n\t\treturn 0;\n\t}\n\n\twhile (fgets(buf, 256, fp) != NULL) {\n\t\tlinenum++;\n\t\tif (buf[strlen(buf) - 1] == '\\n')\n\t\t\tbuf[strlen(buf) - 1] = '\\0';\n\t\tif (!skip_line(buf)) {\n\t\t\tconfig_name = strtok(buf, \"\t \");\n\n\t\t\t/* this tells us if we have the correct file format.  
It's ignored since\n\t\t\t * lots of error messages will be printed if the file format is wrong\n\t\t\t * and if the file format is correct but just not this, we really\n\t\t\t * shouldn't complain\n\t\t\t */\n\t\t\tif (!strcmp(config_name, \"HOLIDAYFILE_VERSION1\"))\n\t\t\t\t;\n\t\t\t/* the current year - if the file is old, we want to know to log errors\n\t\t\t * later about it\n\t\t\t *\n\t\t\t * FORMAT EXAMPLE\n\t\t\t *\n\t\t\t * YEAR 1998\n\t\t\t */\n\t\t\telse if (!strcmp(config_name, \"YEAR\")) {\n\t\t\t\ttok = strtok(NULL, \" \t\");\n\t\t\t\tif (tok == NULL)\n\t\t\t\t\terror = 1;\n\t\t\t\telse {\n\t\t\t\t\tnum = strtol(tok, &endp, 10);\n\t\t\t\t\tif (*endp != '\\0')\n\t\t\t\t\t\terror = 1;\n\t\t\t\t\telse\n\t\t\t\t\t\tconf.holiday_year = num;\n\t\t\t\t}\n\t\t\t}\n\t\t\t/* primetime hours for saturday\n\t\t\t * first number is primetime start, second is nonprime start\n\t\t\t *\n\t\t\t * FORMAT EXAMPLE\n\t\t\t *\n\t\t\t *  saturday \t0400\t1700\n\t\t\t */\n\t\t\telse if (!strcmp(config_name, \"saturday\")) {\n\t\t\t\ttok = strtok(NULL, \" \t\");\n\t\t\t\tif (load_day(SATURDAY, PRIME, tok) < 0)\n\t\t\t\t\terror = 1;\n\n\t\t\t\tif (!error) {\n\t\t\t\t\ttok = strtok(NULL, \" \t\");\n\t\t\t\t\tif (load_day(SATURDAY, NON_PRIME, tok) < 0)\n\t\t\t\t\t\terror = 1;\n\t\t\t\t}\n\t\t\t}\n\t\t\t/* primetime hours for sunday - same format as saturday */\n\t\t\telse if (!strcmp(config_name, \"sunday\")) {\n\t\t\t\ttok = strtok(NULL, \" \t\");\n\t\t\t\tif (load_day(SUNDAY, PRIME, tok) < 0)\n\t\t\t\t\terror = 1;\n\n\t\t\t\tif (!error) {\n\t\t\t\t\ttok = strtok(NULL, \" \t\");\n\t\t\t\t\tif (load_day(SUNDAY, NON_PRIME, tok) < 0)\n\t\t\t\t\t\terror = 1;\n\t\t\t\t}\n\t\t\t}\n\t\t\t/* primetime for weekday - same format as saturday */\n\t\t\telse if (!strcmp(config_name, \"weekday\")) {\n\t\t\t\ttok = strtok(NULL, \" \t\");\n\t\t\t\tif (load_day(WEEKDAY, PRIME, tok) < 0)\n\t\t\t\t\terror = 1;\n\t\t\t\telse if (load_day(MONDAY, PRIME, tok) < 0)\n\t\t\t\t\terror = 
1;\n\t\t\t\telse if (load_day(TUESDAY, PRIME, tok) < 0)\n\t\t\t\t\terror = 1;\n\t\t\t\telse if (load_day(WEDNESDAY, PRIME, tok) < 0)\n\t\t\t\t\terror = 1;\n\t\t\t\telse if (load_day(THURSDAY, PRIME, tok) < 0)\n\t\t\t\t\terror = 1;\n\t\t\t\telse if (load_day(FRIDAY, PRIME, tok) < 0)\n\t\t\t\t\terror = 1;\n\n\t\t\t\tif (!error) {\n\t\t\t\t\ttok = strtok(NULL, \" \t\");\n\t\t\t\t\tif (load_day(WEEKDAY, NON_PRIME, tok) < 0)\n\t\t\t\t\t\terror = 1;\n\t\t\t\t\telse if (load_day(MONDAY, NON_PRIME, tok) < 0)\n\t\t\t\t\t\terror = 1;\n\t\t\t\t\telse if (load_day(TUESDAY, NON_PRIME, tok) < 0)\n\t\t\t\t\t\terror = 1;\n\t\t\t\t\telse if (load_day(WEDNESDAY, NON_PRIME, tok) < 0)\n\t\t\t\t\t\terror = 1;\n\t\t\t\t\telse if (load_day(THURSDAY, NON_PRIME, tok) < 0)\n\t\t\t\t\t\terror = 1;\n\t\t\t\t\telse if (load_day(FRIDAY, NON_PRIME, tok) < 0)\n\t\t\t\t\t\terror = 1;\n\t\t\t\t}\n\t\t\t}\n\t\t\t/* primetime for monday - same format as saturday */\n\t\t\telse if (!strcmp(config_name, \"monday\")) {\n\t\t\t\ttok = strtok(NULL, \" \t\");\n\t\t\t\tif (load_day(MONDAY, PRIME, tok) < 0)\n\t\t\t\t\terror = 1;\n\t\t\t\tif (!error) {\n\t\t\t\t\ttok = strtok(NULL, \" \t\");\n\t\t\t\t\tif (load_day(MONDAY, NON_PRIME, tok) < 0)\n\t\t\t\t\t\terror = 1;\n\t\t\t\t}\n\t\t\t}\n\t\t\t/* primetime for tuesday - same format as saturday */\n\t\t\telse if (!strcmp(config_name, \"tuesday\")) {\n\t\t\t\ttok = strtok(NULL, \" \t\");\n\t\t\t\tif (load_day(TUESDAY, PRIME, tok) < 0)\n\t\t\t\t\terror = 1;\n\n\t\t\t\tif (!error) {\n\t\t\t\t\ttok = strtok(NULL, \" \t\");\n\t\t\t\t\tif (load_day(TUESDAY, NON_PRIME, tok) < 0)\n\t\t\t\t\t\terror = 1;\n\t\t\t\t}\n\t\t\t}\n\t\t\t/* primetime for wednesday - same format as saturday */\n\t\t\telse if (!strcmp(config_name, \"wednesday\")) {\n\t\t\t\ttok = strtok(NULL, \" \t\");\n\t\t\t\tif (load_day(WEDNESDAY, PRIME, tok) < 0)\n\t\t\t\t\terror = 1;\n\n\t\t\t\tif (!error) {\n\t\t\t\t\ttok = strtok(NULL, \" \t\");\n\t\t\t\t\tif (load_day(WEDNESDAY, NON_PRIME, tok) < 
0)\n\t\t\t\t\t\terror = 1;\n\t\t\t\t}\n\t\t\t}\n\t\t\t/* primetime for thursday - same format as saturday */\n\t\t\telse if (!strcmp(config_name, \"thursday\")) {\n\t\t\t\ttok = strtok(NULL, \" \t\");\n\t\t\t\tif (load_day(THURSDAY, PRIME, tok) < 0)\n\t\t\t\t\terror = 1;\n\n\t\t\t\tif (!error) {\n\t\t\t\t\ttok = strtok(NULL, \" \t\");\n\t\t\t\t\tif (load_day(THURSDAY, NON_PRIME, tok) < 0)\n\t\t\t\t\t\terror = 1;\n\t\t\t\t}\n\t\t\t}\n\t\t\t/* primetime for friday - same format as saturday */\n\t\t\telse if (!strcmp(config_name, \"friday\")) {\n\t\t\t\ttok = strtok(NULL, \" \t\");\n\t\t\t\tif (load_day(FRIDAY, PRIME, tok) < 0)\n\t\t\t\t\terror = 1;\n\n\t\t\t\tif (!error) {\n\t\t\t\t\ttok = strtok(NULL, \" \t\");\n\t\t\t\t\tif (load_day(FRIDAY, NON_PRIME, tok) < 0)\n\t\t\t\t\t\terror = 1;\n\t\t\t\t}\n\t\t\t}\n\t\t\t/*\n\t\t\t * holidays\n\t\t\t * We only care about the Julian date of the holiday.  It is enough\n\t\t\t * information to find out if it is a holiday or not\n\t\t\t *\n\t\t\t * FORMAT EXAMPLE\n\t\t\t *\n\t\t\t *  Julian date\tCalendar date\tholiday name\n\t\t\t *    1\t\tJan 1\t\tNew Year's Day\n\t\t\t */\n\t\t\telse {\n\t\t\t\tnum = strtol(config_name, &endp, 10);\n\t\t\t\tconf.holidays.push_back(num);\n\t\t\t}\n\n\t\t\tif (error) {\n\t\t\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_FILE, LOG_NOTICE, fname,\n\t\t\t\t\t   \"Error on line %d, line started with: %s\", linenum, config_name);\n\t\t\t}\n\t\t}\n\t\terror = 0;\n\t}\n\n\tif (conf.holiday_year != 0) {\n\t\t/* Let's make sure that any missing days get marked as 24hr prime-time */\n\t\thandle_missing_prime_info();\n\t}\n\n\tfclose(fp);\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tload_day - fill in the prime time part of the config structure\n *\n * @param[in]\td\t-\tenum days: can be WEEKDAY, SATURDAY, or SUNDAY\n * @param[in]\tpr \t-\tenum prime_time: can be PRIME or NON_PRIME\n * @param[in]\ttok\t-\ttoken\n *\n * @return\tint\n * @retval\t0\t: on success\n * @retval\t-1\t: on error\n *\n 
*/\nint\nload_day(enum days d, enum prime_time pr, const char *tok)\n{\n\tif (tok != NULL) {\n\t\tif (!strcmp(tok, \"all\") || !strcmp(tok, \"ALL\")) {\n\t\t\tif (pr == NON_PRIME && conf.prime[d][PRIME].all == TRUE) {\n\t\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_FILE, LOG_NOTICE, HOLIDAYS_FILE,\n\t\t\t\t\t  \"Warning: both prime & non-prime starts are 'all'; assuming 24hr primetime\");\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t\tconf.prime[d][pr].all = TRUE;\n\t\t\tconf.prime[d][pr].hour = static_cast<unsigned int>(UNSPECIFIED);\n\t\t\tconf.prime[d][pr].min = static_cast<unsigned int>(UNSPECIFIED);\n\t\t\tconf.prime[d][pr].none = FALSE;\n\t\t} else if (!strcmp(tok, \"none\") || !strcmp(tok, \"NONE\")) {\n\t\t\tif (pr == NON_PRIME && conf.prime[d][PRIME].none == TRUE) {\n\t\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_FILE, LOG_NOTICE, HOLIDAYS_FILE,\n\t\t\t\t\t  \"Warning: both prime & non-prime starts are 'none'; assuming 24hr primetime\");\n\t\t\t\treturn load_day(d, PRIME, \"all\");\n\t\t\t}\n\t\t\tconf.prime[d][pr].all = FALSE;\n\t\t\tconf.prime[d][pr].hour = static_cast<unsigned int>(UNSPECIFIED);\n\t\t\tconf.prime[d][pr].min = static_cast<unsigned int>(UNSPECIFIED);\n\t\t\tconf.prime[d][pr].none = TRUE;\n\t\t} else {\n\t\t\tchar *endp; /* used with strtol() */\n\t\t\tauto num = strtol(tok, &endp, 10);\n\t\t\tif (*endp == '\\0') {\n\t\t\t\t/* num is a 4 digit number of the time HHMM */\n\t\t\t\tauto mins = num % 100;\n\t\t\t\tauto hours = num / 100;\n\t\t\t\tconf.prime[d][pr].hour = hours;\n\t\t\t\tconf.prime[d][pr].min = mins;\n\t\t\t} else\n\t\t\t\treturn -1;\n\t\t}\n\t} else\n\t\treturn -1;\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\trecursive helper for end_prime_status\n *\n * @param[in]\tstart\t-\tthe time the function starts\n * @param[in]\tdate\t-\tthe time to check (date = time when we start)\n * @param[in]\tprime_status\t-\tcurrent prime status PRIME, NON_PRIME, or\n *\t\t\t\t\t\t\t\t\tNONE to have the function calculate the prime status at date\n *\n * 
@retval\ttime_t\t-\twhen the current prime status ends\n * @retval\tSCHD_INFINITY\t-\tif the current prime status never ends\n */\nstatic time_t\nend_prime_status_rec(time_t start, time_t date, enum prime_time prime_status)\n{\n\tstruct tm *tmptr;\n\tenum days day;\n\n\t/* base case: if we're more than 7 days out, the current prime status will\n\t * never end.  We know this because the only prime settings are weekly.\n\t */\n\tif (date > start + (7 * 24 * 60 * 60))\n\t\treturn (time_t) SCHD_INFINITY;\n\n\ttmptr = localtime(&date);\n\n\tswitch (tmptr->tm_wday) {\n\t\tcase 0:\n\t\t\tday = SUNDAY;\n\t\t\tbreak;\n\t\tcase 1:\n\t\t\tday = MONDAY;\n\t\t\tbreak;\n\t\tcase 2:\n\t\t\tday = TUESDAY;\n\t\t\tbreak;\n\t\tcase 3:\n\t\t\tday = WEDNESDAY;\n\t\t\tbreak;\n\t\tcase 4:\n\t\t\tday = THURSDAY;\n\t\t\tbreak;\n\t\tcase 5:\n\t\t\tday = FRIDAY;\n\t\t\tbreak;\n\t\tcase 6:\n\t\t\tday = SATURDAY;\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tday = WEEKDAY;\n\t}\n\n\tif (prime_status == PRIME) {\n\t\t/* We are currently in primetime. */\n\t\t/* If there is no non-primetime scheduled today, recurse into tomorrow. */\n\t\tif (conf.prime[day][NON_PRIME].none)\n\t\t\treturn end_prime_status_rec(start, date + time_left_today(tmptr), prime_status);\n\t\t/* If there is no non-primetime left today, recurse into tomorrow. */\n\t\tif (conf.prime[day][NON_PRIME].hour < static_cast<unsigned int>(tmptr->tm_hour))\n\t\t\treturn end_prime_status_rec(start, date + time_left_today(tmptr), prime_status);\n\t\tif (conf.prime[day][NON_PRIME].hour == static_cast<unsigned int>(tmptr->tm_hour) &&\n\t\t    conf.prime[day][NON_PRIME].min < static_cast<unsigned int>(tmptr->tm_min))\n\t\t\treturn end_prime_status_rec(start, date + time_left_today(tmptr), prime_status);\n\t\t/* Non-primetime started at the beginning of the day, return it. */\n\t\tif (conf.prime[day][NON_PRIME].all || is_holiday(tmptr->tm_yday + 1))\n\t\t\treturn date;\n\t\t/* Non-primetime will start later today, return the scheduled time. 
*/\n\t\treturn date + (conf.prime[day][NON_PRIME].hour - tmptr->tm_hour) * 3600 + (static_cast<int>(conf.prime[day][NON_PRIME].min) - tmptr->tm_min) * 60 - tmptr->tm_sec;\n\t} else {\n\t\t/* We are currently in non-primetime. */\n\t\t/* If there is no primetime scheduled today, recurse into tomorrow. */\n\t\tif (conf.prime[day][PRIME].none || is_holiday(tmptr->tm_yday + 1))\n\t\t\treturn end_prime_status_rec(start, date + time_left_today(tmptr),\n\t\t\t\t\t\t    prime_status);\n\t\t/* If there is no primetime left today, recurse into tomorrow. */\n\t\tif (conf.prime[day][PRIME].hour < static_cast<unsigned int>(tmptr->tm_hour))\n\t\t\treturn end_prime_status_rec(start, date + time_left_today(tmptr),\n\t\t\t\t\t\t    prime_status);\n\t\tif (conf.prime[day][PRIME].hour == static_cast<unsigned int>(tmptr->tm_hour) &&\n\t\t    conf.prime[day][PRIME].min < static_cast<unsigned int>(tmptr->tm_min))\n\t\t\treturn end_prime_status_rec(start, date + time_left_today(tmptr),\n\t\t\t\t\t\t    prime_status);\n\t\t/* Primetime started at the beginning of the day, return it. */\n\t\tif (conf.prime[day][PRIME].all)\n\t\t\treturn date;\n\t\t/* Primetime will start later today, return the scheduled time. 
*/\n\t\treturn date + (conf.prime[day][PRIME].hour - tmptr->tm_hour) * 3600 + (static_cast<int>(conf.prime[day][PRIME].min) - tmptr->tm_min) * 60 - tmptr->tm_sec;\n\t}\n}\n\n/**\n * @brief\n * \t\tfind the time when the current prime status\n *\t\t   (primetime or nonprimetime) ends.\n *\n * @param[in]\tdate\t-\tthe time to check (date = time when we start)\n *\n * @par NOTE:\tIf prime status doesn't end in start + 7 days,\n *\t\t\t\tit is considered infinite\n *\n * @return time when the current prime status ends.\n * @retval\ttime_t\t: when the current prime status ends\n * @retval\tSCHD_INFINITY\t: if  the current prime status never ends\n *\n */\ntime_t\nend_prime_status(time_t date)\n{\n\tenum prime_time p;\n\n\tp = is_prime_time(date);\n\n\t/* no year means all prime all the time*/\n\tif (p == PRIME && conf.holiday_year == 0)\n\t\treturn SCHD_INFINITY;\n\n\treturn end_prime_status_rec(date, date, p);\n}\n\n/**\n * @brief\n * \t\tdo any initializations that need to happen at the\n *\t\t\t  start of prime time\n *\n * @param[in]\tpolicy\t-\tpolicy info\n *\n * @return\tint\n * @retval\t1\t: success\n * @retval\t0\t: error\n *\n */\nint\ninit_prime_time(struct status *policy, char *arg)\n{\n\tif (policy == NULL)\n\t\treturn 0;\n\n\tpolicy->is_prime = PRIME;\n\tpolicy->round_robin = conf.prime_rr;\n\tpolicy->by_queue = conf.prime_bq;\n\tpolicy->strict_fifo = conf.prime_sf;\n\tpolicy->strict_ordering = conf.prime_so;\n\tpolicy->sort_by = &conf.prime_sort;\n\tpolicy->fair_share = conf.prime_fs;\n\tpolicy->backfill = conf.prime_bf;\n\tpolicy->sort_nodes = conf.prime_sn;\n\tpolicy->backfill_prime = conf.prime_bp;\n\tpolicy->preempting = conf.prime_pre;\n\tpolicy->node_sort = &conf.prime_node_sort;\n#ifdef NAS /* localmod 034 */\n\tpolicy->shares_track_only = conf.prime_sto;\n#endif /* localmod 034 */\n\n\t/* we want to know how much we can spill over INTO nonprime time */\n\tpolicy->prime_spill = conf.nonprime_spill;\n\tpolicy->smp_dist = conf.prime_smp_dist;\n\n\t/* 
policy -> prime_status_end is initially set by the scheduler's first\n\t * call to update_cycle_status() at the beginning of the cycle\n\t */\n\tpolicy->prime_status_end = end_prime_status(policy->prime_status_end);\n\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tdo any initializations that need to happen at\n *\t\t\t      the beginning of non prime time\n *\n * @param[in]\tpolicy\t-\tpolicy info\n *\n * @return\tint\n * @retval\t1\t: success\n * @retval\t0 \t: error\n *\n */\nint\ninit_non_prime_time(struct status *policy, char *arg)\n{\n\tif (policy == NULL)\n\t\treturn 0;\n\n\tpolicy->is_prime = NON_PRIME;\n\tpolicy->round_robin = conf.non_prime_rr;\n\tpolicy->by_queue = conf.non_prime_bq;\n\tpolicy->strict_fifo = conf.non_prime_sf;\n\tpolicy->strict_ordering = conf.non_prime_so;\n\tpolicy->sort_by = &conf.non_prime_sort;\n\tpolicy->fair_share = conf.non_prime_fs;\n\tpolicy->backfill = conf.non_prime_bf;\n\tpolicy->sort_nodes = conf.non_prime_sn;\n\tpolicy->backfill_prime = conf.non_prime_bp;\n\tpolicy->preempting = conf.non_prime_pre;\n\tpolicy->node_sort = &conf.non_prime_node_sort;\n#ifdef NAS /* localmod 034 */\n\tpolicy->shares_track_only = conf.non_prime_sto;\n#endif /* localmod 034 */\n\n\t/* we want to know how much we can spill over INTO primetime */\n\tpolicy->prime_spill = conf.prime_spill;\n\tpolicy->smp_dist = conf.non_prime_smp_dist;\n\n\t/* policy -> prime_status_end is initially set by the scheduler's first\n\t * call to update_cycle_status() at the beginning of the cycle\n\t */\n\tpolicy->prime_status_end = end_prime_status(policy->prime_status_end);\n\n\treturn 1;\n}\n"
  },
  {
    "path": "src/scheduler/prime.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _PRIME_H\n#define _PRIME_H\n\n#include \"time.h\"\n\n/*\n *\ttime_left_today - macro - return the time left today\n *\t\t\t  The macro will calculate the time between x and\n *\t\t\t  23:59:59 and then add 1 to make it 00:00:00\n */\n#define time_left_today(x) ((23 - ((x)->tm_hour)) * 3600 + \\\n\t\t\t    (59 - ((x)->tm_min)) * 60 +    \\\n\t\t\t    (59 - ((x)->tm_sec)) + 1)\n\n/*\n *      is_prime_time - will return true if it is currently prime_time\n */\nenum prime_time is_prime_time(time_t date);\n\n/*\n *      check_prime - check if it is prime time for a particular day\n */\nenum prime_time check_prime(enum days d, struct tm *t);\n\n/*\n *      is_holiday - returns true if argument date is a holiday\n */\nint is_holiday(long date);\n\n/*\n *      load_day - fill in the prime time part of the config structure\n */\nint load_day(enum days d, enum prime_time pr, const char *tok);\n\n/*\n *      parse_holidays - parse the holidays file.  
It should be in UNICOS 8\n *                       format.\n */\nint parse_holidays(const char *fname);\n\n/*\n *      init_prime_time - do any initializations that need to happen at the\n *                        start of prime time\n */\nint init_prime_time(struct status *, char *);\n\n/*\n *      init_non_prime_time - do any initializations that need to happen at\n *                            the beginning of non prime time\n */\nint init_non_prime_time(struct status *, char *);\n\n/*\n *\n *\tend_prime_status - find the time when the current prime status\n *\t\t\t   (primetime or nonprimetime) ends.\n *\n *\t  date - the time to check (date = time when we start)\n *\n *\tNOTE: If prime status doesn't end in start + 7 days, it is considered\n *\t\tinfinite\n *\n *\ttime_t - when the current prime status ends\n *      SCHD_INFINITY - if the current prime status never ends\n *\n */\ntime_t end_prime_status(time_t date);\n\n#endif /* _PRIME_H */\n"
  },
  {
    "path": "src/scheduler/queue.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <errno.h>\n\n#include \"constant.h\"\n#include \"log.h\"\n#include \"queue.h\"\n\n/**\n * @brief\tConstructor for the data structure 'queue'\n *\n * @param\tvoid\n *\n * @return queue *\n * @retval a newly allocated queue object\n * @retval NULL for malloc error\n */\nds_queue *\nnew_ds_queue(void)\n{\n\tds_queue *ret_obj;\n\n\tret_obj = static_cast<ds_queue *>(malloc(sizeof(ds_queue)));\n\tif (ret_obj == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\tret_obj->min_size = QUEUE_DS_MIN_SIZE;\n\tret_obj->content_arr = NULL;\n\tret_obj->front = 0;\n\tret_obj->rear = 0;\n\tret_obj->q_size = 0;\n\n\treturn ret_obj;\n}\n\n/**\n * @brief\tDestructor for a queue object\n *\n * @param[in]\tqueue - the queue object to deallocate\n *\n * @return void\n */\nvoid\nfree_ds_queue(ds_queue *queue)\n{\n\tif (queue != NULL) {\n\t\tfree(queue->content_arr);\n\t\tfree(queue);\n\t}\n}\n\n/**\n * @brief\tEnqueue an object into the queue\n *\n * @param[in]\tqueue - the queue to enqueue the object in\n * @param[in]\tobj - the object to enqueue\n *\n * @return int\n * @retval 1 for Success\n * @retval 0 for Failure\n */\nint\nds_enqueue(ds_queue *queue, void *obj)\n{\n\tlong curr_rear;\n\tlong curr_qsize;\n\n\tif (queue == NULL || obj == NULL)\n\t\treturn 0;\n\n\tcurr_rear = 
queue->rear;\n\tcurr_qsize = queue->q_size;\n\n\tif (curr_rear >= curr_qsize) {\n\t\tlong new_qsize;\n\t\tvoid **realloc_ptr = NULL;\n\n\t\t/* Need to resize the queue */\n\t\tif (curr_qsize == 0) /* First enqueue operation */\n\t\t\tnew_qsize = queue->min_size;\n\t\telse\n\t\t\tnew_qsize = 2 * curr_qsize;\n\n\t\trealloc_ptr = static_cast<void **>(realloc(queue->content_arr, new_qsize * sizeof(void *)));\n\t\tif (realloc_ptr == NULL) {\n\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\treturn 0;\n\t\t}\n\n\t\tqueue->content_arr = realloc_ptr;\n\t\tqueue->q_size = new_qsize;\n\t}\n\n\tqueue->content_arr[curr_rear] = obj;\n\tqueue->rear = curr_rear + 1;\n\n\treturn 1;\n}\n\n/**\n * @brief\tDequeue an object from the queue\n *\n * @param[in]\tqueue - the queue to use\n *\n * @return void *\n * @retval the first item in queue\n * @retval NULL for error/empty queue\n */\nvoid *\nds_dequeue(ds_queue *queue)\n{\n\tif (queue == NULL)\n\t\treturn NULL;\n\n\tif (queue->front == queue->rear) { /* queue is empty */\n\t\t/* Reset front and rear pointers */\n\t\tqueue->front = 0;\n\t\tqueue->rear = 0;\n\t\treturn NULL;\n\t}\n\n\treturn queue->content_arr[queue->front++];\n}\n\n/**\n * @brief\tCheck if a queue is empty\n *\n * @param[in]\tqueue  - the queue to use\n *\n * @return int\n * @retval 1 if queue is empty\n * @retval 0 otherwise\n */\nint\nds_queue_is_empty(ds_queue *queue)\n{\n\tif (queue == NULL)\n\t\treturn 1;\n\tif (queue->front == queue->rear) {\n\t\t/* Make sure front and rear pointers are set to 0 */\n\t\tqueue->front = 0;\n\t\tqueue->rear = 0;\n\t\treturn 1;\n\t} else\n\t\treturn 0;\n}\n"
  },
  {
    "path": "src/scheduler/queue.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef SRC_SCHEDULER_QUEUE_H_\n#define SRC_SCHEDULER_QUEUE_H_\n\n#define QUEUE_DS_MIN_SIZE 512 /* Minimum size of the queue data structure */\n\ntypedef struct ds_queue ds_queue;\n\nstruct ds_queue {\n\tint min_size;\n\tlong front;\n\tlong rear;\n\tlong q_size;\n\tvoid **content_arr;\n};\n\nds_queue *new_ds_queue(void);\nvoid free_ds_queue(ds_queue *queue);\nint ds_enqueue(ds_queue *queue, void *obj);\nvoid *ds_dequeue(ds_queue *queue);\nint ds_queue_is_empty(ds_queue *queue);\n\n#endif /* SRC_SCHEDULER_QUEUE_H_ */\n"
  },
  {
    "path": "src/scheduler/queue_info.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    queue_info.c\n *\n * @brief\n * \t\tqueue_info.c -  contains functions which are related to queue_info structure.\n *\n * Functions included are:\n * \tquery_queues()\n * \tquery_queue_info()\n * \tfree_queues()\n * \tupdate_queue_on_run()\n * \tupdate_queue_on_end()\n * \tdup_queues()\n * \tfind_queue_info()\n * \tnode_queue_cmp()\n *\n */\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <errno.h>\n#include <assert.h>\n#include <pbs_error.h>\n#include <pbs_ifl.h>\n#include <log.h>\n#include \"queue_info.h\"\n#include \"job_info.h\"\n#include \"resv_info.h\"\n#include \"constant.h\"\n#include \"misc.h\"\n#include \"check.h\"\n#include \"config.h\"\n#include \"globals.h\"\n#include \"node_info.h\"\n#include \"sort.h\"\n#include \"resource_resv.h\"\n#include \"resource.h\"\n#include \"state_count.h\"\n#ifdef NAS\n#include \"site_code.h\"\n#endif\n#include \"node_partition.h\"\n#include \"limits_if.h\"\n#include \"pbs_internal.h\"\n#include \"fifo.h\"\n\n/**\n * @brief\n * \t\tcreates an array of queue_info structs which contain\n *\t\t\tan array of jobs\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tpbs_sd\t-\tconnection to the pbs_server\n * @param[in]\tsinfo\t-\tserver to query queues from\n *\n * @return\tpointer to the head of the queue structure\n *\n 
*/\nstd::vector<queue_info *>\nquery_queues(status *policy, int pbs_sd, server_info *sinfo)\n{\n\t/* the linked list of queues returned from the server */\n\tstruct batch_status *queues;\n\n\t/* the current queue in the linked list of queues */\n\tstruct batch_status *cur_queue;\n\n\t/* array of pointers to internal scheduling structure for queues */\n\tstd::vector<queue_info *> qinfo_arr;\n\n\t/* return code */\n\tsched_error_code ret;\n\n\t/* buffer to store comment message */\n\tchar comment[MAX_LOG_SIZE];\n\n\t/* buffer to store log message */\n\tchar log_msg[MAX_LOG_SIZE];\n\n\t/* used to count users/groups */\n\tcounts *cts;\n\n\t/* peer server descriptor */\n\tint peer_sd = 0;\n\n\tint err = 0; /* an error has occurred */\n\n\tschd_error *sch_err;\n\n\tif (policy == NULL || sinfo == NULL)\n\t\treturn qinfo_arr;\n\n\tsch_err = new_schd_error();\n\n\tif (sch_err == NULL)\n\t\treturn qinfo_arr;\n\n\t/* get queue info from PBS server */\n\tif ((queues = send_statqueue(pbs_sd, NULL, NULL, NULL)) == NULL) {\n\t\tconst char *errmsg = pbs_geterrmsg(pbs_sd);\n\t\tif (errmsg == NULL)\n\t\t\terrmsg = \"\";\n\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_QUEUE, LOG_NOTICE, \"queue_info\",\n\t\t\t   \"Statque failed: %s (%d)\", errmsg, pbs_errno);\n\t\tfree_schd_error(sch_err);\n\t\treturn qinfo_arr;\n\t}\n\n\tfor (cur_queue = queues; cur_queue != NULL && !err; cur_queue = cur_queue->next) {\n\t\tqueue_info *qinfo;\n\n\t\t/* convert queue information from batch_status to queue_info */\n\t\tif ((qinfo = query_queue_info(policy, cur_queue, sinfo)) == NULL) {\n\t\t\tfree_schd_error(sch_err);\n\t\t\tpbs_statfree(queues);\n\t\t\tfree_queues(qinfo_arr);\n\t\t\treturn qinfo_arr;\n\t\t}\n\n\t\tif (queue_in_partition(qinfo, sc_attrs.partition)) {\n\t\t\t/* check if it is OK for jobs to run in the queue */\n\t\t\tret = is_ok_to_run_queue(sinfo->policy, qinfo);\n\t\t\tif (ret == SUCCESS)\n\t\t\t\tqinfo->is_ok_to_run = 1;\n\t\t\telse\n\t\t\t\tqinfo->is_ok_to_run = 0;\n\n\t\t\tif 
(qinfo->has_nodes) {\n\t\t\t\tqinfo->nodes = node_filter(sinfo->nodes, sinfo->num_nodes,\n\t\t\t\t\t\t\t   node_queue_cmp, (void *) qinfo->name.c_str(), 0);\n\n\t\t\t\tqinfo->num_nodes = count_array(qinfo->nodes);\n\t\t\t}\n\n\t\t\tif (ret != QUEUE_NOT_EXEC) {\n\t\t\t\t/* get all the jobs which reside in the queue */\n\t\t\t\tqinfo->jobs = query_jobs(policy, pbs_sd, qinfo, NULL, qinfo->name);\n\n\t\t\t\tif (qinfo->is_ded_queue)\n\t\t\t\t\tsinfo->has_ded_queue = true;\n\t\t\t\tif (qinfo->is_prime_queue)\n\t\t\t\t\tsinfo->has_prime_queue = true;\n\t\t\t\telse if (qinfo->is_nonprime_queue)\n\t\t\t\t\tsinfo->has_nonprime_queue = true;\n\n\t\t\t\tfor (auto &pq : conf.peer_queues) {\n\t\t\t\t\tif (qinfo->name == pq.local_queue) {\n\t\t\t\t\t\tint peer_on = 1;\n\t\t\t\t\t\t/* Locally-peered queues reuse the scheduler's connection */\n\t\t\t\t\t\tif (pq.remote_server.empty()) {\n\t\t\t\t\t\t\tpeer_sd = pbs_sd;\n\t\t\t\t\t\t} else if ((peer_sd = pbs_connect_noblk(const_cast<char *>(pq.remote_server.c_str()))) < 0) {\n\t\t\t\t\t\t\t/* Message was PBSEVENT_SCHED - moved to PBSEVENT_DEBUG2 for\n\t\t\t\t\t\t\t * failover reasons (see bz3002)\n\t\t\t\t\t\t\t */\n\t\t\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG2, PBS_EVENTCLASS_REQUEST, LOG_INFO, qinfo->name,\n\t\t\t\t\t\t\t\t   \"Can not connect to peer %s\", pq.remote_server.c_str());\n\t\t\t\t\t\t\tpq.peer_sd = -1;\n\t\t\t\t\t\t\tpeer_on = 0; /* do not proceed */\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (peer_on) {\n\t\t\t\t\t\t\tpq.peer_sd = peer_sd;\n\t\t\t\t\t\t\tqinfo->is_peer_queue = 1;\n\t\t\t\t\t\t\t/* get peered jobs */\n\t\t\t\t\t\t\tqinfo->jobs = query_jobs(policy, peer_sd, qinfo, qinfo->jobs, pq.remote_queue);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tclear_schd_error(sch_err);\n\t\t\t\tset_schd_error_codes(sch_err, NOT_RUN, ret);\n\t\t\t\tif (qinfo->is_ok_to_run == 0) {\n\t\t\t\t\ttranslate_fail_code(sch_err, comment, log_msg);\n\t\t\t\t\tupdate_jobs_cant_run(pbs_sd, qinfo->jobs, NULL, sch_err, 
START_WITH_JOB);\n\t\t\t\t}\n\n\t\t\t\tcount_states(qinfo->jobs, &(qinfo->sc));\n\n\t\t\t\tqinfo->running_jobs = resource_resv_filter(qinfo->jobs,\n\t\t\t\t\t\t\t\t\t   qinfo->sc.total, check_run_job, NULL, 0);\n\n\t\t\t\tif (qinfo->running_jobs == NULL)\n\t\t\t\t\terr = 1;\n\n\t\t\t\tif (qinfo->has_soft_limit || qinfo->has_hard_limit) {\n\t\t\t\t\tcounts *allcts;\n\t\t\t\t\tallcts = find_alloc_counts(qinfo->alljobcounts,\n\t\t\t\t\t\t\t\t   PBS_ALL_ENTITY);\n\n\t\t\t\t\tif (qinfo->running_jobs != NULL) {\n\t\t\t\t\t\t/* set the user and group counts */\n\t\t\t\t\t\tfor (int j = 0; qinfo->running_jobs[j] != NULL; j++) {\n\t\t\t\t\t\t\tcts = find_alloc_counts(qinfo->user_counts,\n\t\t\t\t\t\t\t\t\t\tqinfo->running_jobs[j]->user);\n\t\t\t\t\t\t\tupdate_counts_on_run(cts, qinfo->running_jobs[j]->resreq);\n\n\t\t\t\t\t\t\tcts = find_alloc_counts(qinfo->group_counts,\n\t\t\t\t\t\t\t\t\t\tqinfo->running_jobs[j]->group);\n\t\t\t\t\t\t\tupdate_counts_on_run(cts, qinfo->running_jobs[j]->resreq);\n\n\t\t\t\t\t\t\tcts = find_alloc_counts(qinfo->project_counts,\n\t\t\t\t\t\t\t\t\t\tqinfo->running_jobs[j]->project);\n\t\t\t\t\t\t\tupdate_counts_on_run(cts, qinfo->running_jobs[j]->resreq);\n\n\t\t\t\t\t\t\tupdate_counts_on_run(allcts, qinfo->running_jobs[j]->resreq);\n\t\t\t\t\t\t}\n\t\t\t\t\t\tcreate_total_counts(NULL, qinfo, NULL, QUEUE);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tqinfo_arr.push_back(qinfo);\n\n\t\t} else\n\t\t\tdelete qinfo;\n\t}\n\n\tpbs_statfree(queues);\n\tfree_schd_error(sch_err);\n\tif (err) {\n\t\tfree_queues(qinfo_arr);\n\t}\n\n\treturn qinfo_arr;\n}\n\n/**\n * @brief\n *\t\tquery_queue_info - collects information from a batch_status and\n *\t\t\t   puts it in a queue_info struct for easier access\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tqueue\t-\tbatch_status struct to get queue information from\n * @param[in]\tsinfo\t-\tserver where queue resides\n *\n * @return\tnewly allocated and filled queue_info\n * @retval\tNULL\t: on error\n 
*\n */\n\nqueue_info *\nquery_queue_info(status *policy, struct batch_status *queue, server_info *sinfo)\n{\n\tstruct attrl *attrp;\t  /* linked list of attributes from server */\n\tstruct queue_info *qinfo; /* queue_info being created */\n\tschd_resource *resp;\t  /* resource in resource qres list */\n\tchar *endp;\t\t  /* used with strtol() */\n\tsch_resource_t count;\t  /* used to convert string -> num */\n\n\tif ((qinfo = new queue_info(queue->name)) == NULL)\n\t\treturn NULL;\n\n\tif (qinfo->liminfo == NULL) {\n\t\tdelete qinfo;\n\t\treturn NULL;\n\t}\n\n\tattrp = queue->attribs;\n\tqinfo->server = sinfo;\n\twhile (attrp != NULL) {\n\t\tif (!strcmp(attrp->name, ATTR_start)) { /* started */\n\t\t\tif (!strcmp(attrp->value, ATR_TRUE))\n\t\t\t\tqinfo->is_started = 1;\n\t\t\telse\n\t\t\t\tqinfo->is_started = 0;\n\t\t} else if (!strcmp(attrp->name, ATTR_HasNodes)) {\n\t\t\tif (!strcmp(attrp->value, ATR_TRUE)) {\n\t\t\t\tsinfo->has_nodes_assoc_queue = 1;\n\t\t\t\tqinfo->has_nodes = 1;\n\t\t\t} else\n\t\t\t\tqinfo->has_nodes = 0;\n\t\t} else if (!strcmp(attrp->name, ATTR_backfill_depth)) {\n\t\t\tqinfo->backfill_depth = strtol(attrp->value, NULL, 10);\n\t\t\tif (qinfo->backfill_depth > 0)\n\t\t\t\tpolicy->backfill = 1;\n\t\t} else if (!strcmp(attrp->name, ATTR_partition)) {\n\t\t\tif (attrp->value != NULL) {\n\t\t\t\tqinfo->partition = string_dup(attrp->value);\n\t\t\t\tif (qinfo->partition == NULL) {\n\t\t\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\t\t\tdelete qinfo;\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n\t\t\t}\n\t\t} else if (is_reslimattr(attrp)) {\n\t\t\t(void) lim_setlimits(attrp, LIM_RES, qinfo->liminfo);\n\t\t\tif (strstr(attrp->value, \"u:\") != NULL)\n\t\t\t\tqinfo->has_user_limit = 1;\n\t\t\tif (strstr(attrp->value, \"g:\") != NULL)\n\t\t\t\tqinfo->has_grp_limit = 1;\n\t\t\tif (strstr(attrp->value, \"p:\") != NULL)\n\t\t\t\tqinfo->has_proj_limit = 1;\n\t\t\tif (strstr(attrp->value, \"o:\") != NULL)\n\t\t\t\tqinfo->has_all_limit = 1;\n\t\t} else if (is_runlimattr(attrp)) 
{\n\t\t\t(void) lim_setlimits(attrp, LIM_RUN, qinfo->liminfo);\n\t\t\tif (strstr(attrp->value, \"u:\") != NULL)\n\t\t\t\tqinfo->has_user_limit = 1;\n\t\t\tif (strstr(attrp->value, \"g:\") != NULL)\n\t\t\t\tqinfo->has_grp_limit = 1;\n\t\t\tif (strstr(attrp->value, \"p:\") != NULL)\n\t\t\t\tqinfo->has_proj_limit = 1;\n\t\t\tif (strstr(attrp->value, \"o:\") != NULL)\n\t\t\t\tqinfo->has_all_limit = 1;\n\t\t} else if (is_oldlimattr(attrp)) {\n\t\t\tconst char *limname = convert_oldlim_to_new(attrp);\n\t\t\t(void) lim_setlimits(attrp, LIM_OLD, qinfo->liminfo);\n\n\t\t\tif (strstr(limname, \"u:\") != NULL)\n\t\t\t\tqinfo->has_user_limit = 1;\n\t\t\tif (strstr(limname, \"g:\") != NULL)\n\t\t\t\tqinfo->has_grp_limit = 1;\n\t\t\t/* no need to check for project limits because there were no old style project limits */\n\t\t} else if (!strcmp(attrp->name, ATTR_p)) { /* priority */\n\t\t\tcount = strtol(attrp->value, &endp, 10);\n\t\t\tif (*endp != '\\0')\n\t\t\t\tcount = -1;\n\t\t\tqinfo->priority = count;\n\t\t} else if (!strcmp(attrp->name, ATTR_qtype)) { /* queue_type */\n\t\t\tif (!strcmp(attrp->value, \"Execution\")) {\n\t\t\t\tqinfo->is_exec = 1;\n\t\t\t\tqinfo->is_route = 0;\n\t\t\t} else if (!strcmp(attrp->value, \"Route\")) {\n\t\t\t\tqinfo->is_route = 1;\n\t\t\t\tqinfo->is_exec = 0;\n\t\t\t}\n\t\t} else if (!strcmp(attrp->name, ATTR_NodeGroupKey))\n\t\t\tqinfo->node_group_key = break_comma_list(std::string(attrp->value));\n\t\telse if (!strcmp(attrp->name, ATTR_rescavail)) { /* resources_available*/\n#ifdef NAS\n\t\t\t/* localmod 040 */\n\t\t\tif (!strcmp(attrp->resource, ATTR_ignore_nodect_sort)) {\n\t\t\t\tif (!strcmp(attrp->value, ATR_TRUE))\n\t\t\t\t\tqinfo->ignore_nodect_sort = 1;\n\t\t\t\telse\n\t\t\t\t\tqinfo->ignore_nodect_sort = 0;\n\n\t\t\t\tresp = NULL;\n\t\t\t} /* localmod 038 */\n\t\t\telse if (!strcmp(attrp->resource, ATTR_topjob_setaside)) {\n\t\t\t\tif (!strcmp(attrp->value, ATR_TRUE))\n\t\t\t\t\tqinfo->is_topjob_set_aside = 
1;\n\t\t\t\telse\n\t\t\t\t\tqinfo->is_topjob_set_aside = 0;\n\n\t\t\t\tresp = NULL;\n\t\t\t} else\n#endif\n\t\t\t\tresp = find_alloc_resource_by_str(qinfo->qres, attrp->resource);\n\t\t\tif (resp != NULL) {\n\t\t\t\tif (qinfo->qres == NULL)\n\t\t\t\t\tqinfo->qres = resp;\n\n\t\t\t\tif (set_resource(resp, attrp->value, RF_AVAIL) == 0) {\n\t\t\t\t\tdelete qinfo;\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n\t\t\t\tqinfo->has_resav_limit = 1;\n\t\t\t}\n\t\t} else if (!strcmp(attrp->name, ATTR_rescassn)) { /* resources_assigned */\n\t\t\tresp = find_alloc_resource_by_str(qinfo->qres, attrp->resource);\n\t\t\tif (qinfo->qres == NULL)\n\t\t\t\tqinfo->qres = resp;\n\t\t\tif (resp != NULL) {\n\t\t\t\tif (set_resource(resp, attrp->value, RF_ASSN) == 0) {\n\t\t\t\t\tdelete qinfo;\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n#ifdef NAS\n\t\t/* localmod 046 */\n\t\telse if (!strcmp(attrp->name, ATTR_maxstarve)) {\n\t\t\ttime_t starve;\n\t\t\tstarve = site_decode_time(attrp->value);\n\t\t\tqinfo->max_starve = starve;\n\t\t}\n\t\t/* localmod 034 */\n\t\telse if (!strcmp(attrp->name, ATTR_maxborrow)) {\n\t\t\ttime_t borrow;\n\t\t\tborrow = site_decode_time(attrp->value);\n\t\t\tqinfo->max_borrow = borrow;\n\t\t}\n#endif\n\n\t\tattrp = attrp->next;\n\t}\n\n\tif (has_hardlimits(qinfo->liminfo))\n\t\tqinfo->has_hard_limit = 1;\n\tif (has_softlimits(qinfo->liminfo))\n\t\tqinfo->has_soft_limit = 1;\n\n\treturn qinfo;\n}\n\n// queue_info constructor\nqueue_info::queue_info(const char *qname) : name(qname)\n{\n\tis_started = false;\n\tis_exec = false;\n\tis_route = false;\n\tis_ok_to_run = false;\n\thas_nodes = false;\n\thas_soft_limit = false;\n\thas_hard_limit = false;\n\tis_peer_queue = false;\n\thas_resav_limit = false;\n\thas_user_limit = false;\n\thas_grp_limit = false;\n\thas_proj_limit = false;\n\thas_all_limit = false;\n\tpriority = 0;\n\tinit_state_count(&sc);\n\tliminfo = lim_alloc_liminfo();\n\tnum_nodes = 0;\n\tqres = NULL;\n\tjobs = NULL;\n\trunning_jobs = NULL;\n\tserver = 
NULL;\n\tresv = NULL;\n\tnodes = NULL;\n\tnodepart = NULL;\n\tallpart = NULL;\n\tnum_parts = 0;\n\tnum_topjobs = 0;\n\tbackfill_depth = UNSPECIFIED;\n#ifdef NAS\n\t/* localmod 046 */\n\tmax_starve = 0;\n\t/* localmod 034 */\n\tmax_borrow = UNSPECIFIED;\n\t/* localmod 038 */\n\tis_topjob_set_aside = 0;\n\t/* localmod 040 */\n\tignore_nodect_sort = 0;\n#endif\n\tpartition = NULL;\n\n\t/* check if the queue is a dedicated time queue */\n\tif (name.compare(0, conf.ded_prefix.length(), conf.ded_prefix) == 0)\n\t\tis_ded_queue = true;\n\telse\n\t\tis_ded_queue = false;\n\t/* check if the queue is a prime time queue */\n\tif (name.compare(0, conf.pt_prefix.length(), conf.pt_prefix) == 0)\n\t\tis_prime_queue = true;\n\telse\n\t\tis_prime_queue = false;\n\t/* check if the queue is a nonprimetime queue */\n\tif (name.compare(0, conf.npt_prefix.length(), conf.npt_prefix) == 0)\n\t\tis_nonprime_queue = true;\n\telse\n\t\tis_nonprime_queue = false;\n}\n\n/**\n * @brief\n *\t\tfree_queues - free an array of queues\n *\n * @param[in,out]\tqarr\t-\tqinfo array to delete\n *\n * @return\tnothing\n *\n */\n\nvoid\nfree_queues(std::vector<queue_info *> &qarr)\n{\n\tif (qarr.empty())\n\t\treturn;\n\n\tfor (auto queue : qarr)\n\t\tdelete queue;\n\tqarr.clear();\n}\n\n/**\n * @brief\n *\t\tupdate_queue_on_run - update the information kept in a qinfo structure\n *\t\t\t\twhen a resource resv is run\n *\n * @param[in,out]\tqinfo\t-\tthe queue to update\n * @param[in]\tresresv\t-\tthe resource resv that was run\n * @param[in]  job_state -\tthe old state of a job if resresv is a job\n *\t\t\t\tIf the old_state is found to be suspended\n *\t\t\t\tthen only resources that were released\n *\t\t\t\tduring suspension will be accounted.\n *\n * @return\tnothing\n *\n */\nvoid\nupdate_queue_on_run(queue_info *qinfo, resource_resv *resresv, char *job_state)\n{\n\tresource_req *req;\n\n\tif (qinfo == NULL || resresv == NULL)\n\t\treturn;\n\n\tif (resresv->is_job && resresv->job == 
NULL)\n\t\treturn;\n\n\tif (resresv->is_job) {\n\t\tqinfo->sc.running++;\n\t\t/* note: if job is suspended, counts will get off.\n\t\t *       sc.queued is not used, and sc.suspended isn't used again\n\t\t *       after this point\n\t\t *       BZ 5798\n\t\t */\n\t\tqinfo->sc.queued--;\n\t}\n\n\tif (!cstat.node_sort->empty() && conf.node_sort_unused && qinfo->nodes != NULL)\n\t\tqsort(qinfo->nodes, qinfo->num_nodes, sizeof(node_info *),\n\t\t      multi_node_sort);\n\n\tif ((job_state != NULL) && (*job_state == 'S') && (resresv->job->resreq_rel != NULL))\n\t\treq = resresv->job->resreq_rel;\n\telse\n\t\treq = resresv->resreq;\n\n\twhile (req != NULL) {\n\t\tauto res = find_resource(qinfo->qres, req->def);\n\n\t\tif (res != NULL)\n\t\t\tres->assigned += req->amount;\n\n\t\treq = req->next;\n\t}\n\n\tqinfo->running_jobs = add_resresv_to_array(qinfo->running_jobs, resresv, NO_FLAGS);\n\n\tif (qinfo->has_soft_limit || qinfo->has_hard_limit) {\n\n\t\tif (resresv->is_job && resresv->job != NULL) {\n\t\t\tcounts *cts;\n\n\t\t\tupdate_total_counts(NULL, qinfo, resresv, QUEUE);\n\n\t\t\tcts = find_alloc_counts(qinfo->group_counts, resresv->group);\n\t\t\tupdate_counts_on_run(cts, resresv->resreq);\n\n\t\t\tcts = find_alloc_counts(qinfo->project_counts, resresv->project);\n\t\t\tupdate_counts_on_run(cts, resresv->resreq);\n\n\t\t\tcts = find_alloc_counts(qinfo->user_counts, resresv->user);\n\t\t\tupdate_counts_on_run(cts, resresv->resreq);\n\n\t\t\tauto allcts = find_alloc_counts(qinfo->alljobcounts, PBS_ALL_ENTITY);\n\t\t\tupdate_counts_on_run(allcts, resresv->resreq);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\t\tupdate_queue_on_end - update a queue when a resource resv\n *\t\t\t\thas finished running\n *\n * @par\tNOTE:\tjob must be in pre-ended state\n *\n * @param[in,out]\tqinfo\t-\tthe queue to update\n * @param[in]\tresresv\t-\tthe resource resv which is no longer running\n * @param[in]  job_state -\tthe old state of a job if resresv is a job\n *\t\t\t\tIf the old_state is 
found to be suspended\n *\t\t\t\tthen only resources that were released\n *\t\t\t\tduring suspension will be accounted.\n *\n * @return\tnothing\n *\n */\nvoid\nupdate_queue_on_end(queue_info *qinfo, resource_resv *resresv, const char *job_state)\n{\n\tschd_resource *res = NULL; /* resource from queue */\n\tresource_req *req = NULL;  /* resource request from job */\n\n\tif (qinfo == NULL || resresv == NULL)\n\t\treturn;\n\n\tif (resresv->is_job && resresv->job == NULL)\n\t\treturn;\n\n\tif (resresv->is_job) {\n\t\tif (resresv->job->is_running) {\n\t\t\tqinfo->sc.running--;\n\t\t\tremove_resresv_from_array(qinfo->running_jobs, resresv);\n\t\t} else if (resresv->job->is_exiting)\n\t\t\tqinfo->sc.exiting--;\n\n\t\tstate_count_add(&(qinfo->sc), job_state, 1);\n\t}\n\n\tif ((job_state != NULL) && (*job_state == 'S') && (resresv->job->resreq_rel != NULL))\n\t\treq = resresv->job->resreq_rel;\n\telse\n\t\treq = resresv->resreq;\n\n\twhile (req != NULL) {\n\t\tres = find_resource(qinfo->qres, req->def);\n\n\t\tif (res != NULL) {\n\t\t\tres->assigned -= req->amount;\n\n\t\t\tif (res->assigned < 0) {\n\t\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_DEBUG, __func__,\n\t\t\t\t\t   \"%s turned negative %.2lf, setting it to 0\", res->name, res->assigned);\n\t\t\t\tres->assigned = 0;\n\t\t\t}\n\t\t}\n\t\treq = req->next;\n\t}\n\n\tif (qinfo->has_soft_limit || qinfo->has_hard_limit) {\n\t\tif (is_resresv_running(resresv)) {\n\t\t\tupdate_total_counts_on_end(NULL, qinfo, resresv, QUEUE);\n\t\t\tauto cts = find_counts(qinfo->group_counts, resresv->group);\n\t\t\tupdate_counts_on_end(cts, resresv->resreq);\n\n\t\t\tcts = find_counts(qinfo->project_counts, resresv->project);\n\t\t\tupdate_counts_on_end(cts, resresv->resreq);\n\n\t\t\tcts = find_counts(qinfo->user_counts, resresv->user);\n\t\t\tupdate_counts_on_end(cts, resresv->resreq);\n\n\t\t\tcts = find_alloc_counts(qinfo->alljobcounts, PBS_ALL_ENTITY);\n\t\t\tupdate_counts_on_end(cts, 
resresv->resreq);\n\t\t}\n\t}\n}\n\n// queue_info destructor\nqueue_info::~queue_info()\n{\n\tfree_resource_resv_array(jobs);\n\tfree_resource_list(qres);\n\tfree(running_jobs);\n\tfree(nodes);\n\tfree_counts_list(alljobcounts);\n\tfree_counts_list(group_counts);\n\tfree_counts_list(project_counts);\n\tfree_counts_list(user_counts);\n\tfree_counts_list(total_alljobcounts);\n\tfree_counts_list(total_group_counts);\n\tfree_counts_list(total_project_counts);\n\tfree_counts_list(total_user_counts);\n\tfree_node_partition_array(nodepart);\n\tfree_node_partition(allpart);\n\tlim_free_liminfo(liminfo);\n\tfree(partition);\n}\n\n/**\n * @brief\n *\t\tdup_queues - duplicate the queues on a server\n *\n * @param[in]\toqueues\t-\tthe queues to duplicate\n * @param[in]\tnsinfo\t-\tthe new server\n *\n * @return\tthe duplicated queue array\n *\n */\nstd::vector<queue_info *>\ndup_queues(const std::vector<queue_info *> &oqueues, server_info *nsinfo)\n{\n\tstd::vector<queue_info *> new_queues = {};\n\n\tif (oqueues.empty())\n\t\treturn new_queues;\n\n\tfor (auto queue : oqueues) {\n\t\tqueue_info *temp;\n\t\ttry {\n\t\t\ttemp = new queue_info(*queue, nsinfo);\n\t\t} catch (std::bad_alloc &e) {\n\t\t\tfree_queues(new_queues);\n\t\t\t/* return the empty vector */\n\t\t\treturn new_queues;\n\t\t}\n\t\tnew_queues.push_back(temp);\n\t}\n\treturn new_queues;\n}\n\n/**\n * @brief \tqueue_info copy constructor\n *\n * @param[in]\tnsinfo\t-\tthe server which owns the duplicated queue\n *\n */\nqueue_info::queue_info(queue_info &oqinfo, server_info *nsinfo) : name(oqinfo.name), sc(oqinfo.sc)\n{\n\tserver = nsinfo;\n\n\tis_started = oqinfo.is_started;\n\tis_exec = oqinfo.is_exec;\n\tis_route = oqinfo.is_route;\n\tis_ok_to_run = oqinfo.is_ok_to_run;\n\tis_ded_queue = oqinfo.is_ded_queue;\n\tis_prime_queue = oqinfo.is_prime_queue;\n\tis_nonprime_queue = oqinfo.is_nonprime_queue;\n\thas_nodes = oqinfo.has_nodes;\n\thas_soft_limit = oqinfo.has_soft_limit;\n\thas_hard_limit = 
oqinfo.has_hard_limit;\n\tis_peer_queue = oqinfo.is_peer_queue;\n\thas_resav_limit = oqinfo.has_resav_limit;\n\thas_user_limit = oqinfo.has_user_limit;\n\thas_grp_limit = oqinfo.has_grp_limit;\n\thas_proj_limit = oqinfo.has_proj_limit;\n\thas_all_limit = oqinfo.has_all_limit;\n\tliminfo = lim_dup_liminfo(oqinfo.liminfo);\n\tpriority = oqinfo.priority;\n\tnum_parts = oqinfo.num_parts;\n\tnum_topjobs = oqinfo.num_topjobs;\n\tbackfill_depth = oqinfo.backfill_depth;\n\tnum_nodes = oqinfo.num_nodes;\n\n#ifdef NAS\n\t/* localmod 046 */\n\tmax_starve = oqinfo.max_starve;\n\t/* localmod 034 */\n\tmax_borrow = oqinfo.max_borrow;\n\t/* localmod 038 */\n\tis_topjob_set_aside = oqinfo.is_topjob_set_aside;\n\t/* localmod 040 */\n\tignore_nodect_sort = oqinfo.ignore_nodect_sort;\n#endif\n\n\tqres = dup_resource_list(oqinfo.qres);\n\talljobcounts = dup_counts_umap(oqinfo.alljobcounts);\n\tgroup_counts = dup_counts_umap(oqinfo.group_counts);\n\tproject_counts = dup_counts_umap(oqinfo.project_counts);\n\tuser_counts = dup_counts_umap(oqinfo.user_counts);\n\ttotal_alljobcounts = dup_counts_umap(oqinfo.total_alljobcounts);\n\ttotal_group_counts = dup_counts_umap(oqinfo.total_group_counts);\n\ttotal_project_counts = dup_counts_umap(oqinfo.total_project_counts);\n\ttotal_user_counts = dup_counts_umap(oqinfo.total_user_counts);\n\tnodepart = dup_node_partition_array(oqinfo.nodepart, nsinfo);\n\tallpart = dup_node_partition(oqinfo.allpart, nsinfo);\n\tnode_group_key = oqinfo.node_group_key;\n\n\tif (oqinfo.resv != NULL) {\n\t\tresv = find_resource_resv_by_indrank(nsinfo->resvs, oqinfo.resv->resresv_ind, oqinfo.resv->rank);\n\t\tif (resv != NULL) {\n\t\t\tif (!resv->resv->is_standing) {\n\t\t\t\t/* just in case we didn't set the reservation cross pointer */\n\t\t\t\tresv->resv->resv_queue = this;\n\t\t\t} else {\n\t\t\t\t/* For standing reservations, we need to restore the resv_queue pointers for all occurrences */\n\t\t\t\tint i;\n\t\t\t\tfor (i = 0; server->resvs[i] != NULL; i++) 
{\n\t\t\t\t\tif (server->resvs[i]->name == resv->name)\n\t\t\t\t\t\tserver->resvs[i]->resv->resv_queue = this;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t} else\n\t\tresv = NULL;\n\n\tjobs = dup_resource_resv_array(oqinfo.jobs, server, this);\n\n\trunning_jobs = resource_resv_filter(jobs, sc.total, check_run_job, NULL, 0);\n\n\tif (oqinfo.has_nodes)\n\t\tnodes = copy_node_ptr_array(oqinfo.nodes, nsinfo->nodes);\n\telse\n\t\tnodes = NULL;\n\n\tpartition = string_dup(oqinfo.partition);\n}\n\n/**\n * @brief\n * \t\tfind a queue by name\n *\n * @param[in]\tqinfo_arr\t-\tthe array of queues to look in\n * @param[in]\tname\t-\tthe name of the queue\n *\n * @return\tthe found queue\n * @retval\tNULL\t: if not found\n *\n */\nqueue_info *\nfind_queue_info(std::vector<queue_info *> &qinfo_arr, const std::string &name)\n{\n\tif (qinfo_arr.empty())\n\t\treturn NULL;\n\n\tfor (auto queue : qinfo_arr) {\n\t\tif (queue->name == name)\n\t\t\treturn queue;\n\t}\n\n\t/* queue not found */\n\treturn NULL;\n}\n\n/**\n * @brief\n *\t\tnode_queue_cmp - used with node_filter to filter nodes attached to a\n *\t\t         specific queue\n *\n * @param[in]\tninfo\t-\tthe node we're currently filtering\n * @param[in]\targ\t-\tthe name of the queue\n *\n * @return\tint\n * @retval\t1\t: keep the node\n * @retval\t0\t: don't keep the node\n *\n */\nint\nnode_queue_cmp(node_info *ninfo, void *arg)\n{\n\tif (ninfo->queue_name == (char *) arg)\n\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *      queue_in_partition\t-  tells whether the given queue belongs to this scheduler's partition\n *\n * @param[in]\tqinfo\t\t-  queue information\n * @param[in]\tpartition\t-  partition associated with the scheduler\n *\n * @return\tint\n * @retval\t1\t: if the queue belongs to this scheduler\n * @retval\t0\t: if it does not\n */\nint\nqueue_in_partition(queue_info *qinfo, char *partition)\n{\n\tif (dflt_sched) {\n\t\tif (qinfo->partition == NULL || 
(strcmp(qinfo->partition, DEFAULT_PARTITION) == 0))\n\t\t\treturn 1;\n\t\telse\n\t\t\treturn 0;\n\t}\n\tif (qinfo->partition == NULL)\n\t\treturn 0;\n\n\tif (strcmp(partition, qinfo->partition) == 0)\n\t\treturn 1;\n\telse\n\t\treturn 0;\n}\n"
  },
  {
    "path": "src/scheduler/queue_info.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _QUEUE_INFO_H\n#define _QUEUE_INFO_H\n#include <pbs_ifl.h>\n#include \"data_types.h\"\n\n/*\n *\n *      query_queues - creates an array of queue_info structs which contain\n *                      an array of jobs\n */\nstd::vector<queue_info *> query_queues(status *policy, int pbs_sd, server_info *sinfo);\n\n/*\n *      query_queue_info - collects information from a batch_status and\n *                         puts it in a queue_info struct for easier access\n */\nqueue_info *query_queue_info(status *policy, struct batch_status *queue, struct server_info *sinfo);\n\n/*\n *      free_queues - frees the memory for a vector of queues\n */\nvoid free_queues(std::vector<queue_info *> &qarr);\n\n/*\n *      update_queue_on_run - update the information kept in a qinfo structure\n *                              when a job is run\n */\nvoid update_queue_on_run(queue_info *qinfo, resource_resv *resresv, char *job_state);\n\n/*\n *      dup_queues - duplicate the queues on a server\n */\nstd::vector<queue_info *> dup_queues(const std::vector<queue_info *> &oqueues, server_info *nsinfo);\n\n/*\n *\n *\tfind_queue_info - find a queue by name\n *\n *\t  qinfo_arr - the array of queues to look in\n *\t  name - the name of the 
queue\n *\n *\treturn the found queue or NULL\n *\n */\nqueue_info *find_queue_info(std::vector<queue_info *> &qinfo_arr, const std::string &name);\n\n/*\n *\tupdate_queue_on_end - update a queue when a job has finished running\n */\nvoid\nupdate_queue_on_end(queue_info *qinfo, resource_resv *resresv,\n\t\t    const char *job_state);\n\nint queue_in_partition(queue_info *qinfo, char *partition);\n\nstruct batch_status *send_statqueue(int virtual_fd, char *id, struct attrl *attrib, char *extend);\n\n#endif /* _QUEUE_INFO_H */\n"
  },
  {
    "path": "src/scheduler/resource.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    resource.c\n *\n * @brief\n * \tresource.c - contains functions related to resources.\n *\n * Functions included are:\n * \tquery_resources()\n * \tconv_rsc_type()\n * \tdef_is_consumable()\n * \tdef_is_bool()\n * \tcopy_resdef_array()\n * \tfind_resdef()\n * \tis_res_avail_set()\n * \tadd_resource_sig()\n * \tcreate_resource_signature()\n * \tupdate_resource_defs()\n * \tresstr_to_resdef()\n * \tcollect_resources_from_requests()\n * \tupdate_sorting_defs()\n *\n */\n\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <errno.h>\n\n#include <pbs_ifl.h>\n#include <log.h>\n#include \"resource.h\"\n#include \"constant.h\"\n#include \"config.h\"\n#include \"data_types.h\"\n#include \"misc.h\"\n#include \"globals.h\"\n#include \"resource_resv.h\"\n#include \"pbs_internal.h\"\n#include \"limits_if.h\"\n#include \"sort.h\"\n#include \"parse.h\"\n#include \"fifo.h\"\n\n/**\n * @brief\n * \t\tquery a pbs server for the resources it knows about and fill in the global unordered_map\n *\n * @param[in]\tpbs_sd\t-\tcommunication descriptor to pbs server\n *\n * @return unordered_map of resource defs\n */\nstd::unordered_map<std::string, resdef *>\nquery_resources(int pbs_sd)\n{\n\tstruct batch_status *bs;     /* queried resources from server */\n\tstruct batch_status *cur_bs; /* used to 
iterate over resources */\n\tstruct attrl *attrp;\t     /* iterate over resource fields */\n\tstd::unordered_map<std::string, resdef *> tmpres;\n\n\tif ((bs = send_statrsc(pbs_sd, NULL, NULL, const_cast<char *>(\"p\"))) == NULL) {\n\t\tconst char *errmsg = pbs_geterrmsg(pbs_sd);\n\t\tif (errmsg == NULL)\n\t\t\terrmsg = \"\";\n\n\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_REQUEST, LOG_INFO, \"pbs_statrsc\",\n\t\t\t   \"pbs_statrsc failed: %s (%d)\", errmsg, pbs_errno);\n\t\treturn {};\n\t}\n\n\tfor (cur_bs = bs; cur_bs != NULL; cur_bs = cur_bs->next) {\n\t\tint flags = NO_FLAGS;\n\t\tresource_type rtype;\n\n\t\tfor (attrp = cur_bs->attribs; attrp != NULL; attrp = attrp->next) {\n\t\t\tchar *endp;\n\n\t\t\tif (!strcmp(attrp->name, ATTR_RESC_TYPE)) {\n\t\t\t\tint num = strtol(attrp->value, &endp, 10);\n\t\t\t\trtype = conv_rsc_type(num);\n\t\t\t} else if (!strcmp(attrp->name, ATTR_RESC_FLAG)) {\n\t\t\t\tflags = strtol(attrp->value, &endp, 10);\n\t\t\t}\n\t\t}\n\t\ttmpres[cur_bs->name] = new resdef(cur_bs->name, flags, rtype);\n\t}\n\tpbs_statfree(bs);\n\n\t/**\n\t * @par Make sure all the well known resources are sent to us.\n\t *      This is to allow us to directly index into the allres umap.\n\t *      Do not directly index into the allres umap for non-well known resources.  
Use find_resdef()\n\t */\n\tfor (const auto &r : well_known_res) {\n\t\tif (tmpres.find(r) == tmpres.end()) {\n\t\t\tfor (auto &d : tmpres)\n\t\t\t\tdelete d.second;\n\t\t\treturn {};\n\t\t}\n\t}\n\n\treturn tmpres;\n}\n\n/**\n * @brief\n * \t\tconvert server type number into resource_type struct\n *\n * @param[in]\ttype\t-\tserver type number\n * @param[out]\trtype\t-\tresource type structure\n *\n * @return\tconverted resource_type\n *\n */\nresource_type\nconv_rsc_type(int type)\n{\n\tresource_type rtype;\n\tswitch (type) {\n\t\tcase ATR_TYPE_STR:\n\t\tcase ATR_TYPE_ARST:\n\t\t\trtype.is_string = true;\n\t\t\trtype.is_non_consumable = true;\n\t\t\tbreak;\n\t\tcase ATR_TYPE_BOOL:\n\t\t\trtype.is_boolean = true;\n\t\t\trtype.is_non_consumable = true;\n\t\t\tbreak;\n\t\tcase ATR_TYPE_SIZE:\n\t\t\trtype.is_size = true;\n\t\t\trtype.is_num = true;\n\t\t\trtype.is_consumable = true;\n\t\t\tbreak;\n\t\tcase ATR_TYPE_SHORT:\n\t\tcase ATR_TYPE_LONG:\n\t\tcase ATR_TYPE_LL:\n\t\t\trtype.is_long = true;\n\t\t\trtype.is_num = true;\n\t\t\trtype.is_consumable = true;\n\t\t\tbreak;\n\t\tcase ATR_TYPE_FLOAT:\n\t\t\trtype.is_float = true;\n\t\t\trtype.is_num = true;\n\t\t\trtype.is_consumable = true;\n\t\t\tbreak;\n\t}\n\treturn rtype;\n}\n\n/**\n * @brief find a resdef in the global allres\n * \t  This function should be used if not finding a well known resource\n *\n * @param name - the name of the resdef to find\n *\n * @return resdef *\n * @retval found resdef\n * @retval NULL if not found\n */\nresdef *\nfind_resdef(const std::string &name)\n{\n\tauto f = allres.find(name);\n\tif (f == allres.end())\n\t\treturn NULL;\n\n\treturn f->second;\n}\n\n/**\n * @brief\n * \t\tchecks if a resource avail is set.\n *\n * @param[in]\tres\t-\tresource to see if it is set\n *\n * @return\tint\n * @retval\t1\t: is set\n * @retval\t0\t: not set\n */\nint\nis_res_avail_set(schd_resource *res)\n{\n\tif (res == NULL)\n\t\treturn 0;\n\n\tif (res->type.is_string) {\n\t\tif (res->str_avail != 
NULL && res->str_avail[0] != NULL)\n\t\t\treturn 1;\n\t} else if (res->avail != SCHD_INFINITY_RES)\n\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tadd the resource signature of 'res' to the string 'sig'\n *\n * @param[in,out]\tsig\t-\tresource signature we're adding to\n * @param[in,out]\tsig_size\t-\tsize of string sig\n * @param[in]\tres\t-\tthe resource to add to sig\n *\n * @return\tsuccess/failure\n * @retval\t1\t: success\n * @retval\t0\t: failure\n */\nint\nadd_resource_sig(char **sig, int *sig_size, schd_resource *res)\n{\n\tif (sig == NULL || res == NULL)\n\t\treturn 0;\n\n\tif (pbs_strcat(sig, sig_size, res->name) == 0)\n\t\treturn 0;\n\tif (pbs_strcat(sig, sig_size, \"=\") == 0)\n\t\treturn 0;\n\tif (pbs_strcat(sig, sig_size, res_to_str(res, RF_AVAIL)) == 0)\n\t\treturn 0;\n\n\treturn 1;\n}\n/**\n * @brief\n * \t\tcreate node string resource node signature based on the resources\n *      in order from the 'resources' parameter.  This signature can be used\n *      to compare two nodes to see if they are equivalent resource wise\n * @par\n *      FORM: res0=val:res1=val:...:resN=val\n *      Where 0, 1, .., N are indices into the resources array\n *\n * @param[in]\tnode\t-\tnode to create signature from\n * @param[in]\tresources\t-\tstring array of resources\n * @param[in]\tflags\t-\tCHECK_ALL_BOOLS - include all booleans even if not in resources\n *\n * @return\tchar *\n * @retval\tsignature of node\n * @retval\tNULL\t: on error\n *\n * @par\tit is the responsibility of the caller to free string returned\n */\nchar *\ncreate_resource_signature(schd_resource *reslist, std::unordered_set<resdef *> &resources, unsigned int flags)\n{\n\tchar *sig = NULL;\n\tint sig_size = 0;\n\tschd_resource *res;\n\n\tif (reslist == NULL)\n\t\treturn NULL;\n\n\tif ((sig = static_cast<char *>(malloc(1024))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\tsig_size = 1024;\n\tsig[0] = '\\0';\n\n\tfor (const auto &r : resources) 
{\n\t\tres = find_resource(reslist, r);\n\t\tif (res != NULL) {\n\t\t\tif (res->indirect_res != NULL) {\n\t\t\t\tres = res->indirect_res;\n\t\t\t}\n\t\t\tif (is_res_avail_set(res)) {\n\t\t\t\tadd_resource_sig(&sig, &sig_size, res);\n\t\t\t\tif (pbs_strcat(&sig, &sig_size, \":\") == NULL) {\n\t\t\t\t\tfree(sig);\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tif ((flags & ADD_ALL_BOOL)) {\n\t\tfor (const auto &br : boolres) {\n\t\t\tif (resources.find(br) == resources.end()) {\n\t\t\t\tres = find_resource(reslist, br);\n\t\t\t\tif (res != NULL) {\n\t\t\t\t\tadd_resource_sig(&sig, &sig_size, res);\n\t\t\t\t\tif (pbs_strcat(&sig, &sig_size, \":\") == NULL) {\n\t\t\t\t\t\tfree(sig);\n\t\t\t\t\t\treturn NULL;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\t/* we will have a trailing ':' if we did anything, so strip it*/\n\tif (sig[0] != '\\0')\n\t\tsig[strlen(sig) - 1] = '\\0';\n\n\treturn sig;\n}\n\n/**\n * @brief update allres and sub-containers of resource definitions.  This is called\n *\t\tin schedule().  
If it fails in schedule() we'll pick it up in the next call to query_server()\n *\n * @param[in]\tpbs_sd\t-\tconnection descriptor to the pbs server\n *\n * @return\tbool\n * @retval\ttrue - successfully updated resdefs\n * @retval\tfalse - failed to update resdefs\n */\nbool\nupdate_resource_defs(int pbs_sd)\n{\n\tauto tmpres = query_resources(pbs_sd);\n\n\tif (tmpres.empty())\n\t\treturn false;\n\n\tfor (auto &lr : last_running) {\n\t\tresource_req *prev_res = NULL;\n\t\tfor (auto ru = lr.resused; ru != NULL;) {\n\t\t\tauto f = tmpres.find(ru->name);\n\t\t\tif (f == tmpres.end()) {\n\t\t\t\tresource_req *tru;\n\t\t\t\ttru = ru->next;\n\t\t\t\tfree_resource_req(ru);\n\t\t\t\tif (prev_res != NULL)\n\t\t\t\t\tprev_res->next = tru;\n\t\t\t\telse\n\t\t\t\t\tlr.resused = tru;\n\t\t\t\tru = tru;\n\t\t\t} else {\n\t\t\t\tru->def = f->second;\n\t\t\t\tru->name = ru->def->name.c_str();\n\t\t\t\tprev_res = ru;\n\t\t\t\tru = ru->next;\n\t\t\t}\n\t\t}\n\t}\n\n\tfor (auto &d : allres)\n\t\tdelete d.second;\n\n\tallres = tmpres;\n\n\tconsres.clear();\n\tfor (const auto &def : allres) {\n\t\tif (def.second->type.is_consumable)\n\t\t\tconsres.insert(def.second);\n\t}\n\n\tboolres.clear();\n\tfor (const auto &def : allres) {\n\t\tif (def.second->type.is_boolean)\n\t\t\tboolres.insert(def.second);\n\t}\n\n\tconf.resdef_to_check.clear();\n\tif (!conf.res_to_check.empty()) {\n\t\tconf.resdef_to_check = resstr_to_resdef(conf.res_to_check);\n\t}\n\tupdate_sorting_defs();\n\n\tclear_limres();\n\n\treturn true;\n}\n\n/**\n * @brief\n * \t\tconvert a set of string resource names into resdefs\n *\n * @param[in]\tresstr\t-\tset of resource name strings\n *\n * @return\tset of resdefs\n * @retval\tempty set if no names could be resolved; unknown resources are logged and skipped\n */\nstd::unordered_set<resdef *>\nresstr_to_resdef(const std::unordered_set<std::string> &resstr)\n{\n\tstd::unordered_set<resdef *> defs;\n\n\tfor (const auto &str : resstr) {\n\t\tauto def = find_resdef(str);\n\t\tif (def != NULL)\n\t\t\tdefs.insert(def);\n\t\telse 
{\n\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_FILE, LOG_NOTICE, str, \"Unknown Resource\");\n\t\t}\n\t}\n\n\treturn defs;\n}\n\nstd::unordered_set<resdef *>\nresstr_to_resdef(const char *const *resstr)\n{\n\tstd::unordered_set<resdef *> defs;\n\n\tfor (int i = 0; resstr[i] != NULL; i++) {\n\t\tauto def = find_resdef(resstr[i]);\n\t\tif (def != NULL)\n\t\t\tdefs.insert(def);\n\t\telse {\n\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_FILE, LOG_NOTICE, resstr[i], \"Unknown Resource\");\n\t\t}\n\t}\n\n\treturn defs;\n}\n\n/**\n * @brief\n * \t\tcollect a unique list of resource definitions from an array of requests\n *\n * @param[in]\tresresv_arr\t-\tarray of requests\n *\n * @return\tarray of resource definitions\n */\nstd::unordered_set<resdef *>\ncollect_resources_from_requests(resource_resv **resresv_arr)\n{\n\tint i;\n\tresource_req *req;\n\tstd::unordered_set<resdef *> defset;\n\n\tfor (i = 0; resresv_arr[i] != NULL; i++) {\n\t\tresource_resv *r = resresv_arr[i];\n\n\t\t/* schedselect: node resources - resources to be satisfied on the nodes */\n\t\tif (r->select != NULL) {\n\t\t\tfor (const auto &sdef : r->select->defs)\n\t\t\t\tdefset.insert(sdef);\n\t\t}\n\t\t/*\n\t\t * execselect: select created from the exec_vnode.  This is likely to\n\t\t * be a subset of resources from schedselect + 'vnode'.  It is possible\n\t\t * that a job was run with qrun -H (res=val) where res is not part of\n\t\t * the schedselect.  
The exec_vnode is taken directly from the -H argument.\n\t\t * The qrun -H case is why we need to do this check.\n\t\t */\n\t\tif (r->execselect != NULL) {\n\t\t\tif ((r->job != NULL && in_runnable_state(r)) ||\n\t\t\t    (r->resv != NULL && (r->resv->resv_state == RESV_BEING_ALTERED || r->resv->resv_substate == RESV_DEGRADED))) {\n\t\t\t\tfor (const auto &esd : r->execselect->defs)\n\t\t\t\t\tdefset.insert(esd);\n\t\t\t}\n\t\t}\n\t\t/* Resource_List: job-wide resources: resources submitted with\n\t\t * qsub -l, and resources with the ATTR_DFLAG_RASSN flag, for which the\n\t\t * server sums all the requested amounts in the select and sets them job wide\n\t\t */\n\t\tfor (req = r->resreq; req != NULL; req = req->next)\n\t\t\tif (conf.res_to_check.find(req->name) != conf.res_to_check.end())\n\t\t\t\tdefset.insert(req->def);\n\t}\n\treturn defset;\n}\n\n/**\n * @brief update the resource def for a single sort_info vector\n * @param[in, out] siv - sort_info vector to update\n * @param[in] obj - object type (node or job)\n * @param[in] prefix - string prefix for logging\n *\n * @return nothing\n */\nvoid\nupdate_single_sort_def(std::vector<sort_info> &siv, int obj, const char *prefix)\n{\n\tfor (auto &si : siv) {\n\t\tauto f = allres.find(si.res_name);\n\t\tif (is_speccase_sort(si.res_name, obj))\n\t\t\tsi.def = NULL;\n\t\telse if (f == allres.end()) {\n\t\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_FILE, LOG_NOTICE, CONFIG_FILE,\n\t\t\t\t   \"%s sorting resource %s is not a valid resource\", prefix, si.res_name.c_str());\n\t\t\tsi.def = NULL;\n\t\t} else\n\t\t\tsi.def = f->second;\n\t}\n}\n\n/**\n * @brief update the resource definition pointers in the sort_info structures\n *\n * @par\tWe parse our config file when we start.  We do not have the resource\n *      definitions at that time.  
They also can change over time if the server\n *      sends us a SCH_CONFIGURE command.\n */\nvoid\nupdate_sorting_defs(void)\n{\n\tupdate_single_sort_def(conf.prime_node_sort, SOBJ_NODE, \"prime node\");\n\tupdate_single_sort_def(conf.non_prime_node_sort, SOBJ_NODE, \"Non-prime node\");\n\tupdate_single_sort_def(conf.prime_sort, SOBJ_JOB, \"prime job\");\n\tupdate_single_sort_def(conf.non_prime_sort, SOBJ_JOB, \"Non-prime job\");\n}\n\n/**\n * \t@brief resource_type constructor\n */\nresource_type::resource_type()\n{\n\tis_non_consumable = false;\n\tis_string = false;\n\tis_boolean = false;\n\n\tis_consumable = false;\n\tis_num = false;\n\tis_long = false;\n\tis_float = false;\n\tis_size = false;\n\tis_time = false;\n}\n"
  },
  {
    "path": "src/scheduler/resource.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include \"data_types.h\"\n#ifndef _RESOURCE_H\n#define _RESOURCE_H\n\n/*\n *\tquery_resources - query a pbs server for the resources it knows about\n *\n *\t  pbs_sd - communication descriptor to pbs server\n *\n *\treturns resdef list of resources\n */\nstd::unordered_map<std::string, resdef *> query_resources(int pbs_sd);\n\n/*\n *\tconv_rsc_type - convert server type number into resource_type struct\n *\t  IN:  type - server type number\n *\t  OUT: rtype - resource type structure\n *\treturns nothing\n */\nresource_type conv_rsc_type(int type);\n\n/* find and return a resdef entry by name */\nresdef *find_resdef(const std::string &name);\n\n/*\n *  create resdef array based on a str array of resource names\n */\nresdef **resdef_arr_from_str_arr(resdef **deflist, char **strarr);\n\n/*\n * query the resource definition from the server and create derived\n * data structures.  Only query if required.\n */\nbool update_resource_defs(int pbs_sd);\n\n/* checks if a resource avail is set. 
*/\nint is_res_avail_set(schd_resource *res);\n\n/* create a resource signature for a set of resources */\nchar *create_resource_signature(schd_resource *reslist, std::unordered_set<resdef *> &resources, unsigned int flags);\n\n/* collect a unique list of resources from an array of requests */\nstd::unordered_set<resdef *> collect_resources_from_requests(resource_resv **resresv_arr);\n\n/* convert an array of string resource names into resdefs */\nstd::unordered_set<resdef *> resstr_to_resdef(const std::unordered_set<std::string> &);\nstd::unordered_set<resdef *> resstr_to_resdef(const char *const *resstr);\n\n/* filter function for filter_array().  Used to filter out host and vnode */\nint no_hostvnode(void *v, void *arg);\n\n/* add resdef to resdef array */\nint add_resdef_to_array(resdef ***resdef_arr, resdef *def);\n\n/* make a copy of a resdef array -- array itself is new memory,\n *        pointers point to the same thing\n */\nresdef **copy_resdef_array(resdef **deflist);\n\n/* update the def member in sort_info structures in conf */\nvoid update_sorting_defs(void);\n\n/* wrapper for pbs_statrsc */\nstruct batch_status *send_statrsc(int virtual_fd, char *id, struct attrl *attrib, char *extend);\n\n#endif /* _RESOURCE_H */\n"
  },
  {
    "path": "src/scheduler/resource_resv.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    resource_resv.c\n *\n * @brief\n * \tresource_resv.c - This file contains functions related to resource reservations.\n *\n * Functions included are:\n * \tfree_resource_resv_array()\n * \tdup_resource_resv_array()\n * \tdup_resource_resv()\n * \tfind_resource_resv()\n * \tfind_resource_resv_by_indrank()\n * \tfind_resource_resv_by_time()\n * \tis_resource_resv_valid()\n * \tdup_resource_req_list()\n * \tdup_resource_req()\n * \tnew_resource_req()\n * \tcreate_resource_req()\n * \tfind_alloc_resource_req()\n * \tfind_alloc_resource_req_by_str()\n * \tfind_resource_req_by_str()\n * \tfind_resource_req()\n * \tset_resource_req()\n * \tfree_resource_req_list()\n * \tfree_resource_req()\n * \tupdate_resresv_on_run()\n * \tupdate_resresv_on_end()\n * \tresource_resv_filter()\n * \tremove_resresv_from_array()\n * \tadd_resresv_to_array()\n * \tcopy_resresv_array()\n * \tis_resresv_running()\n * \tnew_place()\n * \tfree_place()\n * \tdup_place()\n * \tnew_chunk()\n * \tdup_chunk_array()\n * \tdup_chunk()\n * \tfree_chunk_array()\n * \tfree_chunk()\n * \tcompare_res_to_str()\n * \tcompare_non_consumable()\n * \tcreate_select_from_nspec()\n * \tin_runnable_state()\n *\n */\n\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <errno.h>\n#include <string.h>\n#include <log.h>\n#include <pthread.h>\n#include 
<libutil.h>\n#include \"pbs_config.h\"\n#include \"data_types.h\"\n#include \"resource_resv.h\"\n#include \"job_info.h\"\n#include \"resv_info.h\"\n#include \"node_info.h\"\n#include \"misc.h\"\n#include \"node_partition.h\"\n#include \"constant.h\"\n#include \"globals.h\"\n#include \"resource.h\"\n#include \"pbs_internal.h\"\n#include \"check.h\"\n#include \"fifo.h\"\n#include \"range.h\"\n#include \"simulate.h\"\n#include \"multi_threading.h\"\n\n/**\n * @brief\n *\t\tresource_resv constructor\n */\nresource_resv::resource_resv(const std::string &rname) : name(rname)\n{\n\tnodepart_name = NULL;\n\tselect = NULL;\n\texecselect = NULL;\n\n\tplace_spec = NULL;\n\n\tis_invalid = 0;\n\tcan_not_fit = 0;\n\tcan_not_run = 0;\n\tcan_never_run = 0;\n\tis_peer_ob = 0;\n\n\tis_prov_needed = 0;\n\tis_job = 0;\n\tis_shrink_to_fit = 0;\n\tis_resv = 0;\n\n\twill_use_multinode = 0;\n\n\tsch_priority = 0;\n\trank = 0;\n\tqtime = 0;\n\tqrank = 0;\n\n\tec_index = UNSPECIFIED;\n\n\tstart = UNSPECIFIED;\n\tend = UNSPECIFIED;\n\tduration = UNSPECIFIED;\n\thard_duration = UNSPECIFIED;\n\tmin_duration = UNSPECIFIED;\n\tresreq = NULL;\n\tserver = NULL;\n\tninfo_arr = NULL;\n\n\tjob = NULL;\n\tresv = NULL;\n\n\taoename = NULL;\n\teoename = NULL;\n\n#ifdef NAS /* localmod 034 */\n\tshare_type = J_TYPE_ignore;\n#endif /* localmod 034 */\n\n\tnode_set_str = NULL;\n\tnode_set = NULL;\n\tresresv_ind = -1;\n\trun_event = NULL;\n\tend_event = NULL;\n}\n\n/**\n * @brief\tpthread routine to free resource_resv array chunk\n *\n * @param[in,out]\tdata - th_data_free_resresv wrapper for resource_resv array\n *\n * @return void\n */\nvoid\nfree_resource_resv_array_chunk(th_data_free_resresv *data)\n{\n\tresource_resv **resresv_arr;\n\tint start;\n\tint end;\n\tint i;\n\n\tresresv_arr = data->resresv_arr;\n\tstart = data->sidx;\n\tend = data->eidx;\n\n\tfor (i = start; i <= end && resresv_arr[i] != NULL; i++) {\n\t\tdelete resresv_arr[i];\n\t}\n}\n\n/**\n * @brief\tAllocates th_data_free_resresv for 
multi-threading of free_resource_resv_array\n *\n * @param[in,out]\tresresv_arr\t-\tthe resresv array to free\n * @param[in]\tsidx\t-\tstart index for the resresv array for the thread\n * @param[in]\teidx\t-\tend index for the resresv array for the thread\n *\n * @return th_data_free_resresv *\n * @retval a newly allocated th_data_free_resresv object\n * @retval NULL for malloc error\n */\nstatic inline th_data_free_resresv *\nalloc_tdata_free_rr_arr(resource_resv **resresv_arr, int sidx, int eidx)\n{\n\tth_data_free_resresv *tdata;\n\n\ttdata = static_cast<th_data_free_resresv *>(malloc(sizeof(th_data_free_resresv)));\n\tif (tdata == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\ttdata->resresv_arr = resresv_arr;\n\ttdata->sidx = sidx;\n\ttdata->eidx = eidx;\n\n\treturn tdata;\n}\n\n/**\n * @brief\n *\t\tfree_resource_resv_array - free an array of resource resvs\n *\n * @param[in,out]\tresresv_arr\t-\tresource resv to free\n *\n * @return\tnothing\n *\n */\nvoid\nfree_resource_resv_array(resource_resv **resresv_arr)\n{\n\tint i;\n\tint chunk_size;\n\tth_data_free_resresv *tdata = NULL;\n\tth_task_info *task = NULL;\n\tint num_tasks;\n\tint num_jobs;\n\tint tid;\n\n\tif (resresv_arr == NULL)\n\t\treturn;\n\n\tnum_jobs = count_array(resresv_arr);\n\n\ttid = *((int *) pthread_getspecific(th_id_key));\n\tif (tid != 0 || num_threads <= 1) {\n\t\t/* don't use multi-threading if I am a worker thread or num_threads is 1 */\n\t\ttdata = alloc_tdata_free_rr_arr(resresv_arr, 0, num_jobs - 1);\n\t\tif (tdata == NULL)\n\t\t\treturn;\n\n\t\tfree_resource_resv_array_chunk(tdata);\n\t\tfree(tdata);\n\t\tfree(resresv_arr);\n\t\treturn;\n\t}\n\n\tchunk_size = num_jobs / num_threads;\n\tchunk_size = (chunk_size > MT_CHUNK_SIZE_MIN) ? chunk_size : MT_CHUNK_SIZE_MIN;\n\tchunk_size = (chunk_size < MT_CHUNK_SIZE_MAX) ? 
chunk_size : MT_CHUNK_SIZE_MAX;\n\tfor (i = 0, num_tasks = 0; num_jobs > 0;\n\t     num_tasks++, i += chunk_size, num_jobs -= chunk_size) {\n\t\ttdata = alloc_tdata_free_rr_arr(resresv_arr, i, i + chunk_size - 1);\n\t\tif (tdata == NULL)\n\t\t\tbreak;\n\n\t\ttask = static_cast<th_task_info *>(malloc(sizeof(th_task_info)));\n\t\ttask->task_type = TS_FREE_RESRESV;\n\t\ttask->thread_data = (void *) tdata;\n\n\t\tqueue_work_for_threads(task);\n\t}\n\n\t/* Get results from worker threads */\n\tfor (i = 0; i < num_tasks;) {\n\t\tpthread_mutex_lock(&result_lock);\n\t\twhile (ds_queue_is_empty(result_queue))\n\t\t\tpthread_cond_wait(&result_cond, &result_lock);\n\t\twhile (!ds_queue_is_empty(result_queue)) {\n\t\t\ttask = static_cast<th_task_info *>(ds_dequeue(result_queue));\n\t\t\ttdata = static_cast<th_data_free_resresv *>(task->thread_data);\n\t\t\tfree(tdata);\n\t\t\tfree(task);\n\t\t\ti++;\n\t\t}\n\t\tpthread_mutex_unlock(&result_lock);\n\t}\n\n\tfree(resresv_arr);\n}\n\n/**\n * @brief\n *\t\tresource_resv destructor\n *\n */\nresource_resv::~resource_resv()\n{\n\tfree(nodepart_name);\n\tdelete select;\n\tdelete execselect;\n\tfree_place(place_spec);\n\tfree_resource_req_list(resreq);\n\tfree(ninfo_arr);\n\tfree_nspecs(nspec_arr);\n\tfree_job_info(job);\n\tfree_resv_info(resv);\n\tfree(aoename);\n\tfree(eoename);\n\tfree_string_array(node_set_str);\n\tfree(node_set);\n\t/* Avoid dangling pointers inside the calendar */\n\tif (run_event != NULL)\n\t\tdelete_event(server, run_event);\n\n\tif (end_event != NULL)\n\t\tdelete_event(server, end_event);\n}\n\n/**\n * @brief\tpthread routine for duping a chunk of resresvs\n *\n * @param[in,out]\tdata - th_data_dup_resresv object for duping\n *\n * @return void\n */\nvoid\ndup_resource_resv_array_chunk(th_data_dup_resresv *data)\n{\n\tresource_resv **nresresv_arr;\n\tresource_resv **oresresv_arr;\n\tserver_info *nsinfo;\n\tqueue_info *nqinfo;\n\tint start;\n\tint end;\n\tint i;\n\tschd_error *err;\n\n\terr = 
new_schd_error();\n\tif (err == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\tdata->error = 1;\n\t\treturn;\n\t}\n\n\tnresresv_arr = data->nresresv_arr;\n\toresresv_arr = data->oresresv_arr;\n\tnsinfo = data->nsinfo;\n\tnqinfo = data->nqinfo;\n\tstart = data->sidx;\n\tend = data->eidx;\n\tdata->error = 0;\n\tfor (i = start; i <= end && oresresv_arr[i] != NULL; i++) {\n\t\tif ((nresresv_arr[i] = dup_resource_resv(oresresv_arr[i], nsinfo, nqinfo)) == NULL) {\n\t\t\tdata->error = 1;\n\t\t\tfree_schd_error(err);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tfree_schd_error(err);\n}\n\n/**\n * @brief\tAllocates th_data_dup_resresv for multi-threading of dup_resource_resv_array\n *\n * @param[in]\toresresv_arr\t-\tthe array to duplicate\n * @param[out]\tnresresv_arr\t-\tthe duplicated array\n * @param[in]\tnsinfo\t-\tnew server ptr for new resresv array\n * @param[in]\tnqinfo\t-\tnew queue ptr for new resresv array\n * @param[in]\tsidx\t-\tstart index for the resresv list for the thread\n * @param[in]\teidx\t-\tend index for the resresv list for the thread\n *\n * @return th_data_dup_resresv *\n * @retval a newly allocated th_data_dup_resresv object\n * @retval NULL for malloc error\n */\nstatic inline th_data_dup_resresv *\nalloc_tdata_dup_nodes(resource_resv **oresresv_arr, resource_resv **nresresv_arr, server_info *nsinfo,\n\t\t      queue_info *nqinfo, int sidx, int eidx)\n{\n\tth_data_dup_resresv *tdata;\n\n\ttdata = static_cast<th_data_dup_resresv *>(malloc(sizeof(th_data_dup_resresv)));\n\tif (tdata == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\ttdata->oresresv_arr = oresresv_arr;\n\ttdata->nresresv_arr = nresresv_arr;\n\ttdata->nsinfo = nsinfo;\n\ttdata->nqinfo = nqinfo;\n\ttdata->sidx = sidx;\n\ttdata->eidx = eidx;\n\ttdata->error = 0;\n\n\treturn tdata;\n}\n\n/**\n * @brief\n *\t\tdup_resource_resv_array - dup a array of pointers of resource resvs\n *\n * @param[in]\toresresv_arr\t-\tarray of resource_resv do duplicate\n * 
@param[in]\tnsinfo\t-\tnew server ptr for new resresv array\n * @param[in]\tnqinfo\t-\tnew queue ptr for new resresv array\n *\n * @return\tnew resource_resv array\n * @retval\tNULL\t: on error\n *\n */\nresource_resv **\ndup_resource_resv_array(resource_resv **oresresv_arr,\n\t\t\tserver_info *nsinfo, queue_info *nqinfo)\n{\n\tresource_resv **nresresv_arr;\n\tth_data_dup_resresv *tdata = NULL;\n\tth_task_info *task = NULL;\n\tint num_resresv;\n\tint thread_job_ct_left;\n\tint th_err = 0;\n\tint tid;\n\n\tif (oresresv_arr == NULL || nsinfo == NULL)\n\t\treturn NULL;\n\n\tnum_resresv = thread_job_ct_left = count_array(oresresv_arr);\n\n\tif ((nresresv_arr = static_cast<resource_resv **>(malloc((num_resresv + 1) * sizeof(resource_resv *)))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\tnresresv_arr[0] = NULL;\n\n\ttid = *((int *) pthread_getspecific(th_id_key));\n\tif (tid != 0 || num_threads <= 1) {\n\t\t/* don't use multi-threading if I am a worker thread or num_threads is 1 */\n\t\ttdata = alloc_tdata_dup_nodes(oresresv_arr, nresresv_arr, nsinfo, nqinfo, 0, num_resresv - 1);\n\t\tif (tdata == NULL)\n\t\t\tth_err = 1;\n\t\telse {\n\t\t\tdup_resource_resv_array_chunk(tdata);\n\t\t\tth_err = tdata->error;\n\t\t\tfree(tdata);\n\t\t}\n\t} else { /* We are multithreading */\n\t\tint num_tasks = 0;\n\t\tint chunk_size = num_resresv / num_threads;\n\t\tchunk_size = (chunk_size > MT_CHUNK_SIZE_MIN) ? chunk_size : MT_CHUNK_SIZE_MIN;\n\t\tchunk_size = (chunk_size < MT_CHUNK_SIZE_MAX) ? 
chunk_size : MT_CHUNK_SIZE_MAX;\n\t\tfor (int j = 0; thread_job_ct_left > 0;\n\t\t     num_tasks++, j += chunk_size, thread_job_ct_left -= chunk_size) {\n\t\t\ttdata = alloc_tdata_dup_nodes(oresresv_arr, nresresv_arr, nsinfo, nqinfo, j, j + chunk_size - 1);\n\t\t\tif (tdata == NULL) {\n\t\t\t\tth_err = 1;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\ttask = static_cast<th_task_info *>(malloc(sizeof(th_task_info)));\n\t\t\ttask->task_type = TS_DUP_RESRESV;\n\t\t\ttask->thread_data = (void *) tdata;\n\n\t\t\tqueue_work_for_threads(task);\n\t\t}\n\n\t\t/* Get results from worker threads */\n\t\tfor (int i = 0; i < num_tasks;) {\n\t\t\tpthread_mutex_lock(&result_lock);\n\t\t\twhile (ds_queue_is_empty(result_queue))\n\t\t\t\tpthread_cond_wait(&result_cond, &result_lock);\n\t\t\twhile (!ds_queue_is_empty(result_queue)) {\n\t\t\t\ttask = static_cast<th_task_info *>(ds_dequeue(result_queue));\n\t\t\t\ttdata = static_cast<th_data_dup_resresv *>(task->thread_data);\n\t\t\t\tif (tdata->error)\n\t\t\t\t\tth_err = 1;\n\t\t\t\tfree(tdata);\n\t\t\t\tfree(task);\n\t\t\t\ti++;\n\t\t\t}\n\t\t\tpthread_mutex_unlock(&result_lock);\n\t\t}\n\t}\n\n\tif (th_err) {\n\t\tfree_resource_resv_array(nresresv_arr);\n\t\treturn NULL;\n\t}\n\tnresresv_arr[num_resresv] = NULL;\n\n\treturn nresresv_arr;\n}\n\n/**\n * @brief\n *\t\tdup_resource_resv - duplicate a resource resv structure\n *\n * @param[in]\toresresv\t-\tres resv to duplicate\n * @param[in]\tnsinfo\t-\tnew server info for resource_resv\n * @param[in]\tnqinfo\t-\tnew queue info for resource_resv if job (NULL if resv)\n * @param[in]\terr\t\t-\terror object\n *\n * @return\tnewly duplicated resource resv\n * @retval\tNULL\t: on error\n *\n */\nresource_resv *\ndup_resource_resv(resource_resv *oresresv, server_info *nsinfo, queue_info *nqinfo, const std::string &name)\n{\n\tresource_resv *nresresv;\n\tschd_error *err;\n\n\tif (oresresv == NULL || nsinfo == NULL)\n\t\treturn NULL;\n\n\terr = new_schd_error();\n\n\tif (err == NULL)\n\t\treturn 
NULL;\n\n\tif (!is_resource_resv_valid(oresresv, err)) {\n\t\tschdlogerr(PBSEVENT_DEBUG2, PBS_EVENTCLASS_SCHED, LOG_DEBUG, oresresv->name, \"Can't dup resresv\", err);\n\t\tfree_schd_error(err);\n\t\treturn NULL;\n\t}\n\n\tnresresv = new resource_resv(name.c_str());\n\n\tif (nresresv == NULL) {\n\t\tfree_schd_error(err);\n\t\treturn NULL;\n\t}\n\n\tnresresv->server = nsinfo;\n\n\tnresresv->user = oresresv->user;\n\tnresresv->group = oresresv->group;\n\tnresresv->project = oresresv->project;\n\n\tnresresv->nodepart_name = string_dup(oresresv->nodepart_name);\n\tif (oresresv->select != NULL)\n\t\tnresresv->select = new selspec(*oresresv->select); /* must come before calls to dup_nspecs() below */\n\tif (oresresv->execselect != NULL)\n\t\tnresresv->execselect = new selspec(*oresresv->execselect);\n\n\tnresresv->is_invalid = oresresv->is_invalid;\n\tnresresv->can_not_fit = oresresv->can_not_fit;\n\tnresresv->can_not_run = oresresv->can_not_run;\n\tnresresv->can_never_run = oresresv->can_never_run;\n\tnresresv->is_peer_ob = oresresv->is_peer_ob;\n\tnresresv->is_prov_needed = oresresv->is_prov_needed;\n\tnresresv->is_shrink_to_fit = oresresv->is_shrink_to_fit;\n\tnresresv->will_use_multinode = oresresv->will_use_multinode;\n\n\tnresresv->ec_index = oresresv->ec_index;\n\n\tnresresv->sch_priority = oresresv->sch_priority;\n\tnresresv->rank = oresresv->rank;\n\tnresresv->qtime = oresresv->qtime;\n\tnresresv->qrank = oresresv->qrank;\n\n\tnresresv->start = oresresv->start;\n\tnresresv->end = oresresv->end;\n\tnresresv->duration = oresresv->duration;\n\tnresresv->hard_duration = oresresv->hard_duration;\n\tnresresv->min_duration = oresresv->min_duration;\n\n\tnresresv->resreq = dup_resource_req_list(oresresv->resreq);\n\n\tnresresv->place_spec = dup_place(oresresv->place_spec);\n\n\tnresresv->aoename = string_dup(oresresv->aoename);\n\tnresresv->eoename = string_dup(oresresv->eoename);\n\n\tnresresv->node_set_str = 
dup_string_arr(oresresv->node_set_str);\n\n\tnresresv->resresv_ind = oresresv->resresv_ind;\n\tnresresv->node_set = copy_node_ptr_array(oresresv->node_set, nsinfo->nodes);\n\n\tif (oresresv->is_job) {\n\t\tnresresv->is_job = 1;\n\t\tnresresv->job = dup_job_info(oresresv->job, nqinfo, nsinfo);\n\t\tif (nresresv->job != NULL) {\n\t\t\tif (nresresv->job->resv != NULL) {\n\t\t\t\tnresresv->ninfo_arr = copy_node_ptr_array(oresresv->ninfo_arr,\n\t\t\t\t\t\t\t\t\t  nresresv->job->resv->resv->resv_nodes);\n\t\t\t\tnresresv->nspec_arr = dup_nspecs(oresresv->nspec_arr,\n\t\t\t\t\t\t\t\t nresresv->job->resv->ninfo_arr, NULL);\n\n\t\t\t} else {\n\t\t\t\tnresresv->ninfo_arr = copy_node_ptr_array(oresresv->ninfo_arr,\n\t\t\t\t\t\t\t\t\t  nsinfo->nodes);\n\t\t\t\tnresresv->nspec_arr = dup_nspecs(oresresv->nspec_arr,\n\t\t\t\t\t\t\t\t nsinfo->nodes, NULL);\n\t\t\t}\n\t\t}\n\t} else if (oresresv->is_resv) {\n\t\tselspec *sel;\n\t\tnresresv->is_resv = 1;\n\t\tnresresv->resv = dup_resv_info(oresresv->resv, nsinfo);\n\t\tif (nresresv->resv->select_orig != NULL)\n\t\t\tsel = nresresv->resv->select_orig;\n\t\telse\n\t\t\tsel = nresresv->select;\n\t\tnresresv->resv->orig_nspec_arr = dup_nspecs(oresresv->resv->orig_nspec_arr, nsinfo->nodes, sel);\n\t\tnresresv->ninfo_arr = copy_node_ptr_array(oresresv->ninfo_arr, nsinfo->nodes);\n\t\tnresresv->nspec_arr = dup_nspecs(oresresv->nspec_arr, nsinfo->nodes, NULL);\n\t} else { /* error */\n\t\tdelete nresresv;\n\t\tfree_schd_error(err);\n\t\treturn NULL;\n\t}\n#ifdef NAS /* localmod 034 */\n\tnresresv->share_type = oresresv->share_type;\n#endif /* localmod 034 */\n\n\tif (!is_resource_resv_valid(nresresv, err)) {\n\t\tschdlogerr(PBSEVENT_DEBUG2, PBS_EVENTCLASS_SCHED, LOG_DEBUG, oresresv->name, \"Failed to dup resresv\", err);\n\t\tdelete nresresv;\n\t\tfree_schd_error(err);\n\t\treturn NULL;\n\t}\n\tfree_schd_error(err);\n\treturn nresresv;\n}\nresource_resv *\ndup_resource_resv(resource_resv *oresresv, server_info *nsinfo, queue_info 
*nqinfo)\n{\n\treturn dup_resource_resv(oresresv, nsinfo, nqinfo, oresresv->name);\n}\n\nresource_resv *\nfind_resource_resv(resource_resv **resresv_arr, const std::string &name)\n{\n\tint i;\n\tif (resresv_arr == NULL || name.empty())\n\t\treturn NULL;\n\n\tfor (i = 0; resresv_arr[i] != NULL && resresv_arr[i]->name != name; i++)\n\t\t;\n\n\treturn resresv_arr[i];\n}\n\n/**\n * @brief\n * \t\tfind a resource_resv by index in all_resresv array or by unique numeric rank\n *\n * @param[in]\tresresv_arr\t-\tarray of resource_resvs to search\n * @param[in]\tindex\t    -\tindex of resource_resv to find\n * @param[in]\trank        -\trank of resource_resv to find\n *\n * @return\tresource_resv *\n * @retval resource_resv\t: if found\n * @retval NULL\t: if not found or on error\n *\n */\nresource_resv *\nfind_resource_resv_by_indrank(resource_resv **resresv_arr, int index, int rank)\n{\n\tint i;\n\tif (resresv_arr == NULL)\n\t\treturn NULL;\n\n\tif (index != -1 && resresv_arr[0] != NULL && resresv_arr[0]->server != NULL &&\n\t    resresv_arr[0]->server->all_resresv != NULL)\n\t\treturn resresv_arr[0]->server->all_resresv[index];\n\n\tfor (i = 0; resresv_arr[i] != NULL && resresv_arr[i]->rank != rank; i++)\n\t\t;\n\n\treturn resresv_arr[i];\n}\n\n/**\n * @brief\n * \t\tfind a resource_resv by name and start time\n *\n * @param[in]\tresresv_arr -\tarray of resource_resvs to search\n * @param[in]\tname        -\tname of resource_resv to find\n * @param[in]\tstart_time  -\tthe start time of the resource_resv\n *\n * @return\tresource_resv *\n * @retval\tresource_resv\t: if found\n * @retval\tNULL\t: if not found or on error\n *\n */\nresource_resv *\nfind_resource_resv_by_time(resource_resv **resresv_arr, const std::string &name, time_t start_time)\n{\n\tint i;\n\tif (resresv_arr == NULL)\n\t\treturn NULL;\n\n\tfor (i = 0; resresv_arr[i] != NULL; i++) {\n\t\tif ((resresv_arr[i]->name == name) && (resresv_arr[i]->start == start_time))\n\t\t\tbreak;\n\t}\n\n\treturn 
resresv_arr[i];\n}\n\n/**\n * @brief\n *\t\tis_resource_resv_valid - do simple validity checks for a resource resv\n *\n *\t@param[in] resresv - the resource_resv to check\n *\t@param[out] err - error struct to return why resource_resv is invalid\n *\n *\t@returns int\n *\t@retval 1 if valid\n *\t@retval 0 if invalid (err returns reason why)\n */\nint\nis_resource_resv_valid(resource_resv *resresv, schd_error *err)\n{\n\tif (resresv == NULL)\n\t\treturn 0;\n\n\tif (resresv->server == NULL) {\n\t\tset_schd_error_codes(err, NEVER_RUN, ERR_SPECIAL);\n\t\tset_schd_error_arg(err, SPECMSG, \"No server pointer\");\n\t\treturn 0;\n\t}\n\n\tif (resresv->is_job && resresv->job == NULL) {\n\t\tset_schd_error_codes(err, NEVER_RUN, ERR_SPECIAL);\n\t\tset_schd_error_arg(err, SPECMSG, \"Job has no job sub-structure\");\n\t\treturn 0;\n\t}\n\n\tif (resresv->is_resv && resresv->resv == NULL) {\n\t\tset_schd_error_codes(err, NEVER_RUN, ERR_SPECIAL);\n\t\tset_schd_error_arg(err, SPECMSG, \"Reservation has no resv sub-structure\");\n\t\treturn 0;\n\t}\n\n\tif (resresv->name.empty()) {\n\t\tset_schd_error_codes(err, NEVER_RUN, ERR_SPECIAL);\n\t\tset_schd_error_arg(err, SPECMSG, \"No Name\");\n\t\treturn 0;\n\t}\n\n\tif (resresv->user.empty()) {\n\t\tset_schd_error_codes(err, NEVER_RUN, ERR_SPECIAL);\n\t\tset_schd_error_arg(err, SPECMSG, \"No User\");\n\t\treturn 0;\n\t}\n\n\tif (resresv->group.empty()) {\n\t\tset_schd_error_codes(err, NEVER_RUN, ERR_SPECIAL);\n\t\tset_schd_error_arg(err, SPECMSG, \"No Group\");\n\t\treturn 0;\n\t}\n\n\tif (resresv->select == NULL) {\n\t\tset_schd_error_codes(err, NEVER_RUN, ERR_SPECIAL);\n\t\tset_schd_error_arg(err, SPECMSG, \"No Select\");\n\t\treturn 0;\n\t}\n\n\tif (resresv->place_spec == NULL) {\n\t\tset_schd_error_codes(err, NEVER_RUN, ERR_SPECIAL);\n\t\tset_schd_error_arg(err, SPECMSG, \"No Place\");\n\t\treturn 0;\n\t}\n\n\tif (!resresv->is_job && !resresv->is_resv) {\n\t\tset_schd_error_codes(err, NEVER_RUN, 
ERR_SPECIAL);\n\t\tset_schd_error_arg(err, SPECMSG, \"Is neither job nor resv\");\n\t\treturn 0;\n\t}\n\n\tif (is_resresv_running(resresv)) {\n\t\tif (resresv->nspec_arr.empty()) {\n\t\t\tset_schd_error_codes(err, NEVER_RUN, ERR_SPECIAL);\n\t\t\tset_schd_error_arg(err, SPECMSG, \"Is running w/o exec_vnode1\");\n\t\t\treturn 0;\n\t\t}\n\n\t\tif (resresv->ninfo_arr == NULL) {\n\t\t\tset_schd_error_codes(err, NEVER_RUN, ERR_SPECIAL);\n\t\t\tset_schd_error_arg(err, SPECMSG, \"Is running w/o exec_vnode2\");\n\t\t\treturn 0;\n\t\t}\n\t}\n\n\tif (resresv->ninfo_arr != NULL && resresv->nspec_arr.empty()) {\n\t\tset_schd_error_codes(err, NEVER_RUN, ERR_SPECIAL);\n\t\tset_schd_error_arg(err, SPECMSG, \"exec_vnode mismatch 1\");\n\t\treturn 0;\n\t}\n\n\tif (!resresv->nspec_arr.empty() && resresv->ninfo_arr == NULL) {\n\t\tset_schd_error_codes(err, NEVER_RUN, ERR_SPECIAL);\n\t\tset_schd_error_arg(err, SPECMSG, \"exec_vnode mismatch 2\");\n\t\treturn 0;\n\t}\n\n\treturn 1;\n}\n\n/**\n * @brief\n *\t\tdup_resource_req_list - duplicate a resource_req list\n *\n * @param[in]\toreq\t-\tresource_req list to duplicate\n *\n * @return\tduplicated resource_req list\n *\n */\nresource_req *\ndup_resource_req_list(resource_req *oreq)\n{\n\tresource_req *req;\n\tresource_req *nreq;\n\tresource_req *head;\n\tresource_req *prev;\n\n\thead = NULL;\n\tprev = NULL;\n\treq = oreq;\n\n\twhile (req != NULL) {\n\t\tif ((nreq = dup_resource_req(req)) != NULL) {\n\t\t\tif (head == NULL)\n\t\t\t\thead = nreq;\n\t\t\telse\n\t\t\t\tprev->next = nreq;\n\t\t\tprev = nreq;\n\t\t} else {\n\t\t\tfree_resource_req_list(head);\n\t\t\treturn NULL;\n\t\t}\n\t\treq = req->next;\n\t}\n\n\treturn head;\n}\n\n/**\n * @brief\n *\t\tdup_resource_count_list - duplicate a resource_count list\n *\n * @param[in]\torcount\t-\tresource_count list to duplicate\n *\n * @return\tduplicated resource_count list\n *\n */\nresource_count *\ndup_resource_count_list(resource_count *orcount)\n{\n\tresource_count 
*rcount;\n\tresource_count *nrcount;\n\tresource_count *head;\n\tresource_count *prev;\n\n\thead = NULL;\n\tprev = NULL;\n\trcount = orcount;\n\n\twhile (rcount != NULL) {\n\t\tif ((nrcount = dup_resource_count(rcount)) != NULL) {\n\t\t\tif (head == NULL)\n\t\t\t\thead = nrcount;\n\t\t\telse\n\t\t\t\tprev->next = nrcount;\n\t\t\tprev = nrcount;\n\t\t} else {\n\t\t\tfree_resource_count_list(head);\n\t\t\treturn NULL;\n\t\t}\n\t\trcount = rcount->next;\n\t}\n\n\treturn head;\n}\n\n/**\n * @brief\n *\t\tdup_selective_resource_req_list - selectively duplicate a resource_req list\n *\n * @param[in]\toreq\t-\tresource_req list to duplicate\n * @param[in]\tdeflist\t-\tonly duplicate resources in this set\n *\n * @return\tduplicated resource_req list\n *\n */\nresource_req *\ndup_selective_resource_req_list(resource_req *oreq, std::unordered_set<resdef *> &deflist)\n{\n\tresource_req *req;\n\tresource_req *nreq;\n\tresource_req *head;\n\tresource_req *prev;\n\n\thead = NULL;\n\tprev = NULL;\n\n\tfor (req = oreq; req != NULL; req = req->next) {\n\t\tif (deflist.find(req->def) != deflist.end()) {\n\t\t\tif ((nreq = dup_resource_req(req)) != NULL) {\n\t\t\t\tif (head == NULL)\n\t\t\t\t\thead = nreq;\n\t\t\t\telse\n\t\t\t\t\tprev->next = nreq;\n\t\t\t\tprev = nreq;\n\t\t\t}\n\t\t}\n\t}\n\n\treturn head;\n}\n\n/**\n * @brief\n *\t\tdup_resource_req - duplicate a resource_req struct\n *\n * @param[in]\toreq\t-\tthe resource_req to duplicate\n *\n * @return\tduplicated resource req\n *\n */\nresource_req *\ndup_resource_req(resource_req *oreq)\n{\n\tresource_req *nreq;\n\tif (oreq == NULL)\n\t\treturn NULL;\n\n\tif ((nreq = new_resource_req()) == NULL)\n\t\treturn NULL;\n\n\tnreq->def = oreq->def;\n\tif (nreq->def)\n\t\tnreq->name = nreq->def->name.c_str();\n\n\tmemcpy(&(nreq->type), &(oreq->type), sizeof(struct resource_type));\n\tnreq->res_str = string_dup(oreq->res_str);\n\tnreq->amount = oreq->amount;\n\n\treturn nreq;\n}\n\n/**\n * @brief\n *\t\tdup_resource_count - 
duplicate a resource_count struct\n *\n * @param[in]\torcount\t-\tthe resource_count to duplicate\n *\n * @return\tduplicated resource count\n *\n */\nresource_count *\ndup_resource_count(resource_count *orcount)\n{\n\tresource_count *nrcount;\n\tif (orcount == NULL)\n\t\treturn NULL;\n\n\tif ((nrcount = new_resource_count()) == NULL)\n\t\treturn NULL;\n\n\tnrcount->def = orcount->def;\n\tif (nrcount->def)\n\t\tnrcount->name = nrcount->def->name.c_str();\n\n\tnrcount->amount = orcount->amount;\n\tnrcount->soft_limit_preempt_bit = orcount->soft_limit_preempt_bit;\n\n\treturn nrcount;\n}\n\n/**\n * @brief\n *\t\tnew_resource_req - allocate and initialize new resource_req\n *\n * @return\tthe new resource_req\n *\n */\n\nresource_req *\nnew_resource_req()\n{\n\tresource_req *resreq;\n\n\tif ((resreq = static_cast<resource_req *>(calloc(1, sizeof(resource_req)))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\t/* member type zero'd by calloc() */\n\n\tresreq->name = NULL;\n\tresreq->res_str = NULL;\n\tresreq->amount = 0;\n\tresreq->def = NULL;\n\tresreq->next = NULL;\n\n\treturn resreq;\n}\n\n/**\n * @brief\n *\t\tnew_resource_count - allocate and initialize new resource_count\n *\n * @return\tthe new resource_count\n *\n */\nresource_count *\nnew_resource_count()\n{\n\tresource_count *rcount;\n\n\tif ((rcount = static_cast<resource_count *>(malloc(sizeof(resource_count)))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\trcount->name = NULL;\n\trcount->amount = 0;\n\trcount->soft_limit_preempt_bit = 0;\n\trcount->def = NULL;\n\trcount->next = NULL;\n\n\treturn rcount;\n}\n\n/**\n * @brief\n * \t\tCreate new resource_req with given data\n *\n * @param[in]\tname\t-\tname of resource\n * @param[in]\tvalue\t-\tvalue of resource\n *\n * @return\tnewly created resource_req\n * @retval\tNULL\t: Fail\n */\nresource_req *\ncreate_resource_req(const char *name, const char *value)\n{\n\tresource_req *resreq = 
NULL;\n\tresdef *rdef;\n\n\tif (name == NULL)\n\t\treturn NULL;\n\n\trdef = find_resdef(name);\n\n\tif (rdef != NULL) {\n\t\tif ((resreq = new_resource_req()) != NULL) {\n\t\t\tresreq->def = rdef;\n\t\t\tresreq->name = rdef->name.c_str();\n\t\t\tresreq->type = rdef->type;\n\n\t\t\tif (value != NULL) {\n\t\t\t\tif (set_resource_req(resreq, value) != 1) {\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_SCHED, LOG_DEBUG, name,\n\t\t\t\t\t\t  \"Bad requested resource data\");\n\t\t\t\t\tfree_resource_req(resreq);\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t} else {\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_SCHED, LOG_DEBUG, name, \"Resource definition does not exist, resource may be invalid\");\n\t\treturn NULL;\n\t}\n\n\treturn resreq;\n}\n\n/**\n * @brief\n *\t\tfind resource_req by resource definition or allocate and\n *\t\tinitialize a new resource_req; also adds the new one to the list\n *\n * @param[in]\treqlist\t-\tlist to search\n * @param[in]\tdef\t-\tresource_req to find\n *\n * @return\tresource_req *\n * @retval\tfound or newly allocated resource_req\n *\n */\nresource_req *\nfind_alloc_resource_req(resource_req *reqlist, resdef *def)\n{\n\tresource_req *req;\t   /* used to find or create resource_req */\n\tresource_req *prev = NULL; /* previous resource_req in list */\n\n\tif (def == NULL)\n\t\treturn NULL;\n\n\tfor (req = reqlist; req != NULL && req->def != def; req = req->next) {\n\t\tprev = req;\n\t}\n\n\tif (req == NULL) {\n\t\tif ((req = new_resource_req()) == NULL)\n\t\t\treturn NULL;\n\n\t\treq->def = def;\n\t\treq->type = req->def->type;\n\t\treq->name = def->name.c_str();\n\t\tif (prev != NULL)\n\t\t\tprev->next = req;\n\t}\n\n\treturn req;\n}\n\n/**\n * @brief\n *\t\tfind resource_count by resource definition or allocate and\n *\t\tinitialize a new resource_count; also adds the new one to the list\n *\n * @param[in]\trcountlist\t-\tlist to search\n * @param[in]\tdef\t\t-\tresource_count to find\n *\n * @return\tresource_count *\n * @retval\tfound or newly allocated 
resource_count\n *\n */\nresource_count *\nfind_alloc_resource_count(resource_count *rcountlist, resdef *def)\n{\n\tresource_count *rcount;\t     /* used to find or create resource_count */\n\tresource_count *prev = NULL; /* previous resource_count in list */\n\n\tif (def == NULL)\n\t\treturn NULL;\n\n\tfor (rcount = rcountlist; rcount != NULL && rcount->def != def; rcount = rcount->next) {\n\t\tprev = rcount;\n\t}\n\n\tif (rcount == NULL) {\n\t\tif ((rcount = new_resource_count()) == NULL)\n\t\t\treturn NULL;\n\n\t\trcount->def = def;\n\t\trcount->name = def->name.c_str();\n\t\tif (prev != NULL)\n\t\t\tprev->next = rcount;\n\t}\n\n\treturn rcount;\n}\n\n/**\n * @brief\n * \t\tfind resource_req by name or allocate and initialize a new\n *\t\tresource_req also adds new one to the list\n *\n * @param[in]\treqlist\t-\tlist to search\n * @param[in]\tname\t-\tresource_req to find\n *\n * @return\tresource_req *\n * @retval\tfound or newly allocated resource_req\n *\n */\nresource_req *\nfind_alloc_resource_req_by_str(resource_req *reqlist, char *name)\n{\n\tresource_req *req;\t   /* used to find or create resource_req */\n\tresource_req *prev = NULL; /* previous resource_req in list */\n\n\tif (name == NULL)\n\t\treturn NULL;\n\n\tfor (req = reqlist; req != NULL && strcmp(req->name, name); req = req->next) {\n\t\tprev = req;\n\t}\n\n\tif (req == NULL) {\n\t\tif ((req = create_resource_req(name, NULL)) == NULL)\n\t\t\treturn NULL;\n\n\t\tif (prev != NULL)\n\t\t\tprev->next = req;\n\t}\n\n\treturn req;\n}\n\n/**\n * @brief\n * \t\tfind a resource_req from a resource_req list by string name\n *\n * @param[in]\treqlist\t-\tthe resource_req list\n * @param[in]\tname\t-\tresource name to look for\n *\n * @return\tresource_req *\n * @retval\tfound resource request\n * @retval NULL\t: if not found\n *\n */\nresource_req *\nfind_resource_req_by_str(resource_req *reqlist, const char *name)\n{\n\tresource_req *resreq;\n\n\tresreq = reqlist;\n\n\twhile (resreq != NULL && 
strcmp(resreq->name, name))\n\t\tresreq = resreq->next;\n\n\treturn resreq;\n}\n\n/**\n * @brief\n * \t\tfind resource_req by resource definition\n *\n * @param\treqlist\t-\treq list to search\n * @param\tdef\t-\tresource definition to search for\n *\n * @return\tfound resource_req\n * @retval\tNULL\t: if not found\n */\nresource_req *\nfind_resource_req(resource_req *reqlist, resdef *def)\n{\n\tresource_req *resreq;\n\n\tresreq = reqlist;\n\n\twhile (resreq != NULL && resreq->def != def)\n\t\tresreq = resreq->next;\n\n\treturn resreq;\n}\n\n/**\n * @brief\n * \t\tfind resource_count by resource definition\n *\n * @param\trcountlist\t-\tresource_count list to search\n * @param\tdef\t\t-\tresource definition to search for\n *\n * @return\tresource_count *\n * @return\tfound resource count\n * @retval\tNULL\t: if not found\n */\nresource_count *\nfind_resource_count(resource_count *rcountlist, resdef *def)\n{\n\tresource_count *rcount;\n\n\trcount = rcountlist;\n\n\twhile (rcount != NULL && rcount->def != def)\n\t\trcount = rcount->next;\n\n\treturn rcount;\n}\n\n/**\n * @brief\n *\t\tset_resource_req - set the value and type of a resource req\n *\n * @param[out]\treq\t\tthe resource_req to set\n * @param[in]\tval -\t\tthe string value (can be NULL)\n *\n * @return\tint\n * @retval\t1 for Success\n * @retval\t0 for Error\n */\nint\nset_resource_req(resource_req *req, const char *val)\n{\n\tresdef *rdef;\n\n\tif (req == NULL)\n\t\treturn 0;\n\n\t/* if val is a string, req -> amount will be set to SCHD_INFINITY_RES */\n\treq->amount = res_to_num(val, &(req->type));\n\treq->res_str = string_dup(val);\n\n\tif (req->def != NULL)\n\t\trdef = req->def;\n\telse {\n\t\trdef = find_resdef(req->name);\n\t\treq->def = rdef;\n\t}\n\tif (rdef != NULL)\n\t\treq->type = rdef->type;\n\n\tif (req->amount == SCHD_INFINITY_RES) {\n\t\t/* Verify that this is actually a non-numeric resource */\n\t\tif (!req->type.is_string)\n\t\t\treturn 0;\n\t}\n\n\treturn 1;\n}\n\n/**\n * @brief\n 
*\t\tfree_resource_req_list - frees memory used by a resource_req list\n *\n * @param[in]\tlist\t-\tresource_req list\n *\n * @return\tnothing\n */\nvoid\nfree_resource_req_list(resource_req *list)\n{\n\tresource_req *resreq;\n\n\tresreq = list;\n\twhile (resreq != NULL) {\n\t\tauto tmp = resreq;\n\t\tresreq = resreq->next;\n\t\tfree_resource_req(tmp);\n\t}\n}\n\n/**\n * @brief\n *\t\tfree_resource_count_list - frees memory used by a resource_count list\n *\n * @param[in]\tlist\t-\tresource_count list\n *\n * @return\tnothing\n */\nvoid\nfree_resource_count_list(resource_count *list)\n{\n\tresource_count *rcount;\n\n\trcount = list;\n\twhile (rcount != NULL) {\n\t\tauto tmp = rcount;\n\t\trcount = rcount->next;\n\t\tfree_resource_count(tmp);\n\t}\n}\n\n/**\n * @brief\n *\t\tfree_resource_req - free memory used by a resource_req structure\n *\n * @param[in,out]\treq\t-\tresource_req to free\n *\n * @return\tnothing\n *\n */\nvoid\nfree_resource_req(resource_req *req)\n{\n\tif (req == NULL)\n\t\treturn;\n\n\tif (req->res_str != NULL)\n\t\tfree(req->res_str);\n\n\tfree(req);\n}\n\n/**\n * @brief\n *\t\tfree_resource_count - free memory used by a resource_count structure\n *\n * @param[in,out]\trcount\t-\tresource_count to free\n *\n * @return\tnothing\n *\n */\nvoid\nfree_resource_count(resource_count *rcount)\n{\n\tfree(rcount);\n}\n\n/**\n * @brief compare two resource_req structures\n * @return equal or not\n * @retval 1 two structures are equal\n * @retval 0 two structures are not equal\n */\nint\ncompare_resource_req(resource_req *req1, resource_req *req2)\n{\n\n\tif (req1 == NULL && req2 == NULL)\n\t\treturn 1;\n\telse if (req1 == NULL || req2 == NULL)\n\t\treturn 0;\n\n\tif (req1->type.is_consumable || req1->type.is_boolean)\n\t\treturn (req1->amount == req2->amount);\n\n\tif (req1->type.is_string)\n\t\tif (strcmp(req1->res_str, req2->res_str) == 0)\n\t\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief compare two resource_req lists possibly excluding certain 
resources\n * @param[in] req1 - list1\n * @param[in] req2 - list2\n * @param[in] comparr - set of resources to compare; resources not in the set are ignored\n * @return int\n * @retval 1 two lists are equal\n * @retval 0 two lists are not equal\n */\nint\ncompare_resource_req_list(resource_req *req1, resource_req *req2, std::unordered_set<resdef *> &comparr)\n{\n\tresource_req *cur_req1;\n\tresource_req *cur_req2;\n\tresource_req *cur;\n\tint ret1 = 1;\n\tint ret2 = 1;\n\n\tif (req1 == NULL && req2 == NULL)\n\t\treturn 1;\n\n\tif (req1 == NULL || req2 == NULL)\n\t\treturn 0;\n\n\tfor (cur_req1 = req1; ret1 && cur_req1 != NULL; cur_req1 = cur_req1->next) {\n\t\tif (comparr.find(cur_req1->def) != comparr.end()) {\n\t\t\tcur = find_resource_req(req2, cur_req1->def);\n\t\t\tif (cur == NULL)\n\t\t\t\tret1 = 0;\n\t\t\telse\n\t\t\t\tret1 = compare_resource_req(cur_req1, cur);\n\t\t}\n\t}\n\n\tfor (cur_req2 = req2; ret2 && cur_req2 != NULL; cur_req2 = cur_req2->next) {\n\t\tif (comparr.find(cur_req2->def) != comparr.end()) {\n\t\t\tcur = find_resource_req(req1, cur_req2->def);\n\t\t\tif (cur == NULL)\n\t\t\t\tret2 = 0;\n\t\t\telse\n\t\t\t\tret2 = compare_resource_req(cur_req2, cur);\n\t\t}\n\t}\n\n\t/* Either we found a non-match or one list is larger than the other */\n\tif (!ret1 || !ret2)\n\t\treturn 0;\n\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tupdate information kept in a resource_resv when one is started\n *\n * @param[in]\tresresv\t-\tthe resource resv to update\n * @param[in]\tnspec_arr\t-\tthe nodes the job ran in\n *\n * @return\tvoid\n */\nvoid\nupdate_resresv_on_run(resource_resv *resresv, std::vector<nspec *> &nspec_arr)\n{\n\tqueue_info *resv_queue;\n\tint ret;\n\n\tif (resresv == NULL)\n\t\treturn;\n\n\tif (resresv->is_job) {\n\t\tif (resresv->job->is_suspended) {\n\t\t\tfor (auto ns : nspec_arr)\n\t\t\t\tns->ninfo->num_susp_jobs--;\n\t\t\tif (!resresv->job->resreleased.empty())\n\t\t\t\tfree_nspecs(resresv->job->resreleased);\n\n\t\t\tif (resresv->job->resreq_rel != 
NULL) {\n\t\t\t\tfree_resource_req_list(resresv->job->resreq_rel);\n\t\t\t\tresresv->job->resreq_rel = NULL;\n\t\t\t}\n\t\t} else {\n\t\t\tif (resresv->job->is_subjob && resresv->job->parent_job)\n\t\t\t\tresresv->job->parent_job->job->running_subjobs++;\n\t\t}\n\n\t\tset_job_state(\"R\", resresv->job);\n\t\tresresv->job->is_susp_sched = 0;\n\t\tresresv->job->stime = resresv->server->server_time;\n\t\tresresv->start = resresv->server->server_time;\n\t\tresresv->end = resresv->start + calc_time_left(resresv, 0);\n\t\tresresv->job->accrue_type = JOB_RUNNING;\n\n\t\tif (resresv->aoename != NULL) {\n\t\t\tfor (const auto ns : nspec_arr) {\n\t\t\t\tif (ns->go_provision) {\n\t\t\t\t\tresresv->job->is_provisioning = 1;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif (resresv->execselect == NULL) {\n\t\t\tstd::string selectspec;\n\t\t\tselectspec = create_select_from_nspec(nspec_arr);\n\t\t\tresresv->execselect = parse_selspec(selectspec);\n\t\t}\n\t\tif (resresv->job->dependent_jobs != NULL) {\n\t\t\tfor (int i = 0; resresv->job->dependent_jobs[i] != NULL; i++) {\n\t\t\t\t/* Mark all runone jobs as \"can not run\" */\n\t\t\t\tresresv->job->dependent_jobs[i]->can_not_run = 1;\n\t\t\t}\n\t\t}\n\t} else if (resresv->is_resv && resresv->resv != NULL) {\n\t\tresresv->resv->resv_state = RESV_RUNNING;\n\t\tresresv->resv->is_running = 1;\n\n\t\tresv_queue = find_queue_info(resresv->server->queues,\n\t\t\t\t\t     resresv->resv->queuename);\n\t\tif (resv_queue != NULL) {\n\t\t\t/* reservation queues are stopped before the reservation is started */\n\t\t\tresv_queue->is_started = 1;\n\t\t\t/* because the reservation queue was previously stopped, we need to\n\t\t\t * reevaluate resv_queue -> is_ok_to_run\n\t\t\t */\n\t\t\tret = is_ok_to_run_queue(resresv->server->policy, resv_queue);\n\t\t\tif (ret == SUCCESS)\n\t\t\t\tresv_queue->is_ok_to_run = 1;\n\t\t\telse\n\t\t\t\tresv_queue->is_ok_to_run = 0;\n\t\t}\n\t}\n\tif (resresv->ninfo_arr == NULL)\n\t\tresresv->ninfo_arr = 
create_node_array_from_nspec(nspec_arr);\n}\n\n/**\n * @brief\n *\t\tupdate_resresv_on_end - update a resource_resv structure when\n *\t\t\t\t      it ends\n *\n * @param[out]\tresresv\t-\tthe resresv to update\n * @param[in]\tjob_state\t-\tthe new state if resresv is a job\n *\n * @return\tnothing\n *\n */\nvoid\nupdate_resresv_on_end(resource_resv *resresv, const char *job_state)\n{\n\tqueue_info *resv_queue;\n\tresource_resv *next_occr = NULL;\n\ttime_t next_occr_time;\n\tint ret;\n\tint i;\n\n\tif (resresv == NULL)\n\t\treturn;\n\n\t/* now that it isn't running, it might be runnable again */\n\tresresv->can_not_run = 0;\n\n\t/* unless of course it's a job and its queue is in an ineligible state */\n\tif (resresv->is_job && resresv->job != NULL &&\n\t    !resresv->job->queue->is_ok_to_run)\n\t\tresresv->can_not_run = 1;\n\n\t/* no longer running... clear start and end times */\n\tresresv->start = UNSPECIFIED;\n\tresresv->end = UNSPECIFIED;\n\n\tif (resresv->is_job && resresv->job != NULL) {\n\t\tset_job_state(job_state, resresv->job);\n\t\tif (resresv->job->is_suspended) {\n\t\t\tresresv->job->is_susp_sched = 1;\n\t\t\tfor (auto ns : resresv->nspec_arr)\n\t\t\t\tns->ninfo->num_susp_jobs++;\n\t\t} else {\n\t\t\tif (resresv->job->is_subjob && resresv->job->parent_job &&\n\t\t\t    resresv->job->parent_job->job->max_run_subjobs != UNSPECIFIED)\n\t\t\t\tresresv->job->parent_job->job->running_subjobs--;\n\t\t}\n\n\t\tresresv->job->is_provisioning = 0;\n\n\t\t/* free resources allocated to job since it's now been requeued */\n\t\tif (resresv->job->is_queued && !resresv->job->is_checkpointed) {\n\t\t\tfree(resresv->ninfo_arr);\n\t\t\tresresv->ninfo_arr = NULL;\n\t\t\tfree_nspecs(resresv->nspec_arr);\n\t\t\tfree_resource_req_list(resresv->job->resused);\n\t\t\tresresv->job->resused = NULL;\n\t\t\tif (resresv->nodepart_name != NULL) {\n\t\t\t\tfree(resresv->nodepart_name);\n\t\t\t\tresresv->nodepart_name = NULL;\n\t\t\t}\n\t\t\tdelete 
resresv->execselect;\n\t\t\tresresv->execselect = NULL;\n\t\t}\n\t\t/* We need to correct our calendar */\n\t\tif (resresv->end_event != NULL)\n\t\t\tset_timed_event_disabled(resresv->end_event, 1);\n\t} else if (resresv->is_resv && resresv->resv != NULL) {\n\t\tresresv->resv->resv_state = RESV_DELETED;\n\t\tresresv->resv->is_running = 0;\n\n\t\tresv_queue = find_queue_info(resresv->server->queues,\n\t\t\t\t\t     resresv->resv->queuename);\n\t\tif (resv_queue != NULL) {\n\t\t\tresv_queue->is_started = 0;\n\t\t\tret = is_ok_to_run_queue(resresv->server->policy, resv_queue);\n\t\t\tif (ret == SUCCESS)\n\t\t\t\tresv_queue->is_ok_to_run = 1;\n\t\t\telse\n\t\t\t\tresv_queue->is_ok_to_run = 0;\n\n\t\t\tif (resresv->resv->is_standing) {\n\t\t\t\t/* This occurrence is over, move resv pointers of all jobs that are\n\t\t\t\t * left to next occurrence if one exists\n\t\t\t\t */\n\t\t\t\tif (resresv->resv->resv_idx < resresv->resv->count) {\n\t\t\t\t\tnext_occr_time = get_occurrence(resresv->resv->rrule,\n\t\t\t\t\t\t\t\t\tresresv->resv->req_start, resresv->resv->timezone, 2);\n\t\t\t\t\tif (next_occr_time >= 0) {\n\t\t\t\t\t\tnext_occr = find_resource_resv_by_time(resresv->server->resvs,\n\t\t\t\t\t\t\t\t\t\t       resresv->name, next_occr_time);\n\t\t\t\t\t\tif (next_occr != NULL) {\n\t\t\t\t\t\t\tif (resv_queue->jobs != NULL) {\n\t\t\t\t\t\t\t\tfor (i = 0; resv_queue->jobs[i] != NULL; i++) {\n\t\t\t\t\t\t\t\t\tif (in_runnable_state(resv_queue->jobs[i]))\n\t\t\t\t\t\t\t\t\t\tresv_queue->jobs[i]->job->resv = next_occr;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t} else\n\t\t\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER, LOG_DEBUG, resresv->name,\n\t\t\t\t\t\t\t\t   \"Can't find occurrence of standing reservation at time %ld\", next_occr_time);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\t\tresource_resv_filter - filters jobs on specified argument\n *\n * @param[in]\tresresv_arr\t-\tarray of jobs to filter through\n * 
@param[in]\tsize\t-\tnumber of jobs in the array\n * @param[in]\tfilter_func\t-\tpointer to a function that will filter\n *\t\t\t\t\t\t\t\t- returns 1: job will be added to new array\n *\t\t\t\t\t\t\t\t- returns 0: job will not be added to new array\n * @param[in]\targ\t-\tan extra arg to pass to filter_func\n * @param[in]\tflags\t-\tflags to describe the filtering\n *\n * @return\tpointer to filtered list\n *\n * @par\tNOTE:\tthis function allocates a new array\n * @note\n *\t\tfilter_func prototype: int func(resource_resv *, const void *)\n *\n */\nresource_resv **\nresource_resv_filter(resource_resv **resresv_arr, int size,\n\t\t     int (*filter_func)(resource_resv *, const void *), const void *arg, int flags)\n{\n\tresource_resv **new_resresvs = NULL; /* new array of jobs */\n\tresource_resv **tmp;\n\tint i, j = 0;\n\n\tif (filter_func == NULL) {\n\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  __func__, \"NULL filter function passed in.\");\n\t\treturn NULL;\n\t}\n\tif (resresv_arr == NULL && size != 0) {\n\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  __func__, \"NULL input array with non-zero size.\");\n\t\treturn NULL;\n\t}\n\n\t/* NOTE: if resresv_arr is NULL, a one element array will be returned\n\t * the one element being the NULL terminator\n\t */\n\n\tif ((new_resresvs = static_cast<resource_resv **>(malloc((size + 1) * sizeof(resource_resv *)))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\tfor (i = 0; i < size; i++) {\n\t\tif (filter_func(resresv_arr[i], arg)) {\n\t\t\tnew_resresvs[j] = resresv_arr[i];\n\t\t\tj++;\n\t\t}\n\t}\n\t/* FILTER_FULL - leave the filtered array full size */\n\tif (!(flags & FILTER_FULL)) {\n\t\tif ((tmp = static_cast<resource_resv **>(realloc(new_resresvs, (j + 1) *\n\t\t\t\t\t\t\t\t\t\t       sizeof(resource_resv *)))) == NULL) {\n\t\t\tfree(new_resresvs);\n\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\treturn NULL;\n\t\t} else\n\t\t\tnew_resresvs 
= tmp;\n\t}\n\tnew_resresvs[j] = NULL;\n\n\treturn new_resresvs;\n}\n\n/**\n * @brief\n *\t\tremove_resresv_from_array - remove a resource_resv from an array\n *\t\t\t\t    without leaving a hole\n *\n * @param[in,out]\tresresv_arr\t-\tthe array\n * @param[in]\tresresv\t-\tthe resresv to remove\n *\n * @return\tsuccess / failure\n *\n */\nint\nremove_resresv_from_array(resource_resv **resresv_arr,\n\t\t\t  resource_resv *resresv)\n{\n\tint i;\n\n\tif (resresv_arr == NULL || resresv == NULL)\n\t\treturn 0;\n\n\tfor (i = 0; resresv_arr[i] != NULL && resresv_arr[i] != resresv; i++)\n\t\t;\n\n\tif (resresv_arr[i] == resresv) {\n\t\t/* copy all the jobs past the one we found back one spot, including\n\t\t * copying the NULL terminator back one as well\n\t\t */\n\t\tfor (; resresv_arr[i] != NULL; i++)\n\t\t\tresresv_arr[i] = resresv_arr[i + 1];\n\t}\n\treturn 1;\n}\n\n/**\n * @brief\n *\t\tadd_resresv_to_array - add a resource resv to an array\n * \t\t\t   note: requires reallocating array\n *\n * @param[in]\tresresv_arr\t-\tjob array to add job to\n * @param[in]\tresresv\t-\tjob to add to array\n * @param[in]\tflags -\n *\t\t\t    SET_RESRESV_INDEX - set resresv_ind of the job/resv\n *\n * @return\tarray (changed from realloc)\n * @retval\tNULL\t: on error\n *\n */\nresource_resv **\nadd_resresv_to_array(resource_resv **resresv_arr,\n\t\t     resource_resv *resresv, int flags)\n{\n\tint size;\n\tresource_resv **new_arr;\n\n\tif (resresv_arr == NULL && resresv == NULL)\n\t\treturn NULL;\n\n\tif (resresv_arr == NULL && resresv != NULL) {\n\t\tnew_arr = static_cast<resource_resv **>(malloc(2 * sizeof(resource_resv *)));\n\t\tif (new_arr == NULL)\n\t\t\treturn NULL;\n\t\tnew_arr[0] = resresv;\n\t\tnew_arr[1] = NULL;\n\t\tif (flags & SET_RESRESV_INDEX)\n\t\t\tresresv->resresv_ind = 0;\n\t\treturn new_arr;\n\t}\n\n\tsize = count_array(resresv_arr);\n\n\t/* realloc for 1 more ptr (2 == 1 for new and 1 for NULL) */\n\tnew_arr = static_cast<resource_resv **>(realloc(resresv_arr, ((size + 
2) * sizeof(resource_resv *))));\n\n\tif (new_arr != NULL) {\n\t\tnew_arr[size] = resresv;\n\t\tnew_arr[size + 1] = NULL;\n\t\tif (flags & SET_RESRESV_INDEX)\n\t\t\tresresv->resresv_ind = size;\n\t} else {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\treturn new_arr;\n}\n\n/**\n * @brief\n *\t\tcopy_resresv_array - copy an array of resource_resvs by name.\n *\t\t\tThis is useful  when duplicating a data structure\n *\t\t\twith a job array in it which isn't easily reproduced.\n *\n * @par NOTE:\tif a job in resresv_arr is not in tot_arr, that resresv will be\n *\t\t\tleft out of the new array\n *\n * @param[in]\tresresv_arr\t-\tthe job array to copy\n * @param[in]\ttot_arr\t    -\t\tthe total array of jobs\n *\n * @return\tnew resource_resv array or NULL on error\n *\n */\nresource_resv **\ncopy_resresv_array(resource_resv **resresv_arr,\n\t\t   resource_resv **tot_arr)\n{\n\tresource_resv *resresv;\n\tresource_resv **new_resresv_arr;\n\tint size;\n\tint i, j;\n\n\tif (resresv_arr == NULL || tot_arr == NULL)\n\t\treturn NULL;\n\n\tfor (size = 0; resresv_arr[size] != NULL; size++)\n\t\t;\n\n\tnew_resresv_arr =\n\t\tstatic_cast<resource_resv **>(malloc((size + 1) * sizeof(resource_resv *)));\n\tif (new_resresv_arr == NULL) {\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG, __func__, \"not enough memory.\");\n\t\treturn NULL;\n\t}\n\n\tfor (i = 0, j = 0; resresv_arr[i] != NULL; i++) {\n\t\tresresv = find_resource_resv_by_indrank(tot_arr, resresv_arr[i]->resresv_ind, resresv_arr[i]->rank);\n\n\t\tif (resresv != NULL) {\n\t\t\tnew_resresv_arr[j] = resresv;\n\t\t\tj++;\n\t\t}\n\t}\n\tnew_resresv_arr[j] = NULL;\n\n\treturn new_resresv_arr;\n}\n\n/**\n * @brief\n *\t\tis_resresv_running - is a resource resv in the running state\n *\t\t\t     for a job it's in the \"R\" state\n *\t\t\t     for an advanced reservation its start time is in the past\n *\n *\n * @param[in]\tresresv\t-\tthe resresv to check if it's running\n *\n * 
@return\tint\n * @retval\t1\t: if running\n * @retval\t0\t: if not running\n *\n */\nint\nis_resresv_running(resource_resv *resresv)\n{\n\tif (resresv == NULL)\n\t\treturn 0;\n\n\tif (resresv->is_job) {\n\t\tif (resresv->job == NULL)\n\t\t\treturn 0;\n\n\t\tif (resresv->job->is_running)\n\t\t\treturn 1;\n\t}\n\n\tif (resresv->is_resv) {\n\t\tif (resresv->resv == NULL)\n\t\t\treturn 0;\n\n\t\tif (resresv->resv->is_running)\n\t\t\treturn 1;\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tnew_place - allocate and initialize a placement spec\n *\n * @return\tnewly allocated place\n *\n */\nplace *\nnew_place()\n{\n\tplace *pl;\n\n\tif ((pl = static_cast<place *>(malloc(sizeof(place)))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\tpl->pack = 0;\n\tpl->free = 0;\n\tpl->excl = 0;\n\tpl->share = 0;\n\tpl->scatter = 0;\n\tpl->vscatter = 0;\n\tpl->exclhost = 0;\n\n\tpl->group = NULL;\n\n\treturn pl;\n}\n\n/**\n * @brief\n *\t\tfree_place - free a placement spec\n *\n * @param[in,out]\tpl\t-\tthe placement spec to free\n *\n * @return\tnothing\n *\n */\nvoid\nfree_place(place *pl)\n{\n\tif (pl == NULL)\n\t\treturn;\n\n\tif (pl->group != NULL)\n\t\tfree(pl->group);\n\n\tfree(pl);\n}\n\n/**\n * @brief\n *\t\tdup_place - duplicate a place structure\n *\n * @param[in]\tpl\t-\tthe place structure to duplicate\n *\n * @return\tduplicated place structure\n *\n */\nplace *\ndup_place(place *pl)\n{\n\tplace *newpl;\n\n\tif (pl == NULL)\n\t\treturn NULL;\n\n\tnewpl = new_place();\n\n\tif (newpl == NULL)\n\t\treturn NULL;\n\n\tnewpl->pack = pl->pack;\n\tnewpl->free = pl->free;\n\tnewpl->scatter = pl->scatter;\n\tnewpl->vscatter = pl->vscatter;\n\tnewpl->excl = pl->excl;\n\tnewpl->exclhost = pl->exclhost;\n\tnewpl->share = pl->share;\n\n\tnewpl->group = string_dup(pl->group);\n\n\treturn newpl;\n}\n\n/**\n * @brief\n *\t\tnew_chunk - constructor for chunk\n *\n * @return\tnew_chunk\n * @retval\tNULL\t: malloc failed.\n */\nchunk 
*\nnew_chunk()\n{\n\tchunk *ch;\n\n\tif ((ch = static_cast<chunk *>(malloc(sizeof(chunk)))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\tch->num_chunks = 0;\n\tch->seq_num = 0;\n\tch->str_chunk = NULL;\n\tch->req = NULL;\n\n\treturn ch;\n}\n\n/**\n * @brief\n *\t\tdup_chunk_array - array copy constructor for array of chunk pointers\n *\n * @param[in]\told_chunk_arr\t-\told array of chunk pointers\n *\n * @return\tduplicate chunk array.\n */\nchunk **\ndup_chunk_array(const chunk *const *old_chunk_arr)\n{\n\tint i;\n\tint ct;\n\tchunk **new_chunk_arr = NULL;\n\tint error = 0;\n\n\tif (old_chunk_arr == NULL)\n\t\treturn NULL;\n\n\tct = count_array(old_chunk_arr);\n\n\tif ((new_chunk_arr = static_cast<chunk **>(calloc(ct + 1, sizeof(chunk *)))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\tfor (i = 0; old_chunk_arr[i] != NULL && !error; i++) {\n\t\tnew_chunk_arr[i] = dup_chunk(old_chunk_arr[i]);\n\t\tif (new_chunk_arr[i] == NULL)\n\t\t\terror = 1;\n\t}\n\n\tnew_chunk_arr[i] = NULL;\n\n\tif (error) {\n\t\tfree_chunk_array(new_chunk_arr);\n\t\treturn NULL;\n\t}\n\n\treturn new_chunk_arr;\n}\n\n/**\n * @brief\n *\t\tdup_chunk - copy constructor for chunk\n *\n * @param[in]\tochunk\t-\told chunk structure\n *\n * @return\tduplicate chunk structure.\n */\nchunk *\ndup_chunk(const chunk *ochunk)\n{\n\tchunk *nchunk;\n\n\tif (ochunk == NULL)\n\t\treturn NULL;\n\n\tnchunk = new_chunk();\n\n\tif (nchunk == NULL)\n\t\treturn NULL;\n\n\tnchunk->num_chunks = ochunk->num_chunks;\n\tnchunk->seq_num = ochunk->seq_num;\n\tnchunk->str_chunk = string_dup(ochunk->str_chunk);\n\tnchunk->req = dup_resource_req_list(ochunk->req);\n\n\tif (nchunk->req == NULL) {\n\t\tfree_chunk(nchunk);\n\t\treturn NULL;\n\t}\n\n\treturn nchunk;\n}\n\n/**\n * @brief\n *\t\tfree_chunk_array - array destructor for array of chunk ptrs\n *\n * @param[in,out]\tchunk_arr\t-\told array of chunk pointers\n *\n * @return\tvoid\n 
*/\nvoid\nfree_chunk_array(chunk **chunk_arr)\n{\n\tint i;\n\n\tif (chunk_arr == NULL)\n\t\treturn;\n\n\tfor (i = 0; chunk_arr[i] != NULL; i++)\n\t\tfree_chunk(chunk_arr[i]);\n\n\tfree(chunk_arr);\n}\n\n/**\n * @brief\n *\t\tfree_chunk - destructor for chunk\n *\n * @param[in,out]\tch\t-\tchunk structure to be freed.\n */\nvoid\nfree_chunk(chunk *ch)\n{\n\tif (ch == NULL)\n\t\treturn;\n\n\tif (ch->str_chunk != NULL)\n\t\tfree(ch->str_chunk);\n\n\tif (ch->req != NULL)\n\t\tfree_resource_req_list(ch->req);\n\n\tfree(ch);\n}\n\n/**\n * @brief find_chunk_by_seq_num - find a chunk by its sequence number\n *\n * @param[in] chunks - array of chunks to search\n * @param[in] seq_num - sequence number to search for\n *\n * @return chunk *\n * @retval chunk found\n * @retval NULL if not found or on error\n */\nchunk *\nfind_chunk_by_seq_num(chunk **chunks, int seq_num)\n{\n\tint i;\n\n\tif (chunks == NULL)\n\t\treturn NULL;\n\n\tfor (i = 0; chunks[i] != NULL; i++)\n\t\tif (chunks[i]->seq_num == seq_num)\n\t\t\treturn chunks[i];\n\n\treturn NULL;\n}\n\n/**\n * @brief\n *\t\tconstructor for selspec\n */\nselspec::selspec()\n{\n\ttotal_chunks = 0;\n\ttotal_cpus = 0;\n\tchunks = NULL;\n}\n\n/**\n * @brief\n *\t\tcopy constructor for selspec\n *\n * @param[in]\toldspec\t-\told selspec to be copied\n */\nselspec::selspec(const selspec &oldspec)\n{\n\ttotal_chunks = oldspec.total_chunks;\n\ttotal_cpus = oldspec.total_cpus;\n\tchunks = dup_chunk_array(oldspec.chunks);\n\tdefs = oldspec.defs;\n}\n\n/**\n * @brief\n *\t\tassignment operator for selspec\n *\n * @param[in]\toldspec\t-\tselspec to be copied\n */\nselspec &\nselspec::operator=(const selspec &oldspec)\n{\n\tif (this == &oldspec)\n\t\treturn *this;\n\n\ttotal_chunks = oldspec.total_chunks;\n\ttotal_cpus = oldspec.total_cpus;\n\tfree_chunk_array(chunks);\n\tchunks = dup_chunk_array(oldspec.chunks);\n\tdefs = oldspec.defs;\n\treturn *this;\n}\n\n/**\n * @brief\n *\t\tdestructor for selspec\n */\nselspec::~selspec()\n{\n\tif (chunks != NULL)\n\t\tfree_chunk_array(chunks);\n}\n\n/**\n * @brief\n *\t\tcompare_res_to_str - compare a resource structure of type string to\n *\t\t\t     a character array string\n 
*\n * @param[in]\tres\t-\tthe resource\n * @param[in]\tstr\t-\tthe string\n * @param[in]\tcmpflag\t-\tcase sensitive or insensitive comparison\n *\n * @return\tint\n * @retval\t1\t: if they match\n * @retval\t0\t: if they don't or res is not a string or error\n *\n */\nint\ncompare_res_to_str(schd_resource *res, char *str, enum resval_cmpflag cmpflag)\n{\n\tint i;\n\n\tif (res == NULL || str == NULL)\n\t\treturn 0;\n\n\tif (res->str_avail == NULL)\n\t\treturn 0;\n\n\tfor (i = 0; res->str_avail[i] != NULL; i++) {\n\t\tif (cmpflag == CMP_CASE) {\n\t\t\tif (!strcmp(res->str_avail[i], str))\n\t\t\t\treturn 1;\n\t\t} else if (cmpflag == CMP_CASELESS) {\n\t\t\tif (!strcasecmp(res->str_avail[i], str))\n\t\t\t\treturn 1;\n\t\t} else {\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_NOTICE, res->name, \"Incorrect flag for comparison.\");\n\t\t\treturn 0;\n\t\t}\n\t}\n\t/* if we got here, we didn't match the string */\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tcompare_non_consumable - perform the == operation on a non consumable\n *\t\t\t\tresource and resource_req\n *\n * @param[in]\tres\t-\tthe resource\n * @param[in]\treq\t-\tresource request\n *\n * @return\tint\n * @retval\t1\t: for a match\n * @retval\t0\t: for not a match\n *\n */\nint\ncompare_non_consumable(schd_resource *res, resource_req *req)\n{\n\n\tif (res == NULL && req == NULL)\n\t\treturn 0;\n\n\tif (req == NULL)\n\t\treturn 0;\n\n\tif (!req->type.is_non_consumable)\n\t\treturn 0;\n\n\tif (res != NULL) {\n\t\tif (!res->type.is_non_consumable)\n\t\t\treturn 0;\n\n\t\tif (res->type.is_string && res->str_avail == NULL)\n\t\t\treturn 0;\n\t}\n\n\t/* successful boolean match: (req = request res = resource on object)\n\t * req: True  res: True\n\t * req: False res: False\n\t * req: False res: NULL\n\t * req:   *   res: TRUE_FALSE\n\t */\n\tif (req->type.is_boolean) {\n\t\tif (!req->amount && res == NULL)\n\t\t\treturn 1;\n\t\telse if (req->amount && res == NULL)\n\t\t\treturn 0;\n\t\telse if (res->avail == 
TRUE_FALSE)\n\t\t\treturn 1;\n\t\telse\n\t\t\treturn res->avail == req->amount;\n\t}\n\n\tif (req->type.is_string && res != NULL) {\n\t\t/* 'host' follows IETF rules, so it is compared case insensitively */\n\t\tif (!strcmp(res->name, \"host\"))\n\t\t\treturn compare_res_to_str(res, req->res_str, CMP_CASELESS);\n\t\telse\n\t\t\treturn compare_res_to_str(res, req->res_str, CMP_CASE);\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tcreate a select from an nspec array to place chunks back on the\n *\t\tsame nodes as before.  If an nspec does not have a ninfo, it means\n *\t\twe need to get back the resources, but not on the same node.\n *\n * @param[in]\tnspec_arr\t-\tnspec array to convert\n *\n * @return\tconverted select string\n */\nstd::string\ncreate_select_from_nspec(std::vector<nspec *> &nspec_arr)\n{\n\tstd::string select_spec;\n\tchar buf[2048];\n\tresource_req *req;\n\n\tif (nspec_arr.empty())\n\t\treturn {};\n\n\t/* convert from (node:foo=X:bar=Y) into 1:vnode=node:foo=X:bar=Y */\n\tfor (const auto &ns : nspec_arr) {\n\t\t/* Don't add exclhost chunks into our select. They will be added back when\n\t\t * we call eval_selspec() with the original place=exclhost.  
If we added\n\t\t * them, we'd have issues placing chunks w/o resources\n\t\t */\n\t\tif (ns->resreq != NULL) {\n\t\t\tif (ns->ninfo != NULL) {\n\t\t\t\tselect_spec += \"1:vnode=\";\n\t\t\t\tselect_spec += ns->ninfo->name;\n\t\t\t} else {\n\t\t\t\t/* We need the resources back, but not necessarily on the same node */\n\t\t\t\tselect_spec += \"1\";\n\t\t\t}\n\t\t\tfor (req = ns->resreq; req != NULL; req = req->next) {\n\t\t\t\tchar resstr[MAX_LOG_SIZE];\n\n\t\t\t\tres_to_str_r(req, RF_REQUEST, resstr, sizeof(resstr));\n\t\t\t\tif (resstr[0] == '\\0') {\n\t\t\t\t\treturn {};\n\t\t\t\t}\n\t\t\t\tsnprintf(buf, sizeof(buf), \":%s=%s\", req->name, resstr);\n\t\t\t\tselect_spec += buf;\n\t\t\t}\n\t\t\tselect_spec += \"+\";\n\t\t}\n\t}\n\n\t/* get rid of the trailing '+', if any chunks were added */\n\tif (!select_spec.empty())\n\t\tselect_spec.pop_back();\n\n\treturn select_spec;\n}\n\n/**\n * @brief\n * \t\ttrue if job/resv is in a state in which it can be run\n * \t\tJobs are runnable if:\n *\t   \tin state 'Q'\n *\t\tsuspended by the scheduler\n *\t\tis job array in state 'B' and there is a queued subjob\n *\t\tReservations are runnable if they are in state RESV_CONFIRMED\n *\n * @param[in] resresv - resource resv to check\n *\n * @return int\n * @retval\t1\t: if the resource resv is in a runnable state\n * @retval  0\t: if not\n *\n */\nint\nin_runnable_state(resource_resv *resresv)\n{\n\tif (resresv == NULL)\n\t\treturn 0;\n\n\tif (resresv->is_job && resresv->job != NULL) {\n\t\tif (resresv->job->is_array) {\n\t\t\tif (range_next_value(resresv->job->queued_subjobs, -1) >= 0) {\n\t\t\t\tif (resresv->job->is_begin || resresv->job->is_queued)\n\t\t\t\t\treturn 1;\n\t\t\t} else\n\t\t\t\treturn 0;\n\t\t}\n\n\t\tif (resresv->job->is_queued)\n\t\t\treturn 1;\n\n\t\tif (resresv->job->is_susp_sched)\n\t\t\treturn 1;\n\t} else if (resresv->is_resv && resresv->resv != NULL) {\n\t\tif (resresv->resv->resv_state == RESV_CONFIRMED)\n\t\t\treturn 1;\n\t}\n\n\treturn 0;\n}\n"
  },
  {
    "path": "src/scheduler/resource_resv.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _RESOURCE_RESV_H\n#define _RESOURCE_RESV_H\n\n#include \"data_types.h\"\n\n/*\n * pthread routine to free resource_resv array chunk\n */\nvoid\nfree_resource_resv_array_chunk(th_data_free_resresv *data);\n\n/*\n *      free_resource_resv_array - free an array of resource resvs\n */\nvoid free_resource_resv_array(resource_resv **resresv_arr);\n\n/*\n *      dup_resource_resv - duplicate a resource resv structure\n */\nresource_resv *dup_resource_resv(resource_resv *oresresv, server_info *nsinfo,\n\t\t\t\t queue_info *nqinfo, const std::string &name);\n\nresource_resv *dup_resource_resv(resource_resv *oresresv, server_info *nsinfo, queue_info *nqinfo);\n/*\n * pthread routine for duping a chunk of resresvs\n */\nvoid dup_resource_resv_array_chunk(th_data_dup_resresv *data);\n/*\n *      dup_resource_resv_array - dup a array of pointers of resource resvs\n */\nresource_resv **\ndup_resource_resv_array(resource_resv **oresresv_arr,\n\t\t\tserver_info *nsinfo, queue_info *nqinfo);\n\n/*\n *      is_resource_resv_valid - do simple validity checks for a resource resv\n *      returns 1 if valid 0 if not\n */\nint is_resource_resv_valid(resource_resv *resresv, schd_error *err);\n\n/*\n *      find_resource_resv - find a resource_resv by name\n */\nresource_resv *find_resource_resv(resource_resv **resresv_arr, const std::string 
&name);\n\n/*\n * find a resource_resv by unique numeric rank\n */\nresource_resv *find_resource_resv_by_indrank(resource_resv **resresv_arr, int index, int rank);\n\n/**\n *  find_resource_resv_by_time - find a resource_resv by name and start time\n */\nresource_resv *find_resource_resv_by_time(resource_resv **resresv_arr, const std::string &name, time_t start_time);\n\n/*\n *      find_resource_req - find a resource_req from a resource_req list\n */\nresource_req *find_resource_req_by_str(resource_req *reqlist, const char *name);\n\n/*\n *\tfind resource_req by resource definition\n */\nresource_req *find_resource_req(resource_req *reqlist, resdef *def);\n\n/*\n *\tfind resource_count by resource definition\n */\nresource_count *find_resource_count(resource_count *reqlist, resdef *def);\n\n/*\n *      new_resource_req - allocate and initialize new resource_req\n */\n#ifdef NAS /* localmod 005 */\nresource_req *new_resource_req(void);\n#else\nresource_req *new_resource_req();\n#endif /* localmod 005 */\n\n/*\n * new_resource_count - allocate and initialize new resource_count\n */\nresource_count *new_resource_count();\n\n/*\n * find_alloc_resource_req[_by_str] -\n * find resource_req by name/resource definition or allocate and\n * initialize a new resource_req; also adds the new one to the list\n */\nresource_req *find_alloc_resource_req(resource_req *reqlist, resdef *def);\nresource_req *find_alloc_resource_req_by_str(resource_req *reqlist, char *name);\n\n/*\n * find resource_count by resource definition or allocate\n */\nresource_count *find_alloc_resource_count(resource_count *rcountlist, resdef *def);\n\n/*\n *      free_resource_req_list - frees memory used by a resource_req list\n */\nvoid free_resource_req_list(resource_req *list);\n\n/*\n *\tfree_resource_req - free memory used by a resource_req structure\n */\nvoid free_resource_req(resource_req *req);\n\n/*\n *      free_resource_count_list - frees memory used by a resource_count list\n */\nvoid 
free_resource_count_list(resource_count *rcount);\n\n/*\n *\tfree_resource_count - free memory used by a resource_count structure\n */\nvoid free_resource_count(resource_count *req);\n\n/*\n *\tset_resource_req - set the value and type of a resource req\n */\nint set_resource_req(resource_req *req, const char *val);\n\n/*\n *      dup_resource_req_list - duplicate a resource_req list\n */\nresource_req *dup_resource_req_list(resource_req *oreq);\n\nresource_req *dup_selective_resource_req_list(resource_req *oreq, std::unordered_set<resdef *> &deflist);\n\n/*\n *\tdup_resource_count_list - duplicate a resource_count list\n */\nresource_count *dup_resource_count_list(resource_count *orcount);\n\n/*\n *      dup_resource_req - duplicate a resource_req struct\n */\nresource_req *dup_resource_req(resource_req *oreq);\n\n/*\n *\tdup_resource_count - duplicate a resource_count struct\n */\nresource_count *dup_resource_count(resource_count *orcount);\n\n/*\n *      update_resresv_on_run - update information kept in a resource_resv\n *                              struct when one is started\n */\nvoid update_resresv_on_run(resource_resv *resresv, std::vector<nspec *> &nspec_arr);\n\n/*\n *      update_resresv_on_end - update a resource_resv structure when\n *                                    it ends\n */\nvoid update_resresv_on_end(resource_resv *resresv, const char *job_state);\n\n/*\n *      resource_resv_filter - filters jobs on specified argument\n */\nresource_resv **\nresource_resv_filter(resource_resv **resresv_arr, int size,\n\t\t     int (*filter_func)(resource_resv *, const void *), const void *arg, int flags);\n\n/*\n *      remove_resresv_from_array - remove a resource_resv from an array\n *                                  without leaving a hole\n */\nint\nremove_resresv_from_array(resource_resv **resresv_arr,\n\t\t\t  resource_resv *resresv);\n\n/*\n *      add_resresv_to_array - add a resource resv to an array\n *                         note: requires 
reallocating array\n */\nresource_resv **\nadd_resresv_to_array(resource_resv **resresv_arr,\n\t\t     resource_resv *resresv, int flags);\n\n/*\n *      copy_resresv_array - copy an array of resource_resvs by name.\n *                      This is useful  when duplicating a data structure\n *                      with a job array in it which isn't easily reproduced.\n *\n *      NOTE: if a job in resresv_arr is not in tot_arr, that resresv will be\n *              left out of the new array\n */\nresource_resv **\ncopy_resresv_array(resource_resv **resresv_arr,\n\t\t   resource_resv **tot_arr);\n\n/*\n *\tis_resresv_running - is a resource resv in the running state\n *\t\t\t     for a job it's in the \"R\" state\n *\t\t\t     for an advanced reservation it is running\n */\nint is_resresv_running(resource_resv *resresv);\n\n/*\n *\tnew_place - allocate and initialize a placement spec\n *\n *\treturns newly allocated place\n */\n#ifdef NAS /* localmod 005 */\nplace *new_place(void);\n#else\nplace *new_place();\n#endif /* localmod 005 */\n\n/*\n *\tfree_place - free a placement spec\n */\nvoid free_place(place *pl);\n\n/*\n *\tdup_place - duplicate a place structure\n */\nplace *dup_place(place *pl);\n\n/*\n *\tcompare_res_to_str - compare a resource structure of type string to\n *\t\t\t     a character array string\n */\nint compare_res_to_str(schd_resource *res, char *str, enum resval_cmpflag);\n\n/*\n *\tcompare_non_consumable - perform the == operation on a non consumable\n *\t\t\t\tresource and resource_req\n *\treturns 1 for a match or 0 for not a match\n */\nint compare_non_consumable(schd_resource *res, resource_req *req);\n\n/* compare two resource req lists for equality.  
Only compare resources in comparr */\nint compare_resource_req_list(resource_req *req1, resource_req *req2, std::unordered_set<resdef *> &comparr);\n\n/* compare two resource_reqs for equality */\nint compare_resource_req(resource_req *req1, resource_req *req2);\n\n/*\n *\tnew_chunk - constructor for chunk\n */\n#ifdef NAS /* localmod 005 */\nchunk *new_chunk(void);\n#else\nchunk *new_chunk();\n#endif /* localmod 005 */\n\n/*\n *\tdup_chunk_array - array copy constructor for array of chunk ptrs\n */\nchunk **dup_chunk_array(const chunk *const *old_chunk_arr);\n\n/*\n *\tdup_chunk - copy constructor for chunk\n */\nchunk *dup_chunk(const chunk *ochunk);\n\n/*\n *\tfree_chunk_array - array destructor for array of chunk ptrs\n */\nvoid free_chunk_array(chunk **chunk_arr);\n\n/*\n *\tfree_chunk - destructor for chunk\n */\nvoid free_chunk(chunk *ch);\n\n/*\n * create_resource_req - create a new resource_req\n *\n *\treturn new resource_req or NULL\n */\nresource_req *create_resource_req(const char *name, const char *value);\n\n/*\n * create a select from an nspec array to place chunks back on the\n *        same nodes as before\n *\n * return converted select string\n */\nstd::string create_select_from_nspec(std::vector<nspec *> &nspec_arr);\n\n/* function returns true if job/resv is in a state in which it can be run */\nint in_runnable_state(resource_resv *resresv);\n\n#endif /* _RESOURCE_RESV_H */\n"
  },
  {
    "path": "src/scheduler/resv_info.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    resv_info.c\n *\n * @brief\n * \tresv_info.c - This file contains functions related to advance reservations.\n *\n * Functions included are:\n *\tstat_resvs()\n *\tquery_reservations()\n *\tquery_resv()\n *\tnew_resv_info()\n *\tfree_resv_info()\n *\tdup_resv_info()\n *\tcheck_new_reservations()\n *\tdisable_reservation_occurrence()\n *\tconfirm_reservation()\n *\trelease_nodes()\n *\tcreate_resv_nodes()\n *\n */\n\n#include <pbs_config.h>\n\n#include <algorithm>\n\n#include <errno.h>\n#include <libutil.h>\n#include <log.h>\n#include <pbs_ifl.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\n#include \"check.h\"\n#include \"constant.h\"\n#include \"data_types.h\"\n#include \"fifo.h\"\n#include \"globals.h\"\n#include \"job_info.h\"\n#include \"libpbs.h\"\n#include \"misc.h\"\n#include \"node_info.h\"\n#include \"node_partition.h\"\n#include \"pbs_internal.h\"\n#include \"queue_info.h\"\n#include \"resource.h\"\n#include \"resource_resv.h\"\n#include \"resv_info.h\"\n#include \"server_info.h\"\n#include \"simulate.h\"\n#include \"sort.h\"\n\n/**\n * @brief\n * \t\tStatuses reservations from the server in batch status form.\n *\n * @param[in]\tpbs_sd\t-\tThe socket descriptor to the server's connection\n *\n * @return\tbatch status form of the reservations\n */\nstruct batch_status *\nstat_resvs(int 
pbs_sd)\n{\n\tstruct batch_status *resvs;\n\t/* get the reservation info from the PBS server */\n\tif ((resvs = send_statresv(pbs_sd, NULL, NULL, NULL)) == NULL) {\n\t\tif (pbs_errno) {\n\t\t\tconst char *errmsg = pbs_geterrmsg(pbs_sd);\n\t\t\tif (errmsg == NULL)\n\t\t\t\terrmsg = \"\";\n\t\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_RESV, LOG_NOTICE, \"resv_info\",\n\t\t\t\t   \"pbs_statresv failed: %s (%d)\", errmsg, pbs_errno);\n\t\t}\n\t\treturn NULL;\n\t}\n\treturn resvs;\n}\n\n/**\n * @brief\n *\tquery_reservations - query the reservations from the server.\n *\n *  Each reservation is created to reflect its current state in the server.\n *  For a standing reservation, the parent reservation represents the soonest\n *  occurrence known to the server; each remaining occurrence is unrolled to\n *  account for the resources consumed by the standing reservation as a whole.\n *\n *  A degraded reservation is handled in a manner similar to a confirmed\n *  reservation. Even though resources of a degraded reservation may change in\n *  this scheduling cycle, in case no alternate resources are found, the\n *  reservation retains its currently allocated resources, such that no other\n *  requests make use of the same resources.\n *\n * @param[in] pbs_sd - connection to the pbs server\n * @param[in] sinfo  - the server to query from\n * @param[in] resvs  - batch status of the stat'ed reservations\n *\n * @return    An array of reservations\n *\n */\nresource_resv **\nquery_reservations(int pbs_sd, server_info *sinfo, struct batch_status *resvs)\n{\n\t/* the current reservation in the list */\n\tstruct batch_status *cur_resv;\n\n\t/* the array of pointers to internal scheduling structure for reservations */\n\tresource_resv **resresv_arr;\n\n\t/* the current resv */\n\tresource_resv *resresv;\n\n\tint j;\n\tint idx = 0; /* index of the server info's resource reservation array */\n\tint num_resv = 0;\n\n\tschd_error *err;\n\n\tif (resvs == NULL)\n\t\treturn 
NULL;\n\n\terr = new_schd_error();\n\tif (err == NULL)\n\t\treturn NULL;\n\n\tcur_resv = resvs;\n\n\twhile (cur_resv != NULL) {\n\t\tnum_resv++;\n\t\tcur_resv = cur_resv->next;\n\t}\n\n\tif ((resresv_arr = static_cast<resource_resv **>(malloc(sizeof(resource_resv *) * (num_resv + 1)))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\tfree_schd_error(err);\n\t\treturn NULL;\n\t}\n\tresresv_arr[0] = NULL;\n\tsinfo->num_resvs = num_resv;\n\n\tfor (cur_resv = resvs; cur_resv != NULL; cur_resv = cur_resv->next) {\n\t\tint ignore_resv = 0;\n\t\tclear_schd_error(err);\n\t\tstruct attrl *attrp = NULL;\n\t\t/* Check if this reservation belongs to this scheduler */\n\t\tfor (attrp = cur_resv->attribs; attrp != NULL; attrp = attrp->next) {\n\t\t\tif (strcmp(attrp->name, ATTR_partition) == 0) {\n\t\t\t\tif (sc_attrs.partition != NULL && (strcmp(attrp->value, sc_attrs.partition) != 0))\n\t\t\t\t\tignore_resv = 1;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tif (ignore_resv == 1) {\n\t\t\tsinfo->num_resvs--;\n\t\t\tcontinue;\n\t\t}\n\n\t\t/* convert resv info from server batch_status into resv_info */\n\t\tif ((resresv = query_resv(cur_resv, sinfo)) == NULL) {\n\t\t\tfree_resource_resv_array(resresv_arr);\n\t\t\tfree_schd_error(err);\n\t\t\treturn NULL;\n\t\t}\n#ifdef NAS /* localmod 047 */\n\t\tif (resresv->place_spec == NULL) {\n\t\t\tresresv->place_spec = parse_placespec(\"scatter\");\n\t\t}\n#endif /* localmod 047 */\n\n\t\t/* We continue adding valid resvs to our array.  
We're\n\t\t * freeing what we allocated and ignoring this resv completely.\n\t\t */\n\t\tif (!is_resource_resv_valid(resresv, err) || resresv->is_invalid) {\n\t\t\tschdlogerr(PBSEVENT_DEBUG, PBS_EVENTCLASS_RESV, LOG_DEBUG, resresv->name,\n\t\t\t\t   \"Reservation is invalid - ignoring for this cycle\", err);\n\t\t\tignore_resv = 1;\n\t\t}\n\t\t/* Make sure it is not a future reservation that is being deleted, if so ignore it */\n\t\telse if ((resresv->resv->resv_state == RESV_BEING_DELETED) && (resresv->start > sinfo->server_time)) {\n\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_RESV, LOG_DEBUG,\n\t\t\t\t  resresv->name, \"Future reservation is being deleted, ignoring this reservation\");\n\t\t\tignore_resv = 1;\n\t\t} else if ((resresv->resv->resv_state == RESV_BEING_DELETED) && (resresv->resv->resv_nodes != NULL) &&\n\t\t\t   (!is_string_in_arr(resresv->resv->resv_nodes[0]->resvs, resresv->name.c_str()))) {\n\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_RESV, LOG_DEBUG,\n\t\t\t\t  resresv->name, \"Reservation is being deleted and not present on node, ignoring this reservation\");\n\t\t\tignore_resv = 1;\n\t\t}\n\n\t\tif (ignore_resv == 1) {\n\t\t\tsinfo->num_resvs--;\n\t\t\t/* mark all the jobs of the associated queue as can never run */\n\t\t\tif (resresv->resv->queuename != NULL) {\n\t\t\t\tqueue_info *qinfo = find_queue_info(sinfo->queues, resresv->resv->queuename);\n\t\t\t\tif (qinfo != NULL) {\n\t\t\t\t\tclear_schd_error(err);\n\t\t\t\t\tset_schd_error_arg(err, SPECMSG, \"Reservation is in an invalid state\");\n\t\t\t\t\tset_schd_error_codes(err, NEVER_RUN, ERR_SPECIAL);\n\t\t\t\t\tupdate_jobs_cant_run(pbs_sd, qinfo->jobs, NULL, err, START_WITH_JOB);\n\t\t\t\t}\n\t\t\t}\n\t\t\tdelete resresv;\n\t\t\tcontinue;\n\t\t}\n\n\t\tmodify_jobs_nodes_for_resv(resresv, sinfo->server_time);\n\n\t\t/* The server's info only gives information about a single reservation\n\t\t * object. 
In the case of a standing reservation, it is up to the\n\t\t * scheduler to account for each occurrence and attempt to confirm the\n\t\t * reservation.\n\t\t *\n\t\t * In such a case, each occurrence has to be 'cloned'\n\t\t * by duplicating the parent reservation and setting the specific start\n\t\t * and end times and unique execvnodes for each occurrence.\n\t\t *\n\t\t * Note that because the first occurrence may reschedule the start time\n\t\t * from the one submitted by reserve_start (see -R of pbs_rsub), the\n\t\t *  initial start time has to be reconfirmed. An example of such\n\t\t * rescheduling is:\n\t\t *   pbs_rsub -R 2000 -E 2100 -r \"FREQ=DAILY;BYHOUR=9,20;COUNT=2\"\n\t\t * for which the first occurrence is 9am and then 8pm.\n\t\t * BYHOUR takes priority over the start time specified by -R of pbs_rsub.\n\t\t *\n\t\t * Only unroll the occurrences if the parent reservation has been\n\t\t * confirmed.\n\t\t */\n\t\tif (resresv->resv->is_standing &&\n\t\t    (resresv->resv->resv_state != RESV_UNCONFIRMED)) {\n\t\t\tresource_resv *resresv_ocr = NULL; /* the occurrence's resource_resv */\n\t\t\tchar *execvnodes_seq = NULL;\t\t   /* confirmed execvnodes sequence string */\n\t\t\tchar **execvnode_ptr = NULL;\n\t\t\tchar **tofree = NULL;\n\t\t\tresource_resv **tmp = NULL;\n\t\t\ttime_t dtstart;\n\t\t\tchar *rrule = NULL;\n\t\t\tchar *tz = NULL;\n\t\t\tchar start_time[128];\n\t\t\tint count = 0;\n\t\t\tint occr_count;\t  /* occurrences count as reported by execvnodes_seq */\n\t\t\tint occr_idx;\t  /* the occurrence index of a standing reservation */\n\t\t\tint degraded_idx; /* index corrected to account for reconfirmation */\n\n\t\t\t/* occr_idx refers to the soonest occurrence to run or currently running\n\t\t\t * Note that resv_idx starts at 1 on the first occurrence and not 0.\n\t\t\t */\n\t\t\toccr_idx = resresv->resv->resv_idx;\n\t\t\texecvnodes_seq = string_dup(resresv->resv->execvnodes_seq);\n\t\t\t/* the error handling for the string duplication 
returning NULL is\n\t\t\t * combined with the following assignment, because get_execvnodes_count\n\t\t\t * returns 0 if passed a NULL argument\n\t\t\t */\n\t\t\toccr_count = get_execvnodes_count(execvnodes_seq);\n\t\t\t/* this should happen only if the execvnodes_seq are corrupted. In such\n\t\t\t * case, we ignore the reservation and move on to the next one\n\t\t\t */\n\t\t\tif (occr_count == 0) {\n\t\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_RESV, LOG_DEBUG,\n\t\t\t\t\t  resresv->name, \"Error processing standing reservation, degrading it\");\n\t\t\t\tif  (resresv->resv->resv_state != RESV_RUNNING\n\t\t\t\t         && resresv->resv->resv_state != RESV_DELETING_JOBS)\n\t\t\t\t\tresresv->resv->resv_state = RESV_DEGRADED;\n\t\t\t}\n\t\t\t/* unroll_execvnode_seq will destroy the first argument that is passed\n\t\t\t * to it by calling tokenizing functions, hence, it has to be duplicated\n\t\t\t */\n\t\t\texecvnode_ptr = unroll_execvnode_seq(execvnodes_seq, &tofree);\n\t\t\tcount = resresv->resv->count;\n\n\t\t\t/* 'count' and 'occr_idx' attributes persist through the life of the\n\t\t\t * standing reservation. After a reconfirmation, the new execvnodes\n\t\t\t * sequence may be shortened, therefore the occurrence index used to\n\t\t\t * identify which execvnode is associated to which occurrence needs to\n\t\t\t * be adjusted to take into account the elapsed occurrences\n\t\t\t */\n\t\t\tdegraded_idx = occr_idx - (count - occr_count);\n\n\t\t\t/* The number of remaining occurrences to add to the svr_info is given\n\t\t\t * by the total number of occurrences (count) to which we subtract the\n\t\t\t * number of elapsed occurrences that started at 1.\n\t\t\t * For example, if a standing reservation for a count of 10 is submitted\n\t\t\t * and the reservation has already run 2 and is now scheduling the 3rd\n\t\t\t * one to start, then occr_idx is 3. 
The number of remaining occurrences\n\t\t\t * to add to the server info is then 10-3=7\n\t\t\t * Note that 'count - occr_idx' is identical to\n\t\t\t * 'occr_count - degraded_idx'\n\t\t\t */\n\t\t\tsinfo->num_resvs += count - occr_idx;\n\n\t\t\t/* Resize the reservations array to append each occurrence */\n\t\t\tif ((tmp = static_cast<resource_resv **>(realloc(resresv_arr,\n\t\t\t\t\t\t\t\t\t sizeof(resource_resv *) * (sinfo->num_resvs + 1)))) == NULL) {\n\t\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\t\tfree_resource_resv_array(resresv_arr);\n\t\t\t\tdelete resresv;\n\t\t\t\tfree_execvnode_seq(tofree);\n\t\t\t\tfree(execvnodes_seq);\n\t\t\t\tfree(execvnode_ptr);\n\t\t\t\tfree_schd_error(err);\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tresresv_arr = tmp;\n\n\t\t\trrule = resresv->resv->rrule;\n\t\t\tdtstart = resresv->resv->req_start;\n\t\t\ttz = resresv->resv->timezone;\n\n\t\t\t/* Add each occurrence to the universe's view by duplicating the\n\t\t\t * parent reservation and resetting start and end times and the\n\t\t\t * execvnode on which the occurrence is confirmed to run.\n\t\t\t */\n\t\t\tfor (j = 0; occr_idx <= count; occr_idx++, j++, degraded_idx++) {\n\t\t\t\t/* If it is not the first occurrence then update the start time as\n\t\t\t\t * req_start_standing (if set). This is to ensure that if the first\n\t\t\t\t * occurrence has been changed, other future occurrences are not\n\t\t\t\t * affected.\n\t\t\t\t */\n\t\t\t\tif (j == 1 && resresv_ocr->resv->req_start_standing != UNSPECIFIED)\n\t\t\t\t\tdtstart = resresv_ocr->resv->req_start_standing;\n\t\t\t\t/* Get the start time of the next occurrence computed from dtstart.\n\t\t\t\t * The server maintains state of a single reservation object for\n\t\t\t\t * which in the case of a standing reservation, it updates start\n\t\t\t\t * and end times and execvnodes.\n\t\t\t\t * The last argument (j+1) indicates the occurrence index from dtstart\n\t\t\t\t * starting at 1. 
Returns dtstart if it's an advance reservation.\n\t\t\t\t */\n\t\t\t\tauto next = get_occurrence(rrule, dtstart, tz, j + 1);\n\n\t\t\t\t/* Duplicate the \"master\" resv only for subsequent occurrences */\n\t\t\t\tif (j == 0)\n\t\t\t\t\tresresv_ocr = resresv;\n\t\t\t\telse {\n\t\t\t\t\tresresv_ocr = dup_resource_resv(resresv, sinfo, NULL);\n\t\t\t\t\tif (resresv_ocr == NULL) {\n\t\t\t\t\t\tlog_err(errno, __func__, \"Error duplicating resource reservation\");\n\t\t\t\t\t\tfree_resource_resv_array(resresv_arr);\n\t\t\t\t\t\tdelete resresv;\n\t\t\t\t\t\tfree_execvnode_seq(tofree);\n\t\t\t\t\t\tfree(execvnodes_seq);\n\t\t\t\t\t\tfree(execvnode_ptr);\n\t\t\t\t\t\tfree_schd_error(err);\n\t\t\t\t\t\treturn NULL;\n\t\t\t\t\t}\n\t\t\t\t\tif (resresv->resv->resv_state == RESV_RUNNING ||\n\t\t\t\t\t    resresv->resv->resv_state == RESV_BEING_ALTERED ||\n\t\t\t\t\t    resresv->resv->resv_state == RESV_DELETING_JOBS) {\n\t\t\t\t\t\t/* Each occurrence will be added to the simulation framework and\n\t\t\t\t\t\t * should not be in running state. Their state should be\n\t\t\t\t\t\t * Confirmed instead of possibly inheriting the Running state\n\t\t\t\t\t\t * from the parent reservation.\n\t\t\t\t\t\t */\n\t\t\t\t\t\tresresv_ocr->resv->resv_state = RESV_CONFIRMED;\n\t\t\t\t\t\tresresv_ocr->resv->is_running = 0;\n\t\t\t\t\t}\n\t\t\t\t\t/* Duplication deep-copies node info array. This array gets\n\t\t\t\t\t * overwritten and needs to be freed. 
This is an alternative\n\t\t\t\t\t * to creating another duplication function that only duplicates\n\t\t\t\t\t * the required fields.\n\t\t\t\t\t */\n\t\t\t\t\trelease_nodes(resresv_ocr);\n\n\t\t\t\t\tif (resresv_ocr->resv->select_standing != NULL) {\n\t\t\t\t\t\tdelete resresv_ocr->select;\n\t\t\t\t\t\tresresv_ocr->select = new selspec(*resresv_ocr->resv->select_standing);\n\t\t\t\t\t}\n\n\t\t\t\t\tif (degraded_idx >= 1 && degraded_idx <= occr_count)\n\t\t\t\t\t\tresresv_ocr->resv->orig_nspec_arr = parse_execvnode(\n\t\t\t\t\t\t\texecvnode_ptr[degraded_idx - 1], sinfo, resresv_ocr->select);\n\t\t\t\t\telse {\n\t\t\t\t\t\tresresv_ocr->resv->orig_nspec_arr = {};\n\t\t\t\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_RESV,\n\t\t\t\t\t\t\t   LOG_INFO, resresv->name,\n\t\t\t\t\t\t\t   \"%s: occurrence %d has no execvnodes, proceeding without assigned resources\",\n\t\t\t\t\t\t\t   __func__, j + 1);\n\t\t\t\t\t}\n\n\t\t\t\t\tresresv_ocr->nspec_arr = combine_nspec_array(resresv_ocr->resv->orig_nspec_arr);\n\t\t\t\t\tresresv_ocr->ninfo_arr = create_node_array_from_nspec(resresv_ocr->nspec_arr);\n\t\t\t\t\tresresv_ocr->resv->resv_nodes = create_resv_nodes(\n\t\t\t\t\t\tresresv_ocr->nspec_arr, sinfo);\n\t\t\t\t}\n\n\t\t\t\t/* Set occurrence start and end time and nodes information. On the\n\t\t\t\t * first occurrence the start time may need to be reset to the time\n\t\t\t\t * specified by the recurrence rule. See description at the head of\n\t\t\t\t * this block.\n\t\t\t\t */\n\t\t\t\tresresv_ocr->resv->req_start = next;\n\t\t\t\t/* If it is not the first occurrence then update the duration as\n\t\t\t\t * req_duration_standing (if set). 
This is to ensure that if the first\n\t\t\t\t * occurrence has been changed, other future occurrences are not\n\t\t\t\t * affected.\n\t\t\t\t */\n\t\t\t\tif (j != 0 && resresv->resv->req_duration_standing != UNSPECIFIED)\n\t\t\t\t\tresresv_ocr->hard_duration = resresv_ocr->duration = resresv->resv->req_duration_standing;\n\t\t\t\tresresv_ocr->resv->req_end = next + resresv_ocr->duration;\n\t\t\t\tresresv_ocr->start = resresv_ocr->resv->req_start;\n\t\t\t\tresresv_ocr->end = resresv_ocr->resv->req_end;\n\t\t\t\tresresv_ocr->resv->resv_idx = occr_idx;\n\n\t\t\t\t/* Add the occurrence to the global array of reservations */\n\t\t\t\tresresv_arr[idx++] = resresv_ocr;\n\t\t\t\tresresv_arr[idx] = NULL;\n\n\t\t\t\tauto loc_time = localtime(&resresv_ocr->start);\n\t\t\t\tstrftime(start_time, sizeof(start_time), \"%Y%m%d-%H:%M:%S\", loc_time);\n\n\t\t\t\tlog_eventf(PBSEVENT_DEBUG2, PBS_EVENTCLASS_RESV, LOG_DEBUG, resresv->name,\n\t\t\t\t\t   \"Occurrence %d/%d,%s\", occr_idx, count, start_time);\n\t\t\t}\n\t\t\t/* The parent reservation has already been added so move on to handling\n\t\t\t * the next reservation\n\t\t\t */\n\n\t\t\tfree_execvnode_seq(tofree);\n\t\t\tfree(execvnodes_seq);\n\t\t\tfree(execvnode_ptr);\n\n\t\t\tcontinue;\n\t\t} else {\n\t\t\tresresv_arr[idx++] = resresv;\n\t\t\tresresv_arr[idx] = NULL;\n\t\t}\n\t}\n\n\tfree_schd_error(err);\n\n\treturn resresv_arr;\n}\n\n/**\n * @brief\n *\t\tquery_resv - convert the servers batch_status structure into a\n *\t\t\t resource_resv/resv_info structs for easier access\n *\n * @param[in]\tresv\t-\ta single reservation in batch_status form\n * @param[in]\tsinfo \t- \tthe server\n *\n * @return\tthe converted resource_resv struct\n *\n */\nresource_resv *\nquery_resv(struct batch_status *resv, server_info *sinfo)\n{\n\tstruct attrl *attrp = NULL;    /* linked list of attributes from server */\n\tresource_resv *advresv = NULL; /* resv_info to be created */\n\tresource_req *resreq = NULL;   /* used for the ATTR_l 
resources */\n\tchar *endp = NULL;\t       /* used with strtol() */\n\tlong count = 0;\t\t       /* used to convert string -> num */\n\tchar *resv_nodes = NULL;       /* used to hold the resv_nodes for later processing */\n\n\tif (resv == NULL)\n\t\treturn NULL;\n\n\tif ((advresv = new resource_resv(resv->name)) == NULL)\n\t\treturn NULL;\n\n\tif ((advresv->resv = new_resv_info()) == NULL) {\n\t\tdelete advresv;\n\t\treturn NULL;\n\t}\n\n\tattrp = resv->attribs;\n\tadvresv->server = sinfo;\n\tadvresv->is_resv = 1;\n\n\twhile (attrp != NULL) {\n\t\tif (!strcmp(attrp->name, ATTR_resv_owner))\n\t\t\tadvresv->user = attrp->value;\n\t\telse if (!strcmp(attrp->name, ATTR_egroup))\n\t\t\tadvresv->group = attrp->value;\n\t\telse if (!strcmp(attrp->name, ATTR_queue))\n\t\t\tadvresv->resv->queuename = string_dup(attrp->value);\n\t\telse if (!strcmp(attrp->name, ATTR_SchedSelect)) {\n\t\t\tadvresv->select = parse_selspec(attrp->value);\n\t\t\tif (advresv->select != NULL && advresv->select->chunks != NULL) {\n\t\t\t\t/* Ignore resv if any of the chunks has no resource req. 
*/\n\t\t\t\tint i;\n\t\t\t\tfor (i = 0; advresv->select->chunks[i] != NULL; i++)\n\t\t\t\t\tif (advresv->select->chunks[i]->req == NULL)\n\t\t\t\t\t\tadvresv->is_invalid = 1;\n\t\t\t}\n\t\t} else if (!strcmp(attrp->name, ATTR_resv_start)) {\n\t\t\tcount = strtol(attrp->value, &endp, 10);\n\t\t\tif (*endp != '\\0')\n\t\t\t\tcount = -1;\n\t\t\tadvresv->resv->req_start = count;\n\t\t} else if (!strcmp(attrp->name, ATTR_resv_end)) {\n\t\t\tcount = strtol(attrp->value, &endp, 10);\n\t\t\tif (*endp != '\\0')\n\t\t\t\tcount = -1;\n\t\t\tadvresv->resv->req_end = count;\n\t\t} else if (!strcmp(attrp->name, ATTR_resv_duration)) {\n\t\t\tcount = strtol(attrp->value, &endp, 10);\n\t\t\tif (*endp != '\\0')\n\t\t\t\tcount = -1;\n\t\t\tadvresv->resv->req_duration = count;\n\t\t} else if (!strcmp(attrp->name, ATTR_resv_alter_revert)) {\n\t\t\tif (!strcmp(attrp->resource, \"start_time\")) {\n\t\t\t\tcount = strtol(attrp->value, &endp, 10);\n\t\t\t\tif (*endp != '\\0')\n\t\t\t\t\tcount = -1;\n\t\t\t\tadvresv->resv->req_start_orig = count;\n\t\t\t} else if (!strcmp(attrp->resource, \"walltime\")) {\n\t\t\t\tadvresv->resv->req_duration_orig = (time_t) res_to_num(attrp->value, NULL);\n\t\t\t}\n\t\t} else if (!strcmp(attrp->name, ATTR_resv_standing_revert)) {\n\t\t\tif (!strcmp(attrp->resource, \"start_time\")) {\n\t\t\t\tcount = strtol(attrp->value, &endp, 10);\n\t\t\t\tif (*endp != '\\0')\n\t\t\t\t\tcount = -1;\n\t\t\t\tadvresv->resv->req_start_standing = count;\n\t\t\t} else if (!strcmp(attrp->resource, \"walltime\")) {\n\t\t\t\tadvresv->resv->req_duration_standing = (time_t) res_to_num(attrp->value, NULL);\n\t\t\t} else if (!strcmp(attrp->resource, \"select\")) {\n\t\t\t\tadvresv->resv->select_standing = parse_selspec(attrp->value);\n\t\t\t}\n\t\t} else if (!strcmp(attrp->name, ATTR_resv_retry)) {\n\t\t\tcount = strtol(attrp->value, &endp, 10);\n\t\t\tif (*endp != '\\0')\n\t\t\t\tcount = -1;\n\t\t\tadvresv->resv->retry_time = count;\n\t\t} else if (!strcmp(attrp->name, 
ATTR_resv_state)) {\n\t\t\tcount = strtol(attrp->value, &endp, 10);\n\t\t\tif (*endp != '\\0')\n\t\t\t\tcount = -1;\n\t\t\tadvresv->resv->resv_state = (enum resv_states) count;\n\t\t} else if (!strcmp(attrp->name, ATTR_resv_substate)) {\n\t\t\tcount = strtol(attrp->value, &endp, 10);\n\t\t\tif (*endp != '\\0')\n\t\t\t\tcount = -1;\n\t\t\tadvresv->resv->resv_substate = (enum resv_states) count;\n\t\t} else if (!strcmp(attrp->name, ATTR_l)) { /* resources requested*/\n\t\t\tresreq = find_alloc_resource_req_by_str(advresv->resreq, attrp->resource);\n\t\t\tif (resreq == NULL) {\n\t\t\t\tdelete advresv;\n\t\t\t\treturn NULL;\n\t\t\t}\n\n\t\t\tif (set_resource_req(resreq, attrp->value) != 1)\n\t\t\t\tadvresv->is_invalid = 1;\n\t\t\telse {\n\t\t\t\tif (advresv->resreq == NULL)\n\t\t\t\t\tadvresv->resreq = resreq;\n\t\t\t\tif (!strcmp(attrp->resource, \"place\")) {\n\t\t\t\t\tadvresv->place_spec = parse_placespec(attrp->value);\n\t\t\t\t\tif (advresv->place_spec == NULL)\n\t\t\t\t\t\tadvresv->is_invalid = 1;\n\t\t\t\t}\n\t\t\t}\n\t\t} else if (!strcmp(attrp->name, ATTR_resv_nodes))\n\t\t\tresv_nodes = attrp->value;\n\t\telse if (!strcmp(attrp->name, ATTR_node_set))\n\t\t\tadvresv->node_set_str = break_comma_list(attrp->value);\n\t\telse if (!strcmp(attrp->name, ATTR_resv_timezone))\n\t\t\tadvresv->resv->timezone = string_dup(attrp->value);\n\t\telse if (!strcmp(attrp->name, ATTR_resv_rrule))\n\t\t\tadvresv->resv->rrule = string_dup(attrp->value);\n\t\telse if (!strcmp(attrp->name, ATTR_resv_execvnodes))\n\t\t\tadvresv->resv->execvnodes_seq = string_dup(attrp->value);\n\t\telse if (!strcmp(attrp->name, ATTR_resv_idx))\n\t\t\tadvresv->resv->resv_idx = atoi(attrp->value);\n\t\telse if (!strcmp(attrp->name, ATTR_resv_standing)) {\n\t\t\tcount = atoi(attrp->value);\n\t\t\tadvresv->resv->is_standing = count;\n\t\t} else if (!strcmp(attrp->name, ATTR_resv_count))\n\t\t\tadvresv->resv->count = atoi(attrp->value);\n\t\telse if (!strcmp(attrp->name, ATTR_partition)) 
{\n\t\t\tadvresv->resv->partition = strdup(attrp->value);\n\t\t} else if (!strcmp(attrp->name, ATTR_SchedSelect_orig)) {\n\t\t\tadvresv->resv->select_orig = parse_selspec(attrp->value);\n\t\t}\n\t\tattrp = attrp->next;\n\t}\n\n\t/* If we have a select_orig, this means we're doing an ralter and reducing the size of our reservation\n\t * We need to map the orig chunks to chunks from the smaller select to make sure we keep them.\n\t * To do this, we set the seq_num of the select chunk to the same seq_num of the select_orig chunk\n\t *\n\t */\n\tif (advresv->resv->select_orig != NULL) {\n\t\tint j = 0;\n\t\tchar *sel_orig_str, *sel_str;\n\t\tauto sel_num = strtol(advresv->select->chunks[j]->str_chunk, &sel_str, 10);\n\n\t\tfor (int i = 0; advresv->resv->select_orig->chunks[i] != NULL; i++) {\n\t\t\tchunk *chk = advresv->resv->select_orig->chunks[i];\n\t\t\tauto sel_orig_num = strtol(chk->str_chunk, &sel_orig_str, 10);\n\t\t\tif (strcmp(sel_orig_str, sel_str) == 0 && sel_num <= sel_orig_num) {\n\t\t\t\tadvresv->select->chunks[j++]->seq_num = chk->seq_num;\n\t\t\t\tif (advresv->select->chunks[j] != NULL)\n\t\t\t\t\tsel_num = strtol(advresv->select->chunks[j]->str_chunk, &sel_str, 10);\n\t\t\t\telse\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\n\tif (resv_nodes != NULL) {\n\t\tselspec *sel;\n\t\tstd::string selectspec;\n\t\t/* parse the execvnode and create an nspec array with ninfo ptrs pointing\n\t\t * to nodes in the real server\n\t\t */\n\t\tif (advresv->resv->select_orig != NULL)\n\t\t\tsel = advresv->resv->select_orig;\n\t\telse\n\t\t\tsel = advresv->select;\n\t\tadvresv->resv->orig_nspec_arr = parse_execvnode(resv_nodes, sinfo, sel);\n\t\tadvresv->nspec_arr = combine_nspec_array(advresv->resv->orig_nspec_arr);\n\t\tadvresv->ninfo_arr = create_node_array_from_nspec(advresv->nspec_arr);\n\n\t\t/* create a node info array by copying the nodes and setting\n\t\t * available resources to only the ones assigned to the reservation\n\t\t */\n\t\tadvresv->resv->resv_nodes = 
create_resv_nodes(advresv->nspec_arr, sinfo);\n\t\tselectspec = create_select_from_nspec(advresv->resv->orig_nspec_arr);\n\t\tadvresv->execselect = parse_selspec(selectspec);\n\t}\n\n\t/* If reservation is unconfirmed and the number of occurrences is 0 then flag\n\t * the reservation as invalid. This is an extra check but isn't supposed to\n\t * happen because the server will purge such reservations.\n\t */\n\tif (advresv->resv->resv_state == RESV_UNCONFIRMED &&\n\t    get_num_occurrences(advresv->resv->rrule,\n\t\t\t\tadvresv->resv->req_start,\n\t\t\t\tadvresv->resv->timezone) == 0)\n\t\tadvresv->is_invalid = 1;\n\n\t/* When a reservation is recognized as DEGRADED, it is converted into\n\t * state = CONFIRMED; substate = DEGRADED\n\t * From the scheduler's perspective, the reservation is CONFIRMED in that its\n\t * allocated resources remain scheduled in the calendar, but it is\n\t * handled as an UNCONFIRMED reservation when its resources are to be\n\t * replaced.\n\t */\n\tif (advresv->resv->resv_state == RESV_DEGRADED) {\n\t\tadvresv->resv->resv_state = RESV_CONFIRMED;\n\t\tif (advresv->resv->resv_substate != RESV_IN_CONFLICT)\n\t\t\tadvresv->resv->resv_substate = RESV_DEGRADED;\n\t}\n\n\tif (advresv->resv->resv_state == RESV_BEING_ALTERED) {\n\t\ttime_t alter_end = advresv->resv->req_start_orig + advresv->resv->req_duration_orig;\n\t\tif (advresv->resv->req_start_orig <= sinfo->server_time && alter_end >= sinfo->server_time)\n\t\t\tadvresv->resv->is_running = 1;\n\t} else if (advresv->resv->req_start <= sinfo->server_time && advresv->resv->req_end >= sinfo->server_time)\n\t\tadvresv->resv->is_running = 1;\n\n\tadvresv->rank = get_sched_rank();\n\n\tadvresv->aoename = getaoename(advresv->select);\n\tadvresv->eoename = geteoename(advresv->select);\n\n\t/* reservations requesting AOE mark nodes as exclusive */\n\tif (advresv->aoename) {\n\t\tadvresv->place_spec->share = 0;\n\t\tadvresv->place_spec->excl = 1;\n\t}\n\n\t/*\n\t * Check to see if we can attempt to 
confirm this reservation.\n\t * If we can, then all we will do in this cycle is attempt\n\t * to confirm reservations.  In that case, build the calendar\n\t * using the hard durations of jobs.\n\t */\n\tif (will_confirm(advresv, sinfo->server_time))\n\t\tsinfo->use_hard_duration = 1;\n\n\tadvresv->duration = advresv->resv->req_duration;\n\tadvresv->hard_duration = advresv->duration;\n\tif (advresv->resv->resv_state != RESV_UNCONFIRMED) {\n\t\tadvresv->start = advresv->resv->req_start;\n\t\tif (advresv->resv->resv_state == RESV_BEING_DELETED ||\n\t\t    advresv->start + advresv->duration <= sinfo->server_time) {\n\t\t\tadvresv->end = sinfo->server_time + EXITING_TIME;\n\t\t} else\n\t\t\tadvresv->end = advresv->resv->req_end;\n\t}\n\n\tif (advresv->node_set_str != NULL) {\n\t\tadvresv->node_set = create_node_array_from_str(\n\t\t\tadvresv->server->unassoc_nodes, advresv->node_set_str);\n\t}\n\tadvresv->resv->resv_queue =\n\t\tfind_queue_info(sinfo->queues, advresv->resv->queuename);\n\n\t/* It's possible for an in-conflict reservation to be running with no nodes */\n\tif (is_resresv_running(advresv) && advresv->ninfo_arr != NULL) {\n\t\tfor (int j = 0; advresv->ninfo_arr[j] != NULL; j++)\n\t\t\tadvresv->ninfo_arr[j]->num_run_resv++;\n\t}\n\n\treturn advresv;\n}\n\n/**\n * @brief\n *\t\tnew_resv_info - allocate and initialize new resv_info structure\n *\n * @return\tthe new structure\n *\n */\nresv_info *\nnew_resv_info()\n{\n\tresv_info *rinfo;\n\n\trinfo = new resv_info();\n\n\trinfo->queuename = NULL;\n\trinfo->req_start = UNSPECIFIED;\n\trinfo->req_start_orig = UNSPECIFIED;\n\trinfo->req_start_standing = UNSPECIFIED;\n\trinfo->req_end = UNSPECIFIED;\n\trinfo->req_duration = UNSPECIFIED;\n\trinfo->req_duration_orig = UNSPECIFIED;\n\trinfo->req_duration_standing = UNSPECIFIED;\n\trinfo->retry_time = UNSPECIFIED;\n\trinfo->resv_state = RESV_NONE;\n\trinfo->resv_substate = RESV_NONE;\n\trinfo->resv_queue = NULL;\n\trinfo->resv_nodes = NULL;\n\trinfo->timezone = 
NULL;\n\trinfo->rrule = NULL;\n\trinfo->resv_idx = 1;\n\trinfo->execvnodes_seq = NULL;\n\trinfo->count = 0;\n\trinfo->is_standing = 0;\n\trinfo->is_running = 0;\n\trinfo->occr_start_arr = NULL;\n\trinfo->partition = NULL;\n\trinfo->select_orig = NULL;\n\trinfo->select_standing = NULL;\n\n\treturn rinfo;\n}\n\n/**\n * @brief\n *\t\tfree_resv_info - free all the memory used by a resv_info structure\n *\n * @param[in]\trinfo\t-\tthe resv_info to free\n *\n * @return\tnothing\n *\n */\nvoid\nfree_resv_info(resv_info *rinfo)\n{\n\tif (rinfo == NULL)\n\t\treturn;\n\n\tif (rinfo->queuename != NULL)\n\t\tfree(rinfo->queuename);\n\n\tif (rinfo->resv_nodes != NULL)\n\t\tfree_nodes(rinfo->resv_nodes);\n\n\tif (rinfo->timezone != NULL)\n\t\tfree(rinfo->timezone);\n\n\tif (rinfo->rrule != NULL)\n\t\tfree(rinfo->rrule);\n\n\tif (rinfo->execvnodes_seq != NULL)\n\t\tfree(rinfo->execvnodes_seq);\n\n\tif (rinfo->occr_start_arr != NULL)\n\t\tfree(rinfo->occr_start_arr);\n\n\tif (rinfo->partition != NULL)\n\t\tfree(rinfo->partition);\n\n\tif (rinfo->select_orig != NULL)\n\t\tdelete rinfo->select_orig;\n\n\tif (rinfo->select_standing != NULL)\n\t\tdelete rinfo->select_standing;\n\n\tfree_nspecs(rinfo->orig_nspec_arr);\n\n\tdelete rinfo;\n}\n\n/**\n * @brief\n *\t\tdup_resv_info - duplicate a reservation\n *\n * @param[in]\trinfo\t-\tthe reservation to duplicate\n * @param[in]\tsinfo \t-\tthe server the NEW reservation belongs to\n *\n * @return\tduplicated reservation\n *\n */\nresv_info *\ndup_resv_info(resv_info *rinfo, server_info *sinfo)\n{\n\tresv_info *nrinfo;\n\n\tif (rinfo == NULL)\n\t\treturn NULL;\n\n\tif ((nrinfo = new_resv_info()) == NULL)\n\t\treturn NULL;\n\n\tif (rinfo->queuename != NULL)\n\t\tnrinfo->queuename = string_dup(rinfo->queuename);\n\n\tnrinfo->req_start = rinfo->req_start;\n\tnrinfo->req_start_orig = rinfo->req_start_orig;\n\tnrinfo->req_start_standing = rinfo->req_start_standing;\n\tnrinfo->req_end = rinfo->req_end;\n\tnrinfo->req_duration = 
rinfo->req_duration;\n\tnrinfo->req_duration_orig = rinfo->req_duration_orig;\n\tnrinfo->req_duration_standing = rinfo->req_duration_standing;\n\tnrinfo->retry_time = rinfo->retry_time;\n\tnrinfo->resv_state = rinfo->resv_state;\n\tnrinfo->resv_substate = rinfo->resv_substate;\n\tnrinfo->is_standing = rinfo->is_standing;\n\tnrinfo->is_running = rinfo->is_running;\n\tnrinfo->timezone = string_dup(rinfo->timezone);\n\tnrinfo->rrule = string_dup(rinfo->rrule);\n\tnrinfo->resv_idx = rinfo->resv_idx;\n\tnrinfo->execvnodes_seq = string_dup(rinfo->execvnodes_seq);\n\tnrinfo->count = rinfo->count;\n\tif (rinfo->partition != NULL)\n\t\tnrinfo->partition = string_dup(rinfo->partition);\n\tif (rinfo->select_orig != NULL)\n\t\tnrinfo->select_orig = new selspec(*rinfo->select_orig);\n\tif (rinfo->select_standing != NULL)\n\t\tnrinfo->select_standing = new selspec(*rinfo->select_standing);\n\n\t/* the queues may not be available right now.  If they aren't, we'll\n\t * catch this when we duplicate the queues\n\t */\n\tif (rinfo->resv_queue != NULL)\n\t\tnrinfo->resv_queue = find_queue_info(sinfo->queues, rinfo->queuename);\n\n\tnrinfo->resv_nodes = dup_nodes(rinfo->resv_nodes, sinfo, NO_FLAGS);\n\n\treturn nrinfo;\n}\n\n/**\n * @brief\n * \t\tcheck for new reservations and handle them\n *\t\tif we can serve the reservation, we reserve it\n *\t\tand if we can't, we delete the reservation\n * @par\n *  \tFor a standing reservation, each occurrence is unrolled and attempted\n * \t\tto be confirmed. If a single occurrence fails to be confirmed, then the\n * \t\tstanding reservation is rejected.\n * @par\n *  \tFor a degraded reservation, the resources allocated to the reservation\n * \t\tare free'd in the simulated universe, and we attempt to reconfirm resources\n * \t\tfor it. If it fails then we inform the server that the reconfirmation has\n * \t\tfailed. 
If it succeeds, then the previously allocated resources are freed\n * \t\tfrom the real universe and replaced by the newly allocated resources.\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tpbs_sd\t-\tcommunication descriptor to PBS server\n * @param[in]\tresvs\t-\tlist of reservations\n * @param[in]\tsinfo\t-\tthe server who owns the reservations\n *\n * @return\tint\n * @retval\tnumber of reservations confirmed\n * @retval\t-1\t: something went wrong with confirmation, retry later\n *\n */\nint\ncheck_new_reservations(status *policy, int pbs_sd, resource_resv **resvs, server_info *sinfo)\n{\n\tint count = 0; /* new reservation count */\n\tint pbsrc = 0; /* return code from pbs_confirmresv() */\n\n\tserver_info *nsinfo = NULL;\n\tresource_resv *nresv = NULL;\n\tresource_resv *nresv_copy = NULL;\n\tresource_resv **tmp_resresv = NULL;\n\n\tchar **occr_execvnodes_arr = NULL;\n\tint occr_execvnodes_count = 0;\n\tchar **tofree = NULL;\n\tint occr_count = 1;\n\tint have_alter_request = 0;\n\tint i;\n\tint j, j_adjusted;\n\tschd_error *err;\n\n\tif (sinfo == NULL)\n\t\treturn -1;\n\n\t/* If no reservations to check then return, this is not an error */\n\tif (resvs == NULL)\n\t\treturn 0;\n\n\terr = new_schd_error();\n\tif (err == NULL)\n\t\treturn -1;\n\n\tqsort(sinfo->resvs, sinfo->num_resvs, sizeof(resource_resv *), cmp_resv_state);\n\n\tfor (i = 0; sinfo->resvs[i] != NULL; i++) {\n\t\tif (sinfo->resvs[i]->resv == NULL) {\n\t\t\tlog_event(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_INFO,\n\t\t\t\t  sinfo->resvs[i]->name,\n\t\t\t\t  \"Error determining if reservation can be confirmed: \"\n\t\t\t\t  \"Could not find the reservation.\");\n\t\t\tcontinue;\n\t\t}\n\n\t\t/* If the reservation is unconfirmed OR is degraded and not running, with a\n\t\t * retry time that is in the past, then the reservation has to be\n\t\t * respectively confirmed and reconfirmed.\n\t\t */\n\t\tif (will_confirm(sinfo->resvs[i], sinfo->server_time)) {\n\t\t\t/* Clone the real universe 
for simulation scratch work. This universe\n\t\t\t * will be garbage collected after simulation completes.\n\t\t\t */\n\t\t\ttry {\n\t\t\t\tnsinfo = new server_info(*sinfo);\n\t\t\t} catch (std::exception &e) {\n\t\t\t\treturn -1;\n\t\t\t}\n\n\t\t\t/* Resource reservations are ordered by event time, in the case of a\n\t\t\t * standing reservation, the first to be found will be the \"parent\"\n\t\t\t * reservation\n\t\t\t */\n\t\t\tnresv = find_resource_resv_by_indrank(nsinfo->resvs, sinfo->resvs[i]->resresv_ind, sinfo->resvs[i]->rank);\n\t\t\tif (nresv == NULL) {\n\t\t\t\tlog_event(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_INFO,\n\t\t\t\t\t  sinfo->resvs[i]->name,\n\t\t\t\t\t  \"Error determining if reservation can be confirmed: \"\n\t\t\t\t\t  \"Resource not found.\");\n\t\t\t\tdelete nsinfo;\n\t\t\t\treturn -1;\n\t\t\t}\n\n\t\t\trelease_running_resv_nodes(nresv, nsinfo);\n\n\t\t\t/* Attempt to confirm the reservation. For a standing reservation,\n\t\t\t * each occurrence is unrolled and attempted to be confirmed within the\n\t\t\t * function.\n\t\t\t */\n\t\t\tpbsrc = confirm_reservation(policy, pbs_sd, nresv, nsinfo);\n\n\t\t\t/* confirm_reservation only returns success if all occurrences were\n\t\t\t * confirmed and the communication with the server returned no error\n\t\t\t */\n\t\t\tif (pbsrc == RESV_CONFIRM_SUCCESS) {\n\t\t\t\t/* If a degraded reservation, then we need to release the resources\n\t\t\t\t * that were previously allocated to the reservation in the real\n\t\t\t\t * universe. 
These resources will be replaced by the newly allocated\n\t\t\t\t * ones from the simulated server universe.\n\t\t\t\t */\n\t\t\t\tif (nresv->resv->resv_substate == RESV_DEGRADED || nresv->resv->resv_substate == RESV_IN_CONFLICT)\n\t\t\t\t\trelease_nodes(sinfo->resvs[i]);\n\n\t\t\t\t/* the number of occurrences is set during the confirmation process */\n\t\t\t\toccr_count = nresv->resv->count;\n\n\t\t\t\t/* Now deal with updating the \"real\" server universe */\n\n\t\t\t\t/* If a standing reservation, unroll the string representation of the\n\t\t\t\t * sequence of execvnodes into an array of pointer to execvnodes */\n\t\t\t\tif (nresv->resv->is_standing) {\n\t\t\t\t\t/* \"tofree\" is a pointer array to a list of unique execvnodes. It is\n\t\t\t\t\t * safely freed exclusively by calling free_execvnode_seq\n\t\t\t\t\t */\n\t\t\t\t\toccr_execvnodes_count = get_execvnodes_count(nresv->resv->execvnodes_seq);\n\t\t\t\t\toccr_execvnodes_arr = unroll_execvnode_seq(\n\t\t\t\t\t\tnresv->resv->execvnodes_seq, &tofree);\n\t\t\t\t\tif (occr_execvnodes_arr == NULL) {\n\t\t\t\t\t\tlog_event(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_INFO,\n\t\t\t\t\t\t\t  sinfo->resvs[i]->name,\n\t\t\t\t\t\t\t  \"Error unrolling standing reservation.\");\n\t\t\t\t\t\tdelete nsinfo;\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\t/* Since we will use occr_execvnodes_arr both for standing and advance\n\t\t\t\t\t * reservations, we create an array with a single entry to hold the\n\t\t\t\t\t * advance reservation's execvnode.\n\t\t\t\t\t */\n\t\t\t\t\toccr_execvnodes_arr = static_cast<char **>(malloc(sizeof(char *)));\n\t\t\t\t\tif (occr_execvnodes_arr == NULL) {\n\t\t\t\t\t\tdelete nsinfo;\n\t\t\t\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\t*occr_execvnodes_arr = nresv->resv->execvnodes_seq;\n\t\t\t\t\toccr_execvnodes_count = 1;\n\t\t\t\t}\n\n\t\t\t\t/* Iterate over all occurrences (would be 1 for advance reservations)\n\t\t\t\t * and 
copy the information collected during simulation back into the\n\t\t\t\t * real universe\n\t\t\t\t */\n\t\t\t\tfor (j = 0; j < occr_count; j++) {\n\t\t\t\t\t/* On first occurrence, the reservation is the \"parent\" reservation */\n\t\t\t\t\tif (j == 0) {\n\t\t\t\t\t\tnresv_copy = sinfo->resvs[i];\n\t\t\t\t\t}\n\t\t\t\t\t/* Subsequent occurrences need to be either modified or created\n\t\t\t\t\t * depending on whether the reservation is to be reconfirmed or\n\t\t\t\t\t * is getting confirmed for the first time.\n\t\t\t\t\t */\n\t\t\t\t\telse {\n\t\t\t\t\t\t/* For a degraded reservation, it had already been confirmed in a\n\t\t\t\t\t\t * previous scheduling cycle. We retrieve the existing object from\n\t\t\t\t\t\t * the all_resresv list\n\t\t\t\t\t\t */\n\t\t\t\t\t\tif (nresv->resv->resv_substate == RESV_DEGRADED || nresv->resv->resv_substate == RESV_IN_CONFLICT) {\n\t\t\t\t\t\t\tnresv_copy = find_resource_resv_by_time(sinfo->all_resresv,\n\t\t\t\t\t\t\t\t\t\t\t\tnresv_copy->name, nresv->resv->occr_start_arr[j]);\n\t\t\t\t\t\t\tif (nresv_copy == NULL) {\n\t\t\t\t\t\t\t\tlog_event(PBSEVENT_RESV, PBS_EVENTCLASS_RESV,\n\t\t\t\t\t\t\t\t\t  LOG_INFO, nresv->name,\n\t\t\t\t\t\t\t\t\t  \"Error determining if reservation can be confirmed: \"\n\t\t\t\t\t\t\t\t\t  \"Could not find reservation by time.\");\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t/* For a new, unconfirmed, reservation, we duplicate the parent\n\t\t\t\t\t\t\t * reservation\n\t\t\t\t\t\t\t */\n\t\t\t\t\t\t\tnresv_copy = dup_resource_resv(nresv_copy, sinfo, NULL);\n\t\t\t\t\t\t\tif (nresv_copy == NULL)\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\tif (nresv_copy->resv->select_standing != NULL) {\n\t\t\t\t\t\t\t\tdelete nresv_copy->select;\n\t\t\t\t\t\t\t\tnresv_copy->select = new selspec(*nresv_copy->resv->select_standing);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\trelease_nodes(nresv_copy);\n\n\t\t\t\t\tj_adjusted = j - (occr_count - 
occr_execvnodes_count);\n\n\t\t\t\t\tif (j_adjusted >= 0 && j_adjusted < occr_execvnodes_count) {\n\t\t\t\t\t\tnresv_copy->resv->orig_nspec_arr =\n\t\t\t\t\t\t\tparse_execvnode(occr_execvnodes_arr[j_adjusted], sinfo, nresv_copy->select);\n\t\t\t\t\t} else {\n\t\t\t\t\t\tnresv_copy->resv->orig_nspec_arr = {};\n\t\t\t\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_RESV,\n\t\t\t\t\t\t\t   LOG_INFO, nresv->name,\n\t\t\t\t\t\t\t   \"%s: occurrence %d has no execvnodes, proceeding without assigned resources\",\n\t\t\t\t\t\t\t   __func__, j + 1);\n\t\t\t\t\t}\n\n\t\t\t\t\tnresv_copy->nspec_arr = combine_nspec_array(nresv_copy->resv->orig_nspec_arr);\n\t\t\t\t\tnresv_copy->ninfo_arr = create_node_array_from_nspec(nresv_copy->nspec_arr);\n\t\t\t\t\tnresv_copy->resv->resv_nodes = create_resv_nodes(nresv_copy->nspec_arr, sinfo);\n\n\t\t\t\t\t/* Note that the sequence of occurrence dates and times is determined\n\t\t\t\t\t * during confirm_reservation and set to the reservation in the\n\t\t\t\t\t * simulated server\n\t\t\t\t\t */\n\t\t\t\t\tnresv_copy->start = nresv->resv->occr_start_arr[j];\n\t\t\t\t\tnresv_copy->end = nresv_copy->start + nresv_copy->duration;\n\n\t\t\t\t\t/* Only add the occurrence to the real universe if we are not\n\t\t\t\t\t * processing a degraded reservation as otherwise, the resources\n\t\t\t\t\t * had already been added to the real universe in query_reservations\n\t\t\t\t\t */\n\t\t\t\t\tif (nresv_copy->resv->resv_substate != RESV_DEGRADED && nresv_copy->resv->resv_substate != RESV_IN_CONFLICT) {\n\t\t\t\t\t\ttimed_event *te_start;\n\t\t\t\t\t\ttimed_event *te_end;\n\t\t\t\t\t\tte_start = create_event(TIMED_RUN_EVENT, nresv_copy->start,\n\t\t\t\t\t\t\t\t\tnresv_copy, NULL, NULL);\n\t\t\t\t\t\tif (te_start == NULL)\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\tte_end = create_event(TIMED_END_EVENT, nresv_copy->end,\n\t\t\t\t\t\t\t\t      nresv_copy, NULL, NULL);\n\t\t\t\t\t\tif (te_end == NULL) 
{\n\t\t\t\t\t\t\tfree_timed_event(te_start);\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tadd_event(sinfo->calendar, te_start);\n\t\t\t\t\t\tadd_event(sinfo->calendar, te_end);\n\n\t\t\t\t\t\tif (j > 0) {\n\t\t\t\t\t\t\ttmp_resresv = add_resresv_to_array(sinfo->all_resresv, nresv_copy, SET_RESRESV_INDEX);\n\t\t\t\t\t\t\tif (tmp_resresv == NULL)\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\tsinfo->all_resresv = tmp_resresv;\n\t\t\t\t\t\t\ttmp_resresv = add_resresv_to_array(sinfo->resvs, nresv_copy, NO_FLAGS);\n\t\t\t\t\t\t\tif (tmp_resresv == NULL)\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\tsinfo->resvs = tmp_resresv;\n\t\t\t\t\t\t\tsinfo->num_resvs++;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\t/* Confirm the reservation such that it is not looked at again in the\n\t\t\t\t\t * main loop of this function.\n\t\t\t\t\t */\n\t\t\t\t\tnresv_copy->resv->resv_state = RESV_CONFIRMED;\n\t\t\t\t\tnresv_copy->resv->resv_substate = RESV_CONFIRMED;\n\t\t\t\t}\n\t\t\t\t/* increment the count if we successfully processed all occurrences */\n\t\t\t\tif (j == occr_count)\n\t\t\t\t\tcount++;\n\t\t\t} else if (pbsrc == RESV_CONFIRM_FAIL) {\n\t\t\t\t/* For a degraded reservation, it had already been confirmed in a\n\t\t\t\t * previous scheduling cycle. 
We retrieve the existing object from\n\t\t\t\t * the all_resresv list and update the retry_time to break out of\n\t\t\t\t * the main loop that checks for reservations that need confirmation\n\t\t\t\t */\n\t\t\t\tif (nresv->resv->resv_substate == RESV_DEGRADED || nresv->resv->resv_substate == RESV_IN_CONFLICT) {\n\t\t\t\t\tfor (j = 0; j < nresv->resv->count; j++) {\n\t\t\t\t\t\tnresv_copy = find_resource_resv_by_time(sinfo->all_resresv,\n\t\t\t\t\t\t\t\t\t\t\tnresv->name, nresv->resv->occr_start_arr[j]);\n\t\t\t\t\t\tif (nresv_copy == NULL) {\n\t\t\t\t\t\t\tlog_event(PBSEVENT_RESV, PBS_EVENTCLASS_RESV,\n\t\t\t\t\t\t\t\t  LOG_INFO, nresv->name,\n\t\t\t\t\t\t\t\t  \"Error determining if reservation can be confirmed: \"\n\t\t\t\t\t\t\t\t  \"Could not find reservation by time.\");\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t\t/* Update the retry time such that occurrences of a standing\n\t\t\t\t\t\t * reservation do not independently attempt to be reconfirmed\n\t\t\t\t\t\t * This is meant to break out of the conditional that checks what\n\t\t\t\t\t\t * will be considered \"confirmable\" by the scheduler. 
Either updating the substate to something other than RESV_DEGRADED or\n\t\t\t\t\t\t * updating the reservation retry time to some time in the future\n\t\t\t\t\t\t * would invalidate the condition.\n\t\t\t\t\t\t *\n\t\t\t\t\t\t * We choose to update the retry_time for consistency with what\n\t\t\t\t\t\t * the server actually does upon receiving the message informing\n\t\t\t\t\t\t * it that the reservation could not be reconfirmed.\n\t\t\t\t\t\t */\n\t\t\t\t\t\tnresv_copy->resv->retry_time = sinfo->server_time + 1;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\t/* clean up */\n\t\t\tfree(nresv->resv->occr_start_arr);\n\t\t\tnresv->resv->occr_start_arr = NULL;\n\t\t\tfree_execvnode_seq(tofree);\n\t\t\ttofree = NULL;\n\t\t\tfree(occr_execvnodes_arr);\n\t\t\toccr_execvnodes_arr = NULL;\n\n\t\t\t/* Clean up simulated server info */\n\t\t\tdelete nsinfo;\n\t\t}\n\t\tif (sinfo->resvs[i]->resv->resv_state == RESV_BEING_ALTERED)\n\t\t\thave_alter_request = 1;\n\n\t\t/* Something went wrong with reservation confirmation, retry later */\n\t\tif (pbsrc == RESV_CONFIRM_RETRY) {\n\t\t\tfree_schd_error(err);\n\t\t\treturn -1;\n\t\t}\n\t}\n\tfree_schd_error(err);\n\t/* If a reservation is being altered, its attributes are the new altered attributes.\n\t * If the alter fails, we can't continue with a cycle because the reservation\n\t * reverted to its pre-altered state, but the copy we have is as if the alter succeeded.\n\t * If no reservations have been confirmed, we will run a normal cycle.\n\t */\n\tif (have_alter_request && count == 0)\n\t\treturn -1;\n\treturn count;\n}\n\n/**\n * @brief\n * \t\tmark the timed events associated with a resource reservation as\n * \t\tdisabled.\n *\n * @param[in]\tevents\t-\tthe calendar event list (currently unused).\n * @param[in]\tresv\t-\tthe resource reservation being disabled\n *\n * @return\tint\n * @retval\t1\t: on success\n * @retval\t0\t: on failure\n */\nstatic 
int\ndisable_reservation_occurrence(timed_event *events,\n\t\t\t       resource_resv *resv)\n{\n\tif (resv == NULL)\n\t\treturn 0;\n\n\tif (resv->run_event != NULL)\n\t\tset_timed_event_disabled(resv->run_event, 1);\n\tif (resv->end_event != NULL)\n\t\tset_timed_event_disabled(resv->end_event, 1);\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tdetermines if a resource reservation can be satisfied\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tpbs_sd\t-\tconnection to server\n * @param[in]\tunconf_resv\t-\tthe reservation to confirm\n * @param[in]\tnsinfo\t-\tthe simulated server info universe\n *\n * @return\tint\n * @retval\tRESV_CONFIRM_SUCCESS\n * @retval\tRESV_CONFIRM_FAIL\n *\n * @note\n * \t\tThis function modifies the resource reservation by adding the number of\n * \t\toccurrences and the sequence of occurrence times, which are then used when\n * \t\tchecking for new and degraded reservations.\n */\nint\nconfirm_reservation(status *policy, int pbs_sd, resource_resv *unconf_resv, server_info *nsinfo)\n{\n\ttime_t sim_time;\t\t    /* time in simulation */\n\tunsigned int simrc = TIMED_NOEVENT; /* ret code from simulate_events() */\n\tschd_error *err;\n\tint pbsrc = 0;\t\t\t\t     /* return code from pbs_confirmresv() */\n\tenum resv_conf rconf = RESV_CONFIRM_SUCCESS; /* assume reconf success */\n\tchar logmsg[MAX_LOG_SIZE];\n\tchar logmsg2[MAX_LOG_SIZE];\n\n\tresource_resv *nresv = unconf_resv;\n\tresource_resv **tmp_resresv = NULL;\n\tresource_resv *nresv_copy = NULL;\n\n\tresource_resv *nresv_parent = nresv; /* the \"original\" / parent reservation */\n\n\tint confirmd_occr = 0; /* the number of confirmed occurrence(s) */\n\tint cur_count;\n\n\tint vnodes_down = 0; /* the number of vnodes that are down */\n\n\t/* resv_start_time is used both for calculating the time of an ASAP\n\t * reservation and to keep track of the start time of the first occurrence\n\t * of a standing reservation.\n\t */\n\ttime_t resv_start_time = 0;    /* estimated start time 
for resv */\n\ttime_t *occr_start_arr = NULL; /* an array of occurrence start times */\n\n\tstd::string execvnodes;\n\tchar *short_xc = NULL;\n\tchar *tmp = NULL;\n\ttime_t next;\n\n\tchar *rrule = nresv->resv->rrule; /* NULL for advance reservation */\n\ttime_t dtstart = nresv->resv->req_start;\n\tchar *tz = nresv->resv->timezone;\n\tint occr_count = nresv->resv->count;\n\tint ridx = nresv->resv->resv_idx - 1;\n\n\tlogmsg[0] = logmsg2[0] = '\\0';\n\n\terr = new_schd_error();\n\tif (err == NULL)\n\t\treturn RESV_CONFIRM_FAIL;\n\n\t/* If the number of occurrences is not set, this is a first time confirmation\n\t * otherwise it is a reconfirmation request\n\t */\n\tif (occr_count == 0)\n\t\toccr_count = get_num_occurrences(rrule, dtstart, tz);\n\telse {\n\t\t/* If the number of occurrences (occr_count) was already set, then we are\n\t\t * dealing with the reconfirmation of a reservation. We need to adjust the\n\t\t * number of occurrences to account only for the remaining occurrences and\n\t\t * not the original number at the time the reservation was first submitted\n\t\t */\n\t\tif (nresv->resv->resv_state != RESV_BEING_ALTERED)\n\t\t\toccr_count -= ridx;\n\t\telse\n\t\t\toccr_count = 1;\n\t}\n\n\tif ((occr_start_arr = static_cast<time_t *>(calloc(sizeof(time_t), occr_count))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn RESV_CONFIRM_FAIL;\n\t}\n\n\t/* Each reservation attempts to confirm a set of nodes on which to run for\n\t * a given start and end time. When handling an advance reservation,\n\t * the current reservation is considered. 
For a standing reservation,\n\t * each occurrence is processed by duplicating the parent reservation\n\t * and attempting to confirm it.\n\t *\n\t * All the scratch work attempting to confirm the reservation takes place\n\t * in a deep copy of the server info, and is done by simulating events just\n\t * as if the server were processing them.\n\t *\n\t * At the end of the simulation, this cloned server info is completely\n\t * wiped and a fresh version is created from the recorded 'sinfo' state.\n\t *\n\t * It's critical that when handling a standing reservation, each occurrence\n\t * be added to the server info so that the duplicated server info has up to\n\t * date information.\n\t */\n\tcur_count = 0;\n\tfor (int j = 0; j < occr_count && rconf == RESV_CONFIRM_SUCCESS;\n\t     j++, cur_count = j) {\n\t\t/* Get the start time of the next occurrence.\n\t\t * See the call to the same function in query_reservations for a more in-depth\n\t\t * description.\n\t\t */\n\t\tnext = get_occurrence(rrule, dtstart, tz, j + 1);\n\t\t/* keep track of each occurrence's start time */\n\t\toccr_start_arr[j] = next;\n\n\t\t/* Processing occurrences of a standing reservation requires duplicating\n\t\t * the \"parent\" reservation as a template for each occurrence, modifying its\n\t\t * start time and duration, and running a simulation for that occurrence.\n\t\t *\n\t\t * This duplication is only done for subsequent occurrences, not for the\n\t\t * parent reservation.\n\t\t */\n\t\tif (j > 0) {\n\t\t\t/* If the reservation is degraded, then it has already been added to the\n\t\t\t * real universe, so instead of duplicating the parent reservation, it\n\t\t\t * is retrieved from the duplicated real server universe.\n\t\t\t */\n\t\t\tif (nresv->resv->resv_substate == RESV_DEGRADED || nresv->resv->resv_substate == RESV_IN_CONFLICT) {\n\t\t\t\tnresv_copy = find_resource_resv_by_time(nsinfo->all_resresv,\n\t\t\t\t\t\t\t\t\tnresv->name, next);\n\t\t\t\tif (nresv_copy == NULL) 
{\n\t\t\t\t\tlog_event(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_INFO, nresv->name,\n\t\t\t\t\t\t  \"Error determining if reservation can be confirmed: \"\n\t\t\t\t\t\t  \"Could not find reservation by time.\");\n\t\t\t\t\trconf = RESV_CONFIRM_FAIL;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tnresv = nresv_copy;\n\t\t\t} else {\n\t\t\t\tnresv_copy = dup_resource_resv(nresv, nsinfo, NULL);\n\n\t\t\t\tif (nresv_copy == NULL) {\n\t\t\t\t\trconf = RESV_CONFIRM_FAIL;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tnresv = nresv_copy;\n\n\t\t\t\t/* add it to the simulated universe of reservations.\n\t\t\t\t * Also add it to the reservation list (resvs) to be garbage collected\n\t\t\t\t */\n\t\t\t\ttmp_resresv = add_resresv_to_array(nsinfo->resvs, nresv, NO_FLAGS);\n\t\t\t\tif (tmp_resresv == NULL) {\n\t\t\t\t\tdelete nresv;\n\t\t\t\t\trconf = RESV_CONFIRM_FAIL;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tnsinfo->resvs = tmp_resresv;\n\n\t\t\t\ttmp_resresv = add_resresv_to_array(nsinfo->all_resresv, nresv, SET_RESRESV_INDEX);\n\t\t\t\tif (tmp_resresv == NULL) {\n\t\t\t\t\tdelete nresv;\n\t\t\t\t\trconf = RESV_CONFIRM_FAIL;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tnsinfo->all_resresv = tmp_resresv;\n\t\t\t\tnsinfo->num_resvs++;\n\t\t\t}\n\t\t\texecvnodes += TOKEN_SEPARATOR;\n\t\t}\n\n\t\t/* If reservation is degraded, then verify that some node(s) associated to\n\t\t * the reservation are down before attempting to reconfirm it. 
If some\n\t\t * are, then resources allocated to this reservation are released and the\n\t\t * reconfirmation proceeds.\n\t\t */\n\t\tif (nresv->resv->resv_substate == RESV_DEGRADED || nresv->resv->resv_substate == RESV_IN_CONFLICT ||\n\t\t    nresv->resv->resv_state == RESV_BEING_ALTERED) {\n\t\t\tselspec *spec = new selspec(*nresv->select);\n\t\t\tvnodes_down = resv_reduce_chunks(nresv, spec);\n\n\t\t\tif (vnodes_down < 0 && nresv->resv->resv_substate != RESV_IN_CONFLICT) {\n\t\t\t\tif (vnodes_down == -1)\n\t\t\t\t\tsnprintf(logmsg, sizeof(logmsg), \"Reservation has running jobs in it\");\n\t\t\t\trconf = RESV_CONFIRM_FAIL;\n\t\t\t\tbreak;\n\t\t\t} else if (nresv->resv->is_standing && nresv->resv->resv_state == RESV_DELETING_JOBS) {\n\t\t\t\tsnprintf(logmsg, sizeof(logmsg), \"Occurrence is ending, will try later\");\n\t\t\t\trconf = RESV_CONFIRM_FAIL;\n\t\t\t} else if (vnodes_down > 0 || nresv->resv->resv_substate == RESV_IN_CONFLICT ||\n\t\t\t\t   nresv->resv->resv_state == RESV_BEING_ALTERED) {\n\t\t\t\tif (nresv->resv->is_running) {\n\t\t\t\t\tstd::string sel;\n\t\t\t\t\tdelete nresv->execselect;\n\n\t\t\t\t\tsel = create_select_from_nspec(nresv->resv->orig_nspec_arr);\n\t\t\t\t\tnresv->execselect = parse_selspec(sel);\n\t\t\t\t\tfor (size_t ind = 0; ind < nresv->resv->orig_nspec_arr.size(); ind++) {\n\t\t\t\t\t\tnresv->execselect->chunks[ind]->seq_num = nresv->resv->orig_nspec_arr[ind]->seq_num;\n\t\t\t\t\t}\n\t\t\t\t\tif (spec != NULL) {\n\t\t\t\t\t\tif (nresv->execselect == NULL)\n\t\t\t\t\t\t\tnresv->execselect = spec;\n\t\t\t\t\t\telse {\n\t\t\t\t\t\t\t/* Everything in that select has a running job on it.  
Now add in the rest */\n\t\t\t\t\t\t\tint num_exec_chunks = count_array(nresv->execselect->chunks);\n\t\t\t\t\t\t\tchunk **tmp = static_cast<chunk **>(realloc(nresv->execselect->chunks, (num_exec_chunks + count_array(spec->chunks) + 1) * sizeof(chunk *)));\n\t\t\t\t\t\t\tif (tmp == NULL) {\n\t\t\t\t\t\t\t\trconf = RESV_CONFIRM_FAIL;\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tnresv->execselect->chunks = tmp;\n\t\t\t\t\t\t\tint j = num_exec_chunks;\n\t\t\t\t\t\t\tfor (int i = 0; spec->chunks[i] != NULL; i++) {\n\t\t\t\t\t\t\t\tif (spec->chunks[i]->num_chunks > 0) {\n\t\t\t\t\t\t\t\t\tnresv->execselect->chunks[j] = spec->chunks[i];\n\t\t\t\t\t\t\t\t\tnresv->execselect->total_chunks += spec->chunks[i]->num_chunks;\n\t\t\t\t\t\t\t\t\tspec->chunks[i] = NULL;\n\t\t\t\t\t\t\t\t\tj++;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tnresv->execselect->chunks[j] = NULL;\n\t\t\t\t\t\t\tdelete spec;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\trelease_running_resv_nodes(nresv, nsinfo);\n\t\t\t\t} else\n\t\t\t\t\tdelete spec;\n\t\t\t\trelease_nodes(nresv);\n\t\t\t} else if (vnodes_down == 0) {\n\t\t\t\t/* this occurrence doesn't require reconfirmation so skip it by\n\t\t\t\t * incrementing the number of occurrences confirmed and appending\n\t\t\t\t * this occurrence's execvnodes to the sequence of execvnodes\n\t\t\t\t */\n\t\t\t\tconfirmd_occr++;\n\t\t\t\ttmp = create_execvnode(nresv->resv->orig_nspec_arr);\n\t\t\t\tif (j == 0)\n\t\t\t\t\texecvnodes = tmp;\n\t\t\t\telse { /* subsequent occurrences */\n\t\t\t\t\texecvnodes += tmp;\n\t\t\t\t\texecvnodes += TOKEN_SEPARATOR;\n\t\t\t\t}\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tif (!nresv->resv->is_running) {\n\t\t\t\tif (disable_reservation_occurrence(nsinfo->calendar->events, nresv) != 1) {\n\t\t\t\t\tlog_event(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_INFO, nresv->name,\n\t\t\t\t\t\t  \"Error determining if reservation can be confirmed: \"\n\t\t\t\t\t\t  \"Could not mark occurrence disabled.\");\n\t\t\t\t\trconf = 
RESV_CONFIRM_FAIL;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif (nresv->resv->req_start == PBS_RESV_FUTURE_SCH) { /* ASAP Resv */\n\t\t\tresv_start_time = calc_run_time(nresv->name, nsinfo, NO_FLAGS);\n\t\t\t/* Update occr_start_arr used to update the real sinfo structure */\n\t\t\toccr_start_arr[j] = resv_start_time;\n\t\t} else {\n\t\t\tnresv->resv->req_start = next;\n\t\t\tnresv->start = nresv->resv->req_start;\n\t\t\tnresv->end = nresv->start + nresv->resv->req_duration;\n\n\t\t\t/* \"next\" is used in simulate_events to determine the time up to which\n\t\t\t * to simulate the universe\n\t\t\t */\n\t\t\tsimrc = simulate_events(policy, nsinfo, SIM_TIME, (void *) &next, &sim_time);\n\t\t}\n\t\tif (!(simrc & TIMED_ERROR) && resv_start_time >= 0) {\n\t\t\tclear_schd_error(err);\n\t\t\tauto ns = is_ok_to_run(nsinfo->policy, nsinfo, NULL, nresv, NO_ALLPART, err);\n\t\t\tif (!ns.empty()) {\n\t\t\t\tstd::sort(ns.begin(), ns.end(), cmp_nspec);\n\t\t\t\ttmp = create_execvnode(ns);\n\t\t\t\tfree_nspecs(ns);\n\t\t\t\tif (tmp == NULL) {\n\t\t\t\t\tlog_event(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_INFO, nresv->name,\n\t\t\t\t\t\t  \"Error determining if reservation can be confirmed: \"\n\t\t\t\t\t\t  \"Creation of execvnode failed.\");\n\t\t\t\t\trconf = RESV_CONFIRM_FAIL;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\n\t\t\t\tif (j == 0) { /* first occurrence keeps track of first execvnode */\n\t\t\t\t\texecvnodes = tmp;\n\t\t\t\t\t/* Update resv_start_time only if not an ASAP reservation to\n\t\t\t\t\t * schedule the reservation on the first occurrence.\n\t\t\t\t\t */\n\t\t\t\t\tif (resv_start_time == 0)\n\t\t\t\t\t\tresv_start_time = next;\n\t\t\t\t} else /* subsequent occurrences */\n\t\t\t\t\texecvnodes += tmp;\n\n\t\t\t\tconfirmd_occr++;\n\t\t\t}\n\t\t\t/* Something went wrong trying to determine if it's \"ok to run\", which\n\t\t\t * may be a problem checking for limits or checking for availability of\n\t\t\t * resources.\n\t\t\t */\n\t\t\telse {\n\t\t\t\t(void) 
translate_fail_code(err, NULL, logmsg);\n\n\t\t\t\t/* If the reservation is degraded, we log a message and continue */\n\t\t\t\tif (nresv->resv->resv_substate == RESV_DEGRADED || nresv->resv->resv_substate == RESV_IN_CONFLICT) {\n\t\t\t\t\tlog_eventf(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_INFO, nresv->name, \"Reservation Failed to Reconfirm: %s\", logmsg);\n\t\t\t\t}\n\t\t\t\t/* failed to confirm so move on. This will throw flow out of the\n\t\t\t\t * loop\n\t\t\t\t */\n\t\t\t\trconf = RESV_CONFIRM_FAIL;\n\t\t\t}\n\t\t} /* end of simulation */\n\t\telse {\n\t\t\tlog_event(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_INFO,\n\t\t\t\t  nresv->name,\n\t\t\t\t  \"Error determining if reservation can be confirmed: \"\n\t\t\t\t  \"Simulation failed.\");\n\t\t\trconf = RESV_CONFIRM_FAIL;\n\t\t}\n\t}\n\n\t/* Finished simulating occurrences now time to confirm if ok. Currently\n\t * the confirmation is an all or nothing process but may come to change. */\n\tif (confirmd_occr == occr_count) {\n\t\t/* We either confirm a standing or advance reservation, the standing\n\t\t * has a special sequence of execvnodes while the advance has a single\n\t\t * execvnode. 
The sequence of execvnodes is created by concatenating\n\t\t * each execvnode and condensing the concatenated string.\n\t\t */\n\t\tif (nresv_parent->resv->is_standing)\n\t\t\tshort_xc = condense_execvnode_seq(execvnodes.c_str());\n\t\telse\n\t\t\tshort_xc = string_dup(execvnodes.c_str());\n\n\t\tif (short_xc == NULL || get_execvnodes_count(short_xc) != occr_count) {\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_RESV, LOG_DEBUG, nresv_parent->name, \"Invalid execvnode_seq while confirming reservation\");\n\t\t\trconf = RESV_CONFIRM_RETRY;\n\t\t} else {\n\t\t\tchar confirm_msg[LOG_BUF_SIZE] = {0};\n\n\t\t\tlog_eventf(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_INFO, nresv_parent->name,\n\t\t\t\t   \"Confirming %d Occurrences\", occr_count);\n\n\t\t\t/* Send a reservation confirm message; if anything goes wrong, pbsrc\n\t\t\t * will return an error\n\t\t\t */\n\t\t\tsnprintf(confirm_msg, LOG_BUF_SIZE, \"%s:partition=%s\", PBS_RESV_CONFIRM_SUCCESS,\n\t\t\t\t sc_attrs.partition ? sc_attrs.partition : DEFAULT_PARTITION);\n\n\t\t\tpbsrc = send_confirmresv(pbs_sd, nresv_parent, short_xc, resv_start_time, confirm_msg);\n\t\t}\n\t} else {\n\t\t/* This message is sent to inform that we could not confirm the reservation.\n\t\t * If the reservation was degraded then the retry time will be reset.\n\t\t * \"null\" is used to satisfy the API, but any string would do because we've\n\t\t * failed to confirm the reservation and no execvnodes were determined.\n\t\t */\n\t\tpbsrc = send_confirmresv(pbs_sd, nresv_parent, \"null\", resv_start_time, PBS_RESV_CONFIRM_FAIL);\n\t}\n\n\t/* Error handling first checks for the return code from the server and the\n\t * confirmation flag. If either failed, we print the error message; otherwise\n\t * we print success.\n\t */\n\tif (pbsrc > 0 || rconf == RESV_CONFIRM_FAIL) {\n\t\t/* If the scheduler could not find a place for the reservation, use the\n\t\t * translated error code. 
Otherwise, use the error message from the server.\n\t\t */\n\t\tif (rconf == RESV_CONFIRM_FAIL)\n\t\t\tlog_eventf(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_INFO, nresv_parent->name,\n\t\t\t\t   \"PBS Failed to confirm resv: %s\", logmsg);\n\t\telse {\n\t\t\tconst char *errmsg = pbs_geterrmsg(pbs_sd);\n\t\t\tif (errmsg == NULL)\n\t\t\t\terrmsg = \"\";\n\t\t\tlog_eventf(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_INFO, nresv_parent->name,\n\t\t\t\t   \"PBS Failed to confirm resv: %s (%d)\", errmsg, pbs_errno);\n\t\t\trconf = RESV_CONFIRM_RETRY;\n\t\t}\n\n\t\tif (nresv_parent->resv->resv_substate == RESV_DEGRADED) {\n\t\t\tif (vnodes_down >= 0)\n\t\t\t\tlog_eventf(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_INFO, nresv_parent->name,\n\t\t\t\t\t   \"Reservation is in degraded mode\");\n\n\t\t\t/* we failed to confirm the degraded reservation but we still need to\n\t\t\t * set the remaining occurrences start time to avoid looking at them\n\t\t\t * in the future. We had set the occr_start_arr times in the main loop\n\t\t\t * so we only care about the remaining ones\n\t\t\t */\n\t\t\tfor (; cur_count < occr_count; cur_count++) {\n\t\t\t\tnext = get_occurrence(rrule, dtstart, tz, cur_count + 1);\n\t\t\t\toccr_start_arr[cur_count] = next;\n\t\t\t}\n\t\t}\n\t\tfree(short_xc);\n\t}\n\t/* If the (re)confirmation was a success then we update the sequence of\n\t * occurrence start times, the number of occurrences, and the sequence of\n\t * execvnodes\n\t */\n\telse if (rconf == RESV_CONFIRM_SUCCESS) {\n\t\tlog_event(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_INFO, nresv_parent->name,\n\t\t\t  \"Reservation Confirmed\");\n\n\t\t/* If handling a degraded reservation or while altering a standing reservation\n\t\t * we recreate a new execvnode sequence string, so the old should be cleared.\n\t\t */\n\t\tfree(nresv_parent->resv->execvnodes_seq);\n\n\t\t/* set or update (for reconfirmation) the sequence of execvnodes */\n\t\tnresv_parent->resv->execvnodes_seq = short_xc;\n\t}\n\t/* The sequence of 
occurrence times and the total number of occurrences are\n\t * made available to populate the 'real' sinfo in check_new_reservations\n\t */\n\tnresv_parent->resv->occr_start_arr = occr_start_arr;\n\tnresv_parent->resv->count = occr_count;\n\n\t/* clean up */\n\tfree_schd_error(err);\n\n\t/* the return value is initialized to RESV_CONFIRM_SUCCESS */\n\treturn rconf;\n}\n\n/**\n * @brief determine if an nspec superchunk/chunk has any running jobs on it\n * @param[in] resv - reservation to check\n * @param[in] chunk_ind - index of the chunk start\n * @param[out] down_run - set to true if a running job is found on a node that is down\n * @return bool\n * @retval true chunk has running jobs on it\n * @retval false chunk does not have any running jobs on it\n *\n */\nbool\ncheck_chunk_running(resource_resv *resv, int chunk_ind, bool &down_run)\n{\n\tbool found_running_jobs = false;\n\tif (resv == NULL || chunk_ind < 0 || !resv->is_resv ||\n\t    resv->resv == NULL || resv->resv->resv_queue == NULL)\n\t\treturn false;\n\n\tfor (size_t i = chunk_ind; i < resv->resv->orig_nspec_arr.size(); i++) {\n\t\tauto ninfo = resv->resv->orig_nspec_arr[i]->ninfo;\n\n\t\tif (ninfo != NULL) {\n\t\t\tif (resv->resv->resv_queue->running_jobs != NULL)\n\t\t\t\tfor (int j = 0; resv->resv->resv_queue->running_jobs[j] != NULL; j++) {\n\t\t\t\t\tresource_resv *job = resv->resv->resv_queue->running_jobs[j];\n\t\t\t\t\tfor (int k = 0; job->ninfo_arr[k] != NULL; k++) {\n\t\t\t\t\t\tif (job->ninfo_arr[k]->rank == ninfo->rank) {\n\t\t\t\t\t\t\tfound_running_jobs = true;\n\t\t\t\t\t\t\tif (ninfo->is_stale || ninfo->is_offline || ninfo->is_down ||\n\t\t\t\t\t\t\t    ninfo->is_unknown || ninfo->is_maintenance || ninfo->is_sleeping)\n\t\t\t\t\t\t\t\tdown_run = true;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\tif (resv->resv->orig_nspec_arr[i]->end_of_chunk)\n\t\t\t\tbreak;\n\t\t}\n\t\tif (down_run)\n\t\t\treturn true;\n\t}\n\treturn found_running_jobs;\n}\n\n/**\n * @brief remove nodes without running jobs from an nspec array.\n *\n * @param[in] resv - reservation to remove nodes from\n * @param[in] start_of_chk - index into resv->orig_nspec_arr of where to start\n * @param[in] chk_seq_num - sequence number of chunk to remove nodes from\n * @param[out] down_run - we found a running job on a downed node\n *\n * @note all nspec chunks to be removed will have their ninfo pointer NULL'd.  It is up to the caller to actually remove them from the reservation.\n *\n * @return int\n * @retval number of chunks removed from the nspec array\n * @retval -1 nspecs are not mapped to select chunks\n */\nint\nremove_empty_nodes(resource_resv *resv, int start_of_chk, int chk_seq_num, bool &down_run)\n{\n\tint chunks_removed = 0;\n\n\tauto &nspec_arr = resv->resv->orig_nspec_arr;\n\n\tfor (size_t i = start_of_chk; i < nspec_arr.size(); i++) {\n\t\tbool running_jobs = false;\n\t\tif (nspec_arr[i]->chk == NULL)\n\t\t\treturn -1;\n\n\t\tif (nspec_arr[i]->chk->seq_num != chk_seq_num)\n\t\t\tbreak;\n\t\trunning_jobs = check_chunk_running(resv, i, down_run);\n\t\tif (!running_jobs) {\n\t\t\tsize_t k = i - 1;\n\t\t\tdo {\n\t\t\t\tk++;\n\t\t\t\tnspec_arr[k]->ninfo = NULL;\n\t\t\t} while (nspec_arr[k] != NULL && !nspec_arr[k]->end_of_chunk);\n\n\t\t\tchunks_removed++;\n\t\t\t/* In the case of a superchunk, we advance past it.  In the case of a normal chunk, k didn't move, so i is unchanged */\n\t\t\ti = k;\n\t\t}\n\t}\n\treturn chunks_removed;\n}\n\n/**\n * @brief The output of this function will be an nspec array with nodes which only have\n * \trunning jobs on them, and a select spec of corresponding chunks of what we released and need back.\n * \te.g. we had (vn1:ncpus=1)+(vn2:ncpus=1)+(vn3:ncpus=1) and a new select spec of 2:ncpus=1.\n * \t\tThere is a running job on vn2.  
The resulting nspec array only has vn2 in it, and\n * \t\tthe select spec has 1:ncpus=1 which is what is left.\n * \n * \tThe idea is that the caller will create an execselect of 1:vnode=vn2:ncpus=1+1:ncpus=1\n *\n * @param[in] resv - the reservation to shrink\n * @param[out] spec - The part of the new select that isn't kept from orig_nspec_arr\n *\n * @return int\n * @retval 1 the reservation has been successfully shrunk\n * @retval 0 no select_orig, not doing a pbs_ralter -l select\n * @retval -1 can't remove enough chunks due to running jobs\n * @retval -2 can't reduce due to resv_nodes not correctly mapped to select_orig\n */\nint\nresv_reduce_chunks(resource_resv *resv, selspec *spec)\n{\n\tint cnt;\n\tint start_of_chunk = 0;\n\tint j, k;\n\tchunk **chks_orig, **chks;\n\n\tif (resv == NULL || spec == NULL)\n\t\treturn -2;\n\n\tif (resv->resv->resv_state == RESV_BEING_ALTERED) {\n\t\t/* We're not altering the select, just return success */\n\t\tif (resv->resv->select_orig == NULL)\n\t\t\treturn 0;\n\t\tchks_orig = resv->resv->select_orig->chunks;\n\t} else\n\t\tchks_orig = resv->select->chunks;\n\n\tcnt = resv->resv->orig_nspec_arr.size();\n\tj = 0;\n\tchks = resv->select->chunks;\n\tfor (int i = 0; chks_orig[i] != NULL; i++) {\n\t\tint num_chks = 0;\n\t\tbool down_run = false;\n\n\t\tnum_chks = remove_empty_nodes(resv, start_of_chunk, chks_orig[i]->seq_num, down_run);\n\t\tif (chks[j] == NULL || chks_orig[i]->seq_num != chks[j]->seq_num) {\n\t\t\tif (num_chks == -1)\n\t\t\t\treturn -2;\n\t\t\t/* If we didn't remove all the nodes, some of them must have running jobs on them */\n\t\t\tif (num_chks != chks_orig[i]->num_chunks)\n\t\t\t\treturn -1;\n\n\t\t} else {\n\t\t\tint chk_diff = chks_orig[i]->num_chunks - chks[j]->num_chunks;\n\n\t\t\tif (num_chks == -1)\n\t\t\t\treturn -2;\n\n\t\t\t/* We are shrinking the reservation and can't find enough nodes to free without running jobs on them */\n\t\t\tif (chk_diff > 0 && num_chks < chk_diff)\n\t\t\t\treturn 
-1;\n\t\t\tif (resv->resv->resv_substate == RESV_DEGRADED && down_run)\n\t\t\t\treturn -1;\n\n\t\t\tspec->chunks[j]->num_chunks -= (chks_orig[i]->num_chunks - num_chks);\n\n\t\t\tj++;\n\t\t}\n\t\tfor (k = start_of_chunk; k < cnt && resv->resv->orig_nspec_arr[k]->chk->seq_num == chks_orig[i]->seq_num; k++)\n\t\t\t;\n\t\tstart_of_chunk = k;\n\t}\n\tauto &ns = resv->resv->orig_nspec_arr;\n\t/* We marked all nodes to remove by NULLing the ninfo ptr.  Remove them here from the vector */\n\tns.erase(std::remove_if(ns.begin(), ns.end(), [](nspec *n) {\n\t\t\t if (n->ninfo == NULL) {\n\t\t\t\t delete n;\n\t\t\t\t return true;\n\t\t\t }\n\t\t\t return false;\n\t\t }),\n\t\t ns.end());\n\tfree_nspecs(resv->nspec_arr);\n\tresv->nspec_arr = combine_nspec_array(resv->resv->orig_nspec_arr);\n\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tRelease resources allocated to a reservation\n *\n * @param[in]\tresresv\t-\tthe reservation\n *\n * @return\tvoid\n */\nvoid\nrelease_nodes(resource_resv *resresv)\n{\n\tfree_nodes(resresv->resv->resv_nodes);\n\tresresv->resv->resv_nodes = NULL;\n\n\tfree(resresv->ninfo_arr);\n\tresresv->ninfo_arr = NULL;\n\n\tfree_nspecs(resresv->nspec_arr);\n\n\tfree_nspecs(resresv->resv->orig_nspec_arr);\n\n\tif (resresv->nodepart_name != NULL) {\n\t\tfree(resresv->nodepart_name);\n\t\tresresv->nodepart_name = NULL;\n\t}\n}\n\n/**\n * @brief\n *\t\tcreate_resv_nodes - create a node info array by copying the\n *\t\t\t    nodes and setting available resources to\n *\t\t\t    only the ones assigned to the reservation\n *\n * @param[in]\tnspec_arr -\tthe nspec array created from the resv_nodes\n * @param[in]\tsinfo     -\tserver reservation belongs too\n *\n * @return\tnew node universe\n * @retval\tNULL\t: on error\n */\nnode_info **\ncreate_resv_nodes(std::vector<nspec *> &nspec_arr, server_info *sinfo)\n{\n\tnode_info **nodes;\n\tschd_resource *res;\n\tresource_req *req;\n\n\tnodes = static_cast<node_info **>(malloc((nspec_arr.size() + 1) * sizeof(node_info 
*)));\n\tif (nodes != NULL) {\n\t\tsize_t i;\n\t\tfor (i = 0; i < nspec_arr.size(); i++) {\n\t\t\t/* please note - the new duplicated nodes will NOT be part\n\t\t\t\t * of sinfo.  This means that you can't find a node in\n\t\t\t\t * node -> server -> nodes.  We include the server because\n\t\t\t\t * it is expected that every node have a server pointer and\n\t\t\t\t * parts of the code gets cranky if it isn't there.\n\t\t\t\t */\n\t\t\tnodes[i] = dup_node_info(nspec_arr[i]->ninfo, sinfo, DUP_INDIRECT);\n\t\t\tnodes[i]->svr_node = nspec_arr[i]->ninfo;\n\n\t\t\t/* reservation nodes in state resv_exclusive can be assigned to jobs\n\t\t\t\t * within the reservation\n\t\t\t\t */\n\t\t\tif (nodes[i]->is_resv_exclusive)\n\t\t\t\tremove_node_state(nodes[i], ND_resv_exclusive);\n\n\t\t\treq = nspec_arr[i]->resreq;\n\t\t\twhile (req != NULL) {\n\t\t\t\tres = find_alloc_resource(nodes[i]->res, req->def);\n\n\t\t\t\tif (res != NULL) {\n\t\t\t\t\tif (res->indirect_res != NULL)\n\t\t\t\t\t\tres = res->indirect_res;\n\t\t\t\t\tres->avail = req->amount;\n\t\t\t\t\tmemcpy(&(res->type), &(req->type), sizeof(struct resource_type));\n\t\t\t\t\tif (res->type.is_consumable)\n\t\t\t\t\t\tres->assigned = 0; /* clear now, set later */\n\t\t\t\t}\n\t\t\t\treq = req->next;\n\t\t\t}\n\t\t}\n\t\tnodes[i] = NULL;\n\t}\n\treturn nodes;\n}\n\n/**\n * @brief - adjust resources on nodes belonging to a reservation that is\n *\t    running and is either degraded or being altered.  
We need to free\n * \t    the resources on these nodes so the resources are available for\n * \t    check_nodes() to assign back to the reservation.\n *\n * @param[in] resv - reservation to alter nodes for\n * @param[in] sinfo - PBS universe\n *\n */\n\nvoid\nrelease_running_resv_nodes(resource_resv *resv, server_info *sinfo)\n{\n\tif (resv == NULL || sinfo == NULL || resv->ninfo_arr == NULL)\n\t\treturn;\n\tif (resv->resv->is_running && (resv->resv->resv_substate == RESV_DEGRADED || resv->resv->resv_state == RESV_BEING_ALTERED)) {\n\t\tauto resv_nodes = resv->ninfo_arr;\n\t\tfor (int i = 0; resv_nodes[i] != NULL; i++) {\n\t\t\tauto ninfo = find_node_by_indrank(sinfo->nodes, resv_nodes[i]->node_ind, resv_nodes[i]->rank);\n\t\t\tupdate_node_on_end(ninfo, resv, NULL);\n\t\t}\n\t\tsinfo->pset_metadata_stale = 1;\n\t}\n}\n\n/**\n * \t@brief determine if the scheduler will attempt to confirm a reservation\n *\n * \t@return int\n * \t@retval 1 - will attempt confirmation\n * \t@retval 0 - will not attempt confirmation\n */\nint\nwill_confirm(resource_resv *resv, time_t server_time)\n{\n\t/* If the reservation is unconfirmed OR is degraded and not running, with a\n\t * retry time that is in the past, then the reservation has to be\n\t * respectively confirmed and reconfirmed.\n\t */\n\tif (resv->resv->resv_state == RESV_UNCONFIRMED ||\n\t    resv->resv->resv_state == RESV_BEING_ALTERED ||\n\t    ((resv->resv->resv_substate == RESV_DEGRADED || resv->resv->resv_substate == RESV_IN_CONFLICT) &&\n\t     resv->resv->retry_time != UNSPECIFIED &&\n\t     resv->resv->retry_time <= server_time))\n\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief Update jobs and nodes for a reservation\n *\n * @param[in] resresv - the reservation\n * @param[in] server_time - the current time on the server\n */\nvoid\nmodify_jobs_nodes_for_resv(resource_resv *resresv, time_t server_time)\n{\n\tif (resresv->resv == NULL || resresv->resv->resv_queue == 
NULL)\n\t\treturn;\n\n\tresresv->resv->resv_queue->resv = resresv;\n\tif (resresv->resv->resv_queue->jobs != NULL) {\n\t\tfor (int j = 0; resresv->resv->resv_queue->jobs[j] != NULL; j++) {\n\t\t\tauto rjob = resresv->resv->resv_queue->jobs[j];\n\t\t\trjob->job->resv = resresv;\n\t\t\trjob->job->can_not_preempt = 1;\n\t\t\tif (rjob->node_set_str != NULL)\n\t\t\t\trjob->node_set =\n\t\t\t\t\tcreate_node_array_from_str(resresv->resv->resv_nodes,\n\t\t\t\t\t\t\t\t   rjob->node_set_str);\n\n\t\t\t/* if a job will exceed the end time of a duration, it will be\n\t\t\t * killed by the server. We set the job's end time to the resv's\n\t\t\t * end time for better estimation.\n\t\t\t */\n\t\t\tif (server_time + rjob->duration > resresv->end) {\n\t\t\t\trjob->duration = resresv->end - server_time;\n\t\t\t\trjob->hard_duration = rjob->duration;\n\t\t\t\tif (rjob->end != UNSPECIFIED)\n\t\t\t\t\trjob->end = resresv->end;\n\t\t\t}\n\n\t\t\tif (rjob->job->is_running) {\n\t\t\t\t/* the reservations resv_nodes is pointing to\n\t\t\t\t * a node_info array with just the reservations part of the node\n\t\t\t\t * i.e. 
the universe of the reservation\n\t\t\t\t */\n\t\t\t\tint k = 0;\n\t\t\t\tfor (auto ns : rjob->nspec_arr) {\n\t\t\t\t\tauto resvnode = find_node_info(resresv->resv->resv_nodes, ns->ninfo->name);\n\n\t\t\t\t\tif (resvnode != NULL) {\n\t\t\t\t\t\t/* update the ninfo to point to the ninfo in our universe */\n\t\t\t\t\t\tns->ninfo = resvnode;\n\t\t\t\t\t\trjob->ninfo_arr[k++] = resvnode;\n\n\t\t\t\t\t\t/* update resource assigned amounts on the nodes in the\n\t\t\t\t\t\t * reservation's universe\n\t\t\t\t\t\t */\n\t\t\t\t\t\tfor (auto req = ns->resreq; req != NULL; req = req->next) {\n\t\t\t\t\t\t\tif (req->type.is_consumable) {\n\t\t\t\t\t\t\t\tauto res = find_resource(ns->ninfo->res, req->def);\n\t\t\t\t\t\t\t\tif (res != NULL)\n\t\t\t\t\t\t\t\t\tres->assigned += req->amount;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tlog_eventf(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_INFO, rjob->name,\n\t\t\t\t\t\t\t   \"Job has been assigned a node that doesn't exist in its reservation: %s\", ns->ninfo->name.c_str());\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\trjob->ninfo_arr[k] = NULL;\n\t\t\t}\n\t\t}\n\t\tauto jobs_in_reservations = resource_resv_filter(resresv->resv->resv_queue->jobs,\n\t\t\t\t\t\t\t\t count_array(resresv->resv->resv_queue->jobs),\n\t\t\t\t\t\t\t\t check_running_job_in_reservation, NULL, 0);\n\t\tcollect_jobs_on_nodes(resresv->resv->resv_nodes, jobs_in_reservations,\n\t\t\t\t      count_array(jobs_in_reservations), NO_FLAGS);\n\t\tfree(jobs_in_reservations);\n\n\t\t/* Sort the nodes to ensure correct job placement. */\n\t\tqsort(resresv->resv->resv_nodes,\n\t\t      count_array(resresv->resv->resv_nodes),\n\t\t      sizeof(node_info *), multi_node_sort);\n\t}\n}"
  },
  {
    "path": "src/scheduler/resv_info.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _RESV_INFO_H\n#define _RESV_INFO_H\n\n#include <pbs_config.h>\n#include \"data_types.h\"\n\n/*\n *\tstat_resvs - status the reservations in batch_status form from the\n *\tserver\n */\nstruct batch_status *stat_resvs(int pbs_sd);\n\n/*\n *\tquery_reservations - query the reservations from the server\n */\nresource_resv **query_reservations(int pbs_sd, server_info *sinfo, struct batch_status *resvs);\n\n/*\n *\tquery_resv_info - convert the servers batch_statys structure into a\n */\nresource_resv *query_resv(struct batch_status *resv, server_info *sinfo);\n\n/*\n *\tnew_resv_info - allocate and initialize new resv_info structure\n */\n#ifdef NAS /* localmod 005 */\nresv_info *new_resv_info(void);\n#else\nresv_info *new_resv_info();\n#endif /* localmod 005 */\n\n/*\n *\tfree_resv_info - free all the memory used by a rev_info structure\n */\nvoid free_resv_info(resv_info *rinfo);\n\n/*\n *\tdup_resv_info - duplicate a reservation\n */\nresv_info *dup_resv_info(resv_info *rinfo, server_info *sinfo);\n\n/*\n *      check_new_reservations - check for new reservations and handle them\n *                               if we can serve the reservation, we reserve it\n *                               and if we can't, we delete the reservation\n */\nint check_new_reservations(status *policy, int pbs_sd, resource_resv **resvs, server_info 
*sinfo);\n\n/**\n *      confirm_reservation - attempts to confirm a resource reservation\n */\nint confirm_reservation(status *policy, int pbs_sd, resource_resv *unconf_resv, server_info *nsinfo);\n\n/**\n * Release resources allocated to a reservation\n */\nvoid release_nodes(resource_resv *resresv);\n\n/*\n *\tcreate_resv_nodes - create a node universe for a reservation\n */\nnode_info **create_resv_nodes(std::vector<nspec *> &nspec_arr, server_info *sinfo);\n\n/*\n *\trelease_running_resv_nodes - adjust node resources for reservations\n *\t\t\t\t  that are being altered or are degraded.\n */\nvoid release_running_resv_nodes(resource_resv *resv, server_info *sinfo);\n\n/* release chunks of the resv_nodes without running jobs on them */\nint resv_reduce_chunks(resource_resv *resv, selspec *spec);\n\n/* Will we try to confirm this reservation in this cycle? */\nint will_confirm(resource_resv *resv, time_t server_time);\n\n/* wrapper for pbs_confirmresv */\nint send_confirmresv(int virtual_sd, resource_resv *resv, const char *location, unsigned long start, const char *extend);\n\n/* wrapper for pbs_statresv */\nstruct batch_status *send_statresv(int virtual_fd, char *id, struct attrl *attrib, char *extend);\n\n/* Update jobs and nodes for resv */\nvoid modify_jobs_nodes_for_resv(resource_resv *resresv, time_t server_time);\n\n#endif /* _RESV_INFO_H */\n"
  },
  {
    "path": "src/scheduler/sched_exception.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include \"data_types.h\"\n\n// Copy Constructor\nsched_exception::sched_exception(const sched_exception &e)\n{\n\tmessage = e.get_message();\n\terror_code = e.get_error_code();\n}\n\n// Assignment Operator\nsched_exception &\nsched_exception::operator=(const sched_exception &e)\n{\n\tmessage = e.get_message();\n\terror_code = e.get_error_code();\n\treturn (*this);\n}\n\n// Parametrized Constructor\nsched_exception::sched_exception(const std::string &str, const enum sched_error_code err) : message(str), error_code(err) {}\n\n// Getter function for error_code\nenum sched_error_code\nsched_exception::get_error_code() const\n{\n\treturn error_code;\n}\n\n// Getter function for message\nconst std::string &\nsched_exception::get_message() const\n{\n\treturn message;\n}\n\n/*\n * @brief Overridden function to return char * of message\n *\n * @return const char *\n */\nconst char *\nsched_exception::what()\n{\n\treturn message.c_str();\n}\n"
  },
  {
    "path": "src/scheduler/sched_ifl_wrappers.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h>\n\n#include <stdlib.h>\n#include <pbs_ifl.h>\n#include <libpbs.h>\n#include \"data_types.h\"\n#include \"fifo.h\"\n#include \"globals.h\"\n#include \"job_info.h\"\n#include \"misc.h\"\n#include \"log.h\"\n#include \"server_info.h\"\n#include \"libutil.h\"\n\n/**\n * @brief\tSend the relevant runjob request to server\n *\n * @param[in]\tsd\t-\tcommunication handle\n * @param[in]\thas_runjob_hook\t- does server have a runjob hook?\n * @param[in]\tjobid\t-\tid of the job to run\n * @param[in]\texecvnode\t-\tthe execvnode to run the job on\n *\n * @return\tint\n * @retval\treturn value of the runjob call\n */\nint\nsend_run_job(int sd, int has_runjob_hook, const std::string &jobid, char *execvnode)\n{\n\tif (jobid.empty() || execvnode == NULL)\n\t\treturn 1;\n\n\tif (sc_attrs.runjob_mode == RJ_EXECJOB_HOOK)\n\t\treturn pbs_runjob(sd, const_cast<char *>(jobid.c_str()), execvnode, NULL);\n\telse if (((sc_attrs.runjob_mode == RJ_RUNJOB_HOOK) && has_runjob_hook))\n\t\treturn pbs_asyrunjob_ack(sd, const_cast<char *>(jobid.c_str()), execvnode, NULL);\n\telse\n\t\treturn pbs_asyrunjob(sd, const_cast<char *>(jobid.c_str()), execvnode, NULL);\n}\n\n/**\n * @brief\n * \t\tsend delayed attributes to the server for a job\n *\n * @param[in]\tsd\t-\tcommunication handle\n * 
@param[in]\tresresv\t-\tresource_resv object for job\n * @param[in]\tpattr\t-\tattrl list to update on the server\n *\n * @return\tint\n * @retval\t1\tsuccess\n * @retval\t0\tfailure to update\n */\nint\nsend_attr_updates(int sd, resource_resv *resresv, struct attrl *pattr)\n{\n\tconst char *errbuf;\n\tint one_attr = 0;\n\tconst std::string &job_name = resresv->name;\n\n\tif (job_name.empty() || pattr == NULL)\n\t\treturn 0;\n\n\tif (sd == SIMULATE_SD)\n\t\treturn 1; /* simulation always successful */\n\n\tif (pattr->next == NULL)\n\t\tone_attr = 1;\n\n\tif (pbs_asyalterjob(sd, const_cast<char *>(job_name.c_str()), pattr, NULL) == 0) {\n\t\tlast_attr_updates = time(NULL);\n\t\treturn 1;\n\t}\n\n\tif (is_finished_job(pbs_errno) == 1) {\n\t\tif (one_attr)\n\t\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_JOB, LOG_INFO, job_name,\n\t\t\t\t   \"Failed to update attr \\'%s\\' = %s, Job already finished\",\n\t\t\t\t   pattr->name, pattr->value);\n\t\telse\n\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_JOB, LOG_INFO, job_name,\n\t\t\t\t  \"Failed to update job attributes, Job already finished\");\n\t\treturn 0;\n\t}\n\n\terrbuf = pbs_geterrmsg(sd);\n\tif (errbuf == NULL)\n\t\terrbuf = \"\";\n\tif (one_attr)\n\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_SCHED, LOG_WARNING, job_name,\n\t\t\t   \"Failed to update attr \\'%s\\' = %s: %s (%d)\",\n\t\t\t   pattr->name, pattr->value, errbuf, pbs_errno);\n\telse\n\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_SCHED, LOG_WARNING, job_name,\n\t\t\t   \"Failed to update job attributes: %s (%d)\",\n\t\t\t   errbuf, pbs_errno);\n\n\treturn 0;\n}\n\n/**\n * @brief\tWrapper for pbs_preempt_jobs\n *\n * @param[in]\tsd - communication handle\n * @param[in]\tpreempt_jobs_list - list of jobs to preempt\n *\n * @return\tpreempt_job_info *\n * @retval\treturn value of pbs_preempt_jobs\n */\npreempt_job_info *\nsend_preempt_jobs(int sd, char **preempt_jobs_list)\n{\n\treturn pbs_preempt_jobs(sd, preempt_jobs_list);\n}\n\n/**\n * 
@brief\tWrapper for pbs_signaljob\n *\n * @param[in]\tsd - communication handle\n * @param[in]\tresresv - resource_resv for the job to send signal to\n * @param[in]\tsignal - the signal to send (e.g - \"resume\")\n * @param[in]\textend - extend data for signaljob\n *\n * @return\tint\n * @retval\t0 for success, 1 for error\n */\nint\nsend_sigjob(int sd, resource_resv *resresv, const char *signal, char *extend)\n{\n\treturn pbs_sigjob(sd, const_cast<char *>(resresv->name.c_str()), const_cast<char *>(signal), extend);\n}\n\n/**\n * @brief\tWrapper for pbs_confirmresv\n *\n * @param[in]\tsd - communication handle\n * @param[in]\tresv - resource_resv for the resv to send confirmation to\n * @param[in] \tlocation - string of vnodes/resources to be allocated to the resv.\n * @param[in] \tstart - start time of reservation if non-zero\n * @param[in] \textend - extend data for pbs_confirmresv\n *\n * @return\tint\n * @retval\t0\tSuccess\n * @retval\t!0\terror\n */\nint\nsend_confirmresv(int sd, resource_resv *resv, const char *location, unsigned long start, const char *extend)\n{\n\treturn pbs_confirmresv(sd, const_cast<char *>(resv->name.c_str()), const_cast<char *>(location), start, const_cast<char *>(extend));\n}\n\n/**\n * @brief\tWrapper for pbs_selstat\n *\n * @param[in] sd - communication handle\n * @param[in] attrib - pointer to attropl structure(selection criteria)\n * @param[in] extend - extend string to encode req\n * @param[in] rattrib - list of attributes to return\n *\n * @return\tstruct batch_status *\n * @retval\tlist of queried jobs\n * @retval\tNULL for error\n */\nstruct batch_status *\nsend_selstat(int sd, struct attropl *attrib, struct attrl *rattrib, char *extend)\n{\n\treturn pbs_selstat(sd, attrib, rattrib, extend);\n}\n\n/**\n * @brief\tWrapper for pbs_statvnode\n *\n * @param[in] sd - communication handle\n * @param[in] id - object id\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extend string for encoding req\n *\n * 
@return\tstruct batch_status *\n * @retval\tlist of queried nodes\n * @retval\tNULL for error\n */\nstruct batch_status *\nsend_statvnode(int sd, char *id, struct attrl *attrib, char *extend)\n{\n\treturn pbs_statvnode(sd, id, attrib, extend);\n}\n\n/**\n * @brief\tWrapper for pbs_statsched\n *\n * @param[in] sd - communication handle\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extend string for encoding req\n *\n * @return\tstruct batch_status *\n * @retval\tlist of queried scheds\n * @retval\tNULL for error\n */\nstruct batch_status *\nsend_statsched(int sd, struct attrl *attrib, char *extend)\n{\n\treturn pbs_statsched(sd, attrib, extend);\n}\n\n/**\n * @brief\tWrapper for pbs_statque\n *\n * @param[in] sd - communication handle\n * @param[in] id - object id\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extend string for encoding req\n *\n * @return\tstruct batch_status *\n * @retval\tlist of queried queues\n * @retval\tNULL for error\n */\nstruct batch_status *\nsend_statqueue(int sd, char *id, struct attrl *attrib, char *extend)\n{\n\treturn pbs_statque(sd, id, attrib, extend);\n}\n\n/**\n * @brief\tWrapper for pbs_statserver\n *\n * @param[in] sd - communication handle\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extend string for encoding req\n *\n * @return\tstruct batch_status *\n * @retval\tbatch_status for server (aggregated in case of multi-server)\n * @retval\tNULL for error\n */\nstruct batch_status *\nsend_statserver(int sd, struct attrl *attrib, char *extend)\n{\n\treturn pbs_statserver(sd, attrib, extend);\n}\n\n/**\n * @brief\tWrapper for pbs_statrsc\n *\n * @param[in] sd - communication handle\n * @param[in] id - object id\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extend string for encoding req\n *\n * @return\tstruct batch_status *\n * @retval\tlist of resources\n * @retval\tNULL for error\n */\nstruct batch_status 
*\nsend_statrsc(int sd, char *id, struct attrl *attrib, char *extend)\n{\n\treturn pbs_statrsc(sd, id, attrib, extend);\n}\n\n/**\n * @brief\tWrapper for pbs_statresv\n *\n * @param[in] sd - communication handle\n * @param[in] id - object id\n * @param[in] attrib - pointer to attribute list\n * @param[in] extend - extend string for encoding req\n *\n * @return\tstruct batch_status *\n * @retval\tlist of reservations\n * @retval\tNULL for error\n */\nstruct batch_status *\nsend_statresv(int sd, char *id, struct attrl *attrib, char *extend)\n{\n\treturn pbs_statresv(sd, id, attrib, extend);\n}\n"
  },
  {
    "path": "src/scheduler/server_info.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    server_info.c\n *\n * @brief\n * server_info.c - contains functions related to server_info structure.\n *\n * Functions included are:\n * \tquery_server_info()\n * \tquery_server_dyn_res()\n * \tquery_sched_obj()\n * \tfind_alloc_resource()\n * \tfind_alloc_resource_by_str()\n * \tfind_resource_by_str()\n * \tfind_resource()\n * \tfree_server_info()\n * \tfree_resource_list()\n * \tfree_resource()\n * \tnew_resource()\n * \tcreate_resource()\n * \tadd_resource_list()\n * \tadd_resource_value()\n * \tadd_resource_str_arr()\n * \tadd_resource_bool()\n * \tupdate_server_on_run()\n * \tupdate_server_on_end()\n * \tcreate_server_arrays()\n * \tcheck_run_job()\n * \tcheck_exit_job()\n * \tcheck_susp_job()\n * \tcheck_running_job_not_in_reservation()\n * \tcheck_running_job_in_reservation()\n * \tcheck_resv_running_on_node()\n * \tdup_resource_list()\n * \tdup_selective_resource_list()\n * \tdup_ind_resource_list()\n * \tdup_resource()\n * \tis_unassoc_node()\n * \tfree_counts_list()\n * \tdup_counts_umap()\n * \tfind_counts()\n * \tfind_alloc_counts()\n * \tupdate_counts_on_run()\n * \tupdate_counts_on_end()\n * \tcounts_max()\n * \tupdate_universe_on_end()\n * \tset_resource()\n * \tfind_indirect_resource()\n * \tresolve_indirect_resources()\n * \tupdate_preemption_on_run()\n * \tread_formula()\n * \tdup_status()\n * 
\tfree_queue_list()\n * \tcreate_total_counts()\n * \tupdate_total_counts()\n * \tupdate_total_counts_on_end()\n * \tget_sched_rank()\n * \tadd_queue_to_list()\n * \tfind_queue_list_by_priority()\n * \tappend_to_queue_list()\n *\n */\n\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <errno.h>\n#include <ctype.h>\n#include <signal.h>\n#include <sys/wait.h>\n#include <algorithm>\n#include <exception>\n\n#include \"pbs_entlim.h\"\n#include \"pbs_ifl.h\"\n#include \"pbs_error.h\"\n#include \"log.h\"\n#include \"pbs_share.h\"\n#include \"libpbs.h\"\n#include \"constant.h\"\n#include \"config.h\"\n#include \"server_info.h\"\n#include \"queue_info.h\"\n#include \"job_info.h\"\n#include \"misc.h\"\n#include \"node_info.h\"\n#include \"globals.h\"\n#include \"resv_info.h\"\n#include \"sort.h\"\n#include \"resource_resv.h\"\n#include \"state_count.h\"\n#include \"node_partition.h\"\n#include \"resource.h\"\n#include \"assert.h\"\n#include \"limits_if.h\"\n#include \"resource.h\"\n#include \"pbs_internal.h\"\n#include \"simulate.h\"\n#include \"fairshare.h\"\n#include \"check.h\"\n#include \"fifo.h\"\n#include \"buckets.h\"\n#include \"parse.h\"\n#include \"hook.h\"\n#include \"libpbs.h\"\n#include \"libutil.h\"\n#ifdef NAS\n#include \"site_code.h\"\n#endif\n\nextern char **environ;\n\n/**\n *\t@brief\n *\t\tcreates a structure of arrays consisting of a server\n *\t\tand all the queues and jobs that reside in that server\n *\n * @par Order of Query\n *\t\tquery_server()\n *      -> query_sched()\n *\t \t-> query_nodes()\n *\t \t-> query_queues()\n *\t    -> query_jobs()\n *\t \t-> query_reservations()\n *\n * @param[in]\tpol\t\t-\tinput policy structure - will be dup'd\n * @param[in]\tpbs_sd\t-\tconnection to pbs_server\n *\n * @return\tthe server_info struct\n * @retval\tserver_info -> policy - policy structure for cycle\n * @retval\tNULL\t: error\n *\n */\nserver_info *\nquery_server(status *pol, int pbs_sd)\n{\n\tstruct 
batch_status *server;   /* info about the server */\n\tstruct batch_status *bs_resvs; /* batch status of the reservations */\n\tserver_info *sinfo;\t       /* scheduler internal form of server info */\n\tint num_express_queues = 0;    /* number of express queues */\n\tstatus *policy;\n\tint job_arrays_associated = FALSE;\n\tint i;\n\n\tif (pol == NULL)\n\t\treturn NULL;\n\n\tif (allres.empty())\n\t\tif (update_resource_defs(pbs_sd) == false)\n\t\t\treturn NULL;\n\n\t/* get server information from pbs server */\n\tif ((server = send_statserver(pbs_sd, NULL, NULL)) == NULL) {\n\t\tconst char *errmsg = pbs_geterrmsg(pbs_sd);\n\t\tif (errmsg == NULL)\n\t\t\terrmsg = \"\";\n\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_SERVER, LOG_NOTICE, \"server_info\",\n\t\t\t   \"pbs_statserver failed: %s (%d)\", errmsg, pbs_errno);\n\t\treturn NULL;\n\t}\n\n\t/* convert batch_status structure into server_info structure */\n\tif ((sinfo = query_server_info(pol, server)) == NULL) {\n\t\tpbs_statfree(server);\n\t\treturn NULL;\n\t}\n\n\t/* We dup'd the policy structure for the cycle */\n\tpolicy = sinfo->policy;\n\n\tif (query_server_dyn_res(sinfo) == -1) {\n\t\tpbs_statfree(server);\n\t\tsinfo->fstree = NULL;\n\t\tdelete sinfo;\n\t\treturn NULL;\n\t}\n\n\tif (!dflt_sched && (sc_attrs.partition == NULL)) {\n\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_SERVER, LOG_ERR, __func__, \"Scheduler does not contain a partition\");\n\t\tpbs_statfree(server);\n\t\tsinfo->fstree = NULL;\n\t\tdelete sinfo;\n\t\treturn NULL;\n\t}\n\n\t/* to avoid a possible race condition in which the time it takes to\n\t * query nodes is long enough that a reservation may have crossed\n\t * into running state, we stat the reservation just before nodes and\n\t * will populate internal data structures based on this batch status\n\t * after all other data is queried\n\t */\n\tbs_resvs = stat_resvs(pbs_sd);\n\n\t/* get the nodes, if any - NOTE: will set sinfo -> num_nodes */\n\tif ((sinfo->nodes = query_nodes(pbs_sd, 
sinfo)) == NULL) {\n\t\tpbs_statfree(server);\n\t\tsinfo->fstree = NULL;\n\t\tdelete sinfo;\n\t\tpbs_statfree(bs_resvs);\n\t\treturn NULL;\n\t}\n\n\t/* sort the nodes before we filter them down to more useful lists */\n\tif (!policy->node_sort->empty())\n\t\tqsort(sinfo->nodes, sinfo->num_nodes, sizeof(node_info *),\n\t\t      multi_node_sort);\n\n\t/* get the queues */\n\tsinfo->queues = query_queues(policy, pbs_sd, sinfo);\n\tif (sinfo->queues.empty()) {\n\t\tpbs_statfree(server);\n\t\tsinfo->fstree = NULL;\n\t\tdelete sinfo;\n\t\tpbs_statfree(bs_resvs);\n\t\treturn NULL;\n\t}\n\n\tif (sinfo->has_nodes_assoc_queue)\n\t\tsinfo->unassoc_nodes =\n\t\t\tnode_filter(sinfo->nodes, sinfo->num_nodes, is_unassoc_node, NULL, 0);\n\telse\n\t\tsinfo->unassoc_nodes = sinfo->nodes;\n\n\t/* count the queues and total up the individual queue states\n\t * for server totals. (total up all the state_count structs)\n\t */\n\tfor (auto qinfo : sinfo->queues) {\n\t\ttotal_states(&(sinfo->sc), &(qinfo->sc));\n\n\t\tif (qinfo->priority >= sc_attrs.preempt_queue_prio)\n\t\t\tnum_express_queues++;\n\t}\n\n\tif (num_express_queues > 1)\n\t\tsinfo->has_mult_express = 1;\n\n\t/* sort the queues before we collect the jobs list (i.e. 
set_jobs())\n\t * in the case we don't sort the jobs and don't have by_queue turned on\n\t */\n\tif ((policy->round_robin == 1) || (policy->by_queue == 1))\n\t\tstd::sort(sinfo->queues.begin(), sinfo->queues.end(), cmp_queue_prio_dsc);\n\tif (policy->round_robin == 1) {\n\t\t/* queues are already sorted in descending order of their priority */\n\t\tfor (auto qinfo : sinfo->queues) {\n\t\t\tauto ret_val = add_queue_to_list(&sinfo->queue_list, qinfo);\n\t\t\tif (ret_val == 0) {\n\t\t\t\tsinfo->fstree = NULL;\n\t\t\t\tdelete sinfo;\n\t\t\t\tpbs_statfree(bs_resvs);\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t}\n\t}\n\n\t/* get reservations, if any - NOTE: will set sinfo -> num_resvs */\n\tsinfo->resvs = query_reservations(pbs_sd, sinfo, bs_resvs);\n\tpbs_statfree(bs_resvs);\n\n\tif (create_server_arrays(sinfo) == 0) { /* bad stuff happened */\n\t\tsinfo->fstree = NULL;\n\t\tdelete sinfo;\n\t\treturn NULL;\n\t}\n#ifdef NAS /* localmod 050 */\n\t/* Give site a chance to tweak values before jobs are sorted */\n\tif (site_tidy_server(sinfo) == 0) {\n\t\tdelete sinfo;\n\t\treturn NULL;\n\t}\n#endif /* localmod 050 */\n\tassociate_dependent_jobs(sinfo);\n\n\t/* create res_to_check arrays based on current jobs/resvs */\n\tpolicy->resdef_to_check = collect_resources_from_requests(sinfo->all_resresv);\n\tfor (const auto &rd : policy->resdef_to_check) {\n\t\tif (!(rd == allres[\"host\"] || rd == allres[\"vnode\"]))\n\t\t\tpolicy->resdef_to_check_no_hostvnode.insert(rd);\n\n\t\tif (rd->flags & ATR_DFLAG_RASSN)\n\t\t\tpolicy->resdef_to_check_rassn.insert(rd);\n\n\t\tif ((rd->flags & ATR_DFLAG_RASSN) && (rd->flags & ATR_DFLAG_CVTSLT))\n\t\t\tpolicy->resdef_to_check_rassn_select.insert(rd);\n\t}\n\n\tsinfo->calendar = create_event_list(sinfo);\n\n\tsinfo->running_jobs =\n\t\tresource_resv_filter(sinfo->jobs, sinfo->sc.total, check_run_job,\n\t\t\t\t     NULL, FILTER_FULL);\n\tsinfo->exiting_jobs = resource_resv_filter(sinfo->jobs,\n\t\t\t\t\t\t   sinfo->sc.total, check_exit_job, NULL, 
0);\n\tif (sinfo->running_jobs == NULL || sinfo->exiting_jobs == NULL) {\n\t\tsinfo->fstree = NULL;\n\t\tdelete sinfo;\n\t\treturn NULL;\n\t}\n\n\tif (sinfo->has_soft_limit || sinfo->has_hard_limit) {\n\t\tcounts *allcts;\n\t\tallcts = find_alloc_counts(sinfo->alljobcounts, PBS_ALL_ENTITY);\n\t\tjob_arrays_associated = TRUE;\n\t\t/* set the user, group, project counts */\n\t\tfor (int i = 0; sinfo->running_jobs[i] != NULL; i++) {\n\t\t\tcounts *cts; /* used to count running per user/grp */\n\n\t\t\tcts = find_alloc_counts(sinfo->user_counts,\n\t\t\t\t\t\tsinfo->running_jobs[i]->user);\n\n\t\t\tupdate_counts_on_run(cts, sinfo->running_jobs[i]->resreq);\n\n\t\t\tcts = find_alloc_counts(sinfo->group_counts,\n\t\t\t\t\t\tsinfo->running_jobs[i]->group);\n\n\t\t\tupdate_counts_on_run(cts, sinfo->running_jobs[i]->resreq);\n\n\t\t\tcts = find_alloc_counts(sinfo->project_counts,\n\t\t\t\t\t\tsinfo->running_jobs[i]->project);\n\n\t\t\tupdate_counts_on_run(cts, sinfo->running_jobs[i]->resreq);\n\n\t\t\tupdate_counts_on_run(allcts, sinfo->running_jobs[i]->resreq);\n\t\t\t/* Since we are already looping on running jobs, associate running\n\t\t\t * subjobs to their parent.\n\t\t\t */\n\t\t\tif ((sinfo->running_jobs[i]->job->is_subjob) &&\n\t\t\t    (associate_array_parent(sinfo->running_jobs[i], sinfo) == 1)) {\n\t\t\t\tsinfo->fstree = NULL;\n\t\t\t\tdelete sinfo;\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t}\n\t\tcreate_total_counts(sinfo, NULL, NULL, SERVER);\n\t}\n\tif (job_arrays_associated == FALSE) {\n\t\tfor (i = 0; sinfo->running_jobs[i] != NULL; i++) {\n\t\t\tif ((sinfo->running_jobs[i]->job->is_subjob) &&\n\t\t\t    (associate_array_parent(sinfo->running_jobs[i], sinfo) == 1)) {\n\t\t\t\tsinfo->fstree = NULL;\n\t\t\t\tdelete sinfo;\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t}\n\t}\n\n\tpolicy->equiv_class_resdef = create_resresv_sets_resdef(policy);\n\tsinfo->equiv_classes = create_resresv_sets(policy, sinfo);\n\n\t/* To avoid duplicate accounting of jobs on nodes, we are only 
interested in\n\t * jobs that are bound to the server nodes and not those bound to reservation\n\t * nodes, which are accounted for by collect_jobs_on_nodes in\n\t * query_reservation, hence the use of the filtered list of jobs\n\t */\n\tauto jobs_alive = resource_resv_filter(sinfo->jobs, sinfo->sc.total, check_running_job_not_in_reservation, NULL, 0);\n\tcollect_jobs_on_nodes(sinfo->nodes, jobs_alive, count_array(jobs_alive), DETECT_GHOST_JOBS);\n\n\t/* Now that the job_arr is created, garbage collect the jobs */\n\tfree(jobs_alive);\n\n\tcollect_resvs_on_nodes(sinfo->nodes, sinfo->resvs, sinfo->num_resvs);\n\n\tsinfo->unordered_nodes = static_cast<node_info **>(malloc((sinfo->num_nodes + 1) * sizeof(node_info *)));\n\tif (sinfo->unordered_nodes == NULL) {\n\t\tsinfo->fstree = NULL;\n\t\tdelete sinfo;\n\t\treturn NULL;\n\t}\n\n\t/* Ideally we'd query everything about a node in query_node().  We query\n\t * nodes very early on in the query process.  Not all the information\n\t * necessary for a node is available at that time.  
We need to delay it till here.\n\t */\n\tfor (i = 0; sinfo->nodes[i] != NULL; i++) {\n\t\tauto *ninfo = sinfo->nodes[i];\n\t\tninfo->nodesig = create_resource_signature(ninfo->res,\n\t\t\t\t\t\t\t   policy->resdef_to_check_no_hostvnode, ADD_ALL_BOOL);\n\t\tninfo->nodesig_ind = add_str_to_unique_array(&(sinfo->nodesigs),\n\t\t\t\t\t\t\t     ninfo->nodesig);\n\n\t\tif (ninfo->has_ghost_job)\n\t\t\tcreate_resource_assn_for_node(ninfo);\n\n\t\tsinfo->nodes[i]->node_ind = i;\n\t\tsinfo->unordered_nodes[i] = ninfo;\n\t}\n\tsinfo->unordered_nodes[i] = NULL;\n\n\tgeneric_sim(sinfo->calendar, TIMED_RUN_EVENT, 0, 0, add_node_events, NULL, NULL);\n\n\t/* Create placement sets  after collecting jobs on nodes because\n\t * we don't want to account for resources consumed by ghost jobs\n\t */\n\tcreate_placement_sets(policy, sinfo);\n\n\tsinfo->buckets = create_node_buckets(policy, sinfo->nodes, sinfo->queues, UPDATE_BUCKET_IND);\n\n\tif (sinfo->buckets != NULL) {\n\t\tint ct;\n\t\tct = count_array(sinfo->buckets);\n\t\tqsort(sinfo->buckets, ct, sizeof(node_bucket *), multi_bkt_sort);\n\t}\n\n\tpbs_statfree(server);\n\n\treturn sinfo;\n}\n\n/**\n * @brief\n * \t\ttakes info from a batch_status structure about\n *\t\ta server into a server_info structure for easy access\n *\n * @param[in]\tpol\t\t-\tscheduler policy structure\n * @param[in]\tserver\t-\tbatch_status struct of server info\n *\t\t\t\t\t\t\tchain possibly NULL\n *\n * @return\tnewly allocated and filled server_info struct\n *\n */\nserver_info *\nquery_server_info(status *pol, struct batch_status *server)\n{\n\tstruct attrl *attrp;  /* linked list of attributes */\n\tserver_info *sinfo;   /* internal scheduler structure for server info */\n\tschd_resource *resp;  /* a resource to help create the resource list */\n\tsch_resource_t count; /* used to convert string -> integer */\n\tchar *endp;\t      /* used with strtol() */\n\tstatus *policy;\n\n\tif (pol == NULL || server == NULL)\n\t\treturn NULL;\n\n\tsinfo = new 
server_info(server->name);\n\n\tif (sinfo->liminfo == NULL)\n\t\treturn NULL;\n\n\tif ((sinfo->policy = dup_status(pol)) == NULL) {\n\t\tdelete sinfo;\n\t\treturn NULL;\n\t}\n\n\tpolicy = sinfo->policy;\n\n\t/* set the time to the current time */\n\tsinfo->server_time = policy->current_time;\n\n\tattrp = server->attribs;\n\n\twhile (attrp != NULL) {\n\t\tif (is_reslimattr(attrp)) {\n\t\t\t(void) lim_setlimits(attrp, LIM_RES, sinfo->liminfo);\n\t\t\tif (strstr(attrp->value, \"u:\") != NULL)\n\t\t\t\tsinfo->has_user_limit = 1;\n\t\t\tif (strstr(attrp->value, \"g:\") != NULL)\n\t\t\t\tsinfo->has_grp_limit = 1;\n\t\t\tif (strstr(attrp->value, \"p:\") != NULL)\n\t\t\t\tsinfo->has_proj_limit = 1;\n\t\t\tif (strstr(attrp->value, \"o:\") != NULL)\n\t\t\t\tsinfo->has_all_limit = 1;\n\t\t} else if (is_runlimattr(attrp)) {\n\t\t\t(void) lim_setlimits(attrp, LIM_RUN, sinfo->liminfo);\n\t\t\tif (strstr(attrp->value, \"u:\") != NULL)\n\t\t\t\tsinfo->has_user_limit = 1;\n\t\t\tif (strstr(attrp->value, \"g:\") != NULL)\n\t\t\t\tsinfo->has_grp_limit = 1;\n\t\t\tif (strstr(attrp->value, \"p:\") != NULL)\n\t\t\t\tsinfo->has_proj_limit = 1;\n\t\t\tif (strstr(attrp->value, \"o:\") != NULL)\n\t\t\t\tsinfo->has_all_limit = 1;\n\t\t} else if (is_oldlimattr(attrp)) {\n\t\t\tconst char *limname = convert_oldlim_to_new(attrp);\n\t\t\t(void) lim_setlimits(attrp, LIM_OLD, sinfo->liminfo);\n\n\t\t\tif (strstr(limname, \"u:\") != NULL)\n\t\t\t\tsinfo->has_user_limit = 1;\n\t\t\tif (strstr(limname, \"g:\") != NULL)\n\t\t\t\tsinfo->has_grp_limit = 1;\n\t\t\t/* no need to check for project limits because there were no old style project limits */\n\t\t} else if (!strcmp(attrp->name, ATTR_NodeGroupEnable)) {\n\t\t\tif (!strcmp(attrp->value, ATR_TRUE))\n\t\t\t\tsinfo->node_group_enable = 1;\n\t\t\telse\n\t\t\t\tsinfo->node_group_enable = 0;\n\t\t} else if (!strcmp(attrp->name, ATTR_NodeGroupKey))\n\t\t\tsinfo->node_group_key = break_comma_list(std::string(attrp->value));\n\t\telse if 
(!strcmp(attrp->name, ATTR_job_sort_formula)) { /* Deprecated */\n\t\t\tsinfo->job_sort_formula = read_formula();\n\t\t\tif (policy->sort_by->size() > 1) /* 0 is the formula itself */\n\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER, LOG_DEBUG, __func__,\n\t\t\t\t\t  \"Job sorting formula and job_sort_key are incompatible.  \"\n\t\t\t\t\t  \"The job sorting formula will be used.\");\n\n\t\t} else if (!strcmp(attrp->name, ATTR_rescavail)) { /* resources_available*/\n\t\t\tresp = find_alloc_resource_by_str(sinfo->res, attrp->resource);\n\n\t\t\tif (resp != NULL) {\n\t\t\t\tif (sinfo->res == NULL)\n\t\t\t\t\tsinfo->res = resp;\n\n\t\t\t\tif (set_resource(resp, attrp->value, RF_AVAIL) == 0) {\n\t\t\t\t\tdelete sinfo;\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n\t\t\t}\n\t\t} else if (!strcmp(attrp->name, ATTR_rescassn)) { /* resources_assigned */\n\t\t\tresp = find_alloc_resource_by_str(sinfo->res, attrp->resource);\n\t\t\tif (sinfo->res == NULL)\n\t\t\t\tsinfo->res = resp;\n\t\t\tif (resp != NULL) {\n\t\t\t\tif (set_resource(resp, attrp->value, RF_ASSN) == 0) {\n\t\t\t\t\tdelete sinfo;\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n\t\t\t}\n\t\t} else if (!strcmp(attrp->name, ATTR_EligibleTimeEnable)) {\n\t\t\tif (!strcmp(attrp->value, ATR_TRUE))\n\t\t\t\tsinfo->eligible_time_enable = 1;\n\t\t\telse\n\t\t\t\tsinfo->eligible_time_enable = 0;\n\t\t} else if (!strcmp(attrp->name, ATTR_ProvisionEnable)) {\n\t\t\tif (!strcmp(attrp->value, ATR_TRUE))\n\t\t\t\tsinfo->provision_enable = 1;\n\t\t\telse\n\t\t\t\tsinfo->provision_enable = 0;\n\t\t} else if (!strcmp(attrp->name, ATTR_power_provisioning)) {\n\t\t\tif (!strcmp(attrp->value, ATR_TRUE))\n\t\t\t\tsinfo->power_provisioning = 1;\n\t\t\telse\n\t\t\t\tsinfo->power_provisioning = 0;\n\t\t} else if (!strcmp(attrp->name, ATTR_backfill_depth)) {\n\t\t\tcount = strtol(attrp->value, &endp, 10);\n\t\t\tif (*endp == '\\0')\n\t\t\t\tsinfo->policy->backfill_depth = count;\n\t\t\tif (count == 0)\n\t\t\t\tsinfo->policy->backfill = 0;\n\t\t} 
else if (!strcmp(attrp->name, ATTR_restrict_res_to_release_on_suspend)) {\n\t\t\tchar **resl;\n\t\t\tresl = break_comma_list(attrp->value);\n\t\t\tif (resl != NULL) {\n\t\t\t\tpolicy->rel_on_susp = resstr_to_resdef(resl);\n\t\t\t\tfree_string_array(resl);\n\t\t\t}\n\t\t} else if (!strcmp(attrp->name, ATTR_has_runjob_hook)) {\n\t\t\tif (!strcmp(attrp->value, ATR_TRUE))\n\t\t\t\tsinfo->has_runjob_hook = 1;\n\t\t\telse\n\t\t\t\tsinfo->has_runjob_hook = 0;\n\t\t}\n\t\tattrp = attrp->next;\n\t}\n\n\tif (sinfo->job_sort_formula == NULL && sc_attrs.job_sort_formula != NULL) {\n\t\tsinfo->job_sort_formula = string_dup(sc_attrs.job_sort_formula);\n\t\tif (sinfo->job_sort_formula == NULL) {\n\t\t\tdelete sinfo;\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\tif (has_hardlimits(sinfo->liminfo))\n\t\tsinfo->has_hard_limit = 1;\n\tif (has_softlimits(sinfo->liminfo))\n\t\tsinfo->has_soft_limit = 1;\n\n\t/* Since we want to keep track of fairshare changes from cycle to cycle\n\t * copy in the global fairshare tree root.  
Be careful to not free it\n\t * at the end of the cycle.\n\t */\n\tsinfo->fstree = fstree;\n#ifdef NAS /* localmod 034 */\n\tsite_set_share_head(sinfo);\n#endif /* localmod 034 */\n\n\treturn sinfo;\n}\n\n/**\n * @brief\n * \t\texecute all configured server_dyn_res scripts\n *\n * @param[in]\tsinfo\t-\tserver info\n *\n * @retval\t0\t: on success\n * @retval -1\t: on error\n */\nint\nquery_server_dyn_res(server_info *sinfo)\n{\n\tint k;\n\tint pipe_err;\n\tchar res_zero[] = \"0\"; /* dynamic res failure implies resource <-0 */\n\tschd_resource *res;    /* used for updating node resources */\n\tFILE *fp;\t       /* for popen() for res_assn */\n\n\tfor (const auto &dr : conf.dynamic_res) {\n\t\tres = find_alloc_resource_by_str(sinfo->res, dr.res);\n\t\tif (res != NULL) {\n\t\t\tfd_set set;\n\t\t\tsigset_t allsigs;\n\t\t\tpid_t pid = 0; /* pid of child */\n\t\t\tint pdes[2];\n\t\t\tchar buf[256]; /* buffer for reading from pipe */\n\t\t\tk = 0;\n\t\t\tbuf[0] = '\\0';\n\n\t\t\tif (sinfo->res == NULL)\n\t\t\t\tsinfo->res = res;\n\n\t\t\tpipe_err = errno = 0;\n\n/* Make sure file does not have open permissions */\n#if !defined(DEBUG) && !defined(NO_SECURITY_CHECK)\n\t\t\tint err;\n\t\t\terr = tmp_file_sec_user(const_cast<char *>(dr.script_name.c_str()), 0, 1, S_IWGRP | S_IWOTH, 1, getuid());\n\t\t\tif (err != 0) {\n\t\t\t\tlog_eventf(PBSEVENT_SECURITY, PBS_EVENTCLASS_SERVER, LOG_ERR, \"server_dyn_res\",\n\t\t\t\t\t   \"error: %s file has a non-secure file access, setting resource %s to 0, errno: %d\",\n\t\t\t\t\t   dr.script_name.c_str(), res->name, err);\n\t\t\t\tset_resource(res, res_zero, RF_AVAIL);\n\t\t\t\tcontinue;\n\t\t\t}\n#endif\n\n\t\t\tif (pipe(pdes) < 0) {\n\t\t\t\tpipe_err = errno;\n\t\t\t}\n\t\t\tif (!pipe_err) {\n\t\t\t\tswitch (pid = fork()) {\n\t\t\t\t\tcase -1: /* error */\n\t\t\t\t\t\tclose(pdes[0]);\n\t\t\t\t\t\tclose(pdes[1]);\n\t\t\t\t\t\tpipe_err = errno;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tcase 0: /* child 
*/\n\t\t\t\t\t\tclose(pdes[0]);\n\t\t\t\t\t\tif (pdes[1] != STDOUT_FILENO) {\n\t\t\t\t\t\t\tdup2(pdes[1], STDOUT_FILENO);\n\t\t\t\t\t\t\tclose(pdes[1]);\n\t\t\t\t\t\t}\n\t\t\t\t\t\tsetpgid(0, 0);\n\t\t\t\t\t\tif (sigemptyset(&allsigs) == -1) {\n\t\t\t\t\t\t\tlog_err(errno, __func__, \"sigemptyset failed\");\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (sigprocmask(SIG_SETMASK, &allsigs, NULL) == -1) { /* unblock all signals */\n\t\t\t\t\t\t\tlog_err(errno, __func__, \"sigprocmask(UNBLOCK)\");\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tchar *argv[4];\n\t\t\t\t\t\targv[0] = const_cast<char *>(\"/bin/sh\");\n\t\t\t\t\t\targv[1] = const_cast<char *>(\"-c\");\n\t\t\t\t\t\targv[2] = const_cast<char *>(dr.command_line.c_str());\n\t\t\t\t\t\targv[3] = NULL;\n\n\t\t\t\t\t\texecve(\"/bin/sh\", argv, environ);\n\t\t\t\t\t\t_exit(127);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tk = 0;\n\t\t\tif (!pipe_err) {\n\t\t\t\tint ret;\n\t\t\t\tFD_ZERO(&set);\n\t\t\t\tFD_SET(pdes[0], &set);\n\t\t\t\tif (sc_attrs.server_dyn_res_alarm) {\n\t\t\t\t\tstruct timeval timeout;\n\t\t\t\t\ttimeout.tv_sec = sc_attrs.server_dyn_res_alarm;\n\t\t\t\t\ttimeout.tv_usec = 0;\n\t\t\t\t\tret = select(FD_SETSIZE, &set, NULL, NULL, &timeout);\n\t\t\t\t} else {\n\t\t\t\t\tret = select(FD_SETSIZE, &set, NULL, NULL, NULL);\n\t\t\t\t}\n\t\t\t\tif (ret == -1) {\n\t\t\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER, LOG_DEBUG, \"server_dyn_res\",\n\t\t\t\t\t\t   \"Select() failed for script %s\", dr.command_line.c_str());\n\t\t\t\t} else if (ret == 0) {\n\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER, LOG_DEBUG, \"server_dyn_res\",\n\t\t\t\t\t\t   \"Program %s timed out\", dr.command_line.c_str());\n\t\t\t\t}\n\t\t\t\tif (pid > 0) {\n\t\t\t\t\tclose(pdes[1]);\n\t\t\t\t\tif (ret > 0) {\n\t\t\t\t\t\t/* Parent; only open if child created and select showed sth to read,\n\t\t\t\t\t\t* but assume fdopen can't fail\n\t\t\t\t\t\t*/\n\t\t\t\t\t\tfp = fdopen(pdes[0], \"r\");\n\t\t\t\t\t\tif (fgets(buf, sizeof(buf), fp) == NULL) 
{\n\t\t\t\t\t\t\tpipe_err = errno;\n\t\t\t\t\t\t} else\n\t\t\t\t\t\t\tk = strlen(buf);\n\n\t\t\t\t\t\tfclose(fp);\n\t\t\t\t\t} else\n\t\t\t\t\t\tclose(pdes[0]);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (k > 0) {\n\t\t\t\tbuf[k] = '\\0';\n\t\t\t\t/* chop \\r or \\n from buf so that is_num() doesn't think it's a str */\n\t\t\t\twhile (--k) {\n\t\t\t\t\tif ((buf[k] != '\\n') && (buf[k] != '\\r'))\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tbuf[k] = '\\0';\n\t\t\t\t}\n\t\t\t\tif (set_resource(res, buf, RF_AVAIL) == 0) {\n\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER, LOG_DEBUG, \"server_dyn_res\",\n\t\t\t\t\t\t   \"Script %s returned bad output\", dr.command_line.c_str());\n\t\t\t\t\t(void) set_resource(res, res_zero, RF_AVAIL);\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif (pipe_err != 0)\n\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER, LOG_DEBUG, \"server_dyn_res\",\n\t\t\t\t\t\t   \"Can't pipe to program %s: %s\", dr.command_line.c_str(), strerror(pipe_err));\n\t\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER, LOG_DEBUG, \"server_dyn_res\",\n\t\t\t\t\t   \"Setting resource %s to 0\", res->name);\n\t\t\t\t(void) set_resource(res, res_zero, RF_AVAIL);\n\t\t\t}\n\t\t\tif (res->type.is_non_consumable)\n\t\t\t\tlog_eventf(PBSEVENT_DEBUG2, PBS_EVENTCLASS_SERVER, LOG_DEBUG, \"server_dyn_res\",\n\t\t\t\t\t   \"%s = %s\", dr.command_line.c_str(), res_to_str(res, RF_AVAIL));\n\t\t\telse\n\t\t\t\tlog_eventf(PBSEVENT_DEBUG2, PBS_EVENTCLASS_SERVER, LOG_DEBUG, \"server_dyn_res\",\n\t\t\t\t\t   \"%s = %s (\\\"%s\\\")\", dr.command_line.c_str(), res_to_str(res, RF_AVAIL), buf);\n\n\t\t\tif (pid > 0) {\n\t\t\t\tkill(-pid, SIGTERM);\n\t\t\t\tif (waitpid(pid, NULL, WNOHANG) == 0) {\n\t\t\t\t\tusleep(250000);\n\t\t\t\t\tif (waitpid(pid, NULL, WNOHANG) == 0) {\n\t\t\t\t\t\tkill(-pid, SIGKILL);\n\t\t\t\t\t\twaitpid(pid, NULL, 0);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\ttry and find a resource by resdef, and if it is not\n *\t\tthere, 
allocate space for it and add it to the resource list\n *\n * @param[in]\tresplist\t- \tthe resource list\n * @param[in]\tdef \t\t-\tthe definition of the resource\n *\n * @return\tschd_resource\n * @retval\tNULL\t: error\n *\n * @par MT-Safe:\tno\n */\nschd_resource *\nfind_alloc_resource(schd_resource *resplist, resdef *def)\n{\n\tschd_resource *resp;\t    /* used to search through list of resources */\n\tschd_resource *prev = NULL; /* the previous resources in the list */\n\n\tif (def == NULL)\n\t\treturn NULL;\n\n\tfor (resp = resplist; resp != NULL && resp->def != def; resp = resp->next) {\n\t\tprev = resp;\n\t}\n\n\tif (resp == NULL) {\n\t\tif ((resp = new_resource()) == NULL)\n\t\t\treturn NULL;\n\n\t\tresp->def = def;\n\t\tresp->type = def->type;\n\t\tresp->name = def->name.c_str();\n\n\t\tif (prev != NULL)\n\t\t\tprev->next = resp;\n\t}\n\n\treturn resp;\n}\n\n/**\n * @brief\n * \t\ttry and find a resource by name, and if it is not\n *\t\tthere, allocate space for it and add it to the resource list\n *\n * @param[in]\tresplist \t- \tthe resource list\n * @param[in]\tname \t\t- \tthe name of the resource\n *\n * @return\tschd_resource\n * @retval\tNULL :\tError\n *\n * @par MT-Safe:\tno\n */\nschd_resource *\nfind_alloc_resource_by_str(schd_resource *resplist, const char *name)\n{\n\tschd_resource *resp;\t    /* used to search through list of resources */\n\tschd_resource *prev = NULL; /* the previous resources in the list */\n\n\tif (name == NULL)\n\t\treturn NULL;\n\n\tfor (resp = resplist; resp != NULL && strcmp(resp->name, name);\n\t     resp = resp->next) {\n\t\tprev = resp;\n\t}\n\n\tif (resp == NULL) {\n\t\tif ((resp = create_resource(name, NULL, RF_NONE)) == NULL)\n\t\t\treturn NULL;\n\n\t\tif (prev != NULL)\n\t\t\tprev->next = resp;\n\t}\n\n\treturn resp;\n}\n\nschd_resource *\nfind_alloc_resource_by_str(schd_resource *resplist, const std::string &name)\n{\n\treturn find_alloc_resource_by_str(resplist, name.c_str());\n}\n\n/**\n * @brief\n * \t\tfinds a 
resource by string in a resource list\n *\n * @param[in]\treslist - \tthe resource list\n * @param[in]\tname\t- \tthe name of the resource\n *\n * @return\tschd_resource\n * @retval\tNULL\t: if not found\n *\n * @par MT-Safe:\tno\n */\nschd_resource *\nfind_resource_by_str(schd_resource *reslist, const char *name)\n{\n\tschd_resource *resp; /* used to search through list of resources */\n\n\tif (reslist == NULL || name == NULL)\n\t\treturn NULL;\n\n\tresp = reslist;\n\n\twhile (resp != NULL && strcmp(resp->name, name))\n\t\tresp = resp->next;\n\n\treturn resp;\n}\n\nschd_resource *\nfind_resource_by_str(schd_resource *reslist, const std::string &name)\n{\n\treturn find_resource_by_str(reslist, name.c_str());\n}\n\n/**\n * @brief\n * \t\tfind resource by resource definition\n *\n * @param \treslist - \tresource list to search\n * @param \tdef \t- \tresource definition to search for\n *\n * @return\tthe found resource\n * @retval\tNULL\t: if not found\n */\nschd_resource *\nfind_resource(schd_resource *reslist, resdef *def)\n{\n\tschd_resource *resp;\n\n\tif (reslist == NULL || def == NULL)\n\t\treturn NULL;\n\n\tresp = reslist;\n\n\twhile (resp != NULL && resp->def != def)\n\t\tresp = resp->next;\n\n\treturn resp;\n}\n\n/**\n * @brief\tfree the svr_to_psets map\n * \t\tNote: this won't be needed once we convert node_partition to a class\n *\n * @return void\n */\nvoid\nserver_info::free_server_psets()\n{\n\tfor (auto &spset : svr_to_psets) {\n\t\tfree_node_partition(spset.second);\n\t}\n\tsvr_to_psets.clear();\n}\n\n/**\n * @brief\tdup a sinfo->svr_to_psets map (deep copy)\n *\t\t\tNote: this might not be needed once we convert node_partition to a class\n *\n * @param[in]\tspsets - map of server psets\n *\n * @return nothing\n */\nvoid\nserver_info::dup_server_psets(const std::unordered_map<std::string, node_partition *> &spsets)\n{\n\tfor (const auto &spset : spsets) {\n\t\tsvr_to_psets[spset.first] = dup_node_partition(spset.second, this);\n\t\tif 
(svr_to_psets[spset.first] == NULL) {\n\t\t\tfree_server_psets();\n\t\t\treturn;\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\tfree_server_info - free the space used by a server_info\n *\t\tstructure\n *\n * @return\tvoid\n *\n * @par MT-Safe:\tno\n */\nvoid\nserver_info::free_server_info()\n{\n\tif (jobs != NULL)\n\t\tfree(jobs);\n\tif (all_resresv != NULL)\n\t\tfree(all_resresv);\n\tif (running_jobs != NULL)\n\t\tfree(running_jobs);\n\tif (exiting_jobs != NULL)\n\t\tfree(exiting_jobs);\n\t/* if we don't have nodes associated with queues, this is a reference */\n\tif (has_nodes_assoc_queue == 0)\n\t\tunassoc_nodes = NULL;\n\telse if (unassoc_nodes != NULL)\n\t\tfree(unassoc_nodes);\n\tfree_counts_list(alljobcounts);\n\tfree_counts_list(group_counts);\n\tfree_counts_list(project_counts);\n\tfree_counts_list(user_counts);\n\tfree_counts_list(total_alljobcounts);\n\tfree_counts_list(total_group_counts);\n\tfree_counts_list(total_project_counts);\n\tfree_counts_list(total_user_counts);\n\tfree_node_partition_array(nodepart);\n\tfree_node_partition(allpart);\n\tfree_server_psets();\n\tfree_node_partition_array(hostsets);\n\tfree_string_array(nodesigs);\n\tfree_np_cache_array(npc_arr);\n\tfree_event_list(calendar);\n\tif (policy != NULL)\n\t\tdelete policy;\n\tif (fstree != NULL)\n\t\tdelete fstree;\n\tlim_free_liminfo(liminfo);\n\tliminfo = NULL;\n\tfree_queue_list(queue_list);\n\tfree_resresv_set_array(equiv_classes);\n\tfree_node_bucket_array(buckets);\n\n\tif (unordered_nodes != NULL)\n\t\tfree(unordered_nodes);\n\n\tfree_resource_list(res);\n\tfree(job_sort_formula);\n\n#ifdef NAS\n\t/* localmod 034 */\n\tsite_free_shares(sinfo);\n#endif\n\n\t/* We need to free the sinfo first to free the calendar.\n\t * When the calendar is freed, the job events modify the jobs.  
We can't\n\t * free the jobs before then.\n\t */\n\tfree_queues(queues);\n\tfree_nodes(nodes);\n\tfree_resource_resv_array(resvs);\n\n#ifdef NAS /* localmod 053 */\n\tsite_restore_users();\n#endif /* localmod 053 */\n}\n\n/**\n * @brief\n *\t\tfree_resource_list - frees the memory used by a resource list\n *\n * @param[in]\treslist\t-\tthe resource list to free\n *\n * @return\tvoid\n *\n * @par MT-Safe:\tno\n */\nvoid\nfree_resource_list(schd_resource *reslist)\n{\n\tschd_resource *resp, *tmp;\n\n\tif (reslist == NULL)\n\t\treturn;\n\n\tresp = reslist;\n\twhile (resp != NULL) {\n\t\ttmp = resp->next;\n\t\tfree_resource(resp);\n\n\t\tresp = tmp;\n\t}\n}\n\n/**\n * @brief\n * \t\tfree_resource - frees the memory used by a resource structure\n *\n * @param[in]\treslist\t-\tthe resource to free\n *\n * @return\tvoid\n *\n * @par MT-Safe:\tno\n */\nvoid\nfree_resource(schd_resource *resp)\n{\n\tif (resp == NULL)\n\t\treturn;\n\n\tif (resp->orig_str_avail != NULL)\n\t\tfree(resp->orig_str_avail);\n\n\tif (resp->indirect_vnode_name != NULL)\n\t\tfree(resp->indirect_vnode_name);\n\n\tif (resp->str_avail != NULL)\n\t\tfree_string_array(resp->str_avail);\n\n\tif (resp->str_assigned != NULL)\n\t\tfree(resp->str_assigned);\n\n\tfree(resp);\n}\n\n// Init function\nvoid\nserver_info::init_server_info()\n{\n\n\thas_soft_limit = false;\n\thas_hard_limit = false;\n\thas_user_limit = false;\n\thas_grp_limit = false;\n\thas_proj_limit = false;\n\thas_all_limit = false;\n\thas_mult_express = false;\n\thas_multi_vnode = false;\n\thas_prime_queue = false;\n\thas_nonprime_queue = false;\n\thas_nodes_assoc_queue = false;\n\thas_ded_queue = false;\n\thas_runjob_hook = false;\n\tnode_group_enable = false;\n\teligible_time_enable = false;\n\tprovision_enable = false;\n\tpower_provisioning = false;\n\tuse_hard_duration = false;\n\tpset_metadata_stale = false;\n\tnum_parts = 0;\n\thas_nonCPU_licenses = 0;\n\tnum_preempted = 0;\n\tres = NULL;\n\tqueue_list = NULL;\n\tjobs = 
NULL;\n\tall_resresv = NULL;\n\tcalendar = NULL;\n\trunning_jobs = NULL;\n\texiting_jobs = NULL;\n\tnodes = NULL;\n\tunassoc_nodes = NULL;\n\tresvs = NULL;\n\tnodepart = NULL;\n\tallpart = NULL;\n\thostsets = NULL;\n\tnodesigs = NULL;\n\tqrun_job = NULL;\n\tpolicy = NULL;\n\tfstree = NULL;\n\tequiv_classes = NULL;\n\tbuckets = NULL;\n\tunordered_nodes = NULL;\n\tnum_nodes = 0;\n\tnum_resvs = 0;\n\tnum_hostsets = 0;\n\tserver_time = 0;\n\tjob_sort_formula = NULL;\n\tinit_state_count(&sc);\n\tmemset(preempt_count, 0, (NUM_PPRIO + 1) * sizeof(int));\n\tliminfo = NULL;\n\n#ifdef NAS\n\t/* localmod 034 */\n\tshare_head = NULL;\n#endif\n}\n\n// Constructor\nserver_info::server_info(const char *sname) : name(sname)\n{\n\tinit_server_info();\n\tliminfo = lim_alloc_liminfo();\n}\n\n// Destructor\nserver_info::~server_info()\n{\n\tfree_server_info();\n}\n\n/**\n * @brief\n * \t\tnew_resource - allocate and initialize new resource struct\n *\n * @return\tschd_resource\n * @retval\tNULL\t: Error\n *\n * @par MT-Safe:\tyes\n */\nschd_resource *\nnew_resource()\n{\n\tschd_resource *resp; /* the new resource */\n\n\tif ((resp = static_cast<schd_resource *>(calloc(1, sizeof(schd_resource)))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\t/* member type zero'd by calloc() */\n\n\tresp->name = NULL;\n\tresp->next = NULL;\n\tresp->def = NULL;\n\tresp->orig_str_avail = NULL;\n\tresp->indirect_vnode_name = NULL;\n\tresp->indirect_res = NULL;\n\tresp->str_avail = NULL;\n\tresp->str_assigned = NULL;\n\tresp->assigned = RES_DEFAULT_ASSN;\n\tresp->avail = RES_DEFAULT_AVAIL;\n\n\treturn resp;\n}\n\n/**\n * @brief\n * \t\tCreate new resource with given data\n *\n * @param[in]\tname\t-\tname of resource\n * @param[in] \tvalue\t-\tvalue of resource\n * @param[in] \tfield\t-\tis the value RF_AVAIL or RF_ASSN\n *\n * @see\tset_resource()\n *\n * @return schd_resource *\n * @retval newly created resource\n * @retval NULL\t: on error\n */\nschd_resource 
*\ncreate_resource(const char *name, const char *value, enum resource_fields field)\n{\n\tschd_resource *nres = NULL;\n\tresdef *rdef;\n\n\tif (name == NULL)\n\t\treturn NULL;\n\n\tif (value == NULL && field != RF_NONE)\n\t\treturn NULL;\n\n\trdef = find_resdef(name);\n\n\tif (rdef != NULL) {\n\t\tif ((nres = new_resource()) != NULL) {\n\t\t\tnres->def = rdef;\n\t\t\tnres->name = rdef->name.c_str();\n\t\t\tnres->type = rdef->type;\n\n\t\t\tif (value != NULL) {\n\t\t\t\tif (set_resource(nres, value, field) == 0) {\n\t\t\t\t\tfree_resource(nres);\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t} else {\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_SCHED, LOG_DEBUG, name,\n\t\t\t  \"Resource definition does not exist, resource may be invalid\");\n\t\treturn NULL;\n\t}\n\n\treturn nres;\n}\n\n/**\n * @brief modify the resources_assigned values for a resource_list\n * \t\t(e.g. either A += B or A -= B) where\n * \t\tA is a resource list and B is a resource_req list.\n *\n * @param[in] res_list - The schd_resource list which is modified\n * @param[in] req_list - What is modifying the schd_resource list\n * @param[in] type - SCHD_INCR for += or SCHD_DECR for -=\n *\n * @return int\n * @retval 1 - success\n * @retval 0 - failure\n */\nint\nmodify_resource_list(schd_resource *res_list, resource_req *req_list, int type)\n{\n\tschd_resource *cur_res;\n\tresource_req *cur_req;\n\tschd_resource *end_res = NULL;\n\n\tif (res_list == NULL || req_list == NULL)\n\t\treturn 0;\n\n\tfor (cur_req = req_list; cur_req != NULL; cur_req = cur_req->next) {\n\t\tif (cur_req->type.is_consumable) {\n\t\t\tcur_res = find_resource(res_list, cur_req->def);\n\t\t\tif (cur_res == NULL && type == SCHD_INCR) {\n\t\t\t\tif (end_res == NULL)\n\t\t\t\t\tfor (end_res = res_list; end_res->next != NULL; end_res = end_res->next)\n\t\t\t\t\t\t;\n\t\t\t\tend_res->next = create_resource(cur_req->name, cur_req->res_str, RF_AVAIL);\n\t\t\t\tif (end_res->next == NULL)\n\t\t\t\t\treturn 0;\n\t\t\t\tend_res = 
end_res->next;\n\t\t\t} else {\n\t\t\t\tif (type == SCHD_INCR)\n\t\t\t\t\tcur_res->assigned += cur_req->amount;\n\t\t\t\telse if (type == SCHD_DECR)\n\t\t\t\t\tcur_res->assigned -= cur_req->amount;\n\t\t\t}\n\t\t}\n\t}\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tadd_resource_list - add one resource list to another\n *\t\ti.e. r1 += r2\n *\n * @param[in]\tpolicy\t-\tpolicy info\n * @param[in]\tr1 \t\t- \tlval resource\n * @param[in]\tr2 \t\t- \trval resource\n * @param[in]\tflags \t-\n *\t\t\t\t\t\t\tNO_UPDATE_NON_CONSUMABLE - do not update\n *\t\t\t\t\t\t\tnon consumable resources\n *\t\t\t\t\t\t\tUSE_RESOURCE_LIST - use policy->resdef_to_check\n *\t\t\t\t\t\t\t(and all bools) instead of all resources\n *\t\t\t\t\t\t\tADD_UNSET_BOOLS_FALSE - add unset bools as false\n *\n * @return\tint\n * @retval\t1\t: success\n * @retval\t0\t: failure\n *\n * @par MT-Safe:\tno\n */\nint\nadd_resource_list(status *policy, schd_resource *r1, schd_resource *r2, unsigned int flags)\n{\n\tschd_resource *cur_r1;\n\tschd_resource *cur_r2;\n\tschd_resource *end_r1 = NULL;\n\tschd_resource *nres;\n\tsch_resource_t assn;\n\n\tif (r1 == NULL || r2 == NULL)\n\t\treturn 0;\n\n\tfor (cur_r2 = r2; cur_r2 != NULL; cur_r2 = cur_r2->next) {\n\t\tif ((flags & NO_UPDATE_NON_CONSUMABLE) && cur_r2->def->type.is_non_consumable)\n\t\t\tcontinue;\n\t\tif ((flags & USE_RESOURCE_LIST)) {\n\t\t\tconst auto &rtc = policy->resdef_to_check;\n\t\t\tif (rtc.find(cur_r2->def) == rtc.end() && !cur_r2->type.is_boolean)\n\t\t\t\tcontinue;\n\t\t}\n\n\t\tcur_r1 = find_resource(r1, cur_r2->def);\n\t\tif (cur_r1 == NULL) { /* resource in r2 which is not in r1 */\n\t\t\tif (!(flags & NO_UPDATE_NON_CONSUMABLE) || cur_r2->type.is_consumable) {\n\t\t\t\tif (end_r1 == NULL)\n\t\t\t\t\tfor (end_r1 = r1; end_r1->next != NULL; end_r1 = end_r1->next)\n\t\t\t\t\t\t;\n\t\t\t\tend_r1->next = dup_resource(cur_r2);\n\t\t\t\tif (end_r1->next == NULL)\n\t\t\t\t\treturn 0;\n\t\t\t\tend_r1 = end_r1->next;\n\t\t\t}\n\t\t} else if 
(cur_r1->type.is_consumable) {\n\t\t\tif ((flags & ADD_AVAIL_ASSIGNED)) {\n\t\t\t\tif (cur_r2->avail == RES_DEFAULT_AVAIL)\n\t\t\t\t\tassn = RES_DEFAULT_ASSN; /* nothing is set, so add nothing */\n\t\t\t\telse\n\t\t\t\t\tassn = cur_r2->avail;\n\t\t\t} else\n\t\t\t\tassn = cur_r2->assigned;\n\t\t\tadd_resource_value(&(cur_r1->avail), &(cur_r2->avail),\n\t\t\t\t\t   RES_DEFAULT_AVAIL);\n\t\t\tadd_resource_value(&(cur_r1->assigned),\n\t\t\t\t\t   &assn, RES_DEFAULT_ASSN);\n\t\t} else {\n\t\t\tif (!(flags & NO_UPDATE_NON_CONSUMABLE)) {\n\t\t\t\tif (cur_r1->type.is_string) {\n\t\t\t\t\tif (cur_r1->def == allres[\"vnode\"])\n\t\t\t\t\t\tadd_resource_str_arr(cur_r1, cur_r2->str_avail, 1);\n\t\t\t\t\telse\n\t\t\t\t\t\tadd_resource_str_arr(cur_r1, cur_r2->str_avail, 0);\n\t\t\t\t} else if (cur_r1->type.is_boolean)\n\t\t\t\t\t(void) add_resource_bool(cur_r1, cur_r2);\n\t\t\t}\n\t\t}\n\t}\n\n\tif (flags & ADD_UNSET_BOOLS_FALSE) {\n\t\tfor (const auto &br : boolres) {\n\t\t\tif (find_resource(r2, br) == NULL) {\n\t\t\t\tcur_r1 = find_resource(r1, br);\n\t\t\t\tif (cur_r1 == NULL) {\n\t\t\t\t\tnres = create_resource(br->name.c_str(), ATR_FALSE, RF_AVAIL);\n\t\t\t\t\tif (nres == NULL)\n\t\t\t\t\t\treturn 0;\n\n\t\t\t\t\tif (end_r1 == NULL)\n\t\t\t\t\t\tfor (end_r1 = r1; end_r1->next != NULL; end_r1 = end_r1->next)\n\t\t\t\t\t\t\t;\n\t\t\t\t\tend_r1->next = nres;\n\t\t\t\t\tend_r1 = nres;\n\t\t\t\t} else {\n\t\t\t\t\tnres = false_res();\n\t\t\t\t\tif (nres == NULL)\n\t\t\t\t\t\treturn 0;\n\t\t\t\t\tnres->name = br->name.c_str();\n\t\t\t\t\t(void) add_resource_bool(cur_r1, nres);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\treturn 1;\n}\n\n/**\n * @brief\n *\t\tadd_resource_value - add a resource value to another\n *\t\t\t\ti.e. 
val1 += val2\n *\n * @param[in]\tval1\t\t-\tvalue 1\n * @param[in]\tval2\t\t-\tvalue 2\n * @param[in]\tinitial_val - \tvalue set by resource constructor\n *\n * @return\tvoid\n *\n * @par MT-Safe:\tno\n */\nvoid\nadd_resource_value(sch_resource_t *val1, sch_resource_t *val2,\n\t\t   sch_resource_t initial_val)\n{\n\tif (val1 == NULL || val2 == NULL)\n\t\treturn;\n\n\tif (*val1 == initial_val)\n\t\t*val1 = *val2;\n\telse if (*val2 != initial_val)\n\t\t*val1 += *val2;\n\t/* else val2 is default and val1 isn't, so we leave val1 alone */\n}\n\n/**\n * @brief\n * \t\tAdd values from a string array to a string resource (available).\n *\t\tOnly add values if they do not exist.\n *\n * @param[in]\tres\t\t\t-\tresource to add values to\n * @param[in]\tstr_arr\t\t-\tstring array of values to add\n * @param[in]\tallow_dup \t- \tshould we allow dups or not?\n *\n * @return\tint\n *\n * @retval\t1\t: success\n * @retval\t0\t: failure\n *\n * @par MT-Safe:\tno\n */\nint\nadd_resource_str_arr(schd_resource *res, char **str_arr, int allow_dup)\n{\n\tint i;\n\n\tif (res == NULL || str_arr == NULL)\n\t\treturn 0;\n\n\tif (!res->type.is_string)\n\t\treturn 0;\n\n\tfor (i = 0; str_arr[i] != NULL; i++) {\n\t\tif (add_str_to_unique_array(&(res->str_avail), str_arr[i]) < 0)\n\t\t\treturn 0;\n\t}\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\taccumulate two boolean resources together\n *\t\tT + T = True | F + F = False | T + F = TRUE_FALSE\n *\n * @param[in] \tr1\t-\tlval : left side boolean to add to\n * @param[in]\tr2 \t-\trval : right side boolean - if NULL, treat as false\n *\n * @return int\n * @retval\t1\t: Ok\n * @retval\t0\t: Error\n */\nint\nadd_resource_bool(schd_resource *r1, schd_resource *r2)\n{\n\tint r1val, r2val;\n\tif (r1 == NULL)\n\t\treturn 0;\n\n\tif (!r1->type.is_boolean || (r2 != NULL && !r2->type.is_boolean))\n\t\treturn 0;\n\n\t/* We can't accumulate any more values than TRUE and FALSE,\n\t * so if we have both, then return success early\n\t */\n\tr1val = 
r1->avail;\n\tif (r1val == TRUE_FALSE)\n\t\treturn 1;\n\n\tr2val = r2 == NULL ? FALSE : r2->avail;\n\n\t/********************************************\n\t *        Possible Value Combinations       *\n\t *       r1     *    r2   *    r1 result    *\n\t * ******************************************\n\t *       T      *     T    *       T        *\n\t *       T      *     F    *   TRUE_FALSE   *\n\t *       F      *     T    *   TRUE_FALSE   *\n\t *       F      *     F    *       F        *\n\t ********************************************/\n\n\tif (r1val && !r2val)\n\t\tr1->avail = TRUE_FALSE;\n\telse if (!r1val && r2val)\n\t\tr1->avail = TRUE_FALSE;\n\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tupdate_server_on_run - update server_info structure\n *\t\t\t\twhen a resource resv is run\n *\n * @param[in]\tpolicy \t- \tpolicy info\n * @param[in]\tsinfo \t- \tserver_info to update\n * @param[in]\tqinfo \t- \tqueue_info the job is in\n *\t\t\t\t\t\t\t(if resresv is a job)\n * @param[in]\tresresv - \tresource_resv that was run\n * @param[in]  job_state -\tthe old state of a job if resresv is a job\n *\t\t\t\tIf the old_state is found to be suspended\n *\t\t\t\tthen only resources that were released\n *\t\t\t\tduring suspension will be accounted.\n *\n * @return\tvoid\n *\n * @par MT-Safe:\tno\n */\nvoid\nupdate_server_on_run(status *policy, server_info *sinfo,\n\t\t     queue_info *qinfo, resource_resv *resresv, char *job_state)\n{\n\tif (sinfo == NULL || resresv == NULL)\n\t\treturn;\n\n\tif (resresv->is_job) {\n\t\tif (resresv->job == NULL)\n\t\t\treturn;\n\t\tif (qinfo == NULL)\n\t\t\treturn;\n\t}\n\n\t/*\n\t * Update the server level resources\n\t *   -- if a job is in a reservation, the resources have already been\n\t *      accounted for and assigned to the reservation.  
We don't want to\n\t *      double count them\n\t */\n\tif (resresv->is_resv || (qinfo != NULL && qinfo->resv == NULL)) {\n\t\tresource_req *req; /* used to cycle through resources to update */\n\n\t\tif (resresv->is_job && (job_state != NULL) && (*job_state == 'S') && (resresv->job->resreq_rel != NULL))\n\t\t\treq = resresv->job->resreq_rel;\n\t\telse\n\t\t\treq = resresv->resreq;\n\t\twhile (req != NULL) {\n\t\t\tif (req->type.is_consumable) {\n\t\t\t\tauto res = find_resource(sinfo->res, req->def);\n\n\t\t\t\tif (res)\n\t\t\t\t\tres->assigned += req->amount;\n\t\t\t}\n\t\t\treq = req->next;\n\t\t}\n\t}\n\n\tif (resresv->is_job) {\n\t\tsinfo->sc.running++;\n\t\t/* note: if job is suspended, counts will get off.\n\t\t *       sc.queued is not used, and sc.suspended isn't used again\n\t\t *       after this point\n\t\t *       BZ 5798\n\t\t */\n\t\tsinfo->sc.queued--;\n\n\t\t/* sort the nodes before we filter them down to more useful lists */\n\t\tif (!cstat.node_sort->empty() && conf.node_sort_unused) {\n\t\t\tif (resresv->job->resv != NULL &&\n\t\t\t    resresv->job->resv->resv != NULL) {\n\t\t\t\tnode_info **resv_nodes;\n\t\t\t\tint num_resv_nodes;\n\n\t\t\t\tresv_nodes = resresv->job->resv->resv->resv_nodes;\n\t\t\t\tnum_resv_nodes = count_array(resv_nodes);\n\t\t\t\tqsort(resv_nodes, num_resv_nodes, sizeof(node_info *),\n\t\t\t\t      multi_node_sort);\n\t\t\t} else {\n\t\t\t\tqsort(sinfo->nodes, sinfo->num_nodes, sizeof(node_info *),\n\t\t\t\t      multi_node_sort);\n\n\t\t\t\tif (sinfo->nodes != sinfo->unassoc_nodes) {\n\t\t\t\t\tauto num_unassoc = count_array(sinfo->unassoc_nodes);\n\t\t\t\t\tqsort(sinfo->unassoc_nodes, num_unassoc, sizeof(node_info *),\n\t\t\t\t\t      multi_node_sort);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t/* We're running a job or reservation, which will affect the cached data.\n\t\t * We'll flush the cache and rebuild it if needed\n\t\t */\n\t\tfree_np_cache_array(sinfo->npc_arr);\n\n\t\t/* a new job has been run, update running jobs array 
*/\n\t\tsinfo->running_jobs = add_resresv_to_array(sinfo->running_jobs, resresv, NO_FLAGS);\n\t}\n\n\tif (sinfo->has_soft_limit || sinfo->has_hard_limit) {\n\t\tif (resresv->is_job) {\n\t\t\tcounts *cts; /* used in updating project/group/user counts */\n\n\t\t\tupdate_total_counts(sinfo, NULL, resresv, SERVER);\n\n\t\t\tcts = find_alloc_counts(sinfo->group_counts, resresv->group);\n\t\t\tupdate_counts_on_run(cts, resresv->resreq);\n\n\t\t\tcts = find_alloc_counts(sinfo->project_counts, resresv->project);\n\t\t\tupdate_counts_on_run(cts, resresv->resreq);\n\n\t\t\tcts = find_alloc_counts(sinfo->user_counts, resresv->user);\n\t\t\tupdate_counts_on_run(cts, resresv->resreq);\n\n\t\t\tauto allcts = find_alloc_counts(sinfo->alljobcounts, PBS_ALL_ENTITY);\n\t\t\tupdate_counts_on_run(allcts, resresv->resreq);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\tupdate_server_on_end - update a server_info structure when a\n *\t\tresource resv has finished running\n *\n * @param[in]   policy \t- policy info\n * @param[in]\tsinfo\t- server_info to update\n * @param[in]\tqinfo \t- queue_info the job is in\n * @param[in]\tresresv - resource_resv that finished running\n * @param[in]  job_state -\tthe old state of a job if resresv is a job\n *\t\t\t\tIf the old_state is found to be suspended\n *\t\t\t\tthen only resources that were released\n *\t\t\t\tduring suspension will be accounted.\n *\n * @return\tvoid\n *\n * @note\n * \t\tJob must be in pre-ended state (job_state is new state)\n *\n * @par MT-Safe:\tno\n */\nvoid\nupdate_server_on_end(status *policy, server_info *sinfo, queue_info *qinfo,\n\t\t     resource_resv *resresv, const char *job_state)\n{\n\tif (sinfo == NULL || resresv == NULL)\n\t\treturn;\n\tif (resresv->is_job) {\n\t\tif (resresv->job == NULL)\n\t\t\treturn;\n\t\tif (qinfo == NULL)\n\t\t\treturn;\n\t}\n\n\tif (resresv->is_job) {\n\t\tif (resresv->job->is_running) {\n\t\t\tsinfo->sc.running--;\n\t\t\tremove_resresv_from_array(sinfo->running_jobs, resresv);\n\t\t} else 
if (resresv->job->is_exiting) {\n\t\t\tsinfo->sc.exiting--;\n\t\t\tremove_resresv_from_array(sinfo->exiting_jobs, resresv);\n\t\t}\n\t\tstate_count_add(&(sinfo->sc), job_state, 1);\n\t}\n\n\t/*\n\t *\tif the queue is a reservation then the resources belong to it and not\n\t *\tthe server\n\t */\n\tif (resresv->is_resv || (qinfo != NULL && qinfo->resv == NULL)) {\n\t\tresource_req *req; /* resource request from job */\n\n\t\tif (resresv->is_job && (job_state != NULL) && (*job_state == 'S') && (resresv->job->resreq_rel != NULL))\n\t\t\treq = resresv->job->resreq_rel;\n\t\telse\n\t\t\treq = resresv->resreq;\n\n\t\twhile (req != NULL) {\n\t\t\tauto res = find_resource(sinfo->res, req->def);\n\n\t\t\tif (res != NULL) {\n\t\t\t\tres->assigned -= req->amount;\n\n\t\t\t\tif (res->assigned < 0) {\n\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER, LOG_DEBUG, __func__,\n\t\t\t\t\t\t   \"%s turned negative %.2lf, setting it to 0\", res->name, res->assigned);\n\t\t\t\t\tres->assigned = 0;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\treq = req->next;\n\t\t}\n\t}\n\n\t/* We're ending a job or reservation, which will affect the cached data.\n\t * We'll flush the cache and rebuild it if needed\n\t */\n\tfree_np_cache_array(sinfo->npc_arr);\n\n\tif (sinfo->has_soft_limit || sinfo->has_hard_limit) {\n\t\tif (resresv->is_job && resresv->job->is_running) {\n\t\t\tcounts *cts; /* update user/group/project counts */\n\n\t\t\tupdate_total_counts_on_end(sinfo, NULL, resresv, SERVER);\n\t\t\tcts = find_counts(sinfo->group_counts, resresv->group);\n\n\t\t\tif (cts != NULL)\n\t\t\t\tupdate_counts_on_end(cts, resresv->resreq);\n\n\t\t\tcts = find_counts(sinfo->project_counts, resresv->project);\n\n\t\t\tif (cts != NULL)\n\t\t\t\tupdate_counts_on_end(cts, resresv->resreq);\n\n\t\t\tcts = find_counts(sinfo->user_counts, resresv->user);\n\n\t\t\tif (cts != NULL)\n\t\t\t\tupdate_counts_on_end(cts, resresv->resreq);\n\n\t\t\tcts = find_alloc_counts(sinfo->alljobcounts, PBS_ALL_ENTITY);\n\n\t\t\tif (cts 
!= NULL)\n\t\t\t\tupdate_counts_on_end(cts, resresv->resreq);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\tcopy_server_arrays - copy server arrays of all jobs and all reservations\n *\n * @param[out]\tnsinfo\t-\tthe server to copy lists to\n * @param[in]\tosinfo\t-\tthe server to copy lists from\n *\n * @return\tint\n * @retval\t1\t: success\n * @retval\t0\t: failure\n *\n * @par MT-Safe:\tno\n */\nint\ncopy_server_arrays(server_info *nsinfo, const server_info *osinfo)\n{\n\tresource_resv **job_arr;     /* used in copying jobs to job array */\n\tresource_resv **all_arr;     /* used in copying jobs to job/resv array */\n\tresource_resv **resresv_arr; /* used as source array to copy */\n\tint i = 0;\n\tint j = 0;\n\n\tif (nsinfo == NULL || osinfo == NULL)\n\t\treturn 0;\n\n\tif ((job_arr = static_cast<resource_resv **>(calloc((osinfo->sc.total + 1), sizeof(resource_resv *)))) == NULL) {\n\t\tlog_err(errno, __func__, \"Error allocating memory\");\n\t\treturn 0;\n\t}\n\n\tif ((all_arr = static_cast<resource_resv **>(calloc((osinfo->sc.total + osinfo->num_resvs + 1),\n\t\t\t\t\t\t\t    sizeof(resource_resv *)))) == NULL) {\n\t\tfree(job_arr);\n\t\tlog_err(errno, __func__, \"Error allocating memory\");\n\t\treturn 0;\n\t}\n\n\tfor (auto queue : nsinfo->queues) {\n\t\tresresv_arr = queue->jobs;\n\n\t\tif (resresv_arr != NULL) {\n\t\t\tfor (i = 0; resresv_arr[i] != NULL; i++, j++)\n\t\t\t\tjob_arr[j] = all_arr[resresv_arr[i]->resresv_ind] = resresv_arr[i];\n\t\t}\n\t}\n\n\tif (nsinfo->resvs != NULL) {\n\t\tfor (i = 0; nsinfo->resvs[i] != NULL; i++)\n\t\t\tall_arr[nsinfo->resvs[i]->resresv_ind] = nsinfo->resvs[i];\n\t}\n\tnsinfo->jobs = job_arr;\n\tnsinfo->all_resresv = all_arr;\n\tnsinfo->num_resvs = osinfo->num_resvs;\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tcreate_server_arrays - create a large server resresv array\n *\t\tof all jobs on the system by copying all the jobs from the\n *\t\tqueue job arrays.  
Also create an array of both jobs and\n *\t\treservations.\n *\n * @param[in]\tsinfo\t-\tthe server\n *\n * @return\tint\n * @retval\t1\t: success\n * @retval\t0 \t: failure\n *\n * @par MT-Safe:\tno\n */\nint\ncreate_server_arrays(server_info *sinfo)\n{\n\tresource_resv **job_arr;     /* used in copying jobs to job array */\n\tresource_resv **all_arr;     /* used in copying jobs to job/resv array */\n\tresource_resv **resresv_arr; /* used as source array to copy */\n\tint i = 0, j;\n\n\tif ((job_arr = static_cast<resource_resv **>(malloc(sizeof(resource_resv *) * (sinfo->sc.total + 1)))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn 0;\n\t}\n\n\tif ((all_arr = static_cast<resource_resv **>(malloc(sizeof(resource_resv *) *\n\t\t\t\t\t\t\t    (sinfo->sc.total + sinfo->num_resvs + 1)))) == NULL) {\n\t\tfree(job_arr);\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn 0;\n\t}\n\n\tfor (auto queue : sinfo->queues) {\n\t\tresresv_arr = queue->jobs;\n\n\t\tif (resresv_arr != NULL) {\n\t\t\tfor (j = 0; resresv_arr[j] != NULL; j++, i++) {\n\t\t\t\tjob_arr[i] = all_arr[i] = resresv_arr[j];\n\t\t\t\tall_arr[i]->resresv_ind = i;\n\t\t\t}\n\t\t\tif (i > sinfo->sc.total) {\n\t\t\t\tfree(job_arr);\n\t\t\t\tfree(all_arr);\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t}\n\t}\n\tjob_arr[i] = NULL;\n\n\tif (sinfo->resvs != NULL) {\n\t\tfor (j = 0; sinfo->resvs[j] != NULL; j++, i++) {\n\t\t\tall_arr[i] = sinfo->resvs[j];\n\t\t\tall_arr[i]->resresv_ind = i;\n\t\t}\n\t}\n\tall_arr[i] = NULL;\n\n\tsinfo->jobs = job_arr;\n\tsinfo->all_resresv = all_arr;\n\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\thelper function for resource_resv_filter() - returns 1 if\n *\t\tjob is running\n *\n * @param[in]\tjob\t-\tresource reservation job.\n * @param[in]\targ\t-\targument (not used here)\n *\n * @return\tint\n * @retval\t0\t: job not running\n * @retval\t1\t: job is running\n */\nint\ncheck_run_job(resource_resv *job, const void *arg)\n{\n\tif (job->is_job && job->job != 
NULL)\n\t\treturn job->job->is_running;\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\thelper function for resource_resv_filter()\n *\n * @param[in]\tjob\t-\tresource reservation job.\n * @param[in]\targ\t-\targument (not used here)\n *\n * @return\tint\n * @retval\t1\t: if job is exiting\n * @retval\t0\t: if job is not exiting\n */\nint\ncheck_exit_job(resource_resv *job, const void *arg)\n{\n\tif (job->is_job && job->job != NULL)\n\t\treturn job->job->is_exiting;\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\thelper function for resource_resv_filter()\n *\n * @param[in]\tjob\t-\tresource reservation job.\n * @param[in]\targ\t-\targument (not used here)\n *\n * @return\tint\n * @retval\t1\t: if job is suspended\n * @retval\t0\t: if job is not suspended\n */\nint\ncheck_susp_job(resource_resv *job, const void *arg)\n{\n\tif (job->is_job && job->job != NULL)\n\t\treturn job->job->is_suspended;\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\thelper function for resource_resv_filter()\n *\n * @param[in]\tjob\t-\tresource reservation job.\n * @param[in]\targ\t-\targument (not used here)\n *\n * @return\tint\n * @retval\t1\t: if job is running, exiting, or user busy\n * @retval\t0\t: otherwise\n */\nint\ncheck_job_running(resource_resv *job, const void *arg)\n{\n\tif (job->is_job && job->job != NULL && (job->job->is_running || job->job->is_exiting || job->job->is_userbusy))\n\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\thelper function for resource_resv_filter()\n *\n * @param[in]\tjob\t-\tresource reservation job.\n * @param[in]\targ\t-\targument (not used here)\n *\n * @return\tint\n * @retval\t1\t: if job is running and in a reservation\n * @retval\t0\t: if job is not running or not in a reservation\n */\nint\ncheck_running_job_in_reservation(resource_resv *job, const void *arg)\n{\n\tif (job->is_job && job->job != NULL && job->job->resv != NULL &&\n\t    (check_job_running(job, arg) == 1))\n\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\thelper function for resource_resv_filter()\n *\n * @param[in]\tjob\t-\tresource 
reservation job.\n * @param[in]\targ\t-\targument (not used here)\n *\n * @return\tint\n * @retval\t1\t: if job is running and not in a reservation\n * @retval\t0\t: if job is not running or in a reservation\n */\nint\ncheck_running_job_not_in_reservation(resource_resv *job, const void *arg)\n{\n\tif (job->is_job && job->job != NULL && job->job->resv == NULL &&\n\t    (check_job_running(job, arg) == 1))\n\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\thelper function for resource_resv_filter()\n *\n * @param[in]\tresv\t-\tresource reservation structure\n * @param[in]\targ\t\t-\tthe name of the node to look for\n *\n * @return\tint\n * @retval\t1\t: if reservation is running on node passed in arg\n * @retval\t0\t: if reservation is not running on node passed in arg\n */\nint\ncheck_resv_running_on_node(resource_resv *resv, const void *arg)\n{\n\tif (resv->is_resv && resv->resv != NULL) {\n\t\tif (resv->resv->is_running || resv->resv->resv_state == RESV_BEING_DELETED)\n\t\t\tif (find_node_info(resv->ninfo_arr, (char *) arg))\n\t\t\t\treturn 1;\n\t}\n\treturn 0;\n}\n\n// Copy constructor\nserver_info::server_info(const server_info &osinfo)\n{\n\tinit_server_info();\n\tif (osinfo.fstree != NULL)\n\t\tfstree = new fairshare_head(*osinfo.fstree);\n\thas_mult_express = osinfo.has_mult_express;\n\thas_soft_limit = osinfo.has_soft_limit;\n\thas_hard_limit = osinfo.has_hard_limit;\n\thas_user_limit = osinfo.has_user_limit;\n\thas_all_limit = osinfo.has_all_limit;\n\thas_grp_limit = osinfo.has_grp_limit;\n\thas_proj_limit = osinfo.has_proj_limit;\n\thas_multi_vnode = osinfo.has_multi_vnode;\n\thas_prime_queue = osinfo.has_prime_queue;\n\thas_nonprime_queue = osinfo.has_nonprime_queue;\n\thas_ded_queue = osinfo.has_ded_queue;\n\thas_nodes_assoc_queue = osinfo.has_nodes_assoc_queue;\n\tnode_group_enable = osinfo.node_group_enable;\n\teligible_time_enable = osinfo.eligible_time_enable;\n\tprovision_enable = osinfo.provision_enable;\n\tpower_provisioning = 
osinfo.power_provisioning;\n\tuse_hard_duration = osinfo.use_hard_duration;\n\tpset_metadata_stale = osinfo.pset_metadata_stale;\n\tname = osinfo.name;\n\tliminfo = lim_dup_liminfo(osinfo.liminfo);\n\tserver_time = osinfo.server_time;\n\tres = dup_resource_list(osinfo.res);\n\talljobcounts = dup_counts_umap(osinfo.alljobcounts);\n\tgroup_counts = dup_counts_umap(osinfo.group_counts);\n\tproject_counts = dup_counts_umap(osinfo.project_counts);\n\tuser_counts = dup_counts_umap(osinfo.user_counts);\n\ttotal_alljobcounts = dup_counts_umap(osinfo.total_alljobcounts);\n\ttotal_group_counts = dup_counts_umap(osinfo.total_group_counts);\n\ttotal_project_counts = dup_counts_umap(osinfo.total_project_counts);\n\ttotal_user_counts = dup_counts_umap(osinfo.total_user_counts);\n\tnode_group_key = osinfo.node_group_key;\n\tnodesigs = dup_string_arr(osinfo.nodesigs);\n\n\tpolicy = dup_status(osinfo.policy);\n\n\tnum_nodes = osinfo.num_nodes;\n\n\t/* dup the nodes, if there are any nodes */\n\tnodes = dup_nodes(osinfo.nodes, this, NO_FLAGS);\n\n\tif (has_nodes_assoc_queue)\n\t\tunassoc_nodes = node_filter(nodes, num_nodes, is_unassoc_node, NULL, 0);\n\telse\n\t\tunassoc_nodes = nodes;\n\n\tunordered_nodes = dup_unordered_nodes(osinfo.unordered_nodes, nodes);\n\n\t/* dup the reservations */\n\tresvs = dup_resource_resv_array(osinfo.resvs, this, NULL);\n\tnum_resvs = osinfo.num_resvs;\n\n#ifdef NAS /* localmod 053 */\n\tsite_save_users();\n#endif /* localmod 053 */\n\n\t/* duplicate the queues */\n\tqueues = dup_queues(osinfo.queues, this);\n\tif (queues.empty()) {\n\t\tfree_server_info();\n\t\tthrow sched_exception(\"Unable to duplicate queues\", SCHD_ERROR);\n\t}\n\n\tif (osinfo.queue_list != NULL) {\n\t\t/* queues are already sorted in descending order of their priority */\n\t\tfor (unsigned int i = 0; i < queues.size(); i++) {\n\t\t\tauto ret_val = add_queue_to_list(&queue_list, queues[i]);\n\t\t\tif (ret_val == 0) {\n\t\t\t\tfstree = 
NULL;\n\t\t\t\tfree_server_info();\n\t\t\t\tthrow sched_exception(\"Unable to add queue to queue_list\", SCHD_ERROR);\n\t\t\t}\n\t\t}\n\t}\n\n\tsc = osinfo.sc;\n\n\t/* sets nsinfo -> jobs and nsinfo -> all_resresv */\n\tcopy_server_arrays(this, &osinfo);\n\n\tequiv_classes = dup_resresv_set_array(osinfo.equiv_classes, this);\n\n\t/* the event list is created dynamically during the evaluation of resource\n\t * reservations. It is a sorted list of all_resresv, initialized to NULL to\n\t * appropriately be freed in free_event_list */\n\tcalendar = dup_event_list(osinfo.calendar, this);\n\tif (calendar == NULL) {\n\t\tfree_server_info();\n\t\tthrow sched_exception(\"Unable to duplicate calendar\", SCHD_ERROR);\n\t}\n\n\trunning_jobs =\n\t\tresource_resv_filter(jobs, sc.total,\n\t\t\t\t     check_run_job, NULL, FILTER_FULL);\n\n\texiting_jobs =\n\t\tresource_resv_filter(jobs, sc.total,\n\t\t\t\t     check_exit_job, NULL, 0);\n\n\tnum_preempted = osinfo.num_preempted;\n\n\tif (osinfo.qrun_job != NULL)\n\t\tqrun_job = find_resource_resv(jobs,\n\t\t\t\t\t      osinfo.qrun_job->name);\n\n\tfor (int i = 0; i < NUM_PPRIO; i++)\n\t\tpreempt_count[i] = osinfo.preempt_count[i];\n\n#ifdef NAS /* localmod 034 */\n\tif (!site_dup_shares(&osinfo, this)) {\n\t\tfree_server_info();\n\t\tthrow sched_exception(\"Unable to duplicate shares\", SCHD_ERROR);\n\t}\n#endif /* localmod 034 */\n\n\t/* Now we do any processing which has to happen last */\n\n\t/* the jobs are not dupped when we dup the nodes, so we need to copy\n\t * the node's job arrays now\n\t */\n\tfor (int i = 0; osinfo.nodes[i] != NULL; i++)\n\t\tnodes[i]->job_arr =\n\t\t\tcopy_resresv_array(osinfo.nodes[i]->job_arr, jobs);\n\n\tnum_parts = osinfo.num_parts;\n\tif (osinfo.nodepart != NULL) {\n\t\tnodepart = dup_node_partition_array(osinfo.nodepart, this);\n\t\tif (nodepart == NULL) {\n\t\t\tfree_server_info();\n\t\t\tthrow sched_exception(\"Unable to duplicate node partition\", SCHD_ERROR);\n\t\t}\n\t}\n\tallpart = 
dup_node_partition(osinfo.allpart, this);\n\tif (osinfo.hostsets != NULL) {\n\t\tint j, k;\n\t\thostsets = dup_node_partition_array(osinfo.hostsets, this);\n\t\tif (hostsets == NULL) {\n\t\t\tfree_server_info();\n\t\t\tthrow sched_exception(\"Unable to duplicate hostsets\", SCHD_ERROR);\n\t\t}\n\t\t/* reattach nodes to their host sets */\n\t\tfor (j = 0; hostsets[j] != NULL; j++) {\n\t\t\tnode_partition *hset = hostsets[j];\n\t\t\tfor (k = 0; hset->ninfo_arr[k] != NULL; k++)\n\t\t\t\thset->ninfo_arr[k]->hostset = hset;\n\t\t}\n\t\tnum_hostsets = osinfo.num_hostsets;\n\t}\n\n\t/* the running resvs are not dupped when we dup the nodes, so we need to copy\n\t * the node's running resvs arrays now\n\t */\n\tfor (int i = 0; osinfo.nodes[i] != NULL; i++) {\n\t\tnodes[i]->run_resvs_arr =\n\t\t\tcopy_resresv_array(osinfo.nodes[i]->run_resvs_arr, resvs);\n\t\tnodes[i]->np_arr =\n\t\t\tcopy_node_partition_ptr_array(osinfo.nodes[i]->np_arr, nodepart);\n\t\tif (calendar != NULL)\n\t\t\tnodes[i]->node_events = dup_te_lists(osinfo.nodes[i]->node_events, calendar->next_event);\n\t}\n\tbuckets = dup_node_bucket_array(osinfo.buckets, this);\n\t/* Now that all job information has been created, time to associate\n\t * jobs to each other if they have a runone dependency\n\t */\n\tassociate_dependent_jobs(this);\n\n\tfor (int i = 0; running_jobs[i] != NULL; i++) {\n\t\tif ((running_jobs[i]->job->is_subjob) &&\n\t\t    (associate_array_parent(running_jobs[i], this) == 1)) {\n\t\t\tfree_server_info();\n\t\t\tthrow sched_exception(\"Unable to associate running subjob with parent\", SCHD_ERROR);\n\t\t}\n\t}\n\n\tif (osinfo.job_sort_formula != NULL) {\n\t\tjob_sort_formula = string_dup(osinfo.job_sort_formula);\n\t\tif (job_sort_formula == NULL) {\n\t\t\tfree_server_info();\n\t\t\tthrow sched_exception(\"Unable to duplicate job_sort_formula\", SCHD_ERROR);\n\t\t}\n\t}\n\n\t/* Copy the map of server psets */\n\tdup_server_psets(osinfo.svr_to_psets);\n}\n\n/**\n * @brief\n * 
\t\tdup_resource_list - dup a resource list\n *\n * @param[in]\tres - the resource list to duplicate\n *\n * @return\tduplicated resource list\n * @retval\tNULL\t: Error\n *\n * @par MT-Safe:\tno\n */\nschd_resource *\ndup_resource_list(schd_resource *res)\n{\n\tschd_resource *pres;\n\tschd_resource *nres;\n\tschd_resource *prev = NULL;\n\tschd_resource *head = NULL;\n\n\tfor (pres = res; pres != NULL; pres = pres->next) {\n\t\tnres = dup_resource(pres);\n\t\tif (nres == NULL) {\n\t\t\tfree_resource_list(head);\n\t\t\treturn NULL;\n\t\t}\n\t\tif (prev == NULL)\n\t\t\thead = nres;\n\t\telse\n\t\t\tprev->next = nres;\n\n\t\tprev = nres;\n\t}\n\n\treturn head;\n}\n\n/**\n * @brief\n * \t\tdup_selective_resource_list - selectively dup a resource list, plus\n *\t\tbooleans (set, or unset as false)\n *\n *\t@param[in]\tres - the resource list to duplicate\n *\t@param[in]\tdeflist - dup only resources in this list\n *\t@param[in]\tflags - @see add_resource_list()\n *\n * @return\tduplicated resource list\n * @retval\tNULL\t: Error\n *\n * @par MT-Safe:\tno\n */\nschd_resource *\ndup_selective_resource_list(schd_resource *res, std::unordered_set<resdef *> &deflist, unsigned flags)\n{\n\tschd_resource *pres;\n\tschd_resource *nres;\n\tschd_resource *prev = NULL;\n\tschd_resource *head = NULL;\n\n\tfor (pres = res; pres != NULL; pres = pres->next) {\n\t\tif (((flags & ADD_ALL_BOOL) && pres->type.is_boolean) ||\n\t\t    deflist.find(pres->def) != deflist.end()) {\n\t\t\tnres = dup_resource(pres);\n\t\t\tif (nres == NULL) {\n\t\t\t\tfree_resource_list(head);\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tif ((flags & ADD_AVAIL_ASSIGNED)) {\n\t\t\t\tif (nres->avail == RES_DEFAULT_AVAIL)\n\t\t\t\t\tnres->assigned = RES_DEFAULT_ASSN;\n\t\t\t\telse\n\t\t\t\t\tnres->assigned = nres->avail;\n\t\t\t}\n\t\t\tif (prev == NULL)\n\t\t\t\thead = nres;\n\t\t\telse\n\t\t\t\tprev->next = nres;\n\t\t\tprev = nres;\n\t\t}\n\t}\n\t/* add on any booleans which are unset (i.e., false) */\n\tif (flags & ADD_UNSET_BOOLS_FALSE) {\n\t\tfor (const auto &br : boolres) {\n\t\t\tif (find_resource(res, br) == NULL) {\n\t\t\t\tnres = 
create_resource(br->name.c_str(), ATR_FALSE, RF_AVAIL);\n\t\t\t\tif (nres == NULL) {\n\t\t\t\t\tfree_resource_list(head);\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n\t\t\t\tif (prev == NULL)\n\t\t\t\t\thead = nres;\n\t\t\t\telse\n\t\t\t\t\tprev->next = nres;\n\t\t\t\tprev = nres;\n\t\t\t}\n\t\t}\n\t}\n\treturn head;\n}\n\n/**\n * @brief\n * \t\tdup_ind_resource_list - dup a resource list - if a resource in\n *\t\tthe list is indirect, dup the pointed to resource instead\n *\n * @param[in]\tres - the resource list to duplicate\n *\n * @return\tduplicated resource list\n * @retval\tNULL\t: Error\n *\n * @par MT-Safe:\tno\n */\nschd_resource *\ndup_ind_resource_list(schd_resource *res)\n{\n\tschd_resource *pres;\n\tschd_resource *nres;\n\tschd_resource *prev = NULL;\n\tschd_resource *head = NULL;\n\n\tfor (pres = res; pres != NULL; pres = pres->next) {\n\t\tif (pres->indirect_res != NULL)\n\t\t\tnres = dup_resource(pres->indirect_res);\n\t\telse\n\t\t\tnres = dup_resource(pres);\n\n\t\tif (nres == NULL) {\n\t\t\tfree_resource_list(head);\n\t\t\treturn NULL;\n\t\t}\n\n\t\tif (prev == NULL)\n\t\t\thead = nres;\n\t\telse\n\t\t\tprev->next = nres;\n\n\t\tprev = nres;\n\t}\n\n\treturn head;\n}\n\n/**\n * @brief\n * \t\tdup_resource - duplicate a resource struct\n *\n * @param[in]\tres\t- the resource to dup\n *\n * @return\tthe duplicated resource\n * @retval\tNULL\t: Error\n *\n * @par MT-Safe:\tno\n */\nschd_resource *\ndup_resource(schd_resource *res)\n{\n\tschd_resource *nres;\n\n\tif ((nres = new_resource()) == NULL)\n\t\treturn NULL;\n\n\tnres->def = res->def;\n\tif (nres->def != NULL)\n\t\tnres->name = nres->def->name.c_str();\n\n\tif (res->indirect_vnode_name != NULL)\n\t\tnres->indirect_vnode_name = string_dup(res->indirect_vnode_name);\n\n\tif (res->orig_str_avail != NULL)\n\t\tnres->orig_str_avail = string_dup(res->orig_str_avail);\n\n\tif (res->str_avail != NULL)\n\t\tnres->str_avail = dup_string_arr(res->str_avail);\n\n\tif (res->str_assigned != 
NULL)\n\t\tnres->str_assigned = string_dup(res->str_assigned);\n\n\tnres->avail = res->avail;\n\tnres->assigned = res->assigned;\n\n\tmemcpy(&(nres->type), &(res->type), sizeof(struct resource_type));\n\n\treturn nres;\n}\n\n/**\n * @brief\n * \t\tis_unassoc_node - used with node_filter() to find nodes which are\n *\t\tnot associated with a queue\n *\n * @param[in]\tninfo - the node to check\n * @param[in]\targ - argument (not used here)\n *\n * @return\tint\n * @retval\t1\t-\tif the node does not have a queue associated with it\n * @retval\t0\t-\totherwise\n *\n * @par MT-Safe:\tyes\n */\nint\nis_unassoc_node(node_info *ninfo, void *arg)\n{\n\tif (ninfo->queue_name.empty())\n\t\treturn 1;\n\n\treturn 0;\n}\n\n// counts constructor\ncounts::counts(const std::string &rname) : name(rname)\n{\n\trunning = 0;\n\trescts = NULL;\n\tsoft_limit_preempt_bit = 0;\n}\n\n// counts copy constructor\ncounts::counts(const counts &rcount) : name(rcount.name)\n{\n\trunning = rcount.running;\n\trescts = dup_resource_count_list(rcount.rescts);\n\tsoft_limit_preempt_bit = rcount.soft_limit_preempt_bit;\n}\n\n// counts assignment operator\ncounts &\ncounts::operator=(const counts &rcount)\n{\n\tif (this == &rcount)\n\t\treturn (*this);\n\tthis->name = rcount.name;\n\tthis->running = rcount.running;\n\tfree_resource_count_list(this->rescts);\n\tthis->rescts = dup_resource_count_list(rcount.rescts);\n\tthis->soft_limit_preempt_bit = rcount.soft_limit_preempt_bit;\n\treturn (*this);\n}\n\n// destructor\ncounts::~counts()\n{\n\tfree_resource_count_list(rescts);\n}\n\n/**\n * @brief\n * \t\tfree_counts_list - free a list of counts structures\n *\n * @param[in,out]\tctslist\t- the counts map to free\n *\n * @return\tvoid\n *\n * @par MT-Safe:\tno\n */\nvoid\nfree_counts_list(counts_umap &ctslist)\n{\n\tfor (auto it : ctslist)\n\t\tdelete it.second;\n\tctslist.clear();\n}\n\n/**\n * @brief\n * \t\tdup_counts_umap - duplicate a counts map\n *\n * @param[in]\tomap\t- the counts map to duplicate\n *\n * @return\tnew counts_umap\n */\ncounts_umap\ndup_counts_umap(const counts_umap &omap)\n{\n\tcounts_umap nmap;\n\tfor (auto &iter : 
omap)\n\t\tnmap[iter.first] = new counts(*(iter.second));\n\treturn nmap;\n}\n\n/**\n * @brief\n * \t\tfind_counts - find a counts structure by name\n *\n * @param[in]\tctslist - the counts list to search\n * @param[in]\tname \t- the name to find\n *\n * @return\tfound counts structure\n * @retval\tNULL\t: error\n *\n * @par MT-Safe:\tno\n */\ncounts *\nfind_counts(counts_umap &ctslist, const std::string &name)\n{\n\tauto ret = ctslist.find(name);\n\tif (ret == ctslist.end())\n\t\treturn NULL;\n\treturn ret->second;\n}\n\n/**\n * @brief\n * \t\tfind_alloc_counts - find a counts structure by name or allocate\n *\t\t a new counts, name it, and add it to the end of the list\n *\n * @param[in]\tctslist - the counts map to search\n * @param[in]\tname \t- the name to find\n *\n * @return\tfound or newly-allocated counts structure\n * @retval\tNULL\t: error\n *\n * @par MT-Safe:\tno\n */\ncounts *\nfind_alloc_counts(counts_umap &ctslist, const std::string &name)\n{\n\tcounts *ret;\n\n\tret = find_counts(ctslist, name);\n\tif (ret == NULL) {\n\t\tctslist[name] = new counts(name);\n\t\treturn ctslist[name];\n\t}\n\treturn ret;\n}\n\n/**\n * @brief\n * \t\tupdate_counts_on_run - update a counts struct on the running of\n *\t\ta job\n *\n * @param[in]\tcts \t- the counts structure to update\n * @param[in]\tresreq \t- the resource requirements of the job which ran\n *\n * @return\tvoid\n *\n * @par MT-Safe:\tno\n */\nvoid\nupdate_counts_on_run(counts *cts, resource_req *resreq)\n{\n\tresource_count *ctsreq; /* rescts to update */\n\tresource_req *req;\t/* current in resreq */\n\n\tif (cts == NULL)\n\t\treturn;\n\n\tcts->running++;\n\n\tif (resreq == NULL)\n\t\treturn;\n\n\treq = resreq;\n\n\twhile (req != NULL) {\n\t\tctsreq = find_alloc_resource_count(cts->rescts, req->def);\n\n\t\tif (ctsreq != NULL) {\n\t\t\tif (cts->rescts == NULL)\n\t\t\t\tcts->rescts = ctsreq;\n\n\t\t\tctsreq->amount += req->amount;\n\t\t}\n\t\treq = req->next;\n\t}\n}\n\n/**\n * @brief\n * 
\t\tupdate_counts_on_end - update a counts structure on the end\n *\t\tof a job\n *\n * @param[in]\tcts \t- counts structure to update\n * @param[in]\tresreq \t- the resource requirements of the job which\n *\t\t\t\t\t\t\tended\n *\n * @return\tvoid\n *\n * @par MT-Safe:\tno\n */\nvoid\nupdate_counts_on_end(counts *cts, resource_req *resreq)\n{\n\tif (cts == NULL || resreq == NULL)\n\t\treturn;\n\n\tcts->running--;\n\n\tfor (auto req = resreq; req != NULL; req = req->next) {\n\t\tauto ctsreq = find_resource_count(cts->rescts, req->def);\n\t\tif (ctsreq != NULL)\n\t\t\tctsreq->amount -= req->amount;\n\t}\n}\n\n/**\n * @brief  Helper function that updates the max counts map if the counts structure passed\n *\t   to this function has a higher number of running jobs or resource amounts.\n * @param[in,out] cmax - map of max counts structures.\n * @param[in] ncounts - pointer to the counts structure to compare against\n *\n * @return void\n */\nstatic void\nset_counts_max(counts_umap &cmax, const counts *ncounts)\n{\n\tcounts *cur_fmax;\n\tresource_count *cur_res;\n\tresource_count *cur_res_max;\n\n\tcur_fmax = find_counts(cmax, ncounts->name);\n\tif (cur_fmax == NULL) {\n\t\tcmax[ncounts->name] = new counts(*ncounts);\n\t\treturn;\n\t} else {\n\t\tif (ncounts->running > cur_fmax->running)\n\t\t\tcur_fmax->running = ncounts->running;\n\t\tfor (cur_res = ncounts->rescts; cur_res != NULL; cur_res = cur_res->next) {\n\t\t\tcur_res_max = find_resource_count(cur_fmax->rescts, cur_res->def);\n\t\t\tif (cur_res_max == NULL) {\n\t\t\t\tcur_res_max = dup_resource_count(cur_res);\n\t\t\t\tif (cur_res_max == NULL) {\n\t\t\t\t\tfree_counts_list(cmax);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\n\t\t\t\tcur_res_max->next = cur_fmax->rescts;\n\t\t\t\tcur_fmax->rescts = cur_res_max;\n\t\t\t} else {\n\t\t\t\tif (cur_res->amount > cur_res_max->amount)\n\t\t\t\t\tcur_res_max->amount = cur_res->amount;\n\t\t\t}\n\t\t}\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \t\tperform a max() between the current list of 
maxes and a\n *\t\tnew list.  If any element from the new list is greater\n *\t\tthan the current max, we free the old, and dup the new\n *\t\tand attach it in.\n *\n * @param[in,out]\tcmax - current max\n * @param[in]\tncounts - new counts map.  If anything in this map is\n *\t\t\t  greater than the cur_max, it needs to be dup'd.\n *\n * @return\tvoid\n */\nvoid\ncounts_max(counts_umap &cmax, counts_umap &ncounts)\n{\n\n\tif (ncounts.size() == 0)\n\t\treturn;\n\n\tif (cmax.size() == 0) {\n\t\tcmax = ncounts;\n\t\treturn;\n\t}\n\n\tfor (const auto &cur : ncounts)\n\t\tset_counts_max(cmax, cur.second);\n}\n\n// overloaded\nvoid\ncounts_max(counts_umap &cmax, counts *ncounts)\n{\n\tif (ncounts == NULL)\n\t\treturn;\n\n\tif (cmax.size() == 0) {\n\t\tcmax[ncounts->name] = new counts(*ncounts);\n\t\treturn;\n\t}\n\tset_counts_max(cmax, ncounts);\n\treturn;\n}\n\n/**\n * @brief\n * \t\tupdate_universe_on_end - update a pbs universe when a job/resv\n *\t\tends\n *\n * @param[in]   policy \t\t- policy info\n * @param[in]\tresresv \t- the resresv itself which is ending\n * @param[in]\tjob_state \t- the new state of a job if resresv is a job\n * @param[in]\tflags\t\t- flags to modify behavior of the function\n * \t\t\t\t\tNO_ALLPART - do not update most of the metadata of the allpart\n *\n * @return\tvoid\n *\n * @par MT-Safe:\tno\n */\nvoid\nupdate_universe_on_end(status *policy, resource_resv *resresv, const char *job_state, unsigned int flags)\n{\n\tserver_info *sinfo = NULL;\n\tqueue_info *qinfo = NULL;\n\n\tif (resresv == NULL)\n\t\treturn;\n\n\tif (resresv->is_job && job_state == NULL)\n\t\treturn;\n\n\tif (!is_resource_resv_valid(resresv, NULL))\n\t\treturn;\n\n\tsinfo = resresv->server;\n\n\tif (resresv->is_job) {\n\t\tqinfo = resresv->job->queue;\n\t\tif (resresv->job != NULL && resresv->execselect != NULL) {\n\t\t\tint need_metadata_update = 0;\n\t\t\tfor (const auto &sdef : resresv->execselect->defs) {\n\t\t\t\tconst auto &rdc = 
policy->resdef_to_check;\n\t\t\t\tif (rdc.find(sdef) == rdc.end()) {\n\t\t\t\t\tpolicy->resdef_to_check.insert(sdef);\n\t\t\t\t\tneed_metadata_update = 1;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (need_metadata_update) {\n\t\t\t\t/* Since a new resource was added to resdef_to_check, the meta data needs to be recreated.\n\t\t\t\t * This will happen on the next call to node_partition_update()\n\t\t\t\t */\n\t\t\t\tif (sinfo->allpart != NULL) {\n\t\t\t\t\tfree_resource_list(sinfo->allpart->res);\n\t\t\t\t\tsinfo->allpart->res = NULL;\n\t\t\t\t}\n\t\t\t\tfor (auto queue : sinfo->queues) {\n\t\t\t\t\tif (queue->allpart != NULL) {\n\t\t\t\t\t\tfree_resource_list(queue->allpart->res);\n\t\t\t\t\t\tqueue->allpart->res = NULL;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tif (resresv->ninfo_arr != NULL) {\n\t\tfor (int i = 0; resresv->ninfo_arr[i] != NULL; i++)\n\t\t\tupdate_node_on_end(resresv->ninfo_arr[i], resresv, job_state);\n\t}\n\n\tupdate_server_on_end(policy, sinfo, qinfo, resresv, job_state);\n\n\tif (qinfo != NULL)\n\t\tupdate_queue_on_end(qinfo, resresv, job_state);\n\t/* update soft limits for jobs that are not in reservation */\n\tif (resresv->is_job && resresv->job->resv_id == NULL)\n\t\tupdate_soft_limits(sinfo, qinfo, resresv);\n\t/* Mark the metadata stale.  
It will be updated in the next call to is_ok_to_run() */\n\tsinfo->pset_metadata_stale = 1;\n\n\tupdate_resresv_on_end(resresv, job_state);\n\n#ifdef NAS /* localmod 057 */\n\tsite_update_on_end(sinfo, qinfo, resresv);\n#endif /* localmod 057 */\n\tupdate_preemption_priority(sinfo, resresv);\n}\n\n/**\n * @brief Update scheduler's cache of the universe when a job/resv runs\n *\n * @param[in] policy - policy info\n * @param[in] pbs_sd - connection descriptor to pbs server or SIMULATE_SD if we're simulating\n * @param[in] rr - the job or reservation\n * @param[in] flags - flags which modify behavior\n * \t\t\t\tRURR_ADD_END_EVENT - add an end event to calendar for this job\n * \t\t\t\tRURR_NOPRINT - don't print 'Job run'\n *\n * @return true - success\n * @return false - failure\n */\nbool\nupdate_universe_on_run_helper(status *policy, int pbs_sd, resource_resv *rr, unsigned int flags)\n{\n\tchar old_state = 0;\n\tresource_resv *array = NULL;\n\tserver_info *sinfo;\n\tqueue_info *qinfo = NULL;\n\n\tif (rr == NULL)\n\t\treturn false;\n\n\tif (!(flags & RURR_NOPRINT))\n\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_JOB, LOG_INFO, rr->name, \"Job run\");\n\n\tsinfo = rr->server;\n\tif (rr->is_job)\n\t\tqinfo = rr->job->queue;\n\n\tif (rr->is_job && rr->job != NULL && rr->job->parent_job != NULL)\n\t\tarray = rr->job->parent_job;\n\n\t/* any resresv marked can_not_run will be ignored by the scheduler,\n\t * so just in case we come across this resresv again, we want to ignore it\n\t * since it is already running\n\t */\n\trr->can_not_run = true;\n\n\tif (rr->is_job && rr->job->is_suspended)\n\t\told_state = 'S';\n\n\tupdate_resresv_on_run(rr, rr->nspec_arr);\n\n\tif ((flags & RURR_ADD_END_EVENT)) {\n\t\tauto te = create_event(TIMED_END_EVENT, rr->end, rr, NULL, NULL);\n\t\tif (te == NULL)\n\t\t\treturn false;\n\n\t\tadd_event(sinfo->calendar, te);\n\t}\n\n\tif (array != NULL) 
{\n\t\tupdate_array_on_run(array->job, rr->job);\n\n\t\t/* Subjobs inherit all attributes from their parent job array. This means\n\t\t * we need to make sure the parent resresv array has its accrue_type set\n\t\t * before running the job.  If all subresresvs have run,\n\t\t * then resresv array's accrue_type becomes ineligible.\n\t\t */\n\t\tif (array->is_job &&\n\t\t    range_next_value(array->job->queued_subjobs, -1) < 0)\n\t\t\tupdate_accruetype(pbs_sd, sinfo, ACCRUE_MAKE_INELIGIBLE, SUCCESS, array);\n\t\telse\n\t\t\tupdate_accruetype(pbs_sd, sinfo, ACCRUE_MAKE_ELIGIBLE, SUCCESS, array);\n\t}\n\n\tif (!rr->nspec_arr.empty()) {\n\t\tbool sort_nodepart = false;\n\t\tfor (auto n : rr->nspec_arr) {\n\t\t\tupdate_node_on_run(n, rr, &old_state);\n\t\t\tif (n->ninfo->np_arr != NULL) {\n\t\t\t\tnode_partition **npar = n->ninfo->np_arr;\n\t\t\t\tfor (int j = 0; npar[j] != NULL; j++) {\n\t\t\t\t\tmodify_resource_list(npar[j]->res, n->resreq, SCHD_INCR);\n\t\t\t\t\tif (!n->ninfo->is_free)\n\t\t\t\t\t\tnpar[j]->free_nodes--;\n\t\t\t\t\tsort_nodepart = true;\n\t\t\t\t\tupdate_buckets_for_node(npar[j]->bkts, n->ninfo);\n\t\t\t\t}\n\t\t\t}\n\t\t\t/* if the node is being provisioned, it's brought down in\n\t\t\t * update_node_on_run().  
We need to add an event in the calendar to\n\t\t\t * bring it back up.\n\t\t\t */\n\t\t\tif (n->go_provision) {\n\t\t\t\tif (add_prov_event(sinfo->calendar, sinfo->server_time + PROVISION_DURATION, n->ninfo) == 0)\n\t\t\t\t\treturn false;\n\t\t\t}\n\t\t}\n\t\tif (sort_nodepart)\n\t\t\tsort_all_nodepart(policy, sinfo);\n\t}\n\n\tupdate_queue_on_run(qinfo, rr, &old_state);\n\n\tupdate_server_on_run(policy, sinfo, qinfo, rr, &old_state);\n\n\t/* update soft limits for jobs that are not in reservation */\n\tif (rr->is_job && rr->job->resv_id == NULL) {\n\t\t/* update the entity preempt bit */\n\t\tupdate_soft_limits(sinfo, qinfo, rr);\n\t\t/* update the job preempt status */\n\t\tset_preempt_prio(rr, qinfo, sinfo);\n\t}\n\n\t/* update_preemption_priority() must be called post queue/server update */\n\tupdate_preemption_priority(sinfo, rr);\n\n\tif (sinfo->policy->fair_share)\n\t\tupdate_usage_on_run(rr);\n\n\tif (rr->is_job && rr->job->is_preempted) {\n\t\tunset_job_attr(pbs_sd, rr, ATTR_sched_preempted, UPDATE_LATER);\n\t\trr->job->is_preempted = 0;\n\t\trr->job->time_preempted = UNSPECIFIED;\n\t\tsinfo->num_preempted--;\n\t}\n\n#ifdef NAS /* localmod 057 */\n\tsite_update_on_run(sinfo, qinfo, rr, ns);\n#endif /* localmod 057 */\n\n\treturn true;\n}\n\n/**\n * @brief Update scheduler's cache of the universe when a job/resv runs\n * \n * @param[in] policy - policy info\n * @param[in] pbs_sd - connection descriptor to pbs server or SIMULATE_SD if we're simulating\n * @param[in] rr - the job or reservations\n * @param[in] orig_ns - where we're running the job/resv\n * @param[in] sinfo - the server\n * @param[in] qinfo - the queue in the case of a job\n * @param[in] flags - flags which modify behavior\n * \t\t\t\tRURR_ADD_END_EVENT - add an end event to calendar for this job\n * \t\t\t\tRURR_NOPRINT - don't print 'Job run'\n * @return true - success\n * @return false - failure\n */\nbool\nupdate_universe_on_run(status *policy, int pbs_sd, resource_resv *rr, 
std::vector<nspec *> &orig_ns, unsigned int flags)\n{\n\tif (!rr->nspec_arr.empty())\n\t\tfree_nspecs(rr->nspec_arr);\n\n\trr->nspec_arr = combine_nspec_array(orig_ns);\n\n\tif (rr->is_resv) {\n\t\tif (!rr->resv->orig_nspec_arr.empty())\n\t\t\tfree_nspecs(rr->resv->orig_nspec_arr);\n\t\trr->resv->orig_nspec_arr = std::move(orig_ns);\n\t} else\n\t\tfree_nspecs(orig_ns);\n\n\treturn update_universe_on_run_helper(policy, pbs_sd, rr, flags);\n}\n\n/**\n * @brief Update scheduler's cache of the universe when a job/resv runs.  This overload\n * \tdoes not pass an ns_arr parameter and uses the nspec array from the job/resv itself.\n * \n * @param[in] policy - policy info\n * @param[in] pbs_sd - connection descriptor to pbs server or SIMULATE_SD if we're simulating\n * @param[in] rr - the job or reservation\n * @param[in] flags - flags which modify behavior\n * \t\t\t\tRURR_ADD_END_EVENT - add an end event to calendar for this job\n * \t\t\t\tRURR_NOPRINT - don't print 'Job run'\n * @return true - success\n * @return false - failure\n */\nbool\nupdate_universe_on_run(status *policy, int pbs_sd, resource_resv *rr, unsigned int flags)\n{\n\treturn update_universe_on_run_helper(policy, pbs_sd, rr, flags);\n}\n\n/**\n * @brief\n * \t\tset_resource - set the values of the resource structure.  This\n *\t\tfunction can be called in one of two ways.  It can be called\n *\t\twith the resources_available value or the resources_assigned\n *\t\tvalue.\n *\n * @param[in]\tres \t- the resource to set\n * @param[in]\tval \t- the value to set upon the resource\n * @param[in]\tfield \t- the type of field to set (available or assigned)\n *\n * @return\tint\n * @retval\t1 : success\n * @retval\t0 : failure/error\n *\n * @note\n * \t\tIf we have resource type information from the server,\n *\t\twe will use it.  
If not, we log a warning and\n *\t\tfail, since we cannot determine the resource's type\n *\n * @par MT-Safe:\tno\n */\nint\nset_resource(schd_resource *res, const char *val, enum resource_fields field)\n{\n\tresdef *rdef;\n\n\tif (res == NULL || val == NULL)\n\t\treturn 0;\n\n\trdef = find_resdef(res->name);\n\tres->def = rdef;\n\n\tif (rdef != NULL)\n\t\tres->type = rdef->type;\n\telse {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_RESC, LOG_WARNING, res->name, \"Can't find resource definition\");\n\t\treturn 0;\n\t}\n\n\tif (field == RF_AVAIL) {\n\t\t/* if this resource is being re-set, let's free the memory we previously\n\t\t * allocated in the last call to this function.  We NULL the values just\n\t\t * in case we don't reset them later (e.g. originally set a resource\n\t\t * indirect and then later set it directly)\n\t\t */\n\t\tif (res->orig_str_avail != NULL) {\n\t\t\tfree(res->orig_str_avail);\n\t\t\tres->orig_str_avail = NULL;\n\t\t}\n\t\tif (res->indirect_vnode_name != NULL) {\n\t\t\tfree(res->indirect_vnode_name);\n\t\t\tres->indirect_vnode_name = NULL;\n\t\t}\n\t\tif (res->str_avail != NULL) {\n\t\t\tfree_string_array(res->str_avail);\n\t\t\tres->str_avail = NULL;\n\t\t}\n\n\t\tres->orig_str_avail = string_dup(val);\n\t\tif (res->orig_str_avail == NULL)\n\t\t\treturn 0;\n\n\t\tif (val[0] == '@') {\n\t\t\tres->indirect_vnode_name = string_dup(&val[1]);\n\t\t\t/* res->indirect_res is assigned by a call to\n\t\t\t * resolve_indirect_resources()\n\t\t\t */\n\t\t\tif (res->indirect_vnode_name == NULL)\n\t\t\t\treturn 0;\n\t\t} else {\n\t\t\t/* if val is a string, avail will be set to SCHD_INFINITY_RES */\n\t\t\tres->avail = res_to_num(val, NULL);\n\t\t\tif (res->avail == SCHD_INFINITY_RES) {\n\t\t\t\t/* Verify that this is a string type resource */\n\t\t\t\tif (!res->def->type.is_string) {\n\t\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_RESC, LOG_WARNING, res->name, \"Invalid value for consumable 
resource\");\n\t\t\t\t\treturn 0;\n\t\t\t\t}\n\t\t\t}\n\t\t\tres->str_avail = break_comma_list(const_cast<char *>(val));\n\t\t\tif (res->str_avail == NULL) {\n\t\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_RESC, LOG_WARNING, res->name, \"Invalid value: %s\", val);\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t}\n\t} else if (field == RF_ASSN) {\n\t\t/* clear previously allocated memory in the case of a reassignment */\n\t\tif (res->str_assigned != NULL) {\n\t\t\tfree(res->str_assigned);\n\t\t\tres->str_assigned = NULL;\n\t\t}\n\t\tif (val[0] == '@') /* Indirect resources will be found elsewhere, assign 0 */\n\t\t\tres->assigned = 0;\n\t\telse\n\t\t\tres->assigned = res_to_num(val, NULL);\n\t\tres->str_assigned = string_dup(val);\n\t\tif (res->str_assigned == NULL)\n\t\t\treturn 0;\n\t}\n\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tfind_indirect_resource - follow the indirect resource pointers\n *\t\tto find the real resource at the end\n *\n * @param[in]\tres \t- the indirect resource\n * @param[in]\tnodes \t- the nodes to search\n *\n * @return\tthe indirect resource\n * @retval\tNULL\t: on error\n *\n * @par MT-Safe:\tno\n */\nschd_resource *\nfind_indirect_resource(schd_resource *res, node_info **nodes)\n{\n\tschd_resource *cur_res = NULL;\n\tint i;\n\tint error = 0;\n\tconst int max = 10;\n\n\tif (res == NULL || nodes == NULL)\n\t\treturn NULL;\n\n\tcur_res = res;\n\n\tfor (i = 0; i < max && cur_res != NULL && cur_res->indirect_vnode_name != NULL && !error; i++) {\n\t\tauto ninfo = find_node_info(nodes, cur_res->indirect_vnode_name);\n\t\tif (ninfo != NULL) {\n\t\t\tcur_res = find_resource(ninfo->res, cur_res->def);\n\t\t\tif (cur_res == NULL) {\n\t\t\t\terror = 1;\n\t\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_DEBUG, __func__,\n\t\t\t\t\t   \"Resource %s is indirect, and does not exist on indirect node %s\",\n\t\t\t\t\t   res->name, ninfo->name.c_str());\n\t\t\t}\n\t\t} else {\n\t\t\terror = 1;\n\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, 
LOG_DEBUG, __func__,\n\t\t\t\t   \"Resource %s is indirect but points to node %s, which was not found\",\n\t\t\t\t   res->name, cur_res->indirect_vnode_name);\n\t\t\tcur_res = NULL;\n\t\t}\n\t}\n\tif (i == max && cur_res != NULL) {\n\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_DEBUG, __func__,\n\t\t\t   \"Attempted %d indirection lookups for resource %s=@%s -- \"\n\t\t\t   \"looks like a cycle, bailing out\",\n\t\t\t   max, cur_res->name, cur_res->indirect_vnode_name);\n\t\treturn NULL;\n\t}\n\n\tif (error)\n\t\treturn NULL;\n\n\treturn cur_res;\n}\n\n/**\n * @brief\n * \t\tresolve_indirect_resources - resolve indirect resources for node\n *\t\tarray\n *\n * @param[in,out]\tnodes\t-\tthe nodes to resolve\n *\n * @return\tint\n * @retval\t1\t: if successful\n * @retval\t0\t: if there were any errors\n *\n * @par MT-Safe:\tno\n */\nint\nresolve_indirect_resources(node_info **nodes)\n{\n\tint i;\n\tschd_resource *cur_res;\n\tint error = 0;\n\n\tif (nodes == NULL)\n\t\treturn 0;\n\n\tfor (i = 0; nodes[i] != NULL; i++) {\n\t\tcur_res = nodes[i]->res;\n\t\twhile (cur_res != NULL) {\n\t\t\tif (cur_res->indirect_vnode_name) {\n\t\t\t\tcur_res->indirect_res = find_indirect_resource(cur_res, nodes);\n\t\t\t\tif (cur_res->indirect_res == NULL)\n\t\t\t\t\terror = 1;\n\t\t\t}\n\t\t\tcur_res = cur_res->next;\n\t\t}\n\t}\n\n\tif (error)\n\t\treturn 0;\n\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tupdate_preemption_priority - update preemption status when a\n *\t\tjob runs/ends\n *\n * @param[in]\tsinfo \t- server where job was run\n * @param[in]\tresresv - job which was run\n *\n * @return\tvoid\n *\n * @note\n * \t\tMust be called after update_server_on_run/end() and\n *\t\tupdate_queue_on_run/end()\n *\n * @note\n * \t\tThe only thing which will change preemption priorities\n *\t\tin the middle of a scheduling cycle is soft user/group/project\n *\t\tlimits. 
If a user, group, or project goes under a limit because\n *\t\tof this job running, we need to update those jobs\n *\n * @par MT-Safe:\tno\n */\nvoid\nupdate_preemption_priority(server_info *sinfo, resource_resv *resresv)\n{\n\tif (cstat.preempting && resresv->is_job) {\n\t\tif (sinfo->has_soft_limit || resresv->job->queue->has_soft_limit) {\n\t\t\tfor (int i = 0; sinfo->jobs[i] != NULL; i++) {\n\t\t\t\tif (sinfo->jobs[i]->job != NULL) {\n\t\t\t\t\tint usrlim = resresv->job->queue->has_user_limit || sinfo->has_user_limit;\n\t\t\t\t\tint grplim = resresv->job->queue->has_grp_limit || sinfo->has_grp_limit;\n\t\t\t\t\tint projlim = resresv->job->queue->has_proj_limit || sinfo->has_proj_limit;\n\t\t\t\t\tif ((usrlim && (resresv->user == sinfo->jobs[i]->user)) ||\n\t\t\t\t\t    (grplim && (resresv->group == sinfo->jobs[i]->group)) ||\n\t\t\t\t\t    (projlim && (resresv->project == sinfo->jobs[i]->project)))\n\t\t\t\t\t\tset_preempt_prio(sinfo->jobs[i],\n\t\t\t\t\t\t\t\t sinfo->jobs[i]->job->queue, sinfo);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/* now that we've set all the preempt levels, we need to count them */\n\t\t\tmemset(sinfo->preempt_count, 0, NUM_PPRIO * sizeof(int));\n\t\t\tfor (int i = 0; sinfo->running_jobs[i] != NULL; i++)\n\t\t\t\tif (!sinfo->running_jobs[i]->job->can_not_preempt)\n\t\t\t\t\tsinfo->preempt_count[preempt_level(sinfo->running_jobs[i]->job->preempt)]++;\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\tread_formula - read the formula from a well known file\n *\n * @return\tformula in malloc'd buffer\n * @retval\tNULL\t: on error\n *\n * @par MT-Safe:\tno\n */\n#define RF_BUFSIZE 1024\nchar *\nread_formula(void)\n{\n\tchar *form;\n\tchar *tmp;\n\tchar buf[RF_BUFSIZE];\n\tsize_t bufsize = RF_BUFSIZE;\n\tFILE *fp;\n\n\tif ((fp = fopen(FORMULA_FILENAME, \"r\")) == NULL) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_REQUEST, LOG_INFO, __func__,\n\t\t\t  \"Can not open file to read job_sort_formula.  
Please reset formula with qmgr.\");\n\t\treturn NULL;\n\t}\n\n\tif ((form = static_cast<char *>(malloc(bufsize + 1))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\tfclose(fp);\n\t\treturn NULL;\n\t}\n\n\tform[0] = '\\0';\n\n\t/* first line is a comment */\n\tif (fgets(buf, RF_BUFSIZE, fp) == NULL)\n\t\tlog_errf(-1, __func__, \"fgets failed.\");\n\n\twhile (fgets(buf, RF_BUFSIZE, fp) != NULL) {\n\t\tauto len = strlen(form) + strlen(buf);\n\t\tif (len > bufsize) {\n\t\t\ttmp = static_cast<char *>(realloc(form, len * 2 + 1));\n\t\t\tif (tmp == NULL) {\n\t\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\t\tfree(form);\n\t\t\t\tfclose(fp);\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tform = tmp;\n\t\t\tbufsize = len * 2;\n\t\t}\n\t\tstrcat(form, buf);\n\t}\n\n\t/* strip the trailing newline, guarding against an empty formula */\n\tif (form[0] != '\\0' && form[strlen(form) - 1] == '\\n')\n\t\tform[strlen(form) - 1] = '\\0';\n\n\tfclose(fp);\n\treturn form;\n}\n\n/**\n * @brief\n * \t\tdup_status - status copy constructor\n *\n * @param[in]\tost\t-\tstatus input\n *\n * @return\tduplicated status\n * @retval\tNULL\t: on error\n */\nstatus *\ndup_status(status *ost)\n{\n\tstatus *nst;\n\n\tif (ost == NULL)\n\t\treturn NULL;\n\n\tnst = new status();\n\tif (nst == NULL)\n\t\treturn NULL;\n\n\t*nst = *ost;\n\n\treturn nst;\n}\n\n/**\n * @brief\n * \t\tfree_queue_list - free a two-dimensional queue_list array\n *\n * @param[in]\tqueue_list - list which needs to be deleted.\n *\n * @return\tvoid\n */\nvoid\nfree_queue_list(queue_info ***queue_list)\n{\n\tint i;\n\n\tif (queue_list == NULL)\n\t\treturn;\n\tfor (i = 0; queue_list[i] != NULL; i++)\n\t\tfree(queue_list[i]);\n\tfree(queue_list);\n}\n\n/**\n * @brief\n * \t\tcreate_total_counts - This function checks if\n *\t    total_*_counts list for user/group/project and alljobcounts\n *\t    is empty and if so, it duplicates or creates new counts with the\n *\t    user/group/project name mentioned in resource_resv structure.\n *\n * @param[in,out]  sinfo\t-\tserver_info structure used to check and set\n *           
               \ttotal_*_counts\n * @param[in,out]  qinfo   \t-\tqueue_info structure used to check and set\n *                          \ttotal_*_counts\n * @param[in]      resresv \t-\tresource_resv structure to get user/group/project\n * @param[in]      mode    \t-\tTo state whether total_*_counts in server_info\n *                          \tstructure needs to be created or in queue_info.\n *\n * @return\tvoid\n */\nvoid\ncreate_total_counts(server_info *sinfo, queue_info *qinfo,\n\t\t    resource_resv *resresv, int mode)\n{\n\tif (mode == SERVER || mode == ALL) {\n\t\tif (sinfo->total_group_counts.size() == 0) {\n\t\t\tif (sinfo->group_counts.size() != 0)\n\t\t\t\tsinfo->total_group_counts = dup_counts_umap(sinfo->group_counts);\n\t\t\telse if (resresv != NULL)\n\t\t\t\tfind_alloc_counts(sinfo->total_group_counts, resresv->group);\n\t\t}\n\t\tif (sinfo->total_user_counts.size() == 0) {\n\t\t\tif (sinfo->user_counts.size() != 0)\n\t\t\t\tsinfo->total_user_counts = dup_counts_umap(sinfo->user_counts);\n\t\t\telse if (resresv != NULL)\n\t\t\t\tfind_alloc_counts(sinfo->total_user_counts, resresv->user);\n\t\t}\n\t\tif (sinfo->total_project_counts.size() == 0) {\n\t\t\tif (sinfo->project_counts.size() != 0)\n\t\t\t\tsinfo->total_project_counts = dup_counts_umap(sinfo->project_counts);\n\t\t\telse if (resresv != NULL)\n\t\t\t\tfind_alloc_counts(sinfo->total_project_counts, resresv->project);\n\t\t}\n\t\tif (sinfo->total_alljobcounts.size() == 0) {\n\t\t\tif (sinfo->alljobcounts.size() != 0)\n\t\t\t\tsinfo->total_alljobcounts = dup_counts_umap(sinfo->alljobcounts);\n\t\t\telse\n\t\t\t\tfind_alloc_counts(sinfo->total_alljobcounts, PBS_ALL_ENTITY);\n\t\t}\n\t}\n\tif (mode == QUEUE || mode == ALL) {\n\t\tif (qinfo->total_group_counts.size() == 0) {\n\t\t\tif (qinfo->group_counts.size() != 0)\n\t\t\t\tqinfo->total_group_counts = dup_counts_umap(qinfo->group_counts);\n\t\t\telse if (resresv != NULL)\n\t\t\t\tfind_alloc_counts(qinfo->total_group_counts, 
resresv->group);\n\t\t}\n\t\tif (qinfo->total_user_counts.size() == 0) {\n\t\t\tif (qinfo->user_counts.size() != 0)\n\t\t\t\tqinfo->total_user_counts = dup_counts_umap(qinfo->user_counts);\n\t\t\telse if (resresv != NULL)\n\t\t\t\tfind_alloc_counts(qinfo->total_user_counts, resresv->user);\n\t\t}\n\t\tif (qinfo->total_project_counts.size() == 0) {\n\t\t\tif (qinfo->project_counts.size() != 0)\n\t\t\t\tqinfo->total_project_counts = dup_counts_umap(qinfo->project_counts);\n\t\t\telse if (resresv != NULL)\n\t\t\t\tfind_alloc_counts(qinfo->total_project_counts, resresv->project);\n\t\t}\n\t\tif (qinfo->total_alljobcounts.size() == 0) {\n\t\t\tif (qinfo->alljobcounts.size() != 0)\n\t\t\t\tqinfo->total_alljobcounts = dup_counts_umap(qinfo->alljobcounts);\n\t\t\telse if (resresv != NULL)\n\t\t\t\tfind_alloc_counts(qinfo->total_alljobcounts, PBS_ALL_ENTITY);\n\t\t}\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \t\tupdate_total_counts - update a total counts list on running or\n *         queuing a job\n *\n * @param[in]\tsi\t\t-\tserver_info structure whose total counts are updated\n * @param[in]\tqi\t\t-\tqueue_info structure whose total counts are updated\n * @param[in]\trr\t\t-\tresource_resv structure whose resreq is added to the counts\n * @param[in]  \tmode\t-\tstates whether the total_*_counts are updated in the\n *                      \tserver_info structure or in the queue_info structure.\n *\n * @return\tvoid\n *\n */\nvoid\nupdate_total_counts(server_info *si, queue_info *qi,\n\t\t    resource_resv *rr, int mode)\n{\n\tcreate_total_counts(si, qi, rr, mode);\n\tif (((mode == SERVER) || (mode == ALL)) &&\n\t    ((si != NULL) && si->has_hard_limit)) {\n\t\tupdate_counts_on_run(find_alloc_counts(si->total_group_counts, rr->group), rr->resreq);\n\t\tupdate_counts_on_run(find_alloc_counts(si->total_project_counts, rr->project), rr->resreq);\n\t\tupdate_counts_on_run(find_alloc_counts(si->total_alljobcounts, PBS_ALL_ENTITY), rr->resreq);\n\t\tupdate_counts_on_run(find_alloc_counts(si->total_user_counts, 
rr->user), rr->resreq);\n\t} else if (((mode == QUEUE) || (mode == ALL)) &&\n\t\t   ((qi != NULL) && qi->has_hard_limit)) {\n\t\tupdate_counts_on_run(find_alloc_counts(qi->total_group_counts, rr->group), rr->resreq);\n\t\tupdate_counts_on_run(find_alloc_counts(qi->total_project_counts, rr->project), rr->resreq);\n\t\tupdate_counts_on_run(find_alloc_counts(qi->total_alljobcounts, PBS_ALL_ENTITY), rr->resreq);\n\t\tupdate_counts_on_run(find_alloc_counts(qi->total_user_counts, rr->user), rr->resreq);\n\t}\n}\n\n/**\n * @brief\n * \t\tupdate_total_counts_on_end - update a total counts list on preempting\n *         a running job\n *\n * @param[in]\tsi\t\t-\tserver_info structure whose total counts are updated\n * @param[in]\tqi\t\t-\tqueue_info structure whose total counts are updated\n * @param[in]\trr\t\t-\tresource_resv structure whose resreq is subtracted from the counts\n * @param[in]  \tmode\t-\tstates whether the total_*_counts are updated in the\n *                      \tserver_info structure or in the queue_info structure.\n *\n * @return\tvoid\n *\n */\nvoid\nupdate_total_counts_on_end(server_info *si, queue_info *qi,\n\t\t\t   resource_resv *rr, int mode)\n{\n\tcreate_total_counts(si, qi, rr, mode);\n\tif (((mode == SERVER) || (mode == ALL)) &&\n\t    ((si != NULL) && si->has_hard_limit)) {\n\t\tupdate_counts_on_end(find_alloc_counts(si->total_group_counts, rr->group), rr->resreq);\n\t\tupdate_counts_on_end(find_alloc_counts(si->total_project_counts, rr->project), rr->resreq);\n\t\tupdate_counts_on_end(find_alloc_counts(si->total_alljobcounts, PBS_ALL_ENTITY), rr->resreq);\n\t\tupdate_counts_on_end(find_alloc_counts(si->total_user_counts, rr->user), rr->resreq);\n\t} else if (((mode == QUEUE) || (mode == ALL)) &&\n\t\t   ((qi != NULL) && qi->has_hard_limit)) {\n\t\tupdate_counts_on_end(find_alloc_counts(qi->total_group_counts, rr->group), rr->resreq);\n\t\tupdate_counts_on_end(find_alloc_counts(qi->total_project_counts, 
rr->resreq);\n\t\tupdate_counts_on_end(find_alloc_counts(qi->total_alljobcounts, PBS_ALL_ENTITY), rr->resreq);\n\t\tupdate_counts_on_end(find_alloc_counts(qi->total_user_counts, rr->user), rr->resreq);\n\t}\n}\n\n/**\n * @brief\n * \t\tget a unique rank to uniquely identify an object\n *\n * @return\tint\n * @retval\tunique number for this scheduling cycle\n */\nint\nget_sched_rank()\n{\n\tcstat.order++;\n\treturn cstat.order;\n}\n\n/**\n * @brief\n * \t\tadd_queue_to_list - This function aligns all queues according to\n *                              their priority.\n *\n * @param[in,out]\tqlhead\t-\taddress of 3-dimensional queue list.\n * @param[in]\t\tqinfo\t-\tqueue which is getting added in queue_list.\n *\n * @return\tint\n * @retval\t1\t: If successful in adding the qinfo to queue_list.\n * @retval\t0\t: If failed to add qinfo to queue_list.\n */\nint\nadd_queue_to_list(queue_info ****qlhead, queue_info *qinfo)\n{\n\tint queue_list_size = 0;\n\tvoid *temp = NULL;\n\tqueue_info ***temp_list = NULL;\n\tqueue_info ***list_head;\n\n\tif (qlhead == NULL)\n\t\treturn 0;\n\n\tlist_head = *qlhead;\n\tqueue_list_size = count_array(list_head);\n\n\ttemp_list = find_queue_list_by_priority(list_head, qinfo->priority);\n\tif (temp_list == NULL) {\n\t\ttemp = realloc(list_head, (queue_list_size + 2) * sizeof(queue_info **));\n\t\tif (temp == NULL) {\n\t\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\t\treturn 0;\n\t\t}\n\t\t*qlhead = list_head = static_cast<queue_info ***>(temp);\n\t\tlist_head[queue_list_size] = NULL;\n\t\tlist_head[queue_list_size + 1] = NULL;\n\t\tif (append_to_queue_list(&list_head[queue_list_size], qinfo) == NULL)\n\t\t\treturn 0;\n\t} else {\n\t\tif (append_to_queue_list(temp_list, qinfo) == NULL)\n\t\t\treturn 0;\n\t}\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tfind_queue_list_by_priority - function finds the array of queues\n *                               which matches the priority passed to this\n *                               
function. It returns the base address of matching\n *                               array.\n *\n * @param[in]\tlist_head \t- \tHead pointer to queue_info list.\n * @param[in]\tpriority \t-  \tPriority of the queue that needs to be searched.\n *\n * @return\tqueue_info*** - base address of array which has given priority.\n * @retval\tNULL\t: when function is not able to find the array.\n *\n */\nstruct queue_info ***\nfind_queue_list_by_priority(queue_info ***list_head, int priority)\n{\n\tint i;\n\tif (list_head == NULL)\n\t\treturn NULL;\n\tfor (i = 0; list_head[i] != NULL; i++) {\n\t\tif ((list_head[i][0] != NULL) && list_head[i][0]->priority == priority)\n\t\t\treturn (&list_head[i]);\n\t}\n\treturn NULL;\n}\n\n/**\n * @brief\n * \t\tappend_to_queue_list - function that will reallocate and append\n *                               \"add\" to the list provided.\n * @param[in,out]\tlist\t-\tpointer to queue_info** which gets reallocated\n *                       \t\tand \"add\" is appended to it.\n * @param[in] \t\tadd \t-   queue_info  that needs to be appended.\n *\n * @return\tqueue_info** : newly appended list.\n * @retval\tNULL\t: when realloc fails.\n *           \t\t\tpointer to appended list.\n */\nstruct queue_info **\nappend_to_queue_list(queue_info ***list, queue_info *add)\n{\n\tint count = 0;\n\tqueue_info **temp = NULL;\n\n\tif ((list == NULL) || (add == NULL))\n\t\treturn NULL;\n\tcount = count_array(*list);\n\n\t/* count contains number of elements in list (excluding NULL). 
we add 2 to add the NULL\n\t * back in, plus our new element.\n\t */\n\ttemp = (queue_info **) realloc(*list, (count + 2) * sizeof(queue_info *));\n\tif (temp == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\ttemp[count] = add;\n\ttemp[count + 1] = NULL;\n\t*list = temp;\n\treturn (*list);\n}\n\n/**\n * @brief basically do a reslist->assigned += reqlist->amount for all of reqlist\n * @param reslist - resource list\n * @param reqlist - resource_req list\n * @return\n */\nvoid\nadd_req_list_to_assn(schd_resource *reslist, resource_req *reqlist)\n{\n\tif (reslist == NULL || reqlist == NULL)\n\t\treturn;\n\n\tfor (auto req = reqlist; req != NULL; req = req->next) {\n\t\tauto r = find_resource(reslist, req->def);\n\t\tif (r != NULL && r->type.is_consumable)\n\t\t\tr->assigned += req->amount;\n\t}\n}\n\n/**\n * @brief create the ninfo->res->assigned values for the node\n * @param ninfo - the node\n * @return int\n * @retval 1 success\n * @retval 0 failure\n */\nint\ncreate_resource_assn_for_node(node_info *ninfo)\n{\n\tschd_resource *r;\n\tschd_resource *ncpus_res = NULL;\n\tint i;\n\n\tif (ninfo == NULL)\n\t\treturn 0;\n\n\tfor (r = ninfo->res; r != NULL; r = r->next)\n\t\tif (r->type.is_consumable) {\n\t\t\tr->assigned = 0;\n\t\t\tif (r->def == allres[\"ncpus\"])\n\t\t\t\tncpus_res = r;\n\t\t}\n\n\t/* First off, add resource from running jobs (that aren't in resvs) */\n\tif (ninfo->job_arr != NULL) {\n\t\tfor (i = 0; ninfo->job_arr[i] != NULL; i++) {\n\t\t\t/* ignore jobs in reservations.  The resources will be accounted for with the reservation itself.  */\n\t\t\tif (ninfo->job_arr[i]->job != NULL && ninfo->job_arr[i]->job->resv == NULL) {\n\t\t\t\tfor (auto &n : ninfo->job_arr[i]->nspec_arr) {\n\t\t\t\t\tif (n->ninfo->rank == ninfo->rank)\n\t\t\t\t\t\tadd_req_list_to_assn(ninfo->res, n->resreq);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t/* Next up, account for running reservations.  
Running reservations consume all resources on the node when they start.  */\n\tif (ninfo->run_resvs_arr != NULL) {\n\t\tfor (i = 0; ninfo->run_resvs_arr[i] != NULL; i++) {\n\t\t\tfor (auto &n : ninfo->run_resvs_arr[i]->nspec_arr) {\n\t\t\t\tif (n->ninfo->rank == ninfo->rank)\n\t\t\t\t\tadd_req_list_to_assn(ninfo->res, n->resreq);\n\t\t\t}\n\t\t}\n\t}\n\n\t/* Lastly, if restrict_res_to_release_on_suspend is set, suspended jobs may not have released all of their resources.\n\t * This is tricky since a suspended job only knows what resources it released.\n\t * We need to know what it didn't release to account for it in the node's resources_assigned.\n\t * Also, we only need to deal with suspended jobs outside of reservations since resources for reservations were handled above.\n\t */\n\tif (ninfo->num_susp_jobs > 0) {\n\t\tint i;\n\t\tserver_info *sinfo = ninfo->server;\n\t\tfor (i = 0; sinfo->jobs[i] != NULL; i++) {\n\t\t\tif (sinfo->jobs[i]->job->is_suspended && sinfo->jobs[i]->job->resv == NULL) {\n\t\t\t\tnspec *ens;\n\t\t\t\tens = find_nspec(sinfo->jobs[i]->nspec_arr, ninfo);\n\t\t\t\tif (ens != NULL) {\n\t\t\t\t\tnspec *rns;\n\t\t\t\t\trns = find_nspec(sinfo->jobs[i]->job->resreleased, ninfo);\n\t\t\t\t\tif (rns != NULL) {\n\t\t\t\t\t\tresource_req *cur_req;\n\t\t\t\t\t\tfor (cur_req = ens->resreq; cur_req != NULL; cur_req = cur_req->next) {\n\t\t\t\t\t\t\tif (cur_req->type.is_consumable)\n\t\t\t\t\t\t\t\tif (find_resource_req(rns->resreq, cur_req->def) == NULL) {\n\t\t\t\t\t\t\t\t\tschd_resource *nres;\n\t\t\t\t\t\t\t\t\tnres = find_resource(ninfo->res, cur_req->def);\n\t\t\t\t\t\t\t\t\tif (nres != NULL)\n\t\t\t\t\t\t\t\t\t\tnres->assigned += cur_req->amount;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tif (ncpus_res != NULL && ncpus_res->assigned < ncpus_res->avail)\n\t\tremove_node_state(ninfo, ND_jobbusy);\n\n\treturn 1;\n}\n\n/**\n * @brief compares two schd_resource structs for equality\n *\n * @return int\n * @retval 1 if 
equal\n * @retval 0 if not equal\n */\nint\ncompare_resource_avail(schd_resource *r1, schd_resource *r2)\n{\n\tif (r1 == NULL && r2 == NULL)\n\t\treturn 1;\n\tif (r1 == NULL || r2 == NULL)\n\t\treturn 0;\n\n\tif (r1->def->type.is_string) {\n\t\tif (match_string_array(r1->str_avail, r2->str_avail) == SA_FULL_MATCH)\n\t\t\treturn 1;\n\t\telse\n\t\t\treturn 0;\n\t}\n\tif (r1->avail == r2->avail)\n\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief compare two schd_resource lists for equality\n * @return int\n * @retval 1 if equal\n * @retval 0 if not equal\n */\nint\ncompare_resource_avail_list(schd_resource *r1, schd_resource *r2)\n{\n\tschd_resource *cur;\n\n\tif (r1 == NULL && r2 == NULL)\n\t\treturn 1;\n\tif (r1 == NULL || r2 == NULL)\n\t\treturn 0;\n\n\tfor (cur = r1; cur != NULL; cur = cur->next) {\n\t\tschd_resource *res;\n\n\t\tres = find_resource(r2, cur->def);\n\t\tif (res != NULL) {\n\t\t\tif (compare_resource_avail(cur, res) == 0)\n\t\t\t\treturn 0;\n\t\t} else if (cur->type.is_boolean) { /* Unset boolean == False */\n\t\t\tif (cur->avail != 0)\n\t\t\t\treturn 0;\n\t\t} else\n\t\t\treturn 0;\n\t}\n\n\treturn 1;\n}\n\n/**\n * @brief dup sinfo->unordered_nodes from the nnodes array.\n * @param[in] old_unordered_nodes - unordered_nodes array to dup\n * @param[in] nnodes - nodes from new universe.  
Nodes are references into this\n * \t\tarray\n *\n * @return new unordered_nodes\n */\nnode_info **\ndup_unordered_nodes(node_info **old_unordered_nodes, node_info **nnodes)\n{\n\tint i;\n\tint ct1;\n\tint ct2;\n\tnode_info **new_unordered_nodes;\n\n\tif (old_unordered_nodes == NULL || nnodes == NULL)\n\t\treturn NULL;\n\n\tct1 = count_array(nnodes);\n\tct2 = count_array(old_unordered_nodes);\n\n\tif (ct1 != ct2)\n\t\treturn NULL;\n\n\tnew_unordered_nodes = static_cast<node_info **>(calloc((ct1 + 1), sizeof(node_info *)));\n\tif (new_unordered_nodes == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\tfor (i = 0; i < ct1; i++)\n\t\tnew_unordered_nodes[nnodes[i]->node_ind] = nnodes[i];\n\n\tnew_unordered_nodes[ct1] = NULL;\n\n\treturn new_unordered_nodes;\n}\n"
  },
  {
    "path": "src/scheduler/server_info.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _SERVER_INFO_H\n#define _SERVER_INFO_H\n\n#include <pbs_ifl.h>\n#include \"state_count.h\"\n#include \"data_types.h\"\n#include \"constant.h\"\n\n/* Modes passed to update_total_counts_on_run() */\nenum counts_on_run {\n\tSERVER,\n\tQUEUE,\n\tALL\n};\n/*\n *      query_server - creates a structure of arrays consisting of a server\n *                      and all the queues and jobs that reside in that server\n */\nserver_info *query_server(status *pol, int pbs_sd);\n\n/*\n *\tquery_server_info - collect information out of a statserver call\n *\t\t\t    into a server_info structure\n */\nserver_info *query_server_info(status *pol, struct batch_status *server);\n\n/*\n * \tquery_server_dyn_res - execute all configured server_dyn_res scripts\n */\nint query_server_dyn_res(server_info *sinfo);\n\n/*\n *\tfind_alloc_resource[_by_str] - try and find a resource, and if it is\n *                                     not there allocate space for it and\n *                                     add it to the resource list\n */\n\nschd_resource *find_alloc_resource(schd_resource *resplist, resdef *def);\nschd_resource *find_alloc_resource_by_str(schd_resource *resplist, const char *name);\nschd_resource *find_alloc_resource_by_str(schd_resource *resplist, const std::string &name);\n\n/*  finds a resource in a resource list by string resource name 
*/\n\nschd_resource *find_resource_by_str(schd_resource *reslist, const char *name);\nschd_resource *find_resource_by_str(schd_resource *reslist, const std::string &name);\n\n/*\n *\tfind resource by resource definition\n */\nschd_resource *find_resource(schd_resource *reslist, resdef *def);\n\n/*\n *      free_resource - free a resource struct\n */\nvoid free_resource(schd_resource *resp);\n\n/*\n *      free_resource_list - free a resource list\n */\nvoid free_resource_list(schd_resource *reslist);\n\n/*\n *      new_resource - allocate and initialize a new resource struct\n */\nschd_resource *new_resource(void);\n\n/*\n * Create new resource with given data\n */\nschd_resource *create_resource(const char *name, const char *value, enum resource_fields field);\n\n/*\n *\tfree_server - free a list of server_info structs\n */\nvoid free_server(server_info *sinfo);\n\n/*\n *      update_server_on_run - update server_info structure when a job is run\n */\nvoid\nupdate_server_on_run(status *policy, server_info *sinfo, queue_info *qinfo,\n\t\t     resource_resv *resresv, char *job_state);\n\n/*\n *\n *      create_server_arrays - create a large server resresv array of all the\n *                             jobs on the system by copying all the jobs\n *                             from the queue job arrays.  
Also create an array\n *                             of both jobs and reservations\n */\nint create_server_arrays(server_info *sinfo);\n\n/*\n *\tcopy_server_arrays - copy server's jobs and all_resresv arrays\n */\nint copy_server_arrays(server_info *nsinfo, const server_info *osinfo);\n\n/*\n *      check_exit_job - function used by job_filter to filter out\n *                       jobs not in the exiting state\n */\nint check_exit_job(resource_resv *job, const void *arg);\n\n/*\n *\n *\tcheck_susp_job - function used by job_filter to filter out jobs\n *\t\t\t   which are suspended\n */\nint check_susp_job(resource_resv *job, const void *arg);\n\n/*\n *\n *\tcheck_job_running - function used by job_filter to filter out\n *\t\t\t   jobs that are running\n */\nint check_job_running(resource_resv *job, const void *arg);\n\n/*\n *\n *\tcheck_running_job_in_reservation - function used by job_filter to filter out\n *\t\t\t   jobs that are in a reservation\n */\nint check_running_job_in_reservation(resource_resv *job, const void *arg);\n\n/*\n *\n *\tcheck_running_job_not_in_reservation - function used by job_filter to filter out\n *\t\t\t   jobs that are not in a reservation\n */\nint check_running_job_not_in_reservation(resource_resv *job, const void *arg);\n\n/*\n *\n *      check_resv_running_on_node - function used by resv_filter to filter out\n *\t\t\t\trunning reservations\n */\nint check_resv_running_on_node(resource_resv *resv, const void *arg);\n\n/*\n *      dup_resource_list - dup a resource list\n */\nschd_resource *dup_resource_list(schd_resource *res);\n\n/* dup a resource list selectively only duping specific resources */\n\nschd_resource *dup_selective_resource_list(schd_resource *res, std::unordered_set<resdef *> &deflist, unsigned flags);\n\n/*\n *\tdup_ind_resource_list - dup a resource list - if a resource is indirect\n *\t\t\t\tdup the pointed to resource instead\n */\nschd_resource *dup_ind_resource_list(schd_resource *res);\n\n/*\n *      
dup_resource - duplicate a resource struct\n */\nschd_resource *dup_resource(schd_resource *res);\n\n/*\n *      check_resv_job - finds if a job has a reservation\n *                       used with job_filter\n */\nint check_resv_job(resource_resv *job, void *unused);\n\n/*\n *      free_resource_list - frees the memory used by a resource list\n */\nvoid free_resource_list(schd_resource *reslist);\n\n/*\n *      free_resource - frees the memory used by a resource structure\n */\nvoid free_resource(schd_resource *resp);\n\n/*\n *      update_server_on_end - update a server structure when a job has\n *                             finished running\n */\nvoid\nupdate_server_on_end(status *policy, server_info *sinfo, queue_info *qinfo,\n\t\t     resource_resv *resresv, const char *job_state);\n\n/*\n *      check_unassoc_node - finds nodes which are not associated with queues\n *                           used with node_filter\n */\nint is_unassoc_node(node_info *ninfo, void *arg);\n\n/*\n *      new_counts - create a new counts structure and return it\n */\ncounts *new_counts(void);\n\n/*\n *      free_counts_list - free a list of counts structures\n */\nvoid free_counts_list(counts_umap &ctslist);\n\n/*\n *\tdup_counts_umap - duplicate counts_umap\n */\ncounts_umap dup_counts_umap(const counts_umap &omap);\n\n/*\n *      find_counts - find a counts structure by name\n */\ncounts *find_counts(counts_umap &ctslist, const std::string &name);\n\n/*\n *      find_alloc_counts - find a counts structure by name or allocate a new\n *                          counts, name it, and add it to the end of the list\n */\ncounts *find_alloc_counts(counts_umap &ctslist, const std::string &name);\n\n/*\n *      update_counts_on_run - update a counts struct on the running of a job\n */\nvoid update_counts_on_run(counts *cts, resource_req *resreq);\n\n/*\n *      update_counts_on_end - update a counts structure on the end of a job\n */\nvoid update_counts_on_end(counts *cts, 
resource_req *resreq);\n\n/**\n *\tcounts_max - perform a max() between the current max and a new list.  If any\n *\t\t\telement from the new list is greater than the current\n *\t\t\tmax, we free the old, and dup the new and attach it\n *\t\t\tin.\n *\n *\t  \\param cmax    - current max that will be updated.\n *\t  \\param ncounts - new counts lists.  If anything in this list is\n *\t\t\t   greater than cmax, it needs to be dup'd.\n *\n *\t  returns void\n */\nvoid counts_max(counts_umap &cmax, counts_umap &ncounts);\nvoid counts_max(counts_umap &cmax, counts *ncounts);\n\n/*\n *      check_run_job - function used by resource_resv_filter to filter out\n *                      non-running jobs.\n */\nint check_run_job(resource_resv *job, const void *arg);\n\n/*\n *      update_universe_on_end - update a pbs universe when a job/resv ends\n */\nvoid update_universe_on_end(status *policy, resource_resv *resresv, const char *job_state, unsigned int flags);\n\nbool update_universe_on_run(status *policy, int pbs_sd, resource_resv *rr, std::vector<nspec *> &orig_ns, unsigned int flags);\nbool update_universe_on_run(status *policy, int pbs_sd, resource_resv *rr, unsigned int flags);\n\n/*\n *\n *\tset_resource - set the values of the resource structure.  This\n *\t\tfunction can be called in one of two ways.  It can be called\n *\t\twith the resources_available value or the resources_assigned\n *\t\tvalue.\n *\n *\tNOTE: If we have resource type information from the server, we will\n * \t\tuse it.  
If not, we will try to set the resource type from\n * \t\tthe resources_available value first, if not then the\n *\t\tresources_assigned\n *\n *\tres - the resource to set\n *\tval - the value to set upon the resource\n *\tfield - the type of field to set (available or assigned)\n *\n *\treturns 1 on success, 0 on failure/error\n *\n */\nint set_resource(schd_resource *res, const char *val, enum resource_fields field);\n\n/*\n *\tupdate_preemption_priority - update preemption status when a\n *   \t\t\t\t\tresource resv runs/ends\n *\treturns nothing\n */\nvoid update_preemption_priority(server_info *sinfo, resource_resv *resresv);\n\n/*\n *\tadd_resource_list - add one resource list to another\n *\t\t\ti.e. r1 += r2\n *\treturns 1 on success\n *\t\t0 on failure\n */\nint add_resource_list(status *policy, schd_resource *r1, schd_resource *r2, unsigned int flags);\n\nint modify_resource_list(schd_resource *res_list, resource_req *req_list, int type);\n\n/*\n *\tadd_resource_value - add a resource value to another\n *\t\t\t\ti.e. val1 += val2\n */\nvoid add_resource_value(sch_resource_t *val1, sch_resource_t *val2,\n\t\t\tsch_resource_t default_val);\n\n/*\n *  add_resource_str_arr - add values from a string array to\n *                             a string resource.  
Only add values if\n *                             they do not exist unless specified by allow_dup\n */\nint add_resource_str_arr(schd_resource *res, char **str_arr, int allow_dup);\n\n/*\n *      accumulate two boolean resources together (r1 += r2)\n *        T + T = True | F + F = False | T + F = TRUE_FALSE\n */\nint add_resource_bool(schd_resource *r1, schd_resource *r2);\n\n/*\n *\tfind_indirect_resource - follow the indirect resource pointers to\n *\t\t\t\t find the real resource at the end\n *\treturns the indirect resource or NULL on error\n */\nschd_resource *find_indirect_resource(schd_resource *res, node_info **nodes);\n\n/*\n *\tresolve_indirect_resources - resolve indirect resources for node array\n *\n *\t  nodes - the nodes to resolve\n *\n *\treturns 1 if successful\n *\t\t0 if there were any errors\n */\nint resolve_indirect_resources(node_info **nodes);\n\n/*\n *\tread_formula - read the formula from a well known file\n *\n *\treturns formula in malloc'd buffer or NULL on error\n */\nchar *read_formula(void);\n\n/*\n * create_total_counts -  Creates total counts list for server & queue\n */\nvoid\ncreate_total_counts(server_info *sinfo, queue_info *qinfo,\n\t\t    resource_resv *resresv, int mode);\n\n/*\n * Updates total counts list for server & queue on run and on\n * preemption.\n */\nvoid\nupdate_total_counts(server_info *si, queue_info *qi,\n\t\t    resource_resv *rr, int mode);\nvoid\nupdate_total_counts_on_end(server_info *si, queue_info *qi,\n\t\t\t   resource_resv *rr, int mode);\n\n/**\n * @brief - get a unique rank to uniquely identify an object\n * @return int\n * @retval unique number for this scheduling cycle\n */\nint get_sched_rank();\n\n/*\n *  add_queue_to_list - This function aligns all queues according to\n *                      their priority so that we can round robin\n *                      across those.\n */\nint add_queue_to_list(queue_info ****qlhead, queue_info *qinfo);\n\n/*\n * append_to_queue_list - function that 
will reallocate and append\n *                        \"add\" to the list provided.\n */\nstruct queue_info **append_to_queue_list(queue_info ***list, queue_info *add);\n\n/*\n * find_queue_list_by_priority - function finds out the array of queues\n *                               which matches with the priority passed to this\n *                               function. It returns the base address of matching\n *                               array.\n */\nstruct queue_info ***find_queue_list_by_priority(queue_info ***list_head, int priority);\n\n/*\n * free_queue_list - to free two dimensional queue_list array\n */\nvoid free_queue_list(queue_info ***queue_list);\n\nvoid add_req_list_to_assn(schd_resource *, resource_req *);\n\nint create_resource_assn_for_node(node_info *);\n\nint compare_resource_avail_list(schd_resource *r1, schd_resource *r2);\nint compare_resource_avail(schd_resource *r1, schd_resource *r2);\n\nnode_info **dup_unordered_nodes(node_info **old_unordered_nodes, node_info **nnodes);\n\nstatus *dup_status(status *ost);\n\nstruct batch_status *send_statserver(int virtual_fd, struct attrl *attrib, char *extend);\n\n#endif /* _SERVER_INFO_H */\n"
  },
  {
    "path": "src/scheduler/simulate.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    simulate.c\n *\n * @brief\n * \t\tsimulate.c - This file contains functions related to simulation of pbs event.\n *\n * Functions included are:\n * \tsimulate_events()\n * \tis_timed()\n * \tget_next_event()\n * \tnext_event()\n * \tfind_init_timed_event()\n * \tfind_first_timed_event_backwards()\n * \tfind_next_timed_event()\n * \tfind_prev_timed_event()\n * \tset_timed_event_disabled()\n * \tfind_timed_event()\n * \tperform_event()\n * \texists_run_event()\n * \tcalc_run_time()\n * \tcreate_event_list()\n * \tcreate_events()\n * \tnew_event_list()\n * \tdup_event_list()\n * \tfree_event_list()\n * \tnew_timed_event()\n * \tdup_timed_event()\n * \tfind_event_ptr()\n * \tdup_timed_event_list()\n * \tfree_timed_event()\n * \tfree_timed_event_list()\n * \tadd_event()\n * \tadd_timed_event()\n * \tdelete_event()\n * \tcreate_event()\n * \tdetermine_event_name()\n * \tdedtime_change()\n * \tadd_dedtime_events()\n * \tsimulate_resmin()\n * \tpolicy_change_to_str()\n * \tpolicy_change_info()\n * \tdescribe_simret()\n * \tadd_prov_event()\n * \tgeneric_sim()\n *\n */\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <errno.h>\n#include <log.h>\n\n#include \"simulate.h\"\n#include \"data_types.h\"\n#include \"resource_resv.h\"\n#include \"resv_info.h\"\n#include 
\"node_info.h\"\n#include \"server_info.h\"\n#include \"queue_info.h\"\n#include \"fifo.h\"\n#include \"constant.h\"\n#include \"sort.h\"\n#include \"check.h\"\n#include \"log.h\"\n#include \"misc.h\"\n#include \"prime.h\"\n#include \"globals.h\"\n#include \"check.h\"\n#include \"buckets.h\"\n#ifdef NAS /* localmod 030 */\n#include \"site_code.h\"\n#endif /* localmod 030 */\n\n/** @struct\tpolicy_change_func_name\n *\n * @brief\n * \t\tstructure to map a function pointer to string name\n * \t\tfor printing of policy change events\n */\nstruct policy_change_func_name {\n\tevent_func_t func;\n\tconst char *str;\n};\n\n// clang-format off\nstatic const struct policy_change_func_name policy_change_func_name[] =\n{\n\t{(event_func_t)init_prime_time, \"prime time\"},\n\t{(event_func_t)init_non_prime_time, \"non-prime time\"},\n\t{NULL, NULL}\n};\n\n// clang-format on\n\n/**\n * @brief\n * \t\tsimulate the future of a PBS universe\n *\n * @param[in] \tpolicy   - policy info\n * @param[in] \tsinfo    - PBS universe to simulate\n * @param[in] \tcmd      - simulation command\n * @param[in] \targ      - optional argument\n * @param[out] \tsim_time - the time in the simulated universe\n *\n * @return\tbitfield of what type of event(s) were simulated\n */\nunsigned int\nsimulate_events(status *policy, server_info *sinfo,\n\t\tenum schd_simulate_cmd cmd, void *arg, time_t *sim_time)\n{\n\ttime_t event_time = 0;\t /* time of the event being simulated */\n\ttime_t cur_sim_time = 0; /* current time in simulation */\n\tunsigned int ret = 0;\n\tevent_list *calendar;\n\n\ttimed_event *event; /* the timed event to take action on */\n\n\tif (sinfo == NULL || sim_time == NULL)\n\t\treturn TIMED_ERROR;\n\n\tif (cmd == SIM_TIME && arg == NULL)\n\t\treturn TIMED_ERROR;\n\n\tif (cmd == SIM_NONE)\n\t\treturn TIMED_NOEVENT;\n\n\tif (sinfo->calendar == NULL)\n\t\treturn TIMED_NOEVENT;\n\n\tif (sinfo->calendar->current_time == NULL)\n\t\treturn TIMED_ERROR;\n\n\tcalendar = 
sinfo->calendar;\n\n\tevent = next_event(sinfo, DONT_ADVANCE);\n\n\tif (event == NULL)\n\t\treturn TIMED_NOEVENT;\n\n\tif (event->disabled)\n\t\tevent = next_event(sinfo, ADVANCE);\n\n\tif (event == NULL)\n\t\treturn TIMED_NOEVENT;\n\n\tcur_sim_time = (*calendar->current_time);\n\n\tif (cmd == SIM_NEXT_EVENT) {\n\t\tlong t = 1;\n\t\tif (arg != NULL) {\n\t\t\tt = *((long *) arg);\n\t\t\tif (t == 0)\n\t\t\t\tt = 1;\n\t\t}\n\t\t/* t is the opt_backfill_fuzzy window.  In order to create more consistent estimates\n\t\t * shorten the first window to the next time boundary (e.g. if t=1hr and it\n\t\t * is now 12:31, the first window is 29m).  The subsequent windows will be the same.\n\t\t */\n\t\tevent_time = (event->event_time + t) / t * t;\n\t} else if (cmd == SIM_TIME)\n\t\tevent_time = *((time_t *) arg);\n\n\twhile (event != NULL && event->event_time <= event_time) {\n\t\tcur_sim_time = event->event_time;\n\n\t\t(*calendar->current_time) = cur_sim_time;\n\t\tif (perform_event(policy, event) == 0) {\n\t\t\tret = TIMED_ERROR;\n\t\t\tbreak;\n\t\t}\n\n\t\tret |= event->event_type;\n\n\t\tevent = next_event(sinfo, ADVANCE);\n\t}\n\n\tif (calendar->first_run_event != NULL && cur_sim_time > calendar->first_run_event->event_time)\n\t\tcalendar->first_run_event = find_init_timed_event(calendar->next_event, 0, TIMED_RUN_EVENT);\n\n\t(*sim_time) = cur_sim_time;\n\n\tif (cmd == SIM_TIME) {\n\t\t(*sim_time) = event_time;\n\t\t(*calendar->current_time) = event_time;\n\t}\n\n\treturn ret;\n}\n\n/**\n * @brief\n *\t\tis_timed - check if an event_ptr has timed elements\n * \t\t\t (i.e. 
has a start and end time)\n *\n * @param[in]\tevent_ptr\t-\tthe event to check\n *\n * @return\tint\n * @retval\t1\t: if it is timed\n * @retval\t0\t: if it is not\n *\n */\nint\nis_timed(event_ptr_t *event_ptr)\n{\n\tif (event_ptr == NULL)\n\t\treturn 0;\n\n\tif ((static_cast<resource_resv *>(event_ptr))->start == UNSPECIFIED)\n\t\treturn 0;\n\n\tif ((static_cast<resource_resv *>(event_ptr))->end == UNSPECIFIED)\n\t\treturn 0;\n\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tget the next_event from an event list\n *\n * @param[in]\telist\t-\tthe event list\n *\n * @par NOTE:\n * \t\t\tIf prime status events matter, consider using\n *\t\t\tnext_event(sinfo, DONT_ADVANCE).  This function only\n *\t\t\treturns the next_event pointer of the event list.\n *\n * @return\tthe current event from the event list\n * @retval\tNULL\t: elist is null\n *\n */\ntimed_event *\nget_next_event(event_list *elist)\n{\n\tif (elist == NULL)\n\t\treturn NULL;\n\n\treturn elist->next_event;\n}\n\n/**\n * @brief\n * \t\tmove sinfo -> calendar to the next event and return it.\n *\t     If the next event is a prime status event, it is created\n *\t     on the fly and returned.\n *\n * @param[in] \tsinfo \t- server containing the calendar\n * @param[in] \tadvance - advance to the next event or not.  \n * \t\t\tPrime status event creation happens whether we advance or not.\n *\n * @return\tthe next event\n * @retval\tNULL\t: if there are no more events\n *\n */\ntimed_event *\nnext_event(server_info *sinfo, int advance)\n{\n\ttimed_event *te;\n\ttimed_event *pe;\n\tevent_list *calendar;\n\n\tif (sinfo == NULL || sinfo->calendar == NULL)\n\t\treturn NULL;\n\n\tcalendar = sinfo->calendar;\n\n\tif (advance)\n\t\tte = find_next_timed_event(calendar->next_event,\n\t\t\t\t\t   IGNORE_DISABLED_EVENTS, ALL_MASK);\n\telse\n\t\tte = calendar->next_event;\n\n\t/* should we add a periodic prime event\n\t * i.e. 
does a prime status event fit between now and the next event\n\t * ( now -- Prime Event -- next event )\n\t *\n\t * or if we're out of events (te == NULL), we need to return\n\t * one last prime event.  There may be things waiting on a specific prime\n\t * status.\n\t */\n\tif (!calendar->eol) {\n\t\tif (sinfo->policy->prime_status_end != SCHD_INFINITY) {\n\t\t\tif (te == NULL ||\n\t\t\t    (*calendar->current_time <= sinfo->policy->prime_status_end &&\n\t\t\t     sinfo->policy->prime_status_end < te->event_time)) {\n\t\t\t\tevent_func_t func;\n\n\t\t\t\tif (sinfo->policy->is_prime)\n\t\t\t\t\tfunc = (event_func_t) init_non_prime_time;\n\t\t\t\telse\n\t\t\t\t\tfunc = (event_func_t) init_prime_time;\n\n\t\t\t\tpe = create_event(TIMED_POLICY_EVENT, sinfo->policy->prime_status_end,\n\t\t\t\t\t\t  (event_ptr_t *) sinfo->policy, func, NULL);\n\n\t\t\t\tif (pe == NULL)\n\t\t\t\t\treturn NULL;\n\n\t\t\t\tadd_event(sinfo->calendar, pe);\n\t\t\t\t/* important to set calendar -> eol after calling add_event(),\n\t\t\t\t * because add_event() can clear calendar -> eol\n\t\t\t\t */\n\t\t\t\tif (te == NULL)\n\t\t\t\t\tcalendar->eol = 1;\n\t\t\t\tte = pe;\n\t\t\t}\n\t\t}\n\t}\n\n\tcalendar->next_event = te;\n\n\treturn te;\n}\n\n/**\n * @brief\n * \t\tfind the initial event based on a timed_event\n *\n * @param[in]\tevent            - the current event\n * @param[in] \tignore_disabled  - ignore disabled events\n * @param[in] \tsearch_type_mask - bitmask of types of events to search\n *\n * @return\tthe initial event of the correct type/disabled or not\n * @retval\tNULL\t: event is NULL.\n *\n * @par NOTE:\n * \t\t\tIGNORE_DISABLED_EVENTS exists to be passed in as the\n *\t\t   \tignore_disabled parameter.  
It is non-zero.\n *\n * @par NOTE:\n * \t\t\tALL_MASK can be passed in for search_type_mask to search\n *\t\t    for all events types\n */\ntimed_event *\nfind_init_timed_event(timed_event *event, int ignore_disabled, unsigned int search_type_mask)\n{\n\ttimed_event *e;\n\n\tif (event == NULL)\n\t\treturn NULL;\n\n\tfor (e = event; e != NULL; e = e->next) {\n\t\tif (ignore_disabled && e->disabled)\n\t\t\tcontinue;\n\t\telse if ((e->event_type & search_type_mask) == 0)\n\t\t\tcontinue;\n\t\telse\n\t\t\tbreak;\n\t}\n\n\treturn e;\n}\n\n/**\n * @brief\n * \t\tfind the first event based on a timed_event while iterating backwards\n *\n * @param[in] event            - the current event\n * @param[in] ignore_disabled  - ignore disabled events\n * @param[in] search_type_mask - bitmask of types of events to search\n *\n * @return the previous event of the correct type/disabled or not\n *\n * @par NOTE:\n * \t\t\tIGNORE_DISABLED_EVENTS exists to be passed in as the\n *\t\t   \tignore_disabled parameter.  
It is non-zero.\n *\n * @par NOTE:\n * \t\t\tALL_MASK can be passed in for search_type_mask to search\n *\t\t   \tfor all events types\n */\ntimed_event *\nfind_first_timed_event_backwards(timed_event *event, int ignore_disabled, unsigned int search_type_mask)\n{\n\ttimed_event *e;\n\n\tif (event == NULL)\n\t\treturn NULL;\n\n\tfor (e = event; e != NULL; e = e->prev) {\n\t\tif (ignore_disabled && e->disabled)\n\t\t\tcontinue;\n\t\telse if ((e->event_type & search_type_mask) == 0)\n\t\t\tcontinue;\n\t\telse\n\t\t\tbreak;\n\t}\n\n\treturn e;\n}\n/**\n * @brief\n * \t\tfind the next event based on a timed_event\n *\n * @param[in] event            - the current event\n * @param[in] ignore_disabled  - ignore disabled events\n * @param[in] search_type_mask - bitmask of types of events to search\n *\n * @return\tthe next timed event of the correct type and disabled or not\n * @retval\tNULL\t: event is NULL.\n */\ntimed_event *\nfind_next_timed_event(timed_event *event, int ignore_disabled, unsigned int search_type_mask)\n{\n\tif (event == NULL)\n\t\treturn NULL;\n\treturn find_init_timed_event(event->next, ignore_disabled, search_type_mask);\n}\n\n/**\n * @brief\n * \t\tfind the previous event based on a timed_event\n *\n * @param[in] event            - the current event\n * @param[in] ignore_disabled  - ignore disabled events\n * @param[in] search_type_mask - bitmask of types of events to search\n *\n * @return\tthe previous timed event of the correct type and disabled or not\n * @retval\tNULL\t: event is NULL.\n */\ntimed_event *\nfind_prev_timed_event(timed_event *event, int ignore_disabled, unsigned int search_type_mask)\n{\n\tif (event == NULL)\n\t\treturn NULL;\n\treturn find_first_timed_event_backwards(event->prev, ignore_disabled, search_type_mask);\n}\n/**\n * @brief\n * \t\tset the timed_event disabled bit\n *\n * @param[in]\tte       - timed event to set\n * @param[in] \tdisabled - used to set the disabled bit\n *\n * @return\tnothing\n 
*/\nvoid\nset_timed_event_disabled(timed_event *te, int disabled)\n{\n\tif (te == NULL)\n\t\treturn;\n\n\tte->disabled = disabled ? 1 : 0;\n}\n\n/**\n * @brief\n * \t\tfind a timed_event by any or all of the following:\n *\t\tevent name, time of event, or event type.  At times\n *\t\tmultiple search parameters are needed to\n *\t\tdifferentiate between similar events.\n *\n * @param[in]\tte_list \t- timed_event list to search in\n * @param[in] \tignore_disabled - ignore disabled events\n * @param[in] \tname    \t- name of timed_event to search or NULL to ignore\n * @param[in] \tevent_type \t- event_type or TIMED_NOEVENT to ignore\n * @param[in] \tevent_time \t- time or 0 to ignore\n *\n * @par NOTE:\n *\t\t\tIf all three search parameters are ignored,  the first event\n *\t\t\tof te_list will be returned\n *\n * @return\tfound timed_event\n * @retval\tNULL\t: on error\n *\n */\ntimed_event *\nfind_timed_event(timed_event *te_list, const std::string &name, int ignore_disabled,\n\t\t enum timed_event_types event_type, time_t event_time)\n{\n\ttimed_event *te;\n\tint found_name = 0;\n\tint found_type = 0;\n\tint found_time = 0;\n\n\tif (te_list == NULL)\n\t\treturn NULL;\n\n\tfor (te = te_list; te != NULL; te = find_next_timed_event(te, 0, ALL_MASK)) {\n\t\tif (ignore_disabled && te->disabled)\n\t\t\tcontinue;\n\t\tfound_name = found_type = found_time = 0;\n\t\tif (name.empty() || te->name == name)\n\t\t\tfound_name = 1;\n\n\t\tif (event_type == te->event_type || event_type == TIMED_NOEVENT)\n\t\t\tfound_type = 1;\n\n\t\tif (event_time == te->event_time || event_time == 0)\n\t\t\tfound_time = 1;\n\n\t\tif (found_name + found_type + found_time == 3)\n\t\t\tbreak;\n\t}\n\n\treturn te;\n}\n\ntimed_event *\nfind_timed_event(timed_event *te_list, int ignore_disabled, enum timed_event_types event_type, time_t event_time)\n{\n\treturn find_timed_event(te_list, \"\", ignore_disabled, event_type, event_time);\n}\n\ntimed_event *\nfind_timed_event(timed_event *te_list, enum 
timed_event_types event_type)\n{\n\treturn find_timed_event(te_list, \"\", 0, event_type, 0);\n}\n\ntimed_event *\nfind_timed_event(timed_event *te_list, const std::string &name, enum timed_event_types event_type, time_t event_time)\n{\n\treturn find_timed_event(te_list, name, 0, event_type, event_time);\n}\n\ntimed_event *\nfind_timed_event(timed_event *te_list, time_t event_time)\n{\n\treturn find_timed_event(te_list, \"\", 0, TIMED_NOEVENT, event_time);\n}\n\n/**\n * @brief\n * \t\ttakes a timed_event and performs any actions\n *\t\trequired by the event to be completed.\n *\n * @param[in] policy\t-\tstatus\n * @param[in] event \t- \tthe event to perform\n *\n * @return int\n * @retval 1\t: success\n * @retval 0\t: failure\n */\nint\nperform_event(status *policy, timed_event *event)\n{\n\tchar logbuf[MAX_LOG_SIZE];\n\tchar timebuf[128];\n\tresource_resv *resresv;\n\tint ret = 1;\n\n\tif (event == NULL || event->event_ptr == NULL)\n\t\treturn 0;\n\n\tsprintf(timebuf, \"%s\", ctime(&event->event_time));\n\t/* ctime() puts a \\n at the end of the line, nuke it*/\n\ttimebuf[strlen(timebuf) - 1] = '\\0';\n\n\tswitch (event->event_type) {\n\t\tcase TIMED_END_EVENT: /* event_ptr type: (resource_resv *) */\n\t\t\tresresv = static_cast<resource_resv *>(event->event_ptr);\n\t\t\tupdate_universe_on_end(policy, resresv, \"X\", NO_ALLPART);\n\n\t\t\tsprintf(logbuf, \"%s end point\", resresv->is_job ? \"job\" : \"reservation\");\n\t\t\tbreak;\n\t\tcase TIMED_RUN_EVENT: /* event_ptr type: (resource_resv *) */\n\t\t\tresresv = static_cast<resource_resv *>(event->event_ptr);\n\t\t\tif (sim_run_update_resresv(policy, resresv, NO_ALLPART) == false) {\n\t\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t  event->name, \"Simulation: Event failed to be run\");\n\t\t\t\tret = 0;\n\t\t\t} else {\n\t\t\t\tsprintf(logbuf, \"%s start point\",\n\t\t\t\t\tresresv->is_job ? 
\"job\" : \"reservation\");\n\t\t\t}\n\t\t\tbreak;\n\t\tcase TIMED_POLICY_EVENT:\n\t\t\tstrcpy(logbuf, \"Policy change\");\n\t\t\tbreak;\n\t\tcase TIMED_DED_START_EVENT:\n\t\t\tstrcpy(logbuf, \"Dedtime Start\");\n\t\t\tbreak;\n\t\tcase TIMED_DED_END_EVENT:\n\t\t\tstrcpy(logbuf, \"Dedtime End\");\n\t\t\tbreak;\n\t\tcase TIMED_NODE_UP_EVENT:\n\t\t\tstrcpy(logbuf, \"Node Up\");\n\t\t\tbreak;\n\t\tcase TIMED_NODE_DOWN_EVENT:\n\t\t\tstrcpy(logbuf, \"Node Down\");\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  event->name, \"Simulation: Unknown event type\");\n\t\t\tret = 0;\n\t}\n\tif (event->event_func != NULL)\n\t\tevent->event_func(event->event_ptr, event->event_func_arg);\n\n\tif (ret)\n\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t   event->name, \"Simulation: %s [%s]\", logbuf, timebuf);\n\treturn ret;\n}\n\n/**\n * @brief\n * \t\treturns 1 if there exists a timed run event in\n *\t\tthe event list between the current event\n *\t\tand the last event, or the end time if it is set\n *\n * @param[in] calendar \t- event list\n * @param[in] end \t\t- optional end time (0 means search all events)\n *\n * @return\tint\n * @retval\t1\t: there exists a run event\n * @retval\t0\t: there doesn't exist a run event\n *\n */\nint\nexists_run_event(event_list *calendar, time_t end)\n{\n\tif (calendar == NULL || calendar->first_run_event == NULL)\n\t\treturn 0;\n\n\tif (calendar->first_run_event->event_time < end)\n\t\treturn 1;\n\treturn 0;\n}\n\n/**\n * @brief finds if there is a reservation run event between now and 'end'\n * @param[in] calendar - the calendar to search\n * @param[in] end - when to stop searching\n *\n * @returns int\n * @retval 1 found a reservation event\n * @retval 0 did not find a reservation event\n */\nint\nexists_resv_event(event_list *calendar, time_t end)\n{\n\ttimed_event *te;\n\ttimed_event *te_list;\n\n\tif (calendar == NULL)\n\t\treturn 0;\n\n\tte_list = 
calendar->first_run_event;\n\tif (te_list == NULL) /* no run events in our calendar */\n\t\treturn 0;\n\n\tfor (te = te_list; te != NULL && te->event_time <= end;\n\t     te = find_next_timed_event(te, 0, TIMED_RUN_EVENT)) {\n\t\tif (te->event_type == TIMED_RUN_EVENT) {\n\t\t\tresource_resv *resresv = static_cast<resource_resv *>(te->event_ptr);\n\t\t\tif (resresv->is_resv)\n\t\t\t\treturn 1;\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tcalculate the run time of a resresv through simulation of\n *\t\tfuture calendar events\n *\n * @param[in] name \t- the name of the resresv to find the start time of\n * @param[in] sinfo - the pbs environment\n * \t\t\t\t\t  NOTE: sinfo will be modified, it should be a copy\n * @param[in] flags - some flags to control the function\n *\t\t\t\t\t\tSIM_RUN_JOB - simulate running the resresv\n *\n * @return\ttime_t\n * @retval\ttime_t of when the job will run\n *\t@retval\t0\t: cannot determine when job will run\n *\t@retval\t-1\t: on error\n *\n */\ntime_t\ncalc_run_time(const std::string &name, server_info *sinfo, int flags)\n{\n\ttime_t event_time = (time_t) 0; /* time of the simulated event */\n\tevent_list *calendar;\t\t/* calendar we are simulating in */\n\tresource_resv *resresv;\t\t/* the resource resv to find start time for */\n\t/* the value returned from simulate_events().  
Init to TIMED_END_EVENT to\n\t * force the initial check to see if the job can run\n\t */\n\tunsigned int ret = TIMED_END_EVENT;\n\tschd_error *err = NULL;\n\ttimed_event *te_start;\n\ttimed_event *te_end;\n\tstd::vector<nspec *> nspec_arr;\n\tunsigned int ok_flags = NO_ALLPART;\n\tqueue_info *qinfo = NULL;\n\n\tif (name.empty() || sinfo == NULL)\n\t\treturn (time_t) -1;\n\n\tevent_time = sinfo->server_time;\n\tcalendar = sinfo->calendar;\n\n\tresresv = find_resource_resv(sinfo->all_resresv, name);\n\n\tif (!is_resource_resv_valid(resresv, NULL))\n\t\treturn (time_t) -1;\n\n\tif (flags & USE_BUCKETS)\n\t\tok_flags |= USE_BUCKETS;\n\tif (resresv->is_job) {\n\t\tok_flags |= IGNORE_EQUIV_CLASS;\n\t\tqinfo = resresv->job->queue;\n\t}\n\n\terr = new_schd_error();\n\tif (err == NULL)\n\t\treturn (time_t) 0;\n\n\tdo {\n\t\t/* policy is used from sinfo instead of being passed into calc_run_time()\n\t\t * because it's being simulated/updated in simulate_events()\n\t\t */\n\n\t\tauto desc = describe_simret(ret);\n\t\tif (desc > 0 || (desc == 0 && policy_change_info(sinfo, resresv))) {\n\t\t\tclear_schd_error(err);\n\t\t\tnspec_arr = is_ok_to_run(sinfo->policy, sinfo, qinfo, resresv, ok_flags, err);\n\t\t}\n\n\t\tif (nspec_arr.empty()) /* event can not run */\n\t\t\tret = simulate_events(sinfo->policy, sinfo, SIM_NEXT_EVENT, &sc_attrs.opt_backfill_fuzzy, &event_time);\n\n#ifdef NAS /* localmod 030 */\n\t\tif (check_for_cycle_interrupt(0)) {\n\t\t\tbreak;\n\t\t}\n#endif /* localmod 030 */\n\t} while (nspec_arr.empty() && !(ret & (TIMED_NOEVENT | TIMED_ERROR)));\n\n#ifdef NAS /* localmod 030 */\n\tif (check_for_cycle_interrupt(0) || (ret & TIMED_ERROR)) {\n#else\n\tif ((ret & TIMED_ERROR)) {\n#endif /* localmod 030 */\n\t\tfree_schd_error(err);\n\t\tfree_nspecs(nspec_arr);\n\t\treturn -1;\n\t}\n\n\t/* we can't run the job, but there are no timed events left to process */\n\tif (nspec_arr.empty() && (ret & TIMED_NOEVENT)) {\n\t\tschdlogerr(PBSEVENT_SCHED, PBS_EVENTCLASS_SCHED, 
LOG_WARNING, resresv->name,\n\t\t\t   \"Can't find start time estimate\", err);\n\t\tfree_schd_error(err);\n\t\treturn 0;\n\t}\n\n\t/* err is no longer needed, we've reported it. */\n\tfree_schd_error(err);\n\terr = NULL;\n\n\tif (resresv->is_job)\n\t\tresresv->job->est_start_time = event_time;\n\n\tresresv->start = event_time;\n\tresresv->end = event_time + resresv->duration;\n\n\tte_start = create_event(TIMED_RUN_EVENT, resresv->start,\n\t\t\t\t(event_ptr_t *) resresv, NULL, NULL);\n\tif (te_start == NULL) {\n\t\tfree_nspecs(nspec_arr);\n\t\treturn -1;\n\t}\n\n\tte_end = create_event(TIMED_END_EVENT, resresv->end,\n\t\t\t      (event_ptr_t *) resresv, NULL, NULL);\n\tif (te_end == NULL) {\n\t\tfree_nspecs(nspec_arr);\n\t\tfree_timed_event(te_start);\n\t\treturn -1;\n\t}\n\n\tadd_event(calendar, te_start);\n\tadd_event(calendar, te_end);\n\n\tif (flags & SIM_RUN_JOB)\n\t\tsim_run_update_resresv(sinfo->policy, resresv, nspec_arr, NO_ALLPART);\n\telse\n\t\tfree_nspecs(nspec_arr);\n\n\treturn event_time;\n}\n\n/**\n * @brief\n * \t\tcreate an event_list from running jobs and confirmed resvs\n *\n * @param[in]\tsinfo\t-\tserver universe to act upon\n *\n * @return\tevent_list\n */\nevent_list *\ncreate_event_list(server_info *sinfo)\n{\n\tevent_list *elist;\n\n\telist = new_event_list();\n\n\tif (elist == NULL)\n\t\treturn NULL;\n\n\telist->events = create_events(sinfo);\n\n\telist->next_event = elist->events;\n\telist->first_run_event = find_timed_event(elist->events, TIMED_RUN_EVENT);\n\telist->current_time = &sinfo->server_time;\n\tadd_dedtime_events(elist, sinfo->policy);\n\n\treturn elist;\n}\n\n/**\n * @brief\n *\t\tcreate_events - creates an timed_event list from running jobs\n *\t\t\t    and confirmed reservations\n *\n * @param[in] sinfo - server universe to act upon\n *\n * @return\ttimed_event list\n *\n */\ntimed_event *\ncreate_events(server_info *sinfo)\n{\n\ttimed_event *events = NULL;\n\ttimed_event *te = NULL;\n\tresource_resv **all = NULL;\n\tint 
errflag = 0;\n\tint i = 0;\n\ttime_t end = 0;\n\tresource_resv **all_resresv_copy;\n\tint all_resresv_len;\n\n\t/* create a temporary copy of the all_resresv array which is sorted such that\n\t * the timed events are in the front of the array.\n\t * Once the first non-timed event is reached, we're done\n\t */\n\tall_resresv_len = count_array(sinfo->all_resresv);\n\tall_resresv_copy = static_cast<resource_resv **>(malloc((all_resresv_len + 1) * sizeof(resource_resv *)));\n\tif (all_resresv_copy == NULL)\n\t\treturn 0;\n\tfor (i = 0; sinfo->all_resresv[i] != NULL; i++)\n\t\tall_resresv_copy[i] = sinfo->all_resresv[i];\n\tall_resresv_copy[i] = NULL;\n\tall = all_resresv_copy;\n\n\t/* sort the all_resresv list so all the timed events are in the front */\n\tqsort(all, count_array(all), sizeof(resource_resv *), cmp_events);\n\n\tfor (i = 0; all[i] != NULL && is_timed(all[i]); i++) {\n\t\t/* only add a run event for a job or reservation if they're\n\t\t * in a runnable state (i.e. don't add it if they're running)\n\t\t */\n\t\tif (in_runnable_state(all[i])) {\n\t\t\tte = create_event(TIMED_RUN_EVENT, all[i]->start, all[i], NULL, NULL);\n\t\t\tif (te == NULL) {\n\t\t\t\terrflag++;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tevents = add_timed_event(events, te);\n\t\t}\n\n\t\tif (sinfo->use_hard_duration)\n\t\t\tend = all[i]->start + all[i]->hard_duration;\n\t\telse\n\t\t\tend = all[i]->end;\n\t\tte = create_event(TIMED_END_EVENT, end, all[i], NULL, NULL);\n\t\tif (te == NULL) {\n\t\t\terrflag++;\n\t\t\tbreak;\n\t\t}\n\t\tevents = add_timed_event(events, te);\n\t}\n\n\t/* for nodes that are in state=sleep add a timed event */\n\tfor (i = 0; sinfo->nodes[i] != NULL; i++) {\n\t\tnode_info *node = sinfo->nodes[i];\n\t\tif (node->is_sleeping) {\n\t\t\tte = create_event(TIMED_NODE_UP_EVENT, sinfo->server_time + PROVISION_DURATION,\n\t\t\t\t\t  (event_ptr_t *) node, (event_func_t) node_up_event, NULL);\n\t\t\tif (te == NULL) {\n\t\t\t\terrflag++;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tevents = 
add_timed_event(events, te);\n\t\t}\n\t}\n\n\t/* A malloc error was encountered, free all allocated memory and return */\n\tif (errflag > 0) {\n\t\tfree_timed_event_list(events);\n\t\tfree(all_resresv_copy);\n\t\treturn 0;\n\t}\n\n\tfree(all_resresv_copy);\n\treturn events;\n}\n\n/**\n * @brief\n * \t\tnew_event_list() - event_list constructor\n *\n * @return\tevent_list *\n * @retval\tNULL\t: malloc failed\n */\nevent_list *\nnew_event_list()\n{\n\tevent_list *elist;\n\n\tif ((elist = static_cast<event_list *>(malloc(sizeof(event_list)))) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\telist->eol = 0;\n\telist->events = NULL;\n\telist->next_event = NULL;\n\telist->first_run_event = NULL;\n\telist->current_time = NULL;\n\n\treturn elist;\n}\n\n/**\n * @brief\n * \t\tdup_event_list() - event_list copy constructor\n *\n * @param[in] oelist - event list to copy\n * @param[in] nsinfo - new universe\n *\n * @return\tduplicated event_list\n *\n */\nevent_list *\ndup_event_list(event_list *oelist, server_info *nsinfo)\n{\n\tevent_list *nelist;\n\n\tif (oelist == NULL || nsinfo == NULL)\n\t\treturn NULL;\n\n\tnelist = new_event_list();\n\n\tif (nelist == NULL)\n\t\treturn NULL;\n\n\tnelist->eol = oelist->eol;\n\tnelist->current_time = &nsinfo->server_time;\n\n\tif (oelist->events != NULL) {\n\t\tnelist->events = dup_timed_event_list(oelist->events, nsinfo);\n\t\tif (nelist->events == NULL) {\n\t\t\tfree_event_list(nelist);\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\tif (oelist->next_event != NULL) {\n\t\tnelist->next_event = find_timed_event(nelist->events, oelist->next_event->name,\n\t\t\t\t\t\t      oelist->next_event->event_type,\n\t\t\t\t\t\t      oelist->next_event->event_time);\n\t\tif (nelist->next_event == NULL) {\n\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_SCHED, LOG_WARNING,\n\t\t\t\t  oelist->next_event->name, \"can't find next event in duplicated list\");\n\t\t\tfree_event_list(nelist);\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\tif 
(oelist->first_run_event != NULL) {\n\t\tnelist->first_run_event =\n\t\t\tfind_timed_event(nelist->events, oelist->first_run_event->name, TIMED_RUN_EVENT,\n\t\t\t\t\t oelist->first_run_event->event_time);\n\t\tif (nelist->first_run_event == NULL) {\n\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_SCHED, LOG_WARNING, oelist->first_run_event->name,\n\t\t\t\t  \"can't find first run event in duplicated list\");\n\t\t\tfree_event_list(nelist);\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\treturn nelist;\n}\n\n/**\n * @brief\n * \t\tfree_event_list - event_list destructor\n *\n * @param[in] elist - event list to be freed\n */\nvoid\nfree_event_list(event_list *elist)\n{\n\tif (elist == NULL)\n\t\treturn;\n\n\tfree_timed_event_list(elist->events);\n\tfree(elist);\n}\n\n/**\n * @brief\n * \t\tnew_timed_event() - timed_event constructor\n *\n * @return\ttimed_event *\n * @retval\tNULL\t: malloc error\n *\n */\ntimed_event *\nnew_timed_event()\n{\n\ttimed_event *te;\n\n\tif ((te = new timed_event()) == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\tte->disabled = 0;\n\tte->event_type = TIMED_NOEVENT;\n\tte->event_time = 0;\n\tte->event_ptr = NULL;\n\tte->event_func = NULL;\n\tte->event_func_arg = NULL;\n\tte->next = NULL;\n\tte->prev = NULL;\n\n\treturn te;\n}\n\n/**\n * @brief\n * \t\tdup_timed_event() - timed_event copy constructor\n *\n * @par\n * \t\tdup_timed_event() modifies the run_event and end_event members of the resource_resv.\n * \t\tIf dup_timed_event() is not called as part of the server_info() copy constructor, the resource_resvs of\n * \t\tthe main server_info will be modified, even if server_info->calendar is not.\n *\n * @param[in]\tote \t- timed_event to copy\n * @param[in] \tnsinfo \t- \"new\" universe where to find the event_ptr\n *\n * @return\ttimed_event *\n * @retval\tNULL\t: something wrong\n */\ntimed_event *\ndup_timed_event(timed_event *ote, server_info *nsinfo)\n{\n\ttimed_event *nte;\n\tevent_ptr_t *event_ptr;\n\n\tif 
(ote == NULL || nsinfo == NULL)\n\t\treturn NULL;\n\n\tevent_ptr = find_event_ptr(ote, nsinfo);\n\tif (event_ptr == NULL)\n\t\treturn NULL;\n\n\tnte = create_event(ote->event_type, ote->event_time, event_ptr, ote->event_func, ote->event_func_arg);\n\tif (nte == NULL)\n\t\treturn NULL;\n\n\tset_timed_event_disabled(nte, ote->disabled);\n\n\treturn nte;\n}\n\n/*\n * @brief constructor for te_list\n * @return new te_list structure\n */\nte_list *\nnew_te_list()\n{\n\tte_list *tel;\n\ttel = static_cast<te_list *>(malloc(sizeof(te_list)));\n\n\tif (tel == NULL) {\n\t\tlog_err(errno, __func__, MEM_ERR_MSG);\n\t\treturn NULL;\n\t}\n\n\ttel->event = NULL;\n\ttel->next = NULL;\n\n\treturn tel;\n}\n\n/*\n * @brief te_list destructor\n * @param[in] tel - te_list to free\n *\n * @return void\n */\nvoid\nfree_te_list(te_list *tel)\n{\n\tif (tel == NULL)\n\t\treturn;\n\tfree_te_list(tel->next);\n\tfree(tel);\n}\n\n/*\n * @brief te_list copy constructor\n * @param[in] ote - te_list to copy\n * @param[in] new_timed_event_list - new timed events\n *\n * @return copied te_list\n */\nte_list *\ndup_te_list(te_list *ote, timed_event *new_timed_event_list)\n{\n\tte_list *nte;\n\n\tif (ote == NULL || new_timed_event_list == NULL)\n\t\treturn NULL;\n\n\tnte = new_te_list();\n\tif (nte == NULL)\n\t\treturn NULL;\n\n\tnte->event = find_timed_event(new_timed_event_list, ote->event->name, ote->event->event_type, ote->event->event_time);\n\n\treturn nte;\n}\n\n/*\n * @brief copy constructor for a list of te_list structures\n * @param[in] ote - te_list to copy\n * @param[in] new_timed_event_list - new timed events\n *\n * @return copied te_list list\n */\n\nte_list *\ndup_te_lists(te_list *ote, timed_event *new_timed_event_list)\n{\n\tte_list *nte;\n\tte_list *end_te = NULL;\n\tte_list *cur;\n\tte_list *nte_head = NULL;\n\n\tif (ote == NULL || new_timed_event_list == NULL)\n\t\treturn NULL;\n\n\tfor (cur = ote; cur != NULL; cur = cur->next) {\n\t\tnte = dup_te_list(cur, new_timed_event_list);\n\t\tif (nte == NULL) 
{\n\t\t\tfree_te_list(nte_head);\n\t\t\treturn NULL;\n\t\t}\n\t\tif (end_te != NULL)\n\t\t\tend_te->next = nte;\n\t\telse\n\t\t\tnte_head = nte;\n\n\t\tend_te = nte;\n\t}\n\treturn nte_head;\n}\n\n/*\n * @brief add a te_list for a timed_event to a list sorted by the event's time\n * @param[in,out] tel - te_list to add to\n * @param[in] te - timed_event to add\n *\n * @return success/failure\n * @retval 1 success\n * @retval 0 failure\n */\nint\nadd_te_list(te_list **tel, timed_event *te)\n{\n\tte_list *cur_te;\n\tte_list *prev = NULL;\n\tte_list *ntel;\n\n\tif (tel == NULL || te == NULL)\n\t\treturn 0;\n\n\tfor (cur_te = *tel; cur_te != NULL && cur_te->event->event_time < te->event_time; prev = cur_te, cur_te = cur_te->next)\n\t\t;\n\n\tntel = new_te_list();\n\tif (ntel == NULL)\n\t\treturn 0;\n\tntel->event = te;\n\n\tif (prev == NULL) {\n\t\tntel->next = *tel;\n\t\t(*tel) = ntel;\n\t} else {\n\t\tprev->next = ntel;\n\t\tntel->next = cur_te;\n\t}\n\treturn 1;\n}\n\n/*\n * @brief remove a te_list from a list by timed_event\n * @param[in,out] tel - te_list to remove event from\n * @param[in] e - timed_event to remove\n *\n * @return success/failure\n * @retval 1 success\n * @retval 0 failure\n */\nint\nremove_te_list(te_list **tel, timed_event *e)\n{\n\tte_list *prev_tel;\n\tte_list *cur_tel;\n\n\tif (tel == NULL || *tel == NULL || e == NULL)\n\t\treturn 0;\n\n\tprev_tel = NULL;\n\tfor (cur_tel = *tel; cur_tel != NULL && cur_tel->event != e; prev_tel = cur_tel, cur_tel = cur_tel->next)\n\t\t;\n\tif (prev_tel == NULL) {\n\t\t*tel = cur_tel->next;\n\t\tfree(cur_tel);\n\t} else if (cur_tel != NULL) {\n\t\tprev_tel->next = cur_tel->next;\n\t\tfree(cur_tel);\n\t} else\n\t\treturn 0;\n\n\treturn 1;\n}\n\n/**\n * @brief\n *\t\tfind_event_ptr - find the correct event pointer for the duplicated\n *\t\t\t event based on event type\n *\n * @param[in]\tote\t\t- old event\n * @param[in] \tnsinfo \t- \"new\" universe\n *\n * @return event_ptr in new universe\n * @retval\tNULL\t: 
on error\n */\nevent_ptr_t *\nfind_event_ptr(timed_event *ote, server_info *nsinfo)\n{\n\tresource_resv *oep; /* old event_ptr in resresv form */\n\tevent_ptr_t *event_ptr = NULL;\n\n\tif (ote == NULL || nsinfo == NULL)\n\t\treturn NULL;\n\n\tswitch (ote->event_type) {\n\t\tcase TIMED_RUN_EVENT:\n\t\tcase TIMED_END_EVENT:\n\t\t\toep = static_cast<resource_resv *>(ote->event_ptr);\n\t\t\tif (oep->is_resv)\n\t\t\t\tevent_ptr =\n\t\t\t\t\tfind_resource_resv_by_time(nsinfo->all_resresv,\n\t\t\t\t\t\t\t\t   oep->name, oep->start);\n\t\t\telse\n\t\t\t\t/* In case of jobs there can be only one occurrence of a job in\n\t\t\t\t * the all_resresv list, so no need to search using the start time of the job\n\t\t\t\t */\n\t\t\t\tevent_ptr = find_resource_resv_by_indrank(nsinfo->all_resresv,\n\t\t\t\t\t\t\t\t\t  oep->resresv_ind, oep->rank);\n\n\t\t\tif (event_ptr == NULL) {\n\t\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_SCHED, LOG_WARNING, ote->name,\n\t\t\t\t\t  \"Event can't be found in new server to be duplicated.\");\n\t\t\t\tevent_ptr = NULL;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase TIMED_POLICY_EVENT:\n\t\tcase TIMED_DED_START_EVENT:\n\t\tcase TIMED_DED_END_EVENT:\n\t\t\tevent_ptr = nsinfo->policy;\n\t\t\tbreak;\n\t\tcase TIMED_NODE_DOWN_EVENT:\n\t\tcase TIMED_NODE_UP_EVENT:\n\t\t\tevent_ptr = find_node_info(nsinfo->nodes,\n\t\t\t\t\t\t   static_cast<node_info *>(ote->event_ptr)->name);\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_SCHED, LOG_WARNING, __func__,\n\t\t\t\t   \"Unknown event type: %d\", (int) ote->event_type);\n\t}\n\n\treturn event_ptr;\n}\n\n/**\n * @brief\n *\t\tdup_timed_event_list() - timed_event copy constructor for a list\n *\n * @param[in]\tote_list \t- list of timed_events to copy\n * @param[in]\tnsinfo\t\t- \"new\" universe where to find the event_ptr\n *\n * @return\ttimed_event *\n * @retval\tNULL\t: one of the inputs is NULL\n */\ntimed_event *\ndup_timed_event_list(timed_event *ote_list, server_info *nsinfo)\n{\n\ttimed_event 
*ote;\n\ttimed_event *nte = NULL;\n\ttimed_event *nte_prev = NULL;\n\ttimed_event *nte_head = NULL;\n\n\tif (ote_list == NULL || nsinfo == NULL)\n\t\treturn NULL;\n\n\tfor (ote = ote_list; ote != NULL; ote = ote->next) {\n\t\tnte = dup_timed_event(ote, nsinfo);\n\t\tif (nte == NULL) {\n\t\t\tfree_timed_event_list(nte_head);\n\t\t\treturn NULL;\n\t\t}\n\t\tif (nte_prev != NULL)\n\t\t\tnte_prev->next = nte;\n\t\telse\n\t\t\tnte_head = nte;\n\t\tnte->prev = nte_prev;\n\n\t\tnte_prev = nte;\n\t}\n\n\treturn nte_head;\n}\n\n/**\n * @brief\n * \t\tfree_timed_event - timed_event destructor\n *\n * @param[in]\tte\t-\ttimed event.\n */\nvoid\nfree_timed_event(timed_event *te)\n{\n\tif (te == NULL)\n\t\treturn;\n\tif (te->event_ptr != NULL) {\n\t\tif (te->event_type & TIMED_RUN_EVENT)\n\t\t\tstatic_cast<resource_resv *>(te->event_ptr)->run_event = NULL;\n\t\tif (te->event_type & TIMED_END_EVENT)\n\t\t\tstatic_cast<resource_resv *>(te->event_ptr)->end_event = NULL;\n\t}\n\n\tdelete te;\n}\n\n/**\n * @brief\n * \t\tfree_timed_event_list - destructor for a list of timed_event structures\n *\n * @param[in]\tte_list\t-\ttimed event list\n */\nvoid\nfree_timed_event_list(timed_event *te_list)\n{\n\ttimed_event *te;\n\ttimed_event *te_next;\n\n\tif (te_list == NULL)\n\t\treturn;\n\n\tte = te_list;\n\n\twhile (te != NULL) {\n\t\tte_next = te->next;\n\t\tfree_timed_event(te);\n\t\tte = te_next;\n\t}\n}\n\n/**\n * @brief\n * \t\tadd a timed_event to an event list\n *\n * @param[in] calendar - event list\n * @param[in] te       - timed event\n *\n * @retval 1 : success\n * @retval 0 : failure\n *\n */\nint\nadd_event(event_list *calendar, timed_event *te)\n{\n\ttime_t current_time;\n\tint events_is_null = 0;\n\n\tif (calendar == NULL || calendar->current_time == NULL || te == NULL)\n\t\treturn 0;\n\n\tcurrent_time = *calendar->current_time;\n\n\tif (calendar->events == NULL)\n\t\tevents_is_null = 1;\n\n\tcalendar->events = add_timed_event(calendar->events, te);\n\n\t/* empty event list 
- the new event is the only event */\n\tif (events_is_null)\n\t\tcalendar->next_event = te;\n\telse if (calendar->next_event != NULL) {\n\t\t/* check if we're adding an event between now and our current event.\n\t\t * If so, it becomes our new current event\n\t\t */\n\t\tif (te->event_time > current_time) {\n\t\t\tif (te->event_time < calendar->next_event->event_time)\n\t\t\t\tcalendar->next_event = te;\n\t\t\telse if (te->event_time == calendar->next_event->event_time) {\n\t\t\t\tcalendar->next_event =\n\t\t\t\t\tfind_timed_event(calendar->events, te->event_time);\n\t\t\t}\n\t\t}\n\t}\n\t/* if next_event == NULL, then we've simulated to the end. */\n\telse if (te->event_time >= current_time)\n\t\tcalendar->next_event = te;\n\n\tif (te->event_type == TIMED_RUN_EVENT)\n\t\tif (calendar->first_run_event == NULL || te->event_time < calendar->first_run_event->event_time)\n\t\t\tcalendar->first_run_event = te;\n\n\t/* if we had previously run to the end of the list\n\t * and now we have more work to do, clear the eol bit\n\t */\n\tif (calendar->eol && calendar->next_event != NULL)\n\t\tcalendar->eol = 0;\n\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tadd_timed_event - add an event to a sorted list of events\n *\n * @note\n *\t\tASSUMPTION: if multiple events are at the same time, all\n *\t\t    end events will come first\n *\n * @param\tevents - event list to add event to\n * @param \tte     - timed_event to add to list\n *\n * @return\thead of timed_event list\n */\ntimed_event *\nadd_timed_event(timed_event *events, timed_event *te)\n{\n\ttimed_event *eloop;\n\ttimed_event *eloop_prev = NULL;\n\n\tif (te == NULL)\n\t\treturn events;\n\n\tif (events == NULL)\n\t\treturn te;\n\n\tfor (eloop = events; eloop != NULL; eloop = eloop->next) {\n\t\tif (eloop->event_time > te->event_time)\n\t\t\tbreak;\n\t\tif (eloop->event_time == te->event_time &&\n\t\t    te->event_type == TIMED_END_EVENT) {\n\t\t\tbreak;\n\t\t}\n\n\t\teloop_prev = eloop;\n\t}\n\n\tif (eloop_prev == NULL) 
{\n\t\tte->next = events;\n\t\tevents->prev = te;\n\t\tte->prev = NULL;\n\t\treturn te;\n\t}\n\n\tte->next = eloop;\n\teloop_prev->next = te;\n\tte->prev = eloop_prev;\n\tif (eloop != NULL)\n\t\teloop->prev = te;\n\n\treturn events;\n}\n\n/**\n * @brief\n * \t\tdelete a timed event from an event_list\n *\n * @param[in] sinfo    - sinfo which contains calendar to delete from\n * @param[in] e        - event to delete\n *\n * @return void\n */\n\nvoid\ndelete_event(server_info *sinfo, timed_event *e)\n{\n\tevent_list *calendar;\n\n\tif (sinfo == NULL || e == NULL)\n\t\treturn;\n\n\tcalendar = sinfo->calendar;\n\n\tif (calendar->next_event == e)\n\t\tcalendar->next_event = e->next;\n\n\tif (calendar->first_run_event == e)\n\t\tcalendar->first_run_event = find_timed_event(calendar->events, TIMED_RUN_EVENT);\n\n\tif (e->prev == NULL)\n\t\tcalendar->events = e->next;\n\telse\n\t\te->prev->next = e->next;\n\n\tif (e->next != NULL)\n\t\te->next->prev = e->prev;\n\n\tfree_timed_event(e);\n}\n\n/**\n * @brief\n *\t\tcreate_event - create a timed_event with the passed-in arguments\n *\n * @param[in]\tevent_type - event_type member\n * @param[in] \tevent_time - event_time member\n * @param[in] \tevent_ptr  - event_ptr member\n * @param[in] \tevent_func - event_func function pointer member\n * @param[in] \tevent_func_arg - event_func_arg member\n *\n * @return\tnewly created timed_event\n * @retval\tNULL\t: on error\n */\ntimed_event *\ncreate_event(enum timed_event_types event_type,\n\t     time_t event_time, event_ptr_t *event_ptr,\n\t     event_func_t event_func, void *event_func_arg)\n{\n\ttimed_event *te;\n\n\tif (event_ptr == NULL)\n\t\treturn NULL;\n\n\tte = new_timed_event();\n\tif (te == NULL)\n\t\treturn NULL;\n\n\tte->event_type = event_type;\n\tte->event_time = event_time;\n\tte->event_ptr = event_ptr;\n\tte->event_func = event_func;\n\tte->event_func_arg = event_func_arg;\n\n\tif (event_type & TIMED_RUN_EVENT)\n\t\tstatic_cast<resource_resv *>(event_ptr)->run_event = te;\n\tif (event_type & 
TIMED_END_EVENT)\n\t\tstatic_cast<resource_resv *>(event_ptr)->end_event = te;\n\n\tif (determine_event_name(te) == 0) {\n\t\tfree_timed_event(te);\n\t\treturn NULL;\n\t}\n\n\treturn te;\n}\n\n/**\n * @brief\n *\t\tdetermine_event_name - determine a timed event's name based on its\n *\t\t\t\tevent type and set it\n *\n * @param[in]\tte\t-\tthe event\n *\n * @par Side Effects\n *\t\tte->name is set to static data or data owned by other entities.\n *\t\tIt should not be freed.\n *\n * @return\tint\n * @retval\t1\t: if the name was successfully set\n * @retval\t0\t: if not\n */\nint\ndetermine_event_name(timed_event *te)\n{\n\tconst char *name;\n\n\tif (te == NULL)\n\t\treturn 0;\n\n\tswitch (te->event_type) {\n\t\tcase TIMED_RUN_EVENT:\n\t\tcase TIMED_END_EVENT:\n\t\t\tte->name = static_cast<resource_resv *>(te->event_ptr)->name;\n\t\t\tbreak;\n\t\tcase TIMED_POLICY_EVENT:\n\t\t\tname = policy_change_to_str(te);\n\t\t\tif (name != NULL)\n\t\t\t\tte->name = name;\n\t\t\telse\n\t\t\t\tte->name = \"policy change\";\n\t\t\tbreak;\n\t\tcase TIMED_DED_START_EVENT:\n\t\t\tte->name = \"dedtime_start\";\n\t\t\tbreak;\n\t\tcase TIMED_DED_END_EVENT:\n\t\t\tte->name = \"dedtime_end\";\n\t\t\tbreak;\n\t\tcase TIMED_NODE_UP_EVENT:\n\t\tcase TIMED_NODE_DOWN_EVENT:\n\t\t\tte->name = static_cast<node_info *>(te->event_ptr)->name;\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_SCHED, LOG_WARNING,\n\t\t\t\t   __func__, \"Unknown event type: %d\", (int) te->event_type);\n\t\t\treturn 0;\n\t}\n\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tupdate dedicated time policy\n *\n * @param[in] policy - policy info (contains dedicated time policy)\n * @param[in] arg    - \"START\" or \"END\"\n *\n * @return int\n * @retval 1 : success\n * @retval 0 : failure/error\n *\n */\n\nint\ndedtime_change(status *policy, void *arg)\n{\n\tchar *event_arg;\n\n\tif (policy == NULL || arg == NULL)\n\t\treturn 0;\n\n\tevent_arg = (char *) arg;\n\n\tif (strcmp(event_arg, 
DEDTIME_START) == 0)\n\t\tpolicy->is_ded_time = 1;\n\telse if (strcmp(event_arg, DEDTIME_END) == 0)\n\t\tpolicy->is_ded_time = 0;\n\telse {\n\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_SCHED, LOG_WARNING,\n\t\t\t  __func__, \"unknown dedicated time change\");\n\t\treturn 0;\n\t}\n\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tadd the dedicated time events from conf\n *\n * @param[in] elist \t- the event list to add the dedicated time events to\n * @param[in] policy \t- status structure for the dedicated time events\n *\n *\t@retval 1 : success\n *\t@retval 0 : failure\n */\nint\nadd_dedtime_events(event_list *elist, status *policy)\n{\n\tif (elist == NULL)\n\t\treturn 0;\n\n\tfor (const auto &dt : conf.ded_time) {\n\t\tauto te_start = create_event(TIMED_DED_START_EVENT, dt.from, policy, (event_func_t) dedtime_change, (void *) DEDTIME_START);\n\t\tif (te_start == NULL)\n\t\t\treturn 0;\n\n\t\tauto te_end = create_event(TIMED_DED_END_EVENT, dt.to, policy, (event_func_t) dedtime_change, (void *) DEDTIME_END);\n\t\tif (te_end == NULL) {\n\t\t\tfree_timed_event(te_start);\n\t\t\treturn 0;\n\t\t}\n\n\t\tadd_event(elist, te_start);\n\t\tadd_event(elist, te_end);\n\t}\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tsimulate the minimum amount of a resource list\n *\t\tfor an event list until a point in time.  The\n *\t\tcomparison we are simulating the minimum for is\n *\t\t(resources_available.foo - resources_assigned.foo)\n *\t\tThe minimum is simulated by holding resources_available\n *\t\tconstant and maximizing the resources_assigned value\n *\n * @note\n * \t\tThis function only simulates START and END events.  
If at some\n *\t\tpoint in the future we start simulating events such as\n *\t\tqmgr -c 's s resources_available.ncpus + =5' this function\n *\t\twill have to be revisited.\n *\n * @param[in] reslist\t- resource list to simulate\n * @param[in] end\t- end time\n * @param[in] calendar\t- calendar to simulate\n * @param[in] incl_arr\t- only use events for resresvs in this array (can be NULL)\n * @param[in] exclude\t- job/resv to ignore (possibly NULL)\n *\n * @return static pointer to the amount of resources available during\n * @retval the entire length from now to end\n * @retval\tNULL\t: on error\n *\n * @par MT-safe: No\n */\nschd_resource *\nsimulate_resmin(schd_resource *reslist, time_t end, event_list *calendar,\n\t\tresource_resv **incl_arr, resource_resv *exclude)\n{\n\tstatic schd_resource *retres = NULL; /* return pointer */\n\n\tschd_resource *cur_res;\n\tschd_resource *cur_resmin;\n\tschd_resource *res;\n\tschd_resource *resmin = NULL;\n\ttimed_event *te;\n\tunsigned int event_mask = (TIMED_RUN_EVENT | TIMED_END_EVENT);\n\n\tif (reslist == NULL)\n\t\treturn NULL;\n\n\t/* if there is no calendar, then there is nothing to do */\n\tif (calendar == NULL)\n\t\treturn reslist;\n\n\t/* If there are no run events in the calendar between now and the end time\n\t * then there is nothing to do. 
Nothing will reduce resources (only increase)\n\t */\n\tif (exists_run_event(calendar, end) == 0)\n\t\treturn reslist;\n\n\tif (retres != NULL) {\n\t\tfree_resource_list(retres);\n\t\tretres = NULL;\n\t}\n\n\tif ((res = dup_resource_list(reslist)) == NULL)\n\t\treturn NULL;\n\tif ((resmin = dup_resource_list(reslist)) == NULL) {\n\t\tfree_resource_list(res);\n\t\treturn NULL;\n\t}\n\n\tte = get_next_event(calendar);\n\tfor (te = find_init_timed_event(te, IGNORE_DISABLED_EVENTS, event_mask);\n\t     te != NULL && (end == 0 || te->event_time < end);\n\t     te = find_next_timed_event(te, IGNORE_DISABLED_EVENTS, event_mask)) {\n\t\tauto resresv = static_cast<resource_resv *>(te->event_ptr);\n\t\tif (incl_arr == NULL || find_resource_resv_by_indrank(incl_arr, -1, resresv->rank) != NULL) {\n\t\t\tif (resresv != exclude) {\n\t\t\t\tfor (auto req = resresv->resreq; req != NULL; req = req->next) {\n\t\t\t\t\tif (req->type.is_consumable) {\n\t\t\t\t\t\tcur_res = find_alloc_resource(res, req->def);\n\n\t\t\t\t\t\tif (cur_res == NULL) {\n\t\t\t\t\t\t\tfree_resource_list(res);\n\t\t\t\t\t\t\tfree_resource_list(resmin);\n\t\t\t\t\t\t\treturn NULL;\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tif (te->event_type == TIMED_RUN_EVENT)\n\t\t\t\t\t\t\tcur_res->assigned += req->amount;\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\tcur_res->assigned -= req->amount;\n\n\t\t\t\t\t\tcur_resmin = find_alloc_resource(resmin, req->def);\n\t\t\t\t\t\tif (cur_resmin == NULL) {\n\t\t\t\t\t\t\tfree_resource_list(res);\n\t\t\t\t\t\t\tfree_resource_list(resmin);\n\t\t\t\t\t\t\treturn NULL;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (cur_res->assigned > cur_resmin->assigned)\n\t\t\t\t\t\t\tcur_resmin->assigned = cur_res->assigned;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\tfree_resource_list(res);\n\tretres = resmin;\n\treturn retres;\n}\n\n/**\n * @brief\n * \t\treturn a printable name for a policy change event\n *\n * @param[in]\tte\t-\tpolicy change timed event\n *\n * @return\tprintable string name of policy change event\n * 
@retval\tNULL\t: if not found or error\n */\nconst char *\npolicy_change_to_str(timed_event *te)\n{\n\tint i;\n\tif (te == NULL)\n\t\treturn NULL;\n\n\tfor (i = 0; policy_change_func_name[i].func != NULL; i++) {\n\t\tif (te->event_func == policy_change_func_name[i].func)\n\t\t\treturn policy_change_func_name[i].str;\n\t}\n\n\treturn NULL;\n}\n\n/**\n * @brief\n * \t\tshould we do anything on policy change events\n *\n * @param[in] sinfo \t- server\n * @param[in] resresv \t- a resresv to check\n *\n * @return\tint\n * @retval\t1\t: there is something to do\n * @retval \t0 \t: nothing to do\n * @retval \t-1 \t: error\n *\n */\nint\npolicy_change_info(server_info *sinfo, resource_resv *resresv)\n{\n\tstatus *policy;\n\n\tif (sinfo == NULL || sinfo->policy == NULL)\n\t\treturn -1;\n\n\tpolicy = sinfo->policy;\n\n\t/* check to see if we may be holding resources by backfilling during one\n\t * prime status, just to turn it off in the next, thus increasing the\n\t * resource pool\n\t */\n\tif (conf.prime_bf != conf.non_prime_bf)\n\t\treturn 1;\n\n\t/* check to see if we're backfilling around prime status changes\n\t * if we are, we may have been holding up running jobs until the next\n\t * prime status change.  In this case, we have something to do at a status\n\t * change.\n\t * We only have to worry if prime_exempt_anytime_queues is false.  
If it is\n\t * True, backfill_prime only affects prime or non-prime queues which we\n\t * handle below.\n\t */\n\tif (!conf.prime_exempt_anytime_queues &&\n\t    (conf.prime_bp + conf.non_prime_bp >= 1))\n\t\treturn 1;\n\n\tif (resresv != NULL) {\n\t\tif (resresv->is_job && resresv->job != NULL) {\n\t\t\tif (policy->is_ded_time && resresv->job->queue->is_ded_queue)\n\t\t\t\treturn 1;\n\t\t\tif (policy->is_prime == PRIME &&\n\t\t\t    resresv->job->queue->is_prime_queue)\n\t\t\t\treturn 1;\n\t\t\tif (policy->is_prime == NON_PRIME &&\n\t\t\t    resresv->job->queue->is_nonprime_queue)\n\t\t\t\treturn 1;\n\t\t}\n\n\t\treturn 0;\n\t}\n\n\tif (policy->is_ded_time && sinfo->has_ded_queue) {\n\t\tfor (auto qinfo : sinfo->queues) {\n\t\t\tif (qinfo->is_ded_queue &&\n\t\t\t    qinfo->jobs != NULL)\n\t\t\t\treturn 1;\n\t\t}\n\t}\n\tif (policy->is_prime == PRIME && sinfo->has_prime_queue) {\n\t\tfor (auto qinfo : sinfo->queues) {\n\t\t\tif (qinfo->is_prime_queue &&\n\t\t\t    qinfo->jobs != NULL)\n\t\t\t\treturn 1;\n\t\t}\n\t}\n\tif (policy->is_prime == NON_PRIME && sinfo->has_nonprime_queue) {\n\t\tfor (auto qinfo : sinfo->queues) {\n\t\t\tif (qinfo->is_nonprime_queue &&\n\t\t\t    qinfo->jobs != NULL)\n\t\t\t\treturn 1;\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\ttakes a bitfield returned by simulate_events and determines if\n *      the amount of resources has gone up, down, or is unchanged.  
If events\n *\t  \tcaused resources to be both freed and used, we err on the side of\n *\t  \tcaution and say there are more resources.\n *\n * @param[in]\tsimret\t-\treturn bitfield from simulate_events\n *\n * @retval\t1 : more resources are available for use\n * @retval  0 : resources have not changed\n * @retval -1 : fewer resources are available for use\n */\nint\ndescribe_simret(unsigned int simret)\n{\n\tunsigned int more =\n\t\t(TIMED_END_EVENT | TIMED_DED_END_EVENT | TIMED_NODE_UP_EVENT);\n\tunsigned int less =\n\t\t(TIMED_RUN_EVENT | TIMED_DED_START_EVENT | TIMED_NODE_DOWN_EVENT);\n\n\tif (simret & more)\n\t\treturn 1;\n\tif (simret & less)\n\t\treturn -1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tadds event(s) for bringing the node back up after we provision a node\n *\n * @param[in] calendar \t\t- event list to add event(s) to\n * @param[in] event_time \t- time of the event\n * @param[in] node \t\t\t- node in question\n *\n * @return\tsuccess/failure\n * @retval \t1 : on success\n * @retval \t0 : on failure/error\n */\nint\nadd_prov_event(event_list *calendar, time_t event_time, node_info *node)\n{\n\ttimed_event *te;\n\n\tif (calendar == NULL || node == NULL)\n\t\treturn 0;\n\n\tte = create_event(TIMED_NODE_UP_EVENT, event_time, (event_ptr_t *) node,\n\t\t\t  (event_func_t) node_up_event, NULL);\n\tif (te == NULL)\n\t\treturn 0;\n\tadd_event(calendar, te);\n\t/* if the node is a resv node, we need to add an event to bring the\n\t * server version of the resv node back up\n\t */\n\tif (node->svr_node != NULL) {\n\t\tte = create_event(TIMED_NODE_UP_EVENT, event_time,\n\t\t\t\t  (event_ptr_t *) node->svr_node, (event_func_t) node_up_event, NULL);\n\t\tif (te == NULL)\n\t\t\treturn 0;\n\t\tadd_event(calendar, te);\n\t}\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tgeneric simulation function which will call a function pointer over\n *      events of a calendar from now up to (but not including) the end time.\n * @par\n *\t  \tThe simulation works by looping 
searching for a success or failure.\n *\t  \tThe loop will stop if the function returns 1 for success or -1 for\n *\t  \tfailure.  We continue looping if the function returns 0.  If we run\n *\t  \tout of events, we return the default passed in.\n *\n * @par Function:\n * \t\tThe function can return three return values\n *\t \t>0 success - stop looping and return success\n *\t  \t0 failure - keep looping\n *\t \t<0 failure - stop looping and return failure\n *\n * @param[in] calendar \t\t- calendar of timed events\n * @param[in] event_mask \t- mask of timed_events which we want to simulate\n * @param[in] end \t\t\t- end of simulation (0 means search all events)\n * @param[in] default_ret \t- default return value if we reach the end of the simulation\n * @param[in] func \t\t\t- the function to call on each timed event\n * @param[in] arg1 \t\t\t- generic arg1 to function\n * @param[in] arg2 \t\t\t- generic arg2 to function\n *\n * @return success of simulate\n * @retval 1 : if simulation is success\n * @retval 0 : if func returns failure or there is an error\n */\nint\ngeneric_sim(event_list *calendar, unsigned int event_mask, time_t end, int default_ret,\n\t    int (*func)(timed_event *, void *, void *), void *arg1, void *arg2)\n{\n\ttimed_event *te;\n\tint rc = 0;\n\tif (calendar == NULL || func == NULL)\n\t\treturn 0;\n\n\t/* We need to handle the calendar's initial event special because\n\t * get_next_event() only returns the calendar's next_event member.\n\t * We need to make sure the initial event is of the correct type.\n\t */\n\tte = get_next_event(calendar);\n\n\tfor (te = find_init_timed_event(te, IGNORE_DISABLED_EVENTS, event_mask);\n\t     te != NULL && rc == 0 && (end == 0 || te->event_time < end);\n\t     te = find_next_timed_event(te, IGNORE_DISABLED_EVENTS, event_mask)) {\n\t\trc = func(te, arg1, arg2);\n\t}\n\n\tif (rc > 0)\n\t\treturn 1;\n\telse if (rc < 0)\n\t\treturn 0;\n\n\treturn default_ret;\n}\n"
  },
  {
    "path": "src/scheduler/simulate.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _SIMULATE_H\n#define _SIMULATE_H\n\n#include \"data_types.h\"\n#include \"constant.h\"\n\n/*\n *\tsimulate_events - simulate the future of a PBS universe\n */\nunsigned int\nsimulate_events(status *policy, server_info *sinfo,\n\t\tenum schd_simulate_cmd cmd, void *arg, time_t *sim_time);\n\n/*\n *\tis_timed - check if a resresv is a timed event\n * \t\t\t (i.e. has a start and end time)\n */\nint is_timed(event_ptr_t *event_ptr);\n\n/*\n *      get_next_event - get the next_event from an event list\n *\n *        \\param elist - the event list\n *\n *      \\return the current event from the event list\n */\ntimed_event *get_next_event(event_list *elist);\n\n/*\n * find the initial event based on a timed_event\n *\n *\tevent            - the current event\n *\tignore_disabled  - ignore disabled events\n *\tsearch_type_mask - bitmask of types of events to search\n *\n *\treturns the initial event of the correct type/disabled or not\n *\n *\tNOTE: IGNORE_DISABLED_EVENTS exists to be passed in as the\n *\t\t   ignore_disabled parameter.  
It is non-zero.\n *\n *\tNOTE: ALL_MASK can be passed in for search_type_mask to search\n *\t\t   for all event types\n */\ntimed_event *find_init_timed_event(timed_event *event, int ignore_disabled, unsigned int search_type_mask);\n/*\n *\n *\tfind_next_timed_event - find the next event based on a timed_event\n *\n *\tevent            - the current event\n *\tignore_disabled  - ignore disabled events\n *\tsearch_type_mask - bitmask of types of events to search\n *\n *\treturn the next timed event of the correct type and disabled or not\n */\ntimed_event *find_next_timed_event(timed_event *event, int ignore_disabled, unsigned int search_type_mask);\n\n/*\n *\n *\tfind the previous event based on a timed_event\n *\n *\t  event            - the current event\n *\t  ignore_disabled  - ignore disabled events\n *\t  search_type_mask - bitmask of types of events to search\n *\n *\treturn the previous timed event of the correct type and disabled or not\n */\ntimed_event *find_prev_timed_event(timed_event *event, int ignore_disabled, unsigned int search_type_mask);\n\n/*\n *      set_timed_event_disabled - set the timed_event disabled bit\n *\n *        te       - timed event to set\n *        disabled - used to set the disabled bit\n *\n *      return nothing\n */\nvoid set_timed_event_disabled(timed_event *te, int disabled);\n\n/*\n *\n *\tfind_timed_event - find a timed_event\n *\t\t\t   we need to search by name and type because there\n *\t\t\t   can be multiple events with the same name and\n *\t\t\t   different types\n *\n *\t  te_list - timed_event list to search in\n *\t  ignore_disabled - ignore disabled events\n *\t  name    - name of timed_event to search for\n *\t  event_type - event_type or TIMED_LOW to ignore\n *\t  event_time - time or 0 to ignore\n *\n *\treturn found timed_event or NULL\n *\n */\ntimed_event *\nfind_timed_event(timed_event *te_list, const std::string &name, int ignore_disabled,\n\t\t enum timed_event_types event_type, time_t 
event_time);\ntimed_event *find_timed_event(timed_event *te_list, int ignore_disabled, enum timed_event_types event_type, time_t event_time);\ntimed_event *find_timed_event(timed_event *te_list, enum timed_event_types event_type);\ntimed_event *find_timed_event(timed_event *te_list, const std::string &name, enum timed_event_types event_type, time_t event_time);\ntimed_event *find_timed_event(timed_event *te_list, time_t event_time);\n\n/*\n *      next_event - move an event_list to the next event and return it\n *\n *        \\param sinfo - server universe containing the event list\n *        \\param advance - whether to advance to the next event\n *\n *      \\return the next event or NULL if there are no more events\n */\ntimed_event *next_event(server_info *sinfo, int advance);\n\n/*\n *      perform_event - takes a timed_event and performs any actions\n *                      required by the event to be completed.  Currently\n *                      only handles run and end events.\n *\n *        \\param event - the event to perform\n *\n *      \\return success 1 / failure 0\n */\nint perform_event(status *policy, timed_event *event);\n\n/*\n *      create_event_list - create an event_list from running jobs and\n *                              confirmed reservations\n *\n *        \\param sinfo - server universe to act upon\n *\n *      \\return event_list\n */\nevent_list *create_event_list(server_info *sinfo);\n\n/*\n *\texists_run_event - returns 1 if there exists a timed run event in\n *\t\t\t\tthe event list between the current event\n *\t\t\t\tand the last event, or the end time if it is\n *\t\t\t\tset\n *\n *\t  calendar - event list\n *\t  end_time - optional end time (0 means search all events)\n *\n *\treturns 1: there exists a run event\n *\t\t0: there doesn't exist a run event\n */\nint exists_run_event(event_list *calendar, time_t end);\n\n/* Checks to see if there is a run event on a node before the end time */\nint exists_run_event_on_node(node_info *ninf, time_t end);\n\n/* Checks if a reservation run event exists between now and 'end' 
*/\nint exists_resv_event(event_list *calendar, time_t end);\n\n/*\n *      create_events - creates a timed_event list from running jobs\n *                          and confirmed reservations\n *\n *        \\param sinfo - server universe to act upon\n *        \\param flags -\n *\n *        \\return timed_event list\n */\ntimed_event *create_events(server_info *sinfo);\n\n/*\n * new_event_list() - event_list constructor\n */\n#ifdef NAS /* localmod 005 */\nevent_list *new_event_list(void);\n#else\nevent_list *new_event_list();\n#endif /* localmod 005 */\n\n/*\n *      dup_event_list() - event_list copy constructor\n *\n *        \\param oel - event list to copy\n *        \\param nsinfo - new universe\n *\n *      \\return duplicated event_list\n */\nevent_list *dup_event_list(event_list *oelist, server_info *nsinfo);\n\n/*\n * free_event_list - event_list destructor\n */\nvoid free_event_list(event_list *elist);\n\n/*\n * new_timed_event() - timed_event constructor\n */\n#ifdef NAS /* localmod 005 */\ntimed_event *new_timed_event(void);\n#else\ntimed_event *new_timed_event();\n#endif /* localmod 005 */\n\n/*\n * dup_timed_event() - timed_event copy constructor\n *\n *   \\param ote - timed_event to copy\n *   \\param nsinfo - \"new\" universe where to find the event_ptr\n */\ntimed_event *dup_timed_event(timed_event *ote, server_info *nsinfo);\n\n/*\n *      dup_timed_event_list() - timed_event copy constructor for a list\n *\n *        \\param ote_list - list of timed_events to copy\n *        \\param nsinfo - \"new\" universe where to find the event_ptr\n */\ntimed_event *dup_timed_event_list(timed_event *ote_list, server_info *nsinfo);\n\n/*\n * free_timed_event - timed_event destructor\n */\nvoid free_timed_event(timed_event *te);\n\n/*\n * free_timed_event_list - destructor for a list of timed_event structures\n */\nvoid free_timed_event_list(timed_event *te_list);\n\n#ifndef NAS /* localmod 005 */\n/*\n * new_event_list - event_list constructor\n 
*/\nevent_list *new_event_list();\n\n/*\n *      dup_event_list - event_list copy constructor\n *\n *        \\param oel    - event list to copy\n *        \\param nsinfo - \"new\" universe to find event pointers from\n */\nevent_list *dup_event_list(event_list *oel, server_info *nsinfo);\n\n/*\n *      free_event_list - event_list destructor\n *\n *        \\param el - event_list to free\n */\nvoid free_event_list(event_list *el);\n#endif /* localmod 005 */\n\n/*\n *      find_event_by_name - find an event by event name\n *\n *        \\param events - list of timed_event structures to search\n *        \\param name   - name of event to search for\n *\n *      \\return found timed_event or NULL\n */\ntimed_event *find_event_by_name(timed_event *events, char *name);\n\n/*\n *      add_timed_event - add an event to a sorted list of events\n *\n *      ASSUMPTION: if multiple events are at the same time, all\n *                  end events will come first\n *\n *        \\param events - event list to add event to\n *        \\param te     - timed_event to add to list\n *\n *      \\return head of timed_event list\n */\ntimed_event *add_timed_event(timed_event *events, timed_event *te);\n/*\n *\n *\tadd_event - add a timed_event to an event list\n *\n *\t  calendar - event list\n *\t  te       - timed event\n *\n *\treturn 1 success / 0 failure\n *\n */\nint add_event(event_list *calendar, timed_event *te);\n\n/*\n *\tdelete_event - delete a timed event from an event list\n */\nvoid delete_event(server_info *sinfo, timed_event *e);\n\n/*\n *      create_event - create a timed_event with the passed in arguments\n *\n *        \\param event_type - event_type member\n *        \\param event_time - event_time member\n *        \\param event_ptr  - event_ptr member\n *\n *      \\return newly created timed_event or NULL on error\n */\ntimed_event *\ncreate_event(enum timed_event_types event_type,\n\t     time_t event_time, event_ptr_t *event_ptr,\n\t     event_func_t 
event_func, void *event_func_arg);\n\n/*\n *\tcalc_run_time - calculate the run time of a job\n *\n *\treturns time_t of when the job will run\n *\t\tor -1 on error\n */\ntime_t calc_run_time(const std::string &name, server_info *sinfo, int flags);\n\n/*\n *\n *\tfind_event_ptr - find the correct event pointer for the duplicated\n *\t\t\t event based on event type\n *\n *\t  \\param ote    - old event\n *\t  \\param nsinfo - \"new\" universe\n *\n * \t\\return event_ptr in new universe or NULL on error\n */\nevent_ptr_t *find_event_ptr(timed_event *ote, server_info *nsinfo);\n/*\n *\tdetermine_event_name - determine a timed event's name based on\n *\t\t\t\tits event type and set it\n *\n *\t  \\param te - the event\n *\n *\t\\par Side Effects\n *\tte -> name is set to static data or data owned by other entities.\n *\tIt should not be freed.\n *\n *\t\\returns 1 if the name was successfully set, 0 if not\n */\nint determine_event_name(timed_event *te);\n\n/*\n *\tdedtime_change - update dedicated time policy\n *\n *\t  \\param policy - status structure to update dedicated time policy\n *\t  \\param arg    - \"START\" or \"END\"\n *\n *\t\\return success 1 or failure/error 0\n */\nint dedtime_change(status *policy, void *arg);\n\n/*\n *\tadd_dedtime_events - add the dedicated time events from conf\n *\n *\t  \\param elist - the event list to add the dedicated time events to\n *\n *\t\\return success 1 / failure 0\n */\nint add_dedtime_events(event_list *elist, struct status *policy);\n\n/*\n *\n *\tsimulate_resmin - simulate the minimum amount of a resource list\n *\t\t\t  for an event list until a point in time\n *\n *\t  reslist  - resource list to simulate\n *\t  end\t  - end time\n *\t  calendar - calendar to simulate\n *\t  incl_arr - only use events for resresvs in this array (can be NULL)\n *\t  exclude\t  - job/resv to ignore (possibly NULL)\n *\n *\treturn static pointer to the amount of resources available during\n *\tthe entire length from now to end\n 
*/\nschd_resource *\nsimulate_resmin(schd_resource *reslist, time_t end, event_list *calendar,\n\t\tresource_resv **incl_arr, resource_resv *exclude);\n\n/*\n *\n *\tpolicy_change_to_str - return a printable name for a policy change event\n *\n *\t  te - policy change timed event\n *\n *\treturn printable string name of policy change event\n */\nconst char *policy_change_to_str(timed_event *te);\n\n/*\n * policy_change_info - should we do anything on policy change events\n */\nint policy_change_info(server_info *sinfo, resource_resv *resresv);\n/*\n *        takes a bitfield returned by simulate_events and will determine if\n *        the amount of resources has gone up, down, or is unchanged.  If events\n *\t  caused resources to be both freed and used, we err on the side of\n *\t  caution and say there are more resources.\n */\nint describe_simret(unsigned int simret);\n\n/*\n *       adds event(s) for bringing the node back up after we provision a node\n */\nint add_prov_event(event_list *calendar, time_t event_time, node_info *node);\n\n/*\n * generic simulation function which will call a function pointer over\n * a calendar from now to an end time.  The simulation will continue\n * until the end of the calendar or until the function returns nonzero.\n */\nint\ngeneric_sim(event_list *calendar, unsigned int event_mask, time_t end, int default_ret,\n\t    int (*func)(timed_event *, void *, void *), void *arg1, void *arg2);\n\nte_list *new_te_list();\n\nte_list *dup_te_list(te_list *ote, timed_event *new_timed_event_list);\nte_list *dup_te_lists(te_list *ote, timed_event *new_timed_event_list);\n\nvoid free_te_list(te_list *tel);\n\nint add_te_list(te_list **tel, timed_event *te);\nint remove_te_list(te_list **tel, timed_event *e);\n\n#endif /* _SIMULATE_H */\n"
  },
  {
    "path": "src/scheduler/site_code.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n *=====================================================================\n * site_code.c - Code to implement site-specific scheduler functions\n *=====================================================================\n */\n\n// clang-format off\n\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <ctype.h>\n#include <errno.h>\n#include <pbs_error.h>\n#include <pbs_ifl.h>\n#include <regex.h>\n#include <sched_cmds.h>\n#include <time.h>\n#include <limits.h>\n#include <assert.h>\n#include \"log.h\"\n#include \"data_types.h\"\n#include \"fifo.h\"\n#include \"queue_info.h\"\n#include \"server_info.h\"\n#include \"node_info.h\"\n#include \"check.h\"\n#include \"constant.h\"\n#include \"job_info.h\"\n#include \"misc.h\"\n#include \"config.h\"\n#include \"sort.h\"\n#include \"parse.h\"\n#include \"globals.h\"\n#include \"prev_job_info.h\"\n#include \"fairshare.h\"\n#include \"prime.h\"\n#include \"dedtime.h\"\n#include \"resv_info.h\"\n#include \"range.h\"\n#include \"resource.h\"\n#include \"resource_resv.h\"\n#include \"simulate.h\"\n\n#ifdef NAS\n\n#include \"site_data.h\"\n#include \"site_code.h\"\n#include \"site_queue.h\"\n\n#define TJ_COST_MAX\t10.0\t/* Max CPU to spend searching for top jobs */\n/* Global NAS variables */\n/* localmod 030 */\nint do_soft_cycle_interrupt;\nint 
do_hard_cycle_interrupt;\nint consecutive_interrupted_cycles = 0;\ntime_t interrupted_cycle_start_time;\n/* localmod 038 */\nint num_topjobs_per_queues;      /* # of per_queues top jobs on the calendar */\n\nstruct\tshr_type {\n\tstruct shr_type\t*next;\n\tint\t\tsh_tidx;\t/* type index */\n\tint\t\tsh_cls;\t\t/* index into sh_amt arrays */\n\tint\t\tcpus_per_node;\t/* guess as to CPUs per node of type */\n\tchar\t\tname[4];\t/* actually as long as needed */\n};\nstruct\tshr_class {\n\tstruct shr_class *next;\n\tint\t\tsh_cls;\t\t/* index into sh_amt arrays */\n\tchar\t\tname[4];\t/* actually as long as needed */\n};\n\nstatic void bump_share_count(share_info *, enum site_j_share_type, sh_amt *, int);\nstatic void bump_demand_count(share_info *, enum site_j_share_type, sh_amt *, int);\nstatic void clear_topjob_counts(share_info* root);\nstatic int check_cpu_share(share_head* sh, resource_resv* resv);\nstatic void count_active_cpus(resource_resv **, int, sh_amt *);\nstatic void count_demand_cpus(resource_resv **, int);\nstatic void count_contrib_cpus(share_info *, share_info *, sh_amt *);\nstatic void count_cpus(node_info **, int ncnt, queue_info **, sh_amt *);\nstatic void set_share_cpus(share_info *node, sh_amt *, sh_amt *);\nstatic void zero_share_counts(share_info *node);\n\nstatic int dup_shares(share_head *oldsh, server_info *nsinfo);\nstatic share_info *dup_share_tree(share_info *oroot);\nstatic share_info *find_entity_share(char *name, share_info *node);\nstatic share_info *find_most_favored_share(share_info* root, int topjobs);\nstatic int find_share_class(struct shr_class *head, char *name);\nstatic share_info *find_share_group(share_info *root, char *name);\nstatic site_user_info *find_user(site_user_info **head, char *name);\n/* static int find_share_type(struct shr_type *head, char *name); */\nstatic void free_share_head(share_head *sh, int flag);\nstatic void free_share_tree(share_info *root);\nstatic void free_users(site_user_info **);\nstatic double 
get_share_ratio(sh_amt*, sh_amt*, sh_amt_array*);\nstatic int init_users(server_info *);\nstatic void list_share_info(FILE *, share_info *, const char *, int, const char *, int);\nstatic struct share_head* new_share_head(int cnt);\nstatic share_info* new_share_info(char *name, int cnt);\nstatic share_info* new_share_info_clone(share_info *old);\nstatic int reconcile_shares(share_info *root, int cnt);\nstatic int reconcile_share_tree(share_info *root, share_info *def, int cnt);\n/* static struct shr_class* shr_class_info_by_idx(int); */\n/* static struct shr_class* shr_class_info_by_name(const char *); */\nstatic char* shr_class_name_by_idx(int);\n/* static struct shr_class* shr_class_info_by_type_name(const char *); */\nstatic struct shr_type* shr_type_info_by_idx(int);\nstatic struct shr_type* shr_type_info_by_name(const char *);\nstatic void squirrel_shr_head(server_info *sinfo);\nstatic void un_squirrel_shr_head(server_info *sinfo);\nstatic void squirrel_shr_tree(share_info *root);\nstatic void un_squirrel_shr_tree(share_info *root);\n\ntypedef\t\tint (pick_next_filter)(resource_resv *, share_info *);\n\nstatic resource_resv* pick_next_job(status *, resource_resv **, int (*)(), share_info *);\n\n#ifdef\tNAS_HWY149\nstatic int job_filter_hwy149(resource_resv *, share_info *);\n#endif\nstatic int job_filter_dedres(resource_resv *, share_info *);\n#ifdef\tNAS_HWY101\nstatic int job_filter_hwy101(resource_resv *, share_info *);\n#endif\nstatic int job_filter_normal(resource_resv *, share_info *);\n\n/*\n * Private variables used by CPU allocations code\n */\n\nstatic\tstruct shr_class *shr_classes = NULL;\nstatic\tint\t\tshr_class_count = 0;\nstatic\tstruct shr_type\t*shr_types = NULL;\nstatic\tint\t\tshr_type_count = 0;\nstatic\tchar\t\t*shr_selector = NULL;\nstatic\tshare_head\t*cur_shr_head = NULL;\n\n/*\n * Other private variables\n */\nstatic\tsite_user_info\t*users = NULL;\n\n/*\n *=====================================================================\n * 
External functions\n *=====================================================================\n */\n\n\n\n/*\n *=====================================================================\n * site_bump_topjobs(resv, delta) - Increment topjob count for job's\n *\tshare group\n * Entry:\tresv = resource_resv for job\n *\t\tdelta = CPU time needed to calendar the job\n * Returns:\tnew value for topjob count\n *=====================================================================\n */\nint\nsite_bump_topjobs(resource_resv *resv, double delta)\n{\n\tjob_info*\tjob;\n\tshare_info*\tsi;\n\n\tif (resv == NULL || !resv->is_job || (job = resv->job) == NULL)\n\t\treturn 0;\n\tif ((si = job->sh_info) == NULL || (si = si->leader) == NULL)\n\t\treturn 0;\n\tsi->tj_cpu_cost += delta;\n#ifdef NAS_DEBUG\n\tprintf(\"YYY %s %d %g %g %g\\n\", si->name, si->topjob_count+1,\n\t\tsi->ratio, si->ratio_max, si->tj_cpu_cost);\n\tfflush(stdout);\n#endif\n\treturn ++(si->topjob_count);\n}\n\n\n\n/*\n *=====================================================================\n * site_check_cpu_share(sinfo, resv) - Check whether job\n *\t\t\twould exceed any group CPU allocation.\n * Entry:\tsinfo = Server info\n *\t\tpolicy = policy in effect at current time\n *\t\tresv = job/reservation\n * Returns:\t0 if job not blocked\n *\t\t<> if blocked by group CPU allocation\n *=====================================================================\n */\nint\nsite_check_cpu_share(server_info *sinfo, status *policy, resource_resv *resv)\n{\n\tint\t\trc = 0;\t\t/* Assume okay */\n\tjob_info\t*job;\n\tshare_head\t*sh;\t\t/* global share totals */\n\ttimed_event\t*te;\n\tlong\t\ttime_left, end;\n\tunsigned int\tevent_mask;\n\n\tif (sinfo == NULL || policy == NULL || resv == NULL)\n\t\treturn 0;\n\tif (!resv->is_job || (job = resv->job) == NULL)\n\t\treturn 0;\n\tif ((sh = sinfo->share_head) == NULL)\n\t\treturn 0;\n\t/* Allow accumulating shares, but not enforcing them */\n\tif (policy->shares_track_only)\n\t\treturn 
0;\n\t/*\n\t * Skip rest if job exempt from limits\n\t */\n\tif (resv->share_type == J_TYPE_ignore)\n\t\treturn 0;\n#ifdef\tNAS_HWY149\n\tif (job->NAS_pri == NAS_HWY149)\n\t\treturn 0;\n#endif\n#ifdef\tNAS_HWY101\n\tif (job->NAS_pri == NAS_HWY101)\n\t\treturn 0;\n#endif\n\tif (job->resv != NULL)\n\t\treturn 0;\t\t/* job running in reservation */\n\n\trc = check_cpu_share(sh, resv);\n\tif (rc != 0) {\n\t\t/*\n\t\t * Job cannot run now\n\t\t */\n\t\treturn rc;\n\t}\n\t/*\n\t * See if would conflict with anything on calendar\n\t */\n\tif (sinfo->calendar == NULL)\n\t\treturn rc;\n\ttime_left = calc_time_left(resv, 0);\n\tend = sinfo->server_time + time_left;\n\tif (!exists_run_event(sinfo->calendar, end))\n\t\treturn rc;\n\tsquirrel_shr_head(sinfo);\n\tte = get_next_event(sinfo->calendar);\n\tevent_mask = TIMED_RUN_EVENT | TIMED_END_EVENT;\n\tfor (te = find_init_timed_event(te, IGNORE_DISABLED_EVENTS, event_mask);\n\t\tte != NULL && te->event_time < end;\n\t\tte = find_next_timed_event(te, IGNORE_DISABLED_EVENTS, event_mask)) {\n\n\t\tresource_resv *te_rr;\n\n\t\tte_rr = (resource_resv *) te->event_ptr;\n\t\tif (te_rr == resv)\n\t\t\tcontinue;\t\t/* Should not happen */\n\t\tif (te->event_type == TIMED_RUN_EVENT) {\n\t\t\tsite_update_on_run(sinfo, NULL, te_rr, 0, NULL);\n\t\t\trc = check_cpu_share(sh, resv);\n\t\t\tif (rc != 0) {\n\t\t\t\trc = BACKFILL_CONFLICT;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tif (te->event_type == TIMED_END_EVENT) {\n\t\t\tsite_update_on_end(sinfo, NULL, te_rr);\n\t\t\t/* Next test should never catch anything */\n\t\t\trc = check_cpu_share(sh, resv);\n\t\t\tif (rc != 0) {\n\t\t\t\trc = BACKFILL_CONFLICT;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\tun_squirrel_shr_head(sinfo);\n\treturn rc;\n}\n\n\n\n/*\n *=====================================================================\n * check_cpu_share(sinfo, resv) - Check whether job would exceed CPU\n *\t\tshares at this instant in time.\n * Entry:\tsh = global share totals\n *\t\tresv = resource 
reservation to check\n *=====================================================================\n */\nstatic int\ncheck_cpu_share(share_head *sh, resource_resv *resv)\n{\n\tint\t\trc = 0;\t\t/* Assume okay */\n\tjob_info\t*job;\n\tshare_info\t*leader;\t/* info for group leader */\n\tint\t\tsh_cls;\t\t/* current share class */\n\tsh_amt\t\t*job_amts;\t/* amounts requested by job */\n\n\tif (sh == NULL || resv == NULL)\n\t\treturn rc;\n\tif ((job = resv->job) == NULL)\n\t\treturn rc;\n\tleader = job->sh_info;\n\tif (leader == NULL || (leader = leader->leader) == NULL)\n\t\treturn 0;\n\tjob_amts = job->sh_amts;\n\tif (job_amts == NULL)\n\t\treturn 0;\n\t/*\n\t * Precedence of blockages: high to low\n\t * GROUP_CPU_INSUFFICIENT\n\t * GROUP_CPU_SHARE\n\t * none\n\t */\n\tfor (sh_cls = 0; sh_cls < shr_class_count ; ++sh_cls) {\n\t\tint limited, borrowed, allocated;\n\t\tint asking;\n\t\tint rc2 = 0;\n\n\t\tasking = job_amts[sh_cls];\n#if\tNAS_CPU_MULT > 1\n\t\tif (asking % NAS_CPU_MULT) {\n\t\t\t/*\n\t\t\t * Round to multiple of NAS_CPU_MULT\n\t\t\t */\n\t\t\tasking += NAS_CPU_MULT - (asking % NAS_CPU_MULT);\n\t\t}\n#endif\n\t\tlimited = leader->share_inuse[sh_cls][J_TYPE_limited];\n\t\tborrowed = leader->share_inuse[sh_cls][J_TYPE_borrow];\n\t\tallocated = leader->share_ncpus[sh_cls];\n\n\t\tswitch (resv->share_type) {\n\t\t\tcase J_TYPE_limited:\n\t\t\t\t/*\n\t\t\t\t * If job exceeds share by itself\n\t\t\t\t */\n\t\t\t\tif (asking > allocated) {\n\t\t\t\t\trc2 = GROUP_CPU_INSUFFICIENT;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\t/*\n\t\t\t\t * If total limited jobs would exceed share\n\t\t\t\t */\n\t\t\t\tif (asking + limited > allocated) {\n\t\t\t\t\trc2 = GROUP_CPU_SHARE;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\t/* Fall through */\n\t\t\tcase J_TYPE_borrow:\n\t\t\t\t/*\n\t\t\t\t * Have we borrowed too much\n\t\t\t\t */\n\t\t\t\tif (asking + limited + borrowed >\n\t\t\t\t\tallocated + sh->sh_contrib[sh_cls]) {\n\t\t\t\t\trc2 = 
GROUP_CPU_SHARE;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\t;\n\t\t}\n\t\t/*\n\t\t * Remember most important limit among shares\n\t\t */\n\t\tif (rc == 0 || rc2 == GROUP_CPU_INSUFFICIENT) rc = rc2;\n\t}\n\treturn rc;\n}\n\n\n\n/*\n *=====================================================================\n * site_decode_time(str) - decode time string\n * (Based on decode_time in attr_fn_time.c)\n * Entry:\tstr = string in hh:mm:ss format\n * Returns:\tvalue of str in seconds\n *=====================================================================\n */\n#define PBS_MAX_TIME (LONG_MAX - 1)\ntime_t\nsite_decode_time(const char *val)\n{\n\tint   i;\n\tchar  msec[4];\n\tint   ncolon = 0;\n\tchar *pc;\n\ttime_t  rv = 0;\n\tchar *workval;\n\tchar *workvalsv;\n\n\tif (val == NULL || *val == '\\0') {\n\t\treturn (0);\n\t}\n\n\tworkval = strdup(val);\n\tworkvalsv = workval;\n\n\tfor (i = 0; i < 3; ++i)\n\t\tmsec[i] = '0';\n\tmsec[i] = '\\0';\n\n\tfor (pc = workval; *pc; ++pc) {\n\n\t\tif (*pc == ':') {\n\t\t\tif (++ncolon > 2)\n\t\t\t\tgoto badval;\n\t\t\t*pc = '\\0';\n\t\t\trv = (rv * 60) + atoi(workval);\n\t\t\tworkval = pc + 1;\n\n\t\t} else if (*pc == '.') {\n\t\t\t*pc++ = '\\0';\n\t\t\tfor (i = 0; (i < 3) && *pc; ++i)\n\t\t\t\tmsec[i] = *pc++;\n\t\t\tbreak;\n\t\t} else if (!isdigit((int)*pc)) {\n\t\t\tgoto badval;\t/* bad value */\n\t\t}\n\t}\n\trv = (rv * 60) + atoi(workval);\n\tif (rv > PBS_MAX_TIME)\n\t\tgoto badval;\n\tif (atoi(msec) >= 500)\n\t\trv++;\n\t(void)free(workvalsv);\n\treturn (rv);\n\n\tbadval:\t(void)free(workvalsv);\n\treturn (0);\n}\n\n\n\n/*\n *=====================================================================\n * site_dup_shares( osinfo, nsinfo ) - Duplicate share info.\n * Entry:\tosinfo = ptr to current server info\n *\t\tnsinfo = ptr to new server info\n *\t\t\tjobs[] in nsinfo must be filled in already\n * Returns:\t1 if duped okay, else 0\n * Sets\t\tnsinfo->share_head\n 
*=====================================================================\n */\nint\nsite_dup_shares(server_info *osinfo, server_info *nsinfo)\n{\n\tshare_head\t*oldsh;\n\tresource_resv\t*resv;\n\tint\t\ti;\n\n\tif (osinfo == NULL || nsinfo == NULL)\n\t\treturn 0;\n\tif ((oldsh = osinfo->share_head) == NULL) {\n\t\t/*\n\t\t * If not using shares, done.\n\t\t */\n\t\treturn 1;\n\t}\n\tif (oldsh->root == NULL)\n\t\treturn 0;\n\tif (!dup_shares(oldsh, nsinfo))\n\t\treturn 0;\n\t/*\n\t * Need to go through copy of jobs and point them into the new tree\n\t */\n\tfor (i = 0; i < nsinfo->sc.total; ++i) {\n\t\tresv = nsinfo->jobs[i];\n\t\tif (!resv->is_job || resv->job == NULL || resv->job->sh_info == NULL)\n\t\t\tcontinue;\n\t\tresv->job->sh_info = resv->job->sh_info->tptr;\n\t}\n\treturn 1;\n}\n\n\n\n/*\n *=====================================================================\n * site_dup_share_amts(oldp) - Clone share amount array\n * Entry:\toldp = ptr to existing array\n * Returns:\tptr to copy of old\n *=====================================================================\n */\nsh_amt *\nsite_dup_share_amts(sh_amt *oldp)\n{\n\tsh_amt\t*newp;\n\tsize_t\tsz;\n\n\tif (oldp == NULL)\n\t\treturn NULL;\n\tsz = shr_class_count * sizeof(*newp);\n\tnewp = static_cast<sh_amt *>(malloc(sz));\n\tif (newp == NULL)\n\t\treturn NULL;\n\tmemcpy(newp, oldp, sz);\n\treturn newp;\n}\n\n\n\n/*\n *=====================================================================\n * site_find_alloc_share(sinfo, name) - Find share info, allocating new\n *\t\tentry if needed.\n * Entry:\tsinfo = Current server info\n *\t\tname = Entity to locate share info for.\n * Returns:\tpointer to matching share_info structure,\n *\t\tNULL if no match\n *=====================================================================\n */\nshare_info *\nsite_find_alloc_share(server_info *sinfo, char *name)\n{\n\tshare_info *\tsi;\n\tshare_info *\tnsi;\n\n\tif (sinfo->share_head == NULL || (si = sinfo->share_head->root) == 
NULL)\n\t\treturn NULL;\n\tsi = find_entity_share(name, si);\n\tif (si == NULL) {\n\t\t/*\n\t\t * The default group is the root of the tree\n\t\t */\n\t\treturn sinfo->share_head->root;\n\t}\n\tif (si && si->pattern_type == share_info::PATTERN_SEPARATE &&\n\t\tstrcmp(name, si->name) != 0) {\n\t\t/*\n\t\t * On match against SEPARATE pattern, create new entry with\n\t\t * exact match.\n\t\t */\n\t\tnsi = new_share_info(name, shr_class_count);\n\t\tif (nsi != NULL) {\n\t\t\tnsi->pattern_type = share_info::PATTERN_NONE;\n\t\t\tnsi->leader = si->leader;\n\t\t\tnsi->parent = si;\n\t\t\tif (si->child) {\n\t\t\t\tfor (si = si->child ; si->sibling ; si = si->sibling)\n\t\t\t\t\t;\n\t\t\t\tsi->sibling = nsi;\n\t\t\t} else {\n\t\t\t\tsi->child = nsi;\n\t\t\t}\n\t\t\tsi = nsi;\n\t\t}\n\t}\n\treturn si;\n}\n\n\n\n/*\n *=====================================================================\n * site_free_shares(sinfo) - Free cloned share info\n * Entry:\tsinfo = server owning cloned info\n *=====================================================================\n */\nvoid\nsite_free_shares(server_info *sinfo)\n{\n\tshare_head\t*sh;\n\n\tif (sinfo == NULL || (sh = sinfo->share_head) == NULL)\n\t\treturn;\n\tfree_share_head(sh, 1);\n\tsinfo->share_head = NULL;\n}\n\n\n\n/*\n *=====================================================================\n * site_get_share(resresv) - Get ratio of cpus used to allocated\n * Entry:\tresresv = pointer to resource_resv\n * Returns:\tApproximate ratio of current CPUs in use to allocation\n *\t\tfor job's group.\n *=====================================================================\n */\ndouble\nsite_get_share(resource_resv *resresv)\n{\n\tjob_info\t*job;\n\tshare_info\t*si;\n\tdouble\t\tresult = 0.0;\n\n\tif (!resresv->is_job ||\n\t\t(job = resresv->job) == NULL ||\n\t\t(si = job->sh_info) == NULL ||\n\t\t(si = si->leader) == NULL)\n\t\t\treturn result;\n#ifdef\tNAS_HWY149\n\tif (job->priority == NAS_HWY149 || job->NAS_pri == 
NAS_HWY149) {\n\t\treturn result;\t\t/* Favor jobs on highway */\n\t}\n#endif\n#ifdef\tDRT_XXX_NAS_HWY101\n\tif (job->priority == NAS_HWY101 || job->NAS_pri == NAS_HWY101) {\n\t\treturn result;\t\t/* Favor jobs on highway */\n\t}\n#endif\n\tif (resresv->share_type == J_TYPE_ignore) {\n\t\treturn result;\t\t/* Favor jobs exempt from shares */\n\t}\n\tresult = get_share_ratio(si->share_ncpus, job->sh_amts,\n\t\tsi->share_inuse);\n\treturn result;\n}\n\n\n\n\n/*\n *=====================================================================\n * site_init_alloc( sinfo ) - Initialize allocated shares CPUs data\n * Entry:\tsinfo = ptr to server_info, with all data about jobs,\n *\t\t\tqueues, nodes, etc, already collected.\n * Exit:\talloc info updated\n *=====================================================================\n */\nvoid\nsite_init_alloc(server_info *sinfo)\n{\n\tshare_info\t*root;\n\tshare_info\t*leader;\n\tsh_amt *\tsh_active;\t/* counts of CPUs in use */\n\tsh_amt *\tsh_avail;\t/* counts of CPUs not in use */\n\tsh_amt *\tsh_contrib;\t/* counts of CPUs avail for borrow */\n\tsh_amt *\tsh_total;\t/* total counts of CPUs */\n\tshare_head *\tshead;\t\t/* active share info */\n\tint\t\ti;\n\n\tif (sinfo == NULL || (shead = sinfo->share_head) == NULL)\n\t\treturn;\n\tsh_active = shead->sh_active;\n\tsh_avail = shead->sh_avail;\n\tsh_contrib = shead->sh_contrib;\n\tsh_total = shead->sh_total;\n\troot = shead->root;\n\tif (sh_active == NULL || sh_avail == NULL || sh_contrib == NULL\n\t\t|| sh_total == NULL || root == NULL)\n\t\treturn;\n\t/*\n\t * Scan nodes to total number of CPUs of each type -> sh_total\n\t */\n\tcount_cpus(sinfo->nodes, sinfo->num_nodes, sinfo->queues, sh_total);\n\t/*\n\t * Scan jobs to accumulate CPUs in use or requested into share info\n\t * structures.\n\t */\n\tzero_share_counts(root);\n\tmemset(sh_active, 0, shr_class_count * sizeof(*sh_active));\n\tcount_active_cpus(sinfo->jobs, sinfo->sc.total, sh_active);\n\tcount_demand_cpus(sinfo->jobs, 
sinfo->sc.total);\n\t/*\n\t * Now, adjust CPUs available for sharing downward by current\n\t * use of jobs not associated with a share group.\n\t */\n\tleader = root->leader;\n\tfor (i = 0; i < shr_class_count; ++i) {\n\t\tint t;\n\t\tt = sh_total[i];\n\t\tif (leader != NULL) {\n\t\t\tint\tj;\n\t\t\tfor (j = 0; j < J_TYPE_COUNT; ++j) {\n\t\t\t\tt -= leader->share_inuse[i][j];\n\t\t\t}\n\t\t}\n\t\tsh_avail[i] = t;\n\t}\n\t/*\n\t * Convert raw allocations into CPU counts -> share_ncpus.\n\t */\n\tset_share_cpus(root, root->share_gross, sh_avail);\n\t/*\n\t * Count how many CPUs are available for borrowing.\n\t */\n\tcount_contrib_cpus(root, root, sh_contrib);\n\t/*\n\t * Root has access to all CPUs.\n\t */\n\tfor (i = 0; i < shr_class_count; ++i) {\n\t\troot->share_ncpus[i] = sh_total[i];\n\t}\n\tif (conf.partition_id == NULL) {\n\t\tsite_list_shares(stdout, sinfo, \"sia_\", 1);\n\t\tfflush(stdout);\n\t}\n}\n\n\n\n/*\n *=====================================================================\n * site_is_queue_topjob_set_aside(resv) - Check the topjob_set_aside attribute\n *\t\tfor the queue of the given job\n * Entry:\tresv = resource_resv for job\n * Returns:\t1 if topjob_set_aside=True for the queue\n *\t\t0 otherwise\n *=====================================================================\n */\nint\nsite_is_queue_topjob_set_aside(resource_resv *resv)\n{\n\tjob_info*\tjob;\n\n\tif (resv == NULL || !resv->is_job || (job = resv->job) == NULL ||\n\t\tjob->queue == NULL)\n\t\treturn 0;\n\n\treturn job->queue->is_topjob_set_aside;\n}\n\n\n\n/*\n *=====================================================================\n * site_is_share_king(policy) - Check if group shares are most important\n *\t\tjob sort criterion\n * Entry:\tpolicy = policy in effect\n *\t\tCall with policy = NULL to fetch previously computed value.\n * Returns:\t1 if group shares is second job sort key (after formula)\n *\t\t0 otherwise\n 
*=====================================================================\n */\nint\nsite_is_share_king(status *policy)\n{\n\tstatic int\tis_king = 0;\n\n\tif (policy == NULL)\n\t\treturn is_king;\t\t/* return previous value */\n\t/*\n\t * If no shares, shares are not king.\n\t */\n\tif (cur_shr_head == NULL) {\n\t\tis_king = 0;\n\t\treturn is_king;\n\t}\n\t/*\n\t * Examine the sort keys to see if shares are primary key\n\t */\n\tis_king = 0;\n\tif (!policy->sort_by.empty() && policy->sort_by[0].res_name == SORT_ALLOC)\n\t\tis_king = 1;\n\n\treturn is_king;\n}\n\n\n\n/*\n *=====================================================================\n * site_list_shares(fp, sinfo, pfx, flag) - Write current CPU allocation\n *\t\t\tinfo to file\n * Entry:\tfp = FILE * to write to\n *\t\tsinfo = server to list data for\n *\t\tpfx = string to prefix each line with\n *\t\tflag = non-zero to list only leaders\n * Exit:\tData from tree written to file\n *=====================================================================\n */\nvoid\nsite_list_shares(FILE *fp, server_info *sinfo, const char *pfx, int flag)\n{\n\tshare_info\t*root;\n\tint\t\tidx;\n\n\tif (fp == NULL || sinfo == NULL || sinfo->share_head == NULL\n\t\t|| (root = sinfo->share_head->root) == NULL) {\n\t\treturn;\n\t}\n\tfor (idx = 0; idx < shr_class_count ; ++idx) {\n\t\tchar *sname;\n\n\t\tsname = shr_class_name_by_idx(idx);\n\t\tlist_share_info(fp, root, pfx, idx, sname, flag);\n\t}\n}\n\n\n\n\n/*\n *=====================================================================\n * site_list_jobs( sinfo, rarray ) - List jobs in queue to file\n * Entry:\tsinfo = server info\n *\t\trarray = array of pointers to jobs, terminated by NULL\n *=====================================================================\n */\nvoid\nsite_list_jobs(server_info *sinfo, resource_resv **rarray)\n{\n\tFILE\t\t*sj;\n\tchar\t\t*fname;\n\tint\t\ti;\n\tshare_info\t*si;\n\tsh_amt\t\t*job_amts;\n\tchar\t\t*sname;\n\tconst char\t*starving;\n\n\tfname = 
SORTED_FILE;\n\tsj = fopen(fname, \"w+\");\n\tif (sj == NULL) {\n\t\tsprintf(log_buffer, \"Cannot open %s: %s\\n\",\n\t\t\tfname, strerror(errno));\n\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t__func__, log_buffer);\n\t\treturn;\n\t}\n\tsite_list_shares(sj, sinfo, \"#A \", 0);\n\tfor (i = 0; ; ++i) {\n\t\tstruct resource_resv\t*rp;\n\t\tstruct job_info\t\t*job;\n\t\tchar \t\t*name, *queue, *user;\n\t\ttime_t\t\tstart;\n\t\tint\t\tjpri;\n\t\tint\t\tncpus;\n\n\t\trp = rarray[i];\n\t\tif (rp == NULL)\n\t\t\tbreak;\n\t\t/*\n\t\t * List only jobs\n\t\t */\n\t\tif (!rp->is_job)\n\t\t\tcontinue;\n\t\tjob = rp->job;\n\t\t/*\n\t\t * that are still in the queue\n\t\t */\n\t\tif (!job->is_queued)\n\t\t\tcontinue;\n\t\tname = rp->name;\n\t\tqueue = job->queue->name;\n\t\tuser = rp->user;\n\t\tsi = job->sh_info;\n\t\tsname = NULL;\n\t\tif (si) {\n\t\t\tswitch (si->pattern_type) {\n\t\t\t\tcase share_info::pattern_type::PATTERN_COMBINED:\n\t\t\t\tcase share_info::pattern_type::PATTERN_SEPARATE:\n\t\t\t\t\tif (si->leader) sname = si->leader->name;\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\tsname = si->name;\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tif (sname == NULL)\n\t\t\tsname = \"<none>\";\n\t\tstarving = job->is_starving ? \"s\" : \"-\";\n\t\tstart = rp->start;\n\t\tjpri = job->NAS_pri;\n\t\tncpus = rp->select ? 
rp->select->total_cpus : -1; /* XXX */\n\t\tjob_amts = job->sh_amts;\n\t\tif (job_amts) {\n\t\t\tint sh_cls;\n\t\t\tncpus = 0;\n\t\t\tfor (sh_cls = 0; sh_cls < shr_class_count; ++sh_cls) {\n\t\t\t\tncpus += job_amts[sh_cls];\n\t\t\t}\n\t\t}\n\t\tif (start == UNSPECIFIED || start == sinfo->server_time)\n\t\t\tstart = 0;\n\t\tfprintf(sj, \"  %s\\t%s\\t%s\\t%s\\t%s\\t%lu\\t%d\\t%d\\n\",\n\t\t\tname, queue, user, sname, starving,\n\t\t\t(unsigned long)start, jpri, ncpus);\n\t}\n\tfclose(sj);\n}\n\n\n/*\n *=====================================================================\n * site_parse_shares(fname) - Read CPU shares file\n * Entry\tfname = path to file\n * Returns\t1 if all okay, or file could not be opened (shares ignored)\n *\t\t0 on errors, messages to log\n * Modifies\tstatic variables declared at start of file\n *=====================================================================\n */\nint\nsite_parse_shares(char *fname)\n{\n\tshare_info\t*cur;\n\tint\t\terrcnt = 0;\t/* parse error counter */\n\tFILE \t\t*fp;\t\t/* shares file */\n\tint\t\ti;\n\tint\t\tlineno = 0;\t/* current line number in file */\n\tshare_info\t*parent;\t/* parent of current node */\n\tshare_info\t*root = NULL;\t/* New tree under construction */\n\tchar\t\t*save_ptr;\t/* posn for strtok_r() */\n\tchar\t\t*sp;\t\t/* temp ptr into buf */\n\tchar\t\t*sp2;\t\t/* ditto */\n\tchar\t\t*sp3;\t\t/* ditto */\n\tint\t\tstate;\n\tsh_amt\t\t*tshares;\t/* temp shares values */\n\tstruct shr_class *tclass;\t/* temp class pointer */\n\tstruct shr_type\t*ttype;\t\t/* temp type pointer */\n\tint\t\tnew_cls_cnt;\t/* number of CPU classes */\n\tint\t\tnew_type_cnt;\t/* number of CPU types */\n\tstruct shr_class *new_shr_clses;\t/* new class list */\n\tstruct shr_class *new_cls_tail;\n\tstruct shr_type\t*new_shr_types;\t/* new type list */\n\tstruct shr_type\t*new_type_tail;\n#define\tLINE_BUF_SIZE\t256\n\tchar\t\tbuf[LINE_BUF_SIZE];\t/* line buffer */\n\tchar\t\tclass_and_type[LINE_BUF_SIZE];\t/* [class:]type */\n\tchar\t\tnew_sel[LINE_BUF_SIZE];\t/* new 
value for type_selector */\n\tchar\t\tpattern[LINE_BUF_SIZE];\t/* share group name/pattern */\n\n\tstate = 0;\n\tnew_shr_clses = new_cls_tail = NULL;\n\tnew_shr_types = new_type_tail = NULL;\n\tnew_cls_cnt = 0;\n\tnew_type_cnt = 0;\n\ttshares = NULL;\n\tif ((fp = fopen(fname, \"r\")) == NULL) {\n\t\ti = errno;\n\t\tsprintf(log_buffer, \"Error opening file %s\", fname);\n\t\tlog_err(i, __func__, log_buffer);\n\t\treturn 1;\t\t/* continue without shares */\n\t}\n\twhile (fgets(buf, LINE_BUF_SIZE, fp) != NULL) {\n\t\t++lineno;\n\t\tcur = NULL;\n\t\tsp = strchr(buf, '\\n');\n\t\tif (sp == NULL) {\n\t\t\tsprintf(log_buffer, \"Line %d excessively long.  Giving up.\",\n\t\t\t\tlineno);\n\t\t\tgoto err_out_l;\n\t\t}\n\t\t/*\n\t\t * Terminate lines at comment\n\t\t */\n\t\tsp = strchr(buf, '#');\n\t\tif (sp != NULL) {\n\t\t\tsp[0] = '\\n';\n\t\t\tsp[1] = '\\0';\n\t\t}\n\t\t/*\n\t\t * First non-comment line is \"classes\" line.\n\t\t */\n\t\tsp = strtok_r(buf, \" \\t\\n\", &save_ptr);\n\t\tif (sp == NULL)\n\t\t\tcontinue;\t/* Empty or comment */\n\t\tif (strcasecmp(sp, \"classes\") == 0) {\n\t\t\tif (state != 0) {\n\t\t\t\tsprintf(log_buffer, \"\\\"classes\\\" must be first line in shares file\");\n\t\t\t\tgoto err_out_l;\n\t\t\t}\n\t\t\tsp = strtok_r(NULL, \" \\t\\n\", &save_ptr);\n\t\t\tif (sp == NULL) {\n\t\t\t\tsprintf(log_buffer, \"Empty \\\"classes\\\" line\");\n\t\t\t\tgoto err_out_l;\n\t\t\t}\n\t\t\tstrcpy(new_sel, sp);\n\t\t\t/*\n\t\t\t * Set up default class and type entries\n\t\t\t */\n\t\t\tstrcpy(log_buffer, \"Malloc failure\"); /* just in case */\n\t\t\ttclass = static_cast<shr_class *>(malloc(sizeof(*tclass)));\n\t\t\tif (tclass == NULL) {\n\t\t\t\tgoto err_out_l;\n\t\t\t}\n\t\t\ttclass->next = NULL;\n\t\t\ttclass->sh_cls = 0;\n\t\t\ttclass->name[0] = '\\0';\n\t\t\tnew_shr_clses = tclass;\n\t\t\tnew_cls_tail = tclass;\n\n\t\t\tttype = static_cast<shr_type *>(malloc(sizeof(*ttype)));\n\t\t\tif (ttype == NULL) {\n\t\t\t\tgoto 
err_out_l;\n\t\t\t}\n\t\t\tttype->next = NULL;\n\t\t\tttype->sh_tidx = 0;\n\t\t\tttype->sh_cls = 0;\n\t\t\tttype->cpus_per_node = 1;\n\t\t\tttype->name[0] = '\\0';\n\t\t\tnew_shr_types = ttype;\n\t\t\tnew_type_tail = ttype;\n\n\t\t\tnew_cls_cnt = 1;\n\t\t\tnew_type_cnt = 1;\n\t\t\t/*\n\t\t\t * Now, collect list of selector values\n\t\t\t */\n\t\t\twhile ((sp = strtok_r(NULL, \" \\t\\n\", &save_ptr)) != NULL) {\n\t\t\t\tsp = strcpy(class_and_type, sp);\n\t\t\t\tsp2 = strchr(sp, ':');\n\t\t\t\t/* sp gives class, sp2 gives type */\n\t\t\t\tif (sp2 == NULL) {\n\t\t\t\t\t/*\n\t\t\t\t\t * No class given, use previous tclass,\n\t\t\t\t\t * else default.\n\t\t\t\t\t */\n\t\t\t\t\tif (tclass == NULL) {\n\t\t\t\t\t\ttclass = new_shr_clses;\n\t\t\t\t\t}\n\t\t\t\t\tsp2 = sp;\n\t\t\t\t} else if (sp2 == sp) {\n\t\t\t\t\t/*\n\t\t\t\t\t * Empty class, use default\n\t\t\t\t\t */\n\t\t\t\t\ttclass = new_shr_clses;\n\t\t\t\t\tsp2++;\n\t\t\t\t} else {\n\t\t\t\t\t*sp2++ = '\\0';\n\t\t\t\t\tfor (tclass = new_shr_clses; tclass;\n\t\t\t\t\t\ttclass = tclass->next) {\n\t\t\t\t\t\tif (strcmp(sp, tclass->name) == 0)\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\tif (tclass == NULL) {\n\t\t\t\t\t\t/*\n\t\t\t\t\t\t * New class.  
Add to list.\n\t\t\t\t\t\t */\n\t\t\t\t\t\ttclass = static_cast<shr_class *>(malloc(sizeof(*tclass)\n\t\t\t\t\t\t\t+ strlen(sp)));\n\t\t\t\t\t\tif (tclass == NULL)\n\t\t\t\t\t\t\tgoto err_out_l;\n\t\t\t\t\t\ttclass->next = NULL;\n\t\t\t\t\t\ttclass->sh_cls = new_cls_cnt++;\n\t\t\t\t\t\tstrcpy(tclass->name, sp);\n\t\t\t\t\t\tnew_cls_tail->next = tclass;\n\t\t\t\t\t\tnew_cls_tail = tclass;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tsp3 = strchr(sp2, '@');\n\t\t\t\t/* sp3 gives cpus_per_node */\n\t\t\t\tif (sp3) {\n\t\t\t\t\t*sp3++ = '\\0';\n\t\t\t\t}\n\t\t\t\t/*\n\t\t\t\t * Type names must be unique.\n\t\t\t\t */\n\t\t\t\tfor (ttype = new_shr_types; ttype; ttype = ttype->next) {\n\t\t\t\t\tif (strcmp(sp2, ttype->name) == 0) {\n\t\t\t\t\t\tif (*sp2 == '\\0')\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\tsprintf(log_buffer, \"duplicate type: %s\", sp2);\n\t\t\t\t\t\tgoto err_out_l;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (ttype == NULL) {\n\t\t\t\t\tttype = static_cast<shr_type *>(malloc(sizeof(*ttype) + strlen(sp2)));\n\t\t\t\t\tif (ttype == NULL)\n\t\t\t\t\t\tgoto err_out_l;\n\t\t\t\t\tttype->sh_tidx = new_type_cnt++;\n\t\t\t\t\tttype->sh_cls = tclass->sh_cls;\n\t\t\t\t\tttype->cpus_per_node = 1;\n\t\t\t\t\tif (sp3) {\n\t\t\t\t\t\ti = atoi(sp3);\n\t\t\t\t\t\tif (i > 0)\n\t\t\t\t\t\t\tttype->cpus_per_node = i;\n\t\t\t\t\t}\n\t\t\t\t\tttype->next = NULL;\n\t\t\t\t\tstrcpy(ttype->name, sp2);\n\t\t\t\t\tnew_type_tail->next = ttype;\n\t\t\t\t\tnew_type_tail = ttype;\n\t\t\t\t}\n\t\t\t}\n\t\t\t++state;\n\t\t\ttshares = static_cast<sh_amt *>(malloc(new_cls_cnt * sizeof(*tshares)));\n\t\t\tif (tshares == NULL) {\n\t\t\t\tgoto err_out_l;\n\t\t\t}\n\t\t\tcontinue;\n\t\t}\n\t\t/*\n\t\t * Remaining lines are tree lines, of form\n\t\t * pattern\tparent\t[class:share ...] 
[default_share]\n\t\t */\n\t\tif (state == 0) {\n\t\t\tsprintf(log_buffer, \"\\\"classes\\\" must appear first in shares file\");\n\t\t\tgoto err_out_l;\n\t\t}\n\t\tif (root == NULL) {\n\t\t\t/*\n\t\t\t * Now that we have count of classes, can allocate\n\t\t\t * root node.\n\t\t\t */\n\t\t\troot = new_share_info(\"root\", new_cls_cnt);\n\t\t\tif (root == NULL) {\n\t\t\t\tstrcpy(log_buffer, \"Cannot allocate ROOT node\");\n\t\t\t\tgoto err_out_l;\n\t\t\t}\n\t\t}\n\t\tstrcpy(pattern, sp);\n\t\tsp = strtok_r(NULL, \" \\t\\n\", &save_ptr);\n\t\tif (sp == NULL) {\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"Unrecognized shares line: %d: begins %s\",\n\t\t\t\tlineno, pattern);\n\t\t\tgoto err_parse;\n\t\t}\n\t\tif (find_share_group(root, pattern) != NULL) {\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"Duplicated group at line %d: %s\",\n\t\t\t\tlineno, pattern);\n\t\t\tgoto err_parse;\n\t\t}\n\t\tparent = find_share_group(root, sp);\t/* check valid parent */\n\t\tif (parent == NULL) {\n\t\t\tsprintf(log_buffer, \"Unknown parent (%s) at line %d\",\n\t\t\t\tsp, lineno);\n\t\t\tgoto err_parse;\n\t\t}\n\t\tfor (i = 0; i < new_cls_cnt; ++i) {\n\t\t\ttshares[i] = -1;\n\t\t}\n\t\t/*\n\t\t * Extract share pairs from rest of line.\n\t\t * We could skip some of the following if we assumed\n\t\t * that save_ptr pointed into the string at the next place\n\t\t * to start scanning, but its value is supposedly opaque.\n\t\t * Basically, we squash out spaces around colons in the\n\t\t * rest of the line to make it easier to strtok.\n\t\t */\n\t\tsp = strtok_r(NULL, \"\\n\", &save_ptr);\n\t\tif (sp != NULL) {\n\t\t\tint st = 0;\n\t\t\tint c;\n\t\t\tchar *sp4;\n\t\t\tfor (sp4 = buf; (c = *sp++) != '\\0';) {\n\t\t\t\tswitch (st) {\n\t\t\t\t\tcase 0:\t/* leading space */\n\t\t\t\t\t\tif (!isspace(c)) { st = 1; }\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tcase 1:\t/* token, possibly with trailing : */\n\t\t\t\t\t\tif (isspace(c)) { st = 2; continue; }\n\t\t\t\t\t\tif (c == ':') { st = 3; break; 
}\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tcase 2:\t/* skip spaces before : */\n\t\t\t\t\t\tif (isspace(c))\tcontinue;\n\t\t\t\t\t\tif (c == ':') { st = 3; break; }\n\t\t\t\t\t\t/* Oops, no colon, restore delimiter */\n\t\t\t\t\t\t*sp4++ = ' ';\n\t\t\t\t\t\tst = 0;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tcase 3:\t/* skip spaces after : */\n\t\t\t\t\t\tif (isspace(c)) continue;\n\t\t\t\t\t\tst = 4;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tcase 4:\t/* token after a colon */\n\t\t\t\t\t\tif (isspace(c)) { st = 0; }\n\t\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\t*sp4++ = c;\n\t\t\t}\n\t\t\t*sp4 = '\\0';\n\t\t\tsp = strtok_r(buf, \" \\t\\n\", &save_ptr);\n\t\t}\n\t\t/*\n\t\t * Whew!  Now ready to extract shares\n\t\t */\n\t\tfor (; sp ; sp = strtok_r(NULL, \" \\t\\n\", &save_ptr)) {\n\t\t\tchar\t*name;\n\t\t\tchar\t*value;\n\t\t\tchar\t*sp5;\n\t\t\tlong\tl;\n\n\t\t\tname = sp;\n\t\t\tif ((value = strchr(sp, ':')) != NULL) {\n\t\t\t\t*value++ = '\\0';\n\t\t\t} else {\n\t\t\t\tvalue = name;\n\t\t\t}\n\t\t\t/*\n\t\t\t * Extract name and value\n\t\t\t */\n\t\t\tif (value == name) {\n\t\t\t\ti = 0;\n\t\t\t\tname = \"\";\n\t\t\t} else {\n\t\t\t\tif ((i = find_share_class(new_shr_clses, name)) == 0) {\n\t\t\t\t\tsprintf(log_buffer, \"Unknown share class (%s) on line %d\",\n\t\t\t\t\t\tname, lineno);\n\t\t\t\t\tgoto err_parse;\n\t\t\t\t}\n\t\t\t}\n\t\t\tl = strtol(value, &sp5, 10);\n\t\t\tif (*sp5 != '\\0' || l < 0) {\n\t\t\t\tsprintf(log_buffer, \"Invalid share (%s) on line %d\",\n\t\t\t\t\tvalue, lineno);\n\t\t\t\tgoto err_parse;\n\t\t\t}\n\t\t\tif (tshares[i] != -1) {\n\t\t\t\tsprintf(log_buffer, \"Repeated type (%s) on line %d\",\n\t\t\t\t\tname, lineno);\n\t\t\t\tgoto err_parse;\n\t\t\t}\n\t\t\ttshares[i] = l;\n\t\t}\n\t\t/*\n\t\t * We have collected everything we need to create new tree\n\t\t * node.\n\t\t */\n\t\tcur = new_share_info(pattern, new_cls_cnt);\n\t\tif (cur == NULL)\n\t\t\tcontinue;\n\t\tfor (i = 0; i < new_cls_cnt; ++i) {\n\t\t\tsh_amt t;\n\t\t\tt = tshares[i];\n\t\t\tif (t < 0)\n\t\t\t\tt = 
0;\n\t\t\tcur->share_gross[i] = t;\n\t\t}\n\t\tcur->lineno = lineno;\n\t\t/*\n\t\t * If the name is a pattern, compile it, after bracketing\n\t\t * between ^ and $.\n\t\t */\n\t\tif (strpbrk(pattern, \"|*.\\\\(){}[]+\") != NULL) {\n\t\t\tint result;\n\t\t\tenum share_info::pattern_type ptype = share_info::pattern_type::PATTERN_COMBINED;\n\t\t\tchar *t = static_cast<char *>(malloc(strlen(pattern) + 3));\n\t\t\tchar *t2 = pattern;\n\t\t\tif (t != NULL) {\n\t\t\t\tif (*t2 == '+') {\n\t\t\t\t\tptype = share_info::pattern_type::PATTERN_SEPARATE;\n\t\t\t\t\t++t2;\n\t\t\t\t}\n\t\t\t\tt[0] = '^';\n\t\t\t\tstrcpy(t+1, t2);\n\t\t\t\tstrcat(t+1, \"$\");\n\t\t\t\tresult = regcomp(&cur->pattern, t,\n\t\t\t\t\tREG_ICASE|REG_NOSUB);\n\t\t\t\tif (result == 0) {\n\t\t\t\t\tcur->pattern_type = ptype;\n\t\t\t\t} else {\n\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\"Regcomp error on line %d for pattern %s\", lineno, t);\n\t\t\t\t\tfree(t);\n\t\t\t\t\tgoto err_parse;\n\t\t\t\t}\n\t\t\t\tfree(t);\n\t\t\t}\n\t\t}\n\t\t/*\n\t\t * Link in.  
We use tptr to hold youngest child.\n\t\t */\n\t\tcur->parent = parent;\n\t\tif (parent->child == NULL) {\n\t\t\tparent->child = parent->tptr = cur;\n\t\t} else {\n\t\t\tparent->tptr->sibling = cur;\n\t\t\tparent->tptr = cur;\n\t\t}\n\t\tcontinue;\t\t/* Done with line */\nerr_parse:\n\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_FILE, LOG_NOTICE,\n\t\t\t__func__, log_buffer);\n\t\tif (cur) {\n\t\t\tfree(cur);\n\t\t}\n\t\tif (++errcnt > 10) {\n\t\t\tstrcpy(log_buffer, \"Giving up on shares file.\");\n\t\t\tgoto err_out_l;\n\t\t}\n\t}\n\tfclose(fp);\n\tfp = NULL;\n\tif (errcnt > 0) {\n\t\tstrcpy(log_buffer, \"Errors encountered in shares file.\");\n\t\tgoto err_out_l;\n\t}\n\tif (root == NULL) {\n\t\tstrcpy(log_buffer, \"No share groups defined.\");\n\t\tgoto err_out_l;\n\t}\n\t/*\n\t * Everything parsed okay, reconcile, then update global values\n\t */\n\tif (!reconcile_shares(root, new_cls_cnt)) {\n\t\tstrcpy(log_buffer, \"Inconsistencies detected\");\n\t\tgoto err_out_l;\n\t}\n\t{\tstruct share_head *newsh;\n\t\tnewsh = new_share_head(new_cls_cnt);\n\t\tif (newsh == NULL) {\n\t\t\tstrcpy(log_buffer, \"Cannot allocate new share header\");\n\t\t\tgoto err_out_l;\n\t\t}\n\t\tif (cur_shr_head) {\n\t\t\tfree_share_head(cur_shr_head, 0);\n\t\t}\n\t\tcur_shr_head = newsh;\n\t}\n\tcur_shr_head->root = root;\n\t{\tstruct shr_class *nextp;\n\t\tfor (tclass = shr_classes; tclass; tclass = nextp) {\n\t\t\tnextp = tclass->next;\n\t\t\tfree(tclass);\n\t\t}\n\t}\n\t{\tstruct shr_type\t*nextp;\n\t\tfor (ttype = shr_types; ttype; ttype = nextp) {\n\t\t\tnextp = ttype->next;\n\t\t\tfree(ttype);\n\t\t}\n\t}\n\tshr_classes = new_shr_clses;\n\tshr_types = new_shr_types;\n\tif (shr_selector) free(shr_selector);\n\tshr_selector = strdup(new_sel);\n\tshr_class_count = new_cls_cnt;\n\tshr_type_count = new_type_cnt;\n\treturn 1;\n\nerr_out_l:\n\tlog_err(-1, __func__, log_buffer);\n\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_FILE, LOG_NOTICE, __func__,\n\t\t\"Warning: CPU shares file parse 
error: file ignored\");\n\tfor (ttype = new_shr_types; ttype; ttype = new_shr_types) {\n\t\tnew_shr_types = ttype->next;\n\t\tfree(ttype);\n\t}\n\tif (tshares)\n\t\tfree(tshares);\n\tfree_share_tree(root);\n\tif (fp)\n\t\tfclose(fp);\n\treturn 0;\n}\n\n\n\n/*\n *=====================================================================\n * site_find_runnable_res( resv_arr ) - Site specific code for picking\n *\t\t\tnext resv/job to try to run\n * Entry:\tresv_arr = array of ptrs to resource_resv,\n *\t\t\tsorted per job sort key list.\n *\t\tShould be called at beginning of job loop with NULL\n *\t\tto reset state.\n * Returns:\tptr to selected resource_resv,\n *\t\t\tNULL if no more choices.\n *=====================================================================\n */\nresource_resv *\nsite_find_runnable_res(resource_resv** resresv_arr)\n{\n\tstatic\tenum { S_INIT, S_RESV, S_HWY149, S_DEDRES, S_HWY101, S_TOPJOB, S_NORMAL } state;\n\tresource_resv *\tresv;\n\tserver_info *\tsinfo;\n\tshare_head *\tshp;\n\tshare_info *\tsi;\n\tint\t\ti;\n\n\tif (resresv_arr == NULL) {\n\t\tstate = S_INIT;\n\t\treturn NULL;\n\t}\n\t/*\n\t * Find any job in list and use it to get current server info,\n\t * which, in turn, leads to current share info\n\t */\n\tfor (i = 0; (resv = resresv_arr[i]) != NULL; ++i) {\n\t\tif (resv->is_job && resv->job != NULL)\n\t\t\tbreak;\n\t}\n\tif (resv == NULL)\n\t\treturn NULL;\n\tsinfo = resv->job->queue->server;\n\tshp = sinfo->share_head;\n\tsi = NULL;\n\n\tif (state == S_INIT) {\n\t\tif (shp) {\n\t\t\tclear_topjob_counts(shp->root);\n\t\t}\n\t\tstate = S_RESV;\n\t}\n\tif (state == S_RESV) {\n\t\tfor (i = 0; (resv = resresv_arr[i]) != NULL; i++) {\n\t\t\tif (!resv->is_job && !resv->can_not_run &&\n\t\t\t\t\tin_runnable_state(resv)) {\n\t\t\t\treturn resv;\n\t\t\t}\n\t\t}\n\t\tstate = S_HWY149;\n\t}\n\tif (state == S_HWY149) {\n#ifdef\tNAS_HWY149\n\t\t/*\n\t\t * Go through operator boosted jobs (highest priority)\n\t\t */\n\t\tif ((resv = 
pick_next_job(sinfo->policy, resresv_arr,\n\t\t\tjob_filter_hwy149, NULL)) != NULL)\n\t\t\treturn resv;\n#endif\n\t\tstate = S_DEDRES;\n\t}\n\t/*\n\t * Stop looking now if interested only in resuming jobs.\n\t * localmod XXXY\n\t */\n\tif (conf.resume_only)\n\t\treturn NULL;\n\tif (state == S_DEDRES) {\n\t\t/*\n\t\t * Go through jobs in queues that use per_queues_topjobs, these\n\t\t * queues should have nodes assigned to them and therefore\n\t\t * these jobs will not take nodes away from later 101/top jobs\n\t\t * in below code\n\t\t */\n\t\tif ((resv = pick_next_job(sinfo->policy, resresv_arr,\n\t\t\tjob_filter_dedres, NULL)) != NULL)\n\t\t\treturn resv;\n\n\t\tstate = S_HWY101;\n\t}\n\tif (state == S_HWY101) {\n#ifdef\tNAS_HWY101\n\t\t/*\n\t\t * Go through operator boosted jobs\n\t\t */\n\t\tif ((resv = pick_next_job(sinfo->policy, resresv_arr,\n\t\t\tjob_filter_hwy101, NULL)) != NULL)\n\t\t\treturn resv;\n#endif\n\t\tstate = S_TOPJOB;\n\t}\n\tif (state == S_TOPJOB) {\n\t\t/*\n\t\t * Find most-favored group not at topjob limit\n\t\t */\n\t\tif (shp != NULL)\n\t\t\tsi = find_most_favored_share(shp->root, conf.per_share_topjobs);\n\t\tif (si == NULL) {\n\t\t\tstate = S_NORMAL;\n\t\t}\n\t}\n\tif ((resv = pick_next_job(sinfo->policy, resresv_arr,\n\t\tjob_filter_normal, si)) != NULL)\n\t\treturn resv;\n\n\t/*\n\t * Searched whole list without match.  
Try again with different share\n\t * group.\n\t */\n\tif (si != NULL) {\n\t\tsi->none_left = 1;\n\t\tresv = site_find_runnable_res(resresv_arr);\n\t}\n\treturn resv;\n}\n\n\n\n/*\n *=====================================================================\n * site_resort_jobs(njob) - Possibly resort queues after starting job\n * Entry:\tnjob = job that was just started\n *=====================================================================\n */\nvoid\nsite_resort_jobs(resource_resv *njob)\n{\n\tserver_info\t*sinfo;\n\tqueue_info\t*queue;\n\tjob_info\t*job;\n\tint\t\ti;\n\n\tif (njob == NULL || !njob->is_job || (job = njob->job) == NULL\n\t\t|| (queue = job->queue) == NULL || (sinfo = njob->server) == NULL)\n\t\t\treturn;\n\t/*\n\t * Update values that changed due to job starting.\n\t */\n\tfor (i = 0; i < sinfo->sc.total; ++i) {\n\t\tresource_resv *resv;\n\n\t\tresv = sinfo->jobs[i];\n\t\tif (!resv->is_job || !in_runnable_state(resv))\n\t\t\tcontinue;\n\t\t(void) job_starving(sinfo->policy, resv);\n\t}\n\t/*\n\t * Now, redo sorting.\n\t */\n\tqsort(sinfo->jobs, sinfo->sc.total, sizeof(resource_resv *), cmp_sort);\n\tfor (i = 0; sinfo->queues[i] != NULL; ++i) {\n\t\tqsort(sinfo->queues[i]->jobs, sinfo->queues[i]->sc.total,\n\t\t\tsizeof(resource_resv *), cmp_sort);\n\t}\n}\n\n\n\n/*\n *=====================================================================\n * site_restore_users() - Restore user values after adding job to\n *\t\t\tcalendar\n * Exit:\tUser values reset\n *=====================================================================\n */\nvoid\nsite_restore_users(void)\n{\n\tsite_user_info\t*user;\n\n\tfor (user = users; user ; user = user->next) {\n\t\tuser->current_use = user->saved_cu;\n\t\tuser->current_use_pqt = user->saved_cup;\n\t}\n}\n\n\n\n/*\n *=====================================================================\n * site_save_users() - Save users values during clone operation.\n * Exit:\tCurrent important values stored away\n 
*=====================================================================\n */\nvoid\nsite_save_users(void)\n{\n\tsite_user_info\t*user;\n\n\tfor (user = users; user ; user = user->next) {\n\t\tuser->saved_cu = user->current_use;\n\t\tuser->saved_cup = user->current_use_pqt;\n\t}\n}\n\n\n\n\n/*\n *=====================================================================\n * site_set_job_share(resresv) - Set counts of share resources\n *\t\trequested by job.\n * Entry:\tresresv = resource reservation for job\n *\t\tThe select spec is assumed to be parsed into chunks.\n * Exit:\tjob's sh_amts array set\n *=====================================================================\n */\nvoid\nsite_set_job_share(resource_resv *resresv)\n{\n\tchunk *\t\t\tchunk;\n\tint\t\t\ti;\n\tjob_info *\t\tjob;\n\tresource_req *\t\tpreq;\n\tselspec *\t\tselect;\n\tsh_amt *\t\tsh_amts;\n\tstruct shr_type *\tstp;\n\n\tif (resresv == NULL || (select = resresv->select) == NULL)\n\t\treturn;\n\tif (!resresv->is_job || (job = resresv->job) == NULL)\n\t\treturn;\n\tif (shr_class_count == 0 || shr_selector == NULL)\n\t\treturn;\n\tif ((sh_amts = job->sh_amts) == NULL) {\n\t\tsh_amts = static_cast<sh_amt *>(malloc(shr_class_count * sizeof(*sh_amts)));\n\t\tif (sh_amts == NULL)\n\t\t\treturn;\n\t\tjob->sh_amts = sh_amts;\n\t}\n\tmemset(sh_amts, 0, shr_class_count * sizeof(*sh_amts));\n\tfor (i = 0; (chunk = select->chunks[i]) != NULL; ++i) {\n\t\tint ncpus;\n\t\tint sh_cls;\n\n\t\tncpus = 0;\n\t\tstp = NULL;\n\n\t\tfor (preq = chunk->req; preq != NULL; preq = preq->next) {\n\t\t\tif (strcmp(preq->name, shr_selector) == 0) {\n\t\t\t\tstp = shr_type_info_by_name(preq->res_str);\n\t\t\t} else if (strcmp(preq->name, \"ncpus\") == 0) {\n\t\t\t\tncpus = preq->amount;\n#if\tNAS_CPU_MULT > 1\n\t\t\t\tif (ncpus % NAS_CPU_MULT) {\n\t\t\t\t\tncpus += NAS_CPU_MULT - (ncpus % NAS_CPU_MULT);\n\t\t\t\t}\n#endif\n\t\t\t}\n\t\t}\n\t\tif (stp == NULL) {\n\t\t\tstp = shr_type_info_by_idx(0);\t/* default 
*/\n\t\t}\n\t\tsh_cls = stp->sh_cls;\n\t\t/*\n\t\t * The next line assumes vnodes are allocated exclusively\n\t\t */\n\t\tif (stp->cpus_per_node > ncpus) {\n\t\t\tncpus = stp->cpus_per_node;\n\t\t}\n\t\t/* XXX HACK HACK until SBUrate available, localmod 126 */\n\t\tncpus = stp->cpus_per_node;\n\t\t/* end HACK localmod 126 */\n\t\tsh_amts[sh_cls] += chunk->num_chunks * ncpus;\n\t}\n}\n\n\n\n/*\n *=====================================================================\n * site_set_NAS_pri(job, max_starve, starve_num) - calculate the\n *\t\tNAS priority for a job\n * Entry:\tjob = job to have its NAS_pri field set\n *\t\tmax_starve = starve time for queue job is in\n *\t\tstarve_num = how long job has starved\n * Exit:\tjob->NAS_pri set\n *=====================================================================\n */\n#if\tNAS_HWY101\n#define\tMAX_NAS_PRI\t(NAS_HWY101 - 1)\n#else\n#define\tMAX_NAS_PRI\t100\n#endif\n#define\tIDLE_BOOST\t10\t/* Boost for users with nothing else running */\nvoid\nsite_set_NAS_pri(job_info *job, time_t max_starve, long starve_num)\n{\n\tqueue_info* \tqueue;\n\tsite_user_info*\tsui;\n\tlong\t\tstarve_adjust;\n\n\tif (job == NULL || (queue = job->queue) == NULL)\n\t\treturn;\n\tif (job->priority > 0) {\n\t\tjob->NAS_pri = job->priority;\n\t\treturn;\n\t}\n\t/* localmod 116\n\t * Queued jobs get their job priority boosted by 2 for each\n\t * max_starve interval they have waited, up to a maximum of 20.\n\t */\n\tstarve_adjust = 0;\n\tif (max_starve > 0 && max_starve < Q_SITE_STARVE_NEVER) {\n\t\tstarve_adjust = 2 * starve_num / max_starve;\n\t\tif (starve_adjust < 0) starve_adjust = 0;\n\t\tif (starve_adjust > 20) starve_adjust = 20;\n\t}\n\tjob->NAS_pri = job->queue->priority + starve_adjust;\n\t/*\n\t * Jobs get a boost of 10 if there are no other jobs currently\n\t * running for the user.\n\t */\n\tsui = job->u_info;\n\tif (sui != NULL) {\n\t\tsch_resource_t t;\n\t\tt = queue->is_topjob_set_aside ? 
sui->current_use_pqt :\n\t\t\tsui->current_use;\n\t\tif (t == 0 && job->NAS_pri < MAX_NAS_PRI) {\n\t\t\tint pri = job->NAS_pri + IDLE_BOOST;\n\t\t\tif (pri > MAX_NAS_PRI) pri = MAX_NAS_PRI;\n\t\t\tjob->NAS_pri = pri;\n\t\t}\n\t}\n}\n\n\n/*\n *=====================================================================\n * site_set_node_share(ninfo, res) - Set type of share node supplies\n * Entry:\tninfo = pointer to node info\n *\t\tres = pointer to resource available on node\n * Exit:\tninfo->sh_cls, sh_type set if appropriate\n *=====================================================================\n */\nvoid\nsite_set_node_share(node_info *ninfo, schd_resource *res)\n{\n\tint\t\ti;\n\tstruct shr_type\t*stp = NULL;\n\n\tif (ninfo == NULL || res == NULL || shr_selector == NULL)\n\t\treturn;\n\tif (strcmp(res->name, shr_selector) != 0)\n\t\treturn;\t\t\t/* not our resource */\n\tninfo->sh_cls = 0;\n\tif (res->str_avail == NULL)\n\t\treturn;\n\tfor (i = 0; res->str_avail[i]; ++i) {\n\t\tif ((stp = shr_type_info_by_name(res->str_avail[i]))!=NULL) {\n\t\t\tninfo->sh_cls = stp->sh_cls;\n\t\t\tninfo->sh_type = stp->sh_tidx;\n\t\t\tbreak;\n\t\t}\n\t}\n}\n\n\n\n/*\n *=====================================================================\n * site_set_share_head(sinfo) - Set share head into server info\n * Entry:\tsinfo = ptr to server info\n * Returns:\t1 on success, 0 on error\n * Assumes:\tcur_shr_head set\n *=====================================================================\n */\nint\nsite_set_share_head(server_info *sinfo)\n{\n\tif (sinfo == NULL)\n\t\treturn 0;\n\tif (cur_shr_head == NULL)\n\t\treturn 0;\n\tsinfo->share_head = cur_shr_head;\n\treturn 1;\n}\n\n\n\n/*\n *=====================================================================\n * site_set_share_type(sinfo, resresv) - Set share type for job\n *=====================================================================\n */\nvoid\nsite_set_share_type(server_info * sinfo, resource_resv * resresv)\n{\n\tjob_info 
*\tji;\n\tqueue_info *\tqi;\n\ttime_t\t\tmax_borrow;\n\ttime_t\t\tremaining;\n\n\tif (sinfo == NULL || resresv == NULL)\n\t\treturn;\n\t/*\n\t * Assume shares not relevant\n\t */\n\tresresv->share_type = J_TYPE_ignore;\n\tif (conf.max_borrow == UNSPECIFIED) {\n\t\treturn;\n\t}\n\tji = resresv->job;\n\tif (ji == NULL || !resresv->is_job)\n\t\treturn;\n\tqi = ji->queue;\n\tif (qi == NULL)\n\t\treturn;\n\tmax_borrow = qi->max_borrow;\n\tif (max_borrow == UNSPECIFIED)\n\t\tmax_borrow = conf.max_borrow;\n\tif (max_borrow == 0) {\n\t\treturn;\t\t\t/* max borrow of 0 means exempt */\n\t}\n\tif (ji->is_running) {\n\t\tremaining = resresv->end - sinfo->server_time;\n\t} else {\n\t\tremaining = resresv->duration;\n\t}\n\tif (remaining > max_borrow) {\n\t\tresresv->share_type = J_TYPE_limited;\n\t} else {\n\t\tresresv->share_type = J_TYPE_borrow;\n\t}\n}\n\n\n\n/*\n *=====================================================================\n * site_should_backfill_with_job(policy, sinfo, resresv, ntj, nqtj, err)\n * Entry:\tpolicy = pointer to current policy\n *\t\tsinfo = server state where job resides\n *\t\tresresv = the job to check\n *\t\tntj = number of topjobs so far\n *\t\tnqtj = number of queue topjobs so far\n *\t\terr = error structure from trying to run job immediately\n * Returns:\t0 if should not calendar\n *\t\t1 calendar based on backfill_depth\n *\t\t2 calendar based on per_queue_topjobs\n *\t\t3 calendar based on per_share_topjobs\n *\t\t4 calendar based on share usage ratio\n *=====================================================================\n */\nint site_should_backfill_with_job(status *policy, server_info *sinfo, resource_resv *resresv, int ntj, int nqtj, schd_error *err)\n{\n\tint\t\trc;\n\tshare_info\t*si;\n\tstruct job_info\t*job;\n\n\tif (policy == NULL || sinfo == NULL || resresv == NULL || err == NULL)\n\t\treturn 0;\n\tif (!resresv->is_job || (job = resresv->job) == NULL)\n\t\treturn 0;\n\t/*\n\t * Do normal checks and reject if they reject.\n\t 
*/\n\trc = should_backfill_with_job(policy, sinfo, resresv, ntj);\n\tif (rc == 0)\n\t\treturn rc;\n\t/*\n\t * Start of site-specific calendaring code\n\t */\n#ifdef NAS_HWY149\n\t/*\n\t * Don't drain for node shuffle jobs or other specials.\n\t */\n\tif (job->NAS_pri == NAS_HWY149)\n\t\treturn 0;\n#endif\n\t/*\n\t * Jobs blocked by other jobs from the same user are not eligible\n\t * for starving/backfill help.\n\t */\n\tswitch (err->error_code) {\n\t\tcase SERVER_USER_LIMIT_REACHED:\n\t\tcase QUEUE_USER_LIMIT_REACHED:\n\t\tcase SERVER_USER_RES_LIMIT_REACHED:\n\t\tcase QUEUE_USER_RES_LIMIT_REACHED:\n\t\t\treturn 0;\n\t\t\t/*\n\t\t\t * No point in backfilling for jobs blocked by dedicated\n\t\t\t * time.  All resources will become available at\n\t\t\t * the end of the dedicated time.\n\t\t\t */\n\t\tcase DED_TIME:\n\t\tcase CROSS_DED_TIME_BOUNDRY:\n\t\t\treturn 0;\n\t\t\t/*\n\t\t\t * If job exceeds total mission allocation,\n\t\t\t * it can never run.\n\t\t\t */\n\t\tcase GROUP_CPU_INSUFFICIENT:\n\t\t\treturn 0;\n\t\tdefault:\n\t\t\t;\n\t}\n\t/* Check if in queues with special topjob limit */\n\t/* localmod 038 */\n\tif (site_is_queue_topjob_set_aside(resresv)\n\t\t\t&& nqtj < conf.per_queues_topjobs)\n\t\treturn 2;\n\t/* Check if per-share count exhausted. */\n\tsi = job->sh_info;\n\tif (si) si = si->leader;\n\tif (si && si->topjob_count < conf.per_share_topjobs)\n\t\treturn 3;\t\t/* Still within share guarantee */\n\t/* localmod 154 */\n\t/* Check if share using less than allocation */\n\tif (si && si->ratio_max < 1.0 && si->tj_cpu_cost < TJ_COST_MAX /* XXX */)\n\t\treturn 4;\n\t/* Back to non-NAS tests.  Have we calendared backfill_depth jobs? */\n\tif (ntj >= policy->backfill_depth)\n\t\treturn 0;\n\treturn 1;\n}\n\n\n/*\n *=====================================================================\n * site_tidy_server(sinfo) - Tweak data collected from server\n * Entry:\tsinfo = Server info.  
The following are some of the\n *\t\t\tfields that can be used:\n *\n *\t\t\tnodes = array of nodes\n *\t\t\tqueues = array of queues (sorted)\n *\t\t\tresvs = array of reservations\n *\t\t\tjobs = array of jobs (partially sorted or not,\n *\t\t\t\tdepending on by_queue or round_robin)\n *\t\t\tall_resresv = array of all jobs and reservations\n *\t\t\t\t(sorted on event time)\n *\n *\t\t\tNote: the following are *not* set: running_resvs,\n *\t\t\trunning_jobs, exiting_jobs, starving_jobs,\n *\t\t\tuser_counts, group_counts.\n * Return:\t0 on error, 1 on success\n *=====================================================================\n */\nint\nsite_tidy_server(server_info *sinfo)\n{\n\tint\t\trc;\n\tint\t\ti;\n\tresource_resv\t*resv;\n\n\tif (sinfo->share_head == NULL)\n\t\tsinfo->share_head = cur_shr_head;\n\tsite_init_alloc(sinfo);\n\trc = init_users(sinfo);\n\tif (rc != 0)\n\t\treturn 0;\n\t/*\n\t * Adjust queued job priorities now that we have user info.\n\t */\n\tfor (i = 0; i < sinfo->sc.total; ++i) {\n\t\tresv = sinfo->jobs[i];\n\t\tif (!resv->is_job || !in_runnable_state(resv))\n\t\t\tcontinue;\n\t\t(void) job_starving(sinfo->policy, resv);\n\t}\n\treturn 1;\n}\n\n\n\n/*\n *=====================================================================\n * site_update_on_end(sinfo, qinfo, res) - Do site specific updating\n *\t\twhen job ends.\n * Entry:\tsinfo = server info\n *\t\tqinfo = info for queue job running in\n *\t\tres = resv/job info\n * Exit:\tLocal data updated\n *=====================================================================\n */\nvoid\nsite_update_on_end(server_info *sinfo, queue_info *qinfo, resource_resv *resv)\n{\n\tjob_info\t*job;\n\tshare_info\t*si;\n\tsh_amt\t\t*sc;\n\tshare_head\t*shead;\n\n\tif (sinfo == NULL || (shead = sinfo->share_head) == NULL)\n\t\treturn;\n\tif (!resv->is_job || (job = resv->job) == NULL)\n\t\treturn;\n\tif ((si = job->sh_info) == NULL || (sc = job->sh_amts) == NULL)\n\t\treturn;\n\tbump_share_count(si, 
resv->share_type, sc, -1);\n\tbump_demand_count(si, resv->share_type, sc, 1);\n\tif ((si = si->leader) == NULL)\n\t\treturn;\n\tif (resv->share_type != J_TYPE_ignore) {\n\t\tint i;\n\n\t\tfor (i = 0; i < shr_class_count; ++i) {\n\t\t\tint borrowed;\n\t\t\tint ncpus;\n\n\t\t\tncpus = sc[i];\n\t\t\tshead->sh_avail[i] += ncpus;\n\t\t\tborrowed = si->share_inuse[i][J_TYPE_limited] +\n\t\t\t\tsi->share_inuse[i][J_TYPE_borrow] -\n\t\t\t\tsi->share_ncpus[i];\n\t\t\tif (borrowed > 0) {\n\t\t\t\tif (borrowed > ncpus)\n\t\t\t\t\tborrowed = ncpus;\n\t\t\t\tshead->sh_contrib[i] += borrowed;\n\t\t\t}\n\t\t}\n\t\tsi->ratio = get_share_ratio(si->share_ncpus, NULL,\n\t\t\tsi->share_inuse);\n\t}\n#ifdef NAS_DEBUG\n\tprintf(\" YYY- %s %d %g %g %s\\n\", si->name, (int)resv->share_type, si->ratio, si->ratio_max, resv->name);\n\tfflush(stdout);\n#endif\n}\n\n\n\n/*\n *=====================================================================\n * site_update_on_run(sinfo, qinfo, res, flag, ns) - Do site specific\n *\t\tupdating when job started.\n * Entry:\tsinfo = server info\n *\t\tqinfo = info for queue job running in\n *\t\tres = resv/job info\n *\t\tflag = 0 when calendaring, 1 if really starting\n *\t\tns = node specification job run on\n * Exit:\tLocal data updated\n *=====================================================================\n */\nvoid\nsite_update_on_run(server_info *sinfo, queue_info *qinfo,\n\tresource_resv *resv, int flag, nspec **ns)\n{\n\tjob_info\t*job;\n\tshare_info\t*si;\n\tsh_amt\t\t*sc;\n\tshare_head\t*shead;\n\tqueue_info\t*queue;\n\tsite_user_info\t*sui;\n\tint\t\ti, ncpus, borrowed;\n\n\tif (sinfo == NULL || (shead = sinfo->share_head) == NULL)\n\t\treturn;\n\tif (!resv->is_job || (job = resv->job) == NULL)\n\t\treturn;\n\tif ((si = job->sh_info) == NULL || (sc = job->sh_amts) == NULL)\n\t\treturn;\n\tqueue = job->queue;\n\tsui = job->u_info;\n\tif (flag && sui && queue) {\n\t\tif (queue->is_topjob_set_aside) {\n\t\t\tsui->current_use_pqt += 
job->accrue_rate;\n\t\t} else {\n\t\t\tsui->current_use += job->accrue_rate;\n\t\t}\n\t}\n\tbump_share_count(si, resv->share_type, sc, 1);\n\tbump_demand_count(si, resv->share_type, sc, -1);\n\tif ((si = si->leader) == NULL)\n\t\treturn;\n\tif (resv->share_type != J_TYPE_ignore) {\n\t\tfor (i = 0; i < shr_class_count; ++i) {\n\t\t\tncpus = sc[i];\n\t\t\tshead->sh_avail[i] -= ncpus;\n\t\t\tborrowed = si->share_inuse[i][J_TYPE_limited] +\n\t\t\t\tsi->share_inuse[i][J_TYPE_borrow] -\n\t\t\t\tsi->share_ncpus[i];\n\t\t\tif (borrowed > 0) {\n\t\t\t\tif (borrowed > ncpus)\n\t\t\t\t\tborrowed = ncpus;\n\t\t\t\tshead->sh_contrib[i] -= borrowed;\n\t\t\t}\n\t\t}\n\t\tsi->ratio = get_share_ratio(si->share_ncpus, NULL,\n\t\t\tsi->share_inuse);\n\t\t/* localmod 154 */\n\t\t/* Keep track of highest ratio seen */\n\t\tif (si->ratio > si->ratio_max)\n\t\t\tsi->ratio_max = si->ratio;\n\t}\n#ifdef NAS_DEBUG\n\tprintf(\" YYY+ %s %d %g %g %s\\n\", si->name, (int)resv->share_type, si->ratio, si->ratio_max, resv->name);\n\tfflush(stdout);\n#endif\n}\n\n\n\n/*\n *=====================================================================\n * site_vnode_inherit(nodes) - Have vnodes inherit certain values\n *\t\tfrom their natural vnode.\n * Entry:\tnodes = array of node_info structures for all vnodes\n * Exit:\tvnodes updated\n *=====================================================================\n */\nvoid\nsite_vnode_inherit(node_info ** nodes)\n{\n\tint\t\tnidx;\n\tnode_info *\tnatural;\n\tnode_info *\tninfo;\n\tresource *\tres;\n\tresource *\tcur;\n\tresource *\tprev;\n\n\tif (nodes == NULL)\n\t\treturn;\n\tnatural = NULL;\n\tfor (nidx = 0; (ninfo = nodes[nidx]) != NULL; ++nidx) {\n\t\t/*\n\t\t * Is this a natural node?\n\t\t */\n\t\tres = find_resource(ninfo->res, allres[\"host\"]);\n\t\tif (res == NULL)\n\t\t\tcontinue;\n\t\tif (compare_res_to_str(res, ninfo->name, CMP_CASELESS)) {\n\t\t\tnatural = ninfo;\t/* Natural vnode */\n\t\t\tcontinue;\n\t\t}\n\t\t/*\n\t\t * For vnode, 
locate natural vnode\n\t\t */\n\t\tif (natural == NULL\n\t\t\t|| !compare_res_to_str(res, natural->name, CMP_CASELESS)) {\n\t\t\tint i;\n\t\t\tfor (i = 0; (natural = nodes[i]) != NULL; ++i) {\n\t\t\t\tif (compare_res_to_str(res, natural->name, CMP_CASELESS)) {\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif (natural == NULL)\n\t\t\tcontinue;\n\t\t/*\n\t\t * Copy interesting status from natural vnode to this vnode\n\t\t */\n\t\tninfo->is_down |= natural->is_down;\n\t\tninfo->is_offline |= natural->is_offline;\n\t\tninfo->is_unknown |= natural->is_unknown;\n\t\tif (ninfo->is_down || ninfo->is_offline || ninfo->is_unknown) {\n\t\t\tninfo->is_free = 0;\n\t\t}\n\t\tninfo->no_multinode_jobs |= natural->no_multinode_jobs;\n\t\tif (natural->queue_name && ninfo->queue_name == NULL) {\n\t\t\tninfo->queue_name = strdup(natural->queue_name);\n\t\t}\n\t\tif (ninfo->priority == 0) {\n\t\t\tninfo->priority = natural->priority;\n\t\t}\n\t\t/*\n\t\t * Copy natural vnode resources to this vnode\n\t\t */\n\t\tfor (res = natural->res; res != NULL; res = res->next) {\n\t\t\t/*\n\t\t\t * Cannot duplicate consumable resources\n\t\t\t */\n\t\t\tif (res->type.is_consumable)\n\t\t\t\tcontinue;\n\t\t\t/*\n\t\t\t * Skip if resource already set for vnode\n\t\t\t */\n\t\t\tfor (prev = NULL, cur = ninfo->res; cur != NULL;\n\t\t\t\tcur = cur->next) {\n\t\t\t\tif (strcmp(cur->name, res->name) == 0) {\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tprev = cur;\n\t\t\t}\n\t\t\tif (cur != NULL)\n\t\t\t\tcontinue;\n\t\t\t/*\n\t\t\t * Add resource to end of vnode's list\n\t\t\t */\n\t\t\tcur = new_resource();\n\t\t\tif (cur == NULL)\n\t\t\t\tcontinue;\n\t\t\tcur->name = strdup(res->name);\n\t\t\tset_resource(cur, res->orig_str_avail, RF_AVAIL);\n\t\t\tif (prev == NULL) {\n\t\t\t\tninfo->res = cur;\n\t\t\t} else {\n\t\t\t\tcur->next = prev->next;\n\t\t\t\tprev->next = cur;\n\t\t\t}\n\t\t}\n\t}\n}\n\n\n\n/*\n *=====================================================================\n * Internal functions\n 
*=====================================================================\n */\n\n\n\n/*\n *=====================================================================\n * clear_topjob_counts(root) - Reset per group topjob counts\n * Entry:       root = root of share subtree to work on\n *=====================================================================\n */\nstatic void\nclear_topjob_counts(share_info* root)\n{\n\tif (root == NULL)\n\t\treturn;\n\troot->topjob_count = 0;\n\troot->none_left = 0;\n\tif (root->leader == root) {\n\t\troot->ratio = get_share_ratio(root->share_ncpus, NULL,\n\t\t\troot->share_inuse);\n\t\t/* localmod 154 */\n\t\troot->ratio_max = root->ratio;\n\t\troot->tj_cpu_cost = 0.0;\n\t}\n\tif (root->child)\n\t\tclear_topjob_counts(root->child);\n\tif (root->sibling)\n\t\tclear_topjob_counts(root->sibling);\n}\n\n\n/*\n *=====================================================================\n * count_cpus(nodes, ncnt, queues, totals) - Count total CPUs available\n *\t\tfor allocation\n * Entry:\tnodes = array of node_info struct ptrs\n *\t\tncnt = count of entries in nodes\n *\t\tqueues = array of queue_info structures\n *\t\ttotals = sh_amt array for totals\n * Exit:\ttotals array updated with CPU type counts\n *=====================================================================\n */\nstatic void\ncount_cpus(node_info **nodes, int ncnt, queue_info **queues, sh_amt *totals)\n{\n\tint\t\ti;\n\tresource\t*res;\n\tsch_resource_t\tncpus;\n\n\tfor (i = 0; i < shr_class_count; ++i) {\n\t\ttotals[i] = 0;\n\t}\n\tfor (i = 0; i < ncnt; ++i) {\n\t\tnode_info\t*node;\n\n\t\tnode = nodes[i];\n\t\t/*\n\t\t * Skip nodes in unusable states.\n\t\t * (Unless jobs are still assigned to them.)\n\t\t */\n\t\tif ((node->is_down || node->is_offline)\n\t\t\t&&  (node->jobs == NULL || node->jobs[0] == NULL))\n\t\t\tcontinue;\n#if\tNAS_DONT_COUNT_EXEMPT\n\t\t/*\n\t\t * Don't count nodes associated with specific queues\n\t\t * if jobs in queue are exempt from CPU shares.\n\t\t 
*/\n\t\tif (node->queue_name) {\n\t\t\tqueue_info *queue;\n\t\t\tqueue = find_queue_info(queues, node->queue_name);\n\t\t\tif (queue == NULL || queue->max_borrow == 0)\n\t\t\t\tcontinue;\n\t\t}\n#endif\n\t\t/*\n\t\t * Include available CPUs in count\n\t\t * For hosts that are down or offline, we count only\n\t\t * assigned CPUs.  This should exactly balance the CPUs\n\t\t * counted against running jobs.\n\t\t */\n#if 0 /* XXX HACK until SBUrate available, localmod 126 */\n\t\tres = find_resource(node->res, allres[\"ncpus\"]);\n\t\tif (res != NULL && res->avail != SCHD_INFINITY) {\n\t\t\tif (node->is_down || node->is_offline)\n\t\t\t\t/*\n\t\t\t\t * Use string value, because reservations\n\t\t\t\t * can affect res->assigned without updating\n\t\t\t\t * str_assigned.\n\t\t\t\t */\n\t\t\t\tncpus = strtol(res->str_assigned, NULL, 0);\n\t\t\telse\n\t\t\t\tncpus = res->avail;\n\t\t\ttotals[node->sh_cls] += ncpus;\n\t\t}\n#else\n\t\t{\n\t\t\tstruct shr_type *stp;\n\t\t\tstp = shr_type_info_by_idx(node->sh_type);\n\t\t\ttotals[node->sh_cls] += stp->cpus_per_node;\n\t\t}\n#endif /* localmod 126 */\n\t}\n}\n\n\n\n\n/*\n *=====================================================================\n * count_active_cpus(resvs, jcnt, sh_active) - Update share alloc data\n *\t\tbased on running jobs.\n * Entry:\tresvs = array of resource_resv struct ptrs\n *\t\tjcnt = count of entries in resvs array\n *\t\tsh_active = array to total use into\n * Exit:\tshare_inuse values updated\n *=====================================================================\n */\nstatic void\ncount_active_cpus(resource_resv **resvs, int jcnt, sh_amt *sh_active)\n{\n\tint\t\ti, k;\n\tresource_resv\t*resv;\n\n\tmemset(sh_active, 0, shr_class_count * sizeof(*sh_active));\n\tfor (i = 0; i < jcnt; ++i) {\n\t\tjob_info *job;\n\n\t\t/*\n\t\t * Skip everything but running jobs\n\t\t */\n\t\tresv = resvs[i];\n\t\tif (!resv->is_job || (job = resv->job) == NULL)\n\t\t\tcontinue;\n\t\tif 
(!job->is_running)\n\t\t\tcontinue;\n\t\tif (job->sh_amts == NULL)\n\t\t\tcontinue;\n\t\t/*\n\t\t * Add used CPUs to group total based on job share type\n\t\t */\n\t\tif (resv->share_type != J_TYPE_ignore) {\n\t\t\tfor (k = 0; k < shr_class_count; ++k) {\n\t\t\t\tsh_active[k] += job->sh_amts[k];\n\t\t\t}\n\t\t}\n\t\tbump_share_count(job->sh_info, resv->share_type, job->sh_amts, 1);\n\t}\n}\n\n\n\n\n/*\n *=====================================================================\n * count_demand_cpus(resvs, jcnt) - Update share use data\n *\t\tfor queued jobs.\n * Entry:\tresvs = array of resource_resv struct ptrs\n *\t\tjcnt = count of entries in resvs array\n * Exit:\tshare_demand values updated\n *=====================================================================\n */\nstatic void\ncount_demand_cpus(resource_resv **resvs, int jcnt)\n{\n\tint\t\ti;\n\tjob_info\t*job;\n\tresource_resv\t*resv;\n\n\tfor (i = 0; i < jcnt; ++i) {\n\t\t/*\n\t\t * Skip everything but eligible, queued jobs\n\t\t */\n\t\tresv = resvs[i];\n\t\tif (!resv->is_job || (job = resv->job) == NULL)\n\t\t\tcontinue;\n\t\tif (!in_runnable_state(resv))\n\t\t\tcontinue;\n\t\tbump_demand_count(job->sh_info, resv->share_type, job->sh_amts, 1);\n\t}\n}\n\n\n\n\n/*\n *=====================================================================\n * count_contrib_cpus(root, node, sh_contrib) - Count CPUs available\n *\t\tfor borrowing.\n * Entry:\troot = base of share info tree\n *\t\tnode = base of current sub-tree\n *\t\tsh_contrib = where to accumulate overall totals\n * Exit:\tContents of sh_contrib set\n *=====================================================================\n */\nstatic void\ncount_contrib_cpus(share_info *root, share_info *node, sh_amt *sh_contrib)\n{\n\tint\ti;\n\tint\tcontrib;\n\n\tif (root == NULL || node == NULL)\n\t\treturn;\n\tif (node == root) {\n\t\t/* Clear counts */\n\t\tmemset(sh_contrib, 0, shr_class_count * sizeof(*sh_contrib));\n\t}\n\t/*\n\t * Only nodes with 
allocations can contribute\n\t */\n\tif (node == node->leader && node != root) {\n\t\tfor (i = 0; i < shr_class_count; ++i) {\n\t\t\tcontrib = node->share_ncpus[i] -\n\t\t\t\t(node->share_inuse[i][J_TYPE_limited]\n\t\t\t\t+ node->share_inuse[i][J_TYPE_borrow]\n\t\t\t\t+ node->share_demand[i][J_TYPE_limited]\n\t\t\t\t+ node->share_demand[i][J_TYPE_borrow]\n\t\t\t\t) ;\n\t\t\tif (contrib > 0)\n\t\t\t\tsh_contrib[i] += contrib;\n\t\t}\n\t}\n\tif (node->child)\n\t\tcount_contrib_cpus(root, node->child, sh_contrib);\n\tif (node->sibling)\n\t\tcount_contrib_cpus(root, node->sibling, sh_contrib);\n\tif (node == root) {\n\t\t/*\n\t\t * Remove root demand from amounts available.\n\t\t */\n\t\tfor (i = 0; i < shr_class_count; ++i) {\n\t\t\tint j;\n\n\t\t\tcontrib = sh_contrib[i];\n\t\t\tfor (j = 0; j < J_TYPE_COUNT; ++j) {\n\t\t\t\tif (j != J_TYPE_borrow) {\n\t\t\t\t\tcontrib -= root->share_demand[i][j];\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (contrib < 0)\n\t\t\t\tcontrib = 0;\n\t\t\tsh_contrib[i] = contrib;\n\t\t}\n\t}\n}\n\n\n\n/*\n *=====================================================================\n * dup_shares (oldsh, nsinfo) - duplicate share tree\n * Entry:\toldsh = existing share head\n *\t\tnsinfo = server info to record new share tree in\n * Returns:\t1 on success, 0 on error\n *=====================================================================\n */\nstatic int\ndup_shares(share_head *oldsh, server_info *nsinfo)\n{\n\tshare_info\t*oroot;\n\tshare_info\t*nroot;\n\tshare_head\t*newsh;\n\n\tif (oldsh == NULL || nsinfo == NULL)\n\t\treturn 0;\n\tif ((oroot = oldsh->root) == NULL)\n\t\treturn 0;\n\tnewsh = new_share_head(shr_class_count);\n\tif (newsh == NULL)\n\t\treturn 0;\n\tnroot = dup_share_tree(oroot);\n\tif (nroot == NULL) {\n\t\tfree_share_head(newsh, 1);\n\t\treturn 0;\n\t}\n\tnewsh->root = nroot;\n\tnewsh->prev = oldsh;\n\tcur_shr_head = newsh;\n\tmemcpy(newsh->sh_total, oldsh->sh_total,\n\t\tshr_class_count*sizeof(*newsh->sh_total));\n\tmemcpy(newsh->sh_avail, 
oldsh->sh_avail,\n\t\tshr_class_count*sizeof(*newsh->sh_avail));\n\tmemcpy(newsh->sh_contrib, oldsh->sh_contrib,\n\t\tshr_class_count*sizeof(*newsh->sh_contrib));\n\tnsinfo->share_head = newsh;\n\treturn 1;\n}\n\n\n\n/*\n *=====================================================================\n * dup_share_tree(oroot) - clone a share_info (sub)tree\n * Entry:\toroot = root of subtree to clone\n * Returns:\troot of cloned copy\n * Modifies:\ttptr link in original tree points to clone of that node\n *=====================================================================\n */\nstatic share_info *\ndup_share_tree(share_info *oroot)\n{\n\tshare_info\t*nroot;\n\n\tif (oroot == NULL)\n\t\treturn NULL;\n\tnroot = new_share_info_clone(oroot);\n\tif (nroot == NULL)\n\t\treturn NULL;\n\toroot->tptr = nroot;\n\t/*\n\t * Update pointers where needed, etc.\n\t */\n\tif (oroot->parent != NULL)\n\t\tnroot->parent = oroot->parent->tptr;\n\tif (oroot->leader != NULL)\n\t\tnroot->leader = oroot->leader->tptr;\n\t/*\n\t * Depth-first tree walk\n\t */\n\tnroot->sibling = dup_share_tree(oroot->sibling);\n\tnroot->child = dup_share_tree(oroot->child);\n\treturn nroot;\n}\n\n\n\n/*\n *=====================================================================\n * find_entity_share(name, node) - Look up share info for entity\n *\t\tPatterns are taken into account.\n *\t\tThe sub-tree rooted at node is searched for the best\n *\t\tmatch, where best is either an exact match, or the\n *\t\tpattern with the lowest line number.\n * Entry:\tname = Name of entity to locate.\n *\t\tnode = Root of subtree to search\n * Returns:\tPointer to matching share_info structure\n *=====================================================================\n */\nstatic share_info *\nfind_entity_share(char *name, share_info *node)\n{\n\tshare_info *\tsi;\n\tshare_info *\tchild;\n\tshare_info *\tbest_si;\n\n\tif (node == NULL) {\n\t\treturn NULL;\n\t}\n\tif (strcmp(name, node->name) == 0) {\n\t\treturn node;\t\t/* Simple 
match */\n\t}\n\tbest_si = NULL;\n\tif (node->pattern_type != share_info::pattern_type::PATTERN_NONE) {\n\t\tif (regexec(&node->pattern, name, 0, NULL, 0) == 0) {\n\t\t\t/* Found one match */\n\t\t\tbest_si = node;\n\t\t}\n\t}\n\tfor (child = node->child; child; child = child->sibling) {\n\t\tsi = find_entity_share(name, child);\n\t\tif (si) {\n\t\t\tif (si->pattern_type == share_info::pattern_type::PATTERN_NONE) {\n\t\t\t\t/* Found simple match in sub-tree */\n\t\t\t\tbest_si = si;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tif (best_si == NULL) {\n\t\t\t\tbest_si = si;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tif (si->lineno < best_si->lineno) {\n\t\t\t\tbest_si = si;\n\t\t\t}\n\t\t}\n\t}\n\treturn best_si;\n}\n\n\n\n/*\n *=====================================================================\n * find_most_favored_share(root, topjobs) - Search share group list\n *\t\tfor group that is under the topjobs limit and has\n *\t\tthe lowest share use ratio.\n * Entry:\troot = pointer to (sub)tree to search\n *\t\ttopjobs = configured topjob guarantee\n * Returns:\tPointer to favored share group info\n *\t\tNULL if no group under topjobs.\n *=====================================================================\n */\nshare_info *\nfind_most_favored_share(share_info* root, int topjobs)\n{\n\tshare_info*\tbest;\n\tshare_info*\tsi;\n\n\tif (root == NULL)\n\t\treturn NULL;\n\tif (root->leader == root\n\t\t&& (root->topjob_count < topjobs\n\t\t  || root->tj_cpu_cost < TJ_COST_MAX)\n\t\t&& !root->none_left)\n\t\tbest = root;\n\telse\n\t\tbest = NULL;\n\tif (root->child) {\n\t\tsi = find_most_favored_share(root->child, topjobs);\n\t\tif (best == NULL || (si != NULL && si->ratio < best->ratio))\n\t\t\tbest = si;\n\t}\n\tif (root->sibling) {\n\t\tsi = find_most_favored_share(root->sibling, topjobs);\n\t\tif (best == NULL || (si != NULL && si->ratio < best->ratio))\n\t\t\tbest = si;\n\t}\n\treturn best;\n}\n\n\n\n/*\n *=====================================================================\n * 
find_share_class(root, name) - Find share class in tree and return\n *\t\tits class index\n * Entry:\troot = root of class list to search\n *\t\tname = name of class to search for\n * Returns:\tmatching class index\n *\t\t0 (default) on no match\n *=====================================================================\n */\nstatic int\nfind_share_class(struct shr_class *root, char *name)\n{\n\twhile (root) {\n\t\tif (strcmp(root->name, name) == 0)\n\t\t\tbreak;\n\t\troot = root->next;\n\t}\n\treturn root ? root->sh_cls : 0;\n}\n\n\n\n/*\n *=====================================================================\n * find_share_group(root, name) - Look up share group info by name.\n *\t\tNo pattern matching is performed.\n * Entry:\troot = root of share info tree\n *\t\tname = name to find\n * Returns:\tptr to share info, or NULL if not found\n *=====================================================================\n */\nstatic share_info *\nfind_share_group(share_info *root, char *name)\n{\n\tshare_info\t*child;\n\tshare_info\t*result = NULL;\n\n\tif (root == NULL || name == NULL)\n\t\treturn NULL;\n\tif (strcmp(root->name, name) == 0)\n\t\treturn root;\n\tfor (child = root->child; child; child = child->sibling) {\n\t\tresult = find_share_group(child, name);\n\t\tif (result)\n\t\t\tbreak;\n\t}\n\treturn result;\n}\n\n\n/*\n *=====================================================================\n * find_share_type(head, name) - Look up type by name and return its\n *\ttype index.\n * Entry:\thead = head of table list\n *\t\tname = name to find\n * Returns:\tindex, or 0 on no match (default)\n *=====================================================================\n */\n#if 0\nstatic int\nfind_share_type(struct shr_type *head, char *name)\n{\n\tif (head == NULL || name == NULL || *name == '\\0')\n\t\treturn 0;\n\tfor (; head ; head = head->next) {\n\t\tif (strcmp(name, head->name) == 0) {\n\t\t\treturn head->sh_cls;\n\t\t}\n\t}\n\treturn 0;\n}\n#endif\n\n\n\n/*\n 
*=====================================================================\n * find_user(head, name) - Look up user in list, adding if missing.\n * Entry:\thead = ptr to head of list\n *\t\tname = name to find\n * Returns:\tptr to user info structure\n *\t\tNULL on error\n *\t\t*head possibly updated\n *=====================================================================\n */\nstatic site_user_info*\nfind_user(site_user_info **head, char *name)\n{\n\tsite_user_info\t*cur;\n\tsite_user_info\t*prev;\n\tsite_user_info\t*sui;\n\n\tif (head == NULL)\n\t\treturn NULL;\n\tprev = NULL;\n\tfor (cur = *head; cur ; cur = cur->next) {\n\t\tint rc = strcasecmp(name, cur->user_name);\n\t\tif (rc == 0)\n\t\t\treturn cur;\n\t\tif (rc > 0)\n\t\t\tbreak;\n\t\tprev = cur;\n\t}\n\t/*\n\t * Not found, allocate a new entry and link it in.\n\t */\n\tsui = malloc(sizeof(site_user_info) + strlen(name));\n\tif (sui == NULL)\n\t\treturn NULL;\t\t/* memory allocation failed */\n\tstrcpy(sui->user_name, name);\n\tsui->current_use = sui->current_use_pqt = 0;\n\tsui->next = cur;\n\tif (prev == NULL) {\n\t\t*head = sui;\n\t} else {\n\t\tprev->next = sui;\n\t}\n\treturn sui;\n}\n\n\n/*\n *=====================================================================\n * free_share_head(sh, flag) - Free a share head and associated tree\n * Entry:\tsh = ptr to share head\n *\t\tflag = true if tree expected to be a clone\n *=====================================================================\n */\nstatic void\nfree_share_head(share_head *sh, int flag)\n{\n\tshare_info\t*root;\n\n\tif (sh == NULL)\n\t\treturn;\n\troot = sh->root;\n\tif (root == NULL)\n\t\treturn;\n\tif (flag) {\n\t\t/*\n\t\t * Be careful when releasing things that are supposed\n\t\t * to be clones.\n\t\t */\n\t\tif (!root->am_clone)\n\t\t\treturn;\n\t\tif (sh != cur_shr_head)\n\t\t\treturn;\n\t\tcur_shr_head = cur_shr_head->prev;\n\t}\n\tfree_share_tree(root);\n\tfree(sh);\n}\n\n\n\n/*\n 
*=====================================================================\n * free_share_tree(root) - Free share info tree\n * Entry:\troot = root of (sub)tree to free\n *=====================================================================\n */\nstatic void\nfree_share_tree(share_info *root)\n{\n\tif (root == NULL)\n\t\treturn;\n\tfree_share_tree(root->child);\n\tfree_share_tree(root->sibling);\n\tif (!root->am_clone) {\n\t\tif (root->pattern_type != share_info::pattern_type::PATTERN_NONE) {\n\t\t\tregfree(&root->pattern);\n\t\t}\n\t}\n\tfree(root);\n}\n\n\n\n/*\n *=====================================================================\n * free_users(head) - Free linked list of users rooted at head\n * Entry:\thead = ptr to head of list\n * Exit:\tList freed, head NULLed\n *=====================================================================\n */\nstatic void\nfree_users(site_user_info **head)\n{\n\tsite_user_info*\tcur;\n\tsite_user_info* next;\n\n\tfor (cur = *head; cur; cur = next) {\n\t\tnext = cur->next;\n\t\tfree(cur);\n\t}\n\t*head = NULL;\n}\n\n\n\n/*\n *=====================================================================\n * get_share_ratio(ncpus, asking, amts) - Compute group share use ratio\n *\t\tThis is the maximum of the use ratios for classes\n *\t\tthat are relevant.\n * Entry:\tncpus = sh_amt array for group allocation\n *\t\tasking = sh_amt array for job,\n *\t\t\tNULL if desire value for group as a whole.\n *\t\tamts = current use numbers.\n *=====================================================================\n */\ndouble\nget_share_ratio(sh_amt* ncpus, sh_amt* asking, sh_amt_array* amts)\n{\n\tint\tcls;\n\tdouble\tratio = 0.0;\n\tdouble\tt;\n\n\tfor (cls = 0; cls < shr_class_count; ++cls) {\n\t\tif (ncpus[cls] == 0)\n\t\t\tcontinue;\n\t\tif (asking != NULL && asking[cls] == 0)\n\t\t\tcontinue;\n\t\tt = (double)(amts[cls][J_TYPE_limited]\n\t\t\t+ amts[cls][J_TYPE_borrow])\n\t\t/ (double)(ncpus[cls]);\n\t\tif (t > ratio)\n\t\t\tratio = 
t;\n\t}\n\treturn ratio;\n}\n\n\n\n/*\n *=====================================================================\n * init_users(sinfo) - Collect information about users\n * Entry:\tsinfo = Server info\n * Returns:\t0 on success, else non-zero\n *=====================================================================\n */\nstatic int\ninit_users(server_info *sinfo)\n{\n\tresource_resv\t**resvs = sinfo->jobs;\n\tresource_resv\t*resv;\n\tjob_info\t*job;\n\tint\t\tjcnt = sinfo->sc.total;\n\tint\t\ti;\n\tsite_user_info\t*sui;\n\tqueue_info\t*queue;\n\n\tfree_users(&users);\n\tfor (i = 0; i < jcnt; ++i) {\n\t\tresv = resvs[i];\n\t\tif (!resv->is_job || (job = resv->job) == NULL)\n\t\t\tcontinue;\n\t\tif ((queue = job->queue) == NULL) {\n\t\t\tjob->u_info = NULL;\n\t\t\tcontinue;\n\t\t}\n\t\tsui = find_user(&users, resv->user);\n\t\tif (sui == NULL) {\n\t\t\treturn 1;\n\t\t}\n\t\tjob->u_info = sui;\n\t\t/*\n\t\t * Accumulate accrual rates for running jobs\n\t\t */\n\t\tif (!job->is_running)\n\t\t\tcontinue;\n\t\tif (queue->is_topjob_set_aside) {\n\t\t\tsui->current_use_pqt += job->accrue_rate;\n\t\t} else {\n\t\t\tsui->current_use += job->accrue_rate;\n\t\t}\n\t}\n\treturn 0;\n}\n\n\n\n/*\n *=====================================================================\n * list_share_info(fp, root, pfx, idx, sname, flag) - Write current share\n *\t\tinfo to file\n * Entry:\tfp = FILE * to write to\n *\t\troot = base of sub-tree to write\n *\t\tpfx = string to prefix each line with\n *\t\tidx = share type to report on\n *\t\tsname = identifier for share type\n *\t\tflag = non-zero to list only leaders\n * Exit:\tsubtree info written to file\n *=====================================================================\n */\nstatic void\nlist_share_info(FILE *fp, share_info *root, const char *pfx, int idx, const char *sname, int flag)\n{\n\tif (shr_types == NULL || shr_class_count == 0)\n\t\treturn;\n\tif (flag == 0 || root == root->leader) {\n\t\tchar\t\tbuf[J_TYPE_COUNT * 2 * 
15];\n\t\tchar\t\t*p;\n\t\tchar\t\t*s;\n\t\tchar\t\t*lname;\n\t\tint\t\tj;\n\t\tsh_amt\t\t*use_amts;\n\t\tsh_amt\t\t*dmd_amts;\n\n\t\tuse_amts = &root->share_inuse[idx][0];\n\t\tdmd_amts = &root->share_demand[idx][0];\n\t\ts = \"\";\n\t\tp = buf;\n\t\tfor (j = 0; j < J_TYPE_COUNT; ++j) {\n\t\t\tint len;\n\n\t\t\tlen = sprintf(p, \"%s%d+%d\",\n\t\t\t\ts, use_amts[j], dmd_amts[j]);\n\t\t\tp += len;\n\t\t\ts = \"/\";\n\t\t}\n\t\tlname = root->leader ? root->leader->name : \"<no_leader>\";\n\t\tfprintf(fp, \"%s%17s=%s\\t%d\\t%d\\t%d\\t%s\\t%s\\n\",\n\t\t\tpfx, root->name, sname,\n\t\t\troot->share_gross[idx], root->share_net[idx],\n\t\t\troot->share_ncpus[idx], buf, lname);\n\t}\n\tif (root->child)\n\t\tlist_share_info(fp, root->child, pfx, idx, sname, flag);\n\tif (root->sibling)\n\t\tlist_share_info(fp, root->sibling, pfx, idx, sname, flag);\n}\n\n\n\n/*\n *=====================================================================\n * set_share_cpus(node, gross, sh_avail) - Apportion CPUs based on allocations\n * Entry:\tnode = fairshare info subtree\n *\t\tgross = total gross share units\n *\t\tavail = available CPUs of each type\n * Exit:\tshare_ncpus fields in tree updated\n *=====================================================================\n */\nstatic void\nset_share_cpus(share_info *node, sh_amt *gross, sh_amt *sh_avail)\n{\n\tint\t\ti;\n\n\tif (node == NULL)\n\t\treturn;\n\t/*\n\t * Only groups with allocations get ncpus set\n\t */\n\tif (node->share_gross[0] >= 0) {\n\t\tint cpus;\n\t\tfor (i = 0; i < shr_class_count; ++i) {\n\t\t\tif (node->share_net[i] == 0) {\n\t\t\t\tcpus = 0;\n\t\t\t} else {\n\t\t\t\tdouble t_shares, t_cpus;\n\t\t\t\tt_cpus = sh_avail[i];\n\t\t\t\tt_shares = gross[i];\n\t\t\t\t/*\n\t\t\t\t * Have to worry about 32-bit overflow in\n\t\t\t\t * the following computation.\n\t\t\t\t */\n\t\t\t\tcpus = (int)((t_cpus * node->share_net[i]) /\n\t\t\t\t\tt_shares);\n\t\t\t\tif (cpus < 4) {\n\t\t\t\t\tprintf(\"%s: group %s gets only %d %s 
CPUs\\n\",\n\t\t\t\t\t\t__func__, node->name,\n\t\t\t\t\t\tcpus, shr_class_name_by_idx(i));\n\t\t\t\t\tfflush(stdout);\n\t\t\t\t}\n\t\t\t}\n\t\t\tnode->share_ncpus[i] = cpus;\n\t\t}\n\t} else {\n\t\tfor (i = 0; i < shr_class_count; ++i) {\n\t\t\tnode->share_ncpus[i] = -1;\n\t\t}\n\t}\n\tif (node->sibling)\n\t\tset_share_cpus(node->sibling, gross, sh_avail);\n\tif (node->child)\n\t\tset_share_cpus(node->child, gross, sh_avail);\n}\n\n\n\n/*\n *=====================================================================\n * bump_share_count(si, stype, sc, sign) - Bump group inuse CPU counts\n * Entry:\tsi = group's share info\n *\t\tstype = Which counter to bump\n *\t\tsc = Array of counts to bump by.\n *\t\tsign = +/-1 to select incrementing/decrementing\n * Exit:\tCounters bumped within tree\n *=====================================================================\n */\nstatic void\nbump_share_count(share_info *si, enum site_j_share_type stype, sh_amt *sc, int sign)\n{\n\tshare_info\t*leader;\n\tint\t\ti;\n\n\tif (si == NULL)\n\t\treturn;\n\t/*\n\t * Bump count for group itself and for sub-tree leader\n\t * (unless group is leader)\n\t */\n\tfor (i = 0; i < shr_class_count; ++i) {\n\t\tsi->share_inuse[i][stype] += sc[i] * sign;\n\t}\n\tleader = si->leader;\n\tif (leader && leader != si) {\n\t\tfor (i = 0; i < shr_class_count; ++i) {\n\t\t\tleader->share_inuse[i][stype] += sc[i] * sign;\n\t\t}\n\t}\n}\n\n\n\n/*\n *=====================================================================\n * bump_demand_count(si, stype, sc, sign) - Bump group demand CPU counts\n * Entry:\tsi = group's share info\n *\t\tstype = Which counter to bump\n *\t\tsc = Array of counts to bump by.\n *\t\tsign = +/-1 to select incrementing/decrementing\n * Exit:\tCounters bumped within tree\n *=====================================================================\n */\nstatic void\nbump_demand_count(share_info *si, enum site_j_share_type stype, sh_amt *sc, int 
sign)\n{\n\tshare_info\t*leader;\n\tint\t\ti;\n\n\tif (si == NULL)\n\t\treturn;\n\tfor (i = 0; i < shr_class_count; ++i) {\n\t\tsi->share_demand[i][stype] += sc[i] * sign;\n\t}\n\tleader = si->leader;\n\tif (leader && leader != si) {\n\t\tfor (i = 0; i < shr_class_count; ++i) {\n\t\t\tleader->share_demand[i][stype] += sc[i] * sign;\n\t\t}\n\t}\n}\n\n\n\n/*\n *=====================================================================\n * zero_share_counts(node) - zero CPU info in tree\n * Entry:\tnode = root of portion of tree to zero\n * Exit:\tshare_inuse[], share_demand[] zeroed in sub-tree\n *=====================================================================\n */\nstatic void\nzero_share_counts(share_info *node)\n{\n\tif (node == NULL)\n\t\treturn;\n\tmemset(node->share_inuse, 0, shr_class_count * sizeof(*node->share_inuse));\n\tmemset(node->share_demand, 0, shr_class_count * sizeof(*node->share_demand));\n\tif (node->child)\n\t\tzero_share_counts(node->child);\n\tif (node->sibling)\n\t\tzero_share_counts(node->sibling);\n}\n\n\n\n/*\n *=====================================================================\n * new_share_head(cnt) - Allocate new share_info head structure\n * Entry:\tcnt = number of sh_amt classes\n * Returns:\tpointer to initialized head structure\n *=====================================================================\n */\nstatic share_head *\nnew_share_head(int cnt)\n{\n\tshare_head\t*newsh;\n\tsize_t\t\tsz;\n\tsh_amt\t\t*ptr;\n\n\t/*\n\t * Double cnt to allow space for backup copy of class values.\n\t * Original values go in indices 0..cnt-1, backup in cnt..2*cnt-1\n\t */\n\tcnt *= 2;\n\n\tsz = sizeof(struct share_head);\n\tsz += cnt * sizeof(sh_amt);\t/*active*/\n\tsz += cnt * sizeof(sh_amt);\t/*avail*/\n\tsz += cnt * sizeof(sh_amt);\t/*contrib*/\n\tsz += cnt * sizeof(sh_amt);\t/*total*/\n\tnewsh = calloc(1, sz);\n\tif (newsh == NULL)\n\t\treturn NULL;\n\tptr = (sh_amt *)(newsh + 1);\n\tnewsh->sh_active = ptr;\n\tptr += cnt;\n\tnewsh->sh_avail 
= ptr;\n\tptr += cnt;\n\tnewsh->sh_contrib = ptr;\n\tptr += cnt;\n\tnewsh->sh_total = ptr;\n\tptr += cnt;\n\treturn newsh;\n}\n\n\n\n/*\n *=====================================================================\n * new_share_info(name, cnt) - Create new share_info node\n * Entry:\tname = name to assign to node\n *\t\tcnt = number of classes to make room for.\n * Returns:\tpointer to new share_info struct\n *=====================================================================\n */\nstatic share_info *\nnew_share_info(char *name, int cnt)\n{\n\tsize_t\t\tsz;\n\tsh_amt\t\t*ptr;\n\tsh_amt_array\t*aptr;\n\tshare_info\t*si;\n\n\t/*\n\t * The share_info struct contains pointers to variable-sized\n\t * arrays of sh_amts.  These arrays are allocated after the\n\t * base structure and the pointers set to point to them.\n\t */\n\t/*\n\t * We allocate space for backup copies of some items.\n\t * Original values use indices 0..cnt-1, backups use cnt..2*cnt-1.\n\t * Note that for sh_amt_arrays, the backup copies are after all\n\t * the original arrays, so again, the original values use indices\n\t * 0..cnt-1 as the first subscript, and the backups use cnt..2*cnt-1.\n\t */\n\tsz = sizeof(share_info);\n\tsz += cnt * sizeof(sh_amt);\t\t/*gross*/\n\tsz += cnt * sizeof(sh_amt);\t\t/*net*/\n\tsz += cnt * sizeof(sh_amt);\t\t/*ncpus*/\n\tsz += 2 * cnt * sizeof(sh_amt_array);\t/*inuse + backup*/\n\tsz += 2 * cnt * sizeof(sh_amt_array);\t/*demand + backup*/\n\tsi = calloc(1, sz + strlen(name) + 1);\n\tif (si != NULL) {\n\t\tptr = (sh_amt *)(si + 1);\n\t\tsi->share_gross = ptr;\n\t\tptr += cnt;\n\t\tsi->share_net = ptr;\n\t\tptr += cnt;\n\t\tsi->share_ncpus = ptr;\n\t\tptr += cnt;\n\t\taptr = (sh_amt_array *) ptr;\n\t\tsi->share_inuse = aptr;\n\t\taptr += 2 * cnt;\n\t\tsi->share_demand = aptr;\n\t\taptr += 2 * cnt;\n\t\tsi->name = (char *)aptr;\n\t\tassert(si->name - (char *)si <= sz);\n\t\tstrcpy(si->name, name);\n\t\tsi->size = sz;\n\t}\n\treturn si;\n}\n\n\n\n/*\n 
*=====================================================================\n * new_share_info_clone(old) - Clone a share_info structure\n *\tReturned node has copy of sh_amt values, but shares\n *\tname.\n * Entry:\told = ptr to existing share_info to copy\n * Returns:\tptr to clone\n *=====================================================================\n */\nstatic share_info *\nnew_share_info_clone(share_info *old)\n{\n\tshare_info\t*si;\n\tsh_amt\t\t*ptr;\n\tsh_amt_array\t*aptr;\n\tint\t\tcnt = shr_class_count;\n\n\tif (old == NULL)\n\t\treturn NULL;\n\tsi = malloc(old->size);\n\tif (si) {\n\t\tmemcpy(si, old, old->size);\n\t\t/* Zap tree pointers */\n\t\tsi->parent = si->sibling = si->child = si->leader = NULL;\n\t\tsi->am_clone = 1;\n\t\t/*\n\t\t * Adjust internal pointers\n\t\t */\n\t\tptr = (sh_amt *)(si + 1);\n\t\tsi->share_gross = ptr;\n\t\tptr += cnt;\n\t\tsi->share_net = ptr;\n\t\tptr += cnt;\n\t\tsi->share_ncpus = ptr;\n\t\tptr += cnt;\n\t\taptr = (sh_amt_array *) ptr;\n\t\tsi->share_inuse = aptr;\n\t\taptr += 2 * cnt;\n\t\tsi->share_demand = aptr;\n\t\taptr += 2 * cnt;\n\t}\n\treturn si;\n}\n\n\n\n/*\n *=====================================================================\n * reconcile_shares(root, cnt) - Complete construction of share tree after\n *\t\tshare file all read.\n * Entry:\troot = root of share_info tree\n *\t\tcnt = count of sh_amt entries in amount arrays\n * Returns:\t1 if all okay, else 0\n *=====================================================================\n */\nstatic int\nreconcile_shares(share_info *root, int cnt)\n{\n\tint\ti;\n\tint\tresult = 1;\n\n\tif (root == NULL)\n\t\treturn result;\t\t/* Nothing to do */\n\troot->leader = root;\t\t/* ROOT is its own leader */\n\tfor (i = 0; i < cnt; ++i)\n\t\troot->share_gross[i] = -2;\n\tresult = reconcile_share_tree(root, root, cnt);\n\treturn result;\n}\n\n\n\n/*\n *=====================================================================\n * reconcile_share_tree(root, def, cnt) - Complete construction 
of\n *\t\tshare info subtree.\n * Entry:\troot = root of subtree\n *\t\tdef = default leader for this subtree\n *\t\tcnt = count of sh_amt entries in amount arrays\n * Returns:\t1 if all okay, else 0\n *=====================================================================\n */\nstatic int\nreconcile_share_tree(share_info *root, share_info *def, int cnt)\n{\n\tshare_info\t*child;\n\tint\t\ti;\n\n\tif (root == NULL || def == NULL)\n\t\treturn 1;\n\t/*\n\t * If current root has allocation, it becomes default leader for\n\t * it and its kiddies.\n\t */\n\tfor (i = 0; i < cnt; ++i) {\n\t\tif (root->share_gross[i] > 0) {\n\t\t\tdef = root;\n\t\t\tbreak;\n\t\t}\n\t}\n\troot->leader = def;\n\t/*\n\t * Traverse tree depth-first, using share_net as temp to accumulate\n\t * gross values for children.\n\t */\n\tfor (i = 0; i < cnt; ++i) {\n\t\troot->share_net[i] = 0;\n\t}\n\tfor (child = root->child; child; child = child->sibling) {\n\t\tif (!reconcile_share_tree(child, def, cnt)) {\n\t\t\treturn 0;\n\t\t}\n\t\tfor (i = 0; i < cnt; ++i) {\n\t\t\troot->share_net[i] += child->share_net[i];\n\t\t}\n\t}\n\t/*\n\t * If we are a leader, make sure our share is sufficient to cover\n\t * our children.  
If not, gripe and increase it to match.\n\t */\n\tif (def == root) {\n\t\tfor (i = 0; i < cnt; ++i) {\n\t\t\tsh_amt c_sum, gross;\n\n\t\t\tgross = root->share_gross[i];\n\t\t\tc_sum = root->share_net[i];\n\t\t\tif (c_sum > gross) {\n\t\t\t\tif (gross >= 0) {\n\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\"%s share for %s too small for children: %d < %d\",\n\t\t\t\t\t\troot->name, shr_class_name_by_idx(i),\n\t\t\t\t\t\tgross, c_sum);\n\t\t\t\t\tlog_event(PBSEVENT_SCHED, PBS_EVENTCLASS_FILE,\n\t\t\t\t\t\tLOG_NOTICE, __func__, log_buffer);\n\t\t\t\t}\n\t\t\t\troot->share_gross[i] = gross = c_sum;\n\t\t\t}\n\t\t\troot->share_net[i] = gross - c_sum;\n\t\t}\n\t} else {\n\t\tfor (i = 0; i < cnt; ++i) {\n\t\t\troot->share_gross[i] = -1;\n\t\t}\n\t}\n\treturn 1;\n}\n\n\n\n/*\n *=====================================================================\n * shr_class_info_by_idx(idx) - Look up Nth CPU share class info.\n * Entry:\tidx = index of share class\n * Returns:\tpointer to matching class info, or default entry\n *=====================================================================\n */\n#if 0\nstatic struct shr_class *\nshr_class_info_by_idx(int idx)\n{\n\tstruct shr_class\t*scp;\n\n\tfor (scp = shr_classes; scp; scp = scp->next) {\n\t\tif (scp->sh_cls == idx)\n\t\t\tbreak;\n\t}\n\tif (scp == NULL)\n\t\tscp = shr_classes;\n\treturn scp;\n}\n#endif\n\n\n\n/*\n *=====================================================================\n * shr_class_info_by_name(name) - Look up CPU share class by class name\n * Entry:\tname = name of class\n * Returns:\tpointer to matching class info, or default entry.\n *=====================================================================\n */\n#if 0\nstatic struct shr_class *\nshr_class_info_by_name(const char * name)\n{\n\tstruct shr_class\t*scp;\n\n\tfor (scp = shr_classes; scp; scp = scp->next) {\n\t\tif (strcmp(scp->name, name) == 0)\n\t\t\tbreak;\n\t}\n\tif (scp == NULL)\n\t\tscp = shr_classes;\n\treturn scp;\n}\n#endif\n\n\n\n/*\n 
*=====================================================================\n * shr_class_info_by_type_name(name) - Look up share class by type name\n * Entry:\tname = name of CPU type to look up\n * Returns:\tpointer to matching share class, or default class if no\n *\t\tmatch\n *=====================================================================\n */\n#if 0\nstatic struct shr_class *\nshr_class_info_by_type_name(const char * name)\n{\n\tstruct shr_type\t*stp;\n\tstruct shr_class\t*scp = NULL;\t/* stays NULL if no type matches */\n\n\tfor (stp = shr_types; stp; stp = stp->next) {\n\t\tif (strcmp(name, stp->name) == 0) {\n\t\t\tscp = shr_class_info_by_idx(stp->sh_cls);\n\t\t\tbreak;\n\t\t}\n\t}\n\tif (stp == NULL || scp == NULL)\n\t\tscp = shr_classes;\n\treturn scp;\n}\n#endif\n\n\n\n/*\n *=====================================================================\n * shr_class_name_by_idx(idx) - Look up Nth share class name.\n * Entry:\tidx = which share class to find\n * Returns:\tmatching class name or \"\" if none\n *=====================================================================\n */\nstatic char *\nshr_class_name_by_idx(int idx)\n{\n\tstruct shr_class\t*scp;\n\tchar *sp;\n\n\tfor (scp = shr_classes; scp; scp = scp->next) {\n\t\tif (scp->sh_cls == idx)\n\t\t\tbreak;\n\t}\n\tsp = scp ? 
scp->name : \"\";\n\treturn sp;\n}\n\n\n\n/*\n *=====================================================================\n * shr_type_info_by_idx(idx) - Look up Nth CPU type info\n * Entry:\tidx = desired CPU type\n * Returns:\tpointer to idx'th CPU type info, or default type info\n *=====================================================================\n */\nstatic struct shr_type *\nshr_type_info_by_idx(int idx)\n{\n\tstruct shr_type\t*stp;\n\n\tfor (stp = shr_types; stp; stp = stp->next) {\n\t\tif (stp->sh_tidx == idx)\n\t\t\tbreak;\n\t}\n\tif (stp == NULL)\n\t\tstp = shr_types;\n\treturn stp;\n}\n\n\n\n/*\n *=====================================================================\n * shr_type_info_by_name(name) - Look up CPU type info by type name\n * Entry:\tname = desired CPU type name\n * Returns:\tpointer to matching CPU type info, or default type\n *=====================================================================\n */\nstatic struct shr_type *\nshr_type_info_by_name(const char* name)\n{\n\tstruct shr_type\t*stp;\n\n\tfor (stp = shr_types; stp; stp = stp->next) {\n\t\tif (strcmp(stp->name, name) == 0)\n\t\t\tbreak;\n\t}\n\tif (stp == NULL)\n\t\tstp = shr_types;\n\treturn stp;\n}\n\n\n\n/*\n *=====================================================================\n * squirrel_shr_head(sinfo) - make backup of alterable shares data\n * Entry:\tsinfo = current server info\n *=====================================================================\n */\nstatic void\nsquirrel_shr_head(server_info *sinfo)\n{\n\tshare_head\t*sh;\n\tint\t\tcnt = shr_class_count;\n\n\tif (sinfo == NULL || (sh = sinfo->share_head) == NULL)\n\t\treturn;\n#define MM(x) memmove(sh->x + cnt, sh->x, cnt * sizeof(*sh->x))\n\tMM(sh_active);\n\tMM(sh_avail);\n\tMM(sh_contrib);\n\tMM(sh_total);\n#undef MM\n\tsquirrel_shr_tree(sh->root);\n}\n\n\n/*\n *=====================================================================\n * un_squirrel_shr_head(sinfo) - restore share values from backup\n * 
Entry:\tsinfo = server info\n *=====================================================================\n */\nstatic void\nun_squirrel_shr_head(server_info *sinfo)\n{\n\tshare_head\t*sh;\n\tint\t\tcnt = shr_class_count;\n\n\tif (sinfo == NULL || (sh = sinfo->share_head) == NULL)\n\t\treturn;\n#define MM(x) memmove(sh->x, sh->x + cnt, cnt * sizeof(*sh->x))\n\tMM(sh_active);\n\tMM(sh_avail);\n\tMM(sh_contrib);\n\tMM(sh_total);\n#undef MM\n\tun_squirrel_shr_tree(sh->root);\n}\n\n\n\n/*\n *=====================================================================\n * squirrel_shr_tree(root) - make backup of alterable shares data\n * Entry:\troot = root of (sub)tree of share_info to backup.\n *=====================================================================\n */\nstatic void\nsquirrel_shr_tree(share_info *root)\n{\n\tint\t\tcnt = shr_class_count;\n\n\tif (root == NULL)\n\t\treturn;\n#define\tMM(x) memmove(root->x + cnt, root->x, cnt * sizeof(*root->x))\n\tMM(share_inuse);\n\tMM(share_demand);\n#undef MM\n\tif (root->sibling)\n\t\tsquirrel_shr_tree(root->sibling);\n\tif (root->child)\n\t\tsquirrel_shr_tree(root->child);\n\troot->ratio_bak = root->ratio;\n}\n\n\n\n/*\n *=====================================================================\n * un_squirrel_shr_tree(root) - restore alterable share data\n * Entry:\troot = root of (sub)tree of share_info to restore.\n *=====================================================================\n */\nstatic void\nun_squirrel_shr_tree(share_info *root)\n{\n\tint\t\tcnt = shr_class_count;\n\n\tif (root == NULL)\n\t\treturn;\n#define\tMM(x) memmove(root->x, root->x + cnt, cnt * sizeof(*root->x))\n\tMM(share_inuse);\n\tMM(share_demand);\n#undef MM\n\tif (root->sibling)\n\t\tun_squirrel_shr_tree(root->sibling);\n\tif (root->child)\n\t\tun_squirrel_shr_tree(root->child);\n\troot->ratio = root->ratio_bak;\n}\n\n\n\n/*\n *=====================================================================\n * pick_next_job(policy, jobs, pnfilter, si) - 
Slightly modified version\n *\t\tof Altair's extract_fairshare. We add an additional job\n *\t\tcheck using the pnfilter function\n * Entry:\tpolicy = current scheduler policy structure\n *\t\tjobs = list of jobs to search\n *\t\tpnfilter = pointer to a binary job filter function\n *\t\tsi = pointer to current most favored share\n * Returns:\tpointer to next job that matches search criteria, else NULL\n *=====================================================================\n */\nstatic resource_resv *\npick_next_job(status *policy, resource_resv **jobs, pick_next_filter pnfilter, share_info *si)\n{\n\tresource_resv *good = NULL;\t/* job with the min usage / percentage */\n\tint cmp;\t\t\t/* comparison value of two jobs */\n\tint i;\n\n\tif (policy == NULL || jobs == NULL || pnfilter == NULL)\n\t\treturn NULL;\n\n\tfor (i = 0; jobs[i] != NULL; i++) {\n\t\tif (jobs[i]->is_job && jobs[i]->job != NULL) {\n\t\t\tif (!jobs[i]->can_not_run && in_runnable_state(jobs[i]) &&\n\t\t\t\tpnfilter(jobs[i], si)) {\n\t\t\t\tif (!policy->fair_share)\n\t\t\t\t\treturn jobs[i];\n\n\t\t\t\tif (good == NULL) {\n\t\t\t\t\tgood = jobs[i];\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\t/*\n\t\t\t\t * Restrict share comparisons to same job sort level.\n\t\t\t\t */\n\t\t\t\tif (multi_sort(good, jobs[i]) != 0) {\n#if NAS_DEBUG\n\t\t\t\t\tprintf(\"%s: stopped at %s vs. 
%s\\n\",\n\t\t\t\t\t\t__func__, good->name, jobs[i]->name);\n\t\t\t\t\tfflush(stdout);\n#endif\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tif (good->job->ginfo != jobs[i]->job->ginfo) {\n\t\t\t\t\tcmp = compare_path(good->job->ginfo->gpath,\n\t\t\t\t\t\tjobs[i]->job->ginfo->gpath);\n\t\t\t\t\tif (cmp > 0)\n\t\t\t\t\t\tgood = jobs[i];\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\treturn good;\n}\n\n\n\n/*\n *=====================================================================\n * job_filter_hwy149(resv, si) - binary job filter\n * Entry:\tresv = job\n *\t\tsi = pointer to current most favored share\n * Returns:\t1 if job passes filter, else 0\n *=====================================================================\n */\n#ifdef\tNAS_HWY149\nstatic int\njob_filter_hwy149(resource_resv *resv, share_info *si)\n{\n\tif (resv == NULL || resv->job == NULL)\n\t\treturn 0;\n\n\tif (resv->job->priority == NAS_HWY149 ||\n\t\tresv->job->NAS_pri == NAS_HWY149)\n\t\treturn 1;\n\n\treturn 0;\n}\n#endif\n\n\n\n/*\n *=====================================================================\n * job_filter_dedres(resv, si) - binary job filter\n * Entry:\tresv = job\n *\t\tsi = pointer to current most favored share\n * Returns:\t1 if job passes filter, else 0\n *=====================================================================\n */\nstatic int\njob_filter_dedres(resource_resv *resv, share_info *si)\n{\n\tif (resv == NULL)\n\t\treturn 0;\n\n\tif (site_is_queue_topjob_set_aside(resv) &&\n\t\tnum_topjobs_per_queues < conf.per_queues_topjobs)\n\t\treturn 1;\n\n\treturn 0;\n}\n\n\n\n/*\n *=====================================================================\n * job_filter_hwy101(resv, si) - binary job filter\n * Entry:\tresv = job\n *\t\tsi = pointer to current most favored share\n * Returns:\t1 if job passes filter, else 0\n *=====================================================================\n */\n#ifdef\tNAS_HWY101\nstatic int\njob_filter_hwy101(resource_resv *resv, share_info *si)\n{\n\tif (resv == 
NULL || resv->job == NULL)\n\t\treturn 0;\n\n\tif (resv->job->priority == NAS_HWY101 ||\n\t\tresv->job->NAS_pri == NAS_HWY101)\n\t\treturn 1;\n\n\treturn 0;\n}\n#endif\n\n\n\n/*\n *=====================================================================\n * job_filter_normal(resv, si) - binary job filter\n * Entry:\tresv = job\n *\t\tsi = pointer to current most favored share\n * Returns:\t1 if job passes filter, else 0\n *=====================================================================\n */\nstatic int\njob_filter_normal(resource_resv *resv, share_info *si)\n{\n\tif (resv == NULL || resv->job == NULL)\n\t\treturn 0;\n\n\tif (si == NULL || resv->job->sh_info == NULL)\n\t\t/* Not using shares */\n\t\treturn 1;\n\tif (resv->job->sh_info->leader == si &&\n\t\t!site_is_queue_topjob_set_aside(resv))\n\t\treturn 1;\n\n\treturn 0;\n}\n\n\n\n/*\n *=====================================================================\n *=====================================================================\n */\n\n\n/* start localmod 030 */\n/*\n *=====================================================================\n * check_for_cycle_interrupt(do_logging) - Check if a cycle interrupt\n *\t\thas been requested.\n * Entry:\tdo_logging = whether to print to scheduler log\n * Returns:\t1 if cycle should be interrupted\n *\t\t0 if cycle should continue\n *=====================================================================\n */\nint\ncheck_for_cycle_interrupt(int do_logging)\n{\n\tif (!do_soft_cycle_interrupt && !do_hard_cycle_interrupt) {\n\t\treturn 0;\n\t}\n\n\tif (!do_hard_cycle_interrupt &&\n\t    consecutive_interrupted_cycles >= conf.max_intrptd_cycles) {\n\t\treturn 0;\n\t}\n\n\tif (do_hard_cycle_interrupt ||\n\t    time(NULL) >=\n\t\tinterrupted_cycle_start_time + conf.min_intrptd_cycle_length) {\n\t\tif (do_logging)\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_SERVER, LOG_DEBUG, __func__,\n\t\t\t\t\"Short circuit of this cycle\");\n\n\t\treturn 1;\n\t}\n\n\tif (do_logging) 
{\n\t\tsprintf(log_buffer, \"Too early to short circuit (%ds elapsed, need %ds)\",\n\t\t\t(int)(time(NULL) - interrupted_cycle_start_time),\n\t\t\tconf.min_intrptd_cycle_length);\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_SERVER, LOG_DEBUG, __func__, log_buffer);\n\t}\n\n\treturn 0;\n}\n/* end localmod 030 */\n\n#endif /* NAS */\n// clang-format on"
  },
  {
    "path": "src/scheduler/site_code.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * site_code.h - Site additions to scheduler code\n */\n\n#ifdef NAS\n#define SHARE_FILE \"shares\"\n#define SORTED_FILE \"sortedjobs\"\n\nextern int site_bump_topjobs(resource_resv *resv, double delta);\nextern int site_check_cpu_share(server_info *, status *, resource_resv *);\nextern time_t site_decode_time(const char *val);\nextern int site_dup_shares(server_info *, server_info *);\nextern sh_amt *site_dup_share_amts(sh_amt *oldp);\nextern share_info *site_find_alloc_share(server_info *, char *);\nextern void site_free_shares(server_info *);\nextern double site_get_share(resource_resv *);\nextern void site_init_shares(server_info *sinfo);\nextern int site_is_queue_topjob_set_aside(resource_resv *resv);\nextern int site_is_share_king(status *policy);\nextern void site_list_shares(FILE *fp, server_info *, const char *pfx, int);\nextern void site_list_jobs(server_info *sinfo, resource_resv **rarray);\nextern int site_parse_shares(char *fname);\nextern resource_resv *site_find_runnable_res(resource_resv **resresv_arr);\nextern void site_resort_jobs(resource_resv *);\nextern void site_restore_users(void);\nextern void site_save_users(void);\nextern void site_set_job_share(resource_resv *resresv);\nextern void site_set_NAS_pri(job_info *, time_t, long);\nextern void site_set_node_share(node_info *ninfo, schd_resource *res);\nextern int 
site_set_share_head(server_info *sinfo);\nextern void site_set_share_type(server_info *, resource_resv *);\nextern int site_should_backfill_with_job(status *policy, server_info *sinfo, resource_resv *resresv, int ntj, int nqtj, schd_error *err);\nextern int site_tidy_server(server_info *sinfo);\nextern void site_update_on_end(server_info *, queue_info *, resource_resv *);\nextern void site_update_on_run(server_info *, queue_info *, resource_resv *,\n\t\t\t       int flag, nspec **);\nextern void site_vnode_inherit(node_info **);\nextern int check_for_cycle_interrupt(int);\n\nextern int num_topjobs_per_queues;\n\nextern int do_soft_cycle_interrupt;\nextern int do_hard_cycle_interrupt;\nextern int consecutive_interrupted_cycles;\nextern time_t interrupted_cycle_start_time;\n\n#endif\n"
  },
  {
    "path": "src/scheduler/site_data.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * site_data.h - Site additions to scheduler data types\n */\n\n#ifdef NAS\n\ntypedef sh_amt sh_amt_array[J_TYPE_COUNT];\n\nstruct share_info {\n\tchar *name;\t    /* Name for share group */\n\tshare_info *parent; /* Pointers to build share tree */\n\tshare_info *sibling;\n\tshare_info *child;\n\tshare_info *leader;\t\t     /* group owning share this group uses */\n\tshare_info *tptr;\t\t     /* temp link for during tree manip */\n\tsize_t size;\t\t\t     /* total size of struct, less name */\n\tint am_clone;\t\t\t     /* true if we are clone of another */\n\tint lineno;\t\t\t     /* line number from shares file */\n\tint topjob_count;\t\t     /* jobs considered this cycle */\n\tint none_left;\t\t\t     /* all jobs for share considered */\n\tenum pattern_type {\t\t     /* what type of pattern is name */\n\t\t\t    PATTERN_NONE = 0 /* not a pattern -> exact match */\n\t\t\t    ,\n\t\t\t    PATTERN_COMBINED = 1 /* pattern, usage all lumped together */\n\t\t\t    ,\n\t\t\t    PATTERN_SEPARATE = 2 /* pattern, record usage for each */\n\t} pattern_type;\n\tregex_t pattern;  /* name compiled into a regex */\n\tdouble ratio;\t  /* current use / allocation */\n\tdouble ratio_bak; /* backup copy of ratio */\n\t/* localmod 154 */\n\tdouble ratio_max;\t    /* max ratio seen during calendaring */\n\tdouble tj_cpu_cost;\t    /* how much CPU time has 
been consumed\n\t\t\t\t\t * putting top jobs on calendar */\n\tsh_amt *share_gross;\t    /* group's gross share, if specified */\n\tsh_amt *share_net;\t    /* gross minus children's gross */\n\tsh_amt *share_ncpus;\t    /* share, as CPU count */\n\tsh_amt_array *share_inuse;  /* current CPU use by this group */\n\tsh_amt_array *share_demand; /* current CPU unmet demand */\n};\n\nstruct share_head {\n\tshare_info *root;   /* root of share tree */\n\tshare_head *prev;   /* tree this was cloned from */\n\tsh_amt *sh_active;  /* CPU counts in use */\n\tsh_amt *sh_avail;   /* CPU counts not in use */\n\tsh_amt *sh_contrib; /* CPU counts that can be borrowed */\n\tsh_amt *sh_total;   /* total allocatable CPU counts */\n};\n\nstruct site_user_info {\n\tstruct site_user_info *next;\t/* Linked list */\n\tsch_resource_t current_use;\t/* Total accrual rate normal queues */\n\tsch_resource_t current_use_pqt; /* Accrual in set-aside queues */\n\tsch_resource_t saved_cu;\t/* Saved current_use */\n\tsch_resource_t saved_cup;\t/* Saved current_use_pqt */\n\tchar user_name[1];\t\t/* Dummy length */\n};\n\n#endif /* NAS */\n"
  },
  {
    "path": "src/scheduler/sort.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    sort.c\n *\n * @brief\n * \t\tsort.c - This file will hold the compare functions used by qsort\n *\t\tto sort the jobs\n *\n * Functions included are:\n * \tcmpres()\n * \tcmp_placement_sets()\n * \tcmp_nspec()\n * \tcmp_queue_prio_dsc()\n * \tcmp_events()\n * \tcmp_fairshare()\n * \tcmp_preempt_priority_asc()\n * \tcmp_preempt_stime_asc()\n * \tcmp_preemption()\n * \tmulti_sort()\n * \tcmp_job_sort_formula()\n * \tmulti_node_sort()\n * \tmulti_nodepart_sort()\n * \tresresv_sort_cmp()\n * \tnode_sort_cmp()\n * \tcmp_sort()\n * \tfind_nodepart_amount()\n * \tfind_node_amount()\n * \tfind_resresv_amount()\n * \tcmp_node_host()\n * \tcmp_aoe()\n * \tcmp_job_preemption_time_asc()\n * \tsort_jobs()\n * \tswapfunc()\n * \tmed3()\n * \tqsort()\n *\n */\n#include <pbs_config.h>\n\n#include \"check.h\"\n#include \"constant.h\"\n#include \"data_types.h\"\n#include \"fairshare.h\"\n#include \"fifo.h\"\n#include \"globals.h\"\n#include \"misc.h\"\n#include \"node_info.h\"\n#include \"resource.h\"\n#include \"resource_resv.h\"\n#include \"server_info.h\"\n#include \"sort.h\"\n#include <errno.h>\n#include <log.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\n#ifdef NAS\n#include \"site_code.h\"\n#endif\n\n/**\n * @brief\n *\t\tcompare two new numerical resource numbers for a descending sort\n *\n * 
@param[in]\tr1\t-\tresource 1\n * @param[in]\tr2\t-\tresource 2\n *\n * @return\tint\n * @retval\t-1\t: if r1 < r2\n * @retval\t0 \t: if r1 == r2\n * @retval\t1  \t: if r1 > r2\n */\nint\ncmpres(sch_resource_t r1, sch_resource_t r2)\n{\n\tif (r1 == SCHD_INFINITY_RES && r2 == SCHD_INFINITY_RES)\n\t\treturn 0;\n\tif (r1 == SCHD_INFINITY_RES)\n\t\treturn -1;\n\tif (r2 == SCHD_INFINITY_RES)\n\t\treturn 1;\n\tif (r1 < r2)\n\t\treturn -1;\n\tif (r1 == r2)\n\t\treturn 0;\n\n\treturn 1;\n}\n\n/**\n * @brief\n *\t\tcmp_placement_sets - sort placement sets by\n *\t\t\ttotal cpus\n *\t\t\ttotal memory\n *\t\t\tfree cpus\n *\t\t\tfree memory\n *\n * @param[in]\tv1\t-\tnode partition 1\n * @param[in]\tv2\t-\tnode partition 2\n *\n * @return\tint\n * @retval\t-1\t: if v1 < v2\n * @retval\t0 \t: if v1 == v2\n * @retval\t1  \t: if v1 > v2\n */\nint\ncmp_placement_sets(const void *v1, const void *v2)\n{\n\tnode_partition *np1, *np2;\n\tschd_resource *ncpus1, *ncpus2;\n\tschd_resource *mem1, *mem2;\n\tint rc = 0;\n\n\tif (v1 == NULL && v2 == NULL)\n\t\treturn 0;\n\n\telse if (v1 == NULL && v2 != NULL)\n\t\treturn -1;\n\n\telse if (v1 != NULL && v2 == NULL)\n\t\treturn 1;\n\n\tnp1 = *((node_partition **) v1);\n\tnp2 = *((node_partition **) v2);\n\n\tncpus1 = find_resource(np1->res, allres[\"ncpus\"]);\n\tncpus2 = find_resource(np2->res, allres[\"ncpus\"]);\n\n\tif (ncpus1 != NULL && ncpus2 != NULL)\n\t\trc = cmpres(ncpus1->avail, ncpus2->avail);\n\n\tif (!rc) {\n\t\tmem1 = find_resource(np1->res, allres[\"mem\"]);\n\t\tmem2 = find_resource(np2->res, allres[\"mem\"]);\n\n\t\tif (mem1 != NULL && mem2 != NULL)\n\t\t\trc = cmpres(mem1->avail, mem2->avail);\n\t}\n\n\tif (!rc) {\n\t\tif (ncpus1 != NULL && ncpus2 != NULL)\n\t\t\trc = cmpres(dynamic_avail(ncpus1), dynamic_avail(ncpus2));\n\t}\n\n\tif (!rc) {\n\t\tif (mem1 != NULL && mem2 != NULL)\n\t\t\trc = cmpres(dynamic_avail(mem1), dynamic_avail(mem2));\n\t}\n\n\treturn rc;\n}\n\n/**\n * @brief\n * \t\tcmp_nspec - sort nspec by sequence 
number\n */\nbool\ncmp_nspec(const nspec *n1, const nspec *n2)\n{\n\tif (n1->seq_num < n2->seq_num)\n\t\treturn true;\n\telse if (n1->seq_num > n2->seq_num)\n\t\treturn false;\n\telse\n\t\treturn cmp_nspec_by_sub_seq(n1, n2);\n}\n\n/**\n * @brief\n * \t\tcmp_nspec_by_sub_seq - sort nspec by sub sequence number\n */\nbool\ncmp_nspec_by_sub_seq(const nspec *n1, const nspec *n2)\n{\n\tif (n1->sub_seq_num < n2->sub_seq_num)\n\t\treturn true;\n\telse\n\t\treturn false;\n}\n\n/**\n * @brief\n *      cmp_queue_prio_dsc - sort queues in descending priority\n *\n * @param[in]\tq1\t-\tqueue_info 1\n * @param[in]\tq2\t-\tqueue_info 2\n *\n * @return\tbool\n * @retval\ttrue\t: if q1's priority > q2's priority\n * @retval\tfalse\t: otherwise\n */\nbool\ncmp_queue_prio_dsc(const queue_info *q1, const queue_info *q2)\n{\n\treturn (q2->priority < q1->priority);\n}\n\n/**\n * @brief\n *\t\tcmp_events - sort jobs/resvs into a timeline of the next event\n *\t\tto happen: running jobs ending, advanced reservations starting\n *\t\tor ending\n *\n * @param[in]\tv1\t-\tresource_resv 1\n * @param[in]\tv2\t-\tresource_resv 2\n *\n * @return\tint\n * @retval\t-1\t: if v1 < v2\n * @retval\t0 \t: if v1 == v2\n * @retval\t1  \t: if v1 > v2\n */\nint\ncmp_events(const void *v1, const void *v2)\n{\n\tresource_resv *r1, *r2;\n\ttime_t t1, t2;\n\tint run1, run2;\t\t\t    /* are r1 and r2 in runnable states? */\n\tint end_event1 = 0, end_event2 = 0; /* are r1 and r2 end events? 
*/\n\n\tr1 = *((resource_resv **) v1);\n\tr2 = *((resource_resv **) v2);\n\n\tif (r1->start != UNSPECIFIED && r2->start == UNSPECIFIED)\n\t\treturn -1;\n\n\tif (r1->start == UNSPECIFIED && r2->start == UNSPECIFIED)\n\t\treturn 0;\n\n\tif (r1->start == UNSPECIFIED && r2->start != UNSPECIFIED)\n\t\treturn 1;\n\n\trun1 = in_runnable_state(r1);\n\trun2 = in_runnable_state(r2);\n\n\tif (r1->start >= r1->server->server_time && run1)\n\t\tt1 = r1->start;\n\telse {\n\t\tend_event1 = 1;\n\t\tt1 = r1->end;\n\t}\n\n\tif (r2->start >= r2->server->server_time && run2)\n\t\tt2 = r2->start;\n\telse {\n\t\tend_event2 = 1;\n\t\tt2 = r2->end;\n\t}\n\n\tif (t1 < t2)\n\t\treturn -1;\n\telse if (t1 == t2) {\n\t\t/* if event times are equal, this means that events which consume\n\t\t * resources and release resources happen at the same time.  We need to\n\t\t * make sure events which release resources come first, so the events\n\t\t * which consume them, can indeed do that.\n\t\t */\n\t\tif (end_event1)\n\t\t\treturn -1;\n\t\telse if (end_event2)\n\t\t\treturn 1;\n\t\telse\n\t\t\treturn 0;\n\t} else\n\t\treturn 1;\n}\n\n/**\n * @brief\n *\t\tcmp_fair_share - compare on fair share percentage only.\n *\t\t\t This is for strict priority.\n *\n * @param[in]\tj1\t-\tresource_resv 1\n * @param[in]\tj2\t-\tresource_resv 2\n *\n * @return\tint\n * @retval\t1\t: if j1 < j2\n * @retval\t0 \t: if j1 == j2\n * @retval\t-1  : if j1 > j2\n */\nint\ncmp_fairshare(const void *j1, const void *j2)\n{\n\tresource_resv *r1 = *(resource_resv **) j1;\n\tresource_resv *r2 = *(resource_resv **) j2;\n\tif (r1->job != NULL && r1->job->ginfo != NULL &&\n\t    r2->job != NULL && r2->job->ginfo != NULL)\n\t\treturn compare_path(r1->job->ginfo->gpath, r2->job->ginfo->gpath);\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tcmp_preempt_priority_asc - used to sort jobs in ascending preemption\n *\t\t\t\t   priority\n *\n * @param[in]\tj1\t-\tresource_resv 1\n * @param[in]\tj2\t-\tresource_resv 2\n *\n * @return\tint\n * 
@retval\t-1\t: if j1 < j2\n * @retval\t0 \t: if j1 == j2\n * @retval\t1  : if j1 > j2\n */\nint\ncmp_preempt_priority_asc(const void *j1, const void *j2)\n{\n\tif ((*(resource_resv **) j1)->job->preempt < (*(resource_resv **) j2)->job->preempt)\n\t\treturn -1;\n\telse if ((*(resource_resv **) j1)->job->preempt > (*(resource_resv **) j2)->job->preempt)\n\t\treturn 1;\n\telse {\n\t\tif ((*(resource_resv **) j1)->rank < (*(resource_resv **) j2)->rank)\n\t\t\treturn -1;\n\t\telse if ((*(resource_resv **) j1)->rank > (*(resource_resv **) j2)->rank)\n\t\t\treturn 1;\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n *      cmp_preempt_stime_asc - used to sort jobs in ascending preemption\n *                              priority and start time\n *\n * @param[in]\tj1\t-\tresource_resv 1\n * @param[in]\tj2\t-\tresource_resv 2\n *\n * @return\tint\n * @retval\t-1\t: if j1 < j2\n * @retval\t0 \t: if j1 == j2\n * @retval\t1   : if j1 > j2\n */\nint\ncmp_preempt_stime_asc(const void *j1, const void *j2)\n{\n\tif ((*(resource_resv **) j1)->job->preempt < (*(resource_resv **) j2)->job->preempt)\n\t\treturn -1;\n\telse if ((*(resource_resv **) j1)->job->preempt > (*(resource_resv **) j2)->job->preempt)\n\t\treturn 1;\n\telse {\n\t\t/* sort by start time */\n\t\tif ((*(resource_resv **) j1)->job->stime > (*(resource_resv **) j2)->job->stime)\n\t\t\treturn -1;\n\t\tif ((*(resource_resv **) j1)->job->stime < (*(resource_resv **) j2)->job->stime)\n\t\t\treturn 1;\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n *  \tcmp_preemption - compare first by preemption priority and then,\n *                   \t\tif the jobs had been preempted, by the order preempted.\n *\n * @param[in]\tr1\t-\tresource_resv 1\n * @param[in]\tr2\t-\tresource_resv 2\n *\n * @return\tint\n * @retval\t1\t: if r1 < r2\n * @retval\t0 \t: if r1 == r2\n * @retval\t-1  : if r1 > r2\n */\nint\ncmp_preemption(resource_resv *r1, resource_resv *r2)\n{\n\tif (r1 != NULL && r2 == NULL)\n\t\treturn -1;\n\n\tif (r1 == NULL && r2 == 
NULL)\n\t\treturn 0;\n\n\tif (r1 == NULL && r2 != NULL)\n\t\treturn 1;\n\n\t/* error, allow some other sort key to take over */\n\tif (r1->job == NULL || r2->job == NULL)\n\t\treturn 0;\n\n\tif (r1->job->preempt < r2->job->preempt)\n\t\treturn 1;\n\telse if (r1->job->preempt > r2->job->preempt)\n\t\treturn -1;\n\n\treturn 0;\n}\n\n/* multi keyed sorting\n * call compare function to sort for the first key\n * if the two keys are equal, call the compare function for the second key\n * repeat for all keys\n */\n\n/**\n * @brief\n * \t\tmulti_sort - a multi keyed sorting compare function for jobs\n *\n * @param[in] r1 - job to compare\n * @param[in] r2 - job to compare\n *\n * @return int\n * @retval -1, 0, 1 : standard qsort() cmp\n */\nint\nmulti_sort(resource_resv *r1, resource_resv *r2)\n{\n\tint ret = 0;\n\n\tfor (const auto &si : *cstat.sort_by) {\n\t\tret = resresv_sort_cmp(r1, r2, si);\n\t\tif (ret)\n\t\t\tbreak;\n\t}\n\n\treturn ret;\n}\n\n/**\n * @brief\n * \t\tcmp_job_sort_formula - used to sort jobs based on their evaluated\n *\t\t\t\t      job_sort_formula value (in DESC order)\n * @param[in] j1 - job to compare\n * @param[in] j2 - job to compare\n *\n * @return int\n * @retval -1, 0, 1 : standard qsort() cmp\n */\nint\ncmp_job_sort_formula(const void *j1, const void *j2)\n{\n\tresource_resv *r1 = *(resource_resv **) j1;\n\tresource_resv *r2 = *(resource_resv **) j2;\n\n\tif (r1->job->formula_value < r2->job->formula_value)\n\t\treturn 1;\n\tif (r1->job->formula_value > r2->job->formula_value)\n\t\treturn -1;\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tmulti_node_sort - a multi keyed sorting compare function for nodes\n *\n * @param[in] n1 - node1 to compare\n * @param[in] n2 - node2 to compare\n *\n * @return int\n *\t@retval -1, 0, 1 - standard qsort() cmp\n */\nint\nmulti_node_sort(const void *n1, const void *n2)\n{\n\tint ret = 0;\n\n\tfor (const auto &si : *cstat.node_sort) {\n\t\tret = node_sort_cmp(n1, n2, si, SOBJ_NODE);\n\t\tif 
(ret)\n\t\t\tbreak;\n\t}\n\n\tif (ret == 0) {\n\t\tconst node_info *r1 = *(static_cast<const node_info *const *>(n1));\n\t\tconst node_info *r2 = *(static_cast<const node_info *const *>(n2));\n\t\tif (r1->rank < r2->rank)\n\t\t\treturn -1;\n\t\telse\n\t\t\treturn 1;\n\t}\n\n\treturn ret;\n}\n\n/**\n * @brief\n * \t\tqsort() compare function for multi-resource node partition sorting\n *\n * @param[in] n1 - nodepart 1 to compare\n * @param[in] n2 - nodepart 2 to compare\n *\n * @return int\n *\t@retval -1, 0, 1 - standard qsort() cmp\n */\nint\nmulti_nodepart_sort(const void *n1, const void *n2)\n{\n\tint ret = 0;\n\n\tfor (const auto &si : *cstat.node_sort) {\n\t\tret = node_sort_cmp(n1, n2, si, SOBJ_PARTITION);\n\t\tif (ret)\n\t\t\tbreak;\n\t}\n\tif (ret == 0) {\n\t\tconst node_partition *r1 = *(static_cast<const node_partition *const *>(n1));\n\t\tconst node_partition *r2 = *(static_cast<const node_partition *const *>(n2));\n\t\tif (r1->rank < r2->rank)\n\t\t\treturn -1;\n\t\telse\n\t\t\treturn 1;\n\t}\n\treturn ret;\n}\n\n/**\n * @brief\n *\t\tmulti_bkt_sort - a multi keyed sorting compare function for node buckets\n *\n * @param[in] b1 - bkt1 to compare\n * @param[in] b2 - bkt2 to compare\n *\n * @return int\n *\t@retval -1, 0, 1 - standard qsort() cmp\n */\nint\nmulti_bkt_sort(const void *b1, const void *b2)\n{\n\tint ret = 0;\n\n\tfor (const auto &si : *cstat.node_sort) {\n\t\tret = node_sort_cmp(b1, b2, si, SOBJ_BUCKET);\n\t\tif (ret)\n\t\t\tbreak;\n\t}\n\n\treturn ret;\n}\n\n/**\n * @brief\n * \t\tcompares two jobs using a sort defined by a sort_info\n *\t\tThis is used downstream by qsort()\n *\n * @param[in] r1\t-\tfirst job to compare\n * @param[in] r2 \t-\tsecond job to compare\n * @param[in] si \t- \tsort_info describing how to sort the jobs\n *\n * @returns -1, 0, 1 : standard qsort() cmp\n *\n */\nint\nresresv_sort_cmp(resource_resv *r1, resource_resv *r2, const struct sort_info &si)\n{\n\tsch_resource_t v1, v2;\n\n\tif (r1 != NULL && r2 == 
NULL)\n\t\treturn -1;\n\n\tif (r1 == NULL && r2 == NULL)\n\t\treturn 0;\n\n\tif (r1 == NULL && r2 != NULL)\n\t\treturn 1;\n\n\tv1 = find_resresv_amount(r1, si.res_name, si.def);\n\tv2 = find_resresv_amount(r2, si.res_name, si.def);\n\n\tif (v1 == v2)\n\t\treturn 0;\n\n\tif (si.order == ASC) {\n\t\tif (v1 < v2)\n\t\t\treturn -1;\n\t\telse\n\t\t\treturn 1;\n\t} else {\n\t\tif (v1 < v2)\n\t\t\treturn 1;\n\t\telse\n\t\t\treturn -1;\n\t}\n}\n\n/**\n * @brief\n * \t\tcompares either two nodes or node_partitions based on a resource,\n *              Ascending/Descending, and what part of the resource to use (total, unused, etc)\n *\n * @param[in] vp1 \t\t- the node/parts/bkts to compare\n * @param[in] vp2 \t\t- the node/parts/bkts to compare\n * @param[in] si \t\t- sort info describing how to sort nodes\n * @param[in] obj_type \t- node or node_partition\n *\n * @return int\n * @retval -1, 0, 1 : standard qsort() cmp\n */\nint\nnode_sort_cmp(const void *vp1, const void *vp2, const struct sort_info &si, const enum sort_obj_type obj_type)\n{\n\tsch_resource_t v1, v2;\n\tnode_info **n1 = NULL;\n\tnode_info **n2 = NULL;\n\tnode_partition **np1 = NULL;\n\tnode_partition **np2 = NULL;\n\tnode_bucket **b1 = NULL;\n\tnode_bucket **b2 = NULL;\n\n\tif (vp1 != NULL && vp2 == NULL)\n\t\treturn -1;\n\n\tif (vp1 == NULL && vp2 == NULL)\n\t\treturn 0;\n\n\tif (vp1 == NULL && vp2 != NULL)\n\t\treturn 1;\n\n\tswitch (obj_type) {\n\t\tcase SOBJ_NODE:\n\t\t\tn1 = (node_info **) vp1;\n\t\t\tn2 = (node_info **) vp2;\n\t\t\tv1 = find_node_amount(*n1, si.res_name, si.def, si.res_type);\n\t\t\tv2 = find_node_amount(*n2, si.res_name, si.def, si.res_type);\n\t\t\tbreak;\n\t\tcase SOBJ_PARTITION:\n\t\t\tnp1 = (node_partition **) vp1;\n\t\t\tnp2 = (node_partition **) vp2;\n\t\t\tv1 = find_nodepart_amount(*np1, si.res_name, si.def, si.res_type);\n\t\t\tv2 = find_nodepart_amount(*np2, si.res_name, si.def, si.res_type);\n\t\t\tbreak;\n\t\tcase SOBJ_BUCKET:\n\t\t\tb1 = (node_bucket **) vp1;\n\t\t\tb2 = 
(node_bucket **) vp2;\n\t\t\tv1 = find_bucket_amount(*b1, si.res_name, si.def, si.res_type);\n\t\t\tv2 = find_bucket_amount(*b2, si.res_name, si.def, si.res_type);\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\treturn 0;\n\t\t\tbreak;\n\t}\n\n\tif (v1 == v2)\n\t\treturn 0;\n\n\tif (si.order == ASC) {\n\t\tif (v1 < v2)\n\t\t\treturn -1;\n\t\telse if (v1 > v2)\n\t\t\treturn 1;\n\t} else {\n\t\tif (v1 < v2)\n\t\t\treturn 1;\n\t\telse if (v1 > v2)\n\t\t\treturn -1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tentrypoint into job sort used by qsort\n *\n *\t\t1. Sort all preemption priority jobs in the front\n *\t\t2. Sort all preempted jobs in ascending order of their preemption time\n *\t\t3. Sort jobs according to their fairshare usage.\n *\t\t4. sort by unique rank to stabilize the sort\n *\n * @param[in]\tv1\t-\tresource_resv 1\n * @param[in]\tv2\t-\tresource_resv 2\n *\n * @return\t-1,0,1 : based on sorting function.\n */\nint\ncmp_sort(const void *v1, const void *v2)\n{\n\tint cmp;\n\tresource_resv *r1;\n\tresource_resv *r2;\n\n\tr1 = *((resource_resv **) v1);\n\tr2 = *((resource_resv **) v2);\n\n\tif (r1 != NULL && r2 == NULL)\n\t\treturn -1;\n\n\tif (r1 == NULL && r2 == NULL)\n\t\treturn 0;\n\n\tif (r1 == NULL && r2 != NULL)\n\t\treturn 1;\n\n\tif (in_runnable_state(r1) && !in_runnable_state(r2))\n\t\treturn -1;\n\telse if (in_runnable_state(r2) && !in_runnable_state(r1))\n\t\treturn 1;\n\t/* both jobs are runnable */\n\telse {\n\t\t/* sort based on preemption */\n\t\tcmp = cmp_preemption(r1, r2);\n\t\tif (cmp != 0)\n\t\t\treturn cmp;\n\n\t\tcmp = cmp_job_preemption_time_asc(&r1, &r2);\n\t\tif (cmp != 0)\n\t\t\treturn cmp;\n\n\t\t/* sort on the basis of job sort formula */\n\t\tcmp = cmp_job_sort_formula(&r1, &r2);\n\t\tif (cmp != 0)\n\t\t\treturn cmp;\n#ifndef NAS /* localmod 041 */\n\t\tif (r1->server->policy->fair_share) {\n\t\t\tcmp = cmp_fairshare(&r1, &r2);\n\t\t\tif (cmp != 0)\n\t\t\t\treturn cmp;\n\t\t}\n#endif /* localmod 041 */\n\n\t\t/* normal resource based 
sort */\n\t\tcmp = multi_sort(r1, r2);\n\t\tif (cmp != 0)\n\t\t\treturn cmp;\n\n\t\t/* stabilize the sort */\n\t\telse {\n\t\t\tif (r1->qrank < r2->qrank)\n\t\t\t\treturn -1;\n\t\t\telse if (r1->qrank > r2->qrank)\n\t\t\t\treturn 1;\n\t\t\tif (r1->rank < r2->rank)\n\t\t\t\treturn -1;\n\t\t\telse if (r1->rank > r2->rank)\n\t\t\t\treturn 1;\n\t\t\telse {\n\t\t\t\treturn 0;\n\t\t\t}\n\t\t}\n\t}\n}\n/**\n * @brief\n * \t\treturn resource values based on res_type for node partition\n *\n * @param[in] np \t\t- node partition\n * @param[in] res \t\t- resource name\n * @param[in] def \t\t- resource definition of res\n * @param[in] res_type \t- type of resource value to use\n *\n * @note\n * \t\tspecial case sorting \"resource\" SORT_PRIORITY is not meaningful for\n *       \tnode partitions.  0 will always be returned\n *\n * @return\tsch_resource_t\n */\nsch_resource_t\nfind_nodepart_amount(node_partition *np, const std::string &res, resdef *def,\n\t\t     enum resource_fields res_type)\n{\n\tschd_resource *nres;\n\n\tif (def != NULL)\n\t\tnres = find_resource(np->res, def);\n\telse\n\t\tnres = find_resource_by_str(np->res, res);\n\n\tif (nres != NULL) {\n\t\tif (res_type == RF_AVAIL)\n\t\t\treturn nres->avail;\n\t\telse if (res_type == RF_ASSN)\n\t\t\treturn nres->assigned;\n\t\telse if (res_type == RF_UNUSED)\n\t\t\treturn nres->avail - nres->assigned;\n\t\telse /* error */\n\t\t\treturn 0;\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\treturn resource values based on res_type for node bucket\n *\n * @param[in] bkt \t\t- node bucket\n * @param[in] res \t\t- resource name\n * @param[in] def \t\t- resource definition of res\n * @param[in] res_type \t- type of resource value to use\n *\n * @return\tsch_resource_t\n */\nsch_resource_t\nfind_bucket_amount(node_bucket *bkt, const std::string &res, resdef *def, enum resource_fields res_type)\n{\n\tschd_resource *nres;\n\n\tif (def != NULL)\n\t\tnres = find_resource(bkt->res_spec, def);\n\telse if (res == 
SORT_PRIORITY)\n\t\treturn bkt->priority;\n\telse\n\t\tnres = find_resource_by_str(bkt->res_spec, res);\n\n\tif (nres != NULL) {\n\t\tif (res_type == RF_AVAIL)\n\t\t\treturn nres->avail;\n\t\telse if (res_type == RF_ASSN)\n\t\t\treturn nres->assigned;\n\t\telse if (res_type == RF_UNUSED)\n\t\t\treturn nres->avail - nres->assigned;\n\t\telse /* error */\n\t\t\treturn 0;\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\treturn resource values based on res_type for a node\n *\n * @param[in] ninfo \t- node\n * @param[in] res \t\t- resource name or special case\n * @param[in] def \t\t- resource definition of res\n * @param[in] res_type \t- type of resource value to use\n *\n * @return sch_resource_t\n * @retval\t0\t: error\n */\nsch_resource_t\nfind_node_amount(node_info *ninfo, const std::string &res, resdef *def,\n\t\t enum resource_fields res_type)\n{\n\t/* def is NULL on special case sort keys */\n\tif (def != NULL) {\n\t\tschd_resource *nres;\n\t\tnres = find_resource(ninfo->res, def);\n\n\t\tif (nres != NULL) {\n\t\t\tif (nres->indirect_res != NULL)\n\t\t\t\tnres = nres->indirect_res;\n\t\t\tif (res_type == RF_AVAIL)\n\t\t\t\treturn nres->avail;\n\t\t\telse if (res_type == RF_ASSN)\n\t\t\t\treturn nres->assigned;\n\t\t\telse if (res_type == RF_UNUSED)\n\t\t\t\treturn nres->avail - nres->assigned;\n\t\t\telse /* error */\n\t\t\t\treturn 0;\n\t\t}\n\n\t} else if (res == SORT_PRIORITY)\n\t\treturn ninfo->priority;\n\telse if (res == SORT_USED_TIME)\n\t\treturn ninfo->last_used_time;\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tfind resource or special case sorting values for jobs\n *\n * @param[in] resresv \t- the job\n * @param[in] res \t\t- the resource/special case name\n * @param[in] def \t\t- the resource definition of res (NULL for special case)\n *\n * @return\tsch_resource_t\n * @retval\t0\t: on error\n */\nsch_resource_t\nfind_resresv_amount(resource_resv *resresv, const std::string &res, resdef *def)\n{\n\t/* def is NULL on special case sort keys */\nif 
(def != NULL) {\n\t\tresource_req *req;\n\n\t\treq = find_resource_req(resresv->resreq, def);\n\n\t\tif (req != NULL)\n\t\t\treturn req->amount;\n\t}\n\n\tif (res == SORT_JOB_PRIORITY)\n#ifdef NAS /* localmod 045 */\n\t\treturn (sch_resource_t) resresv->job->NAS_pri;\n#else\n\t\treturn (sch_resource_t) resresv->job->priority;\n#endif /* localmod 045 */\n\telse if (res == SORT_FAIR_SHARE && resresv->job->ginfo != NULL)\n\t\treturn (sch_resource_t) resresv->job->ginfo->tree_percentage;\n\telse if (res == SORT_PREEMPT)\n\t\treturn (sch_resource_t) resresv->job->preempt;\n#ifdef NAS\n\t/* localmod 034 */\n\telse if (res == SORT_ALLOC)\n\t\treturn (sch_resource_t) (100.0 * site_get_share(resresv));\n\t/* localmod 039 */\n\telse if (res == SORT_QPRI && resresv->job->queue != NULL)\n\t\treturn (sch_resource_t) resresv->job->queue->priority;\n\t/* localmod 040 */\n\telse if (res == SORT_NODECT) {\n\t\t/* return the node count - dpr */\n\t\tint ndct = resresv->job->nodect;\n\t\treturn (sch_resource_t) ndct;\n\t}\n#endif\n\treturn 0;\n}\n\n/**\n * @brief\n * \t \tsort nodes by resources_available.host\n *\n * @param[in]\tv1\t-\tnode_info 1\n * @param[in]\tv2\t-\tnode_info 2\n *\n * @return int\n * @retval -1, 0, 1 - standard qsort() cmp\n*/\nint\ncmp_node_host(const void *v1, const void *v2)\n{\n\tschd_resource *res1;\n\tschd_resource *res2;\n\tnode_info **n1;\n\tnode_info **n2;\n\tint rc = 0;\n\n\tn1 = (node_info **) v1;\n\tn2 = (node_info **) v2;\n\n\tres1 = find_resource((*n1)->res, allres[\"host\"]);\n\tres2 = find_resource((*n2)->res, allres[\"host\"]);\n\n\tif (res1 != NULL && res2 != NULL)\n\t\trc = strcmp(res1->orig_str_avail, res2->orig_str_avail);\n\n\t/* if the host is the same and we have a node_sort_key, preserve the sort */\n\tif (rc == 0 && !cstat.node_sort->empty())\n\t\treturn multi_node_sort(v1, v2);\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\t\tSorting function used with 'avoid_provision' policy\n *\n * @par Functionality:\n *\t\tThis function compares two 
nodes and determines the order by comparing\n *\t\tthe AOE instantiated on them with AOE requested by job (or reservation).\n *\t\tIf order cannot be determined then node_sort_key is used.\n *\n * @param[in]\tv1\t\t-\tpointer to const void\n * @param[in]\tv2\t\t-\tpointer to const void\n *\n * @return\tint\n * @retval\t -1 : v1 has precedence\n * @retval\t  0 : v1 and v2 both have equal precedence\n * @retval\t  1 : v2 has precedence\n *\n * @par Side Effects:\n *\t\tUnknown\n *\n * @par MT-safe: No\n *\n */\nint\ncmp_aoe(const void *v1, const void *v2)\n{\n\tnode_info **n1;\n\tnode_info **n2;\n\tint v1rank = 0, v2rank = 0; /* to reduce strcmp() calls */\n\tint ret;\n\n\tn1 = (node_info **) v1;\n\tn2 = (node_info **) v2;\n\n\t/* Between two nodes, one with an aoe and the other without, the one\n\t * with the aoe comes first.\n\t */\n\n\tif ((*n1)->current_aoe) {\n\t\tif (strcmp((*n1)->current_aoe, cmp_aoename) == 0)\n\t\t\tv1rank = 1;\n\t\telse\n\t\t\tv1rank = -1;\n\t}\n\n\tif ((*n2)->current_aoe) {\n\t\tif (strcmp((*n2)->current_aoe, cmp_aoename) == 0)\n\t\t\tv2rank = 1;\n\t\telse\n\t\t\tv2rank = -1;\n\t}\n\n\tret = v2rank - v1rank;\n\n\tif (ret == 0)\n\t\treturn multi_node_sort(v1, v2);\n\n\treturn ret;\n}\n\n/**\n * @brief\n * \t\tA function which sorts the jobs according to the time they are\n *        preempted (in ascending order)\n *\n * @param[in] j1 - job to compare.\n * @param[in] j2 - job to compare.\n *\n * @return\tint\n * @retval \t-1 : If j1 was preempted before j2.\n * @retval  0  : If both were preempted at the same time\n * @retval  1  : if j2 was preempted before j1.\n */\nint\ncmp_job_preemption_time_asc(const void *j1, const void *j2)\n{\n\tresource_resv *r1;\n\tresource_resv *r2;\n\n\tr1 = *((resource_resv **) j1);\n\tr2 = *((resource_resv **) j2);\n\n\tif (r1 == NULL && r2 == NULL)\n\t\treturn 0;\n\n\tif (r1 != NULL && r2 == NULL)\n\t\treturn -1;\n\n\tif (r1 == NULL && r2 != NULL)\n\t\treturn 1;\n\n\tif (r1->job == NULL || r2->job == NULL)\n\t\treturn 
0;\n\n\t/* If one job is preempted and second is not then preempted job gets priority\n\t * If both jobs are preempted, one which is preempted first gets priority\n\t */\n\tif (r1->job->time_preempted == UNSPECIFIED &&\n\t    r2->job->time_preempted == UNSPECIFIED)\n\t\treturn 0;\n\telse if (r1->job->time_preempted != UNSPECIFIED &&\n\t\t r2->job->time_preempted == UNSPECIFIED)\n\t\treturn -1;\n\telse if (r1->job->time_preempted == UNSPECIFIED &&\n\t\t r2->job->time_preempted != UNSPECIFIED)\n\t\treturn 1;\n\n\tif (r1->job->time_preempted < r2->job->time_preempted)\n\t\treturn -1;\n\telse if (r1->job->time_preempted > r2->job->time_preempted)\n\t\treturn 1;\n\treturn 0;\n}\n\n/**\n * @brief\n * cmp_resv_state\t- compare reservation state with RESV_BEING_ALTERED.\n *\n * @param[in] r1\t- reservation to compare.\n * @param[in] r2\t- reservation to compare.\n *\n * @return - int\n * @retval  1: If r2's state is RESV_BEING_ALTERED and r1's state is not.\n *          0: If both the reservation's states are not RESV_BEING_ALTERED.\n *         -1: If r1's state is RESV_BEING_ALTERED and r2's state is not.\n */\nint\ncmp_resv_state(const void *r1, const void *r2)\n{\n\tenum resv_states resv1_state;\n\tenum resv_states resv2_state;\n\n\tresv1_state = (*(resource_resv **) r1)->resv->resv_state;\n\tresv2_state = (*(resource_resv **) r2)->resv->resv_state;\n\n\tif (resv1_state != RESV_BEING_ALTERED && resv2_state == RESV_BEING_ALTERED)\n\t\treturn 1;\n\tif (resv2_state != RESV_BEING_ALTERED && resv1_state == RESV_BEING_ALTERED)\n\t\treturn -1;\n\telse\n\t\treturn 0;\n}\n\n/**\n * @brief\n * \t\tsort_jobs - This function sorts all jobs according to their preemption\n *      priority, preempted time and fairshare.\n *\t\tsort_jobs is called whenever we need to sort jobs on the basis of\n *\t\tvarious policies set in scheduler.\n * @param[in]\t\tpolicy\t-\tPolicy structure to decide whether to sort for fairshare or\n *                     \t\t\tnot. 
If yes, whether it should be according to\n *                     \t\t\tby_queue or round_robin.\n * @param[in,out]\tsinfo \t- \tServer info struct which contains all the jobs that need\n *                        \t\tsorting.\n * @return\tvoid\n */\nvoid\nsort_jobs(status *policy, server_info *sinfo)\n{\n\t/** sort jobs in such a way that Higher Priority jobs come on top\n\t * followed by preempted jobs and then normal jobs\n\t */\n\tif (policy->fair_share) {\n\t\t/** sort per queue basis and then use these jobs (combined from all the queues)\n\t\t * to select the next job.\n\t\t */\n\t\tif (policy->by_queue || policy->round_robin) {\n\t\t\tint job_index = 0;\n\t\t\t/* cycle through queues and sort them on the basis of preemption priority,\n\t\t\t * preempted jobs, and fairshare usage\n\t\t\t */\n\t\t\tfor (auto qinfo : sinfo->queues) {\n\t\t\t\tif (qinfo->sc.total > 0) {\n\t\t\t\t\tqsort(qinfo->jobs, qinfo->sc.total,\n\t\t\t\t\t      sizeof(resource_resv *), cmp_sort);\n\t\t\t\t}\n\t\t\t}\n\t\t\tfor (auto qinfo : sinfo->queues) {\n\t\t\t\tfor (int index = 0; index < qinfo->sc.total; index++) {\n\t\t\t\t\tsinfo->jobs[job_index] = qinfo->jobs[index];\n\t\t\t\t\tjob_index++;\n\t\t\t\t}\n\t\t\t}\n\t\t\tsinfo->jobs[job_index] = NULL;\n\t\t}\n\t\t/** Sort on entire complex **/\n\t\telse if (!policy->by_queue && !policy->round_robin) {\n\t\t\tqsort(sinfo->jobs, count_array(sinfo->jobs), sizeof(resource_resv *), cmp_sort);\n\t\t}\n\t} else if (policy->by_queue) {\n\t\tfor (auto qinfo : sinfo->queues) {\n\t\t\tqsort(qinfo->jobs, count_array(qinfo->jobs), sizeof(resource_resv *), cmp_sort);\n\t\t}\n\t\tqsort(sinfo->jobs, count_array(sinfo->jobs), sizeof(resource_resv *), cmp_sort);\n\t} else if (policy->round_robin) {\n\t\tif (sinfo->queue_list != NULL) {\n\t\t\tint queue_list_size = count_array(sinfo->queue_list);\n\t\t\tfor (int i = 0; i < queue_list_size; i++) {\n\t\t\t\tint queue_index_size = count_array(sinfo->queue_list[i]);\n\t\t\t\tfor (int j = 0; j < 
queue_index_size; j++) {\n\t\t\t\t\tqsort(sinfo->queue_list[i][j]->jobs, count_array(sinfo->queue_list[i][j]->jobs),\n\t\t\t\t\t      sizeof(resource_resv *), cmp_sort);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t} else\n\t\tqsort(sinfo->jobs, count_array(sinfo->jobs), sizeof(resource_resv *), cmp_sort);\n}\n"
  },
  {
    "path": "src/scheduler/sort.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _SORT_H\n#define _SORT_H\n\n/*\n *\tcompare two new numerical resource numbers\n *\n *\treturns -1 if r1 < r2\n *\t\t0  if r1 == r2\n *\t\t1  if r1 > r2\n */\nint cmpres(sch_resource_t r1, sch_resource_t r2);\n\n/*\n * cmp_nspec - sort nspec by sequence number\n *\n */\nbool cmp_nspec(const nspec *n1, const nspec *n2);\n\n/*\n * cmp_nspec_by_sub_seq - sort nspec by sub sequence number\n */\nbool cmp_nspec_by_sub_seq(const nspec *n1, const nspec *n2);\n\n/*\n *\tcmp_placement_sets - sort placement sets by\n *\t\t\ttotal cpus\n *\t\t\ttotal memory\n *\t\t\tfree cpus\n *\t\t\tfree memory\n */\nint cmp_placement_sets(const void *v1, const void *v2);\n\n/*\n * cmp_fairshare - compare based on compare_path()\n */\nint cmp_fairshare(const void *j1, const void *j2);\n\n/*\n *\n *      cmp_queue_prio_dsc - compare function used by qsort to sort queues\n *                           by descending priority\n *\n */\nbool cmp_queue_prio_dsc(const queue_info *q1, const queue_info *q2);\n\n/*\n *      cmp_fair_share - compare function for the fair share algorithm\n */\nint cmp_fair_share(const void *j1, const void *j2);\n\n/*\n *      cmp_preempt_priority_asc - used to sort jobs in ascending preemption\n *                                 priority\n */\nint cmp_preempt_priority_asc(const void *j1, const void *j2);\n\n/*\n *      cmp_preempt_stime_asc 
- used to sort jobs in ascending preemption\n *                              start time\n */\nint cmp_preempt_stime_asc(const void *j1, const void *j2);\n\n/*\n *      multi_sort - a multi keyed sorting compare function\n */\nint multi_sort(resource_resv *r1, resource_resv *r2);\n\n/*\n *\tcmp_job_sort_formula - used to sort jobs in descending order of their evaluated formula value\n */\nint cmp_job_sort_formula(const void *j1, const void *j2);\n\n/*\n *\n *      cmp_sort - entrypoint into job sort used by qsort\n */\nint cmp_sort(const void *v1, const void *v2);\n\n/*\n *      find_resresv_amount - find resource amount for jobs + special cases\n */\nsch_resource_t find_resresv_amount(resource_resv *resresv, const std::string &res, resdef *def);\n\n/*\n *      find_node_amount - find the resource amount for nodes + special cases\n */\nsch_resource_t find_node_amount(node_info *ninfo, const std::string &res, resdef *def, enum resource_fields res_type);\n\n/* return resource values based on res_type for node partition */\nsch_resource_t find_nodepart_amount(node_partition *np, const std::string &res, resdef *def, enum resource_fields res_type);\n\nsch_resource_t find_bucket_amount(node_bucket *bkt, const std::string &res, resdef *def, enum resource_fields res_type);\n\n/*\n * Compares either two nodes or node_partitions based on a resource,\n * Ascending/Descending, and what part of the resource to use (total, unused, etc)\n */\nint node_sort_cmp(const void *vp1, const void *vp2, const struct sort_info &si, enum sort_obj_type obj_type);\n\n/*\n *      resresv_sort_cmp - compares 2 jobs on the current resresv sort\n *                      used with qsort (qsort calls multi_sort())\n */\nint resresv_sort_cmp(resource_resv *r1, resource_resv *r2, const sort_info &si);\n\n/*\n *      multi_node_sort - a multi keyed sorting compare function for nodes\n */\nint multi_node_sort(const void *n1, const void *n2);\n\n/* qsort() compare function for multi-resource node partition 
sorting */\nint multi_nodepart_sort(const void *n1, const void *n2);\n\n/* qsort() compare function for multi-resource bucket sorting */\nint multi_bkt_sort(const void *b1, const void *b2);\n\n/*\n *\tcmp_events - sort jobs/resvs into a timeline of the next event to\n *\t\thappen: running jobs ending, advanced reservations starting\n *\t\tor ending\n */\nint cmp_events(const void *v1, const void *v2);\n\n/* sort nodes by resources_available.host */\nint cmp_node_host(const void *v1, const void *v2);\n\n/* sorting routine to be used with PROVPOLICY_AVOID only */\nint cmp_aoe(const void *v1, const void *v2);\n\n/*\n *      cmp_job_preemption_time_asc - used to sort jobs in ascending order of\n *                                 preemption time.\n */\nint cmp_job_preemption_time_asc(const void *j1, const void *j2);\n\n/*\n *  cmp_preemption - This function is used to sort jobs according to their\n *  preemption priority\n */\nint cmp_preemption(resource_resv *r1, resource_resv *r2);\n\n/*\n * cmp_resv_state - compare based on resv_state\n */\nint cmp_resv_state(const void *r1, const void *r2);\n\n/*\n * sort_jobs - This function sorts all jobs according to their preemption\n *             priority, preempted time and fairshare.\n */\nvoid sort_jobs(status *policy, server_info *sinfo);\n\n#endif /* _SORT_H */\n"
  },
  {
    "path": "src/scheduler/state_count.cpp",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    state_count.cpp\n *\n * @brief\n * \t\tstate_count.cpp - This file contains functions related to the state_count struct\n *\n * Functions included are:\n * \tinit_state_count()\n * \tcount_states()\n * \ttotal_states()\n * \tstate_count_add()\n *\n */\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <pbs_error.h>\n#include <pbs_ifl.h>\n#include <log.h>\n#include \"state_count.h\"\n#include \"constant.h\"\n#include \"misc.h\"\n\n/**\n * @brief\n *\t\tinit_state_count - initialize state count struct\n *\n * @param[out]\tsc - the struct to initialize\n *\n * @return\tnothing\n *\n */\nvoid\ninit_state_count(state_count *sc)\n{\n\tsc->running = 0;\n\tsc->queued = 0;\n\tsc->transit = 0;\n\tsc->exiting = 0;\n\tsc->held = 0;\n\tsc->waiting = 0;\n\tsc->suspended = 0;\n\tsc->userbusy = 0;\n\tsc->invalid = 0;\n\tsc->begin = 0;\n\tsc->expired = 0;\n\tsc->total = 0;\n}\n\n/**\n * @brief\n *\t\tcount_states - count the jobs in each state and set the state counts\n *\n * @param[in]\tjobs - array of jobs\n * @param[out]\tsc   - state count structure passed by reference\n *\n * @return\tnothing\n *\n */\nvoid\ncount_states(resource_resv **jobs, state_count *sc)\n{\n\tif (jobs != NULL) {\n\t\tfor (int i = 0; jobs[i] != NULL; i++) {\n\t\t\tif (jobs[i]->job != NULL) {\n\t\t\t\tif 
(jobs[i]->job->is_queued)\n\t\t\t\t\tsc->queued++;\n\t\t\t\telse if (jobs[i]->job->is_running)\n\t\t\t\t\tsc->running++;\n\t\t\t\telse if (jobs[i]->job->is_transit)\n\t\t\t\t\tsc->transit++;\n\t\t\t\telse if (jobs[i]->job->is_exiting)\n\t\t\t\t\tsc->exiting++;\n\t\t\t\telse if (jobs[i]->job->is_held)\n\t\t\t\t\tsc->held++;\n\t\t\t\telse if (jobs[i]->job->is_waiting)\n\t\t\t\t\tsc->waiting++;\n\t\t\t\telse if (jobs[i]->job->is_suspended)\n\t\t\t\t\tsc->suspended++;\n\t\t\t\telse if (jobs[i]->job->is_userbusy)\n\t\t\t\t\tsc->userbusy++;\n\t\t\t\telse if (jobs[i]->job->is_begin)\n\t\t\t\t\tsc->begin++;\n\t\t\t\telse if (jobs[i]->job->is_expired)\n\t\t\t\t\tsc->expired++;\n\t\t\t\telse {\n\t\t\t\t\tsc->invalid++;\n\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, jobs[i]->name, \"Job in unknown state\");\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tsc->total = sc->queued + sc->running + sc->transit +\n\t\t    sc->exiting + sc->held + sc->waiting +\n\t\t    sc->suspended + sc->userbusy + sc->begin +\n\t\t    sc->expired + sc->invalid;\n}\n\n/**\n * @brief\n *\t\ttotal_states - add the states from sc2 to the states in sc1\n *\t\t       i.e. 
sc1 += sc2\n *\n * @param[out]\tsc1 - the accumulator\n * @param[in]\tsc2 - what is being totaled\n *\n * @return\tnothing\n *\n */\nvoid\ntotal_states(state_count *sc1, state_count *sc2)\n{\n\tsc1->running += sc2->running;\n\tsc1->queued += sc2->queued;\n\tsc1->held += sc2->held;\n\tsc1->waiting += sc2->waiting;\n\tsc1->exiting += sc2->exiting;\n\tsc1->transit += sc2->transit;\n\tsc1->suspended += sc2->suspended;\n\tsc1->userbusy += sc2->userbusy;\n\tsc1->begin += sc2->begin;\n\tsc1->expired += sc2->expired;\n\tsc1->invalid += sc2->invalid;\n\tsc1->total += sc2->total;\n}\n\n/**\n * @brief\n *\t\tstate_count_add - add a certain amount to a state count element\n *\t\t\t  based on a job state letter\n *\n * @param[out]\tsc\t \t\t- state count\n * @param[in]\tjob_state \t- job state letter selecting the counter\n * @param[in]\tamount \t\t- amount to add (to increment, pass 1; to decrement, pass -1)\n *\n *\n * @return\tnothing\n *\n */\nvoid\nstate_count_add(state_count *sc, const char *job_state, int amount)\n{\n\tif (sc == NULL || job_state == NULL)\n\t\treturn;\n\n\tswitch (job_state[0]) {\n\t\tcase 'Q':\n\t\t\tsc->queued += amount;\n\t\t\tbreak;\n\n\t\tcase 'R':\n\t\t\tsc->running += amount;\n\t\t\tbreak;\n\n\t\tcase 'T':\n\t\t\tsc->transit += amount;\n\t\t\tbreak;\n\n\t\tcase 'H':\n\t\t\tsc->held += amount;\n\t\t\tbreak;\n\n\t\tcase 'W':\n\t\t\tsc->waiting += amount;\n\t\t\tbreak;\n\n\t\tcase 'E':\n\t\t\tsc->exiting += amount;\n\t\t\tbreak;\n\n\t\tcase 'S':\n\t\t\tsc->suspended += amount;\n\t\t\tbreak;\n\n\t\tcase 'U':\n\t\t\tsc->userbusy += amount;\n\t\t\tbreak;\n\n\t\tcase 'B':\n\t\t\tsc->begin += amount;\n\t\t\tbreak;\n\n\t\tcase 'X':\n\t\t\tsc->expired += amount;\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\tsc->invalid += amount;\n\t\t\tbreak;\n\t}\n}\n"
  },
  {
    "path": "src/scheduler/state_count.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _STATE_COUNT_H\n#define _STATE_COUNT_H\n\n#include \"data_types.h\"\n\n/*\n *\tinit_state_count - initialize state count struct\n */\nvoid init_state_count(state_count *sc);\n\n/*\n *      count_states - count the jobs in each state and set the state counts\n */\nvoid count_states(resource_resv **jobs, state_count *sc);\n\n/*\n *\taccumulate states from one state_count into another\n */\nvoid total_states(state_count *sc1, state_count *sc2);\n\n/*\n *      state_count_add - add a certain amount to a state count element\n *                        based on a job state letter\n *                        to increment, pass in 1; to decrement, pass in -1\n */\nvoid state_count_add(state_count *sc, const char *job_state, int amount);\n#endif /* _STATE_COUNT_H */\n"
  },
  {
    "path": "src/server/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nsbin_PROGRAMS = pbs_server.bin pbs_comm\n\npbs_server_bin_CPPFLAGS = \\\n\t-I$(top_srcdir)/src/include \\\n\t-I$(top_srcdir)/src/lib/Liblicensing \\\n\t@expat_inc@ \\\n\t@libical_inc@ \\\n\t@libz_inc@ \\\n\t@PYTHON_INCLUDES@ \\\n\t@KRB5_CFLAGS@\n\npbs_server_bin_LDADD = \\\n\t$(top_builddir)/src/lib/Libpbs/libpbs.la \\\n\t$(top_builddir)/src/lib/Libtpp/libtpp.a \\\n\t$(top_builddir)/src/lib/Libattr/libattr.a \\\n\t$(top_builddir)/src/lib/Libutil/libutil.a \\\n\t$(top_builddir)/src/lib/Liblog/liblog.a \\\n\t$(top_builddir)/src/lib/Libnet/libnet.a \\\n\t$(top_builddir)/src/lib/Libsec/libsec.a \\\n\t$(top_builddir)/src/lib/Libsite/libsite.a \\\n\t$(top_builddir)/src/lib/Libpython/libpbspython_svr.a \\\n\t$(top_builddir)/src/lib/Libdb/libpbsdb.la \\\n\t$(top_builddir)/src/lib/Liblicensing/liblicensing.la \\\n\t@KRB5_LIBS@ \\\n\t@expat_lib@ \\\n\t@libz_lib@ \\\n\t@libical_lib@ \\\n\t@PYTHON_LDFLAGS@ \\\n\t@PYTHON_LIBS@ \\\n\t-lssl \\\n\t-lcrypto\n\npbs_server_bin_SOURCES = \\\n\taccounting.c \\\n\tarray_func.c \\\n\tattr_recov.c \\\n\tattr_recov_db.c \\\n\tdis_read.c \\\n\tfailover.c \\\n\tgeteusernam.c \\\n\thook_func.c \\\n\tissue_request.c \\\n\tjattr_get_set.c \\\n\tjob_func.c \\\n\tjob_recov_db.c \\\n\tjob_route.c \\\n\tlicensing_func.c \\\n\tmom_info.c \\\n\tdaemon_info.c \\\n\tnattr_get_set.c \\\n\tnode_func.c \\\n\tnode_manager.c 
\\\n\tnode_recov_db.c \\\n\tpbs_db_func.c \\\n\tpbsd_init.c \\\n\tpbsd_main.c \\\n\tprocess_request.c \\\n\tqattr_get_set.c \\\n\tqueue_func.c \\\n\tqueue_recov_db.c \\\n\trattr_get_set.c \\\n\treply_send.c \\\n\treq_delete.c \\\n\treq_getcred.c \\\n\treq_holdjob.c \\\n\treq_jobobit.c \\\n\treq_locate.c \\\n\treq_manager.c \\\n\treq_message.c \\\n\treq_modify.c \\\n\treq_preemptjob.c \\\n\treq_movejob.c \\\n\treq_quejob.c \\\n\treq_register.c \\\n\treq_rerun.c \\\n\treq_rescq.c \\\n\treq_runjob.c \\\n\treq_select.c \\\n\treq_shutdown.c \\\n\treq_signal.c \\\n\treq_stat.c \\\n\treq_track.c \\\n\treq_cred.c \\\n\tresc_attr.c \\\n\trun_sched.c \\\n\tsattr_get_set.c \\\n\tsched_attr_get_set.c \\\n\tsched_func.c \\\n\tsetup_resc.c \\\n\tstat_job.c \\\n\tsvr_chk_owner.c \\\n\tsvr_connect.c \\\n\tsvr_func.c \\\n\tsvr_jobfunc.c \\\n\tsvr_mail.c \\\n\tsvr_movejob.c \\\n\tsvr_recov_db.c \\\n\tsvr_resccost.c \\\n\tsvr_credfunc.c \\\n\tuser_func.c \\\n\tvnparse.c\n\npbs_comm_CPPFLAGS = \\\n\t-I$(top_srcdir)/src/include \\\n\t@libz_inc@ \\\n\t@KRB5_CFLAGS@\n\npbs_comm_LDADD = \\\n\t$(top_builddir)/src/lib/Libpbs/libpbs.la \\\n\t$(top_builddir)/src/lib/Libtpp/libtpp.a \\\n\t$(top_builddir)/src/lib/Liblog/liblog.a \\\n\t$(top_builddir)/src/lib/Libutil/libutil.a \\\n\t-lpthread \\\n\t@libz_lib@ \\\n\t@socket_lib@ \\\n\t@KRB5_LIBS@\n\npbs_comm_SOURCES = pbs_comm.c\n"
  },
  {
    "path": "src/server/accounting.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    accounting.c\n *\n * @brief\n * accounting.c - contains functions to record accounting information\n *\n * Functions included are:\n *\tacct_open()\n *\tacct_record()\n *\tacct_close()\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include \"portability.h\"\n#include <sys/param.h>\n#include <sys/types.h>\n#include <string.h>\n#include <errno.h>\n#include <limits.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <time.h>\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"server_limits.h\"\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"queue.h\"\n#include \"pbs_nodes.h\"\n#include \"log.h\"\n#include \"acct.h\"\n#include \"pbs_license.h\"\n#include \"server.h\"\n#include \"svrfunc.h\"\n#include \"libutil.h\"\n\n/* Local Data */\n\nstatic FILE *acctfile; /* open stream for log file */\nstatic volatile int acct_opened = 0;\nstatic int acct_opened_day;\nstatic int acct_auto_switch = 0;\nstatic char *acct_buf = 0;\nstatic int acct_bufsize = PBS_ACCT_MAX_RCD;\nstatic const char *do_not_emit_alter[] = {ATTR_estimated, ATTR_used, NULL};\n\n/* Global Data */\n\nextern char *acctlog_spacechar;\nextern attribute_def job_attr_def[];\nextern char *path_acct;\nextern int resc_access_perm;\nextern time_t time_now;\nextern struct resc_sum 
*svr_resc_sum;\nextern struct server server;\nextern char *msg_job_end_stat;\n\n/**\n * @brief\n * grow_acct_buf - called when the accounting buffer needs to grow\n *\n * @param[out]\tpb - New address in the account buffer after the reallocation\n * @param[out]\tavail - Remaining size in the returned variable\n * @param[in]\tneed - Required extra size for reallocation\n *\n * @return      Error code\n * @retval\t 0  - Success\n * @retval\t-1  - Failure\n *\n * @par Side Effects:\n *     the accounting buffer (acct_buf) is grown\n *\n * @par MT-safe: No\n */\nstatic int\ngrow_acct_buf(char **pb, int *avail, int need)\n{\n\tsize_t ln;\n\tchar *new;\n\n\tln = acct_bufsize + need + need + PBS_ACCT_LEAVE_EXTRA;\n\tnew = realloc(acct_buf, (size_t) (ln + 1));\n\tif (new == NULL) {\n\t\tlog_err(errno, __func__, \"realloc failure\");\n\t\treturn (-1);\n\t}\n\tacct_buf = new;\n\tacct_bufsize = ln;\n\tln = strlen(acct_buf);\n\t*pb = acct_buf + ln;\n\t*avail = acct_bufsize - ln;\n\treturn 0;\n}\n\n/**\n * @brief\n * sum_resc_alloc() - sums up the consumable resources listed in\n *\tthe exec_vnode for accounting.  The caller is responsible\n *\tfor taking the sums in svr_resc_sum[] and formatting the\n *\tdata into a buffer for logging.\n *\n * @param[in]\tpjob - pointer to job\n * @param[in]\tlist - pbs list head\n *\n * @return      nothing; on success the sums are left in svr_resc_sum[],\n *\t\ton error the data in svr_resc_sum[] is not valid\n *\n * @par MT-safe: No\n */\n\nstatic void\nsum_resc_alloc(const job *pjob, pbs_list_head *list)\n{\n\tchar *chunk;\n\tchar *exechost;\n\tint i;\n\tint j;\n\tint nelem;\n\tchar *noden;\n\tstruct key_value_pair *pkvp;\n\tresource *presc;\n\tstruct pbsnode *pnode;\n\tint rc;\n\n\tstatic attribute tmpatr;\n\n\tif ((pjob == NULL) ||\n\t    !(is_jattr_set(pjob, JOB_ATR_exec_vnode)))\n\t\treturn;\n\n\t/* if a vnode was allocated \"excl\",  we need to charge all of its   */\n\t/* resources, but only once.   
So we need to mark the vnode as seen */\n\t/* To do that, we first need to unmark them...\t\t\t    */\n\n\tfor (i = 0; i < svr_totnodes; i++)\n\t\tpbsndlist[i]->nd_svrflags &= ~NODE_ACCTED;\n\n\texechost = get_jattr_str(pjob, JOB_ATR_exec_vnode);\n\n\t/* clear the summation table used later */\n\n\tfor (i = 0; svr_resc_sum[i].rs_def; ++i) {\n\t\t(void) memset((char *) &svr_resc_sum[i].rs_attr, 0, sizeof(struct attribute));\n\n\t\tsvr_resc_sum[i].rs_set = 0;\n\t\tsvr_resc_sum[i].rs_prs = NULL;\n\t}\n\n\t/* now, go through the exec_vnode specified for the job, for any       */\n\t/* resource that matches an entry in the table, set the pointer and set flag */\n\n\tchunk = parse_plus_spec(exechost, &rc);\n\tif (rc != 0)\n\t\treturn;\n\twhile (chunk) {\n\t\tif (parse_node_resc(chunk, &noden, &nelem, &pkvp) == 0) {\n\n\t\t\t/* find if node is shared or excl */\n\n\t\t\tpnode = find_nodebyname(noden);\n\t\t\tif (pnode) {\n\t\t\t\tif ((pnode->nd_state & INUSE_JOBEXCL) == 0) {\n\n\t\t\t\t\t/* shared, record only what was requested from the vnode */\n\n\t\t\t\t\tfor (j = 0; j < nelem; ++j) {\n\t\t\t\t\t\tfor (i = 0; svr_resc_sum[i].rs_def; ++i) {\n\t\t\t\t\t\t\tif (strcmp(svr_resc_sum[i].rs_def->rs_name, pkvp[j].kv_keyw) == 0) {\n\t\t\t\t\t\t\t\t/* incr sum by amount requested by user */\n\t\t\t\t\t\t\t\trc = svr_resc_sum[i].rs_def->rs_decode(&tmpatr,\n\t\t\t\t\t\t\t\t\t\t\t\t       0, 0, pkvp[j].kv_val);\n\t\t\t\t\t\t\t\tif (rc != 0)\n\t\t\t\t\t\t\t\t\treturn;\n\t\t\t\t\t\t\t\t(void) svr_resc_sum[i].rs_def->rs_set(&svr_resc_sum[i].rs_attr, &tmpatr, INCR);\n\n\t\t\t\t\t\t\t\tsvr_resc_sum[i].rs_set = 1;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t} else if (!(pnode->nd_svrflags & NODE_ACCTED)) {\n\n\t\t\t\t\t/* vnode used exclusively and not already accounted, */\n\t\t\t\t\t/* so incr sum by amount in whole vnode              */\n\n\t\t\t\t\tpnode->nd_svrflags |= NODE_ACCTED; /* mark that it has been recorded */\n\t\t\t\t\tfor (i = 0; svr_resc_sum[i].rs_def; ++i) 
{\n\t\t\t\t\t\tpresc = find_resc_entry(get_nattr(pnode, ND_ATR_ResourceAvail), svr_resc_sum[i].rs_def);\n\t\t\t\t\t\tif (presc && (is_attr_set(&presc->rs_value))) {\n\t\t\t\t\t\t\t(void) svr_resc_sum[i].rs_def->rs_set(&svr_resc_sum[i].rs_attr, &presc->rs_value, INCR);\n\t\t\t\t\t\t\tsvr_resc_sum[i].rs_set = 1;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t} else {\n\t\t\treturn;\n\t\t}\n\t\tchunk = parse_plus_spec(NULL, &rc);\n\t\tif (rc != 0)\n\t\t\treturn;\n\t}\n\n\tfor (i = 0; svr_resc_sum[i].rs_def != NULL; ++i) {\n\t\tif (svr_resc_sum[i].rs_set) {\n\t\t\t(void) svr_resc_sum[i].rs_def->rs_encode(\n\t\t\t\t&svr_resc_sum[i].rs_attr,\n\t\t\t\tlist,\n\t\t\t\t\"resource_assigned\",\n\t\t\t\tsvr_resc_sum[i].rs_def->rs_name,\n\t\t\t\tATR_ENCODE_CLIENT, NULL);\n\t\t}\n\t}\n\n\treturn;\n}\n\n/**\n * @brief\n * cpy_quote_value - append the value to the buffer\n *\tIf the string contains no spaces,  it is appended as is.\n *\tIf the string contains spaces, and contains a \", then quote the string with ' characters,\n *\telse quote the string with \" characters\n *\n * @param[in,out]\tpb - Source string and stores the result after appending.\n * @param[in]\tvalue - value which needs to be appended\n *\n * @return      void\n */\nstatic void\ncpy_quote_value(char *pb, char *value)\n{\n\tchar *quotechar;\n\n\tif (strchr(value, (int) ' ') != 0) {\n\t\tif (strchr(value, (int) '\"') != 0)\n\t\t\tquotechar = \"'\";\n\t\telse\n\t\t\tquotechar = \"\\\"\";\n\t\t(void) strcat(pb, quotechar);\n\t\t(void) strcat(pb, value);\n\t\t(void) strcat(pb, quotechar);\n\t} else {\n\t\t(void) strcat(pb, value);\n\t}\n}\n\n/* These are various printing formats used in acct_job() */\n#define GRIDNAME_FMT \"gridname=\\\"%s\\\" \"\n#define USER_FMT \"user=%s \"\n#define GROUP_FMT \"group=%s \"\n#define ACCOUNT_FMT \"account=\\\"%s\\\" \"\n#define PROJECT_FMT1 \"project=\\\"%s\\\" \"\n#define PROJECT_FMT2 \"project=%s \"\n#define ACCOUNTING_ID_FMT \"accounting_id=\\\"%s\\\" \"\n#define 
JOBNAME_FMT \"jobname=%s \"\n#define QUEUE_FMT \"queue=%s \"\n#define RESVNAME_FMT \"resvname=%s \"\n#define RESVID_FMT \"resvID=%s \"\n#define RESVJOBID_FMT \"resvjobID=%s \"\n#define ARRAY_INDICES_FMT \"array_indices=%s \"\n#define EXEC_HOST_FMT \"exec_host=%s \"\n#define EXEC_VNODE_FMT \"exec_vnode=%s \"\n#define DEPEND_FMT \"depend=%s \"\n\n/* Amount of space needed in account log buffer for the ctime, qtime, etime, */\n/* start attributes */\n#define ACCTBUF_TIMES_NEED 72\n\n/**\n * @brief\n * Get the resources_used job attribute\n *\n * @param[in]\tpjob\t- pointer to job structure\n * @param[in]\tresc_used - pointer to resources used string\n * @param[in]\tresc_used_size - size of resources used string\n *\n * @return\tint\n * @retval\t0 upon success\n * @retval\t-1\tif error encountered.\n *\n */\nstatic int\nget_resc_used(job *pjob, char **resc_used, int *resc_used_size)\n{\n\tstruct svrattrl *patlist = NULL;\n\tpbs_list_head temp_head;\n\tCLEAR_HEAD(temp_head);\n\n\tif (get_jattr_usr_encoded(pjob, JOB_ATR_resc_used) != NULL)\n\t\tpatlist = get_jattr_usr_encoded(pjob, JOB_ATR_resc_used);\n\telse if (get_jattr_priv_encoded(pjob, JOB_ATR_resc_used) != NULL)\n\t\tpatlist = get_jattr_priv_encoded(pjob, JOB_ATR_resc_used);\n\telse\n\t\tencode_resc(get_jattr(pjob, JOB_ATR_resc_used),\n\t\t\t    &temp_head, job_attr_def[JOB_ATR_resc_used].at_name,\n\t\t\t    NULL, ATR_ENCODE_CLIENT, &patlist);\n\t/*\n\t * NOTE:\n\t * The following code for constructing resources-used information is the same as in job_obit(),\n\t * with the minor difference that to traverse patlist in this code\n\t * we have to use patlist->al_sister since it is encoded information in the job struct,\n\t * whereas in job_obit() we are using GET_NEXT(patlist->al_link), which is part of the batch\n\t * request.\n\t * ji_acctrec is lost on server restart.  
Recreate it here if needed.\n\t */\n\twhile (patlist) {\n\t\t/* log to accounting_logs only if there's a value */\n\t\tif (strlen(patlist->al_value) > 0) {\n\t\t\tif (concat_rescused_to_buffer(resc_used, resc_used_size, patlist, \" \", NULL) != 0) {\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t}\n\t\tpatlist = patlist->al_sister;\n\t}\n\tfree_attrlist(&temp_head);\n\treturn 0;\n}\n\n/**\n * @brief\n *\tGet the value of \"walltime\" resource for the job's given\n *\tresource index 'res'.\n *\n *\n * @param[in]\tpjob\t- pointer to job structure\n * @param[in]\tres\t- resource entity index (e.g. JOB_ATR_resource)\n *\n * @return\tlong\n * @retval\t<n>\twalltime value\n * @retval\t-1\tif error encountered.\n *\n */\nlong\nget_walltime(const job *jp, int res)\n{\n\tresource_def *rscdef;\n\tresource *pres;\n\n\trscdef = &svr_resc_def[RESC_WALLTIME];\n\tpres = find_resc_entry(get_jattr(jp, res), rscdef);\n\tif (pres == NULL)\n\t\treturn (-1);\n\telse if (!is_attr_set(&pres->rs_value))\n\t\treturn (-1);\n\telse\n\t\treturn pres->rs_value.at_val.at_long; /*wall time value*/\n}\n\n/**\n * @brief\n *\tForm and write a job termination/rerun record with resource usage.\n * \tBuild common data for queue/start/end job accounting record\n *\n * @par\tFunctionality:\n *\tUsed by account_jobstr() and account_jobend()\n *\n *\n * @param[in]\tpjob\t- pointer to job structure\n * @param[in]\ttype\t- account record type\n * @param[in]\tbuf\t- buffer holding the data that will be stored in\n *\t\t\t  accounting logs.\n * @param[in]\tlen\t- number of characters in 'buf' still available to\n *\t\t\t  store data.\n * @return\tchar *\n * @retval\tpointer to 'buf' containing new data.\n *\n */\nstatic char *\nacct_job(const job *pjob, int type, char *buf, int len)\n{\n\tpbs_list_head attrlist;\n\tint i, k;\n\tint nd;\n\tsvrattrl *pal;\n\tchar *pb;\n\tint att_index;\n\tint len_orig;\n\tchar save_char;\n\tint old_perm;\n\n\tpb = buf;\n\tCLEAR_HEAD(attrlist);\n\n\t/* gridname */\n\tif (is_jattr_set(pjob, 
JOB_ATR_gridname)) {\n\t\tnd = strlen(get_jattr_str(pjob, JOB_ATR_gridname)) + sizeof(GRIDNAME_FMT);\n\t\tif (nd > len)\n\t\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\t\treturn (pb);\n\n\t\tsnprintf(pb, len, GRIDNAME_FMT,\n\t\t\t get_jattr_str(pjob, JOB_ATR_gridname));\n\t\ti = strlen(pb);\n\t\tpb += i;\n\t\tlen -= i;\n\t}\n\n\t/* user */\n\tnd = sizeof(USER_FMT) + strlen(get_jattr_str(pjob, JOB_ATR_euser));\n\tif (nd > len)\n\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\treturn (pb);\n\tsnprintf(pb, len, USER_FMT, get_jattr_str(pjob, JOB_ATR_euser));\n\n\ti = strlen(pb);\n\tpb += i;\n\tlen -= i;\n\n\t/* group */\n\tnd = sizeof(GROUP_FMT) + strlen(get_jattr_str(pjob, JOB_ATR_egroup));\n\tif (nd > len)\n\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\treturn (pb);\n\n\tsnprintf(pb, len, GROUP_FMT, get_jattr_str(pjob, JOB_ATR_egroup));\n\n\ti = strlen(pb);\n\tpb += i;\n\tlen -= i;\n\n\t/* account */\n\tif (is_jattr_set(pjob, JOB_ATR_account)) {\n\t\tnd = sizeof(ACCOUNT_FMT) + strlen(get_jattr_str(pjob, JOB_ATR_account));\n\t\tif (nd > len)\n\t\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\t\treturn (pb);\n\n\t\tsnprintf(pb, len, ACCOUNT_FMT, get_jattr_str(pjob, JOB_ATR_account));\n\n\t\ti = strlen(pb);\n\t\tpb += i;\n\t\tlen -= i;\n\t}\n\n\t/* project */\n\tif (is_jattr_set(pjob, JOB_ATR_project)) {\n\t\tchar *projstr;\n\n\t\tprojstr = get_jattr_str(pjob, JOB_ATR_project);\n\t\t/* using PROJECT_FMT1 if projstr needs to be quoted; otherwise, PROJECT_FMT2 */\n\t\tnd = sizeof(PROJECT_FMT1) + strlen(projstr);\n\t\tif (nd > len)\n\t\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\t\treturn (pb);\n\t\tif (strchr(projstr, ' ') != NULL) {\n\t\t\tsnprintf(pb, len, PROJECT_FMT1, projstr);\n\t\t} else {\n\t\t\tsnprintf(pb, len, PROJECT_FMT2, projstr);\n\t\t}\n\t\ti = strlen(pb);\n\t\tpb += i;\n\t\tlen -= i;\n\t}\n\n\t/* accounting_id */\n\tif (is_jattr_set(pjob, JOB_ATR_acct_id)) {\n\t\tnd = sizeof(ACCOUNTING_ID_FMT) + strlen(get_jattr_str(pjob, 
JOB_ATR_acct_id));\n\t\tif (nd > len)\n\t\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\t\treturn (pb);\n\t\tsnprintf(pb, len, ACCOUNTING_ID_FMT, get_jattr_str(pjob, JOB_ATR_acct_id));\n\t\ti = strlen(pb);\n\t\tpb += i;\n\t\tlen -= i;\n\t}\n\n\t/* job name */\n\tnd = sizeof(JOBNAME_FMT) + strlen(get_jattr_str(pjob, JOB_ATR_jobname));\n\tif (nd > len)\n\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\treturn (pb);\n\tsnprintf(pb, len, JOBNAME_FMT, get_jattr_str(pjob, JOB_ATR_jobname));\n\ti = strlen(pb);\n\tpb += i;\n\tlen -= i;\n\n\t/* queue name */\n\tnd = sizeof(QUEUE_FMT) + strlen(pjob->ji_qhdr->qu_qs.qu_name);\n\tif (nd > len)\n\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\treturn (pb);\n\tsnprintf(pb, len, QUEUE_FMT, pjob->ji_qhdr->qu_qs.qu_name);\n\ti = strlen(pb);\n\tpb += i;\n\tlen -= i;\n\n\tif (pjob->ji_myResv) {\n\t\tnd = sizeof(RESVID_FMT) + strlen(pjob->ji_myResv->ri_qs.ri_resvID);\n\t\tif (is_rattr_set(pjob->ji_myResv, RESV_ATR_resv_name))\n\t\t\tnd += sizeof(RESVNAME_FMT) + strlen(get_rattr_str(pjob->ji_myResv, RESV_ATR_resv_name));\n\t\tif (nd > len)\n\t\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\t\treturn (pb);\n\t\t/* reservation name */\n\t\tif (is_rattr_set(pjob->ji_myResv, RESV_ATR_resv_name)) {\n\t\t\tsnprintf(pb, len, RESVNAME_FMT, get_rattr_str(pjob->ji_myResv, RESV_ATR_resv_name));\n\t\t\ti = strlen(pb);\n\t\t\tpb += i;\n\t\t\tlen -= i;\n\t\t}\n\n\t\t/* reservation ID */\n\t\tsnprintf(pb, len, RESVID_FMT, pjob->ji_myResv->ri_qs.ri_resvID);\n\t\ti = strlen(pb);\n\t\tpb += i;\n\t\tlen -= i;\n\t}\n\n\t/* insure space for all *times */\n\tnd = ACCTBUF_TIMES_NEED;\n\tif (nd > len)\n\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\treturn (pb);\n\n\t/* create time */\n\tsprintf(pb, \"ctime=%ld \", get_jattr_long(pjob, JOB_ATR_ctime));\n\ti = strlen(pb);\n\tpb += i;\n\tlen -= i;\n\n\t/* queued time */\n\tsprintf(pb, \"qtime=%ld \", get_jattr_long(pjob, JOB_ATR_qtime));\n\ti = strlen(pb);\n\tpb += i;\n\tlen -= i;\n\n\t/* eligible 
time, how long ready to run */\n\tsprintf(pb, \"etime=%ld \", get_jattr_long(pjob, JOB_ATR_etime));\n\ti = strlen(pb);\n\tpb += i;\n\tlen -= i;\n\n\tif (type != PBS_ACCT_QUEUE) {\n\t\t/* start time */\n\t\tsprintf(pb, \"start=%ld \", (long) pjob->ji_qs.ji_stime);\n\t\ti = strlen(pb);\n\t\tpb += i;\n\t\tlen -= i;\n\t} else if (is_jattr_set(pjob, JOB_ATR_depend)) {\n\t\tpbs_list_head phead;\n\t\tsvrattrl *svrattrl_list = NULL;\n\t\tCLEAR_HEAD(phead);\n\t\tjob_attr_def[JOB_ATR_depend].at_encode(get_jattr(pjob, JOB_ATR_depend),\n\t\t\t\t\t\t       &phead, job_attr_def[JOB_ATR_depend].at_name, NULL, ATR_ENCODE_CLIENT, &svrattrl_list);\n\t\tif (svrattrl_list != NULL) {\n\t\t\tnd = sizeof(DEPEND_FMT) + strlen(svrattrl_list->al_value);\n\t\t\tif (nd > len)\n\t\t\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\t\t\treturn (pb);\n\t\t\tsnprintf(pb, len, DEPEND_FMT, svrattrl_list->al_value);\n\t\t\ti = strlen(pb);\n\t\t\tpb += i;\n\t\t\tlen -= i;\n\t\t\tfree_svrattrl(svrattrl_list);\n\t\t}\n\t}\n\n\tif ((is_jattr_set(pjob, JOB_ATR_array_indices_submitted)) &&\n\t    (check_job_state(pjob, JOB_STATE_LTR_BEGUN) || type == PBS_ACCT_QUEUE)) {\n\n\t\t/* for an Array Job in Begun state,  record index range */\n\n\t\tnd = sizeof(ARRAY_INDICES_FMT) + strlen(get_jattr_str(pjob, JOB_ATR_array_indices_submitted));\n\t\tif (nd > len)\n\t\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\t\treturn (pb);\n\t\tsnprintf(pb, len, ARRAY_INDICES_FMT, get_jattr_str(pjob, JOB_ATR_array_indices_submitted));\n\t\ti = strlen(pb);\n\t\tpb += i;\n\t\tlen -= i;\n\n\t} else {\n\n\t\t/* regular job */\n\t\tif ((type == PBS_ACCT_END) &&\n\t\t    (is_jattr_set(pjob, JOB_ATR_exec_host_orig)))\n\t\t\tatt_index = JOB_ATR_exec_host_orig;\n\t\telse\n\t\t\tatt_index = JOB_ATR_exec_host;\n\n\t\tif (is_jattr_set(pjob, att_index)) {\n\t\t\t/* execution host list, may be loooong */\n\t\t\tnd = sizeof(EXEC_HOST_FMT) + strlen(get_jattr_str(pjob, att_index));\n\t\t\tif (nd > len)\n\t\t\t\tif (grow_acct_buf(&pb, &len, 
nd) == -1)\n\t\t\t\t\treturn (pb);\n\t\t\tsnprintf(pb, len, EXEC_HOST_FMT, get_jattr_str(pjob, att_index));\n\t\t\ti = strlen(pb);\n\t\t\tpb += i;\n\t\t\tlen -= i;\n\t\t}\n\t\tif ((type == PBS_ACCT_END) &&\n\t\t    (is_jattr_set(pjob, JOB_ATR_exec_vnode_orig)))\n\t\t\tatt_index = JOB_ATR_exec_vnode_orig;\n\t\telse\n\t\t\tatt_index = JOB_ATR_exec_vnode;\n\n\t\tif (is_jattr_set(pjob, att_index)) {\n\t\t\t/* execution vnode list, will be even longer */\n\t\t\tnd = sizeof(EXEC_VNODE_FMT) + strlen(get_jattr_str(pjob, att_index));\n\t\t\tif (nd > len)\n\t\t\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\t\t\treturn (pb);\n\t\t\tsnprintf(pb, len, EXEC_VNODE_FMT, get_jattr_str(pjob, att_index));\n\t\t\ti = strlen(pb);\n\t\t\tpb += i;\n\t\t\tlen -= i;\n\t\t}\n\t}\n\n\t/* now encode the job's resource_list attribute */\n\tif ((type == PBS_ACCT_END) &&\n\t    (is_jattr_set(pjob, JOB_ATR_resource_orig))) {\n\t\tatt_index = JOB_ATR_resource_orig;\n\t\tlen_orig = 5; /* length of \"_orig\" */\n\t} else {\n\t\tatt_index = JOB_ATR_resource;\n\t\tlen_orig = 0;\n\t}\n\n\told_perm = resc_access_perm;\n\tresc_access_perm = READ_ONLY;\n\t(void) job_attr_def[att_index].at_encode(\n\t\tget_jattr(pjob, att_index),\n\t\t&attrlist,\n\t\tjob_attr_def[att_index].at_name,\n\t\tNULL,\n\t\tATR_ENCODE_CLIENT, NULL);\n\tresc_access_perm = old_perm;\n\n\tnd = 0; /* compute total size needed in buf */\n\tpal = GET_NEXT(attrlist);\n\twhile (pal != NULL) {\n\t\t/* +5 in count is for '=', ' ', start and end quotes, and \\0 */\n\t\tnd += strlen(pal->al_name) + strlen(pal->al_value) + 5;\n\t\tif (pal->al_resc)\n\t\t\tnd += 1 + strlen(pal->al_resc);\n\t\tpal = GET_NEXT(pal->al_link);\n\t}\n\tif (nd > len)\n\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\treturn (pb);\n\n\twhile ((pal = GET_NEXT(attrlist)) != NULL) {\n\t\t/* strip off the '_orig' suffix */\n\t\tif (len_orig > 0) {\n\t\t\tk = strlen(pal->al_name);\n\t\t\tif (k > len_orig) {\n\t\t\t\tsave_char = pal->al_name[k - 
len_orig];\n\t\t\t\tpal->al_name[k - len_orig] = '\\0';\n\t\t\t}\n\t\t}\n\t\t(void) strcat(pb, pal->al_name);\n\t\tif (len_orig > 0) {\n\t\t\tif (k > len_orig) {\n\t\t\t\tpal->al_name[k - len_orig] = save_char;\n\t\t\t}\n\t\t}\n\t\tif (pal->al_resc) {\n\t\t\t(void) strcat(pb, \".\");\n\t\t\t(void) strcat(pb, pal->al_resc);\n\t\t}\n\t\t(void) strcat(pb, \"=\");\n\t\tcpy_quote_value(pb, pal->al_value);\n\t\t(void) strcat(pb, \" \");\n\t\tdelete_link(&pal->al_link);\n\t\t(void) free(pal);\n\t\tpb += strlen(pb);\n\t}\n\treturn (pb);\n}\n\n/**\n * @brief\n * acct_resv - build data for start/end reservation  accounting record\n *\n * @par\tFunctionality:\n *\tUsed by account_resvstr() and account_resvend()\n *\n * @param[in]\tpresv - pointer to reservation structure\n * @param[in]\tbuf\t- buffer holding the data that will be stored in\n *\t\t\t  accounting logs.\n * @param[in]\tlen\t- number of characters in 'buf' still available to\n *\t\t\t  store data.\n * @return\tchar *\n * @retval\tpointer to 'buf' containing new data.\n */\nstatic char *\nacct_resv(resc_resv *presv, char *buf, int len)\n{\n\tpbs_list_head attrlist; /*retrieved resources list put here*/\n\tint i;\n\tsvrattrl *pal;\n\tchar *pb;\n\tint old_perm;\n\n\tpb = buf;\n\tCLEAR_HEAD(attrlist);\n\n\t/* owner */\n\ti = 8 + strlen(get_rattr_str(presv, RESV_ATR_resv_owner));\n\tif (i > len)\n\t\tif (grow_acct_buf(&pb, &len, i) == -1)\n\t\t\treturn (pb);\n\t(void) sprintf(pb, \"owner=%s \", get_rattr_str(presv, RESV_ATR_resv_owner));\n\ti = strlen(pb);\n\tpb += i;\n\tlen -= i;\n\n\t/* name */\n\tif (is_rattr_set(presv, RESV_ATR_resv_name)) {\n\t\ti = 7 + strlen(get_rattr_str(presv, RESV_ATR_resv_name));\n\t\tif (i > len)\n\t\t\tif (grow_acct_buf(&pb, &len, i) == -1)\n\t\t\t\treturn (pb);\n\t\t(void) sprintf(pb, \"name=%s \", get_rattr_str(presv, RESV_ATR_resv_name));\n\t\ti = strlen(pb);\n\t\tpb += i;\n\t\tlen -= i;\n\t}\n\n\t/* account */\n\tif (is_rattr_set(presv, RESV_ATR_account)) {\n\t\ti = 10 + 
strlen(get_rattr_str(presv, RESV_ATR_account));\n\t\tif (i > len)\n\t\t\tif (grow_acct_buf(&pb, &len, i) == -1)\n\t\t\t\treturn (pb);\n\t\t(void) sprintf(pb, \"account=%s \", get_rattr_str(presv, RESV_ATR_account));\n\t\ti = strlen(pb);\n\t\tpb += i;\n\t\tlen -= i;\n\t}\n\n\t/* queue name */\n\ti = 23;\n\tif (i > len)\n\t\tif (grow_acct_buf(&pb, &len, i) == -1)\n\t\t\treturn (pb);\n\tif (presv->ri_qp != NULL)\n\t\tsprintf(pb, \"queue=%s \", presv->ri_qp->qu_qs.qu_name);\n\n\ti = strlen(pb);\n\tpb += i;\n\tlen -= i;\n\n\t/* allow space for all *times */\n\ti = 90;\n\tif (i > len)\n\t\tif (grow_acct_buf(&pb, &len, i) == -1)\n\t\t\treturn (pb);\n\n\t/* create time */\n\t(void) sprintf(pb, \"ctime=%ld \", get_rattr_long(presv, RESV_ATR_ctime));\n\ti = strlen(pb);\n\tpb += i;\n\tlen -= i;\n\n\t/* reservation start time */\n\t(void) sprintf(pb, \"start=%ld \", (long) presv->ri_qs.ri_stime);\n\ti = strlen(pb);\n\tpb += i;\n\tlen -= i;\n\n\t/* reservation end time */\n\t(void) sprintf(pb, \"end=%ld \", (long) presv->ri_qs.ri_etime);\n\ti = strlen(pb);\n\tpb += i;\n\tlen -= i;\n\n\t/* reservation duration time */\n\t(void) sprintf(pb, \"duration=%ld \", (long) presv->ri_qs.ri_duration);\n\ti = strlen(pb);\n\tpb += i;\n\tlen -= i;\n\n\t/* nodes string may be loooong */\n\tif (is_rattr_set(presv, RESV_ATR_resv_nodes)) {\n\t\ti = 8 + strlen(get_rattr_str(presv, RESV_ATR_resv_nodes));\n\t\tif (i > len)\n\t\t\tif (grow_acct_buf(&pb, &len, i) == -1)\n\t\t\t\treturn (pb);\n\t\t(void) sprintf(pb, \"nodes=%s \", get_rattr_str(presv, RESV_ATR_resv_nodes));\n\t\ti = strlen(pb);\n\t\tpb += i;\n\t\tlen -= i;\n\t}\n\n\t/* now encode any user, group or host ACL */\n\n\told_perm = resc_access_perm;\n\tresc_access_perm = READ_ONLY;\n\t(void) resv_attr_def[RESV_ATR_auth_u].at_encode(\n\t\tget_rattr(presv, RESV_ATR_auth_u),\n\t\t&attrlist,\n\t\tresv_attr_def[RESV_ATR_auth_u].at_name,\n\t\tNULL,\n\t\tATR_ENCODE_CLIENT, NULL);\n\n\t(void) 
resv_attr_def[RESV_ATR_auth_g].at_encode(\n\t\tget_rattr(presv, RESV_ATR_auth_g),\n\t\t&attrlist,\n\t\tresv_attr_def[RESV_ATR_auth_g].at_name,\n\t\tNULL,\n\t\tATR_ENCODE_CLIENT, NULL);\n\n\t(void) resv_attr_def[RESV_ATR_auth_h].at_encode(\n\t\tget_rattr(presv, RESV_ATR_auth_h),\n\t\t&attrlist,\n\t\tresv_attr_def[RESV_ATR_auth_h].at_name,\n\t\tNULL,\n\t\tATR_ENCODE_CLIENT, NULL);\n\n\t/* now encode the reservation's resource_list attribute */\n\n\tresc_access_perm = READ_ONLY;\n\t(void) resv_attr_def[RESV_ATR_resource].at_encode(\n\t\tget_rattr(presv, RESV_ATR_resource),\n\t\t&attrlist,\n\t\tresv_attr_def[RESV_ATR_resource].at_name,\n\t\tNULL,\n\t\tATR_ENCODE_CLIENT, NULL);\n\tresc_access_perm = old_perm;\n\n\t/* compute the space needed for the encoded attributes */\n\n\ti = 0;\n\tpal = GET_NEXT(attrlist);\n\twhile (pal != NULL) {\n\t\t/* +5 in count is for '=', ' ', start and end quotes, and \\0 */\n\t\ti += strlen(pal->al_name) + strlen(pal->al_value) + 5;\n\t\tif (pal->al_resc)\n\t\t\ti += 1 + strlen(pal->al_resc);\n\t\tpal = GET_NEXT(pal->al_link);\n\t}\n\tif (i > len)\n\t\tif (grow_acct_buf(&pb, &len, i) == -1)\n\t\t\treturn (pb);\n\n\t/* write encoded attrlist values into the buffer being developed */\n\n\twhile ((pal = GET_NEXT(attrlist)) != NULL) {\n\t\t(void) strcat(pb, pal->al_name);\n\t\tif (pal->al_resc) {\n\t\t\t(void) strcat(pb, \".\");\n\t\t\t(void) strcat(pb, pal->al_resc);\n\t\t}\n\t\t(void) strcat(pb, \"=\");\n\t\tcpy_quote_value(pb, pal->al_value);\n\t\t(void) strcat(pb, \" \");\n\t\tdelete_link(&pal->al_link);\n\t\t(void) free(pal);\n\t\tpb += strlen(pb);\n\t}\n\treturn (pb);\n}\n\n/**\n * @brief\n * acct_open() - open the acct file for append.\n * Opens a (new) acct file.\n * If an acct file is already open, and the new file is successfully opened,\n * the old file is closed.  
Otherwise the old file is left open.\n *\n * @param[in]\tfilename - abs pathname or NULL\n *\n * @return      Error code\n * @retval\t 0  - Success\n * @retval\t-1  - Failure\n */\nint\nacct_open(char *filename)\n{\n\tchar filen[_POSIX_PATH_MAX];\n\tchar logmsg[_POSIX_PATH_MAX + 80];\n\tFILE *newacct;\n\ttime_t now;\n\tstruct tm *ptm;\n\n\tif (acct_buf == NULL) { /* malloc buffer space */\n\t\tacct_buf = (char *) malloc(acct_bufsize + 1);\n\t\tif (acct_buf == NULL)\n\t\t\treturn (-1);\n\t}\n\n\tif (filename == NULL) { /* go with default */\n\t\tnow = time(0);\n\t\tptm = localtime(&now);\n\t\t(void) sprintf(filen, \"%s%04d%02d%02d\",\n\t\t\t       path_acct,\n\t\t\t       ptm->tm_year + 1900, ptm->tm_mon + 1, ptm->tm_mday);\n\t\tfilename = filen;\n\t\tacct_auto_switch = 1;\n\t\tacct_opened_day = ptm->tm_yday;\n\t} else if (*filename == '\\0') { /* a null name is not an error */\n\t\treturn (0);\t\t/* turns off account logging.  */\n\t} else if (*filename != '/') {\n\t\treturn (-1); /* not absolute */\n\t}\n\tif ((newacct = fopen(filename, \"a\")) == NULL) {\n\t\tlog_err(errno, \"acct_open\", filename);\n\t\treturn (-1);\n\t}\n\n\t(void) setvbuf(newacct, NULL, _IOLBF, 0); /* set line buffering */\n\n\tif (acct_opened > 0) /* if acct was open, close it */\n\t\t(void) fclose(acctfile);\n\n\tacctfile = newacct;\n\tacct_opened = 1; /* note that file is open */\n\t(void) sprintf(logmsg, \"Account file %s opened\", filename);\n\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_INFO,\n\t\t  \"Act\", logmsg);\n\n\treturn (0);\n}\n\n/**\n * @brief\n * acct_close - close the current open log file\n *\n * @return\tvoid\n */\nvoid\nacct_close()\n{\n\tif (acct_opened == 1) {\n\t\t(void) fclose(acctfile);\n\t\tacct_opened = 0;\n\t}\n}\n\n/**\n * @brief\n * write_account_record - write basic accounting record\n *\n * @param[in]\tacctype - accounting record type\n * @param[in]\tid - accounting record id\n * @param[in,out]\ttext - text to log, may be null\n *\n * @return\tvoid\n 
*/\nvoid\nwrite_account_record(int acctype, const char *id, char *text)\n{\n\tstruct tm *ptm;\n\n\tif (acct_opened == 0)\n\t\treturn; /* file not open, don't bother */\n\n\tptm = localtime(&time_now);\n\n\t/* Do we need to switch files? */\n\n\tif (acct_auto_switch && (acct_opened_day != ptm->tm_yday)) {\n\t\tacct_close();\n\t\tacct_open(NULL);\n\t}\n\tif (text == NULL)\n\t\ttext = \"\";\n\n\t(void) fprintf(acctfile,\n\t\t       \"%02d/%02d/%04d %02d:%02d:%02d;%c;%s;%s\\n\",\n\t\t       ptm->tm_mon + 1, ptm->tm_mday, ptm->tm_year + 1900,\n\t\t       ptm->tm_hour, ptm->tm_min, ptm->tm_sec,\n\t\t       (char) acctype, id, text);\n}\n\n/**\n * @brief\n * account_record - write a basic job-related accounting record\n *\n * @param[in]\tacctype - accounting record type\n * @param[in]\tpjob - pointer to job\n * @param[in]\ttext - text to log, may be null\n *\n * @return\tvoid\n */\nvoid\naccount_record(int acctype, const job *pjob, char *text)\n{\n\twrite_account_record(acctype, pjob->ji_qs.ji_jobid, text);\n}\n\n/**\n * @brief\n * account_recordResv - write a basic reservation-related accounting record\n *\n * @param[in]\tacctype - accounting record type\n * @param[in]\tpresv - pointer to reservation structure\n * @param[in]\ttext - text to log, may be null\n *\n * @return\tvoid\n */\nvoid\naccount_recordResv(int acctype, resc_resv *presv, char *text)\n{\n\twrite_account_record(acctype, presv->ri_qs.ri_resvID, text);\n}\n\n/**\n * @brief\n *\tForm and write a record that contains basic job information and the\n *\tassigned consumable resource values for the job.\n *\n * @par\tFunctionality:\n *\tTakes various information from the job structure, start time, owner,\n *\tResource_List, etc., and the assigned resource information (based on\n *\tjob's exec_vnode value) and formats the record type requested.\n *\tCurrently, this is used for 'R' (Job run/started).\n *\n *\tThe record is then written to the accounting log.\n *\n * @see:\n *\tcomplete_running()\n *\n * @param[in]\tpjob\t- pointer to job 
structure\n * @param[in]\ttype\t- record type, PBS_ACCT_RUN ('R'),\n *\t\t\t\t\tPBS_ACCT_NEXT ('c').\n * @return\tvoid\n *\n * @par\tMT-safe: No - uses a global buffer, \"acct_buf\".\n */\nvoid\naccount_jobstr(const job *pjob, int type)\n{\n\tpbs_list_head attrlist;\n\tint nd;\n\tint len;\n\tsvrattrl *pal;\n\tchar *pb;\n\n\tCLEAR_HEAD(attrlist);\n\n\t/* pack in general information about the job */\n\n\tacct_job(pjob, type, acct_buf, acct_bufsize);\n\tacct_buf[acct_bufsize] = '\\0';\n\n\tif (type != PBS_ACCT_QUEUE) {\n\n\t\tnd = strlen(acct_buf);\n\t\tpb = acct_buf + nd;\n\t\tlen = acct_bufsize - nd;\n\n\t\tsum_resc_alloc(pjob, &attrlist);\n\n\t\tnd = 0; /* compute total size needed in buf */\n\t\tpal = GET_NEXT(attrlist);\n\t\twhile (pal != NULL) {\n\t\t\t/* +5 in count is for '=', ' ', start and end quotes, and \\0 */\n\t\t\tnd += strlen(pal->al_name) + strlen(pal->al_value) + 5;\n\t\t\tif (pal->al_resc)\n\t\t\t\tnd += 1 + strlen(pal->al_resc);\n\t\t\tpal = GET_NEXT(pal->al_link);\n\t\t}\n\t\tif ((nd <= len) ||\n\t\t    (grow_acct_buf(&pb, &len, nd) == 0)) {\n\n\t\t\t/* have room in buffer, so copy in resources_assigned */\n\n\t\t\twhile ((pal = GET_NEXT(attrlist)) != NULL) {\n\t\t\t\tstrcat(pb, pal->al_name);\n\t\t\t\tif (pal->al_resc) {\n\t\t\t\t\tstrcat(pb, \".\");\n\t\t\t\t\tstrcat(pb, pal->al_resc);\n\t\t\t\t}\n\t\t\t\tstrcat(pb, \"=\");\n\t\t\t\tcpy_quote_value(pb, pal->al_value);\n\t\t\t\tstrcat(pb, \" \");\n\t\t\t\tdelete_link(&pal->al_link);\n\t\t\t\tfree(pal);\n\t\t\t\tpb += strlen(pb);\n\t\t\t}\n\t\t}\n\t}\n\taccount_record(type, pjob, acct_buf);\n}\n\n/**\n * @brief\n * account_resvstart - write a \"reservation start\" record\n *\n * @param[in]\tpresv - pointer to reservation structure\n *\n * @return\tvoid\n */\nvoid\naccount_resvstart(resc_resv *presv)\n{\n\t/* pack in general information about the reservation */\n\n\t(void) acct_resv(presv, acct_buf, acct_bufsize);\n\tacct_buf[acct_bufsize] = '\\0';\n\taccount_recordResv(PBS_ACCT_BR, presv, 
acct_buf);\n}\n\n/**\n * @brief\n *\tForm and write a job termination/rerun record with resource usage.\n *\n * @par\tFunctionality:\n *\tTakes various information from the job structure, start time, owner,\n *\tResource_List, etc., and the resource usage information (see\n *\tji_acctrec) if present and formats the record type requested.\n *\tCurrently, this is used for 'E' and 'R' records.  The record is then\n *\twritten to the accounting log.\n *\n * @see:\n *\ton_job_exit() and on_job_rerun() as well as force_reque().\n *\n * @param[in]\tpjob\t- pointer to job structure\n * @param[in]\tused\t- resource usage information from Mom, this is a string\n *\t\t\t  consisting of space separated keyword=value pairs,\n *\t\t\t  may be a null pointer\n * @param[in]\ttype\t- record type, PBS_ACCT_END ('E') or\n *\t\t\t  PBS_ACCT_RERUN ('R')\n * @return\tvoid\n *\n * @par\tMT-safe: No - uses a global buffer, \"acct_buf\".\n *\n */\nvoid\naccount_jobend(job *pjob, char *used, int type)\n{\n\tint i = 0;\n\tint len = 0;\n\tchar *pb = NULL;\n\tchar *resc_used;\n\tint resc_used_size = 0;\n\n\t/* pack in general information about the job */\n\n\tpb = acct_job(pjob, type, acct_buf, acct_bufsize);\n\tlen = acct_bufsize - (pb - acct_buf);\n\n\t/*\n\t * for each keyword=value pair added, the following steps should be\n\t * followed:\n\t * a. calculate (or over-estimate) the size of the data to be added\n\t * b. check that there is sufficient room in the buffer; \"len\" is the\n\t *    current unused amount.\n\t * c. If necessary, grow the buffer by calling grow_acct_buf().\n\t *    If this function fails, just write out what we already have.\n\t * d. Append the new datum to the buffer at \"pb\".  Each new item should\n\t *    have a single leading space.  The variable \"pb\" is maintained\n\t *    to point to the end to save \"strcat\" time.\n\t * e. 
Advance \"pb\" by the length of the datum added and decrement\n\t *    \"len\" by the same amount.\n\t */\n\n\t/* session */\n\ti = 30;\n\tif (i > len)\n\t\tif (grow_acct_buf(&pb, &len, i) == -1)\n\t\t\tgoto writeit;\n\t(void) sprintf(pb, \"session=%ld\",\n\t\t       get_jattr_long(pjob, JOB_ATR_session_id));\n\ti = strlen(pb);\n\tpb += i;\n\tlen -= i;\n\n\t/* Alternate id if present */\n\n\tif (is_jattr_set(pjob, JOB_ATR_altid)) {\n\t\ti = 9 + strlen(get_jattr_str(pjob, JOB_ATR_altid));\n\t\tif (i > len)\n\t\t\tif (grow_acct_buf(&pb, &len, i) == -1)\n\t\t\t\tgoto writeit;\n\n\t\t(void) sprintf(pb, \" alt_id=%s\",\n\t\t\t       get_jattr_str(pjob, JOB_ATR_altid));\n\n\t\ti = strlen(pb);\n\t\tpb += i;\n\t\tlen -= i;\n\t}\n\n\t/* add the execution ended time */\n\ti = 18;\n\tif (i > len)\n\t\tif (grow_acct_buf(&pb, &len, i) == -1)\n\t\t\tgoto writeit;\n\t(void) sprintf(pb, \" end=%ld\", (long) time_now);\n\ti = strlen(pb);\n\tpb += i;\n\tlen -= i;\n\n\t/* finally add on resources used from req_jobobit() */\n\tif (type == PBS_ACCT_END || type == PBS_ACCT_RERUN) {\n\t\tif ((used == NULL && pjob->ji_acctrec == NULL) || (used != NULL && strstr(used, \"resources_used\") == NULL)) {\n\t\t\t/* If pbs_server is restarted during end-of-job processing then used may be NULL.\n\t\t\t * So we try to derive the resource usage information from the resources_used attribute of\n\t\t\t * the job and then reconstruct the resource usage information into the resc_used buffer.\n\t\t\t */\n\n\t\t\t/* Allocate initial space for resc_used.  Future space will be allocated by pbs_strcat(). */\n\t\t\tresc_used = malloc(RESC_USED_BUF_SIZE);\n\t\t\tif (resc_used == NULL)\n\t\t\t\tgoto writeit;\n\t\t\tresc_used_size = RESC_USED_BUF_SIZE;\n\n\t\t\t/* strlen(msg_job_end_stat) == 12 characters plus a number.  
This should be plenty big */\n\t\t\t(void) snprintf(resc_used, resc_used_size, msg_job_end_stat,\n\t\t\t\t\tpjob->ji_qs.ji_un.ji_exect.ji_exitstat);\n\n\t\t\tif (get_resc_used(pjob, &resc_used, &resc_used_size) == -1) {\n\t\t\t\tfree(resc_used);\n\t\t\t\tgoto writeit;\n\t\t\t}\n\n\t\t\tused = resc_used;\n\t\t\tfree(pjob->ji_acctrec);\n\t\t\tpjob->ji_acctrec = used;\n\t\t}\n\t}\n\n\tif (used != NULL) {\n\t\ti = strlen(used) + 1;\n\t\tif (i > len)\n\t\t\tif (grow_acct_buf(&pb, &len, i) == -1)\n\t\t\t\tgoto writeit;\n\t\t(void) strcat(pb, \" \");\n\t\t(void) strcat(pb, used);\n\t\ti = strlen(pb);\n\t\tpb += i;\n\t\tlen -= i;\n\t}\n\n\t/* Add eligible_time */\n\tif (get_sattr_long(SVR_ATR_EligibleTimeEnable) == 1) {\n\t\tchar timebuf[TIMEBUF_SIZE] = {0};\n\t\ti = 26; /* max size for \" eligible_time=<value>\" */\n\t\tif (i > len)\n\t\t\tif (grow_acct_buf(&pb, &len, i) == -1)\n\t\t\t\tgoto writeit;\n\n\t\tconvert_duration_to_str(get_jattr_long(pjob, JOB_ATR_eligible_time), timebuf, TIMEBUF_SIZE);\n\t\t(void) sprintf(pb, \" eligible_time=%s\", timebuf);\n\t\ti = strlen(pb);\n\t\tpb += i;\n\t\tlen -= i;\n\t}\n\n\t/* Add in run count */\n\n\ti = 34; /* sort of max size for \"run_count=<value>\" */\n\tif (i > len)\n\t\tif (grow_acct_buf(&pb, &len, i) == -1)\n\t\t\tgoto writeit;\n\tsprintf(pb, \" run_count=%ld\",\n\t\tget_jattr_long(pjob, JOB_ATR_runcount));\n\t/* if any more is added after this point, */\n\t/* don't forget to reset pb and len first */\n\n\t/* done creating record,  now write it out */\n\nwriteit:\n\tacct_buf[acct_bufsize - 1] = '\\0';\n\taccount_record(type, pjob, acct_buf);\n}\n/**\n * @brief\n *\tLog the license used.\n *\n * @see\n *\tcall_log_license\n *\n * @param[in]   pu\t-\tpointer to licenses_high_use\n *\n * @return      void\n */\nvoid\nlog_licenses(pbs_licenses_high_use *pu)\n{\n\tsprintf(acct_buf, \"floating license hour:%d day:%d month:%d 
max:%d\",\n\t\tpu->lu_max_hr,\n\t\tpu->lu_max_day,\n\t\tpu->lu_max_month,\n\t\tpu->lu_max_forever);\n\twrite_account_record(PBS_ACCT_LIC, \"license\", acct_buf);\n}\n\n/**\n * @brief\n *\tBuilds job accounting record.\n *\n * @par Functionality:\n *      This function builds basic job data to be printed with provisioning\n *\trecord.\n *\n * @see\n *\tset_job_ProvAcctRcd\n *\n * @param[in]   pjob\t-\tpointer to job\n * @param[in]   buf\t-\tpointer to buffer to contain job related data\n * @param[in]   len\t-\tlength of buffer\n *\n * @return      pointer to string\n * @retval       char* : job accounting info\n *\n * @par Side Effects:\n *     the accounting buffer (acct_buf) is grown\n *\n * @par MT-safe: No\n *\n */\nstatic char *\ncommon_acct_job(job *pjob, char *buf, int len)\n{\n\tint i;\n\tint nd;\n\tchar *pb;\n\n\tpb = buf;\n\n\t/* user */\n\tnd = 7 + strlen(get_jattr_str(pjob, JOB_ATR_euser));\n\tif (nd > len)\n\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\treturn (pb);\n\n\tsprintf(pb, \"user=%s \", get_jattr_str(pjob, JOB_ATR_euser));\n\n\ti = strlen(pb);\n\tpb += i;\n\tlen -= i;\n\n\t/* group */\n\tnd = 8 + strlen(get_jattr_str(pjob, JOB_ATR_egroup));\n\tif (nd > len)\n\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\treturn (pb);\n\n\tsprintf(pb, \"group=%s \", get_jattr_str(pjob, JOB_ATR_egroup));\n\n\ti = strlen(pb);\n\tpb += i;\n\tlen -= i;\n\n\t/* job name */\n\tnd = 10 + strlen(get_jattr_str(pjob, JOB_ATR_jobname));\n\tif (nd > len)\n\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\treturn (pb);\n\tsprintf(pb, \"jobname=%s \", get_jattr_str(pjob, JOB_ATR_jobname));\n\ti = strlen(pb);\n\tpb += i;\n\tlen -= i;\n\n\t/* queue name */\n\tnd = 8 + strlen(pjob->ji_qhdr->qu_qs.qu_name);\n\tif (nd > len)\n\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\treturn (pb);\n\tsprintf(pb, \"queue=%s \", pjob->ji_qhdr->qu_qs.qu_name);\n\n\treturn (pb);\n}\n\n/**\n * @brief\n *\tCreates start/end provisioning record.\n *\n * @par Functionality:\n *      This 
function creates a start/end provisioning record for a single job.\n *\n * @see\n *\n * @param[in]   pjob\t-\tpointer to job\n * @param[in]   time_se\t-\tstart or end time stamp depending upon value of type\n * @param[in]   type\t-\tinteger value to select type of record,\n *\t\t\t\t1 = start, 2 = end\n *\n * @return\tvoid\n *\n * @par Side Effects:\n *      The accounting buffer (acct_buf) is grown\n *\n * @par MT-safe: No\n *\n */\nvoid\nset_job_ProvAcctRcd(job *pjob, long time_se, int type)\n{\n\tint nd;\n\tint len;\n\tchar *pb;\n\tint i;\n\n\t/* pack in general information about the job */\n\n\t(void) common_acct_job(pjob, acct_buf, acct_bufsize);\n\tacct_buf[acct_bufsize - 1] = '\\0';\n\n\tnd = strlen(acct_buf);\n\tpb = acct_buf + nd;\n\tlen = acct_bufsize - nd;\n\n\t/* list of vnodes that were provisioned */\n#ifdef NAS /* localmod 136 */\n\tif (get_jattr_str(pjob, JOB_ATR_prov_vnode) == NULL) {\n\t\tchar logmsg[1024];\n\t\tsprintf(logmsg, \"prov_vnode is NULL for job %s\", get_jattr_str(pjob, JOB_ATR_hashname));\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_INFO, \"Bug\", logmsg);\n\n\t\treturn;\n\t}\n#endif /* localmod 136 */\n\tnd = 18 + strlen(get_jattr_str(pjob, JOB_ATR_prov_vnode));\n\tif (nd > len)\n\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\treturn;\n\t(void) sprintf(pb, \"provision_vnode=%s \",\n\t\t       get_jattr_str(pjob, JOB_ATR_prov_vnode));\n\ti = strlen(pb);\n\tpb += i;\n\tlen -= i;\n\n\tswitch (type) {\n\t\tcase PROVISIONING_STARTED:\n\t\t\tnd = 45;\n\t\t\tif (nd > len)\n\t\t\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\t\t\treturn;\n\t\t\t(void) sprintf(pb, \"provision_event=START start_time=%ld\", time_se);\n\t\t\tacct_buf[acct_bufsize - 1] = '\\0';\n\t\t\taccount_record(PBS_ACCT_PROV_START, pjob, acct_buf);\n\t\t\tbreak;\n\t\tcase PROVISIONING_SUCCESS:\n\t\tcase PROVISIONING_FAILURE:\n\t\t\tnd = 56;\n\t\t\tif (nd > len)\n\t\t\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\t\t\treturn;\n\n\t\t\t(void) sprintf(pb, 
\"provision_event=END status=%s end_time=%ld\",\n\t\t\t\t       (type == 2) ? \"SUCCESS\" : \"FAILURE\", time_se);\n\t\t\tacct_buf[acct_bufsize - 1] = '\\0';\n\t\t\taccount_record(PBS_ACCT_PROV_END, pjob, acct_buf);\n\t\t\tbreak;\n\t}\n}\n\n/**\n * @brief\n * \tBuild common data for update job accounting record\n *\n * @par\tFunctionality:\n *\tUsed by account_job_update()\n *\n *\n * @param[in]\tpjob\t- pointer to job structure\n * @param[in]\ttype\t- type of accounting record: PBS_ACCT_UPDATE,\n *\t\t\t  PBS_ACCT_LAST.\n * @param[in]\tbuf\t- buffer holding the data that will be stored in\n *\t\t\t  accounting logs.\n * @param[in]\tlen\t- number of characters in 'buf' still available to\n *\t\t\t  store data.\n * @return\tchar *\n * @retval\tpointer to 'buf' containing new data.\n *\n */\nstatic char *\nbuild_common_data_for_job_update(const job *pjob, int type, char *buf, int len)\n{\n\tpbs_list_head attrlist;\n\tint ct;\n\tint nd;\n\tsvrattrl *pal;\n\tchar *pb;\n\tint k, len_acct, att_index;\n\tchar save_char;\n\tint old_perm;\n\n\tpb = buf;\n\tCLEAR_HEAD(attrlist);\n\n\t/* gridname */\n\tif (is_jattr_set(pjob, JOB_ATR_gridname)) {\n\t\tnd = strlen(get_jattr_str(pjob, JOB_ATR_gridname)) + sizeof(GRIDNAME_FMT);\n\t\tif (nd > len)\n\t\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\t\treturn (pb);\n\n\t\t(void) snprintf(pb, len, GRIDNAME_FMT,\n\t\t\t\tget_jattr_str(pjob, JOB_ATR_gridname));\n\t\tct = strlen(pb);\n\t\tpb += ct;\n\t\tlen -= ct;\n\t}\n\n\t/* user */\n\tnd = sizeof(USER_FMT) + strlen(get_jattr_str(pjob, JOB_ATR_euser));\n\tif (nd > len)\n\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\treturn (pb);\n\n\t(void) snprintf(pb, len, USER_FMT,\n\t\t\tget_jattr_str(pjob, JOB_ATR_euser));\n\n\tct = strlen(pb);\n\tpb += ct;\n\tlen -= ct;\n\n\t/* group */\n\tnd = sizeof(GROUP_FMT) + strlen(get_jattr_str(pjob, JOB_ATR_egroup));\n\tif (nd > len)\n\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\treturn (pb);\n\n\t(void) snprintf(pb, len, 
GROUP_FMT,\n\t\t\tget_jattr_str(pjob, JOB_ATR_egroup));\n\n\tct = strlen(pb);\n\tpb += ct;\n\tlen -= ct;\n\n\t/* account */\n\tif (is_jattr_set(pjob, JOB_ATR_account)) {\n\t\tnd = sizeof(ACCOUNT_FMT) + strlen(get_jattr_str(pjob, JOB_ATR_account));\n\t\tif (nd > len)\n\t\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\t\treturn (pb);\n\n\t\t(void) snprintf(pb, len, ACCOUNT_FMT,\n\t\t\t\tget_jattr_str(pjob, JOB_ATR_account));\n\n\t\tct = strlen(pb);\n\t\tpb += ct;\n\t\tlen -= ct;\n\t}\n\n\t/* project */\n\tif (is_jattr_set(pjob, JOB_ATR_project)) {\n\t\tchar *projstr;\n\n\t\tprojstr = get_jattr_str(pjob, JOB_ATR_project);\n\t\t/* using PROJECT_FMT1 if projstr needs to be quoted; otherwise, PROJECT_FMT2 */\n\t\tnd = sizeof(PROJECT_FMT1) + strlen(projstr);\n\t\tif (nd > len)\n\t\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\t\treturn (pb);\n\t\tif (strchr(projstr, ' ') != NULL) {\n\t\t\t(void) snprintf(pb, len, PROJECT_FMT1, projstr);\n\t\t} else {\n\t\t\t(void) snprintf(pb, len, PROJECT_FMT2, projstr);\n\t\t}\n\t\tct = strlen(pb);\n\t\tpb += ct;\n\t\tlen -= ct;\n\t}\n\n\t/* accounting_id */\n\tif (is_jattr_set(pjob, JOB_ATR_acct_id)) {\n\t\tnd = sizeof(ACCOUNTING_ID_FMT) + strlen(get_jattr_str(pjob, JOB_ATR_acct_id));\n\t\tif (nd > len)\n\t\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\t\treturn (pb);\n\t\t(void) snprintf(pb, len, ACCOUNTING_ID_FMT,\n\t\t\t\tget_jattr_str(pjob, JOB_ATR_acct_id));\n\t\tct = strlen(pb);\n\t\tpb += ct;\n\t\tlen -= ct;\n\t}\n\n\t/* job name */\n\tnd = sizeof(JOBNAME_FMT) + strlen(get_jattr_str(pjob, JOB_ATR_jobname));\n\tif (nd > len)\n\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\treturn (pb);\n\t(void) snprintf(pb, len, JOBNAME_FMT,\n\t\t\tget_jattr_str(pjob, JOB_ATR_jobname));\n\tct = strlen(pb);\n\tpb += ct;\n\tlen -= ct;\n\n\t/* queue name */\n\tnd = sizeof(QUEUE_FMT) + strlen(pjob->ji_qhdr->qu_qs.qu_name);\n\tif (nd > len)\n\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\treturn (pb);\n\t(void) snprintf(pb, len, 
QUEUE_FMT, pjob->ji_qhdr->qu_qs.qu_name);\n\tct = strlen(pb);\n\tpb += ct;\n\tlen -= ct;\n\n\tif (pjob->ji_myResv) {\n\t\tnd = sizeof(RESVID_FMT) + strlen(pjob->ji_myResv->ri_qs.ri_resvID);\n\t\tif (is_rattr_set(pjob->ji_myResv, RESV_ATR_resv_name))\n\t\t\tnd += sizeof(RESVNAME_FMT) + strlen(get_rattr_str(pjob->ji_myResv, RESV_ATR_resv_name));\n\t\tif (nd > len)\n\t\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\t\treturn (pb);\n\t\t/* reservation name */\n\t\tif (is_rattr_set(pjob->ji_myResv, RESV_ATR_resv_name)) {\n\t\t\t(void) snprintf(pb, len, RESVNAME_FMT, get_rattr_str(pjob->ji_myResv, RESV_ATR_resv_name));\n\t\t\tct = strlen(pb);\n\t\t\tpb += ct;\n\t\t\tlen -= ct;\n\t\t}\n\n\t\t/* reservation ID */\n\t\t(void) snprintf(pb, len, RESVID_FMT,\n\t\t\t\tpjob->ji_myResv->ri_qs.ri_resvID);\n\t\tct = strlen(pb);\n\t\tpb += ct;\n\t\tlen -= ct;\n\t}\n\n\t/* insure space for all *times */\n\tnd = ACCTBUF_TIMES_NEED;\n\tif (nd > len)\n\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\treturn (pb);\n\n\t/* create time */\n\tsprintf(pb, \"ctime=%ld \", get_jattr_long(pjob, JOB_ATR_ctime));\n\tct = strlen(pb);\n\tpb += ct;\n\tlen -= ct;\n\n\t/* queued time */\n\tsprintf(pb, \"qtime=%ld \", get_jattr_long(pjob, JOB_ATR_qtime));\n\tct = strlen(pb);\n\tpb += ct;\n\tlen -= ct;\n\n\t/* eligible time, how long ready to run */\n\tsprintf(pb, \"etime=%ld \", get_jattr_long(pjob, JOB_ATR_etime));\n\tct = strlen(pb);\n\tpb += ct;\n\tlen -= ct;\n\n\t/* start time */\n\tsprintf(pb, \"start=%ld \", (long) pjob->ji_qs.ji_stime);\n\tct = strlen(pb);\n\tpb += ct;\n\tlen -= ct;\n\n\tif ((is_jattr_set(pjob, JOB_ATR_array_indices_submitted)) &&\n\t    check_job_state(pjob, JOB_STATE_LTR_BEGUN)) {\n\n\t\t/* for an Array Job in Begun state,  record index range */\n\n\t\tnd = sizeof(ARRAY_INDICES_FMT) + strlen(get_jattr_str(pjob, JOB_ATR_array_indices_submitted));\n\t\tif (nd > len)\n\t\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\t\treturn (pb);\n\t\tsnprintf(pb, len, ARRAY_INDICES_FMT, 
get_jattr_str(pjob, JOB_ATR_array_indices_submitted));\n\t\tct = strlen(pb);\n\t\tpb += ct;\n\t\tlen -= ct;\n\n\t\t/* now encode the job's resource_list attribute */\n\t\t/* of the just concluded phase */\n\t\told_perm = resc_access_perm;\n\t\tresc_access_perm = READ_ONLY;\n\t\tif (type == PBS_ACCT_UPDATE)\n\t\t\tatt_index = JOB_ATR_resource_acct;\n\t\telse\n\t\t\tatt_index = JOB_ATR_resource;\n\t\tjob_attr_def[att_index].at_encode(\n\t\t\tget_jattr(pjob, att_index),\n\t\t\t&attrlist,\n\t\t\tjob_attr_def[att_index].at_name,\n\t\t\tNULL,\n\t\t\tATR_ENCODE_CLIENT, NULL);\n\t\tresc_access_perm = old_perm;\n\n\t\tnd = 0; /* compute total size needed in buf */\n\t\tpal = GET_NEXT(attrlist);\n\t\twhile (pal != NULL) {\n\t\t\t/* +5 in count is for '=', ' ', start and end quotes, and \\0 */\n\t\t\tnd += strlen(pal->al_name) + strlen(pal->al_value) + 5;\n\t\t\tif (pal->al_resc)\n\t\t\t\tnd += 1 + strlen(pal->al_resc);\n\t\t\tpal = GET_NEXT(pal->al_link);\n\t\t}\n\t\tif (nd > len)\n\t\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\t\treturn (pb);\n\n\t\tif (type == PBS_ACCT_UPDATE)\n\t\t\tlen_acct = 5; /* for length of \"_acct\" */\n\t\telse\n\t\t\tlen_acct = 0;\n\t\twhile ((pal = GET_NEXT(attrlist)) != NULL) {\n\t\t\t/* strip off the '_acct' suffix */\n\t\t\tif (len_acct > 0) {\n\t\t\t\tk = strlen(pal->al_name);\n\t\t\t\tif (k > len_acct) {\n\t\t\t\t\tsave_char = pal->al_name[k - len_acct];\n\t\t\t\t\tpal->al_name[k - len_acct] = '\\0';\n\t\t\t\t}\n\t\t\t}\n\t\t\t(void) strcat(pb, pal->al_name);\n\t\t\tif (len_acct > 0) {\n\t\t\t\tif (k > len_acct) {\n\t\t\t\t\tpal->al_name[k - len_acct] = save_char;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (pal->al_resc) {\n\n\t\t\t\t(void) strcat(pb, \".\");\n\t\t\t\t(void) strcat(pb, pal->al_resc);\n\t\t\t}\n\t\t\t(void) strcat(pb, \"=\");\n\t\t\tcpy_quote_value(pb, pal->al_value);\n\t\t\t(void) strcat(pb, \" \");\n\t\t\tdelete_link(&pal->al_link);\n\t\t\t(void) free(pal);\n\t\t\tpb += strlen(pb);\n\t\t}\n\t} else {\n\n\t\t/* regular job 
*/\n\t\t/* record exec_host of a completed phase */\n\t\tif (type == PBS_ACCT_UPDATE)\n\t\t\tatt_index = JOB_ATR_exec_host_acct;\n\t\telse\n\t\t\tatt_index = JOB_ATR_exec_host;\n\t\tif (is_jattr_set(pjob, att_index)) {\n\t\t\t/* execution host list, may be loooong */\n\t\t\tnd = sizeof(EXEC_HOST_FMT) + strlen(get_jattr_str(pjob, att_index));\n\t\t\tif (nd > len)\n\t\t\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\t\t\treturn (pb);\n\t\t\t(void) snprintf(pb, len, EXEC_HOST_FMT,\n\t\t\t\t\tget_jattr_str(pjob, att_index));\n\t\t\tct = strlen(pb);\n\t\t\tpb += ct;\n\t\t\tlen -= ct;\n\t\t}\n\n\t\t/* record exec_vnode of a just concluded phase */\n\t\tif (type == PBS_ACCT_UPDATE)\n\t\t\tatt_index = JOB_ATR_exec_vnode_acct;\n\t\telse\n\t\t\tatt_index = JOB_ATR_exec_vnode;\n\t\tif (is_jattr_set(pjob, att_index)) {\n\t\t\t/* execution vnode list, will be even longer */\n\t\t\tnd = sizeof(EXEC_VNODE_FMT) + strlen(get_jattr_str(pjob, att_index));\n\t\t\tif (nd > len)\n\t\t\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\t\t\treturn (pb);\n\t\t\t(void) snprintf(pb, len, EXEC_VNODE_FMT,\n\t\t\t\t\tget_jattr_str(pjob, att_index));\n\t\t\tct = strlen(pb);\n\t\t\tpb += ct;\n\t\t\tlen -= ct;\n\t\t}\n\n\t\t/* now encode the job's resource_list attribute */\n\t\t/* of the just concluded phase */\n\t\told_perm = resc_access_perm;\n\t\tresc_access_perm = READ_ONLY;\n\t\tif (type == PBS_ACCT_UPDATE)\n\t\t\tatt_index = JOB_ATR_resource_acct;\n\t\telse\n\t\t\tatt_index = JOB_ATR_resource;\n\t\t(void) job_attr_def[att_index].at_encode(\n\t\t\tget_jattr(pjob, att_index),\n\t\t\t&attrlist,\n\t\t\tjob_attr_def[att_index].at_name,\n\t\t\tNULL,\n\t\t\tATR_ENCODE_CLIENT, NULL);\n\t\tresc_access_perm = old_perm;\n\n\t\tnd = 0; /* compute total size needed in buf */\n\t\tpal = GET_NEXT(attrlist);\n\t\twhile (pal != NULL) {\n\t\t\t/* +5 in count is for '=', ' ', start and end quotes, and \\0 */\n\t\t\tnd += strlen(pal->al_name) + strlen(pal->al_value) + 5;\n\t\t\tif (pal->al_resc)\n\t\t\t\tnd 
+= 1 + strlen(pal->al_resc);\n\t\t\tpal = GET_NEXT(pal->al_link);\n\t\t}\n\t\tif (nd > len)\n\t\t\tif (grow_acct_buf(&pb, &len, nd) == -1)\n\t\t\t\treturn (pb);\n\n\t\tif (type == PBS_ACCT_UPDATE)\n\t\t\tlen_acct = strlen(\"_acct\");\n\t\telse\n\t\t\tlen_acct = 0;\n\t\twhile ((pal = GET_NEXT(attrlist)) != NULL) {\n\t\t\t/* strip off the '_acct' suffix */\n\t\t\tif (len_acct > 0) {\n\t\t\t\tk = strlen(pal->al_name);\n\t\t\t\tif (k > len_acct) {\n\t\t\t\t\tsave_char = pal->al_name[k - len_acct];\n\t\t\t\t\tpal->al_name[k - len_acct] = '\\0';\n\t\t\t\t}\n\t\t\t}\n\t\t\t(void) strcat(pb, pal->al_name);\n\t\t\tif (len_acct > 0) {\n\t\t\t\tif (k > len_acct) {\n\t\t\t\t\tpal->al_name[k - len_acct] = save_char;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (pal->al_resc) {\n\n\t\t\t\t(void) strcat(pb, \".\");\n\t\t\t\t(void) strcat(pb, pal->al_resc);\n\t\t\t}\n\t\t\t(void) strcat(pb, \"=\");\n\t\t\tcpy_quote_value(pb, pal->al_value);\n\t\t\t(void) strcat(pb, \" \");\n\t\t\tdelete_link(&pal->al_link);\n\t\t\t(void) free(pal);\n\t\t\tpb += strlen(pb);\n\t\t}\n\t}\n\n\treturn (pb);\n}\n\n/**\n * @brief\n *\tForm and write a job update record with resource usage.\n *\n * @par\tFunctionality:\n *\tTakes various information from the job structure, start time, owner,\n *\tResource_List, etc., and the resource usage information\n *\tif present and formats the record type requested.\n *\tCurrently, this is used for 'u' and 'e' records.  
The record is then\n *\twritten to the accounting log.\n *\n * @see build_common_data_for_job_update()\n *\n * @param[in]\tpjob\t- pointer to job structure\n * @param[in]\ttype\t- record type, PBS_ACCT_UPDATE ('u') or PBS_ACCT_LAST ('e')\n * @return\tvoid\n *\n * @par\tMT-safe: No - uses a global buffer, \"acct_buf\".\n *\n */\nvoid\naccount_job_update(job *pjob, int type)\n{\n\tint i = 0;\n\tint len = 0;\n\tchar *pb = NULL;\n\tpbs_list_head attrlist;\n\tstruct svrattrl *patlist = NULL;\n\tchar *resc_used = NULL;\n\tint resc_used_size = 0;\n\tint k, len_upd, attr_index;\n\tchar save_char = '\\0';\n\tint old_perm;\n\n\tif (!is_jattr_set(pjob, JOB_ATR_exec_vnode_acct))\n\t\treturn;\n\n\tCLEAR_HEAD(attrlist);\n\t/* pack in general information about the job */\n\n\tpb = build_common_data_for_job_update(pjob, type, acct_buf, acct_bufsize);\n\tlen = acct_bufsize - (pb - acct_buf);\n\n\t/* session */\n\ti = 30;\n\tif (i > len)\n\t\tif (grow_acct_buf(&pb, &len, i) == -1)\n\t\t\tgoto writeit;\n\tsprintf(pb, \"session=%ld\", get_jattr_long(pjob, JOB_ATR_session_id));\n\ti = strlen(pb);\n\tpb += i;\n\tlen -= i;\n\n\t/* Alternate id if present */\n\n\tif (is_jattr_set(pjob, JOB_ATR_altid)) {\n\t\t/* 9 is for length of \" alt_id=\" and \\0 */\n\t\ti = 9 + strlen(get_jattr_str(pjob, JOB_ATR_altid));\n\t\tif (i > len)\n\t\t\tif (grow_acct_buf(&pb, &len, i) == -1)\n\t\t\t\tgoto writeit;\n\n\t\tsprintf(pb, \" alt_id=%s\", get_jattr_str(pjob, JOB_ATR_altid));\n\n\t\ti = strlen(pb);\n\t\tpb += i;\n\t\tlen -= i;\n\t}\n\n\t/* Add eligible_time */\n\tif (get_sattr_long(SVR_ATR_EligibleTimeEnable) == 1) {\n\t\tchar timebuf[TIMEBUF_SIZE];\n\t\ti = 26; /* sort of max size for \" eligible_time=<value>\" */\n\t\tif (i > len)\n\t\t\tif (grow_acct_buf(&pb, &len, i) == -1)\n\t\t\t\tgoto 
writeit;\n\n\t\tconvert_duration_to_str(get_jattr_long(pjob, JOB_ATR_eligible_time), timebuf, TIMEBUF_SIZE);\n\t\t(void) sprintf(pb, \" eligible_time=%s\", timebuf);\n\t\ti = strlen(pb);\n\t\tpb += i;\n\t\tlen -= i;\n\t}\n\n\t/* Add in runcount */\n\n\ti = 34; /* sort of max size for \"run_count=<value>\" */\n\tif (i > len)\n\t\tif (grow_acct_buf(&pb, &len, i) == -1)\n\t\t\tgoto writeit;\n\tsprintf(pb, \" run_count=%ld\", get_jattr_long(pjob, JOB_ATR_runcount));\n\n\t/* now encode the job's resources_used attribute */\n\told_perm = resc_access_perm;\n\tresc_access_perm = READ_ONLY;\n\tif (is_jattr_set(pjob, JOB_ATR_resc_used_update)) {\n\t\tlen_upd = 7; /* for length of \"_update\" */\n\t\tattr_index = JOB_ATR_resc_used_update;\n\t} else {\n\t\tlen_upd = 0;\n\t\tattr_index = JOB_ATR_resc_used;\n\t}\n\tjob_attr_def[attr_index].at_encode(get_jattr(pjob, attr_index),\n\t\t\t\t\t   &attrlist,\n\t\t\t\t\t   job_attr_def[attr_index].at_name,\n\t\t\t\t\t   NULL,\n\t\t\t\t\t   ATR_ENCODE_CLIENT, NULL);\n\tresc_access_perm = old_perm;\n\n\t/* Allocate initial space for resc_used.  Future space will be allocated by pbs_strcat(). 
*/\n\tresc_used = malloc(RESC_USED_BUF_SIZE);\n\tif (resc_used == NULL)\n\t\tgoto writeit;\n\tresc_used_size = RESC_USED_BUF_SIZE;\n\tresc_used[0] = '\\0';\n\n\tpatlist = GET_NEXT(attrlist);\n\twhile (patlist) {\n\t\tk = 0;\n\t\tif (len_upd > 0) {\n\t\t\t/* strip off the '_update' suffix */\n\t\t\tk = strlen(patlist->al_name);\n\t\t\tif (k > len_upd) {\n\t\t\t\tsave_char = patlist->al_name[k - len_upd];\n\t\t\t\tpatlist->al_name[k - len_upd] = '\\0';\n\t\t\t}\n\t\t}\n\t\t/*\n\t\t * To calculate length of the string of the form \"resources_used.<resource>=<value>\".\n\t\t * Additional length of 3 is required to accommodate the characters '.', '=' and ' '.\n\t\t */\n\t\tif (strlen(patlist->al_value) > 0) {\n\n\t\t\tif (pbs_strcat(&resc_used, &resc_used_size, \" \") == NULL) {\n\t\t\t\tlog_err(errno, __func__, \"Failed to allocate memory.\");\n\t\t\t\tif (len_upd > 0 && k > len_upd) {\n\t\t\t\t\tpatlist->al_name[k - len_upd] = save_char;\n\t\t\t\t}\n\t\t\t\tgoto writeit;\n\t\t\t}\n\t\t\tif (pbs_strcat(&resc_used, &resc_used_size, patlist->al_name) == NULL) {\n\t\t\t\tlog_err(errno, __func__, \"Failed to allocate memory.\");\n\t\t\t\tif (len_upd > 0 && k > len_upd) {\n\t\t\t\t\tpatlist->al_name[k - len_upd] = save_char;\n\t\t\t\t}\n\t\t\t\tgoto writeit;\n\t\t\t}\n\t\t\tif (len_upd > 0 && k > len_upd) {\n\t\t\t\tpatlist->al_name[k - len_upd] = save_char;\n\t\t\t}\n\t\t\tif (patlist->al_resc) {\n\t\t\t\tif (pbs_strcat(&resc_used, &resc_used_size, \".\") == NULL) {\n\t\t\t\t\tlog_err(errno, __func__, \"Failed to allocate memory.\");\n\t\t\t\t\tgoto writeit;\n\t\t\t\t}\n\t\t\t\tif (pbs_strcat(&resc_used, &resc_used_size, patlist->al_resc) == NULL) {\n\t\t\t\t\tlog_err(errno, __func__, \"Failed to allocate memory.\");\n\t\t\t\t\tgoto writeit;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (pbs_strcat(&resc_used, &resc_used_size, \"=\") == NULL) {\n\t\t\t\tlog_err(errno, __func__, \"Failed to allocate memory.\");\n\t\t\t\tgoto writeit;\n\t\t\t}\n\t\t\tif (patlist->al_resc && 
(strcmp(patlist->al_resc, WALLTIME) == 0)) {\n\t\t\t\tlong j, k;\n\n\t\t\t\tk = get_walltime(pjob, JOB_ATR_resc_used_acct);\n\t\t\t\tj = get_walltime(pjob, JOB_ATR_resc_used);\n\t\t\t\tif ((k >= 0) && (j >= k)) {\n\t\t\t\t\tchar timebuf[TIMEBUF_SIZE];\n\n\t\t\t\t\tconvert_duration_to_str(j - k, timebuf, TIMEBUF_SIZE);\n\t\t\t\t\tif (pbs_strcat(&resc_used, &resc_used_size, timebuf) == NULL) {\n\t\t\t\t\t\tlog_err(errno, __func__, \"Failed to allocate memory.\");\n\t\t\t\t\t\tgoto writeit;\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tif (pbs_strcat(&resc_used, &resc_used_size, patlist->al_value) == NULL) {\n\t\t\t\t\t\tlog_err(errno, __func__, \"Failed to allocate memory.\");\n\t\t\t\t\t\tgoto writeit;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif (pbs_strcat(&resc_used, &resc_used_size, patlist->al_value) == NULL) {\n\t\t\t\t\tlog_err(errno, __func__, \"Failed to allocate memory.\");\n\t\t\t\t\tgoto writeit;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tpatlist = patlist->al_sister;\n\t}\n\tfree_attrlist(&attrlist);\n\n\tif (resc_used[0] != '\\0') {\n\t\ti = strlen(resc_used) + 1;\n\t\tif (i > len)\n\t\t\tif (grow_acct_buf(&pb, &len, i) == -1)\n\t\t\t\tgoto writeit;\n\t\t(void) strcat(pb, \" \");\n\t\t(void) strcat(pb, resc_used);\n\t\ti = strlen(pb);\n\t\tpb += i;\n\t\tlen -= i;\n\n\t\tset_attr_with_attr(&job_attr_def[JOB_ATR_resc_used_acct], get_jattr(pjob, JOB_ATR_resc_used_acct), get_jattr(pjob, JOB_ATR_resc_used), INCR);\n\t}\n\nwriteit:\n\tacct_buf[acct_bufsize - 1] = '\\0';\n\taccount_record(type, pjob, acct_buf);\n\tif (resc_used != NULL)\n\t\tfree(resc_used);\n}\n\n/**\n * @brief\n * \tlog an alter record for modified jobs\n * \tplist contains the attributes and resources requested to be modified.\n * \tWe only modify those because the ATTR_l encode function will encode\n * \tall resources, not just the ones we want.\n *\n * @param[in] pjob - job to log records for.\n * @param[in] plist - list of attributes and resources to log\n *\n * @returns void\n 
*/\nvoid\nlog_alter_records_for_attrs(job *pjob, svrattrl *plist)\n{\n\tsvrattrl *cur_svr;\n\tpbs_list_head phead;\n\tsvrattrl *cur_plist;\n\tchar *per_attr_buf = NULL;\n\tstatic char *entire_record = NULL;\n\tstatic int entire_record_len = 0;\n\tint error = 0;\n\tint i;\n\n\tif (entire_record == NULL) {\n\t\tentire_record = malloc(1024 * sizeof(char));\n\t\tif (entire_record == NULL)\n\t\t\treturn;\n\t\tentire_record_len = 1024;\n\t}\n\tentire_record[0] = '\\0';\n\n\tCLEAR_HEAD(phead);\n\tfor (i = 0; i < JOB_ATR_LAST; i++) {\n\t\tattribute *pattr = get_jattr(pjob, i);\n\t\tif (pattr->at_flags & ATR_VFLAG_MODIFY) {\n\t\t\tsvrattrl *svrattrl_list = NULL;\n\t\t\tjob_attr_def[i].at_encode(pattr, &phead, job_attr_def[i].at_name, NULL, ATR_ENCODE_CLIENT, &svrattrl_list);\n\t\t\tfor (cur_plist = plist; cur_plist != NULL; cur_plist = (svrattrl *) GET_NEXT(cur_plist->al_link)) {\n\t\t\t\tint j;\n\t\t\t\tint ignore = 0;\n\t\t\t\tfor (j = 0; do_not_emit_alter[j] != NULL; j++)\n\t\t\t\t\tif (strcmp(do_not_emit_alter[j], cur_plist->al_name) == 0) {\n\t\t\t\t\t\tignore = 1;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\tif (ignore || strcmp(cur_plist->al_name, job_attr_def[i].at_name) != 0)\n\t\t\t\t\tcontinue;\n\t\t\t\telse {\n\t\t\t\t\tfor (cur_svr = svrattrl_list; cur_svr != NULL; cur_svr = (svrattrl *) GET_NEXT(cur_svr->al_link)) {\n\t\t\t\t\t\tif (pattr->at_type == ATR_TYPE_RESC) {\n\t\t\t\t\t\t\tif (cur_plist->al_resc != NULL) {\n\t\t\t\t\t\t\t\tif (strcmp(cur_plist->al_resc, cur_svr->al_resc) == 0) {\n\t\t\t\t\t\t\t\t\tchar *fmt;\n\t\t\t\t\t\t\t\t\tif (strchr(cur_svr->al_value, ' ') == NULL)\n\t\t\t\t\t\t\t\t\t\tfmt = \"%s.%s=%s\";\n\t\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t\t\tfmt = \"%s.%s=\\\"%s\\\"\";\n\t\t\t\t\t\t\t\t\tpbs_asprintf(&per_attr_buf, fmt, cur_svr->al_name, cur_svr->al_resc, cur_svr->al_value);\n\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tchar *fmt;\n\t\t\t\t\t\t\tif (strchr(cur_svr->al_value, ' ') == 
NULL)\n\t\t\t\t\t\t\t\tfmt = \"%s=%s\";\n\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\tfmt = \"%s=\\\"%s\\\"\";\n\t\t\t\t\t\t\tpbs_asprintf(&per_attr_buf, fmt, cur_svr->al_name, cur_svr->al_value);\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (per_attr_buf == NULL && cur_plist->al_value[0] == '\\0') { /* unset */\n\t\t\t\t\tpbs_asprintf(&per_attr_buf, \"%s%s%s=UNSET\", cur_plist->al_name, cur_plist->al_resc ? \".\" : \"\", cur_plist->al_resc ? cur_plist->al_resc : \"\");\n\t\t\t\t}\n\n\t\t\t\tif (entire_record[0] != '\\0')\n\t\t\t\t\tif (pbs_strcat(&entire_record, &entire_record_len, \" \") == NULL)\n\t\t\t\t\t\terror = 1;\n\n\t\t\t\tif (error == 0)\n\t\t\t\t\tif (pbs_strcat(&entire_record, &entire_record_len, per_attr_buf) == NULL)\n\t\t\t\t\t\terror = 1;\n\n\t\t\t\tfree(per_attr_buf);\n\t\t\t\tper_attr_buf = NULL;\n\t\t\t\tif (error)\n\t\t\t\t\treturn;\n\t\t\t}\n\t\t\tfree_svrattrl(svrattrl_list);\n\t\t}\n\t}\n\tif (entire_record[0] != '\\0')\n\t\taccount_record(PBS_ACCT_ALTER, pjob, entire_record);\n}\n\n/**\n * @brief\n * Common function to log a suspend/resume record\n * for suspend/resume job events respectively.\n *\n * @param[in] pjob - job to log records for.\n * @param[in] acct_type - Accounting type flag\n *\n * @returns void\n */\nvoid\nlog_suspend_resume_record(job *pjob, int acct_type)\n{\n\tif (acct_type == PBS_ACCT_SUSPEND) {\n\t\tchar *resc_buf;\n\t\tint resc_buf_size = RESC_USED_BUF_SIZE;\n\n\t\t/* Allocating initial space as required by resc_used. Future space will be allocated by pbs_strcat(). 
*/\n\t\tresc_buf = malloc(RESC_USED_BUF_SIZE);\n\t\tif (resc_buf == NULL)\n\t\t\treturn;\n\n\t\tresc_buf[0] = '\\0';\n\n\t\tif (get_resc_used(pjob, &resc_buf, &resc_buf_size) == -1) {\n\t\t\twrite_account_record(acct_type, pjob->ji_qs.ji_jobid, NULL);\n\t\t\tfree(resc_buf);\n\t\t\treturn;\n\t\t}\n\n\t\tif (is_jattr_set(pjob, JOB_ATR_resc_released)) {\n\t\t\tchar *ret;\n\t\t\tret = pbs_strcat(&resc_buf, &resc_buf_size, \" resources_released=\");\n\t\t\tif (ret == NULL) {\n\t\t\t\tfree(resc_buf);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tret = pbs_strcat(&resc_buf, &resc_buf_size, get_jattr_str(pjob, JOB_ATR_resc_released));\n\t\t\tif (ret == NULL) {\n\t\t\t\tfree(resc_buf);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t\twrite_account_record(acct_type, pjob->ji_qs.ji_jobid, resc_buf + 1);\n\t\tfree(resc_buf);\n\t\treturn;\n\t}\n\n\twrite_account_record(acct_type, pjob->ji_qs.ji_jobid, NULL);\n}\n"
  },
  {
    "path": "src/server/array_func.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * array_func.c - Functions which provide basic Job Array functions\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <unistd.h>\n#include <sys/types.h>\n#include <sys/param.h>\n#include <sys/stat.h>\n#include <ctype.h>\n#include <errno.h>\n#include <stdlib.h>\n#include <stdio.h>\n#include <string.h>\n#include \"pbs_ifl.h\"\n#include \"libpbs.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"server_limits.h\"\n#include \"server.h\"\n#include \"job.h\"\n#include \"log.h\"\n#include \"pbs_error.h\"\n#include \"batch_request.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"acct.h\"\n#include <sys/time.h>\n#include \"range.h\"\n\n/* External data */\nextern char *msg_job_end_stat;\nextern int resc_access_perm;\nextern time_t time_now;\n\n/*\n * list of job attributes to copy from the parent Array job\n * when creating a sub job.\n */\nstatic enum job_atr attrs_to_copy[] = 
{\n\tJOB_ATR_jobname,\n\tJOB_ATR_job_owner,\n\tJOB_ATR_resc_used,\n\tJOB_ATR_state,\n\tJOB_ATR_in_queue,\n\tJOB_ATR_at_server,\n\tJOB_ATR_account,\n\tJOB_ATR_ctime,\n\tJOB_ATR_errpath,\n\tJOB_ATR_grouplst,\n\tJOB_ATR_join,\n\tJOB_ATR_keep,\n\tJOB_ATR_mtime,\n\tJOB_ATR_mailpnts,\n\tJOB_ATR_mailuser,\n\tJOB_ATR_nodemux,\n\tJOB_ATR_outpath,\n\tJOB_ATR_priority,\n\tJOB_ATR_qtime,\n\tJOB_ATR_remove,\n\tJOB_ATR_rerunable,\n\tJOB_ATR_resource,\n\tJOB_ATR_session_id,\n\tJOB_ATR_shell,\n\tJOB_ATR_sandbox,\n\tJOB_ATR_jobdir,\n\tJOB_ATR_stagein,\n\tJOB_ATR_stageout,\n\tJOB_ATR_substate,\n\tJOB_ATR_userlst,\n\tJOB_ATR_variables,\n\tJOB_ATR_euser,\n\tJOB_ATR_egroup,\n\tJOB_ATR_hashname,\n\tJOB_ATR_hopcount,\n\tJOB_ATR_queuetype,\n\tJOB_ATR_security,\n\tJOB_ATR_etime,\n\tJOB_ATR_refresh,\n\tJOB_ATR_gridname,\n\tJOB_ATR_umask,\n\tJOB_ATR_cred,\n\tJOB_ATR_runcount,\n\tJOB_ATR_eligible_time,\n\tJOB_ATR_sample_starttime,\n\tJOB_ATR_executable,\n\tJOB_ATR_Arglist,\n\tJOB_ATR_reserve_ID,\n\tJOB_ATR_project,\n\tJOB_ATR_run_version,\n\tJOB_ATR_tolerate_node_failures,\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\tJOB_ATR_cred_id,\n#endif\n\tJOB_ATR_submit_host,\n\tJOB_ATR_LAST /* This MUST be LAST\t*/\n};\n\n/**\n * @brief\n * \t\t\tis_job_array - determine the type of job indicated by the job id\n *\n * @par\tFunctionality:\n * Note - subjob index or range may be invalid and not detected as such\n *\n * @param[in]\tid - Job Id.\n *\n * @return      Job Type\n * @retval\tIS_ARRAY_NO  - A regular job\n * @retval\tIS_ARRAY_ArrayJob  - An ArrayJob\n * @retval\tIS_ARRAY_Single  - A single subjob\n * @retval\tIS_ARRAY_Range  - A range of subjobs\n */\nint\nis_job_array(char *id)\n{\n\tchar *pc;\n\n\tif ((pc = strchr(id, (int) '[')) == NULL)\n\t\treturn IS_ARRAY_NO; /* neither an ArrayJob nor a subjob (range) */\n\tif (*++pc == ']')\n\t\treturn IS_ARRAY_ArrayJob; /* an ArrayJob */\n\n\t/* know it is either a single subjob or a range thereof */\n\n\twhile (isdigit((int) *pc))\n\t\t++pc;\n\tif ((*pc == 
'-') || (*pc == ','))\n\t\treturn IS_ARRAY_Range; /* a range of subjobs */\n\telse\n\t\treturn IS_ARRAY_Single;\n}\n\n/**\n * @brief\n * \t\tget_queued_subjobs_ct\t-\tget the number of queued subjobs if pjob is job array else return 1\n *\n * @param[in]\tpjob\t-\tpointer to job structure\n *\n * @return\tint\n * @retval\t-1\t: parse error\n * @retval\tpositive\t: count of subjobs in JOB_ATR_array_indices_remaining if job array else 1\n */\nint\nget_queued_subjobs_ct(job *pjob)\n{\n\tif (NULL == pjob)\n\t\treturn -1;\n\n\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_ArrayJob) {\n\t\tif (NULL == pjob->ji_ajinfo)\n\t\t\treturn -1;\n\n\t\treturn pjob->ji_ajinfo->tkm_subjsct[JOB_STATE_QUEUED];\n\t}\n\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tfind_arrayparent - find and return a pointer to the job that is or will be\n * \t\tthe parent of the subjob id string\n *\n * @param[in]\tsubjobid - sub job id.\n *\n *\t@return\tparent job\n */\njob *\nfind_arrayparent(char *subjobid)\n{\n\tint i;\n\tchar idbuf[PBS_MAXSVRJOBID + 1];\n\tchar *pc;\n\n\tfor (i = 0; i < PBS_MAXSVRJOBID; i++) {\n\t\tidbuf[i] = *(subjobid + i);\n\t\tif (idbuf[i] == '[')\n\t\t\tbreak;\n\t}\n\tidbuf[++i] = ']';\n\tidbuf[++i] = '\\0';\n\tpc = strchr(subjobid, (int) '.');\n\tif (pc)\n\t\tstrcat(idbuf, pc);\n\treturn (find_job(idbuf));\n}\n\n/**\n * @brief\n * \t\tupdate_array_indices_remaining_attr - updates array_indices_remaining attribute\n *\n * @param[in,out]\tparent - pointer to parent job.\n *\n * @return\tvoid\n */\nstatic void\nupdate_array_indices_remaining_attr(job *parent)\n{\n\tchar *pnewstr = range_to_str(parent->ji_ajinfo->trm_quelist);\n\n\tif (pnewstr == NULL || *pnewstr == '\\0')\n\t\tpnewstr = \"-\";\n\tset_jattr_str_slim(parent, JOB_ATR_array_indices_remaining, pnewstr, NULL);\n\tupdate_subjob_state_ct(parent);\n}\n\n/**\n * @brief\n * \tupdate state counts of subjob based on given information\n *\n * @param[in]\tparent - pointer to parent job.\n * @param[in]\tsj     - pointer to subjob (can 
be NULL)\n * @param[in]\tsjid - sub job id.\n * @param[in]\toldstate - old state of the sub job.\n * @param[in]\tnewstate - new state of the sub job.\n *\n * @return void\n */\nvoid\nupdate_sj_parent(job *parent, job *sj, char *sjid, char oldstate, char newstate)\n{\n\tajinfo_t *ptbl;\n\tint idx;\n\tint ostatenum;\n\tint nstatenum;\n\n\tif (oldstate == newstate)\n\t\treturn;\n\n\tif (parent == NULL || sjid == NULL || sjid[0] == '\\0' || (idx = get_index_from_jid(sjid)) == -1)\n\t\treturn;\n\n\tptbl = parent->ji_ajinfo;\n\tif (ptbl == NULL)\n\t\treturn;\n\n\tostatenum = state_char2int(oldstate);\n\tnstatenum = state_char2int(newstate);\n\tif (ostatenum == -1 || nstatenum == -1)\n\t\treturn;\n\n\tptbl->tkm_subjsct[ostatenum]--;\n\tptbl->tkm_subjsct[nstatenum]++;\n\n\tif (oldstate == JOB_STATE_LTR_QUEUED)\n\t\trange_remove_value(&ptbl->trm_quelist, idx);\n\tif (newstate == JOB_STATE_LTR_QUEUED)\n\t\trange_add_value(&ptbl->trm_quelist, idx, ptbl->tkm_step);\n\tupdate_array_indices_remaining_attr(parent);\n\n\tif (sj && newstate != JOB_STATE_LTR_QUEUED) {\n\t\tif (is_jattr_set(sj, JOB_ATR_exit_status)) {\n\t\t\tint e = get_jattr_long(sj, JOB_ATR_exit_status);\n\t\t\tint pe = 0;\n\t\t\tif (is_jattr_set(parent, JOB_ATR_exit_status))\n\t\t\t\tpe = get_jattr_long(parent, JOB_ATR_exit_status);\n\t\t\tif (e > 0)\n\t\t\t\tpe = 1;\n\t\t\telse if (e < 0)\n\t\t\t\tpe = 2;\n\t\t\telse\n\t\t\t\tpe = 0;\n\n\t\t\tset_jattr_l_slim(parent, JOB_ATR_exit_status, pe, SET);\n\t\t}\n\t\tif (is_jattr_set(sj, JOB_ATR_stageout_status)) {\n\t\t\tint pe = -1;\n\t\t\tint e = get_jattr_long(sj, JOB_ATR_stageout_status);\n\t\t\tif (is_jattr_set(parent, JOB_ATR_stageout_status))\n\t\t\t\tpe = get_jattr_long(parent, JOB_ATR_stageout_status);\n\t\t\tif (e > 0 && pe != 0)\n\t\t\t\tset_jattr_l_slim(parent, JOB_ATR_stageout_status, e, SET);\n\t\t}\n\t}\n\tjob_save_db(parent);\n}\n\n/**\n * @brief\n * \t\tchk_array_doneness - check if all subjobs are expired and if so,\n *\t\tpurge the Array Job itself\n *\n * @param[in,out]\tparent - pointer to parent 
job.\n *\n *\t@return\tvoid\n */\nvoid\nchk_array_doneness(job *parent)\n{\n\tstruct batch_request *preq;\n\tchar hook_msg[HOOK_MSG_SIZE] = {0};\n\tint rc;\n\tajinfo_t *ptbl = NULL;\n\n\tif (parent == NULL || parent->ji_ajinfo == NULL)\n\t\treturn;\n\n\tptbl = parent->ji_ajinfo;\n\tif (ptbl->tkm_flags & (TKMFLG_NO_DELETE | TKMFLG_CHK_ARRAY))\n\t\treturn; /* delete of subjobs in progress, or re-entering, so return here */\n\n\tif (ptbl->tkm_subjsct[JOB_STATE_QUEUED] + ptbl->tkm_subjsct[JOB_STATE_RUNNING] + ptbl->tkm_subjsct[JOB_STATE_HELD] + ptbl->tkm_subjsct[JOB_STATE_EXITING] == 0) {\n\n\t\t/* Array Job all done, do simple eoj processing */\n\t\tparent->ji_qs.ji_un_type = JOB_UNION_TYPE_EXEC;\n\t\tparent->ji_qs.ji_un.ji_exect.ji_momaddr = 0;\n\t\tparent->ji_qs.ji_un.ji_exect.ji_momport = 0;\n\n\t\tparent->ji_qs.ji_un.ji_exect.ji_exitstat = get_jattr_long(parent, JOB_ATR_exit_status);\n\n\t\tcheck_block(parent, \"\");\n\t\tif (check_job_state(parent, JOB_STATE_LTR_BEGUN)) {\n\t\t\tchar acctbuf[40];\n\n\t\t\tparent->ji_qs.ji_obittime = time_now;\n\t\t\tset_jattr_l_slim(parent, JOB_ATR_obittime, parent->ji_qs.ji_obittime, SET);\n\n\t\t\t/* Allocate space for the jobobit hook event params */\n\t\t\tpreq = alloc_br(PBS_BATCH_JobObit);\n\t\t\tif (preq) {\n\t\t\t\tpreq->rq_ind.rq_obit.rq_pjob = parent;\n\t\t\t\tDBPRT((\"rq_jobobit svr_setjobstate update parent job state to 'F'\"));\n\t\t\t\tsvr_setjobstate(parent, JOB_STATE_LTR_FINISHED, JOB_SUBSTATE_FINISHED);\n\t\t\t\trc = process_hooks(preq, hook_msg, sizeof(hook_msg), pbs_python_set_interrupt);\n\t\t\t\tif (rc == -1) {\n\t\t\t\t\tlog_err(-1, __func__, \"rq_jobobit process_hooks call failed\");\n\t\t\t\t}\n\t\t\t\tfree_br(preq);\n\t\t\t} else {\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__, \"rq_jobobit alloc failed\");\n\t\t\t}\n\n\t\t\t/* if BEGUN, issue 'E' account record */\n\t\t\tsprintf(acctbuf, msg_job_end_stat, parent->ji_qs.ji_un.ji_exect.ji_exitstat);\n\t\t\taccount_job_update(parent, 
PBS_ACCT_LAST);\n\t\t\taccount_jobend(parent, acctbuf, PBS_ACCT_END);\n\n\t\t\tsvr_mailowner(parent, MAIL_END, MAIL_NORMAL, acctbuf);\n\t\t}\n\t\tif (is_jattr_set(parent, JOB_ATR_depend))\n\t\t\tdepend_on_term(parent);\n\n\t\t/*\n\t\t * Check if the history of the finished job can be saved or it needs to be purged .\n\t\t */\n\t\tptbl->tkm_flags |= TKMFLG_CHK_ARRAY;\n\t\tsvr_saveorpurge_finjobhist(parent);\n\t}\n}\n\n/**\n * @brief\n * \tfind subjob and its state and substate\n *\n * @param[in]     parent    - pointer to the parent job\n * @param[in]     sjidx     - subjob index\n * @param[out]    state     - put state of subjob if not null\n * @param[out]    substate  - put substate of subjob if not null\n *\n * @return job *\n * @retval !NULL - if subjob found\n * @return NULL  - if subjob not found\n */\njob *\nget_subjob_and_state(job *parent, int sjidx, char *state, int *substate)\n{\n\tjob *sj;\n\n\tif (state)\n\t\t*state = JOB_STATE_LTR_UNKNOWN;\n\tif (substate)\n\t\t*substate = JOB_SUBSTATE_UNKNOWN;\n\n\tif (parent == NULL || sjidx < 0)\n\t\treturn NULL;\n\n\tif (sjidx < parent->ji_ajinfo->tkm_start || sjidx > parent->ji_ajinfo->tkm_end)\n\t\treturn NULL;\n\n\tif (((sjidx - parent->ji_ajinfo->tkm_start) % parent->ji_ajinfo->tkm_step) != 0)\n\t\treturn NULL;\n\n\tsj = find_job(create_subjob_id(parent->ji_qs.ji_jobid, sjidx));\n\tif (sj == NULL) {\n\t\tif (range_contains(parent->ji_ajinfo->trm_quelist, sjidx)) {\n\t\t\tif (state)\n\t\t\t\t*state = JOB_STATE_LTR_QUEUED;\n\t\t\tif (substate)\n\t\t\t\t*substate = JOB_SUBSTATE_QUEUED;\n\t\t} else {\n\t\t\tif (state) {\n\t\t\t\tchar pjs = get_job_state(parent);\n\t\t\t\tif (pjs == JOB_STATE_LTR_FINISHED)\n\t\t\t\t\t*state = JOB_STATE_LTR_FINISHED;\n\t\t\t\telse\n\t\t\t\t\t*state = JOB_STATE_LTR_EXPIRED;\n\t\t\t}\n\t\t\tif (substate)\n\t\t\t\t*substate = JOB_SUBSTATE_FINISHED;\n\t\t}\n\t\treturn NULL;\n\t}\n\n\tif (state)\n\t\t*state = get_job_state(sj);\n\tif (substate)\n\t\t*substate = 
get_job_substate(sj);\n\n\treturn sj;\n}\n/**\n * @brief\n * \t\tupdate_subjob_state_ct - update the \"array_state_count\" attribute of an\n * \t\tarray job\n *\n * @param[in]\tpjob - pointer to the job\n *\n * @return\tvoid\n */\nvoid\nupdate_subjob_state_ct(job *pjob)\n{\n\tchar buf[BUF_SIZE];\n\tconst char *statename[] = {\"Transit\", \"Queued\", \"Held\", \"Waiting\", \"Running\",\n\t\t\t\t   \"Exiting\", \"Expired\", \"Beginning\", \"Moved\", \"Finished\"};\n\n\tbuf[0] = '\\0';\n\tsnprintf(buf, sizeof(buf), \"%s:%d %s:%d %s:%d %s:%d\",\n\t\t statename[JOB_STATE_QUEUED],\n\t\t pjob->ji_ajinfo->tkm_subjsct[JOB_STATE_QUEUED],\n\t\t statename[JOB_STATE_RUNNING],\n\t\t pjob->ji_ajinfo->tkm_subjsct[JOB_STATE_RUNNING],\n\t\t statename[JOB_STATE_EXITING],\n\t\t pjob->ji_ajinfo->tkm_subjsct[JOB_STATE_EXITING],\n\t\t statename[JOB_STATE_EXPIRED],\n\t\t pjob->ji_ajinfo->tkm_subjsct[JOB_STATE_EXPIRED]);\n\n\tset_jattr_str_slim(pjob, JOB_ATR_array_state_count, buf, NULL);\n}\n/**\n * @brief\n * \t\tsubst_array_index - Substitute the actual index into the file name\n * \t\tif this is a sub job and if the array index substitution\n * \t\tstring is in the specified file path.  
If not, the original string\n * \t\tis returned unchanged.\n *\n * @param[in]\tpjob - pointer to the job\n * @param[in]\tpath - local or destination file name\n *\n * @return\tpath\n */\nchar *\nsubst_array_index(job *pjob, char *path)\n{\n\tchar *pindorg;\n\tchar *cvt;\n\tchar trail[MAXPATHLEN + 1];\n\tjob *ppjob = pjob->ji_parentaj;\n\n\tif (ppjob == NULL)\n\t\treturn path;\n\tif ((pindorg = strstr(path, PBS_FILE_ARRAY_INDEX_TAG)) == NULL)\n\t\treturn path; /* unchanged */\n\n\tcvt = get_range_from_jid(pjob->ji_qs.ji_jobid);\n\tif (cvt == NULL)\n\t\treturn path;\n\t*pindorg = '\\0';\n\tstrcpy(trail, pindorg + strlen(PBS_FILE_ARRAY_INDEX_TAG));\n\tstrcat(path, cvt);\n\tstrcat(path, trail);\n\treturn path;\n}\n\n/**\n * @brief\n * \t\tsetup_ajinfo - make the subjob index tracking table\n *\t\t(ajinfo_t) based on the number of indexes in the job's\n *\t\t\"array_indices_submitted\" range\n *\n * @param[in]\tpjob - pointer to the job\n * @param[in]\tmode - \"actmode\" parameter to action function of \"array_indices_submitted\"\n *\n * @return\tPBS error code\n * @retval  PBSE_NONE\t- success\n */\nstatic int\nsetup_ajinfo(job *pjob, int mode)\n{\n\tint i;\n\tint limit;\n\tint start;\n\tint end;\n\tint step;\n\tint count;\n\tchar *eptr;\n\tchar *range;\n\tajinfo_t *trktbl;\n\n\tif (pjob->ji_ajinfo) {\n\t\tfree_range_list(pjob->ji_ajinfo->trm_quelist);\n\t\tfree(pjob->ji_ajinfo);\n\t}\n\tpjob->ji_ajinfo = NULL;\n\trange = get_jattr_str(pjob, JOB_ATR_array_indices_submitted);\n\tif (range == NULL)\n\t\treturn PBSE_BADATVAL;\n\ti = parse_subjob_index(range, &eptr, &start, &end, &step, &count);\n\tif (i != 0)\n\t\treturn PBSE_BADATVAL;\n\n\tif ((mode == ATR_ACTION_NEW) || (mode == ATR_ACTION_ALTER)) {\n\t\tif (is_sattr_set(SVR_ATR_maxarraysize))\n\t\t\tlimit = get_sattr_long(SVR_ATR_maxarraysize);\n\t\telse\n\t\t\tlimit = PBS_MAX_ARRAY_JOB_DFL; /* default limit 10000 */\n\n\t\tif (count > limit)\n\t\t\treturn PBSE_MaxArraySize;\n\t}\n\n\ttrktbl = (ajinfo_t *) 
malloc(sizeof(ajinfo_t));\n\tif (trktbl == NULL)\n\t\treturn PBSE_SYSTEM;\n\tfor (i = 0; i < PBS_NUMJOBSTATE; i++)\n\t\ttrktbl->tkm_subjsct[i] = 0;\n\tif (mode == ATR_ACTION_RECOV || mode == ATR_ACTION_ALTER)\n\t\ttrktbl->trm_quelist = NULL;\n\telse {\n\t\ttrktbl->trm_quelist = new_range(start, end, step, count, NULL);\n\t\tif (trktbl->trm_quelist == NULL) {\n\t\t\tfree(trktbl);\n\t\t\treturn PBSE_SYSTEM;\n\t\t}\n\t\ttrktbl->tkm_subjsct[JOB_STATE_QUEUED] = count;\n\t}\n\ttrktbl->tkm_dsubjsct = 0;\n\ttrktbl->tkm_ct = count;\n\ttrktbl->tkm_start = start;\n\ttrktbl->tkm_end = end;\n\ttrktbl->tkm_step = step;\n\ttrktbl->tkm_flags = 0;\n\tpjob->ji_ajinfo = trktbl;\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * \t\tsetup_arrayjob_attrs - set up the special attributes of an Array Job\n *\t\tCalled as \"action\" routine for the attribute array_indices_submitted\n *\n * @param[in]\tpattr - pointer to special attributes of an Array Job\n * @param[in]\tpobj -  pointer to job structure\n * @param[in]\tmode -  actmode\n *\n * @return\tPBS error\n * @retval  0\t- success\n */\nint\nsetup_arrayjob_attrs(attribute *pattr, void *pobj, int mode)\n{\n\tjob *pjob = pobj;\n\n\tif (mode != ATR_ACTION_ALTER && mode != ATR_ACTION_NEW && mode != ATR_ACTION_RECOV)\n\t\treturn PBSE_BADATVAL;\n\n\tif (is_job_array(pjob->ji_qs.ji_jobid) != IS_ARRAY_ArrayJob)\n\t\treturn PBSE_BADATVAL; /* not an Array Job */\n\n\tif (mode == ATR_ACTION_ALTER && !check_job_state(pjob, JOB_STATE_LTR_QUEUED))\n\t\treturn PBSE_MODATRRUN; /* cannot modify once begun */\n\n\t/* set attribute \"array\" True  and clear \"array_state_count\" */\n\tpjob->ji_qs.ji_svrflags |= JOB_SVFLG_ArrayJob;\n\tset_jattr_b_slim(pjob, JOB_ATR_array, 1, SET);\n\tfree_jattr(pjob, JOB_ATR_array_state_count);\n\n\tif ((mode == ATR_ACTION_NEW) || (mode == ATR_ACTION_RECOV)) {\n\t\tint rc = PBSE_BADATVAL;\n\t\tif ((rc = setup_ajinfo(pjob, mode)) != PBSE_NONE)\n\t\t\treturn rc;\n\t}\n\n\tif (mode == ATR_ACTION_RECOV)\n\t\treturn 
PBSE_NONE;\n\n\tupdate_array_indices_remaining_attr(pjob);\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * \t\tfixup_arrayindicies - set state of subjobs based on array_indices_remaining\n * @par\tFunctionality:\n * \t\tThis is used when a job is being qmoved into this server.\n * \t\tIt is necessary that array_indices_submitted be decoded first to cause the\n * \t\tcreation of the tracking tbl. If the job was created here, there is no need to fix the indices.\n *\n * @param[in]\tpattr - pointer to special attributes of an Array Job\n * @param[in]\tpobj -  pointer to job structure\n * @param[in]\tmode -  actmode\n * @return\tPBS error\n * @retval  0\t- success\n */\nint\nfixup_arrayindicies(attribute *pattr, void *pobj, int mode)\n{\n\tjob *pjob = pobj;\n\tchar *range;\n\tint qcount;\n\n\tif (!pjob || !(pjob->ji_qs.ji_svrflags & JOB_SVFLG_ArrayJob) || !pjob->ji_ajinfo)\n\t\treturn PBSE_BADATVAL;\n\n\tif (mode == ATR_ACTION_NEW && (pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE))\n\t\treturn PBSE_NONE;\n\n\tif (pjob->ji_ajinfo->trm_quelist != NULL)\n\t\treturn PBSE_BADATVAL;\n\n\trange = get_jattr_str(pjob, JOB_ATR_array_indices_remaining);\n\tpjob->ji_ajinfo->trm_quelist = range_parse(range);\n\tif (pjob->ji_ajinfo->trm_quelist == NULL) {\n\t\tif (range && range[0] == '-') {\n\t\t\tpjob->ji_ajinfo->tkm_subjsct[JOB_STATE_QUEUED] = 0;\n\t\t\tpjob->ji_ajinfo->tkm_subjsct[JOB_STATE_EXPIRED] = pjob->ji_ajinfo->tkm_ct;\n\t\t\tupdate_subjob_state_ct(pjob);\n\t\t\treturn PBSE_NONE;\n\t\t}\n\t\treturn PBSE_BADATVAL;\n\t}\n\n\tqcount = range_count(pjob->ji_ajinfo->trm_quelist);\n\tpjob->ji_ajinfo->tkm_subjsct[JOB_STATE_QUEUED] = qcount;\n\tpjob->ji_ajinfo->tkm_subjsct[JOB_STATE_EXPIRED] = pjob->ji_ajinfo->tkm_ct - qcount;\n\tupdate_subjob_state_ct(pjob);\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * \t\tcreate_subjob - create a Subjob from the parent Array Job\n * \t\tCertain attributes are changed or left out\n * @param[in]\tparent - pointer to parent Job\n * @param[in]\tnewjid -  new job id\n * 
@param[in]\trc -  return code\n * @return\tpointer to new job\n * @retval  NULL\t- error\n */\njob *\ncreate_subjob(job *parent, char *newjid, int *rc)\n{\n\tpbs_list_head attrl;\n\tint i;\n\tint j;\n\tchar *index;\n\tattribute_def *pdef;\n\tattribute *ppar;\n\tattribute *psub;\n\tsvrattrl *psatl;\n\tjob *subj;\n\tlong eligibletime;\n\tlong long time_usec;\n\tstruct timeval tval;\n\tchar path[MAXPATHLEN + 1];\n\n\tif (newjid == NULL) {\n\t\t*rc = PBSE_IVALREQ;\n\t\treturn NULL;\n\t}\n\n\tif ((parent->ji_qs.ji_svrflags & JOB_SVFLG_ArrayJob) == 0) {\n\t\t*rc = PBSE_IVALREQ;\n\t\treturn NULL; /* parent not an array job */\n\t}\n\n\t/* find and copy the index */\n\tif ((index = get_range_from_jid(newjid)) == NULL) {\n\t\t*rc = PBSE_IVALREQ;\n\t\treturn NULL;\n\t}\n\n\t/*\n\t * allocate and clear basic structure\n\t * cannot copy job attributes because cannot share strings and other\n\t * malloc-ed data,  so copy ji_qs as a whole and then copy the\n\t * non-saved items before ji_qs.\n\t */\n\n\tif ((subj = job_alloc()) == NULL) {\n\t\t*rc = PBSE_SYSTEM;\n\t\treturn NULL;\n\t}\n\tsubj->ji_qs = parent->ji_qs; /* copy the fixed save area */\n\tsubj->ji_qhdr = parent->ji_qhdr;\n\tsubj->ji_myResv = parent->ji_myResv;\n\tsubj->ji_parentaj = parent;\n\tstrcpy(subj->ji_qs.ji_jobid, newjid); /* replace job id */\n\t*subj->ji_qs.ji_fileprefix = '\\0';\n\n\t/*\n\t * now that is all done, copy the required attributes by\n\t * encoding and then decoding into the new array.  
Then add the\n\t * subjob specific attributes.\n\t */\n\n\tresc_access_perm = ATR_DFLAG_ACCESS;\n\tCLEAR_HEAD(attrl);\n\tfor (i = 0; attrs_to_copy[i] != JOB_ATR_LAST; i++) {\n\t\tj = (int) attrs_to_copy[i];\n\t\tppar = get_jattr(parent, j);\n\t\tpsub = get_jattr(subj, j);\n\t\tpdef = &job_attr_def[j];\n\n\t\tif (pdef->at_encode(ppar, &attrl, pdef->at_name, NULL,\n\t\t\t\t    ATR_ENCODE_MOM, &psatl) > 0) {\n\t\t\tfor (psatl = (svrattrl *) GET_NEXT(attrl); psatl;\n\t\t\t     psatl = ((svrattrl *) GET_NEXT(psatl->al_link))) {\n\t\t\t\tset_attr_generic(psub, pdef, psatl->al_value, psatl->al_resc, INTERNAL);\n\t\t\t}\n\t\t\t/* carry forward the default bit if set */\n\t\t\tpsub->at_flags |= (ppar->at_flags & ATR_VFLAG_DEFLT);\n\t\t\tfree_attrlist(&attrl);\n\t\t}\n\t}\n\n\tset_jattr_generic(subj, JOB_ATR_array_id, parent->ji_qs.ji_jobid, NULL, INTERNAL);\n\tset_jattr_generic(subj, JOB_ATR_array_index, index, NULL, INTERNAL);\n\n\t/* Lastly, set or clear a few flags and link in the structure */\n\n\tsubj->ji_qs.ji_svrflags &= ~JOB_SVFLG_ArrayJob;\n\tsubj->ji_qs.ji_svrflags |= JOB_SVFLG_SubJob;\n\tset_job_substate(subj, JOB_SUBSTATE_TRANSICM);\n\tsvr_setjobstate(subj, JOB_STATE_LTR_QUEUED, JOB_SUBSTATE_QUEUED);\n\n\t/* subjob needs to borrow eligible time from parent job array.\n\t * expecting only to accrue eligible_time and nothing else.\n\t */\n\tif (get_sattr_long(SVR_ATR_EligibleTimeEnable) == 1) {\n\n\t\teligibletime = get_jattr_long(parent, JOB_ATR_eligible_time);\n\n\t\tif (get_jattr_long(parent, JOB_ATR_accrue_type) == JOB_ELIGIBLE)\n\t\t\teligibletime += get_jattr_long(subj, JOB_ATR_sample_starttime) - get_jattr_long(parent, JOB_ATR_sample_starttime);\n\n\t\tset_jattr_l_slim(subj, JOB_ATR_eligible_time, eligibletime, SET);\n\t}\n\n\tgettimeofday(&tval, NULL);\n\ttime_usec = (tval.tv_sec * 1000000L) + tval.tv_usec;\n\t/* set the queue rank attribute */\n\tset_jattr_ll_slim(subj, JOB_ATR_qrank, time_usec, SET);\n\tif (svr_enquejob(subj, NULL) != 0) 
{\n\t\tjob_purge(subj);\n\t\t*rc = PBSE_IVALREQ;\n\t\treturn NULL;\n\t}\n\n\tpbs_strncpy(path, get_jattr_str(subj, JOB_ATR_outpath), sizeof(path));\n\tsubst_array_index(subj, path);\n\tset_jattr_str_slim(subj, JOB_ATR_outpath, path, NULL);\n\tpbs_strncpy(path, get_jattr_str(subj, JOB_ATR_errpath), sizeof(path));\n\tsubst_array_index(subj, path);\n\tset_jattr_str_slim(subj, JOB_ATR_errpath, path, NULL);\n\n\t*rc = PBSE_NONE;\n\treturn subj;\n}\n\n/**\n * @brief\n *\t \tDuplicate the existing batch request for a running subjob\n *\n * @param[in]\topreq\t- the batch status request structure to duplicate\n * @param[in]\tpjob\t- the parent job structure of the subjob\n * @param[in]\tfunc\t- the function to call after duplicating the batch\n *\t\t\t   structure.\n * @par\n *\t\t1. duplicate the batch request\n *\t\t2. replace the job id with the one from the running subjob\n *\t\t3. link the new batch request to the original and incr its ref ct\n *\t\t4. call the \"func\" with the new batch request and job\n * @note\n *\t\tCurrently, this is called in PBS_Batch_DeleteJob, PBS_Batch_SignalJob,\n *\t\tPBS_Batch_Rerun, and PBS_Batch_RunJob subjob requests.\n *\t\tFor any other request types, be sure to add another switch case below\n *\t\t(matching request type).\n * @return int\n * @retval return value of the callback function (0 for success, 1 for error)\n */\nint\ndup_br_for_subjob(struct batch_request *opreq, job *pjob, int (*func)(struct batch_request *, job *))\n{\n\tstruct batch_request *npreq;\n\n\tnpreq = alloc_br(opreq->rq_type);\n\tif (npreq == NULL)\n\t\treturn 1;\n\n\tnpreq->rq_perm = opreq->rq_perm;\n\tnpreq->rq_fromsvr = opreq->rq_fromsvr;\n\tnpreq->rq_conn = opreq->rq_conn;\n\tnpreq->rq_orgconn = opreq->rq_orgconn;\n\tnpreq->rq_time = opreq->rq_time;\n\tstrcpy(npreq->rq_user, opreq->rq_user);\n\tstrcpy(npreq->rq_host, opreq->rq_host);\n\tnpreq->rq_extend = opreq->rq_extend;\n\tnpreq->rq_reply.brp_choice = BATCH_REPLY_CHOICE_NULL;\n\tnpreq->rq_refct = 
0;\n\n\t/* for each type, update the job id with the one from the new job */\n\n\tswitch (opreq->rq_type) {\n\t\tcase PBS_BATCH_DeleteJobList:\n\t\t\tnpreq->rq_ind.rq_deletejoblist = opreq->rq_ind.rq_deletejoblist;\n\t\t\tnpreq->rq_ind.rq_deletejoblist.rq_count = 1;\n\t\t\tnpreq->rq_ind.rq_deletejoblist.rq_jobslist = break_comma_list(pjob->ji_qs.ji_jobid);\n\t\t\tnpreq->rq_ind.rq_deletejoblist.jobid_to_resume = 0;\n\t\t\tbreak;\n\t\tcase PBS_BATCH_DeleteJob:\n\t\t\tnpreq->rq_ind.rq_delete = opreq->rq_ind.rq_delete;\n\t\t\tstrcpy(npreq->rq_ind.rq_delete.rq_objname,\n\t\t\t       pjob->ji_qs.ji_jobid);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_SignalJob:\n\t\t\tnpreq->rq_ind.rq_signal = opreq->rq_ind.rq_signal;\n\t\t\tstrcpy(npreq->rq_ind.rq_signal.rq_jid,\n\t\t\t       pjob->ji_qs.ji_jobid);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_Rerun:\n\t\t\tstrcpy(npreq->rq_ind.rq_rerun,\n\t\t\t       pjob->ji_qs.ji_jobid);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_RunJob:\n\t\t\tnpreq->rq_ind.rq_run = opreq->rq_ind.rq_run;\n\t\t\tstrcpy(npreq->rq_ind.rq_run.rq_jid,\n\t\t\t       pjob->ji_qs.ji_jobid);\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tdelete_link(&npreq->rq_link);\n\t\t\tfree(npreq);\n\t\t\treturn 1;\n\t}\n\n\tnpreq->rq_parentbr = opreq;\n\topreq->rq_refct++;\n\n\treturn func(npreq, pjob);\n}\n"
  },
  {
    "path": "src/server/attr_recov.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n *\n * @brief\n *\tThis file contains the functions to perform a buffered\n *\tsave of an object (structure) and an attribute array to a file.\n *\tIt also has the function to recover (reload) an attribute array.\n *\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"pbs_ifl.h\"\n#include <assert.h>\n#include <errno.h>\n#include <memory.h>\n#include <stdlib.h>\n#include <stdio.h>\n#include <unistd.h>\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"log.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n\n/* Global Variables */\n\nextern int resc_access_perm;\n\nchar pbs_recov_filename[MAXPATHLEN + 1];\n\n/* data items global to functions in this file */\n\n#define PKBUFSIZE 4096\n\nchar pk_buffer[PKBUFSIZE]; /* used to do buffered output */\nstatic int pkbfds = -2;\t   /* descriptor to use for saves */\nstatic size_t spaceavail;  /* space in pk_buffer available */\nstatic size_t spaceused;   /* amount of space used  in pkbuffer */\n\n/**\n * @brief\n * \t\tsave_setup - set up the save i/o buffer.\n *\t\tThe \"buffer control information\" is left updated to reflect\n *\t\tthe file descriptor, and the space in the buffer.\n * @param[in]\tfds - file descriptor value\n */\n\nvoid\nsave_setup(int fds)\n{\n\n\tif (pkbfds != -2) { /* somebody forgot to flush the buffer 
*/\n\t\tlog_err(-1, \"save_setup\", \"someone forgot to flush\");\n\t}\n\n\t/* initialize buffer control */\n\n\tpkbfds = fds;\n\tspaceavail = PKBUFSIZE;\n\tspaceused = 0;\n}\n\n/**\n * @brief\n * \t\tsave_struct - Copy a structure (as a block) into the save i/o buffer.\n *\t\tThis is useful for saving a fixed-size structure without pointers\n *\t\tthat point outside of the structure itself.\n *\n *\tWrite out buffer as required. Leave spaceavail and spaceused updated\n *\n * @param[in]\tpobj - pointer to the structure whose contents need to be copied into a buffer.\n * @param[in]\tobjsize - required size of the object.\n *\n * @return      Error code\n * @retval\t 0  - Success\n * @retval\t-1  - Failure\n */\n\nint\nsave_struct(char *pobj, unsigned int objsize)\n{\n\tint amt;\n\tsize_t copysize;\n\tint i;\n\tchar *pbufin;\n\tchar *pbufout;\n\n\tassert(pkbfds >= 0);\n\n\twhile (objsize > 0) {\n\t\tpbufin = pk_buffer + spaceused;\n\t\tif (objsize > spaceavail) {\n\t\t\tif ((copysize = spaceavail) != 0) {\n\t\t\t\t(void) memcpy(pbufin, pobj, copysize);\n\t\t\t}\n\t\t\tamt = PKBUFSIZE;\n\t\t\tpbufout = pk_buffer;\n\t\t\twhile ((i = write(pkbfds, pbufout, amt)) != amt) {\n\t\t\t\tif (i == -1) {\n\t\t\t\t\tif (errno != EINTR) {\n\t\t\t\t\t\treturn (-1);\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tamt -= i;\n\t\t\t\t\tpbufout += i;\n\t\t\t\t}\n\t\t\t}\n\t\t\tpobj += copysize;\n\t\t\tspaceavail = PKBUFSIZE;\n\t\t\tspaceused = 0;\n\t\t} else {\n\t\t\tcopysize = (size_t) objsize;\n\t\t\t(void) memcpy(pbufin, pobj, copysize);\n\t\t\tspaceavail -= copysize;\n\t\t\tspaceused += copysize;\n\t\t}\n\t\tobjsize -= copysize;\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tsave_flush - flush out the current save operation.\n *\t\tFlush buffer if needed, reset spaceavail, spaceused,\n *\t\tclear out file descriptor\n *\n * @return      Error code\n * @retval\t 0  - Success\n * @retval\t-1  - Failure (flush failed)\n */\n\nint\nsave_flush(void)\n{\n\tint i;\n\tchar *pbuf;\n\n\tassert(pkbfds >= 0);\n\n\tpbuf = 
pk_buffer;\n\tif (spaceused > 0) {\n\t\twhile ((i = write(pkbfds, pbuf, spaceused)) != spaceused) {\n\t\t\tif (i == -1) {\n\t\t\t\tif (errno != EINTR) {\n\t\t\t\t\tlog_err(errno, \"save_flush\", \"bad write\");\n\t\t\t\t\treturn (-1);\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tpbuf += i;\n\t\t\t\tspaceused -= i;\n\t\t\t}\n\t\t}\n\t}\n\tpkbfds = -2; /* flushed flag */\n\treturn (0);\n}\n\n/**\n * @brief\n *\t \tWrite set of attributes to disk file\n *\n * @par\tFunctionality:\n *\t\tEach of the attributes is encoded  into the attrlist form.\n *\t\tThey are packed and written using save_struct().\n *\n *\t\tThe final real attribute is followed by a dummy attribute with a\n *\t\tal_size of ENDATTRIB.  This cannot be mistaken for the size of a\n *\t\treal attribute.\n *\n *\t\tNote: attributes of type ATR_TYPE_ACL are not saved with the other\n *\t\tattribute of the parent (queue or server).  They are kept in their\n *\t\town file.\n *\n * @param[in]\tpadef - Address of parent's attribute definition array\n * @param[in]\tpattr - Address of the parent objects attribute array\n * @param[in]\tnumattr - Number of attributes in the list\n *\n * @return      Error code\n * @retval\t 0  - Success\n * @retval\t-1  - Failure\n */\nint\nsave_attr_fs(attribute_def *padef, attribute *pattr, int numattr)\n{\n\tsvrattrl dummy;\n\tint errct = 0;\n\tpbs_list_head lhead;\n\tint i;\n\tsvrattrl *pal;\n\tint rc;\n\n\t/* encode each attribute which has a value (not non-set) */\n\n\tCLEAR_HEAD(lhead);\n\n\tfor (i = 0; i < numattr; i++) {\n\n\t\tif ((padef + i)->at_type != ATR_TYPE_ACL) {\n\n\t\t\t/* note access lists are not saved this way */\n\n\t\t\trc = (padef + i)->at_encode(pattr + i, &lhead,\n\t\t\t\t\t\t    (padef + i)->at_name,\n\t\t\t\t\t\t    NULL, ATR_ENCODE_SAVE, NULL);\n\n\t\t\tif (rc < 0)\n\t\t\t\terrct++;\n\n\t\t\t(pattr + i)->at_flags &= ~ATR_VFLAG_MODIFY;\n\n\t\t\t/* now that it has been encoded, block and save it */\n\n\t\t\twhile ((pal = (svrattrl *) GET_NEXT(lhead)) !=\n\t\t\t  
     NULL) {\n\n\t\t\t\tif (save_struct((char *) pal, pal->al_tsize) < 0)\n\t\t\t\t\terrct++;\n\t\t\t\tdelete_link(&pal->al_link);\n\t\t\t\t(void) free(pal);\n\t\t\t}\n\t\t}\n\t}\n\n\t/* indicate last of attributes by writing a dummy entry */\n\n#ifdef DEBUG\n\t(void) memset(&dummy, 0, sizeof(dummy));\n#endif\n\tdummy.al_tsize = ENDATTRIBUTES;\n\tif (save_struct((char *) &dummy, sizeof(dummy)) < 0)\n\t\terrct++;\n\n\tif (errct)\n\t\treturn (-1);\n\telse\n\t\treturn (0);\n}\n\n/**\n * @brief\n *\t\tread attributes from disk file\n *\n *\t\tRecover (reload) attributes from the file written by save_attr().\n *\t\tSince this is done infrequently (only on server initialization),\n *\t\tthe reads are not buffered.\n *\n * @param[in]\tfd - The file descriptor of the file to read from\n * @param[in] \tparent - void pointer to one of the PBS objects\n *\t\t\t\t\t  to whom these attributes belong\n * @param[in]   padef_idx - Search index of this attribute definition array\n * @param[in]\tpadef - Address of parent's attribute definition array\n * @param[in]\tpattr - Address of the parent objects attribute array\n * @param[in]\tlimit - Index of the last attribute\n * @param[in]\tunknown - Index of the start of the unknown attribute list\n *\n * @return      Error code\n * @retval\t 0  - Success\n * @retval\t-1  - Failure\n */\n\nint\nrecov_attr_fs(int fd, void *parent, void *padef_idx, attribute_def *padef, attribute *pattr, int limit, int unknown)\n{\n\tint amt;\n\tint len;\n\tint index;\n\tsvrattrl *pal = NULL;\n\tint palsize = 0;\n\tsvrattrl *tmpal = NULL;\n\n\tpal = (svrattrl *) malloc(sizeof(svrattrl));\n\tif (!pal)\n\t\treturn (-1);\n\tpalsize = sizeof(svrattrl);\n\n\t/* set all privileges (read and write) for decoding resources\t*/\n\t/* This is a special (kludge) flag for the recovery case, see\t*/\n\t/* decode_resc() in lib/Libattr/attr_fn_resc.c\t\t\t*/\n\n\tresc_access_perm = ATR_DFLAG_ACCESS;\n\n\t/* For each attribute, read in the attr_extern header */\n\n\twhile (1) 
{\n\t\terrno = -1;\n\t\tmemset(pal, 0, palsize);\n\t\tlen = read(fd, (char *) pal, sizeof(svrattrl));\n\t\tif (len != sizeof(svrattrl)) {\n\t\t\tsprintf(log_buffer, \"read1 error of %s\",\n\t\t\t\tpbs_recov_filename);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\tfree(pal);\n\t\t\treturn (errno);\n\t\t}\n\t\tif (pal->al_tsize == ENDATTRIBUTES)\n\t\t\tbreak; /* hit dummy attribute that is eof */\n\t\tamt = pal->al_tsize - sizeof(svrattrl);\n\t\tif (amt < 1) {\n\t\t\tsprintf(log_buffer, \"Invalid attr list size in %s\",\n\t\t\t\tpbs_recov_filename);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\tfree(pal);\n\t\t\treturn (errno);\n\t\t}\n\n\t\t/* read in the attribute chunk (name and encoded value) */\n\n\t\tif (palsize < pal->al_tsize) {\n\t\t\ttmpal = (svrattrl *) realloc(pal, pal->al_tsize);\n\t\t\tif (tmpal == NULL) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"Unable to alloc attr list size in %s\",\n\t\t\t\t\tpbs_recov_filename);\n\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\tfree(pal);\n\t\t\t\treturn (errno);\n\t\t\t}\n\t\t\tpal = tmpal;\n\t\t\tpalsize = pal->al_tsize;\n\t\t}\n\t\tif (!pal)\n\t\t\treturn (errno);\n\t\tCLEAR_LINK(pal->al_link);\n\n\t\t/* read in the actual attribute data */\n\n\t\tlen = read(fd, (char *) pal + sizeof(svrattrl), amt);\n\t\tif (len != amt) {\n\t\t\tsprintf(log_buffer, \"read2 error of %s\",\n\t\t\t\tpbs_recov_filename);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\tfree(pal);\n\t\t\treturn (errno);\n\t\t}\n\n\t\t/* the pointers into the data are of course bad, so reset them */\n\n\t\tpal->al_name = (char *) pal + sizeof(svrattrl);\n\t\tif (pal->al_rescln)\n\t\t\tpal->al_resc = pal->al_name + pal->al_nameln;\n\t\telse\n\t\t\tpal->al_resc = NULL;\n\t\tif (pal->al_valln)\n\t\t\tpal->al_value = pal->al_name + pal->al_nameln +\n\t\t\t\t\tpal->al_rescln;\n\t\telse\n\t\t\tpal->al_value = NULL;\n\n\t\tpal->al_refct = 1; /* ref count reset to 1 */\n\n\t\t/* find the attribute definition based on the name 
*/\n\n\t\tindex = find_attr(padef_idx, padef, pal->al_name);\n\t\tif (index < 0) {\n\t\t\t/*\n\t\t\t * There are two ways this could happen:\n\t\t\t * 1. if the (job) attribute is in the \"unknown\" list -\n\t\t\t *    keep it there;\n\t\t\t * 2. if the server was rebuilt and an attribute was\n\t\t\t *    deleted, -  the fact is logged and the attribute\n\t\t\t *    is discarded (system,queue) or kept (job)\n\t\t\t */\n\t\t\tif (unknown > 0) {\n\t\t\t\tindex = unknown;\n\t\t\t} else {\n\t\t\t\tlog_errf(-1, __func__, \"unknown attribute \\\"%s\\\" discarded\", pal->al_name);\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t}\n\n\t\t/*\n\t\t * In the normal case we just decode the attribute directly\n\t\t * into the real attribute since there will be one entry only\n\t\t * for that attribute.\n\t\t *\n\t\t * However, \"entity limits\" are special and may have multiple,\n\t\t * the first of which is \"SET\" and the following are \"INCR\".\n\t\t * For the SET case, we do it directly as for the normal attrs.\n\t\t * For the INCR,  we have to decode into a temp attr and then\n\t\t * call set_entity to do the INCR.\n\t\t */\n\n\t\tif (((padef + index)->at_type != ATR_TYPE_ENTITY) || (pal->al_atopl.op != INCR)) {\n\t\t\tint rc = set_attr_generic(pattr + index, padef + index, pal->al_value, pal->al_resc, INTERNAL);\n\t\t\tif (!rc) {\n\t\t\t\tif ((padef + index)->at_action)\n\t\t\t\t\t(void) (padef + index)->at_action(pattr + index, parent, ATR_ACTION_RECOV);\n\t\t\t}\n\t\t} else {\n\t\t\t/* for INCR case of entity limit, decode locally */\n\t\t\tset_attr_generic(pattr + index, padef + index, pal->al_value, pal->al_resc, INCR);\n\t\t}\n\t\t(pattr + index)->at_flags = pal->al_flags & ~ATR_VFLAG_MODIFY;\n\t}\n\n\t(void) free(pal);\n\treturn (0);\n}\n"
  },
  {
    "path": "src/server/attr_recov_db.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"pbs_ifl.h\"\n#include <assert.h>\n#include <errno.h>\n#include <memory.h>\n#include <stdlib.h>\n#include <stdio.h>\n#include <unistd.h>\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"log.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"resource.h\"\n#include \"pbs_db.h\"\n#include <openssl/sha.h>\n\n/* Global Variables */\nextern int resc_access_perm;\n\nextern struct attribute_def svr_attr_def[];\nextern struct attribute_def que_attr_def[];\n\n/**\n * @brief\n *\tCompute and check whether the quick save area has been modified\n *\n * @param[in] \t    qs       - pointer to the quick save area\n * @param[in]\t    len      - length of the quick save area\n * @param[in/out]   oldhash  - pointer to an opaque value holding the current quick save area signature/hash\n *\n * @return      Error code\n * @retval\t 0 - quick save area was not changed\n * @retval\t 1 - quick save area has changed\n */\nint\ncompare_obj_hash(void *qs, int len, void *oldhash)\n{\n\tchar hash[DIGEST_LENGTH];\n\n\tif (SHA1((const unsigned char *) qs, len, (unsigned char *) &hash) == NULL)\n\t\texit(-1);\n\n\tif (memcmp(hash, oldhash, DIGEST_LENGTH) != 0) {\n\t\tmemcpy(oldhash, hash, DIGEST_LENGTH); /* update the signature */\n\t\treturn 1;\n\t}\n\n\treturn 0; /* qs 
was not modified */\n}\n\n/**\n * @brief\n *\tEncode a single  attribute to the database structure of type pbs_db_attr_list_t\n *\n * @param[in]\tpadef - Address of parent's attribute definition array\n * @param[in]\tpattr - Address of the parent objects attribute array\n * @param[out]\tdb_attr_list - pointer to the structure of type pbs_db_attr_list_t for storing in DB\n *\n * @return  error code\n * @retval   -1 - Failure\n * @retval    0 - Success\n *\n */\nint\nencode_single_attr_db(attribute_def *padef, attribute *pattr, pbs_db_attr_list_t *db_attr_list)\n{\n\tpbs_list_head *lhead;\n\tint rc = 0;\n\n\tlhead = &db_attr_list->attrs;\n\n\trc = padef->at_encode(pattr, lhead, padef->at_name, NULL, ATR_ENCODE_DB, NULL);\n\tif (rc < 0)\n\t\treturn -1;\n\n\tdb_attr_list->attr_count += rc;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tEncode the given attributes to the database structure of type pbs_db_attr_list_t\n *\n * @param[in]\tpadef - Address of parent's attribute definition array\n * @param[in]\tpattr - Address of the parent objects attribute array\n * @param[in]\tnumattr - Number of attributes in the list\n * @param[in]\tall  - Encode all attributes\n *\n * @return  error code\n * @retval   -1 - Failure\n * @retval    0 - Success\n *\n */\nint\nencode_attr_db(attribute_def *padef, attribute *pattr, int numattr, pbs_db_attr_list_t *db_attr_list, int all)\n{\n\tint i;\n\n\tdb_attr_list->attr_count = 0;\n\n\tCLEAR_HEAD(db_attr_list->attrs);\n\n\tfor (i = 0; i < numattr; i++) {\n\t\tif (!((pattr + i)->at_flags & ATR_VFLAG_MODIFY))\n\t\t\tcontinue;\n\n\t\tif ((((padef + i)->at_flags & ATR_DFLAG_NOSAVM) == 0) || all) {\n\t\t\tif (encode_single_attr_db((padef + i), (pattr + i), db_attr_list) != 0)\n\t\t\t\treturn -1;\n\n\t\t\t(pattr + i)->at_flags &= ~ATR_VFLAG_MODIFY;\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tDecode the list of attributes from the database to the regular attribute structure\n *\n * @param[in]\t  parent - pointer to parent object\n * @param[in]\t 
 attr_list - recovered/to be decoded attribute list\n * @param[in]     padef_idx - Search index of this attribute array\n * @param[in]\t  padef - Address of parent's attribute definition array\n * @param[in,out] pattr - Address of the parent objects attribute array\n * @param[in]\t  limit - Number of attributes in the list\n * @param[in]\t  unknown\t- The index of the unknown attribute if any\n *\n * @return      Error code\n * @retval\t 0  - Success\n * @retval\t-1  - Failure\n *\n *\n */\nint\ndecode_attr_db(void *parent, pbs_list_head *attr_list, void *padef_idx, struct attribute_def *padef, struct attribute *pattr, int limit, int unknown)\n{\n\tint index;\n\tsvrattrl *pal = (svrattrl *) 0;\n\tsvrattrl *tmp_pal = (svrattrl *) 0;\n\tvoid **palarray = NULL;\n\n\tif ((palarray = calloc(limit, sizeof(void *))) == NULL) {\n\t\tlog_err(-1, __func__, \"Out of memory\");\n\t\treturn -1;\n\t}\n\n\t/* set all privileges (read and write) for decoding resources\t*/\n\t/* This is a special (kludge) flag for the recovery case, see\t*/\n\t/* decode_resc() in lib/Libattr/attr_fn_resc.c\t\t\t*/\n\n\tresc_access_perm = ATR_DFLAG_ACCESS;\n\n\tfor (pal = (svrattrl *) GET_NEXT(*attr_list); pal != NULL; pal = (svrattrl *) GET_NEXT(pal->al_link)) {\n\t\t/* find the attribute definition based on the name */\n\t\tindex = find_attr(padef_idx, padef, pal->al_name);\n\t\tif (index < 0) {\n\n\t\t\t/*\n\t\t\t* There are two ways this could happen:\n\t\t\t* 1. if the (job) attribute is in the \"unknown\" list -\n\t\t\t*    keep it there;\n\t\t\t* 2. 
if the server was rebuilt and an attribute was\n\t\t\t*    deleted, -  the fact is logged and the attribute\n\t\t\t*    is discarded (system,queue) or kept (job)\n\t\t\t*/\n\t\t\tif (unknown > 0) {\n\t\t\t\tindex = unknown;\n\t\t\t} else {\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE, \"unknown attribute \\\"%s\\\" discarded\", pal->al_name);\n\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t}\n\t\tif (palarray[index] == NULL)\n\t\t\tpalarray[index] = pal;\n\t\telse {\n\t\t\ttmp_pal = palarray[index];\n\t\t\twhile (tmp_pal->al_sister)\n\t\t\t\ttmp_pal = tmp_pal->al_sister;\n\n\t\t\t/* this is the end of the list of attributes */\n\t\t\ttmp_pal->al_sister = pal;\n\t\t}\n\t}\n\n\t/* now do the decoding */\n\tfor (index = 0; index < limit; index++) {\n\t\t/*\n\t\t * In the normal case we just decode the attribute directly\n\t\t * into the real attribute since there will be one entry only\n\t\t * for that attribute.\n\t\t *\n\t\t * However, \"entity limits\" are special and may have multiple,\n\t\t * the first of which is \"SET\" and the following are \"INCR\".\n\t\t * For the SET case, we do it directly as for the normal attrs.\n\t\t * For the INCR,  we have to decode into a temp attr and then\n\t\t * call set_entity to do the INCR.\n\t\t */\n\t\t/*\n\t\t * we don't store the op value into the database, so we need to\n\t\t * determine (in case of an ENTITY) whether it is the first\n\t\t * value, or was decoded before. 
We decide this based on whether\n\t\t * the flag has ATR_VFLAG_SET\n\t\t *\n\t\t */\n\t\tpal = palarray[index];\n\t\twhile (pal) {\n\t\t\tif ((padef[index].at_type == ATR_TYPE_ENTITY) && is_attr_set(&pattr[index])) {\n\t\t\t\t/* for INCR case of entity limit, decode locally */\n\t\t\t\tset_attr_generic(&pattr[index], &padef[index], pal->al_value, pal->al_resc, INCR);\n\t\t\t} else {\n\t\t\t\tset_attr_generic(&pattr[index], &padef[index], pal->al_value, pal->al_resc, INTERNAL);\n\t\t\t\tint act_rc = 0;\n\t\t\t\tif (padef[index].at_action)\n\t\t\t\t\tif ((act_rc = (padef[index].at_action(&pattr[index], parent, ATR_ACTION_RECOV)))) {\n\t\t\t\t\t\tint fail_idx = index; /* remember the failing attribute; the cleanup loop below advances index */\n\t\t\t\t\t\tlog_errf(act_rc, __func__, \"Action function failed for %s attr, errno %d...unsetting attribute\", (padef+index)->at_name, act_rc);\n\t\t\t\t\t\tfor (index++; index <= limit; index++) {\n\t\t\t\t\t\t\twhile (pal) {\n\t\t\t\t\t\t\t\ttmp_pal = pal->al_sister;\n\t\t\t\t\t\t\t\tdelete_link(&pal->al_link);\n\t\t\t\t\t\t\t\tfree(pal);\n\t\t\t\t\t\t\t\tpal = tmp_pal;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif (index < limit)\n\t\t\t\t\t\t\t\tpal = palarray[index];\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (padef[fail_idx].at_free)\n\t\t\t\t\t\t\tpadef[fail_idx].at_free(&pattr[fail_idx]);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t}\n\t\t\t(pattr + index)->at_flags = (pal->al_flags & ~ATR_VFLAG_MODIFY) | ATR_VFLAG_MODCACHE;\n\n\t\t\ttmp_pal = pal->al_sister;\n\t\t\tpal = tmp_pal;\n\t\t}\n\t}\n\t(void) free(palarray);\n\n\treturn 0;\n}\n"
  },
  {
    "path": "src/server/checkkey.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    checkkey.c\n *\n * @brief\n *\t\tcheckkey.c - ships with the source package and is used to verify the license\n *\n * Included public functions are:\n *\n *\tinit_fl_license_attrs\tinitialize values of license structure\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <strings.h>\n#include <unistd.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <fcntl.h>\n#include <time.h>\n#include <sys/types.h>\n#include <netinet/in.h>\n#include \"portability.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"pbs_ifl.h\"\n#include \"pbs_nodes.h\"\n#include \"server.h\"\n#include \"svrfunc.h\"\n#include \"net_connect.h\"\n#include \"pbs_error.h\"\n#include \"log.h\"\n#include \"pbs_license.h\"\n#include \"work_task.h\"\n#include \"job.h\"\n\nchar *pbs_licensing_license_location = NULL;\nlong pbs_min_licenses = PBS_MIN_LICENSING_LICENSES;\nlong pbs_max_licenses = PBS_MAX_LICENSING_LICENSES;\nint pbs_licensing_linger = PBS_LIC_LINGER_TIME;\n\n/* Global Data Items: */\nextern pbs_list_head svr_alljobs;\n\nextern pbs_net_t pbs_server_addr;\nunsigned long hostidnum;\nint ext_license_server = 0;\nint license_expired = 0;\n\nvoid\nreturn_licenses(struct work_task 
*ptask)\n{\n}\n\nvoid\ninit_licensing(int delay)\n{\n}\n\nint\nstatus_licensing(void)\n{\n\treturn (0);\n}\n\nvoid\nclose_licensing(void)\n{\n}\n/**\n * @brief\n * \t\treturning host id.\n *\n * @return      Host ID\n */\nunsigned long\npbs_get_hostid(void)\n{\n\tunsigned long hid;\n\n\thid = (unsigned long) gethostid();\n\tif (hid == 0)\n\t\thid = (unsigned long) pbs_server_addr;\n\treturn hid;\n}\n/**\n * @brief\n * \t\tinitialize values of license structure\n *\n * @param[in]\tlicenses - pointer to the license block structure\n */\nvoid\ninit_fl_license_attrs(struct license_block *licenses)\n{\n\tlicenses->lb_glob_floating = 10000000;\n\tlicenses->lb_aval_floating = 10000000;\n\tlicenses->lb_used_floating = 0;\n\tlicenses->lb_high_used_floating = 0;\n\tlicenses->lb_do_task = 0;\n}\n"
  },
  {
    "path": "src/server/daemon_info.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"tpp.h\"\n\n/**\n * @file\tdaemon_info.c\n * @brief\n * \t\tdaemon_info.c - functions relating to the daemon_info structures\n *\n *\t\tFunctions are used for both mom and peer server\n */\n\n#ifndef PBS_MOM /* Not Mom, i.e. the Server */\n\n/**\n * @brief initialize daemon info structure\n * This struct is common for all service end points\n * including mom/peer-svr\n *\n * @param[in] pul - list of IP addresses of host; will be freed on error\n *\t\t\tor saved in structure; caller must not free pul\n * @param[in] port - port of service end point\n * @param[in] pmi - machine info struct\n * @return dmn_info_t*\n */\ndmn_info_t *\ninit_daemon_info(unsigned long *pul, uint port, struct machine_info *pmi)\n{\n\tdmn_info_t *dmn_info = calloc(1, sizeof(dmn_info_t));\n\tif (!dmn_info)\n\t\treturn NULL;\n\n\tdmn_info->dmn_state = INUSE_UNKNOWN | INUSE_DOWN | INUSE_NEEDS_HELLOSVR;\n\tdmn_info->dmn_stream = -1;\n\tCLEAR_HEAD(dmn_info->dmn_deferred_cmds);\n\tdmn_info->dmn_addrs = pul;\n\n\twhile (*pul) {\n\t\ttinsert2(*pul, port, pmi, &ipaddrs);\n\t\tpul++;\n\t}\n\n\treturn dmn_info;\n}\n\n/**\n * @brief free up daemon info struct and associated data\n *\n * @param[in] pmi - mom/peer-svr struct\n 
*/\nvoid\ndelete_daemon_info(struct machine_info *pmi)\n{\n\tdmn_info_t *pdmninfo;\n\tunsigned long *up;\n\n\tif (!pmi || !pmi->mi_dmn_info)\n\t\treturn;\n\n\tpdmninfo = pmi->mi_dmn_info;\n\n\t/* take stream out of tree */\n\ttpp_close(pdmninfo->dmn_stream);\n\ttdelete2((unsigned long) pdmninfo->dmn_stream, 0, &streams);\n\n\tif (pdmninfo->dmn_addrs) {\n\t\tfor (up = pdmninfo->dmn_addrs; *up; up++) {\n\t\t\t/* del Mom's IP addresses from tree  */\n\t\t\ttdelete2(*up, pmi->mi_port, &ipaddrs);\n\t\t}\n\t\tfree(pdmninfo->dmn_addrs);\n\t\tpdmninfo->dmn_addrs = NULL;\n\t}\n\n\tfree(pdmninfo);\n\tpmi->mi_dmn_info = NULL;\n}\n\n#endif /* end of server functions */\n"
  },
  {
    "path": "src/server/dis_read.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * dis_read.c - contains functions to read and decode the DIS\n *\tencoded requests and replies.\n *\n *\tIncluded public functions are:\n *\n *\tdecode_DIS_CopyFiles\n *\tdecode_DIS_CopyFiles_Cred\n *\tdecode_DIS_replySvr_inner\n *\tdecode_DIS_replySvr\n *\tdecode_DIS_replySvrTPP\n *\tdis_request_read\n *\tDIS_reply_read\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/types.h>\n#include <stdlib.h>\n#include <errno.h>\n#include <string.h>\n#include <stdio.h>\n#include \"dis.h\"\n#include \"libpbs.h\"\n#include \"list_link.h\"\n#include \"server_limits.h\"\n#include \"attribute.h\"\n#include \"log.h\"\n#include \"pbs_error.h\"\n#include \"credential.h\"\n#include \"batch_request.h\"\n#include \"net_connect.h\"\n\n/* External Global Data */\n\nextern char *msg_nosupport;\n\n/*\tData items are:\tstring\t\tjob id\t\t\t(may be null)\n *\t\t\tstring\t\tjob owner\t\t(may be null)\n *\t\t\tstring\t\texecution user name\n *\t\t\tstring\t\texecution group name\t(may be null)\n *\t\t\tunsigned int\tdirection & sandbox flag\n *\t\t\tunsigned int\tcount of file pairs in set\n *\t\t\tset of\t\tfile pairs:\n *\t\t\t\tunsigned int\tflag\n *\t\t\t\tstring\t\tlocal path name\n *\t\t\t\tstring\t\tremote path name (may be null)\n */\n/**\n * @brief\n * \t\tdecode_DIS_CopyFiles() - decode a Copy Files 
Dependency Batch Request\n *\n *\t\tThis request is used by the server ONLY.\n *\t\tThe batch request structure pointed to by preq must already exist.\n *\n * @see\n * dis_request_read\n *\n * @param[in] sock - socket of connection from Mom\n * @param[in] preq - pointer to the batch request structure to be filled in\n *\n * @return int\n * @retval 0 - success\n * @retval non-zero - decode failure error from a DIS routine\n */\nint\ndecode_DIS_CopyFiles(int sock, struct batch_request *preq)\n{\n\tint pair_ct;\n\tstruct rq_cpyfile *pcf;\n\tstruct rqfpair *ppair;\n\tint rc;\n\n\tpcf = &preq->rq_ind.rq_cpyfile;\n\tCLEAR_HEAD(pcf->rq_pair);\n\tif ((rc = disrfst(sock, PBS_MAXSVRJOBID, pcf->rq_jobid)) != 0)\n\t\treturn rc;\n\tif ((rc = disrfst(sock, PBS_MAXUSER, pcf->rq_owner)) != 0)\n\t\treturn rc;\n\tif ((rc = disrfst(sock, PBS_MAXUSER, pcf->rq_user)) != 0)\n\t\treturn rc;\n\tif ((rc = disrfst(sock, PBS_MAXGRPN, pcf->rq_group)) != 0)\n\t\treturn rc;\n\tpcf->rq_dir = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\tpair_ct = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\twhile (pair_ct--) {\n\n\t\tppair = (struct rqfpair *) malloc(sizeof(struct rqfpair));\n\t\tif (ppair == NULL)\n\t\t\treturn DIS_NOMALLOC;\n\t\tCLEAR_LINK(ppair->fp_link);\n\t\tppair->fp_local = 0;\n\t\tppair->fp_rmt = 0;\n\n\t\tppair->fp_flag = disrui(sock, &rc);\n\t\tif (rc) {\n\t\t\t(void) free(ppair);\n\t\t\treturn rc;\n\t\t}\n\t\tppair->fp_local = disrst(sock, &rc);\n\t\tif (rc) {\n\t\t\t(void) free(ppair);\n\t\t\treturn rc;\n\t\t}\n\t\tppair->fp_rmt = disrst(sock, &rc);\n\t\tif (rc) {\n\t\t\t(void) free(ppair->fp_local);\n\t\t\t(void) free(ppair);\n\t\t\treturn rc;\n\t\t}\n\t\tappend_link(&pcf->rq_pair, &ppair->fp_link, ppair);\n\t}\n\treturn 0;\n}\n/*\tData items are:\tstring\t\tjob id\t\t\t(may be null)\n *\t\t\tstring\t\tjob owner\t\t(may be null)\n *\t\t\tstring\t\texecution user name\n *\t\t\tstring\t\texecution group name\t(may be null)\n *\t\t\tunsigned int\tdirection & sandbox flag\n 
*\t\t\tunsigned int\tcount of file pairs in set\n *\t\t\tset of\t\tfile pairs:\n *\t\t\t\tunsigned int\tflag\n *\t\t\t\tstring\t\tlocal path name\n *\t\t\t\tstring\t\tremote path name (may be null)\n *\t\t\tunsigned int\tcredential length (bytes)\n *\t\t\tbyte string\tcredential\n */\n/**\n * @brief\n * \t\tdecode_DIS_CopyFiles_Cred() - decode a Copy Files with Credential\n *\t\t\t\t Dependency Batch Request\n *\n *\t\tThis request is used by the server ONLY.\n *\t\tThe batch request structure pointed to by preq must already exist.\n *\n * @see\n * \t\tdis_request_read\n *\n * @param[in] sock - socket of connection from Mom\n * @param[in] preq - pointer to the batch request structure to be filled in\n *\n * @return int\n * @retval 0 - success\n * @retval non-zero - decode failure error from a DIS routine\n */\nint\ndecode_DIS_CopyFiles_Cred(int sock, struct batch_request *preq)\n{\n\tint pair_ct;\n\tstruct rq_cpyfile_cred *pcfc;\n\tstruct rqfpair *ppair;\n\tint rc;\n\n\tpcfc = &preq->rq_ind.rq_cpyfile_cred;\n\tCLEAR_HEAD(pcfc->rq_copyfile.rq_pair);\n\tif ((rc = disrfst(sock, PBS_MAXSVRJOBID, pcfc->rq_copyfile.rq_jobid)) != 0)\n\t\treturn rc;\n\tif ((rc = disrfst(sock, PBS_MAXUSER, pcfc->rq_copyfile.rq_owner)) != 0)\n\t\treturn rc;\n\tif ((rc = disrfst(sock, PBS_MAXUSER, pcfc->rq_copyfile.rq_user)) != 0)\n\t\treturn rc;\n\tif ((rc = disrfst(sock, PBS_MAXGRPN, pcfc->rq_copyfile.rq_group)) != 0)\n\t\treturn rc;\n\tpcfc->rq_copyfile.rq_dir = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\tpair_ct = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\twhile (pair_ct--) {\n\n\t\tppair = (struct rqfpair *) malloc(sizeof(struct rqfpair));\n\t\tif (ppair == NULL)\n\t\t\treturn DIS_NOMALLOC;\n\t\tCLEAR_LINK(ppair->fp_link);\n\t\tppair->fp_local = 0;\n\t\tppair->fp_rmt = 0;\n\n\t\tppair->fp_flag = disrui(sock, &rc);\n\t\tif (rc) {\n\t\t\t(void) free(ppair);\n\t\t\treturn rc;\n\t\t}\n\t\tppair->fp_local = disrst(sock, &rc);\n\t\tif (rc) {\n\t\t\t(void) 
free(ppair);\n\t\t\treturn rc;\n\t\t}\n\t\tppair->fp_rmt = disrst(sock, &rc);\n\t\tif (rc) {\n\t\t\t(void) free(ppair->fp_local);\n\t\t\t(void) free(ppair);\n\t\t\treturn rc;\n\t\t}\n\t\tappend_link(&pcfc->rq_copyfile.rq_pair,\n\t\t\t    &ppair->fp_link, ppair);\n\t}\n\n\tpcfc->rq_credtype = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\tpcfc->rq_pcred = disrcs(sock, &pcfc->rq_credlen, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tdecode_DIS_replySvr_inner() - decode a Batch Protocol Reply Structure for Server\n *\n *\t\tThis routine decodes a batch reply into the form used by the server.\n *\t\tThe only difference between this and the command version is on status\n *\t\treplies.  For the server, the attributes are decoded into a list of\n *\t\tserver svrattrl structures rather than a command's attrl.\n *\n * @see\n * \t\tdecode_DIS_replySvrTPP\n *\n * @param[in] sock - socket connection from which to read reply\n * @param[in,out] reply - batch_reply structure defined in libpbs.h, it must be allocated\n *\t\t\t\t\t  by the caller.\n *\n * @return int\n * @retval 0 - success\n * @retval -1 - if brp_choice is wrong\n * @retval non-zero - decode failure error from a DIS routine\n */\n\nstatic int\ndecode_DIS_replySvr_inner(int sock, struct batch_reply *reply)\n{\n\tint ct;\n\tstruct brp_select *psel;\n\tstruct brp_select **pselx;\n\tint rc = 0;\n\tsize_t txtlen;\n\n\t/* next decode code, auxcode and choice (union type identifier) */\n\n\treply->brp_code = disrsi(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\treply->brp_auxcode = disrsi(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\treply->brp_choice = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\treply->brp_is_part = disrui(sock, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\tswitch (reply->brp_choice) {\n\n\t\tcase BATCH_REPLY_CHOICE_NULL:\n\t\t\tbreak; /* no more to do */\n\n\t\tcase BATCH_REPLY_CHOICE_Queue:\n\t\tcase BATCH_REPLY_CHOICE_RdytoCom:\n\t\tcase BATCH_REPLY_CHOICE_Commit:\n\t\t\trc = 
disrfst(sock, PBS_MAXSVRJOBID + 1, reply->brp_un.brp_jid);\n\t\t\tif (rc)\n\t\t\t\treturn (rc);\n\t\t\tbreak;\n\n\t\tcase BATCH_REPLY_CHOICE_Select:\n\n\t\t\t/* have to get count of number of strings first */\n\n\t\t\treply->brp_un.brp_select = NULL;\n\t\t\tpselx = &reply->brp_un.brp_select;\n\t\t\tct = disrui(sock, &rc);\n\t\t\tif (rc)\n\t\t\t\treturn rc;\n\t\t\treply->brp_count = ct;\n\n\t\t\twhile (ct--) {\n\t\t\t\tpsel = (struct brp_select *) malloc(sizeof(struct brp_select));\n\t\t\t\tif (psel == 0)\n\t\t\t\t\treturn DIS_NOMALLOC;\n\t\t\t\tpsel->brp_next = NULL;\n\t\t\t\tpsel->brp_jobid[0] = '\\0';\n\t\t\t\trc = disrfst(sock, PBS_MAXSVRJOBID + 1, psel->brp_jobid);\n\t\t\t\tif (rc) {\n\t\t\t\t\t(void) free(psel);\n\t\t\t\t\treturn rc;\n\t\t\t\t}\n\t\t\t\t*pselx = psel;\n\t\t\t\tpselx = &psel->brp_next;\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase BATCH_REPLY_CHOICE_Text:\n\n\t\t\t/* text reply */\n\n\t\t\treply->brp_un.brp_txt.brp_str = disrcs(sock, &txtlen, &rc);\n\t\t\treply->brp_un.brp_txt.brp_txtlen = txtlen;\n\t\t\tbreak;\n\n\t\tcase BATCH_REPLY_CHOICE_Locate:\n\n\t\t\t/* Locate Job Reply */\n\n\t\t\trc = disrfst(sock, PBS_MAXDEST + 1, reply->brp_un.brp_locate);\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\treturn -1;\n\t}\n\n\treturn rc;\n}\n\n/**\n * @brief\n * \t\tdecode a Batch Protocol Reply Structure for Server\n *\n *  \tThis routine reads reply over TCP by calling decode_DIS_replySvr_inner()\n * \t\tto read the reply to a batch request. 
This routine reads the protocol type\n * \t\tand version before calling decode_DIS_replySvr_inner() to read the rest of\n * \t\tthe reply structure.\n *\n * @see\n *\t\tDIS_reply_read\n *\n * @param[in] sock - socket connection from which to read reply\n * @param[out] reply - The reply structure to be returned\n *\n * @return Error code\n * @retval DIS_SUCCESS(0) - Success\n * @retval !DIS_SUCCESS   - Failure (see dis.h)\n */\nint\ndecode_DIS_replySvr(int sock, struct batch_reply *reply)\n{\n\tint rc = 0;\n\tint i;\n\t/* first decode \"header\" consisting of protocol type and version */\n\n\ti = disrui(sock, &rc);\n\tif (rc != 0)\n\t\treturn rc;\n\tif (i != PBS_BATCH_PROT_TYPE)\n\t\treturn DIS_PROTO;\n\ti = disrui(sock, &rc);\n\tif (rc != 0)\n\t\treturn rc;\n\tif (i != PBS_BATCH_PROT_VER)\n\t\treturn DIS_PROTO;\n\n\treturn (decode_DIS_replySvr_inner(sock, reply));\n}\n\n/**\n * @brief\n * \tdecode a Batch Protocol Reply Structure for Server over TPP stream\n *\n * \tThis routine reads data over TPP stream by calling decode_DIS_replySvr_inner()\n * \tto read the reply to a batch request. 
This routine reads the protocol type\n * \tand version before calling decode_DIS_replySvr_inner() to read the rest of\n * \tthe reply structure.\n *\n * @see\n * \tDIS_reply_read\n *\n * @param[in] sock - socket connection from which to read reply\n * @param[out] reply - The reply structure to be returned\n *\n * @return Error code\n * @retval DIS_SUCCESS(0) - Success\n * @retval !DIS_SUCCESS   - Failure (see dis.h)\n */\nint\ndecode_DIS_replySvrTPP(int sock, struct batch_reply *reply)\n{\n\t/* for tpp based connection, header has already been read */\n\treturn (decode_DIS_replySvr_inner(sock, reply));\n}\n\n/**\n * @brief\n * \tRead in a DIS encoded request from the network\n * \tand decode it:\n *\tRead and decode the request into the request structures\n *\n * @see\n * \tprocess_request and read_fo_request\n *\n * @param[in] sfds\t- the socket descriptor\n * @param[in,out] request - will contain the decoded request\n *\n * @return int\n * @retval 0 \tif request read ok, batch_request pointed to by request is updated.\n * @retval -1 \tif EOF (no request but no error)\n * @retval >0 \tif errors ( a PBSE_ number)\n */\n\nint\ndis_request_read(int sfds, struct batch_request *request)\n{\n#ifndef PBS_MOM\n\tint i;\n#endif /* PBS_MOM */\n\tint proto_type;\n\tint proto_ver;\n\tint rc; /* return code */\n\n\tif (request->prot == PROT_TCP)\n\t\tDIS_tcp_funcs(); /* setup for DIS over tcp */\n\n\t/* Decode the Request Header, that will tell the request type */\n\n\trc = decode_DIS_ReqHdr(sfds, request, &proto_type, &proto_ver);\n\n\tif (rc != 0) {\n\t\tif (rc == DIS_EOF)\n\t\t\treturn EOF;\n\t\t(void) sprintf(log_buffer,\n\t\t\t       \"Req Header bad, errno %d, dis error %d\",\n\t\t\t       errno, rc);\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_REQUEST, LOG_DEBUG,\n\t\t\t  \"?\", log_buffer);\n\n\t\treturn PBSE_DISPROTO;\n\t}\n\n\tif (proto_ver > PBS_BATCH_PROT_VER)\n\t\treturn PBSE_DISPROTO;\n\n\t/* Decode the Request Body based on the type */\n\n\tswitch 
(request->rq_type) {\n\t\tcase PBS_BATCH_Connect:\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_Disconnect:\n\t\t\treturn (-1); /* set EOF return */\n\n\t\tcase PBS_BATCH_QueueJob:\n\t\tcase PBS_BATCH_SubmitResv:\n\t\t\tCLEAR_HEAD(request->rq_ind.rq_queuejob.rq_attr);\n\t\t\trc = decode_DIS_QueueJob(sfds, request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_JobCred:\n\t\t\trc = decode_DIS_JobCred(sfds, request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_UserCred:\n\t\t\trc = decode_DIS_UserCred(sfds, request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_jobscript:\n\t\tcase PBS_BATCH_MvJobFile:\n\t\t\trc = decode_DIS_JobFile(sfds, request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_RdytoCommit:\n\t\tcase PBS_BATCH_Commit:\n\t\tcase PBS_BATCH_Rerun:\n\t\t\trc = decode_DIS_JobId(sfds, request->rq_ind.rq_commit);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_DeleteJobList:\n\t\t\trc = decode_DIS_DelJobList(sfds, request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_DeleteJob:\n\t\tcase PBS_BATCH_DeleteResv:\n\t\tcase PBS_BATCH_ResvOccurEnd:\n\t\tcase PBS_BATCH_HoldJob:\n\t\tcase PBS_BATCH_ModifyJob:\n\t\tcase PBS_BATCH_ModifyJob_Async:\n\t\t\trc = decode_DIS_Manage(sfds, request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_MessJob:\n\t\t\trc = decode_DIS_MessageJob(sfds, request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_Shutdown:\n\t\tcase PBS_BATCH_FailOver:\n\t\t\trc = decode_DIS_ShutDown(sfds, request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_SignalJob:\n\t\t\trc = decode_DIS_SignalJob(sfds, request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_StatusJob:\n\t\t\trc = decode_DIS_Status(sfds, request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_PySpawn:\n\t\t\trc = decode_DIS_PySpawn(sfds, request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_Authenticate:\n\t\t\trc = decode_DIS_Authenticate(sfds, request);\n\t\t\tbreak;\n\n#ifndef PBS_MOM\n\t\tcase PBS_BATCH_RegisterSched:\n\t\t\trequest->rq_ind.rq_register_sched.rq_name = disrst(sfds, &rc);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_RelnodesJob:\n\t\t\trc = decode_DIS_RelnodesJob(sfds, 
request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_LocateJob:\n\t\t\trc = decode_DIS_JobId(sfds, request->rq_ind.rq_locate);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_Manager:\n\t\tcase PBS_BATCH_ReleaseJob:\n\t\t\trc = decode_DIS_Manage(sfds, request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_MoveJob:\n\t\tcase PBS_BATCH_OrderJob:\n\t\t\trc = decode_DIS_MoveJob(sfds, request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_RunJob:\n\t\tcase PBS_BATCH_AsyrunJob:\n\t\tcase PBS_BATCH_AsyrunJob_ack:\n\t\tcase PBS_BATCH_StageIn:\n\t\tcase PBS_BATCH_ConfirmResv:\n\t\t\trc = decode_DIS_Run(sfds, request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_DefSchReply:\n\t\t\trequest->rq_ind.rq_defrpy.rq_cmd = disrsi(sfds, &rc);\n\t\t\tif (rc)\n\t\t\t\tbreak;\n\t\t\trequest->rq_ind.rq_defrpy.rq_id = disrst(sfds, &rc);\n\t\t\tif (rc)\n\t\t\t\tbreak;\n\t\t\trequest->rq_ind.rq_defrpy.rq_err = disrsi(sfds, &rc);\n\t\t\tif (rc)\n\t\t\t\tbreak;\n\t\t\ti = disrsi(sfds, &rc);\n\t\t\tif (rc)\n\t\t\t\tbreak;\n\t\t\tif (i)\n\t\t\t\trequest->rq_ind.rq_defrpy.rq_txt = disrst(sfds, &rc);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_SelectJobs:\n\t\tcase PBS_BATCH_SelStat:\n\t\t\tCLEAR_HEAD(request->rq_ind.rq_select.rq_selattr);\n\t\t\tCLEAR_HEAD(request->rq_ind.rq_select.rq_rtnattr);\n\t\t\trc = decode_DIS_svrattrl(sfds,\n\t\t\t\t\t\t &request->rq_ind.rq_select.rq_selattr);\n\t\t\trc = decode_DIS_svrattrl(sfds,\n\t\t\t\t\t\t &request->rq_ind.rq_select.rq_rtnattr);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_StatusNode:\n\t\tcase PBS_BATCH_StatusResv:\n\t\tcase PBS_BATCH_StatusQue:\n\t\tcase PBS_BATCH_StatusSvr:\n\t\tcase PBS_BATCH_StatusSched:\n\t\tcase PBS_BATCH_StatusRsc:\n\t\tcase PBS_BATCH_StatusHook:\n\t\t\trc = decode_DIS_Status(sfds, request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_TrackJob:\n\t\t\trc = decode_DIS_TrackJob(sfds, request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_Rescq:\n\t\tcase PBS_BATCH_ReserveResc:\n\t\tcase PBS_BATCH_ReleaseResc:\n\t\t\trc = decode_DIS_Rescl(sfds, request);\n\t\t\tbreak;\n\n\t\tcase 
PBS_BATCH_RegistDep:\n\t\t\trc = decode_DIS_Register(sfds, request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_ModifyResv:\n\t\t\tdecode_DIS_ModifyResv(sfds, request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_PreemptJobs:\n\t\t\tdecode_DIS_PreemptJobs(sfds, request);\n\t\t\tbreak;\n\n#else /* yes PBS_MOM */\n\n\t\tcase PBS_BATCH_CopyHookFile:\n\t\t\trc = decode_DIS_CopyHookFile(sfds, request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_DelHookFile:\n\t\t\trc = decode_DIS_DelHookFile(sfds, request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_CopyFiles:\n\t\tcase PBS_BATCH_DelFiles:\n\t\t\trc = decode_DIS_CopyFiles(sfds, request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_CopyFiles_Cred:\n\t\tcase PBS_BATCH_DelFiles_Cred:\n\t\t\trc = decode_DIS_CopyFiles_Cred(sfds, request);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_Cred:\n\t\t\trc = decode_DIS_Cred(sfds, request);\n\t\t\tbreak;\n\n#endif /* PBS_MOM */\n\n\t\tdefault:\n\t\t\tsprintf(log_buffer, \"%s: %d from %s\", msg_nosupport,\n\t\t\t\trequest->rq_type, request->rq_user);\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_REQUEST, LOG_DEBUG,\n\t\t\t\t  \"?\", log_buffer);\n\t\t\trc = PBSE_UNKREQ;\n\t\t\tbreak;\n\t}\n\n\tif (rc == 0) { /* Decode the Request Extension, if present */\n\t\trc = decode_DIS_ReqExtend(sfds, request);\n\t\tif (rc != 0) {\n\t\t\t(void) sprintf(log_buffer,\n\t\t\t\t       \"Request type: %d Req Extension bad, dis error %d\", request->rq_type, rc);\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_REQUEST,\n\t\t\t\t  LOG_DEBUG, \"?\", log_buffer);\n\t\t\trc = PBSE_DISPROTO;\n\t\t}\n\t} else if (rc != PBSE_UNKREQ) {\n\t\t(void) sprintf(log_buffer,\n\t\t\t       \"Req Body bad, dis error %d, type %d\",\n\t\t\t       rc, request->rq_type);\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_REQUEST,\n\t\t\t  LOG_DEBUG, \"?\", log_buffer);\n\t\trc = PBSE_DISPROTO;\n\t}\n\n\treturn (rc);\n}\n\n/**\n * @brief\n * \ttop level function to read and decode DIS based batch reply\n *\n * \tCalls decode_DIS_replySvrTPP in case of PROT_TPP and 
decode_DIS_replySvr\n * \tin case of PROT_TCP to read the reply\n *\n * @see\n *\tread_reg_reply, process_Dreply and process_DreplyTPP.\n *\n * @param[in] sock - socket connection from which to read reply\n * @param[out] reply - The reply structure to be returned\n * @param[in] prot - Whether to read over tcp or tpp based connection\n *\n * @return Error code\n * @retval DIS_SUCCESS(0) - Success\n * @retval !DIS_SUCCESS   - Failure (see dis.h)\n */\nint\nDIS_reply_read(int sock, struct batch_reply *preply, int prot)\n{\n\tif (prot == PROT_TPP)\n\t\treturn (decode_DIS_replySvrTPP(sock, preply));\n\n\tDIS_tcp_funcs();\n\treturn (decode_DIS_replySvr(sock, preply));\n}\n"
  },
  {
    "path": "src/server/failover.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @brief\n * failover.c\t- Functions relating to the FailOver Requests\n *\n *\tIncluded public functions are:\n *\n *\trel_handshake\n *\tutimes\n *\tprimary_handshake\n *\tsecondary_handshake\n *\tfo_shutdown_reply\n *\tfailover_send_shutdown\n *\tclose_secondary\n *\tput_failover\n *\treq_failover\n *\tread_fo_request\n *\tread_reg_reply\n *\talm_handler\n *\talt_conn\n *\ttakeover_from_secondary\n *\tbe_secondary\n *\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <sys/time.h>\n#include <fcntl.h>\n#include <stdio.h>\n#include <utime.h>\n#include \"libpbs.h\"\n#include <signal.h>\n#include <string.h>\n#include <time.h>\n#include <unistd.h>\n#include <sys/wait.h>\n#include \"server_limits.h\"\n#include \"credential.h\"\n#include \"attribute.h\"\n#include \"server.h\"\n#include \"batch_request.h\"\n#include \"net_connect.h\"\n#include \"work_task.h\"\n#include \"pbs_error.h\"\n#include \"log.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"dis.h\"\n#include \"libsec.h\"\n#include \"pbs_db.h\"\n\n/* used internal to this file only */\n#define SECONDARY_STATE_noconn -1 /* not connected to Primary */\n#define SECONDARY_STATE_conn 0\t  /* connect to Primary */\n#define SECONDARY_STATE_regsent 1 /* have sent register to Primary 
*/\n#define SECONDARY_STATE_handsk 3  /* receiving regular handshakes  */\n#define SECONDARY_STATE_nohsk 4\t  /* handshakes have stopped coming */\n#define SECONDARY_STATE_shutd 5\t  /* told to shut down */\n#define SECONDARY_STATE_takeov 6  /* primary back up and taking over */\n#define SECONDARY_STATE_inact 7\t  /* told to go inactive/idle */\n#define SECONDARY_STATE_idle 8\t  /* idle until primary back up */\n\n#define HANDSHAKE_TIME 5\n\n/* Global Data Items: */\n\nextern char *msg_daemonname;\nextern unsigned long hostidnum;\nextern char *path_priv;\nextern char *path_svrlive;\nextern char *path_secondaryact;\nextern time_t secondary_delay;\nextern time_t time_now;\n\nextern struct connection *svr_conn;\nextern struct batch_request *saved_takeover_req;\nextern pbs_net_t pbs_server_addr;\n\nint pbs_failover_active = 0; /* indicates if Secondary is active */\n/* Private data items */\n\nstatic int sec_sock = -1;\t      /* used by Secondary */\nstatic int Secondary_connection = -1; /* used by Primary, connection_handle */\nstatic int Secondary_state = SECONDARY_STATE_noconn;\nstatic time_t hd_time;\nstatic int goidle_ack = 0;\nstatic char msg_takeover[] = \"received takeover message from primary, going inactive\";\nstatic char msg_regfailed[] = \"Primary rejected attempt to register as Secondary\";\n\n/**\n * @brief\n * \t\trel_handshake - free the batch request used for a handshake\n *\t\tCannot use release_req() as we don't want the connection closed\n *\n * @see\n * \t\tprimary_handshake\n * @param[in]\tpwt - pointer to the work task entry which invoked the function.\n *\n * @return: none\n */\nstatic void\nrel_handshake(struct work_task *pwt)\n{\n\tDBPRT((\"Failover: rel_handshake\\n\"))\n\tfree_br((struct batch_request *) pwt->wt_parm1);\n}\n\n/**\n * @brief\n *\t\tDo the handshake with the secondary server to show that the primary\n *\t\tis still alive.\n *\n * @par Functionality:\n *\t\tPerform periodic handshake to indicate to the Secondary Server that\n 
*\t\twe, the Primary Server, are alive.  The handshake consists of three\n *\t\tseparate communication channels:\n *\t\t1. \"Touching\" the \"svrlive\" file (in PBS_HOME/server_priv);\n *\t   \tthis is done whenever this function is called.\n *\t\t2. If a Secondary Server has registered, a \"handshake\" message is sent\n *\t   \tover the persistent TCP connection to the Secondary.\n *\t\t3. Also if a Secondary has registered,  the \"secondary_active\" file is\n *\t   \tstatus-ed to see if it exists;  this is created by the Secondary\n *\t   \twhen it goes active.  If this file is present, the Primary will\n *\t   \trestart itself in an attempt to take back over.\n *\t\tThis function is called from pbsd_main.c when it is initializing.  It will\n *\t\tcreate a work-task to recall itself every HANDSHAKE_TIME (5) seconds.\n *\n * @see\n *\t\tmain\n *\n * @param[in]\tpwt - pointer to the work task entry which invoked the function.\n *\n * @return: none\n */\nvoid\nprimary_handshake(struct work_task *pwt)\n{\n\tstruct batch_request *preq;\n\tstruct stat sb;\n\n\t/* touch svrlive file as an \"I am alive\" sign */\n\n\t(void) update_svrlive();\n\n\t/* if connection, send HandShake request to Secondary */\n\n\tif (Secondary_connection >= 0) {\n\t\tDBPRT((\"Failover: sending handshake\\n\"))\n\t\tif ((preq = alloc_br(PBS_BATCH_FailOver)) != NULL) {\n\t\t\tpreq->rq_ind.rq_failover = FAILOVER_HandShake;\n\t\t\tif (issue_Drequest(Secondary_connection, preq, rel_handshake, 0, 0) != 0) {\n\t\t\t\tclose_conn(Secondary_connection);\n\t\t\t\tSecondary_connection = -2;\n\t\t\t}\n\t\t}\n\n\t\t/* see if Secondary has taken over even though we are up */\n\n\t\tif (stat(path_secondaryact, &sb) != -1) {\n\t\t\tset_sattr_l_slim(SVR_ATR_State, SV_STATE_SECIDLE, SET); /* cause myself to recycle */\n\t\t\tDBPRT((\"Primary server found secondary active, restarting\\n\"))\n\t\t}\n\t}\n\n\t/* reset work_task to call this again */\n\n\t(void) set_task(WORK_Timed, time_now + HANDSHAKE_TIME, 
primary_handshake, NULL);\n\n\treturn;\n}\n\n/**\n * @brief\n *\t\t\"Touch\" the svrlive file (in PBS_HOME/server_priv) to indicate that\n *\t\tthe Secondary Server is active.\n *\n * @par Functionality:\n *\t\tWhen starting up, the Primary Server will monitor the time of \"svrlive\"\n *\t\tto see if the Secondary appears to be active and needs to be told to\n *\t\tgo inactive; see pbsd_main.c.\n *\t\tThis function is first called out of the main program (pbsd_main.c)\n *\t\twhen the Secondary becomes the active server.  It will create a\n *\t\twork-task to recall itself every HANDSHAKE_TIME (5) seconds.\n *\n * @see\n *\t\tmain\n *\n * @param[in]\tpwt - pointer to the work task entry which invoked the function.\n *\n * @return: none\n */\nvoid\nsecondary_handshake(struct work_task *pwt)\n{\n\t(void) update_svrlive();\n\t(void) set_task(WORK_Timed, time_now + HANDSHAKE_TIME,\n\t\t\tsecondary_handshake, NULL);\n}\n\n/**\n * @brief\n *\t\tHandles the reply from the Secondary for a shutdown or go-inactive message.\n *\t\tClears the SV_STATE_PRIMDLY bit from the internal Server state so\n *\t\tthe Primary can exit from the main loop.\n *\n * @see\n *\t\tfailover_send_shutdown\n *\n * @param[in]\tpwt - pointer to the work task entry which invoked the function.\n *\n * @return: none\n */\nstatic void\nfo_shutdown_reply(struct work_task *pwt)\n{\n\tset_sattr_l_slim(SVR_ATR_State, get_sattr_long(SVR_ATR_State) & ~SV_STATE_PRIMDLY, SET);\n\trelease_req(pwt);\n}\n/**\n * @brief\n * \t\tSend a \"shutdown\" or \"stay idle\" request to the\n *\t\tSecondary Server when the Primary is shutting down.\n *\t\tWill cause a wait for the reply as this is critical, see\n *\t\tfo_shutdown_reply().\n *\n * @see\n * \t\tsvr_shutdown and req_shutdown\n *\n * @param[in] type\t- type of message to send to Secondary\n *\n * @return  int - success or failure\n * @retval    0 - shutdown request sent to Secondary\n * @retval    1 - no Secondary connected, nothing to be done; or failed\n\t\t  to send 
message.\n */\nint\nfailover_send_shutdown(int type)\n{\n\tstruct batch_request *preq;\n\n\tif (Secondary_connection == -1)\n\t\treturn (1); /* no secondary, nothing to do */\n\n\tif ((preq = alloc_br(PBS_BATCH_FailOver)) == NULL) {\n\t\tclose_conn(Secondary_connection);\n\t\tSecondary_connection = -2;\n\t\treturn (1);\n\t}\n\tpreq->rq_ind.rq_failover = type;\n\tif (issue_Drequest(Secondary_connection, preq, fo_shutdown_reply, 0, 0) != 0) {\n\t\tclose_conn(Secondary_connection);\n\t\tSecondary_connection = -2;\n\t\treturn (1);\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n * close_secondary - clear the Secondary_connection indicator when the socket is closed\n *\n * @see\n * \t\treq_failover\n *\n * @param[in] sock\t- socket of connection\n *\n * @return  none\n */\n\nstatic void\nclose_secondary(int sock)\n{\n\tconn_t *conn;\n\n\tconn = get_conn(sock);\n\tif (!conn)\n\t\treturn;\n\n\tif (Secondary_connection == conn->cn_sock)\n\t\tSecondary_connection = -1;\n\n\tDBPRT((\"Failover: close secondary on socket %d\\n\", sock))\n\n\treturn;\n}\n\n/**\n * @brief\n * \t\tput_failover - encode the FailOver request.\n *\t\tUsed via issue_Drequest() by the active server for handshake/control;\n *\t\tused directly by the Secondary for the register message.\n *\n *\t@see\n *\t\ttakeover_from_secondary, be_secondary and issue_Drequest\n *\n * @param[in] sock\t- socket of connection\n * @param[in] request\t- FailOver request\n *\n * @return  int\n * @retval\t0\t- success\n * @retval\tnon-zero - encode failure error from a DIS routine\n */\nint\nput_failover(int sock, struct batch_request *request)\n{\n\tint rc;\n\n\tDBPRT((\"Failover: sending FO(%d) request\\n\", request->rq_ind.rq_failover))\n\tDIS_tcp_funcs();\n\tif ((rc = encode_DIS_ReqHdr(sock, PBS_BATCH_FailOver, pbs_current_user)) == 0)\n\t\tif ((rc = diswui(sock, request->rq_ind.rq_failover)) == 0)\n\t\t\tif ((rc = encode_DIS_ReqExtend(sock, 0)) == 0)\n\t\t\t\trc = dis_flush(sock);\n\treturn rc;\n}\n\n/**\n * @brief\n *\tReturn the host id.\n 
*\n * @return Host ID\n */\nunsigned long\npbs_get_hostid(void)\n{\n\tunsigned long hid;\n\n\thid = (unsigned long) gethostid();\n\tif (hid == 0)\n\t\thid = (unsigned long) pbs_server_addr;\n\treturn hid;\n}\n\n/**\n * @brief\n * \t\treq_failover - service failover-related requests.\n *\t\tPrimary - when it receives \"register\", this is reached via process_request\n *\t\t  and dispatch_request (like any request)\n *\t\tSecondary - handshake and control calls, reached directly from\n *\t\t  handshake_decode from be_secondary\n *\n * @see\n * \t\tdispatch_request and read_fo_request\n *\n * @param[in,out] preq\t- pointer to failover request\n *\n * @return  none\n */\n\nvoid\nreq_failover(struct batch_request *preq)\n{\n\tint err = 0;\n\tchar hostbuf[PBS_MAXHOSTNAME + 1];\n\tconn_t *conn;\n\tunsigned long hostidnum;\n\n\tpreq->rq_reply.brp_auxcode = 0;\n\n\tconn = get_conn(preq->rq_conn);\n\tif (!conn) {\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\treturn;\n\t}\n\n\tDBPRT((\"Failover: received FO(%d) request\\n\", preq->rq_ind.rq_failover))\n\tswitch (preq->rq_ind.rq_failover) {\n\n\t\tcase FAILOVER_Register:\n\t\t\t/*\n\t\t\t * The one request that should be seen by the primary server\n\t\t\t * - register the secondary and return the primary's hostid\n\t\t\t *\n\t\t\t * Request must be from the Secondary system with privilege.\n\t\t\t * For now there should be only one, so error if one is already registered.\n\t\t\t */\n\n\t\t\thostbuf[0] = '\\0';\n\t\t\t(void) get_connecthost(preq->rq_conn, hostbuf, (sizeof(hostbuf) - 1));\n\n\t\t\tif (Secondary_connection >= 0) {\n\t\t\t\terr = PBSE_OBJBUSY;\n\t\t\t\tsprintf(log_buffer, \"Failover: second secondary tried to register, host: %s\", hostbuf);\n\t\t\t\tDBPRT((\"%s\\n\", log_buffer))\n\t\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t  LOG_WARNING, msg_daemonname, log_buffer);\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tsprintf(log_buffer, \"Failover: registering %s as Secondary Server\", hostbuf);\n\t\t\tDBPRT((\"%s\\n\", 
log_buffer))\n\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER,\n\t\t\t\t  LOG_INFO, msg_daemonname, log_buffer);\n\n\t\t\t/* Mark the connection as non-expiring */\n\n\t\t\tconn->cn_authen |= PBS_NET_CONN_NOTIMEOUT;\n\t\t\tSecondary_connection = preq->rq_conn;\n\t\t\tconn->cn_func = process_Dreply;\n\t\t\tnet_add_close_func(preq->rq_conn, close_secondary);\n\n\t\t\t/* return the host id as a text string */\n\t\t\t/* (make do with existing capability to return data in reply) */\n\n\t\t\thostidnum = pbs_get_hostid();\n\t\t\tsprintf(hostbuf, \"%ld\", hostidnum);\n\t\t\t(void) reply_text(preq, PBSE_NONE, hostbuf);\n\t\t\treturn;\n\n\t\t\t/*\n\t\t\t * the remainder of the requests come from the Primary to the Secondary.\n\t\t\t */\n\n\t\tcase FAILOVER_HandShake:\n\t\t\t/* Handshake - the Primary is up, all is right with the world */\n\t\t\t/* record time of the handshake, then just acknowledge it     */\n\t\t\thd_time = time(0);\n\t\t\tif (Secondary_state == SECONDARY_STATE_nohsk)\n\t\t\t\tSecondary_state = SECONDARY_STATE_handsk;\n\t\t\tbreak;\n\n\t\tcase FAILOVER_PrimIsBack:\n\t\t\t/*\n\t\t\t * Primary is Back - The Primary Server is back up and\n\t\t\t * wishes to take control again.\n\t\t\t * This is the only failover request normally\n\t\t\t * seen by the Secondary while it is active.\n\t\t\t */\n\t\t\tset_sattr_l_slim(SVR_ATR_State, SV_STATE_SECIDLE, SET);\n\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_CRIT,\n\t\t\t\t  msg_daemonname, msg_takeover);\n\t\t\t(void) unlink(path_secondaryact); /* remove file */\n\t\t\tDBPRT((\"%s\\n\", msg_takeover))\n\t\t\tbreak;\n\n\t\t\t/* These requests come from the Primary while the \t*/\n\t\t\t/* Secondary is inactive \t\t\t\t*/\n\n\t\tcase FAILOVER_SecdShutdown:\n\t\t\t/* Primary is shutting down, Secondary should also go down */\n\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_CRIT,\n\t\t\t\t  msg_daemonname, \"Failover: Secondary told to shut 
down\");\n\n\t\t\treply_send(preq);\n\t\t\texit(0);\n\n\t\tcase FAILOVER_SecdGoInactive:\n\t\t\t/* Primary is shutting down, Secondary should remain inactive */\n\t\t\tSecondary_state = SECONDARY_STATE_inact;\n\t\t\tbreak;\n\n\t\tcase FAILOVER_SecdTakeOver:\n\t\t\tsleep(10); /* give primary a bit more time to go down */\n\t\t\tSecondary_state = SECONDARY_STATE_takeov;\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\tDBPRT((\"Failover: invalid request\\n\"))\n\t\t\terr = 1;\n\t}\n\n\tif (err) {\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t} else {\n\n\t\tpreq->rq_reply.brp_code = 0;\n\t\tpreq->rq_reply.brp_choice = BATCH_REPLY_CHOICE_NULL;\n\t\tif (preq->rq_ind.rq_failover == FAILOVER_PrimIsBack) {\n\t\t\t/*\n\t\t\t * save ptr of preq, Seconday will acknowledge the\n\t\t\t * request after the nodes have been saved off\n\t\t\t */\n\t\t\tsaved_takeover_req = preq;\n\t\t} else if (preq->rq_ind.rq_failover == FAILOVER_SecdTakeOver) {\n\t\t\t/* acknowledge the request */\n\t\t\treply_send(preq);\n\t\t\t/*\n\t\t\t * Primary is shutting down, Secondary should go active\n\t\t\t * wait for Primary to actually shut down\n\t\t\t * (connection closes)\n\t\t\t */\n\t\t\t(void) wait_request(600, NULL);\n\t\t\tif (sec_sock != -1) {\n\t\t\t\tclose_conn(sec_sock);\n\t\t\t\tsec_sock = -1;\n\t\t\t}\n\t\t} else {\n\t\t\t/* acknowledge the request */\n\t\t\tDBPRT((\"Failover: acknowledging FO(%d) request\\n\", preq->rq_ind.rq_failover))\n\t\t\treply_send(preq);\n\t\t}\n\t}\n\treturn;\n}\n\n/**\n * @brief\n *\t\tRead and decode the failover request.  
This function is\n *\t\tused only by the secondary, in place of process_request().\n *\n * @see\n * \t\tread_reg_reply\n *\n * @param[in] conn - connection on which the batch request is to be read\n *\n * @return None\n */\nstatic void\nread_fo_request(int conn)\n{\n\tint rc;\n\tstruct batch_request *request;\n\n\tif ((request = alloc_br(0)) == NULL) { /* freed when reply sent */\n\t\tDBPRT((\"Failover: Unable to allocate request structure\\n\"))\n\t\tSecondary_state = SECONDARY_STATE_noconn;\n\t\tclose_conn(conn);\n\t\tsec_sock = -1;\n\t\treturn;\n\t}\n\trequest->rq_conn = conn;\n\trc = dis_request_read(conn, request);\n\tDBPRT((\"Failover: received request (rc=%d) secondary state %d\\n\", rc, Secondary_state))\n\tif (rc == -1) {\n\t\t/*\n\t\t * EOF/socket closed; if the Secondary state is\n\t\t * SECONDARY_STATE_inact or SECONDARY_STATE_noconn, then leave it\n\t\t * unchanged as there is a race as to when this end sees the\n\t\t * connection closed by the primary;\n\t\t * otherwise set to SECONDARY_STATE_nohsk to start timing\n\t\t * to go active\n\t\t */\n\t\tif ((Secondary_state != SECONDARY_STATE_inact) &&\n\t\t    (Secondary_state != SECONDARY_STATE_noconn))\n\t\t\tSecondary_state = SECONDARY_STATE_nohsk;\n\n\t\t/* make sure our side is closed */\n\t\tclose_conn(conn);\n\t\tsec_sock = -1;\n\t\tfree_br(request);\n\t\treturn;\n\n\t} else if (rc != 0) {\n\t\t/* read or decode error */\n\t\tDBPRT((\"Failover: read or decode error\\n\"))\n\t\tSecondary_state = SECONDARY_STATE_noconn;\n\t\tclose_conn(conn);\n\t\tsec_sock = -1;\n\t\tfree_br(request);\n\t\treturn;\n\t}\n\n\treq_failover(request); /* will send reply, which will free request */\n\treturn;\n}\n\n/**\n * @brief\n * \t\tread_reg_reply - read and decode the reply for one of two special failover\n *\t\tmessages: from the primary for the register request; or from the\n *\t\tSecondary in reply to a take-over message.\n *\n * @par functionality:\n *\t\tNormally, the active Server uses the normal process_Dreply() 
to\n *\t\tread and decode the response to a message, even the normal failover\n *\t\tmessages such as handshake.  This function is used by the Secondary\n *\t\tserver only for the reply from the register message.  On receiving a\n *\t\tnon-error reply, we advance the secondary state to \"waiting for\n *\t\thandshake\".  If the Primary sends an explicit error (reject), the\n *\t\tSecondary just exits as it isn't wanted.  Likewise on a read error,\n *\t\tunless it is an EOF on the read.  In that case we assume the Primary\n *\t\treally isn't up, and change state to \"take over\", which causes a retry\n * \t\tof the connection.\n *\n *\t\tThe Primary Server will use this to read/process the reply from a\n *\t\ttakeover message since at that point the Primary is not fully\n *\t\tinitialized.\n *\n * @see\n * \t\ttakeover_from_secondary and be_secondary\n *\n * @param[in] sock - the socket from which to read.\n *\n * @return none\n */\nstatic void\nread_reg_reply(int sock)\n{\n\tstruct batch_reply fo_reply;\n\tint rc;\n\tunsigned long hid;\n\tchar *txtm;\n\tchar *txts;\n\n\tfo_reply.brp_code = 0;\n\tfo_reply.brp_choice = BATCH_REPLY_CHOICE_NULL;\n\tfo_reply.brp_un.brp_txt.brp_txtlen = 0;\n\tfo_reply.brp_un.brp_txt.brp_str = 0;\n\trc = DIS_reply_read(sock, &fo_reply, 0);\n\n\tif ((rc != 0) || (fo_reply.brp_code != 0)) {\n\t\tDBPRT((\"Failover: received invalid reply: non-zero code or EOF\\n\"))\n\t\tif ((rc == DIS_EOD) && (Secondary_state == SECONDARY_STATE_regsent)) {\n\t\t\t/* EOD/EOF on read of reply to register message,  */\n\t\t\t/* go ahead and take over as primary must be down */\n\t\t\t/* as we had successfully connected\t\t  */\n\t\t\tSecondary_state = SECONDARY_STATE_takeov;\n\t\t\treturn;\n\t\t}\n\n\t\tif (goidle_ack) {\n\t\t\ttxts = pbs_conf.pbs_secondary;\n\t\t\ttxtm = \"failed to acknowledge request to go idle\";\n\t\t} else {\n\t\t\ttxts = pbs_conf.pbs_primary;\n\t\t\ttxtm = \"did not accept secondary registration\";\n\t\t}\n\t\tsprintf(log_buffer, \"Active PBS 
Server at %s %s\", txts, txtm);\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_CRIT,\n\t\t\t  msg_daemonname, log_buffer);\n\t\texit(1); /* bad reply */\n\t}\n\n\tif (goidle_ack) {\n\t\t/* waiting for reply to \"go idle\" request to active secondary */\n\t\t/* ok response means the active as agreed to shut down\t  */\n\t\tgoidle_ack = 0; /* see takeover_from_secondary() \t  */\n\n\t} else {\n\n\t\tif ((fo_reply.brp_choice != BATCH_REPLY_CHOICE_Text) ||\n\t\t    (fo_reply.brp_un.brp_txt.brp_str == 0)) {\n\n\t\t\tif (fo_reply.brp_code == PBSE_UNKREQ) {\n\t\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_CRIT,\n\t\t\t\t\t  msg_daemonname, msg_regfailed);\n\t\t\t\tDBPRT((\"%s\\n\", msg_regfailed))\n\t\t\t\texit(1);\n\t\t\t}\n\t\t\tDBPRT((\"Failover: received invalid reply\\n\"))\n\t\t\t/* reset back to beginning */\n\t\t\tSecondary_state = SECONDARY_STATE_noconn;\n\t\t} else {\n\n\t\t\tchar fn[MAXPATHLEN + 1];\n\t\t\tint fd;\n\t\t\tsize_t s;\n\t\t\tconn_t *conn;\n\n\t\t\tconn = get_conn(sock);\n\t\t\tif (!conn) {\n\t\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t  LOG_CRIT, msg_daemonname,\n\t\t\t\t\t  \"unable to socket in connection table\");\n\t\t\t\texit(1);\n\t\t\t}\n\n\t\t\tDBPRT((\"Failover: received ok reply\\n\"))\n\t\t\thid = (unsigned long) atol(fo_reply.brp_un.brp_txt.brp_str);\n\t\t\thid = hid ^ pbs_get_hostid();\n\t\t\tfree(fo_reply.brp_un.brp_txt.brp_str);\n\t\t\tfo_reply.brp_choice = BATCH_REPLY_CHOICE_NULL;\n\t\t\t/* change function for reading socket from read_reg_reply to */\n\t\t\t/* read_fo_request, and then wait for the handshakes\t     */\n\n\t\t\tconn->cn_func = read_fo_request;\n\t\t\tSecondary_state = SECONDARY_STATE_handsk;\n\t\t\thd_time = time(0);\n\t\t\t(void) sprintf(fn, \"%s/license.fo\", path_priv);\n\t\t\t/* save Primary's host id */\n\t\t\tfd = open(fn, O_WRONLY | O_CREAT | O_TRUNC, 0600);\n\t\t\ts = sizeof(hid);\n\t\t\tif ((fd == -1) || (write(fd, (char *) &hid, s) != s)) 
{\n\t\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_CRIT,\n\t\t\t\t\t  msg_daemonname, \"unable to save Primary hostid\");\n\t\t\t\texit(1);\n\t\t\t}\n\t\t\tclose(fd);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\talm_handler - handler for SIGALRM for alt_conn()\n *\n * @see\n * \t\talt_conn\n *\n * @param[in] sig - signal number.\n *\n * @return\tnone\n */\nstatic void\nalm_handler(int sig)\n{\n\treturn;\n}\n\n/**\n * @brief\n * alt_conn - connect to primary/secondary with a timeout around the connect\n *\n * @see\n * \t\ttakeover_from_secondary and be_secondary\n *\n * @param[in] addr - host address of primary or secondary.\n * @param[in] sec - timeout in seconds.\n *\n * @return\tthe socket from which to read\n * @retval\t-1\t- error\n */\nstatic int\nalt_conn(pbs_net_t addr, unsigned int sec)\n{\n\tint sock;\n\tstruct sigaction act;\n\n\tact.sa_handler = alm_handler;\n\tsigemptyset(&act.sa_mask);\n\tact.sa_flags = 0;\n\tsigaction(SIGALRM, &act, 0);\n\talarm(sec);\n\tsock = client_to_svr(addr, pbs_server_port_dis, B_RESERVED);\n\talarm(0);\n\tif (sock < 0)\n\t\tsock = -1;\n\tact.sa_handler = SIG_DFL;\n\tsigaction(SIGALRM, &act, 0);\n\n\treturn (sock);\n}\n\n/**\n * @brief\n * \tFunction to check if a stonith script exists at PBS_HOME/server_priv/stonith.\n * \tIf it does, then invoke the script for execution.\n *\n * @param[in]\tnode - hostname of the node that needs to be brought down.\n *\n * @return\tError code\n * @retval\t 0 - stonith script executed successfully or script does not exist.\n * @retval      -1 - stonith script failed to bring down node.\n *\n */\nint\ncheck_and_invoke_stonith(char *node)\n{\n\tchar stonith_cmd[3 * MAXPATHLEN + 1] = {0};\n\tchar stonith_fl[MAXPATHLEN + 1] = {0};\n\tchar *p = NULL;\n\tchar out_err_fl[MAXPATHLEN + 1] = {0};\n\tchar *out_err_msg = NULL;\n\tint rc = 0;\n\tint fd = 0;\n\tstruct stat stbuf;\n\n\tif (node == NULL)\n\t\treturn -1;\n\n\tsnprintf(stonith_fl, sizeof(stonith_fl), \"%s/server_priv/stonith\", 
pbs_conf.pbs_home_path);\n\n\tif (stat(stonith_fl, &stbuf) != 0) {\n\t\tif (errno == ENOENT) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE, \"Skipping STONITH\");\n\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_INFO,\n\t\t\t\t  msg_daemonname, log_buffer);\n\t\t\treturn 0;\n\t\t}\n\t}\n\n\t/* create unique filename by appending pid */\n\tsnprintf(out_err_fl, sizeof(out_err_fl),\n\t\t \"%s/spool/stonith_out_err_fl_%s_%d\", pbs_conf.pbs_home_path, node, getpid());\n\n\t/* execute stonith script and redirect output to file */\n\tsnprintf(stonith_cmd, sizeof(stonith_cmd), \"%s %s > %s 2>&1\", stonith_fl, node, out_err_fl);\n\tsnprintf(log_buffer, LOG_BUF_SIZE,\n\t\t \"Executing STONITH script to bring down primary at %s\", pbs_conf.pbs_server_name);\n\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t  msg_daemonname, log_buffer);\n\n\trc = system(stonith_cmd);\n\n\tif (rc != 0) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE,\n\t\t\t \"STONITH script execution failed, script exit code: %d\", rc);\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER, LOG_CRIT,\n\t\t\t  msg_daemonname, log_buffer);\n\t} else {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE,\n\t\t\t \"STONITH script executed successfully\");\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_INFO,\n\t\t\t  msg_daemonname, log_buffer);\n\t}\n\n\t/* read the contents of out_err_fl and load to out_err_msg */\n\tif ((fd = open(out_err_fl, 0)) != -1) {\n\t\tif (fstat(fd, &stbuf) != -1) {\n\t\t\tout_err_msg = malloc(stbuf.st_size + 1);\n\t\t\tif (out_err_msg == NULL) {\n\t\t\t\tclose(fd);\n\t\t\t\tunlink(out_err_fl);\n\t\t\t\tlog_err(errno, __func__, \"malloc failed\");\n\t\t\t\treturn -1;\n\t\t\t}\n\n\t\t\tif (read(fd, out_err_msg, stbuf.st_size) == -1) {\n\t\t\t\tclose(fd);\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE,\n\t\t\t\t\t \"%s: read failed, errno: %d\", out_err_fl, errno);\n\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\tfree(out_err_msg);\n\t\t\t\treturn 
-1;\n\t\t\t}\n\n\t\t\t*(out_err_msg + stbuf.st_size) = '\\0';\n\t\t\tp = out_err_msg + strlen(out_err_msg) - 1;\n\n\t\t\twhile ((p >= out_err_msg) && (*p == '\\r' || *p == '\\n'))\n\t\t\t\t*p-- = '\\0'; /* suppress the last newline */\n\t\t}\n\t\tclose(fd);\n\t}\n\n\tif (out_err_msg) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE,\n\t\t\t \"%s, exit_code: %d.\", out_err_msg, rc);\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_INFO,\n\t\t\t  msg_daemonname, log_buffer);\n\t\tfree(out_err_msg);\n\t}\n\n\tunlink(out_err_fl);\n\n\tif (rc != 0)\n\t\treturn -1;\n\treturn rc;\n}\n\n/**\n * @brief\n *\t\tTake control back from an active Secondary Server\n *\n * @par Functionality:\n *\t\tAttempt to connect to the Secondary Server, timing out the connection\n *\t\trequest if it isn't accepted in a short time.  If the connection\n *\t\tcannot be made because the IP address is not available, or if the\n *\t\tconnection is made but the Secondary does not acknowledge the request,\n *\t\tthe Primary will print a message and exit (the process).\n *\n * @see\n * \t\tmain\n *\n * @return  - int: failover server role\n * @retval  0 - unable to contact the Secondary Server.\n * @retval  1 - Contacted Secondary and it acknowledged the takeover request.\n */\nint\ntakeover_from_secondary()\n{\n\tstruct batch_request *pfo_req;\n\tpbs_net_t addr;\n\tint sock;\n\tconn_t *conn;\n\n\t/*\n\t * need to do a limited initialization of the network tables,\n\t * connect to the secondary,\n\t * send a go away message,\n\t * wait for the reply which is very unusual for us,\n\t * wait a bit, and then clean up the network tables\n\t */\n\n\t(void) init_network(0);\n\t(void) init_network_add(-1, NULL, NULL);\n\n\t/* connect to active secondary if we can */\n\t/* if connected, send take-over request */\n\t/* wait for reply */\n\taddr = get_hostaddr(pbs_conf.pbs_secondary);\n\tif (addr == (pbs_net_t) 0) {\n\t\tfprintf(stderr, \"Cannot get network address of Secondary, 
aborting\\n\");\n\t\texit(1);\n\t}\n\tsock = alt_conn(addr, 4);\n\tif (sock < 0)\n\t\treturn 0;\n\n\tconn = add_conn(sock, ToServerDIS, addr, 0, NULL, read_reg_reply);\n\tif (conn == NULL) {\n\t\t/* path highly unlikely but theoretically possible */\n\n\t\tfprintf(stderr, \"Connection not found, abort takeover from secondary\\n\");\n\t\texit(1);\n\t}\n\n\tconn->cn_authen |= PBS_NET_CONN_AUTHENTICATED;\n\n\tif ((pfo_req = alloc_br(PBS_BATCH_FailOver)) == NULL) {\n\t\tfprintf(stderr, \"Unable to allocate request structure, abort takeover from secondary\\n\");\n\t\texit(1);\n\t}\n\tpfo_req->rq_ind.rq_failover = FAILOVER_PrimIsBack;\n\tif (put_failover(sock, pfo_req) != 0) {\n\t\tfprintf(stderr, \"Could not communicate with Secondary, aborting\\n\");\n\t\texit(1);\n\t}\n\tgoidle_ack = 1;\n\t(void) wait_request(600, NULL);\n\n\tif (goidle_ack == 1) {\n\t\t/* cannot seem to force active secondary to go idle */\n\t\tfprintf(stderr, \"Secondary not idling, aborting\\n\");\n\t\texit(2);\n\t}\n\tprintf(\"Have taken control from Secondary Server\\n\");\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tbe_secondary - detect if primary is up\n *\t\tif primary up - wait for it to go down, then take over\n *\t\tif primary down and delay is not -1, wait for primary to come up\n *\t\tif primary down and delay is -1, then take over now\n *\n * @see\n *\t\tmain\n *\n * @param[in] delay - time for which secondary to wait for primary to come up.\n *\n *\treturns: 1 - if should stay inactive secondary\n *\t\t 0 - if should take over as active\n */\nint\nbe_secondary(time_t delay)\n{\n\tint loop = 0;\n\tpbs_net_t primaddr;\n\tstruct batch_request *pfo_req;\n\tstruct stat sb;\n\tint sbloop = 0;\n\ttime_t sbtime = 0;\n\tFILE *secact;\n\ttime_t mytime = 0;\n\ttime_t takeov_on_nocontact;\n\tconn_t *conn;\n\tint rc = 0;\n\n\t/*\n\t * do limited initialization of the network tables\n\t * send register request to Primary\n\t * loop waiting for handshake\n\t */\n\n\t(void) init_network(0);\n\t(void) 
init_network_add(-1, NULL, NULL);\n\thd_time = time(0);\n\n\t/* connect to primary */\n\n\tprimaddr = get_hostaddr(pbs_conf.pbs_primary);\n\tif (primaddr == (pbs_net_t) 0) {\n\t\tfprintf(stderr, \"pbs_server: unable to obtain Primary Server's network address, aborting.\\n\");\n\t\texit(1);\n\t}\n\n\tif (secondary_delay == (time_t) -1) {\n\t\tsecondary_delay = 0;\n\t\tprintf(\"pbs_server: secondary directed to start up as active\\n\");\n\t} else {\n\t\tsprintf(log_buffer, \"pbs_server: coming up as Secondary, Primary is %s\", pbs_conf.pbs_primary);\n\t\tprintf(\"%s\\n\", log_buffer);\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t  msg_daemonname, log_buffer);\n\t}\n\ttakeov_on_nocontact = hd_time + (60 * 5) + secondary_delay;\n\n#ifndef DEBUG\n\tpbs_close_stdfiles(); /* set stdin/stdout/stderr to /dev/null */\n#endif\t\t\t      /* DEBUG */\n\n\t/*\n\t * Secondary Server State machine\n\t */\n\n\twhile (1) {\n\t\ttime_now = time(0);\n\t\t++loop;\n\n\t\tDBPRT((\"Failover: Secondary_state is %d\\n\", Secondary_state));\n\t\tswitch (Secondary_state) {\n\t\t\tcase SECONDARY_STATE_noconn:\n\t\t\tcase SECONDARY_STATE_idle:\n\n\t\t\t\t/* for both noconn and idle try to reconnect */\n\t\t\t\tsbloop = 0;\n\t\t\t\tsbtime = 0;\n\t\t\t\tmytime = 0;\n\n\t\t\t\tif (sec_sock >= 0)\n\t\t\t\t\tclose_conn(sec_sock);\n\n\t\t\t\tsec_sock = client_to_svr(primaddr, pbs_server_port_dis, B_RESERVED);\n\t\t\t\tif (sec_sock < 0) {\n\n\t\t\t\t\t/* failed to reconnect to primary */\n\t\t\t\t\t/* if _idle, just try again later */\n\t\t\t\t\t/* else if time is up, go active  */\n\n\t\t\t\t\tif ((Secondary_state == SECONDARY_STATE_noconn) && ((delay == (time_t) -1) || (time_now > takeov_on_nocontact))) {\n\t\t\t\t\t\t/* can take over role of active server */\n\t\t\t\t\t\tsec_sock = -1;\n\t\t\t\t\t\tSecondary_state = SECONDARY_STATE_takeov;\n\t\t\t\t\t} else {\n\t\t\t\t\t\t/* wait for primary to come up and try again */\n\t\t\t\t\t\tsec_sock = 
-1;\n\t\t\t\t\t\tsleep(10);\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\t/* made contact with primary, set to send registration */\n\t\t\t\t\tSecondary_state = SECONDARY_STATE_conn;\n\t\t\t\t\tconn = add_conn(sec_sock, ToServerDIS, primaddr, 0, NULL,\n\t\t\t\t\t\t\tread_reg_reply);\n\t\t\t\t\tif (conn) {\n\t\t\t\t\t\tconn->cn_authen |= PBS_NET_CONN_AUTHENTICATED;\n\t\t\t\t\t\tDBPRT((\"Failover: reconnected to primary\\n\"))\n\t\t\t\t\t} else {\n\t\t\t\t\t\t/* a possible but unlikely case */\n\t\t\t\t\t\tlog_err(-1, \"be_secondary\",\n\t\t\t\t\t\t\t\"Connection not found, close socket free context\");\n\t\t\t\t\t\t(void) CS_close_socket(sec_sock);\n\t\t\t\t\t\tclose(sec_sock);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase SECONDARY_STATE_conn:\n\n\t\t\t\t/* Primary is up, so send register request and wait on reply */\n\t\t\t\t/* state changed when reply processed, see read_reg_reply     */\n\n\t\t\t\tif ((pfo_req = alloc_br(PBS_BATCH_FailOver)) == NULL) {\n\t\t\t\t\tclose_conn(sec_sock);\n\t\t\t\t\tSecondary_state = SECONDARY_STATE_noconn;\n\t\t\t\t\tsec_sock = -1;\n\t\t\t\t} else {\n\t\t\t\t\tpfo_req->rq_ind.rq_failover = FAILOVER_Register;\n\t\t\t\t\tif (put_failover(sec_sock, pfo_req) == 0) {\n\t\t\t\t\t\tSecondary_state = SECONDARY_STATE_regsent;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tclose_conn(sec_sock);\n\t\t\t\t\t\tSecondary_state = SECONDARY_STATE_noconn;\n\t\t\t\t\t\tsec_sock = -1;\n\t\t\t\t\t}\n\t\t\t\t\tfree_br(pfo_req);\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase SECONDARY_STATE_regsent:\n\t\t\t\t/* waiting on reply from register, do nothing ... 
*/\n\t\t\t\t/* state will change in read_reg_reply() \t  */\n\t\t\t\tbreak;\n\n\t\t\tcase SECONDARY_STATE_handsk:\n\t\t\t\t/* waiting for handshake from the primary\t*/\n\t\t\t\t/* check to see if it has been too long\t\t*/\n\t\t\t\tif (time_now >= (hd_time + 2 * HANDSHAKE_TIME)) {\n\t\t\t\t\t/* haven't received handshake recently */\n\t\t\t\t\tSecondary_state = SECONDARY_STATE_nohsk;\n\t\t\t\t\tsprintf(log_buffer, \"Secondary has not received handshake in %ld seconds\", time_now - hd_time);\n\t\t\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t\t  LOG_WARNING, msg_daemonname, log_buffer);\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase SECONDARY_STATE_nohsk:\n\t\t\t\t/* have not received a handshake or connection closed */\n\t\t\t\t/* check time stamp on path_svrlive */\n\t\t\t\tif (stat(path_svrlive, &sb) == 0) {\n\n\t\t\t\t\t/* able to stat the server database */\n\n\t\t\t\t\tDBPRT((\"Failover: my: %ld stat: %ld dly: %ld\\n\",\n\t\t\t\t\t       time_now, sb.st_mtime, secondary_delay))\n\n\t\t\t\t\tif (sb.st_mtime > sbtime) {\n\t\t\t\t\t\t/* mtime appears to be changing...           
*/\n\t\t\t\t\t\t/* this happens at least the first time here */\n\t\t\t\t\t\tsbtime = sb.st_mtime;\n\t\t\t\t\t\tmytime = time_now;\n\n\t\t\t\t\t\tif ((sbloop++ > 4) && (sec_sock == -1)) {\n\t\t\t\t\t\t\t/* files still being touched, but    */\n\t\t\t\t\t\t\t/* no handshake,  try to reconnect   */\n\t\t\t\t\t\t\tDBPRT((\"Failover: going to noconn, still being touched\\n\"))\n\t\t\t\t\t\t\tSecondary_state = SECONDARY_STATE_noconn;\n\t\t\t\t\t\t}\n\n\t\t\t\t\t} else if (time_now > (mytime + secondary_delay)) {\n\t\t\t\t\t\t/* mtime hasn't changed in too long, take over */\n\t\t\t\t\t\tSecondary_state = SECONDARY_STATE_takeov;\n\t\t\t\t\t}\n\n\t\t\t\t} else if (time_now > (hd_time + secondary_delay)) {\n\t\t\t\t\t/*\n\t\t\t\t\t * couldn't stat the directory in the last\n\t\t\t\t\t * \"secondary_delay\" seconds, Secondary must be\n\t\t\t\t\t * the one off the network, try to reconnect\n\t\t\t\t\t */\n\t\t\t\t\tSecondary_state = SECONDARY_STATE_noconn;\n\t\t\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t\t  LOG_CRIT, msg_daemonname,\n\t\t\t\t\t\t  \"Secondary unable to stat server live file\");\n\t\t\t\t\tDBPRT((\"Failover: going to noconn, cannot stat\\n\"))\n\n\t\t\t\t} else if ((sec_sock == -1) && ((loop % 3) == 0)) {\n\t\t\t\t\t/* no connection and cannot stat, try   */\n\t\t\t\t\t/* once in a while to quickly reconnect */\n\t\t\t\t\tif ((sec_sock = alt_conn(primaddr, 8)) >= 0) {\n\t\t\t\t\t\tSecondary_state = SECONDARY_STATE_conn;\n\t\t\t\t\t\tconn = add_conn(sec_sock, ToServerDIS, primaddr, 0, NULL,\n\t\t\t\t\t\t\t\tread_reg_reply);\n\t\t\t\t\t\tif (conn) {\n\t\t\t\t\t\t\tconn->cn_authen |= PBS_NET_CONN_AUTHENTICATED;\n\t\t\t\t\t\t\tDBPRT((\"Failover: reconnected to primary\\n\"))\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t/* a possible but unlikely case */\n\t\t\t\t\t\t\tlog_err(-1, \"be_secondary\",\n\t\t\t\t\t\t\t\t\"Connection not found, close socket free context\");\n\t\t\t\t\t\t\t(void) 
CS_close_socket(sec_sock);\n\t\t\t\t\t\t\tclose(sec_sock);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase SECONDARY_STATE_shutd:\n\t\t\t\texit(0); /* told to shutdown */\n\n\t\t\tcase SECONDARY_STATE_takeov:\n\t\t\t\t/* check with Primary one last time before taking over */\n\t\t\t\tif (sec_sock != -1) {\n\t\t\t\t\tclose_conn(sec_sock);\n\t\t\t\t\tsec_sock = -1;\n\t\t\t\t}\n\t\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t\t\t  msg_daemonname, \"Secondary attempting to connect with Primary one last time before taking over\");\n\t\t\t\tif ((sec_sock = alt_conn(primaddr, 8)) >= 0) {\n\t\t\t\t\tSecondary_state = SECONDARY_STATE_conn;\n\t\t\t\t\tconn = add_conn(sec_sock, ToServerDIS, primaddr, 0, NULL,\n\t\t\t\t\t\t\tread_reg_reply);\n\t\t\t\t\tif (conn) {\n\t\t\t\t\t\tconn->cn_authen |= PBS_NET_CONN_AUTHENTICATED;\n\t\t\t\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t\t\t  LOG_NOTICE, msg_daemonname,\n\t\t\t\t\t\t\t  \"Secondary reconnected with Primary\");\n\t\t\t\t\t} else {\n\t\t\t\t\t\t/* a possible but unlikely case */\n\t\t\t\t\t\tlog_err(-1, \"be_secondary\",\n\t\t\t\t\t\t\t\"Connection not found, close socket free context\");\n\t\t\t\t\t\t(void) CS_close_socket(sec_sock);\n\t\t\t\t\t\tclose(sec_sock);\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\t/* Invoke stonith, to make sure primary is down */\n\t\t\t\trc = check_and_invoke_stonith(pbs_conf.pbs_primary);\n\t\t\t\tif (rc) {\n\t\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE,\n\t\t\t\t\t\t \"Secondary will attempt taking over again\");\n\t\t\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_INFO,\n\t\t\t\t\t\t  msg_daemonname, log_buffer);\n\n\t\t\t\t\tsleep(10);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\t/* take over from primary */\n\t\t\t\tpbs_failover_active = 1;\n\t\t\t\tsecact = fopen(path_secondaryact, \"w\");\n\t\t\t\tif (secact != NULL) {\n\t\t\t\t\t/* create file that says secondary is up */\n\t\t\t\t\tfprintf(secact, 
\"%s\\n\", server_host);\n\t\t\t\t\tfclose(secact);\n\t\t\t\t\tDBPRT((\"Secondary server creating %s\\n\", path_secondaryact))\n\t\t\t\t}\n\t\t\t\treturn (0);\n\n\t\t\tcase SECONDARY_STATE_inact:\n\t\t\t\t/*\n\t\t\t\t * first wait for Primary to close connection indicating\n\t\t\t\t * that it is going down, then wait a safety few more seconds\n\t\t\t\t */\n\t\t\t\t(void) wait_request(600, NULL);\n\t\t\t\tsleep(10);\n\t\t\t\tlog_event(PBSEVENT_DEBUG, LOG_DEBUG, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t  msg_daemonname,\n\t\t\t\t\t  \"Secondary completed waiting for Primary to go down\");\n\t\t\t\tclose_conn(sec_sock);\n\t\t\t\tsec_sock = -1;\n\t\t\t\t/* change state to indicate Secondary is idle */\n\t\t\t\tSecondary_state = SECONDARY_STATE_idle;\n\t\t\t\t/* will recycle back to the top */\n\t\t\t\tbreak;\n\t\t}\n\n\t\tif (wait_request(1, NULL) == -1) {\n\t\t\tSecondary_state = SECONDARY_STATE_noconn;\n\t\t\tclose_conn(sec_sock);\n\t\t\tsec_sock = -1;\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/server/geteusernam.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @brief\n * geteusernam.c\t- Functions relating to effective user name\n *\n *\tIncluded public functions are:\n *\n *\tdetermine_euser\n *\tdetermine_egroup\n *\tset_objexid\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/types.h>\n#include <sys/param.h>\n#include <grp.h>\n#include <pwd.h>\n#include \"pbs_ifl.h\"\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"server.h\"\n#include \"pbs_error.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n\n/* External Data */\n\n/**\n * @brief\n *  \tdetermine_euser - determine the effective user name\n *\n *  \tDetermine with which \"user name\" this object is to be associated\n *  \t 1. from user_list use name with host name that matches this host\n *  \t 2. from user_list use name with no host specification\n *  \t 3. 
user owner name\n *\n * @see\n *\t\tset_objexid\n *\n * @param[in]\tobjtype - object type\n * @param[in]\tpobj -  ptr to object to link in\n * @param[in]\tpattr - pointer to User_List attribute\n * @param[in,out]\tisowner - Return: 1  if object's owner chosen as euser\n *\n * @return pointer to the user name\n *\n */\n\nstatic char *\ndetermine_euser(void *pobj, int objtype, attribute *pattr, int *isowner)\n{\n\tchar *hit = 0;\n\tint i;\n\tstruct array_strings *parst;\n\tchar *pn;\n\tchar *ptr;\n\tattribute *objattr;\n\tstatic char username[PBS_MAXUSER + 1];\n\n\tmemset(username, '\\0', sizeof(username));\n\n\t/* set index and pointers based on object type */\n\tif (objtype == JOB_OBJECT)\n\t\tobjattr = get_jattr((job *) pobj, JOB_ATR_job_owner);\n\telse\n\t\tobjattr = get_rattr((resc_resv *) pobj, RESV_ATR_resv_owner);\n\n\t/* search the User_List attribute */\n\n\tif (is_attr_set(pattr) &&\n\t    (parst = pattr->at_val.at_arst)) {\n\t\t*isowner = 0;\n\t\tfor (i = 0; i < parst->as_usedptr; i++) {\n\t\t\tpn = parst->as_string[i];\n\t\t\tptr = strchr(pn, '@');\n\t\t\tif (ptr) { /* if has host specification, check for the complete host name, if host name is incorrect, hit is not set */\n\t\t\t\tif (!strcasecmp(server_host, ptr + 1)) {\n\t\t\t\t\thit = pn; /* option 1. */\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t} else {\t  /* wildcard host (null) */\n\t\t\t\thit = pn; /* option 2. */\n\t\t\t}\n\t\t}\n\t}\n\tif (!is_attr_set(pattr)) { /* if no user is specified, default to the object owner ( 3.) */\n\t\thit = get_attr_str(objattr);\n\t\t*isowner = 1;\n\t}\n\n\t/* copy user name into return buffer and strip off host name only when hit is set\n\t * i.e. 
when either no user is specified (in this case, default the job to the object owner)\n\t * or a user is provided with the correct host name.\n\t * If not set, the job cannot be run since there is no user to run it as */\n\tif (hit) {\n\t\tcvrt_fqn_to_name(hit, username);\n\t}\n\n\treturn (username);\n}\n\n/**\n * @brief\n * \t\tdetermine_egroup - determine the effective group name\n *\n * \t\tDetermine (for this host) with which \"group name\" this object is to be\n * \t\tassociated\n *\n * @par\tFunctionality:\n *  \t1. from group_list use name with host name that matches this host\n *  \t2. from group_list use name with no host specification\n *  \t3. NULL, not specified\n *\n *\t@see\n *\t\tset_objexid\n *\n * @param[in]\tobjtype - object type\n * @param[in]\tpobj -  ptr to object to link in\n * @param[in]\tpattr - pointer to group_list attribute\n *\n * Returns pointer to the group name or a NULL pointer\n */\n\nstatic char *\ndetermine_egroup(void *pobj, int objtype, attribute *pattr)\n{\n\tchar *hit = 0;\n\tint i;\n\tstruct array_strings *parst;\n\tchar *pn;\n\tchar *ptr;\n\tstatic char groupname[PBS_MAXUSER + 1];\n\n\t/* search the group-list attribute */\n\n\tif ((is_attr_set(pattr)) &&\n\t    (parst = pattr->at_val.at_arst)) {\n\t\tfor (i = 0; i < parst->as_usedptr; i++) {\n\t\t\tpn = parst->as_string[i];\n\t\t\tptr = strchr(pn, '@');\n\t\t\tif (ptr) { /* has host specification */\n\t\t\t\tif (!strncasecmp(server_host, ptr + 1, strlen(ptr + 1))) {\n\t\t\t\t\thit = pn; /* option 1. */\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t} else {\t  /* wildcard host (null) */\n\t\t\t\thit = pn; /* option 2. */\n\t\t\t}\n\t\t}\n\t}\n\tif (!hit) /* nothing specified, return null */\n\t\treturn NULL;\n\n\t/* copy group name into return buffer, strip host name */\n\tcvrt_fqn_to_name(hit, groupname);\n\treturn (groupname);\n}\n\n/**\n * @brief\n * \t\tset_objexid - validate and set the object's effective/execution username\n *\t\tand its effective/execution group name attributes.  
For jobs, these\n *\t\tare JOB_ATR_euser and JOB_ATR_egroup attributes of the job structure.\n *\t\tFor reservations, they are RESV_ATR_euser and RESV_ATR_egroup of the\n *\t\tresc_resv structure.\n *\n * @par\tFunctionality:\n *\t\t1.  Determine the effective/execution user_name.\n *\t\t1a. Get the password entry for that username.\n *\t\t1b. Uid of 0 (superuser) is not allowed, might cause root-rot\n *\t\t1c. Determine if the object's owner name is permitted to map\n *\t    to the effective/execution username\n *\t\t1d. Set the object's effective/execution user_name to the name\n *\t    that got determined in the above steps\n *\t\t2.  Determine the effective/execution group name.\n *\t\t2a. Determine if the effective/execution user_name belongs to\n *\t    this effective/execution group\n *\t\t2b. Set JOB_ATR_egroup to the execution group name.\n *\t\t2c. Set the object's effective/execution group_name to the name\n *\t    that got determined in the above step\n *\n * @see\n *\t\tmodify_job_attr, req_resvSub and svr_chkque\n *\n * @param[in]\tobjtype - object type\n * @param[in]\tpobj -  ptr to object to link in\n * @param[in]\tpattr - pointer to attribute structure\n * \t\t\t\t\t\twhich contains group_List attributes and User_List attributes\n *\n * @returns\t int\n * @retval\t 0\t- if everything got determined and set appropriately else,\n * @retval\tnon-zero\t- error number if something went awry\n */\n\nint\nset_objexid(void *pobj, int objtype, attribute *attrry)\n{\n\tint addflags = 0;\n\tint isowner;\n\tattribute *pattr;\n\tchar *puser;\n\tchar *pgrpn;\n\tchar *owner;\n\tint idx_ul, idx_gl;\n\tint idx_owner, idx_euser, idx_egroup;\n\tint idx_acct;\n\tint bad_euser, bad_egrp;\n\tattribute *objattrs;\n\tattribute_def *obj_attr_def;\n\tattribute *paclRoot; /*future: aclRoot resv != aclRoot job*/\n\tchar **pmem;\n\tstruct group *gpent;\n\tstruct passwd *pwent;\n\tchar gname[PBS_MAXGRPN + 1];\n\n\t/* determine index values and pointers based on object type 
*/\n\tif (objtype == JOB_OBJECT) {\n\t\tidx_ul = (int) JOB_ATR_userlst;\n\t\tidx_gl = (int) JOB_ATR_grouplst;\n\t\tidx_owner = (int) JOB_ATR_job_owner;\n\t\tidx_euser = (int) JOB_ATR_euser;\n\t\tidx_egroup = (int) JOB_ATR_egroup;\n\t\tidx_acct = (int) JOB_ATR_account;\n\t\tobj_attr_def = job_attr_def;\n\t\tobjattrs = ((job *) pobj)->ji_wattr;\n\t\towner = get_jattr_str(pobj, idx_owner);\n\t\tpaclRoot = get_sattr(SVR_ATR_AclRoot);\n\t\tbad_euser = PBSE_BADUSER;\n\t\tbad_egrp = PBSE_BADGRP;\n\t} else {\n\t\tidx_ul = (int) RESV_ATR_userlst;\n\t\tidx_gl = (int) RESV_ATR_grouplst;\n\t\tidx_owner = (int) RESV_ATR_resv_owner;\n\t\tidx_euser = (int) RESV_ATR_euser;\n\t\tidx_egroup = (int) RESV_ATR_egroup;\n\t\tidx_acct = (int) RESV_ATR_account;\n\t\tobj_attr_def = resv_attr_def;\n\t\tobjattrs = ((resc_resv *) pobj)->ri_wattr;\n\t\towner = get_rattr_str(pobj, idx_owner);\n\t\tpaclRoot = get_sattr(SVR_ATR_AclRoot);\n\t\tbad_euser = PBSE_R_UID;\n\t\tbad_egrp = PBSE_R_GID;\n\t}\n\n\t/* if passed in \"User_List\" attribute is set use it - this may\n\t * be a newly modified one.\n\t * if not set, fall back to the object's User_List, which may\n\t * actually be the same as what is passed into this function\n\t */\n\n\tif ((attrry + idx_ul)->at_flags & ATR_VFLAG_SET)\n\t\tpattr = attrry + idx_ul;\n\telse\n\t\tpattr = &objattrs[idx_ul];\n\n\tif ((puser = determine_euser(pobj, objtype, pattr, &isowner)) == NULL)\n\t\treturn (bad_euser);\n\n\tpwent = getpwnam(puser);\n\tif (pwent == NULL) {\n\t\tif (!get_sattr_long(SVR_ATR_FlatUID))\n\t\t\treturn (bad_euser);\n\t} else if (pwent->pw_uid == 0) {\n\t\tif (!is_attr_set(paclRoot))\n\t\t\treturn (bad_euser); /* root not allowed */\n\t\tif (acl_check(paclRoot, owner, ACL_User) == 0)\n\t\t\treturn (bad_euser); /* root not allowed */\n\t}\n\n\tif (!isowner || !get_sattr_long(SVR_ATR_FlatUID)) {\n\t\tif (site_check_user_map(pobj, objtype, puser) == -1)\n\t\t\treturn (bad_euser);\n\t}\n\n\tpattr = 
&objattrs[idx_euser];\n\tfree_attr(obj_attr_def, pattr, idx_euser);\n\tset_attr_generic(pattr, &obj_attr_def[idx_euser], puser, NULL, INTERNAL);\n\n\tif (pwent != NULL) {\n\t\t/* if account (qsub -A) is not specified, set to empty string */\n\n\t\tpattr = &objattrs[idx_acct];\n\t\tif (!is_attr_set(pattr))\n\t\t\tset_attr_generic(pattr, &obj_attr_def[idx_acct], \"\\0\", NULL, INTERNAL);\n\n\t\t/*\n\t\t * now figure out (for this host) the effective/execute \"group name\"\n\t\t * for the object.\n\t\t * PBS requires that each group have an entry in the group file,\n\t\t * see the admin guide for the reason why...\n\t\t *\n\t\t * use the passed group_list if set, may be a newly modified one.\n\t\t * if it isn't set, use the object's group_list, which may in fact\n\t\t * be same as what was passed\n\t\t */\n\n\t\tif (is_attr_set(attrry + idx_gl))\n\t\t\tpattr = attrry + idx_gl;\n\t\telse\n\t\t\tpattr = &objattrs[idx_gl];\n\t\tif ((pgrpn = determine_egroup(pobj, objtype, pattr)) != NULL) {\n\n\t\t\t/* user specified a group, group must exist and either\t   */\n\t\t\t/* must be user's primary group\t or the user must be in it */\n\n\t\t\tgpent = getgrnam(pgrpn);\n\t\t\tif (gpent == NULL) {\n\t\t\t\tif (pwent != NULL)\t   /* no such group is allowed */\n\t\t\t\t\treturn (bad_egrp); /* only when no user (flatuid)*/\n\n\t\t\t} else if (gpent->gr_gid != pwent->pw_gid) { /* not primary */\n\t\t\t\tpmem = gpent->gr_mem;\n\t\t\t\twhile (*pmem) {\n\t\t\t\t\tif (!strcmp(puser, *pmem))\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t++pmem;\n\t\t\t\t}\n\t\t\t\tif (*pmem == 0)\n\t\t\t\t\treturn (bad_egrp); /* user not in group */\n\t\t\t}\n\t\t\taddflags = ATR_VFLAG_SET;\n\n\t\t} else {\n\n\t\t\t/* Use user login group */\n\t\t\tgpent = getgrgid(pwent->pw_gid);\n\t\t\tif (gpent != NULL) {\n\t\t\t\tpgrpn = gpent->gr_name; /* use group name */\n\t\t\t} else {\n\t\t\t\t(void) sprintf(gname, \"%d\", pwent->pw_gid);\n\t\t\t\tpgrpn = gname; /* turn gid into string */\n\t\t\t}\n\n\t\t\t/*\n\t\t\t * 
setting the DEFAULT flag is a \"kludgy\" way to keep MOM from\n\t\t\t * having to do an unneeded look up of the group file.\n\t\t\t * We needed to have JOB_ATR_egroup set for the server but\n\t\t\t * MOM only wants it if it is not the login group, so there!\n\t\t\t */\n\t\t\taddflags = ATR_VFLAG_SET | ATR_VFLAG_DEFLT;\n\t\t}\n\t} else {\n\n\t\t/*\n\t\t * null password entry,\n\t\t * set group to \"default\" and set default for Mom to use login group\n\t\t */\n\n\t\tpgrpn = \"-default-\";\n\t\taddflags = ATR_VFLAG_SET | ATR_VFLAG_DEFLT;\n\t}\n\n\tpattr = attrry + idx_egroup;\n\tfree_attr(obj_attr_def, pattr, idx_egroup);\n\n\tif (addflags != 0) {\n\t\tset_attr_generic(pattr, &obj_attr_def[idx_egroup], pgrpn, NULL, INTERNAL);\n\t\tpattr->at_flags |= addflags;\n\t}\n\n\treturn (0);\n}\n"
  },
  {
    "path": "src/server/hook_func.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    hook_func.c\n *\n * @brief\n * hook_func.c - contains functions relating to server hooks\n *\n * Functions included are:\n *\n * hook_action_tid_set\n * hook_action_tid_get\n * hook_track_save\n * send_rescdef\n * hook_track_recov\n * mgr_hook_create\n * mgr_hook_delete\n * py_compile_and_run\n * mgr_hook_import\n * mgr_hook_export\n * copy_hook\n * mgr_hook_set\n * mgr_hook_unset\n * status_hook\n * req_stat_hook\n * set_exec_time\n * set_hold_types\n * set_attribute\n * set_job_varlist\n * set_job_reslist\n * attribute_jobmap_init\n * attribute_jobmap_clear\n * attribute_jobmap_restore\n * do_runjob_accept_actions\n * do_runjob_reject_actions\n * write_hook_reject_debug_output_and_close\n * write_hook_accept_debug_output_and_close\n * process_hooks\n * recreate_request\n * add_mom_hook_action\n * delete_mom_hook_action\n * find_mom_hook_action\n * add_pending_mom_hook_action\n * delete_pending_mom_hook_action\n * has_pending_mom_action_delete\n * sync_mom_hookfiles_count\n * collapse_hook_tr\n * mk_deferred_hook_info\n * post_sendhookTPP\n * check_add_hook_mcast_info\n * del_deferred_hook_cmds\n * sync_mom_hookfilesTPP\n * mc_sync_mom_hookfiles\n * add_pending_mom_allhooks_action\n * next_sync_mom_hookfiles\n * mark_mom_hooks_seen\n * mom_hooks_seen_count\n * uc_delete_mom_hooks\n * get_hook_rescdef_checksum\n 
*/\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <unistd.h>\n#include <sys/param.h>\n#include <dirent.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <sys/wait.h>\n#include <ctype.h>\n#include <errno.h>\n#include <assert.h>\n\n#include <memory.h>\n#include <fcntl.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include \"pbs_ifl.h\"\n#include \"libpbs.h\"\n#include \"list_link.h\"\n#include \"work_task.h\"\n#include \"attribute.h\"\n#include \"batch_request.h\"\n#include \"hook.h\"\n#include \"log.h\"\n#include \"server_limits.h\"\n#include \"attribute.h\"\n#include \"credential.h\"\n#include \"batch_request.h\"\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"queue.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"placementsets.h\"\n#include <pbs_python.h> /* for python interpreter */\n#include <signal.h>\n#include \"hook_func.h\"\n#include \"net_connect.h\"\n#include \"sched_cmds.h\"\n#include \"tpp.h\"\n#include \"reservation.h\"\n#include \"cmds.h\"\n#include \"server.h\"\n#include \"pbs_sched.h\"\n#include \"dis.h\"\n#include \"acct.h\"\n\n/* External functions */\nextern void disable_svr_prov();\nextern void set_srv_prov_attributes();\nextern int should_retry_route(int);\n\n/* Local Private Functions */\n\n/* Global Data items */\nint do_sync_mom_hookfiles = 1;\nint sync_mom_hookfiles_replies_pending = 0;\npbs_list_head vnode_attr_list;\npbs_list_head resv_attr_list;\n\n/* Local Data */\nstatic char merr[] = \"malloc failed\";\nstatic int mom_hooks_seen = 0;\t\t    /* # of mom hooks seen */\nstatic long long int hook_action_tid = 0LL; /* transaction id of the next */\nstatic int g_hook_replies_expected = 0;\t    /* used only in TPP mode */\nstatic int g_hook_replies_recvd = 0;\t    /* used only in TPP mode */\nstatic time_t g_sync_hook_time = 0;\t    /* time when mom hook files were last sent */\nstatic long long int g_sync_hook_tid = 0LL; /* identifies the latest 
group of hook updates to send out */\nstatic unsigned long hook_rescdef_checksum = 0;\n\n/* mom hook action(s) to keep track */\n\n#define GROW_MOMHOOK_ARRAY_AMT 10\n#define CONN_RETRY 3\n#define IS_SPECIAL_CHAR(c) ((c == '\"') || (c == '\\'') || (c == ',') || (c == '\\\\'))\n#define VALID_HOOK_CONFIG_SUFFIX \".json .py .txt .xml .ini\"\n\n#define SYNC_MOM_HOOKFILES_TIMEOUT_TPP 120 /* 2 minutes */\n#define SYNC_MOM_HOOKFILES_TIMEOUT 900\t   /* 15 minutes */\n\nextern char *msg_daemonname;\nextern char *path_priv;\nextern char *path_hooks;\nextern char *path_hooks_workdir;\nextern char *path_hooks_tracking;\n\nextern char path_log[];\nextern char *log_file;\nextern pbs_net_t pbs_server_addr;\n\nextern char *msg_badexit;\nextern char *msg_err_malloc;\nextern char *msg_manager;\nextern char *msg_man_cre;\nextern char *msg_man_del;\nextern char *msg_man_set;\nextern char *msg_man_uns;\nextern char *msg_noattr;\nextern char *msg_internal;\nextern char *msg_norelytomom;\nextern pbs_list_head svr_allhooks;\nextern pbs_list_head svr_queuejob_hooks;\nextern pbs_list_head svr_postqueuejob_hooks;\nextern pbs_list_head svr_modifyjob_hooks;\nextern pbs_list_head svr_resvsub_hooks;\nextern pbs_list_head svr_modifyresv_hooks;\nextern pbs_list_head svr_movejob_hooks;\nextern pbs_list_head svr_runjob_hooks;\nextern pbs_list_head svr_jobobit_hooks;\nextern pbs_list_head svr_management_hooks;\nextern pbs_list_head svr_modifyvnode_hooks;\nextern pbs_list_head svr_periodic_hooks;\nextern pbs_list_head svr_provision_hooks;\nextern pbs_list_head svr_resv_confirm_hooks;\nextern pbs_list_head svr_resv_begin_hooks;\nextern pbs_list_head svr_resv_end_hooks;\nextern pbs_list_head svr_execjob_begin_hooks;\nextern pbs_list_head svr_execjob_prologue_hooks;\nextern pbs_list_head svr_execjob_epilogue_hooks;\nextern pbs_list_head svr_execjob_preterm_hooks;\nextern pbs_list_head svr_execjob_end_hooks;\nextern pbs_list_head svr_exechost_periodic_hooks;\nextern pbs_list_head 
svr_exechost_startup_hooks;\nextern pbs_list_head svr_execjob_launch_hooks;\nextern pbs_list_head svr_execjob_attach_hooks;\nextern pbs_list_head svr_execjob_resize_hooks;\nextern pbs_list_head svr_execjob_abort_hooks;\nextern pbs_list_head svr_execjob_postsuspend_hooks;\nextern pbs_list_head svr_execjob_preresume_hooks;\nextern time_t time_now;\nextern struct python_interpreter_data svr_interp_data;\nextern pbs_list_head task_list_event;\nextern struct work_task *add_mom_deferred_list(int stream, mominfo_t *minfo, void (*func)(struct work_task *), char *msgid, void *parm1, void *parm2);\n\nextern char *path_rescdef;\nextern char *path_hooks_rescdef;\n\nextern int comp_resc_gt;\nextern int comp_resc_lt;\nextern int comp_resc_nc;\n\nstruct def_hk_cmd_info {\n\tint index;\n\tint event;\n\tlong long int tid; /* transaction id */\n};\n\n/* structures required for TPP mcast communication\n * of hooks to moms\n */\ntypedef struct {\n\tint mconn;\n\tchar *msgid;\n\tchar *hookname;\n\tint action;\n} hook_mcast_info_t;\n\n/* global array of mcast information structs */\nhook_mcast_info_t *g_hook_mcast_array = NULL;\nint g_hook_mcast_array_len = 0;\nextern int get_msgid(char **id);\n\n/**\n * @brief\n *\t\thook_action_tid_set\t- Sets the value of the global 'hook_action_tid' variable to the given\n *\t\t'newval'.\n * @see\n *\t\tmain, hook_track_recov and mc_sync_mom_hookfiles.\n *\n * @param[in]\tnewval - the new value.\n *\n * @return void\n */\nvoid\nhook_action_tid_set(long long int newval)\n{\n\thook_action_tid = newval;\n\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t \"hook_action_tid=%lld\", hook_action_tid);\n\tlog_event(PBSEVENT_DEBUG4, PBS_EVENTCLASS_SERVER,\n\t\t  LOG_INFO, \"hook_action_tid_set\", log_buffer);\n}\n/**\n * @brief\n *\t\tReturns the value of the global 'hook_action_tid' variable.\n *\n * @see\n *\t\tsync_mom_hookfilesTPP and mc_sync_mom_hookfiles.\n *\n * @return long long int\t- the 'hook_action_tid' value.\n */\nlong long 
int\nhook_action_tid_get(void)\n{\n\treturn (hook_action_tid);\n}\n\n/**\n * @brief\n * \t\tattrlist_add\t- Add <name> = <val> attributes to list headed by 'atl'\n *\n * @param[in,out]\tatl -  attribute list headed by atl.\n * @param[in]\t\tname - name, in the name value pair.\n * @param[in]\t\tval -  val, in the name value pair.\n *\n * @return\tint\n * @retval\t0 - for success\n * @retval\t1 - otherwise.\n */\nstatic int\nattrlist_add(pbs_list_head *atl, char *name, char *val)\n{\n\tsvrattrl *pal2;\n\n\tif ((name == NULL) || (val == NULL)) {\n\t\tlog_err(-1, __func__, \"name or val param is NULL\");\n\t\treturn (1);\n\t}\n\n\tif ((name[0] == '\\0') || (val[0] == '\\0')) {\n\t\tsprintf(log_buffer,\n\t\t\t\"(%s,%s) - name or val parameter is empty\", name, val);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\treturn (1);\n\t}\n\n\tpal2 = attrlist_create(name, NULL, (int) strlen(val) + 1);\n\tif (pal2 == NULL) {\n\t\tsprintf(log_buffer,\n\t\t\t\"(%s,%s) - failed to create attribute list\", name, val);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\treturn (1);\n\t}\n\tstrcpy(pal2->al_value, val);\n\tappend_link(atl, &pal2->al_link, pal2);\n\treturn (0);\n}\n\n/* Mom Hooks tracking functions */\n\n/**\n * @brief\n *\t\thook_track_save\t- Save the mom hooks pending actions data to a path_hooks_tracking file.\n *\t\tFile format is:\n *\t\t<mom_hostname>:<mom_port> <hookname> <action> <tid>\n *\n * @par Note:\n *\t\tIf 'minfo' is not NULL and k != -1, then data from action array attached\n *\t\tto 'minfo' at index 'k' is appended to path_hooks_tracking file.\n *\n *\t\tIf 'minfo' is not NULL and k == -1, then no data is saved\n *\t\tas hook_track_save() returns immediately.\n *\n *\t\tIf 'minfo' is NULL and whether or not k != -1 or k == -1, then data\n *\t\tfrom action array attached to each mom in the system, gets\n *\t\tappended to path_hooks_tracking file.\n *\n * @param[in]\tminfo - used in conjunction with 'k' below:\n *\t\t\tif not NULL and\n *\t\t\tk != -1, then 
append only the array\n *\t\t\tdata attached to 'minfo' indexed at 'k' to\n *\t\t\tpath_hooks_tracking file.\n *\n * @param[in]\tk     - used in conjunction with 'minfo' above.\n *\n * @return\tvoid\n *\n */\n\nvoid\nhook_track_save(void *minfo, int k)\n{\n\tint i, j;\n\tFILE *fp = NULL;\n\tmominfo_t **minfo_array = NULL;\n\tint minfo_array_size;\n\tmominfo_t *minfo_array_tmp[1];\n\tchar msg[HOOK_MSG_SIZE + 1];\n\tmom_hook_action_t *hook_act;\n\n\tif ((minfo != NULL) && (k == -1))\n\t\treturn;\n\n\tif (minfo != NULL) {\n\n\t\tfp = fopen(path_hooks_tracking, \"a\");\n\t\tminfo_array_tmp[0] = minfo;\n\t\tminfo_array = (mominfo_t **) minfo_array_tmp;\n\t\tminfo_array_size = 1;\n\t} else {\n\t\tfp = fopen(path_hooks_tracking, \"w\");\n\t\tminfo_array = mominfo_array;\n\t\tminfo_array_size = mominfo_array_size;\n\t}\n\n\tif (fp == NULL) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"Failed to open hook tracking file %s\",\n\t\t\t path_hooks_tracking);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\treturn;\n\t}\n\n\tif (lock_file(fileno(fp), F_WRLCK, path_hooks_tracking, LOCK_RETRY_DEFAULT,\n\t\t      msg, sizeof(msg)) != 0) {\n\t\tlog_err(errno, __func__, msg);\n\t\tfclose(fp);\n\t\treturn; /* failed to lock */\n\t}\n\n\tfor (i = 0; i < minfo_array_size; i++) {\n\n\t\tif (minfo_array[i] == NULL)\n\t\t\tcontinue;\n\n\t\tfor (j = 0; j < ((mom_svrinfo_t *) minfo_array[i]->mi_data)->msr_num_action; j++) {\n\n\t\t\tif ((minfo == NULL) || (k == j)) {\n\t\t\t\thook_act = ((mom_svrinfo_t *) minfo_array[i]->mi_data)->msr_action[j];\n\t\t\t\tif (hook_act) {\n\t\t\t\t\tfprintf(fp, \"%s:%d %s %d %lld\\n\", minfo_array[i]->mi_host,\n\t\t\t\t\t\tminfo_array[i]->mi_port,\n\t\t\t\t\t\thook_act->hookname,\n\t\t\t\t\t\thook_act->action,\n\t\t\t\t\t\thook_act->tid);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\tfflush(fp);\n\n\tif (lock_file(fileno(fp), F_UNLCK, path_hooks_tracking, LOCK_RETRY_DEFAULT,\n\t\t      msg, sizeof(msg)) != 0)\n\t\tlog_err(errno, __func__, 
msg);\n\n\tfclose(fp);\n}\n/**\n *\n * @brief\n *\t\tsend_rescdef\t- checks if server's resourcedef file has a newer timestamp than the\n *\t\tresourcedef file currently known to hooks, and if so sets up the\n *\t\tnecessary mom hook actions to send the resourcedef file to each of the\n *\t\tmom hosts on the next sync_mom_hookfilesTPP() call.\n *\n * @param[in]\tforce - if set to '1', means to skip checking for timestamps\n * \t\t\tand just force setting up the mom hook actions to\n * \t\t\tsend the resourcedef file to each of the mom hosts.\n *\n * @return void\n */\n\nvoid\nsend_rescdef(int force)\n{\n\n\tint st;\n\tint st2;\n\tstruct stat sbuf;\n\tstruct stat sbuf2;\n\n\tif (mom_hooks_seen <= 0) {\n\t\tif (mom_hooks_seen < 0) { /* should not happen */\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_INFO, __func__,\n\t\t\t\t  \"mom_hooks_seen went negative, resetting to 0\");\n\t\t\tmom_hooks_seen = 0;\n\t\t}\n\t\treturn;\n\t}\n\n\tst = stat(path_rescdef, &sbuf);\n\tst2 = stat(path_hooks_rescdef, &sbuf2);\n\n\tif ((st == 0) && (sbuf.st_size > 0) &&\n\t    (force || ((st2 == -1) && (errno == ENOENT)) ||\n\t     ((st2 == 0) && (sbuf.st_mtime > sbuf2.st_mtime)))) {\n\t\tst = copy_file_internal(path_rescdef, path_hooks_rescdef);\n\t\tif (st != 0) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"warning: Failed to copy %s %s (error %d)\",\n\t\t\t\t path_rescdef, path_hooks_rescdef, st);\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_SERVER,\n\t\t\t\t  LOG_ERR, __func__, log_buffer);\n\t\t} else {\n\t\t\tadd_pending_mom_hook_action(NULL, PBS_RESCDEF,\n\t\t\t\t\t\t    MOM_HOOK_ACTION_SEND_RESCDEF);\n\t\t\tdo_sync_mom_hookfiles = 1;\n\t\t}\n\t} else if (((st == -1) || (sbuf.st_size == 0)) && (st2 == 0)) {\n\t\t/* server_priv/resourcedef disappeared and yet */\n\t\t/* server_priv/hooks/resourcedef still exists */\n\t\tadd_pending_mom_hook_action(NULL, PBS_RESCDEF,\n\t\t\t\t\t    
MOM_HOOK_ACTION_DELETE_RESCDEF);\n\t\tdo_sync_mom_hookfiles = 1;\n\t\t(void) unlink(path_hooks_rescdef);\n\t}\n\n\thook_rescdef_checksum = crc_file(path_hooks_rescdef);\n}\n\n/**\n * @brief\n *\t\thook_track_recov\t- Recover the tracking data in path_hooks_tracking for mom hooks.\n *\n * @par Functionality\n *\t\tThe tracking data contains the pending mom hook actions, and in a file\n *\t\thas the format:\n *\t\t<mom_name> <port_number> <hook_name> <action> <tid>\n *\n * @note\n *\t\tThis is to recover any pending mom hook actions when the\n *\t\tserver restarts.\n *\n *\t\tThis will also look for the highest value <tid> it has recovered\n *\t\tfrom the hooks tracking file, and uses that as the new value of\n *\t\tthe global variable hook_action_tid.\n *\n *\t\tIf a hook-related file (*.HK, *.PY, resourcedef) was sent to the\n *\t\tremote mom, and the mom was able to get the file, and if the child\n *\t\tserver goes away before it can record the result in the hooks\n *\t\ttracking file, then the main server would be left with still the old\n *\t\ttracking entry for the hook action, and is retried again by the\n *\t\tnext spawned child server. If the main server dies in the middle\n *\t\tof the transaction, when it restarts, it will be able to recover any\n *\t\thook action results by the child server. 
And if there are still\n *\t\tpending actions, then they are retried by the next child server.\n *\n * @return      void\n */\nvoid\nhook_track_recov(void)\n{\n\tint i, j;\n\tFILE *fp = NULL;\n\tchar linebuf[BUFSIZ];\n\tint linenum;\n\tchar *mom_name;\n\tchar *hook_name;\n\tint port_num;\n\tchar *port_num_str;\n\tint action;\n\tchar *action_str;\n\tlong long int hook_tid;\n\tchar *p;\n\tchar *p2;\n\tchar msg[HOOK_MSG_SIZE + 1];\n\tint msg_len = HOOK_MSG_SIZE;\n\tmominfo_t *minfo = NULL;\n\tstruct stat sbuf;\n\tlong long int max_tid_recov = 0LL; /* holds maximum tid number seen */\n\t/* in mom hooks tracking file */\n\n\tif (stat(path_hooks_tracking, &sbuf) != 0) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"Failed to stat %s errno=%d\", path_hooks_tracking,\n\t\t\t errno);\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_ERR, __func__, log_buffer);\n\t\treturn;\n\t}\n\n\tfp = fopen(path_hooks_tracking, \"r\");\n\tif (fp == NULL) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"Failed to open file %s errno=%d\",\n\t\t\t path_hooks_tracking, errno);\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_ERR, __func__, log_buffer);\n\t\treturn;\n\t}\n\n\tif (lock_file(fileno(fp), F_RDLCK, path_hooks_tracking, LOCK_RETRY_DEFAULT,\n\t\t      msg, sizeof(msg)) != 0) {\n\t\tlog_err(errno, __func__, msg);\n\t\tfclose(fp);\n\t\treturn; /* failed to lock */\n\t}\n\n\t/* we now have a lock on the file */\n\n\t/* Need to clear out old entries */\n\tfor (i = 0; i < mominfo_array_size; i++) {\n\t\tif (mominfo_array[i] == NULL)\n\t\t\tcontinue;\n\t\tfor (j = 0; j < ((mom_svrinfo_t *) mominfo_array[i]->mi_data)->msr_num_action; j++) {\n\t\t\tif (((mom_svrinfo_t *) mominfo_array[i]->mi_data)->msr_action[j] != NULL) {\n\t\t\t\tfree(((mom_svrinfo_t *) mominfo_array[i]->mi_data)->msr_action[j]);\n\t\t\t\t((mom_svrinfo_t *) mominfo_array[i]->mi_data)->msr_action[j] = NULL;\n\t\t\t}\n\t\t}\n\t}\n\n\t/* We'll recover what we can, and 
just ignore errors */\n\t/* encountered processing the hook tracking file entries */\n\tlinenum = 0;\n\twhile (fgets(linebuf, sizeof(linebuf), fp) != NULL) {\n\n\t\tlinenum++;\n\t\tif ((p = strrchr(linebuf, '\\n')) != NULL) {\n\t\t\t*p = '\\0';\n\t\t} else {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"warning: line %d is too long\", linenum);\n\t\t\tlog_err(PBSE_SYSTEM, __func__, msg);\n\t\t\tcontinue;\n\t\t}\n\t\t/* ignore initial white space; skip blank lines */\n\t\tp = linebuf;\n\n\t\t/* skip whitespace */\n\t\twhile ((*p != '\\0') && isspace(*p))\n\t\t\tp++;\n\n\t\tif (*p == '\\0')\n\t\t\tcontinue; /* empty line */\n\n\t\tmom_name = p;\n\t\tif ((p2 = strchr(linebuf, ':')) == NULL) {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"warning: line %d:  missing ':'\", linenum);\n\t\t\tlog_err(PBSE_SYSTEM, __func__, msg);\n\t\t\tcontinue;\n\t\t}\n\n\t\tif (*(p2 + 1) == '\\0') {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"warning: line %d:  no <port num>\",\n\t\t\t\t linenum);\n\t\t\tlog_err(PBSE_SYSTEM, __func__, msg);\n\t\t\tcontinue;\n\t\t}\n\t\t*p2 = '\\0';\n\n\t\tp2++;\n\t\tport_num_str = p2;\n\t\t/* skip non-white space */\n\t\twhile ((*p2 != '\\0') && !isspace(*p2))\n\t\t\tp2++;\n\n\t\tif (*p2 == '\\0') {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"warning: line %d:  no <hook name>\",\n\t\t\t\t linenum);\n\t\t\tlog_err(PBSE_SYSTEM, __func__, msg);\n\t\t\tcontinue;\n\t\t}\n\n\t\t*p2 = '\\0';\n\n\t\tport_num = atoi(port_num_str);\n\n\t\t/* skip whitespace */\n\t\tp2++;\n\t\twhile ((*p2 != '\\0') && isspace(*p2))\n\t\t\tp2++;\n\n\t\tif (*p2 == '\\0') {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"warning: line %d:  no <hook name>\",\n\t\t\t\t linenum);\n\t\t\tlog_err(PBSE_SYSTEM, __func__, msg);\n\t\t\tcontinue;\n\t\t}\n\n\t\thook_name = p2;\n\t\twhile ((*p2 != '\\0') && !isspace(*p2))\n\t\t\tp2++;\n\n\t\tif (*p2 == '\\0') {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"warning: line %d:  no <hook action>\", linenum);\n\t\t\tlog_err(PBSE_SYSTEM, __func__, 
msg);\n\t\t\tcontinue;\n\t\t}\n\n\t\t*p2++ = '\\0';\n\n\t\t/* skip whitespace */\n\t\twhile ((*p2 != '\\0') && isspace(*p2))\n\t\t\tp2++;\n\n\t\taction_str = p2;\n\n\t\t/* get non-whitespace characters */\n\t\twhile ((*p2 != '\\0') && !isspace(*p2))\n\t\t\tp2++;\n\n\t\tif (*p2 == '\\0') {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"warning: line %d:  no <tid>\", linenum);\n\t\t\tlog_err(PBSE_SYSTEM, __func__, msg);\n\t\t\tcontinue;\n\t\t}\n\n\t\t*p2 = '\\0';\n\n\t\taction = atoi(action_str);\n\n\t\tp2++;\n\n\t\t/* skip whitespace */\n\t\twhile ((*p2 != '\\0') && isspace(*p2))\n\t\t\tp2++;\n\n\t\t/* preserve the recovered tid value */\n\t\t/* and use it to store into mom hook */\n\t\t/* action data structure */\n\t\thook_tid = atoll(p2);\n\t\tif (hook_tid > max_tid_recov)\n\t\t\tmax_tid_recov = hook_tid;\n\n\t\t/* optimization: use previous result if match */\n\t\tif ((minfo == NULL) ||\n\t\t    (strcasecmp(minfo->mi_host, mom_name) != 0) ||\n\t\t    (minfo->mi_port != port_num)) {\n\n\t\t\tminfo = find_mom_entry(mom_name, port_num);\n\t\t}\n\t\tif (minfo != NULL) {\n\t\t\tadd_mom_hook_action(&((mom_svrinfo_t *) minfo->mi_data)->msr_action,\n\t\t\t\t\t    &((mom_svrinfo_t *) minfo->mi_data)->msr_num_action,\n\t\t\t\t\t    hook_name, action, 1, hook_tid);\n\t\t}\n\t}\n\n\thook_action_tid_set(max_tid_recov);\n\n\tif (fp != NULL) {\n\t\tif (lock_file(fileno(fp), F_UNLCK, path_hooks_tracking,\n\t\t\t      LOCK_RETRY_DEFAULT, msg, sizeof(msg)) != 0) {\n\t\t\tlog_err(errno, __func__, msg);\n\t\t}\n\t\tfclose(fp);\n\t}\n}\n\n/*\n ************************************************************************\n *   Hook-related Qmgr operations.\n ************************************************************************\n */\n\n/**\n * @brief\n *\t\tmgr_hook_create\t- Processes a \"create hook\" qmgr request.\n *\n * @see\n *\t\treq_manager\n *\n * @param[in]\tpreq\t- the requestor info via batch request structure\n *\n * @note\n *\t\tReturns a reply to the sender of the 
batch_request.\n */\nvoid\nmgr_hook_create(struct batch_request *preq)\n{\n\tsvrattrl *plist, *plx;\n\thook *phook = NULL;\n\thook *phook2 = NULL;\n\tchar hook_msg[HOOK_MSG_SIZE] = {'\\0'};\n\tchar *hook_user_val = NULL;\n\tchar *hook_fail_action_val = NULL;\n\tchar *hook_freq_val = NULL;\n\n\tif (strlen(preq->rq_ind.rq_manager.rq_objname) == 0) {\n\t\treply_text(preq, PBSE_HOOKERROR, \"no hook name specified\");\n\t\treturn;\n\t}\n\n\tif ((phook2 = find_hook(preq->rq_ind.rq_manager.rq_objname)) != NULL) {\n\t\tif (phook2->pending_delete) {\n\t\t\tsnprintf(hook_msg, sizeof(hook_msg),\n\t\t\t\t \"hook name \\'%s\\' is pending delete, try another name\",\n\t\t\t\t preq->rq_ind.rq_manager.rq_objname);\n\t\t} else {\n\t\t\tsnprintf(hook_msg, sizeof(hook_msg),\n\t\t\t\t \"hook name \\'%s\\' already registered, try another name\",\n\t\t\t\t preq->rq_ind.rq_manager.rq_objname);\n\t\t}\n\t\tgoto mgr_hook_create_error;\n\t}\n\n\tphook = hook_alloc();\n\tif (phook == NULL) {\n\t\treq_reject(PBSE_INTERNAL, 0, preq);\n\t\treturn;\n\t}\n\n\tif (set_hook_name(phook, preq->rq_ind.rq_manager.rq_objname,\n\t\t\t  hook_msg, sizeof(hook_msg)) != 0) {\n\t\tgoto mgr_hook_create_error;\n\t}\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\n\tplx = plist;\n\twhile (plx) {\n\t\tif (strcasecmp(plx->al_name, HOOKATT_TYPE) == 0) {\n\t\t\tif (set_hook_type(phook, plx->al_value,\n\t\t\t\t\t  hook_msg, sizeof(hook_msg), 0) != 0)\n\t\t\t\tgoto mgr_hook_create_error;\n\t\t} else if (strcasecmp(plx->al_name, HOOKATT_ENABLED) == 0) {\n\t\t\tif (set_hook_enabled(phook, plx->al_value,\n\t\t\t\t\t     hook_msg, sizeof(hook_msg)) != 0)\n\t\t\t\tgoto mgr_hook_create_error;\n\t\t} else if (strcasecmp(plx->al_name, HOOKATT_DEBUG) == 0) {\n\t\t\tif (set_hook_debug(phook, plx->al_value,\n\t\t\t\t\t   hook_msg, sizeof(hook_msg)) != 0)\n\t\t\t\tgoto mgr_hook_create_error;\n\t\t} else if (strcasecmp(plx->al_name, HOOKATT_USER) == 0) {\n\t\t\t/* setting hook user value must be a deferred 
action, */\n\t\t\t/* as it is dependent on event having */\n\t\t\t/* execjob_prologue, execjob_epilogue, or */\n\t\t\t/* execjob_preterm being set. The event set action */\n\t\t\t/* could appear after this set user action. */\n\t\t\tif (hook_user_val != NULL)\n\t\t\t\tfree(hook_user_val);\n\t\t\thook_user_val = strdup(plx->al_value);\n\t\t\tif (hook_user_val == NULL) {\n\t\t\t\tsnprintf(hook_msg, sizeof(hook_msg),\n\t\t\t\t\t \"strdup(%s) failed: errno %d\",\n\t\t\t\t\t plx->al_value, errno);\n\t\t\t\tgoto mgr_hook_create_error;\n\t\t\t}\n\t\t} else if (strcasecmp(plx->al_name, HOOKATT_FAIL_ACTION) == 0) {\n\t\t\t/* setting hook fail_action value must be a deferred action, */\n\t\t\t/* as it is dependent on a mom hook event */\n\t\t\t/* being set. The event set action */\n\t\t\t/* could appear after this set fail_action. */\n\t\t\tif (hook_fail_action_val != NULL)\n\t\t\t\tfree(hook_fail_action_val);\n\t\t\thook_fail_action_val = strdup(plx->al_value);\n\t\t\tif (hook_fail_action_val == NULL) {\n\t\t\t\tsnprintf(hook_msg, sizeof(hook_msg),\n\t\t\t\t\t \"strdup(%s) failed: errno %d\",\n\t\t\t\t\t plx->al_value, errno);\n\t\t\t\tgoto mgr_hook_create_error;\n\t\t\t}\n\t\t} else if (strcasecmp(plx->al_name, HOOKATT_EVENT) == 0) {\n\t\t\tif (set_hook_event(phook, plx->al_value,\n\t\t\t\t\t   hook_msg, sizeof(hook_msg)) != 0)\n\t\t\t\tgoto mgr_hook_create_error;\n\t\t} else if (strcasecmp(plx->al_name, HOOKATT_ORDER) == 0) {\n\t\t\tif (set_hook_order(phook, plx->al_value,\n\t\t\t\t\t   hook_msg, sizeof(hook_msg)) != 0)\n\t\t\t\tgoto mgr_hook_create_error;\n\t\t} else if (strcasecmp(plx->al_name, HOOKATT_ALARM) == 0) {\n\t\t\tif (set_hook_alarm(phook, plx->al_value,\n\t\t\t\t\t   hook_msg, sizeof(hook_msg)) != 0)\n\t\t\t\tgoto mgr_hook_create_error;\n\t\t} else if (strcasecmp(plx->al_name, HOOKATT_FREQ) == 0) {\n\t\t\t/* setting hook freq value must be a deferred action, */\n\t\t\t/* as it is dependent on event having */\n\t\t\t/* exechost_periodic being set. 
The event set action */\n\t\t\t/* could appear after this set freq action. */\n\t\t\tif (hook_freq_val != NULL)\n\t\t\t\tfree(hook_freq_val);\n\t\t\thook_freq_val = strdup(plx->al_value);\n\t\t\tif (hook_freq_val == NULL) {\n\t\t\t\tsnprintf(hook_msg, sizeof(hook_msg),\n\t\t\t\t\t \"strdup(%s) failed: errno %d\",\n\t\t\t\t\t plx->al_value, errno);\n\t\t\t\tgoto mgr_hook_create_error;\n\t\t\t}\n\t\t} else {\n\t\t\tsnprintf(hook_msg, sizeof(hook_msg) - 1, \"%s - %s\",\n\t\t\t\t msg_noattr, plx->al_name);\n\t\t\tgoto mgr_hook_create_error;\n\t\t}\n\n\t\tplx = (svrattrl *) GET_NEXT(plx->al_link);\n\t}\n\n\t/* Now do the deferred set actions */\n\tif (hook_user_val != NULL) {\n\t\tif (set_hook_user(phook, hook_user_val,\n\t\t\t\t  hook_msg, sizeof(hook_msg), 1) != 0)\n\t\t\tgoto mgr_hook_create_error;\n\t\tfree(hook_user_val);\n\t\thook_user_val = NULL;\n\t}\n\t/* Now do the deferred set fail_action */\n\tif (hook_fail_action_val != NULL) {\n\t\tif (set_hook_fail_action(phook, hook_fail_action_val,\n\t\t\t\t\t hook_msg, sizeof(hook_msg), 1) != 0)\n\t\t\tgoto mgr_hook_create_error;\n\t\tfree(hook_fail_action_val);\n\t\thook_fail_action_val = NULL;\n\t}\n\tif (hook_freq_val != NULL) {\n\t\tif (set_hook_freq(phook, hook_freq_val,\n\t\t\t\t  hook_msg, sizeof(hook_msg)) != 0)\n\t\t\tgoto mgr_hook_create_error;\n\t\tfree(hook_freq_val);\n\t\thook_freq_val = NULL;\n\t}\n\n\tsprintf(log_buffer, msg_manager, msg_man_cre,\n\t\tpreq->rq_user, preq->rq_host);\n\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO,\n\t\t  preq->rq_ind.rq_manager.rq_objname, log_buffer);\n\n\tif (hook_save(phook) != 0) {\n\t\tsnprintf(hook_msg, sizeof(hook_msg),\n\t\t\t \"Failed to store '%s' permanently.\",\n\t\t\t preq->rq_ind.rq_manager.rq_objname);\n\t\tgoto mgr_hook_create_error;\n\t}\n\n\tif (phook->event & MOM_EVENTS) {\n\t\tadd_pending_mom_hook_action(NULL, phook->hook_name,\n\t\t\t\t\t    MOM_HOOK_ACTION_SEND_ATTRS);\n\t\tmom_hooks_seen++;\n\t\tif (mom_hooks_seen == 1) {\n\t\t\t/* used 
to be no mom hooks in the system, but now */\n\t\t\t/* one is introduced. So see if resourcedef file */\n\t\t\t/* changed and need to be flagged to be sent to */\n\t\t\t/* the moms */\n\t\t\tsend_rescdef(0);\n\t\t}\n\t}\n\treply_ack(preq); /*create completely successful*/\n\treturn;\n\nmgr_hook_create_error:\n\tif (hook_user_val != NULL)\n\t\tfree(hook_user_val);\n\tif (hook_fail_action_val != NULL)\n\t\tfree(hook_fail_action_val);\n\tif (hook_freq_val != NULL)\n\t\tfree(hook_freq_val);\n\n\tif (phook)\n\t\thook_purge(phook, pbs_python_ext_free_python_script);\n\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO, __func__, hook_msg);\n}\n\n/**\n * @brief\n *\t\tmgr_hook_delete\t- Processes a \"delete hook\" qmgr request.\n *\n * @see\n * \t\treq_manager\n *\n * @param[in]\tpreq\t- a batch_request structure representing the request.\n */\nvoid\nmgr_hook_delete(struct batch_request *preq)\n{\n\n\thook *phook = NULL;\n\tchar hookname[PBS_MAXSVRJOBID + 1] = {'\\0'};\n\tchar hook_msg[HOOK_MSG_SIZE] = {'\\0'};\n\n\tif (strlen(preq->rq_ind.rq_manager.rq_objname) == 0) {\n\t\treply_text(preq, PBSE_HOOKERROR, \"no hook name specified\");\n\t\treturn;\n\t}\n\n\tsnprintf(hookname, sizeof(hookname), \"%s\", preq->rq_ind.rq_manager.rq_objname);\n\n\tphook = find_hook(hookname);\n\n\tif (phook == NULL) {\n\t\tsnprintf(hook_msg, sizeof(hook_msg), \"%s does not exist!\",\n\t\t\t hookname);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO,\n\t\t\t  hookname, hook_msg);\n\t\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\t\treturn;\n\t}\n\n\tif (phook->pending_delete) {\n\t\tsnprintf(hook_msg, sizeof(hook_msg), \"%s is pending delete!\",\n\t\t\t hookname);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO,\n\t\t\t  hookname, hook_msg);\n\t\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\t\treturn;\n\t}\n\n\tif (phook->type == HOOK_PBS) {\n\t\tsprintf(log_buffer, \"cannot delete a PBS hook named 
%s\",\n\t\t\tphook->hook_name);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO,\n\t\t\t  hookname, log_buffer);\n\t\treply_text(preq, PBSE_HOOKERROR, \"cannot delete a PBS hook\");\n\t\treturn;\n\t}\n\n\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\tdisable_svr_prov();\n\n\tif (phook->event & HOOK_EVENT_PERIODIC)\n\t\tdelete_task_by_parm1_func(phook, NULL, DELETE_ALL);\n\n\tif (phook->event & MOM_EVENTS) {\n\n\t\tphook->pending_delete = 1;\n\t\tadd_pending_mom_hook_action(NULL, phook->hook_name,\n\t\t\t\t\t    MOM_HOOK_ACTION_DELETE);\n\t\tmom_hooks_seen--;\n\t\t(void) snprintf(hook_msg, sizeof(hook_msg),\n\t\t\t\t\"hook %s queued for deletion\", phook->hook_name);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO,\n\t\t\t  hookname, hook_msg);\n\t\tif (hook_save(phook) != 0) {\n\t\t\tsnprintf(hook_msg, sizeof(hook_msg),\n\t\t\t\t \"Warning: failed to store '%s' permanently.\",\n\t\t\t\t preq->rq_ind.rq_manager.rq_objname);\n\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO,\n\t\t\t\t  hookname, hook_msg);\n\t\t}\n\t\treply_ack(preq); /*request completely successful*/\n\t} else {\n\t\thook_purge(phook, pbs_python_ext_free_python_script);\n\n\t\t(void) sprintf(log_buffer, msg_manager, msg_man_del,\n\t\t\t       preq->rq_user, preq->rq_host);\n\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO,\n\t\t\t  hookname, log_buffer);\n\t\treply_ack(preq); /*request completely successful*/\n\t}\n}\n\n/**\n *  @brief\n *  \tpy_compile_and_run\t- Compiles and/or runs the input_file_path based on file's suffix.\n *\n * @see\n * \t\tmgr_hook_import\n *\n *  @param[in]\tinput_file_path - file to compile and/or run.\n *  @param[out]\thook_msg - any error message resulting from compilation/run\n *\t\t\t\tfilled in here.\n *  @param[in]\tmsg_len - size of the hook_msg\n *  @param[in]\thook_name - name of the associated hook.\n *  @param[in]\tcompile_only - if set to 1, means to compile only\n *  \t\t\t\t'input_file_path'\n *\n * @return 
int\n * @retval\t0\t- for success\n * @retval\t!= 0\t- for any failure encountered\n */\nstatic int\npy_compile_and_run(char *input_file_path, char *hook_msg, size_t msg_len,\n\t\t   char *hookname, int compile_only)\n{\n\tstruct python_script *py_test_script = NULL;\n\tint rc;\n\n\tif ((input_file_path == NULL) || (hook_msg == NULL) || (hookname == NULL)) {\n\t\treturn -1;\n\t}\n\n\tmemset(hook_msg, '\\0', msg_len);\n\n\tif (pbs_python_ext_alloc_python_script(input_file_path,\n\t\t\t\t\t       (struct python_script **) &py_test_script) == -1) {\n\t\tsnprintf(hook_msg, msg_len,\n\t\t\t \"failed to allocate storage for python script %s\",\n\t\t\t input_file_path);\n\t\treturn -1;\n\t}\n\n\tif (compile_only) {\n\t\trc = pbs_python_check_and_compile_script(&svr_interp_data,\n\t\t\t\t\t\t\t py_test_script);\n\t} else {\n\t\trc = pbs_python_run_code_in_namespace(&svr_interp_data,\n\t\t\t\t\t\t      py_test_script, 0);\n\t}\n\t/* free py_script immediately, as it was used only for */\n\t/* test compile */\n\tpbs_python_ext_free_python_script(py_test_script);\n\tfree(py_test_script);\n\tpy_test_script = NULL;\n\n\tif (rc != 0) {\n\t\tsnprintf(hook_msg, msg_len,\n\t\t\t \"Failed to validate config file, \"\n\t\t\t \"hook '%s' config file not overwritten\",\n\t\t\t hookname);\n\t\treturn (rc);\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tmgr_hook_import\t- Processes an \"import hook\" qmgr request.\n * @note\n *\t\tReturns a reply to the sender of the batch_request.\n * @see\n * \t\treq_manager\n *\n * @param[in]\tpreq\t- batch_request structure representing request.\n */\nvoid\nmgr_hook_import(struct batch_request *preq)\n{\n\tsvrattrl *plist, *plx;\n\tchar hookname[PBS_MAXSVRJOBID + 1] = {'\\0'};\n\thook *phook;\n\tchar content_type[BUFSIZ];\n\tchar content_encoding[BUFSIZ];\n\tchar input_file[MAXPATHLEN + 1];\n\tchar input_file_path[MAXPATHLEN + 1];\n\tchar input_path[MAXPATHLEN + 1] = {'\\0'};\n\tchar temp_path[MAXPATHLEN + 1] = {'\\0'}; /* init: error path may unlink() before it is set */\n\tchar output_path[MAXPATHLEN + 1] = 
{'\\0'};\n\tchar msgbuf[HOOK_MSG_SIZE];\n\tchar *hook_msg = NULL;\n\tint overwrite;\n\tstruct python_script *py_test_script = NULL;\n\tint rc;\n\tint hook_obj;\n\n\thook_obj = preq->rq_ind.rq_manager.rq_objtype;\n\n\tif (strlen(preq->rq_ind.rq_manager.rq_objname) == 0) {\n\t\treply_text(preq, PBSE_HOOKERROR, \"no hook name specified\");\n\t\treturn;\n\t}\n\n\tsnprintf(hookname, sizeof(hookname), \"%s\", preq->rq_ind.rq_manager.rq_objname);\n\n\tphook = find_hook(hookname);\n\n\tif ((phook == NULL) || phook->pending_delete) {\n\t\tpbs_asprintf(&hook_msg, \"%s does not exist!\", hookname);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO, hookname, hook_msg);\n\t\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\t\tfree(hook_msg);\n\t\treturn;\n\t}\n\n\t/* Normally, only HOOK_SITE hooks can be shown/operated on in qmgr. */\n\t/* But HOOK_PBS hooks can also be shown if the qmgr request is */\n\t/* specifically operating on the \"pbshook\" keyword. */\n\tif ((phook->type != HOOK_SITE) && (hook_obj != MGR_OBJ_PBS_HOOK)) {\n\t\tpbs_asprintf(&hook_msg, \"%s not of '%s' type\", hookname, HOOKSTR_SITE);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO, hookname, hook_msg);\n\t\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\t\tfree(hook_msg);\n\t\treturn;\n\t}\n\n\t/* Cannot show a HOOK_SITE hook if the qmgr request keyword is */\n\t/* \"pbshook\" */\n\tif ((phook->type == HOOK_SITE) && (hook_obj == MGR_OBJ_PBS_HOOK)) {\n\t\tpbs_asprintf(&hook_msg, \"%s not of '%s' type\", hookname, HOOKSTR_PBS);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO, hookname, hook_msg);\n\t\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\t\tfree(hook_msg);\n\t\treturn;\n\t}\n\n\tsprintf(log_buffer, msg_manager, __func__, preq->rq_user, preq->rq_host);\n\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO,\n\t\t  hookname, log_buffer);\n\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\tplx = plist;\n\tcontent_type[0] = 
'\\0';\n\tcontent_encoding[0] = '\\0';\n\tinput_file[0] = '\\0';\n\twhile (plx) {\n\n\t\tif (strcasecmp(plx->al_name, CONTENT_TYPE_PARAM) == 0) {\n\t\t\tif (plx->al_value == NULL) {\n\t\t\t\tpbs_asprintf(&hook_msg, \"<%s> is NULL\",\n\t\t\t\t\t     CONTENT_TYPE_PARAM);\n\t\t\t\tgoto mgr_hook_import_error;\n\t\t\t}\n\t\t\tif (hook_obj == MGR_OBJ_PBS_HOOK) {\n\t\t\t\tif (strcmp(plx->al_value, HOOKSTR_CONFIG) != 0) {\n\t\t\t\t\tpbs_asprintf(&hook_msg, \"<%s> must be %s\",\n\t\t\t\t\t\t     CONTENT_TYPE_PARAM, HOOKSTR_CONFIG);\n\t\t\t\t\tgoto mgr_hook_import_error;\n\t\t\t\t}\n\t\t\t} else if ((strcmp(plx->al_value, HOOKSTR_CONTENT) != 0) &&\n\t\t\t\t   (strcmp(plx->al_value, HOOKSTR_CONFIG) != 0)) {\n\t\t\t\tpbs_asprintf(&hook_msg, \"<%s> must be %s or %s\",\n\t\t\t\t\t     CONTENT_TYPE_PARAM, HOOKSTR_CONTENT, HOOKSTR_CONFIG);\n\t\t\t\tgoto mgr_hook_import_error;\n\t\t\t}\n\t\t\tstrcpy(content_type, plx->al_value);\n\t\t} else if (strcasecmp(plx->al_name,\n\t\t\t\t      CONTENT_ENCODING_PARAM) == 0) {\n\t\t\tif (plx->al_value == NULL) {\n\t\t\t\tpbs_asprintf(&hook_msg, \"<%s> is NULL\",\n\t\t\t\t\t     CONTENT_ENCODING_PARAM);\n\t\t\t\tgoto mgr_hook_import_error;\n\t\t\t}\n\t\t\tif ((strcmp(plx->al_value, HOOKSTR_DEFAULT) != 0) &&\n\t\t\t    (strcmp(plx->al_value, HOOKSTR_BASE64) != 0)) {\n\t\t\t\tpbs_asprintf(&hook_msg, \"<%s> must be '%s' or '%s'\",\n\t\t\t\t\t     CONTENT_ENCODING_PARAM,\n\t\t\t\t\t     HOOKSTR_DEFAULT, HOOKSTR_BASE64);\n\t\t\t\tgoto mgr_hook_import_error;\n\t\t\t}\n\t\t\tstrcpy(content_encoding, plx->al_value);\n\t\t} else if (strcasecmp(plx->al_name, INPUT_FILE_PARAM) == 0) {\n\t\t\tif (plx->al_value == NULL) {\n\t\t\t\tpbs_asprintf(&hook_msg, \"input-file is NULL\");\n\t\t\t\tgoto mgr_hook_import_error;\n\t\t\t}\n\n\t\t\tif (is_full_path(plx->al_value)) {\n\t\t\t\tpbs_asprintf(&hook_msg, \"<%s> path must be relative to %s\",\n\t\t\t\t\t     INPUT_FILE_PARAM, path_hooks_workdir);\n\t\t\t\tgoto 
mgr_hook_import_error;\n\t\t\t}\n\t\t\tstrcpy(input_file, plx->al_value);\n\t\t} else {\n\t\t\tpbs_asprintf(&hook_msg, \"unrecognized parameter - %s\",\n\t\t\t\t     plx->al_name);\n\t\t\tgoto mgr_hook_import_error;\n\t\t}\n\n\t\tplx = (struct svrattrl *) GET_NEXT(plx->al_link);\n\t}\n\n\tif (strcmp(content_type, HOOKSTR_CONFIG) == 0) {\n\t\tchar *p;\n\t\tFILE *temp_fp;\n\t\tchar *p2;\n\t\tchar tempfile_path[MAXPATHLEN + 1];\n\n\t\tp = strrchr(input_file, '.');\n\t\tif (p != NULL) {\n\t\t\tif (!in_string_list(p, ' ', VALID_HOOK_CONFIG_SUFFIX)) {\n\t\t\t\tpbs_asprintf(&hook_msg,\n\t\t\t\t\t     \"<%s> contains an invalid suffix, \"\n\t\t\t\t\t     \"should be one of: %s\",\n\t\t\t\t\t     INPUT_FILE_PARAM,\n\t\t\t\t\t     VALID_HOOK_CONFIG_SUFFIX);\n\t\t\t\tgoto mgr_hook_import_error;\n\t\t\t}\n\n\t\t\tif (strcmp(p, \".py\") == 0) {\n\t\t\t\tsnprintf(input_file_path, sizeof(input_file_path),\n\t\t\t\t\t \"%s%s\", path_hooks_workdir, input_file);\n\t\t\t\trc = py_compile_and_run(input_file_path, msgbuf,\n\t\t\t\t\t\t\tsizeof(msgbuf) - 1, hookname, 1);\n\t\t\t\tif (rc != 0) {\n\t\t\t\t\thook_msg = strdup(msgbuf);\n\t\t\t\t\tgoto mgr_hook_import_error;\n\t\t\t\t}\n\t\t\t} else if (strcmp(p, \".ini\") == 0) {\n\t\t\t\tsnprintf(input_file_path, sizeof(input_file_path),\n\t\t\t\t\t \"%s%s\", path_hooks_workdir, input_file);\n\t\t\t\tsnprintf(tempfile_path, sizeof(tempfile_path),\n\t\t\t\t\t \"%s\", input_file_path);\n\t\t\t\tp2 = strrchr(tempfile_path, '.');\n\t\t\t\t*p2 = '\\0';\n\t\t\t\ttemp_fp = fopen(tempfile_path, \"w\");\n\t\t\t\tif (temp_fp != NULL) {\n\t\t\t\t\tfprintf(temp_fp, \"import ConfigParser\\n\");\n\t\t\t\t\tfprintf(temp_fp,\n\t\t\t\t\t\t\"Config  = ConfigParser.RawConfigParser()\\n\");\n\t\t\t\t\tfprintf(temp_fp,\n\t\t\t\t\t\t\"Config.read(\\\"%s\\\")\\n\", input_file_path);\n\t\t\t\t\tfclose(temp_fp);\n\t\t\t\t\trc = py_compile_and_run(tempfile_path,\n\t\t\t\t\t\t\t\tmsgbuf, sizeof(msgbuf) - 1,\n\t\t\t\t\t\t\t\thookname, 0);\n\t\t\t\t\tif (rc != 0) 
{\n\t\t\t\t\t\thook_msg = strdup(msgbuf);\n\t\t\t\t\t\tgoto mgr_hook_import_error;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else if (strcmp(p, \".json\") == 0) {\n\t\t\t\tsnprintf(input_file_path, sizeof(input_file_path),\n\t\t\t\t\t \"%s%s\", path_hooks_workdir, input_file);\n\t\t\t\tsnprintf(tempfile_path, sizeof(tempfile_path),\n\t\t\t\t\t \"%s\", input_file_path);\n\t\t\t\tp2 = strrchr(tempfile_path, '.');\n\t\t\t\t*p2 = '\\0';\n\t\t\t\ttemp_fp = fopen(tempfile_path, \"w\");\n\t\t\t\tif (temp_fp != NULL) {\n\t\t\t\t\tfprintf(temp_fp,\n\t\t\t\t\t\t\"import json\\n\");\n\t\t\t\t\tfprintf(temp_fp,\n\t\t\t\t\t\t\"fd = open(\\\"%s\\\")\\n\",\n\t\t\t\t\t\tinput_file_path);\n\t\t\t\t\tfprintf(temp_fp,\n\t\t\t\t\t\t\"json.load(fd)\\n\");\n\t\t\t\t\tfprintf(temp_fp,\n\t\t\t\t\t\t\"fd.close()\\n\");\n\t\t\t\t\tfclose(temp_fp);\n\t\t\t\t\trc = py_compile_and_run(tempfile_path,\n\t\t\t\t\t\t\t\tmsgbuf, sizeof(msgbuf) - 1,\n\t\t\t\t\t\t\t\thookname, 0);\n\t\t\t\t\tif (rc != 0) {\n\t\t\t\t\t\thook_msg = strdup(msgbuf);\n\t\t\t\t\t\tgoto mgr_hook_import_error;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tsnprintf(input_path, sizeof(input_path),\n\t\t \"%s%s\", path_hooks_workdir, input_file);\n\tsnprintf(temp_path, sizeof(temp_path), \"%s%.*s.tmp\", path_hooks_workdir,\n\t\t (int) (sizeof(temp_path) - strlen(input_file) - 5), input_file);\n\n\tif (strcmp(content_type, HOOKSTR_CONTENT) == 0) {\n\t\tsnprintf(output_path, sizeof(output_path),\n\t\t\t \"%s%s%s\", path_hooks, hookname,\n\t\t\t HOOK_SCRIPT_SUFFIX);\n\t} else if (strcmp(content_type, HOOKSTR_CONFIG) == 0) {\n\t\tsnprintf(output_path, sizeof(output_path),\n\t\t\t \"%s%s%s\", path_hooks, hookname,\n\t\t\t HOOK_CONFIG_SUFFIX);\n\t} else {\n\t\tpbs_asprintf(&hook_msg, \"<%s> is unknown\", CONTENT_TYPE_PARAM);\n\t\tgoto mgr_hook_import_error;\n\t}\n\n\toverwrite = 0;\n\tif (file_exists(output_path))\n\t\toverwrite = 1;\n\n\tif (strcmp(content_type, HOOKSTR_CONFIG) == 0) {\n\t\tif (decode_hook_content(input_path, 
output_path, content_encoding,\n\t\t\t\t\tmsgbuf, sizeof(msgbuf)) != 0) {\n\t\t\thook_msg = strdup(msgbuf);\n\t\t\tgoto mgr_hook_import_error;\n\t\t}\n\t\tif (overwrite) {\n\t\t\tpbs_asprintf(&hook_msg,\n\t\t\t\t     \"hook '%s' contents overwritten\", hookname);\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_INFO, __func__, hook_msg);\n\t\t\treply_text(preq, 0, hook_msg);\n\t\t\tfree(hook_msg);\n\t\t} else {\n\t\t\treply_ack(preq);\n\t\t}\n\t\tphook->hook_config_checksum = crc_file(output_path);\n\n\t\tif (phook->event & MOM_EVENTS)\n\t\t\tadd_pending_mom_hook_action(NULL, phook->hook_name,\n\t\t\t\t\t\t    MOM_HOOK_ACTION_SEND_CONFIG);\n\n\t\treturn;\n\n\t} else {\n\t\tif (decode_hook_content(input_path, temp_path, content_encoding,\n\t\t\t\t\tmsgbuf, sizeof(msgbuf)) != 0) {\n\t\t\thook_msg = strdup(msgbuf);\n\t\t\tgoto mgr_hook_import_error;\n\t\t}\n\t}\n\n\t/* create a py_script */\n\tif (pbs_python_ext_alloc_python_script(temp_path,\n\t\t\t\t\t       (struct python_script **) &py_test_script) == -1) {\n\t\tpbs_asprintf(&hook_msg,\n\t\t\t     \"failed to allocate storage for python script %s\",\n\t\t\t     temp_path);\n\t\tunlink(temp_path);\n\t\tgoto mgr_hook_import_error;\n\t}\n\t/* try compiling the now decoded file */\n\trc = pbs_python_check_and_compile_script(&svr_interp_data,\n\t\t\t\t\t\t py_test_script);\n\t/* free py_script immediately, as it was used only for test compile */\n\tpbs_python_ext_free_python_script(py_test_script);\n\tfree(py_test_script);\n\n\tif (rc != 0) {\n\t\tif (overwrite)\n\t\t\tpbs_asprintf(&hook_msg,\n\t\t\t\t     \"Failed to compile script, \"\n\t\t\t\t     \"hook '%s' contents not overwritten\",\n\t\t\t\t     hookname);\n\t\telse\n\t\t\tpbs_asprintf(&hook_msg, \"Failed to compile script\");\n\n\t\tunlink(temp_path);\n\t\tgoto mgr_hook_import_error;\n\t}\n\n\t/* now actually overwrite the old file, no decoding this time */\n\tif (decode_hook_content(temp_path, output_path, 
HOOKSTR_DEFAULT,\n\t\t\t\tmsgbuf, sizeof(msgbuf)) != 0) {\n\t\thook_msg = strdup(msgbuf);\n\t\tunlink(temp_path);\n\t\tgoto mgr_hook_import_error;\n\t}\n\n\tunlink(temp_path);\n\n\tif (overwrite) {\n\t\tpbs_asprintf(&hook_msg, \"hook '%s' contents overwritten\", hookname);\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_INFO, __func__, hook_msg);\n\t\treply_text(preq, 0, hook_msg);\n\t\tfree(hook_msg);\n\t} else {\n\t\treply_ack(preq);\n\t}\n\n\tif (phook->script) {\n\t\tpbs_python_ext_free_python_script(phook->script);\n\t\tfree(phook->script);\n\t\tphook->script = NULL;\n\t}\n\n\tif (pbs_python_ext_alloc_python_script(output_path,\n\t\t\t\t\t       (struct python_script **) &phook->script) == -1) {\n\t\tpbs_asprintf(&hook_msg,\n\t\t\t     \"failed to allocate storage for python script %s\",\n\t\t\t     output_path);\n\t\tgoto mgr_hook_import_error;\n\t}\n\n\tphook->hook_script_checksum = crc_file(output_path);\n\n\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\tset_srv_prov_attributes(); /* check and set prov attributes */\n\n\tif (phook->event & MOM_EVENTS)\n\t\tadd_pending_mom_hook_action(NULL, phook->hook_name,\n\t\t\t\t\t    MOM_HOOK_ACTION_SEND_SCRIPT);\n\n\tif (phook->event & HOOK_EVENT_PERIODIC) {\n\t\tset_srv_pwr_prov_attribute(); /* check and set power attributes */\n\t\tif ((phook->enabled == TRUE) && (phook->freq > 0)) {\n\t\t\t/* Search and delete all already existing periodic hook task */\n\t\t\tdelete_task_by_parm1_func(phook, NULL, DELETE_ALL);\n\t\t\t(void) set_task(WORK_Timed, time_now + phook->freq, run_periodic_hook, phook);\n\t\t}\n\t}\n\n\tadd_to_svrattrl_list(&preq->rq_ind.rq_manager.rq_attr, OUTPUT_FILE_PARAM,\n\t\t\t     NULL, output_path, 0, NULL);\n\treturn;\n\nmgr_hook_import_error:\n\tunlink(temp_path);\n\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_HOOK, LOG_INFO, __func__, hook_msg);\n\tfree(hook_msg);\n}\n\n/**\n * @brief\n *\t\tmgr_hook_export\t- Processes an \"export 
hook\" qmgr request.\n * @note\n *\t\tReturns a reply to the sender of the batch_request.\n *\n * @see\n * \t\treq_manager\n *\n * @param[in]\tpreq\t- batch_request structure representing the request.\n */\nvoid\nmgr_hook_export(struct batch_request *preq)\n{\n\tsvrattrl *plist, *plx;\n\tchar hookname[PBS_MAXSVRJOBID + 1] = {'\\0'};\n\thook *phook;\n\tchar content_type[BUFSIZ];\n\tchar content_encoding[BUFSIZ];\n\tchar output_file[MAXPATHLEN + 1];\n\tchar input_path[MAXPATHLEN + 1] = {'\\0'};\n\tchar output_path[MAXPATHLEN + 1] = {'\\0'};\n\tchar hook_msg[HOOK_MSG_SIZE] = {'\\0'};\n\tint hook_obj;\n\n\thook_obj = preq->rq_ind.rq_manager.rq_objtype;\n\n\tif (strlen(preq->rq_ind.rq_manager.rq_objname) == 0) {\n\t\treply_text(preq, PBSE_HOOKERROR, \"no hook name specified\");\n\t\treturn;\n\t}\n\n\tsnprintf(hookname, sizeof(hookname), \"%.*s\", PBS_MAXSVRJOBID,\n\t\t preq->rq_ind.rq_manager.rq_objname);\n\n\t/* Else one and only one vhook */\n\tphook = find_hook(hookname);\n\n\tif ((phook == NULL) || phook->pending_delete) {\n\t\tsnprintf(hook_msg, sizeof(hook_msg), \"%s does not exist!\", hookname);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO, hookname, hook_msg);\n\t\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\t\treturn;\n\t}\n\n\t/* Normally, only HOOK_SITE hooks can be shown/operated on in qmgr. */\n\t/* But HOOK_PBS hooks can also be shown if the qmgr request is */\n\t/* specifically operating on the \"pbshook\" keyword. 
*/\n\tif ((phook->type != HOOK_SITE) && (hook_obj != MGR_OBJ_PBS_HOOK)) {\n\t\tsnprintf(hook_msg, sizeof(hook_msg), \"%s not of '%s' type\", hookname, HOOKSTR_SITE);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO, hookname, hook_msg);\n\t\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\t\treturn;\n\t}\n\n\t/* Cannot show a HOOK_SITE hook if the qmgr request keyword is */\n\t/* \"pbshook\" */\n\tif ((phook->type == HOOK_SITE) && (hook_obj == MGR_OBJ_PBS_HOOK)) {\n\t\tsnprintf(hook_msg, sizeof(hook_msg), \"%s not of '%s' type\", hookname, HOOKSTR_PBS);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO, hookname, hook_msg);\n\t\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\t\treturn;\n\t}\n\n\tsprintf(log_buffer, msg_manager, __func__, preq->rq_user, preq->rq_host);\n\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO,\n\t\t  hookname, log_buffer);\n\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\tplx = plist;\n\tcontent_type[0] = '\\0';\n\tcontent_encoding[0] = '\\0';\n\toutput_file[0] = '\\0';\n\twhile (plx) {\n\n\t\tif (strcasecmp(plx->al_name, CONTENT_TYPE_PARAM) == 0) {\n\t\t\tif (plx->al_value == NULL) {\n\t\t\t\tsnprintf(hook_msg, sizeof(hook_msg),\n\t\t\t\t\t \"<%s> is NULL\", CONTENT_TYPE_PARAM);\n\t\t\t\tgoto mgr_hook_export_error;\n\t\t\t}\n\t\t\tif (hook_obj == MGR_OBJ_PBS_HOOK) {\n\t\t\t\tif (strcmp(plx->al_value, HOOKSTR_CONFIG) != 0) {\n\t\t\t\t\tsnprintf(hook_msg, sizeof(hook_msg),\n\t\t\t\t\t\t \"<%s> must be %s\",\n\t\t\t\t\t\t CONTENT_TYPE_PARAM, HOOKSTR_CONFIG);\n\t\t\t\t\tgoto mgr_hook_export_error;\n\t\t\t\t}\n\t\t\t} else if ((strcmp(plx->al_value,\n\t\t\t\t\t   HOOKSTR_CONTENT) != 0) &&\n\t\t\t\t   (strcmp(plx->al_value, HOOKSTR_CONFIG) != 0)) {\n\t\t\t\tsnprintf(hook_msg, sizeof(hook_msg),\n\t\t\t\t\t \"<%s> must be %s or %s\",\n\t\t\t\t\t CONTENT_TYPE_PARAM, HOOKSTR_CONTENT, HOOKSTR_CONFIG);\n\t\t\t\tgoto mgr_hook_export_error;\n\t\t\t}\n\t\t\tstrcpy(content_type, plx->al_value);\n\t\t} else if 
(strcasecmp(plx->al_name,\n\t\t\t\t      CONTENT_ENCODING_PARAM) == 0) {\n\t\t\tif (plx->al_value == NULL) {\n\t\t\t\tsnprintf(hook_msg, sizeof(hook_msg),\n\t\t\t\t\t \"<%s> is NULL\", CONTENT_ENCODING_PARAM);\n\t\t\t\tgoto mgr_hook_export_error;\n\t\t\t}\n\t\t\tif ((strcmp(plx->al_value, HOOKSTR_DEFAULT) != 0) &&\n\t\t\t    (strcmp(plx->al_value, HOOKSTR_BASE64) != 0)) {\n\t\t\t\tsnprintf(hook_msg, sizeof(hook_msg),\n\t\t\t\t\t \"<%s> must be '%s' or '%s'\",\n\t\t\t\t\t CONTENT_ENCODING_PARAM,\n\t\t\t\t\t HOOKSTR_DEFAULT, HOOKSTR_BASE64);\n\t\t\t\tgoto mgr_hook_export_error;\n\t\t\t}\n\t\t\tstrcpy(content_encoding, plx->al_value);\n\t\t} else if (strcasecmp(plx->al_name, OUTPUT_FILE_PARAM) == 0) {\n\t\t\tif (plx->al_value == NULL) {\n\t\t\t\tsnprintf(hook_msg, sizeof(hook_msg),\n\t\t\t\t\t \"<%s> is NULL\", OUTPUT_FILE_PARAM);\n\t\t\t\tgoto mgr_hook_export_error;\n\t\t\t}\n\n\t\t\tif (is_full_path(plx->al_value)) {\n\t\t\t\tsnprintf(hook_msg, sizeof(hook_msg),\n\t\t\t\t\t \"<%s> path must be relative to %s\",\n\t\t\t\t\t OUTPUT_FILE_PARAM, path_hooks_workdir);\n\t\t\t\tgoto mgr_hook_export_error;\n\t\t\t}\n\t\t\tstrcpy(output_file, plx->al_value);\n\t\t} else {\n\t\t\tsnprintf(hook_msg, sizeof(hook_msg),\n\t\t\t\t \"unrecognized parameter - %s\",\n\t\t\t\t plx->al_name);\n\t\t\tgoto mgr_hook_export_error;\n\t\t}\n\n\t\tplx = (struct svrattrl *) GET_NEXT(plx->al_link);\n\t}\n\n\tif (strcmp(content_type, HOOKSTR_CONTENT) == 0) {\n\t\tsnprintf(input_path, MAXPATHLEN, \"%s%s%s\",\n\t\t\t path_hooks, hookname, HOOK_SCRIPT_SUFFIX);\n\t} else if (strcmp(content_type, HOOKSTR_CONFIG) == 0) {\n\t\tsnprintf(input_path, MAXPATHLEN, \"%s%s%s\",\n\t\t\t path_hooks, hookname, HOOK_CONFIG_SUFFIX);\n\t} else {\n\t\tsnprintf(hook_msg, sizeof(hook_msg),\n\t\t\t \"<%s> is unknown\", CONTENT_TYPE_PARAM);\n\t\tgoto mgr_hook_export_error;\n\t}\n\n\tsnprintf(output_path, sizeof(output_path), \"%s%.*s\", path_hooks_workdir,\n\t\t (int) (sizeof(output_path) - strlen(output_file)), 
output_file);\n\tif (encode_hook_content(input_path, output_path, content_encoding,\n\t\t\t\thook_msg, sizeof(hook_msg)) != 0)\n\t\tgoto mgr_hook_export_error;\n\treply_ack(preq);\n\treturn;\n\nmgr_hook_export_error:\n\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_HOOK, LOG_INFO,\n\t\t  __func__, hook_msg);\n}\n\n#define COPY_HOOK_SAVE 0\n#define COPY_HOOK_RESTORE 1\n/**\n * @brief\n * \t\tcopy_hook\t- Copies some of the attribute values\n * \t\t(order, type, enabled, user, fail_action, debug, event, alarm, freq)\n * \t\tof 'src_hook' into 'dst_hook'.\n * \t\tIf mode is COPY_HOOK_SAVE, then just do a straight copy of the values.\n * \t\tIf mode is COPY_HOOK_RESTORE, then populate 'dst_hook' by\n * \t\tinvoking the native set_hook* functions, in order to update any\n * \t\tdependencies. For example, if order is updated, then the various system\n * \t\thook lists would automatically be updated to reflect the new ordering.\n *\n *\t@param[in]\tsrc_hook\t- source hook from which values need to be copied.\n *\t@param[in]\tdst_hook\t- destination hook to which values need to be copied.\n *\t@param[in]\tmode\t\t- whether to copy values directly or also update dependencies.\n */\nstatic void\ncopy_hook(hook *src_hook, hook *dst_hook, int mode)\n{\n\tchar hook_msg[HOOK_MSG_SIZE];\n\n\tif ((src_hook == NULL) || (dst_hook == NULL)) {\n\t\treturn;\n\t}\n\n\tswitch (mode) {\n\t\tcase COPY_HOOK_SAVE:\n\t\t\tdst_hook->order = src_hook->order;\n\t\t\tdst_hook->type = src_hook->type;\n\t\t\tdst_hook->enabled = src_hook->enabled;\n\t\t\tdst_hook->user = src_hook->user;\n\t\t\tdst_hook->fail_action = src_hook->fail_action;\n\t\t\tdst_hook->debug = src_hook->debug;\n\t\t\tdst_hook->event = src_hook->event;\n\t\t\tdst_hook->alarm = src_hook->alarm;\n\t\t\tdst_hook->freq = src_hook->freq;\n\t\t\tbreak;\n\t\tcase COPY_HOOK_RESTORE:\n\t\t\t(void) set_hook_order(dst_hook,\n\t\t\t\t\t      hook_order_as_string(src_hook->order), hook_msg,\n\t\t\t\t\t      
sizeof(hook_msg));\n\t\t\t(void) set_hook_type(dst_hook,\n\t\t\t\t\t     hook_type_as_string(src_hook->type), hook_msg,\n\t\t\t\t\t     sizeof(hook_msg), 1);\n\t\t\t(void) set_hook_enabled(dst_hook,\n\t\t\t\t\t\thook_enabled_as_string(src_hook->enabled), hook_msg,\n\t\t\t\t\t\tsizeof(hook_msg));\n\t\t\t(void) set_hook_user(dst_hook,\n\t\t\t\t\t     hook_user_as_string(src_hook->user), hook_msg,\n\t\t\t\t\t     sizeof(hook_msg), 0);\n\t\t\t(void) set_hook_fail_action(dst_hook,\n\t\t\t\t\t\t    hook_fail_action_as_string(src_hook->fail_action), hook_msg,\n\t\t\t\t\t\t    sizeof(hook_msg), 0);\n\t\t\t(void) set_hook_debug(dst_hook,\n\t\t\t\t\t      hook_debug_as_string(src_hook->debug), hook_msg,\n\t\t\t\t\t      sizeof(hook_msg));\n\t\t\t(void) set_hook_event(dst_hook,\n\t\t\t\t\t      hook_event_as_string(src_hook->event), hook_msg,\n\t\t\t\t\t      sizeof(hook_msg));\n\t\t\t(void) set_hook_alarm(dst_hook,\n\t\t\t\t\t      hook_alarm_as_string(src_hook->alarm), hook_msg,\n\t\t\t\t\t      sizeof(hook_msg));\n\t\t\t(void) set_hook_freq(dst_hook,\n\t\t\t\t\t     hook_freq_as_string(src_hook->freq), hook_msg,\n\t\t\t\t\t     sizeof(hook_msg));\n\t\t\tbreak;\n\t}\n}\n\n/**\n * @brief\n * \t\tmgr_hook_set\t- Sets hook attributes\n *\n *\t\tFinds the set of hooks, either one specified, or all for a host.\n *\t\tSets the request attributes on that set.\n *\t\treturns a reply to the sender of the batch_request\n * @note\n * \t\tThis is an atomic operation - either all the listed attributes\n * \t\tare set or none at all - uses copy_hook() to save/restore values.\n *\n * @see\n * \t\treq_manager\n *\n * @param[in]\tpreq\t- batch_request structure representing the request.\n */\nvoid\nmgr_hook_set(struct batch_request *preq)\n\n{\n\tsvrattrl *plist, *plx;\n\tchar hookname[PBS_MAXSVRJOBID + 1] = {'\\0'};\n\thook *phook;\n\tint num_set = 0;\n\tint got_event = 0; /* event attribute operated on */\n\tchar hook_msg[HOOK_MSG_SIZE] = {'\\0'};\n\thook shook;\n\tunsigned int 
prev_phook_event = 0;\n\tchar *hook_user_val = NULL;\n\tchar *hook_fail_action_val = NULL;\n\tenum batch_op hook_fail_action_op = DFLT;\n\tchar *hook_freq_val = NULL;\n\tint hook_obj;\n\n\thook_obj = preq->rq_ind.rq_manager.rq_objtype;\n\n\tif (strlen(preq->rq_ind.rq_manager.rq_objname) == 0) {\n\t\treply_text(preq, PBSE_HOOKERROR, \"no hook name specified\");\n\t\treturn;\n\t}\n\n\tsnprintf(hookname, sizeof(hookname), \"%s\", preq->rq_ind.rq_manager.rq_objname);\n\n\tphook = find_hook(hookname);\n\n\tif ((phook == NULL) || phook->pending_delete) {\n\t\tsnprintf(hook_msg, sizeof(hook_msg), \"%s does not exist!\", hookname);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO, hookname, hook_msg);\n\t\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\t\treturn;\n\t}\n\n\tprev_phook_event = phook->event;\n\n\tif ((phook->type == HOOK_PBS) && (hook_obj != MGR_OBJ_PBS_HOOK)) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"cannot set attributes of a '%s' hook named %s\", HOOKSTR_PBS, phook->hook_name);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO, hookname, log_buffer);\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"cannot set attributes of a '%s' hook\", HOOKSTR_PBS);\n\t\treply_text(preq, PBSE_HOOKERROR, log_buffer);\n\t\treturn;\n\t}\n\n\tif ((phook->type == HOOK_SITE) && (hook_obj == MGR_OBJ_PBS_HOOK)) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"cannot set attributes of a '%s' hook named %s\", HOOKSTR_SITE, phook->hook_name);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO, hookname, log_buffer);\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"cannot set attributes of a '%s' hook\", HOOKSTR_SITE);\n\t\treply_text(preq, PBSE_HOOKERROR, log_buffer);\n\t\treturn;\n\t}\n\n\tsprintf(log_buffer, msg_manager, __func__, preq->rq_user, preq->rq_host);\n\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO,\n\t\t  hookname, log_buffer);\n\n\tcopy_hook(phook, &shook, COPY_HOOK_SAVE);\n\n\tplist = (svrattrl *) 
GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\tplx = plist;\n\twhile (plx) {\n\n\t\tif (strcasecmp(plx->al_name, HOOKATT_TYPE) == 0) {\n\t\t\tif (plx->al_op != SET)\n\t\t\t\tgoto opnotequal;\n\t\t\tif (set_hook_type(phook, plx->al_value,\n\t\t\t\t\t  hook_msg, sizeof(hook_msg), 0) != 0)\n\t\t\t\tgoto mgr_hook_set_error;\n\t\t\tnum_set++;\n\t\t} else if (strcasecmp(plx->al_name, HOOKATT_ENABLED) == 0) {\n\t\t\tif (plx->al_op != SET)\n\t\t\t\tgoto opnotequal;\n\t\t\tif (set_hook_enabled(phook, plx->al_value,\n\t\t\t\t\t     hook_msg, sizeof(hook_msg)) != 0)\n\t\t\t\tgoto mgr_hook_set_error;\n\t\t\tif (phook->event & HOOK_EVENT_PERIODIC) {\n\t\t\t\tif ((phook->enabled == TRUE) && (phook->freq > 0)) {\n\t\t\t\t\t/* Delete all existing work tasks\n\t\t\t\t\t * There might be two of them:\n\t\t\t\t\t *  1 - related to running the next occurrence\n\t\t\t\t\t *  2 - related to running the post processing function\n\t\t\t\t\t */\n\t\t\t\t\tdelete_task_by_parm1_func(phook, NULL, DELETE_ALL);\n\t\t\t\t\tif (phook->script != NULL)\n\t\t\t\t\t\t(void) set_task(WORK_Timed, time_now + phook->freq,\n\t\t\t\t\t\t\t\trun_periodic_hook, phook);\n\t\t\t\t\telse {\n\t\t\t\t\t\tsprintf(log_buffer, \"periodic hook is missing information, check hook frequency and script\");\n\t\t\t\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO,\n\t\t\t\t\t\t\t  hookname, log_buffer);\n\t\t\t\t\t\tsnprintf(hook_msg, sizeof(hook_msg),\n\t\t\t\t\t\t\t \"periodic hook is missing information, check hook frequency and script\");\n\t\t\t\t\t\tgoto mgr_hook_set_error;\n\t\t\t\t\t}\n\t\t\t\t} else\n\t\t\t\t\t/* Delete any existing work task */\n\t\t\t\t\tdelete_task_by_parm1_func(phook, NULL, DELETE_ALL);\n\t\t\t}\n\t\t\tnum_set++;\n\t\t} else if (strcasecmp(plx->al_name, HOOKATT_DEBUG) == 0) {\n\t\t\tif (plx->al_op != SET)\n\t\t\t\tgoto opnotequal;\n\t\t\tif (set_hook_debug(phook, plx->al_value,\n\t\t\t\t\t   hook_msg, sizeof(hook_msg)) != 0)\n\t\t\t\tgoto mgr_hook_set_error;\n\t\t\tnum_set++;\n\t\t} 
else if (strcasecmp(plx->al_name, HOOKATT_USER) == 0) {\n\t\t\tif (plx->al_op != SET)\n\t\t\t\tgoto opnotequal;\n\n\t\t\t/* setting hook user value must be a deferred action, */\n\t\t\t/* as it is dependent on event having */\n\t\t\t/* execjob_prologue, execjob_epilogue, or */\n\t\t\t/* execjob_preterm being set. The event set action */\n\t\t\t/* could appear after this set user action. */\n\t\t\tif (hook_user_val != NULL)\n\t\t\t\tfree(hook_user_val);\n\t\t\thook_user_val = strdup(plx->al_value);\n\t\t\tif (hook_user_val == NULL) {\n\t\t\t\tsnprintf(hook_msg, sizeof(hook_msg),\n\t\t\t\t\t \"strdup(%s) failed: errno %d\",\n\t\t\t\t\t plx->al_value, errno);\n\t\t\t\tgoto mgr_hook_set_error;\n\t\t\t}\n\t\t} else if (strcasecmp(plx->al_name, HOOKATT_FAIL_ACTION) == 0) {\n\t\t\t/* setting hook fail_action value must be a deferred action, */\n\t\t\t/* as it is dependent on event having */\n\t\t\t/* mom hook event being set. The event set action */\n\t\t\t/* could appear after this set fail_action. 
*/\n\t\t\tif (hook_fail_action_val != NULL)\n\t\t\t\tfree(hook_fail_action_val);\n\t\t\thook_fail_action_val = strdup(plx->al_value);\n\t\t\tif (hook_fail_action_val == NULL) {\n\t\t\t\tsnprintf(hook_msg, sizeof(hook_msg),\n\t\t\t\t\t \"strdup(%s) failed: errno %d\",\n\t\t\t\t\t plx->al_value, errno);\n\t\t\t\tgoto mgr_hook_set_error;\n\t\t\t}\n\t\t\thook_fail_action_op = plx->al_op;\n\n\t\t} else if (strcasecmp(plx->al_name, HOOKATT_EVENT) == 0) {\n\n\t\t\tgot_event = 1;\n\n\t\t\tswitch (plx->al_op) {\n\t\t\t\tcase SET:\n\t\t\t\t\tif (set_hook_event(phook, plx->al_value,\n\t\t\t\t\t\t\t   hook_msg, sizeof(hook_msg)) != 0)\n\t\t\t\t\t\tgoto mgr_hook_set_error;\n\n\t\t\t\t\t/* if exechost_periodic disappears, then */\n\t\t\t\t\t/* unset hook freq value */\n\t\t\t\t\tif ((phook->event & HOOK_EVENT_EXECHOST_PERIODIC) == 0) {\n\t\t\t\t\t\tphook->freq = HOOK_FREQ_DEFAULT;\n\t\t\t\t\t}\n\n\t\t\t\t\tif ((phook->event & USER_MOM_EVENTS) == 0) {\n\t\t\t\t\t\tphook->user = HOOK_PBSADMIN;\n\t\t\t\t\t}\n\t\t\t\t\tif ((phook->event & FAIL_ACTION_EVENTS) == 0) {\n\t\t\t\t\t\tphook->fail_action = HOOK_FAIL_ACTION_NONE;\n\t\t\t\t\t}\n\t\t\t\t\tnum_set++;\n\t\t\t\t\tbreak;\n\t\t\t\tcase INCR:\n\t\t\t\t\tif (add_hook_event(phook, plx->al_value,\n\t\t\t\t\t\t\t   hook_msg, sizeof(hook_msg)) != 0)\n\t\t\t\t\t\tgoto mgr_hook_set_error;\n\t\t\t\t\tnum_set++;\n\t\t\t\t\tbreak;\n\t\t\t\tcase DECR:\n\t\t\t\t\tif (del_hook_event(phook, plx->al_value,\n\t\t\t\t\t\t\t   hook_msg, sizeof(hook_msg)) != 0)\n\t\t\t\t\t\tgoto mgr_hook_set_error;\n\n\t\t\t\t\t/* if exechost_periodic disappears, then */\n\t\t\t\t\t/* unset hook freq value */\n\t\t\t\t\tif ((phook->event & HOOK_EVENT_EXECHOST_PERIODIC) == 0) {\n\t\t\t\t\t\tphook->freq = HOOK_FREQ_DEFAULT;\n\t\t\t\t\t}\n\n\t\t\t\t\tif ((phook->event & USER_MOM_EVENTS) == 0) {\n\t\t\t\t\t\tphook->user = HOOK_PBSADMIN;\n\t\t\t\t\t}\n\t\t\t\t\tif ((phook->event & FAIL_ACTION_EVENTS) == 0) {\n\t\t\t\t\t\tphook->fail_action = 
HOOK_FAIL_ACTION_NONE;\n\t\t\t\t\t}\n\t\t\t\t\tif ((phook->event & HOOK_EVENT_EXECJOB_BEGIN) == 0) {\n\t\t\t\t\t\tphook->fail_action &= ~HOOK_FAIL_ACTION_SCHEDULER_RESTART_CYCLE;\n\t\t\t\t\t}\n\t\t\t\t\tif ((phook->event & HOOK_EVENT_EXECHOST_STARTUP) == 0) {\n\t\t\t\t\t\tphook->fail_action &= ~HOOK_FAIL_ACTION_CLEAR_VNODES;\n\t\t\t\t\t}\n\t\t\t\t\tnum_set++;\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\tsnprintf(hook_msg, sizeof(hook_msg),\n\t\t\t\t\t\t \"%s - %s:%d\", msg_internal,\n\t\t\t\t\t\t plx->al_name, plx->al_op);\n\t\t\t\t\tgoto mgr_hook_set_error;\n\t\t\t}\n\n\t\t} else if (strcasecmp(plx->al_name, HOOKATT_ORDER) == 0) {\n\t\t\tif (plx->al_op != SET)\n\t\t\t\tgoto opnotequal;\n\t\t\tif (phook->event & HOOK_EVENT_PERIODIC) {\n\t\t\t\tsprintf(log_buffer, \"Setting order for a periodic hook has no effect\");\n\t\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO,\n\t\t\t\t\t  hookname, log_buffer);\n\t\t\t} else if (set_hook_order(phook, plx->al_value,\n\t\t\t\t\t\t  hook_msg, sizeof(hook_msg)) != 0)\n\t\t\t\tgoto mgr_hook_set_error;\n\t\t\tnum_set++;\n\t\t} else if (strcasecmp(plx->al_name, HOOKATT_ALARM) == 0) {\n\t\t\tif (plx->al_op != SET)\n\t\t\t\tgoto opnotequal;\n\t\t\tif (set_hook_alarm(phook, plx->al_value,\n\t\t\t\t\t   hook_msg, sizeof(hook_msg)) != 0)\n\t\t\t\tgoto mgr_hook_set_error;\n\t\t\tnum_set++;\n\t\t} else if (strcasecmp(plx->al_name, HOOKATT_FREQ) == 0) {\n\t\t\tif (plx->al_op != SET)\n\t\t\t\tgoto opnotequal;\n\t\t\t/* setting hook freq value must be a deferred action, */\n\t\t\t/* as it is dependent on event having */\n\t\t\t/* exechost_periodic being set. The event set action */\n\t\t\t/* could appear after this set freq action. 
*/\n\t\t\tif (hook_freq_val != NULL)\n\t\t\t\tfree(hook_freq_val);\n\t\t\thook_freq_val = strdup(plx->al_value);\n\t\t\tif (hook_freq_val == NULL) {\n\t\t\t\tsnprintf(hook_msg, sizeof(hook_msg),\n\t\t\t\t\t \"strdup(%s) failed: errno %d\",\n\t\t\t\t\t plx->al_value, errno);\n\t\t\t\tgoto mgr_hook_set_error;\n\t\t\t}\n\t\t} else {\n\t\t\tsnprintf(hook_msg, sizeof(hook_msg) - 1, \"%s - %s\",\n\t\t\t\t msg_noattr, plx->al_name);\n\t\t\tgoto mgr_hook_set_error;\n\t\t}\n\n\t\tplx = (struct svrattrl *) GET_NEXT(plx->al_link);\n\t}\n\n\t/* Now do the deferred set actions */\n\tif (hook_user_val != NULL) {\n\t\tif (set_hook_user(phook, hook_user_val,\n\t\t\t\t  hook_msg, sizeof(hook_msg), 1) != 0)\n\t\t\tgoto mgr_hook_set_error;\n\t\telse\n\t\t\tnum_set++;\n\t\tfree(hook_user_val);\n\t\thook_user_val = NULL;\n\t}\n\tif (hook_fail_action_val != NULL) {\n\t\tswitch (hook_fail_action_op) {\n\t\t\tcase SET:\n\t\t\t\tif (set_hook_fail_action(phook,\n\t\t\t\t\t\t\t hook_fail_action_val,\n\t\t\t\t\t\t\t hook_msg, sizeof(hook_msg), 1) != 0)\n\t\t\t\t\tgoto mgr_hook_set_error;\n\t\t\t\telse\n\n\t\t\t\t\tnum_set++;\n\t\t\t\tbreak;\n\t\t\tcase INCR:\n\t\t\t\tif (add_hook_fail_action(phook,\n\t\t\t\t\t\t\t hook_fail_action_val,\n\t\t\t\t\t\t\t hook_msg, sizeof(hook_msg), 1) != 0)\n\t\t\t\t\tgoto mgr_hook_set_error;\n\t\t\t\telse\n\t\t\t\t\tnum_set++;\n\t\t\t\tbreak;\n\t\t\tcase DECR:\n\t\t\t\tif (del_hook_fail_action(phook,\n\t\t\t\t\t\t\t hook_fail_action_val,\n\t\t\t\t\t\t\t hook_msg, sizeof(hook_msg)) != 0)\n\t\t\t\t\tgoto mgr_hook_set_error;\n\t\t\t\telse\n\t\t\t\t\tnum_set++;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tsnprintf(hook_msg, sizeof(hook_msg),\n\t\t\t\t\t \"%s - %s:%d\", msg_internal,\n\t\t\t\t\t plx ? plx->al_name : \"\", plx ? 
plx->al_op : -1);\n\t\t\t\tgoto mgr_hook_set_error;\n\t\t}\n\t\tfree(hook_fail_action_val);\n\t\thook_fail_action_val = NULL;\n\t}\n\tif (hook_freq_val != NULL) {\n\t\tif (set_hook_freq(phook, hook_freq_val,\n\t\t\t\t  hook_msg, sizeof(hook_msg)) != 0)\n\t\t\tgoto mgr_hook_set_error;\n\t\telse\n\t\t\tnum_set++;\n\t\tfree(hook_freq_val);\n\t\thook_freq_val = NULL;\n\t\tif ((phook->enabled == TRUE) && (phook->freq > 0)) {\n\t\t\t/* Delete all existing hook related timed work tasks */\n\t\t\tdelete_task_by_parm1_func(phook, run_periodic_hook, DELETE_ALL);\n\t\t\tif (phook->script != NULL)\n\t\t\t\t(void) set_task(WORK_Timed, time_now + phook->freq,\n\t\t\t\t\t\trun_periodic_hook, phook);\n\t\t}\n\t}\n\n\tif (num_set > 0) {\n\t\tif (hook_save(phook) != 0) {\n\t\t\tsnprintf(hook_msg, sizeof(hook_msg),\n\t\t\t\t \"Failed to store '%s' permanently.\",\n\t\t\t\t preq->rq_ind.rq_manager.rq_objname);\n\t\t\tgoto mgr_hook_set_error;\n\t\t}\n\t\tif (phook->event & MOM_EVENTS) {\n\t\t\tadd_pending_mom_hook_action(NULL, phook->hook_name,\n\t\t\t\t\t\t    MOM_HOOK_ACTION_SEND_ATTRS);\n\t\t\tif ((prev_phook_event & MOM_EVENTS) == 0) {\n\t\t\t\t/* previous hook's event did not include a */\n\t\t\t\t/* mom hook-related event, but current */\n\t\t\t\t/* one does due to a set operation */\n\t\t\t\tmom_hooks_seen++;\n\t\t\t\tif (mom_hooks_seen == 1) {\n\t\t\t\t\t/* used to be no mom hooks in the system, */\n\t\t\t\t\t/* but now one is introduced. 
So see if */\n\t\t\t\t\t/* resourcedef file changed and needs to */\n\t\t\t\t\t/* be flagged to be sent to the moms */\n\t\t\t\t\tsend_rescdef(0);\n\t\t\t\t}\n\t\t\t\tadd_pending_mom_hook_action(NULL,\n\t\t\t\t\t\t\t    phook->hook_name,\n\t\t\t\t\t\t\t    MOM_HOOK_ACTION_SEND_SCRIPT);\n\t\t\t\tadd_pending_mom_hook_action(NULL,\n\t\t\t\t\t\t\t    phook->hook_name,\n\t\t\t\t\t\t\t    MOM_HOOK_ACTION_SEND_CONFIG);\n\t\t\t}\n\t\t} else {\n\t\t\tif (prev_phook_event & MOM_EVENTS) {\n\t\t\t\t/* previous hook's event included a */\n\t\t\t\t/* mom hook-related event, but current */\n\t\t\t\t/* one doesn't due to the set operation */\n\t\t\t\tadd_pending_mom_hook_action(NULL,\n\t\t\t\t\t\t\t    phook->hook_name,\n\t\t\t\t\t\t\t    MOM_HOOK_ACTION_DELETE);\n\t\t\t\tmom_hooks_seen--;\n\t\t\t}\n\t\t}\n\t}\n\n\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\tset_srv_prov_attributes(); /* check and set prov attributes */\n\n\tif (phook->event & HOOK_EVENT_PERIODIC)\n\t\tset_srv_pwr_prov_attribute(); /* check and set power attributes */\n\n\treply_ack(preq); /*set completely successful*/\n\treturn;\n\nopnotequal:\n\tsnprintf(hook_msg, sizeof(hook_msg), \"'%s' operator not =\",\n\t\t plx->al_name);\n\nmgr_hook_set_error:\n\tif (hook_user_val != NULL)\n\t\tfree(hook_user_val);\n\tif (hook_fail_action_val != NULL)\n\t\tfree(hook_fail_action_val);\n\tif (hook_freq_val != NULL)\n\t\tfree(hook_freq_val);\n\n\tif ((num_set > 0) || got_event) {\n\t\t/*\n\t\t * got_event of 1 means set_hook_event() was called which\n\t\t * would have automatically initialized phook->event to 0\n\t\t * so we'll need to restore the previous value\n\t\t */\n\t\tcopy_hook(&shook, phook, COPY_HOOK_RESTORE);\n\t}\n\n\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO,\n\t\t  __func__, hook_msg);\n\treturn;\n}\n\n/**\n * @brief\n * \t\tmgr_hook_unset\t- Unsets hook attributes\n *\n *\t\tFinds the set of hooks, either one specified, or all for a host.\n *\t\tUnsets 
the request attributes on that set.\n *\t\treturns a reply to the sender of the batch_request\n *\n * @note\n * \t\tThis is an atomic operation - either all the listed attributes\n * \t\tare unset or none at all - uses copy_hook() to save/restore\n *\t\t\t\t\t\t\t\tvalues.\n *\n * @param[in]\tpreq\t- batch_request structure representing the request.\n */\n\nvoid\nmgr_hook_unset(struct batch_request *preq)\n\n{\n\tsvrattrl *plist, *plx;\n\tchar hookname[PBS_MAXSVRJOBID + 1] = {'\\0'};\n\thook *phook;\n\tint num_unset = 0;\n\tchar hook_msg[HOOK_MSG_SIZE] = {'\\0'};\n\thook shook;\n\tunsigned int prev_phook_event;\n\tint hook_obj;\n\n\thook_obj = preq->rq_ind.rq_manager.rq_objtype;\n\n\tif (strlen(preq->rq_ind.rq_manager.rq_objname) == 0) {\n\t\treply_text(preq, PBSE_HOOKERROR, \"no hook name specified\");\n\t\treturn;\n\t}\n\n\tsnprintf(hookname, sizeof(hookname), \"%s\", preq->rq_ind.rq_manager.rq_objname);\n\n\t/* Else one and only one vhook */\n\tphook = find_hook(hookname);\n\n\tif ((phook == NULL) || phook->pending_delete) {\n\t\tsnprintf(hook_msg, sizeof(hook_msg), \"%s does not exist!\", hookname);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO, hookname, hook_msg);\n\t\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\t\treturn;\n\t}\n\n\tprev_phook_event = phook->event;\n\n\tif ((phook->type == HOOK_PBS) && (hook_obj != MGR_OBJ_PBS_HOOK)) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"cannot unset attributes of a '%s' hook named %s\", HOOKSTR_PBS, phook->hook_name);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO, hookname, log_buffer);\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"cannot unset attributes of a '%s' hook\", HOOKSTR_PBS);\n\t\treply_text(preq, PBSE_HOOKERROR, log_buffer);\n\t\treturn;\n\t}\n\n\tif ((phook->type == HOOK_SITE) && (hook_obj == MGR_OBJ_PBS_HOOK)) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"cannot unset attributes of a '%s' hook named %s\", HOOKSTR_SITE, 
phook->hook_name);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO, hookname, log_buffer);\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"cannot unset attributes of a '%s' hook\", HOOKSTR_SITE);\n\t\treply_text(preq, PBSE_HOOKERROR, log_buffer);\n\t\treturn;\n\t}\n\n\tsprintf(log_buffer, msg_manager, __func__, preq->rq_user, preq->rq_host);\n\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO,\n\t\t  hookname, log_buffer);\n\n\tcopy_hook(phook, &shook, COPY_HOOK_SAVE);\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\tplx = plist;\n\twhile (plx) {\n\n\t\tif (strcasecmp(plx->al_name, HOOKATT_TYPE) == 0) {\n\t\t\tif (unset_hook_type(phook, hook_msg,\n\t\t\t\t\t    sizeof(hook_msg)) != 0)\n\t\t\t\tgoto mgr_hook_unset_error;\n\t\t\tnum_unset++;\n\t\t} else if (strcasecmp(plx->al_name, HOOKATT_ENABLED) == 0) {\n\t\t\tif (unset_hook_enabled(phook, hook_msg,\n\t\t\t\t\t       sizeof(hook_msg)) != 0)\n\t\t\t\tgoto mgr_hook_unset_error;\n\t\t\tnum_unset++;\n\t\t} else if (strcasecmp(plx->al_name, HOOKATT_DEBUG) == 0) {\n\t\t\tif (unset_hook_debug(phook, hook_msg,\n\t\t\t\t\t     sizeof(hook_msg)) != 0)\n\t\t\t\tgoto mgr_hook_unset_error;\n\t\t\tnum_unset++;\n\t\t} else if (strcasecmp(plx->al_name, HOOKATT_USER) == 0) {\n\t\t\tif (unset_hook_user(phook, hook_msg,\n\t\t\t\t\t    sizeof(hook_msg)) != 0)\n\t\t\t\tgoto mgr_hook_unset_error;\n\t\t\tnum_unset++;\n\t\t} else if (strcasecmp(plx->al_name, HOOKATT_FAIL_ACTION) == 0) {\n\t\t\tif (unset_hook_fail_action(phook, hook_msg,\n\t\t\t\t\t\t   sizeof(hook_msg)) != 0)\n\t\t\t\tgoto mgr_hook_unset_error;\n\t\t\tnum_unset++;\n\t\t} else if (strcasecmp(plx->al_name, HOOKATT_EVENT) == 0) {\n\t\t\tif (unset_hook_event(phook, hook_msg,\n\t\t\t\t\t     sizeof(hook_msg)) != 0)\n\t\t\t\tgoto mgr_hook_unset_error;\n\t\t\t/* Given that we've set hook's event list to empty, */\n\t\t\t/* then we need to reset the freq, user, fail_action values, */\n\t\t\t/* which are dependent on certain events being 
*/\n\t\t\t/* present. */\n\t\t\tphook->freq = HOOK_FREQ_DEFAULT;\n\t\t\tphook->user = HOOK_USER_DEFAULT;\n\t\t\tphook->fail_action = HOOK_FAIL_ACTION_DEFAULT;\n\t\t\tnum_unset++;\n\t\t} else if (strcasecmp(plx->al_name, HOOKATT_ORDER) == 0) {\n\t\t\tif (unset_hook_order(phook, hook_msg,\n\t\t\t\t\t     sizeof(hook_msg)) != 0)\n\t\t\t\tgoto mgr_hook_unset_error;\n\t\t\tnum_unset++;\n\t\t} else if (strcasecmp(plx->al_name, HOOKATT_ALARM) == 0) {\n\t\t\tif (unset_hook_alarm(phook, hook_msg,\n\t\t\t\t\t     sizeof(hook_msg)) != 0)\n\t\t\t\tgoto mgr_hook_unset_error;\n\t\t\tnum_unset++;\n\t\t} else if (strcasecmp(plx->al_name, HOOKATT_FREQ) == 0) {\n\t\t\tif (unset_hook_freq(phook, hook_msg,\n\t\t\t\t\t    sizeof(hook_msg)) != 0)\n\t\t\t\tgoto mgr_hook_unset_error;\n\t\t\tnum_unset++;\n\t\t} else {\n\t\t\tsnprintf(hook_msg, sizeof(hook_msg) - 1, \"%s - %s\",\n\t\t\t\t msg_noattr, plx->al_name);\n\t\t\tgoto mgr_hook_unset_error;\n\t\t}\n\n\t\tplx = (struct svrattrl *) GET_NEXT(plx->al_link);\n\t}\n\n\tif (num_unset > 0) {\n\t\tif (hook_save(phook) != 0) {\n\t\t\tsnprintf(hook_msg, sizeof(hook_msg),\n\t\t\t\t \"Failed to store '%s' permanently.\",\n\t\t\t\t preq->rq_ind.rq_manager.rq_objname);\n\t\t\tgoto mgr_hook_unset_error;\n\t\t}\n\t\tif (phook->event & MOM_EVENTS) {\n\t\t\tadd_pending_mom_hook_action(NULL, phook->hook_name,\n\t\t\t\t\t\t    MOM_HOOK_ACTION_SEND_ATTRS);\n\t\t} else {\n\t\t\tif (prev_phook_event & MOM_EVENTS) {\n\t\t\t\t/* previous hook's event included a */\n\t\t\t\t/* mom hook-related event, but current */\n\t\t\t\t/* one doesn't due to the unset operation */\n\t\t\t\tadd_pending_mom_hook_action(NULL,\n\t\t\t\t\t\t\t    phook->hook_name,\n\t\t\t\t\t\t\t    MOM_HOOK_ACTION_DELETE);\n\t\t\t\tmom_hooks_seen--;\n\t\t\t}\n\t\t}\n\t}\n\n\tif (phook->event & HOOK_EVENT_PROVISION)\n\t\tset_srv_prov_attributes(); /* check and set prov attributes */\n\n\tif (phook->event & HOOK_EVENT_PERIODIC)\n\t\tset_srv_pwr_prov_attribute(); /* check and set power 
attributes */\n\n\treply_ack(preq); /*unset completely successful*/\n\treturn;\n\nmgr_hook_unset_error:\n\n\tif (num_unset > 0)\n\t\tcopy_hook(&shook, phook, COPY_HOOK_RESTORE);\n\n\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK, LOG_INFO, __func__, hook_msg);\n}\n/*\n ************************************************************************\n *   Hook-related req_stat operations.\n ************************************************************************\n */\n\n/**\n * @brief\n * \t\tstatus_hook - Build the status reply for a single hook.\n *\n * @see\n * \t\treq_stat_hook\n *\n * @param[in]\tphook\t\t- ptr to hook to status\n * @param[in]\tpreq\t\t- batch_request structure representing the request.\n * @param[in,out]\tpstathd\t- head of list to append status to\n * @param[out]\thook_msg\t- output message\n * @param[out]\tmsg_len\t\t- required length of output message\n *\n * @return      Error code\n * @retval\t0  - Success\n * @retval\tnonzero  - Failure\n */\n\nint\nstatus_hook(hook *phook, struct batch_request *preq, pbs_list_head *pstathd, char *hook_msg, size_t msg_len)\n{\n\tstruct brp_status *pstat;\n\tsvrattrl *pal;\n\tchar val_str[HOOK_BUF_SIZE];\n\tchar *hookname;\n\tint hook_obj;\n\n\t/* status_hook() request will not have the object type directly. 
The extend */\n\t/* field will determine the object type */\n\tif (preq->rq_extend != NULL) {\n\t\tif (strcmp(preq->rq_extend, PBS_HOOK) == 0) {\n\t\t\thook_obj = MGR_OBJ_PBS_HOOK;\n\t\t} else if (strcmp(preq->rq_extend, SITE_HOOK) == 0) {\n\t\t\thook_obj = MGR_OBJ_SITE_HOOK;\n\t\t} else {\n\t\t\treturn (PBSE_HOOKERROR); /* bad hook object type */\n\t\t}\n\t} else {\n\t\thook_obj = MGR_OBJ_SITE_HOOK;\n\t}\n\tmemset(hook_msg, '\\0', msg_len);\n\n\tpstat = (struct brp_status *) malloc(sizeof(struct brp_status));\n\tif (pstat == NULL)\n\t\treturn (PBSE_SYSTEM);\n\n\thookname = phook->hook_name;\n\tpstat->brp_objtype = hook_obj;\n\n\t(void) strcpy(pstat->brp_objname, hookname);\n\tCLEAR_LINK(pstat->brp_stlink);\n\tCLEAR_HEAD(pstat->brp_attr);\n\tappend_link(pstathd, &pstat->brp_stlink, pstat);\n\tpreq->rq_reply.brp_count++;\n\n\t/* add attributes to the status reply */\n\n\tpal = (svrattrl *) GET_NEXT(preq->rq_ind.rq_status.rq_attr);\n\n\tif (pal) {\n\n\t\twhile (pal) {\n\t\t\tval_str[0] = '\\0';\n\t\t\tif (strcmp(pal->al_name, HOOKATT_TYPE) == 0) {\n\t\t\t\tstrcpy(val_str, hook_type_as_string(phook->type));\n\t\t\t} else if (strcmp(pal->al_name, HOOKATT_ENABLED) == 0) {\n\t\t\t\tstrcpy(val_str, hook_enabled_as_string(phook->enabled));\n\t\t\t} else if (strcmp(pal->al_name, HOOKATT_USER) == 0) {\n\t\t\t\tstrcpy(val_str, hook_user_as_string(phook->user));\n\t\t\t} else if (strcmp(pal->al_name, HOOKATT_EVENT) == 0) {\n\t\t\t\tstrcpy(val_str, hook_event_as_string(phook->event));\n\t\t\t} else if (strcmp(pal->al_name, HOOKATT_ORDER) == 0) {\n\t\t\t\tstrcpy(val_str, hook_order_as_string(phook->order));\n\t\t\t} else if (strcmp(pal->al_name, HOOKATT_ALARM) == 0) {\n\t\t\t\tstrcpy(val_str, hook_alarm_as_string(phook->alarm));\n\t\t\t} else if ((strcmp(pal->al_name, HOOKATT_FREQ) == 0) &&\n\t\t\t\t   (((phook->event & HOOK_EVENT_EXECHOST_PERIODIC) != 0) ||\n\t\t\t\t    ((phook->event & HOOK_EVENT_PERIODIC) != 0))) {\n\t\t\t\tstrcpy(val_str, 
hook_freq_as_string(phook->freq));\n\t\t\t} else if (strcmp(pal->al_name, HOOKATT_DEBUG) == 0) {\n\t\t\t\tstrcpy(val_str, hook_debug_as_string(phook->debug));\n\t\t\t} else if (strcmp(pal->al_name, HOOKATT_FAIL_ACTION) == 0) {\n\t\t\t\tstrcpy(val_str, hook_fail_action_as_string(phook->fail_action));\n\t\t\t} else {\n\t\t\t\tsnprintf(hook_msg, msg_len - 1,\n\t\t\t\t\t \"unknown hook attribute %s\", pal->al_name);\n\t\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t  LOG_INFO, hookname, hook_msg);\n\t\t\t\treturn (PBSE_HOOKERROR);\n\t\t\t}\n\n\t\t\tif (attrlist_add(&pstat->brp_attr, pal->al_name,\n\t\t\t\t\t val_str) != 0)\n\t\t\t\treturn (PBSE_INTERNAL);\n\t\t\tpal = (svrattrl *) GET_NEXT(pal->al_link);\n\t\t}\n\t} else { /* return all attribute values */\n\n\t\tif ((attrlist_add(&pstat->brp_attr, HOOKATT_TYPE,\n\t\t\t\t  hook_type_as_string(phook->type)) != 0) ||\n\t\t    (attrlist_add(&pstat->brp_attr, HOOKATT_ENABLED,\n\t\t\t\t  hook_enabled_as_string(phook->enabled)) != 0) ||\n\t\t    (attrlist_add(&pstat->brp_attr, HOOKATT_EVENT,\n\t\t\t\t  hook_event_as_string(phook->event)) != 0) ||\n\t\t    (attrlist_add(&pstat->brp_attr, HOOKATT_USER,\n\t\t\t\t  hook_user_as_string(phook->user)) != 0) ||\n\t\t    (attrlist_add(&pstat->brp_attr, HOOKATT_ALARM,\n\t\t\t\t  hook_alarm_as_string(phook->alarm)) != 0) ||\n\t\t    ((((phook->event & HOOK_EVENT_EXECHOST_PERIODIC) != 0) ||\n\t\t      ((phook->event & HOOK_EVENT_PERIODIC) != 0)) &&\n\t\t     (attrlist_add(&pstat->brp_attr, HOOKATT_FREQ,\n\t\t\t\t   hook_freq_as_string(phook->freq)) != 0)) ||\n\t\t    (attrlist_add(&pstat->brp_attr, HOOKATT_ORDER,\n\t\t\t\t  hook_order_as_string(phook->order)) != 0) ||\n\t\t    (attrlist_add(&pstat->brp_attr, HOOKATT_DEBUG,\n\t\t\t\t  hook_debug_as_string(phook->debug)) != 0) ||\n\t\t    (attrlist_add(&pstat->brp_attr, HOOKATT_FAIL_ACTION,\n\t\t\t\t  hook_fail_action_as_string(phook->fail_action)) != 0))\n\t\t\treturn (PBSE_INTERNAL);\n\t}\n\n\treturn (0);\n}\n\n/**\n * 
@brief\n * \t\treq_stat_hook - service the Status Hook Request\n *\n *\t\tProcesses the request for status of a single hook or the set of\n *\t\thooks (site or PBS) at the local server.\n *\n *\t@see\n *\t\tdispatch_request\n *\n * @param[in]\tpreq\t\t- ptr to the decoded request\n *\n * @return\tvoid\n */\n\nvoid\nreq_stat_hook(struct batch_request *preq)\n{\n\tchar *name;\n\thook *phook = NULL;\n\tstruct batch_reply *preply;\n\tint rc = 0;\n\tint type = 0;\n\tchar hook_msg[HOOK_MSG_SIZE];\n\tint hook_obj;\n\n\t/*\n\t * first, validate the name of the requested object, either\n\t * a hook, or null for all hooks\n\t */\n\n\tname = preq->rq_ind.rq_status.rq_id;\n\n\t/* req_stat_hook() will not have the object type directly, but it can */\n\t/* be determined via the extend field of the request. */\n\tif (preq->rq_extend != NULL) {\n\t\tif (strcmp(preq->rq_extend, PBS_HOOK) == 0) {\n\t\t\thook_obj = MGR_OBJ_PBS_HOOK;\n\t\t} else if (strcmp(preq->rq_extend, SITE_HOOK) == 0) {\n\t\t\thook_obj = MGR_OBJ_SITE_HOOK;\n\t\t} else {\n\t\t\treply_text(preq, PBSE_HOOKERROR, \"bad hook object type\");\n\t\t\treturn;\n\t\t}\n\t} else {\n\t\thook_obj = MGR_OBJ_SITE_HOOK;\n\t}\n\n\tif (*name == '\\0') { /* match all hooks */\n\t\ttype = 1;\n\t} else {\n\t\tphook = find_hook(name);\n\t\tif ((phook == NULL) || phook->pending_delete) {\n\t\t\treply_text(preq, PBSE_HOOKERROR, \"hook not found\");\n\t\t\treturn;\n\t\t}\n\t}\n\n\tpreply = &preq->rq_reply;\n\tpreply->brp_choice = BATCH_REPLY_CHOICE_Status;\n\tCLEAR_HEAD(preply->brp_un.brp_status);\n\tpreply->brp_count = 0;\n\n\tif (type == 0) { /* get status of the one named hook */\n\t\t/* stat only hooks matching the requested object type */\n\t\tif (((hook_obj == MGR_OBJ_PBS_HOOK) &&\n\t\t     (phook->type == HOOK_PBS)) ||\n\t\t    ((hook_obj == MGR_OBJ_SITE_HOOK) &&\n\t\t     (phook->type == HOOK_SITE))) {\n\t\t\trc = status_hook(phook, preq,\n\t\t\t\t\t &preply->brp_un.brp_status,\n\t\t\t\t\t hook_msg, sizeof(hook_msg));\n\t\t}\n\n\t} else { 
/* get status of SITE or PBS hooks */\n\n\t\tphook = (hook *) GET_NEXT(svr_allhooks);\n\t\twhile (phook) {\n\t\t\tif (!phook->pending_delete &&\n\t\t\t    (((hook_obj == MGR_OBJ_PBS_HOOK) &&\n\t\t\t      (phook->type == HOOK_PBS)) ||\n\t\t\t     ((hook_obj == MGR_OBJ_SITE_HOOK) &&\n\t\t\t      (phook->type == HOOK_SITE)))) {\n\t\t\t\trc = status_hook(phook, preq,\n\t\t\t\t\t\t &preply->brp_un.brp_status, hook_msg,\n\t\t\t\t\t\t sizeof(hook_msg));\n\t\t\t\tif (rc != 0)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_allhooks);\n\t\t}\n\t}\n\tif (rc) {\n\t\tif (hook_msg[0] == '\\0')\n\t\t\treq_reject(rc, 0, preq);\n\t\telse\n\t\t\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\t} else {\n\t\t(void) reply_send(preq);\n\t}\n}\n\n/**\n * @brief\n * \t\tThis function will set pjob's Execution_Time attribute value to the value\n * \t\tcorresponding to new_exec_time_str.\n *\n * @see\n * \t\tdo_runjob_reject_actions\n *\n * @param[in,out]\tpjob - job structure\n * @param[in]\t\tnew_exec_time_str - new execution time which needs to be placed in job structure\n * @param[out]\t\tmsg - filled in with actual error message if this function fails\n * @param[in]\t\tmsg_len - size of 'msg' buffer.\n * @param[in]\t\thook_name - the name of the hook where the set attribute\n *\t\t\t\t\tfunction is called.\n *\n * @return int\n * @retval \t0\tfor success.\n * @retval\t!= 0\totherwise.\n */\nstatic int\nset_exec_time(job *pjob, char *new_exec_time_str, char *msg,\n\t      int msg_len, char *hook_name)\n{\n\n\tint rc = 1;\n\tlong new_exec_time;\n\tchar *exec_time_ctime;\n\n\tif ((msg == NULL) || (msg_len <= 0)) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"Bad msg buffer parameter!\");\n\t\treturn (2);\n\t}\n\n\tif ((pjob == NULL) || (new_exec_time_str == NULL) ||\n\t    (hook_name == NULL)) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: bad pjob, new_exec_time_str, or hook_name!\", __func__);\n\t\treturn (2);\n\t}\n\n\tnew_exec_time = atol(new_exec_time_str);\n\n\tif 
(new_exec_time == 0) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: Failed to convert %s into long\", __func__, new_exec_time_str);\n\t\treturn (2);\n\t}\n\n\texec_time_ctime = ctime(&new_exec_time);\n\tif (exec_time_ctime == NULL) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: Failed to decode new_exec_time into ctime str\", __func__);\n\t\treturn (2);\n\t}\n\texec_time_ctime[strlen(exec_time_ctime) - 1] = '\\0';\n\n\tfree_jattr(pjob, JOB_ATR_exectime);\n\n\trc = set_jattr_str_slim(pjob, JOB_ATR_exectime, new_exec_time_str, NULL);\n\n\tif (rc == 0) {\n\t\tif (job_attr_def[(int) JOB_ATR_exectime].at_action) {\n\t\t\trc = job_attr_def[(int) JOB_ATR_exectime].at_action(\n\t\t\t\tget_jattr(pjob, JOB_ATR_exectime),\n\t\t\t\tpjob, ATR_ACTION_ALTER);\n\t\t\tif (rc != 0) {\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\"Failed executing JOB_ATR_exectime action function.\");\n\t\t\t}\n\t\t}\n\t} else {\n\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\"Failed decoding a value for JOB_ATR_exectime.\");\n\t}\n\n\tif (rc != 0) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"'%s' hook failed to set job's %s = %s\",\n\t\t\t hook_name,\n\t\t\t ATTR_a,\n\t\t\t exec_time_ctime);\n\t\tfree_jattr(pjob, JOB_ATR_exectime);\n\t} else {\n\t\tint newsub;\n\t\tchar newstate;\n\t\tFILE *fp_debug_out = NULL;\n\n\t\tsnprintf(msg, msg_len, \"'%s' hook set job's %s = %s\", hook_name, ATTR_a, exec_time_ctime);\n\t\tsvr_evaljobstate(pjob, &newstate, &newsub, 0);\n\t\tsvr_setjobstate(pjob, newstate, newsub);\n\n\t\tfp_debug_out = pbs_python_get_hook_debug_output_fp();\n\t\tif (fp_debug_out != NULL) {\n\t\t\tfprintf(fp_debug_out, \"%s.%s=%ld\\n\", EVENT_JOB_OBJECT, ATTR_a, new_exec_time);\n\t\t}\n\t}\n\n\treturn (rc);\n}\n\n/**\n * @brief\n * \t\tThis takes care of setting pjobs Hold_Types attribute to the\n * \t\tnew_hold_types_str.\n *\n * @see\n * \t\tdo_runjob_reject_actions\n *\n * @param[in,out]\tpjob - job structure\n * @param[in]\t\tnew_hold_types_str - new hold type which needs to be 
placed in job structure\n * @param[out]\t\tmsg - Any messages resulting from the action is logged in 'msg'\n * @param[in]\t\topval - set or unset will do on pjob based on the value of opval\n * @param[in]\t\tdelval - if unset is chosen delval will be assigned instead of 'new_hold_types_str'\n * @param[in]\t\tmsg_len - 'msg' is up to 'msg_len' size.\n * @param[in]\t\thook_name - the name of the hook where the set attribute\n *\t\t\t\t\tfunction is called.\n *\n * @return int\n * @retval \t0\tfor success.\n * @retval\t!= 0\totherwise.\n */\nstatic int\nset_hold_types(job *pjob, char *new_hold_types_str,\n\t       char *opval, char *delval, char *msg, int msg_len, char *hook_name)\n{\n\tlong old_hold;\n\tint do_release;\n\tint rc;\n\tchar newstate;\n\tint newsub;\n\n\tif ((msg == NULL) || (msg_len <= 0)) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"Bad msg buffer parameter\");\n\t\treturn (2);\n\t}\n\n\tif ((pjob == NULL) || (new_hold_types_str == NULL) ||\n\t    (opval == NULL) || (delval == NULL) || (hook_name == NULL)) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: bad pjob, new_hold_types_str, or hook_name\", __func__);\n\t\treturn (2);\n\t}\n\n\tif (strcmp(opval, PY_OPVAL_SUB) == 0)\n\t\tdo_release = 1;\n\telse\n\t\tdo_release = 0;\n\n\told_hold = get_jattr_long(pjob, JOB_ATR_hold);\n\n\trc = set_jattr_str_slim(pjob, JOB_ATR_hold, new_hold_types_str, NULL);\n\n\tif (rc != 0) {\n\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\"Failed decoding a value for JOB_ATR_hold.\");\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"'%s' hook failed to %s job's %s = %s\",\n\t\t\t hook_name,\n\t\t\t (do_release ? \"unset\" : \"set\"),\n\t\t\t ATTR_h,\n\t\t\t (do_release ? delval : new_hold_types_str));\n\t\tfree_jattr(pjob, JOB_ATR_hold);\n\t\treturn (rc);\n\t}\n\n\tsnprintf(msg, msg_len - 1,\n\t\t \"'%s' hook %s job's %s = %s\",\n\t\t hook_name,\n\t\t (do_release ? \"unset\" : \"set\"),\n\t\t ATTR_h,\n\t\t (do_release ? 
delval : new_hold_types_str));\n\n\tif (!do_release &&\n\t    (get_jattr_long(pjob, JOB_ATR_hold) != 0)) {\n\t\ttime_t now;\n\t\tchar date[32];\n\t\tchar buf[HOOK_BUF_SIZE];\n\t\t/* Note the hold time in the job comment. */\n\t\tnow = time(NULL);\n\t\tsnprintf(date, sizeof(date), \"%s\", (const char *) ctime(&now));\n\t\tdate[strcspn(date, \"\\n\")] = '\\0'; /* ctime() appends a newline */\n\t\t(void) snprintf(buf, sizeof(buf), \"Job held by '%s' hook on %s\",\n\t\t\t\thook_name, date);\n\t\tset_jattr_str_slim(pjob, JOB_ATR_Comment, buf, NULL);\n\t}\n\n\tif (old_hold != get_jattr_long(pjob, JOB_ATR_hold)) {\n\t\tFILE *fp_debug_out = NULL;\n\t\t/* indicate attributes changed */\n\t\tsvr_evaljobstate(pjob, &newstate, &newsub, 0);\n\t\tsvr_setjobstate(pjob, newstate, newsub);\n\n\t\tfp_debug_out = pbs_python_get_hook_debug_output_fp();\n\t\tif (fp_debug_out != NULL) {\n\t\t\tfprintf(fp_debug_out, \"%s.%s=%s\\n\", EVENT_JOB_OBJECT, ATTR_h, new_hold_types_str);\n\t\t}\n\t}\n\n\treturn (rc);\n}\n\n/**\n * @brief\n *    \tset_attribute\t- This function will set pjob's attribute internally indexed\n *     \t\t\t\t\t\tat 'attr_index' to 'new_str'.\n *\n * @see\n * \t\tset_job_varlist, do_runjob_accept_actions and do_runjob_reject_actions\n *\n * @param[in]\tpjob - job in question\n * @param[in]\tattr_index - index to internal job table holding attribute info\n * @param[out]\tmsg - filled in with the actual error message if this function fails\n * @param[in]\tmsg_len - size of 'msg' buffer.\n * @param[in]\thook_name - the name of the hook where the set attribute\n *\t\t\t\tfunction is called.\n *\n * @return int\n * @retval \t0\tfor success.\n * @retval\t!= 0\totherwise.\n */\nstatic int\nset_attribute(job *pjob, int attr_index,\n\t      char *new_str, char *msg, int msg_len, char *hook_name)\n{\n\n\tint rc = 1;\n\tchar *attr_name = NULL;\n\tchar *new_attrval_str = NULL;\n\n\tif ((msg == NULL) || (msg_len <= 0)) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"Bad msg buffer parameter!\");\n\t\treturn (2);\n\t}\n\n\tif ((pjob == NULL) || (new_str == NULL) || (hook_name 
== NULL)) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: bad pjob, new_attrval_str, or hook_name!\", __func__);\n\t\treturn (2);\n\t}\n\n\tattr_name = job_attr_def[attr_index].at_name;\n\tif (attr_name == NULL) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: bad job attribute name indexed at '%d'!\", __func__,\n\t\t\t attr_index);\n\t\treturn (2);\n\t}\n\n\t/*\n\t * Need to dup 'new_str' since, when it is fed to the job attribute's\n\t * decode function, we cannot guarantee that the value will not get \"munged\".\n\t */\n\tnew_attrval_str = strdup(new_str);\n\tif (new_attrval_str == NULL) {\n\t\tlog_err(errno, __func__, \"strdup\");\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"%s: strdup failed (errno=%d)\", __func__, errno);\n\t\treturn (2);\n\t}\n\n\tif (strcmp(attr_name, ATTR_depend) == 0) {\n\t\tchar *pdepend;\n\n\t\tpdepend = malloc(PBS_DEPEND_LEN);\n\t\tif (pdepend == NULL) {\n\t\t\tlog_err(errno, __func__, \"malloc\");\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"%s: malloc failed (errno=%d)\", __func__, errno);\n\t\t\tfree(new_attrval_str);\n\t\t\treturn (2);\n\t\t}\n\n\t\t/* below replaces short jobid with full jobid */\n\t\tif (parse_depend_list(new_attrval_str, &pdepend,\n\t\t\t\t      PBS_DEPEND_LEN) != 0) {\n\t\t\tfree(pdepend);\n\t\t\tlog_err(errno, __func__, \"parse_depend_list\");\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"%s: failed to parse_depend_list(%s) (errno=%d)\",\n\t\t\t\t __func__, new_attrval_str, errno);\n\t\t\tfree(new_attrval_str);\n\t\t\treturn (2);\n\t\t}\n\n\t\t/* replace the value with the expanded value */\n\t\tfree(new_attrval_str);\n\t\tnew_attrval_str = pdepend;\n\t}\n\n\tfree_jattr(pjob, attr_index);\n\n\trc = set_jattr_str_slim(pjob, attr_index, new_attrval_str, NULL);\n\tif (rc == 0) {\n\t\tif (job_attr_def[attr_index].at_action) {\n\t\t\trc = job_attr_def[attr_index].at_action(\n\t\t\t\tget_jattr(pjob, attr_index),\n\t\t\t\tpjob, ATR_ACTION_ALTER);\n\t\t\tif (rc != 0) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"Failed executing attribute '%s' action function.\",\n\t\t\t\t\t 
attr_name);\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t}\n\t\t}\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"%s=%s\", attr_name, new_attrval_str);\n\t\taccount_record(PBS_ACCT_ALTER, pjob, log_buffer);\n\n\t} else {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"Failed decoding a value for '%s'\", attr_name);\n\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t}\n\n\tif (rc != 0) {\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"'%s' hook failed to set job's %s = %s\",\n\t\t\t hook_name,\n\t\t\t attr_name,\n\t\t\t new_str);\n\t\tfree_jattr(pjob, attr_index);\n\t} else {\n\t\tFILE *fp_debug_out = NULL;\n\n\t\tsnprintf(msg, msg_len - 1,\n\t\t\t \"'%s' hook set job's %s = %s\",\n\t\t\t hook_name,\n\t\t\t attr_name,\n\t\t\t new_str);\n\n\t\tfp_debug_out = pbs_python_get_hook_debug_output_fp();\n\t\tif (fp_debug_out != NULL) {\n\t\t\tfprintf(fp_debug_out, \"%s.%s=%s\\n\", EVENT_JOB_OBJECT, attr_name, new_str);\n\t\t}\n\t}\n\n\tfree(new_attrval_str);\n\treturn (rc);\n}\n\n/**\n * @brief\n *\t\tSets the job's Variable_List value to the one set in the hook\n *\t\tscript, if they differ.\n *\n * @see\n * \t\tdo_runjob_accept_actions and do_runjob_reject_actions\n *\n * @param[in]\tpjob\t- job to set\n * @param[in]\thook_name - name of the hook that is setting the Variable_List.\n * @param[in]\tmsg\t- message buffer to be filled if error occurred\n * @param[in]\tmsg_len\t- size of 'msg' buffer.\n *\n * @return int\n * @retval\t0\t- for success (including if nothing got set)\n * @retval\t1\t- for error occurred setting job's Variable_List\n */\nstatic int\nset_job_varlist(job *pjob, char *hook_name, char *msg, int msg_len)\n{\n\tchar *orig_env_str = NULL;\n\tint i;\n\tsize_t elen;\n\tstruct array_strings *astr;\n\tchar *new_attrval_str = NULL;\n\tchar *pfrom, *end, *pc = NULL;\n\n\tif ((pjob == NULL) || (msg == NULL) || (msg_len <= 0)) {\n\t\tlog_err(-1, __func__, \"pjob, msg,or msg_len parameter is bad\");\n\t\treturn (1);\n\t}\n\n\tif 
(is_jattr_set(pjob, JOB_ATR_variables)) {\n\n\t\t/* transform raw Variable_List data into a string */\n\t\t/* of the form \"<var1>=<val1>,<var2>=<val2>,...\" with */\n\t\t/* special characters escaped with a backslash. */\n\t\tastr = get_jattr_arst(pjob, JOB_ATR_variables);\n\t\telen = 0;\n\t\tfor (i = 0; i < astr->as_usedptr; ++i) {\n\t\t\tpfrom = astr->as_string[i];\n\t\t\tend = pfrom + strlen(pfrom);\n\t\t\twhile (pfrom < end) {\n\t\t\t\t/* account for back-slashes required */\n\t\t\t\t/* to escape special characters */\n\t\t\t\tif (IS_SPECIAL_CHAR(*pfrom))\n\t\t\t\t\telen++;\n\t\t\t\telen++;\n\t\t\t\tpfrom++;\n\t\t\t}\n\n\t\t\telen++; /* add 1 for separator comma or ending NULL */\n\t\t}\n\n\t\tif (elen > 0) {\n\t\t\torig_env_str = (char *) malloc(elen);\n\t\t\tif (orig_env_str == NULL) {\n\t\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t\t \"malloc failure setting Variable_List for job %s\",\n\t\t\t\t\t pjob->ji_qs.ji_jobid);\n\t\t\t\tlog_err(errno, __func__, msg);\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t\tmemset(orig_env_str, '\\0', elen);\n\t\t\tfor (i = 0; i < astr->as_usedptr; ++i) {\n\t\t\t\tpfrom = astr->as_string[i];\n\t\t\t\tend = pfrom + strlen(pfrom);\n\n\t\t\t\t/* set destination string */\n\t\t\t\tif (i == 0) {\n\t\t\t\t\tpc = orig_env_str;\n\t\t\t\t} else {\n\t\t\t\t\t*pc++ = ',';\n\t\t\t\t}\n\n\t\t\t\twhile (pfrom < end) {\n\t\t\t\t\tif (IS_SPECIAL_CHAR(*pfrom))\n\t\t\t\t\t\t*pc++ = '\\\\';\n\t\t\t\t\t*pc++ = *pfrom++;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tnew_attrval_str = pbs_python_event_job_getval(ATTR_v);\n\n\tif ((orig_env_str != NULL) && (new_attrval_str != NULL)) {\n\t\tif (varlist_same(orig_env_str, new_attrval_str) == 1) {\n\t\t\t/* nothing to reset */\n\t\t\tnew_attrval_str = NULL;\n\t\t}\n\t}\n\n\tif (orig_env_str != NULL) {\n\t\tfree(orig_env_str);\n\t}\n\n\tif (new_attrval_str == NULL)\n\t\treturn (0); /* nothing to set */\n\n\tif (set_attribute(pjob, JOB_ATR_variables, new_attrval_str, msg,\n\t\t\t  msg_len - 1, hook_name) != 0) 
{\n\t\tlog_event(PBSEVENT_ERROR | PBSEVENT_FORCE, PBS_EVENTCLASS_JOB,\n\t\t\t  LOG_ERR, pjob->ji_qs.ji_jobid, msg);\n\t\treturn (1);\n\t} else {\n\t\tif (msg[0] != '\\0')\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  pjob->ji_qs.ji_jobid, msg);\n\t}\n\treturn (0);\n}\n\nenum hook_result {\n\tACCEPT_HOOK_EVENT,\n\tREJECT_HOOK_EVENT\n};\n\n/**\n * @brief\n *\t\tSets the job's Resource_List.<resc>* values to the ones set in the hook\n *\t\tscript. 'hook_mode' determines which set of resources (<resc>) is NOT\n *\t\tallowed to be modified.\n *\n * @see\n * \t\tdo_runjob_accept_actions and do_runjob_reject_actions\n *\n * @param[in]\tpjob\t- job to set\n * @param[in]\thook_name - name of the hook that is setting the Resource_List.\n * @param[in]\tmsg\t- message buffer to be filled if error occurred\n * @param[in]\tmsg_len\t- size of 'msg' buffer.\n * @param[in]\thook_mode - if ACCEPT_HOOK_EVENT the list of resources\n *\t\t\t     that are not modifiable are: \"nodect\", \"select\",\n *\t\t\t     \"place\", and any resource with the\n *\t\t\t     ATR_DFLAG_CVTSLT flag set. (i.e. those that\n *\t\t\t     convert to a select spec). 
If REJECT_HOOK_EVENT,\n *\t\t\t     only \"nodect\" is currently not modifiable.\n *\n * @return int\n * @retval\t0\t- for success (including if nothing got set)\n * @retval\t1\t- if an error occurred setting job's Resource_List\n */\nstatic int\nset_job_reslist(job *pjob, char *hook_name, char *msg, int msg_len,\n\t\tenum hook_result hook_mode)\n{\n\tchar *val_str_dup = NULL;\n\tchar *np = NULL;\n\tchar *np1 = NULL;\n\tchar *resc = NULL;\n\tchar *new_rescval_str = NULL;\n\tresource_def *rescdef;\n\tresource *prescjb;\n\tresource *presc;\n\tresource_def *pseldef = NULL;\n\tattribute *jb;\n\tint rc = 0;\n\tchar *new_attrval_str = NULL;\n\tFILE *fp_debug_out = NULL;\n\n\tif ((pjob == NULL) || (hook_name == NULL) || (msg == NULL) ||\n\t    (msg_len <= 0)) {\n\t\tlog_err(-1, __func__,\n\t\t\t\"pjob, hook_name, msg, or msg_len parameter is bad\");\n\t\treturn (1);\n\t}\n\n\tnew_attrval_str = pbs_python_event_job_getval(ATTR_l);\n\n\tif (new_attrval_str == NULL)\n\t\treturn (0); /* nothing to set */\n\n\tval_str_dup = strdup(new_attrval_str);\n\tif (val_str_dup == NULL) {\n\t\tlog_err(errno, __func__, \"strdup failed\");\n\t\treturn (1);\n\t}\n\n\tfp_debug_out = pbs_python_get_hook_debug_output_fp();\n\n\tjb = get_jattr(pjob, JOB_ATR_resource);\n\tnp = strtok(val_str_dup, \",\");\n\twhile (np != NULL) {\n\t\tresc = np;\n\t\tnp1 = strchr(np, '=');\n\t\tif (np1 != NULL)\n\t\t\t*np1 = '\\0';\n\n\t\tnew_rescval_str = pbs_python_event_jobresc_getval_hookset(ATTR_l, resc);\n\n\t\tif (new_rescval_str == NULL) {\n\t\t\tnp = strtok(NULL, \",\");\n\t\t\tcontinue;\n\t\t}\n\t\trescdef = find_resc_def(svr_resc_def, resc);\n\t\tif (rescdef == NULL) {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"Setting job '%s' attribute %s.%s failed: unknown resource\", pjob->ji_qs.ji_jobid, ATTR_l, resc);\n\n\t\t\tlog_event(PBSEVENT_ERROR | PBSEVENT_FORCE, PBS_EVENTCLASS_JOB, LOG_ERR, pjob->ji_qs.ji_jobid, msg);\n\n\t\t\tfree(val_str_dup);\n\t\t\treturn (1);\n\t\t}\n\n\t\tif (((hook_mode == 
ACCEPT_HOOK_EVENT) &&\n\t\t     ((strcmp(resc, \"nodect\") == 0) ||\n\t\t      (strcmp(resc, \"select\") == 0) ||\n\t\t      (strcmp(resc, \"place\") == 0) ||\n\t\t      ((rescdef->rs_flags & ATR_DFLAG_CVTSLT) != 0))) ||\n\t\t    ((hook_mode == REJECT_HOOK_EVENT) &&\n\t\t     (strcmp(resc, \"nodect\") == 0))) {\n\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"'%s' hook failed to set job's %s.%s = %s (not allowed)\",\n\t\t\t\t hook_name, ATTR_l, resc, new_rescval_str);\n\n\t\t\tlog_event(PBSEVENT_ERROR | PBSEVENT_FORCE,\n\t\t\t\t  PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t\t  pjob->ji_qs.ji_jobid, msg);\n\t\t\tfree(val_str_dup);\n\t\t\treturn (1);\n\t\t}\n\n\t\tprescjb = find_resc_entry(jb, rescdef);\n\n\t\tif (prescjb == NULL) {\n\t\t\tprescjb = add_resource_entry(jb, rescdef);\n\t\t}\n\n\t\tif (prescjb == NULL) {\n\t\t\tsnprintf(msg, msg_len - 1, \"'%s' hook failed to add job's %s.%s = %s\",\n\t\t\t\t hook_name,\n\t\t\t\t ATTR_l,\n\t\t\t\t resc,\n\t\t\t\t new_rescval_str);\n\t\t\tlog_event(PBSEVENT_ERROR | PBSEVENT_FORCE,\n\t\t\t\t  PBS_EVENTCLASS_JOB, LOG_ERR, pjob->ji_qs.ji_jobid,\n\t\t\t\t  msg);\n\t\t\tfree(val_str_dup);\n\t\t\treturn (1);\n\t\t}\n\n\t\tif ((rc = rescdef->rs_decode(&prescjb->rs_value,\n\t\t\t\t\t     ATTR_l, rescdef->rs_name, new_rescval_str)) != 0) {\n\t\t\tsnprintf(msg, msg_len - 1,\n\t\t\t\t \"'%s' hook failed to set job's %s.%s = %s\",\n\t\t\t\t hook_name, ATTR_l,\n\t\t\t\t resc, new_rescval_str);\n\t\t\tlog_event(PBSEVENT_ERROR | PBSEVENT_FORCE,\n\t\t\t\t  PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t\t  pjob->ji_qs.ji_jobid, msg);\n\t\t\tfree(val_str_dup);\n\t\t\treturn (1);\n\t\t}\n\n\t\tsnprintf(msg, msg_len - 1, \"'%s' hook set job's %s.%s = %s\",\n\t\t\t hook_name,\n\t\t\t ATTR_l,\n\t\t\t resc,\n\t\t\t new_rescval_str);\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid, msg);\n\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"%s.%s=%s\", ATTR_l, resc, new_rescval_str);\n\t\taccount_record(PBS_ACCT_ALTER, pjob, 
log_buffer);\n\n\t\tif (fp_debug_out != NULL) {\n\t\t\tfprintf(fp_debug_out, \"%s.%s[%s]=%s\\n\", EVENT_JOB_OBJECT, ATTR_l, resc, new_rescval_str);\n\t\t}\n\t\tnp = strtok(NULL, \",\");\n\t}\n\n\t/* The following forces new job's Resource_List values to be seen */\n\t/* in qstat -f */\n\tjb->at_flags |= ATR_MOD_MCACHE;\n\n\tpseldef = &svr_resc_def[RESC_SELECT];\n\tpresc = find_resc_entry(jb, pseldef);\n\tif (presc && (presc->rs_value.at_flags & ATR_VFLAG_DEFLT)) {\n\t\t/* changing Resource_List and select is a default */\n\t\t/* clear \"select\" so it is rebuilt in set_resc_deflt */\n\t\tpseldef->rs_free(&presc->rs_value);\n\t}\n\t(void) set_resc_deflt((void *) pjob, JOB_OBJECT, NULL);\n\tfree(val_str_dup);\n\treturn (0);\n}\n\n/* Associates a job attribute value to a slot in an attributes table. */\nstruct attribute_jobmap {\n\tenum job_atr attr_i; /* index to some table */\n\tattribute attr_val;  /* job attribute value */\n};\n\n/**\n * @brief\n *\t\tInitializes each entry of the attribute_jobmap table (a_map) to\n *\t\tthe value corresponding to the attribute entry in job attributes\n *\t\ttable.\n *\n * @see\n * \t\tdo_runjob_accept_actions and do_runjob_reject_actions\n *\n * @param[in]\tpjob - contains the original attribute values\n * @param[in]\ta_map - holds the saved attribute values.\n *\n * @return void\n */\nstatic void\nattribute_jobmap_init(job *pjob, struct attribute_jobmap *a_map)\n{\n\tint index, a_index;\n\n\tif ((pjob == NULL) || (a_map == NULL)) {\n\t\tlog_err(-1, __func__, \"bad pjob or a_map param\");\n\t\treturn;\n\t}\n\n\tfor (index = 0; (a_index = (int) a_map[index].attr_i) >= 0; ++index) {\n\t\tif (is_attr_set(&a_map[index].attr_val))\n\t\t\tfree_attr(job_attr_def, &a_map[index].attr_val, a_index);\n\n\t\tclear_attr(&a_map[index].attr_val, &job_attr_def[a_index]);\n\t\tif (is_jattr_set(pjob, a_index)) {\n\t\t\tjob_attr_def[a_index].at_set(\n\t\t\t\t&a_map[index].attr_val,\n\t\t\t\tget_jattr(pjob, a_index), SET);\n\t\t}\n\t}\n}\n\n/**\n * 
@brief\n *\t\tClear each entry of the attribute_jobmap table (a_map) and\n *\t\tzero out the memory.\n *\n * @see\n * \t\tdo_runjob_accept_actions and do_runjob_reject_actions\n *\n * @param[in]\ta_map - holds the saved attribute values.\n *\n * @return void\n */\nstatic void\nattribute_jobmap_clear(struct attribute_jobmap *a_map)\n{\n\tint index, a_index;\n\n\tif (a_map == NULL) {\n\t\tlog_err(-1, __func__, \"bad a_map param\");\n\t\treturn;\n\t}\n\n\tfor (index = 0; (a_index = (int) a_map[index].attr_i) >= 0; ++index) {\n\t\tif (is_attr_set(&a_map[index].attr_val))\n\t\t\tfree_attr(job_attr_def, &a_map[index].attr_val, a_index);\n\t\tclear_attr(&a_map[index].attr_val, &job_attr_def[a_index]);\n\t}\n}\n\n/**\n * @brief\n * \t\tRestores pjob's attribute values saved in 'a_map'.\n *\n * @see\n *\t\tprocess_hooks\n *\n * @param[in]\tpjob - contains the attribute values to be filled in\n * @param[in]\ta_map - holds the saved attribute values.\n *\n * @return void\n */\nstatic void\nattribute_jobmap_restore(job *pjob, struct attribute_jobmap *a_map)\n{\n\tint index, a_index;\n\tchar *attr_name = NULL;\n\tattribute *pattr, *pattr_o;\n\tattribute_def *pdef;\n\tchar newstate;\n\tint newsub;\n\n\tif ((pjob == NULL) || (a_map == NULL)) {\n\t\tlog_err(-1, __func__, \"bad pjob or a_map param\");\n\t\treturn;\n\t}\n\n\tfor (index = 0; (a_index = (int) a_map[index].attr_i) >= 0; ++index) {\n\t\tattr_name = job_attr_def[a_index].at_name;\n\t\tif (attr_name == NULL)\n\t\t\tcontinue;\n\n\t\tpattr = get_jattr(pjob, a_index); /* current value */\n\t\tpattr_o = &a_map[index].attr_val; /* original value */\n\t\tpdef = &job_attr_def[a_index];\n\n\t\t/* if there's a saved value, then use it */\n\t\tif (is_attr_set(pattr_o)) {\n\n\t\t\tif (pdef->at_comp != NULL) {\n\t\t\t\tif (pdef->at_type == ATR_TYPE_RESC) {\n\t\t\t\t\tif ((pdef->at_comp(pattr_o, pattr) == 0) && (comp_resc_gt == 0) && (comp_resc_lt == 0) && (comp_resc_nc == 0)) {\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t} else if 
(pdef->at_type == ATR_TYPE_ARST) {\n\t\t\t\t\t/* compare if both are substrings of each other */\n\t\t\t\t\tif ((pdef->at_comp(pattr, pattr_o) == 0) && (pdef->at_comp(pattr_o, pattr) == 0)) {\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t} else if (pdef->at_comp(pattr, pattr_o) == 0) {\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (pdef->at_free) {\n\t\t\t\tpdef->at_free(pattr);\n\t\t\t}\n\t\t\tif (pdef->at_set(pattr, pattr_o, SET) == 0) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"restored job %s's previous value\",\n\t\t\t\t\t attr_name);\n\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t  LOG_INFO, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t}\n\t\t} else if (is_attr_set(pattr)) {\n\t\t\t/* original/saved value is unset, and yet current */\n\t\t\t/* value is set, need to revert to unset state */\n\t\t\tif (pdef->at_free) {\n\t\t\t\tpdef->at_free(pattr);\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"restored job %s's previous unset value\",\n\t\t\t\t\t attr_name);\n\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t  LOG_INFO, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t}\n\t\t}\n\t}\n\n\tsvr_evaljobstate(pjob, &newstate, &newsub, 0);\n\tsvr_setjobstate(pjob, newstate, newsub);\n}\n\n/*\n * the following is a list of attributes modifiable in a HOOK_EVENT_RUNJOB\n * hook that ends in a pbs.event().accept().\n * If this list is updated, be sure to update the 'runjob_modifiable_jobattrs'\n * macro in src/lib/Libpython/pbs_python_svr_internal.c.\n */\nstruct attribute_jobmap runjob_accept_attrlist[] = {\n\t{JOB_ATR_outpath, {0}},\n\t{JOB_ATR_errpath, {0}},\n\t{JOB_ATR_resource, {0}},\n\t{JOB_ATR_variables, {0}},\n\t{JOB_ATR_create_resv_from_job, {0}},\n\t{(enum job_atr) - 1, {0}}};\n\n/**\n *\n * @brief\n *\t\tPerform the job updates to 'pjob' as a result of a RUNJOB hook\n *\t\tname 'hook_name' execution ending in a pbs.event().accept() call.\n *\n * @see\n * \t\tprocess_hooks\n *\n * 
@param[in]\tpjob\t- job to modify.\n * @param[in]\thook_name - hook doing the accept action.\n * @param[in]\tmsg\t- gets filled in with the error message in case of\n *\t\t\t  \tan error.\n * @param[in]\tmsg_len\t- size of the 'msg' buffer\n *\n * @return\tint\n * @retval\t0\t- for success\n * @retval\t1\t- for error, with 'msg' buffer filled.\n */\nstatic int\ndo_runjob_accept_actions(job *pjob, char *hook_name, char *msg, int msg_len)\n{\n\tint index, aindex;\n\tchar *attr_name = NULL;\n\tchar *new_attrval_str = NULL;\n\n\tif ((pjob == NULL) || (msg == NULL) || (msg_len <= 0) ||\n\t    (hook_name == NULL)) {\n\t\tlog_err(-1, __func__, \"bad pjob, msg, or hook_name param\");\n\t\treturn (1);\n\t}\n\n\tmsg[0] = '\\0';\n\n\tattribute_jobmap_init(pjob, runjob_accept_attrlist);\n\n\tfor (index = 0; (aindex = (int) runjob_accept_attrlist[index].attr_i) >= 0;\n\t     ++index) {\n\n\t\tattr_name = job_attr_def[aindex].at_name;\n\t\tif (attr_name == NULL) {\n\t\t\tlog_err(-1, __func__,\n\t\t\t\t\"encountered an unexpected NULL attr_name\");\n\t\t\tcontinue;\n\t\t}\n\t\tif (strcmp(attr_name, ATTR_v) == 0) {\n\t\t\tif (set_job_varlist(pjob, hook_name, msg,\n\t\t\t\t\t    msg_len - 1) != 0) {\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if (strcmp(attr_name, ATTR_l) == 0) {\n\t\t\tif (set_job_reslist(pjob, hook_name, msg,\n\t\t\t\t\t    msg_len - 1, ACCEPT_HOOK_EVENT) != 0) {\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t} else if ((strcmp(attr_name, ATTR_o) == 0) ||\n\t\t\t   (strcmp(attr_name, ATTR_create_resv_from_job) == 0) ||\n\t\t\t   (strcmp(attr_name, ATTR_e) == 0)) {\n\t\t\tnew_attrval_str =\n\t\t\t\tpbs_python_event_job_getval_hookset(attr_name,\n\t\t\t\t\t\t\t\t    NULL, 0, NULL, 0);\n\t\t\tif (new_attrval_str == NULL)\n\t\t\t\tcontinue;\n\n\t\t\tif (set_attribute(pjob, aindex, new_attrval_str, msg, msg_len - 1, hook_name) != 0) {\n\t\t\t\tlog_event(PBSEVENT_ERROR | PBSEVENT_FORCE, PBS_EVENTCLASS_JOB, LOG_ERR, pjob->ji_qs.ji_jobid, msg);\n\t\t\t\treturn (1);\n\t\t\t} else 
{\n\t\t\t\tif (msg[0] != '\\0')\n\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid, msg);\n\t\t\t}\n\t\t}\n\t}\n\n\t/*\n\t * Don't let the values in runjob_accept_attrlist (a_map) linger.\n\t * When resources get deleted and recreated between job runs, the\n\t * pointer to rs_defin can get out of sync causing a crash when\n\t * the data is reinitialized.\n\t */\n\tattribute_jobmap_clear(runjob_accept_attrlist);\n\n\treturn (0);\n}\n\n/*\n * the following is a list of attributes modifiable in a HOOK_EVENT_RUNJOB\n * hook that ends in a pbs.event().reject().\n * If this list is updated, be sure to update the 'runjob_modifiable_jobattrs'\n * macro in src/lib/Libpython/pbs_python_svr_internal.c.\n */\nstruct attribute_jobmap runjob_reject_attrlist[] = {\n\t{JOB_ATR_exectime, {0}},\n\t{JOB_ATR_hold, {0}},\n\t{JOB_ATR_project, {0}},\n\t{JOB_ATR_depend, {0}},\n\t{JOB_ATR_variables, {0}},\n\t{JOB_ATR_resource, {0}},\n\t{(enum job_atr) - 1, {0}}};\n\n/**\n * @brief\n *\t\tPerform the job updates to 'pjob' as a result of a RUNJOB hook\n *\t\texecution ending in a pbs.event().reject() call.\n *\n * @see\n * \t\tprocess_hooks\n * @param[in]\tpjob\t- job to modify.\n * @param[in]\thook_name - name of hook doing the reject action.\n *\n * @return int\n * @retval 0\t- success\n * @retval 1\t- fail\n */\nstatic int\ndo_runjob_reject_actions(job *pjob, char *hook_name)\n{\n\tint index, aindex;\n\tint rc = 0;\n\tchar *attr_name = NULL;\n\tchar *new_attrval_str = NULL;\n\n\tif ((pjob == NULL) || (hook_name == NULL)) {\n\t\tlog_err(-1, __func__, \"bad pjob or hook_name param\");\n\t\treturn (1);\n\t}\n\n\tattribute_jobmap_init(pjob, runjob_reject_attrlist);\n\n\tfor (index = 0; (aindex = (int) runjob_reject_attrlist[index].attr_i) >= 0;\n\t     ++index) {\n\n\t\tattr_name = job_attr_def[aindex].at_name;\n\t\tif (attr_name == NULL) {\n\t\t\tlog_err(-1, __func__,\n\t\t\t\t\"encountered an unexpected NULL 
attr_name\");\n\t\t\tcontinue;\n\t\t}\n\n\t\tlog_buffer[0] = '\\0';\n\t\tif (strcmp(attr_name, ATTR_a) == 0) {\n\n\t\t\tnew_attrval_str =\n\t\t\t\tpbs_python_event_job_getval_hookset(attr_name,\n\t\t\t\t\t\t\t\t    NULL, 0, NULL, 0);\n\n\t\t\tif (new_attrval_str == NULL)\n\t\t\t\tcontinue;\n\n\t\t\tif (set_exec_time(pjob, new_attrval_str, log_buffer,\n\t\t\t\t\t  LOG_BUF_SIZE, hook_name) != 0) {\n\n\t\t\t\tlog_event(PBSEVENT_ERROR | PBSEVENT_FORCE,\n\t\t\t\t\t  PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\trc = 1;\n\t\t\t\tbreak;\n\t\t\t} else {\n\t\t\t\tif (log_buffer[0] != '\\0')\n\t\t\t\t\tlog_event(PBSEVENT_JOB,\n\t\t\t\t\t\t  PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t\t\t\t  log_buffer);\n\t\t\t}\n\n\t\t} else if (strcmp(attr_name, ATTR_h) == 0) {\n\t\t\tchar hold_opval[HOOK_BUF_SIZE];\n\t\t\tchar hold_delval[HOOK_BUF_SIZE];\n\n\t\t\tnew_attrval_str =\n\t\t\t\tpbs_python_event_job_getval_hookset(ATTR_h,\n\t\t\t\t\t\t\t\t    hold_opval, HOOK_BUF_SIZE, hold_delval,\n\t\t\t\t\t\t\t\t    HOOK_BUF_SIZE);\n\t\t\tif (new_attrval_str == NULL)\n\t\t\t\tcontinue;\n\n\t\t\tif (set_hold_types(pjob, new_attrval_str,\n\t\t\t\t\t   hold_opval, hold_delval, log_buffer, LOG_BUF_SIZE,\n\t\t\t\t\t   hook_name) != 0) {\n\t\t\t\tlog_event(PBSEVENT_ERROR | PBSEVENT_FORCE,\n\t\t\t\t\t  PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t} else {\n\n\t\t\t\tif (log_buffer[0] != '\\0')\n\t\t\t\t\tlog_event(PBSEVENT_JOB,\n\t\t\t\t\t\t  PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t\t\t\t  log_buffer);\n\t\t\t}\n\n\t\t} else if (strcmp(attr_name, ATTR_v) == 0) {\n\t\t\tif (set_job_varlist(pjob, hook_name, log_buffer,\n\t\t\t\t\t    LOG_BUF_SIZE - 1) != 0) {\n\t\t\t\trc = 1;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t} else if (strcmp(attr_name, ATTR_l) == 0) {\n\t\t\tif (set_job_reslist(pjob, hook_name, log_buffer,\n\t\t\t\t\t    LOG_BUF_SIZE - 1, REJECT_HOOK_EVENT) != 
0) {\n\t\t\t\trc = 1;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t} else {\n\t\t\tnew_attrval_str =\n\t\t\t\tpbs_python_event_job_getval_hookset(attr_name,\n\t\t\t\t\t\t\t\t    NULL, 0, NULL, 0);\n\t\t\tif (new_attrval_str == NULL)\n\t\t\t\tcontinue;\n\n\t\t\tif (set_attribute(pjob, aindex,\n\t\t\t\t\t  new_attrval_str, log_buffer, LOG_BUF_SIZE,\n\t\t\t\t\t  hook_name) != 0) {\n\n\t\t\t\tlog_event(PBSEVENT_ERROR | PBSEVENT_FORCE,\n\t\t\t\t\t  PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\trc = 1;\n\t\t\t\tbreak;\n\t\t\t} else {\n\t\t\t\tif (log_buffer[0] != '\\0')\n\t\t\t\t\tlog_event(PBSEVENT_JOB,\n\t\t\t\t\t\t  PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t\t\t\t  log_buffer);\n\t\t\t}\n\t\t}\n\t}\n\n\t/* update the eligible time to JOB_INITIAL */\n\tif (is_sattr_set(SVR_ATR_EligibleTimeEnable) && get_sattr_long(SVR_ATR_EligibleTimeEnable) == 1)\n\t\tupdate_eligible_time(JOB_INITIAL, pjob);\n\n\t/*\n\t * Don't let the values in runjob_reject_attrlist (a_map) linger.\n\t * When resources get deleted and recreated between job runs, the\n\t * pointer to rs_defin can get out of sync causing a crash when\n\t * the data is reinitialized.\n\t */\n\tattribute_jobmap_clear(runjob_reject_attrlist);\n\n\treturn (rc);\n}\n\n/*\n * the following is a list of attributes modifiable in a HOOK_EVENT_POSTQUEUEJOB\n * hook that ends in a pbs.event().accept().\n */\nstruct attribute_jobmap postqueuejob_accept_attrlist[] = {\n\t{JOB_ATR_hold, {0}},\n\t{JOB_ATR_project, {0}},\n\t{JOB_ATR_variables, {0}},\n\t{JOB_ATR_resource, {0}},\n\t{(enum job_atr) - 1, {0}}};\n\n/**\n * @brief\n *\t\tPerform the job updates to 'pjob' as a result of a POSTQUEUEJOB hook\n *\t\texecution ending in a pbs.event().accept() call.\n *\n * @see\n * \t\tprocess_hooks\n * @param[in]\tpjob\t- job to modify.\n * @param[in]\thook_name - name of hook doing the accept action.\n *\n * @return int\n * @retval 0\t- success\n * @retval 1\t- fail\n */\nstatic 
int\ndo_postqueuejob_accept_actions(job *pjob, char *hook_name)\n{\n\tint index, aindex;\n\tint rc = 0;\n\tchar *attr_name = NULL;\n\tchar *new_attrval_str = NULL;\n\n\tif ((pjob == NULL) || (hook_name == NULL)) {\n\t\tlog_err(-1, __func__, \"bad pjob or hook_name param\");\n\t\treturn (1);\n\t}\n\n\tattribute_jobmap_init(pjob, postqueuejob_accept_attrlist);\n\n\tfor (index = 0; (aindex = (int) postqueuejob_accept_attrlist[index].attr_i) >= 0;\n\t     ++index) {\n\n\t\tattr_name = job_attr_def[aindex].at_name;\n\t\tif (attr_name == NULL) {\n\t\t\tlog_err(-1, __func__,\n\t\t\t\t\"encountered an unexpected NULL attr_name\");\n\t\t\tcontinue;\n\t\t}\n\n\t\tlog_buffer[0] = '\\0';\n\t\tif (strcmp(attr_name, ATTR_h) == 0) {\n\t\t\tchar hold_opval[HOOK_BUF_SIZE];\n\t\t\tchar hold_delval[HOOK_BUF_SIZE];\n\n\t\t\tnew_attrval_str =\n\t\t\t\tpbs_python_event_job_getval_hookset(ATTR_h,\n\t\t\t\t\t\t\t\t    hold_opval, HOOK_BUF_SIZE, hold_delval,\n\t\t\t\t\t\t\t\t    HOOK_BUF_SIZE);\n\t\t\tif (new_attrval_str == NULL)\n\t\t\t\tcontinue;\n\n\t\t\tif (set_hold_types(pjob, new_attrval_str,\n\t\t\t\t\t   hold_opval, hold_delval, log_buffer, LOG_BUF_SIZE,\n\t\t\t\t\t   hook_name) != 0) {\n\t\t\t\tlog_event(PBSEVENT_ERROR | PBSEVENT_FORCE,\n\t\t\t\t\t  PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t} else {\n\n\t\t\t\tif (log_buffer[0] != '\\0')\n\t\t\t\t\tlog_event(PBSEVENT_JOB,\n\t\t\t\t\t\t  PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t\t\t\t  log_buffer);\n\t\t\t}\n\n\t\t} else if (strcmp(attr_name, ATTR_v) == 0) {\n\t\t\tif (set_job_varlist(pjob, hook_name, log_buffer,\n\t\t\t\t\t    LOG_BUF_SIZE - 1) != 0) {\n\t\t\t\trc = 1;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t} else if (strcmp(attr_name, ATTR_l) == 0) {\n\t\t\tif (set_job_reslist(pjob, hook_name, log_buffer,\n\t\t\t\t\t    LOG_BUF_SIZE - 1, REJECT_HOOK_EVENT) != 0) {\n\t\t\t\trc = 1;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t} else {\n\t\t\tnew_attrval_str 
=\n\t\t\t\tpbs_python_event_job_getval_hookset(attr_name,\n\t\t\t\t\t\t\t\t    NULL, 0, NULL, 0);\n\t\t\tif (new_attrval_str == NULL)\n\t\t\t\tcontinue;\n\n\t\t\tif (set_attribute(pjob, aindex,\n\t\t\t\t\t  new_attrval_str, log_buffer, LOG_BUF_SIZE,\n\t\t\t\t\t  hook_name) != 0) {\n\n\t\t\t\tlog_event(PBSEVENT_ERROR | PBSEVENT_FORCE,\n\t\t\t\t\t  PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\trc = 1;\n\t\t\t\tbreak;\n\t\t\t} else {\n\t\t\t\tif (log_buffer[0] != '\\0')\n\t\t\t\t\tlog_event(PBSEVENT_JOB,\n\t\t\t\t\t\t  PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t\t\t\t  log_buffer);\n\t\t\t}\n\t\t}\n\t}\n\tattribute_jobmap_clear(postqueuejob_accept_attrlist);\n\treturn (rc);\n}\n\n/**\n * @brief\n * \t\tWrite into the hook debug output file, information about\n * \t\thook reject action, and close the hook debug output file stream.\n *\n * @param[in]\treject_msg\t- the hook reject message.\n *\n * @return void\n */\nvoid\nwrite_hook_reject_debug_output_and_close(char *reject_msg)\n{\n\tchar *hook_outfile;\n\tFILE *fp_debug_out = NULL;\n\n\tfp_debug_out = pbs_python_get_hook_debug_output_fp();\n\n\tif (fp_debug_out == NULL) {\n\t\t/* prepare to open file if output file pointer not stored */\n\t\thook_outfile = pbs_python_get_hook_debug_output_file();\n\t\tif ((hook_outfile != NULL) && (hook_outfile[0] != '\\0')) {\n\t\t\t/* need to open in append mode, as */\n\t\t\t/* process_hooks() may have */\n\t\t\t/* already written into this file. 
*/\n\t\t\tfp_debug_out = fopen(hook_outfile, \"a\");\n\t\t\tif (fp_debug_out == NULL) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"warning: open of hook debug output file %s failed!\",\n\t\t\t\t\t hook_outfile);\n\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t} else {\n\t\t\t\tpbs_python_set_hook_debug_output_fp(fp_debug_out);\n\t\t\t}\n\t\t}\n\t}\n\n\tif (fp_debug_out != NULL) {\n\t\tfprintf(fp_debug_out, \"%s=True\\n\",\n\t\t\tEVENT_REJECT_OBJECT);\n\t\tfprintf(fp_debug_out, \"%s=False\\n\",\n\t\t\tEVENT_ACCEPT_OBJECT);\n\t\tif (reject_msg != NULL)\n\t\t\tfprintf(fp_debug_out, \"%s=%s\\n\",\n\t\t\t\tEVENT_REJECT_MSG_OBJECT, reject_msg);\n\t\tfclose(fp_debug_out);\n\t\tpbs_python_set_hook_debug_output_fp(NULL);\n\t}\n}\n\n/**\n * @brief\n * \t\tWrite information about the hook accept action into the hook\n * \t\tdebug output file, and close out the hook debug output file\n * \t\tstream.\n *\n * @see\n * \t\tprocess_hooks\n *\n * @return void\n */\nvoid\nwrite_hook_accept_debug_output_and_close(void)\n{\n\tchar *hook_outfile;\n\tFILE *fp_debug_out = NULL;\n\n\tfp_debug_out = pbs_python_get_hook_debug_output_fp();\n\n\tif (fp_debug_out == NULL) {\n\t\t/* prepare to open file if output file pointer not stored */\n\t\thook_outfile = pbs_python_get_hook_debug_output_file();\n\t\tif ((hook_outfile != NULL) && (hook_outfile[0] != '\\0')) {\n\t\t\t/* need to open in append mode, as */\n\t\t\t/* process_hooks() may have */\n\t\t\t/* already written into this file. 
*/\n\t\t\tfp_debug_out = fopen(hook_outfile, \"a\");\n\t\t\tif (fp_debug_out == NULL) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"warning: open of hook debug output file %s failed!\",\n\t\t\t\t\t hook_outfile);\n\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t} else {\n\t\t\t\tpbs_python_set_hook_debug_output_fp(fp_debug_out);\n\t\t\t}\n\t\t}\n\t}\n\n\tif (fp_debug_out != NULL) {\n\t\tfprintf(fp_debug_out, \"%s=True\\n\",\n\t\t\tEVENT_ACCEPT_OBJECT);\n\t\tfprintf(fp_debug_out, \"%s=False\\n\",\n\t\t\tEVENT_REJECT_OBJECT);\n\t\tfclose(fp_debug_out);\n\t\tpbs_python_set_hook_debug_output_fp(NULL);\n\t}\n}\n\n/**\n * @brief\n * \t\tGet the vnode attributes and resource list.\n *\t\t\tEach node attribute will be of the format:\n *\t\t\t<node_name>.<attr_name>\n *\n * @see\n * \t\trun_periodic_hook\n *\n * @return\tpbs_list_head *\n * @retval\tpointer to the list of encoded vnode attributes\n */\npbs_list_head *\nget_vnode_list(void)\n{\n\tint i;\n\tint index;\n\tchar name_str_buf[STRBUF + 1] = {'\\0'};\n\tstruct pbsnode *pnode = NULL;\n\tattribute_def *padef = node_attr_def;\n\n\tCLEAR_HEAD(vnode_attr_list);\n\tfor (i = 0; i < svr_totnodes; i++) {\n\t\tpnode = pbsndlist[i];\n\t\tfor (index = 0; index < ND_ATR_LAST; index++) {\n\t\t\tif ((padef + index)->at_flags & ATR_VFLAG_SET) {\n\t\t\t\tstrncpy(name_str_buf, pnode->nd_name, STRBUF);\n\t\t\t\tstrcat(name_str_buf, \".\");\n\t\t\t\tstrncat(name_str_buf, (padef + index)->at_name, (STRBUF - strlen(name_str_buf)));\n\t\t\t\tif ((padef + index)->at_encode(get_nattr(pnode, index), &vnode_attr_list, name_str_buf, NULL, ATR_ENCODE_HOOK, NULL) < 0) {\n\t\t\t\t\tchar *msgbuf;\n\n\t\t\t\t\tpbs_asprintf(&msgbuf,\n\t\t\t\t\t\t     \"error on encoding node attributes: %s\",\n\t\t\t\t\t\t     name_str_buf);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG2,\n\t\t\t\t\t\t  PBS_EVENTCLASS_HOOK, LOG_ERR,\n\t\t\t\t\t\t  __func__, msgbuf);\n\t\t\t\t\tfree(msgbuf);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\treturn &vnode_attr_list;\n}\n\n/**\n * @brief\n * \t\tGet the reservation 
attribute and resource list.\n *\n * @see\n * \t\trun_periodic_hook\n *\n * @return\tpbs_list_head *\n * @retval\tpointer to the list of encoded reservation attributes\n */\npbs_list_head *\nget_resv_list(void)\n{\n\tint index;\n\tchar name_str_buf[STRBUF + 1] = {'\\0'};\n\tresc_resv *presv;\n\tattribute_def *padef = resv_attr_def;\n\n\tCLEAR_HEAD(resv_attr_list);\n\tpresv = (resc_resv *) GET_NEXT(svr_allresvs);\n\n\twhile (presv != NULL) {\n\t\tfor (index = 0; index < RESV_ATR_LAST; index++) {\n\t\t\tif ((padef + index)->at_flags & ATR_VFLAG_SET) {\n\t\t\t\tstrncpy(name_str_buf, presv->ri_qs.ri_resvID, STRBUF);\n\t\t\t\tstrcat(name_str_buf, \".\");\n\t\t\t\tstrncat(name_str_buf, (padef + index)->at_name, (STRBUF - strlen(name_str_buf)));\n\t\t\t\tif ((padef + index)->at_encode(get_rattr(presv, index), &resv_attr_list, name_str_buf, NULL, ATR_ENCODE_HOOK, NULL) < 0) {\n\t\t\t\t\tchar *msgbuf;\n\n\t\t\t\t\tpbs_asprintf(&msgbuf,\n\t\t\t\t\t\t     \"error on encoding reservation attributes: %s\",\n\t\t\t\t\t\t     name_str_buf);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG2,\n\t\t\t\t\t\t  PBS_EVENTCLASS_HOOK, LOG_ERR,\n\t\t\t\t\t\t  __func__, msgbuf);\n\t\t\t\t\tfree(msgbuf);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tpresv = (resc_resv *) GET_NEXT(presv->ri_allresvs);\n\t}\n\treturn &resv_attr_list;\n}\n\n/**\n * @brief\n *\n *\t\tProcess hook scripts based on request type.\n *\t\tThis loops through the matching list of\n *\t\thooks and executes the corresponding hook scripts.\n *\n * @see\n * \t\treq_modifyjob, req_movejob, req_quejob, req_resvSub, req_delete and req_runjob\n *\n * @param[in] \tpreq\t- the batch request\n * @param[out] \thook_msg  - upon failure, fill this buffer with the actual error\n *\t\t\t    message.\n * @param[in]   msg_len  - the size of 'hook_msg' buffer.\n * @param[in]   pyinter_func - the interrupt function used when hook has reached\n *\t\t\tits execution time limit (alarm). This function raises\n *\t\t\tsome signal to the calling process.\n *\t\t      Ex. 
pbs_python_set_interrupt() which sends\n *\t\t\t  an INT signal (ctrl-C)\n * @return\tint\n * @retval\t1 means all the executed hooks have agreed to accept the request\n * @retval \t0 means at least one hook rejected the request.\n * @retval\t2 means no hook script executed (special case).\n * @retval\t-1 an internal error occurred\n *\n * @par MT-safe: No\n */\nint\nprocess_hooks(struct batch_request *preq, char *hook_msg, size_t msg_len,\n\t      void (*pyinter_func)(void))\n{\n\thook *phook;\n\thook *phook_next = NULL;\n\tunsigned int hook_event;\n\thook_input_param_t req_ptr;\n\tpbs_list_head *head_ptr;\n\tjob *pjob = NULL;\n\tint t;\n\tchar *jobid = NULL;\n\tint num_run = 0;\n\tint rc = 1;\n\tint event_initialized = 0;\n\tconn_t *conn = NULL;\n\tchar *hostname = NULL;\n\n\tif (!svr_interp_data.interp_started) {\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_ERR, __func__, \"Python interpreter not started, skipping hooks\");\n\t\treturn (2);\n\t}\n\n\thook_input_param_init(&req_ptr);\n\tif (preq->rq_type == PBS_BATCH_QueueJob) {\n\t\thook_event = HOOK_EVENT_QUEUEJOB;\n\t\treq_ptr.rq_job = (struct rq_quejob *) &preq->rq_ind.rq_queuejob;\n\t\thead_ptr = &svr_queuejob_hooks;\n\t} else if (preq->rq_type == PBS_BATCH_PostQueueJob) {\n\t\thook_event = HOOK_EVENT_POSTQUEUEJOB;\n\t\treq_ptr.rq_postqueuejob = (struct rq_postqueuejob *) &preq->rq_ind.rq_postqueuejob;\n\t\thead_ptr = &svr_postqueuejob_hooks;\n\t\tjobid = ((struct rq_postqueuejob *) (req_ptr.rq_postqueuejob))->rq_jid;\n\t\tt = is_job_array(jobid);\n\t\tif ((t == IS_ARRAY_Single) || (t == IS_ARRAY_NO)) {\n\t\t\tpjob = find_job(jobid);\n\t\t}\n\t} else if (preq->rq_type == PBS_BATCH_SubmitResv) {\n\t\thook_event = HOOK_EVENT_RESVSUB;\n\t\treq_ptr.rq_job = (struct rq_quejob *) &preq->rq_ind.rq_queuejob;\n\t\thead_ptr = &svr_resvsub_hooks;\n\t} else if (preq->rq_type == PBS_BATCH_ModifyResv) {\n\t\thook_event = HOOK_EVENT_MODIFYRESV;\n\t\treq_ptr.rq_manage = 
(struct rq_manage *) &preq->rq_ind.rq_modify;\n\t\thead_ptr = &svr_modifyresv_hooks;\n\t} else if (preq->rq_type == PBS_BATCH_ModifyJob) {\n\t\thook_event = HOOK_EVENT_MODIFYJOB;\n\t\treq_ptr.rq_manage = (struct rq_manage *) &preq->rq_ind.rq_modify;\n\t\thead_ptr = &svr_modifyjob_hooks;\n\t\t/* Modifyjob hooks not run if requester is the scheduler */\n\t\tif ((preq->rq_user[0] != '\\0') && (strcmp(preq->rq_user, PBS_SCHED_DAEMON_NAME) == 0) && (pbs_conf.sched_modify_event == 0))\n\t\t\treturn (2);\n\t} else if (preq->rq_type == PBS_BATCH_MoveJob) {\n\t\thook_event = HOOK_EVENT_MOVEJOB;\n\t\treq_ptr.rq_move = (struct rq_move *) &preq->rq_ind.rq_move;\n\t\thead_ptr = &svr_movejob_hooks;\n\t} else if (preq->rq_type == PBS_BATCH_RunJob || preq->rq_type == PBS_BATCH_AsyrunJob ||\n\t\t   preq->rq_type == PBS_BATCH_AsyrunJob_ack) {\n\t\thook_event = HOOK_EVENT_RUNJOB;\n\t\treq_ptr.rq_run = (struct rq_runjob *) &preq->rq_ind.rq_run;\n\t\thead_ptr = &svr_runjob_hooks;\n\n\t\tjobid = ((struct rq_runjob *) (req_ptr.rq_run))->rq_jid;\n\t\tt = is_job_array(jobid);\n\t\tif ((t == IS_ARRAY_Single) || (t == IS_ARRAY_NO)) {\n\t\t\tpjob = find_job(jobid); /* regular job and single subjob */\n\t\t}\n\n\t\t/* an array job or range of subjobs will fall through with pjob set to NULL */\n\n\t\tif (pjob == NULL) {\n\t\t\tlog_event(PBSEVENT_DEBUG2,\n\t\t\t\t  PBS_EVENTCLASS_HOOK, LOG_ERR, __func__,\n\t\t\t\t  \"Did not find a job tied to runjob request!\");\n\t\t\treturn (-1);\n\t\t}\n\t} else if (preq->rq_type == PBS_BATCH_JobObit) {\n\t\thook_event = HOOK_EVENT_JOBOBIT;\n\t\treq_ptr.rq_obit = (struct rq_jobobit *) &preq->rq_ind.rq_obit;\n\t\thead_ptr = &svr_jobobit_hooks;\n\t} else if (preq->rq_type == PBS_BATCH_Manager) {\n\t\thook_event = HOOK_EVENT_MANAGEMENT;\n\t\tpreq->rq_ind.rq_management.rq_reply = &preq->rq_reply;\n\t\tpreq->rq_ind.rq_management.rq_time = preq->rq_time;\n\t\t/* Copying the pointer to rq_management below is safe since\n\t\treq_manager() bumps the reference count 
on preq */\n\t\treq_ptr.rq_manage = (struct rq_manage *) &preq->rq_ind.rq_management;\n\t\thead_ptr = &svr_management_hooks;\n\t} else if (preq->rq_type == PBS_BATCH_ModifyVnode) {\n\t\thook_event = HOOK_EVENT_MODIFYVNODE;\n\t\treq_ptr.rq_modifyvnode = (struct rq_modifyvnode *) &preq->rq_ind.rq_modifyvnode;\n\t\thead_ptr = &svr_modifyvnode_hooks;\n\t} else if (preq->rq_type == PBS_BATCH_HookPeriodic) {\n\t\thook_event = HOOK_EVENT_PERIODIC;\n\t\thead_ptr = &svr_periodic_hooks;\n\t} else if (preq->rq_type == PBS_BATCH_DeleteResv || preq->rq_type == PBS_BATCH_ResvOccurEnd) {\n\t\thook_event = HOOK_EVENT_RESV_END;\n\t\treq_ptr.rq_manage = (struct rq_manage *) &preq->rq_ind.rq_delete;\n\t\thead_ptr = &svr_resv_end_hooks;\n\t} else if (preq->rq_type == PBS_BATCH_BeginResv) {\n\t\thook_event = HOOK_EVENT_RESV_BEGIN;\n\t\treq_ptr.rq_manage = (struct rq_manage *) &preq->rq_ind.rq_resresvbegin;\n\t\thead_ptr = &svr_resv_begin_hooks;\n\t} else if (preq->rq_type == PBS_BATCH_ConfirmResv) {\n\t\thook_event = HOOK_EVENT_RESV_CONFIRM;\n\t\treq_ptr.rq_run = (struct rq_runjob *) &preq->rq_ind.rq_run;\n\t\thead_ptr = &svr_resv_confirm_hooks;\n\t} else {\n\t\treturn (-1); /* unexpected event encountered */\n\t}\n\n\tmemset(hook_msg, '\\0', msg_len);\n\n\t/* initialize global flags */\n\tpbs_python_event_accept();\n\n\tfor (phook = (hook *) GET_NEXT(*head_ptr); phook; phook = phook_next) {\n\n\t\tif (preq->rq_type == PBS_BATCH_QueueJob) {\n\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_queuejob_hooks);\n\t\t} else if (preq->rq_type == PBS_BATCH_PostQueueJob) {\n\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_postqueuejob_hooks);\n\t\t} else if (preq->rq_type == PBS_BATCH_SubmitResv) {\n\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_resvsub_hooks);\n\t\t} else if (preq->rq_type == PBS_BATCH_ModifyResv) {\n\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_modifyresv_hooks);\n\t\t} else if (preq->rq_type == PBS_BATCH_ModifyJob) {\n\t\t\tphook_next = (hook *) 
GET_NEXT(phook->hi_modifyjob_hooks);\n\t\t} else if (preq->rq_type == PBS_BATCH_MoveJob) {\n\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_movejob_hooks);\n\t\t} else if (preq->rq_type == PBS_BATCH_RunJob || preq->rq_type == PBS_BATCH_AsyrunJob ||\n\t\t\t   preq->rq_type == PBS_BATCH_AsyrunJob_ack) {\n\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_runjob_hooks);\n\t\t} else if (preq->rq_type == PBS_BATCH_JobObit) {\n\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_jobobit_hooks);\n\t\t} else if (preq->rq_type == PBS_BATCH_Manager) {\n\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_management_hooks);\n\t\t} else if (preq->rq_type == PBS_BATCH_ModifyVnode) {\n\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_modifyvnode_hooks);\n\t\t} else if (preq->rq_type == PBS_BATCH_HookPeriodic) {\n\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_periodic_hooks);\n\t\t} else if (preq->rq_type == PBS_BATCH_ConfirmResv) {\n\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_resv_confirm_hooks);\n\t\t} else if (preq->rq_type == PBS_BATCH_BeginResv) {\n\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_resv_begin_hooks);\n\t\t} else if (preq->rq_type == PBS_BATCH_DeleteResv || preq->rq_type == PBS_BATCH_ResvOccurEnd) {\n\t\t\tphook_next = (hook *) GET_NEXT(phook->hi_resv_end_hooks);\n\t\t} else {\n\t\t\treturn (-1); /* should not get here */\n\t\t}\n\n\t\tif (phook->enabled == FALSE)\n\t\t\tcontinue;\n\n\t\tif (phook->user != HOOK_PBSADMIN)\n\t\t\tcontinue;\n\n\t\tif (phook->script == NULL) {\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_ERR, phook->hook_name,\n\t\t\t\t  \"Hook has no script content. 
Skipping hook.\");\n\t\t\tcontinue;\n\t\t}\n\n\t\tif (hook_event & HOOK_EVENT_PERIODIC) {\n\t\t\t(void) set_task(WORK_Timed, time_now + phook->freq, run_periodic_hook, phook);\n\t\t\tnum_run++;\n\t\t\tcontinue;\n\t\t}\n\t\tif (preq->rq_conn >= 0 && (conn = get_conn(preq->rq_conn)) != NULL) {\n\t\t\thostname = conn->cn_physhost;\n\t\t} else {\n\t\t\thostname = preq->rq_host;\n\t\t}\n\t\trc = server_process_hooks(preq->rq_type, preq->rq_user, hostname, phook,\n\t\t\t\t\t  hook_event, pjob, &req_ptr, hook_msg, msg_len, pyinter_func,\n\t\t\t\t\t  &num_run, &event_initialized);\n\t\tpbs_python_ext_free_global_dict(phook->script);\n\t\tif ((rc == 0) || (rc == -1)) {\n\t\t\tpbs_python_clear_attributes();\n\t\t\treturn (rc);\n\t\t}\n\t}\n\n\tif (num_run == 0)\n\t\treturn (2);\n\t/* clear attributes for the requests which don't call recreate_request */\n\tif ((preq->rq_type != PBS_BATCH_SubmitResv) && (preq->rq_type != PBS_BATCH_ModifyJob) &&\n\t\t\t(preq->rq_type != PBS_BATCH_QueueJob) && (preq->rq_type != PBS_BATCH_MoveJob) &&\n\t\t\t(preq->rq_type != PBS_BATCH_ModifyResv)) {\n\t\t\t\tpbs_python_clear_attributes();\n\t}\n\treturn 1;\n}\n/**\n * @brief\n *\n *\t\tThis function executes the hook script passed to it in\n *\t\thook structure.\n *\n * @param[in] \trq_type\t    - batch request type\n * @param[in] \trq_user\t    - batch request user\n * @param[in] \trq_host\t    - request host\n * @param[in]\tphook\t    - structure of the hook that needs to execute\n * @param[in]\thook_event  - hook event type\n * @param[in]\tpjob\t    - structure of job corresponding to which hook needs to run\n *\t\t\t      It is null when used with periodic hook.\n * @param[in]\treq_ptr\t    - Input parameters to be passed to the hook.\n * @param[in] \thook_msg  - upon failure, fill this buffer with the actual error\n *\t\t\t    message.\n * @param[in]   msg_len  - the size of 'hook_msg' buffer.\n * @param[in]   pyinter_func - the interrupt function used when hook has reached\n *\t\t\tits 
execution time limit (alarm). This function raises\n *\t\t\tsome signal to the calling process.\n *\t\t      Ex. pbs_python_set_interrupt() which sends\n *\t\t\t  an INT signal (ctrl-C)\n * @param[out]\tnum_run\t    - reference of an integer which is incremented when\n *\t\t\t      the hook runs successfully.\n * @param[in,out]\tevent_initialized - set to 1 once the Python event object\n *\t\t\t      has been created for this request.\n * @return\tint\n * @retval\t1 means the executed hook has agreed to accept the request\n * @retval \t0 means the hook rejected the request.\n * @retval\t2 means no hook script executed (special case).\n * @retval\t-1 an internal error occurred\n *\n * @par MT-safe: No\n */\nint\nserver_process_hooks(int rq_type, char *rq_user, char *rq_host, hook *phook,\n\t\t     int hook_event, job *pjob, hook_input_param_t *req_ptr,\n\t\t     char *hook_msg, int msg_len, void (*pyinter_func)(void),\n\t\t     int *num_run, int *event_initialized)\n{\n\n\tchar hook_inputfile[MAXPATHLEN + 1];\n\tchar hook_datafile[MAXPATHLEN + 1];\n\tchar hook_outfile[MAXPATHLEN + 1];\n\tFILE *fp_debug = NULL;\n\tFILE *fp2_debug = NULL;\n\tFILE *fp_debug_out = NULL;\n\tFILE *fp_debug_out_save = NULL;\n\tstatic char env_pbs_hook_config[2 * MAXPATHLEN + 1];\n\tchar hook_config_path[MAXPATHLEN + 1];\n\tstruct python_script *py_script = NULL;\n\tstruct stat sbuf;\n\tint rc;\n\tchar *p;\n\tstatic size_t suffix_sz;\n\thook_output_param_t req_params_out;\n\tpid_t mypid;\n\tpbs_list_head event_vnode;\n\tpbs_list_head event_resv;\n\tchar perf_label[MAXBUFLEN];\n\n\tif (phook == NULL) {\n\t\tlog_event(PBSEVENT_DEBUG3,\n\t\t\t  PBS_EVENTCLASS_HOOK, LOG_ERR,\n\t\t\t  __func__, \"no associated hook\");\n\t\treturn -1;\n\t}\n\n\tif (req_ptr == NULL) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"warning: empty hook input param!\");\n\t\tlog_event(PBSEVENT_DEBUG3,\n\t\t\t  PBS_EVENTCLASS_HOOK, LOG_ERR,\n\t\t\t  phook->hook_name, log_buffer);\n\t\treturn -1;\n\t}\n\n\tmypid = getpid();\n\tif (pjob != NULL)\n\t\tsnprintf(perf_label, sizeof(perf_label), 
\"hook_%s_%s_%s\", hook_event_as_string(hook_event), phook->hook_name, pjob->ji_qs.ji_jobid);\n\telse\n\t\tsnprintf(perf_label, sizeof(perf_label), \"hook_%s_%s_%d\", hook_event_as_string(hook_event), phook->hook_name, mypid);\n\n\thook_perf_stat_start(perf_label, \"server_process_hooks\", 1);\n\n\tif (suffix_sz == 0)\n\t\tsuffix_sz = strlen(HOOK_SCRIPT_SUFFIX);\n\n\t/* initialize various hook_debug_* instance */\n\tpbs_python_set_hook_debug_output_fp(NULL);\n\tpbs_python_set_hook_debug_output_file(\"\");\n\n\tif (phook->debug) {\n\t\tif (rq_type == PBS_BATCH_HookPeriodic)\n\t\t\tsnprintf(hook_inputfile, MAXPATHLEN, FMT_HOOK_INFILE, path_hooks_workdir,\n\t\t\t\t hook_event_as_string(hook_event), phook->hook_name, mypid);\n\t\telse\n\t\t\tsnprintf(hook_inputfile, MAXPATHLEN, FMT_HOOK_INFILE, path_hooks_workdir,\n\t\t\t\t hook_event_as_string(hook_event), phook->hook_name, (int) time(0));\n\n\t\tfp_debug = fopen(hook_inputfile, \"w\");\n\t\tif (fp_debug == NULL) {\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"warning: open of debug input file %s failed!\",\n\t\t\t\thook_inputfile);\n\t\t\tlog_event(PBSEVENT_DEBUG3,\n\t\t\t\t  PBS_EVENTCLASS_HOOK, LOG_ERR,\n\t\t\t\t  phook->hook_name, log_buffer);\n\t\t} else {\n\t\t\tpbs_python_set_hook_debug_input_fp(fp_debug);\n\t\t\tpbs_python_set_hook_debug_input_file(hook_inputfile);\n\t\t}\n\n\t\tif (rq_type == PBS_BATCH_HookPeriodic)\n\t\t\tsnprintf(hook_datafile, MAXPATHLEN, FMT_HOOK_DATAFILE,\n\t\t\t\t path_hooks_workdir, hook_event_as_string(hook_event),\n\t\t\t\t phook->hook_name, mypid);\n\t\telse\n\t\t\tsnprintf(hook_datafile, MAXPATHLEN, FMT_HOOK_DATAFILE,\n\t\t\t\t path_hooks_workdir, hook_event_as_string(hook_event),\n\t\t\t\t phook->hook_name, (int) time(0));\n\n\t\tfp2_debug = fopen(hook_datafile, \"w\");\n\t\tif (fp2_debug == NULL) {\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"warning: open of debug data file %s failed!\",\n\t\t\t\thook_datafile);\n\t\t\tlog_event(PBSEVENT_DEBUG3,\n\t\t\t\t  PBS_EVENTCLASS_HOOK, 
LOG_ERR,\n\t\t\t\t  phook->hook_name, log_buffer);\n\t\t} else {\n\t\t\tpbs_python_set_hook_debug_data_fp(fp2_debug);\n\t\t\tpbs_python_set_hook_debug_data_file(hook_datafile);\n\t\t}\n\t}\n\n\t/* optimization here - create an event object only if there's */\n\t/* at least one enabled hook */\n\tif (!(*event_initialized)) { /* only once for all hooks */\n\t\trc = pbs_python_event_set(hook_event, rq_user,\n\t\t\t\t\t  rq_host, req_ptr, perf_label);\n\n\t\tif (rc == -1) { /* internal server code failure */\n\t\t\tlog_event(PBSEVENT_DEBUG2,\n\t\t\t\t  PBS_EVENTCLASS_HOOK, LOG_ERR,\n\t\t\t\t  phook->hook_name,\n\t\t\t\t  \"Encountered an error while setting event\");\n\t\t}\n\t\t*event_initialized = 1;\n\n\t} else if (phook->debug && (fp_debug != NULL)) {\n\t\t/* If we have several hooks attached to the same*/\n\t\t/* hook event, the first hook that runs */\n\t\t/* will call pbs_python_event_set() (above if case), */\n\t\t/* which will generate the hook input */\n\t\t/* debug file. On the next hook and succeeding hooks */\n\t\t/* that execute, we'll need to generate the */\n\t\t/* intermediate hook input debug file (based on */\n\t\t/* changes made by the previous hooks), by calling */\n\t\t/* recreate_request() on a 'temp_req' structure */\n\t\t/* that will be discarded (not acted upon). 
*/\n\t\tstruct batch_request *temp_req;\n\t\tint do_recreate = 0;\n\n\t\ttemp_req = alloc_br(rq_type);\n\t\tif (temp_req != NULL) {\n\t\t\tswitch (rq_type) {\n\t\t\t\tcase PBS_BATCH_QueueJob:\n\t\t\t\tcase PBS_BATCH_SubmitResv:\n\t\t\t\t\tCLEAR_HEAD(temp_req->rq_ind.rq_queuejob.rq_attr);\n\t\t\t\t\tdo_recreate = 1;\n\t\t\t\t\tbreak;\n\t\t\t\tcase PBS_BATCH_PostQueueJob:\n\t\t\t\t\tCLEAR_HEAD(temp_req->rq_ind.rq_postqueuejob.rq_attr);\n\t\t\t\t\tdo_recreate = 1;\n\t\t\t\t\tbreak;\n\t\t\t\tcase PBS_BATCH_ModifyJob:\n\t\t\t\t\tCLEAR_HEAD(temp_req->rq_ind.rq_modify.rq_attr);\n\t\t\t\t\tdo_recreate = 1;\n\t\t\t\t\tbreak;\n\t\t\t\tcase PBS_BATCH_Manager:\n\t\t\t\t\tCLEAR_HEAD(temp_req->rq_ind.rq_manager.rq_attr);\n\t\t\t\t\tdo_recreate = 0;\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\tdo_recreate = 0;\n\t\t\t}\n\t\t\tif (do_recreate) {\n\t\t\t\tfp_debug_out_save = pbs_python_get_hook_debug_output_fp();\n\t\t\t\tpbs_python_set_hook_debug_output_fp(fp_debug);\n\t\t\t\t/* recreate_request() appends */\n\t\t\t\t/* pbs.event().job or */\n\t\t\t\t/* pbs.event().resv values from */\n\t\t\t\t/* previous hooks execution into */\n\t\t\t\t/* 'temp_req' structure, which */\n\t\t\t\t/* results also in the values being */\n\t\t\t\t/* written into the file represented */\n\t\t\t\t/* by 'fp_debug'. 
*/\n\t\t\t\t(void) recreate_request(temp_req);\n\t\t\t\tpbs_python_set_hook_debug_output_fp(fp_debug_out_save);\n\t\t\t}\n\t\t\tfree_br(temp_req);\n\t\t} else {\n\t\t\tlog_event(PBSEVENT_DEBUG3,\n\t\t\t\t  PBS_EVENTCLASS_HOOK, LOG_WARNING,\n\t\t\t\t  phook->hook_name,\n\t\t\t\t  \"warning: can't generate complete hook input file due to malloc failure.\");\n\t\t}\n\t}\n\t/* hook_name changes for each hook */\n\t/* This sets Python event object's hook_name value */\n\trc = pbs_python_event_set_attrval(PY_EVENT_HOOK_NAME,\n\t\t\t\t\t  phook->hook_name);\n\n\tif (rc == -1) {\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_ERR, phook->hook_name,\n\t\t\t  \"Failed to set event 'hook_name'.\");\n\t\tif (fp_debug != NULL) {\n\t\t\tfclose(fp_debug);\n\t\t\tfp_debug = NULL;\n\t\t\tpbs_python_set_hook_debug_input_fp(NULL);\n\t\t\tpbs_python_set_hook_debug_input_file(\"\");\n\t\t}\n\t\tif (fp2_debug != NULL) {\n\t\t\tfclose(fp2_debug);\n\t\t\tfp2_debug = NULL;\n\t\t\tpbs_python_set_hook_debug_data_fp(NULL);\n\t\t\tpbs_python_set_hook_debug_data_file(\"\");\n\t\t}\n\t\trc = -1;\n\t\tgoto server_process_hooks_exit;\n\t}\n\n\t/*\n\t * hook_type needed for internal processing;\n\t * hook_type changes for each hook.\n\t * This sets Python event object's hook_type value\n\t */\n\trc = pbs_python_event_set_attrval(PY_EVENT_HOOK_TYPE,\n\t\t\t\t\t  hook_type_as_string(phook->type));\n\n\tif (rc == -1) {\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_ERR, phook->hook_name,\n\t\t\t  \"Failed to set event 'hook_type'.\");\n\t\tif (fp_debug != NULL) {\n\t\t\tfclose(fp_debug);\n\t\t\tfp_debug = NULL;\n\t\t\tpbs_python_set_hook_debug_input_fp(NULL);\n\t\t\tpbs_python_set_hook_debug_input_file(\"\");\n\t\t}\n\t\tif (fp2_debug != NULL) {\n\t\t\tfclose(fp2_debug);\n\t\t\tfp2_debug = NULL;\n\t\t\tpbs_python_set_hook_debug_data_fp(NULL);\n\t\t\tpbs_python_set_hook_debug_data_file(\"\");\n\t\t}\n\t\tif (fp_debug_out != NULL) 
{\n\t\t\tfclose(fp_debug_out);\n\t\t\tfp_debug_out = NULL;\n\t\t\tpbs_python_set_hook_debug_output_fp(NULL);\n\t\t\tpbs_python_set_hook_debug_output_file(\"\");\n\t\t}\n\t\trc = -1;\n\t\tgoto server_process_hooks_exit;\n\t}\n\n\tif (rq_type == PBS_BATCH_HookPeriodic) {\n\t\tchar freq_str[12];\n\t\tsnprintf(freq_str, sizeof(freq_str), \"%d\", phook->freq);\n\t\trc = pbs_python_event_set_attrval(PY_EVENT_FREQ, freq_str);\n\n\t\tif (rc == -1) {\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_ERR, phook->hook_name,\n\t\t\t\t  \"Failed to set event 'freq'.\");\n\t\t\tif (fp_debug != NULL) {\n\t\t\t\tfclose(fp_debug);\n\t\t\t\tfp_debug = NULL;\n\t\t\t\tpbs_python_set_hook_debug_input_fp(NULL);\n\t\t\t\tpbs_python_set_hook_debug_input_file(\"\");\n\t\t\t}\n\t\t\tif (fp2_debug != NULL) {\n\t\t\t\tfclose(fp2_debug);\n\t\t\t\tfp2_debug = NULL;\n\t\t\t\tpbs_python_set_hook_debug_data_fp(NULL);\n\t\t\t\tpbs_python_set_hook_debug_data_file(\"\");\n\t\t\t}\n\t\t\tif (fp_debug_out != NULL) {\n\t\t\t\tfclose(fp_debug_out);\n\t\t\t\tfp_debug_out = NULL;\n\t\t\t\tpbs_python_set_hook_debug_output_fp(NULL);\n\t\t\t\tpbs_python_set_hook_debug_output_file(\"\");\n\t\t\t}\n\t\t\trc = -1;\n\t\t\tgoto server_process_hooks_exit;\n\t\t}\n\t}\n\n\tset_alarm(phook->alarm, pyinter_func);\n\n\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK,\n\t\t  LOG_INFO, phook->hook_name, \"started\");\n\n\tpbs_python_set_mode(PY_MODE); /* hook script mode */\n\n\t/* hook script may create files, and we don't want it to */\n\t/* be littering server's private directory. 
*/\n\t/* NOTE: path_hooks_workdir is periodically cleaned up */\n\tif (chdir(path_hooks_workdir) != 0) {\n\t\tlog_event(PBSEVENT_DEBUG2,\n\t\t\t  PBS_EVENTCLASS_HOOK, LOG_WARNING, phook->hook_name,\n\t\t\t  \"unable to go to hooks tmp directory\");\n\t}\n\tpbs_python_set_os_environ(PBS_HOOK_CONFIG_FILE, NULL);\n\t(void) pbs_python_set_pbs_hook_config_filename(NULL);\n\n\tstrncpy(env_pbs_hook_config, PBS_HOOK_CONFIG_FILE,\n\t\tsizeof(env_pbs_hook_config) - 1);\n\tpy_script = phook->script;\n\tif (py_script->path != NULL) {\n\t\tstrncpy(hook_config_path, py_script->path, sizeof(hook_config_path) - 1);\n\t\thook_config_path[sizeof(hook_config_path) - 1] = '\\0';\n\t\tp = strstr(hook_config_path, HOOK_SCRIPT_SUFFIX);\n\t\tif (p != NULL) {\n\t\t\t/* replace <HOOK_SCRIPT_SUFFIX> with */\n\t\t\t/* <HOOK_CONFIG_SUFFIX>. suffix_sz is */\n\t\t\t/* length of <HOOK_SCRIPT_SUFFIX> so as */\n\t\t\t/* to not overflow. */\n\t\t\tstrncpy(p, HOOK_CONFIG_SUFFIX, suffix_sz);\n\n\t\t\tif (stat(hook_config_path, &sbuf) == 0) {\n\t\t\t\tpbs_python_set_os_environ(\n\t\t\t\t\tPBS_HOOK_CONFIG_FILE,\n\t\t\t\t\thook_config_path);\n\t\t\t\t(void) pbs_python_set_pbs_hook_config_filename(hook_config_path);\n\t\t\t}\n\t\t}\n\t}\n\n\trc = pbs_python_check_and_compile_script(&svr_interp_data,\n\t\t\t\t\t\t phook->script);\n\n\t/* reset global flag to allow modification of */\n\t/* attributes and resources for every new hook execution. */\n\tpbs_python_event_param_mod_allow();\n\n\t/* Reset flag to restart scheduling cycle */\n\tpbs_python_no_scheduler_restart_cycle();\n\n\tif (rq_type == PBS_BATCH_RunJob || rq_type == PBS_BATCH_AsyrunJob || rq_type == PBS_BATCH_AsyrunJob_ack) {\n\t\t/* Clear dictionary that remembers previously */\n\t\t/* set ATTR_l resources in a hook script */\n\t\t/* Currently, only job ATTR_l resources can be */\n\t\t/* modified in a runjob hook. 
*/\n\t\tif (pbs_python_event_jobresc_clear_hookset(ATTR_l) != 0) {\n\t\t\tlog_event(PBSEVENT_DEBUG2,\n\t\t\t\t  PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_ERR, phook->hook_name,\n\t\t\t\t  \"Failed to clear jobresc hookset dictionary.\");\n\t\t\tif (fp_debug != NULL) {\n\t\t\t\tfclose(fp_debug);\n\t\t\t\tfp_debug = NULL;\n\t\t\t\tpbs_python_set_hook_debug_input_fp(NULL);\n\t\t\t\tpbs_python_set_hook_debug_input_file(\"\");\n\t\t\t}\n\t\t\tif (fp2_debug != NULL) {\n\t\t\t\tfclose(fp2_debug);\n\t\t\t\tfp2_debug = NULL;\n\t\t\t\tpbs_python_set_hook_debug_data_fp(NULL);\n\t\t\t\tpbs_python_set_hook_debug_data_file(\"\");\n\t\t\t}\n\t\t\tif (fp_debug_out != NULL) {\n\t\t\t\tfclose(fp_debug_out);\n\t\t\t\tfp_debug_out = NULL;\n\t\t\t\tpbs_python_set_hook_debug_output_fp(NULL);\n\t\t\t\tpbs_python_set_hook_debug_output_file(\"\");\n\t\t\t}\n\t\t\trc = -1;\n\t\t\tgoto server_process_hooks_exit;\n\t\t}\n\t}\n\n\tif (fp_debug != NULL) {\n\t\t/* print name of user requested queue if I am a queuejob hook */\n\t\tif (rq_type == PBS_BATCH_QueueJob) {\n\t\t\tchar *qname = ((struct rq_queuejob *) req_ptr->rq_job)->rq_destin;\n\t\t\t/* use default queue if user did not specify a queue for the job */\n\t\t\tif ((!qname || *qname == '\\0' || *qname == '@') && is_sattr_set(SVR_ATR_dflt_que))\n\t\t\t\tfprintf(fp_debug, \"%s.queue=%s\\n\", EVENT_JOB_OBJECT, get_sattr_str(SVR_ATR_dflt_que));\n\t\t\telse\n\t\t\t\tfprintf(fp_debug, \"%s.queue=%s\\n\", EVENT_JOB_OBJECT, qname);\n\t\t}\n\t\tfprintf(fp_debug, \"%s.%s=%s\\n\", PBS_OBJ, GET_NODE_NAME_FUNC,\n\t\t\t(char *) server_host);\n\t\tfprintf(fp_debug, \"%s.%s=%s\\n\", EVENT_OBJECT, PY_EVENT_TYPE,\n\t\t\thook_event_as_string(hook_event));\n\t\tfprintf(fp_debug, \"%s.%s=%s\\n\", EVENT_OBJECT, PY_EVENT_HOOK_NAME,\n\t\t\tphook->hook_name);\n\t\tfprintf(fp_debug, \"%s.%s=%s\\n\", EVENT_OBJECT, PY_EVENT_HOOK_TYPE,\n\t\t\thook_type_as_string(phook->type));\n\t\tfprintf(fp_debug, \"%s.%s=%s\\n\", EVENT_OBJECT, 
\"requestor\",\n\t\t\trq_user);\n\t\tfprintf(fp_debug, \"%s.%s=%s\\n\", EVENT_OBJECT, \"requestor_host\",\n\t\t\trq_host);\n\t\tfprintf(fp_debug, \"%s.%s=%s\\n\", EVENT_OBJECT, \"user\", hook_user_as_string(phook->user));\n\t\tfprintf(fp_debug, \"%s.%s=%d\\n\", EVENT_OBJECT, \"alarm\", phook->alarm);\n\t}\n\n\t/* let rc pass through */\n\tif (rc == 0) {\n\t\thook_perf_stat_start(perf_label, \"run_code\", 0);\n\t\trc = pbs_python_run_code_in_namespace(&svr_interp_data, phook->script, 0);\n\t\thook_perf_stat_stop(perf_label, \"run_code\", 0);\n\t}\n\n\tif (fp_debug != NULL) {\n\t\tfclose(fp_debug);\n\t\tfp_debug = NULL;\n\t\tpbs_python_set_hook_debug_input_fp(NULL);\n\t\tpbs_python_set_hook_debug_input_file(\"\");\n\t}\n\n\t/* set hook_debug_output_file for recreate_request(), set_* calls */\n\t/* to dump any hook results in the file. */\n\tif (phook->debug || rq_type == PBS_BATCH_HookPeriodic) {\n\t\tif (rq_type == PBS_BATCH_HookPeriodic)\n\t\t\tsnprintf(hook_outfile, MAXPATHLEN, FMT_HOOK_OUTFILE,\n\t\t\t\t path_hooks_workdir, hook_event_as_string(hook_event),\n\t\t\t\t phook->hook_name, mypid);\n\t\telse\n\t\t\tsnprintf(hook_outfile, MAXPATHLEN, FMT_HOOK_OUTFILE,\n\t\t\t\t path_hooks_workdir, hook_event_as_string(hook_event),\n\t\t\t\t phook->hook_name, (int) time(0));\n\n\t\tfp_debug_out = fopen(hook_outfile, \"w\");\n\t\tif (fp_debug_out == NULL) {\n\t\t\tchar *msgbuf;\n\n\t\t\tpbs_asprintf(&msgbuf,\n\t\t\t\t     \"warning: open of debug output file %s failed!\",\n\t\t\t\t     hook_outfile);\n\t\t\tlog_event(PBSEVENT_DEBUG3,\n\t\t\t\t  PBS_EVENTCLASS_HOOK, LOG_ERR,\n\t\t\t\t  phook->hook_name, msgbuf);\n\t\t\tfree(msgbuf);\n\t\t\tif (rq_type == PBS_BATCH_HookPeriodic) {\n\t\t\t\t/* we will need output file to read data from hook later */\n\t\t\t\trc = -1;\n\t\t\t\tgoto server_process_hooks_exit;\n\t\t\t}\n\t\t} else {\n\t\t\tfp_debug_out_save = pbs_python_get_hook_debug_output_fp();\n\t\t\tif (fp_debug_out_save != NULL) 
{\n\t\t\t\tfclose(fp_debug_out_save);\n\t\t\t}\n\t\t\tpbs_python_set_hook_debug_output_fp(fp_debug_out);\n\t\t\tpbs_python_set_hook_debug_output_file(hook_outfile);\n\t\t}\n\t} else {\n\t\tfp_debug_out_save = pbs_python_get_hook_debug_output_fp();\n\t\tif (fp_debug_out_save != NULL) {\n\t\t\tfclose(fp_debug_out_save);\n\t\t}\n\t\tpbs_python_set_hook_debug_output_fp(NULL);\n\t\t/* NOTE: don't call */\n\t\t/* pbs_python_set_hook_debug_output_file() as */\n\t\t/* we still need a file to dump any remaining */\n\t\t/* debug output in case all hooks end */\n\t\t/* up accepting the current event with some */\n\t\t/* hooks with debug=true and some that are */\n\t\t/* debug=false */\n\t}\n\n\tif (fp2_debug != NULL) {\n\t\tfclose(fp2_debug);\n\t\tfp2_debug = NULL;\n\t\tpbs_python_set_hook_debug_data_fp(NULL);\n\t\tpbs_python_set_hook_debug_data_file(\"\");\n\t}\n\n\t/* go back to server's private directory */\n\tif (chdir(path_priv) != 0) {\n\t\tlog_event(PBSEVENT_DEBUG2,\n\t\t\t  PBS_EVENTCLASS_HOOK, LOG_WARNING, phook->hook_name,\n\t\t\t  \"unable to go back to server private directory\");\n\t}\n\n\tpbs_python_set_mode(C_MODE); /* PBS C mode - flexible */\n\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK,\n\t\t  LOG_INFO, phook->hook_name, \"finished\");\n\tset_alarm(0, NULL);\n\n\tswitch (rc) {\n\t\tcase -1: /* internal error */\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_ERR, phook->hook_name,\n\t\t\t\t  \"Internal server error encountered. 
Skipping hook.\");\n\t\t\tif (fp_debug_out != NULL) {\n\t\t\t\tfclose(fp_debug_out);\n\t\t\t\tfp_debug_out = NULL;\n\t\t\t\tpbs_python_set_hook_debug_output_fp(NULL);\n\t\t\t\tpbs_python_set_hook_debug_output_file(\"\");\n\t\t\t}\n\t\t\trc = -1;\n\t\t\tgoto server_process_hooks_exit;\n\t\tcase -2: /* unhandled exception */\n\t\t\tpbs_python_event_reject(NULL);\n\t\t\tpbs_python_event_param_mod_disallow();\n\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"%s hook '%s' encountered an exception, \"\n\t\t\t\t \"request rejected\",\n\t\t\t\t hook_event_as_string(hook_event), phook->hook_name);\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_ERR, phook->hook_name, log_buffer);\n\t\t\tsnprintf(hook_msg, msg_len - 1,\n\t\t\t\t \"request rejected as filter hook '%s' encountered an \"\n\t\t\t\t \"exception. Please inform Admin\",\n\t\t\t\t phook->hook_name);\n\t\t\twrite_hook_reject_debug_output_and_close(hook_msg);\n\t\t\trc = 0;\n\t\t\tgoto server_process_hooks_exit;\n\t\tcase -3: /* alarm timeout */\n\t\t\tpbs_python_event_reject(NULL);\n\t\t\tpbs_python_event_param_mod_disallow();\n\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"alarm call while running %s hook '%s', \"\n\t\t\t\t \"request rejected\",\n\t\t\t\t hook_event_as_string(hook_event), phook->hook_name);\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_ERR, phook->hook_name, log_buffer);\n\t\t\tsnprintf(hook_msg, msg_len - 1,\n\t\t\t\t \"request rejected as filter hook '%s' got an \"\n\t\t\t\t \"alarm call. 
Please inform Admin\",\n\t\t\t\t phook->hook_name);\n\t\t\twrite_hook_reject_debug_output_and_close(hook_msg);\n\t\t\trc = 0;\n\t\t\tgoto server_process_hooks_exit;\n\t}\n\t*num_run += 1;\n\tif (pbs_python_get_scheduler_restart_cycle_flag() == TRUE) {\n\n\t\tset_scheduler_flag(SCH_SCHEDULE_RESTART_CYCLE, dflt_scheduler);\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_INFO, phook->hook_name,\n\t\t\t  \"requested for scheduler to restart cycle\");\n\t}\n\n\t/* reject if at least one hook script rejects */\n\tif (pbs_python_event_get_accept_flag() == FALSE) {\n\t\tchar *emsg = NULL;\n\n\t\tif (rq_type == PBS_BATCH_RunJob || rq_type == PBS_BATCH_AsyrunJob || rq_type == PBS_BATCH_AsyrunJob_ack) {\n\t\t\tchar *new_error_path_str = NULL;\n\t\t\tchar *new_output_path_str = NULL;\n\n\t\t\tnew_error_path_str =\n\t\t\t\tpbs_python_event_job_getval_hookset(ATTR_e,\n\t\t\t\t\t\t\t\t    NULL, 0, NULL, 0);\n\n\t\t\tif (new_error_path_str != NULL) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"cannot modify job attribute '%s' after runjob \"\n\t\t\t\t\t\"request has been rejected.\",\n\t\t\t\t\tATTR_e);\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t  LOG_ERR, phook->hook_name, log_buffer);\n\t\t\t}\n\n\t\t\tnew_output_path_str =\n\t\t\t\tpbs_python_event_job_getval_hookset(ATTR_o,\n\t\t\t\t\t\t\t\t    NULL, 0, NULL, 0);\n\n\t\t\tif (new_output_path_str != NULL) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"cannot modify job attribute '%s' after runjob \"\n\t\t\t\t\t\"request has been rejected.\",\n\t\t\t\t\tATTR_o);\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t  LOG_ERR, phook->hook_name, log_buffer);\n\t\t\t}\n\n\t\t\tif (do_runjob_reject_actions(pjob, phook->hook_name) != 0)\n\t\t\t\tattribute_jobmap_restore(pjob, runjob_reject_attrlist);\n\t\t}\n\n\t\tsnprintf(hook_msg, msg_len - 1,\n\t\t\t \"%s request rejected by '%s'\",\n\t\t\t hook_event_as_string(hook_event),\n\t\t\t 
phook->hook_name);\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_ERR, phook->hook_name, hook_msg);\n\t\tif ((emsg = pbs_python_event_get_reject_msg()) != NULL) {\n\t\t\tsnprintf(hook_msg, msg_len - 1, \"%s\", emsg);\n\t\t\t/* log also the custom reject message */\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_ERR, phook->hook_name, hook_msg);\n\n\t\t\tif (rq_type == PBS_BATCH_AsyrunJob) {\n\t\t\t\tchar *jcomment = NULL;\n\n\t\t\t\tpbs_asprintf(&jcomment, \"Not Running: PBS Error: %s\", hook_msg);\n\t\t\t\t/* For async run, sched won't update job's comment, so let's do that */\n\t\t\t\tset_jattr_str_slim(pjob, JOB_ATR_Comment, jcomment, NULL);\n\t\t\t\tfree(jcomment);\n\t\t\t}\n\t\t}\n\n\t\tpbs_python_do_vnode_set();\n\t\twrite_hook_reject_debug_output_and_close(emsg);\n\t\trc = 0;\n\t\tgoto server_process_hooks_exit;\n\t} else { /* hook request has been accepted */\n\n\t\tif (rq_type == PBS_BATCH_PostQueueJob) {\n\n\t\t\thook_msg[0] = '\\0';\n\t\t\tif (do_postqueuejob_accept_actions(pjob, phook->hook_name) != 0) {\n\t\t\t\tlog_eventf(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_ERR, phook->hook_name,\n\t\t\t\t\t   \"postqueuejob request rejected: %s\", hook_msg);\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"request rejected by filter hook: %s\", hook_msg);\n\t\t\t\tstrncpy(hook_msg, log_buffer, msg_len - 1);\n\t\t\t\tattribute_jobmap_restore(pjob, postqueuejob_accept_attrlist);\n\t\t\t\twrite_hook_reject_debug_output_and_close(hook_msg);\n\t\t\t\trc = 0;\n\t\t\t\tgoto server_process_hooks_exit;\n\t\t\t}\n\t\t} else if (rq_type == PBS_BATCH_RunJob || rq_type == PBS_BATCH_AsyrunJob || rq_type == PBS_BATCH_AsyrunJob_ack) {\n\t\t\tchar *new_exec_time_str = NULL;\n\t\t\tchar *new_hold_types_str = NULL;\n\t\t\tchar *new_project_str = NULL;\n\t\t\tchar *new_depend_str = NULL;\n\t\t\tchar *new_conv_str = NULL;\n\t\t\tchar hold_opval[HOOK_BUF_SIZE];\n\t\t\tchar hold_delval[HOOK_BUF_SIZE];\n\t\t\tint job_modified = 
0;\n\t\t\tint vnode_modified = 0;\n\n\t\t\tnew_exec_time_str =\n\t\t\t\tpbs_python_event_job_getval_hookset(ATTR_a,\n\t\t\t\t\t\t\t\t    NULL, 0, NULL, 0);\n\n\t\t\tif (new_exec_time_str != NULL) {\n\t\t\t\tjob_modified = 1;\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"Found job '%s' attribute flagged to be set\",\n\t\t\t\t\t ATTR_a);\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_ERR, phook->hook_name, log_buffer);\n\t\t\t}\n\n\t\t\tif (job_modified != 1) {\n\t\t\t\tnew_hold_types_str =\n\t\t\t\t\tpbs_python_event_job_getval_hookset(ATTR_h,\n\t\t\t\t\t\t\t\t\t    hold_opval, HOOK_BUF_SIZE, hold_delval,\n\t\t\t\t\t\t\t\t\t    HOOK_BUF_SIZE);\n\n\t\t\t\tif (new_hold_types_str != NULL) {\n\t\t\t\t\tjob_modified = 1;\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"Found job '%s' attribute flagged to be set\", ATTR_h);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_ERR, phook->hook_name, log_buffer);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (job_modified != 1) {\n\t\t\t\tnew_project_str =\n\t\t\t\t\tpbs_python_event_job_getval_hookset(\n\t\t\t\t\t\tATTR_project, NULL, 0, NULL, 0);\n\n\t\t\t\tif (new_project_str != NULL) {\n\t\t\t\t\tjob_modified = 1;\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"Found job '%s' attribute flagged to be set\",\n\t\t\t\t\t\t ATTR_project);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_ERR, phook->hook_name, log_buffer);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (job_modified != 1) {\n\t\t\t\tnew_depend_str =\n\t\t\t\t\tpbs_python_event_job_getval_hookset(\n\t\t\t\t\t\tATTR_depend, NULL, 0, NULL, 0);\n\n\t\t\t\tif (new_depend_str != NULL) {\n\t\t\t\t\tjob_modified = 1;\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"Found job '%s' attribute flagged to be set\",\n\t\t\t\t\t\t ATTR_depend);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_ERR, phook->hook_name, log_buffer);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (job_modified != 
1) {\n\t\t\t\tnew_conv_str =\n\t\t\t\t\tpbs_python_event_job_getval_hookset(\n\t\t\t\t\t\tATTR_create_resv_from_job, NULL, 0, NULL, 0);\n\n\t\t\t\tif (new_conv_str != NULL)\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_ERR, phook->hook_name,\n\t\t\t\t\t\t  \"Found job \" ATTR_create_resv_from_job \" attribute flagged to be set\");\n\t\t\t}\n\n\t\t\tvnode_modified = pbs_python_has_vnode_set();\n\n\t\t\tif (job_modified || vnode_modified) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"runjob request rejected by '%s': \"\n\t\t\t\t\t\"cannot modify %s after runjob \"\n\t\t\t\t\t\"request has been accepted.\",\n\t\t\t\t\tphook->hook_name,\n\t\t\t\t\t(vnode_modified ? PY_EVENT_PARAM_VNODE : PY_EVENT_PARAM_JOB));\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t  LOG_ERR, phook->hook_name, log_buffer);\n\t\t\t\t/* The following message will appear when */\n\t\t\t\t/* calling pbs_geterrmsg():\t\t  */\n\t\t\t\tsnprintf(hook_msg, msg_len - 1,\n\t\t\t\t\t \"request rejected by filter hook '%s': \"\n\t\t\t\t\t \"cannot modify %s after runjob \"\n\t\t\t\t\t \"request has been accepted.\",\n\t\t\t\t\t phook->hook_name,\n\t\t\t\t\t (vnode_modified ? 
PY_EVENT_PARAM_VNODE : PY_EVENT_PARAM_JOB));\n\n\t\t\t\twrite_hook_reject_debug_output_and_close(hook_msg);\n\t\t\t\trc = 0;\n\t\t\t\tgoto server_process_hooks_exit;\n\t\t\t}\n\n\t\t\thook_msg[0] = '\\0';\n\t\t\tif (do_runjob_accept_actions(pjob, phook->hook_name, hook_msg, msg_len - 1) != 0) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"runjob request rejected: %s\", hook_msg);\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_ERR, phook->hook_name, log_buffer);\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"request rejected by filter hook: %s\", hook_msg);\n\t\t\t\tstrncpy(hook_msg, log_buffer, msg_len - 1);\n\t\t\t\tattribute_jobmap_restore(pjob, runjob_accept_attrlist);\n\t\t\t\twrite_hook_reject_debug_output_and_close(hook_msg);\n\t\t\t\trc = 0;\n\t\t\t\tgoto server_process_hooks_exit;\n\t\t\t}\n\t\t}\n\n\t\tif (rq_type == PBS_BATCH_HookPeriodic) {\n\t\t\tif (fp_debug_out != NULL) {\n\t\t\t\tfprintf(fp_debug_out, \"%s=True\\n\", EVENT_ACCEPT_OBJECT);\n\t\t\t\tfprintf(fp_debug_out, \"%s=False\\n\", EVENT_REJECT_OBJECT);\n\t\t\t}\n\t\t\thook_output_param_init(&req_params_out);\n\t\t\tCLEAR_HEAD(event_vnode);\n\t\t\tCLEAR_HEAD(event_resv);\n\t\t\treq_params_out.vns_list = (pbs_list_head *) &event_vnode;\n\t\t\treq_params_out.resv_list = (pbs_list_head *) &event_resv;\n\t\t\trc = pbs_python_event_to_request(hook_event, &req_params_out, perf_label, HOOK_PERF_HOOK_OUTPUT);\n\t\t\tif (rc == -1) {\n\t\t\t\tlog_err(PBSE_INTERNAL, phook->hook_name, \"error occurred recreating request!\");\n\t\t\t}\n\t\t\tif (fp_debug_out != NULL)\n\t\t\t\tfprint_svrattrl_list(fp_debug_out, EVENT_VNODELIST_OBJECT, &event_vnode);\n\t\t\tfree_attrlist(&event_vnode);\n\t\t\tCLEAR_HEAD(event_vnode);\n\t\t\tif (fp_debug_out != NULL)\n\t\t\t\tfclose(fp_debug_out);\n\t\t\tpbs_python_set_hook_debug_output_fp(NULL);\n\t\t\trc = 1;\n\t\t\tgoto server_process_hooks_exit;\n\t\t}\n\t}\n\n\twrite_hook_accept_debug_output_and_close();\n\trc = 
1;\nserver_process_hooks_exit:\n\thook_perf_stat_stop(perf_label, \"server_process_hooks\", 1);\n\treturn (rc);\n}\n\n/**\n * @brief\n *\t\tRecreates the 'preq' structure based on the values specified by\n * \t\tthe hook writer in the corresponding Python event request object.\n *\t\tCAUTION: If this returns -1, don't process 'preq' as it could be\n *\t\tin an incompletely filled state. It must be freed by any of the\n * \t\tfunctions that call reply_send() (which calls free_attrlist(preq)).\n *\n * @param[in] \tpreq\t- the batch request\n *\n * @return\tint\n * @retval\t0 \t- success\n * @retval\t-1\t- failure\n */\nint\nrecreate_request(struct batch_request *preq)\n{\n\tint rc;\n\thook_output_param_t req_params;\n\tFILE *fp_debug = NULL;\n\tchar *hook_outfile = NULL;\n\tchar perf_label[MAXBUFLEN];\n\n\tif (!svr_interp_data.interp_started) {\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_ERR, __func__,\n\t\t\t  \"Python interpreter not started, so no request recreation\");\n\t\treturn (0);\n\t}\n\n\tif (pbs_python_get_hook_debug_output_fp() == NULL) {\n\t\t/* prepare to open file if output file pointer not stored */\n\t\thook_outfile = pbs_python_get_hook_debug_output_file();\n\t}\n\n\tif ((hook_outfile != NULL) && (hook_outfile[0] != '\\0')) {\n\t\t/* need to open in append mode, as process_hooks() may have */\n\t\t/* already written into this file. 
*/\n\t\tfp_debug = fopen(hook_outfile, \"a\");\n\t\tif (fp_debug == NULL) {\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"warning: open of hook debug output file %s failed!\",\n\t\t\t\thook_outfile);\n\t\t} else {\n\t\t\tpbs_python_set_hook_debug_output_fp(fp_debug);\n\t\t}\n\t}\n\thook_output_param_init(&req_params);\n\tif (preq->rq_type == PBS_BATCH_QueueJob) {\n\t\treq_params.rq_job = (struct rq_quejob *) &preq->rq_ind.rq_queuejob;\n\t\tsnprintf(perf_label, sizeof(perf_label), \"hook_%s_%s_%d\", HOOKSTR_QUEUEJOB, preq->rq_ind.rq_queuejob.rq_jid, getpid());\n\t\trc = pbs_python_event_to_request(HOOK_EVENT_QUEUEJOB,\n\t\t\t\t\t\t &req_params, perf_label, HOOK_PERF_HOOK_OUTPUT);\n\t} else if (preq->rq_type == PBS_BATCH_PostQueueJob) {\n\t\treq_params.rq_postqueuejob = (struct rq_postqueuejob *) &preq->rq_ind.rq_postqueuejob;\n\t\tsnprintf(perf_label, sizeof(perf_label), \"hook_%s_%s_%d\", HOOKSTR_QUEUEJOB, preq->rq_ind.rq_postqueuejob.rq_jid, getpid());\n\t\trc = pbs_python_event_to_request(HOOK_EVENT_POSTQUEUEJOB,\n\t\t\t\t\t\t &req_params, perf_label, HOOK_PERF_HOOK_OUTPUT);\n\t} else if (preq->rq_type == PBS_BATCH_SubmitResv) {\n\t\treq_params.rq_job = (struct rq_quejob *) &preq->rq_ind.rq_queuejob;\n\t\tsnprintf(perf_label, sizeof(perf_label), \"hook_%s_%s_%d\", HOOKSTR_RESVSUB, preq->rq_ind.rq_queuejob.rq_jid, getpid());\n\t\trc = pbs_python_event_to_request(HOOK_EVENT_RESVSUB,\n\t\t\t\t\t\t &req_params, perf_label, HOOK_PERF_HOOK_OUTPUT);\n\t} else if (preq->rq_type == PBS_BATCH_ModifyResv) {\n\t\treq_params.rq_manage = (struct manage *) &preq->rq_ind.rq_modify;\n\t\tsnprintf(perf_label, sizeof(perf_label), \"hook_%s_%s_%d\", HOOKSTR_MODIFYRESV, preq->rq_ind.rq_modify.rq_objname, getpid());\n\t\trc = pbs_python_event_to_request(HOOK_EVENT_MODIFYRESV,\n\t\t\t\t\t\t &req_params, perf_label, HOOK_PERF_HOOK_OUTPUT);\n\t} else if (preq->rq_type == PBS_BATCH_ModifyJob) {\n\t\treq_params.rq_manage = (struct manage *) &preq->rq_ind.rq_modify;\n\t\tsnprintf(perf_label, 
sizeof(perf_label), \"hook_%s_%s_%d\", HOOKSTR_MODIFYJOB, preq->rq_ind.rq_modify.rq_objname, getpid());\n\t\trc = pbs_python_event_to_request(HOOK_EVENT_MODIFYJOB,\n\t\t\t\t\t\t &req_params, perf_label, HOOK_PERF_HOOK_OUTPUT);\n\t} else if (preq->rq_type == PBS_BATCH_MoveJob) {\n\t\treq_params.rq_move = (struct rq_move *) &preq->rq_ind.rq_move;\n\t\tsnprintf(perf_label, sizeof(perf_label), \"hook_%s_%s_%d\", HOOKSTR_MOVEJOB, preq->rq_ind.rq_move.rq_jid, getpid());\n\t\trc = pbs_python_event_to_request(HOOK_EVENT_MOVEJOB,\n\t\t\t\t\t\t &req_params, perf_label, HOOK_PERF_HOOK_OUTPUT);\n\t} else {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"unexpected request type\");\n\t\trc = -1;\n\t}\n\tif (rc == -1) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"error occurred recreating request!\");\n\t}\n\n\tif (fp_debug != NULL) {\n\t\tfclose(fp_debug);\n\t\tfp_debug = NULL;\n\t\tpbs_python_set_hook_debug_output_fp(NULL);\n\t\tpbs_python_set_hook_debug_output_file(\"\");\n\t}\n\t/* clear python cached objects */\n\tpbs_python_clear_attributes();\n\treturn (rc);\n}\n\n/* Mom hook action-related items */\n\n/**\n * @brief\n *\t\tadd_mom_hook_action - create both a mom_hook_action_t entry and insert\n *\t\ta pointer to that element into the *hookact_array which currently has\n *\t\ta size of *hookact_array_size, which may be expanded if needed.\n *\n * @par Functionality:\n *\t\tSearches for existing mom_hook_action_t entry in the\n *\t\t'hookact_array', with matching hookname;\n *\t\tif found and the tid field (transaction id) of the matching entry\n *\t\tis <= input_tid, update that entry with 'action' and 'input_tid'\n *\t\tvalues, and return the index to the updated entry. If no matching\n *\t\tmom_hook_action_t entry is found, then find an existing empty slot in\n *\t\tthe 'hookact_array', store the 'hookname', 'action', and\n *\t\t'input_tid' data in it, and return its index.\n * \t\tIf there's no empty slot, the array is expanded by\n *\t\tGROW_MOMHOOK_ARRAY_AMT amount. 
Then add the 'hookname' and 'action'\n *\t\tdata in the first newly created empty slot, and return its index.\n *\n * @par Note:\n *\t\tNormally, the action value is appended to existing entries in\n *\t\t*hookact_array. If the parameter 'set_action' is set to 1, the\n *\t\taction value is not appended but directly assigned.\n *\n *\t\tIf the hook action being added is for PBS_RESCDEF (\"resourcedef\"),\n *\t\tthen ensure this entry appears before other hooks in 'hookact_array',\n *\t\tsince mom hooks depend on the PBS_RESCDEF file for custom resources.\n *\n * @see\n * \t\thook_track_recov and add_pending_mom_hook_action\n *\n * @param[in/out] hookact_array - pointer to the hook action array,\n *\t\t\t\t\twhich if expanded would get a new pointer value.\n * @param[in/out] hookact_array_size - pointer to the number of entries in\n *\t\t\t\t\t*hookact_array, which if expanded, would get a new\n *\t\t\t\t\tnumber of elements value.\n * @param[in]\thookname - name of hook with pending action\n * @param[in]\taction - flag specifying the type of pending action in hookname.\n * @param[in]\tset_action - if set to 1, then the action value will be\n *\t\t\t\tassigned directly and not appended to the list of\n *\t\t\t\taction values.\n * @param[in]\tinput_tid - transaction id to associate with the newly added\n *\t\t\t\tmom hook action (<hook_name>,<action>).\n *\n * @return\tint\n * @retval\tReturns the index to the 'hookact_array' containing the\n *\t\tupdated or added mom_hook_action_t entry.\n * @retval\t-1 if no mom_hook_action_t entry got updated or added,\n *\t\tperhaps due to an error.\n *\n * @par Side Effects: None\n *\n */\n\nint\nadd_mom_hook_action(mom_hook_action_t ***hookact_array,\n\t\t    int *hookact_array_size, char *hookname,\n\t\t    unsigned int action, int set_action,\n\t\t    long long int input_tid)\n{\n\tint empty = -1;\n\tint i, j;\n\tmom_hook_action_t *pact, *pact2, *pact_tmp;\n\tmom_hook_action_t **tp;\n\n\tif ((hookact_array == NULL) || (hookact_array_size 
== NULL) ||\n\t    (hookname == NULL))\n\t\treturn -1;\n\n\tfor (i = 0; i < *hookact_array_size; i++) {\n\t\tpact = (*hookact_array)[i];\n\t\tif (pact) {\n\t\t\tif (strcmp(pact->hookname, hookname) == 0) {\n\t\t\t\t/* check if existing entry is newer than */\n\t\t\t\t/* the entry being added */\n\t\t\t\tif (pact->tid > input_tid) {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"not adding hook %s action %d as \"\n\t\t\t\t\t\t \"entry's tid=%lld > input_tid=%lld\",\n\t\t\t\t\t\t hookname, pact->action, pact->tid,\n\t\t\t\t\t\t input_tid);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG3,\n\t\t\t\t\t\t  PBS_EVENTCLASS_REQUEST, LOG_WARNING,\n\t\t\t\t\t\t  \"add_mom_hook_action\", log_buffer);\n\t\t\t\t\treturn (-1);\n\t\t\t\t}\n\t\t\t\tif (set_action) {\n\t\t\t\t\tpact->action = action;\n\t\t\t\t} else if ((pact->action & action & pact->reply_expected)) {\n\t\t\t\t\tcontinue; /* don't reuse the action object if replies are still expected for same action */\n\t\t\t\t} else {\n\t\t\t\t\tif (action & MOM_HOOK_ACTION_DELETE) {\n\t\t\t\t\t\tif (pact->action & MOM_HOOK_SEND_ACTIONS) {\n\t\t\t\t\t\t\t/* there's a current send action, so delete action should not execute first */\n\t\t\t\t\t\t\tpact->do_delete_action_first = 0;\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t/* there's no current send action, so delete action should execute first */\n\t\t\t\t\t\t\tpact->do_delete_action_first = 1;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tpact->action |= action;\n\t\t\t\t}\n\t\t\t\tpact->tid = input_tid;\n\t\t\t\tdo_sync_mom_hookfiles = 1;\n\t\t\t\treturn i;\n\t\t\t} else if ((pact->action == MOM_HOOK_ACTION_NONE) &&\n\t\t\t\t   (empty == -1)) {\n\t\t\t\t/* be sure to free up previous entry which */\n\t\t\t\t/* was previously malloc-ed */\n\t\t\t\tfree(pact);\n\t\t\t\t(*hookact_array)[i] = NULL;\n\t\t\t\tempty = i;\n\t\t\t}\n\t\t} else if (empty == -1) {\n\t\t\tempty = i; /* save index of first empty slot */\n\t\t}\n\t}\n\n\tif (empty == -1) {\n\t\t/* there wasn't an empty slot in 
the array we can use */\n\t\t/* need to grow the array\t\t\t      */\n\n\t\ttp = (mom_hook_action_t **) realloc(*hookact_array,\n\t\t\t\t\t\t    (size_t)(sizeof(mom_hook_action_t *) * (*hookact_array_size + GROW_MOMHOOK_ARRAY_AMT)));\n\t\tif (tp != NULL) {\n\t\t\tempty = *hookact_array_size;\n\t\t\t*hookact_array = tp;\n\t\t\t*hookact_array_size += GROW_MOMHOOK_ARRAY_AMT;\n\t\t\tfor (i = empty; i < *hookact_array_size; i++)\n\t\t\t\t(*hookact_array)[i] = NULL;\n\t\t} else {\n\t\t\tlog_err(errno, __func__, merr);\n\t\t\treturn (-1);\n\t\t}\n\t}\n\n\t/* now allocate the memory for the mom_hook_action_t element itself */\n\n\tpact = (mom_hook_action_t *) malloc(sizeof(mom_hook_action_t));\n\tif (pact != NULL) {\n\t\tsnprintf(pact->hookname, sizeof(pact->hookname), \"%s\", hookname);\n\t\tpact->action = action;\n\t\tpact->reply_expected = action;\n\t\tpact->do_delete_action_first = 0;\n\t\tpact->tid = input_tid;\n\t\tdo_sync_mom_hookfiles = 1;\n\t\t(*hookact_array)[empty] = pact;\n\t} else {\n\t\tlog_err(errno, __func__, merr);\n\t\treturn (-1);\n\t}\n\n\t/* If hook action added is for PBS_RESCDEF, then we need to sort the */\n\t/* mom hook action array so that this resourcedef \t\t     */\n\t/* entry appears before regular mom hooks. This allows \t\t     */\n\t/* sync_mom_hookfilesTPP() to send out resourcedef files first before   */\n\t/* the mom hook files, since the latter may depend on the            */\n\t/* former. 
*/\n\tif (strcmp(hookname, PBS_RESCDEF) == 0) {\n\n\t\t/* j indexed array goes from last element to first */\n\t\t/* i indexed array goes from first to last */\n\t\t/* slot entry in j to be exchanged with i's, and i must be < j */\n\t\t/* since we're moving the later resourcedef entry in j into */\n\t\t/* the earliest entry in i */\n\t\tfor (j = (*hookact_array_size) - 1; j >= 0; j--) {\n\t\t\tpact = (*hookact_array)[j];\n\t\t\tif (pact && (strcmp(pact->hookname, PBS_RESCDEF) == 0)) {\n\t\t\t\tfor (i = 0; (i < *hookact_array_size) && (i < j); i++) {\n\t\t\t\t\tpact2 = (*hookact_array)[i];\n\t\t\t\t\tif (pact2 && (strcmp(pact2->hookname,\n\t\t\t\t\t\t\t     PBS_RESCDEF) != 0)) {\n\t\t\t\t\t\t/* exchange places, moving the later */\n\t\t\t\t\t\t/* resourcedef file entry to the */\n\t\t\t\t\t\t/* earlier entry */\n\t\t\t\t\t\tpact_tmp = pact2;\n\t\t\t\t\t\t/* set resourcedef entry */\n\t\t\t\t\t\t(*hookact_array)[i] = pact;\n\t\t\t\t\t\t(*hookact_array)[j] = pact_tmp;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tbreak; /* there can only be one PBS_RESCDEF entry */\n\t\t\t}\n\t\t}\n\t}\n\n\treturn empty;\n}\n\n/**\n * @brief\n *\tRemoves from hookact_array of size hookact_array_size the mom hook\n *\taction: (hookname, action).\n *\n * @see\n * \t\tdelete_pending_mom_hook_action\n *\n * @param[in]\thookact_array - mom hook action array\n * @param[in]\thookact_array_size - number of elements in hookact_array\n * @param[in]\thookname - the hook in question\n * @param[in]\taction - the mom hook action to unset\n *\n * @return int\n * @retval\tindex to the mom hook action entry in hookact_array where\n *\t\t'action' flag has been removed.\n * @retval\t-1\t- if no entry found or error encountered.\n */\n\nint\ndelete_mom_hook_action(mom_hook_action_t **hookact_array,\n\t\t       int hookact_array_size, char *hookname, unsigned int action)\n{\n\tint i;\n\n\tif ((hookact_array == NULL) || (hookname == NULL))\n\t\treturn (-1);\n\n\t/* find the matching entry in the array */\n\tfor 
(i = 0; i < hookact_array_size; i++) {\n\t\tif ((hookact_array[i] != NULL) &&\n\t\t    strcmp(hookact_array[i]->hookname, hookname) == 0) {\n\t\t\thookact_array[i]->action &= ~action;\n\t\t\treturn (i);\n\t\t}\n\t}\n\treturn (-1);\n}\n\n/**\n * @brief\n *\t\tFind and return a pointer to a mom_hook_action_t element in\n *\t\thookact_array of size hookact_array_size, defined by hookname.\n *\n * @see\n * \t\thas_pending_mom_action_delete\n *\n * @param[in]\thookact_array - mom hook action array\n * @param[in]\thookact_array_size - number of elements in hookact_array\n * @param[in]\thookname - the hook to find.\n *\n * @return\tmom_hook_action_t *\n * @retval\tpointer to the mom_hook_action_t entry\n * @retval\tNULL if it did not find it.\n */\n\nmom_hook_action_t *\nfind_mom_hook_action(mom_hook_action_t **hookact_array,\n\t\t     int hookact_array_size, char *hookname)\n{\n\tint i;\n\n\tmom_hook_action_t *pact;\n\n\tfor (i = 0; i < hookact_array_size; i++) {\n\t\tpact = hookact_array[i];\n\t\tif (pact &&\n\t\t    (strcmp(pact->hookname, hookname) == 0))\n\t\t\treturn pact;\n\t}\n\n\treturn NULL; /* didn't find it */\n}\n\n/**\n * @brief\n *\t\tAdds a pending action to 'hookname' for the mom in 'minfo' if not NULL,\n *\t\tor all the moms in the system.\n * @par NOTE:\n *\t\tFor every successful pending action add, a line of data is\n *\t\twritten in [PATH_HOOKS]/hook_tracking.TR file as:\n *\t\t<mom_name>:<mom_port> <hook_name> <action>\n *\t\twhere <action> is the current action flag value.\n *\n * @param[in]\tminfo\t\t- if not NULL, then add mom hook action\n *\t\t\t\ton this particular mom in 'minfo'.\n * @param[in]\thookname\t- name of hook with pending action.\n * @param[in] \taction\t\t- the type of action\n *\t\t\t\t(MOM_HOOK_ACTION_SEND_ATTRS,\n *\t\t\t\tMOM_HOOK_ACTION_SEND_SCRIPT, etc...)\n *\n * @return\tvoid\n */\nvoid\nadd_pending_mom_hook_action(void *minfo, char *hookname, unsigned int action)\n{\n\tint i, j;\n\tmominfo_t **minfo_array = 
NULL;\n\tint minfo_array_size;\n\tmominfo_t *minfo_array_tmp[1];\n\n\tif ((mominfo_t *) minfo == NULL) {\n\t\tminfo_array = mominfo_array;\n\t\tminfo_array_size = mominfo_array_size;\n\t} else {\n\t\tminfo_array_tmp[0] = (mominfo_t *) minfo;\n\t\tminfo_array = (mominfo_t **) minfo_array_tmp;\n\t\tminfo_array_size = 1;\n\t}\n\n\tfor (i = 0; i < minfo_array_size; i++) {\n\n\t\tif (minfo_array[i] == NULL)\n\t\t\tcontinue;\n\n\t\tif (!minfo_array[i]->mi_data ||\n\t\t    (minfo_array[i]->mi_dmn_info->dmn_state & (INUSE_UNKNOWN | INUSE_NEEDS_HELLOSVR))) {\n\t\t\tcontinue;\n\t\t}\n\n\t\tj = add_mom_hook_action(&((mom_svrinfo_t *) minfo_array[i]->mi_data)->msr_action,\n\t\t\t\t\t&((mom_svrinfo_t *) minfo_array[i]->mi_data)->msr_num_action, hookname,\n\t\t\t\t\taction, 0, hook_action_tid);\n\n\t\thook_track_save((mominfo_t *) minfo_array[i], j);\n\t}\n}\n\n/**\n * @brief\n *\t\tDeletes a pending action to 'hookname' in mom described by 'minfo' if\n *\t\tnot NULL, or for all the moms in the system.\n *\n * @par Note:\n *\t\tFor every successful pending action delete, a line of data is\n *\t\twritten in [PATH_HOOKS]/<hookname>.TR file as:\n *\t\t<mom_name>:<mom_port> <hook_name> <remaining_hook_action>\n *\n * @param[in]\tminfo\t\t- if not NULL, then delete mom hook action\n *\t\t\t\ton this particular mom in 'minfo'.\n *\t\t\t\tif NULL, then delete mom hook action on\n *\t\t\t\tall the moms in the system.\n * @param[in]\thookname\t- name of hook with pending hook action\n * @param[in] \taction\t\t- the type of action\n *\t\t\t\t(MOM_HOOK_ACTION_SEND_ATTRS,\n *\t\t\t\tMOM_HOOK_ACTION_SEND_SCRIPT, etc...)\n *\n * @return void\n */\nvoid\ndelete_pending_mom_hook_action(void *minfo, char *hookname,\n\t\t\t       unsigned int action)\n{\n\tint i, k;\n\tmominfo_t **minfo_array = NULL;\n\tint minfo_array_size;\n\tmominfo_t *minfo_array_tmp[1];\n\n\tif ((mominfo_t *) minfo == NULL) {\n\t\tminfo_array = mominfo_array;\n\t\tminfo_array_size = mominfo_array_size;\n\t} else 
{\n\t\tminfo_array_tmp[0] = (mominfo_t *) minfo;\n\t\tminfo_array = (mominfo_t **) minfo_array_tmp;\n\t\tminfo_array_size = 1;\n\t}\n\n\tfor (i = 0; i < minfo_array_size; i++) {\n\n\t\tif (minfo_array[i] == NULL)\n\t\t\tbreak;\n\n\t\tk = delete_mom_hook_action(((mom_svrinfo_t *) minfo_array[i]->mi_data)->msr_action,\n\t\t\t\t\t   ((mom_svrinfo_t *) minfo_array[i]->mi_data)->msr_num_action, hookname, action);\n\n\t\thook_track_save((mominfo_t *) minfo_array[i], k);\n\t}\n}\n\n/**\n * @brief\n *\t\tDetermines if 'hookname' has a pending MOM_HOOK_ACTION_DELETE to\n *\t\tthe moms.\n *\n * @see\n * \t\tcollapse_hook_tr and pbsd_init\n *\n * @param[in]\thookname - the hook in question\n *\n * @return\tint\n * @retval\t1\tif there's a pending delete action\n * @retval\t0\totherwise.\n */\nint\nhas_pending_mom_action_delete(char *hookname)\n{\n\tmom_hook_action_t *pact;\n\tint i;\n\n\tfor (i = 0; i < mominfo_array_size; i++) {\n\n\t\tif (mominfo_array[i] == NULL)\n\t\t\tcontinue;\n\n\t\tpact = find_mom_hook_action(((mom_svrinfo_t *) mominfo_array[i]->mi_data)->msr_action,\n\t\t\t\t\t    ((mom_svrinfo_t *) mominfo_array[i]->mi_data)->msr_num_action, hookname);\n\n\t\tif (pact && (pact->action & MOM_HOOK_ACTION_DELETE))\n\t\t\treturn 1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tReturns the number of pending hook actions, such as send hook\n *\t\tattributes/scripts, resourcedef file,\n *\t\tto a particular mom, or to all the moms in the\n *\t\tsystem.\n *\n * @see\n * \t\tset_nodes\n *\n * @param[in]\tminfo\t- count referring to the mom described by this\n *\t\t\t\t'minfo' only, or\n *\t\t\t \tif NULL, then return total count for all the\n *\t\t\t\tmoms in the system.\n *\n * @return int\n * @retval\t<num>\t- Number of still pending mom hook actions.\n */\nint\nsync_mom_hookfiles_count(void *minfo)\n{\n\tint i, j;\n\tmominfo_t **minfo_array = NULL;\n\tint minfo_array_size;\n\tmominfo_t *minfo_array_tmp[1];\n\tint action_expected = 0;\n\tmom_hook_action_t *pact;\n\n\tif 
(minfo == NULL) {\n\t\tminfo_array = mominfo_array;\n\t\tminfo_array_size = mominfo_array_size;\n\t} else {\n\t\tminfo_array_tmp[0] = minfo;\n\t\tminfo_array = (mominfo_t **) minfo_array_tmp;\n\t\tminfo_array_size = 1;\n\t}\n\n\tfor (i = 0; i < minfo_array_size; i++) {\n\n\t\tif (minfo_array[i] == NULL)\n\t\t\tcontinue;\n\n\t\tfor (j = 0; j < ((mom_svrinfo_t *) minfo_array[i]->mi_data)->msr_num_action; j++) {\n\t\t\tpact = ((mom_svrinfo_t *) minfo_array[i]->mi_data)->msr_action[j];\n\n\t\t\tif ((pact == NULL) ||\n\t\t\t    (pact->action == MOM_HOOK_ACTION_NONE))\n\t\t\t\tcontinue;\n\n\t\t\tif (pact->action & MOM_HOOK_ACTION_DELETE)\n\t\t\t\taction_expected++;\n\n\t\t\tif (pact->action & MOM_HOOK_ACTION_SEND_ATTRS)\n\t\t\t\taction_expected++;\n\n\t\t\tif (pact->action & MOM_HOOK_ACTION_SEND_SCRIPT)\n\t\t\t\taction_expected++;\n\n\t\t\tif (pact->action & MOM_HOOK_ACTION_SEND_CONFIG)\n\t\t\t\taction_expected++;\n\n\t\t\tif (pact->action & MOM_HOOK_ACTION_DELETE_RESCDEF) {\n\t\t\t\taction_expected++;\n\t\t\t} else if (pact->action & MOM_HOOK_ACTION_SEND_RESCDEF) {\n\t\t\t\taction_expected++;\n\t\t\t}\n\t\t}\n\t}\n\n\treturn (action_expected);\n}\n\n/**\n * @brief\n * \t\tHandles the collapsing of a hook tracking file.\n * \t\tRecovers the hooks tracking data from the hooks\n * \t\ttracking file, loops through the hooks\n * \t\tand purges all hooks pending delete, and finally does\n * \t\ta hook tracking file save, effectively collapsing\n * \t\tthe tracking file.\n *\n * @see\n * \t\tpost_sendhookTPP and next_sync_mom_hookfiles.\n */\nstatic void\ncollapse_hook_tr()\n{\n\thook *phook;\n\thook *phook_current;\n\n\t/* purge deleted hooks */\n\tphook = (hook *) GET_NEXT(svr_allhooks);\n\twhile (phook) {\n\t\tphook_current = phook;\n\t\tphook = (hook *) GET_NEXT(phook->hi_allhooks);\n\n\t\tif (phook_current->pending_delete &&\n\t\t    !has_pending_mom_action_delete(\n\t\t\t    phook_current->hook_name)) {\n\t\t\thook_purge(phook_current,\n\t\t\t\t   
pbs_python_ext_free_python_script);\n\t\t}\n\t}\n\n\t/* This collapses, purges the hook tracking file, which */\n\t/* could become empty, so the next call to */\n\t/* hook_track_recov() could cause the hook_action_tid to */\n\t/* reset back to 0, which is what we want. */\n\thook_track_save(NULL, -1);\n}\n\n/**\n * @brief\n *\t\tAllocates a def_hk_cmd_info structure, filling it with 'index', 'event',\n *\t\t'tid' values, and return a pointer to this structure.\n *\n * @see\n * \t\tcheck_add_hook_mcast_info\n *\n * @param[in] index - index value to the returned def_hk_cmd_info structure.\n * @param[in] event - event value to the returned def_hk_cmd_info structure.\n * @param[in] tid - transaction id value to the returned def_hk_cmd_info structure.\n *\n * @return struct def_hk_cmd_info *\n *\n * @retval pointer to the structure\n * @retval NULL - if an error occurred allocating and populating the structure.\n *\n * @Note\n *\tThe caller must call free() on the returned memory pointer if no longer\n *\tneeded.\n *\n */\nstruct\n\tdef_hk_cmd_info *\n\tmk_deferred_hook_info(int index, int event, long long int tid)\n{\n\tstruct def_hk_cmd_info *info = malloc(sizeof(struct def_hk_cmd_info));\n\tif (info) {\n\t\tinfo->index = index;\n\t\tinfo->event = event;\n\t\tinfo->tid = tid;\n\t}\n\treturn info;\n}\n\n/**\n * @brief\n *\t\tcheck if there is any new pending action\n *\n * @param[in] minfo - pointer to mom info\n * @param[in] pact - pointer to current hook action\n * @param[in] j - index of pact in minfo->mi_data->msr_action[]\n * @param[in] event - action event to consider\n *\n * @return int\n * @retval 0  - not found\n * @retval 1  - found\n */\nint\ncheck_for_latest_action(mominfo_t *minfo, mom_hook_action_t *pact, int j, int event)\n{\n\t/* if the same action is marked in the next actions for the same hook then remove\n\t * the pending flag\n\t */\n\tint i;\n\tmom_hook_action_t *pact2;\n\tfor (i = 0; i < ((mom_svrinfo_t *) minfo->mi_data)->msr_num_action; i++) 
{\n\t\tpact2 = ((mom_svrinfo_t *) minfo->mi_data)->msr_action[i];\n\t\tif (pact2 && (i != j) && (pact2->tid > pact->tid) && (pact2->action & event) &&\n\t\t    pact2->hookname[0] && (strcmp(pact2->hookname, pact->hookname) == 0)) {\n\t\t\treturn 1;\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tCallback for the hook deferred requests over TPP stream\n *\t\tparm1 points to the mominfo_t\n *\t\tparm2 points to more information about the hook cmd\n *\t\twt_aux has the reply code from mom\n *\n * @Note\n *\t\tIf sending or deleting a hook to a mom has been rejected because\n *\t\tmom is not accepting root remote scripts for security reasons, then\n *\t\tthis is still considered a successful send.\n *\n *\t\tThe global g_hook_replies_recvd is incremented for\n *\t\teach reply received. When this matches the global\n *\t\tvariable g_hook_replies_expected, the global variable\n *\t\tsync_mom_hookfiles_replies_pending is reset to 0, such\n *\t\tthat the next \"hook transaction\" can now start.\n *\n * @param[in] pwt - The work task pointer\n *\n * @return void\n */\nvoid\npost_sendhookTPP(struct work_task *pwt)\n{\n\tmominfo_t *minfo = pwt->wt_parm1;\n\tmom_hook_action_t *pact;\n\tint rc = pwt->wt_aux;\n\tstruct def_hk_cmd_info *info = (struct def_hk_cmd_info *) pwt->wt_parm2;\n\tchar hookfile[MAXPATHLEN + 1];\n\tint j;\n\tint event;\n\tlong long int tid;\n\tchar *msgbuf;\n\tbool failed_flag = FALSE;\n\n\tif (!info)\n\t\treturn;\n\n\tj = info->index;\n\tevent = info->event;\n\ttid = info->tid;\n\n\tfree(info);\n\n\tif (tid != g_sync_hook_tid) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"%s reply (tid=%lld) not from current \"\n\t\t\t \"batch of hook updates (tid=%lld) from mhost=%s\",\n\t\t\t __func__, tid, g_sync_hook_tid, minfo->mi_host[0] ? 
minfo->mi_host : \"\");\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SERVER, LOG_INFO, __func__, log_buffer);\n\t\treturn; /* return now as info->index no longer valid */\n\t}\n\n\tpact = ((mom_svrinfo_t *) minfo->mi_data)->msr_action[j];\n\n\tif (event == MOM_HOOK_ACTION_DELETE_RESCDEF) {\n\t\tsnprintf(hookfile, sizeof(hookfile), \"%.*s%.*s\",\n\t\t\t (int) (sizeof(hookfile) - PBS_HOOK_NAME_SIZE),\n\t\t\t path_hooks, PBS_HOOK_NAME_SIZE, pact->hookname);\n\t\tif (rc != 0) {\n\t\t\tpbs_asprintf(&msgbuf,\n\t\t\t\t     \"errno %d: failed to delete rescdef file %s from %s\",\n\t\t\t\t     pbs_errno, hookfile, minfo->mi_host);\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_REQUEST, LOG_WARNING, msg_daemonname, msgbuf);\n\t\t\tfree(msgbuf);\n\t\t\tfailed_flag = TRUE;\n\t\t} else {\n\t\t\tpbs_asprintf(&msgbuf,\n\t\t\t\t     \"successfully deleted rescdef file %s from %s:%d\",\n\t\t\t\t     hookfile, minfo->mi_host, minfo->mi_port);\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_REQUEST, LOG_INFO, msg_daemonname, msgbuf);\n\t\t\tfree(msgbuf);\n\t\t\t/* Delete all SEND_RESCDEF actions */\n\t\t\t/* so it doesn't get retried for this */\n\t\t\t/* \"deleted\" resourcedef. 
*/\n\t\t\tpact->action &= ~(MOM_HOOK_ACTION_DELETE_RESCDEF | MOM_HOOK_ACTION_SEND_RESCDEF);\n\t\t\thook_track_save((mominfo_t *) minfo, j);\n\t\t}\n\t}\n\n\tif (event == MOM_HOOK_ACTION_SEND_RESCDEF) {\n\t\tsnprintf(hookfile, sizeof(hookfile), \"%.*s%.*s\",\n\t\t\t (int) (sizeof(hookfile) - PBS_HOOK_NAME_SIZE), path_hooks,\n\t\t\t PBS_HOOK_NAME_SIZE, pact->hookname);\n\t\tif ((rc != 0) && (pbs_errno != PBSE_MOM_REJECT_ROOT_SCRIPTS)) {\n\t\t\tpbs_asprintf(&msgbuf,\n\t\t\t\t     \"errno %d: failed to copy rescdef file %s to %s:%d\",\n\t\t\t\t     pbs_errno, hookfile, minfo->mi_host, minfo->mi_port);\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_REQUEST, LOG_WARNING, msg_daemonname, msgbuf);\n\t\t\tfree(msgbuf);\n\t\t\tfailed_flag = TRUE;\n\t\t} else {\n\t\t\t/* check pbs_errno here, consistent with the other action blocks */\n\t\t\tif (pbs_errno != PBSE_MOM_REJECT_ROOT_SCRIPTS) {\n\t\t\t\tpbs_asprintf(&msgbuf,\n\t\t\t\t\t     \"successfully sent rescdef file %s to %s:%d\",\n\t\t\t\t\t     hookfile, minfo->mi_host, minfo->mi_port);\n\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_REQUEST, LOG_INFO, msg_daemonname, msgbuf);\n\t\t\t\tfree(msgbuf);\n\t\t\t} else {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"warning: sending resourcedef to %s:%d got rejected (mom's reject_root_scripts=1)\",\n\t\t\t\t\t minfo->mi_host, minfo->mi_port);\n\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_REQUEST, LOG_INFO, msg_daemonname, log_buffer);\n\t\t\t}\n\t\t\tpact->action &= ~(MOM_HOOK_ACTION_SEND_RESCDEF);\n\t\t\thook_track_save((mominfo_t *) minfo, j);\n\t\t}\n\t}\n\n\tif (event == MOM_HOOK_ACTION_DELETE) {\n\t\tsnprintf(hookfile, sizeof(hookfile), \"%.*s%s\",\n\t\t\t (int) (sizeof(hookfile) - strlen(HOOK_FILE_SUFFIX) - 1),\n\t\t\t pact->hookname, HOOK_FILE_SUFFIX);\n\t\tif (rc != 0) {\n\t\t\tpbs_asprintf(&msgbuf,\n\t\t\t\t     \"errno %d: failed to delete hook file %s from %s\",\n\t\t\t\t     pbs_errno, hookfile, minfo->mi_host);\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_REQUEST, LOG_WARNING, msg_daemonname, 
msgbuf);\n\t\t\tfree(msgbuf);\n\t\t\tfailed_flag = TRUE;\n\t\t} else {\n\t\t\tpbs_asprintf(&msgbuf,\n\t\t\t\t     \"successfully deleted hook file %s from %s:%d\",\n\t\t\t\t     hookfile, minfo->mi_host, minfo->mi_port);\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_REQUEST, LOG_INFO, msg_daemonname, msgbuf);\n\t\t\tfree(msgbuf);\n\t\t\tpact->action &= ~MOM_HOOK_ACTION_DELETE;\n\t\t\thook_track_save((mominfo_t *) minfo, j);\n\t\t}\n\t}\n\n\tif (event == MOM_HOOK_ACTION_SEND_ATTRS) {\n\t\tsnprintf(hookfile, sizeof(hookfile), \"%.*s%.*s%s\",\n\t\t\t (int) (sizeof(hookfile) - PBS_HOOK_NAME_SIZE - strlen(HOOK_FILE_SUFFIX)),\n\t\t\t path_hooks, PBS_HOOK_NAME_SIZE, pact->hookname, HOOK_FILE_SUFFIX);\n\t\tif ((rc != 0) && (pbs_errno != PBSE_MOM_REJECT_ROOT_SCRIPTS)) {\n\t\t\tpbs_asprintf(&msgbuf,\n\t\t\t\t     \"errno %d: failed to copy hook file %s to %s:%d\",\n\t\t\t\t     pbs_errno, hookfile, minfo->mi_host, minfo->mi_port);\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_REQUEST, LOG_WARNING, msg_daemonname, msgbuf);\n\t\t\tfree(msgbuf);\n\t\t\tfailed_flag = TRUE;\n\t\t} else {\n\t\t\tif (pbs_errno != PBSE_MOM_REJECT_ROOT_SCRIPTS) {\n\t\t\t\tpbs_asprintf(&msgbuf,\n\t\t\t\t\t     \"successfully sent hook file %s to %s:%d\",\n\t\t\t\t\t     hookfile, minfo->mi_host, minfo->mi_port);\n\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_REQUEST, LOG_INFO, msg_daemonname, msgbuf);\n\t\t\t\tfree(msgbuf);\n\t\t\t} else {\n\t\t\t\tpbs_asprintf(&msgbuf,\n\t\t\t\t\t     \"warning: sending hook file %s to %s:%d got rejected (mom's reject_root_scripts=1)\",\n\t\t\t\t\t     hookfile, minfo->mi_host, minfo->mi_port);\n\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_REQUEST, LOG_INFO, msg_daemonname, msgbuf);\n\t\t\t\tfree(msgbuf);\n\t\t\t}\n\t\t\tpact->action &= ~(MOM_HOOK_ACTION_SEND_ATTRS);\n\t\t\thook_track_save((mominfo_t *) minfo, j);\n\t\t}\n\t}\n\n\tif (event == MOM_HOOK_ACTION_SEND_CONFIG) {\n\t\tsnprintf(hookfile, sizeof(hookfile), \"%.*s%.*s%s\",\n\t\t\t (int) 
(sizeof(hookfile) - PBS_HOOK_NAME_SIZE - strlen(HOOK_CONFIG_SUFFIX)),\n\t\t\t path_hooks, PBS_HOOK_NAME_SIZE, pact->hookname, HOOK_CONFIG_SUFFIX);\n\t\tif ((rc != 0) && (pbs_errno != PBSE_MOM_REJECT_ROOT_SCRIPTS)) {\n\t\t\tpbs_asprintf(&msgbuf,\n\t\t\t\t     \"errno %d: failed to copy hook file %s to %s:%d\",\n\t\t\t\t     pbs_errno, hookfile, minfo->mi_host, minfo->mi_port);\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_REQUEST, LOG_WARNING, msg_daemonname, msgbuf);\n\t\t\tfree(msgbuf);\n\t\t\tfailed_flag = TRUE;\n\t\t} else {\n\t\t\tif (pbs_errno != PBSE_MOM_REJECT_ROOT_SCRIPTS) {\n\t\t\t\tpbs_asprintf(&msgbuf,\n\t\t\t\t\t     \"successfully sent hook file %s to %s:%d\",\n\t\t\t\t\t     hookfile, minfo->mi_host, minfo->mi_port);\n\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_REQUEST, LOG_INFO, msg_daemonname, msgbuf);\n\t\t\t\tfree(msgbuf);\n\t\t\t} else {\n\t\t\t\tpbs_asprintf(&msgbuf,\n\t\t\t\t\t     \"warning: sending hook file %s to %s:%d got rejected (mom's reject_root_scripts=1)\",\n\t\t\t\t\t     hookfile, minfo->mi_host, minfo->mi_port);\n\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_REQUEST, LOG_INFO, msg_daemonname, msgbuf);\n\t\t\t\tfree(msgbuf);\n\t\t\t}\n\t\t\tpact->action &= ~(MOM_HOOK_ACTION_SEND_CONFIG);\n\t\t\thook_track_save((mominfo_t *) minfo, j);\n\t\t}\n\t}\n\n\tif (event == MOM_HOOK_ACTION_SEND_SCRIPT) {\n\t\tsnprintf(hookfile, sizeof(hookfile), \"%.*s%.*s%s\",\n\t\t\t (int) (sizeof(hookfile) - PBS_HOOK_NAME_SIZE - strlen(HOOK_SCRIPT_SUFFIX)),\n\t\t\t path_hooks, PBS_HOOK_NAME_SIZE, pact->hookname, HOOK_SCRIPT_SUFFIX);\n\t\tif ((rc != 0) && (pbs_errno != PBSE_MOM_REJECT_ROOT_SCRIPTS)) {\n\t\t\tpbs_asprintf(&msgbuf,\n\t\t\t\t     \"errno %d: failed to copy hook file %s to %s:%d\",\n\t\t\t\t     pbs_errno, hookfile, minfo->mi_host, minfo->mi_port);\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_REQUEST, LOG_WARNING, msg_daemonname, msgbuf);\n\t\t\tfree(msgbuf);\n\t\t\tfailed_flag = TRUE;\n\t\t} else {\n\t\t\tif (pbs_errno != 
PBSE_MOM_REJECT_ROOT_SCRIPTS) {\n\t\t\t\tpbs_asprintf(&msgbuf,\n\t\t\t\t\t     \"successfully sent hook file %s to %s:%d\",\n\t\t\t\t\t     hookfile, minfo->mi_host, minfo->mi_port);\n\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_REQUEST, LOG_INFO, msg_daemonname, msgbuf);\n\t\t\t\tfree(msgbuf);\n\t\t\t} else {\n\t\t\t\tpbs_asprintf(&msgbuf,\n\t\t\t\t\t     \"warning: sending hook file %s to %s:%d got rejected (mom's reject_root_scripts=1)\",\n\t\t\t\t\t     hookfile, minfo->mi_host, minfo->mi_port);\n\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_REQUEST, LOG_INFO, msg_daemonname, msgbuf);\n\t\t\t\tfree(msgbuf);\n\t\t\t}\n\t\t\tpact->action &= ~(MOM_HOOK_ACTION_SEND_SCRIPT);\n\t\t\thook_track_save((mominfo_t *) minfo, j);\n\t\t}\n\t}\n\n\tpact->reply_expected &= ~(event);\n\tif (failed_flag && check_for_latest_action(minfo, pact, j, event)) {\n\t\tpact->action &= ~(event);\n\t\thook_track_save(minfo, j);\n\t}\n\n\tg_hook_replies_recvd++;\n\n\tDBPRT((\"expected=%d, replies=%d\\n\", g_hook_replies_expected, g_hook_replies_recvd));\n\n\tif (g_hook_replies_recvd == g_hook_replies_expected) {\n\t\t/*\n\t\t * We are done with this batch of hook replies\n\t\t * allow next set of hook requests to go out now\n\t\t */\n\t\tsync_mom_hookfiles_replies_pending = 0;\n\t\tg_hook_replies_recvd = 0;\n\t\tg_hook_replies_expected = 0;\n\n\t\t/* attempt a collapse of the hook tracking file now */\n\t\tcollapse_hook_tr();\n\t}\n}\n\n/**\n * @brief\n *\t\tstatic helper function to check and add a hook command to a mom\n *\t\tto a list of multicast commands.\n *\n *\t\tA TPP multicast command consists of the same command to be sent to\n *\t\ta group of target moms.\n *\n * @param[in] conn      - The stream to the mom\n * @param[in] minfo     - The pointer to the mom info\n * @param[in] hookname  - Name of the hook for which the command is being sent\n * @param[in] action    - The hook action/cmd being performed\n * @param[in] act_index - The index in the mom's hook actions array\n 
*\n * @return hook_mcast_info_t\t- structures required for TPP mcast communication of hooks to moms\n */\nstatic hook_mcast_info_t *\ncheck_add_hook_mcast_info(int conn, mominfo_t *minfo, char *hookname, int action, int act_index)\n{\n\tint i;\n\tvoid *tmp;\n\tstruct def_hk_cmd_info *info = NULL;\n\tchar *dup_msgid = NULL;\n\n\tfor (i = 0; i < g_hook_mcast_array_len; i++) {\n\t\tif (strcmp(g_hook_mcast_array[i].hookname, hookname) == 0 &&\n\t\t    g_hook_mcast_array[i].action == action)\n\t\t\tbreak;\n\t}\n\tif (i < g_hook_mcast_array_len) {\n\t\t/* add this connection as part of the mcast connections */\n\n\t\tif ((info = mk_deferred_hook_info(act_index, action,\n\t\t\t\t\t\t  g_sync_hook_tid)) == NULL)\n\t\t\treturn NULL;\n\n\t\tif ((dup_msgid = strdup(g_hook_mcast_array[i].msgid)) == NULL) {\n\t\t\tfree(info);\n\t\t\treturn NULL;\n\t\t}\n\n\t\tif (add_mom_deferred_list(conn, minfo, post_sendhookTPP,\n\t\t\t\t\t  dup_msgid, minfo, info) == NULL) {\n\t\t\tfree(info);\n\t\t\tfree(dup_msgid);\n\t\t\treturn NULL;\n\t\t}\n\n\t\tif (tpp_mcast_add_strm(g_hook_mcast_array[i].mconn, conn, FALSE) != 0) {\n\t\t\tfree(info);\n\t\t\tfree(dup_msgid);\n\t\t\treturn NULL;\n\t\t}\n\t\tgoto SUCCESS_RET;\n\t}\n\n\t/* we did not find a match, allocate a new index */\n\ti = g_hook_mcast_array_len;\n\n\ttmp = realloc(g_hook_mcast_array, sizeof(hook_mcast_info_t) * (g_hook_mcast_array_len + 1));\n\tif (!tmp) {\n\t\tlog_err(-1, __func__, \"Could not allocate array of hook info\");\n\t\treturn NULL;\n\t}\n\tg_hook_mcast_array = tmp;\n\n\tg_hook_mcast_array[i].action = action;\n\tif ((g_hook_mcast_array[i].hookname = strdup(hookname)) == NULL)\n\t\treturn NULL;\n\n\tif (get_msgid(&g_hook_mcast_array[i].msgid) != 0)\n\t\treturn NULL;\n\n\tif ((info = mk_deferred_hook_info(act_index, action,\n\t\t\t\t\t  g_sync_hook_tid)) == NULL)\n\t\treturn NULL;\n\n\tif (add_mom_deferred_list(conn, minfo, post_sendhookTPP,\n\t\t\t\t  strdup(g_hook_mcast_array[i].msgid), minfo, info) == NULL) 
{\n\t\tfree(info);\n\t\treturn NULL;\n\t}\n\n\tif ((g_hook_mcast_array[i].mconn = tpp_mcast_open()) == -1) {\n\t\tfree(info);\n\t\treturn NULL;\n\t}\n\n\tif (tpp_mcast_add_strm(g_hook_mcast_array[i].mconn, conn, FALSE) != 0) {\n\t\tfree(info);\n\t\treturn NULL;\n\t}\n\n\t/* Increment size of the array here only when everything is successful\n\t * This way, we do not have to reset anything back if we failed earlier\n\t * The expanded array is okay to not be resized back\n\t */\n\tg_hook_mcast_array_len++;\n\nSUCCESS_RET:\n\t((mom_svrinfo_t *) minfo->mi_data)->msr_action[act_index]->reply_expected |= action;\n\n\tg_hook_replies_expected++;\n\n\treturn &g_hook_mcast_array[i];\n}\n\n/**\n * @brief\n *\t\tstatic helper function to delete the deferred hook commands from\n *\t\tall the moms that are part of the multicast messages information\n *\t\ttracked by index.\n *\n * @see\n * \t\tsync_mom_hookfilesTPP\n *\n * @param[in]\tindex - The index in the g_hook_mcast_array\n *\n * @return void\n */\nstatic void\ndel_deferred_hook_cmds(int index)\n{\n\tchar *msgid = g_hook_mcast_array[index].msgid;\n\tint mconn = g_hook_mcast_array[index].mconn;\n\tint *conns, count, i, handle;\n\tmominfo_t *pmom = 0;\n\tstruct work_task *ptask, *tmp_task;\n\tstruct def_hk_cmd_info *info;\n\tint j;\n\tint event;\n\tmom_hook_action_t *pact;\n\tmominfo_t *minfo;\n\n\tconns = tpp_mcast_members(mconn, &count);\n\tfor (i = 0; i < count; i++) {\n\t\thandle = conns[i];\n\n\t\tif ((pmom = tfind2((u_long) handle, 0, &streams)) == NULL)\n\t\t\treturn;\n\n\t\t/* get the task list */\n\t\tptask = (struct work_task *) GET_NEXT(pmom->mi_dmn_info->dmn_deferred_cmds);\n\n\t\twhile (ptask) {\n\t\t\t/* no need to compare wt_event with handle, since the\n\t\t\t * task list is for this mom and so it will always match\n\t\t\t */\n\t\t\ttmp_task = ptask;\n\t\t\tptask = (struct work_task *) GET_NEXT(ptask->wt_linkobj2);\n\t\t\tif (tmp_task->wt_type == WORK_Deferred_cmd &&\n\t\t\t    strcmp(msgid, 
tmp_task->wt_event2) == 0) {\n\n\t\t\t\tif (tmp_task->wt_event2)\n\t\t\t\t\tfree(tmp_task->wt_event2);\n\n\t\t\t\tminfo = tmp_task->wt_parm1;\n\t\t\t\tinfo = (struct def_hk_cmd_info *) tmp_task->wt_parm2;\n\n\t\t\t\tif (!info)\n\t\t\t\t\treturn;\n\n\t\t\t\tj = info->index;\n\t\t\t\tevent = info->event;\n\t\t\t\tfree(info);\n\n\t\t\t\tpact = ((mom_svrinfo_t *) minfo->mi_data)->msr_action[j];\n\n\t\t\t\tpact->action &= ~(event);\n\t\t\t\tpact->reply_expected &= ~(event);\n\t\t\t\thook_track_save((mominfo_t *) minfo, j);\n\n\t\t\t\t/* now dispatch the reply to the routine in the work task */\n\t\t\t\tdelete_task(tmp_task);\n\n\t\t\t\tg_hook_replies_expected--;\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\t\tPerforms actions such as sending hook attributes/scripts, and also\n *\t\tthe resourcedef file, to a particular mom, or to all the moms in the\n *\t\tsystem (this function uses deferred requests over a TPP stream)\n *\n * @see\n * \t\tmc_sync_mom_hookfiles and uc_delete_mom_hooks\n *\n * @param[in]\tminfo\t- particular mom information to send hook request, or\n *\t\t\t \tif NULL, then hook action request sent to all the\n *\t\t\t\tmoms in the system.\n *\n * @return enum sync_hookfiles_result\n * @retval\tSYNC_HOOKFILES_NONE\tif all mom hook actions succeeded in sending.\n * @retval\tSYNC_HOOKFILES_FAIL\tif all mom hook actions failed to be sent.\n * @retval\tSYNC_HOOKFILES_SUCCESS_PARTIAL\tif some (not all) mom hook actions failed to be sent.\n */\nenum sync_hookfiles_result\nsync_mom_hookfilesTPP(void *minfo)\n{\n\tint i, j;\n\tint conn = -1; /* a client style connection handle */\n\tchar hookfile[MAXPATHLEN + 1];\n\tmominfo_t **minfo_array = NULL;\n\tint minfo_array_size;\n\tmominfo_t *minfo_array_tmp[1];\n\tmom_hook_action_t *pact;\n\tint skipped = 0;\n\tint ret = SYNC_HOOKFILES_NONE;\n\n\tif (minfo == NULL) {\n\t\tminfo_array = mominfo_array;\n\t\tminfo_array_size = mominfo_array_size;\n\t} else {\n\t\tminfo_array_tmp[0] = minfo;\n\t\tminfo_array = (mominfo_t **) 
minfo_array_tmp;\n\t\tminfo_array_size = 1;\n\t}\n\n\tsync_mom_hookfiles_replies_pending = 1;\n\tg_sync_hook_tid = hook_action_tid_get();\n\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t \"g_sync_hook_tid=%lld\", g_sync_hook_tid);\n\tlog_event(PBSEVENT_DEBUG4, PBS_EVENTCLASS_SERVER,\n\t\t  LOG_INFO, __func__, log_buffer);\n\n\tfor (i = 0; i < minfo_array_size; i++) {\n\n\t\tif (minfo_array[i] == NULL)\n\t\t\tcontinue;\n\n\t\tconn = minfo_array[i]->mi_dmn_info->dmn_stream;\n\t\tif (conn == -1) {\n\t\t\tskipped++;\n\t\t\tcontinue;\n\t\t}\n\n\t\tif (minfo_array[i]->mi_dmn_info->dmn_state & INUSE_DOWN) {\n\t\t\tskipped++;\n\t\t\tcontinue;\n\t\t}\n\n\t\ttpp_add_close_func(conn, process_DreplyTPP); /* register a close handler */\n\n\t\tpbs_errno = 0;\n\t\tfor (j = 0; j < ((mom_svrinfo_t *) minfo_array[i]->mi_data)->msr_num_action; j++) {\n\t\t\thook *phook;\n\t\t\tpact = ((mom_svrinfo_t *) minfo_array[i]->mi_data)->msr_action[j];\n\n\t\t\tif ((pact == NULL) || (pact->action == MOM_HOOK_ACTION_NONE))\n\t\t\t\tcontinue;\n\n\t\t\tif (pact->action & MOM_HOOK_ACTION_DELETE_RESCDEF) {\n\t\t\t\tif (!check_add_hook_mcast_info(conn, minfo_array[i], pact->hookname,\n\t\t\t\t\t\t\t       MOM_HOOK_ACTION_DELETE_RESCDEF, j))\n\t\t\t\t\tret = SYNC_HOOKFILES_FAIL;\n\t\t\t} else if (pact->action & MOM_HOOK_ACTION_SEND_RESCDEF) {\n\t\t\t\tif (!check_add_hook_mcast_info(conn, minfo_array[i], pact->hookname,\n\t\t\t\t\t\t\t       MOM_HOOK_ACTION_SEND_RESCDEF, j))\n\t\t\t\t\tret = SYNC_HOOKFILES_FAIL;\n\t\t\t}\n\n\t\t\t/* execute delete action before the send actions */\n\t\t\tif (pact->do_delete_action_first && (pact->action & MOM_HOOK_ACTION_DELETE)) {\n\t\t\t\tif (!check_add_hook_mcast_info(conn, minfo_array[i], pact->hookname,\n\t\t\t\t\t\t\t       MOM_HOOK_ACTION_DELETE, j))\n\t\t\t\t\tret = SYNC_HOOKFILES_FAIL;\n\t\t\t}\n\n\t\t\tphook = find_hook(pact->hookname);\n\t\t\tif (pact->action & MOM_HOOK_ACTION_SEND_ATTRS) {\n\t\t\t\tif (!phook || (phook->event & MOM_EVENTS) == 
0)\n\t\t\t\t\tpact->action &= ~MOM_HOOK_ACTION_SEND_ATTRS;\n\t\t\t\telse if (!check_add_hook_mcast_info(conn, minfo_array[i], pact->hookname, MOM_HOOK_ACTION_SEND_ATTRS, j))\n\t\t\t\t\tret = SYNC_HOOKFILES_FAIL;\n\t\t\t}\n\n\t\t\tif (pact->action & MOM_HOOK_ACTION_SEND_CONFIG) {\n\t\t\t\tif (!phook || (phook->event & MOM_EVENTS) == 0)\n\t\t\t\t\tpact->action &= ~MOM_HOOK_ACTION_SEND_CONFIG;\n\t\t\t\telse if (!check_add_hook_mcast_info(conn, minfo_array[i], pact->hookname, MOM_HOOK_ACTION_SEND_CONFIG, j))\n\t\t\t\t\tret = SYNC_HOOKFILES_FAIL;\n\t\t\t}\n\n\t\t\tif (pact->action & MOM_HOOK_ACTION_SEND_SCRIPT) {\n\t\t\t\tif (!phook || (phook->event & MOM_EVENTS) == 0)\n\t\t\t\t\tpact->action &= ~MOM_HOOK_ACTION_SEND_SCRIPT;\n\t\t\t\telse if (!check_add_hook_mcast_info(conn, minfo_array[i], pact->hookname,\n\t\t\t\t\t\t\t\t    MOM_HOOK_ACTION_SEND_SCRIPT, j))\n\t\t\t\t\tret = SYNC_HOOKFILES_FAIL;\n\t\t\t}\n\n\t\t\t/* execute send actions above first, and then this delete action */\n\n\t\t\tif ((!pact->do_delete_action_first) && (pact->action & MOM_HOOK_ACTION_DELETE)) {\n\t\t\t\tif (!check_add_hook_mcast_info(conn, minfo_array[i], pact->hookname,\n\t\t\t\t\t\t\t       MOM_HOOK_ACTION_DELETE, j))\n\t\t\t\t\tret = SYNC_HOOKFILES_FAIL;\n\t\t\t}\n\t\t} /* j-loop */\n\t}\t  /* i-loop */\n\n\t/* now do the actual transmissions */\n\tfor (i = 0; i < g_hook_mcast_array_len; i++) {\n\t\tchar *msgid = g_hook_mcast_array[i].msgid;\n\t\tchar *hookname = g_hook_mcast_array[i].hookname;\n\t\tint mconn = g_hook_mcast_array[i].mconn;\n\t\tint rc = 0;\n\t\tint cmd;\n\t\tint filetype;\n\n\t\tif (g_hook_mcast_array[i].action == MOM_HOOK_ACTION_DELETE_RESCDEF) {\n\t\t\tsnprintf(hookfile, sizeof(hookfile), \"%s\", hookname);\n\t\t\tcmd = 1;\n\t\t\tfiletype = 1;\n\t\t} else if (g_hook_mcast_array[i].action & MOM_HOOK_ACTION_SEND_RESCDEF) {\n\t\t\tsnprintf(hookfile, sizeof(hookfile), \"%s%s\", path_hooks, hookname);\n\t\t\tcmd = 2;\n\t\t\tfiletype = 1;\n\t\t} else if 
(g_hook_mcast_array[i].action & MOM_HOOK_ACTION_DELETE) {\n\t\t\tsnprintf(hookfile, sizeof(hookfile), \"%s%s\", hookname, HOOK_FILE_SUFFIX);\n\t\t\tcmd = 1;\n\t\t\tfiletype = 2;\n\t\t} else if (g_hook_mcast_array[i].action & MOM_HOOK_ACTION_SEND_ATTRS) {\n\t\t\tsnprintf(hookfile, sizeof(hookfile), \"%s%s%s\", path_hooks, hookname, HOOK_FILE_SUFFIX);\n\t\t\tcmd = 2;\n\t\t\tfiletype = 2;\n\t\t} else if (g_hook_mcast_array[i].action & MOM_HOOK_ACTION_SEND_CONFIG) {\n\t\t\tsnprintf(hookfile, sizeof(hookfile), \"%s%s%s\", path_hooks, hookname, HOOK_CONFIG_SUFFIX);\n\t\t\tcmd = 2;\n\t\t\tfiletype = 2;\n\t\t} else if (g_hook_mcast_array[i].action & MOM_HOOK_ACTION_SEND_SCRIPT) {\n\t\t\tsnprintf(hookfile, sizeof(hookfile), \"%s%s%s\", path_hooks, hookname, HOOK_SCRIPT_SUFFIX);\n\t\t\tcmd = 2;\n\t\t\tfiletype = 2;\n\t\t} else {\n\t\t\tcmd = 0;\n\t\t\tfiletype = 0;\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"Unrecognized hook action\");\n\t\t\trc = -1;\n\t\t}\n\n\t\tif (cmd == 1) {\n\t\t\tif (PBSD_delhookfile(mconn, hookfile, PROT_TPP, &msgid) != 0) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"errno %d: failed to multicast deletion of %s file %s\",\n\t\t\t\t\t pbs_errno, ((filetype == 1) ? 
\"rscdef\" : \"hook\"), hookfile);\n\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t  LOG_INFO, __func__, log_buffer);\n\t\t\t\trc = -1;\n\t\t\t} else {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"PBSD_delhookfile(hookfile=%s)\", hookfile);\n\t\t\t\tlog_event(PBSEVENT_DEBUG4, PBS_EVENTCLASS_SERVER, LOG_INFO, __func__, log_buffer);\n\t\t\t}\n\t\t} else if (cmd == 2) {\n\n\t\t\trc = PBSD_copyhookfile(mconn, hookfile, PROT_TPP, &msgid);\n\t\t\tif (rc == -2) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"PBSD_copyhookfile(mconn=%d, hookfile=%s): no hook file to copy (rc == -2)\", mconn, hookfile);\n\t\t\t\tlog_event(PBSEVENT_DEBUG4, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t  LOG_INFO, __func__, log_buffer);\n\t\t\t\t/* no hookfile to copy */\n\t\t\t\tdel_deferred_hook_cmds(i);\n\t\t\t\trc = 0;\n\t\t\t} else if (rc != 0) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"errno %d: failed to multicast copy %s file %s\",\n\t\t\t\t\t pbs_errno, ((filetype == 1) ? 
\"rscdef\" : \"hook\"), hookfile);\n\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SERVER, LOG_INFO, __func__, log_buffer);\n\t\t\t\trc = -1;\n\t\t\t} else {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"PBSD_copyhookfile(hookfile=%s)\", hookfile);\n\t\t\t\tlog_event(PBSEVENT_DEBUG4, PBS_EVENTCLASS_SERVER, LOG_INFO, __func__, log_buffer);\n\t\t\t}\n\t\t}\n\n\t\tif (rc == -1) {\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_REQUEST, LOG_WARNING, msg_daemonname, log_buffer);\n\t\t\tdel_deferred_hook_cmds(i);\n\t\t\tret = SYNC_HOOKFILES_FAIL;\n\t\t}\n\n\t\t/* we are done with the mcast for this index */\n\t\ttpp_mcast_close(mconn);\n\t\tfree(g_hook_mcast_array[i].hookname);\n\t\tfree(g_hook_mcast_array[i].msgid);\n\t}\n\n\tif (g_hook_mcast_array) {\n\t\tfree(g_hook_mcast_array);\n\t\tg_hook_mcast_array = NULL;\n\t\tg_hook_mcast_array_len = 0;\n\t}\n\n\tif (g_hook_replies_expected == 0) {\n\t\t/* No hook requests sent, so we set the\n\t\t * variable to 0, so that the next set of\n\t\t * hook pending operations can get triggered\n\t\t */\n\t\tsync_mom_hookfiles_replies_pending = 0;\n\t}\n\n\t/* set success to partial so that we come back and try again later */\n\tif (skipped > 0)\n\t\tret = SYNC_HOOKFILES_SUCCESS_PARTIAL;\n\n\t/* if we returned SYNC_HOOKFILES_NONE, then all hook actions were sent, no retry\n\t * needs to be done. 
This is in sync with mc_sync_mom_hookfiles() return values.\n\t */\n\treturn (ret);\n}\n\n/**\n * @brief\n *\tMulticast to moms all the pending mom hook sync operations\n *\n * @return int\n * @retval 0\tfor a successful call to sync_mom_hookfilesTPP()\n * @retval != 0 if an error occurred.\n */\nint\nmc_sync_mom_hookfiles(void)\n{\n\tint rc;\n\n\tg_sync_hook_time = time(0);\n\tsnprintf(log_buffer, sizeof(log_buffer), \"g_sync_hook_time = %s\", ctime(&g_sync_hook_time));\n\tlog_event(PBSEVENT_DEBUG4, PBS_EVENTCLASS_SERVER, LOG_INFO, __func__, log_buffer);\n\trc = sync_mom_hookfilesTPP(NULL);\n\t/* transaction id to use for next batch of updates */\n\thook_action_tid_set(hook_action_tid_get() + 1);\n\treturn rc;\n}\n\n/**\n * @brief\n *\t\tAdds a pending action to all hooks for the mom in 'minfo' if not NULL,\n *\t\tor to all the moms in the system.\n * @par NOTE\n *\t\tFor every successful pending action add, a line of data is\n *\t\twritten in [PATH_HOOKS]/hook_tracking.TR file as:\n *\t\t\t<mom_name> <mom_port> <hook_name> <action>\n *\t\twhere <action> is the current action flag value.\n *\n * @param[in]\tminfo\t\t- if not NULL, then add mom hook action\n *\t\t\t\ton this particular mom in 'minfo'.\n * @param[in] \taction\t\t- the type of action\n *\t\t\t\t(MOM_HOOK_ACTION_SEND_ATTRS,\n *\t\t\t\tMOM_HOOK_ACTION_SEND_SCRIPT, etc...)\n *\n * @return void\n */\nvoid\nadd_pending_mom_allhooks_action(void *minfo, unsigned int action)\n{\n\thook *phook;\n\n\tphook = (hook *) GET_NEXT(svr_allhooks);\n\twhile (phook) {\n\t\tif (phook->hook_name && (phook->event & MOM_EVENTS)) {\n\t\t\tadd_pending_mom_hook_action((mominfo_t *) minfo, phook->hook_name, action);\n\t\t}\n\t\tphook = (hook *) GET_NEXT(phook->hi_allhooks);\n\t}\n}\n\n/**\n * @brief\n *\t\tclears out reply_expected flags of timed out actions\n *\t\tand deletes their wait tasks\n *\n * @see\n * \t\thandle_hook_sync_timeout\n *\n * @param[in]\ttid\t- transaction id of timed-out hook sync 
sequence\n *\n * @return void\n */\nvoid\nclear_timed_out_reply_expected(long long int tid)\n{\n\tint i, j;\n\tmom_hook_action_t *pact;\n\tstruct work_task *ptask, *tmp_task;\n\tmominfo_t *pmom;\n\tstruct def_hk_cmd_info *info;\n\n\tfor (i = 0; i < mominfo_array_size; i++) {\n\n\t\tif ((pmom = mominfo_array[i]) == NULL)\n\t\t\tcontinue;\n\n\t\tif (((mom_svrinfo_t *) pmom->mi_data)->msr_num_action && ((mom_svrinfo_t *) pmom->mi_data)->msr_action) {\n\n\t\t\tfor (j = 0; j < ((mom_svrinfo_t *) pmom->mi_data)->msr_num_action; j++) {\n\t\t\t\tpact = ((mom_svrinfo_t *) pmom->mi_data)->msr_action[j];\n\t\t\t\tif (pact && pact->reply_expected && (pact->tid == tid)) {\n\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t\t   LOG_INFO, __func__, \"timedout, clearing reply_expected for %d event[%lld] of %s hook for %s\",\n\t\t\t\t\t\t   pact->reply_expected, tid, pact->hookname, pmom->mi_host);\n\t\t\t\t\t/* get the task list */\n\t\t\t\t\tptask = (struct work_task *) GET_NEXT(pmom->mi_dmn_info->dmn_deferred_cmds);\n\n\t\t\t\t\twhile (ptask) {\n\t\t\t\t\t\t/* no need to compare wt_event with handle, since the\n\t\t\t\t\t\t* task list is for this mom and so it will always match\n\t\t\t\t\t\t*/\n\t\t\t\t\t\ttmp_task = ptask;\n\t\t\t\t\t\tptask = (struct work_task *) GET_NEXT(ptask->wt_linkobj2);\n\t\t\t\t\t\tif ((tmp_task->wt_type == WORK_Deferred_cmd) &&\n\t\t\t\t\t\t    (pmom == tmp_task->wt_parm1)) {\n\n\t\t\t\t\t\t\tinfo = (struct def_hk_cmd_info *) tmp_task->wt_parm2;\n\n\t\t\t\t\t\t\tif (!info || (j != info->index) || !(pact->reply_expected & info->event)) {\n\t\t\t\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t\t\t\t\t   LOG_INFO, __func__, \"timedout, skipped deleting pending WORK_Deferred_cmd for %s:%s\",\n\t\t\t\t\t\t\t\t\t   pact->hookname, pmom->mi_host);\n\t\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t\tif (tmp_task->wt_event2)\n\t\t\t\t\t\t\t\tfree(tmp_task->wt_event2);\n\n\t\t\t\t\t\t\tif 
(check_for_latest_action(pmom, pact, j, info->event))\n\t\t\t\t\t\t\t\tpact->action &= ~(info->event);\n\n\t\t\t\t\t\t\tfree(info);\n\n\t\t\t\t\t\t\tdelete_task(tmp_task);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tpact->reply_expected = 0U;\n\t\t\t\t\tpact->tid = hook_action_tid_get();\n\t\t\t\t\thook_track_save(pmom, j);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\t\tchecks whether the hook sync operation has timed out\n *\t\tand, if so, handles the timeout activities\n *\n * @see\n * \t\tnext_sync_mom_hookfiles\n *\n * @return int\n * @retval 0\tif no timeout has occurred\n * @retval != 0 if a timeout occurred.\n */\nint\nhandle_hook_sync_timeout(void)\n{\n\tunsigned long timeout_sec;\n\ttime_t timeout_time;\n\ttime_t current_time;\n\ttimeout_sec = SYNC_MOM_HOOKFILES_TIMEOUT_TPP;\n\tif (is_sattr_set(SVR_ATR_sync_mom_hookfiles_timeout))\n\t\ttimeout_sec = get_sattr_long(SVR_ATR_sync_mom_hookfiles_timeout);\n\tcurrent_time = time(NULL);\n\ttimeout_time = g_sync_hook_time + timeout_sec;\n\tif (sync_mom_hookfiles_replies_pending) {\n\t\tif (current_time <= timeout_time) {\n\t\t\t/* previous updates still in progress and not timed out */\n\t\t\treturn 0;\n\t\t}\n\n\t\t/* we're timing out previous sync mom hook files process/action */\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"Timing out previous send of mom hook updates \"\n\t\t\t \"(send replies expected=%d received=%d)\",\n\t\t\t g_hook_replies_expected, g_hook_replies_recvd);\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_INFO, __func__, log_buffer);\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"timeout_sec=%lu\", timeout_sec);\n\t\tlog_event(PBSEVENT_DEBUG4, PBS_EVENTCLASS_SERVER, LOG_INFO, __func__, log_buffer);\n\n\t\tclear_timed_out_reply_expected(g_sync_hook_tid);\n\t\tg_hook_replies_recvd = 0;\n\t\tg_hook_replies_expected = 0;\n\t\t/* attempt collapsing the hook tracking file */\n\t\tcollapse_hook_tr();\n\t\tsync_mom_hookfiles_replies_pending = 0;\n\t\treturn 1;\n\t}\n\n\treturn 
0;\n}\n\n/**\n * @brief\n *\t\tChecks to see if it's time to run mc_sync_mom_hookfiles() and if so,\n *\t\tthen runs it.\n *\n * @see\n * \t\tnext_task\n *\n * @return\tvoid\n */\nvoid\nnext_sync_mom_hookfiles(void)\n{\n\tint timed_out = handle_hook_sync_timeout();\n\n\tif ((do_sync_mom_hookfiles || timed_out) && !sync_mom_hookfiles_replies_pending && mc_sync_mom_hookfiles() == 0)\n\t\tdo_sync_mom_hookfiles = 0;\n}\n\n/**\n * @brief\n *\t\tMark that a mom hook has been seen, resulting in the 'mom_hooks_seen'\n *\t\tvariable getting incremented.\n *\n * @see\n * \t\tpbsd_init\n *\n * @return void\n */\nvoid\nmark_mom_hooks_seen(void)\n{\n\n\tif (mom_hooks_seen < 0) { /* should not happen */\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_INFO, __func__,\n\t\t\t  \"mom_hooks_seen went negative, resetting to 0\");\n\t\tmom_hooks_seen = 0;\n\t}\n\tmom_hooks_seen++;\n}\n\n/**\n * @brief\n *\t\tReturns the value of the 'mom_hooks_seen' variable.\n *\n * @see\n * \t\tcreate_mom_entry, delete_svrmom_entry and start_vnode_provisioning.\n *\n * @return int - the mom_hooks_seen value\n */\nint\nmom_hooks_seen_count(void)\n{\n\treturn (mom_hooks_seen);\n}\n\n/**\n * @brief\n *\t\tunicast delete mom hook requests and a delete rescdef request\n *\t\tto the 'mom' represented by 'minfo' data.\n *\t\trequest failures are ignored\n *\n * @see\n * \t\tdelete_svrmom_entry\n *\n * @param[in]\tminfo\t- data representing the 'mom'.\n *\n * @return void\n */\nvoid\nuc_delete_mom_hooks(void *minfo)\n{\n\t/*\n\t * unicast delete hook batch request\n\t */\n\thook *phook;\n\tchar hookfile[MAXPATHLEN + 1];\n\tmominfo_t *mom_info = (mominfo_t *) minfo;\n\tchar *msgid = NULL;\n\n\tphook = (hook *) GET_NEXT(svr_allhooks);\n\twhile (phook) {\n\t\tif (phook->hook_name &&\n\t\t    (phook->event & MOM_EVENTS)) {\n\t\t\tmsgid = NULL;\n\t\t\tsnprintf(hookfile, sizeof(hookfile), \"%s%s\", phook->hook_name, 
HOOK_FILE_SUFFIX);\n\t\t\tPBSD_delhookfile(mom_info->mi_dmn_info->dmn_stream, hookfile, PROT_TPP, &msgid);\n\t\t\tfree(msgid);\n\t\t\tmsgid = NULL;\n\t\t}\n\t\tphook = (hook *) GET_NEXT(phook->hi_allhooks);\n\t}\n\tPBSD_delhookfile(mom_info->mi_dmn_info->dmn_stream, PBS_RESCDEF, PROT_TPP, &msgid);\n\tfree(msgid);\n\treturn;\n}\n\n/**\n * @brief\n * \t\tReturns the hook resourcedef file checksum value.\n *\n * @see\n * \t\tis_request\n *\n * @return unsigned long\n */\nunsigned long\nget_hook_rescdef_checksum(void)\n{\n\treturn (hook_rescdef_checksum);\n}\n\n/**\n *\n * @brief\n *\tGet the results from 'input_file' of a previously run hook.\n *\n * @param[in] \t\tinput_file -  file to process.\n * @param[in,out] \taccept_flag -  return 1 if event accept flag is true.\n * @param[in,out] \treject_flag -  return 1 if event reject flag is true.\n * @param[in,out] \treject_msg -  the reject message if reject_flag is 1.\n * @param[in]\t\treject_msg_size -  size of reject_msg buffer.\n * @param[in,out] \tpjob -  job in question, where if present (not NULL),\n *\t\t\t        it gets filled in with the\n *\t\t\t\t\"pbs.event().job\" entries in 'input_file'.\n *\t\t\t\t'pjob' can be NULL in periodic hooks, since\n *\t\t\t\t periodic hooks are not tied to jobs.\n *\t\t\t\t Note that pbs.event().job_list[<jobid>] entries\n *\t\t\t\t in 'input_file' fill in the individual\n *\t\t\t\t <jobid>'s job struct entry in the system, and\n *\t\t\t\t not the passed 'pjob' structure.\n * @param[in]\t\tphook -  the hook that executed, whose results\n *\t\t\t\twe're getting. 
If non-NULL, then phook->user is\n *\t\t\t\tused to validate 'pbs.event().job.euser' line\n *\t\t\t\tin 'input_file'.\n *\t\t\t\tIf main Mom is reading a job related hook\n *\t\t\t\tresults file, phook will be null; an entry in\n *\t\t\t\tthe file should give us the hook name from which\n *\t\t\t\tphook is found.\n * @param[out]\t\thook_output - struct of parameters to fill in output.\n *\n * @return int\n * @retval\t0 for success\n * @retval\tnon-zero for failure;  the returned parameters (accept_flag,\n *\t\treject_flag and pjob) may be invalid and should be\n *\t\tignored.   The list svrvnalist could have malloc'ed space and\n *\t\tshould be freed by the calling program.\n */\nint\nget_server_hook_results(char *input_file, int *accept_flag, int *reject_flag, char *reject_msg,\n\t\t\tint reject_msg_size, job *pjob, hook *phook, hook_output_param_t *hook_output)\n{\n\n\tchar *name_str;\n\tchar *resc_str;\n\tchar *obj_name;\n\tchar *data_value;\n\tchar *vname_str;\n\tint rc = -1;\n\tchar *pc, *pc1, *pc2, *pc3, *pc4;\n\tchar *in_data = NULL;\n\tsize_t ll;\n\tFILE *fp;\n\tchar *p;\n\tint vn_obj_len = strlen(EVENT_VNODELIST_OBJECT);\n\tchar hook_job_outfile[MAXPATHLEN + 1];\n\tFILE *fp2 = NULL;\n\tchar *line_data = NULL;\n\tint line_data_sz;\n\tlong int endpos;\n\tchar hook_euser[PBS_MAXUSER + 1] = {'\\0'};\n\tint arg_list_entries = 0;\n\tint b_triple_quotes = 0;\n\tint e_triple_quotes = 0;\n\tchar buf_data[STRBUF];\n\tint buf_data_sz = STRBUF;\n\tint valln = 0;\n\tsvrattrl *plist = NULL;\n\tstruct pbsnode *pnode;\n\tint bad = 0;\n\tchar *pbse_err;\n\tchar raw_err[10];\n\n\t/* Preset hook_euser for later.  If we are reading a job related     */\n\t/* copy of hook results, there will be one or more (one per hook)    */\n\t/* pbs_event().hook_euser=<value> entries.  In that case, hook_euser */\n\t/* is reset to the <value>.  A null string <value> means PBSADMIN.   
*/\n\tif (phook && pjob && (phook->user == HOOK_PBSUSER)) {\n\t\tstrncpy(hook_euser,\n\t\t\tget_jattr_str(pjob, JOB_ATR_euser),\n\t\t\tPBS_MAXUSER);\n\t}\n\n\t/* input_file will have content of the format: */\n\t/* pbs.event().accept=True */\n\t/* pbs.event().reject=False */\n\t/* pbs.event().vnode.pcpus=4 */\n\t/* pbs.event().vnode_list[\"node_name\"].state=sleep */\n\t/* pbs.event().vnode_list[\"node_name\"].resources_available[ncpus]=10 */\n\tif ((input_file != NULL) && (*input_file != '\\0')) {\n\t\tfp = fopen(input_file, \"r\");\n\n\t\tif (fp == NULL) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"failed to open input file %s\", input_file);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\treturn (1);\n\t\t}\n\t} else {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"bad input_file parameter\");\n\t\treturn (1);\n\t}\n\n\tline_data_sz = STRBUF;\n\tline_data = (char *) malloc(line_data_sz);\n\tif (line_data == NULL) {\n\t\tlog_err(errno, __func__, \"malloc failed\");\n\t\trc = 1;\n\t\tgoto get_hook_results_end;\n\t}\n\tline_data[0] = '\\0';\n\n\tif (fseek(fp, 0, SEEK_END) != 0) {\n\t\tlog_err(errno, __func__, \"fseek to end failed\");\n\t\trc = 1;\n\t\tgoto get_hook_results_end;\n\t}\n\n\tendpos = ftell(fp);\n\tif (fseek(fp, 0, SEEK_SET) != 0) {\n\t\tlog_err(errno, __func__, \"fseek to beginning failed\");\n\t\trc = 1;\n\t\tgoto get_hook_results_end;\n\t}\n\n\twhile (fgets(buf_data, buf_data_sz, fp) != NULL) {\n\t\tb_triple_quotes = 0;\n\t\te_triple_quotes = 0;\n\n\t\tif (pbs_strcat(&line_data, &line_data_sz, buf_data) == NULL) {\n\t\t\tgoto get_hook_results_end;\n\t\t}\n\t\tif (in_data != NULL) {\n\t\t\tfree(in_data);\n\t\t}\n\t\tin_data = strdup(line_data); /* preserve line_data */\n\t\tif (in_data == NULL) {\n\t\t\tlog_err(errno, __func__, \"strdup failed\");\n\t\t\trc = 1;\n\t\t\tgoto get_hook_results_end;\n\t\t}\n\n\t\tif ((p = strchr(in_data, '=')) != NULL) {\n\t\t\t/* string begins with three consecutive double quotes */\n\t\t\tb_triple_quotes 
= starts_with_triple_quotes(p + 1);\n\t\t}\n\n\t\tll = strlen(in_data);\n\t\tif (in_data[ll - 1] == '\\n') {\n\t\t\t/* string ends with three consecutive double quotes */\n\t\t\te_triple_quotes = ends_with_triple_quotes(in_data, 0);\n\n\t\t\tif (b_triple_quotes && !e_triple_quotes) {\n\t\t\t\tint jj;\n\n\t\t\t\twhile (fgets(buf_data, buf_data_sz, fp) != NULL) {\n\t\t\t\t\tif (pbs_strcat(&line_data, &line_data_sz,\n\t\t\t\t\t\t       buf_data) == NULL) {\n\t\t\t\t\t\tgoto get_hook_results_end;\n\t\t\t\t\t}\n\n\t\t\t\t\tjj = strlen(line_data);\n\t\t\t\t\tif ((line_data[jj - 1] != '\\n') &&\n\t\t\t\t\t    (ftell(fp) != endpos)) {\n\t\t\t\t\t\t/* get more input for current item. */\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t\te_triple_quotes = ends_with_triple_quotes(line_data, 0);\n\n\t\t\t\t\tif (e_triple_quotes) {\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif ((!b_triple_quotes && e_triple_quotes) ||\n\t\t\t\t    (b_triple_quotes && !e_triple_quotes)) {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"unmatched triple quotes! Skipping  line %s\",\n\t\t\t\t\t\t in_data);\n\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\t\t/* process a new line */\n\t\t\t\t\tline_data[0] = '\\0';\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\n\t\t\t\tif (in_data != NULL) {\n\t\t\t\t\tfree(in_data);\n\t\t\t\t}\n\t\t\t\tin_data = strdup(line_data); /* preserve line_data */\n\t\t\t\tif (in_data == NULL) {\n\t\t\t\t\tlog_err(errno, __func__, \"strdup failed\");\n\t\t\t\t\trc = 1;\n\t\t\t\t\tgoto get_hook_results_end;\n\t\t\t\t}\n\t\t\t\t/* remove newline */\n\t\t\t\tin_data[strlen(in_data) - 1] = '\\0';\n\t\t\t} else {\n\t\t\t\t/* remove newline */\n\t\t\t\tin_data[ll - 1] = '\\0';\n\t\t\t}\n\n\t\t} else if (ftell(fp) != endpos) { /* continued on next line */\n\t\t\t/* get more input for current item.  
*/\n\t\t\tcontinue;\n\t\t}\n\n\t\tdata_value = NULL;\n\t\tif ((p = strchr(in_data, '=')) != NULL) {\n\t\t\t*p = '\\0';\n\t\t\tp++;\n\t\t\twhile (isspace(*p))\n\t\t\t\tp++;\n\n\t\t\tif (b_triple_quotes) {\n\t\t\t\t/* strip triple quotes */\n\t\t\t\tp += 3;\n\t\t\t}\n\t\t\tdata_value = p;\n\t\t\tif (e_triple_quotes) {\n\t\t\t\tends_with_triple_quotes(p, 1);\n\t\t\t}\n\t\t}\n\n\t\tobj_name = in_data;\n\n\t\tpc = strrchr(in_data, '.');\n\t\tif (pc) {\n\t\t\t*pc = '\\0';\n\t\t\tpc++;\n\t\t} else {\n\t\t\tpc = in_data;\n\t\t}\n\t\tname_str = pc;\n\n\t\tpc1 = strchr(pc, '[');\n\t\tpc2 = strchr(pc, ']');\n\t\tresc_str = NULL;\n\t\tif (pc1 && pc2 && (pc2 > pc1)) {\n\t\t\t*pc1 = '\\0';\n\t\t\tpc1++;\n\t\t\t*pc2 = '\\0';\n\t\t\tpc2++;\n\n\t\t\t/* now let's if there's anything quoted inside */\n\t\t\tpc3 = strchr(pc1, '\"');\n\t\t\tif (pc3 != NULL)\n\t\t\t\tpc4 = strchr(pc3 + 1, '\"');\n\t\t\telse\n\t\t\t\tpc4 = NULL;\n\n\t\t\tif (pc3 && pc4 && (pc4 > pc3)) {\n\t\t\t\tpc3++;\n\t\t\t\t*pc4 = '\\0';\n\t\t\t\tresc_str = pc3;\n\t\t\t} else {\n\t\t\t\tresc_str = pc1;\n\t\t\t}\n\t\t}\n\n\t\t/* at this point, we have */\n\t\t/* Given:  pbs.event().<attribute>=<value> */\n\t\t/* Given:  pbs.event().job.<attribute>=<value> */\n\t\t/* Given:  pbs.event().job.<attribute>[<resc>]=<value> */\n\t\t/* Given:  pbs.event().vnode_list[<vname>].<attribute>=<value> */\n\t\t/* Given:  pbs.event().vnode_list[<vname>].<attribute>[<resc>]=<value> */\n\t\t/* We get: */\n\n\t\t/* obj_name = pbs.event() or \"pbs.event().job\" or \"pbs.event().vnode_list[<vname>]\" */\n\t\t/* name_str = <attribute> */\n\t\t/* resc_str = <resc> */\n\t\t/* data_value = <value> */\n\n\t\tif (data_value == NULL) {\n\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"%s: no value given\", in_data);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\trc = 1;\n\t\t\tgoto get_hook_results_end;\n\t\t}\n\n\t\tif (strcmp(obj_name, EVENT_OBJECT) == 0) {\n\t\t\tif (strcmp(name_str, \"hook_euser\") == 0) 
{\n\t\t\t\tstrncpy(hook_euser, data_value, PBS_MAXUSER);\n\t\t\t} else if ((accept_flag != NULL) &&\n\t\t\t\t   strcmp(name_str, \"accept\") == 0) {\n\t\t\t\tif (strcmp(data_value, \"True\") == 0)\n\t\t\t\t\t*accept_flag = 1;\n\t\t\t\telse\n\t\t\t\t\t*accept_flag = 0;\n\t\t\t} else if ((reject_flag != NULL) &&\n\t\t\t\t   strcmp(name_str, \"reject\") == 0) {\n\n\t\t\t\tif (strcmp(data_value, \"True\") == 0)\n\t\t\t\t\t*reject_flag = 1;\n\t\t\t\telse\n\t\t\t\t\t*reject_flag = 0;\n\t\t\t} else if ((reject_msg != NULL) &&\n\t\t\t\t   (strcmp(name_str, \"reject_msg\") == 0)) {\n\t\t\t\tstrncpy(reject_msg, data_value,\n\t\t\t\t\treject_msg_size - 1);\n\t\t\t} else if (strcmp(name_str, PY_EVENT_PARAM_PROGNAME) == 0) {\n\t\t\t\tif (hook_output != NULL) {\n\t\t\t\t\tchar **prog;\n\t\t\t\t\t/* need to free up here previous value */\n\t\t\t\t\t/* in case of multiple hooks! */\n\t\t\t\t\tprog = hook_output->progname;\n\t\t\t\t\tif (*prog != NULL) {\n\t\t\t\t\t\tfree(*prog);\n\t\t\t\t\t}\n\t\t\t\t\t*prog = strdup(data_value);\n\t\t\t\t}\n\t\t\t} else if (strcmp(name_str, PY_EVENT_PARAM_ARGLIST) == 0) {\n\t\t\t\targ_list_entries++;\n\t\t\t\tif (hook_output != NULL) {\n\t\t\t\t\tpbs_list_head *ar_list;\n\t\t\t\t\tar_list = hook_output->argv_list;\n\t\t\t\t\t/* free previous values at start of new list */\n\t\t\t\t\tif (arg_list_entries == 1) {\n\t\t\t\t\t\tfree_attrlist(ar_list);\n\t\t\t\t\t}\n\t\t\t\t\tadd_to_svrattrl_list(ar_list, name_str, resc_str,\n\t\t\t\t\t\t\t     data_value, 0, NULL);\n\t\t\t\t}\n\t\t\t} else if (strcmp(name_str, PY_EVENT_PARAM_ENV) == 0) {\n\t\t\t\tif (hook_output != NULL) {\n\t\t\t\t\tchar **env;\n\t\t\t\t\tenv = hook_output->env;\n\t\t\t\t\tif (*env != NULL) {\n\t\t\t\t\t\tfree(*env);\n\t\t\t\t\t}\n\t\t\t\t\t*env = strdup(data_value);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/* if the hook is rejected we can go out now */\n\t\t\tif ((reject_flag != NULL) && (*reject_flag == 1) && (reject_msg != NULL))\n\t\t\t\tgoto get_hook_results_end;\n\t\t} else if 
(strncmp(obj_name, EVENT_VNODELIST_OBJECT,\n\t\t\t\t   vn_obj_len) == 0) {\n\n\t\t\t/* NOTE: obj_name here is: pbs.event().vnode_list[<vname>] */\n\n\t\t\t/* important here to look for the leftmost '[' (using strchr)\n\t\t\t * and the rightmost ']' (using strrchr)\n\t\t\t * as we can have:\n\t\t\t *\tpbs.event().vnode_list[\"altix[5]\"].<attr>=<val>\n\t\t\t * \tand \"altix[5]\" is a valid vnode id.\n\t\t\t */\n\t\t\tif (((pc1 = strchr(obj_name, '[')) != NULL) &&\n\t\t\t    ((pc2 = strrchr(obj_name, ']')) != NULL) &&\n\t\t\t    (pc2 > pc1)) {\n\t\t\t\tpc1++;\t     /*  pc1=<vname>] */\n\t\t\t\t*pc2 = '\\0'; /* pc1=<vname>  */\n\t\t\t\tpc2++;\n\n\t\t\t\t/* now let's if there's anything quoted inside */\n\t\t\t\tpc3 = strchr(pc1, '\"');\n\t\t\t\tif (pc3 != NULL)\n\t\t\t\t\tpc4 = strchr(pc3 + 1, '\"');\n\t\t\t\telse\n\t\t\t\t\tpc4 = NULL;\n\n\t\t\t\tif (pc3 && pc4 && (pc4 > pc3)) {\n\t\t\t\t\tpc3++;\n\t\t\t\t\t*pc4 = '\\0';\n\t\t\t\t\tvname_str = pc3;\n\t\t\t\t} else {\n\t\t\t\t\tvname_str = pc1;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"object '%s' does not have a vnode name!\",\n\t\t\t\t\t obj_name);\n\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t/* process a new line */\n\t\t\t\tline_data[0] = '\\0';\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\t/* server periodic hook vnode objects */\n\t\t\tvalln = (int) strlen(data_value) + 1;\n\t\t\tplist = attrlist_create(name_str, resc_str, valln);\n\t\t\tif (plist == NULL) {\n\t\t\t\t(void) sprintf(log_buffer, \"failed to add svrattrl list %s.%s.%s:%s\",\n\t\t\t\t\t       vname_str, name_str, resc_str, data_value);\n\t\t\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t  LOG_NOTICE, msg_daemonname, log_buffer);\n\t\t\t} else {\n\t\t\t\tstrcpy(plist->al_value, data_value);\n\t\t\t\t(plist->al_link).ll_next->ll_struct = NULL;\n\t\t\t\t/* there are vnode hook updates */\n\t\t\t\t/* Push hook changes to server */\n\t\t\t\tpnode = 
find_nodebyname(vname_str);\n\t\t\t\tif (pnode == NULL) {\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t\t  LOG_INFO, vname_str, \"node_name not found\");\n\t\t\t\t} else if ((pnode->nd_state & INUSE_DELETED) == 0) {\n\n\t\t\t\t\trc = mgr_set_attr(pnode->nd_attr, node_attr_idx, node_attr_def, ND_ATR_LAST,\n\t\t\t\t\t\t\t  plist, ATR_DFLAG_WRACC, &bad, (void *) pnode, ATR_ACTION_ALTER);\n\t\t\t\t\tif (rc != 0) {\n\t\t\t\t\t\tpbse_err = pbse_to_txt(rc);\n\t\t\t\t\t\tsnprintf(raw_err, sizeof(raw_err), \"%d\", rc);\n\t\t\t\t\t\tsprintf(log_buffer, \"vnode %s: failed to set %s to %s: %s\",\n\t\t\t\t\t\t\tpnode->nd_name, plist->al_name, plist->al_value ? plist->al_value : \"\",\n\t\t\t\t\t\t\tpbse_err ? pbse_err : raw_err);\n\t\t\t\t\t\tlog_err(PBSE_SYSTEM, __func__, log_buffer);\n\t\t\t\t\t} else {\n\t\t\t\t\t\tmgr_log_attr(msg_man_set, plist,\n\t\t\t\t\t\t\t     PBS_EVENTCLASS_NODE, pnode->nd_name, NULL);\n\t\t\t\t\t\tpnode->nd_modified = 1;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tfree_svrattrl(plist);\n\t\t\t}\n\t\t}\n\t\t/* TODO: for job objects */\n\t\t/* TODO: for Server objects */\n\t\t/* TODO: for PBS objects */\n\t\tif ((fp2 != NULL) && (fputs(line_data, fp2) < 0)) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"Failed to save data in file %s\",\n\t\t\t\t hook_job_outfile);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\trc = 1;\n\t\t\tgoto get_hook_results_end;\n\t\t}\n\t\tline_data[0] = '\\0';\n\t}\n\n\tsave_nodes_db(0, NULL);\n\n\trc = 0;\n\nget_hook_results_end:\n\n\tif (fp != NULL)\n\t\tfclose(fp);\n\n\tif (fp2 != NULL) {\n\t\tif (fflush(fp2) != 0) {\n\t\t\t/* error in writing job related hook results file */\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"Failed to save data in file %s\",\n\t\t\t\t hook_job_outfile);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\trc = 1;\n\t\t\tfclose(fp2);\n\t\t\tunlink(hook_job_outfile);\n\t\t} else {\n\t\t\tfclose(fp2);\n\t\t}\n\t}\n\tif (phook && !phook->debug) 
{\n\t\t(void) unlink(input_file);\n\t}\n\tif (line_data != NULL) {\n\t\tfree(line_data);\n\t}\n\tif (in_data != NULL) {\n\t\tfree(in_data);\n\t}\n\n\treturn (rc);\n}\n\n/**\n * @brief\n *\t\tCallback function for reaping server periodic hook child.\n * @param[in]\tptask\t- work task pointer\n *\n * @return\tvoid\n */\nstatic void\npost_server_periodic_hook(struct work_task *ptask)\n{\n\n\tint stat;\n\thook *phook;\n\tpid_t mypid;\n\tchar hook_outfile[MAXPATHLEN + 1];\n\ttime_t next_time;\n\tint accept_flag = 1;\n\tint reject_flag = 0;\n\tstat = ptask->wt_aux;\n\tphook = (hook *) ptask->wt_parm1;\n\tmypid = ptask->wt_event;\n\n\tif (phook == NULL) {\n\t\tlog_err(-1, __func__, \"A periodic hook disappeared\");\n\t\treturn;\n\t}\n\tif (WIFEXITED(stat)) {\n\t\tchar reject_msg[HOOK_MSG_SIZE + 1] = {'\\0'};\n\t\tchar *next_time_str;\n\t\tint hook_error_flag = 0;\n\n\t\t/* Check hook exit status */\n\t\tif (stat == 0) {\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE,\n\t\t\t\t \"Hook got rejected\");\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_ERR, phook->hook_name, log_buffer);\n\t\t\thook_error_flag = 1; /* hook results are invalid */\n\t\t}\n\n\t\t/* hook results path */\n\t\tsnprintf(hook_outfile, MAXPATHLEN, FMT_HOOK_OUTFILE,\n\t\t\t path_hooks_workdir, HOOKSTR_PERIODIC,\n\t\t\t phook->hook_name, mypid);\n\n\t\tif (hook_error_flag == 0) {\n\t\t\t/* hook exited normally, get results from file  */\n\t\t\tif (get_server_hook_results(hook_outfile, &accept_flag, &reject_flag,\n\t\t\t\t\t\t    reject_msg, sizeof(reject_msg), NULL, phook, NULL) != 0) {\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t  LOG_ERR, phook->hook_name,\n\t\t\t\t\t  \"Failed getting hook results\");\n\t\t\t\t/* error getting results, do not accept results */\n\t\t\t\thook_error_flag = 1;\n\t\t\t}\n\t\t}\n\n\t\tif ((hook_error_flag == 1) || (accept_flag == 0)) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"%s request rejected by '%s'\",\n\t\t\t\t 
\"periodic\", phook->hook_name);\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_ERR, phook->hook_name, log_buffer);\n\t\t\tif (reject_msg[0] != '\\0') {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"%s\",\n\t\t\t\t\t reject_msg);\n\t\t\t\t/* log also the custom reject message */\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t  LOG_ERR, phook->hook_name, log_buffer);\n\t\t\t}\n\t\t}\n\n\t\tif (hook_error_flag == 0) {\n\t\t\t/* No hook error means data is communicated to */\n\t\t\t/* the server and actions are done to jobs.    */\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_INFO, phook->hook_name, \"periodic hook accepted\");\n\n\t\t\t/* remove the processed results file, note that if  */\n\t\t\t/* there was an error, it is left for debugging use */\n\t\t\tif (!phook->debug)\n\t\t\t\t(void) unlink(hook_outfile); /* remove file */\n\t\t}\n\n\t\tnext_time = time_now + phook->freq;\n\t\tnext_time_str = ctime(&next_time);\n\t\tif ((next_time_str != NULL) && (next_time_str[0] != '\\0')) {\n\t\t\tnext_time_str[strlen(next_time_str) - 1] = '\\0'; /* remove newline */\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"will run on %s\",\n\t\t\t\t next_time_str);\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_ERR, phook->hook_name, log_buffer);\n\t\t}\n\n\t\tsprintf(log_buffer, \"Server periodic hook ran successfully\");\n\t} else\n\t\tsprintf(log_buffer, \"Server periodic hook encountered errors: %d\", stat);\n\n\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SERVER, LOG_INFO,\n\t\t  __func__, log_buffer);\n\n\treturn;\n}\n\n/**\n * @brief\n *\t\tCallback function for Timed work tasks to run periodic hooks\n * @param[in]\tptask\t- work task pointer\n *\n * @return  void\n */\nvoid\nrun_periodic_hook(struct work_task *ptask)\n{\n\tchar hook_msg[HOOK_MSG_SIZE] = {'\\0'};\n\tint ret;\n\tint num_run = 0;\n\thook *phook;\n\thook_input_param_t req_ptr;\n\tpid_t pid;\n\tint event_initialized 
= 0;\n\n\tphook = (hook *) ptask->wt_parm1;\n\thook_input_param_init(&req_ptr);\n\tif (phook == NULL) {\n\t\tlog_err(-1, __func__, \"A periodic hook disappeared\");\n\t\treturn;\n\t}\n\n\tif (phook->enabled == 0 || phook->script == NULL || phook->freq < 1) {\n\t\tsprintf(log_buffer, \"periodic hook is missing information, check hook frequency and script\");\n\t\tlog_err(-1, __func__, log_buffer);\n\t}\n\n\tif (has_task_by_parm1(phook) == 1) {\n\t\t/* There is already a task present related to\n\t\t * post processing for a previously running hook.\n\t\t * Don't run the hook this time; just register a\n\t\t * timed task for its next occurrence\n\t\t */\n\t\t(void) set_task(WORK_Timed, time_now + phook->freq, run_periodic_hook, phook);\n\t\treturn;\n\t}\n\n\tpid = fork();\n\n\tif (pid == -1) { /* Error on fork */\n\t\tlog_err(errno, __func__, \"fork failed\\n\");\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn;\n\t}\n\n\tif (pid != 0) { /* The parent (main server) */\n\t\t/* Set a task for post processing of the running hook */\n\t\tstruct work_task *ptask;\n\t\tptask = set_task(WORK_Deferred_Child, (long) pid,\n\t\t\t\t post_server_periodic_hook, phook);\n\t\tif (!ptask) {\n\t\t\tlog_err(errno, __func__, msg_err_malloc);\n\t\t\treturn;\n\t\t}\n\t\t/* Set a timed task for next occurrence of this hook */\n\t\t(void) set_task(WORK_Timed, time_now + phook->freq,\n\t\t\t\trun_periodic_hook, phook);\n\t} else {\n\t\t/* Close all server connections */\n\t\tnet_close(-1);\n\t\ttpp_terminate();\n\t\t/* Unprotect child from being killed by kernel */\n\t\tdaemon_protect(0, PBS_DAEMON_PROTECT_OFF);\n\n\t\t/* set vnodes and reservation list to hook input parameter */\n\t\treq_ptr.vns_list = (pbs_list_head *) get_vnode_list();\n\t\treq_ptr.resv_list = (pbs_list_head *) get_resv_list();\n\n\t\tret = server_process_hooks(PBS_BATCH_HookPeriodic, NULL, NULL, phook,\n\t\t\t\t\t   HOOK_EVENT_PERIODIC, NULL, &req_ptr, hook_msg,\n\t\t\t\t\t   sizeof(hook_msg), pbs_python_set_interrupt, &num_run, 
&event_initialized);\n\t\tif (ret == 0)\n\t\t\tlog_event(PBSE_HOOKERROR, PBS_EVENTCLASS_HOOK, LOG_ERR, __func__, hook_msg);\n\n#if defined(DEBUG)\n\t\t/* for valgrind, clear some stuff up */\n\t\t{\n\t\t\thook *phook = (hook *) GET_NEXT(svr_allhooks);\n\t\t\twhile (phook) {\n\t\t\t\thook *tmp;\n\t\t\t\tfree(phook->hook_name);\n\t\t\t\tif (phook->script) {\n\t\t\t\t\tstruct python_script *scr = phook->script;\n\t\t\t\t\tfree(scr->path);\n\t\t\t\t\tfree(scr->py_code_obj);\n\t\t\t\t}\n\t\t\t\tfree(phook->script);\n\t\t\t\ttmp = phook;\n\t\t\t\tphook = (hook *) GET_NEXT(phook->hi_allhooks);\n\t\t\t\tfree(tmp);\n\t\t\t}\n\t\t}\n#endif\n\n\t\texit(ret);\n\t}\n\treturn;\n}\n"
  },
  {
    "path": "src/server/issue_request.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tissue_request.c\n *\n * @brief\n * \t\tFunction to allow the server to issue requests to to other batch\n * \t\tservers, scheduler, MOM, or even itself.\n *\n * \t\tThe encoding of the data takes place in other routines, see\n * \t\tthe API routines in libpbs.a\n *\n * Functions included are:\n *\trelay_to_mom()\n *\treissue_to_svr()\n *\tissue_to_svr()\n *\trelease_req()\n *\tadd_mom_deferred_list()\n *\tissue_Drequest()\n *\tprocess_Dreply()\n *\tprocess_DreplyTPP()\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <errno.h>\n#include <string.h>\n#include <sys/types.h>\n#include \"dis.h\"\n#include \"libpbs.h\"\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"credential.h\"\n#include \"batch_request.h\"\n#include \"log.h\"\n#include \"job.h\"\n#include \"work_task.h\"\n#include \"net_connect.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"server.h\"\n#include <libutil.h>\n#include \"tpp.h\"\n\n/* Global Data Items: */\nextern pbs_list_head task_list_event;\nextern time_t time_now;\nextern char *msg_issuebad;\nextern char *msg_norelytomom;\nextern char *msg_err_malloc;\n\nextern pbs_net_t pbs_server_addr;\nextern int max_connection;\n\n/**\n *\n * @brief\n *\tWrapper program to relay_to_mom2() with the 
'pwt' argument\n *\tpassed as NULL.\n */\nint\nrelay_to_mom(job *pjob, struct batch_request *request,\n\t     void (*func)(struct work_task *))\n{\n\treturn (relay_to_mom2(pjob, request, func, NULL));\n}\n\n/**\n * @brief\n * \t\tRelay a (typically existing) batch_request to MOM\n *\n *\t\tMake connection to MOM and issue the request.  Called with\n *\t\tnetwork address rather than name to save look-ups.\n *\n *\t\tUnlike issue_to_svr(), a failed connection is not retried.\n *\t\tThe calling routine typically handles this problem.\n *\n * @param[in,out]\tpjob - pointer to job\n * @param[in]\trequest - the request to send\n * @param[in]\tfunc - function pointer taking work_task structure as argument.\n * @param[out]  ppwt - the work task maintained by server\n *\t\t\tto handle deferred replies from request.\n *\n * @return\tint\n * @retval\t0\t- success\n * @retval\tnon-zero\t- error code\n */\n\nint\nrelay_to_mom2(job *pjob, struct batch_request *request,\n\t      void (*func)(struct work_task *), struct work_task **ppwt)\n{\n\tint rc;\n\tint conn; /* a client style connection handle */\n\tpbs_net_t momaddr;\n\tunsigned int momport;\n\tstruct work_task *pwt;\n\tint prot = PROT_TPP;\n\tmominfo_t *pmom = 0;\n\tpbs_list_head *mom_tasklist_ptr = NULL;\n\n\tmomaddr = pjob->ji_qs.ji_un.ji_exect.ji_momaddr;\n\tmomport = pjob->ji_qs.ji_un.ji_exect.ji_momport;\n\n\tif ((pmom = tfind2((unsigned long) momaddr, momport, &ipaddrs)) == NULL)\n\t\treturn (PBSE_NORELYMOM);\n\n\tmom_tasklist_ptr = &pmom->mi_dmn_info->dmn_deferred_cmds;\n\n\tconn = svr_connect(momaddr, momport, process_Dreply, ToServerDIS, prot);\n\tif (conn < 0) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_REQUEST, LOG_WARNING, \"\", msg_norelytomom);\n\t\treturn (PBSE_NORELYMOM);\n\t}\n\n\trequest->rq_orgconn = request->rq_conn; /* save client socket */\n\tpbs_errno = 0;\n\trc = issue_Drequest(conn, request, func, &pwt, prot);\n\tif ((rc == 0) && (func != release_req)) {\n\t\t/* work-task entry job related on 
TPP based connection, link to the job's list */\n\t\tappend_link(&pjob->ji_svrtask, &pwt->wt_linkobj, pwt);\n\t\tif (prot == PROT_TPP)\n\t\t\tappend_link(mom_tasklist_ptr, &pwt->wt_linkobj2, pwt); /* if tpp, link to mom list as well */\n\t}\n\n\tif (ppwt != NULL)\n\t\t*ppwt = pwt;\n\n\t/*\n\t * We do not want req_reject() to send non PBSE error numbers.\n\t * Check for internal errors and when found return PBSE_SYSTEM.\n\t */\n\tif ((rc != 0) && (pbs_errno == 0))\n\t\treturn (PBSE_SYSTEM);\n\telse\n\t\treturn (rc);\n}\n\n/**\n * @brief\n * \t\treissue_to_svr - recall issue_to_svr() after a delay to retry sending\n *\t\ta request that failed for a temporary reason\n *\n * @see\n *  issue_to_svr\n *\n * @param[in]\tpwt - pointer to work structure\n *\n * @return\tvoid\n */\nstatic void\nreissue_to_svr(struct work_task *pwt)\n{\n\tstruct batch_request *preq = pwt->wt_parm1;\n\tint issue_to_svr(char *, struct batch_request *,\n\t\t\t void (*)(struct work_task *));\n\n\t/* if not timed-out, retry send to remote server */\n\n\tif (((time_now - preq->rq_time) > PBS_NET_RETRY_LIMIT) ||\n\t    (issue_to_svr(preq->rq_host, preq, (void (*)(struct work_task *)) pwt->wt_parm2) == -1)) {\n\n\t\t/* either timed-out or got hard error, tell post-function  */\n\t\tpwt->wt_aux = -1;   /* seen as error by post function  */\n\t\tpwt->wt_event = -1; /* seen as connection by post func */\n\t\t((void (*)(struct work_task *)) pwt->wt_parm2)(pwt);\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \t\tissue_to_svr - issue a batch request to a server\n *\t\tThis function parses the server name, looks up its host address,\n *\t\tmakes a connection and calls issue_request (above) to send\n *\t\tthe request.\n *\n * @param[in]\tservern - name of host to send the request to\n * @param[in,out]\tpreq - batch request to send\n * @param[in]\treplyfunc - Callback func for reply\n *\n * @return\tint\n * @retval\t0 - success,\n * @retval -1 - permanent error (no such host)\n *\n * @note\n *\tOn temporary error, 
establish a work_task to retry after a delay.\n */\n\nint\nissue_to_svr(char *servern, struct batch_request *preq, void (*replyfunc)(struct work_task *))\n{\n\tint do_retry = 0;\n\tint handle;\n\tpbs_net_t svraddr;\n\tchar *svrname;\n\tunsigned int port = pbs_server_port_dis;\n\tstruct work_task *pwt;\n\textern int pbs_failover_active;\n\textern char primary_host[];\n\n\t(void) strcpy(preq->rq_host, servern);\n\tpreq->rq_fromsvr = 1;\n\tpreq->rq_perm = ATR_DFLAG_MGRD | ATR_DFLAG_MGWR | ATR_DFLAG_SvWR;\n\tsvrname = parse_servername(servern, &port);\n\n\tif ((pbs_failover_active != 0) && (svrname != NULL)) {\n\t\t/* we are the active secondary server in a failover config    */\n\t\t/* if the message is going to the primary,then redirect to me */\n\t\tsize_t len;\n\n\t\tlen = strlen(svrname);\n\t\tif (strncasecmp(svrname, primary_host, len) == 0) {\n\t\t\tif ((primary_host[(int) len] == '\\0') ||\n\t\t\t    (primary_host[(int) len] == '.'))\n\t\t\t\tsvrname = server_host;\n\t\t}\n\t}\n\tif (comp_svraddr(pbs_server_addr, svrname, &svraddr) == 0)\n\t\treturn (issue_Drequest(PBS_LOCAL_CONNECTION, preq, replyfunc, 0, 0));\n\n\tif (svraddr == (pbs_net_t) 0) {\n\t\tif (pbs_errno == PBS_NET_RC_RETRY)\n\t\t\t/* Non fatal error - retry */\n\t\t\tdo_retry = 1;\n\t} else {\n\t\thandle = svr_connect(svraddr, port, process_Dreply, ToServerDIS, PROT_TCP);\n\t\tif (handle >= 0)\n\t\t\treturn (issue_Drequest(handle, preq, replyfunc, 0, 0));\n\t\telse if (handle == PBS_NET_RC_RETRY)\n\t\t\tdo_retry = 1;\n\t}\n\n\t/* if reached here, it didn`t go, do we retry? 
*/\n\n\tif (do_retry) {\n\t\tpwt = set_task(WORK_Timed, (long) (time_now + (2 * PBS_NET_RETRY_TIME)),\n\t\t\t       reissue_to_svr, (void *) preq);\n\t\tpwt->wt_parm2 = (void *) replyfunc;\n\t\treturn (0);\n\t} else\n\t\treturn (-1);\n}\n\n/**\n * @brief\n * \t\t\trelease_req - this is the basic function to call after we are\n *\t\t\tthrough with an internally generated request to another server.\n *\t\t\tIt frees the request structure and closes the connection (handle).\n *\n *\t\t\tIn the work task entry, wt_event is the connection handle and\n *\t\t\twt_parm1 is a pointer to the request structure.\n *\n * @note\n *\t\t\tTHIS SHOULD NOT BE USED IF AN EXTERNAL (CLIENT) REQUEST WAS \"relayed\".\n *\t\t\tThe request/reply structure is still needed to reply to the client.\n *\n * @param[in]\tpwt - pointer to work structure\n *\n * @return void\n */\n\nvoid\nrelease_req(struct work_task *pwt)\n{\n\tfree_br((struct batch_request *) pwt->wt_parm1);\n\tif (pwt->wt_event != -1 && pwt->wt_aux2 != PROT_TPP)\n\t\tsvr_disconnect(pwt->wt_event);\n}\n\n/**\n *\t@brief\n *\t\tadd a task to the moms deferred command list\n *\t\tof commands issued to the server\n *\n *\t\tUsed only in case of TPP\n *\n * @param[in] stream - stream on which command is being sent\n * @param[in] minfo  - The mominfo_t pointer for the mom\n * @param[in] func   - Call back func when mom responds\n * @param[in] msgid  - String unique identifying the command from others\n * @param[in] parm1  - Fist parameter to the work task to be set\n * @param[in] parm2  - Second parameter to the work task to be set\n *\n * @return Work task structure that was allocated and added to moms deferred cmd list\n * @retval NULL  - Failure\n * @retval !NULL - Success\n *\n */\nstruct work_task *\nadd_mom_deferred_list(int stream, mominfo_t *minfo, void (*func)(struct work_task *), char *msgid, void *parm1, void *parm2)\n{\n\tstruct work_task *ptask = NULL;\n\n\t/* WORK_Deferred_cmd is very similar to WORK_Deferred_reply.\n\t 
* However in case of WORK_Deferred_reply, the wt_parm1 is assumed to\n\t * contain a batch_request structure. In cases where there is no\n\t * batch_request structure associated, we use the WORK_Deferred_cmd\n\t * event type to differentiate it in process_DreplyTPP.\n\t */\n\tptask = set_task(WORK_Deferred_cmd, (long) stream, func, parm1);\n\tif (ptask == NULL) {\n\t\tlog_err(errno, __func__, \"could not set_task\");\n\t\treturn NULL;\n\t}\n\tptask->wt_aux2 = PROT_TPP; /* set to tpp */\n\tptask->wt_parm2 = parm2;\n\tptask->wt_event2 = msgid;\n\n\t/* remove this task from the event list, as we will be adding to deferred list anyway\n\t * and there is no child process whose exit needs to be reaped\n\t */\n\tdelete_link(&ptask->wt_linkevent);\n\n\t/* append to the mom's deferred command list */\n\tappend_link(&minfo->mi_dmn_info->dmn_deferred_cmds, &ptask->wt_linkobj2, ptask);\n\treturn ptask;\n}\n\n/**\n * @brief\n * \t\tissue a batch request to another server or to a MOM\n *\t\tor even to ourself!\n *\n *\t\tIf the request is meant for this very server, then\n *\t\tSet up work-task of type WORK_Deferred_Local with a dummy\n *\t\tconnection handle (PBS_LOCAL_CONNECTION).\n *\n *\t\tDispatch the request to be processed.  [reply_send() will\n *\t\tdispatch the reply via the work task entry.]\n *\n *\t\tIf the request is to another server/MOM, then\n *\t\tSet up work-task of type WORK_Deferred_Reply with the\n *\t\tconnection handle as the event.\n *\n *\t\tEncode and send the request.\n *\n *\t\tWhen the reply is ready, process_Dreply() will decode it and\n *\t\tdispatch the work task.\n *\n * @note\n *\t\tIT IS UP TO THE FUNCTION DISPATCHED BY THE WORK TASK TO CLOSE THE\n *\t\tCONNECTION (connection handle not socket) and FREE THE REQUEST\n *\t\tSTRUCTURE.  
The connection (non-negative if open) is in wt_event\n *\t\tand the pointer to the request structure is in wt_parm1.\n *\n * @param[in] conn\t- connection index\n * @param[in] request\t- batch request to send\n * @param[in] func\t- The callback function to invoke to handle the batch reply\n * @param[out] ppwt\t- Return work task to be maintained by server to handle deferred replies\n * @param[in] prot\t- PROT_TCP or PROT_TPP\n *\n * @return  Error code\n * @retval   0 - Success\n * @retval  -1 - Failure\n *\n */\nint\nissue_Drequest(int conn, struct batch_request *request, void (*func)(struct work_task *), struct work_task **ppwt, int prot)\n{\n\tstruct attropl *patrl;\n\tstruct work_task *ptask;\n\tstruct svrattrl *psvratl;\n\tint rc;\n\tint sock = -1;\n\tenum work_type wt;\n\tchar *msgid = NULL;\n\n\trequest->tppcmd_msgid = NULL;\n\n\tif (conn == PBS_LOCAL_CONNECTION) {\n\t\twt = WORK_Deferred_Local;\n\t\trequest->rq_conn = PBS_LOCAL_CONNECTION;\n\t} else if (prot == PROT_TPP) {\n\t\tsock = conn;\n\t\trequest->rq_conn = conn;\n\t\twt = WORK_Deferred_Reply;\n\t} else {\n\t\tsock = conn;\n\t\trequest->rq_conn = sock;\n\t\twt = WORK_Deferred_Reply;\n\t\tDIS_tcp_funcs();\n\t}\n\n\tptask = set_task(wt, (long) conn, func, (void *) request);\n\tif (ptask == NULL) {\n\t\tlog_err(errno, __func__, \"could not set_task\");\n\t\tif (ppwt != 0)\n\t\t\t*ppwt = 0;\n\t\treturn (-1);\n\t}\n\n\tif (conn == PBS_LOCAL_CONNECTION) {\n\n\t\t/* the request should be issued to ourself */\n\n\t\tdispatch_request(PBS_LOCAL_CONNECTION, request);\n\t\tif (ppwt != 0)\n\t\t\t*ppwt = ptask;\n\t\treturn (0);\n\t}\n\n\t/* the request is bound to another server, encode/send the request */\n\tswitch (request->rq_type) {\n\n\t\tcase PBS_BATCH_DeleteJob:\n\t\t\trc = PBSD_mgr_put(conn,\n\t\t\t\t\t  PBS_BATCH_DeleteJob,\n\t\t\t\t\t  MGR_CMD_DELETE,\n\t\t\t\t\t  MGR_OBJ_JOB,\n\t\t\t\t\t  request->rq_ind.rq_delete.rq_objname,\n\t\t\t\t\t  NULL,\n\t\t\t\t\t  request->rq_extend,\n\t\t\t\t\t  
prot,\n\t\t\t\t\t  &msgid);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_HoldJob:\n\t\t\tattrl_fixlink(&request->rq_ind.rq_hold.rq_orig.rq_attr);\n\t\t\tpsvratl = (struct svrattrl *) GET_NEXT(\n\t\t\t\trequest->rq_ind.rq_hold.rq_orig.rq_attr);\n\t\t\tpatrl = &psvratl->al_atopl;\n\t\t\trc = PBSD_mgr_put(conn,\n\t\t\t\t\t  PBS_BATCH_HoldJob,\n\t\t\t\t\t  MGR_CMD_SET,\n\t\t\t\t\t  MGR_OBJ_JOB,\n\t\t\t\t\t  request->rq_ind.rq_hold.rq_orig.rq_objname,\n\t\t\t\t\t  patrl,\n\t\t\t\t\t  NULL,\n\t\t\t\t\t  prot,\n\t\t\t\t\t  &msgid);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_MessJob:\n\t\t\trc = PBSD_msg_put(conn,\n\t\t\t\t\t  request->rq_ind.rq_message.rq_jid,\n\t\t\t\t\t  request->rq_ind.rq_message.rq_file,\n\t\t\t\t\t  request->rq_ind.rq_message.rq_text,\n\t\t\t\t\t  NULL,\n\t\t\t\t\t  prot,\n\t\t\t\t\t  &msgid);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_RelnodesJob:\n\t\t\trc = PBSD_relnodes_put(conn,\n\t\t\t\t\t       request->rq_ind.rq_relnodes.rq_jid,\n\t\t\t\t\t       request->rq_ind.rq_relnodes.rq_node_list,\n\t\t\t\t\t       NULL,\n\t\t\t\t\t       prot,\n\t\t\t\t\t       &msgid);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_PySpawn:\n\t\t\trc = PBSD_py_spawn_put(conn,\n\t\t\t\t\t       request->rq_ind.rq_py_spawn.rq_jid,\n\t\t\t\t\t       request->rq_ind.rq_py_spawn.rq_argv,\n\t\t\t\t\t       request->rq_ind.rq_py_spawn.rq_envp,\n\t\t\t\t\t       prot,\n\t\t\t\t\t       &msgid);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_ModifyJob:\n\t\t\tattrl_fixlink(&request->rq_ind.rq_modify.rq_attr);\n\t\t\tpatrl = (struct attropl *) &((struct svrattrl *) GET_NEXT(\n\t\t\t\t\t\t\t     request->rq_ind.rq_modify.rq_attr))\n\t\t\t\t\t->al_atopl;\n\t\t\trc = PBSD_mgr_put(conn,\n\t\t\t\t\t  PBS_BATCH_ModifyJob,\n\t\t\t\t\t  MGR_CMD_SET,\n\t\t\t\t\t  MGR_OBJ_JOB,\n\t\t\t\t\t  request->rq_ind.rq_modify.rq_objname,\n\t\t\t\t\t  patrl,\n\t\t\t\t\t  NULL,\n\t\t\t\t\t  prot,\n\t\t\t\t\t  &msgid);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_ModifyJob_Async:\n\t\t\tattrl_fixlink(&request->rq_ind.rq_modify.rq_attr);\n\t\t\tpatrl = 
(struct attropl *) &((struct svrattrl *) GET_NEXT(\n\t\t\t\t\t\t\t     request->rq_ind.rq_modify.rq_attr))\n\t\t\t\t\t->al_atopl;\n\t\t\trc = PBSD_mgr_put(conn,\n\t\t\t\t\t  PBS_BATCH_ModifyJob_Async,\n\t\t\t\t\t  MGR_CMD_SET,\n\t\t\t\t\t  MGR_OBJ_JOB,\n\t\t\t\t\t  request->rq_ind.rq_modify.rq_objname,\n\t\t\t\t\t  patrl,\n\t\t\t\t\t  NULL,\n\t\t\t\t\t  prot,\n\t\t\t\t\t  &msgid);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_Rerun:\n\t\t\tif (prot == PROT_TPP) {\n\t\t\t\trc = is_compose_cmd(sock, IS_CMD, &msgid);\n\t\t\t\tif (rc != 0)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\trc = encode_DIS_ReqHdr(sock, PBS_BATCH_Rerun, pbs_current_user);\n\t\t\tif (rc != 0)\n\t\t\t\tbreak;\n\t\t\trc = encode_DIS_JobId(sock, request->rq_ind.rq_rerun);\n\t\t\tif (rc != 0)\n\t\t\t\tbreak;\n\t\t\trc = encode_DIS_ReqExtend(sock, 0);\n\t\t\tif (rc != 0)\n\t\t\t\tbreak;\n\t\t\trc = dis_flush(sock);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_RegistDep:\n\t\t\tif (prot == PROT_TPP) {\n\t\t\t\trc = is_compose_cmd(sock, IS_CMD, &msgid);\n\t\t\t\tif (rc != 0)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\trc = encode_DIS_ReqHdr(sock,\n\t\t\t\t\t       PBS_BATCH_RegistDep, pbs_current_user);\n\t\t\tif (rc != 0)\n\t\t\t\tbreak;\n\t\t\trc = encode_DIS_Register(sock, request);\n\t\t\tif (rc != 0)\n\t\t\t\tbreak;\n\t\t\trc = encode_DIS_ReqExtend(sock, 0);\n\t\t\tif (rc != 0)\n\t\t\t\tbreak;\n\t\t\trc = dis_flush(sock);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_SignalJob:\n\t\t\trc = PBSD_sig_put(conn,\n\t\t\t\t\t  request->rq_ind.rq_signal.rq_jid,\n\t\t\t\t\t  request->rq_ind.rq_signal.rq_signame,\n\t\t\t\t\t  NULL,\n\t\t\t\t\t  prot,\n\t\t\t\t\t  &msgid);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_StatusJob:\n\t\t\trc = PBSD_status_put(conn,\n\t\t\t\t\t     PBS_BATCH_StatusJob,\n\t\t\t\t\t     request->rq_ind.rq_status.rq_id,\n\t\t\t\t\t     NULL, NULL,\n\t\t\t\t\t     prot,\n\t\t\t\t\t     &msgid);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_TrackJob:\n\t\t\tif (prot == PROT_TPP) {\n\t\t\t\trc = is_compose_cmd(sock, IS_CMD, &msgid);\n\t\t\t\tif (rc != 
0)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\trc = encode_DIS_ReqHdr(sock, PBS_BATCH_TrackJob, pbs_current_user);\n\t\t\tif (rc != 0)\n\t\t\t\tbreak;\n\t\t\trc = encode_DIS_TrackJob(sock, request);\n\t\t\tif (rc != 0)\n\t\t\t\tbreak;\n\t\t\trc = encode_DIS_ReqExtend(sock, request->rq_extend);\n\t\t\tif (rc != 0)\n\t\t\t\tbreak;\n\t\t\trc = dis_flush(sock);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_CopyFiles:\n\t\t\tif (prot == PROT_TPP) {\n\t\t\t\trc = is_compose_cmd(sock, IS_CMD, &msgid);\n\t\t\t\tif (rc != 0)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\trc = encode_DIS_ReqHdr(sock,\n\t\t\t\t\t       PBS_BATCH_CopyFiles, pbs_current_user);\n\t\t\tif (rc != 0)\n\t\t\t\tbreak;\n\t\t\trc = encode_DIS_CopyFiles(sock, request);\n\t\t\tif (rc != 0)\n\t\t\t\tbreak;\n\t\t\trc = encode_DIS_ReqExtend(sock, get_job_credid(request->rq_ind.rq_cpyfile.rq_jobid));\n\t\t\tif (rc != 0)\n\t\t\t\tbreak;\n\t\t\trc = dis_flush(sock);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_CopyFiles_Cred:\n\t\t\tif (prot == PROT_TPP) {\n\t\t\t\trc = is_compose_cmd(sock, IS_CMD, &msgid);\n\t\t\t\tif (rc != 0)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\trc = encode_DIS_ReqHdr(sock,\n\t\t\t\t\t       PBS_BATCH_CopyFiles_Cred, pbs_current_user);\n\t\t\tif (rc != 0)\n\t\t\t\tbreak;\n\t\t\trc = encode_DIS_CopyFiles_Cred(sock, request);\n\t\t\tif (rc != 0)\n\t\t\t\tbreak;\n\t\t\trc = encode_DIS_ReqExtend(sock, 0);\n\t\t\tif (rc != 0)\n\t\t\t\tbreak;\n\t\t\trc = dis_flush(sock);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_DelFiles:\n\t\t\tif (prot == PROT_TPP) {\n\t\t\t\trc = is_compose_cmd(sock, IS_CMD, &msgid);\n\t\t\t\tif (rc != 0)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\trc = encode_DIS_ReqHdr(sock,\n\t\t\t\t\t       PBS_BATCH_DelFiles, pbs_current_user);\n\t\t\tif (rc != 0)\n\t\t\t\tbreak;\n\t\t\trc = encode_DIS_CopyFiles(sock, request);\n\t\t\tif (rc != 0)\n\t\t\t\tbreak;\n\t\t\trc = encode_DIS_ReqExtend(sock, 0);\n\t\t\tif (rc != 0)\n\t\t\t\tbreak;\n\t\t\trc = dis_flush(sock);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_DelFiles_Cred:\n\t\t\tif (prot == PROT_TPP) 
{\n\t\t\t\trc = is_compose_cmd(sock, IS_CMD, &msgid);\n\t\t\t\tif (rc != 0)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\trc = encode_DIS_ReqHdr(sock,\n\t\t\t\t\t       PBS_BATCH_DelFiles_Cred, pbs_current_user);\n\t\t\tif (rc != 0)\n\t\t\t\tbreak;\n\t\t\trc = encode_DIS_CopyFiles_Cred(sock, request);\n\t\t\tif (rc != 0)\n\t\t\t\tbreak;\n\t\t\trc = encode_DIS_ReqExtend(sock, 0);\n\t\t\tif (rc != 0)\n\t\t\t\tbreak;\n\t\t\trc = dis_flush(sock);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_FailOver:\n\t\t\t/* we should never do this on a tpp based connection */\n\t\t\trc = put_failover(sock, request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_Cred:\n\t\t\trc = PBSD_cred(conn,\n\t\t\t\t       request->rq_ind.rq_cred.rq_credid,\n\t\t\t\t       request->rq_ind.rq_cred.rq_jobid,\n\t\t\t\t       request->rq_ind.rq_cred.rq_cred_type,\n\t\t\t\t       request->rq_ind.rq_cred.rq_cred_data,\n\t\t\t\t       request->rq_ind.rq_cred.rq_cred_validity,\n\t\t\t\t       prot,\n\t\t\t\t       &msgid);\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\t(void) sprintf(log_buffer, msg_issuebad, request->rq_type);\n\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\tdelete_task(ptask);\n\t\t\trc = -1;\n\t\t\tbreak;\n\t}\n\n\tif (rc) {\n\t\tsprintf(log_buffer,\n\t\t\t\"issue_Drequest failed, error=%d on request %d\",\n\t\t\trc, request->rq_type);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\tif (msgid)\n\t\t\tfree(msgid);\n\t\tdelete_task(ptask);\n\t} else if (ppwt != 0) {\n\t\tif (prot == PROT_TPP) {\n\t\t\ttpp_add_close_func(sock, process_DreplyTPP); /* register a close handler */\n\n\t\t\tptask->wt_event2 = msgid;\n\t\t\t/*\n\t\t\t * since it's a delayed task for a tpp based connection,\n\t\t\t * remove it from the task_event list;\n\t\t\t * the caller will add it to the mom's deferred cmd list\n\t\t\t */\n\t\t\tdelete_link(&ptask->wt_linkevent);\n\t\t}\n\t\tptask->wt_aux2 = prot;\n\t\t*ppwt = ptask;\n\t}\n\n\treturn (rc);\n}\n\n/**\n * @brief\n * \t\tprocess the reply received for a request issued to\n *\t\tanother server via issue_request() over TCP\n 
*\n * @param[in] sock - TCP socket over which reply arrived\n *\n * @return\tvoid\n */\nvoid\nprocess_Dreply(int sock)\n{\n\tstruct work_task *ptask;\n\tint rc;\n\tstruct batch_request *request;\n\n\t/* find the work task for the socket, it will point us to the request */\n\tptask = (struct work_task *) GET_NEXT(task_list_event);\n\twhile (ptask) {\n\t\tif ((ptask->wt_type == WORK_Deferred_Reply) && (ptask->wt_event == sock))\n\t\t\tbreak;\n\t\tptask = (struct work_task *) GET_NEXT(ptask->wt_linkevent);\n\t}\n\tif (!ptask) {\n\t\tclose_conn(sock);\n\t\treturn;\n\t}\n\n\trequest = ptask->wt_parm1;\n\n\t/* read and decode the reply */\n\t/* set long timeout on I/O   */\n\n\tpbs_tcp_timeout = PBS_DIS_TCP_TIMEOUT_LONG;\n\n\tif ((rc = DIS_reply_read(sock, &request->rq_reply, 0)) != 0) {\n\t\tclose_conn(sock);\n\t\trequest->rq_reply.brp_code = rc;\n\t\trequest->rq_reply.brp_choice = BATCH_REPLY_CHOICE_NULL;\n\t}\n\tpbs_tcp_timeout = PBS_DIS_TCP_TIMEOUT_SHORT; /* short timeout */\n\n\t/* now dispatch the reply to the routine in the work task */\n\n\tdispatch_task(ptask);\n\treturn;\n}\n\n/**\n * @brief\n * \t\tprocess the reply received for a request issued to\n *\t\t  another server via issue_request()\n *\n * \t\tReads the reply from the TPP stream and executes the work task associated\n * \t\twith the reply message. 
The request for which this reply arrived\n * \t\tis matched by comparing the msgid of the reply with the msgid of the work\n * \t\ttasks stored in the dmn_deferred_cmds list of the mom for this stream.\n *\n * @param[in] handle - TPP handle on which reply/close arrived\n *\n * @return void\n */\nvoid\nprocess_DreplyTPP(int handle)\n{\n\tstruct work_task *ptask;\n\tint rc;\n\tstruct batch_request *request;\n\tstruct batch_reply *reply;\n\tchar *msgid = NULL;\n\tmominfo_t *pmom = 0;\n\n\tif ((pmom = tfind2((u_long) handle, 0, &streams)) == NULL)\n\t\treturn;\n\n\tDIS_tpp_funcs();\n\n\t/* find the work task for the socket, it will point us to the request */\n\tmsgid = disrst(handle, &rc);\n\n\tif (!msgid || rc) { /* tpp connection actually broke, cull all pending requests */\n\t\twhile ((ptask = GET_NEXT(pmom->mi_dmn_info->dmn_deferred_cmds))) {\n\t\t\t/* no need to compare wt_event with handle, since the\n\t\t\t * task list is for this mom and so it will always match\n\t\t\t */\n\t\t\tif (ptask->wt_type == WORK_Deferred_Reply) {\n\t\t\t\trequest = ptask->wt_parm1;\n\t\t\t\tif (request) {\n\t\t\t\t\trequest->rq_reply.brp_code = PBSE_NORELYMOM;\n\t\t\t\t\trequest->rq_reply.brp_choice = BATCH_REPLY_CHOICE_NULL;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tptask->wt_aux = PBSE_NORELYMOM;\n\t\t\tpbs_errno = PBSE_NORELYMOM;\n\t\t\tfree(ptask->wt_event2);\n\n\t\t\tdispatch_task(ptask);\n\t\t}\n\t} else {\n\t\t/* we read msgid fine, so proceed to match it and process the respective task */\n\n\t\t/* get the task list */\n\t\tptask = GET_NEXT(pmom->mi_dmn_info->dmn_deferred_cmds);\n\n\t\twhile (ptask) {\n\n\t\t\tchar *cmd_msgid = ptask->wt_event2;\n\n\t\t\tif (strcmp(cmd_msgid, msgid) == 0) {\n\n\t\t\t\tif (ptask->wt_type == WORK_Deferred_Reply)\n\t\t\t\t\trequest = ptask->wt_parm1;\n\t\t\t\telse\n\t\t\t\t\trequest = NULL;\n\n\t\t\t\tif (!request) {\n\t\t\t\t\tif ((reply = (struct batch_reply *) malloc(sizeof(struct batch_reply))) == 0) 
{\n\t\t\t\t\t\tdelete_task(ptask);\n\t\t\t\t\t\tfree(cmd_msgid);\n\t\t\t\t\t\tlog_err(errno, msg_daemonname, \"Out of memory creating batch reply\");\n\t\t\t\t\t\treturn;\n\t\t\t\t\t}\n\t\t\t\t\t(void) memset(reply, 0, sizeof(struct batch_reply));\n\t\t\t\t} else {\n\t\t\t\t\treply = &request->rq_reply;\n\t\t\t\t}\n\n\t\t\t\t/* read and decode the reply */\n\t\t\t\tif ((rc = DIS_reply_read(handle, reply, 1)) != 0) {\n\t\t\t\t\treply->brp_code = rc;\n\t\t\t\t\treply->brp_choice = BATCH_REPLY_CHOICE_NULL;\n\t\t\t\t\tptask->wt_aux = PBSE_NORELYMOM;\n\t\t\t\t\tpbs_errno = PBSE_NORELYMOM;\n\t\t\t\t} else {\n\t\t\t\t\tptask->wt_aux = reply->brp_code;\n\t\t\t\t\tpbs_errno = reply->brp_code;\n\t\t\t\t}\n\n\t\t\t\tptask->wt_parm3 = reply; /* set the reply in case the callback fn uses it without having a preq */\n\n\t\t\t\tdispatch_task(ptask);\n\n\t\t\t\tif (!request)\n\t\t\t\t\tPBSD_FreeReply(reply);\n\n\t\t\t\tfree(cmd_msgid);\n\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tptask = (struct work_task *) GET_NEXT(ptask->wt_linkobj2);\n\t\t}\n\t\tfree(msgid); /* the msgid that was read must be freed after it has been used for matching */\n\t}\n}\n"
  },
  {
    "path": "src/server/jattr_get_set.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h>\n\n#include \"job.h\"\n\n/**\n * @brief\tGet attribute of job based on given attr index\n *\n * @param[in] pjob     - pointer to job struct\n * @param[in] attr_idx - attribute index\n *\n * @return attribute *\n * @retval NULL  - failure\n * @retval !NULL - pointer to attribute struct\n */\nattribute *\nget_jattr(const job *pjob, int attr_idx)\n{\n\tif (pjob != NULL)\n\t\treturn _get_attr_by_idx((attribute *) pjob->ji_wattr, attr_idx);\n\treturn NULL;\n}\n\n/**\n * @brief\tCheck if the job is in the state specified\n *\n * @param[in]\tpjob - pointer to the job\n * @param[in]\tstate - the state to check\n *\n * @return\tint\n * @retval\t1 if the job is in the state specified\n * @retval\t0 otherwise\n */\nint\ncheck_job_state(const job *pjob, char state)\n{\n\tif (pjob == NULL)\n\t\treturn 0;\n\n\tif (get_job_state(pjob) == state)\n\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief\tCheck if the job is in the substate specified\n *\n * @param[in]\tpjob - pointer to the job\n * @param[in]\tsubstate - the substate to check\n *\n * @return\tint\n * @retval\t1 if the job is in the state specified\n * @retval\t0 otherwise\n */\nint\ncheck_job_substate(const job *pjob, int substate)\n{\n\tif (pjob == NULL)\n\t\treturn 0;\n\n\tif (get_job_substate(pjob) == substate)\n\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief\tGet the 
state character value of a job\n *\n * @param[in]\tpjob - the job object\n *\n * @return char\n * @retval state character\n * @retval JOB_STATE_LTR_UNKNOWN for error\n */\nchar\nget_job_state(const job *pjob)\n{\n\tif (pjob != NULL) {\n\t\treturn get_attr_c(get_jattr(pjob, JOB_ATR_state));\n\t}\n\n\treturn JOB_STATE_LTR_UNKNOWN;\n}\n\n/**\n * @brief\tConvenience function to get the numeric representation of job state value\n *\n * @param[in]\tpjob - job object\n *\n * @return int\n * @retval numeric form of job state\n * @retval -1 for error\n */\nint\nget_job_state_num(const job *pjob)\n{\n\tchar statec;\n\tint staten;\n\n\tif (pjob == NULL)\n\t\treturn -1;\n\n\tstatec = get_attr_c(get_jattr(pjob, JOB_ATR_state));\n\tif (statec == -1)\n\t\treturn -1;\n\n\tstaten = state_char2int(statec);\n\n\treturn staten;\n}\n\n/**\n * @brief\tGet the substate value of a job\n *\n * @param[in]\tpjob - the job object\n *\n * @return long\n * @retval substate value\n * @retval -1 for error\n */\nlong\nget_job_substate(const job *pjob)\n{\n\tif (pjob != NULL) {\n\t\treturn get_attr_l(get_jattr(pjob, JOB_ATR_substate));\n\t}\n\n\treturn -1;\n}\n\n/**\n * @brief\tGetter function for job attribute of type string\n *\n * @param[in]\tpjob - pointer to the job\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tchar *\n * @retval\tstring value of the attribute\n * @retval\tNULL if pjob is NULL\n */\nchar *\nget_jattr_str(const job *pjob, int attr_idx)\n{\n\tif (pjob != NULL)\n\t\treturn get_attr_str(get_jattr(pjob, attr_idx));\n\n\treturn NULL;\n}\n\n/**\n * @brief\tGetter function for job attribute of type array of strings\n *\n * @param[in]\tpjob - pointer to the job\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tstruct array_strings *\n * @retval\tvalue of the attribute\n * @retval\tNULL if pjob is NULL\n */\nstruct array_strings *\nget_jattr_arst(const job *pjob, int attr_idx)\n{\n\tif (pjob != NULL)\n\t\treturn get_attr_arst(get_jattr(pjob, 
attr_idx));\n\n\treturn NULL;\n}\n\n/**\n * @brief\tGetter for job attribute's list value\n *\n * @param[in]\tpjob - pointer to the job\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tpbs_list_head\n * @retval\tvalue of attribute\n */\npbs_list_head\nget_jattr_list(const job *pjob, int attr_idx)\n{\n\treturn get_attr_list(get_jattr(pjob, attr_idx));\n}\n\n/**\n * @brief\tGetter function for job attribute of type long\n *\n * @param[in]\tpjob - pointer to the job\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tlong\n * @retval\tlong value of the attribute\n * @retval\t-1 if pjob is NULL\n */\nlong\nget_jattr_long(const job *pjob, int attr_idx)\n{\n\tif (pjob != NULL)\n\t\treturn get_attr_l(get_jattr(pjob, attr_idx));\n\n\treturn -1;\n}\n\n/**\n * @brief\tGetter function for job attribute of type long long\n *\n * @param[in]\tpjob - pointer to the job\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tlong long\n * @retval\tlong long value of the attribute\n * @retval\t-1 if pjob is NULL\n */\nlong long\nget_jattr_ll(const job *pjob, int attr_idx)\n{\n\tif (pjob != NULL)\n\t\treturn get_attr_ll(get_jattr(pjob, attr_idx));\n\n\treturn -1;\n}\n\n/**\n * @brief\tGetter function for job attribute's user_encoded value\n *\n * @param[in]\tpjob - pointer to the job\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tsvrattrl *\n * @retval\tuser_encoded value of the attribute\n * @retval\tNULL if pjob is NULL\n */\nsvrattrl *\nget_jattr_usr_encoded(const job *pjob, int attr_idx)\n{\n\tif (pjob != NULL)\n\t\treturn (get_jattr(pjob, attr_idx))->at_user_encoded;\n\n\treturn NULL;\n}\n\n/**\n * @brief\tGetter function for job attribute's priv_encoded value\n *\n * @param[in]\tpjob - pointer to the job\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tsvrattrl *\n * @retval\tpriv_encoded value of the attribute\n * @retval\tNULL if pjob is NULL\n 
*/\nsvrattrl *\nget_jattr_priv_encoded(const job *pjob, int attr_idx)\n{\n\tif (pjob != NULL)\n\t\treturn (get_jattr(pjob, attr_idx))->at_priv_encoded;\n\n\treturn NULL;\n}\n\n/**\n * @brief\tSetter for job state\n *\n * @param[in]\tjob - pointer to job\n * @param[in]\tval - state val\n *\n * @return\tvoid\n */\nvoid\nset_job_state(job *pjob, char val)\n{\n\tif (pjob != NULL)\n\t\tset_attr_c(get_jattr(pjob, JOB_ATR_state), val, SET);\n}\n\n/**\n * @brief\tSetter for job substate\n *\n * @param[in]\tjob - pointer to job\n * @param[in]\tval - substate val\n *\n * @return\tvoid\n */\nvoid\nset_job_substate(job *pjob, long val)\n{\n\tif (pjob != NULL)\n\t\tset_jattr_l_slim(pjob, JOB_ATR_substate, val, SET);\n}\n\n/**\n * @brief\tGeneric Job attribute setter (call if you want at_set() action functions to be called)\n *\n * @param[in]\tpjob - pointer to job\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\trscn - new resource val to set, if applicable\n * @param[in]\top - batch_op operation, SET, INCR, DECR etc.\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t!0 for failure\n */\nint\nset_jattr_generic(job *pjob, int attr_idx, char *val, char *rscn, enum batch_op op)\n{\n\tif (pjob == NULL || val == NULL)\n\t\treturn 1;\n\n\treturn set_attr_generic(get_jattr(pjob, attr_idx), &job_attr_def[attr_idx], val, rscn, op);\n}\n\n/**\n * @brief\t\"fast\" job attribute setter for string values\n *\n * @param[in]\tpjob - pointer to job\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\trscn - new resource val to set, if applicable\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t!0 for failure\n */\nint\nset_jattr_str_slim(job *pjob, int attr_idx, char *val, char *rscn)\n{\n\tif (pjob == NULL || val == NULL)\n\t\treturn 1;\n\n\treturn set_attr_generic(get_jattr(pjob, attr_idx), &job_attr_def[attr_idx], val, rscn, INTERNAL);\n}\n\n/**\n * @brief\t\"fast\" 
job attribute setter for long values\n *\n * @param[in]\tpjob - pointer to job\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\top - batch_op operation, SET, INCR, DECR etc.\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t1 for failure\n */\nint\nset_jattr_l_slim(job *pjob, int attr_idx, long val, enum batch_op op)\n{\n\tif (pjob == NULL)\n\t\treturn 1;\n\n\tset_attr_l(get_jattr(pjob, attr_idx), val, op);\n\n\treturn 0;\n}\n\n/**\n * @brief\t\"fast\" job attribute setter for long long values\n *\n * @param[in]\tpjob - pointer to job\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\top - batch_op operation, SET, INCR, DECR etc.\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t1 for failure\n */\nint\nset_jattr_ll_slim(job *pjob, int attr_idx, long long val, enum batch_op op)\n{\n\tif (pjob == NULL)\n\t\treturn 1;\n\n\tset_attr_ll(get_jattr(pjob, attr_idx), val, op);\n\n\treturn 0;\n}\n\n/**\n * @brief\t\"fast\" job attribute setter for boolean values\n *\n * @param[in]\tpjob - pointer to job\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\top - batch_op operation, SET, INCR, DECR etc.\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t1 for failure\n */\nint\nset_jattr_b_slim(job *pjob, int attr_idx, long val, enum batch_op op)\n{\n\tif (pjob == NULL)\n\t\treturn 1;\n\n\tset_attr_b(get_jattr(pjob, attr_idx), val, op);\n\n\treturn 0;\n}\n\n/**\n * @brief\t\"fast\" job attribute setter for char values\n *\n * @param[in]\tpjob - pointer to job\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\top - batch_op operation, SET, INCR, DECR etc.\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t1 for failure\n */\nint\nset_jattr_c_slim(job *pjob, int attr_idx, char val, enum batch_op op)\n{\n\tif (pjob == 
NULL)\n\t\treturn 1;\n\n\tset_attr_c(get_jattr(pjob, attr_idx), val, op);\n\n\treturn 0;\n}\n\n/**\n * @brief\tCheck if a job attribute is set\n *\n * @param[in]\tpjob - pointer to job\n * @param[in]\tattr_idx - attribute index to check\n *\n * @return\tint\n * @retval\t1 if it is set\n * @retval\t0 otherwise\n */\nint\nis_jattr_set(const job *pjob, int attr_idx)\n{\n\tif (pjob != NULL)\n\t\treturn is_attr_set(get_jattr(pjob, attr_idx));\n\n\treturn 0;\n}\n\n/**\n * @brief\tMark a job attribute as \"not set\"\n *\n * @param[in]\tpjob - pointer to job\n * @param[in]\tattr_idx - attribute index to set\n *\n * @return\tvoid\n */\nvoid\nmark_jattr_not_set(job *pjob, int attr_idx)\n{\n\tif (pjob != NULL) {\n\t\tattribute *attr = get_jattr(pjob, attr_idx);\n\t\tATR_UNSET(attr);\n\t}\n}\n\n/**\n * @brief\tMark a job attribute as \"set\"\n *\n * @param[in]\tpjob - pointer to job\n * @param[in]\tattr_idx - attribute index to set\n *\n * @return\tvoid\n */\nvoid\nmark_jattr_set(job *pjob, int attr_idx)\n{\n\tif (pjob != NULL)\n\t\t(get_jattr(pjob, attr_idx))->at_flags |= ATR_VFLAG_SET;\n}\n\n/**\n * @brief\tFree a job attribute\n *\n * @param[in]\tpjob - pointer to job\n * @param[in]\tattr_idx - attribute index to free\n *\n * @return\tvoid\n */\nvoid\nfree_jattr(job *pjob, int attr_idx)\n{\n\tif (pjob != NULL)\n\t\tfree_attr(job_attr_def, get_jattr(pjob, attr_idx), attr_idx);\n}\n\n/**\n * @brief\tclear a job attribute\n *\n * @param[in]\tpjob - pointer to job\n * @param[in]\tattr_idx - attribute index to clear\n *\n * @return\tvoid\n */\nvoid\nclear_jattr(job *pjob, int attr_idx)\n{\n\tif (pjob != NULL)\n\t\tclear_attr(get_jattr(pjob, attr_idx), &job_attr_def[attr_idx]);\n}\n"
  },
  {
    "path": "src/server/job_func.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <unistd.h>\n#include <sys/param.h>\n#include <dirent.h>\n#include <time.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <ctype.h>\n#include <errno.h>\n#include <assert.h>\n\n#include <signal.h>\n#include <memory.h>\n#include <fcntl.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include \"pbs_ifl.h\"\n#include \"libpbs.h\"\n#include \"list_link.h\"\n#include \"work_task.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"server_limits.h\"\n#include \"server.h\"\n#include \"resv_node.h\"\n#include \"queue.h\"\n#include \"sched_cmds.h\"\n#include \"pbs_sched.h\"\n\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"pbs_nodes.h\"\n#include \"log.h\"\n#include \"pbs_error.h\"\n#include \"batch_request.h\"\n#include \"pbs_entlim.h\"\n#include \"libutil.h\"\n\n#ifndef PBS_MOM\n#include \"pbs_idx.h\"\n#include \"ticket.h\"\n#else\n#include \"mom_server.h\"\n#include \"mom_func.h\"\n#include \"mom_hook_func.h\"\n#endif\n\n#include \"svrfunc.h\"\n#include <libutil.h>\n#include \"acct.h\"\n#include \"credential.h\"\n#include \"net_connect.h\"\n#include \"pbs_reliable.h\"\n\n#if defined(PBS_MOM) && defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n#include \"renew_creds.h\"\n#endif\n\nextern int time_now;\n\n/* 
External functions */\n#ifdef WIN32\nextern int read_cred(job *pjob, char **cred, size_t *len);\n#endif\n\nvoid on_job_exit(struct work_task *);\n\n/* Local Private Functions */\n\nstatic void job_init_wattr(job *);\n\n#ifndef PBS_MOM /* SERVER ONLY */\nstatic void post_resv_purge(struct work_task *pwt);\n#endif\n\n/* Global Data items */\n#ifndef PBS_MOM\nextern struct server server;\n#endif /* PBS_MOM */\nextern char *msg_abt_err;\nextern char *path_jobs;\nextern char *path_spool;\nextern char server_name[];\nextern char *pbs_server_name;\nextern pbs_list_head svr_newjobs;\nextern pbs_list_head svr_alljobs;\nextern char *msg_err_purgejob;\n\n#ifdef PBS_MOM\nextern void rmtmpdir(char *);\nvoid nodes_free(job *);\nextern char *std_file_name(job *pjob, enum job_file which, int *keeping);\nextern char *path_checkpoint;\n\n/**\n * @brief\n * \t\tfree up the tasks from the list of tasks associated with particular job, delete links and close connection.\n *\n * @param[in]\tpj - pointer to job structure\n *\n * @return void\n */\nvoid\ntasks_free(job *pj)\n{\n\tpbs_task *tp = (pbs_task *) GET_NEXT(pj->ji_tasks);\n\tobitent *op;\n\tinfoent *ip;\n\tint i;\n\n\twhile (tp) {\n\t\top = (obitent *) GET_NEXT(tp->ti_obits);\n\t\twhile (op) {\n\t\t\tdelete_link(&op->oe_next);\n\t\t\tfree(op);\n\t\t\top = (obitent *) GET_NEXT(tp->ti_obits);\n\t\t}\n\n\t\tip = (infoent *) GET_NEXT(tp->ti_info);\n\t\twhile (ip) {\n\t\t\tdelete_link(&ip->ie_next);\n\t\t\tfree(ip->ie_name);\n\t\t\tfree(ip->ie_info);\n\t\t\tfree(ip);\n\t\t\tip = (infoent *) GET_NEXT(tp->ti_info);\n\t\t}\n\n\t\tif (tp->ti_tmfd != NULL) {\n\t\t\tfor (i = 0; i < tp->ti_tmnum; i++)\n\t\t\t\tclose_conn(tp->ti_tmfd[i]);\n\t\t\tfree(tp->ti_tmfd);\n\t\t}\n\t\tdelete_link(&tp->ti_jobtask);\n\t\tfree(tp);\n\t\ttp = (pbs_task *) GET_NEXT(pj->ji_tasks);\n\t}\n}\n#else /* PBS_MOM */\n\nchar *\nget_job_credid(char *jobid)\n{\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\tjob *pjob;\n\n\tif ((pjob = find_job(jobid)) == 
NULL)\n\t\treturn NULL;\n\n\tif (is_jattr_set(pjob, JOB_ATR_cred_id)) {\n\t\treturn get_jattr_str(pjob, JOB_ATR_cred_id);\n\t}\n#endif\n\n\treturn NULL;\n}\n\n/**\n * @brief\n * \t\tjob_abt - abort a job\n *\n *\t\tThe job removed from the system and a mail message is sent\n *\t\tto the job owner.\n *\n * @param[in]\tpjob - pointer to job structure\n * @param[in]\ttext - job status message\n */\n\nint\njob_abt(job *pjob, char *text)\n{\n\tchar old_state;\n\tint old_substate;\n\tint rc = 0;\n\n\tif (pjob == NULL)\n\t\treturn 0; /* nothing to do */\n\t/* save old state and update state to Exiting */\n\n\told_state = get_job_state(pjob);\n\n\tif (old_state == JOB_STATE_LTR_FINISHED)\n\t\treturn 0; /* nothing to do for this job */\n\n\told_substate = get_job_substate(pjob);\n\n\t/* notify user of abort if notification was requested */\n\n\tif (text) { /* req_delete sends own mail and acct record */\n\t\taccount_record(PBS_ACCT_ABT, pjob, text);\n\t\tsvr_mailowner(pjob, MAIL_ABORT, MAIL_NORMAL, text);\n\t}\n\n\tif ((old_state == JOB_STATE_LTR_RUNNING) && (old_substate != JOB_SUBSTATE_PROVISION)) {\n\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_RUNNING, JOB_SUBSTATE_ABORT);\n\t\trc = issue_signal(pjob, \"SIGKILL\", release_req, 0);\n\t\tif (rc != 0) {\n\t\t\t(void) sprintf(log_buffer, msg_abt_err,\n\t\t\t\t       pjob->ji_qs.ji_jobid, old_substate);\n\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0) {\n\t\t\t\t/* notify creator that job is exited */\n\t\t\t\tset_job_state(pjob, JOB_STATE_LTR_EXITING);\n\t\t\t\tissue_track(pjob);\n\t\t\t}\n\t\t\t/*\n\t\t\t * Check if the history of the finished job can be saved or it needs to be purged .\n\t\t\t */\n\t\t\tsvr_saveorpurge_finjobhist(pjob);\n\t\t}\n\t} else if ((old_state == JOB_STATE_LTR_TRANSIT) &&\n\t\t   (old_substate == JOB_SUBSTATE_TRNOUT)) {\n\t\t/* I don't know of a case where this could happen */\n\t\t(void) sprintf(log_buffer, msg_abt_err,\n\t\t\t       
pjob->ji_qs.ji_jobid, old_substate);\n\t\tlog_err(-1, __func__, log_buffer);\n\t} else if (old_substate == JOB_SUBSTATE_PROVISION) {\n\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_RUNNING, JOB_SUBSTATE_ABORT);\n\t\t/*\n\t\t * Check if the history of the finished job can be saved or it needs to be purged.\n\t\t */\n\t\tsvr_saveorpurge_finjobhist(pjob);\n\t} else if (old_state == JOB_STATE_LTR_HELD && old_substate == JOB_SUBSTATE_DEPNHOLD &&\n\t\t   (is_jattr_set(pjob, JOB_ATR_depend))) {\n\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_HELD, JOB_SUBSTATE_ABORT);\n\t\tdepend_on_term(pjob);\n\t\t/*\n\t\t * Check if the history of the finished job can be saved or it needs to be purged.\n\t\t */\n\t\tsvr_saveorpurge_finjobhist(pjob);\n\t} else {\n\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_EXITING, JOB_SUBSTATE_ABORT);\n\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0) {\n\t\t\t/* notify creator that job is exited */\n\t\t\tissue_track(pjob);\n\t\t}\n\t\t/*\n\t\t * Check if the history of the finished job can be saved or it needs to be purged.\n\t\t */\n\t\tsvr_saveorpurge_finjobhist(pjob);\n\t}\n\n\treturn (rc);\n}\n\n/**\n * @brief\n * \t\tjob_delete_attr - delete a job attribute\n *\n *\t\tThe job attribute is removed from memory and the database.\n *\n * @param[in]\tpjob - pointer to job structure\n * @param[in]\tattr_idx - attribute to remove\n *\n * @return\tint\n * @retval\t0\t- success\n * @retval\t!=0\t- fail\n */\n\nint\njob_delete_attr(job *pjob, int attr_idx)\n{\n\tvoid *conn = (void *) svr_db_conn;\n\tint index;\n\tpbs_db_obj_info_t obj;\n\tpbs_db_attr_list_t db_attr_list;\n\tattribute_def attr_def;\n\tattribute *pattr;\n\tint rc;\n\n\tobj.pbs_db_un.pbs_db_job = NULL;\n\tobj.pbs_db_obj_type = PBS_DB_JOB;\n\n\tdb_attr_list.attr_count = 0;\n\tCLEAR_HEAD(db_attr_list.attrs);\n\n\tif (is_jattr_set(pjob, attr_idx)) {\n\t\tattr_def = job_attr_def[attr_idx];\n\t\tif ((index = find_attr(job_attr_idx, job_attr_def, attr_def.at_name)) < 0) {\n\t\t\treturn -1;\n\t\t}\n\t\tpattr = 
pjob->ji_wattr;\n\t\tif ((rc = encode_single_attr_db((job_attr_def + index), (pattr + index), &db_attr_list)) != 0) {\n\t\t\treturn rc;\n\t\t}\n\t\tif ((rc = pbs_db_delete_attr_obj(conn, &obj, pjob->ji_qs.ji_jobid, &db_attr_list)) < 0) {\n\t\t\tfree_db_attr_list(&db_attr_list);\n\t\t\treturn rc;\n\t\t}\n\t\tfree_db_attr_list(&db_attr_list);\n\n\t\tclear_jattr(pjob, attr_idx);\n\t}\n\n\treturn 0;\n}\n#endif /* PBS_MOM */\n\n/**\n * @brief\n * \t\tjob_alloc - allocate space for a job structure and initialize working\n *\t\t\t\tattribute to \"unset\"\n *\n * @return\tpointer to structure or null is space not available.\n */\n\njob *\njob_alloc(void)\n{\n\tjob *pj;\n\n\tpj = (job *) malloc(sizeof(job));\n\tif (pj == NULL) {\n\t\tlog_err(errno, __func__, \"no memory\");\n\t\treturn NULL;\n\t}\n\t(void) memset((char *) pj, (int) 0, (size_t) sizeof(job));\n\n\tCLEAR_LINK(pj->ji_alljobs);\n\tCLEAR_LINK(pj->ji_jobque);\n\tCLEAR_LINK(pj->ji_unlicjobs);\n\n\tpj->ji_rerun_preq = NULL;\n\n#ifdef PBS_MOM\n\tCLEAR_HEAD(pj->ji_tasks);\n\tCLEAR_HEAD(pj->ji_failed_node_list);\n\tCLEAR_HEAD(pj->ji_node_list);\n\tpj->ji_taskid = TM_INIT_TASK;\n\tpj->ji_numnodes = 0;\n\tpj->ji_numrescs = 0;\n\tpj->ji_numvnod = 0;\n\tpj->ji_num_assn_vnodes = 0;\n\tpj->ji_hosts = NULL;\n\tpj->ji_vnods = NULL;\n\tpj->ji_assn_vnodes = NULL;\n\tpj->ji_resources = NULL;\n\tpj->ji_obit = TM_NULL_EVENT;\n\tpj->ji_postevent = TM_NULL_EVENT;\n\tpj->ji_preq = NULL;\n\tpj->ji_nodekill = TM_ERROR_NODE;\n\tpj->ji_flags = 0;\n\tpj->ji_jsmpipe = -1;\n\tpj->ji_mjspipe = -1;\n\tpj->ji_jsmpipe2 = -1;\n\tpj->ji_mjspipe2 = -1;\n\tpj->ji_child2parent_job_update_pipe = -1;\n\tpj->ji_parent2child_job_update_pipe = -1;\n\tpj->ji_parent2child_job_update_status_pipe = -1;\n\tpj->ji_parent2child_moms_status_pipe = -1;\n\tpj->ji_updated = 0;\n\tpj->ji_hook_running_bg_on = BG_NONE;\n\tpj->ji_bg_hook_task = NULL;\n\tpj->ji_report_task = NULL;\n\tpj->ji_env.v_envp = NULL;\n#ifdef WIN32\n\tpj->ji_hJob = NULL;\n\tpj->ji_user = 
NULL;\n\tpj->ji_grpcache = NULL;\n#endif\n\tpj->ji_stdout = 0;\n\tpj->ji_stderr = 0;\n\tpj->ji_setup = NULL;\n\tpj->ji_momsubt = 0;\n\tpj->ji_msconnected = 0;\n\tCLEAR_HEAD(pj->ji_multinodejobs);\n\tpj->ji_extended.ji_ext.ji_stdout = 0;\n\tpj->ji_extended.ji_ext.ji_stderr = 0;\n#else /* SERVER */\n\tpj->ji_discarding = 0;\n\tpj->ji_prunreq = NULL;\n\tpj->ji_pmt_preq = NULL;\n\tCLEAR_HEAD(pj->ji_svrtask);\n\tCLEAR_HEAD(pj->ji_rejectdest);\n\tpj->ji_terminated = 0;\n\tpj->ji_deletehistory = 0;\n\tpj->ji_script = NULL;\n\tpj->ji_prov_startjob_task = NULL;\n#endif\n\tpj->ji_qs.ji_jsversion = JSVERSION;\n\tpj->ji_momhandle = -1;\t\t/* mark mom connection invalid */\n\tpj->ji_mom_prot = PROT_INVALID; /* invalid protocol type */\n\tpj->newobj = 1;\n\n\t/* set the working attributes to \"unspecified\" */\n\n\tjob_init_wattr(pj);\n\n#ifndef PBS_MOM\n\tset_job_state(pj, JOB_STATE_LTR_TRANSIT);\n\tset_job_substate(pj, JOB_SUBSTATE_TRANSIN);\n\n\t/* start accruing time from the time job was created */\n\tset_jattr_l_slim(pj, JOB_ATR_sample_starttime, time_now, SET);\n\tset_jattr_l_slim(pj, JOB_ATR_eligible_time, 0, SET);\n\n\t/* if eligible_time_enable is not true, then job does not accrue eligible time */\n\tif (is_sattr_set(SVR_ATR_EligibleTimeEnable) && get_sattr_long(SVR_ATR_EligibleTimeEnable) == TRUE) {\n\t\tint elig_val;\n\n\t\telig_val = determine_accruetype(pj);\n\t\tupdate_eligible_time(elig_val, pj);\n\t}\n#endif\n\n\treturn (pj);\n}\n\n#ifndef PBS_MOM\n\n/**\n * @brief\n * \tfree work tasks and pending batch requests related to this job\n *\n * @param[in]\tpj - pointer to job structure\n *\n * @return\tvoid\n */\nvoid\nfree_job_work_tasks(job *pj)\n{\n\tstruct work_task *pwt;\n\tstruct batch_request *tbr = NULL;\n\t/*\n\t* Delete any work task entries associated with the job.\n\t* mom deferred tasks via TPP are also hooked into the\n\t* ji_svrtask now, so they also get automatically cleared\n\t* in this following loop\n\t*/\n\twhile ((pwt = (struct work_task *) 
GET_NEXT(pj->ji_svrtask)) != NULL) {\n\t\tif (pwt->wt_type == WORK_Deferred_Reply) {\n\t\t\ttbr = (struct batch_request *) pwt->wt_parm1;\n\t\t\tif (tbr != NULL) {\n\t\t\t\t/* Check if the reply is for scheduler\n\t\t\t\t * If so, then reject the request.\n\t\t\t\t */\n\t\t\t\tif ((tbr->rq_orgconn != -1) &&\n\t\t\t\t    (find_sched_from_sock(tbr->rq_orgconn, CONN_SCHED_PRIMARY) != NULL)) {\n\t\t\t\t\ttbr->rq_conn = tbr->rq_orgconn;\n\t\t\t\t\treq_reject(PBSE_HISTJOBID, 0, tbr);\n\t\t\t\t}\n\t\t\t\t/*\n\t\t\t\t* free batch request from task struct\n\t\t\t\t* if task is deferred reply\n\t\t\t\t*/\n\t\t\t\telse\n\t\t\t\t\tfree_br(tbr);\n\t\t\t}\n\t\t}\n\n\t\t/* wt_event2 either has additional data (like msgid) or NULL */\n\t\tfree(pwt->wt_event2);\n\n\t\tdelete_task(pwt);\n\t}\n}\n#endif\n\n/**\n * @brief\n * \t\tjob_free - free job structure and its various sub-structures\n *\n * @param[in]\tpj - pointer to job structure\n *\n * @return\tvoid\n */\nvoid\njob_free(job *pj)\n{\n\tint i;\n\n#ifdef PBS_MOM\n\n#ifdef WIN32\n\tif (is_jattr_set(pj, JOB_ATR_altid)) {\n\t\tchar *p;\n\n\t\tp = strstr(get_jattr_str(pj, JOB_ATR_altid),\n\t\t\t   \"HomeDirectory=\");\n\t\tif (p) {\n\t\t\tstruct passwd *pwdp = NULL;\n\n\t\t\tif ((get_jattr_str(pj, JOB_ATR_euser)) &&\n\t\t\t    (pwdp = getpwnam(get_jattr_str(pj, JOB_ATR_euser)))) {\n\t\t\t\tif (pwdp->pw_userlogin != INVALID_HANDLE_VALUE) {\n\t\t\t\t\tif (impersonate_user(pwdp->pw_userlogin) == 0) {\n\t\t\t\t\t\tsprintf(log_buffer, \"Failed to ImpersonateLoggedOnUser user: %s\", pwdp->pw_name);\n\t\t\t\t\t\tlog_joberr(-1, __func__, log_buffer, pj->ji_qs.ji_jobid);\n\t\t\t\t\t\treturn;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\t/* p + 14 is the string after HomeDirectory= */\n\t\t\t\tunmap_unc_path(p + 14);\n\t\t\t\t(void) revert_impersonated_user();\n\t\t\t}\n\t\t\tunmap_unc_path(p + 14); /* also unmap under Admin to be sure */\n\t\t}\n\t}\n#endif\n\n#endif\n\n\t/* remove any malloc working attribute space */\n\n\tfor (i = 0; i < (int) 
JOB_ATR_LAST; i++)\n\t\tfree_jattr(pj, i);\n\n#ifndef PBS_MOM\n\t{\n\t\t/* Server only */\n\t\tbadplace *bp;\n\n\t\tfree_job_work_tasks(pj);\n\n\t\t/* free any bad destination structs */\n\n\t\tbp = (badplace *) GET_NEXT(pj->ji_rejectdest);\n\t\twhile (bp) {\n\t\t\tdelete_link(&bp->bp_link);\n\t\t\tfree(bp);\n\t\t\tbp = (badplace *) GET_NEXT(pj->ji_rejectdest);\n\t\t}\n\t}\n\tif (pj->ji_ajinfo) {\n\t\tfree_range_list(pj->ji_ajinfo->trm_quelist);\n\t\tfree(pj->ji_ajinfo);\n\t\tpj->ji_ajinfo = NULL;\n\t}\n\tpj->ji_parentaj = NULL;\n\tif (pj->ji_discard)\n\t\tfree(pj->ji_discard);\n\tif (pj->ji_acctrec)\n\t\tfree(pj->ji_acctrec);\n\tif (pj->ji_clterrmsg)\n\t\tfree(pj->ji_clterrmsg);\n\tif (pj->ji_script)\n\t\tfree(pj->ji_script);\n\tif (pj->ji_prov_startjob_task)\n\t\tdelete_task(pj->ji_prov_startjob_task);\n\n#else /* PBS_MOM  Mom Only */\n\n\tif (pj->ji_grpcache)\n\t\t(void) free(pj->ji_grpcache);\n\n\tassert(pj->ji_preq == NULL);\n\tnodes_free(pj);\n\ttasks_free(pj);\n\tif (pj->ji_resources) {\n\t\tfor (i = 0; i < pj->ji_numrescs; i++) {\n\t\t\tfree(pj->ji_resources[i].nodehost);\n\t\t\tpj->ji_resources[i].nodehost = NULL;\n\t\t\tif (is_attr_set(&pj->ji_resources[i].nr_used) != 0)\n\t\t\t\tfree_attr(job_attr_def, &pj->ji_resources[i].nr_used, JOB_ATR_resc_used);\n\t\t}\n\t\tpj->ji_numrescs = 0;\n\t\tfree(pj->ji_resources);\n\t\tpj->ji_resources = NULL;\n\t}\n\n\treliable_job_node_free(&pj->ji_failed_node_list);\n\treliable_job_node_free(&pj->ji_node_list);\n\n\tif (pj->ji_bg_hook_task) {\n\t\tmom_process_hooks_params_t *php;\n\t\tphp = pj->ji_bg_hook_task->wt_parm2;\n\t\tif (php != NULL) {\n\t\t\tif (php->hook_output) {\n\t\t\t\tfree(php->hook_output->reject_errcode);\n\t\t\t\tfree(php->hook_output);\n\t\t\t}\n\t\t\tfree(php->hook_input);\n\t\t\tfree(php);\n\t\t}\n\t\tdelete_task(pj->ji_bg_hook_task);\n\t}\n\n\tif (pj->ji_report_task)\n\t\tdelete_task(pj->ji_report_task);\n\n\t/*\n\t ** This gets rid of any dependent job structure(s) from ji_setup.\n\t */\n\tif 
(job_free_extra != NULL)\n\t\tjob_free_extra(pj);\n\n\tCLEAR_HEAD(pj->ji_multinodejobs);\n\n#ifdef WIN32\n\tif (pj->ji_hJob) {\n\t\tCloseHandle(pj->ji_hJob);\n\t\tpj->ji_hJob = NULL;\n\t}\n#endif\n\n#endif /* PBS_MOM */\n\t/* if a subjob (of an Array Job), do not free certain items */\n\t/* which are malloced and shared with the parent Array Job */\n\t/* They will be freed when the parent is removed           */\n\n\tpj->ji_qs.ji_jobid[0] = 'X'; /* as a \"freed\" marker */\n\tfree(pj);\t\t     /* now free the main structure */\n}\n\n/**\n * @brief\n * \t\tjob_init_wattr - initialize job working attribute array\n *\t\tset the types and the \"unspecified value\" flag\n *\n * @see\n * \t\tjob_alloc\n *\n * @param[in]\tpj - pointer to job structure\n *\n * @return\tvoid\n */\nstatic void\njob_init_wattr(job *pj)\n{\n\tint i;\n\n\tfor (i = 0; i < (int) JOB_ATR_LAST; i++) {\n\t\tclear_attr(get_jattr(pj, i), &job_attr_def[i]);\n\t}\n}\n\n/**\n * @brief\n *      spool_filename - formulate stdout/err file name in the spool area.\n *\n * @param[in]    pjob     - pointer to job structure.\n * @param[out]   namebuf  - output/error file name.\n * @param[in]    suffix   - output/error file name suffix.\n *\n * @return  void\n */\nvoid\nspool_filename(job *pjob, char *namebuf, char *suffix)\n{\n\tif (*pjob->ji_qs.ji_fileprefix != '\\0')\n\t\t(void) strcat(namebuf, pjob->ji_qs.ji_fileprefix);\n\telse\n\t\t(void) strcat(namebuf, pjob->ji_qs.ji_jobid);\n\t(void) strcat(namebuf, suffix);\n}\n\n/**\n * @brief\n * \t\tremove_stdouterr_files - remove stdout/err files from the spool directory\n *\n * @param[in]   pjob    - pointer to job structure\n * @param[in]\tsuffix\t- output/error file name suffix.\n *\n * @return\tvoid\n */\nvoid\nremove_stdouterr_files(job *pjob, char *suffix)\n{\n\tchar namebuf[MAXPATHLEN + 1];\n\n\t(void) strcpy(namebuf, path_spool);\n\tspool_filename(pjob, namebuf, suffix);\n\tif (unlink(namebuf) < 0)\n\t\tif (errno != ENOENT)\n\t\t\tlog_joberr(errno, __func__, 
msg_err_purgejob, pjob->ji_qs.ji_jobid);\n}\n\n/**\n * @brief\n * \t\tdirect_write_requested - checks whether direct_write is requested by the job.\n *\n * @param[in]\tpjob\t- pointer to job structure.\n *\n * @return\tbool\n * @retval 1 : direct write is requested by the job.\n * @retval 0 : direct write is not requested by the job.\n */\nint\ndirect_write_requested(job *pjob)\n{\n\tchar *pj_attrk = NULL;\n\tif ((is_jattr_set(pjob, JOB_ATR_keep))) {\n\t\tpj_attrk = get_jattr_str(pjob, JOB_ATR_keep);\n\t\tif (strchr(pj_attrk, 'd') && (strchr(pj_attrk, 'o') || (strchr(pj_attrk, 'e'))))\n\t\t\treturn 1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\tConvenience function to delete job related files for a job being purged\n *\n * @param[in]\tpjob - the job being purged\n * @param[in]\tfsuffix - suffix of the file to delete\n *\n * @return\tvoid\n */\nvoid\ndel_job_related_file(job *pjob, char *fsuffix)\n{\n\tchar namebuf[MAXPATHLEN + 1] = {'\\0'};\n\n\tstrcpy(namebuf, path_jobs);\n\tif (*pjob->ji_qs.ji_fileprefix != '\\0')\n\t\tstrcat(namebuf, pjob->ji_qs.ji_fileprefix);\n\telse\n\t\tstrcat(namebuf, pjob->ji_qs.ji_jobid);\n\tstrcat(namebuf, fsuffix);\n\tif (unlink(namebuf) < 0) {\n\t\tif (errno != ENOENT) {\n\t\t\tlog_joberr(errno, __func__, msg_err_purgejob,\n\t\t\t\t   pjob->ji_qs.ji_jobid);\n\t\t}\n\t}\n}\n\n#ifdef PBS_MOM\n/**\n * @brief\tRename the job's <taskdir>.TK to <taskdir>.TK.RM\n *\n * @param[in]\tpjob - the job being purged\n *\n * @return\tchar *\n * @retval\t!NULL  - new taskdir path name, malloced which must be freed.\n * @retval\tNULL - rename failed, error encountered\n */\nstatic char *\nrename_taskdir(job *pjob)\n{\n\tchar *namebuf = NULL;\n\tchar *renamebuf = NULL;\n\tchar *fprefix;\n\n\tif ((pjob == NULL) || (path_jobs == NULL))\n\t\treturn (NULL);\n\n\tif (*pjob->ji_qs.ji_fileprefix != '\\0')\n\t\tfprefix = pjob->ji_qs.ji_fileprefix;\n\telse\n\t\tfprefix = pjob->ji_qs.ji_jobid;\n\n\tif (fprefix == NULL)\n\t\treturn (NULL);\n\n\tif 
(pbs_asprintf(&namebuf, \"%s%s%s\", path_jobs, fprefix, JOB_TASKDIR_SUFFIX) == -1)\n\t\treturn (NULL);\n\tif (pbs_asprintf(&renamebuf, \"%s%s\", namebuf, JOB_DEL_SUFFIX) == -1)\n\t\treturn (namebuf);\n\tif (rename(namebuf, renamebuf) == 0) {\n\t\tfree(namebuf);\n\t\treturn (renamebuf);\n\t}\n\tfree(renamebuf);\n\treturn (namebuf);\n}\n\n/**\n * @brief\tConvenience function to delete directories associated with a job being purged\n *\n * @param[in]\tpjob - the job being purged\n * @param[in]\ttaskdir - if not NULL, the path name to the task directory to cleanup\n *\n * @return\tvoid\n */\nvoid\ndel_job_dirs(job *pjob, char *taskdir)\n{\n\tchar namebuf[MAXPATHLEN + 1] = {'\\0'};\n\n\tif (taskdir == NULL) {\n\t\tstrcpy(namebuf, path_jobs); /* job directory path */\n\t\tif (*pjob->ji_qs.ji_fileprefix != '\\0')\n\t\t\tstrcat(namebuf, pjob->ji_qs.ji_fileprefix);\n\t\telse\n\t\t\tstrcat(namebuf, pjob->ji_qs.ji_jobid);\n\t\tstrcat(namebuf, JOB_TASKDIR_SUFFIX);\n\t\tremtree(namebuf);\n\t} else {\n\t\tremtree(taskdir);\n\t}\n\trmtmpdir(pjob->ji_qs.ji_jobid); /* remove tmpdir */\n\n\t/* remove the staging and execution directory when sandbox=PRIVATE\n\t ** and there are no stage-out errors\n\t */\n\tif (is_jattr_set(pjob, JOB_ATR_sandbox) &&\n\t    (strcasecmp(get_jattr_str(pjob, JOB_ATR_sandbox), \"PRIVATE\") == 0)) {\n\t\tint check_shared = 0;\n\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0)\n\t\t\t/* sister mom */\n\t\t\tcheck_shared = 1;\n\n\t\tif (!(pjob->ji_qs.ji_svrflags & JOB_SVFLG_StgoFal)) {\n\t\t\tif (pjob->ji_grpcache != NULL)\n\t\t\t\trmjobdir(pjob->ji_qs.ji_jobid,\n\t\t\t\t\t jobdirname(pjob->ji_qs.ji_jobid, pjob->ji_grpcache->gc_homedir),\n\t\t\t\t\t pjob->ji_grpcache->gc_uid,\n\t\t\t\t\t pjob->ji_grpcache->gc_gid,\n\t\t\t\t\t check_shared);\n\t\t\telse\n\t\t\t\trmjobdir(pjob->ji_qs.ji_jobid,\n\t\t\t\t\t jobdirname(pjob->ji_qs.ji_jobid, NULL),\n\t\t\t\t\t 0,\n\t\t\t\t\t 0,\n\t\t\t\t\t check_shared);\n\t\t}\n\t}\n}\n\n/**\n * @brief\tConvenience 
function to delete checkpoint files for a job being purged\n *\n * @param[in]\tpjob - job being purged\n *\n * @return void\n */\nvoid\ndel_chkpt_files(job *pjob)\n{\n\tchar namebuf[MAXPATHLEN + 1] = {'\\0'};\n\n\tif (path_checkpoint != NULL) { /* delete checkpoint files */\n\t\tpbs_strncpy(namebuf, path_checkpoint, sizeof(namebuf));\n\t\tif (*pjob->ji_qs.ji_fileprefix != '\\0')\n\t\t\t(void) strcat(namebuf, pjob->ji_qs.ji_fileprefix);\n\t\telse\n\t\t\t(void) strcat(namebuf, pjob->ji_qs.ji_jobid);\n\t\t(void) strcat(namebuf, JOB_CKPT_SUFFIX);\n\t\t(void) remtree(namebuf);\n\t\t(void) strcat(namebuf, \".old\");\n\t\t(void) remtree(namebuf);\n\t}\n}\n\n/**\n * @brief\n * \tfind_env_slot - find if the environment variable is already in the table,\n *\tIf so, replace the existing one with the new one.\n *\n * @param[in] ptbl - pointer to var_table which holds environment variable for job\n * @param[in] pstr - new environment variable\n *\n * @return\tint\n * @retval\t!(-1)\tsuccess\n * @retval\t-1\tFailure\n *\n */\n\nint\nfind_env_slot(struct var_table *ptbl, char *pstr)\n{\n\tint i;\n\tint len = 1; /* one extra for '=' */\n\n\tif (pstr == NULL)\n\t\treturn (-1);\n\tfor (i = 0; (*(pstr + i) != '=') && (*(pstr + i) != '\\0'); ++i)\n\t\t++len;\n\n\tfor (i = 0; i < ptbl->v_used; ++i) {\n\t\tif (strncmp(ptbl->v_envp[i], pstr, len) == 0)\n\t\t\treturn (i);\n\t}\n\treturn (-1);\n}\n\n/**\n * @brief\n *\tbld_env_variables - Add an entry to the table that defines the environment variables for a job.\n * @par\n * \tNote that this function returns void. It gives the caller no indication\n * \twhether the operation failed, which it could. In the case where the\n * \toperation does fail, the variable will not be added to the table and\n * \twill not be present in the job's environment. 
The caller would have\n * \tto check the table upon return of this function to confirm the\n * \tvariable was added/updated correctly.\n *\n * @param[in] vtable - variable table\n * @param[in] name - variable name alone or a \"name=value\" string\n * @param[in] value - variable value or NULL if name contains \"name=value\"\n *\n * @return - None\n *\n */\nvoid\nbld_env_variables(struct var_table *vtable, char *name, char *value)\n{\n\tint amt;\n\tint i;\n\tchar *block;\n\n\tif ((vtable == NULL) || (name == NULL))\n\t\treturn;\n\n\tif (value == NULL) {\n\t\t/* name must contain '=' */\n\t\tif (strchr(name, (int) '=') == NULL)\n\t\t\treturn;\n\t} else {\n\t\t/* name may not contain '=' */\n\t\tif (strchr(name, (int) '=') != NULL)\n\t\t\treturn;\n\t}\n\n\tamt = strlen(name) + 1; /* plus 1 for terminator */\n\tif (value)\n\t\tamt += strlen(value) + 1; /* plus 1 for \"=\" */\n\n\tblock = malloc(amt);\n\tif (block == NULL) /* no room for string */\n\t\treturn;\n\n\t(void) strcpy(block, name);\n\tif (value) {\n\t\t(void) strcat(block, \"=\");\n\t\t(void) strcat(block, value);\n\t}\n\n\tif ((i = find_env_slot(vtable, block)) < 0) {\n\t\t/*\n\t\t ** See if last available slot is used.\n\t\t ** This needs to be one less than v_ensize\n\t\t ** to make sure there is a NULL termination.\n\t\t */\n\t\tif (vtable->v_used + 1 == vtable->v_ensize) {\n\t\t\tint newsize = vtable->v_ensize * 2;\n\t\t\tchar **tt = realloc(vtable->v_envp,\n\t\t\t\t\t    newsize * sizeof(char *));\n\n\t\t\tif (tt == NULL)\n\t\t\t\treturn; /* no room for pointer */\n\t\t\tvtable->v_ensize = newsize;\n\t\t\tvtable->v_envp = tt;\n\t\t}\n\n\t\t*(vtable->v_envp + vtable->v_used++) = block;\n\t\t*(vtable->v_envp + vtable->v_used) = NULL;\n\t} else {\n\t\t/* free old value */\n\t\tfree(*(vtable->v_envp + i));\n\t\t*(vtable->v_envp + i) = block;\n\t}\n}\n\n/**\n * @brief\n *\tAdd to 'array_dest' the entries in 'array_src'.\n *\n * @param[in]\tarray_src  - environment array to duplicate\n * @param[in]\tarray_dest - 
environment array in which to duplicate\n *\n * @return\tvoid\n * @par MT-safe: no\n */\nvoid\nadd_envp(char **array_src, struct var_table *array_dest)\n{\n\tchar *e_var, *e_val, *p;\n\tint i;\n\n\tif (array_src == NULL) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t  __func__, \"Unexpected input\");\n\t\treturn;\n\t}\n\ti = 0;\n\twhile ((e_var = array_src[i]) != NULL) {\n\t\tif ((p = strchr(e_var, '=')) != NULL) {\n\t\t\t*p = '\\0';\n\t\t\tp++;\n\t\t\te_val = p;\n\t\t} else {\n\t\t\te_val = NULL; /* can be NULL */\n\t\t}\n\t\tbld_env_variables(array_dest, e_var, e_val);\n\t\tif (e_val != NULL)\n\t\t\t*(p - 1) = '='; /* restore */\n\t\ti++;\n\t}\n}\n\n#endif\n\n/**\n * @brief\n * \t\tjob_purge - purge job from system\n *\n * \t\tThe job is dequeued; the job control file, script file and any spooled\n * \t\toutput files are unlinked, and the job structure is freed.\n * \t\tIf we are MOM, the task files and checkpoint files are also\n * \t\tremoved.\n *\n * @param[in]\tpjob - pointer to job structure\n *\n * @return\tvoid\n */\n\nvoid\njob_purge(job *pjob)\n{\n\textern char *msg_err_purgejob;\n#ifdef PBS_MOM\n\tchar namebuf[MAXPATHLEN + 1] = {'\\0'};\n\tint keeping = 0;\n\tchar *taskdir_path = NULL;\n\n\tpid_t pid = -1;\n\tint child_process = 0;\n\n#else\n\textern char *msg_err_purgejob_db;\n\tpbs_db_obj_info_t obj;\n\tpbs_db_job_info_t dbjob;\n\tvoid *conn = (void *) svr_db_conn;\n#endif /* PBS_MOM */\n\n\tif (pjob->ji_rerun_preq != NULL) {\n\t\tlog_joberr(PBSE_INTERNAL, __func__, \"rerun request outstanding\",\n\t\t\t   pjob->ji_qs.ji_jobid);\n\t\treply_text(pjob->ji_rerun_preq, PBSE_INTERNAL, \"job rerun\");\n\t\tpjob->ji_rerun_preq = NULL;\n\t}\n#ifndef PBS_MOM\n\tif (pjob->ji_pmt_preq != NULL) {\n\t\tlog_joberr(PBSE_INTERNAL, __func__, \"preempt request outstanding\",\n\t\t\t   pjob->ji_qs.ji_jobid);\n\t\treply_preempt_jobs_request(PBSE_INTERNAL, PREEMPT_METHOD_DELETE, 
pjob);\n\t}\n#endif\n#ifdef PBS_MOM\n\tif (pjob->ji_pending_ruu != NULL) {\n\t\truu *x = (ruu *) (pjob->ji_pending_ruu);\n\t\tsend_resc_used(x->ru_cmd, 1, x);\n\t\tFREE_RUU(x);\n\t}\n\tdelete_link(&pjob->ji_jobque);\n\tdelete_link(&pjob->ji_alljobs);\n\tdelete_link(&pjob->ji_unlicjobs);\n\tif (pbs_idx_delete(jobs_idx, pjob->ji_qs.ji_jobid) != PBS_IDX_RET_OK)\n\t\tlog_joberr(PBSE_INTERNAL, __func__, \"Failed to remove job from index\", pjob->ji_qs.ji_jobid);\n\n\tif (pjob->ji_preq != NULL) {\n\t\tlog_joberr(PBSE_INTERNAL, __func__, \"request outstanding\",\n\t\t\t   pjob->ji_qs.ji_jobid);\n\t\treply_text(pjob->ji_preq, PBSE_INTERNAL, \"job deleted\");\n\t\tpjob->ji_preq = NULL;\n\t}\n\n\tfree_string_array(pjob->ji_env.v_envp);\n\n#ifndef WIN32\n\n\tif (pjob->ji_momsubt != 0) { /* child running */\n\t\t(void) kill(pjob->ji_momsubt, SIGKILL);\n\t\tpjob->ji_momsubt = 0;\n\t}\n\t/* if open, close pipes to/from Mom starter process */\n\tif (pjob->ji_jsmpipe != -1) {\n\t\tconn_t *connection = NULL;\n\n\t\tif ((is_jattr_set(pjob, JOB_ATR_session_id)) == 0 &&\n\t\t    !(get_jattr_long(pjob, JOB_ATR_session_id)) &&\n\t\t    (connection = get_conn(pjob->ji_jsmpipe)) != NULL) {\n\t\t\t/*\n\t\t\t * If session id for the job is not set, retain pjob->ji_jsmpipe.\n\t\t\t * Set cn_data to NULL so that we can kill the process when\n\t\t\t * record_finish_exec is called.\n\t\t\t */\n\t\t\tconnection->cn_data = NULL;\n\t\t} else\n\t\t\t(void) close_conn(pjob->ji_jsmpipe);\n\t}\n\tif (pjob->ji_mjspipe != -1)\n\t\t(void) close(pjob->ji_mjspipe);\n\n\t/* if open, close 2nd pipes to/from Mom starter process */\n\tif (pjob->ji_jsmpipe2 != -1)\n\t\t(void) close_conn(pjob->ji_jsmpipe2);\n\tif (pjob->ji_mjspipe2 != -1)\n\t\t(void) close(pjob->ji_mjspipe2);\n\n\t/* if open, close 3rd pipes to/from Mom starter process */\n\tif (pjob->ji_child2parent_job_update_pipe != -1)\n\t\t(void) close_conn(pjob->ji_child2parent_job_update_pipe);\n\tif (pjob->ji_parent2child_job_update_pipe != 
-1)\n\t\t(void) close(pjob->ji_parent2child_job_update_pipe);\n\n\t/* if open, close 4th pipes to/from Mom starter process */\n\tif (pjob->ji_parent2child_job_update_status_pipe != -1)\n\t\t(void) close(pjob->ji_parent2child_job_update_status_pipe);\n\n\t/* if open, close 5th pipes to/from Mom starter process */\n\tif (pjob->ji_parent2child_moms_status_pipe != -1)\n\t\t(void) close(pjob->ji_parent2child_moms_status_pipe);\n#else\n\tif (pjob->ji_user)\n\t\twunloaduserprofile(pjob->ji_user);\n#endif\n#else  /* not PBS_MOM */\n\tif ((!check_job_substate(pjob, JOB_SUBSTATE_TRANSIN)) &&\n\t    (!check_job_substate(pjob, JOB_SUBSTATE_TRANSICM))) {\n\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_SubJob) && (!check_job_state(pjob, JOB_STATE_LTR_FINISHED))) {\n\t\t\tif ((check_job_substate(pjob, JOB_SUBSTATE_RERUN3)) || (check_job_substate(pjob, JOB_SUBSTATE_QUEUED)))\n\t\t\t\tupdate_sj_parent(pjob->ji_parentaj, pjob, pjob->ji_qs.ji_jobid, get_job_state(pjob), JOB_STATE_LTR_QUEUED);\n\t\t\telse {\n\t\t\t\tif (pjob->ji_terminated && pjob->ji_parentaj && pjob->ji_parentaj->ji_ajinfo)\n\t\t\t\t\tpjob->ji_parentaj->ji_ajinfo->tkm_dsubjsct++;\n\t\t\t\tupdate_sj_parent(pjob->ji_parentaj, pjob, pjob->ji_qs.ji_jobid, get_job_state(pjob), JOB_STATE_LTR_EXPIRED);\n\t\t\t\tchk_array_doneness(pjob->ji_parentaj);\n\t\t\t}\n\t\t}\n\n\t\taccount_entity_limit_usages(pjob, NULL, NULL, DECR,\n\t\t\t\t\t    pjob->ji_etlimit_decr_queued ? ETLIM_ACC_ALL_MAX : ETLIM_ACC_ALL);\n\n\t\tsvr_dequejob(pjob);\n\t}\n#endif /* PBS_MOM */\n\n#ifdef PBS_MOM\n\n\t/* on the mom end, perform file-system related cleanup in a forked process\n\t * only if job is executed successfully with exit status 0(JOB_EXEC_OK)\n\t */\n\tif (pjob->ji_qs.ji_un.ji_momt.ji_exitstat == JOB_EXEC_OK) {\n\t\t/* rename the taskdir path to avoid race condition when job\n\t\t * reruns. 
It will be removed later in the child process.\n\t\t */\n\t\ttaskdir_path = rename_taskdir(pjob);\n\t\tpid = fork();\n\t\tif (pid > 0) {\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\t\t\tdelete_cred(pjob->ji_qs.ji_jobid);\n#endif\n\t\t\t/* parent mom */\n\t\t\tjob_free(pjob);\n\t\t\tfree(taskdir_path);\n\t\t\treturn;\n\t\t}\n\t\tif (!pid)\n\t\t\tchild_process = 1;\n\t}\n\t/* Parent Mom process continues the job cleanup itself if the call to fork failed */\n\t/* delete script file */\n\tdel_job_related_file(pjob, JOB_SCRIPT_SUFFIX);\n\n\tif (pjob->ji_preq != NULL) {\n\t\treq_reject(PBSE_MOMREJECT, 0, pjob->ji_preq);\n\t\tpjob->ji_preq = NULL;\n\t}\n\n\tdel_job_dirs(pjob, taskdir_path);\n\tfree(taskdir_path);\n\n\tdel_chkpt_files(pjob);\n\n\t/* remove stdout/err files if remove_files is set. */\n\tif (is_jattr_set(pjob, JOB_ATR_remove) && (pjob->ji_qs.ji_un.ji_momt.ji_exitstat == JOB_EXEC_OK)) {\n\t\tchar *remove = get_jattr_str(pjob, JOB_ATR_remove);\n\t\tif (strchr(remove, 'o')) {\n\t\t\t(void) strcpy(namebuf, std_file_name(pjob, StdOut, &keeping));\n\t\t\tif (*namebuf && (unlink(namebuf) < 0))\n\t\t\t\tif (errno != ENOENT)\n\t\t\t\t\tlog_err(errno, __func__, msg_err_purgejob);\n\t\t}\n\t\tif (strchr(remove, 'e')) {\n\t\t\t(void) strcpy(namebuf, std_file_name(pjob, StdErr, &keeping));\n\t\t\tif (*namebuf && (unlink(namebuf) < 0))\n\t\t\t\tif (errno != ENOENT)\n\t\t\t\t\tlog_err(errno, __func__, msg_err_purgejob);\n\t\t}\n\t}\n\n#ifdef WIN32\n\t/* following introduced by fix to BZ 6363 for executing scripts */\n\t/* directly on the command line */\n\t(void) strcpy(namebuf, path_jobs); /* delete any *.BAT file */\n\tif (*pjob->ji_qs.ji_fileprefix != '\\0')\n\t\t(void) strcat(namebuf, pjob->ji_qs.ji_fileprefix);\n\telse\n\t\t(void) strcat(namebuf, pjob->ji_qs.ji_jobid);\n\t(void) strcat(namebuf, \".BAT\");\n\n\tif (unlink(namebuf) < 0) {\n\t\tif (errno != ENOENT)\n\t\t\tlog_err(errno, __func__, msg_err_purgejob);\n\t}\n#endif\n#else  /* PBS_MOM 
*/\n\n\t/* server code */\n\tremove_stdouterr_files(pjob, JOB_STDOUT_SUFFIX);\n\tremove_stdouterr_files(pjob, JOB_STDERR_SUFFIX);\n#endif /* PBS_MOM */\n\n#ifdef PBS_MOM\n\t/* delete job file */\n\tdel_job_related_file(pjob, JOB_FILE_SUFFIX);\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\tdelete_cred(pjob->ji_qs.ji_jobid);\n#endif\n\n#else\n\t/* delete job and dependants from database */\n\tobj.pbs_db_obj_type = PBS_DB_JOB;\n\tobj.pbs_db_un.pbs_db_job = &dbjob;\n\tstrcpy(dbjob.ji_jobid, pjob->ji_qs.ji_jobid);\n\tif (pbs_db_delete_obj(conn, &obj) == -1) {\n\t\tlog_joberr(-1, __func__, msg_err_purgejob_db,\n\t\t\t   pjob->ji_qs.ji_jobid);\n\t}\n\n\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_HasNodes)\n\t\tfree_nodes(pjob);\n\n#endif\n\n\tdel_job_related_file(pjob, JOB_CRED_SUFFIX);\n\n\t/* Clearing purge job info from svr_newjobs list */\n\tif (pjob == (job *) GET_NEXT(svr_newjobs))\n\t\tdelete_link(&pjob->ji_alljobs);\n\n\tjob_free(pjob);\n\n#ifdef PBS_MOM\n\tif (child_process) {\n\t\t/* This is the forked child process. All job-related\n\t\t * files have been deleted, so exit.\n\t\t */\n\t\texit(0);\n\t}\n#endif\n\n\treturn;\n}\n\n/**\n * @brief\n * \t\tfind_job() - find job by jobid\n *\n *\t\tSearch list of all server jobs for one with same job id\n *\t\tReturn NULL if not found or pointer to job struct if found.\n *\n *\t\tIf the host portion of the job ID contains a dot, it is\n *\t\tassumed that the string represents the FQDN. If no dot is\n *\t\tpresent, the string represents the short (unqualified)\n *\t\thostname. 
For example, \"foo\" will match \"foo.bar.com\", but\n *\t\t\"foo.bar\" will not match \"foo.bar.com\".\n *\n *\t\tOn the server, search the AVL tree; otherwise, search the linked list.\n *\n * @param[in]\tjobid - job ID string.\n *\n * @return\tpointer to job struct\n * @retval NULL\t- if job by jobid not found.\n */\n\njob *\nfind_job(char *jobid)\n{\n#ifndef PBS_MOM\n\tsize_t len;\n\tchar *host_dot;\n\tchar *serv_dot;\n\tchar *host;\n#endif\n\tchar *at;\n\tjob *pj = NULL;\n\tchar buf[PBS_MAXSVRJOBID + 1];\n\tvoid *pbuf = &buf;\n\n\tif (jobid == NULL || jobid[0] == '\\0')\n\t\treturn NULL;\n\n\t/* Make a copy of the job ID string before we modify it. */\n\tsnprintf(buf, sizeof(buf), \"%s\", jobid);\n\t/*\n\t * If @server_name was specified, it was used to route the\n\t * request to this server. It will not be part of the string\n\t * we are searching for, so truncate the string at the '@'\n\t * character.\n\t */\n\tif ((at = strchr(buf, (int) '@')) != NULL)\n\t\t*at = '\\0';\n\n#ifndef PBS_MOM\n\t/*\n\t * The index search cannot find partially formed job IDs.\n\t * The full job ID was supplied as the key when the job was\n\t * stored, so the exact same key must be provided when\n\t * retrieving it.\n\t */\n\tif ((host_dot = strchr(buf, '.')) != NULL) {\n\t\t/* The job ID string contains a host string */\n\t\thost = host_dot + 1;\n\t\tif (strncasecmp(server_name, host, PBS_MAXSERVERNAME + 1) != 0) {\n\t\t\t/*\n\t\t\t * The server_name and host strings do not match.\n\t\t\t * Try to determine if one is the FQDN and the other\n\t\t\t * is the short name. If there is no match, do not\n\t\t\t * modify the string we will be searching for. If\n\t\t\t * there is a match, replace host with server_name.\n\t\t\t *\n\t\t\t * Do not call is_same_host() to avoid DNS lookup\n\t\t\t * because server_name may not resolve to a real\n\t\t\t * host when PBS_SERVER_HOST_NAME is set or when\n\t\t\t * failover is enabled. 
The lookup could hang the\n\t\t\t * server for some amount of time.\n\t\t\t */\n\t\t\thost_dot = strchr(host, '.');\n\t\t\tserv_dot = strchr(server_name, '.');\n\t\t\tif (host_dot != NULL) {\n\t\t\t\t/* the host string is FQDN */\n\t\t\t\tif (serv_dot == NULL) {\n\t\t\t\t\t/* the server_name is not FQDN */\n\t\t\t\t\tlen = strlen(server_name);\n\t\t\t\t\tif (len == (host_dot - host)) {\n\t\t\t\t\t\tif (strncasecmp(host, server_name, len) == 0) {\n\t\t\t\t\t\t\t/* Use server_name to ensure cases match. */\n\t\t\t\t\t\t\tstrcpy(host, server_name);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else if (serv_dot != NULL) {\n\t\t\t\t/* the host string is not FQDN */\n\t\t\t\t/* the server_name is FQDN */\n\t\t\t\tlen = strlen(host);\n\t\t\t\tif (len == (serv_dot - server_name)) {\n\t\t\t\t\tif (strncasecmp(host, server_name, len) == 0) {\n\t\t\t\t\t\t/* Use server_name to ensure cases match. */\n\t\t\t\t\t\tstrcpy(host, server_name);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\t/*\n\t\t\t * Case insensitive compare was successful.\n\t\t\t * Use server_name to ensure cases match.\n\t\t\t */\n\t\t\tstrcpy(host, server_name);\n\t\t}\n\t} else {\n\t\t/* The job ID string does not contain a host string */\n\t\tstrcat(buf, \".\");\n\t\tstrcat(buf, server_name);\n\t}\n\n#endif\n\tif (pbs_idx_find(jobs_idx, &pbuf, (void **) &pj, NULL) == PBS_IDX_RET_OK)\n\t\treturn pj;\n\treturn NULL;\n}\n\n/**\n *  @brief\n *\t\tOutput credential into job file.\n *\n * @param[in]\t\tpjob - pointer to job struct\n * @param[in]\t\tcred - JobCredential\n * @param[in]\t\tlen - size of credentials.\n *\n * @return\tint\n * @retval\t0\t- success\n * @retval\t-1\t- fail\n */\nint\nwrite_cred(job *pjob, char *cred, size_t len)\n{\n\textern char *path_jobs;\n\tchar name_buf[MAXPATHLEN + 1];\n\tint cred_fd;\n\tint ret = -1;\n\n\t(void) strcpy(name_buf, path_jobs);\n\tif (*pjob->ji_qs.ji_fileprefix != '\\0')\n\t\t(void) strcat(name_buf, pjob->ji_qs.ji_fileprefix);\n\telse\n\t\t(void) 
strcat(name_buf, pjob->ji_qs.ji_jobid);\n\t(void) strcat(name_buf, JOB_CRED_SUFFIX);\n\n\tif ((cred_fd = open(name_buf, O_WRONLY | O_CREAT | O_EXCL, 0600)) == -1) {\n\t\tlog_err(errno, __func__, name_buf);\n\t\treturn -1;\n\t}\n\n#ifdef WIN32\n\tsecure_file(name_buf, \"Administrators\", READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED);\n\tsetmode(cred_fd, O_BINARY);\n#endif\n\n\tif (write(cred_fd, cred, len) != len) {\n\t\tlog_err(errno, __func__, \"write cred\");\n\t\tgoto done;\n\t}\n\n\tret = 0;\n\ndone:\n\tclose(cred_fd);\n\treturn ret;\n}\n\n/**\n * @brief\n *\t\tCheck if this job has an associated credential file.  If it does,\n *\t\tthe credential file is opened and the credential is read into\n *\t\tmalloc'ed memory.\n *\n * @param[in]\t\tpjob - job whose credentials need to be read.\n * @param[out]\t\tcred - JobCredential\n * @param[out]\t\tlen - size of credentials.\n *\n * @return\tint\n * @retval\t1\t- no cred\n * @retval\t0\t- success\n * @retval\t-1\t- error\n */\nint\nread_cred(job *pjob, char **cred, size_t *len)\n{\n\textern char *path_jobs;\n\tchar name_buf[MAXPATHLEN + 1];\n\tchar *hold = NULL;\n\tstruct stat sbuf;\n\tint fd;\n\tint ret = -1;\n\n\t(void) strcpy(name_buf, path_jobs);\n\tif (*pjob->ji_qs.ji_fileprefix != '\\0')\n\t\t(void) strcat(name_buf, pjob->ji_qs.ji_fileprefix);\n\telse\n\t\t(void) strcat(name_buf, pjob->ji_qs.ji_jobid);\n\t(void) strcat(name_buf, JOB_CRED_SUFFIX);\n\n\tif ((fd = open(name_buf, O_RDONLY)) == -1) {\n\t\tif (errno == ENOENT)\n\t\t\treturn 1;\n\t\tlog_err(errno, __func__, \"open\");\n\t\treturn ret;\n\t}\n\n\tif (fstat(fd, &sbuf) == -1) {\n\t\tlog_err(errno, __func__, \"fstat\");\n\t\tgoto done;\n\t}\n\n\thold = malloc(sbuf.st_size);\n\tif (hold == NULL) {\n\t\tlog_err(errno, __func__, \"malloc\");\n\t\tgoto done;\n\t}\n\n#ifdef WIN32\n\tsetmode(fd, O_BINARY);\n#endif\n\n\tif (read(fd, hold, sbuf.st_size) != sbuf.st_size) {\n\t\tlog_err(errno, __func__, \"read\");\n\t\tgoto done;\n\t}\n\t*len = sbuf.st_size;\n\t*cred = hold;\n\thold = NULL;\n\tret = 
0;\n\ndone:\n\tclose(fd);\n\tif (hold != NULL)\n\t\tfree(hold);\n\treturn ret;\n}\n\n/**\n * @brief\n * \tReturns 1 if job 'pjob' should remain running in spite of node failures.\n *\n * @param[in]\tpjob\t- job being queried\n *\n * @return int\n * @retval\t1 - if true\n * @retval\t0 - if false or 'tolerate_node_failures' attribute is unset\n */\nint\ndo_tolerate_node_failures(job *pjob)\n{\n\n\tif (pjob == NULL)\n\t\treturn (0);\n\n#if MOM_ALPS\n\t/* not currently supported on the Crays */\n\treturn (0);\n#endif\n\n\tif ((is_jattr_set(pjob, JOB_ATR_tolerate_node_failures)) &&\n\t    ((strcmp(get_jattr_str(pjob, JOB_ATR_tolerate_node_failures), TOLERATE_NODE_FAILURES_ALL) == 0) ||\n\t     ((strcmp(get_jattr_str(pjob, JOB_ATR_tolerate_node_failures), TOLERATE_NODE_FAILURES_JOB_START) == 0) &&\n\t      !check_job_substate(pjob, JOB_SUBSTATE_RUNNING)))) {\n\t\treturn (1);\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n *\tThis function updates/creates the resource list named\n *\t'res_list_name' and indexed in pjob as\n *\t'res_list_index', using the assigned resource values specified\n *\tin 'exec_vnode'. 
This also saves the previous values in\n *\tpjob's 'backup_res_list_index' attribute if not already\n *\tset.\n *\n * @param[in,out] pjob - job structure\n * @param[in]\t  res_list_name - resource list name\n * @param[in]\t  res_list_index - attribute index in job structure\n * @param[in]\t  exec_vnode - string containing the various resource\n *\t\t\tassignments\n * @param[in]\t  op - kind of operation to be performed while setting\n *\t\t     the resource value.\n * @param[in]\t  always_set  - if set, even if there is no resulting\n *\t\t\tresource list, try to have at least one entry\n *\t\t\t(e.g., ncpus=0) to keep the list set.\n * @param[in]\t  backup_res_list_index - index to job's attribute\n *\t\t\tresource list to hold original values.\n *\n * @return int\n * @retval 0  - success\n * @retval 1  - failure\n */\n\nint\nupdate_resources_list(job *pjob, char *res_list_name,\n\t\t      int res_list_index, char *exec_vnode, enum batch_op op,\n\t\t      int always_set, int backup_res_list_index)\n{\n\tchar *chunk;\n\tint j;\n\tint rc;\n\tint nelem;\n\tchar *noden;\n\tstruct key_value_pair *pkvp;\n\tresource_def *prdef;\n\tresource *presc, *pr, *next;\n\tattribute tmpattr;\n\n\tif (exec_vnode == NULL || pjob == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"bad input parameter\");\n\n\t\treturn (1);\n\t}\n\n\t/* Save current resource values in backup resource list */\n\t/* if backup resources list is not already set */\n\tif (is_jattr_set(pjob, res_list_index)) {\n\n\t\tif ((is_jattr_set(pjob, backup_res_list_index)) == 0) {\n\t\t\tfree_jattr(pjob, backup_res_list_index);\n\t\t\tset_attr_with_attr(&job_attr_def[backup_res_list_index], get_jattr(pjob, backup_res_list_index), get_jattr(pjob, res_list_index), INCR);\n\t\t}\n\n\t\tpr = (resource *) GET_NEXT(get_jattr_list(pjob, res_list_index));\n\t\twhile (pr != NULL) {\n\t\t\tnext = (resource *) GET_NEXT(pr->rs_link);\n\t\t\tif (pr->rs_defin->rs_flags & (ATR_DFLAG_RASSN | ATR_DFLAG_FNASSN | ATR_DFLAG_ANASSN)) 
{\n\t\t\t\tdelete_link(&pr->rs_link);\n\t\t\t\tif (pr->rs_value.at_flags & ATR_VFLAG_INDIRECT)\n\t\t\t\t\tfree_str(&pr->rs_value);\n\t\t\t\telse\n\t\t\t\t\tpr->rs_defin->rs_free(&pr->rs_value);\n\t\t\t\t(void) free(pr);\n\t\t\t}\n\t\t\tpr = next;\n\t\t}\n\t}\n\n\trc = 0;\n\tfor (chunk = parse_plus_spec(exec_vnode, &rc); chunk && (rc == 0);\n\t     chunk = parse_plus_spec(NULL, &rc)) {\n\n\t\tif ((rc = parse_node_resc(chunk, &noden, &nelem, &pkvp)) != 0) {\n\t\t\tlog_err(rc, __func__, \"parse of exec_vnode failed\");\n\t\t\tgoto update_resources_list_error;\n\t\t}\n\t\tfor (j = 0; j < nelem; j++) {\n\t\t\tprdef = find_resc_def(svr_resc_def, pkvp[j].kv_keyw);\n\t\t\tif (prdef == NULL) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"unknown resource %s in exec_vnode\",\n\t\t\t\t\t pkvp[j].kv_keyw);\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\tgoto update_resources_list_error;\n\t\t\t}\n\n\t\t\tif (prdef->rs_flags & (ATR_DFLAG_RASSN | ATR_DFLAG_FNASSN | ATR_DFLAG_ANASSN)) {\n\t\t\t\tpresc = add_resource_entry(\n\t\t\t\t\tget_jattr(pjob, res_list_index),\n\t\t\t\t\tprdef);\n\t\t\t\tif (presc == NULL) {\n\t\t\t\t\tsnprintf(log_buffer,\n\t\t\t\t\t\t sizeof(log_buffer),\n\t\t\t\t\t\t \"failed to add resource\"\n\t\t\t\t\t\t \"  %s\",\n\t\t\t\t\t\t prdef->rs_name);\n\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\tlog_buffer);\n\t\t\t\t\tgoto update_resources_list_error;\n\t\t\t\t}\n\t\t\t\tif ((rc = prdef->rs_decode(&tmpattr,\n\t\t\t\t\t\t\t   res_list_name, prdef->rs_name,\n\t\t\t\t\t\t\t   pkvp[j].kv_val)) != 0) {\n\t\t\t\t\tsnprintf(log_buffer,\n\t\t\t\t\t\t sizeof(log_buffer),\n\t\t\t\t\t\t \"decode of %s failed\",\n\t\t\t\t\t\t prdef->rs_name);\n\n\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\t\tlog_buffer);\n\t\t\t\t\tgoto update_resources_list_error;\n\t\t\t\t}\n\t\t\t\t(void) prdef->rs_set(&presc->rs_value,\n\t\t\t\t\t\t     &tmpattr, op);\n\t\t\t}\n\t\t}\n\t}\n\n\tif (rc != 0) {\n\t\tlog_err(PBSE_INTERNAL, 
__func__, \"error parsing exec_vnode\");\n\t\tgoto update_resources_list_error;\n\t}\n\n\tif (always_set &&\n\t    ((is_jattr_set(pjob, res_list_index)) == 0)) {\n\t\t/* this means no resources got freed during suspend */\n\t\t/* let's put a dummy entry for ncpus=0 */\n\t\tprdef = &svr_resc_def[RESC_NCPUS];\n\t\tpresc = add_resource_entry(get_jattr(pjob, res_list_index), prdef);\n\t\tif (presc == NULL) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__,\n\t\t\t\t\"failed to add ncpus in resource list\");\n\t\t\treturn (1);\n\t\t}\n\t\tif ((rc = prdef->rs_decode(&tmpattr, res_list_name,\n\t\t\t\t\t   prdef->rs_name, \"0\")) != 0) {\n\t\t\tlog_err(rc, __func__,\n\t\t\t\t\"decode of ncpus=0 failed\");\n\t\t\treturn (1);\n\t\t}\n\t\t(void) prdef->rs_set(&presc->rs_value, &tmpattr, op);\n\t}\n\n\treturn (0);\n\nupdate_resources_list_error:\n\tfree_jattr(pjob, backup_res_list_index);\n\tmark_jattr_not_set(pjob, backup_res_list_index);\n\tset_attr_with_attr(&job_attr_def[res_list_index],\n\t\t\t   get_jattr(pjob, res_list_index),\n\t\t\t   get_jattr(pjob, backup_res_list_index), INCR);\n\treturn (1);\n}\n\n#ifndef PBS_MOM /*SERVER ONLY*/\n\n/**\n * @brief\n * \tallocate space for a \"resc_resv\" structure and initialize\n *\n * @return\tresc_resv *\n * @retval\tnonzero\t- successful\n * @retval\t0\t- unsuccessful\n */\n\nresc_resv *\nresv_alloc(char *resvid)\n{\n\tint i;\n\tresc_resv *resvp;\n\tchar *dot = NULL;\n\n\tresvp = (resc_resv *) calloc(1, sizeof(resc_resv));\n\tif (resvp == NULL) {\n\t\tlog_err(errno, __func__, \"no memory\");\n\t\treturn NULL;\n\t}\n\n\tCLEAR_LINK(resvp->ri_allresvs);\n\tCLEAR_HEAD(resvp->ri_svrtask);\n\tCLEAR_HEAD(resvp->ri_rejectdest);\n\tresvp->newobj = 1;\n\n\t/* set the reservation structure's version number and\n\t * the working attributes to \"unspecified\"\n\t */\n\tresvp->ri_qs.ri_rsversion = RSVERSION;\n\tfor (i = 0; i < RESV_ATR_LAST; i++)\n\t\tclear_rattr(resvp, i);\n\n\tif ((dot = strchr(resvid, (int) '.')) != 0)\n\t\t*dot = 
'\\0';\n\n\t/*\n\t * ignore first char in given id as it can change, see req_resvSub()\n\t * So, we only use digits in given id as key in index\n\t */\n\tif (pbs_idx_insert(resvs_idx, (void *) (resvid + 1), (void *) resvp) != PBS_IDX_RET_OK) {\n\t\tif (dot)\n\t\t\t*dot = '.';\n\t\tlog_errf(-1, __func__, \"Failed to add resv %s into index\", resvid);\n\t\tfree(resvp);\n\t\treturn NULL;\n\t}\n\tif (dot)\n\t\t*dot = '.';\n\n\treturn (resvp);\n}\n\n/**\n * @brief\n * \t\tresv_free - deals only with the actual \"freeing\" of a reservation;\n *\t\taccounting, notifying, and removing the reservation from linked lists\n *\t\tare handled beforehand by resv_abt, resv_purge.  This just frees\n *\t\tany hanging substructures, deletes any attached work_tasks and frees\n *\t\tthe resc_resv structure itself.\n *\n * @param[in,out]\t\tpresv - reservation struct which needs to be freed.\n *\n * @return void\n */\n\nvoid\nresv_free(resc_resv *presv)\n{\n\tint i;\n\tstruct work_task *pwt;\n\tbadplace *bp;\n\tchar *dot = NULL;\n\tchar *resvid = presv->ri_qs.ri_resvID;\n\n\t/* remove any malloc working attribute space */\n\n\tfor (i = 0; i < (int) RESV_ATR_LAST; i++)\n\t\tfree_rattr(presv, i);\n\n\t/* delete any work task entries associated with the resv */\n\n\twhile ((pwt = (struct work_task *) GET_NEXT(presv->ri_svrtask)) != 0) {\n\t\tdelete_task(pwt);\n\t}\n\n\t/* free any bad destination structs */\n\t/* We may never use this code if reservations can't be routed */\n\n\tbp = (badplace *) GET_NEXT(presv->ri_rejectdest);\n\twhile (bp) {\n\t\tdelete_link(&bp->bp_link);\n\t\tfree(bp);\n\t\tbp = (badplace *) GET_NEXT(presv->ri_rejectdest);\n\t}\n\n\t/* any \"interactive\" batch request? 
(shouldn't be); free it now */\n\tif (presv->ri_brp)\n\t\tfree_br(presv->ri_brp);\n\n\tif ((dot = strchr(resvid, (int) '.')) != 0)\n\t\t*dot = '\\0';\n\n\tif (pbs_idx_delete(resvs_idx, (void *) (resvid + 1)) != PBS_IDX_RET_OK) {\n\t\tif (dot)\n\t\t\t*dot = '.';\n\t\tdot = NULL;\n\t\tlog_errf(-1, __func__, \"Failed to delete resv %s from index\", resvid);\n\t}\n\tif (dot)\n\t\t*dot = '.';\n\n\t/* now free the main structure */\n\tfree(presv);\n}\n\n/**\n * @brief\n * \t\tresv_purge - purge reservation from system\n *\n * \t\tThe reservation is unlinked from the server's svr_allresvs;\n * \t\tthe reservation control file is unlinked, any attached work_task's\n * \t\tare deleted and the resc_resv structure is freed along with any\n * \t\thanging, malloc'd memory areas.\n *\n * \t\tThis function - ASSUMES - that if the reservation is supported by a\n * \t\tpbs_queue that queue is empty OR having history jobs only (i.e. job\n * \t\tin state JOB_STATE_LTR_MOVED/JOB_STATE_LTR_FINISHED). So, whatever mechanism\n * \t\tis being used to remove the jobs from such a supporting queue should,\n * \t\tat the outset, store the value \"False\" into the queue's \"enabled\"\n * \t\tattribute (blocks new jobs from being placed in the queue while the\n * \t\tserver attempts to delete those currently in the queue) and into its\n * \t\t\"scheduling\" attribute (to disable servicing by the scheduler).\n *\n * \t\tAny hanging, empty pbs_queue will be handled by creating and issuing\n * \t\tto the server a PBS_BATCH_Manager request to delete this queue.  
This\n * \t\twill be dispatched immediately and a work_task having a function whose\n * \t\tsole job is to free the batch_request struct is placed on the \"immediate\"\n * \t\ttask list, for processing by the \"next_task\" function in the main loop\n * \t\tof the server.\n *\n * \t\tThis function should only be called after a check has been made\n * \t\tto verify that the party deleting the reservation has proper permission.\n *\n * @param[in]\tpresv - pointer to reservation which needs to be purged.\n *\n * @return\tvoid\n */\n\nvoid\nresv_purge(resc_resv *presv)\n{\n\tstruct batch_request *preq;\n\tstruct work_task *pwt;\n\textern char *msg_purgeResvFail;\n\textern char *msg_purgeResvDb;\n\tpbs_db_obj_info_t obj;\n\tpbs_db_resv_info_t dbresv;\n\n\tif (presv == NULL)\n\t\treturn;\n\n\tif (presv->ri_qp != NULL) {\n\t\t/*\n\t\t * Issue a batch_request to remove the supporting pbs_queue.\n\t\t * As stated: the assumption is that the queue is empty of jobs.\n\t\t */\n\t\tpreq = alloc_br(PBS_BATCH_Manager);\n\t\tif (preq == NULL) {\n\t\t\t(void) sprintf(log_buffer, \"batch request allocation failed\");\n\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_RESV, LOG_ERR,\n\t\t\t\t  presv->ri_qs.ri_resvID, log_buffer);\n\t\t\treturn;\n\t\t}\n\n\t\tCLEAR_LINK(preq->rq_ind.rq_manager.rq_attr);\n\t\tpreq->rq_ind.rq_manager.rq_cmd = MGR_CMD_DELETE;\n\t\tpreq->rq_ind.rq_manager.rq_objtype = MGR_OBJ_QUEUE;\n\n\t\t(void) strcpy(preq->rq_user, \"pbs_server\");\n\t\t(void) strcpy(preq->rq_host, pbs_server_name);\n\t\t/*\n\t\t * Copy the queue name from the attributes rather than use the\n\t\t * presv->ri_qp->qu_qs.qu_name value. The post_resv_purge()\n\t\t * function could modify it at any time. 
See SPID 352225.\n\t\t */\n\t\tstrcpy(preq->rq_ind.rq_manager.rq_objname, get_rattr_str(presv, RESV_ATR_queue));\n\n\t\t/* It is assumed that the prior check on permission was OK */\n\t\tpreq->rq_perm |= ATR_DFLAG_MGWR;\n\n\t\tif (issue_Drequest(PBS_LOCAL_CONNECTION, preq, post_resv_purge, &pwt, 0) == -1) {\n\t\t\t/* Failed to delete queue. */\n\t\t\tfree_br(preq);\n\t\t\tlog_event(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_WARNING,\n\t\t\t\t  presv->ri_qs.ri_resvID, msg_purgeResvFail);\n\t\t\treturn;\n\t\t}\n\t\t/*\n\t\t * Queue was deleted. Invocation of post_resv_purge() will\n\t\t * re-call resv_purge() (passing wt_parm2)\n\t\t */\n\t\tif (pwt)\n\t\t\tpwt->wt_parm2 = presv;\n\t\treturn;\n\t}\n\n\t/* reservation no longer has jobs or a supporting queue */\n\n\tif (presv->ri_giveback) {\n\t\t/*ok, resources were actually assigned to this reservation\n\t\t *and must now be accounted back into the loaner's pool\n\t\t */\n\n\t\tset_resc_assigned((void *) presv, 1, DECR);\n\t\tpresv->ri_giveback = 0;\n\t}\n\n\t/* Remove reservation's link element from the server's global list (svr_allresvs) */\n\tdelete_link(&presv->ri_allresvs);\n\n\t/* Delete any lingering tasks pointing to this reservation */\n\tdelete_task_by_parm1_func(presv, NULL, DELETE_ALL);\n\n\t/* Release any nodes that were associated to this reservation */\n\tfree_resvNodes(presv);\n\tset_scheduler_flag(SCH_SCHEDULE_TERM, dflt_scheduler);\n\n\tstrcpy(dbresv.ri_resvid, presv->ri_qs.ri_resvID);\n\tobj.pbs_db_obj_type = PBS_DB_RESV;\n\tobj.pbs_db_un.pbs_db_resv = &dbresv;\n\tif (pbs_db_delete_obj(svr_db_conn, &obj) == -1)\n\t\tlog_err(errno, __func__, msg_purgeResvDb);\n\n\t/* Free resc_resv struct, any hanging substructs, any attached *work_task structs */\n\tresv_free(presv);\n\treturn;\n}\n\n/**\n * @brief\n *  \tpost_resv_purge - As with the other \"post_*\" functions, this\n *\t\thandles the return reply from an internally generated request.\n *\t\tFunction resv_purge() ended up having to generate an 
internal\n *\t\trequest to qmgr() to delete the reservation's attached queue.\n *\t\tWhen the reply to that is received and indicates success,\n *\t\tresv_purge() will be re-called and this time the latter half\n *\t\tof the resv_purge() code will execute to finish the purge.\n *\t\tOtherwise, the reservation just won't get purged.  It will\n *\t\tjust be defunct.\n *\n * @param[in]\tpwt - work structure which contains internally generated request.\n *\n * @return void\n */\n\nstatic void\npost_resv_purge(struct work_task *pwt)\n{\n\tint code;\n\tresc_resv *presv;\n\tstruct batch_request *preq;\n\n\tpreq = (struct batch_request *) pwt->wt_parm1;\n\tpresv = (resc_resv *) pwt->wt_parm2;\n\tcode = preq->rq_reply.brp_code;\n\n\t/*Release the batch_request hanging (wt_parm1) from the\n\t *work_task structure\n\t */\n\trelease_req(pwt);\n\n\tif (code) {\n\t\t/*response from the request is that an error occurred\n\t\t *So, we failed on deleting the reservation's queue -\n\t\t *should mail owner about the failure\n\t\t */\n\t\treturn;\n\t}\n\n\t/*qmgr gave no error in doing MGR_CMD_DELETE on the queue\n\t *So it's safe to clear the reservation's queue pointer\n\t */\n\tpresv->ri_qp = NULL;\n\n\t/*now re-call resv_purge to execute the function's lower part*/\n\tresv_purge(presv);\n}\n\n/**\n * @brief\n * \t\tresv_abt - abort a reservation\n *\n * \t\tThe reservation is removed from the system and a mail message is sent\n * \t\tto the reservation owner.\n *\n * @param[in]\tpresv - reservation structure\n * @param[in]\ttext - content of the mail message.\n *\n * @return\terror code\n * @retval\t0\t- success\n * @retval\t-1\t- error\n */\n\nint\nresv_abt(resc_resv *presv, char *text)\n{\n\tint old_state;\n\tint rc = 0;\n\n\told_state = presv->ri_qs.ri_state;\n\n\tif (old_state == RESV_BEING_DELETED) {\n\t\tif ((presv->ri_qp != NULL &&\n\t\t     presv->ri_qp->qu_numjobs == 0) ||\n\t\t    presv->ri_qp == NULL) {\n\n\t\t\taccount_recordResv(PBS_ACCT_ABT, presv, 
\"\");\n\t\t\tsvr_mailownerResv(presv, MAIL_ABORT, MAIL_NORMAL, text);\n\t\t\tresv_purge(presv);\n\t\t} else\n\t\t\trc = -1;\n\t}\n\treturn (rc);\n}\n\n/**\n * @brief\n * \t\tSet node state to resv-exclusive if either reservation requests\n * \t\texclusive placement or the node sharing attribute is to be exclusive or\n * \t\treservation requests AOE.\n *\n * @param[in]\tpresv The reservation being considered\n *\n * @return void\n *\n * @MT-safe: No\n */\nvoid\nresv_exclusive_handler(resc_resv *presv)\n{\n\tresource_def *prsdef;\n\tresource *pplace;\n\tpbsnode_list_t *pnl;\n\tint share_node = VNS_DFLT_SHARED;\n\tint share_resv = VNS_DFLT_SHARED;\n\tchar *scdsel;\n\n\tprsdef = &svr_resc_def[RESC_PLACE];\n\tpplace = find_resc_entry(get_rattr(presv, RESV_ATR_resource), prsdef);\n\tif (pplace && pplace->rs_value.at_val.at_str) {\n\t\tif ((place_sharing_type(pplace->rs_value.at_val.at_str,\n\t\t\t\t\tVNS_FORCE_EXCLHOST) != VNS_UNSET) ||\n\t\t    (place_sharing_type(pplace->rs_value.at_val.at_str,\n\t\t\t\t\tVNS_FORCE_EXCL) != VNS_UNSET)) {\n\t\t\tshare_resv = VNS_FORCE_EXCL;\n\t\t}\n\t\tif (place_sharing_type(pplace->rs_value.at_val.at_str,\n\t\t\t\t       VNS_IGNORE_EXCL) == VNS_IGNORE_EXCL) {\n\t\t\tshare_resv = VNS_IGNORE_EXCL;\n\t\t}\n\t}\n\n\tif (share_resv != VNS_FORCE_EXCL) {\n\t\tscdsel = get_rattr_str(presv, RESV_ATR_SchedSelect);\n\t\tif (scdsel && strstr(scdsel, \"aoe=\"))\n\t\t\tshare_resv = VNS_FORCE_EXCL;\n\t}\n\tfor (pnl = presv->ri_pbsnode_list; pnl != NULL; pnl = pnl->next) {\n\t\tshare_node = get_nattr_long(pnl->vnode, ND_ATR_Sharing);\n\n\t\t/*\n\t\t * set node state to resv-exclusive if either node forces exclusive\n\t\t * or reservation requests exclusive and node does not ignore\n\t\t * exclusive.\n\t\t */\n\t\tDBPRT((\"node=%s, share_node=%d, share_resv=%d\", pnl->vnode->nd_name, share_node, share_resv));\n\t\tif ((share_node == VNS_FORCE_EXCL) || (share_node == VNS_FORCE_EXCLHOST) ||\n\t\t    ((share_node != VNS_IGNORE_EXCL) && (share_resv == 
VNS_FORCE_EXCL)) ||\n\t\t    (((share_node == VNS_DFLT_EXCL) || (share_node == VNS_DFLT_EXCLHOST)) && (share_resv != VNS_IGNORE_EXCL))) {\n\t\t\tset_vnode_state(pnl->vnode, INUSE_RESVEXCL, Nd_State_Or);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *  \tFind aoe from the reservation request\n *\n * @see\n *\t\tresc_select_action\n *\n * @param[in]\tpresv\t- pointer to the reservation\n *\n * @return\tchar *\n * @retval\tNULL     - no aoe requested\n * @retval\tNON NULL - value of aoe requested\n *\n * @par Side Effects:\n *\tMemory returned is to be freed by caller\n *\n * @par MT-safe: yes\n *\n */\nchar *\nfind_aoe_from_request(resc_resv *presv)\n{\n\tchar *aoe_req = NULL;\n\tchar *p, *q;\n\tint i = 0;\n\n\t/* look into schedselect as this is expanded form of select\n\t * after taking into account default_chunk.res.\n\t */\n\tif (presv == NULL)\n\t\treturn NULL;\n\n\tif ((q = get_rattr_str(presv, RESV_ATR_SchedSelect)) != NULL) {\n\t\t/* just get first appearance of aoe */\n\t\tif ((p = strstr(q, \"aoe=\")) != NULL) {\n\t\t\tp += 4; /* strlen(\"aoe=\") = 4 */\n\t\t\t/* get length of aoe name in i. 
*/\n\t\t\tfor (q = p; *q && *q != ':' && *q != '+'; i++, q++)\n\t\t\t\t;\n\t\t\taoe_req = malloc(i + 1);\n\t\t\tif (aoe_req == NULL) {\n\t\t\t\tlog_err(ENOMEM, __func__, \"out of memory\");\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tstrncpy(aoe_req, p, i);\n\t\t\taoe_req[i] = '\\0';\n\t\t}\n\t}\n\treturn aoe_req;\n}\n#endif /*ifndef PBS_MOM*/\n\n/**\n * @brief\n * \t\tget_jobowner - copy the basic job owner's name, without the @host suffix.\n *\t\tThe \"to\" buffer must be large enough (PBS_MAXUSER+1).\n *\n * @param[in]\tfrom\t-\t basic job owner's name\n * @param[out]\tto\t-\t\"to\" buffer where name is copied.\n */\nvoid\nget_jobowner(char *from, char *to)\n{\n\tint i;\n\n\tfor (i = 0; i < PBS_MAXUSER; ++i) {\n\t\tif ((*(from + i) == '@') || (*(from + i) == '\\0'))\n\t\t\tbreak;\n\t\t*(to + i) = *(from + i);\n\t}\n\t*(to + i) = '\\0';\n}\n\n/**\n * @brief\n * \t\tsetup_from - setup the \"from\" name for a standard job file:\n *\t\toutput, error, or chkpt\n *\n * @param[in]\tpjob\t- job structure\n * @param[in]\tsuffix\t- suffix for the \"from\" name\n *\n * @return\t\"from\" name\n */\n\nstatic char *\nsetup_from(job *pjob, char *suffix)\n{\n\tchar *from;\n\n\tfrom = malloc(strlen(pjob->ji_qs.ji_jobid) + strlen(suffix) + 1);\n\tif (from) {\n\t\t(void) strcpy(from, pjob->ji_qs.ji_jobid);\n\t\t(void) strcat(from, suffix);\n\t}\n\treturn (from);\n}\n\n/**\n * @brief\n * \t\tsetup_cpyfiles - if need be, allocate and initialize a Copy Files\n *\t\tbatch request, then append the file pairs\n *\n * @param[in]\tpreq\t- batch request\n * @param[in]\tpjob\t- job structure\n * @param[in]\tfrom\t- local (to mom) name\n * @param[in]\tto\t\t- remote (destination) name\n * @param[in]\tdirection\t- copy direction\n * @param[in]\ttflag\t- 1 if stdout or stderr , 2 if stage out or in\n *\n * @return\tmodified batch request.\n * @retval\tNULL\t- failure\n */\n\nstatic struct batch_request *\nsetup_cpyfiles(struct batch_request *preq, job *pjob, char *from, char *to, int direction, int 
tflag)\n{\n\tstruct rq_cpyfile *pcf;\n\tstruct rq_cpyfile_cred *pcfc;\n\tstruct rqfpair *pair;\n\tsize_t cred_len = 0;\n\tchar *cred = NULL;\n\tchar *prq_jobid;\n\tchar *prq_owner;\n\tchar *prq_user;\n\tchar *prq_group;\n\tint *prq_dir;\n\tpbs_list_head *prq_pair;\n\tattribute *attr;\n\n#ifndef PBS_MOM\n\t/* if this is a sub job of an array job, then check to see if the */\n\t/* index needs to be substituted in the paths\t\t\t  */\n\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_SubJob) {\n\t\tto = subst_array_index(pjob, to);\n\t\tfrom = subst_array_index(pjob, from);\n\t}\n#endif\n\n\tif (preq == NULL) {\n\t\t/* check that certain required attributes are valid */\n\n\t\tif (get_jattr_str(pjob, JOB_ATR_job_owner) == NULL || get_jattr_str(pjob, JOB_ATR_euser) == NULL) {\n\t\t\t/* this case shouldn't happen, log it and don't do copy     */\n\t\t\t/* use null jobid, if attr missing, jobid is likely bad too */\n\n\t\t\tlog_event(PBSEVENT_ERROR | PBSEVENT_JOB, PBS_EVENTCLASS_FILE,\n\t\t\t\t  LOG_INFO, \"\",\n\t\t\t\t  \"cannot copy files for job, owner/euser missing\");\n\t\t\tif (from)\n\t\t\t\tfree(from);\n\t\t\tif (to)\n\t\t\t\tfree(to);\n\t\t\tif (cred)\n\t\t\t\tfree(cred);\n\t\t\treturn NULL;\n\t\t}\n\t\t/* allocate and initialize the batch request struct */\n#ifndef PBS_MOM\n\t\tif (get_credential(parse_servername(get_jattr_str(pjob, JOB_ATR_exec_vnode), NULL),\n\t\t\t\t   pjob, PBS_GC_CPYFILE, &cred, &cred_len) == 0) {\n\t\t\tpreq = alloc_br(PBS_BATCH_CopyFiles_Cred);\n\t\t} else\n#endif\n\t\t\tpreq = alloc_br(PBS_BATCH_CopyFiles);\n\n\t\tif (preq == NULL) {\n\t\t\tif (from)\n\t\t\t\tfree(from);\n\t\t\tif (to)\n\t\t\t\tfree(to);\n\t\t\tif (cred)\n\t\t\t\tfree(cred);\n\t\t\treturn (preq);\n\t\t}\n\n\t\tif (preq->rq_type == PBS_BATCH_CopyFiles_Cred) {\n\t\t\tpreq->rq_ind.rq_cpyfile_cred.rq_credtype = pjob->ji_extended.ji_ext.ji_credtype;\n\t\t\tpreq->rq_ind.rq_cpyfile_cred.rq_pcred = cred;\n\t\t\tpreq->rq_ind.rq_cpyfile_cred.rq_credlen = cred_len;\n\t\t\tpcfc = 
&preq->rq_ind.rq_cpyfile_cred;\n\t\t\tprq_jobid = pcfc->rq_copyfile.rq_jobid;\n\t\t\tprq_owner = pcfc->rq_copyfile.rq_owner;\n\t\t\tprq_user = pcfc->rq_copyfile.rq_user;\n\t\t\tprq_group = pcfc->rq_copyfile.rq_group;\n\t\t\tprq_dir = &pcfc->rq_copyfile.rq_dir;\n\t\t\tprq_pair = &pcfc->rq_copyfile.rq_pair;\n\t\t} else {\n\t\t\tpcf = &preq->rq_ind.rq_cpyfile;\n\t\t\tprq_jobid = pcf->rq_jobid;\n\t\t\tprq_owner = pcf->rq_owner;\n\t\t\tprq_user = pcf->rq_user;\n\t\t\tprq_group = pcf->rq_group;\n\t\t\tprq_dir = &pcf->rq_dir;\n\t\t\tprq_pair = &pcf->rq_pair;\n\t\t}\n\t\tCLEAR_HEAD((*prq_pair));\n\n\t\t/* copy jobid, owner, exec-user, group names, up to the @host part */\n\n\t\tstrcpy(prq_jobid, pjob->ji_qs.ji_jobid);\n\t\tget_jobowner(get_jattr_str(pjob, JOB_ATR_job_owner), prq_owner);\n\t\tget_jobowner(get_jattr_str(pjob, JOB_ATR_euser), prq_user);\n\t\tattr = get_jattr(pjob, JOB_ATR_egroup);\n\t\tif ((attr->at_flags & ATR_VFLAG_DEFLT) == 0 && get_attr_str(attr) != 0)\n\t\t\tstrcpy(prq_group, get_attr_str(attr));\n\t\telse\n\t\t\tprq_group[0] = '\\0'; /* default: use login group */\n\n\t\t*prq_dir = direction;\n\n\t\t/* set \"sandbox=PRIVATE\" mode */\n\t\tif (is_jattr_set(pjob, JOB_ATR_sandbox)) {\n\t\t\t/* set STAGE_JOBDIR mode based on job settings */\n\t\t\tif (strcasecmp(get_jattr_str(pjob, JOB_ATR_sandbox), \"PRIVATE\") == 0)\n\t\t\t\t*prq_dir |= STAGE_JOBDIR;\n\t\t} /* O_WORKDIR check would go here */\n\n\t} else {\n\n\t\t/* use the existing request structure */\n\n\t\tif (preq->rq_type == PBS_BATCH_CopyFiles_Cred) {\n\t\t\tpcfc = &preq->rq_ind.rq_cpyfile_cred;\n\t\t\tprq_pair = &pcfc->rq_copyfile.rq_pair;\n\t\t} else {\n\t\t\tpcf = &preq->rq_ind.rq_cpyfile;\n\t\t\tprq_pair = &pcf->rq_pair;\n\t\t}\n\t}\n\n\tpair = (struct rqfpair *) malloc(sizeof(struct rqfpair));\n\tif (pair == NULL) {\n\t\tfree(from);\n\t\tfree(to);\n\t\tfree_br(preq);\n\t\treturn NULL;\n\t}\n\n\tCLEAR_LINK(pair->fp_link);\n\tpair->fp_local = from;\n\tpair->fp_rmt = to;\n\tpair->fp_flag = 
tflag;\n\tappend_link(prq_pair, &pair->fp_link, pair);\n\treturn (preq);\n}\n/**\n * @brief\n * \t\tis_join - Is the file joined to another.\n *\n * @param[in]\tpjob\t- job structure\n * @param[in]\tati\t- job attribute, output/error path.\n *\n * @return\tjoined or not\n * @retval\t0\t- either the first or not in list.\n * @retval\t1\t- being joined.\n */\nstatic int\nis_join(job *pjob, enum job_atr ati)\n{\n\tchar key;\n\tchar *pd;\n\n\tif (ati == JOB_ATR_outpath)\n\t\tkey = 'o';\n\telse if (ati == JOB_ATR_errpath)\n\t\tkey = 'e';\n\telse\n\t\treturn (0);\n\tif (is_jattr_set(pjob, JOB_ATR_join)) {\n\t\tpd = get_jattr_str(pjob, JOB_ATR_join);\n\t\tif (pd && *pd && (*pd != 'n')) {\n\t\t\t/* if not the first letter, and in list - is joined */\n\t\t\tif ((*pd != key) && (strchr(pd + 1, (int) key)))\n\t\t\t\treturn (1); /* being joined */\n\t\t}\n\t}\n\treturn (0); /* either the first or not in list */\n}\n\n/**\n * @brief\n * \t\tcpy_stdfile - determine if one of the job's standard files (output or error)\n *\t\tis to be copied, if so set up the Copy Files request.\n *\n * @param[in]\tpreq\t- batch request\n * @param[in]\tpjob\t- job structure\n * @param[in]\tati\t- JOB_ATR_, output/error path.\n *\n * @return\tmodified batch request.\n * @retval\tNULL\tfailure\n */\n\nstruct batch_request *\ncpy_stdfile(struct batch_request *preq, job *pjob, enum job_atr ati)\n{\n\tchar *from;\n\tchar key;\n\tchar *suffix;\n\tchar *to = NULL;\n\tchar *keep;\n\n\t/* if the job is interactive, don't bother to return output file */\n\n\tif (is_jattr_set(pjob, JOB_ATR_interactive) && get_jattr_long(pjob, JOB_ATR_interactive))\n\t\treturn NULL;\n\n\t/* set up depending on which file */\n\n\tif (ati == JOB_ATR_errpath) {\n\t\tkey = 'e';\n\t\tsuffix = JOB_STDERR_SUFFIX;\n\t} else {\n\t\tkey = 'o';\n\t\tsuffix = JOB_STDOUT_SUFFIX;\n\t}\n\n\tif (!is_jattr_set(pjob, ati)) { /* This shouldn't be */\n\t\t(void) sprintf(log_buffer, \"%c file missing\", key);\n\t\tlog_event(PBSEVENT_ERROR | 
PBSEVENT_JOB, PBS_EVENTCLASS_JOB,\n\t\t\t  LOG_INFO, pjob->ji_qs.ji_jobid, log_buffer);\n\t\treturn NULL;\n\t}\n\n\t/* Is the file joined to another, if so don't copy it */\n\n\tif (is_join(pjob, ati))\n\t\treturn (preq);\n\n\t/*\n\t * If the job has a keep file attribute, and the specified file is in\n\t * the keep list, MOM has already placed the file in the user's HOME\n\t * directory.  It doesn't need to be copied.\n\t */\n\tif (is_jattr_set(pjob, JOB_ATR_keep) && strchr((keep = get_jattr_str(pjob, JOB_ATR_keep)), key) && !strchr(keep, 'd'))\n\t\treturn (preq);\n\n\t/*\n\t * If the job has a remove file attribute and the job has succeeded,\n\t * the std files don't need to be copied.\n\t */\n\tif (is_jattr_set(pjob, JOB_ATR_exit_status)) {\n\t\tif (get_jattr_long(pjob, JOB_ATR_exit_status) == JOB_EXEC_OK) {\n\t\t\tif (is_jattr_set(pjob, JOB_ATR_remove) && (strchr(get_jattr_str(pjob, JOB_ATR_remove), key)))\n\t\t\t\treturn (preq);\n\t\t}\n\t}\n\n\t/* else go with the supplied name */\n\tto = strdup(get_jattr_str(pjob, ati));\n\tif (to == NULL)\n\t\treturn (preq); /* cannot continue with this one */\n\n\t/* build up the name used by MOM as the from name */\n\n\tfrom = setup_from(pjob, suffix);\n\tif (from == NULL) {\n\t\t(void) free(to);\n\t\treturn (preq);\n\t}\n\n\t/* now set names into the batch request */\n\n\treturn (setup_cpyfiles(preq, pjob, from, to, STAGE_DIR_OUT, STDJOBFILE));\n}\n\n/**\n * @brief\n * \t\tcpy_stage - set up a Copy Files request to include files specified by the\n *\t\tuser to be staged out (also used for stage-in).\n *\t\t\"stage_out\" is a resource that may or may not exist on a host.\n *\t\tIf such exists, the files are listed one per string as\n *\t\t\"local_name@remote_host:remote_name\".\n *\n * @param[in]\tpreq\t- batch request\n * @param[in]\tpjob\t- job structure\n * @param[in]\tati\t- JOB_ATR_stageout\n * @param[in]\tdirection\t-  1 = , 2 =\n *\n * @return\tbatch_request *\n */\n\nstruct batch_request *\ncpy_stage(struct 
batch_request *preq, job *pjob, enum job_atr ati, int direction)\n{\n\tint i;\n\tchar *from;\n\tstruct array_strings *parst;\n\tchar *plocal;\n\tchar *prmt;\n\tchar *to;\n\n\tif (is_jattr_set(pjob, ati)) {\n\n\t\t/* at last, we know we have files to stage out/in */\n\n\t\tparst = get_jattr_arst(pjob, ati);\n\t\tfor (i = 0; i < parst->as_usedptr; ++i) {\n\t\t\tplocal = parst->as_string[i];\n\t\t\tprmt = strchr(plocal, (int) '@');\n\t\t\tif (prmt) {\n\t\t\t\t*prmt = '\\0';\n\t\t\t\tfrom = malloc(strlen(plocal) + 1);\n\t\t\t\tif (from) {\n\t\t\t\t\t(void) strcpy(from, plocal);\n\t\t\t\t\t*prmt = '@'; /* restore the @ */\n\t\t\t\t} else {\n\t\t\t\t\treturn (preq);\n\t\t\t\t}\n\t\t\t\tto = malloc(strlen(prmt + 1) + 1);\n\t\t\t\tif (to) {\n\t\t\t\t\t(void) strcpy(to, prmt + 1);\n\t\t\t\t} else {\n\t\t\t\t\t(void) free(from);\n\t\t\t\t\treturn (preq);\n\t\t\t\t}\n\t\t\t\tpreq = setup_cpyfiles(preq, pjob, from, to,\n\t\t\t\t\t\t      direction, STAGEFILE);\n\t\t\t}\n\t\t}\n\t}\n\n\treturn (preq);\n}\n\nint\nhas_stage(job *pjob)\n{\n\tstruct batch_request *preq = NULL;\n\n\tpreq = cpy_stdfile(NULL, pjob, JOB_ATR_outpath);\n\tif (preq) {\n\t\tfree_br(preq);\n\t\treturn 1;\n\t}\n\tpreq = cpy_stdfile(NULL, pjob, JOB_ATR_errpath);\n\tif (preq) {\n\t\tfree_br(preq);\n\t\treturn 1;\n\t}\n\tpreq = cpy_stage(NULL, pjob, JOB_ATR_stageout, STAGE_DIR_OUT);\n\tif (preq) {\n\t\tfree_br(preq);\n\t\treturn 1;\n\t}\n\tpreq = cpy_stage(NULL, pjob, JOB_ATR_stagein, STAGE_DIR_IN);\n\tif (preq) {\n\t\tfree_br(preq);\n\t\treturn 1;\n\t}\n\treturn 0;\n}\n"
  },
  {
    "path": "src/server/job_recov_db.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <sys/types.h>\n#include <sys/param.h>\n#include <execinfo.h>\n\n#include \"pbs_ifl.h\"\n#include <errno.h>\n#include <fcntl.h>\n#include <string.h>\n#include <stdlib.h>\n#include <time.h>\n\n#include <unistd.h>\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"queue.h\"\n#include \"log.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include <memory.h>\n#include \"libutil.h\"\n#include \"pbs_db.h\"\n\n#define MAX_SAVE_TRIES 3\n\nextern void *svr_db_conn;\nextern int server_init_type;\nextern pbs_list_head svr_allresvs;\n#define BACKTRACE_BUF_SIZE 50\nvoid print_backtrace(char *);\n\n/* global data items */\nextern time_t time_now;\n\njob *recov_job_cb(pbs_db_obj_info_t *dbobj, int *refreshed);\nresc_resv *recov_resv_cb(pbs_db_obj_info_t *dbobj, int *refreshed);\n\n/**\n * @brief\n *\t\tconvert job structure to DB format\n *\n * @see\n * \t\tjob_save_db\n *\n * @param[in]\tpjob - Address of the job in the server\n * @param[out]\tdbjob - Address of the database job object\n *\n * @retval\t-1  Failure\n * @retval\t>=0 What to save: 0=nothing, OBJ_SAVE_NEW or OBJ_SAVE_QS\n */\nstatic int\njob_to_db(job *pjob, pbs_db_job_info_t *dbjob)\n{\n\tint 
savetype = 0;\n\tint save_all_attrs = 0;\n\n\tstrcpy(dbjob->ji_jobid, pjob->ji_qs.ji_jobid);\n\n\tif (check_job_state(pjob, JOB_STATE_LTR_FINISHED))\n\t\tsave_all_attrs = 1;\n\n\tif ((encode_attr_db(job_attr_def, pjob->ji_wattr, JOB_ATR_LAST, &dbjob->db_attr_list, save_all_attrs)) != 0)\n\t\treturn -1;\n\n\tif (pjob->newobj) /* object was never saved/loaded before */\n\t\tsavetype |= (OBJ_SAVE_NEW | OBJ_SAVE_QS);\n\n\tif (compare_obj_hash(&pjob->ji_qs, sizeof(pjob->ji_qs), pjob->qs_hash) == 1) {\n\t\tint statenum;\n\n\t\tsavetype |= OBJ_SAVE_QS;\n\n\t\tstatenum = get_job_state_num(pjob);\n\t\tif (statenum == -1) {\n\t\t\tlog_errf(PBSE_INTERNAL, __func__, \"get_job_state_num failed for job state %c\",\n\t\t\t\t get_job_state(pjob));\n\t\t\treturn -1;\n\t\t}\n\n\t\tdbjob->ji_state = statenum;\n\t\tdbjob->ji_substate = get_job_substate(pjob);\n\t\tdbjob->ji_svrflags = pjob->ji_qs.ji_svrflags;\n\t\tdbjob->ji_stime = pjob->ji_qs.ji_stime;\n\t\tstrcpy(dbjob->ji_queue, pjob->ji_qs.ji_queue);\n\t\tstrcpy(dbjob->ji_destin, pjob->ji_qs.ji_destin);\n\t\tdbjob->ji_un_type = pjob->ji_qs.ji_un_type;\n\t\tif (pjob->ji_qs.ji_un_type == JOB_UNION_TYPE_NEW) {\n\t\t\tdbjob->ji_fromsock = pjob->ji_qs.ji_un.ji_newt.ji_fromsock;\n\t\t\tdbjob->ji_fromaddr = pjob->ji_qs.ji_un.ji_newt.ji_fromaddr;\n\t\t} else if (pjob->ji_qs.ji_un_type == JOB_UNION_TYPE_EXEC)\n\t\t\tdbjob->ji_exitstat = pjob->ji_qs.ji_un.ji_exect.ji_exitstat;\n\t\telse if (pjob->ji_qs.ji_un_type == JOB_UNION_TYPE_ROUTE) {\n\t\t\tdbjob->ji_quetime = pjob->ji_qs.ji_un.ji_routet.ji_quetime;\n\t\t\tdbjob->ji_rteretry = pjob->ji_qs.ji_un.ji_routet.ji_rteretry;\n\t\t} else if (pjob->ji_qs.ji_un_type == JOB_UNION_TYPE_MOM) {\n\t\t\tdbjob->ji_exitstat = pjob->ji_qs.ji_un.ji_momt.ji_exitstat;\n\t\t}\n\t\t/* extended portion */\n\t\tstrcpy(dbjob->ji_jid, pjob->ji_extended.ji_ext.ji_jid);\n\t\tdbjob->ji_credtype = pjob->ji_extended.ji_ext.ji_credtype;\n\t\tdbjob->ji_qrank = get_jattr_ll(pjob, JOB_ATR_qrank);\n\t}\n\n\treturn 
savetype;\n}\n\n/**\n * @brief\n *\t\tconvert from database to job structure\n *\n * @see\n * \t\tjob_recov_db\n *\n * @param[out]\tpjob - Address of the job in the server\n * @param[in]\tdbjob - Address of the database job object\n *\n * @retval   !=0  Failure\n * @retval   0    Success\n */\nstatic int\ndb_to_job(job *pjob, pbs_db_job_info_t *dbjob)\n{\n\tchar statec;\n\n\t/* Variables assigned constant values are not stored in the DB */\n\tpjob->ji_qs.ji_jsversion = JSVERSION;\n\tstrcpy(pjob->ji_qs.ji_jobid, dbjob->ji_jobid);\n\n\tstatec = state_int2char(dbjob->ji_state);\n\tif (statec == '0') {\n\t\tlog_errf(PBSE_INTERNAL, __func__, \"state_int2char failed to convert state %d\", dbjob->ji_state);\n\t\treturn 1;\n\t}\n\tset_job_state(pjob, statec);\n\tset_job_substate(pjob, dbjob->ji_substate);\n\n\tpjob->ji_qs.ji_svrflags = dbjob->ji_svrflags;\n\tpjob->ji_qs.ji_stime = dbjob->ji_stime;\n\tstrcpy(pjob->ji_qs.ji_queue, dbjob->ji_queue);\n\tstrcpy(pjob->ji_qs.ji_destin, dbjob->ji_destin);\n\tpjob->ji_qs.ji_fileprefix[0] = 0;\n\tpjob->ji_qs.ji_un_type = dbjob->ji_un_type;\n\tif (pjob->ji_qs.ji_un_type == JOB_UNION_TYPE_NEW) {\n\t\tpjob->ji_qs.ji_un.ji_newt.ji_fromsock = dbjob->ji_fromsock;\n\t\tpjob->ji_qs.ji_un.ji_newt.ji_fromaddr = dbjob->ji_fromaddr;\n\t\tpjob->ji_qs.ji_un.ji_newt.ji_scriptsz = 0;\n\t} else if (pjob->ji_qs.ji_un_type == JOB_UNION_TYPE_EXEC)\n\t\tpjob->ji_qs.ji_un.ji_exect.ji_exitstat = dbjob->ji_exitstat;\n\telse if (pjob->ji_qs.ji_un_type == JOB_UNION_TYPE_ROUTE) {\n\t\tpjob->ji_qs.ji_un.ji_routet.ji_quetime = dbjob->ji_quetime;\n\t\tpjob->ji_qs.ji_un.ji_routet.ji_rteretry = dbjob->ji_rteretry;\n\t} else if (pjob->ji_qs.ji_un_type == JOB_UNION_TYPE_MOM) {\n\t\tpjob->ji_qs.ji_un.ji_momt.ji_svraddr = 0;\n\t\tpjob->ji_qs.ji_un.ji_momt.ji_exitstat = dbjob->ji_exitstat;\n\t\tpjob->ji_qs.ji_un.ji_momt.ji_exuid = 0;\n\t\tpjob->ji_qs.ji_un.ji_momt.ji_exgid = 0;\n\t}\n\n\t/* extended portion */\n\tstrcpy(pjob->ji_extended.ji_ext.ji_jid, 
dbjob->ji_jid);\n\tpjob->ji_extended.ji_ext.ji_credtype = dbjob->ji_credtype;\n\n\tif ((decode_attr_db(pjob, &dbjob->db_attr_list.attrs, job_attr_idx, job_attr_def, pjob->ji_wattr, JOB_ATR_LAST, JOB_ATR_UNKN)) != 0)\n\t\treturn -1;\n\n\tcompare_obj_hash(&pjob->ji_qs, sizeof(pjob->ji_qs), pjob->qs_hash);\n\n\tpjob->newobj = 0;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tSave job to database\n *\n * @param[in]\tpjob - The job to save\n *\n * @return      Error code\n * @retval\t 0 - Success\n * @retval\t-1 - Failure\n * @retval\t 1 - Jobid clash, retry with new jobid\n *\n */\nint\njob_save_db(job *pjob)\n{\n\tpbs_db_job_info_t dbjob = {{0}};\n\tpbs_db_obj_info_t obj;\n\tvoid *conn = svr_db_conn;\n\tint savetype;\n\tint rc = -1;\n\tint old_mtime, old_flags;\n\tchar *conn_db_err = NULL;\n\n\told_mtime = get_jattr_long(pjob, JOB_ATR_mtime);\n\told_flags = (get_jattr(pjob, JOB_ATR_mtime))->at_flags;\n\n\tif ((savetype = job_to_db(pjob, &dbjob)) == -1)\n\t\tgoto done;\n\n\tobj.pbs_db_obj_type = PBS_DB_JOB;\n\tobj.pbs_db_un.pbs_db_job = &dbjob;\n\n\t/* update mtime before save, so the same value gets to the DB as well */\n\tset_jattr_l_slim(pjob, JOB_ATR_mtime, time_now, SET);\n\tif ((rc = pbs_db_save_obj(conn, &obj, savetype)) == 0)\n\t\tpjob->newobj = 0;\n\ndone:\n\tfree_db_attr_list(&dbjob.db_attr_list);\n\n\tif (rc != 0) {\n\t\t/* revert mtime, flags update */\n\t\tset_jattr_l_slim(pjob, JOB_ATR_mtime, old_mtime, SET);\n\t\t(get_jattr(pjob, JOB_ATR_mtime))->at_flags = old_flags;\n\n\t\tpbs_db_get_errmsg(PBS_DB_ERR, &conn_db_err);\n\t\tlog_errf(PBSE_INTERNAL, __func__, \"Failed to save job %s %s\", pjob->ji_qs.ji_jobid, conn_db_err ? 
conn_db_err : \"\");\n\t\tif (conn_db_err) {\n\t\t\tif ((savetype & OBJ_SAVE_NEW) && strstr(conn_db_err, \"duplicate key value\"))\n\t\t\t\trc = 1;\n\t\t\tfree(conn_db_err);\n\t\t}\n\n\t\tif (rc == -1)\n\t\t\tpanic_stop_db();\n\t}\n\n\treturn (rc);\n}\n\n/**\n * @brief\n *\tUtility function called inside job_recov_db\n *\n * @param[in]\tdbjob - Pointer to the database structure of a job\n * @param[in]   pjob  - Pointer to job structure to populate\n *\n * @retval\t NULL - Failure\n * @retval\t!NULL - Success, pointer to job structure recovered\n *\n */\njob *\njob_recov_db_spl(pbs_db_job_info_t *dbjob, job *pjob)\n{\n\tjob *pj = NULL;\n\n\tif (!pjob) {\n\t\tpj = job_alloc();\n\t\tpjob = pj;\n\t}\n\n\tif (pjob) {\n\t\tif (db_to_job(pjob, dbjob) == 0)\n\t\t\treturn (pjob);\n\t}\n\n\t/* error case */\n\tif (pj)\n\t\tjob_free(pj); /* free if we allocated here */\n\n\tlog_errf(PBSE_INTERNAL, __func__, \"Failed to decode job %s\", dbjob->ji_jobid);\n\n\treturn (NULL);\n}\n\n/**\n * @brief\n *\tRecover job from database\n *\n * @param[in]\tjid - Job id of job to recover\n * @param[in]\tpjob - job pointer, if any, to be updated\n *\n * @return      The recovered job\n * @retval\t NULL - Failure\n * @retval\t!NULL - Success, pointer to job structure recovered\n *\n */\njob *\njob_recov_db(char *jid, job *pjob)\n{\n\tpbs_db_job_info_t dbjob = {{0}};\n\tpbs_db_obj_info_t obj;\n\tint rc = -1;\n\tvoid *conn = svr_db_conn;\n\tchar *conn_db_err = NULL;\n\n\tstrcpy(dbjob.ji_jobid, jid);\n\tobj.pbs_db_obj_type = PBS_DB_JOB;\n\tobj.pbs_db_un.pbs_db_job = &dbjob;\n\n\trc = pbs_db_load_obj(conn, &obj);\n\tif (rc == -2)\n\t\treturn pjob; /* no change in job, return the same job */\n\n\tif (rc == 0)\n\t\tpjob = job_recov_db_spl(&dbjob, pjob);\n\telse {\n\t\tpbs_db_get_errmsg(PBS_DB_ERR, &conn_db_err);\n\t\tlog_errf(PBSE_INTERNAL, __func__, \"Failed to load job %s %s\", jid, conn_db_err ? 
conn_db_err : \"\");\n\t\tfree(conn_db_err);\n\t}\n\n\tfree_db_attr_list(&dbjob.db_attr_list);\n\n\treturn (pjob);\n}\n\n/**\n * @brief\n *\t\tconvert resv structure to DB format\n *\n * @see\n * \t\tresv_save_db\n *\n * @param[in]\tpresv - Address of the resv in the server\n * @param[out]  dbresv - Address of the database resv object\n *\n * @retval   -1  Failure\n * @retval   >=0 What to save: 0=nothing, OBJ_SAVE_NEW or OBJ_SAVE_QS\n */\nstatic int\nresv_to_db(resc_resv *presv, pbs_db_resv_info_t *dbresv)\n{\n\tint savetype = 0;\n\n\tstrcpy(dbresv->ri_resvid, presv->ri_qs.ri_resvID);\n\n\tif ((encode_attr_db(resv_attr_def, presv->ri_wattr, (int) RESV_ATR_LAST, &(dbresv->db_attr_list), 0)) != 0)\n\t\treturn -1;\n\n\tif (presv->newobj) /* object was never saved or loaded before */\n\t\tsavetype |= (OBJ_SAVE_NEW | OBJ_SAVE_QS);\n\n\tif (compare_obj_hash(&presv->ri_qs, sizeof(presv->ri_qs), presv->qs_hash) == 1) {\n\t\tsavetype |= OBJ_SAVE_QS;\n\n\t\tstrcpy(dbresv->ri_queue, presv->ri_qs.ri_queue);\n\t\tdbresv->ri_duration = presv->ri_qs.ri_duration;\n\t\tdbresv->ri_etime = presv->ri_qs.ri_etime;\n\t\tdbresv->ri_state = presv->ri_qs.ri_state;\n\t\tdbresv->ri_stime = presv->ri_qs.ri_stime;\n\t\tdbresv->ri_substate = presv->ri_qs.ri_substate;\n\t\tdbresv->ri_svrflags = presv->ri_qs.ri_svrflags;\n\t\tdbresv->ri_tactive = presv->ri_qs.ri_tactive;\n\t}\n\n\treturn savetype;\n}\n\n/**\n * @brief\n *\t\tconvert from database to resv structure\n *\n * @param[out]\tpresv - Address of the resv in the server\n * @param[in]\tdbresv - Address of the database resv object\n *\n * @retval   !=0  Failure\n * @retval   0    Success\n */\nstatic int\ndb_to_resv(resc_resv *presv, pbs_db_resv_info_t *dbresv)\n{\n\tstrcpy(presv->ri_qs.ri_resvID, dbresv->ri_resvid);\n\tstrcpy(presv->ri_qs.ri_queue, dbresv->ri_queue);\n\tpresv->ri_qs.ri_duration = dbresv->ri_duration;\n\tpresv->ri_qs.ri_etime = dbresv->ri_etime;\n\tpresv->ri_qs.ri_state = dbresv->ri_state;\n\tpresv->ri_qs.ri_stime = 
dbresv->ri_stime;\n\tpresv->ri_qs.ri_substate = dbresv->ri_substate;\n\tpresv->ri_qs.ri_svrflags = dbresv->ri_svrflags;\n\tpresv->ri_qs.ri_tactive = dbresv->ri_tactive;\n\n\tif ((decode_attr_db(presv, &dbresv->db_attr_list.attrs, resv_attr_idx, resv_attr_def, presv->ri_wattr, RESV_ATR_LAST, RESV_ATR_UNKN)) != 0)\n\t\treturn -1;\n\n\tcompare_obj_hash(&presv->ri_qs, sizeof(presv->ri_qs), presv->qs_hash);\n\n\tpresv->newobj = 0;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tSave resv to database\n *\n * @param[in]\tpresv - The resv to save\n *\n * @return      Error code\n * @retval\t 0 - Success\n * @retval\t-1 - Failure\n * @retval\t 1 - resvid clash, retry with new resvid\n *\n */\nint\nresv_save_db(resc_resv *presv)\n{\n\tpbs_db_resv_info_t dbresv = {{0}};\n\tpbs_db_obj_info_t obj;\n\tvoid *conn = svr_db_conn;\n\tint savetype;\n\tint rc = -1;\n\tint old_mtime, old_flags;\n\tchar *conn_db_err = NULL;\n\tattribute *mtime;\n\n\tmtime = get_rattr(presv, RESV_ATR_mtime);\n\told_mtime = get_attr_l(mtime);\n\told_flags = mtime->at_flags;\n\n\tif ((savetype = resv_to_db(presv, &dbresv)) == -1)\n\t\tgoto done;\n\n\tobj.pbs_db_obj_type = PBS_DB_RESV;\n\tobj.pbs_db_un.pbs_db_resv = &dbresv;\n\n\t/* update mtime before save, so the same value gets to the DB as well */\n\tset_rattr_l_slim(presv, RESV_ATR_mtime, time_now, SET);\n\tif ((rc = pbs_db_save_obj(conn, &obj, savetype)) == 0)\n\t\tpresv->newobj = 0;\n\ndone:\n\tfree_db_attr_list(&dbresv.db_attr_list);\n\n\tif (rc != 0) {\n\t\tset_attr_l(mtime, old_mtime, SET);\n\t\tmtime->at_flags = old_flags;\n\n\t\tpbs_db_get_errmsg(PBS_DB_ERR, &conn_db_err);\n\t\tlog_errf(PBSE_INTERNAL, __func__, \"Failed to save resv %s %s\", presv->ri_qs.ri_resvID, conn_db_err ? 
conn_db_err : \"\");\n\t\tif (conn_db_err) {\n\t\t\tif ((savetype & OBJ_SAVE_NEW) && strstr(conn_db_err, \"duplicate key value\"))\n\t\t\t\trc = 1;\n\t\t\tfree(conn_db_err);\n\t\t}\n\n\t\tif (rc == -1)\n\t\t\tpanic_stop_db();\n\t}\n\n\treturn (rc);\n}\n\n/**\n * @brief\n *\tRecover resv from database\n *\n * @param[in]\tresvid - Resv id to recover\n * @param[in]\tpresv - Resv pointer, if any, to be updated\n *\n * @return      The recovered reservation\n * @retval\t NULL - Failure\n * @retval\t!NULL - Success, pointer to resv structure recovered\n *\n */\nresc_resv *\nresv_recov_db(char *resvid, resc_resv *presv)\n{\n\tresc_resv *pr = NULL;\n\tpbs_db_resv_info_t dbresv = {{0}};\n\tpbs_db_obj_info_t obj;\n\tvoid *conn = svr_db_conn;\n\tint rc = -1;\n\tchar *conn_db_err = NULL;\n\n\tif (!presv) {\n\t\tif ((pr = resv_alloc(resvid)) == NULL) {\n\t\t\tlog_err(-1, __func__, \"resv_alloc failed\");\n\t\t\treturn NULL;\n\t\t}\n\t\tpresv = pr;\n\t}\n\n\tstrcpy(dbresv.ri_resvid, resvid);\n\tobj.pbs_db_obj_type = PBS_DB_RESV;\n\tobj.pbs_db_un.pbs_db_resv = &dbresv;\n\n\trc = pbs_db_load_obj(conn, &obj);\n\tif (rc == -2)\n\t\treturn presv; /* no change in resv */\n\n\tif (rc == 0)\n\t\trc = db_to_resv(presv, &dbresv);\n\n\tfree_db_attr_list(&dbresv.db_attr_list);\n\n\tif (rc != 0) {\n\t\tpresv = NULL; /* so we return NULL */\n\n\t\tpbs_db_get_errmsg(PBS_DB_ERR, &conn_db_err);\n\t\tlog_errf(PBSE_INTERNAL, __func__, \"Failed to load resv %s %s\", resvid, conn_db_err ? 
conn_db_err : \"\");\n\t\tfree(conn_db_err);\n\t\tif (pr)\n\t\t\tresv_free(pr); /* free if we allocated here */\n\t}\n\n\treturn presv;\n}\n\n/**\n * @brief\n *\tRefresh/retrieve job from database and add it into AVL tree if not present\n *\n *\t@param[in]  dbobj     - The pointer to the wrapper job object of type pbs_db_job_info_t\n * \t@param[out]  refreshed - To check if job is refreshed\n *\n * @return\tThe recovered job\n * @retval\tNULL - Failure\n * @retval\t!NULL - Success, pointer to job structure recovered\n *\n */\njob *\nrecov_job_cb(pbs_db_obj_info_t *dbobj, int *refreshed)\n{\n\tjob *pj = NULL;\n\tpbs_db_job_info_t *dbjob = dbobj->pbs_db_un.pbs_db_job;\n\tstatic int numjobs = 0;\n\n\t*refreshed = 0;\n\tif ((pj = job_recov_db_spl(dbjob, NULL)) == NULL) {\n\t\tif ((server_init_type == RECOV_COLD) || (server_init_type == RECOV_CREATE)) {\n\t\t\t/* remove the loaded job from db */\n\t\t\tif (pbs_db_delete_obj(svr_db_conn, dbobj) != 0)\n\t\t\t\tlog_errf(PBSE_SYSTEM, __func__, \"job %s not purged\", dbjob->ji_jobid);\n\t\t}\n\t\tgoto err;\n\t}\n\n\tpbsd_init_job(pj, server_init_type);\n\t*refreshed = 1;\n\n\tif ((++numjobs % 20) == 0) {\n\t\t/* periodically touch the file so the  */\n\t\t/* world knows we are alive and active */\n\t\tupdate_svrlive();\n\t}\n\nerr:\n\tfree_db_attr_list(&dbjob->db_attr_list);\n\tif (pj == NULL)\n\t\tlog_errf(PBSE_SYSTEM, __func__, \"Failed to recover job %s\", dbjob->ji_jobid);\n\treturn pj;\n}\n\n/**\n * @brief\n * \t\trecov_resv_cb - callback function to process and load\n * \t\t\t\t\t  resv database result to pbs structure.\n *\n * @param[in]\tdbobj\t- database resv structure to C.\n * @param[out]\trefreshed - To check if reservation recovered\n *\n * @return\tresv structure - on success\n * @return \tNULL - on failure\n */\nresc_resv *\nrecov_resv_cb(pbs_db_obj_info_t *dbobj, int *refreshed)\n{\n\tresc_resv *presv = NULL;\n\tpbs_db_resv_info_t *dbresv = dbobj->pbs_db_un.pbs_db_resv;\n\tint load_type = 0;\n\n\t*refreshed = 
0;\n\t/* if resv is not in list, load the resv from database */\n\tif ((presv = resv_recov_db(dbresv->ri_resvid, NULL)) == NULL)\n\t\tgoto err;\n\n\tpbsd_init_resv(presv, load_type);\n\t*refreshed = 1;\nerr:\n\tfree_db_attr_list(&dbresv->db_attr_list);\n\tif (presv == NULL)\n\t\tlog_errf(-1, __func__, \"Failed to recover resv %s\", dbresv->ri_resvid);\n\treturn presv;\n}\n"
  },
  {
    "path": "src/server/job_route.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tjob_route.c\n * @brief\n * \t\tjob_route.c - functions to route a job to another queue\n *\n * Included functions are:\n *\n *\tjob_route() - attempt to route a job to a new destination.\n *\tadd_dest()\t- Add an entry to the list of bad destinations for a job.\n *\tis_bad_dest()\t- Check the job for a match of dest in the list of rejected destinations.\n *\tdefault_router()\t- basic function for \"routing\" jobs.\n *\tqueue_route()\t- route any \"ready\" jobs in a specific queue\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <sys/types.h>\n\n#include \"pbs_ifl.h\"\n#include <errno.h>\n#include <string.h>\n#include <stdlib.h>\n#include \"pbs_error.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"server_limits.h\"\n#include \"work_task.h\"\n#include \"server.h\"\n#include \"log.h\"\n#include \"credential.h\"\n#include \"libpbs.h\"\n#include \"batch_request.h\"\n#include \"resv_node.h\"\n#include \"queue.h\"\n\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include <memory.h>\n\n/* Local Functions */\n\n/* Global Data */\n\nextern char *msg_badstate;\nextern char *msg_routexceed;\nextern char *msg_routebad;\nextern char *msg_err_malloc;\nextern time_t time_now;\n\n/**\n * @brief\n * \t\tAdd an 
entry to the list of bad destinations for a job.\n *\n * @see\n * \t\tdefault_router and post_routejob.\n *\n *\t@param[in]\tjobp - pointer to job structure\n *\n * @return\tvoid\n */\n\nvoid\nadd_dest(job *jobp)\n{\n\tbadplace *bp;\n\tchar *baddest = jobp->ji_qs.ji_destin;\n\n\tbp = (badplace *) malloc(sizeof(badplace));\n\tif (bp == NULL) {\n\t\tlog_err(errno, __func__, msg_err_malloc);\n\t\treturn;\n\t}\n\tCLEAR_LINK(bp->bp_link);\n\n\tstrcpy(bp->bp_dest, baddest);\n\n\tappend_link(&jobp->ji_rejectdest, &bp->bp_link, bp);\n\treturn;\n}\n\n/**\n * @brief\n * \t\tCheck the job for a match of dest in the list of rejected destinations.\n *\n * @see\n * \t\tdefault_router\n *\n * @param[in]\tjobp - pointer to job structure\n * @param[in]\tdest - destination which needs to be matched.\n *\n *\tReturn: pointer if found, NULL if not.\n */\n\nbadplace *\nis_bad_dest(job *jobp, char *dest)\n{\n\tbadplace *bp;\n\n\tbp = (badplace *) GET_NEXT(jobp->ji_rejectdest);\n\twhile (bp) {\n\t\tif (strcmp(bp->bp_dest, dest) == 0)\n\t\t\tbreak;\n\t\tbp = (badplace *) GET_NEXT(bp->bp_link);\n\t}\n\treturn (bp);\n}\n\n/**\n * @brief\n * \t\tdefault_router - basic function for \"routing\" jobs.\n *\t\tDoes a round-robin attempt on the destinations as listed,\n *\t\tjob goes to first destination that takes it.\n *\n *\t\tIf no destination will accept the job, PBSE_ROUTEREJ is returned,\n *\t\totherwise 0 is returned.\n *\n * @see\n * \t\tsite_alt_router and job_route.\n *\n * @param[in,out]\tjobp - pointer to job structure\n * @param[in]\tqp - PBS queue.\n * @param[in]\tretry_time - retry time before each attempt.\n *\n * @return\tint\n * @retval\t0\t- success\n * @retval\tPBSE_ROUTEREJ\t- If no destination will accept the job\n */\n\nint\ndefault_router(job *jobp, struct pbs_queue *qp, long retry_time)\n{\n\tstruct array_strings *dests = NULL;\n\tchar *destination;\n\tint last;\n\n\tif (is_qattr_set(qp, QR_ATR_RouteDestin)) {\n\t\tdests = get_qattr_arst(qp, QR_ATR_RouteDestin);\n\t\tlast 
= dests->as_usedptr;\n\t} else\n\t\tlast = 0;\n\n\t/* loop through all possible destinations */\n\n\twhile (1) {\n\t\tif (jobp->ji_lastdest >= last) {\n\t\t\tjobp->ji_lastdest = 0; /* have tried all */\n\t\t\tif (jobp->ji_retryok == 0) {\n\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t  LOG_DEBUG,\n\t\t\t\t\t  jobp->ji_qs.ji_jobid, msg_routebad);\n\t\t\t\treturn (PBSE_ROUTEREJ);\n\t\t\t} else {\n\n\t\t\t\t/* set time to retry job */\n\t\t\t\tjobp->ji_qs.ji_un.ji_routet.ji_rteretry = retry_time;\n\t\t\t\tjobp->ji_retryok = 0;\n\t\t\t\treturn (0);\n\t\t\t}\n\t\t}\n\n\t\tdestination = dests->as_string[jobp->ji_lastdest++];\n\n\t\tif (is_bad_dest(jobp, destination))\n\t\t\tcontinue;\n\n\t\tswitch (svr_movejob(jobp, destination, NULL)) {\n\n\t\t\tcase -1: /* permanent failure */\n\t\t\t\tadd_dest(jobp);\n\t\t\t\tbreak;\n\n\t\t\tcase 0: /* worked */\n\t\t\tcase 2: /* deferred */\n\t\t\t\treturn (0);\n\n\t\t\tcase 1: /* failed, but try destination again */\n\t\t\t\tjobp->ji_retryok = 1;\n\t\t\t\tbreak;\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\tjob_route - route a job to another queue\n *\n * \t\tThis is only called for jobs in a routing queue.\n * \t\tLoop over all the possible destinations for the route queue.\n * \t\tCheck each one to see if it is ok to try it.  It could have been\n * \t\ttried before and returned a rejection.  If so, skip to the next\n * \t\tdestination.  If it is ok to try it, look to see if it is a local\n * \t\tqueue.  
If so, it is an internal procedure to try/do the move.\n * \t\tIf not, a child process is created to deal with it in the\n * \t\tfunction net_route(), see svr_movejob.c\n *\n * @see\n * \t\tqueue_route\n *\n * @param[in]\tjobp - pointer to job structure\n *\n *@return\tint\n *@retval\t0\t- success\n *@retval\tnon-zero\t- failure\n */\n\nint\njob_route(job *jobp)\n{\n\tint bad_state = 0;\n\ttime_t life;\n\tstruct pbs_queue *qp;\n\tlong retry_time;\n\n\t/* see if the job is able to be routed */\n\n\tswitch (get_job_state(jobp)) {\n\n\t\tcase JOB_STATE_LTR_TRANSIT:\n\t\t\treturn (0); /* already going, ignore it */\n\n\t\tcase JOB_STATE_LTR_QUEUED:\n\t\t\tbreak; /* ok to try */\n\n\t\tcase JOB_STATE_LTR_HELD:\n\t\t\tbad_state = !get_qattr_long(jobp->ji_qhdr, QR_ATR_RouteHeld);\n\t\t\tbreak;\n\n\t\tcase JOB_STATE_LTR_WAITING:\n\t\t\tbad_state = !get_qattr_long(jobp->ji_qhdr, QR_ATR_RouteWaiting);\n\t\t\tbreak;\n\n\t\tcase JOB_STATE_LTR_MOVED:\n\t\tcase JOB_STATE_LTR_FINISHED:\n\t\t\t/*\n\t\t\t * If the job in ROUTE_Q is already deleted (ji_state ==\n\t\t\t * JOB_STATE_LTR_FINISHED) or routed (ji_state == JOB_STATE_LTR_MOVED)\n\t\t\t * and kept for history purpose, then ignore it until being\n\t\t\t * cleaned up by SERVER.\n\t\t\t */\n\t\t\treturn (0);\n\n\t\tdefault:\n\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\t   jobp->ji_qs.ji_jobid, \"(%s) %s, state=%d\",\n\t\t\t\t   __func__, msg_badstate, get_job_state(jobp));\n\t\t\treturn (0);\n\t}\n\n\t/* check the queue limits, can we route any (more) */\n\n\tqp = jobp->ji_qhdr;\n\tif (get_qattr_long(qp, QA_ATR_Started) == 0)\n\t\treturn (0); /* queue not started - no routing */\n\n\tif (is_qattr_set(qp, QA_ATR_MaxRun) && get_qattr_long(qp, QA_ATR_MaxRun) <= qp->qu_njstate[JOB_STATE_TRANSIT])\n\t\treturn (0); /* max number of jobs being routed */\n\n\t/* what is the retry time and life time of a job in this queue */\n\n\tif (is_qattr_set(qp, QR_ATR_RouteRetryTime))\n\t\tretry_time = (long) time_now 
+ get_qattr_long(qp, QR_ATR_RouteRetryTime);\n\telse\n\t\tretry_time = (long) time_now + PBS_NET_RETRY_TIME;\n\n\tif (is_qattr_set(qp, QR_ATR_RouteLifeTime))\n\t\tlife = jobp->ji_qs.ji_un.ji_routet.ji_quetime + get_qattr_long(qp, QR_ATR_RouteLifeTime);\n\telse\n\t\tlife = 0; /* forever */\n\n\tif (life && (life < time_now)) {\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t  jobp->ji_qs.ji_jobid, msg_routexceed);\n\t\treturn (PBSE_ROUTEEXPD); /* job too long in queue */\n\t}\n\n\tif (bad_state)\t    /* not currently routing this job */\n\t\treturn (0); /* else ignore this job */\n\n\tif (get_qattr_long(qp, QR_ATR_AltRouter) == 0)\n\t\treturn (default_router(jobp, qp, retry_time));\n\telse\n\t\treturn (site_alt_router(jobp, qp, retry_time));\n}\n\n/**\n * @brief\n * \t\tqueue_route - route any \"ready\" jobs in a specific queue\n *\n *\t\tlook for any job in the queue whose route retry time has\n *\t\tpassed.\n\n *\t\tIf the queue is \"started\" and if the number of jobs in the\n *\t\tTransiting state is less than the max_running limit, then\n *\t\tattempt to route it.\n *\n * @see\n * \t\tmain\n *\n * @param[in]\tpque\t- PBS queue.\n *\n * @return\tvoid\n */\n\nvoid\nqueue_route(pbs_queue *pque)\n{\n\tjob *nxjb;\n\tjob *pjob;\n\tint rc;\n\n\tpjob = (job *) GET_NEXT(pque->qu_jobs);\n\twhile (pjob) {\n\t\tnxjb = (job *) GET_NEXT(pjob->ji_jobque);\n\t\tif (pjob->ji_qs.ji_un.ji_routet.ji_rteretry <= time_now) {\n\t\t\tif ((rc = job_route(pjob)) == PBSE_ROUTEREJ)\n\t\t\t\tjob_abt(pjob, msg_routebad);\n\t\t\telse if (rc == PBSE_ROUTEEXPD)\n\t\t\t\tjob_abt(pjob, msg_routexceed);\n\t\t}\n\t\tpjob = nxjb;\n\t}\n}\n"
  },
  {
    "path": "src/server/license_client.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tlicense_client.c\n * @brief\n *  This file contains stub functions\n * which are not used in the open source.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include \"pbs_internal.h\"\n#include <sys/types.h>\n#include \"pbs_nodes.h\"\n#include \"pbs_license.h\"\n\n#define LICSTATE_SERVER_UNCONF 0x1  /* no license server configured */\n#define LICSTATE_HAS_SERVER 0x2\t    /* license server reachable */\n#define LICSTATE_SOCKETS_UNCONF 0x4 /* no socket license file configured */\n#define LICSTATE_HAS_SOCKETS 0x8    /* nonzero number of socket licenses */\n\nenum licensing_backend prev_lb = LIC_UNKNOWN; /* Value of previous licensing backend. 
*/\nenum licensing_backend last_valid_attempt = LIC_UNKNOWN;\n\nint pbs_licensing_checkin(void);\nint pbs_checkout_licensing(int);\nchar *pbs_license_location(void);\nvoid inspect_license_path(void);\nint licstate_is_configured(enum licensing_backend);\nint licstate_is_up(enum licensing_backend);\n\n/**\n * @brief\n *\t\tpbs_licensing_status\t- It's a placeholder function\n * \t\twhich has intentionally been kept empty.\n * @return\tLICSTATE_HAS_SOCKETS\n */\nint\npbs_licensing_status(void)\n{\n\treturn (LICSTATE_HAS_SOCKETS);\n}\n\n/**\n * @brief\n *\t\tpbs_licensing_count\t- It's a placeholder function\n * \t\twhich has intentionally been kept empty.\n * @return\tlicenses.lb_aval_floating\n */\nint\npbs_licensing_count(void)\n{\n\treturn (licenses.lb_aval_floating);\n}\n\n/**\n * @brief\n *\t\tpbs_open_con_licensing\t- It's a placeholder function\n * \t\twhich has intentionally been kept empty.\n * @return\tzero\n */\nint\npbs_open_con_licensing(void)\n{\n\treturn (0);\n}\n\n/**\n * @brief\n *\t\tpbs_close_con_licensing\t- It's a placeholder function\n * \t\twhich has intentionally been kept empty.\n * @return\tvoid\n */\nvoid\npbs_close_con_licensing(void)\n{\n}\n\n/**\n * @brief\n *\t\tpbs_licensing_checkin\t- It's a placeholder function\n * \t\twhich has intentionally been kept empty.\n * @return\tzero\n */\nint\npbs_licensing_checkin(void)\n{\n\treturn (0);\n}\n\n/**\n * @brief\n *\t\tpbs_checkout_licensing\t- It's a placeholder function\n * \t\twhich has intentionally been kept empty.\n * @return\tneed\n */\nint\npbs_checkout_licensing(int need)\n{\n\treturn (need);\n}\n\n/**\n * @brief\n *\t\tpbs_license_location\t- It's a placeholder function\n * \t\twhich has intentionally been kept empty.\n * @return\tpbs_licensing_license_location\n */\nchar *\npbs_license_location(void)\n{\n\treturn (pbs_licensing_license_location);\n}\n\n/**\n * @brief\n *\t\tinspect_license_path\t- It's a placeholder function\n * \t\twhich has intentionally been kept empty.\n * @return\tvoid\n */\nvoid\ninspect_license_path(void)\n{\n}\n\n/**\n * @brief\n 
*\t\tinit_socket_licenses\t- It's a placeholder function\n * \t\twhich has intentionally been kept empty.\n * @return\tvoid\n */\nvoid\ninit_socket_licenses(char *license_file)\n{\n}\n\n/**\n * @brief\n *\t\tsockets_release\t- It's a placeholder function\n * \t\twhich has intentionally been kept empty.\n * @return\tvoid\n */\nvoid\nsockets_release(int nsockets)\n{\n\treturn;\n}\n\n/**\n * @brief\n *\t\tsockets_consume\t- It's a placeholder function\n * \t\twhich has intentionally been kept empty.\n * @return\tzero\n */\nint\nsockets_consume(int nsockets)\n{\n\treturn (0);\n}\n\n/**\n * @brief\n *\t\tlicstate_unconfigured\t- It's a placeholder function\n * \t\twhich has intentionally been kept empty.\n */\nvoid\nlicstate_unconfigured(enum licensing_backend lb)\n{\n}\n\n/**\n * @brief\n *\t\tlicstate_down\t- It's a placeholder function\n * \t\twhich has intentionally been kept empty.\n */\nvoid\nlicstate_down(void)\n{\n}\n\n/**\n * @brief\n *\t\tlicstate_is_configured\t- It's a placeholder function\n * \t\twhich has intentionally been kept empty.\n * @return\tLICSTATE_HAS_SOCKETS\n */\nint\nlicstate_is_configured(enum licensing_backend lb)\n{\n\treturn (LICSTATE_HAS_SOCKETS);\n}\n\n/**\n * @brief\n *\t\tlicstate_is_up\t- It's a placeholder function\n * \t\twhich has intentionally been kept empty.\n * @return\tone\n */\nint\nlicstate_is_up(enum licensing_backend lb)\n{\n\treturn (1);\n}\n\n/**\n * @brief\n *\t\tlicense_sanity_check\t- It's a placeholder function\n * \t\twhich has intentionally been kept empty.\n * @return\tzero\n */\nint\nlicense_sanity_check(void)\n{\n\treturn (0);\n}\n\n/**\n * @brief\n *\t\tlicense_more_nodes\t- It's a placeholder function\n * \t\twhich has intentionally been kept empty.\n * @return\tvoid\n */\nvoid\nlicense_more_nodes(void)\n{\n\treturn;\n}\n\n/**\n * @brief\n *\t\tpropagate_socket_licensing\t- It's a placeholder function\n * \t\twhich has intentionally been kept empty.\n * @return\tvoid\n */\nvoid\npropagate_socket_licensing(mominfo_t *pmom)\n{\n\treturn;\n}\n\n/**\n * @brief\n 
*\t\tnsockets_from_topology\t- It's a placeholder function\n * \t\twhich has intentionally been kept empty.\n * @return\tzero\n */\nint\nnsockets_from_topology(char *topology_str, ntt_t type, struct pbsnode *pnode)\n{\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tunlicense_socket_licensed_nodes\t- It's a placeholder function\n * \t\twhich has intentionally been kept empty.\n * @return\tvoid\n */\nvoid\nunlicense_socket_licensed_nodes(void)\n{\n\treturn;\n}\n\n/**\n * @brief\n *\t\tclear_license_info\t- It's a placeholder function\n * \t\twhich has intentionally been kept empty.\n * @return\tvoid\n */\nvoid\nclear_license_info(void)\n{\n\treturn;\n}\n\n/**\n * @brief\n *\t\trelease_node_lic\t- It's a placeholder function\n * \t\twhich has intentionally been kept empty.\n * @return zero\n */\nint\nrelease_node_lic(void *pobj)\n{\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tvalidate_sign\t- It's a placeholder function\n * \t\twhich has intentionally been kept empty.\n * @return True\n */\nint\nvalidate_sign(char *sign, void *pobj)\n{\n\treturn 1;\n}\n\n/**\n * @brief\n *\t\tcheck_sign\t- It's a placeholder function\n * \t\twhich has intentionally been kept empty.\n * @return zero\n */\nint\ncheck_sign(void *pobj, void *new)\n{\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tprocess_topology_info\t- It's a placeholder function\n * \t\twhich has intentionally been kept empty.\n * @return\tvoid\n */\nvoid\nprocess_topology_info(void *pobj, char *topology_str, ntt_t type)\n{\n\treturn;\n}\n\n/**\n * @brief\n *\t\tunset_signature\t- It's a placeholder function\n * \t\twhich has intentionally been kept empty.\n * @return\tvoid\n */\nvoid\nunset_signature(void *pobj, char *rs_name)\n{\n\treturn;\n}\n"
  },
  {
    "path": "src/server/licensing_func.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    licensing_func.c\n *\n * @brief\n * \t\tlicensing_func.c - miscellaneous server functions\n *\n * Functions included are:\n *\n */\n\n#include <pbs_config.h>\n\n#include <time.h>\n#include \"pbs_nodes.h\"\n#include \"pbs_license.h\"\n#include \"server.h\"\n#include \"work_task.h\"\n#include \"liblicense.h\"\n#include \"svrfunc.h\"\n\n#define TEMP_BUF_LEN 20\n\npbs_licensing_control licensing_control;\npbs_license_counts license_counts;\npbs_list_head unlicensed_nodes_list;\nstruct work_task *init_licensing_task;\nstruct work_task *get_more_licenses_task;\nstruct work_task *licenses_linger_time_task;\nextern time_t time_now;\n/**\n * @brief\n * \tconsume_licenses - use count licenses from the pool of\n * \t\t\t   already checked out licenses.\n *\n * @param[in]\t- count\t- number of licenses to consume\n *\n * @return int\n * @retval  0 - success - able to consume count licenses\n * @retval -1 - not enough licenses available.\n */\nstatic int\nconsume_licenses(long count)\n{\n\tif (count <= license_counts.licenses_local) {\n\t\tlicense_counts.licenses_local -= count;\n\t\tlicense_counts.licenses_used += count;\n\t\treturn 0;\n\t} else\n\t\treturn -1;\n}\n\n/**\n * @brief\n * \treturn_licenses - return count licenses back to the\n * \t\t\t  pool of already checked out licenses.\n *\n * @param[in]\t- count\t- number of licenses 
to return\n *\n * @return void\n */\nstatic void\nreturn_licenses(long count)\n{\n\tlicense_counts.licenses_local += count;\n\tlicense_counts.licenses_used -= count;\n}\n\n/**\n * @brief\n * \tadd_to_unlicensed_node_list - add a node to the list\n * \t\t\t\t      of unlicensed nodes\n *\n * @param[in]\t- pnode - pointer to the node\n *\n * @return void\n */\nstatic void\nadd_to_unlicensed_node_list(struct pbsnode *pnode)\n{\n\n\tif (pnode->nd_svrflags & NODE_UNLICENSED)\n\t\treturn;\n\n\tCLEAR_LINK(pnode->un_lic_link);\n\n\tappend_link(&unlicensed_nodes_list, &pnode->un_lic_link, pnode);\n\n\tpnode->nd_svrflags |= NODE_UNLICENSED;\n}\n\n/**\n * @brief\n * \tremove_from_unlicensed_node_list - remove a node from\n * \t\t\t\t\t   the list of\n * \t\t\t\t\t   unlicensed nodes.\n *\n * @param[in]\t- pnode\t- pointer to the node\n *\n * @return void\n */\nvoid\nremove_from_unlicensed_node_list(struct pbsnode *pnode)\n{\n\tif (!(pnode->nd_svrflags & NODE_UNLICENSED))\n\t\treturn;\n\n\tpnode->nd_svrflags &= ~NODE_UNLICENSED;\n\tdelete_link(&pnode->un_lic_link);\n}\n\n/**\n * @brief\n *\tdistribute_licenseinfo - for Cray, all the inventory is reported by the first vnode,\n *\t\tso it has to be distributed to subsidiary vnodes.\n *\t\tThe distribution may not be even but we are trying our best.\n *\n * @param[in]\tpmom - pointer to mominfo_t\n * @param[in]\tlic_count - total license count to be distributed.\n *\n * @return\tvoid\n *\n * @par MT-Safe:\tno\n */\nstatic void\ndistribute_licenseinfo(mominfo_t *pmom, int lic_count)\n{\n\tint i;\n\tpbsnode *pnode = NULL;\n\tint numvnds = ((mom_svrinfo_t *) pmom->mi_data)->msr_numvnds;\n\tint lic_rem = lic_count % (numvnds - 1);\n\n\tif (lic_count <= 0)\n\t\treturn;\n\n\tfor (i = 1; i < numvnds; i++) {\n\t\tpnode = ((mom_svrinfo_t *) pmom->mi_data)->msr_children[i];\n\t\tif (lic_rem) {\n\t\t\tset_nattr_l_slim(pnode, ND_ATR_LicenseInfo, ((lic_count / (numvnds - 1)) + 1), SET);\n\t\t\tlic_rem -= 1;\n\t\t} else {\n\t\t\tset_nattr_l_slim(pnode, 
ND_ATR_LicenseInfo, (lic_count / (numvnds - 1)), SET);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\t\tpropagate the ND_ATR_License == ND_LIC_TYPE_locked value to\n *\t\tsubsidiary vnodes\n *\n * @param[in]\tpmom - pointer to mominfo_t\n *\n * @return\tvoid\n *\n * @par MT-Safe:\tno\n * @par Side Effects:\n * \t\tsocket license attribute modifications\n *\n * @par Note:\n *\t\tNormally, a natural vnode's socket licensing state propagates\n *\t\tto the subsidiary vnodes.  However, this is not the case when\n *\t\tthe natural vnode represents a Cray login node:  Cray login\n *\t\tand compute nodes are licensed separately;  the socket licensing\n *\t\tstate propagates freely among a MoM's compute nodes but not from\n *\t\ta login node to any compute node.\n */\nvoid\npropagate_licenses_to_vnodes(mominfo_t *pmom)\n{\n\tstruct pbsnode *ptmp = /* pointer to natural vnode */\n\t\t((mom_svrinfo_t *) pmom->mi_data)->msr_children[0];\n\tresource_def *prdefvntype;\n\tresource *prc; /* vntype resource pointer */\n\tstruct array_strings *as;\n\tpbsnode *pfrom_Lic;   /* source License pointer */\n\tattribute *pfrom_RA;  /* source ResourceAvail pointer */\n\tint node_index_start; /* where we begin looking for socket licenses */\n\tint i;\n\tint lic_count;\n\n\t/* Any other vnodes? 
If not, no work to do */\n\tif (((mom_svrinfo_t *) pmom->mi_data)->msr_numvnds < 2)\n\t\treturn;\n\n\tprdefvntype = &svr_resc_def[RESC_VNTYPE];\n\n\t/*\n \t * Determine where to begin looking for socket licensed nodes:  if\n \t * the natural vnode is for a Cray login node, the important nodes\n \t * are those for Cray compute nodes, which begin after the\n \t * login node (which is always the natural vnode and therefore\n \t * always first);  otherwise, we start looking at the beginning.\n \t */\n\tpfrom_RA = get_nattr(ptmp, ND_ATR_ResourceAvail);\n\tif (((pfrom_RA->at_flags & ATR_VFLAG_SET) != 0) &&\n\t    ((prc = find_resc_entry(pfrom_RA, prdefvntype)) != NULL) &&\n\t    ((prc->rs_value.at_flags & ATR_VFLAG_SET) != 0)) {\n\t\t/*\n \t\t * Node has a ResourceAvail vntype entry;  see whether it\n \t\t * contains CRAY_LOGIN.\n \t\t */\n\t\tas = prc->rs_value.at_val.at_arst;\n\t\tfor (i = 0; i < as->as_usedptr; i++)\n\t\t\tif (strcmp(as->as_string[i], CRAY_LOGIN) == 0) {\n\t\t\t\tnode_index_start = 1;\n\t\t\t\tbreak;\n\t\t\t} else\n\t\t\t\tnode_index_start = 0;\n\t} else\n\t\tnode_index_start = 0;\n\n\t/*\n \t * Make a pass over the subsidiary vnodes to see whether any have socket\n \t * licenses;  if not, no work to do.\n \t */\n\tfor (i = node_index_start, pfrom_Lic = NULL, lic_count = 0;\n\t     i < ((mom_svrinfo_t *) pmom->mi_data)->msr_numvnds; i++) {\n\t\tpbsnode *n = ((mom_svrinfo_t *) pmom->mi_data)->msr_children[i];\n\n\t\tif (is_nattr_set(n, ND_ATR_LicenseInfo))\n\t\t\tlic_count = get_nattr_long(n, ND_ATR_LicenseInfo);\n\n\t\tif (is_nattr_set(n, ND_ATR_License) && get_nattr_c(n, ND_ATR_License) == ND_LIC_TYPE_locked) {\n\t\t\tpfrom_Lic = n;\n\t\t} else\n\t\t\tadd_to_unlicensed_node_list(n);\n\t}\n\tif (node_index_start)\n\t\tdistribute_licenseinfo(pmom, lic_count);\n\n\tif (pfrom_Lic == NULL)\n\t\treturn;\n\n\t/*\n \t * Now make another pass, this time updating the other vnodes'\n \t * ND_ATR_License attribute.\n \t */\n\tfor (i = node_index_start;\n\t     i 
< ((mom_svrinfo_t *) pmom->mi_data)->msr_numvnds; i++) {\n\t\tpbsnode *n = ((mom_svrinfo_t *) pmom->mi_data)->msr_children[i];\n\t\tset_nattr_c_slim(n, ND_ATR_License, ND_LIC_TYPE_locked, SET);\n\t\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_NODE,\n\t\t\t   LOG_DEBUG, pmom->mi_host, \"ND_ATR_License copied from %s to %s\",\n\t\t\t   pfrom_Lic->nd_name, n->nd_name);\n\t}\n}\n\n/**\n * @brief\n * \tclear_node_lic_attrs -\tclear a node's ND_ATR_License and maybe\n * \t\t\t\tND_ATR_LicenseInfo\n *\n * @param[in] - pnode - pointer to the node\n * @param[in] - clear_license_info - if ND_ATR_LicenseInfo should be cleared.\n *\n * @return void\n */\nvoid\nclear_node_lic_attrs(pbsnode *pnode, int clear_license_info)\n{\n\tif (clear_license_info && is_nattr_set(pnode, ND_ATR_LicenseInfo))\n\t\tclear_nattr(pnode, ND_ATR_LicenseInfo);\n\n\tif (is_nattr_set(pnode, ND_ATR_License)) {\n\t\tclear_nattr(pnode, ND_ATR_License);\n\t\tpnode->nd_svrflags &= ~NODE_UNLICENSED;\n\t}\n}\n\n/**\n * @brief\n * \tset_node_lic_info_attr\t- set node's license information\n * \t\t\t\t  ND_ATR_LicenseInfo\n *\n * @param - pnode - pointer to the node\n *\n * @return - void\n */\nvoid\nset_node_lic_info_attr(pbsnode *pnode)\n{\n\tint state = lic_needed_for_node(pnode->nd_lic_info);\n\n\tif (state == -3)\n\t\treturn;\n\telse {\n\t\tset_nattr_l_slim(pnode, ND_ATR_LicenseInfo, state, SET);\n\t\tnode_save_db(pnode);\n\t}\n}\n\n/**\n * @brief\n * \tcheck_license_expiry - \tchecks if licenses are about to expire,\n * \t\t\t\tand if so logs the warning message and\n * \t\t\t\tsends an email to the account defined\n * \t\t\t\tby the 'mail_from' server attribute\n * \t\t\t\tabout an expiring license.\n * @return - void\n */\nvoid\ncheck_license_expiry(struct work_task *wt)\n{\n\tchar *warn_str = NULL;\n\n\twarn_str = lic_check_expiry();\n\tif (warn_str && (strlen(warn_str) > 0)) {\n\t\tstruct tm *plt;\n\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER, LOG_DEBUG,\n\t\t\t  msg_daemonname, 
warn_str);\n\n\t\tplt = localtime(&time_now);\n\t\tif (plt && (plt->tm_yday != licensing_control.expiry_warning_email_yday)) {\n\t\t\t/* Send email at most once a day to prevent\n\t\t\t * bombarding a recipient's inbox.\n\t\t\t * NOTE: Sending of email can also be turned off\n\t\t\t * by unsetting the 'mail_from' server attribute.\n\t\t\t */\n\t\t\tsprintf(log_buffer, \"License server %s: %s\",\n\t\t\t\tpbs_licensing_location, warn_str);\n\t\t\tsvr_mailowner(0, 0, 0, log_buffer);\n\t\t\tlicensing_control.expiry_warning_email_yday = plt->tm_yday;\n\t\t}\n\t}\n\tset_task(WORK_Timed, time_now + 86400, check_license_expiry, NULL);\n}\n\n/**\n * @brief\n * \tget_licenses\t- get lic_count licenses from pbs_license_info\n *\n * @param[in]\t- lic_count\t- number of licenses\n *\n * @return - int\n * @retval - 0 -   Success\n * @retval - < 0 - Failure\n */\nint\nget_licenses(int lic_count)\n{\n\tint status;\n\tint diff = lic_count - licensing_control.licenses_checked_out;\n\t/* Try getting the licenses */\n\tstatus = lic_get(lic_count);\n\tif (status < 0) {\n\t\tsprintf(log_buffer,\n\t\t\t\"%d licenses could not be checked out from pbs_license_info=%s\",\n\t\t\tlic_count, pbs_licensing_location);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_NOTICE, msg_daemonname, log_buffer);\n\t\tlicense_counts.licenses_local = 0;\n\t\tlicense_counts.licenses_used = 0;\n\t\tlicensing_control.licenses_checked_out = 0;\n\t} else {\n\t\tsprintf(log_buffer,\n\t\t\t\"%d licenses checked out from pbs_license_info=%s\",\n\t\t\tlic_count, pbs_licensing_location);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_NOTICE, msg_daemonname, log_buffer);\n\n\t\tlicensing_control.licenses_checked_out = lic_count;\n\t\tlicensing_control.licenses_checkout_time = time_now;\n\t\tlicense_counts.licenses_local = lic_count - license_counts.licenses_used;\n\t\tlicense_counts.licenses_global -= diff;\n\t}\n\tcheck_license_expiry(NULL);\n\treturn status;\n}\n\n/**\n * @brief\n * 
\tcalc_licenses_allowed - calculate the number of licenses that can be\n * \t\t\t\tchecked out based on pbs_license_min and\n * \t\t\t\tpbs_license_max\n *\n * @return\tlong\n * @retval\tn\t: number of licenses we can check out\n *\n * @Note - should be called only after lic_obtainable() has been called.\n */\n\nstatic long\ncalc_licenses_allowed()\n{\n\tlong count = licensing_control.licenses_total_needed;\n\n\tif (licensing_control.licenses_min > count)\n\t\tcount = licensing_control.licenses_min;\n\n\tif (licensing_control.licenses_max < count)\n\t\tcount = licensing_control.licenses_max;\n\n\tif ((license_counts.licenses_global + licensing_control.licenses_checked_out) < count)\n\t\tcount = license_counts.licenses_global + licensing_control.licenses_checked_out;\n\n\treturn count;\n}\n\n/**\n * @brief\n * \tget_more_licenses - task to get more licenses when we have unlicensed nodes\n *\n * @param[in] - ptask - pointer to the work task\n *\n * @return void\n */\nvoid\nget_more_licenses(struct work_task *ptask)\n{\n\tint status;\n\tlong lic_count;\n\n\tget_more_licenses_task = NULL;\n\n\tlicense_counts.licenses_global = lic_obtainable();\n\n\tif (license_counts.licenses_global < (licensing_control.licenses_total_needed - licensing_control.licenses_checked_out))\n\t\tget_more_licenses_task = set_task(WORK_Timed, time_now + 300, get_more_licenses, NULL);\n\n\tif (license_counts.licenses_global > 0) {\n\t\tlic_count = calc_licenses_allowed();\n\t\tif (lic_count != licensing_control.licenses_checked_out) {\n\t\t\tif ((lic_count < licensing_control.licenses_checked_out) && (lic_count < licensing_control.licenses_total_needed)) {\n\t\t\t\tint i;\n\t\t\t\tfor (i = 0; i < svr_totnodes; i++)\n\t\t\t\t\tclear_node_lic_attrs(pbsndlist[i], 0);\n\t\t\t\tlicense_counts.licenses_used = 0;\n\t\t\t}\n\t\t\tstatus = get_licenses(lic_count);\n\t\t\tif (status == 0)\n\t\t\t\tlicense_nodes();\n\t\t}\n\t} else\n\t\tlicense_counts.licenses_global = 0;\n}\n\n/**\n * @brief - update_license_highuse - record max 
number of lic used over time\n * \t\t\t\t     This information is logged into the\n * \t\t\t\t     accounting license file.\n */\nstatic void\nupdate_license_highuse(void)\n{\n\tint u;\n\n\tu = license_counts.licenses_used;\n\tif (u > license_counts.licenses_high_use.lu_max_hr)\n\t\tlicense_counts.licenses_high_use.lu_max_hr = u;\n\tif (u > license_counts.licenses_high_use.lu_max_day)\n\t\tlicense_counts.licenses_high_use.lu_max_day = u;\n\tif (u > license_counts.licenses_high_use.lu_max_month)\n\t\tlicense_counts.licenses_high_use.lu_max_month = u;\n\tif (u > license_counts.licenses_high_use.lu_max_forever)\n\t\tlicense_counts.licenses_high_use.lu_max_forever = u;\n}\n\n/**\n * @brief\n * \tlicense_one_node - try licensing a single node\n *\n * @param[in] pnode - pointer to the node\n *\n * @return void\n */\nvoid\nlicense_one_node(pbsnode *pnode)\n{\n\tset_node_lic_info_attr(pnode);\n\n\tif (license_counts.licenses_global > 0 || license_counts.licenses_used > 0) {\n\t\tif (get_nattr_c(pnode, ND_ATR_License) != ND_LIC_TYPE_locked) {\n\t\t\tif (consume_licenses(get_nattr_long(pnode, ND_ATR_LicenseInfo)) == 0) {\n\t\t\t\tset_nattr_c_slim(pnode, ND_ATR_License, ND_LIC_TYPE_locked, SET);\n\t\t\t\tupdate_license_highuse();\n\t\t\t} else {\n\t\t\t\tadd_to_unlicensed_node_list(pnode);\n\t\t\t\tif (is_nattr_set(pnode, ND_ATR_LicenseInfo)) {\n\t\t\t\t\tlicensing_control.licenses_total_needed += get_nattr_long(pnode, ND_ATR_LicenseInfo);\n\t\t\t\t}\n\t\t\t\tif (get_more_licenses_task)\n\t\t\t\t\tdelete_task(get_more_licenses_task);\n\t\t\t\tget_more_licenses_task = set_task(WORK_Timed, time_now + 2, get_more_licenses, NULL);\n\t\t\t}\n\t\t}\n\t} else\n\t\tadd_to_unlicensed_node_list(pnode);\n}\n\n/**\n * @brief\tOn Cray, we need to release all licenses distributed across\n * \t\tthe vnodes before consuming the bulk count of licenses\n * \t\tfor the first vnode. 
Distribution will be done at a later stage.\n *\n * @param[in,out]\tpnode\t- pointer to node structure\n *\n * @return\tvoid\n */\nvoid\nrelease_lic_for_cray(struct pbsnode *pnode)\n{\n\tint i;\n\tint j;\n\n\tfor (i = 0; i < pnode->nd_nummoms; i++) {\n\t\tif (((mom_svrinfo_t *) pnode->nd_moms[i]->mi_data)->msr_numvnds > 1) {\n\t\t\tmom_svrinfo_t *mi_data = (mom_svrinfo_t *) pnode->nd_moms[i]->mi_data;\n\t\t\tfor (j = 1; j < mi_data->msr_numvnds; j++) {\n\t\t\t\tstruct pbsnode *vnd = mi_data->msr_children[j];\n\t\t\t\tif (is_nattr_set(vnd, ND_ATR_License) && get_nattr_c(vnd, ND_ATR_License) == ND_LIC_TYPE_locked) {\n\t\t\t\t\tclear_nattr(vnd, ND_ATR_License);\n\t\t\t\t\treturn_licenses(get_nattr_long(vnd, ND_ATR_LicenseInfo));\n\t\t\t\t}\n\t\t\t}\n\t\t\tbreak;\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \tlicense_nodes - License the nodes\n *\n * @return void\n */\nvoid\nlicense_nodes()\n{\n\tint i;\n\tpbsnode *np, *pnext;\n\n\tnp = (pbsnode *) GET_NEXT(unlicensed_nodes_list);\n\twhile (np != NULL) {\n\t\tpnext = (pbsnode *) GET_NEXT(np->un_lic_link);\n\t\tif (get_nattr_c(np, ND_ATR_License) != ND_LIC_TYPE_locked) {\n\t\t\tif (is_nattr_set(np, ND_ATR_LicenseInfo)) {\n\t\t\t\tif (consume_licenses(get_nattr_long(np, ND_ATR_LicenseInfo)) == 0) {\n\t\t\t\t\tset_nattr_c_slim(np, ND_ATR_License, ND_LIC_TYPE_locked, SET);\n\t\t\t\t\tremove_from_unlicensed_node_list(np);\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tfor (i = 0; i < np->nd_nummoms; i++)\n\t\t\t\t\tpropagate_licenses_to_vnodes(np->nd_moms[i]);\n\t\t\t}\n\t\t} else {\n\t\t\tremove_from_unlicensed_node_list(np);\n\t\t}\n\t\tnp = pnext;\n\t}\n\tupdate_license_highuse();\n\treturn;\n}\n\n/**\n * @brief\n * \tinit_licensing - initialize licensing\n *\n * @param[in] ptask - associated work task\n *\n * @return void\n */\n\nvoid\ninit_licensing(struct work_task *ptask)\n{\n\tint i;\n\tint count;\n\tlong lic_count;\n\n\tif (init_licensing_task && (init_licensing_task != ptask)) 
{\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_INFO, msg_daemonname,\n\t\t\t  \"skipping an init licensing task\");\n\t\treturn;\n\t}\n\n\t/*\n \t * We have to calculate the number of licenses again\n \t * as the license location has changed.\n \t */\n\tmemset(&license_counts, 0, sizeof(license_counts));\n\tlicensing_control.licenses_total_needed =\n\t\tlicensing_control.licenses_checkout_time =\n\t\t\tlicensing_control.licenses_checked_out = 0;\n\tlicensing_control.expiry_warning_email_yday = -1;\n\n\tcount = lic_init(pbs_licensing_location);\n\tif (count < 0) {\n\t\tfor (i = 0; i < svr_totnodes; i++) {\n\t\t\tclear_node_lic_attrs(pbsndlist[i], 1);\n\t\t\tadd_to_unlicensed_node_list(pbsndlist[i]);\n\t\t}\n\n\t\tswitch (count) {\n\t\t\tcase -1:\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"pbs_license_info=%s does not point to a license server\",\n\t\t\t\t\tpbs_licensing_location);\n\t\t\t\tbreak;\n\t\t\tcase -2:\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"connection could not be established with pbs_license_info=%s\",\n\t\t\t\t\tpbs_licensing_location);\n\t\t\t\tbreak;\n\t\t\tcase -3:\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"supported license type not available at pbs_license_info=%s\",\n\t\t\t\t\tpbs_licensing_location);\n\t\t\t\tbreak;\n\t\t}\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_NOTICE, msg_daemonname, log_buffer);\n\t\treturn;\n\t}\n\tfor (i = 0; i < svr_totnodes; i++) {\n\t\tclear_node_lic_attrs(pbsndlist[i], 0);\n\t\tif (is_nattr_set(pbsndlist[i], ND_ATR_LicenseInfo)) {\n\t\t\tlicensing_control.licenses_total_needed += get_nattr_long(pbsndlist[i], ND_ATR_LicenseInfo);\n\t\t} else {\n\t\t\tif (pbsndlist[i]->nd_lic_info != NULL) {\n\t\t\t\tset_node_lic_info_attr(pbsndlist[i]);\n\t\t\t\tlicensing_control.licenses_total_needed += get_nattr_long(pbsndlist[i], ND_ATR_LicenseInfo);\n\t\t\t}\n\t\t}\n\t\tadd_to_unlicensed_node_list(pbsndlist[i]);\n\t}\n\n\t/* Determine how many licenses we can check out 
*/\n\tlicense_counts.licenses_global = count;\n\tlic_count = calc_licenses_allowed();\n\n\tif (lic_count > 0) {\n\t\tint status;\n\t\tstatus = get_licenses(lic_count);\n\n\t\tif (status == 0)\n\t\t\t/* Now let us license the nodes */\n\t\t\tlicense_nodes();\n\t}\n\n\treturn;\n}\n\n/**\n * @brief\tcheck the sign is valid for given node\n *\n * @param[in]\tsign\thash input\n * @param[out]\tpnode\tpointer to node structure\n *\n * @return\tint\n * @retval\tPBSE_NONE\t: Hash is valid\n * @retval\tPBSE_BADNDATVAL\t: Bad attribute value\n * @retval\tPBSE_LICENSEINV\t: License is invalid\n */\nstatic int\nvalidate_sign(char *sign, struct pbsnode *pnode)\n{\n\tint ret;\n\ttime_t expiry = 0;\n\tchar **cred_list = break_delimited_str(sign, '_');\n\n\tret = checkkey(cred_list, pnode->nd_name, &expiry);\n\tfree_string_array(cred_list);\n\tswitch (ret) {\n\t\tcase -3:\n\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_NODE,\n\t\t\t\t  LOG_NOTICE, pnode->nd_name, \"Invalid signature\");\n\t\t\treturn PBSE_LICENSEINV;\n\t\tcase -2:\n\t\t\treturn PBSE_BADTSPEC;\n\t\tcase -1:\n\t\t\treturn PBSE_BADNDATVAL;\n\t\tcase 0:\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"Signature is valid till:%ld\", expiry);\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE,\n\t\t\t\t  LOG_DEBUG, pnode->nd_name, log_buffer);\n\t\t\tif (is_nattr_set(pnode, ND_ATR_License) && get_nattr_c(pnode, ND_ATR_License) == ND_LIC_TYPE_locked) {\n\t\t\t\treturn_licenses(get_nattr_long(pnode, ND_ATR_LicenseInfo));\n\t\t\t\tclear_nattr(pnode, ND_ATR_License);\n\t\t\t\tclear_nattr(pnode, ND_ATR_LicenseInfo);\n\t\t\t}\n\t\t\tset_nattr_c_slim(pnode, ND_ATR_License, ND_LIC_TYPE_cloud, SET);\n\t\t\tbreak;\n\t\tcase 1:\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"Signature is valid, but it has expired at:%ld\", expiry);\n\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_NODE,\n\t\t\t\t  LOG_DEBUG, pnode->nd_name, log_buffer);\n\t\t\treturn PBSE_NONE;\n\t}\n\treturn PBSE_NONE;\n}\n\n/**\n * 
@brief\tIf changing lic_signature, check sign\n *\n * @param[out]\tpnode\tpointer to node structure\n * @param[in]\tnew\t- new attribute\n *\n * @return\tint\n * @retval\tPBSE_NONE\t: Hash is valid\n * @retval\tPBSE_BADNDATVAL\t: Bad attribute value\n * @retval\tPBSE_LICENSEINV\t: License is invalid\n */\nint\ncheck_sign(pbsnode *pnode, attribute *new)\n{\n\tresource *presc;\n\tresource_def *prdef;\n\tint err = PBSE_NONE;\n\n\tprdef = find_resc_def(svr_resc_def, ND_RESC_LicSignature);\n\tpresc = find_resc_entry((attribute *) new, prdef);\n\tif (presc && (presc->rs_value.at_flags & ATR_VFLAG_MODIFY)) {\n\t\tif ((err = validate_sign(presc->rs_value.at_val.at_str, pnode)) != PBSE_NONE)\n\t\t\treturn (err);\n\t\tpresc->rs_value.at_flags &= ~ATR_VFLAG_DEFLT;\n\t}\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * \tinitialize license counters\n *\n * @param[in]\tcounts - pointer to the pbs_license_counts structure\n */\nvoid\nreset_license_counters(pbs_license_counts *counts)\n{\n\tlong global = lic_obtainable();\n\n\tif (global > 0) {\n\t\tcounts->licenses_global = global;\n\t\tcounts->licenses_local = global;\n\t} else {\n\t\tcounts->licenses_global = 0;\n\t\tcounts->licenses_local = 0;\n\t}\n\tcounts->licenses_used = 0;\n\tcounts->licenses_high_use.lu_max_forever = 0;\n}\n\n/*\n * @brief - release_node_lic -  return back the licenses to the pool\n * \t\t\t\twhen a node is deleted.\n * @param - pobj - pointer to the node\n *\n * @return - int\n * @retval - 1 - if the licenses were returned\n * @retval - 0 - the node was not licensed in the first place.\n */\nint\nrelease_node_lic(void *pobj)\n{\n\tif (pobj) {\n\t\tstruct pbsnode *pnode = pobj;\n\n\t\tlicensing_control.licenses_total_needed -= get_nattr_long(pnode, ND_ATR_LicenseInfo);\n\n\t\t/* release license if node is locked */\n\t\tif (get_nattr_c(pnode, ND_ATR_License) == ND_LIC_TYPE_locked && is_nattr_set(pnode, ND_ATR_LicenseInfo)) {\n\t\t\treturn_licenses(get_nattr_long(pnode, 
ND_ATR_LicenseInfo));\n\t\t\tclear_nattr(pnode, ND_ATR_License);\n\t\t\tclear_nattr(pnode, ND_ATR_LicenseInfo);\n\t\t\treturn 1;\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\tclear license on unset action of lic_signature\n *\n * @param[in,out]\tpnode\t-\tpointer to node structure\n * @param[in]\t\trs_name\t-\tresource name\n *\n * @return\tvoid\n */\nvoid\nunset_signature(void *pobj, char *rs_name)\n{\n\tstruct pbsnode *pnode = pobj;\n\n\tif (!pnode || !rs_name)\n\t\treturn;\n\n\tif (!strcmp(rs_name, ND_RESC_LicSignature)) {\n\t\tif (is_nattr_set(pnode, ND_ATR_License) && get_nattr_c(pnode, ND_ATR_License) == ND_LIC_TYPE_cloud)\n\t\t\tclear_nattr(pnode, ND_ATR_License);\n\t}\n}\n\n/**\n * @brief\n *\t\tunlicense_nodes\t-\treset the ND_ATR_License value\n *\t\tof a socket-licensed node if we don't have enough licenses.\n *\n * @return\tvoid\n *\n * @par MT-Safe:\tno\n * @par Side Effects:\n *\t\tNone\n */\nvoid\nunlicense_nodes(void)\n{\n\tint i;\n\tpbsnode *np;\n\tint first = 1;\n\tstatic char msg_node_unlicensed[] = \"%s attribute reset on one or more nodes\";\n\n\tfor (i = 0; i < svr_totnodes; i++) {\n\t\tnp = pbsndlist[i];\n\t\tif (get_nattr_c(np, ND_ATR_License) == ND_LIC_TYPE_locked) {\n\t\t\tclear_nattr(np, ND_ATR_License);\n\t\t\tclear_nattr(np, ND_ATR_LicenseInfo);\n\t\t\tnode_save_db(np);\n\t\t\tif (first) {\n\t\t\t\tfirst = 0;\n\t\t\t\tsprintf(log_buffer, msg_node_unlicensed,\n\t\t\t\t\tATTR_NODE_License);\n\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t  LOG_ERR, msg_daemonname, log_buffer);\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief - return_lingering_licenses - task to return unused licenses\n * \t\t\t\t\tback to pbs_license_info\n *\n * @return - void\n */\nvoid\nreturn_lingering_licenses(struct work_task *ptask)\n{\n\tif ((licensing_control.licenses_checked_out > licensing_control.licenses_min) &&\n\t    (license_counts.licenses_local > 0))\n\t\tget_licenses(licensing_control.licenses_min);\n\n\tlicenses_linger_time_task = 
set_task(WORK_Timed,\n\t\t\t\t\t     time_now + licensing_control.licenses_linger_time,\n\t\t\t\t\t     return_lingering_licenses, NULL);\n}\n"
  },
  {
    "path": "src/server/mom_info.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tmom_info.c\n * @brief\n * \t\tmom_info.c - functions relating to the mominfo structures and vnodes\n *\n *\t\tSome of the functions herein are used by both the Server and Mom,\n *\t\tothers are used by one or the other but not both.\n *\n * Included functions are:\n *\n * \tcreate_mom_entry()\n * \tdelete_mom_entry()\n * \tfind_mom_entry()\n * \tcreate_svrmom_entry()\n * \tdelete_svrmom_entry()\n * \tcreate_mommap_entry()\n * \tdelete_momvmap_entry()\n * \tfind_vmap_entry()\n * \tadd_mom_data()\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <stdio.h>\n#include <unistd.h>\n#include <stdlib.h>\n#include <errno.h>\n#include <sys/types.h>\n#include <sys/stat.h> /* for stat() used by the hook actions */\n#include \"libpbs.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"credential.h\"\n#include \"server_limits.h\"\n#include \"batch_request.h\"\n#include \"server.h\"\n#include \"pbs_nodes.h\"\n#include \"pbs_error.h\"\n#include \"log.h\"\n#include \"svrfunc.h\"\n#include \"tpp.h\"\n#include \"pbs_internal.h\"\n#include \"work_task.h\"\n#include \"hook_func.h\"\n\nstatic char merr[] = \"malloc failed\";\n\n/* Global Data Items */\n\n/* mominfo_array is an array of mominfo_t pointers, one per host */\n\nmominfo_t **mominfo_array = NULL;\nint mominfo_array_size = 0;\t   /* num entries in the 
array */\nmominfo_time_t mominfo_time = {0}; /* time stamp of mominfo update */\nint svr_num_moms = 0;\nvnpool_mom_t *vnode_pool_mom_list = NULL;\n\nextern char *msg_daemonname;\nextern char *path_hooks_rescdef;\nextern char *msg_new_inventory_mom;\n\nextern int remove_mom_ipaddresses_list(mominfo_t *pmom);\n\n/*\n * The following function are used by both the Server and Mom\n *\tcreate_mom_entry()\n *\tdelete_mom_entry()\n *\tfind_mom_entry()\n */\n\n#define GROW_MOMINFO_ARRAY_AMT 10\n\n/**\n * @brief\n *\t\tcreate_mom_entry - create both a mominfo_t entry and insert a pointer\n *\t\tto that element into the mominfo_array which may be expanded if needed\n *\n * @par Functionality:\n *\t\tSearches for existing mominfo_t entry with matching hostname and port;\n *\t\tif found returns it, otherwise adds entry.   An empty slot in the\n *\t\tmominfo_array[] will be used to hold pointer to created entry.  If no\n *\t\tempty slot, the array is expanded by GROW_MOMINFO_ARRAY_AMT amount.\n *\n * @param[in]\thostname - hostname of host on which Mom will be running\n * @param[in]\tport     - port number to which Mom will be listening\n *\n * @return\tmominfo_t *\n * @retval\tReturns pointer to the mominfo entry, existing or created\n * @retval\tNULL on error.\n *\n * @par Side Effects: None\n *\n * @par MT-safe: no, would need lock on realloc of global mominfo_array[]\n *\n */\n\nmominfo_t *\ncreate_mom_entry(char *hostname, unsigned int port)\n{\n\tint empty = -1;\n\tint i;\n\tmominfo_t *pmom;\n\tmominfo_t **tp;\n\n\tfor (i = 0; i < mominfo_array_size; ++i) {\n\t\tpmom = mominfo_array[i];\n\t\tif (pmom) {\n\t\t\tif ((strcasecmp(pmom->mi_host, hostname) == 0) &&\n\t\t\t    (pmom->mi_port == port))\n\t\t\t\treturn pmom;\n\t\t} else if (empty == -1) {\n\t\t\tempty = i; /* save index of first empty slot */\n\t\t}\n\t}\n\n\tif (empty == -1) {\n\t\t/* there wasn't an empty slot in the array we can use */\n\t\t/* need to grow the array\t\t\t      */\n\n\t\ttp = (mominfo_t **) 
realloc(mominfo_array,\n\t\t\t\t\t    (size_t) (sizeof(mominfo_t *) * (mominfo_array_size + GROW_MOMINFO_ARRAY_AMT)));\n\t\tif (tp) {\n\t\t\tempty = mominfo_array_size;\n\t\t\tmominfo_array = tp;\n\t\t\tmominfo_array_size += GROW_MOMINFO_ARRAY_AMT;\n\t\t\tfor (i = empty; i < mominfo_array_size; ++i)\n\t\t\t\tmominfo_array[i] = NULL;\n\t\t} else {\n\t\t\tlog_err(errno, __func__, merr);\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\t/* now allocate the memory for the mominfo_t element itself */\n\n\tpmom = (mominfo_t *) malloc(sizeof(mominfo_t));\n\tif (pmom) {\n\t\t(void) strncpy(pmom->mi_host, hostname, PBS_MAXHOSTNAME);\n\t\tpmom->mi_host[PBS_MAXHOSTNAME] = '\\0';\n\t\tpmom->mi_port = port;\n\t\tpmom->mi_rmport = port + 1;\n\t\tpmom->mi_modtime = (time_t) 0;\n\t\tpmom->mi_dmn_info = NULL;\n\t\tpmom->mi_data = NULL;\n\t\tCLEAR_LINK(pmom->mi_link);\n#ifndef PBS_MOM\n\t\tif (mom_hooks_seen_count() > 0) {\n\t\t\tstruct stat sbuf;\n\t\t\t/*\n\t\t\t * there should be at least one hook to\n\t\t\t * add mom actions below, which are on\n\t\t\t * behalf of existing hooks.\n\t\t\t */\n\t\t\tadd_pending_mom_allhooks_action(pmom, MOM_HOOK_ACTION_SEND_ATTRS | MOM_HOOK_ACTION_SEND_CONFIG | MOM_HOOK_ACTION_SEND_SCRIPT);\n\t\t\tif (stat(path_hooks_rescdef, &sbuf) == 0)\n\t\t\t\tadd_pending_mom_hook_action(pmom, PBS_RESCDEF, MOM_HOOK_ACTION_SEND_RESCDEF);\n\t\t}\n#endif\n\n\t\tmominfo_array[empty] = pmom;\n\t\t++svr_num_moms; /* increment number of Moms */\n\t} else {\n\t\tlog_err(errno, __func__, merr);\n\t}\n\n\treturn pmom;\n}\n\n/**\n *\n * @brief\n *\t\tDestroy a mominfo_t element and null the pointer to that\n *\t\telement in the mominfo_array;\n * @par Functionality:\n *\t\tThe heap entry pointed to by the mi_data member is freed also.\n *\t\tHowever, any extra malloc-ed space in that member must be freed\n *\t\tindependently. 
Note, this means the mominfo_array may have null\n *\t\tentries anywhere.\n *\n * @param[in]\tpmom - the element being operated on.\n *\n * @return\tvoid\n */\n\nvoid\ndelete_mom_entry(mominfo_t *pmom)\n{\n\tint i;\n\n\tif (pmom == NULL)\n\t\treturn;\n\n\t/*\n\t * Remove any work_task entries that may be referencing this mom\n\t * BEFORE we free any data.\n\t */\n\tdelete_task_by_parm1_func((void *) pmom, NULL, DELETE_ONE);\n\n\t/* find the entry in the array that does point here */\n\tfor (i = 0; i < mominfo_array_size; ++i) {\n\t\tif (mominfo_array[i] == pmom) {\n\t\t\tmominfo_array[i] = NULL;\n\t\t\tbreak;\n\t\t}\n\t}\n\n\t/* free the mi_data after all hook work is done, since the hook actions\n\t * use the mi_data.\n\t */\n\tfree(pmom->mi_data);\n\n\tdelete_link(&pmom->mi_link);\n\tmemset(pmom, 0, sizeof(mominfo_t));\n\tfree(pmom);\n\t--svr_num_moms;\n\n\treturn;\n}\n\n/**\n * @brief\n * \t\tfind_mom_entry - find and return a pointer to a mominfo_t element\n *\t\tdefined by the hostname and port\n * @note\n *\t\tthe mominfo_array may have null entries anywhere.\n *\n * @param[in]\thostname - hostname of host on which Mom will be running\n * @param[in]\tport     - port number to which Mom will be listening\n *\n * @return\tpointer to a mominfo_t element\n * @retval\tNULL\t- couldn't find.\n */\n\nmominfo_t *\nfind_mom_entry(char *hostname, unsigned int port)\n{\n\tint i;\n\tmominfo_t *pmom;\n\n\tfor (i = 0; i < mominfo_array_size; ++i) {\n\t\tpmom = mominfo_array[i];\n\t\tif (pmom &&\n\t\t    (strcasecmp(pmom->mi_host, hostname) == 0) &&\n\t\t    (pmom->mi_port == port))\n\t\t\treturn pmom;\n\t}\n\n\treturn NULL; /* didn't find it */\n}\n\n#ifndef PBS_MOM /* Not Mom, i.e. 
the Server */\n\n/*\n * The following functions are used by the Server only !\n */\n\n/**\n * @brief\n * \t\tcreate_svrmom_entry - create both a mominfo entry and the mom_svrinfo\n *\t\tentry associated with it.\n *\t\tAlso used as a peer server structure for multi-server.\n * @par Functionality:\n *\t\tFinds an existing mominfo_t structure for the hostname/port tuple or\n *\t\tcreates one, plus the associated mom_svrinfo_t structure and an array\n *\t\t(size 1) of pointers to pbs nodes for the children vnodes.\n * @note\n * \t\tuse delete_mom_entry() to delete both the mominfo and\n *\t\tmom_svrinfo entries.\n * @see\n * \t\tcreate_pbs_node2\n *\n * @param[in]\thostname - hostname of host on which Mom will be running\n * @param[in]\tport     - port number to which Mom will be listening\n * @param[in]\tpul      - list of IP addresses of host; will be freed on error\n *\t\t\t   \t\t\t\tor saved in structure; caller must not free pul\n *\n * @return\tmominfo_t *\n * @retval\tpointer to the created mominfo entry\t- success\n * @retval\tNULL\t- error.\n *\n * @par Side Effects: None\n *\n * @par MT-safe: see create_mom_entry() and tinsert2()\n *\n */\n\nmominfo_t *\ncreate_svrmom_entry(char *hostname, unsigned int port, unsigned long *pul)\n{\n\tmominfo_t *pmom;\n\tmom_svrinfo_t *psvrmom;\n\n\tpmom = create_mom_entry(hostname, port);\n\n\tif (pmom == NULL) {\n\t\tfree(pul);\n\t\treturn pmom;\n\t}\n\n\tif (pmom->mi_data != NULL) {\n\t\tfree(pul);\n\t\treturn pmom; /* already there */\n\t}\n\n\tpsvrmom = (mom_svrinfo_t *) malloc(sizeof(mom_svrinfo_t));\n\tif (!psvrmom) {\n\t\tlog_err(PBSE_SYSTEM, __func__, merr);\n\t\tdelete_mom_entry(pmom);\n\t\treturn NULL;\n\t}\n\n\tpsvrmom->msr_pcpus = 0;\n\tpsvrmom->msr_acpus = 0;\n\tpsvrmom->msr_pmem = 0;\n\tpsvrmom->msr_numjobs = 0;\n\tpsvrmom->msr_arch = NULL;\n\tpsvrmom->msr_pbs_ver = NULL;\n\tpsvrmom->msr_timedown = (time_t) 0;\n\tpsvrmom->msr_wktask = 0;\n\tpsvrmom->msr_jbinxsz = 0;\n\tpsvrmom->msr_jobindx = 
NULL;\n\tpsvrmom->msr_numvnds = 0;\n\tpsvrmom->msr_numvslots = 1;\n\tpsvrmom->msr_vnode_pool = 0;\n\tpsvrmom->msr_has_inventory = 0;\n\tpsvrmom->msr_children =\n\t\t(struct pbsnode **) calloc((size_t) (psvrmom->msr_numvslots),\n\t\t\t\t\t   sizeof(struct pbsnode *));\n\tif (psvrmom->msr_children == NULL) {\n\t\tlog_err(errno, __func__, merr);\n\t\tfree(psvrmom);\n\t\tdelete_mom_entry(pmom);\n\t\treturn NULL;\n\t}\n\tpsvrmom->msr_action = NULL;\n\tpsvrmom->msr_num_action = 0;\n\n\tpmom->mi_data = psvrmom; /* must be done before calling tinsert2 */\n\n\tif (pmom->mi_dmn_info) {\n\t\tfree(pul);\n\t\treturn pmom; /* already there */\n\t}\n\n\tpmom->mi_dmn_info = init_daemon_info(pul, port, pmom);\n\tif (!pmom->mi_dmn_info) {\n\t\tlog_err(PBSE_SYSTEM, __func__, merr);\n\t\tdelete_svrmom_entry(pmom);\n\t\treturn NULL;\n\t}\n\n\treturn pmom;\n}\n\n/**\n * @brief\n * \t\topen_conn_stream - do a tpp_open if it is safe to do so.\n *\n * @param[in]\tpmom\t- pointer to mominfo structure\n *\n * @return\tint\n * @retval\t-1: cannot be opened or error on opening\n * @retval\t>=0: success\n */\nint\nopen_conn_stream(mominfo_t *pmom)\n{\n\tint stream = -1;\n\tdmn_info_t *pdmninfo;\n\n\tpdmninfo = pmom->mi_dmn_info;\n\tif (pdmninfo->dmn_stream >= 0)\n\t\treturn pdmninfo->dmn_stream;\n\n\tif ((stream = tpp_open(pmom->mi_host, pmom->mi_rmport)) < 0) {\n\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER, LOG_DEBUG,\n\t\t\t   msg_daemonname, \"Failed to open connection stream for %s\", pmom->mi_host);\n\t\treturn -1;\n\t}\n\n\tpdmninfo->dmn_stream = stream;\n\tpdmninfo->dmn_state &= ~(INUSE_UNKNOWN | INUSE_DOWN);\n\ttinsert2((u_long) stream, 0, pmom, &streams);\n\n\treturn stream;\n}\n\n/**\n * @brief\n * \t\tdelete_svrmom_entry - destroy a mom_svrinfo_t element and the parent\n *\t\tmominfo_t element.\n *\n * @see\n * \t\teffective_node_delete\n *\n * @param[in]\tpmom\t- pointer to mominfo structure\n *\n * @return\tvoid\n */\n\nvoid\ndelete_svrmom_entry(mominfo_t 
*pmom)\n{\n\tmom_svrinfo_t *psvrmom = (mom_svrinfo_t *) pmom->mi_data;\n\tint i;\n\n\tif (psvrmom) {\n\n#ifndef PBS_MOM\n\t\t/* send request to this mom to delete all hooks known from this server. */\n\t\t/* we'll just send this delete request only once */\n\t\t/* if a hook fails to delete, then that mom host when it */\n\t\t/* comes back will still have the hook. */\n\t\tif (pmom->mi_dmn_info && !(pmom->mi_dmn_info->dmn_state & INUSE_UNKNOWN) && (mom_hooks_seen_count() > 0)) {\n\t\t\tuc_delete_mom_hooks(pmom);\n\t\t}\n#endif\n\n\t\tif (psvrmom->msr_arch)\n\t\t\tfree(psvrmom->msr_arch);\n\n\t\tif (psvrmom->msr_pbs_ver)\n\t\t\tfree(psvrmom->msr_pbs_ver);\n\n\t\tif (psvrmom->msr_children)\n\t\t\tfree(psvrmom->msr_children);\n\n\t\tif (psvrmom->msr_jobindx) {\n\t\t\tfree(psvrmom->msr_jobindx);\n\t\t\tpsvrmom->msr_jbinxsz = 0;\n\t\t\tpsvrmom->msr_jobindx = NULL;\n\t\t}\n\n\t\tif (remove_mom_ipaddresses_list(pmom) != 0) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"Could not remove IP address for mom %s:%d from cache\",\n\t\t\t\t pmom->mi_host, pmom->mi_port);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t}\n\t}\n\n\tdelete_daemon_info(pmom);\n\n#ifndef PBS_MOM\n\t/* psvrmom may be NULL if mi_data was never attached; guard the dereference */\n\tif (psvrmom) {\n\t\tfor (i = 0; i < psvrmom->msr_num_action; ++i) {\n\t\t\tif (psvrmom->msr_action[i] != NULL) {\n\t\t\t\tfree(psvrmom->msr_action[i]);\n\t\t\t\tpsvrmom->msr_action[i] = NULL;\n\t\t\t}\n\t\t}\n\t\tfree(psvrmom->msr_action);\n\t}\n#endif\n\n\tif (psvrmom)\n\t\tmemset((void *) psvrmom, 0, sizeof(mom_svrinfo_t));\n\n\tdelete_mom_entry(pmom);\n}\n\n/**\n * @brief\n *\tFind the pool that matches what is set on the node\n *\n * @param[in]\tpmom - pointer to the mom\n *\n * @return\tvnpool_mom_t *\n * @retval\tpointer to the matching pool structure\n * @retval\tNULL - if there is no match\n */\nvnpool_mom_t *\nfind_vnode_pool(mominfo_t *pmom)\n{\n\tmom_svrinfo_t *psvrmom = (mom_svrinfo_t *) (pmom->mi_data);\n\tvnpool_mom_t *ppool = vnode_pool_mom_list;\n\n\tif 
(psvrmom->msr_vnode_pool != 0) {\n\t\twhile (ppool != NULL) {\n\t\t\tif (ppool->vnpm_vnode_pool == psvrmom->msr_vnode_pool)\n\t\t\t\treturn (ppool);\n\n\t\t\tppool = ppool->vnpm_next;\n\t\t}\n\t}\n\treturn NULL;\n}\n\n/**\n * @brief\n *\tReset the \"inventory Mom\" for a vnode_pool if the specified Mom is the\n *\tcurrent inventory Mom.  Done when she is down or deleted from the pool.\n *\n * @param[in] pmom - Pointer to the Mom (mominfo_t) structure of the Mom\n *\tbeing removed/marked down.\n */\nvoid\nreset_pool_inventory_mom(mominfo_t *pmom)\n{\n\tint i;\n\tvnpool_mom_t *ppool;\n\tmominfo_t *pxmom;\n\tmom_svrinfo_t *pxsvrmom;\n\tmom_svrinfo_t *psvrmom = (mom_svrinfo_t *) (pmom->mi_data);\n\n\t/* If this Mom is in a vnode pool and is the inventory Mom for that */\n\t/* pool, remove her from that role and, if another Mom in the pool */\n\t/* is up, make that Mom the new inventory Mom */\n\n\tif (psvrmom->msr_vnode_pool != 0) {\n\t\tppool = find_vnode_pool(pmom);\n\t\tif (ppool != NULL) {\n\t\t\tif (ppool->vnpm_inventory_mom != pmom)\n\t\t\t\treturn; /* in the pool but is not the inventory mom */\n\n\t\t\t/* this newly down/deleted Mom was the inventory Mom, */\n\t\t\t/* clear her as the inventory mom in the pool */\n\t\t\tppool->vnpm_inventory_mom = NULL;\n\t\t\tpsvrmom->msr_has_inventory = 0;\n\n\t\t\t/* see if another Mom is up to become \"the one\" */\n\t\t\tfor (i = 0; i < ppool->vnpm_nummoms; ++i) {\n\t\t\t\tpxmom = ppool->vnpm_moms[i];\n\t\t\t\tpxsvrmom = (mom_svrinfo_t *) pxmom->mi_data;\n\t\t\t\tif ((pxmom->mi_dmn_info->dmn_state & INUSE_DOWN) == 0) {\n\t\t\t\t\tppool->vnpm_inventory_mom = pxmom;\n\t\t\t\t\tpxsvrmom->msr_has_inventory = 1;\n\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t\t   LOG_DEBUG, msg_daemonname, msg_new_inventory_mom,\n\t\t\t\t\t\t   ppool->vnpm_vnode_pool, pxmom->mi_host);\n\t\t\t\t\tbreak; /* a pool has only one inventory Mom */\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\tAdd a Mom (mominfo_t) to the list of Moms associated with managing\n *\ta 
vnode pool.  Create the pool if need be (not yet exists).\n *\n * @param[in] pmom - Pointer to the mominfo_t for the Mom\n * @return Error code\n * @retval - 0 - Success\n * @retval - pbs_errno - Failure code\n *\n * @par MT-safe: No\n */\nint\nadd_mom_to_pool(mominfo_t *pmom)\n{\n\tint i;\n\tvnpool_mom_t *ppool;\n\tmominfo_t **tmplst;\n\tint added_pool = 0;\n\tmom_svrinfo_t *psvrmom = (mom_svrinfo_t *) pmom->mi_data;\n\n\tif (psvrmom->msr_vnode_pool == 0)\n\t\treturn PBSE_NONE; /* Mom not in a pool */\n\n\tppool = find_vnode_pool(pmom);\n\tif (ppool != NULL) {\n\t\t/* Found existing pool. Is Mom already in it? */\n\t\tfor (i = 0; i < ppool->vnpm_nummoms; ++i) {\n\t\t\tif (ppool->vnpm_moms[i] == pmom) {\n\t\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_NODE,\n\t\t\t\t\t   LOG_INFO, pmom->mi_host,\n\t\t\t\t\t   \"POOL: add_mom_to_pool - \"\n\t\t\t\t\t   \"Mom already in pool %ld\",\n\t\t\t\t\t   psvrmom->msr_vnode_pool);\n\t\t\t\treturn PBSE_NONE; /* she is already there */\n\t\t\t}\n\t\t}\n\t}\n\n\t/* The pool doesn't exist yet, we need to add a pool entry */\n\tif (ppool == NULL) {\n\t\tppool = (vnpool_mom_t *) calloc(1, (size_t) sizeof(struct vnpool_mom));\n\t\tif (ppool == NULL) {\n\t\t\t/* no memory */\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_NODE, LOG_ERR,\n\t\t\t\t  pmom->mi_host, \"Failed to expand vnode_pool_mom_list\");\n\t\t\treturn PBSE_SYSTEM;\n\t\t}\n\t\tadded_pool = 1;\n\t\tppool->vnpm_vnode_pool = psvrmom->msr_vnode_pool;\n\t}\n\n\t/* now add Mom to pool list, expanding list if need be */\n\n\t/* expand the array, perhaps from nothingness */\n\ttmplst = (mominfo_t **) realloc(ppool->vnpm_moms, (ppool->vnpm_nummoms + 1) * sizeof(mominfo_t *));\n\tif (tmplst == NULL) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_NODE, LOG_ERR,\n\t\t\t  pmom->mi_host, \"unable to add mom to pool, no memory\");\n\n\t\tif (added_pool)\n\t\t\tfree(ppool);\n\t\treturn PBSE_SYSTEM;\n\t}\n\tppool->vnpm_moms = tmplst;\n\tppool->vnpm_moms[ppool->vnpm_nummoms++] = 
pmom;\n\n\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SERVER, LOG_DEBUG, msg_daemonname,\n\t\t   \"Mom %s added to vnode_pool %ld\", pmom->mi_host, psvrmom->msr_vnode_pool);\n\tif (ppool->vnpm_inventory_mom == NULL) {\n\t\tppool->vnpm_inventory_mom = pmom;\n\t\tpsvrmom->msr_has_inventory = 1;\n\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER, LOG_DEBUG, msg_daemonname,\n\t\t\t   msg_new_inventory_mom, psvrmom->msr_vnode_pool, pmom->mi_host);\n\t}\n\n\tif (vnode_pool_mom_list == NULL) {\n\t\tvnode_pool_mom_list = ppool;\n\t} else if (added_pool == 1) {\n\t\tppool->vnpm_next = vnode_pool_mom_list;\n\t\tvnode_pool_mom_list = ppool;\n\t}\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\tremove a Mom (mominfo_t) from the list of Moms associated with managing\n *\ta vnode pool.\n *\n * @param[in] pmom - Pointer to the mominfo_t for the Mom\n *\n * @par MT-safe: No\n */\nvoid\nremove_mom_from_pool(mominfo_t *pmom)\n{\n\tint i;\n\tint j;\n\tvnpool_mom_t *ppool;\n\tmom_svrinfo_t *psvrmom = (mom_svrinfo_t *) pmom->mi_data;\n\n\tif (psvrmom->msr_vnode_pool == 0)\n\t\treturn; /* Mom not in a pool */\n\n\tppool = find_vnode_pool(pmom);\n\tif (ppool != NULL) {\n\t\t/* found existing pool, if Mom is in it remove her */\n\t\t/* from it.  If not, nothing to do. */\n\t\tfor (i = 0; i < ppool->vnpm_nummoms; ++i) {\n\t\t\tif (ppool->vnpm_moms[i] == pmom) {\n\t\t\t\tppool->vnpm_moms[i] = NULL;\n\t\t\t\tfor (j = i + 1; j < ppool->vnpm_nummoms; ++j) {\n\t\t\t\t\tppool->vnpm_moms[j - 1] = ppool->vnpm_moms[j];\n\t\t\t\t}\n\t\t\t\t--ppool->vnpm_nummoms;\n\t\t\t\t/* find someone else to be the inventory Mom if need be */\n\t\t\t\treset_pool_inventory_mom(pmom);\n\t\t\t\tpsvrmom->msr_vnode_pool = 0;\n\t\t\t}\n\t\t}\n\t}\n}\n\n#else /* PBS_MOM */\n\n/*\n * The following functions are used by Mom only !\n */\n\n/**\n * @brief\n * \t\tcreate_mommap_entry - create an entry to map a vnode to its parent Mom\n *\t\tand initialize it.   
If the actual host of the vnode, used only for\n *\t\tMPI, is not the same as the Mom host, then set it.  If the two hosts\n *\t\tare the same, then mvm_hostn is null and the Mom name should be used\n *\n * @param[in]\tvnode\t- vnode for which entry needs to be made\n * @param[in]\thostn\t- host name for MPI via PBS_NODEFILE\n * @param[in]\tpmom\t- pointer to mominfo structure\n * @param[in]\tnotask\t- mvm_notask\n *\n * @return\tmomvmap_t *\n * @retval\tNULL\t- failure\n */\nmomvmap_t *\ncreate_mommap_entry(char *vnode, char *hostn, mominfo_t *pmom, int notask)\n{\n\tint empty = -1;\n\tint i;\n\tmomvmap_t *pmmape;\n\tmomvmap_t **tpa;\n\n#ifdef DEBUG\n\tassert((vnode != NULL) && (*vnode != '\\0') && (pmom != NULL));\n#else\n\tif ((vnode == NULL) || (*vnode == '\\0') || (pmom == NULL)) {\n\t\treturn NULL;\n\t}\n#endif\n\n\t/* find an empty slot in the map array */\n\n\tfor (i = 0; i < mommap_array_size; ++i) {\n\t\tif (mommap_array[i] == NULL) {\n\t\t\tempty = i;\n\t\t\tbreak;\n\t\t}\n\t}\n\tif (empty == -1) { /* need to expand array */\n\t\ttpa = (momvmap_t **) realloc(mommap_array, (size_t) (sizeof(momvmap_t *) * (mommap_array_size + GROW_MOMINFO_ARRAY_AMT)));\n\t\tif (tpa) {\n\t\t\tempty = mommap_array_size;\n\t\t\tmommap_array = tpa;\n\t\t\tmommap_array_size += GROW_MOMINFO_ARRAY_AMT;\n\t\t\tfor (i = empty; i < mommap_array_size; ++i)\n\t\t\t\tmommap_array[i] = NULL;\n\t\t} else {\n\t\t\tlog_err(errno, __func__, merr);\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\t/* now allocate the entry itself and initialize it */\n\n\tpmmape = malloc(sizeof(momvmap_t));\n\tif (pmmape) {\n\t\t(void) strncpy(pmmape->mvm_name, vnode, PBS_MAXNODENAME);\n\t\tpmmape->mvm_name[PBS_MAXNODENAME] = '\\0';\n\t\tif ((hostn == NULL) || (*hostn == '\\0')) {\n\t\t\tpmmape->mvm_hostn = NULL;\n\t\t} else {\n\t\t\tpmmape->mvm_hostn = strdup(hostn);\n\t\t\tif (pmmape->mvm_hostn == NULL) {\n\t\t\t\tlog_err(errno, __func__, merr);\n\t\t\t}\n\t\t}\n\t\tpmmape->mvm_notask = notask;\n\t\tpmmape->mvm_mom = 
pmom;\n\n\t\tmommap_array[empty] = pmmape;\n\t} else {\n\t\tlog_err(errno, __func__, merr);\n\t}\n\treturn (pmmape);\n}\n\n/**\n * @brief\n *\t\tdelete_momvmap_entry - delete a momvmap_t entry\n * @see\n * \t\tfree_vnodemap\n * @param[in,out]\tpmmape\t- a momvmap_t entry\n *\n * @return\tvoid\n */\nvoid\ndelete_momvmap_entry(momvmap_t *pmmape)\n{\n\tif (pmmape == NULL)\n\t\treturn;\n\n\tif (pmmape->mvm_hostn)\n\t\tfree(pmmape->mvm_hostn);\n\tmemset(pmmape, 0, sizeof(momvmap_t));\n\tfree(pmmape);\n}\n\n/**\n * @brief\n * \t\tfind_vmap_entry - find the momvmap_t entry for a vnode name\n *\n * @param[in]\tvname\t- vnode name\n *\n * @return\tmomvmap_t *\n * @retval\tmom_vmap entry\t- success\n * @retval\tNULL\t- failure\n */\n\nmomvmap_t *\nfind_vmap_entry(const char *vname)\n{\n\tint i;\n\tmomvmap_t *pmap;\n\n\tfor (i = 0; i < mommap_array_size; ++i) {\n\t\tpmap = mommap_array[i];\n\t\tif ((pmap != NULL) && (strcasecmp(pmap->mvm_name, vname) == 0))\n\t\t\treturn pmap;\n\t}\n\treturn NULL;\n}\n\nmominfo_t *\nfind_mom_by_vnodename(const char *vname)\n{\n\tmomvmap_t *pmap;\n\n\tpmap = find_vmap_entry(vname);\n\tif (pmap)\n\t\treturn (pmap->mvm_mom);\n\telse\n\t\treturn NULL;\n}\n\nmominfo_t *\nadd_mom_data(const char *vnid, void *data)\n{\n\tmominfo_t *pmom;\n\n\tif ((pmom = find_mom_by_vnodename(vnid)) != NULL) {\n\t\tpmom->mi_data = data;\n\t\treturn (pmom);\n\t}\n\n\treturn NULL;\n}\n#endif /* PBS_MOM */\n"
  },
  {
    "path": "src/server/nattr_get_set.c",
    "content": "/*\n * Copyright (C) 1994-2020 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h>\n\n#include \"pbs_nodes.h\"\n\n/**\n * @brief\tGet attribute of node based on given attr index\n *\n * @param[in] pnode    - pointer to node struct\n * @param[in] attr_idx - attribute index\n *\n * @return attribute *\n * @retval NULL  - failure\n * @retval !NULL - pointer to attribute struct\n */\nattribute *\nget_nattr(const struct pbsnode *pnode, int attr_idx)\n{\n\tif (pnode != NULL)\n\t\treturn _get_attr_by_idx((attribute *) pnode->nd_attr, attr_idx);\n\treturn NULL;\n}\n\n/**\n * @brief\tGetter function for node attribute of type string\n *\n * @param[in]\tpnode - pointer to the node\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tchar *\n * @retval\tstring value of the attribute\n * @retval\tNULL if pnode is NULL\n */\nchar *\nget_nattr_str(const struct pbsnode *pnode, int attr_idx)\n{\n\tif (pnode != NULL)\n\t\treturn get_attr_str(get_nattr(pnode, attr_idx));\n\n\treturn NULL;\n}\n\n/**\n * @brief\tGetter function for node attribute of type string of array\n *\n * @param[in]\tpnode - pointer to the node\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tstruct array_strings *\n * @retval\tvalue of the attribute\n * @retval\tNULL if pnode is NULL\n */\nstruct array_strings *\nget_nattr_arst(const struct pbsnode *pnode, int attr_idx)\n{\n\tif (pnode != 
NULL)\n\t\treturn get_attr_arst(get_nattr(pnode, attr_idx));\n\n\treturn NULL;\n}\n\n/**\n * @brief\tGetter for node attribute's list value\n *\n * @param[in]\tpnode - pointer to the node\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tpbs_list_head\n * @retval\tvalue of attribute\n */\npbs_list_head\nget_nattr_list(const struct pbsnode *pnode, int attr_idx)\n{\n\treturn get_attr_list(get_nattr(pnode, attr_idx));\n}\n\n/**\n * @brief\tGetter function for node attribute of type long\n *\n * @param[in]\tpnode - pointer to the node\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tlong\n * @retval\tlong value of the attribute\n * @retval\t-1 if pnode is NULL\n */\nlong\nget_nattr_long(const struct pbsnode *pnode, int attr_idx)\n{\n\tif (pnode != NULL)\n\t\treturn get_attr_l(get_nattr(pnode, attr_idx));\n\n\treturn -1;\n}\n\n/**\n * @brief\tGetter function for node attribute of type char\n *\n * @param[in]\tpnode - pointer to the node\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tchar\n * @retval\tchar value of the attribute\n * @retval\t-1 if pnode is NULL\n */\nchar\nget_nattr_c(const struct pbsnode *pnode, int attr_idx)\n{\n\tif (pnode != NULL)\n\t\treturn get_attr_c(get_nattr(pnode, attr_idx));\n\n\treturn -1;\n}\n\n/**\n * @brief\tGeneric node attribute setter (call if you want at_set() action functions to be called)\n *\n * @param[in]\tpnode - pointer to node\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\trscn - new resource val to set, if applicable\n * @param[in]\top - batch_op operation, SET, INCR, DECR etc.\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t!0 for failure\n */\nint\nset_nattr_generic(struct pbsnode *pnode, int attr_idx, char *val, char *rscn, enum batch_op op)\n{\n\tif (pnode == NULL || val == NULL)\n\t\treturn 1;\n\n\tpnode->nd_modified = 1;\n\treturn set_attr_generic(get_nattr(pnode, 
attr_idx), &node_attr_def[attr_idx], val, rscn, op);\n}\n\n/**\n * @brief\t\"fast\" node attribute setter for string values\n *\n * @param[in]\tpnode - pointer to node\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\trscn - new resource val to set, if applicable\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t!0 for failure\n */\nint\nset_nattr_str_slim(struct pbsnode *pnode, int attr_idx, char *val, char *rscn)\n{\n\tif (pnode == NULL || val == NULL)\n\t\treturn 1;\n\n\tpnode->nd_modified = 1;\n\treturn set_attr_generic(get_nattr(pnode, attr_idx), &node_attr_def[attr_idx], val, rscn, INTERNAL);\n}\n\n/**\n * @brief\t\"fast\" node attribute setter for long values\n *\n * @param[in]\tpnode - pointer to node\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\top - batch_op operation, SET, INCR, DECR etc.\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t1 for failure\n */\nint\nset_nattr_l_slim(struct pbsnode *pnode, int attr_idx, long val, enum batch_op op)\n{\n\tif (pnode == NULL)\n\t\treturn 1;\n\n\tif ((attr_idx != ND_ATR_last_state_change_time) && \n\t\t(attr_idx != ND_ATR_state || (val & INUSE_NOAUTO_MASK)))\n\t\tpnode->nd_modified = 1;\n\tset_attr_l(get_nattr(pnode, attr_idx), val, op);\n\n\treturn 0;\n}\n\n/**\n * @brief\t\"fast\" node attribute setter for boolean values\n *\n * @param[in]\tpnode - pointer to node\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\top - batch_op operation, SET, INCR, DECR etc.\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t1 for failure\n */\nint\nset_nattr_b_slim(struct pbsnode *pnode, int attr_idx, long val, enum batch_op op)\n{\n\tif (pnode == NULL)\n\t\treturn 1;\n\n\tpnode->nd_modified = 1;\n\tset_attr_b(get_nattr(pnode, attr_idx), val, op);\n\n\treturn 0;\n}\n\n/**\n * @brief\t\"fast\" node attribute setter for char values\n *\n 
* @param[in]\tpnode - pointer to node\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\top - batch_op operation, SET, INCR, DECR etc.\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t1 for failure\n */\nint\nset_nattr_c_slim(struct pbsnode *pnode, int attr_idx, char val, enum batch_op op)\n{\n\tif (pnode == NULL)\n\t\treturn 1;\n\n\tpnode->nd_modified = 1;\n\tset_attr_c(get_nattr(pnode, attr_idx), val, op);\n\n\treturn 0;\n}\n\n/**\n * @brief\t\"fast\" node attribute setter for short values\n *\n * @param[in]\tpnode - pointer to node\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\top - batch_op operation, SET, INCR, DECR etc.\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t1 for failure\n */\nint\nset_nattr_short_slim(struct pbsnode *pnode, int attr_idx, short val, enum batch_op op)\n{\n\tif (pnode == NULL)\n\t\treturn 1;\n\n\tpnode->nd_modified = 1;\n\tset_attr_short(get_nattr(pnode, attr_idx), val, op);\n\n\treturn 0;\n}\n\n/**\n * @brief\tCheck if a node attribute is set\n *\n * @param[in]\tpnode - pointer to node\n * @param[in]\tattr_idx - attribute index to check\n *\n * @return\tint\n * @retval\t1 if it is set\n * @retval\t0 otherwise\n */\nint\nis_nattr_set(const struct pbsnode *pnode, int attr_idx)\n{\n\tif (pnode != NULL)\n\t\treturn is_attr_set(get_nattr(pnode, attr_idx));\n\n\treturn 0;\n}\n\n/**\n * @brief\tFree a node attribute\n *\n * @param[in]\tpnode - pointer to node\n * @param[in]\tattr_idx - attribute index to free\n *\n * @return\tvoid\n */\nvoid\nfree_nattr(struct pbsnode *pnode, int attr_idx)\n{\n\tif (pnode != NULL)\n\t\tfree_attr(node_attr_def, get_nattr(pnode, attr_idx), attr_idx);\n}\n\n/**\n * @brief\tclear a node attribute\n *\n * @param[in]\tpnode - pointer to node\n * @param[in]\tattr_idx - attribute index to clear\n *\n * @return\tvoid\n */\nvoid\nclear_nattr(struct pbsnode *pnode, int attr_idx)\n{\n\tif 
(pnode != NULL)\n\t\tclear_attr(get_nattr(pnode, attr_idx), &node_attr_def[attr_idx]);\n}\n\n/**\n * @brief\tSpecial setter func to set node's job info value\n *\n * @param[in]\tpnode - pointer to node\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - pointer to node as value to set\n *\n * @return\tvoid\n */\nvoid\nset_nattr_jinfo(struct pbsnode *pnode, int attr_idx, struct pbsnode *val)\n{\n\tpnode->nd_modified = 1;\n\tattribute *attr = get_nattr(pnode, attr_idx);\n\tattr->at_val.at_jinfo = val;\n\tattr->at_flags = ATR_SET_MOD_MCACHE;\n}\n"
  },
  {
    "path": "src/server/node_func.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n *\n * @brief\n *\t\tvarious functions dealing with nodes, properties and\n *\t\t the following global variables:\n *\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <ctype.h>\n#include <time.h>\n#include <errno.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <fcntl.h>\n#include \"pbs_db.h\"\n#include <unistd.h>\n#include <sys/socket.h>\n#include <netinet/in.h>\n#include <arpa/inet.h>\n#include <netdb.h>\n#include <assert.h>\n#include \"pbs_ifl.h\"\n#include \"libpbs.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"credential.h\"\n#include \"server_limits.h\"\n#include \"batch_request.h\"\n#include \"server.h\"\n#include \"resv_node.h\"\n#include \"job.h\"\n#include \"queue.h\"\n#include \"reservation.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"pbs_error.h\"\n#include \"log.h\"\n#include \"tpp.h\"\n#include \"work_task.h\"\n#include \"net_connect.h\"\n#include \"cmds.h\"\n#include \"pbs_license.h\"\n#include \"pbs_idx.h\"\n#include \"libutil.h\"\n#if !defined(H_ERRNO_DECLARED)\nextern int h_errno;\n#endif\n\n/* Global Data */\n\nextern int svr_quehasnodes;\nextern int svr_totnodes;\nextern pbs_list_head svr_queues;\nextern mominfo_time_t mominfo_time;\nextern 
char *resc_in_err;\nextern void *node_idx;\nextern time_t time_now;\n\nextern int node_delete_db(struct pbsnode *);\nextern pbsnode *recov_node_cb(pbs_db_obj_info_t *, int *);\nextern int check_sign(pbsnode *, attribute *);\nextern void license_one_node(pbsnode *);\nextern int process_topology_info(void **, char *);\nextern void release_lic_for_cray(struct pbsnode *);\nstatic void remove_node_topology(char *);\n\n/**\n * @brief\n * \t\tfind_nodebyname() - find a node host by its name\n * @param[in]\tnodename\t- node being searched\n *\n * @return\tstruct pbsnode *\n * @retval\t!NULL - success\n * @retval\tNULL  - failure\n */\nstruct pbsnode *\nfind_nodebyname(char *nodename)\n{\n\tchar *pslash;\n\tstruct pbsnode *node = NULL;\n\n\tif (nodename == NULL)\n\t\treturn NULL;\n\tif (*nodename == '(')\n\t\tnodename++; /* skip over leading paren */\n\tif ((pslash = strchr(nodename, (int) '/')) != NULL)\n\t\t*pslash = '\\0';\n\tif (node_idx == NULL)\n\t\treturn NULL;\n\tif (pbs_idx_find(node_idx, (void **) &nodename, (void **) &node, NULL) != PBS_IDX_RET_OK)\n\t\treturn NULL;\n\n\treturn node;\n}\n\n/**\n * @brief\n * \t\tfind_nodebyaddr() - find a node host by its addr\n * @param[in]\taddr\t- addr being searched\n *\n * @return\tstruct pbsnode *\n * @retval\tNULL\t- failure\n */\n\nstruct pbsnode *\nfind_nodebyaddr(pbs_net_t addr)\n{\n\tint i, j;\n\tdmn_info_t *pdmninfo;\n\n\tfor (i = 0; i < svr_totnodes; i++) {\n\t\tpdmninfo = pbsndlist[i]->nd_moms[0]->mi_dmn_info;\n\t\tfor (j = 0; pdmninfo->dmn_addrs[j]; j++) {\n\t\t\tif (addr == pdmninfo->dmn_addrs[j]) {\n\t\t\t\treturn (pbsndlist[i]);\n\t\t\t}\n\t\t}\n\t}\n\treturn NULL;\n}\n\n/**\n * @brief\n * \t \tadd status of each requested (or all) node-attribute to the status reply.\n *\t\tif a node-attribute is incorrectly specified, *bad is set to the node-attribute's ordinal position.\n * @see\n * \t\tstatus_node\n * @param[in,out]\tpal\t- list of specific node-attributes to status\n * @param[in]\tpadef \t- \tthe defined node attributes\n * 
@param[out]\tpnode \t- \tpointer to the node being statused\n * @param[in]\tlimit \t- \tnumber of array elements in padef\n * @param[in]\tpriv \t- \trequester's privilege\n * @param[in]\tphead \t- \theads list of svrattrl structs that hang\n * \t\t\t\t\t\t\toff the brp_attr member of the status sub\n * \t\t\t\t\t\t\tstructure in the request's \"reply area\"\n * @param[in]\tbad \t- \tif node-attribute error, record its list position here\n *\n * @return\tint\n * @retval\t0\t\t- success\n * @retval\t!= 0\t- error\n */\n\nint\nstatus_nodeattrib(svrattrl *pal, struct pbsnode *pnode, int limit, int priv, pbs_list_head *phead, int *bad)\n{\n\tint rc = 0; /*return code, 0 == success*/\n\tint index;\n\tint nth; /*tracks list position (ordinal tracker)   */\n\tattribute_def *padef = node_attr_def;\n\n\tpriv &= ATR_DFLAG_RDACC; /* user-client privilege      */\n\n\tif (pal) { /*caller has requested status on specific node-attributes*/\n\t\tnth = 0;\n\t\twhile (pal) {\n\t\t\t++nth;\n\t\t\tindex = find_attr(node_attr_idx, padef, pal->al_name);\n\t\t\tif (index < 0) {\n\t\t\t\t*bad = nth; /* name in this position not found */\n\t\t\t\trc = PBSE_UNKNODEATR;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tif ((padef + index)->at_flags & priv) {\n\t\t\t\trc = (padef + index)->at_encode(get_nattr(pnode, index), phead, (padef + index)->at_name, NULL, ATR_ENCODE_CLIENT, NULL);\n\t\t\t\tif (rc < 0) {\n\t\t\t\t\trc = -rc;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\trc = 0;\n\t\t\t}\n\t\t\tpal = (svrattrl *) GET_NEXT(pal->al_link);\n\t\t}\n\n\t} else {\n\t\t/*\n\t\t **\tnon-specific request,\n\t\t **\treturn all readable attributes\n\t\t */\n\t\tfor (index = 0; index < limit; index++) {\n\t\t\tif ((padef + index)->at_flags & priv) {\n\t\t\t\trc = (padef + index)->at_encode(get_nattr(pnode, index), phead, (padef + index)->at_name, NULL, ATR_ENCODE_CLIENT, NULL);\n\t\t\t\tif (rc < 0) {\n\t\t\t\t\trc = -rc;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\trc = 0;\n\t\t\t}\n\t\t}\n\t}\n\n\treturn (rc);\n}\n\n/**\n * @brief\n * 
\t\tfree_prop_list\n * \t\tFor each element of a null terminated prop list call free\n * \t\tto clean up any string buffer that hangs from the element.\n *\t\tAfter this, call free to remove the struct prop.\n *\n * @param[in,out]\tprop \t- \tprop list\n *\n * @return\tvoid\n */\n\nvoid\nfree_prop_list(struct prop *prop)\n{\n\tstruct prop *pp;\n\n\twhile (prop) {\n\t\tpp = prop->next;\n\t\tfree(prop->name);\n\t\tprop->name = NULL;\n\t\tfree(prop);\n\t\tprop = pp;\n\t}\n}\n\n/**\n * @brief\n *\t\tinitialize_pbsnode - carries out initialization on a new\n *\t\tpbs node.  The assumption is that all the parameters are valid.\n * @see\n * \t\tcreate_pbs_node2\n * @param[out]\tpnode \t- \tnew pbs node\n * @param[in]\tpname\t- \tnode name\n * @param[in]\tntype \t- \ttime-shared or cluster\n *\n * @return\tint\n * @retval\tPBSE_NONE\t- success\n */\n\nint\ninitialize_pbsnode(struct pbsnode *pnode, char *pname, int ntype)\n{\n\tint i;\n\tattribute *pat1;\n\tattribute *pat2;\n\tresource_def *prd;\n\tresource *presc;\n\n\tpnode->nd_name = pname;\n\tpnode->nd_ntype = ntype;\n\tpnode->nd_nsn = 0;\n\tpnode->nd_nsnfree = 0;\n\tpnode->nd_svrflags = 0;\n\tpnode->nd_ncpus = 1;\n\tpnode->nd_psn = NULL;\n\tpnode->nd_hostname = NULL;\n\tpnode->nd_state = INUSE_UNKNOWN | INUSE_DOWN;\n\tpnode->nd_resvp = NULL;\n\tpnode->nd_pque = NULL;\n\tpnode->nd_nummoms = 0;\n\tpnode->nd_svrflags |= NODE_NEWOBJ;\n\tpnode->nd_lic_info = NULL;\n\tpnode->nd_modified = 0;\n\tpnode->nd_moms = (mominfo_t **) calloc(1, sizeof(mominfo_t *));\n\tif (pnode->nd_moms == NULL)\n\t\treturn (PBSE_SYSTEM);\n\tpnode->nd_nummslots = 1;\n\n\t/* first, clear the attributes */\n\n\tfor (i = 0; i < ND_ATR_LAST; i++)\n\t\tclear_attr(get_nattr(pnode, i), &node_attr_def[i]);\n\n\t/* then, setup certain attributes */\n\tset_nattr_l_slim(pnode, ND_ATR_state, pnode->nd_state, SET);\n\n\tset_nattr_short_slim(pnode, ND_ATR_ntype, pnode->nd_ntype, SET);\n\n\tset_nattr_jinfo(pnode, ND_ATR_jobs, pnode);\n\tset_nattr_jinfo(pnode, 
ND_ATR_resvs, pnode);\n\n\tset_nattr_l_slim(pnode, ND_ATR_ResvEnable, 1, SET);\n\t(get_nattr(pnode, ND_ATR_ResvEnable))->at_flags |= ATR_VFLAG_DEFLT;\n\n\tset_nattr_str_slim(pnode, ND_ATR_version, \"unavailable\", NULL);\n\t(get_nattr(pnode, ND_ATR_version))->at_flags |= ATR_VFLAG_DEFLT;\n\n\tset_nattr_l_slim(pnode, ND_ATR_Sharing, VNS_DFLT_SHARED, SET);\n\t(get_nattr(pnode, ND_ATR_Sharing))->at_flags |= ATR_VFLAG_DEFLT;\n\n\tpat1 = get_nattr(pnode, ND_ATR_ResourceAvail);\n\tpat2 = get_nattr(pnode, ND_ATR_ResourceAssn);\n\n\tprd = &svr_resc_def[RESC_ARCH];\n\t(void) add_resource_entry(pat1, prd);\n\n\tprd = &svr_resc_def[RESC_MEM];\n\t(void) add_resource_entry(pat1, prd);\n\n\tprd = &svr_resc_def[RESC_NCPUS];\n\t(void) add_resource_entry(pat1, prd);\n\n\t/* add to resources_assigned any resource with ATR_DFLAG_FNASSN */\n\t/* or  ATR_DFLAG_ANASSN set in the resource definition          */\n\n\tfor (prd = svr_resc_def; prd; prd = prd->rs_next) {\n\t\tif ((prd->rs_flags & (ATR_DFLAG_FNASSN | ATR_DFLAG_ANASSN)) &&\n\t\t    (prd->rs_flags & ATR_DFLAG_MOM)) {\n\t\t\tpresc = add_resource_entry(pat2, prd);\n\t\t\tpresc->rs_value.at_flags = ATR_SET_MOD_MCACHE;\n\t\t}\n\t}\n\n\t/* clear the modify flags */\n\n\tfor (i = 0; i < (int) ND_ATR_LAST; i++)\n\t\t(get_nattr(pnode, i))->at_flags &= ~ATR_VFLAG_MODIFY;\n\treturn (PBSE_NONE);\n}\n\n/**\n * @brief\n * \t\tsubnode_delete - delete the specified subnode\n *\t\tby marking it deleted\n *\n * @see\n * \t\teffective_node_delete and delete_a_subnode\n *\n * @param[in]\tpsubn\t-\tpointer to the subnode to delete\n *\n * @return\tvoid\n */\n\nstatic void\nsubnode_delete(struct pbssubn *psubn)\n{\n\tstruct jobinfo *jip, *jipt;\n\n\tfor (jip = psubn->jobs; jip; jip = jipt) {\n\t\tjipt = jip->next;\n\t\tfree(jip->jobid);\n\t\tfree(jip);\n\t}\n\tpsubn->jobs = NULL;\n\tpsubn->next = NULL;\n\tpsubn->inuse = INUSE_DELETED;\n\tfree(psubn);\n}\n\n/**\n * @brief\n * \t\tRemove the vnode from the list of vnodes of a mom.\n * @see\n * 
\t\teffective_node_delete\n *\n * @param[in]\tpnode\t- Vnode structure\n *\n * @return\tvoid\n */\nstatic void\nremove_vnode_from_moms(struct pbsnode *pnode)\n{\n\tint imom;\n\tint ivnd;\n\n\tmom_svrinfo_t *psvrm;\n\n\tfor (imom = 0; imom < pnode->nd_nummoms; ++imom) {\n\t\tpsvrm = pnode->nd_moms[imom]->mi_data;\n\t\tfor (ivnd = 0; ivnd < psvrm->msr_numvnds; ++ivnd) {\n\t\t\tif (psvrm->msr_children[ivnd] == pnode) {\n\t\t\t\t/* move list down to remove this entry */\n\t\t\t\twhile (ivnd < psvrm->msr_numvnds - 1) {\n\t\t\t\t\tpsvrm->msr_children[ivnd] =\n\t\t\t\t\t\tpsvrm->msr_children[ivnd + 1];\n\t\t\t\t\t++ivnd;\n\t\t\t\t}\n\t\t\t\tpsvrm->msr_children[ivnd] = NULL;\n\t\t\t\t--psvrm->msr_numvnds;\n\t\t\t\tbreak; /* done with this Mom */\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\tremove_mom_from_vnodes - remove this Mom from the list of Moms for any\n *\t\tvnode (after the natural vnode) and remove it from the Mom attribute\n * @see\n * \t\teffective_node_delete\n * @param[in]\tpmom\t-\tMom which needs to be removed\n *\n * @return\tvoid\n */\nstatic void\nremove_mom_from_vnodes(mominfo_t *pmom)\n{\n\tint imom;\n\tint ivnd;\n\tstruct pbsnode *pnode;\n\tmom_svrinfo_t *psvrmom;\n\tattribute tmomattr;\n\n\tpsvrmom = pmom->mi_data;\n\tif (psvrmom->msr_numvnds == 1)\n\t\treturn;\n\n\t/* setup temp \"Mom\" attribute with the host name to remove */\n\tclear_attr(&tmomattr, &node_attr_def[(int) ND_ATR_Mom]);\n\tset_attr_generic(&tmomattr, &node_attr_def[(int) ND_ATR_Mom], pmom->mi_host, NULL, INTERNAL);\n\n\t/* start index \"ivnd\" at 1 to skip natural vnode for this Mom */\n\tfor (ivnd = 1; ivnd < psvrmom->msr_numvnds; ++ivnd) {\n\t\tpnode = psvrmom->msr_children[ivnd];\n\t\tfor (imom = 0; imom < pnode->nd_nummoms; ++imom) {\n\t\t\tif (pnode->nd_moms[imom] == pmom) {\n\t\t\t\t/* move list down to remove this mom */\n\t\t\t\twhile (imom < pnode->nd_nummoms - 1) {\n\t\t\t\t\tpnode->nd_moms[imom] =\n\t\t\t\t\t\tpnode->nd_moms[imom + 
1];\n\t\t\t\t\t++imom;\n\t\t\t\t}\n\t\t\t\tpnode->nd_moms[imom] = NULL;\n\t\t\t\t--pnode->nd_nummoms;\n\t\t\t\t/* remove (decr) Mom host from Mom attribute */\n\t\t\t\t(void) node_attr_def[(int) ND_ATR_Mom].at_set(\n\t\t\t\t\tget_nattr(pnode, ND_ATR_Mom),\n\t\t\t\t\t&tmomattr, DECR);\n\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\tnode_attr_def[(int) ND_ATR_Mom].at_free(&tmomattr);\n}\n\n/**\n * @brief Free a pbsnode structure\n *\n * @param[in] pnode - ptr to pnode to delete\n *\n * @par MT-safe: No\n **/\nvoid\nfree_pnode(struct pbsnode *pnode)\n{\n\tint i;\n\n\tif (!pnode)\n\t\treturn;\n\n\tfree(pnode->nd_name);\n\tfree(pnode->nd_hostname);\n\tfree(pnode->nd_moms);\n\t/* free attributes */\n\tfor (i = 0; i < ND_ATR_LAST; i++) {\n\t\tif (is_attr_set(&pnode->nd_attr[i]))\n\t\t\tnode_attr_def[i].at_free(&pnode->nd_attr[i]);\n\t}\n\tfree(pnode); /* delete the pnode from memory */\n}\n\n/**\n * @brief\n * \t\teffective_node_delete - physically delete a vnode, including its\n *\t\tpbsnode structure, associated attributes, etc., and free the licenses.\n *\t\tThis should not be called if the vnode has jobs running on it.\n *\n * @param[in,out]\tpnode\t-\tvnode structure\n *\n * @return\tvoid\n */\nvoid\neffective_node_delete(struct pbsnode *pnode)\n{\n\tint i;\n\tstruct pbssubn *psubn;\n\tstruct pbssubn *pnxt;\n\tmom_svrinfo_t *psvrmom;\n\tdmn_info_t *pdmninfo;\n\tint iht;\n\tint lic_released = 0;\n\n\tpsubn = pnode->nd_psn;\n\twhile (psubn) {\n\t\tpnxt = psubn->next;\n\t\tsubnode_delete(psubn);\n\t\tpsubn = pnxt;\n\t}\n\n\tremove_from_unlicensed_node_list(pnode);\n\tlic_released = release_node_lic(pnode);\n\n\tif (pnode->nd_nummoms > 1) {\n\t\t/* unlink from mominfo for all parent Moms */\n\t\tremove_vnode_from_moms(pnode);\n\t} else if (pnode->nd_nummoms == 1) {\n\t\tpsvrmom = (mom_svrinfo_t *) (pnode->nd_moms[0]->mi_data);\n\t\tpdmninfo = pnode->nd_moms[0]->mi_dmn_info;\n\t\tif (psvrmom->msr_children[0] == pnode) {\n\t\t\t/*\n\t\t\t * This is the \"natural\" vnode for a 
Mom\n\t\t\t * must mean for the Mom to go away also\n\t\t\t * first remove from any vnode pool\n\t\t\t */\n\t\t\tremove_mom_from_pool(pnode->nd_moms[0]);\n\n\t\t\t/* Then remove this MoM from any other vnode she manages */\n\t\t\tremove_mom_from_vnodes(pnode->nd_moms[0]);\n\n\t\t\t/* then delete the Mom */\n\t\t\tfor (i = 0; pdmninfo->dmn_addrs[i]; i++) {\n\t\t\t\tu_long ipaddr = pdmninfo->dmn_addrs[i];\n\t\t\t\tif (ipaddr)\n\t\t\t\t\tdelete_iplist_element(pbs_iplist, ipaddr);\n\t\t\t}\n\t\t\tdelete_svrmom_entry(pnode->nd_moms[0]);\n\t\t\tpnode->nd_moms[0] = NULL; /* since we deleted the mom */\n\t\t} else {\n\t\t\t/* unlink from mominfo of parent Moms */\n\t\t\tremove_vnode_from_moms(pnode);\n\t\t}\n\t}\n\n\t/* set the nd_moms to NULL before calling save */\n\tif (pnode->nd_moms)\n\t\tfree(pnode->nd_moms);\n\tpnode->nd_moms = NULL;\n\n\tDBPRT((\"Deleting node %s from database\\n\", pnode->nd_name))\n\tnode_delete_db(pnode);\n\n\tremove_node_topology(pnode->nd_name);\n\n\t/* delete the node from the node tree as well as the node array */\n\tif (node_idx != NULL)\n\t\tpbs_idx_delete(node_idx, pnode->nd_name);\n\n\tfor (iht = pnode->nd_arr_index + 1; iht < svr_totnodes; iht++) {\n\t\tpbsndlist[iht - 1] = pbsndlist[iht];\n\t\t/* adjust the arr_index since we are coalescing elements */\n\t\tpbsndlist[iht - 1]->nd_arr_index--;\n\t}\n\tsvr_totnodes--;\n\tfree_pnode(pnode);\n\n\tif (lic_released)\n\t\tlicense_nodes();\n}\n\n/**\n * @brief\n *\tsetup_notification -  Sets up the  mechanism for notifying\n *\tother members of the server's node pool that a new node was added\n *\tmanually via qmgr.\n *\tThe IS_CLUSTER_ADDRS message is only sent to the existing Moms.\n * @see\n * \t\tmgr_node_create\n *\n * @return\tvoid\n */\nvoid\nsetup_notification()\n{\n\tint i;\n\tint nmom;\n\tstatic time_t addr_send_tm = 0;\n\n\tfor (i = 0; i < svr_totnodes; i++) {\n\t\tif (pbsndlist[i]->nd_state & INUSE_DELETED)\n\t\t\tcontinue;\n\n\t\tset_vnode_state(pbsndlist[i], INUSE_DOWN, 
Nd_State_Or);\n\t\tpost_attr_set(get_nattr(pbsndlist[i], ND_ATR_state));\n\t\tfor (nmom = 0; nmom < pbsndlist[i]->nd_nummoms; ++nmom) {\n\t\t\t((pbsndlist[i]->nd_moms[nmom]->mi_dmn_info))->dmn_state |= INUSE_NEED_ADDRS;\n\t\t}\n\t}\n\n\t/* schedule IS_CLUSTER_ADDRS to be sent within the next 2 seconds */\n\tif (addr_send_tm <= time_now) {\n\t\taddr_send_tm = time_now + MCAST_WAIT_TM;\n\t\tstruct work_task *ptask = set_task(WORK_Timed, addr_send_tm, mcast_msg, NULL);\n\t\tptask->wt_aux = IS_CLUSTER_ADDRS;\n\t}\n}\n\n/**\n * @brief\n * \t\tprocess_host_name_part - actually processes the node name part of the form\n *\t\tnode[:ts|:gl]\n *\t\tchecks the node type and rechecks against the ntype attribute which\n *\t\tmay be in the attribute list given by plist\n * @see\n * \t\tcreate_pbs_node2\n *\n * @param[in]\t\tobjname\t-\tname of the node to be created\n * @param[in]\t\tplist\t-\tattribute list, checked for an ntype entry\n * @param[out]\t\tpname\t-\tnode name w/o any :ts\n * @param[out]\t\tntype\t-\tnode type, time-shared, not\n *\n * @return\tint\n * @retval\t0\t- success\n */\nint\nprocess_host_name_part(char *objname, svrattrl *plist, char **pname, int *ntype)\n{\n\tattribute lattr;\n\tchar *pnodename; /*caller supplied node name */\n\tint len;\n\n\tlen = strlen(objname);\n\tif (len == 0)\n\t\treturn (PBSE_UNKNODE);\n\n\tpnodename = strdup(objname);\n\n\tif (pnodename == NULL)\n\t\treturn (PBSE_SYSTEM);\n\n\t*ntype = NTYPE_PBS;\n\tif (len >= 3) {\n\t\tif (!strcmp(&pnodename[len - 3], \":ts\")) {\n\t\t\tpnodename[len - 3] = '\\0';\n\t\t}\n\t}\n\t*pname = pnodename; /* return node name\t  */\n\n\tif ((*ntype == NTYPE_PBS) && (plist != NULL)) {\n\t\t/* double check type */\n\t\twhile (plist) {\n\t\t\tif (!strcasecmp(plist->al_name, ATTR_NODE_ntype))\n\t\t\t\tbreak;\n\t\t\tplist = (svrattrl *) GET_NEXT(plist->al_link);\n\t\t}\n\t\tif (plist) {\n\t\t\tclear_attr(&lattr, &node_attr_def[ND_ATR_ntype]);\n\t\t\t(void) decode_ntype(&lattr, plist->al_name, 0, plist->al_value);\n\t\t\t*ntype = (int) 
lattr.at_val.at_short;\n\t\t}\n\t}\n\n\treturn (0); /* function successful    */\n}\n\nstatic char *nodeerrtxt = \"Node description file update failed\";\n\n/**\n * @brief\n *\t\tStatic function to update the specified mom in the db.\n *\n * @see\n * \t\tsave_nodes_db, save_nodes_db_inner\n *\n * @return\terror code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nstatic int\nsave_nodes_db_mom(mominfo_t *pmom)\n{\n\tstruct pbsnode *np;\n\tpbs_list_head wrtattr;\n\tmom_svrinfo_t *psvrm;\n\tint nchild;\n\n\tCLEAR_HEAD(wrtattr);\n\n\tif (pmom == NULL)\n\t\treturn -1;\n\n\tpsvrm = (mom_svrinfo_t *) pmom->mi_data;\n\tfor (nchild = 0; nchild < psvrm->msr_numvnds; ++nchild) {\n\t\tnp = psvrm->msr_children[nchild];\n\t\tif (np == NULL)\n\t\t\tcontinue;\n\n\t\tif (np->nd_state & INUSE_DELETED) {\n\t\t\t/* this shouldn't happen, if it does, ignore it */\n\t\t\tcontinue;\n\t\t}\n\n\t\tif (np->nd_modified) {\n\t\t\tif (node_save_db(np) != 0) {\n\t\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER, LOG_WARNING, \"nodes\", nodeerrtxt);\n\t\t\t\treturn (-1);\n\t\t\t}\n\t\t\tnp->nd_modified = 0;\n\t\t}\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tStatic function to update all the nodes in the db\n *\n * @see\n * \t\tsave_nodes_db_mom\n *\n * @return\terror code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nstatic int\nsave_nodes_db_inner(void)\n{\n\tint i;\n\tpbs_list_head wrtattr;\n\tmominfo_t *pmom;\n\n\t/* for each Mom ... */\n\tCLEAR_HEAD(wrtattr);\n\n\tfor (i = 0; i < mominfo_array_size; ++i) {\n\t\tpmom = mominfo_array[i];\n\t\tif (pmom == NULL)\n\t\t\tcontinue;\n\n\t\tif (save_nodes_db_mom(pmom) == -1)\n\t\t\treturn -1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tWhen called, this function will update\n *\t\tall the nodes in the db. 
It will update the mominfo_time to the db\n *\t\tand save all the nodes.\n *\n *  \tThe updates are done under a single transaction.\n *  \tUpon successful conclusion the transaction is committed.\n *\n * @param[in]\tchangemodtime - flag to change the mom modification time or not\n * @param[in]\tp - when p is set, save the specific mom to the db\n *\t\t    when p is unset, save all the nodes to the db\n *\n * @return\terror code\n * @retval\t-1 - Failure\n * @retval\t 0 - Success\n *\n */\nint\nsave_nodes_db(int changemodtime, void *p)\n{\n\tstruct pbsnode *np;\n\tpbs_db_mominfo_time_t mom_tm = {0, 0};\n\tpbs_db_obj_info_t obj;\n\tint num;\n\tresource *resc;\n\tchar *rname;\n\tresource_def *rscdef;\n\tint i;\n\tmominfo_t *pmom = (mominfo_t *) p;\n\tchar *conn_db_err = NULL;\n\n\tDBPRT((\"%s: entered\\n\", __func__))\n\n\tif (changemodtime) { /* update generation on host-vnode map */\n\t\tif (mominfo_time.mit_time == time(0))\n\t\t\tmominfo_time.mit_gen++;\n\t\telse {\n\t\t\tmominfo_time.mit_time = time(0);\n\t\t\tmominfo_time.mit_gen = 1;\n\t\t}\n\t}\n\n\tif (svr_totnodes == 0 || mominfo_array_size == 0) {\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_ALERT, \"nodes\",\n\t\t\t  \"Server has empty nodes list\");\n\t\treturn (-1);\n\t}\n\n\t/* insert/update the mominfo_time to db */\n\tmom_tm.mit_time = mominfo_time.mit_time;\n\tmom_tm.mit_gen = mominfo_time.mit_gen;\n\tobj.pbs_db_obj_type = PBS_DB_MOMINFO_TIME;\n\tobj.pbs_db_un.pbs_db_mominfo_tm = &mom_tm;\n\n\tif (pbs_db_save_obj(svr_db_conn, &obj, OBJ_SAVE_QS) == 1) {\t   /* no row updated */\n\t\tif (pbs_db_save_obj(svr_db_conn, &obj, OBJ_SAVE_NEW) != 0) /* insert also failed */\n\t\t\tgoto db_err;\n\t}\n\n\tif (pmom) {\n\t\tif (save_nodes_db_mom(pmom) == -1)\n\t\t\tgoto db_err;\n\t} else {\n\t\tif (save_nodes_db_inner() == -1)\n\t\t\tgoto db_err;\n\t}\n\n\t/*\n\t * Clear the ATR_VFLAG_MODIFY bit on each node attribute\n\t * and on the node_group_key resource, for those nodes\n\t * that 
possess a node_group_key resource\n\t */\n\n\tif (is_sattr_set(SVR_ATR_NodeGroupKey))\n\t\trname = get_sattr_str(SVR_ATR_NodeGroupKey);\n\telse\n\t\trname = NULL;\n\n\tif (rname)\n\t\trscdef = find_resc_def(svr_resc_def, rname);\n\telse\n\t\trscdef = NULL;\n\n\tfor (i = 0; i < svr_totnodes; i++) {\n\t\tnp = pbsndlist[i];\n\t\tif (np->nd_state & INUSE_DELETED)\n\t\t\tcontinue;\n\n\t\tfor (num = 0; num < ND_ATR_LAST; num++) {\n\n\t\t\t(get_nattr(np, num))->at_flags &= ~ATR_VFLAG_MODIFY;\n\n\t\t\tif (num == ND_ATR_ResourceAvail)\n\t\t\t\tif (rname != NULL && rscdef != NULL) {\n\t\t\t\t\tif ((resc = find_resc_entry(get_nattr(np, ND_ATR_ResourceAvail), rscdef)))\n\t\t\t\t\t\tresc->rs_value.at_flags &= ~ATR_VFLAG_MODIFY;\n\t\t\t\t}\n\t\t}\n\t}\n\treturn (0);\n\ndb_err:\n\tpbs_db_get_errmsg(PBS_DB_ERR, &conn_db_err);\n\tlog_errf(-1, __func__, \"Unable to save node to the database %s\", conn_db_err ? conn_db_err : \"\");\n\tfree(conn_db_err);\n\tpanic_stop_db();\n\treturn (-1);\n}\n\n/**\n * @brief\n * \t\tinit_prop - allocate and initialize a prop struct\n *\n * @param[in]\tpname\t-\tpoints to the property string\n *\n * @return\tstruct prop *\n */\n\nstruct prop *\ninit_prop(char *pname)\n{\n\tstruct prop *pp;\n\n\tif ((pp = (struct prop *) malloc(sizeof(struct prop))) != NULL) {\n\t\tpp->name = pname;\n\t\tpp->mark = 0;\n\t\tpp->next = 0;\n\t}\n\n\treturn (pp);\n}\n\n/**\n * @brief\n * \t\tcreate_subnode - create a subnode entry and link to parent node\n * @see\n * \t\tmod_node_ncpus, set_nodes, create_pbs_node2\n * @param[in] - pnode -\tparent node.\n * @param[in] - lstsn - Points to the last subnode in the parent node list. 
This\n *\t\t\t\t\t\teliminates the need to find the last node in parent node list.\n *\n * @return\tstruct pbssubn *\n */\nstruct pbssubn *\ncreate_subnode(struct pbsnode *pnode, struct pbssubn *lstsn)\n{\n\tstruct pbssubn *psubn;\n\tstruct pbssubn **nxtsn;\n\n\tpsubn = (struct pbssubn *) malloc(sizeof(struct pbssubn));\n\tif (psubn == NULL) {\n\t\treturn NULL;\n\t}\n\n\t/* initialize the subnode and link into the parent node */\n\n\tpsubn->next = NULL;\n\tpsubn->jobs = NULL;\n\tpsubn->inuse = 0;\n\tpsubn->index = pnode->nd_nsn++;\n\tpnode->nd_nsnfree++;\n\tif ((pnode->nd_state & INUSE_JOB) != 0) {\n\t\t/* set_vnode_state(pnode, INUSE_FREE, Nd_State_Set); */\n\t\t/* removed as part of OS prov fix- this was causing a provisioning\n\t\t * node to lose its INUSE_PROV flag. Prb occurred when OS with low\n\t\t * ncpus booted into OS with higher ncpus.\n\t\t */\n\t\tset_vnode_state(pnode, ~INUSE_JOB, Nd_State_And);\n\t}\n\n\tif (lstsn) /* If not null, then append new subnode directly to the last node */\n\t\tlstsn->next = psubn;\n\telse {\n\t\tnxtsn = &pnode->nd_psn; /* link subnode onto parent node's list */\n\t\twhile (*nxtsn)\n\t\t\tnxtsn = &((*nxtsn)->next);\n\t\t*nxtsn = psubn;\n\t}\n\treturn (psubn);\n}\n\n/**\n * @brief\n *\t\tRead the \"nodes\" information from the database,\n *\t\tcontaining the list of properties for each node.\n *\t\tThe list of nodes is formed with pbsndlist as the head.\n * @see\n * \t\tpbsd_init\n *\n * @return\terror code\n * @retval\t-1\t- Failure\n * @retval\t0\t- Success\n *\n */\nint\nsetup_nodes()\n{\n\tpbs_db_obj_info_t obj;\n\tpbs_db_node_info_t dbnode = {{0}};\n\tpbs_db_mominfo_time_t mom_tm = {0, 0};\n\tint rc;\n\tvoid *conn = (void *) svr_db_conn;\n\tchar *conn_db_err = NULL;\n\n\tDBPRT((\"%s: entered\\n\", __func__))\n\n\tsvr_totnodes = 0;\n\n\t/* Load the mominfo_time from the db */\n\tobj.pbs_db_obj_type = PBS_DB_MOMINFO_TIME;\n\tobj.pbs_db_un.pbs_db_mominfo_tm = &mom_tm;\n\tif (pbs_db_load_obj(svr_db_conn, &obj) == -1) 
{\n\t\tlog_errf(-1, __func__, \"Could not load momtime info\");\n\t\treturn (-1);\n\t}\n\tmominfo_time.mit_time = mom_tm.mit_time;\n\tmominfo_time.mit_gen = mom_tm.mit_gen;\n\n\tobj.pbs_db_obj_type = PBS_DB_NODE;\n\tobj.pbs_db_un.pbs_db_node = &dbnode;\n\n\trc = pbs_db_search(conn, &obj, NULL, (query_cb_t) &recov_node_cb);\n\tif (rc == -1) {\n\t\tpbs_db_get_errmsg(PBS_DB_ERR, &conn_db_err);\n\t\tif (conn_db_err != NULL) {\n\t\t\tlog_errf(-1, __func__, conn_db_err);\n\t\t\tfree(conn_db_err);\n\t\t}\n\t\treturn (-1);\n\t}\n\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tdelete_a_subnode - mark a (last) single subnode entry as deleted\n * @see\n * \t\tmod_node_ncpus\n *\n * @param[in,out]\tpnode\t- parent node list\n *\n * @return\tvoid\n */\nstatic void\ndelete_a_subnode(struct pbsnode *pnode)\n{\n\tstruct pbssubn *psubn;\n\tstruct pbssubn *pprior = 0;\n\n\tpsubn = pnode->nd_psn;\n\n\twhile (psubn->next) {\n\t\tpprior = psubn;\n\t\tpsubn = psubn->next;\n\t}\n\n\t/*\n\t * found last subnode in list for given node, mark it deleted\n\t * note, have to update nd_nsnfree using pnode\n\t * because it points to the real node rather than the copy (pnode)\n\t * and the real node is overwritten by the copy\n\t */\n\n\tif ((psubn->inuse & INUSE_JOB) == 0)\n\t\tpnode->nd_nsnfree--;\n\n\tsubnode_delete(psubn);\n\tif (pprior)\n\t\tpprior->next = NULL;\n}\n\n/**\n * @brief\n * \t\tmod_node_ncpus - when resources_available.ncpus changes, need to\n *\t\tupdate the number of subnodes, creating or deleting as required\n *\n * @param[in,out]\tpnode\t- parent node list\n * @param[in]\t\tncpus\t- resources_available.ncpus\n * @param[in]\t\tactmode\t- value for the \"actmode\" parameter\n *\n * @return\tint\n * @retval\t0\t- success\n */\n\nint\nmod_node_ncpus(struct pbsnode *pnode, long ncpus, int actmode)\n{\n\tlong old_np;\n\tstruct pbssubn *lst_sn;\n\tif ((actmode == ATR_ACTION_NEW) || (actmode == ATR_ACTION_ALTER)) {\n\n\t\tif (ncpus < 0)\n\t\t\treturn PBSE_BADATVAL;\n\t\telse if 
(ncpus == 0)\n\t\t\tncpus = 1; /* ensure at least 1 subnode */\n\n\t\told_np = pnode->nd_nsn;\n\t\tlst_sn = NULL;\n\t\twhile (ncpus != old_np) {\n\n\t\t\tif (ncpus < old_np) {\n\t\t\t\tdelete_a_subnode(pnode);\n\t\t\t\told_np--;\n\t\t\t} else {\n\t\t\t\t/* Store the last subnode of parent node list.\n\t\t\t\t * This removes the need to find the last node of\n\t\t\t\t * parent node's list, in create_subnode().\n\t\t\t\t */\n\t\t\t\tlst_sn = create_subnode(pnode, lst_sn);\n\t\t\t\told_np++;\n\t\t\t}\n\t\t}\n\t\tpnode->nd_nsn = old_np;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tfix_indirect_resc_targets - set or clear the ATR_VFLAG_TARGET flag in\n * \t\ta target resource. \"index\" is the index into the node's attribute\n * \t\tarray (which attr). If invoked with ND_ATR_ResourceAvail or\n * \t\tND_ATR_ResourceAssn, the target flag is applied to both. We need\n * \t\tto do this as the check for the target flag in fix_indirectness relies\n * \t\ton resources_assigned, since resources_available has already been overwritten.\n *\n * @param[out]\tpsourcend\t- Vnode structure\n * @param[in]\tpsourcerc\t- target resource\n * @param[in]\tindex\t\t- index into the node's attribute array\n * @param[in]\tset\t\t\t- decides set or unset.\n *\n * @return\tint\n * @retval\t-1\t- error\n * @retval\t0\t- success\n */\nint\nfix_indirect_resc_targets(struct pbsnode *psourcend, resource *psourcerc, int index, int set)\n{\n\tchar *nname;\n\tchar *pn;\n\tstruct pbsnode *pnode;\n\tresource *ptargetrc;\n\n\tif (psourcend)\n\t\tnname = psourcend->nd_name;\n\telse\n\t\tnname = \" \";\n\n\tpn = psourcerc->rs_value.at_val.at_str;\n\tif ((pn == NULL) ||\n\t    (*pn != '@') ||\n\t    ((pnode = find_nodebyname(pn + 1)) == NULL)) {\n\t\tsprintf(log_buffer,\n\t\t\t\"resource %s on vnode points to invalid vnode %s\",\n\t\t\tpsourcerc->rs_defin->rs_name, pn);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_NODE, LOG_CRIT,\n\t\t\t  nname, log_buffer);\n\t\treturn -1;\n\t}\n\n\tptargetrc = 
find_resc_entry(get_nattr(pnode, index), psourcerc->rs_defin);\n\tif (ptargetrc == NULL) {\n\t\tsprintf(log_buffer, \"resource %s on vnode points to missing resource on vnode %s\", psourcerc->rs_defin->rs_name, pn + 1);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_NODE, LOG_CRIT,\n\t\t\t  nname, log_buffer);\n\t\treturn -1;\n\t} else {\n\t\tif (set)\n\t\t\tptargetrc->rs_value.at_flags |= ATR_VFLAG_TARGET;\n\t\telse\n\t\t\tptargetrc->rs_value.at_flags &= ~ATR_VFLAG_TARGET;\n\n\t\tif (index == ND_ATR_ResourceAvail)\n\t\t\tindex = ND_ATR_ResourceAssn;\n\t\telse\n\t\t\tindex = ND_ATR_ResourceAvail;\n\n\t\tptargetrc = find_resc_entry(get_nattr(pnode, index), psourcerc->rs_defin);\n\t\tif (!ptargetrc) {\n\t\t\t/* For unset, if the avail/assign counterpart is null, just return without creating the resource.\n\t\t\t * This happens only during the node clean-up stage */\n\t\t\tif (!set || index == ND_ATR_ResourceAvail)\n\t\t\t\treturn 0;\n\t\t\tptargetrc = add_resource_entry(get_nattr(pnode, index), psourcerc->rs_defin);\n\t\t\tif (!ptargetrc)\n\t\t\t\treturn PBSE_SYSTEM;\n\t\t}\n\n\t\tif (set)\n\t\t\tptargetrc->rs_value.at_flags |= ATR_VFLAG_TARGET;\n\t\telse\n\t\t\tptargetrc->rs_value.at_flags &= ~ATR_VFLAG_TARGET;\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tindirect_target_check - called via a work task to (re)set ATR_VFLAG_TARGET\n *\t\tin any resource which is the target of another indirect resource\n *\n *\t\tThis covers the cases where a target node might not have been set up\n *\t\ton Server recovery/startup.\n * @see\n * \t\tfix_indirectness and mgr_unset_attr\n *\n * @param[in]\tptask\t-\twork task structure\n *\n * @return\tvoid\n */\n\nvoid\nindirect_target_check(struct work_task *ptask)\n{\n\tint i;\n\tstruct pbsnode *pnode;\n\tresource *presc;\n\n\tfor (i = 0; i < svr_totnodes; i++) {\n\t\tpnode = pbsndlist[i];\n\t\tif (pnode->nd_state & INUSE_DELETED ||\n\t\t    pnode->nd_state & INUSE_STALE)\n\t\t\tcontinue;\n\t\tif (is_nattr_set(pnode, 
ND_ATR_ResourceAvail)) {\n\t\t\tfor (presc = (resource *) GET_NEXT(get_nattr_list(pnode, ND_ATR_ResourceAvail));\n\t\t\t     presc;\n\t\t\t     presc = (resource *) GET_NEXT(presc->rs_link)) {\n\n\t\t\t\tif (presc->rs_value.at_flags & ATR_VFLAG_INDIRECT) {\n\t\t\t\t\tfix_indirect_resc_targets(pnode, presc, (int) ND_ATR_ResourceAvail, 1);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\tfix_indirectness - check if a member of a node's resources_available is\n *\t\tbecoming indirect (points to another node) or was indirect and is\n *\t\tbecoming direct.\n *\n *\t\tIf becoming indirect, check that the target node is known (unless just\n *\t\trecovering) and that the target resource itself is not indirect.\n *\n *\t\tIf \"doit\" is true, then and only then make the needed changes in\n *\t\tresources_available and resources_assigned.\n * @see\n * \t\tnode_np_action and update2_to_vnode\n *\n * @param[in]\t\tpresc\t-\tpointer to resource structure\n * @param[in,out]\tpnode\t-\tthe node for checking.\n * @param[in]\t\tdoit\t-\tIf \"doit\" is true, then and only then make the needed changes in\n * \t\t\t\t\t\t\t\tresources_available and resources_assigned.\n *\n * @return\tint\n * @retval\t0\t- success\n * @retval\tnonzero\t- failure\n *\n */\nint\nfix_indirectness(resource *presc, struct pbsnode *pnode, int doit)\n{\n\tint consumable;\n\tresource *presc_avail; /* resource available */\n\tresource *presc_assn;  /* resource assigned  */\n\tstruct pbssubn *psn;\n\tstruct pbsnode *ptargetnd; /* target node\t\t  */\n\tresource *ptargetrc;\t   /* target resource avail  */\n\tstruct resource_def *prdef;\n\tint recover_ok;\n\tint run_safety_check = 0;\n\n\tprdef = presc->rs_defin;\n\n\trecover_ok = (get_sattr_long(SVR_ATR_State) == SV_STATE_INIT); /* if true, then recovering and targets may not yet be there */\n\tconsumable = prdef->rs_flags & (ATR_DFLAG_ANASSN | ATR_DFLAG_FNASSN);\n\tpresc_avail = find_resc_entry(get_nattr(pnode, 
ND_ATR_ResourceAvail), prdef);\n\tpresc_assn = find_resc_entry(get_nattr(pnode, ND_ATR_ResourceAssn), prdef);\n\n\tif (doit == 0) { /* check for validity only this pass */\n\n\t\tif (presc->rs_value.at_flags & ATR_VFLAG_INDIRECT) {\n\n\t\t\t/* disallow change if vnode has running jobs */\n\t\t\tfor (psn = pnode->nd_psn; psn; psn = psn->next) {\n\t\t\t\tif (psn->jobs != NULL)\n\t\t\t\t\treturn PBSE_OBJBUSY;\n\t\t\t}\n\n\t\t\t/* setting this resource to be indirect, make several checks */\n\n\t\t\t/* this vnode may not be a target of another indirect */\n\t\t\tif (presc_assn) {\n\t\t\t\tif (presc_assn->rs_value.at_flags & ATR_VFLAG_TARGET) {\n\t\t\t\t\tif ((resc_in_err = strdup(presc_assn->rs_defin->rs_name)) == NULL)\n\t\t\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t\t\treturn PBSE_INDIRECTHOP;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/* target vnode must be known unless the Server is recovering */\n\t\t\t/* the value (at_str) is \"@vnodename\", so skip over the '@'   */\n\t\t\tptargetnd = find_nodebyname(presc->rs_value.at_val.at_str + 1);\n\t\t\tif (ptargetnd == NULL) {\n\t\t\t\tif (!recover_ok)\n\t\t\t\t\treturn (PBSE_UNKNODE);\n\t\t\t} else {\n\n\t\t\t\t/* target resource must exist */\n\t\t\t\tptargetrc = find_resc_entry(get_nattr(ptargetnd, ND_ATR_ResourceAvail), prdef);\n\t\t\t\tif (pnode == ptargetnd) {\n\t\t\t\t\t/* target node may not be itself  */\n\t\t\t\t\tif ((resc_in_err = strdup(prdef->rs_name)) == NULL)\n\t\t\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t\t\treturn PBSE_INDIRECTHOP;\n\t\t\t\t} else if (ptargetrc == NULL) {\n\t\t\t\t\tif ((resc_in_err = strdup(prdef->rs_name)) == NULL)\n\t\t\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t\t\treturn (PBSE_INDIRECTBT);\n\t\t\t\t} else {\n\t\t\t\t\tif (ptargetrc->rs_value.at_flags & ATR_VFLAG_INDIRECT) {\n\t\t\t\t\t\t/* target cannot be indirect itself */\n\t\t\t\t\t\tif ((resc_in_err = strdup(ptargetrc->rs_defin->rs_name)) == NULL)\n\t\t\t\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t\t\t\treturn PBSE_INDIRECTHOP;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\t/* if 
consumable, insure resource exists in this node's */\n\t\t\t\t/* resources_assigned */\n\n\t\t\t\tif (consumable) {\n\t\t\t\t\tptargetrc = add_resource_entry(get_nattr(pnode, ND_ATR_ResourceAssn), prdef);\n\t\t\t\t\tif (ptargetrc == NULL)\n\t\t\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t\t}\n\t\t\t}\n\n\t\t} else {\n\n\t\t\t/* new is not indirect, was the original?\n\t\t\t* we are using resource-assigned to identify that the resource\n\t\t\t* was an indirect resource because attribute's set function has\n\t\t\t* already changed resources-available */\n\n\t\t\tif (presc_assn) {\n\t\t\t\tif (presc_assn->rs_value.at_flags & ATR_VFLAG_INDIRECT) {\n\t\t\t\t\t/* disallow change if vnode has running jobs */\n\t\t\t\t\tfor (psn = pnode->nd_psn; psn; psn = psn->next) {\n\t\t\t\t\t\tif (psn->jobs != NULL)\n\t\t\t\t\t\t\treturn PBSE_OBJBUSY;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\treturn PBSE_NONE;\n\n\t} else {\n\n\t\t/*\n\t\t * In this pass,  actually do the required changes:\n\t\t * If setting...\n\t\t *\t- set ATR_VFLAG_TARGET on the target resource entry\n\t\t *  - change the paired Resource_Assigned entry to also be indirect\n\t\t * If unsetting...\n\t\t *\t- clear ATR_VFLAG_TARGET on the old target resource\n\t\t *  - change the paired Resource_Assigned entry to be direct\n\t\t */\n\n\t\tif (presc->rs_value.at_flags & ATR_VFLAG_INDIRECT) {\n\t\t\tint rc;\n\n\t\t\t/* setting to be indirect */\n\t\t\trc = fix_indirect_resc_targets(pnode, presc, (int) ND_ATR_ResourceAvail, 1);\n\t\t\tif (rc == PBSE_SYSTEM)\n\t\t\t\treturn rc;\n\t\t\telse if (rc == -1)\n\t\t\t\trun_safety_check = 1; /* need to set after nodes done */\n\n\t\t\tif (consumable && (presc_assn != NULL)) {\n\t\t\t\tprdef->rs_free(&presc_assn->rs_value); /* free first */\n\t\t\t\t(void) decode_str(&presc_assn->rs_value, NULL, NULL,\n\t\t\t\t\t\t  presc->rs_value.at_val.at_str);\n\t\t\t\tpresc_assn->rs_value.at_flags |= ATR_VFLAG_INDIRECT;\n\t\t\t}\n\n\t\t} else if (presc_avail && presc_assn && 
(presc_assn->rs_value.at_flags & ATR_VFLAG_INDIRECT)) {\n\t\t\t/* unsetting an old indirect reference */\n\t\t\t/* Clear ATR_VFLAG_TARGET on target vnode */\n\t\t\t(void) fix_indirect_resc_targets(pnode, presc_assn, (int) ND_ATR_ResourceAssn, 0);\n\t\t\tpresc_avail->rs_value.at_flags &= ~ATR_VFLAG_INDIRECT;\n\t\t\tif (consumable) {\n\t\t\t\tfree_str(&presc_assn->rs_value);\n\t\t\t\tprdef->rs_decode(&presc_assn->rs_value, NULL, NULL, NULL);\n\t\t\t\tpresc_assn->rs_value.at_flags &= ~ATR_VFLAG_INDIRECT;\n\t\t\t}\n\t\t\trun_safety_check = 1;\n\t\t}\n\n\t\tif (run_safety_check) /* double check TARGET bit on targets */\n\t\t\t(void) set_task(WORK_Immed, 0, indirect_target_check, NULL);\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tnode_np_action - action routine for a node's resources_available attribute\n *\t\tDoes several things:\n *\t\t1. prohibits resources_available.hosts from being changed;\n *\t\t2. when resources_available.ncpus (np in nodes file) changes,\n *\t   \tupdate the subnode structures;\n *\t\t3. For any modified resource, check if it is changing \"indirectness\"\n *\n * @param[in]\tnew\t-\tnewly changed resources_available\n * @param[in]\tpobj\t-\tpointer to a pbsnode struct\n * @param[in]\tactmode\t-\taction mode: \"NEW\" or \"ALTER\"\n *\n * @return\tint\n * @retval\t0\t- success\n * @retval\tnonzero\t- error\n */\n\nint\nnode_np_action(attribute *new, void *pobj, int actmode)\n{\n\tint err;\n\tstruct pbsnode *pnode = (struct pbsnode *) pobj;\n\tresource_def *prdef;\n\tresource *presc;\n\tlong new_np;\n\n\tif (actmode == ATR_ACTION_FREE) /* cannot unset resources_available */\n\t\treturn (PBSE_IVALREQ);\n\n\t/* 1. 
prevent change of \"host\" or \"vnode\" */\n\tprdef = &svr_resc_def[RESC_HOST];\n\tpresc = find_resc_entry(new, prdef);\n\tif ((presc != NULL) &&\n\t    (presc->rs_value.at_flags & ATR_VFLAG_MODIFY)) {\n\t\tif (actmode != ATR_ACTION_NEW)\n\t\t\treturn (PBSE_ATTRRO);\n\t}\n\tprdef = &svr_resc_def[RESC_VNODE];\n\tpresc = find_resc_entry(new, prdef);\n\tif ((presc != NULL) &&\n\t    (presc->rs_value.at_flags & ATR_VFLAG_MODIFY)) {\n\t\tif (actmode != ATR_ACTION_NEW)\n\t\t\treturn (PBSE_ATTRRO);\n\t}\n\t/* prevent change of \"aoe\" */\n\tprdef = &svr_resc_def[RESC_AOE];\n\tpresc = find_resc_entry(new, prdef);\n\tif ((presc != NULL) &&\n\t    (presc->rs_value.at_flags & ATR_VFLAG_MODIFY)) {\n\t\tif (pnode->nd_state & (INUSE_PROV | INUSE_WAIT_PROV))\n\t\t\treturn (PBSE_NODEPROV_NOACTION);\n\t\tif (is_nattr_set(pnode, ND_ATR_Mom) && (!compare_short_hostname(\n\t\t\t\t\t\t\t       (get_nattr_arst(pnode, ND_ATR_Mom))->as_string[0],\n\t\t\t\t\t\t\t       server_host)))\n\t\t\treturn (PBSE_PROV_HEADERROR);\n\t}\n\n\t/* 2. If changing ncpus, fix subnodes */\n\tprdef = &svr_resc_def[RESC_NCPUS];\n\tpresc = find_resc_entry(new, prdef);\n\n\tif (presc == NULL)\n\t\treturn PBSE_SYSTEM;\n\tif (presc->rs_value.at_flags & ATR_VFLAG_MODIFY) {\n\t\tnew_np = presc->rs_value.at_val.at_long;\n\t\tpresc->rs_value.at_flags &= ~ATR_VFLAG_DEFLT;\n\t\tif ((err = mod_node_ncpus(pnode, new_np, actmode)) != 0)\n\t\t\treturn (err);\n\t}\n\n\tif ((err = check_sign((pbsnode *) pobj, new)) != PBSE_NONE)\n\t\treturn err;\n\n\t/* 3. 
check each entry that is modified to see if it is now   */\n\t/*    becoming an indirect reference or was one and now isn't */\n\t/*    This first pass just validates the changes...\t      */\n\n\tfor (presc = (resource *) GET_NEXT(new->at_val.at_list);\n\t     presc != NULL;\n\t     presc = (resource *) GET_NEXT(presc->rs_link)) {\n\n\t\tif (presc->rs_value.at_flags & ATR_VFLAG_MODIFY)\n\t\t\tif ((err = fix_indirectness(presc, pnode, 0)) != 0)\n\t\t\t\treturn (err);\n\t}\n\n\t/* Now do it again and actually make the needed changes since */\n\t/* there are no errors to worry about\t\t\t     */\n\tfor (presc = (resource *) GET_NEXT(new->at_val.at_list);\n\t     presc != NULL;\n\t     presc = (resource *) GET_NEXT(presc->rs_link)) {\n\t\tif (presc->rs_value.at_flags & ATR_VFLAG_MODIFY)\n\t\t\t(void) fix_indirectness(presc, pnode, 1);\n\t}\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * \t\tnode_pcpu_action - action routine for node's pcpus (physical) resource\n *\n * @param[in]\tnew\t-\t\tderive props into this attribute\n * @param[in]\tpobj\t-\tpointer to a pbsnode struct\n * @param[in]\tactmode\t-\taction mode: \"NEW\" or \"ALTER\"\n *\n * @return\tint\n * @retval\t0\t- success\n * @retval\tnonzero\t- error\n */\n\nint\nnode_pcpu_action(attribute *new, void *pobj, int actmode)\n{\n\n\tstruct pbsnode *pnode = (struct pbsnode *) pobj;\n\tresource_def *prd;\n\tresource *prc;\n\tlong new_np;\n\n\t/* get new value of pcpus */\n\tnew_np = new->at_val.at_long;\n\tpnode->nd_ncpus = new_np;\n\n\t/* now get ncpus */\n\tprd = &svr_resc_def[RESC_NCPUS];\n\tprc = find_resc_entry(get_nattr(pnode, ND_ATR_ResourceAvail), prd);\n\tif (prc == 0) {\n\t\treturn (0); /* if this error happens - ignore it */\n\t}\n\tif (((is_attr_set(&prc->rs_value)) == 0) ||\n\t    ((prc->rs_value.at_flags & ATR_VFLAG_DEFLT) != 0)) {\n\t\tif (prc->rs_value.at_val.at_long != new_np) {\n\t\t\tprc->rs_value.at_val.at_long = new_np;\n\t\t\tprc->rs_value.at_flags |= ATR_SET_MOD_MCACHE | 
ATR_VFLAG_DEFLT;\n\t\t\treturn (mod_node_ncpus(pnode, new_np, actmode));\n\t\t}\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tmark_which_queues_have_nodes()\n *\n *\t\tMark the queue header for queues that have nodes associated with\n *\t\tthem.  This is used when looking for nodes for jobs that are in\n *\t\tsuch a queue.\n *\n * @see\n * \t\tnode_queue_action, pbsd_init and mgr_node_unset.\n *\n * @return\tvoid\n */\n\nvoid\nmark_which_queues_have_nodes()\n{\n\tint i;\n\tpbs_queue *pque;\n\n\t/* clear \"has node\" flag in all queues */\n\n\tsvr_quehasnodes = 0;\n\n\tpque = (pbs_queue *) GET_NEXT(svr_queues);\n\twhile (pque != NULL) {\n\t\tset_qattr_l_slim(pque, QE_ATR_HasNodes, 0, SET);\n\t\tATR_UNSET(get_qattr(pque, QE_ATR_HasNodes));\n\t\tpque = (pbs_queue *) GET_NEXT(pque->qu_link);\n\t}\n\n\t/* now (re)set flag for those queues that do have nodes */\n\n\tfor (i = 0; i < svr_totnodes; i++) {\n\t\tif (pbsndlist[i]->nd_pque) {\n\t\t\tset_qattr_l_slim(pbsndlist[i]->nd_pque, QE_ATR_HasNodes, 1, SET);\n\t\t\tsvr_quehasnodes = 1;\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\tnode_queue_action - action routine for nodes when \"queue\" attribute set\n *\n * @param[in]\tpattr\t-\tattribute\n * @param[in]\tpobj\t-\tpointer to a pbsnode struct\n * @param[in]\tactmode\t-\taction mode: \"NEW\" or \"ALTER\"\n *\n * @return\tint\n * @retval\t0\t- success\n * @retval\tnonzero\t- error\n */\n\nint\nnode_queue_action(attribute *pattr, void *pobj, int actmode)\n{\n\tstruct pbsnode *pnode;\n\tpbs_queue *pq;\n\n\tpnode = (struct pbsnode *) pobj;\n\n\tif (is_attr_set(pattr)) {\n\n\t\tpq = find_queuebyname(pattr->at_val.at_str);\n\t\tif (pq == 0) {\n\t\t\treturn (PBSE_UNKQUE);\n\t\t} else if (pq->qu_qs.qu_type != QTYPE_Execution) {\n\t\t\treturn (PBSE_ATTRTYPE);\n\t\t} else if (is_qattr_set(pq, QA_ATR_partition) &&\n\t\t\t   is_nattr_set(pnode, ND_ATR_partition) &&\n\t\t\t   strcmp(get_qattr_str(pq, QA_ATR_partition), get_nattr_str(pnode, ND_ATR_partition))) {\n\t\t\treturn 
PBSE_PARTITION_NOT_IN_QUE;\n\t\t} else {\n\t\t\tpnode->nd_pque = pq;\n\t\t}\n\t} else {\n\t\tpnode->nd_pque = NULL;\n\t}\n\tmark_which_queues_have_nodes();\n\treturn 0;\n}\n/**\n * @brief\n * \t\tset_node_host_name returns 0 if actmode is ATR_ACTION_NEW or ATR_ACTION_RECOV, otherwise PBSE_ATTRRO\n */\nint\nset_node_host_name(attribute *pattr, void *pobj, int actmode)\n{\n\tif (actmode == ATR_ACTION_NEW || actmode == ATR_ACTION_RECOV)\n\t\treturn 0;\n\telse\n\t\treturn PBSE_ATTRRO;\n}\n/**\n * @brief\n * \t\tset_node_mom_port returns 0 if actmode is ATR_ACTION_NEW or ATR_ACTION_RECOV, otherwise PBSE_ATTRRO\n */\nint\nset_node_mom_port(attribute *pattr, void *pobj, int actmode)\n{\n\tif (actmode == ATR_ACTION_NEW || actmode == ATR_ACTION_RECOV)\n\t\treturn 0;\n\telse\n\t\treturn PBSE_ATTRRO;\n}\n\n/**\n * @brief\n * \t\tReturns true (1) if none of the following bits are set:\n *\t\t\tOFFLINE, OFFLINE_BY_MOM, DOWN, DELETED, STALE\n *\t\totherwise return false (0) for the node being \"down\"\n *\n * @param[in]\tnodename - name of the node to check\n *\n * @return int\n * @retval 0\t- means vnode is not up\n * @retval 1\t- means vnode is up\n */\n\nint\nis_vnode_up(char *nodename)\n{\n\tstruct pbsnode *np;\n\n\tnp = find_nodebyname(nodename);\n\tif ((np == NULL) ||\n\t    ((np->nd_state & (INUSE_OFFLINE | INUSE_OFFLINE_BY_MOM | INUSE_DOWN | INUSE_DELETED | INUSE_STALE)) != 0))\n\t\treturn 0; /* vnode is not up */\n\telse\n\t\treturn 1; /* vnode is up */\n}\n\n/**\n * @brief\n * \t\tdecode_Mom_list - decode a comma-separated string which specifies a list of Mom/host\n *\t\tnames into an attr of type ATR_TYPE_ARST\n *\t\tEach host name is fully qualified before being added into the array\n *\n * @param[in,out]\tpatr\t-\tattribute\n * @param[in]\t\tname\t-\tattribute name\n * @param[in]\trescn\t-\tresource name, unused here.\n * @param[in]\tval\t-\tcomma separated string of substrings\n *\n *\tReturns: 0 if ok,\n *\t\t>0 error number if an error occurred,\n *\t\t*patr members set\n */\n\nint\ndecode_Mom_list(attribute *patr, char *name, char 
*rescn, char *val)\n{\n\tint rc;\n\tint ns;\n\tint i = 0;\n\tchar *p;\n\tchar buf[PBS_MAXHOSTNAME + 1];\n\tstatic char **str_arr = NULL;\n\tstatic long int str_arr_len = 0;\n\tattribute new;\n\tstruct sockaddr_in check_ip;\n\tint is_node_name_ip;\n\n\tif ((val == NULL) || (strlen(val) == 0) || count_substrings(val, &ns)) {\n\t\tnode_attr_def[(int) ND_ATR_Mom].at_free(patr);\n\t\tclear_attr(patr, &node_attr_def[(int) ND_ATR_Mom]);\n\t\t/* ATTR_VFLAG_SET is cleared now */\n\t\tpatr->at_flags &= ATR_MOD_MCACHE;\n\t\treturn (0);\n\t}\n\n\tif (is_attr_set(patr)) {\n\t\tnode_attr_def[(int) ND_ATR_Mom].at_free(patr);\n\t\tclear_attr(patr, &node_attr_def[(int) ND_ATR_Mom]);\n\t}\n\n\tif (str_arr_len == 0) {\n\t\tstr_arr = malloc(((2 * ns) + 1) * sizeof(char *));\n\t\tif (!str_arr)\n\t\t\treturn PBSE_SYSTEM;\n\t\tstr_arr_len = (2 * ns) + 1;\n\t} else if (str_arr_len < ns) {\n\t\tchar **new_str_arr;\n\t\tnew_str_arr = realloc(str_arr, ((2 * ns) + 1) * sizeof(char *));\n\t\t/* str_arr will be untouched if realloc failed */\n\t\tif (!new_str_arr)\n\t\t\treturn PBSE_SYSTEM;\n\t\tstr_arr = new_str_arr;\n\t\tstr_arr_len = (2 * ns) + 1;\n\t}\n\t/* Fill the node list into an array. This is done outside the\n\t * second for loop because parse_comma_string() is called again internally by\n\t * decode_arst(), which alters the static variable in parse_comma_string().\n\t */\n\tstr_arr[0] = NULL;\n\tp = parse_comma_string(val);\n\tfor (i = 0; (str_arr[i] = p) != NULL; i++)\n\t\tp = parse_comma_string(NULL);\n\n\tfor (i = 0; (p = str_arr[i]) != NULL; i++) {\n\t\tclear_attr(&new, &node_attr_def[(int) ND_ATR_Mom]);\n\t\tis_node_name_ip = inet_pton(AF_INET, p, &(check_ip.sin_addr));\n\t\tif (is_node_name_ip || get_fullhostname(p, buf, (sizeof(buf) - 1)) != 0) {\n\t\t\tstrncpy(buf, p, (sizeof(buf) - 1));\n\t\t\tbuf[sizeof(buf) - 1] = '\\0';\n\t\t}\n\n\t\trc = decode_arst(&new, ATTR_NODE_Mom, NULL, buf);\n\t\tif (rc != 0)\n\t\t\tcontinue;\n\t\tset_arst(patr, &new, INCR);\n\t\tfree_arst(&new);\n\t}\n\n\treturn (0);\n}\n\n/**\n * 
@brief\n * \t\tremember the node topology information reported by a node's MoM\n *\n * @param[in]\tnode_name\t-\tthe name of the node\n * @param[in]\ttopology\t-\ttopology information from node's MoM\n *\n * @return\tvoid\n *\n * @par MT-Safe:\tno\n *\n * @par Note:\n *\t\tInformation is recorded in the $PBS_HOME/server_priv/node_topology/\n *\t\tdirectory, one file per node.  The information in these files may be\n *\t\tconsumed by the hwloc lstopo command using\n *\t\tlstopo -i <node topology file path>\n *\n * @see\thttp://www.open-mpi.org/projects/hwloc/doc/v1.3/tools.php, lstopo(1)\n */\nstatic void\nrecord_node_topology(char *node_name, char *topology)\n{\n\tchar path[MAXPATHLEN + 1];\n\tint fd;\n\tint topology_len;\n\tstatic char topology_dir[] = \"topology\";\n\tstatic char msg_topologydiroverflow[] = \"unexpected overflow \"\n\t\t\t\t\t\t\"creating node topology \"\n\t\t\t\t\t\t\"directory\";\n\tstatic char msg_mkdirfail[] = \"failed to create topology directory\";\n\tstatic char msg_topologypathoverflow[] = \"unexpected overflow \"\n\t\t\t\t\t\t \"creating node topology \"\n\t\t\t\t\t\t \"file %s\";\n\tstatic char msg_createpathfail[] = \"failed to open path to node \"\n\t\t\t\t\t   \"topology file for node %s\";\n\tstatic char msg_writepathfail[] = \"failed to write node topology \"\n\t\t\t\t\t  \"for node %s\";\n\tstatic char msg_notdir[] = \"topology directory path exists but is \"\n\t\t\t\t   \"not a directory\";\n\tstruct stat sb;\n\n\tif (snprintf(path, sizeof(path), \"%s/server_priv/%s\",\n\t\t     pbs_conf.pbs_home_path,\n\t\t     topology_dir) >= sizeof(path)) {\n\t\tsprintf(log_buffer, \"%s\", msg_topologydiroverflow);\n\t\tlog_event(PBSEVENT_DEBUG3,\n\t\t\t  PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_DEBUG, msg_daemonname,\n\t\t\t  log_buffer);\n\t\treturn;\n\t}\n\tif (stat(path, &sb) == -1) {\n\t\t/* can't stat path - assume it does not exist */\n\t\tif (mkdir(path, S_IRWXU) == -1) {\n\t\t\tsprintf(log_buffer, \"%s\", 
msg_mkdirfail);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\treturn;\n\t\t}\n\t} else if (!S_ISDIR(sb.st_mode)) {\n\t\t/* path exists but is not a directory */\n\t\tsprintf(log_buffer, \"%s\", msg_notdir);\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SERVER, LOG_DEBUG,\n\t\t\t  msg_daemonname, log_buffer);\n\t\treturn;\n\t}\n\n\t/* path exists and is a directory */\n\tif (snprintf(path, sizeof(path), \"%s/server_priv/%s/%s\",\n\t\t     pbs_conf.pbs_home_path,\n\t\t     topology_dir, node_name) >= sizeof(path)) {\n\t\tsprintf(log_buffer, msg_topologypathoverflow, node_name);\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SERVER, LOG_DEBUG,\n\t\t\t  msg_daemonname, log_buffer);\n\t\treturn;\n\t}\n\tif ((fd = open(path, O_CREAT | O_TRUNC | O_WRONLY, S_IRUSR)) == -1) {\n\t\tsprintf(log_buffer, msg_createpathfail,\n\t\t\tnode_name);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\treturn;\n\t}\n\ttopology_len = strlen(topology);\n\tif (write(fd, topology, topology_len) != topology_len) {\n\t\tsprintf(log_buffer, msg_writepathfail, node_name);\n\t\tlog_err(errno, __func__, log_buffer);\n\t}\n\t(void) close(fd);\n}\n\n/**\n * @brief\n * \t\tremove the node topology information for the given node name\n * @see\n * \t\teffective_node_delete\n *\n * @param[in]\tnode_name\t-\tthe name of the node\n *\n * @return\tvoid\n *\n * @par MT-Safe:\tno\n *\n */\nstatic void\nremove_node_topology(char *node_name)\n{\n\tchar path[MAXPATHLEN + 1];\n\tstatic char topology_dir[] = \"topology\";\n\tstatic char msg_topologyfileoverflow[] = \"unexpected overflow \"\n\t\t\t\t\t\t \"removing topology \"\n\t\t\t\t\t\t \"file for node %s\";\n\tstatic char msg_unlinkfail[] = \"unlink of topology file for \"\n\t\t\t\t       \"node %s failed\";\n\n\tif (snprintf(path, sizeof(path), \"%s/server_priv/%s/%s\",\n\t\t     pbs_conf.pbs_home_path,\n\t\t     topology_dir, node_name) >= sizeof(path)) {\n\t\tsprintf(log_buffer, 
msg_topologyfileoverflow,\n\t\t\tnode_name);\n\t\tlog_event(PBSEVENT_DEBUG3,\n\t\t\t  PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_DEBUG, msg_daemonname,\n\t\t\t  log_buffer);\n\t} else if ((unlink(path) == -1) && (errno != ENOENT)) {\n\t\tsprintf(log_buffer, msg_unlinkfail, node_name);\n\t\tlog_err(errno, __func__, log_buffer);\n\t}\n}\n\n/**\n * @brief\n * \t\tset the node topology attribute\n *\n * @param[in]\tnew\t\t-\tpointer to new attribute\n * @param[in]\tpobj\t-\tpointer to parent object of the attribute (in this case,\n *\t\t\t\t\t\t\ta pbsnode)\n * @param[in]\top\t\t-\tthe attribute operation being performed\n *\n * @return\tint\n * @retval\tPBSE_NONE\t- success\n * @retval\tnonzero\t\t- PBSE_* error\n *\n * @par MT-Safe:\tno\n *\n * @par\tNote\n * \t\tThis attribute is versioned (by an arbitrary string terminating\n *\t\tin a ':' character).  In the case of the NODE_TOPOLOGY_TYPE_HWLOC\n *\t\tversion, the value following the version string is the topology\n *\t\tinformation captured by the MoM via hwloc_topology_load() and it\n *\t\tis saved in $PBS_HOME/server_priv/ by record_node_topology().\n *\n * @see\n * \t\trecord_node_topology()\n *\n * @par Side Effects:\n *\t\tNone\n */\nint\nset_node_topology(attribute *new, void *pobj, int op)\n{\n#ifdef NAS /* localmod 035 */\n\treturn (PBSE_NONE);\n#else\n\n\tint rc = PBSE_NONE;\n\tstruct pbsnode *pnode = ((pbsnode *) pobj);\n\tchar *valstr;\n\tntt_t ntt;\n\tchar msg_unknown_topology_type[] = \"unknown topology type in \"\n\t\t\t\t\t   \"topology attribute for node %s\";\n\n\tswitch (op) {\n\n\t\tcase ATR_ACTION_NOOP:\n\t\t\tbreak;\n\n\t\tcase ATR_ACTION_NEW:\n\t\tcase ATR_ACTION_ALTER:\n\n\t\t\tvalstr = new->at_val.at_str;\n\n\t\t\t/*\n\t\t\t *\tCurrently two topology types are known;  if it's one\n\t\t\t *\twe expect, step over it to the actual value we care\n\t\t\t *\tabout.\n\t\t\t */\n\t\t\tif (strstr(valstr, NODE_TOPOLOGY_TYPE_HWLOC) == valstr) {\n\t\t\t\tvalstr += 
strlen(NODE_TOPOLOGY_TYPE_HWLOC);\n\t\t\t\tntt = tt_hwloc;\n\t\t\t} else if (strstr(valstr, NODE_TOPOLOGY_TYPE_CRAY) == valstr) {\n\t\t\t\tvalstr += strlen(NODE_TOPOLOGY_TYPE_CRAY);\n\t\t\t\tntt = tt_Cray;\n\t\t\t} else if (strstr(valstr, NODE_TOPOLOGY_TYPE_WIN) == valstr) {\n\t\t\t\tvalstr += strlen(NODE_TOPOLOGY_TYPE_WIN);\n\t\t\t\tntt = tt_Win;\n\t\t\t} else {\n\t\t\t\tsprintf(log_buffer, msg_unknown_topology_type,\n\t\t\t\t\tpnode->nd_name);\n\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t  LOG_DEBUG, __func__, log_buffer);\n\t\t\t\treturn (PBSE_INTERNAL);\n\t\t\t}\n\n\t\t\trecord_node_topology(pnode->nd_name, valstr);\n\t\t\tprocess_topology_info(&(pnode->nd_lic_info), valstr);\n\t\t\tif (ntt == tt_Cray)\n\t\t\t\trelease_lic_for_cray(pnode);\n\t\t\tlicense_one_node(pnode);\n\n\t\t\tbreak;\n\n\t\tcase ATR_ACTION_RECOV:\n\t\tcase ATR_ACTION_FREE:\n\t\tdefault:\n\t\t\trc = PBSE_INTERNAL;\n\t}\n\n\tif (rc == PBSE_NONE)\n\t\tpost_attr_set(new);\n\treturn rc;\n#endif /* localmod 035 */\n}\n\n/**\n * @brief chk_vnode_pool - action routine for a node's vnode_pool attribute\n *      Does several things:\n *      1. Verifies that there is only one Mom being pointed to\n *      2. 
Verifies in the Mom structure that this is the zero-th node\n *\n * @param[in] new - ptr to attribute being modified with new value\n * @param[in] pobj - ptr to parent object (pbs_node)\n * @param[in] actmode - type of action: recovery, new node, or altering\n *\n * @return error code\n * @retval PBSE_NONE  (zero) - on success\n * @retval PBSE_*  (non zero) - on error\n */\nint\nchk_vnode_pool(attribute *new, void *pobj, int actmode)\n{\n\tstatic char id[] = \"chk_vnode_pool\";\n\tint pool = -1;\n\n\tswitch (actmode) {\n\t\tcase ATR_ACTION_NEW:\n\t\tcase ATR_ACTION_RECOV:\n\n\t\t\tpool = new->at_val.at_long;\n\t\t\tsprintf(log_buffer, \"vnode_pool value is = %d\", pool);\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_NODE, LOG_DEBUG, id, log_buffer);\n\t\t\tif (pool <= 0) {\n\t\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t  LOG_WARNING, id, \"invalid vnode_pool provided\");\n\t\t\t\treturn (PBSE_BADATVAL);\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase ATR_ACTION_ALTER:\n\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t\t  LOG_DEBUG, id, \"Unsupported actions for vnode_pool\");\n\t\t\treturn (PBSE_IVALREQ);\n\n\t\tdefault:\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER,\n\t\t\t\t  LOG_DEBUG, id, \"Unsupported actions for vnode_pool\");\n\t\t\treturn (PBSE_INTERNAL);\n\t}\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\t\taction routine for the node's \"partition\" attribute\n *\n * @param[in]\tpattr\t-\tattribute being set\n * @param[in]\tpobj\t-\tObject on which attribute is being set\n * @param[in]\tactmode\t-\tthe mode of setting, recovery or just alter\n *\n * @return\terror code\n * @retval\tPBSE_NONE\t-\tSuccess\n * @retval\t!PBSE_NONE\t-\tFailure\n *\n */\nint\naction_node_partition(attribute *pattr, void *pobj, int actmode)\n{\n\tstruct pbsnode *pnode;\n\tpbs_queue *pq;\n\tstruct pbssubn *psn;\n\n\tpnode = (pbsnode *) pobj;\n\n\tif (actmode == ATR_ACTION_RECOV)\n\t\treturn PBSE_NONE;\n\n\tif (strcmp(pattr->at_val.at_str, 
DEFAULT_PARTITION) == 0)\n\t\treturn PBSE_DEFAULT_PARTITION;\n\n\tif (is_nattr_set(pnode, ND_ATR_Queue)) {\n\t\tpq = find_queuebyname(get_nattr_str(pnode, ND_ATR_Queue));\n\t\tif (pq == 0)\n\t\t\treturn PBSE_UNKQUE;\n\t\tif (is_qattr_set(pq, QA_ATR_partition) && pattr->at_flags & ATR_VFLAG_SET) {\n\t\t\tif (strcmp(get_qattr_str(pq, QA_ATR_partition), pattr->at_val.at_str) != 0)\n\t\t\t\treturn PBSE_QUE_NOT_IN_PARTITION;\n\t\t}\n\t}\n\n\t/* reject setting the node partition if the node is busy or has a reservation scheduled to run on it */\n\tif (pnode->nd_resvp != NULL)\n\t\treturn PBSE_NODE_BUSY;\n\n\tfor (psn = pnode->nd_psn; psn; psn = psn->next)\n\t\tif (psn->jobs != NULL)\n\t\t\treturn PBSE_NODE_BUSY;\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * \tSet node 'pnode's state to either use the non-down or non-inuse node state value,\n * \tor the value derived from the 'new' attribute state.\n *\n * @param[in] \t\tnew - input attribute to derive state from\n * @param[in,out]\tpnode - node whose state is being set.\n * @param[in]\t\tactmode - action mode: \"NEW\" or \"ALTER\"\n *\n * @return int\n * @retval 0\t\t\tif set normally\n * @retval PBSE_NODESTALE\tif pnode's state is INUSE_STALE\n * @retval PBSE_NODEPROV\tif pnode's state is INUSE_PROV\n * @retval PBSE_INTERNAL\tif 'actmode' is unrecognized\n */\n\nint\nnode_state(attribute *new, void *pnode, int actmode)\n{\n\tint rc = 0;\n\tstruct pbsnode *np;\n\tstatic unsigned long keep = ~(INUSE_DOWN | INUSE_OFFLINE | INUSE_OFFLINE_BY_MOM | INUSE_SLEEP);\n\n\tnp = (struct pbsnode *) pnode; /*because of def of at_action  args*/\n\n\t/* cannot change state of stale node */\n\tif (np->nd_state & INUSE_STALE)\n\t\treturn PBSE_NODESTALE;\n\n\t/* cannot change state of provisioning node */\n\tif (np->nd_state & INUSE_PROV)\n\t\treturn PBSE_NODEPROV;\n\n\tswitch (actmode) {\n\n\t\tcase ATR_ACTION_NEW: /*derive attribute*/\n\t\t\tset_vnode_state(np, (np->nd_state & keep) | new->at_val.at_long, 
Nd_State_Set);\n\t\t\tbreak;\n\n\t\tcase ATR_ACTION_ALTER:\n\t\t\tset_vnode_state(np, (np->nd_state & keep) | new->at_val.at_long, Nd_State_Set);\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\trc = PBSE_INTERNAL;\n\t}\n\t/* Now that we are setting the node state, same state should also reflect on the mom */\n\tif (np->nd_nummoms == 1) {\n\t\tdmn_info_t *pdmn_info = np->nd_moms[0]->mi_dmn_info;\n\t\tpdmn_info->dmn_state = (pdmn_info->dmn_state & keep) | new->at_val.at_long;\n\t}\n\treturn rc;\n}\n"
  },
  {
    "path": "src/server/node_manager.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n *\n * @brief\n *\t\tall the functions related to node management.\n *\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <string.h>\n#include <ctype.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <sys/types.h>\n#include <netdb.h>\n#include <netinet/in.h>\n#include <stddef.h>\n#include <time.h>\n\n#include \"portability.h\"\n#include \"libpbs.h\"\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"server.h\"\n#include \"net_connect.h\"\n#include \"work_task.h\"\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"acct.h\"\n#include \"queue.h\"\n#include \"pbs_nodes.h\"\n#include \"log.h\"\n#include \"tpp.h\"\n#include \"dis.h\"\n#include \"resmon.h\"\n#include \"mom_server.h\"\n#include \"pbs_license.h\"\n#include \"ticket.h\"\n#include \"placementsets.h\"\n#include \"pbs_ifl.h\"\n#include \"grunt.h\"\n#include \"libutil.h\"\n#include \"pbs_db.h\"\n#include \"batch_request.h\"\n#include \"hook_func.h\"\n#include \"sched_cmds.h\"\n#include \"provision.h\"\n#include \"pbs_sched.h\"\n#include \"svrfunc.h\"\n\n#if !defined(H_ERRNO_DECLARED)\nextern int h_errno;\n#endif\n\nint mom_send_vnode_map = 0; /* server must send vnode map to Mom */\nint 
svr_quehasnodes;\n\nstatic int mtfd_replyhello = -1;\nstatic int mtfd_replyhello_noinv = -1;\n\nstatic int cvt_overflow(size_t, size_t);\nstatic int cvt_realloc(char **, size_t *, char **, size_t *);\n\nstatic void set_resv_for_degrade(struct pbsnode *pnode, resc_resv *presv);\nextern time_t time_now;\nextern int server_init_type;\n\nextern int ctnodes(char *);\nextern char *resc_in_err;\nextern struct server server;\nextern int tpp_network_up; /* from pbsd_main.c - used only in case of TPP */\n\nextern unsigned int pbs_mom_port;\n\nextern char *msg_noloopbackif;\nextern char *msg_job_end_stat;\nextern char *msg_daemonname;\nextern char *msg_new_inventory_mom;\nextern pbs_list_head svr_allhooks;\n\nextern void is_vnode_prov_done(char *); /* for provisioning */\nextern void free_prov_vnode(struct pbsnode *);\nextern void fail_vnode_job(struct prov_vnode_info *, int);\nextern struct prov_tracking *get_prov_record_by_vnode(char *);\nextern int parse_prov_vnode(char *, exec_vnode_listtype *);\n\nstatic void check_and_set_multivnode(struct pbsnode *);\nint write_single_node_mom_attr(struct pbsnode *np);\n\nstatic char *hook_privilege = \"Not allowed to update vnodes or to request scheduler restart cycle, if run as a non-manager/operator user %s@%s\";\n\nextern struct python_interpreter_data svr_interp_data;\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\nextern void svr_renew_job_cred(struct work_task *pwt);\n#endif\n\nextern long node_fail_requeue;\n\nextern void propagate_licenses_to_vnodes(mominfo_t *pmom);\n\n#define SKIP_NONE 0\n#define SKIP_EXCLUSIVE 1\n#define SKIP_ANYINUSE 2\n\n#define GLOB_SZ 511\n#define STR_TIME_SZ 20\n\n#define MAX_NODE_WAIT 600\n\n/*\n * Tree search generalized from Knuth (6.2.2) Algorithm T just like\n * the AT&T man page says.\n *\n * The tree structure is for internal use only, lint doesn't grok it.\n *\n * Written by reading the System V Interface Definition, not the code.\n *\n */\n/*LINTLIBRARY*/\n\n/*\n **      Modified by 
Tom Proett for PBS.\n */\n\nstruct tree *ipaddrs = NULL; /* tree of ip addrs */\nstruct tree *streams = NULL; /* tree of stream numbers */\n\nextern pntPBS_IP_LIST pbs_iplist;\n\nstatic int\ncomp_keys(u_long key1, u_long key2, struct tree *pt)\n{\n\tif (key1 == pt->key1) {\n\t\tif (key2 == pt->key2)\n\t\t\treturn 0;\n\t\telse if (key2 < pt->key2)\n\t\t\treturn -1;\n\t\telse\n\t\t\treturn 1;\n\t} else if (key1 < pt->key1)\n\t\treturn -1;\n\telse\n\t\treturn 1;\n}\n\n/**\n * @brief\n *  \tfind value in tree, return NULL if not found\n *\n * @param[in]\tkey1\t-\tkey to be located\n * @param[in]\tkey2 \t-\tkey to be located\n * @param[in]\trootp \t-\taddress of tree root\n *\n * @return\tmominfo_t *\n * @retval\ta pointer to the mominfo_t object located in the tree\t- found\n * @retval\tNULL\t- not found\n *\n * @par MT-safe: No\n */\nmominfo_t *\ntfind2(const u_long key1, const u_long key2, struct tree **rootp)\n{\n\tif (rootp == NULL)\n\t\treturn NULL;\n\n\twhile (*rootp != NULL) { /* Knuth's T1: */\n\t\tint i;\n\n\t\ti = comp_keys(key1, key2, *rootp);\n\t\tif (i == 0)\n\t\t\treturn (*rootp)->momp; /* we found it! */\n\t\telse if (i < 0)\n\t\t\trootp = &(*rootp)->left; /* T3: follow left branch */\n\t\telse\n\t\t\trootp = &(*rootp)->right; /* T4: follow right branch */\n\t}\n\treturn NULL;\n}\n/**\n * @brief\n *  \tinsert a mom into the tree.\n *\n * @param[in]\tkey1\t-\tkey under which to insert\n * @param[in]\tkey2\t-\tkey under which to insert\n * @param[in]\tmomp \t-\tpointer to the mominfo_t object to insert\n * @param[in,out]\trootp \t-\taddress of tree root\n *\n * @return\tvoid\n *\n * @par MT-safe: No\n */\nvoid\ntinsert2(const u_long key1, const u_long key2, mominfo_t *momp, struct tree **rootp)\n{\n\tint i;\n\tstruct tree *q;\n\n\tDBPRT((\"tinsert2: %lu|%lu %s stream %d\\n\", key1, key2,\n\t       momp->mi_host, momp->mi_dmn_info ? 
momp->mi_dmn_info->dmn_stream : -1))\n\n\tif (rootp == NULL)\n\t\treturn;\n\twhile (*rootp != NULL) { /* Knuth's T1: */\n\t\ti = comp_keys(key1, key2, *rootp);\n\t\tif (i == 0)\n\t\t\treturn; /* we found it! */\n\t\telse if (i < 0)\n\t\t\trootp = &(*rootp)->left; /* T3: follow left branch */\n\t\telse\n\t\t\trootp = &(*rootp)->right; /* T4: follow right branch */\n\t}\n\tq = (struct tree *) malloc(sizeof(struct tree));\n\t/* T5: key not found */\n\tif (q != NULL) {\t/* make new node */\n\t\t*rootp = q;\t/* link new node to old */\n\t\tq->key1 = key1; /* initialize new node */\n\t\tq->key2 = key2; /* initialize new node */\n\t\tq->momp = momp;\n\t\tq->left = q->right = NULL;\n\t}\n\treturn;\n}\n\n/**\n * @brief Send the IS_CLUSTER_ADDRS message to Mom so she has the\n *      latest list of IP addresses of all the Moms in the complex.\n *\n * @param[in] stream - the open stream to the Mom\n * @param[in] combine_msg - nonzero if the caller combines this message with another; the compose and flush are then skipped here\n *\n * @return int\n * @retval DIS_SUCCESS (0) for success\n * @retval != 0 otherwise.\n */\nstatic int\nsend_ip_addrs_to_mom(int stream, int combine_msg)\n{\n\tint j;\n\tint ret;\n\n\tDBPRT((\"%s: entered\\n\", __func__))\n\n\tif (stream < 0)\n\t\treturn -1;\n\n\tif (!combine_msg)\n\t\tif ((ret = is_compose(stream, IS_CLUSTER_ADDRS)) != DIS_SUCCESS)\n\t\t\treturn (ret);\n\n\tif ((ret = diswui(stream, pbs_iplist->li_nrowsused)) != DIS_SUCCESS)\n\t\treturn ret;\n\n\tfor (j = 0; j < pbs_iplist->li_nrowsused; j++) {\n#ifdef DEBUG\n\t\tunsigned long ipaddr;\n\t\tipaddr = IPLIST_GET_LOW(pbs_iplist, j);\n\t\tDBPRT((\"%s: ip %d\\t%ld.%ld.%ld.%ld\\n\", __func__, j,\n\t\t       (ipaddr & 0xff000000) >> 24,\n\t\t       (ipaddr & 0x00ff0000) >> 16,\n\t\t       (ipaddr & 0x0000ff00) >> 8,\n\t\t       (ipaddr & 0x000000ff)))\n#endif /* DEBUG */\n\t\tDBPRT((\"%s: depth %ld\\n\", __func__, (long) IPLIST_GET_HIGH(pbs_iplist, j)))\n\t\tif ((ret = diswul(stream, IPLIST_GET_LOW(pbs_iplist, j))) != DIS_SUCCESS)\n\t\t\treturn (ret);\n\t\tif 
((ret = diswul(stream, IPLIST_GET_HIGH(pbs_iplist, j))) != DIS_SUCCESS)\n\t\t\treturn (ret);\n\t}\n\tif (!combine_msg)\n\t\treturn dis_flush(stream);\n\treturn 0;\n}\n\n/**\n * @brief Reply to IS_HELLOSVR,\n * sending all the information Mom needs from the server,\n * including the need-inventory flag, rpp value, and Mom IP addresses.\n *\n * @param[in] stream - the open stream to the Mom\n * @param[in] need_inv - whether the server needs inventory of the mom.\n *\n * @return int\n * @retval DIS_SUCCESS (0) for success\n * @retval != 0 otherwise.\n */\nstatic int\nreply_hellosvr(int stream, int need_inv)\n{\n\tint ret;\n\n\tDBPRT((\"%s: entered\\n\", __func__))\n\n\tif (stream < 0)\n\t\treturn -1;\n\n\tif ((ret = is_compose(stream, IS_REPLYHELLO)) != DIS_SUCCESS)\n\t\treturn ret;\n\n\tif ((ret = diswsi(stream, need_inv)) != DIS_SUCCESS)\n\t\treturn ret;\n\n\tif ((ret = send_ip_addrs_to_mom(stream, 1)) != DIS_SUCCESS)\n\t\treturn ret;\n\n\treturn dis_flush(stream);\n}\n\n/**\n * @brief\n *  \tdelete node with given key\n *\n * @param[in]\tkey1\t-\tkey to be located\n * @param[in]\tkey2\t-\tkey to be located\n * @param[in]\trootp \t-\taddress of tree root\n *\n * @return\tvoid *\n * @retval\tpointer to the parent of the deleted node\t- after successful deletion.\n * @retval\tNULL\t- could not find the key to be deleted.\n */\nvoid *\ntdelete2(const u_long key1, const u_long key2, struct tree **rootp)\n{\n\tstruct tree *p;\n\tstruct tree *q;\n\tstruct tree *r;\n\tint i;\n\n\tDBPRT((\"tdelete2: %lu|%lu\\n\", key1, key2))\n\tif (rootp == NULL || (p = *rootp) == NULL)\n\t\treturn NULL;\n\twhile ((i = comp_keys(key1, key2, *rootp)) != 0) {\n\t\tp = *rootp;\n\t\trootp = (i < 0) ? &(*rootp)->left : /* left branch */\n\t\t\t\t&(*rootp)->right;   /* right branch */\n\t\tif (*rootp == NULL)\n\t\t\treturn NULL; /* key not found */\n\t}\n\tr = (*rootp)->right;\t\t  /* D1: */\n\tif ((q = (*rootp)->left) == NULL) /* Left */\n\t\tq = r;\n\telse if (r != NULL) {\t       /* Right is null? 
*/\n\t\tif (r->left == NULL) { /* D2: Find successor */\n\t\t\tr->left = q;\n\t\t\tq = r;\n\t\t} else { /* D3: Find NULL link */\n\t\t\tfor (q = r->left; q->left != NULL; q = r->left)\n\t\t\t\tr = q;\n\t\t\tr->left = q->right;\n\t\t\tq->left = (*rootp)->left;\n\t\t\tq->right = (*rootp)->right;\n\t\t}\n\t}\n\tfree((struct tree *) *rootp); /* D4: Free node */\n\t*rootp = q;\t\t      /* link parent to new node */\n\treturn (p);\n}\n/**\n * @brief\n *  \tfree the entire tree\n *\n * @param[in]\trootp \t-\taddress of tree root\n *\n * @return\tvoid\n */\nvoid\ntfree2(struct tree **rootp)\n{\n\tif (rootp == NULL || *rootp == NULL)\n\t\treturn;\n\ttfree2(&(*rootp)->left);\n\ttfree2(&(*rootp)->right);\n\tfree(*rootp);\n\t*rootp = NULL;\n}\n\n/**\n * @brief\n * \t\tget the addr of the host on which a node is defined\n *\n * @param[in]\tname\t- is in one of the forms:\n *\t\t\t\t\t\t\tnodename[:DDDD][:resc=val...]\n *\t\t\t\t\t\t\tnodename[:DDDD]/DD[*DD]\n *\t\t\t\t\t\t\twhere D is a numerical digit;  :DDDD is a port number\n * @param[in]\tport\t- the port number as commonly used in exec_vnode string or\n * \t\t\t\t\t\t\texec_host string\n *\n * @return\tThe IP address and port from the first Mom declared for the node\n *\n * @par MT-safe: No\n */\npbs_net_t\nget_addr_of_nodebyname(char *name, unsigned int *port)\n{\n\tchar *nodename;\n\tstruct pbsnode *np;\n\n\tnodename = parse_servername(name, NULL);\n\t/* ignore the port which might have been found in the string */\n\tnp = find_nodebyname(nodename);\n\tif (np == 0 || is_nattr_set(np, ND_ATR_Mom) == 0)\n\t\treturn (0);\n\t/* address and port from mom_svrinfo */\n\t*port = np->nd_moms[0]->mi_port;\n\treturn (get_hostaddr(np->nd_moms[0]->mi_host));\n}\n\nenum Set_All_State_When {\n\tSet_ALL_State_All_Down,\t  /* set on vnodes when all Moms are down */\n\tSet_All_State_Regardless, /* set on vnodes regardless */\n\tSet_All_State_All_Offline /* set on vnodes when all Moms are offline */\n};\n\n/**\n * @brief\n * \t\tset or 
clear state bits on the mominfo entry and all\n *\t\tvirtual nodes under that Mom, and set the comment; if txt is null,\n *\t\tthe comment is cleared.\n *\t\tdo_set = 1 means set the bits in \"bits\", otherwise clear them\n *\n * @param[in]\tpmom\t- pointer to mom\n * @param[in]\tdo_set\t- do_set = 1 means set the bits, otherwise clear them\n * @param[in]\tbits\t- the state bits to set or clear\n * @param[in]\ttxt\t\t- comment text to set; if txt is null, the comment is cleared\n * @param[in]\tsetwhen\t- of type Set_All_State_When enum; controls when the vnode state is changed.\n *\n * @return\tvoid\n *\n * @par MT-safe: No\n */\nstatic void\nset_all_state(mominfo_t *pmom, int do_set, unsigned long bits, char *txt,\n\t      enum Set_All_State_When setwhen)\n{\n\tint imom;\n\tunsigned long mstate;\n\tmom_svrinfo_t *psvrmom = (mom_svrinfo_t *) (pmom->mi_data);\n\tdmn_info_t *pdmn_info = pmom->mi_dmn_info;\n\tstruct pbsnode *pvnd;\n\tattribute *pat;\n\tint nchild;\n\tunsigned long inuse_flag = 0;\n\n\tif (do_set) { /* STALE is not meaningful in the state of the Mom, don't set it */\n\t\tpdmn_info->dmn_state |= (bits & ~INUSE_STALE);\n\t} else {\n\t\tpdmn_info->dmn_state &= ~bits;\n\t}\n\n\tlog_eventf(PBSEVENT_DEBUG2, PBS_EVENTCLASS_NODE, LOG_INFO, pmom->mi_host,\n\t\t   \"set_all_state;txt=%s mi_modtime=%ld\", txt, pmom->mi_modtime);\n\n\t/* Set the inuse_flag based on the value of setwhen */\n\tif (setwhen == Set_ALL_State_All_Down) {\n\t\tinuse_flag = INUSE_DOWN;\n\t} else if (setwhen == Set_All_State_All_Offline) {\n\t\tinuse_flag = INUSE_OFFLINE_BY_MOM;\n\t}\n\n\tfor (nchild = 0; nchild < psvrmom->msr_numvnds; ++nchild) {\n\t\tint do_this_vnode;\n\n\t\tdo_this_vnode = 1;\n\n\t\tpvnd = psvrmom->msr_children[nchild];\n\n\t\t/*\n\t\t * If this vnode has more than one Mom and\n\t\t * setwhen is Set_ALL_State_All_Down or\n\t\t * setwhen is Set_All_State_All_Offline, then we only change\n\t\t * state if all Moms are down/offline\n\t\t */\n\t\tif ((pvnd->nd_nummoms > 1) &&\n\t\t    ((setwhen == Set_ALL_State_All_Down) ||\n\t\t     
(setwhen == Set_All_State_All_Offline))) {\n\t\t\tfor (imom = 0; imom < pvnd->nd_nummoms; ++imom) {\n\t\t\t\tmstate = pvnd->nd_moms[imom]->mi_dmn_info->dmn_state;\n\t\t\t\tif ((mstate & inuse_flag) == 0) {\n\t\t\t\t\tdo_this_vnode = 0;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\t/* Skip resetting state only on cray_compute nodes when state is sleep */\n\t\tif ((pvnd->nd_state & INUSE_SLEEP) &&\n\t\t    (setwhen == Set_All_State_Regardless) &&\n\t\t    (bits & INUSE_SLEEP) &&\n\t\t    !(do_set)) {\n\t\t\tresource_def *prd;\n\t\t\tresource *prc;\n\t\t\tpat = &pvnd->nd_attr[(int) ND_ATR_ResourceAvail];\n\t\t\tprd = find_resc_def(svr_resc_def, \"vntype\");\n\t\t\tif (pat && prd && (prc = find_resc_entry(pat, prd))) {\n\t\t\t\tif (strcmp(prc->rs_value.at_val.at_arst->as_string[0], CRAY_COMPUTE) == 0)\n\t\t\t\t\tdo_this_vnode = 0;\n\t\t\t}\n\t\t}\n\t\tif (do_this_vnode == 0)\n\t\t\tcontinue; /* skip setting state on this vnode */\n\n\t\tif (do_set) {\n\t\t\tset_vnode_state(pvnd, bits, Nd_State_Or);\n\t\t} else {\n\t\t\tset_vnode_state(pvnd, ~bits, Nd_State_And);\n\t\t\tif ((bits & INUSE_OFFLINE_BY_MOM) &&\n\t\t\t    (pvnd->nd_state & INUSE_OFFLINE)) {\n\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_NODE,\n\t\t\t\t\t  LOG_NOTICE, pvnd->nd_name,\n\t\t\t\t\t  \"clearing offline_by_mom state for \"\n\t\t\t\t\t  \"vnode: still offlined because of \"\n\t\t\t\t\t  \"previous admin offline action\");\n\t\t\t}\n\t\t}\n\n\t\tpost_attr_set(get_nattr(pvnd, ND_ATR_state));\n\t\tpat = get_nattr(pvnd, ND_ATR_Comment);\n\n\t\t/*\n\t\t * change the comment only if it is a default comment (set by\n\t\t * the server and not the Manager);  if \"txt\" is null, just\n\t\t * clear (unset) the comment\n\t\t *\n\t\t * comments set as part of INUSE_OFFLINE_BY_MOM state\n\t\t * action should not be touched.\n\t\t */\n\n\t\tif ((bits & INUSE_OFFLINE_BY_MOM) ||\n\t\t    ((is_attr_set(pat)) == 0) ||\n\t\t    ((pat->at_flags & ATR_VFLAG_DEFLT) != 0)) {\n\n\t\t\t/* default comment 
*/\n\t\t\tfree_attr(node_attr_def, pat, ND_ATR_Comment);\n\t\t\tif (txt)\n\t\t\t\tset_attr_generic(pat, &node_attr_def[(int) ND_ATR_Comment], txt, NULL, INTERNAL);\n\n\t\t\tif (do_set && (bits & INUSE_OFFLINE_BY_MOM)) {\n\t\t\t\t/* the server did not set the comment */\n\t\t\t\t/* directly; it was set per Mom action */\n\t\t\t\tpat->at_flags &= ~ATR_VFLAG_DEFLT;\n\t\t\t\tmark_attr_set(pat);\n\t\t\t} else {\n\t\t\t\t/* ATR_VFLAG_DEFLT means server set comment */\n\t\t\t\t/* itself */\n\t\t\t\tpat->at_flags |= ATR_VFLAG_DEFLT;\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\trequeue/delete job on primary node going down.\n *\n * @par Functionality:\n *\t\tIf the primary, Mother Superior, node of a job goes down, it\n *\t\tshould be requeued if possible or deleted.\n *\n *\t\tCalled via a work-task set up in momptr_down()\n * @see\n * \t\tmomptr_down\n *\n * @param[in]\tpwt\t-\twork task structure.\n *\n * @return\tvoid\n */\n\nstatic void\nnode_down_requeue(struct work_task *pwt)\n{\n\tchar *nname;\n\tmominfo_t *mp;\n\tmom_svrinfo_t *svmp;\n\tjob *pj;\n\tstruct pbsnode *np;\n\tstruct pbssubn *psn;\n\tstruct jobinfo *pjinfo;\n\tstruct jobinfo *pjinfo_nxt;\n\tint nchild;\n\tint cnt;\n\tint i;\n\tchar *tmp_acctrec = NULL;\n\tstruct pbsnode *vnode = NULL;\n\texec_vnode_listtype prov_vnode_list = NULL;\n\tstruct prov_tracking *ptracking;\n\tstruct prov_vnode_info *prov_vnode_info;\n\n\tDBPRT((\"node_down_requeue invoked\\n\"))\n\tif (!pwt) {\n\t\tsprintf(log_buffer, \"Illegal value passed to %s\", __func__);\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER, LOG_ERR,\n\t\t\t  msg_daemonname, log_buffer);\n\t\treturn;\n\t}\n\tmp = (mominfo_t *) pwt->wt_parm1;\n\tif (!mp) {\n\t\tsprintf(log_buffer, \"Illegal mominfo value in %s\", __func__);\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER, LOG_ERR,\n\t\t\t  msg_daemonname, log_buffer);\n\t\treturn;\n\t}\n\tsvmp = (mom_svrinfo_t *) (mp->mi_data);\n\tif (!svmp) 
{\n\t\tsprintf(log_buffer, \"Illegal svrinfo value in %s\", __func__);\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER, LOG_ERR,\n\t\t\t  msg_daemonname, log_buffer);\n\t\treturn;\n\t}\n\n\t/* clear ptr to this worktask */\n\tsvmp->msr_wktask = 0;\n\n\t/* is node still down? If not, leave jobs as is */\n\tif ((mp->mi_dmn_info->dmn_state & INUSE_DOWN) == 0)\n\t\treturn;\n\n\tDBPRT((\"node_down_requeue node still down\\n\"))\n\n\tfor (nchild = 0; nchild < svmp->msr_numvnds; ++nchild) {\n\t\tnp = svmp->msr_children[nchild];\n\t\t/* is node still provisioning? If yes, leave jobs as is */\n\t\tif ((np->nd_state & INUSE_PROV) == 0) {\n\t\t\tDBPRT((\"node_down_requeue node not provisioning\\n\"))\n\n\t\t\tfor (psn = np->nd_psn; psn; psn = psn->next) {\n\t\t\t\tfor (pjinfo = psn->jobs; pjinfo; pjinfo = pjinfo_nxt) {\n\t\t\t\t\tpj = find_job(pjinfo->jobid);\n\t\t\t\t\tpjinfo_nxt = pjinfo->next;\n\t\t\t\t\twhile (pjinfo_nxt && !strcmp(pjinfo_nxt->jobid, pj->ji_qs.ji_jobid)) {\n\t\t\t\t\t\t/* skip over next occurrence of same job in list*/\n\t\t\t\t\t\t/* if it is deleted in discard_job(), we would\t*/\n\t\t\t\t\t\t/* have a pointer to nothingness\t\t*/\n\t\t\t\t\t\tpjinfo_nxt = pjinfo_nxt->next;\n\t\t\t\t\t}\n\n\t\t\t\t\tnname = parse_servername(\n\t\t\t\t\t\tget_jattr_str(pj, JOB_ATR_exec_vnode), NULL);\n\t\t\t\t\tif (nname && (strcasecmp(np->nd_name, nname) == 0)) {\n\t\t\t\t\t\t/* node is Mother Superior for job */\n\t\t\t\t\t\tset_jattr_l_slim(pj, JOB_ATR_exit_status, JOB_EXEC_RERUN_MS_FAIL, SET);\n\n\t\t\t\t\t\tsprintf(log_buffer, msg_job_end_stat, JOB_EXEC_RERUN_MS_FAIL);\n\t\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, pj->ji_qs.ji_jobid, log_buffer);\n\n\t\t\t\t\t\t/* If the job is in wait-Provision state, then fail_vnode_provisioning should be called.\n\t\t\t\t\t\t * Since this job is going to get requeued and can run on a different set of vnodes,\n\t\t\t\t\t\t * this makes sure a provisioning failure on the previous set of vnodes doesn't cause a 
problem.\n\t\t\t\t\t\t */\n\t\t\t\t\t\tif (check_job_substate(pj, JOB_SUBSTATE_PROVISION)) {\n\t\t\t\t\t\t\tcnt = parse_prov_vnode(get_jattr_str(pj, JOB_ATR_prov_vnode),\n\t\t\t\t\t\t\t\t\t       &prov_vnode_list);\n\n\t\t\t\t\t\t\t/* Check if any node associated to the provisioned job is still in provisioning state. */\n\t\t\t\t\t\t\tfor (i = 0; i < cnt; i++) {\n\t\t\t\t\t\t\t\tif ((vnode = find_nodebyname(prov_vnode_list[i]))) {\n\t\t\t\t\t\t\t\t\tif ((ptracking = get_prov_record_by_vnode(vnode->nd_name))) {\n\t\t\t\t\t\t\t\t\t\tprov_vnode_info = ptracking->prov_vnode_info;\n\t\t\t\t\t\t\t\t\t\tif (prov_vnode_info) {\n\t\t\t\t\t\t\t\t\t\t\tfail_vnode_job(prov_vnode_info, -1); /* Passing -1 so that fail_vnode_job neither hold nor requeue the job */\n\t\t\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\t/* Set for requeuing the job if job is rerunnable */\n\t\t\t\t\t\tif (get_jattr_long(pj, JOB_ATR_rerunable) != 0) {\n\t\t\t\t\t\t\tset_job_substate(pj, JOB_SUBSTATE_RERUN3);\n\t\t\t\t\t\t\tif (pj->ji_acctrec != NULL) {\n\t\t\t\t\t\t\t\tif (pbs_asprintf(&tmp_acctrec, \"%s %s\", pj->ji_acctrec, log_buffer) == -1) {\n\t\t\t\t\t\t\t\t\tfree(tmp_acctrec); /* free 1 byte malloc'd in pbs_asprintf() */\n\t\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\t\tfree(pj->ji_acctrec);\n\t\t\t\t\t\t\t\t\tpj->ji_acctrec = tmp_acctrec;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\tpj->ji_acctrec = strdup(log_buffer);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\t/* When job is non-rerunnable and if job has any dependencies,\n\t\t\t\t\t\t *register dependency request to delete the dependent jobs.\n\t\t\t\t\t\t */\n\t\t\t\t\t\tif (get_jattr_long(pj, JOB_ATR_rerunable) == 0 &&\n\t\t\t\t\t\t    (is_jattr_set(pj, JOB_ATR_depend))) {\n\t\t\t\t\t\t\t/* set job exit status from MOM */\n\t\t\t\t\t\t\tpj->ji_qs.ji_un.ji_exect.ji_exitstat = JOB_EXEC_RERUN_MS_FAIL;\n\t\t\t\t\t\t\t(void) 
depend_on_term(pj);\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\t/* notify all sisters to discard the job */\n\t\t\t\t\t\tdiscard_job(pj, \"on node down requeue\", 0);\n\n\t\t\t\t\t\t/* Clear \"resources_used\" only if not waiting on any mom */\n\t\t\t\t\t\tif (!pj->ji_jdcd_waiting && ((pj->ji_qs.ji_svrflags & (JOB_SVFLG_CHKPT | JOB_SVFLG_ChkptMig)) == 0)) {\n\t\t\t\t\t\t\tfree_jattr(pj, JOB_ATR_resc_used);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\tcalled when a node is marked down or responds to an\n * \t\tIS_DISCARD_JOB message.\n *\n * @par Functionality:\n * \t\tIf all Moms have responded or are down, then we can deal with the job\n * \t\tdepending on the substate.\n *\n *\t\tIf the second arg (pmom) is null, just check the state; if not null\n *\t\tthen mark that Mom's slot as done, then check\n *\n * @see\n * \t\tdiscard_job\n *\n * @param[in,out]\tpjob\t-\tpoint to the job\n*  @param[in]\t\tpmom\t-\tif (pmom) is null, just check the state; if not null then mark that Mom's slot as done\n * @param[in]\t\tnewstate-\tnew state.\n *\n * @return\tvoid\n *\n * @par MT-safe: No\n */\nstatic void\npost_discard_job(job *pjob, mominfo_t *pmom, int newstate)\n{\n\tchar *downmom = NULL;\n\tchar hook_msg[HOOK_MSG_SIZE] = {0};\n\tstruct jbdscrd *pdsc;\n\tstruct batch_request *preq;\n\tint rc;\n\n\tif (pjob->ji_discard == NULL) {\n\t\tpjob->ji_discarding = 0;\n\t\treturn;\n\t}\n\tif (pmom != NULL) {\n\t\tfor (pdsc = pjob->ji_discard; pdsc->jdcd_mom; ++pdsc) {\n\t\t\tif (pdsc->jdcd_mom == pmom) {\n\t\t\t\tpdsc->jdcd_state = newstate;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\n\tfor (pdsc = pjob->ji_discard; pdsc->jdcd_mom; ++pdsc) {\n\t\tif (pdsc->jdcd_state == JDCD_WAITING)\n\t\t\treturn; /* need to wait some more */\n\t}\n\tpjob->ji_jdcd_waiting = 0;\n\n\t/* not waiting on any Mom to reply to an IS_DISCARD_JOB */\n\t/* so can now deal with the job                         */\n\n\t/* find name of (a) down mom */\n\tfor (pdsc = pjob->ji_discard; 
pdsc->jdcd_mom; ++pdsc) {\n\t\tif (pdsc->jdcd_state == JDCD_DOWN) {\n\t\t\tdownmom = pdsc->jdcd_mom->mi_host;\n\t\t\tbreak;\n\t\t}\n\t}\n\tif (downmom == NULL)\n\t\tdownmom = \"\"; /* didn't find one, null string for msg */\n\n\tfree(pjob->ji_discard);\n\tpjob->ji_discard = NULL;\n\n\tif (check_job_state(pjob, JOB_STATE_LTR_QUEUED) && (check_job_substate(pjob, JOB_SUBSTATE_QUEUED))) {\n\t\tstatic char nddown[] = \"Job never started, execution node %s down\";\n\n\t\t/*\n\t\t * The job was rejected by mother superior and has\n\t\t * already been placed back in queued state by a\n\t\t * call to svr_evaljobstate() within post_sendjob().\n\t\t * This is done regardless of whether the job is\n\t\t * rerunnable or not, since it never actually started.\n\t\t * There was no start record for this job, so no need\n\t\t * to call account_jobend().\n\t\t */\n\t\tsprintf(log_buffer, nddown, downmom);\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\treturn;\n\t}\n\n\tif (check_job_state(pjob, JOB_STATE_LTR_HELD) && (check_job_substate(pjob, JOB_SUBSTATE_HELD))) {\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid, \"Leaving job in held state\");\n\t\treturn;\n\t}\n\n\tif (check_job_substate(pjob, JOB_SUBSTATE_RERUN3) || pjob->ji_discarding) {\n\n\t\tstatic char *ndreque;\n\n\t\tif (pjob->ji_discarding)\n\t\t\tndreque = \"Job requeued, discard response received\";\n\t\telse\n\t\t\tndreque = \"Job requeued, execution node %s down\";\n\n\t\t/*\n\t\t * Job to be rerun; no need to check if job is rerunnable\n\t\t * because to get here the job is either rerunnable or Mom\n\t\t * tried to run the job and it failed before it ever went\n\t\t * into execution and sent the server JOB_EXEC_RETRY\n\t\t */\n\t\tsprintf(log_buffer, ndreque, downmom);\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\tforce_reque(pjob);\n\t\tif 
(pjob->ji_acctrec) {\n\t\t\tfree(pjob->ji_acctrec); /* logged, so clear it */\n\t\t\tpjob->ji_acctrec = NULL;\n\t\t}\n\n\t\t/* free resc_used */\n\t\tif ((is_jattr_set(pjob, JOB_ATR_resc_used)) &&\n\t\t    ((pjob->ji_qs.ji_svrflags & (JOB_SVFLG_CHKPT | JOB_SVFLG_ChkptMig)) == 0))\n\t\t\tfree_jattr(pjob, JOB_ATR_resc_used);\n\n\t\tpjob->ji_discarding = 0;\n\t\treturn;\n\t}\n\n\t/* at this point the job is to be purged */\n\tpjob->ji_qs.ji_obittime = time_now;\n\tset_jattr_l_slim(pjob, JOB_ATR_obittime, pjob->ji_qs.ji_obittime, SET);\n\n\t/* Allocate space for the jobobit hook event params */\n\tpreq = alloc_br(PBS_BATCH_JobObit);\n\tif (preq == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"rq_jobobit alloc failed\");\n\t} else {\n\t\tpreq->rq_ind.rq_obit.rq_pjob = pjob;\n\t\trc = process_hooks(preq, hook_msg, sizeof(hook_msg), pbs_python_set_interrupt);\n\t\tif (rc == -1) {\n\t\t\tlog_err(-1, __func__, \"rq_jobobit process_hooks call failed\");\n\t\t}\n\t\tfree_br(preq);\n\t}\n\n\tif (pjob->ji_acctrec) {\n\t\t/* fairly normal job exit, record accounting info */\n\t\taccount_job_update(pjob, PBS_ACCT_LAST);\n\t\taccount_jobend(pjob, pjob->ji_acctrec, PBS_ACCT_END);\n\n\t\tif (get_sattr_long(SVR_ATR_log_events) & PBSEVENT_JOB_USAGE) {\n\t\t\t/* log events set to record usage */\n\t\t\tlog_event(PBSEVENT_JOB_USAGE, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  pjob->ji_qs.ji_jobid, pjob->ji_acctrec);\n\t\t} else {\n\t\t\tchar *pc;\n\n\t\t\t/* no usage in log, truncate message */\n\t\t\tif ((pc = strchr(pjob->ji_acctrec, (int) ' ')) != NULL)\n\t\t\t\t*pc = '\\0';\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  pjob->ji_qs.ji_jobid, pjob->ji_acctrec);\n\t\t}\n\n\t} else {\n\t\tstatic char ndtext[] = \"Job deleted, execution node %s down\";\n\n\t\tsprintf(log_buffer, ndtext, downmom);\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\taccount_record(PBS_ACCT_DEL, pjob, 
log_buffer);\n\t\tsvr_mailowner(pjob, MAIL_ABORT, MAIL_FORCE, log_buffer);\n\t}\n\n\trel_resc(pjob); /* free any resc assigned to the job */\n\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0)\n\t\tissue_track(pjob);\n\t/*\n\t * If the server is configured to maintain job history, then\n\t * keep the job structure which will be cleaned up later by\n\t * SERVER, probably after the history duration. History job\n\t * type is T_MOM_DOWN(2) for the jobs to be purged because\n\t * of MOM failure.\n\t */\n\tif (svr_chk_history_conf())\n\t\tsvr_setjob_histinfo(pjob, T_MOM_DOWN);\n\telse\n\t\tjob_purge(pjob);\n\n\treturn;\n}\n\n/**\n * @brief\n * \t\tmark mom (by ptr) down and log message\n *\n * @param[in]\t\tpmom\t-\tmom which is down\n * @param[in]\t\twhy\t\t-\tthe reason why the mom is down\n *\n * @return\tvoid\n */\nvoid\nmomptr_down(mominfo_t *pmom, char *why)\n{\n\tint i;\n\tint j;\n\tint nj;\n\tint nchild;\n\tstruct pbsnode *np;\n\tstruct jobinfo *pji;\n\tjob **parray;\n\tstruct pbssubn *psn;\n\tmom_svrinfo_t *psvrmom = (mom_svrinfo_t *) (pmom->mi_data);\n\tlong sec;\n\tint setwktask = 0;\n\tint is_provisioning = 0;\n\tjob *pj;\n\n\tpmom->mi_dmn_info->dmn_state |= INUSE_DOWN;\n\n\t/* log message if node just down or been down for an hour */\n\t/* mark mom down and vnodes down as well                  */\n\tif ((psvrmom->msr_timedown + 3600) > time_now)\n\t\treturn;\n\n\tpsvrmom->msr_timedown = time_now;\n\n\t/* is node provisioning? 
*/\n\tfor (nchild = 0; nchild < psvrmom->msr_numvnds; ++nchild) {\n\t\tnp = psvrmom->msr_children[nchild];\n\t\tif (np->nd_state & INUSE_PROV) {\n\t\t\tis_provisioning = 1;\n\t\t\tbreak;\n\t\t}\n\t}\n\n#ifndef NAS /* localmod 023 */\n\t/* do not display 'node down' msg and comment */\n\tif (is_provisioning) {\n\t\tset_all_state(pmom, 1, INUSE_DOWN, NULL,\n\t\t\t      Set_All_State_Regardless);\n\t} else {\n#endif /* localmod 023 */\n\n#ifdef NAS /* localmod 023 */\n\t\tif (is_provisioning)\n\t\t\t(void) snprintf(log_buffer, sizeof(log_buffer), \"node down for provisioning: %s\", why);\n\t\telse\n#endif /* localmod 023 */\n\t\t\t(void) snprintf(log_buffer, sizeof(log_buffer), \"node down: %s\", why);\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE,\n\t\t\t  LOG_ALERT, pmom->mi_host, log_buffer);\n\n\t\tset_all_state(pmom, 1, INUSE_DOWN, log_buffer,\n\t\t\t      Set_ALL_State_All_Down);\n#ifndef NAS /* localmod 023 */\n\t}\n#endif /* localmod 023 */\n\n\tfor (nchild = 0; nchild < psvrmom->msr_numvnds; ++nchild) {\n\n\t\tnp = psvrmom->msr_children[nchild];\n\n\t\tfor (psn = np->nd_psn; psn; psn = psn->next) {\n\t\t\tif (psn->jobs) {\n\t\t\t\tsetwktask = 1;\n\t\t\t\tnj = 0;\n\t\t\t\t/* find list of jobs on this sub-node */\n\t\t\t\t/* first, count how many there are */\n\t\t\t\tfor (pji = psn->jobs; pji; pji = pji->next) {\n\t\t\t\t\tpj = find_job(pji->jobid);\n\t\t\t\t\tif (pj && pj->ji_discard)\n\t\t\t\t\t\t++nj;\n\t\t\t\t}\n\t\t\t\t/* if any, save pointer to the jobs in an array as the    */\n\t\t\t\t/* list may be disturbed by the post_discard_job function */\n\t\t\t\tif (nj != 0) {\n\t\t\t\t\tparray = (job **) calloc((size_t) nj, sizeof(job *));\n\t\t\t\t\tif (parray) {\n\t\t\t\t\t\ti = 0;\n\t\t\t\t\t\tfor (pji = psn->jobs; pji; pji = pji->next) {\n\t\t\t\t\t\t\tpj = find_job(pji->jobid);\n\t\t\t\t\t\t\tif (pj && pj->ji_discard) {\n\t\t\t\t\t\t\t\t/* we only want one entry per job */\n\t\t\t\t\t\t\t\tfor (j = 0; j < i; ++j) {\n\t\t\t\t\t\t\t\t\tif (*(parray + j) == 
pj)\n\t\t\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tif (j == i) {\n\t\t\t\t\t\t\t\t\t*(parray + i) = pj; /* new, add it */\n\t\t\t\t\t\t\t\t\t++i;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tfor (i = 0; i < nj; ++i)\n\t\t\t\t\t\t\tif (*(parray + i))\n\t\t\t\t\t\t\t\tpost_discard_job(*(parray + i), pmom, JDCD_DOWN);\n\n\t\t\t\t\t\tfree(parray);\n\t\t\t\t\t\tparray = NULL;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t/* If this Mom is in a vnode pool and is the inventory Mom for that pool */\n\t/* remove her from that role and if another Mom in the pool is up make   */\n\t/* that one the new inventory Mom */\n\n\tif (psvrmom->msr_vnode_pool != 0) {\n\t\treset_pool_inventory_mom(pmom);\n\t}\n\n\tif (((sec = node_fail_requeue) != 0) &&\n\t    (setwktask != 0) && (psvrmom->msr_wktask == NULL)) {\n\n\t\t/* there isn't an outstanding work task to deal with the jobs    */\n\t\t/* and node has jobs, set task to deal with the jobs after delay */\n\n\t\tif (sec < 0) /* if less than zero, treat as if one */\n\t\t\tsec = 1;\n\n\t\tpsvrmom->msr_wktask = set_task(WORK_Timed, time_now + sec, node_down_requeue, (void *) pmom);\n\t}\n\n\treturn;\n}\n\n/**\n * @brief\n * \t\tGiven a vnode_state_op, return the string value.\n * \t\tThe enum is found in pbs_nodes.h\n *\n * @param[in]\top - The operation for the state change\n *\n * @return\tchar *\n */\nchar *\nget_vnode_state_op(enum vnode_state_op op)\n{\n\tswitch (op) {\n\t\tcase Nd_State_Set:\n\t\t\treturn \"Nd_State_Set\";\n\t\tcase Nd_State_Or:\n\t\t\treturn \"Nd_State_Or\";\n\t\tcase Nd_State_And:\n\t\t\treturn \"Nd_State_And\";\n\t}\n\treturn \"ND_state_unknown\";\n}\n\n/**\n * @brief\n * \t\tCreate a duplicate of the specified vnode\n *\n * @param[in]\tvnode - the vnode to duplicate\n *\n * @note\n *  Creates a shallow duplicate of struct * and char * members.\n *\n *\n * @return  duplicated vnode\n */\nstatic struct pbsnode *\nshallow_vnode_dup(struct pbsnode *vnode)\n{\n\tint 
i;\n\tstruct pbsnode *vnode_dup = NULL;\n\n\tif (vnode == NULL) {\n\t\treturn NULL;\n\t}\n\n\t/*\n\t * Allocate and initialize vnode_o, then copy vnode elements into vnode_o\n\t */\n\tif ((vnode_dup = calloc(1, sizeof(struct pbsnode))) == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"vnode_dup alloc failed\");\n\t\treturn NULL;\n\t}\n\n\t/*\n\t * Copy vnode elements (same order as \"struct pbsnode\" element definition)\n\t */\n\n\tvnode_dup->nd_name = vnode->nd_name;\n\tvnode_dup->nd_moms = vnode->nd_moms;\n\tvnode_dup->nd_nummoms = vnode->nd_nummoms;\n\tvnode_dup->nd_nummslots = vnode->nd_nummslots;\n\tvnode_dup->nd_index = vnode->nd_index;\n\tvnode_dup->nd_arr_index = vnode->nd_arr_index;\n\tvnode_dup->nd_hostname = vnode->nd_hostname;\n\tvnode_dup->nd_psn = vnode->nd_psn;\n\tvnode_dup->nd_resvp = vnode->nd_resvp;\n\tvnode_dup->nd_nsn = vnode->nd_nsn;\n\tvnode_dup->nd_nsnfree = vnode->nd_nsnfree;\n\tvnode_dup->nd_ncpus = vnode->nd_ncpus;\n\tvnode_dup->nd_state = vnode->nd_state;\n\tvnode_dup->nd_ntype = vnode->nd_ntype;\n\tvnode_dup->nd_pque = vnode->nd_pque;\n\tvnode_dup->nd_svrflags = vnode->nd_svrflags;\n\tfor (i = 0; i < ND_ATR_LAST; i++) {\n\t\tvnode_dup->nd_attr[i] = vnode->nd_attr[i];\n\t}\n\treturn vnode_dup;\n}\n\n/**\n * @brief\n * \t\tChange the state of a vnode. 
See pbs_nodes.h for definition of node's\n * \t\tavailability and unavailability.\n *\n * \t\tThis function detects the type of change, for example from available to\n * \t\tunavailable, and invokes the appropriate handler for the state\n * \t\tchange.\n *\n * @param[in]\tpnode\t- The vnode\n * @param[in]\tstate_bits\t- the value to set the vnode to\n * @param[in]\ttype\t- The operation on the node\n *\n * @return\tvoid\n *\n * @par MT-safe: No\n */\nvoid\nset_vnode_state(struct pbsnode *pnode, unsigned long state_bits, enum vnode_state_op type)\n{\n\t/*\n\t * Vars used to construct hook event data\n\t */\n\tstruct batch_request *preq = NULL;\n\tstruct pbsnode *vnode_o = NULL;\n\tchar hook_msg[HOOK_MSG_SIZE] = {0};\n\tint time_int_val;\n\tint last_time_int;\n\n\ttime_now = time(NULL);\n\ttime_int_val = time_now;\n\n\tif (pnode == NULL)\n\t\treturn;\n\n\t/*\n\t * Allocate space for the modifyvnode hook event params\n\t */\n\tpreq = alloc_br(PBS_BATCH_ModifyVnode);\n\tif (preq == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"rq_modifyvnode alloc failed\");\n\t\treturn;\n\t}\n\n\t/*\n\t * Create a duplicate of the vnode\n\t */\n\tvnode_o = shallow_vnode_dup(pnode);\n\tif (vnode_o == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"shallow_vnode_dup failed\");\n\t\tgoto fn_free_and_return;\n\t}\n\n\t/*\n\t * Apply specified state operation (to the vnode only)\n\t */\n\tswitch (type) {\n\t\tcase Nd_State_Set:\n\t\t\tpnode->nd_state = state_bits;\n\t\t\tbreak;\n\t\tcase Nd_State_Or:\n\t\t\tpnode->nd_state |= state_bits;\n\t\t\tbreak;\n\t\tcase Nd_State_And:\n\t\t\tpnode->nd_state &= state_bits;\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tDBPRT((\"%s: operator type unrecognized %d, defaulting to Nd_State_Set\",\n\t\t\t       __func__, type))\n\t\t\ttype = Nd_State_Set;\n\t\t\tpnode->nd_state = state_bits;\n\t}\n\n\t/* Populate hook param rq_modifyvnode with old and new vnode states */\n\tpreq->rq_ind.rq_modifyvnode.rq_vnode_o = vnode_o;\n\tpreq->rq_ind.rq_modifyvnode.rq_vnode 
= pnode;\n\n\tDBPRT((\"%s(%5s): Requested state transition 0x%lx --> 0x%lx\\n\", __func__,\n\t       pnode->nd_name, vnode_o->nd_state, pnode->nd_state))\n\n\t/* sync state attribute with nd_state */\n\n\tif (pnode->nd_state != get_nattr_long(pnode, ND_ATR_state))\n\t\tset_nattr_l_slim(pnode, ND_ATR_state, pnode->nd_state, SET);\n\n\tif (vnode_o->nd_state != pnode->nd_state) {\n\t\tset_nattr_l_slim(pnode, ND_ATR_last_state_change_time, time_int_val, SET);\n\n\t\t/* Write the vnode state change event to server log */\n\t\tlast_time_int = (int) vnode_o->nd_attr[(int) ND_ATR_last_state_change_time].at_val.at_long;\n\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_NODE, LOG_INFO, pnode->nd_name,\n\t\t\t   \"set_vnode_state;vnode.state=0x%lx vnode_o.state=0x%lx \"\n\t\t\t   \"vnode.last_state_change_time=%d vnode_o.last_state_change_time=%d \"\n\t\t\t   \"state_bits=0x%lx state_bit_op_type_str=%s state_bit_op_type_enum=%d\",\n\t\t\t   pnode->nd_state, vnode_o->nd_state, time_int_val, last_time_int,\n\t\t\t   state_bits, get_vnode_state_op(type), type);\n\t}\n\n\tif (pnode->nd_state & INUSE_PROV) {\n\t\tif (!(pnode->nd_state & VNODE_UNAVAILABLE) ||\n\t\t    (pnode->nd_state == INUSE_PROV)) { /* INUSE_FREE is 0 */\n\n\t\t\tresource_def *prd;\n\t\t\tresource *prc;\n\n\t\t\tprd = &svr_resc_def[RESC_VNTYPE];\n\t\t\tif (prd && (prc = find_resc_entry(get_nattr(pnode, ND_ATR_ResourceAvail), prd))) {\n\t\t\t\tif (strncmp(prc->rs_value.at_val.at_arst->as_string[0],\n\t\t\t\t\t    \"cray_compute\", 12) == 0) {\n\n\t\t\t\t\t/**\n\t\t\t\t\t * Unlike other nodes, in compute-node\n\t\t\t\t\t * provisioning the MOM does not restart,\n\t\t\t\t\t * so is_vnode_prov_done will not be called via\n\t\t\t\t\t * an IS_HOOK_CHECKSUMS request; it is\n\t\t\t\t\t * called here instead.\n\t\t\t\t\t */\n\n\t\t\t\t\tDBPRT((\"%s: calling [is_vnode_prov_done] from set_vnode_state, type = %d\\n\", __func__, type))\n\t\t\t\t\tis_vnode_prov_done(pnode->nd_name);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t/* 
while node is provisioning, we don't want the reservation\n\t\t * to degrade, hence returning.\n\t\t */\n\t\tgoto fn_fire_event;\n\t}\n\n\tunsigned long bits;\n\tbits = vnode_o->nd_state ^ pnode->nd_state;\n\n\tif (bits & (INUSE_OFFLINE | INUSE_OFFLINE_BY_MOM |\n\t\t    INUSE_MAINTENANCE | INUSE_SLEEP |\n\t\t    INUSE_PROV | INUSE_WAIT_PROV))\n\t\tpnode->nd_modified = 1;\n\n\tDBPRT((\"%s(%5s): state transition 0x%lx --> 0x%lx\\n\", __func__, pnode->nd_name,\n\t       vnode_o->nd_state, pnode->nd_state))\n\n\t/* node is marked INUSE_DOWN | INUSE_PROV when provisioning.\n\t * need to check transition from INUSE_PROV to UNAVAILABLE\n\t */\n\tif ((!(vnode_o->nd_state & VNODE_UNAVAILABLE) ||\n\t     (vnode_o->nd_state & INUSE_PROV)) &&\n\t    (pnode->nd_state & VNODE_UNAVAILABLE)) {\n\t\t/* degrade all associated reservations. The '1' instructs the function to\n\t\t * account for the unavailable vnodes in the reservation's counter\n\t\t */\n\t\t(void) vnode_unavailable(pnode, 1);\n\t} else if (((vnode_o->nd_state & VNODE_UNAVAILABLE)) &&\n\t\t   ((!(pnode->nd_state & VNODE_UNAVAILABLE)) ||\n\t\t    (pnode->nd_state == INUSE_FREE))) {\n\t\t(void) vnode_available(pnode);\n\t}\n\nfn_fire_event:\n\t/* Fire off the vnode state change event */\n\tprocess_hooks(preq, hook_msg, sizeof(hook_msg), pbs_python_set_interrupt);\n\nfn_free_and_return:\n\tfree(vnode_o);\n\tfree_br(preq);\n}\n\n/**\n *  @brief\n *  \tA vnode becomes available when its state transitions towards no bits\n *  \twith VNODE_UNAVAILABLE set.\n * \t\tIf the node was associated to a reservation and the reservation was degraded\n * \t\tthen the reservation is adjusted to reflect that one of its associated vnode\n * \t\tis now back up.\n *\n * \t\tIf all the vnodes associated to the reservation are back up\n * \t\tthen the reservation does not need to be reconfirmed by the scheduler.\n * @see\n * \t\tset_vnode_state\n *\n * @param[in]\tnp\t- the node that has become available again\n *\n * @return\tvoid\n *\n * 
@par MT-safe: No\n */\nvoid\nvnode_available(struct pbsnode *np)\n{\n\tresc_resv *presv;\n\tstruct resvinfo *rinfp;\n\tstruct resvinfo *rinfp_hd = NULL;\n\tchar *execvnodes = NULL;\n\tint occurrence = -1;\n\n\tif (np == NULL)\n\t\treturn;\n\n\t/* the vnode has no associated reservations, no action is required */\n\tif ((rinfp = find_vnode_in_resvs(np, Skip_Degraded_Time)) == NULL)\n\t\treturn;\n\n\tDBPRT((\"%s(%s): entered\\n\", __func__, np->nd_name))\n\n\t/* keep track of the head of the linked list for garbage collection */\n\trinfp_hd = rinfp;\n\n\t/* Process each reservation that this node is associated to */\n\tfor (presv = rinfp->resvp; rinfp; rinfp = rinfp->next) {\n\t\tif ((presv = rinfp->resvp) == NULL) {\n\t\t\tlog_err(PBSE_SYSTEM, __func__, \"could not access reservation\");\n\t\t\tcontinue;\n\t\t}\n\t\t/* If none of the vnodes associated to the reservation are down, reset\n\t\t * the states of the reservation to their previous values.\n\t\t *\n\t\t * ri_vnodes_down lives with the reservation information during the\n\t\t * lifecycle of the server process, it is not stored to disk upon server\n\t\t * restart. 
The second check on number of nodes down != 0 is done to\n\t\t * avoid altering reservation information if the state of a node changes\n\t\t * to UP while no nodes were previously seen as down\n\t\t */\n\t\tif (presv->ri_vnodes_down != 0) {\n\t\t\t/* decrement number of nodes down */\n\t\t\tpresv->ri_vnodes_down--;\n\n\t\t\tif (presv->ri_vnodes_down == 0) {\n\t\t\t\t/* If the reservation is currently running, reset its state to\n\t\t\t\t * running\n\t\t\t\t */\n\t\t\t\tif (presv->ri_qs.ri_state == RESV_RUNNING)\n\t\t\t\t\tresv_setResvState(presv, RESV_RUNNING, RESV_RUNNING);\n\t\t\t\telse {\n\t\t\t\t\t/* Otherwise revert its state to Confirmed */\n\t\t\t\t\tresv_setResvState(presv, RESV_CONFIRMED, RESV_CONFIRMED);\n\t\t\t\t}\n\t\t\t\t/* Unset all of the reservation retry attributes and values */\n\t\t\t\tunset_resv_retry(presv);\n\t\t\t}\n\t\t} else {\n\t\t\t/* An inconsistency in recognizing node state transitions caused an\n\t\t\t * unexpected re-entry into this handler. Since this is not\n\t\t\t * supposed to happen we only log it for now.\n\t\t\t */\n\t\t\t/* If a standing reservation we print the execvnodes sequence\n\t\t\t * string for debugging purposes */\n\t\t\tif (get_rattr_long(presv, RESV_ATR_resv_standing)) {\n\t\t\t\tif (is_rattr_set(presv, RESV_ATR_resv_execvnodes))\n\t\t\t\t\texecvnodes = get_rattr_str(presv, RESV_ATR_resv_execvnodes);\n\t\t\t\tif (execvnodes == NULL)\n\t\t\t\t\texecvnodes = \"\";\n \t\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_RESV, LOG_DEBUG,\n\t\t\t\t           presv->ri_qs.ri_resvID, \"execvnodes sequence: %s\", execvnodes);\n\t\t\t\tif (is_rattr_set(presv, RESV_ATR_resv_idx))\n\t\t\t\t\toccurrence = get_rattr_long(presv, RESV_ATR_resv_idx);\n\t\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_RESV, LOG_DEBUG,\n\t\t\t\t           presv->ri_qs.ri_resvID, \"vnodes in occurrence %d: %d; \",\n\t\t\t\t           occurrence, presv->ri_vnodect);\n\t\t\t} else {\n\t\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_RESV, 
LOG_DEBUG,\n\t\t\t           presv->ri_qs.ri_resvID, \"vnodes in reservation: %d; \",\n\t\t\t           presv->ri_vnodect);\n\t\t\t}\n\t\t}\n\t}\n\n\tfree_rinf_list(rinfp_hd);\n}\n\n/**\n * @brief\n * \t\tA node is considered unavailable if it is in one of the states:\n * \t\tOFFLINE, DOWN, DELETED, STALE, or UNKNOWN.\n *\n * \t\tIf a node is in a reservation and the resv is associated to the soonest\n * \t\toccurrence then flag the reservation as state degraded and substate\n * \t\tdegraded.\n *\n * \t\tOtherwise, if the reservation is a standing reservation, and the\n * \t\tnode is in a later occurrence, then mark the reservation in substate\n * \t\tdegraded.\n *\n * @param[in]\tnp\t- the unavailable node\n * @param[in]\taccount_vnode\t- register the vnode as down in the reservation's counts.\n *\n * @return\tvoid\n * @par MT-safe: No\n */\nvoid\nvnode_unavailable(struct pbsnode *np, int account_vnode)\n{\n\tchar *nd_name;\n\tchar *resv_nodes;\n\tresc_resv *presv;\n\tstruct resvinfo *rinfp;\n\tstruct resvinfo *rinfp_hd = NULL;\n\tint *presv_state;\n\tint *presv_substate;\n\tint in_soonest_occr;\n\tlong degraded_time;\n\tlong resv_start_time;\n\tlong retry_time;\n\tchar *execvnodes = NULL;\n\tint occurrence = -1;\n\n\tif (np == NULL)\n\t\treturn;\n\n\tif (!(nd_name = np->nd_name))\n\t\treturn;\n\n\t/* If the vnode has no associated reservation, i.e., the vnode does not\n\t * appear in any advance reservation nor any occurrence of a standing\n\t * reservation, then no action is required.\n\t */\n\tif ((rinfp = find_vnode_in_resvs(np, Set_Degraded_Time)) == NULL)\n\t\treturn;\n\n\tDBPRT((\"%s(%s): entered\\n\", __func__, np->nd_name))\n\n\t/* keep track of the head of the linked list for garbage collection */\n\trinfp_hd = rinfp;\n\n\t/* Process each reservation that this node is associated to */\n\tfor (presv = rinfp->resvp; rinfp; rinfp = rinfp->next) {\n\n\t\tif ((presv = rinfp->resvp) == NULL) {\n\t\t\tlog_err(PBSE_SYSTEM, __func__, \"could not access 
reservation\");\n\t\t\tcontinue;\n\t\t}\n\n\t\tpresv_state = &presv->ri_qs.ri_state;\n\t\tpresv_substate = &presv->ri_qs.ri_substate;\n\t\tretry_time = get_rattr_long(presv, RESV_ATR_retry);\n\t\tresv_nodes = get_rattr_str(presv, RESV_ATR_resv_nodes);\n\t\tresv_start_time = get_rattr_long(presv, RESV_ATR_start);\n\t\t/* the start time of the soonest degraded occurrence */\n\t\tdegraded_time = presv->ri_degraded_time;\n\t\tin_soonest_occr = find_vnode_in_execvnode(resv_nodes, np->nd_name);\n\n\t\tif (retry_time == 0)\n\t\t\tset_resv_retry(presv, determine_resv_retry(presv));\n\n\t\t/* If the downed node is part of the soonest reservation then the\n\t\t * reservation is marked degraded. This is recognized by having the\n\t\t * degraded_time be equal to the reservation start time or if the vnode\n\t\t * name is present in the soonest occurrence's resv_nodes attribute.\n\t\t */\n\t\tif ((degraded_time == resv_start_time) || (in_soonest_occr == 1)) {\n\t\t\tDBPRT((\"vnode_unavailable: changing reservation state to degraded\\n\"))\n\t\t\tif (*presv_state == RESV_CONFIRMED) {\n\t\t\t\t(void) resv_setResvState(presv, RESV_DEGRADED, RESV_DEGRADED);\n\t\t\t} else {\n\t\t\t\t/* If reservation is currently running and a node is down then\n\t\t\t\t * set its substate to degraded\n\t\t\t\t */\n\t\t\t\t(void) resv_setResvState(presv, presv->ri_qs.ri_state, RESV_DEGRADED);\n\t\t\t}\n\t\t} else if (degraded_time > resv_start_time)\n\t\t\t(void) resv_setResvState(presv, presv->ri_qs.ri_state, RESV_DEGRADED);\n\n\t\t/* reference count the number of vnodes down such that the state of the\n\t\t * reservation can be reset to CONFIRMED once the number of unavailable\n\t\t * nodes reaches 0.\n\t\t */\n\t\tif ((*presv_substate == RESV_DEGRADED) && (account_vnode == 1)) {\n\t\t\t/* the number of vnodes down could exceed the number of vnodes in\n\t\t\t * the reservation only in the case of a standing reservation for\n\t\t\t * which the vnodes unavailable are associated to later 
occurrences\n\t\t\t */\n\t\t\tif (presv->ri_vnodes_down > presv->ri_vnodect) {\n\t\t\t\t/* If a standing reservation we print the execvnodes sequence\n\t\t\t\t * string for debugging purposes */\n\t\t\t\tif (get_rattr_long(presv, RESV_ATR_resv_standing)) {\n\t\t\t\t\tif (is_rattr_set(presv, RESV_ATR_resv_execvnodes))\n\t\t\t\t\t\texecvnodes = get_rattr_str(presv, RESV_ATR_resv_execvnodes);\n\t\t\t\t\tif (execvnodes == NULL)\n\t\t\t\t\t\texecvnodes = \"\";\n\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_RESV, LOG_DEBUG,\n\t\t\t\t\t\t   presv->ri_qs.ri_resvID, \"execvnodes sequence: %s\",\n\t\t\t\t\t\t   execvnodes);\n\t\t\t\t\tif (is_rattr_set(presv, RESV_ATR_resv_idx))\n\t\t\t\t\t\toccurrence = get_rattr_long(presv, RESV_ATR_resv_idx);\n\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_RESV, LOG_DEBUG,\n\t\t\t\t\t           presv->ri_qs.ri_resvID,\n\t\t\t\t\t           \"vnodes in occurrence %d: %d;\"\n\t\t\t\t\t           \" unavailable vnodes in reservation: %d\",\n\t\t\t\t\t           occurrence, presv->ri_vnodect, presv->ri_vnodes_down);\n\t\t\t\t} else {\n\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_RESV, LOG_DEBUG,\n\t\t\t\t\t\t   presv->ri_qs.ri_resvID,\n\t\t\t\t\t\t   \"vnodes in reservation: %d; unavailable vnodes in reservation: %d\",\n\t\t\t\t\t\t   presv->ri_vnodect, presv->ri_vnodes_down);\n\t\t\t\t}\n\t\t\t}\n\t\t\tpresv->ri_vnodes_down++;\n\t\t}\n\n\t} /* End of for. 
Process next reservation associated to the affected node */\n\n\tfree_rinf_list(rinfp_hd);\n}\n\n/**\n * @brief\n * \t\tSearch all reservations for an associated node that matches the one\n * \t\tpassed as argument.\n *\n * @param[in]\tnp\t-\tThe node to find in the reservations list\n * @param[in]\tvnode_degraded_op\t-\tTo indicate whether to set the degraded time on\n * \t\t\t\t\t\t\t\t\t\tthe reservation or not.\n *\n * @return\tresvinfo *\n * @retval\t-\tThe reservation info structure\t- for the matching reservations\n * @retval\tNULL\t- if none are found.\n *\n * @note\n * \t\tif none are found. This function allocates memory that has to be freed by\n * \t\tthe caller.\n *\n * @par MT-safe: No\n */\nstruct resvinfo *\nfind_vnode_in_resvs(struct pbsnode *np, enum vnode_degraded_op degraded_op)\n{\n\tstruct resvinfo *rinfp;\n\tstruct resvinfo *parent_rinfp;\n\tresc_resv *presv;\n\tpbsnode_list_t *pl;\n\tint match = 0;\n\tint is_degraded = 0;\n\tlong retry_time;\n\n\tif (np == NULL)\n\t\treturn NULL;\n\n\t/* Walk all reservations and check if the node is associated to an\n\t * occurrence of a standing reservation\n\t *\n\t * While walking the reservation's list, we create a resv info linked list\n\t * that contains all reservations on which the node appears\n\t */\n\trinfp = malloc(sizeof(struct resvinfo));\n\tif (!rinfp)\n\t\treturn NULL;\n\n\trinfp->resvp = NULL;\n\trinfp->next = NULL;\n\n\tparent_rinfp = rinfp;\n\n\tfor (presv = (resc_resv *) GET_NEXT(svr_allresvs); presv != NULL;\n\t     presv = (resc_resv *) GET_NEXT(presv->ri_allresvs)) {\n\t\t/* When processing an advance reservation, set the degraded time to be\n\t\t * the start time of the reservation and process the next reservation\n\t\t */\n\t\tif (get_rattr_long(presv, RESV_ATR_resv_standing) == 0) {\n\t\t\tfor (pl = presv->ri_pbsnode_list; pl; pl = pl->next) {\n\t\t\t\tif (np == pl->vnode)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\tif (!pl)\n\t\t\t\tcontinue;\n\n\t\t\tpresv->ri_degraded_time = 
get_rattr_long(presv, RESV_ATR_start);\n\t\t\tif (!match) {\n\t\t\t\trinfp->resvp = presv;\n\t\t\t\trinfp->next = NULL;\n\t\t\t\tmatch = 1;\n\t\t\t} else {\n\t\t\t\trinfp->next = malloc(sizeof(struct resvinfo));\n\t\t\t\tif (!rinfp->next) {\n\t\t\t\t\tlog_err(PBSE_SYSTEM, __func__,\n\t\t\t\t\t\t\"could not allocate memory to create a resvinfo list\");\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\trinfp = rinfp->next;\n\t\t\t\trinfp->resvp = presv;\n\t\t\t\trinfp->next = NULL;\n\t\t\t}\n\t\t} else { /* Standing Reservation */\n\t\t\t/* If the sequence of execvnodes of the considered standing reservation\n\t\t\t * isn't set, process the next element. Note that this should never\n\t\t\t * happen as the reservation should have been confirmed and the nodes\n\t\t\t * assigned to it\n\t\t\t */\n\t\t\tif (!is_rattr_set(presv, RESV_ATR_resv_execvnodes)) {\n\t\t\t\tis_degraded = 1;\n\t\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_RESV, LOG_NOTICE,\n\t\t\t\t           presv->ri_qs.ri_resvID,\n\t\t\t\t           \"%s: Reservation's execvnodes_seq are corrupted, degrading it\",\n\t\t\t\t           __func__);\n\t\t\t\tif (presv->ri_qs.ri_substate != RESV_DEGRADED) {\n\t\t\t\t\tif (presv->ri_qs.ri_state == RESV_RUNNING\n\t\t\t\t\t    || presv->ri_qs.ri_state == RESV_DELETING_JOBS) {\n\t\t\t\t\t\t/*\n\t\t\t\t\t\t** leave it as is, rely on resv_vnodes to run jobs in it;\n\t\t\t\t\t\t** once this occurrence finally finishes\n\t\t\t\t\t\t** we will re-evaluate whether to reconfirm the next occurrence.\n\t\t\t\t\t\t** Still set substate to degraded now to alert site admins\n\t\t\t\t\t\t*/\n\t\t\t\t\t\tresv_setResvState(presv, presv->ri_qs.ri_state, RESV_DEGRADED);\n\t\t\t\t\t} else {\n\t\t\t\t\t\tresv_setResvState(presv, RESV_DEGRADED, RESV_DEGRADED);\n\t\t\t\t\t\tretry_time = determine_resv_retry(presv);\n\t\t\t\t\t\t/* if server just came up wait 'some time' for nodes to come up */\n\t\t\t\t\t\tif (time_now + ESTIMATED_DELAY_NODES_UP < retry_time)\n\t\t\t\t\t\t\tretry_time = time_now + 
ESTIMATED_DELAY_NODES_UP;\n\t\t\t\t\t\t/* bogus value for degraded_time, but avoid skipping a reconfirmation */\n\t\t\t\t\t\tpresv->ri_degraded_time = get_rattr_long(presv, RESV_ATR_start);\n\t\t\t\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_RESV, LOG_NOTICE, presv->ri_qs.ri_resvID,\n\t\t\t\t\t\t           \"%s: Reservation with corrupted nodes, setting up reconfirmation\",\n\t\t\t\t\t\t           __func__);\n\t\t\t\t\t\tforce_resv_retry(presv, retry_time);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\t/* the reservation is degraded but we cannot associate it with the node */\n \t\t\t\tcontinue;\n\t\t\t} else {\n\t\t\t\tis_degraded = find_degraded_occurrence(presv, np, degraded_op);\n \t\t\t}\n\n\t\t\t/* If no occurrence is degraded move on to the next reservation */\n\t\t\tif (is_degraded == 0)\n\t\t\t\tcontinue;\n\n\t\t\t/* Add the reservation to the constructed linked list to which this\n\t\t\t * node is associated\n\t\t\t */\n\t\t\tif (!match) {\n\t\t\t\trinfp->resvp = presv;\n\t\t\t\trinfp->next = NULL;\n\t\t\t\tmatch = 1;\n\t\t\t} else {\n\t\t\t\trinfp->next = malloc(sizeof(struct resvinfo));\n\t\t\t\tif (!rinfp->next) {\n\t\t\t\t\tlog_err(PBSE_SYSTEM, __func__,\n\t\t\t\t\t\t\"could not allocate memory to create a resvinfo list\");\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\trinfp = rinfp->next;\n\t\t\t\trinfp->resvp = presv;\n\t\t\t\trinfp->next = NULL;\n\t\t\t}\n\t\t}\n\t}\n\n\t/* no reservations are associated to this vnode */\n\tif (!match) {\n\t\tfree(rinfp);\n\t\trinfp = NULL;\n\t\tparent_rinfp = NULL;\n\t}\n\n\treturn parent_rinfp;\n}\n\n/**\n * @brief\n * \t\tWalk occurrences of a standing reservation searching for the soonest\n * \t\tvalid degraded occurrence associated to the vnode passed as argument.\n *\n * @param[in]\tpresv\t- The reservation being processed\n * @param[in]\tnp\t- The node affected, either available or unavailable\n * @param[in]\tvnode_degraded_op\t- determines if a degraded time should be set\n *\n * @return\tint\n * @retval\t1\t- upon success 
finding the node in the reservation (including its\n * \t\t\t\t\toccurrences when a standing reservation)\n * @retval 0\t- if the node was not found.\n *\n * @par Side-effects: this function will also set the degraded time of the\n * reservation when instructed to by the degraded_op operator.\n *\n * @par MT-safe: No\n */\nint\nfind_degraded_occurrence(resc_resv *presv, struct pbsnode *np,\n\t\t\t enum vnode_degraded_op degraded_op)\n{\n\tchar **execvnodes_seq;\n\tchar *short_execvnodes_seq = NULL;\n\tchar **tofree = NULL;\n\tchar *rrule;\n\tchar *tz;\n\tchar *execvnodes = NULL;\n\tlong dtstart;\n\tlong occr_time;\n\tlong curr_degraded_time;\n\tint ridx;\n\tint ridx_adjusted;\n\tint rcount;\n\tint rcount_adjusted;\n\tint i, j;\n\tint occr_found;\n\n\tif (presv == NULL)\n\t\treturn 0;\n\n\tif (np == NULL)\n\t\treturn 0;\n\n\trrule = get_rattr_str(presv, RESV_ATR_resv_rrule);\n\ttz = get_rattr_str(presv, RESV_ATR_resv_timezone);\n\tdtstart = get_rattr_long(presv, RESV_ATR_start);\n\tif (is_rattr_set(presv, RESV_ATR_resv_execvnodes))\n\t\texecvnodes = get_rattr_str(presv, RESV_ATR_resv_execvnodes);\n\tif (execvnodes == NULL || (short_execvnodes_seq = strdup(execvnodes)) == NULL)\n\t\treturn -1;\n\texecvnodes_seq = unroll_execvnode_seq(short_execvnodes_seq, &tofree);\n\t/* If an error occurred during unrolling, this reservation is ignored */\n\tif (!(*execvnodes_seq)) {\n\t\tfree(short_execvnodes_seq);\n\t\treturn -1;\n\t}\n\n\tridx = get_rattr_long(presv, RESV_ATR_resv_idx);\n\trcount = get_rattr_long(presv, RESV_ATR_resv_count);\n\t/* A reconfirmed degraded reservation reports the number of\n\t * reconfirmed occurrences from the time of degradation.\n\t */\n\trcount_adjusted = get_execvnodes_count(execvnodes);\n\n\tridx_adjusted = ridx - (rcount - rcount_adjusted);\n\toccr_found = 0;\n\tcurr_degraded_time = 0;\n\n\t/* Search for a match for this node in each occurrence's execvnode */\n\tfor (i = ridx_adjusted - 1, j = 1; i < rcount_adjusted; i++, j++) {\n\t\tif (i < 
0) {\n\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_RESV, LOG_NOTICE, presv->ri_qs.ri_resvID,\n\t\t\t\t       \"%s: attempt to find vnodes for occurrence %d failed; skipping\",\n\t\t\t\t       __func__, j);\n\t\t\tcontinue;\n\t\t}\n\t\tif (find_vnode_in_execvnode(execvnodes_seq[i], np->nd_name)) {\n\t\t\toccr_found = 1;\n\t\t\tif (degraded_op == Set_Degraded_Time) {\n\t\t\t\t/* we keep track of the occurrence time to determine the earliest\n\t\t\t\t * degraded time\n\t\t\t\t */\n\t\t\t\toccr_time = get_occurrence(rrule, dtstart, tz, j);\n\n\t\t\t\tif (presv->ri_degraded_time == 0 &&\n\t\t\t\t    curr_degraded_time == 0) {\n\t\t\t\t\tcurr_degraded_time = occr_time;\n\t\t\t\t}\n\t\t\t} else\n\t\t\t\tbreak;\n\t\t}\n\t}\n\t/* clean up unrolled execvnodes sequence helpers */\n\tfree(execvnodes_seq);\n\texecvnodes_seq = NULL;\n\tfree(short_execvnodes_seq);\n\tshort_execvnodes_seq = NULL;\n\tfree_execvnode_seq(tofree);\n\ttofree = NULL;\n\n\t/* No matching vnode name was found in any occurrence */\n\tif (!occr_found)\n\t\treturn 0;\n\n\t/* A matching vnode was found in an occurrence but no degraded time was\n\t * set, so we set it to curr_degraded_time for consistency\n\t */\n\tif (presv->ri_degraded_time == 0 && curr_degraded_time != 0)\n\t\tpresv->ri_degraded_time = curr_degraded_time;\n\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tGarbage collect the dynamically generated reservation list\n * @see\n *\t\tvnode_available and vnode_unavailable\n *\n * @param[in,out]\trinfp\t- dynamically generated reservation list\n *\n * @return\tvoid\n *\n * @par MT-safe: No\n */\nvoid\nfree_rinf_list(struct resvinfo *rinfp)\n{\n\tstruct resvinfo *rinfp_tmp = rinfp;\n\n\tif (rinfp_tmp == NULL)\n\t\treturn;\n\n\twhile (rinfp != NULL) {\n\t\trinfp_tmp = rinfp->next;\n\t\tfree(rinfp);\n\t\trinfp = rinfp_tmp;\n\t}\n}\n\n/**\n * @brief\n * \t\tUnset all reservation retry attributes and variables.\n *\n * @param[in]\tpresv - The reservation to process\n *\n * @return\tvoid\n *\n * @par 
MT-safe: No\n */\nvoid\nunset_resv_retry(resc_resv *presv)\n{\n\tif (presv == NULL)\n\t\treturn;\n\n\tif (!is_rattr_set(presv, RESV_ATR_retry))\n\t\treturn;\n\n\tset_rattr_l_slim(presv, RESV_ATR_retry, 0, SET);\n\n\tpresv->ri_resv_retry = 0;\n\tpresv->ri_degraded_time = 0;\n}\n\n/**\n * @brief\n * \t\tSet reservation retry attributes and variables.\n * \t\tThe reservation attribute RESV_ATR_retry is recovered upon a server\n * \t\trestart. The field ri_resv_retry is not.\n * \t\tIf RESV_ATR_retry is set, we add that already existing time as the\n * \t\tevent time, otherwise we compute the event time\n *\n * @param[in]\tpresv\t-\tThe reservation to process\n * @param[in]\tretry_time\t-\tThe retry time to set\n * @param[in]\tforced  - determines which handler we call\n * \n *\n * @return\tvoid\n *\n * @par MT-safe: No\n */\nvoid\nset_resv_retry2(resc_resv *presv, long retry_time, int forced)\n{\n\tstruct work_task *pwt;\n\textern void resv_retry_handler(struct work_task *ptask);\n\textern void resv_retry_handler_forced(struct work_task *ptask);\n\tchar *msg;\n\tchar *str_time;\n\n\tif (presv == NULL)\n\t\treturn;\n\n\tif (presv->ri_resv_retry)\n\t\tmsg = \"Next attempt to reconfirm reservation will be made on %s\";\n\telse\n\t\tmsg = \"An attempt to reconfirm reservation will be made on %s\";\n\n\tset_rattr_l_slim(presv, RESV_ATR_retry, retry_time, SET);\n\n\tpresv->ri_resv_retry = retry_time;\n\n\tstr_time = ctime(&retry_time);\n\tif (str_time == NULL)\n\t\tstr_time = \"\";\n\tlog_eventf(PBSEVENT_DEBUG2, PBS_EVENTCLASS_RESV, LOG_NOTICE, presv->ri_qs.ri_resvID, msg, str_time);\n\n\t/* Set a work task to initiate a scheduling cycle when the time to check\n\t * for alternate nodes to assign the reservation comes\n\t */\n\tif ((pwt = set_task(WORK_Timed, retry_time, forced ? 
resv_retry_handler_forced : resv_retry_handler, presv)) != NULL) {\n\t\t/* set things so that the reservation going away will result in\n\t\t * any \"yet to be processed\" work tasks also going away\n\t\t */\n\t\tappend_link(&presv->ri_svrtask, &pwt->wt_linkobj, pwt);\n\t}\n}\n\n/**\n * @brief\n * \t\tSet reservation retry attributes and variables.\n * \t\tThe reservation attribute RESV_ATR_retry is recovered upon a server\n * \t\trestart. The field ri_resv_retry is not.\n * \t\tIf RESV_ATR_retry is set, we add that already existing time as the\n * \t\tevent time, otherwise we compute the event time\n *\n *      This one will only kick a reconfirmation if ri_vnodes_down is positive\n *\n * @param[in]\tpresv\t-\tThe reservation to process\n * @param[in]\tretry_time\t-\tThe retry time to set\n *\n * @return\tvoid\n *\n * @par MT-safe: No\n */\nvoid\nset_resv_retry(resc_resv *presv, long retry_time)\n{\n\tset_resv_retry2(presv, retry_time, 0);\n}\n\n/**\n * @brief\n * \t\tSet reservation retry attributes and variables.\n * \t\tThe reservation attribute RESV_ATR_retry is recovered upon a server\n * \t\trestart. The field ri_resv_retry is not.\n * \t\tIf RESV_ATR_retry is set, we add that already existing time as the\n * \t\tevent time, otherwise we compute the event time\n *\n *      This will always kick a reconfirmation\n *\n * @param[in]\tpresv\t-\tThe reservation to process\n * @param[in]\tretry_time\t-\tThe retry time to set\n *\n * @return\tvoid\n *\n * @par MT-safe: No\n */\nvoid\nforce_resv_retry(resc_resv *presv, long retry_time)\n{\n\tset_resv_retry2(presv, retry_time, 1);\n}\n\n/**\n * @brief\n * \t\tSearch string big for an exact occurrence of string little. The preceding\n * \t\tand succeeding characters of the occurring string should be legal vnode\n * \t\tcharacters. 
The pattern defined by 'little' consists only of legal vnode\n * \t\tcharacters.\n *\n * \t\tThis function is used to find an exact match of a vnode name within an\n * \t\texecvnode string, for example searching for \"node1\" in the execvnode\n * \t\t(node12:ncpus=1)+(node1node1:ncpus=2)+(node1:ncpus=3)+(node3:mem=5000:ncpus=1)\n * @see\n * \t\tvnode_unavailable and find_degraded_occurrence\n *\n * @param[in]\tbig\t-\tthe original string to search\n * @param[in]\tlittle\t-\tthe pattern to find\n *\n * @return\tint\n * @retval\t1\t- if the pattern is found\n * @retval\t0\t- otherwise\n *\n * @par MT-safe: no\n */\nint\nfind_vnode_in_execvnode(char *big, char *little)\n{\n\tchar *s;\n\tint patt_length;\n\n\tif (big == NULL)\n\t\treturn 0;\n\n\tif (little == NULL)\n\t\treturn 0;\n\n\ts = strstr(big, little);\n\n\tpatt_length = strlen(little);\n\n\t/*\n\t * Note that the pattern little can never occur at the beginning of big, as\n\t * the only way this would happen would be for a string containing a\n\t * repetition of the pattern, as in the second execvnode in the example above,\n\t * where node1 is repeated twice; such a repetition, being a vnode name\n\t * distinct from node1, is skipped by catching the index value being 0.\n\t */\n\twhile (s != NULL) {\n\t\tptrdiff_t index;\n\n\t\t/* Get the index in the original string at which the occurrence is found\n\t\t * using pointer arithmetic. 
*/\n\t\tindex = s - big;\n\n\t\t/* If the pattern isn't part of the remainder of a pattern, for example\n\t\t * looking for \"node1\" in \"node1node1\", and the immediately preceding and\n\t\t * succeeding characters aren't legal vnode characters, then it is a match\n\t\t */\n\t\tif (index != 0 && !legal_vnode_char(big[index - 1], 1) && !legal_vnode_char(big[index + patt_length], 1))\n\t\t\treturn 1;\n\t\t/* Otherwise, we move by the amount that the pattern requires before\n\t\t * running the search again\n\t\t */\n\t\ts = s + patt_length;\n\n\t\ts = strstr(s, little);\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tdecode_stat_update - decodes the body of a status update request from MOM;\n *\t\tthe number of jobs should already be decoded by the caller\n * @see\n * \t\tstat_update and recv_job_obit.\n *\n * @param[in]\tstream\t-\tTPP stream open from Mom on which to read the msg\n * @param[out]\tprused\t-\tJob Resource Usage requests\n *\n * @return\tint - return code\n */\n\nstatic int\ndecode_stat_update(int stream, ruu *prused)\n{\n\tint hc;\n\tint rc;\n\n\tprused->ru_pjobid = disrst(stream, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\thc = disrsi(stream, &rc);\n\tif (rc)\n\t\treturn rc;\n\tif (hc) {\n\t\t/* there is a comment string following */\n\t\tprused->ru_comment = disrst(stream, &rc);\n\t\tif (rc)\n\t\t\treturn rc;\n\t} else {\n\t\tprused->ru_comment = NULL;\n\t}\n\tprused->ru_status = disrsi(stream, &rc);\n\tif (rc)\n\t\treturn rc;\n\tprused->ru_hop = disrsi(stream, &rc);\n\tif (rc)\n\t\treturn rc;\n\n\tCLEAR_HEAD(prused->ru_attr);\n\trc = decode_DIS_svrattrl(stream, &prused->ru_attr);\n\tif (rc) {\n\t\tfree_attrlist(&prused->ru_attr);\n\t}\n\treturn rc;\n}\n\n/**\n * @brief\n *\t\tUpdate job resource usage based on information sent from Mom.\n *\t\tAll updates include the latest information on resource usage.\n * @par Functionality:\n *\t\tAn update from Mom also contains certain attributes which\n *\t\tneed to be recorded, the most important of which is the 
job's\n *\t\tsession id.  When the session id is modified, the job's substate is\n *\t\tchanged from PRERUN to RUNNING; this also saves the job to the database,\n *\t\totherwise it is saved explicitly.\n * @see\n * \t\tis_request\n *\n * @param[in] stream - TPP stream open from Mom on which to read the msg\n *\n * @return\tvoid\n */\nstatic void\nstat_update(int stream)\n{\n\tint bad;\n\tint num;\n\tint njobs;\n\tjob *pjob;\n\tint rc;\n\truu rused = {0};\n\tsvrattrl *sattrl;\n\tmominfo_t *mp;\n\n\tnjobs = disrui(stream, &rc); /* number of jobs in update */\n\tif (rc)\n\t\treturn;\n\n\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, __func__, \"received updates = %d\", njobs);\n\n\trused.ru_next = NULL;\n\twhile (njobs--) {\n\n\t\trused.ru_pjobid = NULL;\n\t\tif (decode_stat_update(stream, &rused) != 0) {\n\n\t\t\tif ((mp = tfind2((u_long) stream, 0, &streams)) != NULL) {\n\n\t\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE,\n\t\t\t\t\t  LOG_NOTICE, mp->mi_host, \"error in stat_update\");\n\t\t\t}\n\t\t\ttpp_eom(stream);\n\t\t\tbreak;\n\t\t}\n\t\tDBPRT((\"stat_update: update for %s\\n\", rused.ru_pjobid))\n\n\t\tif (((pjob = find_job(rused.ru_pjobid)) != NULL) &&\n\t\t    (check_job_state(pjob, JOB_STATE_LTR_RUNNING) || check_job_state(pjob, JOB_STATE_LTR_EXITING)) &&\n\t\t    (get_jattr_long(pjob, JOB_ATR_run_version) == rused.ru_hop)) {\n\n\t\t\tlong old_sid = 0; /* used to save prior sid of job */\n\t\t\tsvrattrl *execvnode_entry = NULL;\n\t\t\tsvrattrl *schedselect_entry = NULL;\n\t\t\tchar *cur_execvnode = NULL;\n\t\t\tchar *cur_schedselect = NULL;\n\n\t\t\tif (is_jattr_set(pjob, JOB_ATR_exec_vnode))\n\t\t\t\tcur_execvnode = get_jattr_str(pjob, JOB_ATR_exec_vnode);\n\n\t\t\tif (is_jattr_set(pjob, JOB_ATR_SchedSelect))\n\t\t\t\tcur_schedselect = get_jattr_str(pjob, JOB_ATR_SchedSelect);\n\n\t\t\t/* update all the attributes sent from Mom */\n\t\t\texecvnode_entry = find_svrattrl_list_entry(&rused.ru_attr, ATTR_execvnode, 
NULL);\n\t\t\tschedselect_entry = find_svrattrl_list_entry(&rused.ru_attr, ATTR_SchedSelect, NULL);\n\n\t\t\tif ((execvnode_entry != NULL) &&\n\t\t\t    (execvnode_entry->al_value != NULL) &&\n\t\t\t    (schedselect_entry != NULL) &&\n\t\t\t    (schedselect_entry->al_value != NULL) &&\n\t\t\t    (cur_execvnode != NULL) &&\n\t\t\t    (strcmp(cur_execvnode, execvnode_entry->al_value) != 0) &&\n\t\t\t    (cur_schedselect != NULL) &&\n\t\t\t    (strcmp(cur_schedselect, schedselect_entry->al_value) != 0)) {\n\n\t\t\t\t/* decrements everything found in exec_vnode */\n\t\t\t\tset_resc_assigned((void *) pjob, 0, DECR);\n\t\t\t\tfree_nodes(pjob);\n\n\t\t\t\tif (cur_execvnode != NULL) {\n\t\t\t\t\tset_jattr_str_slim(pjob, JOB_ATR_exec_vnode_acct, cur_execvnode, NULL);\n\t\t\t\t}\n\n\t\t\t\tif ((is_jattr_set(pjob, JOB_ATR_resource_acct)) != 0) {\n\t\t\t\t\tfree_jattr(pjob, JOB_ATR_resource_acct);\n\t\t\t\t\tmark_jattr_not_set(pjob, JOB_ATR_resource_acct);\n\t\t\t\t}\n\t\t\t\tset_attr_with_attr(&job_attr_def[JOB_ATR_resource_acct], get_jattr(pjob, JOB_ATR_resource_acct), get_jattr(pjob, JOB_ATR_resource), INCR);\n\n\t\t\t\tset_jattr_str_slim(pjob, JOB_ATR_exec_host_acct, get_jattr_str(pjob, JOB_ATR_exec_host), NULL);\n\n\t\t\t\tif (assign_hosts(pjob, execvnode_entry->al_value, 1) == 0) {\n\t\t\t\t\tresource_def *prdefsl;\n\t\t\t\t\tresource *presc;\n\t\t\t\t\t(void) update_resources_list(pjob, ATTR_l,\n\t\t\t\t\t\t\t\t     JOB_ATR_resource,\n\t\t\t\t\t\t\t\t     execvnode_entry->al_value,\n\t\t\t\t\t\t\t\t     INCR, 0,\n\t\t\t\t\t\t\t\t     JOB_ATR_resource_orig);\n\n\t\t\t\t\tif ((is_jattr_set(pjob, JOB_ATR_SchedSelect_orig)) == 0)\n\t\t\t\t\t\tset_jattr_str_slim(pjob, JOB_ATR_SchedSelect_orig, cur_schedselect, NULL);\n\t\t\t\t\tset_jattr_str_slim(pjob, JOB_ATR_SchedSelect, schedselect_entry->al_value, NULL);\n\n\t\t\t\t\t/* re-generate nodect */\n\t\t\t\t\tset_chunk_sum(get_jattr(pjob, JOB_ATR_SchedSelect), get_jattr(pjob, 
JOB_ATR_resource));\n\t\t\t\t\tset_resc_assigned((void *) pjob, 0, INCR);\n\n\t\t\t\t\tprdefsl = &svr_resc_def[RESC_SELECT];\n\t\t\t\t\t/* re-generate \"select\" resource */\n\t\t\t\t\tpresc = find_resc_entry(get_jattr(pjob, JOB_ATR_resource), prdefsl);\n\t\t\t\t\tif (presc == NULL)\n\t\t\t\t\t\tpresc = add_resource_entry(get_jattr(pjob, JOB_ATR_resource), prdefsl);\n\t\t\t\t\tif (presc != NULL)\n\t\t\t\t\t\t(void) prdefsl->rs_decode(&presc->rs_value, NULL, \"select\", schedselect_entry->al_value);\n\t\t\t\t\taccount_jobstr(pjob, PBS_ACCT_PRUNE);\n\t\t\t\t} else {\n\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t\t\t\t  \"error assigning hosts...requeueing job\");\n\t\t\t\t\tdiscard_job(pjob, \"Force rerun\", 1);\n\t\t\t\t\tforce_reque(pjob);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (execvnode_entry != NULL) {\n\t\t\t\tdelete_link(&execvnode_entry->al_link);\n\t\t\t\tfree(execvnode_entry);\n\t\t\t}\n\t\t\tif (schedselect_entry != NULL) {\n\t\t\t\tdelete_link(&schedselect_entry->al_link);\n\t\t\t\tfree(schedselect_entry);\n\t\t\t}\n\t\t\tif (is_jattr_set(pjob, JOB_ATR_session_id))\n\t\t\t\told_sid = get_jattr_long(pjob, JOB_ATR_session_id);\n\t\t\t/* update all the attributes sent from Mom */\n\t\t\tsattrl = (svrattrl *) GET_NEXT(rused.ru_attr);\n\t\t\tif (sattrl != NULL) {\n\t\t\t\tif (modify_job_attr(pjob, sattrl,\n\t\t\t\t\t\t    ATR_DFLAG_MGWR | ATR_DFLAG_SvWR, &bad) != 0) {\n\t\t\t\t\tif ((mp = tfind2((u_long) stream, 0, &streams)) != NULL) {\n\t\t\t\t\t\tfor (num = 1; num < bad; num++)\n\t\t\t\t\t\t\tsattrl = (struct svrattrl *) GET_NEXT(sattrl->al_link);\n\t\t\t\t\t\tsprintf(log_buffer, \"unable to update attribute %s.%s in stat_update\", sattrl->al_name, sattrl->al_resc);\n\t\t\t\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE,\n\t\t\t\t\t\t\t  LOG_NOTICE, mp->mi_host, log_buffer);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif ((is_jattr_set(pjob, JOB_ATR_session_id)) && (get_jattr_long(pjob, 
JOB_ATR_session_id) != old_sid)) {\n\t\t\t\t/* save new or updated session id for the job */\n\t\t\t\t/* and if needed update substate to running   */\n\t\t\t\t/*\n\t\t\t\t * save the session id and likely update the job\n\t\t\t\t * substate, normally it is changed from\n\t\t\t\t * PRERUN (or PROVISION) to RUNNING here, but\n\t\t\t\t * it may have already been changed to:\n\t\t\t\t * - EXITING if the OBIT arrived first.\n\t\t\t\t */\n\t\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid,\n\t\t\t\t\t   \"Received session ID for job: %ld\", get_jattr_long(pjob, JOB_ATR_session_id));\n\t\t\t\tif ((check_job_substate(pjob, JOB_SUBSTATE_PRERUN)) ||\n\t\t\t\t    (check_job_substate(pjob, JOB_SUBSTATE_PROVISION))) {\n\t\t\t\t\t/* log acct info and make RUNNING */\n\t\t\t\t\tcomplete_running(pjob);\n\t\t\t\t\t/* this causes a save of the job */\n\t\t\t\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_RUNNING,\n\t\t\t\t\t\t\tJOB_SUBSTATE_RUNNING);\n\t\t\t\t\t/*\n\t\t\t\t\t * If JOB_DEPEND_TYPE_BEFORESTART dependency is set for the current job\n\t\t\t\t\t * then release the after dependency for its children as the current job\n\t\t\t\t\t * is changing its state from JOB_SUBSTATE_PRERUN to JOB_SUBSTATE_RUNNING\n\t\t\t\t\t */\n\t\t\t\t\tif (is_jattr_set(pjob, JOB_ATR_depend)) {\n\t\t\t\t\t\t(void) depend_on_exec(pjob);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else if ((is_jattr_set(pjob, JOB_ATR_session_id)) == 0) {\n\t\t\t\t/* this has been downgraded to DEBUG3  */\n\t\t\t\t/* level (from DEBUG2)\t\t       */\n\t\t\t\t/* since a mom hook can actually send  */\n\t\t\t\t/* job updates, even before a job gets */\n\t\t\t\t/* a session id */\n\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t  LOG_DEBUG, pjob->ji_qs.ji_jobid,\n\t\t\t\t\t  \"update from Mom without session id\");\n\t\t\t} else {\n\t\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, \"Received the same SID as before: %ld\", get_jattr_long(pjob, 
JOB_ATR_session_id));\n\t\t\t\tjob_save_db(pjob);\n\t\t\t}\n\t\t}\n\t\t(void) free(rused.ru_comment);\n\t\trused.ru_comment = NULL;\n\t\t(void) free(rused.ru_pjobid);\n\t\trused.ru_pjobid = NULL;\n\t\tfree_attrlist(&rused.ru_attr);\n\t}\n}\n\n/**\n * @brief\n * \t\treceive a job_obit IS message from a Mom on TPP stream.\n *\n *\t\tDecode the message into a resc_used_update structure and call\n *\t\tjob_obit() to start the end of job procedures\n * @see\n * \t\tis_request\n *\n * @param[in]\tstream\t-\tthe TPP stream connecting to the Mom\n *\n * @return\tvoid\n */\n\nstatic void\nrecv_job_obit(int stream)\n{\n\tint njobs = 0;\n\tint i = 0;\n\tchar **reject_list = NULL;\n\tchar **ack_list = NULL;\n\tint reject_count = 0;\n\tint ack_count = 0;\n\tmominfo_t *mp = NULL;\n\truu rused = {0};\n\n\tnjobs = disrui(stream, &i); /* number of jobs in update */\n\tif (i)\n\t\treturn;\n\n\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, __func__, \"received obits = %d\", njobs);\n\n\treject_list = (char **) calloc(1, njobs * sizeof(char *));\n\tif (reject_list == NULL)\n\t\tgoto recv_job_obit_err;\n\n\tack_list = (char **) calloc(1, njobs * sizeof(char *));\n\tif (ack_list == NULL)\n\t\tgoto recv_job_obit_err;\n\n\twhile (njobs--) {\n\t\tCLEAR_HEAD(rused.ru_attr);\n\t\trused.ru_comment = NULL;\n\t\trused.ru_next = NULL;\n\t\trused.ru_pjobid = NULL;\n\n\t\tif (decode_stat_update(stream, &rused) == 0) {\n\t\t\tint is_reject = 0;\n\n\t\t\tDBPRT((\"recv_job_obit: decoded obit for %s\\n\", rused.ru_pjobid))\n\t\t\tis_reject = job_obit(&rused, stream);\n\t\t\tif (is_reject == 1) {\n\t\t\t\treject_list[reject_count++] = rused.ru_pjobid;\n\t\t\t\trused.ru_pjobid = NULL;\n\t\t\t} else if (is_reject != -1) { /* -1 means ignore ruu */\n\t\t\t\tack_list[ack_count++] = rused.ru_pjobid;\n\t\t\t\trused.ru_pjobid = NULL;\n\t\t\t}\n\t\t\tfree(rused.ru_comment);\n\t\t\tif (rused.ru_pjobid != NULL)\n\t\t\t\tfree(rused.ru_pjobid);\n\t\t\tfree_attrlist(&rused.ru_attr);\n\t\t} 
else\n\t\t\tgoto recv_job_obit_err;\n\t}\n\n\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, __func__, \"processed obits, sending replies acks: %d, rejects: %d\", ack_count, reject_count);\n\n\tif (ack_count > 0 || reject_count > 0) {\n\t\tif (is_compose(stream, IS_OBITREPLY) != DIS_SUCCESS)\n\t\t\tgoto recv_job_obit_err;\n\t\tif (diswui(stream, ack_count) != DIS_SUCCESS)\n\t\t\tgoto recv_job_obit_err;\n\t\tif (ack_count > 0) {\n\t\t\tfor (i = 0; i < ack_count; i++) {\n\t\t\t\tif (diswst(stream, ack_list[i]) != DIS_SUCCESS)\n\t\t\t\t\tgoto recv_job_obit_err;\n\t\t\t\tfree(ack_list[i]);\n\t\t\t\tack_list[i] = NULL;\n\t\t\t}\n\t\t}\n\t\tfree(ack_list);\n\t\tack_list = NULL;\n\t\tack_count = 0;\n\t\tif (diswui(stream, reject_count) != DIS_SUCCESS)\n\t\t\tgoto recv_job_obit_err;\n\t\tif (reject_count > 0) {\n\t\t\tfor (i = 0; i < reject_count; i++) {\n\t\t\t\tif (diswst(stream, reject_list[i]) != DIS_SUCCESS)\n\t\t\t\t\tgoto recv_job_obit_err;\n\t\t\t\tfree(reject_list[i]);\n\t\t\t\treject_list[i] = NULL;\n\t\t\t}\n\t\t}\n\t\tdis_flush(stream);\n\t\tfree(reject_list);\n\t\treject_list = NULL;\n\t\treject_count = 0;\n\t}\n\n\treturn;\n\nrecv_job_obit_err:\n\tif (rused.ru_pjobid) {\n\t\tDBPRT((\"recv_job_obit: failed to decode obit for %s\\n\", rused.ru_pjobid))\n\t\tlog_joberr(PBSE_INTERNAL, __func__, \"Failed to decode obit\", rused.ru_pjobid);\n\t\tfree(rused.ru_pjobid);\n\t}\n\tif (rused.ru_comment)\n\t\tfree(rused.ru_comment);\n\tfree_attrlist(&rused.ru_attr);\n\n\t/* had an error, discard rest of message */\n\tif ((mp = tfind2((u_long) stream, 0, &streams)) != NULL) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE, LOG_NOTICE, mp->mi_host, \"error in recv_job_obit\");\n\t}\n\ttpp_eom(stream);\n\n\tif (reject_list != NULL) {\n\t\tfor (i = 0; i < reject_count; i++) {\n\t\t\tif (reject_list[i] != NULL)\n\t\t\t\tfree(reject_list[i]);\n\t\t}\n\t\tfree(reject_list);\n\t}\n\tif (ack_list != NULL) {\n\t\tfor (i = 0; i < ack_count; i++) {\n\t\t\tif 
(ack_list[i] != NULL)\n\t\t\t\tfree(ack_list[i]);\n\t\t}\n\t\tfree(ack_list);\n\t}\n}\n\n/**\n * @brief\n * \tTell Mom to discard (kill) a running job.\n *\n *\tThis is done in certain circumstances, such as\n *\n *\t1. If Mom was marked down and jobs were requeued on node_down_requeue,\n *\t\tMom will kill off the job and then send an OBIT which will be rejected\n *\t\tbecause the run version will not match.\n *\n *\t2. Mother Superior or a Sister failed to acknowledge the Delete Job request\n *\t\tat the end of job processing.  This tells all Moms involved to delete\n *\t\tthe job and free the resources.\n *\n * @param[in]\tstream\t-\tthe TPP stream connecting to the Mom\n * @param[in]\tjobid\t-\tjob id to be discarded.\n * @param[in]\trunver\t-\tthe run version (hop) of the job which should be deleted. A runver of -1 means delete any.\n * @param[in]\ttxt\t-\tthe reason why it is getting discarded.\n *\n * @return\tvoid\n */\n\nstatic void\nsend_discard_job(int stream, char *jobid, int runver, char *txt)\n{\n\tDBPRT((\"discard_job %s\\n\", jobid))\n\tif (stream != -1) {\n\t\tstatic char sdjfmt[] = \"Discard running job, %s %s\";\n\t\tint rc;\n\n\t\tif ((rc = is_compose(stream, IS_DISCARD_JOB)) == DIS_SUCCESS) {\n\t\t\tif ((rc = diswst(stream, jobid)) == DIS_SUCCESS)\n\t\t\t\tif ((rc = diswsi(stream, runver)) == DIS_SUCCESS)\n\t\t\t\t\tdis_flush(stream);\n\t\t}\n\t\tif (rc != DIS_SUCCESS) {\n\t\t\tmominfo_t *mp;\n\n\t\t\tif (txt == NULL)\n\t\t\t\ttxt = \"\";\n\t\t\tsprintf(log_buffer, sdjfmt, txt, \"failed\");\n\t\t\tmp = tfind2((u_long) stream, 0, &streams);\n\t\t\tif (mp)\n\t\t\t\tmomptr_down(mp, log_buffer);\n\t\t} else if (txt) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer), sdjfmt, txt, \"\");\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, jobid,\n\t\t\t\t  log_buffer);\n\t\t}\n\t}\n\tDBPRT((\"send_discard_job for %s, stream %d \\n\", jobid, stream))\n}\n\n/**\n * @brief\n * \t\tDuring the execution of a job, one or more Moms involved with\n 
*\t\tthe job apparently went down.\n *\n * @par\n *\t\tTo make sure that the resources allocated\n *\t\tto the job by the Moms are released for other jobs, we send an\n *\t\tIS_DISCARD_JOB message to each Mom.\n * @par\n *\t\tA structure (struct jbdscrd) is hung off of the job structure\n *\t\tto track which Moms have acknowledged the IS_DISCARD_JOB message, see\n *\t\tpost_discard_job(), and which Moms are down, see momptr_down().\n * @par\n *\t\tThe \"txt\" message is logged one time only to prevent flooding the log\n *\t\twith duplicate messages.\n * @par\n *\t\tIf the \"noack\" flag is true, then we do not wish to wait for the\n *\t\tMom's acknowledgement because the job is being requeued/deleted\n *\t\timmediately.  In this case we do not set ji_discard and do not\n *\t\tcall post_discard_job() for the first check.\n *\n * @param[in,out]\tpjob\t-\tjob structure\n * @param[in]\ttxt\t\t-\tThe \"txt\" message is logged one time only to prevent flooding the log with duplicate messages.\n * @param[in]\tnoack\t-\tIf the \"noack\" flag is true, then we do not wish to wait for the Mom's acknowledgement\n * \t\t\t\t\t\t\t because the job is being requeued/deleted immediately.\n *\n * @return\tvoid\n */\nvoid\ndiscard_job(job *pjob, char *txt, int noack)\n{\n\tint i;\n\tint nmom;\n\tstruct jbdscrd *pdsc = NULL;\n\tchar *pc;\n\tchar *pn;\n\tstruct pbsnode *pnode;\n\tint rc;\n\tint rver;\n\n\t/* We're about to discard the job, reply to a preemption.\n\t * This serves as a catch-all just in case the code doesn't reply on its own.\n\t */\n\n\tif (pjob->ji_pmt_preq != NULL)\n\t\treply_preempt_jobs_request(PBSE_NONE, PREEMPT_METHOD_DELETE, pjob);\n\n\tif ((is_jattr_set(pjob, JOB_ATR_exec_vnode)) == 0) {\n\t\t/*  no exec_vnode list from which to work */\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t  \"in discard_job and no exec_vnode\");\n\t\treturn;\n\t}\n\tif (pjob->ji_discard) {\n\t\t/* must be already discarding 
*/\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t  \"cancel previous discard_job tracking for new discard_job request\");\n\t\tfree(pjob->ji_discard);\n\t\tpjob->ji_discard = NULL;\n\t}\n\n\t/* first count up number of vnodes in exec_vnode to size the\t*/\n\t/* jbdscrd (job discard) array, this may result in more entries\t*/\n\t/* than needed for the number of Moms, but that is ok\t\t*/\n\n\tnmom = 1;\n\tpn = get_jattr_str(pjob, JOB_ATR_exec_vnode);\n\twhile ((pn = strchr(pn, (int) '+')) != NULL) {\n\t\tnmom++;\n\t\tpn++;\n\t}\n\t/* allocate one extra for the null terminator */\n\tpdsc = calloc(sizeof(struct jbdscrd), (size_t) (nmom + 1));\n\tif (pdsc == NULL)\n\t\treturn;\n\n\t/* note, calloc has zeroed the space, so the jdcd_mom ptrs are null */\n\n\t/* go through the list of hosts and add each parent Mom once */\n\tnmom = 0;\n\tpn = parse_plus_spec(get_jattr_str(pjob, JOB_ATR_exec_host), &rc);\n\twhile (pn) {\n\t\tpc = pn;\n\t\twhile ((*pc != '\\0') && (*pc != ':'))\n\t\t\t++pc;\n\t\t*pc = '\\0';\n\n\t\tpnode = find_nodebyname(pn);\n\t\t/* had better be the \"natural\" vnode with only the one parent */\n\t\tif (pnode != NULL) {\n\t\t\tfor (i = 0; i < nmom; ++i) {\n\t\t\t\tif ((pdsc + i)->jdcd_mom == pnode->nd_moms[0])\n\t\t\t\t\tbreak; /* already have this Mom */\n\t\t\t}\n\t\t\tif (i == nmom) {\n\t\t\t\t(pdsc + nmom)->jdcd_mom = pnode->nd_moms[0];\n\t\t\t\tif (pnode->nd_moms[0]->mi_dmn_info->dmn_state & INUSE_DOWN)\n\t\t\t\t\t(pdsc + nmom)->jdcd_state = JDCD_DOWN;\n\t\t\t\telse {\n\t\t\t\t\t(pdsc + nmom)->jdcd_state = JDCD_WAITING;\n\t\t\t\t\tpjob->ji_jdcd_waiting = 1;\n\t\t\t\t}\n\t\t\t\tnmom++;\n\t\t\t}\n\t\t}\n\t\tpn = parse_plus_spec(NULL, &rc);\n\t}\n\n\t/* Get run version of this job */\n\trver = get_jattr_long(pjob, JOB_ATR_run_version);\n\n\t/* unless \"noack\", attach discard array to the job */\n\tif (noack == 0)\n\t\tpjob->ji_discard = pdsc;\n\telse\n\t\tpjob->ji_discard = NULL;\n\n\t/* Send 
discard message to each Mom that is up or mark the entry down */\n\tfor (i = 0; i < nmom; i++) {\n\t\tint s;\n\n\t\ts = (pdsc + i)->jdcd_mom->mi_dmn_info->dmn_stream;\n\t\tif ((s != -1) && ((pdsc + i)->jdcd_state != JDCD_DOWN)) {\n\t\t\tsend_discard_job(s, pjob->ji_qs.ji_jobid, rver, txt);\n\t\t\ttxt = NULL; /* so one log message only */\n\t\t} else\n\t\t\t(pdsc + i)->jdcd_state = JDCD_DOWN;\n\t}\n\n\t/*\n\t * at this point unless \"noack\", we call post_discard_job() to see if\n\t * there are any outstanding discard requests and if not to deal with\n\t * the job; the second arg is NULL to indicate \"just checking\"\n\t */\n\tif (noack == 0)\n\t\tpost_discard_job(pjob, NULL, 0);\n\telse\n\t\tfree(pdsc); /* not attached to job, free it now */\n}\n\n/**\n * @brief\n * \t\treceive message that a job is suspended/resumed because\n *\t\tthe cycle harvesting workstation has gone busy/idle.\n *\n *\t\tNote, the JOB_SVFLG_Actsuspd bit which is set in the job is independent\n *\t\tof the JOB_SVFLG_Suspend bit which is set by qsig -s suspend.\n *\t\tBoth may be set.\n *\n *\t\tData received:\tinteger  job state (1 suspended, 0 resumed)\n *\t\t\tstring\t jobid\n *\n * @param[in]\tstream\t-\tthe TPP stream connecting to the Mom\n *\n * @return\tvoid\n */\nstatic void\nrecv_wk_job_idle(int stream)\n{\n\tint rc;\n\tint which;\n\tchar *jobid;\n\tjob *pjob;\n\n\twhich = disrui(stream, &rc); /* 1 = suspend, 0 = resume */\n\tif (rc)\n\t\treturn;\n\n\tjobid = disrst(stream, &rc); /* job id */\n\tif (rc)\n\t\treturn;\n\n\tpjob = find_job(jobid);\n\tif (pjob) {\n\t\t/* suspend or resume job */\n\n\t\tset_job_state(pjob, JOB_STATE_LTR_RUNNING);\n\n\t\tif (which)\n\t\t\tpjob->ji_qs.ji_svrflags |= JOB_SVFLG_Actsuspd;\n\t\telse\n\t\t\tpjob->ji_qs.ji_svrflags &= ~JOB_SVFLG_Actsuspd;\n\n\t\tjob_save_db(pjob);\n\t}\n\n\tfree(jobid);\n}\n\n/**\n * @brief\n *\tClears job 'pjob' from the pnode's list of jobs.\n *\n * @param[in]\tjobid\t- job id\n * @param[in]\tpnode\t- node structure\n *\n * 
@return int\n * @retval\t<val> - # of cpus freed as a result of removing 'pjob'.\n *\n */\nstatic int\ndeallocate_job_from_node(char *jobid, struct pbsnode *pnode)\n{\n\tint numcpus = 0;    /* for floating licensing */\n\tint still_has_jobs; /* still jobs on this vnode */\n\tstruct pbssubn *np;\n\tstruct jobinfo *jp, *prev, *next;\n\n\tif ((jobid == NULL) || (pnode == NULL)) {\n\t\treturn (0);\n\t}\n\n\tstill_has_jobs = 0;\n\tfor (np = pnode->nd_psn; np; np = np->next) {\n\n\t\tfor (prev = NULL, jp = np->jobs; jp; jp = next) {\n\t\t\tnext = jp->next;\n\t\t\tif (strcmp(jp->jobid, jobid)) {\n\t\t\t\tprev = jp;\n\t\t\t\tstill_has_jobs = 1; /* another job still here */\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tif (prev == NULL)\n\t\t\t\tnp->jobs = next;\n\t\t\telse\n\t\t\t\tprev->next = next;\n\t\t\tif (jp->has_cpu) {\n\t\t\t\tpnode->nd_nsnfree++; /* up count of free */\n\t\t\t\tnumcpus++;\n\t\t\t\tif (pnode->nd_nsnfree > pnode->nd_nsn) {\n\t\t\t\t\tlog_event(PBSEVENT_SYSTEM,\n\t\t\t\t\t\t  PBS_EVENTCLASS_NODE, LOG_ALERT,\n\t\t\t\t\t\t  pnode->nd_name,\n\t\t\t\t\t\t  \"CPU count incremented free more than total\");\n\t\t\t\t}\n\t\t\t}\n\t\t\tfree(jp->jobid);\n\t\t\tfree(jp);\n\t\t\tjp = NULL;\n\t\t}\n\t\tif (np->jobs == NULL) {\n\t\t\tnp->inuse &= ~(INUSE_JOB | INUSE_JOBEXCL);\n\t\t}\n\t}\n\tif (still_has_jobs) {\n\t\t/* if the vnode still has jobs, then don't clear */\n\t\t/* JOBEXCL */\n\t\tif (pnode->nd_nsnfree > 0) {\n\t\t\t/* some cpus free, clear \"job-busy\" state */\n\t\t\tset_vnode_state(pnode, ~INUSE_JOB, Nd_State_And);\n\t\t}\n\t} else {\n\t\t/* no jobs at all, clear both JOBEXCL and \"job-busy\" */\n\t\tset_vnode_state(pnode,\n\t\t\t\t~(INUSE_JOB | INUSE_JOBEXCL),\n\t\t\t\tNd_State_And);\n\n\t\t/* call function to check and free the node from the */\n\t\t/* prov list and reset wait_prov flag, if set */\n\t\tif (check_job_substate(find_job(jobid), JOB_SUBSTATE_PROVISION))\n\t\t\tfree_prov_vnode(pnode);\n\t}\n\n\treturn (numcpus);\n}\n\n/**\n *\n * @brief\n 
*\tGiven a string of exec_vnode format, remove the vnode entries\n *\tthat are found in 'vnodelist'.\n *\n * @param[in]\texecvnode \t- the input exec_vnode string\n * @param[in]\tvnodelist \t- list of vnodes, plus-separated, that are to be deleted from the\n *\t\t\t    \t\t'execvnode' entry.\n * @param[out]\terr_msg\t\t- if there's any failure, put appropriate message here.\n * @param[in]\terr_msg_sz \t- size of the 'err_msg' buffer.\n *\n * @return char *\n * @retval <string>\t- a new version of 'execvnode' string with entries\n *\t\t\t  containing the vnodes in 'vnodelist' taken out.\n * @retval  NULL\t- if an error has occurred.\n *\n * @note\n *\treturned string is a malloced value that must be freed.\n */\nstatic char *\ndelete_from_exec_vnode(char *execvnode, char *vnodelist, char *err_msg,\n\t\t       int err_msg_sz)\n{\n\tchar *exec_vnode = NULL;\n\tchar *new_exec_vnode = NULL;\n\tchar *chunk = NULL;\n\tchar *last = NULL;\n\tint hasprn = 0;\n\tint entry = 0;\n\tint nelem;\n\tchar *noden;\n\tstruct key_value_pair *pkvp;\n\tchar buf[LOG_BUF_SIZE] = {0};\n\tint j;\n\tint paren = 0;\n\tint parend = 0;\n\n\tif (execvnode == NULL) {\n\t\tsnprintf(err_msg, err_msg_sz, \"bad parameter\");\n\t\treturn NULL;\n\t}\n\n\texec_vnode = strdup(execvnode);\n\tif (exec_vnode == NULL) {\n\t\tsnprintf(err_msg, err_msg_sz, \"execvnode strdup error\");\n\t\tgoto delete_from_exec_vnode_exit;\n\t}\n\n\tnew_exec_vnode = (char *) calloc(1, strlen(exec_vnode) + 1);\n\tif (new_exec_vnode == NULL) {\n\t\tsnprintf(err_msg, err_msg_sz,\n\t\t\t \"new_exec_vnode calloc error\");\n\t\tgoto delete_from_exec_vnode_exit;\n\t}\n\n\tnew_exec_vnode[0] = '\\0';\n\tentry = 0; /* exec_vnode entries */\n\tparen = 0;\n\tfor (chunk = parse_plus_spec_r(exec_vnode, &last, &hasprn);\n\t     chunk != NULL;\n\t     chunk = parse_plus_spec_r(last, &last, &hasprn)) {\n\t\tparen += hasprn;\n\t\tif (parse_node_resc(chunk, &noden, &nelem, &pkvp) == 0) {\n\t\t\tif ((vnodelist != NULL) &&\n\t\t\t    
!in_string_list(noden, '+', vnodelist)) {\n\n\t\t\t\t/* there's something put in previously */\n\t\t\t\tif (entry > 0) {\n\t\t\t\t\tstrcat(new_exec_vnode, \"+\");\n\t\t\t\t}\n\n\t\t\t\tif (((hasprn > 0) && (paren > 0)) ||\n\t\t\t\t    ((hasprn == 0) && (paren == 0))) {\n\t\t\t\t\t/* at the beginning of chunk for current host */\n\t\t\t\t\tif (!parend) {\n\t\t\t\t\t\tstrcat(new_exec_vnode, \"(\");\n\t\t\t\t\t\tparend = 1;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (!parend) {\n\t\t\t\t\tstrcat(new_exec_vnode, \"(\");\n\t\t\t\t\tparend = 1;\n\t\t\t\t}\n\t\t\t\tstrcat(new_exec_vnode, noden);\n\t\t\t\tentry++;\n\n\t\t\t\tfor (j = 0; j < nelem; ++j) {\n\t\t\t\t\tsnprintf(buf, sizeof(buf), \":%s=%s\",\n\t\t\t\t\t\t pkvp[j].kv_keyw, pkvp[j].kv_val);\n\t\t\t\t\tstrcat(new_exec_vnode, buf);\n\t\t\t\t}\n\n\t\t\t\t/* have all chunks for current host */\n\t\t\t\tif (paren == 0) {\n\n\t\t\t\t\tif (parend) {\n\t\t\t\t\t\tstrcat(new_exec_vnode, \")\");\n\t\t\t\t\t\tparend = 0;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else {\n\n\t\t\t\tif (hasprn < 0) {\n\t\t\t\t\t/* matched ')' in chunk, so need to */\n\t\t\t\t\t/* balance the parentheses */\n\t\t\t\t\tif (parend) {\n\t\t\t\t\t\tstrcat(new_exec_vnode, \")\");\n\t\t\t\t\t\tparend = 0;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\tsnprintf(err_msg, err_msg_sz,\n\t\t\t\t \"parse_node_resc error\");\n\t\t\tgoto delete_from_exec_vnode_exit;\n\t\t}\n\t}\n\n\tentry = strlen(new_exec_vnode) - 1;\n\tif ((entry >= 0) && (new_exec_vnode[entry] == '+'))\n\t\tnew_exec_vnode[entry] = '\\0';\n\n\tfree(exec_vnode);\n\treturn (new_exec_vnode);\n\ndelete_from_exec_vnode_exit:\n\tfree(exec_vnode);\n\tfree(new_exec_vnode);\n\treturn NULL;\n}\n\n/**\n * @brief\n *\tThis returns 1 if the given 'pmom' is a parent mom of\n *\tnode 'pnode'.\n *\n * @param[in]\tpmom - the parent mom\n * @param[in]\tpnode - the node to match against.\n *\n * @return int\n * @retval 1\t- if true\n * @retval 0\t- if false\n */\nstatic int\nis_parent_mom_of_node(mominfo_t *pmom, 
pbsnode *pnode)\n{\n\tint i;\n\n\tif ((pmom == NULL) || (pnode == NULL) ||\n\t    (pnode->nd_moms == NULL)) {\n\t\treturn (0);\n\t}\n\n\tfor (i = 0; i < pnode->nd_nummoms; i++) {\n\t\tif (pnode->nd_moms[i] == pmom) {\n\t\t\treturn (1);\n\t\t}\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n *\tThis removes 'pjob' from vnodes managed by parent mom 'pmom'.\n *\tAlso, if pjob's 'exec_vnode_deallocated' attribute is set,\n *\tthen remove entries in 'exec_vnode_deallocated' that match\n *\tthe vnodes where 'pjob' has already been taken out.\n *\n * @param[in]\tpmom - the parent mom who sent the request.\n * @param[in]\tpjob - job in question\n *\n * @return void\n */\nstatic void\ndeallocate_job(mominfo_t *pmom, job *pjob)\n{\n\tint i;\n\tint totcpus = 0;\n\tint totcpus0 = 0;\n\tchar *freed_vnode_list = NULL;\n\tint freed_sz = 0;\n\tchar *new_exec_vnode = NULL;\n\tchar *jobid;\n\tpbs_sched *psched;\n\n\tif ((pmom == NULL) || (pjob == NULL)) {\n\t\treturn;\n\t}\n\n\tjobid = pjob->ji_qs.ji_jobid;\n\tif ((jobid == NULL) || (*jobid == '\\0'))\n\t\treturn;\n\n\tfor (i = 0; i < svr_totnodes; i++) {\n\t\tpbsnode *pnode;\n\n\t\tpnode = pbsndlist[i];\n\n\t\tif ((pnode != NULL) && !(pnode->nd_state & INUSE_DELETED) && is_parent_mom_of_node(pmom, pnode)) {\n\t\t\ttotcpus0 = totcpus;\n\t\t\ttotcpus += deallocate_job_from_node(pjob->ji_qs.ji_jobid, pnode);\n\t\t\tif (totcpus > totcpus0) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"clearing job %s from node %s\", jobid, pnode->nd_name);\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_NODE, LOG_DEBUG,\n\t\t\t\t\t  pmom->mi_host, log_buffer);\n\t\t\t}\n\t\t\tif (i != 0) {\n\t\t\t\tif (pbs_strcat(&freed_vnode_list, &freed_sz, \"+\") == NULL) {\n\t\t\t\t\tlog_err(-1, __func__, \"pbs_strcat failed\");\n\t\t\t\t\tfree(freed_vnode_list);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (pbs_strcat(&freed_vnode_list, &freed_sz, pnode->nd_name) == NULL) {\n\t\t\t\tlog_err(-1, __func__, \"pbs_strcat 
failed\");\n\t\t\t\tfree(freed_vnode_list);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t}\n\tif (totcpus > 0) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"deallocating %d cpu(s) from job %s\", totcpus, jobid);\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_NODE, LOG_DEBUG,\n\t\t\t  pmom->mi_host, log_buffer);\n\t}\n\n\tif ((freed_vnode_list != NULL) && (is_jattr_set(pjob, JOB_ATR_exec_vnode_deallocated))) {\n\t\tchar err_msg[LOG_BUF_SIZE];\n\n\t\tnew_exec_vnode = delete_from_exec_vnode(\n\t\t\tget_jattr_str(pjob, JOB_ATR_exec_vnode_deallocated),\n\t\t\tfreed_vnode_list, err_msg, LOG_BUF_SIZE);\n\n\t\tif (new_exec_vnode == NULL) {\n\t\t\tlog_err(-1, __func__, err_msg);\n\t\t\tfree(freed_vnode_list);\n\t\t\treturn;\n\t\t}\n\n\t\tset_jattr_str_slim(pjob, JOB_ATR_exec_vnode_deallocated, new_exec_vnode, NULL);\n\t\tfree(new_exec_vnode);\n\t}\n\tif (find_assoc_sched_jid(pjob->ji_qs.ji_jobid, &psched))\n\t\tset_scheduler_flag(SCH_SCHEDULE_TERM, psched);\n\telse {\n\t\tlog_err(-1, __func__, \"Unable to find scheduler associated with partition\");\n\t}\n\tfree(freed_vnode_list);\n}\n/**\n * @brief\n * \tWe got an EOF on a stream.\n *\n * @param[in]\tstream\t-\tthe TPP stream connecting to the Mom\n * @param[in]\tret\t-\tnot used here\n * @param[in]\tmsg\t-\tthe reason why the mom is down\n *\n * @return\tvoid\n */\nvoid\nstream_eof(int stream, int ret, char *msg)\n{\n\tmominfo_t *mp;\n\n\tDBPRT((\"entering %s\", __func__))\n\n\ttpp_close(stream);\n\n\t/* find who the stream belongs to and mark down */\n\tif ((mp = tfind2((u_long) stream, 0, &streams)) != NULL) {\n\t\tDBPRT((\"%s: %s down\\n\", __func__, mp->mi_host))\n\t\tlog_errf(-1, __func__, \"%s down\", mp->mi_host);\n\t\tif (msg == NULL)\n\t\t\tmsg = \"communication closed\";\n\n\t\tmomptr_down(mp, msg);\n\n\t\t/* Down node and all subnodes */\n\t\tmp->mi_dmn_info->dmn_stream = -1;\n\n\t\t/* Since stream is now closed, reset the intermediate\n\t\t * state INUSE_INIT.\n\t\t */\n\t\tmp->mi_dmn_info->dmn_state &= 
~INUSE_INIT;\n\n#ifdef NAS /* localmod 005 */\n\t\ttdelete2((u_long) stream, 0ul, &streams);\n#else\n\t\ttdelete2((u_long) stream, 0, &streams);\n#endif /* localmod 005 */\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \t\tMark all the nodes in mom array as unknown\n * @see\n * \t\tnet_down_handler\n *\n * @param[in]\tall\t-\tif 1, mark every mom's state as unknown.\n *\n * @return\tvoid\n */\nvoid\nmark_nodes_unknown(int all)\n{\n\tmominfo_t *pmom;\n\tint i;\n\tint stm;\n\tdmn_info_t *pdmn_info;\n\n\tDBPRT((\"entering %s\", __func__))\n\n\tfor (i = 0; i < mominfo_array_size; i++) {\n\t\tif (mominfo_array[i]) {\n\t\t\tpmom = mominfo_array[i];\n\t\t\tpdmn_info = pmom->mi_dmn_info;\n\n\t\t\tif ((pdmn_info->dmn_state & INUSE_INIT) || all == 1) {\n\t\t\t\tset_all_state(pmom, 1, INUSE_UNKNOWN, NULL, Set_All_State_Regardless);\n\t\t\t\tstm = pdmn_info->dmn_stream;\n\t\t\t\tif (stm >= 0) {\n\t\t\t\t\ttpp_close(stm);\n\t\t\t\t\ttdelete2((u_long) stm, 0, &streams);\n\t\t\t\t}\n\t\t\t\tpdmn_info->dmn_stream = -1;\n\n\t\t\t\t/* Since stream is being closed, reset the intermediate\n\t\t\t\t * state INUSE_INIT.\n\t\t\t\t */\n\t\t\t\tpdmn_info->dmn_state &= ~INUSE_INIT;\n\t\t\t\tpdmn_info->dmn_state |= INUSE_UNKNOWN | INUSE_MARKEDDOWN;\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief Add a mom's TPP stream to a server -> mom multicast channel.\n *\n * @param[in] pmom - The mom whose stream is to be added.\n * @param[in,out] mtfd - The TPP channel to add moms for multicasting; opened here if -1.\n * @param[in] unique - Ensure only unique values are added.\n *\n * @return int\n * @retval 0: success\n * @retval !0: failure\n *\n */\nint\nmcast_add(mominfo_t *pmom, int *mtfd, bool unique)\n{\n\tdmn_info_t *pdmninfo;\n\tint rc = 0;\n\n\tif (!pmom)\n\t\treturn -1;\n\n\tpdmninfo = pmom->mi_dmn_info;\n\n\tDBPRT((\"%s: entered\\n\", __func__))\n\n\tif (pdmninfo->dmn_stream < 0)\n\t\treturn -1;\n\n\t/* open the tpp mcast channel here */\n\tif (*mtfd == -1 && (*mtfd = tpp_mcast_open()) == -1) {\n\t\tlog_err(-1, __func__, \"Failed to open TPP 
mcast channel for broadcasting messages\");\n\t\treturn -1;\n\t}\n\n\trc = tpp_mcast_add_strm(*mtfd, pdmninfo->dmn_stream, unique);\n\n\tif (rc == -1) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"Failed to add service endpoint at %s:%d to mcast\", pmom->mi_host, pmom->mi_port);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\ttpp_close(pdmninfo->dmn_stream);\n\t\ttdelete2((u_long) pdmninfo->dmn_stream, 0, &streams);\n\t\tpdmninfo->dmn_stream = -1;\n\t}\n\n\treturn rc;\n}\n\n/**\n * @brief\n * \tFind the moms behind the member streams of a failed multicast and close those streams.\n *\n * @param[in]\tstm\t- multi-cast stream where broadcast is attempted\n * @param[in]\tret\t- failure return code\n *\n * @return\tvoid\n */\nvoid\nclose_streams(int stm, int ret)\n{\n\tint *strms;\n\tint count = 0;\n\tint i;\n\tmominfo_t *pmom;\n\tdmn_info_t *pdmninfo;\n\tstruct sockaddr_in *addr;\n\n\tif (stm < 0)\n\t\treturn;\n\n\tstrms = tpp_mcast_members(stm, &count);\n\n\tfor (i = 0; i < count; i++) {\n\t\tif ((pmom = tfind2((u_long) strms[i], 0, &streams)) != NULL) {\n\t\t\tpdmninfo = pmom->mi_dmn_info;\n\t\t\t/* find the respective mom from the stream */\n\t\t\taddr = tpp_getaddr(pdmninfo->dmn_stream);\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"%s %d to %s(%s)\",\n\t\t\t\t dis_emsg[ret], errno, pmom->mi_host, netaddr(addr));\n\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\tstream_eof(pdmninfo->dmn_stream, ret, \"mcast failed!\");\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\tMom multicast function to broadcast a single command to all the moms.\n *\n * @param[in]\tptask\t- work task structure\n *\n * @return\tvoid\n */\nvoid\nmcast_msg(struct work_task *ptask)\n{\n\tdmn_info_t *pdmninfo;\n\tint i;\n\tint ret;\n\n\tif (!ptask)\n\t\treturn;\n\n\tDBPRT((\"%s: entered\\n cmd: %d\", __func__, ptask->wt_aux))\n\n\tint cmd = ptask->wt_aux;\n\tint mtfd = -1;\n\n\tswitch (cmd) {\n\t\tcase IS_CLUSTER_ADDRS:\n\t\t\tfor (i = 0; i < mominfo_array_size; i++) {\n\n\t\t\t\tif 
(!mominfo_array[i])\n\t\t\t\t\tcontinue;\n\n\t\t\t\tpdmninfo = mominfo_array[i]->mi_dmn_info;\n\t\t\t\tif ((pdmninfo->dmn_state & INUSE_NEED_ADDRS) && pdmninfo->dmn_stream >= 0) {\n\t\t\t\t\tmcast_add(mominfo_array[i], &mtfd, FALSE);\n\t\t\t\t\tif (pdmninfo->dmn_state & INUSE_MARKEDDOWN)\n\t\t\t\t\t\tpdmninfo->dmn_state &= ~INUSE_MARKEDDOWN;\n\t\t\t\t\tset_all_state(mominfo_array[i], 0, INUSE_DOWN | INUSE_NEED_ADDRS,\n\t\t\t\t\t\t      NULL, Set_All_State_Regardless);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif ((ret = send_ip_addrs_to_mom(mtfd, 0)) != DIS_SUCCESS)\n\t\t\t\tclose_streams(mtfd, ret);\n\n\t\t\ttpp_mcast_close(mtfd);\n\t\t\tbreak;\n\n\t\tcase IS_REPLYHELLO:\n\t\t\tif (mtfd_replyhello != -1)\n\t\t\t\tif ((ret = reply_hellosvr(mtfd_replyhello, 1)) != DIS_SUCCESS)\n\t\t\t\t\tclose_streams(mtfd_replyhello, ret);\n\t\t\tif (mtfd_replyhello_noinv != -1)\n\t\t\t\tif ((ret = reply_hellosvr(mtfd_replyhello_noinv, 0)) != DIS_SUCCESS)\n\t\t\t\t\tclose_streams(mtfd_replyhello_noinv, ret);\n\n\t\t\ttpp_mcast_close(mtfd_replyhello);\n\t\t\ttpp_mcast_close(mtfd_replyhello_noinv);\n\t\t\tmtfd_replyhello = -1;\n\t\t\tmtfd_replyhello_noinv = -1;\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\tbreak;\n\t}\n}\n\n/**\n * @brief\n * \t\tAdd placement set names to the Server's pnames attribute.\n * @see\n * \t\tupdate2_to_vnode and is_request\n *\n * @param[in]\tnamestr\t-\tThe namestr parameter is a comma-separated set of strings.\n * \t\t\t\t\t\t\tEach separate name is added only if it isn't already in pnames.\n * @return\tint\n * @retval\t0\t- success\n * @retval\t1\t- failure\n */\n\nstatic int\nsetup_pnames(char *namestr)\n{\n\tint i;\n\tchar *newbuffer;\n\tint newentries = 0;\n\tchar *pe;\n\tchar *ps;\n\tattribute *ppnames;\n\tstruct array_strings *pparst;\n\tchar *workcopy;\n\tint resc_added = 0;\n\n\tif ((namestr == NULL) || (*namestr == '\\0'))\n\t\treturn 0;\n\tworkcopy = strdup(namestr);\n\tif (workcopy == NULL)\n\t\treturn 1;\n\tif ((newbuffer = (char *) malloc(strlen(workcopy) + 1)) 
== NULL) {\n\t\tfree(workcopy);\n\t\treturn 1;\n\t}\n\t*newbuffer = '\\0';\n\n\tppnames = get_sattr(SVR_ATR_PNames);\n\tpparst = ppnames->at_val.at_arst;\n\tps = workcopy;\n\n\t/* look at each individual resource name in the comma-separated list */\n\twhile (*ps) {\n\t\twhile (*ps && isspace((int) *ps))\n\t\t\tps++;\n\t\tpe = ps;\n\t\twhile (*pe && (*pe != ','))\n\t\t\tpe++;\n\t\tif (*pe == ',')\n\t\t\t*pe++ = '\\0';\n\n\t\t/* is the resource name already in the pnames attribute? */\n\t\tif (pparst) {\n\t\t\tfor (i = 0; i < pparst->as_usedptr; ++i) {\n\t\t\t\tif (strcasecmp(ps, pparst->as_string[i]) == 0)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tif ((pparst == NULL) || (i == pparst->as_usedptr)) {\n\t\t\t/* not there already, ok to add this word */\n\t\t\tif (newentries++)\n\t\t\t\tstrcat(newbuffer, \",\");\n\t\t\tstrcat(newbuffer, ps);\n\t\t}\n\n\t\t/* next see if it needs to be added to resourcedef */\n\t\tif (!find_resc_def(svr_resc_def, ps)) {\n\t\t\tif (add_resource_def(ps, ATR_TYPE_ARST, NO_USER_SET) == 0)\n\t\t\t\tresc_added++;\n\t\t}\n\n\t\tps = pe;\n\t}\n\n\tif (resc_added > 0) {\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_INFO, \"setup_pnames\",\n\t\t\t  \"Restarting Python interpreter as resourcedef file has changed.\");\n\t\tpbs_python_ext_shutdown_interpreter(&svr_interp_data);\n\t\tif (pbs_python_ext_start_interpreter(&svr_interp_data) != 0) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"Failed to restart Python interpreter\");\n\t\t\tfree(workcopy);\n\t\t\tfree(newbuffer);\n\t\t\treturn 1;\n\t\t}\n\n\t\tsend_rescdef(1);\n\t}\n\n\tif (newentries) {\n\t\tint flag = 0;\n\n\t\tif (is_attr_set(ppnames) == 0 || (ppnames->at_flags & (ATR_VFLAG_SET | ATR_VFLAG_DEFLT)) == (ATR_VFLAG_SET | ATR_VFLAG_DEFLT))\n\t\t\tflag = ATR_VFLAG_DEFLT;\n\n\t\tset_sattr_generic(SVR_ATR_PNames, newbuffer, NULL, INCR);\n\t\tppnames->at_flags |= flag;\n\t}\n\tfree(workcopy);\n\tfree(newbuffer);\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tadd mom to the vnode 
list if it is not listed, and if there is no room,\n * \t\tre-structure and create room for the mom. Add Mom's name to this vnode's Mom attribute\n * \t\tand set reverse linkage Mom -> node.\n * @see\n * \t\tupdate2_to_vnode and create_pbs_node2.\n *\n *\n */\nint\ncross_link_mom_vnode(struct pbsnode *pnode, mominfo_t *pmom)\n{\n\tint i;\n\tint n;\n\tmom_svrinfo_t *prmomsvr;\n\n\tif ((pnode == NULL) || (pmom == NULL))\n\t\treturn (PBSE_NONE);\n\n\t/* see if the node already has this Mom listed; if not, add her */\n\n\tfor (i = 0; i < pnode->nd_nummoms; ++i) {\n\t\tif (pnode->nd_moms[i] == pmom)\n\t\t\tbreak;\n\t}\n\n\tif (i == pnode->nd_nummoms) {\n\t\t/* need to add this parent Mom in the node's array */\n\t\tif (pnode->nd_nummoms == pnode->nd_nummslots) {\n\n\t\t\t/* need to expand the array to make room */\n\t\t\tmominfo_t **tmpim;\n\n\t\t\tn = pnode->nd_nummslots;\n\t\t\tif (n == 0)\n\t\t\t\tn = 1;\n\t\t\telse\n\t\t\t\tn *= 2;\n\t\t\ttmpim = (mominfo_t **) realloc(pnode->nd_moms,\n\t\t\t\t\t\t       n * sizeof(mominfo_t *));\n\t\t\tif (tmpim == NULL)\n\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\tpnode->nd_moms = tmpim;\n\t\t\tpnode->nd_nummslots = n;\n\t\t}\n\t\tpnode->nd_moms[pnode->nd_nummoms++] = pmom;\n\n\t\t/* also add Mom's name to this vnode's Mom attribute */\n\t\tset_nattr_generic(pnode, ND_ATR_Mom, pmom->mi_host, NULL, INCR);\n\t}\n\n\t/* Now set reverse linkage Mom -> node */\n\n\tprmomsvr = pmom->mi_data;\n\tfor (i = 0; i < prmomsvr->msr_numvnds; ++i) {\n\t\tif (prmomsvr->msr_children[i] == pnode)\n\t\t\tbreak;\n\t}\n\tif (i == prmomsvr->msr_numvnds) {\n\n\t\t/* need to add this node to array of Mom's children */\n\t\tif (prmomsvr->msr_numvnds == prmomsvr->msr_numvslots) {\n\t\t\t/* need to expand the array (double it) */\n\t\t\tstruct pbsnode **tmpn;\n\n\t\t\tn = prmomsvr->msr_numvslots;\n\t\t\tif (n == 0)\n\t\t\t\tn = 1;\n\t\t\telse\n\t\t\t\tn *= 2;\n\t\t\ttmpn = (struct pbsnode **) realloc(prmomsvr->msr_children,\n\t\t\t\t\t\t\t   n * sizeof(struct pbsnode 
*));\n\t\t\tif (tmpn == NULL)\n\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\tprmomsvr->msr_children = tmpn;\n\t\t\tprmomsvr->msr_numvslots = n;\n\t\t}\n\t\tprmomsvr->msr_children[prmomsvr->msr_numvnds++] = pnode;\n\t}\n\treturn 0;\n}\n\n#define UPDATE_FROM_HOOK \"update_from_hook\"\n#define UPDATE2 \"update2\"\n#define UPDATE_FROM_HOOK_U \"UPDATE_FROM_HOOK\"\n#define UPDATE2_U \"UPDATE2\"\n#define UPDATE_FROM_MOM_HOOK \"update from mom hook\"\n#define UPDATE \"update\"\n/**\n * @brief\n * \t\tcreate/update vnodes from the information sent by Mom in the UPDATE2\n * \t\tmessage.\n *\n * @param[in]  pvnal \t- info on one vnode from Mom\n * @param[in]  new   \t- true if ok to create new vnode\n * @param[in]  pmom  \t- the Mom which sent this update\n * @param[out] madenew \t- set non-zero if any new vnodes were created\n * @param[out] from_hook - set non-zero if request coming from hook\n *\t\t\t  Normally set to 1 for regular vnoded request;\n *\t\t\t  2 for qmgr-like (non-vnoded) request.\n *\n * @return int\n * @retval\tzero\t- ok\n * @retval\tPBSE_ number\t- error\n *\n * @par MT-safe: No\n */\nstatic int\nupdate2_to_vnode(vnal_t *pvnal, int new, mominfo_t *pmom, int *madenew, int from_hook)\n{\n\tint bad;\n\tint i;\n\tint j;\n\tint localmadenew = 0;\n\tstruct pbsnode *pnode;\n\tpbs_list_head atrlist;\n\tsvrattrl *pal;\n\tchar buf[200];\n\tattribute *pattr;\n\tattribute *pRA;\n\tvna_t *psrp;\n\tchar *dot;\n\tchar *resc;\n\tresource *prs;\n\tresource_def *prdef;\n\tresource_def *prdefhost;\n\tresource_def *prdefvnode;\n\tmom_svrinfo_t *pcursvrm;\n\tvnpool_mom_t *ppool;\n\tstatic char *cannot_def_resc = \"error: resource %s for vnode %s cannot be defined\";\n\tchar *p;\n\tchar hook_name[HOOK_BUF_SIZE + 1];\n\tint vn_state_updates = 0;\n\tint vn_resc_added = 0;\n\n\tCLEAR_HEAD(atrlist);\n\n\t/*\n\t * Can't do static initialization of these because svr_resc_def\n\t * may change as new resources are added dynamically.\n\t */\n\tprdefhost = &svr_resc_def[RESC_HOST];\n\tprdefvnode 
= &svr_resc_def[RESC_VNODE];\n\n\tpnode = find_nodebyname(pvnal->vnal_id);\n\n\tif (pnode == NULL) {\n\t\t/*\n\t\t * see if this vnode def entry contains the topology info\n\t\t * if so, it is the natural vnode for this Mom or for the\n\t\t * first compute node on a cray which isn't of concern here\n\t\t */\n\n\t\tint have_topology = 0;\n\t\tint is_compute_node = 0;\n\t\tfor (i = 0; i < pvnal->vnal_used; i++) {\n\t\t\tpsrp = VNAL_NODENUM(pvnal, i);\n\t\t\tif (strcasecmp(psrp->vna_name, ATTR_NODE_TopologyInfo) == 0)\n\t\t\t\thave_topology = 1;\n\t\t\tif ((strcasecmp(psrp->vna_name, \"resources_available.vntype\") == 0) && (strcasecmp(psrp->vna_val, CRAY_COMPUTE) == 0))\n\t\t\t\tis_compute_node = 1;\n\t\t}\n\t\tif ((have_topology == 1) && (is_compute_node == 0)) {\n\t\t\t/* this is for the natural vnode, use that for pnode */\n\t\t\tmom_svrinfo_t *prmomsvr = pmom->mi_data;\n\t\t\tpnode = prmomsvr->msr_children[0];\n\t\t}\n\t}\n\n\tif ((pnode == NULL) && new) {\n\t\t/* create vnode */\n\t\tpal = attrlist_create(ATTR_NODE_Mom, 0, strlen(pmom->mi_host) + 1);\n\t\tstrcpy(pal->al_value, pmom->mi_host);\n\t\tappend_link(&atrlist, &pal->al_link, pal);\n\t\tif (pmom->mi_port != PBS_MOM_SERVICE_PORT) {\n\t\t\tsprintf(buf, \"%u\", pmom->mi_port);\n\t\t\tpal = attrlist_create(ATTR_NODE_Port, 0, strlen(buf) + 1);\n\t\t\tstrcpy(pal->al_value, buf);\n\t\t\tappend_link(&atrlist, &pal->al_link, pal);\n\t\t}\n\t\tpal = GET_NEXT(atrlist);\n\t\tbad = create_pbs_node(pvnal->vnal_id, pal, ATR_DFLAG_MGWR,\n\t\t\t\t      &bad, &pnode, FALSE);\n\t\tfree_attrlist(&atrlist);\n\t\tif (bad != 0) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"could not autocreate vnode \\\"%s\\\", error = %d\",\n\t\t\t\t pvnal->vnal_id, bad);\n\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE,\n\t\t\t\t  LOG_NOTICE, pmom->mi_host, log_buffer);\n\t\t\treturn bad;\n\t\t}\n\t\t*madenew = 1;\n\t\tlocalmadenew = 1;\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"autocreated vnode %s\", 
pvnal->vnal_id);\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE,\n\t\t\t  LOG_INFO, pmom->mi_host, log_buffer);\n\t}\n\n\tif (pnode == NULL) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"%s reported in %s message from Mom on %s\",\n\t\t\t pvnal->vnal_id,\n\t\t\t from_hook ? UPDATE_FROM_HOOK_U : UPDATE2_U,\n\t\t\t pmom->mi_host);\n\t\tlog_err(PBSE_UNKNODE, from_hook ? UPDATE_FROM_HOOK : UPDATE2, log_buffer);\n\t\treturn PBSE_UNKNODE;\n\t}\n\n\t/* If the request is coming from a hook, check if the MoM requesting the update\n\t * actually owns the vnode. If it does not, do not crosslink and return an error.\n\t */\n\tif (from_hook == 1) {\n\t\tint pnode_has_mom = 0;\n\t\t/* see if this Mom is already listed for the node; if not, reject the update */\n\t\tfor (i = 0; i < pnode->nd_nummoms; ++i) {\n\t\t\tif (pnode->nd_moms[i] == pmom) {\n\t\t\t\tpnode_has_mom = 1;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\n\t\tif (!pnode_has_mom) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"Not allowed to update vnode '%s', as it is owned by a different mom\", pvnal->vnal_id);\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_NODE,\n\t\t\t\t  LOG_INFO, pmom->mi_host, log_buffer);\n\t\t\treturn (PBSE_BADHOST);\n\t\t}\n\t}\n\n\t/* if mom has a vnode_pool value */\n\tpcursvrm = (mom_svrinfo_t *) (pmom->mi_data);\n\tif ((localmadenew == 1) && (pcursvrm->msr_vnode_pool > 0)) {\n\t\tppool = find_vnode_pool(pmom);\n\t\tif (ppool != NULL) {\n\t\t\tfor (j = 0; j < ppool->vnpm_nummoms; ++j) {\n\t\t\t\tif (ppool->vnpm_moms[j] != NULL) {\n\t\t\t\t\tint ret;\n\n\t\t\t\t\tif ((ret = cross_link_mom_vnode(pnode, ppool->vnpm_moms[j])) != 0) {\n\t\t\t\t\t\t/* deal with error */\n\t\t\t\t\t\treturn (ret);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t} else if (from_hook != 2) {\n\t\t/* not done for UPDATE_FROM_HOOK2 (i.e. from_hook == 2)\n\t\t * as it becomes like a qmgr request. 
So no need to change\n\t\t * the current vnode's parent mom to the incoming mom,\n\t\t * which is what cross_link_mom_vnode() does.\n\t\t */\n\t\tif ((i = cross_link_mom_vnode(pnode, pmom)) != 0)\n\t\t\treturn (i);\n\t}\n\n\t/*\n\t * Attributes and Resources within Resources_Available set by a Mom\n\t * via this message (and not coming from the UPDATE_FROM_HOOK),\n\t * have the ATR_VFLAG_DEFLT (default) flag set.\n\t * If the Mom no longer reports the attribute/resource it should be\n\t * unset.  The only way to do this is unset all \"default\" attribute/\n\t * resources first then reset what Mom is now reporting.\n\t *\n\t * Exceptions to the above:\n\t * resources_available.host - must be set, so it isn't unset\n\t *\teven if default\n\t * sharing - can only be set via this message, so set to the default\n\t *\tvalue to ensure it is reset based on what Mom now sends or to\n\t *\tthe default setting if Mom no longer sends anything\n\t */\n\n\tif (!from_hook) {\n\t\tfor (i = 0; i < ND_ATR_LAST; ++i) {\n\t\t\t/* if this vnode has been updated earlier in this update2 */\n\t\t\t/* then don't free anything but topology */\n\t\t\tif ((i != ND_ATR_TopologyInfo))\n\t\t\t\tcontinue; /* seeing vnl update for node just updated, don't clear */\n\n\t\t\tif (i != ND_ATR_ResourceAvail) {\n\t\t\t\tif (((get_nattr(pnode, i))->at_flags & (ATR_VFLAG_SET | ATR_VFLAG_DEFLT)) == (ATR_VFLAG_SET | ATR_VFLAG_DEFLT))\n\t\t\t\t\tfree_nattr(pnode, i);\n\t\t\t} else if (is_nattr_set(pnode, i) != 0) {\n\t\t\t\tprs = (resource *) GET_NEXT(get_nattr_list(pnode, i));\n\t\t\t\twhile (prs) {\n\t\t\t\t\tif ((prs->rs_value.at_flags & ATR_VFLAG_DEFLT) &&\n\t\t\t\t\t    (prs->rs_defin != prdefhost) &&\n\t\t\t\t\t    (prs->rs_defin != prdefvnode)) {\n\t\t\t\t\t\tprs->rs_defin->rs_free(&prs->rs_value);\n\t\t\t\t\t}\n\t\t\t\t\tprs = (resource *) GET_NEXT(prs->rs_link);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tset_nattr_l_slim(pnode, ND_ATR_Sharing, VNS_DFLT_SHARED, SET);\n\t\t(get_nattr(pnode, 
ND_ATR_Sharing))->at_flags |= ATR_VFLAG_DEFLT;\n\t}\n\n\t/* set attributes/resources if they are default */\n\n\tpRA = get_nattr(pnode, ND_ATR_ResourceAvail);\n\n\tfor (i = 0; i < pvnal->vnal_used; i++) {\n\t\tpsrp = VNAL_NODENUM(pvnal, i);\n\t\tstrncpy(buf, psrp->vna_name, sizeof(buf) - 1);\n\t\tbuf[sizeof(buf) - 1] = '\\0';\n\n\t\t/* make sure no trailing white space in the value */\n\t\tfor (dot = psrp->vna_val + strlen(psrp->vna_val) - 1;\n\t\t     dot >= psrp->vna_val;\n\t\t     dot--) {\n\t\t\tif (isspace((int) *dot))\n\t\t\t\t*dot = '\\0';\n\t\t\telse\n\t\t\t\tbreak;\n\t\t}\n\n\t\tif ((dot = strchr(buf, (int) '.')) != NULL) {\n\t\t\t/* found a resource setting, had better be Resources_Available */\n\t\t\tresc = dot + 1;\n\t\t\t*dot = '\\0';\n\t\t\tif ((strcasecmp(buf, ATTR_rescavail) != 0) &&\n\t\t\t    (from_hook && (strcasecmp(buf, ATTR_rescassn) != 0))) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"error: not legal to set resource in attribute %s, in %s for vnode %s\",\n\t\t\t\t\t psrp->vna_name, from_hook ? UPDATE_FROM_MOM_HOOK : UPDATE,\n\t\t\t\t\t pnode->nd_name);\n\t\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE,\n\t\t\t\t\t  LOG_ERR, pmom->mi_host, log_buffer);\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tif (from_hook && (strcasecmp(buf, ATTR_rescassn) == 0))\n\t\t\t\tpRA = get_nattr(pnode, ND_ATR_ResourceAssn);\n\n\t\t\t/* Is the resource already defined? 
*/\n\t\t\tprdef = find_resc_def(svr_resc_def, resc);\n\t\t\tif (prdef == NULL) {\n\t\t\t\tint err;\n\n\t\t\t\t/* currently resource is undefined, add it */\n\n\t\t\t\terr = add_resource_def(resc, psrp->vna_type, psrp->vna_flag);\n\t\t\t\tif (err < 0) {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), cannot_def_resc,\n\t\t\t\t\t\t resc, pvnal->vnal_id);\n\t\t\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_NODE,\n\t\t\t\t\t\t  LOG_ERR, pmom->mi_host, log_buffer);\n\t\t\t\t\tcontinue; /* skip this attribute, go to next */\n\t\t\t\t} else {\n\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"adding resource %s, type %d, in update for vnode %s\", resc, psrp->vna_type, pnode->nd_name);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE,\n\t\t\t\t\t\t  LOG_INFO, pmom->mi_host, log_buffer);\n\t\t\t\t\tvn_resc_added++;\n\t\t\t\t}\n\t\t\t\t/* now find the new resource definition */\n\t\t\t\tprdef = find_resc_def(svr_resc_def, resc);\n\t\t\t\tif (prdef == NULL)\n\t\t\t\t\tcontinue; /* skip this attribute, go to next */\n\t\t\t} else if ((psrp->vna_type != 0) &&\n\t\t\t\t   (psrp->vna_type != prdef->rs_type)) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), cannot_def_resc,\n\t\t\t\t\t resc, pvnal->vnal_id);\n\t\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_NODE,\n\t\t\t\t\t  LOG_ERR, pmom->mi_host, log_buffer);\n\t\t\t\tcontinue; /* skip this attribute/resource, go to next */\n\t\t\t}\n\n\t\t\t/* add resource entry to Resources_Available for the vnode */\n\n\t\t\tprs = add_resource_entry(pRA, prdef);\n\t\t\tif (prs) {\n\t\t\t\tbad = 0;\n\t\t\t\tif (from_hook ||\n\t\t\t\t    (prs->rs_value.at_flags & (ATR_VFLAG_SET | ATR_VFLAG_DEFLT)) != ATR_VFLAG_SET) {\n\t\t\t\t\t/* if not from_hook, will only set */\n\t\t\t\t\t/* resource values that have the */\n\t\t\t\t\t/* ATR_VFLAG_DEFLT flag only, which */\n\t\t\t\t\t/* means it wasn't set externally */\n\t\t\t\t\t/* (i.e. qmgr). 
*/\n\t\t\t\t\t/* if from_hook, we override values */\n\t\t\t\t\t/* set externally. */\n\n\t\t\t\t\t/* If indirect resource, decode it as a string */\n\t\t\t\t\tif (psrp->vna_val[0] == '@') {\n\t\t\t\t\t\textern int resc_access_perm;\n\t\t\t\t\t\tint perms = resc_access_perm;\n\t\t\t\t\t\tresc_access_perm |= ATR_PERM_ALLOW_INDIRECT;\n\t\t\t\t\t\tbad = decode_str(&prs->rs_value, psrp->vna_name, resc, psrp->vna_val);\n\t\t\t\t\t\tresc_access_perm = perms;\n\t\t\t\t\t\tif (bad == 0) {\n\t\t\t\t\t\t\tprs->rs_value.at_flags |= ATR_VFLAG_DEFLT | ATR_VFLAG_INDIRECT;\n\t\t\t\t\t\t\tbad = fix_indirectness(prs, pnode, 1);\n\t\t\t\t\t\t}\n\t\t\t\t\t} else if ((bad = prdef->rs_decode(&prs->rs_value, buf, resc, psrp->vna_val)) == 0) {\n\t\t\t\t\t\t/* This (ATR_FLAG_DEFLT) means set by the */\n\t\t\t\t\t\t/* server and not manager */\n\t\t\t\t\t\t/* mom hook we're treating */\n\t\t\t\t\t\t/* set by manager */\n\t\t\t\t\t\tif (from_hook) {\n\t\t\t\t\t\t\t/* These flags ensure */\n\t\t\t\t\t\t\t/* changes survive */\n\t\t\t\t\t\t\t/* server restart */\n\t\t\t\t\t\t\tprs->rs_value.at_flags &= ~ATR_VFLAG_DEFLT;\n\t\t\t\t\t\t\tpost_attr_set(&prs->rs_value);\n\t\t\t\t\t\t\tif (psrp->vna_val[0] != '\\0') {\n\t\t\t\t\t\t\t\tprs->rs_value.at_flags |= (ATR_VFLAG_SET | ATR_VFLAG_MODIFY);\n\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\tprs->rs_defin->rs_free(&prs->rs_value);\n\t\t\t\t\t\t\t\tdelete_link(&prs->rs_link);\n\t\t\t\t\t\t\t\tfree(prs);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t} else\n\t\t\t\t\t\t\tprs->rs_value.at_flags |= ATR_VFLAG_DEFLT;\n\t\t\t\t\t\tif (strcasecmp(\"ncpus\", resc) == 0) {\n\t\t\t\t\t\t\t/* if ncpus, adjust virtual/subnodes */\n\t\t\t\t\t\t\tj = prs->rs_value.at_val.at_long;\n\t\t\t\t\t\t\tmod_node_ncpus(pnode, j, ATR_ACTION_ALTER);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tif (bad != 0) {\n\t\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t\t \"Error %d decoding resource %s in update for vnode %s\", bad, resc, pnode->nd_name);\n\t\t\t\t\t\tlog_event(PBSEVENT_SYSTEM, 
PBS_EVENTCLASS_NODE,\n\t\t\t\t\t\t\t  LOG_WARNING, pmom->mi_host, log_buffer);\n\t\t\t\t\t} else if (from_hook) {\n\t\t\t\t\t\tsnprintf(log_buffer,\n\t\t\t\t\t\t\t sizeof(log_buffer),\n\t\t\t\t\t\t\t \"Updated vnode %s's \"\n\t\t\t\t\t\t\t \"resource %s=%s per \"\n\t\t\t\t\t\t\t \"mom hook request\",\n\t\t\t\t\t\t\t pnode->nd_name,\n\t\t\t\t\t\t\t psrp->vna_name,\n\t\t\t\t\t\t\t psrp->vna_val);\n\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG2,\n\t\t\t\t\t\t\t  PBS_EVENTCLASS_NODE,\n\t\t\t\t\t\t\t  LOG_INFO, pmom->mi_host,\n\t\t\t\t\t\t\t  log_buffer);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t} else if (strcasecmp(psrp->vna_name, VNATTR_PNAMES) == 0) {\n\n\t\t\t/* special case pnames because it is set at the Server */\n\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"pnames %s\",\n\t\t\t\t psrp->vna_val);\n\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE,\n\t\t\t\t  LOG_INFO, pmom->mi_host, log_buffer);\n\n\t\t\tsetup_pnames(psrp->vna_val);\n\n\t\t} else if (strcasecmp(psrp->vna_name, VNATTR_HOOK_REQUESTOR) == 0) {\n\n\t\t\tif (from_hook) {\n\t\t\t\t/* decides whether succeeding requests from the same 'pvnal' */\n\t\t\t\t/* should be allowed; if the name is the null string */\n\t\t\t\t/* the hook ran as the administrator (root) */\n\n\t\t\t\tif ((*psrp->vna_val != '\\0') &&\n\t\t\t\t    ((svr_get_privilege(psrp->vna_val, pmom->mi_host) &\n\t\t\t\t      (ATR_DFLAG_MGWR | ATR_DFLAG_OPWR)) == 0)) {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t hook_privilege, psrp->vna_val, pmom->mi_host);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_NODE,\n\t\t\t\t\t\t  LOG_INFO, pmom->mi_host, log_buffer);\n\t\t\t\t\treturn (PBSE_PERM);\n\t\t\t\t}\n\t\t\t}\n\t\t} else if (strcasecmp(psrp->vna_name, VNATTR_HOOK_OFFLINE_VNODES) == 0) {\n\n\t\t\tif (from_hook) {\n\t\t\t\tp = strchr(psrp->vna_val, ',');\n\t\t\t\thook_name[0] = '\\0';\n\t\t\t\tif (p != NULL) {\n\t\t\t\t\t*p = '\\0';\n\t\t\t\t\tp++;\n\t\t\t\t\tstrncpy(hook_name, p, 
HOOK_BUF_SIZE);\n\t\t\t\t}\n\t\t\t\tif (strcmp(psrp->vna_val, \"1\") == 0) {\n\t\t\t\t\tchar hook_buf[sizeof(hook_name) + 40];\n\t\t\t\t\tsnprintf(hook_buf, sizeof(hook_buf),\n\t\t\t\t\t\t \"offlined by hook '%s' due to hook error\",\n\t\t\t\t\t\t hook_name);\n\t\t\t\t\tmark_node_offline_by_mom(pnode->nd_name, hook_buf);\n\t\t\t\t} else if (strcmp(psrp->vna_val, \"0\") == 0) {\n\t\t\t\t\tclear_node_offline_by_mom(pnode->nd_name, NULL);\n\t\t\t\t}\n\t\t\t\tif (p != NULL)\n\t\t\t\t\t*p = ','; /* restore psrp->vna_val */\n\t\t\t}\n\n\t\t} else if (strcasecmp(psrp->vna_name, VNATTR_HOOK_SCHEDULER_RESTART_CYCLE) == 0) {\n\n\t\t\tif (from_hook) {\n\t\t\t\tp = strchr(psrp->vna_val, ',');\n\t\t\t\thook_name[0] = '\\0';\n\t\t\t\tif (p != NULL) {\n\t\t\t\t\t*p = '\\0';\n\t\t\t\t\tp++;\n\t\t\t\t\tstrncpy(hook_name, p, HOOK_BUF_SIZE);\n\t\t\t\t}\n\t\t\t\tif (strcmp(psrp->vna_val, \"1\") == 0) {\n\n\t\t\t\t\tset_scheduler_flag(SCH_SCHEDULE_RESTART_CYCLE, dflt_scheduler);\n\t\t\t\t\tsnprintf(log_buffer,\n\t\t\t\t\t\t sizeof(log_buffer),\n\t\t\t\t\t\t \"hook '%s' requested for \"\n\t\t\t\t\t\t \"scheduler to restart cycle\",\n\t\t\t\t\t\t hook_name);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG2,\n\t\t\t\t\t\t  PBS_EVENTCLASS_NODE,\n\t\t\t\t\t\t  LOG_INFO, pmom->mi_host,\n\t\t\t\t\t\t  log_buffer);\n\t\t\t\t}\n\t\t\t\tif (p != NULL)\n\t\t\t\t\t*p = ','; /* restore psrp->vna_val */\n\t\t\t}\n\n\t\t} else {\n\n\t\t\t/* a non-resource attribute to set */\n\n\t\t\tj = find_attr(node_attr_idx, node_attr_def, psrp->vna_name);\n\t\t\tif (j == -1) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"unknown attribute %s in %s for vnode %s\",\n\t\t\t\t\t psrp->vna_name,\n\t\t\t\t\t from_hook ? 
UPDATE_FROM_MOM_HOOK : UPDATE,\n\t\t\t\t\t pnode->nd_name);\n\t\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE,\n\t\t\t\t\t  LOG_WARNING, pmom->mi_host, log_buffer);\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tpattr = get_nattr(pnode, j);\n\t\t\tif (from_hook || ((pattr->at_flags &\n\t\t\t\t\t   (ATR_VFLAG_SET | ATR_VFLAG_DEFLT)) != ATR_VFLAG_SET)) {\n\t\t\t\t/* if not from_hook, will only set attribute */\n\t\t\t\t/* values that have the ATR_VFLAG_DEFLT flag */\n\t\t\t\t/* only, which means it wasn't set externally */\n\t\t\t\t/* (i.e. qmgr). */\n\t\t\t\t/* if from_hook, we override values */\n\t\t\t\t/* set externally. */\n\n\t\t\t\tif (from_hook) {\n\t\t\t\t\tif (node_attr_def[j].at_action &&\n\t\t\t\t\t    (bad = node_attr_def[j].at_action(pattr, pnode, ATR_ACTION_ALTER))) {\n\t\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t    \"Error %d setting attribute %s \"\n\t\t\t\t\t\t    \"in %s for vnode %s\",\n\t\t\t\t\t\t    bad, psrp->vna_name,\n\t\t\t\t\t\t    UPDATE_FROM_MOM_HOOK,\n\t\t\t\t\t\t    pnode->nd_name);\n\t\t\t\t\t\tlog_event(PBSEVENT_SYSTEM,\n\t\t\t\t\t\t    PBS_EVENTCLASS_NODE, LOG_WARNING,\n\t\t\t\t\t\t    pmom->mi_host, log_buffer);\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tbad = set_attr_generic(pattr, &node_attr_def[j], psrp->vna_val, NULL, INTERNAL);\n\t\t\t\tif (bad != 0) {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"Error %d decoding attribute %s \"\n\t\t\t\t\t\t \"in %s for vnode %s\",\n\t\t\t\t\t\t bad, psrp->vna_name,\n\t\t\t\t\t\t from_hook ? 
UPDATE_FROM_MOM_HOOK : UPDATE,\n\t\t\t\t\t\t pnode->nd_name);\n\t\t\t\t\tlog_event(PBSEVENT_SYSTEM,\n\t\t\t\t\t\t  PBS_EVENTCLASS_NODE, LOG_WARNING,\n\t\t\t\t\t\t  pmom->mi_host, log_buffer);\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\tif (from_hook) {\n\t\t\t\t\t/* these flag values ensure changes */\n\t\t\t\t\t/* are displayed and survive server */\n\t\t\t\t\t/* restart */\n\n\t\t\t\t\tpattr->at_flags &= ~ATR_VFLAG_DEFLT;\n\t\t\t\t\tpattr->at_flags |= ATR_VFLAG_MODCACHE;\n\t\t\t\t\tif (psrp->vna_val[0] != '\\0')\n\t\t\t\t\t\tpattr->at_flags |= (ATR_VFLAG_SET | ATR_VFLAG_MODIFY);\n\t\t\t\t\tsnprintf(log_buffer,\n\t\t\t\t\t\t sizeof(log_buffer),\n\t\t\t\t\t\t \"Updated vnode %s's \"\n\t\t\t\t\t\t \"attribute %s=%s per \"\n\t\t\t\t\t\t \"mom hook request\",\n\t\t\t\t\t\t pnode->nd_name,\n\t\t\t\t\t\t psrp->vna_name,\n\t\t\t\t\t\t psrp->vna_val);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG2,\n\t\t\t\t\t\t  PBS_EVENTCLASS_NODE, LOG_INFO,\n\t\t\t\t\t\t  pmom->mi_host, log_buffer);\n\n\t\t\t\t} else\n\t\t\t\t\tpattr->at_flags |= ATR_VFLAG_DEFLT;\n\n\t\t\t\tif (strcasecmp(psrp->vna_name, ATTR_NODE_VnodePool) == 0) {\n\t\t\t\t\tif ((bad = node_attr_def[j].at_action(pattr,\n\t\t\t\t\t\t\t\t\t      pnode, ATR_ACTION_ALTER)) == 0) {\n\t\t\t\t\t\tpattr->at_flags |= ATR_VFLAG_DEFLT;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t\t \"Error %d setting attribute %s \"\n\t\t\t\t\t\t\t \"in update for vnode %s\",\n\t\t\t\t\t\t\t bad,\n\t\t\t\t\t\t\t psrp->vna_name, pnode->nd_name);\n\t\t\t\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE,\n\t\t\t\t\t\t\t  LOG_WARNING, pmom->mi_host, log_buffer);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tif ((strcasecmp(psrp->vna_name,\n\t\t\t\t\tATTR_NODE_TopologyInfo) == 0) ||\n\t\t\t    (strcasecmp(psrp->vna_name,\n\t\t\t\t\tATTR_NODE_state) == 0)) {\n\t\t\t\tbad = node_attr_def[j].at_action(pattr,\n\t\t\t\t\t\t\t\t pnode, ATR_ACTION_ALTER);\n\t\t\t\tif (bad != 0) {\n\t\t\t\t\tsnprintf(log_buffer, 
sizeof(log_buffer),\n\t\t\t\t\t\t \"Error %d setting attribute %s \"\n\t\t\t\t\t\t \"in %s for vnode %s\",\n\t\t\t\t\t\t bad,\n\t\t\t\t\t\t psrp->vna_name,\n\t\t\t\t\t\t from_hook ? UPDATE_FROM_MOM_HOOK : UPDATE,\n\t\t\t\t\t\t pnode->nd_name);\n\t\t\t\t\tlog_event(PBSEVENT_SYSTEM,\n\t\t\t\t\t\t  PBS_EVENTCLASS_NODE,\n\t\t\t\t\t\t  LOG_WARNING,\n\t\t\t\t\t\t  pmom->mi_host,\n\t\t\t\t\t\t  log_buffer);\n\t\t\t\t}\n\t\t\t\tif (strcasecmp(psrp->vna_name,\n\t\t\t\t\t       ATTR_NODE_state) == 0) {\n\t\t\t\t\tvn_state_updates++;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tif (vn_resc_added > 0) {\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_INFO, \"update2_to_vnode\",\n\t\t\t  \"Restarting Python interpreter as resourcedef file has changed.\");\n\t\tpbs_python_ext_shutdown_interpreter(&svr_interp_data);\n\t\tif (pbs_python_ext_start_interpreter(&svr_interp_data) != 0) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"Failed to restart Python interpreter\");\n\t\t\treturn PBSE_INTERNAL;\n\t\t}\n\n\t\tsend_rescdef(1);\n\t}\n\n\tif (pnode) {\n\t\tint states_to_clear = 0;\n\n\t\tcheck_and_set_multivnode(pnode);\n\n\t\tif (from_hook) {\n\t\t\t/* INUSE_DOWN not part here since it could */\n\t\t\t/* have been set from a hook . */\n\t\t\tstates_to_clear = (INUSE_STALE | INUSE_UNKNOWN);\n\t\t\tif (vn_state_updates == 0) {\n\t\t\t\tstates_to_clear |= INUSE_DOWN;\n\t\t\t}\n\t\t} else {\n\t\t\tstates_to_clear = (INUSE_STALE | INUSE_DOWN | INUSE_UNKNOWN);\n\t\t}\n\n\t\t/* clear stale, down, unknown bits in state */\n\t\tset_vnode_state(pnode,\n\t\t\t\t~states_to_clear,\n\t\t\t\tNd_State_And);\n\t\tpnode->nd_modified = 1;\n\t\treturn 0;\n\t} else {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"vnode %s declared by %s but it does not exist\",\n\t\t\t pvnal->vnal_id, pmom->mi_host);\n\t\tlog_err(PBSE_UNKNODE, from_hook ? 
UPDATE_FROM_HOOK : UPDATE2, log_buffer);\n\t\treturn PBSE_UNKNODE;\n\t}\n}\n\n/**\n * @brief\n * \t\tCheck if vnode shares the resource \"host\" with any other vnode, and\n * \t\tset vnode attribute \"in_multivnode_host\" accordingly.\n * @see\n * \t\tupdate2_to_vnode\n *\n * @param[in] pnode - The node being considered\n *\n * @return void\n *\n * @par MT-Safe: no, depends on globals svr_totnodes and pbsndlist\n *\n * @par Esoteric Side-case:\n * \t\tIn a multivnode host, all vnodes being processed\n * \t\tthat have not been checked by this function are assumed to be in state\n * \t\tstale; this is needed to handle the case of two vnodes that would swap\n * \t\tresources_available.host on an update.\n */\nstatic void\ncheck_and_set_multivnode(struct pbsnode *pnode)\n{\n\tint i;\n\tresource *prc;\n\tresource *prc_i;\n\tresource_def *prd;\n\tchar *host_str1 = NULL;\n\n\tif (pnode == NULL)\n\t\treturn;\n\n\tprd = &svr_resc_def[RESC_HOST];\n\tif (prd == NULL)\n\t\treturn;\n\n\tprc = find_resc_entry(get_nattr(pnode, ND_ATR_ResourceAvail), prd);\n\tif (prc == NULL) {\n\t\tif (pnode->nd_hostname != NULL)\n\t\t\thost_str1 = pnode->nd_hostname;\n\t} else {\n\t\thost_str1 = prc->rs_value.at_val.at_str;\n\t}\n\n\tfor (i = 0; i < svr_totnodes; i++) {\n\t\tif (pnode != pbsndlist[i]) {\n\t\t\tchar *host_str2 = NULL;\n\n\t\t\tif (pbsndlist[i]->nd_state & INUSE_STALE)\n\t\t\t\tcontinue;\n\n\t\t\tprc_i = find_resc_entry(get_nattr(pbsndlist[i], ND_ATR_ResourceAvail), prd);\n\t\t\tif (prc_i == NULL) {\n\t\t\t\tif (pbsndlist[i]->nd_hostname != NULL)\n\t\t\t\t\thost_str2 = pbsndlist[i]->nd_hostname;\n\t\t\t} else {\n\t\t\t\thost_str2 = prc_i->rs_value.at_val.at_str;\n\t\t\t}\n\n\t\t\tif (host_str1 && host_str2 && !strcmp(host_str1, host_str2)) {\n\t\t\t\tset_nattr_l_slim(pbsndlist[i], ND_ATR_in_multivnode_host, 1, SET);\n\t\t\t\t(get_nattr(pbsndlist[i], ND_ATR_in_multivnode_host))->at_flags |= ATR_VFLAG_DEFLT;\n\n\t\t\t\t/* DEFLT needed to reset on update 
*/\n\t\t\t\tset_nattr_l_slim(pnode, ND_ATR_in_multivnode_host, 1, SET);\n\t\t\t\t(get_nattr(pnode, ND_ATR_in_multivnode_host))->at_flags |= ATR_VFLAG_DEFLT;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\tread the list of running jobs sent by Mom in a\n *\t\tHELLO3/4 message and validate them against their state known to the\n *\t\tServer.  Message contains the following:\n * @par\n *\t\tcount of number of jobs which follows\n *\t\tfor each job\n *\t\t   string - job id\n *\t\t   int    - job substate\n *\t\t   long   - run version (count)\n *\t\t   int    - node id, 0 (for Mother Superior) to N-1 **\n *\t\t   string - exec_vnode string **\n *\n *  \t\t** - these values are not currently used for anything.\n * @see\n * \t\tis_request\n *\n * @param[in]\tstream\t- list of running jobs sent by Mom in a HELLO3/4 message\n *\n * @return\tvoid\n */\nvoid\nmom_running_jobs(int stream)\n{\n\tchar *execvnod = NULL;\n\tchar *jobid = NULL;\n\tunsigned njobs = 0;\n\tjob *pjob = NULL;\n\tint rc = 0;\n\tint substate = 0;\n\tlong runver = 0, runver_server = 0;\n\tint discarded = 0;\n\tmominfo_t *pmom = NULL;\n\tchar mom_name[PBS_MAXHOSTNAME + 2] = \"UNKNOWN\";\n\tchar exec_host_name[PBS_MAXHOSTNAME + 2] = \"UNKNOWN2\";\n\tchar *slash_pos = NULL;\n\tint exec_host_hostlen = 0;\n\n\tnjobs = disrui(stream, &rc); /* number of jobs in update */\n\tif (rc)\n\t\treturn;\n\n\twhile (njobs--) {\n\t\trunver_server = 0;\n\t\tdiscarded = 0;\n\t\tstrcpy(mom_name, \"_UNKNOWN_\");\n\t\tstrcpy(exec_host_name, \"_UNKNOWN2_\");\n\t\texecvnod = NULL;\n\t\tjobid = NULL;\n\n\t\tjobid = disrst(stream, &rc);\n\t\tif (rc)\n\t\t\tgoto err;\n\t\tsubstate = disrsi(stream, &rc);\n\t\tif (rc)\n\t\t\tgoto err;\n\t\trunver = disrsl(stream, &rc);\n\t\tif (rc)\n\t\t\tgoto err;\n\t\t(void) disrsi(stream, &rc); /* sister is not currently used */\n\t\tif (rc)\n\t\t\tgoto err;\n\t\texecvnod = disrst(stream, &rc);\n\t\tif (rc)\n\t\t\tgoto err;\n\n\t\tDBPRT((\"mom_running_jobs: %s substate: %d 
runver: %ld\\n\", jobid, substate, runver))\n\t\tif ((pjob = find_job(jobid)) == NULL) {\n\t\t\t/* job not found,  tell Mom to discard it */\n\t\t\tsend_discard_job(stream, jobid, -1, \"not known to Server\");\n\t\t\tdiscarded = 1;\n\t\t}\n\n\t\tif (pjob && !discarded && (is_jattr_set(pjob, JOB_ATR_run_version)))\n\t\t\trunver_server = get_jattr_long(pjob, JOB_ATR_run_version);\n\n\t\tif (pjob && !discarded && (runver_server != runver)) {\n\t\t\tif (runver_server > 0) {\n\t\t\t\t/* different Version, discard it */\n\t\t\t\tsend_discard_job(stream, jobid, runver, \"has been run again\");\n\t\t\t\tdiscarded = 1;\n\t\t\t} else {\n\t\t\t\t/* server had no clue about runver -- accept what MOM tells us if exec_host matches stream source */\n\n\t\t\t\tif ((pmom = tfind2((u_long) stream, 0, &streams)) != NULL && ((mom_svrinfo_t *) (pmom->mi_data))->msr_numvnds > 0)\n\t\t\t\t\tstrncpy(mom_name, ((mom_svrinfo_t *) (pmom->mi_data))->msr_children[0]->nd_name, PBS_MAXHOSTNAME);\n\t\t\t\tif ((is_jattr_set(pjob, JOB_ATR_exec_host)) &&\n\t\t\t\t    (slash_pos = strchr(get_jattr_str(pjob, JOB_ATR_exec_host), '/')) != NULL) {\n\t\t\t\t\texec_host_hostlen = slash_pos - get_jattr_str(pjob, JOB_ATR_exec_host);\n\t\t\t\t\tstrncpy(exec_host_name, get_jattr_str(pjob, JOB_ATR_exec_host), exec_host_hostlen);\n\t\t\t\t\texec_host_name[exec_host_hostlen] = '\\0';\n\t\t\t\t}\n\n\t\t\t\tif (!strcmp(exec_host_name, mom_name)) {\n\t\t\t\t\t/* natural vnode of MOM at end of stream matches exec_host first entry */\n\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"run_version %ld for job recovered from MOM with vnode %s; exec_host %s\", runver, mom_name, exec_host_name);\n\t\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ALERT, pjob->ji_qs.ji_jobid, log_buffer);\n\n\t\t\t\t\tset_jattr_l_slim(pjob, JOB_ATR_run_version, runver, SET);\n\n\t\t\t\t\tif (!(is_jattr_set(pjob, JOB_ATR_runcount)) || (get_jattr_long(pjob, JOB_ATR_runcount) <= 0)) {\n\t\t\t\t\t\tset_jattr_l_slim(pjob, 
JOB_ATR_runcount, runver, SET);\n\t\t\t\t\t\t/* update for resources used will save this to DB on later message from MOM, if it is indeed valid */\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\t/* wrong MOM, exec_host either empty or non-matching, discard job on MOM (and hope the correct MOM will come along) */\n\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"run_version recovery: exec_host %s != MOM name %s, discarding job on that MOM\", exec_host_name, mom_name);\n\t\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_ALERT, pjob->ji_qs.ji_jobid, log_buffer);\n\n\t\t\t\t\tsend_discard_job(stream, jobid, -1, \"MOM fails to match exec_host\");\n\t\t\t\t\tdiscarded = 1;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif (pjob && !discarded && !check_job_substate(pjob, substate)) {\n\n\t\t\t/* Job substates disagree */\n\n\t\t\tif ((check_job_substate(pjob, JOB_SUBSTATE_SCHSUSP)) ||\n\t\t\t    (check_job_substate(pjob, JOB_SUBSTATE_SUSPEND))) {\n\n\t\t\t\tif (substate == JOB_SUBSTATE_RUNNING) {\n\n\t\t\t\t\t/* tell Mom to suspend job */\n\t\t\t\t\t(void) issue_signal(pjob, \"SIG_SUSPEND\", release_req, 0);\n\t\t\t\t}\n\t\t\t} else if (check_job_substate(pjob, JOB_SUBSTATE_RUNNING)) {\n\t\t\t\tif (substate == JOB_SUBSTATE_SUSPEND) {\n\n\t\t\t\t\t/* tell Mom to resume job */\n\t\t\t\t\t(void) issue_signal(pjob, \"SIG_RESUME\", release_req, 0);\n\t\t\t\t}\n\n\t\t\t} else if ((!check_job_state(pjob, JOB_STATE_LTR_EXITING)) &&\n\t\t\t\t   (!check_job_state(pjob, JOB_STATE_LTR_RUNNING))) {\n\n\t\t\t\t/* for any other disagreement of state except */\n\t\t\t\t/* in Exiting or RUNNING, discard job         */\n\t\t\t\tsend_discard_job(stream, jobid, runver, \"state mismatch\");\n\t\t\t\tpjob->ji_discarding = 1;\n\t\t\t}\n\n\t\t\t/*\n\t\t\t * calls to issue_signal would reset transport from TPP to TCP\n\t\t\t * revert it back to TPP before continuing\n\t\t\t */\n\t\t\tDIS_tpp_funcs();\n\t\t}\n\n\t\t/* all other cases - job left as is */\n\n\t\tfree(jobid);\n\t\tjobid = 
NULL;\n\t\tfree(execvnod);\n\t\texecvnod = NULL;\n\t}\n\treturn;\n\nerr:\n\tsnprintf(log_buffer, sizeof(log_buffer), \"%s for %s\", dis_emsg[rc],\n\t\t \"HELLO3/4\");\n\tlog_err(errno, \"mom_running_jobs\", log_buffer);\n\tfree(jobid);\n\tfree(execvnod);\n}\n\n/**\n * @brief\n * \t\tInput is coming from another server (MOM) over a TPP stream.\n *\n * @par\n *\t\tRead the stream to get an Inter-Server request.\n *\t\tSome error cases call stream_eof instead of tpp_close because\n *\t\ta customer encountered a stream mixup (spid 183257) where a\n *\t\tstream that should not have been found by tfind2 was found.\n *\n * @param[in] stream  - TPP stream on which the request is arriving\n * @param[in] version - Version of the protocol; not to be changed lightly as it makes everything incompatible.\n *\n * @return none\n */\nvoid\nis_request(int stream, int version)\n{\n\tint check_other_moms_time = 0;\n\tint command = 0;\n\tint command_orig = 0;\n\tint cr_node;\n\tint ret = DIS_SUCCESS;\n\tint i, j;\n\tu_Long l;\n\tint ivnd;\n\tchar *jid = NULL;\n\tint made_new_vnodes;\n\tunsigned long hook_seq;\n\tchar *hook_euser;\n\tjob *pjob;\n\tunsigned long ipaddr;\n\tunsigned long port;\n\tstruct sockaddr_in *addr;\n\tstruct pbsnode *np = NULL;\n\tattribute *pala;\n\tresource_def *prd;\n\tresource *prc;\n\tmominfo_t *pmom;\n\tmom_svrinfo_t *psvrmom;\n\tdmn_info_t *pdmninfo;\n\tint s;\n\tchar *val;\n\tunsigned long oldstate;\n\tvnl_t *vnlp; /* vnode list */\n\tstatic char node_up[] = \"node up\";\n\tpbs_list_head reported_hooks;\n\thook *phook;\n\tchar *hname = NULL;\n\tunsigned long hook_rescdef_checksum;\n\tunsigned long chksum_rescdef;\n\tstatic int reply_send_tm = 0;\n\tchar *badconstr = \"unset\";\n\n\tCLEAR_HEAD(reported_hooks);\n\tDBPRT((\"%s: stream %d version %d\\n\", __func__, stream, version))\n\taddr = tpp_getaddr(stream);\n\tif (version != IS_PROTOCOL_VER) {\n\t\tsprintf(log_buffer, \"protocol version %d unknown from %s\",\n\t\t\tversion, netaddr(addr));\n\t\tlog_err(-1, 
__func__, log_buffer);\n\t\tstream_eof(stream, 0, NULL);\n\t\treturn;\n\t}\n\tif (addr == NULL) {\n\t\tsprintf(log_buffer, \"Sender unknown\");\n\t\tlog_err(-1, __func__, log_buffer);\n\t\tstream_eof(stream, 0, NULL);\n\t\treturn;\n\t}\n\n\tipaddr = ntohl(addr->sin_addr.s_addr);\n\n\tcommand = disrsi(stream, &ret);\n\tif (ret != DIS_SUCCESS) {\n\t\tbadconstr = \"disrsi:command\";\n\t\tgoto badcon;\n\t}\n\n\tif (command == IS_HELLOSVR) {\n\t\tport = disrui(stream, &ret);\n\t\tif (ret != DIS_SUCCESS) {\n\t\t\tbadconstr = \"disrui:port\";\n\t\t\tgoto badcon;\n\t\t}\n\n\t\tDBPRT((\"%s: IS_HELLOSVR addr: %s, port %lu\\n\", __func__, netaddr(addr), port))\n\n\t\tif ((pmom = tfind2(ipaddr, port, &ipaddrs)) == NULL) {\n\t\t\tbadconstr = \"tfind2:pmom\";\n\t\t\tgoto badcon;\n\t\t}\n\n\t\tlog_eventf(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE,\n\t\t\t   LOG_NOTICE, pmom->mi_host, \"Hello from MoM on port=%lu\", port);\n\n\t\tpsvrmom = (mom_svrinfo_t *) (pmom->mi_data);\n\t\tpdmninfo = pmom->mi_dmn_info;\n\t\tpdmninfo->dmn_state |= INUSE_UNKNOWN;\n\t\tif (pdmninfo->dmn_stream >= 0 && pdmninfo->dmn_stream != stream) {\n\t\t\tDBPRT((\"%s: stream %d from %s:%d already open on %d\\n\",\n\t\t\t       __func__, stream, pmom->mi_host,\n\t\t\t       ntohs(addr->sin_port), pdmninfo->dmn_stream))\n\t\t\ttpp_close(pdmninfo->dmn_stream);\n\t\t\ttdelete2((u_long) pdmninfo->dmn_stream, 0ul, &streams);\n\t\t}\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\t\tif (psvrmom->msr_numjobs > 0)\n\t\t\tpdmninfo->dmn_state |= INUSE_NEED_CREDENTIALS;\n#endif\n\n\t\tif (psvrmom->msr_vnode_pool != 0) {\n\t\t\t/*\n\t\t\t * Mom has a pool, see if the pool has an\n\t\t\t * inventory Mom already, if not make this Mom the one\n\t\t\t */\n\t\t\tvnpool_mom_t *ppool;\n\t\t\tppool = find_vnode_pool(pmom);\n\t\t\tif (ppool != NULL) {\n\t\t\t\tif (ppool->vnpm_inventory_mom == NULL) {\n\t\t\t\t\tppool->vnpm_inventory_mom = pmom;\n\t\t\t\t\tpsvrmom->msr_has_inventory = 
1;\n\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\tmsg_new_inventory_mom,\n\t\t\t\t\t\tppool->vnpm_vnode_pool,\n\t\t\t\t\t\tpmom->mi_host);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t\t  LOG_DEBUG, msg_daemonname, log_buffer);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t/* we save this stream for future communications */\n\t\tpdmninfo->dmn_stream = stream;\n\t\tpdmninfo->dmn_state |= INUSE_INIT;\n\t\tpdmninfo->dmn_state &= ~INUSE_NEEDS_HELLOSVR;\n\t\ttinsert2((u_long) stream, 0ul, pmom, &streams);\n\t\ttpp_eom(stream);\n\n\t\t/* mcast the reply together */\n\t\tif (psvrmom->msr_vnode_pool <= 0 || psvrmom->msr_has_inventory)\n\t\t\tmcast_add(pmom, &mtfd_replyhello, FALSE);\n\t\telse\n\t\t\tmcast_add(pmom, &mtfd_replyhello_noinv, FALSE);\n\n\t\tif (reply_send_tm <= time_now) {\n\t\t\tstruct work_task *ptask;\n\n\t\t\t/* time to wait depends on the number of Moms the server knows */\n\t\t\treply_send_tm = time_now + (mominfo_array_size > 1024 ? MCAST_WAIT_TM : 0);\n\t\t\tptask = set_task(WORK_Timed, reply_send_tm, mcast_msg, NULL);\n\t\t\tptask->wt_aux = IS_REPLYHELLO;\n\t\t}\n\t\treturn;\n\n\t} else if ((pmom = tfind2((u_long) stream, 0, &streams)) != NULL)\n\t\tgoto found;\n\nbadcon:\n\tsprintf(log_buffer, \"bad attempt to connect from %s, reason=%s\",\n\t\tnetaddr(addr), badconstr);\n\tlog_err(-1, __func__, log_buffer);\n\tstream_eof(stream, 0, NULL);\n\treturn;\n\nfound:\n\tpsvrmom = (mom_svrinfo_t *) (pmom->mi_data);\n\tpdmninfo = pmom->mi_dmn_info;\n\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SERVER, LOG_DEBUG, msg_daemonname, \"Received request2: %d\", command);\n\n\tswitch (command) {\n\n\t\tcase IS_CMD_REPLY:\n\t\t\tDBPRT((\"%s: IS_CMD_REPLY\\n\", __func__))\n\t\t\tprocess_DreplyTPP(stream);\n\t\t\tbreak;\n\n\t\tcase IS_REGISTERMOM:\n\t\t\tif (psvrmom->msr_wktask) { /* if a task to requeue jobs exists, delete it */\n\t\t\t\tdelete_task(psvrmom->msr_wktask);\n\t\t\t\tpsvrmom->msr_wktask = 0;\n\t\t\t}\n\n\t\t\tset_all_state(pmom, 0,\n\t\t\t\t      INUSE_UNKNOWN | 
INUSE_NEED_ADDRS | INUSE_SLEEP, NULL,\n\t\t\t\t      Set_All_State_Regardless);\n\t\t\tset_all_state(pmom, 1, INUSE_DOWN | INUSE_INIT, NULL,\n\t\t\t\t      Set_ALL_State_All_Down);\n\t\t\tif ((pdmninfo->dmn_state & INUSE_MARKEDDOWN) == 0)\n\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_NODE, LOG_INFO,\n\t\t\t\t\t  pmom->mi_host, \"Setting host to Initialize\");\n\n\t\t\t/* validate jobs Mom reported against what I have */\n\t\t\tmom_running_jobs(stream);\n\t\t\t/*\n\t\t\t * respond to HELLO from Mom by sending her optional vmap and\n\t\t\t * all addresses of all Moms\n\t\t\t */\n\t\t\tcommand_orig = command;\n\t\t\tif (psvrmom->msr_vnode_pool <= 0 || psvrmom->msr_has_inventory)\n\t\t\t\tcommand = IS_UPDATE2;\n\t\t\telse\n\t\t\t\tcommand = IS_UPDATE;\n\t\t\t/* fall into IS_UPDATE */\n\n\t\tcase IS_UPDATE:\n\t\tcase IS_UPDATE2:\n\n\t\t\tif (psvrmom->msr_vnode_pool != 0) {\n\t\t\t\tsprintf(log_buffer, \"POOL: IS_UPDATE%c received\",\n\t\t\t\t\t(command == IS_UPDATE) ? ' ' : '2');\n\t\t\t\tlog_event(PBSEVENT_DEBUG4, PBS_EVENTCLASS_NODE,\n\t\t\t\t\t  LOG_INFO, pmom->mi_host, log_buffer);\n\t\t\t}\n\n\t\t\tcr_node = 0;\n\t\t\tmade_new_vnodes = 0;\n\n\t\t\tif (command == IS_UPDATE) {\n\t\t\t\tDBPRT((\"%s: IS_UPDATE %s\\n\", __func__, pmom->mi_host))\n\t\t\t} else {\n\t\t\t\tDBPRT((\"%s: IS_UPDATE2 %s\\n\", __func__, pmom->mi_host))\n\t\t\t}\n\n\t\t\tset_all_state(pmom, 0, INUSE_BUSY | INUSE_UNKNOWN, NULL,\n\t\t\t\t      Set_All_State_Regardless);\n\n\t\t\ts = disrui(stream, &ret); /* state bits, also used later */\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\n\t\t\tDBPRT((\"state 0x%x \", s))\n\t\t\tif (s & INUSE_DOWN) {\n\t\t\t\tmomptr_down(pmom, \"by mom\");\n\t\t\t} else if (s & INUSE_BUSY) {\n\t\t\t\tset_all_state(pmom, 1, INUSE_BUSY, NULL,\n\t\t\t\t\t      Set_All_State_Regardless);\n\t\t\t}\n\n\t\t\ti = disrui(stream, &ret); /* num of phy CPUs on system */\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\n\t\t\t/* physical cpus, set on the one vnode or 
the \"special\" */\n\t\t\tDBPRT((\"phy ncpus %d \", i))\n\t\t\tpsvrmom->msr_pcpus = i;\n\t\t\tif (psvrmom->msr_numvnds > 0) {\n\t\t\t\tnp = psvrmom->msr_children[0]; /* the \"one\" */\n\t\t\t\tnp->nd_ncpus = psvrmom->msr_pcpus;\n\t\t\t\tset_nattr_l_slim(np, ND_ATR_pcpus, psvrmom->msr_pcpus, SET);\n\t\t\t}\n\n\t\t\ti = disrui(stream, &ret); /* num of avail CPUs on host */\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\n\t\t\tDBPRT((\"avail cpus %d \", i))\n\t\t\tpsvrmom->msr_acpus = i;\n\n\t\t\tl = disrull(stream, &ret); /* memory (KB) on system */\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\n\t\t\tDBPRT((\"mem %llukb \", l))\n\n\t\t\tpsvrmom->msr_pmem = l;\n\n\t\t\tval = disrst(stream, &ret); /* arch of Mom's host */\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\n\t\t\tDBPRT((\"arch %s \", val))\n\t\t\tfree(psvrmom->msr_arch);\n\t\t\tpsvrmom->msr_arch = val;\n\n\t\t\tif ((pdmninfo->dmn_state & INUSE_MARKEDDOWN) == 0) {\n\t\t\t\tsprintf(log_buffer, \"update%c state:%d ncpus:%ld\",\n\t\t\t\t\tcommand == IS_UPDATE ? ' ' : '2',\n\t\t\t\t\ts, psvrmom->msr_pcpus);\n\t\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE,\n\t\t\t\t\t  LOG_INFO, pmom->mi_host, log_buffer);\n\t\t\t}\n\n\t\t\tif (command == IS_UPDATE) {\n\t\t\t\t/* Only one vnode,  set resources_available    */\n\t\t\t\t/* for multiple vnodes, the info is in UPDATE2 */\n\n\t\t\t\tif (psvrmom->msr_numvnds != 0) {\n\t\t\t\t\tnp = psvrmom->msr_children[0];\n\n\t\t\t\t\t/*\n\t\t\t\t\t * Sharing attribute - three cases for at_flags:\n\t\t\t\t\t * 1. If the sharing attribute was explicitly set via qmgr the flag will be\n\t\t\t\t\t *    ATR_VFLAG_SET.\n\t\t\t\t\t * 2. If set via a prior update2 message from Mom, the flags\n\t\t\t\t\t *    would be ATR_VFLAG_SET | ATR_VFLAG_DEFLT.\n\t\t\t\t\t * 3. 
If unset, flags would be zero.\n\t\t\t\t\t *\n\t\t\t\t\t * For case 2 and 3, but not for case 1, set or reset the sharing attribute\n\t\t\t\t\t * to the default of \"default_shared\" on the natural vnode as it may have been\n\t\t\t\t\t * changed via a prior UPDATE2 (multi-vnode) message but the vnodedef file\n\t\t\t\t\t * has now been removed; hence this UPDATE message instead of UPDATE2.\n\t\t\t\t\t */\n\t\t\t\t\tif (((get_nattr(np, ND_ATR_Sharing))->at_flags & (ATR_VFLAG_SET | ATR_VFLAG_DEFLT)) != ATR_VFLAG_SET) {\n\t\t\t\t\t\t/* unset or ATR_VFLAG_DEFLT is set */\n\t\t\t\t\t\tset_nattr_l_slim(np, ND_ATR_Sharing, VNS_DFLT_SHARED, SET);\n\t\t\t\t\t\t(get_nattr(np, ND_ATR_Sharing))->at_flags |= ATR_VFLAG_DEFLT;\n\t\t\t\t\t}\n\n\t\t\t\t\t/* mark all vnodes under this Mom stale, then because    */\n\t\t\t\t\t/* this is non-vnoded update, un-stale the natural vnode */\n\t\t\t\t\t/* EXCEPT when the Mom is in a vnode_pool \t\t */\n\t\t\t\t\tif (psvrmom->msr_vnode_pool <= 0) {\n\t\t\t\t\t\tset_all_state(pmom, 1, INUSE_STALE, NULL,\n\t\t\t\t\t\t\t      Set_All_State_Regardless);\n\t\t\t\t\t\tset_vnode_state(np, ~INUSE_STALE, Nd_State_And);\n\t\t\t\t\t}\n\n\t\t\t\t\tpala = get_nattr(np, ND_ATR_ResourceAvail);\n\n\t\t\t\t\t/* available cpus */\n\t\t\t\t\ti = psvrmom->msr_acpus;\n\t\t\t\t\tprd = &svr_resc_def[RESC_NCPUS];\n\t\t\t\t\tprc = find_resc_entry(pala, prd);\n\t\t\t\t\tif (prc == NULL)\n\t\t\t\t\t\tprc = add_resource_entry(pala, prd);\n\t\t\t\t\tif (((is_attr_set(&prc->rs_value)) == 0) ||\n\t\t\t\t\t    ((prc->rs_value.at_flags & ATR_VFLAG_DEFLT) != 0)) {\n\t\t\t\t\t\tmod_node_ncpus(np, i, ATR_ACTION_ALTER);\n\t\t\t\t\t\tprc->rs_value.at_val.at_long = i;\n\t\t\t\t\t\tprc->rs_value.at_flags |= (ATR_SET_MOD_MCACHE | ATR_VFLAG_DEFLT);\n\t\t\t\t\t}\n\n\t\t\t\t\t/* available memory */\n\t\t\t\t\tprd = &svr_resc_def[RESC_MEM];\n\t\t\t\t\tprc = find_resc_entry(pala, prd);\n\t\t\t\t\tif (prc == NULL)\n\t\t\t\t\t\tprc = add_resource_entry(pala, prd);\n\t\t\t\t\tif 
((prc->rs_value.at_flags & ATR_VFLAG_DEFLT) ||\n\t\t\t\t\t    ((is_attr_set(&prc->rs_value)) == 0)) {\n\t\t\t\t\t\t/* set size in KB */\n\t\t\t\t\t\tprc->rs_value.at_val.at_size.atsv_num =\n\t\t\t\t\t\t\tpsvrmom->msr_pmem;\n\t\t\t\t\t\tprc->rs_value.at_val.at_size.atsv_shift = 10;\n\t\t\t\t\t\tprc->rs_value.at_flags |= (ATR_SET_MOD_MCACHE | ATR_VFLAG_DEFLT);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/* UPDATE2 message - multiple vnoded system */\n\t\t\tif (command == IS_UPDATE2) {\n\t\t\t\tvnlp = vn_decode_DIS(stream, &ret);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto err;\n\t\t\t\tif (vnlp == NULL) {\n\t\t\t\t\tsprintf(log_buffer, \"vn_decode_DIS vn failed\");\n\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t} else if (vnlp->vnl_modtime >= pmom->mi_modtime) {\n\t\t\t\t\tint i, j;\n\n\t\t\t\t\tif (vnlp->vnl_modtime > pmom->mi_modtime)\n\t\t\t\t\t\tcr_node = 1;\n\n\t\t\t\t\t/* set stale bit in state for all non-sleeping vnodes; */\n\t\t\t\t\t/* it will be cleared for the vnodes     */\n\t\t\t\t\t/* listed in the update2 message\t */\n\t\t\t\t\tset_all_state(pmom, 1, INUSE_STALE, NULL,\n\t\t\t\t\t\t      Set_All_State_Regardless);\n\n\t\t\t\t\tpmom->mi_modtime = vnlp->vnl_modtime;\n\t\t\t\t\tsprintf(log_buffer, \"Mom reporting %lu vnodes as of %s\", vnlp->vnl_used, ctime((time_t *) &vnlp->vnl_modtime));\n\t\t\t\t\t*(log_buffer + strlen(log_buffer) - 1) = '\\0';\n\n\t\t\t\t\tif ((pdmninfo->dmn_state & INUSE_MARKEDDOWN) == 0)\n\t\t\t\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE, LOG_INFO, pmom->mi_host, log_buffer);\n\t\t\t\t\t/*\n\t\t\t\t\t * If the vnode will have multiple\n\t\t\t\t\t * parent Moms, set flag to cross check\n\t\t\t\t\t * mod time against all parent Moms\n\t\t\t\t\t */\n\t\t\t\t\tif (vnlp->vnl_used > 1)\n\t\t\t\t\t\tcheck_other_moms_time = 1;\n\n\t\t\t\t\tfor (i = 0; i < vnlp->vnl_used; i++) {\n\t\t\t\t\t\tvnal_t *vnrlp;\n\t\t\t\t\t\tvnrlp = VNL_NODENUM(vnlp, i);\n\t\t\t\t\t\t/* create vnode */\n\t\t\t\t\t\t(void) 
update2_to_vnode(vnrlp, cr_node, pmom, &made_new_vnodes, 0);\n\t\t\t\t\t\tfor (j = 0; j < vnrlp->vnal_used; j++) {\n\t\t\t\t\t\t\tvna_t *psrp;\n\n\t\t\t\t\t\t\tpsrp = VNAL_NODENUM(vnrlp, j);\n\t\t\t\t\t\t\tif (strcasecmp(psrp->vna_name,\n\t\t\t\t\t\t\t\t       VNATTR_PNAMES) == 0) {\n\t\t\t\t\t\t\t\tsnprintf(log_buffer,\n\t\t\t\t\t\t\t\t\t sizeof(log_buffer),\n\t\t\t\t\t\t\t\t\t \"pnames %s\", psrp->vna_val);\n\t\t\t\t\t\t\t\tlog_event(PBSEVENT_SYSTEM,\n\t\t\t\t\t\t\t\t\t  PBS_EVENTCLASS_NODE,\n\t\t\t\t\t\t\t\t\t  LOG_INFO,\n\t\t\t\t\t\t\t\t\t  pmom->mi_host, log_buffer);\n\n\t\t\t\t\t\t\t\tsetup_pnames(psrp->vna_val);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\t/* if multiple vnodes indicated (above) and\n\t\t\t\t\t * if the vnodes (except the first) have\n\t\t\t\t\t * multiple Moms,  update the map mod\n\t\t\t\t\t * time on those Moms as well\n\t\t\t\t\t */\n\t\t\t\t\tif (check_other_moms_time &&\n\t\t\t\t\t    (psvrmom->msr_numvnds > 1)) {\n\t\t\t\t\t\tif (psvrmom->msr_children[1]->nd_nummoms > 1) {\n\t\t\t\t\t\t\tj = psvrmom->msr_children[1]->nd_nummoms;\n\t\t\t\t\t\t\tfor (i = 0; i < j; ++i) {\n\t\t\t\t\t\t\t\tpsvrmom->msr_children[1]->nd_moms[i]->mi_modtime = vnlp->vnl_modtime;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tif (made_new_vnodes || cr_node) {\n\t\t\t\t\t\tsave_nodes_db(1, pmom); /* update the node database */\n\t\t\t\t\t\tpropagate_licenses_to_vnodes(pmom);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tvnl_free(vnlp);\n\t\t\t\tvnlp = NULL;\n\t\t\t}\n\n\t\t\t/*read mom's pbs_version data if appended*/\n\n\t\t\tval = disrst(stream, &ret);\n\t\t\tif (ret == DIS_SUCCESS) {\n\t\t\t\tDBPRT((\"mom's pbs_version %s \", val))\n\t\t\t\tfree(psvrmom->msr_pbs_ver);\n\t\t\t\tpsvrmom->msr_pbs_ver = val;\n\t\t\t} else if (ret == DIS_EOD) {\n\t\t\t\t/*found no appended version data*/\n\t\t\t\tfree(psvrmom->msr_pbs_ver);\n\t\t\t\tpsvrmom->msr_pbs_ver = strdup(\"unavailable\");\n\t\t\t} else\n\t\t\t\tgoto err;\n\n\t\t\t/* for either UPDATE or 
UPDATE2...\t\t    */\n\t\t\t/* log which vnodes under that Mom are stale\t    */\n\t\t\t/* set default resources for \"arch\" on all vnodes   */\n\t\t\t/* also set each vnodes' ATR_ResvEnable if need be  */\n\t\t\t/* Set ncpus and mem in resources_available on the  */\n\t\t\t/* natural vnode if they are not already set.       */\n\n\t\t\tfor (ivnd = 0; ivnd < psvrmom->msr_numvnds; ++ivnd) {\n\t\t\t\tnp = psvrmom->msr_children[ivnd];\n\n\t\t\t\tif (np->nd_state & INUSE_STALE) {\n\t\t\t\t\t/* vnode is stale */\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"vnode %s is stale\", np->nd_name);\n\t\t\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE,\n\t\t\t\t\t\t  LOG_INFO, pmom->mi_host, log_buffer);\n\t\t\t\t}\n\n\t\t\t\tpala = get_nattr(np, ND_ATR_ResourceAvail);\n\n\t\t\t\tprd = &svr_resc_def[RESC_ARCH];\n\t\t\t\tprc = find_resc_entry(pala, prd);\n\t\t\t\tif (prc == NULL)\n\t\t\t\t\tprc = add_resource_entry(pala, prd);\n\t\t\t\t/* set/refresh arch if unset or still carrying the default, matching the ncpus/mem handling above */\n\t\t\t\tif (((is_attr_set(&prc->rs_value)) == 0) ||\n\t\t\t\t    ((prc->rs_value.at_flags & ATR_VFLAG_DEFLT) != 0)) {\n\t\t\t\t\tif (is_attr_set(&prc->rs_value))\n\t\t\t\t\t\tfree(prc->rs_value.at_val.at_str);\n\t\t\t\t\tprc->rs_value.at_val.at_str = strdup(psvrmom->msr_arch);\n\t\t\t\t\tprc->rs_value.at_flags |= (ATR_SET_MOD_MCACHE | ATR_VFLAG_DEFLT);\n\t\t\t\t}\n\n\t\t\t\t/*\n\t\t\t\t * make sure resources_available.[ncpus,mem] are set\n\t\t\t\t * on the \"natural\" (first vnode).  
Use value from\n\t\t\t\t * the Mom.\n\t\t\t\t */\n\t\t\t\tif (ivnd == 0) {\n\t\t\t\t\t/* the first = natural vnode */\n\t\t\t\t\tprd = &svr_resc_def[RESC_NCPUS];\n\t\t\t\t\tprc = find_resc_entry(pala, prd);\n\t\t\t\t\tif (prc == NULL)\n\t\t\t\t\t\tprc = add_resource_entry(pala, prd);\n\t\t\t\t\tif (prc &&\n\t\t\t\t\t    ((is_attr_set(&prc->rs_value)) == 0)) {\n\t\t\t\t\t\tprc->rs_value.at_val.at_long = psvrmom->msr_acpus;\n\t\t\t\t\t\tprc->rs_value.at_flags |= (ATR_SET_MOD_MCACHE | ATR_VFLAG_DEFLT);\n\t\t\t\t\t}\n\t\t\t\t\tprd = &svr_resc_def[RESC_MEM];\n\t\t\t\t\tprc = find_resc_entry(pala, prd);\n\t\t\t\t\tif (prc == NULL)\n\t\t\t\t\t\tprc = add_resource_entry(pala, prd);\n\t\t\t\t\tif (prc &&\n\t\t\t\t\t    ((is_attr_set(&prc->rs_value)) == 0)) {\n\t\t\t\t\t\tprc->rs_value.at_val.at_size.atsv_num =\n\t\t\t\t\t\t\tpsvrmom->msr_pmem;\n\t\t\t\t\t\tprc->rs_value.at_val.at_size.atsv_shift = 10;\n\t\t\t\t\t\tprc->rs_value.at_flags |= (ATR_SET_MOD_MCACHE | ATR_VFLAG_DEFLT);\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\t/*\n\t\t\t\t * is resv_enable attribute in manual/automatic mode?\n\t\t\t\t *\n\t\t\t\t * Automatic mode is implemented by utilizing the\n\t\t\t\t * ATR_VFLAG_DEFLT bit.\n\t\t\t\t * The table which follows enumerates the cases:\n\t\t\t\t *\n\t\t\t\t * Manual mode if:\n\t\t\t\t * (ATR_VFLAG_SET & at_flags)==1  &&\n\t\t\t\t * (ATR_VFLAG_DEFLT & at_flags)==0\n\t\t\t\t *\n\t\t\t\t * Automatic mode if:\n\t\t\t\t * (ATR_VFLAG_SET & at_flags)==1  &&\n\t\t\t\t * (ATR_VFLAG_DEFLT & at_flags)==1\n\t\t\t\t * (ATR_VFLAG_SET & at_flags)==0  &&\n\t\t\t\t * (ATR_VFLAG_DEFLT & at_flags)==1\n\t\t\t\t * (ATR_VFLAG_SET & at_flags)==0  &&\n\t\t\t\t * (ATR_VFLAG_DEFLT & at_flags)==0\n\t\t\t\t *\n\t\t\t\t * The latter two forms of automatic mode transition to\n\t\t\t\t * the first form listed.  
Doing it this way provides a\n\t\t\t\t * means by which the operator can go into manual mode\n\t\t\t\t * but still have a way to revert back to automatic\n\t\t\t\t * mode if needed.\n\t\t\t\t */\n\n\t\t\t\tif (!((get_nattr(np, ND_ATR_ResvEnable))->at_flags & ATR_VFLAG_SET) ||\n\t\t\t\t    ((get_nattr(np, ND_ATR_ResvEnable))->at_flags & ATR_VFLAG_DEFLT)) {\n\n\t\t\t\t\tint change = 0;\n\n\t\t\t\t\t/*\n\t\t\t\t\t * attribute resv_enable is in automatic mode;\n\t\t\t\t\t * does the mom config file show mom configured for\n\t\t\t\t\t * cycle harvesting?\n\t\t\t\t\t */\n\t\t\t\t\tif (s & MOM_STATE_CONF_HARVEST) {\n\t\t\t\t\t\tif (get_nattr_long(np, ND_ATR_ResvEnable)) {\n\t\t\t\t\t\t\tset_nattr_l_slim(np, ND_ATR_ResvEnable, 0, SET);\n\t\t\t\t\t\t\tchange = 1;\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\tif (!get_nattr_long(np, ND_ATR_ResvEnable)) {\n\t\t\t\t\t\t\tset_nattr_l_slim(np, ND_ATR_ResvEnable, 1, SET);\n\t\t\t\t\t\t\tchange = 1;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tif (change || !((get_nattr(np, ND_ATR_ResvEnable))->at_flags & ATR_VFLAG_SET) || !((get_nattr(np, ND_ATR_ResvEnable))->at_flags & ATR_VFLAG_DEFLT))\n\t\t\t\t\t\t(get_nattr(np, ND_ATR_ResvEnable))->at_flags |= ATR_VFLAG_DEFLT;\n\t\t\t\t}\n\n\t\t\t\tif (psvrmom->msr_pbs_ver != NULL) {\n\n\t\t\t\t\tif (is_nattr_set(np, ND_ATR_version) == 0 || strcmp(psvrmom->msr_pbs_ver, get_nattr_str(np, ND_ATR_version)) != 0) {\n\t\t\t\t\t\tfree_nattr(np, ND_ATR_version);\n\t\t\t\t\t\tif (!set_nattr_str_slim(np, ND_ATR_version, psvrmom->msr_pbs_ver, NULL))\n\t\t\t\t\t\t\tnp->nd_modified = 1;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (made_new_vnodes || cr_node)\n\t\t\t\tsave_nodes_db(1, pmom); /* update the node database */\n\n\t\t\tif (command_orig == IS_REGISTERMOM) {\n\t\t\t\t/* Mom is acknowledging the info sent by the Server */\n\t\t\t\t/* Mark the Mom and associated vnodes as up */\n\t\t\t\toldstate = pdmninfo->dmn_state;\n\t\t\t\tif (pdmninfo->dmn_state & INUSE_MARKEDDOWN)\n\t\t\t\t\tpdmninfo->dmn_state 
&= ~INUSE_MARKEDDOWN;\n\n\t\t\t\tset_all_state(pmom, 0, INUSE_DOWN | INUSE_INIT,\n\t\t\t\t\t      NULL, Set_All_State_Regardless);\n\n\t\t\t\t/* log a node up message only if it was not marked\n\t\t\t\t * as \"markeddown\" by TPP layer due to broken connection\n\t\t\t\t * to pbs_comm router\n\t\t\t\t */\n\t\t\t\tif ((oldstate & INUSE_MARKEDDOWN) == 0) {\n\t\t\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE,\n\t\t\t\t\t\t  LOG_NOTICE, pmom->mi_host, node_up);\n\t\t\t\t}\n\t\t\t\tpsvrmom->msr_timedown = 0;\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\t\t\t\tif (pdmninfo->dmn_state & INUSE_NEED_CREDENTIALS) {\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE,\n\t\t\t\t\t\t  LOG_INFO, pmom->mi_host, \"mom needs credentials\");\n\n\t\t\t\t\tfor (i = 0; i < psvrmom->msr_numjobs; i++) {\n\t\t\t\t\t\tif (psvrmom->msr_jobindx[i])\n\t\t\t\t\t\t\tset_task(WORK_Immed, 0, svr_renew_job_cred, psvrmom->msr_jobindx[i]->ji_qs.ji_jobid);\n\t\t\t\t\t}\n\n\t\t\t\t\tpdmninfo->dmn_state &= ~INUSE_NEED_CREDENTIALS;\n\t\t\t\t}\n#endif\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase IS_RESCUSED:\n\t\tcase IS_RESCUSED_FROM_HOOK:\n\n\t\t\tif (command == IS_RESCUSED) {\n\t\t\t\tDBPRT((\"%s: IS_RESCUSED\\n\", __func__))\n\t\t\t} else {\n\t\t\t\tDBPRT((\"%s: IS_RESCUSED_FROM_HOOK\\n\", __func__))\n\t\t\t}\n\n\t\t\tstat_update(stream);\n\t\t\tbreak;\n\n\t\tcase IS_JOBOBIT:\n\t\t\tDBPRT((\"%s: IS_JOBOBIT\\n\", __func__))\n\t\t\trecv_job_obit(stream);\n\t\t\tbreak;\n\n\t\tcase IS_IDLE:\n\t\t\tDBPRT((\"%s: IS_IDLE\\n\", __func__))\n\t\t\trecv_wk_job_idle(stream);\n\t\t\tbreak;\n\n\t\tcase IS_DISCARD_DONE:\n\t\t\t/* Mom is acknowledging an IS_DISCARD_JOB request   */\n\t\t\t/* Mark her entry in the discard structure complete */\n\n\t\t\tjid = disrst(stream, &ret); /* job id */\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\t\t\tj = disrsi(stream, &ret); /* run (hop) count */\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\t\t\tsprintf(log_buffer, \"Discard done for job %s\", 
jid);\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_NODE, LOG_DEBUG,\n\t\t\t\t  pmom->mi_host, log_buffer);\n\t\t\tDBPRT((\"%s: Mom %s %s (%d)\\n\", __func__, pmom->mi_host, log_buffer, j))\n\t\t\tpjob = find_job(jid);\n\t\t\tif (pjob &&\n\t\t\t    (get_jattr_long(pjob, JOB_ATR_run_version) == j)) {\n\t\t\t\tpost_discard_job(pjob, pmom, JDCD_REPLIED);\n\t\t\t}\n\t\t\tfree(jid);\n\t\t\tjid = NULL;\n\t\t\tbreak;\n\n\t\tcase IS_HOOK_JOB_ACTION: {\n\t\t\tint *replies_seq = NULL;\n\t\t\tint replies_count = 0;\n\t\t\tint acts_count = 0;\n\n\t\t\tacts_count = i = disrsi(stream, &ret); /* number of actions in request */\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\t\t\tif ((replies_seq = (int *) malloc(sizeof(int) * i)) == NULL)\n\t\t\t\tgoto err;\n\t\t\twhile (i--) {\n\t\t\t\tint runct;\n\t\t\t\tint hact;\n\t\t\t\tint hook_seq;\n\n\t\t\t\t/* job id */\n\t\t\t\tjid = disrst(stream, &ret);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto hook_act_reply;\n\t\t\t\t/* hook action sequence number for acknowledgement */\n\t\t\t\thook_seq = disrul(stream, &ret);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto hook_act_reply;\n\t\t\t\t/* run count of job to verify that job hasn't changed */\n\t\t\t\trunct = disrsi(stream, &ret);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto hook_act_reply;\n\t\t\t\t/* action: delete or requeue */\n\t\t\t\thact = disrsi(stream, &ret);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto hook_act_reply;\n\t\t\t\t/* user requesting action, not currently used */\n\t\t\t\t(void) disrui(stream, &ret);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto hook_act_reply;\n\n\t\t\t\tif (((pjob = find_job(jid)) != NULL) &&\n\t\t\t\t    (check_job_state(pjob, JOB_STATE_LTR_RUNNING) ||\n\t\t\t\t     check_job_state(pjob, JOB_STATE_LTR_EXITING)) &&\n\t\t\t\t    (get_jattr_long(pjob, JOB_ATR_run_version) == runct)) {\n\t\t\t\t\t/* set the Exit_status job attribute */\n\t\t\t\t\t/* to be later checked in job_obit() */\n\t\t\t\t\tif (hact == 
JOB_ACT_REQ_REQUEUE) {\n\t\t\t\t\t\tset_jattr_l_slim(pjob, JOB_ATR_exit_status, JOB_EXEC_HOOK_RERUN, SET);\n\t\t\t\t\t\tlog_eventf(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE, LOG_INFO, pmom->mi_host,\n\t\t\t\t\t\t\t   \"hook request rerun %s\", jid);\n\t\t\t\t\t} else if (hact == JOB_ACT_REQ_DELETE) {\n\t\t\t\t\t\tset_jattr_l_slim(pjob, JOB_ATR_exit_status, JOB_EXEC_HOOK_DELETE, SET);\n\t\t\t\t\t\tlog_eventf(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE, LOG_INFO, pmom->mi_host,\n\t\t\t\t\t\t\t   \"hook request delete %s\", jid);\n\t\t\t\t\t} else if (hact == JOB_ACT_REQ_DEALLOCATE) {\n\n\t\t\t\t\t\t/* decrement everything found in exec_vnode/exec_vnode_deallocated  */\n\t\t\t\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_Suspend) == 0) {\n\t\t\t\t\t\t\t/* don't update resources_assigned if job is suspended */\n\t\t\t\t\t\t\tset_resc_assigned((void *) pjob, 0, DECR);\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tdeallocate_job(pmom, pjob);\n\n\t\t\t\t\t\t/* increment everything found in new exec_vnode/exec_vnode_deallocated  */\n\t\t\t\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_Suspend) == 0) {\n\t\t\t\t\t\t\t/* don't update resources_assigned if job is suspended */\n\t\t\t\t\t\t\tset_resc_assigned((void *) pjob, 0, INCR);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tfree(jid);\n\t\t\t\tjid = NULL;\n\t\t\t\treplies_seq[replies_count++] = hook_seq;\n\t\t\t}\n\t\thook_act_reply:\n\t\t\tif (replies_count > 0) {\n\t\t\t\tif (is_compose(stream, IS_HOOK_ACTION_ACK) != DIS_SUCCESS)\n\t\t\t\t\tgoto err;\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto err;\n\t\t\t\tret = diswsi(stream, IS_HOOK_JOB_ACTION);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto err;\n\t\t\t\tret = diswsi(stream, replies_count);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto err;\n\t\t\t\tfor (i = 0; i < replies_count; i++) {\n\t\t\t\t\tret = diswul(stream, replies_seq[i]);\n\t\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\t\tgoto err;\n\t\t\t\t}\n\t\t\t\tret = dis_flush(stream);\n\t\t\t\tif (ret != DIS_SUCCESS) 
{\n\t\t\t\t\tret = DIS_NOCOMMIT;\n\t\t\t\t\tgoto err;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (replies_count != acts_count)\n\t\t\t\tgoto err;\n\t\t} break;\n\n\t\tcase IS_HOOK_SCHEDULER_RESTART_CYCLE:\n\t\t\thook_euser = disrst(stream, &ret);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\t\t\tif (*hook_euser != '\\0') {\n\t\t\t\tif ((svr_get_privilege(hook_euser, pmom->mi_host) &\n\t\t\t\t     (ATR_DFLAG_MGWR | ATR_DFLAG_OPWR)) == 0) {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t hook_privilege, hook_euser,\n\t\t\t\t\t\t pmom->mi_host);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_NODE,\n\t\t\t\t\t\t  LOG_INFO, pmom->mi_host, log_buffer);\n\t\t\t\t\tfree(hook_euser);\n\t\t\t\t\thook_euser = NULL;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\tfree(hook_euser);\n\t\t\thook_euser = NULL;\n\t\t\tset_scheduler_flag(SCH_SCHEDULE_RESTART_CYCLE, dflt_scheduler);\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_NODE,\n\t\t\t\t  LOG_INFO, pmom->mi_host,\n\t\t\t\t  \"requested for scheduler to restart cycle\");\n\t\t\tbreak;\n\n\t\tcase IS_UPDATE_FROM_HOOK:\n\t\tcase IS_UPDATE_FROM_HOOK2:\n\t\t\thook_seq = disrul(stream, &ret);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\n\t\t\t/* hook_euser is not currently used; the plan is to use it */\n\t\t\t/* instead of VNATTR_HOOK_REQUESTOR in the future          */\n\t\t\t/* it's here to avoid changing the protocol later          */\n\t\t\thook_euser = disrst(stream, &ret);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\t\t\tfree(hook_euser);\n\t\t\thook_euser = NULL;\n\n\t\t\tvnlp = vn_decode_DIS(stream, &ret);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\n\t\t\tif (vnlp == NULL) {\n\t\t\t\tsprintf(log_buffer, \"vn_decode_DIS vn failed\");\n\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\tgoto err;\n\t\t\t}\n\n\t\t\tcr_node = 0;\n\t\t\t/* is_update2 changes (from vnodedef files) are sent at the same time */\n\t\t\t/* as is_update_from_hook changes, so they'll have the same vnlp timestamp. 
*/\n\t\t\t/* is_update2 also records the received vnlp's vnl_modtime in pmom->mi_modtime. */\n\t\t\tif (vnlp->vnl_modtime >= pmom->mi_modtime)\n\t\t\t\tcr_node = 1;\n\t\t\tfor (i = 0; i < vnlp->vnl_used; i++) {\n\t\t\t\tvnal_t *vnrlp;\n\t\t\t\tvnrlp = VNL_NODENUM(vnlp, i);\n\t\t\t\t/* update vnode */\n\t\t\t\tmade_new_vnodes = 0;\n\t\t\t\tif (update2_to_vnode(vnrlp, cr_node, pmom, &made_new_vnodes, (command == IS_UPDATE_FROM_HOOK2) ? 2 : 1) == PBSE_PERM) {\n\t\t\t\t\tbreak; /* encountered a bad permission */\n\t\t\t\t}\n\t\t\t}\n\t\t\tvnl_free(vnlp);\n\t\t\tvnlp = NULL;\n\n\t\t\t/* tell Mom we got this one, reply with the type of */\n\t\t\t/* action requested and the sequence number         */\n\n\t\t\tif (is_compose(stream, IS_HOOK_ACTION_ACK) != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\t\t\tret = diswsi(stream, IS_UPDATE_FROM_HOOK);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\t\t\tret = diswsi(stream, 1);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\t\t\tret = diswul(stream, hook_seq);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\t\t\tret = dis_flush(stream);\n\t\t\tif (ret != DIS_SUCCESS) {\n\t\t\t\tret = DIS_NOCOMMIT;\n\t\t\t\tgoto err;\n\t\t\t}\n\t\t\tif (made_new_vnodes || cr_node) {\n\t\t\t\tsave_nodes_db(1, pmom); /* update the node database */\n\t\t\t\tpropagate_licenses_to_vnodes(pmom);\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase IS_HOOK_CHECKSUMS:\n\t\t\tCLEAR_HEAD(reported_hooks);\n\t\t\ti = disrsi(stream, &ret); /* number of hooks to report */\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\n\t\t\twhile (i--) {\n\t\t\t\tunsigned long chksum_hk;\n\t\t\t\tunsigned long chksum_py;\n\t\t\t\tunsigned long chksum_cf;\n\t\t\t\tunsigned int haction;\n\n\t\t\t\thaction = 0;\n\t\t\t\t/* hook name */\n\t\t\t\thname = disrst(stream, &ret);\n\t\t\t\tif ((ret != DIS_SUCCESS) || (hname == NULL))\n\t\t\t\t\tgoto err;\n\n\t\t\t\t/* hook control file checksum */\n\t\t\t\tchksum_hk = disrul(stream, 
&ret);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto err;\n\n\t\t\t\t/* hook script checksum */\n\t\t\t\tchksum_py = disrul(stream, &ret);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto err;\n\n\t\t\t\t/* hook config file checksum */\n\t\t\t\tchksum_cf = disrul(stream, &ret);\n\t\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\t\tgoto err;\n\n\t\t\t\tphook = find_hook(hname);\n\t\t\t\tif ((phook == NULL) ||\n\t\t\t\t    ((phook->event & MOM_EVENTS) == 0)) {\n\t\t\t\t\t/* mom has a hook that the server */\n\t\t\t\t\t/* does not  know about. tell mom */\n\t\t\t\t\t/* to delete that hook */\n\t\t\t\t\tsnprintf(log_buffer,\n\t\t\t\t\t\t sizeof(log_buffer),\n\t\t\t\t\t\t \"encountered a mom (%s) hook %s \"\n\t\t\t\t\t\t \"that the server does not know \"\n\t\t\t\t\t\t \"about! Telling mom to delete\",\n\t\t\t\t\t\t pmom->mi_host, hname);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG3,\n\t\t\t\t\t\t  PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t\t  LOG_ERR, hname,\n\t\t\t\t\t\t  log_buffer);\n\t\t\t\t\tadd_pending_mom_hook_action(pmom,\n\t\t\t\t\t\t\t\t    hname, MOM_HOOK_ACTION_DELETE);\n\t\t\t\t\tfree(hname);\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\n\t\t\t\tif ((phook->hook_control_checksum > 0) &&\n\t\t\t\t    (phook->hook_control_checksum != chksum_hk)) {\n\n\t\t\t\t\tsnprintf(log_buffer,\n\t\t\t\t\t\t sizeof(log_buffer),\n\t\t\t\t\t\t \"hook control file \"\n\t\t\t\t\t\t \"mismatched checksums: server: \"\n\t\t\t\t\t\t \"%lu mom (%s): %lu...resending\",\n\t\t\t\t\t\t phook->hook_control_checksum,\n\t\t\t\t\t\t pmom->mi_host, chksum_hk);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG3,\n\t\t\t\t\t\t  PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t\t  LOG_ERR, phook->hook_name,\n\t\t\t\t\t\t  log_buffer);\n\t\t\t\t\thaction |= MOM_HOOK_ACTION_SEND_ATTRS;\n\t\t\t\t}\n\n\t\t\t\tif ((phook->hook_script_checksum > 0) &&\n\t\t\t\t    (phook->hook_script_checksum != chksum_py)) {\n\n\t\t\t\t\tsnprintf(log_buffer,\n\t\t\t\t\t\t sizeof(log_buffer),\n\t\t\t\t\t\t \"hook script \"\n\t\t\t\t\t\t \"mismatched checksums: server: 
\"\n\t\t\t\t\t\t \"%lu mom (%s): %lu...resending\",\n\t\t\t\t\t\t phook->hook_script_checksum,\n\t\t\t\t\t\t pmom->mi_host, chksum_py);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG3,\n\t\t\t\t\t\t  PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t\t  LOG_ERR, phook->hook_name,\n\t\t\t\t\t\t  log_buffer);\n\t\t\t\t\thaction |= MOM_HOOK_ACTION_SEND_SCRIPT;\n\t\t\t\t}\n\n\t\t\t\tif ((phook->hook_config_checksum > 0) &&\n\t\t\t\t    (phook->hook_config_checksum != chksum_cf)) {\n\n\t\t\t\t\tsnprintf(log_buffer,\n\t\t\t\t\t\t sizeof(log_buffer),\n\t\t\t\t\t\t \"hook config file \"\n\t\t\t\t\t\t \"mismatched checksums: server: \"\n\t\t\t\t\t\t \"%lu mom (%s): %lu...resending\",\n\t\t\t\t\t\t phook->hook_config_checksum,\n\t\t\t\t\t\t pmom->mi_host, chksum_cf);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG3,\n\t\t\t\t\t\t  PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t\t  LOG_ERR, phook->hook_name,\n\t\t\t\t\t\t  log_buffer);\n\t\t\t\t\thaction |= MOM_HOOK_ACTION_SEND_CONFIG;\n\t\t\t\t}\n\n\t\t\t\tif (haction != 0) {\n\t\t\t\t\tadd_pending_mom_hook_action(pmom,\n\t\t\t\t\t\t\t\t    hname, haction);\n\t\t\t\t}\n\n\t\t\t\tif (add_to_svrattrl_list(&reported_hooks, hname,\n\t\t\t\t\t\t\t NULL, NULL, 0, NULL) == -1) {\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG3,\n\t\t\t\t\t\t  PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t\t  LOG_INFO, hname,\n\t\t\t\t\t\t  \"failed to add to reported \"\n\t\t\t\t\t\t  \"hooks list\");\n\t\t\t\t}\n\n\t\t\t\tfree(hname);\n\t\t\t}\n\n\t\t\t/* hook resourcedef checksum */\n\t\t\tchksum_rescdef = disrul(stream, &ret);\n\t\t\tif (ret != DIS_SUCCESS)\n\t\t\t\tgoto err;\n\n\t\t\thook_rescdef_checksum = get_hook_rescdef_checksum();\n\t\t\tif ((hook_rescdef_checksum > 0) &&\n\t\t\t    (hook_rescdef_checksum != chksum_rescdef)) {\n\n\t\t\t\tsnprintf(log_buffer,\n\t\t\t\t\t sizeof(log_buffer),\n\t\t\t\t\t \"hook resourcedef file \"\n\t\t\t\t\t \"mismatched checksums: server: \"\n\t\t\t\t\t \"%lu mom %s: %lu...resending\",\n\t\t\t\t\t hook_rescdef_checksum, pmom->mi_host,\n\t\t\t\t\t 
chksum_rescdef);\n\t\t\t\tlog_event(PBSEVENT_DEBUG3,\n\t\t\t\t\t  PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t  LOG_ERR, PBS_RESCDEF,\n\t\t\t\t\t  log_buffer);\n\t\t\t\tadd_pending_mom_hook_action(pmom,\n\t\t\t\t\t\t\t    PBS_RESCDEF,\n\t\t\t\t\t\t\t    MOM_HOOK_ACTION_SEND_RESCDEF);\n\t\t\t}\n\n\t\t\t/* Look for mom hooks known to the server that are */\n\t\t\t/* not known to the mom sending the request. */\n\t\t\tphook = (hook *) GET_NEXT(svr_allhooks);\n\t\t\twhile (phook) {\n\t\t\t\tif (phook->hook_name &&\n\t\t\t\t    !phook->pending_delete &&\n\t\t\t\t    (phook->event & MOM_EVENTS) &&\n\t\t\t\t    (find_svrattrl_list_entry(&reported_hooks,\n\t\t\t\t\t\t\t      phook->hook_name, NULL) == NULL)) {\n\t\t\t\t\tadd_pending_mom_hook_action(pmom,\n\t\t\t\t\t\t\t\t    phook->hook_name,\n\t\t\t\t\t\t\t\t    MOM_HOOK_ACTION_SEND_ATTRS | MOM_HOOK_ACTION_SEND_SCRIPT | MOM_HOOK_ACTION_SEND_CONFIG);\n\t\t\t\t}\n\t\t\t\tphook = (hook *) GET_NEXT(phook->hi_allhooks);\n\t\t\t}\n\n\t\t\tfree_attrlist(&reported_hooks);\n\t\t\tnp = psvrmom->msr_children[0];\n\t\t\tif (np->nd_state & INUSE_PROV) {\n\t\t\t\tDBPRT((\"%s: calling [is_vnode_prov_done] from is_request\\n\", __func__))\n\t\t\t\tis_vnode_prov_done(np->nd_name);\n\t\t\t}\n\n\t\t\tbreak;\n\n\t\tcase IS_CMD:\n\t\t\tDBPRT((\"%s: IS_CMD\\n\", __func__))\n\t\t\tprocess_IS_CMD(stream);\n\t\t\tbreak;\n\t}\n\n\ttpp_eom(stream);\n\treturn;\n\nerr:\n\t/*\n\t ** We come here if we got an internal error or a DIS read/write error.\n\t */\n\tDBPRT((\"\\nINTERNAL or DIS i/o error\\n\"))\n\tsnprintf(log_buffer, sizeof(log_buffer), \"%s from %s(%s)\",\n\t\t dis_emsg[ret], pmom->mi_host, netaddr(addr));\n\tlog_err(-1, __func__, log_buffer);\n\tfree(jid);\n\tjid = NULL;\n\tfree(hname);\n\thname = NULL;\n\tfree_attrlist(&reported_hooks);\n\n\tstream_eof(stream, ret, \"write_err\");\n\n\treturn;\n}\n\n/**\n * @brief\n * \t\tfree list of prop structures created by proplist()\n *\n * @param[in,out]\tprop\t- head of the list of prop structures which need to be freed\n *\n * 
@return\tvoid\n */\n\nstatic void\nfree_prop(struct prop *prop)\n{\n\tstruct prop *pp;\n\n\tfor (pp = prop; pp; pp = prop) {\n\t\tprop = pp->next;\n\t\tfree(pp->name);\n\t\tpp->name = NULL;\n\t\tfree(pp);\n\t}\n}\n\n/**\n * @brief\n * \t\tParse a number in a spec.\n *\n * @param[in]\tptr\t- The string being parsed\n * @param[out]\tnum\t- The number parsed\n * @param[in]\tznotok\t- (zero not ok) set true means a zero value is an error\n *\n * @return\tint\n * @retval\t0\t- if okay\n * @retval  1\t- if no number exists\n * @retval -1\t- on error\n */\nstatic int\nnumber(char **ptr, int *num, int znotok)\n{\n\tchar holder[80];\n\tint i = 0;\n\tchar *str = *ptr;\n\n\t/* bound the copy so a long run of digits cannot overflow holder */\n\twhile (isdigit(*str) && (i < (int) sizeof(holder) - 1))\n\t\tholder[i++] = *str++;\n\n\tif (i == 0)\n\t\treturn 1;\n\tif (isalpha((int) *str))\n\t\treturn 1; /* cannot have digit followed by letter */\n\n\tholder[i] = '\\0';\n\tif (((i = atoi(holder)) == 0) && znotok) {\n\t\tsprintf(log_buffer, \"zero illegal\");\n\t\treturn -1;\n\t}\n\n\t*ptr = str;\n\t*num = i;\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tCheck string to see if it is a legal property name.\n *\n * @param[in]\tptr\t- The string being parsed\n * @param[out]\tprop\t- set to static char array containing the property\n *\n * @see\n * \t\tproplist and ctcpus\n *\n * @return\tint\n * @retval\t0\t- if string is a legal property name\n * @retval\t1\t- if string is not a legal property name\n *\n * @par MT-safe: No\n */\nstatic int\nproperty(char **ptr, char **prop)\n{\n\tstatic char name[80];\n\tchar *str = *ptr;\n\tint i = 0;\n\n\tif (!isalnum(*str)) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"first character of property (%s) not alphanum\", str);\n\t\treturn 1;\n\t}\n\n\t/* bound the copy so a long property name cannot overflow name */\n\twhile ((isalnum(*str) || *str == '-' || *str == '_' || *str == '.' || *str == '=') &&\n\t       (i < (int) sizeof(name) - 1))\n\t\tname[i++] = *str++;\n\n\tname[i] = '\\0';\n\t*prop = (i == 0) ? 
NULL : name;\n\n\t/* skip over \"/vp_number\" */\n\tif (*str == '/') {\n\t\tdo {\n\t\t\tstr++;\n\t\t} while (isdigit(*str));\n\t}\n\t*ptr = str;\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tCreate a property list from a string.\n *\n * @param[in]\tstr\t- The string being parsed\n * @param[out]\tplist\t- head of the list of prop structures built from the string\n * @param[out]\tnode_req\t- node request\n *\n * @return\tint\n * @retval 0 on success\n * @retval 1 on failure.\n */\nstatic int\nproplist(char **str, struct prop **plist, struct node_req *node_req)\n{\n\tstruct prop *pp;\n\tchar *pname;\n\tchar *pequal;\n\n\tnode_req->nr_ppn = 1; /* default to 1 process per node */\n\tnode_req->nr_cpp = 1; /* default to 1 cpu per process */\n\tnode_req->nr_np = 1;  /* default to 1 total cpus */\n\n\tfor (;;) {\n\t\tif (property(str, &pname))\n\t\t\treturn 1;\n\t\tif (pname == NULL)\n\t\t\tbreak;\n\n\t\t/* special property */\n\t\tif ((pequal = strchr(pname, (int) '=')) != NULL) {\n\n\t\t\t/* identify the special property and place its value */\n\t\t\t/* into node_req \t\t\t\t\t */\n\t\t\t*pequal = '\\0';\n\t\t\tif (strcasecmp(pname, \"ppn\") == 0) {\n\t\t\t\t/* Processes (tasks) per Node */\n\t\t\t\tpequal++;\n\t\t\t\tif ((number(&pequal, &node_req->nr_ppn, 1) != 0) ||\n\t\t\t\t    (*pequal != '\\0'))\n\t\t\t\t\treturn 1;\n\t\t\t\tnode_req->nr_np = node_req->nr_ppn * node_req->nr_cpp;\n\t\t\t} else if ((strcasecmp(pname, \"cpp\") == 0) ||\n\t\t\t\t   (strcasecmp(pname, \"ncpus\") == 0)) {\n\t\t\t\t/* CPUs (threads) per Process (task) */\n\t\t\t\tpequal++;\n\t\t\t\tif ((number(&pequal, &node_req->nr_cpp, 0) != 0) ||\n\t\t\t\t    (*pequal != '\\0'))\n\t\t\t\t\treturn 1;\n\t\t\t\tnode_req->nr_np = node_req->nr_ppn * node_req->nr_cpp;\n\t\t\t} else {\n\t\t\t\treturn 1; /* not recognized - error */\n\t\t\t}\n\t\t} else {\n\t\t\tpp = (struct prop *) malloc(sizeof(struct prop));\n\t\t\tif (pp == NULL)\n\t\t\t\treturn 1; /* no mem */\n\t\t\tpp->mark = 1;\n\t\t\tif ((pp->name = strdup(pname)) 
== NULL) {\n\t\t\t\tfree(pp);\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t\tpp->next = *plist;\n\t\t\t*plist = pp;\n\t\t}\n\t\tif (**str != ':')\n\t\t\tbreak;\n\t\t(*str)++;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tDo a quick validation of the nodespec\n * @see\n * \t\tset_node_ct\n *\n * @param[in]\tstr\t- nodespec string to be parsed\n *\n * @return\tint\n * @retval\t0\t- success\n * @retval\t>0\t- failure\n */\nint\nvalidate_nodespec(char *str)\n{\n\tint i;\n\tint num = 1;\t\t  /*default: a request for 1 node*/\n\tstruct prop *prop = NULL; /*assume sub-spec calls out no properties */\n\tstruct node_req node_req;\n\t/* first quickly validate the node spec */\n\n\tif (str == NULL)\n\t\treturn PBSE_BADNODESPEC;\n\n\twhile (*str) {\n\n\t\tfree_prop(prop);\n\t\tprop = NULL; /* this is a must */\n\n\t\t/*Determine how many nodes this subspec requests*/\n\n\t\tif ((i = number(&str, &num, 1)) == -1)\n\t\t\treturn PBSE_BADNODESPEC;\n\n\t\t/*Determine properties the node must have and how many processors*/\n\n\t\tif (i == 0) {\t\t   /* subspec specified a number */\n\t\t\tif (*str == ':') { /* subspec is specifying properties */\n\t\t\t\tstr++;\n\t\t\t\tif (proplist(&str, &prop, &node_req)) {\n\t\t\t\t\tfree_prop(prop);\n\t\t\t\t\treturn PBSE_BADNODESPEC;\n\t\t\t\t}\n\t\t\t}\n\t\t} else { /* subspec doesn't specify a number */\n\t\t\tif (proplist(&str, &prop, &node_req)) {\n\t\t\t\tfree_prop(prop);\n\t\t\t\treturn PBSE_BADNODESPEC; /* err in gen of prop list */\n\t\t\t}\n\t\t}\n\n\t\tif (*str == '+')\n\t\t\t++str;\n\t\telse if (*str == '#')\n\t\t\tbreak;\n\t\telse if (*str != '\\0') {\n\t\t\tfree_prop(prop); /* do not leak the prop list on error */\n\t\t\treturn PBSE_BADNODESPEC;\n\t\t}\n\t}\n\tfree_prop(prop);\n\tprop = NULL; /* this is a must */\n\treturn 0;\n}\n\n#define GLOB_SZ 511\n/**\n * @brief\n * \t\tAdd the \"global\" spec to every sub-spec in \"spec\".\n *\n * @param[in,out]\tspec\t- spec to which \"global\" spec needs to be added\n * @param[in]\tglobal\t- which will be copied into every sub-spec in \"spec\".\n *\n * @return a malloc-ed 
copy of the newly modified string.\n * @retval\tNULL\t- error\n *\n * @par MT-safe: No\n */\nstatic char *\nmod_spec(char *spec, char *global)\n{\n\tstatic char *line = NULL;\n\tstatic int line_len = 0;\n\tchar *cp;\n\tint i;\n\tint glen;\n\tint len;\n\n\tif (line_len == 0) {\n\t\tline = (char *) malloc(GLOB_SZ + 1);\n\t\tif (line == NULL)\n\t\t\treturn NULL;\n\t\tline_len = GLOB_SZ;\n\t}\n\n\t/* count number of times the global will be inserted into line */\n\ti = 1;\n\tglen = strlen(global);\n\tcp = spec;\n\twhile ((cp = strchr(cp, (int) '+')) != NULL) {\n\t\ti++;\n\t\tcp++;\n\t}\n\tlen = strlen(spec) + (i * (glen + 1)) + 1;\n\tif (len > line_len) {\n\t\t/* need to expand line */\n\t\tcp = realloc(line, (size_t) len);\n\t\tif (cp == NULL)\n\t\t\treturn NULL;\n\t\tline = cp;\n\t\tline_len = len;\n\t}\n\n\t/* now copy spec into line appending \":global\" at the end of */\n\t/* each segment separated by a \"+\"\t\t\t     */\n\n\tcp = line;\n\twhile (*spec) {\n\t\tif (*spec == '+') {\n\t\t\t*cp++ = ':';\n\t\t\tstrcpy(cp, global);\n\t\t\tcp += glen;\n\t\t}\n\t\t*cp++ = *spec++;\n\t}\n\t*cp++ = ':';\n\tstrcpy(cp, global);\n\n\treturn (strdup(line));\n}\n\n/**\n * @brief\n * \t\tconvert an existing nodespec to the \"matching\" select directive\n *\n * @param[in]\tstr\t- node string\n * @param[in,out]\tcvt_bp\t- is a pointer to the current buffer\n * @param[in,out]\tcvt_lenp\t- is a pointer to the current buffer's length\n * @param[in]\tpattr\t- a list headed in an attribute that points to the specified resource_def structure\n *\n * @return\tint\n * @retval\t0\t- success\n * @retval\t>0\t- pbs error\n * @retval\t-1\t- error while parsing the \"nodes\" specification\n *\n * @par MT-safe: No\n */\nint\ncvt_nodespec_to_select(char *str, char **cvt_bp, size_t *cvt_lenp, attribute *pattr)\n{\n\tint hcpp = 0;\n\tint hmem = 0;\n\tchar *globs;\n\tint i;\n\tu_Long memamt = 0;\n\tint nt;\n\tchar *nspec;\n\tint num = 1; /*default: a request for 1 node*/\n\tstruct node_req 
node_req;\n\tchar *pcvt;\n\tsize_t pcvt_free;\n\tresource *pncpus;\n\tresource *pmem;\n\tstruct prop *prop = NULL; /*assume sub-spec calls out no properties */\n\tint ret = -1;\t\t  /*assume error occurs*/\n\tstruct prop *walkprop;\n\tchar sprintf_buf[BUFSIZ];\n\tresource_def *pncpusdef = NULL;\n\tresource_def *pmemdef = NULL;\n\n\t**cvt_bp = '\\0';\n\tpcvt = *cvt_bp;\n\tpcvt_free = *cvt_lenp;\n\n\tpncpusdef = &svr_resc_def[RESC_NCPUS];\n\tpmemdef = &svr_resc_def[RESC_MEM];\n\n\t/*\n\t * check the local copy of the \"nodes\" specification for any \"global\"\n\t * modifiers.  Re-write the spec copy in expanded form if modifiers\n\t * exist.  Ignore #excl and #shared as they are examined when\n\t * creating the \"place\" directive.\n\t */\n\n\tnspec = strdup(str);\n\tif (nspec == NULL)\n\t\treturn (PBSE_SYSTEM);\n\n\tif ((globs = strchr(nspec, '#')) != NULL) {\n\t\tchar *cp;\n\t\tchar *hold;\n\t\tstatic char *excl = \"excl\";\n\t\tstatic char *shared = \"shared\";\n\n\t\t*globs++ = '\\0';\n\t\tglobs = strdup(globs);\n\t\tif (globs == NULL) {\n\t\t\tfree(nspec);\n\t\t\treturn (PBSE_SYSTEM);\n\t\t}\n\t\twhile ((cp = strrchr(globs, '#')) != NULL) {\n\t\t\t*cp++ = '\\0';\n\t\t\tif ((strcasecmp(cp, excl) != 0) &&\n\t\t\t    (strcasecmp(cp, shared) != 0)) {\n\t\t\t\thold = mod_spec(nspec, cp);\n\t\t\t\tif (hold == NULL) {\n\t\t\t\t\tfree(globs);\n\t\t\t\t\tfree(nspec);\n\t\t\t\t\treturn -1;\n\t\t\t\t}\n\t\t\t\tfree(nspec);\n\t\t\t\tnspec = hold;\n\t\t\t}\n\t\t}\n\t\tif ((strcasecmp(globs, excl) != 0) &&\n\t\t    (strcasecmp(globs, shared) != 0)) {\n\t\t\thold = mod_spec(nspec, globs);\n\t\t\tif (hold == NULL) {\n\t\t\t\tfree(globs);\n\t\t\t\tfree(nspec);\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t\tfree(nspec);\n\t\t\tnspec = hold;\n\t\t}\n\t\tfree(globs);\n\t\tglobs = NULL;\n\t}\n\tstr = nspec; /* work on the copy of the string */\n\n\t/* find the number of cpus specified in the node string */\n\n\tnt = ctcpus(str, &hcpp); /* total number of cpus requested in str */\n\n\t/* Is 
\"ncpus\" set as a separate resource? */\n\n\tif ((pncpus = find_resc_entry(pattr, pncpusdef)) == NULL) {\n\t\tif ((pncpus = add_resource_entry(pattr, pncpusdef)) == NULL) {\n\t\t\tfree(nspec);\n\t\t\treturn (PBSE_SYSTEM);\n\t\t}\n\t}\n\n\tif ((pncpus->rs_value.at_flags & (ATR_VFLAG_SET | ATR_VFLAG_DEFLT)) ==\n\t    ATR_VFLAG_SET) {\n\n\t\tlong nc;\n\n\t\t/* ncpus is already set and not a default */\n\n\t\tnc = pncpus->rs_value.at_val.at_long;\n\t\tif (hcpp && (nt != nc)) {\n\t\t\t/* if cpp string specified, this is an error */\n\t\t\tfree(nspec);\n\t\t\treturn (PBSE_BADATVAL);\n\t\t} else if ((nc % nt) != 0) {\n\t\t\t/* ncpus must be multiple of number of tasks */\n\t\t\tfree(nspec);\n\t\t\treturn (PBSE_BADATVAL);\n\t\t} else if ((hcpp == 0) && ((nc / nt) > 1)) {\n\t\t\t/* append ncpus=(C/T) to each chunk */\n\t\t\tnt = nc / nt;\n\t\t} else\n\t\t\tnt = 1;\n\n\t} else\n\t\tnt = 1;\n\n\t/* How about \"mem\", is it set in the Resource_List */\n\n\tpmem = find_resc_entry(pattr, pmemdef);\n\tif (pmem &&\n\t    (pmem->rs_value.at_flags & (ATR_VFLAG_SET | ATR_VFLAG_DEFLT)) ==\n\t\t    ATR_VFLAG_SET) {\n\t\thmem = 1;\n\t\tmemamt = get_kilobytes_from_attr(&pmem->rs_value) / ctnodes(str);\n\t}\n\n\twhile (*str) {\n\t\tsize_t needed;\n\n\t\tnode_req.nr_ppn = 1;\n\t\tnode_req.nr_cpp = 1;\n\t\tnode_req.nr_np = 1;\n\n\t\tfree_prop(prop);\n\t\tprop = NULL; /* this is a must */\n\n\t\t/*Determine how many nodes this subspec requests*/\n\n\t\tif ((i = number(&str, &num, 1)) == -1) {\n\t\t\tfree(nspec);\n\t\t\tfree_prop(prop);\n\t\t\treturn ret;\n\t\t}\n\n\t\t/*Determine properties the node must have and how many processors*/\n\n\t\tif (i == 0) {\t\t   /* subspec specified a number */\n\t\t\tif (*str == ':') { /* subspec is specifying properties */\n\t\t\t\tstr++;\n\t\t\t\tif (proplist(&str, &prop, &node_req)) {\n\t\t\t\t\tfree(nspec);\n\t\t\t\t\tfree_prop(prop);\n\t\t\t\t\treturn ret;\n\t\t\t\t}\n\t\t\t}\n\t\t} else { /* subspec doesn't specify a number 
*/\n\t\t\tif (proplist(&str, &prop, &node_req)) {\n\t\t\t\tfree(nspec);\n\t\t\t\tfree_prop(prop);\n\t\t\t\treturn ret; /* error in generation of prop list */\n\t\t\t}\n\t\t}\n\n\t\t/* start building the select spec */\n\t\t/* 1.  the number of chunks       */\n\n\t\tsprintf(sprintf_buf, \"%d:\", num);\n\t\tneeded = strlen(sprintf_buf) + 1;\n\t\tif (cvt_overflow(pcvt_free, needed) &&\n\t\t    (cvt_realloc(cvt_bp, cvt_lenp, &pcvt, &pcvt_free) == 0)) {\n\t\t\tfree(nspec);\n\t\t\tfree_prop(prop);\n\t\t\treturn (PBSE_SYSTEM);\n\t\t}\n\t\t(void) memcpy(pcvt, sprintf_buf, needed);\n\t\tpcvt = pcvt + needed - 1; /* advance to NULL byte */\n\t\tpcvt_free -= needed;\n\n\t\t/* 2.  the number of cpus */\n\n\t\tsprintf(sprintf_buf, \"ncpus=%d\", node_req.nr_np * nt);\n\t\tneeded = strlen(sprintf_buf) + 1;\n\t\tif (cvt_overflow(pcvt_free, needed) &&\n\t\t    (cvt_realloc(cvt_bp, cvt_lenp, &pcvt, &pcvt_free) == 0)) {\n\t\t\tfree(nspec);\n\t\t\tfree_prop(prop);\n\t\t\treturn (PBSE_SYSTEM);\n\t\t}\n\t\t(void) memcpy(pcvt, sprintf_buf, needed);\n\t\tpcvt = pcvt + needed - 1; /* advance to NULL byte */\n\t\tpcvt_free -= needed;\n\n\t\t/* 3. the amt of mem, if specified */\n\n\t\tif (hmem) {\n\t\t\tsprintf(sprintf_buf, \":mem=%lluKB\", memamt);\n\t\t\tneeded = strlen(sprintf_buf) + 1;\n\t\t\tif (cvt_overflow(pcvt_free, needed) &&\n\t\t\t    (cvt_realloc(cvt_bp, cvt_lenp, &pcvt, &pcvt_free) == 0)) {\n\t\t\t\tfree(nspec);\n\t\t\t\tfree_prop(prop);\n\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t}\n\t\t\t(void) memcpy(pcvt, sprintf_buf, needed);\n\t\t\tpcvt = pcvt + needed - 1; /* advance to NULL byte */\n\t\t\tpcvt_free -= needed;\n\t\t}\n\n\t\t/* 4. 
now need to see if any property matches a node name */\n\n\t\tfor (walkprop = prop; walkprop; walkprop = walkprop->next) {\n\t\t\tfor (i = 0; i < svr_totnodes; i++) {\n\t\t\t\tif (pbsndlist[i]->nd_state & INUSE_DELETED)\n\t\t\t\t\tcontinue;\n\t\t\t\tif (strcasecmp(pbsndlist[i]->nd_name, walkprop->name) == 0) {\n\t\t\t\t\twalkprop->mark = 0;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\t/* 5. now turn each property into \"property=True\" unless */\n\t\t/* it was a nodename, then it is  \"host=prop\"\t  */\n\n\t\tfor (walkprop = prop; walkprop; walkprop = walkprop->next) {\n\t\t\tif (walkprop->mark)\n\t\t\t\tsnprintf(sprintf_buf, sizeof(sprintf_buf),\n\t\t\t\t\t \":%s=%s\", walkprop->name, ATR_TRUE);\n\t\t\telse\n\t\t\t\tsnprintf(sprintf_buf, sizeof(sprintf_buf),\n\t\t\t\t\t \":host=%s\", walkprop->name);\n\t\t\tneeded = strlen(sprintf_buf) + 1;\n\t\t\tif (cvt_overflow(pcvt_free, needed) &&\n\t\t\t    (cvt_realloc(cvt_bp, cvt_lenp, &pcvt, &pcvt_free) == 0)) {\n\t\t\t\tfree(nspec);\n\t\t\t\tfree_prop(prop);\n\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t}\n\t\t\t(void) memcpy(pcvt, sprintf_buf, needed);\n\t\t\tpcvt = pcvt + needed - 1; /* advance to NULL byte */\n\t\t\tpcvt_free -= needed;\n\t\t}\n\n\t\t/* 6. 
if nr_ppn != 1, add mpiprocs=nr_ppn */\n\t\tif (node_req.nr_ppn != 1) {\n\t\t\tsprintf(sprintf_buf, \":mpiprocs=%d\", node_req.nr_ppn);\n\t\t\tneeded = strlen(sprintf_buf) + 1;\n\t\t\tif (cvt_overflow(pcvt_free, needed) &&\n\t\t\t    (cvt_realloc(cvt_bp, cvt_lenp, &pcvt, &pcvt_free) == 0)) {\n\t\t\t\tfree(nspec);\n\t\t\t\tfree_prop(prop);\n\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t}\n\t\t\t(void) memcpy(pcvt, sprintf_buf, needed);\n\t\t\tpcvt = pcvt + needed - 1; /* advance to NULL byte */\n\t\t\tpcvt_free -= needed;\n\t\t}\n\n\t\tif (*str == '+') {\n\t\t\t++str;\n\t\t\tneeded = 2; /* 2 = strlen(\"+\") + 1 */\n\t\t\tif (cvt_overflow(pcvt_free, needed) &&\n\t\t\t    (cvt_realloc(cvt_bp, cvt_lenp, &pcvt, &pcvt_free) == 0)) {\n\t\t\t\tfree(nspec);\n\t\t\t\tfree_prop(prop);\n\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t}\n\t\t\t(void) memcpy(pcvt, \"+\", needed);\n\t\t\tpcvt = pcvt + needed - 1;\n\t\t\tpcvt_free -= needed;\n\t\t} else\n\t\t\tbreak;\n\t}\n\tfree(nspec);\n\tfree_prop(prop);\n\treturn 0;\n}\n\n#define CVT_PAD 256 /* if less than this much free space, get more */\n\n/**\n * @brief\n * \t\tis there room in this buffer or should we allocate more?\n *\n * @param[in] buflen is the current buffer's length\n * @param[in] needed is the amount of data we wish to append\n *\n * @return\tint\n * @retval\t0\t- success\n * @retval\t1\t- overflow\n */\nstatic int\ncvt_overflow(size_t buflen, size_t needed)\n{\n\tif ((needed > buflen) || ((buflen - needed) < CVT_PAD))\n\t\treturn 1;\n\telse\n\t\treturn 0;\n}\n\n/**\n * @brief\n * \t\tallocate more room in bufptr.\n *\n * @param[in,out] bp is a pointer to the current buffer\n * @param[in,out] bplen is a pointer to the current buffer's length\n * @param[in,out] curbp is the current pointer into the buffer\n * @param[in,out] bpfree is a pointer to the amount of free space in the\n * \t\t\t\t\tcurrent buffer\n *\n * @return\tint\n * @retval\t1\t- success\n * @retval\t0\t- failure\n */\nstatic int\ncvt_realloc(char **bp, size_t *bplen, 
char **curbp, size_t *bpfree)\n{\n\tchar *newbp;\n\tsize_t realloc_incr = *bplen;\n\tptrdiff_t curoffset = *curbp - *bp;\n\n\tif ((newbp = realloc(*bp, *bplen + realloc_incr)) == NULL) {\n\t\treturn 0;\n\t} else {\n\t\t*bp = newbp;\n\t\t*bplen += realloc_incr;\n\t\t*bpfree += realloc_incr;\n\t\t*curbp = newbp + curoffset;\n\t\treturn 1;\n\t}\n}\n\n#define JBINXSZ_GROW 16\n/**\n * @brief\n * \t\tadd a job pointer into the index array of a mominfo_t.\n *\n * @par\n *\t\tThe index of the entry is used in the exec_host string following the\n *\t\tslash character to be unique for each job running on that Mom\n *\n * @param[in,out]\tpnode\t- pbsnode structure\n * @param[in]\tpjob\t- a job pointer\n *\n * @return\tint\n * @retval\t>=0\t- index in which the job got added\n * @retval\t-1\t- could not realloc memory for adding job index\n */\nstatic int\nadd_job_index_to_mom(struct pbsnode *pnode, job *pjob)\n{\n\tint i;\n\tsize_t newn;\n\tsize_t oldn;\n\tjob **pnew;\n\tmom_svrinfo_t *psm;\n\n\tpsm = (mom_svrinfo_t *) ((pnode->nd_moms[0])->mi_data);\n\n\t/* see if there is an empty slot in the array */\n\n\tfor (i = 0; i < psm->msr_jbinxsz; i++) {\n\t\tif (psm->msr_jobindx[i] == NULL) {\n\t\t\tpsm->msr_jobindx[i] = pjob;\n\t\t\treturn i;\n\t\t}\n\t}\n\n\t/* didn't find an empty slot, need to expand array */\n\n\toldn = psm->msr_jbinxsz;\n\tnewn = oldn + JBINXSZ_GROW;\n\n\tpnew = realloc(psm->msr_jobindx, sizeof(struct job *) * newn);\n\tif (pnew == NULL) {\n\t\tlog_err(PBSE_SYSTEM, \"add_job_index_to_mom\",\n\t\t\t\"could not realloc memory for adding job index\");\n\t\treturn -1;\n\t}\n\tfor (i = oldn; i < newn; i++)\n\t\tpnew[i] = NULL;\n\tpsm->msr_jobindx = pnew;\n\tpsm->msr_jbinxsz = newn;\n\tpsm->msr_jobindx[oldn] = pjob;\n\treturn oldn;\n}\n\n/**\n * @brief\n * \t\tadd a job pointer into the index array of a mominfo_t.\n *\n * @par\n *\t\tusing a known, old, slot number.   Used to restore the index for a\n *\t\trunning job on server recovery.   
If for some reason the correct slot\n *\t\tis in use by a different job, slot -1 is returned.\n *\n * @param[in,out]\tpnode\t- pbsnode structure\n * @param[in]\tpjob\t- a job pointer\n * @param[in]\tslot\t- slot into which job needs to be inserted.\n *\n * @return\tint\n * @retval\t>=0\t- slot where job is inserted.\n * @retval\t-1\t- already in use or slot doesn't exist.\n */\nstatic int\nset_old_job_index(struct pbsnode *pnode, job *pjob, int slot)\n{\n\tint i;\n\tjob **pnew;\n\tmom_svrinfo_t *psm;\n\n\tpsm = (mom_svrinfo_t *) ((pnode->nd_moms[0])->mi_data);\n\n\t/* see if the slot exists in the array */\n\n\tif (slot >= psm->msr_jbinxsz) {\n\t\tsize_t oldn;\n\t\tsize_t newn;\n\n\t\t/* slot doesn't exist, need to expand array */\n\n\t\toldn = psm->msr_jbinxsz;\n\t\tnewn = slot + JBINXSZ_GROW;\n\n\t\tpnew = realloc(psm->msr_jobindx, sizeof(struct job *) * newn);\n\t\tif (pnew == NULL) {\n\t\t\tlog_err(PBSE_SYSTEM, \"set_old_job_index\",\n\t\t\t\t\"could not realloc memory for adding job index\");\n\t\t\treturn -1;\n\t\t}\n\t\tfor (i = oldn; i < newn; i++)\n\t\t\tpnew[i] = NULL;\n\t\tpsm->msr_jobindx = pnew;\n\t\tpsm->msr_jbinxsz = newn;\n\t}\n\t/* if slot is empty, or already this job, use slot */\n\tif ((psm->msr_jobindx[slot] == NULL) || (psm->msr_jobindx[slot] == pjob))\n\t\tpsm->msr_jobindx[slot] = pjob;\n\telse\n\t\tslot = -1; /* already in use */\n\n\treturn slot;\n}\n\n#define OUTBUF_SZ 200\n/**\n * @brief\n * \t\tbuild an exec_vnode string when the operator only provided a list of\n * \t\tnodes.\n *\n * @par\n * \t\tFrom the select spec, assign each\n *\t\tchunk on a round-robin basis to the nodes given as the destination.\n *\n *\t\tThis may very well overload some nodes or end up with chunks on nodes\n *\t\ton which they do not belong; the operator must be aware.\n *\n * @param[in]\tpjob\t- a job pointer\n * @param[in]\tnds\t\t- list of nodes\n *\n * @return\tchar *\n * @retval\tbuilt string\t- success\n * @retval\tNULL\t- failure\n *\n * @par 
MT-safe: No\n */\nstatic char *\nbuild_execvnode(job *pjob, char *nds)\n{\n\tint i;\n\tint j;\n\tsize_t ns;\n\tchar **ndarray;\n\tint nnodes = 0;\n\tlong nchunks;\n\tchar *pc;\n\tchar *psl;\n\tint rc;\n\tattribute *pschedselect;\n\tchar *selspec;\n\tstatic size_t outbufsz = 0;\n\tstatic char *outbuf = NULL;\n\n\tif (!pjob || !nds)\n\t\treturn NULL;\n\n\tpschedselect = get_jattr(pjob, JOB_ATR_SchedSelect);\n\tif (!is_attr_set(pschedselect))\n\t\treturn (nds);\n\n\tselspec = pschedselect->at_val.at_str;\n\n\tif (outbufsz == 0) {\n\t\toutbufsz = OUTBUF_SZ;\n\t\toutbuf = malloc(outbufsz);\n\t\tif (outbuf == NULL) {\n\t\t\tlog_err(ENOMEM, \"build_execvnode\", \"out of memory\");\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\t/* break the \"plus-ed\" list of nodes into an array */\n\n\tnnodes = 1;\n\tpc = nds;\n\twhile ((pc = strchr(pc, (int) '+'))) {\n\t\tnnodes++;\n\t\tpc++;\n\t}\n\tndarray = (char **) malloc(nnodes * sizeof(char *));\n\tif (ndarray == NULL)\n\t\treturn NULL;\n\tmemset(ndarray, 0, nnodes * sizeof(char *));\n\n\ti = 0;\n\tpc = parse_plus_spec(nds, &rc);\n\n\twhile (pc) {\n\t\tif ((*(ndarray + i) = strdup(pc)) == NULL) {\n\t\t\trc = errno;\n\t\t\tbreak;\n\t\t}\n\t\tpsl = strchr(*(ndarray + i), (int) '/');\n\t\tif (psl)\n\t\t\t*psl = '\\0';\n\t\t++i;\n\t\tpc = parse_plus_spec(NULL, &rc);\n\t}\n\n\t*outbuf = '\\0';\n\n\t/*\n\t * if the number of nodes identified for ndarray (nnodes) is not equal\n\t * to the number of nodes identified by parse_plus_spec, then\n\t * the vnode specification is invalid.\n\t */\n\n\tif (rc || i != nnodes)\n\t\tgoto done;\n\n\t/* now loop breaking up the select spec into separate chunks */\n\t/* and determining how many times each chunk is to be used   */\n\n\ti = 0;\n\tpc = parse_plus_spec(selspec, &rc);\n\twhile (pc) {\n\t\tnchunks = strtol(pc, &pc, 10);\n\t\tif (nchunks <= 0)\n\t\t\tnchunks = 1;\n\n\t\tfor (j = 0; j < nchunks; ++j) {\n\n\t\t\tns = strlen(*(ndarray + i)) + strlen(pc) + 2;\n\t\t\tif 
((strlen(outbuf) + ns) > outbufsz) {\n\t\t\t\tchar *tmp;\n\t\t\t\tsize_t newsz;\n\n\t\t\t\tif (ns > OUTBUF_SZ)\n\t\t\t\t\tnewsz = outbufsz + ns;\n\t\t\t\telse\n\t\t\t\t\tnewsz = outbufsz + OUTBUF_SZ;\n\t\t\t\ttmp = realloc(outbuf, newsz);\n\t\t\t\tif (tmp) {\n\t\t\t\t\toutbuf = tmp;\n\t\t\t\t\toutbufsz = newsz;\n\t\t\t\t} else {\n\t\t\t\t\trc = PBSE_SYSTEM;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tstrcat(outbuf, *(ndarray + i));\n\t\t\tif (*pc != ':')\n\t\t\t\tstrcat(outbuf, \":\");\n\t\t\tstrcat(outbuf, pc);\n\t\t\tstrcat(outbuf, \"+\");\n\n\t\t\tif (++i >= nnodes)\n\t\t\t\ti = 0;\n\t\t}\n\t\tpc = parse_plus_spec(NULL, &rc);\n\t}\n\t*(outbuf + strlen(outbuf) - 1) = '\\0'; /* remove trailing '+' */\ndone:\n\t/* it is safe to free <ndarray> up to <nnodes> entries as it was memset'd */\n\tfor (i = 0; i < nnodes; ++i)\n\t\tfree(*(ndarray + i));\n\tfree(ndarray);\n\tndarray = NULL;\n\tif (rc)\n\t\treturn NULL;\n\telse\n\t\treturn (outbuf);\n}\n\n/**\n * @brief\n * \t\tfrom the parent Moms of a vnode, return the one that is up and has the fewest jobs\n *\n * @param[in]\tpnode\t- vnode\n * @param[in]\tpcur_mom\t- former parent Mom\n *\n * @return\tmominfo_t *\n * @retval\trtnmom\t- the mom to use, if one is up\n * @retval\tNULL\t- if all are down/offline\n */\nstatic mominfo_t *\nwhich_parent_mom(pbsnode *pnode, mominfo_t *pcur_mom)\n{\n\tint i;\n\tint nj = 0;\n\tmominfo_t *pmom;\n\tmom_svrinfo_t *psvrmom;\n\tmominfo_t *rtnmom;\n\n\t/*\n\t * If we have a Mom parent of the prior vnode, pcur_mom is not NULL,\n\t * continue to use same parent if she is also a parent of this vnode.\n\t */\n\n\tif (pcur_mom != NULL) {\n\t\tfor (i = 0; i < pnode->nd_nummoms; ++i) {\n\t\t\tif (pcur_mom == pnode->nd_moms[i])\n\t\t\t\treturn (pcur_mom); /* use same as before */\n\t\t}\n\t}\n\n\t/* no former parent Mom or she is not a parent of this vnode,\n\t * find the \"least busy\" Mom parent of this vnode\n\t */\n\n\trtnmom = NULL; /* what will be returned if all are down/offline 
*/\n\n\tfor (i = 0; i < pnode->nd_nummoms; ++i) {\n\t\tpmom = pnode->nd_moms[i];\n\t\tpsvrmom = (mom_svrinfo_t *) pmom->mi_data;\n\n\t\t/* if first mom or mom with fewer jobs, go with her for now */\n\t\tif (((pmom->mi_dmn_info->dmn_state & (INUSE_DOWN | INUSE_OFFLINE | INUSE_OFFLINE_BY_MOM)) == 0) &&\n\t\t    ((psvrmom->msr_children[0]->nd_state & (INUSE_DOWN | INUSE_OFFLINE | INUSE_OFFLINE_BY_MOM)) == 0)) {\n\t\t\t/* this mom/natural-vnode is not down nor offline */\n\t\t\tif ((rtnmom == NULL) || (nj > psvrmom->msr_numjobs)) {\n\t\t\t\tnj = psvrmom->msr_numjobs;\n\t\t\t\trtnmom = pmom;\n\t\t\t}\n\t\t}\n\t}\n\treturn (rtnmom);\n}\n\n/**\n * @brief assign jobs on each subnode of a node\n *\n * A subnode is a structure that corresponds to each cpu within a node.\n * Assign the jobid to subnodes based on the hw_ncpus count. Subnodes will\n * be created as needed for the jobs if svr_init is TRUE.\n *\n * @param[in,out] pnode - node where jobs need to be assigned\n * @param[in] hw_ncpus - number of cpus requested by the job\n * @param[in] jobid - job id going to land on the node\n * @param[in] svr_init - happens during server initialization?\n * @param[in] share_job - job sharing type\n * @return int\n * @retval 0 - success\n * @retval PBSE_* - failure\n */\nstatic int\nassign_jobs_on_subnode(struct pbsnode *pnode, int hw_ncpus, char *jobid, int svr_init, int share_job)\n{\n\tstruct pbssubn *snp;\n\tstruct jobinfo *jp;\n\tint rc = 0;\n\n\tif ((svr_init == FALSE) && (pnode->nd_state & INUSE_JOBEXCL)) {\n\t\t/* allocate node only if it is not occupied by other jobs */\n\t\tfor (snp = pnode->nd_psn; snp; snp = snp->next) {\n\t\t\tfor (jp = snp->jobs; jp; jp = jp->next) {\n\t\t\t\tif (strcmp(jp->jobid, jobid))\n\t\t\t\t\treturn PBSE_RESCUNAV;\n\t\t\t}\n\t\t}\n\t}\n\n\tsnp = pnode->nd_psn;\n\tif (hw_ncpus == 0) {\n\t\t/* setup jobinfo structure */\n\t\tjp = (struct jobinfo *) malloc(sizeof(struct jobinfo));\n\t\tif (jp) {\n\t\t\tjp->next = snp->jobs;\n\t\t\tjp->has_cpu = 0; /* has no cpus allocated 
*/\n\t\t\tsnp->jobs = jp;\n\t\t\tjp->jobid = strdup(jobid);\n\t\t\tif (!jp->jobid)\n\t\t\t\trc = PBSE_SYSTEM;\n\t\t} else\n\t\t\trc = PBSE_SYSTEM;\n\n\t} else {\n\t\tstruct pbssubn *lst_sn;\n\t\tint ncpus;\n\n\t\tlst_sn = NULL;\n\t\tfor (ncpus = 0; ncpus < hw_ncpus; ncpus++) {\n\n\t\t\twhile (snp->inuse != INUSE_FREE) {\n\t\t\t\tif (snp->next)\n\t\t\t\t\tsnp = snp->next;\n\t\t\t\telse if (svr_init == TRUE) {\n\t\t\t\t\t/*\n\t\t\t\t\t * Server is in the process of recovering jobs at\n\t\t\t\t\t * start up. Haven't contacted the Moms yet, so\n\t\t\t\t\t * unsure about the number of cpus.  So add as many\n\t\t\t\t\t * subnodes as needed to hold all of the job chunks\n\t\t\t\t\t * which were allocated to the node.\n\t\t\t\t\t */\n\t\t\t\t\tif ((snp = create_subnode(pnode, lst_sn)) == NULL) {\n\t\t\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\t\t\t\t} else\n\t\t\t\t\tbreak; /* if last subnode, use it even if in use */\n\t\t\t}\n\n\t\t\tif (share_job == VNS_FORCE_EXCL)\n\t\t\t\tsnp->inuse |= INUSE_JOBEXCL;\n\t\t\telse\n\t\t\t\tsnp->inuse |= INUSE_JOB;\n\n\t\t\tpnode->nd_nsnfree--;\n\t\t\t/*\n\t\t\t * Store the last subnode of parent node list.\n\t\t\t * This removes the need to find the last node of\n\t\t\t * parent node's list, in create_subnode().\n\t\t\t */\n\t\t\tlst_sn = snp;\n\t\t\tif (pnode->nd_nsnfree < 0) {\n\t\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE,\n\t\t\t\t\t  LOG_ALERT, pnode->nd_name,\n\t\t\t\t\t  \"free CPU count went negative on node\");\n\t\t\t}\n\n\t\t\t/* setup jobinfo structure */\n\t\t\tjp = (struct jobinfo *) malloc(sizeof(struct jobinfo));\n\t\t\tif (jp) {\n\t\t\t\tjp->next = snp->jobs;\n\t\t\t\tjp->has_cpu = 1; /* has a cpu allocated */\n\t\t\t\tsnp->jobs = jp;\n\t\t\t\tjp->jobid = strdup(jobid);\n\t\t\t\tif (!jp->jobid) {\n\t\t\t\t\trc = PBSE_SYSTEM;\n\t\t\t\t\tgoto end;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\trc = PBSE_SYSTEM;\n\t\t\t\tgoto end;\n\t\t\t}\n\n\t\t\tDBPRT((\"set_node: node: %s/%ld to job %s, still free: 
%ld\\n\",\n\t\t\t       pnode->nd_name, snp->index, jobid,\n\t\t\t       pnode->nd_nsnfree))\n\t\t}\n\t}\n\nend:\n\tif (rc == PBSE_SYSTEM)\n\t\tlog_errf(rc, __func__, \"Failed to allocate memory!\");\n\treturn rc;\n}\n\n/**\n * @brief update node state based on the job sharing type\n * and node sharing type. For instance:\n * node-state is set to exclusive if either of them are exclusive.\n * \n * @param[in,out] pnode - node for which state is updated\n * @param[in] share_job - job sharing type\n */\nstatic void\nupdate_node_state(struct pbsnode *pnode, int share_job)\n{\n\tint share_node = get_nattr_long(pnode, ND_ATR_Sharing);\n\n\tif (share_node == (int) VNS_FORCE_EXCL || share_node == (int) VNS_FORCE_EXCLHOST) {\n\t\tset_vnode_state(pnode, INUSE_JOBEXCL, Nd_State_Or);\n\t} else if (share_node == VNS_IGNORE_EXCL) {\n\t\tif (pnode->nd_nsnfree <= 0)\n\t\t\tset_vnode_state(pnode, INUSE_JOB, Nd_State_Or);\n\t\telse\n\t\t\tset_vnode_state(pnode, ~(INUSE_JOB | INUSE_JOBEXCL), Nd_State_And);\n\t} else if (share_node == VNS_DFLT_EXCL || share_node == VNS_DFLT_EXCLHOST) {\n\t\tif (share_job == VNS_IGNORE_EXCL) {\n\t\t\tif (pnode->nd_nsnfree <= 0)\n\t\t\t\tset_vnode_state(pnode, INUSE_JOB, Nd_State_Or);\n\t\t\telse\n\t\t\t\tset_vnode_state(pnode, ~(INUSE_JOB | INUSE_JOBEXCL), Nd_State_And);\n\t\t} else {\n\t\t\tset_vnode_state(pnode, INUSE_JOBEXCL, Nd_State_Or);\n\t\t}\n\t} else if (share_job == VNS_FORCE_EXCL) {\n\t\tset_vnode_state(pnode, INUSE_JOBEXCL, Nd_State_Or);\n\t} else if (pnode->nd_nsnfree <= 0) {\n\t\tset_vnode_state(pnode, INUSE_JOB, Nd_State_Or);\n\t} else {\n\t\tset_vnode_state(pnode, ~(INUSE_JOB | INUSE_JOBEXCL), Nd_State_And);\n\t}\n}\n\n/**\n * @brief Determines job sharing type\n * Job sharing type is determined based on the job placement directive\n * which will be in the form of job's resource.\n * \n * @param[in] pjob - job struct\n * @return int\n * @retval enum vnode_sharing\n */\nint\nget_job_share_type(struct job *pjob)\n{\n\tattribute *patresc; 
/* ptr to job/resv resource_list */\n\tpatresc = get_jattr(pjob, JOB_ATR_resource);\n\tint share_job = VNS_UNSET;\n\tresource *pplace;\n\tresource_def *prsdef;\n\n\tprsdef = &svr_resc_def[RESC_PLACE];\n\tpplace = find_resc_entry(patresc, prsdef);\n\tif (pplace && pplace->rs_value.at_val.at_str) {\n\t\tif ((place_sharing_type(pplace->rs_value.at_val.at_str,\n\t\t\t\t\tVNS_FORCE_EXCLHOST) != VNS_UNSET) ||\n\t\t    (place_sharing_type(pplace->rs_value.at_val.at_str,\n\t\t\t\t\tVNS_FORCE_EXCL) != VNS_UNSET)) {\n\t\t\tshare_job = VNS_FORCE_EXCL;\n\t\t} else if (place_sharing_type(pplace->rs_value.at_val.at_str,\n\t\t\t\t\t      VNS_IGNORE_EXCL) == VNS_IGNORE_EXCL)\n\t\t\tshare_job = VNS_IGNORE_EXCL;\n\t\telse\n\t\t\tshare_job = VNS_DFLT_SHARED;\n\t}\n\n\treturn share_job;\n}\n\n#define EHBUF_SZ 500\n/**\n * @brief\n *\t \tset_nodes - take the node plus resource spec from the scheduler or\n *\t\toperator and allocate the named nodes internally.\n *\n * @par Functionality:\n *\n *\t\tTakes the node plus resource spec from the scheduler or operator and\n *\t\tallocate the named nodes internally.  If the operator only provides\n *\t\ta list of nodes,  we attempt to associate the resource chunks from the\n *\t\tselect spec with the nodes, see build_execvnode().\n *\n *\t\t\"mk_new_host\" set true (non-zero) directs that (1) a new exec_host\n *\t\tstring should be created and returned and the job should be added to\n *\t\tthe job index array on each Mom,  or if false the existing exec_host\n *\t\tstring should be used to reset the job_index array on the Moms to\n *\t\tthe indices already listed in the existing exec_host.\n *\n *\t\tThe job index array is used to provide a \"unique\" number for each chunk\n *\t\ton a given Mom.  
This appears in the exec_host string following the \"/\"\n *\t\tand was used by Mom on an IBM SP to set the switch interface; it is\n *\t\tcurrently maintained only for backward compatibility.\n *\n *\t\tOn a non-error (zero) exit, \"execvnod_out\" is set to point to either\n *\t\tthe original or possibly modified exec_vnode string.\n *\n *\t\tOn a non-error exit, \"hoststr\" is set to point to a new exec_host\n *\t\tstring if \"mk_new_host\" is true or left pointing to the original\n *\t\texec_host string if \"mk_new_host\" is false.\n *\n *\t\texecvnod_out and hoststr should NOT be freed as they point\n *\t\teither to the original strings or a string living in a static buffer.\n *\n *\t\t\"svr_init\" is only set to TRUE when the server is recovering running\n *\t\tjobs on startup. This flag tells the function to ignore certain\n *\t\terrors, such as:\n *\t   \t- unknown resources\n *\t\t\tIt is possible that a resource definition has been removed;\n *\t\t\twe still wish to have the job show up on the nodes, so ignore\n *\t\t\tthis error.\n *\t   \t- unlicensed nodes\n *\t\t\tOn initialization, the nodes have not yet been licensed, and\n *\t\t\tsince they may use fixed licenses, ignore this step.\n *\t   \t- Job exclusive allocation\n *\t\t\tSince the node was assigned to the job, just reassign it\n *\t\t\twithout this check.\n *\n * @param[in]\tpobj         -  pointer to an object, either job or reservation\n * @param[in]\tobjtype      -  set to JOB_OBJECT if pobj points to a job,\n *                              otherwise pobj points to a reservation object\n * @param[in]\texecvnod_in  -  original vnode list from scheduler/operator\n * @param[out]\texecvnod_out -  original or modified list of vnodes and\n *                              resources, becomes exec_vnode value.\n * @param[in]\thoststr      -  original or modified exec_host string, see\n *                              mk_new_host.\n * @param[in]\thoststr2      - original or modified exec_host2 
string\n *\n * @param[in]\tmk_new_host  -  if True (non-zero), this function is to create\n *                              a new hoststr including new job indices,\n *                              otherwise return existing exec_host unchanged.\n * @param[in]\tsvr_init     -  if True, server is recovering jobs.\n *\n * @return\tint\n * @retval\tPBSE_NONE : success\n * @retval\tnon-zero  : various PBSE error returns.\n *\n * @par Side Effects: None\n *\n * @par MT-safe: No\n */\nint\nset_nodes(void *pobj, int objtype, char *execvnod_in, char **execvnod_out, char **hoststr, char **hoststr2, int mk_new_host, int svr_init)\n{\n\tchar *chunk;\n\tint setck;\n\tchar *execvncopy;\n\tint hasprn; /* set if chunk grouped in parentheses */\n\tint hostcpus;\n\tint i;\n\tchar *last;\n\tchar *execvnod = NULL;\n\tint ndindex;\n\tint nelem;\n\tmominfo_t *parentmom;\n\tmominfo_t *parentmom_first = NULL;\n\tchar *peh = NULL;\n\tchar *pehnxt = NULL;\n\tjob *pjob = NULL;\n\tchar *pc;\n\tchar *pc2;\n\tint share_job = VNS_UNSET;\n\tchar *vname;\n\tresc_resv *presv = NULL;\n\tint tc; /* num of nodes being allocated  */\n\tstruct pbsnode *pnode;\n\tstruct key_value_pair *pkvp;\n\tstruct howl {\n\t\tpbsnode *hw_pnd;   /* ptr to node */\n\t\tpbsnode *hw_natvn; /* pointer to \"natural\" vnode */\n\t\tchar *hw_mom_host;\n\t\tint hw_mom_port;\n\t\tmominfo_t *hw_mom;\n\t\tint hw_ncpus; /* num of cpus needed from this node */\n\t\tint hw_chunk; /* non-zero if start of a chunk      */\n\t\tint hw_index; /* index of job on Mom if hw_chunk   */\n\t\tint hw_htcpu; /* sum of cpus on this Mom, hw_chunk */\n\t} * phowl;\n\tstatic size_t ehbufsz = 0;\n\tstatic size_t ehbufsz2 = 0;\n\tstatic char *ehbuf = NULL;\n\tstatic char *ehbuf2 = NULL;\n\tint rc = 0;\n\n\tif (ehbufsz == 0) {\n\t\t/* allocate the basic buffer for exec_host string */\n\t\tehbuf = (char *) malloc(EHBUF_SZ);\n\t\tif (ehbuf == NULL)\n\t\t\treturn (PBSE_SYSTEM);\n\t\tehbufsz = EHBUF_SZ;\n\t}\n\n\tif (ehbufsz2 == 0) {\n\t\t/* allocate the 
basic buffer for exec_host string */\n\t\tehbuf2 = (char *) malloc(EHBUF_SZ);\n\t\tif (ehbuf2 == NULL)\n\t\t\treturn (PBSE_SYSTEM);\n\t\tehbufsz2 = EHBUF_SZ;\n\t}\n\n\tif (objtype == JOB_OBJECT) {\n\n\t\tpjob = (job *) pobj;\n\n\t\tif (execvnod_in == NULL) {\n\t\t\texecvnod_in = *hoststr;\n\t\t}\n\t\tif (strchr(execvnod_in, (int) ':') == NULL) {\n\t\t\t/* need to take node only list and build a pseudo */\n\t\t\t/* exec_vnode string with the resources included  */\n\t\t\texecvnod = build_execvnode(pjob, execvnod_in);\n\t\t} else {\n\t\t\texecvnod = execvnod_in;\n\t\t}\n\t\tif (execvnod == NULL)\n\t\t\treturn PBSE_BADNODESPEC;\n\n\t\tif (!strlen(execvnod)) {\n\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t   pjob->ji_qs.ji_jobid, \"Unknown node received\");\n\t\t\treturn PBSE_UNKNODE;\n\t\t}\n\n\t\t/* are we to allocate the nodes \"excl\" ? */\n\t\tshare_job = get_job_share_type(pjob);\n\n\t} else if (objtype == RESC_RESV_OBJECT) {\n\t\tpresv = (resc_resv *) pobj;\n\t\texecvnod = execvnod_in;\n\t}\n\n\t/* first count the number of vnodes */\n\n\ttc = 1;\n\tpc = execvnod;\n\n\twhile ((pc = strchr(pc, (int) '+')) != NULL) {\n\t\t++tc;\n\t\tpc++;\n\t}\n\n\t/* allocate an howl array to hold info about allocated nodes */\n\n\tphowl = (struct howl *) malloc(tc * sizeof(struct howl));\n\tif (phowl == NULL)\n\t\treturn (PBSE_SYSTEM);\n\n\tndindex = 0;\n\n\t/* parse the exec_vnode string into a string of chunks and */\n\t/* then parse each chunk for the required resources        */\n\n\texecvncopy = strdup(execvnod);\n\tif (execvncopy == NULL) {\n\t\trc = PBSE_SYSTEM;\n\t\tgoto end;\n\t}\n\n\tif (mk_new_host == 0) {\n\t\tif (hoststr && *hoststr)\n\t\t\tpeh = *hoststr; /* use old exec_host to redo index arrays */\n\t\telse\n\t\t\tpeh = \"\"; /* a dummy null string */\n\t}\n\tpehnxt = peh;\n\n\tsetck = 1; /* set flag to indicate likely end of chunk */\n\t/* therefore next entry is start of new chunk */\n\thostcpus = 0; /* number of cpus from all vnodes 
on host   */\n\t/* from which chunk was taken\t\t      */\n\n\tparentmom = NULL; /* use for multi-mom vnodes\t\t      */\n\n\t/* note: hasprn is set based on finding '(' or ')'\n\t *\t> 0 = found '(' at start of substring\n\t *\t= 0 = no parens or found both in one substring\n\t *\t< 0 = found ')' at end of substring\n\t */\n\n\tfor (chunk = parse_plus_spec_r(execvncopy, &last, &hasprn);\n\t     chunk; chunk = parse_plus_spec_r(last, &last, &hasprn)) {\n\n\t\tif (parse_node_resc(chunk, &vname, &nelem, &pkvp) == 0) {\n\t\t\tif ((pnode = find_nodebyname(vname)) == NULL) {\n\t\t\t\tif (objtype == JOB_OBJECT) {\n\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t\t   pjob->ji_qs.ji_jobid, \"Unknown node %s received\", vname);\n\t\t\t\t} else if (objtype == RESC_RESV_OBJECT)\n\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_RESV, LOG_INFO,\n\t\t\t\t\t\t   presv->ri_qs.ri_resvID, \"Unknown node %s received\", vname);\n\t\t\t\tfree(execvncopy);\n\t\t\t\trc = PBSE_UNKNODE;\n\t\t\t\tgoto end;\n\t\t\t}\n\n\t\t\tif ((pnode->nd_state & VNODE_UNAVAILABLE) && (svr_init == FALSE))\n\t\t\t\tif ((objtype == RESC_RESV_OBJECT) && (presv->ri_qs.ri_resvID[0] != PBS_MNTNC_RESV_ID_CHAR) /*&& (presv->ri_qs.ri_state == RESV_UNCONFIRMED)*/)\n\t\t\t\t\tset_resv_for_degrade(pnode, presv);\n\n\t\t\tif (pjob != NULL) { /* only for jobs do we warn if a mom */\n\t\t\t\t/* hook has not been sent */\n\t\t\t\tfor (i = 0; i < pnode->nd_nummoms; ++i) {\n\n\t\t\t\t\tif ((pnode->nd_moms[i] != NULL) &&\n\t\t\t\t\t    (sync_mom_hookfiles_count(pnode->nd_moms[i]) > 0)) {\n\t\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t\t \"vnode %s's parent mom %s:%d has a pending copy hook or delete hook request\", pnode->nd_name, pnode->nd_moms[i]->mi_host,\n\t\t\t\t\t\t\t pnode->nd_moms[i]->mi_port);\n\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE,\n\t\t\t\t\t\t\t  LOG_WARNING, pjob->ji_qs.ji_jobid, 
log_buffer);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t(phowl + ndindex)->hw_pnd = pnode;\n\t\t\t(phowl + ndindex)->hw_ncpus = 0;\n\t\t\t(phowl + ndindex)->hw_chunk = setck;\n\t\t\t(phowl + ndindex)->hw_index = -1; /* will fill in later */\n\t\t\t(phowl + ndindex)->hw_htcpu = 0;\n\t\t\tif (setck == 1) { /* start of new chunk on host */\n\t\t\t\tif (mk_new_host) {\n\n\t\t\t\t\t/* look up \"natural\" vnode name for either 'the Mom' */\n\t\t\t\t\t/* or 'a Mom' for the real vnode.  This is used in   */\n\t\t\t\t\t/* the exec_host string                              */\n\t\t\t\t\tif (pnode->nd_nummoms > 1) { /* multi-mom */\n\t\t\t\t\t\tparentmom = which_parent_mom(pnode, parentmom);\n\t\t\t\t\t\tif (parentmom == NULL) {\n\t\t\t\t\t\t\t/* cannot find a Mom that works */\n\t\t\t\t\t\t\tfree(execvncopy);\n\t\t\t\t\t\t\trc = PBSE_SYSTEM;\n\t\t\t\t\t\t\tgoto end;\n\t\t\t\t\t\t}\n\t\t\t\t\t\t/*\n\t\t\t\t\t\t * save the \"first\" allocated Mom for incr\n\t\t\t\t\t\t * the count of jobs on that Mom; used in\n\t\t\t\t\t\t * load-balancing across multi-Mom vnodes\n\t\t\t\t\t\t * [i.e. 
in a Cray]\n\t\t\t\t\t\t */\n\t\t\t\t\t\tif (parentmom_first == NULL)\n\t\t\t\t\t\t\tparentmom_first = parentmom;\n\n\t\t\t\t\t\t/* record \"native\" vnode for the chosen Mom */\n\t\t\t\t\t\t(phowl + ndindex)->hw_natvn = ((struct mom_svrinfo *) (parentmom->mi_data))->msr_children[0];\n\t\t\t\t\t\t(phowl + ndindex)->hw_mom = parentmom;\n\t\t\t\t\t} else if (pnode->nd_nummoms == 1) {\n\t\t\t\t\t\t/* single parent Mom, just use her */\n\t\t\t\t\t\t(phowl + ndindex)->hw_natvn = ((mom_svrinfo_t *) (pnode->nd_moms[0]->mi_data))->msr_children[0];\n\t\t\t\t\t\t(phowl + ndindex)->hw_mom = pnode->nd_moms[0];\n\t\t\t\t\t\tif (parentmom_first == NULL)\n\t\t\t\t\t\t\tparentmom_first = pnode->nd_moms[0];\n\t\t\t\t\t\t/* if the first chunk goes to a single parent */\n\t\t\t\t\t\t/* set parentmom in case the next chunk can   */\n\t\t\t\t\t\t/* also go there;  otherwise keep the old     */\n\t\t\t\t\t\t/* parentmom value.                           */\n\t\t\t\t\t\tif (parentmom == NULL)\n\t\t\t\t\t\t\tparentmom = parentmom_first;\n\t\t\t\t\t}\n\t\t\t\t} else if (objtype == JOB_OBJECT) {\n\t\t\t\t\t/*\n\t\t\t\t\t * exec_host applies to job's only ...\n\t\t\t\t\t * Have an existing exec_host string which is being\n\t\t\t\t\t * kept.  
Reuse it to obtain the \"natural\" vnode and\n\t\t\t\t\t * the \"index\" number which we will use in\n\t\t\t\t\t * set_old_job_index() later\n\t\t\t\t\t */\n\t\t\t\t\twhile (*pehnxt && (*pehnxt != '/'))\n\t\t\t\t\t\tpehnxt++;\n\t\t\t\t\t*pehnxt = '\\0';\n\t\t\t\t\t(phowl + ndindex)->hw_natvn = find_nodebyname(peh);\n\t\t\t\t\tif ((phowl + ndindex)->hw_natvn == NULL) {\n\t\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t\t\t   pjob->ji_qs.ji_jobid, \"Unknown node %s received\", peh);\n\t\t\t\t\t\tfree(phowl);\n\t\t\t\t\t\tfree(execvncopy);\n\t\t\t\t\t\treturn (PBSE_UNKNODE);\n\t\t\t\t\t}\n\t\t\t\t\tif ((phowl + ndindex)->hw_pnd->nd_moms)\n\t\t\t\t\t\t(phowl + ndindex)->hw_mom = (phowl + ndindex)->hw_pnd->nd_moms[0];\n\t\t\t\t\telse {\n\t\t\t\t\t\t(phowl + ndindex)->hw_mom_host = (phowl + ndindex)->hw_pnd->nd_attr[ND_ATR_Mom].at_val.at_str;\n\t\t\t\t\t\t(phowl + ndindex)->hw_mom_port = (phowl + ndindex)->hw_pnd->nd_attr[ND_ATR_Port].at_val.at_long;\n\t\t\t\t\t}\n\t\t\t\t\t*pehnxt = '/';\n\t\t\t\t\t(phowl + ndindex)->hw_index = atoi(++pehnxt);\n\t\t\t\t\twhile (*pehnxt && (*pehnxt != '+'))\n\t\t\t\t\t\tpehnxt++;\n\t\t\t\t\tif (*pehnxt == '+')\n\t\t\t\t\t\tpeh = ++pehnxt;\n\t\t\t\t\telse\n\t\t\t\t\t\tpeh = pehnxt;\n\t\t\t\t\tif (parentmom_first == NULL)\n\t\t\t\t\t\tparentmom_first = (phowl + ndindex)->hw_natvn->nd_moms[0];\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/* set setck to indicate if next vnode starts a new chunk */\n\t\t\t/* stays the same if hasprn == 0\t\t\t  */\n\t\t\tif (hasprn > 0)\n\t\t\t\tsetck = 0; /* continuation of multi-vnode chunk  */\n\t\t\telse if (hasprn < 0)\n\t\t\t\tsetck = 1; /* end of multi-vnode chunk,start new */\n\n\t\t\tfor (i = 0; i < nelem; i++) {\n\t\t\t\tif (strcasecmp(\"ncpus\", (pkvp + i)->kv_keyw) == 0)\n\t\t\t\t\t(phowl + ndindex)->hw_ncpus = atoi((pkvp + i)->kv_val);\n\t\t\t\telse {\n\t\t\t\t\tif ((find_resc_def(svr_resc_def, (pkvp + i)->kv_keyw) == NULL) && (svr_init == FALSE)) 
{\n\t\t\t\t\t\tfree(execvncopy);\n\t\t\t\t\t\tresc_in_err = strdup((pkvp + i)->kv_keyw);\n\t\t\t\t\t\trc = PBSE_UNKRESC;\n\t\t\t\t\t\tgoto end;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\thostcpus += (phowl + ndindex)->hw_ncpus;\n\n\t\t\tif (setck == 1) {\n\t\t\t\t(phowl + ndindex)->hw_htcpu = hostcpus;\n\t\t\t\thostcpus = 0;\n\t\t\t}\n\n\t\t} else {\n\t\t\t/* Error */\n\t\t\tfree(execvncopy);\n\t\t\trc = PBSE_BADATVAL;\n\t\t\tgoto end;\n\t\t}\n\n\t\tndindex++;\n\t}\n\n\tfree(execvncopy);\n\texecvncopy = NULL;\n\n\t/* now we have an array of the required nodes */\n\n\tif (objtype == JOB_OBJECT) {\n\t\tsize_t ehlen;\n\t\tsize_t ehlen2;\n\n\t\t/* FOR JOBS ... */\n\n\t\t/* make sure that the buf for the new exec_host str is sufficient */\n\t\t/* allow room for each name plus /NNNNNN*MMMMMM+ (16 characters) */\n\t\tehlen = 0;\n\t\tehlen2 = 0;\n\n\t\t/* size for every chunk, including the first */\n\t\tfor (i = 0; i < ndindex; ++i) {\n\t\t\tif ((phowl + i)->hw_chunk) {\n\t\t\t\tehlen += strlen((phowl + i)->hw_natvn->nd_name) + 16;\n\t\t\t\tif ((phowl + i)->hw_mom)\n\t\t\t\t\tehlen2 += strlen((phowl + i)->hw_mom->mi_host);\n\t\t\t\telse\n\t\t\t\t\tehlen2 += strlen((phowl + i)->hw_mom_host);\n\t\t\t\tehlen2 += 6 + 16;\n\t\t\t}\n\t\t}\n\n\t\tif (ehlen >= ehbufsz) {\n\t\t\t/* need to grow buffer */\n\t\t\tpc = realloc(ehbuf, ehlen + EHBUF_SZ);\n\t\t\tif (pc) {\n\t\t\t\tehbuf = pc;\n\t\t\t\tehbufsz = ehlen + EHBUF_SZ;\n\t\t\t} else {\n\t\t\t\trc = PBSE_SYSTEM;\n\t\t\t\tgoto end;\n\t\t\t}\n\t\t}\n\n\t\tif (ehlen2 >= ehbufsz2) {\n\t\t\t/* need to grow buffer */\n\t\t\tpc2 = realloc(ehbuf2, ehlen2 + EHBUF_SZ);\n\t\t\tif (pc2) {\n\t\t\t\tehbuf2 = pc2;\n\t\t\t\tehbufsz2 = ehlen2 + EHBUF_SZ;\n\t\t\t} else {\n\t\t\t\trc = PBSE_SYSTEM;\n\t\t\t\tgoto end;\n\t\t\t}\n\t\t}\n\n\t\t/*\n\t\t * Add a \"jobinfo\" structure to each subnode of *pnode that\n\t\t * is specified.\n\t\t */\n\n\t\tfor (i = 0; i < ndindex; ++i) {\n\n\t\t\tpnode = (phowl + i)->hw_pnd;\n\n\t\t\tif ((svr_init == TRUE) &&\n\t\t\t    ((check_job_substate(pjob, 
JOB_SUBSTATE_SUSPEND) ||\n\t\t\t      check_job_substate(pjob, JOB_SUBSTATE_SCHSUSP))) &&\n\t\t\t    (is_jattr_set(pjob, JOB_ATR_resc_released)))\n\t\t\t\t/* No need to add suspended job to jobinfo structure and assign CPU slots to it */\n\t\t\t\tbreak;\n\n\t\t\trc = assign_jobs_on_subnode(pnode, (phowl + i)->hw_ncpus, pjob->ji_qs.ji_jobid, svr_init, share_job);\n\t\t\tif (rc != PBSE_NONE)\n\t\t\t\tgoto end;\n\n\t\t\tupdate_node_state(pnode, share_job);\n\n\t\t\t/*\n\t\t\t * now for each new chunk, add the job to the Mom job index\n\t\t\t * array anew or reusing the indices from the existing\n\t\t\t * exec_host\n\t\t\t */\n\n\t\t\tif ((phowl + i)->hw_chunk && (phowl + i)->hw_mom) {\n\t\t\t\tif (mk_new_host) {\n\t\t\t\t\t/* add new job index to Mom and save it    */\n\t\t\t\t\t/* for creating the (new) exec_host string */\n\t\t\t\t\t(phowl + i)->hw_index = add_job_index_to_mom((phowl + i)->hw_natvn, pjob);\n\t\t\t\t} else {\n\t\t\t\t\t/* as we are keeping the exec_host from before */\n\t\t\t\t\t/* reset the job index to the value saved from */\n\t\t\t\t\t/* parsing the old exec_host earlier           */\n\t\t\t\t\t(phowl + i)->hw_index =\n\t\t\t\t\t\tset_old_job_index((phowl + i)->hw_natvn,\n\t\t\t\t\t\t\t\t  pjob, (phowl + i)->hw_index);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t/* set flag in job that it has nodes associated with it */\n\n\t\tpjob->ji_qs.ji_svrflags |= JOB_SVFLG_HasNodes;\n\n\t\t/*\n\t\t * Increment the number of jobs on the job's Mother Superior.\n\t\t * This has to be done in association with setting\n\t\t * JOB_SVFLG_HasNodes; the count is decremented in free_nodes().\n\t\t */\n\t\tif (parentmom_first)\n\t\t\t((mom_svrinfo_t *) (parentmom_first->mi_data))->msr_numjobs++;\n\n\t\tif (mk_new_host) {\n\n\t\t\t/* make the new exec_host string */\n\n\t\t\t*ehbuf = '\\0';\n\t\t\tpc = ehbuf;\n\t\t\t*ehbuf2 = '\\0';\n\t\t\tpc2 = ehbuf2;\n\t\t\tfor (i = 0; i < ndindex; ++i) 
{\n\t\t\t\tif ((phowl + i)->hw_chunk) {\n\t\t\t\t\tsprintf(pc, \"%s/%d\", (phowl + i)->hw_natvn->nd_name,\n\t\t\t\t\t\t(phowl + i)->hw_index);\n\n\t\t\t\t\tif ((phowl + i)->hw_mom)\n\t\t\t\t\t\tsprintf(pc2, \"%s:%d/%d\", (phowl + i)->hw_mom->mi_host,\n\t\t\t\t\t\t\t(phowl + i)->hw_mom->mi_port,\n\t\t\t\t\t\t\t(phowl + i)->hw_index);\n\t\t\t\t\telse\n\t\t\t\t\t\tsprintf(pc2, \"%s:%d/%d\", (phowl + i)->hw_mom_host,\n\t\t\t\t\t\t\t(phowl + i)->hw_mom_port,\n\t\t\t\t\t\t\t(phowl + i)->hw_index);\n\n\t\t\t\t\tpc = ehbuf + strlen(ehbuf);\n\t\t\t\t\tpc2 = ehbuf2 + strlen(ehbuf2);\n\n\t\t\t\t\tif ((phowl + i)->hw_htcpu != 1) {\n\t\t\t\t\t\tsprintf(pc, \"*%d\", (phowl + i)->hw_htcpu);\n\t\t\t\t\t\tpc = ehbuf + strlen(ehbuf);\n\n\t\t\t\t\t\tsprintf(pc2, \"*%d\", (phowl + i)->hw_htcpu);\n\t\t\t\t\t\tpc2 = ehbuf2 + strlen(ehbuf2);\n\t\t\t\t\t}\n\t\t\t\t\t*(pc++) = '+';\n\t\t\t\t\t*pc = '\\0';\n\n\t\t\t\t\t*(pc2++) = '+';\n\t\t\t\t\t*pc2 = '\\0';\n\t\t\t\t}\n\t\t\t}\n\t\t\t*(ehbuf + strlen(ehbuf) - 1) = '\\0';   /* remove last '+' */\n\t\t\t*(ehbuf2 + strlen(ehbuf2) - 1) = '\\0'; /* remove last '+' */\n\t\t}\n\n\t} else {\n\n\t\t/* FOR RESERVATIONS */\n\n\t\t/* now for each node, create a resvinfo structure */\n\t\tfor (i = 0; i < ndindex; ++i) {\n\n\t\t\tstruct resvinfo *rp;\n\t\t\t/* Create a list of pointers to each vnode associated to the reservation */\n\t\t\trp = (struct resvinfo *) malloc(sizeof(struct resvinfo));\n\t\t\tif (rp) {\n\t\t\t\tpbsnode_list_t *tmp_pl;\n\t\t\t\trp->next = (phowl + i)->hw_pnd->nd_resvp;\n\t\t\t\t(phowl + i)->hw_pnd->nd_resvp = rp;\n\t\t\t\trp->resvp = presv;\n\n\t\t\t\t/* create a backlink from the reservation to the vnode */\n\t\t\t\ttmp_pl = malloc(sizeof(pbsnode_list_t));\n\t\t\t\tif (tmp_pl == NULL) {\n\t\t\t\t\tfree(phowl);\n\t\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t\t}\n\t\t\t\ttmp_pl->next = presv->ri_pbsnode_list;\n\t\t\t\ttmp_pl->vnode = (phowl + i)->hw_pnd;\n\t\t\t\tpresv->ri_pbsnode_list = 
tmp_pl;\n\t\t\t\tpresv->ri_vnodect++;\n\t\t\t\tDBPRT((\"%s: Adding %s to %s\\n\", __func__,\n\t\t\t\t       (phowl + i)->hw_pnd->nd_name, presv->ri_qs.ri_resvID))\n\t\t\t}\n\t\t}\n\t\tpresv->ri_qs.ri_svrflags |= RESV_SVFLG_HasNodes;\n\t}\n\n\t*execvnod_out = execvnod;\n\tif (mk_new_host) {\n\t\t*hoststr = ehbuf;\n\t\t*hoststr2 = ehbuf2;\n\t}\n\nend:\n\tfree(phowl);\n\treturn rc;\n}\n\n/**\n * @brief\n * \t\tclear the job's entries from the job index array of each\n * \t\tparent Mom of the given vnode\n *\n * @param[in]\tpjob\t- job being removed\n * @param[in]\tpnode\t- vnode whose parent Moms' index arrays are updated\n *\n * @return\tvoid\n */\nstatic void\nremove_job_index_from_mom(job *pjob, struct pbsnode *pnode)\n{\n\tint i;\n\tint j;\n\tmom_svrinfo_t *psvrmom;\n\n\tif (pnode == NULL)\n\t\treturn;\n\n\tfor (i = 0; i < pnode->nd_nummoms; i++) {\n\t\tif (pnode->nd_moms[i] == NULL)\n\t\t\tcontinue;\n\t\tpsvrmom = (mom_svrinfo_t *) (pnode->nd_moms[i]->mi_data);\n\n\t\tfor (j = 0; j < psvrmom->msr_jbinxsz; j++) {\n\t\t\tif (psvrmom->msr_jobindx[j] == pjob) {\n\t\t\t\tpsvrmom->msr_jobindx[j] = NULL;\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\tfree nodes allocated to a job\n *\n * @param[in,out]\tpjob\t- job structure\n *\n * @return\tvoid\n */\nvoid\nfree_nodes(job *pjob)\n{\n\tstruct pbsnode *pnode;\n\tmom_svrinfo_t *psvrmom;\n\tchar *execvnod_in = NULL;\n\tchar *execvncopy;\n\tchar *chunk;\n\tchar *last;\n\tint hasprn;\n\tchar *vname;\n\tint nelem;\n\tstruct key_value_pair *pkvp;\n\tchar *execvnod = NULL;\n\n\t/* decrement number of jobs on the Mom who is the first Mom */\n\t/* for the job, Mother Superior; incremented in set_nodes() */\n\t/* and saved in ji_destin in assign_hosts()\t\t    */\n\tif (((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HasNodes) != 0) &&\n\t    (pjob->ji_qs.ji_destin[0] != '\\0')) {\n\t\tpnode = find_nodebyname(pjob->ji_qs.ji_destin);\n\t\tif (pnode) {\n\t\t\tpsvrmom = pnode->nd_moms[0]->mi_data;\n\t\t\tif (--psvrmom->msr_numjobs < 0)\n\t\t\t\tpsvrmom->msr_numjobs = 0;\n\t\t}\n\t}\n\n\t/* Now loop through the Moms and remove the jobindx entry */\n\t/* remove this job's jobinfo entry from each vnode        */\n\n\tif (is_jattr_set(pjob, JOB_ATR_exec_vnode_orig))\n\t\texecvnod_in = get_jattr_str(pjob, 
JOB_ATR_exec_vnode_orig);\n\telse if (is_jattr_set(pjob, JOB_ATR_exec_vnode))\n\t\texecvnod_in = get_jattr_str(pjob, JOB_ATR_exec_vnode);\n\n\tif (execvnod_in == NULL) {\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, \"in free_nodes and no exec_vnode\");\n\t\treturn;\n\t}\n\n\tif (strchr(execvnod_in, (int) ':') == NULL) {\n\t\t/* need to take node only list and build a pseudo */\n\t\t/* exec_vnode string with the resources included  */\n\t\texecvnod = build_execvnode(pjob, execvnod_in);\n\t} else {\n\t\texecvnod = execvnod_in;\n\t}\n\tif (execvnod == NULL)\n\t\treturn; /* build_execvnode failed */\n\n\texecvncopy = strdup(execvnod);\n\tif (execvncopy == NULL)\n\t\treturn;\n\n\tchunk = parse_plus_spec_r(execvncopy, &last, &hasprn);\n\n\twhile (chunk) {\n\t\tif (parse_node_resc(chunk, &vname, &nelem, &pkvp) == 0) {\n\t\t\tpnode = find_nodebyname(vname);\n\t\t\tremove_job_index_from_mom(pjob, pnode);\n\t\t\tdeallocate_job_from_node(pjob->ji_qs.ji_jobid, pnode);\n\t\t\tchunk = parse_plus_spec_r(last, &last, &hasprn);\n\t\t} else {\n\t\t\tbreak; /* malformed chunk; stop to avoid looping forever */\n\t\t}\n\t}\n\tfree(execvncopy);\n\tpjob->ji_qs.ji_svrflags &= ~JOB_SVFLG_HasNodes;\n}\n\n/**\n * @brief\n * \t\tfree nodes allocated to a reservation object\n *\n *\t\tThis function is the analog of \"free_nodes\" for job objects\n *\n * @param[in]\tpresv\t- The reservation for which nodes are freed\n *\n * @return void\n *\n * @par Side-effects: This function will unset the resv-exclusive node state if\n * the reservation has a start time in the past. 
Care must be taken with\n * standing reservations.\n *\n * @par MT-safe: No\n */\nvoid\nfree_resvNodes(resc_resv *presv)\n{\n\tstruct pbsnode *pnode;\n\tstruct resvinfo *rinfp, *prev;\n\tint i;\n\tpbsnode_list_t *pnl;\n\tpbsnode_list_t *pnl_next;\n\n\tDBPRT((\"%s: entered\\n\", __func__))\n\tfor (i = 0; i < svr_totnodes; i++) {\n\t\tpnode = pbsndlist[i];\n\n\t\tfor (prev = NULL, rinfp = pnode->nd_resvp; rinfp;) {\n\n\t\t\tif (rinfp->resvp != presv) {\n\t\t\t\tprev = rinfp;\n\t\t\t\trinfp = rinfp->next;\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\t/* garbage collect the pbsnode_list */\n\t\t\tfor (pnl = presv->ri_pbsnode_list, pnl_next = pnl; pnl_next; pnl = pnl_next) {\n\t\t\t\tpnl_next = pnl->next;\n\t\t\t\tfree(pnl);\n\t\t\t}\n\t\t\tpresv->ri_pbsnode_list = NULL;\n\n\t\t\t/* free from provisioning list, if node was in wait_prov */\n\t\t\tfree_prov_vnode(pnode);\n\n\t\t\t/* Unset the resv-exclusive bit if set and\n\t\t\t * the node was associated to a running reservation\n\t\t\t * that is either being deleted or just ended.\n\t\t\t */\n\t\t\tif (pnode->nd_state & INUSE_RESVEXCL &&\n\t\t\t    presv->ri_qs.ri_stime <= time_now)\n\t\t\t\tset_vnode_state(pnode, ~INUSE_RESVEXCL, Nd_State_And);\n\n\t\t\tDBPRT((\"Freeing resvinfo on node %s from reservation %s\\n\",\n\t\t\t       pnode->nd_name, presv->ri_qs.ri_resvID))\n\t\t\tif (prev == NULL) {\n\t\t\t\tpnode->nd_resvp = rinfp->next;\n\t\t\t\tfree(rinfp);\n\t\t\t\trinfp = pnode->nd_resvp;\n\t\t\t} else {\n\t\t\t\tprev->next = rinfp->next;\n\t\t\t\tfree(rinfp);\n\t\t\t\trinfp = prev->next;\n\t\t\t}\n\t\t}\n\t}\n\tpresv->ri_vnodect = 0;\n\tpresv->ri_qs.ri_svrflags &= ~RESV_SVFLG_HasNodes;\n}\n\n/**\n * @brief\n *\tDoes a check to make sure a resource value  in 'presc'\n *\thas not gone negative, and if so, reset value to 0, and\n *\tlog a message.\n *\n * @param[in]\tprdef\t- resource definition of 'presc'\n * @param[in]\tpresc\t- resource in question\n * @param[in]\tnoden\t- non-NULL if resources coming from a vnode\n *\n * 
@return void\n */\nstatic void\ncheck_for_negative_resource(resource_def *prdef, resource *presc, char *noden)\n{\n\tint nerr = 0;\n\n\tif ((prdef == NULL) || (presc == NULL)) {\n\t\treturn;\n\t}\n\t/* make sure nothing in resources_assigned goes negative */\n\tswitch (prdef->rs_type) {\n\t\tcase ATR_TYPE_LONG:\n\t\t\tif (presc->rs_value.at_val.at_long < 0) {\n\t\t\t\tpresc->rs_value.at_val.at_long = 0;\n\t\t\t\tnerr = 1;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase ATR_TYPE_LL:\n\t\t\tif (presc->rs_value.at_val.at_ll < 0) {\n\t\t\t\tpresc->rs_value.at_val.at_ll = 0;\n\t\t\t\tnerr = 1;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase ATR_TYPE_SHORT:\n\t\t\tif (presc->rs_value.at_val.at_short < 0) {\n\t\t\t\tpresc->rs_value.at_val.at_short = 0;\n\t\t\t\tnerr = 1;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase ATR_TYPE_FLOAT:\n\t\t\tif (presc->rs_value.at_val.at_float < 0.0) {\n\t\t\t\tpresc->rs_value.at_val.at_float = 0.0;\n\t\t\t\tnerr = 1;\n\t\t\t}\n\t\t\tbreak;\n\t}\n\n\tif (nerr) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"resource %s went negative on node\",\n\t\t\t prdef->rs_name);\n\t\tif (noden) {\n\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE,\n\t\t\t\t  LOG_ALERT, noden, log_buffer);\n\t\t} else {\n\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER,\n\t\t\t\t  LOG_ALERT, msg_daemonname, log_buffer);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\tadjust the resources_assigned on a node.\n *\n * @par\n *\t\tCalled with the node name, the node ordinal (0 for first node),\n *\t\tthe +/- operator, the resource name, and the resource value.\n *\n * @param[out]\tnoden\t- node name\n * @param[in]\taflag\t- node ordinal (0 for first node)\n * @param[in]\tbatch_op\t- operator of type enum batch_op.\n * @param[in]\tprdef\t- resource structure which stores resource name\n * @param[in]\tval\t- resource value\n * @param[in]\thop\t- always called with 0, this values checks for the level of indirectness.\n *\n * @return\tint\n * @retval\t0\t- success\n * @retval\t!=0\t- failure code\n 
*/\nstatic int\nadj_resc_on_node(char *noden, int aflag, enum batch_op op, resource_def *prdef, char *val, int hop)\n{\n\tpbsnode *pnode;\n\tresource *presc;\n\tattribute *pattr;\n\tint rc;\n\tattribute tmpattr;\n\n\t/* make sure there isn't multiple levels of indirectness */\n\t/* resource->resource->resource */\n\n\tif (hop > 1) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"multiple level of indirectness for resource %s\",\n\t\t\t prdef->rs_name);\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE,\n\t\t\t  LOG_ALERT, noden, log_buffer);\n\t\treturn (PBSE_INDIRECTHOP);\n\t}\n\n\t/* If it is accumulated for the Nth node, then */\n\n\tif ((prdef->rs_flags & aflag) == 0)\n\t\treturn 0;\n\n\t/* find the node */\n\n\tpnode = find_nodebyname(noden);\n\tif (pnode == NULL)\n\t\treturn PBSE_UNKNODE;\n\n\t/* find the resources_assigned resource for the node */\n\n\tpattr = get_nattr(pnode, ND_ATR_ResourceAssn);\n\tif ((presc = find_resc_entry(pattr, prdef)) == NULL) {\n\t\tpresc = add_resource_entry(pattr, prdef);\n\t\tif (presc == NULL)\n\t\t\treturn PBSE_INTERNAL;\n\t}\n\tif ((presc->rs_value.at_flags & ATR_VFLAG_INDIRECT) &&\n\t    (*presc->rs_value.at_val.at_str == '@')) {\n\n\t\t/* indirect reference to another vnode, recurse w/ that node */\n\n\t\tnoden = presc->rs_value.at_val.at_str + 1;\n\t\treturn (adj_resc_on_node(noden, aflag, op, prdef, val, ++hop));\n\t}\n\n\t/* decode the resource value and +/- it to the attribute */\n\n\tmemset((void *) &tmpattr, 0, sizeof(attribute));\n\trc = 0;\n\n\tif ((rc = prdef->rs_decode(&tmpattr, ATTR_rescassn, prdef->rs_name, val)) != 0)\n\t\treturn rc;\n\trc = prdef->rs_set(&presc->rs_value, &tmpattr, op);\n\tif (op == DECR) {\n\t\tcheck_for_negative_resource(prdef, presc, noden);\n\t}\n\treturn rc;\n}\n\n/**\n * @brief\n * \t\tupdate the resources assigned at the vnode level\n *\t\tfor a job.   
Resources_assigned.X is incremented or decremented\n *\t\tbased on the operator.\n *\n * @par\n *\t\tThe resource list is taken from the exec_vnode string of the job.\n *\t\tIt is in the form: NodeA:resc=val:resc=val+NodeB:...\n * @par\n *\t\tEach \"chunk\" (subspec between plus signs) is broken into the vnode\n *\t\tname and a key_value_pair array of resources and values.  For each\n *\t\tresource, the corresponding resource (if present) in the vnode's\n *\t\tresources_assigned is adjusted.\n *\n * @param[in]\tpjob\t- job to update\n * @param[in]\tpexech\t- exec_vnode string\n * @param[in]\top\t- operator of type enum batch_op.\n *\n * @return\tvoid\n */\nvoid\nupdate_job_node_rassn(job *pjob, attribute *pexech, enum batch_op op)\n{\n\tint asgn = ATR_DFLAG_ANASSN | ATR_DFLAG_FNASSN;\n\tchar *chunk;\n\tint j;\n\tint nelem;\n\tchar *noden;\n\tint rc;\n\tresource_def *prdef = NULL;\n\tstruct key_value_pair *pkvp;\n\tattribute *queru = NULL;\n\tattribute *sysru = NULL;\n\tresource *pr = NULL;\n\tattribute tmpattr;\n\tint nchunk = 0;\n\n\t/* Parse the exec_vnode string */\n\n\tif (!is_attr_set(pexech))\n\t\treturn;\n\n\tif ((pjob != NULL) &&\n\t    (pexech == get_jattr(pjob, JOB_ATR_exec_vnode_deallocated))) {\n\t\tchar *pc;\n\t\tsysru = get_sattr(SVR_ATR_resource_assn);\n\t\tqueru = get_qattr(pjob->ji_qhdr, QE_ATR_ResourceAssn);\n\n\t\tpc = pexech->at_val.at_str;\n\t\twhile (*pc != '\\0') {\n\t\t\t/* given exec_vnode format: (<chunk1>+<chunk2>)+(<chunk3>), \t*/\n\t\t\t/* <chunk1> and <chunk2> belong to the same node host,      \t*/\n\t\t\t/* while  <chunk3> belongs to another node host. 
\t\t*/\n\t\t\t/* The number of node host chunks can be determined by # of     */\n\t\t\t/* left parentheses */\n\t\t\tif (*pc == '(') {\n\t\t\t\tnchunk++;\n\t\t\t}\n\t\t\tpc++;\n\t\t}\n\t}\n\tchunk = parse_plus_spec(pexech->at_val.at_str, &rc);\n\tif (rc != 0)\n\t\treturn;\n\twhile (chunk) {\n\t\tif (parse_node_resc(chunk, &noden, &nelem, &pkvp) == 0) {\n\t\t\tfor (j = 0; j < nelem; ++j) {\n\t\t\t\tprdef = find_resc_def(svr_resc_def, pkvp[j].kv_keyw);\n\t\t\t\tif (prdef == NULL)\n\t\t\t\t\treturn;\n\n\t\t\t\t/* skip all non-consumable resources (e.g. aoe) */\n\t\t\t\tif ((prdef->rs_flags & asgn) == 0) {\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\n\t\t\t\trc = adj_resc_on_node(noden, asgn, op, prdef, pkvp[j].kv_val, 0);\n\t\t\t\tif (rc && rc != PBSE_UNKNODE)\n\t\t\t\t\treturn;\n\n\t\t\t\t/* update system attribute of resources assigned */\n\n\t\t\t\tif (sysru || queru) {\n\t\t\t\t\tif ((rc = prdef->rs_decode(&tmpattr, ATTR_rescassn, pkvp[j].kv_keyw,\n\t\t\t\t\t\t\t\t   pkvp[j].kv_val)) != 0)\n\t\t\t\t\t\treturn;\n\t\t\t\t}\n\n\t\t\t\tif (sysru) {\n\t\t\t\t\tpr = find_resc_entry(sysru, prdef);\n\t\t\t\t\tif (pr == NULL) {\n\t\t\t\t\t\tpr = add_resource_entry(sysru, prdef);\n\t\t\t\t\t\tif (pr == NULL)\n\t\t\t\t\t\t\treturn;\n\t\t\t\t\t}\n\t\t\t\t\tprdef->rs_set(&pr->rs_value, &tmpattr, op);\n\t\t\t\t\tif (op == DECR) {\n\t\t\t\t\t\tcheck_for_negative_resource(prdef, pr, NULL);\n\t\t\t\t\t}\n\t\t\t\t\tpost_attr_set(sysru);\n\t\t\t\t}\n\n\t\t\t\t/* update queue attribute of resources assigned */\n\n\t\t\t\tif (queru) {\n\t\t\t\t\tpr = find_resc_entry(queru, prdef);\n\t\t\t\t\tif (pr == NULL) {\n\t\t\t\t\t\tpr = add_resource_entry(queru, prdef);\n\t\t\t\t\t\tif (pr == NULL)\n\t\t\t\t\t\t\treturn;\n\t\t\t\t\t}\n\t\t\t\t\tprdef->rs_set(&pr->rs_value, &tmpattr, op);\n\t\t\t\t\tif (op == DECR) {\n\t\t\t\t\t\tcheck_for_negative_resource(prdef, pr, NULL);\n\t\t\t\t\t}\n\t\t\t\t\tpost_attr_set(queru);\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\treturn;\n\t\t}\n\t\tasgn = 
ATR_DFLAG_ANASSN;\n\t\tchunk = parse_plus_spec(NULL, &rc);\n\t\tif (rc != 0)\n\t\t\treturn;\n\t}\n\n\tif (sysru || queru) {\n\t\t/* set pseudo-resource \"nodect\" to the number of chunks */\n\t\tprdef = &svr_resc_def[RESC_NODECT];\n\t\tif (prdef == NULL) {\n\t\t\treturn;\n\t\t}\n\t}\n\tif (sysru) {\n\t\tpr = find_resc_entry(sysru, prdef);\n\t\tif (pr == NULL)\n\t\t\tpr = add_resource_entry(sysru, prdef);\n\t\tif (pr) {\n\n\t\t\tif (op == DECR) {\n\t\t\t\tpr->rs_value.at_val.at_long -= nchunk;\n\t\t\t\tcheck_for_negative_resource(prdef, pr, NULL);\n\t\t\t} else {\n\t\t\t\tpr->rs_value.at_val.at_long += nchunk;\n\t\t\t}\n\t\t\tpr->rs_value.at_flags |= ATR_SET_MOD_MCACHE | ATR_VFLAG_DEFLT;\n\t\t}\n\t}\n\tif (queru) {\n\t\tpr = find_resc_entry(queru, prdef);\n\t\tif (pr == NULL)\n\t\t\tpr = add_resource_entry(queru, prdef);\n\t\tif (pr) {\n\t\t\tif (op == DECR) {\n\t\t\t\tpr->rs_value.at_val.at_long -= nchunk;\n\t\t\t\tcheck_for_negative_resource(prdef, pr, NULL);\n\t\t\t} else {\n\t\t\t\tpr->rs_value.at_val.at_long += nchunk;\n\t\t\t}\n\t\t\tpr->rs_value.at_flags |= ATR_VFLAG_DEFLT | ATR_SET_MOD_MCACHE;\n\t\t}\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \t\tmark node by name down\n *\n * @param[in]\tnodename - node being searched then marking as down.\n * @param[in]\twhy - error message\n *\n * @return void\n */\nvoid\nmark_node_down(char *nodename, char *why)\n{\n\tstruct pbsnode *pnode;\n\n\t/* note - find_nodebyname strips off /VP */\n\n\tif ((pnode = find_nodebyname(nodename)) != NULL) {\n\t\t/* XXXX fix see momptr_down() XXXX */\n\t\tmomptr_down(pnode->nd_moms[0], why);\n\t}\n}\n\n/**\n * @brief\n * \t\tMark mom (by ptr) down and log message given by 'why'.\n *\n * @param[in]\tpmom - a mom entry\n * @param[in]\twhy - node comment\n *\n * @return void\n */\nvoid\nmomptr_offline_by_mom(mominfo_t *pmom, char *why)\n{\n\tif (pmom == NULL)\n\t\treturn;\n\n\tpmom->mi_dmn_info->dmn_state |= INUSE_OFFLINE_BY_MOM;\n\n\tif ((why != NULL) && (why[0] != 
'\\0'))\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE,\n\t\t\t  LOG_ALERT, pmom->mi_host, why);\n\n\tset_all_state(pmom, 1, INUSE_OFFLINE_BY_MOM, why, Set_All_State_All_Offline);\n\treturn;\n}\n\n/**\n * @brief\n * \t\toffline_by_mom vnodes whose parent mom is 'nodename'.\n *\n * @param[in]\tnodename - node to mark offline_by_mom state\n * @param[in]\twhy - comment to put in the node\n *\n * @return void\n */\nvoid\nmark_node_offline_by_mom(char *nodename, char *why)\n{\n\tstruct pbsnode *pnode;\n\n\t/* note - find_nodebyname strips off /VP */\n\n\tif ((pnode = find_nodebyname(nodename)) != NULL) {\n\t\t/* XXXX fix see momptr_down() XXXX */\n\t\tmomptr_offline_by_mom(pnode->nd_moms[0], why);\n\t\tnode_save_db(pnode);\n\t}\n}\n\n/**\n * @brief\n * \t\tClear mom (by ptr) offline_by_mom state and log message given by 'why'.\n *\n * @param[in]\tpmom - a mom entry\n * @param[in]\twhy - node comment\n *\n * @return void\n */\nvoid\nmomptr_clear_offline_by_mom(mominfo_t *pmom, char *why)\n{\n\tif (pmom == NULL)\n\t\treturn;\n\n\tif ((why != NULL) && (why[0] != '\\0'))\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_NODE,\n\t\t\t  LOG_ALERT, pmom->mi_host, why);\n\n\t/* '0' second argument means to clear */\n\tset_all_state(pmom, 0, INUSE_OFFLINE_BY_MOM, why, Set_All_State_Regardless);\n\treturn;\n}\n\n/**\n * @brief\n * \t\tclears offline_by_mom vnodes whose parent mom is 'nodename'.\n *\n * @param[in]\tnodename - node to clear offline_by_mom state\n * @param[in]\twhy - comment to put in the node\n *\n * @return void\n */\nvoid\nclear_node_offline_by_mom(char *nodename, char *why)\n{\n\tstruct pbsnode *pnode;\n\n\t/* note - find_nodebyname strips off /VP */\n\n\tif ((pnode = find_nodebyname(nodename)) != NULL) {\n\t\t/* XXXX fix see momptr_down() XXXX */\n\t\tmomptr_clear_offline_by_mom(pnode->nd_moms[0], why);\n\t\tnode_save_db(pnode);\n\t}\n}\n\n/**\n * @brief\n * \t\tsend Mom on each node a shutdown command.\n *\n * @par\n *\t\tNote, there is no error checking or 
retry.   If Mom doesn't go down,\n *\t\tso be it.\n */\nvoid\nshutdown_nodes(void)\n{\n\tdmn_info_t *pdmninfo;\n\tint i, ret;\n\n\tDBPRT((\"%s: entered\\n\", __func__))\n\tfor (i = 0; i < mominfo_array_size; i++) {\n\t\tmominfo_t *pmom;\n\n\t\tpmom = mominfo_array[i];\n\n\t\tif (pmom == NULL)\n\t\t\tcontinue;\n\n\t\tpdmninfo = pmom->mi_dmn_info;\n\t\tif (pdmninfo->dmn_stream < 0)\n\t\t\tcontinue;\n\n\t\tDBPRT((\"%s: down %s\\n\", __func__, pmom->mi_host))\n\n\t\tret = is_compose(pdmninfo->dmn_stream, IS_SHUTDOWN);\n\t\tif (ret == DIS_SUCCESS) {\n\t\t\t(void) dis_flush(pdmninfo->dmn_stream);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\tcount number of processors specified in node string.\n *\n * @param[out] *hascpp\t- is set non-zero if :cpp or :ncpus appears in string\n *\t\t\t\t\t\tindicating that user has specified fixed placement of cpus\n *\n * @return\ttotalcpu\n */\nint\nctcpus(char *buf, int *hascpp)\n{\n\tint i;\n\tchar *pc;\n\tchar *pplus;\n\tchar *str;\n\tint totalcpu = 0;\n\n\tif (!buf)\n\t\treturn 0;\n\n\tstr = buf;\n\t*hascpp = 0;\n\n\t/* look for each subnode element: [N[:]][ppn=Y[:]][cpp=Z] */\n\twhile (*str) {\n\t\tint cpp;\n\t\tint nd;\n\t\tint ppn;\n\n\t\tnd = 1;\n\t\tcpp = 1;\n\t\tppn = 1;\n\t\tif ((pplus = strchr(str, (int) '+')))\n\t\t\t*pplus = '\\0';\n\n\t\tif (number(&str, &i, 1) == 0) {\n\t\t\tnd = i; /* leading \"N\" */\n\t\t\tif (*str)\n\t\t\t\tstr++;\n\t\t}\n\n\t\twhile (1) {\n\n\t\t\tif (property(&str, &pc))\n\t\t\t\tbreak;\n\n\t\t\tif (strncasecmp(pc, \"ppn=\", 4) == 0) {\n\t\t\t\ti = atoi(pc + 4);\n\t\t\t\tif (i == 0)\n\t\t\t\t\treturn 1; /* error */\n\t\t\t\tppn = i;\n\t\t\t}\n\t\t\tif ((strncasecmp(pc, \"cpp=\", 4) == 0) ||\n\t\t\t    (strncasecmp(pc, \"ncpus=\", 6) == 0)) {\n\t\t\t\t*hascpp = 1; /* found a cpp/ncpus item */\n\t\t\t\tpc = strchr(pc, (int) '=');\n\t\t\t\ti = atoi(pc + 1);\n\t\t\t\tif (i == 0)\n\t\t\t\t\treturn 1;\n\t\t\t\tcpp = i;\n\t\t\t}\n\t\t\tif (*str != ':')\n\t\t\t\tbreak;\n\t\t\tstr++;\n\t\t}\n\n\t\ttotalcpu += nd 
* cpp * ppn;\n\t\tif (pplus) {\n\t\t\t*pplus = '+';\n\t\t\tstr = pplus + 1;\n\t\t\t/* continue on to next subnode element */\n\t\t} else\n\t\t\tbreak;\n\t}\n\treturn totalcpu;\n}\n\n/**\n * @brief\n * \t\tshould be called from function pbsd_init.\n *\n * @par\n *\t\tIts purpose is to re-establish the resvinfo for any reservation\n *\t\thaving state \"CONFIRMED\", which is still \"time-viable\" and which\n *\t\thad a set of nodes allocated to it when the server was taken down.\n *\n *\tSpecifically:\n *\t   a) examine reservation attribute RESV_ATR_resv_nodes to\n *\t      determine which vnode are in the '+' separated\n *\t      string and if the server still knows about these nodes\n *\n *\t   b) if (a) succeeds, call assign_resv_resc to assign the resources to\n *        the reservation\n *\n *\t   c) if at any point in the process described in steps (a) or (b)\n *\t      a failure occurs, update the reservation's state to cause\n *\t      subsequent reservation deletion to occur\n *\n *  @return  void\n *\n *  @par Side-effects:\n *       If the reservation has yet to be CONFIRMED, or has a state\n *\t\tindicating that it's to be deleted, the function simply\n *\t\treturns without doing anything.\n *\n *  @par MT-safe: No\n */\nvoid\nset_old_subUniverse(resc_resv *presv)\n{\n\tint rc;\n\tchar *sp;\n\n\tif (presv == NULL || svr_totnodes == 0)\n\t\treturn;\n\n\tif (!is_rattr_set(presv, RESV_ATR_resv_nodes)) {\n\t\treturn;\n\t}\n\n\tif (presv->ri_qs.ri_state != RESV_CONFIRMED &&\n\t    presv->ri_qs.ri_substate != RESV_DEGRADED &&\n\t    presv->ri_qs.ri_substate != RESV_IN_CONFLICT &&\n\t    presv->ri_qs.ri_state != RESV_RUNNING)\n\t\treturn;\n\n\t/* duplicate the resv_nodes because assign_resv_resc will first free the\n\t * resv_nodes attribute before doing the allocation and setting the nodes\n\t */\n\tsp = strdup(get_rattr_str(presv, RESV_ATR_resv_nodes));\n\tif (sp == NULL) {\n\t\tlog_err(errno, __func__, \"Could not allocate memory\");\n\t\treturn;\n\t}\n\t/* 
for resources that are not specified in the request and for which\n\t * default values can be determined, set these values as the values\n\t * for those resources\n\t */\n\tif ((rc = set_resc_deflt((void *) presv, RESC_RESV_OBJECT, NULL)) != 0) {\n\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_RESV, LOG_NOTICE,\n\t\t\t   presv->ri_qs.ri_resvID, \"problem assigning default resource \"\n\t\t\t\t\t\t   \"to reservation %d\",\n\t\t\t   rc);\n\t\tfree(sp);\n\t\treturn;\n\t}\n\t/* set the nodes on the reservation */\n\trc = assign_resv_resc(presv, sp, TRUE);\n\tif (rc != PBSE_NONE) {\n\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_RESV,\n\t\t\t   LOG_NOTICE, presv->ri_qs.ri_resvID,\n\t\t\t   \"problem assigning resource to reservation %d\", rc);\n\t\tfree(sp);\n\t\treturn;\n\t}\n\n\tif ((presv->ri_qs.ri_state == RESV_RUNNING) ||\n\t    (presv->ri_qs.ri_state == RESV_TIME_TO_RUN))\n\t\tresv_exclusive_handler(presv);\n\n\t/* the total number of vnodes associated to the reservation is computed\n\t * in set_nodes which is called from assign_resv_resc. We assume that\n\t * all vnodes are down until they report up.\n\t */\n\tpresv->ri_vnodes_down = presv->ri_vnodect;\n\tDBPRT((\"%s: %s ri_vnodect: %d\\n\", __func__, presv->ri_qs.ri_resvID,\n\t       presv->ri_vnodect))\n\t/* Upon restart, ignore the degraded state of confirmed reservations by\n\t * reverting their state back to confirmed. 
If vnodes don't report back\n\t * available the reservation will go through the degradation process.\n\t * In other words, we assume the reservation is confirmed again until\n\t * proven wrong.\n\t */\n\tif ((presv->ri_qs.ri_substate == RESV_DEGRADED || presv->ri_qs.ri_substate == RESV_IN_CONFLICT) &&\n\t    presv->ri_qs.ri_state != RESV_RUNNING) {\n\t\t(void) resv_setResvState(presv, RESV_CONFIRMED, RESV_CONFIRMED);\n\n\t\t/* unset the reservation retry time attribute */\n\t\tunset_resv_retry(presv);\n\t}\n\tfree(sp);\n}\n\n/**\n * @brief\n * \t\tWalk all vnodes and invoke vnode_unavailable for all those that were\n *  \tset offline or offline_by_mom.\n *\n * \t\tWe assume that the reservation is in the state prior to it being degraded,\n * \t\twhich would be either CONFIRMED, UNCONFIRMED, or RUNNING.\n *\n * \t\tIf some of the nodes do not come back up, then the process of degrading\n * \t\tthe reservation is followed by detecting a node as unavailable\n *\n * @return void\n *\n *  @par MT-safe: No\n */\nvoid\ndegrade_offlined_nodes_reservations(void)\n{\n\tint i;\n\n\tDBPRT((\"%s: entered\\n\", __func__))\n\tfor (i = 0; i < svr_totnodes; i++) {\n\t\tstruct pbsnode *pn;\n\t\tpn = pbsndlist[i];\n\t\tif ((pn->nd_state & (INUSE_OFFLINE | INUSE_OFFLINE_BY_MOM)) != 0 ||\n\t\t    (pn->nd_state & INUSE_UNRESOLVABLE) != 0) {\n\t\t\t/* find all associated reservations and mark them\n\t\t\t * degraded but do not increment the count of downed\n\t\t\t * vnodes as these have already been accounted for in\n\t\t\t * set_old_subuniverse.\n\t\t\t */\n\t\t\tvnode_unavailable(pn, 0);\n\t\t}\n\t}\n\t/* create a task to check for vnodes that don't report back up after MAX_NODE_WAIT */\n\t(void) set_task(WORK_Timed, time_now + MAX_NODE_WAIT,\n\t\t\tdegrade_downed_nodes_reservations, NULL);\n}\n\n/**\n * @brief\n * \t\tWalk all vnodes and invoke vnode_unavailable for all those that have\n *  \tremained (unknown | down | stale) since the server restarted.\n *\n * @return\tvoid\n *\n * 
@par MT-safe: No\n */\nvoid\ndegrade_downed_nodes_reservations(struct work_task *pwt)\n{\n\tint i;\n\tstruct pbsnode *pn;\n\n\tDBPRT((\"%s: entered\\n\", __func__))\n\tfor (i = 0; i < svr_totnodes; i++) {\n\t\tpn = pbsndlist[i];\n\t\t/* checking for nodes that are down, including stale state,\n\t\t * but excluding those that are offlined as those were checked\n\t\t * earlier in degrade_offlined_nodes_reservations.\n\t\t */\n\t\tif (!(pn->nd_state & (INUSE_OFFLINE | INUSE_OFFLINE_BY_MOM)) &&\n\t\t    (pn->nd_state & (INUSE_DOWN |\n\t\t\t\t     INUSE_UNKNOWN | INUSE_STALE))) {\n\t\t\t/* find all associated reservations and mark them\n\t\t\t * degraded but do not increment the count of downed\n\t\t\t * vnodes as these have already been accounted for in\n\t\t\t * set_old_subuniverse\n\t\t\t */\n\t\t\tvnode_unavailable(pn, 0);\n\t\t}\n\t}\n}\n\n/**\n * @brief\tSet last_used_time for job's exec_vnodes or reservation's resv_nodes.\n *\t\tFinds the vnodes by name and sets ND_ATR_last_used_time to time_now.\n *\n * @param[in]\tpobj - pointer to job/reservation.\n * @param[in]\ttype - int, denoting the type of object.\n *                     Value 1 means reservation object.\n *                     Value 0 means job object.\n *\n * @retval\tvoid\n */\nvoid\nset_last_used_time_node(void *pobj, int type)\n{\n\tchar *pc;\n\tchar *pn;\n\tchar *last_pn = NULL;\n\tstruct pbsnode *pnode;\n\tint rc;\n\tint time_int_val;\n\n\ttime_int_val = time_now;\n\n\tif (pobj == NULL)\n\t\treturn;\n\n\tif (type) {\n\t\tresc_resv *presv;\n\n\t\tpresv = pobj;\n\t\tpn = parse_plus_spec(get_rattr_str(presv, RESV_ATR_resv_nodes), &rc);\n\t} else {\n\t\tjob *pjob;\n\n\t\tpjob = pobj;\n\t\tpn = parse_plus_spec(get_jattr_str(pjob, JOB_ATR_exec_vnode), &rc);\n\t}\n\n\twhile (pn) {\n\t\tint cmp_ret;\n\n\t\tpc = pn;\n\t\twhile ((*pc != '\\0') && (*pc != ':'))\n\t\t\t++pc;\n\t\t*pc = '\\0';\n\n\t\tif (last_pn == NULL || (cmp_ret = strcmp(pn, last_pn)) != 0) {\n\t\t\tpnode = find_nodebyname(pn);\n\t\t\t/* 
had better be the \"natural\" vnode with only the one parent */\n\t\t\tif (pnode) {\n\t\t\t\tset_nattr_l_slim(pnode, ND_ATR_last_used_time, time_int_val, SET);\n\t\t\t\tnode_save_db(pnode);\n\t\t\t}\n\t\t}\n\t\tlast_pn = pn;\n\t\tpn = parse_plus_spec(NULL, &rc);\n\t}\n}\n\n/**\n * @brief update_resources_rel - This function creates the JOB_ATR_resc_released_list job attribute\n *\t\t    and adds the RASSN resources reported in the ATTR_released attribute to it.\n * @param[in,out] pjob - job structure\n * @param[in] attrib - attribute which contains list of resources to be released\n * @param[in] op - kind of operation to be performed while setting the resource value.\n *\n * @return int\n * @retval 0  - SUCCESS\n * @retval > 0 - FAILURE\n */\nint\nupdate_resources_rel(job *pjob, attribute *attrib, enum batch_op op)\n{\n\tchar *chunk;\n\tint j;\n\tint rc;\n\tint nelem;\n\tchar *noden;\n\tstruct key_value_pair *pkvp;\n\tresource_def *prdef;\n\tresource *presc;\n\tresource *presc_sq;\n\tattribute tmpattr;\n\n\tif (attrib == NULL || pjob == NULL)\n\t\treturn 1;\n\n\tchunk = parse_plus_spec(attrib->at_val.at_str, &rc);\n\tif (rc != 0)\n\t\treturn 1;\n\twhile (chunk) {\n\t\tif (parse_node_resc(chunk, &noden, &nelem, &pkvp) == 0) {\n\t\t\tfor (j = 0; j < nelem; j++) {\n\t\t\t\tprdef = find_resc_def(svr_resc_def, pkvp[j].kv_keyw);\n\t\t\t\tif (prdef == NULL)\n\t\t\t\t\treturn 1;\n\t\t\t\tif (prdef->rs_flags & (ATR_DFLAG_RASSN | ATR_DFLAG_ANASSN | ATR_DFLAG_FNASSN)) {\n\t\t\t\t\tpresc = add_resource_entry(get_jattr(pjob, JOB_ATR_resc_released_list), prdef);\n\t\t\t\t\tif (presc == NULL)\n\t\t\t\t\t\treturn 1;\n\t\t\t\t\tif ((rc = prdef->rs_decode(&tmpattr, ATTR_rel_list, prdef->rs_name, pkvp[j].kv_val)) != 0)\n\t\t\t\t\t\treturn rc;\n\t\t\t\t\tprdef->rs_set(&presc->rs_value, &tmpattr, op);\n\t\t\t\t}\n\t\t\t}\n\t\t\tchunk = parse_plus_spec(NULL, &rc);\n\t\t\tif (rc != 0)\n\t\t\t\treturn 1;\n\t\t} else\n\t\t\treturn 1;\n\t}\n\t/* Now iterate through all of the job resources that are present 
on at\n\t * queue/server level and add them to resource_release_list. Only do this if\n\t * restrict_res_to_release_on_suspend is set\n\t */\n\tif (is_sattr_set(SVR_ATR_restrict_res_to_release_on_suspend)) {\n\t\tpresc_sq = (resource *) GET_NEXT(get_jattr_list(pjob, JOB_ATR_resource));\n\t\tfor (; presc_sq != NULL; presc_sq = (resource *) GET_NEXT(presc_sq->rs_link)) {\n\t\t\tprdef = presc_sq->rs_defin;\n\t\t\t/* make sure it is a server/queue level consumable resource and not\n\t\t\t* set in resource_released_list already\n\t\t\t*/\n\t\t\tif ((prdef->rs_flags & ATR_DFLAG_RASSN) &&\n\t\t\t    (find_resc_entry(get_jattr(pjob, JOB_ATR_resc_released_list), prdef) == NULL)) {\n\t\t\t\tstruct array_strings *pval = get_sattr_arst(SVR_ATR_restrict_res_to_release_on_suspend);\n\t\t\t\tfor (j = 0; pval != NULL && j < pval->as_usedptr; j++) {\n\t\t\t\t\tif (strcmp(pval->as_string[j], prdef->rs_name) == 0) {\n\t\t\t\t\t\tpresc = add_resource_entry(get_jattr(pjob, JOB_ATR_resc_released_list), prdef);\n\t\t\t\t\t\tif (presc == NULL)\n\t\t\t\t\t\t\treturn 1;\n\t\t\t\t\t\tprdef->rs_set(&presc->rs_value, &presc_sq->rs_value, op);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\tFree pjob's vnodes whose parent mom is a sister mom.\n *\n * @param[in,out] pjob - Job structure\n * @param[in]\tvnodelist - non-NULL means it's the list of vnode names\n *\t\t\tto free. 
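update_resources_rel() only tracks resources carrying one of the assignment flags. That bitmask filter is easy to restate standalone; the flag bit values below are invented for the example (the real ATR_DFLAG_* constants are defined in attribute.h):

```c
#include <assert.h>

/* Example-only flag bits; the real values live in attribute.h. */
#define X_DFLAG_RASSN  0x01 /* accumulated at server/queue level */
#define X_DFLAG_ANASSN 0x02 /* accumulated on all assigned vnodes */
#define X_DFLAG_FNASSN 0x04 /* accumulated on the first vnode only */

/* A resource is released/tracked only if at least one assignment
 * flag is set, mirroring the test in update_resources_rel(). */
static int is_tracked(unsigned rs_flags)
{
	return (rs_flags & (X_DFLAG_RASSN | X_DFLAG_ANASSN | X_DFLAG_FNASSN)) != 0;
}
```

Testing the union of the three bits in one expression is what lets the function skip non-consumable resources (e.g. aoe) in a single branch.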
If NULL, free all the vnodes assigned\n *\t\t\tto 'pjob' whose parent mom is a sister mom.\n * @param[in]\tkeep_select - non-NULL means it's a select string that\n *\t\t\tdescribes vnodes to be kept while freeing all other vnodes\n *\t\t\tassigned to 'pjob' whose parent mom is a sister mom.\n * @param[out]  err_msg - if function returns != 0 (failure), return\n *\t\t\t  any error message in this buffer.\n * @param[in]\terr_msg_sz - size of 'err_msg' buf.\n * @param[in]\treply_req - the batch request to reply to if any.\n * @return int\n * @retval 0 - success\n * @retval != 0  - failure error code.\n */\nint\nfree_sister_vnodes(job *pjob, char *vnodelist, char *keep_select, char *err_msg,\n\t\t   int err_msg_sz, struct batch_request *reply_req)\n{\n\tint rc = 0;\n\tpbs_sched *psched;\n\n\tif (pjob == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"bad pjob parameter\");\n\t\treturn (1);\n\t}\n\n\tif (!is_jattr_set(pjob, JOB_ATR_exec_vnode))\n\t\treturn 0; /* nothing to free up */\n\n\tif (err_msg_sz > 0)\n\t\terr_msg[0] = '\\0';\n\n\t/* decrements everything found in exec_vnode */\n\tset_resc_assigned((void *) pjob, 0, DECR);\n\n\t/* re-create the job's exec_vnode based on free vnodes specs */\n\tif ((rc = recreate_exec_vnode(pjob, vnodelist, keep_select, err_msg,\n\t\t\t\t      err_msg_sz)) != 0) {\n\t\tset_resc_assigned((void *) pjob, 0, INCR);\n\t\treturn (rc);\n\t}\n\t/* increment everything found in new exec_vnode */\n\tset_resc_assigned((void *) pjob, 0, INCR);\n\n\tif (find_assoc_sched_jid(pjob->ji_qs.ji_jobid, &psched))\n\t\tset_scheduler_flag(SCH_SCHEDULE_TERM, psched);\n\telse {\n\t\tlog_err(-1, __func__, \"Unable to find scheduler associated with partition\");\n\t}\n\trc = send_job_exec_update_to_mom(pjob, err_msg, err_msg_sz, reply_req);\n\n\tif (rc == 0) {\n\t\taccount_job_update(pjob, PBS_ACCT_UPDATE);\n\t\taccount_jobstr(pjob, PBS_ACCT_NEXT);\n\t}\n\n\treturn (rc);\n}\n\n/**\n * @brief\n *\tWrapper for update_job_node_rassn().\n *\n 
* @param[in]\tpexech\t- exec_vnode string\n * @param[in]\top\t- operator of type enum batch_op.\n */\nvoid\nupdate_node_rassn(attribute *pexech, enum batch_op op)\n{\n\tupdate_job_node_rassn(NULL, pexech, op);\n}\n\n/**\n * @brief update the jid on the respective nodes in the execvnode string\n * and update the state based on share_job value\n * \n * @param[in] jid - job id\n * @param[in] exec_vnode - execvnode string\n * @param[in] op - operation INCR/DECR\n * @param[in] share_job - job sharing type\n */\nvoid\nupdate_jobs_on_node(char *jid, char *exec_vnode, int op, int share_job)\n{\n\tchar *chunk;\n\tint j;\n\tint nelem;\n\tchar *noden;\n\tint rc;\n\tresource_def *prdef = NULL;\n\tstruct key_value_pair *pkvp;\n\tint asgn = ATR_DFLAG_ANASSN | ATR_DFLAG_FNASSN;\n\tlong ncpus;\n\tstruct pbsnode *pnode;\n\n\tfor (chunk = parse_plus_spec(exec_vnode, &rc);\n\t     chunk && !rc; chunk = parse_plus_spec(NULL, &rc)) {\n\n\t\tif (parse_node_resc(chunk, &noden, &nelem, &pkvp) == 0) {\n\t\t\tif ((pnode = find_nodebyname(noden)) == NULL)\n\t\t\t\tcontinue;\n\t\t\tfor (j = 0; j < nelem; ++j) {\n\t\t\t\tprdef = find_resc_def(svr_resc_def, pkvp[j].kv_keyw);\n\t\t\t\tif (prdef == NULL)\n\t\t\t\t\treturn;\n\t\t\t\t/* skip all non-consumable resources (e.g. 
aoe) */\n\t\t\t\tif ((prdef->rs_flags & asgn) == 0)\n\t\t\t\t\tcontinue;\n\n\t\t\t\tif (!strcmp(prdef->rs_name, \"ncpus\")) {\n\t\t\t\t\tncpus = strtol(pkvp[j].kv_val, NULL, 10);\n\t\t\t\t\tif (ncpus < 0) {\n\t\t\t\t\t\tlog_errf(PBSE_SYSTEM, __func__, \"bad value for ncpus: %s\", pkvp[j].kv_val);\n\t\t\t\t\t\tncpus = 0;\n\t\t\t\t\t}\n\t\t\t\t\tif (op == INCR) {\n\t\t\t\t\t\tassign_jobs_on_subnode(pnode, ncpus, jid, 0, share_job);\n\t\t\t\t\t\tupdate_node_state(pnode, share_job);\n\t\t\t\t\t} else if (op == DECR)\n\t\t\t\t\t\tdeallocate_job_from_node(jid, pnode);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief - Degrade a reservation.\n *\n * This function is different from vnode_unavailable, as here we know the\n * reservation that needs to be degraded.\n *\n * @param[in] - pnode - pbsnode which has gone down.\n * @param[in] - presv - reservation that needs to be degraded.\n *\n */\n\nstatic void\nset_resv_for_degrade(struct pbsnode *pnode, resc_resv *presv)\n{\n\tlong degraded_time;\n\n\tif ((degraded_time = get_rattr_long(presv, RESV_ATR_resv_standing)) == 0)\n\t\tpresv->ri_degraded_time = degraded_time;\n\telse\n\t\tfind_degraded_occurrence(presv, pnode, Set_Degraded_Time);\n\n\tdegraded_time = presv->ri_degraded_time;\n\n\tif (degraded_time > (time_now + resv_retry_time))\n\t\tset_resv_retry(presv, (time_now + resv_retry_time));\n\n\t(void) resv_setResvState(presv, presv->ri_qs.ri_state, RESV_DEGRADED);\n\n\t/* the number of vnodes down could exceed the number of vnodes in\n\t * the reservation only in the case of a standing reservation for\n\t * which the vnodes unavailable are associated to later occurrences\n\t */\n\tif (presv->ri_vnodes_down > presv->ri_vnodect) {\n\t\t/* If a standing reservation we print the execvnodes sequence\n\t\t * string for debugging purposes\n\t\t */\n\t\tif (get_rattr_long(presv, RESV_ATR_resv_standing)) {\n\t\t\tchar *execvnodes = NULL;\n\t\t\tint occurrence = -1;\n\n\t\t\tif (is_rattr_set(presv, 
RESV_ATR_resv_execvnodes))\n\t\t\t\texecvnodes = get_rattr_str(presv, RESV_ATR_resv_execvnodes);\n\t\t\tif (execvnodes == NULL)\n\t\t\t\texecvnodes = \"\";\n \t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_RESV, LOG_DEBUG,\n\t\t\t           presv->ri_qs.ri_resvID, \"execvnodes sequence: %s\",\n\t\t\t           execvnodes);\n\t\t\tif (is_rattr_set(presv, RESV_ATR_resv_idx))\n\t\t\t\toccurrence = get_rattr_long(presv, RESV_ATR_resv_idx);\n\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_RESV, LOG_DEBUG,\n\t\t\t           presv->ri_qs.ri_resvID, \"vnodes in occurrence %d: %d; \"\n\t\t\t           \"unavailable vnodes in reservation: %d\",\n\t\t\t           occurrence, presv->ri_vnodect, presv->ri_vnodes_down);\n\t\t} else {\n\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_RESV, LOG_DEBUG,\n\t\t\t           presv->ri_qs.ri_resvID, \"vnodes in reservation: %d; \"\n\t\t\t           \"unavailable vnodes in reservation: %d\",\n\t\t\t           presv->ri_vnodect, presv->ri_vnodes_down);\n\t\t}\n\t}\n\tpresv->ri_vnodes_down++;\n}\n\n/**\n * \t@brief determine the new retry time for a resv\n *\n * \t@param[in] presv - the reservation\n *\n * \t@return long\n * \t@retval next resv retry time for the resv\n */\nlong\ndetermine_resv_retry(resc_resv *presv)\n{\n\tlong retry;\n\tlong resv_start = get_rattr_long(presv, RESV_ATR_start);\n\n\tif (time_now < resv_start && time_now + resv_retry_time > resv_start)\n\t\tretry = resv_start;\n\telse\n\t\tretry = time_now + resv_retry_time;\n\n\treturn retry;\n}\n"
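The window test in determine_resv_retry() can be restated standalone. This sketch uses plain `long` parameters in place of the server's `time_now` and `resv_retry_time` globals; the function name is illustrative only:

```c
#include <assert.h>

/* Standalone restatement of determine_resv_retry(): if the reservation
 * has not started yet and would start before the next regular retry,
 * retry exactly at its start time; otherwise retry one full interval
 * from now. */
static long next_retry_time(long now, long retry_interval, long resv_start)
{
	if (now < resv_start && now + retry_interval > resv_start)
		return resv_start;
	return now + retry_interval;
}
```

Capping the retry at the start time avoids scheduling a reconfirmation attempt after the reservation was already supposed to begin.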
  },
  {
    "path": "src/server/node_recov_db.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    node_recov_db.c\n *\n * @brief\n *\t\tnode_recov_db.c - This file contains the functions to record a node\n *\t\tdata structure to database and to recover it from database.\n *\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <sys/types.h>\n\n#include \"pbs_ifl.h\"\n#include <errno.h>\n#include <fcntl.h>\n#include <string.h>\n#include <stdlib.h>\n#include <time.h>\n\n#include <unistd.h>\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n\n#include \"log.h\"\n#include \"attribute.h\"\n#include \"list_link.h\"\n#include \"server_limits.h\"\n#include \"credential.h\"\n#include \"libpbs.h\"\n#include \"batch_request.h\"\n#include \"pbs_nodes.h\"\n#include \"job.h\"\n#include \"resource.h\"\n#include \"reservation.h\"\n#include \"queue.h\"\n#include \"svrfunc.h\"\n#include <memory.h>\n#include \"libutil.h\"\n#include \"pbs_db.h\"\n\nstruct pbsnode *recov_node_cb(pbs_db_obj_info_t *dbobj, int *refreshed);\nstruct pbsnode *pbsd_init_node(pbs_db_node_info_t *dbnode, int type);\n\n/**\n * @brief\n *\t\tconvert from database to node structure\n *\n * @param[out]\tpnode - Address of the node in the server\n * @param[in]\tpdbnd - Address of the database node object\n *\n * @return\tError code\n * @retval   0 - Success\n * @retval  -1 - 
Failure\n *\n */\nstatic int\ndb_to_node(struct pbsnode *pnode, pbs_db_node_info_t *pdbnd)\n{\n\tif (pdbnd->nd_name[0] != 0) {\n\t\tfree(pnode->nd_name); /* free any previously allocated value */\n\t\tpnode->nd_name = strdup(pdbnd->nd_name);\n\t\tif (pnode->nd_name == NULL)\n\t\t\treturn -1;\n\t} else\n\t\tpnode->nd_name = NULL;\n\n\tif (pdbnd->nd_hostname[0] != 0) {\n\t\tfree(pnode->nd_hostname); /* free any previously allocated value */\n\t\tpnode->nd_hostname = strdup(pdbnd->nd_hostname);\n\t\tif (pnode->nd_hostname == NULL) {\n\t\t\tfree(pnode->nd_name);\n\t\t\treturn -1;\n\t\t}\n\t} else\n\t\tpnode->nd_hostname = NULL;\n\n\tpnode->nd_ntype = pdbnd->nd_ntype;\n\tpnode->nd_state = pdbnd->nd_state;\n\tif (pnode->nd_pque)\n\t\tstrcpy(pnode->nd_pque->qu_qs.qu_name, pdbnd->nd_pque);\n\n\tif ((decode_attr_db(pnode, &pdbnd->db_attr_list.attrs, node_attr_idx, node_attr_def, pnode->nd_attr, ND_ATR_LAST, 0)) != 0)\n\t\treturn -1;\n\n\tpnode->nd_svrflags &= ~NODE_NEWOBJ;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tRecover a node from the database\n *\n * @param[in]\tnd_name\t- node name\n * @param[in]\tpnode\t- node pointer, if any, to be updated\n *\n * @return\tThe recovered node structure\n * @retval\tNULL - Failure\n * @retval\t!NULL - Success - address of recovered node returned\n */\nstruct pbsnode *\nnode_recov_db(char *nd_name, struct pbsnode *pnode)\n{\n\tpbs_db_obj_info_t obj;\n\tvoid *conn = (void *) svr_db_conn;\n\tpbs_db_node_info_t dbnode = {{0}};\n\tint rc = 0;\n\tstruct pbsnode *pnd = NULL;\n\tchar *conn_db_err = NULL;\n\n\tif (!pnode) {\n\t\tif ((pnd = malloc(sizeof(struct pbsnode)))) {\n\t\t\tpnode = pnd;\n\t\t\tinitialize_pbsnode(pnode, strdup(nd_name), NTYPE_PBS);\n\t\t} else {\n\t\t\tlog_err(-1, __func__, \"node_alloc failed\");\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\tstrcpy(dbnode.nd_name, nd_name);\n\tobj.pbs_db_obj_type = PBS_DB_NODE;\n\tobj.pbs_db_un.pbs_db_node = &dbnode;\n\n\trc = pbs_db_load_obj(conn, &obj);\n\tif (rc == -2)\n\t\treturn pnode; /* 
no change in node, return the same pnode */\n\n\tif (rc == 0)\n\t\trc = db_to_node(pnode, &dbnode);\n\telse {\n\t\tpbs_db_get_errmsg(PBS_DB_ERR, &conn_db_err);\n\t\tlog_errf(PBSE_INTERNAL, __func__, \"Failed to load node %s %s\", nd_name, conn_db_err ? conn_db_err : \"\");\n\t\tfree(conn_db_err);\n\t}\n\n\tfree_db_attr_list(&dbnode.db_attr_list);\n\n\tif (rc != 0) {\n\t\tpnode = NULL;\t /* so we return NULL */\n\t\tfree_pnode(pnd); /* free if we allocated here */\n\t}\n\treturn pnode;\n}\n\n/**\n * @brief\n *\t\tconvert node structure to DB format\n *\n * @param[in]\tpnode - Address of the node in the server\n * @param[out]\tpdbnd - Address of the database node object\n *\n * @return\tWhat to save\n * @retval\t-1 - Failure\n * @retval\t>=0 - What to save: 0=nothing, OBJ_SAVE_NEW or OBJ_SAVE_QS\n */\nstatic int\nnode_to_db(struct pbsnode *pnode, pbs_db_node_info_t *pdbnd)\n{\n\tint wrote_np = 0;\n\tsvrattrl *psvrl, *tmp;\n\tint vnode_sharing = 0;\n\tint savetype = 0;\n\n\tstrcpy(pdbnd->nd_name, pnode->nd_name);\n\n\tsavetype |= OBJ_SAVE_QS;\n\t/* nodes do not have a qs area, so we cannot check whether qs changed or not;\n\t * hence we always write the qs area, for now\n\t */\n\n\t/* node_index is used to sort vnodes upon recovery.\n\t* For Cray multi-MoM'd vnodes, we ensure that natural vnodes come\n\t* before the vnodes they manage by offsetting all\n\t* non-natural vnode indices to come after the natural vnodes.\n\t*/\n\tif (pnode->nd_nummoms > 1)\n\t\tpdbnd->nd_index = 1; /* multiple parent moms, Cray case */\n\telse\n\t\tpdbnd->nd_index = 0; /* default, single parent mom */\n\n\tif (pnode->nd_hostname)\n\t\tstrcpy(pdbnd->nd_hostname, pnode->nd_hostname);\n\n\tif (pnode->nd_moms && pnode->nd_moms[0])\n\t\tpdbnd->mom_modtime = pnode->nd_moms[0]->mi_modtime;\n\n\tpdbnd->nd_ntype = pnode->nd_ntype;\n\tpdbnd->nd_state = pnode->nd_state;\n\tif (pnode->nd_pque)\n\t\tstrcpy(pdbnd->nd_pque, pnode->nd_pque->qu_qs.qu_name);\n\telse\n\t\tpdbnd->nd_pque[0] = 0;\n\n\tif 
((encode_attr_db(node_attr_def, pnode->nd_attr, ND_ATR_LAST, &pdbnd->db_attr_list, 0)) != 0)\n\t\treturn -1;\n\n\t/* MSTODO: how can we optimize this loop - eliminate this? */\n\tpsvrl = (svrattrl *) GET_NEXT(pdbnd->db_attr_list.attrs);\n\twhile (psvrl != NULL) {\n\t\tif ((!wrote_np) && (strcmp(psvrl->al_name, ATTR_rescavail) == 0) && (strcmp(psvrl->al_resc, \"ncpus\") == 0)) {\n\t\t\twrote_np = 1;\n\t\t\tpsvrl = (svrattrl *) GET_NEXT(psvrl->al_link);\n\t\t\tcontinue;\n\t\t}\n\n\t\tif (strcmp(psvrl->al_name, ATTR_NODE_pcpus) == 0) {\n\t\t\t/* don't write out pcpus at this point, see */\n\t\t\t/* check for pcpus if needed after loop end */\n\t\t\ttmp = (svrattrl *) GET_NEXT(psvrl->al_link); /* store next node pointer */\n\t\t\tdelete_link(&psvrl->al_link);\n\t\t\tfree(psvrl);\n\t\t\tpdbnd->db_attr_list.attr_count--;\n\t\t\tpsvrl = tmp;\n\t\t\tcontinue;\n\n\t\t} else if (strcmp(psvrl->al_name, ATTR_NODE_resv_enable) == 0) {\n\t\t\t/* write resv_enable only if not default value */\n\t\t\tif ((psvrl->al_flags & ATR_VFLAG_DEFLT) != 0) {\n\t\t\t\ttmp = (svrattrl *) GET_NEXT(psvrl->al_link); /* store next node pointer */\n\t\t\t\tdelete_link(&psvrl->al_link);\n\t\t\t\tfree(psvrl);\n\t\t\t\tpdbnd->db_attr_list.attr_count--;\n\t\t\t\tpsvrl = tmp;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t}\n\t\tpsvrl = (svrattrl *) GET_NEXT(psvrl->al_link);\n\t}\n\n\t/*\n\t * Attributes with default values are not generally saved to disk.\n\t * However, to deal with some special cases, we need things for\n\t * attaching jobs to the vnodes on recovery that we do not have\n\t * until we hear from Mom, i.e. we:\n\t * 1. Need the number of cpus; if it isn't written as a non-default, as\n\t *    \"np\", then write \"pcpus\" which will be treated as a default\n\t * 2. Need the \"sharing\" attribute written even if default\n\t *    and not the default value (i.e. 
it came from Mom).\n\t *    so save it as the \"special\" [sharing] when it is a default\n\t */\n\tif (wrote_np == 0) {\n\t\tchar pcpu_str[24]; /* large enough for any long value */\n\t\tsvrattrl *pal;\n\n\t\t/* write the default value for the num of cpus */\n\t\tsnprintf(pcpu_str, sizeof(pcpu_str), \"%ld\", pnode->nd_nsn);\n\t\tpal = make_attr(ATTR_NODE_pcpus, \"\", pcpu_str, 0);\n\t\tappend_link(&pdbnd->db_attr_list.attrs, &pal->al_link, pal);\n\n\t\tpdbnd->db_attr_list.attr_count++;\n\t}\n\n\tif (vnode_sharing) {\n\t\tchar *vn_str;\n\t\tsvrattrl *pal;\n\n\t\tvn_str = vnode_sharing_to_str((enum vnode_sharing) get_nattr_long(pnode, ND_ATR_Sharing));\n\t\tpal = make_attr(ATTR_NODE_Sharing, \"\", vn_str, 0);\n\t\tappend_link(&pdbnd->db_attr_list.attrs, &pal->al_link, pal);\n\n\t\tpdbnd->db_attr_list.attr_count++;\n\t}\n\n\treturn savetype;\n}\n\n/**\n * @brief\n *\tSave a node to the database. When we save a node to the database, delete\n *\tthe old node information and write the node afresh. This ensures that\n *\tany deleted attributes of the node are removed, and only the new ones are\n *\tupdated to the database.\n *\n * @param[in]\tpnode - Pointer to the node to save\n *\n * @return      Error code\n * @retval\t0 - Success\n * @retval\t-1 - Failure\n *\n */\nint\nnode_save_db(struct pbsnode *pnode)\n{\n\tpbs_db_node_info_t dbnode = {{0}};\n\tpbs_db_obj_info_t obj;\n\tvoid *conn = (void *) svr_db_conn;\n\tchar *conn_db_err = NULL;\n\tint savetype;\n\tint rc = -1;\n\n\tif ((savetype = node_to_db(pnode, &dbnode)) == -1)\n\t\tgoto done;\n\n\tobj.pbs_db_obj_type = PBS_DB_NODE;\n\tobj.pbs_db_un.pbs_db_node = &dbnode;\n\n\tif ((rc = pbs_db_save_obj(conn, &obj, savetype)) != 0) {\n\t\tsavetype |= (OBJ_SAVE_NEW | OBJ_SAVE_QS);\n\t\trc = pbs_db_save_obj(conn, &obj, savetype);\n\t}\n\n\tif (rc == 0)\n\t\tpnode->nd_svrflags &= ~NODE_NEWOBJ;\n\ndone:\n\tfree_db_attr_list(&dbnode.db_attr_list);\n\n\tif (rc != 0) {\n\t\tpbs_db_get_errmsg(PBS_DB_ERR, &conn_db_err);\n\t\tlog_errf(PBSE_INTERNAL, __func__, \"Failed to save node %s 
%s\", pnode->nd_name, conn_db_err ? conn_db_err : \"\");\n\t\tfree(conn_db_err);\n\t\tpanic_stop_db();\n\t}\n\treturn rc;\n}\n\n/**\n * @brief\n *\tDelete a node from the database\n *\n * @param[in]\tpnode - Pointer to the node to delete\n *\n * @return      Error code\n * @retval\t0 - Success\n * @retval\t-1 - Failure\n *\n */\nint\nnode_delete_db(struct pbsnode *pnode)\n{\n\tpbs_db_node_info_t dbnode = {{0}};\n\tpbs_db_obj_info_t obj;\n\tvoid *conn = (void *) svr_db_conn;\n\tchar *conn_db_err = NULL;\n\n\tstrcpy(dbnode.nd_name, pnode->nd_name);\n\tobj.pbs_db_obj_type = PBS_DB_NODE;\n\tobj.pbs_db_un.pbs_db_node = &dbnode;\n\n\tif (pbs_db_delete_obj(conn, &obj) == -1) {\n\t\tpbs_db_get_errmsg(PBS_DB_ERR, &conn_db_err);\n\t\tlog_errf(PBSE_INTERNAL, __func__, \"Failed to delete node %s %s\", pnode->nd_name, conn_db_err ? conn_db_err : \"\");\n\t\tfree(conn_db_err);\n\t\treturn (-1);\n\t} else\n\t\treturn (0); /* \"success\" or \"success but no rows deleted\" */\n}\n\n/**\n * @brief\n *\tRefresh/retrieve node from database and add it into node list if not present\n *\n *\t@param[in]\tdbobj - The pointer to the wrapper node object of type pbs_db_node_info_t\n *\t@param[in]\trefreshed - To check if node refreshed\n *\n * @return\tThe recovered node\n * @retval\tNULL - Failure\n * @retval\t!NULL - Success, pointer to node structure recovered\n *\n */\nstruct pbsnode *\nrecov_node_cb(pbs_db_obj_info_t *dbobj, int *refreshed)\n{\n\tstruct pbsnode *pnode = NULL;\n\tpbs_db_node_info_t *dbnode = dbobj->pbs_db_un.pbs_db_node;\n\tint load_type = 0;\n\textern time_t time_now;\n\n\t*refreshed = 0;\n\tif ((pnode = pbsd_init_node(dbnode, load_type)) != NULL) {\n\t\t*refreshed = 1;\n\t\tpnode->nd_state |= (dbnode->nd_state & INUSE_NOAUTO_MASK);\n\t\tif (pnode->nd_state != get_nattr_long(pnode, ND_ATR_state)) {\n\t\t\tset_nattr_l_slim(pnode, ND_ATR_state, pnode->nd_state, SET);\n\t\t\tset_nattr_l_slim(pnode, ND_ATR_last_state_change_time, time_now, SET);\n\t\t}\n\t\tpnode->nd_modified = 0;\n\t}\n\n\tfree_db_attr_list(&dbnode->db_attr_list);\n\tif (pnode == 
NULL)\n\t\tlog_errf(-1, __func__, \"Failed to load node %s\", dbnode->nd_name);\n\treturn pnode;\n}\n"
  },
  {
    "path": "src/server/pbs_comm.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_comm.c\n *\n * @brief\n * \t\tImplements the router (pbs_comm) process in the TCP network\n *\n * @par\tFunctionality:\n *          Reads its own router name and port from the pbs config file.\n *          Also reads names of other routers from the pbs config file.\n *          It then calls tpp_router_init to initialize itself as a\n *          router process, and sleeps in a while loop.\n *\n *          It protects itself from SIGPIPEs generated by sends() inside\n *          the library by setting the disposition to ignore.\n *\n */\n#include <pbs_config.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <fcntl.h>\n#include <unistd.h>\n#include <sys/stat.h>\n#include <signal.h>\n#include <avltree.h>\n\n#include <grp.h>\n#include <sys/resource.h>\n\n#include <ctype.h>\n\n#include \"pbs_ifl.h\"\n#include \"pbs_internal.h\"\n#include \"log.h\"\n#include \"tpp.h\"\n#include \"server_limits.h\"\n#include \"pbs_version.h\"\n#include \"auth.h\"\n\nchar daemonname[PBS_MAXHOSTNAME + 8];\nextern char *msg_corelimit;\nextern char *msg_init_chdir;\nint lockfds;\nint already_forked = 0;\n#define PBS_COMM_LOGDIR \"comm_logs\"\n\nchar server_host[PBS_MAXHOSTNAME + 1];\t/* host_name of server */\nchar primary_host[PBS_MAXHOSTNAME + 1]; /* host_name of primary */\n\nstatic int stalone = 0; /* is program running 
not as a service ? */\nstatic int get_out = 0;\nstatic int hupped = 0;\n\n/*\n * Server failover role\n */\nenum failover_state {\n\tFAILOVER_NONE,\t       /* Only Server, no failover */\n\tFAILOVER_PRIMARY,      /* Primary in failover configuration */\n\tFAILOVER_SECONDARY,    /* Secondary in failover */\n\tFAILOVER_CONFIG_ERROR, /* error in configuration */\n\tFAILOVER_NEITHER,      /* failover configured, but I am neither primary/secondary */\n};\n\n/**\n * @brief\n *\t\tare_we_primary - determines the failover role, are we the Primary\n *\t\tServer, the Secondary Server or the only Server (no failover)\n *\n * @return\tint: failover server role\n * @retval  FAILOVER_NONE\t- failover not configured\n * @retval  FAILOVER_PRIMARY\t- Primary Server\n * @retval  FAILOVER_SECONDARY\t- Secondary Server\n * @retval  FAILOVER_CONFIG_ERROR\t- error in pbs.conf configuration\n * @retval  FAILOVER_NEITHER\t- failover configured, but I am neither primary/secondary\n */\nenum failover_state\nare_we_primary(void)\n{\n\tchar hn1[PBS_MAXHOSTNAME + 1];\n\n\t/* both secondary and primary should be set or neither set */\n\tif ((pbs_conf.pbs_secondary == NULL) && (pbs_conf.pbs_primary == NULL))\n\t\treturn FAILOVER_NONE;\n\n\tif ((pbs_conf.pbs_secondary == NULL) || (pbs_conf.pbs_primary == NULL))\n\t\treturn FAILOVER_CONFIG_ERROR;\n\n\tif (get_fullhostname(pbs_conf.pbs_primary, primary_host, (sizeof(primary_host) - 1)) == -1) {\n\t\tlog_err(-1, __func__, \"Unable to get full host name of primary\");\n\t\treturn FAILOVER_CONFIG_ERROR;\n\t}\n\n\tif (strcmp(primary_host, server_host) == 0)\n\t\treturn FAILOVER_PRIMARY; /* we are the listed primary */\n\n\tif (get_fullhostname(pbs_conf.pbs_secondary, hn1, (sizeof(hn1) - 1)) == -1) {\n\t\tlog_err(-1, __func__, \"Unable to get full host name of secondary\");\n\t\treturn FAILOVER_CONFIG_ERROR;\n\t}\n\tif (strcmp(hn1, server_host) == 0)\n\t\treturn FAILOVER_SECONDARY; /* we are the secondary */\n\n\treturn FAILOVER_NEITHER; /* failover 
configured, but I am neither primary nor secondary */\n}\n\n/**\n * @brief\n *\t\tusage - prints the usage message when the user gives invalid arguments\n *\n * @param[in]\tprog\t- program name which will be typed in the terminal along with arguments.\n *\n * @return\tvoid\n */\nvoid\nusage(char *prog)\n{\n\tfprintf(stderr, \"Usage: %s [-r other_pbs_comms][-t threads][-N]\\n\"\n\t\t\t\"       %s --version\\n\",\n\t\tprog, prog);\n}\n\n/**\n * @brief\n * \t\tTermination signal handler for the pbs_comm daemon\n *\n * \t\tHandles termination signals like SIGTERM and sets a global\n * \t\tvariable to exit the daemon\n *\n * @param[in] sig - name of signal caught\n *\n * @return\tvoid\n */\nstatic void\nstop_me(int sig)\n{\n\tchar buf[100];\n\n\t/* set global variable to stop loop */\n\tget_out = 1;\n\tsprintf(buf, \"Caught signal %d\\n\", sig);\n\tfprintf(stderr, \"%s\\n\", buf);\n\tlog_err(-1, __func__, buf);\n}\n\n/**\n * @brief\n * \t\tHUP handler for the pbs_comm daemon\n *\n * \t\tHandles the SIGHUP signal and sets a global\n * \t\tvariable to reload the daemon configuration\n *\n * @param[in]\tsig\t- name of signal caught\n *\n * @return\tvoid\n */\nstatic void\nhup_me(int sig)\n{\n\tchar buf[100];\n\n\t/* set global variable to reload configuration */\n\thupped = 1;\n\tsprintf(buf, \"Caught signal %d\\n\", sig);\n\tfprintf(stderr, \"%s\\n\", buf);\n\tlog_err(-1, __func__, buf);\n}\n\n/**\n * @brief\n * \t\tlock out the lockfile for this daemon\n * \t\t\t$PBS_HOME/server_priv/comm.lock\n *\n * @param[in] fds - lockfile fd to lock\n * @param[in] op  - F_WRLCK to lock, F_UNLCK to unlock\n *\n * @return\tvoid\n */\nstatic void\nlock_out(int fds, int op)\n{\n\tint i;\n\tint j;\n\tstruct flock flock;\n\tchar buf[100];\n\n\tj = 1; /* not fail over, try lock one time */\n\n\t(void) lseek(fds, (off_t) 0, SEEK_SET);\n\tflock.l_type = op;\n\tflock.l_whence = SEEK_SET;\n\tflock.l_start = 0;\n\tflock.l_len = 0;\n\tfor (i = 0; i < j; i++) {\n\t\tif (fcntl(fds, F_SETLK, &flock) != -1) 
{\n\t\t\tif (op == F_WRLCK) {\n\t\t\t\t/* if write-lock, record pid in file */\n\t\t\t\tif (ftruncate(fds, (off_t) 0) == -1)\n\t\t\t\t\tlog_errf(-1, __func__, \"ftruncate failed. ERR : %s\", strerror(errno));\n\t\t\t\t(void) sprintf(buf, \"%d\\n\", getpid());\n\t\t\t\tif (write(fds, buf, strlen(buf)) == -1)\n\t\t\t\t\tlog_errf(-1, __func__, \"write failed. ERR : %s\", strerror(errno));\n\t\t\t}\n\t\t\treturn;\n\t\t}\n\t\tsleep(2);\n\t}\n\n\t(void) strcpy(log_buffer, \"another PBS comm router running at the same port\");\n\tfprintf(stderr, \"pbs_comm: %s\\n\", log_buffer);\n\texit(1);\n}\n\n#ifndef DEBUG\n/**\n * @brief\n * \t\tredirect stdin, stdout and stderr to /dev/null\n *\t\tNot done if compiled with debug\n */\nvoid\npbs_close_stdfiles(void)\n{\n\tstatic int already_done = 0;\n\n\tif (!already_done) {\n\t\t(void) fclose(stdin);\n\t\t(void) fclose(stdout);\n\t\t(void) fclose(stderr);\n\n\t\tif (fopen(NULL_DEVICE, \"r\") == NULL)\n\t\t\tlog_errf(-1, __func__, \"fopen of null device failed. ERR : %s\", strerror(errno));\n\t\tif (fopen(NULL_DEVICE, \"w\") == NULL)\n\t\t\tlog_errf(-1, __func__, \"fopen of null device failed. ERR : %s\", strerror(errno));\n\t\tif (fopen(NULL_DEVICE, \"w\") == NULL)\n\t\t\tlog_errf(-1, __func__, \"fopen of null device failed. ERR : %s\", strerror(errno));\n\t\talready_done = 1;\n\t}\n}\n\n/**\n * @brief\n *\t\tForks a background process and continues on that, while\n * \t\texiting the foreground process. It also sets the child process to\n * \t\tbecome the session leader. 
This function is available only on non-Windows\n * \t\tplatforms and in non-debug mode.\n *\n * @return -  pid_t\t- sid of the child process (result of setsid)\n * @retval       >0\t- sid of the child process.\n * @retval       -1\t- Fork or setsid failed.\n */\nstatic pid_t\ngo_to_background()\n{\n\tpid_t sid = -1;\n\tint rc;\n\n\tlock_out(lockfds, F_UNLCK);\n\trc = fork();\n\tif (rc == -1) /* fork failed */\n\t\treturn ((pid_t) -1);\n\tif (rc > 0)\n\t\texit(0); /* parent goes away, allowing booting to continue */\n\n\tlock_out(lockfds, F_WRLCK);\n\tif ((sid = setsid()) == -1) {\n\t\tfprintf(stderr, \"pbs_comm: setsid failed\");\n\t\treturn ((pid_t) -1);\n\t}\n\talready_forked = 1;\n\treturn sid;\n}\n#endif /* !DEBUG */\n\n/**\n * @brief\n *\t\tmain - the initialization and main loop of pbs_comm\n *\n * @param[in]\targc\t- argument count.\n * @param[in]\targv\t- argument values.\n *\n * @return\tint\n * @retval\t0\t- success\n * @retval\t>0\t- error\n */\nint\nmain(int argc, char **argv)\n{\n\tchar *name = NULL;\n\tstruct tpp_config conf;\n\tint tpp_fd;\n\tchar *pc;\n\tint numthreads;\n\tchar lockfile[MAXPATHLEN + 1];\n\tchar path_log[MAXPATHLEN + 1];\n\tchar svr_home[MAXPATHLEN + 1];\n\tchar *log_file = 0;\n\tchar *host;\n\tint port;\n\tchar *routers = NULL;\n\tint c, i, rc;\n\textern char *optarg;\n\tint are_primary;\n\tint num_var_env;\n\tstruct sigaction act;\n\tstruct sigaction oact;\n\n\t/*the real deal or just pbs_version and exit*/\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\t/* As a security measure and to make sure all file descriptors\t*/\n\t/* are available to us,  close all above stderr\t\t\t*/\n\n\ti = sysconf(_SC_OPEN_MAX);\n\twhile (--i > 2)\n\t\t(void) close(i); /* close any file desc left open by parent */\n\n\t/* If we are not run with real and effective uid of 0, forget it */\n\tif ((getuid() != 0) || (geteuid() != 0)) {\n\t\tfprintf(stderr, \"%s: Must be run by root\\n\", argv[0]);\n\t\treturn (2);\n\t}\n\n\t/* set standard umask 
*/\n\tumask(022);\n\n\t/* load the pbs conf file */\n\tif (pbs_loadconf(0) == 0) {\n\t\tfprintf(stderr, \"%s: Configuration error\\n\", argv[0]);\n\t\treturn (1);\n\t}\n\n\tset_log_conf(pbs_conf.pbs_leaf_name, pbs_conf.pbs_mom_node_name,\n\t\t     pbs_conf.locallog, pbs_conf.syslogfac,\n\t\t     pbs_conf.syslogsvr, pbs_conf.pbs_log_highres_timestamp);\n\n\tumask(022);\n\n\t/* The following is code to reduce security risks                */\n\t/* start out with standard umask, system resource limit infinite */\n\tif ((num_var_env = setup_env(pbs_conf.pbs_environment)) == -1) {\n\t\texit(1);\n\t}\n\n\ti = getgid();\n\t(void) setgroups(1, (gid_t *) &i); /* secure suppl. groups */\n\n\tlog_event_mask = &pbs_conf.pbs_comm_log_events;\n\ttpp_set_logmask(*log_event_mask);\n\n\trouters = pbs_conf.pbs_comm_routers;\n\tnumthreads = pbs_conf.pbs_comm_threads;\n\n\tserver_host[0] = '\\0';\n\tif (pbs_conf.pbs_comm_name) {\n\t\tname = pbs_conf.pbs_comm_name;\n\t\thost = tpp_parse_hostname(name, &port);\n\t\tif (host)\n\t\t\tsnprintf(server_host, sizeof(server_host), \"%s\", host);\n\t\tfree(host);\n\t\thost = NULL;\n\t} else if (pbs_conf.pbs_leaf_name) {\n\t\tchar *endp;\n\n\t\tsnprintf(server_host, sizeof(server_host), \"%s\", pbs_conf.pbs_leaf_name);\n\t\tendp = strchr(server_host, ','); /* find the first name */\n\t\tif (endp)\n\t\t\t*endp = '\\0';\n\t\tendp = strchr(server_host, ':'); /* cut out the port */\n\t\tif (endp)\n\t\t\t*endp = '\\0';\n\t\tname = server_host;\n\t} else {\n\t\tif (gethostname(server_host, (sizeof(server_host) - 1)) == -1) {\n\t\t\tsprintf(log_buffer, \"Could not determine my hostname, errno=%d\", errno);\n\t\t\tfprintf(stderr, \"%s\\n\", log_buffer);\n\t\t\treturn (1);\n\t\t}\n\t\tif ((get_fullhostname(server_host, server_host, (sizeof(server_host) - 1)) == -1)) {\n\t\t\tsprintf(log_buffer, \"Could not determine my hostname\");\n\t\t\tfprintf(stderr, \"%s\\n\", log_buffer);\n\t\t\treturn (1);\n\t\t}\n\t\tname = server_host;\n\t}\n\tif (server_host[0] 
== '\\0') {\n\t\tsprintf(log_buffer, \"Could not determine server host\");\n\t\tfprintf(stderr, \"%s\\n\", log_buffer);\n\t\treturn (1);\n\t}\n\n\twhile ((c = getopt(argc, argv, \"r:t:e:N\")) != -1) {\n\t\tswitch (c) {\n\t\t\tcase 'e':\n\t\t\t\t*log_event_mask = strtol(optarg, NULL, 0);\n\t\t\t\tbreak;\n\t\t\tcase 'r':\n\t\t\t\trouters = optarg;\n\t\t\t\tbreak;\n\t\t\tcase 't':\n\t\t\t\tnumthreads = atol(optarg);\n\t\t\t\tif (numthreads == -1) {\n\t\t\t\t\tusage(argv[0]);\n\t\t\t\t\treturn (1);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'N':\n\t\t\t\tstalone = 1;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tusage(argv[0]);\n\t\t\t\treturn (1);\n\t\t}\n\t}\n\n\t(void) strcpy(daemonname, \"Comm@\");\n\t(void) strcat(daemonname, name);\n\tif ((pc = strchr(daemonname, (int) '.')) != NULL)\n\t\t*pc = '\\0';\n\n\tif (set_msgdaemonname(daemonname)) {\n\t\tfprintf(stderr, \"Out of memory\\n\");\n\t\treturn 1;\n\t}\n\n\t(void) snprintf(path_log, sizeof(path_log), \"%s/%s\", pbs_conf.pbs_home_path, PBS_COMM_LOGDIR);\n\t(void) log_open(log_file, path_log);\n\n\t/* set pbs_comm's process limits */\n\tset_proc_limits(pbs_conf.pbs_core_limit, TPP_MAXOPENFD); /* set_proc_limits can call log_record, so call only after opening log file */\n\n\t(void) snprintf(svr_home, sizeof(svr_home), \"%s/%s\", pbs_conf.pbs_home_path, PBS_SVR_PRIVATE);\n\tif (chdir(svr_home) != 0) {\n\t\t(void) sprintf(log_buffer, msg_init_chdir, svr_home);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\treturn (1);\n\t}\n\n\t(void) sprintf(lockfile, \"%s/%s/comm.lock\", pbs_conf.pbs_home_path, PBS_SVR_PRIVATE);\n\tif ((are_primary = are_we_primary()) == FAILOVER_SECONDARY) {\n\t\tstrcat(lockfile, \".secondary\");\n\t} else if (are_primary == FAILOVER_CONFIG_ERROR) {\n\t\tsprintf(log_buffer, \"Failover configuration error\");\n\t\tlog_err(-1, __func__, log_buffer);\n\t\treturn (3);\n\t}\n\n\tif ((lockfds = open(lockfile, O_CREAT | O_WRONLY, 0600)) < 0) {\n\t\t(void) sprintf(log_buffer, \"pbs_comm: unable to open lock 
file\");\n\t\tlog_err(errno, __func__, log_buffer);\n\t\treturn (1);\n\t}\n\n\tif ((host = tpp_parse_hostname(name, &port)) == NULL) {\n\t\tsprintf(log_buffer, \"Out of memory parsing leaf name\");\n\t\tlog_err(errno, __func__, log_buffer);\n\t\treturn (1);\n\t}\n\n\t/* set tpp config */\n\trc = set_tpp_config(&pbs_conf, &conf, host, port, routers);\n\tif (rc == -1) {\n\t\t(void) sprintf(log_buffer, \"Error setting TPP config\");\n\t\tlog_err(-1, __func__, log_buffer);\n\t\treturn (1);\n\t}\n\tfree(host);\n\n\ti = 0;\n\tif (conf.routers) {\n\t\twhile (conf.routers[i]) {\n\t\t\tsprintf(log_buffer, \"Router[%d]:%s\", i, conf.routers[i]);\n\t\t\tfprintf(stdout, \"%s\\n\", log_buffer);\n\t\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER, LOG_INFO, msg_daemonname, log_buffer);\n\t\t\ti++;\n\t\t}\n\t}\n\n#ifndef DEBUG\n\tif (stalone != 1)\n\t\tgo_to_background();\n#endif\n\n\tif (already_forked == 0)\n\t\tlock_out(lockfds, F_WRLCK);\n\n\t/* go_to_background call creates a forked process,\n\t * thus print/log pid only after go_to_background()\n\t * has been called\n\t */\n\tsprintf(log_buffer, \"%s ready (pid=%d), Proxy Name:%s, Threads:%d\", argv[0], getpid(), conf.node_name, numthreads);\n\tfprintf(stdout, \"%s\\n\", log_buffer);\n\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER, LOG_INFO, msg_daemonname, log_buffer);\n\tlog_supported_auth_methods(pbs_conf.supported_auth_methods);\n\n#ifndef DEBUG\n\tpbs_close_stdfiles();\n#endif\n\n\t/* comm runs 1 + tpp_conf.numthreads threads - these might use avltree functionality */\n\tavl_set_maxthreads(numthreads + 1);\n\n\tsigemptyset(&act.sa_mask);\n\tact.sa_flags = 0;\n\tact.sa_handler = hup_me;\n\tif (sigaction(SIGHUP, &act, &oact) != 0) {\n\t\tlog_err(errno, __func__, \"sigaction for HUP\");\n\t\treturn (2);\n\t}\n\tact.sa_handler = stop_me;\n\tif (sigaction(SIGINT, &act, &oact) != 0) {\n\t\tlog_err(errno, __func__, \"sigaction for INT\");\n\t\treturn (2);\n\t}\n\tif (sigaction(SIGTERM, 
&act, &oact) != 0) {\n\t\tlog_err(errno, __func__, \"sigaction for TERM\");\n\t\treturn (2);\n\t}\n\tif (sigaction(SIGQUIT, &act, &oact) != 0) {\n\t\tlog_err(errno, __func__, \"sigaction for QUIT\");\n\t\treturn (2);\n\t}\n#ifdef SIGSHUTDN\n\tif (sigaction(SIGSHUTDN, &act, &oact) != 0) {\n\t\tlog_err(errno, __func__, \"sigaction for SHUTDN\");\n\t\treturn (2);\n\t}\n#endif /* SIGSHUTDN */\n\n\tact.sa_handler = SIG_IGN;\n\tif (sigaction(SIGPIPE, &act, &oact) != 0) {\n\t\tlog_err(errno, __func__, \"sigaction for PIPE\");\n\t\treturn (2);\n\t}\n\tif (sigaction(SIGUSR2, &act, &oact) != 0) {\n\t\tlog_err(errno, __func__, \"sigaction for USR2\");\n\t\treturn (2);\n\t}\n\tif (sigaction(SIGUSR1, &act, &oact) != 0) {\n\t\tlog_err(errno, __func__, \"sigaction for USR1\");\n\t\treturn (2);\n\t}\n\tif (load_auths(AUTH_SERVER)) {\n\t\tlog_err(-1, __func__, \"Failed to load auth lib\");\n\t\treturn 2;\n\t}\n\n\tconf.node_type = TPP_ROUTER_NODE;\n\tconf.numthreads = numthreads;\n\n\tif ((tpp_fd = tpp_init_router(&conf)) == -1) {\n\t\tlog_err(-1, __func__, \"tpp init failed\\n\");\n\t\treturn 1;\n\t}\n\n\t/* Protect from being killed by kernel */\n\tdaemon_protect(0, PBS_DAEMON_PROTECT_ON);\n\n\t/* go in a while loop */\n\twhile (get_out == 0) {\n\n\t\tif (hupped == 1) {\n\t\t\tstruct pbs_config pbs_conf_bak;\n\t\t\tint new_logevent;\n\n\t\t\thupped = 0; /* reset back */\n\t\t\tmemcpy(&pbs_conf_bak, &pbs_conf, sizeof(struct pbs_config));\n\n\t\t\tif (pbs_loadconf(1) == 0) {\n\t\t\t\tlog_err(-1, __func__, \"Configuration error, ignoring\");\n\t\t\t\tmemcpy(&pbs_conf, &pbs_conf_bak, sizeof(struct pbs_config));\n\t\t\t} else {\n\t\t\t\t/* restore old pbs.conf, keeping only the new log event mask */\n\t\t\t\tnew_logevent = pbs_conf.pbs_comm_log_events;\n\t\t\t\tmemcpy(&pbs_conf, &pbs_conf_bak, sizeof(struct pbs_config));\n\t\t\t\tpbs_conf.pbs_comm_log_events = new_logevent;\n\t\t\t\tlog_err(-1, __func__, \"Processed SIGHUP\");\n\n\t\t\t\tlog_event_mask = 
&pbs_conf.pbs_comm_log_events;\n\t\t\t\ttpp_set_logmask(*log_event_mask);\n\t\t\t\tset_log_conf(pbs_conf.pbs_leaf_name, pbs_conf.pbs_mom_node_name,\n\t\t\t\t\t     pbs_conf.locallog, pbs_conf.syslogfac,\n\t\t\t\t\t     pbs_conf.syslogsvr, pbs_conf.pbs_log_highres_timestamp);\n\t\t\t}\n\t\t}\n\t\tsleep(3);\n\t}\n\n\ttpp_router_shutdown();\n\n\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER, LOG_NOTICE, msg_daemonname, \"Exiting\");\n\tlog_close(1);\n\n\tlock_out(lockfds, F_UNLCK); /* unlock  */\n\t(void) close(lockfds);\n\t(void) unlink(lockfile);\n\tunload_auths();\n\n\treturn 0;\n}\n"
  },
  {
    "path": "src/server/pbs_db_func.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    pbsd_db_func.c\n *\n * @brief\n * pbsd_db_func.c - contains functions to initialize several pbs data structures.\n *\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/types.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <ctype.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <memory.h>\n#include <signal.h>\n#include <time.h>\n#include <sys/stat.h>\n#include <libutil.h>\n\n#include <dirent.h>\n#include <grp.h>\n#include <netdb.h>\n#include <pwd.h>\n#include <unistd.h>\n#include <sys/param.h>\n#include <sys/resource.h>\n#include <sys/time.h>\n\n#include \"libpbs.h\"\n#include \"pbs_ifl.h\"\n#include \"net_connect.h\"\n#include \"log.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"server_limits.h\"\n#include \"server.h\"\n#include \"credential.h\"\n#include \"ticket.h\"\n#include \"batch_request.h\"\n#include \"work_task.h\"\n#include \"resv_node.h\"\n#include \"job.h\"\n#include \"queue.h\"\n#include \"reservation.h\"\n#include \"pbs_db.h\"\n#include \"pbs_nodes.h\"\n#include \"tracking.h\"\n#include \"provision.h\"\n#include \"avltree.h\"\n#include \"svrfunc.h\"\n#include \"acct.h\"\n#include \"pbs_version.h\"\n#include \"pbs_license.h\"\n#include \"resource.h\"\n#include \"pbs_python.h\"\n#include \"hook.h\"\n#include 
\"hook_func.h\"\n#include \"pbs_share.h\"\n\n#ifndef SIGKILL\n/* there is some weird stuff in gcc include files signal.h & sys/params.h */\n#include <signal.h>\n#endif\n\n#define MAX_DB_RETRIES 5\n#define MAX_DB_LOOP_DELAY 10\n#define IPV4_STR_LEN 15\nstatic int db_oper_failed_times = 0;\nstatic int last_rc = -1; /* we need to reset db_oper_failed_times for each state change of the db */\nstatic int conn_db_state = 0;\nextern int pbs_failover_active;\nextern int server_init_type;\nextern int stalone;\t\t\t  /* is program running not as a service? */\nchar conn_db_host[PBS_MAXSERVERNAME + 1]; /* db host where connection is made */\nvoid *svr_db_conn = NULL;\t\t  /* server's global database connection pointer */\nvoid *conn = NULL;\t\t\t  /* pointer to work out a valid connection - later assigned to svr_db_conn */\nextern int pbs_decrypt_pwd(char *, int, size_t, char **, const unsigned char *, const unsigned char *);\nextern pid_t go_to_background();\nvoid *setup_db_connection(char *, int, int);\nstatic void *get_db_connect_information();\nstatic int touch_db_stop_file(void);\nstatic int start_db();\nvoid stop_db();\n\n/**\n * @brief\n *\t\tChecks whether database is down, and if so\n *\t\tstarts up the database in asynchronous mode.\n *\n * @return - Failure code\n * @retval   PBS_DB_DOWN - Error in pbs_start_db\n * @retval   PBS_DB_STARTING - Database is starting\n *\n */\nstatic int\nstart_db()\n{\n\tchar *failstr = NULL;\n\tint rc;\n\n\tlog_eventf(PBSEVENT_SYSTEM | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER, LOG_CRIT, msg_daemonname, \"Starting PBS dataservice\");\n\n\trc = pbs_start_db(conn_db_host, pbs_conf.pbs_data_service_port);\n\tif (rc != 0) {\n\t\tif (rc == PBS_DB_OOM_ERR) {\n\t\t\tpbs_db_get_errmsg(PBS_DB_OOM_ERR, &failstr);\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE, \"%s %s\", \"WARNING:\", failstr ? 
failstr : \"\");\n\t\t} else {\n\t\t\tpbs_db_get_errmsg(PBS_DB_ERR, &failstr);\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE, \"%s %s\", \"Failed to start PBS dataservice.\", failstr ? failstr : \"\");\n\t\t}\n\t\tlog_eventf(PBSEVENT_SYSTEM | PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER, LOG_ERR, msg_daemonname, log_buffer);\n\t\tfprintf(stderr, \"%s\\n\", log_buffer);\n\t\tfree(failstr);\n\t\tif (rc != PBS_DB_OOM_ERR)\n\t\t\treturn PBS_DB_DOWN;\n\t}\n\n\tsleep(1); /* give time for database to at least establish the ports */\n\treturn PBS_DB_STARTING;\n}\n\n/**\n * @brief\n *\t\tStop the database if it is up, and log a message if the database\n *\t\tfailed to stop.\n *\t\tKeep retrying the stop, with an incremental delay, until it succeeds.\n */\nvoid\nstop_db()\n{\n\tchar *db_err = NULL;\n\tint db_delay = 0;\n\tpbs_db_disconnect(svr_db_conn);\n\tsvr_db_conn = NULL;\n\n\t/* check status of db, shutdown if up */\n\tdb_oper_failed_times = 0;\n\twhile (1) {\n\t\tif (pbs_status_db(conn_db_host, pbs_conf.pbs_data_service_port) != 0)\n\t\t\treturn; /* dataservice not running, got killed? */\n\n\t\tlog_eventf(PBSEVENT_SYSTEM | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER, LOG_CRIT, msg_daemonname, \"Stopping PBS dataservice\");\n\n\t\tif (pbs_stop_db(conn_db_host, pbs_conf.pbs_data_service_port) != 0) {\n\t\t\tpbs_db_get_errmsg(PBS_DB_ERR, &db_err);\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE, \"%s %s\", \"Failed to stop PBS dataservice.\", db_err ? 
db_err : \"\");\n\t\t\tlog_eventf(PBSEVENT_SYSTEM | PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER, LOG_ERR, msg_daemonname, log_buffer);\n\t\t\tfprintf(stderr, \"%s\\n\", log_buffer);\n\t\t\tfree(db_err);\n\t\t\tdb_err = NULL;\n\t\t}\n\n\t\tdb_oper_failed_times++;\n\t\t/* try stopping after some time again */\n\t\tdb_delay = (int) (1 + db_oper_failed_times * 1.5);\n\t\tif (db_delay > MAX_DB_LOOP_DELAY)\n\t\t\tdb_delay = MAX_DB_LOOP_DELAY; /* limit to MAX_DB_LOOP_DELAY secs */\n\t\tsleep(db_delay);\t\t      /* don't burn the CPU looping too fast */\n\t}\n}\n\n/**\n * @brief\n *\tPanic shutdown of the Server on a database error: attempt to mail a\n *\tmessage to \"mail_from\" (the administrator), shut down the database,\n *\tclose the log and exit the Server. Called when a database save fails.\n *\n */\nvoid\npanic_stop_db()\n{\n\tchar panic_stop_txt[] = \"Panic shutdown of Server on database error.  Please check PBS_HOME file system for no space condition.\";\n\n\tlog_err(-1, __func__, panic_stop_txt);\n\tsvr_mailowner(0, 0, 0, panic_stop_txt);\n\tstop_db();\n\tlog_close(1);\n\texit(1);\n}\n\n/**\n * @brief\n *\t\tSetup a new database connection structure.\n *\n * @par Functionality:\n *\t\tCalls pbs_db_connect to initialize a new connection\n *\t\tstructure with the given host, port and\n *\t\ttimeout. 
Logs error on failure.\n *\n * @param[in]\thost\t- The host to connect to\n * @param[in]\tport\t- The port to connect to\n * @param[in]\ttimeout\t- The connection timeout\n *\n * @return\tInitialized connection handle.\n * @retval  !NULL - Connection Established.\n * @retval  NULL - No Connection.\n *\n */\nvoid *\nsetup_db_connection(char *host, int port, int timeout)\n{\n\tint failcode = 0;\n\tint rc = 0;\n\tchar *conn_db_err = NULL;\n\tvoid *lconn = NULL;\n\n\t/* Make sure we have the database instance up and running */\n\t/* If the services are down, retry will attempt to start_db() */\n\trc = pbs_status_db(host, port);\n\tif (rc == 1)\n\t\treturn NULL;\n\telse if (rc == -1)\n\t\tfailcode = PBS_DB_ERR;\n\telse\n\t\tfailcode = pbs_db_connect(&lconn, host, port, timeout);\n\tif (!lconn) {\n\t\tpbs_db_get_errmsg(failcode, &conn_db_err);\n\t\tif (conn_db_err) {\n\t\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER,\n\t\t\t\t  LOG_CRIT, msg_daemonname, conn_db_err);\n\t\t\tfree(conn_db_err);\n\t\t}\n\t}\n\treturn lconn;\n}\n\n/**\n * @brief\n *\t\tThis function creates connection information which is used by\n * \t\tconnect_to_db.\n *\n * @return - void pointer\n * @retval  NULL - Function failed. Error will be logged\n * @retval  !NULL - Newly allocated connection handler.\n *\n */\nstatic void *\nget_db_connect_information()\n{\n\tvoid *lconn = NULL;\n\tint rc = 0;\n\tint conn_timeout = 0;\n\tchar *failstr = NULL;\n\n\t/*\n\t * Decide where to connect to, the timeout, and whether we can have control over the\n\t * database instance or not. Based on these create a new connection structure by calling\n\t * setup_db_connection. The behavior is as follows:\n\t *\n\t * a) If external database is configured (pbs_data_service_host), then always connect to that\n\t *\tand do not try to start/stop the database (both in standalone / failover cases). 
In case of a\n\t *\tconnection failure (in between pbs processing) in a standalone setup, try reconnecting to the\n\t *\texternal database forever.\n\t *\tIn case of a connection failure (not at startup) in a failover setup, try connecting to the external\n\t *\tdatabase only once more and then quit, letting failover kick in.\n\t *\n\t *\n\t * b) With embedded database:\n\t *\tStatus the database:\n\t *\t- If no database running, start database locally.\n\t *\n\t *\t- If database already running locally, it's all good.\n\t *\n\t *\t- If database is running on another host, then,\n\t *\t\ta) If standalone, continue to attempt to start database locally.\n\t *\t\tb) If primary, attempt to connect to the secondary's db; if it\n\t *\t\t   connects, throw an error and start over (since the primary\n\t *\t\t   should never use the secondary's database). If the connect fails,\n\t *\t\t   try to start the database locally.\n\t *\t\tc) If secondary, attempt connection to primary db; if it\n\t *\t\t   connects, continue to use it happily. If it fails, attempt to start\n\t *\t\t   database locally.\n\t *\n\t */\n\tif (pbs_conf.pbs_data_service_host) {\n\t\t/*\n\t\t * External database configured, infinite timeout, database instance not in our control\n\t\t */\n\t\tconn_timeout = PBS_DB_CNT_TIMEOUT_INFINITE;\n\t\tstrncpy(conn_db_host, pbs_conf.pbs_data_service_host, PBS_MAXSERVERNAME);\n\t} else {\n\t\t/*\n\t\t * Database is in our control, we need to figure out the status of the database first.\n\t\t * Is it already running? 
Is it running on another machine?\n\t\t *  Check whether database is up or down.\n\t\t *  It calls pbs_status_db to figure out the database status.\n\t\t *  pbs_status_db returns:\n\t\t *\t-1 - failed to execute\n\t\t *\t0  - Data service running on local host\n\t\t *\t1  - Data service NOT running\n\t\t *\t2  - Data service running on another host\n\t\t *\n\t\t * If pbs_status_db is not sure whether db is running or not, then it attempts\n\t\t * to connect to the host database to confirm that.\n\t\t *\n\t\t */\n\t\tif (pbs_conf.pbs_primary) {\n\t\t\tif (!pbs_failover_active)\n\t\t\t\tstrncpy(conn_db_host, pbs_conf.pbs_primary, PBS_MAXSERVERNAME);\n\t\t\telse\n\t\t\t\tstrncpy(conn_db_host, pbs_conf.pbs_secondary, PBS_MAXSERVERNAME);\n\t\t} else if (pbs_conf.pbs_server_host_name)\n\t\t\tstrncpy(conn_db_host, pbs_conf.pbs_server_host_name, PBS_MAXSERVERNAME);\n\t\telse\n\t\t\tstrncpy(conn_db_host, server_host, PBS_MAXSERVERNAME);\n\n\t\trc = pbs_status_db(conn_db_host, pbs_conf.pbs_data_service_port);\n\t\tif (rc == -1) {\n\t\t\tpbs_db_get_errmsg(PBS_DB_ERR, &failstr);\n\t\t\tlog_errf(PBSE_INTERNAL, msg_daemonname, \"status db failed: %s\", failstr ? 
failstr : \"\");\n\t\t\tfree(failstr);\n\t\t\treturn NULL;\n\t\t}\n\n\t\tlog_eventf(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_INFO, msg_daemonname, \"pbs_status_db exit code %d\", rc);\n\n\t\tif (last_rc != rc) {\n\t\t\t/*\n\t\t\t * We might have failed trying to start the database several times locally.\n\t\t\t * If, however, the database state has changed (e.g. it was stopped by the admin),\n\t\t\t * then we reset db_oper_failed_times.\n\t\t\t *\n\t\t\t * Basically we check against the error code from the last try; if it is\n\t\t\t * not the same error code, then something in the database\n\t\t\t * startup has changed (or it is failing to start for a different reason).\n\t\t\t * Since db_oper_failed_times counts the number of failures\n\t\t\t * of one particular kind, we reset it when the error code differs\n\t\t\t * from that of the last try.\n\t\t\t */\n\t\t\tdb_oper_failed_times = 0;\n\t\t}\n\t\tlast_rc = rc;\n\n\t\tif (pbs_conf.pbs_primary) {\n\t\t\tif (rc == 0 || rc == 1) /* db running locally or db not running */\n\t\t\t\tconn_timeout = PBS_DB_CNT_TIMEOUT_INFINITE;\n\t\t\tif (rc == 2) /* db could be running on secondary, don't start, try connecting to secondary's */\n\t\t\t\tconn_timeout = PBS_DB_CNT_TIMEOUT_NORMAL;\n\n\t\t\tif (!pbs_failover_active) {\n\t\t\t\t/* Failover is configured, and this is the primary */\n\t\t\t\tif (rc == 0 || rc == 1) /* db running locally or db not running */\n\t\t\t\t\tstrncpy(conn_db_host, pbs_conf.pbs_primary, PBS_MAXSERVERNAME);\n\n\t\t\t\tif (rc == 2) /* db could be running on secondary, don't start, try connecting to secondary's */\n\t\t\t\t\tstrncpy(conn_db_host, pbs_conf.pbs_secondary, PBS_MAXSERVERNAME);\n\n\t\t\t} else {\n\t\t\t\t/* Failover is configured and this is active secondary */\n\t\t\t\tif (rc == 0 || rc == 1) /* db running locally or db not running */\n\t\t\t\t\tstrncpy(conn_db_host, pbs_conf.pbs_secondary, PBS_MAXSERVERNAME);\n\n\t\t\t\t/* db could be running on primary, don't start, try 
connecting to primary's */\n\t\t\t\tif (rc == 2) {\n\t\t\t\t\tstrncpy(conn_db_host, pbs_conf.pbs_primary, PBS_MAXSERVERNAME);\n\t\t\t\t\tconn_db_state = PBS_DB_STARTED;\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\t/*\n\t\t\t * No failover configured. Try connecting forever to our own instance, have control.\n\t\t\t */\n\t\t\tconn_timeout = PBS_DB_CNT_TIMEOUT_INFINITE;\n\t\t}\n\t}\n\tif (rc == 1)\n\t\tconn_db_state = start_db();\n\n\tif (conn_db_state == PBS_DB_STARTING || conn_db_state == PBS_DB_STARTED)\n\t\tlconn = setup_db_connection(conn_db_host, pbs_conf.pbs_data_service_port, conn_timeout);\n\treturn lconn;\n}\n\n/**\n * @brief\n * \t\ttouch_db_stop_file\t- create a touch file to signal that the db should be stopped.\n *\n * @return\tint\n * @retval\t0\t- created touch file\n * @retval\t-1\t- unable to create touch file\n */\nstatic int\ntouch_db_stop_file(void)\n{\n\tint fd;\n\tchar closefile[MAXPATHLEN + 1];\n\tsnprintf(closefile, MAXPATHLEN, \"%s/datastore/pbs_dbclose\", pbs_conf.pbs_home_path);\n\n#ifndef O_RSYNC\n#define O_RSYNC 0\n#endif\n\tif ((fd = open(closefile, O_WRONLY | O_CREAT | O_RSYNC, 0600)) == -1)\n\t\treturn -1;\n\tclose(fd);\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tconnect_to_db\t- Keep trying forever until a successful database connection is made.\n *\n * @param[in]\tbackground\t- Process can attempt to connect in the background.\n * @return\tint\n * @retval\t0\t- Success\n * @retval\tNon-Zero\t- Failure\n */\nint\nconnect_to_db(int background)\n{\n\tint try_db = 0;\n\tint db_stop_counts = 0;\n\tint db_stop_email_sent = 0;\n\tint conn_state;\n#ifndef DEBUG\n\tpid_t sid = -1;\n#endif\n\tint db_delay = 0;\ntry_db_again:\n\tfprintf(stdout, \"Connecting to PBS dataservice.\");\n\n\tconn_state = PBS_DB_CONNECT_STATE_NOT_CONNECTED;\n\tdb_oper_failed_times = 0;\n\n\twhile (1) {\n#ifndef DEBUG\n\t\tfprintf(stdout, \".\");\n#endif\n\t\tif (conn_state == PBS_DB_CONNECT_STATE_FAILED) {\n\t\t\tpbs_db_disconnect(conn);\n\t\t\tconn_state = 
PBS_DB_CONNECT_STATE_NOT_CONNECTED;\n\t\t\tdb_oper_failed_times++;\n\t\t\tconn_db_state = PBS_DB_DOWN; /* allow retrying the db start */\n\t\t} else if (conn_state == PBS_DB_CONNECT_STATE_CONNECTED) {\n\t\t\tsprintf(log_buffer, \"connected to PBS dataservice@%s\", conn_db_host);\n\t\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_FORCE,\n\t\t\t\t  PBS_EVENTCLASS_SERVER, LOG_CRIT,\n\t\t\t\t  msg_daemonname, log_buffer);\n\t\t\tfprintf(stdout, \"%s\\n\", log_buffer);\n\t\t}\n\n\t\tif (conn && conn_state == PBS_DB_CONNECT_STATE_CONNECTED)\n\t\t\tbreak;\n\n\t\tif (conn_db_state == PBS_DB_DOWN) {\n\t\t\tconn_db_state = start_db();\n\t\t\tif (conn_db_state == PBS_DB_STARTING) {\n\t\t\t\t/* started new database instance, reset connection */\n\t\t\t\tpbs_db_disconnect(conn); /* disconnect from any old connection, cleanup memory */\n\t\t\t\tconn_state = PBS_DB_CONNECT_STATE_NOT_CONNECTED;\n\t\t\t} else if (conn_db_state == PBS_DB_DOWN) {\n\t\t\t\tdb_oper_failed_times++;\n\t\t\t}\n\t\t}\n\n\t\tif (conn_state == PBS_DB_CONNECT_STATE_NOT_CONNECTED) {\n\t\t\t/* get fresh connection */\n\t\t\tif ((conn = get_db_connect_information()) == NULL)\n\t\t\t\tconn_state = PBS_DB_CONNECT_STATE_FAILED;\n\t\t\telse\n\t\t\t\tconn_state = PBS_DB_CONNECT_STATE_CONNECTED;\n\t\t}\n\t\tdb_delay = (int) (1 + db_oper_failed_times * 1.5);\n\t\tif (db_delay > MAX_DB_LOOP_DELAY)\n\t\t\tdb_delay = MAX_DB_LOOP_DELAY; /* limit to MAX_DB_LOOP_DELAY secs */\n\t\tsleep(db_delay);\t\t      /* don't burn the CPU looping too fast */\n\t\tupdate_svrlive();\t\t      /* indicate we are alive */\n#ifndef DEBUG\n\t\tif (background && try_db >= 4) {\n\t\t\tfprintf(stdout, \"continuing in background.\\n\");\n\t\t\tif ((sid = go_to_background()) == -1)\n\t\t\t\treturn (2);\n\t\t}\n#endif /* !DEBUG */\n\t\ttry_db++;\n\t}\n\n\tif (!pbs_conf.pbs_data_service_host) {\n\t\t/*\n\t\t * Check the connected host and see if it is connected to the right host.\n\t\t * In case of a failover, PBS server should be connected to 
database\n\t\t * on the same host as it is executing on. Thus, if PBS server ends\n\t\t * up connected to a database on another host (say primary server\n\t\t * connected to database on secondary or vice versa), then it is\n\t\t * deemed unacceptable. In such a case, log an error noting that\n\t\t * PBS is attempting to stop the database on the other side,\n\t\t * and restart the loop all over.\n\t\t */\n\t\tif (pbs_conf.pbs_primary) {\n\t\t\tif (!pbs_failover_active) {\n\t\t\t\t/* primary instance */\n\t\t\t\tif (strcmp(conn_db_host, pbs_conf.pbs_primary) != 0) {\n\t\t\t\t\t/* primary instance connected to secondary database, not acceptable */\n\t\t\t\t\tlog_errf(-1, msg_daemonname, \"PBS data service is up on the secondary instance, attempting to stop it\");\n\t\t\t\t\tpbs_db_disconnect(conn);\n\t\t\t\t\tconn = NULL;\n\n\t\t\t\t\ttouch_db_stop_file();\n\n\t\t\t\t\tif (db_stop_email_sent == 0) {\n\t\t\t\t\t\tif (++db_stop_counts > MAX_DB_RETRIES) {\n\t\t\t\t\t\t\tlog_errf(-1, msg_daemonname, \"Not able to stop PBS data service at the secondary site, please stop manually\");\n\t\t\t\t\t\t\tsvr_mailowner(0, 0, 1, log_buffer);\n\t\t\t\t\t\t\tdb_stop_email_sent = 1;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tsleep(10);\n\t\t\t\t\tgoto try_db_again;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\t/* secondary instance */\n\t\t\t\tif (strcmp(conn_db_host, pbs_conf.pbs_primary) == 0) {\n\t\t\t\t\t/* secondary instance connected to primary database, not acceptable */\n\t\t\t\t\tlog_errf(-1, msg_daemonname, \"PBS data service is up on the primary instance, attempting to stop it\");\n\n\t\t\t\t\tpbs_db_disconnect(conn);\n\t\t\t\t\tconn = NULL;\n\n\t\t\t\t\ttouch_db_stop_file();\n\n\t\t\t\t\tif (db_stop_email_sent == 0) {\n\t\t\t\t\t\tif (++db_stop_counts > MAX_DB_RETRIES) {\n\t\t\t\t\t\t\tlog_errf(-1, msg_daemonname, \"Not able to stop PBS data service at the primary site, please stop manually\");\n\t\t\t\t\t\t\tsvr_mailowner(0, 0, 1, log_buffer);\n\t\t\t\t\t\t\tdb_stop_email_sent = 
1;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tsleep(10);\n\t\t\t\t\tgoto try_db_again;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tsvr_db_conn = conn; /* use this connection */\n\tconn = NULL;\t    /* ensure conn does not point to svr_db_conn any more */\n\treturn 0;\n}\n\n/**\n * @brief\n *\tFrees attribute list memory\n *\n * @param[in]\tattr_list - List of pbs_db_attr_list_t objects\n *\n * @return      None\n *\n */\nvoid\nfree_db_attr_list(pbs_db_attr_list_t *attr_list)\n{\n\tif (attr_list->attr_count > 0) {\n\t\tfree_attrlist(&attr_list->attrs);\n\t\tattr_list->attr_count = 0;\n\t}\n}\n"
  },
  {
    "path": "src/server/pbsd_init.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n *\n * @brief\n * contains functions to initialize several pbs data structures.\n *\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/types.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <ctype.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <memory.h>\n#include <signal.h>\n#include <time.h>\n#include <sys/stat.h>\n#include <libutil.h>\n\n#include <dirent.h>\n#include <grp.h>\n#include <netdb.h>\n#include <pwd.h>\n#include <unistd.h>\n#include <sys/param.h>\n#include <sys/resource.h>\n#include <sys/time.h>\n\n#include \"libpbs.h\"\n#include \"pbs_ifl.h\"\n#include \"net_connect.h\"\n#include \"log.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"server_limits.h\"\n#include \"server.h\"\n#include \"credential.h\"\n#include \"ticket.h\"\n#include \"batch_request.h\"\n#include \"work_task.h\"\n#include \"resv_node.h\"\n#include \"job.h\"\n#include \"queue.h\"\n#include \"reservation.h\"\n#include \"pbs_db.h\"\n#include \"pbs_nodes.h\"\n#include \"tracking.h\"\n#include \"provision.h\"\n#include \"pbs_idx.h\"\n#include \"svrfunc.h\"\n#include \"acct.h\"\n#include \"pbs_version.h\"\n#include \"tpp.h\"\n#include \"pbs_license.h\"\n#include \"resource.h\"\n#include \"pbs_python.h\"\n#include \"hook.h\"\n#include \"hook_func.h\"\n#include 
\"pbs_share.h\"\n#include \"liblicense.h\"\n\n#ifndef SIGKILL\n/* there is some weird stuff in gcc include files signal.h & sys/params.h */\n#include <signal.h>\n#endif\n\n/* global Data Items */\n\nextern char *msg_startup3;\nextern char *msg_daemonname;\nextern char *msg_init_abt;\nextern char *msg_init_queued;\nextern char *msg_init_substate;\nextern char *msg_err_noqueue;\nextern char *msg_err_noqueue1;\nextern char *msg_init_resvNOq;\nextern char *msg_init_recovque;\nextern char *msg_init_recovresv;\nextern char *msg_init_expctq;\nextern char *msg_init_nojobs;\nextern char *msg_init_exptjobs;\nextern char *msg_init_unkstate;\nextern char *msg_init_baddb;\nextern char *msg_init_chdir;\nextern char *msg_unkresc;\nextern char *msg_corelimit;\n\nextern char *acct_file;\nextern int ext_license_server;\nextern char *log_file;\nextern char *path_acct;\nextern char *path_usedlicenses;\nextern char path_log[];\nextern char *path_priv;\nextern char *path_jobs;\nextern char *path_users;\nextern char *path_spool;\nextern char *path_track;\nextern char *path_prov_track;\nextern long new_log_event_mask;\nextern char server_name[];\nextern pbs_list_head svr_newjobs;\nextern pbs_list_head svr_allresvs;\nextern time_t time_now;\n\nextern struct server server;\nextern struct attribute attr_jobscript_max_size;\nextern char *path_hooks;\nextern char *path_hooks_workdir;\nextern pbs_list_head prov_allvnodes;\nextern int max_concurrent_prov;\nextern void *svr_db_conn;\n\nextern pbs_list_head svr_allhooks;\n\n/* External Functions Called */\n\nextern void on_job_exit(struct work_task *);\nextern void on_job_rerun(struct work_task *);\nextern int resize_prov_table(int newsize);\nextern void offline_all_provisioning_vnodes(void);\nextern void stop_db();\nextern job *job_recov_db_spl(pbs_db_job_info_t *dbjob, job *pjob);\nextern pbs_sched *sched_alloc(char *sched_name);\nextern job *recov_job_cb(pbs_db_obj_info_t *, int *);\nextern resc_resv *recov_resv_cb(pbs_db_obj_info_t *, int 
*);\nextern pbs_queue *recov_queue_cb(pbs_db_obj_info_t *, int *);\nextern pbs_sched *recov_sched_cb(pbs_db_obj_info_t *, int *);\nextern void revert_alter_reservation(resc_resv *presv);\nextern void log_licenses(pbs_licenses_high_use *pu);\n/* Private functions in this file */\n\nstatic void catch_child(int);\nstatic void init_abt_job(job *);\nstatic void change_logs(int);\nint chk_save_file(char *filename);\nstatic void need_y_response(int, char *);\nstatic int pbsd_init_reque(job *job, int change_state);\nstatic void resume_net_move(struct work_task *);\nstatic void stop_me(int);\nstatic int Rmv_if_resv_not_possible(job *);\nstatic int attach_queue_to_reservation(resc_resv *);\nstatic void call_log_license(struct work_task *);\n/* private data */\n\n#define CHANGE_STATE 1\n#define KEEP_STATE 0\nstatic char badlicense[] = \"One or more PBS license keys are invalid, jobs may not run\";\nchar *pbs_licensing_location = NULL;\n/**\n * @brief\n *\t\tInitializes the server attribute array with default values which are\n * \t\tnecessary for recovery and action routines to work properly.\n *\n * @return\tvoid\n */\nvoid\ninit_server_attrs()\n{\n\tresource_def *prdef = NULL;\n\tresource *presc = NULL;\n\tint i = 0;\n\n\tfor (i = 0; i < SVR_ATR_LAST; i++)\n\t\tclear_attr(get_sattr(i), &svr_attr_def[i]);\n\n\tset_sattr_str_slim(SVR_ATR_scheduler_iteration, TOSTR(PBS_SCHEDULE_CYCLE), NULL);\n\n\tserver.newobj = 1;\n\n\tset_sattr_l_slim(SVR_ATR_State, SV_STATE_INIT, SET);\n\n\tset_sattr_l_slim(SVR_ATR_ResvEnable, 1, SET);\n\t(get_sattr(SVR_ATR_ResvEnable))->at_flags |= ATR_VFLAG_DEFLT;\n\n\tset_sattr_str_slim(SVR_ATR_SvrHost, server_host, NULL);\n\t(get_sattr(SVR_ATR_SvrHost))->at_flags |= ATR_VFLAG_DEFLT;\n\n\tset_sattr_l_slim(SVR_ATR_NodeFailReq, PBS_NODE_FAIL_REQUEUE_DEFAULT, SET);\n\t(get_sattr(SVR_ATR_NodeFailReq))->at_flags |= ATR_VFLAG_DEFLT;\n\n\tset_sattr_l_slim(SVR_ATR_ResendTermDelay, PBS_RESEND_TERM_DELAY_DEFAULT, 
SET);\n\t(get_sattr(SVR_ATR_ResendTermDelay))->at_flags |= ATR_VFLAG_DEFLT;\n\n\tset_sattr_l_slim(SVR_ATR_maxarraysize, PBS_MAX_ARRAY_JOB_DFL, SET);\n\t(get_sattr(SVR_ATR_maxarraysize))->at_flags |= ATR_VFLAG_DEFLT;\n\n\tset_sattr_l_slim(SVR_ATR_license_min, PBS_MIN_LICENSING_LICENSES, SET);\n\t(get_sattr(SVR_ATR_license_min))->at_flags |= ATR_VFLAG_DEFLT;\n\tlicensing_control.licenses_min = PBS_MIN_LICENSING_LICENSES;\n\n\tset_sattr_l_slim(SVR_ATR_license_max, PBS_MAX_LICENSING_LICENSES, SET);\n\t(get_sattr(SVR_ATR_license_max))->at_flags |= ATR_VFLAG_DEFLT;\n\tlicensing_control.licenses_max = PBS_MAX_LICENSING_LICENSES;\n\n\tset_sattr_l_slim(SVR_ATR_license_linger, PBS_LIC_LINGER_TIME, SET);\n\t(get_sattr(SVR_ATR_license_linger))->at_flags |= ATR_VFLAG_DEFLT;\n\tlicensing_control.licenses_linger_time = PBS_LIC_LINGER_TIME;\n\n\tset_sattr_l_slim(SVR_ATR_EligibleTimeEnable, 0, SET);\n\t(get_sattr(SVR_ATR_EligibleTimeEnable))->at_flags |= ATR_VFLAG_DEFLT;\n\n\tset_sattr_l_slim(SVR_ATR_max_concurrent_prov, PBS_MAX_CONCURRENT_PROV, SET);\n\t(get_sattr(SVR_ATR_max_concurrent_prov))->at_flags |= ATR_VFLAG_DEFLT;\n\n\tset_sattr_str_slim(SVR_ATR_max_job_sequence_id, TOSTR(SVR_MAX_JOB_SEQ_NUM_DEFAULT), NULL);\n\t(get_sattr(SVR_ATR_max_job_sequence_id))->at_flags |= ATR_VFLAG_DEFLT;\n\n\tset_attr_generic(&attr_jobscript_max_size, &svr_attr_def[SVR_ATR_jobscript_max_size], DFLT_JOBSCRIPT_MAX_SIZE, NULL, INTERNAL);\n\tattr_jobscript_max_size.at_type = ATR_TYPE_SIZE; /* needed by get_bytes_from_attr */\n\n\tset_sattr_l_slim(SVR_ATR_has_runjob_hook, 0, SET);\n\tset_sattr_l_slim(SVR_ATR_log_events, SVR_LOG_DFLT, SET);\n\t*log_event_mask = get_sattr_long(SVR_ATR_log_events);\n\tset_sattr_str_slim(SVR_ATR_mailer, SENDMAIL_CMD, NULL);\n\tset_sattr_str_slim(SVR_ATR_mailfrom, PBS_DEFAULT_MAIL, NULL);\n\tset_sattr_l_slim(SVR_ATR_query_others, 1, SET);\n\tset_sattr_l_slim(SVR_ATR_scheduling, 1, SET);\n\tset_sattr_l_slim(SVR_ATR_clear_est_enable, 0, SET);\n\n\tprdef = 
&svr_resc_def[RESC_NCPUS];\n\tif (prdef) {\n\t\tattribute *pattr = get_sattr(SVR_ATR_DefaultChunk);\n\t\tpresc = add_resource_entry(pattr, prdef);\n\t\tif (presc) {\n\t\t\tpresc->rs_value.at_val.at_long = 1;\n\t\t\tpresc->rs_value.at_flags = ATR_VFLAG_DEFLT | ATR_SET_MOD_MCACHE;\n\t\t\tpattr->at_flags = ATR_VFLAG_DEFLT | ATR_SET_MOD_MCACHE;\n\t\t\t(void) deflt_chunk_action(pattr, (void *) &server, ATR_ACTION_NEW);\n\t\t}\n\t\tpattr = get_sattr(SVR_ATR_resource_deflt);\n\t\tpresc = add_resource_entry(pattr, prdef);\n\t\tif (presc) {\n\t\t\tpresc->rs_value.at_val.at_long = 1;\n\t\t\tpresc->rs_value.at_flags = ATR_VFLAG_DEFLT | ATR_SET_MOD_MCACHE;\n\t\t\tpattr->at_flags = ATR_VFLAG_DEFLT | ATR_SET_MOD_MCACHE;\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\t\tInitialize the PBS Batch Server.\n *\t\tCalled once when the server is brought up.\n *\n * @param[in]\ttype\t- The type of initialization\n *\t\t\t\t\t\t\tRECOV_CREATE - reinitializes all serverdb data\n *\n * @return\tError code\n * @retval\t0\t- Success\n * @retval\tNon-Zero\t- Failure\n *\n */\nint\npbsd_init(int type)\n{\n\tint a_opt = -1;\n\tint baselen;\n\tstruct dirent *pdirent;\n\tDIR *dir;\n\tint fd;\n\tint i = 0;\n\tchar zone_dir[MAXPATHLEN];\n\tchar *hook_suffix = HOOK_FILE_SUFFIX;\n\tint hook_suf_len = strlen(hook_suffix);\n\thook *phook, *phook_current;\n\tchar *psuffix;\n\tint rc;\n\tstruct stat statbuf;\n\tchar hook_msg[HOOK_MSG_SIZE];\n\tchar *conn_db_err = NULL;\n\tstruct sigaction act;\n\tstruct sigaction oact;\n\n\tstruct tm *ptm;\n\tpbs_db_job_info_t dbjob = {{0}};\n\tpbs_db_resv_info_t dbresv = {{0}};\n\tpbs_db_que_info_t dbque = {{0}};\n\tpbs_db_sched_info_t dbsched = {{0}};\n\tpbs_db_obj_info_t obj = {0};\n\tvoid *conn = (void *) svr_db_conn;\n\tchar *buf = NULL;\n\tint buf_len = 0;\n\n#ifdef RLIMIT_CORE\n\tint char_in_cname = 0;\n#endif /* RLIMIT_CORE */\n\n\tif ((job_attr_idx = cr_attrdef_idx(job_attr_def, JOB_ATR_LAST)) == NULL) {\n\t\tlog_err(errno, 
__func__, \"Failed creating job attribute search index\");\n\t\treturn (-1);\n\t}\n\tif ((node_attr_idx = cr_attrdef_idx(node_attr_def, ND_ATR_LAST)) == NULL) {\n\t\tlog_err(errno, __func__, \"Failed creating node attribute search index\");\n\t\treturn (-1);\n\t}\n\tif ((que_attr_idx = cr_attrdef_idx(que_attr_def, QA_ATR_LAST)) == NULL) {\n\t\tlog_err(errno, __func__, \"Failed creating queue attribute search index\");\n\t\treturn (-1);\n\t}\n\tif ((svr_attr_idx = cr_attrdef_idx(svr_attr_def, SVR_ATR_LAST)) == NULL) {\n\t\tlog_err(errno, __func__, \"Failed creating server attribute search index\");\n\t\treturn (-1);\n\t}\n\tif ((sched_attr_idx = cr_attrdef_idx(sched_attr_def, SCHED_ATR_LAST)) == NULL) {\n\t\tlog_err(errno, __func__, \"Failed creating sched attribute search index\");\n\t\treturn (-1);\n\t}\n\tif ((resv_attr_idx = cr_attrdef_idx(resv_attr_def, RESV_ATR_LAST)) == NULL) {\n\t\tlog_err(errno, __func__, \"Failed creating resv attribute search index\");\n\t\treturn (-1);\n\t}\n\tif (cr_rescdef_idx(svr_resc_def, svr_resc_size) != 0) {\n\t\tlog_err(errno, __func__, \"Failed creating resc definition search index\");\n\t\treturn (-1);\n\t}\n\n\t/* initialize the pointers in the resource_def array */\n\n\tfor (i = 0; i < (svr_resc_size - 1); ++i)\n\t\tsvr_resc_def[i].rs_next = &svr_resc_def[i + 1];\n\t/* last entry is left with null pointer */\n\n\t/* The following is code to reduce security risks                */\n\n\tlog_supported_auth_methods(pbs_conf.supported_auth_methods);\n\n\ti = getgid();\n\t(void) setgroups(1, (gid_t *) &i); /* secure suppl. 
groups */\n\n#ifdef RLIMIT_CORE\n\tif (pbs_conf.pbs_core_limit) {\n\t\tchar *pc = pbs_conf.pbs_core_limit;\n\t\twhile (*pc != '\\0') {\n\t\t\tif (!isdigit(*pc)) {\n\t\t\t\t/* there is a character in core limit */\n\t\t\t\tchar_in_cname = 1;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tpc++;\n\t\t}\n\t}\n#endif /* RLIMIT_CORE */\n\n\t{\n\t\tstruct rlimit rlimit;\n\n\t\trlimit.rlim_cur = RLIM_INFINITY;\n\t\trlimit.rlim_max = RLIM_INFINITY;\n\n\t\t(void) setrlimit(RLIMIT_CPU, &rlimit);\n\t\t(void) setrlimit(RLIMIT_FSIZE, &rlimit);\n\t\t(void) setrlimit(RLIMIT_DATA, &rlimit);\n\t\t(void) setrlimit(RLIMIT_STACK, &rlimit);\n#ifdef RLIMIT_RSS\n\t\t(void) setrlimit(RLIMIT_RSS, &rlimit);\n#endif /* RLIMIT_RSS */\n#ifdef RLIMIT_VMEM\n\t\t(void) setrlimit(RLIMIT_VMEM, &rlimit);\n#endif /* RLIMIT_VMEM */\n#ifdef RLIMIT_CORE\n\t\tif (pbs_conf.pbs_core_limit) {\n\t\t\tstruct rlimit corelimit;\n\t\t\tcorelimit.rlim_max = RLIM_INFINITY;\n\t\t\tif (strcmp(\"unlimited\", pbs_conf.pbs_core_limit) == 0)\n\t\t\t\tcorelimit.rlim_cur = RLIM_INFINITY;\n\t\t\telse if (char_in_cname == 1) {\n\t\t\t\tlog_record(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER, LOG_WARNING,\n\t\t\t\t\t   __func__, msg_corelimit);\n\t\t\t\tcorelimit.rlim_cur = RLIM_INFINITY;\n\t\t\t} else\n\t\t\t\tcorelimit.rlim_cur =\n\t\t\t\t\t(rlim_t) atol(pbs_conf.pbs_core_limit);\n\t\t\t(void) setrlimit(RLIMIT_CORE, &corelimit);\n\t\t}\n#endif /* RLIMIT_CORE */\n\t}\n\n\t/* 1. 
set up to catch or ignore various signals */\n\tsigemptyset(&act.sa_mask);\n\tact.sa_flags = 0;\n\tact.sa_handler = change_logs;\n\tif (sigaction(SIGHUP, &act, &oact) != 0) {\n\t\tlog_err(errno, __func__, \"sigaction for HUP\");\n\t\treturn (2);\n\t}\n\tact.sa_handler = stop_me;\n\tif (sigaction(SIGINT, &act, &oact) != 0) {\n\t\tlog_err(errno, __func__, \"sigaction for INT\");\n\t\treturn (2);\n\t}\n\tif (sigaction(SIGTERM, &act, &oact) != 0) {\n\t\tlog_err(errno, __func__, \"sigaction for TERM\");\n\t\treturn (2);\n\t}\n#ifdef NDEBUG\n\tif (sigaction(SIGQUIT, &act, &oact) != 0) {\n\t\tlog_err(errno, __func__, \"sigaction for QUIT\");\n\t\treturn (2);\n\t}\n#endif /* NDEBUG */\n#ifdef SIGSHUTDN\n\tif (sigaction(SIGSHUTDN, &act, &oact) != 0) {\n\t\tlog_err(errno, __func__, \"sigaction for SHUTDN\");\n\t\treturn (2);\n\t}\n#endif /* SIGSHUTDN */\n\n\tact.sa_handler = catch_child;\n\tif (sigaction(SIGCHLD, &act, &oact) != 0) {\n\t\tlog_err(errno, __func__, \"sigaction for CHLD\");\n\t\treturn (2);\n\t}\n\n\tact.sa_handler = SIG_IGN;\n\tif (sigaction(SIGPIPE, &act, &oact) != 0) {\n\t\tlog_err(errno, __func__, \"sigaction for PIPE\");\n\t\treturn (2);\n\t}\n\tif (sigaction(SIGUSR1, &act, &oact) != 0) {\n\t\tlog_err(errno, __func__, \"sigaction for USR1\");\n\t\treturn (2);\n\t}\n\tif (sigaction(SIGUSR2, &act, &oact) != 0) {\n\t\tlog_err(errno, __func__, \"sigaction for USR2\");\n\t\treturn (2);\n\t}\n\n\t/* 2. 
check security and set up various global variables we need */\n\n#if !defined(DEBUG) && !defined(NO_SECURITY_CHECK)\n\trc = chk_file_sec(path_jobs, 1, 0, S_IWGRP | S_IWOTH, 1);\n\tif (stat(path_users, &statbuf) != 0)\n\t\t(void) mkdir(path_users, 0750);\n\trc |= chk_file_sec(path_users, 1, 0, S_IWGRP | S_IWOTH, 1);\n\trc |= chk_file_sec(path_hooks, 1, 0, S_IWGRP | S_IWOTH, 0);\n\trc |= chk_file_sec(path_hooks_workdir, 1, 0, S_IWGRP | S_IWOTH, 0);\n\trc |= chk_file_sec(path_spool, 1, 1, 0, 0);\n\trc |= chk_file_sec(path_acct, 1, 1, S_IWGRP | S_IWOTH, 0);\n\trc |= chk_file_sec(pbs_conf.pbs_environment, 0, 0, S_IWGRP | S_IWOTH, 1);\n\n\tif (rc) {\n\t\tlog_err(-1, __func__, \"chk_file_sec has a failure\");\n\t\treturn (3);\n\t}\n#endif /* not DEBUG and not NO_SECURITY_CHECK */\n\n\ttime_now = time(NULL);\n\n\trc = setup_resc(1);\n\tif (rc != 0) {\n\t\t/* log_buffer set in setup_resc */\n\t\tlog_err(-1, __func__, log_buffer);\n\t\t/* return value of -1 means a fatal error, -2 means errors\n\t\t * were \"auto-corrected\" */\n\t\tif (rc == -1)\n\t\t\treturn (-1);\n\t}\n\n\t/* 3. Set default server attribute values */\n\tmemset(&server, 0, sizeof(server));\n\tserver.sv_started = time(&time_now); /* time server started */\n\tif (is_sattr_set(SVR_ATR_scheduling))\n\t\ta_opt = get_sattr_long(SVR_ATR_scheduling);\n\n\tinit_server_attrs();\n\n\t/* 5. 
If not a \"create\" initialization, recover server db */\n\t/*    and sched db\t\t\t\t\t  */\n\trc = svr_recov_db();\n\tif ((rc != 0) && (type != RECOV_CREATE)) {\n\t\tpbs_db_get_errmsg(PBS_DB_ERR, &conn_db_err);\n\t\tif (conn_db_err != NULL) {\n\t\t\tlog_errf(-1, __func__, \"%s\", conn_db_err);\n\t\t\tfree(conn_db_err);\n\t\t}\n\t\tneed_y_response(type, \"no server database exists\");\n\t\ttype = RECOV_CREATE;\n\t}\n\tif (type != RECOV_CREATE) {\n\t\t/* Server read successful? */\n\n\t\tif (rc != 0) {\n\t\t\tlog_errf(rc, __func__, msg_init_baddb);\n\t\t\treturn (-1);\n\t\t}\n\n\t\tif (is_sattr_set(SVR_ATR_resource_assn))\n\t\t\tfree_sattr(SVR_ATR_resource_assn);\n\n\t\tif (new_log_event_mask) {\n\t\t\t/* set to what was given on command line -e option */\n\t\t\tset_sattr_l_slim(SVR_ATR_log_events, new_log_event_mask, SET);\n\t\t}\n\t\t*log_event_mask = get_sattr_long(SVR_ATR_log_events);\n\n\t\t/* if server comment is a default, clear it */\n\t\t/* it will be reset as needed               */\n\t\tif (((get_sattr(SVR_ATR_Comment))->at_flags & (ATR_VFLAG_SET | ATR_VFLAG_DEFLT)) == (ATR_VFLAG_SET | ATR_VFLAG_DEFLT))\n\t\t\tfree_sattr(SVR_ATR_Comment);\n\n\t\t/* now do sched db */\n\n\t\tobj.pbs_db_obj_type = PBS_DB_SCHED;\n\t\tobj.pbs_db_un.pbs_db_sched = &dbsched;\n\n\t\trc = pbs_db_search(conn, &obj, NULL, (query_cb_t) &recov_sched_cb);\n\t\tif (rc == -1) {\n\t\t\tpbs_db_get_errmsg(PBS_DB_ERR, &conn_db_err);\n\t\t\tif (conn_db_err != NULL) {\n\t\t\t\tlog_errf(-1, __func__, \"%s\", conn_db_err);\n\t\t\t\tfree(conn_db_err);\n\t\t\t}\n\t\t\treturn (-1);\n\t\t}\n\n\t\tif (!dflt_scheduler) {\n\t\t\tdflt_scheduler = sched_alloc(PBS_DFLT_SCHED_NAME);\n\t\t\tset_sched_default(dflt_scheduler, 0);\n\t\t\tsched_save_db(dflt_scheduler);\n\t\t}\n\n\t\tif (get_sattr_long(SVR_ATR_scheduling))\n\t\t\tset_scheduler_flag(SCH_SCHEDULE_ETE_ON, NULL);\n\t} else {\t     /* init type is \"create\" */\n\t\tif (rc == 0) /* server was loaded */\n\t\t\tneed_y_response(type, \"server 
database exists\");\n\n\tsvr_save_db(&server);\n\n\tdflt_scheduler = sched_alloc(PBS_DFLT_SCHED_NAME);\n\tset_sched_default(dflt_scheduler, 0);\n\tsched_save_db(dflt_scheduler);\n\t}\n\n\t/* 4. Check License information */\n\n\treset_license_counters(&license_counts);\n\n\tfd = open(path_usedlicenses, O_RDONLY, 0400);\n\n\tif ((fd == -1) ||\n\t    (read(fd, &(license_counts.licenses_high_use), sizeof(pbs_licenses_high_use)) !=\n\t     sizeof(pbs_licenses_high_use))) {\n\t\tlicense_counts.licenses_high_use.lu_max_hr = 0;\n\t\tlicense_counts.licenses_high_use.lu_max_day = 0;\n\t\tlicense_counts.licenses_high_use.lu_max_month = 0;\n\t\tlicense_counts.licenses_high_use.lu_max_forever = 0;\n\t\tptm = localtime(&time_now);\n\t\tlicense_counts.licenses_high_use.lu_day = ptm->tm_mday;\n\t\tlicense_counts.licenses_high_use.lu_month = ptm->tm_mon;\n\t}\n\tif (fd != -1)\n\t\tclose(fd);\n\n\tset_sattr_str_slim(SVR_ATR_version, PBS_VERSION, NULL);\n\n\tif ((pbs_licensing_location == NULL) && (license_counts.licenses_local == 0)) {\n\t\tprintf(\"%s\\n\", badlicense);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER, LOG_ALERT,\n\t\t\t  msg_daemonname, badlicense);\n\t}\n\n\tif (pbs_licensing_location) {\n\t\tsprintf(log_buffer, \"Using license server at %s\",\n\t\t\tPBS_LICENSE_LOCATION);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t  msg_daemonname, log_buffer);\n\t\tprintf(\"%s\\n\", log_buffer);\n\t}\n\tif (license_counts.licenses_local > 0) {\n\t\tsprintf(log_buffer,\n\t\t\t\"Licenses valid for %ld Floating hosts\",\n\t\t\tlicense_counts.licenses_local);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t  msg_daemonname, log_buffer);\n\t\tprintf(\"%s\\n\", log_buffer);\n\t}\n\n\t/* start a timed-event every hour to log the number of floating licenses used */\n\tif (license_counts.licenses_local > 0)\n\t\t(void) set_task(WORK_Timed, (long) (((time_now + 3600) / 3600) * 3600),\n\t\t\t\tcall_log_license, 0);\n\n\t/* 6. 
open accounting file */\n\n\tif (acct_open(acct_file) != 0) {\n\t\tlog_errf(-1, __func__, \"Could not open accounting file\");\n\t\treturn (-1);\n\t}\n\n\t/* 7. Set up other server and global variables */\n\n\tif (a_opt != -1) {\n\t\t/* a_option was set, overrides saved value of scheduling attr */\n\t\tset_sattr_l_slim(SVR_ATR_scheduling, a_opt, SET);\n\t}\n\n\t/*\n\t * 8A. If not a \"create\" initialization, recover queues.\n\t *    If a create, remove any queues that might be there.\n\t */\n\tif ((queues_idx = pbs_idx_create(0, 0)) == NULL) {\n\t\tlog_err(-1, __func__, \"Creating queue index failed!\");\n\t\treturn (-1);\n\t}\n\n\tserver.sv_qs.sv_numque = 0;\n\n\t/* get queues from DB for this instance of server, by port and address */\n\tobj.pbs_db_obj_type = PBS_DB_QUEUE;\n\tobj.pbs_db_un.pbs_db_que = &dbque;\n\n\trc = pbs_db_search(conn, &obj, NULL, (query_cb_t) &recov_queue_cb);\n\tif (rc == -1) {\n\t\tpbs_db_get_errmsg(PBS_DB_ERR, &conn_db_err);\n\t\tif (conn_db_err != NULL) {\n\t\t\tlog_errf(-1, __func__, \"%s\", conn_db_err);\n\t\t\tfree(conn_db_err);\n\t\t}\n\t\treturn (-1);\n\t}\n\n\t/* Open and read in node list if one exists */\n\tif ((rc = setup_nodes()) == -1) {\n\t\t/* log_buffer set in setup_nodes */\n\t\tlog_errf(-1, __func__, log_buffer);\n\t\treturn (-1);\n\t}\n\tmark_which_queues_have_nodes();\n\n\t/* at this point, we know all the resource types have been defined,        */\n\t/* build the resource summation table for validating the Select directives */\n\tupdate_resc_sum();\n\n\t/*\n\t * 8B. 
If not a \"create\" initialization, recover reservations.\n\t */\n\t/* set the zoneinfo directory to $PBS_EXEC/zoneinfo.\n\t * This is used for standing reservations' use of libical */\n\tsprintf(zone_dir, \"%s%s\", pbs_conf.pbs_exec_path, ICAL_ZONEINFO_DIR);\n\tset_ical_zoneinfo(zone_dir);\n\n\t/* load reservations */\n\tif ((resvs_idx = pbs_idx_create(0, 0)) == NULL) {\n\t\tlog_err(-1, __func__, \"Creating reservations index failed!\");\n\t\treturn (-1);\n\t}\n\tobj.pbs_db_obj_type = PBS_DB_RESV;\n\tobj.pbs_db_un.pbs_db_resv = &dbresv;\n\n\trc = pbs_db_search(conn, &obj, NULL, (query_cb_t) &recov_resv_cb);\n\tif (rc == -1) {\n\t\tpbs_db_get_errmsg(PBS_DB_ERR, &conn_db_err);\n\t\tif (conn_db_err != NULL) {\n\t\t\tlog_errf(-1, __func__, \"%s\", conn_db_err);\n\t\t\tfree(conn_db_err);\n\t\t}\n\t\treturn (-1);\n\t}\n\n\t/*\n\t * 9. If not \"create\" or \"clean\" recovery, recover the jobs.\n\t *    If a create or clean recovery, delete any jobs.\n\t *    Before job creation/recovery, create the jobs index.\n\t */\n\tif ((jobs_idx = pbs_idx_create(0, 0)) == NULL) {\n\t\tlog_err(-1, __func__, \"Creating jobs index failed!\");\n\t\treturn (-1);\n\t}\n\n\tserver.sv_qs.sv_numjobs = 0;\n\n\t/* get jobs from DB */\n\tobj.pbs_db_obj_type = PBS_DB_JOB;\n\tobj.pbs_db_un.pbs_db_job = &dbjob;\n\trc = pbs_db_search(conn, &obj, NULL, (query_cb_t) &recov_job_cb);\n\tif (rc == -1) {\n\t\tpbs_db_get_errmsg(PBS_DB_ERR, &conn_db_err);\n\t\tif (conn_db_err != NULL) {\n\t\t\tlog_errf(-1, __func__, \"%s\", conn_db_err);\n\t\t\tfree(conn_db_err);\n\t\t}\n\t\treturn (-1);\n\t} else if (rc == 1) {\n\t\tif ((type != RECOV_CREATE) && (type != RECOV_COLD))\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER,\n\t\t\t\t  LOG_DEBUG, msg_daemonname, msg_init_nojobs);\n\t}\n\n\tlog_eventf(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_NOTICE, msg_daemonname, msg_init_exptjobs, server.sv_qs.sv_numjobs);\n\n\t/* Now, cause any reservations marked RESV_FINISHED to be\n\t * removed and place \"begin\" 
and \"end\" tasks onto the\n\t * \"work_task_timed\" list, as appropriate, for those that\n\t * remain\n\t */\n\n\tremove_deleted_resvs();\n\tdegrade_corrupted_confirmed_resvs();\n\tadd_resv_beginEnd_tasks();\n\n\tresv_timer_init();\n\n\t/* Put us back in the Server's Private directory */\n\n\tif (chdir(path_priv) != 0) {\n\t\t(void) sprintf(log_buffer, msg_init_chdir, path_priv);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\treturn (3);\n\t}\n\n\t/*\n\t * 10. Recover the hooks.\n\t *\n\t */\n\n\tif (chdir(path_hooks) != 0) {\n\t\t(void) sprintf(log_buffer, msg_init_chdir, path_hooks);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\treturn (-1);\n\t}\n\n\tdir = opendir(\".\");\n\tif (dir == NULL) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_DEBUG, msg_daemonname,\n\t\t\t  \"Could not open hooks dir\");\n\t} else {\n\t\t/* Now, for each hook found ... */\n\n\t\twhile (errno = 0,\n\t\t       (pdirent = readdir(dir)) != NULL) {\n\n\t\t\tif (chk_save_file(pdirent->d_name) != 0) {\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\t/* recover the hooks */\n\n\t\t\tbaselen = strlen(pdirent->d_name) - hook_suf_len;\n\t\t\tpsuffix = pdirent->d_name + baselen;\n\t\t\tif (strcmp(psuffix, hook_suffix)) {\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tif ((phook =\n\t\t\t\t     hook_recov(pdirent->d_name, NULL, hook_msg,\n\t\t\t\t\t\tsizeof(hook_msg),\n\t\t\t\t\t\tpbs_python_ext_alloc_python_script,\n\t\t\t\t\t\tpbs_python_ext_free_python_script)) == NULL) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"hook_recov(%s): can't recover - %s\",\n\t\t\t\t\tpdirent->d_name, hook_msg);\n\t\t\t\tlog_event(PBSEVENT_SYSTEM,\n\t\t\t\t\t  PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t\t\t  msg_daemonname, log_buffer);\n\t\t\t} else {\n\t\t\t\tsprintf(log_buffer, \"Found hook %s type=%s\",\n\t\t\t\t\tphook->hook_name,\n\t\t\t\t\t((phook->type == HOOK_SITE) ? 
\"site\" : \"pbs\"));\n\t\t\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_ADMIN |\n\t\t\t\t\t\t  PBSEVENT_DEBUG,\n\t\t\t\t\t  PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t  LOG_INFO, msg_daemonname, log_buffer);\n\t\t\t\tif (phook->event & MOM_EVENTS)\n\t\t\t\t\tmark_mom_hooks_seen();\n\t\t\t}\n\t\t}\n\n\t\tif (errno != 0 && errno != ENOENT)\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER,\n\t\t\t\t  LOG_DEBUG, msg_daemonname,\n\t\t\t\t  \"Could not read hooks dir\");\n\t\t(void) closedir(dir);\n\t}\n\tprint_hooks(0);\n\tprint_hooks(HOOK_EVENT_QUEUEJOB);\n\tprint_hooks(HOOK_EVENT_POSTQUEUEJOB);\n\tprint_hooks(HOOK_EVENT_MODIFYJOB);\n\tprint_hooks(HOOK_EVENT_RESVSUB);\n\tprint_hooks(HOOK_EVENT_MODIFYRESV);\n\tprint_hooks(HOOK_EVENT_MOVEJOB);\n\tprint_hooks(HOOK_EVENT_RUNJOB);\n\tprint_hooks(HOOK_EVENT_JOBOBIT);\n\tprint_hooks(HOOK_EVENT_MANAGEMENT);\n\tprint_hooks(HOOK_EVENT_MODIFYVNODE);\n\tprint_hooks(HOOK_EVENT_PROVISION);\n\tprint_hooks(HOOK_EVENT_PERIODIC);\n\tprint_hooks(HOOK_EVENT_RESV_CONFIRM);\n\tprint_hooks(HOOK_EVENT_RESV_BEGIN);\n\tprint_hooks(HOOK_EVENT_RESV_END);\n\n\t/*\n\t * clean up the hooks work directory\n\t */\n\n\tcleanup_hooks_workdir(0);\n\n\t/* Put us back in the Server's Private directory */\n\n\tif (chdir(path_priv) != 0) {\n\t\t(void) sprintf(log_buffer, msg_init_chdir, path_priv);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\treturn (3);\n\t}\n\n\t/* 11. 
Open and read in tracking records */\n\n\tfd = open(path_track, O_RDONLY | O_CREAT, 0600);\n\tif (fd < 0) {\n\t\tlog_err(errno, __func__, \"unable to open tracking file\");\n\t\treturn (-1);\n\t}\n#if !defined(DEBUG) && !defined(NO_SECURITY_CHECK)\n\tif (chk_file_sec(path_track, 0, 0, S_IWGRP | S_IWOTH, 0) != 0)\n\t\treturn (-1);\n#endif /* not DEBUG and not NO_SECURITY_CHECK */\n\n\tif (fstat(fd, &statbuf) < 0) {\n\t\tlog_err(errno, \"pbs_init\", \"unable to stat tracking file\");\n\t\treturn (-1);\n\t} else {\n\n\t\tsize_t amt;\n\t\tssize_t rd;\n\t\tchar *w;\n\n\t\t/* validate the size of the file, it should be a multiple */\n\t\t/* of the tracking structure size                         */\n\n\t\ti = statbuf.st_size / sizeof(struct tracking);\n\t\tamt = i * sizeof(struct tracking);\n\n\t\tif (amt != statbuf.st_size) {\n\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t\t  LOG_ALERT, msg_daemonname,\n\t\t\t\t  \"tracking file has invalid length\");\n\t\t}\n\t\tif (i < PBS_TRACK_MINSIZE)\n\t\t\tserver.sv_tracksize = PBS_TRACK_MINSIZE;\n\t\telse\n\t\t\tserver.sv_tracksize = i;\n\t\tserver.sv_track = (struct tracking *) calloc(server.sv_tracksize,\n\t\t\t\t\t\t\t     sizeof(struct tracking));\n\t\tif (server.sv_track == NULL) {\n\t\t\tlog_err(errno, \"init\", \"out of memory\");\n\t\t\treturn -1;\n\t\t}\n\t\tfor (i = 0; i < server.sv_tracksize; i++)\n\t\t\t(server.sv_track + i)->tk_mtime = 0;\n\n\t\tw = (char *) server.sv_track;\n\n\t\t/* read in the file (a multiple of the struct size) */\n\n\t\twhile (amt > 0) {\n\t\t\trd = read(fd, w, amt);\n\t\t\tif ((rd == -1) && (errno == EINTR)) {\n\t\t\t\tcontinue;\n\t\t\t} else if (rd <= 0) {\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tamt -= rd;\n\t\t\tw += rd;\n\t\t}\n\t\t(void) close(fd);\n\t\tserver.sv_trackmodifed = 0;\n\t}\n\n\t/* set work task to periodically save the tracking records */\n\n\t(void) set_task(WORK_Timed, (long) (time_now + PBS_SAVE_TRACK_TM),\n\t\t\ttrack_save, 0);\n\n\tfd = open(path_prov_track, 
O_RDONLY | O_CREAT, 0600);\n\tif (fd < 0) {\n\t\tlog_err(errno, __func__, \"unable to open prov_tracking file\");\n\t\treturn (-1);\n\t}\n#if !defined(DEBUG) && !defined(NO_SECURITY_CHECK)\n\tif (chk_file_sec(path_prov_track, 0, 0, S_IWGRP | S_IWOTH, 0) != 0)\n\t\treturn (-1);\n#endif /* not DEBUG and not NO_SECURITY_CHECK */\n\n\tif (fstat(fd, &statbuf) < 0) {\n\t\tlog_err(errno, \"pbs_init\", \"unable to stat prov_tracking file\");\n\t\treturn (-1);\n\t} else {\n\t\tsize_t amt;\n\t\tssize_t rd;\n\t\tchar *p, *buffer;  /* to hold entire file data */\n\t\tint ctrl_flag = 0; /* we always write pvtk_mtime first */\n\t\tchar *token;\n\t\tlong mtime;\n\t\ti = 0;\n\n\t\t/* what's the size of the data in the file */\n\t\tamt = statbuf.st_size;\n\n\t\tserver.sv_provtracksize = get_sattr_long(SVR_ATR_max_concurrent_prov);\n\t\tDBPRT((\"%s: server.sv_provtracksize=%d amt=%ld\\n\", __func__, server.sv_provtracksize, (long) amt))\n\n\t\tp = malloc(amt + 1);\n\t\tif (p == NULL) {\n\t\t\tlog_err(errno, \"pbs_init\", \"unable to malloc\");\n\t\t\tclose(fd);\n\t\t\treturn (-1);\n\t\t}\n\t\tbuffer = p;\n\n\t\t/* read entire file into buffer */\n\t\twhile (amt > 0) {\n\t\t\trd = read(fd, p, amt);\n\t\t\tif ((rd == -1) && (errno == EINTR)) {\n\t\t\t\tcontinue;\n\t\t\t} else if (rd <= 0) {\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tamt -= rd;\n\t\t\tp += rd;\n\t\t}\n\t\t(void) close(fd);\n\t\tbuffer[statbuf.st_size] = '\\0';\n\n\t\tserver.sv_prov_track = (struct prov_tracking *) calloc(server.sv_provtracksize,\n\t\t\t\t\t\t\t\t       sizeof(struct prov_tracking));\n\t\tif (server.sv_prov_track == NULL) {\n\t\t\tfree(buffer);\n\t\t\tlog_err(errno, \"pbs_init\", \"unable to calloc\");\n\t\t\treturn (-1);\n\t\t}\n\n\t\tfor (i = 0; i < server.sv_provtracksize; i++) {\n\t\t\tserver.sv_prov_track[i].pvtk_mtime = 0;\n\t\t\tserver.sv_prov_track[i].pvtk_pid = -1;\n\t\t\tserver.sv_prov_track[i].pvtk_vnode = NULL;\n\t\t\tserver.sv_prov_track[i].pvtk_aoe_req = NULL;\n\t\t\tserver.sv_prov_track[i].prov_vnode_info = 
NULL;\n\t\t}\n\n\t\t/* start tokenizing by '|' */\n\t\ti = 0;\n\t\ttoken = strtok(buffer, \"|\");\n\t\twhile (token != NULL && i < server.sv_provtracksize) {\n\t\t\tswitch (ctrl_flag) {\n\t\t\t\tcase 0:\n\t\t\t\t\terrno = 0;\n\t\t\t\t\tmtime = strtol(token, NULL, 10);\n\t\t\t\t\tif (errno) {\n\t\t\t\t\t\tfree(buffer);\n\t\t\t\t\t\tfree(server.sv_prov_track);\n\t\t\t\t\t\tlog_err(errno, \"pbs_init\",\n\t\t\t\t\t\t\t\"bad data in prov_tracking\");\n\t\t\t\t\t\treturn (-1);\n\t\t\t\t\t}\n\t\t\t\t\tserver.sv_prov_track[i].pvtk_mtime = mtime;\n\t\t\t\t\t++ctrl_flag;\n\t\t\t\t\tbreak;\n\t\t\t\tcase 1:\n\t\t\t\t\t/* after first save, 0 is written if */\n\t\t\t\t\t/* value is null. If reading 0, then */\n\t\t\t\t\t/* pvtk_vnode should be null else it */\n\t\t\t\t\t/* becomes \"0\" */\n\t\t\t\t\tif (strcmp(token, \"0\") != 0) {\n\t\t\t\t\t\tserver.sv_prov_track[i].pvtk_vnode =\n\t\t\t\t\t\t\t(char *) malloc(strlen(token) + 1);\n\t\t\t\t\t\tif (server.sv_prov_track[i].pvtk_vnode == NULL) {\n\t\t\t\t\t\t\tfree(buffer);\n\t\t\t\t\t\t\tfree(server.sv_prov_track);\n\t\t\t\t\t\t\tlog_err(errno, \"pbs_init\",\n\t\t\t\t\t\t\t\t\"unable to malloc\");\n\t\t\t\t\t\t\treturn (-1);\n\t\t\t\t\t\t}\n\t\t\t\t\t\tstrcpy(server.sv_prov_track[i].pvtk_vnode,\n\t\t\t\t\t\t       token);\n\t\t\t\t\t}\n\t\t\t\t\t++ctrl_flag;\n\t\t\t\t\tbreak;\n\t\t\t\tcase 2:\n\t\t\t\t\tif (strcmp(token, \"0\") != 0) {\n\t\t\t\t\t\tserver.sv_prov_track[i].pvtk_aoe_req =\n\t\t\t\t\t\t\t(char *) malloc(strlen(token) + 1);\n\t\t\t\t\t\tif (server.sv_prov_track[i].pvtk_aoe_req == NULL) {\n\t\t\t\t\t\t\tfree(buffer);\n\t\t\t\t\t\t\tfree(server.sv_prov_track);\n\t\t\t\t\t\t\tlog_err(errno, \"pbs_init\",\n\t\t\t\t\t\t\t\t\"unable to malloc\");\n\t\t\t\t\t\t\treturn (-1);\n\t\t\t\t\t\t}\n\t\t\t\t\t\tstrcpy(server.sv_prov_track[i].pvtk_aoe_req,\n\t\t\t\t\t\t       token);\n\t\t\t\t\t}\n\t\t\t\t\tctrl_flag = 0;\n\t\t\t\t\t++i;\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\ttoken = strtok(NULL, 
\"|\");\n\t\t}\n\t\tserver.sv_provtrackmodifed = 0;\n\t\tfree(buffer);\n\t\t/* less data recovered than expected */\n\t\tif ((i != server.sv_provtracksize) && (statbuf.st_size != 0)) {\n\t\t\tsprintf(log_buffer, \"Recovered prov_tracking, \"\n\t\t\t\t\t    \"Expected %d, recovered %d records\",\n\t\t\t\tserver.sv_provtracksize, i);\n\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER, LOG_WARNING,\n\t\t\t\t  msg_daemonname, log_buffer);\n\t\t}\n\t}\n\n\t/* mark all nodes that are in the prov tracking table as offline,\n\t * also do away with all jobs that were waiting on such nodes\n\t */\n\toffline_all_provisioning_vnodes();\n\tserver.sv_cur_prov_records = 0;\n\n\t(void) resize_prov_table(max_concurrent_prov);\n\tCLEAR_HEAD(prov_allvnodes);\n\n\t/* trigger degraded reservations on offlined nodes */\n\tdegrade_offlined_nodes_reservations();\n\n\thook_track_recov();\n\n\t/* Check to see that jobs in the maintenance_jobs attribute on a node still exist\n\t * If they don't exist any more, remove them from a node's maintenance_jobs attribute\n\t */\n\tbuf = NULL;\n\tbuf_len = 0;\n\tfor (i = 0; i < svr_totnodes; i++) {\n\t\tstruct array_strings *arst;\n\t\tif (is_nattr_set(pbsndlist[i], ND_ATR_MaintJobs) && (arst = get_nattr_arst(pbsndlist[i], ND_ATR_MaintJobs))->as_usedptr > 0) {\n\t\t\tint j;\n\t\t\tint len = 0;\n\t\t\tint cur_len = 0;\n\t\t\tattribute new;\n\n\t\t\tfor (j = 0; j < arst->as_usedptr; j++)\n\t\t\t\tlen += strlen(arst->as_string[j]) + 1; /* 1 for the comma*/\n\n\t\t\tif (len > buf_len) {\n\t\t\t\tchar *tmp_buf;\n\t\t\t\ttmp_buf = realloc(buf, len + 1);\n\t\t\t\tif (tmp_buf == NULL) {\n\t\t\t\t\tfree(buf);\n\t\t\t\t\treturn (-1);\n\t\t\t\t} else {\n\t\t\t\t\tbuf = tmp_buf;\n\t\t\t\t\tbuf_len = len;\n\t\t\t\t}\n\t\t\t}\n\t\t\tbuf[0] = '\\0';\n\t\t\tfor (j = 0; j < arst->as_usedptr; j++) {\n\t\t\t\tif (find_job(arst->as_string[j]) == NULL) {\n\t\t\t\t\tstrncat(buf, arst->as_string[j], len);\n\t\t\t\t\tstrncat(buf, \",\", len);\n\t\t\t\t\tbuf[len] = 
'\\0';\n\t\t\t\t}\n\t\t\t}\n\t\t\t/* Did we find a string we need to remove? */\n\t\t\tcur_len = strlen(buf);\n\t\t\tif (cur_len > 0) {\n\t\t\t\tbuf[cur_len - 1] = '\\0'; /* remove trailing comma */\n\t\t\t\tclear_attr(&new, &node_attr_def[(int) ND_ATR_MaintJobs]);\n\t\t\t\tdecode_arst(&new, ATTR_NODE_MaintJobs, NULL, buf);\n\t\t\t\tset_arst(get_nattr(pbsndlist[i], ND_ATR_MaintJobs), &new, DECR);\n\t\t\t}\n\n\t\t\tif (arst->as_usedptr > 0)\n\t\t\t\tset_vnode_state(pbsndlist[i], INUSE_MAINTENANCE, Nd_State_Or);\n\t\t}\n\t}\n\tfree(buf);\n\n\t/* purge deleted hooks */\n\tphook = (hook *) GET_NEXT(svr_allhooks);\n\twhile (phook) {\n\t\tphook_current = phook;\n\t\tphook = (hook *) GET_NEXT(phook->hi_allhooks);\n\n\t\tif (phook_current->pending_delete && !has_pending_mom_action_delete(phook_current->hook_name))\n\t\t\thook_purge(phook_current, pbs_python_ext_free_python_script);\n\t}\n\tsend_rescdef(0);\n\thook_track_save(NULL, -1); /* refresh path_hooks_tracking file */\n\n\t(void) set_task(WORK_Immed, time_now, memory_debug_log, NULL);\n\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\treassign_resc - for a recovered running job, reassign the resources and\n *\t\tnodes to the job.\n *\n * @param[in,out]\tpjob\t- the job.\n *\n * @return\tvoid\n */\nstatic void\nreassign_resc(job *pjob)\n{\n\tint set_exec_vnode;\n\tint rc;\n\tint unset_resc_released = 0;\n\tchar *hoststr = get_jattr_str(pjob, JOB_ATR_exec_host);\n\tchar *hoststr2 = get_jattr_str(pjob, JOB_ATR_exec_host2);\n\tchar *vnodein;\n\tchar *vnodeout;\n\n\t/* safety check: if no hoststr, no node (hosts) assigned, just return */\n\tif (hoststr == NULL)\n\t\treturn;\n\n\tif ((is_jattr_set(pjob, JOB_ATR_exec_vnode)) == 0) {\n\t\t/*\n\t\t * if exec_vnode is not set, we must be dealing with a\n\t\t * pre-8.0 job.   Then we need to set exec_vnode anew based\n\t\t * on the select spec that was auto-generated when the job\n\t\t * was requeued and the existing exec_host.  
This is done in\n\t\t * the same way as when a \"qrun -H vn+vn+... jobid\" is done.\n\t\t */\n\t\tset_exec_vnode = 1;\n\t\tvnodein = hoststr;\n\t} else {\n\t\tset_exec_vnode = 0;\n\t\tvnodein = get_jattr_str(pjob, JOB_ATR_exec_vnode);\n\t}\n\n\trc = set_nodes((void *) pjob, JOB_OBJECT,\n\t\t       vnodein,\n\t\t       &vnodeout,\n\t\t       &hoststr,\n\t\t       &hoststr2,\n\t\t       set_exec_vnode,\n\t\t       TRUE);\n\n\tif (rc != 0) {\n\t\tsprintf(log_buffer, \"Unable to reallocate resources from nodes for job, error %d\", rc);\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_NOTICE,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t} else if (set_exec_vnode == 1) {\n\t\t/* need to recreate the exec_host/exec_vnode values */\n\t\tfree_jattr(pjob, JOB_ATR_exec_host);\n\t\tfree_jattr(pjob, JOB_ATR_exec_vnode);\n\t\tset_jattr_str_slim(pjob, JOB_ATR_exec_vnode, vnodeout, NULL);\n\t\tset_jattr_str_slim(pjob, JOB_ATR_exec_host, hoststr, NULL);\n\t}\n\n\tif ((rc == 0) && (is_jattr_set(pjob, JOB_ATR_exec_vnode_deallocated))) {\n\t\tchar *hstr = NULL;\n\t\tchar *hstr2 = NULL;\n\t\tchar *vnalloc = NULL;\n\t\tchar *new_exec_vnode_deallocated;\n\n\t\tnew_exec_vnode_deallocated = get_jattr_str(pjob, JOB_ATR_exec_vnode_deallocated);\n\n\t\trc = set_nodes((void *) pjob, JOB_OBJECT, new_exec_vnode_deallocated,\n\t\t\t       &vnalloc, &hstr, &hstr2, 1, TRUE);\n\t\tif (rc != 0) {\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_WARNING,\n\t\t\t\t  pjob->ji_qs.ji_jobid, \"warning: Failed to make some nodes aware of a deleted job\");\n\t\t}\n\t}\n\n\tif ((check_job_substate(pjob, JOB_SUBSTATE_SCHSUSP) || check_job_substate(pjob, JOB_SUBSTATE_SUSPEND)) &&\n\t    (is_jattr_set(pjob, JOB_ATR_resc_released))) {\n\t\t/*\n\t\t * Allocating resources back to a suspended job is tricky.\n\t\t * Suspended jobs only hold part of their resources.\n\t\t * If set_resc_assigned() is called for a job with JOB_ATR_resc_released set,\n\t\t * only some of the resources will be acted upon.  
Since this\n\t\t * is a fresh job from disk, we need to allocate all of\n\t\t * its resources to it before we partially release some.\n\t\t * We do this by temporarily unsetting the JOB_ATR_resc_released attribute while\n\t\t * restoring the job's resources.  This will allocate all of the\n\t\t * requested resources to the job.  We add the flag back to the job\n\t\t * and then decrement the resources released when the job was originally suspended.\n\t\t */\n\t\tmark_jattr_not_set(pjob, JOB_ATR_resc_released);\n\t\tunset_resc_released = 1;\n\t}\n\n\tset_resc_assigned((void *) pjob, 0, INCR);\n\n\tif (unset_resc_released == 1) {\n\t\tmark_jattr_set(pjob, JOB_ATR_resc_released);\n\t\tset_resc_assigned((void *) pjob, 0, DECR);\n\t}\n}\n\n/**\n * @brief\n * \t\tpbsd_init_job - decide what to do with the recovered job structure\n *\n *\t\tThe action depends on the type of initialization.\n *\n * @param[in,out]\tpjob\t- the job.\n * @param[in]\ttype\t\t- type of initialization.\n *\n * @return\tint\n * @retval\t0\t- success\n * @retval\t-1\t- error.\n */\nint\npbsd_init_job(job *pjob, int type)\n{\n\tchar newstate;\n\tint newsubstate;\n\n\t/* check if job belongs to a reservation or is a reservation job.  
If this is true\n\t * and the reservation is no longer possible, return (1) else return (0) */\n\tif (Rmv_if_resv_not_possible(pjob)) {\n\t\taccount_record(PBS_ACCT_ABT, pjob, \"\");\n\t\tsvr_mailowner(pjob, MAIL_ABORT, MAIL_NORMAL, msg_init_abt);\n\t\tcheck_block(pjob, msg_init_abt);\n\t\tjob_purge(pjob);\n\t\treturn 0;\n\t}\n\n\tpjob->ji_momhandle = -1;\n\tpjob->ji_mom_prot = PROT_INVALID;\n\n\t/* update at_server attribute in case name changed */\n\n\tfree_jattr(pjob, JOB_ATR_at_server);\n\tset_jattr_generic(pjob, JOB_ATR_at_server, server_name, NULL, SET);\n\n\t/* now based on the initialization type */\n\n\tif ((type == RECOV_COLD) || (type == RECOV_CREATE)) {\n\t\tneed_y_response(type, \"jobs exist\");\n\t\tinit_abt_job(pjob);\n\t} else {\n\n\t\tif (type != RECOV_HOT)\n\t\t\tpjob->ji_qs.ji_svrflags &= ~JOB_SVFLG_HOTSTART;\n\n\t\t/* make sure JOB_SVFLG_RescAssn is cleared,\t\t   */\n\t\t/* we will reassign resources if needed\tbased on the job's */\n\t\t/* substate (if the job had resources when server exited). */\n\t\t/* JOB_SVFLG_RescAssn is reset when the resources are\t   */\n\t\t/* reassigned by calling reassign_resc().\t\t   */\n\t\tpjob->ji_qs.ji_svrflags &= ~JOB_SVFLG_RescAssn;\n\n\t\t/* Update run_version if it is not set but run_count is,   */\n\t\t/* Likely means recovering a job from an older version     */\n\t\tif ((is_jattr_set(pjob, JOB_ATR_run_version) == 0) && is_jattr_set(pjob, JOB_ATR_runcount) != 0) {\n\t\t\tset_jattr_l_slim(pjob, JOB_ATR_run_version, get_jattr_long(pjob, JOB_ATR_runcount), SET);\n\t\t}\n\n\t\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_SubJob) {\n\t\t\tif ((pjob->ji_parentaj = find_arrayparent(pjob->ji_qs.ji_jobid)) == NULL) {\n\t\t\t\t/* parent job object not found */\n\t\t\t\tinit_abt_job(pjob);\n\t\t\t\treturn -1;\n\t\t\t}\n\n\t\t\tupdate_sj_parent(pjob->ji_parentaj, pjob, pjob->ji_qs.ji_jobid, JOB_STATE_LTR_EXPIRED, get_job_state(pjob));\n\t\t}\n\n\t\tswitch (get_job_substate(pjob)) {\n\n\t\t\tcase 
JOB_SUBSTATE_TRANSICM:\n\t\t\t\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) {\n\n\t\t\t\t\t/*\n\t\t\t\t\t * This server created the job, so client\n\t\t\t\t\t * was qsub (a transient client), it won't be\n\t\t\t\t\t * around to recommit, so auto-commit now\n\t\t\t\t\t */\n\n\t\t\t\t\tset_job_state(pjob, JOB_STATE_LTR_QUEUED);\n\t\t\t\t\tset_job_substate(pjob, JOB_SUBSTATE_QUEUED);\n\n\t\t\t\t\tif (pbsd_init_reque(pjob, CHANGE_STATE) == -1)\n\t\t\t\t\t\treturn -1;\n\t\t\t\t} else {\n\t\t\t\t\t/*\n\t\t\t\t\t * another server is sending, append to new job\n\t\t\t\t\t * list and wait for commit; need to clear\n\t\t\t\t\t * receiving sock number though\n\t\t\t\t\t */\n\t\t\t\t\tpjob->ji_qs.ji_un.ji_newt.ji_fromsock = -1;\n\t\t\t\t\tappend_link(&svr_newjobs, &pjob->ji_alljobs, pjob);\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase JOB_SUBSTATE_TRNOUT:\n\t\t\t\tset_job_state(pjob, JOB_STATE_LTR_QUEUED);\n\t\t\t\tset_job_substate(pjob, JOB_SUBSTATE_QUEUED);\n\t\t\t\t/* requeue as queued */\n\t\t\t\tif (pbsd_init_reque(pjob, CHANGE_STATE) == -1)\n\t\t\t\t\treturn -1;\n\t\t\t\tbreak;\n\n\t\t\tcase JOB_SUBSTATE_TRNOUTCM:\n\n\t\t\t\tif (check_job_state(pjob, JOB_STATE_LTR_RUNNING)) {\n\t\t\t\t\t/* was sending to Mom, requeue for now */\n\n\t\t\t\t\tsvr_evaljobstate(pjob, &newstate, &newsubstate, 1);\n\t\t\t\t\tsvr_setjobstate(pjob, newstate, newsubstate);\n\t\t\t\t} else {\n\t\t\t\t\t/* requeue as is - ready to commit */\n\n\t\t\t\t\t/* resend ready-to-commit */\n\t\t\t\t\tset_task(WORK_Immed, 0, resume_net_move, (void *) pjob);\n\t\t\t\t}\n\t\t\t\tif (pbsd_init_reque(pjob, KEEP_STATE) == -1)\n\t\t\t\t\treturn -1;\n\t\t\t\tbreak;\n\n\t\t\tcase JOB_SUBSTATE_QUEUED:\n\t\t\tcase JOB_SUBSTATE_PRESTAGEIN:\n\t\t\tcase JOB_SUBSTATE_STAGEIN:\n\t\t\tcase JOB_SUBSTATE_STAGECMP:\n\t\t\tcase JOB_SUBSTATE_STAGEFAIL:\n\t\t\tcase JOB_SUBSTATE_STAGEGO:\n\t\t\tcase JOB_SUBSTATE_HELD:\n\t\t\tcase JOB_SUBSTATE_SYNCHOLD:\n\t\t\tcase JOB_SUBSTATE_DEPNHOLD:\n\t\t\tcase JOB_SUBSTATE_WAITING:\n\t\t\t\tif 
(pbsd_init_reque(pjob, CHANGE_STATE) == -1)\n\t\t\t\t\treturn -1;\n\t\t\t\tbreak;\n\n\t\t\tcase JOB_SUBSTATE_PRERUN:\n\t\t\t\tif (pbsd_init_reque(pjob, KEEP_STATE) == -1)\n\t\t\t\t\treturn -1;\n\t\t\t\tbreak;\n\n\t\t\tcase JOB_SUBSTATE_PROVISION:\n\t\t\t\tif (is_jattr_set(pjob, JOB_ATR_prov_vnode)) /* If JOB_ATR_prov_vnode is set, free it */\n\t\t\t\t\tfree_jattr(pjob, JOB_ATR_prov_vnode);\n\t\t\t\tif (pbsd_init_reque(pjob, CHANGE_STATE) == -1)\n\t\t\t\t\treturn -1;\n\t\t\t\tbreak;\n\n\t\t\tcase JOB_SUBSTATE_RUNNING:\n\t\t\tcase JOB_SUBSTATE_SUSPEND:\n\t\t\tcase JOB_SUBSTATE_SCHSUSP:\n\t\t\tcase JOB_SUBSTATE_BEGUN:\n\t\t\t\tif (pbsd_init_reque(pjob, KEEP_STATE) == -1)\n\t\t\t\t\treturn -1;\n\t\t\t\tif (check_job_substate(pjob, JOB_SUBSTATE_RUNNING) ||\n\t\t\t\t    ((is_jattr_set(pjob, JOB_ATR_resc_released)) &&\n\t\t\t\t     (check_job_substate(pjob, JOB_SUBSTATE_SCHSUSP) ||\n\t\t\t\t      check_job_substate(pjob, JOB_SUBSTATE_SUSPEND)))) {\n\n\t\t\t\t\treassign_resc(pjob);\n\t\t\t\t\tif (type == RECOV_HOT)\n\t\t\t\t\t\tpjob->ji_qs.ji_svrflags |= JOB_SVFLG_HOTSTART;\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase JOB_SUBSTATE_SYNCRES:\n\n\t\t\t\t/* clear all dependent job ready flags */\n\n\t\t\t\tif (pbsd_init_reque(pjob, CHANGE_STATE) == -1)\n\t\t\t\t\treturn -1;\n\t\t\t\tbreak;\n\n\t\t\tcase JOB_SUBSTATE_TERM:\n\t\t\tcase JOB_SUBSTATE_EXITING:\n\t\t\tcase JOB_SUBSTATE_STAGEOUT:\n\t\t\tcase JOB_SUBSTATE_STAGEDEL:\n\t\t\tcase JOB_SUBSTATE_EXITED:\n\t\t\t\tset_task(WORK_Immed, 0, on_job_exit, (void *) pjob);\n\t\t\t\tif (pbsd_init_reque(pjob, KEEP_STATE) == -1)\n\t\t\t\t\treturn -1;\n\t\t\t\treassign_resc(pjob);\n\t\t\t\tbreak;\n\n\t\t\tcase JOB_SUBSTATE_ABORT:\n\t\t\t\t/* requeue job and if no nodes assigned, that's all */\n\t\t\t\tif (pbsd_init_reque(pjob, KEEP_STATE) == -1)\n\t\t\t\t\treturn -1;\n\t\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HasNodes) != 0) {\n\t\t\t\t\t/* has nodes so reassign */\n\t\t\t\t\tset_task(WORK_Immed, 0, on_job_exit, (void *) 
pjob);\n\t\t\t\t\treassign_resc(pjob);\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase JOB_SUBSTATE_MOVED:\n\t\t\tcase JOB_SUBSTATE_FAILED:\n\t\t\tcase JOB_SUBSTATE_FINISHED:\n\t\t\tcase JOB_SUBSTATE_TERMINATED:\n\t\t\t\tif (pbsd_init_reque(pjob, KEEP_STATE) == -1)\n\t\t\t\t\treturn -1;\n\t\t\t\tbreak;\n\n\t\t\tcase JOB_SUBSTATE_RERUN:\n\t\t\t\tif (check_job_state(pjob, JOB_STATE_LTR_EXITING))\n\t\t\t\t\tset_task(WORK_Immed, 0, on_job_rerun, (void *) pjob);\n\t\t\t\tif (pbsd_init_reque(pjob, KEEP_STATE) == -1)\n\t\t\t\t\treturn -1;\n\t\t\t\tbreak;\n\n\t\t\tcase JOB_SUBSTATE_RERUN1:\n\t\t\tcase JOB_SUBSTATE_RERUN2:\n\t\t\tcase JOB_SUBSTATE_RERUN3:\n\t\t\t\tset_task(WORK_Immed, 0, on_job_rerun, (void *) pjob);\n\t\t\t\tif (pbsd_init_reque(pjob, KEEP_STATE) == -1)\n\t\t\t\t\treturn -1;\n\t\t\t\tbreak;\n\n\t\t\tdefault:\n\t\t\t\t(void) sprintf(log_buffer,\n\t\t\t\t\t       msg_init_unkstate, get_job_substate(pjob));\n\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t  LOG_NOTICE,\n\t\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\tjob_abt(pjob, log_buffer);\n\t\t\t\treturn -1;\n\t\t}\n\n\t\t/* update entity limit sums for this job */\n\t\t(void) account_entity_limit_usages(pjob, NULL, NULL, INCR, ETLIM_ACC_ALL);\n\n\t\t/* if job has exec host of Mom, set addr and port based on hostname */\n\n\t\tif (pjob->ji_qs.ji_un_type == JOB_UNION_TYPE_EXEC) {\n\t\t\tpjob->ji_qs.ji_un.ji_exect.ji_momaddr = 0;\n\t\t\tpjob->ji_qs.ji_un.ji_exect.ji_momport = 0;\n\n\t\t\tif (is_jattr_set(pjob, JOB_ATR_exec_host)) {\n\t\t\t\tpbs_net_t new_momaddr;\n\t\t\t\tunsigned int new_momport;\n\n\t\t\t\tnew_momaddr =\n\t\t\t\t\tget_addr_of_nodebyname(\n\t\t\t\t\t\tget_jattr_str(pjob, JOB_ATR_exec_host), &new_momport);\n\n\t\t\t\tif (new_momaddr != 0) {\n\t\t\t\t\tpjob->ji_qs.ji_un.ji_exect.ji_momaddr = new_momaddr;\n\t\t\t\t\tpjob->ji_qs.ji_un.ji_exect.ji_momport = new_momport;\n\t\t\t\t} else {\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t\t  LOG_INFO, 
pjob->ji_qs.ji_jobid,\n\t\t\t\t\t\t  \"Failed to update mom address. Mom address not changed.\");\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tpbsd_init_resv - decide what to do with the recovered reservation structure.\n *\n *\t\tThe action depends on the type of initialization.\n *\n * @param[in,out]\tpresv\t- the reservation.\n * @param[in]\t\ttype\t- type of initialization (read-only=0 , or ownership=1)\n * \t\t\t\t\ttype is unused for now, will be used in later PRs\n *\n */\nvoid\npbsd_init_resv(resc_resv *presv, int type)\n{\n\trevert_alter_reservation(presv);\n\tis_resv_window_in_future(presv);\n\tset_old_subUniverse(presv);\n\n\t/* add resv to server list */\n\tappend_link(&svr_allresvs, &presv->ri_allresvs, presv);\n\tif (attach_queue_to_reservation(presv))\n\t\t/* reservation needed queue; failed to find it */\n\t\tlog_eventf(PBSEVENT_SYSTEM | PBSEVENT_ADMIN | PBSEVENT_DEBUG, PBS_EVENTCLASS_RESV,\n\t\t\t   LOG_NOTICE, msg_daemonname, msg_init_resvNOq, presv->ri_qs.ri_queue, presv->ri_qs.ri_resvID);\n\telse\n\t\tlog_eventf(PBSEVENT_SYSTEM | PBSEVENT_ADMIN | PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER,\n\t\t\t   LOG_INFO, msg_daemonname, msg_init_recovresv, presv->ri_qs.ri_resvID);\n}\n\n/**\n * @brief\n * \t\tpbsd_init_node - decide what to do with the recovered node structure.\n *\n *\t\tThe action depends on the type of initialization.\n *\n * @param[in,out]\tdbnode\t- the node recovered.\n * @param[in]\t\ttype\t- type of initialization (read-only=0 , or ownership=1)\n * \t\t\t\t\ttype is unused for now, will be used in later PRs\n *\n * @return\tptr to pbsnode\n * @retval\tNode structure\t- success\n * @retval\tNULL\t- error.\n */\nstruct pbsnode *\npbsd_init_node(pbs_db_node_info_t *dbnode, int type)\n{\n\ttime_t mom_modtime = 0;\n\tstruct pbsnode *np;\n\tsvrattrl *pal;\n\tint bad;\n\tint rc = 0;\n\tint perm = ATR_DFLAG_ACCESS | ATR_PERM_ALLOW_INDIRECT;\n\n\tmom_modtime = dbnode->mom_modtime;\n\n\tpal = 
GET_NEXT(dbnode->db_attr_list.attrs);\n\n\t/* now create node and subnodes */\n\trc = create_pbs_node2(dbnode->nd_name, pal, perm, &bad, &np, FALSE, TRUE); /* allow unknown resources */\n\tif (rc)\n\t\tnp = NULL;\n\n\tif (np) {\n\t\tif (mom_modtime)\n\t\t\tnp->nd_moms[0]->mi_modtime = mom_modtime;\n\n\t\tif (is_nattr_set(np, ND_ATR_vnode_pool) && get_nattr_long(np, ND_ATR_vnode_pool) > 0) {\n\t\t\tmominfo_t *pmom = np->nd_moms[0];\n\t\t\tif (pmom && (np == ((mom_svrinfo_t *) (pmom->mi_data))->msr_children[0])) {\n\t\t\t\t/* natural vnode being recovered, add to pool */\n\t\t\t\tadd_mom_to_pool(np->nd_moms[0]);\n\t\t\t}\n\t\t}\n\t} else {\n\t\tif (rc == PBSE_NODEEXIST)\n\t\t\tsprintf(log_buffer, \"duplicate node \\\"%s\\\"\", dbnode->nd_name);\n\t\telse\n\t\t\tsprintf(log_buffer, \"could not create node \\\"%s\\\", error = %d\", dbnode->nd_name, rc);\n\t\tlog_errf(-1, __func__, log_buffer);\n\t}\n\treturn np;\n}\n\n/**\n * @brief\n * \t\tpbsd_init_reque - re-enqueue the job into the queue it was in\n *\n *\t\tupdate the state, typically to some form of QUEUED.\n *\t\tmake sure substate attributes match actual value.\n *\n * @param[in,out]\tpjob\t- the job.\n * @param[in]\tchange_state- possible  values,\n * \t\t\t\t\t\t\t\tCHANGE_STATE - 1\n * \t\t\t\t\t\t\t\tKEEP_STATE\t - 0\n *\n * @return\tint\n * @retval\t0\t- success\n * @retval\t-1\t- error.\n */\nstatic int\npbsd_init_reque(job *pjob, int change_state)\n{\n\tchar logbuf[384];\n\tchar newstate;\n\tint newsubstate;\n\tint rc;\n\n\t/* re-enqueue the job into the queue it was in */\n\n\tif (change_state) {\n\t\t/* update the state, typically to some form of QUEUED */\n\t\tunset_extra_attributes(pjob);\n\t\tsvr_evaljobstate(pjob, &newstate, &newsubstate, 1);\n\t\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_SubJob)\n\t\t\tupdate_sj_parent(pjob->ji_parentaj, pjob, pjob->ji_qs.ji_jobid, get_job_state(pjob), newstate);\n\t\tset_job_state(pjob, newstate);\n\t\tset_job_substate(pjob, newsubstate);\n\t}\n\n\t/* make sure 
substate attributes match actual value */\n\tpost_attr_set(get_jattr(pjob, JOB_ATR_substate));\n\n\tif ((rc = svr_enquejob(pjob, NULL)) == 0) {\n\t\tsprintf(logbuf, msg_init_substate, get_job_substate(pjob));\n\t\t(void) strcat(logbuf, msg_init_queued);\n\t\t(void) strcat(logbuf, pjob->ji_qs.ji_queue);\n\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_ADMIN | PBSEVENT_DEBUG,\n\t\t\t  PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid, logbuf);\n\t} else {\n\t\tif (rc == PBSE_UNKQUE) {\n\n\t\t\t/* Oops, this should never happen */\n\n\t\t\tsprintf(logbuf, \"%s %s; job %s queue %s\",\n\t\t\t\tmsg_err_noqueue, msg_err_noqueue1,\n\t\t\t\tpjob->ji_qs.ji_jobid, pjob->ji_qs.ji_queue);\n\t\t} else if (rc == PBSE_UNKRESC) {\n\t\t\tsprintf(logbuf, \"%s %s; job %s\",\n\t\t\t\tmsg_err_noqueue, msg_unkresc,\n\t\t\t\tpjob->ji_qs.ji_jobid);\n\t\t} else {\n\t\t\tsprintf(logbuf, \"%s; job %s queue %s error %d\",\n\t\t\t\tmsg_err_noqueue,\n\t\t\t\tpjob->ji_qs.ji_jobid, pjob->ji_qs.ji_queue, rc);\n\t\t}\n\t\tlog_err(-1, \"pbsd_init\", logbuf);\n\t\t(void) job_abt(pjob, logbuf);\n\t\treturn (-1);\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tcatch_child() - the signal handler for SIGCHLD.\n *\t\tSet a flag for the main loop to know that a child process\n *\t\tneeds to be reaped.\n *\n * @param[in]\tsig\t- not used in fun.\n *\n * @return\tvoid\n */\nstatic void\ncatch_child(int sig)\n{\n\textern int reap_child_flag;\n\n\treap_child_flag = 1;\n}\n\n/**\n * @brief\n * \t\tchange_logs - signal handler for SIGHUP\n *\t\tCauses the accounting file and log file to be closed and reopened.\n *\t\tThus the old one can be renamed.\n *\n * @param[in]\tsig\t- not used in fun.\n *\n * @return\tvoid\n */\nstatic void\nchange_logs(int sig)\n{\n\tacct_close();\n\tlog_close(1);\n\tlog_open(log_file, path_log);\n\t(void) acct_open(acct_file);\n}\n\n/**\n * @brief\n * \t\tstop_me - signal handler for all caught signals which terminate the server\n *\n *\t\tRecord the signal so a log_event call 
can be made outside of\n *\t\tthe handler, and set the server state to indicate we should shut down.\n *\n * @param[in]\tsig\t- not used in fun.\n *\n * @return\tvoid\n */\n/*ARGSUSED*/\nstatic void\nstop_me(int sig)\n{\n\tset_sattr_l_slim(SVR_ATR_State, SV_STATE_SHUTSIG, SET);\n}\n/**\n * @brief\n * \t\tchk_save_file - check whether data can be saved into file.\n *\n *\t\tchecks include the file permission checks and regular file check.\n *\n * @param[in]\tfilename\t- file which needs to be checked.\n *\n * @return\terror code\n * @retval\t0\t- success\n * @retval\t-1\t- failure\n */\nint\nchk_save_file(char *filename)\n{\n\tstruct stat sb;\n\n\tif (stat(filename, &sb) == -1)\n\t\treturn (errno);\n\n\tif (S_ISREG(sb.st_mode))\n\t\treturn (0);\n\treturn (-1);\n}\n\n/**\n * @brief\n * \t\tresume_net_move - call net_move() to complete the routing of a job\n *\t\tThis is invoked via a work task created on recovery of a job\n *\t\tin JOB_SUBSTATE_TRNOUTCM state.\n *\n * @param[in]\tptask\t- work task created on recovery of a job\n *\n * @return\tvoid\n */\nstatic void\nresume_net_move(struct work_task *ptask)\n{\n\tnet_move((job *) ptask->wt_parm1, 0);\n}\n\n/**\n * @brief\n * \t\tneed_y_response - on create/clean initialization that would delete\n *\t\tinformation, obtain the operator approval first.\n *\n * @param[in]\ttype\t- server initialization mode\n * @param[in]\ttxt\t- text field in msg_startup3 string\n *\n * @return\tvoid\n *\n * @par MT-safe: No\n */\nstatic void\nneed_y_response(int type, char *txt)\n{\n\tstatic int answ = -2;\n\tint c;\n\tchar *t[] = {\"Hot\",\n\t\t     \"Warm\",\n\t\t     \"Cold\",\n\t\t     \"Create\"};\n\n\tchar *tp;\n\n\tif (answ > 0)\n\t\treturn; /* already gotten a response */\n\n\tfflush(stdin);\n\tif ((type > RECOV_CREATE) || (type < RECOV_HOT)) {\n\t\tstop_db();\n\t\texit(1);\n\t}\n\n\ttp = t[type];\n\n\tprintf(msg_startup3, msg_daemonname, server_name, tp, txt);\n\twhile (1) {\n\t\tansw = getchar();\n\t\tc = answ;\n\t\twhile ((c 
!= '\\n') && (c != EOF))\n\t\t\tc = getchar();\n\t\tswitch (answ) {\n\t\t\tcase 'y':\n\t\t\tcase 'Y':\n\t\t\t\treturn;\n\n\t\t\tcase EOF:\n\t\t\tcase '\\n':\n\t\t\tcase 'n':\n\t\t\tcase 'N':\n\t\t\t\tprintf(\"PBS server %s initialization aborted\\n\", server_name);\n\t\t\t\tstop_db();\n\t\t\t\texit(0);\n\t\t}\n\t\tprintf(\"y(es) or n(o) please:\\n\");\n\t}\n}\n\n/**\n * @brief\n * \t\tinit_abt_job() - log and email owner message that job is being aborted at\n *\t\tinitialization; then purge job (must be called after job is enqueued).\n *\n * @param[in]\tpjob\t- job\n *\n * @return\tvoid\n */\nstatic void\ninit_abt_job(job *pjob)\n{\n\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_ADMIN | PBSEVENT_DEBUG,\n\t\t  PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t  pjob->ji_qs.ji_jobid, msg_init_abt);\n\tsvr_mailowner(pjob, MAIL_ABORT, MAIL_NORMAL, msg_init_abt);\n\tcheck_block(pjob, msg_init_abt);\n\tjob_purge(pjob);\n}\n\n/**\n * @brief\n * \t\tRmv_if_resv_not_possible - If the job belongs to a reservation that\n *\t\tis no longer possible then report back that it should not be requeued.\n *\n * \t\tIf the job is in a standing reservation queue then do not check whether it is\n * \t\tviable as this will be handled as part of the end event for the occurrence.\n * \t\tNote that the end event is added to the work task by remove_delete_resvs.\n *\n * @param[in,out]\tpjob\t- reservation job\n *\n * @return\treturn code\n * @retval\t0\t- OK to requeue\n * @retval\t1\t- should not be requeued\n */\nstatic int\nRmv_if_resv_not_possible(job *pjob)\n{\n\tint rc = 0; /*assume OK to requeue*/\n\tresc_resv *presv;\n\tpbs_queue *pque;\n\n\tif ((pque = find_queuebyname(pjob->ji_qs.ji_queue)) != 0) {\n\t\tif ((presv = pque->qu_resvp) != 0) {\n\n\t\t\t/*we are dealing with a job in a reservation*/\n\n\t\t\tpjob->ji_myResv = presv;\n\n\t\t\t/* If a standing reservation then ignore the check for end time\n\t\t\t\t* The behavior of a standing reservation differs from that of an\n\t\t\t\t* advance one in that 
only running jobs are deleted at the end of\n\t\t\t\t*  an occurrence (be it missed or not).\n\t\t\t\t*/\n\t\t\tif (get_rattr_long(presv, RESV_ATR_resv_count) > 1)\n\t\t\t\treturn 0;\n\n\t\t\tif (presv->ri_qs.ri_etime < time_now)\n\t\t\t\trc = 1;\n\t\t}\n\t}\n\treturn (rc);\n}\n\n/**\n * @brief\n *  \tattach_queue_to_reservation - if the reservation happens to\n *\t\tbe supported by a pbs_queue, find the queue and attach\n *\t\tit to the reservation\n *\n * @param[in,out]\tpresv\t- reservation.\n *\n * @return\tint\n * @retval\t0\t- success\n * @retval\t-1\t- failure\n */\nstatic int\nattach_queue_to_reservation(resc_resv *presv)\n{\n\tif (presv == NULL)\n\t\treturn (0);\n\tpresv->ri_qp = find_queuebyname(presv->ri_qs.ri_queue);\n\n\tif (presv->ri_qp) {\n\t\t/*resv points to queue and queue points back*/\n\t\tpresv->ri_qp->qu_resvp = presv;\n\t\treturn (0);\n\t} else\n\t\treturn (-1);\n}\n\n/**\n * @brief\n * \t\tcall_log_license - call the routine to log the floating license info\n *\n * @param[in]\tptask\t- work task structure.\n *\n * @return\tvoid\n */\nstatic void\ncall_log_license(struct work_task *ptask)\n{\n\tint fd;\n\tlong ntime;\n\tstruct tm *tms;\n\n\t/* log the floating license info */\n\n\tlog_licenses(&license_counts.licenses_high_use);\n\n\t/* reset values for time periods that have passed */\n\n\tlicense_counts.licenses_high_use.lu_max_hr = 0;\n\tntime = ptask->wt_event;\n\ttms = localtime((time_t *) &ntime);\n\tif (tms->tm_mday != license_counts.licenses_high_use.lu_day) {\n\t\tlicense_counts.licenses_high_use.lu_max_day = 0;\n\t\tlicense_counts.licenses_high_use.lu_day = tms->tm_mday;\n\t}\n\tif (tms->tm_mon != license_counts.licenses_high_use.lu_month) {\n\t\tlicense_counts.licenses_high_use.lu_max_month = 0;\n\t\tlicense_counts.licenses_high_use.lu_month = tms->tm_mon;\n\t}\n\n\t/* write current info to file */\n\tfd = open(path_usedlicenses, O_WRONLY | O_CREAT | O_TRUNC, 0600);\n\tif (fd != -1) {\n\t\tif (write(fd, 
&license_counts.licenses_high_use, sizeof(license_counts.licenses_high_use)) == -1)\n\t\t\tlog_errf(-1, __func__, \"write failed. ERR : %s\", strerror(errno));\n\t\tclose(fd);\n\t}\n\n\t/* call myself again at the top of the next hour */\n\tntime = ((ntime + 3601) / 3600) * 3600;\n\t(void) set_task(WORK_Timed, ntime, call_log_license, 0);\n}\n"
  },
  {
    "path": "src/server/pbsd_main.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @brief\n * \t\tThe entry point function for pbs_daemon.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <sys/param.h>\n#include <netinet/in.h>\n#include <sys/wait.h>\n#include <netdb.h>\n#include <unistd.h>\n#include <signal.h>\n#ifdef _POSIX_MEMLOCK\n#include <sys/mman.h>\n#endif /* _POSIX_MEMLOCK */\n\n#include \"pbs_ifl.h\"\n#include <assert.h>\n#include <ctype.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <time.h>\n\n#include \"ticket.h\"\n#ifdef linux\n#include <sys/prctl.h>\n#endif\n\n#include \"list_link.h\"\n#include \"work_task.h\"\n#include \"log.h\"\n#include \"server_limits.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"queue.h\"\n#include \"server.h\"\n#include \"net_connect.h\"\n#include \"libpbs.h\"\n#include \"credential.h\"\n#include \"batch_request.h\"\n#include \"pbs_idx.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include <libutil.h>\n#include \"tracking.h\"\n#include \"acct.h\"\n#include \"sched_cmds.h\"\n#include \"tpp.h\"\n#include \"dis.h\"\n#include \"libsec.h\"\n#include \"pbs_version.h\"\n#include \"pbs_license.h\"\n#include \"hook.h\"\n#include \"pbs_ecl.h\"\n#include 
\"provision.h\"\n#include \"pbs_db.h\"\n#include \"pbs_sched.h\"\n#include \"pbs_share.h\"\n#include <pbs_python.h> /* for python interpreter */\n#include \"auth.h\"\n\n#include \"pbs_v1_module_common.i\"\n\n/* External functions called */\n\nextern int pbsd_init(int);\nextern void shutdown_ack();\nextern int takeover_from_secondary(void);\nextern int be_secondary(time_t sec);\nextern void set_srv_prov_attributes();\nextern int connect_to_db(int);\nextern void stop_db();\n#ifdef NAS /* localmod 005 */\nextern int chk_and_update_db_svrhost();\n#endif /* localmod 005 */\n\n/* External data items */\nextern pbs_list_head svr_requests;\nextern char *msg_err_malloc;\nextern int pbs_failover_active;\n\n/* Local Private Functions */\n\nstatic int get_port(char *, unsigned int *, pbs_net_t *);\nstatic time_t next_task();\nstatic int start_hot_jobs();\nstatic void lock_out(int, int);\n#define HOT_START_PING_RATE 15\n\n/* Global Data Items */\n\nint stalone = 0; /* is program running not as a service ? 
*/\nchar *acct_file = NULL;\nchar daemonname[PBS_MAXHOSTNAME + 8];\nint used_unix_licenses = 0;\nint used_linix_licenses = 0;\nchar *log_file = NULL;\nchar *path_acct;\nchar *path_usedlicenses;\nchar path_log[MAXPATHLEN + 1];\nchar *path_priv;\nchar *path_jobs;\nchar *path_hooks_tracking;\nchar *path_users;\nchar *path_hooks_rescdef;\nchar *path_spool;\nchar *path_track;\nchar *path_svrlive;\nextern char *path_prov_track;\nchar *path_secondaryact;\nchar *pbs_o_host = \"PBS_O_HOST\";\npbs_net_t pbs_mom_addr;\nunsigned int pbs_mom_port;\nunsigned int pbs_rm_port;\npbs_net_t pbs_server_addr;\nunsigned int pbs_server_port_dis;\nint reap_child_flag = 0;\ntime_t secondary_delay = 30;\npbs_sched *dflt_scheduler = NULL; /* the default scheduler */\nint shutdown_who;\t\t  /* see req_shutdown() */\nchar *mom_host = server_host;\nlong new_log_event_mask = 0;\nint server_init_type = RECOV_WARM;\npbs_list_head svr_deferred_req; /* list of lists, one for each scheduler */\npbs_list_head svr_newjobs; /* list of incoming new jobs        */\npbs_list_head svr_allscheds;\nextern pbs_list_head svr_creds_cache; /* all credentials available to send */\nstruct batch_request *saved_takeover_req;\nint svr_unsent_qrun_req = 0; /* Set to 1 for scheduling unsent qrun requests */\n\nvoid *jobs_idx;\nvoid *queues_idx;\nvoid *resvs_idx;\n\nsigset_t allsigs;\n\n/* private data */\nstatic char *suffix_slash = \"/\";\nstatic int brought_up_alt_sched = 0;\nvoid stop_db();\nextern void mark_nodes_unknown(int);\n\n/*\n * Used only by the TPP layer, to ping nodes only if the connection to the\n * local router to the server is up.\n * Initially set the connection to up, so that first time ping happens\n * by default.\n */\nint tpp_network_up = 0;\n\n/**\n * @brief\n * \t\tThe handler that is called by TPP layer when the connection to the local\n * \t\trouter is restored\n *\n * @param[in]\tdata\t- Any associated data passed from TPP layer\n *\n * @return\tvoid\n */\nvoid\nnet_restore_handler(void 
*data)\n{\n\tlog_event(PBSEVENT_ERROR | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER, LOG_ALERT, __func__, \"net restore handler called\");\n\ttpp_network_up = 1;\n}\n\n/**\n * @brief\n * \t\tThe handler that is called by TPP layer when the connection to the local\n * \t\trouter goes down\n *\n * @param[in]\tdata\t- Any associated data passed from TPP layer\n *\n * @return\tvoid\n */\nvoid\nnet_down_handler(void *data)\n{\n\tif (tpp_network_up == 1) {\n\t\ttpp_network_up = 0;\n\t\t/* now loop and set all nodes to down */\n\t\tlog_event(PBSEVENT_ERROR | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER, LOG_ALERT, __func__, \"marking all nodes unknown\");\n\t\tmark_nodes_unknown(1);\n\t}\n}\n\nstatic int lockfds = -1;\nstatic int already_forked = 0; /* we check this variable even in non-debug mode, so don't conditionally compile it */\nstatic int background = 0;\n\n#ifndef DEBUG\n/**\n * @brief\n *\t\tForks a background process and continues on that, while\n * \t\texiting the foreground process. It also sets the child process to\n * \t\tbecome the session leader. This function is available only on non-Windows\n * \t\tplatforms and in non-debug mode.\n *\n * @return\tpid_t\t- sid of the child process (result of setsid)\n * @retval       >0\t- sid of the child process.\n * @retval       -1\t- Fork or setsid failed.\n */\npid_t\ngo_to_background()\n{\n\tpid_t sid = -1;\n\tint rc;\n\n\tlock_out(lockfds, F_UNLCK);\n\trc = fork();\n\tif (rc == -1) { /* fork failed */\n\t\tlog_err(errno, msg_daemonname, \"fork failed\");\n\t\treturn ((pid_t) -1);\n\t}\n\tif (rc > 0)\n\t\texit(0); /* parent goes away, allowing booting to continue */\n\n\tlock_out(lockfds, F_WRLCK);\n\tif ((sid = setsid()) == -1) {\n\t\tlog_err(errno, msg_daemonname, \"setsid failed\");\n\t\treturn ((pid_t) -1);\n\t}\n\tpbs_close_stdfiles();\n\talready_forked = 1;\n\treturn sid;\n}\n#endif /* !DEBUG */\n\n/**\n * @brief\n * \t\tRead a message from a TPP stream. 
Only one kind of message\n * \t\tis expected -- Inter Server requests from MOM's.\n *\n * @param[in]\tstream\t- TPP stream from which message is read.\n *\n * @return\tvoid\n */\nvoid\ndo_tpp(int stream)\n{\n\tint ret, proto, version;\n\tvoid is_request(int, int);\n\tvoid stream_eof(int, int, char *);\n\n\tDIS_tpp_funcs();\n\tproto = disrsi(stream, &ret);\n\tif (ret != DIS_SUCCESS) {\n\t\tDBPRT((\"tpp read failure: ret: %d, proto: %d\\n\", ret, proto));\n\t\tstream_eof(stream, ret, NULL);\n\t\treturn;\n\t}\n\tversion = disrsi(stream, &ret);\n\tif (ret != DIS_SUCCESS) {\n\t\tDBPRT((\"%s: no protocol version number %s\\n\",\n\t\t       __func__, dis_emsg[ret]))\n\t\tstream_eof(stream, ret, NULL);\n\t\treturn;\n\t}\n\n\tswitch (proto) {\n\t\tcase IS_PROTOCOL:\n\t\t\tDBPRT((\"%s: got an inter-server request\\n\", __func__))\n\t\t\tis_request(stream, version);\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tDBPRT((\"%s: unknown request %d\\n\", __func__, proto))\n\t\t\tstream_eof(stream, ret, NULL);\n\t\t\tbreak;\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \t\tRead the TPP stream using tpp_poll and invoke do_tpp using that stream.\n *\n * @param[in]\tfd\t- not used.\n *\n * @return\tvoid\n */\nvoid\ntpp_request(int fd)\n{\n\tint iloop;\n\tint rpp_max_pkt_check = RPP_MAX_PKT_CHECK_DEFAULT;\n\n\t/*\n\t * Interleave TPP processing with batch request processing.\n\t * Certain things like hook/short-job propagation can generate a\n\t * huge amount of TPP traffic that can make batch processing\n\t * appear sluggish if not interleaved.\n\t *\n\t */\n\tif (is_sattr_set(SVR_ATR_rpp_max_pkt_check))\n\t\trpp_max_pkt_check = get_sattr_long(SVR_ATR_rpp_max_pkt_check);\n\n\tfor (iloop = 0; iloop < rpp_max_pkt_check; iloop++) {\n\t\tint stream;\n\n\t\tif ((stream = tpp_poll()) == -1) {\n\t\t\tlog_err(errno, __func__, \"tpp_poll\");\n\t\t\tbreak;\n\t\t}\n\t\tif (stream == -2)\n\t\t\tbreak;\n\t\tdo_tpp(stream);\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \t\tbuild_path - build the pathname for a PBS 
directory\n *\n * @param[in]\tparent\t- parent directory name (dirname)\n * @param[in]\tname\t- sub directory name\n * @param[in]\tsufix\t- suffix string to append\n *\n * @return\tPBS directory\n */\n\nchar *\nbuild_path(char *parent, char *name, char *sufix)\n{\n\tint prefixslash;\n\tchar *ppath;\n\tsize_t len;\n\n\t/*\n\t * allocate space for the names + maybe a slash between + the suffix\n\t */\n\n\tif (*(parent + strlen(parent) - 1) == '/')\n\t\tprefixslash = 0;\n\telse\n\t\tprefixslash = 1;\n\n\tlen = strlen(parent) + strlen(name) + prefixslash + 1;\n\tif (sufix)\n\t\tlen += strlen(sufix);\n\tppath = malloc(len);\n\tif (ppath) {\n\t\t(void) strcpy(ppath, parent);\n\t\tif (prefixslash)\n\t\t\t(void) strcat(ppath, \"/\");\n\t\t(void) strcat(ppath, name);\n\t\tif (sufix)\n\t\t\t(void) strcat(ppath, sufix);\n\t\treturn (ppath);\n\t} else {\n\t\tlog_err(errno, \"build_path\", msg_err_malloc);\n\t\tlog_close(1);\n\t\texit(3);\n\t}\n\t/*NOTREACHED*/\n}\n\n#ifndef DEBUG\n/**\n * @brief\n * \t\tpbs_close_stdfiles - redirect stdin, stdout and stderr to /dev/null\n *\t\tNot done if compiled with debug\n *\n * @par MT-safe: No\n */\nvoid\npbs_close_stdfiles(void)\n{\n\tstatic int already_done = 0;\n\n\tif (!already_done) {\n\t\tFILE *dummyfile;\n\n\t\t(void) fclose(stdin);\n\t\t(void) fclose(stdout);\n\t\t(void) fclose(stderr);\n\n\t\tdummyfile = fopen(NULL_DEVICE, \"r\");\n\t\tassert((dummyfile != 0) && (fileno(dummyfile) == 0));\n\n\t\tdummyfile = fopen(NULL_DEVICE, \"w\");\n\t\tassert((dummyfile != 0) && (fileno(dummyfile) == 1));\n\t\tdummyfile = fopen(NULL_DEVICE, \"w\");\n\t\tassert((dummyfile != 0) && (fileno(dummyfile) == 2));\n\t\talready_done = 1;\n\t}\n}\n#endif /* DEBUG */\n\n/**\n * @brief\n * \t\tclear_exec_vnode - clear the exec_vnode attribute\n *\t\tThis is done when the server is coming out of HOT start (first\n *\t\tregular RUN cycle).  
Jobs which were running when the Server was\n *\t\tshut down may have their exec_vnode left to assist in HOT start.\n *\t\tIf left set, the job is trapped into requiring those nodes.\n *\t\tClear on any job not running and without a restart file.\n */\nstatic void\nclear_exec_vnode()\n{\n\tjob *pjob;\n\n\tfor (pjob = (job *) GET_NEXT(svr_alljobs); pjob;\n\t     pjob = (job *) GET_NEXT(pjob->ji_alljobs)) {\n\t\tif ((!check_job_state(pjob, JOB_STATE_LTR_RUNNING)) &&\n\t\t    (!check_job_state(pjob, JOB_STATE_LTR_FINISHED)) &&\n\t\t    (!check_job_state(pjob, JOB_STATE_LTR_MOVED)) &&\n\t\t    (!check_job_state(pjob, JOB_STATE_LTR_EXITING))) {\n\t\t\tif (is_jattr_set(pjob, JOB_ATR_exec_vnode) && (pjob->ji_qs.ji_svrflags & JOB_SVFLG_CHKPT) == 0) {\n\t\t\t\tfree_jattr(pjob, JOB_ATR_exec_vnode);\n\t\t\t\tfree_jattr(pjob, JOB_ATR_exec_host);\n\t\t\t\tfree_jattr(pjob, JOB_ATR_exec_host2);\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\treap_child() - reap dead child processes\n *\n * \t\tCollect child status and add to work list entry for that child.\n * \t\tThe list entry is marked as immediate to show the child is gone and\n * \t\tsvr_delay_entry is incremented to indicate to next_task() to check for it.\n */\n\nstatic void\nreap_child(void)\n{\n\tstruct work_task *ptask;\n\tpid_t pid;\n\tint statloc;\n\n\twhile (1) {\n\t\tif ((pid = waitpid((pid_t) -1, &statloc, WNOHANG)) == (pid_t) -1) {\n\t\t\tif (errno == ECHILD) {\n\t\t\t\treap_child_flag = 0;\n\t\t\t\treturn;\n\t\t\t} else if (errno == EINTR) {\n\t\t\t\tcontinue;\n\t\t\t} else {\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else if (pid == 0) {\n\t\t\treap_child_flag = 0;\n\t\t\treturn;\n\t\t}\n\t\tptask = (struct work_task *) GET_NEXT(task_list_event);\n\t\twhile (ptask) {\n\t\t\tif ((ptask->wt_type == WORK_Deferred_Child) &&\n\t\t\t    (ptask->wt_event == pid)) {\n\t\t\t\tptask->wt_type = WORK_Deferred_Cmp;\n\t\t\t\tptask->wt_aux = (int) statloc; /* exit status */\n\t\t\t\tsvr_delay_entry++;\t       /* see next_task() 
*/\n\t\t\t}\n\t\t\tptask = (struct work_task *) GET_NEXT(ptask->wt_linkevent);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\tthis function handles auth related data before process_request()\n *\n * @param[in] conn - connection data\n *\n * @return\tint\n * @retval\t>0\tdata ready\n * @retval\t0\tno data ready\n * @retval\t-1\terror\n * @retval\t-2\ton EOF\n */\nint\ntcp_pre_process(conn_t *conn)\n{\n\tchar errbuf[LOG_BUF_SIZE];\n\tint rc;\n\n\tif (conn->cn_auth_config == NULL)\n\t\treturn 1;\n\n\tDIS_tcp_funcs();\n\tif (conn->cn_auth_config->encrypt_method[0] != '\\0') {\n\t\trc = transport_chan_get_ctx_status(conn->cn_sock, FOR_ENCRYPT);\n\t\tif (rc == (int) AUTH_STATUS_UNKNOWN)\n\t\t\treturn 1;\n\n\t\tif (rc < (int) AUTH_STATUS_CTX_READY) {\n\t\t\terrbuf[0] = '\\0';\n\t\t\trc = engage_server_auth(conn->cn_sock, conn->cn_hostname, FOR_ENCRYPT, AUTH_SERVER, errbuf, sizeof(errbuf));\n\t\t\tif (errbuf[0] != '\\0') {\n\t\t\t\tif (rc != 0)\n\t\t\t\t\tlog_event(PBSEVENT_ERROR | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER, LOG_ERR, __func__, errbuf);\n\t\t\t\telse\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER, LOG_DEBUG, __func__, errbuf);\n\t\t\t}\n\t\t\treturn rc;\n\t\t}\n\t}\n\n\trc = transport_chan_get_ctx_status(conn->cn_sock, FOR_AUTH);\n\tif (rc == (int) AUTH_STATUS_UNKNOWN)\n\t\treturn 1;\n\n\tif (rc < (int) AUTH_STATUS_CTX_READY) {\n\t\terrbuf[0] = '\\0';\n\t\trc = engage_server_auth(conn->cn_sock, conn->cn_hostname, FOR_AUTH, AUTH_SERVER, errbuf, sizeof(errbuf));\n\t\tif (errbuf[0] != '\\0') {\n\t\t\tif (rc != 0)\n\t\t\t\tlog_event(PBSEVENT_ERROR | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER, LOG_ERR, __func__, errbuf);\n\t\t\telse\n\t\t\t\tlog_event(PBSEVENT_DEBUG | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER, LOG_DEBUG, __func__, errbuf);\n\t\t}\n\t\treturn rc;\n\t}\n\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tmain - the initialization and main loop of pbs_daemon\n *\n * @param[in]\targc\t- argument count.\n * @param[in]\targv\t- argument values.\n *\n * 
@return\terror code\n * @retval\t0\t- success\n * @retval\t!=0\t- failed\n *\n * @par MT-safe: No\n */\nint\nmain(int argc, char **argv)\n{\n\tchar *nodename = NULL;\n\tint are_primary;\n\tint c, rc;\n\tint i;\n\tint tppfd; /* fd to receive is HELLO's */\n\tstruct tpp_config tpp_conf;\n\tchar lockfile[MAXPATHLEN + 1];\n\tchar **origevp;\n\tchar *pc;\n\tpbs_queue *pque;\n\tchar *servicename;\n\ttime_t svrlivetime;\n\tint sock;\n\tstruct stat sb_sa;\n\tstruct batch_request *periodic_req;\n\tchar hook_msg[HOOK_MSG_SIZE];\n\tpbs_sched *psched;\n\tchar *keep_daemon_name = NULL;\n\tpid_t sid = -1;\n\tlong state;\n\ttime_t waittime;\n#ifdef _POSIX_MEMLOCK\n\tint do_mlockall = 0;\n#endif /* _POSIX_MEMLOCK */\n\textern char **environ;\n\n\tstatic struct {\n\t\tchar *it_name;\n\t\tint it_type;\n\t} init_name_type[] = {\n\t\t{\"hot\", RECOV_HOT},\n\t\t{\"warm\", RECOV_WARM},\n\t\t{\"cold\", RECOV_COLD},\n\t\t{\"create\", RECOV_CREATE},\n\t\t{\"updatedb\", RECOV_UPDATEDB},\n\t\t{\"\", RECOV_Invalid}};\n\tstatic int first_run = 1;\n\n\textern int optind;\n\textern char *optarg;\n\textern char *msg_svrdown;  /* log message */\n\textern char *msg_startup1; /* log message */\n\textern char *msg_startup2; /* log message */\n\t/* python externs */\n\textern void pbs_python_svr_initialize_interpreter_data(struct python_interpreter_data * interp_data);\n\textern void pbs_python_svr_destroy_interpreter_data(struct python_interpreter_data * interp_data);\n\n\t/* set python interp data */\n\tsvr_interp_data.data_initialized = 0;\n\tsvr_interp_data.init_interpreter_data = pbs_python_svr_initialize_interpreter_data;\n\tsvr_interp_data.destroy_interpreter_data = pbs_python_svr_destroy_interpreter_data;\n\t/*the real deal or just pbs_version and exit*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\t/* As a security measure and to make sure all file descriptors\t*/\n\t/* are available to us,  close all above stderr\t\t\t*/\n\ti = sysconf(_SC_OPEN_MAX);\n\twhile (--i > 2)\n\t\t(void) close(i); 
/* close any file desc left open by parent */\n\n\t/* If we are not run with real and effective uid of 0, forget it */\n\tif ((getuid() != 0) || (geteuid() != 0)) {\n\t\tfprintf(stderr, \"%s: Must be run by root\\n\", argv[0]);\n\t\treturn (1);\n\t}\n\n\t/* set standard umask */\n\tumask(022);\n\n\t/* set single threaded mode */\n\tpbs_client_thread_set_single_threaded_mode();\n\t/* disable attribute verification */\n\tset_no_attribute_verification();\n\n\t/* initialize the thread context */\n\tif (pbs_client_thread_init_thread_context() != 0) {\n\t\tlog_err(-1, __func__,\n\t\t\t\"Unable to initialize thread context\");\n\t\treturn (1);\n\t}\n\n\tif (pbs_loadconf(0) == 0)\n\t\treturn (1);\n\n\tset_log_conf(pbs_conf.pbs_leaf_name, pbs_conf.pbs_mom_node_name,\n\t\t     pbs_conf.locallog, pbs_conf.syslogfac,\n\t\t     pbs_conf.syslogsvr, pbs_conf.pbs_log_highres_timestamp);\n\n\t/* find out who we are (hostname) */\n\tserver_host[0] = '\\0';\n\tif (pbs_conf.pbs_leaf_name) {\n\t\tchar *endp;\n\t\tsnprintf(server_host, sizeof(server_host), \"%s\", pbs_conf.pbs_leaf_name);\n\t\tendp = strchr(server_host, ','); /* find first name */\n\t\tif (endp)\n\t\t\t*endp = '\\0';\n\t\tendp = strchr(server_host, ':'); /* cut out port, if present */\n\t\tif (endp)\n\t\t\t*endp = '\\0';\n\t} else if (gethostname(server_host, (sizeof(server_host) - 1)) == -1) {\n\t\tlog_err(-1, __func__, \"Host name too large\");\n\t\treturn (-1);\n\t}\n\tif ((server_host[0] == '\\0') ||\n\t    (get_fullhostname(server_host, server_host, (sizeof(server_host) - 1)) == -1)) {\n\t\tlog_err(-1, __func__, \"Unable to get my host name\");\n\t\treturn (-1);\n\t}\n\n\t(void) strcpy(daemonname, \"Server@\");\n\t(void) strcat(daemonname, server_host);\n\tif ((pc = strchr(daemonname, (int) '.')) != NULL)\n\t\t*pc = '\\0';\n\n\tif (set_msgdaemonname(daemonname)) {\n\t\tfprintf(stderr, \"Out of memory\\n\");\n\t\treturn 1;\n\t}\n\n\t/* initialize service port numbers for self, Scheduler, and MOM 
*/\n\n\tpbs_server_port_dis = pbs_conf.batch_service_port;\n\tpbs_mom_port = pbs_conf.mom_service_port;\n\tpbs_rm_port = pbs_conf.manager_service_port;\n\n\t/* by default, server_name is what is set in /etc/pbs.conf */\n\t(void) strcpy(server_name, pbs_conf.pbs_server_name);\n\n\tpbs_server_name = pbs_default();\n\tif ((!pbs_server_name) || (*pbs_server_name == '\\0')) {\n\t\tlog_err(-1, __func__, \"Unable to get server host name\");\n\t\treturn (-1);\n\t}\n\n\tpbs_server_addr = get_hostaddr(server_host);\n\tpbs_mom_addr = pbs_server_addr; /* assume on same host */\n\n\t/* parse the parameters from the command line */\n\n\twhile ((c = getopt(argc, argv, \"A:a:Cd:e:F:p:t:lL:M:NR:g:G:s:P:-:\")) != -1) {\n\t\tswitch (c) {\n\t\t\tcase 'a':\n\t\t\t\tif (decode_b(get_sattr(SVR_ATR_scheduling), NULL,\n\t\t\t\t\t     NULL, optarg) != 0) {\n\t\t\t\t\t(void) fprintf(stderr, \"%s: bad -a option\\n\", argv[0]);\n\t\t\t\t\treturn (1);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'd':\n\t\t\t\tif (pbs_conf.pbs_home_path != NULL)\n\t\t\t\t\tfree(pbs_conf.pbs_home_path);\n\t\t\t\tpbs_conf.pbs_home_path = optarg;\n\t\t\t\tbreak;\n\t\t\tcase 'e':\n\t\t\t\tnew_log_event_mask = strtol(optarg, NULL, 0);\n\t\t\t\tbreak;\n\t\t\tcase 'p':\n\t\t\t\tservicename = optarg;\n\t\t\t\tif (strlen(server_name) + strlen(servicename) + 1 >\n\t\t\t\t    (size_t) PBS_MAXSERVERNAME) {\n\t\t\t\t\t(void) fprintf(stderr,\n\t\t\t\t\t\t       \"%s: -p host:port too long\\n\", argv[0]);\n\t\t\t\t\treturn (1);\n\t\t\t\t}\n\t\t\t\t(void) strcat(server_name, \":\");\n\t\t\t\t(void) strcat(server_name, servicename);\n\t\t\t\tif ((pbs_server_port_dis = atoi(servicename)) == 0) {\n\t\t\t\t\t(void) fprintf(stderr,\n\t\t\t\t\t\t       \"%s: -p host:port invalid\\n\", argv[0]);\n\t\t\t\t\treturn (1);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 't':\n\t\t\t\tfor (i = RECOV_HOT; i < RECOV_Invalid; i++) {\n\t\t\t\t\tif (strcmp(optarg, init_name_type[i].it_name) == 0) {\n\t\t\t\t\t\tserver_init_type = 
init_name_type[i].it_type;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (i == RECOV_Invalid) {\n\t\t\t\t\t(void) fprintf(stderr, \"%s: -t bad recovery type\\n\",\n\t\t\t\t\t\t       argv[0]);\n\t\t\t\t\treturn (1);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'A':\n\t\t\t\tacct_file = optarg;\n\t\t\t\tbreak;\n\t\t\tcase 'C':\n\t\t\t\tstalone = 2;\n\t\t\t\tbreak;\n\t\t\tcase 'F':\n\t\t\t\ti = atoi(optarg);\n\t\t\t\tif (i < -1) {\n\t\t\t\t\t(void) fprintf(stderr, \"%s: -F invalid delay time\\n\",\n\t\t\t\t\t\t       argv[0]);\n\t\t\t\t\treturn (1);\n\t\t\t\t}\n\t\t\t\tsecondary_delay = (time_t) i;\n\t\t\t\tbreak;\n\t\t\tcase 'l':\n#ifdef _POSIX_MEMLOCK\n\t\t\t\tdo_mlockall = 1;\n#else\n\t\t\t\tfprintf(stderr, \"-l option - mlockall not supported\\n\");\n#endif /* _POSIX_MEMLOCK */\n\t\t\t\tbreak;\n\t\t\tcase 'L':\n\t\t\t\tlog_file = optarg;\n\t\t\t\tbreak;\n\t\t\tcase 'M':\n\t\t\t\tif (get_port(optarg, &pbs_mom_port, &pbs_mom_addr)) {\n\t\t\t\t\t(void) fprintf(stderr, \"%s: bad -M %s\\n\", argv[0], optarg);\n\t\t\t\t\treturn (1);\n\t\t\t\t}\n\t\t\t\tif (isalpha((int) *optarg)) {\n\t\t\t\t\tif ((pc = strchr(optarg, (int) ':')) != NULL)\n\t\t\t\t\t\t*pc = '\\0';\n\t\t\t\t\tmom_host = optarg;\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 'N':\n\t\t\t\tstalone = 1;\n\t\t\t\tbreak;\n\t\t\tcase 'R':\n\t\t\t\tif ((pbs_rm_port = atoi(optarg)) == 0) {\n\t\t\t\t\t(void) fprintf(stderr, \"%s: bad -R %s\\n\",\n\t\t\t\t\t\t       argv[0], optarg);\n\t\t\t\t\treturn 1;\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase '-':\n\t\t\t\t(void) fprintf(stderr, \"%s: invalid option - only --version is supported\\n\", argv[0]);\n\t\t\t\treturn (1);\n\n\t\t\tdefault:\n\t\t\t\t(void) fprintf(stderr, \"%s: unknown option: %c\\n\", argv[0], c);\n\t\t\t\treturn (1);\n\t\t}\n\t}\n\n\tif (optind < argc) {\n\t\t(void) fprintf(stderr, \"%s: invalid operand\\n\", argv[0]);\n\t\treturn (1);\n\t}\n\n\t/* make sure no other server is running with this home directory */\n\n\t(void) sprintf(lockfile, 
\"%s/%s/server.lock\", pbs_conf.pbs_home_path,\n\t\t       PBS_SVR_PRIVATE);\n\tif ((are_primary = are_we_primary()) == FAILOVER_SECONDARY) {\n\t\tstrcat(lockfile, \".secondary\");\n\t} else if (are_primary == FAILOVER_CONFIG_ERROR) {\n\t\tlog_err(-1, msg_daemonname, \"neither primary nor secondary server\");\n\t\treturn (3);\n\t}\n\n#ifdef NAS /* localmod 104 */\n\tif ((lockfds = open(lockfile, O_CREAT | O_WRONLY, 0644)) < 0)\n#else\n\tif ((lockfds = open(lockfile, O_CREAT | O_WRONLY, 0600)) < 0)\n#endif /* localmod 104 */\n\t{\n\t\t(void) sprintf(log_buffer, \"%s: unable to open lock file\",\n\t\t\t       msg_daemonname);\n\t\t(void) fprintf(stderr, \"%s\\n\", log_buffer);\n\t\tlog_err(errno, msg_daemonname, log_buffer);\n\t\treturn (2);\n\t}\n\n\tCLEAR_HEAD(svr_requests);\n\tCLEAR_HEAD(task_list_immed);\n\tCLEAR_HEAD(task_list_interleave);\n\tCLEAR_HEAD(task_list_timed);\n\tCLEAR_HEAD(task_list_event);\n\tCLEAR_HEAD(svr_queues);\n\tCLEAR_HEAD(svr_alljobs);\n\tCLEAR_HEAD(svr_newjobs);\n\tCLEAR_HEAD(svr_allresvs);\n\tCLEAR_HEAD(svr_deferred_req);\n\tCLEAR_HEAD(svr_allhooks);\n\tCLEAR_HEAD(svr_queuejob_hooks);\n\tCLEAR_HEAD(svr_postqueuejob_hooks);\n\tCLEAR_HEAD(svr_modifyjob_hooks);\n\tCLEAR_HEAD(svr_resvsub_hooks);\n\tCLEAR_HEAD(svr_modifyresv_hooks);\n\tCLEAR_HEAD(svr_movejob_hooks);\n\tCLEAR_HEAD(svr_runjob_hooks);\n\tCLEAR_HEAD(svr_jobobit_hooks);\n\tCLEAR_HEAD(svr_management_hooks);\n\tCLEAR_HEAD(svr_modifyvnode_hooks);\n\tCLEAR_HEAD(svr_periodic_hooks);\n\tCLEAR_HEAD(svr_provision_hooks);\n\tCLEAR_HEAD(svr_resv_confirm_hooks);\n\tCLEAR_HEAD(svr_resv_begin_hooks);\n\tCLEAR_HEAD(svr_resv_end_hooks);\n\tCLEAR_HEAD(svr_execjob_begin_hooks);\n\tCLEAR_HEAD(svr_execjob_prologue_hooks);\n\tCLEAR_HEAD(svr_execjob_epilogue_hooks);\n\tCLEAR_HEAD(svr_execjob_preterm_hooks);\n\tCLEAR_HEAD(svr_execjob_launch_hooks);\n\tCLEAR_HEAD(svr_execjob_end_hooks);\n\tCLEAR_HEAD(svr_exechost_periodic_hooks);\n\tCLEAR_HEAD(svr_exechost_startup_hooks);\n\tCLEAR_HEAD(svr_execjob_attach_ho
oks);\n\tCLEAR_HEAD(svr_execjob_resize_hooks);\n\tCLEAR_HEAD(svr_execjob_abort_hooks);\n\tCLEAR_HEAD(svr_execjob_postsuspend_hooks);\n\tCLEAR_HEAD(svr_execjob_preresume_hooks);\n\tCLEAR_HEAD(svr_allscheds);\n\tCLEAR_HEAD(svr_creds_cache);\n\tCLEAR_HEAD(unlicensed_nodes_list);\n\n\t/* initialize paths that we will need */\n\tpath_priv = build_path(pbs_conf.pbs_home_path, PBS_SVR_PRIVATE,\n\t\t\t       suffix_slash);\n\tpath_spool = build_path(pbs_conf.pbs_home_path, PBS_SPOOLDIR,\n\t\t\t\tsuffix_slash);\n\tpath_jobs = build_path(path_priv, PBS_JOBDIR, suffix_slash);\n\tpath_users = build_path(path_priv, PBS_USERDIR, suffix_slash);\n\tpath_rescdef = build_path(path_priv, PBS_RESCDEF, NULL);\n\tpath_acct = build_path(path_priv, PBS_ACCT, suffix_slash);\n\tpath_track = build_path(path_priv, PBS_TRACKING, NULL);\n\tpath_prov_track = build_path(path_priv, PBS_PROV_TRACKING, NULL);\n\tpath_usedlicenses = build_path(path_priv, \"usedlic\", NULL);\n\tpath_secondaryact = build_path(path_priv, \"secondary_active\", NULL);\n\tpath_hooks = build_path(path_priv, PBS_HOOKDIR, suffix_slash);\n\tpath_hooks_workdir = build_path(path_priv, PBS_HOOK_WORKDIR,\n\t\t\t\t\tsuffix_slash);\n\tpath_hooks_tracking = build_path(path_priv, PBS_HOOK_TRACKING,\n\t\t\t\t\t HOOK_TRACKING_SUFFIX);\n\tpath_hooks_rescdef = build_path(path_hooks, PBS_RESCDEF, NULL);\n\tpath_svrlive = build_path(path_priv, PBS_SVRLIVE, NULL);\n\n\t/* save original environment in case we re-exec */\n\torigevp = environ;\n\n\t/*\n\t * Open the log file so we can start recording events\n\t *\n\t * set log_event_mask to point to the log_event attribute value so\n\t * it controls which events are logged.\n\t */\n\tset_sattr_l_slim(SVR_ATR_log_events, PBSEVENT_MASK, SET);\n\t*log_event_mask = get_sattr_long(SVR_ATR_log_events);\n\t(void) sprintf(path_log, \"%s/%s\", pbs_conf.pbs_home_path, PBS_LOGFILES);\n\n\t(void) log_open(log_file, path_log);\n\t(void) sprintf(log_buffer, msg_startup1, PBS_VERSION, 
server_init_type);\n\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_ADMIN | PBSEVENT_FORCE,\n\t\t  LOG_NOTICE,\n\t\t  PBS_EVENTCLASS_SERVER, msg_daemonname, log_buffer);\n\n\t/* Initialize security library's internal data structures */\n\tif (load_auths(AUTH_SERVER)) {\n\t\tlog_err(-1, __func__, \"Failed to load auth lib\");\n\t\texit(3);\n\t}\n\n\t{\n\t\tint csret;\n\n\t\t/* let Libsec do logging if part of PBS daemon code */\n\t\tp_cslog = log_err;\n\n\t\tif ((csret = CS_server_init()) != CS_SUCCESS) {\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"Problem initializing security library (%d)\", csret);\n\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\texit(3);\n\t\t}\n\t}\n\n\t/* At this point we must decide if we are the primary or secondary */\n\n\tif (are_primary == FAILOVER_NONE) {\n\t\tlock_out(lockfds, F_WRLCK); /* no failover configured */\n\t} else if (are_primary == FAILOVER_PRIMARY) {\n\t\tchar *takeovermsg = \"Notifying Secondary Server that we are taking over\";\n\t\t/* we believe we are the primary server */\n\n\t\tlock_out(lockfds, F_WRLCK);\n\t\tsvrlivetime = 0;\n\t\ti = 0;\n\n\t\t/*\n\t\t * try to connect to the Secondary Server to tell it to go away.\n\t\t * Keep trying until we connect or see the svrlive time is\n\t\t * not changing\n\t\t */\n\n\t\tprintf(\"%s\\n\", takeovermsg);\n\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_ADMIN | PBSEVENT_FORCE,\n\t\t\t  LOG_NOTICE, PBS_EVENTCLASS_SERVER, msg_daemonname,\n\t\t\t  takeovermsg);\n\t\twhile (1) {\n\t\t\tif (takeover_from_secondary() == 1) {\n\t\t\t\t/* contacted Secondary, it's gone */\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\t/* could not contact Secondary */\n\t\t\tif (stat(path_secondaryact, &sb_sa) == -1)\n\t\t\t\tbreak; /* no file saying it's active */\n\t\t\tif (stat(path_svrlive, &sb_sa) == -1)\n\t\t\t\tbreak; /* no svrlive file */\n\t\t\tif (sb_sa.st_mtime > svrlivetime) {\n\t\t\t\t/* time stamp is changing, at   */\n\t\t\t\t/* least once, loop for a retry */\n\t\t\t\tsvrlivetime = sb_sa.st_mtime;\n\t\t\t} else if ((time_now 
= time(0)) > (svrlivetime + secondary_delay)) {\n\t\t\t\t/* has not changed during the delay time */\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tsleep(4);\n\t\t\tif ((++i % 15) == 3) {\n\t\t\t\t/* display and log this about once a minute */\n\t\t\t\t/* after a couple of tries */\n\t\t\t\tsprintf(log_buffer, \"Unable to contact Secondary Server but it appears to be running; it may need to be shut down manually.\");\n\t\t\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_ADMIN |\n\t\t\t\t\t\t  PBSEVENT_FORCE,\n\t\t\t\t\t  LOG_NOTICE,\n\t\t\t\t\t  PBS_EVENTCLASS_SERVER, msg_daemonname,\n\t\t\t\t\t  log_buffer);\n\t\t\t\tprintf(\"%s\", log_buffer);\n\t\t\t\tprintf(\"  Will continue to attempt to take over\\n\");\n\t\t\t}\n\t\t}\n\n\t\t/* in case secondary didn't remove the file */\n\t\t/* also tells the secondary to go idle\t    */\n\t\t(void) unlink(path_secondaryact);\n\n\t} else {\n\t\t/* we believe we are a secondary server */\n#ifndef DEBUG\n\t\t/* go into the background and become own sess/process group */\n\t\tif (stalone == 0) {\n\t\t\tif ((sid = go_to_background()) == -1)\n\t\t\t\treturn (2);\n\t\t}\n#endif /* DEBUG */\n\n\t\t/* will not attempt to lock again if go_to_background was already called */\n\t\tif (already_forked == 0)\n\t\t\tlock_out(lockfds, F_WRLCK);\n\n\t\t/* Protect from being killed by kernel */\n\t\tdaemon_protect(0, PBS_DAEMON_PROTECT_ON);\n\n\t\tdo {\n\t\t\tc = be_secondary(secondary_delay);\n\t\t} while (c == 1); /* recycle and stay inactive */\n\t}\n\n\t/*\n\t * At this point, we are the active Server ...\n\t *\n\t * Initialize the server objects and perform specified recovery;\n\t * results will be left in the server's private directory\n\t */\n\n#ifdef linux\n\t/*\n\t * Set floating-point emulation control bits to silently emulate\n\t * fp operations. This works on Linux IA64 only, so we do not\n\t * check the return status. 
On non-IA64 Linux machines, it silently fails.\n\t *\n\t */\n\tprctl(PR_SET_FPEMU, PR_FPEMU_NOPRINT, 0, 0, 0);\n#endif\n\n\t/* Setup db connection here */\n\tif (server_init_type != RECOV_CREATE && !stalone && !already_forked)\n\t\tbackground = 1;\n\tif ((rc = connect_to_db(background)) != 0)\n\t\treturn rc;\n\n\t/* database connection code end */\n\n\tif (stalone == 2) {\n\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_FORCE, LOG_NOTICE,\n\t\t\t  PBS_EVENTCLASS_SERVER, msg_daemonname, msg_svrdown);\n\t\tacct_close();\n\t\tstop_db();\n\t\tlog_close(1);\n\t\treturn (0);\n\t}\n\n\t/* initialize the network interface */\n\n\tif ((sock = init_network(pbs_server_port_dis)) < 0) {\n\t\t(void) sprintf(log_buffer,\n\t\t\t       \"init_network failed using ports Server:%u MOM:%u RM:%u\",\n\t\t\t       pbs_server_port_dis, pbs_mom_port, pbs_rm_port);\n\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_ERR, msg_daemonname, log_buffer);\n\t\tfprintf(stderr, \"%s\\n\", log_buffer);\n\t\tstop_db();\n\t\treturn (4);\n\t}\n\n\t/* go into the background and become own sess/process group */\n\n#ifndef DEBUG\n\tif (stalone == 0 && already_forked == 0) {\n\t\tif ((sid = go_to_background()) == -1) {\n\t\t\tstop_db();\n\t\t\treturn (2);\n\t\t}\n\t}\n\tpbs_close_stdfiles();\n#else  /* DEBUG is defined */\n\tsid = getpid();\n\t(void) setvbuf(stdout, NULL, _IOLBF, 0);\n\t(void) setvbuf(stderr, NULL, _IOLBF, 0);\n#endif /* end the ifndef DEBUG */\n\n\t/* Protect from being killed by kernel */\n\tdaemon_protect(0, PBS_DAEMON_PROTECT_ON);\n\n#ifdef _POSIX_MEMLOCK\n\tif (do_mlockall == 1) {\n\t\tif (mlockall(MCL_CURRENT | MCL_FUTURE) == -1) {\n\t\t\tlog_err(errno, msg_daemonname, \"mlockall failed\");\n\t\t}\n\t}\n#endif /* _POSIX_MEMLOCK */\n\n\tsigemptyset(&allsigs);\n\tsigaddset(&allsigs, SIGHUP);  /* remember to block these */\n\tsigaddset(&allsigs, SIGINT);  /* during critical sections */\n\tsigaddset(&allsigs, SIGTERM); /* so we don't get confused 
*/\n\tsigaddset(&allsigs, SIGCHLD);\n\t/* block signals while we do things */\n\tif (sigprocmask(SIG_BLOCK, &allsigs, NULL) == -1)\n\t\tlog_err(errno, msg_daemonname, \"sigprocmask(BLOCK)\");\n\n\t/* initialize the network interface */\n\tif (init_network_add(sock, tcp_pre_process, process_request) != 0) {\n\t\t(void) sprintf(log_buffer, \"add connection for init_network failed\");\n\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_ERR, msg_daemonname, log_buffer);\n\t\tstop_db();\n\t\treturn (3);\n\t}\n\n\tsprintf(log_buffer, \"Out of memory\");\n\tif (pbs_conf.pbs_leaf_name) {\n\t\tchar *p;\n\t\tnodename = strdup(pbs_conf.pbs_leaf_name);\n\n\t\t/* reset pbs_leaf_name to only the first leaf name with port */\n\t\tp = strchr(pbs_conf.pbs_leaf_name, ','); /* keep only the first leaf name */\n\t\tif (p)\n\t\t\t*p = '\\0';\n\t\tp = strchr(pbs_conf.pbs_leaf_name, ':'); /* cut out the port */\n\t\tif (p)\n\t\t\t*p = '\\0';\n\t} else {\n\t\tchar *host = NULL;\n\t\tif (pbs_conf.pbs_primary) {\n\t\t\tif (!pbs_failover_active)\n\t\t\t\thost = pbs_conf.pbs_primary;\n\t\t\telse\n\t\t\t\thost = pbs_conf.pbs_secondary;\n\t\t} else if (pbs_conf.pbs_server_host_name)\n\t\t\thost = pbs_conf.pbs_server_host_name;\n\t\telse if (pbs_conf.pbs_server_name)\n\t\t\thost = pbs_conf.pbs_server_name;\n\n\t\t/* since pbs_leaf_name was not specified, determine all IPs */\n\t\tnodename = get_all_ips(host, log_buffer, sizeof(log_buffer) - 1);\n\t}\n\n\tif (!nodename) {\n\t\tlog_err(-1, __func__, log_buffer);\n\t\tfprintf(stderr, \"%s\\n\", \"Unable to determine TPP node name\");\n\t\tstop_db();\n\t\treturn (1);\n\t}\n\n\tif (setup_env(pbs_conf.pbs_environment) == -1) {\n\t\tfprintf(stderr, \"%s\\n\", \"Setup environment failed\");\n\t\tstop_db();\n\t\treturn (3);\n\t}\n\n\t/* set tpp config */\n\trc = set_tpp_config(&pbs_conf, &tpp_conf, nodename, pbs_server_port_dis, pbs_conf.pbs_leaf_routers);\n\tfree(nodename);\n\tif (rc == -1) {\n\t\t(void) sprintf(log_buffer, 
\"Error setting TPP config\");\n\t\tfprintf(stderr, \"%s\", log_buffer);\n\t\tstop_db();\n\t\treturn (3);\n\t}\n\n\ttpp_set_app_net_handler(net_down_handler, net_restore_handler);\n\ttpp_conf.node_type = TPP_LEAF_NODE_LISTEN; /* server needs to know about all CTL LEAVE messages */\n\n\tif ((tppfd = tpp_init(&tpp_conf)) == -1) {\n\t\tlog_err(-1, msg_daemonname, \"tpp_init failed\");\n\t\tfprintf(stderr, \"%s\", log_buffer);\n\t\tstop_db();\n\t\treturn (3);\n\t}\n\n\t(void) add_conn(tppfd, TppComm, (pbs_net_t) 0, 0, NULL, tpp_request);\n\n\ttfree2(&ipaddrs);\n\ttfree2(&streams);\n\n\tif (pbsd_init(server_init_type) != 0) {\n\t\tlog_err(-1, msg_daemonname, \"pbsd_init failed\");\n\t\tstop_db();\n\t\treturn (3);\n\t}\n\n\t/* record the fact that the Secondary is up and active (running) */\n\n\tif (pbs_failover_active) {\n\t\tsprintf(log_buffer, \"Failover Secondary Server at %s has gone active\", server_host);\n\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_CRIT, msg_daemonname, log_buffer);\n\n\t\t/* now go set up work task to do timestamp svrlive file */\n\n\t\t(void) set_task(WORK_Timed, time_now, secondary_handshake, NULL);\n\n\t\tsvr_mailowner(0, 0, 1, log_buffer);\n\t\tif (get_sattr_long(SVR_ATR_scheduling)) {\n\t\t\t/* Bring up scheduler here */\n\t\t\tif (dflt_scheduler->sc_primary_conn == -1) {\n\t\t\t\tchar **workenv;\n\t\t\t\tchar schedcmd[MAXPATHLEN + 1];\n\t\t\t\t/* save the current, \"safe\", environment and\n\t\t\t\t * reset the environment to the one present when first started;\n\t\t\t\t * this is to get PBS_CONF_FILE if specified. */\n\t\t\t\tworkenv = environ;\n\t\t\t\tenviron = origevp;\n\n\t\t\t\tsnprintf(schedcmd, sizeof(schedcmd), \"%s/sbin/pbs_sched &\", pbs_conf.pbs_exec_path);\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"starting scheduler: %s\", schedcmd);\n\t\t\t\tif (system(schedcmd) == -1)\n\t\t\t\t\tlog_errf(-1, __func__, \"system(%s) failed. 
ERR : %s\", schedcmd, strerror(errno));\n\n\t\t\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_FORCE,\n\t\t\t\t\t  PBS_EVENTCLASS_SERVER, LOG_CRIT,\n\t\t\t\t\t  msg_daemonname, log_buffer);\n\n\t\t\t\tbrought_up_alt_sched = 1;\n\t\t\t\t/* restore environment to \"safe\" one */\n\t\t\t\tenviron = workenv;\n\t\t\t}\n\t\t}\n\t} else if (are_primary == FAILOVER_PRIMARY) {\n\t\t/* now go set up work task to do handshake with secondary */\n\n\t\t(void) set_task(WORK_Timed, time_now, primary_handshake, NULL);\n\t}\n\n\tlog_eventf(PBSEVENT_SYSTEM | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER, LOG_INFO,\n\t\t   msg_daemonname, msg_startup2,\n\t\t   sid, pbs_server_port_dis, pbs_mom_port, pbs_rm_port);\n\n\t/*\n\t * Now at last, we are ready to do some batch work; the\n\t * following section constitutes the \"main\" loop of the server\n\t */\n\n\tif (server_init_type == RECOV_HOT)\n\t\tset_sattr_l_slim(SVR_ATR_State, SV_STATE_HOT, SET);\n\telse\n\t\tset_sattr_l_slim(SVR_ATR_State, SV_STATE_RUN, SET);\n\n\t/* Can start the python interpreter this late, before the main loop,*/\n\t/* which is when requests are actually read and processed           */\n\t/* (in wait_request), and when python processing is needed.         
*/\n\tsvr_interp_data.daemon_name = strdup(msg_daemonname);\n\n\tif (svr_interp_data.daemon_name == NULL) { /* should not happen */\n\t\tlog_err(errno, msg_daemonname, \"strdup failed!\");\n\t\tstop_db();\n\t\treturn (1);\n\t}\n\n\t/* save it so we can free it without needing the pointer inside svr_interp_data */\n\tkeep_daemon_name = svr_interp_data.daemon_name;\n\n\tsnprintf(svr_interp_data.local_host_name, sizeof(svr_interp_data.local_host_name),\n\t\t \"%s\", server_host);\n\tif ((pc = strchr(svr_interp_data.local_host_name, '.')) != NULL)\n\t\t*pc = '\\0';\n\n\tif (pbs_python_ext_start_interpreter(&svr_interp_data) != 0) {\n\t\tlog_err(-1, msg_daemonname, \"Failed to start Python interpreter\");\n\t\tstop_db();\n\t\tfree(keep_daemon_name);\n\t\treturn (1);\n\t}\n\n\t/* check and enable the prov attributes */\n\tset_srv_prov_attributes();\n\n\t/* check and set power attribute */\n\tset_srv_pwr_prov_attribute();\n\n\tperiodic_req = alloc_br(PBS_BATCH_HookPeriodic);\n\tif (periodic_req == NULL) {\n\t\tlog_err(errno, msg_daemonname, \"Out of memory!\");\n\t\tstop_db();\n\t\tfree(keep_daemon_name);\n\t\treturn (1);\n\t}\n\tprocess_hooks(periodic_req, hook_msg, sizeof(hook_msg), pbs_python_set_interrupt);\n\n\t/*\n\t * main loop of server\n\t * stays in this loop until server's state is either\n\t * \t_DOWN - time to complete shutdown and exit, or\n\t *\t_SECIDLE - time for Secondary Server in failover to go\n\t *\t\tback to an inactive state.\n\t * If state includes SV_STATE_PRIMDLY, stay in loop; this will be\n\t * cleared when Secondary Server responds to a request.\n\t */\n\twhile ((state = get_sattr_long(SVR_ATR_State)) != SV_STATE_DOWN && state != SV_STATE_SECIDLE) {\n\n\t\t/*\n\t\t * double check that if we are an active Secondary Server, that\n\t\t * that the Primary has not come back alive; if it did it will\n\t\t * remove the \"secondary active\" file.\n\t\t */\n\t\tif (are_primary == FAILOVER_SECONDARY) {\n\t\t\tif (stat(path_secondaryact, &sb_sa) == -1) 
{\n\t\t\t\tif (errno == ENOENT) {\n\t\t\t\t\t/* file gone, restart to go idle */\n\t\t\t\t\tset_sattr_l_slim(SVR_ATR_State, SV_STATE_SECIDLE, SET);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\t/* first process any task whose time delay has expired */\n\t\twaittime = next_task();\n\n\t\tif ((state = get_sattr_long(SVR_ATR_State)) == SV_STATE_RUN) { /* In normal Run State */\n\n\t\t\tif (first_run) {\n\n\t\t\t\t/*\n\t\t\t\t * clear exec_vnode for jobs that don't need\n\t\t\t\t * it; otherwise the job is locked into those nodes\n\t\t\t\t */\n\t\t\t\tclear_exec_vnode();\n\t\t\t\tfirst_run = 0;\n\t\t\t}\n\t\t\tfor (psched = (pbs_sched *) GET_NEXT(svr_allscheds); psched; psched = (pbs_sched *) GET_NEXT(psched->sc_link)) {\n\n\t\t\t\t/* schedule anything only if sched is connected */\n\t\t\t\tif (psched->sc_primary_conn == -1 || psched->sc_secondary_conn == -1)\n\t\t\t\t\tcontinue;\n\n\t\t\t\t/* if we have a high prio sched command, send it 1st */\n\t\t\t\tif (psched->svr_do_sched_high != SCH_SCHEDULE_NULL)\n\t\t\t\t\tschedule_high(psched);\n\t\t\t\tif (psched->svr_do_schedule == SCH_SCHEDULE_RESTART_CYCLE) {\n\t\t\t\t\tif (!send_sched_cmd(psched, psched->svr_do_schedule, NULL)) {\n\t\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG2, PBS_EVENTCLASS_SERVER, LOG_NOTICE, msg_daemonname,\n\t\t\t\t\t\t\t   \"sent scheduler restart scheduling cycle request to %s\", psched->sc_name);\n\t\t\t\t\t} else\n\t\t\t\t\t\tpsched->svr_do_schedule = SCH_SCHEDULE_NULL;\n\t\t\t\t} else if (svr_unsent_qrun_req || (psched->svr_do_schedule != SCH_SCHEDULE_NULL && get_sched_attr_long(psched, SCHED_ATR_scheduling))) {\n\t\t\t\t\t/*\n\t\t\t\t\t * If svr_unsent_qrun_req is set to one, there are pending qrun\n\t\t\t\t\t * requests, so call schedule_jobs() irrespective of the server\n\t\t\t\t\t * scheduling state.\n\t\t\t\t\t * If svr_unsent_qrun_req is not set, do the existing check and\n\t\t\t\t\t * schedule only if server scheduling is turned on.\n\t\t\t\t\t 
*/\n\n\t\t\t\t\tpsched->sch_next_schedule = time_now + get_sched_attr_long(psched, SCHED_ATR_schediteration);\n\t\t\t\t\tif (schedule_jobs(psched) == 0 && svr_unsent_qrun_req)\n\t\t\t\t\t\tsvr_unsent_qrun_req = 0;\n\t\t\t\t}\n\t\t\t}\n\t\t} else if (state == SV_STATE_HOT) {\n\n\t\t\t/* Are there HOT jobs to rerun */\n\t\t\t/* only try every _CYCLE seconds */\n\n\t\t\tif (time_now > server.sv_hotcycle + SVR_HOT_CYCLE) {\n\t\t\t\tserver.sv_hotcycle = time_now + SVR_HOT_CYCLE;\n\t\t\t\tc = start_hot_jobs();\n\t\t\t}\n\n\t\t\t/* If more than _LIMIT seconds since start, stop */\n\n\t\t\tif ((c == 0) ||\n\t\t\t    (time_now > server.sv_started + SVR_HOT_LIMIT)) {\n\t\t\t\tserver_init_type = RECOV_WARM;\n\t\t\t\tset_sattr_l_slim(SVR_ATR_State, SV_STATE_RUN, SET);\n\t\t\t\tstate = SV_STATE_RUN;\n\t\t\t}\n\t\t}\n\n\t\t/* any jobs to route today */\n\n\t\tpque = (pbs_queue *) GET_NEXT(svr_queues);\n\t\twhile (pque) {\n\t\t\tif (pque->qu_qs.qu_type == QTYPE_RoutePush)\n\t\t\t\tqueue_route(pque);\n\t\t\tpque = (pbs_queue *) GET_NEXT(pque->qu_link);\n\t\t}\n\n\t\tif (reap_child_flag)\n\t\t\treap_child();\n\n\t\t/* wait for a request and process it */\n\t\tif (wait_request(waittime, priority_context) != 0) {\n\t\t\tlog_err(-1, msg_daemonname, \"wait_request failed\");\n\t\t}\n\n\t\tif (reap_child_flag)  /* check again in case signal arrived */\n\t\t\treap_child(); /* before they were blocked          */\n\n\t\tif ((state = get_sattr_long(SVR_ATR_State)) == SV_STATE_SHUTSIG)\n\t\t\t(void) svr_shutdown(SHUT_SIG); /* caught sig */\n\n\t\t/*\n\t\t * if in the process of shutting down and all running jobs\n\t\t * and all children are done, change state to DOWN\n\t\t */\n\n\t\tif ((state > SV_STATE_RUN) &&\n\t\t    (state < SV_STATE_SECIDLE) &&\n\t\t    (server.sv_jobstates[JOB_STATE_RUNNING] == 0) &&\n\t\t    (server.sv_jobstates[JOB_STATE_EXITING] == 0) &&\n\t\t    ((void *) GET_NEXT(task_list_event) == NULL)) {\n\t\t\tset_sattr_l_slim(SVR_ATR_State, SV_STATE_DOWN, SET);\n\t\t\tstate = 
SV_STATE_DOWN;\n\t\t}\n\t}\n\tDBPRT((\"Server out of main loop, state is %ld\\n\", state))\n\n\t/* set the current seq id to the last id before final save */\n\tserver.sv_qs.sv_lastid = server.sv_qs.sv_jobidnumber;\n\tsvr_save_db(&server); /* final recording of server */\n\ttrack_save(NULL);     /* save tracking data\t     */\n\n\t/* if brought up the Secondary Scheduler, take it down */\n\n\tif (brought_up_alt_sched == 1)\n\t\tsend_sched_cmd(dflt_scheduler, SCH_QUIT, NULL);\n\n\t/* if Moms are to go down as well, tell them */\n\n\tif (state != SV_STATE_SECIDLE && (shutdown_who & SHUT_WHO_MOM))\n\t\tshutdown_nodes();\n\n\t/* if brought up the DB, take it down */\n\tstop_db();\n\n\tif (are_primary == FAILOVER_SECONDARY) {\n\t\t/* we are the secondary server */\n\t\t(void) unlink(path_secondaryact); /* remove file */\n\n\t\tif (state == SV_STATE_SECIDLE && saved_takeover_req != NULL) {\n\t\t\t/*\n\t\t\t * If we are the secondary server that is\n\t\t\t * going inactive AND there is a batch request struct,\n\t\t\t * send acknowledgement back to primary so primary\n\t\t\t * server knows that the data have been written.\n\t\t\t */\n\t\t\tDBPRT((\"Failover: acknowledging FO(%d) request\\n\", saved_takeover_req->rq_ind.rq_failover))\n\t\t\treply_send(saved_takeover_req);\n\t\t\tsaved_takeover_req = NULL;\n\t\t}\n\t}\n\n#if defined(DEBUG)\n\t/* for valgrind, clear some stuff up */\n\t{\n\t\thook *phook = (hook *) GET_NEXT(svr_allhooks);\n\t\twhile (phook) {\n\t\t\thook *tmp;\n\t\t\tfree(phook->hook_name);\n\t\t\tpbs_python_ext_free_python_script(phook->script);\n\t\t\tfree(phook->script);\n\t\t\ttmp = phook;\n\t\t\tphook = (hook *) GET_NEXT(phook->hi_allhooks);\n\t\t\tfree(tmp);\n\t\t}\n\t}\n#endif\n\n\t/* Shut down interpreter now before closing network connections */\n\tpbs_python_ext_shutdown_interpreter(&svr_interp_data); /* stop python if started */\n\n\tshutdown_ack();\n\tnet_close(-1); /* close all network connections */\n\ttpp_shutdown();\n\n\t/*\n\t * SERVER is 
going to be shut down; destroy indexes\n\t */\n\tpbs_idx_destroy(jobs_idx);\n\tpbs_idx_destroy(queues_idx);\n\tpbs_idx_destroy(resvs_idx);\n\n\t{\n\t\tint csret;\n\t\tif ((csret = CS_close_app()) != CS_SUCCESS) {\n\t\t\t/* had some problem closing the security library */\n\n\t\t\tsprintf(log_buffer, \"problem closing security library (%d)\", csret);\n\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t}\n\t}\n\n\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_FORCE, PBS_EVENTCLASS_SERVER,\n\t\t  LOG_NOTICE, msg_daemonname, msg_svrdown);\n\tacct_close();\n\tlog_close(1);\n\tfree(keep_daemon_name); /* logs closed, can free here */\n\n\tlock_out(lockfds, F_UNLCK); /* unlock  */\n\t(void) close(lockfds);\n\t(void) unlink(lockfile);\n\tunload_auths();\n\n\tif (state == SV_STATE_SECIDLE) {\n\t\t/*\n\t\t * Secondary Server going inactive, or the Primary needs to\n\t\t * recycle itself (found Secondary active);\n\t\t * re-exec the Server to keep things clean\n\t\t */\n\t\tDBPRT((\"Failover: reexecing %s as %s \", server_host, argv[0]))\n\t\tsprintf(log_buffer, \"%s restarting as %s\", server_host,\n\t\t\tare_primary == FAILOVER_PRIMARY ? 
\"primary\" : \"secondary\");\n\t\tif (*argv[0] == '/') {\n\t\t\texecve(argv[0], argv, origevp);\n\t\t} else {\n\t\t\tsprintf(log_buffer, \"%s/sbin/pbs_server\",\n\t\t\t\tpbs_conf.pbs_exec_path);\n\t\t\texecve(log_buffer, argv, origevp);\n\t\t}\n\t\tDBPRT((\"Failover: execv failed\\n\"))\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tget_port - parse host:port for -M and -S option\n *\t\tReturns into *port and *addr if and only if that part is specified\n *\t\tBoth port and addr are returned in HOST byte order.\n *\n * @param[in]\targ\t- \"host\", \"port\", \":port\", or \"host:port\"\n * @param[out]\tport\t- RETURN: new port if one given\n * @param[out]\taddr\t- RETURN: daemon's address if host given\n *\n * @return\tint\n * @retval\t0\t- ok\n * @retval\t-1\t- error\n */\n\nstatic int\nget_port(char *arg, unsigned int *port, pbs_net_t *addr)\n{\n\tif (*arg == ':')\n\t\t++arg;\n\tif (isdigit((int) *arg)) { /* port only specified */\n\t\t*port = (unsigned int) atoi(arg);\n\t} else {\n\t\tchar *name;\n\n\t\tname = parse_servername(arg, port);\n\t\tif (name) {\n\t\t\t*addr = get_hostaddr(name);\n\t\t} else {\n\t\t\treturn (-1);\n\t\t}\n\t}\n\tif ((*port == 0) || (*addr == 0))\n\t\treturn (-1);\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tnext_task - look for the next work task to perform:\n *\t\t1. If svr_delay_entry is set, then a delayed task is ready so\n *\t   \t\tfind and process it.\n *\t\t2. All items on the immediate list, then\n *\t\t3. All items on the timed task list which have expired times\n *\n * @return\tamount of time till next task\n */\n\nstatic time_t\nnext_task()\n{\n\n\ttime_t tilwhen;\n\tpbs_sched *psched;\n\n\ttilwhen = default_next_task();\n\n\t/* should the scheduler be run?  
If so, adjust the delay time  */\n\n\tfor (psched = (pbs_sched *) GET_NEXT(svr_allscheds); psched; psched = (pbs_sched *) GET_NEXT(psched->sc_link)) {\n\t\ttime_t delay;\n\t\tif ((delay = psched->sch_next_schedule - time_now) <= 0)\n\t\t\tset_scheduler_flag(SCH_SCHEDULE_TIME, psched);\n\t\telse if (delay < tilwhen)\n\t\t\ttilwhen = delay;\n\t}\n\n\tnext_sync_mom_hookfiles();\n\n\treturn (tilwhen);\n}\n\n/**\n * @brief\n * \t\tstart_hot_jobs - place any job which is in state QUEUED and has the\n *\t\tHOT start flag set into execution.\n *\n * @return\tnumber of jobs to be hot started.\n */\n\nstatic int\nstart_hot_jobs()\n{\n\tint ct = 0;\n\tchar *nodename;\n\n\tjob *pjob;\n\n\tpjob = (job *) GET_NEXT(svr_alljobs);\n\twhile (pjob) {\n\t\tif ((check_job_substate(pjob, JOB_SUBSTATE_QUEUED)) &&\n\t\t    (pjob->ji_qs.ji_svrflags & JOB_SVFLG_HOTSTART)) {\n\t\t\tif (is_jattr_set(pjob, JOB_ATR_exec_vnode)) {\n\t\t\t\tct++;\n\t\t\t\t/* find Mother Superior node and see if she is up */\n\t\t\t\tnodename = parse_servername(get_jattr_str(pjob, JOB_ATR_exec_vnode), NULL);\n\t\t\t\tif (is_vnode_up(nodename)) {\n\t\t\t\t\t/* she is up so can send her the job */\n\t\t\t\t\t/* else we will try later            */\n\t\t\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t\t  LOG_INFO,\n\t\t\t\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t\t\t\t  \"attempting to hot start job\");\n\t\t\t\t\t(void) svr_startjob(pjob, 0);\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\t/* no vnode list, cannot hot start, clear flag */\n\t\t\t\tpjob->ji_qs.ji_svrflags &= ~JOB_SVFLG_HOTSTART;\n\t\t\t}\n\t\t}\n\t\tpjob = (job *) GET_NEXT(pjob->ji_alljobs);\n\t}\n\treturn (ct);\n}\n\n/**\n * @brief\n * \t\tlock_out - lock out other daemons from this directory\n *\t\tand, on write-lock, record my pid in the file\n *\n * @param[in]\tfds\t- file descriptor.\n * @param[in]\top\t- F_WRLCK  or  F_UNLCK\n */\n\nstatic void\nlock_out(int fds, int op)\n{\n\tint i;\n\tint j;\n\tstruct flock flock;\n\tchar buf[100];\n\n\tif 
(pbs_conf.pbs_secondary == NULL)\n\t\tj = 1; /* not fail over, try lock one time */\n\telse\n\t\tj = 30; /* fail over, try for a minute */\n\n\t(void) lseek(fds, (off_t) 0, SEEK_SET);\n\tflock.l_type = op;\n\tflock.l_whence = SEEK_SET;\n\tflock.l_start = 0;\n\tflock.l_len = 0;\n\tfor (i = 0; i < j; i++) {\n\t\tif (fcntl(fds, F_SETLK, &flock) != -1) {\n\t\t\tif (op == F_WRLCK) {\n\t\t\t\t/* if write-lock, record pid in file */\n\t\t\t\tif (ftruncate(fds, (off_t) 0) == -1)\n\t\t\t\t\tlog_errf(-1, __func__, \"ftruncate failed. ERR : %s\", strerror(errno));\n\n\t\t\t\t(void) sprintf(buf, \"%d\\n\", getpid());\n\t\t\t\tif (write(fds, buf, strlen(buf)) == -1)\n\t\t\t\t\tlog_errf(-1, __func__, \"write failed. ERR : %s\", strerror(errno));\n\t\t\t}\n\t\t\treturn;\n\t\t}\n\t\tsleep(2);\n\t}\n\n\t(void) strcpy(log_buffer, \"another server running\");\n\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_ADMIN | PBSEVENT_FORCE,\n\t\t  LOG_NOTICE, PBS_EVENTCLASS_SERVER, msg_daemonname,\n\t\t  log_buffer);\n\tfprintf(stderr, \"pbs_server: %s\\n\", log_buffer);\n\texit(1);\n}\n"
  },
  {
    "path": "src/server/process_request.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    process_request.c\n *\n * @brief\n *  process_request - this function gets, checks, and invokes the proper\n *\tfunction to deal with a batch request received over the network.\n *\n *\tAll data encoding/decoding dependencies are moved to a lower level\n *\troutine.  That routine must convert\n *\tthe data received into the internal server structures regardless of\n *\tthe data structures used by the encode/decode routines.  
This provides\n *\tthe \"protocol\" and \"protocol generation tool\" freedom to the bulk\n *\tof the server.\n *\n * Functions included are:\n *\tpbs_crypt_des()\n *\tget_credential()\n *\tprocess_request()\n *\tset_to_non_blocking()\n *\tclear_non_blocking()\n *\tdispatch_request()\n *\tclose_client()\n *\talloc_br()\n *\tclose_quejob()\n *\tfree_rescrq()\n *\tarrayfree()\n *\tread_carray()\n *\tdecode_DIS_PySpawn()\n *\tfree_br()\n *\tfreebr_manage()\n *\tfreebr_cpyfile()\n *\tfreebr_cpyfile_cred()\n *\tparse_servername()\n *\tget_servername()\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <errno.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <time.h>\n#include <sys/types.h>\n#include <sys/time.h>\n#include <netinet/in.h>\n#include <memory.h>\n#include <assert.h>\n#include <fcntl.h>\n#include <grp.h>\n#include <pwd.h>\n#include <dlfcn.h>\n#include <ctype.h>\n#include \"libpbs.h\"\n#include \"pbs_error.h\"\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"job.h\"\n#include \"server.h\"\n#include \"user.h\"\n#include \"credential.h\"\n#include \"ticket.h\"\n#include \"net_connect.h\"\n#include \"batch_request.h\"\n#include \"log.h\"\n#include \"tpp.h\"\n#include \"dis.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include <libutil.h>\n#include \"pbs_sched.h\"\n#include \"auth.h\"\n\n/* global data items */\n\npbs_list_head svr_requests;\n\nextern struct server server;\nextern pbs_list_head svr_newjobs;\nextern pbs_list_head svr_allconns;\nextern time_t time_now;\nextern char *msg_err_noqueue;\nextern char *msg_err_malloc;\nextern char *msg_reqbadhost;\nextern char *msg_request;\nextern char *msg_auth_request;\n\nextern int is_local_root(char *, char *);\nextern void req_stat_hook(struct batch_request *);\n\n/* Private functions local to this file */\n\nstatic void freebr_manage(struct rq_manage *);\nstatic void freebr_cpyfile(struct rq_cpyfile 
*);\nstatic void freebr_cpyfile_cred(struct rq_cpyfile_cred *);\nstatic void close_quejob(int sfds);\n\n/**\n * @brief\n *\t\tReturn 1 if there is no credential, 0 if there is and -1 on error.\n *\n * @param[in]\tremote\t- server name\n * @param[in]\tjobp\t- job whose credentials need to be read.\n * @param[in]\tfrom\t- can have the following values,\n * \t\t\t\t\t\t\tPBS_GC_BATREQ, PBS_GC_CPYFILE and PBS_GC_EXEC\n * @param[out]\tdata\t- kerberos credential\n * @param[out]\tdsize\t- kerberos credential data length\n *\n * @return\tint\n * @retval\t1\t- there is no credential\n * @retval\t0\t- there is credential\n * @retval\t-1\t- error\n */\nint\nget_credential(char *remote, job *jobp, int from, char **data, size_t *dsize)\n{\n\tint ret;\n\n#ifndef PBS_MOM\n\t/*\n\t * ensure job's euser exists as this can be called\n\t * from pbs_send_job who is moving a job from a routing\n\t * queue which doesn't have euser set\n\t */\n\tif (is_jattr_set(jobp, JOB_ATR_euser) && get_jattr_str(jobp, JOB_ATR_euser)) {\n\t\tret = user_read_password(get_jattr_str(jobp, JOB_ATR_euser), data, dsize);\n\n\t\t/* we have credential but type is NONE, force AES */\n\t\tif (ret == 0 && (jobp->ji_extended.ji_ext.ji_credtype == PBS_CREDTYPE_NONE))\n\t\t\tjobp->ji_extended.ji_ext.ji_credtype = PBS_CREDTYPE_AES;\n\t} else\n\t\tret = read_cred(jobp, data, dsize);\n#else\n\tret = read_cred(jobp, data, dsize);\n#endif\n\treturn ret;\n}\n\nstatic void\nreq_authenticate(conn_t *conn, struct batch_request *request)\n{\n\tauth_def_t *authdef = NULL;\n\tauth_def_t *encryptdef = NULL;\n\tconn_t *cp = NULL;\n\n\tif (!is_string_in_arr(pbs_conf.supported_auth_methods, request->rq_ind.rq_auth.rq_auth_method)) {\n\t\treq_reject(PBSE_NOSUP, 0, request);\n\t\tclose_client(conn->cn_sock);\n\t\treturn;\n\t}\n\n\tif (request->rq_ind.rq_auth.rq_encrypt_method[0] != '\\0') {\n\t\tencryptdef = get_auth(request->rq_ind.rq_auth.rq_encrypt_method);\n\t\tif (encryptdef == NULL || encryptdef->encrypt_data == NULL || 
encryptdef->decrypt_data == NULL) {\n\t\t\treq_reject(PBSE_NOSUP, 0, request);\n\t\t\tclose_client(conn->cn_sock);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tif (strcmp(request->rq_ind.rq_auth.rq_auth_method, AUTH_RESVPORT_NAME) != 0) {\n\t\tauthdef = get_auth(request->rq_ind.rq_auth.rq_auth_method);\n\t\tif (authdef == NULL) {\n\t\t\treq_reject(PBSE_NOSUP, 0, request);\n\t\t\tclose_client(conn->cn_sock);\n\t\t\treturn;\n\t\t}\n\t\tcp = conn;\n\t} else {\n\t\t/* ensure resvport auth request is coming from priv port */\n\t\tif ((conn->cn_authen & PBS_NET_CONN_FROM_PRIVIL) == 0) {\n\t\t\treq_reject(PBSE_BADCRED, 0, request);\n\t\t\tclose_client(conn->cn_sock);\n\t\t\treturn;\n\t\t}\n\t\tcp = (conn_t *) GET_NEXT(svr_allconns);\n\t\tfor (; cp != NULL; cp = GET_NEXT(cp->cn_link)) {\n\t\t\tif (request->rq_ind.rq_auth.rq_port == cp->cn_port && conn->cn_addr == cp->cn_addr) {\n\t\t\t\tcp->cn_authen |= PBS_NET_CONN_AUTHENTICATED;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tif (cp == NULL) {\n\t\t\treq_reject(PBSE_BADCRED, 0, request);\n\t\t\tclose_client(conn->cn_sock);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tcp->cn_auth_config = make_auth_config(request->rq_ind.rq_auth.rq_auth_method,\n\t\t\t\t\t      request->rq_ind.rq_auth.rq_encrypt_method,\n\t\t\t\t\t      pbs_conf.pbs_exec_path,\n\t\t\t\t\t      pbs_conf.pbs_home_path,\n\t\t\t\t\t      (void *) log_event);\n\tif (cp->cn_auth_config == NULL) {\n\t\treq_reject(PBSE_SYSTEM, 0, request);\n\t\tclose_client(conn->cn_sock);\n\t\treturn;\n\t}\n\n\t(void) strcpy(cp->cn_username, request->rq_user);\n\t(void) strcpy(cp->cn_hostname, request->rq_host);\n\tcp->cn_timestamp = time_now;\n\n\tif (encryptdef != NULL) {\n\t\tencryptdef->set_config((const pbs_auth_config_t *) (cp->cn_auth_config));\n\t\ttransport_chan_set_authdef(cp->cn_sock, encryptdef, FOR_ENCRYPT);\n\t\ttransport_chan_set_ctx_status(cp->cn_sock, AUTH_STATUS_CTX_ESTABLISHING, FOR_ENCRYPT);\n\t}\n\n\tif (authdef != NULL) {\n\t\tif (encryptdef != authdef)\n\t\t\tauthdef->set_config((const 
pbs_auth_config_t *) (cp->cn_auth_config));\n\t\ttransport_chan_set_authdef(cp->cn_sock, authdef, FOR_AUTH);\n\t\ttransport_chan_set_ctx_status(cp->cn_sock, AUTH_STATUS_CTX_ESTABLISHING, FOR_AUTH);\n\t}\n\tif (strcmp(request->rq_ind.rq_auth.rq_auth_method, AUTH_RESVPORT_NAME) == 0) {\n\t\ttransport_chan_set_ctx_status(cp->cn_sock, AUTH_STATUS_CTX_READY, FOR_AUTH);\n\t}\n\treply_ack(request);\n}\n\n#ifndef PBS_MOM\n/**\n * @brief handle incoming register sched request\n *\n * @param[in] conn - pointer to connection structure on which request came\n * @param[in] preq - pointer to incoming request structure\n *\n * @return void\n */\nstatic void\nreq_register_sched(conn_t *conn, struct batch_request *preq)\n{\n\tpbs_sched *sched;\n\tconn_t *pconn;\n\tint rc;\n\tint are_primary;\n\tint preq_conn;\n\tchar *auth_user = pbs_conf.pbs_daemon_service_auth_user ? pbs_conf.pbs_daemon_service_auth_user : pbs_conf.pbs_daemon_service_user;\n\tchar *user = auth_user ? auth_user : pbs_current_user;\n\tchar *conn_auth_user;\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\tconn_auth_user = malloc(strlen(conn->cn_username) + 1 + strlen(conn->cn_hostname) + 1);\n\tif (conn_auth_user == NULL) {\n\t\tlog_err(errno, __func__, msg_err_malloc);\n\t\trc = PBSE_SYSTEM;\n\t\tgoto rerr;\n\t}\n\n\tstrcpy(conn_auth_user, conn->cn_username);\n\tstrcat(conn_auth_user, \"@\");\n\tstrcat(conn_auth_user, conn->cn_hostname);\n#else\n\tconn_auth_user = strdup(conn->cn_username);\n\tif (conn_auth_user == NULL) {\n\t\tlog_err(errno, __func__, msg_err_malloc);\n\t\trc = PBSE_SYSTEM;\n\t\tgoto rerr;\n\t}\n#endif\n\n\tif ((conn->cn_authen & PBS_NET_CONN_AUTHENTICATED) == 0 || strcmp(conn_auth_user, user) != 0) {\n\t\trc = PBSE_PERM;\n\t\tfree(conn_auth_user);\n\t\tgoto rerr;\n\t}\n\tfree(conn_auth_user);\n\n\tif (preq->rq_ind.rq_register_sched.rq_name == NULL) {\n\t\trc = PBSE_IVALREQ;\n\t\tgoto rerr;\n\t}\n\tsched = find_sched(preq->rq_ind.rq_register_sched.rq_name);\n\tif (sched == NULL) {\n\t\trc 
= PBSE_UNKSCHED;\n\t\tgoto rerr;\n\t}\n\n\tif (pbs_conf.pbs_primary != NULL && pbs_conf.pbs_secondary != NULL) {\n\t\tare_primary = are_we_primary();\n\t\tpbs_net_t addr;\n\t\tif (are_primary == FAILOVER_PRIMARY) {\n\t\t\taddr = get_hostaddr(pbs_conf.pbs_primary);\n\t\t\tif (addr != conn->cn_addr) {\n\t\t\t\trc = PBSE_BADHOST;\n\t\t\t\tgoto rerr;\n\t\t\t}\n\t\t} else if (are_primary == FAILOVER_SECONDARY) {\n\t\t\taddr = get_hostaddr(pbs_conf.pbs_secondary);\n\t\t\tif (addr != conn->cn_addr) {\n\t\t\t\trc = PBSE_BADHOST;\n\t\t\t\tgoto rerr;\n\t\t\t}\n\t\t} else {\n\t\t\trc = PBSE_BADHOST;\n\t\t\tgoto rerr;\n\t\t}\n\t}\n\n\tif (sched->sc_primary_conn != -1 && sched->sc_secondary_conn != -1) {\n\t\trc = PBSE_SCHEDCONNECTED;\n\t\tgoto rerr;\n\t}\n\tif (sched->sc_primary_conn == -1) {\n\t\tsched->sc_primary_conn = conn->cn_sock;\n\t\tnet_add_close_func(conn->cn_sock, scheduler_close);\n\t\treply_ack(preq);\n\t\treturn;\n\t} else if (sched->sc_primary_conn != -1) {\n\t\tpconn = get_conn(sched->sc_primary_conn);\n\t\tif (!pconn) {\n\t\t\trc = PBSE_INTERNAL;\n\t\t\tgoto rerr;\n\t\t}\n\t} else {\n\t\trc = PBSE_IVALREQ;\n\t\tgoto rerr;\n\t}\n\tif (pconn->cn_sock == conn->cn_sock) {\n\t\trc = PBSE_IVALREQ;\n\t\tgoto rerr;\n\t}\n\tif ((pconn->cn_authen & PBS_NET_CONN_AUTHENTICATED) == 0 || strcmp(pconn->cn_physhost, conn->cn_physhost) || pconn->cn_addr != conn->cn_addr) {\n\t\trc = PBSE_PERM;\n\t\tgoto rerr;\n\t}\n\tsched->sc_primary_conn = pconn->cn_sock;\n\tsched->sc_secondary_conn = conn->cn_sock;\n\tconn->cn_authen |= PBS_NET_CONN_FROM_PRIVIL | PBS_NET_CONN_NOTIMEOUT;\n\tpconn->cn_authen |= PBS_NET_CONN_FROM_PRIVIL | PBS_NET_CONN_NOTIMEOUT;\n\tpconn->cn_origin = CONN_SCHED_PRIMARY;\n\tconn->cn_origin = CONN_SCHED_SECONDARY;\n\tnet_add_close_func(conn->cn_sock, scheduler_close);\n\tnet_add_close_func(pconn->cn_sock, scheduler_close);\n\tif (!set_conn_as_priority(pconn)) {\n\t\tlog_eventf(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SCHED, LOG_ERR, sched->sc_name, \"Failed to set 
primary connection as priority connection\");\n\t\trc = PBSE_INTERNAL;\n\t\tgoto rerr;\n\t}\n\tif (!set_conn_as_priority(conn)) {\n\t\tlog_eventf(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SCHED, LOG_ERR, sched->sc_name, \"Failed to set secondary connection as priority connection\");\n\t\trc = PBSE_INTERNAL;\n\t\tgoto rerr;\n\t}\n\n\treply_ack(preq);\n\tlog_eventf(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SCHED, LOG_ERR, sched->sc_name, \"scheduler connected\");\n\t/*\n\t * scheduler (re-)connected, ask it to configure itself\n\t *\n\t * this must come after above reply_ack\n\t */\n\tsend_sched_cmd(sched, SCH_CONFIGURE, NULL);\n\treturn;\n\nrerr:\n\tpreq_conn = preq->rq_conn;\n\treq_reject(rc, 0, preq);\n\tclose_client(preq_conn);\n}\n#endif\n\n/**\n * @brief\n * \t\tprocess_request - process a request from the network:\n *\t\tCall function to read in the request and decode it.\n *\t\tValidate requesting host and user.\n *\t\tCall function to process request based on type.\n *\t\tThat function MUST free the request by calling free_br()\n *\n * @param[in]\tsfds\t- file descriptor (socket) to get request\n */\n\nvoid\nprocess_request(int sfds)\n{\n\tint rc;\n\tstruct batch_request *request;\n\tconn_t *conn;\n#ifndef PBS_MOM\n\tint access_allowed;\n#endif\n\n\ttime_now = time(NULL);\n\n\tconn = get_conn(sfds);\n\n\tif (!conn) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_REQUEST, LOG_ERR, __func__, \"did not find socket in connection table\");\n\t\tclosesocket(sfds);\n\t\treturn;\n\t}\n\n#ifndef PBS_MOM\n\tif (conn->cn_origin == CONN_SCHED_SECONDARY) {\n\t\tif (recv_sched_cycle_end(sfds) != 0)\n\t\t\tclose_conn(sfds);\n\t\treturn;\n\t}\n#endif\n\n\tif ((request = alloc_br(0)) == NULL) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_REQUEST, LOG_ERR, __func__, \"Unable to allocate request structure\");\n\t\tclose_conn(sfds);\n\t\treturn;\n\t}\n\trequest->rq_conn = sfds;\n\tif (get_connecthost(sfds, request->rq_host, PBS_MAXHOSTNAME)) {\n\t\tlog_eventf(PBSEVENT_DEBUG, 
PBS_EVENTCLASS_REQUEST, LOG_DEBUG, __func__, \"%s: %lu\", msg_reqbadhost, get_connectaddr(sfds));\n\t\treq_reject(PBSE_BADHOST, 0, request);\n\t\treturn;\n\t}\n\t/*\n\t * Read in the request and decode it to the internal request structure.\n\t */\n#ifndef PBS_MOM\n\tif (conn->cn_active == FromClientDIS) {\n\t\trc = dis_request_read(sfds, request);\n\t} else {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_REQUEST, LOG_ERR, __func__, \"request on invalid type of connection\");\n\t\tclose_conn(sfds);\n\t\tfree_br(request);\n\t\treturn;\n\t}\n#else  /* PBS_MOM */\n\trc = dis_request_read(sfds, request);\n#endif /* PBS_MOM */\n\n\tif (rc == -1) { /* End of file */\n\t\tclose_client(sfds);\n\t\tfree_br(request);\n\t\treturn;\n\t} else if ((rc == PBSE_SYSTEM) || (rc == PBSE_INTERNAL)) {\n\t\t/* read error, likely cannot send reply so just disconnect */\n\n\t\t/* ??? not sure about this ??? */\n\n\t\tclose_client(sfds);\n\t\tfree_br(request);\n\t\treturn;\n\t} else if (rc > 0) {\n\t\t/*\n\t\t * request didn't decode, either garbage or unknown\n\t\t * request type, in either case, return reject-reply\n\t\t */\n\t\treq_reject(rc, 0, request);\n\t\tclose_client(sfds);\n\t\treturn;\n\t}\n\n#ifndef PBS_MOM\n\tstrcpy(conn->cn_physhost, request->rq_host);\n\tif (conn->cn_username[0] == '\\0')\n\t\tstrcpy(conn->cn_username, request->rq_user);\n\tif (conn->cn_hostname[0] == '\\0')\n\t\tstrcpy(conn->cn_hostname, request->rq_host);\n\tif (conn->cn_origin == CONN_SCHED_PRIMARY) {\n\t\t/*\n\t\t * If the request is coming from scheduler,\n\t\t * change the \"user\" from daemon user to \"Scheduler\"\n\t\t */\n\t\tstrncpy(request->rq_user, PBS_SCHED_DAEMON_NAME, PBS_MAXUSER);\n\t\trequest->rq_user[PBS_MAXUSER] = '\\0';\n\t}\n#endif /* PBS_MOM */\n\tlog_eventf(PBSEVENT_DEBUG2, PBS_EVENTCLASS_REQUEST, LOG_DEBUG, \"\", msg_request, request->rq_type, request->rq_user, request->rq_host, sfds);\n\n\tif (request->rq_type == PBS_BATCH_Authenticate) {\n\t\treq_authenticate(conn, 
request);\n\t\treturn;\n\t}\n\n#ifndef PBS_MOM\n\tif (request->rq_type != PBS_BATCH_Connect) {\n\t\tif (transport_chan_get_ctx_status(sfds, FOR_AUTH) != AUTH_STATUS_CTX_READY &&\n\t\t    (conn->cn_authen & PBS_NET_CONN_AUTHENTICATED) == 0) {\n\t\t\treq_reject(PBSE_BADCRED, 0, request);\n\t\t\tclose_client(sfds);\n\t\t\treturn;\n\t\t}\n\n\t\tif (conn->cn_credid == NULL &&\n\t\t    conn->cn_auth_config != NULL &&\n\t\t    conn->cn_auth_config->auth_method != NULL &&\n\t\t    strcmp(conn->cn_auth_config->auth_method, AUTH_RESVPORT_NAME) != 0) {\n\t\t\tchar *user = NULL;\n\t\t\tchar *host = NULL;\n\t\t\tchar *realm = NULL;\n\t\t\tauth_def_t *authdef = transport_chan_get_authdef(sfds, FOR_AUTH);\n\n\t\t\tif (authdef == NULL) {\n\t\t\t\treq_reject(PBSE_PERM, 0, request);\n\t\t\t\tclose_client(sfds);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tif (authdef->get_userinfo(transport_chan_get_authctx(sfds, FOR_AUTH), &user, &host, &realm) != 0) {\n\t\t\t\treq_reject(PBSE_PERM, 0, request);\n\t\t\t\tclose_client(sfds);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tif (user != NULL && realm != NULL) {\n\t\t\t\tsize_t clen = strlen(user) + strlen(realm) + 2; /* 1 for '@' and 1 for '\\0' */\n\t\t\t\tif ((conn->cn_credid = (char *) calloc(1, clen)) == NULL) {\n\t\t\t\t\treq_reject(PBSE_SYSTEM, errno, request);\n\t\t\t\t\tclose_client(sfds);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tstrcpy(conn->cn_credid, user);\n\t\t\t\tstrcat(conn->cn_credid, \"@\");\n\t\t\t\tstrcat(conn->cn_credid, realm);\n\t\t\t\tfree(realm);\n\t\t\t}\n\n\t\t\tif (user != NULL) {\n\t\t\t\tstrcpy(conn->cn_username, user);\n\t\t\t\tfree(user);\n\t\t\t}\n\n\t\t\tif (host != NULL) {\n\t\t\t\tstrcpy(conn->cn_hostname, host);\n\t\t\t\tfree(host);\n\t\t\t}\n\t\t}\n\n\t\tconn->cn_authen |= PBS_NET_CONN_AUTHENTICATED;\n\t}\n\n\tif (request->rq_type == PBS_BATCH_RegisterSched) {\n\t\treq_register_sched(conn, request);\n\t\treturn;\n\t}\n\n\t/* is the request from a host acceptable to the server */\n\tif 
(get_sattr_long(SVR_ATR_acl_host_enable)) {\n\t\t/* acl enabled, check it; always allow myself\t*/\n\n\t\tstruct pbsnode *isanode = NULL;\n\t\tif (is_sattr_set(SVR_ATR_acl_host_moms_enable) && get_sattr_long(SVR_ATR_acl_host_moms_enable) == 1) {\n\t\t\tisanode = find_nodebyaddr(get_connectaddr(sfds));\n\n\t\t\tif ((isanode != NULL) && (isanode->nd_state & INUSE_DELETED))\n\t\t\t\tisanode = NULL;\n\t\t}\n\n\t\tif (isanode == NULL) {\n\t\t\tpbs_net_t addr;\n\t\t\tchar ip[PBS_MAXIP_LEN + 1];\n\n\t\t\taddr = get_hostaddr(request->rq_host);\n\t\t\tif (snprintf(ip, PBS_MAXIP_LEN + 1, \"%ld.%ld.%ld.%ld\",\n\t\t\t\t     (addr & 0xff000000) >> 24,\n\t\t\t\t     (addr & 0x00ff0000) >> 16,\n\t\t\t\t     (addr & 0x0000ff00) >> 8,\n\t\t\t\t     (addr & 0x000000ff)) <= 0) {\n\t\t\t\taddr = 0;\n\t\t\t}\n\n\t\t\taccess_allowed = 0;\n\n\t\t\tif (acl_check(get_sattr(SVR_ATR_acl_hosts), request->rq_host, ACL_Host)) {\n\t\t\t\taccess_allowed = 1;\n\t\t\t}\n\n\t\t\tif (addr != 0 && acl_check(get_sattr(SVR_ATR_acl_hosts), ip, ACL_Subnet)) {\n\t\t\t\taccess_allowed = 1;\n\t\t\t}\n\n\t\t\tif (strcasecmp(server_host, request->rq_host) == 0) {\n\t\t\t\taccess_allowed = 1;\n\t\t\t}\n\n\t\t\tif (access_allowed == 0) {\n\t\t\t\treq_reject(PBSE_BADHOST, 0, request);\n\t\t\t\tclose_client(sfds);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t}\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\tif (conn->cn_credid != NULL &&\n\t    conn->cn_auth_config != NULL &&\n\t    conn->cn_auth_config->auth_method != NULL &&\n\t    strcmp(conn->cn_auth_config->auth_method, AUTH_GSS_NAME) == 0) {\n\n\t\taccess_allowed = 0;\n\n\t\tif (strcasecmp(server_host, request->rq_host) == 0) {\n\t\t\t/* always allow myself */\n\t\t\taccess_allowed = 1;\n\t\t}\n\n\t\tif (get_sattr_long(SVR_ATR_acl_krb_realm_enable)) {\n\t\t\tif (acl_check(get_sattr(SVR_ATR_acl_krb_realms), conn->cn_credid, ACL_Host)) {\n\t\t\t\taccess_allowed = 1;\n\t\t\t}\n\t\t} else {\n\t\t\taccess_allowed = 1;\n\t\t}\n\n\t\t/*\n\t\t * copy principal to 
request structure\n\t\t * so authenticate_user() can proceed\n\t\t */\n\t\tstrcpy(request->rq_user, conn->cn_username);\n\t\tstrcpy(request->rq_host, conn->cn_hostname);\n\n\t\tlog_eventf(PBSEVENT_DEBUG2, PBS_EVENTCLASS_REQUEST, LOG_DEBUG,\n\t\t\t   \"\", msg_auth_request, request->rq_type, request->rq_user,\n\t\t\t   request->rq_host, conn->cn_physhost, sfds);\n\n\t\tif (access_allowed == 0) {\n\t\t\treq_reject(PBSE_PERM, 0, request);\n\t\t\tclose_client(sfds);\n\t\t\treturn;\n\t\t}\n\t}\n#endif\n\n\t/*\n\t * determine source (user client or another server) of request.\n\t * set the permissions granted to the client\n\t */\n\tif (conn->cn_authen & PBS_NET_CONN_FROM_PRIVIL) {\n\n\t\t/* request came from another server */\n\n\t\trequest->rq_fromsvr = 1;\n\t\trequest->rq_perm = ATR_DFLAG_USRD | ATR_DFLAG_USWR |\n\t\t\t\t   ATR_DFLAG_OPRD | ATR_DFLAG_OPWR |\n\t\t\t\t   ATR_DFLAG_MGRD | ATR_DFLAG_MGWR |\n\t\t\t\t   ATR_DFLAG_SvWR;\n\n\t} else {\n\n\t\t/* request not from another server */\n\n\t\trequest->rq_fromsvr = 0;\n\n\t\t/*\n\t\t * Client must be authenticated by an Authenticate User Request,\n\t\t * if not, reject request and close connection.\n\t\t * -- The following is retained for compat with old cmds --\n\t\t * The exception to this is of course the Connect Request which\n\t\t * cannot have been authenticated, because it contains the\n\t\t * needed ticket; so trap it here.  
Of course, there is no\n\t\t * prior authentication on the Authenticate User request either,\n\t\t * but it comes over a reserved port and appears from another\n\t\t * server, hence is automatically granted authorization.\n\n\t\t */\n\n\t\tif (request->rq_type == PBS_BATCH_Connect) {\n\t\t\treq_connect(request);\n\t\t\treturn;\n\t\t}\n\n\t\tif ((conn->cn_authen & PBS_NET_CONN_AUTHENTICATED) == 0) {\n\t\t\trc = PBSE_BADCRED;\n\t\t} else {\n\t\t\trc = authenticate_user(request, conn);\n\t\t}\n\t\tif (rc != 0) {\n\t\t\treq_reject(rc, 0, request);\n\t\t\tif (rc == PBSE_BADCRED)\n\t\t\t\tclose_client(sfds);\n\t\t\treturn;\n\t\t}\n\n\t\trequest->rq_perm = svr_get_privilege(request->rq_user, request->rq_host);\n\t}\n\n\t/* if server shutting down, disallow new jobs and new running */\n\n\tif (get_sattr_long(SVR_ATR_State) > SV_STATE_RUN) {\n\t\tswitch (request->rq_type) {\n\t\t\tcase PBS_BATCH_AsyrunJob:\n\t\t\tcase PBS_BATCH_AsyrunJob_ack:\n\t\t\tcase PBS_BATCH_JobCred:\n\t\t\tcase PBS_BATCH_UserCred:\n\t\t\tcase PBS_BATCH_MoveJob:\n\t\t\tcase PBS_BATCH_QueueJob:\n\t\t\tcase PBS_BATCH_RunJob:\n\t\t\tcase PBS_BATCH_StageIn:\n\t\t\tcase PBS_BATCH_jobscript:\n\t\t\t\treq_reject(PBSE_SVRDOWN, 0, request);\n\t\t\t\treturn;\n\t\t}\n\t}\n\n#else /* THIS CODE FOR MOM ONLY */\n\n\t/* check connecting host against allowed list of ok clients */\n\tif (!addrfind(conn->cn_addr)) {\n\t\treq_reject(PBSE_BADHOST, 0, request);\n\t\tclose_client(sfds);\n\t\treturn;\n\t}\n\n\tif ((conn->cn_authen & PBS_NET_CONN_FROM_PRIVIL) == 0) {\n\t\treq_reject(PBSE_BADCRED, 0, request);\n\t\tclose_client(sfds);\n\t\treturn;\n\t}\n\n\trequest->rq_fromsvr = 1;\n\trequest->rq_perm = ATR_DFLAG_USRD | ATR_DFLAG_USWR |\n\t\t\t   ATR_DFLAG_OPRD | ATR_DFLAG_OPWR |\n\t\t\t   ATR_DFLAG_MGRD | ATR_DFLAG_MGWR |\n\t\t\t   ATR_DFLAG_SvWR | ATR_DFLAG_MOM;\n#endif\n\n\t/*\n\t * dispatch the request to the correct processing function.\n\t * The processing function must call reply_send() to free\n\t * the request 
structure.\n\t */\n\n\tdispatch_request(sfds, request);\n\treturn;\n}\n\n#ifndef PBS_MOM /* Server Only Functions */\n/**\n * @brief\n *\t\tSet socket to non-blocking to prevent write from hanging up the\n *\t\tServer for a long time.\n *\n *\t\tThis is called from dispatch_request() below for requests that will\n *\t\ttypically produce a large amount of output, such as stating all jobs.\n *\t\tIt is called after the incoming request has been read.  After the\n *\t\trequest is processed and replied to, the socket will be reset, see\n *\t\tclear_non_blocking().  The existing socket flags are saved in the\n *\t\tconnection table entry cn_sockflgs for use by clear_non_blocking().\n *\n * @param[in] conn - the connection structure.\n *\n * @return\tsuccess or failure\n * @retval\t-1\t- failure\n * @retval\t0\t- success\n */\n\nstatic int\nset_to_non_blocking(conn_t *conn)\n{\n\n\tif (conn->cn_sock != PBS_LOCAL_CONNECTION) {\n\t\tint flg;\n\t\tif (((flg = fcntl(conn->cn_sock, F_GETFL)) == -1) ||\n\t\t    (fcntl(conn->cn_sock, F_SETFL, flg | O_NONBLOCK) == -1)) {\n\t\t\tlog_err(errno, __func__,\n\t\t\t\t\"Unable to set client socket non-blocking\");\n\t\t\treturn -1;\n\t\t}\n\t\tconn->cn_sockflgs = flg;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tClear non-blocking from a socket.\n *\n *\t\tThe function set_to_non_blocking() must be called first; it saves\n *\t\tthe prior socket flags in the connection table.  
This function resets\n *\t\tthe socket flags to that value.\n *\n * @param[in] conn - the connection structure.\n */\n\nstatic void\nclear_non_blocking(conn_t *conn)\n{\n\tif (!conn)\n\t\treturn;\n\tif (conn->cn_sock != PBS_LOCAL_CONNECTION) {\n\t\tint flg;\n\t\tif ((flg = conn->cn_sockflgs) != -1)\n\t\t\t/* reset socket flag to prior value */\n\t\t\t(void) fcntl(conn->cn_sock, F_SETFL, flg);\n\t\tconn->cn_sockflgs = 0;\n\t}\n}\n#endif /* !PBS_MOM */\n\n/**\n * @brief\n * \t\tDetermine the request type and invoke the corresponding\n *\t\tfunction.\n * @par\n *\t\tThe function will perform the request action and return the\n *\t\treply.  The function MUST also reply and free the request by calling\n *\t\treply_send().\n *\n * @param[in]\tsfds\t- socket connection\n * @param[in]\trequest - the request information\n */\n\nvoid\ndispatch_request(int sfds, struct batch_request *request)\n{\n\n\tconn_t *conn = NULL;\n\tint prot = request->prot;\n\n\tif (prot == PROT_TCP) {\n\t\tif (sfds != PBS_LOCAL_CONNECTION) {\n\t\t\tconn = get_conn(sfds);\n\t\t\tif (!conn) {\n\t\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_REQUEST, LOG_ERR, __func__, \"did not find socket in connection table\");\n\t\t\t\treq_reject(PBSE_SYSTEM, 0, request);\n\t\t\t\tclose_client(sfds);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t}\n\n\tswitch (request->rq_type) {\n\n\t\tcase PBS_BATCH_QueueJob:\n\t\t\tif (prot == PROT_TPP) {\n\t\t\t\trequest->tpp_ack = 0;\n\t\t\t\ttpp_add_close_func(sfds, close_quejob);\n\t\t\t} else\n\t\t\t\tnet_add_close_func(sfds, close_quejob);\n\t\t\treq_quejob(request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_JobCred:\n\t\t\tif (prot == PROT_TPP)\n\t\t\t\trequest->tpp_ack = 0;\n\t\t\treq_jobcredential(request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_UserCred:\n#ifdef PBS_MOM\n#ifdef WIN32\n\t\t\treq_reject(PBSE_NOSUP, 0, request);\n#else\n\t\t\treq_reject(PBSE_UNKREQ, 0, request);\n#endif\n\t\t\tclose_client(sfds);\n#else\n\t\t\treq_usercredential(request);\n#endif\n\t\t\tbreak;\n\n\t\tcase 
PBS_BATCH_jobscript:\n\t\t\tif (prot == PROT_TPP)\n\t\t\t\trequest->tpp_ack = 0;\n\t\t\treq_jobscript(request);\n\t\t\tbreak;\n\n\t\t\t/*\n\t\t\t * The PBS_BATCH_Rdytocommit message is deprecated.\n\t\t\t * The server does not do anything with it anymore, but\n\t\t\t * simply acks the request (in case some client makes this call)\n\t\t\t */\n\t\tcase PBS_BATCH_RdytoCommit:\n\t\t\tif (prot == PROT_TPP)\n\t\t\t\trequest->tpp_ack = 0;\n\t\t\treply_ack(request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_Commit:\n\t\t\tif (prot == PROT_TPP)\n\t\t\t\trequest->tpp_ack = 0;\n\t\t\treq_commit(request);\n\t\t\tif (prot == PROT_TPP)\n\t\t\t\ttpp_add_close_func(sfds, (void (*)(int)) 0);\n\t\t\telse\n\t\t\t\tnet_add_close_func(sfds, (void (*)(int)) 0);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_DeleteJobList:\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  request->rq_ind.rq_deletejoblist.rq_jobslist[0],\n\t\t\t\t  \"delete job request received\");\n\t\t\treq_deletejob(request);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_DeleteJob:\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  request->rq_ind.rq_delete.rq_objname,\n\t\t\t\t  \"delete job request received\");\n\t\t\treq_deletejob(request);\n\t\t\tbreak;\n\n#ifndef PBS_MOM\n\t\tcase PBS_BATCH_SubmitResv:\n\t\t\treq_resvSub(request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_DeleteResv:\n\t\t\treq_deleteReservation(request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_ModifyResv:\n\t\t\treq_modifyReservation(request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_ResvOccurEnd:\n\t\t\treq_reservationOccurrenceEnd(request);\n\t\t\tbreak;\n#endif\n\n\t\tcase PBS_BATCH_HoldJob:\n\t\t\tif (sfds != PBS_LOCAL_CONNECTION && prot == PROT_TCP)\n\t\t\t\tconn->cn_authen |= PBS_NET_CONN_NOTIMEOUT;\n\t\t\treq_holdjob(request);\n\t\t\tbreak;\n#ifndef PBS_MOM\n\t\tcase PBS_BATCH_PreemptJobs:\n\t\t\treq_preemptjobs(request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_LocateJob:\n\t\t\treq_locatejob(request);\n\t\t\tbreak;\n\n\t\tcase 
PBS_BATCH_Manager:\n\t\t\treq_manager(request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_RelnodesJob:\n\t\t\treq_relnodesjob(request);\n\t\t\tbreak;\n\n#endif\n\t\tcase PBS_BATCH_MessJob:\n\t\t\treq_messagejob(request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_PySpawn:\n\t\t\tif (sfds != PBS_LOCAL_CONNECTION && prot == PROT_TCP)\n\t\t\t\tconn->cn_authen |= PBS_NET_CONN_NOTIMEOUT;\n\t\t\treq_py_spawn(request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_ModifyJob:\n\t\tcase PBS_BATCH_ModifyJob_Async:\n\t\t\treq_modifyjob(request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_Rerun:\n\t\t\treq_rerunjob(request);\n\t\t\tbreak;\n#ifndef PBS_MOM\n\t\tcase PBS_BATCH_MoveJob:\n\t\t\treq_movejob(request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_OrderJob:\n\t\t\treq_orderjob(request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_Rescq:\n\t\t\treq_reject(PBSE_NOSUP, 0, request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_ReserveResc:\n\t\t\treq_reject(PBSE_NOSUP, 0, request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_ReleaseResc:\n\t\t\treq_reject(PBSE_NOSUP, 0, request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_ReleaseJob:\n\t\t\tif (sfds != PBS_LOCAL_CONNECTION && prot == PROT_TCP)\n\t\t\t\tconn->cn_authen |= PBS_NET_CONN_NOTIMEOUT;\n\t\t\treq_releasejob(request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_RunJob:\n\t\tcase PBS_BATCH_AsyrunJob:\n\t\tcase PBS_BATCH_AsyrunJob_ack:\n\t\t\treq_runjob(request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_DefSchReply:\n\t\t\treq_defschedreply(request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_ConfirmResv:\n\t\t\treq_confirmresv(request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_SelectJobs:\n\t\tcase PBS_BATCH_SelStat:\n\t\t\treq_selectjobs(request);\n\t\t\tbreak;\n\n#endif /* !PBS_MOM */\n\n\t\tcase PBS_BATCH_Shutdown:\n\t\t\treq_shutdown(request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_SignalJob:\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  request->rq_ind.rq_signal.rq_jid,\n\t\t\t\t  \"signal job request received\");\n\t\t\treq_signaljob(request);\n\t\t\tbreak;\n\n\t\tcase 
PBS_BATCH_MvJobFile:\n\t\t\treq_mvjobfile(request);\n\t\t\tbreak;\n\n#ifndef PBS_MOM /* Server Only Functions */\n\n\t\tcase PBS_BATCH_StatusJob:\n\t\t\tif (set_to_non_blocking(conn) == -1) {\n\t\t\t\treq_reject(PBSE_SYSTEM, 0, request);\n\t\t\t\tclose_client(sfds);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\treq_stat_job(request);\n\t\t\tclear_non_blocking(get_conn(sfds));\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_StatusQue:\n\t\t\tif (set_to_non_blocking(conn) == -1) {\n\t\t\t\treq_reject(PBSE_SYSTEM, 0, request);\n\t\t\t\tclose_client(sfds);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\treq_stat_que(request);\n\t\t\tclear_non_blocking(get_conn(sfds));\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_StatusNode:\n\t\t\tif (prot != PROT_TPP && set_to_non_blocking(conn) == -1) {\n\t\t\t\treq_reject(PBSE_SYSTEM, 0, request);\n\t\t\t\tclose_client(sfds);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\treq_stat_node(request);\n\t\t\tclear_non_blocking(get_conn(sfds));\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_StatusResv:\n\t\t\tif (set_to_non_blocking(conn) == -1) {\n\t\t\t\treq_reject(PBSE_SYSTEM, 0, request);\n\t\t\t\tclose_client(sfds);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\treq_stat_resv(request);\n\t\t\tclear_non_blocking(get_conn(sfds));\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_StatusSvr:\n\t\t\treq_stat_svr(request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_StatusSched:\n\t\t\treq_stat_sched(request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_StatusHook:\n\t\t\t/* Scheduler is allowed to make the request */\n\t\t\tif (conn->cn_origin != CONN_SCHED_PRIMARY && !is_local_root(request->rq_user, request->rq_host)) {\n\t\t\t\tsprintf(log_buffer, \"%s@%s is unauthorized to \"\n\t\t\t\t\t\t    \"access hooks data from server %s\",\n\t\t\t\t\trequest->rq_user, request->rq_host, server_host);\n\t\t\t\treply_text(request, PBSE_HOOKERROR, log_buffer);\n\t\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t  LOG_INFO, \"\", log_buffer);\n\t\t\t\t/* don't call close_client() to allow other */\n\t\t\t\t/* non-hook related requests to continue 
*/\n\t\t\t\tbreak;\n\t\t\t}\n\n\t\t\tif (set_to_non_blocking(conn) == -1) {\n\t\t\t\treq_reject(PBSE_SYSTEM, 0, request);\n\t\t\t\tclose_client(sfds);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\treq_stat_hook(request);\n\t\t\tclear_non_blocking(get_conn(sfds));\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_TrackJob:\n\t\t\treq_track(request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_RegistDep:\n\t\t\treq_register(request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_StageIn:\n\t\t\treq_stagein(request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_FailOver:\n\t\t\treq_failover(request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_StatusRsc:\n\t\t\treq_stat_resc(request);\n\t\t\tbreak;\n#else /* MOM only functions */\n\n\t\tcase PBS_BATCH_CopyFiles:\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  request->rq_ind.rq_cpyfile.rq_jobid,\n\t\t\t\t  \"copy file request received\");\n\t\t\t/* don't time-out as copy may take long time */\n\t\t\tif (sfds != PBS_LOCAL_CONNECTION && prot == PROT_TCP)\n\t\t\t\tconn->cn_authen |= PBS_NET_CONN_NOTIMEOUT;\n\t\t\treq_cpyfile(request);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_CopyFiles_Cred:\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  request->rq_ind.rq_cpyfile_cred.rq_copyfile.rq_jobid,\n\t\t\t\t  \"copy file cred request received\");\n\t\t\t/* don't time-out as copy may take long time */\n\t\t\tif (sfds != PBS_LOCAL_CONNECTION && prot == PROT_TCP)\n\t\t\t\tconn->cn_authen |= PBS_NET_CONN_NOTIMEOUT;\n\t\t\treq_cpyfile(request);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_DelFiles:\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  request->rq_ind.rq_cpyfile.rq_jobid,\n\t\t\t\t  \"delete file request received\");\n\t\t\treq_delfile(request);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_DelFiles_Cred:\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  request->rq_ind.rq_cpyfile_cred.rq_copyfile.rq_jobid,\n\t\t\t\t  \"delete file cred request received\");\n\t\t\treq_delfile(request);\n\t\t\tbreak;\n\t\tcase 
PBS_BATCH_CopyHookFile:\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_INFO,\n\t\t\t\t  request->rq_ind.rq_hookfile.rq_filename,\n\t\t\t\t  \"copy hook-related file request received\");\n\t\t\treq_copy_hookfile(request);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_DelHookFile:\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_INFO,\n\t\t\t\t  request->rq_ind.rq_hookfile.rq_filename,\n\t\t\t\t  \"delete hook-related file request received\");\n\t\t\treq_del_hookfile(request);\n\t\t\tbreak;\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\t\tcase PBS_BATCH_Cred:\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB,\n\t\t\t\t  LOG_INFO,\n\t\t\t\t  request->rq_ind.rq_cred.rq_jobid,\n\t\t\t\t  \"credentials received\");\n\t\t\treq_cred(request);\n\t\t\tbreak;\n#endif\n#endif\n\t\tdefault:\n\t\t\treq_reject(PBSE_UNKREQ, 0, request);\n\t\t\tclose_client(sfds);\n\t\t\tbreak;\n\t}\n\treturn;\n}\n\n/**\n * @brief\n *  process_IS_CMD: Create batch request on received IS_CMD message\n *                   and dispatch request.\n *\n *  @param[in] - stream -  connection stream.\n *\n *  @return void\n *\n */\nvoid\nprocess_IS_CMD(int stream)\n{\n\tint rc;\n\tstruct batch_request *request;\n\tstruct sockaddr_in *addr;\n\tchar *msgid = NULL;\n\n\tif ((addr = tpp_getaddr(stream)) == NULL) {\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_REQUEST, LOG_DEBUG, \"?\", \"Sender unknown\");\n\t\treturn;\n\t}\n\n\t/* in case of IS_CMD there is a unique id passed with each command,\n\t * which we need to send back with the reply so server can\n\t * match the replies to the requests\n\t */\n\tmsgid = disrst(stream, &rc);\n\tif (!msgid || rc) {\n\t\tclose(stream);\n\t\treturn;\n\t}\n\n\trequest = alloc_br(0); /* freed when reply sent */\n\tif (!request) {\n\t\tclose(stream);\n\t\tif (msgid)\n\t\t\tfree(msgid);\n\t\treturn;\n\t}\n\n\trequest->rq_conn = stream;\n\tpbs_strncpy(request->rq_host, netaddr(addr), sizeof(request->rq_host));\n\trequest->rq_fromsvr = 
1;\n\trequest->prot = PROT_TPP;\n\trequest->tppcmd_msgid = msgid;\n\n\trc = dis_request_read(stream, request);\n\tif (rc != 0) {\n\t\tclose(stream);\n\t\tfree_br(request);\n\t\treturn;\n\t}\n\n\tlog_eventf(PBSEVENT_DEBUG2, PBS_EVENTCLASS_REQUEST, LOG_DEBUG, \"\",\n\t\t   msg_request, request->rq_type, request->rq_user,\n\t\t   request->rq_host, stream);\n\n\tdispatch_request(stream, request);\n}\n\n/**\n * @brief\n * \t\tclose_client - close a connection to a client, also \"inactivate\"\n *\t\t  any outstanding batch requests on that connection.\n *\n * @param[in]\tsfds\t- connection socket\n */\n\nvoid\nclose_client(int sfds)\n{\n\tstruct batch_request *preq;\n\n\tclose_conn(sfds); /* close the connection */\n\tpreq = (struct batch_request *) GET_NEXT(svr_requests);\n\twhile (preq) { /* list of outstanding requests */\n\t\tif (preq->rq_conn == sfds)\n\t\t\tpreq->rq_conn = -1;\n\t\tif (preq->rq_orgconn == sfds)\n\t\t\tpreq->rq_orgconn = -1;\n\t\tpreq = (struct batch_request *) GET_NEXT(preq->rq_link);\n\t}\n}\n\n/**\n * @brief\n * \t\talloc_br - allocate and clear a batch_request structure\n *\n * @param[in]\ttype\t- type of request\n *\n * @return\tbatch_request *\n * @retval\tNULL\t- error\n */\n\nstruct batch_request *\nalloc_br(int type)\n{\n\tstruct batch_request *req;\n\n\treq = (struct batch_request *) malloc(sizeof(struct batch_request));\n\tif (req == NULL)\n\t\tlog_err(errno, \"alloc_br\", msg_err_malloc);\n\telse {\n\t\tmemset((void *) req, (int) 0, sizeof(struct batch_request));\n\t\treq->rq_type = type;\n\t\tCLEAR_LINK(req->rq_link);\n\t\treq->rq_conn = -1;    /* indicate not connected */\n\t\treq->rq_orgconn = -1; /* indicate not connected */\n\t\treq->rq_time = time_now;\n\t\treq->tpp_ack = 1;\t  /* enable acks to be passed by tpp by default */\n\t\treq->prot = PROT_TCP;\t  /* not tpp by default */\n\t\treq->tppcmd_msgid = NULL; /* NULL msgid to boot */\n\t\treq->rq_reply.brp_is_part = 0;\n\t\treq->rq_reply.brp_choice = 
BATCH_REPLY_CHOICE_NULL;\n\t\tappend_link(&svr_requests, &req->rq_link, req);\n\t}\n\treturn (req);\n}\n\n/**\n * @brief\n * \tcopy constructor for batch request - shallow copy\n *\n * @param[in]\tsrc\t- request to be copied\n *\n * @return\tbatch_request *\n * @retval\tNULL\t- error\n */\n\nstruct batch_request *\ncopy_br(struct batch_request *src)\n{\n\tstruct batch_request *req;\n\n\tif (!src)\n\t\treturn NULL;\n\n\treq = calloc(sizeof(struct batch_request), 1);\n\tif (req == NULL) {\n\t\tlog_err(errno, __func__, msg_err_malloc);\n\t\treturn NULL;\n\t}\n\n\treq->rq_type = src->rq_type;\n\tCLEAR_LINK(req->rq_link);\n\treq->rq_conn = src->rq_conn;\n\treq->rq_orgconn = src->rq_orgconn;\n\treq->rq_time = src->rq_time;\n\treq->tpp_ack = src->tpp_ack;\n\treq->prot = src->prot;\n\treq->rq_reply.brp_is_part = src->rq_reply.brp_is_part;\n\treq->rq_reply.brp_choice = src->rq_reply.brp_choice;\n\treq->rq_fromsvr = src->rq_fromsvr;\n\treq->rq_perm = src->rq_perm;\n\tif (src->rq_user[0])\n\t\tmemcpy(&req->rq_user, &src->rq_user, sizeof(req->rq_user));\n\tif (src->rq_host[0])\n\t\tmemcpy(&req->rq_host, &src->rq_host, sizeof(req->rq_host));\n\tappend_link(&svr_requests, &req->rq_link, req);\n\n\treturn (req);\n}\n\n/**\n * @brief\n * \t\tclose_quejob - locate and deal with the new job that was being received\n *\t\t  when the net connection closed.\n *\n * @param[in]\tsfds\t- file descriptor (socket) to get request\n */\n\nstatic void\nclose_quejob(int sfds)\n{\n\tjob *pjob;\n\n\tpjob = (job *) GET_NEXT(svr_newjobs);\n\twhile (pjob != NULL) {\n\t\tif (pjob->ji_qs.ji_un.ji_newt.ji_fromsock == sfds) {\n\t\t\tif (check_job_substate(pjob, JOB_SUBSTATE_TRANSICM)) {\n\n#ifndef PBS_MOM\n\t\t\t\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) {\n\n\t\t\t\t\t/*\n\t\t\t\t\t * the job was being created here for the first time\n\t\t\t\t\t * go ahead and enqueue it as QUEUED; otherwise, hold\n\t\t\t\t\t * it here as TRANSICM until we hear from the sending\n\t\t\t\t\t * server again to 
commit.\n\t\t\t\t\t */\n\t\t\t\t\tdelete_link(&pjob->ji_alljobs);\n\t\t\t\t\tset_job_state(pjob, JOB_STATE_LTR_QUEUED);\n\t\t\t\t\tset_job_substate(pjob, JOB_SUBSTATE_QUEUED);\n\t\t\t\t\tif (svr_enquejob(pjob, NULL))\n\t\t\t\t\t\t(void) job_abt(pjob, msg_err_noqueue);\n\t\t\t\t}\n#endif /* PBS_MOM */\n\n\t\t\t} else {\n\n\t\t\t\t/* else delete the job */\n\n\t\t\t\tdelete_link(&pjob->ji_alljobs);\n\t\t\t\tjob_purge(pjob);\n\t\t\t}\n\t\t\tbreak;\n\t\t}\n\t\tpjob = GET_NEXT(pjob->ji_alljobs);\n\t}\n\treturn;\n}\n\n#ifndef PBS_MOM /* Server Only */\n/**\n * @brief\n * \t\tfree_rescrq - free resource queue.\n *\n * @param[in,out]\tpq\t- resource queue\n */\nstatic void\nfree_rescrq(struct rq_rescq *pq)\n{\n\tint i;\n\n\ti = pq->rq_num;\n\twhile (i--) {\n\t\tif (*(pq->rq_list + i))\n\t\t\t(void) free(*(pq->rq_list + i));\n\t}\n\tif (pq->rq_list)\n\t\t(void) free(pq->rq_list);\n}\n#endif /* Server Only */\n\n/**\n * @brief\n * \t\tFree malloc'ed array of strings.  Used in MOM to deal with tm_spawn\n * \t\trequests and both MOM and the server for py_spawn requests.\n *\n * @param[out]\tarray\t- malloc'ed array of strings\n */\nvoid\narrayfree(char **array)\n{\n\tint i;\n\n\tif (array == NULL)\n\t\treturn;\n\tfor (i = 0; array[i]; i++)\n\t\tfree(array[i]);\n\tfree(array);\n}\n\n/**\n * @brief\n *\t\tRead a bunch of strings into a NULL terminated array.\n *\t\tThe strings are regular null terminated char arrays\n *\t\tand the string array is NULL terminated.\n *\n *\t\tPass in array location to hold the allocated array\n *\t\tand return an error value if there is a problem.  
If\n *\t\tan error does occur, arrloc is not changed.\n *\n * @param[in]\tstream\t- socket from which the request is read.\n * @param[out]\tarrloc\t- NULL terminated array where strings are stored.\n *\n * @return\terror code\n */\nstatic int\nread_carray(int stream, char ***arrloc)\n{\n\tint i, num, ret;\n\tchar *cp, **carr;\n\n\tif (arrloc == NULL)\n\t\treturn PBSE_INTERNAL;\n\n\tnum = 4; /* keep track of the number of array slots */\n\tcarr = (char **) calloc(num, sizeof(char *));\n\tif (carr == NULL)\n\t\treturn PBSE_SYSTEM;\n\n\tfor (i = 0;; i++) {\n\t\tcp = disrst(stream, &ret);\n\t\tif ((cp == NULL) || (ret != DIS_SUCCESS)) {\n\t\t\tarrayfree(carr);\n\t\t\tif (cp != NULL)\n\t\t\t\tfree(cp);\n\t\t\treturn PBSE_SYSTEM;\n\t\t}\n\t\tif (*cp == '\\0') {\n\t\t\tfree(cp);\n\t\t\tbreak;\n\t\t}\n\t\tif (i == num - 1) {\n\t\t\tchar **hold;\n\n\t\t\thold = (char **) realloc(carr,\n\t\t\t\t\t\t num * 2 * sizeof(char *));\n\t\t\tif (hold == NULL) {\n\t\t\t\tarrayfree(carr);\n\t\t\t\tfree(cp);\n\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t}\n\t\t\tcarr = hold;\n\n\t\t\t/* zero the last half of the now doubled carr */\n\t\t\tmemset(&carr[num], 0, num * sizeof(char *));\n\t\t\tnum *= 2;\n\t\t}\n\t\tcarr[i] = cp;\n\t}\n\tcarr[i] = NULL;\n\t*arrloc = carr;\n\treturn ret;\n}\n\n/**\n * @brief\n *\t\tRead a python spawn request off the wire.\n *\t\tEach of the argv and envp arrays is sent by writing a counted\n *\t\tstring followed by a zero length string (\"\").\n *\n * @param[in]\tsock\t- socket from which the request is read.\n * @param[in]\tpreq\t- the batch_request structure to fill in.\n */\nint\ndecode_DIS_PySpawn(int sock, struct batch_request *preq)\n{\n\tint rc;\n\n\trc = disrfst(sock, sizeof(preq->rq_ind.rq_py_spawn.rq_jid),\n\t\t     preq->rq_ind.rq_py_spawn.rq_jid);\n\tif (rc)\n\t\treturn rc;\n\n\trc = read_carray(sock, &preq->rq_ind.rq_py_spawn.rq_argv);\n\tif (rc)\n\t\treturn rc;\n\n\trc = read_carray(sock, &preq->rq_ind.rq_py_spawn.rq_envp);\n\tif (rc)\n\t\treturn 
rc;\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\t\tRead a release nodes from job request off the wire.\n *\n * @param[in]\tsock\t- socket from which the request is read.\n * @param[in]\tpreq\t- the batch_request structure containing the request details.\n *\n * @return int\n *\n * @retval\t0\t- if successful\n * @retval\t!= 0\t- if not successful (an error encountered along the way)\n */\nint\ndecode_DIS_RelnodesJob(int sock, struct batch_request *preq)\n{\n\tint rc;\n\n\tpreq->rq_ind.rq_relnodes.rq_node_list = NULL;\n\n\trc = disrfst(sock, PBS_MAXSVRJOBID + 1, preq->rq_ind.rq_relnodes.rq_jid);\n\tif (rc)\n\t\treturn rc;\n\n\tpreq->rq_ind.rq_relnodes.rq_node_list = disrst(sock, &rc);\n\treturn rc;\n}\n\n/**\n * @brief\n * \t\tFree space allocated to a batch_request structure\n *\t\tincluding any sub-structures\n *\n * @param[in]\tpreq - the batch_request structure to free up.\n */\nvoid\nfree_br(struct batch_request *preq)\n{\n\tdelete_link(&preq->rq_link);\n\treply_free(&preq->rq_reply);\n\n\tif (preq->rq_parentbr) {\n\t\t/*\n\t\t * have a parent who has the original info, so we cannot\n\t\t * free any data malloc-ed outside of the basic structure;\n\t\t * decrement the reference count in the parent and when it\n\t\t * goes to zero,  reply_send() it\n\t\t */\n\t\tif (preq->rq_parentbr->rq_refct > 0) {\n\t\t\tif (--preq->rq_parentbr->rq_refct == 0) {\n#ifndef PBS_MOM /* Server Only */\n\t\t\t\tif (preq->rq_parentbr->rq_type == PBS_BATCH_DeleteJobList) {\n\t\t\t\t\tif (delete_pending_arrayjobs(preq->rq_parentbr))\n\t\t\t\t\t\treply_send(preq->rq_parentbr);\n\t\t\t\t} else\n#endif /* End of server */\n\t\t\t\t\treply_send(preq->rq_parentbr);\n\t\t\t}\n\t\t}\n\n\t\tfree(preq->tppcmd_msgid);\n\t\tif (preq->rq_type == PBS_BATCH_DeleteJobList)\n\t\t\tif (preq->rq_ind.rq_deletejoblist.rq_jobslist)\n\t\t\t\tfree_string_array(preq->rq_ind.rq_deletejoblist.rq_jobslist);\n\t\tfree(preq);\n\t\treturn;\n\t}\n\n\t/*\n\t * IMPORTANT - free any data that is malloc-ed outside of the\n\t 
* basic batch_request structure below here so it is not freed\n\t * when a copy of the structure (for a Array subjob) is freed\n\t */\n\tif (preq->rq_extend)\n\t\t(void) free(preq->rq_extend);\n\n\tswitch (preq->rq_type) {\n\t\tcase PBS_BATCH_QueueJob:\n\t\t\tfree_attrlist(&preq->rq_ind.rq_queuejob.rq_attr);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_JobCred:\n\t\t\tif (preq->rq_ind.rq_jobcred.rq_data)\n\t\t\t\t(void) free(preq->rq_ind.rq_jobcred.rq_data);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_UserCred:\n\t\t\tif (preq->rq_ind.rq_usercred.rq_data)\n\t\t\t\t(void) free(preq->rq_ind.rq_usercred.rq_data);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_jobscript:\n\t\t\tif (preq->rq_ind.rq_jobfile.rq_data)\n\t\t\t\t(void) free(preq->rq_ind.rq_jobfile.rq_data);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_CopyHookFile:\n\t\t\tif (preq->rq_ind.rq_hookfile.rq_data)\n\t\t\t\t(void) free(preq->rq_ind.rq_hookfile.rq_data);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_HoldJob:\n\t\t\tfreebr_manage(&preq->rq_ind.rq_hold.rq_orig);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_MessJob:\n\t\t\tif (preq->rq_ind.rq_message.rq_text)\n\t\t\t\t(void) free(preq->rq_ind.rq_message.rq_text);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_RelnodesJob:\n\t\t\tif (preq->rq_ind.rq_relnodes.rq_node_list)\n\t\t\t\t(void) free(preq->rq_ind.rq_relnodes.rq_node_list);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_PySpawn:\n\t\t\tarrayfree(preq->rq_ind.rq_py_spawn.rq_argv);\n\t\t\tarrayfree(preq->rq_ind.rq_py_spawn.rq_envp);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_ModifyJob:\n\t\tcase PBS_BATCH_ModifyResv:\n\t\tcase PBS_BATCH_ModifyJob_Async:\n\t\t\tfreebr_manage(&preq->rq_ind.rq_modify);\n\t\t\tbreak;\n\n\t\tcase PBS_BATCH_RunJob:\n\t\tcase PBS_BATCH_AsyrunJob:\n\t\tcase PBS_BATCH_AsyrunJob_ack:\n\t\tcase PBS_BATCH_StageIn:\n\t\tcase PBS_BATCH_ConfirmResv:\n\t\t\tif (preq->rq_ind.rq_run.rq_destin) {\n\t\t\t\tfree(preq->rq_ind.rq_run.rq_destin);\n\t\t\t\tpreq->rq_ind.rq_run.rq_destin = NULL;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase PBS_BATCH_StatusJob:\n\t\tcase PBS_BATCH_StatusQue:\n\t\tcase 
PBS_BATCH_StatusNode:\n\t\tcase PBS_BATCH_StatusSvr:\n\t\tcase PBS_BATCH_StatusSched:\n\t\tcase PBS_BATCH_StatusHook:\n\t\tcase PBS_BATCH_StatusRsc:\n\t\tcase PBS_BATCH_StatusResv:\n\t\t\tif (preq->rq_ind.rq_status.rq_id)\n\t\t\t\tfree(preq->rq_ind.rq_status.rq_id);\n\t\t\tfree_attrlist(&preq->rq_ind.rq_status.rq_attr);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_DeleteJobList:\n\t\t\tif (preq->rq_ind.rq_deletejoblist.rq_jobslist)\n\t\t\t\tfree_string_array(preq->rq_ind.rq_deletejoblist.rq_jobslist);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_CopyFiles:\n\t\tcase PBS_BATCH_DelFiles:\n\t\t\tfreebr_cpyfile(&preq->rq_ind.rq_cpyfile);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_CopyFiles_Cred:\n\t\tcase PBS_BATCH_DelFiles_Cred:\n\t\t\tfreebr_cpyfile_cred(&preq->rq_ind.rq_cpyfile_cred);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_MvJobFile:\n\t\t\tif (preq->rq_ind.rq_jobfile.rq_data)\n\t\t\t\tfree(preq->rq_ind.rq_jobfile.rq_data);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_Cred:\n\t\t\tif (preq->rq_ind.rq_cred.rq_cred_data)\n\t\t\t\tfree(preq->rq_ind.rq_cred.rq_cred_data);\n\t\t\tbreak;\n\n#ifndef PBS_MOM /* Server Only */\n\t\tcase PBS_BATCH_RegisterSched:\n\t\t\tfree(preq->rq_ind.rq_register_sched.rq_name);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_SubmitResv:\n\t\t\tfree_attrlist(&preq->rq_ind.rq_queuejob.rq_attr);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_Manager:\n\t\t\tfreebr_manage(&preq->rq_ind.rq_manager);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_ReleaseJob:\n\t\t\tfreebr_manage(&preq->rq_ind.rq_release);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_Rescq:\n\t\tcase PBS_BATCH_ReserveResc:\n\t\tcase PBS_BATCH_ReleaseResc:\n\t\t\tfree_rescrq(&preq->rq_ind.rq_rescq);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_DefSchReply:\n\t\t\tfree(preq->rq_ind.rq_defrpy.rq_id);\n\t\t\tfree(preq->rq_ind.rq_defrpy.rq_txt);\n\t\t\tbreak;\n\t\tcase PBS_BATCH_SelectJobs:\n\t\tcase PBS_BATCH_SelStat:\n\t\t\tfree_attrlist(&preq->rq_ind.rq_select.rq_selattr);\n\t\t\tfree_attrlist(&preq->rq_ind.rq_select.rq_rtnattr);\n\t\t\tbreak;\n\t\tcase 
PBS_BATCH_PreemptJobs:\n\t\t\tfree(preq->rq_ind.rq_preempt.ppj_list);\n\t\t\tfree(preq->rq_reply.brp_un.brp_preempt_jobs.ppj_list);\n\t\t\tbreak;\n#endif /* PBS_MOM */\n\t}\n\tif (preq->tppcmd_msgid)\n\t\tfree(preq->tppcmd_msgid);\n\t(void) free(preq);\n}\n/**\n * @brief\n * \t\twrapper function for free_attrlist()\n *\n * @param[in]\tpmgr - request manage structure.\n */\nstatic void\nfreebr_manage(struct rq_manage *pmgr)\n{\n\tfree_attrlist(&pmgr->rq_attr);\n}\n/**\n * @brief\n * \t\tremove all rqfpair entries and free their memory\n *\n * @param[in]\tpcf - rq_cpyfile structure whose rq_pair list is to be freed.\n */\nstatic void\nfreebr_cpyfile(struct rq_cpyfile *pcf)\n{\n\tstruct rqfpair *ppair;\n\n\twhile ((ppair = (struct rqfpair *) GET_NEXT(pcf->rq_pair)) != NULL) {\n\t\tdelete_link(&ppair->fp_link);\n\t\tif (ppair->fp_local)\n\t\t\t(void) free(ppair->fp_local);\n\t\tif (ppair->fp_rmt)\n\t\t\t(void) free(ppair->fp_rmt);\n\t\t(void) free(ppair);\n\t}\n}\n/**\n * @brief\n * \t\tremove the list of rqfpair entries along with the encrypted credential.\n *\n * @param[in]\tpcfc - rq_cpyfile_cred structure\n */\nstatic void\nfreebr_cpyfile_cred(struct rq_cpyfile_cred *pcfc)\n{\n\tstruct rqfpair *ppair;\n\n\twhile ((ppair = (struct rqfpair *) GET_NEXT(pcfc->rq_copyfile.rq_pair)) != NULL) {\n\t\tdelete_link(&ppair->fp_link);\n\t\tif (ppair->fp_local)\n\t\t\t(void) free(ppair->fp_local);\n\t\tif (ppair->fp_rmt)\n\t\t\t(void) free(ppair->fp_rmt);\n\t\t(void) free(ppair);\n\t}\n\tif (pcfc->rq_pcred)\n\t\tfree(pcfc->rq_pcred);\n}\n\n/**\n * @brief\n * \t\tObtain the name and port of the server as defined by pbs_conf\n *\n * @param[out] port - Passed through to parse_servername(), not modified here.\n *\n * @return char *\n * @retval NULL - failure\n * @retval !NULL - pointer to server name\n */\nchar *\nget_servername(unsigned int *port)\n{\n\tchar *name = NULL;\n\n\tif (pbs_conf.pbs_primary)\n\t\tname = parse_servername(pbs_conf.pbs_primary, port);\n\telse if 
(pbs_conf.pbs_server_host_name)\n\t\tname = parse_servername(pbs_conf.pbs_server_host_name, port);\n\telse\n\t\tname = parse_servername(pbs_conf.pbs_server_name, port);\n\n\treturn name;\n}\n"
  },
  {
    "path": "src/server/qattr_get_set.c",
    "content": "/*\n * Copyright (C) 1994-2020 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h>\n\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"queue.h\"\n\n/**\n * @brief\tGet attribute of queue based on given attr index\n *\n * @param[in] pq    - pointer to queue struct\n * @param[in] attr_idx - attribute index\n *\n * @return attribute *\n * @retval NULL  - failure\n * @retval !NULL - pointer to attribute struct\n */\nattribute *\nget_qattr(const pbs_queue *pq, int attr_idx)\n{\n\tif (pq != NULL)\n\t\treturn _get_attr_by_idx((attribute *) pq->qu_attr, attr_idx);\n\treturn NULL;\n}\n\n/**\n * @brief\tGetter function for queue attribute of type string\n *\n * @param[in]\tpq - pointer to the queue\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tchar *\n * @retval\tstring value of the attribute\n * @retval\tNULL if pq is NULL\n */\nchar *\nget_qattr_str(const pbs_queue *pq, int attr_idx)\n{\n\tif (pq != NULL)\n\t\treturn get_attr_str(get_qattr(pq, attr_idx));\n\n\treturn NULL;\n}\n\n/**\n * @brief\tGetter function for queue attribute of type array of strings\n *\n * @param[in]\tpq - pointer to the queue\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tstruct array_strings *\n * @retval\tvalue of the attribute\n * @retval\tNULL if pq is NULL\n */\nstruct array_strings *\nget_qattr_arst(const pbs_queue *pq, int attr_idx)\n{\n\tif (pq != 
NULL)\n\t\treturn get_attr_arst(get_qattr(pq, attr_idx));\n\n\treturn NULL;\n}\n\n/**\n * @brief\tGetter for queue attribute's list value\n *\n * @param[in]\tpq - pointer to the queue\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tpbs_list_head\n * @retval\tvalue of attribute\n */\npbs_list_head\nget_qattr_list(const pbs_queue *pq, int attr_idx)\n{\n\treturn get_attr_list(get_qattr(pq, attr_idx));\n}\n\n/**\n * @brief\tGetter function for queue attribute of type long\n *\n * @param[in]\tpq - pointer to the queue\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tlong\n * @retval\tlong value of the attribute\n * @retval\t-1 if pq is NULL\n */\nlong\nget_qattr_long(const pbs_queue *pq, int attr_idx)\n{\n\tif (pq != NULL)\n\t\treturn get_attr_l(get_qattr(pq, attr_idx));\n\n\treturn -1;\n}\n\n/**\n * @brief\tGeneric queue attribute setter (call if you want at_set() action functions to be called)\n *\n * @param[in]\tpq - pointer to queue\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\trscn - new resource val to set, if applicable\n * @param[in]\top - batch_op operation, SET, INCR, DECR etc.\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t!0 for failure\n */\nint\nset_qattr_generic(pbs_queue *pq, int attr_idx, char *val, char *rscn, enum batch_op op)\n{\n\tif (pq == NULL || val == NULL)\n\t\treturn 1;\n\n\treturn set_attr_generic(get_qattr(pq, attr_idx), &que_attr_def[attr_idx], val, rscn, op);\n}\n\n/**\n * @brief\t\"fast\" queue attribute setter for string values\n *\n * @param[in]\tpq - pointer to queue\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\trscn - new resource val to set, if applicable\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t!0 for failure\n */\nint\nset_qattr_str_slim(pbs_queue *pq, int attr_idx, char *val, char *rscn)\n{\n\tif (pq == NULL || val == 
NULL)\n\t\treturn 1;\n\n\treturn set_attr_generic(get_qattr(pq, attr_idx), &que_attr_def[attr_idx], val, rscn, INTERNAL);\n}\n\n/**\n * @brief\t\"fast\" queue attribute setter for long values\n *\n * @param[in]\tpq - pointer to queue\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\top - batch_op operation, SET, INCR, DECR etc.\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t1 for failure\n */\nint\nset_qattr_l_slim(pbs_queue *pq, int attr_idx, long val, enum batch_op op)\n{\n\tif (pq == NULL)\n\t\treturn 1;\n\n\tset_attr_l(get_qattr(pq, attr_idx), val, op);\n\n\treturn 0;\n}\n\n/**\n * @brief\t\"fast\" queue attribute setter for boolean values\n *\n * @param[in]\tpq - pointer to queue\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\top - batch_op operation, SET, INCR, DECR etc.\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t1 for failure\n */\nint\nset_qattr_b_slim(pbs_queue *pq, int attr_idx, long val, enum batch_op op)\n{\n\tif (pq == NULL)\n\t\treturn 1;\n\n\tset_attr_b(get_qattr(pq, attr_idx), val, op);\n\n\treturn 0;\n}\n\n/**\n * @brief\t\"fast\" queue attribute setter for char values\n *\n * @param[in]\tpq - pointer to queue\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\top - batch_op operation, SET, INCR, DECR etc.\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t1 for failure\n */\nint\nset_qattr_c_slim(pbs_queue *pq, int attr_idx, char val, enum batch_op op)\n{\n\tif (pq == NULL)\n\t\treturn 1;\n\n\tset_attr_c(get_qattr(pq, attr_idx), val, op);\n\n\treturn 0;\n}\n\n/**\n * @brief\tCheck if a queue attribute is set\n *\n * @param[in]\tpq - pointer to queue\n * @param[in]\tattr_idx - attribute index to check\n *\n * @return\tint\n * @retval\t1 if it is set\n * @retval\t0 otherwise\n */\nint\nis_qattr_set(const pbs_queue *pq, int attr_idx)\n{\n\tif (pq != 
NULL)\n\t\treturn is_attr_set(get_qattr(pq, attr_idx));\n\n\treturn 0;\n}\n\n/**\n * @brief\tFree a queue attribute\n *\n * @param[in]\tpq - pointer to queue\n * @param[in]\tattr_idx - attribute index to free\n *\n * @return\tvoid\n */\nvoid\nfree_qattr(pbs_queue *pq, int attr_idx)\n{\n\tif (pq != NULL)\n\t\tfree_attr(que_attr_def, get_qattr(pq, attr_idx), attr_idx);\n}\n"
  },
  {
    "path": "src/server/queue_func.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    queue_func.c\n *\n * @brief\n * \t\tqueue_func.c - various functions dealing with queues\n *\n * Included functions are:\n *\tque_alloc()\t- allocate and initialize space for queue structure\n *\tque_free()\t- free queue structure\n *\tque_purge()\t- remove queue from server\n *\tfind_queuebyname() - find a queue with a given name\n #ifdef NAS localmod 075\n *\tfind_resvqueuebyname() - find a reservation queue, given resv name\n #endif localmod 075\n *\tget_dfltque()\t- get default queue\n * \tqstart_action() - determine accrue type for all jobs in queue\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <unistd.h>\n#include <sys/types.h>\n#include <sys/param.h>\n#include <memory.h>\n#include <stdlib.h>\n#include <errno.h>\n#include <string.h>\n#include \"pbs_ifl.h\"\n#include \"list_link.h\"\n#include \"log.h\"\n#include \"attribute.h\"\n#include \"server_limits.h\"\n#include \"server.h\"\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"queue.h\"\n#include \"pbs_error.h\"\n#include \"sched_cmds.h\"\n#include \"pbs_db.h\"\n#include \"pbs_nodes.h\"\n#include \"pbs_sched.h\"\n#include \"pbs_idx.h\"\n\n/* Global Data */\n\nextern char *msg_err_unlink;\nextern struct server server;\nextern pbs_list_head svr_queues;\nextern time_t time_now;\nextern long 
svr_history_enable;\n#ifndef PBS_MOM\nextern void *svr_db_conn;\n#endif\n\n/**\n * @brief\n * \t\tque_alloc - allocate space for a queue structure and initialize\n *\t\tattributes to \"unset\"\n *\n * @param[in]\tname\t- queue name\n *\n * @return\tpbs_queue *\n * @retval\tnull\t- space not available.\n */\n\npbs_queue *\nque_alloc(char *name)\n{\n\tint i;\n\tpbs_queue *pq;\n\n\tpq = (pbs_queue *) calloc(1, sizeof(pbs_queue));\n\tif (pq == NULL) {\n\t\tlog_err(errno, __func__, \"no memory\");\n\t\treturn NULL;\n\t}\n\n\tpq->qu_qs.qu_type = QTYPE_Unset;\n\tpq->newobj = 1;\n\tCLEAR_HEAD(pq->qu_jobs);\n\tCLEAR_LINK(pq->qu_link);\n\n\tsnprintf(pq->qu_qs.qu_name, sizeof(pq->qu_qs.qu_name), \"%s\", name);\n\tif (pbs_idx_insert(queues_idx, pq->qu_qs.qu_name, pq) != PBS_IDX_RET_OK) {\n\t\tlog_eventf(PBSEVENT_ERROR | PBSEVENT_FORCE, PBS_EVENTCLASS_QUEUE, LOG_ERR,\n\t\t\t   \"Failed to add queue in index %s\", pq->qu_qs.qu_name);\n\t\tfree(pq);\n\t\treturn NULL;\n\t}\n\tappend_link(&svr_queues, &pq->qu_link, pq);\n\tserver.sv_qs.sv_numque++;\n\n\t/* set the working attributes to \"unspecified\" */\n\tfor (i = 0; i < (int) QA_ATR_LAST; i++)\n\t\tclear_attr(get_qattr(pq, i), &que_attr_def[i]);\n\n\treturn (pq);\n}\n\n/**\n * @brief\n *\t\tque_free - free queue structure and its various sub-structures\n *\t\tQueue ACL's are regular attributes that are stored in the DB\n *\t\tand not in separate files.\n *\n * @param[in]\tpq\t- The pointer to the queue to free\n *\n */\nvoid\nque_free(pbs_queue *pq)\n{\n\tint i;\n\tkey_value_pair *pkvp = NULL;\n\n\t/* remove any malloc working attribute space */\n\tfor (i = 0; i < (int) QA_ATR_LAST; i++)\n\t\tfree_qattr(pq, i);\n\n\t/* free default chunks set on queue */\n\tpkvp = pq->qu_seldft;\n\tif (pkvp) {\n\t\tfor (i = 0; i < pq->qu_nseldft; ++i) {\n\t\t\tfree((pkvp + i)->kv_keyw);\n\t\t\tfree((pkvp + i)->kv_val);\n\t\t}\n\t\tfree(pkvp);\n\t}\n\n\t/* now free the main structure 
*/\n\tserver.sv_qs.sv_numque--;\n\tdelete_link(&pq->qu_link);\n\tif (pbs_idx_delete(queues_idx, pq->qu_qs.qu_name) != PBS_IDX_RET_OK)\n\t\tlog_eventf(PBSEVENT_ERROR | PBSEVENT_FORCE, PBS_EVENTCLASS_QUEUE, LOG_ERR,\n\t\t\t   \"Failed to delete queue %s from index\", pq->qu_qs.qu_name);\n\t(void) free(pq);\n}\n\n/**\n * @brief\n *\t\tque_purge - purge queue from system\n *\t\tThe queue is dequeued, the queue file is unlinked.\n *\t\tIf the queue contains any jobs, the purge is not allowed.\n *\t\tEventually the queue is deleted from the database\n *\n * @param[in]\tpque\t- The pointer to the queue to purge\n *\n * @return\terror code\n * @retval\t0\t- queue purged or queue not valid\n * @retval\tPBSE_OBJBUSY\t- queue deletion not allowed\n */\nint\nque_purge(pbs_queue *pque)\n{\n\tpbs_db_obj_info_t obj;\n\tpbs_db_que_info_t dbque;\n\tvoid *conn = (void *) svr_db_conn;\n\n\t/*\n\t * If the queue (pque) is not valid, then nothing to\n\t * do, just return 0.\n\t */\n\tif (pque == NULL)\n\t\treturn (0);\n\n\t/* are there any jobs still in the queue */\n\tif (pque->qu_numjobs != 0) {\n\t\t/*\n\t\t * If the queue still has job(s), check if the SERVER\n\t\t * is configured for history info and all the jobs in\n\t\t * queue are history jobs. 
If yes, then allow queue\n\t\t * deletion; otherwise return PBSE_OBJBUSY.\n\t\t */\n\t\tif (svr_history_enable) { /* SVR histconf chk */\n\n\t\t\tjob *pjob = NULL;\n\t\t\tjob *nxpjob = NULL;\n\t\t\tint state_num;\n\n\t\t\tpjob = (job *) GET_NEXT(pque->qu_jobs);\n\t\t\twhile (pjob) {\n\t\t\t\t/*\n\t\t\t\t * If it is not a history job (MOVED/FINISHED), then\n\t\t\t\t * return with PBSE_OBJBUSY error.\n\t\t\t\t */\n\t\t\t\tif ((!check_job_state(pjob, JOB_STATE_LTR_MOVED)) &&\n\t\t\t\t    (!check_job_state(pjob, JOB_STATE_LTR_FINISHED)) &&\n\t\t\t\t    (!check_job_state(pjob, JOB_STATE_LTR_EXPIRED)))\n\t\t\t\t\treturn (PBSE_OBJBUSY);\n\t\t\t\tpjob = (job *) GET_NEXT(pjob->ji_jobque);\n\t\t\t}\n\t\t\t/*\n\t\t\t * All are history jobs, unlink all of them from queue.\n\t\t\t * Update the number of jobs in the queue and their state\n\t\t\t * count as the queue is going to be purged. No job(s)\n\t\t\t * should point to the queue to be purged, so set the queue\n\t\t\t * header pointer of each job (pjob->ji_qhdr) to NULL.\n\t\t\t */\n\t\t\tpjob = (job *) GET_NEXT(pque->qu_jobs);\n\t\t\twhile (pjob) {\n\t\t\t\tnxpjob = (job *) GET_NEXT(pjob->ji_jobque);\n\t\t\t\tdelete_link(&pjob->ji_jobque);\n\t\t\t\t--pque->qu_numjobs;\n\t\t\t\tstate_num = get_job_state_num(pjob);\n\t\t\t\tif (state_num != -1)\n\t\t\t\t\t--pque->qu_njstate[state_num];\n\t\t\t\tpjob->ji_qhdr = NULL;\n\t\t\t\tpjob = nxpjob;\n\t\t\t}\n\t\t} else {\n\t\t\treturn (PBSE_OBJBUSY);\n\t\t}\n\t}\n\n\t/* delete queue from database */\n\tstrcpy(dbque.qu_name, pque->qu_qs.qu_name);\n\tobj.pbs_db_obj_type = PBS_DB_QUEUE;\n\tobj.pbs_db_un.pbs_db_que = &dbque;\n\tif (pbs_db_delete_obj(conn, &obj) != 0) {\n\t\t(void) sprintf(log_buffer,\n\t\t\t       \"delete of que %s from datastore failed\",\n\t\t\t       pque->qu_qs.qu_name);\n\t\tlog_err(errno, \"queue_purge\", log_buffer);\n\t}\n\tque_free(pque);\n\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tfind_queuebyname() - find a queue by its name\n *\n * @param[in]\tquename\t- queue name\n 
*\n * @return\tpbs_queue *\n */\n\npbs_queue *\nfind_queuebyname(char *quename)\n{\n\tchar *at;\n\tpbs_queue *pque = NULL;\n\tint rc = PBS_IDX_RET_FAIL;\n\n\tif (quename == NULL || quename[0] == '\\0')\n\t\treturn NULL;\n\n\tat = strchr(quename, (int) '@'); /* strip off server (fragment) */\n\tif (at)\n\t\t*at = '\\0';\n\n\trc = pbs_idx_find(queues_idx, (void **) &quename, (void **) &pque, NULL);\n\tif (at)\n\t\t*at = '@'; /* restore '@' server portion */\n\tif (rc == PBS_IDX_RET_OK)\n\t\treturn pque;\n\treturn NULL;\n}\n\n#ifdef NAS /* localmod 075 */\n/**\n * @brief\n * \t\tfind_resvqueuebyname() - find a queue by the name of its reservation\n *\n * @param[in]\tquename\t- queue name.\n *\n * @return\tpbs_queue *\n */\npbs_queue *\nfind_resvqueuebyname(char *quename)\n{\n\tchar *pc;\n\tpbs_queue *pque;\n\tchar qname[PBS_MAXDEST + 1];\n\n\t(void) strncpy(qname, quename, PBS_MAXDEST);\n\tqname[PBS_MAXDEST] = '\\0';\n\tpc = strchr(qname, (int) '@'); /* strip off server (fragment) */\n\tif (pc)\n\t\t*pc = '\\0';\n\tfor (pque = (pbs_queue *) GET_NEXT(svr_queues);\n\t     pque != NULL; pque = (pbs_queue *) GET_NEXT(pque->qu_link)) {\n\t\tif (pque->qu_resvp != NULL && (strcmp(qname, get_rattr_str(pque->qu_resvp, RESV_ATR_resv_name)) == 0))\n\t\t\tbreak;\n\t}\n\tif (pc)\n\t\t*pc = '@'; /* restore '@' server portion */\n\treturn (pque);\n}\n#endif /* localmod 075 */\n\n/**\n * @brief\n * \tfind_resv() - find reservation resc_resv struct by its ID or queue name\n *\n *\tSearch list of all server resc_resv structs for one with given\n *\treservation id or reservation queue name\n *\n * @param[in]\tid_or_quename - reservation ID or queue name\n *\n * @return\tpointer to resc_resv struct\n * @retval\tNULL\t- not found\n */\nresc_resv *\nfind_resv(char *id_or_quename)\n{\n\tchar *dot = NULL;\n\tresc_resv *presv = NULL;\n\tvoid *prid;\n\n\tif (id_or_quename == NULL || id_or_quename[0] == '\\0')\n\t\treturn NULL;\n\n\tif ((dot = strchr(id_or_quename, (int) '.')) != 0)\n\t\t*dot = 
'\\0';\n\n\tprid = id_or_quename + 1; /* ignore first char, as index key doesn't have it */\n\tif (pbs_idx_find(resvs_idx, &prid, (void **) &presv, NULL) != PBS_IDX_RET_OK) {\n\t\tif (dot)\n\t\t\t*dot = '.';\n\t\treturn NULL;\n\t}\n\tif (dot)\n\t\t*dot = '.';\n\treturn presv;\n}\n\n/**\n * @brief\n * \t\tget_dfltque - get the default queue (if declared)\n *\n * @return\tpbs_queue *\n */\n\npbs_queue *\nget_dfltque(void)\n{\n\tpbs_queue *pq = NULL;\n\n\tif (is_sattr_set(SVR_ATR_dflt_que))\n\t\tpq = find_queuebyname(get_sattr_str(SVR_ATR_dflt_que));\n\treturn (pq);\n}\n\n/**\n * @brief\n * \t\tqueuestart_action - when a queue is stopped or started,\n *\t\twalk all jobs in the queue and determine their accrue type.\n * \t\tAction function for QA_ATR_started.\n *\n * @param[in]\tpattr\t- pointer to the queue's QA_ATR_started attribute\n * @param[in]\tpobject\t- queue which is stopped or started\n * @param[in]\tactmode\t- not used.\n *\n * @return\tint\n * @retval\t0\t- success\n */\nint\nqueuestart_action(attribute *pattr, void *pobject, int actmode)\n{\n\tjob *pj; /* pointer to job */\n\tlong oldtype;\n\tlong newaccruetype = -1; /* if determining accrue type */\n\tpbs_queue *pque = (pbs_queue *) pobject;\n\tpbs_sched *psched;\n\n\tif (pque != NULL && get_sattr_long(SVR_ATR_EligibleTimeEnable) == 1) {\n\n\t\tif (pattr->at_val.at_long == 0) { /* started = OFF */\n\t\t\t/* queue stopped, start accruing eligible time */\n\t\t\t/* running jobs and jobs accruing ineligible time are exempted */\n\t\t\t/* jobs accruing eligible time are also exempted */\n\n\t\t\tpj = (job *) GET_NEXT(pque->qu_jobs);\n\n\t\t\twhile (pj != NULL) {\n\n\t\t\t\toldtype = get_jattr_long(pj, JOB_ATR_accrue_type);\n\n\t\t\t\tif (oldtype != JOB_RUNNING && oldtype != JOB_INELIGIBLE &&\n\t\t\t\t    oldtype != JOB_ELIGIBLE) {\n\n\t\t\t\t\t/* determination of accruetype not required here */\n\t\t\t\t\t(void) update_eligible_time(JOB_ELIGIBLE, pj);\n\t\t\t\t}\n\n\t\t\t\tpj = (job *) 
GET_NEXT(pj->ji_jobque);\n\t\t\t}\n\n\t\t} else { /* started = ON */\n\t\t\t/* determine accrue type and accrue time */\n\n\t\t\tpj = (job *) GET_NEXT(pque->qu_jobs);\n\n\t\t\twhile (pj != NULL) {\n\n\t\t\t\toldtype = get_jattr_long(pj, JOB_ATR_accrue_type);\n\n\t\t\t\tif (oldtype != JOB_RUNNING && oldtype != JOB_INELIGIBLE &&\n\t\t\t\t    oldtype != JOB_ELIGIBLE) {\n\n\t\t\t\t\tnewaccruetype = determine_accruetype(pj);\n\t\t\t\t\tupdate_eligible_time(newaccruetype, pj);\n\t\t\t\t}\n\n\t\t\t\tpj = (job *) GET_NEXT(pj->ji_jobque);\n\t\t\t}\n\n\t\t\t/* if scheduling = True, notify scheduler to start */\n\t\t\tif (get_sattr_long(SVR_ATR_scheduling)) {\n\t\t\t\tif (find_assoc_sched_pque(pque, &psched))\n\t\t\t\t\tset_scheduler_flag(SCH_SCHEDULE_STARTQ, psched);\n\t\t\t\telse {\n\t\t\t\t\tsprintf(log_buffer, \"No scheduler associated with the partition %s\", get_qattr_str(pque, QA_ATR_partition));\n\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\taction routine for the queue's \"partition\" attribute\n *\n * @param[in]\tpattr\t-\tattribute being set\n * @param[in]\tpobj\t-\tObject on which attribute is being set\n * @param[in]\tactmode\t-\tthe mode of setting, recovery or just alter\n *\n * @return\terror code\n * @retval\tPBSE_NONE\t-\tSuccess\n * @retval\t!PBSE_NONE\t-\tFailure\n *\n */\nint\naction_queue_partition(attribute *pattr, void *pobj, int actmode)\n{\n\tint i;\n\n\tif (actmode == ATR_ACTION_RECOV)\n\t\treturn PBSE_NONE;\n\n\tif (((pbs_queue *) pobj)->qu_qs.qu_type == QTYPE_RoutePush)\n\t\treturn PBSE_ROUTE_QUE_NO_PARTITION;\n\n\tif (strcmp(pattr->at_val.at_str, DEFAULT_PARTITION) == 0)\n\t\treturn PBSE_DEFAULT_PARTITION;\n\n\tfor (i = 0; i < svr_totnodes; i++) {\n\t\tif (pbsndlist[i]->nd_pque) {\n\t\t\tif (strcmp(pbsndlist[i]->nd_pque->qu_qs.qu_name, ((pbs_queue *) pobj)->qu_qs.qu_name) == 0) {\n\t\t\t\tif (is_nattr_set(pbsndlist[i], ND_ATR_partition) && (pattr->at_flags) & 
ATR_VFLAG_SET)\n\t\t\t\t\tif (strcmp(get_nattr_str(pbsndlist[i], ND_ATR_partition), pattr->at_val.at_str))\n\t\t\t\t\t\treturn PBSE_INVALID_PARTITION_QUE;\n\t\t\t}\n\t\t}\n\t}\n\n\treturn PBSE_NONE;\n}\n"
  },
  {
    "path": "src/server/queue_recov_db.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    queue_recov_db.c\n *\n * @brief\n *\t\tqueue_recov_db.c - This file contains the functions to record a queue\n *\t\tdata structure to database and to recover it from database.\n *\n *\t\tThe data is recorded in the database\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/types.h>\n#include <sys/param.h>\n#include \"pbs_ifl.h\"\n#include <errno.h>\n#include <fcntl.h>\n#include <stdlib.h>\n#include <stdio.h>\n#include <string.h>\n#include <unistd.h>\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"queue.h\"\n#include \"log.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"pbs_db.h\"\n\n#ifndef PBS_MOM\nextern void *svr_db_conn;\nextern char *msg_init_recovque;\n#endif\n\npbs_queue *recov_queue_cb(pbs_db_obj_info_t *dbobj, int *refreshed);\n\n/**\n * @brief\n *\t\tconvert queue structure to DB format\n *\n * @param[in]\tpque\t- Address of the queue in the server\n * @param[out]\tpdbque  - Address of the database queue object\n *\n * @retval   -1  Failure\n * @retval\t>=0 What to save: 0=nothing, OBJ_SAVE_NEW or OBJ_SAVE_QS\n */\nstatic int\nque_to_db(pbs_queue *pque, pbs_db_que_info_t *pdbque)\n{\n\tint savetype = 0;\n\n\tstrcpy(pdbque->qu_name, 
pque->qu_qs.qu_name);\n\tpdbque->qu_type = pque->qu_qs.qu_type;\n\n\tif ((encode_attr_db(que_attr_def, pque->qu_attr, (int) QA_ATR_LAST, &pdbque->db_attr_list, 0)) != 0)\n\t\treturn -1;\n\n\tif (pque->newobj) /* object was never saved or loaded before */\n\t\tsavetype |= (OBJ_SAVE_NEW | OBJ_SAVE_QS);\n\n\tif (compare_obj_hash(&pque->qu_qs, sizeof(pque->qu_qs), pque->qs_hash) == 1) {\n\t\tsavetype |= OBJ_SAVE_QS;\n\t\tpdbque->qu_type = pque->qu_qs.qu_type;\n\t}\n\n\treturn savetype;\n}\n\n/**\n * @brief\n *\t\tconvert from database to queue structure\n *\n * @param[out]\tpque\t- Address of the queue in the server\n * @param[in]\tpdbque\t- Address of the database queue object\n *\n *@return 0      Success\n *@return !=0    Failure\n */\nstatic int\ndb_to_que(pbs_queue *pque, pbs_db_que_info_t *pdbque)\n{\n\tstrcpy(pque->qu_qs.qu_name, pdbque->qu_name);\n\tpque->qu_qs.qu_type = pdbque->qu_type;\n\n\tif ((decode_attr_db(pque, &pdbque->db_attr_list.attrs, que_attr_idx, que_attr_def, pque->qu_attr, QA_ATR_LAST, 0)) != 0)\n\t\treturn -1;\n\n\tcompare_obj_hash(&pque->qu_qs, sizeof(pque->qu_qs), pque->qs_hash);\n\n\tpque->newobj = 0;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tSave a queue to the database\n *\n * @param[in]\tpque  - Pointer to the queue to save\n *\n * @return      Error code\n * @retval\t0 - Success\n * @retval\t1 - Failure\n *\n */\nint\nque_save_db(pbs_queue *pque)\n{\n\tpbs_db_que_info_t dbque = {{0}};\n\tpbs_db_obj_info_t obj;\n\tvoid *conn = (void *) svr_db_conn;\n\tchar *conn_db_err = NULL;\n\tint savetype;\n\tint rc = -1;\n\n\tif ((savetype = que_to_db(pque, &dbque)) == -1)\n\t\tgoto done;\n\n\tobj.pbs_db_obj_type = PBS_DB_QUEUE;\n\tobj.pbs_db_un.pbs_db_que = &dbque;\n\n\tif ((rc = pbs_db_save_obj(conn, &obj, savetype)) == 0)\n\t\tpque->newobj = 0;\n\ndone:\n\tfree_db_attr_list(&dbque.db_attr_list);\n\n\tif (rc != 0) {\n\t\tpbs_db_get_errmsg(PBS_DB_ERR, &conn_db_err);\n\t\tlog_errf(PBSE_INTERNAL, __func__, \"Failed to save queue %s %s\", 
pque->qu_qs.qu_name, conn_db_err ? conn_db_err : \"\");\n\t\tfree(conn_db_err);\n\t\tpanic_stop_db();\n\t}\n\treturn rc;\n}\n\n/**\n * @brief\n *\t\tRecover a queue from the database\n *\n * @param[in]\tqname\t- Name of the queue to recover\n * @param[out]  pq - Queue pointer, if any, to be updated\n *\n * @return\tThe recovered queue structure\n * @retval\tNULL\t- Failure\n * @retval\t!NULL\t- Success - address of recovered queue returned\n *\n */\npbs_queue *\nque_recov_db(char *qname, pbs_queue *pq)\n{\n\tpbs_queue *pque = NULL;\n\tpbs_db_que_info_t dbque = {{0}};\n\tpbs_db_obj_info_t obj;\n\tvoid *conn = (void *) svr_db_conn;\n\tint rc = -1;\n\tchar *conn_db_err = NULL;\n\n\tif (!pq) {\n\t\tif ((pque = que_alloc(qname)) == NULL) {\n\t\t\tlog_err(-1, __func__, \"que_alloc failed\");\n\t\t\treturn NULL;\n\t\t}\n\t\tpq = pque;\n\t}\n\n\tstrcpy(dbque.qu_name, qname);\n\tobj.pbs_db_obj_type = PBS_DB_QUEUE;\n\tobj.pbs_db_un.pbs_db_que = &dbque;\n\n\trc = pbs_db_load_obj(conn, &obj);\n\tif (rc == -2)\n\t\treturn pq; /* no change in que, return the same pq */\n\n\tif (rc == 0)\n\t\trc = db_to_que(pq, &dbque);\n\telse {\n\t\tpbs_db_get_errmsg(PBS_DB_ERR, &conn_db_err);\n\t\tlog_errf(PBSE_INTERNAL, __func__, \"Failed to load queue %s, %s\", qname, conn_db_err ? 
conn_db_err : \"\");\n\t\tfree(conn_db_err);\n\t}\n\n\tfree_db_attr_list(&dbque.db_attr_list);\n\n\tif (rc != 0) {\n\t\tpq = NULL; /* so we return NULL */\n\n\t\tif (pque)\n\t\t\tque_free(pque); /* free if we allocated here */\n\t}\n\treturn pq;\n}\n\n/**\n * @brief\n *\tRefresh/retrieve queue from database and add it into AVL tree if not present\n *\n *\t@param[in]\tdbobj - The pointer to the wrapper queue object of type pbs_db_que_info_t\n *\t@param[out]\trefreshed - To check if queues refreshed\n *\n * @return\tThe recovered queue\n * @retval\tNULL - Failure\n * @retval\t!NULL - Success, pointer to queue structure recovered\n *\n */\npbs_queue *\nrecov_queue_cb(pbs_db_obj_info_t *dbobj, int *refreshed)\n{\n\tpbs_queue *pque = NULL;\n\tpbs_db_que_info_t *dbque = dbobj->pbs_db_un.pbs_db_que;\n\n\t*refreshed = 0;\n\tif ((pque = que_recov_db(dbque->qu_name, NULL)) != NULL) {\n\t\t/* que_recov increments sv_numque */\n\t\tlog_eventf(PBSEVENT_SYSTEM | PBSEVENT_ADMIN | PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER, LOG_INFO, msg_daemonname, msg_init_recovque, pque->qu_qs.qu_name);\n\t\t*refreshed = 1;\n\t}\n\n\tfree_db_attr_list(&dbque->db_attr_list);\n\tif (pque == NULL)\n\t\tlog_errf(PBSE_INTERNAL, __func__, \"Failed to refresh queue %s\", dbque->qu_name);\n\treturn pque;\n}\n"
  },
  {
    "path": "src/server/rattr_get_set.c",
    "content": "/*\n * Copyright (C) 1994-2020 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h>\n\n#include \"attribute.h\"\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"resv_node.h\"\n\n/**\n * @brief\tGet attribute of reservation based on given attr index\n *\n * @param[in] presv    - pointer to reservation struct\n * @param[in] attr_idx - attribute index\n *\n * @return attribute *\n * @retval NULL  - failure\n * @retval !NULL - pointer to attribute struct\n */\nattribute *\nget_rattr(const resc_resv *presv, int attr_idx)\n{\n\tif (presv != NULL)\n\t\treturn _get_attr_by_idx((attribute *) presv->ri_wattr, attr_idx);\n\treturn NULL;\n}\n\n/**\n * @brief\tGetter function for reservation attribute of type string\n *\n * @param[in]\tpresv - pointer to the reservation\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tchar *\n * @retval\tstring value of the attribute\n * @retval\tNULL if presv is NULL\n */\nchar *\nget_rattr_str(const resc_resv *presv, int attr_idx)\n{\n\tif (presv != NULL)\n\t\treturn get_attr_str(get_rattr(presv, attr_idx));\n\n\treturn NULL;\n}\n\n/**\n * @brief\tGetter function for reservation attribute of type array of strings\n *\n * @param[in]\tpresv - pointer to the reservation\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tstruct array_strings *\n * @retval\tvalue of the attribute\n * @retval\tNULL if presv is NULL\n */\nstruct 
array_strings *\nget_rattr_arst(const resc_resv *presv, int attr_idx)\n{\n\tif (presv != NULL)\n\t\treturn get_attr_arst(get_rattr(presv, attr_idx));\n\n\treturn NULL;\n}\n\n/**\n * @brief\tGetter for reservation attribute's list value\n *\n * @param[in]\tpresv - pointer to the reservation\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tpbs_list_head\n * @retval\tvalue of attribute\n */\npbs_list_head\nget_rattr_list(const resc_resv *presv, int attr_idx)\n{\n\treturn get_attr_list(get_rattr(presv, attr_idx));\n}\n\n/**\n * @brief\tGetter function for reservation attribute of type long\n *\n * @param[in]\tpresv - pointer to the reservation\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tlong\n * @retval\tlong value of the attribute\n * @retval\t-1 if presv is NULL\n */\nlong\nget_rattr_long(const resc_resv *presv, int attr_idx)\n{\n\tif (presv != NULL)\n\t\treturn get_attr_l(get_rattr(presv, attr_idx));\n\n\treturn -1;\n}\n\n/**\n * @brief\tGeneric reservation attribute setter (call if you want at_set() action functions to be called)\n *\n * @param[in]\tpresv - pointer to reservation\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\trscn - new resource val to set, if applicable\n * @param[in]\top - batch_op operation, SET, INCR, DECR etc.\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t!0 for failure\n */\nint\nset_rattr_generic(resc_resv *presv, int attr_idx, char *val, char *rscn, enum batch_op op)\n{\n\tif (presv == NULL || val == NULL)\n\t\treturn 1;\n\n\treturn set_attr_generic(get_rattr(presv, attr_idx), &resv_attr_def[attr_idx], val, rscn, op);\n}\n\n/**\n * @brief\t\"fast\" reservation attribute setter for string values\n *\n * @param[in]\tpresv - pointer to reservation\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\trscn - new resource val to set, if applicable\n *\n * 
@return\tint\n * @retval\t0 for success\n * @retval\t!0 for failure\n */\nint\nset_rattr_str_slim(resc_resv *presv, int attr_idx, char *val, char *rscn)\n{\n\tif (presv == NULL || val == NULL)\n\t\treturn 1;\n\n\treturn set_attr_generic(get_rattr(presv, attr_idx), &resv_attr_def[attr_idx], val, rscn, INTERNAL);\n}\n\n/**\n * @brief\t\"fast\" reservation attribute setter for long values\n *\n * @param[in]\tpresv - pointer to reservation\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\top - batch_op operation, SET, INCR, DECR etc.\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t1 for failure\n */\nint\nset_rattr_l_slim(resc_resv *presv, int attr_idx, long val, enum batch_op op)\n{\n\tif (presv == NULL)\n\t\treturn 1;\n\n\tset_attr_l(get_rattr(presv, attr_idx), val, op);\n\n\treturn 0;\n}\n\n/**\n * @brief\t\"fast\" reservation attribute setter for boolean values\n *\n * @param[in]\tpresv - pointer to reservation\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\top - batch_op operation, SET, INCR, DECR etc.\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t1 for failure\n */\nint\nset_rattr_b_slim(resc_resv *presv, int attr_idx, long val, enum batch_op op)\n{\n\tif (presv == NULL)\n\t\treturn 1;\n\n\tset_attr_b(get_rattr(presv, attr_idx), val, op);\n\n\treturn 0;\n}\n\n/**\n * @brief\t\"fast\" reservation attribute setter for char values\n *\n * @param[in]\tpresv - pointer to reservation\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\top - batch_op operation, SET, INCR, DECR etc.\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t1 for failure\n */\nint\nset_rattr_c_slim(resc_resv *presv, int attr_idx, char val, enum batch_op op)\n{\n\tif (presv == NULL)\n\t\treturn 1;\n\n\tset_attr_c(get_rattr(presv, attr_idx), val, op);\n\n\treturn 0;\n}\n\n/**\n * @brief\tCheck if a 
reservation attribute is set\n *\n * @param[in]\tpresv - pointer to reservation\n * @param[in]\tattr_idx - attribute index to check\n *\n * @return\tint\n * @retval\t1 if it is set\n * @retval\t0 otherwise\n */\nint\nis_rattr_set(const resc_resv *presv, int attr_idx)\n{\n\tif (presv != NULL)\n\t\treturn is_attr_set(get_rattr(presv, attr_idx));\n\n\treturn 0;\n}\n\n/**\n * @brief\tFree a reservation attribute\n *\n * @param[in]\tpresv - pointer to reservation\n * @param[in]\tattr_idx - attribute index to free\n *\n * @return\tvoid\n */\nvoid\nfree_rattr(resc_resv *presv, int attr_idx)\n{\n\tif (presv != NULL)\n\t\tfree_attr(resv_attr_def, get_rattr(presv, attr_idx), attr_idx);\n}\n\n/**\n * @brief\tclear a reservation attribute\n *\n * @param[in]\tpresv - pointer to reservation\n * @param[in]\tattr_idx - attribute index to clear\n *\n * @return\tvoid\n */\nvoid\nclear_rattr(resc_resv *presv, int attr_idx)\n{\n\tclear_attr(get_rattr(presv, attr_idx), &resv_attr_def[attr_idx]);\n}\n"
  },
  {
    "path": "src/server/reply_send.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    reply_send.c\n *\n * @brief\n * \t\tThis file contains the routines used to send a reply to a client following\n * \t\tthe processing of a request.  
The following routines are provided here:\n *\n *\treply_send()  - the main routine, used by all reply senders\n *\treply_ack()   - send a basic no error acknowledgement\n *\treq_reject()  - send a basic error return\n *\treply_text()  - send a return with a supplied text string\n *\treply_jobid() - used by several requests where the job id must be sent\n *\treply_free()  - free the substructure that might hang from a reply\n *\tset_err_msg() - set a message relating to the error \"code\"\n *\tdis_reply_write()\t- reply is sent to a remote client\n *\treply_badattr()\t- Create a reject (error) reply for a request including the name of the bad attribute/resource.\n *\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <string.h>\n#include <errno.h>\n#include <sys/types.h>\n#include <signal.h>\n#include \"libpbs.h\"\n#include \"dis.h\"\n#include \"log.h\"\n#include \"pbs_error.h\"\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"net_connect.h\"\n#include \"attribute.h\"\n#include \"credential.h\"\n#include \"batch_request.h\"\n#include \"work_task.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"tpp.h\"\n\n/* External Globals */\n\nextern char *msg_daemonname;\nextern char *msg_system;\n\n#ifndef PBS_MOM\nextern pbs_list_head task_list_event;\nextern pbs_list_head task_list_immed;\nextern char *resc_in_err;\n#endif /* PBS_MOM */\n\n#ifndef WIN32\nextern volatile int reply_timedout; /* global to notify DIS routines reply took too long */\n#endif\n#define ERR_MSG_SIZE 256\n\n/**\n * @brief\n * \t\tset a message relating to the error \"code\"\n *\n * @param code[in] - the specific error\n * @param msgbuf[out] - the buffer into which the message is placed\n * @param msglen[in] - length of above buffer\n */\nstatic void\nset_err_msg(int code, char *msgbuf, size_t msglen)\n{\n\tchar *msg = NULL;\n\tchar *msg_tmp;\n\n\t/* subtract 1 from buffer length to insure room for null 
terminator */\n\t--msglen;\n\n\t/* see if there is an error message associated with the code */\n\n\t*msgbuf = '\\0';\n\tif (code == PBSE_SYSTEM) {\n\t\tstrncpy(msgbuf, msg_daemonname, msglen - 2);\n\t\tstrcat(msgbuf, \": \");\n\t\tstrncat(msgbuf, msg_system, msglen - strlen(msgbuf));\n\t\tmsg_tmp = strerror(errno);\n\n\t\tif (msg_tmp)\n\t\t\tstrncat(msgbuf, msg_tmp, msglen - strlen(msgbuf));\n\t\telse\n\t\t\tstrncat(msgbuf, \"Unknown error\", msglen - strlen(msgbuf));\n\n#ifndef PBS_MOM\n\t} else if ((\n\t\t\t   (code == PBSE_UNKRESC) ||\n\t\t\t   (code == PBSE_BADATVAL) ||\n\t\t\t   (code == PBSE_INVALSELECTRESC) ||\n\t\t\t   (code == PBSE_INVALJOBRESC) ||\n\t\t\t   (code == PBSE_INVALNODEPLACE) ||\n\t\t\t   (code == PBSE_DUPRESC) ||\n\t\t\t   (code == PBSE_INDIRECTHOP) ||\n\t\t\t   (code == PBSE_SAVE_ERR) ||\n\t\t\t   (code == PBSE_INDIRECTBT)) &&\n\t\t   (resc_in_err != NULL)) {\n\t\tstrncpy(msgbuf, pbse_to_txt(code), msglen - 2);\n\t\t/* -2 is to make sure there is room for a colon and a space */\n\t\tstrcat(msgbuf, \": \");\n\t\tstrncat(msgbuf, resc_in_err, msglen - strlen(msgbuf));\n\t\tfree(resc_in_err);\n\t\tresc_in_err = NULL;\n\t\tmsg = NULL;\n#endif\n\t} else if (code > PBSE_) {\n\t\tmsg = pbse_to_txt(code);\n\n\t} else {\n\t\tmsg = strerror(code);\n\t}\n\n\tif (msg) {\n\t\t(void) strncpy(msgbuf, msg, msglen);\n\t}\n\tmsgbuf[msglen] = '\\0';\n}\n#ifndef WIN32\n/**\n * @brief\n * \t\tSIGALRM signal handler for dis_reply_write\n *\n * Set the volatile global variable reply_timedout\n * Record about the timeout in TCP reply.\n *\n * @param[in]\tsig -  signal number\n *\n * @return\treturn void\n */\nvoid\nreply_alarm(int sig)\n{\n\treply_timedout = 1;\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_REQUEST, LOG_WARNING,\n\t\t  \"dis_reply_write\", \"timeout attempting to send TCP reply\");\n}\n#endif\n\n/**\n * @brief\n * \t\treply is to be sent to a remote client\n *\n * @param[in]\tsfds - connection socket\n * @param[in]\tpreq - batch_request which contains 
the reply for the request\n *\n * @return\treturn code\n */\nstatic int\ndis_reply_write(int sfds, struct batch_request *preq)\n{\n\tint rc;\n\tstruct batch_reply *preply = &preq->rq_reply;\n#ifndef WIN32\n\tstruct sigaction act, oact;\n\ttime_t old_tcp_timeout = pbs_tcp_timeout;\n#endif\n\n\tif (preq->prot == PROT_TPP) {\n\t\trc = encode_DIS_replyTPP(sfds, preq->tppcmd_msgid, preply);\n\t} else {\n#ifndef WIN32\n\t\treply_timedout = 0;\n\t\t/* set alarm to interrupt poll() etc. while flushing out data */\n\t\tsigemptyset(&act.sa_mask);\n\t\tact.sa_flags = 0;\n\t\tact.sa_handler = reply_alarm;\n\t\tif (sigaction(SIGALRM, &act, &oact) == -1)\n\t\t\treturn (PBS_NET_RC_RETRY);\n\t\talarm(PBS_DIS_TCP_TIMEOUT_REPLY);\n\t\tpbs_tcp_timeout = PBS_DIS_TCP_TIMEOUT_REPLY;\n#endif\n\t\t/*\n\t\t * clear pbs_tcp_errno - set on error in dis_flush when called\n\t\t * either in encode_DIS_reply() or directly below.\n\t\t */\n\t\tpbs_tcp_errno = 0;\n\t\tDIS_tcp_funcs(); /* setup for DIS over tcp */\n\n\t\trc = encode_DIS_reply(sfds, preply);\n\t}\n\n\tif (rc == 0) {\n\t\trc = dis_flush(sfds);\n\t}\n\n#ifndef WIN32\n\treply_timedout = 0; /* Resetting the value for next tcp connection */\n\tif (preq->prot == PROT_TCP) {\n\t\talarm(0);\n\t\t(void) sigaction(SIGALRM, &oact, NULL); /* reset handler for SIGALRM */\n\t}\n\tpbs_tcp_timeout = old_tcp_timeout;\n#endif\n\tif (rc) {\n\t\tchar hn[PBS_MAXHOSTNAME + 1];\n\n\t\tif (get_connecthost(sfds, hn, PBS_MAXHOSTNAME) == -1)\n\t\t\tstrcpy(hn, \"??\");\n\t\t(void) sprintf(log_buffer, \"DIS reply failure, %d, to host %s, errno=%d\", rc, hn, pbs_tcp_errno);\n\t\t/* if EAGAIN - then write was blocked and timed-out, note it */\n\t\tif (pbs_tcp_errno == EAGAIN)\n\t\t\tstrcat(log_buffer, \" write timed out\");\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_REQUEST, LOG_WARNING,\n\t\t\t  \"dis_reply_write\", log_buffer);\n\t\tclose_client(sfds);\n\t}\n\treturn rc;\n}\n\nint\nreply_send_status_part(struct batch_request *preq)\n{\n\tint rc = 
PBSE_SYSTEM;\n\tif (preq->rq_conn >= 0) {\n\t\tstruct batch_reply *preply = &preq->rq_reply;\n\t\tpreply->brp_is_part = 1;\n\t\trc = dis_reply_write(preq->rq_conn, preq);\n\t\tif (rc != PBSE_NONE)\n\t\t\treturn rc;\n\t\treply_free(&preq->rq_reply);\n\t\tpreply->brp_choice = BATCH_REPLY_CHOICE_Status;\n\t\tCLEAR_HEAD(preply->brp_un.brp_status);\n\t\tpreply->brp_count = 0;\n\t}\n\treturn rc;\n}\n\n/**\n * @brief\n * \t\tSend a reply to a batch request, reply either goes to a\n * \t\tremote client over the network:\n *\t\tEncode the reply to a \"presentation element\",\n *\t\tallocate the presentation stream and attach to socket,\n *\t\twrite out reply, and free ps, pe, and isoreply structures.\n * \t\tOr the reply is for a request from the local server:\n *\t\tlocate the work task associated with the request and dispatch it\n *\n * @par Side-effects:\n *\t\tThe request (and reply) structures are freed.\n *\n * @param[in]\trequest\t- batch request\n *\n * @return\terror code\n * @retval\t0\t- success\n * @retval\t!=0\t- failure\n */\nint\nreply_send(struct batch_request *request)\n{\n#ifndef PBS_MOM\n\tstruct work_task *ptask;\n#endif /* PBS_MOM */\n\tint rc = 0;\n\tint sfds; /* socket */\n\tint rq_type;\n\n\tif (request == NULL)\n\t\treturn 0;\n\n\tsfds = request->rq_conn;\n\trq_type = request->rq_type;\n\n\tif (rq_type == PBS_BATCH_ModifyJob_Async || rq_type == PBS_BATCH_AsyrunJob) {\n\t\tfree_br(request);\n\t\treturn 0;\n\t}\n\n\trequest->rq_reply.brp_is_part = 0;\n\n\t/* if this is a child request, just move the error to the parent */\n\tif (request->rq_parentbr) {\n\t\tif (((request->rq_parentbr->rq_reply.brp_choice == BATCH_REPLY_CHOICE_NULL) || (request->rq_parentbr->rq_reply.brp_choice == BATCH_REPLY_CHOICE_Delete)) && (request->rq_parentbr->rq_reply.brp_code == 0)) {\n\t\t\trequest->rq_parentbr->rq_reply.brp_code = request->rq_reply.brp_code;\n\t\t\trequest->rq_parentbr->rq_reply.brp_auxcode = request->rq_reply.brp_auxcode;\n\t\t\tif (request->rq_type == 
PBS_BATCH_DeleteJobList) {\n\t\t\t\trequest->rq_parentbr->rq_reply.brp_count = request->rq_reply.brp_count;\n\t\t\t\tpbs_delstatfree(request->rq_parentbr->rq_reply.brp_un.brp_deletejoblist.brp_delstatc);\n\t\t\t\trequest->rq_parentbr->rq_reply.brp_un.brp_deletejoblist.brp_delstatc = request->rq_reply.brp_un.brp_deletejoblist.brp_delstatc;\n\t\t\t}\n\t\t\tif (request->rq_reply.brp_choice == BATCH_REPLY_CHOICE_Text) {\n\t\t\t\trequest->rq_parentbr->rq_reply.brp_choice =\n\t\t\t\t\trequest->rq_reply.brp_choice;\n\t\t\t\trequest->rq_parentbr->rq_reply.brp_un.brp_txt.brp_txtlen = request->rq_reply.brp_un.brp_txt.brp_txtlen;\n\t\t\t\trequest->rq_parentbr->rq_reply.brp_un.brp_txt.brp_str =\n\t\t\t\t\tstrdup(request->rq_reply.brp_un.brp_txt.brp_str);\n\t\t\t\tif (request->rq_parentbr->rq_reply.brp_un.brp_txt.brp_str == NULL) {\n\t\t\t\t\tlog_err(-1, \"reply_send\", \"Unable to allocate Memory!\\n\");\n\t\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t} else if (request->rq_refct > 0) {\n\t\t/* waiting on sister (subjob) requests, will send when */\n\t\t/* last one decrements the reference count to zero    */\n\t\treturn 0;\n\t} else if (sfds == PBS_LOCAL_CONNECTION) {\n\n#ifndef PBS_MOM\n\t\t/*\n\t\t * reply stays local, find work task and move it to\n\t\t * the immediate list for dispatching.\n\t\t *\n\t\t * Special Note: In this instance the function that ultimately\n\t\t * gets dispatched by the work_task entry has the RESPONSIBILITY\n\t\t * for freeing the batch_request structure.\n\t\t */\n\n\t\tptask = find_work_task(WORK_Deferred_Local, request, NULL);\n\t\treturn convert_work_task(ptask, WORK_Immed);\n\n\t\t/* Uh Oh, should have found a task and didn't */\n\n\t\tlog_err(-1, __func__, \"did not find work task for local request\");\n#endif /* PBS_MOM */\n\t\trc = PBSE_SYSTEM;\n\n\t} else if (sfds >= 0) {\n\n\t\t/*\n\t\t * Otherwise, the reply is to be sent to a remote client\n\t\t */\n\t\tif (rc == PBSE_NONE) {\n\t\t\trc = dis_reply_write(sfds, 
request);\n\t\t}\n\t}\n\n\tfree_br(request);\n\treturn (rc);\n}\n\n/**\n * @brief\n * \t\tSend a normal acknowledgement reply to a request\n *\n * @param[in,out]\tpreq\t- request structure\n *\n * @par Side-effects:\n *\t\tAlways frees the request structure.\n *\n */\nvoid\nreply_ack(struct batch_request *preq)\n{\n\tint rq_type;\n\n\tif (preq == NULL)\n\t\treturn;\n\n\trq_type = preq->rq_type;\n\tif (rq_type == PBS_BATCH_ModifyJob_Async || rq_type == PBS_BATCH_AsyrunJob) {\n\t\tfree_br(preq);\n\t\treturn;\n\t}\n\n\tif (preq->prot == PROT_TPP && preq->tpp_ack == 0) {\n\t\tfree_br(preq);\n\t\treturn;\n\t}\n\n\tif (preq->rq_type != PBS_BATCH_DeleteJobList) {\n\t\tif (preq->rq_reply.brp_choice != BATCH_REPLY_CHOICE_NULL)\n\t\t\t/* in case another reply was being built up, clean it out */\n\t\t\treply_free(&preq->rq_reply);\n\t\tpreq->rq_reply.brp_choice = BATCH_REPLY_CHOICE_NULL;\n\t}\n\n\tpreq->rq_reply.brp_code = PBSE_NONE;\n\tpreq->rq_reply.brp_auxcode = 0;\n\n\t(void) reply_send(preq);\n}\n\n/**\n * @brief\n * \t\tFree any sub-structures that might hang from the basic\n * \t\tbatch_reply structure, the reply structure itself IS NOT FREED.\n *\n * @param[in]\tprep\t- basic batch_reply structure\n */\nvoid\nreply_free(struct batch_reply *prep)\n{\n\tstruct brp_status *pstat;\n\tstruct brp_status *pstatx;\n\tstruct brp_select *psel;\n\tstruct brp_select *pselx;\n\tstruct batch_deljob_status *pdelstat;\n\tstruct batch_deljob_status *pdelstatx;\n\n\tif (prep->brp_choice == BATCH_REPLY_CHOICE_Text) {\n\t\tif (prep->brp_un.brp_txt.brp_str) {\n\t\t\t(void) free(prep->brp_un.brp_txt.brp_str);\n\t\t\tprep->brp_un.brp_txt.brp_str = NULL;\n\t\t\tprep->brp_un.brp_txt.brp_txtlen = 0;\n\t\t}\n\n\t} else if (prep->brp_choice == BATCH_REPLY_CHOICE_Select) {\n\t\tpsel = prep->brp_un.brp_select;\n\t\twhile (psel) {\n\t\t\tpselx = psel->brp_next;\n\t\t\t(void) free(psel);\n\t\t\tpsel = pselx;\n\t\t}\n\n\t} else if (prep->brp_choice == BATCH_REPLY_CHOICE_Status) {\n\t\tpstat = (struct 
brp_status *) GET_NEXT(prep->brp_un.brp_status);\n\t\twhile (pstat) {\n\t\t\tpstatx = (struct brp_status *) GET_NEXT(pstat->brp_stlink);\n\t\t\tfree_attrlist(&pstat->brp_attr);\n\t\t\t(void) free(pstat);\n\t\t\tpstat = pstatx;\n\t\t}\n\n\t} else if (prep->brp_choice == BATCH_REPLY_CHOICE_Delete) {\n\t\tpdelstat = prep->brp_un.brp_deletejoblist.brp_delstatc;\n\t\twhile (pdelstat) {\n\t\t\tpdelstatx = pdelstat->next;\n\t\t\tif (pdelstat->name)\n\t\t\t\tfree(pdelstat->name);\n\t\t\tfree(pdelstat);\n\t\t\tpdelstat = pdelstatx;\n\t\t}\n\n\t} else if (prep->brp_choice == BATCH_REPLY_CHOICE_RescQuery) {\n\t\t(void) free(prep->brp_un.brp_rescq.brq_avail);\n\t\t(void) free(prep->brp_un.brp_rescq.brq_alloc);\n\t\t(void) free(prep->brp_un.brp_rescq.brq_resvd);\n\t\t(void) free(prep->brp_un.brp_rescq.brq_down);\n\t}\n\tprep->brp_choice = BATCH_REPLY_CHOICE_NULL;\n}\n\n/**\n * @brief\n * \t\tCreate a reject (error) reply for a request, then send the reply.\n *\n * @param[in]\tcode\t- PBS error code indicating the reason the rejection\n * \t\t\t\t\t\t\tis taking place.  
If this is PBSE_NONE, no log message is output.\n * @param[in]\taux\t- Auxiliary error code\n * @param[in,out]\tpreq\t- Pointer to batch request\n *\n * @par Side-effects:\n *\t\tAlways frees the request structure.\n */\nvoid\nreq_reject(int code, int aux, struct batch_request *preq)\n{\n\tint evt_type;\n\tchar msgbuf[ERR_MSG_SIZE];\n\tint rq_type;\n\n\tif (preq == NULL)\n\t\treturn;\n\n\trq_type = preq->rq_type;\n\tif (rq_type == PBS_BATCH_ModifyJob_Async || rq_type == PBS_BATCH_AsyrunJob) {\n\t\tfree_br(preq);\n\t\treturn;\n\t}\n\n\tif (code != PBSE_NONE) {\n\t\tevt_type = PBSEVENT_DEBUG;\n\t\tif (code == PBSE_BADHOST)\n\t\t\tevt_type |= PBSEVENT_SECURITY;\n\t\tsprintf(log_buffer,\n\t\t\t\"Reject reply code=%d, aux=%d, type=%d, from %s@%s\",\n\t\t\tcode, aux, preq->rq_type, preq->rq_user, preq->rq_host);\n\t\tlog_event(evt_type, PBS_EVENTCLASS_REQUEST, LOG_INFO,\n\t\t\t  \"req_reject\", log_buffer);\n\t}\n\tset_err_msg(code, msgbuf, ERR_MSG_SIZE);\n\n\tif (preq->rq_type != PBS_BATCH_DeleteJobList) {\n\t\tif (preq->rq_reply.brp_choice != BATCH_REPLY_CHOICE_NULL) {\n\t\t\t/* in case another reply was being built up, clean it out */\n\t\t\treply_free(&preq->rq_reply);\n\t\t}\n\n\t\tif (*msgbuf != '\\0') {\n\t\t\tpreq->rq_reply.brp_choice = BATCH_REPLY_CHOICE_Text;\n\t\t\tif ((preq->rq_reply.brp_un.brp_txt.brp_str = strdup(msgbuf)) == NULL) {\n\t\t\t\tlog_err(-1, \"req_reject\", \"Unable to allocate Memory!\\n\");\n\t\t\t\tfree_br(preq); /* honor the documented side-effect: always free the request */\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tpreq->rq_reply.brp_un.brp_txt.brp_txtlen = strlen(msgbuf);\n\t\t} else {\n\t\t\tpreq->rq_reply.brp_choice = BATCH_REPLY_CHOICE_NULL;\n\t\t}\n\t}\n\n\tpreq->rq_reply.brp_code = code;\n\tpreq->rq_reply.brp_auxcode = aux;\n\n\t(void) reply_send(preq);\n}\n\n/**\n * @brief\n * \t\tCreate a reject (error) reply for a request including\n * \t\tthe name of the bad attribute/resource.\n *\n * @param[in]\tcode\tPBS error code indicating the reason the rejection\n * \t\t\tis taking place.  
If this is PBSE_NONE, no log message is output.\n * @param[in]\taux\tAuxiliary error code\n * @param[in,out] pal\texternal form of attributes.\n * @param[in]\tpreq\tPointer to batch request\n */\nvoid\nreply_badattr(int code, int aux, svrattrl *pal,\n\t      struct batch_request *preq)\n{\n\tint i = 1;\n\tsize_t len;\n\tchar msgbuf[ERR_MSG_SIZE];\n\n\tif (preq == NULL)\n\t\treturn;\n\n\tif (preq->rq_type == PBS_BATCH_ModifyJob_Async) {\n\t\tfree_br(preq);\n\t\treturn;\n\t}\n\n#ifdef NAS /* localmod 005 */\n\tset_err_msg(code, msgbuf, sizeof(msgbuf));\n#else\n\tset_err_msg(code, msgbuf, ERR_MSG_SIZE);\n#endif /* localmod 005 */\n\n\twhile (pal) {\n\t\tif (i == aux) {\n\t\t\t/* append attribute info only if it fits in msgbuf */\n\t\t\t/* add one for space between msg and attribute name */\n\t\t\tlen = strlen(msgbuf) + 1 + strlen(pal->al_name);\n\t\t\tif (pal->al_resc)\n\t\t\t\t/* add one for dot between attribute and resource */\n\t\t\t\tlen += 1 + strlen(pal->al_resc);\n\n#ifdef NAS /* localmod 005 */\n\t\t\tif (len < sizeof(msgbuf)) {\n#else\n\t\t\tif (len < ERR_MSG_SIZE) {\n#endif /* localmod 005 */\n\t\t\t\t(void) strcat(msgbuf, \" \");\n\t\t\t\t(void) strcat(msgbuf, pal->al_name);\n\t\t\t\tif (pal->al_resc) {\n\t\t\t\t\t(void) strcat(msgbuf, \".\");\n\t\t\t\t\t(void) strcat(msgbuf, pal->al_resc);\n\t\t\t\t}\n\t\t\t}\n\t\t\tbreak;\n\t\t}\n\t\tpal = (svrattrl *) GET_NEXT(pal->al_link);\n\t\t++i;\n\t}\n\n\t(void) reply_text(preq, code, msgbuf);\n}\n\n/**\n * @brief\n * \t\tReturn a reply with a supplied text string.\n *\n * @param[out]\tpreq\t- Pointer to batch request\n * @param[in]\tcode\t- PBS error code indicating the reason the rejection\n * \t\t\t\t\t\t\tis taking place.  
If this is PBSE_NONE, no log message is output.\n * @param[in]\ttext\t- text string for the reply.\n *\n * @return\terror code\n */\nint\nreply_text(struct batch_request *preq, int code, char *text)\n{\n\tif (preq == NULL)\n\t\treturn 0;\n\n\tif (preq->rq_type == PBS_BATCH_ModifyJob_Async) {\n\t\tfree_br(preq);\n\t\treturn 0;\n\t}\n\n\tif (preq->rq_reply.brp_choice != BATCH_REPLY_CHOICE_NULL)\n\t\t/* in case another reply was being built up, clean it out */\n\t\treply_free(&preq->rq_reply);\n\n\tpreq->rq_reply.brp_code = code;\n\tpreq->rq_reply.brp_auxcode = 0;\n\tif (text && *text) {\n\t\tpreq->rq_reply.brp_choice = BATCH_REPLY_CHOICE_Text;\n\t\tif ((preq->rq_reply.brp_un.brp_txt.brp_str = strdup(text)) == NULL) {\n\t\t\t/* out of memory: free the request rather than leak it */\n\t\t\tfree_br(preq);\n\t\t\treturn PBSE_SYSTEM;\n\t\t}\n\t\tpreq->rq_reply.brp_un.brp_txt.brp_txtlen = strlen(text);\n\t} else {\n\t\tpreq->rq_reply.brp_choice = BATCH_REPLY_CHOICE_NULL;\n\t}\n\treturn reply_send(preq);\n}\n\n/**\n * @brief\n * \t\tReturn a reply with the job id.\n *\n * @see req_queuejob()\n * @see req_commit()\n *\n * @par Side-effects:\n *\t\tAlways frees the request structure.\n *\n * @param[out]\tpreq\t- Pointer to batch request\n * @param[in]\tjobid\t- job id.\n * @param[in]\twhich\t- the union discriminator\n */\nint\nreply_jobid(struct batch_request *preq, char *jobid, int which)\n{\n\tif (preq->rq_reply.brp_choice != BATCH_REPLY_CHOICE_NULL)\n\t\t/* in case another reply was being built up, clean it out */\n\t\treply_free(&preq->rq_reply);\n\n\tpreq->rq_reply.brp_code = 0;\n\tpreq->rq_reply.brp_auxcode = 0;\n\tpreq->rq_reply.brp_choice = which;\n\t(void) strncpy(preq->rq_reply.brp_un.brp_jid, jobid, PBS_MAXSVRJOBID);\n\treturn (reply_send(preq));\n}\n"
  },
  {
    "path": "src/server/req_cred.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\treq_cred.c\n *\n * @brief\n *  Server routines providing and sending credentials to superior mom.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include \"libpbs.h\"\n#include \"job.h\"\n#include \"batch_request.h\"\n#include \"pbs_error.h\"\n#include \"log.h\"\n#include \"pbs_nodes.h\"\n#include \"server.h\"\n\n#define CRED_DATA_SIZE 4096\n\n/* credential type */\n#define CRED_NONE 0\n#define CRED_KRB5 1\n\nextern struct server server;\nextern void release_req(struct work_task *);\nextern int relay_to_mom(job *, struct batch_request *, void (*)(struct work_task *));\nextern time_t time_now;\npbs_list_head svr_creds_cache; /* all credentials cached and available to send */\nextern long svr_cred_renew_cache_period;\n\nstruct cred_cache {\n\tpbs_list_link cr_link;\n\tchar credid[PBS_MAXUSER + 1];\n\tlong validity;\n\tint type;\n\tchar *data; /* credentials in base64 */\n\tsize_t size;\n};\ntypedef struct cred_cache cred_cache;\n\n/* @brief\n *\tFirst, this function checks whether the credentials for credid (e.g. principal) of the\n *\tjob are stored in server's memory cache and whether the credentials are\n *\tnot too old. Such credentials are returned. 
If they are not present\n *\tin cache or are too old, new credentials are requested with the\n *\tSVR_ATR_cred_renew_tool and renewed credentials are stored in the cache\n *\t(server's memory).\n *\n * @param[in] pjob - pointer to job, the credentials are requested for this job\n *\n * @return\tcred_cache\n * @retval\tstructure with credentials on success\n * @retval\tNULL otherwise\n */\nstatic struct cred_cache *\nget_cached_cred(job *pjob)\n{\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\tcred_cache *cred = NULL;\n\tcred_cache *nxcred = NULL;\n\tchar cmd[MAXPATHLEN + PBS_MAXUSER + 2]; /* +1 for space and +1 for EOL */\n\tchar buf[CRED_DATA_SIZE] = {'\\0'};\n\tFILE *fp;\n\tlong validity = 0;\n\tint cred_type = CRED_NONE;\n\tint ret = 0;\n\n\t/* try the cache first */\n\tcred = (cred_cache *) GET_NEXT(svr_creds_cache);\n\twhile (cred) {\n\t\tnxcred = (cred_cache *) GET_NEXT(cred->cr_link);\n\n\t\tif (strcmp(cred->credid, get_jattr_str(pjob, JOB_ATR_cred_id)) == 0 &&\n\t\t    cred->validity - svr_cred_renew_cache_period > time_now) {\n\t\t\t/* valid credential found */\n\t\t\treturn cred;\n\t\t}\n\n\t\t/* too old credential - delete from cache */\n\t\tif (cred->validity - svr_cred_renew_cache_period <= time_now) {\n\t\t\tdelete_link(&cred->cr_link);\n\t\t\tfree(cred->data);\n\t\t\tfree(cred);\n\t\t}\n\n\t\tcred = nxcred;\n\t}\n\n\t/* valid credentials not cached, get new one */\n\n\tif (!is_sattr_set(SVR_ATR_cred_renew_tool)) {\n\t\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t   LOG_ERR, msg_daemonname, \"%s is not set\", ATTR_cred_renew_tool);\n\t\treturn NULL;\n\t}\n\n\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER,\n\t\t   LOG_DEBUG, msg_daemonname, \"using %s '%s' to acquire credentials for user: %s\",\n\t\t   ATTR_cred_renew_tool,\n\t\t   get_sattr_str(SVR_ATR_cred_renew_tool),\n\t\t   get_jattr_str(pjob, JOB_ATR_cred_id));\n\n\tsnprintf(cmd, MAXPATHLEN + PBS_MAXUSER + 2, \"%s %s\", /* +1 for space and +1 for EOL */\n\t\t get_sattr_str(SVR_ATR_cred_renew_tool),\n\t\t get_jattr_str(pjob, JOB_ATR_cred_id));\n\n\tif ((fp = popen(cmd, \"r\")) == NULL) {\n\t\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t   LOG_ERR, msg_daemonname, \"%s failed to open pipe, command: '%s'\",\n\t\t\t   ATTR_cred_renew_tool, cmd);\n\t\treturn NULL;\n\t}\n\n\twhile (fgets(buf, CRED_DATA_SIZE, fp) != NULL) {\n\t\tstrtok(buf, \"\\n\");\n\t\tif (strncmp(buf, \"Valid until:\", strlen(\"Valid until:\")) == 0)\n\t\t\tvalidity = strtol(buf + strlen(\"Valid until:\"), NULL, 10);\n\n\t\tif (strncmp(buf, \"Type: \", strlen(\"Type: \")) == 0) {\n\t\t\tif (strncmp(buf + strlen(\"Type: \"), \"Kerberos\", strlen(\"Kerberos\")) == 0)\n\t\t\t\tcred_type = CRED_KRB5;\n\t\t}\n\n\t\t/* last line in buf is credential in base64 - will be read later */\n\t}\n\n\tif ((ret = pclose(fp))) {\n\t\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t   LOG_ERR, msg_daemonname, \"%s command '%s' failed, exitcode: %d\",\n\t\t\t   ATTR_cred_renew_tool, cmd, WEXITSTATUS(ret));\n\t\treturn NULL;\n\t}\n\n\t/* buf is an array and can never be NULL; check for empty output instead */\n\tif (strlen(buf) <= 1 || validity < time_now) {\n\t\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t   LOG_ERR, msg_daemonname, \"%s command '%s' returned invalid credentials for %s\",\n\t\t\t   ATTR_cred_renew_tool, cmd,\n\t\t\t   get_jattr_str(pjob, JOB_ATR_cred_id));\n\n\t\treturn NULL;\n\t}\n\n\tif ((cred = (cred_cache *) malloc(sizeof(cred_cache))) == NULL) {\n\t\tlog_err(errno, __func__, \"Unable to allocate Memory!\\n\");\n\t\treturn NULL;\n\t}\n\n\tstrncpy(cred->credid, get_jattr_str(pjob, JOB_ATR_cred_id), PBS_MAXUSER);\n\tcred->credid[PBS_MAXUSER] = '\\0';\n\tcred->type = cred_type;\n\tcred->validity = validity;\n\tcred->size = strlen(buf);\n\n\tif ((cred->data = (char *) malloc(cred->size + 1)) == NULL) {\n\t\tlog_err(errno, __func__, \"Unable to allocate Memory!\\n\");\n\t\tfree(cred);\n\t\treturn NULL;\n\t}\n\n\t/* here we read the credential in base64 */\n\tstrcpy(cred->data, 
buf);\n\n\t/* store cred to cache */\n\tCLEAR_LINK(cred->cr_link);\n\tappend_link(&svr_creds_cache, &cred->cr_link, cred);\n\treturn cred;\n#else\n\treturn NULL;\n#endif\n}\n\n/* @brief\n *\tPrepare batch request structure for sending credentials to superior mom\n *\tand fill in the structure with data like credentials or credid.\n *\n * @param[in] preq - batch request\n * @param[in] pjob - pointer to job\n *\n * @return\tpreq\n * @retval\tstructure with batch request\n */\nstatic struct batch_request *\nsetup_cred(struct batch_request *preq, job *pjob)\n{\n\tcred_cache *cred;\n\n\tif (preq == NULL) {\n\t\tpreq = alloc_br(PBS_BATCH_Cred);\n\n\t\tif (preq == NULL) {\n\t\t\treturn preq;\n\t\t}\n\t}\n\n\tpreq->rq_ind.rq_cred.rq_cred_data = NULL;\n\n\tif ((cred = get_cached_cred(pjob)) == NULL) {\n\t\tfree_br(preq);\n\t\treturn NULL;\n\t}\n\n\tstrcpy(preq->rq_ind.rq_cred.rq_credid, get_jattr_str(pjob, JOB_ATR_cred_id));\n\tstrcpy(preq->rq_ind.rq_cred.rq_jobid, pjob->ji_qs.ji_jobid);\n\tpreq->rq_ind.rq_cred.rq_cred_type = cred->type;\n\tpreq->rq_ind.rq_cred.rq_cred_validity = cred->validity;\n\tif ((preq->rq_ind.rq_cred.rq_cred_data = (char *) malloc(cred->size + 1)) == NULL) {\n\t\tlog_err(errno, __func__, \"Unable to allocate Memory!\\n\");\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\treturn NULL; /* req_reject freed preq; do not dereference rq_cred_data */\n\t}\n\tstrcpy(preq->rq_ind.rq_cred.rq_cred_data, cred->data);\n\n\treturn preq;\n}\n\n/* @brief\n *\tOnce the credentials are sent to the superior mom, this function is called\n *\tand if the credentials were sent successfully then the job attribute\n *\tJOB_ATR_cred_validity is changed to the validity of credentials.\n *\n * @param[in] pwt - pointer to work task\n *\n */\nstatic void\npost_cred(struct work_task *pwt)\n{\n\tint code;\n\tjob *pjob;\n\tstruct batch_request *preq;\n\n\tpreq = pwt->wt_parm1;\n\tcode = preq->rq_reply.brp_code;\n\tpjob = find_job(preq->rq_ind.rq_cred.rq_jobid);\n\n\tif (pjob != NULL) {\n\n\t\tif (code != 0) {\n\t\t\tlog_eventf(PBSEVENT_ERROR, 
PBS_EVENTCLASS_JOB,\n\t\t\t\t   LOG_INFO, pjob->ji_qs.ji_jobid,\n\t\t\t\t   \"sending credential to mom failed, returned code: %d\", code);\n\t\t} else {\n\t\t\t/* send_cred was successful - update validity */\n\n\t\t\tset_jattr_l_slim(pjob, JOB_ATR_cred_validity, preq->rq_ind.rq_cred.rq_cred_validity, SET);\n\n\t\t\t/* save the full job */\n\t\t\t(void) job_save_db(pjob);\n\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid, \"sending credential to mom succeeded\");\n\t\t}\n\t} else\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_INFO, __func__, \"failed, job unknown\");\n\n\trelease_req(pwt); /* close connection and release request */\n}\n\n/* @brief\n *\tRetrieve and send credentials for a particular job to the superior\n *\tmom of the job.\n *\n * @param[in] pjob - pointer to job\n *\n * @return\tint\n * @retval\t0 on success\n * @retval\t!= 0 on error\n */\nint\nsend_cred(job *pjob)\n{\n\tstruct batch_request *credreq = NULL;\n\tint rc;\n\n\tif (pjob == NULL) {\n\t\treturn PBSE_SYSTEM;\n\t}\n\n\tcredreq = setup_cred(credreq, pjob);\n\tif (credreq) {\n\t\trc = relay_to_mom(pjob, credreq, post_cred);\n\t\tif (rc == PBSE_NORELYMOM) /* otherwise the post_cred will free the request */\n\t\t\tfree_br(credreq);\n\t\treturn rc;\n\t}\n\n\treturn PBSE_IVALREQ;\n}\n"
  },
  {
    "path": "src/server/req_delete.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\treq_delete.c\n *\n * Functions relating to the Delete Job Batch Requests.\n *\n * Included functions are:\n *\tremove_stagein()\n *\tacct_del_write()\n *\tcheck_deletehistoryjob()\n *\tissue_delete()\n *\treq_deletejob()\n *\treq_deletejob2()\n *\treq_deleteReservation()\n *\tpost_deljobfromresv_req()\n *\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <sys/types.h>\n#include <signal.h>\n#include \"portability.h\"\n#include \"libpbs.h\"\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"work_task.h\"\n#include \"attribute.h\"\n#include \"server.h\"\n#include \"credential.h\"\n#include \"batch_request.h\"\n#include \"resv_node.h\"\n#include \"queue.h\"\n#include \"hook.h\"\n\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"pbs_error.h\"\n#include \"acct.h\"\n#include \"log.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n\n#define QDEL_BREAKER_SECS 5\n\n/* Global Data Items: */\n\nextern char *msg_deletejob;\nextern char *msg_delrunjobsig;\nextern char *msg_manager;\nextern char *msg_noDeljobfromResv;\nextern char *msg_deleteresv;\nextern char *msg_deleteresvJ;\nextern char *msg_job_history_delete;\nextern char *msg_job_history_notset;\nextern char *msg_also_deleted_job_history;\nextern char *msg_err_malloc;\nextern struct server 
server;\nextern time_t time_now;\n\n/* External functions */\n\nextern int issue_to_svr(char *, struct batch_request *, void (*func)(struct work_task *));\nextern struct batch_request *cpy_stage(struct batch_request *, job *, enum job_atr, int);\nextern resc_resv *chk_rescResv_request(char *, struct batch_request *);\n\n/* Private Functions in this file */\n\nstatic void post_delete_mom1(struct work_task *);\nstatic void post_deljobfromresv_req(struct work_task *);\nstatic int req_deletejob2(struct batch_request *preq, job *pjob);\nstatic void resume_deletion(struct work_task *ptask);\n\n/* Private Data Items */\n\nstatic char *sigk = \"SIGKILL\";\nstatic char *sigt = \"SIGTERM\";\nstatic char *sigtj = SIG_TermJob;\nstatic char *acct_fmt = \"requestor=%s@%s\";\nstatic bool qdel_mail = true; /* true: sending mail */\n\n/**\n * @brief\n * \tService to resume the deletion - this is called from a work_interleave task\n *\n * @param[in] ptask - pointer to work task structure\n *\n * @return void\n */\nstatic void\nresume_deletion(struct work_task *ptask)\n{\n\tstruct batch_request *preq = (struct batch_request *) ptask->wt_parm1;\n\n\tif (preq == NULL)\n\t\treturn;\n\n\tif (preq->rq_type == PBS_BATCH_DeleteJobList)\n\t\tpreq->rq_ind.rq_deletejoblist.rq_resume = TRUE;\n\n\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SERVER, LOG_DEBUG, __func__,\n\t\t   \"Resuming deletion operation\");\n\treq_deletejob(preq);\n\treturn;\n}\n\n/**\n * @brief\n * \t\tremove_stagein() - request that mom delete staged-in files for a job;\n *\t\tused when the job is to be purged after files have been staged in\n *\n * @param[in,out]\tpjob\t- job\n * \n * @return\tint\n * @retval\t0\t- success\n * @retval\tnon-zero\t- error code\n */\n\nint\nremove_stagein(job *pjob)\n{\n\tstruct batch_request *preq = 0;\n\tint rc = 0;\n\n\tpreq = cpy_stage(preq, pjob, JOB_ATR_stagein, 0);\n\n\tif (preq) { /* have files to delete\t\t*/\n\n\t\t/* change the request type from copy to delete  
*/\n\n\t\tpreq->rq_type = PBS_BATCH_DelFiles;\n\t\tpreq->rq_extra = NULL;\n\t\trc = relay_to_mom(pjob, preq, release_req);\n\t\tif (rc == 0) {\n\t\t\tpjob->ji_qs.ji_svrflags &= ~JOB_SVFLG_StagedIn;\n\t\t} else {\n\t\t\t/* log that we were unable to remove the files */\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_FILE,\n\t\t\t\t  LOG_NOTICE, pjob->ji_qs.ji_jobid,\n\t\t\t\t  \"unable to remove staged-in files for job\");\n\t\t\tfree_br(preq);\n\t\t}\n\t}\n\treturn rc;\n}\n\n/**\n * @brief\n * \t\tacct_del_write - write the Job Deleted account record\n *\n * @param[in]\tjid\t- Job Id.\n * @param[in]\tpjob\t- Job structure.\n * @param[in]\tpreq - batch_request\n * @param[in]\tnomail\t- do not send mail to the job owner if enabled.\n */\n\nstatic void\nacct_del_write(char *jid, job *pjob, struct batch_request *preq, int nomail)\n{\n\tsprintf(log_buffer, acct_fmt, preq->rq_user, preq->rq_host);\n\twrite_account_record(PBS_ACCT_DEL, jid, log_buffer);\n\n\tsprintf(log_buffer, msg_manager, msg_deletejob,\n\t\tpreq->rq_user, preq->rq_host);\n\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, jid, log_buffer);\n\n\tif (pjob != NULL) {\n\t\tint jt = is_job_array(jid);\n\n\t\tswitch (jt) {\n\t\t\tcase IS_ARRAY_NO: /* not a subjob */\n\t\t\tcase IS_ARRAY_ArrayJob:\n\n\t\t\t\t/* if block set, send word */\n\t\t\t\tcheck_block(pjob, log_buffer);\n\t\t}\n\n\t\tif (preq->rq_parentbr == NULL && nomail == 0 &&\n\t\t    svr_chk_owner(preq, pjob) != 0 &&\n\t\t    qdel_mail != 0) {\n\t\t\tsvr_mailowner_id(jid, pjob,\n\t\t\t\t\t MAIL_OTHER, MAIL_FORCE, log_buffer);\n\t\t}\n\t}\n}\n\n/**\n * @brief check whether all jobs got deleted\n * \n * @param[in] preply - reply struct\n * @return bool\n * @retval true - All jobs got deleted.\n * @retval false - more jobs pending to be deleted\n */\nstatic bool\nall_jobs_deleted(struct batch_reply *preply)\n{\n\tvoid *idx = preply->brp_un.brp_deletejoblist.undeleted_job_idx;\n\n\tif (pbs_idx_is_empty(idx)) 
{\n\t\tpbs_idx_destroy(idx);\n\t\tpreply->brp_un.brp_deletejoblist.undeleted_job_idx = NULL;\n\t\treturn TRUE;\n\t}\n\n\treturn FALSE;\n}\n\n/**\n * @Brief\n * \t- Updates the reply struct with the finished jobid\n *\t- Updates the job's error code in reply structure by allocating\n *\t\tbatch_deljob_status struct.\n *\t- Check if it is time to respond back.\n *\n * @param[in]\tpreq - pointer batch request structure\n * @param[in]\tjid\t- job id.\n * @param[in]\terrcode - Job's error code\n *\n * @return\tbool\n * @retval\tTRUE\t- all jobs deleted\n * @retval\tFALSE\t- jobs remaining to be deleted\n */\nbool\nupdate_deljob_rply(struct batch_request *preq, char *jid, int errcode)\n{\n\tstruct batch_deljob_status *pdelstat;\n\tstruct batch_reply *preply = &preq->rq_reply;\n\tvoid *idx;\n\tchar **data = NULL;\n\n\tif (preq->rq_type != PBS_BATCH_DeleteJobList)\n\t\treturn FALSE;\n\n\tif (errcode != PBSE_NONE) {\n\t\t/* allocate reply structure and fill in jobid and status portion */\n\t\tpdelstat = (struct batch_deljob_status *) malloc(sizeof(struct batch_deljob_status));\n\t\tif (pdelstat == NULL) {\n\t\t\tlog_err(PBSE_SYSTEM, __func__, \"Failed to allocate memory\");\n\t\t\treturn FALSE; /* cannot record this job's status; do not dereference NULL */\n\t\t}\n\n\t\tpdelstat->name = strdup(jid);\n\t\tpdelstat->code = errcode;\n\t\tpdelstat->next = preply->brp_un.brp_deletejoblist.brp_delstatc;\n\t\tpreply->brp_un.brp_deletejoblist.brp_delstatc = pdelstat;\n\t\tpreq->rq_reply.brp_count++;\n\t}\n\n\tidx = preply->brp_un.brp_deletejoblist.undeleted_job_idx;\n\tif (jid) {\n\t\tif (pbs_idx_find(idx, (void **) &jid, (void **) &data, NULL) == PBS_IDX_RET_OK)\n\t\t\tpbs_idx_delete(idx, jid);\n\t\telse\n\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER, LOG_DEBUG, __func__,\n\t\t\t\t   \"job %s has already been deleted from delete job list\", jid);\n\t} else {\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER, LOG_DEBUG,\n\t\t\t  __func__, \"update_deljob_rply invoked with empty job id\");\n\t}\n\n\treturn all_jobs_deleted(preply);\n}\n\n/**\n * @Brief\n *\t\tIf the job 
is a history job then purge its history\n *\t\tIf the job is a non-history job then it must be terminated before purging its history. Will be\n *\t\tdone by req_deletejob()\n *\n * @param[in]\tpreq\t- Batch request structure.\n *\n * @return\tint\n * @retval\tTRUE\t- Job history  has been purged\n * @retval\tFALSE\t- Job is not a history job\n */\nint\ncheck_deletehistoryjob(struct batch_request *preq, char *jid)\n{\n\tjob *histpjob;\n\tjob *pjob;\n\tint historyjob;\n\tint histerr;\n\tint t;\n\n\t/*\n\t * If the array subjob or range of subjobs are in a history state then\n\t * reject the request as we cant delete history of array subjobs\n\t */\n\tt = is_job_array(jid);\n\tif ((t == IS_ARRAY_Single) || (t == IS_ARRAY_Range)) {\n\t\tpjob = find_arrayparent(jid);\n\t\tif ((histerr = svr_chk_histjob(pjob))) {\n\t\t\tif (update_deljob_rply(preq, jid, PBSE_NOHISTARRAYSUBJOB))\n\t\t\t\treq_reject(PBSE_NOHISTARRAYSUBJOB, 0, preq);\n\t\t\treturn TRUE;\n\t\t} else {\n\t\t\t/*\n\t\t\t * Job is in a Non Finished state . It must be terminated and then its history\n\t\t\t *  should be purged .\n\t\t\t */\n\t\t\treturn FALSE;\n\t\t}\n\t}\n\n\thistpjob = find_job(jid);\n\n\thistoryjob = svr_chk_histjob(histpjob);\n\tif (historyjob == PBSE_HISTJOBID) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t msg_job_history_delete, preq->rq_user,\n\t\t\t preq->rq_host);\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  jid,\n\t\t\t  log_buffer);\n\n\t\t/* Issue history job delete request to remote server if job is moved. 
*/\n\t\tif (check_job_state(histpjob, JOB_STATE_LTR_MOVED))\n\t\t\tissue_delete(histpjob);\n\n\t\tif (histpjob->ji_qs.ji_svrflags & JOB_SVFLG_ArrayJob) {\n\t\t\tif (histpjob->ji_ajinfo) {\n\t\t\t\tint i;\n\t\t\t\tfor (i = histpjob->ji_ajinfo->tkm_start; i <= histpjob->ji_ajinfo->tkm_end; i += histpjob->ji_ajinfo->tkm_step) {\n\t\t\t\t\tjob *psjob = get_subjob_and_state(histpjob, i, NULL, NULL);\n\t\t\t\t\tif (psjob) {\n\t\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t\t\t   psjob->ji_qs.ji_jobid,\n\t\t\t\t\t\t\t   msg_job_history_delete, preq->rq_user,\n\t\t\t\t\t\t\t   preq->rq_host);\n\t\t\t\t\t\tjob_purge(psjob);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tjob_purge(histpjob);\n\n\t\tif (update_deljob_rply(preq, jid, PBSE_HISTJOBDELETED))\n\t\t\treply_send(preq);\n\t\treturn TRUE;\n\t} else {\n\t\t/*\n\t\t * Job is in a non-finished state. It must be terminated and then its\n\t\t * history should be purged.\n\t\t */\n\t\treturn FALSE;\n\t}\n}\n\n/**\n * @Brief\n *\t\tIssue PBS_BATCH_DeleteJob request to remote server.\n *\n * @param[in]\tpjob - Job structure.\n */\nvoid\nissue_delete(job *pjob)\n{\n\tstruct batch_request *preq;\n\tchar rmt_server[PBS_MAXSERVERNAME + 1] = {'\\0'};\n\tchar *at = NULL;\n\n\tif (pjob == NULL)\n\t\treturn;\n\n\tif ((at = strchr(get_jattr_str(pjob, JOB_ATR_in_queue), (int) '@')) == NULL)\n\t\treturn;\n\n\tsnprintf(rmt_server, sizeof(rmt_server), \"%s\", at + 1);\n\n\tpreq = alloc_br(PBS_BATCH_DeleteJob);\n\tif (preq == NULL)\n\t\treturn;\n\n\tpbs_strncpy(preq->rq_ind.rq_delete.rq_objname, pjob->ji_qs.ji_jobid, sizeof(preq->rq_ind.rq_delete.rq_objname));\n\tpreq->rq_extend = malloc(strlen(DELETEHISTORY) + 1);\n\tif (preq->rq_extend == NULL) {\n\t\tlog_err(errno, \"issue_delete\", msg_err_malloc);\n\t\tfree_br(preq); /* avoid leaking the request on allocation failure */\n\t\treturn;\n\t}\n\n\tstrncpy(preq->rq_extend, DELETEHISTORY, strlen(DELETEHISTORY) + 1);\n\n\tissue_to_svr(rmt_server, preq, release_req);\n}\n\n/**\n * @Brief\n *\t\tdecrement entity usage for a single 
un-instantiated subjob\n *\n * @param[in]\tparent - pointer to parent Job structure.\n */\nstatic void\ndecr_single_subjob_usage(job *parent)\n{\n\tparent->ji_qs.ji_svrflags &= ~JOB_SVFLG_ArrayJob;\t\t\t\t /* small hack to decrement usage for a single un-instantiated subjob */\n\taccount_entity_limit_usages(parent, NULL, NULL, DECR, ETLIM_ACC_ALL);\t\t /* for server limit */\n\taccount_entity_limit_usages(parent, parent->ji_qhdr, NULL, DECR, ETLIM_ACC_ALL); /* for queue limit */\n\tparent->ji_qs.ji_svrflags |= JOB_SVFLG_ArrayJob;\t\t\t\t /* setting arrayjob flag back */\n}\n\n/**\n * @brief Initialize routine for deljoblist:\n * reorder the deljob list so that queued jobs appear first;\n * do not reorder if already sorted.\n *\n * deljob can wait if jobs are in a transit state.\n * This triggers other jobs in the deljob list to be run.\n * Deleting those running jobs takes longer because the mom must be contacted,\n * and meanwhile further jobs in the list start to run,\n * triggering a series of deletes followed by runs.\n * This has performance issues and makes debugging slower.
\n * Ordering should prevent that.\n * Complexity: O(N)\n * \n * @param[in] preq - request structure\n *\n * @return int\n * @retval 0 for success\n * @retval 1 for failure\n */\nstatic int\ninit_deljoblist(struct batch_request *preq)\n{\n\tint head = -1;\n\tint tail;\n\tchar **jlist = preq->rq_ind.rq_deletejoblist.rq_jobslist;\n\tstruct batch_reply *preply = &preq->rq_reply;\n\tjob *pjob;\n\tchar *temp;\n\tint jt;\n\tchar *jid;\n\tchar *p1;\n\tchar *p2;\n\tchar range_jid[PBS_MAXSVRJOBID];\n\n\tif (preq->rq_ind.rq_deletejoblist.rq_resume)\n\t\treturn 0;\n\n\tif (!jlist || !jlist[0])\n\t\treturn 0;\n\n\tpreply->brp_un.brp_deletejoblist.undeleted_job_idx = pbs_idx_create(0, 0);\n\n\tfor (tail = 0; jlist[tail]; tail++) {\n\n\t\tjt = is_job_array(jlist[tail]);\n\n\t\tif (jt == IS_ARRAY_Range) {\n\t\t\tpjob = find_arrayparent(jlist[tail]);\n\t\t} else {\n\t\t\tpjob = find_job(jlist[tail]);\n\t\t}\n\n\t\tif (!pjob)\n\t\t\tcontinue;\n\n\t\tif (jt == IS_ARRAY_Range) {\n\t\t\t/* append fqdn to range jid\n\t\t\t * the result is e.g.:\n\t\t\t * '123[1-3].full.domain.name'\n\t\t\t */\n\t\t\tsprintf(range_jid, \"%s\", jlist[tail]);\n\t\t\tp1 = strchr(pjob->ji_qs.ji_jobid, '.');\n\t\t\tif (p1) {\n\t\t\t\tp2 = strchr(range_jid, '.');\n\t\t\t\tif (p2)\n\t\t\t\t\t*p2 = '\\0';\n\t\t\t\tstrncat(range_jid, p1, PBS_MAXSVRJOBID - 1);\n\t\t\t}\n\t\t\tfree(jlist[tail]);\n\t\t\tjlist[tail] = strdup(range_jid);\n\t\t\tif (jlist[tail] == NULL) {\n\t\t\t\treturn 1;\n\t\t\t}\n\n\t\t\tjid = jlist[tail];\n\t\t} else {\n\t\t\t/* use jid with fqdn */\n\t\t\tif (strcmp(jlist[tail], pjob->ji_qs.ji_jobid) != 0) {\n\t\t\t\tfree(jlist[tail]);\n\t\t\t\tjlist[tail] = strdup(pjob->ji_qs.ji_jobid);\n\t\t\t\tif (jlist[tail] == NULL)\n\t\t\t\t\treturn 1;\n\t\t\t}\n\n\t\t\tjid = jlist[tail];\n\t\t}\n\n\t\tif (pbs_idx_insert(preply->brp_un.brp_deletejoblist.undeleted_job_idx, jid, NULL) != PBS_IDX_RET_OK) {\n\t\t\treturn 1;\n\t\t}\n\n\t\tif (head == -1) {\n\t\t\tif (get_job_state(pjob) != JOB_STATE_LTR_QUEUED) 
{\n\t\t\t\thead = tail;\n\t\t\t}\n\t\t\tcontinue;\n\t\t}\n\n\t\tif (get_job_state(pjob) == JOB_STATE_LTR_QUEUED) {\n\t\t\ttemp = jlist[head];\n\t\t\tjlist[head++] = jlist[tail];\n\t\t\tjlist[tail] = temp;\n\t\t}\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief Delete any pending array jobs that are part of the delete request\n * \tfrom the reply index.\n *\n * @param[in] preq - request structure\n *\n * @return bool\n * @retval TRUE: no more jobs to be deleted\n * @retval FALSE: more jobs pending to be deleted\n */\nbool\ndelete_pending_arrayjobs(struct batch_request *preq)\n{\n\tvoid *idx;\n\tvoid *idx_ctx = NULL;\n\tchar *jid = NULL;\n\tstruct batch_reply *preply = &preq->rq_reply;\n\tchar **data = NULL;\n\n\tidx = preply->brp_un.brp_deletejoblist.undeleted_job_idx;\n\n\twhile (pbs_idx_find(idx, (void **) &jid, (void **) &data, &idx_ctx) == PBS_IDX_RET_OK) {\n\t\t/* check jid before classifying it so is_job_array() never sees NULL */\n\t\tif (jid) {\n\t\t\tint job_type = is_job_array(jid);\n\t\t\tif (job_type == IS_ARRAY_ArrayJob || job_type == IS_ARRAY_Range || job_type == IS_ARRAY_Single)\n\t\t\t\tpbs_idx_delete(idx, jid);\n\t\t}\n\t}\n\n\tpbs_idx_free_ctx(idx_ctx);\n\n\treturn all_jobs_deleted(preply);\n}\n\n/**\n * @brief\n * \t\treq_deletejob - service the Delete Job Request\n *\n *\t\tThis request deletes a job.\n *\n * @param[in]\tpreq\t- Job Request\n */\nvoid\nreq_deletejob(struct batch_request *preq)\n{\n\tint forcedel = 0;\n\tint i;\n\tchar jid[PBS_MAXSVRJOBID + 1];\n\tint jt; /* job type */\n\tchar *pc;\n\tjob *pjob;\n\tjob *parent;\n\tchar *range;\n\tchar sjst; /* subjob state */\n\tint rc = 0;\n\tint delhist = 0;\n\tint err = PBSE_NONE;\n\tchar **jobids;\n\tint count;\n\tint j;\n\tstruct batch_reply *preply = &preq->rq_reply;\n\tpreply->brp_un.brp_deletejoblist.brp_delstatc = NULL;\n\tpreply->brp_count = 0;\n\tint start_jobid = 0;\n\ttime_t begin_time = time(NULL);\n\n\tif (preq->rq_type == PBS_BATCH_DeleteJobList) {\n\n\t\tif (!preq->rq_ind.rq_deletejoblist.rq_resume) {\n\t\t\tif (init_deljoblist(preq) != 0) {\n\t\t\t\tlog_eventf(PBSEVENT_DEBUG3, 
PBS_EVENTCLASS_SERVER, LOG_DEBUG, __func__,\n\t\t\t\t\t\t\"Error while initializing deljoblist operation (init_deljoblist failed)\");\n\t\t\t\treq_reject(PBSE_INTERNAL, 0, preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else\n\t\t\tstart_jobid = preq->rq_ind.rq_deletejoblist.jobid_to_resume;\n\n\t\tpreply->brp_choice = BATCH_REPLY_CHOICE_Delete;\n\t\tjobids = preq->rq_ind.rq_deletejoblist.rq_jobslist;\n\t\tcount = preq->rq_ind.rq_deletejoblist.rq_count;\n\t} else {\n\t\tjobids = break_comma_list(preq->rq_ind.rq_delete.rq_objname);\n\t\tcount = 1;\n\t}\n\n\tif (preq->rq_extend && strstr(preq->rq_extend, DELETEHISTORY))\n\t\tdelhist = 1;\n\tif (preq->rq_extend && strstr(preq->rq_extend, FORCE))\n\t\tforcedel = 1;\n\n\t/* When the nomail, nomail_force, nomail_deletehist or nomailforce_deletehist\n\t * option is set, no mail is sent.\n\t */\n\tif (preq->rq_extend && strstr(preq->rq_extend, NOMAIL))\n\t\tqdel_mail = false;\n\n\tfor (j = start_jobid; j < count; j++) {\n\n\t\tif (j != start_jobid) {\n\t\t\tpreq->rq_ind.rq_deletejoblist.rq_resume = FALSE;\n\t\t\tpreq->rq_ind.rq_deletejoblist.jobid_to_resume = j;\n\t\t}\n\n\t\tif ((time(NULL) - begin_time) > QDEL_BREAKER_SECS) {\n\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SERVER, LOG_DEBUG, __func__,\n\t\t\t\t   \"req_delete has been running for %d seconds, Pausing for other requests\",\n\t\t\t\t   QDEL_BREAKER_SECS);\n\t\t\tset_task(WORK_Interleave, 0, resume_deletion, preq);\n\t\t\treturn;\n\t\t}\n\n\t\tsnprintf(jid, sizeof(jid), \"%s\", jobids[j]);\n\t\tparent = chk_job_request(jid, preq, &jt, &err);\n\t\tif (parent == NULL) {\n\t\t\tpjob = find_job(jid);\n\t\t\tif (pjob != NULL && pjob->ji_pmt_preq != NULL)\n\t\t\t\treply_preempt_jobs_request(err, PREEMPT_METHOD_DELETE, pjob);\n\t\t\tif (update_deljob_rply(preq, jid, err)) {\n\t\t\t\tif (preq->rq_type == PBS_BATCH_DeleteJobList)\n\t\t\t\t\treq_reject(err, 0, preq); /* note: req_reject is only called for the DeleteJobList case */\n\t\t\t\treturn;\n\t\t\t} 
else\n\t\t\t\tcontinue;\n\t\t}\n\n\t\tif (delhist) {\n\t\t\trc = check_deletehistoryjob(preq, jid);\n\t\t\tif (rc == TRUE)\n\t\t\t\tcontinue;\n\t\t}\n\n\t\tif (jt == IS_ARRAY_NO) {\n\n\t\t\t/* just a regular job, pass it on down the line and be done\n\t\t\t * If the request is to purge the history of the job then set ji_deletehistory to 1\n\t\t\t */\n\t\t\tif (delhist)\n\t\t\t\tparent->ji_deletehistory = 1;\n\t\t\tif (req_deletejob2(preq, parent) == 2)\n\t\t\t\treturn;\n\t\t\tcontinue;\n\n\t\t} else if (jt == IS_ARRAY_Single) {\n\t\t\t/* single subjob, if running do full delete, */\n\t\t\t/* if not then just set it expired\t\t */\n\n\t\t\tpjob = get_subjob_and_state(parent, get_index_from_jid(jid), &sjst, NULL);\n\t\t\tif (sjst == JOB_STATE_LTR_UNKNOWN) {\n\t\t\t\tif (update_deljob_rply(preq, jid, PBSE_IVALREQ)) {\n\t\t\t\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\t\t\t\treturn;\n\t\t\t\t} else\n\t\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tif ((sjst == JOB_STATE_LTR_EXITING) && (forcedel == 0)) {\n\t\t\t\tif (parent->ji_pmt_preq != NULL) {\n\t\t\t\t\tpjob = find_job(jid);\n\t\t\t\t\treply_preempt_jobs_request(PBSE_BADSTATE, PREEMPT_METHOD_DELETE, pjob);\n\t\t\t\t}\n\t\t\t\tif (update_deljob_rply(preq, jid, PBSE_BADSTATE)) {\n\t\t\t\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\t\t\t\treturn;\n\t\t\t\t} else\n\t\t\t\t\tcontinue;\n\t\t\t} else if (sjst == JOB_STATE_LTR_EXPIRED) {\n\t\t\t\tif (update_deljob_rply(preq, jid, PBSE_NOHISTARRAYSUBJOB)) {\n\t\t\t\t\treq_reject(PBSE_NOHISTARRAYSUBJOB, 0, preq);\n\t\t\t\t\treturn;\n\t\t\t\t} else\n\t\t\t\t\tcontinue;\n\t\t\t} else if (pjob != NULL) {\n\t\t\t\t/*\n\t\t\t\t* If the request is to also purge the history of the sub job then set ji_deletehistory to 1\n\t\t\t\t*/\n\t\t\t\tif (delhist)\n\t\t\t\t\tpjob->ji_deletehistory = 1;\n\t\t\t\tif (req_deletejob2(preq, pjob) == 2)\n\t\t\t\t\treturn;\n\t\t\t} else {\n\t\t\t\tupdate_sj_parent(parent, NULL, jid, sjst, JOB_STATE_LTR_EXPIRED);\n\t\t\t\tacct_del_write(jid, parent, preq, 
0);\n\t\t\t\tparent->ji_ajinfo->tkm_dsubjsct++;\n\t\t\t\tdecr_single_subjob_usage(parent);\n\t\t\t\tif (update_deljob_rply(preq, jid, PBSE_NONE))\n\t\t\t\t\treply_ack(preq);\n\t\t\t}\n\t\t\tchk_array_doneness(parent);\n\t\t\tcontinue;\n\n\t\t} else if (jt == IS_ARRAY_ArrayJob) {\n\t\t\tint del_parent = 1;\n\t\t\tint start = parent->ji_ajinfo->tkm_start;\n\n\t\t\t/*\n\t\t\t * For array jobs the history is stored at the parent array level and also at the subjob level .\n\t\t\t * If the request is to delete the history of an array job then set  ji_deletehistory to 1 for\n\t\t\t * the parent array.The function chk_array_doneness() will take care of eventually\n\t\t\t *  purging the history .\n\t\t\t */\n\t\t\tif (delhist)\n\t\t\t\tparent->ji_deletehistory = 1;\n\t\t\t/* The Array Job itself ... */\n\t\t\t/* for each subjob that is running, delete it via req_deletejob2 */\n\n\t\t\tif (preq->rq_ind.rq_deletejoblist.rq_resume)\n\t\t\t\tstart = preq->rq_ind.rq_deletejoblist.subjobid_to_resume;\n\t\t\telse {\n\t\t\t\t/* in case of resume counts are incremented already */\n\t\t\t\t++preq->rq_refct;\n\t\t\t}\n\n\t\t\t/* keep the array from being removed while we are looking at it */\n\t\t\tparent->ji_ajinfo->tkm_flags |= TKMFLG_NO_DELETE;\n\t\t\tfor (i = start; i <= parent->ji_ajinfo->tkm_end; i += parent->ji_ajinfo->tkm_step) {\n\n\t\t\t\tif ((time(NULL) - begin_time) > QDEL_BREAKER_SECS) {\n\t\t\t\t\tpreq->rq_ind.rq_deletejoblist.subjobid_to_resume = i;\n\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_SERVER, LOG_DEBUG, __func__,\n\t\t\t\t\t\t   \"req_delete has been running for %d seconds, Pausing for other requests\",\n\t\t\t\t\t\t   QDEL_BREAKER_SECS);\n\t\t\t\t\tset_task(WORK_Interleave, 0, resume_deletion, preq);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tpjob = get_subjob_and_state(parent, i, &sjst, NULL);\n\t\t\t\tif (sjst == JOB_STATE_LTR_UNKNOWN)\n\t\t\t\t\tcontinue;\n\t\t\t\tif ((sjst == JOB_STATE_LTR_EXITING) && !forcedel)\n\t\t\t\t\tcontinue;\n\t\t\t\tif (pjob) 
{\n\t\t\t\t\tif (delhist)\n\t\t\t\t\t\tpjob->ji_deletehistory = 1;\n\t\t\t\t\tif (check_job_state(pjob, JOB_STATE_LTR_EXPIRED)) {\n\t\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t\t\t   pjob->ji_qs.ji_jobid,\n\t\t\t\t\t\t\t   msg_job_history_delete, preq->rq_user,\n\t\t\t\t\t\t\t   preq->rq_host);\n\t\t\t\t\t\tjob_purge(pjob);\n\t\t\t\t\t} else {\n\t\t\t\t\t\tif (dup_br_for_subjob(preq, pjob, req_deletejob2) == 2)\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\tdel_parent = 0;\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\t/* Queued, Waiting, Held, just set to expired */\n\t\t\t\t\tif (sjst != JOB_STATE_LTR_EXPIRED) {\n\t\t\t\t\t\tupdate_sj_parent(parent, NULL, create_subjob_id(parent->ji_qs.ji_jobid, i), sjst, JOB_STATE_LTR_EXPIRED);\n\t\t\t\t\t\tdecr_single_subjob_usage(parent);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tparent->ji_ajinfo->tkm_flags &= ~TKMFLG_NO_DELETE;\n\n\t\t\t/* if deleting running subjobs, then just return;              */\n\t\t\t/* parent will be deleted when last running subjob(s) ends     */\n\t\t\t/* and reply will be sent to client when last delete is done   */\n\t\t\t/* if not deleting running subjobs, req_deletejob2 deletes the */\n\t\t\t/* parent                                                      */\n\n\t\t\tif (--preq->rq_refct == 0) {\n\t\t\t\tif ((parent = find_job(jid)) != NULL) {\n\t\t\t\t\tif (req_deletejob2(preq, parent) == 2) {\n\t\t\t\t\t\tpreq->rq_ind.rq_deletejoblist.subjobid_to_resume = parent->ji_ajinfo->tkm_end;\n\t\t\t\t\t\t++preq->rq_refct;\n\t\t\t\t\t\treturn;\n\t\t\t\t\t}\n\t\t\t\t\tdel_parent = 0;\n\t\t\t\t} else if (update_deljob_rply(preq, jid, PBSE_NONE))\n\t\t\t\t\treply_send(preq);\n\t\t\t} else\n\t\t\t\tacct_del_write(jid, parent, preq, 0);\n\n\t\t\tif (del_parent == 1) {\n\t\t\t\tif ((parent = find_job(jid)) != NULL) {\n\t\t\t\t\tif (req_deletejob2(preq, parent) == 2) {\n\t\t\t\t\t\tpreq->rq_ind.rq_deletejoblist.subjobid_to_resume = 
parent->ji_ajinfo->tkm_end;\n\t\t\t\t\t\t++preq->rq_refct;\n\t\t\t\t\t\treturn;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tcontinue;\n\t\t}\n\t\t/* what's left to handle is a range of subjobs, foreach subjob \t*/\n\t\t/* if running, do full delete, else just update state\t        */\n\n\t\trange = get_range_from_jid(jid);\n\t\tif (range == NULL) {\n\t\t\tif (update_deljob_rply(preq, jid, PBSE_IVALREQ)) {\n\t\t\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\t\t\treturn;\n\t\t\t} else\n\t\t\t\tcontinue;\n\t\t}\n\n\t\t++preq->rq_refct;\n\t\twhile (1) {\n\t\t\tint start;\n\t\t\tint end;\n\t\t\tint step;\n\t\t\tint count;\n\n\t\t\tif ((i = parse_subjob_index(range, &pc, &start, &end, &step, &count)) == -1) {\n\t\t\t\tif (update_deljob_rply(preq, jid, PBSE_IVALREQ))\n\t\t\t\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\t\t\tbreak;\n\t\t\t} else if (i == 1)\n\t\t\t\tbreak;\n\n\t\t\t/*\n\t\t\t * Ensure that the range specified in the delete job request does not exceed the\n\t\t\t * index of the highest numbered array subjob\n\t\t\t */\n\t\t\tif (start < parent->ji_ajinfo->tkm_start || start > parent->ji_ajinfo->tkm_end) {\n\t\t\t\tif (update_deljob_rply(preq, jid, PBSE_UNKJOBID))\n\t\t\t\t\treq_reject(PBSE_UNKJOBID, 0, preq);\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tfor (i = start; i <= end; i += step) {\n\t\t\t\tpjob = get_subjob_and_state(parent, i, &sjst, NULL);\n\t\t\t\tif (sjst == JOB_STATE_LTR_UNKNOWN)\n\t\t\t\t\tcontinue;\n\n\t\t\t\tif ((sjst == JOB_STATE_LTR_EXITING) && !forcedel)\n\t\t\t\t\tcontinue;\n\n\t\t\t\tif (pjob) {\n\t\t\t\t\tif (delhist)\n\t\t\t\t\t\tpjob->ji_deletehistory = 1;\n\t\t\t\t\tif (check_job_state(pjob, JOB_STATE_LTR_EXPIRED)) {\n\t\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid, msg_job_history_delete, preq->rq_user, preq->rq_host);\n\t\t\t\t\t\tjob_purge(pjob);\n\t\t\t\t\t} else {\n\t\t\t\t\t\tif (dup_br_for_subjob(preq, pjob, req_deletejob2) == 2)\n\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\t/* 
Queued, Waiting, Held, just set to expired */\n\t\t\t\t\tif (sjst != JOB_STATE_LTR_EXPIRED) {\n\t\t\t\t\t\tupdate_sj_parent(parent, NULL, create_subjob_id(parent->ji_qs.ji_jobid, i), sjst, JOB_STATE_LTR_EXPIRED);\n\t\t\t\t\t\tdecr_single_subjob_usage(parent);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\trange = pc;\n\t\t}\n\t\tif (i != -1) {\n\t\t\tsprintf(log_buffer, msg_manager, msg_deletejob,\n\t\t\t\tpreq->rq_user, preq->rq_host);\n\t\t\tif (qdel_mail != 0) {\n\t\t\t\tsvr_mailowner_id(jid, parent, MAIL_OTHER, MAIL_FORCE, log_buffer);\n\t\t\t}\n\t\t}\n\n\t\t/* if deleting running subjobs, then just return;            */\n\t\t/* parent will be deleted when last running subjob(s) ends   */\n\t\t/* and reply will be sent to client when last delete is done */\n\n\t\tif (--preq->rq_refct == 0) {\n\t\t\tif (update_deljob_rply(preq, jid, PBSE_NONE))\n\t\t\t\treply_send(preq);\n\t\t\tchk_array_doneness(parent);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\treq_deletejob2 - service the Delete Job Request\n *\n *\t\tThis request deletes a job.\n *\n * @param[in]\tpreq\t- Job Request\n * @param[in,out]\tpjob\t- Job structure\n *\n * @return int\n * @retval 0 for Success\n * @retval 1 for Error\n * @retval 2 if deljoblist needs to be suspended\n */\n\nstatic int\nreq_deletejob2(struct batch_request *preq, job *pjob)\n{\n\tint abortjob = 0;\n\tchar *sig;\n\tchar hook_msg[HOOK_MSG_SIZE] = {0};\n\tchar *rec = \"\";\n\tint forcedel = 0;\n\tstruct work_task *pwtold;\n\tstruct work_task *pwtnew;\n\tstruct batch_request *temp_preq = NULL;\n\tint rc;\n\tint is_mgr = 0;\n\tint jt;\n\tchar *job_id = NULL;\n\n\t/* + 2 is for the '@' in user@host and for the null termination byte. 
*/\n\tchar by_user[PBS_MAXUSER + PBS_MAXHOSTNAME + 2] = {'\\0'};\n\n\t/* active job is being deleted by delete job batch request */\n\tpjob->ji_terminated = 1;\n\tif ((preq->rq_user[0] != '\\0') && (preq->rq_host[0] != '\\0'))\n\t\tsprintf(by_user, \"%s@%s\", preq->rq_user, preq->rq_host);\n\n\tif ((preq->rq_extend && strstr(preq->rq_extend, FORCE)))\n\t\tforcedel = 1;\n\n\t/* See if the request is coming from a manager */\n\tif (preq->rq_perm & (ATR_DFLAG_MGRD | ATR_DFLAG_MGWR))\n\t\tis_mgr = 1;\n\n\tjt = is_job_array(pjob->ji_qs.ji_jobid);\n\n\tif (check_job_state(pjob, JOB_STATE_LTR_TRANSIT)) {\n\n\t\t/*\n\t\t * Find pid of router from existing work task entry,\n\t\t * then establish another work task on same child.\n\t\t * Next, signal the router and wait for its completion;\n\t\t */\n\n\t\tpwtold = (struct work_task *) GET_NEXT(pjob->ji_svrtask);\n\t\twhile (pwtold) {\n\t\t\tif ((pwtold->wt_type == WORK_Deferred_Child) ||\n\t\t\t    (pwtold->wt_type == WORK_Deferred_Cmp)) {\n\t\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid,\n\t\t\t\t\t   \"Job in Transit state, deletion will resume once we hear back!\");\n\t\t\t\tpwtnew = set_task(pwtold->wt_type,\n\t\t\t\t\t\t  pwtold->wt_event, resume_deletion,\n\t\t\t\t\t\t  preq);\n\t\t\t\tif (pwtnew) {\n\t\t\t\t\t/*\n\t\t\t\t\t * reset type in case the SIGCHLD came\n\t\t\t\t\t * in during the set_task;  it makes\n\t\t\t\t\t * sure that next_task() will find the\n\t\t\t\t\t * new entry.\n\t\t\t\t\t */\n\t\t\t\t\tpwtnew->wt_type = pwtold->wt_type;\n\t\t\t\t\tpwtnew->wt_aux = pwtold->wt_aux;\n\n\t\t\t\t\tkill((pid_t) pwtold->wt_event, SIGTERM);\n\t\t\t\t\tset_job_substate(pjob, JOB_SUBSTATE_ABORT);\n\t\t\t\t\tif (preq->rq_type == PBS_BATCH_DeleteJobList) {\n\t\t\t\t\t\t/* let the caller know that the deljoblist request needs to be suspended */\n\t\t\t\t\t\treturn 2;\n\t\t\t\t\t}\n\t\t\t\t\treturn 0; /* all done for now */\n\n\t\t\t\t} else {\n\t\t\t\t\tif (pjob->ji_pmt_preq != 
NULL)\n\t\t\t\t\t\treply_preempt_jobs_request(PBSE_SYSTEM, PREEMPT_METHOD_DELETE, pjob);\n\n\t\t\t\t\tif (update_deljob_rply(preq, pjob->ji_qs.ji_jobid, PBSE_SYSTEM)) {\n\t\t\t\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\t\t\t\treturn 0;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tpwtold = (struct work_task *) GET_NEXT(pwtold->wt_linkobj);\n\t\t}\n\t\t/* should never get here ...  */\n\t\tlog_err(-1, \"req_delete\", \"Did not find work task for router\");\n\t\tif (pjob->ji_pmt_preq != NULL)\n\t\t\treply_preempt_jobs_request(PBSE_INTERNAL, PREEMPT_METHOD_DELETE, pjob);\n\n\t\tif (update_deljob_rply(preq, pjob->ji_qs.ji_jobid, PBSE_INTERNAL)) {\n\t\t\treq_reject(PBSE_INTERNAL, 0, preq);\n\t\t\treturn 0;\n\t\t}\n\n\t} else if (((jt != IS_ARRAY_Range) && (jt != IS_ARRAY_Single)) &&\n\t\t   (check_job_state(pjob, JOB_STATE_LTR_QUEUED) ||\n\t\t    check_job_state(pjob, JOB_STATE_LTR_HELD))) {\n\t\tdepend_runone_remove_dependency(pjob);\n\t}\n\n\tif (is_mgr && forcedel) {\n\t\t/*\n\t\t * Set exit status for the job to SIGKILL as we will not be working with any obit.\n\t\t */\n\t\tpjob->ji_qs.ji_un.ji_exect.ji_exitstat = SIGKILL + 0x100;\n\t}\n\n\tif (check_job_state(pjob, JOB_STATE_LTR_RUNNING) ||\n\t    (check_job_substate(pjob, JOB_SUBSTATE_TERM))) {\n\t\tif (check_job_substate(pjob, JOB_SUBSTATE_RERUN)) {\n\t\t\t/* rerun just started, clear that substate and */\n\t\t\t/* normal delete will happen when mom replies  */\n\n\t\t\tset_job_substate(pjob, JOB_SUBSTATE_RUNNING);\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  pjob->ji_qs.ji_jobid, \"deleting instead of rerunning\");\n\t\t\tacct_del_write(pjob->ji_qs.ji_jobid, pjob, preq, 0);\n\t\t\tif (update_deljob_rply(preq, pjob->ji_qs.ji_jobid, PBSE_NONE))\n\t\t\t\treply_ack(preq);\n\t\t\treturn 0;\n\t\t}\n\n\t\tif (((check_job_substate(pjob, JOB_SUBSTATE_SUSPEND)) ||\n\t\t     (check_job_substate(pjob, JOB_SUBSTATE_SCHSUSP))) &&\n\t\t    (is_jattr_set(pjob, JOB_ATR_resc_released))) 
{\n\t\t\tset_resc_assigned(pjob, 0, INCR);\n\t\t\tfree_jattr(pjob, JOB_ATR_resc_released);\n\t\t\tmark_jattr_not_set(pjob, JOB_ATR_resc_released);\n\t\t\tif (is_jattr_set(pjob, JOB_ATR_resc_released_list)) {\n\t\t\t\tfree_jattr(pjob, JOB_ATR_resc_released_list);\n\t\t\t\tmark_jattr_not_set(pjob, JOB_ATR_resc_released_list);\n\t\t\t}\n\t\t}\n\n\t\tif (check_job_substate(pjob, JOB_SUBSTATE_PROVISION)) {\n\t\t\tif (forcedel) {\n\t\t\t\t/*\n\t\t\t\t * discard_job not called since job not sent\n\t\t\t\t * to MOM\n\t\t\t\t */\n\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t  LOG_INFO,\n\t\t\t\t\t  pjob->ji_qs.ji_jobid, \"deleting job\");\n\t\t\t\tacct_del_write(pjob->ji_qs.ji_jobid, pjob,\n\t\t\t\t\t       preq, 0);\n\t\t\t\tif (update_deljob_rply(preq, pjob->ji_qs.ji_jobid, PBSE_NONE))\n\t\t\t\t\treply_ack(preq);\n\t\t\t\trel_resc(pjob);\n\t\t\t\tjob_abt(pjob, NULL);\n\t\t\t} else {\n\t\t\t\tif (pjob->ji_pmt_preq != NULL)\n\t\t\t\t\treply_preempt_jobs_request(PBSE_BADSTATE, PREEMPT_METHOD_DELETE, pjob);\n\t\t\t\tif (update_deljob_rply(preq, pjob->ji_qs.ji_jobid, PBSE_BADSTATE))\n\t\t\t\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\t\t}\n\t\t\treturn 0;\n\t\t}\n\n\t\t/*\n\t\t * Job is in fact running, so we want to terminate it.\n\t\t *\n\t\t * Send signal request to MOM.  
The server will automagically\n\t\t * pick up and \"finish\" off the client request when MOM replies.\n\t\t * If not \"force\", send the special termjob signal;\n\t\t * if \"force\", send SIGKILL.\n\t\t */\n\t\tif (forcedel)\n\t\t\tsig = sigk;\n\t\telse\n\t\t\tsig = sigtj;\n\n\t\tif (is_mgr && forcedel)\n\t\t\ttemp_preq = NULL;\n\t\telse\n\t\t\ttemp_preq = preq;\n\n\t\trc = issue_signal(pjob, sig, post_delete_mom1, temp_preq);\n\n\t\t/*\n\t\t * If forcedel is set and request is from a manager,\n\t\t * job is deleted from server regardless of\n\t\t * whether issue_signal to MoM succeeded or failed.\n\t\t * Eventually, when the mom updates server about the job,\n\t\t * server sends a discard message to mom and job is then\n\t\t * deleted from mom as well.\n\t\t */\n\t\tif ((rc || is_mgr) && forcedel) {\n\t\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_EXITING, JOB_SUBSTATE_EXITED);\n\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0)\n\t\t\t\tissue_track(pjob);\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  pjob->ji_qs.ji_jobid, \"Delete forced\");\n\t\t\tacct_del_write(pjob->ji_qs.ji_jobid, pjob, preq, 0);\n\n\t\t\t/*\n\t\t\t * If we are waiting for preemption to be complete and someone does a qdel -Wforce\n\t\t\t * we need to reply back to the scheduler.  We need to reply success so we don't\n\t\t\t * attempt another preemption method.  
This leads to a minor race condition\n\t\t\t * where the moms might not be finished cleaning up when the high priority job runs.\n\t\t\t */\n\t\t\tif (pjob->ji_pmt_preq != NULL)\n\t\t\t\treply_preempt_jobs_request(PBSE_NONE, PREEMPT_METHOD_DELETE, pjob);\n\n\t\t\tif (preq->rq_parentbr)\n\t\t\t\treply_ack(preq);\n\t\t\telse {\n\t\t\t\tif (update_deljob_rply(preq, pjob->ji_qs.ji_jobid, PBSE_NONE))\n\t\t\t\t\treply_ack(preq);\n\t\t\t}\n\n\t\t\tpjob->ji_qs.ji_obittime = time_now;\n\t\t\tset_jattr_l_slim(pjob, JOB_ATR_obittime, pjob->ji_qs.ji_obittime, SET);\n\n\t\t\t/* Allocate space for the jobobit hook event params */\n\t\t\ttemp_preq = alloc_br(PBS_BATCH_JobObit);\n\t\t\tif (temp_preq == NULL) {\n\t\t\t\tlog_err(PBSE_INTERNAL, __func__, \"rq_jobobit alloc failed\");\n\t\t\t} else {\n\t\t\t\ttemp_preq->rq_ind.rq_obit.rq_pjob = pjob;\n\t\t\t\trc = process_hooks(temp_preq, hook_msg, sizeof(hook_msg), pbs_python_set_interrupt);\n\t\t\t\tif (rc == -1) {\n\t\t\t\t\tlog_err(-1, __func__, \"rq_jobobit process_hooks call failed\");\n\t\t\t\t}\n\t\t\t\tfree_br(temp_preq);\n\t\t\t}\n\t\t\tdiscard_job(pjob, \"Forced Delete\", 1);\n\t\t\trel_resc(pjob);\n\n\t\t\taccount_job_update(pjob, PBS_ACCT_LAST);\n\t\t\taccount_jobend(pjob, pjob->ji_acctrec, PBS_ACCT_END);\n\n\t\t\tif (pjob->ji_acctrec)\n\t\t\t\trec = pjob->ji_acctrec;\n\n\t\t\tif (get_sattr_long(SVR_ATR_log_events) & PBSEVENT_JOB_USAGE) {\n\t\t\t\t/* log events set to record usage */\n\t\t\t\tlog_event(PBSEVENT_JOB_USAGE,\n\t\t\t\t\t  PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t  pjob->ji_qs.ji_jobid, rec);\n\t\t\t} else {\n\t\t\t\tchar *pc;\n\n\t\t\t\t/* no usage in log, truncate message */\n\n\t\t\t\tif ((pc = strchr(rec, ' ')) != NULL)\n\t\t\t\t\t*pc = '\\0';\n\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t  pjob->ji_qs.ji_jobid, rec);\n\t\t\t}\n\n\t\t\tif (is_mgr) {\n\t\t\t\t/*\n\t\t\t\t * Set exit status for the job to SIGKILL as we will not be working with any 
obit.\n\t\t\t\t */\n\t\t\t\tset_jattr_l_slim(pjob, JOB_ATR_exit_status, pjob->ji_qs.ji_un.ji_exect.ji_exitstat, SET);\n\t\t\t}\n\n\t\t\t/* see if it has any dependencies */\n\t\t\tif (is_jattr_set(pjob, JOB_ATR_depend))\n\t\t\t\tdepend_on_term(pjob);\n\n\t\t\t/*\n\t\t\t * Check if the history of the finished job can be saved or it needs to be purged.\n\t\t\t */\n\t\t\tsvr_saveorpurge_finjobhist(pjob);\n\t\t\treturn 0;\n\t\t}\n\t\tif (rc) {\n\t\t\tif (pjob->ji_pmt_preq != NULL)\n\t\t\t\treply_preempt_jobs_request(rc, PREEMPT_METHOD_DELETE, pjob);\n\t\t\tif (update_deljob_rply(preq, pjob->ji_qs.ji_jobid, rc))\n\t\t\t\treq_reject(rc, 0, preq); /* can't send to MOM */\n\t\t\tsprintf(log_buffer, \"Delete failed %d\", rc);\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_NOTICE,\n\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\treturn 0;\n\t\t}\n\t\t/* normally will ack reply when mom responds */\n\t\tupdate_job_finish_comment(pjob, JOB_SUBSTATE_TERMINATED, by_user);\n\t\tsprintf(log_buffer, msg_delrunjobsig, sig);\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\treturn 0;\n\t} else if ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_CHKPT) != 0) {\n\n\t\t/* job has restart file at mom, do end job processing */\n\n\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_EXITING, JOB_SUBSTATE_EXITING);\n\t\tpjob->ji_momhandle = -1; /* force new connection */\n\t\tpjob->ji_mom_prot = PROT_INVALID;\n\t\tset_task(WORK_Immed, 0, on_job_exit, (void *) pjob);\n\n\t} else if ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_StagedIn) != 0) {\n\n\t\t/* job has staged-in files, which should be removed */\n\t\tremove_stagein(pjob);\n\t\tabortjob = 1; /* set flag to abort job after mail sent */\n\n\t} else {\n\n\t\t/*\n\t\t * the job is not transiting (though it may have been) and\n\t\t * is not running, so abort it.\n\t\t */\n\t\tabortjob = 1; /* set flag to abort job after mail sent */\n\t}\n\t/*\n\t * Log delete and if requesting client is not job owner, 
send mail.\n\t */\n\n\tacct_del_write(pjob->ji_qs.ji_jobid, pjob, preq, 0);\n\tjob_id = strdup(pjob->ji_qs.ji_jobid);\n\n\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_ArrayJob) {\n\t\tif (forcedel) {\n\t\t\t/* If the array job is being force deleted, then reset the subjob\n\t\t\t   state counts so that chk_array_doneness() doesn't consider the\n\t\t\t   array job to still have active subjobs. */\n\t\t\tpjob->ji_ajinfo->tkm_subjsct[JOB_STATE_QUEUED] = 0;\n\t\t\tpjob->ji_ajinfo->tkm_subjsct[JOB_STATE_RUNNING] = 0;\n\t\t\tpjob->ji_ajinfo->tkm_subjsct[JOB_STATE_HELD] = 0;\n\t\t\tpjob->ji_ajinfo->tkm_subjsct[JOB_STATE_EXITING] = 0;\n\t\t}\n\t\tchk_array_doneness(pjob);\n\t} else if (abortjob) {\n\t\tif (check_job_state(pjob, JOB_STATE_LTR_EXITING))\n\t\t\tdiscard_job(pjob, \"Forced Delete\", 1);\n\t\trel_resc(pjob);\n\t\tif (pjob->ji_pmt_preq != NULL)\n\t\t\treply_preempt_jobs_request(PBSE_NONE, PREEMPT_METHOD_DELETE, pjob);\n\t\tjob_abt(pjob, NULL);\n\t}\n\n\tif (preq->rq_parentbr)\n\t\treply_send(preq);\n\telse {\n\t\tif (update_deljob_rply(preq, job_id, PBSE_NONE))\n\t\t\treply_send(preq);\n\t}\n\n\tfree(job_id);\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\treq_reservationOccurrenceEnd - service the PBS_BATCH_ResvOccurEnd Request\n *\n *\t\tThis request runs a hook script at the end of the reservation occurrence\n *\n * @param[in]\tpreq\t- Job Request\n *\n */\n\nvoid\nreq_reservationOccurrenceEnd(struct batch_request *preq)\n{\n\tchar hook_msg[HOOK_MSG_SIZE] = {0};\n\n\tswitch (process_hooks(preq, hook_msg, sizeof(hook_msg), pbs_python_set_interrupt)) {\n\t\tcase 0: /* explicit reject */\n\t\t\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\t\t\tbreak;\n\t\tcase 1: /* no recreate request as there are only read permissions */\n\t\tcase 2: /* no hook script executed - go ahead and accept event*/\n\t\t\treply_ack(preq);\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_INFO, __func__, \"resv_end event: accept req by 
default\");\n\t\t\treply_ack(preq);\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \t\treq_deleteReservation - service the PBS_BATCH_DeleteResv Request\n *\n *\t\tThis request deletes a resources reservation if the requester\n *\t\tis authorized to do this.\n *\n * @param[in]\tpreq\t- Job Request\n *\n * @par\tMT-safe: No\n */\n\nvoid\nreq_deleteReservation(struct batch_request *preq)\n{\n\tstatic int lenF = 6; /*strlen (\"False\") + 1*/\n\n\tresc_resv *presv;\n\tjob *pjob;\n\tstruct batch_request *newreq;\n\tstruct work_task *pwt;\n\n\tchar buf[PBS_MAXHOSTNAME + PBS_MAXUSER + 2]; /* temp, possibly remove in future */\n\tchar user[PBS_MAXUSER + 1];\n\tchar host[PBS_MAXHOSTNAME + 1];\n\tint perm;\n\tint relVal;\n\tint state, sub;\n\tlong futuredr;\n\n\t/*Does resc_resv object exist and requester have enough priviledge?*/\n\n\tpresv = chk_rescResv_request(preq->rq_ind.rq_manager.rq_objname, preq);\n\n\t/*Note: on failure, chk_rescResv_request invokes req_reject\n\t *Appropriate reply got sent & batch_request got freed\n\t */\n\tif (presv == NULL)\n\t\treturn;\n\n\t/*Know resc_resv struct exists and requester allowed to remove it*/\n\tfuturedr = presv->ri_futuredr;\n\tpresv->ri_futuredr = 0; /*would be non-zero if getting*/\n\t/*here from task_list_timed*/\n\tstrcpy(user, preq->rq_user); /*need after request is gone*/\n\tstrcpy(host, preq->rq_host);\n\tperm = preq->rq_perm;\n\n\t/*Generate message(s) to reservation owner (listed users) as appropriate\n\t *according to what was requested in the mailpoints attribute and who\n\t *the submitter of the request happens to be (user, scheduler, or us)\n\t */\n\tresv_mailAction(presv, preq);\n\t/*ck_submitClient_needs_reply()*/\n\tif (presv->ri_brp) {\n\t\tif (presv->ri_qs.ri_state == RESV_UNCONFIRMED) {\n\t\t\tif (is_rattr_set(presv, RESV_ATR_interactive) &&\n\t\t\t    get_rattr_long(presv, RESV_ATR_interactive) < 0 &&\n\t\t\t    futuredr != 0) {\n\n\t\t\t\tsprintf(buf, \"%s delete, wait period 
expired\",\n\t\t\t\t\tpresv->ri_qs.ri_resvID);\n\t\t\t} else {\n\t\t\t\tsprintf(buf, \"%s DENIED\", presv->ri_qs.ri_resvID);\n\t\t\t}\n\n\t\t} else {\n\t\t\tsprintf(buf, \"%s BEING DELETED\", presv->ri_qs.ri_resvID);\n\t\t}\n\n\t\treply_text(presv->ri_brp, PBSE_NONE, buf);\n\t\tpresv->ri_brp = NULL;\n\t}\n\n\tsprintf(buf, \"%s@%s\", preq->rq_user, preq->rq_host);\n\tsprintf(log_buffer, \"requestor=%s\", buf);\n\n\tif (strcmp(get_rattr_str(presv, RESV_ATR_resv_owner), buf))\n\t\taccount_recordResv(PBS_ACCT_DRss, presv, log_buffer);\n\telse\n\t\taccount_recordResv(PBS_ACCT_DRclient, presv, log_buffer);\n\n\tif (presv->ri_qs.ri_state != RESV_UNCONFIRMED) {\n\t\tchar hook_msg[HOOK_MSG_SIZE] = {0};\n\t\tswitch (process_hooks(preq, hook_msg, sizeof(hook_msg), pbs_python_set_interrupt)) {\n\t\t\tcase 0: /* explicit reject */\n\t\t\tcase 1: /* no recreate request as there are only read permissions */\n\t\t\tcase 2: /* no hook script executed - go ahead and accept event*/\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_INFO, __func__, \"resv_end event: accept req by default\");\n\t\t}\n\t}\n\n\t/*If there are any jobs associated with the reservation, construct and\n\t *issue a PBS_BATCH_DeleteJob request for each job.\n\t *\n\t *General notes on this process:\n\t *Use issue_Drequest() to issue a PBS_BATCH_* request - can be to\n\t *1) this server, 2) another server, 3) to a pbs_mom.\n\t *\n\t *In the present situation the server is going to issue the request to\n\t *itself (a locally generated request).  
The future \"event\" that's\n\t *to occur, and which must be handled, is the reply fom this request.\n\t *The handling task is initially placed on the server's \"task_list_event\"\n\t *list as a task of type \"WORK_Deferred_Local\" and, a call is made to the\n\t *general \"dispatch\" function (dispatch_request) to dispatch the request.\n\t *In replying back to itself regarding the request to itself, function\n\t *\"reply_send\" is called.  Since it is passed a pointer to the batch_request\n\t *structure for the request that it received, it will note that the request\n\t *came from itself (connection set to PBS_LOCAL_CONNECTION).  It finds the\n\t *handling task on \"task_list_event\" by finding the task with task field\n\t *\"wt_parm1\" set equal to the address of the batch request structure in\n\t *question.  That task is moved off the \"event_task_list\" and put on the\n\t *\"immedite_task_list\" where it can be found and invoked the next time that\n\t *the server calls function \"next_task\" from it's main loop.\n\t *The work_task function that's going to be invoked will be responding to\n\t *the \"reply\" that comes back from the servers original request to itself.\n\t *The handling function, in addition to whatever else it might do, does\n\t *have the RESPONSIBILITY of calling free_br() to remove all memory associated\n\t *with the batch_request structure.\n\t */\n\n\tif (presv->ri_qp != NULL && presv->ri_qp->qu_numjobs > 0) {\n\n\t\t/*One or more jobs are attached to this resource reservation\n\t\t *Issue a PBS_BATCH_Manager request to set \"enable\" to \"False\"\n\t\t *for the queue (if it is not already so set), set \"start\" for\n\t\t *the queue to \"False\" as well- so the scheduler will cease\n\t\t *scheduling the jobs in queue, then issue a PBS_BATCH_DeleteJob\n\t\t *request for each resident job.\n\t\t */\n\n\t\tint deleteProblem = 0;\n\t\tjob *pnxj;\n\n\t\tif (get_qattr_long(presv->ri_qp, QA_ATR_Enabled)) {\n\n\t\t\tsvrattrl *psatl;\n\t\t\tnewreq = 
alloc_br(PBS_BATCH_Manager);\n\t\t\tif (newreq == NULL) {\n\t\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tCLEAR_HEAD(newreq->rq_ind.rq_manager.rq_attr);\n\n\t\t\tnewreq->rq_ind.rq_manager.rq_cmd = MGR_CMD_SET;\n\t\t\tnewreq->rq_ind.rq_manager.rq_objtype = MGR_OBJ_QUEUE;\n\t\t\tstrcpy(newreq->rq_ind.rq_manager.rq_objname,\n\t\t\t       presv->ri_qp->qu_qs.qu_name);\n\t\t\tstrcpy(newreq->rq_user, user);\n\t\t\tstrcpy(newreq->rq_host, host);\n\t\t\tnewreq->rq_perm = perm;\n\n\t\t\tif ((psatl = attrlist_create(ATTR_enable, NULL, lenF)) == NULL) {\n\n\t\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\t\tfree_br(newreq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tpsatl->al_flags = que_attr_def[QA_ATR_Enabled].at_flags;\n\t\t\tstrcpy(psatl->al_value, \"False\");\n\t\t\tappend_link(&newreq->rq_ind.rq_manager.rq_attr, &psatl->al_link, psatl);\n\n\t\t\tif ((psatl = attrlist_create(ATTR_start, NULL, lenF)) == NULL) {\n\n\t\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\t\tfree_br(newreq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tpsatl->al_flags = que_attr_def[QA_ATR_Started].at_flags;\n\t\t\tstrcpy(psatl->al_value, \"False\");\n\t\t\tappend_link(&newreq->rq_ind.rq_manager.rq_attr,\n\t\t\t\t    &psatl->al_link, psatl);\n\n\t\t\tif (issue_Drequest(PBS_LOCAL_CONNECTION, newreq,\n\t\t\t\t\t   release_req, &pwt, 0) == -1) {\n\t\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\t\tfree_br(newreq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\t/* set things so that any removal of the reservation\n\t\t\t * structure also removes any \"yet to be processed\"\n\t\t\t * work tasks that are associated with the reservation\n\t\t\t */\n\t\t\tappend_link(&presv->ri_svrtask, &pwt->wt_linkobj, pwt);\n\n\t\t\ttickle_for_reply();\n\t\t}\n\n\t\t/*Ok, input to the queue is stopped, try and delete jobs in queue*/\n\n\t\trelVal = 1;\n\t\teval_resvState(presv, RESVSTATE_req_deleteReservation,\n\t\t\t       relVal, &state, &sub);\n\t\tresv_setResvState(presv, state, sub);\n\t\tpjob = (job *) 
GET_NEXT(presv->ri_qp->qu_jobs);\n\t\twhile (pjob != NULL) {\n\n\t\t\tpnxj = (job *) GET_NEXT(pjob->ji_jobque);\n\n\t\t\t/* skip all expired subjobs; expired subjobs are deleted when the\n\t\t\t * array parent is issued a delete request\n\t\t\t */\n\t\t\tfor (; pnxj != NULL && (pnxj->ji_qs.ji_svrflags & JOB_SVFLG_SubJob) &&\n\t\t\t       check_job_state(pnxj, JOB_STATE_LTR_EXPIRED);\n\t\t\t     pnxj = (job *) GET_NEXT(pnxj->ji_jobque))\n\t\t\t\t;\n\t\t\t/*\n\t\t\t * If a history job (job state is JOB_STATE_LTR_MOVED\n\t\t\t * or JOB_STATE_LTR_FINISHED), then there is no need to delete\n\t\t\t * it again as it is already deleted.\n\t\t\t */\n\t\t\tif (check_job_state(pjob, JOB_STATE_LTR_MOVED) ||\n\t\t\t    (pjob->ji_qs.ji_svrflags & JOB_SVFLG_SubJob) ||\n\t\t\t    check_job_state(pjob, JOB_STATE_LTR_FINISHED)) {\n\t\t\t\tpjob = pnxj;\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tnewreq = alloc_br(PBS_BATCH_DeleteJob);\n\t\t\tif (newreq != NULL) {\n\n\t\t\t\t/*when owner of job is not same as owner of resv, */\n\t\t\t\t/*need extra permission; also extra info for owner*/\n\n\t\t\t\tCLEAR_HEAD(newreq->rq_ind.rq_manager.rq_attr);\n\t\t\t\tnewreq->rq_perm = perm | ATR_DFLAG_MGWR;\n\t\t\t\tnewreq->rq_extend = NULL;\n\n\t\t\t\t/*reply processing needs resv*/\n\t\t\t\tnewreq->rq_extra = (void *) presv;\n\n\t\t\t\tstrcpy(newreq->rq_user, user);\n\t\t\t\tstrcpy(newreq->rq_host, host);\n\t\t\t\tstrcpy(newreq->rq_ind.rq_delete.rq_objname,\n\t\t\t\t       pjob->ji_qs.ji_jobid);\n\n\t\t\t\tif (issue_Drequest(PBS_LOCAL_CONNECTION, newreq,\n\t\t\t\t\t\t   release_req, &pwt, 0) == -1) {\n\t\t\t\t\tdeleteProblem++;\n\t\t\t\t\tfree_br(newreq);\n\t\t\t\t} else {\n\t\t\t\t\t/* set things so that any removal of the reservation\n\t\t\t\t\t * structure also removes any \"yet to be processed\"\n\t\t\t\t\t * work tasks that are associated with the reservation;\n\t\t\t\t\t * do this only when the request was issued and pwt\n\t\t\t\t\t * is valid\n\t\t\t\t\t */\n\t\t\t\t\tappend_link(&presv->ri_svrtask, &pwt->wt_linkobj, pwt);\n\n\t\t\t\t\ttickle_for_reply();\n\t\t\t\t}\n\t\t\t} else\n\t\t\t\tdeleteProblem++;\n\n\t\t\tpjob = 
pnxj;\n\t\t}\n\n\t\tif (deleteProblem) {\n\t\t\t/*Some problems occurred while attempting to delete the\n\t\t\t *reservation's jobs; we shouldn't end up re-calling\n\t\t\t *req_deleteReservation.\n\t\t\t */\n\t\t\tsprintf(log_buffer, \"%s %s\\n\",\n\t\t\t\t\"problem deleting jobs belonging to\",\n\t\t\t\tpresv->ri_qs.ri_resvID);\n\t\t\treply_text(preq, PBSE_RESVMSG, log_buffer);\n\t\t} else {\n\t\t\t/*No problems so far; if all job deletions succeed,\n\t\t\t *resv_purge() should get triggered.\n\t\t\t */\n\t\t\treply_ack(preq);\n\n\t\t\t/*\n\t\t\t * If all the jobs in the RESV are history jobs, it is\n\t\t\t * better to purge the RESV now rather than waiting\n\t\t\t * for the next resv delete iteration.\n\t\t\t */\n\t\t\tpjob = NULL;\n\t\t\tif (presv && presv->ri_qp)\n\t\t\t\tpjob = (job *) GET_NEXT(presv->ri_qp->qu_jobs);\n\t\t\twhile (pjob != NULL) {\n\t\t\t\tif ((!check_job_state(pjob, JOB_STATE_LTR_MOVED)) &&\n\t\t\t\t    (!check_job_state(pjob, JOB_STATE_LTR_FINISHED)) &&\n\t\t\t\t    (!check_job_state(pjob, JOB_STATE_LTR_EXPIRED)))\n\t\t\t\t\tbreak;\n\t\t\t\tpjob = (job *) GET_NEXT(pjob->ji_jobque);\n\t\t\t}\n\t\t\tif (pjob == NULL) /* all are history jobs only */\n\t\t\t\tresv_purge(presv);\n\t\t\telse {\n\t\t\t\t/* other jobs remain; need to set a task to monitor */\n\t\t\t\t/* when they are dequeued */\n\t\t\t\tpwt = set_task(WORK_Immed, 0, post_deljobfromresv_req,\n\t\t\t\t\t       (void *) presv);\n\t\t\t\tif (pwt)\n\t\t\t\t\tappend_link(&presv->ri_svrtask,\n\t\t\t\t\t\t    &pwt->wt_linkobj, pwt);\n\t\t\t}\n\t\t}\n\n\t\t/*This is all we can do for now*/\n\t\treturn;\n\t}\n\n\t/* Ok, we have no jobs attached, so we can purge the reservation.\n\t\tIf the reservation has an attached queue, a request to qmgr\n\t\twill be made to delete the queue.\n\t\t*/\n\trelVal = 2;\n\teval_resvState(presv, RESVSTATE_req_deleteReservation,\n\t\t       relVal, &state, &sub);\n\tresv_setResvState(presv, state, sub);\n\treply_ack(preq);\n\tresv_purge(presv);\n\treturn;\n}\n\n/**\n * @brief\n 
* \t\tpost_delete_mom1 - first of 2 work task trigger functions to finish the\n *\t\tdeleting of a running job.  This first part is invoked when MOM\n *\t\tresponds to the SIGTERM signal request.\n *\n * @param[in]\tpwt\t- work task\n */\n\nstatic void\npost_delete_mom1(struct work_task *pwt)\n{\n\tint auxcode;\n\tjob *pjob;\n\tstruct batch_request *preq_sig; /* signal request to MOM */\n\tstruct batch_request *preq_clt; /* original client request */\n\tint rc;\n\tint tries = 0;\n\n\tpreq_sig = pwt->wt_parm1;\n\trc = preq_sig->rq_reply.brp_code;\n\t/* Look for PBSE_ error code in wt_aux if brp code reports some other error */\n\tif (rc && pwt->wt_aux && (rc < PBSE_) && (pwt->wt_aux > PBSE_))\n\t\trc = pwt->wt_aux;\n\tauxcode = preq_sig->rq_reply.brp_auxcode;\n\tpreq_clt = preq_sig->rq_extra;\n\tif (preq_clt == NULL) {\n\t\trelease_req(pwt);\n\t\treturn;\n\t}\n\n\tpjob = find_job(preq_sig->rq_ind.rq_signal.rq_jid);\n\tif (pjob == NULL) {\n\t\t/* job has gone away */\n\t\tif (update_deljob_rply(preq_clt, preq_sig->rq_ind.rq_signal.rq_jid, PBSE_UNKJOBID))\n\t\t\treq_reject(PBSE_UNKJOBID, 0, preq_clt);\n\t\trelease_req(pwt);\n\t\treturn;\n\t}\n\trelease_req(pwt);\n\nresend:\n\tif (rc) {\n\t\t/* mom rejected request */\n\t\tsprintf(log_buffer, \"MOM rejected signal during delete (%d)\", rc);\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\n\t\tif (rc == PBSE_UNKSIG) {\n\t\t\tif (tries++) {\n\t\t\t\tif (update_deljob_rply(preq_clt, pjob->ji_qs.ji_jobid, rc))\n\t\t\t\t\treq_reject(rc, 0, preq_clt);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\t/* 2nd try, use SIGTERM */\n\t\t\t/* prevent infinite loop, try 10 times only */\n\t\t\tif (preq_clt->rq_reply.brp_count < 10) {\n\t\t\t\tpreq_clt->rq_reply.brp_count++;\n\t\t\t\trc = delayed_issue_signal(pjob, sigt, post_delete_mom1,\n\t\t\t\t\tpreq_clt, get_sattr_long(SVR_ATR_ResendTermDelay));\n\t\t\t\tif (rc == 0)\n\t\t\t\t\treturn; /* will be back when replies */\n\t\t\t\tgoto 
resend;\n\t\t\t}\n\t\t} else if (rc == PBSE_UNKJOBID) {\n\t\t\tif (check_job_substate(pjob, JOB_SUBSTATE_PRERUN)) {\n\t\t\t\t/* This means the job has been triggered to run again;\n\t\t\t\t try deleting again! */\n\n\t\t\t\t/* prevent infinite loop, try 10 times only */\n\t\t\t\tif (preq_clt->rq_reply.brp_count < 10) {\n\t\t\t\t\tpreq_clt->rq_reply.brp_count++;\n\t\t\t\t\trc = delayed_issue_signal(pjob, sigtj, post_delete_mom1,\n\t\t\t\t\t\tpreq_clt, get_sattr_long(SVR_ATR_ResendTermDelay));\n\t\t\t\t\tif (rc == 0)\n\t\t\t\t\t\treturn; /* will be back when MOM replies */\n\t\t\t\t\tgoto resend;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/* MOM claims no knowledge, so just purge it */\n\t\t\tacct_del_write(pjob->ji_qs.ji_jobid, pjob, preq_clt, 0);\n\t\t\t/* remove the resources assigned to the job */\n\t\t\tfree_nodes(pjob);\n\t\t\tset_resc_assigned(pjob, 0, DECR);\n\t\t\tif (update_deljob_rply(preq_clt, pjob->ji_qs.ji_jobid, PBSE_NONE))\n\t\t\t\treply_ack(preq_clt);\n\t\t\tsvr_saveorpurge_finjobhist(pjob);\n\t\t} else {\n\t\t\tif (pjob->ji_pmt_preq != NULL)\n\t\t\t\treply_preempt_jobs_request(rc, PREEMPT_METHOD_DELETE, pjob);\n\t\t\tif (update_deljob_rply(preq_clt, pjob->ji_qs.ji_jobid, rc))\n\t\t\t\treq_reject(rc, 0, preq_clt);\n\t\t}\n\t\treturn;\n\t}\n\n\tacct_del_write(pjob->ji_qs.ji_jobid, pjob, preq_clt, 0);\n\n\tif (preq_clt->rq_parentbr)\n\t\treply_ack(preq_clt);\n\telse\n\t\tif (update_deljob_rply(preq_clt, pjob->ji_qs.ji_jobid, PBSE_NONE))\n\t\t\treply_ack(preq_clt); /* don't need it, reply now */\n\n\tif (auxcode == JOB_SUBSTATE_TERM) {\n\t\t/* Mom is running a site-supplied Terminate Job script */\n\t\t/* Put job into special Exiting state and we are done */\n\n\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_EXITING, JOB_SUBSTATE_TERM);\n\t\treturn;\n\t}\n}\n\n/**\n * @brief\n *\t\tpost_deljobfromresv_req\n *\n * @par Functionality:\n *\n *\t\tThis work_task function is triggered after all jobs in the queue\n *\t\tassociated with a reservation have had delete requests issued.\n *\t\tIf 
all jobs are indeed found to be no longer present,\n *\t\tthe down counter in the reservation structure is\n *\t\tdecremented.  When the decremented value becomes less\n *\t\tthan or equal to zero, issue a request to delete the\n *\t\treservation.\n *\n *\t\tIf the SERVER is configured for history jobs...\n *\t\tIf the reservation down counter is positive, check whether all\n *\t\tthe jobs in the resv are history jobs. If yes, purge the\n *\t\treservation using resv_purge() without waiting.\n *\n *\t\tIf there are still non-history jobs, recall itself after 30 seconds.\n *\n * @param[in]\tpwt\t- pointer to the work task; the reservation structure\n *\t\t\t\t\t\tpointer is contained in wt_parm1\n *\n * @return\tvoid\n *\n */\n\nstatic void\npost_deljobfromresv_req(struct work_task *pwt)\n{\n\tresc_resv *presv;\n\tjob *pjob = NULL;\n\n\tpresv = (resc_resv *) pwt->wt_parm1;\n\n\t/* return if presv is not valid; without a queue there is nothing\n\t * to count down and ri_qp must not be dereferenced\n\t */\n\tif (presv == NULL || presv->ri_qp == NULL)\n\t\treturn;\n\n\tpresv->ri_downcnt = presv->ri_qp->qu_numjobs;\n\tif (presv->ri_downcnt != 0) {\n\t\tpjob = (job *) GET_NEXT(presv->ri_qp->qu_jobs);\n\t\twhile (pjob != NULL) {\n\t\t\tif ((!check_job_state(pjob, JOB_STATE_LTR_MOVED)) &&\n\t\t\t    (!check_job_state(pjob, JOB_STATE_LTR_FINISHED)) &&\n\t\t\t    (!check_job_state(pjob, JOB_STATE_LTR_EXPIRED)))\n\t\t\t\tbreak;\n\t\t\tpjob = (job *) GET_NEXT(pjob->ji_jobque);\n\t\t}\n\t\t/*\n\t\t * If pjob is NULL, then only history jobs remain;\n\t\t * set ri_downcnt to 0 so that resv_purge()\n\t\t * can be called below.\n\t\t */\n\t\tif (pjob == NULL)\n\t\t\tpresv->ri_downcnt = 0;\n\t}\n\n\tif (presv->ri_downcnt == 0) {\n\t\tresv_purge(presv);\n\t} else if (pjob) {\n\t\t/* one or more jobs still could not be deleted; set me up for\n\t\t * another call 30 seconds into the future.\n\t\t */\n\t\tpwt = set_task(WORK_Timed, time_now + 30, post_deljobfromresv_req,\n\t\t\t       (void *) presv);\n\t\tif 
(pwt)\n\t\t\tappend_link(&presv->ri_svrtask, &pwt->wt_linkobj, pwt);\n\t}\n}\n"
  },
  {
    "path": "src/server/req_getcred.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\treq_getcred.c\n *\n * This file contains function relating to the PBS credential system,\n * it includes the major functions:\n *   req_connect    - validate the credential in a Connection Request (old)\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/types.h>\n#include <string.h>\n#include \"libpbs.h\"\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"credential.h\"\n#include \"net_connect.h\"\n#include \"batch_request.h\"\n#include \"pbs_share.h\"\n#include \"log.h\"\n\n/**\n * @brief\n * \t\treq_connect - process a Connection Request\n * \t\tAlmost does nothing.\n *\n * @param[in]\tpreq\t- Connection Request\n */\n\nvoid\nreq_connect(struct batch_request *preq)\n{\n\tconn_t *conn = get_conn(preq->rq_conn);\n\n\tif (!conn) {\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\treturn;\n\t}\n\n\tif (preq->rq_extend != NULL) {\n\t\tif (strcmp(preq->rq_extend, QSUB_DAEMON) == 0)\n\t\t\tconn->cn_authen |= PBS_NET_CONN_FROM_QSUB_DAEMON;\n\t}\n\n\treply_ack(preq);\n}\n"
  },
  {
    "path": "src/server/req_holdjob.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * @file\treq_holdjob.c\n *\n * Functions relating to the Hold and Release Job Batch Requests.\n *\n * Included functions are:\n *\treq_holdjob()\n *\treq_releasejob()\n *\tchk_hold_priv()\n *\tget_hold()\n *\tpost_hold()\n *\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <time.h>\n#include <sys/types.h>\n#include \"libpbs.h\"\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"server.h\"\n#include \"credential.h\"\n#include \"batch_request.h\"\n#include \"net_connect.h\"\n#include \"job.h\"\n#include \"work_task.h\"\n#include \"pbs_error.h\"\n#include \"log.h\"\n#include \"acct.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n\n/* Private Functions Local to this file */\n\nstatic int get_hold(pbs_list_head *, char **);\nvoid post_hold(struct work_task *);\n\n/* Global Data Items: */\n\nextern struct server server;\n\nextern char *msg_jobholdset;\nextern char *msg_jobholdrel;\nextern char *msg_mombadhold;\nextern char *msg_postmomnojob;\nextern time_t time_now;\nextern job *chk_job_request(char *, struct batch_request *, int *, int *);\n\nint chk_hold_priv(long val, int perm);\n\n/* Private Data */\n\nstatic attribute temphold;\n\n/**\n * @brief\n * \t\tchk_hold_priv - check that client has privilege to set/clear hold\n *\n * 
@param[in]\tval\t- hold bits being changed\n * @param[in]\tperm\t- client privilege\n *\n * @return\terror code\n * @retval\t0\t- success\n * @retval\t!=0\t- failure\n */\n\nint\nchk_hold_priv(long val, int perm)\n{\n\tif ((val & HOLD_s) && ((perm & ATR_DFLAG_MGWR) == 0))\n\t\treturn (PBSE_PERM);\n\tif ((val & HOLD_o) && ((perm & (ATR_DFLAG_MGWR | ATR_DFLAG_OPWR)) == 0))\n\t\treturn (PBSE_PERM);\n\treturn (PBSE_NONE);\n}\n\n/**\n * @brief\n * \t\treq_holdjob - service the Hold Job Request\n *\n *\t\tThis request sets one or more holds on a job.\n *\t\tThe state of the job may change as a result.\n *\n * @param[in,out]\tpreq\t- Job Request\n */\nvoid\nreq_holdjob(struct batch_request *preq)\n{\n\tlong hold_val;\n\tint jt; /* job type */\n\tchar newstate;\n\tint newsub;\n\tlong old_hold;\n\tjob *pjob;\n\tchar *pset;\n\tchar jid[PBS_MAXSVRJOBID + 1];\n\tint rc;\n\tchar date[32];\n\ttime_t now;\n\tint err = PBSE_NONE;\n\n\tsnprintf(jid, sizeof(jid), \"%s\", preq->rq_ind.rq_hold.rq_orig.rq_objname);\n\n\tpjob = chk_job_request(jid, preq, &jt, &err);\n\tif (pjob == NULL) {\n\t\tpjob = find_job(jid);\n\t\tif (pjob != NULL && pjob->ji_pmt_preq != NULL)\n\t\t\treply_preempt_jobs_request(err, PREEMPT_METHOD_CHECKPOINT, pjob);\n\t\treturn;\n\t}\n\tif ((jt != IS_ARRAY_NO) && (jt != IS_ARRAY_ArrayJob)) {\n\t\t/*\n\t\t * We need to find the job again because chk_job_request() will return\n\t\t * the parent array if the job is a subjob.\n\t\t */\n\t\tpjob = find_job(jid);\n\t\tif (pjob != NULL && pjob->ji_pmt_preq != NULL)\n\t\t\treply_preempt_jobs_request(PBSE_IVALREQ, PREEMPT_METHOD_CHECKPOINT, pjob);\n\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\treturn;\n\t}\n\tif ((check_job_state(pjob, JOB_STATE_LTR_RUNNING)) &&\n\t    (check_job_substate(pjob, JOB_SUBSTATE_PROVISION))) {\n\t\tif (pjob->ji_pmt_preq != NULL)\n\t\t\treply_preempt_jobs_request(PBSE_BADSTATE, PREEMPT_METHOD_CHECKPOINT, pjob);\n\n\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\treturn;\n\t}\n\n\t/* cannot do anything 
until we decode the holds to be set */\n\n\tif ((rc = get_hold(&preq->rq_ind.rq_hold.rq_orig.rq_attr, &pset)) != 0) {\n\t\tif (pjob->ji_pmt_preq != NULL)\n\t\t\treply_preempt_jobs_request(rc, PREEMPT_METHOD_CHECKPOINT, pjob);\n\t\treq_reject(rc, 0, preq);\n\t\treturn;\n\t}\n\n\t/* if anything other than HOLD_u is being set, the client must have privilege */\n\n\tif ((rc = chk_hold_priv(get_attr_l(&temphold), preq->rq_perm)) != 0) {\n\t\tif (pjob->ji_pmt_preq != NULL)\n\t\t\treply_preempt_jobs_request(rc, PREEMPT_METHOD_CHECKPOINT, pjob);\n\n\t\treq_reject(rc, 0, preq);\n\t\treturn;\n\t}\n\n\t/* HOLD_bad_password can only be set by root or admin */\n\tif ((get_attr_l(&temphold) & HOLD_bad_password) &&\n\t    strcasecmp(preq->rq_user, PBS_DEFAULT_ADMIN) != 0) {\n\t\tif (pjob->ji_pmt_preq != NULL)\n\t\t\treply_preempt_jobs_request(PBSE_PERM, PREEMPT_METHOD_CHECKPOINT, pjob);\n\n\t\treq_reject(PBSE_PERM, 0, preq);\n\t\treturn;\n\t}\n\n\told_hold = get_jattr_long(pjob, JOB_ATR_hold);\n\tset_jattr_b_slim(pjob, JOB_ATR_hold, get_attr_l(&temphold), INCR);\n\thold_val = get_jattr_long(pjob, JOB_ATR_hold);\n\n\t/* Note the hold time in the job comment. 
*/\n\tnow = time(NULL);\n\t(void) strncpy(date, (const char *) ctime(&now), 24);\n\tdate[24] = '\\0';\n\t(void) sprintf(log_buffer, \"Job held by %s on %s\", preq->rq_user, date);\n\tset_jattr_str_slim(pjob, JOB_ATR_Comment, log_buffer, NULL);\n\n\t(void) sprintf(log_buffer, msg_jobholdset, pset, preq->rq_user,\n\t\t       preq->rq_host);\n\n\tif ((check_job_state(pjob, JOB_STATE_LTR_RUNNING)) &&\n\t    (!check_job_substate(pjob, JOB_SUBSTATE_PRERUN)) &&\n\t    (get_jattr_str(pjob, JOB_ATR_chkpnt)) &&\n\t    (*get_jattr_str(pjob, JOB_ATR_chkpnt) != 'n')) {\n\n\t\t/* have MOM attempt checkpointing */\n\n\t\tif ((rc = relay_to_mom(pjob, preq, post_hold)) != 0) {\n\t\t\thold_val = old_hold; /* reset to the old value */\n\t\t\tif (pjob->ji_pmt_preq != NULL)\n\t\t\t\treply_preempt_jobs_request(rc, PREEMPT_METHOD_CHECKPOINT, pjob);\n\t\t\treq_reject(rc, 0, preq);\n\t\t} else {\n\t\t\tpjob->ji_qs.ji_svrflags |=\n\t\t\t\t(JOB_SVFLG_HASRUN | JOB_SVFLG_CHKPT | JOB_SVFLG_HASHOLD);\n\t\t\t(void) job_save_db(pjob);\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t}\n\t} else {\n\n\t\t/* everything went well, may need to update the job state */\n\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\tif (old_hold != hold_val) {\n\t\t\t/* indicate attributes changed */\n\t\t\tsvr_evaljobstate(pjob, &newstate, &newsub, 0);\n\t\t\tsvr_setjobstate(pjob, newstate, newsub);\n\t\t}\n\t\t/* Reject preemption because job requested -c n */\n\t\tif (pjob->ji_pmt_preq != NULL)\n\t\t\treply_preempt_jobs_request(PBSE_NOSUP, PREEMPT_METHOD_CHECKPOINT, pjob);\n\t\treply_ack(preq);\n\t}\n}\n\n/**\n * @brief\n * \t\treq_releasejob - service the Release Job Request\n *\n *\t\tThis request clears one or more holds on a job.\n *\t\tAs a result, the job might change state.\n *\n * @param[in]\tpreq\t- ptr to the decoded request\n */\n\nvoid\nreq_releasejob(struct batch_request 
*preq)\n{\n\tint jt; /* job type */\n\tchar newstate;\n\tint newsub;\n\tlong old_hold;\n\tjob *pjob;\n\tchar *pset;\n\tint rc;\n\n\tpjob = chk_job_request(preq->rq_ind.rq_release.rq_objname, preq, &jt, NULL);\n\tif (pjob == NULL)\n\t\treturn;\n\n\tif ((jt != IS_ARRAY_NO) && (jt != IS_ARRAY_ArrayJob)) {\n\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\treturn;\n\t}\n\n\t/* cannot do anything until we decode the holds to be set */\n\n\tif ((rc = get_hold(&preq->rq_ind.rq_hold.rq_orig.rq_attr, &pset)) != 0) {\n\t\treq_reject(rc, 0, preq);\n\t\treturn;\n\t}\n\n\t/* if anything other than HOLD_u is being released, the client must have privilege */\n\n\tif ((rc = chk_hold_priv(get_attr_l(&temphold), preq->rq_perm)) != 0) {\n\t\treq_reject(rc, 0, preq);\n\t\treturn;\n\t}\n\n\t/* all ok so far, unset the hold */\n\n\told_hold = get_jattr_long(pjob, JOB_ATR_hold);\n\trc = set_attr_with_attr(&job_attr_def[(int) JOB_ATR_hold], get_jattr(pjob, JOB_ATR_hold), &temphold, DECR);\n\tif (rc) {\n\t\treq_reject(rc, 0, preq);\n\t\treturn;\n\t}\n\n\t// clang-format off\n\n\t/* everything went well; if holds changed, update the job state */\n\n#ifndef NAS /* localmod 105 Always reset etime on release */\n\tif (old_hold != get_jattr_long(pjob, JOB_ATR_hold)) {\n#endif /* localmod 105 */\n#ifdef NAS /* localmod 105 */\n\t\t{\n\t\t\tattribute *etime = get_jattr(pjob, JOB_ATR_etime);\n\t\t\tetime->at_val.at_long = time_now;\n\t\t\tpost_attr_set(etime);\n#endif /* localmod 105 */\n\t\tsvr_evaljobstate(pjob, &newstate, &newsub, 0);\n\t\tsvr_setjobstate(pjob, newstate, newsub); /* saves job */\n\n\t}\n\n\tif ((jt == IS_ARRAY_ArrayJob) && (pjob->ji_ajinfo)) {\n\t\tint i;\n\t\tfor (i = pjob->ji_ajinfo->tkm_start; i <= pjob->ji_ajinfo->tkm_end; i += pjob->ji_ajinfo->tkm_step) {\n\t\t\tjob *psubjob = get_subjob_and_state(pjob, i, NULL, NULL);\n\t\t\tif (psubjob && (check_job_state(psubjob, JOB_STATE_LTR_HELD))) {\n#ifndef NAS\n\t\t\t\told_hold = get_jattr_long(psubjob, JOB_ATR_hold);\n\t\t\t\trc 
=\n#endif\n\t\t\t\t\tset_attr_with_attr(&job_attr_def[(int)JOB_ATR_hold], get_jattr(psubjob, JOB_ATR_hold), &temphold, DECR);\t\n#ifndef NAS /* localmod 105 Always reset etime on release */\n\t\t\t\tif (!rc && (old_hold != get_jattr_long(psubjob, JOB_ATR_hold))) {\n#endif /* localmod 105 */\n#ifdef NAS /* localmod 105 */\n\t\t\t\t{\n\t\t\t\t\tattribute *etime = get_jattr(psubjob, JOB_ATR_etime);\n\t\t\t\t\tetime->at_val.at_long = time_now;\n\t\t\t\t\tpost_attr_set(etime);\n#endif /* localmod 105 */\n\t\t\t\t\tsvr_evaljobstate(psubjob, &newstate, &newsub, 0);\n\t\t\t\t\tsvr_setjobstate(psubjob, newstate, newsub); /* saves job */\n\t\t\t\t}\n\t\t\t\tif (get_jattr_long(psubjob, JOB_ATR_hold) == HOLD_n)\n\t\t\t\t\tfree_jattr(psubjob, JOB_ATR_Comment);\n\t\t\t\t(void)sprintf(log_buffer, msg_jobholdrel, pset, preq->rq_user, preq->rq_host);\n\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, psubjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t}\n\t\t}\n\t}\n\tif (get_jattr_long(pjob, JOB_ATR_hold) == HOLD_n) {\n\t\tif ((jt == IS_ARRAY_ArrayJob) && (pjob->ji_qs.ji_stime != 0) ) {\n\t\t\tchar timebuf[128];\n\n\t\t\tstrftime(timebuf, 128, \"%a %b %d at %H:%M\", localtime(&pjob->ji_qs.ji_stime));\n\t\t\tsprintf(log_buffer, \"Job Array Began at %s\", timebuf);\n\n\t\t\tset_jattr_str_slim(pjob, JOB_ATR_Comment, log_buffer, NULL);\n\t\t} else\n\t\t\tfree_jattr(pjob, JOB_ATR_Comment);\n\t}\n\t(void)sprintf(log_buffer, msg_jobholdrel, pset, preq->rq_user,\n\t\tpreq->rq_host);\n\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\tpjob->ji_qs.ji_jobid, log_buffer);\n\treply_ack(preq);\n}\n\n\t\t// clang-format on\n\n\t\t/**\n * @brief\n * \t\tget_hold - search a list of attributes (svrattrl) for the hold-types\n * \t\tattribute.  
This is used by the Hold Job and Release Job requests;\n *\t\ttherefore it is an error if the hold-types attribute is not present,\n *\t\tor there is more than one.\n *\n *\t\tDecode the hold attribute into temphold.\n *\n * @param[in]\tphead\t- pbs list head.\n * @param[out]\tpset\t- RETURN - ptr to hold value\n *\n * @return\terror code\n */\n\n\t\tstatic int\n\t\tget_hold(pbs_list_head * phead, char **pset)\n\t\t{\n\t\t\tint have_one = 0;\n\t\t\tstruct svrattrl *holdattr = NULL;\n\t\t\tstruct svrattrl *pal;\n\n\t\t\tpal = (struct svrattrl *) GET_NEXT((*phead));\n\t\t\twhile (pal) {\n\t\t\t\tif (!strcasecmp(pal->al_name, job_attr_def[(int) JOB_ATR_hold].at_name)) {\n\t\t\t\t\tholdattr = pal;\n\t\t\t\t\t*pset = pal->al_value;\n\t\t\t\t\thave_one++;\n\t\t\t\t} else {\n\t\t\t\t\treturn (PBSE_IVALREQ);\n\t\t\t\t}\n\t\t\t\tpal = (struct svrattrl *) GET_NEXT(pal->al_link);\n\t\t\t}\n\t\t\tif (have_one != 1)\n\t\t\t\treturn (PBSE_IVALREQ);\n\n\t\t\t/* decode into temporary attribute structure */\n\n\t\t\tclear_attr(&temphold, &job_attr_def[(int) JOB_ATR_hold]);\n\t\t\treturn (set_attr_generic(&temphold, &job_attr_def[JOB_ATR_hold],\n\t\t\t\t\t\t holdattr->al_value,\n\t\t\t\t\t\t NULL, INTERNAL));\n\t\t}\n\n\t\t/**\n * @brief\n * \t\t\"post hold\" - A round hole in the ground in which a post is placed :-)\n *\t\tThis function is called when a hold request which was sent to Mom has\n *\t\tbeen responded to by MOM.  
The hold request for the running job is\n *\t\tcompleted and replied to based on what was returned by Mom.\n *\n *\t\tIf Mom replies with:\n *\t  \tNo error (0) - job is marked as checkpointed;\n *\t  \tPBSE_NOSUP - checkpoint is not supported; job just has the hold type set;\n *\t  \tPBSE_CKPBSY - a prior checkpoint is still in progress;\n *\t  \tFor any error other than PBSE_NOSUP, a message is logged and returned\n *\t  \tto the client.\n *\n * @param[in]\tpwt\t- pointer to work task entry holding information about the\n *\t\t\t\toriginal client \"hold job\" request.\n *\n * @return void\n */\n\n\t\tvoid\n\t\tpost_hold(struct work_task *pwt)\n\t\t{\n\t\t\tint code;\n\t\t\tjob *pjob;\n\t\t\tstruct batch_request *preq;\n\t\t\tconn_t *conn;\n\n\t\t\tif (pwt->wt_aux2 != 1)\n\t\t\t\tsvr_disconnect(pwt->wt_event); /* close connection to MOM */\n\t\t\tpreq = pwt->wt_parm1;\n\t\t\tcode = preq->rq_reply.brp_code;\n\t\t\tpreq->rq_conn = preq->rq_orgconn; /* restore client socket */\n\n\t\t\tpjob = find_job(preq->rq_ind.rq_hold.rq_orig.rq_objname);\n\n\t\t\tif (pjob == NULL) {\n\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\t\t  preq->rq_ind.rq_hold.rq_orig.rq_objname,\n\t\t\t\t\t  msg_postmomnojob);\n\t\t\t\treq_reject(PBSE_UNKJOBID, 0, preq);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tif (pwt->wt_aux2 != PROT_TPP) {\n\t\t\t\tconn = get_conn(preq->rq_conn);\n\n\t\t\t\tif (!conn) {\n\t\t\t\t\tif (pjob->ji_pmt_preq != NULL)\n\t\t\t\t\t\treply_preempt_jobs_request(PBSE_SYSTEM, PREEMPT_METHOD_CHECKPOINT, pjob);\n\t\t\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\n\t\t\t\tconn->cn_authen &= ~PBS_NET_CONN_NOTIMEOUT;\n\t\t\t}\n\n\t\t\tif (code != PBSE_NONE) {\n\t\t\t\t/* Checkpoint failed, remove checkpoint flags from job */\n\t\t\t\tpjob->ji_qs.ji_svrflags &= ~(JOB_SVFLG_HASHOLD | JOB_SVFLG_CHKPT);\n\t\t\t\tif (pjob->ji_pmt_preq != NULL)\n\t\t\t\t\treply_preempt_jobs_request(code, PREEMPT_METHOD_CHECKPOINT, pjob);\n\n\t\t\t\tif (code != 
PBSE_NOSUP) {\n\t\t\t\t\t/* a \"real\" error - log message with return error code */\n\t\t\t\t\t(void) sprintf(log_buffer, msg_mombadhold, code);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\t\t/* send message back to server for display to user */\n\t\t\t\t\treply_text(preq, code, log_buffer);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\t/* record that MOM has a checkpoint file */\n\t\t\t\tset_job_substate(pjob, JOB_SUBSTATE_RERUN);\n\t\t\t\tif (preq->rq_reply.brp_auxcode) /* chkpt can be moved */\n\t\t\t\t\tpjob->ji_qs.ji_svrflags =\n\t\t\t\t\t\t(pjob->ji_qs.ji_svrflags & ~JOB_SVFLG_CHKPT) |\n\t\t\t\t\t\tJOB_SVFLG_HASRUN | JOB_SVFLG_ChkptMig;\n\n\t\t\t\tjob_save_db(pjob);\n\n\t\t\t\t/* note in accounting file */\n\n\t\t\t\taccount_record(PBS_ACCT_CHKPNT, pjob, NULL);\n\t\t\t\tif (pjob->ji_pmt_preq != NULL)\n\t\t\t\t\treply_preempt_jobs_request(PBSE_NONE, PREEMPT_METHOD_CHECKPOINT, pjob);\n\t\t\t}\n\n\t\t\treply_ack(preq);\n\t\t}\n"
  },
  {
    "path": "src/server/req_jobobit.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * functions dealing with a Job Obituary Request (Notice)\n * and the associated post execution job clean up\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/types.h>\n#include <ctype.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <time.h>\n\n#include \"libpbs.h\"\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"server.h\"\n#include \"job.h\"\n#include \"credential.h\"\n#include \"ticket.h\"\n#include \"batch_request.h\"\n#include \"work_task.h\"\n#include \"pbs_error.h\"\n#include \"log.h\"\n#include \"acct.h\"\n#include \"net_connect.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"sched_cmds.h\"\n#include \"mom_server.h\"\n#include \"dis.h\"\n#include \"tpp.h\"\n#include \"libutil.h\"\n#include \"pbs_sched.h\"\n\n/* External Global Data Items */\n\nextern unsigned int pbs_mom_port;\nextern char *path_spool;\nextern int server_init_type;\nextern pbs_net_t pbs_server_addr;\nextern char *msg_init_abt;\nextern char *msg_job_end;\nextern char *msg_job_end_sig;\nextern char *msg_job_end_stat;\nextern char *msg_momnoexec1;\nextern char *msg_momnoexec2;\nextern char *msg_baduser;\nextern char *msg_job_globfail1;\nextern char *msg_obitnojob;\nextern char *msg_obitnocpy;\nextern char 
*msg_obitnodel;\nextern char *msg_bad_password;\nextern char *msg_hook_reject_deletejob;\nextern char *msg_hook_reject_rerunjob;\nextern char *msg_momkillncpusburst;\nextern char *msg_momkillncpussum;\nextern char *msg_momkillvmem;\nextern char *msg_momkillmem;\nextern char *msg_momkillcput;\nextern char *msg_momkillwalltime;\nextern time_t time_now;\n\n/* External Functions called */\n\nextern void set_resc_assigned(void *, int, enum batch_op);\nextern long get_walltime(const job *, int);\n\n/* Local public functions  */\nvoid on_job_rerun(struct work_task *ptask);\nvoid on_job_exit(struct work_task *ptask);\nextern void set_admin_suspend(job *pjob, int set_remove_nstate);\n\nstatic char *msg_obitnotrun = \"job not running, may have been requeued on node failure\";\n\n/**\n * @brief\n * \t\tmom_comm - if needed, open a connection with the MOM under which\n *\t\tthe job was running.  The connection is typically set up by\n *\t\treq_jobobit() using the connection already established by MOM.\n *\t\tHowever, on server recovery there will be no pre-established connection.\n *\n *\t\tIf a connection is needed and cannot be established, set up a work-task\n *\t\tentry and try again later.\n *\n * @param[in]\tpjob\t- job structure\n * @param[in]\tfunc\t- function pointer which accepts a work task structure and returns void\n * \t\t\t\t\t\t\there it can be on_job_exit or on_job_rerun\n *\n * @return\topen connection handle to MOM\n * @retval\t-1\t- failure\n */\n\nint\nmom_comm(job *pjob, void (*func)(struct work_task *))\n{\n\tunsigned int dum;\n\tlong t;\n\tstruct work_task *pwt;\n\tint prot = PROT_TPP;\n\n\tif (pjob->ji_momhandle < 0) {\n\n\t\t/* need to make connection, called from pbsd_init() */\n\n\t\tif (pjob->ji_qs.ji_un.ji_exect.ji_momaddr == 0) {\n\t\t\tchar *exec_vnode = get_jattr_str(pjob, JOB_ATR_exec_vnode);\n\t\t\tif (!is_jattr_set(pjob, JOB_ATR_exec_vnode) || exec_vnode == NULL)\n\t\t\t\treturn -1;\n\t\t\tpjob->ji_qs.ji_un.ji_exect.ji_momaddr = 
get_addr_of_nodebyname(exec_vnode, &dum);\n\t\t\tif (pjob->ji_qs.ji_un.ji_exect.ji_momaddr == 0)\n\t\t\t\treturn -1;\n\n\t\t\tpjob->ji_qs.ji_un.ji_exect.ji_momport = dum;\n\t\t}\n\t\tpjob->ji_momhandle = svr_connect(\n\t\t\tpjob->ji_qs.ji_un.ji_exect.ji_momaddr,\n\t\t\tpjob->ji_qs.ji_un.ji_exect.ji_momport,\n\t\t\tprocess_Dreply,\n\t\t\tToServerDIS,\n\t\t\tprot);\n\t\tpjob->ji_mom_prot = prot;\n\n\t\tif (pjob->ji_momhandle < 0) {\n\t\t\tchar *operation;\n\n\t\t\tt = pjob->ji_retryok++;\n\t\t\tt = PBS_NET_RETRY_TIME + t * t;\n\n\t\t\tif (func == on_job_exit)\n\t\t\t\toperation = \"exit\";\n\t\t\telse if (func == on_job_rerun)\n\t\t\t\toperation = \"rerun\";\n\t\t\telse\n\t\t\t\toperation = \"UNKNOWN\";\n\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"cannot connect to MOM, reschedule job %s \"\n\t\t\t\t \"in %ld seconds\",\n\t\t\t\t operation, t);\n\t\t\tlog_err(-1, pjob->ji_qs.ji_jobid, log_buffer);\n\n\t\t\tt += time_now;\n\t\t\tpwt = set_task(WORK_Timed, t, func, (void *) pjob);\n\t\t\tappend_link(&pjob->ji_svrtask, &pwt->wt_linkobj, pwt);\n\t\t\treturn (-1);\n\t\t}\n\t}\n\treturn (pjob->ji_momhandle);\n}\n\n/**\n * @brief\n * \t\trel_resc - release resources assigned to the job\n *\n * @param[in]\tpjob\t- job structure\n */\nvoid\nrel_resc(job *pjob)\n{\n\tconn_t *conn = NULL;\n\tpbs_sched *psched;\n\n\tfree_nodes(pjob);\n\n\t/* removed the resources used by the job from the used svr/que attr  */\n\n\tset_resc_assigned((void *) pjob, 0, DECR);\n\n\t/* is there a rerun request waiting for acknowledgement that        */\n\t/* resources (including licenses) are indeed released? Then ack it. 
*/\n\tif (pjob->ji_rerun_preq != NULL) { /* set only in req_rerun() */\n\t\tif (pjob->ji_rerun_preq->rq_conn != PBS_LOCAL_CONNECTION)\n\t\t\tconn = get_conn(pjob->ji_rerun_preq->rq_conn);\n\n\t\treply_ack(pjob->ji_rerun_preq);\n\n\t\t/* clear no-timeout flag on connection to prevent stale connections */\n\t\tif (conn)\n\t\t\tconn->cn_authen &= ~PBS_NET_CONN_NOTIMEOUT;\n\n\t\tpjob->ji_rerun_preq = NULL;\n\t}\n\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_AdmSuspd)\n\t\tset_admin_suspend(pjob, 0);\n\n\t/* Mark that scheduler should be called */\n\n\tif (find_assoc_sched_jid(pjob->ji_qs.ji_jobid, &psched))\n\t\tset_scheduler_flag(SCH_SCHEDULE_TERM, psched);\n\telse {\n\t\tpbs_queue *pq;\n\t\tpq = find_queuebyname(pjob->ji_qs.ji_queue);\n\t\tsprintf(log_buffer, \"Unable to reach scheduler associated with partition %s\", get_qattr_str(pq, QA_ATR_partition));\n\t\tlog_err(-1, __func__, log_buffer);\n\t}\n}\n/**\n * @brief\n * \t\ton_exitrerun_msg\t- log message on exit rerun fails, used with\n * \t\t\t\t\t\t\t\ton_job_rerun() and conn_to_mom_failed()\n *\n * @param[in]\tpjob\t- job which has failed\n * @param[in]\tfmt\t- failure message\n */\nstatic void\non_exitrerun_msg(job *pjob, char *fmt)\n{\n\tchar *hostname = \" ? \";\n\n\tif (pjob->ji_qs.ji_destin[0] != '\\0')\n\t\thostname = pjob->ji_qs.ji_destin;\n\n\tsprintf(log_buffer, fmt, pjob->ji_qs.ji_jobid, hostname);\n\tlog_event(PBSEVENT_ERROR | PBSEVENT_ADMIN | PBSEVENT_JOB,\n\t\t  PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n}\n\n/**\n * @brief\n * \t\tconn_to_mom_failed - called when the connection to Mom for end of job\n *\t\tprocessing is broken (Mom gone?).  
Log it and close and attempt to\n *\t\topen a new one by going around again.\n *\n * @param[in]\tpjob\t- job structure\n * @param[in]\tfunc\t- function pointer which accepts a work task structure and returns void\n * \t\t\t\t\t\t\there it can be on_job_exit or on_job_rerun\n */\n\nstatic void\nconn_to_mom_failed(job *pjob, void (*func)(struct work_task *))\n{\n\tstruct work_task *ptask;\n\n\ton_exitrerun_msg(pjob, \"end of job processing for %s, connection to Mom on host %s was broken\");\n\tif (pjob->ji_mom_prot == PROT_TCP) {\n\t\tsvr_disconnect(pjob->ji_momhandle);\n\t} else {\n\t\ttpp_close(pjob->ji_momhandle);\n\t\ttdelete2((u_long) pjob->ji_momhandle, 0, &streams);\n\t}\n\tpjob->ji_momhandle = -1;\n\tptask = set_task(WORK_Immed, 0, func, pjob);\n\tappend_link(&pjob->ji_svrtask, &ptask->wt_linkobj, ptask);\n\treturn;\n}\n\nstatic void\nend_job(job *pjob, int isexpress)\n{\n\tstruct batch_request *preq;\n\tchar hook_msg[HOOK_MSG_SIZE] = {0};\n\tchar *rec = \"\";\n\tint rc;\n\n\tif (isexpress) {\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, \"express end of job\");\n\t\t/* see if have any dependencies */\n\t\tif (is_jattr_set(pjob, JOB_ATR_depend))\n\t\t\t(void) depend_on_term(pjob);\n\n\t\t/* Set job's exec_vnodes with current time for last_used_time. 
*/\n\t\tset_last_used_time_node(pjob, 0);\n\t}\n\n\tpjob->ji_qs.ji_obittime = time_now;\n\tset_jattr_l_slim(pjob, JOB_ATR_obittime, pjob->ji_qs.ji_obittime, SET);\n\n\t/* Allocate space for the jobobit hook event params */\n\tpreq = alloc_br(PBS_BATCH_JobObit);\n\tif (preq == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"rq_jobobit alloc failed\");\n\t} else {\n\t\tpreq->rq_ind.rq_obit.rq_pjob = pjob;\n\t\trc = process_hooks(preq, hook_msg, sizeof(hook_msg), pbs_python_set_interrupt);\n\t\tif (rc == -1) {\n\t\t\tlog_err(-1, __func__, \"rq_jobobit process_hooks call failed\");\n\t\t}\n\t\tfree_br(preq);\n\t}\n\n\tif (pjob->ji_momhandle != -1 && pjob->ji_mom_prot == PROT_TCP)\n\t\tsvr_disconnect(pjob->ji_momhandle);\n\trel_resc(pjob); /* free any resc assigned to the job */\n\n\taccount_job_update(pjob, PBS_ACCT_LAST);\n\taccount_jobend(pjob, pjob->ji_acctrec, PBS_ACCT_END);\n\n\tif (pjob->ji_acctrec)\n\t\trec = pjob->ji_acctrec;\n\n\tif (get_sattr_long(SVR_ATR_log_events) & PBSEVENT_JOB_USAGE) {\n\t\t/* log events set to record usage */\n\t\tlog_event(PBSEVENT_JOB_USAGE | PBSEVENT_JOB_USAGE,\n\t\t\t  PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid, rec);\n\t} else {\n\t\tchar *pc;\n\n\t\t/* no usage in log, truncate messge */\n\n\t\tif ((pc = strchr(rec, ' ')) != NULL)\n\t\t\t*pc = '\\0';\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid, rec);\n\t}\n\n\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0)\n\t\tissue_track(pjob);\n\n\tif (pjob->ji_pmt_preq != NULL)\n\t\treply_preempt_jobs_request(PBSE_NONE, PREEMPT_METHOD_DELETE, pjob);\n\t/*\n\t * Check if the history of the finished job can be saved or it needs to be purged.\n\t */\n\tsvr_saveorpurge_finjobhist(pjob);\n}\n\n/**\n * @brief\n * \t\tcontinue post-execution processing of a job that terminated.\n *\n *\t\tThis function is called by pbsd_init() on recovery, by job_obit()\n *\t\ton job termination and by itself (via a work task).  
The clue to where\n *\t\twe are is the job substate and the type of the work task entry it is\n *\t\tcalled with.  If the work task entry type is WORK_Immed, then this is\n *\t\tthe first time in for the job substate.  Otherwise it is with the reply\n *\t\tgiven by MOM.\n *\n *\t\tNOTE:\n *\t\tOn the initial work task (WORK_Immed), the wt_parm1 is a job pointer.\n *\t\tOn a call-back work task (WORK_Deferred_Reply) generated by\n *\t\tsend_request(), the wt_parm1 is pointing to the request; and the\n *\t\trq_extra field in the request points to the job.\n *\n * @param[in,out]\tptask\t- work task\n */\nvoid\non_job_exit(struct work_task *ptask)\n{\n\tint handle;\n\tjob *pjob;\n\tstruct batch_request *preq;\n\tstruct work_task *pt;\n\tint rc;\n\tint stageout_status = 1; /* success */\n\tlong t;\n\tpbs_list_head *mom_tasklist_ptr = NULL;\n\tmominfo_t *pmom = 0;\n\tint release_nodes_on_stageout = 0;\n\n\tif (ptask->wt_type != WORK_Deferred_Reply) {\n\t\tpreq = NULL;\n\t\tpjob = (job *) ptask->wt_parm1;\n\t} else {\n\t\tpreq = (struct batch_request *) ptask->wt_parm1;\n\t\tpjob = (job *) preq->rq_extra;\n\t}\n\n\t/* minor check on validity of pjob */\n\tif (isdigit((int) pjob->ji_qs.ji_jobid[0]) == 0)\n\t\treturn; /* not pointing to currently valid job */\n\n\tif (check_job_substate(pjob, JOB_SUBSTATE_EXITING)) {\n\t\t/*\n\t\t * If the job doesn't have any files to stage/delete and there is no execjob_end\n\t\t * hook to run, end the job immediately\n\t\t *\n\t\t * If there are no stage files but there is an execjob_end hook to run, put the\n\t\t * job directly in the exited substate\n\t\t */\n\t\tint hs = has_stage(pjob);\n\t\trc = num_eligible_hooks(HOOK_EVENT_EXECJOB_END);\n\t\tif (!rc && !hs) {\n\t\t\tend_job(pjob, 1);\n\t\t\treturn;\n\t\t} else if (rc > 0 && !hs)\n\t\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_EXITING, JOB_SUBSTATE_EXITED);\n\t}\n\n\tif (is_jattr_set(pjob, JOB_ATR_relnodes_on_stageout) &&\n\t    (get_jattr_long(pjob, JOB_ATR_relnodes_on_stageout) != 0))\n\t\trelease_nodes_on_stageout 
= 1;\n\n\tif ((handle = mom_comm(pjob, on_job_exit)) < 0)\n\t\treturn;\n\n\tif (pjob->ji_mom_prot == PROT_TPP) {\n\t\tpmom = tfind2((unsigned long) pjob->ji_qs.ji_un.ji_exect.ji_momaddr,\n\t\t\t      pjob->ji_qs.ji_un.ji_exect.ji_momport,\n\t\t\t      &ipaddrs);\n\t\tif (!pmom || (pmom->mi_dmn_info->dmn_state & INUSE_DOWN))\n\t\t\treturn;\n\t\tmom_tasklist_ptr = &pmom->mi_dmn_info->dmn_deferred_cmds;\n\t}\n\n\tswitch (get_job_substate(pjob)) {\n\n\t\tcase JOB_SUBSTATE_EXITING:\n\t\tcase JOB_SUBSTATE_ABORT:\n\n\t\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_EXITING, JOB_SUBSTATE_STAGEOUT);\n\t\t\tptask->wt_type = WORK_Immed;\n\n\t\t\t/* Initialize retryok */\n\t\t\tpjob->ji_retryok = 0;\n\n\t\t\t/* NO BREAK, fall into stage out processing */\n\n\t\tcase JOB_SUBSTATE_STAGEOUT:\n\n\t\t\tif (ptask->wt_type != WORK_Deferred_Reply) {\n\n\t\t\t\t/* this is the very first call, have mom copy files */\n\t\t\t\t/* first check the standard files: output & error   */\n\n\t\t\t\tpreq = cpy_stdfile(preq, pjob, JOB_ATR_outpath);\n\t\t\t\tpreq = cpy_stdfile(preq, pjob, JOB_ATR_errpath);\n\n\t\t\t\t/* are there any stage-out files ?\t\t \t*/\n\n\t\t\t\tpreq = cpy_stage(preq, pjob, JOB_ATR_stageout, STAGE_DIR_OUT);\n\n\t\t\t\tif (preq) { /* have files to copy \t\t*/\n\t\t\t\t\tif (release_nodes_on_stageout) {\n\t\t\t\t\t\tif (free_sister_vnodes(pjob, NULL, NULL, log_buffer, LOG_BUF_SIZE, NULL) != 0) {\n\t\t\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_WARNING, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tpreq->rq_extra = (void *) pjob;\n\t\t\t\t\trc = issue_Drequest(handle, preq, on_job_exit, &pt, pjob->ji_mom_prot);\n\t\t\t\t\tif (rc == 0) {\n\t\t\t\t\t\tappend_link(&pjob->ji_svrtask, &pt->wt_linkobj, pt);\n\t\t\t\t\t\tif (pjob->ji_mom_prot == PROT_TPP)\n\t\t\t\t\t\t\tif (mom_tasklist_ptr)\n\t\t\t\t\t\t\t\tappend_link(mom_tasklist_ptr, &pt->wt_linkobj2, pt); /* if tpp, link to mom list as well */\n\t\t\t\t\t\treturn;\t\t\t\t\t\t\t\t     /* come 
back when mom replies */\n\t\t\t\t\t} else {\n\t\t\t\t\t\t/* set up as if mom returned error */\n\n\t\t\t\t\t\tpreq->rq_reply.brp_code = rc;\n\t\t\t\t\t\tpreq->rq_reply.brp_choice = BATCH_REPLY_CHOICE_NULL;\n\t\t\t\t\t\tpreq->rq_reply.brp_un.brp_txt.brp_txtlen = 0;\n\t\t\t\t\t\t/* we will \"fall\" into the post reply side */\n\t\t\t\t\t}\n\n\t\t\t\t} else { /* no files to copy, any to delete? */\n\t\t\t\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_EXITING, JOB_SUBSTATE_STAGEDEL);\n\t\t\t\t\tptask = set_task(WORK_Immed, 0, on_job_exit, pjob);\n\t\t\t\t\tappend_link(&pjob->ji_svrtask, &ptask->wt_linkobj, ptask);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/* here we have a reply (maybe faked) from MOM about the copy */\n\n\t\t\tif (preq->rq_reply.brp_code != 0) { /* error from MOM */\n\n\t\t\t\tif ((preq->rq_reply.brp_code == DIS_EOF) ||\n\t\t\t\t    (preq->rq_reply.brp_code == DIS_EOD)) {\n\t\t\t\t\t/* connection to Mom broken */\n\t\t\t\t\tconn_to_mom_failed(pjob, on_job_exit);\n\t\t\t\t\tfree_br(preq);\n\t\t\t\t\tpreq = NULL;\n\t\t\t\t\treturn;\n\t\t\t\t}\n\n\t\t\t\tif (preq->rq_reply.brp_code == PBSE_NOCOPYFILE)\n\t\t\t\t\tstageout_status = 0;\n\n\t\t\t\ton_exitrerun_msg(pjob, msg_obitnocpy);\n\t\t\t\tif (preq->rq_reply.brp_choice == BATCH_REPLY_CHOICE_Text) {\n\t\t\t\t\tint len = strlen(log_buffer);\n\n\t\t\t\t\tif (len < LOG_BUF_SIZE + 2) {\n\t\t\t\t\t\tlog_buffer[len++] = '\\n';\n\t\t\t\t\t\tstrncpy(&log_buffer[len],\n\t\t\t\t\t\t\tpreq->rq_reply.brp_un.brp_txt.brp_str,\n\t\t\t\t\t\t\tLOG_BUF_SIZE - len);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tsvr_mailowner(pjob, MAIL_OTHER, MAIL_FORCE, log_buffer);\n\t\t\t}\n\n\t\t\tset_jattr_l_slim(pjob, JOB_ATR_stageout_status, stageout_status, SET);\n\n\t\t\t/*\n\t\t\t * files (generally) copied ok, move on to the next phase by\n\t\t\t * \"faking\" the immediate work task.\n\t\t\t */\n\n\t\t\tfree_br(preq);\n\t\t\tpreq = NULL;\n\t\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_EXITING, JOB_SUBSTATE_STAGEDEL);\n\t\t\tptask->wt_type = 
WORK_Immed;\n\n\t\t\t/* NO BREAK - FALL INTO THE NEXT CASE */\n\n\t\tcase JOB_SUBSTATE_STAGEDEL:\n\n\t\t\tif (ptask->wt_type != WORK_Deferred_Reply) { /* first time in */\n\n\t\t\t\t/* Build list of files which were staged-in so they can\n\t\t\t\t * can be deleted.\n\t\t\t\t */\n\n\t\t\t\tpreq = cpy_stage(preq, pjob, JOB_ATR_stagein, 0);\n\n\t\t\t\tif (preq) { /* have files to delete\t\t*/\n\n\t\t\t\t\t/* change the request type from copy to delete  */\n\n\t\t\t\t\tif (preq->rq_type == PBS_BATCH_CopyFiles_Cred)\n\t\t\t\t\t\tpreq->rq_type = PBS_BATCH_DelFiles_Cred;\n\t\t\t\t\telse\n\t\t\t\t\t\tpreq->rq_type = PBS_BATCH_DelFiles;\n\t\t\t\t\tpreq->rq_extra = (void *) pjob;\n\n\t\t\t\t\trc = issue_Drequest(handle, preq, on_job_exit, &pt, pjob->ji_mom_prot);\n\t\t\t\t\tif (rc == 0) {\n\t\t\t\t\t\tappend_link(&pjob->ji_svrtask, &pt->wt_linkobj, pt);\n\t\t\t\t\t\tif (pjob->ji_mom_prot == PROT_TPP)\n\t\t\t\t\t\t\tif (mom_tasklist_ptr)\n\t\t\t\t\t\t\t\tappend_link(mom_tasklist_ptr, &pt->wt_linkobj2, pt); /* if tpp, link to mom list as well */\n\t\t\t\t\t\treturn;\t\t\t\t\t\t\t\t     /* come back when mom replies */\n\t\t\t\t\t} else {\n\t\t\t\t\t\t/* set up as if mom returned error */\n\n\t\t\t\t\t\tpreq->rq_reply.brp_code = rc;\n\t\t\t\t\t\tpreq->rq_reply.brp_choice = BATCH_REPLY_CHOICE_NULL;\n\n\t\t\t\t\t\t/* we will \"fall\" into the post reply side */\n\t\t\t\t\t}\n\n\t\t\t\t} else { /* preq == 0, no files to delete   */\n\t\t\t\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_EXITING, JOB_SUBSTATE_EXITED);\n\t\t\t\t\tptask = set_task(WORK_Immed, 0, on_job_exit, pjob);\n\t\t\t\t\tappend_link(&pjob->ji_svrtask, &ptask->wt_linkobj, ptask);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/* After MOM replied (maybe faked) to Delete Files request */\n\n\t\t\tif (preq->rq_reply.brp_code != 0) { /* an error occurred */\n\n\t\t\t\tif ((preq->rq_reply.brp_code == DIS_EOF) ||\n\t\t\t\t    (preq->rq_reply.brp_code == DIS_EOD)) {\n\t\t\t\t\t/* tcp connection to Mom broken 
*/\n\t\t\t\t\tconn_to_mom_failed(pjob, on_job_exit);\n\t\t\t\t\tfree_br(preq);\n\t\t\t\t\tpreq = NULL;\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tif (preq->rq_reply.brp_code == PBSE_TRYAGAIN) {\n\t\t\t\t\t/* Mom hasn't finished her post processing yet,\n\t\t\t\t\t * send the delete request again later.\n\t\t\t\t\t */\n\t\t\t\t\tt = pjob->ji_retryok++;\n\t\t\t\t\tt = time_now + (t * t);\n\t\t\t\t\tptask = set_task(WORK_Timed, t, on_job_exit, pjob);\n\t\t\t\t\tappend_link(&pjob->ji_svrtask, &ptask->wt_linkobj, ptask);\n\n\t\t\t\t\tfree_br(preq);\n\t\t\t\t\tpreq = NULL;\n\t\t\t\t\treturn;\n\t\t\t\t}\n\n\t\t\t\ton_exitrerun_msg(pjob, msg_obitnodel);\n\t\t\t\tif (preq->rq_reply.brp_choice == BATCH_REPLY_CHOICE_Text) {\n\t\t\t\t\tint len = strlen(log_buffer);\n\n\t\t\t\t\tif (len < LOG_BUF_SIZE + 2) {\n\t\t\t\t\t\tlog_buffer[len++] = '\\n';\n\t\t\t\t\t\tstrncpy(&log_buffer[len],\n\t\t\t\t\t\t\tpreq->rq_reply.brp_un.brp_txt.brp_str,\n\t\t\t\t\t\t\tLOG_BUF_SIZE - len);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tsvr_mailowner(pjob, MAIL_OTHER, MAIL_FORCE, log_buffer);\n\t\t\t}\n\t\t\tfree_br(preq);\n\t\t\tpreq = NULL;\n\t\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_EXITING, JOB_SUBSTATE_EXITED);\n\n\t\t\tptask->wt_type = WORK_Immed;\n\n\t\t\t/* NO BREAK, FALL INTO NEXT CASE */\n\n\t\tcase JOB_SUBSTATE_EXITED:\n\n\t\t\tif (ptask->wt_type != WORK_Deferred_Reply) { /* first time in */\n\n\t\t\t\t/* see if have any dependencies */\n\n\t\t\t\tif (is_jattr_set(pjob, JOB_ATR_depend))\n\t\t\t\t\t(void) depend_on_term(pjob);\n\n\t\t\t\t/* tell mom to delete the job */\n\n\t\t\t\tpreq = alloc_br(PBS_BATCH_DeleteJob);\n\t\t\t\tif (preq) {\n\t\t\t\t\tstrcpy(preq->rq_ind.rq_delete.rq_objname,\n\t\t\t\t\t       pjob->ji_qs.ji_jobid);\n\t\t\t\t\tpreq->rq_extra = (void *) pjob;\n\t\t\t\t\trc = issue_Drequest(handle, preq, on_job_exit, &pt, pjob->ji_mom_prot);\n\t\t\t\t\tif (rc == 0) {\n\t\t\t\t\t\tappend_link(&pjob->ji_svrtask, &pt->wt_linkobj, pt);\n\t\t\t\t\t\tif (pjob->ji_mom_prot == 
PROT_TPP)\n\t\t\t\t\t\t\tif (mom_tasklist_ptr)\n\t\t\t\t\t\t\t\tappend_link(mom_tasklist_ptr, &pt->wt_linkobj2, pt); /* if tpp, link to mom list as well */\n\t\t\t\t\t\treturn;\t\t\t\t\t\t\t\t     /* come back when mom replies */\n\t\t\t\t\t} else {\n\t\t\t\t\t\t/* set up as if mom returned error */\n\n\t\t\t\t\t\tpreq->rq_reply.brp_code = rc;\n\t\t\t\t\t\tpreq->rq_reply.brp_choice = BATCH_REPLY_CHOICE_NULL;\n\n\t\t\t\t\t\t/* we will \"fall\" into the post reply side */\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tlog_err(-1, pjob->ji_qs.ji_jobid,\n\t\t\t\t\t\t\"Unable to malloc memory for deletejob\");\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/* Set job's exec_vnodes with current time for last_used_time. */\n\t\t\tset_last_used_time_node(pjob, 0);\n\n\t\t\t/* here we have a reply from MOM about the delete */\n\t\t\t/* if delete ok, send final track and purge the job */\n\n\t\t\tif (preq->rq_reply.brp_code == PBSE_SISCOMM) {\n\n\t\t\t\t/* some sister Mom apparently failed to delete the job and\n\t\t\t\t * free resources, keep job until discard_job() does its job\n\t\t\t\t */\n\t\t\t\tfree_br(preq);\n\t\t\t\tpreq = NULL;\n\t\t\t\tif (handle != -1 && pjob->ji_mom_prot == PROT_TCP)\n\t\t\t\t\tsvr_disconnect(handle);\n\n\t\t\t\tdiscard_job(pjob, \"A sister Mom failed to delete job\", 0);\n\t\t\t\treturn;\n\t\t\t} else if ((preq->rq_reply.brp_code == DIS_EOF) ||\n\t\t\t\t   (preq->rq_reply.brp_code == DIS_EOD)) {\n\t\t\t\t/* tcp connection to Mom broken */\n\t\t\t\tconn_to_mom_failed(pjob, on_job_exit);\n\t\t\t\tfree_br(preq);\n\t\t\t\tpreq = NULL;\n\t\t\t\treturn;\n\t\t\t} else if (preq->rq_reply.brp_code == PBSE_TRYAGAIN) {\n\t\t\t\t/* Mom hasn't finished her post processing yet,\n\t\t\t\t * send the delete request again later.\n\t\t\t\t */\n\t\t\t\tt = pjob->ji_retryok++;\n\t\t\t\tt = time_now + (t * t);\n\t\t\t\tptask = set_task(WORK_Timed, t, on_job_exit, pjob);\n\t\t\t\tappend_link(&pjob->ji_svrtask, &ptask->wt_linkobj, 
ptask);\n\n\t\t\t\tfree_br(preq);\n\t\t\t\tpreq = NULL;\n\t\t\t\treturn;\n\t\t\t} else {\n\t\t\t\t/* all went ok with the delete by Mom(s) */\n\t\t\t\tfree_br(preq);\n\t\t\t\tpreq = NULL;\n\t\t\t\tend_job(pjob, 0);\n\t\t\t}\n\t\t\tbreak;\n\t\tcase JOB_SUBSTATE_TERMINATED:\n\t\t\tset_last_used_time_node(pjob, 0);\n\t}\n}\n\n/**\n * @brief\n *\tUnset values of various attributes of 'pjob'\n *\tspecifically for node ramp down feature.\n *\n * @param[in]\tpjob - job in question\n *\n * @return void\n *\n */\nvoid\nunset_extra_attributes(job *pjob)\n{\n\tif (pjob == NULL)\n\t\treturn;\n\n\tif (is_jattr_set(pjob, JOB_ATR_resource_orig)) {\n\t\tfree_jattr(pjob, JOB_ATR_resource);\n\t\tmark_jattr_not_set(pjob, JOB_ATR_resource);\n\t\tset_attr_with_attr(&job_attr_def[(int) JOB_ATR_resource], get_jattr(pjob, JOB_ATR_resource), get_jattr(pjob, JOB_ATR_resource_orig), INCR);\n\n\t\tfree_jattr(pjob, JOB_ATR_resource_orig);\n\t\tmark_jattr_not_set(pjob, JOB_ATR_resource_orig);\n\t}\n\n\tif (is_jattr_set(pjob, JOB_ATR_resc_used_update)) {\n\t\tfree_jattr(pjob, JOB_ATR_resc_used_update);\n\t\tmark_jattr_not_set(pjob, JOB_ATR_resc_used_update);\n\t}\n\n\tif (is_jattr_set(pjob, JOB_ATR_exec_vnode_acct)) {\n\t\tfree_jattr(pjob, JOB_ATR_exec_vnode_acct);\n\t\tmark_jattr_not_set(pjob, JOB_ATR_exec_vnode_acct);\n\t}\n\n\tif (is_jattr_set(pjob, JOB_ATR_exec_vnode_orig)) {\n\t\tfree_jattr(pjob, JOB_ATR_exec_vnode_orig);\n\t\tmark_jattr_not_set(pjob, JOB_ATR_exec_vnode_orig);\n\t}\n\n\tif (is_jattr_set(pjob, JOB_ATR_exec_host_acct)) {\n\t\tfree_jattr(pjob, JOB_ATR_exec_host_acct);\n\t\tmark_jattr_not_set(pjob, JOB_ATR_exec_host_acct);\n\t}\n\n\tif (is_jattr_set(pjob, JOB_ATR_exec_host_orig)) {\n\t\tfree_jattr(pjob, JOB_ATR_exec_host_orig);\n\t\tmark_jattr_not_set(pjob, JOB_ATR_exec_host_orig);\n\t}\n\n\tif (is_jattr_set(pjob, JOB_ATR_SchedSelect_orig)) {\n\t\tset_jattr_str_slim(pjob, JOB_ATR_SchedSelect, get_jattr_str(pjob, JOB_ATR_SchedSelect_orig), NULL);\n\n\t\tfree_jattr(pjob, 
JOB_ATR_SchedSelect_orig);\n\t\tmark_jattr_not_set(pjob, JOB_ATR_SchedSelect_orig);\n\t}\n\n\tif (is_jattr_set(pjob, JOB_ATR_exec_vnode_deallocated)) {\n\t\tfree_jattr(pjob, JOB_ATR_exec_vnode_deallocated);\n\t\tmark_jattr_not_set(pjob, JOB_ATR_exec_vnode_deallocated);\n\t}\n}\n\n/**\n * @brief\n * \t\ton_job_rerun - Handle the clean up of jobs being rerun.  This gets\n *\t\tmessy if the job is being executed on another host.  Then the\n *\t\t\"standard\" files must be copied to the server for safe keeping.\n *\n *\t\tThe basic flow is very much like that of on_job_exit().\n *\t\tThe substate will already be set to JOB_SUBSTATE_RERUN and the\n *\t\tJOB_SVFLG_HASRUN bit set in ji_svrflags.\n *\n * @param[in,out]\tptask\t- work task structure\n */\nvoid\non_job_rerun(struct work_task *ptask)\n{\n\tint handle;\n\tchar newstate;\n\tchar hook_msg[HOOK_MSG_SIZE] = {0};\n\tint newsubst;\n\tjob *pjob;\n\tstruct batch_request *preq;\n\tstruct work_task *pt;\n\tint rc;\n\tpbs_list_head *mom_tasklist_ptr = NULL;\n\tmominfo_t *pmom = 0;\n\n\tif (ptask->wt_type != WORK_Deferred_Reply) {\n\t\tpreq = NULL;\n\t\tpjob = (job *) ptask->wt_parm1;\n\t} else {\n\t\tpreq = (struct batch_request *) ptask->wt_parm1;\n\t\tpjob = (job *) preq->rq_extra;\n\t}\n\n\t/* minor check on validity of pjob */\n\n\tif (isdigit((int) pjob->ji_qs.ji_jobid[0]) == 0)\n\t\treturn; /* not pointing to currently valid job */\n\n\tif ((handle = mom_comm(pjob, on_job_rerun)) < 0)\n\t\treturn;\n\n\tif (pjob->ji_mom_prot == PROT_TPP) {\n\t\tpmom = tfind2((unsigned long) pjob->ji_qs.ji_un.ji_exect.ji_momaddr,\n\t\t\t      pjob->ji_qs.ji_un.ji_exect.ji_momport,\n\t\t\t      &ipaddrs);\n\t\tif (!pmom || (pmom->mi_dmn_info->dmn_state & INUSE_DOWN))\n\t\t\treturn;\n\t\tmom_tasklist_ptr = &pmom->mi_dmn_info->dmn_deferred_cmds;\n\t}\n\n\tswitch (get_job_substate(pjob)) {\n\n\t\tcase JOB_SUBSTATE_RERUN:\n\n\t\t\tif (ptask->wt_type != WORK_Deferred_Reply) {\n\t\t\t\tif (pjob->ji_qs.ji_un.ji_exect.ji_momaddr == 
pbs_server_addr) {\n\n\t\t\t\t\t/* files don`t need to be moved, go to next step */\n\n\t\t\t\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_EXITING, JOB_SUBSTATE_RERUN1);\n\t\t\t\t\tptask = set_task(WORK_Immed, 0, on_job_rerun, pjob);\n\t\t\t\t\tappend_link(&pjob->ji_svrtask, &ptask->wt_linkobj, ptask);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\n\t\t\t\t/* here is where we have to save the files\t*/\n\t\t\t\t/* ask mom to send them back to the server\t*/\n\t\t\t\t/* mom deletes her copy if returned ok\t*/\n\n\t\t\t\tpreq = alloc_br(PBS_BATCH_Rerun);\n\t\t\t\tif (preq == NULL) {\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\t(void) strcpy(preq->rq_ind.rq_rerun, pjob->ji_qs.ji_jobid);\n\t\t\t\tpreq->rq_extra = (void *) pjob;\n\n\t\t\t\trc = issue_Drequest(handle, preq, on_job_rerun, &pt, pjob->ji_mom_prot);\n\t\t\t\tif (rc == 0) {\n\t\t\t\t\t/* request ok, will come back when its done */\n\t\t\t\t\tappend_link(&pjob->ji_svrtask, &pt->wt_linkobj, pt);\n\t\t\t\t\tif (pjob->ji_mom_prot == PROT_TPP)\n\t\t\t\t\t\tif (mom_tasklist_ptr)\n\t\t\t\t\t\t\tappend_link(mom_tasklist_ptr, &pt->wt_linkobj2, pt); /* if tpp, link to mom list as well */\n\t\t\t\t\treturn;\n\t\t\t\t} else {\n\t\t\t\t\t/* set up as if mom returned error */\n\n\t\t\t\t\tpreq->rq_reply.brp_code = rc;\n\t\t\t\t\tpreq->rq_reply.brp_choice = BATCH_REPLY_CHOICE_NULL;\n\t\t\t\t\t/* we will \"fall\" into the post reply side */\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/* We get here if MOM replied (may be faked above)  */\n\t\t\t/* to the rerun (return files) request issued above */\n\n\t\t\tif (preq->rq_reply.brp_code != 0) { /* error */\n\t\t\t\t/* for now, just log it */\n\t\t\t\tif ((preq->rq_reply.brp_code == DIS_EOF) ||\n\t\t\t\t    (preq->rq_reply.brp_code == DIS_EOD)) {\n\t\t\t\t\t/* tcp connection to Mom broken */\n\t\t\t\t\tconn_to_mom_failed(pjob, on_job_rerun);\n\t\t\t\t\tfree_br(preq);\n\t\t\t\t\tpreq = NULL;\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\ton_exitrerun_msg(pjob, msg_obitnocpy);\n\t\t\t}\n\t\t\tsvr_setjobstate(pjob, 
JOB_STATE_LTR_EXITING, JOB_SUBSTATE_RERUN1);\n\t\t\tptask->wt_type = WORK_Immed;\n\t\t\tfree_br(preq);\n\t\t\tpreq = NULL;\n\n\t\t\t/* NO BREAK, FALL THROUGH TO NEXT CASE, including the request */\n\n\t\tcase JOB_SUBSTATE_RERUN1:\n\n\t\t\tif (ptask->wt_type != WORK_Deferred_Reply) {\n\n\t\t\t\t/* this is the very first call, have mom copy files */\n\t\t\t\t/* are there any stage-out files to process? \t*/\n\n\t\t\t\tpreq = cpy_stage(preq, pjob, JOB_ATR_stageout, STAGE_DIR_OUT);\n\n\t\t\t\tif (preq) { /* have files to copy \t\t*/\n\t\t\t\t\tpreq->rq_extra = (void *) pjob;\n\t\t\t\t\trc = issue_Drequest(handle, preq, on_job_rerun, &pt, pjob->ji_mom_prot);\n\t\t\t\t\tif (rc == 0) {\n\t\t\t\t\t\tappend_link(&pjob->ji_svrtask, &pt->wt_linkobj, pt);\n\t\t\t\t\t\tif (pjob->ji_mom_prot == PROT_TPP)\n\t\t\t\t\t\t\tif (mom_tasklist_ptr)\n\t\t\t\t\t\t\t\tappend_link(mom_tasklist_ptr, &pt->wt_linkobj2, pt); /* if tpp, link to mom list as well */\n\t\t\t\t\t\treturn;\t\t\t\t\t\t\t\t     /* come back when mom replies */\n\t\t\t\t\t} else {\n\t\t\t\t\t\t/* set up as if mom returned error */\n\n\t\t\t\t\t\tpreq->rq_reply.brp_code = rc;\n\t\t\t\t\t\tpreq->rq_reply.brp_choice = BATCH_REPLY_CHOICE_NULL;\n\t\t\t\t\t\tpreq->rq_reply.brp_un.brp_txt.brp_txtlen = 0;\n\t\t\t\t\t\t/* we will \"fall\" into the post reply side */\n\t\t\t\t\t}\n\n\t\t\t\t} else { /* no files to copy, any to delete? 
*/\n\t\t\t\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_EXITING, JOB_SUBSTATE_RERUN2);\n\t\t\t\t\tptask = set_task(WORK_Immed, 0, on_job_rerun, pjob);\n\t\t\t\t\tappend_link(&pjob->ji_svrtask, &ptask->wt_linkobj, ptask);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/* here we have a reply (maybe faked) from MOM about the copy */\n\n\t\t\tif (preq->rq_reply.brp_code != 0) { /* error from MOM */\n\n\t\t\t\tif ((preq->rq_reply.brp_code == DIS_EOF) ||\n\t\t\t\t    (preq->rq_reply.brp_code == DIS_EOD)) {\n\t\t\t\t\t/* tcp connection to Mom broken */\n\t\t\t\t\tconn_to_mom_failed(pjob, on_job_rerun);\n\t\t\t\t\tfree_br(preq);\n\t\t\t\t\tpreq = NULL;\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\ton_exitrerun_msg(pjob, msg_obitnocpy);\n\t\t\t\tif (preq->rq_reply.brp_choice == BATCH_REPLY_CHOICE_Text) {\n\t\t\t\t\tint len = strlen(log_buffer);\n\n\t\t\t\t\tif (len < LOG_BUF_SIZE + 2) {\n\t\t\t\t\t\tlog_buffer[len++] = '\\n';\n\t\t\t\t\t\tstrncpy(&log_buffer[len],\n\t\t\t\t\t\t\tpreq->rq_reply.brp_un.brp_txt.brp_str,\n\t\t\t\t\t\t\tLOG_BUF_SIZE - len);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tsvr_mailowner(pjob, MAIL_OTHER, MAIL_FORCE, log_buffer);\n\t\t\t}\n\n\t\t\t/*\n\t\t\t * files (generally) copied ok, move on to the next phase by\n\t\t\t * \"faking\" the immediate work task.\n\t\t\t */\n\n\t\t\tfree_br(preq);\n\t\t\tpreq = NULL;\n\t\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_EXITING, JOB_SUBSTATE_RERUN2);\n\t\t\tptask->wt_type = WORK_Immed;\n\n\t\t\t/* NO BREAK - FALL INTO THE NEXT CASE */\n\n\t\tcase JOB_SUBSTATE_RERUN2:\n\n\t\t\tif (ptask->wt_type != WORK_Deferred_Reply) {\n\n\t\t\t\t/* here is where we delete  any stage-in files\t   */\n\n\t\t\t\tpreq = cpy_stage(preq, pjob, JOB_ATR_stagein, 0);\n\t\t\t\tif (preq) {\n\t\t\t\t\tpreq->rq_type = PBS_BATCH_DelFiles;\n\t\t\t\t\tpreq->rq_extra = (void *) pjob;\n\t\t\t\t\trc = issue_Drequest(handle, preq, on_job_rerun, &pt, pjob->ji_mom_prot);\n\t\t\t\t\tif (rc == 0) {\n\t\t\t\t\t\tappend_link(&pjob->ji_svrtask, &pt->wt_linkobj, 
pt);\n\t\t\t\t\t\tif (pjob->ji_mom_prot == PROT_TPP)\n\t\t\t\t\t\t\tif (mom_tasklist_ptr)\n\t\t\t\t\t\t\t\tappend_link(mom_tasklist_ptr, &pt->wt_linkobj2, pt); /* if tpp, link to mom list as well */\n\t\t\t\t\t\treturn;\n\t\t\t\t\t} else { /* error on sending request */\n\t\t\t\t\t\tpreq->rq_reply.brp_code = rc;\n\t\t\t\t\t\tpreq->rq_reply.brp_choice = BATCH_REPLY_CHOICE_NULL;\n\t\t\t\t\t\t/* we will \"fall\" into the post reply side */\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_EXITING, JOB_SUBSTATE_RERUN3);\n\t\t\t\t\tptask = set_task(WORK_Immed, 0, on_job_rerun, pjob);\n\t\t\t\t\tappend_link(&pjob->ji_svrtask, &ptask->wt_linkobj, ptask);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/* post reply side for delete file request to MOM */\n\t\t\tif (preq->rq_reply.brp_code != 0) { /* error */\n\t\t\t\tif ((preq->rq_reply.brp_code == DIS_EOF) ||\n\t\t\t\t    (preq->rq_reply.brp_code == DIS_EOD)) {\n\t\t\t\t\t/* tcp connection to Mom broken */\n\t\t\t\t\tconn_to_mom_failed(pjob, on_job_rerun);\n\t\t\t\t\tfree_br(preq);\n\t\t\t\t\tpreq = NULL;\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\t/* for other errors, just log it */\n\t\t\t\ton_exitrerun_msg(pjob, msg_obitnocpy);\n\t\t\t}\n\t\t\tfree_br(preq);\n\t\t\tpreq = NULL;\n\t\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_EXITING, JOB_SUBSTATE_RERUN3);\n\t\t\tptask->wt_type = WORK_Immed;\n\n\t\t\t/* NO BREAK, FALL THROUGH TO NEXT CASE */\n\n\t\tcase JOB_SUBSTATE_RERUN3:\n\n\t\t\tif (ptask->wt_type != WORK_Deferred_Reply) {\n\t\t\t\t/* need to have MOM delete her copy of the job */\n\t\t\t\tpreq = alloc_br(PBS_BATCH_DeleteJob);\n\t\t\t\tif (preq) {\n\t\t\t\t\tstrcpy(preq->rq_ind.rq_delete.rq_objname,\n\t\t\t\t\t       pjob->ji_qs.ji_jobid);\n\t\t\t\t\tpreq->rq_extra = (void *) pjob;\n\t\t\t\t\trc = issue_Drequest(handle, preq, on_job_rerun, &pt, pjob->ji_mom_prot);\n\t\t\t\t\tif (rc == 0) {\n\t\t\t\t\t\tappend_link(&pjob->ji_svrtask, &pt->wt_linkobj, pt);\n\t\t\t\t\t\tif (pjob->ji_mom_prot == 
PROT_TPP)\n\t\t\t\t\t\tif (mom_tasklist_ptr)\n\t\t\t\t\t\t\tappend_link(mom_tasklist_ptr, &pt->wt_linkobj2, pt); /* if tpp, link to mom list as well */\n\t\t\t\t\t\treturn;\t\t\t\t\t\t\t\t     /* come back when Mom replies */\n\t\t\t\t\t} else {\n\t\t\t\t\t\t/* set up as if mom returned error */\n\t\t\t\t\t\tpreq->rq_reply.brp_code = rc;\n\t\t\t\t\t\tpreq->rq_reply.brp_choice = BATCH_REPLY_CHOICE_NULL;\n\t\t\t\t\t\t/* fall into next section */\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tlog_err(-1, pjob->ji_qs.ji_jobid,\n\t\t\t\t\t\t\"Unable to malloc memory for rerun\");\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/* here we have a reply from MOM about the delete */\n\t\t\t/* if delete ok, send final track and purge the job */\n\n\t\t\tif (preq->rq_reply.brp_code == PBSE_SISCOMM) {\n\n\t\t\t\t/*\n\t\t\t\t * some sister Mom apparently failed to delete the job and\n\t\t\t\t * free resources, keep job until discard_job() does its job\n\t\t\t\t */\n\t\t\t\tfree_br(preq);\n\t\t\t\tpreq = NULL;\n\t\t\t\tif (handle != -1 && pjob->ji_mom_prot == PROT_TCP)\n\t\t\t\t\tsvr_disconnect(handle);\n\n\t\t\t\tif (pjob->ji_pmt_preq != NULL)\n\t\t\t\t\treply_preempt_jobs_request(PBSE_SISCOMM, PREEMPT_METHOD_DELETE, pjob);\n\n\t\t\t\tdiscard_job(pjob, \"A sister Mom failed to delete job\", 0);\n\t\t\t\treturn;\n\t\t\t} else if ((preq->rq_reply.brp_code == DIS_EOF) ||\n\t\t\t\t   (preq->rq_reply.brp_code == DIS_EOD)) {\n\t\t\t\t/* tcp connection to Mom broken */\n\t\t\t\tconn_to_mom_failed(pjob, on_job_rerun);\n\t\t\t\tfree_br(preq);\n\t\t\t\tpreq = NULL;\n\t\t\t\treturn;\n\t\t\t} else {\n\t\t\t\t/* all went ok with the delete by Mom(s) */\n\t\t\t\tfree_br(preq);\n\t\t\t\tpreq = NULL;\n\n\t\t\t\tpjob->ji_qs.ji_obittime = time_now;\n\t\t\t\tset_jattr_l_slim(pjob, JOB_ATR_obittime, pjob->ji_qs.ji_obittime, SET);\n\n\t\t\t\t/* Allocate space for the jobobit hook event params */\n\t\t\t\tpreq = alloc_br(PBS_BATCH_JobObit);\n\t\t\t\tif (preq == NULL) 
{\n\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__, \"rq_jobobit alloc failed\");\n\t\t\t\t} else {\n\t\t\t\t\tpreq->rq_ind.rq_obit.rq_pjob = pjob;\n\n\t\t\t\t\trc = process_hooks(preq, hook_msg, sizeof(hook_msg), pbs_python_set_interrupt);\n\t\t\t\t\tif (rc == -1) {\n\t\t\t\t\t\tlog_err(-1, __func__, \"rq_jobobit process_hooks call failed\");\n\t\t\t\t\t}\n\t\t\t\t\tfree_br(preq);\n\t\t\t\t}\n\n\t\t\t\tif (handle != -1 && pjob->ji_mom_prot == PROT_TCP)\n\t\t\t\t\tsvr_disconnect(handle);\n\n\t\t\t\taccount_jobend(pjob, pjob->ji_acctrec, PBS_ACCT_RERUN);\n\t\t\t\tif (pjob->ji_acctrec) {\n\t\t\t\t\tfree(pjob->ji_acctrec); /* logged, so clear it */\n\t\t\t\t\tpjob->ji_acctrec = NULL;\n\t\t\t\t}\n\t\t\t\tif ((is_jattr_set(pjob, JOB_ATR_resc_released))) {\n\t\t\t\t\t/* If JOB_ATR_resc_released attribute is set and we are trying\n\t\t\t\t\t * to rerun a job then we need to reassign resources first because\n\t\t\t\t\t * when we suspend a job we don't decrement all of the resources.\n\t\t\t\t\t * So we need to set partially released resources\n\t\t\t\t\t * back again to release all other resources\n\t\t\t\t\t */\n\t\t\t\t\tset_resc_assigned(pjob, 0, INCR);\n\t\t\t\t\tfree_jattr(pjob, JOB_ATR_resc_released);\n\t\t\t\t\tmark_jattr_not_set(pjob, JOB_ATR_resc_released);\n\t\t\t\t\tif (is_jattr_set(pjob, JOB_ATR_resc_released_list)) {\n\t\t\t\t\t\tfree_jattr(pjob, JOB_ATR_resc_released_list);\n\t\t\t\t\t\tmark_jattr_not_set(pjob, JOB_ATR_resc_released_list);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\trel_resc(pjob); /* free resc assigned to job */\n\n\t\t\t\t/* Respond to pending preemption request from the scheduler, if any */\n\t\t\t\tif (pjob->ji_pmt_preq != NULL)\n\t\t\t\t\treply_preempt_jobs_request(PBSE_NONE, PREEMPT_METHOD_REQUEUE, pjob);\n\n\t\t\t\tunset_extra_attributes(pjob);\n\n\t\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HOTSTART) == 0) {\n\t\t\t\t\t/* in case of server shutdown, don't clear exec_vnode */\n\t\t\t\t\t/* will use it on hotstart when next comes up\t      
*/\n\t\t\t\t\tfree_jattr(pjob, JOB_ATR_exec_vnode);\n\t\t\t\t\tfree_jattr(pjob, JOB_ATR_exec_host);\n\t\t\t\t\tfree_jattr(pjob, JOB_ATR_exec_host2);\n\t\t\t\t}\n\t\t\t\tpjob->ji_momhandle = -1;\n\t\t\t\tpjob->ji_mom_prot = PROT_INVALID;\n\t\t\t\t/* job dir has no meaning for re-queued jobs, so unset it */\n\t\t\t\tfree_jattr(pjob, JOB_ATR_jobdir);\n\n\t\t\t\tpjob->ji_qs.ji_svrflags &= ~JOB_SVFLG_StagedIn;\n\t\t\t\tsvr_evaljobstate(pjob, &newstate, &newsubst, 0);\n\t\t\t\tsvr_setjobstate(pjob, newstate, newsubst);\n\t\t\t}\n\t}\n}\n/**\n * @brief\n * \t\tsetrerun\t- job is to be retried on start failure or\n * \t\tjob is rerunnable and should set for rerun\n *\n * @param[in]\tpjob\t- job which needs to be set for rerun.\n *\n * @return\texit code\n * @retval\t0\t- substate set to rerun.\n * @retval\t1\t- substate left as it\n */\nstatic int\nsetrerun(job *pjob)\n{\n\tif ((pjob->ji_qs.ji_un.ji_exect.ji_exitstat == JOB_EXEC_RETRY) ||\n\t    (get_jattr_long(pjob, JOB_ATR_rerunable) != 0)) {\n\t\tset_job_substate(pjob, JOB_SUBSTATE_RERUN);\n\t\treturn 0;\n\t} else {\n\t\tsvr_mailowner(pjob, MAIL_ABORT, MAIL_FORCE, msg_init_abt);\n\t\treturn 1;\n\t}\n}\n\n/**\n * @brief\n *\t\tConcatenate the resources used to the buffer provided.\n *\n * @param[in,out]buffer - pointer to buffer to add info to.  
May grow/change due to pbs_strcat() (realloc)\n * @param[in,out]buffer_size - size of buffer - may increase through pbs_strcat()\n * @param[in]\t\tpatlist - attribute list entry holding the resources_used name and value\n * @param[in]\t\tdelim - a pointer to the delimiter to use\n * @param[in]\t\tpjob - job structure for additional info\n *\n * @return\tint\n * @retval\t0\t- success\n * @retval\t1\t- bad input or memory allocation failure\n */\nint\nconcat_rescused_to_buffer(char **buffer, int *buffer_size, svrattrl *patlist, char *delim, const job *pjob)\n{\n\tint val_len;\n\n\tif (buffer == NULL || buffer_size == NULL || patlist == NULL || delim == NULL)\n\t\treturn 1;\n\t/*\n\t * Append a string of the form \"resources_used.<resource>=<value>\"\n\t * to the buffer, growing it as needed via pbs_strcat().\n\t */\n\tval_len = strlen(patlist->al_value);\n\t/* log to accounting_logs only if there's a value */\n\tif (val_len > 0) {\n\t\tif (pbs_strcat(buffer, buffer_size, delim) == NULL) {\n\t\t\tlog_err(errno, __func__, \"Failed to allocate memory.\");\n\t\t\treturn 1;\n\t\t}\n\t\tif (pbs_strcat(buffer, buffer_size, patlist->al_name) == NULL) {\n\t\t\tlog_err(errno, __func__, \"Failed to allocate memory.\");\n\t\t\treturn 1;\n\t\t}\n\t\tif (patlist->al_resc) {\n\t\t\tif (pbs_strcat(buffer, buffer_size, \".\") == NULL) {\n\t\t\t\tlog_err(errno, __func__, \"Failed to allocate memory.\");\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t\tif (pbs_strcat(buffer, buffer_size, patlist->al_resc) == NULL) {\n\t\t\t\tlog_err(errno, __func__, \"Failed to allocate memory.\");\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t}\n\t\tif (pbs_strcat(buffer, buffer_size, \"=\") == NULL) {\n\t\t\tlog_err(errno, __func__, \"Failed to allocate memory.\");\n\t\t\treturn 1;\n\t\t}\n\t\tif ((pjob != NULL) &&\n\t\t    patlist->al_resc && (strcmp(patlist->al_resc, WALLTIME) == 0)) {\n\t\t\tlong j, k;\n\n\t\t\tk = get_walltime(pjob, JOB_ATR_resc_used_acct);\n\t\t\tj = get_walltime(pjob, JOB_ATR_resc_used);\n\t\t\tif ((k >= 0) && (j >= k)) {\n\t\t\t\tchar timebuf[TIMEBUF_SIZE] = {0};\n\n\t\t\t\tconvert_duration_to_str(j - k, timebuf, TIMEBUF_SIZE);\n\t\t\t\tif 
(pbs_strcat(buffer, buffer_size, timebuf) == NULL) {\n\t\t\t\t\tlog_err(errno, __func__,\n\t\t\t\t\t\t\"Failed to allocate memory.\");\n\t\t\t\t\treturn 1;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif (pbs_strcat(buffer, buffer_size,\n\t\t\t\t\t       patlist->al_value) == NULL) {\n\t\t\t\t\tlog_err(errno, __func__,\n\t\t\t\t\t\t\"Failed to allocate memory.\");\n\t\t\t\t\treturn 1;\n\t\t\t\t}\n\t\t\t}\n\t\t} else if (pbs_strcat(buffer, buffer_size,\n\t\t\t\t      patlist->al_value) == NULL) {\n\t\t\tlog_err(errno, __func__, \"Failed to allocate memory.\");\n\t\t\treturn 1;\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tProcess the Job Obituary Notice (request) from MOM for a job which has\n *\t\tterminated.  The Obit contains the exit status and final resource\n *\t\tusage for the job.\n * @par\n *\t\tIf the job cannot be found, the Server tells Mom to discard her copy.\n *\t\tThis may be the case if the job was forcefully deleted while Mom was\n *\t\tdown or the Server was restarted cold/clean discarding the jobs.\n *\n *\t\tDepending on the state of the job:\n *\t\t- Not RUNNING and not EXITING - tell Mom to discard the job.\n *\t\t- Also not in substate _TERM - Mom wishes to restart the end of job\n *\t  \t processing; likely because she hasn't heard from the Server.\n *\t\t- If the \"run count\" in the obit does not match the Server's, Mom has\n *\t \t an old copy and she is told to discard it.\n * @par\n *\t\tNormally, the Obit is received when the job is in substate RUNNING.\n *\t\tThe job is moved into that substate when Mom sends the session id of\n *\t\tthe job, see stat_update().  However, it is possible that the Obit\n *\t\tis received before that and the job is in substate _PRERUN (or very\n *\t\tunlikely _PROVISION).  
If this is the case, call complete_running()\n *\t\tto update the job to _RUNNING and write the \"S\" accounting record before\n *\t\twe write the \"E\" record.\n * @par\n *\t\tThere are special job exit values (negative numbers which cannot be\n *\t\tactual exit statuses of the job).  These are typically because Mom\n *\t\tcould not complete starting the job or Mom is being restarted without\n *\t\tthe \"-p\" option.\n *\t\t- JOB_EXEC_FAIL1: Mom could not start job, the standard out/err files\n *\t  \twere not created.\n *\t\t- JOB_EXEC_FAIL2: Mom could not start the job, but had created the\n *\t  \tfiles so there is useful info in them.\n *\t\t- JOB_EXEC_INITABT: Mom aborted the running job on her initialization.\n *\t\t- JOB_EXEC_FAILUID: Mom aborted the job because of an invalid uid/gid.\n *\t\t- JOB_EXEC_FAIL_PASSWORD: Mom aborted the job because she needed the\n *\t  \tuser's password (Windows) and the password didn't work.\n *\t\t- JOB_EXEC_RETRY: Mom couldn't start the job, but it might work later,\n *\t  \tso requeue it.\n *\t\t- JOB_EXEC_BADRESRT: The job could not be started from the checkpoint\n *\t  \trestart file.\n *\t\t- JOB_EXEC_INITRST: Mom aborted a checkpointed job which should be\n *\t  \trequeued for a later \"restart\".\n *\t\t- JOB_EXEC_QUERST: The Epilogue told Mom to requeue the job which can\n *\t  \tbe restarted from a checkpoint.\n *\t\t- JOB_EXEC_RERUN or JOB_EXEC_RERUN_SIS_FAIL: requeue the job if it is\n *\t  \trerunable (not submitted with \"-r n\").\n *\t\t- JOB_EXEC_FAILHOOK_RERUN: returned by a job rejected by a mom hook\n *\t  \tand the next action is to requeue/rerun the job.\n *\t\t- JOB_EXEC_FAILHOOK_DELETE: returned by a job rejected by a mom hook\n *\t  \tand the next action is to just delete the job.\n *\t\t- JOB_EXEC_HOOK_RERUN - returned by a job that ran a mom hook that\n *\t  \tinstructed the server to requeue the job once reaching the end.\n *\t\t- JOB_EXEC_HOOK_DELETE - returned by a job that ran a mom hook that\n 
*\t  \tinstructed the server to delete the job once reaching the end.\n *\t\t- JOB_EXEC_HOOKERROR: returned by a job rejected by a mom hook\n *\t\tdue to an exception, or hook alarm was raised,\n *\t\tand the next action is to requeue/rerun the job.\n * @par\n *\t\tOtherwise, save the accounting information to be recorded later in the\n *\t\tprocessing.  Now, the job is moved into \"exiting\" processing or \"rerun\"\n *\t\tprocessing (qrerun) via a work-task entry invoking either on_job_exit()\n *\t\tor on_job_rerun().\n *\n * @param[in] - pruu   - the structure containing the resource usage info\n * @param[in] - stream - the TPP stream connecting to the Mom\n *                       The Server will send back either a rejection or an acceptance\n *                       of the Obit.\n * @return int\n *\n * @retval 0  - accept obit\n * @retval 1  - reject obit\n * @retval -1 - ignore obit\n *\n */\nint\njob_obit(ruu *pruu, int stream)\n{\n\tint alreadymailed = 0;\n\tchar *acctbuf = NULL;\n\tint acctbuf_size = 0;\n\tint dummy;\n\tint num;\n\tint exitstatus;\n\tint local_exitstatus = 0;\n\tchar *mailbuf = NULL;\n\tint mailbuf_size = 0;\n\tchar newstate;\n\tint newsubst;\n\tjob *pjob;\n\tsvrattrl *patlist;\n\tstruct work_task *ptask;\n\tvoid (*eojproc)(struct work_task *);\n\tchar *mailmsg = NULL;\n\tchar *msg = NULL;\n\n\ttime_now = time(0);\n\n\tDBPRT((\"%s: Obit received for job %s status=%d hop=%d\\n\", __func__, pruu->ru_pjobid, pruu->ru_status, pruu->ru_hop))\n\tpjob = find_job(pruu->ru_pjobid);\n\tif (pjob == NULL) { /* not found */\n\t\tDBPRT((\"%s: job %s not found!\\n\", __func__, pruu->ru_pjobid))\n\t\tif (server_init_type == RECOV_COLD || server_init_type == RECOV_CREATE)\n\t\t\tsprintf(log_buffer, msg_obitnojob, PBSE_CLEANEDOUT);\n\t\telse if (is_job_array(pruu->ru_pjobid) == IS_ARRAY_Single)\n\t\t\tsprintf(log_buffer, \"%s\", msg_obitnotrun);\n\t\telse\n\t\t\tsprintf(log_buffer, msg_obitnojob, PBSE_UNKJOBID);\n\t\tlog_event(PBSEVENT_ERROR | 
PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_NOTICE, pruu->ru_pjobid, log_buffer);\n\n\t\t/* tell MOM the job was blown away */\n\t\treturn 1;\n\t}\n\n\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_INFO, pruu->ru_pjobid,\n\t\t   \"Obit received momhop:%d serverhop:%ld state:%c substate:%d\",\n\t\t   pruu->ru_hop, get_jattr_long(pjob, JOB_ATR_run_version), get_job_state(pjob), get_job_substate(pjob));\n\n\tif (!check_job_state(pjob, JOB_STATE_LTR_RUNNING)) {\n\t\tDBPRT((\"%s: job %s not in running state!\\n\",\n\t\t       __func__, pruu->ru_pjobid))\n\t\tif (!check_job_state(pjob, JOB_STATE_LTR_EXITING)) {\n\n\t\t\t/* not running and not exiting - bad news   */\n\t\t\t/* may be from old Mom and job was requeued */\n\t\t\t/* tell mom to trash job\t\t    */\n\t\t\tDBPRT((\"%s: job %s not in exiting state!\\n\",\n\t\t\t       __func__, pruu->ru_pjobid))\n\t\t\tpjob->ji_discarding = 0;\n\n\t\t\tlog_event(PBSEVENT_ERROR | PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, pruu->ru_pjobid, msg_obitnotrun);\n\t\t\treturn 1;\n\t\t} else if (!check_job_substate(pjob, JOB_SUBSTATE_TERM)) {\n\t\t\t/*\n\t\t\t * not in special site script substate, Mom must have\n\t\t\t * had a problem and wants to have the post job\n\t\t\t * processing restarted.\n\t\t\t *\n\t\t\t * If there is an open connection to Mom for this job,\n\t\t\t * find the associated work task, remove and free it and\n\t\t\t * any outstanding batch_request to Mom.  
Then close\n\t\t\t * the connection so we start fresh and stay in sync.\n\t\t\t */\n\t\t\tif (pjob->ji_momhandle != -1) {\n\t\t\t\tstruct batch_request *prequest;\n\t\t\t\textern pbs_list_head task_list_event;\n\n\t\t\t\tptask = (struct work_task *) GET_NEXT(task_list_event);\n\t\t\t\twhile (ptask) {\n\t\t\t\t\tif (ptask->wt_type == WORK_Deferred_Reply && ptask->wt_event == pjob->ji_momhandle)\n\t\t\t\t\t\tbreak;\n\t\t\t\t\tptask = (struct work_task *) GET_NEXT(ptask->wt_linkevent);\n\t\t\t\t}\n\t\t\t\tif (ptask) {\n\t\t\t\t\tif ((prequest = ptask->wt_parm1) != NULL)\n\t\t\t\t\t\tfree_br(prequest);\n\t\t\t\t\tdelete_task(ptask);\n\t\t\t\t}\n\t\t\t\tif (pjob->ji_mom_prot == PROT_TCP)\n\t\t\t\t\tsvr_force_disconnect(pjob->ji_momhandle);\n\n\t\t\t\tpjob->ji_momhandle = -1;\n\t\t\t\tpjob->ji_mom_prot = PROT_INVALID;\n\t\t\t}\n\t\t\tif (get_job_substate(pjob) < JOB_SUBSTATE_RERUN)\n\t\t\t\teojproc = on_job_exit;\n\t\t\telse\n\t\t\t\teojproc = on_job_rerun;\n\t\t\tptask = set_task(WORK_Immed, 0, eojproc, (void *) pjob);\n\t\t\tappend_link(&pjob->ji_svrtask, &ptask->wt_linkobj, ptask);\n\t\t\treturn -1;\n\t\t}\n\t\t/*\n\t\t * State EXITING and substate TERM, this is the real obit\n\t\t * so fall through and start real end of job processing\n\t\t */\n\t}\n\n\tif (pruu->ru_hop < get_jattr_long(pjob, JOB_ATR_run_version)) {\n\t\t/*\n\t\t * Obit is for an older run version, likely a Mom coming back\n\t\t * alive after being down awhile and job was requeued and run\n\t\t * somewhere else.   
Just tell Mom to junk job\n\t\t */\n\t\tDBPRT((\"%s: job %s run count too low\\n\", __func__, pruu->ru_pjobid))\n\t\treturn 1;\n\t} else if (pjob->ji_qs.ji_svrflags & JOB_SVFLG_SubJob) {\n\n\t\t/*\n\t\t * Won't have a valid hop count in the job structure\n\t\t * look at where job is running and who is sending the obit\n\t\t */\n\n\t\tint ivndx;\n\t\tmominfo_t *psendmom;\n\t\tstruct pbsnode *sendvnp;\n\t\tchar *runningnode;\n\t\textern struct tree *streams;\n\n\t\tpsendmom = tfind2(stream, 0, &streams);\n\t\trunningnode = parse_servername(get_jattr_str(pjob, JOB_ATR_exec_vnode), NULL);\n\t\tif (psendmom && runningnode) {\n\t\t\tfor (ivndx = 0; ivndx < ((mom_svrinfo_t *) (psendmom->mi_data))->msr_numvnds; ++ivndx) {\n\t\t\t\tsendvnp = ((mom_svrinfo_t *) (psendmom->mi_data))->msr_children[ivndx];\n\t\t\t\tif (strcasecmp(runningnode, sendvnp->nd_name) == 0) {\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (ivndx == ((mom_svrinfo_t *) (psendmom->mi_data))->msr_numvnds) {\n\t\t\t\t/* not the same node, reject the obit */\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t}\n\t}\n\n\t/*\n\t * have hit a race condition where the send_job child's process\n\t * may not yet have been reaped.  
Update accounting for job start\n\t */\n\n\tif (check_job_substate(pjob, JOB_SUBSTATE_PRERUN) || check_job_substate(pjob, JOB_SUBSTATE_PROVISION)) {\n\t\tDBPRT((\"%s: job %s in prerun state.\\n\", __func__, pruu->ru_pjobid))\n\t\tcomplete_running(pjob);\n\t}\n\tif (pjob->ji_prunreq) {\n\t\treply_ack(pjob->ji_prunreq);\n\t\tpjob->ji_prunreq = NULL;\n\t}\n\n\t/* save exit state, update the resources used */\n\texitstatus = pruu->ru_status;\n\tpjob->ji_qs.ji_un.ji_exect.ji_exitstat = exitstatus;\n\n\t/* set the Exit_status job attribute */\n\tif (is_jattr_set(pjob, JOB_ATR_exit_status))\n\t\tlocal_exitstatus = get_jattr_long(pjob, JOB_ATR_exit_status);\n\n\tif ((local_exitstatus == JOB_EXEC_HOOK_RERUN || local_exitstatus == JOB_EXEC_HOOK_DELETE) &&\n\t    exitstatus != JOB_EXEC_FAILHOOK_RERUN && exitstatus != JOB_EXEC_FAILHOOK_DELETE)\n\t\texitstatus = local_exitstatus;\n\telse\n\t\tset_jattr_l_slim(pjob, JOB_ATR_exit_status, exitstatus, SET);\n\n\tpatlist = (svrattrl *) GET_NEXT(pruu->ru_attr);\n\n\t/* record usage attribute to job for history */\n\tdummy = 0;\n\tif (modify_job_attr(pjob, patlist, ATR_DFLAG_MGWR | ATR_DFLAG_SvWR, &dummy) != 0) {\n\t\tfor (num = 1; num < dummy; num++)\n\t\t\tpatlist = (struct svrattrl *) GET_NEXT(patlist->al_link);\n\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_NOTICE, pjob->ji_qs.ji_jobid,\n\t\t\t   \"unable to update attribute %s.%s in job_obit\", patlist->al_name, patlist->al_resc);\n\t}\n\n\t/* Allocate initial space for acctbuf/mailbuf.  Future space will be allocated by pbs_strcat(). 
*/\n\tacctbuf = malloc(RESC_USED_BUF_SIZE);\n\tmailbuf = malloc(RESC_USED_BUF_SIZE);\n\n\tif (acctbuf == NULL || mailbuf == NULL) {\n\t\tlog_err(errno, __func__, \"Failed to allocate memory\");\n\t\t/* Just in case one of the buffers got allocated */\n\t\tfree(acctbuf);\n\t\tacctbuf = NULL;\n\t\tfree(mailbuf);\n\t\tmailbuf = NULL;\n\t} else {\n\t\tacctbuf_size = RESC_USED_BUF_SIZE;\n\t\tmailbuf_size = RESC_USED_BUF_SIZE;\n\n\t\tsnprintf(acctbuf, acctbuf_size, msg_job_end_stat, pjob->ji_qs.ji_un.ji_exect.ji_exitstat);\n\t\tif (exitstatus < 10000)\n\t\t\tstrncpy(mailbuf, acctbuf, mailbuf_size);\n\t\telse\n\t\t\tsnprintf(mailbuf, mailbuf_size, msg_job_end_sig, exitstatus - 10000);\n\t\t/*\n\t\t * NOTE:\n\t\t * The following code for constructing resources used information is the same as account_jobend()\n\t\t * with the minor difference that to traverse patlist in this code\n\t\t * we have to use GET_NEXT(patlist->al_link) since it is part of batch request\n\t\t * and in account_jobend() we are using patlist->al_sister which is encoded\n\t\t * information in job struct.\n\t\t * This collects all resources_used information returned from the mom.\n\t\t */\n\t\tfor (; patlist; patlist = (svrattrl *) GET_NEXT(patlist->al_link)) {\n\t\t\tresource_def *tmpdef;\n\n\t\t\tif (strcmp(patlist->al_name, ATTR_used) != 0)\n\t\t\t\tcontinue;\n\t\t\ttmpdef = find_resc_def(svr_resc_def, patlist->al_resc);\n\t\t\tif (tmpdef == NULL)\n\t\t\t\tcontinue;\n\t\t\t/*\n\t\t\t * Copy all resources to the accounting buffer.\n\t\t\t * Copy all but invisible resources into the mail buffer.\n\t\t\t * The ATR_DFLAG_USRD flag will not be set on invisible resources.\n\t\t\t */\n\t\t\tif (concat_rescused_to_buffer(&acctbuf, &acctbuf_size, patlist, \" \", pjob) != 0)\n\t\t\t\tbreak;\n\t\t\tif (tmpdef->rs_flags & ATR_DFLAG_USRD) {\n\t\t\t\tif (concat_rescused_to_buffer(&mailbuf, &mailbuf_size, patlist, \"\\n\", pjob) != 0)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\n\t/* make sure ji_momhandle is -1 to force new 
connection to mom */\n\tpjob->ji_momhandle = -1;\n\tpjob->ji_mom_prot = PROT_INVALID;\n\tpjob->ji_retryok = 0; /* for retry if Mom down */\n\n\t/* clear suspended flag if it was set, also clear suspended-workstation busy flag if set */\n\n\tpjob->ji_qs.ji_svrflags &= ~(JOB_SVFLG_Suspend | JOB_SVFLG_Actsuspd);\n\n\t/* Was there a special exit status from MOM? */\n\tif (exitstatus < 0 && exitstatus != JOB_EXEC_CHKP) {\n\t\t/* negative exit status is special */\n\t\tswitch (exitstatus) {\n\t\t\tcase JOB_EXEC_FAILHOOK_DELETE:\n\t\t\t\t/* this is a reject */\n\t\t\t\tlog_event(PBSEVENT_ERROR | PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid, msg_hook_reject_deletejob);\n\t\t\t\tDBPRT((\"%s: MOM rejected job %s due to a hook.\\n\", __func__, pruu->ru_pjobid))\n\t\t\t\tsvr_mailowner(pjob, MAIL_ABORT, MAIL_FORCE, msg_hook_reject_deletejob);\n\t\t\t\talreadymailed = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase JOB_EXEC_HOOK_DELETE:\n\t\t\t\t/* more likely an accept with a hook delete option */\n\t\t\t\tlog_event(PBSEVENT_ADMIN | PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid, \"a hook requested for job to be deleted\");\n\t\t\t\tDBPRT((\"%s: a hook requested for job %s to be deleted.\\n\", __func__, pruu->ru_pjobid))\n\t\t\t\tsvr_mailowner(pjob, MAIL_ABORT, MAIL_FORCE, \"a hook requested for job to be deleted\");\n\t\t\t\talreadymailed = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase JOB_EXEC_FAIL1:\n\t\t\tdefault:\n\t\t\t\t/* MOM rejected job with fatal error, abort job */\n\t\t\t\tDBPRT((\"%s: MOM rejected job %s with fatal error.\\n\", __func__, pruu->ru_pjobid))\n\t\t\t\tsvr_mailowner(pjob, MAIL_ABORT, MAIL_FORCE, msg_momnoexec1);\n\t\t\t\talreadymailed = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase JOB_EXEC_FAIL2:\n\t\t\t\t/* MOM rejected job after files setup, abort job */\n\t\t\t\tDBPRT((\"%s: MOM rejected job %s after setup.\\n\", __func__, pruu->ru_pjobid))\n\t\t\t\tsvr_mailowner(pjob, MAIL_ABORT, MAIL_FORCE, msg_momnoexec2);\n\t\t\t\talreadymailed = 
1;\n\t\t\t\tbreak;\n\n\t\t\tcase JOB_EXEC_INITABT:\n\t\t\t\t/* MOM aborted job on her initialization */\n\t\t\t\tDBPRT((\"%s: MOM aborted job %s on init, no requeue.\\n\", __func__, pruu->ru_pjobid))\n\t\t\t\talreadymailed = setrerun(pjob);\n\t\t\t\tpjob->ji_qs.ji_svrflags |= JOB_SVFLG_HASRUN;\n\t\t\t\tbreak;\n\n\t\t\tcase JOB_EXEC_FAILUID:\n\t\t\t\t/* MOM aborted job because uid or gid was invalid */\n\t\t\t\tDBPRT((\"%s: MOM rejected job %s with invalid uid/gid.\\n\", __func__, pruu->ru_pjobid))\n\t\t\t\tsvr_mailowner(pjob, MAIL_ABORT, MAIL_FORCE, msg_baduser);\n\t\t\t\talreadymailed = 1;\n\t\t\t\t/* go to the retry case */\n\t\t\t\tgoto RetryJob;\n\n\t\t\tcase JOB_EXEC_FAIL_PASSWORD:\n\n\t\t\t\t/* put job on password hold */\n\t\t\t\tset_jattr_b_slim(pjob, JOB_ATR_hold, HOLD_bad_password, INCR);\n\n\t\t\t\tset_job_substate(pjob, JOB_SUBSTATE_HELD);\n\t\t\t\tsvr_evaljobstate(pjob, &newstate, &newsubst, 0);\n\t\t\t\tsvr_setjobstate(pjob, newstate, newsubst);\n\n\t\t\t\tmsg = pruu->ru_comment ? 
pruu->ru_comment : \"\";\n\t\t\t\tmailmsg = (char *) malloc(strlen(msg) + 1 + strlen(msg_bad_password) + 1);\n\t\t\t\tif (mailmsg) {\n\t\t\t\t\tsprintf(mailmsg, \"%s:%s\", msg, msg_bad_password);\n\t\t\t\t\tsvr_mailowner(pjob, MAIL_BEGIN, MAIL_FORCE, mailmsg);\n\t\t\t\t\tset_jattr_str_slim(pjob, JOB_ATR_Comment, mailmsg, NULL);\n\n\t\t\t\t\tlog_event(PBSEVENT_ERROR | PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t\t  pjob->ji_qs.ji_jobid, mailmsg);\n\t\t\t\t\tfree(mailmsg);\n\t\t\t\t} else {\n\t\t\t\t\tsvr_mailowner(pjob, MAIL_BEGIN, MAIL_FORCE, msg_bad_password);\n\t\t\t\t\tset_jattr_str_slim(pjob, JOB_ATR_Comment, msg_bad_password, NULL);\n\t\t\t\t}\n\n\t\t\t\t/* NO BREAK - FALL INTO THE RETRY CASES */\n\n\t\t\tcase JOB_EXEC_RETRY:\n\t\t\tcase JOB_EXEC_FAILHOOK_RERUN:\n\t\t\tcase JOB_EXEC_HOOK_RERUN:\n\t\t\tcase JOB_EXEC_HOOKERROR:\n\t\t\tcase JOB_EXEC_JOINJOB:\n\t\t\t\tif (exitstatus == JOB_EXEC_FAILHOOK_RERUN || exitstatus == JOB_EXEC_HOOKERROR) {\n\t\t\t\t\tlog_event(PBSEVENT_ERROR | PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid, msg_hook_reject_rerunjob);\n\t\t\t\t\tDBPRT((\"%s: MOM rejected job %s due to a hook.\\n\", __func__, pruu->ru_pjobid))\n\t\t\t\t} else if (exitstatus == JOB_EXEC_JOINJOB) {\n\t\t\t\t\tlog_event(PBSEVENT_ERROR | PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t\t  pjob->ji_qs.ji_jobid, \"Mom rejected job due to join job error\");\n\t\t\t\t\texitstatus = JOB_EXEC_RETRY;\n\t\t\t\t}\n\t\t\tRetryJob:\n\t\t\t\t/* MOM rejected job, but said retry it */\n\t\t\t\tDBPRT((\"%s: MOM rejected job %s but will retry.\\n\", __func__, pruu->ru_pjobid))\n\t\t\t\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_HASRUN) /* has run before, treat this as another rerun */\n\t\t\t\t\talreadymailed = setrerun(pjob);\n\t\t\t\telse /* have mom remove job files, not saving them, and requeue job */\n\t\t\t\t\tset_job_substate(pjob, JOB_SUBSTATE_RERUN2);\n\n\t\t\t\tcheck_failed_attempts(pjob);\n\t\t\t\tbreak;\n\n\t\t\tcase JOB_EXEC_BADRESRT:\n\t\t\t\t/* MOM could not restart job, setup for 
rerun */\n\t\t\t\tDBPRT((\"%s: MOM could not restart job %s, will rerun.\\n\", __func__, pruu->ru_pjobid))\n\t\t\t\talreadymailed = setrerun(pjob);\n\t\t\t\tpjob->ji_qs.ji_svrflags &= ~JOB_SVFLG_CHKPT;\n\t\t\t\tbreak;\n\n\t\t\tcase JOB_EXEC_INITRST:\n\t\t\t\t/*\n\t\t\t\t\t* Mom aborted job on Mom being restarted, job has been\n\t\t\t\t\t* checkpointed and can be \"restarted\" rather than rerun\n\t\t\t\t\t*/\n\t\t\tcase JOB_EXEC_QUERST:\n\t\t\t\t/*\n\t\t\t\t\t* Epilogue requested requeue of a checkpointed job\n\t\t\t\t\t* it can be restarted later from restart file\n\t\t\t\t\t*\n\t\t\t\t\t* In both cases, job has checkpoint/restart file,\n\t\t\t\t\t* requeue job and leave all information on execution\n\t\t\t\t\t* host for a later restart\n\t\t\t\t\t*/\n\t\t\t\tDBPRT((\"%s: MOM request requeue of job for restart.\\n\", __func__))\n\t\t\t\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_SubJob)\n\t\t\t\t\tgoto RetryJob;\n\n\t\t\t\trel_resc(pjob);\n\t\t\t\tpjob->ji_qs.ji_svrflags |= JOB_SVFLG_HASRUN | JOB_SVFLG_CHKPT;\n\n\t\t\t\tsvr_evaljobstate(pjob, &newstate, &newsubst, 1);\n\t\t\t\tsvr_setjobstate(pjob, newstate, newsubst);\n\t\t\t\tif (pjob->ji_mom_prot == PROT_TCP)\n\t\t\t\t\tsvr_disconnect(pjob->ji_momhandle);\n\n\t\t\t\tpjob->ji_momhandle = -1;\n\t\t\t\tpjob->ji_mom_prot = PROT_INVALID;\n\n\t\t\t\tfree(mailbuf);\n\t\t\t\tfree(acctbuf);\n\t\t\t\treturn 0;\n\n\t\t\tcase JOB_EXEC_INITRMG:\n\t\t\t\t/*\n\t\t\t\t\t* MOM abort job on init, job has migratable checkpoint\n\t\t\t\t\t* Must recover output and checkpoint file, do eoj\n\t\t\t\t\t*/\n\n\t\t\t\tDBPRT((\"%s: MOM aborted migratable job %s on init, will requeue.\\n\", __func__, pruu->ru_pjobid))\n\n\t\t\t\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_SubJob)\n\t\t\t\t\tgoto RetryJob;\n\n\t\t\t\talreadymailed = setrerun(pjob);\n\t\t\t\tpjob->ji_qs.ji_svrflags |= JOB_SVFLG_HASRUN | JOB_SVFLG_ChkptMig;\n\t\t\t\tbreak;\n\n\t\t\tcase JOB_EXEC_RERUN:\n\t\t\tcase JOB_EXEC_RERUN_SIS_FAIL:\n\t\t\t\tif (get_jattr_long(pjob, 
JOB_ATR_rerunable))\n\t\t\t\t\tset_job_substate(pjob, JOB_SUBSTATE_RERUN);\n\t\t\t\telse {\n\t\t\t\t\tset_job_substate(pjob, JOB_SUBSTATE_EXITING);\n\t\t\t\t\tsvr_mailowner(pjob, MAIL_ABORT, MAIL_NORMAL,\n\t\t\t\t\t\t      \"Non-rerunable job deleted on requeue\");\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase JOB_EXEC_FAIL_SECURITY:\n\t\t\t\t/* MOM rejected job with security breach fatal error, abort job */\n\t\t\t\tDBPRT((\"%s: MOM rejected job %s with security breach fatal error.\\n\", __func__, pruu->ru_pjobid))\n\t\t\t\tset_jattr_b_slim(pjob, JOB_ATR_hold, HOLD_s, INCR);\n\t\t\t\tset_jattr_str_slim(pjob, JOB_ATR_Comment,\n\t\t\t\t\t\t   \"job held due to possible security breach of job tmpdir, failed to start\", NULL);\n\t\t\t\trel_resc(pjob);\n\t\t\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_HELD, JOB_SUBSTATE_HELD);\n\t\t\t\tfree(mailbuf);\n\t\t\t\tfree(acctbuf);\n\t\t\t\treturn 0;\n\t\t\tcase JOB_EXEC_KILL_NCPUS_BURST:\n\t\t\t\t/* MOM killed job due to exceeding ncpus (burst), abort job */\n\t\t\t\tDBPRT((\"%s: MOM killed job %s due to exceeding ncpus (burst).\\n\", __func__, pruu->ru_pjobid))\n\t\t\t\tsvr_mailowner(pjob, MAIL_ABORT, MAIL_FORCE, msg_momkillncpusburst);\n\t\t\t\talreadymailed = 1;\n\t\t\t\tbreak;\n\t\t\tcase JOB_EXEC_KILL_NCPUS_SUM:\n\t\t\t\t/* MOM killed job due to exceeding ncpus (sum), abort job */\n\t\t\t\tDBPRT((\"%s: MOM killed job %s due to exceeding ncpus (sum).\\n\", __func__, pruu->ru_pjobid))\n\t\t\t\tsvr_mailowner(pjob, MAIL_ABORT, MAIL_FORCE, msg_momkillncpussum);\n\t\t\t\talreadymailed = 1;\n\t\t\t\tbreak;\n\t\t\tcase JOB_EXEC_KILL_VMEM:\n\t\t\t\t/* MOM killed job due to exceeding vmem, abort job */\n\t\t\t\tDBPRT((\"%s: MOM killed job %s due to exceeding vmem.\\n\", __func__, pruu->ru_pjobid))\n\t\t\t\tsvr_mailowner(pjob, MAIL_ABORT, MAIL_FORCE, msg_momkillvmem);\n\t\t\t\talreadymailed = 1;\n\t\t\t\tbreak;\n\t\t\tcase JOB_EXEC_KILL_MEM:\n\t\t\t\t/* MOM killed job due to exceeding mem, abort job */\n\t\t\t\tDBPRT((\"%s: MOM killed job %s 
due to exceeding mem.\\n\", __func__, pruu->ru_pjobid))\n\t\t\t\tsvr_mailowner(pjob, MAIL_ABORT, MAIL_FORCE, msg_momkillmem);\n\t\t\t\talreadymailed = 1;\n\t\t\t\tbreak;\n\t\t\tcase JOB_EXEC_KILL_CPUT:\n\t\t\t\t/* MOM killed job due to exceeding cput, abort job */\n\t\t\t\tDBPRT((\"%s: MOM killed job %s due to exceeding cput.\\n\", __func__, pruu->ru_pjobid))\n\t\t\t\tsvr_mailowner(pjob, MAIL_ABORT, MAIL_FORCE, msg_momkillcput);\n\t\t\t\talreadymailed = 1;\n\t\t\t\tbreak;\n\t\t\tcase JOB_EXEC_KILL_WALLTIME:\n\t\t\t\t/* MOM killed job due to exceeding walltime, abort job */\n\t\t\t\tDBPRT((\"%s: MOM killed job %s due to exceeding walltime.\\n\", __func__, pruu->ru_pjobid))\n\t\t\t\tsvr_mailowner(pjob, MAIL_ABORT, MAIL_FORCE, msg_momkillwalltime);\n\t\t\t\talreadymailed = 1;\n\t\t\t\tbreak;\n\t\t}\n\t}\n\n\t/* Send email if exiting (not rerun) */\n\n\tif ((exitstatus == JOB_EXEC_FAILHOOK_DELETE) || (exitstatus == JOB_EXEC_HOOK_DELETE) ||\n\t    (!check_job_substate(pjob, JOB_SUBSTATE_RERUN) && !check_job_substate(pjob, JOB_SUBSTATE_RERUN2))) {\n\t\tDBPRT((\"%s: Job %s is terminating and not rerun.\\n\", __func__, pjob->ji_qs.ji_jobid))\n\n\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_EXITING, JOB_SUBSTATE_EXITING);\n\t\tif (alreadymailed == 0 && mailbuf != NULL)\n\t\t\tsvr_mailowner(pjob, MAIL_END, MAIL_NORMAL, mailbuf);\n\t}\n\n\t/* can free this now since no need to use it */\n\tfree(mailbuf);\n\n\t/* save record accounting for later */\n\tfree(pjob->ji_acctrec);\n\tpjob->ji_acctrec = acctbuf;\n\n\t/* Now, what do we do with the job... 
*/\n\tif (exitstatus == JOB_EXEC_FAILHOOK_DELETE || exitstatus == JOB_EXEC_HOOK_DELETE ||\n\t    (!check_job_substate(pjob, JOB_SUBSTATE_RERUN) && !check_job_substate(pjob, JOB_SUBSTATE_RERUN2))) {\n\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_CHKPT) && ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_SubJob) == 0) && (pjob->ji_qs.ji_svrflags & JOB_SVFLG_HASHOLD)) {\n\n\t\t\t/* non-migratable checkpoint, leave there\n\t\t\t * and just requeue the job.\n\t\t\t */\n\n\t\t\trel_resc(pjob);\n\t\t\tpjob->ji_qs.ji_svrflags |= JOB_SVFLG_HASRUN;\n\t\t\tpjob->ji_qs.ji_svrflags &= ~JOB_SVFLG_HASHOLD;\n\t\t\tsvr_evaljobstate(pjob, &newstate, &newsubst, 1);\n\t\t\tsvr_setjobstate(pjob, newstate, newsubst);\n\t\t\tif (pjob->ji_mom_prot == PROT_TCP)\n\t\t\t\tsvr_disconnect(pjob->ji_momhandle);\n\t\t\tpjob->ji_momhandle = -1;\n\t\t\tpjob->ji_mom_prot = PROT_INVALID;\n\t\t\treturn 0;\n\t\t}\n\n\t\tcheck_block(pjob, \"\"); /* if block set, send word */\n\t\tptask = set_task(WORK_Immed, 0, on_job_exit, (void *) pjob);\n\t\tappend_link(&pjob->ji_svrtask, &ptask->wt_linkobj, ptask);\n\n\t\t/* \"on_job_exit()\" will be dispatched out of the main loop */\n\n\t} else {\n\t\t/*\n\t\t * Rerunning job ...\n\t\t * If not checkpointed, clear \"resources_used\"\n\t\t * Requeue job\n\t\t */\n\t\tDBPRT((\"%s: Rerunning job %s\\n\", __func__, pjob->ji_qs.ji_jobid))\n\t\tif ((pjob->ji_qs.ji_svrflags & (JOB_SVFLG_CHKPT | JOB_SVFLG_ChkptMig)) == 0)\n\t\t\tfree_jattr(pjob, JOB_ATR_resc_used);\n\n\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_EXITING, get_job_substate(pjob));\n\t\tptask = set_task(WORK_Immed, 0, on_job_rerun, (void *) pjob);\n\t\tappend_link(&pjob->ji_svrtask, &ptask->wt_linkobj, ptask);\n\n\t\t/* \"on_job_rerun()\" will be dispatched out of the main loop */\n\t}\n\n\tDBPRT((\"%s: Returning from end of function.\\n\", __func__))\n\n\treturn 0;\n}\n"
  },
  {
    "path": "src/server/req_locate.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\treq_locate.c\n * @brief\n * \tFunctions relating to the Locate Job Batch Request.\n *\n * Included functions are:\n *\treq_locatejob()\n *\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/types.h>\n#include \"libpbs.h\"\n#include <signal.h>\n#include <string.h>\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"server.h\"\n#include \"credential.h\"\n#include \"batch_request.h\"\n#include \"job.h\"\n#include \"work_task.h\"\n#include \"tracking.h\"\n#include \"pbs_error.h\"\n#include \"log.h\"\n\n/* Global Data Items: */\n\nextern struct server server;\nextern char server_name[];\n\n/* External functions */\nextern int svr_chk_histjob(job *);\nextern int is_job_array(char *);\n\n/**\n * @brief\n * \t\treq_locatejob - service the Locate Job Request\n *\n *\t\tThis request attempts to locate a job.\n *\n * @param[in]\tpreq\t- Job Request\n */\n\nvoid\nreq_locatejob(struct batch_request *preq)\n{\n\tchar *at;\n\tint i;\n\tjob *pjob;\n\tchar *location = NULL;\n\n\tif ((at = strchr(preq->rq_ind.rq_locate, (int) '@')) != NULL)\n\t\t*at = '\\0'; /* strip off @server_name */\n\tpjob = find_job(preq->rq_ind.rq_locate);\n\n\t/*\n\t * Reject request for history jobs:\n\t *\ti) jobs with state FINISHED\n\t *\tii) jobs with state MOVED and substate 
FINISHED\n\t */\n\tif (pjob && svr_chk_histjob(pjob) == PBSE_HISTJOBID) {\n\t\treq_reject(PBSE_HISTJOBID, 0, preq);\n\t\treturn;\n\t}\n\n\t/*\n\t * return the location if job is not history (i.e. state is not\n\t * JOB_STATE_LTR_MOVED) else search in tracking table.\n\t */\n\tif (pjob && (!check_job_state(pjob, JOB_STATE_LTR_MOVED)))\n\t\tlocation = pbs_server_name;\n\telse {\n\t\tint job_array_ret;\n\t\tjob_array_ret = is_job_array(preq->rq_ind.rq_locate);\n\t\tif ((job_array_ret == IS_ARRAY_Single) || (job_array_ret == IS_ARRAY_Range)) {\n\t\t\tint i;\n\t\t\tchar idbuf[PBS_MAXSVRJOBID + 1] = {'\\0'};\n\t\t\tchar *pc;\n\t\t\tfor (i = 0; i < PBS_MAXSVRJOBID; i++) {\n\t\t\t\tidbuf[i] = *(preq->rq_ind.rq_locate + i);\n\t\t\t\tif (idbuf[i] == '[')\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\tidbuf[++i] = ']';\n\t\t\tidbuf[++i] = '\\0';\n\t\t\tpc = strchr(preq->rq_ind.rq_locate, (int) '.');\n\t\t\tif (pc)\n\t\t\t\tstrcat(idbuf, pc);\n\t\t\tstrncpy(preq->rq_ind.rq_locate, idbuf, sizeof(preq->rq_ind.rq_locate));\n\t\t}\n\t\tfor (i = 0; i < server.sv_tracksize; i++) {\n\t\t\tif ((server.sv_track + i)->tk_mtime &&\n\t\t\t    !strcmp((server.sv_track + i)->tk_jobid, preq->rq_ind.rq_locate)) {\n\t\t\t\tlocation = (server.sv_track + i)->tk_location;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\tif (location) {\n\t\tpreq->rq_reply.brp_code = 0;\n\t\tpreq->rq_reply.brp_auxcode = 0;\n\t\tpreq->rq_reply.brp_choice = BATCH_REPLY_CHOICE_Locate;\n\t\t(void) strcpy(preq->rq_reply.brp_un.brp_locate, location);\n\t\treply_send(preq);\n\t} else\n\t\treq_reject(PBSE_UNKJOBID, 0, preq);\n\treturn;\n}\n"
  },
  {
    "path": "src/server/req_manager.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n *\n * @brief\n *\tFunctions relating to the Manager Batch Request (qmgr)\n *\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <sys/types.h>\n#include <sys/socket.h>\n\n#include <arpa/inet.h>\n#include <ctype.h>\n#include <memory.h>\n#include <string.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <netdb.h>\n#include <errno.h>\n\n#include \"libpbs.h\"\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"work_task.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"pbs_license.h\"\n#include \"server.h\"\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"queue.h\"\n#include \"credential.h\"\n#include \"batch_request.h\"\n#include \"net_connect.h\"\n#include \"pbs_error.h\"\n#include \"log.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"pbs_ifl.h\"\n#include \"batch_request.h\"\n#include \"hook.h\"\n#include \"hook_func.h\"\n#include \"pbs_entlim.h\"\n#include \"provision.h\"\n#include \"pbs_db.h\"\n#include \"assert.h\"\n#include \"pbs_idx.h\"\n#include \"sched_cmds.h\"\n#include \"pbs_sched.h\"\n#include \"pbs_share.h\"\n\n#define PERM_MANAGER (ATR_DFLAG_MGWR | ATR_DFLAG_MGRD)\n#define PERM_OPorMGR (ATR_DFLAG_MGWR | ATR_DFLAG_MGRD | ATR_DFLAG_OPRD | ATR_DFLAG_OPWR)\n\npntPBS_IP_LIST pbs_iplist = NULL;\n\nvoid 
*node_idx = NULL;\nstatic void *hostaddr_idx = NULL;\n\n/* Global Data Items: */\n\nextern void unset_license_location(void);\nextern void unset_license_min(void);\nextern void unset_license_max(void);\nextern void unset_license_linger(void);\nextern void unset_job_history_enable(void);\nextern void unset_job_history_duration(void);\nextern void unset_max_job_sequence_id(void);\nextern void force_qsub_daemons_update(void);\nextern void unset_node_fail_requeue(void);\nextern void unset_resend_term_delay(void);\nextern pbs_sched *sched_alloc(char *sched_name);\nextern void sched_free(pbs_sched *psched);\nextern int sched_delete(pbs_sched *psched);\n\nextern struct server server;\nextern pbs_list_head svr_queues;\nextern attribute_def que_attr_def[];\nextern attribute_def svr_attr_def[];\nextern char *msg_attrtype;\nextern char *msg_daemonname;\nextern char *msg_manager;\nextern char *msg_man_cre;\nextern char *msg_man_del;\nextern char *msg_man_set;\nextern char *msg_man_uns;\nextern char *msg_noattr;\nextern unsigned int pbs_mom_port;\nextern char *path_hooks;\nextern int max_concurrent_prov;\nextern char *msg_cannot_set_route_que;\nextern int check_req_aoe_available(struct pbsnode *, char *);\nint resize_prov_table(int);\n\n/* private data */\n\nstatic char *all_quename = \"_All_\";\nstatic char *all_nodes = \"_All_\";\nenum res_op_flag {\n\tINDIRECT_RES_UNLINK,\n\tINDIRECT_RES_CHECK,\n};\n\nextern time_t time_now;\nextern void *svr_db_conn;\nstruct work_task *rescdef_wt_g = NULL;\n\n/*\n * This structure used as part of the index tree\n * to do a faster lookup of hostnames.\n * It is stored against pname in make_host_addresses_list()\n */\nstruct pul_store {\n\tu_long *pul; /* list of ipaddresses */\n\tint len;     /* length */\n};\n\n/**\n * @brief\n * \t\tis_local_root: returns TRUE if <user>@<host> corresponds to the local\n *  \troot (or Admin-privilege account) on the local host.\n *\n * @param[in]\tuser\t- user name\n * @param[in]\thost\t- host name\n *\n * 
@return\tBoolean\n * @retval\tTRUE\t- User has Admin-privilege\n * @retval\tFALSE\t- No Admin-privilege\n */\nint\nis_local_root(char *user, char *host)\n{\n\t/* Similar method used in svr_get_privilege() */\n\tif (strcmp(user, PBS_DEFAULT_ADMIN) == 0) {\n\t\tint is_local = 0;\n\t\tchar myhostname[PBS_MAXHOSTNAME + 1];\n\t\t/* First try without DNS lookup. */\n\t\tif (strcasecmp(host, server_host) == 0) {\n\t\t\tis_local = 1;\n\t\t} else if (strcasecmp(host, LOCALHOST_SHORTNAME) == 0) {\n\t\t\tis_local = 1;\n\t\t} else if (strcasecmp(host, LOCALHOST_FULLNAME) == 0) {\n\t\t\tis_local = 1;\n\t\t} else {\n\t\t\tif (gethostname(myhostname, (sizeof(myhostname) - 1)) == -1) {\n\t\t\t\tmyhostname[0] = '\\0';\n\t\t\t}\n\t\t\tif (strcasecmp(host, myhostname) == 0) {\n\t\t\t\tis_local = 1;\n\t\t\t}\n\t\t}\n\t\tif (is_local == 0) {\n\t\t\t/* Now try with DNS lookup. */\n\t\t\tif (is_same_host(host, server_host)) {\n\t\t\t\tis_local = 1;\n\t\t\t} else if (is_same_host(host, myhostname)) {\n\t\t\t\tis_local = 1;\n\t\t\t}\n\t\t}\n\t\tif (is_local != 0)\n\t\t\treturn (TRUE);\n\t}\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\tchar *privil_auth_user = pbs_conf.pbs_privileged_auth_user ? pbs_conf.pbs_privileged_auth_user : NULL;\n\tchar uh[PBS_MAXUSER + PBS_MAXHOSTNAME + 2];\n\tif (privil_auth_user &&\n\t    is_string_in_arr(pbs_conf.supported_auth_methods, AUTH_GSS_NAME)) {\n\t\tstrcpy(uh, user);\n\t\tstrcat(uh, \"@\");\n\t\tstrcat(uh, host);\n\n\t\tif (strcmp(uh, privil_auth_user) == 0) {\n\t\t\treturn (TRUE);\n\t\t}\n\t}\n#endif\n\treturn (FALSE);\n}\n\n/**\n * @brief\n * \t\twarnings_update - adds node pointer to the warnings array if\n * \t\tclient needs a warning about this node.\n *\n * \t\tThe first argument, a \"warning code\", determines the processing that\n * \t\toccurs.  
It might be simply some initialization, or it might be the\n * \t\tactual test for a \"warning required\" situation.\n *\n * @param[in]\twcode\t- \"warning code\", determines the processing that occurs.\n * @param[out]\twnodes\t- warnings array\n * @param[in,out]\twidx\t- index inside warnings array where the node is getting added\n * @param[in]\tnp\t- node pointer which gets added into warnings array.\n *\n * @return\tNone\n *\n * @par MT-safe: No\n */\nvoid\nwarnings_update(int wcode, pbsnode **wnodes, int *widx, pbsnode *np)\n{\n\tresource *resc;\n\tstruct pbssubn *psub;\n\n\tstatic int ngrp_had = 0;\n\tstatic char *rname = NULL;\n\tstatic resource_def *rscdef = NULL;\n\tint warning_ok = 0;\n\n\tswitch (wcode) {\n\t\tcase WARN_ngrp_init:\n\t\tcase WARN_ngrp:\n\t\tcase WARN_ngrp_ck:\n\t\t\tif (is_sattr_set(SVR_ATR_NodeGroupKey))\n\t\t\t\twarning_ok = 1;\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\tbreak;\n\t}\n\n\tif (warning_ok == 1 && wcode == WARN_ngrp_init) {\n\t\t/*\n\t\t * initialize some static data\n\t\t */\n\n\t\trname = get_sattr_str(SVR_ATR_NodeGroupKey);\n\t\tif ((rname != NULL) && (*rname != '\\0'))\n\t\t\trscdef = find_resc_def(svr_resc_def, rname);\n\t\treturn;\n\n\t} else if (warning_ok == 1 && wcode == WARN_ngrp && wnodes != NULL) {\n\n\t\tif (rscdef != NULL) {\n\t\t\tresc = find_resc_entry(get_nattr(np, ND_ATR_ResourceAvail), rscdef);\n\t\t\tif (resc != NULL) {\n\t\t\t\tif (resc->rs_value.at_flags & ATR_VFLAG_MODIFY) {\n\t\t\t\t\tif (np->nd_resvp) {\n\t\t\t\t\t\twnodes[*widx] = np;\n\t\t\t\t\t\t*widx += 1;\n\t\t\t\t\t}\n\t\t\t\t\tfor (psub = np->nd_psn; psub != 0; psub = psub->next) {\n\t\t\t\t\t\tif (psub->jobs) {\n\t\t\t\t\t\t\twnodes[*widx] = np;\n\t\t\t\t\t\t\t*widx += 1;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else if (resc == NULL && ngrp_had) {\n\n\t\t\t\tif (np->nd_resvp) {\n\t\t\t\t\twnodes[*widx] = np;\n\t\t\t\t\t*widx += 1;\n\t\t\t\t}\n\t\t\t\tfor (psub = np->nd_psn; psub != 0; psub = psub->next) {\n\t\t\t\t\tif (psub->jobs) 
{\n\t\t\t\t\t\twnodes[*widx] = np;\n\t\t\t\t\t\t*widx += 1;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t} else if (warning_ok == 1 && wcode == WARN_ngrp_ck) {\n\t\tif (rscdef != NULL) {\n\t\t\tresc = find_resc_entry(get_nattr(np, ND_ATR_ResourceAvail), rscdef);\n\t\t\tif (resc != NULL)\n\t\t\t\tngrp_had = 1;\n\t\t\telse\n\t\t\t\tngrp_had = 0;\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\twarn_msg_build - assembles the appropriate warning message\n *\n * @param[in]\twcode\t- \"warning code\", determines the message header used.\n * @param[in]\twnodes\t- warnings array\n * @param[in]\twidx\t- number of entries in the warnings array\n *\n * @return\tpointer to assembled message\n * @retval\tNULL\t- if the message cannot be built\n *\n * @note\n * Calling party is RESPONSIBLE for freeing dynamic memory that\n * is acquired by this function.\n */\n\nchar *\nwarn_msg_build(int wcode, pbsnode **wnodes, int widx)\n{\n\tchar whead[] = \"WARNING: modified grouping resource on node(s) with reservation(s)/job(s) - \";\n\tchar nohead[] = \"\";\n\tchar *wmsg;\n\tchar *phead;\n\tint i, len;\n\n\tif (widx == 0)\n\t\treturn NULL;\n\n\tif (wcode == WARN_ngrp)\n\t\tphead = whead;\n\telse\n\t\tphead = nohead;\n\n\tfor (len = strlen(phead), i = 0; i < widx; ++i)\n\t\tlen += strlen(wnodes[i]->nd_name) + 2; /* 2: \", \" */\n\n\tif ((wmsg = malloc(len + 1)) == NULL)\n\t\treturn NULL;\n\n\tstrcpy(wmsg, phead);\n\tfor (i = 0; i < widx; ++i) {\n\t\tstrcat(wmsg, wnodes[i]->nd_name);\n\t\tstrcat(wmsg, \", \");\n\t}\n\twmsg[strlen(wmsg) - 2] = '\\0'; /* overwrite trailing \", \" */\n\treturn (wmsg);\n}\n\n/**\n * @brief\n * \t\tcheck_que_attr - check if attributes in request are consistent with\n *\t\tthe current queue type.  
This is called when creating or setting\n *\t\tthe attributes of a queue.\n *\n * @param[in]\tpque\t- current queue\n *\n * @return\tchar *\n * @retval\tNULL\t- all attributes are consistent with the queue type\n * @retval\t!NULL\t- name of the conflicting attribute\n */\n\nstatic char *\ncheck_que_attr(pbs_queue *pque)\n{\n\tint i;\n\tint type;\n\n\ttype = pque->qu_qs.qu_type; /* current type of queue */\n\tfor (i = 0; i < (int) QA_ATR_LAST; ++i) {\n\t\tif (is_qattr_set(pque, i)) {\n\t\t\tif (que_attr_def[i].at_parent == PARENT_TYPE_QUE_ALL) {\n\t\t\t\tcontinue;\n\t\t\t} else if (que_attr_def[i].at_parent == PARENT_TYPE_QUE_EXC) {\n\t\t\t\tif (type == QTYPE_Unset)\n\t\t\t\t\ttype = QTYPE_Execution;\n\t\t\t\telse if (type != QTYPE_Execution)\n\t\t\t\t\treturn (que_attr_def[i].at_name);\n\n\t\t\t} else if (que_attr_def[i].at_parent == PARENT_TYPE_QUE_RTE) {\n\t\t\t\tif (type == QTYPE_Unset)\n\t\t\t\t\ttype = QTYPE_RoutePush;\n\t\t\t\telse if (type != QTYPE_RoutePush)\n\t\t\t\t\treturn (que_attr_def[i].at_name);\n\t\t\t}\n\t\t}\n\t}\n\n\treturn NULL; /* all attributes are ok */\n}\n/**\n * @brief\n * \t\tSet resource limit on resources. Resource limits cannot be set\n * \t\ton min_walltime and max_walltime; attempting to do so returns\n * \t\tan error.\n *\n * @param[in]\told\t- existing attribute list\n * @param[in]\tnew\t- new attribute list\n * @param[in]\top\t- operation e.g. 
\"SET\"\n *\n * @return\tsuccess/failure\n * @retval\t=0\t- OK\n * @retval\t>0\t- error.\n *\n */\n\nint\nset_resources_min_max(attribute *old, attribute *new, enum batch_op op)\n{\n\tif (op == SET) {\n\t\tresource_def *resdef = NULL;\n\t\tresource *pres = NULL;\n\t\tresdef = &svr_resc_def[RESC_MIN_WALLTIME];\n\t\tpres = find_resc_entry(new, resdef);\n\t\tif (pres != NULL)\n\t\t\treturn PBSE_NOLIMIT_RESOURCE;\n\t\tresdef = &svr_resc_def[RESC_MAX_WALLTIME];\n\t\tpres = find_resc_entry(new, resdef);\n\t\tif (pres != NULL)\n\t\t\treturn PBSE_NOLIMIT_RESOURCE;\n\t}\n\treturn (set_resc(old, new, op));\n}\n/**\n * @brief\n * \t\tcheck_que_enable - check for an attempt to enable an incompletely\n *\t\tdefined queue.\n *\t\tThis is the at_action() routine for QA_ATR_Enabled\n *\n * @param[in]\tpattr\t- attribute structure\n * @param[in]\tpque\t- queue\n * @param[in]\tmode\t- not used here\n *\n * @return\terror code\n * @retval\t0\t- success\n * @retval\t!=0\t- failure\n */\n\nint\ncheck_que_enable(attribute *pattr, void *pque, int mode)\n{\n\tif (pattr->at_val.at_long != 0) {\n\n\t\t/*\n\t\t * admin attempting to enable queue,\n\t\t * is it completely defined\n\t\t */\n\n\t\tif (((pbs_queue *) pque)->qu_qs.qu_type == QTYPE_Unset)\n\t\t\treturn (PBSE_QUENOEN);\n\t\telse if (((pbs_queue *) pque)->qu_qs.qu_type == QTYPE_RoutePush) {\n\t\t\tif (!is_qattr_set((pbs_queue *) pque, QR_ATR_RouteDestin) || (get_qattr_arst((pbs_queue *) pque, QR_ATR_RouteDestin))->as_usedptr == 0)\n\t\t\t\treturn (PBSE_QUENOEN);\n\t\t}\n\t}\n\treturn (0); /* ok to enable queue */\n}\n\n/**\n * @brief\n * \t\tCheck the requested value of the queue type attribute\n *\t\tand set qu_type accordingly if there are no conflicts with\n *\t\troute-only or execution-only attributes.\n *\t\tThis is the at_action() routine for QA_ATR_QType\n *\n * @param[in]\tpattr - pointer to the attribute being set\n * @param[in]\tpque  - pointer to parent object (queue)\n * @param[in]\tmode  - mode of operation: set, recovery, ... 
unused here\n *\n * @return\terror code\n * @retval\t0\t- success\n * @retval\t!=0\t- error\n */\n\nint\nset_queue_type(attribute *pattr, void *pque, int mode)\n{\n\tint i;\n\tchar *pca;\n\tchar *pcv;\n\tint spectype;\n\tstatic struct {\n\t\tint type;\n\t\tchar *name;\n\t} qt[2] = {\n\t\t{QTYPE_Execution, \"Execution\"},\n\t\t{QTYPE_RoutePush, \"Route\"}};\n\n\tif (!is_attr_set(pattr))\n\t\t/* better be set or we shouldn't be here */\n\t\treturn (PBSE_BADATVAL);\n\n\t/* does the requested value match a legal value? */\n\n\tfor (i = 0; i < 2; i++) {\n\t\tspectype = qt[i].type;\n\t\tpca = pattr->at_val.at_str;\n\t\tpcv = qt[i].name;\n\t\tif (*pca == '\\0')\n\t\t\treturn (PBSE_BADATVAL);\n\n\t\twhile (*pca) {\n\t\t\tif (toupper((int) *pca++) != toupper((int) *pcv++)) {\n\t\t\t\tspectype = -1; /* no match */\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\n\t\tif (spectype != -1) { /* set up the attribute */\n\n\t\t\t/* If not an Execution queue, cannot */\n\t\t\t/* have nodes allocated to it        */\n\t\t\tif ((spectype != QTYPE_Execution) &&\n\t\t\t    is_qattr_set((pbs_queue *) pque, QE_ATR_HasNodes) &&\n\t\t\t    get_qattr_long((pbs_queue *) pque, QE_ATR_HasNodes) != 0) {\n\t\t\t\treturn (PBSE_ATTRTYPE);\n\t\t\t} else {\n\t\t\t\tif (is_qattr_set((pbs_queue *) pque, QA_ATR_partition) &&\n\t\t\t\t    (spectype == QTYPE_RoutePush)) {\n\t\t\t\t\treturn PBSE_CANNOT_SET_ROUTE_QUE;\n\t\t\t\t}\n\t\t\t}\n\t\t\t((pbs_queue *) pque)->qu_qs.qu_type = spectype;\n\t\t\t(void) free(pattr->at_val.at_str);\n\t\t\tpattr->at_val.at_str = malloc(strlen(qt[i].name) + 1);\n\t\t\tif (pattr->at_val.at_str == NULL)\n\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t(void) strcpy(pattr->at_val.at_str, qt[i].name);\n\t\t\tpattr->at_flags |= ATR_MOD_MCACHE;\n\t\t\treturn (0);\n\t\t}\n\t}\n\treturn (PBSE_BADATVAL);\n}\n\n/**\n * @brief\n * \t\tmgr_log_attr - log the change of an attribute\n *\n * @param[in]\tmsg\t- log message\n * @param[in]\tplist\t- svrattrl list header\n * @param[in]\tlogclass\t- see log.h\n * 
@param[in]\tobjname\t- object being modified\n * @param[in]\thookname\t- for adding 'by <hookname>' msg\n */\n\nvoid\nmgr_log_attr(char *msg, struct svrattrl *plist, int logclass, char *objname, char *hookname)\n{\n\tchar *pstr;\n\n\twhile (plist) {\n\t\t(void) strcpy(log_buffer, msg);\n\t\t(void) strcat(log_buffer, plist->al_name);\n\t\tif (plist->al_rescln) {\n\t\t\t(void) strcat(log_buffer, \".\");\n\t\t\t(void) strcat(log_buffer, plist->al_resc);\n\t\t}\n\t\tif (plist->al_op == INCR)\n\t\t\tpstr = \" + \";\n\t\telse if (plist->al_op == DECR)\n\t\t\tpstr = \" - \";\n\t\telse\n\t\t\tpstr = \" = \";\n\t\t(void) strcat(log_buffer, pstr);\n\t\tif (plist->al_valln)\n\t\t\t(void) strncat(log_buffer, plist->al_value,\n\t\t\t\t       LOG_BUF_SIZE - strlen(log_buffer) - 1);\n\n\t\tif (hookname != NULL) {\n\t\t\t(void) strncat(log_buffer, \" by \",\n\t\t\t\t       LOG_BUF_SIZE - strlen(log_buffer) - 1);\n\t\t\t(void) strncat(log_buffer, hookname,\n\t\t\t\t       LOG_BUF_SIZE - strlen(log_buffer) - 1);\n\t\t}\n\n\t\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\t\tlog_event(PBSEVENT_ADMIN, logclass, LOG_INFO, objname, log_buffer);\n\t\tplist = (struct svrattrl *) GET_NEXT(plist->al_link);\n\t}\n}\n\n/**\n * @brief\n * \t\tunset_indirect - unset the indirect target for a node resources_available\n *\n * @param[in]\tpresc\t- pointer to resource structure\n * @param[in]\tpidx    - search index of the attribute array\n * @param[in]\tpdef\t- attribute definition array\n * @param[in]\tname\t- name of the attribute being unset\n * @param[in]\tpobj\t- Vnode structure\n * @param[in]\tobjtype\t- parent object type; only PARENT_TYPE_NODE is handled\n */\nstatic void\nunset_indirect(resource *presc, void *pidx, attribute_def *pdef, char *name, void *pobj, int objtype)\n{\n\tint i;\n\tstruct pbsnode *pnode;\n\tresource_def *prdef;\n\n\tif (objtype != PARENT_TYPE_NODE)\n\t\treturn;\n\n\tpnode = (struct pbsnode *) pobj;\n\t(void) fix_indirect_resc_targets(pnode, presc, ND_ATR_ResourceAvail, 0);\n\tfree_str(&presc->rs_value);\n\n\t/* Now,  
if and only if the above attribute was \"resources_available\" */\n\t/* find and unset indirectness and clear for \"resources_assigned\"    */\n\n\tif (strcasecmp(name, ATTR_rescavail) != 0)\n\t\treturn;\n\n\ti = find_attr(pidx, pdef, ATTR_rescassn);\n\tif (i < 0)\n\t\treturn;\n\n\tprdef = presc->rs_defin;\n\tpresc = find_resc_entry(get_nattr(pnode, i), prdef);\n\tif (presc) {\n\t\tif (presc->rs_value.at_flags & ATR_VFLAG_INDIRECT) {\n\t\t\t(void) fix_indirect_resc_targets(pnode, presc, ND_ATR_ResourceAssn, 0);\n\t\t\tfree_str(&presc->rs_value);\n\t\t}\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \t\tSet attributes for manager function.\n *\n * @param[in]\tpattr\t- Address of the parent object's attribute array\n * @param[in]\tpidx\t- Search index for the attribute def array\n * @param[in]\tpdef\t- Address of attribute definition array\n * @param[in]\tlimit\t- Last attribute in the list\n * @param[in]\tplist\t- List of attributes to set\n * @param[in]\tprivil\t- Permission list\n * @param[out]\tbad \t- A bad attributes index is returned in this param\n *\t\t       \t\t\t\tThis actually returns the bad index + 1.\n * @param[in]   parent\t- Pointer to the parent object\n * @param[in]   mode \t- operation mode.\n * @param[in]   allow_unkresc\t- set to TRUE to allow unknown resource values;\n * \t\t\t\t\t\t\t\t\totherwise, FALSE.\n *\n * @return\tError code\n * @retval\tPBSE_NONE  - Success\n * @retval\t! 
PBSE_NONE - Failure\n *\n * @note\n *\t\tThe set operation is performed as an atomic operation: all specified\n *\t\tattributes must be successfully set, or none are modified.\n */\n\nstatic int\nmgr_set_attr2(attribute *pattr, void *pidx, attribute_def *pdef, int limit, svrattrl *plist, int privil, int *bad, void *parent, int mode, int allow_unkresc)\n{\n\tint index;\n\tattribute *new;\n\tattribute *pre_copy;\n\tattribute *pnew;\n\tattribute *pold;\n\tattribute *attr_save;\n\tint rc;\n\tresource *presc;\n\tresource *oldpresc;\n\n\tif (plist == NULL)\n\t\treturn (PBSE_NONE);\n\n\t/*\n\t * We have multiple attribute lists in play here.  pre_copy is used\n\t * to copy the attributes into pattr prior to calling the action functions.\n\t * This means the object passed to the action function is fully up to date.\n\t * new is the attribute list which the action functions are called with.  new\n\t * may be modified by the action functions, so we need to copy these to pattr\n\t * a second time.  Lastly we have attr_save.  It is a copy of the object's\n\t * attributes prior to any change.  If any action function fails, we copy\n\t * attr_save back to the real attributes before leaving the function.\n\t */\n\n\tnew = (attribute *) calloc((unsigned int) limit, sizeof(attribute));\n\tif (new == NULL)\n\t\treturn (PBSE_SYSTEM);\n\n\t/* Below says if 'allow_unkresc' is TRUE, then set 'unkn' param\n\t * of attr_atomic_set() to '1'; otherwise, set it to '-1' meaning\n\t * not to allow unknown resource\n\t */\n\tif ((rc = attr_atomic_set(plist, pattr, new, pidx, pdef, limit, (allow_unkresc ? 
1 : -1), privil, bad)) != 0) {\n\t\tattr_atomic_kill(new, pdef, limit);\n\t\treturn (rc);\n\t}\n\n\tpre_copy = (attribute *) calloc((unsigned int) limit, sizeof(attribute));\n\tif (pre_copy == NULL) {\n\t\tattr_atomic_kill(new, pdef, limit);\n\t\treturn (PBSE_SYSTEM);\n\t}\n\n\tattr_atomic_copy(pre_copy, new, pdef, limit);\n\n\tattr_save = calloc((unsigned int) limit, sizeof(attribute));\n\tif (attr_save == NULL) {\n\t\tattr_atomic_kill(new, pdef, limit);\n\t\tattr_atomic_kill(pre_copy, pdef, limit);\n\t\treturn (PBSE_SYSTEM);\n\t}\n\n\tattr_atomic_copy(attr_save, pattr, pdef, limit);\n\n\tfor (index = 0; index < limit; index++) {\n\t\tpnew = pre_copy + index;\n\t\tpold = pattr + index;\n\t\tif (pnew->at_flags & ATR_VFLAG_MODIFY) {\n\t\t\t/* Special test, aka kludge, for entity-limits, make sure\n\t\t\t * not accepting an entity without an actual limit; i.e.\n\t\t\t * [u:user] instead of [u:user=limit].  The [u:user] form\n\t\t\t * is allowed in the \"unset\" attribute function in which\n\t\t\t * case the entry isn't present when it gets here\n\t\t\t */\n\n\t\t\tif ((pdef + index)->at_type == ATR_TYPE_ENTITY) {\n\t\t\t\tsvr_entlim_leaf_t *pleaf;\n\t\t\t\tvoid *unused = NULL;\n\n\t\t\t\twhile ((pleaf = entlim_get_next((new + index)->at_val.at_enty.ae_tree, &unused)) != NULL) {\n\n\t\t\t\t\t/* entry that is Modified, and not Set meant it had a null value - illegal */\n\t\t\t\t\tif ((pleaf->slf_limit.at_flags & (ATR_VFLAG_SET | ATR_VFLAG_MODIFY)) == ATR_VFLAG_MODIFY) {\n\t\t\t\t\t\t*bad = index + 1;\n\t\t\t\t\t\tattr_atomic_kill(new, pdef, limit);\n\t\t\t\t\t\tattr_atomic_kill(pre_copy, pdef, limit);\n\t\t\t\t\t\tattr_atomic_kill(attr_save, pdef, limit);\n\t\t\t\t\t\treturn (PBSE_BADATVAL);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/* now replace the old values with any modified new values */\n\n\t\t\t(pdef + index)->at_free(pold);\n\t\t\tpold->at_flags = pnew->at_flags; /* includes MODIFY */\n\n\t\t\tif (pold->at_type == ATR_TYPE_LIST) 
{\n\t\t\t\tlist_move(&pnew->at_val.at_list, &pold->at_val.at_list);\n\t\t\t} else if (pold->at_type == ATR_TYPE_RESC) {\n\t\t\t\tset_resc(pold, pnew, INCR);\n\t\t\t\t/* clear ATR_VFLAG_DEFLT on modified values */\n\t\t\t\tfor (presc = GET_NEXT(pold->at_val.at_list);\n\t\t\t\t     presc;\n\t\t\t\t     presc = GET_NEXT(presc->rs_link)) {\n\t\t\t\t\tif (presc->rs_value.at_flags & ATR_VFLAG_MODIFY) {\n\t\t\t\t\t\tpresc->rs_value.at_flags &= ~ATR_VFLAG_DEFLT;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tfor (presc = GET_NEXT(pnew->at_val.at_list);\n\t\t\t\t     presc;\n\t\t\t\t     presc = GET_NEXT(presc->rs_link)) {\n\t\t\t\t\tif ((presc->rs_value.at_flags & ATR_VFLAG_MODIFY) == 0) {\n\t\t\t\t\t\tif (presc->rs_value.at_flags & ATR_VFLAG_DEFLT) {\n\t\t\t\t\t\t\toldpresc = find_resc_entry(pold,\n\t\t\t\t\t\t\t\t\t\t   presc->rs_defin);\n\t\t\t\t\t\t\tif (oldpresc) {\n\t\t\t\t\t\t\t\toldpresc->rs_value.at_flags |= ATR_VFLAG_DEFLT;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\t(pdef + index)->at_free(pnew);\n\t\t\t} else {\n\t\t\t\t/*\n\t\t\t\t * copy value from new into old including pointers to\n\t\t\t\t * strings and array of strings, clear the\n\t\t\t\t * \"new\" attribute so those \"pointers\" are not freed\n\t\t\t\t * when \"new\" is freed later\n\t\t\t\t */\n\t\t\t\t*pold = *pnew;\n\t\t\t\tclear_attr(pnew, pdef + index);\n\t\t\t}\n\t\t}\n\t}\n\n\tfor (index = 0; index < limit; index++) {\n\t\t/*\n\t\t * for each attribute which is to be modified, call the\n\t\t * at_action routine for the attribute, if one exists, with the\n\t\t * new value.  
If the action fails, undo everything.\n\t\t */\n\t\tif ((new + index)->at_flags & ATR_VFLAG_MODIFY) {\n\t\t\tif ((pdef + index)->at_action) {\n\t\t\t\trc = (pdef + index)->at_action((new + index), parent, mode);\n\t\t\t\tif (rc) {\n\t\t\t\t\t*bad = index + 1;\n\t\t\t\t\tattr_atomic_kill(new, pdef, limit);\n\t\t\t\t\tattr_atomic_kill(pre_copy, pdef, limit);\n\t\t\t\t\tattr_atomic_copy(pattr, attr_save, pdef, limit);\n\t\t\t\t\tattr_atomic_kill(attr_save, pdef, limit);\n\t\t\t\t\treturn (rc);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t/* The action functions might have modified new.  Need to set pattr again */\n\n\tfor (index = 0; index < limit; index++) {\n\t\tpnew = new + index;\n\t\tpold = pattr + index;\n\t\tif (pnew->at_flags & ATR_VFLAG_MODIFY) {\n\t\t\t(pdef + index)->at_free(pold);\n\t\t\tpold->at_flags = pnew->at_flags; /* includes MODIFY */\n\n\t\t\tif (pold->at_type == ATR_TYPE_LIST) {\n\t\t\t\tlist_move(&pnew->at_val.at_list, &pold->at_val.at_list);\n\t\t\t} else if (pold->at_type == ATR_TYPE_RESC) {\n\t\t\t\tset_resc(pold, pnew, INCR);\n\t\t\t\t/* clear ATR_VFLAG_DEFLT on modified values */\n\t\t\t\tfor (presc = GET_NEXT(pold->at_val.at_list);\n\t\t\t\t     presc;\n\t\t\t\t     presc = GET_NEXT(presc->rs_link)) {\n\t\t\t\t\tif (presc->rs_value.at_flags & ATR_VFLAG_MODIFY) {\n\t\t\t\t\t\tpresc->rs_value.at_flags &= ~ATR_VFLAG_DEFLT;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tfor (presc = GET_NEXT(pnew->at_val.at_list);\n\t\t\t\t     presc;\n\t\t\t\t     presc = GET_NEXT(presc->rs_link)) {\n\t\t\t\t\tif ((presc->rs_value.at_flags & ATR_VFLAG_MODIFY) == 0) {\n\t\t\t\t\t\tif (presc->rs_value.at_flags & ATR_VFLAG_DEFLT) {\n\t\t\t\t\t\t\toldpresc = find_resc_entry(pold,\n\t\t\t\t\t\t\t\t\t\t   presc->rs_defin);\n\t\t\t\t\t\t\tif (oldpresc) {\n\t\t\t\t\t\t\t\toldpresc->rs_value.at_flags |= ATR_VFLAG_DEFLT;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\t(pdef + index)->at_free(pnew);\n\t\t\t} else {\n\t\t\t\t/*\n\t\t\t\t * copy value from new into old 
including pointers to\n\t\t\t\t * strings and array of strings, clear the\n\t\t\t\t * \"new\" attribute so those \"pointers\" are not freed\n\t\t\t\t * when \"new\" is freed later\n\t\t\t\t */\n\t\t\t\t*pold = *pnew;\n\t\t\t\tclear_attr(pnew, pdef + index);\n\t\t\t}\n\t\t}\n\t}\n\n\t/*\n\t * we have moved all the \"external\" values to the old array, thus\n\t * we just free the new array, NOT call at_free on each.\n\t */\n\tfree(new);\n\tfree(pre_copy);\n\tattr_atomic_kill(attr_save, pdef, limit);\n\tif (pdef->at_parent == PARENT_TYPE_NODE)\n\t\t((pbsnode *)parent)->nd_modified = 1;\n\treturn (PBSE_NONE);\n}\n\n/**\n * @brief\n * \t\tWrapper function to 'mgr_set_attr2()' without the 'allow_unkresc'\n * \t\targument.\n *\n * @param[in]\tpattr\t- Address of the parent objects attribute array\n * @param[in]\tpidx\t- Search index for the attribute def array\n * @param[in]\tpdef\t- Address of attribute definition array\n * @param[in]\tlimit\t- Last attribute in the list\n * @param[in]\tplist\t- List of attributes to set\n * @param[in]\tprivil\t- Permission list\n * @param[out]\tbad \t- A bad attributes index is returned in this param\n *\t\t       \t\t\t\tThis actually returns the bad index + 1.\n * @param[in]   parent\t- Pointer to the parent object\n * @param[in]   mode \t- operation mode.\n *\n * @return\tError code\n * @retval\tPBSE_NONE  - Success\n * @retval\t! 
PBSE_NONE - Failure\n **/\nint\nmgr_set_attr(attribute *pattr, void *pidx, attribute_def *pdef, int limit, svrattrl *plist, int privil, int *bad, void *parent, int mode)\n{\n\treturn (mgr_set_attr2(pattr, pidx, pdef, limit, plist, privil, bad, parent, mode, FALSE));\n}\n\n/**\n * @brief\n *\t\tUnset (clear) attributes for manager function\n *\n *\t\tOperation depends on the type of attribute and whether a resource is specified.\n *\t\tFor an attribute of type ATR_TYPE_RESC:\n *\t  \tIf a resource name is specified, unset only that resource entry.\n *\t  \tIf a resource name is not specified, unset the whole attribute.\n *\t\tFor an attribute of type ATR_TYPE_ENTITY:\n *\t  \tIf a resource name is specified, unset only entries with that resc\n *\t  \tIf a resource name is not specified, unset the whole attribute.\n *\t\tFor \"normal\" attributes, unset the entire attribute.\n *\n * @param[in]\tpattr\t- Address of the parent object's attribute array\n * @param[in]\tpidx\t- Search index of the attribute array\n * @param[in]\tpdef\t- Address of attribute definition array\n * @param[in]\tlimit \t- Last attribute in the list\n * @param[in]\tplist \t- List of attributes to unset.\n * @param[in]\tprivil\t- Permission list.  
A value of -1 is an override that\n * \t\t\t \t\t\t\tbypasses the check for permissions, used internally by\n * \t\t\t \t\t\t\tthe server\n * @param[out]\tbad  \t- A bad attributes index is returned in this param\n * @param[in]\tpobj \t- Pointer to the parent object\n * @param[in] \tptype \t- Type of the parent object\n * @param[in] \trflag \t- INDIRECT_RES_UNLINK will unlink indirect resources\n * \t\t     \t\t\t\tINDIRECT_RES_CHECK will return an error if an indirect\n * \t\t    \t\t\t \tresource is set\n *\n * @return\tSuccess/Failure\n * @retval\t0\t- Success\n * @retval\t-1  - Failure\n */\nstatic int\nmgr_unset_attr(attribute *pattr, void *pidx, attribute_def *pdef, int limit, svrattrl *plist, int privil, int *bad, void *pobj, int ptype, enum res_op_flag rflag)\n{\n\tvoid *parent_id = NULL;\n\tpbs_db_attr_list_t db_attr_list;\n\tint do_indirect_check = 0;\n\tint index;\n\tint ord;\n\tint rc;\n\tsvrattrl *pl;\n\tresource_def *prsdef;\n\tresource *presc;\n\tstruct pbsnode *pnode = pobj;\n\tvoid *conn = (void *) svr_db_conn;\n\tpbs_db_obj_info_t obj;\n\tobj.pbs_db_un.pbs_db_job = NULL;\n\n\t/* first check the attribute exists and we have privilege to set */\n\tord = 0;\n\tpl = plist;\n\twhile (pl) {\n\t\tord++;\n\t\tindex = find_attr(pidx, pdef, pl->al_name);\n\t\tif (index < 0) {\n\t\t\t*bad = ord;\n\t\t\treturn (PBSE_NOATTR);\n\t\t}\n\n\t\t/* have we privilege to unset the attribute ? 
*/\n\n\t\tif ((privil != -1) && ((pdef + index)->at_flags & privil & ATR_DFLAG_WRACC) == 0) {\n\t\t\t*bad = ord;\n\t\t\treturn (PBSE_ATTRRO);\n\t\t}\n\t\tif (((pdef + index)->at_type == ATR_TYPE_RESC) &&\n\t\t    (pl->al_resc != NULL)) {\n\n\t\t\t/* check the individual resource */\n\n\t\t\tprsdef = find_resc_def(svr_resc_def, pl->al_resc);\n\t\t\tif (prsdef == NULL) {\n\t\t\t\t*bad = ord;\n\t\t\t\treturn (PBSE_UNKRESC);\n\t\t\t}\n\t\t\tif ((privil != -1) && ((prsdef->rs_flags & privil & ATR_DFLAG_WRACC) == 0)) {\n\t\t\t\t*bad = ord;\n\t\t\t\treturn (PBSE_PERM);\n\t\t\t}\n\t\t\tpresc = find_resc_entry(pattr + index, prsdef);\n\t\t\tif (presc &&\n\t\t\t    (presc->rs_value.at_flags & ATR_VFLAG_TARGET)) {\n\t\t\t\tif (rflag == INDIRECT_RES_UNLINK) {\n\t\t\t\t\tpresc->rs_value.at_flags &= ~ATR_VFLAG_TARGET;\n\t\t\t\t} else {\n\t\t\t\t\t*bad = ord;\n\t\t\t\t\treturn (PBSE_OBJBUSY);\n\t\t\t\t}\n\t\t\t}\n\t\t\tif ((pnode->nd_state & INUSE_PROV) &&\n\t\t\t    !strcmp(prsdef->rs_name, \"aoe\")) {\n\t\t\t\t*bad = ord;\n\t\t\t\treturn (PBSE_NODEPROV_NOACTION);\n\t\t\t}\n\t\t}\n\t\tif ((pnode->nd_state & INUSE_PROV) &&\n\t\t    !strcmp((pdef + index)->at_name, ATTR_NODE_current_aoe)) {\n\t\t\t*bad = ord;\n\t\t\treturn (PBSE_NODEPROV_NOACTION);\n\t\t}\n\n\t\tpl = (svrattrl *) GET_NEXT(pl->al_link);\n\t}\n\n\t/* ok, now clear them */\n\tdb_attr_list.attr_count = 0;\n\tCLEAR_HEAD(db_attr_list.attrs);\n\n\twhile (plist) {\n\t\tindex = find_attr(pidx, pdef, plist->al_name);\n\t\tif (encode_single_attr_db((pdef + index), (pattr + index), &db_attr_list) != 0)\n\t\t\treturn (PBSE_NOATTR);\n\n\t\tif (((pdef + index)->at_type == ATR_TYPE_RESC) &&\n\t\t    (plist->al_resc != NULL)) {\n\n\t\t\t/* attribute of type resource and specified resource */\n\t\t\t/* free resource member, not the attribute */\n\n\t\t\tprsdef = find_resc_def(svr_resc_def, plist->al_resc);\n\t\t\tpresc = find_resc_entry(pattr + index, prsdef);\n\t\t\tif (presc) {\n\t\t\t\tif ((ptype != PARENT_TYPE_SERVER) 
||\n\t\t\t\t    (index != (int) SVR_ATR_resource_cost)) {\n\n\t\t\t\t\tunset_signature(pnode, prsdef->rs_name);\n\t\t\t\t\tif ((ptype == PARENT_TYPE_NODE) && (presc->rs_value.at_flags & ATR_VFLAG_INDIRECT)) {\n\t\t\t\t\t\tunset_indirect(presc, pidx, pdef, plist->al_name, pobj, ptype);\n\t\t\t\t\t\tdo_indirect_check = 1;\n\t\t\t\t\t}\n\t\t\t\t\tprsdef->rs_free(&presc->rs_value);\n\t\t\t\t}\n\t\t\t\tdelete_link(&presc->rs_link);\n\t\t\t\tfree(presc);\n\t\t\t\tpresc = NULL;\n\t\t\t}\n\t\t\t/* If the last resource has been delinked from  */\n\t\t\t/* the attribute,  \"unset\" the attribute itself */\n\t\t\tpresc = (resource *) GET_NEXT((pattr + index)->at_val.at_list);\n\t\t\tif (presc == NULL)\n\t\t\t\tmark_attr_not_set(pattr + index);\n\t\t\t(pattr + index)->at_flags |= ATR_MOD_MCACHE;\n\n\t\t} else if (((pdef + index)->at_type == ATR_TYPE_ENTITY) &&\n\t\t\t   (plist->al_resc != NULL)) {\n\n\t\t\t/* attribute of type ENTITY and specified resource */\n\t\t\t/* unset the entity limit on that resource for    */\n\t\t\t/* all entities */\n\n\t\t\tunset_entlim_resc(pattr + index, plist->al_resc);\n\n\t\t} else {\n\n\t\t\t/* either the attribute is not of type ENTITY or RESC */\n\t\t\t/* or there is no specific resource specified         */\n\t\t\tif ((pdef + index)->at_type == ATR_TYPE_RESC) {\n\n\t\t\t\t/* if a resource type, check each for being indirect */\n\t\t\t\tpresc = (resource *) GET_NEXT((pattr + index)->at_val.at_list);\n\t\t\t\twhile (presc) {\n\t\t\t\t\tif (presc->rs_value.at_flags & ATR_VFLAG_INDIRECT) {\n\t\t\t\t\t\tunset_indirect(presc, pidx, pdef, plist->al_name, pobj, ptype);\n\t\t\t\t\t\tdo_indirect_check = 1;\n\t\t\t\t\t}\n\t\t\t\t\tpresc = (resource *) GET_NEXT(presc->rs_link);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/* now free the whole attribute */\n\n\t\t\t(pdef + index)->at_free(pattr + index);\n\t\t\t(pattr + index)->at_flags |= ATR_VFLAG_MODIFY;\n\t\t}\n\t\tplist = (svrattrl *) GET_NEXT(plist->al_link);\n\t}\n\n\t/* now delete the collected list from the 
database */\n\tswitch (ptype) {\n\t\tcase PARENT_TYPE_SERVER:\n\t\t\tobj.pbs_db_obj_type = PBS_DB_SVR;\n\t\t\tparent_id = NULL;\n\t\t\tbreak;\n\n\t\tcase PARENT_TYPE_SCHED:\n\t\t\tobj.pbs_db_obj_type = PBS_DB_SCHED;\n\t\t\tparent_id = ((pbs_sched *) pobj)->sc_name;\n\t\t\tbreak;\n\n\t\tcase PARENT_TYPE_NODE:\n\t\t\tobj.pbs_db_obj_type = PBS_DB_NODE;\n\t\t\tparent_id = pnode->nd_name;\n\t\t\tbreak;\n\n\t\tcase PARENT_TYPE_QUE_ALL:\n\t\t\tobj.pbs_db_obj_type = PBS_DB_QUEUE;\n\t\t\tparent_id = ((pbs_queue *) pobj)->qu_qs.qu_name;\n\t\t\tbreak;\n\n\t\tcase PARENT_TYPE_JOB:\n\t\t\tobj.pbs_db_obj_type = PBS_DB_JOB;\n\t\t\tparent_id = ((job *) pobj)->ji_qs.ji_jobid;\n\t\t\tbreak;\n\n\t\tcase PARENT_TYPE_RESV:\n\t\t\tobj.pbs_db_obj_type = PBS_DB_RESV;\n\t\t\tparent_id = ((resc_resv *) pobj)->ri_qs.ri_resvID;\n\t\t\tbreak;\n\t}\n\n\trc = pbs_db_delete_attr_obj(conn, &obj, parent_id, &db_attr_list);\n\tfree_db_attr_list(&db_attr_list);\n\n\tif (rc != 0)\n\t\treturn -1;\n\n\tif (do_indirect_check)\n\t\tindirect_target_check(0);\n\treturn (0);\n}\n\n/**\n * @brief\n *\t\tProcess request to create a queue\n *\n *\t\tCreates queue and calls mgr_set_attr to set queue attributes.\n *\n * @param[in]\tpreq\t- Pointer to a batch request structure\n */\n\nvoid\nmgr_queue_create(struct batch_request *preq)\n{\n\tint bad;\n\tchar *badattr;\n\tsvrattrl *plist;\n\tpbs_queue *pque;\n\tint rc;\n\n\trc = strlen(preq->rq_ind.rq_manager.rq_objname);\n\n\tif ((rc > PBS_MAXQUEUENAME) || (rc == 0)) {\n\t\treq_reject(PBSE_QUENBIG, 0, preq);\n\t\treturn;\n\t}\n\tif (find_queuebyname(preq->rq_ind.rq_manager.rq_objname)) {\n\t\treq_reject(PBSE_QUEEXIST, 0, preq);\n\t\treturn;\n\t}\n\n\tpque = que_alloc(preq->rq_ind.rq_manager.rq_objname);\n\n\t/* set the queue attributes */\n\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\trc = mgr_set_attr(pque->qu_attr, que_attr_idx, que_attr_def, QA_ATR_LAST, plist, preq->rq_perm, &bad, (void *) pque, ATR_ACTION_NEW);\n\tif (rc != 0) 
{\n\t\treply_badattr(rc, bad, plist, preq);\n\t\tque_free(pque);\n\t\tpque = NULL;\n\t} else {\n\t\tque_save_db(pque);\n\n\t\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_QUEUE, LOG_INFO, pque->qu_qs.qu_name, msg_manager, msg_man_cre, preq->rq_user, preq->rq_host);\n\t\tmgr_log_attr(msg_man_set, plist, PBS_EVENTCLASS_QUEUE, preq->rq_ind.rq_manager.rq_objname, NULL);\n\n\t\t/* check the appropriateness of the attributes vs. queue type */\n\n\t\tif ((badattr = check_que_attr(pque)) != NULL) {\n\t\t\t/* mismatch, issue warning */\n\t\t\t(void) sprintf(log_buffer, msg_attrtype, pque->qu_qs.qu_name, badattr);\n\t\t\t(void) reply_text(preq, PBSE_ATTRTYPE, log_buffer);\n\t\t} else {\n\t\t\treply_ack(preq);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\t\tDelete a queue\n *\n * @par\n * \t\tThe queue must be empty of jobs\n * @param[in,out]\tpreq\t- Pointer to a batch request structure\n *     \t\t\t\t\t\t\t The request can be rejected with the following error codes:\n *       \t\t\t\t\t\tPBSE_OBJBUSY returned if it is attached to a reservation/node or if the queue fails to get deleted\n *       \t\t\t\t\t\tPBSE_UNKQUE returned if the queue is unknown\n *       \t\t\t\t\t\tPBSE_SYSTEM returned for system issues like failure to allocate memory\n *\n * @par\n * \t\tAcknowledgement reply sent to the batch request (preq) upon successful deletion of the queue\n */\n\nvoid\nmgr_queue_delete(struct batch_request *preq)\n{\n\tint i;\n\tint j;\n\tint total_queues;\n\tint len;\n\tchar *problem_names;\n\tint problem_cnt;\n\tchar *name;\n\tpbs_queue *pque = NULL;\n\tpbs_queue *next_queue = NULL;\n\tint rc;\n\tint type = 0;\n\tstruct pbs_queue **problem_queues = NULL;\n\n\tname = preq->rq_ind.rq_manager.rq_objname;\n\n\tif ((*name == '\\0') || (*name == '@')) {\n\t\ttype = 1;\n\t}\n\n\t/* get the queue to be deleted */\n\tif (type == 0) {\n\t\tpque = find_queuebyname(name);\n\t} else {\n\t\tproblem_queues = (struct pbs_queue **) malloc(server.sv_qs.sv_numque * sizeof(struct pbs_queue *));\n\t\tif 
(problem_queues == NULL) {\n\t\t\tlog_err(ENOMEM, __func__, \"out of memory\");\n\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\treturn;\n\t\t}\n\t\tproblem_cnt = 0;\n\t\tpque = (pbs_queue *) GET_NEXT(svr_queues);\n\t}\n\n\t/* if the queue is unknown, reject the request */\n\tif (pque == NULL) {\n\t\tfree(problem_queues);\n\t\treq_reject(PBSE_UNKQUE, 0, preq);\n\t\treturn;\n\t}\n\n\ttotal_queues = server.sv_qs.sv_numque;\n\n\tfor (j = 0; (pque != NULL) && (j < total_queues); j++) {\n\t\trc = 0;\n\t\t/* Do not allow deletion of a queue associated to a reservation, unless\n\t\t * it is coming from the server itself */\n\t\tif ((pque->qu_resvp != NULL) && (preq->rq_conn != PBS_LOCAL_CONNECTION)) {\n\t\t\trc = PBSE_OBJBUSY;\n\t\t}\n\n\t\t/* are there nodes associated with the queue */\n\n\t\tfor (i = 0; i < svr_totnodes; i++) {\n\t\t\tif (pbsndlist[i]->nd_pque == pque) {\n\t\t\t\trc = PBSE_OBJBUSY;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\n\t\t/* if modification is for all queues, get the next queue if the present queue will be deleted successfully */\n\n\t\tif (type == 1) {\n\t\t\tnext_queue = (pbs_queue *) GET_NEXT(pque->qu_link);\n\t\t}\n\n\t\tif (rc == 0) {\n\t\t\tchar queue_name[PBS_MAXQUEUENAME + 1];\n\t\t\t/* Save the queue name before we purge it so it will appear in the log.  
*/\n\t\t\tpbs_strncpy(queue_name, pque->qu_qs.qu_name, sizeof(queue_name));\n\t\t\tif ((rc = que_purge(pque)) != 0) {\n\t\t\t\trc = PBSE_OBJBUSY;\n\t\t\t} else\n\t\t\t\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_QUEUE, LOG_INFO, queue_name, msg_manager, msg_man_del, preq->rq_user, preq->rq_host);\n\t\t}\n\t\tif (rc != 0) {\n\t\t\tif (type == 1) {\n\t\t\t\tif (problem_queues) /*we have an array in which to save*/\n\t\t\t\t\tproblem_queues[problem_cnt] = pque;\n\t\t\t\t++problem_cnt;\n\t\t\t} else {\n\t\t\t\treq_reject(rc, 0, preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\n\t\t/* Get the next queue if all queues need to be deleted\n\t\t * from the default server */\n\t\tif (type == 1) {\n\t\t\tpque = next_queue;\n\t\t} else {\n\t\t\tbreak;\n\t\t}\n\t}\n\n\tif (type == 1) { /*modification was for all queues  */\n\n\t\tif (problem_cnt) { /*one or more problems encountered*/\n\n\t\t\t/* accumulate the total length of all problem queue names */\n\t\t\tfor (len = 0, i = 0; i < problem_cnt; i++)\n\t\t\t\tlen += strlen(problem_queues[i]->qu_qs.qu_name) + 3;\n\n\t\t\tlen += strlen(pbse_to_txt(PBSE_OBJBUSY));\n\n\t\t\tif ((problem_names = malloc(len)) != NULL) {\n\n\t\t\t\tstrcpy(problem_names, pbse_to_txt(PBSE_OBJBUSY));\n\t\t\t\tfor (i = 0; i < problem_cnt; i++) {\n\t\t\t\t\tif (i)\n\t\t\t\t\t\tstrcat(problem_names, \", \");\n\t\t\t\t\tstrcat(problem_names, \" \");\n\t\t\t\t\tstrcat(problem_names, problem_queues[i]->qu_qs.qu_name);\n\t\t\t\t}\n\n\t\t\t\t(void) reply_text(preq, PBSE_OBJBUSY, problem_names);\n\t\t\t\tfree(problem_names);\n\t\t\t\tproblem_names = NULL;\n\t\t\t} else {\n\t\t\t\t(void) reply_text(preq, PBSE_SYSTEM, pbse_to_txt(PBSE_SYSTEM));\n\t\t\t}\n\t\t}\n\n\t\tfree(problem_queues);\n\t\tproblem_queues = NULL;\n\n\t\tif (problem_cnt) { /*reply has already been sent  */\n\t\t\treturn;\n\t\t}\n\t}\n\n\treply_ack(preq);\n}\n\n/**\n * @brief\n *\t\tSet Server Attribute Values\n *\n *\t\tSets the requested attributes and returns a reply\n *\n * @param[in]\tpreq\t- Pointer to a batch request structure\n * @param[in]\tconn\t- Pointer to a 
connection structure associated with preq\n */\n\nvoid\nmgr_server_set(struct batch_request *preq, conn_t *conn)\n{\n\tint bad_attr = 0;\n\tsvrattrl *plist, *psvrat;\n\tpbs_list_head setlist;\n\tpbs_list_head unsetlist;\n\tint rc;\n\tint has_log_events = 0;\n\n\tCLEAR_HEAD(setlist);\n\tCLEAR_HEAD(unsetlist);\n\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\t/*Only root at server host can set server attribute \"acl_roots\".*/\n\twhile (plist) {\n\t\tif (strcmp(plist->al_name, ATTR_logevents) == 0)\n\t\t\thas_log_events = 1;\n\t\tif (strcasecmp(plist->al_name, ATTR_aclroot) == 0) {\n\t\t\tif (!is_local_root(preq->rq_user, preq->rq_host)) {\n\t\t\t\treply_badattr(PBSE_ATTRRO, bad_attr, plist, preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t\t/*\n\t\t * We do not overwrite/update the entire record in the database. Therefore, to\n\t\t * unset attributes, we will need to find out the ones with a 0 or NULL value set.\n\t\t * We create a separate list for removal from the list of attributes provided, and\n\t\t * pass it to mgr_unset_attr, below\n\t\t */\n\t\tpsvrat = dup_svrattrl(plist);\n\t\tif (psvrat == NULL) {\n\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\tfree_attrlist(&setlist);\n\t\t\tfree_attrlist(&unsetlist);\n\t\t\treturn;\n\t\t}\n\t\tif (psvrat->al_atopl.value == NULL || psvrat->al_atopl.value[0] == '\\0')\n\t\t\tappend_link(&unsetlist, &psvrat->al_link, psvrat);\n\t\telse\n\t\t\tappend_link(&setlist, &psvrat->al_link, psvrat);\n\n\t\tplist = (struct svrattrl *) GET_NEXT(plist->al_link);\n\t}\n\n\t/* if the unsetlist has attributes, call mgr_unset_attr to remove them separately */\n\tplist = (svrattrl *) GET_NEXT(unsetlist);\n\tif (plist) {\n\t\trc = mgr_unset_attr(server.sv_attr, svr_attr_idx, svr_attr_def, SVR_ATR_LAST, plist,\n\t\t\t\t    preq->rq_perm, &bad_attr, (void *) &server, PARENT_TYPE_SERVER, INDIRECT_RES_CHECK);\n\t\tif (rc != 0) {\n\t\t\treply_badattr(rc, bad_attr, plist, 
preq);\n\t\t\tfree_attrlist(&setlist);\n\t\t\tfree_attrlist(&unsetlist);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tplist = (svrattrl *) GET_NEXT(setlist);\n\tif (!plist)\n\t\tgoto done;\n\n\trc = mgr_set_attr(server.sv_attr, svr_attr_idx, svr_attr_def, SVR_ATR_LAST, plist,\n\t\t\t  preq->rq_perm, &bad_attr, (void *) &server,\n\t\t\t  ATR_ACTION_ALTER);\n\tif (rc != 0) {\n\t\treply_badattr(rc, bad_attr, plist, preq);\n\t\tfree_attrlist(&setlist);\n\t\tfree_attrlist(&unsetlist);\n\t} else {\n\t\tsvr_save_db(&server);\n\tdone:\n\t\tfree_attrlist(&setlist);\n\t\tfree_attrlist(&unsetlist);\n\n\t\tif (has_log_events)\n\t\t\t*log_event_mask = get_sattr_long(SVR_ATR_log_events);\n\t\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER, LOG_INFO, msg_daemonname, msg_manager, msg_man_set, preq->rq_user, preq->rq_host);\n\t\tmgr_log_attr(msg_man_set, GET_NEXT(preq->rq_ind.rq_manager.rq_attr), PBS_EVENTCLASS_SERVER, msg_daemonname, NULL);\n\t\treply_ack(preq);\n\t}\n}\n\n/**\n * @brief\n *\t\tUnset (clear) Server Attribute Values\n *\n *\t\tClears the requested attributes and returns a reply\n *\n * @param[in]\tpreq\t- Pointer to a batch request structure\n * @param[in]\tconn\t- Pointer to a connection structure associated with preq\n */\n\nvoid\nmgr_server_unset(struct batch_request *preq, conn_t *conn)\n{\n\tint bad_attr = 0;\n\tsvrattrl *plist;\n\tint rc;\n\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\n\t/* Check unsetting pbs_license_info,\t\t\t*/\n\t/*                 pbs_license_min,\t\t\t*/\n\t/*                 pbs_license_max,\t\t\t*/\n\t/*                 pbs_license_linger_time.\t\t*/\n\twhile (plist) {\n\t\tif (strcasecmp(plist->al_name, ATTR_aclroot) == 0) {\n\t\t\t/*Only root at server host can unset server attribute \"acl_roots\".*/\n\t\t\tif (!is_local_root(preq->rq_user, preq->rq_host)) {\n\t\t\t\treply_badattr(PBSE_ATTRRO, bad_attr, plist, preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else if (strcasecmp(plist->al_name,\n\t\t\t\t      ATTR_pbs_license_info) == 0) {\n\t\t\tunset_license_location();\n\t\t} else if 
(strcasecmp(plist->al_name,\n\t\t\t\t      ATTR_license_min) == 0) {\n\t\t\tunset_license_min();\n\t\t} else if (strcasecmp(plist->al_name,\n\t\t\t\t      ATTR_license_max) == 0) {\n\t\t\tunset_license_max();\n\t\t} else if (strcasecmp(plist->al_name,\n\t\t\t\t      ATTR_license_linger) == 0) {\n\t\t\tunset_license_linger();\n\t\t} else if (strcasecmp(plist->al_name, ATTR_resv_retry_init) == 0 ||\n\t\t\t   strcasecmp(plist->al_name, ATTR_resv_retry_time) == 0) {\n\t\t\tresv_retry_time = RESV_RETRY_TIME_DEFAULT;\n\t\t} else if (strcasecmp(plist->al_name,\n\t\t\t\t      ATTR_JobHistoryEnable) == 0) {\n\t\t\tunset_job_history_enable();\n\t\t} else if (strcasecmp(plist->al_name,\n\t\t\t\t      ATTR_JobHistoryDuration) == 0) {\n\t\t\tunset_job_history_duration();\n\t\t} else if (strcasecmp(plist->al_name,\n\t\t\t\t      ATTR_max_job_sequence_id) == 0) {\n\t\t\tunset_max_job_sequence_id();\n\t\t} else if (strcasecmp(plist->al_name,\n\t\t\t\t      ATTR_max_concurrent_prov) == 0) {\n\t\t\tmax_concurrent_prov = PBS_MAX_CONCURRENT_PROV;\n\t\t\tresize_prov_table(max_concurrent_prov);\n\t\t} else if (strcasecmp(plist->al_name,\n\t\t\t\t      ATTR_dfltqsubargs) == 0) {\n\t\t\tforce_qsub_daemons_update();\n\t\t} else if (strcasecmp(plist->al_name,\n\t\t\t\t      ATTR_nodefailrq) == 0) {\n\t\t\tunset_node_fail_requeue();\n\t\t} else if (strcasecmp(plist->al_name,\n\t\t\t\t      ATTR_resendtermdelay) == 0) {\n\t\t\tunset_resend_term_delay();\n\t\t} else if (strcasecmp(plist->al_name,\n\t\t\t\t      ATTR_jobscript_max_size) == 0) {\n\t\t\tunset_jobscript_max_size();\n\t\t} else if (strcasecmp(plist->al_name,\n\t\t\t\t      ATTR_scheduling) == 0) {\n\t\t\tif (dflt_scheduler) {\n\t\t\t\tset_sched_attr_l_slim(dflt_scheduler, SCHED_ATR_scheduling, 0, SET);\n\t\t\t\tsched_save_db(dflt_scheduler);\n\t\t\t}\n\t\t} else if (strcasecmp(plist->al_name, ATTR_schediteration) == 0) {\n\t\t\tif (dflt_scheduler) {\n\t\t\t\tsvrattrl *tm_list;\n\t\t\t\t/* value is 600 so it is of size 4 including 
terminating character */\n\t\t\t\ttm_list = attrlist_create(plist->al_name, NULL, 8);\n\t\t\t\tif (tm_list == NULL) {\n\t\t\t\t\treply_badattr(-1, bad_attr, plist, preq);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\ttm_list->al_link.ll_next->ll_struct = NULL;\n\t\t\t\t/* when unset, set scheduler_iteration to 600 seconds */\n\t\t\t\tsprintf(tm_list->al_value, \"%d\", PBS_SCHEDULE_CYCLE);\n\t\t\t\trc = mgr_set_attr(dflt_scheduler->sch_attr, sched_attr_idx, sched_attr_def, SCHED_ATR_LAST, tm_list,\n\t\t\t\t\t\t  MGR_ONLY_SET, &bad_attr, (void *) dflt_scheduler, ATR_ACTION_ALTER);\n\t\t\t\tif (rc != 0) {\n\t\t\t\t\tfree_svrattrl(tm_list);\n\t\t\t\t\treply_badattr(rc, bad_attr, plist, preq);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tsched_save_db(dflt_scheduler);\n\t\t\t\tfree_svrattrl(tm_list);\n\t\t\t}\n\t\t}\n\t\tplist = (struct svrattrl *) GET_NEXT(plist->al_link);\n\t}\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\n\trc = mgr_unset_attr(server.sv_attr, svr_attr_idx, svr_attr_def, SVR_ATR_LAST, plist,\n\t\t\t    preq->rq_perm, &bad_attr, (void *) &server, PARENT_TYPE_SERVER, INDIRECT_RES_CHECK);\n\tif (rc != 0)\n\t\treply_badattr(rc, bad_attr, plist, preq);\n\telse {\n\t\tattribute *pattr = get_sattr(SVR_ATR_DefaultChunk);\n\t\tif (pattr->at_flags & ATR_VFLAG_MODIFY) {\n\t\t\t(void) deflt_chunk_action(pattr, (void *) &server, ATR_ACTION_ALTER);\n\t\t}\n\t\t/* Now set the default values on some of the unset attributes */\n\t\tfor (plist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\t\t     plist != NULL; plist = (struct svrattrl *) GET_NEXT(plist->al_link)) {\n\t\t\tif (strcasecmp(plist->al_name, ATTR_logevents) == 0) {\n\t\t\t\tset_sattr_l_slim(SVR_ATR_log_events, SVR_LOG_DFLT, SET);\n\t\t\t\t*log_event_mask = get_sattr_long(SVR_ATR_log_events);\n\t\t\t} else if (strcasecmp(plist->al_name, ATTR_mailer) == 0)\n\t\t\t\tset_sattr_str_slim(SVR_ATR_mailer, SENDMAIL_CMD, NULL);\n\t\t\telse if 
(strcasecmp(plist->al_name, ATTR_mailfrom) == 0)\n\t\t\t\tset_sattr_str_slim(SVR_ATR_mailfrom, PBS_DEFAULT_MAIL, NULL);\n\t\t\telse if (strcasecmp(plist->al_name, ATTR_queryother) == 0)\n\t\t\t\tset_sattr_l_slim(SVR_ATR_query_others, 1, SET);\n\t\t\telse if (strcasecmp(plist->al_name, ATTR_schediteration) == 0)\n\t\t\t\tset_sattr_str_slim(SVR_ATR_scheduler_iteration, TOSTR(PBS_SCHEDULE_CYCLE), NULL);\n\t\t\telse if (strcasecmp(plist->al_name, ATTR_ResvEnable) == 0)\n\t\t\t\tset_sattr_l_slim(SVR_ATR_ResvEnable, 1, SET);\n\t\t\telse if (strcasecmp(plist->al_name, ATTR_maxarraysize) == 0)\n\t\t\t\tset_sattr_str_slim(SVR_ATR_maxarraysize, TOSTR(PBS_MAX_ARRAY_JOB_DFL), NULL);\n\t\t\telse if (strcasecmp(plist->al_name, ATTR_max_concurrent_prov) == 0)\n\t\t\t\tset_sattr_str_slim(SVR_ATR_max_concurrent_prov, TOSTR(PBS_MAX_CONCURRENT_PROV), NULL);\n\t\t\telse if (strcasecmp(plist->al_name, ATTR_EligibleTimeEnable) == 0)\n\t\t\t\tset_sattr_l_slim(SVR_ATR_EligibleTimeEnable, 0, SET);\n\t\t\telse if (strcasecmp(plist->al_name, ATTR_license_linger) == 0) {\n\t\t\t\tset_sattr_l_slim(SVR_ATR_license_linger, PBS_LIC_LINGER_TIME, SET);\n\t\t\t\tlicensing_control.licenses_linger_time = PBS_LIC_LINGER_TIME;\n\t\t\t} else if (strcasecmp(plist->al_name, ATTR_license_max) == 0) {\n\t\t\t\tset_sattr_l_slim(SVR_ATR_license_max, PBS_MAX_LICENSING_LICENSES, SET);\n\t\t\t\tlicensing_control.licenses_max = PBS_MAX_LICENSING_LICENSES;\n\t\t\t} else if (strcasecmp(plist->al_name, ATTR_license_min) == 0) {\n\t\t\t\tset_sattr_l_slim(SVR_ATR_license_min, PBS_MIN_LICENSING_LICENSES, SET);\n\t\t\t\tlicensing_control.licenses_min = PBS_MIN_LICENSING_LICENSES;\n\t\t\t} else if (strcasecmp(plist->al_name, ATTR_rescdflt) == 0) {\n\t\t\t\tif (plist->al_resc != NULL && strcasecmp(plist->al_resc, \"ncpus\") == 0) {\n\t\t\t\t\tsvrattrl *tm_list;\n\t\t\t\t\ttm_list = attrlist_create(plist->al_name, \"ncpus\", 8);\n\t\t\t\t\tif (tm_list == NULL) {\n\t\t\t\t\t\treply_badattr(-1, bad_attr, plist, 
preq);\n\t\t\t\t\t\treturn;\n\t\t\t\t\t}\n\t\t\t\t\ttm_list->al_link.ll_next->ll_struct = NULL;\n\t\t\t\t\tsprintf(tm_list->al_value, \"%d\", 1);\n\t\t\t\t\trc = mgr_set_attr(server.sv_attr, svr_attr_idx, svr_attr_def, SVR_ATR_LAST, tm_list,\n\t\t\t\t\t\t\t  NO_USER_SET, &bad_attr, (void *) &server, ATR_ACTION_ALTER);\n\t\t\t\t\tif (rc != 0) {\n\t\t\t\t\t\tfree_svrattrl(tm_list);\n\t\t\t\t\t\treply_badattr(rc, bad_attr, plist, preq);\n\t\t\t\t\t\treturn;\n\t\t\t\t\t}\n\t\t\t\t\tfree_svrattrl(tm_list);\n\t\t\t\t}\n\t\t\t} else if (strcasecmp(plist->al_name, ATTR_scheduling) == 0) {\n\t\t\t\tset_sattr_l_slim(SVR_ATR_scheduling, 1, SET);\n\t\t\t} else if (strcasecmp(plist->al_name, ATTR_clearesten) == 0) {\n\t\t\t\tset_sattr_l_slim(SVR_ATR_clear_est_enable, 0, SET);\n\t\t\t}\n\t\t}\n\t\tsvr_save_db(&server);\n\t\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER, LOG_INFO,\n\t\t\t   msg_daemonname, msg_manager, msg_man_uns,\n\t\t\t   preq->rq_user, preq->rq_host);\n\t\tmgr_log_attr(msg_man_uns, GET_NEXT(preq->rq_ind.rq_manager.rq_attr), PBS_EVENTCLASS_SERVER, msg_daemonname, NULL);\n\t\treply_ack(preq);\n\t}\n}\n\n/**\n * @brief\n *\t\tSet Scheduler Attribute Values\n *\n *\t\tSets the requested attributes and returns a reply\n *\n * @param[in] preq - Pointer to a batch request structure\n */\n\nvoid\nmgr_sched_set(struct batch_request *preq)\n{\n\tint bad_attr = 0;\n\tsvrattrl *plist, *psvrat;\n\tpbs_list_head setlist;\n\tpbs_list_head unsetlist;\n\tint rc;\n\tpbs_sched *psched;\n\tint only_scheduling = 1;\n\n\tpsched = find_sched(preq->rq_ind.rq_manager.rq_objname);\n\tif (!psched) {\n\t\treq_reject(PBSE_UNKSCHED, 0, preq);\n\t\treturn;\n\t}\n\n\tCLEAR_HEAD(setlist);\n\tCLEAR_HEAD(unsetlist);\n\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\twhile (plist) {\n\t\tif (strcmp(plist->al_atopl.name, ATTR_scheduling)) {\n\t\t\tonly_scheduling = 0;\n\t\t}\n\t\t/*\n\t\t * We do not overwrite/update the entire record in the database. 
Therefore, to\n\t\t * unset attributes, we will need to find out the ones with a 0 or NULL value set.\n\t\t * We create a separate list for removal from the list of attributes provided, and\n\t\t * pass it to mgr_unset_attr, below\n\t\t */\n\t\tpsvrat = dup_svrattrl(plist);\n\t\tif (psvrat == NULL) {\n\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\tfree_attrlist(&setlist);\n\t\t\tfree_attrlist(&unsetlist);\n\t\t\treturn;\n\t\t}\n\t\tif (psvrat->al_atopl.value == NULL || psvrat->al_atopl.value[0] == '\\0')\n\t\t\tappend_link(&unsetlist, &psvrat->al_link, psvrat);\n\t\telse\n\t\t\tappend_link(&setlist, &psvrat->al_link, psvrat);\n\n\t\tplist = (struct svrattrl *) GET_NEXT(plist->al_link);\n\t}\n\n\t/* if the unsetlist has attributes, call server_unset to remove them separately */\n\tplist = (svrattrl *) GET_NEXT(unsetlist);\n\tif (plist) {\n\t\trc = mgr_unset_attr(psched->sch_attr, sched_attr_idx, sched_attr_def, SCHED_ATR_LAST, plist,\n\t\t\t\t    preq->rq_perm, &bad_attr, (void *) psched, PARENT_TYPE_SCHED, INDIRECT_RES_CHECK);\n\t\tif (rc != 0) {\n\t\t\treply_badattr(rc, bad_attr, plist, preq);\n\t\t\tfree_attrlist(&setlist);\n\t\t\tfree_attrlist(&unsetlist);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tplist = (svrattrl *) GET_NEXT(setlist);\n\tif (!plist)\n\t\tgoto done;\n\n\trc = mgr_set_attr(psched->sch_attr, sched_attr_idx, sched_attr_def,\n\t\t\t  SCHED_ATR_LAST, plist, preq->rq_perm, &bad_attr, (void *) psched, ATR_ACTION_ALTER);\n\tif (rc != 0) {\n\t\treply_badattr(rc, bad_attr, plist, preq);\n\t\tfree_attrlist(&setlist);\n\t\tfree_attrlist(&unsetlist);\n\t\treturn;\n\t}\n\tif (only_scheduling != 1)\n\t\tset_scheduler_flag(SCH_CONFIGURE, psched);\n\n\tsched_save_db(psched);\n\ndone:\n\tfree_attrlist(&setlist);\n\tfree_attrlist(&unsetlist);\n\tsprintf(log_buffer, msg_manager, msg_man_set, preq->rq_user, preq->rq_host);\n\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SCHED, LOG_INFO, msg_daemonname, log_buffer);\n\tmgr_log_attr(msg_man_set, 
GET_NEXT(preq->rq_ind.rq_manager.rq_attr), PBS_EVENTCLASS_SCHED, msg_daemonname, NULL);\n\treply_ack(preq);\n}\n\n/**\n * @brief\n *\t\tUnset (clear) Sched Attribute Values\n *\n *\t\tClears the requested attributes and returns a reply\n *\n * @param[in]\tpreq\t- Pointer to a batch request structure\n */\n\nvoid\nmgr_sched_unset(struct batch_request *preq)\n{\n\tint bad_attr = 0;\n\tsvrattrl *plist, *tmp_plist;\n\tint rc;\n\n\tpbs_sched *psched = find_sched(preq->rq_ind.rq_manager.rq_objname);\n\tif (!psched) {\n\t\treq_reject(PBSE_UNKSCHED, 0, preq);\n\t\treturn;\n\t}\n\n\tfor (tmp_plist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr); tmp_plist; tmp_plist = (struct svrattrl *) GET_NEXT(tmp_plist->al_link)) {\n\t\tif (strcasecmp(tmp_plist->al_name, ATTR_schediteration) == 0) {\n\t\t\tif (dflt_scheduler) {\n\t\t\t\tsvrattrl *t_list;\n\n\t\t\t\tt_list = attrlist_create(tmp_plist->al_name, NULL, 0);\n\t\t\t\tif (t_list == NULL) {\n\t\t\t\t\treply_badattr(-1, bad_attr, tmp_plist, preq);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\n\t\t\t\tt_list->al_link.ll_next->ll_struct = NULL;\n\t\t\t\trc = mgr_unset_attr(server.sv_attr, svr_attr_idx, svr_attr_def, SVR_ATR_LAST, t_list,\n\t\t\t\t\t\t    -1, &bad_attr, (void *) &server, PARENT_TYPE_SERVER, INDIRECT_RES_CHECK);\n\t\t\t\tif (rc != 0) {\n\t\t\t\t\tfree_svrattrl(t_list);\n\t\t\t\t\treply_badattr(rc, bad_attr, tmp_plist, preq);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tsvr_save_db(&server);\n\t\t\t\tfree_svrattrl(t_list);\n\t\t\t}\n\t\t}\n\t}\n\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\trc = mgr_unset_attr(psched->sch_attr, sched_attr_idx, sched_attr_def, SCHED_ATR_LAST, plist,\n\t\t\t    preq->rq_perm, &bad_attr, (void *) psched, PARENT_TYPE_SCHED, INDIRECT_RES_CHECK);\n\tif (rc != 0) {\n\t\treply_badattr(rc, bad_attr, plist, preq);\n\t\treturn;\n\t}\n\n\tset_sched_default(psched, 0);\n\n\tsched_save_db(psched);\n\tsprintf(log_buffer, msg_manager, msg_man_uns, preq->rq_user, preq->rq_host);\n\tlog_event(PBSEVENT_ADMIN, 
PBS_EVENTCLASS_SCHED, LOG_INFO, msg_daemonname, log_buffer);\n\tmgr_log_attr(msg_man_uns, plist, PBS_EVENTCLASS_SCHED, msg_daemonname, NULL);\n\treply_ack(preq);\n}\n\n/**\n * @brief\n *\t\tSet Queue Attribute Values\n *\n *\t\tFinds the queue, Sets the requested attributes and returns a reply\n *\n * @param[in]\tpreq\t- Pointer to a batch request structure\n */\n\nvoid\nmgr_queue_set(struct batch_request *preq)\n{\n\tint allques;\n\tint bad = 0;\n\tchar *badattr;\n\tsvrattrl *plist, *psvrat;\n\tpbs_list_head setlist;\n\tpbs_list_head unsetlist;\n\tpbs_queue *pque;\n\tchar *qname;\n\tint rc;\n\n\tif ((*preq->rq_ind.rq_manager.rq_objname == '\\0') ||\n\t    (*preq->rq_ind.rq_manager.rq_objname == '@')) {\n\t\tqname = all_quename;\n\t\tallques = 1;\n\t\tpque = (pbs_queue *) GET_NEXT(svr_queues);\n\t} else {\n\t\tqname = preq->rq_ind.rq_manager.rq_objname;\n\t\tallques = 0;\n\t\tpque = find_queuebyname(qname);\n\t}\n\tif (pque == NULL) {\n\t\treq_reject(PBSE_UNKQUE, 0, preq);\n\t\treturn;\n\t}\n\n\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_QUEUE, LOG_INFO, qname, msg_manager, msg_man_set, preq->rq_user, preq->rq_host);\n\n\tCLEAR_HEAD(setlist);\n\tCLEAR_HEAD(unsetlist);\n\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\twhile (plist) {\n\t\t/*\n\t\t * We do not overwrite/update the entire record in the database. 
Therefore, to\n\t\t * unset attributes, we will need to find out the ones with a 0 or NULL value set.\n\t\t * We create a separate list for removal from the list of attributes provided, and\n\t\t * pass it to mgr_unset_attr, below\n\t\t */\n\t\tpsvrat = dup_svrattrl(plist);\n\t\tif (psvrat == NULL) {\n\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\tfree_attrlist(&setlist);\n\t\t\tfree_attrlist(&unsetlist);\n\t\t\treturn;\n\t\t}\n\t\tif (psvrat->al_atopl.value == NULL || psvrat->al_atopl.value[0] == '\\0')\n\t\t\tappend_link(&unsetlist, &psvrat->al_link, psvrat);\n\t\telse\n\t\t\tappend_link(&setlist, &psvrat->al_link, psvrat);\n\n\t\tplist = (struct svrattrl *) GET_NEXT(plist->al_link);\n\t}\n\n\tif (allques)\n\t\tpque = (pbs_queue *) GET_NEXT(svr_queues);\n\n\twhile (pque) {\n\t\t/* if the unsetlist has attributes, call server_unset to remove them separately */\n\t\tplist = (svrattrl *) GET_NEXT(unsetlist);\n\t\tif (plist) {\n\t\t\trc = mgr_unset_attr(pque->qu_attr, que_attr_idx, que_attr_def, QA_ATR_LAST, plist,\n\t\t\t\t\t    preq->rq_perm, &bad, (void *) pque, PARENT_TYPE_QUE_ALL, INDIRECT_RES_CHECK);\n\t\t\tif (rc != 0) {\n\t\t\t\treply_badattr(rc, bad, plist, preq);\n\t\t\t\tfree_attrlist(&setlist);\n\t\t\t\tfree_attrlist(&unsetlist);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\n\t\tplist = (svrattrl *) GET_NEXT(setlist);\n\t\tif (plist) {\n\t\t\trc = mgr_set_attr(pque->qu_attr, que_attr_idx, que_attr_def, QA_ATR_LAST,\n\t\t\t\t\t  plist, preq->rq_perm, &bad, pque, ATR_ACTION_ALTER);\n\t\t\tif (rc != 0) {\n\t\t\t\treply_badattr(rc, bad, plist, preq);\n\t\t\t\tfree_attrlist(&setlist);\n\t\t\t\tfree_attrlist(&unsetlist);\n\t\t\t\treturn;\n\t\t\t} else {\n\t\t\t\tque_save_db(pque);\n\t\t\t\tmgr_log_attr(msg_man_set, GET_NEXT(preq->rq_ind.rq_manager.rq_attr), PBS_EVENTCLASS_QUEUE, pque->qu_qs.qu_name, NULL);\n\t\t\t}\n\t\t}\n\t\tif (allques)\n\t\t\tpque = (pbs_queue *) 
GET_NEXT(pque->qu_link);\n\t\telse\n\t\t\tbreak;\n\t}\n\tfree_attrlist(&setlist);\n\tfree_attrlist(&unsetlist);\n\n\t/* check the appropriateness of the attributes based on queue type */\n\n\tif (allques)\n\t\tpque = (pbs_queue *) GET_NEXT(svr_queues);\n\twhile (pque) {\n\t\tif ((badattr = check_que_attr(pque)) != NULL) {\n\t\t\t(void) sprintf(log_buffer, msg_attrtype, pque->qu_qs.qu_name, badattr);\n\t\t\t(void) reply_text(preq, PBSE_ATTRTYPE, log_buffer);\n\t\t\treturn;\n\t\t}\n\t\tif (allques)\n\t\t\tpque = (pbs_queue *) GET_NEXT(pque->qu_link);\n\t\telse\n\t\t\tbreak;\n\t}\n\n\treply_ack(preq);\n}\n\n/**\n * @brief\n *\t\tUnset (clear)  Queue Attribute Values\n *\n *\t\tFinds the queue, clears the requested attributes and returns a reply\n *\n * @param[in] preq - Pointer to a batch request structure\n */\n\nvoid\nmgr_queue_unset(struct batch_request *preq)\n{\n\tint allques;\n\tint bad_attr = 0;\n\tsvrattrl *plist;\n\tpbs_queue *pque;\n\tchar *qname;\n\tint rc;\n\n\tif ((*preq->rq_ind.rq_manager.rq_objname == '\\0') ||\n\t    (*preq->rq_ind.rq_manager.rq_objname == '@')) {\n\t\tqname = all_quename;\n\t\tallques = 1;\n\t\tpque = (pbs_queue *) GET_NEXT(svr_queues);\n\t} else {\n\t\tallques = 0;\n\t\tqname = preq->rq_ind.rq_manager.rq_objname;\n\t\tpque = find_queuebyname(qname);\n\t}\n\tif (pque == NULL) {\n\t\treq_reject(PBSE_UNKQUE, 0, preq);\n\t\treturn;\n\t}\n\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_QUEUE, LOG_INFO,\n\t\t   qname, msg_manager, msg_man_uns, preq->rq_user, preq->rq_host);\n\n\tfor (plist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\t     plist != NULL;\n\t     plist = (svrattrl *) GET_NEXT(plist->al_link)) {\n\t\tif (strcmp(plist->al_name, ATTR_qtype) == 0) {\n\t\t\treq_reject(PBSE_NEEDQUET, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\n\twhile (pque) {\n\t\trc = mgr_unset_attr(pque->qu_attr, que_attr_idx, que_attr_def, QA_ATR_LAST,\n\t\t\t\t    plist, 
preq->rq_perm, &bad_attr,\n\t\t\t\t    (void *) pque, PARENT_TYPE_QUE_ALL, INDIRECT_RES_CHECK);\n\t\tif (rc != 0) {\n\t\t\treply_badattr(rc, bad_attr, plist, preq);\n\t\t\treturn;\n\t\t} else {\n\t\t\tattribute *attr;\n\t\t\tif ((attr = get_qattr(pque, QE_ATR_DefaultChunk))->at_flags & ATR_VFLAG_MODIFY)\n\t\t\t\t(void) deflt_chunk_action(attr, (void *) pque, ATR_ACTION_ALTER);\n\t\t\tque_save_db(pque);\n\t\t\tmgr_log_attr(msg_man_uns, plist, PBS_EVENTCLASS_QUEUE, pque->qu_qs.qu_name, NULL);\n\t\t\tif (is_qattr_set(pque, QA_ATR_QType) == 0)\n\t\t\t\tpque->qu_qs.qu_type = QTYPE_Unset;\n\t\t}\n\t\tif (allques)\n\t\t\tpque = GET_NEXT(pque->qu_link);\n\t\telse\n\t\t\tbreak;\n\t}\n\treply_ack(preq);\n}\n\n/**\n * @brief\n *\t\tSet vnode attributes\n *\n * \t\tFinds the set of vnodes, either one specified, all for a host or all.\n * \t\tSets the request attributes on that set.\n * \t\treturns a reply to the sender of the batch_request\n *\n * \t\tNote the use of the ':' to indicate a port number as part of a host name\n * \t\tis purely for internal testing and is not documented externally.\n *\n * @param[in] preq - Pointer to a batch request structure\n *\n * @par MT-safe: No\n */\n\nstatic void\nmgr_node_set(struct batch_request *preq)\n{\n\textern char *msg_queue_not_in_partition;\n\textern char *msg_partition_not_in_queue;\n\tint bad = 0;\n\tchar hostname[PBS_MAXHOSTNAME + 1];\n\tint numnodes = 1; /* number of vnodes to be operated on */\n\tsvrattrl *plist, *psvrat;\n\tpbs_list_head setlist;\n\tpbs_list_head unsetlist;\n\tchar *nodename;\n\tmominfo_t *pmom = NULL;\n\tmom_svrinfo_t *psvrmom = NULL;\n\tstruct pbsnode *pnode;\n\tint rc;\n\tint i, j, len;\n\tint problem_cnt = 0;\n\tint momidx;\n\tchar *problem_names = NULL;\n\tstruct pbsnode **problem_nodes = NULL;\n\tstatic char *warnmsg = NULL;\n\tstruct pbsnode **warn_nodes = NULL;\n\tint warn_idx = 0;\n\tint replied = 0; /* boolean */\n\n\tnodename = preq->rq_ind.rq_manager.rq_objname;\n\n\tif 
(((*preq->rq_ind.rq_manager.rq_objname == '\\0') ||\n\t     (*preq->rq_ind.rq_manager.rq_objname == '@')) &&\n\t    (preq->rq_ind.rq_manager.rq_objtype != MGR_OBJ_HOST)) {\n\n\t\t/*\n\t\t * In this instance the set node req is to apply to all\n\t\t * nodes at the local ('\\0')  or specified ('@') server\n\t\t */\n\n\t\tif ((pbsndlist != NULL) && svr_totnodes) {\n\t\t\tnodename = all_nodes;\n\t\t\tpnode = pbsndlist[0];\n\t\t\tnumnodes = svr_totnodes;\n\t\t} else { /* specified server has no nodes in its node table */\n\t\t\tpnode = NULL;\n\t\t}\n\n\t} else if (preq->rq_ind.rq_manager.rq_objtype == MGR_OBJ_HOST) {\n\t\t/* Operating on all vnodes on a named host\n\t\t * if it is the last/only host\n\t\t * find the mom and get the first vnode in her list\n\t\t */\n\n\t\tchar *pc;\n\t\tunsigned int port = pbs_mom_port;\n\n\t\tpc = strchr(preq->rq_ind.rq_manager.rq_objname, (int) ':');\n\t\tif (pc) {\n\t\t\tport = atol(pc + 1);\n\t\t}\n\t\tif (get_fullhostname(preq->rq_ind.rq_manager.rq_objname,\n\t\t\t\t     hostname, (sizeof(hostname) - 1)) != 0) {\n\t\t\treq_reject(PBSE_UNKNODE, 0, preq);\n\t\t\treturn;\n\t\t}\n\t\tpmom = find_mom_entry(hostname, port);\n\t\tif (pmom) {\n\t\t\tpsvrmom = (mom_svrinfo_t *) pmom->mi_data;\n\t\t\tnumnodes = psvrmom->msr_numvnds;\n\t\t\tmomidx = 0;\n\t\t\tpnode = psvrmom->msr_children[momidx];\n\t\t} else {\n\t\t\t/* no such Mom */\n\t\t\treq_reject(PBSE_UNKNODE, 0, preq);\n\t\t\treturn;\n\t\t}\n\t} else /* Else one and only one vnode */\n\t\tpnode = find_nodebyname(nodename);\n\n\tif (pnode == NULL) {\n\t\treq_reject(PBSE_UNKNODE, 0, preq);\n\t\treturn;\n\t}\n\n\tCLEAR_HEAD(setlist);\n\tCLEAR_HEAD(unsetlist);\n\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\twhile (plist) {\n\t\tpsvrat = dup_svrattrl(plist);\n\t\tif (psvrat == NULL) {\n\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\tfree_attrlist(&setlist);\n\t\t\tfree_attrlist(&unsetlist);\n\t\t\treturn;\n\t\t}\n\t\tif (psvrat->al_atopl.value == NULL || 
psvrat->al_atopl.value[0] == '\\0')\n\t\t\tappend_link(&unsetlist, &psvrat->al_link, psvrat);\n\t\telse\n\t\t\tappend_link(&setlist, &psvrat->al_link, psvrat);\n\n\t\tplist = (struct svrattrl *) GET_NEXT(plist->al_link);\n\t}\n\n\t/* set writable attributes of node (nodes if numnodes > 1) */\n\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_NODE, LOG_INFO, nodename, msg_manager, msg_man_set, preq->rq_user, preq->rq_host);\n\n\tif (numnodes > 1) {\n\t\tproblem_nodes = (struct pbsnode **) malloc(numnodes * sizeof(struct pbsnode *));\n\t\tif (problem_nodes == NULL) {\n\t\t\tlog_err(ENOMEM, __func__, \"out of memory\");\n\t\t\treturn;\n\t\t}\n\t\tproblem_cnt = 0;\n\t}\n\n\twarn_idx = 0;\n\twarn_nodes = (struct pbsnode **) malloc(numnodes * sizeof(struct pbsnode *));\n\tif (warn_nodes == NULL) {\n\t\tlog_err(ENOMEM, __func__, \"out of memory\");\n\t\tfree(problem_nodes);\n\t\treturn;\n\t}\n\twarnings_update(WARN_ngrp_init, warn_nodes, &warn_idx, pnode);\n\n\ti = 0;\n\twhile (pnode) {\n\t\tif ((pnode->nd_state & INUSE_DELETED) == 0) {\n\t\t\tfor (j = 0; j < 2; j++) {\n\t\t\t\trc = 0;\n\t\t\t\tif (j == 0) {\n\t\t\t\t\tplist = (svrattrl *) GET_NEXT(unsetlist);\n\t\t\t\t\tif (plist)\n\t\t\t\t\t\trc = mgr_unset_attr(pnode->nd_attr, node_attr_idx, node_attr_def, ND_ATR_LAST, plist, preq->rq_perm, &bad, (void *) pnode, PARENT_TYPE_NODE, INDIRECT_RES_CHECK);\n\t\t\t\t} else {\n\t\t\t\t\tplist = (svrattrl *) GET_NEXT(setlist);\n\t\t\t\t\tif (plist)\n\t\t\t\t\t\trc = mgr_set_attr(pnode->nd_attr, node_attr_idx, node_attr_def, ND_ATR_LAST, plist, preq->rq_perm | ATR_PERM_ALLOW_INDIRECT, &bad, (void *) pnode, ATR_ACTION_ALTER);\n\t\t\t\t}\n\t\t\t\tif (rc != 0) {\n\t\t\t\t\tif (numnodes > 1) {\n\t\t\t\t\t\tif (problem_nodes) {\n\t\t\t\t\t\t\t/*we have an array in which to save*/\n\t\t\t\t\t\t\tif ((problem_cnt == 0) || (problem_nodes[problem_cnt - 1] != pnode)) {\n\t\t\t\t\t\t\t\t/* and this node was not saved already */\n\t\t\t\t\t\t\t\tproblem_nodes[problem_cnt] = 
pnode;\n\t\t\t\t\t\t\t\t++problem_cnt;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t} else { /*In the specific node case, reply w/ error and return*/\n\t\t\t\t\t\tswitch (rc) {\n\t\t\t\t\t\t\tcase PBSE_INTERNAL:\n\t\t\t\t\t\t\tcase PBSE_SYSTEM:\n\t\t\t\t\t\t\t\treq_reject(rc, bad, preq);\n\t\t\t\t\t\t\t\tbreak;\n\n\t\t\t\t\t\t\tcase PBSE_NOATTR:\n\t\t\t\t\t\t\tcase PBSE_ATTRRO:\n\t\t\t\t\t\t\tcase PBSE_MUTUALEX:\n\t\t\t\t\t\t\tcase PBSE_BADNDATVAL:\n\t\t\t\t\t\t\tcase PBSE_UNKRESC:\n\t\t\t\t\t\t\t\treply_badattr(rc, bad, plist, preq);\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\tcase PBSE_QUE_NOT_IN_PARTITION:\n\t\t\t\t\t\t\t\t(void) snprintf(log_buffer, LOG_BUF_SIZE, msg_queue_not_in_partition,\n\t\t\t\t\t\t\t\t\t\tget_nattr_str(pnode, ND_ATR_Queue));\n\t\t\t\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t\t\t\t\treply_text(preq, PBSE_QUE_NOT_IN_PARTITION, log_buffer);\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\tcase PBSE_PARTITION_NOT_IN_QUE:\n\t\t\t\t\t\t\t\t(void) snprintf(log_buffer, LOG_BUF_SIZE, msg_partition_not_in_queue,\n\t\t\t\t\t\t\t\t\t\tget_nattr_str(pnode, ND_ATR_partition));\n\t\t\t\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t\t\t\t\treply_text(preq, PBSE_PARTITION_NOT_IN_QUE, log_buffer);\n\t\t\t\t\t\t\t\tbreak;\n\n\t\t\t\t\t\t\tdefault:\n\t\t\t\t\t\t\t\treq_reject(rc, 0, preq);\n\t\t\t\t\t\t}\n\t\t\t\t\t\tfree(warn_nodes);\n\t\t\t\t\t\tfree_attrlist(&unsetlist);\n\t\t\t\t\t\tfree_attrlist(&setlist);\n\t\t\t\t\t\treturn;\n\t\t\t\t\t}\n\t\t\t\t} else if (plist) { /*modifications succeed for this node*/\n\t\t\t\t\twarnings_update(WARN_ngrp, warn_nodes, &warn_idx, pnode);\n\n\t\t\t\t\tif ((pnode->nd_nsnfree == 0) && (pnode->nd_state == 0))\n\t\t\t\t\t\tset_vnode_state(pnode, INUSE_JOB, Nd_State_Or);\n\n\t\t\t\t\tmgr_log_attr(msg_man_set, GET_NEXT(preq->rq_ind.rq_manager.rq_attr), PBS_EVENTCLASS_NODE, pnode->nd_name, NULL);\n\t\t\t\t\tpnode->nd_modified = 1;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif (numnodes == 1)\n\t\t\tbreak; /* just the one vnode 
*/\n\t\telse if (preq->rq_ind.rq_manager.rq_objtype == MGR_OBJ_HOST) {\n\t\t\tint update_mom_only = 0;\n\n\t\t\t/* next vnode under the Mom */\n\t\t\tif (++momidx >= psvrmom->msr_numvnds)\n\t\t\t\tbreak; /* all down */\n\t\t\tpnode = psvrmom->msr_children[momidx];\n\t\t\tif ((strcmp(plist->al_name, ATTR_NODE_state) == 0) && (plist->al_op == INCR)) {\n\t\t\t\t/* Marking nodes offline.  We should only mark the children vnodes\n\t\t\t\t * as offline if no other mom that reports the vnodes are up.\n\t\t\t\t */\n\t\t\t\tif (pnode->nd_nummoms > 1) {\n\t\t\t\t\tint imom;\n\t\t\t\t\tfor (imom = 0; imom < pnode->nd_nummoms; ++imom) {\n\t\t\t\t\t\tunsigned long mstate;\n\t\t\t\t\t\tmstate = pnode->nd_moms[imom]->mi_dmn_info->dmn_state;\n\t\t\t\t\t\tif ((mstate & (INUSE_DOWN | INUSE_OFFLINE)) == 0) {\n\t\t\t\t\t\t\t/* If another mom is up (i.e. not down or offline)\n\t\t\t\t\t\t\t * then do not set the vnode state of the children\n\t\t\t\t\t\t\t */\n\t\t\t\t\t\t\tupdate_mom_only = 1;\n\t\t\t\t\t\t\tbreak; /* found at least one mom that's up, we can stop now */\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (update_mom_only)\n\t\t\t\tbreak; /* all done */\n\t\t} else {\n\t\t\tif (++i == svr_totnodes)\n\t\t\t\tbreak;\t      /* all done */\n\t\t\tpnode = pbsndlist[i]; /* next vnode in array */\n\t\t}\n\t} /*bottom of the while()*/\n\n\tfree_attrlist(&setlist);\n\tfree_attrlist(&unsetlist);\n\n\twarnmsg = warn_msg_build(WARN_ngrp, warn_nodes, warn_idx);\n\n\tsave_nodes_db(0, NULL);\n\n\tif (numnodes > 1) { /*modification was for multiple vnodes  */\n\n\t\tif (problem_cnt) { /*one or more problems encountered*/\n\n\t\t\tfor (len = 0, i = 0; i < problem_cnt; i++)\n\t\t\t\tlen += strlen(problem_nodes[i]->nd_name) + 3;\n\n\t\t\tif (warnmsg == NULL)\n\t\t\t\tlen += strlen(pbse_to_txt(PBSE_GMODERR));\n\t\t\telse\n\t\t\t\tlen += strlen(pbse_to_txt(PBSE_GMODERR)) + strlen(warnmsg);\n\n\t\t\tif ((problem_names = malloc(len)) != NULL) {\n\n\t\t\t\tstrcpy(problem_names, 
pbse_to_txt(PBSE_GMODERR));\n\t\t\t\tfor (i = 0; i < problem_cnt; i++) {\n\t\t\t\t\tif (i)\n\t\t\t\t\t\tstrcat(problem_names, \", \");\n\t\t\t\t\tstrcat(problem_names, problem_nodes[i]->nd_name);\n\t\t\t\t}\n\t\t\t\tif (warnmsg != NULL)\n\t\t\t\t\tstrcat(problem_names, warnmsg);\n\n\t\t\t\t(void) reply_text(preq, PBSE_GMODERR, problem_names);\n\t\t\t\tfree(problem_names);\n\t\t\t\tproblem_names = NULL;\n\t\t\t\treplied = 1;\n\t\t\t} else {\n\t\t\t\t(void) reply_text(preq, PBSE_GMODERR, pbse_to_txt(PBSE_GMODERR));\n\t\t\t\treplied = 1;\n\t\t\t}\n\t\t}\n\t}\n\n\tif (replied == 0) {\n\n\t\tif (warnmsg) {\n\t\t\t(void) reply_text(preq, PBSE_NONE, warnmsg);\n\t\t\tfree(warnmsg);\n\t\t\twarnmsg = NULL;\n\t\t} else\n\t\t\treply_ack(preq);\n\t}\n\n\tfree(problem_nodes);\n\tfree(warn_nodes);\n}\n\n/**\n * @brief\n *\t\tUnset node attributes\n *\n * \t\tFinds the node, unsets the attributes and\n * \t\treturns a reply to the sender of the batch_request\n *\n * @note\n * \t\tNote, the attributes \"state\" and \"ntype\" cannot be unset.\n *\n *  @param[in]\tpreq\t- Pointer to a batch request structure\n *\n *  @par MT-safe: No\n */\n\nvoid\nmgr_node_unset(struct batch_request *preq)\n\n{\n\tint bad = 0;\n\tchar hostname[PBS_MAXHOSTNAME + 1];\n\tint numnodes = 1; /* number of vnodes to operate on */\n\tsvrattrl *plist;\n\tchar *nodename;\n\tmominfo_t *pmom = NULL;\n\tmom_svrinfo_t *psvrmom = NULL;\n\tstruct pbsnode *pnode;\n\tint rc;\n\tint unset_que = 0;\n\tint i, len;\n\tint problem_cnt = 0;\n\tint momidx;\n\tchar *problem_names;\n\tstruct pbsnode **problem_nodes = NULL;\n\tstatic char *warnmsg = NULL;\n\tstruct pbsnode **warn_nodes = NULL;\n\tint warn_idx = 0;\n\tint replied = 0; /* boolean */\n\tattribute *patr;\n\tresource_def *prd;\n\tresource *prc;\n\tstatic char *astate = ATTR_NODE_state;\n\tstatic char *antype = ATTR_NODE_ntype;\n\tstatic char *ra = ATTR_rescavail;\n\n\tnodename = preq->rq_ind.rq_manager.rq_objname;\n\n\tif (preq->rq_ind.rq_manager.rq_objtype == 
MGR_OBJ_HOST) {\n\t\t/* Operating on all vnodes on a named host          */\n\t\t/* find the mom and get the first vnode in her list */\n\t\tchar *pc;\n\t\tunsigned int port = pbs_mom_port;\n\n\t\tpc = strchr(preq->rq_ind.rq_manager.rq_objname, (int) ':');\n\t\tif (pc) {\n\t\t\tport = atol(pc + 1);\n\t\t}\n\t\tif (get_fullhostname(preq->rq_ind.rq_manager.rq_objname,\n\t\t\t\t     hostname, (sizeof(hostname) - 1)) != 0) {\n\t\t\treq_reject(PBSE_UNKNODE, 0, preq);\n\t\t\treturn;\n\t\t}\n\t\tpmom = find_mom_entry(hostname, port);\n\t\tif (pmom) {\n\t\t\t/* found mom, set number of and first vnode */\n\t\t\tpsvrmom = (mom_svrinfo_t *) pmom->mi_data;\n\t\t\tnumnodes = psvrmom->msr_numvnds;\n\t\t\tmomidx = 0;\n\t\t\tpnode = psvrmom->msr_children[momidx];\n\t\t} else {\n\t\t\t/* no such Mom */\n\t\t\treq_reject(PBSE_UNKNODE, 0, preq);\n\t\t\treturn;\n\t\t}\n\n\t} else if ((*preq->rq_ind.rq_manager.rq_objname == '\\0') ||\n\t\t   (*preq->rq_ind.rq_manager.rq_objname == '@')) {\n\n\t\t/*In this instance the set node req is to apply to all */\n\t\t/*nodes at the local ('\\0')  or specified ('@') server */\n\n\t\tif ((pbsndlist != NULL) && svr_totnodes) {\n\t\t\tnodename = all_nodes;\n\t\t\tpnode = pbsndlist[0];\n\t\t\tnumnodes = svr_totnodes;\n\t\t} else { /* specified server has no nodes in its node table */\n\t\t\tpnode = NULL;\n\t\t}\n\n\t} else {\n\t\tpnode = find_nodebyname(nodename);\n\t}\n\n\tif (pnode == NULL) {\n\t\treq_reject(PBSE_UNKNODE, 0, preq);\n\t\treturn;\n\t}\n\n\t/* check attributes being unset */\n\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\twhile (plist) {\n\t\tbad++;\n\n\t\t/* check that state, vnode_pool, ntype and\t*/\n\t\t/* resources_available.host are not being unset\t*/\n\t\tif ((strcasecmp(plist->al_name, astate) == 0) ||\n\t\t    (strcasecmp(plist->al_name, antype) == 0) ||\n\t\t    (strcasecmp(plist->al_name, ATTR_NODE_VnodePool) == 0) ||\n\t\t    ((strcasecmp(plist->al_name, ra) == 0) &&\n\t\t     ((plist->al_resc == 
NULL) ||\n\t\t      (strcasecmp(plist->al_resc, \"host\") == 0)))) {\n\t\t\treply_badattr(PBSE_BADNDATVAL, bad, plist, preq);\n\t\t\treturn;\n\t\t}\n\n\t\t/* is \"queue\" being unset */\n\t\tif (strcasecmp(plist->al_name, \"queue\") == 0)\n\t\t\tunset_que = 1;\n\n\t\tplist = (struct svrattrl *) GET_NEXT(plist->al_link);\n\t}\n\tbad = 0;\n\n\t/* unset writable attributes of node (nodes if numnodes > 1) */\n\n\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_NODE, LOG_INFO,\n\t\t   nodename, msg_manager, msg_man_uns,\n\t\t   preq->rq_user, preq->rq_host);\n\n\tif (numnodes > 1) {\n\t\tproblem_nodes = (struct pbsnode **) malloc(numnodes * sizeof(struct pbsnode *));\n\t\tif (problem_nodes == NULL) {\n\t\t\tlog_err(ENOMEM, __func__, \"out of memory\");\n\t\t\treturn;\n\t\t}\n\t\tproblem_cnt = 0;\n\t}\n\n\twarn_idx = 0;\n\twarn_nodes = (struct pbsnode **) malloc(numnodes * sizeof(struct pbsnode *));\n\tif (warn_nodes == NULL) {\n\t\tlog_err(ENOMEM, __func__, \"out of memory\");\n\t\tfree(problem_nodes);\n\t\treturn;\n\t}\n\twarnings_update(WARN_ngrp_init, warn_nodes, &warn_idx, pnode);\n\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\ti = 0;\n\twhile (pnode) {\n\t\tif ((pnode->nd_state & INUSE_DELETED) == 0) {\n\t\t\t/*\n\t\t\t * The unset operation requires us to note beforehand\n\t\t\t * whether we have the grouping resource on this node\n\t\t\t * as we lose that information if unset succeeds.\n\t\t\t */\n\n\t\t\twarnings_update(WARN_ngrp_ck, warn_nodes, &warn_idx, pnode);\n\n\t\t\trc = mgr_unset_attr(pnode->nd_attr, node_attr_idx, node_attr_def, ND_ATR_LAST,\n\t\t\t\t\t    plist, preq->rq_perm, &bad, (void *) pnode,\n\t\t\t\t\t    PARENT_TYPE_NODE, INDIRECT_RES_CHECK);\n\t\t\tif (rc != 0) {\n\n\t\t\t\tif (numnodes > 1) {\n\t\t\t\t\tif (problem_nodes) {\n\t\t\t\t\t\t/*we have an array in which to save*/\n\t\t\t\t\t\tproblem_nodes[problem_cnt] = pnode;\n\t\t\t\t\t\t++problem_cnt;\n\t\t\t\t\t}\n\n\t\t\t\t} else { /*In the specific node case, reply w/ 
error and return*/\n\t\t\t\t\tswitch (rc) {\n\t\t\t\t\t\tcase PBSE_INTERNAL:\n\t\t\t\t\t\tcase PBSE_SYSTEM:\n\t\t\t\t\t\t\treq_reject(rc, bad, preq);\n\t\t\t\t\t\t\tbreak;\n\n\t\t\t\t\t\tcase PBSE_NOATTR:\n\t\t\t\t\t\tcase PBSE_ATTRRO:\n\t\t\t\t\t\tcase PBSE_MUTUALEX:\n\t\t\t\t\t\tcase PBSE_BADNDATVAL:\n\t\t\t\t\t\tcase PBSE_UNKRESC:\n\t\t\t\t\t\t\treply_badattr(rc, bad, plist, preq);\n\t\t\t\t\t\t\tbreak;\n\n\t\t\t\t\t\tdefault:\n\t\t\t\t\t\t\treq_reject(rc, 0, preq);\n\t\t\t\t\t}\n\t\t\t\t\tfree(warn_nodes);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\n\t\t\t} else { /*modifications succeed for this node*/\n\n\t\t\t\twarnings_update(WARN_ngrp, warn_nodes, &warn_idx, pnode);\n\n\t\t\t\t/* if queue unset, clear pointer to queue struct */\n\t\t\t\tif (unset_que) {\n\t\t\t\t\tpnode->nd_pque = NULL;\n\t\t\t\t\tmark_which_queues_have_nodes();\n\t\t\t\t}\n\n\t\t\t\t/* if resources_avail.ncpus unset, reset to default */\n\t\t\t\tpatr = get_nattr(pnode, ND_ATR_ResourceAvail);\n\t\t\t\tprd = &svr_resc_def[RESC_NCPUS];\n\t\t\t\tprc = find_resc_entry(patr, prd);\n\t\t\t\tif (prc == NULL)\n\t\t\t\t\tprc = add_resource_entry(patr, prd);\n\t\t\t\tif (!is_attr_set(&prc->rs_value)) {\n\t\t\t\t\tprc->rs_value.at_val.at_long = pnode->nd_ncpus;\n\t\t\t\t\tprc->rs_value.at_flags |= ATR_VFLAG_DEFLT | ATR_SET_MOD_MCACHE;\n\t\t\t\t}\n\n\t\t\t\t/* If the Mom attribute is unset, reset to default */\n\t\t\t\tif (is_nattr_set(pnode, ND_ATR_Mom) == 0) {\n\t\t\t\t\tattribute tmp;\n\t\t\t\t\tif (get_fullhostname(pnode->nd_name, hostname,\n\t\t\t\t\t\t\t     (sizeof(hostname) - 1)) != 0) {\n\t\t\t\t\t\tstrncpy(hostname, pnode->nd_name, (sizeof(hostname) - 1));\n\t\t\t\t\t}\n\t\t\t\t\tclear_attr(&tmp, &node_attr_def[(int) ND_ATR_Mom]);\n\t\t\t\t\trc = decode_arst(&tmp, ATTR_NODE_Mom, NULL, hostname);\n\t\t\t\t\tif (rc == 0) {\n\t\t\t\t\t\tset_arst(get_nattr(pnode, ND_ATR_Mom), &tmp, INCR);\n\t\t\t\t\t\t(get_nattr(pnode, ND_ATR_Mom))->at_flags |= 
ATR_VFLAG_DEFLT;\n\t\t\t\t\t\tfree_arst(&tmp);\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tpnode->nd_modified = 1;\n\t\t\t\tmgr_log_attr(msg_man_uns, plist, PBS_EVENTCLASS_NODE, pnode->nd_name, NULL);\n\t\t\t}\n\t\t}\n\t\tif (numnodes == 1)\n\t\t\tbreak;\n\t\tif (preq->rq_ind.rq_manager.rq_objtype == MGR_OBJ_HOST) {\n\t\t\tif (++momidx >= psvrmom->msr_numvnds)\n\t\t\t\tbreak;\n\t\t\tpnode = psvrmom->msr_children[momidx];\n\t\t} else {\n\t\t\tif (++i == svr_totnodes)\n\t\t\t\tbreak;\n\t\t\tpnode = pbsndlist[i];\n\t\t}\n\t} /* bottom of the while() */\n\n\twarnmsg = warn_msg_build(WARN_ngrp, warn_nodes, warn_idx);\n\n\tsave_nodes_db(0, NULL);\n\n\tif (numnodes > 1) { /*modification was for all nodes  */\n\n\t\tif (problem_cnt) { /*one or more problems encountered*/\n\n\t\t\tfor (len = 0, i = 0; i < problem_cnt; i++)\n\t\t\t\tlen += strlen(problem_nodes[i]->nd_name) + 3;\n\n\t\t\tif (warnmsg == NULL)\n\t\t\t\tlen += strlen(pbse_to_txt(PBSE_GMODERR));\n\t\t\telse\n\t\t\t\tlen += strlen(pbse_to_txt(PBSE_GMODERR)) + strlen(warnmsg);\n\n\t\t\tif ((problem_names = malloc(len)) != NULL) {\n\n\t\t\t\tstrcpy(problem_names, pbse_to_txt(PBSE_GMODERR));\n\t\t\t\tfor (i = 0; i < problem_cnt; i++) {\n\t\t\t\t\tif (i)\n\t\t\t\t\t\tstrcat(problem_names, \", \");\n\t\t\t\t\tstrcat(problem_names, problem_nodes[i]->nd_name);\n\t\t\t\t}\n\t\t\t\tif (warnmsg != NULL)\n\t\t\t\t\tstrcat(problem_names, warnmsg);\n\n\t\t\t\t(void) reply_text(preq, PBSE_GMODERR, problem_names);\n\t\t\t\tfree(problem_names);\n\t\t\t\tproblem_names = NULL;\n\t\t\t\treplied = 1;\n\t\t\t} else {\n\t\t\t\t(void) reply_text(preq, PBSE_GMODERR, pbse_to_txt(PBSE_GMODERR));\n\t\t\t\treplied = 1;\n\t\t\t}\n\t\t}\n\t}\n\n\tif (replied == 0) {\n\n\t\tif (warnmsg) {\n\t\t\t(void) reply_text(preq, PBSE_NONE, warnmsg);\n\t\t\tfree(warnmsg);\n\t\t\twarnmsg = NULL;\n\t\t} else\n\t\t\treply_ack(preq);\n\t}\n\n\tfree(problem_nodes);\n\tfree(warn_nodes);\n}\n\n/**\n * @brief\n * \t\tmake_host_addresses_list - return a null terminated list of all of the\n 
*\t\tIP addresses of the named host (phost)\n *\n * @param[in]\tphost\t- named host\n * @param[out]\tpul\t- ptr to null terminated address list is returned in *pul\n *\n * @return\terror code\n * @retval\t0\t- no error\n * @retval\tPBS error\t- error\n */\n\nint\nmake_host_addresses_list(char *phost, u_long **pul)\n{\n\tint i;\n\tint err;\n\tstruct pul_store *tpul = NULL;\n\tint len;\n\tstruct addrinfo *aip, *pai;\n\tstruct addrinfo hints;\n\tstruct sockaddr_in *inp;\n\n\tif ((phost == 0) || (*phost == '\\0'))\n\t\treturn (PBSE_SYSTEM);\n\n\t/* search for the address list in the address list index\n\t * so that we do not hit NS for everything\n\t */\n\tif (hostaddr_idx != NULL) {\n\t\tif (pbs_idx_find(hostaddr_idx, (void **) &phost, (void **) &tpul, NULL) == PBS_IDX_RET_OK) {\n\t\t\t*pul = (u_long *) malloc(tpul->len);\n\t\t\tif (!*pul) {\n\t\t\t\tstrcat(log_buffer, \"out of memory \");\n\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t}\n\t\t\tmemmove(*pul, tpul->pul, tpul->len);\n\t\t\treturn 0;\n\t\t}\n\t}\n\n\tmemset(&hints, 0, sizeof(struct addrinfo));\n\t/*\n\t *      Why do we use AF_UNSPEC rather than AF_INET?  Some\n\t *      implementations of getaddrinfo() will take an IPv6\n\t *      address and map it to an IPv4 one if we ask for AF_INET\n\t *      only.  
We don't want that - we want only the addresses\n\t *      that are genuinely, natively, IPv4 so we start with\n\t *      AF_UNSPEC and filter ai_family below.\n\t */\n\thints.ai_family = AF_UNSPEC;\n\thints.ai_socktype = SOCK_STREAM;\n\thints.ai_protocol = IPPROTO_TCP;\n\tif ((err = getaddrinfo(phost, NULL, &hints, &pai)) != 0) {\n\t\tsprintf(log_buffer,\n\t\t\t\"addr not found for %s h_errno=%d errno=%d\",\n\t\t\tphost, err, errno);\n\t\treturn (PBSE_UNKNODE);\n\t}\n\n\ti = 0;\n\tfor (aip = pai; aip != NULL; aip = aip->ai_next) {\n\t\t/* skip non-IPv4 addresses */\n\t\tif (aip->ai_family == AF_INET)\n\t\t\ti++;\n\t}\n\n\t/* null end it */\n\tlen = sizeof(u_long) * (i + 1);\n\t*pul = (u_long *) malloc(len);\n\tif (*pul == NULL) {\n\t\tfreeaddrinfo(pai);\n\t\tstrcat(log_buffer, \"out of memory \");\n\t\treturn (PBSE_SYSTEM);\n\t}\n\n\ti = 0;\n\tfor (aip = pai; aip != NULL; aip = aip->ai_next) {\n\t\tif (aip->ai_family == AF_INET) {\n\t\t\tu_long ipaddr;\n\n\t\t\tinp = (struct sockaddr_in *) aip->ai_addr;\n\t\t\tipaddr = ntohl(inp->sin_addr.s_addr);\n\t\t\t(*pul)[i] = ipaddr;\n\t\t\ti++;\n\t\t}\n\t}\n\t(*pul)[i] = 0; /* null term array ip adrs */\n\n\tfreeaddrinfo(pai);\n\n\ttpul = malloc(sizeof(struct pul_store));\n\tif (!tpul) {\n\t\tstrcat(log_buffer, \"out of memory\");\n\t\tfree(*pul);\n\t\treturn (PBSE_SYSTEM);\n\t}\n\ttpul->len = len;\n\ttpul->pul = (u_long *) malloc(tpul->len);\n\tif (!tpul->pul) {\n\t\tfree(tpul);\n\t\tfree(*pul);\n\t\tstrcat(log_buffer, \"out of memory\");\n\t\treturn (PBSE_SYSTEM);\n\t}\n\tmemmove(tpul->pul, *pul, tpul->len);\n\n\tif (hostaddr_idx == NULL) {\n\t\tif ((hostaddr_idx = pbs_idx_create(0, 0)) == NULL) {\n\t\t\tfree(tpul->pul);\n\t\t\tfree(tpul);\n\t\t\tfree(*pul);\n\t\t\tstrcat(log_buffer, \"out of memory\");\n\t\t\treturn (PBSE_SYSTEM);\n\t\t}\n\t}\n\tif (pbs_idx_insert(hostaddr_idx, phost, tpul) != PBS_IDX_RET_OK) {\n\t\tfree(tpul->pul);\n\t\tfree(tpul);\n\t\tfree(*pul);\n\t\treturn (PBSE_SYSTEM);\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * 
\t\tremove the cached ip addresses of a mom from the host index and the ipaddrs index\n *\n * @param[in]\tpmom - valid ptr to the mom info\n *\n * @return\terror code\n * @retval\t0\t\t- no error\n * @retval\tPBS error\t- error\n */\nint\nremove_mom_ipaddresses_list(mominfo_t *pmom)\n{\n\t/* take ipaddrs from ipaddrs cache index */\n\tif (hostaddr_idx != NULL) {\n\t\tstruct pul_store *tpul = NULL;\n\t\tvoid *phost = &pmom->mi_host;\n\n\t\tif (pbs_idx_find(hostaddr_idx, &phost, (void **) &tpul, NULL) == PBS_IDX_RET_OK) {\n\t\t\tu_long *pul;\n\t\t\tfor (pul = tpul->pul; *pul; pul++)\n\t\t\t\ttdelete2(*pul, pmom->mi_port, &ipaddrs);\n\n\t\t\tif (pbs_idx_delete(hostaddr_idx, pmom->mi_host) != PBS_IDX_RET_OK)\n\t\t\t\treturn (PBSE_SYSTEM);\n\n\t\t\tfree(tpul->pul);\n\t\t\tfree(tpul);\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tcreate pbs node structure, i.e. add a node\n *\n * @param[in]\tobjname\t- Name of node\n * @param[in]\tplist \t- list of attributes to set on node\n * @param[out]\tbad \t- Return the index of a bad attribute\n * @param[out]\trtnpnode\t- pointer to created node structure\n * @param[in]\tnodup \t- TRUE  - means duplicated name not allowed (qmgr create)\n *\t\t\t \t\t\t\tFALSE - dups are allowed (same vnode under multi Moms)\n * @param[in]\tallow_unkresc\t- TRUE - allow node to have unknown resources\n * \t\t\t\t\t\t\t\t\tFALSE - do not allow unknown resources.\n *\n * @return Error code\n * @retval - 0 - Success\n * @retval - pbs_errno - Failure code\n *\n */\n\nint\ncreate_pbs_node2(char *objname, svrattrl *plist, int perms, int *bad, struct pbsnode **rtnpnode, int nodup, int allow_unkresc)\n{\n\tstruct pbsnode *pnode;\n\tstruct pbsnode **tmpndlist;\n\tint ntype; /* node type, always PBS */\n\tchar *pc;\n\tchar *phost;\t    /* trial host name */\n\tchar *pname;\t    /* node name w/o any :ts       */\n\tu_long *pul = NULL; /* 0 terminated host adrs array*/\n\tint rc;\n\tint j;\n\tint iht;\n\tattribute *pattr;\n\tsvrattrl *plx;\n\tmominfo_t 
*pmom;\n\tmom_svrinfo_t *smp;\n\tresource_def *prdef;\n\tresource *presc;\n\tchar realfirsthost[PBS_MAXHOSTNAME + 1];\n\tint ret;\n\n\tif (rtnpnode != NULL)\n\t\t*rtnpnode = NULL;\n\n\tret = PBSE_NONE;\n\n\t/* change \"Host\" attribute into Mom */\n\t/* a carry over from the past\t     */\n\tplx = plist;\n\twhile (plx) {\n\t\tif (strcasecmp(plx->al_name, \"Host\") == 0) {\n\t\t\t/* this only works because strlen(Mom) < strlen(Host) */\n\t\t\tstrcpy(plx->al_name, \"Mom\");\n\t\t\tbreak;\n\t\t}\n\t\tplx = (svrattrl *) GET_NEXT(plx->al_link);\n\t}\n\n\trc = process_host_name_part(objname, plist, &pname, &ntype);\n\tif (rc)\n\t\treturn (rc);\n\n\tif ((pnode = find_nodebyname(pname)) == NULL) {\n\n\t\t/* need to create the pbs_node entry */\n\n\t\tpnode = (struct pbsnode *) malloc(sizeof(struct pbsnode));\n\t\tif (pnode == NULL) {\n\t\t\tfree(pname);\n\t\t\treturn (PBSE_SYSTEM);\n\t\t}\n\n\t\t/* expand pbsndlist array exactly svr_totnodes long*/\n\t\ttmpndlist = (struct pbsnode **) realloc(pbsndlist,\n\t\t\t\t\t\t\tsizeof(struct pbsnode *) * (svr_totnodes + 1));\n\n\t\tif (tmpndlist != NULL) {\n\t\t\t/*add in the new entry etc*/\n\t\t\tpbsndlist = tmpndlist;\n\t\t\t/* nd_index = 0 (regular node, single parent mom), nd_index = 1 (multiple parent moms, usually Cray) */\n\t\t\tpnode->nd_index = 0;\n\t\t\tpnode->nd_arr_index = svr_totnodes; /* this is only in mem, not from db */\n\t\t\tpbsndlist[svr_totnodes++] = pnode;\n\t\t} else {\n\t\t\tfree(pname);\n\t\t\tfree_pnode(pnode);\n\t\t\treturn (PBSE_SYSTEM);\n\t\t}\n\t\tif (initialize_pbsnode(pnode, pname, ntype) != PBSE_NONE) {\n\t\t\tsvr_totnodes--;\n\t\t\tfree_pnode(pnode);\n\t\t\treturn (PBSE_SYSTEM);\n\t\t}\n\n\t\t/* create and initialize the first subnode to go with */\n\t\t/* the parent node */\n\t\tif (create_subnode(pnode, NULL) == NULL) {\n\t\t\tsvr_totnodes--;\n\t\t\tfree_pnode(pnode);\n\t\t\treturn (PBSE_SYSTEM);\n\t\t}\n\n\t\t/* create node index if not already done */\n\t\tif (node_idx == NULL) {\n\t\t\tif 
((node_idx = pbs_idx_create(0, 0)) == NULL) {\n\t\t\t\tsvr_totnodes--;\n\t\t\t\tfree_pnode(pnode);\n\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t}\n\t\t}\n\n\t\t/* add the node to the index */\n\t\tif (pbs_idx_insert(node_idx, pname, pnode) != PBS_IDX_RET_OK) {\n\t\t\tsvr_totnodes--;\n\t\t\tfree_pnode(pnode);\n\t\t\treturn (PBSE_SYSTEM);\n\t\t}\n\t} else if (nodup == TRUE) {\n\t\t/* duplicating/modifying vnode by qmgr is not allowed */\n\t\t/* as what qmgr creates is the natural vnode          */\n\t\tfree(pname);\n\t\treturn (PBSE_NODEEXIST);\n\t}\n\n\t/*\n\t * Make sure Mom attribute is or will be set.\n\t * Action functions in mgr_set_attr expect it to be set\n\t *\n\t * If it is specified in the provided attrl list, turn the\n\t * operation into an INCR to add any unique host names to those\n\t * already there.\n\t *\n\t * If it isn't specified in the attrl, use the node name.\n\t */\n\tplx = plist;\n\twhile (plx) {\n\t\tif (strcasecmp(plx->al_name, ATTR_NODE_Mom) == 0) {\n\t\t\tbreak;\n\t\t}\n\t\tplx = (svrattrl *) GET_NEXT(plx->al_link);\n\t}\n\n\tif (plx) {\n\t\tplx->al_op = INCR;\n\t} else if (is_nattr_set(pnode, ND_ATR_Mom) == 0) {\n\t\trc = set_nattr_str_slim(pnode, ND_ATR_Mom, pname, NULL);\n\t\tif (rc != PBSE_NONE) {\n\t\t\teffective_node_delete(pnode);\n\t\t\treturn (rc);\n\t\t}\n\t}\n\n\t/* Make sure Port is or will be set */\n\tplx = plist;\n\twhile (plx) {\n\t\tif (strcasecmp(plx->al_name, ATTR_NODE_Port) == 0) {\n\t\t\tbreak;\n\t\t}\n\t\tplx = (svrattrl *) GET_NEXT(plx->al_link);\n\t}\n\tif ((plx == NULL) && (is_nattr_set(pnode, ND_ATR_Port) == 0))\n\t\tset_nattr_l_slim(pnode, ND_ATR_Port, pbs_mom_port, SET);\n\n\t/* OK, set the attributes specified */\n\n\trc = mgr_set_attr2(pnode->nd_attr, node_attr_idx, node_attr_def, ND_ATR_LAST,\n\t\t\t   plist, perms | ATR_PERM_ALLOW_INDIRECT, bad,\n\t\t\t   (void *) pnode, ATR_ACTION_NEW, allow_unkresc);\n\n\tif (rc != 0) {\n\t\t/*\n\t\t * If an attribute could not be resolved, do not delete node\n\t\t * from database, 
for other errors go ahead and delete node.\n\t\t */\n\t\tif (rc != PBSE_UNKRESC)\n\t\t\teffective_node_delete(pnode);\n\n\t\treturn (rc);\n\t}\n\n\t/*\n\t * Ensure \"resources_available.host=HOSTNAME\" is in the resource list.\n\t * Use the first hostname provided, and determine whether it is a\n\t * hostname or IP address. If it is truly a host name, use the short\n\t * form by truncating it at the first '.' character. If it is an IP\n\t * address, use the entire string. Note that RFC 1123 allows\n\t * hostnames to start with a number, so do not simply key off of the\n\t * first character.\n\t */\n\n\tpattr = get_nattr(pnode, ND_ATR_ResourceAvail);\n\n\tprdef = &svr_resc_def[RESC_HOST];\n\tpresc = find_resc_entry(pattr, prdef);\n\tif (presc == NULL) {\n\t\t/* add the entry */\n\t\tpresc = add_resource_entry(pattr, prdef);\n\t\tif (presc) {\n\t\t\tstruct sockaddr_in sa4;\n\t\t\tstruct sockaddr_in6 sa6;\n\n\t\t\tstrncpy(realfirsthost, (get_nattr_arst(pnode, ND_ATR_Mom))->as_string[0], (sizeof(realfirsthost) - 1));\n\t\t\trealfirsthost[PBS_MAXHOSTNAME] = '\\0';\n\n\t\t\tif ((inet_pton(AF_INET, realfirsthost, &(sa4.sin_addr)) != 1) &&\n\t\t\t    (inet_pton(AF_INET6, realfirsthost, &(sa6.sin6_addr)) != 1)) {\n\t\t\t\t/* Not an IPv4 or IPv6 address, truncate it. 
*/\n\t\t\t\tpc = strchr(realfirsthost, '.');\n\t\t\t\tif (pc)\n\t\t\t\t\t*pc = '\\0';\n\t\t\t}\n\t\t\trc = prdef->rs_decode(&presc->rs_value, \"\", \"host\", realfirsthost);\n\t\t\tpresc->rs_value.at_flags |= ATR_VFLAG_DEFLT; /* so not written to nodes file */\n\t\t} else {\n\t\t\trc = PBSE_SYSTEM;\n\t\t}\n\t}\n\n\tif (rc != 0) {\n\t\teffective_node_delete(pnode);\n\t\treturn (rc);\n\t}\n\n\tpnode->nd_hostname = strdup(presc->rs_value.at_val.at_str);\n\tif (pnode->nd_hostname == NULL) {\n\t\teffective_node_delete(pnode);\n\t\treturn (PBSE_SYSTEM);\n\t}\n\tprdef = &svr_resc_def[RESC_VNODE];\n\tpresc = find_resc_entry(pattr, prdef);\n\tif (presc == NULL)\n\t\tpresc = add_resource_entry(pattr, prdef); /* add the entry */\n\n\tif (presc) {\n\t\trc = prdef->rs_decode(&presc->rs_value, NULL, NULL, objname);\n\t\tpresc->rs_value.at_flags |= ATR_VFLAG_DEFLT; /* so not written to nodes file */\n\n\t} else {\n\t\trc = PBSE_SYSTEM;\n\t}\n\n\tif (rc != 0) {\n\t\teffective_node_delete(pnode);\n\t\treturn (rc);\n\t}\n\n\t/*\n\t * Now we need to create the Mom structure for each Mom who is a\n\t * parent of this (v)node.\n\t * The Mom structure may already exist\n\t */\n\n\tpattr = get_nattr(pnode, ND_ATR_Mom);\n\tfor (iht = 0; iht < pattr->at_val.at_arst->as_usedptr; ++iht) {\n\t\tunsigned int nport;\n\n\t\tphost = pattr->at_val.at_arst->as_string[iht];\n\n\t\tif ((rc = make_host_addresses_list(phost, &pul))) {\n\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_NODE, LOG_INFO, pnode->nd_name, log_buffer);\n\n\t\t\t/* special case for unresolved nodes in case of server startup */\n\t\t\tif (rc == PBSE_UNKNODE && get_sattr_long(SVR_ATR_State) == SV_STATE_INIT) {\n\t\t\t\t/*\n\t\t\t\t * mark node as INUSE_UNRESOLVABLE, pbsnodes will show unresolvable state\n\t\t\t\t */\n\t\t\t\tset_vnode_state(pnode, INUSE_UNRESOLVABLE | INUSE_DOWN, Nd_State_Set);\n\n\t\t\t\t/*\n\t\t\t\t * make_host_addresses_list failed, so pul was not allocated\n\t\t\t\t * Since we are going ahead nevertheless, we 
need to allocate\n\t\t\t\t * an \"empty\" pul list\n\t\t\t\t */\n\t\t\t\tfree(pul);\n\t\t\t\tpul = malloc(sizeof(u_long) * (1));\n\t\t\t\tif (pul == NULL) {\n\t\t\t\t\teffective_node_delete(pnode);\n\t\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t\t}\n\t\t\t\tpul[0] = 0;\n\t\t\t\tret = PBSE_UNKNODE; /* set return of function to this, so that error is logged */\n\t\t\t} else {\n\t\t\t\tfree(pul);\n\t\t\t\teffective_node_delete(pnode);\n\t\t\t\treturn (rc); /* return the error code from make_host_addresses_list */\n\t\t\t}\n\t\t}\n\n\t\t/*\n\t\t * Note, once create_svrmom_entry() is called, it has the\n\t\t * responsibility for \"pul\" including freeing it if need be.\n\t\t */\n\n\t\tnport = get_nattr_long(pnode, ND_ATR_Port);\n\n\t\tif ((pmom = create_svrmom_entry(phost, nport, pul)) == NULL) {\n\t\t\tfree(pul);\n\t\t\teffective_node_delete(pnode);\n\t\t\treturn (PBSE_SYSTEM);\n\t\t}\n\n\t\tif (!pbs_iplist) {\n\t\t\tpbs_iplist = create_pbs_iplist();\n\t\t\tif (!pbs_iplist) {\n\t\t\t\treturn (PBSE_SYSTEM); /* No Memory */\n\t\t\t}\n\t\t}\n\t\tsmp = (mom_svrinfo_t *) (pmom->mi_data);\n\t\tfor (j = 0; pmom->mi_dmn_info->dmn_addrs[j]; j++) {\n\t\t\tu_long ipaddr = pmom->mi_dmn_info->dmn_addrs[j];\n\t\t\tif (insert_iplist_element(pbs_iplist, ipaddr)) {\n\t\t\t\tdelete_pbs_iplist(pbs_iplist);\n\t\t\t\treturn (PBSE_SYSTEM); /* No Memory */\n\t\t\t}\n\t\t}\n\n\t\t/* cross link the vnode (pnode) and its Mom (pmom) */\n\n\t\tif ((rc = cross_link_mom_vnode(pnode, pmom)) != 0)\n\t\t\treturn (rc);\n\n\t\t/* If this is the \"natural vnode\" (i.e. 
0th entry) */\n\t\tif (pnode->nd_nummoms == 1) {\n\t\t\tif (is_nattr_set(pnode, ND_ATR_vnode_pool) && get_nattr_long(pnode, ND_ATR_vnode_pool) > 0)\n\t\t\t\tsmp->msr_vnode_pool = get_nattr_long(pnode, ND_ATR_vnode_pool);\n\t\t}\n\t}\n\n\tpnode->nd_modified = 1;\n\tif (rtnpnode != NULL)\n\t\t*rtnpnode = pnode;\n\treturn (ret); /*create completely successful*/\n}\n\n/**\n * @brief\n * \t\tWrapper function to create_pbs_node() but without the 'allow_unkresc'\n * \t\tparameter.\n */\nint\ncreate_pbs_node(char *objname, svrattrl *plist, int perms, int *bad, struct pbsnode **rtnpnode, int nodup)\n{\n\treturn (create_pbs_node2(objname, plist, perms, bad, rtnpnode, nodup, FALSE));\n}\n\n/**\n * @brief\n * \t\tcheck_sister_vnodes_for_delete\t- check for sister vnodes which are eligible for delete.\n * \t\tReturns 1 if any of the supporting vnodes has nd_nummoms equal to one.\n *\n * @param[in]\tpsvrmom\t- mom_svrinfo structure of the mom which needs to be checked\n *\n * @return\treturn code\n * @retval\t0\t- no node to delete.\n * @retval\t1\t- there are nodes to be deleted.\n */\nstatic int\ncheck_sister_vnodes_for_delete(mom_svrinfo_t *psvrmom)\n{\n\tint ct = 0;\n\tint i;\n\n\tif (psvrmom->msr_numvnds <= 1)\n\t\treturn ct;\n\tfor (i = 1; i < psvrmom->msr_numvnds; ++i) {\n\t\tif (psvrmom->msr_children[i]->nd_nummoms == 1) {\n\t\t\t++ct;\n\t\t\tbreak;\n\t\t}\n\t}\n\treturn (ct);\n}\n\n/**\n *  @brief delete a scheduler object\n *\n *  @param[in] preq - Pointer to a batch request structure\n *\n */\nstatic void\nmgr_sched_delete(struct batch_request *preq)\n{\n\tpbs_sched *psched;\n\tpbs_sched *tmpsched;\n\n\tif ((*preq->rq_ind.rq_manager.rq_objname == '\\0') || (*preq->rq_ind.rq_manager.rq_objname == '@')) {\n\t\t/* Delete all except default */\n\t\tpsched = (pbs_sched *) GET_NEXT(svr_allscheds);\n\t\twhile (psched != NULL) {\n\t\t\ttmpsched = psched;\n\t\t\tpsched = (pbs_sched *) GET_NEXT(psched->sc_link);\n\t\t\tif (tmpsched != dflt_scheduler) {\n\t\t\t\tif 
(sched_delete(tmpsched) == PBSE_OBJBUSY) {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"Scheduler %s is busy\", tmpsched->sc_name);\n\t\t\t\t\tlog_err(PBSE_OBJBUSY, __func__, log_buffer);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t} else {\n\t\tpsched = find_sched(preq->rq_ind.rq_manager.rq_objname);\n\t\tif (!psched) {\n\t\t\treq_reject(PBSE_UNKSCHED, 0, preq);\n\t\t\treturn;\n\t\t} else if (psched == dflt_scheduler) {\n\t\t\treq_reject(PBSE_SCHED_NO_DEL, 0, preq);\n\t\t\treturn;\n\t\t}\n\n\t\tif (sched_delete(psched) == PBSE_OBJBUSY) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"Scheduler %s is busy\", psched->sc_name);\n\t\t\tlog_err(PBSE_OBJBUSY, __func__, log_buffer);\n\t\t\t(void) reply_text(preq, PBSE_OBJBUSY, preq->rq_ind.rq_manager.rq_objname);\n\t\t\treturn;\n\t\t}\n\t}\n\treply_ack(preq); /*request completely successful*/\n\treturn;\n}\n\n/*\n * mgr_node_delete - mark a node (or all nodes) in the server's node list\n *                   as being \"deleted\".  
It (they) will no longer get\n *                   assigned to a job, will no longer be pinged, and\n *                   any current job tasks will continue until they end,\n *                   abort, or are killed.\n *\n *  @param[in] preq - Pointer to a batch request structure\n *\n */\n\nstatic void\nmgr_node_delete(struct batch_request *preq)\n{\n\tint numnodes = 1;\n\tstruct pbsnode *pnode;\n\tstruct pbssubn *psub;\n\tchar *nodename;\n\tint rc;\n\n\tint i, len;\n\tint n;\n\tint problem_cnt = 0;\n\tchar *problem_names;\n\tstruct pbsnode **problem_nodes = NULL;\n\tsvrattrl *plist;\n\tmom_svrinfo_t *psvrmom;\n\n\tnodename = preq->rq_ind.rq_manager.rq_objname;\n\n\tif ((*preq->rq_ind.rq_manager.rq_objname == '\\0') ||\n\t    (*preq->rq_ind.rq_manager.rq_objname == '@')) {\n\n\t\t/* In this instance the delete node req is to apply to all */\n\t\t/* nodes at the local ('\\0') or specified ('@') server    */\n\n\t\tif ((pbsndlist != NULL) && svr_totnodes) {\n\t\t\tnodename = all_nodes;\n\t\t\tpnode = *pbsndlist;\n\t\t\tnumnodes = svr_totnodes;\n\t\t} else { /* specified server has no nodes in its node table */\n\t\t\tpnode = NULL;\n\t\t}\n\n\t} else {\n\t\tpnode = find_nodebyname(nodename);\n\t}\n\n\tif (pnode == NULL) {\n\t\treq_reject(PBSE_UNKNODE, 0, preq);\n\t\treturn;\n\t}\n\n\t/* If node being deleted is linked to any queue, clear \"has node\" flag for that queue */\n\tif (pnode->nd_pque != NULL) {\n\t\tset_qattr_l_slim(pnode->nd_pque, QE_ATR_HasNodes, 0, SET);\n\t\tATR_UNSET(get_qattr(pnode->nd_pque, QE_ATR_HasNodes));\n\t}\n\n\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_NODE, LOG_INFO,\n\t\t   nodename, msg_manager, msg_man_del,\n\t\t   preq->rq_user, preq->rq_host);\n\n\t/* if doing many and a problem arises with some, record them for report */\n\t/* the array of \"problem nodes\" sees no use now and may never see use   */\n\tif (numnodes > 1) {\n\t\tpnode = pbsndlist[0];\n\t\tproblem_nodes = (struct pbsnode **) malloc(svr_totnodes * sizeof(struct pbsnode 
*));\n\t\tif (problem_nodes == NULL) {\n\t\t\tlog_err(ENOMEM, __func__, \"out of memory\");\n\t\t\treturn;\n\t\t}\n\t\tproblem_cnt = 0;\n\t}\n\n\t/*set \"deleted\" bit in node's (nodes, numnodes == 1) \"inuse\" field*/\n\t/*remove entire prop list, including the node name, from the node */\n\t/*remove the IP address array hanging from the node               */\n\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\tfor (i = 0; i < svr_totnodes; i++) {\n\t\tif (numnodes > 1)\n\t\t\tpnode = pbsndlist[i];\n\n\t\trc = 0;\n\n\t\tif (pnode->nd_state & INUSE_DELETED)\n\t\t\tcontinue; /* already deleted */\n\t\tif (pnode->nd_state & INUSE_PROV)\n\t\t\trc = PBSE_NODEPROV_NODEL;\n\t\telse {\n\t\t\tif (find_vnode_in_resvs(pnode, Skip_Degraded_Time) != NULL)\n\t\t\t\trc = PBSE_OBJBUSY;\n\t\t\tfor (psub = pnode->nd_psn; psub != 0; psub = psub->next) {\n\t\t\t\tif (psub->jobs)\n\t\t\t\t\trc = PBSE_OBJBUSY;\n\t\t\t}\n\t\t}\n\n\t\tif (pnode->nd_nummoms == 1) {\n\t\t\tpsvrmom = pnode->nd_moms[0]->mi_data;\n\t\t\tif ((numnodes == 1) &&\n\t\t\t    (psvrmom->msr_numvnds > 1) &&\n\t\t\t    (psvrmom->msr_children[0] == pnode)) {\n\t\t\t\t/*\n\t\t\t\t * this is the \"natural\" vnode for a Mom with\n\t\t\t\t * multiple vnodes.  
If there are running jobs, or the\n\t\t\t\t * other vnodes under her do not have another\n\t\t\t\t * Mom, we cannot delete this vnode (and the Mom)\n\t\t\t\t * unless we are deleting all vnodes.\n\t\t\t\t */\n\t\t\t\tif (psvrmom->msr_numjobs > 0) {\n\t\t\t\t\trc = PBSE_OBJBUSY;\n\t\t\t\t} else {\n\t\t\t\t\tn = check_sister_vnodes_for_delete(psvrmom);\n\t\t\t\t\tif (n > 0)\n\t\t\t\t\t\trc = PBSE_OBJBUSY;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif (rc != 0) {\n\n\t\t\tif (numnodes > 1) {\n\t\t\t\tif (problem_nodes) /* we have an array in which to save */\n\t\t\t\t\tproblem_nodes[problem_cnt] = pnode;\n\n\t\t\t\t++problem_cnt;\n\n\t\t\t} else { /* In the specific node case, reply w/ error and return */\n\t\t\t\tswitch (rc) {\n\n\t\t\t\t\tdefault:\n\t\t\t\t\t\treq_reject(rc, 0, preq);\n\t\t\t\t}\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t} else { /* modifications succeeded for this node */\n\t\t\tnodename = strdup(pnode->nd_name);\n\t\t\teffective_node_delete(pnode);\n\t\t\tpnode = NULL; /* pnode has been freed, set it to NULL */\n\t\t\ti--;\t      /* the array has been coalesced, so reset i to the earlier position */\n\n\t\t\tif (nodename) {\n\t\t\t\tmgr_log_attr(msg_man_set, plist, PBS_EVENTCLASS_NODE, nodename, NULL);\n\t\t\t\tfree(nodename);\n\t\t\t\tnodename = NULL;\n\t\t\t}\n\t\t}\n\t\tif (numnodes == 1)\n\t\t\tbreak;\n\t} /* bottom of the for() */\n\n\tsave_nodes_db(1, NULL);\n\n\tif (numnodes > 1) { /* modification was for all nodes */\n\n\t\tif (problem_cnt) { /* one or more problems encountered */\n\n\t\t\tfor (len = 0, i = 0; i < problem_cnt; i++)\n\t\t\t\tlen += strlen(problem_nodes[i]->nd_name) + 3;\n\n\t\t\tlen += strlen(pbse_to_txt(PBSE_GMODERR));\n\n\t\t\tif ((problem_names = malloc(len)) != NULL) {\n\n\t\t\t\tstrcpy(problem_names, pbse_to_txt(PBSE_GMODERR));\n\t\t\t\tfor (i = 0; i < problem_cnt; i++) {\n\t\t\t\t\tif (i)\n\t\t\t\t\t\tstrcat(problem_names, \", \");\n\t\t\t\t\tstrcat(problem_names, problem_nodes[i]->nd_name);\n\t\t\t\t}\n\n\t\t\t\t(void) reply_text(preq, PBSE_GMODERR, 
problem_names);\n\t\t\t\tfree(problem_names);\n\t\t\t\tproblem_names = NULL;\n\t\t\t} else {\n\t\t\t\t(void) reply_text(preq, PBSE_GMODERR, pbse_to_txt(PBSE_GMODERR));\n\t\t\t}\n\t\t}\n\n\t\tfree(problem_nodes); /* may be NULL if the malloc failed */\n\t\tproblem_nodes = NULL;\n\n\t\tif (problem_cnt) /* reply has already been sent */\n\t\t\treturn;\n\t}\n\n\treply_ack(preq);\n}\n\n/**\n * @brief\n *\t\tmgr_sched_create - process request to create a sched\n *\n *\t\tCreates sched\n *\n *  @param[in]\tpreq\t- Pointer to a batch request structure\n */\nstatic void\nmgr_sched_create(struct batch_request *preq)\n{\n\tint bad;\n\tsvrattrl *plist;\n\tpbs_sched *psched;\n\tint rc;\n\n\tif (strlen(preq->rq_ind.rq_manager.rq_objname) > PBS_MAXSCHEDNAME) {\n\t\treq_reject(PBSE_SCHED_NAME_BIG, 0, preq);\n\t\treturn;\n\t}\n\tif (find_sched(preq->rq_ind.rq_manager.rq_objname)) {\n\t\treq_reject(PBSE_SCHEDEXIST, 0, preq);\n\t\treturn;\n\t}\n\n\tpsched = sched_alloc(preq->rq_ind.rq_manager.rq_objname);\n\tif (!psched) {\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\treturn;\n\t}\n\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\trc = mgr_set_attr(psched->sch_attr, sched_attr_idx, sched_attr_def, SCHED_ATR_LAST, plist,\n\t\t\t  preq->rq_perm, &bad, (void *) psched, ATR_ACTION_NEW);\n\tif (rc != 0) {\n\t\treply_badattr(rc, bad, plist, preq);\n\t\tsched_free(psched);\n\t\treturn;\n\t}\n\n\tset_sched_default(psched, 0);\n\n\tsched_save_db(psched);\n\tsnprintf(log_buffer, LOG_BUF_SIZE, msg_manager, msg_man_set, preq->rq_user, preq->rq_host);\n\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SCHED, LOG_INFO, msg_daemonname, log_buffer);\n\tmgr_log_attr(msg_man_set, plist, PBS_EVENTCLASS_SCHED, msg_daemonname, NULL);\n\treply_ack(preq);\n}\n\n/**\n * @brief\n *\t\tmgr_node_create - process request to create a node\n *\n *\t\tCreates pbsnode and calls mgr_set_attr to set\n *      any associated node attributes also specified in the\n *      request.\n *\n *  @param[in]\tpreq\t- Pointer to a batch 
request structure\n */\n\nvoid\nmgr_node_create(struct batch_request *preq)\n{\n\tint bad;\n\tsvrattrl *plist;\n\tint rc;\n\tchar *vnp; /* Temp storage to validate (v)node name */\n\tlong vn_pool;\n\tstruct pbsnode *pnode;\n\tmominfo_t *mymom;\n\tstruct sockaddr_in check_ip;\n\tint is_node_ip;\n\tchar *nodename;\n\n\tnodename = preq->rq_ind.rq_manager.rq_objname;\n\n\t/*\n\t * Before creating the (v)node, validate the (v)node name using\n\t * legal_vnode_char() to check if it contains any invalid character.\n\t * If any invalid char found, then reject the batch req with\n\t * PBSE_BADNDATVAL.\n\t */\n\tif ((vnp = nodename) != NULL) {\n\t\tfor (; *vnp && legal_vnode_char(*vnp, 1); vnp++)\n\t\t\t;\n\t\tif (*vnp) {\n\t\t\treq_reject(PBSE_BADNDATVAL, 0, preq);\n\t\t\treturn;\n\t\t}\n\t\t/* Condition to make sure that node name should not exceed\n\t\t * PBS_MAXHOSTNAME i.e. 64 characters. This is because the\n\t\t * corresponding column nd_name in the database table pbs.node\n\t\t * is defined as string of length 64.\n\t\t */\n\t\tif (strlen(nodename) > PBS_MAXHOSTNAME) {\n\t\t\treq_reject(PBSE_NODENBIG, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n\tis_node_ip = inet_pton(AF_INET, nodename, &(check_ip.sin_addr));\n\tif (is_node_ip > 0) {\n\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_INFO, nodename,\n\t\t\t   \"Node added using IPv4 address\\nVerify that PBS_MOM_NODE_NAME is configured on the respective host\");\n\t}\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\trc = create_pbs_node(nodename, plist, preq->rq_perm, &bad, &pnode, TRUE);\n\n\tif (rc != 0) {\n\n\t\tswitch (rc) {\n\t\t\tcase PBSE_INTERNAL:\n\t\t\tcase PBSE_SYSTEM:\n\t\t\t\treq_reject(rc, bad, preq);\n\t\t\t\tbreak;\n\n\t\t\tcase PBSE_NOATTR:\n\t\t\tcase PBSE_ATTRRO:\n\t\t\tcase PBSE_MUTUALEX:\n\t\t\tcase PBSE_BADNDATVAL:\n\t\t\t\treply_badattr(rc, bad, plist, preq);\n\t\t\t\tbreak;\n\n\t\t\tdefault:\n\t\t\t\treq_reject(rc, 0, preq);\n\t\t}\n\t\treturn;\n\t}\n\n\tmymom = 
pnode->nd_moms[0];\n\tif (is_nattr_set(pnode, ND_ATR_vnode_pool) && ((vn_pool = get_nattr_long(pnode, ND_ATR_vnode_pool)) > 0)) {\n\t\tif (add_mom_to_pool(mymom) == PBSE_NONE) {\n\t\t\t/* cross link any vnodes of an existing Mom in pool */\n\t\t\tint i;\n\t\t\tmom_svrinfo_t *ppoolm = NULL;\n\t\t\tvnpool_mom_t *ppool = vnode_pool_mom_list;\n\t\t\twhile (ppool) {\n\t\t\t\tif (ppool->vnpm_vnode_pool == vn_pool) {\n\t\t\t\t\tif (ppool->vnpm_inventory_mom) {\n\t\t\t\t\t\tppoolm = (mom_svrinfo_t *) (ppool->vnpm_inventory_mom->mi_data);\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tppool = ppool->vnpm_next;\n\t\t\t}\n\t\t\tif (ppoolm) {\n\t\t\t\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_NODE, LOG_INFO,\n\t\t\t\t\t   mymom->mi_host, \"POOL: cross linking %d vnodes from %s\",\n\t\t\t\t\t   ppoolm->msr_numvnds - 1, ppool->vnpm_inventory_mom->mi_host);\n\t\t\t\tfor (i = 1; i < ppoolm->msr_numvnds; ++i) {\n\t\t\t\t\tcross_link_mom_vnode(ppoolm->msr_children[i], mymom);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\tmgr_log_attr(msg_man_set, plist, PBS_EVENTCLASS_NODE, nodename, NULL);\n\n\tsetup_notification(); /* set mechanism for notifying */\n\t/* other nodes of new member   */\n\n\tsave_nodes_db(1, NULL);\n\n\treply_ack(preq); /*create completely successful*/\n}\n\n/**\n * @brief\n * \t\tHelper function to find a resource entry in an attribute's resource list;\n * \t\tused when deleting and setting resources\n *\n * @par\n * \t\tThe caller must ensure that the parameters are valid and not NULL\n *\n * @param[in]\tpattr\t- attribute structure which needs to be checked\n * @param[in]\tprdef\t- definition of the resource to look for\n *\n * @return\ta pointer to the resource\n * @retval\tNULL\t- if the resource is not set\n */\nstatic resource *\nget_resource(attribute *pattr, resource_def *prdef)\n{\n\tresource *presc;\n\n\tif (is_attr_set(pattr)) {\n\t\tif (pattr->at_type == ATR_TYPE_RESC) {\n\t\t\tpresc = (resource *) GET_NEXT(pattr->at_val.at_list);\n\t\t\twhile (presc) {\n\t\t\t\tif (presc->rs_defin == prdef) {\n\t\t\t\t\treturn 
presc;\n\t\t\t\t}\n\t\t\t\tpresc = (resource *) GET_NEXT(presc->rs_link);\n\t\t\t}\n\t\t}\n\t}\n\treturn NULL;\n}\n\n/**\n * @brief\n * \t\tis_entity_resource_set\t- checks for a resource name in all the resources\n * \t\t\t\t\t\t\t\t\tin the tree within the attribute structure\n *\n * @param[in]\tpattr\t- attribute structure which contains the resource tree root\n * @param[in]\tresc_name\t- resource name\n *\n * @return\tint\n * @retval\t1\t- resource is set\n * @retval\t0\t- resource is not set\n */\nint\nis_entity_resource_set(attribute *pattr, char *resc_name)\n{\n\tif (is_attr_set(pattr)) {\n\t\tchar *key = NULL;\n\t\tvoid *ctx = pattr->at_val.at_enty.ae_tree;\n\t\tchar resc[PBS_MAX_RESC_NAME + 1];\n\n\t\twhile (entlim_get_next(ctx, (void **) &key) != NULL) {\n\t\t\tif (entlim_resc_from_key(key, resc, PBS_MAX_RESC_NAME) == 0) {\n\t\t\t\tif (strcmp(resc, resc_name) == 0)\n\t\t\t\t\treturn 1;\n\t\t\t}\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tHelper function to check if a resource is set on jobs or reservations.\n * @par\n * \t\tIf a resource is busy on an object, this function will reply to\n * \t\tthe client's request.\n *\n * @param[in]\tpreq\t- The client's batch request\n * @param[in]\tprdef\t- The resource definition to check on\n * @param[in]\tmod\t- Set to 1 if the resource type or flag are being modified\n *\n * @return\tBOOLEAN\n * @retval\t1\t- If resource is busy on jobs or reservations\n * @retval\t0\t- Otherwise\n */\nstatic int\ncheck_resource_set_on_jobs_or_resvs(struct batch_request *preq, resource_def *prdef, int mod)\n{\n\tjob *pj;\n\tresc_resv *pr;\n\tchar *rmatch;\n\tint rlen;\n\tresource *presc;\n\tresource *presc_list;\n\tresource *presc_used;\n\n\t/* Reject if resource is on a job and the type or flag are being modified */\n\n\tfor (pj = (job *) GET_NEXT(svr_alljobs); pj != NULL; pj = (job *) GET_NEXT(pj->ji_alljobs)) {\n\t\tpresc_list = get_resource(get_jattr(pj, JOB_ATR_resc_used), prdef);\n\t\tpresc_used = 
get_resource(get_jattr(pj, JOB_ATR_resource), prdef);\n\t\tif (((presc_list != NULL) || (presc_used != NULL)) && (mod == 1)) {\n\t\t\treply_text(preq, PBSE_RESCBUSY, \"Resource busy on job\");\n\t\t\treturn 1;\n\t\t}\n\t\tif (is_jattr_set(pj, JOB_ATR_SchedSelect)) {\n\t\t\tchar *val = get_jattr_str(pj, JOB_ATR_SchedSelect);\n\t\t\trmatch = strstr(val, prdef->rs_name);\n\t\t\tif (rmatch != NULL) {\n\t\t\t\trlen = strlen(prdef->rs_name);\n\t\t\t\tif (((mod == 1) && (*(rmatch + rlen) == '=')) &&\n\t\t\t\t    ((rmatch == val) || *(rmatch - 1) == ':')) {\n\t\t\t\t\treply_text(preq, PBSE_RESCBUSY, \"Resource busy on job\");\n\t\t\t\t\treturn 1;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t/* Reject if resource is on a reservation and the type or flag are being modified */\n\tpr = (resc_resv *) GET_NEXT(svr_allresvs);\n\twhile (pr != NULL) {\n\t\tpresc = get_resource(get_rattr(pr, RESV_ATR_resource), prdef);\n\t\tif ((presc != NULL) && (mod == 1)) {\n\t\t\treply_text(preq, PBSE_RESCBUSY, \"Resource busy on reservation\");\n\t\t\treturn 1;\n\t\t}\n\t\tif (is_rattr_set(pr, RESV_ATR_SchedSelect)) {\n\t\t\trmatch = strstr(get_rattr_str(pr, RESV_ATR_SchedSelect), prdef->rs_name);\n\t\t\tif (rmatch != NULL) {\n\t\t\t\trlen = strlen(prdef->rs_name);\n\t\t\t\tif (((*(rmatch + rlen) == '=') && (*(rmatch - 1) == ':')) && (mod == 1)) {\n\t\t\t\t\treply_text(preq, PBSE_RESCBUSY, \"Resource busy on reservation\");\n\t\t\t\t\treturn 1;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tpr = (resc_resv *) GET_NEXT(pr->ri_allresvs);\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\thelper function to send/update resourcedef file.\n */\nstatic void\ntimed_send_rescdef(struct work_task *pwt)\n{\n\tsend_rescdef(1); /* forcing with 1 to avoid failures due to intermittent file stamp race issues */\n\trescdef_wt_g = NULL;\n}\n\n/**\n * @brief\n * \t\tdeferred send/update resourcedef file\n *\n * \t\tBursty updates to resources may result in a large number of requests to\n * \t\tupdate the resourcedef file on the MoMs. To avoid 
piling up the requests,\n * \t\tthe update is sent after one second from the last request.\n */\nstatic void\ndeferred_send_rescdef()\n{\n\tif (rescdef_wt_g) {\n\t\tdelete_task(rescdef_wt_g);\n\t}\n\trescdef_wt_g = set_task(WORK_Timed, ((long) (time(0))) + 1, timed_send_rescdef, NULL);\n}\n\n/**\n * @brief\n * \t\tCreate a resource\n *\n * @param[in]\tpreq\t- The request containing information about the resource to\n * \t\t     \t\t\t\tcreate\n *\n * @par\n * \t\tRequests to create a resource with a name that may already exist are\n * \t\trejected.\n *\n * @return\tvoid\n */\nstatic void\nmgr_resource_create(struct batch_request *preq)\n{\n\tchar *resc;\n\tchar buf[LOG_BUF_SIZE];\n\tint rc;\n\tsvrattrl *plist;\n\tint type = ATR_TYPE_STR;\n\tint flags = READ_WRITE;\n\tint flag_ir = 0;\n\tresource_def *prdef;\n\n\tif ((resc = preq->rq_ind.rq_manager.rq_objname) == NULL) {\n\t\treq_reject(PBSE_BADATVAL, 0, preq);\n\t\treturn;\n\t}\n\n\trc = verify_resc_name(resc);\n\tif (rc != 0) {\n\t\treq_reject(PBSE_BADATVAL, 0, preq);\n\t\treturn;\n\t}\n\n\tprdef = find_resc_def(svr_resc_def, resc);\n\tif (prdef != NULL) {\n\t\treq_reject(PBSE_DUPLIST, 0, preq);\n\t\treturn;\n\t}\n\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\twhile (plist) {\n\t\tif (strcmp(plist->al_atopl.name, ATTR_RESC_TYPE) == 0) {\n\t\t\tif (parse_resc_type(plist->al_atopl.value, &type) == -1) {\n\t\t\t\treq_reject(PBSE_BADATVAL, 0, preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else if (strcmp(plist->al_atopl.name, ATTR_RESC_FLAG) == 0) {\n\t\t\tif (parse_resc_flags(plist->al_atopl.value, &flag_ir, &flags) == -1) {\n\t\t\t\treq_reject(PBSE_BADATVAL, 0, preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else {\n\t\t\treq_reject(PBSE_BADATVAL, 0, preq);\n\t\t\treturn;\n\t\t}\n\t\tplist = (svrattrl *) GET_NEXT(plist->al_link);\n\t}\n\n\tif (verify_resc_type_and_flags(type, &flag_ir, &flags, resc, buf, sizeof(buf), 0) != 0) {\n\t\treply_text(preq, PBSE_BADATVAL, buf);\n\t\treturn;\n\t}\n\n\trc = 
add_resource_def(resc, type, flags);\n\tif (rc < 0) {\n\t\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER, LOG_ERR, msg_daemonname, \"resource %s can not be defined\", resc);\n\t\treq_reject(PBSE_BADATVAL, 0, preq);\n\t\treturn;\n\t}\n\n\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_RESC, LOG_INFO, resc, msg_manager, msg_man_cre, preq->rq_user, preq->rq_host);\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\tmgr_log_attr(msg_man_set, plist, PBS_EVENTCLASS_RESC, preq->rq_ind.rq_manager.rq_objname, NULL);\n\n\tif (flags & (ATR_DFLAG_RASSN | ATR_DFLAG_FNASSN | ATR_DFLAG_ANASSN)) {\n\t\tupdate_resc_sum();\n\t}\n\treply_ack(preq);\n\n\trestart_python_interpreter(__func__);\n\tdeferred_send_rescdef();\n\n\treturn;\n}\n\n/**\n * @brief\n * \t\tDelete a resource\n *\n * @param[in]\tpreq\t- The request containing information about the resource to\n * \t\t     \t\t\t\tdelete\n *\n * @par\n * \t\tRequests to delete resources that are set on a job or reservation\n * \t\tare rejected. 
If the resource is set on any other object, i.e., server,\n * \t\tqueue, or node, the resource is unset from the associated object.\n *\n * @return void\n */\nstatic void\nmgr_resource_delete(struct batch_request *preq)\n{\n\tchar *resc;\n\tint rc;\n\tint i, j;\n\tpbs_queue *pq;\n\tresource_def *prdef;\n\tattribute *pattr;\n\tresource_def *svr_rd;\n\tresource_def *svr_rd_prev;\n\tsvrattrl *plist;\n\tint bad;\n\tint updatedb; /* set to 1 if a db update is needed */\n\n\tif ((resc = preq->rq_ind.rq_manager.rq_objname) == NULL) {\n\t\treq_reject(PBSE_BADATVAL, 0, preq);\n\t\treturn;\n\t}\n\n\tif ((prdef = find_resc_def(svr_resc_def, resc)) == NULL) {\n\t\treq_reject(PBSE_UNKRESC, 0, preq);\n\t\treturn;\n\t}\n\n\tif (!prdef->rs_custom) {\n\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\treturn;\n\t}\n\n\trc = check_resource_set_on_jobs_or_resvs(preq, prdef, 1);\n\tif (rc == 1)\n\t\treturn;\n\n\t/* check if resource is set in restrict_res_to_release_on_suspend */\n\tif (is_sattr_set(SVR_ATR_restrict_res_to_release_on_suspend)) {\n\t\tstruct array_strings *pval = get_sattr_arst(SVR_ATR_restrict_res_to_release_on_suspend);\n\t\tfor (i = 0; pval != NULL && i < pval->as_usedptr; i++) {\n\t\t\tif (strcmp(pval->as_string[i], prdef->rs_name) == 0) {\n\t\t\t\treply_text(preq, PBSE_RESCBUSY, \"Resource busy on server\");\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t}\n\n\trc = update_resource_def_file(resc, RESDEF_DELETE, prdef->rs_type, prdef->rs_flags);\n\tif (rc != 0) {\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_RESC, LOG_ERR, msg_daemonname, \"Error updating resource definitions\");\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\treturn;\n\t}\n\n\t/* Is the resource set on queues? 
If so unset */\n\tpq = (pbs_queue *) GET_NEXT(svr_queues);\n\twhile (pq != NULL) {\n\t\tupdatedb = 0;\n\t\tfor (i = 0; i < QA_ATR_LAST; i++) {\n\t\t\tpattr = get_qattr(pq, i);\n\t\t\tif (is_attr_set(pattr) && (pattr->at_type == ATR_TYPE_RESC || pattr->at_type == ATR_TYPE_ENTITY)) {\n\t\t\t\tplist = attrlist_create(que_attr_def[i].at_name, prdef->rs_name, 0);\n\t\t\t\tplist->al_link.ll_next->ll_struct = NULL;\n\t\t\t\trc = mgr_unset_attr(pq->qu_attr, que_attr_idx, que_attr_def, QA_ATR_LAST, plist, -1, &bad, (void *) pq, PARENT_TYPE_QUE_ALL, INDIRECT_RES_CHECK);\n\t\t\t\tif (rc != 0) {\n\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_RESC, LOG_DEBUG, resc, \"error unsetting resource %s.%s\", que_attr_def[i].at_name, prdef->rs_name);\n\t\t\t\t\treply_badattr(rc, bad, plist, preq);\n\t\t\t\t\tfree_svrattrl(plist);\n\t\t\t\t\treturn;\n\t\t\t\t} else {\n\t\t\t\t\t/* default_chunk requires special handling because\n\t\t\t\t\t * the server keeps track of defaults to add to\n\t\t\t\t\t * schedselect @see qu_seldft\n\t\t\t\t\t */\n\t\t\t\t\tif (i == QE_ATR_DefaultChunk && (get_qattr(pq, QE_ATR_DefaultChunk))->at_flags & ATR_VFLAG_MODIFY)\n\t\t\t\t\t\t(void) deflt_chunk_action(get_qattr(pq, QE_ATR_DefaultChunk), (void *) pq, ATR_ACTION_ALTER);\n\t\t\t\t}\n\t\t\t\tupdatedb = 1;\n\t\t\t\tfree_svrattrl(plist);\n\t\t\t}\n\t\t}\n\t\tif (updatedb) {\n\t\t\tque_save_db(pq);\n\t\t}\n\t\tpq = (pbs_queue *) GET_NEXT(pq->qu_link);\n\t}\n\n\tupdatedb = 0;\n\t/* Is the resource set on the server? 
If so unset */\n\tfor (i = 0; i < SVR_ATR_LAST; i++) {\n\t\tpattr = get_sattr(i);\n\t\tif (is_attr_set(pattr) && (pattr->at_type == ATR_TYPE_RESC || pattr->at_type == ATR_TYPE_ENTITY)) {\n\t\t\tplist = attrlist_create(svr_attr_def[i].at_name, prdef->rs_name, 0);\n\t\t\tplist->al_link.ll_next->ll_struct = NULL;\n\t\t\trc = mgr_unset_attr(server.sv_attr, svr_attr_idx, svr_attr_def, SVR_ATR_LAST, plist, -1, &bad, (void *) &server, PARENT_TYPE_SERVER, INDIRECT_RES_CHECK);\n\t\t\tif (rc != 0) {\n\t\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_RESC, LOG_DEBUG, resc, \"error unsetting resource %s.%s\", svr_attr_def[i].at_name, prdef->rs_name);\n\t\t\t\treply_badattr(rc, bad, plist, preq);\n\t\t\t\tfree_svrattrl(plist);\n\t\t\t\treturn;\n\t\t\t} else {\n\t\t\t\t/* default_chunk requires special handling because\n\t\t\t\t * the server keeps track of defaults to add to\n\t\t\t\t * schedselect @see sv_seldft\n\t\t\t\t */\n\t\t\t\tattribute *dfltchk_attr;\n\t\t\t\tif ((i == SVR_ATR_DefaultChunk) && ((dfltchk_attr = get_sattr(SVR_ATR_DefaultChunk))->at_flags & ATR_VFLAG_MODIFY)) {\n\t\t\t\t\t(void) deflt_chunk_action(dfltchk_attr, (void *) &server, ATR_ACTION_ALTER);\n\t\t\t\t}\n\t\t\t}\n\t\t\tfree_svrattrl(plist);\n\t\t\tupdatedb = 1;\n\t\t}\n\t}\n\tif (updatedb) {\n\t\tsvr_save_db(&server);\n\t}\n\n\t/* Is the resource set on nodes? 
If so unset */\n\tfor (i = 0; i < svr_totnodes; i++) {\n\t\tupdatedb = 0;\n\t\tfor (j = 0; j < ND_ATR_LAST; j++) {\n\t\t\tpattr = get_nattr(pbsndlist[i], j);\n\t\t\tif (is_attr_set(pattr) && (pattr->at_type == ATR_TYPE_RESC || pattr->at_type == ATR_TYPE_ENTITY)) {\n\t\t\t\tplist = attrlist_create(node_attr_def[j].at_name, prdef->rs_name, 0);\n\t\t\t\tplist->al_link.ll_next->ll_struct = NULL;\n\t\t\t\trc = mgr_unset_attr(pbsndlist[i]->nd_attr, node_attr_idx, node_attr_def, ND_ATR_LAST, plist, -1, &bad, (void *) pbsndlist[i], PARENT_TYPE_NODE, INDIRECT_RES_UNLINK);\n\t\t\t\tif (rc != 0) {\n\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_RESC, LOG_DEBUG, resc, \"error unsetting resource %s.%s\", node_attr_def[j].at_name, prdef->rs_name);\n\t\t\t\t\treply_badattr(rc, bad, plist, preq);\n\t\t\t\t\tfree_svrattrl(plist);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tfree_svrattrl(plist);\n\t\t\t\tupdatedb = 1;\n\t\t\t}\n\t\t}\n\t\tif (updatedb) {\n\t\t\tnode_save_db(pbsndlist[i]);\n\t\t}\n\t}\n\n\tfor (i = 0; svr_resc_sum[i].rs_def != NULL; i++) {\n\t\tif (svr_resc_sum[i].rs_def == prdef) {\n\t\t\tfor (j = i; svr_resc_sum[j].rs_def != NULL; j++) {\n\t\t\t\tsvr_resc_sum[j].rs_def = svr_resc_sum[j + 1].rs_def;\n\t\t\t\tsvr_resc_sum[j].rs_prs = svr_resc_sum[j + 1].rs_prs;\n\t\t\t\tsvr_resc_sum[j].rs_attr = svr_resc_sum[j + 1].rs_attr;\n\t\t\t\tsvr_resc_sum[j].rs_set = svr_resc_sum[j + 1].rs_set;\n\t\t\t}\n\t\t\tbreak;\n\t\t}\n\t}\n\n\tfor (svr_rd_prev = NULL, svr_rd = svr_resc_def; svr_rd != NULL; svr_rd_prev = svr_rd, svr_rd = svr_rd->rs_next) {\n\t\tif (svr_rd == prdef) {\n\t\t\tif (svr_rd_prev != NULL) {\n\t\t\t\tsvr_rd_prev->rs_next = svr_rd->rs_next;\n\t\t\t} else {\n\t\t\t\tsvr_resc_def = svr_rd->rs_next;\n\t\t\t}\n\t\t\tif (pbs_idx_delete(resc_attrdef_idx, prdef->rs_name) != PBS_IDX_RET_OK)\n\t\t\t\tlog_errf(-1, __func__, \"Could not remove %s from server resource index\", prdef->rs_name);\n\t\t\tfree(prdef->rs_name);\n\t\t\tfree(prdef);\n\t\t\tprdef = 
NULL;\n\t\t\tsvr_resc_size--;\n\t\t\tbreak;\n\t\t}\n\t}\n\n\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_RESC, LOG_INFO, resc, msg_manager, msg_man_del, preq->rq_user, preq->rq_host);\n\n\treply_ack(preq);\n\n\trestart_python_interpreter(__func__);\n\tdeferred_send_rescdef();\n\tset_scheduler_flag(SCH_CONFIGURE, NULL);\n\n\treturn;\n}\n\n/**\n * @brief\n * \t\tSet a resource type and/or flag\n *\n * @param[in]\tpreq\t- The request containing information about the resource to\n * \t\t     \t\t\t\tset\n *\n * @par\n * \t\tRequests to set a resource type are only possible if the resource\n * \t\tis not set on any object, i.e. not on any job, reservation, node, queue, or\n * \t\tserver.\n *\n * @par\n * \t\tRequests to set a resource flag are only possible if the resource\n * \t\tis not set on any job or reservation. Setting/resetting a resource's\n * \t\tvisibility flags i or r is however always honored regardless of whether the\n * \t\tresource is defined on objects or not.\n *\n * @return void\n */\nstatic void\nmgr_resource_set(struct batch_request *preq)\n{\n\tchar *resc;\n\tchar buf[LOG_BUF_SIZE];\n\tresource_def *prdef;\n\tsvrattrl *plist;\n\tint type;\n\tint flags;\n\tint o_flags;\n\tint flag_ir = 0;\n\tint i, j;\n\tattribute *pattr;\n\tresource *presc;\n\tpbs_queue *pq;\n\tint mod_type = 0;\n\tint mod_flag = 0;\n\tint busy;\n\tint rc;\n\tstruct resc_type_map *p_resc_type_map = NULL;\n\n\tif ((preq->rq_perm & PERM_MANAGER) == 0) {\n\t\treq_reject(PBSE_PERM, 0, preq);\n\t\treturn;\n\t}\n\n\tif ((resc = preq->rq_ind.rq_manager.rq_objname) == NULL) {\n\t\treq_reject(PBSE_BADATVAL, 0, preq);\n\t\treturn;\n\t}\n\n\tprdef = find_resc_def(svr_resc_def, resc);\n\tif (prdef == NULL) {\n\t\treq_reject(PBSE_UNKRESC, 0, preq);\n\t\treturn;\n\t}\n\n\tif (!prdef->rs_custom) {\n\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\treturn;\n\t}\n\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\tif (plist == NULL) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t 
\"missing type and/or permissions for '%s'\", resc);\n\t\treq_reject(PBSE_BADATVAL, 0, preq);\n\t\treturn;\n\t}\n\twhile (plist != NULL) {\n\t\tif (strcmp(plist->al_atopl.name, ATTR_RESC_TYPE) == 0) {\n\t\t\t/* we will reject any request that modifies type if the\n\t\t\t * resource is set on any object\n\t\t\t */\n\t\t\tmod_type = 1;\n\t\t} else if (strcmp(plist->al_atopl.name, ATTR_RESC_FLAG) == 0) {\n\t\t\tconst char *f = \"fhnq\";\n\t\t\tchar *new = plist->al_atopl.value;\n\t\t\tchar *old = find_resc_flag_map(prdef->rs_flags);\n\n\t\t\t/* If any of \"fhnq\" flags are being added, then reject\n\t\t\t * the request if the resource is set on\n\t\t\t * jobs/reservations.\n\t\t\t *\n\t\t\t * Adding flags r or i is allowed\n\t\t\t */\n\t\t\tfor (i = 0; (i < strlen(f)) && (mod_flag == 0); i++) {\n\t\t\t\tif ((strchr(old, f[i]) == NULL) &&\n\t\t\t\t    (strchr(new, f[i]) != NULL)) {\n\t\t\t\t\tmod_flag = 1;\n\t\t\t\t} else if ((strchr(old, f[i]) != NULL) &&\n\t\t\t\t\t   (strchr(new, f[i]) == NULL)) {\n\t\t\t\t\tmod_flag = 1;\n\t\t\t\t}\n\t\t\t}\n\t\t\tfree(old);\n\t\t}\n\t\tplist = (svrattrl *) GET_NEXT(plist->al_link);\n\t}\n\n\trc = check_resource_set_on_jobs_or_resvs(preq, prdef, (mod_type || mod_flag));\n\tif (rc == 1) {\n\t\t/* check function replies to client's preq */\n\t\treturn;\n\t}\n\t/* Reject if resource is on a queue and the type is being modified */\n\tpq = (pbs_queue *) GET_NEXT(svr_queues);\n\twhile (pq != NULL) {\n\t\tbusy = 0;\n\t\tfor (i = 0; (i < QA_ATR_LAST) && (busy == 0); i++) {\n\t\t\tpattr = get_qattr(pq, i);\n\t\t\tif (pattr->at_type == ATR_TYPE_RESC) {\n\t\t\t\tpresc = get_resource(pattr, prdef);\n\t\t\t\tif ((mod_type == 1) && (presc != NULL)) {\n\t\t\t\t\tbusy = 1;\n\t\t\t\t}\n\t\t\t} else if (pattr->at_type == ATR_TYPE_ENTITY) {\n\t\t\t\tif ((mod_type == 1) && is_entity_resource_set(pattr, prdef->rs_name)) {\n\t\t\t\t\tbusy = 1;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif (busy) {\n\t\t\treply_text(preq, PBSE_RESCBUSY, \"Resource busy on 
queue\");\n\t\t\treturn;\n\t\t}\n\t\tpq = (pbs_queue *) GET_NEXT(pq->qu_link);\n\t}\n\n\t/* Reject if resource is on the server and the type is being modified */\n\tfor (i = 0, busy = 0; (i < SVR_ATR_LAST) && (busy == 0); i++) {\n\t\tpattr = get_sattr(i);\n\t\tif (pattr->at_type == ATR_TYPE_RESC) {\n\t\t\tpresc = get_resource(pattr, prdef);\n\t\t\tif ((presc != NULL) && (mod_type == 1)) {\n\t\t\t\tbusy = 1;\n\t\t\t}\n\t\t} else if (pattr->at_type == ATR_TYPE_ENTITY) {\n\t\t\tif ((mod_type == 1) && is_entity_resource_set(pattr, prdef->rs_name)) {\n\t\t\t\tbusy = 1;\n\t\t\t}\n\t\t}\n\t}\n\tif (busy) {\n\t\treply_text(preq, PBSE_RESCBUSY, \"Resource busy on server\");\n\t\treturn;\n\t}\n\n\t/* Reject if resource is on a node and the type is being modified */\n\tfor (i = 0; i < svr_totnodes; i++) {\n\t\tif (pbsndlist[i]->nd_state & INUSE_DELETED)\n\t\t\tcontinue;\n\n\t\tfor (j = 0; j < ND_ATR_LAST; j++) {\n\t\t\tpattr = get_nattr(pbsndlist[i], j);\n\t\t\tpresc = get_resource(pattr, prdef);\n\t\t\tif ((presc != NULL) && (mod_type == 1)) {\n\t\t\t\treply_text(preq, PBSE_RESCBUSY, \"Resource busy on node\");\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t}\n\n\ttype = prdef->rs_type;\n\to_flags = flags = prdef->rs_flags;\n\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\twhile (plist) {\n\t\tif (strcmp(plist->al_atopl.name, ATTR_RESC_TYPE) == 0) {\n\t\t\tp_resc_type_map = find_resc_type_map_by_typest(plist->al_atopl.value);\n\t\t\tif (p_resc_type_map == NULL) {\n\t\t\t\treq_reject(PBSE_BADATVAL, 0, preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\ttype = p_resc_type_map->rtm_type;\n\t\t} else if (strcmp(plist->al_atopl.name, ATTR_RESC_FLAG) == 0) {\n\t\t\tif (parse_resc_flags(plist->al_atopl.value, &flag_ir, &flags) == -1) {\n\t\t\t\treq_reject(PBSE_BADATVAL, 0, preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tmod_flag = 1;\n\t\t} else {\n\t\t\treq_reject(PBSE_BADATVAL, 0, preq);\n\t\t\treturn;\n\t\t}\n\n\t\tplist = 
(svrattrl *) GET_NEXT(plist->al_link);\n\t}\n\n\tif (verify_resc_type_and_flags(type, &flag_ir, &flags, resc, buf, sizeof(buf), 0) != 0) {\n\t\treply_text(preq, PBSE_BADATVAL, buf);\n\t\treturn;\n\t}\n\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\tmgr_log_attr(msg_man_set, plist, PBS_EVENTCLASS_RESC, resc, NULL);\n\n\trc = update_resource_def_file(resc, RESDEF_UPDATE, type, flags);\n\tif (rc != 0) {\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_RESC, LOG_ERR, msg_daemonname, \"Error updating resource definitions\");\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\treturn;\n\t}\n\n\tif (mod_type && (p_resc_type_map != NULL)) {\n\t\tprdef->rs_decode = p_resc_type_map->rtm_decode;\n\t\tprdef->rs_encode = p_resc_type_map->rtm_encode;\n\t\tprdef->rs_set = p_resc_type_map->rtm_set;\n\t\tprdef->rs_comp = p_resc_type_map->rtm_comp;\n\t\tprdef->rs_free = p_resc_type_map->rtm_free;\n\t\tprdef->rs_type = p_resc_type_map->rtm_type;\n\t}\n\tif (mod_flag) {\n\t\tprdef->rs_flags = flags;\n\t}\n\n\tif ((o_flags & (ATR_DFLAG_RASSN | ATR_DFLAG_FNASSN | ATR_DFLAG_ANASSN)) ||\n\t    (flags & (ATR_DFLAG_RASSN | ATR_DFLAG_FNASSN | ATR_DFLAG_ANASSN))) {\n\t\tupdate_resc_sum();\n\t}\n\n\treply_ack(preq);\n\n\trestart_python_interpreter(__func__);\n\tdeferred_send_rescdef();\n\tset_scheduler_flag(SCH_CONFIGURE, NULL);\n\n\treturn;\n}\n\n/**\n * @brief\n * \t\tUnset a resource flag\n *\n * @param[in]\tpreq\t- The request containing information about the resource to\n * \t\t     \t\t\t\tunset\n *\n * @par\n * \t\tRequest to unset a resource flag is rejected if the resource is\n * \t\tset on any job or reservation.\n *\n * @return void\n */\nstatic void\nmgr_resource_unset(struct batch_request *preq)\n{\n\tresource_def *prdef;\n\tsvrattrl *plist;\n\tchar *resc;\n\tint i, j;\n\tattribute *pattr;\n\tresource *presc;\n\tpbs_queue *pq;\n\tint mod = 0;\n\tint busy;\n\tint rc;\n\tint o_type;\n\tint o_flags;\n\tpbs_queue **pq_list = NULL;\n\tint pq_list_size = 0;\n\tattribute 
*q_attr = NULL;\n\tint q_count = 0;\n\n\tif ((preq->rq_perm & PERM_MANAGER) == 0) {\n\t\treq_reject(PBSE_PERM, 0, preq);\n\t\treturn;\n\t}\n\n\tif ((resc = preq->rq_ind.rq_manager.rq_objname) == NULL) {\n\t\treq_reject(PBSE_BADATVAL, 0, preq);\n\t\treturn;\n\t}\n\n\tprdef = find_resc_def(svr_resc_def, resc);\n\tif (prdef == NULL) {\n\t\treq_reject(PBSE_UNKRESC, 0, preq);\n\t\treturn;\n\t}\n\n\tif (!prdef->rs_custom) {\n\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\treturn;\n\t}\n\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\tif (plist == NULL) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"missing type and/or permissions for '%s'\", resc);\n\t\treq_reject(PBSE_BADATVAL, 0, preq);\n\t\treturn;\n\t}\n\twhile (plist) {\n\t\tif ((strcmp(plist->al_atopl.name, ATTR_RESC_TYPE) == 0)) {\n\t\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\t\treturn;\n\t\t}\n\t\tif (strcmp(plist->al_atopl.name, ATTR_RESC_FLAG) == 0) {\n\t\t\tmod = 1;\n\t\t\tbreak;\n\t\t}\n\t\tplist = (svrattrl *) GET_NEXT(plist->al_link);\n\t}\n\n\trc = check_resource_set_on_jobs_or_resvs(preq, prdef, 1);\n\tif (rc == 1)\n\t\treturn;\n\n\t/* Reject if resource is on a queue */\n\tpq = (pbs_queue *) GET_NEXT(svr_queues);\n\twhile (pq != NULL) {\n\t\tfor (i = 0, busy = 0; (i < QA_ATR_LAST) && (busy == 0); i++) {\n\t\t\tpattr = get_qattr(pq, i);\n\t\t\tif (pattr->at_type == ATR_TYPE_RESC) {\n\t\t\t\tpresc = get_resource(pattr, prdef);\n\t\t\t\tif ((presc != NULL) && (mod == 1)) {\n\t\t\t\t\tif (i == QE_ATR_ResourceAssn) {\n\t\t\t\t\t\tpbs_queue **temp_q = NULL;\n\t\t\t\t\t\ttemp_q = (pbs_queue **) realloc(pq_list, (pq_list_size + 1) * sizeof(pbs_queue *));\n\t\t\t\t\t\tif (temp_q == NULL) {\n\t\t\t\t\t\t\treply_text(preq, PBSE_SYSTEM, \"malloc failure\");\n\t\t\t\t\t\t\treturn;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tpq_list = temp_q;\n\t\t\t\t\t\tpq_list[pq_list_size] = pq;\n\t\t\t\t\t\tpq_list_size++;\n\t\t\t\t\t} else\n\t\t\t\t\t\tbusy = 1;\n\t\t\t\t}\n\t\t\t} else if (pattr->at_type == 
ATR_TYPE_ENTITY) {\n\t\t\t\tif ((mod == 1) && is_entity_resource_set(pattr, prdef->rs_name)) {\n\t\t\t\t\tbusy = 1;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif (busy) {\n\t\t\treply_text(preq, PBSE_RESCBUSY, \"Resource busy on queue\");\n\t\t\tif (pq_list != NULL)\n\t\t\t\tfree(pq_list);\n\t\t\treturn;\n\t\t}\n\t\tpq = (pbs_queue *) GET_NEXT(pq->qu_link);\n\t}\n\n\t/* Reject if resource is on a node */\n\tfor (i = 0; i < svr_totnodes; i++) {\n\t\tif (pbsndlist[i]->nd_state & INUSE_DELETED)\n\t\t\tcontinue;\n\n\t\tfor (j = 0; j < ND_ATR_LAST; j++) {\n\t\t\tpattr = get_nattr(pbsndlist[i], j);\n\t\t\tpresc = get_resource(pattr, prdef);\n\t\t\tif ((presc != NULL) && (mod == 1)) {\n\t\t\t\treply_text(preq, PBSE_RESCBUSY, \"Resource busy on node\");\n\t\t\t\tif (pq_list != NULL)\n\t\t\t\t\tfree(pq_list);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t}\n\n\t/* Reject if resource is on the server */\n\tfor (i = 0, busy = 0; (i < SVR_ATR_LAST) && (busy == 0); i++) {\n\t\tpattr = get_sattr(i);\n\t\tif (pattr->at_type == ATR_TYPE_RESC) {\n\t\t\tpresc = get_resource(pattr, prdef);\n\t\t\tif ((presc != NULL) && (mod == 1)) {\n\t\t\t\t/* Since the resource is not present in resources_available attribute\n\t\t\t\t * just delete the resource entry from resources_assigned attribute.\n\t\t\t\t */\n\t\t\t\tif (i == SVR_ATR_resource_assn) {\n\t\t\t\t\tif (pattr->at_flags & ATR_VFLAG_SET) {\n\t\t\t\t\t\tpresc->rs_defin->rs_free(&presc->rs_value);\n\t\t\t\t\t\tdelete_link(&presc->rs_link);\n\t\t\t\t\t\tfree(presc);\n\t\t\t\t\t\tpresc = (resource *) GET_NEXT(get_attr_list(pattr));\n\t\t\t\t\t\tif (presc == NULL)\n\t\t\t\t\t\t\tpattr->at_flags &= ~ATR_VFLAG_SET;\n\t\t\t\t\t\tpattr->at_flags |= ATR_MOD_MCACHE;\n\t\t\t\t\t}\n\t\t\t\t} else\n\t\t\t\t\tbusy = 1;\n\t\t\t}\n\t\t} else if (is_attr_set(pattr) && (pattr->at_type == ATR_TYPE_ENTITY)) {\n\t\t\tif ((mod == 1) && is_entity_resource_set(pattr, prdef->rs_name)) {\n\t\t\t\tbusy = 1;\n\t\t\t}\n\t\t}\n\t}\n\tif (busy) {\n\t\treply_text(preq, PBSE_RESCBUSY, 
\"Resource busy on server\");\n\t\tif (pq_list != NULL)\n\t\t\tfree(pq_list);\n\t\treturn;\n\t}\n\tif (pq_list != NULL) {\n\t\tfor (q_count = 0; q_count < pq_list_size; q_count++) {\n\t\t\tq_attr = get_qattr(pq_list[q_count], QE_ATR_ResourceAssn);\n\t\t\tpresc = get_resource(q_attr, prdef);\n\t\t\tpresc->rs_defin->rs_free(&presc->rs_value);\n\t\t\tdelete_link(&presc->rs_link);\n\t\t\tfree(presc);\n\t\t\tpresc = (resource *) GET_NEXT(q_attr->at_val.at_list);\n\t\t\tif (presc == NULL)\n\t\t\t\tmark_attr_not_set(q_attr);\n\t\t\tq_attr->at_flags |= ATR_MOD_MCACHE;\n\t\t}\n\t\tfree(pq_list);\n\t\tpq_list = NULL;\n\t}\n\n\to_type = prdef->rs_type;\n\to_flags = prdef->rs_flags;\n\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\twhile (plist) {\n\t\t/* Only consider flag since unsetting type is disallowed */\n\t\tif (strcmp(plist->al_atopl.name, ATTR_RESC_FLAG) == 0) {\n\t\t\tprdef->rs_flags = READ_WRITE;\n\t\t} else {\n\t\t\treq_reject(PBSE_BADATVAL, 0, preq);\n\t\t\treturn;\n\t\t}\n\t\tplist = (svrattrl *) GET_NEXT(plist->al_link);\n\t}\n\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_manager.rq_attr);\n\tmgr_log_attr(msg_man_uns, plist, PBS_EVENTCLASS_RESC, resc, NULL);\n\n\trc = update_resource_def_file(resc, RESDEF_UPDATE, prdef->rs_type, prdef->rs_flags);\n\tif (rc != 0) {\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_RESC, LOG_ERR, msg_daemonname, \"Error updating resource definitions\");\n\t\t/* rollback mods */\n\t\tprdef->rs_type = o_type;\n\t\tprdef->rs_flags = o_flags;\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\treturn;\n\t}\n\n\tif (o_flags & (ATR_DFLAG_RASSN | ATR_DFLAG_FNASSN | ATR_DFLAG_ANASSN)) {\n\t\tupdate_resc_sum();\n\t}\n\n\treply_ack(preq);\n\n\trestart_python_interpreter(__func__);\n\tdeferred_send_rescdef();\n\tset_scheduler_flag(SCH_CONFIGURE, NULL);\n\n\treturn;\n}\n\n/**\n * @brief\n * \t\treq_manager - the dispatch routine for a series of functions which\n *\t\timplement the Manager (qmgr) Batch Request\n *\n *\t\tThe 
privilege of the requester is checked against the type of\n *\t\tthe object and the operation to be performed on it.  Then the\n *\t\tappropriate function is called to perform the operation.\n *\n * @param[in]\tpreq\t- The request containing information about the resource to perform the operation.\n *\n * @return void\n *\n */\nvoid\nreq_manager(struct batch_request *preq)\n{\n\tint obj_name_len;\n\tconn_t *conn = NULL;\n\n\t++preq->rq_refct;\n\n\tif (preq->prot == PROT_TCP) {\n\t\tif (preq->rq_conn != PBS_LOCAL_CONNECTION) {\n\t\t\tconn = get_conn(preq->rq_conn);\n\t\t\tif (!conn) {\n\t\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_REQUEST, LOG_ERR, __func__, \"did not find socket in connection table\");\n\t\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\t\tgoto req_manager_exit;\n\t\t\t}\n\t\t}\n\t}\n\n\tobj_name_len = strlen(preq->rq_ind.rq_manager.rq_objname);\n\n\tswitch (preq->rq_ind.rq_manager.rq_cmd) {\n\n\t\tcase MGR_CMD_CREATE:\n\t\tcase MGR_CMD_DELETE:\n\n\t\t\t/* MGR_OBJ_SITE_HOOK permission checking is different */\n\t\t\tif (preq->rq_ind.rq_manager.rq_objtype != MGR_OBJ_SITE_HOOK) {\n\t\t\t\tif ((preq->rq_perm & PERM_MANAGER) == 0) {\n\t\t\t\t\treq_reject(PBSE_PERM, 0, preq);\n\t\t\t\t\tgoto req_manager_exit;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tswitch (preq->rq_ind.rq_manager.rq_objtype) {\n\t\t\t\tcase MGR_OBJ_QUEUE:\n\t\t\t\t\tif (preq->rq_ind.rq_manager.rq_cmd == MGR_CMD_CREATE)\n\t\t\t\t\t\tmgr_queue_create(preq);\n\t\t\t\t\telse\n\t\t\t\t\t\tmgr_queue_delete(preq);\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase MGR_OBJ_NODE:\n\t\t\t\t\tif (preq->rq_ind.rq_manager.rq_cmd == MGR_CMD_CREATE)\n\t\t\t\t\t\tmgr_node_create(preq);\n\t\t\t\t\telse\n\t\t\t\t\t\tmgr_node_delete(preq);\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase MGR_OBJ_SITE_HOOK:\n\t\t\t\t\tif (!is_local_root(preq->rq_user, preq->rq_host)) {\n\t\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\t\"%s@%s is unauthorized to access hooks data from server %s\",\n\t\t\t\t\t\t\tpreq->rq_user, preq->rq_host, 
server_host);\n\n\t\t\t\t\t\treply_text(preq, PBSE_HOOKERROR, log_buffer);\n\t\t\t\t\t\tgoto req_manager_exit;\n\t\t\t\t\t}\n\t\t\t\t\tif (preq->rq_ind.rq_manager.rq_cmd == MGR_CMD_CREATE)\n\t\t\t\t\t\tmgr_hook_create(preq);\n\t\t\t\t\telse\n\t\t\t\t\t\tmgr_hook_delete(preq);\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase MGR_OBJ_RSC:\n\t\t\t\t\tif (preq->rq_ind.rq_manager.rq_cmd == MGR_CMD_CREATE)\n\t\t\t\t\t\tmgr_resource_create(preq);\n\t\t\t\t\telse\n\t\t\t\t\t\tmgr_resource_delete(preq);\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase MGR_OBJ_SCHED:\n\t\t\t\t\tif (preq->rq_ind.rq_manager.rq_cmd == MGR_CMD_CREATE) {\n\t\t\t\t\t\tif (obj_name_len == 0) {\n\t\t\t\t\t\t\tstrncpy(preq->rq_ind.rq_manager.rq_objname,\n\t\t\t\t\t\t\t\tPBS_DFLT_SCHED_NAME, PBS_MAXSVRJOBID);\n\t\t\t\t\t\t\tpreq->rq_ind.rq_manager.rq_objname[PBS_MAXSVRJOBID] = '\\0';\n\t\t\t\t\t\t}\n\t\t\t\t\t\tmgr_sched_create(preq);\n\t\t\t\t\t} else\n\t\t\t\t\t\tmgr_sched_delete(preq);\n\t\t\t\t\tbreak;\n\n\t\t\t\tdefault:\n\t\t\t\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\t\t\t\tgoto req_manager_exit;\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase MGR_CMD_SET:\n\n\t\t\t/* MGR_OBJ_SITE_HOOK permission checking is different */\n\t\t\tif ((preq->rq_ind.rq_manager.rq_objtype != MGR_OBJ_SITE_HOOK) &&\n\t\t\t    (preq->rq_ind.rq_manager.rq_objtype != MGR_OBJ_PBS_HOOK)) {\n\t\t\t\tif ((preq->rq_perm & PERM_OPorMGR) == 0) {\n\t\t\t\t\treq_reject(PBSE_PERM, 0, preq);\n\t\t\t\t\tgoto req_manager_exit;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tswitch (preq->rq_ind.rq_manager.rq_objtype) {\n\t\t\t\tcase MGR_OBJ_SERVER:\n\t\t\t\t\tmgr_server_set(preq, conn);\n\t\t\t\t\tbreak;\n\t\t\t\tcase MGR_OBJ_SCHED:\n\t\t\t\t\tif (obj_name_len == 0) {\n\t\t\t\t\t\tstrncpy(preq->rq_ind.rq_manager.rq_objname,\n\t\t\t\t\t\t\tPBS_DFLT_SCHED_NAME, PBS_MAXSVRJOBID);\n\t\t\t\t\t\tpreq->rq_ind.rq_manager.rq_objname[PBS_MAXSVRJOBID] = '\\0';\n\t\t\t\t\t}\n\t\t\t\t\tmgr_sched_set(preq);\n\t\t\t\t\tbreak;\n\t\t\t\tcase 
MGR_OBJ_QUEUE:\n\t\t\t\t\tmgr_queue_set(preq);\n\t\t\t\t\tbreak;\n\t\t\t\tcase MGR_OBJ_NODE:\n\t\t\t\tcase MGR_OBJ_HOST:\n\t\t\t\t\tmgr_node_set(preq);\n\t\t\t\t\tbreak;\n\t\t\t\tcase MGR_OBJ_SITE_HOOK:\n\t\t\t\tcase MGR_OBJ_PBS_HOOK:\n\t\t\t\t\tif (!is_local_root(preq->rq_user, preq->rq_host)) {\n\t\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\t\"%s@%s is unauthorized to access hooks data from server %s\",\n\t\t\t\t\t\t\tpreq->rq_user, preq->rq_host, server_host);\n\n\t\t\t\t\t\treply_text(preq, PBSE_HOOKERROR, log_buffer);\n\t\t\t\t\t\tgoto req_manager_exit;\n\t\t\t\t\t}\n\t\t\t\t\tmgr_hook_set(preq);\n\t\t\t\t\tbreak;\n\t\t\t\tcase MGR_OBJ_RSC:\n\t\t\t\t\tmgr_resource_set(preq);\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\t\t\t\tgoto req_manager_exit;\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase MGR_CMD_UNSET:\n\t\t\t/* MGR_OBJ_SITE_HOOK permission checking is different */\n\t\t\tif ((preq->rq_ind.rq_manager.rq_objtype != MGR_OBJ_SITE_HOOK) &&\n\t\t\t    (preq->rq_ind.rq_manager.rq_objtype != MGR_OBJ_PBS_HOOK)) {\n\t\t\t\tif ((preq->rq_perm & PERM_OPorMGR) == 0) {\n\t\t\t\t\treq_reject(PBSE_PERM, 0, preq);\n\t\t\t\t\tgoto req_manager_exit;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tswitch (preq->rq_ind.rq_manager.rq_objtype) {\n\t\t\t\tcase MGR_OBJ_SERVER:\n\t\t\t\t\tmgr_server_unset(preq, conn);\n\t\t\t\t\tbreak;\n\t\t\t\tcase MGR_OBJ_QUEUE:\n\t\t\t\t\tmgr_queue_unset(preq);\n\t\t\t\t\tbreak;\n\t\t\t\tcase MGR_OBJ_NODE:\n\t\t\t\t\tmgr_node_unset(preq);\n\t\t\t\t\tbreak;\n\t\t\t\tcase MGR_OBJ_SITE_HOOK:\n\t\t\t\tcase MGR_OBJ_PBS_HOOK:\n\t\t\t\t\tif (!is_local_root(preq->rq_user, preq->rq_host)) {\n\t\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\t\"%s@%s is unauthorized to access hooks data from server %s\",\n\t\t\t\t\t\t\tpreq->rq_user, preq->rq_host,\n\t\t\t\t\t\t\tserver_host);\n\n\t\t\t\t\t\treply_text(preq, PBSE_HOOKERROR, log_buffer);\n\t\t\t\t\t\tgoto 
req_manager_exit;\n\t\t\t\t\t}\n\t\t\t\t\tmgr_hook_unset(preq);\n\t\t\t\t\tbreak;\n\t\t\t\tcase MGR_OBJ_SCHED:\n\t\t\t\t\tif (obj_name_len == 0) {\n\t\t\t\t\t\tstrncpy(preq->rq_ind.rq_manager.rq_objname,\n\t\t\t\t\t\t\tPBS_DFLT_SCHED_NAME, PBS_MAXSVRJOBID);\n\t\t\t\t\t\tpreq->rq_ind.rq_manager.rq_objname[PBS_MAXSVRJOBID] = '\\0';\n\t\t\t\t\t}\n\t\t\t\t\tmgr_sched_unset(preq);\n\t\t\t\t\tbreak;\n\t\t\t\tcase MGR_OBJ_RSC:\n\t\t\t\t\tmgr_resource_unset(preq);\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\t\t\t\tgoto req_manager_exit;\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase MGR_CMD_IMPORT:\n\n\t\t\t/* If this expands to operate on other objects like       */\n\t\t\t/* MGR_OBJ_SERVER, MGR_OBJ_QUEUE, MGR_OBJ_NODE, then you  */\n\t\t\t/* might need to put back in here the permission checking */\n\t\t\t/* \"if( (preq->rq_perm & PERM_OPorMGR) == 0)...\"          */\n\n\t\t\tswitch (preq->rq_ind.rq_manager.rq_objtype) {\n\t\t\t\tcase MGR_OBJ_SITE_HOOK:\n\t\t\t\tcase MGR_OBJ_PBS_HOOK:\n\t\t\t\t\tif (!is_local_root(preq->rq_user, preq->rq_host)) {\n\t\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\t\"%s@%s is unauthorized to access hooks data from server %s\",\n\t\t\t\t\t\t\tpreq->rq_user, preq->rq_host, server_host);\n\n\t\t\t\t\t\treply_text(preq, PBSE_HOOKERROR, log_buffer);\n\t\t\t\t\t\tgoto req_manager_exit;\n\t\t\t\t\t}\n\t\t\t\t\tmgr_hook_import(preq);\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\t\t\t\tgoto req_manager_exit;\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase MGR_CMD_EXPORT:\n\n\t\t\t/* If this expands to operate on other objects like       */\n\t\t\t/* MGR_OBJ_SERVER, MGR_OBJ_QUEUE, MGR_OBJ_NODE, then you  */\n\t\t\t/* might need to put back in here the permission checking */\n\t\t\t/* \"if( (preq->rq_perm & PERM_OPorMGR) == 0)...\"          */\n\n\t\t\tswitch (preq->rq_ind.rq_manager.rq_objtype) {\n\t\t\t\tcase MGR_OBJ_SITE_HOOK:\n\t\t\t\tcase MGR_OBJ_PBS_HOOK:\n\t\t\t\t\tif 
(!is_local_root(preq->rq_user, preq->rq_host)) {\n\t\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\t\"%s@%s is unauthorized to access hooks data from server %s\",\n\t\t\t\t\t\t\tpreq->rq_user, preq->rq_host, server_host);\n\n\t\t\t\t\t\treply_text(preq, PBSE_HOOKERROR, log_buffer);\n\t\t\t\t\t\tgoto req_manager_exit;\n\t\t\t\t\t}\n\t\t\t\t\tmgr_hook_export(preq);\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\t\t\t\tgoto req_manager_exit;\n\t\t\t}\n\t\t\tbreak;\n\n\t\tdefault: /*batch_request specified an invalid command*/\n\t\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\t\tgoto req_manager_exit;\n\t}\nreq_manager_exit : {\n\tchar hook_msg[HOOK_MSG_SIZE];\n\tprocess_hooks(preq, hook_msg, sizeof(hook_msg), pbs_python_set_interrupt);\n}\n\tif (--preq->rq_refct == 0) {\n\t\treply_send(preq);\n\t}\n}\n\n/**\n * @brief\n * \t\tmanager_oper_chk - check the @host part of a manager or operator acl\n *\t\tentry to insure it is fully qualified.  This is to prevent\n *\t\tinput errors when setting the list.\n *\t\tThis is the at_action() routine for the server attributes\n *\t\t\"managers\" and \"operators\"\n *\n * @param[in]\tpattr\t-     pointer to new attribute value\n * @param[in]\tpobject\t-     pointer to node\n * @param[in]\tactmode\t-     action mode\n *\n * @return\terror code\n * @retval\t0\t- success\n * @retval\tPBSE_BADHOST\t- failure\n */\n\nint\nmanager_oper_chk(attribute *pattr, void *pobject, int actmode)\n{\n\tchar *entry;\n\tint err = 0;\n\tchar hostname[PBS_MAXHOSTNAME + 1];\n\tint i;\n\tstruct array_strings *pstr;\n\n\tif (actmode == ATR_ACTION_FREE)\n\t\treturn (0); /* no checking on free */\n\n\tif ((pstr = pattr->at_val.at_arst) == NULL)\n\t\treturn (0);\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\t/* with kerberos, we do not check */\n\treturn 0;\n#endif\n\n\tfor (i = 0; i < pstr->as_usedptr; ++i) {\n\t\tentry = strchr(pstr->as_string[i], (int) '@');\n\t\tif (entry == NULL) {\n\t\t\terr = 
PBSE_BADHOST;\n\t\t\tbreak;\n\t\t}\n\t\tentry++;\t     /* point after the '@' */\n\t\tif (*entry != '*') { /* if == * cannot check it any more */\n\t\t\t/* if not wild card, must be fully qualified host */\n\t\t\tif (get_fullhostname(entry, hostname, (sizeof(hostname) - 1)) ||\n\t\t\t    strncasecmp(entry, hostname, (sizeof(hostname) - 1))) {\n\t\t\t\tif (actmode == ATR_ACTION_RECOV) {\n\t\t\t\t\t(void) sprintf(log_buffer, \"bad entry in acl: %s\",\n\t\t\t\t\t\t       pstr->as_string[i]);\n\t\t\t\t\tlog_err(PBSE_BADHOST, \"manager_oper_chk\",\n\t\t\t\t\t\tlog_buffer);\n\t\t\t\t} else {\n\t\t\t\t\terr = PBSE_BADHOST;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\treturn (err);\n}\n\n/**\n * @brief\n * \t\tnode_comment - action routine for the comment attribute of a node\n *\t\tif set to non-default, set flag to cause node_status file to\n *\t\tbe written.   The comment will be the last item on the status line.\n *\n * @param[in]\tpattr\t-     pointer to new attribute value\n * @param[in]\tpobj    -     pointer to node\n * @param[in]\tact     -     action mode\n *\n * @return\terror code\n * @retval\t0\t- success\n */\nint\nnode_comment(attribute *pattr, void *pobj, int act)\n{\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tChecks if provisioning can be enabled on a vnode.\n *\n * @par Functionality:\n *\t\tThis function is an action routine for provision_enable attribute of a\n *\t\tnode and server. It checks if attribute can be set a vnode. 
If the vnode is\n *\t\tthe head node, an error is returned.\n *\n * @param[in]\tnew     -     pointer to new attribute value\n * @param[in]\tpobj    -     pointer to node\n * @param[in]\tact     -     action mode\n *\n * @return\tint\n * @retval\t PBSE_NONE\t- success\n * @retval\t PBSE_PROV_HEADERROR\t- failure if setting on head node\n *\n * @par Side Effects: None\n *\n * @par MT-safe: No\n *\n */\nint\nnode_prov_enable_action(attribute *new, void *pobj, int act)\n{\n\tstruct pbsnode *pnode = (struct pbsnode *) pobj;\n\n\tif (is_attr_set(new) && new->at_val.at_long == 1) {\n\t\t/* Check user tries to set on Head node */\n\t\tif (is_nattr_set(pnode, ND_ATR_Mom) && compare_short_hostname(get_nattr_str(pnode, ND_ATR_Mom), server_host) == 0)\n\t\t\treturn PBSE_PROV_HEADERROR;\n\t}\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\t\tChecks if prov_tracking size needs to be readjusted.\n *\n * @par Functionality:\n *\t\tThis function is the action routine for the max_concurrent_provision attribute\n *\t\tof the server. It resizes the prov_tracking table. 
If new value is greater than\n *\t\tcurrent value, an immediate work task is created to drain queued provisioning\n *\t\trequests.\n *\n * @see\n *\t\t#prov_tracking in provision.h\n *\n * @param[in]\tnew     -     pointer to new attribute value\n * @param[in]\tpobj    -     pointer to node\n * @param[in]\tact     -     action mode\n *\n * @return\tint\n * @retval\tPBSE_NONE\t: success\n * @retval\tPBSE_BADATVAL : if the value provided <= 0\n * @retval\tPBSE_SYSTEM : error returned if resize_prov_table function fails\n *\n * @par Side Effects:\n *\t\tUnknown\n *\n * @par MT-safe: No\n *\n */\nint\nsvr_max_conc_prov_action(attribute *new, void *pobj, int act)\n{\n\tint rc;\n\n\tif (is_attr_set(new)) {\n\n\t\tif ((int) new->at_val.at_long <= 0)\n\t\t\treturn PBSE_BADATVAL;\n\t}\n\n\tmax_concurrent_prov = new->at_val.at_long;\n\n\tif (act == ATR_ACTION_RECOV)\n\t\t/* dont resize prov table yet, since we might be recovering */\n\t\treturn PBSE_NONE;\n\n\t/* pick a few if size is increased */\n\tif (server.sv_provtracksize < max_concurrent_prov)\n\t\tset_task(WORK_Immed, 0, do_provisioning, NULL);\n\n\trc = resize_prov_table(max_concurrent_prov);\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\t\tResizes and reinitializes prov_tracking table.\n *\n * @par Functionality:\n *\t\tThis function resizes and reinitializes provision tracking table as per\n *\t\tnewsize. 
If there is an action provisioning taking place and newsize is\n *\t\tless than current size then it does not resize or reinitialize.\n *\n * @param[in]\tnewsize\t-\tnew size of provision table\n *\n * @return\tint\n * @retval\tPBSE_NONE\t: success or no action taken\n * @retval\tPBSE_SYSTEM\t: failure\n *\n * @par Side Effects:\n *\t\tUnknown\n *\n * @par MT-safe: No\n *\n */\nint\nresize_prov_table(int newsize)\n{\n\tstruct prov_tracking *tmp;\n\tint i;\n\tint oldsize = server.sv_provtracksize; /* save  previous size */\n\n\tDBPRT((\"resize_prov_table: oldsize = %d, newsize=%d\\n\", oldsize, newsize))\n\n\tif (newsize == oldsize)\n\t\treturn PBSE_NONE;\n\n\tif (server.sv_cur_prov_records != 0) {\n\t\tif (newsize < oldsize)\n\t\t\treturn PBSE_NONE;\n\t}\n\n\tserver.sv_provtracksize = newsize;\n\n\t/* though the table is scattered, by the time we resize it, its guaranteed\n\t that the table will have all empty slots, so resizing to smaller is also fine */\n\n\t/* realloc the existing memory size, since table size has changed */\n\ttmp = (struct prov_tracking *) realloc(server.sv_prov_track,\n\t\t\t\t\t       server.sv_provtracksize * sizeof(struct prov_tracking));\n\tif (tmp == NULL)\n\t\treturn PBSE_SYSTEM;\n\telse\n\t\tserver.sv_prov_track = tmp;\n\n\tfor (i = oldsize; i < server.sv_provtracksize; i++) {\n\t\tmemset(&(server.sv_prov_track[i]), 0, sizeof(struct prov_tracking));\n\t\tserver.sv_prov_track[i].pvtk_mtime = 0;\n\t}\n\n\tserver.sv_provtrackmodifed = 1;\n\tprov_track_save();\n\tset_sattr_l_slim(SVR_ATR_max_concurrent_prov, newsize, SET);\n\tsvr_save_db(&server);\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\t\tAllows or disallows setting current_aoe on a vnode.\n *\n * @par Functionality:\n *\t\tThis function is action routine for current_aoe attribute of a vnode.\n *\t\tIt allows setting current_aoe only if the aoe is available on the\n *\t\tvnode and vnode is not provisioning/wait-provisioning and vnode is not\n *\t\ta head node.\n *\n * 
@param[in]\tnew\t-\tpointer to new attribute value\n * @param[in]\tpobj\t-\tpointer to node\n * @param[in]\tact\t-\taction mode\n *\n * @return\tint\n * @retval\tPBSE_NONE\t: success\n * @retval\tPBSE_PROV_HEADERROR\t: failure if node is a head node\n * @retval\tPBSE_NODEPROV_NOACTION\t: failure if node is provisioning/wait-provisioning\n * @retval\tPBSE_NODE_BAD_CURRENT_AOE\t: failure if aoe is not available on node\n *\n * @par Side Effects:\n *\t\tUnknown\n *\n * @par MT-safe: No\n *\n */\nint\nnode_current_aoe_action(attribute *new, void *pobj, int act)\n{\n\tstruct pbsnode *pnode = (struct pbsnode *) pobj;\n\n\tif (act == ATR_ACTION_RECOV)\n\t\treturn PBSE_NONE;\n\n\t/* Check user tries to set on Head node */\n\tif (is_nattr_set(pnode, ND_ATR_Mom) && compare_short_hostname(get_nattr_str(pnode, ND_ATR_Mom), server_host) == 0)\n\t\treturn PBSE_PROV_HEADERROR;\n\n\t/* Don't set/unset while provisioning */\n\tif ((pnode->nd_state & INUSE_PROV) ||\n\t    (pnode->nd_state & INUSE_WAIT_PROV))\n\t\treturn PBSE_NODEPROV_NOACTION;\n\n\t\t/* check if value being set is available in resources_available.aoe */\n#ifdef NAS /* localmod 148 */\n\tif (check_req_aoe_available(pnode, new->at_val.at_str) != 0) {\n\t\tif (pnode->nd_name != NULL) {\n\t\t\tsprintf(log_buffer, \"node \\\"%s\\\" would have received PBSE_NODE_BAD_CURRENT_AOE, but localmod 148 avoided this\", pnode->nd_name);\n\t\t} else {\n\t\t\tsprintf(log_buffer, \"unknown node would have received PBSE_NODE_BAD_CURRENT_AOE\");\n\t\t}\n\n\t\tlog_err(-1, \"node_current_aoe_action\", log_buffer);\n\t}\n#else\n\tif (check_req_aoe_available(pnode, new->at_val.at_str) != 0)\n\t\treturn PBSE_NODE_BAD_CURRENT_AOE;\n#endif /* localmod 148 */\n\n\treturn PBSE_NONE;\n}\n"
  },
  {
    "path": "src/server/req_message.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    req_message.c\n *\n * @brief\n * \t\treq_message.c - functions dealing with sending a message to a running job.\n *\n * Functions included are:\n * \treq_messagejob()\n * \tpost_message_req()\n * \tpost_py_spawn_req()\n * \treq_py_spawn()\n *\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <sys/types.h>\n#include \"libpbs.h\"\n#include <signal.h>\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"server.h\"\n#include \"credential.h\"\n#include \"batch_request.h\"\n#include \"job.h\"\n#include \"work_task.h\"\n#include \"pbs_error.h\"\n#include \"log.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"acct.h\"\n\n/* Private Function local to this file */\n\nstatic void post_message_req(struct work_task *);\n\n/* Global Data Items: */\n\nextern char *msg_messagejob;\n\nextern job *chk_job_request(char *, struct batch_request *, int *, int *);\nextern int validate_perm_res_in_select(char *val, int val_exist);\n\n/**\n * @brief\n * \t\treq_messagejob - service the Message Job Request\n *\n *\t\tThis request sends (via MOM) a message to a running job.\n *\n * @param[in]\tpreq\t- Pointer to batch request\n */\n\nvoid\nreq_messagejob(struct batch_request *preq)\n{\n\tint jt; /* job type */\n\tjob *pjob;\n\tint 
rc;\n\n\tif ((pjob = chk_job_request(preq->rq_ind.rq_message.rq_jid, preq, &jt, NULL)) == 0)\n\t\treturn;\n\n\tif (jt != IS_ARRAY_NO) {\n\t\treply_text(preq, PBSE_NOSUP, \"not supported for Array Jobs\");\n\t\treturn;\n\t}\n\n\t/* the job must be running */\n\n\tif (!check_job_state(pjob, JOB_STATE_LTR_RUNNING)) {\n\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\treturn;\n\t}\n\n\t/* pass the request on to MOM */\n\n\trc = relay_to_mom(pjob, preq, post_message_req);\n\tif (rc)\n\t\treq_reject(rc, 0, preq); /* unable to get to MOM */\n\n\t/* After MOM acts and replies to us, we pick up in post_message_req() */\n}\n\n/**\n * @brief\n * \t\tpost_message_req - complete a Message Job Request\n *\n * @param[in]\tpwt\t-\twork task structure\n */\n\nstatic void\npost_message_req(struct work_task *pwt)\n{\n\tstruct batch_request *preq;\n\n\tif (pwt->wt_aux2 != PROT_TPP)\n\t\tsvr_disconnect(pwt->wt_event); /* close connection to MOM */\n\tpreq = pwt->wt_parm1;\n\tpreq->rq_conn = preq->rq_orgconn; /* restore socket to client */\n\n\t(void) sprintf(log_buffer, msg_messagejob, preq->rq_reply.brp_code);\n\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t  preq->rq_ind.rq_message.rq_jid, log_buffer);\n\tif (preq->rq_reply.brp_code)\n\t\treq_reject(preq->rq_reply.brp_code, 0, preq);\n\telse\n\t\treply_ack(preq);\n}\n\n/**\n * @brief\n * \t\tpost_py_spawn_req - complete a py_spawn Job Request\n *\n * @param[in]\tpwt\t-\twork task structure\n */\n\nstatic void\npost_py_spawn_req(struct work_task *pwt)\n{\n\tstruct batch_request *preq;\n\tchar tmp_buf[128] = \"\";\n\n\tif (pwt->wt_aux2 != PROT_TPP)\n\t\tsvr_disconnect(pwt->wt_event); /* close connection to MOM */\n\tpreq = pwt->wt_parm1;\n\tpreq->rq_conn = preq->rq_orgconn; /* restore socket to client */\n\n\tif (preq->rq_reply.brp_code == 0)\n\t\tsprintf(tmp_buf, \" exit value %d\", preq->rq_reply.brp_auxcode);\n\tsprintf(log_buffer, \"Python spawn status %d%s\",\n\t\tpreq->rq_reply.brp_code, 
tmp_buf);\n\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t  preq->rq_ind.rq_py_spawn.rq_jid, log_buffer);\n\treply_send(preq);\n}\n\n/**\n * @brief\n * \t\treq_py_spawn - service the Python Spawn Request\n *\n * @param[in]\tpreq\t- Pointer to batch request\n */\n\nvoid\nreq_py_spawn(struct batch_request *preq)\n{\n\tint jt; /* job type */\n\tjob *pjob;\n\tint rc;\n\tchar *jid = preq->rq_ind.rq_py_spawn.rq_jid;\n\n\t/*\n\t ** Returns job pointer for singleton job or \"parent\" of\n\t ** an array job.\n\t */\n\tpjob = chk_job_request(jid, preq, &jt, NULL);\n\tif (pjob == NULL)\n\t\treturn;\n\n\t/* see if requestor is the job owner */\n\tif (svr_chk_owner(preq, pjob) != 0) {\n\t\treq_reject(PBSE_PERM, 0, preq);\n\t\treturn;\n\t}\n\n\tif (jt == IS_ARRAY_NO) { /* a regular job is okay */\n\t\t/* the job must be running */\n\t\tif ((!check_job_state(pjob, JOB_STATE_LTR_RUNNING)) ||\n\t\t    (!check_job_substate(pjob, JOB_SUBSTATE_RUNNING))) {\n\t\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\t\treturn;\n\t\t}\n\t} else if (jt == IS_ARRAY_Single) { /* a single subjob is okay */\n\t\tchar sjst;\n\t\tint sjsst;\n\n\t\tget_subjob_and_state(pjob, get_index_from_jid(jid), &sjst, &sjsst);\n\t\tif (sjst == JOB_STATE_LTR_UNKNOWN) {\n\t\t\treq_reject(PBSE_UNKJOBID, 0, preq);\n\t\t\treturn;\n\t\t}\n\n\t\tif (sjst != JOB_STATE_LTR_RUNNING || sjsst != JOB_SUBSTATE_RUNNING) {\n\t\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\t\treturn;\n\t\t}\n\t} else {\n\t\treply_text(preq, PBSE_NOSUP,\n\t\t\t   \"not supported for Array Jobs or multiple sub-jobs\");\n\t\treturn;\n\t}\n\n\t/*\n\t ** Pass the request on to MOM.  
If this works, the function\n\t ** post_py_spawn_req will be called to handle the reply.\n\t ** If it fails, send the reply now.\n\t */\n\trc = relay_to_mom(pjob, preq, post_py_spawn_req);\n\tif (rc)\n\t\treq_reject(rc, 0, preq); /* unable to get to MOM */\n}\n\n/**\n * @brief\n * \tService the PBS_BATCH_RelnodesJob Request.\n *\n * @param[in]\tpreq - the request structure.\n *\n * @return void\n *\n */\n\nvoid\nreq_relnodesjob(struct batch_request *preq)\n{\n\tint jt; /* job type */\n\tjob *pjob;\n\tint rc = PBSE_NONE;\n\tchar *jid;\n\tchar *nodeslist = NULL;\n\tchar msg[LOG_BUF_SIZE];\n\tchar *keep_select = NULL;\n\n\tif (preq == NULL)\n\t\treturn;\n\n\tjid = preq->rq_ind.rq_relnodes.rq_jid;\n\tif (jid == NULL)\n\t\treturn;\n\n\t/*\n\t ** Returns job pointer for singleton job or \"parent\" of\n\t ** an array job.\n\t */\n\tpjob = chk_job_request(jid, preq, &jt, NULL);\n\tif (pjob == NULL) {\n\t\treturn;\n\t}\n\n\tif (jt == IS_ARRAY_NO) { /* a regular job is okay */\n\t\t/* the job must be running */\n\t\tif ((!check_job_state(pjob, JOB_STATE_LTR_RUNNING)) ||\n\t\t    (!check_job_substate(pjob, JOB_SUBSTATE_RUNNING))) {\n\t\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\t\treturn;\n\t\t}\n\t} else if (jt == IS_ARRAY_Single) { /* a single subjob is okay */\n\t\tchar sjst;\n\t\tint sjsst;\n\n\t\tpjob = get_subjob_and_state(pjob, get_index_from_jid(jid), &sjst, &sjsst);\n\t\tif (pjob == NULL || sjst == JOB_STATE_LTR_UNKNOWN) {\n\t\t\treq_reject(PBSE_UNKJOBID, 0, preq);\n\t\t\treturn;\n\t\t}\n\n\t\tif (sjst != JOB_STATE_LTR_RUNNING || sjsst != JOB_SUBSTATE_RUNNING) {\n\t\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\t\treturn;\n\t\t}\n\t} else {\n\t\treply_text(preq, PBSE_NOSUP,\n\t\t\t   \"not supported for Array Jobs or multiple sub-jobs\");\n\t\treturn;\n\t}\n\n\tnodeslist = preq->rq_ind.rq_relnodes.rq_node_list;\n\n\tif ((nodeslist != NULL) && (nodeslist[0] == '\\0')) {\n\t\tnodeslist = NULL;\n\t}\n\n\tif (preq->rq_extend && *preq->rq_extend) {\n\t\tkeep_select = 
strchr(preq->rq_extend, '=');\n\t\tif (keep_select && *(keep_select + 1)) {\n\t\t\tkeep_select++;\n\t\t\trc = validate_perm_res_in_select(keep_select, 1);\n\t\t} else\n\t\t\trc = PBSE_INVALSELECTRESC;\n\n\t\tif (rc != PBSE_NONE) {\n\t\t\treq_reject(rc, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n\n\trc = free_sister_vnodes(pjob, nodeslist, keep_select, msg, LOG_BUF_SIZE, preq);\n\n\tjob_save_db(pjob); /* we must save the updates anyway, if any */\n\n\tif (rc != 0) {\n\t\treply_text(preq, PBSE_SYSTEM, msg);\n\t}\n}\n"
  },
  {
    "path": "src/server/req_modify.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n *\n * @brief\n * \t\tFunctions relating to the Modify Job Batch Requests.\n *\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <sys/types.h>\n#include \"libpbs.h\"\n#include <signal.h>\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"server.h\"\n#include \"credential.h\"\n#include \"batch_request.h\"\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"queue.h\"\n#include \"work_task.h\"\n#include \"pbs_error.h\"\n#include \"log.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"hook.h\"\n#include \"sched_cmds.h\"\n#include \"pbs_internal.h\"\n#include \"pbs_sched.h\"\n#include \"acct.h\"\n\n/* Global Data Items: */\n\nextern attribute_def job_attr_def[];\nextern char *msg_jobmod;\nextern char *msg_manager;\nextern char *msg_mombadmodify;\nextern char *msg_defproject;\nextern char *msg_max_no_minwt;\nextern char *msg_min_gt_maxwt;\nextern char *msg_nostf_jobarray;\nextern int comp_resc_gt;\nextern int comp_resc_lt;\nextern char *resc_in_err;\n\nextern int scheduler_jobs_stat;\nextern int resc_access_perm;\nextern char *msg_nostf_resv;\n\nint modify_resv_attr(resc_resv *presv, svrattrl *plist, int perm, int *bad);\nextern void revert_alter_reservation(resc_resv *presv);\nextern int 
gen_future_reply(resc_resv *presv, long fromNow);\nextern job *chk_job_request(char *, struct batch_request *, int *, int *);\nextern resc_resv *chk_rescResv_request(char *, struct batch_request *);\nvoid clear_job_estimate(struct work_task *ptask);\n\n/**\n * @brief\n * \t\tClear job estimate.\n *\n * @par\tFunctionality:\n *\t\tIf the server attribute clear_topjob_estimates_enable is set to True,\n *\t\tthe job estimates when and where the job will run are cleared.\n *\n * @param[in,out]\tptask\t- work task\n */\nvoid\nclear_job_estimate(struct work_task *ptask)\n{\n\tjob *pjob;\n\tpjob = (job *) ptask->wt_parm1;\n\n\tif (!get_sattr_long(SVR_ATR_clear_est_enable)) {\n\t\treturn;\n\t}\n\n\tif (is_jattr_set(pjob, JOB_ATR_estimated)) {\n\t\tclear_jattr(pjob, JOB_ATR_estimated);\n\t}\n}\n/*\n * post_modify_req - clean up after sending modify request to MOM\n */\nstatic void\npost_modify_req(struct work_task *pwt)\n{\n\tstruct batch_request *preq;\n\n\tif (pwt->wt_aux2 != PROT_TPP)\n\t\tsvr_disconnect(pwt->wt_event); /* close connection to MOM */\n\tpreq = pwt->wt_parm1;\n\tpreq->rq_conn = preq->rq_orgconn; /* restore socket to client */\n\n\tif (preq->rq_reply.brp_code) {\n\t\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t   preq->rq_ind.rq_modify.rq_objname, msg_mombadmodify, preq->rq_reply.brp_code);\n\t\treq_reject(preq->rq_reply.brp_code, 0, preq);\n\t} else\n\t\treply_ack(preq);\n}\n\n/**\n * @brief\n * \t\tService the Modify Job Request from a client such as qalter.\n *\n * @par\tFunctionality:\n *\t\tThis request atomically modifies one or more of a job's attributes.\n *\t\tAn error is returned to the client if the user does not have permission\n *\t\tto perform the modification, the attribute is read-only, the job is\n *\t\trunning and the attribute is only modifiable when the job is not\n *\t\trunning, or the user attempts to modify a subjob of an array.\n *\n *\t\tIf any \"modifyjob\" hooks are in place, they modify the request before\n 
*\t\tthe Server does anything with the request.\n *\n * @param[in] preq - pointer to batch request from client\n */\n\nvoid\nreq_modifyjob(struct batch_request *preq)\n{\n\tint add_to_am_list = 0; /* if altered during sched cycle */\n\tint bad = 0;\n\tint jt; /* job type */\n\tchar newstate;\n\tint newsubstate;\n\tresource_def *outsideselect = NULL;\n\tjob *pjob;\n\tsvrattrl *plist;\n\tresource *presc;\n\tresource_def *prsd;\n\tint rc;\n\tint running = 0;\n\tint sendmom = 0;\n\tchar hook_msg[HOOK_MSG_SIZE];\n\tint mod_project = 0;\n\tpbs_sched *psched;\n\n\tswitch (process_hooks(preq, hook_msg, sizeof(hook_msg),\n\t\t\t      pbs_python_set_interrupt)) {\n\t\tcase 0: /* explicit reject */\n\t\t\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\t\t\treturn;\n\t\tcase 1:\t\t\t\t\t    /* explicit accept */\n\t\t\tif (recreate_request(preq) == -1) { /* error */\n\t\t\t\t/* we have to reject the request, as 'preq' */\n\t\t\t\t/* may have been partly modified            */\n\t\t\t\tstrcpy(hook_msg,\n\t\t\t\t       \"modifyjob event: rejected request\");\n\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t  LOG_ERR, \"\", hook_msg);\n\t\t\t\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase 2: /* no hook script executed - go ahead and accept event*/\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_INFO, \"\", \"modifyjob event: accept req by default\");\n\t}\n\n\tpjob = chk_job_request(preq->rq_ind.rq_modify.rq_objname, preq, &jt, NULL);\n\tif (pjob == NULL)\n\t\treturn;\n\n\tif ((jt == IS_ARRAY_Single) || (jt == IS_ARRAY_Range)) {\n\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\treturn;\n\t}\n\n\tpsched = find_sched_from_sock(preq->rq_conn, CONN_SCHED_PRIMARY);\n\t/* allow scheduler to modify job */\n\tif (psched == NULL) {\n\t\t/* provisioning job is not allowed to be modified */\n\t\tif ((check_job_state(pjob, JOB_STATE_LTR_RUNNING)) &&\n\t\t    (check_job_substate(pjob, 
JOB_SUBSTATE_PROVISION))) {\n\t\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n\n\t/* cannot be in exiting or transit, exiting has already been checked */\n\n\tif (check_job_state(pjob, JOB_STATE_LTR_TRANSIT)) {\n\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\treturn;\n\t}\n\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_modify.rq_attr);\n\tif (plist == NULL) { /* nothing to do */\n\t\treply_ack(preq);\n\t\treturn;\n\t}\n\n\t/*\n\t * Special checks must be made:\n\t *\tif during a scheduling cycle and certain attributes are altered,\n\t *\t   make a note of the job to prevent it from being run now;\n\t *\tif job is running, only certain attributes/resources can be\n\t *\t   altered.\n\t */\n\n\tif (check_job_state(pjob, JOB_STATE_LTR_RUNNING)) {\n\t\trunning = 1;\n\t}\n\twhile (plist) {\n\t\tint i;\n\n\t\ti = find_attr(job_attr_idx, job_attr_def, plist->al_name);\n\n\t\t/*\n\t\t * Is the attribute being altered one which could change\n\t\t * scheduling (ATR_DFLAG_SCGALT set) and if a scheduling\n\t\t * cycle is in progress, then set flag to add the job to list\n\t\t * of jobs which cannot be run in this cycle.\n\t\t * If the scheduler itself sends a modify job request,\n\t\t * no need to delay the job until next cycle.\n\t\t * Guard with i >= 0: find_attr() returns -1 for an unknown\n\t\t * attribute name, which is rejected just below.\n\t\t */\n\t\tif ((i >= 0) && (psched == NULL) && (scheduler_jobs_stat) && (job_attr_def[i].at_flags & ATR_DFLAG_SCGALT))\n\t\t\tadd_to_am_list = 1;\n\n\t\t/* Is the attribute modifiable in RUN state ? 
*/\n\n\t\tif (i < 0) {\n\t\t\treply_badattr(PBSE_NOATTR, 1, plist, preq);\n\t\t\treturn;\n\t\t}\n\t\tif ((running == 1) &&\n\t\t    ((job_attr_def[i].at_flags & ATR_DFLAG_ALTRUN) == 0)) {\n\n\t\t\treply_badattr(PBSE_MODATRRUN, 1, plist, preq);\n\t\t\treturn;\n\t\t}\n\t\tif (i == (int) JOB_ATR_resource) {\n\t\t\tprsd = find_resc_def(svr_resc_def, plist->al_resc);\n\t\t\tif (prsd == 0) {\n\t\t\t\treply_badattr(PBSE_UNKRESC, 1, plist, preq);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\t/* is the specified resource modifiable while */\n\t\t\t/* the job is running                         */\n\n\t\t\tif (running) {\n\n\t\t\t\tif ((prsd->rs_flags & ATR_DFLAG_ALTRUN) == 0) {\n\t\t\t\t\treply_badattr(PBSE_MODATRRUN, 1, plist, preq);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\n\t\t\t\tsendmom = 1;\n\t\t\t}\n\n\t\t\t/* should the resource be only in a select spec */\n\n\t\t\tif (prsd->rs_flags & ATR_DFLAG_CVTSLT && !outsideselect &&\n\t\t\t    plist->al_atopl.value && plist->al_atopl.value[0]) {\n\t\t\t\t/* if \"-lresource\" is set and has non-NULL value,\n\t\t\t\t** remember as potential bad resource\n\t\t\t\t** if this appears along \"select\".\n\t\t\t\t*/\n\t\t\t\toutsideselect = prsd;\n\t\t\t}\n\t\t}\n\t\tif (strcmp(plist->al_name, ATTR_project) == 0) {\n\t\t\tmod_project = 1;\n\t\t} else if ((strcmp(plist->al_name, ATTR_runcount) == 0) &&\n\t\t\t   ((plist->al_flags & ATR_VFLAG_HOOK) == 0) &&\n\t\t\t   (plist->al_value != NULL) &&\n\t\t\t   (plist->al_value[0] != '\\0') &&\n\t\t\t   ((preq->rq_perm & (ATR_DFLAG_MGWR | ATR_DFLAG_OPWR)) == 0) &&\n\t\t\t   (atol(plist->al_value) <\n\t\t\t    get_jattr_long(pjob, JOB_ATR_runcount))) {\n\t\t\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t\t   pjob->ji_qs.ji_jobid, \"regular user %s@%s cannot decrease '%s' attribute value from %ld to %ld\",\n\t\t\t\t   preq->rq_user, preq->rq_host, ATTR_runcount,\n\t\t\t\t   get_jattr_long(pjob, JOB_ATR_runcount),\n\t\t\t\t   atol(plist->al_value));\n\t\t\treq_reject(PBSE_PERM, 0, 
preq);\n\t\t\treturn;\n\t\t} else if ((strcmp(plist->al_name, ATTR_topjob) == 0) &&\n\t\t\t\t(plist->al_value != NULL) &&\n\t\t\t\t(strcmp(plist->al_value, \"False\") == 0)) {\n\t\t\tset_task(WORK_Immed, 0, clear_job_estimate, (void *) pjob);\n\t\t}\n\t\tplist = (svrattrl *) GET_NEXT(plist->al_link);\n\t}\n\n\tif (outsideselect) {\n\t\tpresc = find_resc_entry(get_jattr(pjob, JOB_ATR_resource), &svr_resc_def[RESC_SELECT]);\n\t\tif (presc && ((presc->rs_value.at_flags & ATR_VFLAG_DEFLT) == 0)) {\n\t\t\t/* select is not a default, so reject qalter */\n\t\t\tresc_in_err = strdup(outsideselect->rs_name);\n\t\t\treq_reject(PBSE_INVALJOBRESC, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n\n\t/* modify the jobs attributes */\n\n\tbad = 0;\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_modify.rq_attr);\n\trc = modify_job_attr(pjob, plist, preq->rq_perm, &bad);\n\tif (rc) {\n\t\tif (pjob->ji_clterrmsg)\n\t\t\treply_text(preq, rc, pjob->ji_clterrmsg);\n\t\telse\n\t\t\treply_badattr(rc, bad, plist, preq);\n\t\treturn;\n\t}\n\n\t/* If certain attributes modified and if in scheduling cycle  */\n\t/* then add to list of jobs which cannot be run in this cycle */\n\n\tif (add_to_am_list)\n\t\tam_jobs_add(pjob); /* see req_runjob() */\n\n\t/* check if project attribute was requested to be modified to */\n\t/* be the default project value */\n\tif (mod_project && is_jattr_set(pjob, JOB_ATR_project)) {\n\n\t\tif (strcmp(get_jattr_str(pjob, JOB_ATR_project),\n\t\t\t   PBS_DEFAULT_PROJECT) == 0) {\n\t\t\tsprintf(log_buffer, msg_defproject,\n\t\t\t\tATTR_project, PBS_DEFAULT_PROJECT);\n#ifdef NAS /* localmod 107 */\n\t\t\tlog_event(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n#else\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n#endif /* localmod 107 */\n\t\t}\n\t}\n\n\tif ((get_jattr(pjob, JOB_ATR_resource))->at_flags & ATR_VFLAG_MODIFY) {\n\t\tpresc = find_resc_entry(get_jattr(pjob, 
JOB_ATR_resource), &svr_resc_def[RESC_SELECT]);\n\t\tif (presc && (presc->rs_value.at_flags & ATR_VFLAG_DEFLT)) {\n\t\t\t/* changing Resource_List and select is a default   */\n\t\t\t/* clear \"select\" so it is rebuilt in set_resc_deflt */\n\t\t\tsvr_resc_def[RESC_SELECT].rs_free(&presc->rs_value);\n\t\t}\n\t}\n\n\t/* Reset any default resource limits which might have been unset */\n\tif ((rc = set_resc_deflt((void *) pjob, JOB_OBJECT, NULL)) != 0) {\n\t\treq_reject(rc, 0, preq);\n\t\treturn;\n\t}\n\n\tif (find_sched_from_sock(preq->rq_conn, CONN_SCHED_PRIMARY) == NULL)\n\t\tlog_alter_records_for_attrs(pjob, plist);\n\n\t/* if job is not running, may need to change its state */\n\tif (!check_job_state(pjob, JOB_STATE_LTR_RUNNING)) {\n\t\tsvr_evaljobstate(pjob, &newstate, &newsubstate, 0);\n\t\tsvr_setjobstate(pjob, newstate, newsubstate);\n\t}\n\n\tjob_save_db(pjob); /* we must save the updates anyway, if any */\n\n\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid, msg_manager, msg_jobmod, preq->rq_user, preq->rq_host);\n\n\t/* if a resource limit changed for a running job, send to MOM */\n\tif (sendmom) {\n\t\trc = relay_to_mom(pjob, preq, post_modify_req);\n\t\tif (rc)\n\t\t\treq_reject(rc, 0, preq); /* unable to get to MOM */\n\t\treturn;\n\t}\n\n\treply_ack(preq);\n}\n\n/**\n * @brief\n * \t\tReturns the svrattrl entry matching attribute 'name', or NULL if not found.\n *\n * @param[in]\tplist\t-\thead of svrattrl list\n * @param[in]\tname\t-\tmatching attribute 'name'\n *\n * @return\tsvrattrl entry matching attribute 'name'\n * @retval\tNULL\t: if entry not found\n */\nstatic svrattrl *\nfind_name_in_svrattrl(svrattrl *plist, char *name)\n{\n\n\tif (!name)\n\t\treturn NULL;\n\n\twhile (plist) {\n\n\t\tif (strcmp(plist->al_name, name) == 0) {\n\t\t\treturn plist;\n\t\t}\n\n\t\tplist = (svrattrl *) GET_NEXT(plist->al_link);\n\t}\n\treturn NULL;\n}\n\n/**\n * @brief\n * \t\tmodify_job_attr - modify the attributes of a job 
atomically\n *\t\tUsed by req_modifyjob() to alter the job attributes and by\n *\t\tstat_update() [see req_stat.c] to update with latest from MOM\n *\n * @param[in,out]\tpjob\t-\tjob structure\n * @param[in,out]\tplist\t-\tPointer to list of attributes\n * @param[in]\tperm\t-\tPermissions of the caller requesting the operation\n * @param[out]\tbad\t-\tPointer to the attribute index in case of a failed modification\n */\nint\nmodify_job_attr(job *pjob, svrattrl *plist, int perm, int *bad)\n{\n\tint changed_resc;\n\tint allow_unkn;\n\tlong i;\n\tattribute *newattr;\n\tattribute *pre_copy;\n\tattribute *attr_save;\n\tattribute *pattr;\n\tresource *prc;\n\tint rc;\n\tchar newstate = -1;\n\tint newsubstate = -1;\n\tlong newaccruetype = -1;\n\n\tif (pjob->ji_qhdr->qu_qs.qu_type == QTYPE_Execution)\n\t\tallow_unkn = -1;\n\telse\n\t\tallow_unkn = (int) JOB_ATR_UNKN;\n\n\tpattr = pjob->ji_wattr;\n\n\t/* call attr_atomic_set to decode and set a copy of the attributes.\n\t * We need 2 copies: 1 for copying to pattr and 1 for calling the action functions\n\t * We can't use the same copy for the action functions because copying to pattr\n\t * is a shallow copy and array pointers will be cleared during the copy.\n\t */\n\n\tnewattr = calloc(JOB_ATR_LAST, sizeof(attribute));\n\tif (newattr == NULL)\n\t\treturn PBSE_SYSTEM;\n\trc = attr_atomic_set(plist, pattr, newattr, job_attr_idx, job_attr_def, JOB_ATR_LAST, allow_unkn, perm, bad);\n\tif (rc) {\n\t\tattr_atomic_kill(newattr, job_attr_def, JOB_ATR_LAST);\n\t\treturn rc;\n\t}\n\n\tpre_copy = calloc(JOB_ATR_LAST, sizeof(attribute));\n\tif (pre_copy == NULL) {\n\t\tattr_atomic_kill(newattr, job_attr_def, JOB_ATR_LAST);\n\t\treturn PBSE_SYSTEM;\n\t}\n\tattr_atomic_copy(pre_copy, newattr, job_attr_def, JOB_ATR_LAST);\n\n\tattr_save = calloc(JOB_ATR_LAST, sizeof(attribute));\n\tif (attr_save == NULL) {\n\t\tattr_atomic_kill(newattr, job_attr_def, JOB_ATR_LAST);\n\t\tattr_atomic_kill(pre_copy, job_attr_def, JOB_ATR_LAST);\n\t\treturn 
PBSE_SYSTEM;\n\t}\n\n\tattr_atomic_copy(attr_save, pattr, job_attr_def, JOB_ATR_LAST);\n\n\t/* If resource limits are being changed ... */\n\n\tchanged_resc = is_attr_set(&newattr[JOB_ATR_resource]);\n\tif ((rc == 0) && (changed_resc != 0)) {\n\n\t\t/* first, remove ATR_VFLAG_DEFLT from any value which was set */\n\t\t/* it can no longer be a \"default\" as it explicitly changed   */\n\n\t\tprc = (resource *) GET_NEXT(newattr[(int) JOB_ATR_resource].at_val.at_list);\n\t\twhile (prc) {\n\t\t\tif ((prc->rs_value.at_flags & (ATR_VFLAG_MODIFY | ATR_VFLAG_DEFLT)) == (ATR_VFLAG_MODIFY | ATR_VFLAG_DEFLT))\n\t\t\t\tprc->rs_value.at_flags &= ~ATR_VFLAG_DEFLT;\n\n\t\t\tif ((prc->rs_value.at_flags & (ATR_VFLAG_MODIFY | ATR_VFLAG_SET)) == (ATR_VFLAG_MODIFY | ATR_VFLAG_SET)) {\n\t\t\t\t/* if being changed at all, see if \"select\" */\n\t\t\t\tif (prc->rs_defin == &svr_resc_def[RESC_SELECT]) {\n\t\t\t\t\t/* select is modified, recalc chunk sums */\n\t\t\t\t\trc = set_chunk_sum(&prc->rs_value, &newattr[(int) JOB_ATR_resource]);\n\t\t\t\t\tif (rc)\n\t\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\tprc = (resource *) GET_NEXT(prc->rs_link);\n\t\t}\n\n\t\t/* Manager/Operator can modify job just about any old way     */\n\t\t/* So, the following checks are made only if not the Op/Admin */\n\n\t\tif ((perm & (ATR_DFLAG_MGWR | ATR_DFLAG_OPWR)) == 0) {\n\t\t\tif (check_job_state(pjob, JOB_STATE_LTR_RUNNING)) {\n\n\t\t\t\t/* regular user cannot raise the limits of a running job */\n\n\t\t\t\tif ((comp_resc(get_jattr(pjob, JOB_ATR_resource), &newattr[(int) JOB_ATR_resource]) == -1) ||\n\t\t\t\t    comp_resc_lt)\n\t\t\t\t\trc = PBSE_PERM;\n\t\t\t}\n\n\t\t\t/* Also check against queue, system and entity limits */\n\n\t\t\tif (rc == 0) {\n\t\t\t\trc = chk_resc_limits(&newattr[(int) JOB_ATR_resource],\n\t\t\t\t\t\t     pjob->ji_qhdr);\n\t\t\t}\n\t\t\tif (rc == 0) {\n\t\t\t\trc = check_entity_resc_limit_max(pjob, pjob->ji_qhdr,\n\t\t\t\t\t\t\t\t &newattr[(int) JOB_ATR_resource]);\n\t\t\t\tif (rc 
== 0) {\n\t\t\t\t\trc = check_entity_resc_limit_queued(pjob, pjob->ji_qhdr,\n\t\t\t\t\t\t\t\t\t    &newattr[(int) JOB_ATR_resource]);\n\t\t\t\t\tif (rc == 0) {\n\t\t\t\t\t\trc = check_entity_resc_limit_max(pjob, NULL,\n\t\t\t\t\t\t\t\t\t\t &newattr[(int) JOB_ATR_resource]);\n\t\t\t\t\t\tif (rc == 0)\n\t\t\t\t\t\t\trc = check_entity_resc_limit_queued(pjob, NULL,\n\t\t\t\t\t\t\t\t\t\t\t    &newattr[(int) JOB_ATR_resource]);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t/* special check on permissions for hold */\n\n\tif ((rc == 0) &&\n\t    (newattr[(int) JOB_ATR_hold].at_flags & ATR_VFLAG_MODIFY)) {\n\t\tsvrattrl *hold_e = find_name_in_svrattrl(plist, ATTR_h);\n\t\t/* don't perform permission check if Hold_Types attribute */\n\t\t/* was set in a hook script (special privilege) */\n\t\tif ((hold_e == NULL) ||\n\t\t    ((hold_e->al_flags & ATR_VFLAG_HOOK) == 0)) {\n\t\t\ti = newattr[(int) JOB_ATR_hold].at_val.at_long ^\n\t\t\t    (pattr + (int) JOB_ATR_hold)->at_val.at_long;\n\t\t\trc = chk_hold_priv(i, perm);\n\t\t}\n\t}\n\n\tif ((rc == 0) &&\n\t    ((newattr[(int) JOB_ATR_userlst].at_flags & ATR_VFLAG_MODIFY) ||\n\t     (newattr[(int) JOB_ATR_grouplst].at_flags & ATR_VFLAG_MODIFY))) {\n\t\t/* Need to reset execution uid and gid */\n\t\trc = set_objexid((void *) pjob, JOB_OBJECT, newattr);\n\t}\n\n\tif (rc) {\n\t\tattr_atomic_kill(newattr, job_attr_def, JOB_ATR_LAST);\n\t\tattr_atomic_kill(attr_save, job_attr_def, JOB_ATR_LAST);\n\t\tattr_atomic_kill(pre_copy, job_attr_def, JOB_ATR_LAST);\n\t\treturn (rc);\n\t}\n\n\t/* OK, if resources changed, reset entity sums */\n\n\tif (changed_resc) {\n\t\taccount_entity_limit_usages(pjob, NULL,\n\t\t\t\t\t    &newattr[(int) JOB_ATR_resource], INCR, ETLIM_ACC_ALL_RES);\n\t\taccount_entity_limit_usages(pjob, pjob->ji_qhdr,\n\t\t\t\t\t    &newattr[(int) JOB_ATR_resource], INCR, ETLIM_ACC_ALL_RES);\n\t}\n\n\t/* Now copy the new values into the job attribute array for the purposes of running the action functions */\n\n\tfor (i = 
0; i < JOB_ATR_LAST; i++) {\n\t\tif (newattr[i].at_flags & ATR_VFLAG_MODIFY) {\n\t\t\t/*\n\t\t\t * The function update_eligible_time() expects it is the only one setting accrue_type.\n\t\t\t * If we set it here, it will get confused.  There is no action function for accrue_type,\n\t\t\t * so pre-setting it for the action function calls isn't required.\n\t\t\t */\n\t\t\tif (i == JOB_ATR_accrue_type)\n\t\t\t\tcontinue;\n\t\t\tfree_attr(job_attr_def, &pattr[i], i);\n\t\t\tif ((pre_copy[i].at_type == ATR_TYPE_LIST) ||\n\t\t\t    (pre_copy[i].at_type == ATR_TYPE_RESC)) {\n\t\t\t\tlist_move(&pre_copy[i].at_val.at_list,\n\t\t\t\t\t  &pattr[i].at_val.at_list);\n\t\t\t} else {\n\t\t\t\tpattr[i] = pre_copy[i];\n\t\t\t}\n\t\t\t/* ATR_VFLAG_MODCACHE will be included if set */\n\t\t\tpattr[i].at_flags = pre_copy[i].at_flags;\n\t\t}\n\t}\n\n\tfor (i = 0; i < JOB_ATR_LAST; i++) {\n\t\t/* Check newattr instead of pattr for modify.  It is possible that\n\t\t * the attribute already has the modify flag before we added the new attributes to it.\n\t\t * We only want to call the action functions for attributes which are being modified by this function.\n\t\t */\n\t\tif (newattr[i].at_flags & ATR_VFLAG_MODIFY) {\n\t\t\tif ((job_attr_def[i].at_flags & ATR_DFLAG_NOSAVM))\n\t\t\t\tcontinue;\n\n\t\t\tif (job_attr_def[i].at_action) {\n\t\t\t\trc = job_attr_def[i].at_action(&newattr[i],\n\t\t\t\t\t\t\t       pjob, ATR_ACTION_ALTER);\n\t\t\t\tif (rc) {\n\t\t\t\t\t*bad = i;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\tif (rc) {\n\t\tattr_atomic_copy(pjob->ji_wattr, attr_save, job_attr_def, JOB_ATR_LAST);\n\t\tfree(pre_copy);\n\t\tattr_atomic_kill(newattr, job_attr_def, JOB_ATR_LAST);\n\t\tattr_atomic_kill(attr_save, job_attr_def, JOB_ATR_LAST);\n\t\treturn (rc);\n\t}\n\n\t/* The action functions may have modified the attributes, so copy the final values from newattr into the job */\n\tfor (i = 0; i < JOB_ATR_LAST; i++) {\n\t\tif (newattr[i].at_flags & ATR_VFLAG_MODIFY) {\n\t\t\tfree_attr(job_attr_def, 
&pattr[i], i);\n\t\t\tswitch (i) {\n\t\t\t\tcase JOB_ATR_state:\n\t\t\t\t\tnewstate = get_attr_c(&newattr[i]);\n\t\t\t\t\tbreak;\n\t\t\t\tcase JOB_ATR_substate:\n\t\t\t\t\tnewsubstate = get_attr_l(&newattr[i]);\n\t\t\t\t\tbreak;\n\t\t\t\tcase JOB_ATR_accrue_type:\n\t\t\t\t\tnewaccruetype = get_attr_l(&newattr[i]);\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\tif ((newattr[i].at_type == ATR_TYPE_LIST) ||\n\t\t\t\t\t    (newattr[i].at_type == ATR_TYPE_RESC)) {\n\t\t\t\t\t\tlist_move(&newattr[i].at_val.at_list,\n\t\t\t\t\t\t\t  &pattr[i].at_val.at_list);\n\t\t\t\t\t} else {\n\t\t\t\t\t\tpattr[i] = newattr[i];\n\t\t\t\t\t}\n\t\t\t}\n\t\t\t/* ATR_VFLAG_MODCACHE will be included if set */\n\t\t\tpattr[i].at_flags = newattr[i].at_flags;\n\t\t}\n\t}\n\n\tif (newstate != -1 && newsubstate != -1) {\n\t\tsvr_setjobstate(pjob, newstate, newsubstate);\n\t}\n\n\tif (newaccruetype != -1)\n\t\tupdate_eligible_time(newaccruetype, pjob);\n\n\tfree(newattr);\n\tfree(pre_copy);\n\tattr_atomic_kill(attr_save, job_attr_def, JOB_ATR_LAST);\n\treturn (0);\n}\n\n/**\n * @brief determine if one schedselect is made up of fewer or equal number of\n *\t\tthe same chunk types with possibly some chunk types removed.\n *\n * @return int\n * @retval 1 fewer or equal chunks\n * @retval 0 more chunks or different chunks\n * @retval -1 error\n */\nint\nis_select_smaller(char *select_orig, char *select_smaller)\n{\n\tchar *spec1, *spec2;\n\tchar *subspec1, *subspec2;\n\tchar *last1, *last2;\n\tint num_chunks1, num_chunks2;\n\tchar *rest1, *rest2;\n\n\tif (select_orig == NULL || select_smaller == NULL)\n\t\treturn -1;\n\n\tif ((spec1 = strdup(select_orig)) == NULL) {\n\t\tlog_err(errno, __func__, \"Failed to allocate memory\");\n\t\treturn -1;\n\t}\n\tif ((spec2 = strdup(select_smaller)) == NULL) {\n\t\tfree(spec1);\n\t\tlog_err(errno, __func__, \"Failed to allocate memory\");\n\t\treturn -1;\n\t}\n\n\tsubspec2 = parse_plus_spec_r(spec2, &last2, NULL);\n\tnum_chunks2 = strtol(subspec2, &rest2, 
10);\n\n\tfor (subspec1 = parse_plus_spec_r(spec1, &last1, NULL); subspec1;\n\t     subspec1 = parse_plus_spec_r(last1, &last1, NULL)) {\n\t\tnum_chunks1 = strtol(subspec1, &rest1, 10);\n\n\t\tif (strcmp(rest1, rest2) == 0 && num_chunks2 <= num_chunks1) {\n\t\t\tsubspec2 = parse_plus_spec_r(last2, &last2, NULL);\n\t\t\tif (subspec2 == NULL)\n\t\t\t\tbreak;\n\t\t\tnum_chunks2 = strtol(subspec2, &rest2, 10);\n\t\t}\n\t}\n\n\tfree(spec1);\n\tfree(spec2);\n\n\tif (subspec2 != NULL)\n\t\treturn 0;\n\n\treturn 1;\n}\n\n/**\n * @brief This function creates a request run destination string for a reservation.\n *\t  This string is in the same format as preq->rq_ind.rq_run.rq_destin\n * @param[in] presv - reservation for which string is being made.\n * @return destination string\n */\nchar *\ncreate_resv_destination(resc_resv *presv)\n{\n\tchar *format = \"%s\";\n\tchar *destin = NULL;\n\tif (presv == NULL)\n\t\treturn NULL;\n\t/* standing reservations format is <num>#<exec-vnode>{<num>} */\n\tif (get_rattr_long(presv, RESV_ATR_resv_standing))\n\t\tformat = \"1#%s{0}\";\n\tpbs_asprintf(&destin, format, get_rattr_str(presv, RESV_ATR_resv_nodes));\n\treturn destin;\n}\n\n/**\n * @brief This function creates a batch_request to confirm a reservation\n * @param[in] presv - reservation that is being confirmed.\n *\n * @return - Batch request\n */\nstatic struct batch_request *\ncreate_resv_confirm_req(resc_resv *presv)\n{\n\tstruct batch_request *confirm_req;\n\tchar *part;\n\tconfirm_req = alloc_br(PBS_BATCH_ConfirmResv);\n\tif (confirm_req == NULL)\n\t\treturn NULL;\n\tif (is_rattr_set(presv, RESV_ATR_partition))\n\t\tpart = get_rattr_str(presv, RESV_ATR_partition);\n\telse\n\t\tpart = DEFAULT_PARTITION;\n\n\tif (pbs_asprintf(&confirm_req->rq_extend, \"%s:partition=%s\", PBS_RESV_CONFIRM_SUCCESS, part) == -1) {\n\t\tfree_br(confirm_req);\n\t\treturn NULL;\n\t}\n\tpbs_strncpy(confirm_req->rq_ind.rq_run.rq_jid, presv->ri_qs.ri_resvID, 
sizeof(confirm_req->rq_ind.rq_run.rq_jid));\n\tconfirm_req->rq_ind.rq_run.rq_resch = get_rattr_long(presv, RESV_ATR_start);\n\tif (is_rattr_set(presv, RESV_ATR_resv_nodes)) {\n\t\tconfirm_req->rq_ind.rq_run.rq_destin = create_resv_destination(presv);\n\t\tif (confirm_req->rq_ind.rq_run.rq_destin == NULL) {\n\t\t\tfree_br(confirm_req);\n\t\t\treturn NULL;\n\t\t}\n\t}\n\treturn confirm_req;\n}\n\n/**\n * @brief Save the current starttime, duration, and select of a reservation.\n * \t\tTo be reverted by revert_alter_reservation\n *\n * @param[in] presv pointer to reservation\n */\nvoid\nsave_alter_reservation(resc_resv *presv)\n{\n\n\tattribute *alter_revert;\n\tresource_def *presdef = NULL;\n\tresource *resc = NULL, *pselect = NULL;\n\tattribute atemp = {0};\n\n\talter_revert = get_rattr(presv, RESV_ATR_alter_revert);\n\tif (is_attr_set(alter_revert))\n\t\treturn;\n\n\tpresdef = &svr_resc_def[RESC_START_TIME];\n\tresc = add_resource_entry(alter_revert, presdef);\n\tatemp.at_flags = ATR_VFLAG_SET;\n\tatemp.at_type = ATR_TYPE_LONG;\n\tatemp.at_val.at_long = get_rattr_long(presv, RESV_ATR_start);\n\tpresdef->rs_set(&resc->rs_value, &atemp, SET);\n\n\tpresdef = &svr_resc_def[RESC_WALLTIME];\n\tresc = add_resource_entry(alter_revert, presdef);\n\tatemp.at_val.at_long = get_rattr_long(presv, RESV_ATR_duration);\n\tpresdef->rs_set(&resc->rs_value, &atemp, SET);\n\n\tpresdef = &svr_resc_def[RESC_SELECT];\n\tpselect = find_resc_entry(get_rattr(presv, RESV_ATR_resource), presdef);\n\tresc = add_resource_entry(alter_revert, presdef);\n\tatemp.at_type = ATR_TYPE_STR;\n\tatemp.at_val.at_str = pselect->rs_value.at_val.at_str;\n\tpresdef->rs_set(&resc->rs_value, &atemp, SET);\n\n\tset_rattr_str_slim(presv, RESV_ATR_SchedSelect_orig, get_rattr_str(presv, RESV_ATR_SchedSelect), NULL);\n}\n\n/**\n * @brief Revert the alter for the reservation.\n * \t\tRestore the values stored in save_alter_reservation\n *\n * @param[in] presv pointer to reservation\n 
*/\nvoid\nrevert_alter_reservation(resc_resv *presv)\n{\n\tattribute *alter_revert;\n\tresource_def *presdef = NULL;\n\tresource *resc = NULL, *resc2 = NULL;\n\tattribute atemp = {0};\n\tattribute *resc_attr;\n\tint state = 0;\n\tint sub = 0;\n\n\talter_revert = get_rattr(presv, RESV_ATR_alter_revert);\n\tif (!is_attr_set(alter_revert))\n\t\treturn;\n\n\t/* revert start time */\n\tpresdef = &svr_resc_def[RESC_START_TIME];\n\tresc = find_resc_entry(alter_revert, presdef);\n\tset_rattr_l_slim(presv, RESV_ATR_start, get_attr_l(&resc->rs_value), SET);\n\tpresv->ri_qs.ri_stime = resc->rs_value.at_val.at_long;\n\n\t/* revert duration and walltime */\n\tpresdef = &svr_resc_def[RESC_WALLTIME];\n\tresc = find_resc_entry(alter_revert, presdef);\n\tset_rattr_l_slim(presv, RESV_ATR_duration, get_attr_l(&resc->rs_value), SET);\n\tpresv->ri_qs.ri_duration = resc->rs_value.at_val.at_long;\n\n\tresc_attr = get_rattr(presv, RESV_ATR_resource);\n\tresc = find_resc_entry(resc_attr, presdef);\n\tpresdef->rs_set(&resc->rs_value, get_rattr(presv, RESV_ATR_duration), SET);\n\n\t/* revert end time */\n\tset_rattr_l_slim(presv, RESV_ATR_end, get_rattr_long(presv, RESV_ATR_start) + get_rattr_long(presv, RESV_ATR_duration), SET);\n\tpresv->ri_qs.ri_etime = get_rattr_long(presv, RESV_ATR_end);\n\n\t/* revert resource_list.select */\n\tpresdef = &svr_resc_def[RESC_SELECT];\n\tresc = find_resc_entry(alter_revert, presdef);\n\tresc2 = find_resc_entry(resc_attr, presdef);\n\tatemp.at_flags = ATR_VFLAG_SET;\n\tatemp.at_type = ATR_TYPE_STR;\n\tatemp.at_val.at_str = resc->rs_value.at_val.at_str;\n\tpresdef->rs_set(&resc2->rs_value, &atemp, SET);\n\tpresdef->rs_free(&resc->rs_value);\n\tset_chunk_sum(&resc2->rs_value, resc_attr);\n\tpost_attr_set(resc_attr);\n\tset_rattr_str_slim(presv, RESV_ATR_SchedSelect, get_rattr_str(presv, RESV_ATR_SchedSelect_orig), NULL);\n\tfree_rattr(presv, RESV_ATR_SchedSelect_orig);\n\n\tpresv->ri_alter.ra_flags = 0;\n\tfree_rattr(presv, 
RESV_ATR_alter_revert);\n\n\teval_resvState(presv, RESVSTATE_alter_failed, 0, &state, &sub);\n\t/* While requesting alter, substate was retained, so we use the same here. */\n\t(void) resv_setResvState(presv, state, presv->ri_qs.ri_substate);\n}\n\n/**\n * @brief Save the original starttime, duration, and select of a standing reservation.\n *\n * @param[in] presv pointer to reservation\n */\nvoid\nsave_standing_reservation(resc_resv *presv)\n{\n\tattribute *standing;\n\tattribute atemp = {0};\n\tresource_def *presdef = NULL;\n\tresource *resc = NULL;\n\n\tstanding = get_rattr(presv, RESV_ATR_standing_revert);\n\tif (is_attr_set(standing))\n\t\treturn;\n\n\tpost_attr_set(standing);\n\n\tpresdef = &svr_resc_def[RESC_START_TIME];\n\tresc = add_resource_entry(standing, presdef);\n\tatemp.at_flags = ATR_VFLAG_SET;\n\tatemp.at_type = ATR_TYPE_LONG;\n\tatemp.at_val.at_long = get_rattr_long(presv, RESV_ATR_start);\n\tpresdef->rs_set(&resc->rs_value, &atemp, SET);\n\n\tpresdef = &svr_resc_def[RESC_WALLTIME];\n\tresc = add_resource_entry(standing, presdef);\n\tatemp.at_val.at_long = get_rattr_long(presv, RESV_ATR_duration);\n\tpresdef->rs_set(&resc->rs_value, &atemp, SET);\n\n\tpresdef = &svr_resc_def[RESC_SELECT];\n\tresc = find_resc_entry(get_rattr(presv, RESV_ATR_resource), presdef);\n\tatemp.at_type = ATR_TYPE_STR;\n\tatemp.at_val.at_str = resc->rs_value.at_val.at_str;\n\tresc = add_resource_entry(standing, presdef);\n\tpresdef->rs_set(&resc->rs_value, &atemp, SET);\n}\n\n/**\n * @brief Service the Modify Reservation Request from a client such as pbs_ralter.\n *\n *\tThis request atomically modifies one or more of a reservation's attributes.\n *\tAn error is returned to the client if the user does not have permission\n *\tto perform the modification, the attribute is read-only, or the reservation is\n *\trunning and the attribute is only modifiable when the reservation is not\n *\trunning or is empty.\n *\n * @param[in] preq - pointer to batch request from client\n 
*/\nvoid\nreq_modifyReservation(struct batch_request *preq)\n{\n\tchar *rid = NULL;\n\tsvrattrl *psatl = NULL;\n\tattribute_def *pdef = NULL;\n\tint rc = 0;\n\tint bad = 0;\n\tchar buf[PBS_MAXUSER + PBS_MAXHOSTNAME + 32] = {0};\n\tint sock;\n\tint resc_access_perm_save = 0;\n\tint send_to_scheduler = 0;\n\tint log_len = 0;\n\tchar *fmt = \"%a %b %d %H:%M:%S %Y\";\n\tint is_standing = 0;\n\tint next_occr_start = 0;\n\textern char *msg_stdg_resv_occr_conflict;\n\tresc_resv *presv;\n\tint num_jobs;\n\tlong new_end_time = 0;\n\tresource *presc = NULL;\n\tint scheds_notified = 0;\n\tint force_alter = FALSE;\n\tchar hook_msg[HOOK_MSG_SIZE] = {0};\n\n\tif (preq == NULL)\n\t\treturn;\n\n\tswitch (process_hooks(preq, hook_msg, sizeof(hook_msg),\n\t\t\t      pbs_python_set_interrupt)) {\n\t\tcase 0: /* explicit reject */\n\t\t\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\t\t\treturn;\n\t\tcase 1:\t\t\t\t\t    /* explicit accept */\n\t\t\tif (recreate_request(preq) == -1) { /* error */\n\t\t\t\t/* we have to reject the request, as 'preq' */\n\t\t\t\t/* may have been partly modified            */\n\t\t\t\tstrcpy(hook_msg,\n\t\t\t\t       \"modifyresv event: rejected request\");\n\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t  LOG_ERR, \"\", hook_msg);\n\t\t\t\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase 2: /* no hook script executed - go ahead and accept event*/\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_INFO, \"\", \"modifyresv event: accept req by default\");\n\t}\n\n\tsock = preq->rq_conn;\n\n\tpresv = chk_rescResv_request(preq->rq_ind.rq_modify.rq_objname, preq);\n\t/* Note: on failure, chk_rescResv_request invokes req_reject\n\t * appropriate reply is sent and batch_request is freed.\n\t */\n\tif (presv == NULL)\n\t\treturn;\n\n\tif (preq->rq_extend != NULL && (strcmp(preq->rq_extend, FORCE) == 0))\n\t\tforce_alter = TRUE;\n\n\trid = 
preq->rq_ind.rq_modify.rq_objname;\n\tpresv = find_resv(rid);\n\n\tif (presv == NULL) {\n\t\treq_reject(PBSE_UNKRESVID, 0, preq);\n\t\treturn;\n\t}\n\n\tif (get_rattr_long(presv, RESV_ATR_state) == RESV_BEING_ALTERED) {\n\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\treturn;\n\t}\n\n\tnum_jobs = presv->ri_qp->qu_numjobs;\n\tif (svr_chk_history_conf()) {\n\t\tnum_jobs -= (presv->ri_qp->qu_njstate[JOB_STATE_MOVED] + presv->ri_qp->qu_njstate[JOB_STATE_FINISHED] +\n\t\t\t     presv->ri_qp->qu_njstate[JOB_STATE_EXPIRED]);\n\t}\n\n\tis_standing = get_rattr_long(presv, RESV_ATR_resv_standing);\n\tif (is_standing)\n\t\tnext_occr_start = get_occurrence(get_rattr_str(presv, RESV_ATR_resv_rrule),\n\t\t\t\t\t\t get_rattr_long(presv, RESV_ATR_start),\n\t\t\t\t\t\t get_rattr_str(presv, RESV_ATR_resv_timezone), 2);\n\n\tresc_access_perm = preq->rq_perm;\n\tresc_access_perm_save = resc_access_perm;\n\tpsatl = (svrattrl *) GET_NEXT(preq->rq_ind.rq_modify.rq_attr);\n\tpresv->ri_alter.ra_flags = 0;\n\tpresv->ri_alter.ra_state = get_rattr_long(presv, RESV_ATR_state);\n\n\t/* Only set this once to the original start/duration */\n\tif (is_standing && !is_rattr_set(presv, RESV_ATR_standing_revert))\n\t\tsave_standing_reservation(presv);\n\n\twhile (psatl) {\n\t\tlong temp = 0;\n\t\tchar *end = NULL;\n\t\tint index;\n\n\t\t/* identify the attribute by name */\n\t\tindex = find_attr(resv_attr_idx, resv_attr_def, psatl->al_name);\n\t\tif (index < 0) {\n\t\t\t/* didn't recognize the name */\n\t\t\treply_badattr(PBSE_NOATTR, 1, psatl, preq);\n\t\t\treturn;\n\t\t}\n\t\tpdef = &resv_attr_def[index];\n\n\t\t/* Do the attribute's definition flags indicate that\n\t\t * we have sufficient permission to write the attribute?\n\t\t */\n\n\t\tresc_access_perm = resc_access_perm_save; /* reset */\n\t\tif (psatl->al_flags & ATR_VFLAG_HOOK) {\n\t\t\tresc_access_perm = ATR_DFLAG_USWR |\n\t\t\t\t\t   ATR_DFLAG_OPWR |\n\t\t\t\t\t   ATR_DFLAG_MGWR |\n\t\t\t\t\t   ATR_DFLAG_SvWR |\n\t\t\t\t\t   
ATR_DFLAG_Creat;\n\t\t}\n\t\tif ((pdef->at_flags & resc_access_perm) == 0) {\n\t\t\treply_badattr(PBSE_ATTRRO, 1, psatl, preq);\n\t\t\treturn;\n\t\t}\n\n\t\tswitch (index) {\n\t\t\tcase RESV_ATR_start:\n\t\t\t\tif (get_rattr_long(presv, RESV_ATR_state) != RESV_RUNNING || !num_jobs) {\n\t\t\t\t\ttemp = strtol(psatl->al_value, &end, 10);\n\t\t\t\t\tif (temp > time(NULL)) {\n\t\t\t\t\t\tif (!is_standing || (temp < next_occr_start)) {\n\t\t\t\t\t\t\tsend_to_scheduler = 1;\n\t\t\t\t\t\t\tsave_alter_reservation(presv);\n\t\t\t\t\t\t\tpresv->ri_alter.ra_flags |= RESV_START_TIME_MODIFIED;\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\trevert_alter_reservation(presv);\n\t\t\t\t\t\t\tlog_event(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_INFO,\n\t\t\t\t\t\t\t\t  preq->rq_ind.rq_modify.rq_objname, msg_stdg_resv_occr_conflict);\n\t\t\t\t\t\t\treq_reject(PBSE_STDG_RESV_OCCR_CONFLICT, 0, preq);\n\t\t\t\t\t\t\treturn;\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\trevert_alter_reservation(presv);\n\t\t\t\t\t\treq_reject(PBSE_BADTSPEC, 0, preq);\n\t\t\t\t\t\treturn;\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\trevert_alter_reservation(presv);\n\t\t\t\t\tif (num_jobs)\n\t\t\t\t\t\treq_reject(PBSE_RESV_NOT_EMPTY, 0, preq);\n\t\t\t\t\telse\n\t\t\t\t\t\treq_reject(PBSE_BADTSPEC, 0, preq);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\n\t\t\t\tbreak;\n\t\t\tcase RESV_ATR_end:\n\t\t\t\ttemp = strtol(psatl->al_value, &end, 10);\n\t\t\t\tif (!is_standing || temp < next_occr_start) {\n\t\t\t\t\tsend_to_scheduler = 1;\n\t\t\t\t\tpresv->ri_alter.ra_flags |= RESV_END_TIME_MODIFIED;\n\t\t\t\t\tsave_alter_reservation(presv);\n\t\t\t\t} else {\n\t\t\t\t\trevert_alter_reservation(presv);\n\t\t\t\t\tlog_event(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_INFO,\n\t\t\t\t\t\t  preq->rq_ind.rq_modify.rq_objname, msg_stdg_resv_occr_conflict);\n\t\t\t\t\treq_reject(PBSE_STDG_RESV_OCCR_CONFLICT, 0, preq);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\n\t\t\t\tbreak;\n\t\t\tcase RESV_ATR_duration:\n\t\t\t\tsend_to_scheduler = 
1;\n\t\t\t\tpresv->ri_alter.ra_flags |= RESV_DURATION_MODIFIED;\n\t\t\t\tsave_alter_reservation(presv);\n\t\t\t\tbreak;\n\t\t\tcase RESV_ATR_resource:\n\t\t\t\tif (force_alter) {\n\t\t\t\t\trevert_alter_reservation(presv);\n\t\t\t\t\treq_reject(PBSE_NOSUP, 0, preq);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tif (strcmp(psatl->al_resc, \"select\") != 0) {\n\t\t\t\t\trevert_alter_reservation(presv);\n\t\t\t\t\treq_reject(PBSE_BADATVAL, 0, preq);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tif (get_rattr_long(presv, RESV_ATR_substate) == RESV_IN_CONFLICT)\n\t\t\t\t\treturn;\n\t\t\t\tsave_alter_reservation(presv);\n\n\t\t\t\tsend_to_scheduler = 1;\n\t\t\t\tpresv->ri_alter.ra_flags |= RESV_SELECT_MODIFIED;\n\t\t\t\tpresc = find_resc_entry(get_rattr(presv, RESV_ATR_resource), &svr_resc_def[RESC_SELECT]);\n\t\t\t\tif (presc == NULL) {\n\t\t\t\t\treq_reject(PBSE_INTERNAL, 0, preq);\n\t\t\t\t\trevert_alter_reservation(presv);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tset_rattr_str_slim(presv, RESV_ATR_SchedSelect_orig, get_rattr_str(presv, RESV_ATR_SchedSelect), NULL);\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tbreak;\n\t\t}\n\n\t\tpsatl = (svrattrl *) GET_NEXT(psatl->al_link);\n\t}\n\t/* Force option is only applied to attributes that require reconfirmation */\n\tif ((send_to_scheduler == 0) && (force_alter == TRUE)) {\n\t\trevert_alter_reservation(presv);\n\t\treq_reject(PBSE_NOSUP, 0, preq);\n\t\treturn;\n\t}\n\n\tif (get_rattr_long(presv, RESV_ATR_state) == RESV_RUNNING && num_jobs) {\n\t\tif ((presv->ri_alter.ra_flags & RESV_DURATION_MODIFIED) && (presv->ri_alter.ra_flags & RESV_END_TIME_MODIFIED)) {\n\t\t\trevert_alter_reservation(presv);\n\t\t\treq_reject(PBSE_RESV_NOT_EMPTY, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tbad = 0;\n\tpsatl = (svrattrl *) GET_NEXT(preq->rq_ind.rq_modify.rq_attr);\n\tif (psatl) {\n\t\trc = modify_resv_attr(presv, psatl, preq->rq_perm, &bad);\n\t\tif (rc != 0) {\n\t\t\treply_badattr(rc, bad, psatl, 
preq);\n\t\t\trevert_alter_reservation(presv);\n\t\t\treturn;\n\t\t}\n\t}\n\tresc_access_perm = resc_access_perm_save;\n\n\tnew_end_time = get_rattr_long(presv, RESV_ATR_start) + get_rattr_long(presv, RESV_ATR_duration);\n\n\tif (is_standing && new_end_time >= next_occr_start) {\n\t\tlog_event(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_INFO,\n\t\t\t  preq->rq_ind.rq_modify.rq_objname, msg_stdg_resv_occr_conflict);\n\t\treq_reject(PBSE_STDG_RESV_OCCR_CONFLICT, 0, preq);\n\t\trevert_alter_reservation(presv);\n\t\treturn;\n\t}\n\n\tif (presv->ri_alter.ra_flags & RESV_SELECT_MODIFIED) {\n\t\tpresc = find_resc_entry(get_rattr(presv, RESV_ATR_resource), &svr_resc_def[RESC_SELECT]);\n\t\tmake_schedselect(get_rattr(presv, RESV_ATR_resource), presc, NULL, get_rattr(presv, RESV_ATR_SchedSelect));\n\t\tif (is_select_smaller(get_rattr_str(presv, RESV_ATR_SchedSelect_orig), get_rattr_str(presv, RESV_ATR_SchedSelect)) == 0) {\n\t\t\treq_reject(PBSE_SELECT_NOT_SUBSET, 0, preq);\n\t\t\trevert_alter_reservation(presv);\n\t\t\treturn;\n\t\t}\n\t\trc = set_chunk_sum(&presc->rs_value, get_rattr(presv, RESV_ATR_resource));\n\t\tif (rc) {\n\t\t\treq_reject(rc, 0, preq);\n\t\t\trevert_alter_reservation(presv);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tif (send_to_scheduler) {\n\t\tresv_setResvState(presv, RESV_BEING_ALTERED, presv->ri_qs.ri_substate);\n\t\tif (presv->ri_alter.ra_flags & (RESV_START_TIME_MODIFIED | RESV_END_TIME_MODIFIED | RESV_DURATION_MODIFIED)) {\n\t\t\t/* \"start\", \"end\", \"duration\", and \"wall\"; derive and check */\n\t\t\tif (start_end_dur_wall(presv)) {\n\t\t\t\treq_reject(PBSE_BADTSPEC, 0, preq);\n\t\t\t\trevert_alter_reservation(presv);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\t/* walltime can change */\n\t\t\tpost_attr_set(get_rattr(presv, RESV_ATR_resource));\n\t\t}\n\t}\n\n\t/* If Authorized_Users is modified, we need to update the queue's acl_users\n\t * Authorized_Users cannot be unset, it must always have a value\n\t * The queue will have acl_user_enable set to 1 by 
default\n\t * If Authorized_Groups is modified, we need to update the queue's acl_groups and acl_group_enable\n\t * Authorized_Groups could be unset, so we need to update the queue accordingly, unsetting both acl_groups and acl_group_enable\n\t */\n\tif ((get_rattr(presv, RESV_ATR_auth_u))->at_flags & ATR_VFLAG_MODIFY) {\n\t\tsvrattrl *pattrl;\n\t\tresv_attr_def[(int) RESV_ATR_auth_u].at_encode(get_rattr(presv, RESV_ATR_auth_u), NULL, resv_attr_def[RESV_ATR_auth_u].at_name, NULL, ATR_ENCODE_CLIENT, &pattrl);\n\t\tset_qattr_str_slim(presv->ri_qp, QA_ATR_AclUsers, pattrl->al_atopl.value, NULL);\n\t\tfree(pattrl);\n\t}\n\tif ((get_rattr(presv, RESV_ATR_auth_g))->at_flags & ATR_VFLAG_MODIFY) {\n\t\tif (is_rattr_set(presv, RESV_ATR_auth_g)) {\n\t\t\tsvrattrl *pattrl = NULL;\n\t\t\tresv_attr_def[(int) RESV_ATR_auth_g].at_encode(get_rattr(presv, RESV_ATR_auth_g), NULL, resv_attr_def[RESV_ATR_auth_g].at_name, NULL, ATR_ENCODE_CLIENT, &pattrl);\n\t\t\tset_qattr_str_slim(presv->ri_qp, QE_ATR_AclGroup, pattrl->al_atopl.value, NULL);\n\t\t\tif (!is_qattr_set(presv->ri_qp, QE_ATR_AclGroupEnabled) || get_qattr_long(presv->ri_qp, QE_ATR_AclGroupEnabled) == 0)\n\t\t\t\tset_qattr_l_slim(presv->ri_qp, QE_ATR_AclGroupEnabled, 1, SET);\n\t\t\tque_save_db(presv->ri_qp);\n\t\t\tfree(pattrl);\n\t\t} else {\n\t\t\tfree_rattr(presv, RESV_ATR_auth_g);\n\t\t\tfree_qattr(presv->ri_qp, QE_ATR_AclGroup);\n\t\t\tfree_qattr(presv->ri_qp, QE_ATR_AclGroupEnabled);\n\t\t\tque_save_db(presv->ri_qp);\n\t\t}\n\t}\n\n\tif (send_to_scheduler)\n\t\tscheds_notified = notify_scheds_about_resv(SCH_SCHEDULE_RESV_RECONFIRM, presv);\n\n\tsprintf(log_buffer, \"Attempting to modify reservation\");\n\tif (presv->ri_alter.ra_flags & RESV_START_TIME_MODIFIED) {\n\t\tlong dt = get_rattr_long(presv, RESV_ATR_start);\n\t\tstrftime(buf, sizeof(buf), fmt, localtime((time_t *) &dt));\n\t\tlog_len = strlen(log_buffer);\n\t\tsnprintf(log_buffer + log_len, sizeof(log_buffer) - log_len, \" start=%s\", buf);\n\t}\n\tif 
(presv->ri_alter.ra_flags & RESV_END_TIME_MODIFIED) {\n\t\tlong dt = get_rattr_long(presv, RESV_ATR_end);\n\t\tstrftime(buf, sizeof(buf), fmt, localtime((time_t *) &dt));\n\t\tlog_len = strlen(log_buffer);\n\t\tsnprintf(log_buffer + log_len, sizeof(log_buffer) - log_len, \" end=%s\", buf);\n\t}\n\tif (presv->ri_alter.ra_flags & RESV_SELECT_MODIFIED) {\n\t\tlog_len = strlen(log_buffer);\n\t\tsnprintf(log_buffer + log_len, sizeof(log_buffer) - log_len, \" select=%s\", presc->rs_value.at_val.at_str);\n\t}\n\tlog_event(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_INFO, preq->rq_ind.rq_modify.rq_objname, log_buffer);\n\n\tif (force_alter == TRUE) {\n\t\tif (scheds_notified == 0) {\t\t\t\t    /* No schedulers notified, just enforce confirmation */\n\t\t\tif (presv->ri_alter.ra_state == RESV_UNCONFIRMED) { /* No need to do anything, just make the change */\n\t\t\t\tresv_setResvState(presv, RESV_UNCONFIRMED, RESV_UNCONFIRMED);\n\t\t\t\tpresv->ri_alter.ra_flags = 0;\n\t\t\t\tpresv->ri_alter.ra_state = 0;\n\t\t\t} else {\n\t\t\t\tstruct batch_request *confirm_req;\n\t\t\t\tstruct work_task *pwt;\n\t\t\t\tconfirm_req = create_resv_confirm_req(presv);\n\t\t\t\tif (confirm_req == NULL) {\n\t\t\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\t\t\trevert_alter_reservation(presv);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tconfirm_req->rq_perm = preq->rq_perm;\n\t\t\t\tif (issue_Drequest(PBS_LOCAL_CONNECTION, confirm_req, release_req, &pwt, 0) == -1) {\n\t\t\t\t\tfree_br(confirm_req);\n\t\t\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\t\t\trevert_alter_reservation(presv);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tappend_link(&presv->ri_svrtask, &pwt->wt_linkobj, pwt);\n\t\t\t}\n\t\t\tsnprintf(buf, sizeof(buf), \"%s CONFIRMED\", presv->ri_qs.ri_resvID);\n\t\t\treply_text(preq, PBSE_NONE, buf);\n\t\t\treturn;\n\t\t} else\n\t\t\tpresv->ri_alter.ra_flags |= RESV_ALTER_FORCED;\n\t}\n\n\tif (is_rattr_set(presv, RESV_ATR_interactive) == 0) {\n\t\tchar buf1[PBS_MAXUSER + PBS_MAXHOSTNAME + 32] = 
{0};\n\t\t/* Not \"interactive\", so don't wait on the scheduler; reply now */\n\n\t\tsprintf(buf, \"%s ALTER REQUESTED\", presv->ri_qs.ri_resvID);\n\t\tsprintf(buf1, \"requestor=%s@%s\", preq->rq_user, preq->rq_host);\n\n\t\tif ((rc = reply_text(preq, PBSE_NONE, buf))) {\n\t\t\t/* reply failed, close connection; DON'T purge resv */\n\t\t\tclose_client(sock);\n\t\t\treturn;\n\t\t}\n\t} else {\n\t\t/* Don't reply back until scheduler decides */\n\t\tlong dt;\n\t\tpresv->ri_brp = preq;\n\t\tdt = get_rattr_long(presv, RESV_ATR_interactive);\n\t\t/* reply with id and state if no decision in +dt secs */\n\t\t(void) gen_future_reply(presv, dt);\n\t\t(void) snprintf(buf, sizeof(buf), \"requestor=%s@%s Interactive=%ld\",\n\t\t\t\tpreq->rq_user, preq->rq_host, dt);\n\t}\n}\n\n/**\n * @brief modify the attributes of a reservation atomically.\n *\n * @param[in]  presv - pointer to the reservation structure.\n * @param[in]  plist - list of attributes to modify.\n * @param[in]  perm  - permissions.\n * @param[out] bad   - the index of the attribute which caused an error.\n *\n * @return 0 on success.\n * @return PBS error code.\n */\nint\nmodify_resv_attr(resc_resv *presv, svrattrl *plist, int perm, int *bad)\n{\n\tint allow_unkn = 0;\n\tlong i = 0;\n\tattribute newattr[(int) RESV_ATR_LAST];\n\tattribute *pattr;\n\tint rc = 0;\n\n\tif (presv == NULL || plist == NULL)\n\t\treturn PBSE_INTERNAL;\n\n\tallow_unkn = -1;\n\tpattr = presv->ri_wattr;\n\n\t/* call attr_atomic_set to decode and set a copy of the attributes */\n\n\trc = attr_atomic_set(plist, pattr, newattr, resv_attr_idx, resv_attr_def, RESV_ATR_LAST, allow_unkn, perm, bad);\n\tif (rc == 0) {\n\t\tfor (i = 0; i < RESV_ATR_LAST; i++) {\n\t\t\tif (newattr[i].at_flags & ATR_VFLAG_MODIFY) {\n\t\t\t\tif (resv_attr_def[i].at_action) {\n\t\t\t\t\trc = resv_attr_def[i].at_action(&newattr[i],\n\t\t\t\t\t\t\t\t\tpresv, ATR_ACTION_ALTER);\n\t\t\t\t\tif (rc)\n\t\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif ((rc == 0) &&\n\t\t    
((newattr[(int) RESV_ATR_userlst].at_flags & ATR_VFLAG_MODIFY) ||\n\t\t     (newattr[(int) RESV_ATR_grouplst].at_flags & ATR_VFLAG_MODIFY))) {\n\t\t\t/* Need to reset execution uid and gid */\n\t\t\trc = set_objexid((void *) presv, JOB_OBJECT, newattr);\n\t\t}\n\t}\n\tif (rc) {\n\t\tfor (i = 0; i < RESV_ATR_LAST; i++)\n\t\t\tresv_attr_def[i].at_free(newattr + i);\n\t\treturn (rc);\n\t}\n\n\t/* Now copy the new values into the reservation attribute array */\n\n\tfor (i = 0; i < RESV_ATR_LAST; i++) {\n\t\tif (newattr[i].at_flags & ATR_VFLAG_MODIFY) {\n\t\t\tresv_attr_def[i].at_free(pattr + i);\n\t\t\tif ((newattr[i].at_type == ATR_TYPE_LIST) || (newattr[i].at_type == ATR_TYPE_RESC)) {\n\t\t\t\tlist_move(&newattr[i].at_val.at_list, &(pattr + i)->at_val.at_list);\n\t\t\t} else {\n\t\t\t\t*(pattr + i) = newattr[i];\n\t\t\t}\n\t\t\t/* ATR_VFLAG_MODCACHE will be included if set */\n\t\t\t(pattr + i)->at_flags = newattr[i].at_flags;\n\t\t}\n\t}\n\n\treturn (0);\n}\n"
  },
  {
    "path": "src/server/req_movejob.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    req_movejob.c\n *\n * @brief\n * \t\treq_movejob.c - function to move a job to another queue\n *\n * Included functions are:\n * \treq_movejob()\n * \treq_orderjob()\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <sys/types.h>\n#include <sys/param.h>\n#include \"libpbs.h\"\n#include <errno.h>\n\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"credential.h\"\n#include \"batch_request.h\"\n#include \"resv_node.h\"\n#include \"queue.h\"\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"log.h\"\n#include \"pbs_error.h\"\n#include \"hook.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n\n/* Global Data Items: */\n\nextern char *msg_badstate;\nextern char *msg_manager;\nextern char *msg_movejob;\nextern int pbs_errno;\n\n/**\n * @brief\n * \t\treq_movejob = move a job to a new destination (local or remote)\n *\n * @param[in,out]\treq\t-\tthe batch request\n */\n\nvoid\nreq_movejob(struct batch_request *req)\n{\n\tint jt; /* job type */\n\tjob *jobp;\n\tchar hook_msg[HOOK_MSG_SIZE];\n\n\tswitch (process_hooks(req, hook_msg, sizeof(hook_msg), pbs_python_set_interrupt)) {\n\t\tcase 0: /* explicit reject */\n\t\t\treply_text(req, PBSE_HOOKERROR, hook_msg);\n\t\t\treturn;\n\t\tcase 1:\t\t\t\t\t   /* explicit 
accept */\n\t\t\tif (recreate_request(req) == -1) { /* error */\n\t\t\t\t/* we have to reject the request, as 'req' */\n\t\t\t\t/* may have been partly modified */\n\t\t\t\tstrcpy(hook_msg,\n\t\t\t\t       \"movejob event: rejected request\");\n\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t  LOG_ERR, \"\", hook_msg);\n\t\t\t\treply_text(req, PBSE_HOOKERROR, hook_msg);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase 2: /* no hook script executed - go ahead and accept event*/\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_INFO, \"\", \"movejob event: accept req by default\");\n\t}\n\n\tjobp = chk_job_request(req->rq_ind.rq_move.rq_jid, req, &jt, NULL);\n\n\tif (jobp == NULL)\n\t\treturn;\n\n\tif ((jt != IS_ARRAY_NO) && (jt != IS_ARRAY_ArrayJob)) {\n\t\treq_reject(PBSE_IVALREQ, 0, req);\n\t\treturn;\n\t}\n\n\tif (!check_job_state(jobp, JOB_STATE_LTR_QUEUED) &&\n\t    !check_job_state(jobp, JOB_STATE_LTR_HELD) &&\n\t    !check_job_state(jobp, JOB_STATE_LTR_WAITING)) {\n#ifndef NDEBUG\n\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t   jobp->ji_qs.ji_jobid, \"(%s) %s, state=%c\",\n\t\t\t   __func__, msg_badstate, get_job_state(jobp));\n#endif /* NDEBUG */\n\t\treq_reject(PBSE_BADSTATE, 0, req);\n\t\treturn;\n\t}\n\n\tif (jt != IS_ARRAY_NO) {\n\t\t/* cannot move Subjob and can only move array job if */\n\t\t/* no subjobs are running\t\t\t     */\n\t\tif ((jt != IS_ARRAY_ArrayJob) ||\n\t\t    (jobp->ji_ajinfo->tkm_subjsct[JOB_STATE_RUNNING] != 0)) {\n\t\t\treq_reject(PBSE_IVALREQ, 0, req);\n\t\t\treturn;\n\t\t}\n\t}\n\n\t/*\n\t * svr_movejob() does the real work, handles both local and\n\t * network moves\n\t */\n\n\tswitch (svr_movejob(jobp, req->rq_ind.rq_move.rq_destin, req)) {\n\t\tcase 0: /* success */\n\t\t\t(void) strcpy(log_buffer, msg_movejob);\n\t\t\t(void) sprintf(log_buffer + strlen(log_buffer),\n\t\t\t\t       msg_manager, req->rq_ind.rq_move.rq_destin,\n\t\t\t\t     
  req->rq_user, req->rq_host);\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  jobp->ji_qs.ji_jobid, log_buffer);\n\t\t\treply_ack(req);\n\t\t\tbreak;\n\t\tcase -1:\n\t\tcase -2:\n\t\tcase 1: /* fail */\n\t\t\tif (jobp)\n\t\t\t\tsvr_evalsetjobstate(jobp);\n\n\t\t\tif (jobp && jobp->ji_clterrmsg)\n\t\t\t\treply_text(req, pbs_errno, jobp->ji_clterrmsg);\n\t\t\telse\n\t\t\t\treq_reject(pbs_errno, 0, req);\n\t\t\tbreak;\n\t\tcase 2: /* deferred, will be handled by \t   */\n\t\t\t/* post_movejob() when the child completes */\n\t\t\tbreak;\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \t\treq_orderjob - reorder the jobs in a queue\n *\n * @param[in,out]\treq\t-\tthe batch request\n */\nvoid\nreq_orderjob(struct batch_request *req)\n{\n\tint jt1, jt2; /* job type */\n\tjob *pjob;\n\tjob *pjob1;\n\tjob *pjob2;\n\tlong long rank;\n\tint rc;\n\tchar tmpqn[PBS_MAXQUEUENAME + 1];\n\tconn_t *conn = NULL;\n\tchar *physhost = NULL;\n\n\tif ((pjob1 = chk_job_request(req->rq_ind.rq_move.rq_jid, req, &jt1, NULL)) == NULL)\n\t\treturn;\n\tif ((pjob2 = chk_job_request(req->rq_ind.rq_move.rq_destin, req, &jt2, NULL)) == NULL)\n\t\treturn;\n\tif ((jt1 == IS_ARRAY_Single) || (jt2 == IS_ARRAY_Single) ||\n\t    (jt1 == IS_ARRAY_Range) || (jt2 == IS_ARRAY_Range)) {\n\t\t/* can only move regular or Array Job, not Subjobs */\n\t\treq_reject(PBSE_IVALREQ, 0, req);\n\t\treturn;\n\t}\n\n\tif (check_job_state(pjob = pjob1, JOB_STATE_LTR_RUNNING) ||\n\t    check_job_state(pjob = pjob2, JOB_STATE_LTR_RUNNING) ||\n\t    check_job_state(pjob = pjob1, JOB_STATE_LTR_BEGUN) ||\n\t    check_job_state(pjob = pjob2, JOB_STATE_LTR_BEGUN)) {\n#ifndef NDEBUG\n\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t   pjob->ji_qs.ji_jobid, \"(%s) %s, state=%c\",\n\t\t\t   __func__, msg_badstate, get_job_state(pjob));\n#endif /* NDEBUG */\n\t\treq_reject(PBSE_BADSTATE, 0, req);\n\t\treturn;\n\t} else if (pjob1->ji_qhdr != pjob2->ji_qhdr) {\n\n\t\t/* Jobs are in different queues 
*/\n\n\t\tconn = get_conn(req->rq_conn);\n\t\tif (conn) {\n\t\t\tphyshost = conn->cn_physhost;\n\t\t}\n\n\t\tif ((rc = svr_chkque(pjob1, pjob2->ji_qhdr,\n\t\t\t\t     get_jattr_str(pjob1, JOB_ATR_submit_host),\n\t\t\t\t     physhost,\n\t\t\t\t     MOVE_TYPE_Order)) ||\n\t\t    (rc = svr_chkque(pjob2, pjob1->ji_qhdr,\n\t\t\t\t     get_jattr_str(pjob2, JOB_ATR_submit_host),\n\t\t\t\t     physhost,\n\t\t\t\t     MOVE_TYPE_Order))) {\n\t\t\treq_reject(rc, 0, req);\n\t\t\treturn;\n\t\t}\n\t}\n\n\t/* now swap the order of the two jobs in the queue lists */\n\n\trank = get_jattr_ll(pjob1, JOB_ATR_qrank);\n\tset_jattr_ll_slim(pjob1, JOB_ATR_qrank, get_jattr_ll(pjob2, JOB_ATR_qrank), SET);\n\tset_jattr_ll_slim(pjob2, JOB_ATR_qrank, rank, SET);\n\n\tif (pjob1->ji_qhdr != pjob2->ji_qhdr) {\n\t\t(void) strcpy(tmpqn, pjob1->ji_qs.ji_queue);\n\t\t(void) strcpy(pjob1->ji_qs.ji_queue, pjob2->ji_qs.ji_queue);\n\t\t(void) strcpy(pjob2->ji_qs.ji_queue, tmpqn);\n\t\tsvr_dequejob(pjob1);\n\t\tsvr_dequejob(pjob2);\n\t\t(void) svr_enquejob(pjob1, NULL);\n\t\t(void) svr_enquejob(pjob2, NULL);\n\n\t} else {\n\t\tswap_link(&pjob1->ji_jobque, &pjob2->ji_jobque);\n\t\tswap_link(&pjob1->ji_alljobs, &pjob2->ji_alljobs);\n\t}\n\n\t/* need to update disk copy of both jobs to save new order */\n\tjob_save_db(pjob1);\n\tjob_save_db(pjob2);\n\n\treply_ack(req);\n}\n"
  },
  {
    "path": "src/server/req_preemptjob.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * @file\treq_preemptjob.c\n *\n * Functions relating to the Preempt Jobs Batch Request.\n *\n * Included functions are:\n *\treq_preemptjobs()\n *\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <time.h>\n#include <sys/types.h>\n#include \"libpbs.h\"\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"server.h\"\n#include \"batch_request.h\"\n#include \"net_connect.h\"\n#include \"job.h\"\n#include \"pbs_error.h\"\n#include \"log.h\"\n#include \"acct.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n\nextern void post_signal_req(struct work_task *);\nstruct preempt_ordering *svr_get_preempt_order(job *pjob, pbs_sched *psched);\n\n/* Global Data Items: */\n\nextern struct server server;\n\nextern time_t time_now;\n\nconst static char *preempt_methods[] = {\n\t\"\",\n\t\"suspend\",\n\t\"checkpoint\",\n\t\"requeue\",\n\t\"delete\",\n\t\"\"};\n\n/**\n * @brief mark a job preemption as failed\n * @param[in] preempt_preq - the preemption preq from the scheduler\n * @param[in] job_id - the job to mark as failed\n * @return void\n */\nstatic void\njob_preempt_fail(struct batch_request *preempt_preq, char *job_id)\n{\n\tint preempt_index = preempt_preq->rq_reply.brp_un.brp_preempt_jobs.count;\n\tpreempt_job_info *preempt_jobs_list = 
preempt_preq->rq_reply.brp_un.brp_preempt_jobs.ppj_list;\n\n\tpreempt_preq->rq_reply.brp_code = 1;\n\tstrcpy(preempt_jobs_list[preempt_index].order, \"000\");\n\tsprintf(preempt_jobs_list[preempt_index].job_id, \"%s\", job_id);\n\tpreempt_preq->rq_reply.brp_un.brp_preempt_jobs.count++;\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG, job_id, \"Job failed to be preempted\");\n}\n\n/**\n * @brief create a local batch_request for a suspend request\n * @param[in] job_id - the job to create the request for\n * @return batch_request *\n * @retval new batch_request for suspend\n */\nstatic struct batch_request *\ncreate_suspend_request(char *job_id)\n{\n\tstruct batch_request *newreq;\n\n\tnewreq = alloc_br(PBS_BATCH_SignalJob);\n\tif (newreq == NULL)\n\t\treturn NULL;\n\tsnprintf(newreq->rq_ind.rq_signal.rq_jid, sizeof(newreq->rq_ind.rq_signal.rq_jid), \"%s\", job_id);\n\tsnprintf(newreq->rq_ind.rq_signal.rq_signame, sizeof(newreq->rq_ind.rq_signal.rq_signame), \"%s\", SIG_SUSPEND);\n\treturn newreq;\n}\n\n/**\n * @brief create a local batch_request for a checkpoint request\n * @param[in] job_id - the job to create the request for\n * @return batch_request *\n * @retval new batch_request for holdjob (checkpoint)\n */\n\nstatic struct batch_request *\ncreate_ckpt_request(char *job_id)\n{\n\tint hold_name_size;\n\tint hold_val_size = 2; /* 2 for 's' and '\\0' */\n\tstruct batch_request *newreq;\n\tsvrattrl *hold_svrattrl;\n\n\thold_name_size = strlen(job_attr_def[(int) JOB_ATR_hold].at_name) + 1;\n\tnewreq = alloc_br(PBS_BATCH_HoldJob);\n\thold_svrattrl = attrlist_alloc(hold_name_size, 0, hold_val_size);\n\tif (newreq == NULL || hold_svrattrl == NULL) {\n\t\tif (newreq != NULL)\n\t\t\tfree_br(newreq);\n\t\tif (hold_svrattrl != NULL)\n\t\t\tfree(hold_svrattrl);\n\t\treturn NULL;\n\t}\n\tsnprintf(newreq->rq_ind.rq_hold.rq_orig.rq_objname, sizeof(newreq->rq_ind.rq_hold.rq_orig.rq_objname), \"%s\", job_id);\n\tsnprintf(hold_svrattrl->al_name, hold_name_size, \"%s\", 
job_attr_def[(int) JOB_ATR_hold].at_name);\n\tsnprintf(hold_svrattrl->al_value, hold_val_size, \"s\");\n\tCLEAR_HEAD(newreq->rq_ind.rq_hold.rq_orig.rq_attr);\n\tappend_link(&newreq->rq_ind.rq_hold.rq_orig.rq_attr, &hold_svrattrl->al_link, hold_svrattrl);\n\n\treturn newreq;\n}\n\n/**\n * @brief create a local batch_request for a requeue request\n * @param[in] job_id - the job to create the request for\n * @return batch_request *\n * @retval new batch_request for rerun (requeue)\n */\n\nstatic struct batch_request *\ncreate_requeue_request(char *job_id)\n{\n\tstruct batch_request *newreq;\n\n\tnewreq = alloc_br(PBS_BATCH_Rerun);\n\tif (newreq == NULL)\n\t\treturn NULL;\n\n\tsnprintf(newreq->rq_ind.rq_signal.rq_jid, sizeof(newreq->rq_ind.rq_signal.rq_jid), \"%s\", job_id);\n\treturn newreq;\n}\n\n/**\n * @brief create a local batch_request for a delete request\n * @param[in] job_id - the job to create the request for\n * @return batch_request *\n * @retval new batch_request for delete\n */\n\nstatic struct batch_request *\ncreate_delete_request(char *job_id)\n{\n\tstruct batch_request *newreq;\n\tnewreq = alloc_br(PBS_BATCH_DeleteJob);\n\tif (newreq == NULL)\n\t\treturn NULL;\n\tsnprintf(newreq->rq_ind.rq_delete.rq_objname, sizeof(newreq->rq_ind.rq_delete.rq_objname), \"%s\", job_id);\n\treturn newreq;\n}\n\n/**\n * @brief create and issue local preemption request for a job\n * @param[in] preempt_method - preemption method\n * @param[in] pjob - the job to be preempted\n * @param[in] preq - the preempt request from the scheduler\n * @return success/failure\n */\nstatic int\nissue_preempt_request(int preempt_method, job *pjob, struct batch_request *preq)\n{\n\tstruct batch_request *newreq;\n\tstruct work_task *pwt;\n\n\tswitch (preempt_method) {\n\t\tcase PREEMPT_METHOD_SUSPEND:\n\t\t\tnewreq = create_suspend_request(pjob->ji_qs.ji_jobid);\n\t\t\tbreak;\n\t\tcase PREEMPT_METHOD_CHECKPOINT:\n\t\t\tnewreq = 
create_ckpt_request(pjob->ji_qs.ji_jobid);\n\t\t\tbreak;\n\t\tcase PREEMPT_METHOD_REQUEUE:\n\t\t\tnewreq = create_requeue_request(pjob->ji_qs.ji_jobid);\n\t\t\tbreak;\n\t\tcase PREEMPT_METHOD_DELETE:\n\t\t\tnewreq = create_delete_request(pjob->ji_qs.ji_jobid);\n\t\t\tbreak;\n\t\tdefault:\n\t\t\treturn 1;\n\t}\n\n\tif (newreq != NULL) {\n\t\tnewreq->rq_extend = NULL;\n\t\tsnprintf(newreq->rq_user, sizeof(newreq->rq_user), \"%s\", preq->rq_user);\n\t\tsnprintf(newreq->rq_host, sizeof(newreq->rq_host), \"%s\", preq->rq_host);\n\t\tnewreq->rq_perm = preq->rq_perm;\n\t\tif (issue_Drequest(PBS_LOCAL_CONNECTION, newreq, release_req, &pwt, 0) == -1) {\n\t\t\tfree_br(newreq);\n\t\t\treturn 1;\n\t\t}\n\t\tappend_link(&pjob->ji_svrtask, &pwt->wt_linkobj, pwt);\n\t} else\n\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief clear the system hold on a job after a checkpoint request\n * @param[in] pjob - the job to clear\n */\nstatic void\nclear_preempt_hold(job *pjob)\n{\n\tlong old_hold;\n\tint newsub;\n\tchar newstate;\n\n\told_hold = get_jattr_long(pjob, JOB_ATR_hold);\n\tset_jattr_generic(pjob, JOB_ATR_hold, \"s\", NULL, DECR);\n\n\tif (old_hold != get_jattr_long(pjob, JOB_ATR_hold)) {\n\t\tsvr_evaljobstate(pjob, &newstate, &newsub, 0);\n\t\tsvr_setjobstate(pjob, newstate, newsub); /* saves job */\n\t}\n\tif (get_jattr_long(pjob, JOB_ATR_hold) == 0)\n\t\tfree_jattr(pjob, JOB_ATR_Comment);\n}\n\n/**\n * @brief\n * req_preemptjobs- service the Preempt Jobs Request\n *\n * This request tries to preempt multiple jobs.\n * The state of the job may change as a result.\n *\n * @param[in,out]\tpreq\t- The Request\n */\n\nvoid\nreq_preemptjobs(struct batch_request *preq)\n{\n\tint i = 0;\n\tint count = 0;\n\tjob *pjob = NULL;\n\tpreempt_job_info *ppj = NULL;\n\tpbs_sched *psched;\n\tint preempt_index = 0;\n\tint preempt_total;\n\tpreempt_job_info *preempt_jobs_list;\n\n\tpreq->rq_reply.brp_code = 0;\n\tcount = preq->rq_ind.rq_preempt.count;\n\tpsched = 
find_sched_from_sock(preq->rq_conn, CONN_SCHED_PRIMARY);\n\tpreempt_total = preq->rq_ind.rq_preempt.count;\n\n\tif (psched == NULL) {\n\t\treq_reject(PBSE_INTERNAL, 0, preq);\n\t\treturn;\n\t}\n\n\tif ((preempt_jobs_list = calloc(sizeof(preempt_job_info), preempt_total)) == NULL) {\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\tlog_err(errno, __func__, \"Unable to allocate memory\");\n\t\treturn;\n\t}\n\n\tpreq->rq_reply.brp_un.brp_preempt_jobs.ppj_list = preempt_jobs_list;\n\tpreq->rq_reply.brp_choice = BATCH_REPLY_CHOICE_PreemptJobs;\n\tpreq->rq_reply.brp_un.brp_preempt_jobs.count = 0;\n\n\tfor (i = 0; i < count; i++) {\n\t\tppj = &(preq->rq_ind.rq_preempt.ppj_list[i]);\n\t\tpjob = find_job(ppj->job_id);\n\t\t/* The job is already out of the way. This must have happened after the scheduler\n\t\t * queried the universe and before it tried to preempt the jobs.\n\t\t * Regardless of the preempt_order, use the correct reply code to what\n\t\t * actually happened so the scheduler correctly handles the job.\n\t\t */\n\t\tif (pjob == NULL) {\n\t\t\tsprintf(preempt_jobs_list[preempt_index].job_id, \"%s\", ppj->job_id);\n\t\t\tstrcpy(preempt_jobs_list[preempt_index].order, \"D\");\n\t\t\tpreempt_index++;\n\t\t\tcontinue;\n\t\t}\n\n\t\tif (!check_job_state(pjob, JOB_STATE_LTR_RUNNING)) {\n\t\t\tsprintf(preempt_jobs_list[preempt_index].job_id, \"%s\", ppj->job_id);\n\t\t\tswitch (get_job_state(pjob)) {\n\t\t\t\tcase JOB_STATE_LTR_QUEUED:\n\t\t\t\t\tstrcpy(preempt_jobs_list[preempt_index].order, \"Q\");\n\t\t\t\t\tpreempt_index++;\n\t\t\t\t\tbreak;\n\t\t\t\tcase JOB_STATE_LTR_EXPIRED:\n\t\t\t\tcase JOB_STATE_LTR_FINISHED:\n\t\t\t\tcase JOB_STATE_LTR_MOVED:\n\t\t\t\t\tstrcpy(preempt_jobs_list[preempt_index].order, \"D\");\n\t\t\t\t\tpreempt_index++;\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\tjob_preempt_fail(preq, ppj->job_id);\n\t\t\t\t\tpreempt_index++;\n\t\t\t}\n\t\t\tcontinue;\n\t\t}\n\n\t\tpjob->ji_pmt_preq = preq;\n\n\t\tpjob->preempt_order = 
svr_get_preempt_order(pjob, psched);\n\t\tpjob->preempt_order_index = 0;\n\t\tif (issue_preempt_request((int) pjob->preempt_order[0].order[0], pjob, preq))\n\t\t\treply_preempt_jobs_request(PBSE_SYSTEM, (int) pjob->preempt_order[0].order[0], pjob);\n\t}\n\tpreq->rq_reply.brp_un.brp_preempt_jobs.count = preempt_index;\n\t/* check if all jobs failed */\n\tif (preempt_index == preempt_total)\n\t\treply_send(preq);\n}\n\n/**\n * @brief\n * reply_preempt_jobs_request- synthesize and reply to Preempt Jobs Request\n *\n * If an attempt to preempt the job fails, we use the next method to preempt\n * that job as per the preemption order,\n *\n * If the job gets preempted successfully, job-id is added to the reply.\n *\n * @param[in] code - determines if the job was preempted or not.\n * @param[in] aux  - determines the method by which job was preempted.\n * @param[in] pjob - the job in which we are replying to the preemption request\n */\n\nvoid\nreply_preempt_jobs_request(int code, int aux, struct job *pjob)\n{\n\tstruct batch_request *preq;\n\tint clear_preempt_vars = 0;\n\n\tif (pjob == NULL)\n\t\treturn;\n\n\tpreq = pjob->ji_pmt_preq;\n\n\tif (code != PBSE_NONE) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"preemption method %s failed for job (%d)\", preempt_methods[aux], code);\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer);\n\n\t\tif (pjob->preempt_order[0].order[pjob->preempt_order_index] == PREEMPT_METHOD_CHECKPOINT)\n\t\t\tclear_preempt_hold(pjob);\n\n\t\tpjob->preempt_order_index++;\n\t\tif (pjob->preempt_order[0].order[pjob->preempt_order_index] != PREEMPT_METHOD_LOW) {\n\t\t\tif (issue_preempt_request((int) pjob->preempt_order[0].order[pjob->preempt_order_index], pjob, preq)) {\n\t\t\t\tjob_preempt_fail(preq, pjob->ji_qs.ji_jobid);\n\t\t\t\tclear_preempt_vars = 1;\n\t\t\t} else {\n\t\t\t\t/* reply_preempt_jobs_request() is somewhat recursive.  
If a preemption method fails, one call will issue the next\n\t\t\t\t * preemption method request.  If the next preemption method request is immediately rejected, it will call\n\t\t\t\t * reply_preempt_jobs_request() again before the first call ends.  A reject like this is considered a successful\n\t\t\t\t * call to issue_Drequest().  If pjob->ji_pmt_preq has been NULLed, it means the last preemption method has failed.\n\t\t\t\t * If this is the last job in the preemption list, the call to reply_preempt_jobs_request() will have replied to\n\t\t\t\t * the scheduler.  In this case, we do not want to reply a second time.\n\t\t\t\t */\n\t\t\t\tif (pjob->ji_pmt_preq == NULL)\n\t\t\t\t\treturn;\n\t\t\t}\n\t\t} else {\n\t\t\tjob_preempt_fail(preq, pjob->ji_qs.ji_jobid);\n\t\t\tclear_preempt_vars = 1;\n\t\t}\n\t} else {\n\t\tint preempt_index;\n\t\tpreempt_job_info *preempt_jobs_list;\n\n\t\tpreempt_index = preq->rq_reply.brp_un.brp_preempt_jobs.count;\n\t\tpreempt_jobs_list = preq->rq_reply.brp_un.brp_preempt_jobs.ppj_list;\n\n\t\t/* successful preemption */\n\t\tset_jattr_l_slim(pjob, JOB_ATR_sched_preempted, time(0), SET);\n\t\tswitch (aux) {\n\t\t\tcase PREEMPT_METHOD_SUSPEND:\n\t\t\t\tstrcpy(preempt_jobs_list[preempt_index].order, \"S\");\n\t\t\t\tbreak;\n\t\t\tcase PREEMPT_METHOD_CHECKPOINT:\n\t\t\t\tstrcpy(preempt_jobs_list[preempt_index].order, \"C\");\n\t\t\t\tclear_preempt_hold(pjob);\n\t\t\t\tbreak;\n\t\t\tcase PREEMPT_METHOD_REQUEUE:\n\t\t\t\tstrcpy(preempt_jobs_list[preempt_index].order, \"Q\");\n\t\t\t\tbreak;\n\t\t\tcase PREEMPT_METHOD_DELETE:\n\t\t\t\tstrcpy(preempt_jobs_list[preempt_index].order, \"D\");\n\t\t\t\tbreak;\n\t\t}\n\t\tsprintf(preempt_jobs_list[preempt_index].job_id, \"%s\", pjob->ji_qs.ji_jobid);\n\t\tclear_preempt_vars = 1;\n\n\t\tpreq->rq_reply.brp_un.brp_preempt_jobs.count++;\n\t}\n\tif (clear_preempt_vars) {\n\t\tpjob->preempt_order_index = 0;\n\t\tpjob->preempt_order = NULL;\n\t\tpjob->ji_pmt_preq = NULL;\n\t}\n\t/* send reply if we're done 
*/\n\tif (preq->rq_reply.brp_un.brp_preempt_jobs.count == preq->rq_ind.rq_preempt.count) {\n\t\treply_send(preq);\n\t}\n}\n\n/**\n * @brief\n *  get_job_req_used_time - get a running job's req and used time for preemption\n *\n * @param[in]\tpjob - the job in question\n * @param[out]\trtime - return pointer to the requested time\n * @param[out]\tutime - return pointer to the used time\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t1 for error\n */\nstatic int\nget_job_req_used_time(job *pjob, int *rtime, int *utime)\n{\n\tdouble req = 0;\n\tdouble used = 0;\n\n\tif (pjob == NULL || rtime == NULL || utime == NULL)\n\t\treturn 1;\n\n\treq = get_softwall(pjob);\n\tif (req == -1)\n\t\treq = get_wall(pjob);\n\n\tif (req == -1) {\n\t\treq = get_cput(pjob);\n\t\tused = get_used_cput(pjob);\n\t} else\n\t\tused = get_used_wall(pjob);\n\n\t*rtime = req;\n\t*utime = used;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *  \tsvr_get_preempt_order - deduce the preemption ordering to be used for a job\n *\n * @param[in]\tpjob\t-\tthe job to preempt\n * @param[in]\tpsched\t-\tPointer to the sched object.\n *\n * @return\t: struct preempt_ordering.  array containing preemption order\n *\n */\nstruct preempt_ordering *\nsvr_get_preempt_order(job *pjob, pbs_sched *psched)\n{\n\tstruct preempt_ordering *po = NULL;\n\tint req = -1;\n\tint used = -1;\n\n\tif (get_job_req_used_time(pjob, &req, &used) != 0)\n\t\treturn NULL;\n\n\tpo = get_preemption_order(psched->preempt_order, req, used);\n\n\treturn po;\n}\n"
  },
  {
    "path": "src/server/req_quejob.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n *\n * @brief\n * \t\tFunctions relating to the Queue Job Batch Request sequence, including\n * \t\tQueue Job, Job Script, Ready to Commit, and Commit.\n *\n *\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <ctype.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <assert.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <libutil.h>\n\n#include <unistd.h>\n#include <sys/param.h>\n#include <netinet/in.h>\n#include <sys/time.h>\n\n#include \"libpbs.h\"\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"server.h\"\n#include \"work_task.h\"\n#include \"credential.h\"\n#include \"ticket.h\"\n#include \"batch_request.h\"\n#include \"resv_node.h\"\n#include \"queue.h\"\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"net_connect.h\"\n#include \"pbs_error.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"sched_cmds.h\"\n#include \"log.h\"\n#include \"acct.h\"\n#include \"tpp.h\"\n#include \"user.h\"\n#include \"hook.h\"\n#include \"pbs_internal.h\"\n#include \"pbs_sched.h\"\n#ifndef PBS_MOM\n#include \"pbs_db.h\"\n#define SEQ_WIN_INCR 1000 /*save jobid number to database in this increment*/\n#endif\n#include \"libutil.h\"\n\n#ifdef PBS_MOM\n#include \"mom_hook_func.h\"\n#include 
\"placementsets.h\"\n\n#include <pwd.h>\n#include \"mom_func.h\"\n\nextern char mom_host[PBS_MAXHOSTNAME + 1];\n#endif /* PBS_MOM */\n\n#define RESV_INFINITY (60 * 60 * 24 * 365 * 5)\n\n/* External Functions Called: */\n\nextern struct connection *svr_conn;\n#ifndef PBS_MOM\nextern int remtree(char *);\n#ifdef NAS /* localmod 005 */\nextern int apply_aoe_inchunk_rules(resource *, attribute *, void *, int);\n#endif /* localmod 005 */\nvoid post_sendmom(struct work_task *);\nvoid post_sendmom_inner(job *, struct batch_request *, int, int, char *);\n#endif /* PBS_MOM */\n\n/* Global Data Items: */\n\n#ifndef PBS_MOM\nextern char *path_spool;\nextern struct server server;\nextern struct attribute attr_jobscript_max_size;\nextern char server_name[];\nextern unsigned int pbs_server_port_dis;\nextern char *resc_in_err;\n#endif /* PBS_MOM */\n\nextern int resc_access_perm;\nextern pbs_list_head svr_alljobs;\nextern pbs_list_head svr_newjobs;\nextern attribute_def job_attr_def[];\nextern char *path_jobs;\nextern char *pbs_o_host;\nextern char *msg_script_open;\nextern char *msg_script_write;\nextern char *msg_jobnew;\nextern char *msg_resvQcreateFail;\nextern char *msg_defproject;\nextern char *msg_mom_reject_root_scripts;\nextern int reject_root_scripts;\nextern time_t time_now;\n\n#ifndef PBS_MOM\nextern void *svr_db_conn;\nextern char *msg_max_no_minwt;\nextern char *msg_min_gt_maxwt;\nextern char *msg_nostf_resv;\nextern char *msg_nostf_jobarray;\n#endif\n\n/* Private Functions in this file */\n\nstatic job *locate_new_job(struct batch_request *preq, char *jobid);\n\n#ifndef PBS_MOM /* SERVER only */\nstatic void handle_qmgr_reply_to_resvQcreate(struct work_task *);\nstatic int get_queue_for_reservation(resc_resv *);\nstatic int ignore_attr(char *);\nstatic int validate_place_req_of_job_in_reservation(job *pj);\n\n/* To generate the job/resv id's locally */\nvoid reset_svr_sequence_window(void);\nlong long next_svr_sequence_id = 0;\n\nstatic char *pbs_o_que = 
\"PBS_O_QUEUE=\";\n/**\n * @brief\n * \t\tvalidate_perm_res_in_select -\tchecks to see if the resources\n * \t\tappearing in select spec 'val' are valid based on\n * \t\t\"caller's\" permission level (i.e. resc_access_perm).\n * \t\toptionally checks if the resources exist based on val_exist parameter\n *\n * @param[in]\tval\t-\tselect spec 'val'\n * @param[in]\tval_exist\t-\tvalidate if res exists in server def\n *\n * @return\terror code\n * @retval\t0\t: success\n * @retval\t!0\t: failure\n * @retval\tPBSE_INVALSELECTRESC\t: 'resc_in_err' is also set to the name of the offending resource.\n *\n * @note\n *\t\tNOTE:\t'resc_in_err' is a malloced-string which is used and\n *\t\tfreed inside req_reject().\n *\t\tSo upon PBSE_INVALSELECTRESC or PBSE_UNKRESC return, be sure to\n *\t\tissue req_reject().\n*/\nint\nvalidate_perm_res_in_select(char *val, int val_exist)\n{\n\tchar *chunk;\n\tint nchk;\n\tint nelem;\n\tstruct key_value_pair *pkvp;\n\tint rc = 0;\n\tint j;\n\tresource_def *presc;\n\n\tif (val == NULL)\n\t\treturn (0); /* nothing to validate */\n\n\tchunk = parse_plus_spec(val, &rc); /* break '+' separated substrings */\n\tif (rc != 0)\n\t\treturn (rc);\n\n\twhile (chunk) {\n\n\t\tif (parse_chunk(chunk, &nchk, &nelem, &pkvp, NULL) == 0) {\n\n\t\t\t/* first check for any invalid resources in the select */\n\t\t\tfor (j = 0; j < nelem; ++j) {\n\t\t\t\tpresc = find_resc_def(svr_resc_def, pkvp[j].kv_keyw);\n\t\t\t\tif (presc) {\n\t\t\t\t\tif ((presc->rs_flags & resc_access_perm) == 0) {\n\t\t\t\t\t\tif ((resc_in_err = strdup(pkvp[j].kv_keyw)) == NULL)\n\t\t\t\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t\t\t\treturn PBSE_INVALSELECTRESC; /* for freeing resc_in_err please read \"NOTE\" above in function brief*/\n\t\t\t\t\t}\n\t\t\t\t} else if (val_exist) {\n\t\t\t\t\tif ((resc_in_err = strdup(pkvp[j].kv_keyw)) == NULL)\n\t\t\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t\t\treturn PBSE_UNKRESC; /* for freeing resc_in_err please read \"NOTE\" above in function brief*/\n\t\t\t\t}\n\t\t\t} 
/* for */\n\t\t}\t  /* if */\n\t\tchunk = parse_plus_spec(NULL, &rc);\n\t\tif (rc != 0)\n\t\t\treturn (rc);\n\t} /* while */\n\treturn (0);\n}\n#endif\n\n#define SET_RESC_SELECT 1\n#define SET_RESC_PLACE 2\n\n#ifndef PBS_MOM\n\n/**\n * @brief\tGenerate and fill jobid of the new job\n * \t\t\tNote: Consider directly modifying get_next_svr_sequence_id() if reservations\n * \t\t\talso get sharded for multi-server\n *\n * @param[out]\tidbuf - buffer to fill job/resv id in\n * @param[in]\tclusterid - cluster name (PBS_SERVER)\n * @param[in]\tobjtype - object type which specifies whether it is a normal/array job or reservation\n * @param[in]\tresv_char - character representing type of reservation\n *\n * @return\tint\n * @retval\t0 for Success\n * @retval\t1 for Failure\n */\nstatic int\ngenerate_objid(char *idbuf, char *clusterid, int objtype, char resv_char)\n{\n\tif (idbuf == NULL || server_name[0] == '\\0' || clusterid == NULL)\n\t\treturn 1;\n\n\tif (objtype == MGR_OBJ_JOB)\n\t\tsprintf(idbuf, \"%lld.%s\", next_svr_sequence_id, clusterid);\n\telse if (objtype == MGR_OBJ_JOBARRAY_PARENT)\n\t\tsprintf(idbuf, \"%lld[].%s\", next_svr_sequence_id, clusterid);\n\telse if (objtype == MGR_OBJ_RESV)\n\t\tsprintf(idbuf, \"%c%lld.%s\", resv_char, next_svr_sequence_id, clusterid);\n\n\treturn 0;\n}\n\n#endif /* #ifndef PBS_MOM */\n\n/**\n * @brief\n *\t\tQueue Job Batch Request processing routine\n *\n * @param[in] - ptr to the decoded request\n *\n */\n\nvoid\nreq_quejob(struct batch_request *preq)\n{\n\tint created_here = 0;\n\tint index;\n\tchar *jid;\n\tattribute_def *pdef;\n\tjob *pj;\n\tsvrattrl *psatl;\n\tint rc;\n\tint sock = preq->rq_conn;\n\tint resc_access_perm_save;\n\tint implicit_commit = 0;\n#ifndef PBS_MOM\n\tint set_project = 0;\n\tint i;\n\tchar jidbuf[PBS_MAXSVRJOBID + 1];\n\tpbs_queue *pque;\n\tchar *qname;\n\tchar *result;\n\tresource_def *prdefnod;\n\tresource_def *prdefsel;\n\tresource_def *prdefplc;\n\tresource *presc;\n\tconn_t *conn = NULL;\n\tchar 
*physhost = NULL;\n#else\n\tmom_hook_input_t hook_input;\n\tmom_hook_output_t hook_output;\n\tint hook_errcode = 0;\n\tint hook_rc = 0;\n\tchar hook_buf[HOOK_MSG_SIZE];\n\thook *last_phook = NULL;\n\tunsigned int hook_fail_action = 0;\n#endif\n\tchar hook_msg[HOOK_MSG_SIZE];\n\n\t/* set basic (user) level access permission */\n\n\tresc_access_perm = ATR_DFLAG_USWR | ATR_DFLAG_Creat;\n\n#ifndef PBS_MOM /* server server server server */\n\n\tif (preq->prot != PROT_TPP) {\n\t\tconn = get_conn(sock);\n\n\t\tif (!conn) {\n\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\treturn;\n\t\t}\n\n\t\tif (conn->cn_authen & PBS_NET_CONN_FORCE_QSUB_UPDATE) {\n\t\t\treq_reject(PBSE_FORCE_QSUB_UPDATE, 0, preq);\n\t\t\tconn->cn_authen &= ~PBS_NET_CONN_FORCE_QSUB_UPDATE;\n\t\t\treturn;\n\t\t}\n\t}\n\n\tpsatl = (svrattrl *) GET_NEXT(preq->rq_ind.rq_queuejob.rq_attr);\n\twhile (psatl) {\n\t\tif (psatl->al_name == NULL || (!strcasecmp(psatl->al_name, ATTR_l) && psatl->al_resc == NULL)) {\n\t\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\t\treturn;\n\t\t}\n\t\tif (!strcasecmp(psatl->al_name, ATTR_l) &&\n\t\t    !strcasecmp(psatl->al_resc, \"select\") &&\n\t\t    ((psatl->al_value != NULL) &&\n\t\t     (psatl->al_value[0] != '\\0'))) {\n\n\t\t\tif ((rc = validate_perm_res_in_select(psatl->al_value, 0)) != 0) {\n\t\t\t\treq_reject(rc, 0, preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t\tpsatl = (svrattrl *) GET_NEXT(psatl->al_link);\n\t}\n\n\tswitch (process_hooks(preq, hook_msg, sizeof(hook_msg),\n\t\t\t      pbs_python_set_interrupt)) {\n\t\tcase 0: /* explicit reject */\n\t\t\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\t\t\treturn;\n\t\tcase 1:\t\t\t\t\t    /* explicit accept */\n\t\t\tif (recreate_request(preq) == -1) { /* error */\n\t\t\t\t/* we have to reject the request, as 'preq' */\n\t\t\t\t/* may have been partly modified            */\n\t\t\t\tstrcpy(hook_msg,\n\t\t\t\t       \"queuejob event: rejected request\");\n\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t  
LOG_ERR, \"\", hook_msg);\n\t\t\t\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase 2: /* no hook script executed - go ahead and accept event*/\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_INFO, \"\", \"queuejob event: accept req by default\");\n\t}\n\n\tprdefsel = &svr_resc_def[RESC_SELECT];\n\tprdefplc = &svr_resc_def[RESC_PLACE];\n\tprdefnod = &svr_resc_def[RESC_NODES];\n\n\t/*\n\t * if the job id is supplied, the request had better be\n\t * from another server\n\t */\n\n\tif (preq->rq_fromsvr) {\n\t\t/* from another server - accept the extra attributes */\n\t\tresc_access_perm |= ATR_DFLAG_MGWR | ATR_DFLAG_SvWR;\n\t\tjid = preq->rq_ind.rq_queuejob.rq_jid;\n\n\t} else if (preq->rq_ind.rq_queuejob.rq_jid[0] != '\\0') {\n\t\t/* a job id is not allowed from a client */\n\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\treturn;\n\t} else {\n\t\t/* assign it a job id */\n\n\t\tpsatl = (svrattrl *) GET_NEXT(preq->rq_ind.rq_queuejob.rq_attr);\n\t\ti = MGR_OBJ_JOB;\n\t\twhile (psatl) {\n\t\t\t/* Ensure that array_indices_submitted has a proper   */\n\t\t\t/* value (non-\"\" and non-NULL) before asserting that  */\n\t\t\t/* current job is a job array.\t\t\t  */\n\t\t\t/* The Hook script can set \tvalue to \"\" meaning none  */\n\t\t\t/* was specified making it a normal (non job array)   */\n\t\t\t/* job.\t\t\t\t\t\t  */\n\t\t\t/* The value should never be NULL and if so, then     */\n\t\t\t/* it won't be a job array.                           
*/\n\t\t\tif (!strcasecmp(psatl->al_name,\n\t\t\t\t\tATTR_array_indices_submitted) &&\n\t\t\t    ((psatl->al_value != NULL) &&\n\t\t\t     (psatl->al_value[0] != '\\0'))) {\n\t\t\t\ti = MGR_OBJ_JOBARRAY_PARENT;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tpsatl = (svrattrl *) GET_NEXT(psatl->al_link);\n\t\t}\n\t\t/* fetch job id locally*/\n\t\tif ((next_svr_sequence_id = get_next_svr_sequence_id()) == -1) {\n\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\treturn;\n\t\t}\n\t\tcreated_here = JOB_SVFLG_HERE;\n\t\tif (generate_objid(jidbuf, server_name, i, '\\0') != 0) {\n\t\t\treq_reject(PBSE_INTERNAL, 0, preq);\n\t\t\treturn;\n\t\t}\n\t\tjid = jidbuf;\n\t}\n\n#else  /* PBS_MOM mom mom mom mom mom mom*/\n\n\tif (preq->rq_fromsvr) { /* must be from a server */\n\t\t/* from another server - accept the extra attributes */\n\t\tresc_access_perm |= ATR_DFLAG_MGWR | ATR_DFLAG_SvWR |\n\t\t\t\t    ATR_DFLAG_MOM;\n\t\tjid = preq->rq_ind.rq_queuejob.rq_jid;\n\t} else {\n\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\treturn;\n\t}\n#endif /* PBS_MOM all all all all all */\n\n\t/* does job already exist, check both old and new jobs */\n\n\tif ((pj = find_job(jid)) == NULL) {\n\t\tpj = (job *) GET_NEXT(svr_newjobs);\n\t\twhile (pj) {\n\t\t\tif (!strcasecmp(pj->ji_qs.ji_jobid, jid))\n\t\t\t\tbreak;\n\t\t\tpj = (job *) GET_NEXT(pj->ji_alljobs);\n\t\t}\n\t}\n\n#ifndef PBS_MOM /* server server server server server server */\n\t/*\n\t * Check if the server is configured for history job info. If yes and\n\t * server has the history job with same job id, then don't reject the\n\t * queue request with PBSE_JOBEXIST but purge the history job(i.e. with\n\t * job state 'M') from the server and accept queue request. 
If you have\n\t * the real job, keeping its history info does not make any sense.\n\t * Otherwise SERVER will continue to reject queue request if job already\n\t * exists.\n\t */\n\tif (pj != NULL) {\n\t\tif ((svr_chk_history_conf()) &&\n\t\t    (check_job_state(pj, JOB_STATE_LTR_MOVED))) {\n\t\t\tjob_purge(pj);\n\t\t} else {\n\t\t\t/* server rejects the queue request */\n\t\t\treq_reject(PBSE_JOBEXIST, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n\n\t/* find requested queue, is it there? */\n\tqname = preq->rq_ind.rq_queuejob.rq_destin;\n\tif ((*qname == '\\0') || (*qname == '@')) { /* use default queue */\n\t\tpque = get_dfltque();\n\t\trc = PBSE_QUENODFLT;\n\t} else { /* else find the named queue */\n\t\tpque = find_queuebyname(preq->rq_ind.rq_queuejob.rq_destin);\n#ifdef NAS /* localmod 075 */\n\t\tif (pque == NULL)\n\t\t\tpque = find_resvqueuebyname(qname);\n#endif /* localmod 075 */\n\t\trc = PBSE_UNKQUE;\n\t}\n\tif (pque == NULL) {\n\t\treq_reject(rc, 0, preq); /* not there   */\n\t\treturn;\n\t}\n\n\t/* create the job structure */\n\tif ((pj = job_alloc()) == NULL) {\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\treturn;\n\t}\n\n\t*pj->ji_qs.ji_fileprefix = '\\0';\n\n#else  /* PBS_MOM mom mom mom mom*/\n\tif (pj) {\n\n\t\t/*\n\t\t * An existing job - likely was checkpointed but in rare\n\t\t * cases may result from a Server-Mom communication error\n\t\t *\n\t\t * Update run version from the new request so server will\n\t\t * accept the obit when the job finishes.\n\t\t */\n\n\t\tpsatl = (svrattrl *) GET_NEXT(preq->rq_ind.rq_queuejob.rq_attr);\n\t\twhile (psatl) {\n\t\t\t/* look for run count attribute */\n\t\t\tindex = find_attr(job_attr_idx, job_attr_def, psatl->al_name);\n\t\t\tif (index == (int) JOB_ATR_run_version) {\n\t\t\t\tset_jattr_str_slim(pj, index, psatl->al_value, psatl->al_resc);\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tpsatl = (svrattrl *) GET_NEXT(psatl->al_link);\n\t\t}\n\n\t\t/* if actually running, tell Server we already have it */\n\t\tif 
(check_job_substate(pj, JOB_SUBSTATE_RUNNING)) {\n\t\t\treq_reject(PBSE_JOBEXIST, 0, preq);\n\t\t\treturn;\n\t\t}\n\n\t\t/* if checkpointed, then keep old and skip rest of process */\n\n\t\tif (pj->ji_qs.ji_svrflags & JOB_SVFLG_CHKPT) {\n\t\t\tset_job_substate(pj, JOB_SUBSTATE_TRANSIN);\n\t\t\tint prot = preq->prot;\n\t\t\tif (reply_jobid(preq, pj->ji_qs.ji_jobid, BATCH_REPLY_CHOICE_Queue) == 0) {\n\t\t\t\tdelete_link(&pj->ji_alljobs);\n\t\t\t\tif (pbs_idx_delete(jobs_idx, pj->ji_qs.ji_jobid) != PBS_IDX_RET_OK)\n\t\t\t\t\tlog_joberr(PBSE_INTERNAL, __func__, \"Failed to remove checkpointed job from index\", pj->ji_qs.ji_jobid);\n\t\t\t\tappend_link(&svr_newjobs, &pj->ji_alljobs, pj);\n\t\t\t\tpj->ji_qs.ji_un_type = JOB_UNION_TYPE_NEW;\n\t\t\t\tpj->ji_qs.ji_un.ji_newt.ji_fromsock = sock;\n\t\t\t\tif (prot == PROT_TCP) {\n\t\t\t\t\tpj->ji_qs.ji_un.ji_newt.ji_fromaddr = get_connectaddr(sock);\n\t\t\t\t} else {\n\t\t\t\t\tstruct sockaddr_in *addr = tpp_getaddr(sock);\n\t\t\t\t\tif (addr)\n\t\t\t\t\t\tpj->ji_qs.ji_un.ji_newt.ji_fromaddr = (pbs_net_t) ntohl(addr->sin_addr.s_addr);\n\t\t\t\t}\n\t\t\t\tpj->ji_qs.ji_un.ji_newt.ji_scriptsz = 0;\n\t\t\t} else {\n\t\t\t\tclose_client(sock); /* error on reply */\n\t\t\t}\n\t\t\treturn;\n\t\t}\n\t\t/* unlink job from svr_alljobs since it will be placed on newjobs */\n\t\tdelete_link(&pj->ji_alljobs);\n\t\tif (pbs_idx_delete(jobs_idx, pj->ji_qs.ji_jobid) != PBS_IDX_RET_OK)\n\t\t\tlog_joberr(PBSE_INTERNAL, __func__, \"Failed to remove job from index\", pj->ji_qs.ji_jobid);\n\t} else {\n\t\tchar *namebuf;\n\t\tchar basename[MAXPATHLEN + 1] = {0};\n\n\t\t/* if not already here, allocate job struct */\n\n\t\tif ((pj = job_alloc()) == NULL) {\n\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\treturn;\n\t\t}\n\t\t/*\n\t\t * for MOM - rather than make up a hashname, we use the one sent\n\t\t * to us by the server as an attribute.\n\t\t */\n\t\tpsatl = (svrattrl *) GET_NEXT(preq->rq_ind.rq_queuejob.rq_attr);\n\t\twhile (psatl) {\n\t\t\tif 
(!strcmp(psatl->al_name, ATTR_hashname)) {\n\t\t\t\tstrncpy(basename, psatl->al_value, MAXPATHLEN);\n\t\t\t\tif (strlen(basename) <= PBS_JOBBASE)\n\t\t\t\t\tstrcpy(pj->ji_qs.ji_fileprefix, basename);\n\t\t\t\telse\n\t\t\t\t\t*pj->ji_qs.ji_fileprefix = '\\0';\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tpsatl = (svrattrl *) GET_NEXT(psatl->al_link);\n\t\t}\n\t\tpbs_asprintf(&namebuf, \"%s%s%s\", path_jobs, basename, JOB_TASKDIR_SUFFIX);\n\t\tif (mkdir(namebuf, 0700) == -1) {\n\t\t\tpj->ji_qs.ji_un.ji_momt.ji_exitstat = -1;\n\t\t\tif (*pj->ji_qs.ji_fileprefix == '\\0' && *pj->ji_qs.ji_jobid == '\\0') {\n\t\t\t\tsnprintf(pj->ji_qs.ji_jobid, sizeof(pj->ji_qs.ji_jobid), \"%s\", jid);\n\t\t\t}\n\t\t\tjob_purge(pj);\n\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\tfree(namebuf);\n\t\t\treturn;\n\t\t}\n\t\tcreated_here = JOB_SVFLG_HERE;\n\t\tfree(namebuf);\n\t}\n#endif /* PBS_MOM */\n\n\t(void) strcpy(pj->ji_qs.ji_jobid, jid);\n\tpj->ji_qs.ji_svrflags = created_here;\n\tpj->ji_qs.ji_un_type = JOB_UNION_TYPE_NEW;\n\n\t/* decode attributes from request into job structure */\n\n\tpsatl = (svrattrl *) GET_NEXT(preq->rq_ind.rq_queuejob.rq_attr);\n\tresc_access_perm_save = resc_access_perm; /* save perm */\n\twhile (psatl) {\n\n\t\t/* identify the attribute by name */\n\n\t\tindex = find_attr(job_attr_idx, job_attr_def, psatl->al_name);\n\t\tif (index < 0) {\n\n\t\t\t/* didn`t recognize the name */\n#ifndef PBS_MOM\n\t\t\tindex = JOB_ATR_UNKN; /* keep as \"unknown\" for now */\n#else\t\t\t\t\t      /* is PBS_MOM */\n\t\t\treply_badattr(PBSE_NOATTR, 1, psatl, preq);\n\t\t\treturn;\n#endif\t\t\t\t\t      /* PBS_MOM */\n\t\t}\n\t\tpdef = &job_attr_def[index];\n\n#ifndef PBS_MOM\n\t\tif (index == JOB_ATR_project) {\n\t\t\tset_project = 1;\n\t\t}\n#endif\n\n\t\t/* Is attribute not writeable by manager or by a server? 
*/\n\t\t/* Exempt attributes set by the hook script */\n\t\tresc_access_perm = resc_access_perm_save; /* reset */\n\t\tif ((psatl->al_flags & ATR_VFLAG_HOOK)) {\n\t\t\tresc_access_perm = ATR_DFLAG_USWR |\n\t\t\t\t\t   ATR_DFLAG_OPWR |\n\t\t\t\t\t   ATR_DFLAG_MGWR |\n\t\t\t\t\t   ATR_DFLAG_SvWR |\n\t\t\t\t\t   ATR_DFLAG_Creat;\n\t\t}\n\t\tif (((pdef->at_flags & resc_access_perm) == 0)) {\n\t\t\tjob_purge(pj);\n\t\t\treply_badattr(PBSE_ATTRRO, 1, psatl, preq);\n\t\t\treturn;\n\t\t}\n\n\t\t/* decode attribute */\n#ifndef PBS_MOM\n\t\tif (index == JOB_ATR_create_resv_from_job) {\n\t\t\tif (qname != NULL && *qname != '\\0') {\n\t\t\t\tresc_resv *presv;\n\t\t\t\tpresv = find_resv(qname);\n\n\t\t\t\tif (presv) {\n\t\t\t\t\tjob_purge(pj);\n\t\t\t\t\treq_reject(PBSE_RESV_FROM_RESVJOB, 0, preq);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n#endif\n\t\tif ((index == JOB_ATR_resource) && (psatl->al_resc != NULL) && (strcmp(psatl->al_resc, \"neednodes\") == 0))\n\t\t\trc = 0;\n\t\telse\n\t\t\trc = set_jattr_generic(pj, index, psatl->al_value, psatl->al_resc, INTERNAL);\n#ifndef PBS_MOM\n\t\tif (rc != 0) {\n\t\t\tif (rc == PBSE_UNKRESC) {\n\n\t\t\t\t/* unknown resources not allowed in Exec queue */\n\n\t\t\t\tif (pque->qu_qs.qu_type == QTYPE_Execution) {\n\t\t\t\t\tjob_purge(pj);\n\t\t\t\t\treply_badattr(rc, 1, psatl, preq);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\t/* any other error is fatal */\n\t\t\t\tjob_purge(pj);\n\t\t\t\treply_badattr(rc, 1, psatl, preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n#else  /* PBS_MOM MOM MOM MOM */\n\t\tif (rc != 0) {\n\t\t\t/* all errors are fatal for MOM */\n\n\t\t\tjob_purge(pj);\n\t\t\treply_badattr(rc, 1, psatl, preq);\n\t\t\treturn;\n\t\t}\n\t\tif (psatl->al_op == DFLT) {\n\t\t\tattribute *attr = get_jattr(pj, index);\n\t\t\tif (psatl->al_resc) {\n\n\t\t\t\tresource *presc;\n\t\t\t\tresource_def *prdef;\n\n\t\t\t\tprdef = find_resc_def(svr_resc_def, psatl->al_resc);\n\t\t\t\tif (prdef == 0) 
{\n\t\t\t\t\tjob_purge(pj);\n\t\t\t\t\treply_badattr(rc, 1, psatl, preq);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tpresc = find_resc_entry(attr, prdef);\n\t\t\t\tif (presc)\n\t\t\t\t\tpresc->rs_value.at_flags |= ATR_VFLAG_DEFLT;\n\t\t\t} else {\n\t\t\t\tattr->at_flags |= ATR_VFLAG_DEFLT;\n\t\t\t}\n\t\t}\n#endif /* PBS_MOM */\n\n\t\tpsatl = (svrattrl *) GET_NEXT(psatl->al_link);\n\t}\n\n#ifndef PBS_MOM\n\n\t/* perform any at_action routine declared for the attributes */\n\n\tfor (i = 0; i < JOB_ATR_LAST; ++i) {\n\t\tpdef = &job_attr_def[i];\n\t\tif ((is_jattr_set(pj, i)) &&\n\t\t    (pdef->at_action)) {\n\t\t\trc = pdef->at_action(get_jattr(pj, i), pj, ATR_ACTION_NEW);\n\t\t\tif (rc) {\n\t\t\t\tjob_purge(pj);\n\t\t\t\treq_reject(rc, i, preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t}\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\t/* save gssapi/krb5 creds for this job */\n\tif (conn && conn->cn_credid != NULL &&\n\t    conn->cn_auth_config != NULL &&\n\t    conn->cn_auth_config->auth_method != NULL && strcmp(conn->cn_auth_config->auth_method, AUTH_GSS_NAME) == 0) {\n\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER, LOG_DEBUG, __func__,\n\t\t\t   \"saving creds.  
conn is %d, cred id %s\", preq->rq_conn, conn->cn_credid);\n\n\t\tset_jattr_str_slim(pj, JOB_ATR_cred_id, conn->cn_credid, NULL);\n\n\t\tif (is_sattr_set(SVR_ATR_acl_krb_submit_realms)) {\n\t\t\tif (!acl_check(get_sattr(SVR_ATR_acl_krb_submit_realms), conn->cn_credid, ACL_Host)) {\n\t\t\t\tjob_purge(pj);\n\t\t\t\treq_reject(PBSE_PERM, 0, preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t}\n#endif\n\n\t/*\n\t * Now that the attributes have been decoded, we can setup some\n\t * additional parameters and perform a few more checks.\n\t *\n\t * First, set some items based on who created the job...\n\t */\n\n\tif (created_here) { /* created here */\n\t\tint l;\n\t\tchar buf[256];\n\n\t\t/* check that job has a jobname */\n\n\t\tif ((is_jattr_set(pj, JOB_ATR_jobname)) == 0)\n\t\t\tset_jattr_str_slim(pj, JOB_ATR_jobname, \"none\", NULL);\n\n\t\t/* check resources in the Resource_List are valid job wide */\n\n\t\tif (is_jattr_set(pj, JOB_ATR_resource)) {\n\t\t\tint have_selectplace = 0;\n\t\t\tresource_def *prdefbad;\n\n\t\t\tpresc = (resource *) GET_NEXT(get_jattr_list(pj, JOB_ATR_resource));\n\n\t\t\tprdefbad = NULL;\n\t\t\twhile (presc) {\n\t\t\t\tif (presc->rs_defin == prdefsel) {\n\t\t\t\t\thave_selectplace |= SET_RESC_SELECT;\n\t\t\t\t} else if (presc->rs_defin == prdefplc) {\n\t\t\t\t\thave_selectplace |= SET_RESC_PLACE;\n\t\t\t\t} else if ((presc->rs_defin == prdefnod) &&\n\t\t\t\t\t   (presc->rs_value.at_val.at_str != NULL)) {\n\t\t\t\t\t/*\n\t\t\t\t\t * if \"nodes\" is set and has non-NULL value,\n\t\t\t\t\t * remember as potential bad resource\n\t\t\t\t\t * if this appears along \"select\" or \"place\".\n\t\t\t\t\t */\n\t\t\t\t\tprdefbad = presc->rs_defin;\n\t\t\t\t} else if (\n\t\t\t\t\t(presc->rs_defin->rs_flags & ATR_DFLAG_CVTSLT) &&\n\t\t\t\t\t(is_attr_set(&presc->rs_value))) {\n\t\t\t\t\t/*\n\t\t\t\t\t * if this resource is not \"select\", \"place\",\n\t\t\t\t\t * or \"nodes\", but is meant to appear inside\n\t\t\t\t\t * a \"select\" line, then remember as 
potential\n\t\t\t\t\t * bad resource if this appears along\n\t\t\t\t\t * \"select\" or \"place\".\n\t\t\t\t\t */\n\n\t\t\t\t\tprdefbad = presc->rs_defin;\n\t\t\t\t}\n\t\t\t\tpresc = (resource *) GET_NEXT(presc->rs_link);\n\t\t\t}\n\t\t\tif (have_selectplace && prdefbad) {\n\t\t\t\tif (prdefbad == prdefnod)\n\t\t\t\t\trc = PBSE_INVALNODEPLACE;\n\t\t\t\telse\n\t\t\t\t\trc = PBSE_INVALJOBRESC;\n\t\t\t\tif ((resc_in_err = strdup(prdefbad->rs_name)) == NULL)\n\t\t\t\t\trc = PBSE_SYSTEM;\n\t\t\t\tjob_purge(pj);\n\t\t\t\treq_reject(rc, 0, preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tif (have_selectplace == SET_RESC_PLACE) {\n\t\t\t\t/* cannot have place without select */\n\t\t\t\tjob_purge(pj);\n\t\t\t\treq_reject(PBSE_PLACENOSELECT, 0, preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\n\t\t/* check value of priority */\n\n\t\tif (is_jattr_set(pj, JOB_ATR_priority)) {\n\t\t\tif ((get_jattr_long(pj, JOB_ATR_priority) < -1024) ||\n\t\t\t    (get_jattr_long(pj, JOB_ATR_priority) > 1024)) {\n\t\t\t\tjob_purge(pj);\n\t\t\t\treq_reject(PBSE_BADATVAL, 0, preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\n\t\t/* set job owner attribute to user@host */\n\t\tstrcpy(buf, preq->rq_user);\n\t\tstrcat(buf, \"@\");\n\t\tstrcat(buf, preq->rq_host);\n\t\tset_jattr_str_slim(pj, JOB_ATR_job_owner, buf, NULL);\n\n\t\tif (conn) {\n\t\t\tstrcpy(buf, conn->cn_physhost);\n\t\t\tset_jattr_str_slim(pj, JOB_ATR_submit_host, buf, NULL);\n\t\t}\n\n\t\t/* set create time */\n\t\tset_jattr_l_slim(pj, JOB_ATR_ctime, time_now, SET);\n\n\t\t/* set hop count = 1 */\n\n\t\tset_jattr_l_slim(pj, JOB_ATR_hopcount, 1, SET);\n\n\t\t/* need to set certain environmental variables per POSIX */\n\t\tstrcpy(buf, pbs_o_que);\n\t\tstrcat(buf, pque->qu_qs.qu_name);\n\t\tif (conn && get_variable(pj, pbs_o_host) == NULL) {\n\t\t\tstrcat(buf, \",\");\n\t\t\tstrcat(buf, pbs_o_host);\n\t\t\tstrcat(buf, \"=\");\n\t\t\tstrcat(buf, conn->cn_physhost);\n\t\t}\n\t\tset_jattr_generic(pj, JOB_ATR_variables, buf, NULL, INCR);\n\n\t\t/* if 
JOB_ATR_outpath/JOB_ATR_errpath not set, set default */\n\n\t\tif (!(is_jattr_set(pj, JOB_ATR_outpath))) {\n\t\t\tset_jattr_str_slim(pj, JOB_ATR_outpath, prefix_std_file(pj, (int) 'o'), NULL);\n\t\t} else {\n\t\t\tl = strlen(get_jattr_str(pj, JOB_ATR_outpath));\n\t\t\tif (l > 0) {\n\t\t\t\tif (get_jattr_str(pj, JOB_ATR_outpath)[l - 1] == ':') {\n\n\t\t\t\t\tcat_default_std(pj, (int) 'o', get_jattr_str(pj, JOB_ATR_outpath), &result);\n\t\t\t\t\tif (result)\n\t\t\t\t\t\tset_jattr_str_slim(pj, JOB_ATR_outpath, result, NULL);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif (!(is_jattr_set(pj, JOB_ATR_errpath))) {\n\t\t\tset_jattr_str_slim(pj, JOB_ATR_errpath, prefix_std_file(pj, (int) 'e'), NULL);\n\t\t} else {\n\t\t\tl = strlen(get_jattr_str(pj, JOB_ATR_errpath));\n\t\t\tif (l > 0) {\n\t\t\t\tif (get_jattr_str(pj, JOB_ATR_errpath)[l - 1] == ':') {\n\n\t\t\t\t\tcat_default_std(pj, (int) 'e', get_jattr_str(pj, JOB_ATR_errpath), &result);\n\t\t\t\t\tif (result)\n\t\t\t\t\t\tset_jattr_str_slim(pj, JOB_ATR_errpath, result, NULL);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tif ((get_jattr_str(pj, JOB_ATR_outpath) == 0) ||\n\t\t    (get_jattr_str(pj, JOB_ATR_errpath) == 0)) {\n\t\t\tjob_purge(pj);\n\t\t\treq_reject(PBSE_NOATTR, 0, preq);\n\t\t\treturn;\n\t\t}\n\n\t\tif ((is_jattr_set(pj, JOB_ATR_interactive)) && (is_jattr_set(pj, JOB_ATR_array_indices_submitted))) {\n\t\t\tjob_purge(pj);\n\t\t\treq_reject(PBSE_NOSUP, 0, preq);\n\t\t\treturn;\n\t\t}\n\t\t/* Interactive jobs are not rerunable */\n\n\t\tif ((is_jattr_set(pj, JOB_ATR_interactive)) && get_jattr_long(pj, JOB_ATR_interactive))\n\t\t\tset_jattr_l_slim(pj, JOB_ATR_rerunable, 0, SET);\n\n\t\tif (!is_jattr_set(pj, JOB_ATR_project))\n\t\t\tset_jattr_str_slim(pj, JOB_ATR_project, PBS_DEFAULT_PROJECT, NULL);\n\n\t} else { /* job was created elsewhere and moved here */\n\n\t\t/* make sure job_owner is set, error if not */\n\n\t\tif (!is_jattr_set(pj, JOB_ATR_job_owner)) {\n\t\t\tjob_purge(pj);\n\t\t\treq_reject(PBSE_IVALREQ, 0, 
preq);\n\t\t\treturn;\n\t\t}\n\n\t\t/* increment hop count */\n\n\t\tset_jattr_l_slim(pj, JOB_ATR_hopcount, 1, INCR);\n\t\tif (get_jattr_long(pj, JOB_ATR_hopcount) > PBS_MAX_HOPCOUNT) {\n\t\t\tjob_purge(pj);\n\t\t\treq_reject(PBSE_HOPCOUNT, 0, preq);\n\t\t\treturn;\n\t\t}\n\n\t\t/* make sure that if job belonged to an advance reservation\n\t\t * queue, that old information is wiped out.  If being moved\n\t\t * into an advance reservation queue, the reservation's ID\n\t\t * gets attached later in the code\n\t\t */\n\t\tset_attr_generic(get_jattr(pj, JOB_ATR_reserve_ID), &job_attr_def[(int) JOB_ATR_reserve_ID], NULL, NULL, INTERNAL);\n\t}\n\n\t/* set up at_server attribute for status */\n\n\tset_jattr_str_slim(pj, JOB_ATR_at_server, pbs_server_name, NULL);\n\n\t/* If enabled, check the server's required cred type */\n\n\tif (is_sattr_set(SVR_ATR_ReqCredEnable) &&\n\t    get_sattr_long(SVR_ATR_ReqCredEnable) &&\n\t    is_sattr_set(SVR_ATR_ReqCred)) {\n\t\tchar *reqc = get_sattr_str(SVR_ATR_ReqCred);\n\t\tchar *jobc = get_jattr_str(pj, JOB_ATR_cred);\n\t\t/*\n\t\t **\tThe server requires a cred, if job has none, or\n\t\t **\tit is the wrong one, reject.\n\t\t */\n\t\tif (!is_jattr_set(pj, JOB_ATR_cred) || strcmp(reqc, jobc) != 0) {\n\t\t\tjob_purge(pj);\n\t\t\treq_reject(PBSE_BADCRED, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tif (conn) {\n\t\tphyshost = conn->cn_physhost;\n\t}\n\n\t/*\n\t * See if the job is qualified to go into the requested queue.\n\t * Note, if an execution queue, then ji_qs.ji_un.ji_exect is set up\n\t *\n\t * svr_chkque is called way down here because it needs to have the\n\t * job structure and attributes already set up.\n\t */\n\n\trc = svr_chkque(pj, pque, get_jattr_str(pj, JOB_ATR_submit_host), physhost, MOVE_TYPE_Move);\n\tif (rc) {\n\t\tif (pj->ji_clterrmsg)\n\t\t\treply_text(preq, rc, pj->ji_clterrmsg);\n\t\telse\n\t\t\treq_reject(rc, 0, preq);\n\t\tjob_purge(pj);\n\t\treturn;\n\t}\n\n\t(void) strcpy(pj->ji_qs.ji_queue, 
pque->qu_qs.qu_name);\n\n\t/* Is job being submitted to a reservation queue?\n\t * If yes, have the job point to the resc_resv object and\n\t * update the job attribute JOB_ATR_reservation.\n\t *\n\t * Also check for conflict for job and reservation place spec\n\t */\n\tif (pque->qu_resvp) {\n\t\tfree_jattr(pj, JOB_ATR_reserve_ID);\n\t\tset_jattr_str_slim(pj, JOB_ATR_reserve_ID, pque->qu_resvp->ri_qs.ri_resvID, NULL);\n\t\tpj->ji_myResv = pque->qu_resvp;\n\n\t\tif (!validate_place_req_of_job_in_reservation(pj)) {\n\t\t\tjob_purge(pj);\n\t\t\tpsatl = (svrattrl *) GET_NEXT(\n\t\t\t\tpreq->rq_ind.rq_queuejob.rq_attr);\n\t\t\twhile (psatl) {\n\t\t\t\tif (!strcasecmp(psatl->al_name, ATTR_l) &&\n\t\t\t\t    !strcasecmp(psatl->al_resc, \"place\")) {\n\t\t\t\t\treply_badattr(PBSE_JOBINRESV_CONFLICT, 1,\n\t\t\t\t\t\t      psatl, preq);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\tpsatl = (svrattrl *) GET_NEXT(psatl->al_link);\n\t\t\t}\n\t\t\tif (!psatl) {\n\t\t\t\treq_reject(PBSE_JOBINRESV_CONFLICT, 0, preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t}\n\n\tset_jattr_l_slim(pj, JOB_ATR_substate, JOB_SUBSTATE_TRANSIN, SET);\n\n\t/* action routine for select does not have reservation data hence\n\t * returns without doing checks. 
Checks are called now.\n\t */\n\tpresc = find_resc_entry(get_jattr(pj, JOB_ATR_resource), prdefsel);\n\tif (presc) {\n\t\trc = apply_aoe_inchunk_rules(presc, get_jattr(pj, JOB_ATR_resource),\n\t\t\t\t\t     pj, PARENT_TYPE_JOB);\n\t\tif (rc) {\n\t\t\tjob_purge(pj);\n\t\t\treq_reject(rc, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n#endif /* not PBS_MOM */\n\n\t/* set remaining job structure elements\t\t\t*/\n\n\tset_job_state(pj, JOB_STATE_LTR_TRANSIT);\n\tset_job_substate(pj, JOB_SUBSTATE_TRANSIN);\n\tset_jattr_l_slim(pj, JOB_ATR_mtime, time_now, SET);\n\n\tpj->ji_qs.ji_un_type = JOB_UNION_TYPE_NEW;\n\tpj->ji_qs.ji_un.ji_newt.ji_fromsock = sock;\n\tpj->ji_qs.ji_un.ji_newt.ji_scriptsz = 0;\n\n#ifdef PBS_MOM\n\tif ((is_jattr_set(pj, JOB_ATR_executable)) &&\n\t    (reject_root_scripts == TRUE) &&\n\t    (is_jattr_set(pj, JOB_ATR_euser)) &&\n\t    (get_jattr_str(pj, JOB_ATR_euser) != NULL)) {\n#ifdef WIN32\n\n\t\t/* equivalent of root */\n\t\tif (isAdminPrivilege(get_jattr_str(pj, JOB_ATR_euser)))\n#else\n\t\tstruct passwd *pwdp;\n\n\t\tpwdp = getpwnam(get_jattr_str(pj, JOB_ATR_euser));\n\t\tif ((pwdp != NULL) && (pwdp->pw_uid == 0))\n#endif\n\t\t{\n\t\t\tlog_err(-1, __func__, msg_mom_reject_root_scripts);\n\t\t\treply_text(preq, PBSE_MOM_REJECT_ROOT_SCRIPTS, msg_mom_reject_root_scripts);\n\t\t\tjob_purge(pj);\n\t\t\treturn;\n\t\t}\n\t}\n\tmom_hook_input_init(&hook_input);\n\thook_input.pjob = pj;\n\n\tmom_hook_output_init(&hook_output);\n\thook_output.reject_errcode = &hook_errcode;\n\thook_output.last_phook = &last_phook;\n\thook_output.fail_action = &hook_fail_action;\n\n\tswitch ((hook_rc = mom_process_hooks(HOOK_EVENT_EXECJOB_BEGIN, PBS_MOM_SERVICE_NAME,\n\t\t\t\t\t     mom_host, &hook_input, &hook_output, hook_buf, sizeof(hook_buf), 1))) {\n\t\tcase 1: /* explicit accept */\n\t\t\tbreak;\n\t\tcase 2: /* no hook script executed - go ahead and accept event*/\n\t\t\tbreak;\n\t\tdefault:\n\t\t\t/* a value of '0' means explicit reject encountered. 
*/\n\t\t\tif (hook_rc != 0) {\n\t\t\t\t/* we've hit an internal error (malloc error, full disk, etc...), so */\n\t\t\t\t/* treat this now like a  hook error so hook fail_action  */\n\t\t\t\t/* will be consulted.  */\n\t\t\t\t/* Before, behavior of an internal error was to ignore it! */\n\t\t\t\thook_errcode = PBSE_HOOKERROR;\n\t\t\t}\n\t\t\tif (hook_errcode == PBSE_HOOKERROR) { /* error */\n\t\t\t\t/* piggy back the hook_name in the message */\n\t\t\t\t/* to be stripped out by the server upon */\n\t\t\t\t/* processing hook fail_action */\n\t\t\t\tsnprintf(hook_msg, sizeof(hook_msg), \"%s,%.*s\",\n\t\t\t\t\t last_phook ? last_phook->hook_name : \"\",\n\t\t\t\t\t (int) (sizeof(hook_msg) - (last_phook ? strlen(last_phook->hook_name) : 0) - 2),\n\t\t\t\t\t (hook_rc == 0) ? hook_buf : \"internal error\");\n\t\t\t} else {\n\t\t\t\tsnprintf(hook_msg, sizeof(hook_msg), \",%.*s\",\n\t\t\t\t\t (int) (sizeof(hook_msg) - 2), hook_buf);\n\t\t\t}\n\n\t\t\tjob_purge(pj);\n\t\t\treply_text(preq, hook_errcode, hook_msg);\n\t\t\treturn;\n\t}\n#endif\n\n\t/* check implicit commit only not blocking job */\n\tif ((is_jattr_set(pj, JOB_ATR_block)) == 0)\n\t\timplicit_commit = ((preq->rq_extend) && (strstr(preq->rq_extend, EXTEND_OPT_IMPLICIT_COMMIT)));\n\n\t/* acknowledge the request with the job id */\n\tif (!implicit_commit) {\n\t\tif (preq->prot == PROT_TCP) {\n\t\t\tpj->ji_qs.ji_un.ji_newt.ji_fromaddr = get_connectaddr(sock);\n\t\t\t/* acknowledge the request with the job id */\n\t\t\tif (reply_jobid(preq, pj->ji_qs.ji_jobid, BATCH_REPLY_CHOICE_Queue) != 0) {\n\t\t\t\t/* reply failed, purge the job and close the connection */\n\n\t\t\t\tclose_client(sock);\n\t\t\t\tjob_purge(pj);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else {\n\t\t\tstruct sockaddr_in *addr = tpp_getaddr(sock);\n\t\t\tif (addr)\n\t\t\t\tpj->ji_qs.ji_un.ji_newt.ji_fromaddr = (pbs_net_t) ntohl(addr->sin_addr.s_addr);\n\t\t\tfree_br(preq);\n\t\t\t/* No need of acknowledge for TPP */\n\t\t}\n\t}\n\n#ifndef PBS_MOM\n\tif 
(set_project && (is_jattr_set(pj, JOB_ATR_project)) &&\n\t    (strcmp(get_jattr_str(pj, JOB_ATR_project), PBS_DEFAULT_PROJECT) == 0))\n\t\tlog_eventf(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_INFO, pj->ji_qs.ji_jobid, msg_defproject, ATTR_project, PBS_DEFAULT_PROJECT);\n#endif\n\n\tif (implicit_commit) {\n\t\treq_commit_now(preq, pj);\n\t\treturn;\n\t}\n\n\t/* link job into server's new jobs list request  */\n\n\tappend_link(&svr_newjobs, &pj->ji_alljobs, pj);\n\n#ifndef PBS_MOM\n\t{\n\t\tjob *pjob;\n\t\tint myport, port;\n\t\tchar *myhost, *host;\n\n\t\t/*\n\t\t **\tIf the JOB_ATR_block attribute is set, look through the\n\t\t **\tother jobs to make sure the host/port combo is unique.\n\t\t */\n\t\tif ((is_jattr_set(pj, JOB_ATR_block)) == 0)\n\t\t\treturn;\n\n\t\tmyhost = get_jattr_str(pj, JOB_ATR_submit_host);\n\t\tif (myhost == NULL)\n\t\t\treturn;\n\t\tmyport = (int) get_jattr_long(pj, JOB_ATR_block);\n\t\tif (myport == 0)\n\t\t\treturn;\n\n\t\tfor (pjob = (job *) GET_NEXT(svr_alljobs);\n\t\t     pjob != NULL;\n\t\t     pjob = (job *) GET_NEXT(pjob->ji_alljobs)) {\n\t\t\tif (pjob == pj)\n\t\t\t\tcontinue;\n\t\t\tif (!is_jattr_set(pjob, JOB_ATR_block))\n\t\t\t\tcontinue;\n\n\t\t\tport = (int) get_jattr_long(pjob, JOB_ATR_block);\n\t\t\tif (port != myport)\n\t\t\t\tcontinue;\n\t\t\thost = get_jattr_str(pjob, JOB_ATR_submit_host);\n\t\t\tif (host == NULL)\n\t\t\t\tcontinue;\n\t\t\tif (strcmp(host, myhost) != 0)\n\t\t\t\tcontinue;\n\n\t\t\t/* we found a job with the same host/port */\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"job %s has duplicate BLOCK host %s port %d\",\n\t\t\t\tpjob->ji_qs.ji_jobid, host, port);\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t\t  pj->ji_qs.ji_jobid, log_buffer);\n\n\t\t\t/* unset the old job's JOB_ATR_block */\n\t\t\tset_jattr_l_slim(pjob, JOB_ATR_block, 0, SET);\n\t\t\tmark_jattr_not_set(pjob, JOB_ATR_block);\n\t\t}\n\t}\n#endif /* not PBS_MOM */\n}\n\n/**\n * @brief\n * \t\treq_jobcredential - receive a set of 
credentials to be used by the job\n *\n * @param[in]\tpreq\t-\tptr to the decoded request\n */\nvoid\nreq_jobcredential(struct batch_request *preq)\n{\n\tjob *pj;\n\tint type;\n\tchar *cred;\n\tsize_t len;\n\n\tDBPRT((\"%s: entered\\n\", __func__))\n\tpj = locate_new_job(preq, NULL);\n\tif (pj == NULL) {\n\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\treturn;\n\t}\n\tif (!check_job_substate(pj, JOB_SUBSTATE_TRANSIN)) {\n\t\tdelete_link(&pj->ji_alljobs);\n\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\treturn;\n\t}\n\n#ifndef PBS_MOM\n\tif (svr_authorize_jobreq(preq, pj) == -1) {\n\t\treq_reject(PBSE_PERM, 0, preq);\n\t\treturn;\n\t}\n#endif /* PBS_MOM */\n\n\ttype = pj->ji_extended.ji_ext.ji_credtype =\n\t\tpreq->rq_ind.rq_jobcred.rq_type;\n\tcred = preq->rq_ind.rq_jobcred.rq_data;\n\tlen = (size_t) preq->rq_ind.rq_jobcred.rq_size;\n\n\tswitch (type) {\n\n\t\tdefault:\n\t\t\tif (write_cred(pj, cred, len) == -1) {\n\t\t\t\tdelete_link(&pj->ji_alljobs);\n\t\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\t} else\n\t\t\t\treply_ack(preq);\n\t\t\tbreak;\n\t}\n\n\treturn;\n}\n\n/**\n * @brief\n *\t\tReceive job script section\n * @par Functionality:\n *\t\tFor Mom, each section is appended to the file.\n *\t\tFor Server, it's appended to the ji_script member\n *\t\tof the job structure, to be later saved to the DB\n *\n *  @param[in,out]\tpreq\t-\tPointer to batch request structure\n */\n\nvoid\nreq_jobscript(struct batch_request *preq)\n{\n\tjob *pj;\n#ifdef PBS_MOM\n\tint filemode = 0700;\n\tint fds;\n\tchar namebuf[MAXPATHLEN];\n#else\n\tchar *temp;\n\tu_Long size;\n#endif\n\n\tpj = locate_new_job(preq, NULL);\n\tif (pj == NULL) {\n\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\treturn;\n\t}\n\tif (!check_job_substate(pj, JOB_SUBSTATE_TRANSIN)) {\n\t\tdelete_link(&pj->ji_alljobs);\n\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\treturn;\n\t}\n#ifndef PBS_MOM\n\tif (svr_authorize_jobreq(preq, pj) == -1) {\n\t\treq_reject(PBSE_PERM, 0, preq);\n\t\treturn;\n\t}\n#else\n\t/* mom - if job 
has been checkpointed, discard script, already have it */\n\tif (pj->ji_qs.ji_svrflags & JOB_SVFLG_CHKPT) {\n\t\t/* do nothing, ignore script */\n\t\treply_ack(preq);\n\t\treturn;\n\t}\n#endif /* PBS_MOM */\n\n#ifdef PBS_MOM\n\n\tif (reject_root_scripts == TRUE) {\n\t\tif (is_jattr_set(pj, JOB_ATR_euser) && get_jattr_str(pj, JOB_ATR_euser) != NULL) {\n#ifdef WIN32\n\t\t\t/* equivalent of root */\n\t\t\tif (isAdminPrivilege(get_jattr_str(pj, JOB_ATR_euser)))\n#else\n\t\t\tstruct passwd *pwdp;\n\n\t\t\tpwdp = getpwnam(get_jattr_str(pj, JOB_ATR_euser));\n\t\t\tif ((pwdp != NULL) && (pwdp->pw_uid == 0))\n#endif\n\t\t\t{\n\n\t\t\t\tlog_err(-1, \"req_jobscript\",\n\t\t\t\t\tmsg_mom_reject_root_scripts);\n\t\t\t\tdelete_link(&pj->ji_alljobs);\n\t\t\t\treq_reject(PBSE_MOM_REJECT_ROOT_SCRIPTS, 0, preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t}\n\n\t(void) strcpy(namebuf, path_jobs);\n\tif (*pj->ji_qs.ji_fileprefix != '\\0')\n\t\t(void) strcat(namebuf, pj->ji_qs.ji_fileprefix);\n\telse\n\t\t(void) strcat(namebuf, pj->ji_qs.ji_jobid);\n\t(void) strcat(namebuf, JOB_SCRIPT_SUFFIX);\n\n\tif (pj->ji_qs.ji_un.ji_newt.ji_scriptsz == 0) {\n\t\tfds = open(namebuf, O_WRONLY | O_CREAT, filemode);\n\t} else {\n\t\tfds = open(namebuf, O_WRONLY | O_APPEND, filemode);\n\t}\n\tif (fds < 0) {\n\t\tlog_err(errno, \"req_jobscript\", msg_script_open);\n\t\tdelete_link(&pj->ji_alljobs);\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\treturn;\n\t}\n\n#ifdef WIN32\n\tsecure_file2(namebuf, \"Administrators\", READS_MASK | WRITES_MASK | STANDARD_RIGHTS_REQUIRED, \"Everyone\", READS_MASK | READ_CONTROL);\n\tsetmode(fds, O_BINARY);\n#endif /* WIN32 */\n\n\tif (write(fds, preq->rq_ind.rq_jobfile.rq_data,\n\t\t  (unsigned) preq->rq_ind.rq_jobfile.rq_size) !=\n\t    preq->rq_ind.rq_jobfile.rq_size) {\n\t\tlog_err(errno, \"req_jobscript\", msg_script_write);\n\t\tdelete_link(&pj->ji_alljobs);\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t(void) close(fds);\n\t\treturn;\n\t}\n\t(void) close(fds);\n#else /* server - 
server - server - server */\n\t/* add the script to the job */\n\tsize = get_bytes_from_attr(&attr_jobscript_max_size);\n\tif (pj->ji_qs.ji_un.ji_newt.ji_scriptsz + preq->rq_ind.rq_jobfile.rq_size > size) {\n\t\tjob_purge(pj);\n\t\treq_reject(PBSE_JOBSCRIPTMAXSIZE, 0, preq);\n\t\treturn;\n\t}\n\ttemp = realloc(pj->ji_script, pj->ji_qs.ji_un.ji_newt.ji_scriptsz +\n\t\t\t\t\t      preq->rq_ind.rq_jobfile.rq_size + 1);\n\tif (!temp) {\n\t\tjob_purge(pj);\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\treturn;\n\t}\n\tpj->ji_script = temp;\n\tmemmove(pj->ji_script + pj->ji_qs.ji_un.ji_newt.ji_scriptsz,\n\t\tpreq->rq_ind.rq_jobfile.rq_data,\n\t\t(size_t) preq->rq_ind.rq_jobfile.rq_size);\n#endif\n\tpj->ji_qs.ji_un.ji_newt.ji_scriptsz += preq->rq_ind.rq_jobfile.rq_size;\n\n#ifndef PBS_MOM\n\tpj->ji_script[pj->ji_qs.ji_un.ji_newt.ji_scriptsz] = '\\0';\n#endif\n\n\tpj->ji_qs.ji_svrflags = (pj->ji_qs.ji_svrflags & ~JOB_SVFLG_CHKPT) |\n\t\t\t\tJOB_SVFLG_SCRIPT; /* has a script file */\n\n\treply_ack(preq);\n}\n\n#ifndef PBS_MOM\n/* the following is for the server only, MOM has her own version below */\n\n/**\n * @brief\n * \t\treq_mvjobfile - receive a job file\n *\t\tThis request is used to move a file associated with a job, typically\n *\t\tthe standard output or error, between a server and a server or from\n *\t\ta mom back to a server.  
For a server, the destination is always\n *\t\twithin the spool directory.\n *\n * @param[in,out]\tpreq\t-\tptr to the decoded request\n */\n\nvoid\nreq_mvjobfile(struct batch_request *preq)\n{\n\tint fds;\n\tchar namebuf[MAXPATHLEN + 1];\n\tjob *pj;\n\tmode_t cur_mask;\n\tstruct stat sb;\n\n\tpj = locate_new_job(preq, NULL);\n\tif (pj == NULL)\n\t\tpj = find_job(preq->rq_ind.rq_jobfile.rq_jobid);\n\n\tif ((preq->rq_fromsvr == 0) || (pj == NULL)) {\n\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\treturn;\n\t}\n\n\t(void) strcpy(namebuf, path_spool);\n\tif (*pj->ji_qs.ji_fileprefix != '\\0')\n\t\t(void) strcat(namebuf, pj->ji_qs.ji_fileprefix);\n\telse\n\t\t(void) strcat(namebuf, pj->ji_qs.ji_jobid);\n\tswitch ((enum job_file) preq->rq_ind.rq_jobfile.rq_type) {\n\t\tcase StdOut:\n\t\t\t(void) strcat(namebuf, JOB_STDOUT_SUFFIX);\n\t\t\tbreak;\n\n\t\tcase StdErr:\n\t\t\t(void) strcat(namebuf, JOB_STDERR_SUFFIX);\n\t\t\tbreak;\n\n\t\tcase Chkpt:\n\t\t\t(void) strcat(namebuf, JOB_CKPT_SUFFIX);\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\t\treturn;\n\t}\n\n\t/* check symlinks */\n\tif (lstat(namebuf, &sb) == 0) {\n\t\t/* if it exists, the file must be a prior copy which means */\n\t\t/* it must be a regular file and owned by me (root)        */\n\t\tif (((sb.st_mode & S_IFMT) != S_IFREG) ||\n\t\t    (sb.st_nlink != 1) ||\n\t\t    (sb.st_uid != 0)) {\n\t\t\t/* this does not meet the above conditions    */\n\t\t\t/* someone may be trying to hack in a link to */\n\t\t\t/* cause the link target to be overwritten    */\n\t\t\t/* let's log it and leave the file as evidence */\n\t\t\tlog_suspect_file(__func__, \"wrong type or owner\", namebuf, &sb);\n\t\t}\n\t}\n\tif (preq->rq_ind.rq_jobfile.rq_sequence == 0) {\n\t\tchar ntmpbuf[MAXPATHLEN + 1];\n\n\t\t/* receiving first piece, so create new file securely */\n\t\t/* will discard any existing file (via rename)\t      */\n\t\tsnprintf(ntmpbuf, sizeof(ntmpbuf), \"%s\", namebuf);\n\t\tif (strlen(ntmpbuf) > 
(sizeof(ntmpbuf) - 8))\n\t\t\tntmpbuf[sizeof(ntmpbuf) - 8] = '\\0';\n\t\tstrcat(ntmpbuf, \"XXXXXX\"); /* template for mkstemp() */\n\t\tcur_mask = umask(077);\t   /* force to create -rw------ */\n\t\tfds = mkstemp(ntmpbuf);\n\t\t(void) umask(cur_mask);\n\t\tif (fds != -1) {\n\t\t\t/* now rename to the filename we want */\n\t\t\tif (rename(ntmpbuf, namebuf) == -1) {\n\t\t\t\tclose(fds);\n\t\t\t\tunlink(ntmpbuf);\n\t\t\t\tfds = -1;\n\t\t\t}\n\t\t}\n\n\t} else {\n\t\t/* receiving a follow-on chunk of data, file\t*/\n\t\t/* should already exist as regular file\t\t*/\n\t\tfds = open(namebuf, O_WRONLY | O_APPEND | O_Sync, 0600);\n\t\tif (fds != -1) {\n\t\t\tif (lstat(namebuf, &sb) == 0) {\n\t\t\t\t/* if exists, file must be a regular file and be */\n\t\t\t\t/* owned by me (root) \t\t\t\t */\n\t\t\t\tif (((sb.st_mode & S_IFMT) != S_IFREG) ||\n\t\t\t\t    (sb.st_nlink != 1) ||\n\t\t\t\t    (sb.st_uid != 0)) {\n\t\t\t\t\t/* this does not meet the above conditions */\n\t\t\t\t\tlog_suspect_file(__func__, \"wrong type or owner\",\n\t\t\t\t\t\t\t namebuf, &sb);\n\t\t\t\t\tclose(fds);\n\t\t\t\t\tunlink(namebuf);\n\t\t\t\t\tfds = -1;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tsprintf(log_buffer, \"unable to lstat %s\", namebuf);\n\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\tclose(fds);\n\t\t\t\tfds = -1;\n\t\t\t}\n\t\t}\n\t}\n\n\tif (fds < 0) {\n\t\tlog_err(errno, \"req_mvjobfile\", msg_script_open);\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\treturn;\n\t}\n\n\tif (write(fds, preq->rq_ind.rq_jobfile.rq_data,\n\t\t  (unsigned) preq->rq_ind.rq_jobfile.rq_size) !=\n\t    preq->rq_ind.rq_jobfile.rq_size) {\n\t\tlog_err(errno, \"req_jobfile\", msg_script_write);\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t(void) close(fds);\n\t\treturn;\n\t}\n\t(void) close(fds);\n\treply_ack(preq);\n}\n#else /* PBS_MOM - MOM MOM MOM */\n/**\n * @brief\n * \t\treq_mvjobfile - move the specified job standard files\n *\t\tThis is MOM's version.  
The files are owned by the user and placed\n *\t\tin either the spool area or the user's home directory depending\n *\t\ton the compile option, see std_file_name().\n *\n * @param[in,out]\tpreq\t-\tptr to the decoded request\n */\n\nvoid\nreq_mvjobfile(struct batch_request *preq)\n{\n\tint fds;\n\tenum job_file jft;\n\tint oflag;\n\tjob *pj;\n\tstruct passwd *check_pwd(job *);\n\n\tjft = (enum job_file) preq->rq_ind.rq_jobfile.rq_type;\n\tif (preq->rq_ind.rq_jobfile.rq_sequence == 0)\n\t\toflag = O_CREAT | O_WRONLY | O_TRUNC;\n\telse\n\t\toflag = O_CREAT | O_WRONLY | O_APPEND;\n\n\tpj = locate_new_job(preq, NULL);\n\tif (pj == NULL)\n\t\tpj = find_job(preq->rq_ind.rq_jobfile.rq_jobid);\n\n\tif (pj == NULL) {\n\t\treq_reject(PBSE_UNKJOBID, 0, preq);\n\t\treturn;\n\t}\n\t/* this call sets up home/uid/gid information for the job */\n\tif (check_pwd(pj) == NULL) {\n\t\treq_reject(PBSE_MOMREJECT, 0, preq);\n\t\treturn;\n\t}\n\tif ((is_jattr_set(pj, JOB_ATR_sandbox)) &&\n\t    (strcasecmp(get_jattr_str(pj, JOB_ATR_sandbox), \"PRIVATE\") == 0)) {\n\t\t/* have a private sandbox which must be recreated */\n\t\t/* prior to copying standard out and err back     */\n\n\t\tchar *pbs_jobdir;\n\n\t\tpbs_jobdir = jobdirname(pj->ji_qs.ji_jobid,\n\t\t\t\t\tpj->ji_grpcache->gc_homedir);\n\t\t/* call mkjobdir() with a NULL for the environment entry */\n\t\t/* We are not at a point where we can setup the job's    */\n\t\t/* environment and mkjobdir() will be called again in    */\n\t\t/* start_exec where the permissions are reset to match   */\n\t\t/* the user's umask and the environment is built.\t */\n#ifdef WIN32\n\n\t\tif (mkjobdir(pj->ji_qs.ji_jobid,\n\t\t\t     pbs_jobdir,\n\t\t\t     (pj->ji_user != NULL) ? 
pj->ji_user->pw_name : NULL,\n\t\t\t     INVALID_HANDLE_VALUE) != 0) {\n\t\t\tsprintf(log_buffer, \"unable to create the job directory %s\", pbs_jobdir);\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, pj->ji_qs.ji_jobid, log_buffer);\n\t\t\treq_reject(PBSE_MOMREJECT, 0, preq);\n\t\t\treturn;\n\t\t}\n#else\n\t\tif (mkjobdir(pj->ji_qs.ji_jobid,\n\t\t\t     pbs_jobdir,\n\t\t\t     pj->ji_grpcache->gc_uid,\n\t\t\t     pj->ji_grpcache->gc_gid) != 0) {\n\t\t\tsprintf(log_buffer, \"unable to create the job directory %s\", pbs_jobdir);\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, pj->ji_qs.ji_jobid, log_buffer);\n\t\t\treq_reject(PBSE_MOMREJECT, 0, preq);\n\t\t\treturn;\n\t\t}\n#endif /* WIN32 */\n\t}\n\n\tif ((fds = open_std_file(pj, jft, oflag,\n\t\t\t\t pj->ji_grpcache->gc_gid)) < 0) {\n\t\treq_reject(PBSE_MOMREJECT, 0, preq);\n\t\treturn;\n\t}\n\n\tif (write(fds, preq->rq_ind.rq_jobfile.rq_data,\n\t\t  preq->rq_ind.rq_jobfile.rq_size) !=\n\t    preq->rq_ind.rq_jobfile.rq_size)\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\telse {\n\t\treply_ack(preq);\n\t}\n\t(void) close(fds);\n}\n#endif /* PBS_MOM */\n\n/**\n * @brief\n * \tParse the next message type and parameter from extend field\n *\n * @param[in] extend - extend field\n * @param[out] msg_type - batch request type\n * @param[out] msg_params - this array will be filled with parameters in the order they were received;\n * \t\t\t\tparameters have to be freed by the caller.\n * @param[in] param_arr_sz - size of the parameter array.\n *\n * @return int\n * @retval 0 success\n * @retval -1 failure, memory allocation / input array is not of sufficient size\n */\nint\nparse_next_msg(char *extend, int *msg_type, char **msg_params, int param_arr_sz)\n{\n\tint i, j;\n\tchar **arr;\n\tchar *pc;\n\n\tif ((arr = break_comma_list(extend)) == NULL)\n\t\treturn -1;\n\n\tfor (i = 0; arr[i]; i++) {\n\t\tif ((pc = strstr(arr[i], EXTEND_OPT_NEXT_MSG_TYPE))) {\n\t\t\t*msg_type = strtol(pc + 
strlen(EXTEND_OPT_NEXT_MSG_TYPE) + 1, NULL, 10);\n\t\t\tbreak;\n\t\t}\n\t}\n\tfor (j = 0; arr[i]; i++) {\n\t\tif ((pc = strstr(arr[i], EXTEND_OPT_NEXT_MSG_PARAM))) {\n\t\t\tif (j >= param_arr_sz) {\n\t\t\t\tfree_str_array(arr);\n\t\t\t\treturn -1;\n\t\t\t} else {\n\t\t\t\tmsg_params[j++] = strdup(pc + strlen(EXTEND_OPT_NEXT_MSG_PARAM) + 1);\n\t\t\t}\n\t\t}\n\t}\n\n\tif (j < param_arr_sz)\n\t\tmsg_params[j] = NULL;\n\tfree_str_array(arr);\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tCommit ownership of job\n * @par Functionality:\n *\t\tSet state of job to JOB_STATE_LTR_QUEUED (or Held or Waiting) and\n *\t\tenqueue the job into its destination queue.\n *\n * @param[in]\tpreq\t-\tThe batch request structure\n * @param[in]\tpj\t\t-   Pointer to the job structure\n *\n */\nvoid\nreq_commit_now(struct batch_request *preq, job *pj)\n{\n#ifndef PBS_MOM\n\tchar newstate;\n\tint newsub;\n\tpbs_queue *pque;\n\tint rc;\n\tpbs_db_jobscr_info_t jobscr;\n\tpbs_db_obj_info_t obj;\n\tlong long time_usec;\n\tstruct timeval tval;\n\tvoid *conn = (void *) svr_db_conn;\n\tchar *runjob_extend = NULL;\n\tstruct batch_request *preq_runjob = NULL;\n#endif\n\n\tif (!check_job_substate(pj, JOB_SUBSTATE_TRANSIN)) {\n\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\treturn;\n\t}\n\n\tset_job_state(pj, JOB_STATE_LTR_TRANSIT);\n\tset_job_substate(pj, JOB_SUBSTATE_TRANSICM);\n\n#ifdef PBS_MOM /* MOM only */\n\n\t/* move job from new job list to \"all\" job list, set to running state */\n\tdelete_link(&pj->ji_alljobs);\n\tif (pbs_idx_insert(jobs_idx, pj->ji_qs.ji_jobid, pj) != PBS_IDX_RET_OK) {\n\t\tlog_joberr(PBSE_INTERNAL, __func__, \"Failed insert job in index\", pj->ji_qs.ji_jobid);\n\t\treq_reject(PBSE_INTERNAL, 0, preq);\n\t\tjob_purge(pj);\n\t\treturn;\n\t}\n\tappend_link(&svr_alljobs, &pj->ji_alljobs, pj);\n\t/*\n\t ** Set JOB_SVFLG_HERE to indicate that this is Mother Superior.\n\t */\n\tpj->ji_qs.ji_svrflags |= JOB_SVFLG_HERE;\n\tset_job_state(pj, JOB_STATE_LTR_RUNNING);\n\tset_job_substate(pj, 
JOB_SUBSTATE_PRERUN);\n\tpj->ji_qs.ji_un_type = JOB_UNION_TYPE_MOM;\n\tif (preq->prot) {\n\t\tstruct sockaddr_in *addr = tpp_getaddr(preq->rq_conn);\n\t\tif (addr)\n\t\t\tpj->ji_qs.ji_un.ji_momt.ji_svraddr = (pbs_net_t) ntohl(addr->sin_addr.s_addr);\n\t} else\n\t\tpj->ji_qs.ji_un.ji_momt.ji_svraddr = get_connectaddr(preq->rq_conn);\n\tpj->ji_qs.ji_un.ji_momt.ji_exitstat = 0;\n\tif ((pj->ji_qs.ji_svrflags & (JOB_SVFLG_CHKPT | JOB_SVFLG_ChkptMig)) == 0) {\n\t\tpj->ji_qs.ji_stime = time_now; /* start of walltime */\n\t\tset_jattr_l_slim(pj, JOB_ATR_stime, time_now, SET);\n\t}\n\n\t/*\n\t * For MOM - reply to the request and start up the job\n\t * any errors will be dealt with via the mechanism\n\t * used for a terminated job\n\t */\n\n\t(void) reply_jobid(preq, pj->ji_qs.ji_jobid, BATCH_REPLY_CHOICE_Commit);\n\tjob_save(pj);\n\tstart_exec(pj);\n\n\t/* The ATR_VFLAG_MODIFY bit for several attributes used to be\n\t * set here. Now we rely on these bits to be set when and where\n\t * an attribute is modified. 
Several of these are also set in\n\t * record_finish_exec().\n\t */\n\n#else  /* PBS_SERVER */\n\tif (svr_authorize_jobreq(preq, pj) == -1) {\n\t\treq_reject(PBSE_PERM, 0, preq);\n\t\treturn;\n\t}\n\n\t/* Set Server level entity usage */\n\n\tif ((rc = account_entity_limit_usages(pj, NULL, NULL, INCR, ETLIM_ACC_ALL)) != 0) {\n\t\tjob_purge(pj);\n\t\treq_reject(rc, 0, preq);\n\t\treturn;\n\t}\n\n\t/* remove the job from the server's new job list, set state, and enqueue it */\n\n\tdelete_link(&pj->ji_alljobs);\n\n\tsvr_evaljobstate(pj, &newstate, &newsub, 1);\n\tsvr_setjobstate(pj, newstate, newsub);\n\n\tgettimeofday(&tval, NULL);\n\ttime_usec = (tval.tv_sec * 1000000L) + tval.tv_usec;\n\t/* set the queue rank attribute */\n\tset_jattr_ll_slim(pj, JOB_ATR_qrank, time_usec, SET);\n\n\tif (preq->rq_type == PBS_BATCH_Commit)\n\t\trunjob_extend = preq->rq_extend;\n\n\tif ((rc = svr_enquejob(pj, runjob_extend)) != 0) {\n\t\tjob_purge(pj);\n\t\treq_reject(rc, 0, preq);\n\t\treturn;\n\t}\n\taccount_jobstr(pj, PBS_ACCT_QUEUE);\n\n\t/* Make things faster by writing the job only once here - at commit time */\n\tif (job_save_db(pj)) {\n\t\tjob_purge(pj);\n\t\treq_reject(PBSE_SAVE_ERR, 0, preq);\n\t\treturn;\n\t}\n\n\tif (pj->ji_script) {\n\t\tstrcpy(jobscr.ji_jobid, pj->ji_qs.ji_jobid);\n\t\tjobscr.script = pj->ji_script;\n\t\tobj.pbs_db_obj_type = PBS_DB_JOBSCR;\n\t\tobj.pbs_db_un.pbs_db_jobscr = &jobscr;\n\n\t\tif (pbs_db_save_obj(conn, &obj, OBJ_SAVE_NEW) != 0) {\n\t\t\tjob_purge(pj);\n\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\treturn;\n\t\t}\n\t\tfree(pj->ji_script);\n\t\tpj->ji_script = NULL;\n\t}\n\n\t/* Now, no need to save the server here because it\n\t   was already saved in get_next_svr_sequence_id() */\n\n\t/*\n\t * if the job went into a Route (push) queue that has been started,\n\t * try once to route it to give immediate feedback as a courtesy\n\t * to the user.\n\t */\n\n\tpque = pj->ji_qhdr;\n\n\tif ((preq->rq_fromsvr == 0) &&\n\t    (pque->qu_qs.qu_type == 
QTYPE_RoutePush) &&\n\t    (get_qattr_long(pque, QA_ATR_Started) != 0)) {\n\t\tif ((rc = job_route(pj)) != 0) {\n\t\t\tjob_purge(pj);\n\t\t\treq_reject(rc, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n\n\t/* need to print message first, before request goes away */\n\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, pj->ji_qs.ji_jobid,\n\t\t   msg_jobnew, preq->rq_user, preq->rq_host,\n\t\t   get_jattr_str(pj, JOB_ATR_job_owner),\n\t\t   get_jattr_str(pj, JOB_ATR_jobname),\n\t\t   pj->ji_qhdr->qu_qs.qu_name);\n\n\t/* Allocate a new batch request to use for runjob as we will be freeing preq in reply_jobid()  */\n\tif (runjob_extend) {\n\t\tchar *param_arr[1];\n\n\t\tif ((preq_runjob = copy_br(preq)) == NULL) {\n\t\t\tjob_purge(pj);\n\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\treturn;\n\t\t}\n\n\t\tif (parse_next_msg(runjob_extend, &preq_runjob->rq_type, param_arr, 1) == -1) {\n\t\t\tjob_purge(pj);\n\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\treturn;\n\t\t}\n\n\t\tstrcpy(preq_runjob->rq_ind.rq_run.rq_jid, pj->ji_qs.ji_jobid);\n\t\tpreq_runjob->rq_ind.rq_run.rq_destin = param_arr[0];\n\t\tpreq_runjob->tpp_ack = 1;\n\t\tpreq_runjob->tppcmd_msgid = strdup(preq->tppcmd_msgid);\n\t}\n\n\t/* acknowledge the request with the job id */\n\tif ((rc = reply_jobid(preq, pj->ji_qs.ji_jobid, BATCH_REPLY_CHOICE_Commit))) {\n\t\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_ERR, pj->ji_qs.ji_jobid, \"Failed to reply with Job Id, error %d\", rc);\n\t\tjob_purge(pj);\n\t\treturn;\n\t}\n\n\tif ((pj->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0)\n\t\tissue_track(pj); /* notify creator where job is */\n\n\tif (!preq_runjob)\n\t\treturn;\n\n\treq_runjob(preq_runjob);\n#endif /* PBS_SERVER */\n}\n\n/**\n * @brief\n *\t\tlocate job and call req_commit_now\n *\n *  @param[in]\tpreq - The batch request structure\n *\n */\nvoid\nreq_commit(struct batch_request *preq)\n{\n\tjob *pj;\n\n\tpj = locate_new_job(preq, preq->rq_ind.rq_commit);\n\tif (pj == NULL) {\n\t\treq_reject(PBSE_UNKJOBID, 0, 
preq);\n\t\treturn;\n\t}\n\n\treq_commit_now(preq, pj);\n}\n\n/**\n * @brief\n * \t\tlocate_new_job - locate a \"new\" job which has been set up by req_quejob on\n *\t\tthe server's new job list.\n *\n * @par Functionality:\n *\t\tThis function is used by the sub-requests which make up the global\n *\t\t\"Queue Job Request\" to locate the job structure.\n *\n *\t\tIf the jobid is specified (will be for rdytocommit and commit, but not\n *\t\tfor script), we search for a matching jobid.\n *\n *\t\tThe job must (also) match the socket specified and the host associated\n *\t\twith the socket unless ji_fromsock == -1, then it's a recovery situation.\n *\n * @param[in]\tpreq\t-\tThe batch request structure\n * @param[in]\tjobid\t-\tJob Id which needs to be located\n *\n * @return\tjob structure associated with jobid.\n */\n\nstatic job *\nlocate_new_job(struct batch_request *preq, char *jobid)\n{\n\tjob *pj;\n\tint sock = -1;\n\tpbs_net_t conn_addr = 0;\n\n\tif (preq == NULL)\n\t\treturn NULL;\n\n\tsock = preq->rq_conn;\n\n\tif (!preq->prot) { /* Connection from TCP stream */\n\t\tconn_addr = get_connectaddr(sock);\n\t} else {\n\t\tstruct sockaddr_in *addr = tpp_getaddr(sock);\n\t\tif (addr)\n\t\t\tconn_addr = (pbs_net_t) ntohl(addr->sin_addr.s_addr);\n\t}\n\n\tpj = (job *) GET_NEXT(svr_newjobs);\n\twhile (pj) {\n\n\t\tif ((pj->ji_qs.ji_un.ji_newt.ji_fromsock == -1) ||\n\t\t    ((pj->ji_qs.ji_un.ji_newt.ji_fromsock == sock) &&\n\t\t     (pj->ji_qs.ji_un.ji_newt.ji_fromaddr == conn_addr))) {\n\n\t\t\tif (jobid != NULL) {\n\t\t\t\tif (!strncmp(pj->ji_qs.ji_jobid, jobid, PBS_MAXSVRJOBID))\n\t\t\t\t\tbreak;\n\t\t\t} else\n\t\t\t\tbreak;\n\t\t}\n\n\t\tpj = (job *) GET_NEXT(pj->ji_alljobs);\n\t}\n\treturn (pj);\n}\n\n#ifndef PBS_MOM /* SERVER only */\n/**\n * @brief  Notify the relevant scheduler(s) of the command passed to this function\n *\n * @param[in] cmd - The command to be sent to the scheduler\n * @param[in] resv - The reservation related to the command\n 
*\n * @return Number of schedulers notified.\n *\n */\nint\nnotify_scheds_about_resv(int cmd, resc_resv *resv)\n{\n\tpbs_sched *psched;\n\tchar *partition_name = NULL;\n\tint num_scheds = 0;\n\n\tif (resv != NULL) {\n\t\tif (is_rattr_set(resv, RESV_ATR_partition))\n\t\t\tpartition_name = get_rattr_str(resv, RESV_ATR_partition);\n\t\telse\n\t\t\t/* for reservations without partitions, set request/reply count to 0\n\t\t\t * because this is the only case when notification will be sent to multiple\n\t\t\t * schedulers and the server expects multiple replies. Once the partition name is set,\n\t\t\t * only the relevant scheduler is notified and the server will receive only one reply.\n\t\t\t */\n\t\t\tresv->req_sched_count = resv->rep_sched_count = 0;\n\t}\n\n\tfor (psched = (pbs_sched *) GET_NEXT(svr_allscheds); psched; psched = (pbs_sched *) GET_NEXT(psched->sc_link)) {\n\t\tif (partition_name != NULL) {\n\t\t\tif (strcmp(partition_name, DEFAULT_PARTITION) == 0) {\n\t\t\t\tif (get_sched_attr_long(dflt_scheduler, SCHED_ATR_scheduling) == 1) {\n\t\t\t\t\tset_scheduler_flag(cmd, dflt_scheduler);\n\t\t\t\t\tnum_scheds++;\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\t} else {\n\t\t\t\tpbs_sched *tmp;\n\t\t\t\ttmp = find_sched_from_partition(partition_name);\n\t\t\t\tif (tmp != NULL && (get_sched_attr_long(tmp, SCHED_ATR_scheduling) == 1)) {\n\t\t\t\t\tset_scheduler_flag(cmd, tmp);\n\t\t\t\t\tnum_scheds++;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\tif (get_sched_attr_long(psched, SCHED_ATR_scheduling) == 1) {\n\t\t\t\tset_scheduler_flag(cmd, psched);\n\t\t\t\tif (resv != NULL)\n\t\t\t\t\tresv->req_sched_count++;\n\t\t\t\tnum_scheds++;\n\t\t\t}\n\t\t}\n\t}\n\treturn num_scheds;\n}\n\n/**\n * @brief\n *\t\t\"resvSub\" Batch Request processing routine\n *\n *  @param[in]\tpreq\t-\tptr to the decoded request\n */\n\nvoid\nreq_resvSub(struct batch_request *preq)\n{\n\t/*\n\t * buf and buf1 are used to hold user@hostname strings together\n\t * with a small amount (less than 64 characters) of 
text.\n\t */\n\tchar buf[PBS_MAXUSER + PBS_MAXHOSTNAME + 64] = {0};\n\tchar buf1[PBS_MAXUSER + PBS_MAXHOSTNAME + 64] = {0};\n\tint created_here = 0;\n\tint i = 0;\n\tchar *rid = NULL;\n\tchar ridbuf[PBS_MAXSVRRESVID + 1] = {0};\n\tchar qbuf[PBS_MAXSVRRESVID + 1] = {0};\n\tchar *pc = NULL;\n\tattribute_def *pdef = NULL;\n\tresc_resv *presv = NULL;\n\tsvrattrl *psatl = NULL;\n\tint rc = 0;\n\tint sock = preq->rq_conn;\n\tchar hook_msg[HOOK_MSG_SIZE] = {0};\n\tint resc_access_perm_save = 0;\n\tint qmove_requested = 0;\n\tchar *fmt = \"%a %b %d %H:%M:%S %Y\";\n\tchar tbuf1[256] = {0};\n\tchar tbuf2[256] = {0};\n\tint is_maintenance = 0;\n\tint is_resv_from_job = 0;\n\tjob *pjob;\n\tint rc2 = 0;\n\tchar owner[PBS_MAXUSER + 1];\n\tchar *partition_name = NULL;\n\tchar *ptr = NULL;\n\tconn_t *conn = NULL;\n\n\tif (preq->rq_extend && strchr(preq->rq_extend, 'm'))\n\t\tis_maintenance = 1;\n\n\tif (preq->prot != PROT_TPP) {\n\t\tconn = get_conn(sock);\n\t\tif (!conn) {\n\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tswitch (process_hooks(preq, hook_msg, sizeof(hook_msg),\n\t\t\t      pbs_python_set_interrupt)) {\n\t\tcase 0: /* explicit reject */\n\t\t\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\t\t\treturn;\n\t\tcase 1:\t\t\t\t\t    /* explicit accept */\n\t\t\tif (recreate_request(preq) == -1) { /* error */\n\t\t\t\t/* we have to reject the request, as 'preq' */\n\t\t\t\t/* may have been partly modified            */\n\t\t\t\tstrcpy(hook_msg,\n\t\t\t\t       \"resvsub event: rejected request\");\n\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t  LOG_ERR, \"\", hook_msg);\n\t\t\t\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tbreak;\n\t\tcase 2: /* no hook script executed - go ahead and accept event*/\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_INFO, \"\", \"resvsub event: accept req by default\");\n\t}\n\n\t/* Is the admin refusing to allow 
reservations on this server? */\n\n\tif (is_sattr_set(SVR_ATR_ResvEnable) && !get_sattr_long(SVR_ATR_ResvEnable)) {\n\n\t\tsnprintf(buf, sizeof(buf), \"reservations disallowed on %s\", server_name);\n\t\tif ((rc = reply_text(preq, PBSE_RESVAUTH_U, buf))) {\n\t\t\t/* reply failed,  close connection; purge resv */\n\t\t\tclose_client(sock);\n\t\t}\n\t\treturn;\n\t}\n\t/* Are reservations from submitting host allowed? */\n\tif (get_sattr_long(SVR_ATR_acl_Resvhost_enable)) {\n\t\t/* acl enabled so need to check it */\n\t\tif (acl_check(get_sattr(SVR_ATR_acl_Resvhosts),\n\t\t\t      preq->rq_host, ACL_Host) == 0) {\n\t\t\treq_reject(PBSE_RESVAUTH_H, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tresc_access_perm = preq->rq_perm | ATR_DFLAG_Creat;\n\n\tif (is_maintenance && !(preq->rq_perm & (ATR_DFLAG_OPWR | ATR_DFLAG_MGWR))) {\n\t\treq_reject(PBSE_PERM, 0, preq);\n\t\treturn;\n\t}\n\n\t/* get reservation id/queue name locally */\n\tif ((next_svr_sequence_id = get_next_svr_sequence_id()) == -1) {\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\treturn;\n\t}\n\t/*\n\t * if the reservation ID is supplied, the request had better be\n\t * from another server\n\t * Remark: would be the case if the reservation is being forwarded\n\t *     to another server - something to think about for the future\n\t */\n\n\tif (preq->rq_fromsvr) {\n\t\t/* from another server - accept the extra attributes */\n\t\tresc_access_perm |= ATR_DFLAG_MGWR | ATR_DFLAG_SvWR;\n\t\trid = preq->rq_ind.rq_queuejob.rq_jid;\n\n\t} else if (preq->rq_ind.rq_queuejob.rq_jid[0] != '\\0') {\n\t\t/* a reservation id is not allowed from a client */\n\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\treturn;\n\t} else {\n\t\t/* No reservation ID came with the request, create one    */\n\t\t/* Note: use server's job seq number generation mechanism */\n\n\t\tcreated_here = RESV_SVFLG_HERE;\n\t\tif (generate_objid(ridbuf, server_name, MGR_OBJ_RESV, PBS_RESV_ID_CHAR) != 0) {\n\t\t\treq_reject(PBSE_INTERNAL, 0, 
preq);\n\t\t\treturn;\n\t\t}\n\t\trid = ridbuf;\n\t}\n\n\t/* generate a queue name then update sv_jobidnumber and\n\t * save the server struct\n\t * generate a queue name then update sv_resvidnumber and\n\t * save the server struct\n\t * The second comment above is the one we really want,\n\t * but the structure field would be an addition to the\n\t * \"quick save\" area of the server - can't do\n\t */\n\tptr = strchr(rid, '.');\n\tif (ptr == NULL) {\n\t\treq_reject(PBSE_INTERNAL, 0, preq);\n\t\treturn;\n\t}\n\n\t*ptr = '\\0';\n\tpbs_strncpy(qbuf, rid, sizeof(qbuf));\n\t*ptr = '.';\n\n\t/* does reservation already exist, check both old\n\t * and new reservations?\n\t * This could happen if we are allowing reservations\n\t * submitted to one server to be passed off to another\n\t * server to fulfill or reject;  we may or may not want\n\t * this capability, but will have this code here\n\t */\n\tpresv = find_resv(rid);\n\n\tif (presv != NULL) {\n\n\t\t/* server rejects resvSub request if the reservation already exists */\n\t\treq_reject(PBSE_RESVEXIST, 0, preq);\n\t\treturn;\n\t}\n\n\t/* OK, we have created a name for the local backing\n\t * store file and a zero length file of that name\n\t * is on the disk.   Now, CREATE THE RESC_RESV STRUCTURE\n\t * for managing the reservation and later on try and\n\t * create a pbs_queue into which jobs submitted to the\n\t * reservation get assigned\n\t */\n\n\tif ((presv = resv_alloc(rid)) == NULL) {\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\treturn;\n\t}\n\n\t/* Take a quick pass through the attribute list to see\n\t * whether a qmove is being performed. 
If so, the operation\n\t * is granted special permission to modify readonly\n\t * resources.\n\t */\n\tqmove_requested = 0;\n\tpsatl = (svrattrl *) GET_NEXT(preq->rq_ind.rq_queuejob.rq_attr);\n\twhile (psatl) {\n\t\tif (strcmp(psatl->al_atopl.name, ATTR_convert) == 0) {\n\t\t\tqmove_requested = 1;\n\t\t\tbreak;\n\t\t}\n\t\tpsatl = (svrattrl *) GET_NEXT(psatl->al_link);\n\t}\n\n\t/* decode attributes from resvSub request into\n\t * the resc_resv structure's attributes\n\t */\n\n\tresc_access_perm_save = resc_access_perm; /* save perm */\n\tpsatl = (svrattrl *) GET_NEXT(preq->rq_ind.rq_queuejob.rq_attr);\n\twhile (psatl) {\n\t\tint index;\n\t\t/* reservations do not support shrink-to-fit walltime */\n\t\tif (!(strcasecmp(psatl->al_name, ATTR_l)) &&\n\t\t    (!(strcasecmp(psatl->al_resc, MIN_WALLTIME)) ||\n\t\t     !(strcasecmp(psatl->al_resc, MAX_WALLTIME)))) {\n\t\t\treq_reject(PBSE_NOSTF_RESV, 0, preq);\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_REQUEST, LOG_ERR, \"\", msg_nostf_resv);\n\t\t\tresv_free(presv);\n\t\t\treturn;\n\t\t}\n\t\t/* identify the attribute by name */\n\t\tindex = find_attr(resv_attr_idx, resv_attr_def, psatl->al_name);\n\t\tif (index < 0) {\n\n\t\t\tif (ignore_attr(psatl->al_name) >= 0) {\n\t\t\t\t/*ignore some currently not handled options\n\t\t\t\t *also helpful in debugging using qsub;\n\t\t\t\t *remove later on\n\t\t\t\t */\n\n\t\t\t\tpsatl = (svrattrl *) GET_NEXT(psatl->al_link);\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\t/* didn't recognize the name */\n\t\t\tresv_free(presv);\n\t\t\treply_badattr(PBSE_NOATTR, 1, psatl, preq);\n\t\t\treturn;\n\t\t}\n\t\tpdef = &resv_attr_def[index];\n\n\t\t/* Do the attribute's definition flags indicate that\n\t\t * we have sufficient permission to write the attribute?\n\t\t */\n\n\t\tresc_access_perm = resc_access_perm_save; /* reset */\n\t\tif ((psatl->al_flags & ATR_VFLAG_HOOK) || qmove_requested) {\n\t\t\tresc_access_perm = ATR_DFLAG_USWR |\n\t\t\t\t\t   ATR_DFLAG_OPWR |\n\t\t\t\t\t   ATR_DFLAG_MGWR 
|\n\t\t\t\t\t   ATR_DFLAG_SvWR |\n\t\t\t\t\t   ATR_DFLAG_Creat;\n\t\t}\n\t\tif ((pdef->at_flags & resc_access_perm) == 0) {\n\t\t\tresv_free(presv);\n\t\t\treply_badattr(PBSE_ATTRRO, 1, psatl, preq);\n\t\t\treturn;\n\t\t}\n\n\t\t/* decode attribute */\n\n\t\trc = set_rattr_generic(presv, index, psatl->al_value, psatl->al_resc, INTERNAL);\n\n\t\tif (rc != 0) {\n\t\t\tresv_free(presv);\n\t\t\treply_badattr(rc, 1, psatl, preq);\n\t\t\treturn;\n\t\t}\n\n\t\tif (!(strcasecmp(psatl->al_name, ATTR_resv_job))) {\n\t\t\tif ((pjob = find_job(psatl->al_value)) == NULL) {\n\t\t\t\treq_reject(PBSE_UNKJOBID, 0, preq);\n\t\t\t\tresv_free(presv);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tif (pjob->ji_myResv) {\n\t\t\t\treq_reject(PBSE_RESV_FROM_RESVJOB, 0, preq);\n\t\t\t\tresv_free(presv);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tif (is_job_array(psatl->al_value) != IS_ARRAY_NO) {\n\t\t\t\treq_reject(PBSE_RESV_FROM_ARRJOB, 0, preq);\n\t\t\t\tresv_free(presv);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tget_jobowner(get_jattr_str(pjob, JOB_ATR_job_owner), owner);\n\t\t\tif (strcmp(preq->rq_user, owner)) {\n\t\t\t\treq_reject(PBSE_PERM, 0, preq);\n\t\t\t\tresv_free(presv);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\trc2 = copy_params_from_job(psatl->al_value, presv);\n\t\t\tif (rc2) {\n\t\t\t\treq_reject(rc2, 0, preq);\n\t\t\t\tresv_free(presv);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tif (is_qattr_set(pjob->ji_qhdr, QA_ATR_partition))\n\t\t\t\tpartition_name = get_qattr_str(pjob->ji_qhdr, QA_ATR_partition);\n\t\t\telse\n\t\t\t\tpartition_name = DEFAULT_PARTITION;\n\t\t\tis_resv_from_job = 1;\n\t\t}\n\n\t\tpsatl = (svrattrl *) GET_NEXT(psatl->al_link);\n\t}\n\tresc_access_perm = resc_access_perm_save; /* restore perm */\n\n\t/* invoke any defined attribute at_action routines */\n\n\tfor (i = 0; i < RESV_ATR_LAST; ++i) {\n\t\tpdef = &resv_attr_def[i];\n\t\tif (is_rattr_set(presv, i) && (pdef->at_action)) {\n\t\t\trc = pdef->at_action(get_rattr(presv, i), presv, ATR_ACTION_NEW);\n\t\t\tif (rc) 
{\n\t\t\t\tresv_free(presv);\n\t\t\t\treq_reject(rc, i, preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t}\n\n\t/*\"start\", \"end\",\"duration\", and \"wall\"; derive and check*/\n\n\tif (start_end_dur_wall(presv)) {\n\t\tresv_free(presv);\n\t\treq_reject(PBSE_BADTSPEC, 0, preq);\n\t\treturn;\n\t}\n\n\t/* If standing reservation check the recurrence rule\n\t * and possibly change the queue and reservation id to start with\n\t * 'S' instead of 'R'\n\t */\n\tif (get_rattr_long(presv, RESV_ATR_resv_standing)) {\n\t\tint resv_count;\n\n\t\t/* Check the recurrence rule. If this fails, an error message\n\t\t * is sent back to the requestor. Otherwise, check the number\n\t\t * of occurrences requested by the recurrence rule. If 1 then\n\t\t * it is treated as an advance reservation.\n\t\t */\n\t\tresv_count = check_rrule(\n\t\t\tget_rattr_str(presv, RESV_ATR_resv_rrule),\n\t\t\tget_rattr_long(presv, RESV_ATR_start),\n\t\t\tget_rattr_long(presv, RESV_ATR_end),\n\t\t\tget_rattr_str(presv, RESV_ATR_resv_timezone),\n\t\t\t&rc);\n\n\t\t/* rc is set by check_rrule to report any possible icalendar\n\t\t * syntax or time problem\n\t\t */\n\t\tif (rc != 0) {\n\t\t\tresv_free(presv);\n\t\t\treq_reject(rc, 0, preq);\n\t\t\treturn;\n\t\t}\n\n\t\tset_rattr_l_slim(presv, RESV_ATR_resv_count, resv_count, SET);\n\n\t\t/* If more than 1 occurrence are requested then alter the\n\t\t * reservation and queue first character\n\t\t */\n\t\tif (resv_count > 1) {\n\t\t\trid[0] = PBS_STDNG_RESV_ID_CHAR;\n\t\t\tqbuf[0] = PBS_STDNG_RESV_ID_CHAR;\n\t\t} else /* If only 1 occurrence, treat it as an advance reservation */\n\t\t\tset_rattr_l_slim(presv, RESV_ATR_resv_standing, 0, SET);\n\t} else\n\t\tset_rattr_l_slim(presv, RESV_ATR_resv_count, 1, SET);\n\n\tif (is_maintenance)\n\t\trid[0] = qbuf[0] = PBS_MNTNC_RESV_ID_CHAR;\n\n\t(void) strcpy(presv->ri_qs.ri_resvID, rid);\n\tif (created_here) {\n\t\tpresv->ri_qs.ri_svrflags = created_here;\n\t}\n\n\t/*\n\t * for resources that are not specified in the 
request and\n\t * for which default values can be determined, set these values\n\t * as the values for those resources\n\t */\n\n\tif (!is_resv_from_job && ((rc = set_resc_deflt((void *) presv, RESC_RESV_OBJECT, NULL)) != 0)) {\n\t\tresv_free(presv);\n\t\treq_reject(rc, 0, preq);\n\t\treturn;\n\t}\n\n\t/*\n\t * Now that the attributes have been decoded, setup some\n\t * additional parameters and perform a few more checks.\n\t */\n\n\t/* set some items based on who created the reservation... */\n\n\tif (created_here) {\n\t\t/* reservation got created by this server */\n\n\t\t/* ck priority value - in future, reservations\n\t\t * may support the notion of priority\n\t\t */\n\n\t\tif (is_rattr_set(presv, RESV_ATR_priority)) {\n\t\t\tif (get_rattr_long(presv, RESV_ATR_priority) < -1024 || get_rattr_long(presv, RESV_ATR_priority) > 1024) {\n\t\t\t\tresv_free(presv);\n\t\t\t\treq_reject(PBSE_BADATVAL, 0, preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\n\t\tif (conn) {\n\t\t\tstrcpy(buf, conn->cn_physhost);\n\t\t\tset_rattr_str_slim(presv, RESV_ATR_submit_host, buf, NULL);\n\t\t}\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\t\t/* save gssapi/krb5 creds for this reservation */\n\t\tif (conn && conn->cn_credid != NULL &&\n\t\t\tconn->cn_auth_config != NULL &&\n\t\t\tconn->cn_auth_config->auth_method != NULL && strcmp(conn->cn_auth_config->auth_method, AUTH_GSS_NAME) == 0) {\n\t\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_SERVER, LOG_DEBUG, __func__,\n\t\t\t\t\"saving creds.  
conn is %d, cred id %s\", preq->rq_conn, conn->cn_credid);\n\n\t\t\tset_rattr_str_slim(presv, RESV_ATR_cred_id, conn->cn_credid, NULL);\n\t\t}\n#endif\n\t\t/* set reservation name to \"NULL\" if not specified by user */\n\n\t\tif (!is_rattr_set(presv, RESV_ATR_resv_name))\n\t\t\tset_rattr_str_slim(presv, RESV_ATR_resv_name, \"NULL\", NULL);\n\n\t\tif (!is_resv_from_job) {\n\t\t\t/* set reservation owner attribute to user@host */\n\t\t\t(void) strcpy(buf, preq->rq_user);\n\t\t\t(void) strcat(buf, \"@\");\n\t\t\t(void) strcat(buf, preq->rq_host);\n\t\t\tset_rattr_str_slim(presv, RESV_ATR_resv_owner, buf, NULL);\n\t\t}\n\n\t\t/* make sure owner is in reservation's Authorized_Users */\n\t\tif (act_resv_add_owner(get_rattr(presv, RESV_ATR_auth_u), presv, ATR_ACTION_NEW)) {\n\t\t\tresv_free(presv);\n\t\t\treq_reject(PBSE_BADATVAL, 0, preq);\n\t\t\treturn;\n\t\t}\n\n\t\t/* set create time */\n\t\tset_rattr_l_slim(presv, RESV_ATR_ctime, (long) time_now, SET);\n\t\t/* set hop count = 1 */\n\t\tset_rattr_l_slim(presv, RESV_ATR_hopcount, 1, SET);\n\n\t} else {\n\t\t/* reservation created elsewhere and being moved here */\n\t\tlong hop;\n\n\t\t/* make sure resv_owner is set, ERROR IF NOT */\n\t\tif (!is_rattr_set(presv, RESV_ATR_resv_owner)) {\n\t\t\tresv_purge(presv);\n\t\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\t\treturn;\n\t\t}\n\n\t\t/* increment hop count */\n\t\thop = get_rattr_long(presv, RESV_ATR_hopcount);\n\t\tif (++hop > PBS_MAX_HOPCOUNT) {\n\t\t\tresv_purge(presv);\n\t\t\treq_reject(PBSE_HOPCOUNT, 0, preq);\n\t\t\treturn;\n\t\t} else\n\t\t\tset_rattr_l_slim(presv, RESV_ATR_hopcount, hop, SET);\n\t}\n\n\t/* determine values for the \"euser\" and \"egroup\" attributes */\n\n\tif ((rc = set_objexid((void *) presv, RESC_RESV_OBJECT, presv->ri_wattr))) {\n\t\tresv_free(presv);\n\t\treq_reject(rc, 0, preq);\n\t\treturn;\n\t}\n\n\t/*\n\t * Are reservation submissions being controlled by a group ACL?\n\t * If yes, check if this one is allowed or denied\n\t */\n\n\tif 
(is_sattr_set(SVR_ATR_acl_ResvGroup_enable) && get_sattr_long(SVR_ATR_acl_ResvGroup_enable)) {\n\n\t\tif (acl_check(get_sattr(SVR_ATR_acl_ResvGroups), get_rattr_str(presv, RESV_ATR_euser), ACL_Group) == 0) {\n\t\t\tresv_free(presv);\n\t\t\treq_reject(PBSE_RESVAUTH_G, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n\n\t/* Is this user allowed to submit a reservation? */\n\n\tif (is_sattr_set(SVR_ATR_AclResvUserEnabled) && get_sattr_long(SVR_ATR_AclResvUserEnabled)) {\n\t\tif (preq->rq_host[0])\n\t\t\tsnprintf(buf1, sizeof(buf1), \"%s@%s\", get_rattr_str(presv, RESV_ATR_euser), preq->rq_host);\n\n\t\tif (acl_check(get_sattr(SVR_ATR_AclResvUsers), buf1, ACL_User) == 0) {\n\t\t\tresv_free(presv);\n\t\t\treq_reject(PBSE_RESVAUTH_U, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n\n\t/* set up at_server attribute for status */\n\tset_rattr_str_slim(presv, RESV_ATR_at_server, server_name, NULL);\n\n\t/* set what will be the name of the reservation's associated queue */\n\tset_rattr_str_slim(presv, RESV_ATR_queue, qbuf, NULL);\n\n\t/*\n\t * Now that the resc_resv structure exists and has been set up,\n\t * try to acquire and set up a pbs_queue into which jobs submitted\n\t * to the reservation get placed - actually, right now, the user\n\t * directly does a \"qsub\" to this created queue but, at some point\n\t * it's conceivable that the interface to the user might change for\n\t * submitting jobs to reservations and the user would specify the\n\t * reservation ID string instead of the queue\n\t */\n\n\tif ((rc = get_queue_for_reservation(presv)) != 0) {\n\t\t/* couldn't acquire the queue */\n\n\t\tif ((pc = pbse_to_txt(rc)) != 0)\n\t\t\tlog_event(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_INFO,\n\t\t\t\t  presv->ri_qs.ri_resvID, pc);\n\n\t\tlog_event(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_INFO,\n\t\t\t  presv->ri_qs.ri_resvID,\n\t\t\t  \"error - reservation being deleted\");\n\n\t\tresv_free(presv);\n\n\t\t/* Single out duplicate list entries to inform end-user about\n\t\t * erroneous input. 
Other errors are internal and will fall under\n\t\t * a generic \"reservation failure\" message.\n\t\t */\n\t\tif (rc != PBSE_DUPLIST)\n\t\t\trc = PBSE_resvFail;\n\n\t\treq_reject(rc, 0, preq);\n\t\treturn;\n\t}\n\n\t/* set remaining resc_resv structure elements */\n\tpresv->ri_qs.ri_state = RESV_UNCONFIRMED;\n\tpresv->ri_qs.ri_substate = RESV_UNCONFIRMED;\n\n\tset_rattr_l_slim(presv, RESV_ATR_state, RESV_UNCONFIRMED, SET);\n\tset_rattr_l_slim(presv, RESV_ATR_substate, RESV_UNCONFIRMED, SET);\n\tset_rattr_l_slim(presv, RESV_ATR_mtime, (long) time_now, SET);\n\n\tif (is_rattr_set(presv, RESV_ATR_convert) && !is_rattr_set(presv, RESV_ATR_del_idle_time))\n\t\tset_rattr_l_slim(presv, RESV_ATR_del_idle_time, RESV_ASAP_IDLE_TIME, SET);\n\n\t/* save resv and server structure */\n\tif (resv_save_db(presv)) {\n\t\t(void) resv_purge(presv);\n\t\treq_reject(PBSE_SAVE_ERR, 0, preq);\n\t\treturn;\n\t}\n\n\t/* If not a standing reservation, put onto the \"timed task\" list a task\n\t * that causes\tdeletion of the reservation if the window passes\n\t */\n\tif (!get_rattr_long(presv, RESV_ATR_resv_standing)) {\n\t\tif (get_rattr_long(presv, RESV_ATR_start) != PBS_RESV_FUTURE_SCH) {\n\t\t\tif (gen_task_EndResvWindow(presv)) {\n\t\t\t\tresv_purge(presv);\n\t\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t}\n\n\t/* link reservation into server's reservation list */\n\tappend_link(&svr_allresvs, &presv->ri_allresvs, presv);\n\tif ((is_resv_from_job) && (confirm_resv_locally(presv, preq, partition_name))) {\n\t\tresv_purge(presv);\n\t\treq_reject(PBSE_resvFail, 0, preq);\n\t\treturn;\n\t}\n\n\t/* acknowledge the request with the reservation id\n\t * Remark: for reply we can use the function used for jobs\n\t */\n\n\tif (is_rattr_set(presv, RESV_ATR_interactive) == 0) {\n\t\t/*Not \"interactive\" so don't wait on scheduler, reply now*/\n\n\t\tif (is_resv_from_job)\n\t\t\tsnprintf(buf, sizeof(buf), \"%s CONFIRMED\", 
presv->ri_qs.ri_resvID);\n\t\telse\n\t\t\tsnprintf(buf, sizeof(buf), \"%s UNCONFIRMED\", presv->ri_qs.ri_resvID);\n\t\tif (get_rattr_long(presv, RESV_ATR_resv_standing))\n\t\t\tsnprintf(buf1, sizeof(buf1), \"requestor=%s@%s recurrence_rrule=%s timezone=%s\",\n\t\t\t\t preq->rq_user, preq->rq_host,\n\t\t\t\t get_rattr_str(presv, RESV_ATR_resv_rrule),\n\t\t\t\t get_rattr_str(presv, RESV_ATR_resv_timezone));\n\t\telse\n\t\t\tsnprintf(buf1, sizeof(buf1), \"requestor=%s@%s\",\n\t\t\t\t preq->rq_user, preq->rq_host);\n\n\t\tif ((rc = reply_text(preq, PBSE_NONE, buf))) {\n\t\t\t/* reply failed,  close connection; purge resv */\n\t\t\tclose_client(sock);\n\t\t\tresv_purge(presv);\n\t\t\treturn;\n\t\t}\n\t\tif (!is_resv_from_job)\n\t\t\taccount_recordResv(PBS_ACCT_UR, presv, buf1);\n\t} else {\n\t\t/*Don't reply back until scheduler decides*/\n\t\tlong dt;\n\t\tpresv->ri_brp = preq;\n\t\tdt = get_rattr_long(presv, RESV_ATR_interactive);\n\t\tif (dt >= 0) {\n\t\t\t/*reply with id and state no decision in +dt secs*/\n\t\t\t(void) gen_future_reply(presv, dt);\n\t\t} else {\n\t\t\t/*no decision in -dt seconds, delete with msg*/\n\t\t\t(void) gen_negI_deleteResv(presv, -dt);\n\t\t}\n\t\tif (get_rattr_long(presv, RESV_ATR_resv_standing))\n\t\t\tsnprintf(buf, sizeof(buf), \"requestor=%s@%s Interactive=%ld recurrence_rrule=%s timezone=%s\",\n\t\t\t\t preq->rq_user, preq->rq_host, dt,\n\t\t\t\t get_rattr_str(presv, RESV_ATR_resv_rrule),\n\t\t\t\t get_rattr_str(presv, RESV_ATR_resv_timezone));\n\t\telse\n\t\t\tsnprintf(buf, sizeof(buf), \"requestor=%s@%s Interactive=%ld\",\n\t\t\t\t preq->rq_user, preq->rq_host, dt);\n\t\taccount_recordResv(PBS_ACCT_UR, presv, buf);\n\t}\n\n\t{\n\t\tlong dt = get_rattr_long(presv, RESV_ATR_start);\n\t\tstrftime(tbuf1, sizeof(tbuf1), fmt, localtime((time_t *) &dt));\n\t\tdt = get_rattr_long(presv, RESV_ATR_end);\n\t\tstrftime(tbuf2, sizeof(tbuf2), fmt, localtime((time_t *) &dt));\n\t}\n\n\tif (!get_rattr_long(presv, RESV_ATR_resv_standing)) 
{\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"New reservation submitted start=%s end=%s\", tbuf1, tbuf2);\n\t} else {\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"New reservation submitted start=%s end=%s \"\n\t\t\t\t\t\t\t \"recurrence_rrule=%s timezone=%s\",\n\t\t\t tbuf1, tbuf2,\n\t\t\t get_rattr_str(presv, RESV_ATR_resv_rrule),\n\t\t\t get_rattr_str(presv, RESV_ATR_resv_timezone));\n\t}\n\tlog_event(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_INFO,\n\t\t  presv->ri_qs.ri_resvID, log_buffer);\n\n\t/* let the scheduler know that something new\n\t * is available for consideration\n\t */\n\tif (!is_maintenance && !is_resv_from_job)\n\t\tnotify_scheds_about_resv(SCH_SCHEDULE_NEW, presv);\n}\n\nstatic struct dont_set_in_max {\n\tchar *ds_name;\t    /* resource name */\n\tresource *ds_rescp; /* ptr to resource entry */\n} dont_set_in_max[] = {\n\t{\"nodes\", NULL},\n\t{\"nodect\", NULL},\n\t{\"select\", NULL},\n\t{\"place\", NULL},\n\t{\"walltime\", NULL}};\n\n/**\n * @brief\n * \t\tcreate a queue to bind to the reservation\n *\n * @par Functionality:\n * \t\tget_queue_for_reservation - call this function to create and set up\n *\t\ta queue that's associated with a general resources reservation.\n *\n *\t\tAn internally generated request is built up and issued to the\n *\t\t\"batch manager\" subsystem to create a queue having the desired\n *\t\tqueue attributes\n *\n * @param[in]\tpresv\t-\tThe reservation to which a queue is to be associated\n *\n * @par\n * \t\tNote that the queue is created by issuing a request to \"ourselves\"\n * \t\t(the server) and that this request is fulfilled asynchronously. 
The queue\n * \t\tmay fail to be created and cause the reservation to be queue-less.\n *\n * @return\terror code\n * @retval\t0\t- success\n * @retval\tPBS error code - error\n *\n * @par MT-safe: No\n */\nstatic int\nget_queue_for_reservation(resc_resv *presv)\n{\n\tint i;\n\tint j;\n\tint rc = 0;\n\tsvrattrl *psatl;\n\tattribute *pattr;\n\tstatic const int lenF = 6;  /*strlen(\"False\") + 1*/\n\tstatic const int lenT = 5;  /*strlen(\"True\") + 1*/\n\tstatic const int lenE = 10; /*strlen(\"Execution\") + 1*/\n\tpbs_list_head *plhed;\n\tstruct work_task *pwt;\n\tstruct batch_request *newreq;\n\n\tnewreq = alloc_br(PBS_BATCH_Manager);\n\tif (newreq == NULL) {\n\t\t(void) sprintf(log_buffer, \"batch request allocation failed\");\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_RESV, LOG_ERR,\n\t\t\t  presv->ri_qs.ri_resvID, log_buffer);\n\t\treturn (PBSE_SYSTEM);\n\t}\n\n\tnewreq->rq_ind.rq_manager.rq_cmd = MGR_CMD_CREATE;\n\tnewreq->rq_ind.rq_manager.rq_objtype = MGR_OBJ_QUEUE;\n\tnewreq->rq_perm = ATR_DFLAG_MGWR | ATR_DFLAG_OPWR;\n\t(void) strcpy(newreq->rq_user, \"pbs_server\");\n\t(void) strcpy(newreq->rq_host, server_name);\n\n\tstrcpy(newreq->rq_ind.rq_manager.rq_objname, get_rattr_str(presv, RESV_ATR_queue));\n\n\tpattr = get_rattr(presv, RESV_ATR_resource);\n\tCLEAR_HEAD(newreq->rq_ind.rq_manager.rq_attr);\n\tplhed = &newreq->rq_ind.rq_manager.rq_attr;\n\n\t/* resources specified by the reservation become \"resc_avail\" for queue\n\t * Have already (at_action processing for RESV_ATR_resource) checked\n\t * that server has control of resources needed, in sufficient quantity\n\t *\n\t * Note: \"resc_avail\" and \"resc_max\" attributes on the queue should\n\t * not include certain (string) resources from the reservation's\n\t * \"resource_list\" attribute as specified in the array dont_set_in_max;\n\t * so, that is why those links are deleted from the attribute\n\t * and then appended back a few lines later\n\t */\n\n\tj = sizeof(dont_set_in_max) / sizeof(struct 
dont_set_in_max);\n\tfor (i = 0; i < j; ++i) {\n\t\tresource_def *prdef;\n\t\tprdef = find_resc_def(svr_resc_def, dont_set_in_max[i].ds_name);\n\t\tdont_set_in_max[i].ds_rescp = find_resc_entry(pattr, prdef);\n\t\tif (dont_set_in_max[i].ds_rescp)\n\t\t\tdelete_link(&dont_set_in_max[i].ds_rescp->rs_link);\n\t}\n\n\trc = resv_attr_def[RESV_ATR_resource].at_encode(pattr, plhed,\n\t\t\t\t\t\t\tATTR_rescavail, NULL,\n\t\t\t\t\t\t\tATR_ENCODE_CLIENT, NULL);\n\n\trc += resv_attr_def[RESV_ATR_resource].at_encode(pattr, plhed,\n\t\t\t\t\t\t\t ATTR_rescmax, NULL,\n\t\t\t\t\t\t\t ATR_ENCODE_CLIENT, NULL);\n\n\tfor (i = 0; i < j; ++i) {\n\t\tif (dont_set_in_max[i].ds_rescp)\n\t\t\tappend_link(\n\t\t\t\t&pattr->at_val.at_list,\n\t\t\t\t&dont_set_in_max[i].ds_rescp->rs_link,\n\t\t\t\tdont_set_in_max[i].ds_rescp);\n\t}\n\tif (rc < 0) {\n\t\tfree_br(newreq);\n\t\treturn (PBSE_genBatchReq);\n\t}\n\n\tif ((psatl = attrlist_create(ATTR_qtype, NULL, lenE)) != NULL) {\n\t\tstatic char Execution[] = \"Execution\";\n\n\t\tpsatl->al_flags = que_attr_def[QA_ATR_QType].at_flags;\n\t\tstrcpy(psatl->al_value, Execution);\n\t\tappend_link(plhed, &psatl->al_link, psatl);\n\t} else {\n\t\tfree_br(newreq);\n\t\treturn (PBSE_genBatchReq);\n\t}\n\n\t/* Don't enable queue until reservation is RESV_CONFIRMED */\n\tif ((psatl = attrlist_create(ATTR_enable, NULL, lenF)) !=\n\t    NULL) {\n\t\tpsatl->al_flags = que_attr_def[QA_ATR_Enabled].at_flags;\n\t\tstrcpy(psatl->al_value, ATR_FALSE);\n\t\tappend_link(plhed, &psatl->al_link, psatl);\n\t} else {\n\t\tfree_br(newreq);\n\t\treturn (PBSE_genBatchReq);\n\t}\n\n\t/* Don't start queue until reservation window here, RESV_TIME_TO_RUN */\n\tif ((psatl = attrlist_create(ATTR_start, NULL, lenF)) != NULL) {\n\t\tpsatl->al_flags = que_attr_def[QA_ATR_Started].at_flags;\n\t\tstrcpy(psatl->al_value, ATR_FALSE);\n\t\tappend_link(plhed, &psatl->al_link, psatl);\n\t} else {\n\t\tfree_br(newreq);\n\t\treturn (PBSE_genBatchReq);\n\t}\n\n\t/* Generate a \"user_acl\" for 
PBS_BATCH_manager req, use\n\t * reservation's \"Authorized_Users\" attribute\n\t * Remark: \"Authorized_Users\" has, at least, the reservation's owner\n\t */\n\n\tif (is_rattr_set(presv, RESV_ATR_auth_u)) {\n\t\tpattr = get_rattr(presv, RESV_ATR_auth_u);\n\t\tplhed = &newreq->rq_ind.rq_manager.rq_attr;\n\t\trc = check_duplicates(pattr->at_val.at_arst);\n\t\tif (rc == 1) {\n\t\t\tfree_br(newreq);\n\t\t\treturn (PBSE_DUPLIST);\n\t\t}\n\t\trc = resv_attr_def[RESV_ATR_auth_u].at_encode(pattr, plhed,\n\t\t\t\t\t\t\t      ATTR_acluser, NULL,\n\t\t\t\t\t\t\t      ATR_ENCODE_SVR, NULL);\n\t\tif (rc < 0) {\n\t\t\tfree_br(newreq);\n\t\t\treturn (PBSE_genBatchReq);\n\t\t}\n\n\t\t/*let the queue know user acl is to be enforced*/\n\t\tif ((psatl = attrlist_create(ATTR_acluren,\n\t\t\t\t\t     NULL, lenT)) != NULL) {\n\t\t\tpsatl->al_flags = que_attr_def[QA_ATR_AclUserEnabled].at_flags;\n\t\t\tstrcpy(psatl->al_value, ATR_TRUE);\n\t\t\tappend_link(plhed, &psatl->al_link, psatl);\n\t\t} else {\n\t\t\tfree_br(newreq);\n\t\t\treturn (PBSE_genBatchReq);\n\t\t}\n\t}\n\n\t/* Generate a \"group_acl\" for PBS_BATCH_manager req, use\n\t * reservation's \"Authorized_Groups\" attribute\n\t */\n\n\tif (is_rattr_set(presv, RESV_ATR_auth_g)) {\n\t\tpattr = get_rattr(presv, RESV_ATR_auth_g);\n\t\tplhed = &newreq->rq_ind.rq_manager.rq_attr;\n\t\trc = check_duplicates(pattr->at_val.at_arst);\n\t\tif (rc == 1) {\n\t\t\tfree_br(newreq);\n\t\t\treturn (PBSE_DUPLIST);\n\t\t}\n\t\trc = resv_attr_def[RESV_ATR_auth_g].at_encode(pattr, plhed,\n\t\t\t\t\t\t\t      ATTR_aclgroup, NULL,\n\t\t\t\t\t\t\t      ATR_ENCODE_SVR, NULL);\n\t\tif (rc < 0) {\n\t\t\tfree_br(newreq);\n\t\t\treturn (PBSE_genBatchReq);\n\t\t}\n\n\t\t/*let the queue know group acl is to be enforced*/\n\t\tif ((psatl = attrlist_create(ATTR_aclgren,\n\t\t\t\t\t     NULL, lenT)) != NULL) {\n\t\t\tpsatl->al_flags = que_attr_def[QE_ATR_AclGroupEnabled].at_flags;\n\t\t\tstrcpy(psatl->al_value, ATR_TRUE);\n\t\t\tappend_link(plhed, &psatl->al_link, 
psatl);\n\t\t} else {\n\t\t\tfree_br(newreq);\n\t\t\treturn (PBSE_genBatchReq);\n\t\t}\n\t}\n\n\t/* Generate a \"host_acl\" for PBS_BATCH_manager req, use\n\t * reservation's \"Authorized_Hosts\" attribute\n\t */\n\n\tif (is_rattr_set(presv, RESV_ATR_auth_h)) {\n\t\tpattr = get_rattr(presv, RESV_ATR_auth_h);\n\t\tplhed = &newreq->rq_ind.rq_manager.rq_attr;\n\t\trc = check_duplicates(pattr->at_val.at_arst);\n\t\tif (rc == 1) {\n\t\t\tfree_br(newreq);\n\t\t\treturn (PBSE_DUPLIST);\n\t\t}\n\t\trc = resv_attr_def[RESV_ATR_auth_h].at_encode(pattr, plhed,\n\t\t\t\t\t\t\t      ATTR_aclhost, NULL,\n\t\t\t\t\t\t\t      ATR_ENCODE_SVR, NULL);\n\t\tif (rc < 0) {\n\t\t\tfree_br(newreq);\n\t\t\treturn (PBSE_genBatchReq);\n\t\t}\n\n\t\t/*let the queue know host acl is to be enforced*/\n\t\tif ((psatl = attrlist_create(ATTR_aclhten,\n\t\t\t\t\t     NULL, lenT)) != NULL) {\n\t\t\tpsatl->al_flags = que_attr_def[QA_ATR_AclHostEnabled].at_flags;\n\t\t\tstrcpy(psatl->al_value, ATR_TRUE);\n\t\t\tappend_link(plhed, &psatl->al_link, psatl);\n\t\t} else {\n\t\t\tfree_br(newreq);\n\t\t\treturn (PBSE_genBatchReq);\n\t\t}\n\t}\n\n\t/* Ok, everything is successfully built up, issue the Batch_Request */\n\n\tif (issue_Drequest(PBS_LOCAL_CONNECTION, newreq,\n\t\t\t   handle_qmgr_reply_to_resvQcreate, &pwt, 0) == -1) {\n\t\tfree_br(newreq);\n\n\t\t(void) sprintf(log_buffer, \"%s\", msg_resvQcreateFail);\n\t\tlog_event(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_ERR,\n\t\t\t  presv->ri_qs.ri_resvID, log_buffer);\n\n\t\treturn (PBSE_mgrBatchReq);\n\t}\n\tif (pwt)\n\t\tpwt->wt_parm2 = presv; /*needed to handle qmgr's response*/\n\n\treturn (0);\n}\n/**\n * @brief\n * \t\tignore_attr\t- wrapper function for find_attr.\n *\n * @param[in]\tname\t-\tattribute name to find\n *\n * @return\treturn value from find_attr()\n */\nstatic int\nignore_attr(char *name)\n{\n\treturn (find_attr(job_attr_idx, job_attr_def, name));\n}\n\n/**\n * @brief\n * \t\tact_resv_add_owner - This is a special action function\n 
*\t\tfor a reservation's  \"Authorized_Users\" attribute - i.e. who is\n *\t\tallowed to submit jobs to the reservation.\n *\n * @param[in]\tpattr\t-\tnot used here\n * @param[in]\tpobj\t-\treservation structure\n * @param[in]\tamode\t-\t\"actmode\" stands for the type of action,\n * \t\t\t\t\t\t\tif ATR_ACTION_NEW, just returns from the function.\n *\n * @return\terror code\n * @retval\t0\t: Success\n * @retval\t!=0\t: fails\n */\n\nint\nact_resv_add_owner(attribute *pattr, void *pobj, int amode)\n{\n\tattribute dummy, *ap;\n\tstruct array_strings dumarst;\n\tenum batch_op op;\n\tresc_resv *presv;\n\tchar *ps;\n\tint len;\n\n\tif (amode != ATR_ACTION_NEW)\n\t\treturn (0); /*success - nothing to do*/\n\n\tpresv = (resc_resv *) pobj;\n\tif (is_rattr_set(presv, RESV_ATR_resv_owner) == 0)\n\t\treturn (0); /*success - nothing to do*/\n\n\tps = get_rattr_str(presv, RESV_ATR_resv_owner);\n\tlen = strlen(ps);\n\n\tap = get_rattr(presv, RESV_ATR_auth_u);\n\tif (is_attr_set(ap)) {\n\t\tint i;\n\n\t\tfor (i = 0; i < ap->at_val.at_arst->as_usedptr; ++i)\n\t\t\tif (!strcmp(ps, ap->at_val.at_arst->as_string[i]))\n\t\t\t\treturn (0); /*resv owner in Authorized_Users*/\n\t\top = INCR;\n\t} else\n\t\top = SET; /*Authorized_Users is NULL, must set*/\n\n\t(void) memset(&dummy, 0, sizeof(dummy));\n\tdummy.at_flags = NO_USER_SET | ATR_VFLAG_SET;\n\tdummy.at_type = ATR_TYPE_ACL;\n\tdummy.at_val.at_arst = &dumarst;\n\tdumarst.as_npointers = 1;\n\tdumarst.as_usedptr = 1;\n\tdumarst.as_bufsize = strlen(ps) + len;\n\tdumarst.as_buf = ps;\n\tdumarst.as_next = ps + len;\n\tdumarst.as_string[0] = ps;\n\n\t/*\"at_set\" function returns 0 on success and NZ on failure*/\n\t/*Remark: nice thing would be to have owner appear first  */\n\treturn (resv_attr_def[RESV_ATR_auth_u].at_set(ap, &dummy, op));\n}\n\n/**\n * @brief\n * \t\thandle_qmgr_reply_to_resvQcreate - this is the function that's to be\n *\t\tcalled to handle the qmgr's response to the earlier issued request\n *\t\tfor queue creation 
for a reservation.  If the response from the\n *\t\tqmgr is successful, copy the queue's name into the ri_queue field\n *\t\tof the reservation and set the reservation's ri_qp field pointing\n *\t\tto this newly established queue.  If not successful log a message.\n * @par Functionality:\n *\t\tThis function should only be called through an INTERNALLY GENERATED\n *\t\trequest to another server (including ourself).\n *\t\tIt frees the request structure and closes the connection (handle).\n * @par\n *\t\tIn the work task entry, wt_event is the connection handle and\n *\t\twt_parm1 is a pointer to the request structure (containing the reply).\n *\t\twt_parm2 should have the address of the reservation structure\n *\n * @note\n *\t\tTHIS SHOULD NOT BE USED IF AN EXTERNAL (CLIENT) REQUEST IS \"relayed\",\n *\t\tbecause the request/reply structure is still needed to reply back\n *\t\tto the client.\n *\n * @param[in,out]\tpwt\t-\tearlier issued request for queue creation for a reservation.\n */\n\nstatic void\nhandle_qmgr_reply_to_resvQcreate(struct work_task *pwt)\n{\n\tjob *pjob;\n\tpbs_queue *pque;\n\tresc_resv *presv = pwt->wt_parm2;\n\tstruct batch_request *preq = pwt->wt_parm1;\n\n\tif (preq->rq_reply.brp_code) {\n\n\t\t(void) sprintf(log_buffer, msg_resvQcreateFail,\n\t\t\t       presv->ri_jbp->ji_qs.ji_jobid,\n\t\t\t       presv->ri_qs.ri_resvID);\n\t\tlog_event(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_INFO,\n\t\t\t  presv->ri_qs.ri_resvID, log_buffer);\n\t} else {\n\t\tpque = find_queuebyname(preq->rq_ind.rq_manager.rq_objname);\n\t\tif ((presv->ri_qp = pque) != 0)\n\t\t\tpque->qu_resvp = presv;\n\t\tif ((pjob = find_job(get_rattr_str(presv, RESV_ATR_job))))\n\t\t\tpjob->ji_myResv = presv;\n\t\t(void) strcpy(presv->ri_qs.ri_queue, get_rattr_str(presv, RESV_ATR_queue));\n\t\tif (resv_save_db(presv)) {\n\t\t\t(void) resv_purge(presv);\n\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tfree_br((struct batch_request *) pwt->wt_parm1);\n\tif 
(pwt->wt_event != -1)\n\t\tsvr_disconnect(pwt->wt_event);\n}\n\n/**\n * @brief\n * \t\tValidate job and reservation place directives.\n *\n * @param[in]\tpj\t-\tThe job to validate.\n *\n * @note\n * \t\tThe reservation associated with the job is obtained\n * \t\tfrom the job structure.\n *\n * @return\tWhether the place directives of the job\n * \t\t\tand the reservation are in conflict or not.\n *\n * @retval\t1\t: job and reservation place directives do not conflict\n * @retval\t0\t: job and reservation place directives conflict\n *\n * @par MT-safe: No\n */\nstatic int\nvalidate_place_req_of_job_in_reservation(job *pj)\n{\n\tresource_def *prsdef;\n\tresource *job_place;\n\tresource *resv_place;\n\tattribute *jattr;\n\tattribute *rattr;\n\tenum vnode_sharing job_sharetype;\n\tenum vnode_sharing resv_sharetype;\n\n\t/* A job not in reservation is implicitly considered valid */\n\tif (pj->ji_myResv == NULL)\n\t\treturn 1;\n\n\tprsdef = &svr_resc_def[RESC_PLACE];\n\tjattr = get_jattr(pj, JOB_ATR_resource);\n\trattr = get_rattr(pj->ji_myResv, RESV_ATR_resource);\n\n\tjob_place = find_resc_entry(jattr, prsdef);\n\tif (!job_place || !job_place->rs_value.at_val.at_str)\n\t\treturn 1;\n\n\tresv_place = find_resc_entry(rattr, prsdef);\n\tif (!resv_place || !resv_place->rs_value.at_val.at_str)\n\t\treturn 1;\n\n\t/* Check for exclhost should come before excl because exclhost contains\n\t * the excl prefix\n\t */\n\tjob_sharetype = place_sharing_type(\n\t\tjob_place->rs_value.at_val.at_str,\n\t\tVNS_FORCE_EXCLHOST);\n\tif (job_sharetype == VNS_UNSET) {\n\t\tjob_sharetype = place_sharing_type(\n\t\t\tjob_place->rs_value.at_val.at_str,\n\t\t\tVNS_FORCE_EXCL);\n\t}\n\tresv_sharetype = place_sharing_type(\n\t\tresv_place->rs_value.at_val.at_str,\n\t\tVNS_FORCE_EXCLHOST);\n\tif (resv_sharetype == VNS_UNSET) {\n\t\tresv_sharetype = place_sharing_type(\n\t\t\tresv_place->rs_value.at_val.at_str,\n\t\t\tVNS_FORCE_EXCL);\n\t}\n\n\t/* Reject the request if job requests exclusive 
and reservation not */\n\tif ((resv_sharetype == VNS_UNSET) &&\n\t    (job_sharetype != VNS_UNSET))\n\t\treturn 0;\n\n\t/* Reject request if job requests exclhost and reservation doesn't */\n\tif ((resv_sharetype != VNS_FORCE_EXCLHOST) &&\n\t    (job_sharetype == VNS_FORCE_EXCLHOST))\n\t\treturn 0;\n\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tProvides the next job id\n *\n * @param[in] void\n *\n * @return\tlong long\n * @retval (>=0 to <= max_job_sequence_id): Success\n * @retval\t-1\t: database error\n *\n */\nlong long\nget_next_svr_sequence_id(void)\n{\n\tstatic long long lastid = -1;\n\tlong long seq = server.sv_qs.sv_jobidnumber;\n\n\t/* If server job limit is over, reset back to zero */\n\tif (++server.sv_qs.sv_jobidnumber > svr_max_job_sequence_id) {\n\t\tserver.sv_qs.sv_jobidnumber = 0;\n\t\tlastid = -1;\n\t}\n\n\t/* check if we should save jobid */\n\tif (lastid == -1 || server.sv_qs.sv_jobidnumber == lastid) {\n\t\tlastid = ((server.sv_qs.sv_jobidnumber / SEQ_WIN_INCR) + 1) * SEQ_WIN_INCR;\n\t\tserver.sv_qs.sv_lastid = lastid;\n\t\tsvr_save_db(&server);\n\t}\n\treturn seq;\n}\n\n/**\n * @brief\n * \t\t\tResets server sequence window count and server sv_jobidnumber\n *\n * @return None\n *\n */\nvoid\nreset_svr_sequence_window(void)\n{\n\tserver.sv_qs.sv_jobidnumber = 0;\n}\n\n/**\n * @brief - Copy parameters from job to reservation\n *\n * @param[in] - jobid - id of the job from which the parameters will be copied.\n * @param[in] - presv - reservation to which the parameters will be copied to.\n *\n * @return int\n * @retval 0: Success\n * @retval < 0: error\n */\nint\ncopy_params_from_job(char *jobid, resc_resv *presv)\n{\n\tjob *pjob;\n\tint bufsize;\n\tattribute temp;\n\tresource *presc;\n\tresource_def *prdefsl;\n\tresource_def *resc_def;\n\tresource *job_resc_entry;\n\tresource *resv_resc_entry;\n\tchar buf[PBS_MAXUSER + PBS_MAXHOSTNAME + 64] = {0};\n\n\tint walltime_copied = 0;\n\n\tpjob = find_job(jobid);\n\n\tif (pjob == NULL)\n\t\treturn 
PBSE_UNKJOBID;\n\n\tif ((!check_job_substate(pjob, JOB_SUBSTATE_RUNNING)) &&\n\t    (get_jattr_str(pjob, JOB_ATR_exec_vnode) == NULL))\n\t\treturn PBSE_BADSTATE;\n\n\tbufsize = PBS_MAXUSER + PBS_MAXHOSTNAME + 64 - 1;\n\n\tif (strchr(get_jattr_str(pjob, JOB_ATR_job_owner), '@')) {\n\t\tstrncpy(buf, get_jattr_str(pjob, JOB_ATR_job_owner), bufsize);\n\t} else\n\t\tsnprintf(buf, bufsize, \"%s@%s\", get_jattr_str(pjob, JOB_ATR_job_owner),\n\t\t\t get_jattr_str(pjob, JOB_ATR_submit_host));\n\n\tset_rattr_str_slim(presv, RESV_ATR_resv_owner, buf, NULL);\n\tset_rattr_str_slim(presv, RESV_ATR_resv_nodes, get_jattr_str(pjob, JOB_ATR_exec_vnode), NULL);\n\n\tif (is_jattr_set(pjob, JOB_ATR_stime))\n\t\tset_rattr_l_slim(presv, RESV_ATR_start, get_jattr_long(pjob, JOB_ATR_stime), SET);\n\telse\n\t\tset_rattr_l_slim(presv, RESV_ATR_start, time_now, SET);\n\n\tpost_attr_set(get_rattr(presv, RESV_ATR_SchedSelect));\n\n\tjob_resc_entry = (resource *) GET_NEXT(get_jattr_list(pjob, JOB_ATR_resource));\n\tfor (; job_resc_entry; job_resc_entry = (resource *) GET_NEXT(job_resc_entry->rs_link)) {\n\t\tresc_def = job_resc_entry->rs_defin;\n\t\tresv_resc_entry = find_resc_entry(get_rattr(presv, RESV_ATR_resource), resc_def);\n\t\tif (resv_resc_entry == NULL) {\n\t\t\tif (!(resv_resc_entry = add_resource_entry(get_rattr(presv, RESV_ATR_resource), resc_def)))\n\t\t\t\treturn PBSE_SYSTEM;\n\t\t}\n\t\tif (is_attr_set(&job_resc_entry->rs_value)) {\n\t\t\t(void) resc_def->rs_set(&resv_resc_entry->rs_value, &job_resc_entry->rs_value, SET);\n\t\t}\n\t\tif (!strcmp(resc_def->rs_name, WALLTIME))\n\t\t\twalltime_copied = 1;\n\t}\n\n\tif (!walltime_copied) {\n\t\tresc_def = &svr_resc_def[RESC_WALLTIME];\n\t\tif (resc_def != NULL) {\n\t\t\tresv_resc_entry = find_resc_entry(get_rattr(presv, RESV_ATR_resource), resc_def);\n\t\t\tif (resv_resc_entry == NULL) {\n\t\t\t\tif (!(resv_resc_entry = add_resource_entry(get_rattr(presv, RESV_ATR_resource), resc_def)))\n\t\t\t\t\treturn 
PBSE_SYSTEM;\n\t\t\t}\n\t\t\ttemp.at_flags = ATR_VFLAG_SET;\n\t\t\ttemp.at_type = ATR_TYPE_LONG;\n\t\t\ttemp.at_val.at_long = RESV_INFINITY;\n\t\t\t(void) resc_def->rs_set(&resv_resc_entry->rs_value, &temp, SET);\n\t\t}\n\t}\n\tprdefsl = &svr_resc_def[RESC_SELECT];\n\tpresc = find_resc_entry(get_jattr(pjob, JOB_ATR_resource), prdefsl);\n\tmake_schedselect(get_jattr(pjob, JOB_ATR_resource), presc, NULL, get_rattr(presv, RESV_ATR_SchedSelect));\n\n\treturn 0;\n}\n\n/**\n * @brief - Confirm reservation that is being created out of a job.\n *\n * @param[in] - presv - reservation that needs to be confirmed.\n * @param[in] - orig_preq - batch request.\n * @param[in] - partition_name - partition in which the reservation needs to be confirmed.\n *\n * @return int\n * @retval 0: Success\n * @retval != 0: error\n */\nint\nconfirm_resv_locally(resc_resv *presv, struct batch_request *orig_preq, char *partition_name)\n{\n\tchar *at;\n\tjob *pjob;\n\tstruct work_task *pwt;\n\tstruct batch_request *preq;\n\tint extend_size = 0;\n\n\tpresv->resv_from_job = 1;\n\tpreq = alloc_br(PBS_BATCH_ConfirmResv);\n\tpreq->rq_ind.rq_run.rq_destin = strdup(get_rattr_str(presv, RESV_ATR_resv_nodes));\n\tif (preq->rq_ind.rq_run.rq_destin == NULL) {\n\t\tfree_br(preq);\n\t\treturn 1;\n\t}\n\n\t/* extend field format is \"PBS_RESV_CONFIRM_SUCCESS:partition=<partition name>\"\n\t * allocate enough memory to be able to support the format.\n\t */\n\textend_size = strlen(PBS_RESV_CONFIRM_SUCCESS) + strlen(partition_name) + 12;\n\tpreq->rq_extend = malloc(extend_size);\n\tif (preq->rq_extend == NULL) {\n\t\tfree_br(preq);\n\t\treturn 1;\n\t}\n\tsnprintf(preq->rq_extend, extend_size, \"%s:partition=%s\", PBS_RESV_CONFIRM_SUCCESS, partition_name);\n\n\t(void) strcpy(preq->rq_ind.rq_run.rq_jid, presv->ri_qs.ri_resvID);\n\tpreq->rq_perm |= ATR_DFLAG_MGWR;\n\n\tif (issue_Drequest(PBS_LOCAL_CONNECTION, preq, release_req, &pwt, 0) == -1) {\n\t\tfree_br(preq);\n\t\treturn 1;\n\t}\n\n\tpreq = 
alloc_br(PBS_BATCH_MoveJob);\n\tpreq->rq_perm |= ATR_DFLAG_MGWR;\n\tstrcpy(preq->rq_user, orig_preq->rq_user);\n\tstrcpy(preq->rq_host, orig_preq->rq_host);\n\n\tpjob = find_job(get_rattr_str(presv, RESV_ATR_job));\n\n\tsnprintf(preq->rq_ind.rq_move.rq_jid, sizeof(preq->rq_ind.rq_move.rq_jid), \"%s\", get_rattr_str(presv, RESV_ATR_job));\n\tat = strchr(presv->ri_qs.ri_resvID, (int) '.');\n\tif (at)\n\t\t*at = '\\0';\n\n\tsnprintf(preq->rq_ind.rq_move.rq_destin, sizeof(preq->rq_ind.rq_move.rq_destin), \"%s\", presv->ri_qs.ri_resvID);\n\tif (at)\n\t\t*at = '.';\n\n\tsnprintf(pjob->ji_qs.ji_destin, PBS_MAXROUTEDEST, \"%s\", preq->rq_ind.rq_move.rq_destin);\n\treturn (local_move(pjob, preq));\n}\n\n#endif /*SERVER ONLY*/\n"
  },
  {
    "path": "src/server/req_register.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    req_register.c\n *\n * @brief\n * \t\treq_register.c\t-\tThis file hold all the functions dealing with job dependency\n *\n * Included functions are:\n * \tpost_run_depend()\n * \treq_register()\n * \tpost_doq()\n * \talter_unreg()\n * \tdepend_on_que()\n * \tpost_doe()\n * \tdepend_on_exec()\n * \tdepend_on_term()\n * \tset_depend_hold()\n * \tfind_depend()\n * \tmake_depend()\n * \tregister_dep()\n * \tunregister_dep()\n * \tfind_dependjob()\n * \tmake_dependjob()\n * \tsend_depend_req()\n * \tdecode_depend()\n * \tcpy_jobsvr()\n * \tdup_depend()\n * \tencode_depend()\n * \tset_depend()\n * \tcomp_depend()\n * \tfree_depend()\n * \tbuild_depend()\n * \tclear_depend()\n * \tdel_depend()\n * \tdel_depend_job()\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <sys/types.h>\n#include <stdlib.h>\n#include <ctype.h>\n#include <errno.h>\n#include <stdio.h>\n#include <string.h>\n#include <signal.h>\n#include \"libpbs.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"server_limits.h\"\n#include \"credential.h\"\n#include \"batch_request.h\"\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"queue.h\"\n#include \"server.h\"\n#include \"work_task.h\"\n#include \"pbs_error.h\"\n#include \"log.h\"\n#include \"pbs_nodes.h\"\n#include 
\"svrfunc.h\"\n#include \"net_connect.h\"\n\n/* External functions */\n\nextern int issue_to_svr(char *svr, struct batch_request *, void (*func)(struct work_task *));\n\n/* Local Private Functions */\n\nstatic void set_depend_hold(job *, attribute *);\nstatic int register_dep(attribute *, struct batch_request *, int, int *);\nstatic int unregister_dep(attribute *, struct batch_request *);\nstatic struct depend *make_depend(int type, attribute *pattr);\nstatic struct depend_job *make_dependjob(struct depend *, char *jobid, char *host);\nstatic void del_depend_job(struct depend_job *pdj);\nstatic int build_depend(attribute *, char *);\nstatic void clear_depend(struct depend *, int type, int exists);\nstatic void del_depend(struct depend *);\nstatic void update_depend(job *, char *, char *, int, int);\n\n/* External Global Data Items */\n\nextern struct server server;\nextern char server_name[];\nextern char *msg_unkjobid;\nextern char *msg_movejob;\nextern char *msg_err_malloc;\nextern char *msg_illregister;\nextern char *msg_registerdel;\nextern char *msg_registerrel;\nextern char *msg_regrej;\nextern char *msg_job_moved;\nextern char *msg_depend_runone;\nextern char *msg_histdepend;\nextern char *msg_historyjobid;\n\n#define DEPEND_ADD 1\n#define DEPEND_REMOVE 2\n/**\n * @brief\n * \t\tpost_run_depend - this function is called via a work task when a\n *\t\tregister dependency \"after\" is received for a job that is already\n *\t\trunning.  
We accept the dependency and then turn around and send a\n *\t\t\"release\" back to the newly registered job to remove its hold.\n *\n * param[in]\tpwt\t-\twork task\n */\nstatic void\npost_run_depend(struct work_task *pwt)\n{\n\tjob *pjob;\n\n\tpjob = (job *) pwt->wt_parm1;\n\tif (is_jattr_set(pjob, JOB_ATR_depend))\n\t\tdepend_on_exec(pjob);\n\treturn;\n}\n\n/**\n * @brief\tA function to add/remove dependency on a given job\n *\n * @param[in]\tpjob - Job on which we need perform the operation\n * @param[in]\td_jobid - The job id which needs to be added/removed\n * @param[in]\td_svr\n * @param[in]\top -\toperation to perform - DEPEND_ADD/DEPEND_REMOVE\n * @param[in]\ttype - Dependency type\n *\n * @return\tvoid\n */\nstatic void\nupdate_depend(job *pjob, char *d_jobid, char *d_svr, int op, int type)\n{\n\tjob *d_job;\n\tstruct depend *dp;\n\tstruct depend_job *dpj;\n\tattribute *pattr;\n\n\td_job = find_job(d_jobid);\n\tif (d_job == NULL)\n\t\treturn;\n\n\tpattr = get_jattr(pjob, JOB_ATR_depend);\n\tdp = find_depend(type, pattr);\n\tif (op == DEPEND_ADD) {\n\t\tif (dp == NULL) {\n\t\t\tdp = make_depend(JOB_DEPEND_TYPE_RUNONE, pattr);\n\t\t\tif (dp == NULL)\n\t\t\t\treturn;\n\t\t}\n\t\tdpj = find_dependjob(dp, d_jobid);\n\t\tif (dpj)\n\t\t\treturn; /* Job dependency already established */\n\t\tif (strcmp(pjob->ji_qs.ji_jobid, d_jobid)) {\n\t\t\tdpj = make_dependjob(dp, d_jobid, d_svr);\n\t\t\tpost_attr_set(pattr);\n\t\t\tjob_save(pjob);\n\t\t\t/* runone dependencies are circular */\n\t\t\tif (type == JOB_DEPEND_TYPE_RUNONE)\n\t\t\t\tupdate_depend(d_job, pjob->ji_qs.ji_jobid, d_svr, op, type);\n\t\t}\n\t\treturn;\n\t} else if (op == DEPEND_REMOVE) {\n\t\tif (dp == NULL)\n\t\t\treturn;\n\t\tdpj = find_dependjob(dp, d_jobid);\n\t\tif (dpj == NULL)\n\t\t\treturn;\n\t\tdel_depend_job(dpj);\n\t\tif (GET_NEXT(dp->dp_jobs) == 0)\n\t\t\t/* no more dependencies of this type */\n\t\t\tdel_depend(dp);\n\n\t\tpattr->at_flags |= ATR_MOD_MCACHE;\n\t\t/* runone dependencies are 
circular */\n\t\tif (type == JOB_DEPEND_TYPE_RUNONE)\n\t\t\tupdate_depend(d_job, pjob->ji_qs.ji_jobid, d_svr, op, type);\n\t\treturn;\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \t\treq_register - process the Register Dependency Request\n * @note\n *\t\tWe have an interesting problem here in that the request may well\n *\t\toriginate from ourself.  In that case we doen't really reply.\n *\n * @param[in]\tpreq\t-\tRegister Dependency Request.\n */\n\nvoid\nreq_register(struct batch_request *preq)\n{\n\tint made;\n\tattribute *pattr;\n\tstruct depend *pdep;\n\tstruct depend_job *pdj;\n\tjob *pjob;\n\tchar *ps;\n\tstruct work_task *ptask;\n\tint rc = 0;\n\tint revtype;\n\tint type;\n\tint is_finished = FALSE;\n\n\t/*  make sure request is from a server */\n\n\tif (!preq->rq_fromsvr) {\n\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\treturn;\n\t}\n\n\t/* find the \"parent\" job specified in the request */\n\n\tif ((pjob = find_job(preq->rq_ind.rq_register.rq_parent)) == NULL) {\n\n\t\t/*\n\t\t * job not found... 
if server is initializing, it may not\n\t\t * have recovered yet, that is not an error.\n\t\t */\n\n\t\tif (get_sattr_long(SVR_ATR_State) != SV_STATE_INIT) {\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  preq->rq_ind.rq_register.rq_parent,\n\t\t\t\t  msg_unkjobid);\n\t\t\treq_reject(PBSE_UNKJOBID, 0, preq);\n\t\t} else {\n\t\t\treply_ack(preq);\n\t\t}\n\t\treturn;\n\t}\n\n\tif (check_job_state(pjob, JOB_STATE_LTR_FINISHED))\n\t\tis_finished = TRUE;\n\n\tpattr = get_jattr(pjob, JOB_ATR_depend);\n\ttype = preq->rq_ind.rq_register.rq_dependtype;\n\n\t/* more of the server:port fix kludge */\n\n\tps = strchr(preq->rq_ind.rq_register.rq_child, (int) '@');\n\tif (ps != NULL) {\n\t\t(void) strcpy(preq->rq_ind.rq_register.rq_svr, ps + 1);\n\t\t*ps = '\\0';\n\t} else {\n\t\t(void) strcpy(preq->rq_ind.rq_register.rq_svr, preq->rq_host);\n\t}\n\n\tif (check_job_state(pjob, JOB_STATE_LTR_MOVED)) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"Parent %s%s\", msg_movejob, pjob->ji_qs.ji_destin);\n\t\tlog_event(PBSEVENT_DEBUG | PBSEVENT_SYSTEM | PBSEVENT_ERROR,\n\t\t\t  PBS_EVENTCLASS_REQUEST, LOG_INFO,\n\t\t\t  preq->rq_ind.rq_register.rq_child, log_buffer);\n\t\treq_reject(PBSE_JOB_MOVED, 0, preq);\n\t\treturn;\n\t}\n\tswitch (preq->rq_ind.rq_register.rq_op) {\n\n\t\t\t/*\n\t\t\t * Register a dependency\n\t\t\t */\n\n\t\tcase JOB_DEPEND_OP_REGISTER:\n\t\t\tswitch (type) {\n\n\t\t\t\tcase JOB_DEPEND_TYPE_AFTERSTART:\n\t\t\t\t\tif (is_finished == TRUE) {\n\t\t\t\t\t\trc = PBSE_HISTJOBID;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\tif (get_job_substate(pjob) >= JOB_SUBSTATE_RUNNING) {\n\t\t\t\t\t\t/* Job already running, set up a task to send\n\t\t\t\t\t\t * release back to child and continue with\n\t\t\t\t\t\t * registration process.\n\t\t\t\t\t\t */\n\t\t\t\t\t\tptask = set_task(WORK_Immed, 0, post_run_depend,\n\t\t\t\t\t\t\t\t (void *) pjob);\n\t\t\t\t\t\tif (ptask)\n\t\t\t\t\t\t\tappend_link(&pjob->ji_svrtask,\n\t\t\t\t\t\t\t\t    &ptask->wt_linkobj, 
ptask);\n\t\t\t\t\t}\n\t\t\t\t\t/* fall through to complete registration */\n\t\t\t\tcase JOB_DEPEND_TYPE_AFTERANY:\n\t\t\t\tcase JOB_DEPEND_TYPE_AFTEROK:\n\t\t\t\tcase JOB_DEPEND_TYPE_AFTERNOTOK:\n\t\t\t\t\t/* If the job has already finished, no need to add after dependency */\n\t\t\t\t\tif (is_finished == TRUE) {\n\t\t\t\t\t\tif (((type == JOB_DEPEND_TYPE_AFTERNOTOK) && (pjob->ji_qs.ji_un.ji_exect.ji_exitstat == 0)) ||\n\t\t\t\t\t\t    ((type == JOB_DEPEND_TYPE_AFTEROK) && (pjob->ji_qs.ji_un.ji_exect.ji_exitstat != 0)))\n\t\t\t\t\t\t\trc = PBSE_HISTDEPEND;\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\trc = PBSE_HISTJOBID;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\trc = register_dep(pattr, preq, type, &made);\n\t\t\t\t\tbreak;\n\t\t\t\tcase JOB_DEPEND_TYPE_BEFORESTART:\n\t\t\t\tcase JOB_DEPEND_TYPE_BEFOREANY:\n\t\t\t\tcase JOB_DEPEND_TYPE_BEFOREOK:\n\t\t\t\tcase JOB_DEPEND_TYPE_BEFORENOTOK:\n\n\t\t\t\t\t/* There is no need to put before dependency on a finished job */\n\t\t\t\t\tif (is_finished == TRUE) {\n\t\t\t\t\t\trc = PBSE_HISTDEPEND;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\t/*\n\t\t\t\t\t * Check job owner for permission, use the real\n\t\t\t\t\t * job owner, not the sending server's name.\n\t\t\t\t\t */\n\n\t\t\t\t\t(void) strcpy(preq->rq_user,\n\t\t\t\t\t\t      preq->rq_ind.rq_register.rq_owner);\n\t\t\t\t\tif (svr_chk_owner(preq, pjob)) {\n\t\t\t\t\t\trc = PBSE_PERM; /* not same user */\n\t\t\t\t\t} else {\n\t\t\t\t\t\t/* ok owner, see if job has \"on\" */\n\t\t\t\t\t\tpdep = find_depend(JOB_DEPEND_TYPE_ON, pattr);\n\t\t\t\t\t\tif (pdep == 0) {\n\t\t\t\t\t\t\t/* no \"on\", see if child already registered */\n\t\t\t\t\t\t\trevtype = type ^ (JOB_DEPEND_TYPE_BEFORESTART -\n\t\t\t\t\t\t\t\t\t  JOB_DEPEND_TYPE_AFTERSTART);\n\t\t\t\t\t\t\tpdep = find_depend(revtype, pattr);\n\t\t\t\t\t\t\tif (pdep == 0) {\n\t\t\t\t\t\t\t\t/* no \"on\" and no prior - return error */\n\t\t\t\t\t\t\t\trc = PBSE_BADDEPEND;\n\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\tpdj = find_dependjob(pdep, 
preq->rq_ind.rq_register.rq_child);\n\t\t\t\t\t\t\t\tif (pdj) {\n\t\t\t\t\t\t\t\t\t/* has prior register, update it */\n\t\t\t\t\t\t\t\t\t(void) strcpy(pdj->dc_svr, preq->rq_ind.rq_register.rq_svr);\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t} else if ((rc = register_dep(pattr, preq, type, &made)) == 0) {\n\t\t\t\t\t\t\tif (made) { /* first time registered */\n\t\t\t\t\t\t\t\tif (--pdep->dp_numexp <= 0)\n\t\t\t\t\t\t\t\t\tdel_depend(pdep);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\t\t\t\tcase JOB_DEPEND_TYPE_RUNONE:\n\t\t\t\t\t/*\n\t\t\t\t\t * Check job owner for permission, use the real\n\t\t\t\t\t * job owner, not the sending server's name.\n\t\t\t\t\t */\n\n\t\t\t\t\tstrcpy(preq->rq_user, preq->rq_ind.rq_register.rq_owner);\n\t\t\t\t\tif (svr_chk_owner(preq, pjob)) {\n\t\t\t\t\t\trc = PBSE_PERM; /* not same user */\n\t\t\t\t\t} else {\n\t\t\t\t\t\tpdep = find_depend(JOB_DEPEND_TYPE_RUNONE, pattr);\n\t\t\t\t\t\tif (pdep) {\n\t\t\t\t\t\t\tstruct depend_job *dj_iter;\n\t\t\t\t\t\t\tjob *pr_job;\n\t\t\t\t\t\t\tpr_job = find_job(preq->rq_ind.rq_register.rq_child);\n\t\t\t\t\t\t\tfor (dj_iter = (struct depend_job *) GET_NEXT(pdep->dp_jobs);\n\t\t\t\t\t\t\t     dj_iter != NULL; dj_iter = (struct depend_job *) GET_NEXT(dj_iter->dc_link))\n\t\t\t\t\t\t\t\tupdate_depend(pr_job, dj_iter->dc_child, dj_iter->dc_svr, DEPEND_ADD, JOB_DEPEND_TYPE_RUNONE);\n\t\t\t\t\t\t}\n\t\t\t\t\t\tupdate_depend(pjob, preq->rq_ind.rq_register.rq_child, preq->rq_ind.rq_register.rq_svr, DEPEND_ADD, JOB_DEPEND_TYPE_RUNONE);\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\n\t\t\t\tdefault:\n\t\t\t\t\trc = PBSE_IVALREQ;\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\tbreak;\n\n\t\t\t/*\n\t\t\t * Release a dependency so job might run\n\t\t\t */\n\n\t\tcase JOB_DEPEND_OP_RELEASE:\n\t\t\tswitch (type) {\n\n\t\t\t\tcase JOB_DEPEND_TYPE_BEFORESTART:\n\t\t\t\tcase JOB_DEPEND_TYPE_BEFOREANY:\n\t\t\t\tcase JOB_DEPEND_TYPE_BEFOREOK:\n\t\t\t\tcase JOB_DEPEND_TYPE_BEFORENOTOK:\n\n\t\t\t\t\t/* predecessor sent 
release-reduce \"on\", */\n\t\t\t\t\t/* see if this job can now run \t\t */\n\t\t\t\t\ttype ^= (JOB_DEPEND_TYPE_BEFORESTART -\n\t\t\t\t\t\t JOB_DEPEND_TYPE_AFTERSTART);\n\t\t\t\t\tif ((pdep = find_depend(type, pattr)) != NULL) {\n\t\t\t\t\t\tpdj = find_dependjob(pdep,\n\t\t\t\t\t\t\t\t     preq->rq_ind.rq_register.rq_child);\n\t\t\t\t\t\tif (pdj) {\n\t\t\t\t\t\t\tdel_depend_job(pdj);\n\t\t\t\t\t\t\tpattr->at_flags |= ATR_MOD_MCACHE;\n\t\t\t\t\t\t\t(void) sprintf(log_buffer, msg_registerrel,\n\t\t\t\t\t\t\t\t       preq->rq_ind.rq_register.rq_child);\n\t\t\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t\t\t\t  LOG_INFO,\n\t\t\t\t\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\n\t\t\t\t\t\t\tif (GET_NEXT(pdep->dp_jobs) == 0) {\n\t\t\t\t\t\t\t\t/* no more dependencies of this type */\n\t\t\t\t\t\t\t\tdel_depend(pdep);\n\t\t\t\t\t\t\t\tset_depend_hold(pjob, pattr);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\trc = PBSE_IVALREQ;\n\t\t\t\t\tbreak;\n\t\t\t\tcase JOB_DEPEND_TYPE_RUNONE:\n\t\t\t\t\tpdep = find_depend(JOB_DEPEND_TYPE_RUNONE, pattr);\n\t\t\t\t\tif (pdep) {\n\t\t\t\t\t\tstruct depend_job *dj_iter;\n\t\t\t\t\t\tjob *pr_job;\n\t\t\t\t\t\tpr_job = find_job(preq->rq_ind.rq_register.rq_child);\n\t\t\t\t\t\tif (pr_job) {\n\t\t\t\t\t\t\tfor (dj_iter = (struct depend_job *) GET_NEXT(pdep->dp_jobs);\n\t\t\t\t\t\t\t     dj_iter != NULL; dj_iter = (struct depend_job *) GET_NEXT(dj_iter->dc_link)) {\n\t\t\t\t\t\t\t\tupdate_depend(pr_job, dj_iter->dc_child, dj_iter->dc_svr, DEPEND_REMOVE, JOB_DEPEND_TYPE_RUNONE);\n\t\t\t\t\t\t\t\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t\t\t\t\t   LOG_INFO, pr_job->ji_qs.ji_jobid, msg_registerrel, dj_iter->dc_child);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tupdate_depend(pjob, preq->rq_ind.rq_register.rq_parent, preq->rq_ind.rq_register.rq_svr, DEPEND_REMOVE, JOB_DEPEND_TYPE_RUNONE);\n\t\t\t\t\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t\t 
  pjob->ji_qs.ji_jobid, msg_registerrel, preq->rq_ind.rq_register.rq_parent);\n\n\t\t\t\t\tif (pdep && GET_NEXT(pdep->dp_jobs) == 0) {\n\t\t\t\t\t\t/* no more dependencies of this type */\n\t\t\t\t\t\tdel_depend(pdep);\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\t\t\t}\n\n\t\t\tbreak;\n\n\t\tcase JOB_DEPEND_OP_READY:\n\t\t\trc = PBSE_NOSYNCMSTR;\n\t\t\tbreak;\n\n\t\tcase JOB_DEPEND_OP_DELETE:\n\t\t\t(void) sprintf(log_buffer, msg_registerdel,\n\t\t\t\t       preq->rq_ind.rq_register.rq_child);\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\tjob_abt(pjob, log_buffer);\n\t\t\t/* Since the job is aborted, we can return right away */\n\t\t\treply_ack(preq);\n\t\t\treturn;\n\n\t\tcase JOB_DEPEND_OP_UNREG:\n\t\t\tunregister_dep(pattr, preq);\n\t\t\tset_depend_hold(pjob, pattr);\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\tsprintf(log_buffer, msg_illregister,\n\t\t\t\tpreq->rq_ind.rq_register.rq_parent);\n\t\t\tlog_event(PBSEVENT_DEBUG | PBSEVENT_SYSTEM | PBSEVENT_ERROR,\n\t\t\t\t  PBS_EVENTCLASS_REQUEST, LOG_INFO,\n\t\t\t\t  preq->rq_host, log_buffer);\n\t\t\trc = PBSE_IVALREQ;\n\t\t\tbreak;\n\t}\n\n\tif (rc) {\n\t\treq_reject(rc, 0, preq);\n\t} else {\n\t\tjob_save_db(pjob);\n\t\treply_ack(preq);\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \t\tpost_doq (que not dog) - post request/reply processing for depend_on_que\n *\t\ti.e. 
the sending of register operations.\n *\n * @param[in]\tpwt\t-\tpost request/reply\n */\n\nstatic void\npost_doq(struct work_task *pwt)\n{\n\tstruct batch_request *preq = (struct batch_request *) pwt->wt_parm1;\n\tchar *jobid = preq->rq_ind.rq_register.rq_child;\n\tchar *msg;\n\tjob *pjob;\n\tjob *ppjob;\n\tstruct depend_job pparent;\n\tint rc;\n\n\tif (preq->rq_reply.brp_code) {\n\t\t/* request was rejected */\n\t\tif (preq->rq_reply.brp_code == PBSE_HISTJOBID)\n\t\t\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, jobid,\n\t\t\t\t   \"%s %s, dependency satisfied\", preq->rq_ind.rq_register.rq_parent, msg_historyjobid);\n\t\telse if (preq->rq_reply.brp_code == PBSE_HISTDEPEND)\n\t\t\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, jobid,\n\t\t\t\t   \"%s %s\", preq->rq_ind.rq_register.rq_parent, msg_histdepend);\n\t\telse\n\t\t\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, jobid,\n\t\t\t\t   \"%s%s\", msg_regrej, preq->rq_ind.rq_register.rq_parent);\n\n\t\tpjob = find_job(jobid);\n\t\t/* rebuild log_buffer for check_block()/job_abt() below; log_eventf() does not fill it */\n\t\t(void) snprintf(log_buffer, LOG_BUF_SIZE, \"%s%s\", msg_regrej, preq->rq_ind.rq_register.rq_parent);\n\t\tif ((msg = pbse_to_txt(preq->rq_reply.brp_code)) != NULL) {\n\t\t\t(void) strcat(log_buffer, \" \");\n\t\t\t(void) strcat(log_buffer, msg);\n\t\t}\n\t\tif (pjob) {\n\t\t\tppjob = find_job(preq->rq_ind.rq_register.rq_parent);\n\t\t\tif (preq->rq_reply.brp_code == PBSE_JOB_MOVED) {\n\t\t\t\t/* Creating a separate log buffer because if we end up aborting the submitted job\n\t\t\t\t * we don't want to change what goes into accounting log via job_abt\n\t\t\t\t */\n\t\t\t\tchar log_msg[LOG_BUF_SIZE];\n\t\t\t\tsnprintf(log_msg, sizeof(log_msg), \"%s, %s\", msg_job_moved,\n\t\t\t\t\t \"sending dependency request to remote server\");\n\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, jobid, log_msg);\n\t\t\t\tif (ppjob && (check_job_state(ppjob, JOB_STATE_LTR_MOVED)) && (check_job_substate(ppjob, JOB_SUBSTATE_MOVED))) {\n\t\t\t\t\tchar *destin;\n\t\t\t\t\t/* job destination should be <remote queue>@<remote server> */\n\t\t\t\t\tdestin = 
strchr(ppjob->ji_qs.ji_destin, (int) '@');\n\t\t\t\t\tif (destin != NULL) {\n\t\t\t\t\t\tstrncpy(pparent.dc_child, ppjob->ji_qs.ji_jobid, sizeof(pparent.dc_child) - 1);\n\t\t\t\t\t\tpparent.dc_child[sizeof(pparent.dc_child) - 1] = '\\0';\n\t\t\t\t\t\tdestin++;\n\t\t\t\t\t\tstrncpy(pparent.dc_svr, destin, sizeof(pparent.dc_svr) - 1);\n\t\t\t\t\t\tpparent.dc_svr[sizeof(pparent.dc_svr) - 1] = '\\0';\n\t\t\t\t\t\trc = send_depend_req(pjob, &pparent, preq->rq_ind.rq_register.rq_dependtype,\n\t\t\t\t\t\t\t\t     JOB_DEPEND_OP_REGISTER,\n\t\t\t\t\t\t\t\t     SYNC_SCHED_HINT_NULL, post_doq);\n\t\t\t\t\t\tif (rc) {\n\t\t\t\t\t\t\tsnprintf(log_msg, sizeof(log_msg), \"%s\",\n\t\t\t\t\t\t\t\t \"Failed to send dependency request to remote server, aborting job\");\n\t\t\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_ERR, jobid, log_msg);\n\t\t\t\t\t\t\tcheck_block(pjob, log_buffer);\n\t\t\t\t\t\t\tjob_abt(pjob, log_buffer);\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\t/* Ideally if a job is moved, the destination cannot be empty */\n\t\t\t\t\t\t/* If we come across an empty destination, abort the job */\n\t\t\t\t\t\tcheck_block(pjob, log_buffer);\n\t\t\t\t\t\tjob_abt(pjob, log_buffer);\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tcheck_block(pjob, log_buffer);\n\t\t\t\t\tjob_abt(pjob, log_buffer);\n\t\t\t\t}\n\t\t\t} else if (preq->rq_reply.brp_code == PBSE_HISTJOBID) {\n\t\t\t\tif (ppjob) {\n\t\t\t\t\tupdate_depend(pjob, ppjob->ji_qs.ji_jobid, preq->rq_host, DEPEND_REMOVE, preq->rq_ind.rq_register.rq_dependtype);\n\t\t\t\t\t/* check and remove system hold if needed */\n\t\t\t\t\tset_depend_hold(pjob, get_jattr(pjob, JOB_ATR_depend));\n\t\t\t\t}\n\n\t\t\t} else {\n\t\t\t\tcheck_block(pjob, log_buffer);\n\t\t\t\tjob_abt(pjob, log_buffer);\n\t\t\t}\n\t\t}\n\t}\n\n\trelease_req(pwt);\n}\n\n/**\n * @brief\n * \t\talter_unreg - if required, unregister dependencies on alter of attribute\n *\t\tThis is called from depend_on_que() when it is acting as the at_action\n *\t\troutine for the dependency attribute.\n *\n * @param[in]\tpjob\t-\tpointer to job structure\n * 
@param[in]\told\t-\tcurrent job dependency attribute\n * @param[out]\tnew\t-\tjob dependency attribute after alter\n */\n\nstatic void\nalter_unreg(job *pjob, attribute *old, attribute *new)\n{\n\tstruct depend *poldd;\n\tstruct depend *pnewd;\n\tstruct depend_job *oldjd;\n\tint type;\n\n\tfor (poldd = (struct depend *) GET_NEXT(old->at_val.at_list);\n\t     poldd;\n\t     poldd = (struct depend *) GET_NEXT(poldd->dp_link)) {\n\n\t\ttype = poldd->dp_type;\n\t\tif (type != JOB_DEPEND_TYPE_ON) {\n\n\t\t\tpnewd = find_depend(type, new);\n\t\t\toldjd = (struct depend_job *) GET_NEXT(poldd->dp_jobs);\n\t\t\twhile (oldjd) {\n\t\t\t\tif ((pnewd == 0) ||\n\t\t\t\t    (find_dependjob(pnewd, oldjd->dc_child) == NULL)) {\n\t\t\t\t\t(void) send_depend_req(pjob, oldjd, type,\n\t\t\t\t\t\t\t       JOB_DEPEND_OP_UNREG,\n\t\t\t\t\t\t\t       SYNC_SCHED_HINT_NULL,\n\t\t\t\t\t\t\t       release_req);\n\t\t\t\t}\n\t\t\t\toldjd = (struct depend_job *) GET_NEXT(oldjd->dc_link);\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\tdepend_on_que - Perform a series of actions if job has a dependency\n *\t\tthat needs action when the job is queued into an execution queue.\n * @par\n *\t\tCalled from svr_enquejob() when a job enters an\n *\t\texecution queue.  
Also the at_action routine for the attribute.\n *\n * @param[out]\tpattr\t-\tjob dependency attribute after alter\n * @param[in]\tpobj\t-\tpointer to job structure\n * @param[in]\tmode\t-\tthe type of action being performed (ATR_ACTION_*)\n */\n\nint\ndepend_on_que(attribute *pattr, void *pobj, int mode)\n{\n\tchar *b1, *b2;\n\tstruct depend *pdep;\n\tstruct depend_job *pparent;\n\tint rc;\n\tint type;\n\tjob *pjob = (job *) pobj;\n\n\tif (((mode != ATR_ACTION_ALTER) && (mode != ATR_ACTION_NOOP)) ||\n\t    (pjob->ji_qhdr == 0) ||\n\t    (pjob->ji_qhdr->qu_qs.qu_type != QTYPE_Execution))\n\t\treturn (0);\n\n\tif (mode == ATR_ACTION_ALTER) {\n\t\t/* if there are dependencies being removed, unregister them */\n\n\t\talter_unreg(pjob, get_jattr(pjob, JOB_ATR_depend), pattr);\n\t}\n\n\t/* First set a System hold if required */\n\tset_depend_hold(pjob, pattr);\n\n\t/* Check if there are dependencies that require registering */\n\n\tpdep = (struct depend *) GET_NEXT(pattr->at_val.at_list);\n\twhile (pdep) {\n\t\ttype = pdep->dp_type;\n\t\tif (type != JOB_DEPEND_TYPE_ON) {\n\n\t\t\tpparent = (struct depend_job *) GET_NEXT(pdep->dp_jobs);\n\t\t\twhile (pparent) {\n\t\t\t\tif (((b1 = strchr(pparent->dc_child, (int) '[')) != NULL) &&\n\t\t\t\t    ((b2 = strchr(pparent->dc_child, (int) ']')) != NULL)) {\n\t\t\t\t\tif (b2 != b1 + 1)\n\t\t\t\t\t\treturn PBSE_IVALREQ;\n\t\t\t\t}\n\t\t\t\tif (strcmp(pparent->dc_child, pjob->ji_qs.ji_jobid) == 0) {\n\t\t\t\t\t/* parent and child job ids are the same */\n\t\t\t\t\treturn PBSE_IVALREQ;\n\t\t\t\t}\n\n\t\t\t\tif (type == JOB_DEPEND_TYPE_RUNONE) {\n\t\t\t\t\tjob *djob = find_job(pparent->dc_child);\n\t\t\t\t\tif (djob == NULL)\n\t\t\t\t\t\treturn PBSE_IVALREQ;\n\t\t\t\t\t/* hold this job if any of the jobs it is dependent on is\n\t\t\t\t\t * either running or already held because of dependency.\n\t\t\t\t\t * pay special attention to array parent jobs because those jobs\n\t\t\t\t\t * may not be in BEGUN state while a subjob might already be 
running\n\t\t\t\t\t * this is because JOB_STATE_LTR_BEGUN is set only when a mom reports\n\t\t\t\t\t * session id, but there could be a case that a mom is in the process\n\t\t\t\t\t * of reporting a session id and user submits a dependent job.\n\t\t\t\t\t * To avoid such cases, verify that count of queued subjobs is equal\n\t\t\t\t\t * to the number of subjobs\n\t\t\t\t\t */\n\t\t\t\t\telse if (check_job_state(djob, JOB_STATE_LTR_RUNNING) ||\n\t\t\t\t\t\t check_job_state(djob, JOB_STATE_LTR_BEGUN) ||\n\t\t\t\t\t\t (djob->ji_ajinfo != NULL &&\n\t\t\t\t\t\t  djob->ji_ajinfo->tkm_ct != djob->ji_ajinfo->tkm_subjsct[JOB_STATE_QUEUED]) ||\n\t\t\t\t\t\t (check_job_state(djob, JOB_STATE_LTR_HELD) &&\n\t\t\t\t\t\t  check_job_substate(djob, JOB_SUBSTATE_DEPNHOLD))) {\n\t\t\t\t\t\t/* If the dependent job is running or has system hold, then put this job on hold too*/\n\t\t\t\t\t\tset_jattr_b_slim(pjob, JOB_ATR_hold, HOLD_s, INCR);\n\t\t\t\t\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_HELD, JOB_SUBSTATE_DEPNHOLD);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\trc = send_depend_req(pjob, pparent, type,\n\t\t\t\t\t\t     JOB_DEPEND_OP_REGISTER,\n\t\t\t\t\t\t     SYNC_SCHED_HINT_NULL, post_doq);\n\t\t\t\tif (rc)\n\t\t\t\t\treturn rc;\n\t\t\t\tpparent = (struct depend_job *) GET_NEXT(pparent->dc_link);\n\t\t\t}\n\t\t}\n\t\tpdep = (struct depend *) GET_NEXT(pdep->dp_link);\n\t}\n\n\tpdep = find_depend(JOB_DEPEND_TYPE_RUNONE, pattr);\n\tif (pdep != NULL) {\n\t\t/* go through all dependent jobs */\n\t\tstruct depend_job *f_pparent = (struct depend_job *) GET_NEXT(pdep->dp_jobs);\n\t\tif (f_pparent == NULL)\n\t\t\treturn (0);\n\t\tfor (; f_pparent != NULL; f_pparent = (struct depend_job *) GET_NEXT(f_pparent->dc_link)) {\n\t\t\tpjob = find_job(f_pparent->dc_child);\n\t\t\tif (pjob == NULL)\n\t\t\t\treturn (0);\n\t\t\tpparent = (struct depend_job *) GET_NEXT(f_pparent->dc_link);\n\t\t\tfor (; pparent != NULL; pparent = (struct depend_job *) GET_NEXT(pparent->dc_link)) {\n\t\t\t\tif 
(find_dependjob(find_depend(JOB_DEPEND_TYPE_RUNONE, get_jattr(pjob, JOB_ATR_depend)), pparent->dc_child) == NULL) {\n\t\t\t\t\trc = send_depend_req(pjob, pparent, pdep->dp_type,\n\t\t\t\t\t\t\t     JOB_DEPEND_OP_REGISTER, SYNC_SCHED_HINT_NULL, post_doq);\n\t\t\t\t\tif (rc)\n\t\t\t\t\t\treturn (rc);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tpost_doe - Post (reply) processing of requests processing for depend_on_exec\n *\n * @param[in]\tpwt\t-\twork task containing requests to be processed\n */\n\nstatic void\npost_doe(struct work_task *pwt)\n{\n\tstruct batch_request *preq = pwt->wt_parm1;\n\tchar *jobid = preq->rq_ind.rq_register.rq_child;\n\tattribute *pattr;\n\tstruct depend *pdep;\n\tstruct depend_job *pdj;\n\tjob *pjob;\n\n\tpjob = find_job(jobid);\n\tif (pjob) {\n\t\tpattr = get_jattr(pjob, JOB_ATR_depend);\n\t\tpdep = find_depend(JOB_DEPEND_TYPE_BEFORESTART, pattr);\n\t\tif (pdep != NULL) {\n\t\t\tpdj = find_dependjob(pdep, preq->rq_ind.rq_register.rq_parent);\n\t\t\tif (pdj != NULL)\n\t\t\t\tdel_depend_job(pdj);\n\t\t\tif (GET_NEXT(pdep->dp_jobs) == 0) {\n\t\t\t\t/* no more dependencies of this type */\n\t\t\t\tdel_depend(pdep);\n\t\t\t}\n\t\t}\n\t}\n\trelease_req(pwt);\n}\n\n/**\n * @brief\n * \t\tpost_runone - Post (reply) processing of requests processing for run one job\n *\n * @param[in]\tpwt\t-\twork task containing requests to be processed\n */\nvoid\npost_runone(struct work_task *pwt)\n{\n\tstruct batch_request *preq = pwt->wt_parm1;\n\tchar *jobid = preq->rq_ind.rq_register.rq_child;\n\tattribute *pattr;\n\tstruct depend *pdep;\n\tstruct depend_job *pdj;\n\tjob *pjob;\n\tjob *del_job;\n\n\tpjob = find_job(jobid);\n\tif (pjob) {\n\t\tpattr = get_jattr(pjob, JOB_ATR_depend);\n\t\tpdep = find_depend(JOB_DEPEND_TYPE_RUNONE, pattr);\n\t\tif (pdep != NULL) {\n\t\t\tpdj = find_dependjob(pdep, preq->rq_ind.rq_register.rq_parent);\n\t\t\tif (pdj != NULL)\n\t\t\t\tdel_depend_job(pdj);\n\t\t\tif (GET_NEXT(pdep->dp_jobs) == 0) 
{\n\t\t\t\t/* no more dependencies of this type */\n\t\t\t\tdel_depend(pdep);\n\t\t\t}\n\t\t\tdel_job = find_job(preq->rq_ind.rq_register.rq_parent);\n\t\t\tif (del_job)\n\t\t\t\tjob_abt(del_job, msg_depend_runone);\n\t\t}\n\t}\n\trelease_req(pwt);\n}\n\n/**\n * @brief\n * \t\tdepend_on_exec - Perform actions if job has\n *\t\t\"beforestart\" dependency - send \"register-release\" to the child job.\n * @note\n *\t\tThis function is called from svr_startjob().\n *\n * @param[in]\tpjob\t-\tjob\n *\n * @return\terror code\n * @retval\t0\t: success\n */\n\nint\ndepend_on_exec(job *pjob)\n{\n\tstruct depend *pdep;\n\tstruct depend_job *pdj;\n\n\tif (pjob == NULL)\n\t\treturn (0);\n\n\t/* If any jobs come after my start, release them */\n\n\tpdep = find_depend(JOB_DEPEND_TYPE_BEFORESTART, get_jattr(pjob, JOB_ATR_depend));\n\tif (pdep) {\n\t\tpdj = (struct depend_job *) GET_NEXT(pdep->dp_jobs);\n\t\twhile (pdj) {\n\t\t\t(void) send_depend_req(pjob, pdj, pdep->dp_type, JOB_DEPEND_OP_RELEASE, SYNC_SCHED_HINT_NULL, post_doe);\n\t\t\tpdj = (struct depend_job *) GET_NEXT(pdj->dc_link);\n\t\t}\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tHelper function that goes through all dependent jobs with runone dependency\n *\t\ton the given job and removes the given job from their dependency list.\n * @note\n *\t\tThis function is called from req_deletejob2().\n *\n * @param[in]\tpjob\t-\tjob\n *\n * @return\terror code\n * @retval\t0\t: success\n */\nint\ndepend_runone_remove_dependency(job *pjob)\n{\n\tstruct depend *pdep;\n\tstruct depend_job *pdj;\n\tstruct job *d_pjob;\n\n\tif (pjob == NULL)\n\t\treturn (0);\n\n\tpdep = find_depend(JOB_DEPEND_TYPE_RUNONE, get_jattr(pjob, JOB_ATR_depend));\n\tif (pdep) {\n\t\tfor (pdj = (struct depend_job *) GET_NEXT(pdep->dp_jobs);\n\t\t     pdj != NULL; pdj = (struct depend_job *) GET_NEXT(pdj->dc_link)) {\n\t\t\td_pjob = find_job(pdj->dc_child);\n\t\t\tif (d_pjob) {\n\t\t\t\tstruct depend_job *temp_pdj = NULL;\n\t\t\t\tattribute *pattr = get_jattr(d_pjob, 
JOB_ATR_depend);\n\n\t\t\t\ttemp_pdj = find_dependjob(find_depend(JOB_DEPEND_TYPE_RUNONE, pattr), pjob->ji_qs.ji_jobid);\n\t\t\t\tif (temp_pdj) {\n\t\t\t\t\tdel_depend_job(temp_pdj);\n\t\t\t\t\tpattr->at_flags |= ATR_MOD_MCACHE;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tdel_depend(pdep);\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tHelper function that goes through all dependent jobs with runone dependency\n *\t\ton the given job and puts them all on system hold.\n * @note\n *\t\tThis function is called from req_strtjob2().\n *\n * @param[in]\tpjob\t-\tjob\n *\n * @return\terror code\n * @retval\t0\t: success\n */\nint\ndepend_runone_hold_all(job *pjob)\n{\n\tstruct depend *pdep;\n\tstruct depend_job *pdj;\n\tstruct job *d_pjob;\n\n\tif (pjob == NULL)\n\t\treturn (0);\n\n\tpdep = find_depend(JOB_DEPEND_TYPE_RUNONE, get_jattr(pjob, JOB_ATR_depend));\n\tif (pdep) {\n\t\tfor (pdj = (struct depend_job *) GET_NEXT(pdep->dp_jobs);\n\t\t     pdj != NULL; pdj = (struct depend_job *) GET_NEXT(pdj->dc_link)) {\n\t\t\td_pjob = find_job(pdj->dc_child);\n\t\t\tif (d_pjob) {\n\t\t\t\tset_jattr_b_slim(d_pjob, JOB_ATR_hold, HOLD_s, INCR);\n\t\t\t\tsvr_setjobstate(d_pjob, JOB_STATE_LTR_HELD, JOB_SUBSTATE_HELD);\n\t\t\t}\n\t\t}\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tHelper function that goes through all dependent jobs with runone dependency\n *\t\ton the given job and releases them from system hold.\n * @note\n *\t\tThis function is called from req_rerunjob2().\n *\n * @param[in]\tpjob\t-\tjob\n *\n * @return\terror code\n * @retval\t0\t: success\n */\nint\ndepend_runone_release_all(job *pjob)\n{\n\tstruct depend *pdep;\n\tstruct depend_job *pdj;\n\tstruct job *d_pjob;\n\tchar newstate;\n\tint newsub;\n\n\tif (pjob == NULL)\n\t\treturn (0);\n\n\tpdep = find_depend(JOB_DEPEND_TYPE_RUNONE, get_jattr(pjob, JOB_ATR_depend));\n\tif (pdep) {\n\t\tfor (pdj = (struct depend_job *) GET_NEXT(pdep->dp_jobs);\n\t\t     pdj != NULL; pdj 
= (struct depend_job *) GET_NEXT(pdj->dc_link)) {\n\t\t\td_pjob = find_job(pdj->dc_child);\n\t\t\tif (d_pjob) {\n\t\t\t\tset_jattr_b_slim(d_pjob, JOB_ATR_hold, HOLD_s, DECR);\n\t\t\t\tsvr_evaljobstate(d_pjob, &newstate, &newsub, 0);\n\t\t\t\tsvr_setjobstate(d_pjob, newstate, newsub); /* saves job */\n\t\t\t}\n\t\t}\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tdepend_on_term - Perform actions if job has \"afterany, afterok, afternotok\"\n *\t\tdependencies, send \"register-release\" or \"register-delete\" as\n *\t\tappropriate.\n * @par\n *\t\tThis function is invoked from on_job_exit() in req_jobobit.c.\n *\t\tWhen there are no depends to deal with, free the attribute and\n *\t\trecall on_job_exit().\n *\n * @param[in]\tpjob\t-\tjob\n *\n * @return\terror code\n * @retval\t0\t: success\n */\n\nint\ndepend_on_term(job *pjob)\n{\n\tint exitstat = pjob->ji_qs.ji_un.ji_exect.ji_exitstat;\n\tint op;\n\tstruct depend *pdep;\n\tstruct depend_job *pparent;\n\tint rc;\n\tint type;\n\n\tpdep = (struct depend *) GET_NEXT(get_jattr_list(pjob, JOB_ATR_depend));\n\twhile (pdep) {\n\t\top = -1;\n\t\tswitch (type = pdep->dp_type) {\n\n\t\t\t\t/* for the first three, before... 
types, release or delete */\n\t\t\t\t/* next job depending on exit status\t\t\t   */\n\n\t\t\tcase JOB_DEPEND_TYPE_BEFOREOK:\n\t\t\t\tif (exitstat == 0)\n\t\t\t\t\top = JOB_DEPEND_OP_RELEASE;\n\t\t\t\telse\n\t\t\t\t\top = JOB_DEPEND_OP_DELETE;\n\t\t\t\tbreak;\n\n\t\t\tcase JOB_DEPEND_TYPE_BEFORENOTOK:\n\t\t\t\t/* exitstat has defined values when a user/admin forcefully deletes the job.\n\t\t\t\t * In such cases, delete the dependency chain.\n\t\t\t\t */\n\t\t\t\tif (exitstat != 0)\n\t\t\t\t\top = JOB_DEPEND_OP_RELEASE;\n\t\t\t\telse\n\t\t\t\t\top = JOB_DEPEND_OP_DELETE;\n\t\t\t\tbreak;\n\n\t\t\tcase JOB_DEPEND_TYPE_BEFOREANY:\n\t\t\t\top = JOB_DEPEND_OP_RELEASE;\n\t\t\t\tbreak;\n\n\t\t\tcase JOB_DEPEND_TYPE_RUNONE:\n\t\t\t\top = JOB_DEPEND_OP_DELETE;\n\t\t\t\tbreak;\n\t\t\t/* This case can only happen when a job with before start\n\t\t\t * dependency is getting deleted before it even runs.\n\t\t\t */\n\t\t\tcase JOB_DEPEND_TYPE_BEFORESTART:\n\t\t\t\top = JOB_DEPEND_OP_DELETE;\n\t\t\t\tbreak;\n\t\t}\n\t\tif (op != -1) {\n\t\t\t/* Check if the job is being deleted. 
If so, delete the dependency chain only for beforeok dependency */\n\t\t\tif (pjob->ji_terminated == 1) {\n\t\t\t\tif (type == JOB_DEPEND_TYPE_BEFORENOTOK || type == JOB_DEPEND_TYPE_BEFOREANY)\n\t\t\t\t\top = JOB_DEPEND_OP_RELEASE;\n\t\t\t\telse\n\t\t\t\t\top = JOB_DEPEND_OP_DELETE;\n\t\t\t}\n\t\t\t/* This function is also called from job_abt when the job is in held state and abort substate.\n\t\t\t * In case of a held job, delete the dependency chain.\n\t\t\t */\n\t\t\tif (check_job_state(pjob, JOB_STATE_LTR_HELD) && check_job_substate(pjob, JOB_SUBSTATE_ABORT)) {\n\t\t\t\top = JOB_DEPEND_OP_DELETE;\n\t\t\t\t/* In case the job being deleted is a job with runone dependency type\n\t\t\t\t * then there is no need to delete other dependent jobs.\n\t\t\t\t */\n\t\t\t\tif (type == JOB_DEPEND_TYPE_RUNONE)\n\t\t\t\t\top = JOB_DEPEND_OP_RELEASE;\n\t\t\t}\n\n\t\t\tpparent = (struct depend_job *) GET_NEXT(pdep->dp_jobs);\n\t\t\twhile (pparent) {\n\t\t\t\t/* \"release\" the job to execute */\n\t\t\t\trc = send_depend_req(pjob, pparent, type, op,\n\t\t\t\t\t\t     SYNC_SCHED_HINT_NULL, release_req);\n\t\t\t\tif (rc)\n\t\t\t\t\treturn rc;\n\t\t\t\tpparent = (struct depend_job *) GET_NEXT(pparent->dc_link);\n\t\t\t}\n\t\t}\n\t\tpdep = (struct depend *) GET_NEXT(pdep->dp_link);\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tset_depend_hold - set a hold on the job required by the type of dependency\n *\n * @param[in]\tpjob\t-\tjob\n * @param[in]\tpattr\t-\tattribute structure\n */\n\nstatic void\nset_depend_hold(job *pjob, attribute *pattr)\n{\n\tint loop = 1;\n\tchar newstate;\n\tint newsubst;\n\tstruct depend *pdp = NULL;\n\tint substate = -1;\n\n\tif (is_attr_set(pattr))\n\t\tpdp = (struct depend *) GET_NEXT(pattr->at_val.at_list);\n\twhile (pdp && loop) {\n\t\tswitch (pdp->dp_type) {\n\t\t\tcase JOB_DEPEND_TYPE_AFTERSTART:\n\t\t\tcase JOB_DEPEND_TYPE_AFTEROK:\n\t\t\tcase JOB_DEPEND_TYPE_AFTERNOTOK:\n\t\t\tcase JOB_DEPEND_TYPE_AFTERANY:\n\t\t\t\tif ((struct depend_job *) 
GET_NEXT(pdp->dp_jobs))\n\t\t\t\t\tsubstate = JOB_SUBSTATE_DEPNHOLD;\n\t\t\t\tbreak;\n\n\t\t\tcase JOB_DEPEND_TYPE_ON:\n\t\t\t\tif (pdp->dp_numexp)\n\t\t\t\t\tsubstate = JOB_SUBSTATE_DEPNHOLD;\n\t\t\t\tbreak;\n\t\t}\n\t\tpdp = (struct depend *) GET_NEXT(pdp->dp_link);\n\t}\n\tif (substate == -1) {\n\n\t\t/* No (more) dependencies, clear system hold and set state */\n\n\t\tif ((check_job_substate(pjob, JOB_SUBSTATE_SYNCHOLD)) ||\n\t\t    (check_job_substate(pjob, JOB_SUBSTATE_DEPNHOLD))) {\n\t\t\tset_jattr_b_slim(pjob, JOB_ATR_hold, HOLD_s, DECR);\n\t\t\tsvr_evaljobstate(pjob, &newstate, &newsubst, 0);\n\t\t\tsvr_setjobstate(pjob, newstate, newsubst);\n\t\t}\n\t} else {\n\n\t\t/* there are dependencies, set system hold accordingly */\n\n\t\tset_jattr_b_slim(pjob, JOB_ATR_hold, HOLD_s, INCR);\n\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_HELD, substate);\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \t\tfind_depend - find a dependency struct of a certain type for a job\n *\n * @param[in]\ttype\t-\ttype of dependency struct\n * @param[in]\tpattr\t-\tattribute structure\n *\n * @return\tdependent job structure\n * @retval\tNULL\t: fail\n */\n\nstruct depend *\nfind_depend(int type, attribute *pattr)\n{\n\tstruct depend *pdep = NULL;\n\n\tif (is_attr_set(pattr)) {\n\t\tpdep = (struct depend *) GET_NEXT(pattr->at_val.at_list);\n\t\twhile (pdep) {\n\t\t\tif (pdep->dp_type == type)\n\t\t\t\tbreak;\n\t\t\tpdep = (struct depend *) GET_NEXT(pdep->dp_link);\n\t\t}\n\t}\n\treturn (pdep);\n}\n\n/**\n * @brief\n * \t\tmake_depend - allocate and attach a depend struct to the attribute\n *\n * @param[in]\ttype\t-\ttype of dependency struct\n * @param[in,out]\tpattr\t-\tattribute structure\n *\n * @return\tdependent job structure\n * @retval\tNULL\t: fail\n */\n\nstatic struct depend *\nmake_depend(int type, attribute *pattr)\n{\n\tstruct depend *pdep = NULL;\n\n\tpdep = (struct depend *) malloc(sizeof(struct depend));\n\tif (pdep) {\n\t\tclear_depend(pdep, type, 
0);\n\t\tappend_link(&pattr->at_val.at_list, &pdep->dp_link, pdep);\n\t\tpost_attr_set(pattr);\n\t}\n\treturn (pdep);\n}\n\n/**\n * @brief\n * \t\tregister_dep - Some job wants to run before/after the local job, so set up\n *\t\ta dependency on the local job.\n *\n * @param[in,out]\tpattr\t-\tattribute structure\n * @param[in,out]\tpreq\t-\trequest structure\n * @param[in]\ttype\t-\ttype of dependency struct\n * @param[out]\tmade\t-\t0 if dependency is found else 1\n *\n * @return\terror code\n * @retval\t0\t-\tsuccess\n * @retval\tPBSE_SYSTEM\t- fail\n */\n\nstatic int\nregister_dep(attribute *pattr, struct batch_request *preq, int type, int *made)\n{\n\tstruct depend *pdep;\n\tstruct depend_job *pdj;\n\n\t/* change into the mirror image type */\n\n\ttype ^= (JOB_DEPEND_TYPE_BEFORESTART - JOB_DEPEND_TYPE_AFTERSTART);\n\tif ((pdep = find_depend(type, pattr)) == NULL) {\n\t\tif ((pdep = make_depend(type, pattr)) == NULL)\n\t\t\treturn (PBSE_SYSTEM);\n\t}\n\tif ((pdj = find_dependjob(pdep,\n\t\t\t\t  preq->rq_ind.rq_register.rq_child)) != NULL) {\n\t\t(void) strcpy(pdj->dc_svr, preq->rq_ind.rq_register.rq_svr);\n\t\t*made = 0;\n\t\treturn (0);\n\t} else if ((pdj = make_dependjob(pdep,\n\t\t\t\t\t preq->rq_ind.rq_register.rq_child,\n\t\t\t\t\t preq->rq_ind.rq_register.rq_svr)) ==\n\t\t   NULL) {\n\t\treturn (PBSE_SYSTEM);\n\t} else\n\t\t*made = 1;\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tunregister_dep - remove a registered dependency\n *\t\tResults from a qalter call to remove existing dependencies\n *\n * @param[in]\tpattr\t-\tattribute structure\n * @param[in]\tpreq\t-\trequest structure\n *\n * @return\terror code\n * @retval\t0\t: success\n * @retval\tPBSE_IVALREQ\t: Invalid request, dependency could not be found.\n */\nstatic int\nunregister_dep(attribute *pattr, struct batch_request *preq)\n{\n\tint type;\n\tstruct depend *pdp;\n\tstruct depend_job *pdjb;\n\n\t/* if dependency has mirroring effect, get mirror image of dependency type */\n\tif 
(preq->rq_ind.rq_register.rq_dependtype < JOB_DEPEND_TYPE_RUNONE)\n\t\ttype = preq->rq_ind.rq_register.rq_dependtype ^\n\t\t       (JOB_DEPEND_TYPE_BEFORESTART - JOB_DEPEND_TYPE_AFTERSTART);\n\telse\n\t\ttype = preq->rq_ind.rq_register.rq_dependtype;\n\n\tif (((pdp = find_depend(type, pattr)) == 0) ||\n\t    ((pdjb = find_dependjob(pdp, preq->rq_ind.rq_register.rq_child)) == NULL))\n\t\treturn (PBSE_IVALREQ);\n\n\tdel_depend_job(pdjb);\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tfind_dependjob - find a child dependent job with a certain job id\n *\n * @param[in]\tpdep\t-\tdependent jobs\n * @param[in]\tname\t-\tjob id to be matched\n *\n * @return\tchild dependent job\n * @retval\tNULL\t: fail\n */\n\nstruct depend_job *\nfind_dependjob(struct depend *pdep, char *name)\n{\n\tstruct depend_job *pdj;\n\n\tif ((pdep == NULL) || (name == NULL))\n\t\treturn NULL;\n\n\tpdj = (struct depend_job *) GET_NEXT(pdep->dp_jobs);\n\twhile (pdj) {\n\t\tif (!strcmp(name, pdj->dc_child))\n\t\t\tbreak;\n\n\t\tpdj = (struct depend_job *) GET_NEXT(pdj->dc_link);\n\t}\n\treturn (pdj);\n}\n\n/**\n * @brief\n * \t\tmake_dependjob - add a depend_job structure\n *\n * @param[in,out]\tpdep\t-\tptr to head of depend list\n * @param[in]\tjobid\t-\tchild (dependent) job\n * @param[in]\thost\t-\tserver owning job\n *\n * @return\tchild dependent job\n */\n\nstatic struct depend_job *\nmake_dependjob(struct depend *pdep, char *jobid, char *host)\n{\n\tstruct depend_job *pdj;\n\n\tpdj = (struct depend_job *) malloc(sizeof(struct depend_job));\n\tif (pdj) {\n\n\t\tCLEAR_LINK(pdj->dc_link);\n\t\tpdj->dc_state = 0;\n\t\tpdj->dc_cost = 0;\n\t\t(void) strcpy(pdj->dc_child, jobid);\n\t\t(void) strcpy(pdj->dc_svr, host);\n\t\tappend_link(&pdep->dp_jobs, &pdj->dc_link, pdj);\n\t}\n\treturn (pdj);\n}\n\n/**\n * @brief\n * \t\tsend_depend_req - build and send a Register Dependent request\n *\n * @param[in]\tpjob\t-\tjob structure\n * @param[in]\tpparent\t-\tparent job\n * @param[in]\ttype\t-\tdependency 
type\n * @param[in]\top\t-\tdependent job operation\n * @param[in]\tschedhint\t-\tnot used here\n * @param[in]\tpostfunc\t-\tcall back function to issue_to_svr()\n *\n * @return\terror code\n * @retval\t0\t: success\n * @retval\tPBSE_BADHOST\t: Unable to perform dependency\n * @retval\tPBSE_SYSTEM\t: malloc failed\n */\n\nint\nsend_depend_req(job *pjob, struct depend_job *pparent, int type, int op, int schedhint, void (*postfunc)(struct work_task *))\n{\n\tint i;\n\tchar *pc;\n\tstruct batch_request *preq;\n\n\tpreq = alloc_br(PBS_BATCH_RegistDep);\n\tif (preq == NULL) {\n\t\tlog_err(errno, __func__, msg_err_malloc);\n\t\treturn (PBSE_SYSTEM);\n\t}\n\n\tif (get_jattr_str(pjob, JOB_ATR_job_owner) == NULL) {\n\t\tfree_br(preq);\n\t\treturn PBSE_INTERNAL;\n\t}\n\n\tfor (i = 0; i < PBS_MAXUSER; ++i) {\n\t\tpreq->rq_ind.rq_register.rq_owner[i] = get_jattr_str(pjob, JOB_ATR_job_owner)[i];\n\t\tif (preq->rq_ind.rq_register.rq_owner[i] == '@')\n\t\t\tbreak;\n\t}\n\tpreq->rq_ind.rq_register.rq_owner[i] = '\\0';\n\tstrcpy(preq->rq_ind.rq_register.rq_parent, pparent->dc_child);\n\tstrcpy(preq->rq_ind.rq_register.rq_child, pjob->ji_qs.ji_jobid);\n\t/* Append \"@<server_name>\" since server's name may not match host name */\n\tstrcat(preq->rq_ind.rq_register.rq_child, \"@\");\n\tstrcat(preq->rq_ind.rq_register.rq_child, pbs_server_name);\n\t/* kludge for server:port follows */\n\tif ((pc = strchr(server_name, (int) ':')) != NULL) {\n\t\tstrcat(preq->rq_ind.rq_register.rq_child, pc);\n\t}\n\tpreq->rq_ind.rq_register.rq_dependtype = type;\n\tpreq->rq_ind.rq_register.rq_op = op;\n\tstrcpy(preq->rq_host, pparent->dc_svr); /* for issue_to_svr() */\n\n\tpreq->rq_ind.rq_register.rq_cost = 0;\n\n\tif (issue_to_svr(pparent->dc_svr, preq, postfunc) == -1) {\n\t\tsprintf(log_buffer, \"Unable to perform dependency with job %s\", pparent->dc_child);\n\t\treturn (PBSE_BADHOST);\n\t}\n\treturn (0);\n}\n\n/*\n * This section contains general functions for dependency attributes\n *\n * Each attribute has functions for:\n 
*\tDecoding the value string to the machine representation.\n *\tEncoding the internal representation of the attribute to external form.\n *\tSetting the value by =, + or - operators.\n *\tComparing a (decoded) value with the attribute value.\n *\tFreeing the space malloc-ed to the attribute value.\n *\n * The prototypes are declared in \"attribute.h\"\n *\n * ----------------------------------------------------------------------------\n * Attribute functions for attributes of type \"dependency\".\n *\n * The \"encoded\" or external form of the value is a string with sub-strings\n * separated by commas and terminated by a null.\n *\n * The \"decoded\" or internal form is a list of depend (and depend_child)\n * structures, which are defined in job.h.\n * ----------------------------------------------------------------------------\n */\n\nstruct dependnames {\n\tint type;\n\tchar *name;\n} dependnames[] = {\n\t{JOB_DEPEND_TYPE_AFTERSTART, \"after\"},\n\t{JOB_DEPEND_TYPE_AFTEROK, \"afterok\"},\n\t{JOB_DEPEND_TYPE_AFTERNOTOK, \"afternotok\"},\n\t{JOB_DEPEND_TYPE_AFTERANY, \"afterany\"},\n\t{JOB_DEPEND_TYPE_BEFORESTART, \"before\"},\n\t{JOB_DEPEND_TYPE_BEFOREOK, \"beforeok\"},\n\t{JOB_DEPEND_TYPE_BEFORENOTOK, \"beforenotok\"},\n\t{JOB_DEPEND_TYPE_BEFOREANY, \"beforeany\"},\n\t{JOB_DEPEND_TYPE_ON, \"on\"},\n\t{JOB_DEPEND_TYPE_RUNONE, \"runone\"},\n\t{-1, NULL}};\n\n/**\n * @brief\n * \t\tdecode_depend - decode a string into an attr of type dependency\n *\t\tString is of form: depend_type:job_id[:job_id:...][,depend_type:job_id]\n *\n * @param[out]\tpatr\t-\tan attr of type dependency\n * @param[in]\tname\t-\tattribute name\n * @param[in]\trescn\t-\tresource name, unused here\n * @param[in]\tval\t-\tattribute value\n *\n * @return\terror code\n * @retval\t0\t: ok\n * @retval\t>0\t: error\n */\n\nint\ndecode_depend(attribute *patr, char *name, char *rescn, char *val)\n{\n\tint rc;\n\tchar *valwd;\n\n\tif ((val == NULL) || (*val == 0)) 
{\n\t\tfree_depend(patr);\n\t\tpatr->at_flags |= ATR_VFLAG_MODIFY;\n\t\treturn (0);\n\t}\n\n\t/*\n\t * for each sub-string (terminated by comma or new-line),\n\t * add a depend or depend_child structure.\n\t */\n\tvalwd = parse_comma_string(val);\n\twhile (valwd) {\n\t\tif ((rc = build_depend(patr, valwd)) != 0) {\n\t\t\tfree_depend(patr);\n\t\t\treturn (rc);\n\t\t}\n\t\tvalwd = parse_comma_string(NULL);\n\t}\n\n\tpost_attr_set(patr);\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tcpy_jobsvr() - a version of strcat() that watches for an embedded colon\n *\t\tand escapes it with a leading backslash.  This is needed because\n *\t\tthe colon is overloaded, both as the job_id separator within a set of\n *\t\tdepend jobs, and as the server:port separator.  Ugh!\n *\n * @param[in,out]\td\t-\tdestination string\n * @param[in]\ts\t-\tsource string\n */\n\nstatic void\ncpy_jobsvr(char *d, char *s)\n{\n\twhile (*d)\n\t\td++;\n\n\twhile (*s) {\n\t\tif (*s == ':')\n\t\t\t*d++ = '\\\\';\n\t\t*d++ = *s++;\n\t}\n\t*d = '\\0';\n}\n\n/**\n * @brief\n * \t\tdup_depend - duplicate a dependency (see set_depend())\n *\n * @param[in,out]\tpattr\t-\tattribute structure\n * @param[in]\tpd\t-\tptr to dependency list\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t-1\t: failure\n */\n\nstatic int\ndup_depend(attribute *pattr, struct depend *pd)\n{\n\tstruct depend *pnwd;\n\tstruct depend_job *poldj;\n\tstruct depend_job *pnwdj;\n\tint type;\n\n\ttype = pd->dp_type;\n\tif ((pnwd = make_depend(type, pattr)) == 0)\n\t\treturn (-1);\n\n\tpnwd->dp_numexp = pd->dp_numexp;\n\tpnwd->dp_numreg = pd->dp_numreg;\n\tpnwd->dp_released = pd->dp_released;\n\tpnwd->dp_numrun = pd->dp_numrun;\n\tfor (poldj = (struct depend_job *) GET_NEXT(pd->dp_jobs); poldj;\n\t     poldj = (struct depend_job *) GET_NEXT(poldj->dc_link)) {\n\t\tif ((pnwdj = make_dependjob(pnwd, poldj->dc_child,\n\t\t\t\t\t    poldj->dc_svr)) == 0)\n\t\t\treturn (-1);\n\t\tpnwdj->dc_state = poldj->dc_state;\n\t\tpnwdj->dc_cost = 
poldj->dc_cost;\n\t}\n\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tencode_depend - encode dependency attr into attrlist entry\n *\n * @param[in]\tattr\t-\tptr to attribute to encode\n * @param[in,out]\tphead\t-\tptr to head of attrlist list\n * @param[in]\tatname\t-\tattribute name\n * @param[in]\trsname\t-\tresource name or null\n * @param[in]\tmode\t-\tencode mode, unused here\n * @param[out]\trtnl\t-\tReturn ptr to svrattrl\n *\n * @return\terror code\n * @retval\t>0\t: ok, entry created and linked into list\n * @retval\t=0\t: no value to encode, entry not created\n * @retval\t-1\t: if error\n */\n/*ARGSUSED*/\n\nint\nencode_depend(const attribute *attr, pbs_list_head *phead, char *atname, char *rsname, int mode, svrattrl **rtnl)\n{\n\tint ct = 0;\n\tchar cvtbuf[22];\n\tint numdep = 0;\n\tstruct depend *nxdp;\n\tstruct svrattrl *pal;\n\tstruct depend *pdp;\n\tstruct depend_job *pdjb = NULL;\n\tstruct dependnames *pn;\n\n\tif (!attr)\n\t\treturn (-1);\n\tif (!(is_attr_set(attr)))\n\t\treturn (0); /* no values */\n\n\tpdp = (struct depend *) GET_NEXT(attr->at_val.at_list);\n\tif (pdp == NULL)\n\t\treturn (0);\n\n\t/* scan dependency types to compute the needed base size of the svrattrl */\n\n\tfor (nxdp = pdp; nxdp; nxdp = (struct depend *) GET_NEXT(nxdp->dp_link)) {\n\t\tif (nxdp->dp_type == JOB_DEPEND_TYPE_ON)\n\t\t\tct += 30; /* a guess at a reasonable amount of space */\n\t\telse {\n\t\t\tct += 12; /* for longest type */\n\t\t\tpdjb = (struct depend_job *) GET_NEXT(nxdp->dp_jobs);\n\t\t\twhile (pdjb) {\n\t\t\t\tct += PBS_MAXSVRJOBID + PBS_MAXSERVERNAME + 3;\n\t\t\t\tpdjb = (struct depend_job *) GET_NEXT(pdjb->dc_link);\n\t\t\t}\n\t\t}\n\t}\n\n\tif ((pal = attrlist_create(atname, rsname, ct)) == NULL) {\n\t\treturn (-1);\n\t}\n\t*pal->al_value = '\\0';\n\tfor (nxdp = pdp; nxdp; nxdp = (struct depend *) GET_NEXT(nxdp->dp_link)) {\n\t\tif ((nxdp->dp_type != JOB_DEPEND_TYPE_ON) &&\n\t\t    !(pdjb = (struct depend_job *) GET_NEXT(nxdp->dp_jobs)))\n\t\t\tcontinue; /* no 
value, skip this one */\n\t\tif (nxdp != pdp)\n\t\t\tstrcat(pal->al_value, \",\"); /* comma between */\n\t\tpn = &dependnames[nxdp->dp_type];\n\t\tstrcat(pal->al_value, pn->name);\n\t\tif (pn->type == JOB_DEPEND_TYPE_ON) {\n\t\t\tsprintf(cvtbuf, \":%d\", nxdp->dp_numexp);\n\t\t\tstrcat(pal->al_value, cvtbuf);\n\t\t} else {\n\t\t\twhile (pdjb) {\n\t\t\t\tstrcat(pal->al_value, \":\");\n\t\t\t\tcpy_jobsvr(pal->al_value, pdjb->dc_child);\n\t\t\t\tif (*pdjb->dc_svr != '\\0') {\n\t\t\t\t\tstrcat(pal->al_value, \"@\");\n\t\t\t\t\tcpy_jobsvr(pal->al_value, pdjb->dc_svr);\n\t\t\t\t}\n\t\t\t\tpdjb = (struct depend_job *) GET_NEXT(pdjb->dc_link);\n\t\t\t}\n\t\t}\n\t\t++numdep;\n\t}\n\tif (numdep) {\n\t\t/* there are dependencies recorded, add the entry to the list\t*/\n\t\tpal->al_flags = attr->at_flags;\n\t\tappend_link(phead, &pal->al_link, pal);\n\t\tif (rtnl)\n\t\t\t*rtnl = pal;\n\t\treturn (1);\n\t} else {\n\t\t/* there are no dependencies, just the base structure,\t*/\n\t\t/* so remove this svrattrl from the list\t\t*/\n\t\t(void) free(pal);\n\t\tif (rtnl)\n\t\t\t*rtnl = NULL;\n\t\treturn (0);\n\t}\n}\n\n/**\n * @brief\n * \t\tset_depend - set value of attribute of dependency type to another\n * @par Functionality:\n *\t\tA=B --> set of dependencies in A replaced by set in B\n *\t\tA+B --> not defined (INCR is rejected)\n *\t\tA-B --> not defined\n *\n * @param[in,out]\tpattr\t-\tattribute structure\n * @param[in]\tnew\t-\tptr to dependency list\n * @param[in]\top\t-\toperation which needs to be performed\n * \t\t\t\t\t\tEx. 
SET, INCR, DECR.\n *\n * @return\terror code\n * @retval\t0\t: ok\n * @retval\tnon-zero\t: error\n */\n\nint\nset_depend(attribute *attr, attribute *new, enum batch_op op)\n{\n\tstruct depend *pdnew;\n\tstruct depend *pdold;\n\tint rc;\n\n\tassert(attr && new);\n\n\tswitch (op) {\n\t\tcase SET:\n\n\t\t\t/*\n\t\t\t * if the type of dependency entry already exists, we are\n\t\t\t * going to replace it, so get rid of the old and dup the new\n\t\t\t */\n\n\t\t\tpdnew = (struct depend *) GET_NEXT(new->at_val.at_list);\n\t\t\twhile (pdnew) {\n\t\t\t\tpdold = find_depend(pdnew->dp_type, attr);\n\t\t\t\tif (pdold)\n\t\t\t\t\tdel_depend(pdold);\n\t\t\t\tif ((rc = dup_depend(attr, pdnew)) != 0)\n\t\t\t\t\treturn (rc);\n\t\t\t\tpdnew = (struct depend *) GET_NEXT(pdnew->dp_link);\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase INCR: /* not defined */\n\t\tcase DECR: /* not defined */\n\t\tdefault:\n\t\t\treturn (PBSE_IVALREQ);\n\t}\n\tpost_attr_set(attr);\n\treturn (0);\n}\n\n/*\n * comp_depend - compare two attributes of type dependency\n *\tThis operation is undefined; it always returns -1.\n */\n\nint\ncomp_depend(attribute *attr, attribute *with)\n{\n\treturn (-1);\n}\n\n/**\n * @brief\n * \t\tfree_depend\t-\tfree the dependency link and dependent jobs from the\n * \t\t\t\t\t\tlist of resources in the attribute structure.\n *\n * @param[in,out]\tattr\t-\tattribute structure which contains the resource list to be freed\n */\nvoid\nfree_depend(attribute *attr)\n{\n\tstruct depend *pdp;\n\tstruct depend_job *pdjb;\n\n\twhile ((pdp = (struct depend *)\n\t\t\tGET_NEXT(attr->at_val.at_list)) != NULL) {\n\t\twhile ((pdjb = (struct depend_job *)\n\t\t\t\tGET_NEXT(pdp->dp_jobs)) != NULL) {\n\t\t\tdelete_link(&pdjb->dc_link);\n\t\t\t(void) free(pdjb);\n\t\t}\n\t\tdelete_link(&pdp->dp_link);\n\t\t(void) free(pdp);\n\t}\n\t/* free any data cached for stats */\n\tif (attr->at_user_encoded != NULL || attr->at_priv_encoded != NULL) 
{\n\t\tfree_svrcache(attr);\n\t}\n\tmark_attr_not_set(attr);\n}\n\n/**\n * @brief\n * \t\tbuild_depend -  build a dependency structure\n * \t\tparse the string and turn it into a list of depend structures\n *\n * @param[in,out]\tattr\t-\tattribute structure which contains the dependency structure\n * @param[in]\tvalue\t-\tattribute value which is a set of sub-string (terminated by comma or new-line)\n *\n * @return\terror code\n * @retval\t0\t: success\n * @retval\tnon-zero\t: error number\n */\n\nstatic int\nbuild_depend(attribute *pattr, char *value)\n{\n\tchar fhn[PBS_MAXHOSTNAME + 1];\n\tstruct depend *have[JOB_DEPEND_NUMBER_TYPES];\n\tint i;\n\tint numwds;\n\tstruct depend *pd;\n\tstruct depend_job *pdjb;\n\tstruct dependnames *pname;\n\tchar *pwhere;\n\tchar *valwd;\n\tchar *nxwrd;\n\tint type;\n\n\t/*\n\t * Map first subword into dependency type.\n\t */\n\n\tif ((nxwrd = strchr(value, (int) ':')) != NULL)\n\t\t*nxwrd++ = '\\0';\n\telse\n\t\t/* dependency can never be without ':<value>' */\n\t\treturn (PBSE_BADATVAL);\n\n\tif (*nxwrd == '\\0')\n\t\t/* dependency can never be without a job-id or a number */\n\t\treturn (PBSE_BADATVAL);\n\n\tfor (pname = dependnames; pname->type != -1; pname++)\n\t\tif (!strcmp(value, pname->name))\n\t\t\tbreak;\n\n\tif (pname->type == -1)\n\t\treturn (PBSE_BADATVAL);\n\ttype = pname->type;\n\n\t/* what types do we have already? 
*/\n\n\tfor (i = 0; i < JOB_DEPEND_NUMBER_TYPES; i++)\n\t\thave[i] = NULL;\n\tfor (pd = (struct depend *) GET_NEXT(pattr->at_val.at_list);\n\t     pd; pd = (struct depend *) GET_NEXT(pd->dp_link))\n\t\thave[pd->dp_type] = pd;\n\n\tif ((pd = have[type]) == NULL) {\n\t\tpd = make_depend(type, pattr);\n\t\tif (pd == NULL)\n\t\t\treturn (PBSE_SYSTEM);\n\t}\n\n\t/* now process the value string */\n\n\tnumwds = 0;\n\twhile (nxwrd && (*nxwrd != '\\0')) {\n\n\t\tnumwds++; /* number of arguments */\n\t\tvalwd = nxwrd;\n\n\t\t/* find end of next word delimited by a : but not a '\\:' */\n\n\t\twhile (((*nxwrd != ':') || (*(nxwrd - 1) == '\\\\')) && *nxwrd)\n\t\t\tnxwrd++;\n\t\tif (*nxwrd)\n\t\t\t*nxwrd++ = '\\0';\n\n\t\t/* now process word (argument) depending on \"depend type\" */\n\n\t\tif (type == JOB_DEPEND_TYPE_ON) {\n\n\t\t\t/* a single word argument, a count */\n\n\t\t\tif (numwds == 1) {\n\t\t\t\tpd->dp_numexp = strtol(valwd, &pwhere, 10);\n\t\t\t\tif ((pd->dp_numexp < 1) ||\n\t\t\t\t    (pwhere && (*pwhere != '\\0'))) {\n\t\t\t\t\treturn (PBSE_BADATVAL);\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\treturn (PBSE_BADATVAL);\n\t\t\t}\n\n\t\t} else { /* all other dependency types */\n\n\t\t\t/* a set of job_id[\\:port][@server[\\:port]] */\n\n\t\t\tpdjb = (struct depend_job *) malloc(sizeof(*pdjb));\n\t\t\tif (pdjb) {\n\t\t\t\tCLEAR_LINK(pdjb->dc_link);\n\t\t\t\tpdjb->dc_state = 0;\n\t\t\t\tpdjb->dc_cost = 0;\n\t\t\t\tpdjb->dc_svr[0] = '\\0';\n\t\t\t\tpwhere = pdjb->dc_child;\n\n\t\t\t\twhile (*valwd) {\n\t\t\t\t\tif (*valwd == '@') { /* switch to @server */\n\t\t\t\t\t\t*pwhere = '\\0';\n\t\t\t\t\t\tpwhere = pdjb->dc_svr;\n\t\t\t\t\t} else if ((*valwd == '\\\\') && (*(valwd + 1) == ':')) {\n\t\t\t\t\t\t*pwhere++ = *++valwd; /* skip over '\\' */\n\t\t\t\t\t} else {\n\t\t\t\t\t\t*pwhere++ = *valwd; /* copy jobid */\n\t\t\t\t\t}\n\t\t\t\t\t++valwd;\n\t\t\t\t}\n\t\t\t\t*pwhere = '\\0';\n\n\t\t\t\tif (pdjb->dc_svr[0] == '\\0') {\n\t\t\t\t\tpwhere = strchr(pdjb->dc_child, (int) 
'.');\n\t\t\t\t\tif (pwhere) {\n\t\t\t\t\t\tpwhere++;\n\t\t\t\t\t\tif (strncmp(pwhere, pbs_conf.pbs_server_name, PBS_MAXSERVERNAME) == 0) {\n\t\t\t\t\t\t\t(void) strcpy(pdjb->dc_svr, pbs_default());\n\t\t\t\t\t\t} else if (get_fullhostname(pwhere, fhn, (sizeof(fhn) - 1)) == 0) {\n\t\t\t\t\t\t\t(void) strcpy(pdjb->dc_svr, fhn);\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t(void) free(pdjb);\n\t\t\t\t\t\t\treturn (PBSE_BADATVAL);\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\t(void) free(pdjb);\n\t\t\t\t\t\treturn (PBSE_BADATVAL);\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tappend_link(&pd->dp_jobs, &pdjb->dc_link, pdjb);\n\t\t\t} else {\n\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t}\n\t\t}\n\t}\n\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tclear_depend - clear a single dependency set\n *\t\tIf the \"exist\" flag is set, any depend_job sub-structures are freed.\n *\n * @param[out]\tpd\t-\ta single dependency set\n * @param[in]\ttype\t-\tfreed dependency type\n * @param[in]\texist\t-\tIf the \"exist\" flag is set,\n * \t\t\t\t\t\t\tany depend_job sub-structures are freed.\n */\n\nstatic void\nclear_depend(struct depend *pd, int type, int exist)\n{\n\tstruct depend_job *pdj;\n\n\tif (exist) {\n\t\twhile ((pdj = (struct depend_job *)\n\t\t\t\tGET_NEXT(pd->dp_jobs)) != NULL) {\n\t\t\tdel_depend_job(pdj);\n\t\t}\n\t} else {\n\t\tCLEAR_HEAD(pd->dp_jobs);\n\t\tCLEAR_LINK(pd->dp_link);\n\t}\n\tpd->dp_type = type;\n\tpd->dp_numexp = 0;\n\tpd->dp_numreg = 0;\n\tpd->dp_released = 0;\n\tpd->dp_numrun = 0;\n}\n\n/**\n * @brief\n * \t\tdel_depend - delete a single dependency set, including any depend_jobs\n *\n * @param[in,out]\tpd\t-\ta single dependency set\n */\n\nstatic void\ndel_depend(struct depend *pd)\n{\n\tstruct depend_job *pdj;\n\n\twhile ((pdj = (struct depend_job *) GET_NEXT(pd->dp_jobs)) != NULL) {\n\t\tdel_depend_job(pdj);\n\t}\n\tdelete_link(&pd->dp_link);\n\t(void) free(pd);\n}\n\n/**\n * @brief\n * \t\tdel_depend_job - delete a single depend_job structure\n *\n *  
@param[in,out]\tpdj\t-\ta single depend_job structure\n */\n\nstatic void\ndel_depend_job(struct depend_job *pdj)\n{\n\tdelete_link(&pdj->dc_link);\n\t(void) free(pdj);\n}\n"
  },
  {
    "path": "src/server/req_rerun.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    req_rerun.c\n *\n * @brief\n * \t\treq_rerun.c - functions dealing with a Rerun Job Request\n *\n * Included functions are:\n * \tpost_rerun()\n * \tforce_reque()\n * \treq_rerunjob()\n * \ttimeout_rerun_request()\n * \treq_rerunjob2()\n *\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <sys/types.h>\n#include \"libpbs.h\"\n#include <signal.h>\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"work_task.h\"\n#include \"attribute.h\"\n#include \"server.h\"\n#include \"credential.h\"\n#include \"batch_request.h\"\n#include \"job.h\"\n#include \"pbs_error.h\"\n#include \"log.h\"\n#include \"acct.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"net_connect.h\"\n\n/* Private Function local to this file */\nstatic int req_rerunjob2(struct batch_request *preq, job *pjob);\n\n/* Global Data Items: */\n\nextern char *msg_manager;\nextern char *msg_jobrerun;\nextern time_t time_now;\nextern job *chk_job_request(char *, struct batch_request *, int *, int *);\n\n/**\n * @brief\n * \t\tpost_rerun - handler for reply from mom on signal_job sent in req_rerunjob\n *\t\tIf mom acknowledged the signal, then all is ok.\n *\t\tIf mom rejected the signal for unknown jobid, and force is set by the\n *\t\toriginal client for a non manager as indicated 
by the preq->rq_extra being zero,\n *\t\tthen do local requeue.\n *\n * @param[in]\tpwt\t-\twork task structure which contains the reply from mom\n */\n\nvoid\npost_rerun(struct work_task *pwt)\n{\n\tjob *pjob;\n\tstruct batch_request *preq;\n\tstruct depend *pdep;\n\n\tpreq = (struct batch_request *) pwt->wt_parm1;\n\n\tpjob = find_job(preq->rq_ind.rq_signal.rq_jid);\n\n\tif (pjob != NULL) {\n\t\tif (preq->rq_reply.brp_code != 0) {\n\t\t\tsprintf(log_buffer, \"rerun signal reject by mom: %d\",\n\t\t\t\tpreq->rq_reply.brp_code);\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  preq->rq_ind.rq_signal.rq_jid, log_buffer);\n\n\t\t\tif (pjob->ji_pmt_preq != NULL)\n\t\t\t\treply_preempt_jobs_request(preq->rq_reply.brp_code, PREEMPT_METHOD_REQUEUE, pjob);\n\t\t} else {\n\t\t\t/* mom acknowledged to rerun the job, release depend hold on run-one dependency */\n\t\t\tpdep = find_depend(JOB_DEPEND_TYPE_RUNONE, get_jattr(pjob, JOB_ATR_depend));\n\t\t\tif (pdep != NULL)\n\t\t\t\tdepend_runone_release_all(pjob);\n\t\t}\n\t}\n\n\trelease_req(pwt);\n\treturn;\n}\n\n/**\n * @brief\n * \t\tforce_reque - requeue (rerun) a job\n *\n * @param[in,out]\tpwt\t-\tjob which needs to be rerun\n */\nvoid\nforce_reque(job *pjob)\n{\n\tchar newstate;\n\tint newsubstate;\n\tstruct batch_request *preq;\n\tchar hook_msg[HOOK_MSG_SIZE] = {0};\n\tint rc;\n\n\tpjob->ji_qs.ji_obittime = time_now;\n\tset_jattr_l_slim(pjob, JOB_ATR_obittime, pjob->ji_qs.ji_obittime, SET);\n\n\t/* Allocate space for the jobobit hook event params */\n\tpreq = alloc_br(PBS_BATCH_JobObit);\n\tif (preq == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"rq_jobobit alloc failed\");\n\t} else {\n\t\tpreq->rq_ind.rq_obit.rq_pjob = pjob;\n\t\trc = process_hooks(preq, hook_msg, sizeof(hook_msg), pbs_python_set_interrupt);\n\t\tif (rc == -1) {\n\t\t\tlog_err(-1, __func__, \"rq_jobobit force_reque process_hooks call failed\");\n\t\t}\n\t\tfree_br(preq);\n\t}\n\n\tpjob->ji_momhandle = -1;\n\tpjob->ji_mom_prot 
= PROT_INVALID;\n\n\tif ((is_jattr_set(pjob, JOB_ATR_resc_released))) {\n\t\t/* If JOB_ATR_resc_released attribute is set and we are trying to rerun a job\n\t\t * then we need to reassign resources first because\n\t\t * when we suspend a job we don't decrement all the resources.\n\t\t * So we need to set partially released resources\n\t\t * back again to release all other resources\n\t\t */\n\t\tset_resc_assigned(pjob, 0, INCR);\n\t\tfree_jattr(pjob, JOB_ATR_resc_released);\n\t\tmark_jattr_not_set(pjob, JOB_ATR_resc_released);\n\t\tif (is_jattr_set(pjob, JOB_ATR_resc_released_list)) {\n\t\t\tfree_jattr(pjob, JOB_ATR_resc_released_list);\n\t\t\tmark_jattr_not_set(pjob, JOB_ATR_resc_released_list);\n\t\t}\n\t}\n\n\t/* simulate rerun: free nodes, clear checkpoint flag, and */\n\t/* clear exec_vnode string\t\t\t\t  */\n\n\trel_resc(pjob);\n\n\t/* note in accounting file */\n\taccount_jobend(pjob, pjob->ji_acctrec, PBS_ACCT_RERUN);\n\n\t/*\n\t * Clear any JOB_SVFLG_Actsuspd flag too, as the job is no longer\n\t * suspended (User busy).  
A suspended job is rerun in case of a\n\t * MOM failure after the workstation becomes active(busy).\n\t */\n\tpjob->ji_qs.ji_svrflags &= ~(JOB_SVFLG_Actsuspd | JOB_SVFLG_StagedIn | JOB_SVFLG_CHKPT);\n\tfree_jattr(pjob, JOB_ATR_exec_host);\n\tfree_jattr(pjob, JOB_ATR_exec_host2);\n\tfree_jattr(pjob, JOB_ATR_exec_vnode);\n\t/* job dir has no meaning for re-queued jobs, so unset it */\n\tfree_jattr(pjob, JOB_ATR_jobdir);\n\tunset_extra_attributes(pjob);\n\tsvr_evaljobstate(pjob, &newstate, &newsubstate, 1);\n\tsvr_setjobstate(pjob, newstate, newsubstate);\n}\n\n/**\n * @brief\n * \t\treq_rerunjob - service the Rerun Job Request\n *\n *\t\tThis request Reruns a job by:\n *\t\tsending to MOM a signal job request with SIGKILL\n *\t\tmarking the job as being rerun by setting the substate.\n *\n *  @param[in,out]\tpreq\t-\tJob Request\n */\n\nvoid\nreq_rerunjob(struct batch_request *preq)\n{\n\tint anygood = 0;\n\tint i;\n\tchar jid[PBS_MAXSVRJOBID + 1];\n\tint jt; /* job type */\n\tchar sjst;\n\tchar *pc;\n\tjob *pjob;\n\tjob *parent;\n\tchar *range;\n\tint start;\n\tint end;\n\tint step;\n\tint count;\n\tint err = PBSE_NONE;\n\n\tsnprintf(jid, sizeof(jid), \"%s\", preq->rq_ind.rq_signal.rq_jid);\n\tparent = chk_job_request(jid, preq, &jt, &err);\n\tif (parent == NULL) {\n\t\tpjob = find_job(jid);\n\t\tif (pjob != NULL && pjob->ji_pmt_preq != NULL)\n\t\t\treply_preempt_jobs_request(err, PREEMPT_METHOD_REQUEUE, pjob);\n\t\treturn; /* note, req_reject already called */\n\t}\n\n\tif ((preq->rq_perm & (ATR_DFLAG_MGWR | ATR_DFLAG_OPWR)) == 0) {\n\t\tif (parent->ji_pmt_preq != NULL)\n\t\t\treply_preempt_jobs_request(PBSE_BADSTATE, PREEMPT_METHOD_REQUEUE, parent);\n\t\treq_reject(PBSE_PERM, 0, preq);\n\t\treturn;\n\t}\n\n\tif (jt == IS_ARRAY_NO) {\n\n\t\t/* just a regular job, pass it on down the line and be done */\n\n\t\treq_rerunjob2(preq, parent);\n\t\treturn;\n\n\t} else if (jt == IS_ARRAY_Single) {\n\t\t/* single subjob, if running can signal */\n\t\tpjob = 
get_subjob_and_state(parent, get_index_from_jid(jid), &sjst, NULL);\n\t\tif (sjst == JOB_STATE_LTR_UNKNOWN) {\n\t\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\t\treturn;\n\t\t} else if (pjob && sjst == JOB_STATE_LTR_RUNNING) {\n\t\t\treq_rerunjob2(preq, pjob);\n\t\t} else {\n\t\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\t\treturn;\n\t\t}\n\t\treturn;\n\n\t} else if (jt == IS_ARRAY_ArrayJob) {\n\n\t\t/* The Array Job itself ... */\n\n\t\tif (!check_job_state(parent, JOB_STATE_LTR_BEGUN)) {\n\t\t\tif (parent->ji_pmt_preq != NULL)\n\t\t\t\treply_preempt_jobs_request(PBSE_BADSTATE, PREEMPT_METHOD_REQUEUE, parent);\n\t\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\t\treturn;\n\t\t}\n\n\t\t/* for each subjob that is running, call req_rerunjob2 */\n\n\t\t++preq->rq_refct; /* protect the request/reply struct */\n\n\t\t/* Setting deleted subjobs count to 0,\n\t\t * since all the deleted subjobs will be moved to Q state\n\t\t */\n\t\tparent->ji_ajinfo->tkm_dsubjsct = 0;\n\n\t\tfor (i = parent->ji_ajinfo->tkm_start; i <= parent->ji_ajinfo->tkm_end; i += parent->ji_ajinfo->tkm_step) {\n\t\t\tpjob = get_subjob_and_state(parent, i, &sjst, NULL);\n\t\t\tif (sjst == JOB_STATE_LTR_UNKNOWN)\n\t\t\t\tcontinue;\n\t\t\tif (pjob) {\n\t\t\t\tif (sjst == JOB_STATE_LTR_RUNNING)\n\t\t\t\t\tdup_br_for_subjob(preq, pjob, req_rerunjob2);\n\t\t\t\telse\n\t\t\t\t\tforce_reque(pjob);\n\t\t\t} else {\n\t\t\t\tupdate_sj_parent(parent, NULL, create_subjob_id(parent->ji_qs.ji_jobid, i), sjst, JOB_STATE_LTR_QUEUED);\n\t\t\t}\n\t\t}\n\t\t/* if not waiting on any running subjobs, can reply; else */\n\t\t/* it is taken care of when last running subjob responds  */\n\t\tif (--preq->rq_refct == 0)\n\t\t\treply_send(preq);\n\t\treturn;\n\t}\n\t/* what's left to handle is a range of subjobs; for each subjob\n\t * that is running, call req_rerunjob2\n\t */\n\n\trange = get_range_from_jid(jid);\n\tif (range == NULL) {\n\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\treturn;\n\t}\n\n\t/* now do the deed 
*/\n\n\t++preq->rq_refct; /* protect the request/reply struct */\n\n\twhile (1) {\n\t\tif ((i = parse_subjob_index(range, &pc, &start, &end, &step, &count)) == -1) {\n\t\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\t\tbreak;\n\t\t} else if (i == 1)\n\t\t\tbreak;\n\t\tfor (i = start; i <= end; i += step) {\n\t\t\tpjob = get_subjob_and_state(parent, i, &sjst, NULL);\n\t\t\tif (pjob && sjst == JOB_STATE_LTR_RUNNING) {\n\t\t\t\tanygood++;\n\t\t\t\tdup_br_for_subjob(preq, pjob, req_rerunjob2);\n\t\t\t}\n\t\t}\n\t\trange = pc;\n\t}\n\n\tif (anygood == 0) {\n\t\tpreq->rq_refct--;\n\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\treturn;\n\t}\n\n\t/* if not waiting on any running subjobs, can reply; else */\n\t/* it is taken care of when last running subjob responds  */\n\tif (--preq->rq_refct == 0)\n\t\treply_send(preq);\n\treturn;\n}\n\n/**\n * @brief\n * \t\tFunction that causes a rerun request to return with a timeout message.\n *\n * @param[in,out]\tpwt\t-\twork task which contains the job structure which holds the rerun request\n */\nstatic void\ntimeout_rerun_request(struct work_task *pwt)\n{\n\tjob *pjob = (job *) pwt->wt_parm1;\n\tconn_t *conn = NULL;\n\n\tif ((pjob == NULL) || (pjob->ji_rerun_preq == NULL)) {\n\t\treturn; /* nothing to timeout */\n\t}\n\tif (pjob->ji_rerun_preq->rq_conn != PBS_LOCAL_CONNECTION) {\n\t\tconn = get_conn(pjob->ji_rerun_preq->rq_conn);\n\t}\n\treply_text(pjob->ji_rerun_preq, PBSE_INTERNAL,\n\t\t   \"Response timed out. 
Job rerun request still in progress for\");\n\n\t/* clear no-timeout flag on connection */\n\tif (conn)\n\t\tconn->cn_authen &= ~PBS_NET_CONN_NOTIMEOUT;\n\n\tpjob->ji_rerun_preq = NULL;\n}\n\n/**\n * @brief\n * \t\treq_rerunjob2 - service the Rerun Job Request for a single job or subjob\n *\n *  @param[in,out]\tpreq\t-\tJob Request\n *  @param[in,out]\tpjob\t-\tptr to the subjob\n *\n * @return int\n * @retval 0 for Success\n * @retval 1 for Error\n */\nstatic int\nreq_rerunjob2(struct batch_request *preq, job *pjob)\n{\n\tlong force = 0;\n\tstruct work_task *ptask;\n\ttime_t rerun_to;\n\tconn_t *conn;\n\tstruct depend *pdep;\n\tint rc;\n\n\tif (preq->rq_extend && (strcmp(preq->rq_extend, \"force\") == 0))\n\t\tforce = 1;\n\n\t/* the job must be rerunnable or force must be on */\n\n\tif ((get_jattr_long(pjob, JOB_ATR_rerunable) == 0) &&\n\t    (force == 0)) {\n\t\tif (pjob->ji_pmt_preq != NULL)\n\t\t\treply_preempt_jobs_request(PBSE_NORERUN, PREEMPT_METHOD_REQUEUE, pjob);\n\t\treq_reject(PBSE_NORERUN, 0, preq);\n\t\treturn 1;\n\t}\n\n\t/* the job must be running */\n\n\tif (!check_job_state(pjob, JOB_STATE_LTR_RUNNING)) {\n\t\tif (pjob->ji_pmt_preq != NULL)\n\t\t\treply_preempt_jobs_request(PBSE_BADSTATE, PREEMPT_METHOD_REQUEUE, pjob);\n\n\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\treturn 1;\n\t}\n\t/* a node failure tolerant job could be waiting for healthy nodes\n\t * and it would have a JOB_SUBSTATE_PRERUN substate.\n\t */\n\tif ((!check_job_substate(pjob, JOB_SUBSTATE_RUNNING)) &&\n\t    (!check_job_substate(pjob, JOB_SUBSTATE_PRERUN)) && (force == 0)) {\n\t\tif (pjob->ji_pmt_preq != NULL)\n\t\t\treply_preempt_jobs_request(PBSE_BADSTATE, PREEMPT_METHOD_REQUEUE, pjob);\n\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\treturn 1;\n\t}\n\n\t/* ask MOM to kill off the job */\n\n\trc = issue_signal(pjob, SIG_RERUN, post_rerun, NULL);\n\n\t/*\n\t * If force is set and the request is from a PBS manager,\n\t * the job is re-queued regardless of whether the issue_signal\n\t * to MoM was a success or failure.\n\t * 
Eventually, when the mom updates server about the job,\n\t * server sends a discard message to mom and job is\n\t * then deleted from mom as well.\n\t */\n\tif (force == 1) {\n\t\t/* Mom is down and issue signal failed or\n\t\t * request is from a manager and \"force\" is on,\n\t\t * force the requeue */\n\t\tif (pjob->ji_pmt_preq != NULL)\n\t\t\treply_preempt_jobs_request(rc, PREEMPT_METHOD_REQUEUE, pjob);\n\n\t\tpjob->ji_qs.ji_un.ji_exect.ji_exitstat = JOB_EXEC_RERUN;\n\t\tset_job_substate(pjob, JOB_SUBSTATE_RERUN3);\n\n\t\tdiscard_job(pjob, \"Force rerun\", 0);\n\t\tpjob->ji_discarding = 1;\n\t\t/**\n\t\t * force_reque will be called in post_discard_job,\n\t\t * after receiving IS_DISCARD_DONE from the MOM.\n\t\t */\n\t\tpdep = find_depend(JOB_DEPEND_TYPE_RUNONE, get_jattr(pjob, JOB_ATR_depend));\n\t\tif (pdep != NULL)\n\t\t\tdepend_runone_release_all(pjob);\n\t\treply_ack(preq);\n\t\treturn 0;\n\t}\n\n\tif (rc != 0) {\n\t\tif (pjob->ji_pmt_preq != NULL)\n\t\t\treply_preempt_jobs_request(rc, PREEMPT_METHOD_REQUEUE, pjob);\n\t\treq_reject(rc, 0, preq);\n\t\treturn 1;\n\t}\n\n\t/* So job has run and is to be rerun (not restarted) */\n\n\tpjob->ji_qs.ji_svrflags = (pjob->ji_qs.ji_svrflags &\n\t\t\t\t   ~(JOB_SVFLG_CHKPT | JOB_SVFLG_ChkptMig)) |\n\t\t\t\t  JOB_SVFLG_HASRUN;\n\tsvr_setjobstate(pjob, JOB_STATE_LTR_RUNNING, JOB_SUBSTATE_RERUN);\n\n\tsprintf(log_buffer, msg_manager, msg_jobrerun,\n\t\tpreq->rq_user, preq->rq_host);\n\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\n\t/* The following means we've detected an outstanding rerun request  */\n\t/* for the same job which should not happen. But if it does, let's  */\n\t/* ack that previous request to also free up its request structure. 
*/\n\tif (pjob->ji_rerun_preq != NULL) {\n\t\treply_ack(pjob->ji_rerun_preq);\n\t}\n\tpjob->ji_rerun_preq = preq;\n\n\t/* put a timeout on the rerun request so that it doesn't hang \t*/\n\t/* indefinitely; if it does, the scheduler would also hang on a */\n\t/* requeue request  */\n\ttime_now = time(NULL);\n\tif (!is_sattr_set(SVR_ATR_JobRequeTimeout))\n\t\trerun_to = time_now + PBS_DIS_TCP_TIMEOUT_RERUN;\n\telse\n\t\trerun_to = time_now + get_sattr_long(SVR_ATR_JobRequeTimeout);\n\tptask = set_task(WORK_Timed, rerun_to, timeout_rerun_request, pjob);\n\tif (ptask) {\n\t\t/* this ensures that the ptask created gets cleared in case */\n\t\t/* pjob gets deleted before the task is served */\n\t\tappend_link(&pjob->ji_svrtask, &ptask->wt_linkobj, ptask);\n\t}\n\n\t/* set no-timeout flag on connection to client */\n\tif (preq->rq_conn != PBS_LOCAL_CONNECTION) {\n\t\tconn = get_conn(preq->rq_conn);\n\t\tif (conn)\n\t\t\tconn->cn_authen |= PBS_NET_CONN_NOTIMEOUT;\n\t}\n\n\treturn 0;\n}\n"
  },
  {
    "path": "src/server/req_rescq.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/*\n * @file\treq_rescq.c\n *\n * @brief\n * \treq_rescq.c\t-\tFunctions relating to the Resource Query Batch Request.\n *\n * Included functions are:\n *\tresv_idle_delete()\n *\tcnvrt_qmove()\n *\tresv_timer_init()\n *\tassign_resv_resc()\n *\treq_confirmresv()\n *\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/types.h>\n#include \"libpbs.h\"\n#include <ctype.h>\n#include <errno.h>\n#include <stdlib.h>\n#include <stdio.h>\n#include <string.h>\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"server.h\"\n#include \"batch_request.h\"\n#include \"resv_node.h\"\n#include \"pbs_nodes.h\"\n#include \"queue.h\"\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"sched_cmds.h\"\n#include \"work_task.h\"\n#include \"credential.h\"\n#include \"pbs_error.h\"\n#include \"svrfunc.h\"\n#include \"log.h\"\n#include \"acct.h\"\n#include \"pbs_license.h\"\n#include \"libutil.h\"\n\n/* forward definitions to keep the compiler happy */\nstruct name_and_val {\n\tchar *pn;\n\tchar *pv;\n};\n\nint gen_task_Time4resv(resc_resv *);\nvoid revert_alter_reservation(resc_resv *presv);\n\nextern int svr_totnodes;\nextern time_t time_now;\n\nextern int cnvrt_local_move(job *, struct batch_request *);\n\n/**\n * @brief work task to delete 
reservation if there are no jobs in the reservation queue\n *\n * @param[in] ptask - work task\n *\n */\nvoid\nresv_idle_delete(struct work_task *ptask)\n{\n\tresc_resv *presv;\n\tint num_jobs;\n\n\tpresv = ptask->wt_parm1;\n\n\tif (presv == NULL)\n\t\treturn;\n\n\tnum_jobs = presv->ri_qp->qu_numjobs;\n\tif (svr_chk_history_conf()) {\n\t\tnum_jobs -= (presv->ri_qp->qu_njstate[JOB_STATE_MOVED] + presv->ri_qp->qu_njstate[JOB_STATE_FINISHED] +\n\t\t\t     presv->ri_qp->qu_njstate[JOB_STATE_EXPIRED]);\n\t}\n\n\tif (num_jobs == 0) {\n\t\tlog_eventf(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_DEBUG, presv->ri_qs.ri_resvID,\n\t\t\t   \"Deleting reservation after being idle for %d seconds\",\n\t\t\t   get_rattr_long(presv, RESV_ATR_del_idle_time));\n\t\tgen_future_deleteResv(presv, 1);\n\t}\n}\n\n/**\n * @brief if there are no jobs in the reservation queue, set a timer to delete the reservation\n *\n * @param[in]\tpresv - pointer to reservation\n */\nvoid\nset_idle_delete_task(resc_resv *presv)\n{\n\tstruct work_task *wt;\n\tlong retry_time;\n\tint num_jobs;\n\n\tif (presv == NULL)\n\t\treturn;\n\n\tif (!is_rattr_set(presv, RESV_ATR_del_idle_time))\n\t\treturn;\n\n\tnum_jobs = presv->ri_qp->qu_numjobs;\n\tif (svr_chk_history_conf()) {\n\t\tnum_jobs -= (presv->ri_qp->qu_njstate[JOB_STATE_MOVED] + presv->ri_qp->qu_njstate[JOB_STATE_FINISHED] +\n\t\t\t     presv->ri_qp->qu_njstate[JOB_STATE_EXPIRED]);\n\t}\n\n\tif (num_jobs == 0 && presv->ri_qs.ri_state == RESV_RUNNING) {\n\t\tdelete_task_by_parm1_func(presv, resv_idle_delete, DELETE_ONE); /* Delete the previous task if it exists */\n\t\tretry_time = time_now + get_rattr_long(presv, RESV_ATR_del_idle_time);\n\t\tif (retry_time < presv->ri_qs.ri_etime) {\n\t\t\twt = set_task(WORK_Timed, retry_time, resv_idle_delete, presv);\n\t\t\tappend_link(&presv->ri_svrtask, &wt->wt_linkobj, wt);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\t\tqmove a job into a reservation\n *\n * @parm[in]\tpresv\t-\treservation structure\n *\n * @return\tint\n *\n 
* @retval\t0\t: Success\n * @retval\t-1\t: Failure\n *\n */\nint\ncnvrt_qmove(resc_resv *presv)\n{\n\tint rc;\n\tstruct job *pjob;\n\tchar *q_job_id, *at;\n\tstruct batch_request *reqcnvrt;\n\n\tif (gen_task_EndResvWindow(presv)) {\n\t\t(void) resv_purge(presv);\n\t\treturn (-1);\n\t}\n\n\tpjob = find_job(get_rattr_str(presv, RESV_ATR_convert));\n\tif (pjob != NULL)\n\t\tq_job_id = pjob->ji_qs.ji_jobid;\n\telse {\n\t\t(void) resv_purge(presv);\n\t\treturn (-1);\n\t}\n\tif ((reqcnvrt = alloc_br(PBS_BATCH_MoveJob)) == NULL) {\n\t\t(void) resv_purge(presv);\n\t\treturn (-1);\n\t}\n\treqcnvrt->rq_perm = (presv->ri_brp)->rq_perm;\n\tstrcpy(reqcnvrt->rq_user, (presv->ri_brp)->rq_user);\n\tstrcpy(reqcnvrt->rq_host, (presv->ri_brp)->rq_host);\n\n\tsnprintf(reqcnvrt->rq_ind.rq_move.rq_jid, sizeof(reqcnvrt->rq_ind.rq_move.rq_jid), \"%s\", q_job_id);\n\tat = strchr(presv->ri_qs.ri_resvID, (int) '.');\n\tif (at)\n\t\t*at = '\\0';\n\n\tsnprintf(reqcnvrt->rq_ind.rq_move.rq_destin, sizeof(reqcnvrt->rq_ind.rq_move.rq_destin), \"%s\", presv->ri_qs.ri_resvID);\n\tif (at)\n\t\t*at = '.';\n\n\tsnprintf(pjob->ji_qs.ji_destin, PBS_MAXROUTEDEST, \"%s\", reqcnvrt->rq_ind.rq_move.rq_destin);\n\trc = cnvrt_local_move(pjob, reqcnvrt);\n\n\tif (rc != 0)\n\t\treturn (-1);\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tresv_timer_init - initialize timed task for removing empty reservation\n */\nvoid\nresv_timer_init(void)\n{\n\tresc_resv *presv;\n\tpresv = (resc_resv *) GET_NEXT(svr_allresvs);\n\twhile (presv) {\n\t\tif (is_rattr_set(presv, RESV_ATR_del_idle_time))\n\t\t\tset_idle_delete_task(presv);\n\t\tpresv = (resc_resv *) GET_NEXT(presv->ri_allresvs);\n\t}\n}\n\n/*-----------------------------------------------------------------------\n Functions that deal with a resc_resv* rather than a job*\n These may end up being deleted if the two can be easily merged\n -----------------------------------------------------------------------\n */\n\n/**\n * @brief\n * \t\tremove_node_from_resv - 
procedure removes node from reservation\n *\t\tthe node is removed from RESV_ATR_resv_nodes and assigned\n *\t\tresources are accounted back to loaner's pool and finally the\n *\t\treservation is removed from the node\n *\n * @parm[in,out]\tpresv\t-\treservation structure\n * @parm[in,out]\tpnode\t-\tpointer to node\n *\n */\nvoid\nremove_node_from_resv(resc_resv *presv, struct pbsnode *pnode)\n{\n\tchar *begin = NULL;\n\tchar *end = NULL;\n\tchar *tmp_buf;\n\tstruct resvinfo *rinfp, *prev;\n\tattribute tmpatr;\n\n\t/* +2 for colon and termination */\n\ttmp_buf = malloc(strlen(pnode->nd_name) + 2);\n\tif (tmp_buf == NULL) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE, \"malloc failure (errno %d)\", errno);\n\t\tlog_err(PBSE_SYSTEM, __func__, log_buffer);\n\t\treturn;\n\t}\n\n\tsnprintf(tmp_buf, strlen(pnode->nd_name) + 1, \"%s:\", pnode->nd_name);\n\n\t/* remove the node '(vn[n]:foo)' from RESV_ATR_resv_nodes attribute */\n\tif (is_rattr_set(presv, RESV_ATR_resv_nodes)) {\n\t\tif ((begin = strstr(get_rattr_str(presv, RESV_ATR_resv_nodes), tmp_buf)) != NULL) {\n\n\t\t\twhile (*begin != '(')\n\t\t\t\tbegin--;\n\n\t\t\tend = strchr(begin, ')');\n\t\t\tend++;\n\n\t\t\tif (presv->ri_giveback) {\n\t\t\t\t/* resources were actually assigned to this reservation\n\t\t\t\t * we must return the resources back into the loaner's pool.\n\t\t\t\t *\n\t\t\t\t * We use temp attribute for this and this attribute will\n\t\t\t\t * contain only the removed part of resv_nodes and this part will be returned\n\t\t\t\t */\n\t\t\t\t// FIXME: Can we create some utility func for below?\n\t\t\t\tclear_attr(&tmpatr, &resv_attr_def[(int) RESV_ATR_resv_nodes]);\n\t\t\t\tresv_attr_def[(int) RESV_ATR_resv_nodes].at_set(&tmpatr, get_rattr(presv, RESV_ATR_resv_nodes), SET);\n\t\t\t\ttmpatr.at_flags = (get_rattr(presv, RESV_ATR_resv_nodes))->at_flags;\n\n\t\t\t\tstrncpy(tmpatr.at_val.at_str, begin, (end - begin));\n\t\t\t\ttmpatr.at_val.at_str[end - begin] = '\\0';\n\n\t\t\t\tupdate_node_rassn(&tmpatr, 
DECR);\n\n\t\t\t\tresv_attr_def[(int) RESV_ATR_resv_nodes].at_free(&tmpatr);\n\n\t\t\t\t/* Note: We do not want to set presv->ri_giveback to 0 here.\n\t\t\t\t * The resv_nodes may not be empty yet and there could\n\t\t\t\t * be server resources assigned - it will be handled later.\n\t\t\t\t */\n\t\t\t}\n\n\t\t\t/* remove \"(vn[n]:foo)\" from the resv_nodes, no '+' is removed yet */\n\t\t\tmemmove(begin, end, strlen(end) + 1); /* +1 for '\\0' */\n\n\t\t\tif (strlen(get_rattr_str(presv, RESV_ATR_resv_nodes)) == 0) {\n\t\t\t\tfree_rattr(presv, RESV_ATR_resv_nodes);\n\t\t\t\t/* full remove of RESV_ATR_resv_nodes is dangerous;\n\t\t\t\t * the associated job can run anywhere without RESV_ATR_resv_nodes\n\t\t\t\t * so stop the associated queue */\n\t\t\t\tchange_enableORstart(presv, Q_CHNG_START, ATR_FALSE);\n\t\t\t} else {\n\t\t\t\t/* resv_nodes looks like \"+(vn2:foo)\" or \"(vn1:foo)+\" or \"(vn1:foo)++(vn3:bar)\"\n\t\t\t\t * the extra '+' is removed here */\n\t\t\t\tint tmp_len;\n\t\t\t\tchar *nodes = get_rattr_str(presv, RESV_ATR_resv_nodes);\n\n\t\t\t\t/* remove possible leading '+' */\n\t\t\t\tif (nodes[0] == '+')\n\t\t\t\t\tmemmove(nodes, nodes + 1, strlen(nodes));\n\n\t\t\t\t/* remove possible trailing '+' */\n\t\t\t\ttmp_len = strlen(nodes);\n\t\t\t\tif (nodes[tmp_len - 1] == '+')\n\t\t\t\t\tnodes[tmp_len - 1] = '\\0';\n\n\t\t\t\t/* change possible '++' into single '+' */\n\t\t\t\tif ((begin = strstr(nodes, \"++\")) != NULL)\n\t\t\t\t\tmemmove(begin, begin + 1, strlen(begin + 1) + 1);\n\t\t\t\tset_rattr_str_slim(presv, RESV_ATR_resv_nodes, nodes, NULL);\n\t\t\t}\n\t\t}\n\t}\n\n\t/* traverse the reservations of the node and remove the reservation if found */\n\tfor (prev = NULL, rinfp = pnode->nd_resvp; rinfp; prev = rinfp, rinfp = rinfp->next) {\n\t\tif (strcmp(presv->ri_qs.ri_resvID, rinfp->resvp->ri_qs.ri_resvID) == 0) {\n\t\t\tif (prev == NULL)\n\t\t\t\tpnode->nd_resvp = rinfp->next;\n\t\t\telse\n\t\t\t\tprev->next = 
rinfp->next;\n\t\t\tfree(rinfp);\n\t\t\tbreak;\n\t\t}\n\t}\n\n\tfree(tmp_buf);\n}\n\n/**\n * @brief\n * \t\tremove_host_from_resv - it calls remove_node_from_resv() for all\n *\t\tvnodes on the host\n *\n * @parm[in,out]\tpresv\t-\treservation structure\n * @parm[in]\t\thostname -\tstring with hostname\n *\n */\nvoid\nremove_host_from_resv(resc_resv *presv, char *hostname)\n{\n\tpbsnode_list_t *pl = NULL;\n\tpbsnode_list_t *prev = NULL;\n\n\tfor (prev = NULL, pl = presv->ri_pbsnode_list; pl != NULL;) {\n\t\tif (strcmp(pl->vnode->nd_hostname, hostname) == 0) {\n\t\t\tremove_node_from_resv(presv, pl->vnode);\n\t\t\tif (prev == NULL) {\n\t\t\t\tpresv->ri_pbsnode_list = pl->next;\n\t\t\t\tfree(pl);\n\t\t\t\tpl = presv->ri_pbsnode_list;\n\t\t\t} else {\n\t\t\t\tprev->next = pl->next;\n\t\t\t\tfree(pl);\n\t\t\t\tpl = prev->next;\n\t\t\t}\n\t\t} else {\n\t\t\tprev = pl;\n\t\t\tpl = pl->next;\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\tdegrade_overlapping_resv - by traversing all associated nodes\n *\t\tof the presv, search all overlapping reservations and if the\n *\t\treservation is not 'maintenance' and it is confirmed then\n *\t\tdegrade the reservation and wipe the overloaded node from the\n *\t\toverlapping reservation with remove_node_from_resv()\n *\n * @parm[in,out]\tpresv\t-\treservation structure\n *\n */\nvoid\ndegrade_overlapping_resv(resc_resv *presv)\n{\n\tpbsnode_list_t *pl = NULL;\n\tstruct resvinfo *rip;\n\tresc_resv *tmp_presv;\n\tint modified;\n\n\tfor (pl = presv->ri_pbsnode_list; pl != NULL; pl = pl->next) {\n\t\tdo {\n\t\t\tmodified = 0;\n\n\t\t\tfor (rip = pl->vnode->nd_resvp; rip; rip = rip->next) {\n\n\t\t\t\ttmp_presv = rip->resvp;\n\n\t\t\t\tif (tmp_presv->ri_qs.ri_resvID[0] == PBS_MNTNC_RESV_ID_CHAR)\n\t\t\t\t\tcontinue;\n\n\t\t\t\tif (tmp_presv->ri_qs.ri_state == RESV_UNCONFIRMED)\n\t\t\t\t\tcontinue;\n\n\t\t\t\tif (strcmp(presv->ri_qs.ri_resvID, tmp_presv->ri_qs.ri_resvID) != 0 &&\n\t\t\t\t    presv->ri_qs.ri_stime <= tmp_presv->ri_qs.ri_etime 
&&\n\t\t\t\t    presv->ri_qs.ri_etime >= tmp_presv->ri_qs.ri_stime) {\n\n\t\t\t\t\tset_resv_retry(tmp_presv, time_now);\n\n\t\t\t\t\tif (tmp_presv->ri_qs.ri_state == RESV_CONFIRMED) {\n\t\t\t\t\t\tresv_setResvState(tmp_presv, RESV_DEGRADED, RESV_IN_CONFLICT);\n\t\t\t\t\t} else {\n\t\t\t\t\t\tresv_setResvState(tmp_presv, tmp_presv->ri_qs.ri_state, RESV_IN_CONFLICT);\n\t\t\t\t\t}\n\n\t\t\t\t\tremove_host_from_resv(tmp_presv, pl->vnode->nd_hostname);\n\n\t\t\t\t\tresv_save_db(tmp_presv);\n\n\t\t\t\t\t/* we need 'break' here and start over because remove_host_from_resv()\n\t\t\t\t\t * modifies pl->vnode->nd_resvp */\n\t\t\t\t\tmodified = 1;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t} while (modified);\n\t}\n}\n\n/**\n * @brief\n * \t\tassign_resv_resc - function examines the reservation object\n * \t\tand server global parameters to obtain the node specification.\n * \t\tIf the above yields a non-NULL node spec, global function\n * \t\tset_nodes() is called to locate a set of nodes for the subject\n * \t\treservation and allocate them to the reservation - each node\n * \t\tthat's allocated to the reservation gets a resvinfo structure\n * \t\tadded to its list of resvinfo structures and that structure\n * \t\tpoints to the reservation.\n *\n * @parm[in,out]\tpresv\t-\treservation structure\n * @parm[in]\tvnodes\t-\toriginal vnode list from scheduler/operator\n * @parm[in]\tsvr_init\t- the server is recovering jobs and reservations\n *\n * @return\tint\n * @return\t0 : no problems detected in the process\n * @retval\tnon-zero\t: error code if problem occurs\n */\nint\nassign_resv_resc(resc_resv *presv, char *vnodes, int svr_init)\n{\n\tint ret;\n\tchar *node_str = NULL;\n\tchar *host_str = NULL; /* used only as arg to set_nodes */\n\tchar *host_str2 = NULL;\n\tif ((vnodes == NULL) || (*vnodes == '\\0'))\n\t\treturn (PBSE_BADNODESPEC);\n\n\tret = set_nodes((void *) presv, RESC_RESV_OBJECT, vnodes,\n\t\t\t&node_str, &host_str, &host_str2, 0, svr_init);\n\n\tif (ret == 
PBSE_NONE) {\n\t\t/* update resc_resv object's RESV_ATR_resv_nodes attribute */\n\t\tset_rattr_str_slim(presv, RESV_ATR_resv_nodes, node_str, NULL);\n\t}\n\n\treturn (ret);\n}\n\n/**\n * @brief\n * req_confirmresv -\tconfirm an advance or standing reservation and\n * \t\t\tset the assigned resources and optionally the start time.\n *\n *\t\t\tHandle the reconfirmation of a degraded reservation: The\n *\t\t\treconfirmation is handled by altering the reservation's execvnodes with\n *\t\t\talternate execvnodes.\n *\n *\t\t\tHandle the confirmation/denial of reservation alter request.\n *\n * @param\n * preq[in, out]   -\tThe batch request containing the success or failure of a\n * \t\t\treservation confirmation or re-confirmation.\n */\n\nvoid\nreq_confirmresv(struct batch_request *preq)\n{\n\ttime_t newstart = 0;\n\tresc_resv *presv = NULL;\n\tint rc = 0;\n\tint state = 0;\n\tint sub = 0;\n\tint resv_count = 0;\n\tint is_degraded = 0;\n\tint is_confirmed = 0;\n\tchar *execvnodes = NULL;\n\tchar *next_execvnode = NULL;\n\tchar **short_xc = NULL;\n\tchar **tofree = NULL;\n\tint is_being_altered = 0;\n\tchar *tmp_buf = NULL;\n\tsize_t tmp_buf_size = 0;\n\tchar buf[PBS_MAXQRESVNAME + PBS_MAXHOSTNAME + 256] = {0}; /* FQDN resvID+text */\n\tchar *partition_name = NULL;\n\n\tif ((preq->rq_perm & (ATR_DFLAG_MGWR | ATR_DFLAG_OPWR)) == 0) {\n\t\treq_reject(PBSE_PERM, 0, preq);\n\t\treturn;\n\t}\n\n\tpresv = find_resv(preq->rq_ind.rq_run.rq_jid);\n\tif (presv == NULL) {\n\t\treq_reject(PBSE_UNKRESVID, 0, preq);\n\t\treturn;\n\t}\n\tis_degraded = (presv->ri_qs.ri_substate == RESV_DEGRADED || presv->ri_qs.ri_substate == RESV_IN_CONFLICT) ? 1 : 0;\n\tis_being_altered = presv->ri_alter.ra_flags;\n\tis_confirmed = (presv->ri_qs.ri_substate == RESV_CONFIRMED) ? 
1 : 0;\n\n\tDBPRT((\"resv_name=%s, is_degraded=%d, is_being_altered=%d, is_confirmed=%d\",\n\t       presv->ri_qs.ri_resvID, is_degraded, is_being_altered, is_confirmed));\n\n\tpresv->rep_sched_count++;\n\n\t/* Check if preq is coming from scheduler */\n\tif (preq->rq_extend == NULL) {\n\t\treq_reject(PBSE_resvFail, 0, preq);\n\t\treturn;\n\t}\n\n\t/* If the reservation was degraded and it could not be reconfirmed by the\n\t * scheduler, then the retry time for that reservation is reset to the half-\n\t * time between now and the time to reservation start or, if the retry time\n\t * is invalid, set it to some time after the soonest occurrence is to start\n\t */\n\tif (strcmp(preq->rq_extend, PBS_RESV_CONFIRM_FAIL) == 0) {\n\t\tint force_requested = FALSE;\n\t\tif (is_degraded && !is_being_altered) {\n\t\t\tlong retry_time;\n\t\t\tretry_time = determine_resv_retry(presv);\n\n\t\t\tset_resv_retry(presv, retry_time);\n\n\t\t} else {\n\t\t\tif (presv->rep_sched_count >= presv->req_sched_count) {\n\t\t\t\t/* Clients waiting on an interactive request must be\n\t\t\t\t * notified of the failure to confirm\n\t\t\t\t */\n\t\t\t\tif ((presv->ri_brp != NULL) && is_rattr_set(presv, RESV_ATR_interactive)) {\n\t\t\t\t\tif (!(presv->ri_alter.ra_flags & RESV_ALTER_FORCED)) {\n\t\t\t\t\t\t(get_rattr(presv, RESV_ATR_interactive))->at_flags &= ~ATR_VFLAG_SET;\n\t\t\t\t\t\tsnprintf(buf, sizeof(buf), \"%s DENIED\", presv->ri_qs.ri_resvID);\n\t\t\t\t\t\t(void) reply_text(presv->ri_brp, PBSE_NONE, buf);\n\t\t\t\t\t\tpresv->ri_brp = NULL;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (!is_being_altered && !is_confirmed) {\n\t\t\t\t\tlog_event(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_INFO, presv->ri_qs.ri_resvID, \"Reservation denied\");\n\t\t\t\t\t(void) snprintf(log_buffer, sizeof(log_buffer), \"requestor=%s@%s\", msg_daemonname, server_host);\n\t\t\t\t\taccount_recordResv(PBS_ACCT_DRss, presv, log_buffer);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_RESV, LOG_NOTICE, 
presv->ri_qs.ri_resvID, \"reservation deleted\");\n\t\t\t\t\tresv_purge(presv);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif (presv->ri_qs.ri_state == RESV_BEING_ALTERED) {\n\t\t\tif (!(presv->ri_alter.ra_flags & RESV_ALTER_FORCED)) {\n\t\t\t\trevert_alter_reservation(presv);\n\t\t\t\tlog_event(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_INFO,\n\t\t\t\t\t  presv->ri_qs.ri_resvID, \"Reservation alter denied\");\n\t\t\t} else if (presv->rep_sched_count >= presv->req_sched_count)\n\t\t\t\tforce_requested = TRUE;\n\t\t}\n\t\tif (is_being_altered)\n\t\t\tfree_rattr(presv, RESV_ATR_alter_revert);\n\n\t\tif (force_requested == FALSE) {\n\t\t\treply_ack(preq);\n\t\t\treturn;\n\t\t} else {\n\t\t\t/* This can only happen when ralter was requested with -Wforce option.\n\t\t\t * Even though all schedulers have rejected the change, enforce it.\n\t\t\t */\n\t\t\tpresv->ri_alter.ra_flags &= ~RESV_ALTER_FORCED;\n\t\t\tfree(preq->rq_extend);\n\t\t\tif (pbs_asprintf(&preq->rq_extend, \"%s:partition=%s\", PBS_RESV_CONFIRM_SUCCESS,\n\t\t\t\t\t get_rattr_str(presv, RESV_ATR_partition)) == -1) {\n\t\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\t/* set start time and destination in the preq structure */\n\t\t\tif (is_rattr_set(presv, RESV_ATR_start))\n\t\t\t\tpreq->rq_ind.rq_run.rq_resch = get_rattr_long(presv, RESV_ATR_start);\n\t\t\tif (is_rattr_set(presv, RESV_ATR_resv_nodes)) {\n\t\t\t\tpreq->rq_ind.rq_run.rq_destin = create_resv_destination(presv);\n\t\t\t\tif (preq->rq_ind.rq_run.rq_destin == NULL) {\n\t\t\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tif (is_being_altered)\n\t\tfree_rattr(presv, RESV_ATR_alter_revert);\n\n\t/* if passed in the confirmation, set a new start time */\n\tif ((newstart = (time_t) preq->rq_ind.rq_run.rq_resch) != 0) {\n\t\tpresv->ri_qs.ri_stime = newstart;\n\t\tset_rattr_l_slim(presv, RESV_ATR_start, newstart, SET);\n\n\t\tpresv->ri_qs.ri_etime = newstart + 
presv->ri_qs.ri_duration;\n\t\tset_rattr_l_slim(presv, RESV_ATR_end, presv->ri_qs.ri_etime, SET);\n\t}\n\n\t/* The main difference between an advance reservation and a standing\n\t * reservation is the format of the execvnodes returned by \"rq_destin\":\n\t * An advance reservation has a single execvnode while a standing reservation\n\t * has a string with the particular format:\n\t *    <num_resv>#<execvnode1>[<range>]<execvnode2>[...\n\t * describing the execvnodes associated with each occurrence.\n\t */\n\tif (get_rattr_str(presv, RESV_ATR_resv_standing)) {\n\t\t/* The number of occurrences in the standing reservation and index are parsed\n\t\t * from the execvnode string which is of the form:\n\t\t *     <num_occurrences>#<vnode1>[range1]<vnode2>[range2]...\n\t\t */\n\t\tresv_count = get_execvnodes_count(preq->rq_ind.rq_run.rq_destin);\n\t\tif (resv_count == 0) {\n\t\t\treq_reject(PBSE_INTERNAL, 0, preq);\n\t\t\treturn;\n\t\t}\n\n\t\texecvnodes = strdup(preq->rq_ind.rq_run.rq_destin);\n\t\tif (execvnodes == NULL) {\n\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\treturn;\n\t\t}\n\t\tDBPRT((\"stdg_resv conf: execvnodes_seq is %s\\n\", execvnodes));\n\n\t\t/* execvnodes is of the form:\n\t\t *       <num_resv>#<(execvnode1)>[<range>]<(execvnode2)>[...\n\t\t * this \"condensed\" string is unrolled into a pointer array of\n\t\t * execvnodes per occurrence, e.g. 
short_xc[0] are the execvnodes\n\t\t * for 1st occurrence, short_xc[1] for the 2nd etc...\n\t\t * If something goes wrong during unrolling then NULL is returned.\n\t\t * which causes the confirmation message to be rejected\n\t\t */\n\t\tshort_xc = unroll_execvnode_seq(execvnodes, &tofree);\n\t\tif (short_xc == NULL) {\n\t\t\tfree(execvnodes);\n\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\treturn;\n\t\t}\n\t\t/* The execvnode of the soonest (i.e., next) occurrence */\n\t\tnext_execvnode = strdup(short_xc[0]);\n\t\tif (next_execvnode == NULL) {\n\t\t\tfree(short_xc);\n\t\t\tfree_execvnode_seq(tofree);\n\t\t\tfree(execvnodes);\n\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\treturn;\n\t\t}\n\t\t/* Release the now obsolete allocations used to manipulate the\n\t\t * unrolled string */\n\t\tfree(short_xc);\n\t\tfree_execvnode_seq(tofree);\n\t\tfree(execvnodes);\n\n\t\t/* When confirming for the first time, set the index and count */\n\t\tif (!is_degraded) {\n\n\t\t\t/* Add first occurrence's end date on timed task list */\n\t\t\tif (get_rattr_long(presv, RESV_ATR_start) != PBS_RESV_FUTURE_SCH) {\n\t\t\t\tif (gen_task_EndResvWindow(presv)) {\n\t\t\t\t\tfree(next_execvnode);\n\t\t\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t}\n\t\t\t/* Set first occurrence to index 1\n\t\t\t * (rather than 0 because it gets displayed in pbs_rstat -f) */\n\t\t\tset_rattr_l_slim(presv, RESV_ATR_resv_idx, 1, SET);\n\t\t}\n\n\t\t/* Skip setting the execvnodes sequence when reconfirming the last\n\t\t * occurrence or when altering a reservation.\n\t\t */\n\t\tif (!is_being_altered) {\n\t\t\tchar *new_execvnode = preq->rq_ind.rq_run.rq_destin;\n\t\t\tint remaining_occurrences = get_rattr_long(presv, RESV_ATR_resv_count) - get_rattr_long(presv, RESV_ATR_resv_idx) + 1; /* resv_idx starts at 1 */\n\t\t\tif (get_execvnodes_count(new_execvnode) != remaining_occurrences) {\n\t\t\t\tlog_event(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_WARNING, presv->ri_qs.ri_resvID, 
\"Number of execvnodes given does not equal the number of occurrences left\");\n\t\t\t\tfree(next_execvnode);\n\t\t\t\treq_reject(PBSE_BADATVAL, 0, preq);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tif (remaining_occurrences > 0) {\n\t\t\t\t/* now assign the execvnodes sequence attribute */\n\t\t\t\tset_rattr_str_slim(presv, RESV_ATR_resv_execvnodes, preq->rq_ind.rq_run.rq_destin, NULL);\n\t\t\t}\n\t\t}\n\t} else { /* Advance reservation */\n\t\tnext_execvnode = strdup(preq->rq_ind.rq_run.rq_destin);\n\t\tif (next_execvnode == NULL) {\n\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n\n\t/* Is reservation still a viable reservation? */\n\tif ((rc = chk_resvReq_viable(presv)) != 0) {\n\t\tfree(next_execvnode);\n\t\treq_reject(PBSE_BADTSPEC, 0, preq);\n\t\treturn;\n\t}\n\n\t/* When reconfirming a degraded reservation, first free the nodes linked\n\t * to the reservation and unset all attributes relating to retry attempts\n\t */\n\tif (is_degraded) {\n\t\tif (presv->ri_qs.ri_state == RESV_RUNNING) {\n\t\t\tif (presv->ri_giveback) {\n\t\t\t\tset_resc_assigned((void *) presv, 1, DECR);\n\t\t\t\tpresv->ri_giveback = 0;\n\t\t\t}\n\t\t}\n\t\tfree_resvNodes(presv);\n\t\t/* Reset retry time */\n\t\tunset_resv_retry(presv);\n\t\t/* reset vnodes_down counter to 0 */\n\t\tpresv->ri_vnodes_down = 0;\n\t}\n\n\tif (is_being_altered & RESV_END_TIME_MODIFIED) {\n\t\tif (gen_task_EndResvWindow(presv)) {\n\t\t\tfree(next_execvnode);\n\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n\n\t/*\n\t * Assign the allocated resources to the reservation\n\t * and the reservation to the associated vnodes.\n\t */\n\tif (is_being_altered) {\n\t\tif ((is_being_altered & RESV_SELECT_MODIFIED) && presv->ri_qs.ri_stime <= time_now) {\n\t\t\t/* If we are both degraded and ralter -lselect, we are fine.  
We will have unset ri_giveback above */\n\t\t\tif (presv->ri_giveback) {\n\t\t\t\tset_resc_assigned((void *) presv, 1, DECR);\n\t\t\t\tpresv->ri_giveback = 0;\n\t\t\t}\n\t\t}\n\n\t\tfree_resvNodes(presv);\n\t}\n\trc = assign_resv_resc(presv, next_execvnode, FALSE);\n\n\tDBPRT((\"resv_name=%s, rc=%d, is_degraded=%d, stime=%ld, now=%ld\",\n\t       presv->ri_qs.ri_resvID, rc, is_degraded, presv->ri_qs.ri_stime, time_now));\n\n\tif (presv->ri_qs.ri_stime <= time_now) {\n\t\tif (is_degraded || is_being_altered & RESV_SELECT_MODIFIED) {\n\t\t\tif (presv->ri_giveback == 0) {\n\t\t\t\tset_resc_assigned((void *) presv, 1, INCR);\n\t\t\t\tpresv->ri_giveback = 1;\n\t\t\t\tresv_exclusive_handler(presv);\n\t\t\t}\n\t\t}\n\t}\n\n\tif (rc != PBSE_NONE) {\n\t\tfree(next_execvnode);\n\t\treq_reject(rc, 0, preq);\n\t\treturn;\n\t}\n\n\t/* place \"Time4resv\" task on \"task_list_timed\" only if this is a\n\t * confirmation but not the reconfirmation of a degraded reservation as\n\t * in this case, the reservation had already been confirmed and added to\n\t * the task list before\n\t */\n\tif (!is_degraded && (!is_being_altered || is_being_altered & RESV_START_TIME_MODIFIED) &&\n\t    (rc = gen_task_Time4resv(presv)) != 0) {\n\t\tfree(next_execvnode);\n\t\treq_reject(rc, 0, preq);\n\t\treturn;\n\t}\n\n\t/*\n\t * compute new values for state and substate\n\t * and update the resc_resv object with these\n\t * newly computed values\n\t */\n\teval_resvState(presv, RESVSTATE_gen_task_Time4resv, 0, &state, &sub);\n\tresv_setResvState(presv, state, sub);\n\tif (strncmp(preq->rq_extend, PBS_RESV_CONFIRM_SUCCESS, strlen(PBS_RESV_CONFIRM_SUCCESS)) == 0) {\n\t\tchar *p_tmp;\n\t\tp_tmp = strstr(preq->rq_extend, \":partition=\");\n\t\tif (p_tmp) {\n\t\t\tp_tmp += strlen(\":partition=\");\n\t\t\tpartition_name = strdup(p_tmp);\n\t\t} else\n\t\t\tpartition_name = strdup(DEFAULT_PARTITION);\n\n\t\tif (partition_name == NULL) {\n\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\treturn;\n\t\t}\n\t\t/* 
Reservation is not degraded anymore */\n\t\tis_degraded = 0;\n\t}\n\tif (state == RESV_CONFIRMED && partition_name != NULL) {\n\t\t/* Set the name of the partition where the reservation is confirmed*/\n\t\tpbs_queue *rque = NULL;\n\t\tchar *qname = NULL;\n\t\tchar *p;\n\t\tset_rattr_str_slim(presv, RESV_ATR_partition, partition_name, NULL);\n\t\tqname = strdup(presv->ri_qs.ri_resvID);\n\t\tif (qname == NULL) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"malloc failed\");\n\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\tfree(partition_name);\n\t\t\treturn;\n\t\t}\n\t\tp = strpbrk(qname, \".\");\n\t\tif (p != NULL)\n\t\t\t*p = '\\0';\n\t\trque = find_queuebyname(qname);\n\t\tif (rque == NULL) {\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"Reservation queue not found\");\n\t\t\treq_reject(PBSE_INTERNAL, 0, preq);\n\t\t\tfree(partition_name);\n\t\t\tfree(qname);\n\t\t\treturn;\n\t\t} else {\n\t\t\tset_qattr_str_slim(rque, QA_ATR_partition, partition_name, NULL);\n\t\t\tque_save_db(rque);\n\t\t}\n\t\tfree(qname);\n\t}\n\tfree(partition_name);\n\tresv_save_db(presv);\n\n\tlog_buffer[0] = '\\0';\n\n\t/*\n\t * Notify all interested parties that the reservation\n\t * is moving from state UNCONFIRMED to CONFIRMED\n\t */\n\tif (presv->ri_brp) {\n\t\tpresv = find_resv(presv->ri_qs.ri_resvID);\n\t\tif (get_rattr_str(presv, RESV_ATR_convert) != NULL) {\n\t\t\trc = cnvrt_qmove(presv);\n\t\t\tif (rc != 0) {\n\t\t\t\tsnprintf(buf, sizeof(buf), \"%.240s FAILED\", presv->ri_qs.ri_resvID);\n\t\t\t} else {\n\t\t\t\tsnprintf(buf, sizeof(buf), \"%.240s CONFIRMED\", presv->ri_qs.ri_resvID);\n\t\t\t}\n\t\t} else {\n\t\t\tsnprintf(buf, sizeof(buf), \"%.240s CONFIRMED\", presv->ri_qs.ri_resvID);\n\t\t}\n\n\t\trc = reply_text(presv->ri_brp, PBSE_NONE, buf);\n\t\tpresv->ri_brp = NULL;\n\t}\n\n\tsvr_mailownerResv(presv, MAIL_CONFIRM, MAIL_NORMAL, log_buffer);\n\t(get_rattr(presv, RESV_ATR_interactive))->at_flags &= ~ATR_VFLAG_SET;\n\n\tif (is_being_altered) {\n\t\t/*\n\t\t * If the reservation is 
currently running and its start time is being\n\t\t * altered after the current time, it is going back to the confirmed state.\n\t\t * We need to stop the reservation queue as it would have been started at\n\t\t * the original start time.\n\t\t * This will prevent any jobs - that are submitted after the\n\t\t * reservation's start time is changed - from running.\n\t\t * The reservation went to CO from RN while being altered, which means the reservation\n\t\t * had resources assigned. We should decrement their usages until it starts running\n\t\t * again, where the resources will be accounted again.\n\t\t */\n\t\tif (presv->ri_qs.ri_state == RESV_CONFIRMED && presv->ri_alter.ra_state == RESV_RUNNING) {\n\t\t\tchange_enableORstart(presv, Q_CHNG_START, \"FALSE\");\n\t\t\tif (presv->ri_giveback) {\n\t\t\t\tset_resc_assigned((void *) presv, 1, DECR);\n\t\t\t\tpresv->ri_giveback = 0;\n\t\t\t}\n\t\t}\n\t\tif (presv->ri_alter.ra_flags & RESV_SELECT_MODIFIED)\n\t\t\tfree_rattr(presv, RESV_ATR_SchedSelect_orig);\n\n\t\tpresv->ri_alter.ra_flags = 0;\n\n\t\tlog_event(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_INFO,\n\t\t\t  presv->ri_qs.ri_resvID, \"Reservation alter confirmed\");\n\t} else\n\t\tlog_event(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_INFO,\n\t\t\t  presv->ri_qs.ri_resvID, \"Reservation confirmed\");\n\n\tif (!is_degraded) {\n\t\t/* 100 extra bytes for field names, times, and count */\n\t\ttmp_buf_size = 100 + strlen(preq->rq_user) + strlen(preq->rq_host) + strlen(next_execvnode);\n\t\tif (tmp_buf_size > sizeof(buf)) {\n\t\t\ttmp_buf = malloc(tmp_buf_size);\n\t\t\tif (tmp_buf == NULL) {\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"malloc failure (errno %d)\", errno);\n\t\t\t\tlog_err(PBSE_SYSTEM, __func__, log_buffer);\n\t\t\t\tfree(next_execvnode);\n\t\t\t\treply_ack(preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else {\n\t\t\ttmp_buf = buf;\n\t\t\ttmp_buf_size = sizeof(buf);\n\t\t}\n\n\t\tif (get_rattr_long(presv, RESV_ATR_resv_standing)) {\n\t\t\t(void) 
snprintf(tmp_buf, tmp_buf_size, \"requestor=%s@%s start=%ld end=%ld nodes=%s count=%ld\",\n\t\t\t\t\tpreq->rq_user, preq->rq_host,\n\t\t\t\t\tpresv->ri_qs.ri_stime, presv->ri_qs.ri_etime,\n\t\t\t\t\tnext_execvnode,\n\t\t\t\t\tget_rattr_long(presv, RESV_ATR_resv_count));\n\t\t} else {\n\t\t\t(void) snprintf(tmp_buf, tmp_buf_size, \"requestor=%s@%s start=%ld end=%ld nodes=%s\",\n\t\t\t\t\tpreq->rq_user, preq->rq_host,\n\t\t\t\t\tpresv->ri_qs.ri_stime, presv->ri_qs.ri_etime,\n\t\t\t\t\tnext_execvnode);\n\t\t}\n\t\tchar hook_msg[HOOK_MSG_SIZE] = {0};\n\t\tswitch (process_hooks(preq, hook_msg, sizeof(hook_msg), pbs_python_set_interrupt)) {\n\t\t\tcase 0: /* explicit reject */\n\t\t\tcase 1: /* no recreate request as there are only read permissions */\n\t\t\tcase 2: /* no hook script executed - go ahead and accept event*/\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_INFO, __func__,\n\t\t\t\t\t  \"resv_confirm event: accept req by default\");\n\t\t}\n\t\taccount_recordResv(PBS_ACCT_CR, presv, tmp_buf);\n\t\tif (tmp_buf != buf) {\n\t\t\tfree(tmp_buf);\n\t\t\ttmp_buf_size = 0;\n\t\t}\n\t}\n\n\tif (presv->ri_qs.ri_resvID[0] == PBS_MNTNC_RESV_ID_CHAR)\n\t\tdegrade_overlapping_resv(presv);\n\n\tfree(next_execvnode);\n\treply_ack(preq);\n\n\treturn;\n}\n"
  },
  {
    "path": "src/server/req_runjob.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\treq_runjob.c\n *\n * @brief\n * \t\treq_runjob.c - functions dealing with a Run Job Request\n *\n * Included functions are:\n *\tcheck_and_provision_job()\n *\tclear_from_defr()\n *\treq_runjob()\n *\treq_runjob2()\n *\tclear_exec_on_run_fail()\n *\treq_stagein()\n *\tpost_stagein()\n *\tsvr_stagein()\n *\tsvr_startjob()\n *\tsvr_strtjob2()\n *\tcomplete_running()\n *\tparse_hook_rejectmsg()\n *\tpost_sendmom()\n *\tchk_job_torun()\n *\twhere_to_runjob()\n *\tassign_hosts()\n *\treq_defschedreply()\n *\tcheck_failed_attempts()\n *\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <ctype.h>\n#include <stdio.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <unistd.h>\n#include <fcntl.h>\n\n#include <strings.h>\n#include <sys/wait.h>\n\n#include <signal.h>\n#include <stdlib.h>\n#include \"libpbs.h\"\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"server.h\"\n#include \"credential.h\"\n#include \"batch_request.h\"\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"queue.h\"\n#include \"work_task.h\"\n#include \"pbs_error.h\"\n#include \"log.h\"\n#include \"acct.h\"\n#include \"net_connect.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include <libutil.h>\n#include \"sched_cmds.h\"\n#include 
\"pbs_license.h\"\n#include \"hook.h\"\n#include \"provision.h\"\n#include \"pbs_share.h\"\n#include \"pbs_sched.h\"\n\n/* External Functions Called: */\n\nextern struct batch_request *cpy_stage(struct batch_request *, job *, enum job_atr, int);\n\n/* Public Functions in this file */\n\nint svr_startjob(job *, struct batch_request *);\nextern char *msg_daemonname;\nextern char *path_hooks_workdir;\nextern char *msg_hook_reject_deletejob;\n\n/* Private Functions local to this file */\n\nvoid post_sendmom(struct work_task *);\nstatic int svr_stagein(job *, struct batch_request *, char, int);\nstatic int svr_strtjob2(job *, struct batch_request *);\nstatic job *chk_job_torun(struct batch_request *preq, job *);\nstatic int req_runjob2(struct batch_request *preq, job *pjob);\nstatic job *where_to_runjob(struct batch_request *preq, job *);\nstatic void convert_job_to_resv(job *pjob);\n\n/* Global Data Items: */\n\nextern pbs_net_t pbs_mom_addr;\nextern int pbs_mom_port;\nextern struct server server;\nextern char *msg_badexit;\nextern char *msg_jobrun;\nextern char *msg_job_end_sig;\nextern char *msg_init_substate;\nextern char *msg_manager;\nextern char *msg_stageinfail;\nextern char *msg_job_abort;\nextern time_t time_now;\nextern int svr_totnodes; /* non-zero if using nodes */\nextern job *chk_job_request(char *, struct batch_request *, int *, int *);\nextern int send_cred(job *pjob);\n\n/* private data */\n\n/**\n * @brief\n *\t\tTake a batch_request and job pointer as arguments.\n *\t \tEnqueue provisioning by calling check_and_enqueue_provisioning;\n *\t \tif the enqueue is successful, set the job substate to provisioning,\n *\t \telse return an error and let the caller send a req_reject to the scheduler.\n *\n * @see\n *\t\treq_runjob2\n *\n * @param[in]\tpreq\t-\tbatch_request\n * @param[in,out]\tpjob\t-\tjob pointer\n * @param[out]\tneed_prov\t-\tboolean value, whether job will 
provision\n *\n * @return\tint\n * @retval\t0\t: no provisioning required\n * @retval\t-1\t: provisioning required\n * @retval\t>0\t: PBS error codes\n *\n * @par Side Effects:\n *\tUnknown\n *\n * @par MT-safe: No\n *\n */\nstatic int\ncheck_and_provision_job(struct batch_request *preq, job *pjob, int *need_prov)\n{\n\tint rc = 0;\n\n\t/* prov node is part of exec_vnodes, */\n\t/* cut and update exec_vnode and prov_vnode */\n\tif (!preq || !pjob || !need_prov)\n\t\treturn (PBSE_IVALREQ);\n\n\trc = check_and_enqueue_provisioning(pjob, need_prov);\n\tif (rc) {\n\t\t/* log message about failure to start provisioning for a job */\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t  \"Job failed to start provisioning\");\n\n\t\t/* put system hold and move to held state */\n\t\tset_jattr_b_slim(pjob, JOB_ATR_hold, HOLD_s, INCR);\n\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_HELD, JOB_SUBSTATE_HELD);\n\t\tset_jattr_str_slim(pjob, JOB_ATR_Comment, \"job held, provisioning failed to start\", NULL);\n\n\t\t/* not offlining vnodes, since it's not the vnode's fault. The vnode */\n\t\t/* is good to run other jobs, so why waste the resource. */\n\t\treturn rc;\n\t}\n\n\tif (*need_prov == 0)\n\t\treturn PBSE_NONE;\n\n\t/* provisioning was needed and enqueued */\n\n\tsvr_setjobstate(pjob, JOB_STATE_LTR_RUNNING, JOB_SUBSTATE_PROVISION);\n\tDBPRT((\"%s: Successfully enqueued provisioning for job %s\\n\", __func__, pjob->ji_qs.ji_jobid))\n\n\t/* log accounting line for start of prov for a job */\n\tset_job_ProvAcctRcd(pjob, time_now, PROVISIONING_STARTED);\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\t\tSearch for a deferred reply element in the list whose client\n *\t\trequest came in on the socket which was just closed.  
If found,\n *\t\tclear the pointer to the original request which has been freed\n *\t\twhen the connection was closed.\n *\n * @param[in]\tsd\t-\tsocket which was just closed\n *\n * @return\tnone\n *\n * @par MT-safe: not really\n */\nstatic void\nclear_from_defr(int sd)\n{\n\tpbs_sched *psched;\n\tpbs_list_head *deferred_req;\n\tstruct deferred_request *pdefr;\n\n\tfor (psched = (pbs_sched *) GET_NEXT(svr_allscheds);\n\t     psched;\n\t     psched = (pbs_sched *) GET_NEXT(psched->sc_link)) {\n\n\t\tdeferred_req = fetch_sched_deferred_request(psched, false);\n\t\tif (deferred_req == NULL) {\n\t\t\tcontinue;\n\t\t}\n\n\t\tfor (pdefr = (struct deferred_request *) GET_NEXT(*deferred_req);\n\t\t     pdefr;\n\t\t     pdefr = (struct deferred_request *) GET_NEXT(pdefr->dr_link)) {\n\t\t\tif (pdefr->dr_preq != NULL) {\n\t\t\t\tif (pdefr->dr_preq->rq_conn == sd) {\n\t\t\t\t\t/* found deferred run job request whose */\n\t\t\t\t\t/* connection to the client has closed  */\n\t\t\t\t\tif (pdefr->dr_sent != 0) {\n\t\t\t\t\t\t/* request sent to scheduler, wait   */\n\t\t\t\t\t\t/* for it to respond before removing */\n\t\t\t\t\t\t/* this request, just null the qrun  */\n\t\t\t\t\t\t/* request pointer                   */\n\t\t\t\t\t\tpdefr->dr_preq = NULL;\n\t\t\t\t\t} else {\n\t\t\t\t\t\t/* unlink & free the deferred request */\n\t\t\t\t\t\tdelete_link(&pdefr->dr_link);\n\t\t\t\t\t\tfree(pdefr);\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\tclear_sched_deferred_request(psched);\n\t}\n}\n\n/**\n * @brief\tWrapper function that calls process_hooks()\n *\n * @see\t\treq_runjob()\n *\n * @return\tint\n *\n */\nint\ncall_to_process_hooks(struct batch_request *preq, char *hook_msg, size_t msg_len,\n\t\t      void(*pyinter_func))\n{\n\tint rc;\n\trc = process_hooks(preq, hook_msg, msg_len, pyinter_func);\n\tif (rc == -1)\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_INFO, \"\", \"runjob event: accept req by default\");\n\treturn rc;\n}\n\n/**\n 
 * @brief\n * \treq_runjob - service the Run Job and Async Run Job Requests\n *\n * @par\n *\tThis request forces a job into execution. The client must be privileged to run the job.\n *\n * @param[in] preq - pointer to batch request structure\n *\n * @return void\n *\n */\nvoid\nreq_runjob(struct batch_request *preq)\n{\n\tint anygood;\n\tint i;\n\tint j;\n\tchar *jid;\n\tint jt; /* job type */\n\tchar *pc;\n\tjob *pjob = NULL;\n\tjob *pjobsub = NULL;\n\tjob *parent = NULL;\n\tchar *range;\n\tint start;\n\tint end;\n\tint step;\n\tint count;\n\tpbs_list_head *deferred_req;\n\tstruct deferred_request *pdefr;\n\tchar hook_msg[HOOK_MSG_SIZE];\n\tpbs_sched *psched;\n\tchar sjst;\n\n\tif ((preq->rq_perm & (ATR_DFLAG_MGWR | ATR_DFLAG_OPWR)) == 0) {\n\t\treq_reject(PBSE_PERM, 0, preq);\n\t\treturn;\n\t}\n\n\tjid = preq->rq_ind.rq_run.rq_jid;\n\tparent = chk_job_request(jid, preq, &jt, NULL);\n\tif (parent == NULL)\n\t\treturn; /* note, req_reject already called */\n\n\t/* the job must be in an execution queue */\n\tif (parent->ji_qhdr->qu_qs.qu_type != QTYPE_Execution) {\n\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\treturn;\n\t}\n\n\tif (!find_assoc_sched_jid(jid, &psched)) {\n\t\tsprintf(log_buffer, \"Unable to reach scheduler associated with job %s\", jid);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\treq_reject(PBSE_NOSCHEDULER, 0, preq);\n\t\treturn;\n\t}\n\n\tif ((psched->sc_cycle_started != -1) && was_job_alteredmoved(parent)) {\n\t\t/* Reject run request for altered/moved jobs if job_run_wait is set to \"execjob_hook\" */\n\t\tif (!is_sched_attr_set(psched, SCHED_ATR_job_run_wait) ||\n\t\t    (!strcmp(get_sched_attr_str(psched, SCHED_ATR_job_run_wait), RUN_WAIT_EXECJOB_HOOK))) {\n\t\t\treq_reject(PBSE_NORUNALTEREDJOB, 0, preq);\n\t\t\tset_scheduler_flag(SCH_SCHEDULE_NEW, psched);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tif (jt == IS_ARRAY_NO) {\n\t\t/* just a regular job, pass it on down the line and be done */\n\t\tpjob = chk_job_torun(preq, parent);\n\t\tif (pjob == 
NULL)\n\t\t\treturn;\n\t\tif (pjob->ji_discarding) {\n\t\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\t\treturn;\n\t\t}\n\t} else if (jt == IS_ARRAY_Single) {\n\t\t/* single subjob, if running can signal */\n\t\tpjob = get_subjob_and_state(parent, get_index_from_jid(jid), &sjst, NULL);\n\t\tif (sjst == JOB_STATE_LTR_UNKNOWN) {\n\t\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\t\treturn;\n\t\t} else if (sjst != JOB_STATE_LTR_QUEUED || (pjob && pjob->ji_discarding)) {\n\t\t\t/* job already running or discarding  */\n\t\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\t\treturn;\n\t\t}\n\t} else if (jt == IS_ARRAY_ArrayJob) {\n\t\t/* invalid to run the array itself */\n\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\treturn;\n\t} else {\n\t\t/*\n\t\t * what's left to handle is a range of subjobs,\n\t\t * validate that the given range has at least one subjob\n\t\t * in the queued state\n\t\t */\n\t\tanygood = 0;\n\t\trange = get_range_from_jid(jid);\n\t\tif (range == NULL) {\n\t\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\t\treturn;\n\t\t}\n\n\t\twhile (1) {\n\t\t\tif ((i = parse_subjob_index(range, &pc, &start, &end, &step, &count)) == -1) {\n\t\t\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\t\t\treturn;\n\t\t\t} else if (i == 1)\n\t\t\t\tbreak; /* no more in the range */\n\t\t\tfor (i = start; i <= end; i += step) {\n\t\t\t\tpjob = get_subjob_and_state(parent, i, &sjst, NULL);\n\t\t\t\tif (sjst == JOB_STATE_LTR_QUEUED) {\n\t\t\t\t\tanygood = 1;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\trange = pc;\n\t\t}\n\t\tif (anygood == 0) {\n\t\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n\n\t/*\n\t * At this point, we know the basic request to run the job\n\t * or jobs is valid, so we can proceed further.\n\t * If there is a specified list of execution vnodes and\n\t * resources, then process the run request, else it has\n\t * to go to the scheduler\n\t */\n\tif ((preq->rq_ind.rq_run.rq_destin == NULL) ||\n\t    (*preq->rq_ind.rq_run.rq_destin == '\\0')) {\n\t\tchar 
fixjid[PBS_MAXSVRJOBID + 1];\n\n\t\t/* if runjob request is from the Scheduler, it must have a destination specified */\n\t\tif (preq->rq_conn == psched->sc_primary_conn) {\n\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_JOB, LOG_INFO, jid, \"runjob request from scheduler with null destination\");\n\t\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\t\treturn;\n\t\t}\n\t\tpdefr = (struct deferred_request *) malloc(sizeof(struct deferred_request));\n\t\tif (pdefr == NULL) {\n\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\treturn;\n\t\t}\n\t\tCLEAR_LINK(pdefr->dr_link);\n\n\t\t/*\n\t\t * fix the job id so the suffix matches the real jobid's\n\t\t * suffix;  in case qrun 1.short vs 1.short.domain.com\n\t\t */\n\t\tsnprintf(fixjid, sizeof(fixjid), \"%s\", jid);\n\t\tpc = strchr(fixjid, (int) '.');\n\t\tif (pc)\n\t\t\t*pc = '\\0';\n\t\tpc = strchr(parent->ji_qs.ji_jobid, (int) '.');\n\t\tif (pc)\n\t\t\tstrcat(fixjid, pc);\n\n\t\tpbs_strncpy(pdefr->dr_id, fixjid, PBS_MAXSVRJOBID + 1);\n\t\tpdefr->dr_preq = preq;\n\t\tpdefr->dr_sent = 0;\n\t\tdeferred_req = fetch_sched_deferred_request(psched, true);\n\t\tif (deferred_req == NULL) {\n\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\treturn;\n\t\t}\n\t\tappend_link(deferred_req, &pdefr->dr_link, pdefr);\n\t\t/* ensure that request is removed if client connect is closed */\n\t\tnet_add_close_func(preq->rq_conn, clear_from_defr);\n\n\t\tif (schedule_jobs(psched) == -1) {\n\t\t\t/* unable to contact the Scheduler, reject */\n\t\t\treq_reject(PBSE_NOSCHEDULER, 0, preq);\n\t\t\t/* unlink and free the deferred request entry */\n\t\t\tdelete_link(&pdefr->dr_link);\n\t\t\tfree(pdefr);\n\t\t}\n\t\treturn;\n\t}\n\n\tDBPRT((\"req_runjob: received command to run job on destin %s\\n\", preq->rq_ind.rq_run.rq_destin))\n\n\t/*\n\t * OK - go back over the run job request, assign the vhosts\n\t * and finally run the job by calling req_runjob2()\n\t */\n\tif (jt == IS_ARRAY_NO) {\n\n\t\t/* just a regular job, pass it on down the line and be done 
*/\n\t\tif (call_to_process_hooks(preq, hook_msg, sizeof(hook_msg), pbs_python_set_interrupt) == 0) {\n\t\t\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\t\t\treturn;\n\t\t}\n\t\tpjob = where_to_runjob(preq, parent);\n\t\tif (pjob) {\n\t\t\t/* free prov_vnode before use */\n\t\t\tfree_jattr(pjob, JOB_ATR_prov_vnode);\n\t\t\treq_runjob2(preq, parent);\n\t\t}\n\t\treturn;\n\n\t} else if (jt == IS_ARRAY_Single) {\n\t\tattribute sub_runcount;\n\t\tattribute sub_run_version;\n\t\tattribute sub_prev_res;\n\n\t\tclear_attr(&sub_runcount, &job_attr_def[JOB_ATR_runcount]);\n\t\tclear_attr(&sub_run_version, &job_attr_def[JOB_ATR_run_version]);\n\t\tclear_attr(&sub_prev_res, &job_attr_def[JOB_ATR_resource]);\n\n\t\tpjobsub = get_subjob_and_state(parent, get_index_from_jid(jid), NULL, NULL);\n\t\tif (pjobsub != NULL) {\n\t\t\tif (is_jattr_set(pjobsub, JOB_ATR_runcount))\n\t\t\t\tset_attr_with_attr(&job_attr_def[JOB_ATR_runcount], &sub_runcount, get_jattr(pjobsub, JOB_ATR_runcount), SET);\n\t\t\tif (is_jattr_set(pjobsub, JOB_ATR_run_version))\n\t\t\t\tset_attr_with_attr(&job_attr_def[JOB_ATR_run_version], &sub_run_version, get_jattr(pjobsub, JOB_ATR_run_version), SET);\n\t\t\tif (is_jattr_set(pjobsub, JOB_ATR_resource))\n\t\t\t\tset_attr_with_attr(&job_attr_def[JOB_ATR_resource], &sub_prev_res, get_jattr(pjobsub, JOB_ATR_resource), SET);\n\t\t\tjob_purge(pjobsub);\n\t\t}\n\n\t\tif ((pjobsub = create_subjob(parent, jid, &j)) == NULL) {\n\t\t\tif (is_attr_set(&sub_runcount))\n\t\t\t\tfree_attr(job_attr_def, &sub_runcount, JOB_ATR_runcount);\n\t\t\tif (is_attr_set(&sub_run_version))\n\t\t\t\tfree_attr(job_attr_def, &sub_run_version, JOB_ATR_run_version);\n\t\t\tif (is_attr_set(&sub_prev_res))\n\t\t\t\tfree_attr(job_attr_def, &sub_prev_res, JOB_ATR_resource);\n\t\t\treq_reject(j, 0, preq);\n\t\t\treturn;\n\t\t}\n\n\t\tif (is_attr_set(&sub_runcount)) {\n\t\t\tfree_jattr(pjobsub, JOB_ATR_runcount);\n\t\t\tset_attr_with_attr(&job_attr_def[JOB_ATR_runcount], get_jattr(pjobsub, 
JOB_ATR_runcount), &sub_runcount, SET);\n\t\t\tfree_attr(job_attr_def, &sub_runcount, JOB_ATR_runcount);\n\t\t}\n\n\t\tif (is_attr_set(&sub_run_version)) {\n\t\t\tfree_jattr(pjobsub, JOB_ATR_run_version);\n\t\t\tset_attr_with_attr(&job_attr_def[JOB_ATR_run_version], get_jattr(pjobsub, JOB_ATR_run_version), &sub_run_version, SET);\n\t\t\tfree_attr(job_attr_def, &sub_run_version, JOB_ATR_run_version);\n\t\t}\n\n\t\tif (is_attr_set(&sub_prev_res)) {\n\t\t\tfree_jattr(pjobsub, JOB_ATR_resource);\n\t\t\tset_attr_with_attr(&job_attr_def[JOB_ATR_resource], get_jattr(pjobsub, JOB_ATR_resource), &sub_prev_res, SET);\n\t\t\tfree_attr(job_attr_def, &sub_prev_res, JOB_ATR_resource);\n\t\t}\n\n\t\tif (call_to_process_hooks(preq, hook_msg, sizeof(hook_msg), pbs_python_set_interrupt) == 0) {\n\t\t\t/* subjob reject from hook*/\n\t\t\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\t\t\treturn;\n\t\t}\n\t\tpjob = where_to_runjob(preq, pjobsub);\n\t\tif (pjob) {\n\t\t\t/* free prov_vnode before use */\n\t\t\tfree_jattr(pjob, JOB_ATR_prov_vnode);\n\t\t\treq_runjob2(preq, pjob);\n\t\t}\n\t\treturn;\n\t}\n\n\t/*\n\t * what's left to handle is a range of subjobs,\n\t * foreach subjob, if queued, run it\n\t */\n\trange = get_range_from_jid(jid);\n\tif (range == NULL) {\n\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\treturn;\n\t}\n\n\t++preq->rq_refct;\n\n\twhile (1) {\n\t\tif ((i = parse_subjob_index(range, &pc, &start, &end, &step, &count)) == -1) {\n\t\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\t\tbreak;\n\t\t} else if (i == 1)\n\t\t\tbreak;\n\t\tfor (i = start; i <= end; i += step) {\n\t\t\tattribute sub_runcount = {0};\n\t\t\tattribute sub_run_version = {0};\n\t\t\tattribute sub_prev_res = {0};\n\n\t\t\tclear_attr(&sub_runcount, &job_attr_def[JOB_ATR_runcount]);\n\t\t\tclear_attr(&sub_run_version, &job_attr_def[JOB_ATR_run_version]);\n\t\t\tclear_attr(&sub_prev_res, &job_attr_def[JOB_ATR_resource]);\n\n\t\t\tpjobsub = get_subjob_and_state(parent, i, &sjst, NULL);\n\t\t\tif (sjst != 
JOB_STATE_LTR_QUEUED)\n\t\t\t\tcontinue;\n\n\t\t\tif (pjobsub != NULL) {\n\t\t\t\tif (is_jattr_set(pjobsub, JOB_ATR_runcount))\n\t\t\t\t\tset_attr_with_attr(&job_attr_def[JOB_ATR_runcount], &sub_runcount, get_jattr(pjobsub, JOB_ATR_runcount), SET);\n\t\t\t\tif (is_jattr_set(pjobsub, JOB_ATR_run_version))\n\t\t\t\t\tset_attr_with_attr(&job_attr_def[JOB_ATR_run_version], &sub_run_version, get_jattr(pjobsub, JOB_ATR_run_version), SET);\n\t\t\t\tif (is_jattr_set(pjobsub, JOB_ATR_resource))\n\t\t\t\t\tset_attr_with_attr(&job_attr_def[JOB_ATR_resource], &sub_prev_res, get_jattr(pjobsub, JOB_ATR_resource), SET);\n\t\t\t\tjob_purge(pjobsub);\n\t\t\t}\n\n\t\t\tif ((pjobsub = create_subjob(parent, create_subjob_id(parent->ji_qs.ji_jobid, i), &j)) == NULL) {\n\t\t\t\tif (is_attr_set(&sub_prev_res))\n\t\t\t\t\tfree_attr(job_attr_def, &sub_prev_res, JOB_ATR_resource);\n\t\t\t\treq_reject(j, 0, preq);\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tif (is_attr_set(&sub_run_version))\n\t\t\t\tset_jattr_l_slim(pjobsub, JOB_ATR_run_version, get_attr_l(&sub_run_version), SET);\n\n\t\t\tif (is_attr_set(&sub_runcount))\n\t\t\t\tset_jattr_l_slim(pjobsub, JOB_ATR_runcount, get_attr_l(&sub_runcount), SET);\n\n\t\t\tif (is_attr_set(&sub_prev_res)) {\n\t\t\t\tfree_jattr(pjobsub, JOB_ATR_resource);\n\t\t\t\tset_attr_with_attr(&job_attr_def[JOB_ATR_resource], get_jattr(pjobsub, JOB_ATR_resource), &sub_prev_res, SET);\n\t\t\t\tfree_attr(job_attr_def, &sub_prev_res, JOB_ATR_resource);\n\t\t\t}\n\n\t\t\tif (call_to_process_hooks(preq, hook_msg, sizeof(hook_msg), pbs_python_set_interrupt) == 0) {\n\t\t\t\t/* subjob reject from hook*/\n\t\t\t\treply_text(preq, PBSE_HOOKERROR, hook_msg);\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\tif ((pjob = where_to_runjob(preq, pjobsub)) == NULL)\n\t\t\t\tcontinue;\n\n\t\t\tdup_br_for_subjob(preq, pjob, req_runjob2);\n\t\t}\n\t\trange = pc;\n\t}\n\n\t/*\n\t * if not waiting on any running subjobs, can reply; else\n\t * it is taken care of when last running subjob responds\n\t 
*/\n\tif (--preq->rq_refct == 0)\n\t\treply_send(preq);\n\treturn;\n}\n\n/**\n * @brief\n * \t\treq_runjob2 - second half of servicing the Run Job and Async Run Job Requests\n *\n * @param[in,out]\tpreq\t-\tRun Job request\n * @param[in,out]\tpjob\t-\tjob pointer\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t1\t: request rejected\n */\nstatic int\nreq_runjob2(struct batch_request *preq, job *pjob)\n{\n\tint rc;\n\tint prov_rc = 0;\n\tint need_prov;\n\tchar *dest;\n\tint rq_type = 0;\n\n\t/* Check if prov is required, if so, reply_ack and let prov finish */\n\t/* else follow normal flow */\n\tprov_rc = check_and_provision_job(preq, pjob, &need_prov);\n\n\t/* In the case of a subjob, save it to the database now because\n\t * it has not been saved to the database so far.\n\t */\n\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_SubJob) {\n\t\tif (job_save_db(pjob)) {\n\t\t\tfree_nodes(pjob);\n\t\t\treq_reject(PBSE_SAVE_ERR, 0, preq);\n\t\t\treturn 1;\n\t\t}\n\t}\n\n\tif (prov_rc) { /* problem with the request */\n\t\tfree_nodes(pjob);\n\t\treq_reject(prov_rc, 0, preq);\n\t\treturn 1;\n\t} else if (need_prov == 1) { /* prov required and request is fine */\n\t\t/* allocate resources right away */\n\t\tset_resc_assigned((void *) pjob, 0, INCR);\n\n\t\t/* provisioning was needed and was enqueued successfully */\n\t\t/* Always send ack for prov jobs, even if not async run */\n\t\treply_ack(preq);\n\t\treturn 0;\n\t}\n\n\t/* if need_prov == 0 then no prov required, so continue normal flow */\n\tdest = preq->rq_ind.rq_run.rq_destin;\n\tif ((dest == NULL) || (*dest == '\\0') || ((*dest == '-') && (*(dest + 1) == '\\0'))) {\n\t\tif (is_jattr_set(pjob, JOB_ATR_exec_vnode)) {\n\t\t\tdest = get_jattr_str(pjob, JOB_ATR_exec_vnode);\n\t\t} else {\n\t\t\tdest = NULL;\n\t\t}\n\t}\n\tif ((dest == NULL) || (*dest == '\\0')) {\n\t\t/* Neither the run request nor the job specified an execvnode. 
*/\n\t\tfree_nodes(pjob);\n\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\treturn 1;\n\t}\n\tsprintf(log_buffer, msg_manager, msg_jobrun, preq->rq_user, preq->rq_host);\n\tstrcat(log_buffer, \" on exec_vnode \");\n\trc = LOG_BUF_SIZE - strlen(log_buffer) - 1;\n\tstrncat(log_buffer, dest, rc);\n\t*(log_buffer + LOG_BUF_SIZE - 1) = '\\0';\n\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\n\t/* If async run, reply now; otherwise reply is handled in */\n\t/* post_sendmom or post_stagein\t\t\t\t  */\n\trq_type = preq->rq_type;\n\tif (preq && (rq_type == PBS_BATCH_AsyrunJob_ack)) {\n\t\treply_ack(preq);\n\t\tpreq = 0; /* cleared so we don't try to reuse */\n\t}\n\n\tif (((rc = svr_startjob(pjob, preq)) != 0) &&\n\t    ((rq_type == PBS_BATCH_AsyrunJob_ack) || preq)) {\n\t\tfree_nodes(pjob);\n\t\tif (preq)\n\t\t\treq_reject(rc, 0, preq);\n\t\treturn 1;\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tclear_exec_on_run_fail - On failure of running a job,\n *\t\tclear exec strings so the job can be rescheduled anywhere.\n *\n * @par Functionality:\n *\t\tIf the job has been checkpointed then the job must run where it ran before.\n *\t\tOtherwise it is free to run anywhere when re-scheduled.  
In this case,\n *\t\tclear the exec_hosts, exec_vnodes, etc.\n *\n * @param[in]\tjobp\t-\tpointer to the job whose run failed\n *\n * @return\tnone\n *\n * @par MT-safe: yes\n */\nvoid\nclear_exec_on_run_fail(job *jobp)\n{\n\tif ((jobp->ji_qs.ji_svrflags & JOB_SVFLG_CHKPT) == 0) {\n\n\t\tfree_jattr(jobp, JOB_ATR_exec_host);\n\t\tfree_jattr(jobp, JOB_ATR_exec_host2);\n\t\tfree_jattr(jobp, JOB_ATR_exec_vnode);\n\t\tjobp->ji_qs.ji_destin[0] = '\\0';\n\t}\n}\n\n/**\n * @brief\n * \t\treq_stagein\t-\tservice the Stage In Files for a Job Request\n *\n *\t\tThis request causes MOM to start staging in files.\n *\t\tClient must be privileged.\n *\n * @param[in]\tpreq\t-\tJob Request\n */\n\nvoid\nreq_stagein(struct batch_request *preq)\n{\n\treq_reject(PBSE_NOSUP, 0, preq);\n}\n\n/**\n * @brief\n * \t\tpost_stagein - process reply from MOM to stage-in request\n *\n * @param[in]\tpwt\t-\tpointer to work task structure which contains the request\n */\n\nstatic void\npost_stagein(struct work_task *pwt)\n{\n\tint code;\n\tchar newstate;\n\tint newsub;\n\tjob *paltjob;\n\tjob *pjob;\n\tstruct batch_request *preq;\n\tattribute *pwait;\n\n\tpreq = pwt->wt_parm1;\n\tcode = preq->rq_reply.brp_code;\n\tpjob = find_job(preq->rq_extra);\n\tfree(preq->rq_extra);\n\n\tif (pjob != NULL) {\n\n\t\tif (code != 0) {\n\n\t\t\t/* stage in failed - \"wait\" job */\n\n\t\t\tset_resc_assigned((void *) pjob, 0, DECR);\n\t\t\tfree_nodes(pjob);\n\t\t\tfree_jattr(pjob, JOB_ATR_exec_host);\n\t\t\tfree_jattr(pjob, JOB_ATR_exec_host2);\n\t\t\tfree_jattr(pjob, JOB_ATR_exec_vnode);\n\n\t\t\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_SubJob) {\n\t\t\t\t/* for subjob, \"wait\" the parent array */\n\t\t\t\tpaltjob = pjob->ji_parentaj;\n\t\t\t} else {\n\t\t\t\t/* for regular job, \"wait\" that job */\n\t\t\t\tpaltjob = pjob;\n\t\t\t}\n\t\t\tpwait = get_jattr(paltjob, JOB_ATR_exectime);\n\t\t\tif (!is_jattr_set(paltjob, JOB_ATR_exectime)) {\n\t\t\t\tset_jattr_l_slim(paltjob, JOB_ATR_exectime, time_now + 
PBS_STAGEFAIL_WAIT, SET);\n\t\t\t\tjob_set_wait(pwait, paltjob, 0);\n\t\t\t}\n\t\t\tsvr_setjobstate(paltjob, JOB_STATE_LTR_WAITING, JOB_SUBSTATE_STAGEFAIL);\n\n\t\t\tif (preq->rq_reply.brp_choice == BATCH_REPLY_CHOICE_Text)\n\t\t\t\tsvr_mailowner(pjob, MAIL_STAGEIN, MAIL_FORCE,\n\t\t\t\t\t      preq->rq_reply.brp_un.brp_txt.brp_str);\n\t\t} else {\n\t\t\t/* stage in was successful */\n\t\t\tpjob->ji_qs.ji_svrflags |= JOB_SVFLG_StagedIn;\n\t\t\tif (check_job_substate(pjob, JOB_SUBSTATE_STAGEGO)) {\n\t\t\t\t/* continue to start job running */\n\t\t\t\tsvr_strtjob2(pjob, NULL);\n\t\t\t} else {\n\t\t\t\tsvr_evaljobstate(pjob, &newstate, &newsub, 0);\n\t\t\t\tsvr_setjobstate(pjob, newstate, newsub);\n\t\t\t}\n\t\t}\n\t}\n\trelease_req(pwt); /* close connection and release request */\n}\n\n/**\n * @brief\n * \t\tsvr_stagein - direct MOM to stage in the requested files for a job\n *\n * @param[in,out]\tpjob\t-\tjob structure\n * @param[in,out]\tpreq\t-\trequest structure\n * @param[in]\tstate\t-\tjob state\n * @param[in,out]\tsubstate\t-\tjob substate\n *\n * @return\tint\n * @retval\t0\t- success\n * @retval\tnon-zero\t- error code\n */\n\nstatic int\nsvr_stagein(job *pjob, struct batch_request *preq, char state, int substate)\n{\n\tstruct batch_request *momreq = 0;\n\tint rc;\n\n\tmomreq = cpy_stage(momreq, pjob, JOB_ATR_stagein, STAGE_DIR_IN);\n\tif (momreq) { /* have files to stage in */\n\n\t\t/* save job id for post_stagein */\n\n\t\tmomreq->rq_extra = malloc(PBS_MAXSVRJOBID + 1);\n\t\tif (momreq->rq_extra == 0)\n\t\t\treturn (PBSE_SYSTEM);\n\t\tstrcpy(momreq->rq_extra, pjob->ji_qs.ji_jobid);\n\t\trc = relay_to_mom(pjob, momreq, post_stagein);\n\t\tif (rc == 0) {\n\n\t\t\tsvr_setjobstate(pjob, state, substate);\n\t\t\t/*\n\t\t\t * show resources allocated as stage-in may take\n\t\t\t * sufficient time to run into another\n\t\t\t * scheduling cycle\n\t\t\t */\n\t\t\tset_resc_assigned((void *) pjob, 0, INCR);\n\t\t\t/*\n\t\t\t * stage-in started ok - reply to 
client as copy may\n\t\t\t * take too long to wait.\n\t\t\t */\n\n\t\t\tif (preq)\n\t\t\t\treply_ack(preq);\n\t\t} else {\n\t\t\tfree(momreq->rq_extra);\n\t\t}\n\t\treturn (rc);\n\n\t} else {\n\n\t\t/* no files to stage-in, go direct to sending job to mom */\n\n\t\treturn (svr_strtjob2(pjob, preq));\n\t}\n}\n\n/**\n * @brief\n * \t\tform_attr_comment - Creates and returns an attribute comment from the given template\n * \t\tby appending time and execvnode\n *\n * @param[in]\ttemplate\t-\ttemplate of the string\n * @param[in]\texecvnode\t-\texecution node, NULL if this field is not required in the output\n *\n * @return\tstring\n * @retval\tnew attribute comment with time and execvnode appended.\n *\n * @note\n * \t\tDo not copy the output of this function into log_buffer. It is used internally.\n */\nchar *\nform_attr_comment(const char *template, const char *execvnode)\n{\n\tchar timebuf[128];\n\tstrftime(timebuf, 128, \"%a %b %d at %H:%M\", localtime(&time_now));\n\tsprintf(log_buffer, template, timebuf);\n\tif (execvnode != NULL) {\n\t\tstrcat(log_buffer, \" on \");\n\t\tif (strlen(execvnode) > COMMENT_BUF_SIZE - strlen(log_buffer) - 1) {\n\t\t\tstrncat(log_buffer, execvnode, COMMENT_BUF_SIZE - strlen(log_buffer) - 1 - 3);\n\t\t\tstrcat(log_buffer, \"...\");\n\t\t\tlog_buffer[COMMENT_BUF_SIZE - 1] = '\\0';\n\t\t} else\n\t\t\tstrcat(log_buffer, execvnode);\n\t}\n\treturn log_buffer;\n}\n\n/**\n * @brief\n * \t\tsvr_startjob - place a job into running state by shipping it to MOM\n *\n * @param[in,out]\tpjob\t-\tjob to run\n * @param[in,out]\tpreq\t-\t NULL or Run Job batch request\n *\n * @return\tint\n * @retval\t0\t- success\n * @retval\tnon-zero\t- error code\n */\nint\nsvr_startjob(job *pjob, struct batch_request *preq)\n{\n\tint f;\n\tint rc;\n\tchar *nspec;\n\tpbs_queue *pque = pjob->ji_qhdr;\n\tlong delay = 10; /* Default value for kill_delay */\n\n\t/* if not already set up, transfer the control/script file basename */\n\t/* into an attribute accessible to 
MOM\t\t\t\t   */\n\n\tif (!(is_jattr_set(pjob, JOB_ATR_hashname)))\n\t\tif (set_jattr_str_slim(pjob, JOB_ATR_hashname, pjob->ji_qs.ji_jobid, NULL))\n\t\t\treturn (PBSE_SYSTEM);\n\n\t/* clear Exit_status which may have been set in a hook and requeued */\n\tif (job_delete_attr(pjob, JOB_ATR_exit_status)) {\n\t\treturn PBSE_SYSTEM;\n\t}\n\n\t/* if exec_vnode already set and either (hotstart or checkpoint) */\n\t/* then reuse the host(s) listed in the current exec_vnode\t */\n\n\trc = 0;\n\tf = is_jattr_set(pjob, JOB_ATR_exec_vnode);\n\tif (f && ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HOTSTART) || (pjob->ji_qs.ji_svrflags & JOB_SVFLG_CHKPT)) && ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HasNodes) == 0)) {\n\n\t\tnspec = get_jattr_str(pjob, JOB_ATR_exec_vnode);\n\t\tif (nspec == NULL)\n\t\t\treturn (PBSE_SYSTEM);\n\t\trc = assign_hosts(pjob, nspec, 0);\n\n\t} else if (f == 0) {\n\t\t/* exec_vnode not already set, use hosts from request   */\n\t\tif (preq == NULL)\n\t\t\treturn (PBSE_INTERNAL);\n\t\tnspec = preq->rq_ind.rq_run.rq_destin;\n\t\tif (nspec == NULL)\n\t\t\treturn (PBSE_IVALREQ);\n\n\t\trc = assign_hosts(pjob, nspec, 1);\n\t}\n\tif (rc != 0)\n\t\treturn rc;\n\n\tif (is_jattr_set(pjob, JOB_ATR_create_resv_from_job) &&\n\t    get_jattr_long(pjob, JOB_ATR_create_resv_from_job))\n\t\tconvert_job_to_resv(pjob);\n\n\t/* Move job_kill_delay attribute from Server to MOM */\n\tif (is_qattr_set(pque, QE_ATR_KillDelay))\n\t\tdelay = get_qattr_long(pque, QE_ATR_KillDelay);\n\tset_jattr_l_slim(pjob, JOB_ATR_job_kill_delay, delay, SET);\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\tif (is_jattr_set(pjob, JOB_ATR_cred_id)) {\n\t\trc = send_cred(pjob);\n\t\tif (rc != 0) {\n\t\t\treturn rc; /* do not start job without credentials */\n\t\t}\n\t}\n#endif\n\n\t/* Next, are there files to be staged-in? 
*/\n\n\tif ((is_jattr_set(pjob, JOB_ATR_stagein)) &&\n\t    (!check_job_substate(pjob, JOB_SUBSTATE_STAGECMP))) {\n\n\t\t/* yes, we do that first; then start the job */\n\n\t\trc = svr_stagein(pjob, preq, JOB_STATE_LTR_RUNNING, JOB_SUBSTATE_STAGEGO);\n\n\t\t/* note, the positive acknowledgment to the run job request */\n\t\t/* is done by svr_stagein if the stage-in is successful     */\n\n\t\tif (rc != 0) {\n\t\t\t/* If the stage-in failed and we aren't          */\n\t\t\t/* checkpointed, clear the exec_host/exec_vnode; */\n\t\t\t/* job can be run  elsewhere\t\t\t */\n\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_CHKPT) == 0) {\n\t\t\t\t/* clear StagedIn flag for good measure */\n\t\t\t\tpjob->ji_qs.ji_svrflags &= ~JOB_SVFLG_StagedIn;\n\t\t\t\tfree_jattr(pjob, JOB_ATR_exec_host);\n\t\t\t\tfree_jattr(pjob, JOB_ATR_exec_host2);\n\t\t\t\tfree_jattr(pjob, JOB_ATR_exec_vnode);\n\t\t\t}\n\t\t}\n\n\t} else {\n\n\t\t/* No stage-in or already done, start job executing */\n\n\t\trc = svr_strtjob2(pjob, preq);\n\t}\n\treturn (rc);\n}\n\n/**\n * @brief\n * \t\tContinue the process of running a job by sending it to Mother Superior,\n *\t\tand making sure it is in JOB_SUBSTATE_PRERUN.\n *\n * @param[in]\tpjob - pointer to job to run\n * @param[in]\tpreq - the run job request from the scheduler or client\n *\n * @return\tint\n * @retval\t0\t:  success, job is being sent to Mom\n * @retval\t!0\t:  error in trying to send to Mom\n */\nstatic int\nsvr_strtjob2(job *pjob, struct batch_request *preq)\n{\n\tchar old_state;\n\tint old_subst;\n\n\told_state = get_job_state(pjob);\n\told_subst = get_job_substate(pjob);\n\tpjob->ji_qs.ji_stime = 0; /* updated in complete_running() */\n\n\t/* if not restarting a checkpointed job, increment the run/hop count */\n\n\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_CHKPT) == 0) {\n\t\tset_jattr_l_slim(pjob, JOB_ATR_run_version, 1, INCR);\n\t\tset_jattr_l_slim(pjob, JOB_ATR_runcount, 1, INCR);\n\t}\n\n\t/* send the job to MOM 
*/\n\tset_jattr_generic(pjob, JOB_ATR_Comment,\n\t\t\t  form_attr_comment(\"Job was sent for execution at %s\", get_jattr_str(pjob, JOB_ATR_exec_vnode)),\n\t\t\t  NULL, SET);\n\n\tif (old_subst != JOB_SUBSTATE_PROVISION)\n\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_RUNNING,\n\t\t\t\tJOB_SUBSTATE_PRERUN);\n\n\tif (send_job(pjob, pjob->ji_qs.ji_un.ji_exect.ji_momaddr,\n\t\t     pjob->ji_qs.ji_un.ji_exect.ji_momport, MOVE_TYPE_Exec,\n\t\t     post_sendmom, (void *) preq) == 2) {\n\t\tpjob->ji_prunreq = preq;\n\t\t/* Clear the suspend server flag. */\n\t\tpjob->ji_qs.ji_svrflags &= ~JOB_SVFLG_Suspend;\n\n\t\t/* in case of async ack runjob, we need to assign resources\n\t\t * since another scheduling cycle can happen before the\n\t\t * mom responds to the req_commit message. This is the\n\t\t * same logic that is done for jobs with files to stage\n\t\t * in\n\t\t */\n\t\tif (preq == NULL || (preq->rq_type == PBS_BATCH_AsyrunJob_ack) || (preq->rq_type == PBS_BATCH_AsyrunJob)) {\n\t\t\tjob *base_job = NULL;\n\t\t\tif (check_job_substate(pjob, JOB_SUBSTATE_PRERUN)) {\n\t\t\t\tset_resc_assigned((void *) pjob, 0, INCR);\n\t\t\t\t/* Just update dependencies for the first subjob that runs */\n\t\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_SubJob) &&\n\t\t\t\t    !check_job_state(pjob->ji_parentaj, JOB_STATE_LTR_BEGUN))\n\t\t\t\t\tbase_job = pjob->ji_parentaj;\n\t\t\t\telse\n\t\t\t\t\tbase_job = pjob;\n\t\t\t}\n\t\t\tif (base_job != NULL &&\n\t\t\t    is_jattr_set(base_job, JOB_ATR_depend)) {\n\t\t\t\tstruct depend *pdep;\n\t\t\t\tpdep = find_depend(JOB_DEPEND_TYPE_RUNONE, get_jattr(base_job, JOB_ATR_depend));\n\t\t\t\tif (pdep != NULL)\n\t\t\t\t\tdepend_runone_hold_all(base_job);\n\t\t\t}\n\t\t}\n\t\treturn (0);\n\t} else {\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_NOTICE,\n\t\t\t  pjob->ji_qs.ji_jobid,\n\t\t\t  \"Unable to Run Job, send to Mom failed\");\n\n\t\tif (check_job_substate(pjob, JOB_SUBSTATE_PROVISION) ||\n\t\t    check_job_substate(pjob, 
JOB_SUBSTATE_PRERUN))\n\t\t\trel_resc(pjob);\n\t\telse\n\t\t\tfree_nodes(pjob);\n\n\t\tclear_exec_on_run_fail(pjob);\n\t\tsvr_evaljobstate(pjob, &old_state, &old_subst, 1);\n\t\tsvr_setjobstate(pjob, old_state, old_subst);\n\t\treturn (pbs_errno);\n\t}\n}\n\n/**\n * @brief\n *\t\tComplete the process of placing a job into execution state\n * @par\n *\t\tRecords a bunch of information for accounting and resource management,\n *\t\tand sets substate to PRERUN if it isn't already.\n *\t\tThe substate moves to SUBSTATE_RUNNING when the session id is received\n *\t\tfrom Mom, meaning it is in fact running; see stat_update().\n * @par\n *\t\tNote, if a job is in substate PROVISION, the resources have already\n *\t\tbeen allocated.\n *\n * @param[in]\tjobp\t-\tpointer to job which is just starting to run.\n */\n\nvoid\ncomplete_running(job *jobp)\n{\n\tjob *parent;\n\n\tif (jobp->ji_qs.ji_stime != 0)\n\t\treturn; /* already called for this incarnation */\n\n\tjobp->ji_terminated = 0; /* reset terminated flag */\n\t/**\n\t *\tFor a subjob, ensure the parent array's state is set to 'B'\n\t *\tand deal with any dependency on the parent.\n\t */\n\tif (jobp->ji_qs.ji_svrflags & JOB_SVFLG_SubJob) {\n\t\t/* if this is first subjob to run, mark */\n\t\t/* parent Array as state \"Begun\"\t*/\n\t\tparent = jobp->ji_parentaj;\n\t\tif (check_job_state(parent, JOB_STATE_LTR_QUEUED) ||\n\t\t    (check_job_state(parent, JOB_STATE_LTR_BEGUN) && parent->ji_qs.ji_stime == 0)) {\n\t\t\tsvr_setjobstate(parent, JOB_STATE_LTR_BEGUN, JOB_SUBSTATE_BEGUN);\n\n\t\t\t/* Also set the parent job's stime */\n\t\t\tparent->ji_qs.ji_stime = time_now;\n\t\t\tset_jattr_l_slim(parent, JOB_ATR_stime, time_now, SET);\n\n\t\t\taccount_jobstr(parent, PBS_ACCT_RUN);\n\t\t\tset_jattr_str_slim(parent, JOB_ATR_Comment, form_attr_comment(\"Job Array Began at %s\", NULL), NULL);\n\n\t\t\t/* if any dependencies, see if action required */\n\t\t\tif (is_jattr_set(parent, 
JOB_ATR_depend))\n\t\t\t\tdepend_on_exec(parent);\n\n\t\t\tsvr_mailowner(parent, MAIL_BEGIN, MAIL_NORMAL, NULL);\n\t\t}\n\t}\n\t/* Job started ATR_Comment is set in server since scheduler cannot read\t*/\n\t/* the reply in case of error in asynchronous communication.\t*/\n\tset_jattr_str_slim(jobp, JOB_ATR_Comment, form_attr_comment(\"Job run at %s\", get_jattr_str(jobp, JOB_ATR_exec_vnode)), NULL);\n\n\tjobp->ji_qs.ji_svrflags &= ~JOB_SVFLG_HOTSTART;\n\n\t/* record start time for accounting and for the Scheduler */\n\t/* setting ji_stime is also an indicator that we have done all this */\n\n\tjobp->ji_qs.ji_stime = time_now;\n\tset_jattr_l_slim(jobp, JOB_ATR_stime, time_now, SET);\n\n\t/*\n\t * if job is in substate PROVISION, set to PRERUN.\n\t * It is possible that the job is in substate:\n\t * - RUNNING if Mom sent the status update first before we get to\n\t *   process the send_job SIGCHLD, see stat_update()\n\t * - EXITING if the Obit was received before send_job's exit status.\n\t */\n\tif (check_job_substate(jobp, JOB_SUBSTATE_PROVISION)) {\n\t\tsvr_setjobstate(jobp, JOB_STATE_LTR_RUNNING, JOB_SUBSTATE_PRERUN);\n\t\t/* above saves job structure */\n\t}\n\n\t/* update resource usage attributes */\n\t/* may have already been done for provisioning, but    */\n\t/* that will be detected inside of set_resc_assigned() */\n\tset_resc_assigned((void *) jobp, 0, INCR);\n\t/* These attributes need to be cleared/freed now that the job has been resumed */\n\tif (is_jattr_set(jobp, JOB_ATR_resc_released)) {\n\t\tfree_jattr(jobp, JOB_ATR_resc_released);\n\t\tmark_jattr_not_set(jobp, JOB_ATR_resc_released);\n\t}\n\n\tif (is_jattr_set(jobp, JOB_ATR_resc_released_list)) {\n\t\tfree_jattr(jobp, JOB_ATR_resc_released_list);\n\t\tmark_jattr_not_set(jobp, JOB_ATR_resc_released_list);\n\t}\n\n\t/* accounting log for start or restart */\n\tif (jobp->ji_qs.ji_svrflags & JOB_SVFLG_CHKPT)\n\t\taccount_record(PBS_ACCT_RESTRT, jobp, NULL);\n\telse\n\t\taccount_jobstr(jobp, 
PBS_ACCT_RUN);\n\n\t/* if any dependencies, see if action required */\n\n\tif (is_jattr_set(jobp, JOB_ATR_depend))\n\t\tdepend_on_exec(jobp);\n\n\tsvr_mailowner(jobp, MAIL_BEGIN, MAIL_NORMAL, NULL);\n\t/*\n\t * it is unfortunate, but while the job has gone into execution,\n\t * there is no way of obtaining the session id except by making\n\t * a status request of MOM.  (Even if the session id was passed\n\t * back to the sending child, it couldn't get up to the parent.)\n\t */\n}\n\n/**\n * @brief\n * \t\tHelper function to parse the hookname and hook_msg out of a hook rejection message\n *\n * @param[in]\treject_msg\t-\tThe hooks rejection message\n * @param[out]\thook_name\t-\tpointer to buffer to fill parsed hook_name\n * @param[in]\thook_name_size\t- The length of the hook name output buffer\n *\n * @return\tThe hook message\n * @retval\tNULL\t: Failed to parse out hook_name and hook_msg\n * @retval\t!NULL\t: The hook message\n */\nstatic char *\nparse_hook_rejectmsg(char *reject_msg, char *hook_name, int hook_name_size)\n{\n\tchar *p;\n\tif (reject_msg != NULL) {\n\t\tp = strchr(reject_msg, ',');\n\t\tif (p != NULL) {\n\t\t\t*p = '\\0';\n\t\t\tp++;\n\t\t\tstrncpy(hook_name, reject_msg, hook_name_size);\n\t\t\treturn p;\n\t\t}\n\t}\n\treturn NULL;\n}\n\n/**\n * @brief\n *\t\tCheck and put a hold on a job if it has already been run\n *\t\ttoo many times.\n *\n * @param[in,out]\tpjob\t-\tjob pointer\n *\n * @return\tvoid\n */\nvoid\ncheck_failed_attempts(job *jobp)\n{\n\tif (get_jattr_long(jobp, JOB_ATR_runcount) >\n#ifdef NAS /* localmod 083 */\n\t    PBS_MAX_HOPCOUNT\n#else\n\t    PBS_MAX_HOPCOUNT + PBS_MAX_HOPCOUNT\n#endif /* localmod 083 */\n\t) {\n\t\tset_jattr_b_slim(jobp, JOB_ATR_hold, HOLD_s, INCR);\n\t\tset_jattr_str_slim(jobp, JOB_ATR_Comment, \"job held, too many failed attempts to run\", NULL);\n\n\t\tif (jobp->ji_parentaj) {\n\t\t\tchar comment_buf[100 + PBS_MAXSVRJOBID];\n\t\t\tsvr_setjobstate(jobp->ji_parentaj, JOB_STATE_LTR_HELD, 
JOB_SUBSTATE_HELD);\n\t\t\tset_jattr_b_slim(jobp->ji_parentaj, JOB_ATR_hold, HOLD_s, INCR);\n\t\t\tsprintf(comment_buf, \"Job Array Held, too many failed attempts to run subjob %s\", jobp->ji_qs.ji_jobid);\n\t\t\tset_jattr_str_slim(jobp->ji_parentaj, JOB_ATR_Comment, comment_buf, NULL);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\tpost_sendmom - clean up action for child started in send_job\n *\t\twhich was sending a job \"home\" to MOM\n * @par\n * \t\tIf send was successful, mark job as executing.\n * \t\tSee comments in complete_running() above about the possible substate changes.\n *\n * \t\tThe job's session id will be updated when Mom first responds with\n * \t\tthe resources_used.\n *\n * \t\tIf send didn't work, requeue the job.\n *\n * \t\tIf the work_task has a non-null wt_parm2, it is the address of a batch\n * \t\trequest to which a reply must be sent.\n * @par\n * \t\tIf the ji_prunreq (pointer to the run request) is null, the run request\n * \t\thas already been replied to.  This might happen if the job's Obit is\n * \t\treceived prior to reaping the send_job child.  
In that case, we skip all\n * \t\tthis because the job has already \"run\" and is now in Exiting state.\n *\n * @param[in,out]\tpwt\t-\twork_task structure\n *\n * Returns: none.\n */\nvoid\npost_sendmom(struct work_task *pwt)\n{\n\tchar newstate;\n\tint newsub;\n\tint r;\n\tchar *reject_msg = NULL;\n\tint wstat = pwt->wt_aux;\n\tjob *jobp = (job *) pwt->wt_parm2;\n\tstruct batch_request *preq = (struct batch_request *) pwt->wt_parm1;\n\tint prot = pwt->wt_aux2;\n\tstruct batch_reply *reply = (struct batch_reply *) pwt->wt_parm3;\n\tchar dest_host[PBS_MAXROUTEDEST + 1];\n\tchar hook_name[PBS_HOOK_NAME_SIZE + 1] = {'\\0'};\n\tchar *hook_msg = NULL;\n\n\tif (jobp == NULL) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_INFO, \"\", \"post_sendmom failed, jobp NULL\");\n\t\tif (preq)\n\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\treturn;\n\t}\n\n\tDBPRT((\"post_sendmom: %s substate is %ld\", jobp->ji_qs.ji_jobid, get_job_substate(jobp)))\n\n\tif (jobp->ji_prunreq)\n\t\tjobp->ji_prunreq = NULL; /* set in svr_strtjob2() */\n\n\tif (prot == PROT_TCP) {\n\t\tif (WIFEXITED(wstat)) {\n\t\t\tr = WEXITSTATUS(wstat);\n\t\t} else if (WIFSIGNALED(wstat)) {\n\t\t\t/* Check if send_job child process has been signaled or not */\n\t\t\tr = SEND_JOB_SIGNAL;\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE, msg_job_end_sig, WTERMSIG(wstat));\n\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  jobp->ji_qs.ji_jobid, log_buffer);\n\t\t} else {\n\t\t\tr = SEND_JOB_RETRY;\n\t\t\tsprintf(log_buffer, msg_badexit, wstat);\n\t\t\tstrcat(log_buffer, __func__);\n\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  jobp->ji_qs.ji_jobid, log_buffer);\n\t\t}\n\n\t\t/* if the return code says it was hook error read the hook error message\n\t\t * from the file and parse out hookname and hook_msg\n\t\t */\n\t\tif ((r == SEND_JOB_HOOKERR) ||\n\t\t    (r == SEND_JOB_HOOK_REJECT) ||\n\t\t    (r == SEND_JOB_HOOK_REJECT_RERUNJOB) ||\n\t\t    (r == 
SEND_JOB_HOOK_REJECT_DELETEJOB)) {\n\n\t\t\tchar name_buf[MAXPATHLEN + 1];\n\t\t\tint fd;\n\t\t\tstruct stat sbuf;\n\n\t\t\tsnprintf(name_buf, sizeof(name_buf), \"%s%s%s\", path_hooks_workdir, jobp->ji_qs.ji_jobid,\n\t\t\t\t HOOK_REJECT_SUFFIX);\n\n\t\t\tif ((stat(name_buf, &sbuf) != -1) && (sbuf.st_size > 0)) {\n\n\t\t\t\tif ((fd = open(name_buf, O_RDONLY)) != -1) {\n\n\t\t\t\t\treject_msg = malloc(sbuf.st_size);\n\t\t\t\t\tif (reject_msg != NULL) {\n\t\t\t\t\t\tif (read(fd, reject_msg, sbuf.st_size) != sbuf.st_size) {\n\t\t\t\t\t\t\tsprintf(log_buffer, \"read %s is incomplete\", name_buf);\n\t\t\t\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\t\t\t\treject_msg[0] = '\\0';\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tclose(fd);\n\t\t\t\t\tunlink(name_buf);\n\t\t\t\t}\n\t\t\t}\n\t\t\thook_msg = parse_hook_rejectmsg(reject_msg, hook_name, PBS_HOOK_NAME_SIZE);\n\t\t}\n\n\t} else {\n\t\t/* in case of tpp, the pbs_errno is set in wstat, based\n\t\t * on which we determine value of r\n\t\t */\n\t\tswitch (wstat) {\n\t\t\tcase PBSE_NONE:\n\t\t\t\tr = SEND_JOB_OK;\n\t\t\t\tbreak;\n\t\t\tcase PBSE_NORELYMOM:\n\t\t\t\tr = SEND_JOB_NODEDW;\n\t\t\t\tbreak;\n\t\t\tcase PBSE_HOOKERROR:\n\t\t\t\tr = SEND_JOB_HOOKERR;\n\t\t\t\tbreak;\n\t\t\tcase PBSE_HOOK_REJECT:\n\t\t\t\tr = SEND_JOB_HOOK_REJECT;\n\t\t\t\tbreak;\n\t\t\tcase PBSE_HOOK_REJECT_RERUNJOB:\n\t\t\t\tr = SEND_JOB_HOOK_REJECT_RERUNJOB;\n\t\t\t\tbreak;\n\t\t\tcase PBSE_HOOK_REJECT_DELETEJOB:\n\t\t\t\tr = SEND_JOB_HOOK_REJECT_DELETEJOB;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tr = SEND_JOB_FATAL;\n\t\t\t\tbreak;\n\t\t}\n\n\t\t/* also take note of the reject msg if any */\n\t\tif (reply && reply->brp_choice == BATCH_REPLY_CHOICE_Text)\n\t\t\treject_msg = reply->brp_un.brp_txt.brp_str;\n\n\t\t/*\n\t\t * the above reject_msg should never be freed within this function\n\t\t * since it will be freed by the caller process_DreplyTPP() in the\n\t\t * case of a TPP based job send\n\t\t */\n\n\t\tif (r != SEND_JOB_OK) {\n\t\t\tif 
(reject_msg)\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"send of job to %s failed error = %d reject_msg=%s\",\n\t\t\t\t\tjobp->ji_qs.ji_destin, pbs_errno, reject_msg);\n\t\t\telse\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"send of job to %s failed error = %d\",\n\t\t\t\t\tjobp->ji_qs.ji_destin, pbs_errno);\n\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, jobp->ji_qs.ji_jobid, log_buffer);\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE,\n\t\t\t\t \"Not Running: PBS Error: %s\", pbse_to_txt(PBSE_MOMREJECT));\n\n\t\t\tif (jobp->ji_qs.ji_svrflags & JOB_SVFLG_SubJob) {\n\t\t\t\t/*\n\t\t\t\t * if the job is a subjob, set the comment of parent job array\n\t\t\t\t * only if the job array is in state Queued. Once the job\n\t\t\t\t * array starts its comment is set to a begun message and\n\t\t\t\t * should not change after that\n\t\t\t\t */\n\t\t\t\tif (check_job_state(jobp->ji_parentaj, JOB_STATE_LTR_QUEUED)) {\n\t\t\t\t\tset_jattr_str_slim(jobp->ji_parentaj, JOB_ATR_Comment, log_buffer, NULL);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/* if the job is a normal job or a subjob */\n\t\t\tset_jattr_generic(jobp, JOB_ATR_Comment, log_buffer, NULL, SET);\n\n\t\t\tif (pbs_errno == PBSE_MOM_REJECT_ROOT_SCRIPTS)\n\t\t\t\tcheck_failed_attempts(jobp);\n\t\t}\n\n\t\t/* in the case of hook error we parse the hook_name and hook msg */\n\t\tif ((r == SEND_JOB_HOOKERR) ||\n\t\t    (r == SEND_JOB_HOOK_REJECT) ||\n\t\t    (r == SEND_JOB_HOOK_REJECT_RERUNJOB) ||\n\t\t    (r == SEND_JOB_HOOK_REJECT_DELETEJOB)) {\n\n\t\t\thook_msg = parse_hook_rejectmsg(reject_msg, hook_name, PBS_HOOK_NAME_SIZE);\n\t\t}\n\t}\n\n\tif (!(check_job_substate(jobp, JOB_SUBSTATE_PRERUN) ||\n\t      check_job_substate(jobp, JOB_SUBSTATE_RUNNING) ||\n\t      check_job_substate(jobp, JOB_SUBSTATE_PROVISION))) {\n\t\tsprintf(log_buffer, \"send_job returned with exit status = %d and job substate = %ld\",\n\t\t\tr, get_job_substate(jobp));\n\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  
jobp->ji_qs.ji_jobid, log_buffer);\n\t}\n\n\tswitch (r) {\n\n\t\tcase SEND_JOB_OK: /* send to MOM went ok */\n\n\t\t\tif (preq)\n\t\t\t\treply_ack(preq);\n\t\t\tif ((check_job_substate(jobp, JOB_SUBSTATE_PRERUN)) ||\n\t\t\t    (check_job_substate(jobp, JOB_SUBSTATE_PROVISION)))\n\t\t\t\tcomplete_running(jobp);\n\t\t\tbreak;\n\n\t\tcase SEND_JOB_SIGNAL:\n\n\t\t\t/* send_job child process has been signaled\n\t\t\t * therefore kill the job if it is already\n\t\t\t * running on the MOM and force requeue the job\n\t\t\t */\n\t\t\tif (preq)\n\t\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\n\t\t\t/* need to record log message before aborting and\n\t\t\t * requeuing job both in server and accounting logs\n\t\t\t */\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE, \"%s\", msg_job_abort);\n\t\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_JOB | PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_INFO, jobp->ji_qs.ji_jobid, log_buffer);\n\n\t\t\t/* abort job irrespective of its presence\n\t\t\t * (may or may not be running) in a MOM\n\t\t\t */\n\t\t\tjob_abt(jobp, log_buffer);\n\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE, msg_init_substate, get_job_substate(jobp));\n\t\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_JOB | PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_INFO, jobp->ji_qs.ji_jobid, log_buffer);\n\n\t\t\t/* Force requeue the job since the job has been aborted by the server */\n\t\t\tforce_reque(jobp);\n\t\t\tbreak;\n\n\t\tcase SEND_JOB_NODEDW: /* node (mother superior) is down? 
*/\n\t\t\tmark_node_down(jobp->ji_qs.ji_destin, \"could not send job to mom\");\n\n\t\t\t/* fall through to requeue job */\n\n\t\tdefault: /* send failed, requeue the job */\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_NOTICE,\n\t\t\t\t  jobp->ji_qs.ji_jobid,\n\t\t\t\t  \"Unable to Run Job, MOM rejected\");\n\n\t\t\t/* release resources */\n\t\t\tif (check_job_substate(jobp, JOB_SUBSTATE_PROVISION) ||\n\t\t\t    check_job_substate(jobp, JOB_SUBSTATE_PRERUN))\n\t\t\t\trel_resc(jobp);\n\t\t\telse\n\t\t\t\tfree_nodes(jobp);\n\n\t\t\t/* delete stagein files if flag is set */\n\t\t\tif (jobp->ji_qs.ji_svrflags & JOB_SVFLG_StagedIn)\n\t\t\t\tif (remove_stagein(jobp) != 0) {\n\t\t\t\t\t/* if remove_stagein() failed, then   */\n\t\t\t\t\t/* clear the StagedIn flag on the job */\n\t\t\t\t\tjobp->ji_qs.ji_svrflags &= ~JOB_SVFLG_StagedIn;\n\t\t\t\t}\n\t\t\tsnprintf(dest_host, sizeof(dest_host), \"%s\", jobp->ji_qs.ji_destin);\n\t\t\tclear_exec_on_run_fail(jobp);\n\n\t\t\tif (!check_job_substate(jobp, JOB_SUBSTATE_ABORT)) {\n\t\t\t\tif (preq) {\n\t\t\t\t\tif ((r == SEND_JOB_HOOKERR) ||\n\t\t\t\t\t    (r == SEND_JOB_HOOK_REJECT) ||\n\t\t\t\t\t    (r == SEND_JOB_HOOK_REJECT_RERUNJOB) ||\n\t\t\t\t\t    (r == SEND_JOB_HOOK_REJECT_DELETEJOB)) {\n\t\t\t\t\t\tint err;\n\n\t\t\t\t\t\tif (r == SEND_JOB_HOOK_REJECT)\n\t\t\t\t\t\t\terr = PBSE_HOOK_REJECT;\n\t\t\t\t\t\telse if (r == SEND_JOB_HOOK_REJECT_RERUNJOB)\n\t\t\t\t\t\t\terr = PBSE_HOOK_REJECT_RERUNJOB;\n\t\t\t\t\t\telse if (r == SEND_JOB_HOOK_REJECT_DELETEJOB)\n\t\t\t\t\t\t\terr = PBSE_HOOK_REJECT_DELETEJOB;\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\terr = PBSE_HOOKERROR;\n\n\t\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB,\n\t\t\t\t\t\t\t  LOG_NOTICE, jobp->ji_qs.ji_jobid, pbse_to_txt(err));\n\n\t\t\t\t\t\treply_text(preq, err, hook_msg ? 
hook_msg : \"\");\n\t\t\t\t\t} else {\n\t\t\t\t\t\treq_reject(PBSE_MOMREJECT, 0, preq);\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif (r == SEND_JOB_HOOK_REJECT_DELETEJOB) {\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG,\n\t\t\t\t\t\t  PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t\t  jobp->ji_qs.ji_jobid,\n\t\t\t\t\t\t  \"Job aborted per a hook rejection\");\n\n\t\t\t\t\t/* Need to force queued state so */\n\t\t\t\t\t/* job_abt() call does not try   */\n\t\t\t\t\t/* to issue a kill job signal to mom */\n\t\t\t\t\tset_job_state(jobp, JOB_STATE_LTR_QUEUED);\n\t\t\t\t\tset_job_substate(jobp, JOB_SUBSTATE_QUEUED);\n\t\t\t\t\tjob_abt(jobp, msg_hook_reject_deletejob);\n\t\t\t\t\tbreak;\n\t\t\t\t} else if ((r == SEND_JOB_HOOKERR) ||\n\t\t\t\t\t   (r == SEND_JOB_HOOK_REJECT) ||\n\t\t\t\t\t   (r == SEND_JOB_HOOK_REJECT_RERUNJOB)) {\n\t\t\t\t\tcheck_failed_attempts(jobp);\n\t\t\t\t\tif (r == SEND_JOB_HOOKERR) {\n\t\t\t\t\t\thook *phook;\n\t\t\t\t\t\tphook = find_hook(hook_name);\n\t\t\t\t\t\tif (phook != NULL) {\n\t\t\t\t\t\t\tif ((phook->fail_action & HOOK_FAIL_ACTION_OFFLINE_VNODES) != 0) {\n\t\t\t\t\t\t\t\t/*\n\t\t\t\t\t\t\t\t * hook_buf must be large enough\n\t\t\t\t\t\t\t\t * to hold the hook_name and a\n\t\t\t\t\t\t\t\t * small amount of text.\n\t\t\t\t\t\t\t\t */\n\t\t\t\t\t\t\t\tchar hook_buf[PBS_HOOK_NAME_SIZE + 64];\n\n\t\t\t\t\t\t\t\tsnprintf(hook_buf, sizeof(hook_buf),\n\t\t\t\t\t\t\t\t\t \"offlined by hook '%s' due to hook error\",\n\t\t\t\t\t\t\t\t\t hook_name);\n\t\t\t\t\t\t\t\tmark_node_offline_by_mom(dest_host, hook_buf);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif ((phook->fail_action & HOOK_FAIL_ACTION_SCHEDULER_RESTART_CYCLE) != 0) {\n\n\t\t\t\t\t\t\t\tset_scheduler_flag(SCH_SCHEDULE_RESTART_CYCLE, dflt_scheduler);\n\t\t\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_INFO, phook->hook_name, \"requested for scheduler to restart cycle\");\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tsvr_evaljobstate(jobp, &newstate, &newsub, 
1);\n\t\t\t\tsvr_setjobstate(jobp, newstate, newsub);\n\t\t\t} else {\n\t\t\t\tif (preq)\n\t\t\t\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\t\t}\n\n\t\t\tbreak;\n\t}\n\n\tif (prot == PROT_TCP && reject_msg != NULL)\n\t\tfree(reject_msg); /* free this only in case of non-tpp since it was locally allocated */\n\n\treturn;\n}\n\n/**\n * @brief\n * \t\tchk_job_torun - check state and past execution host of a job for which\n *\t\tfiles are about to be staged in or the job is about to be run.\n * \t\tReturns pointer to job if all is ok, else returns null.\n *\n *\t\tpjob must point to an existing job structure\n *\n * @param[in,out]\tpreq\t-\tPointer to batch request\n * @param[in]\tpjob\t-\texisting job structure\n *\n * @return\tPointer to job\n * @retval\tnull\t: fail\n */\n\nstatic job *\nchk_job_torun(struct batch_request *preq, job *pjob)\n{\n\n\tif (pjob == NULL)\n\t\treturn pjob;\n\n\tif ((check_job_state(pjob, JOB_STATE_LTR_TRANSIT)) ||\n\t    (check_job_state(pjob, JOB_STATE_LTR_EXITING)) ||\n\t    (check_job_substate(pjob, JOB_SUBSTATE_STAGEGO)) ||\n\t    (check_job_substate(pjob, JOB_SUBSTATE_PRERUN)) ||\n\t    (check_job_substate(pjob, JOB_SUBSTATE_RUNNING))) {\n\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\treturn NULL;\n\t}\n\n\tif (preq->rq_type == PBS_BATCH_StageIn) {\n\t\tif (check_job_substate(pjob, JOB_SUBSTATE_STAGEIN)) {\n\t\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\t\treturn NULL;\n\t\t}\n\t}\n\treturn (pjob);\n}\n\n/**\n * @brief\n * \t\tDetermine where to execute the job\n *\n * @param[in,out]\tpreq\t-\tPointer to batch request\n * @param[in,out]\tpjob\t-\texisting job structure\n *\n * @return\tPointer to job\n * @retval\tnull\t: fail\n */\nstatic job *\nwhere_to_runjob(struct batch_request *preq, job *pjob)\n{\n\tchar *nspec;\n\tstruct rq_runjob *prun = &preq->rq_ind.rq_run;\n\tint rc;\n\n\tif ((pjob->ji_qs.ji_svrflags & (JOB_SVFLG_CHKPT | JOB_SVFLG_StagedIn)) ||\n\t    ((prun->rq_destin != NULL) && (*prun->rq_destin == '-') && (*(prun->rq_destin + 1) == 
'\\0'))) {\n\t\t/* Job has files staged, a checkpoint image, or \"qrun -H -\" was specified.\t*/\n\t\t/* Reuse assigned resources.\t\t\t\t\t\t\t*/\n\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HasNodes) == 0) {\n\t\t\t/* re-reserve nodes and leave exec_vnode as is */\n\t\t\t/* convert exec_vnode string into form like user spec */\n\t\t\tnspec = get_jattr_str(pjob, JOB_ATR_exec_vnode);\n\t\t\tif (nspec == NULL) {\n\t\t\t\t/* something's wrong, before we reject the */\n\t\t\t\t/* job let us clear the flags so the job can */\n\t\t\t\t/* run the next time around  */\n\t\t\t\tpjob->ji_qs.ji_svrflags &= ~(JOB_SVFLG_CHKPT |\n\t\t\t\t\t\t\t     JOB_SVFLG_StagedIn);\n\t\t\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\tif ((rc = assign_hosts(pjob, nspec, 0)) != 0) {\n\t\t\t\tfree(nspec);\n\t\t\t\treq_reject(rc, 0, preq);\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t}\n\t} else {\n\n\t\t/* job has not run before or need not run there again\t*/\n\t\t/* reserve nodes and set exec_vnode anew\t\t*/\n\n\t\tif ((prun->rq_destin == NULL) ||\n\t\t    (*prun->rq_destin == '\\0')) {\n\t\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\t\treturn NULL;\n\t\t}\n\n\t\tif ((is_jattr_set(pjob, JOB_ATR_exec_vnode)) != 0) {\n\t\t\t/*\n\t\t\t * Instruct MoM to discard the existing job before we assign new\n\t\t\t * resources. This ensures previously assigned resources are cleaned\n\t\t\t * up properly and prevents orphaned processes. If the job is not\n\t\t\t * discarded, files and directories created for the job will linger.\n\t\t\t */\n\t\t\tdiscard_job(pjob, \"Force qrun\", 1);\n\t\t}\n\n\t\trc = assign_hosts(pjob, prun->rq_destin, 1);\n\n\t\tif (rc != 0) {\n\t\t\treq_reject(rc, 0, preq);\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\t/* If the request did not come from the scheduler, update the comment. 
*/\n\tif (find_sched_from_sock(preq->rq_conn, CONN_SCHED_PRIMARY) == NULL) {\n\t\tchar comment[MAXCOMMENTLEN];\n\t\tnspec = get_jattr_str(pjob, JOB_ATR_exec_vnode);\n\t\tif ((nspec != NULL) && (*nspec != '\\0')) {\n\t\t\tsnprintf(comment, MAXCOMMENTLEN, \"Job manually qrun on %s\", nspec);\n\t\t} else {\n\t\t\tsnprintf(comment, MAXCOMMENTLEN, \"Job manually qrun.\");\n\t\t}\n\t\tset_jattr_str_slim(pjob, JOB_ATR_Comment, comment, NULL);\n\t}\n\n\treturn (pjob);\n}\n\n/**\n * @brief\n * \t\tassign_hosts - assign hosts (vnodes) to a job, which are specified (given) by:\n *\t\t1. the scheduler when it runs a job,\n *\t\t2. the operator as the -H option to qrun\n *\t\t3. from exec_vnode when required by checkpoint-restart or file stage-in\n *\n * @param[in,out]\tpjob\t-\tpointer to a job object\n * @param[in]\tgiven\t-\toriginal vnode list from scheduler/operator\n * @param[in]\tset_exec_vnode\t-\tif True (non-zero), this function is to create\n *                              \ta new hoststr including new job indices,\n *                              \totherwise return existing exec_host unchanged.\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t!0\t: error code\n */\n\nint\nassign_hosts(job *pjob, char *given, int set_exec_vnode)\n{\n\tchar *hoststr;\n\tchar *hoststr2;\n\tchar *vnodestoalloc;\n\tpbs_net_t momaddr = 0;\n\tunsigned int port;\n\tint rc = 0;\n\n\tif (svr_totnodes == 0) /* Must have nodes file */\n\t\treturn (PBSE_NONODES);\n\n\tif (given == NULL)\n\t\treturn (PBSE_IVALREQ);\n\n\t/* allocate the execution nodes and resources */\n\n\tif ((set_exec_vnode == 0) &&\n\t    (is_jattr_set(pjob, JOB_ATR_exec_host))) {\n\t\thoststr = get_jattr_str(pjob, JOB_ATR_exec_host);\n\t\thoststr2 = get_jattr_str(pjob, JOB_ATR_exec_host2);\n\t} else {\n\t\thoststr = NULL;\n\t\thoststr2 = NULL;\n\t}\n\n\trc = set_nodes((void *) pjob, JOB_OBJECT, given, &vnodestoalloc, &hoststr, &hoststr2,\n\t\t       set_exec_vnode, FALSE);\n\n\tif (rc == 0) {\n\t\tif (set_exec_vnode) 
{\n\t\t\tfree_jattr(pjob, JOB_ATR_exec_host);\n\t\t\tfree_jattr(pjob, JOB_ATR_exec_host2);\n\t\t\tfree_jattr(pjob, JOB_ATR_exec_vnode);\n\t\t\tset_jattr_str_slim(pjob, JOB_ATR_exec_vnode, vnodestoalloc, NULL);\n\t\t\tset_jattr_str_slim(pjob, JOB_ATR_exec_host, hoststr, NULL);\n\t\t\tset_jattr_str_slim(pjob, JOB_ATR_exec_host2, hoststr2, NULL);\n\t\t} else {\n\t\t\t/* leave exec_vnode alone and reuse old IP address */\n\t\t\tmomaddr = pjob->ji_qs.ji_un.ji_exect.ji_momaddr;\n\t\t\tport = pjob->ji_qs.ji_un.ji_exect.ji_momport;\n\t\t}\n\t\tstrncpy(pjob->ji_qs.ji_destin,\n\t\t\tparse_servername(hoststr, NULL),\n\t\t\tPBS_MAXROUTEDEST);\n\t\tif (momaddr == 0) {\n\t\t\tmomaddr = get_addr_of_nodebyname(pjob->ji_qs.ji_destin,\n\t\t\t\t\t\t\t &port);\n\t\t\tif (momaddr == 0) {\n\t\t\t\tfree_nodes(pjob);\n\t\t\t\tfree_jattr(pjob, JOB_ATR_exec_host);\n\t\t\t\tfree_jattr(pjob, JOB_ATR_exec_host2);\n\t\t\t\tfree_jattr(pjob, JOB_ATR_exec_vnode);\n\t\t\t\treturn (PBSE_BADHOST);\n\t\t\t}\n\t\t}\n\t\tpjob->ji_qs.ji_un.ji_exect.ji_momaddr = momaddr;\n\t\tpjob->ji_qs.ji_un.ji_exect.ji_momport = port;\n\t}\n\treturn (rc);\n}\n\n/**\n * @brief\n * \t\treq_defschedreply - handle the deferred scheduler reply call\n *\n * @param[in,out]\tpreq\t-\tPointer to batch request\n */\n\nvoid\nreq_defschedreply(struct batch_request *preq)\n{\n\tpbs_sched *psched;\n\tpbs_list_head *deferred_req;\n\tstruct deferred_request *pdefr;\n\n\tif (preq->rq_ind.rq_defrpy.rq_cmd != SCH_SCHEDULE_AJOB) {\n\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\treturn;\n\t}\n\n\tfind_assoc_sched_jid(preq->rq_ind.rq_defrpy.rq_id, &psched);\n\tdeferred_req = fetch_sched_deferred_request(psched, false);\n\tif (deferred_req == NULL) {\n\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\treturn;\n\t}\n\n\tfor (pdefr = (struct deferred_request *) GET_NEXT(*deferred_req);\n\t     pdefr;\n\t     pdefr = (struct deferred_request *) GET_NEXT(pdefr->dr_link)) {\n\t\tif (strcmp(preq->rq_ind.rq_defrpy.rq_id, pdefr->dr_id) == 
0)\n\t\t\tbreak;\n\t}\n\n\tif (pdefr == NULL) {\n\t\treq_reject(PBSE_UNKJOBID, 0, preq);\n\t\treturn;\n\t}\n\n\t/* reply to the original (deferred) request */\n\t/* if the connection for the original request (qrun) was closed */\n\t/* the pointer to it will have been nulled */\n\tif (pdefr->dr_preq != NULL) {\n\t\t/* \"preq\" points to the deferred reply from the Scheduler  */\n\t\t/* \"pdefr\" points to the original qrun batch request, this */\n\t\t/* request structure will be freed on the reply            */\n\n\t\tif (preq->rq_ind.rq_defrpy.rq_txt) {\n\t\t\t/* have a text string from the Scheduler to send to qrun */\n\t\t\treply_text(pdefr->dr_preq, preq->rq_ind.rq_defrpy.rq_err,\n\t\t\t\t   preq->rq_ind.rq_defrpy.rq_txt);\n\n\t\t} else if (preq->rq_ind.rq_defrpy.rq_err == 0) {\n\t\t\t/* no error, acknowledge qrun */\n\t\t\treply_send(pdefr->dr_preq);\n\n\t\t} else {\n\t\t\t/* was an error (without text string), send error to qrun */\n\t\t\treq_reject(preq->rq_ind.rq_defrpy.rq_err, 0, pdefr->dr_preq);\n\t\t}\n\t}\n\n\t/* unlink and free the deferred request entry */\n\tdelete_link(&pdefr->dr_link);\n\tfree(pdefr);\n\n\tclear_sched_deferred_request(psched);\n\n\treply_send(preq);\n}\n\n/**\n * @brief\n *\tconvert_job_to_resv - create a reservation out of the job\n * \t\t\t      and move the job to the newly created\n * \t\t\t      reservation.\n *\n * @param[in]\tpjob - pointer to the job object\n *\n * @return\tvoid\n */\n\nvoid\nconvert_job_to_resv(job *pjob)\n{\n\tsvrattrl *psatl;\n\tunsigned int len;\n\tpbs_list_head *plhed;\n\tstruct work_task *pwt;\n\tstruct batch_request *newreq;\n\n\tnewreq = alloc_br(PBS_BATCH_SubmitResv);\n\tif (newreq == NULL) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t  pjob->ji_qs.ji_jobid, \"batch request allocation failed, could not create reservation from the job\");\n\t\treturn;\n\t}\n\tnewreq->rq_type = PBS_BATCH_SubmitResv;\n\n\tget_jobowner(get_jattr_str(pjob, JOB_ATR_job_owner), 
newreq->rq_user);\n\n\tstrncpy(newreq->rq_host, get_jattr_str(pjob, JOB_ATR_submit_host), PBS_MAXHOSTNAME);\n\tnewreq->rq_perm = READ_WRITE | ATR_DFLAG_ALTRUN;\n\n\tnewreq->rq_ind.rq_queuejob.rq_jid[0] = '\\0';\n\tnewreq->rq_ind.rq_queuejob.rq_destin[0] = '\\0';\n\n\tlen = strlen(pjob->ji_qs.ji_jobid) + 1;\n\tplhed = &newreq->rq_ind.rq_queuejob.rq_attr;\n\tCLEAR_HEAD(newreq->rq_ind.rq_queuejob.rq_attr);\n\tif ((psatl = attrlist_create(ATTR_resv_job, NULL, len)) != NULL) {\n\t\tpsatl->al_flags = resv_attr_def[RESV_ATR_job].at_flags;\n\t\tstrcpy(psatl->al_value, pjob->ji_qs.ji_jobid);\n\t\tappend_link(plhed, &psatl->al_link, psatl);\n\t}\n\n\tif (issue_Drequest(PBS_LOCAL_CONNECTION, newreq, release_req, &pwt, 0) == -1) {\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t  pjob->ji_qs.ji_jobid, \"Could not create reservation from the job\");\n\t\tfree_br(newreq);\n\t}\n}\n"
  },
  {
    "path": "src/server/req_select.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n *\n * @brief\n * \t\tFunctions relating to the Select Job Batch Request and the Select-Status\n * \t\t(SelStat) Batch Request.\n *\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#define STAT_CNTL 1\n\n#include <sys/types.h>\n#include <stdlib.h>\n#include \"libpbs.h\"\n#include <string.h>\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"server.h\"\n#include \"credential.h\"\n#include \"batch_request.h\"\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"queue.h\"\n#include \"pbs_error.h\"\n#include \"log.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"pbs_sched.h\"\n\n/* Private Data */\n\n/* Global Data Items  */\n\nextern int resc_access_perm;\nextern pbs_list_head svr_alljobs;\nextern time_t time_now;\nextern char statechars[];\nextern long svr_history_enable;\nextern int scheduler_jobs_stat;\n\n/* Private Functions  */\n\nstatic int\nbuild_selist(svrattrl *, int perm, struct select_list **,\n\t     pbs_queue **, int *bad, char **pstate);\nstatic void free_sellist(struct select_list *pslist);\nstatic int sel_attr(attribute *, struct select_list *);\nstatic int select_job(job *, struct select_list *, int, int);\nstatic int select_subjob(char, struct select_list *);\n\n/**\n * @brief\n * \t\torder_chkpnt - 
provide order value for various checkpoint attribute values\n *\t\tn > s > c=minutes > c\n *\n * @param[in]\tattr\t-\tattribute structure\n *\n * @return\torder value\n * @retval\t0\t: no match\n * @retval\t!0\t: value according to the checkpoints\n */\n\nstatic int\norder_chkpnt(attribute *attr)\n{\n\tif (((is_attr_set(attr)) == 0) ||\n\t    (attr->at_val.at_str == 0))\n\t\treturn 0;\n\n\tswitch (*attr->at_val.at_str) {\n\t\tcase 'n':\n\t\t\treturn 5;\n\t\tcase 's':\n\t\t\treturn 4;\n\t\tcase 'c':\n\t\t\tif (*(attr->at_val.at_str + 1) != '\\0')\n\t\t\t\treturn 3;\n\t\t\telse\n\t\t\t\treturn 2;\n\t\tcase 'u':\n\t\t\treturn 1;\n\t\tdefault:\n\t\t\treturn 0;\n\t}\n}\n\n/**\n * @brief\n * \t\tcomp_chkpnt - compare two checkpoint attributes for selection\n *\n * @param[in]\tattr\t-\tattribute structure to compare\n * @param[in]\twith\t-\tattribute structure to compare with\n *\n * @return\tint\n * @retval\t0\t: same\n * @retval\t1\t: attr > with\n * @retval\t-1\t: attr < with\n */\n\nint\ncomp_chkpnt(attribute *attr, attribute *with)\n{\n\tint a;\n\tint w;\n\n\ta = order_chkpnt(attr);\n\tw = order_chkpnt(with);\n\n\tif (a == w)\n\t\treturn 0;\n\telse if (a > w)\n\t\treturn 1;\n\telse\n\t\treturn -1;\n}\n\n/**\n * @brief\n * \t\tcomp_state - compare the state of a job attribute (state) with that in\n *\t\ta select list (multiple state letters)\n *\n * @param[in]\tstate\t-\tstate of a job attribute\n * @param[in]\tselstate\t-\tselect list (multiple state letters)\n *\n * @return\tint\n * @retval\t0\t: match found\n * @retval\t1\t: no match\n * @retval\t-1\t: either state or selstate fields are empty\n */\nstatic int\ncomp_state(attribute *state, attribute *selstate)\n{\n\tchar *ps;\n\n\tif (!state || !selstate || !selstate->at_val.at_str)\n\t\treturn (-1);\n\n\tfor (ps = selstate->at_val.at_str; *ps; ++ps) {\n\t\tif (*ps == state->at_val.at_char)\n\t\t\treturn (0);\n\t}\n\treturn (1);\n}\n\nstatic attribute_def state_sel = 
{\n\tATTR_state,\n\tdecode_str,\n\tencode_str,\n\tset_str,\n\tcomp_state,\n\tfree_str,\n\tNULL_FUNC,\n\tREAD_ONLY,\n\tATR_TYPE_STR,\n\tPARENT_TYPE_JOB};\n\n/**\n * @brief\n * \t\tchk_job_statenum - check the state of a job (actual numeric state) with\n * \t\ta list of state letters\n *\n * @param[in]\tstate_ltr\t-\tstate of a job as a letter\n * @param[in]\tstatelist\t-\tlist of state letters\n *\n * @return\tint\n * @retval\t0\t: no match\n * @retval\t1\t: match found\n */\nstatic int\nchk_job_statenum(char state_ltr, char *statelist)\n{\n\tif (statelist == NULL)\n\t\treturn 1;\n\n\tif (strchr(statelist, (int) state_ltr))\n\t\treturn 1;\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tadd_select_entry - add one jobid entry to the select return\n *\n * @param[in]\tjid\t-\tjobid entry\n * @param[in,out]\tpselx\t-\tselect return\n *\n * @return\tint\n * @retval\t0\t: error and not added\n * @retval\t1\t: added\n */\nstatic int\nadd_select_entry(char *jid, struct brp_select ***pselx)\n{\n\tstruct brp_select *pselect;\n\n\tif (jid == NULL)\n\t\treturn 0;\n\n\tpselect = (struct brp_select *) malloc(sizeof(struct brp_select));\n\tif (pselect == NULL)\n\t\treturn 0;\n\n\tpselect->brp_next = NULL;\n\t(void) strcpy(pselect->brp_jobid, jid);\n\t**pselx = pselect;\n\t*pselx = &pselect->brp_next;\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tadd_select_array_entries - add one jobid entry to the select return\n *\t\tfor each subjob whose state matches\n *\n * @param[in]\tpjob\t-\tpointer to job\n * @param[in]\tdosub\t-\ttreat as a normal job or array job\n * @param[in]\tstatelist\t-\tIf statelist is NULL, then no need to check anything,\n * \t\t\t\t\t\t\t\tjust add the subjobs to the return list.\n * @param[in,out]\tpselx\t-\tselect return\n * @param[in]\tpsel\t-\tpointer to select list\n *\n * @return\tint\n * @retval\t0\t: error and not added\n * @retval\t>0\t: no. 
of entries added\n */\nstatic int\nadd_select_array_entries(job *pjob, int dosub, char *statelist,\n\t\t\t struct brp_select ***pselx,\n\t\t\t struct select_list *psel)\n{\n\tint ct = 0;\n\tint i;\n\n\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_SubJob)\n\t\treturn 0;\n\telse if ((dosub == 0) ||\n\t\t (pjob->ji_qs.ji_svrflags & JOB_SVFLG_ArrayJob) == 0) {\n\t\t/* is or treat as a normal job */\n\t\tct = add_select_entry(pjob->ji_qs.ji_jobid, pselx);\n\t} else {\n\t\t/* Array Job */\n\t\tfor (i = pjob->ji_ajinfo->tkm_start; i <= pjob->ji_ajinfo->tkm_end; i += pjob->ji_ajinfo->tkm_step) {\n\t\t\t/*\n\t\t\t * If statelist is NULL, then no need to check anything,\n\t\t\t * just add the subjobs to the return list.\n\t\t\t */\n\t\t\tchar sjst;\n\t\t\tjob *sj = get_subjob_and_state(pjob, i, &sjst, NULL);\n\t\t\tif (sjst == JOB_STATE_LTR_UNKNOWN)\n\t\t\t\tcontinue;\n\t\t\tif ((statelist == NULL) ||\n\t\t\t    (select_subjob(sjst, psel))) {\n\t\t\t\tct += add_select_entry(sj ? sj->ji_qs.ji_jobid : create_subjob_id(pjob->ji_qs.ji_jobid, i), pselx);\n\t\t\t}\n\t\t}\n\t}\n\n\treturn ct;\n}\n\n/**\n * @brief\n * \tService both the Select Job Request and the (special for the scheduler)\n * \tSelect-status Job Request\n *\n *\tThis request selects jobs based on a supplied criteria and returns\n *\tSelect   - a list of the job identifiers which meet the criteria\n *\tSel_stat - a list of the status of the jobs that meet the criteria\n *\t             and only the list of specified attributes if specified\n *\n * @param[in,out] preq - Select Job Request or Select-status Job Request\n *\n * @return void\n *\n */\nvoid\nreq_selectjobs(struct batch_request *preq)\n{\n\tint bad = 0;\n\tint i;\n\tjob *pjob;\n\tsvrattrl *plist;\n\tpbs_queue *pque;\n\tstruct batch_reply *preply;\n\tstruct brp_select **pselx;\n\tint dosubjobs = 0;\n\tint dohistjobs = 0;\n\tchar *pstate = NULL;\n\tint rc;\n\tstruct select_list *selistp;\n\tpbs_sched *psched;\n\n\tif (preq->rq_extend != NULL) {\n\t\t/*\n\t\t * 
if the letter T (or t) is in the extend string, select subjobs\n\t\t *\n\t\t * if the letter S is in the extend string, select real jobs,\n\t\t * regular and running subjobs as requested by the Scheduler.\n\t\t */\n\t\tif (strchr(preq->rq_extend, 'T') || strchr(preq->rq_extend, 't'))\n\t\t\tdosubjobs = 1;\n\t\telse if (strchr(preq->rq_extend, 'S'))\n\t\t\tdosubjobs = 2;\n\t\t/*\n\t\t * If the letter x is in the extend string, check if the server is\n\t\t * configured for job history info. If it is not set, or is set to FALSE,\n\t\t * return with a PBSE_JOBHISTNOTSET error. Otherwise, select history\n\t\t * jobs as well.\n\t\t */\n\t\tif (strchr(preq->rq_extend, 'x')) {\n\t\t\tif (svr_history_enable == 0) {\n\t\t\t\treq_reject(PBSE_JOBHISTNOTSET, 0, preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tdohistjobs = 1;\n\t\t}\n\t}\n\n\t/*\n\t * The first selstat() call from the scheduler indicates that a cycle\n\t * is in progress and has reached the point of querying for jobs.\n\t *\n\t * TODO: This approach must be revisited if the scheduler changes its\n\t * approach to query for jobs, e.g., by issuing a single pbs_statjob()\n\t * instead of a per-queue selstat()\n\t */\n\tpsched = find_sched_from_sock(preq->rq_conn, CONN_SCHED_PRIMARY);\n\tif (psched != NULL && psched == dflt_scheduler && !scheduler_jobs_stat)\n\t\tscheduler_jobs_stat = 1;\n\n\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_select.rq_selattr);\n\trc = build_selist(plist, preq->rq_perm, &selistp, &pque, &bad, &pstate);\n\tif (rc != 0) {\n\t\treply_badattr(rc, bad, plist, preq);\n\t\tfree_sellist(selistp);\n\t\treturn;\n\t}\n\n\t/* setup the appropriate return */\n\tpreply = &preq->rq_reply;\n\tif (preq->rq_type == PBS_BATCH_SelectJobs) {\n\t\tpreply->brp_choice = BATCH_REPLY_CHOICE_Select;\n\t\tpreply->brp_un.brp_select = NULL;\n\t} else {\n\t\tpreply->brp_choice = BATCH_REPLY_CHOICE_Status;\n\t\tCLEAR_HEAD(preply->brp_un.brp_status);\n\t}\n\tpselx = &preply->brp_un.brp_select;\n\tpreply->brp_count = 0;\n\n\t/* 
now start checking for jobs that match the selection criteria */\n\tif (pque)\n\t\tpjob = (job *) GET_NEXT(pque->qu_jobs);\n\telse\n\t\tpjob = (job *) GET_NEXT(svr_alljobs);\n\twhile (pjob) {\n\t\tif (get_sattr_long(SVR_ATR_query_others) || svr_authorize_jobreq(preq, pjob) == 0) {\n\n\t\t\t/*\n\t\t\t * either job owner or has special permission to see job\n\t\t\t * and\n\t\t\t * look at the job and see if the required attributes match\n\t\t\t * If \"T\" was specified, dosubjobs is set, and if the job is\n\t\t\t * an Array Job, then the State is Not checked. The State\n\t\t\t * must be checked against the state of each Subjob\n\t\t\t */\n\n\t\t\tif (select_job(pjob, selistp, dosubjobs, dohistjobs)) {\n\n\t\t\t\t/* job is selected, include in reply */\n\t\t\t\tif (preq->rq_type == PBS_BATCH_SelectJobs) {\n\n\t\t\t\t\t/* Select Jobs Reply */\n\n\t\t\t\t\tpreply->brp_count += add_select_array_entries(pjob, dosubjobs, pstate, &pselx, selistp);\n\n\t\t\t\t} else if ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_SubJob) == 0 || dosubjobs == 2) {\n\n\t\t\t\t\t/* Select-Status Reply */\n\n\t\t\t\t\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_select.rq_rtnattr);\n\t\t\t\t\tif (dosubjobs == 1 && pjob->ji_ajinfo) {\n\t\t\t\t\t\tfor (i = pjob->ji_ajinfo->tkm_start; i <= pjob->ji_ajinfo->tkm_end; i += pjob->ji_ajinfo->tkm_step) {\n\t\t\t\t\t\t\tchar sjst = JOB_STATE_LTR_QUEUED;\n\n\t\t\t\t\t\t\tget_subjob_and_state(pjob, i, &sjst, NULL);\n\t\t\t\t\t\t\tif (sjst == JOB_STATE_LTR_UNKNOWN)\n\t\t\t\t\t\t\t\tcontinue;\n\t\t\t\t\t\t\tif (pstate == 0 || chk_job_statenum(sjst, pstate)) {\n\t\t\t\t\t\t\t\tif (preply->brp_count >= MAX_JOBS_PER_REPLY) {\n\t\t\t\t\t\t\t\t\trc = reply_send_status_part(preq);\n\t\t\t\t\t\t\t\t\tif (rc != PBSE_NONE)\n\t\t\t\t\t\t\t\t\t\treturn;\n\t\t\t\t\t\t\t\t\tpreply->brp_count = 0;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\trc = status_subjob(pjob, preq, plist, i, &preply->brp_un.brp_status, &bad, 0);\n\t\t\t\t\t\t\t\tif (rc && rc != PBSE_PERM)\n\t\t\t\t\t\t\t\t\tgoto 
out;\n\t\t\t\t\t\t\t\tplist = (svrattrl *) GET_NEXT(preq->rq_ind.rq_select.rq_rtnattr);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t} else {\n\t\t\t\t\t\trc = status_job(pjob, preq, plist, &preply->brp_un.brp_status, &bad, 0);\n\t\t\t\t\t\tif (rc && rc != PBSE_PERM)\n\t\t\t\t\t\t\tgoto out;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif (pque)\n\t\t\tpjob = (job *) GET_NEXT(pjob->ji_jobque);\n\t\telse\n\t\t\tpjob = (job *) GET_NEXT(pjob->ji_alljobs);\n\t\tif (preq->rq_type != PBS_BATCH_SelectJobs && preply->brp_count >= MAX_JOBS_PER_REPLY && pjob) {\n\t\t\trc = reply_send_status_part(preq);\n\t\t\tif (rc != PBSE_NONE)\n\t\t\t\treturn;\n\t\t}\n\t}\nout:\n\tfree_sellist(selistp);\n\tif (rc)\n\t\treq_reject(rc, 0, preq);\n\telse\n\t\treply_send(preq);\n}\n\n/**\n * @brief\n * \t\tselect_job - determine if a single job matches the selection criteria\n *\n * @param[in]\tpjob\t-\tpointer to job\n * @param[in]\tpsel\t-\tselection list\n * @param[in]\tdosubjobs\t-\tWhether subjobs need to be checked.\n * @param[in]\tdohistjobs\t-\tIf not being asked for history jobs specifically,\n * \t\t\t\t\t\t\t\t\tthen just skip them; otherwise include them.\n *\n * @return\tint\n * @retval\t0\t: no match\n * @retval\t1\t: matches\n */\n\nstatic int\nselect_job(job *pjob, struct select_list *psel, int dosubjobs, int dohistjobs)\n{\n\n\t/*\n\t * If not being asked for history jobs specifically, then just skip\n\t * them; otherwise include them, i.e. 
if the batch request has the special\n\t * extended flag 'x'.\n\t */\n\tif ((!dohistjobs) && ((check_job_state(pjob, JOB_STATE_LTR_FINISHED)) ||\n\t\t\t      (check_job_state(pjob, JOB_STATE_LTR_MOVED)))) {\n\t\treturn 0;\n\t}\n\n\tif ((dosubjobs == 2) && (pjob->ji_qs.ji_svrflags & JOB_SVFLG_SubJob) &&\n\t    (!check_job_state(pjob, JOB_STATE_LTR_EXITING)) &&\n\t    (!check_job_state(pjob, JOB_STATE_LTR_RUNNING))) /* select only exiting or running subjobs */\n\t\treturn 0;\n\n\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_ArrayJob) == 0)\n\t\tdosubjobs = 0; /* not an Array Job,  ok to check state */\n\telse if ((dosubjobs != 2) &&\n\t\t (pjob->ji_qs.ji_svrflags & JOB_SVFLG_SubJob))\n\t\treturn 0; /* don't bother to look at sub job */\n\n\tfor (; psel; psel = psel->sl_next) {\n\n\t\tif (psel->sl_atindx == (int) JOB_ATR_userlst) {\n\t\t\tif (!acl_check(&psel->sl_attr, get_jattr_str(pjob, JOB_ATR_job_owner), ACL_User))\n\t\t\t\treturn (0);\n\n\t\t} else if (!dosubjobs || (psel->sl_atindx != JOB_ATR_state)) {\n\t\t\tif (!sel_attr(get_jattr(pjob, psel->sl_atindx), psel)) {\n\t\t\t\t/* Make sure we haven't incorrectly dismissed a suspended job */\n\t\t\t\tif (psel->sl_atindx == JOB_ATR_state && get_attr_str(&psel->sl_attr)[0] == 'S') {\n\t\t\t\t\tif (check_job_state(pjob, JOB_STATE_LTR_RUNNING) &&\n\t\t\t\t\t    (check_job_substate(pjob, JOB_SUBSTATE_SCHSUSP) ||\n\t\t\t\t\t     check_job_substate(pjob, JOB_SUBSTATE_SUSPEND)))\n\t\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\treturn 0;\n\t\t\t} else if (psel->sl_atindx == JOB_ATR_state && get_attr_str(&psel->sl_attr)[0] == 'R') {\n\t\t\t\t/* Make sure we don't incorrectly select suspended jobs */\n\t\t\t\tif (check_job_substate(pjob, JOB_SUBSTATE_SCHSUSP) || check_job_substate(pjob, JOB_SUBSTATE_SUSPEND))\n\t\t\t\t\treturn 0;\n\t\t\t}\n\t\t}\n\t}\n\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tsel_attr - determine if attribute is according to the selection operator\n *\n * @param[in]\tjobat\t-\tjob attribute\n * 
@param[in]\tpselst\t-\tselection operator\n *\n * @return\tint\n * @retval\t0\t: attribute does not meet criteria\n * @retval\t1\t: attribute meets criteria\n *\n */\n\nstatic int\nsel_attr(attribute *jobat, struct select_list *pselst)\n{\n\tint rc;\n\tresource *rescjb;\n\tresource *rescsl;\n\n\tif (pselst->sl_attr.at_type == ATR_TYPE_RESC) {\n\n\t\t/* Only one resource per selection entry, \t\t*/\n\t\t/* find the matching resource in the job attribute\t*/\n\n\t\trescsl = (resource *) GET_NEXT(pselst->sl_attr.at_val.at_list);\n\t\trescjb = find_resc_entry(jobat, rescsl->rs_defin);\n\n\t\tif (rescjb && (is_attr_set(&rescjb->rs_value)))\n\t\t\t/* found match, compare them */\n\t\t\trc = pselst->sl_def->at_comp(&rescjb->rs_value, &rescsl->rs_value);\n\t\telse /* none in the job, force to .lt. */\n\t\t\trc = -1;\n\n\t} else {\n\t\t/* \"normal\" attribute */\n\n\t\trc = pselst->sl_def->at_comp(jobat, &pselst->sl_attr);\n\t}\n\n\tif (rc < 0) {\n\t\tif ((pselst->sl_op == NE) ||\n\t\t    (pselst->sl_op == LT) ||\n\t\t    (pselst->sl_op == LE))\n\t\t\treturn (1);\n\n\t} else if (rc > 0) {\n\t\tif ((pselst->sl_op == NE) ||\n\t\t    (pselst->sl_op == GT) ||\n\t\t    (pselst->sl_op == GE))\n\t\t\treturn (1);\n\n\t} else { /* rc == 0 */\n\t\tif ((pselst->sl_op == EQ) ||\n\t\t    (pselst->sl_op == GE) ||\n\t\t    (pselst->sl_op == LE))\n\t\t\treturn (1);\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tFree a select_list list created by build_selist()\n * @par\n *\t\tFor each entry in the select_list free the enclosed attribute entry\n *\t\tusing the index into the job_attr_def array in sl_atindx.  For an\n *\t\tattribute of type resource, this is the index of the resource type\n *\t\tattribute (typically Resource_List).  Whereas sl_def is specific to\n *\t\tthe resource in the list headed by that attribute.  
There is only one\n *\t\tresource per select_list entry.\n *\n * @param[in]\tpslist\t-\tpointer to first entry in the select list.\n *\n * @return\tnone\n */\n\nstatic void\nfree_sellist(struct select_list *pslist)\n{\n\tstruct select_list *next;\n\n\twhile (pslist) {\n\t\tnext = pslist->sl_next;\n\t\tif (pslist->sl_atindx == JOB_ATR_state)\n\t\t\tstate_sel.at_free(&pslist->sl_attr);\n\t\telse\n\t\t\tfree_attr(job_attr_def, &pslist->sl_attr, pslist->sl_atindx);\n\t\t(void) free(pslist); /* free the entry */\n\t\tpslist = next;\n\t}\n}\n\n/**\n * @brief\n * \t\tbuild_selentry - build a single entry for a select list\n *\n * @param[in]\tpslist\t-\tsvrattrl structure from which we decode the select list\n * @param[in]\tpdef\t-\tattribute_def structure.\n * @param[in]\tperm\t-\tpermission\n * @param[out]\trtnentry\t-\tpointer to the single entry for the select list\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t!0\t: error code\n *\n */\n\nstatic int\nbuild_selentry(svrattrl *plist, attribute_def *pdef, int perm, struct select_list **rtnentry)\n{\n\tstruct select_list *entry;\n\tresource_def *prd;\n\tint old_perms = resc_access_perm;\n\tint rc;\n\n\t/* create a select list entry for this attribute */\n\n\tentry = (struct select_list *)\n\t\tmalloc(sizeof(struct select_list));\n\tif (entry == NULL)\n\t\treturn (PBSE_SYSTEM);\n\n\tentry->sl_next = NULL;\n\n\tclear_attr(&entry->sl_attr, pdef);\n\n\tif (!(pdef->at_flags & ATR_DFLAG_RDACC & perm)) {\n\t\t(void) free(entry);\n\t\treturn (PBSE_PERM); /* no read permission */\n\t}\n\tif ((pdef->at_flags & ATR_DFLAG_SELEQ) && (plist->al_op != EQ) &&\n\t    (plist->al_op != NE)) {\n\t\t/* can only select eq/ne on this attribute */\n\t\t(void) free(entry);\n\t\treturn (PBSE_IVALREQ);\n\t}\n\n\t/*\n\t * If a resource is marked flag=r in resourcedef\n\t * we need to force the decode function to\n\t * decode it to allow us to select upon it.\n\t */\n\tif (plist->al_resc != NULL) {\n\t\tprd = find_resc_def(svr_resc_def, 
plist->al_resc);\n\t\tif (prd != NULL && (prd->rs_flags & NO_USER_SET) == NO_USER_SET) {\n\t\t\tresc_access_perm = ATR_DFLAG_ACCESS;\n\t\t}\n\t}\n\n\t/* decode the attribute into the entry */\n\n\trc = set_attr_generic(&entry->sl_attr, pdef, plist->al_value, plist->al_resc, INTERNAL);\n\n\tresc_access_perm = old_perms;\n\tif (rc) {\n\t\tif (rc == PBSE_UNKRESC) {\n\t\t\t/* The resource was unknown, free the allocated attribute */\n\t\t\tpdef->at_free(&entry->sl_attr);\n\t\t}\n\t\t(void) free(entry);\n\t\treturn (rc);\n\t}\n\tif (!is_attr_set(&entry->sl_attr)) {\n\t\t(void) free(entry);\n\t\treturn (PBSE_BADATVAL);\n\t}\n\n\t/*\n\t * save the pointer to the attribute definition,\n\t * if a resource, use the resource specific one\n\t */\n\n\tif (entry->sl_attr.at_type == ATR_TYPE_RESC) {\n\t\tentry->sl_def = (attribute_def *) find_resc_def(svr_resc_def, plist->al_resc);\n\t\tif (!entry->sl_def) {\n\t\t\t(void) free(entry);\n\t\t\treturn (PBSE_UNKRESC);\n\t\t}\n\t} else\n\t\tentry->sl_def = pdef;\n\n\t/* save the selection operator to pass along */\n\n\tentry->sl_op = plist->al_op;\n\n\t*rtnentry = entry;\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tbuild_selist - build the list of select_list structures based on\n *\t\tthe svrattrl structures in the request.\n * @par\n *\t\tFunction returns non-zero on an error, also returns into last\n *\t\tfour entries of the parameter list.\n *\n * @param[in]\tplist\t-\tsvrattrl structure from which we decode the select list\n * @param[in]\tperm\t-\tpermission\n * @param[out]\tpselist\t-\tRETURN : select list\n * @param[out]\tpque\t-\tRETURN : queue ptr if limit to que\n * @param[out]\tbad\t-\tRETURN - index of bad attr\n * @param[out]\tpstate\t-\tRETURN - pointer to required state\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t!0\t: error code\n */\n\nstatic int\nbuild_selist(svrattrl *plist, int perm, struct select_list **pselist, pbs_queue **pque, int *bad, char **pstate)\n{\n\tstruct select_list *entry;\n\tint 
i;\n\tchar *pc;\n\tattribute_def *pdef;\n\tstruct select_list *prior = NULL;\n\tint rc;\n\n\t/* set permission for decode_resc() */\n\n\tresc_access_perm = perm;\n\n\t*pque = NULL;\n\t*bad = 0;\n\t*pselist = NULL;\n\twhile (plist) {\n\t\t(*bad)++; /* list counter in case one is bad */\n\n\t\t/* go for all jobs unless a \"destination\" other than */\n\t\t/* \"@server\" is specified\t\t\t    */\n\n\t\tif (!strcasecmp(plist->al_name, ATTR_q)) {\n\t\t\tif (plist->al_valln) {\n\t\t\t\tif (((pc = strchr(plist->al_value, (int) '@')) == 0) ||\n\t\t\t\t    (pc != plist->al_value)) {\n\n\t\t\t\t\t/* does specified destination exist? */\n\n\t\t\t\t\t*pque = find_queuebyname(plist->al_value);\n#ifdef NAS /* localmod 075 */\n\t\t\t\t\tif (*pque == NULL)\n\t\t\t\t\t\t*pque = find_resvqueuebyname(plist->al_value);\n#endif /* localmod 075 */\n\t\t\t\t\tif (*pque == NULL)\n\t\t\t\t\t\treturn (PBSE_UNKQUE);\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\ti = find_attr(job_attr_idx, job_attr_def, plist->al_name);\n\t\t\tif (i < 0)\n\t\t\t\treturn (PBSE_NOATTR); /* no such attribute */\n\n\t\t\tif (i == JOB_ATR_state) {\n\t\t\t\tpdef = &state_sel;\n\t\t\t\t*pstate = plist->al_value;\n\t\t\t} else {\n\t\t\t\tpdef = job_attr_def + i;\n\t\t\t}\n\n\t\t\t/* create a select list entry for this attribute */\n\n\t\t\trc = build_selentry(plist, pdef, perm, &entry);\n\t\t\tif (rc)\n\t\t\t\treturn rc;\n\t\t\tentry->sl_atindx = i;\n\n\t\t\t/* add the entry to the select list */\n\n\t\t\tif (prior)\n\t\t\t\tprior->sl_next = entry; /* link into list */\n\t\t\telse\n\t\t\t\t*pselist = entry; /* return start of list */\n\t\t\tprior = entry;\n\t\t}\n\t\tplist = (svrattrl *) GET_NEXT(plist->al_link);\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n *\t\tSelect subjob by matching the specified state with select_list\n *\n * @par\n *\t\tLinkage scope: Local(static)\n *\n * @par Functionality:\n *\t\tThis function walks through the select list (which is basically a\n *\t\tlinked list of attribute structures built by 
build_selist()). Skips\n *\t\tthe select_list structure if the index is not JOB_ATR_state.\n *\n * @see\tadd_select_array_entries()\n *\n * @param[in]\tstate\t-\tstate of the subjob\n * @param[in]\tpsel\t-\tpointer to select list\n *\n * @return\tint\n *\n * @retval\t0\t- failure: no match\n * @retval\t1\t- success: selected subjob\n *\n * @par MT-safety: NO\n *\n */\n\nstatic int\nselect_subjob(char state, struct select_list *psel)\n{\n\tattribute *selstate;\n\n\tfor (; psel; psel = psel->sl_next) {\n\t\tif (psel->sl_atindx != JOB_ATR_state)\n\t\t\tcontinue;\n\t\tselstate = &psel->sl_attr;\n\t\tif (!chk_job_statenum(state, selstate->at_val.at_str))\n\t\t\treturn (0);\n\t}\n\treturn (1);\n}\n
  },
  {
    "path": "src/server/req_shutdown.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\treq_shutdown.c\n *\n * @brief\n * \t\treq_shutdown.c - contains the functions to shut down the server\n *\n * Included functions are:\n * \tsvr_shutdown()\n * \tshutdown_ack()\n * \treq_shutdown()\n * \tshutdown_preempt_chkpt()\n * \tpost_chkpt()\n * \trerun_or_kill()\n *\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <sys/types.h>\n#include \"libpbs.h\"\n#include <errno.h>\n#include <fcntl.h>\n#include <signal.h>\n#include <string.h>\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"work_task.h\"\n#include \"log.h\"\n#include \"attribute.h\"\n#include \"server.h\"\n#include \"credential.h\"\n#include \"batch_request.h\"\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"queue.h\"\n#include \"pbs_error.h\"\n#include \"sched_cmds.h\"\n#include \"acct.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n\n/* Private Functions Local to this File */\n\nint shutdown_preempt_chkpt(job *);\nvoid post_hold(struct work_task *);\nstatic void post_chkpt(struct work_task *);\nstatic void rerun_or_kill(job *, char *text);\n\n/* Private Data Items */\n\nstatic struct batch_request *pshutdown_request = 0;\n\n/* Global Data Items: */\n\nextern pbs_list_head svr_alljobs;\nextern char *msg_abort_on_shutdown;\nextern char *msg_daemonname;\nextern char 
*msg_init_queued;\nextern char *msg_shutdown_op;\nextern char *msg_shutdown_start;\nextern char *msg_leftrunning;\nextern char *msg_stillrunning;\nextern char *msg_on_shutdown;\nextern char *msg_job_abort;\n\nextern pbs_list_head task_list_event;\nextern struct server server;\nextern attribute_def svr_attr_def[];\n\n/**\n * @brief\n *\t\tPerform the start of the shutdown procedure for the server\n *\n * @par failover\n *\t\tIn a failover environment, we may need to tell the Secondary to either\n *\t\tstay inactive, to shut down (only the Secondary) or to shut down as well.\n *\t\tIn the cases where the Primary is also going down, we want to wait\n *\t\tfor an acknowledgement from the Secondary.  That is done by or-ing\n *\t\tSV_STATE_PRIMDLY into the server internal state.   See failover.c and\n *\t\tthe main processing loop in pbsd_main.c;  that loop won't exit if\n *\t\tSV_STATE_PRIMDLY is in the state.\n *\n * @param[in]\ttype\t-\tSHUT_* - type of shutdown, see pbs_internal.h\n */\n\nvoid\nsvr_shutdown(int type)\n{\n\tjob *pjob;\n\tjob *pnxt;\n\tlong state;\n\tint wait_for_secondary = 0;\n\n\t/* Let's start by logging the shutdown and saving everything */\n\n\t/* Save the server jobid number to the database as the server is going to shut down.\n\t * Once the server comes back up, it will start jobids/resvids from this number onwards.\n\t */\n\tstate = get_sattr_long(SVR_ATR_State);\n\t(void) strcpy(log_buffer, msg_shutdown_start);\n\n\tif (state == SV_STATE_SHUTIMM) {\n\n\t\t/* if already shutting down, another Immed/sig will force it */\n\n\t\tif ((type == SHUT_IMMEDIATE) || (type == SHUT_SIG)) {\n\t\t\tstate = SV_STATE_DOWN;\n\t\t\tset_sattr_l_slim(SVR_ATR_State, state, SET);\n\t\t\t(void) strcat(log_buffer, \"Forced\");\n\t\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_ADMIN | PBSEVENT_DEBUG,\n\t\t\t\t  PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t\t\t  msg_daemonname, log_buffer);\n\t\t\treturn;\n\t\t}\n\t}\n\n\t/* in failover environments, need to communicate with Secondary */\n\t/* and for these two 
where the Primary is going down, mark to   */\n\t/* wait for the acknowledgement from the Secondary              */\n\n\tif (type & SHUT_WHO_SECDRY) {\n\t\tif (failover_send_shutdown(FAILOVER_SecdShutdown) == 0)\n\t\t\twait_for_secondary = 1;\n\t} else if (type & SHUT_WHO_IDLESECDRY) {\n\t\tif (failover_send_shutdown(FAILOVER_SecdGoInactive) == 0)\n\t\t\twait_for_secondary = 1;\n\t}\n\n\t/* what is the manner of our demise? */\n\n\ttype = type & SHUT_MASK;\n\tif (type == SHUT_IMMEDIATE) {\n\t\tstate = SV_STATE_SHUTIMM;\n\t\tset_sattr_l_slim(SVR_ATR_State, state, SET);\n\t\t(void) strcat(log_buffer, \"Immediate\");\n\n\t} else if (type == SHUT_DELAY) {\n\t\tstate = SV_STATE_SHUTDEL;\n\t\tset_sattr_l_slim(SVR_ATR_State, state, SET);\n\t\t(void) strcat(log_buffer, \"Delayed\");\n\n\t} else if (type == SHUT_QUICK) {\n\t\tstate = SV_STATE_DOWN; /* set to down to brk pbsd_main loop */\n\t\tset_sattr_l_slim(SVR_ATR_State, state, SET);\n\t\t(void) strcat(log_buffer, \"Quick\");\n\n\t} else {\n\t\tstate = SV_STATE_DOWN;\n\t\tset_sattr_l_slim(SVR_ATR_State, state, SET);\n\t\t(void) strcat(log_buffer, \"By Signal\");\n\t\ttype = SHUT_QUICK;\n\t}\n\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_ADMIN | PBSEVENT_DEBUG,\n\t\t  PBS_EVENTCLASS_SERVER, LOG_NOTICE, msg_daemonname, log_buffer);\n\n\tif (wait_for_secondary) {\n\t\tstate |= SV_STATE_PRIMDLY; /* wait for reply from Secondary */\n\t\tset_sattr_l_slim(SVR_ATR_State, state, SET);\n\t}\n\n\tif (type == SHUT_QUICK) /* quick, leave jobs as are */\n\t\treturn;\n\tsvr_save_db(&server);\n\n\tpnxt = (job *) GET_NEXT(svr_alljobs);\n\twhile ((pjob = pnxt) != NULL) {\n\t\tpnxt = (job *) GET_NEXT(pjob->ji_alljobs);\n\n\t\tif (check_job_state(pjob, JOB_STATE_LTR_RUNNING)) {\n\t\t\tchar *val = get_jattr_str(pjob, JOB_ATR_chkpnt);\n\n\t\t\tpjob->ji_qs.ji_svrflags |= JOB_SVFLG_HOTSTART;\n\t\t\tpjob->ji_qs.ji_svrflags |= JOB_SVFLG_HASRUN;\n\t\t\tif (val && (*val != 'n')) {\n\t\t\t\t/* do checkpoint of job */\n\n\t\t\t\tif 
(shutdown_preempt_chkpt(pjob) == 0)\n\t\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\t/* if not checkpoint (not supported, not allowed, or fails */\n\t\t\t/* rerun if possible, else kill job\t\t\t   */\n\n\t\t\trerun_or_kill(pjob, msg_on_shutdown);\n\t\t}\n\t}\n\treturn;\n}\n\n/**\n * @brief\n * \t\tshutdown_ack - acknowledge the shutdown (terminate) request\n * \t\tif there is one.  This is about the last thing the server does\n *\t\tbefore going away.\n */\n\nvoid\nshutdown_ack()\n{\n\tif (pshutdown_request) {\n\t\treply_ack(pshutdown_request);\n\t\tpshutdown_request = 0;\n\t}\n}\n\n/**\n * @brief\n * \t\treq_shutdown - process request to shutdown the server.\n * @note\n *\t\tMust have operator or administrator privilege.\n *\n * @param[in,out]\tpreq\t-\tShutdown Job Request\n */\n\nvoid\nreq_shutdown(struct batch_request *preq)\n{\n\tint type;\n\textern int shutdown_who;\n\n\tif ((preq->rq_perm & (ATR_DFLAG_MGWR | ATR_DFLAG_MGRD | ATR_DFLAG_OPRD |\n\t\t\t      ATR_DFLAG_OPWR)) == 0) {\n\t\treq_reject(PBSE_PERM, 0, preq);\n\t\treturn;\n\t}\n\n\t(void) sprintf(log_buffer, msg_shutdown_op, preq->rq_user, preq->rq_host);\n\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_ADMIN | PBSEVENT_DEBUG,\n\t\t  PBS_EVENTCLASS_SERVER, LOG_NOTICE, msg_daemonname, log_buffer);\n\n\tpshutdown_request = preq; /* save for reply from main() when done */\n\ttype = preq->rq_ind.rq_shutdown;\n\tshutdown_who = type & SHUT_WHO_MASK;\n\n\tif (shutdown_who & SHUT_WHO_SECDONLY)\n\t\t(void) failover_send_shutdown(FAILOVER_SecdShutdown);\n\n\tif (shutdown_who & SHUT_WHO_SCHED)\n\t\tsend_sched_cmd(dflt_scheduler, SCH_QUIT, NULL); /* tell scheduler to quit */\n\n\tif (shutdown_who & SHUT_WHO_SECDONLY) {\n\t\treply_ack(preq);\n\t\treturn; /* do NOT shutdown this Server */\n\t}\n\n\t/* Moms are told to shutdown in pbsd_main.c after main loop */\n\n\tsvr_shutdown(type);\n\treturn;\n}\n\n/**\n * @brief\n * \t\tshutdown_preempt_chkpt - perform checkpoint of job by issuing a hold request to mom\n *\n * 
@param[in,out]\tpjob\t-\tpointer to job\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t-1\t: relay_to_mom failed\n * @retval\tPBSE_SYSTEM\t: error\n */\n\nint\nshutdown_preempt_chkpt(job *pjob)\n{\n\tstruct batch_request *phold;\n\tattribute temp;\n\tvoid (*func)(struct work_task *);\n\n\tphold = alloc_br(PBS_BATCH_HoldJob);\n\tif (phold == NULL)\n\t\treturn (PBSE_SYSTEM);\n\n\ttemp.at_flags = ATR_VFLAG_SET;\n\ttemp.at_type = job_attr_def[(int) JOB_ATR_hold].at_type;\n\ttemp.at_user_encoded = NULL;\n\ttemp.at_priv_encoded = NULL;\n\ttemp.at_val.at_long = HOLD_s;\n\n\tphold->rq_perm = ATR_DFLAG_MGRD | ATR_DFLAG_MGWR;\n\t(void) strcpy(phold->rq_ind.rq_hold.rq_orig.rq_objname, pjob->ji_qs.ji_jobid);\n\tCLEAR_HEAD(phold->rq_ind.rq_hold.rq_orig.rq_attr);\n\tif (job_attr_def[(int) JOB_ATR_hold].at_encode(&temp,\n\t\t\t\t\t\t       &phold->rq_ind.rq_hold.rq_orig.rq_attr,\n\t\t\t\t\t\t       job_attr_def[(int) JOB_ATR_hold].at_name,\n\t\t\t\t\t\t       NULL,\n\t\t\t\t\t\t       ATR_ENCODE_CLIENT, NULL) < 0) {\n\t\tfree_br(phold); /* do not leak the request on encode failure */\n\t\treturn (PBSE_SYSTEM);\n\t}\n\n\tphold->rq_extra = pjob;\n\tfunc = post_chkpt;\n\n\tif (relay_to_mom(pjob, phold, func) == 0) {\n\n\t\tif (check_job_state(pjob, JOB_STATE_LTR_TRANSIT))\n\t\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_RUNNING, JOB_SUBSTATE_RUNNING);\n\t\tpjob->ji_qs.ji_svrflags |= (JOB_SVFLG_HASRUN | JOB_SVFLG_CHKPT | JOB_SVFLG_HASHOLD);\n\t\t(void) job_save_db(pjob);\n\t\treturn (0);\n\t} else {\n\t\treturn (-1);\n\t}\n}\n\n/**\n * @brief\n * \t\tpost_chkpt - clean up after shutdown_preempt_chkpt\n *\t\tThis is called on the reply from MOM to a Hold request made in\n *\t\tshutdown_preempt_chkpt().  
If the request succeeded, then record that in the job.\n *\t\tIf the request failed, then we fall back to rerunning or aborting\n *\t\tthe job.\n *\n * @param[in]\tptask\t-\twork_task which contains the request\n */\n\nstatic void\npost_chkpt(struct work_task *ptask)\n{\n\tjob *pjob;\n\tstruct batch_request *preq;\n\n\tpreq = (struct batch_request *) ptask->wt_parm1;\n\tif (preq == NULL)\n\t\treturn;\n\tpjob = find_job(preq->rq_ind.rq_hold.rq_orig.rq_objname);\n\tif (pjob == NULL) {\n\t\trelease_req(ptask);\n\t\treturn;\n\t}\n\tif (preq->rq_reply.brp_code == 0) {\n\t\t/* checkpointed ok */\n\t\tif (preq->rq_reply.brp_auxcode) { /* chkpt can be moved */\n\t\t\tpjob->ji_qs.ji_svrflags &= ~JOB_SVFLG_CHKPT;\n\t\t\tpjob->ji_qs.ji_svrflags |= JOB_SVFLG_ChkptMig;\n\t\t\tjob_save_db(pjob);\n\t\t}\n\t\taccount_record(PBS_ACCT_CHKPNT, pjob, NULL);\n\t} else {\n\t\t/* need to try rerun if possible or just abort the job */\n\t\tif (preq->rq_reply.brp_code != PBSE_CKPBSY) {\n\t\t\tpjob->ji_qs.ji_svrflags &= ~JOB_SVFLG_CHKPT;\n\t\t\tset_job_substate(pjob, JOB_SUBSTATE_RUNNING);\n\t\t\tjob_save_db(pjob);\n\t\t\tif (check_job_state(pjob, JOB_STATE_LTR_RUNNING))\n\t\t\t\trerun_or_kill(pjob, msg_on_shutdown);\n\t\t}\n\t}\n\n\trelease_req(ptask);\n}\n\n/**\n * @brief\n * \trerun_or_kill - rerun or kill a job and log the message\n *\n * \t@par Functionality:\n * \t\tIf the job is re-runnable mark it to be re-queued.\n * \t\tIf the job is not re-runnable, immediately kill the job.\n * \t\tRecord log message before purging job.\n * \t\tIf this is a delayed shutdown, leave the job running.\n *\n * @param[in]\tpjob\t-\tjob which needs to be killed or rerun.\n * @param[in]\ttext\t-\tmessage to be logged before purging the job.\n */\nstatic void\nrerun_or_kill(job *pjob, char *text)\n{\n\tlong server_state = get_sattr_long(SVR_ATR_State);\n\n\tif (get_jattr_long(pjob, JOB_ATR_rerunable)) {\n\n\t\t/* job is rerunnable, mark it to be requeued */\n\n\t\t(void) issue_signal(pjob, \"SIGKILL\", release_req, 0);\n\t\tset_job_substate(pjob, JOB_SUBSTATE_RERUN);\n\t\t(void) 
strcpy(log_buffer, msg_init_queued);\n\t\t(void) strcat(log_buffer, pjob->ji_qhdr->qu_qs.qu_name);\n\t\t(void) strcat(log_buffer, text);\n\t} else if (server_state != SV_STATE_SHUTDEL) {\n\n\t\t/* job not rerunable, immediate shutdown - kill it off */\n\n\t\t(void) strcpy(log_buffer, msg_job_abort);\n\t\t(void) strcat(log_buffer, text);\n\t\t/* need to record log message before purging job */\n\t\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_JOB | PBSEVENT_DEBUG,\n\t\t\t  PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid,\n\t\t\t  log_buffer);\n\t\t(void) job_abt(pjob, log_buffer);\n\t\treturn;\n\t} else {\n\n\t\t/* delayed shutdown, leave job running */\n\n\t\t(void) strcpy(log_buffer, msg_leftrunning);\n\t\t(void) strcat(log_buffer, text);\n\t}\n\tlog_event(PBSEVENT_SYSTEM | PBSEVENT_JOB | PBSEVENT_DEBUG,\n\t\t  PBS_EVENTCLASS_JOB, LOG_NOTICE, pjob->ji_qs.ji_jobid,\n\t\t  log_buffer);\n}\n"
  },
  {
    "path": "src/server/req_signal.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\treq_signaljob.c\n *\n * @brief\n * \t\treq_signaljob.c - functions dealing with sending a signal\n *\t\t     to a running job.\n *\n * Functions included are:\n * \treq_signaljob()\n * \treq_signaljob2()\n * \tissue_signal()\n * \tpost_signal_req()\n *\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/types.h>\n#include <errno.h>\n#include <signal.h>\n#include <stdio.h>\n#include \"libpbs.h\"\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"server.h\"\n#include \"credential.h\"\n#include \"batch_request.h\"\n#include \"job.h\"\n#include \"work_task.h\"\n#include \"pbs_error.h\"\n#include \"log.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"sched_cmds.h\"\n#include \"pbs_sched.h\"\n#include \"acct.h\"\n\n/* Private Function local to this file */\n\nvoid post_signal_req(struct work_task *);\nstatic int req_signaljob2(struct batch_request *preq, job *pjob);\nvoid set_admin_suspend(job *pjob, int set_remove_nstate);\nint create_resreleased(job *pjob);\n\n/* Global Data Items: */\n\nextern char *msg_momreject;\nextern char *msg_signal_job;\nextern job *chk_job_request(char *, struct batch_request *, int *, int *);\nextern time_t time_now;\n\n/**\n * @brief\n * \t\treq_signaljob - service the Signal Job Request\n * @par\n 
*\t\tThis request sends (via MOM) a signal to a running job.\n *\n * @param[in]\tpreq\t-\tSignal Job Request\n */\n\nvoid\nreq_signaljob(struct batch_request *preq)\n{\n\tint anygood = 0;\n\tint i;\n\tchar jid[PBS_MAXSVRJOBID + 1];\n\tint jt; /* job type */\n\tchar *pc;\n\tjob *pjob;\n\tjob *parent;\n\tchar *range;\n\tint suspend = 0;\n\tint resume = 0;\n\tint start;\n\tint end;\n\tint step;\n\tint count;\n\tint err = PBSE_NONE;\n\tchar sjst;\n\n\tsnprintf(jid, sizeof(jid), \"%s\", preq->rq_ind.rq_signal.rq_jid);\n\n\tparent = chk_job_request(jid, preq, &jt, &err);\n\tif (parent == NULL) {\n\t\tpjob = find_job(jid);\n\t\tif (pjob != NULL && pjob->ji_pmt_preq != NULL)\n\t\t\treply_preempt_jobs_request(err, PREEMPT_METHOD_SUSPEND, pjob);\n\t\treturn; /* note, req_reject already called */\n\t}\n\n\tif (strcmp(preq->rq_ind.rq_signal.rq_signame, SIG_RESUME) == 0 || strcmp(preq->rq_ind.rq_signal.rq_signame, SIG_ADMIN_RESUME) == 0)\n\t\tresume = 1;\n\telse if (strcmp(preq->rq_ind.rq_signal.rq_signame, SIG_SUSPEND) == 0 || strcmp(preq->rq_ind.rq_signal.rq_signame, SIG_ADMIN_SUSPEND) == 0)\n\t\tsuspend = 1;\n\n\tif (suspend || resume) {\n\n\t\tif ((preq->rq_perm & (ATR_DFLAG_OPRD | ATR_DFLAG_OPWR |\n\t\t\t\t      ATR_DFLAG_MGRD | ATR_DFLAG_MGWR)) == 0) {\n\t\t\t/* for suspend/resume, must be mgr/op */\n\t\t\treq_reject(PBSE_PERM, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tif (jt == IS_ARRAY_NO) {\n\n\t\t/* just a regular job, pass it on down the line and be done */\n\n\t\treq_signaljob2(preq, parent);\n\t\treturn;\n\n\t} else if (jt == IS_ARRAY_Single) {\n\t\t/* single subjob, if running can signal */\n\t\tpjob = get_subjob_and_state(parent, get_index_from_jid(jid), &sjst, NULL);\n\t\tif (sjst == JOB_STATE_LTR_UNKNOWN) {\n\t\t\treq_reject(PBSE_UNKJOBID, 0, preq);\n\t\t\treturn;\n\t\t} else if (pjob && sjst == JOB_STATE_LTR_RUNNING) {\n\t\t\treq_signaljob2(preq, pjob);\n\t\t} else {\n\t\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\t}\n\t\treturn;\n\t} else if (jt == 
IS_ARRAY_ArrayJob) {\n\n\t\t/* The Array Job itself ... */\n\n\t\tif (!check_job_state(parent, JOB_STATE_LTR_BEGUN)) {\n\t\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\t\treturn;\n\t\t}\n\n\t\t/* for each subjob that is running, signal it via req_signaljob2 */\n\n\t\t++preq->rq_refct; /* protect the request/reply struct */\n\n\t\tfor (i = parent->ji_ajinfo->tkm_start; i <= parent->ji_ajinfo->tkm_end; i += parent->ji_ajinfo->tkm_step) {\n\t\t\tpjob = get_subjob_and_state(parent, i, &sjst, NULL);\n\t\t\tif (!pjob || sjst != JOB_STATE_LTR_RUNNING)\n\t\t\t\tcontinue;\n\t\t\t/* if suspending,  skip those already suspended,  */\n\t\t\tif (suspend && (pjob->ji_qs.ji_svrflags & JOB_SVFLG_Suspend))\n\t\t\t\tcontinue;\n\t\t\t/* if resuming, skip those not suspended         */\n\t\t\tif (resume && !(pjob->ji_qs.ji_svrflags & JOB_SVFLG_Suspend))\n\t\t\t\tcontinue;\n\t\t\tdup_br_for_subjob(preq, pjob, req_signaljob2);\n\t\t}\n\t\t/* if not waiting on any running subjobs, can reply; else */\n\t\t/* it is taken care of when last running subjob responds  */\n\t\tif (--preq->rq_refct == 0)\n\t\t\treply_send(preq);\n\t\treturn;\n\t}\n\t/* what's left to handle is a range of subjobs, foreach subjob \t*/\n\t/* if running, signal it\t\t\t\t\t*/\n\n\trange = get_range_from_jid(jid);\n\tif (range == NULL) {\n\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\treturn;\n\t}\n\n\t/* now do the deed */\n\n\t++preq->rq_refct; /* protect the request/reply struct */\n\n\twhile (1) {\n\t\tif ((i = parse_subjob_index(range, &pc, &start, &end, &step, &count)) == -1) {\n\t\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\t\tbreak;\n\t\t} else if (i == 1)\n\t\t\tbreak;\n\t\tfor (i = start; i <= end; i += step) {\n\t\t\tpjob = get_subjob_and_state(parent, i, &sjst, NULL);\n\t\t\tif (!pjob || sjst != JOB_STATE_LTR_RUNNING)\n\t\t\t\tcontinue;\n\t\t\t/* if suspending,  skip those already suspended,  */\n\t\t\tif (suspend && (pjob->ji_qs.ji_svrflags & JOB_SVFLG_Suspend))\n\t\t\t\tcontinue;\n\t\t\t/* if resuming, skip 
those not suspended         */\n\t\t\tif (resume && !(pjob->ji_qs.ji_svrflags & JOB_SVFLG_Suspend))\n\t\t\t\tcontinue;\n\t\t\tanygood = 1;\n\t\t\tdup_br_for_subjob(preq, pjob, req_signaljob2);\n\t\t}\n\t\trange = pc;\n\t}\n\n\tif (anygood == 0) { /* no running subjobs in the range */\n\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\treturn;\n\t}\n\n\t/* if not waiting on any running subjobs, can reply; else */\n\t/* it is taken care of when last running subjob responds  */\n\tif (--preq->rq_refct == 0)\n\t\treply_send(preq);\n\treturn;\n}\n/**\n * @brief\n * \t\treq_signaljob2 - service the Signal Job Request\n * @par\n *\t\tThis request sends (via MOM) a signal to a running job.\n *\n * @param[in]\tpreq\t-\tSignal Job Request\n *\n * @return int\n * @retval 0 for Success\n * @retval 1 for Error\n */\nstatic int\nreq_signaljob2(struct batch_request *preq, job *pjob)\n{\n\tint rc;\n\tchar *pnodespec;\n\tint suspend = 0;\n\tint resume = 0;\n\tpbs_sched *psched;\n\n\tif (!check_job_state(pjob, JOB_STATE_LTR_RUNNING) ||\n\t    (check_job_state(pjob, JOB_STATE_LTR_RUNNING) && check_job_substate(pjob, JOB_SUBSTATE_PROVISION))) {\n\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\treturn 1;\n\t}\n\tif ((strcmp(preq->rq_ind.rq_signal.rq_signame, SIG_ADMIN_RESUME) == 0 && !(pjob->ji_qs.ji_svrflags & JOB_SVFLG_AdmSuspd)) ||\n\t    (strcmp(preq->rq_ind.rq_signal.rq_signame, SIG_RESUME) == 0 && (pjob->ji_qs.ji_svrflags & JOB_SVFLG_AdmSuspd))) {\n\t\treq_reject(PBSE_WRONG_RESUME, 0, preq);\n\t\treturn 1;\n\t}\n\n\tif (strcmp(preq->rq_ind.rq_signal.rq_signame, SIG_RESUME) == 0 || strcmp(preq->rq_ind.rq_signal.rq_signame, SIG_ADMIN_RESUME) == 0)\n\t\tresume = 1;\n\telse if (strcmp(preq->rq_ind.rq_signal.rq_signame, SIG_SUSPEND) == 0 || strcmp(preq->rq_ind.rq_signal.rq_signame, SIG_ADMIN_SUSPEND) == 0)\n\t\tsuspend = 1;\n\n\t/* Special pseudo signals for suspend and resume require op/mgr */\n\n\tif (suspend || resume) {\n\n\t\tpreq->rq_extra = pjob; /* save job ptr for post_signal_req() 
*/\n\n\t\tsprintf(log_buffer, \"%s job by %s@%s\",\n\t\t\tpreq->rq_ind.rq_signal.rq_signame,\n\t\t\tpreq->rq_user, preq->rq_host);\n\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\n\t\tif (resume) {\n\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_Suspend) != 0) {\n\n\t\t\t\tif (preq->rq_fromsvr == 1 ||\n\t\t\t\t    strcmp(preq->rq_ind.rq_signal.rq_signame, SIG_ADMIN_RESUME) == 0) {\n\t\t\t\t\t/* from Scheduler, resume job */\n\t\t\t\t\tpnodespec = get_jattr_str(pjob, JOB_ATR_exec_vnode);\n\t\t\t\t\tif (pnodespec) {\n\t\t\t\t\t\trc = assign_hosts(pjob, pnodespec, 0);\n\t\t\t\t\t\tif (rc == 0) {\n\t\t\t\t\t\t\tset_resc_assigned((void *) pjob, 0, INCR);\n\t\t\t\t\t\t\t/* if resume fails,need to free resources */\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\treq_reject(rc, 0, preq);\n\t\t\t\t\t\t\treturn 1;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tif (is_jattr_set(pjob, JOB_ATR_exec_vnode_deallocated)) {\n\n\t\t\t\t\t\tchar *hoststr = NULL;\n\t\t\t\t\t\tchar *hoststr2 = NULL;\n\t\t\t\t\t\tchar *vnodestoalloc = NULL;\n\t\t\t\t\t\tchar *new_exec_vnode_deallocated;\n\t\t\t\t\t\tnew_exec_vnode_deallocated =\n\t\t\t\t\t\t\tget_jattr_str(pjob, JOB_ATR_exec_vnode_deallocated);\n\n\t\t\t\t\t\trc = set_nodes((void *) pjob, JOB_OBJECT, new_exec_vnode_deallocated, &vnodestoalloc, &hoststr, &hoststr2, 1, FALSE);\n\t\t\t\t\t\tif (rc != 0) {\n\t\t\t\t\t\t\treq_reject(rc, 0, preq);\n\t\t\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_WARNING,\n\t\t\t\t\t\t\t\t  pjob->ji_qs.ji_jobid, \"Warning: Failed to make some nodes aware of deleted job\");\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\t/* not from scheduler, change substate so the  */\n\t\t\t\t\t/* scheduler will resume the job when possible */\n\t\t\t\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_RUNNING, JOB_SUBSTATE_SCHSUSP);\n\t\t\t\t\tif (find_assoc_sched_jid(pjob->ji_qs.ji_jobid, &psched))\n\t\t\t\t\t\tset_scheduler_flag(SCH_SCHEDULE_NEW, psched);\n\t\t\t\t\telse 
{\n\t\t\t\t\t\tsprintf(log_buffer, \"Unable to reach scheduler associated with job %s\", pjob->ji_qs.ji_jobid);\n\t\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t\t}\n\t\t\t\t\treply_send(preq);\n\t\t\t\t\treturn 0;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\t/* Job can be resumed only on suspended state */\n\t\t\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t}\n\t}\n\n\t/* log and pass the request on to MOM */\n\n\tsprintf(log_buffer, msg_signal_job, preq->rq_ind.rq_signal.rq_signame,\n\t\tpreq->rq_user, preq->rq_host);\n\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\trc = relay_to_mom(pjob, preq, post_signal_req);\n\tif (rc) {\n\t\tif (resume)\n\t\t\trel_resc(pjob);\n\t\treq_reject(rc, 0, preq); /* unable to get to MOM */\n\t\treturn 1;\n\t}\n\n\treturn 0;\n\n\t/* After MOM acts and replies to us, we pick up in post_signal_req() */\n}\n\n/**\n * @brief\n * \t\tissue_signal - send an internally generated signal to a running job\n *\n * @param[in,out]\tpjob\t-\trunning job\n * @param[in]\tsigname\t-\tname of the signal to send\n * @param[in]\tfunc\t-\tfunction pointer taking work_task structure as argument.\n * @param[in]\textra\t-\textra parameter to be stored in sig request\n *\n * @return\tint\n * @retval\t0\t- success\n * @retval\tnon-zero\t- error code\n */\n\nint\nissue_signal(job *pjob, char *signame, void (*func)(struct work_task *), void *extra)\n{\n\tstruct batch_request *newreq;\n\n\t/* build up a Signal Job batch request */\n\n\tif ((newreq = alloc_br(PBS_BATCH_SignalJob)) == NULL)\n\t\treturn (PBSE_SYSTEM);\n\n\tnewreq->rq_extra = extra;\n\n\tstrcpy(newreq->rq_ind.rq_signal.rq_jid, pjob->ji_qs.ji_jobid);\n\tstrncpy(newreq->rq_ind.rq_signal.rq_signame, signame, PBS_SIGNAMESZ);\n\treturn (relay_to_mom(pjob, newreq, func));\n\n\t/* when MOM replies, we just free the request structure */\n}\n\n/**\n * @brief\n * 
\t\tissue_signal_task - task to send signal to a job\n *\n * @param[in,out]\tpwt\t-\twork_task which contains Signal Job Request and post function\n */\n\nstatic void\nissue_signal_task(struct work_task *pwt)\n{\n\tstruct batch_request *newreq;\n\tvoid (*func)(struct work_task *);\n\tjob *pjob;\n\tint rc;\n\n\tnewreq = (struct batch_request *) pwt->wt_parm1;\n\tfunc = (void (*)(struct work_task *)) pwt->wt_parm2;\n\n\tpjob = find_job(newreq->rq_ind.rq_signal.rq_jid);\n\tif (pjob == NULL) {\n\t\tfree_br(newreq); /* job is gone, drop the request */\n\t\treturn;\n\t}\n\tif ((rc = relay_to_mom(pjob, newreq, func)) != PBSE_NONE) {\n\t\tsprintf(log_buffer, \"Issue signal error (%d)\", rc);\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\tpjob->ji_qs.ji_jobid, log_buffer);\n\t}\n}\n\n/**\n * @brief\n * \t\tdelayed_issue_signal - send an internally generated signal to a running job\n *\n * @param[in,out]\tpjob\t-\trunning job\n * @param[in]\tsigname\t-\tname of the signal to send\n * @param[in]\tfunc\t-\tfunction pointer taking work_task structure as argument.\n * @param[in]\textra\t-\textra parameter to be stored in sig request\n * @param[in]\tdelay\t-\tthe signal is sent after <delay> seconds\n *\n * @return\tint\n * @retval\t0\t- success\n * @retval\tnon-zero\t- error code\n */\n\nint\ndelayed_issue_signal(job *pjob, char *signame, void (*func)(struct work_task *), void *extra, int delay)\n{\n\tstruct work_task *pwt;\n\tstruct batch_request *newreq;\n\n\t/* build up a Signal Job batch request */\n\n\tif ((newreq = alloc_br(PBS_BATCH_SignalJob)) == NULL) {\n\t\tlog_err(PBSE_SYSTEM, __func__, \"Failed to allocate batch request\");\n\t\treturn PBSE_SYSTEM;\n\t}\n\n\tnewreq->rq_extra = extra;\n\n\tstrcpy(newreq->rq_ind.rq_signal.rq_jid, pjob->ji_qs.ji_jobid);\n\tstrncpy(newreq->rq_ind.rq_signal.rq_signame, signame, PBS_SIGNAMESZ);\n\n\tpwt = set_task(WORK_Timed, time_now + delay,\n\t\tissue_signal_task, newreq);\n\tif (pwt == NULL) {\n\t\tfree_br(newreq);\n\t\treturn PBSE_SYSTEM;\n\t}\n\tpwt->wt_parm2 = func;\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * \t\tpost_signal_req - complete a Signal Job Request (externally generated)\n *\n * 
@param[in,out]\tpwt\t-\twork_task which contains Signal Job Request\n */\n\nvoid\npost_signal_req(struct work_task *pwt)\n{\n\tjob *pjob;\n\tstruct batch_request *preq;\n\tint rc;\n\tint ss;\n\tint suspend = 0;\n\tint resume = 0;\n\n\tif (pwt->wt_aux2 != PROT_TPP)\n\t\tsvr_disconnect(pwt->wt_event); /* disconnect from MOM */\n\n\tpreq = pwt->wt_parm1;\n\tpreq->rq_conn = preq->rq_orgconn; /* restore client socket */\n\tpjob = preq->rq_extra;\n\n\tif (strcmp(preq->rq_ind.rq_signal.rq_signame, SIG_SUSPEND) == 0 ||\n\t    strcmp(preq->rq_ind.rq_signal.rq_signame, SIG_ADMIN_SUSPEND) == 0)\n\t\tsuspend = 1;\n\telse if (strcmp(preq->rq_ind.rq_signal.rq_signame, SIG_RESUME) == 0 ||\n\t\t strcmp(preq->rq_ind.rq_signal.rq_signame, SIG_ADMIN_RESUME) == 0)\n\t\tresume = 1;\n\n\tif ((rc = preq->rq_reply.brp_code)) {\n\n\t\t/* there was an error on the Mom side of things */\n\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_REQUEST, LOG_DEBUG,\n\t\t\t  preq->rq_ind.rq_signal.rq_jid, msg_momreject);\n\t\terrno = 0;\n\t\tif (rc == PBSE_UNKJOBID)\n\t\t\trc = PBSE_INTERNAL;\n\t\tif (resume) {\n\t\t\t/* resume failed, re-release resc and nodes */\n\t\t\trel_resc(pjob);\n\t\t}\n\n\t\tif (pjob == NULL)\n\t\t\tpjob = find_job(preq->rq_ind.rq_signal.rq_jid);\n\t\tif (pjob != NULL && pjob->ji_pmt_preq != NULL)\n\t\t\treply_preempt_jobs_request(rc, PREEMPT_METHOD_SUSPEND, pjob);\n\n\t\treq_reject(rc, 0, preq);\n\t} else {\n\n\t\t/* everything went ok for signal request at Mom */\n\n\t\tif (suspend && pjob && (check_job_state(pjob, JOB_STATE_LTR_RUNNING))) {\n\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_Suspend) == 0) {\n\t\t\t\tif (preq->rq_fromsvr == 1 || pjob->ji_pmt_preq != NULL)\n\t\t\t\t\tss = JOB_SUBSTATE_SCHSUSP;\n\t\t\t\telse\n\t\t\t\t\tss = JOB_SUBSTATE_SUSPEND;\n\t\t\t\tif (is_sattr_set(SVR_ATR_restrict_res_to_release_on_suspend)) {\n\t\t\t\t\tif (create_resreleased(pjob) == 1) {\n\t\t\t\t\t\tsprintf(log_buffer, \"Unable to create resource released 
list\");\n\t\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tpjob->ji_qs.ji_svrflags |= JOB_SVFLG_Suspend;\n\t\t\t\t/* update all released resources */\n\t\t\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_RUNNING, ss);\n\t\t\t\trel_resc(pjob); /* release resc and nodes */\n\t\t\t\tjob_save(pjob); /* save released resc and nodes */\n\t\t\t\tlog_suspend_resume_record(pjob, PBS_ACCT_SUSPEND);\n\t\t\t\t/* Since our purpose is to put the node in maintenance state if \"admin-suspend\"\n\t\t\t\t * signal is used, be sure that rel_resc() is called before set_admin_suspend().\n\t\t\t\t * Otherwise, set_admin_suspend will move the node to maintenance state and\n\t\t\t\t * rel_resc() will pull it out of maintenance state */\n\t\t\t\tif (strcmp(preq->rq_ind.rq_signal.rq_signame, SIG_ADMIN_SUSPEND) == 0)\n\t\t\t\t\tset_admin_suspend(pjob, 1);\n\t\t\t}\n\t\t} else if (resume && pjob && (check_job_state(pjob, JOB_STATE_LTR_RUNNING))) {\n\t\t\t/* note - the resources have already been reallocated */\n\t\t\tpjob->ji_qs.ji_svrflags &= ~JOB_SVFLG_Suspend;\n\t\t\tif (strcmp(preq->rq_ind.rq_signal.rq_signame, SIG_ADMIN_RESUME) == 0)\n\t\t\t\tset_admin_suspend(pjob, 0);\n\n\t\t\tfree_jattr(pjob, JOB_ATR_resc_released);\n\t\t\tmark_jattr_not_set(pjob, JOB_ATR_resc_released);\n\n\t\t\tfree_jattr(pjob, JOB_ATR_resc_released_list);\n\t\t\tmark_jattr_not_set(pjob, JOB_ATR_resc_released_list);\n\n\t\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_RUNNING, JOB_SUBSTATE_RUNNING);\n\t\t\tlog_suspend_resume_record(pjob, PBS_ACCT_RESUME);\n\n\t\t\tset_jattr_generic(pjob, JOB_ATR_Comment,\n\t\t\t\t\t  form_attr_comment(\"Job run at %s\", get_jattr_str(pjob, JOB_ATR_exec_vnode)), NULL, SET);\n\t\t}\n\n\t\tif (pjob == NULL)\n\t\t\tpjob = find_job(preq->rq_ind.rq_signal.rq_jid);\n\t\tif (pjob != NULL && pjob->ji_pmt_preq != NULL)\n\t\t\treply_preempt_jobs_request(PBSE_NONE, PREEMPT_METHOD_SUSPEND, 
pjob);\n\n\t\treply_ack(preq);\n\t}\n}\n\n/**\n * @brief  Create the job's resources_released and Resource_Rel_List\n *\t    attributes based on its exec_vnode\n * @param[in/out] pjob - Job structure\n *\n * @return int\n * @retval 1 - In case of failure\n * @retval 0 - In case of success\n */\nint\ncreate_resreleased(job *pjob)\n{\n\tchar *chunk;\n\tint j;\n\tint nelem;\n\tchar *noden;\n\tint rc;\n\tstruct key_value_pair *pkvp;\n\tchar *resreleased;\n\tchar buf[1024] = {0};\n\tchar *dflt_ncpus_rel = \":ncpus=0\";\n\tint no_res_rel = 1;\n\n\tattribute *pexech = get_jattr(pjob, JOB_ATR_exec_vnode);\n\t/* Multiplying by 2 to take care of superchunks of the format\n\t * (node:resc=n+node:resc=m) which will get converted to\n\t * (node:resc=n)+(node:resc=m). This will add room for this\n\t * expansion.\n\t */\n\tresreleased = (char *) calloc(1, strlen(pexech->at_val.at_str) * 2 + 1);\n\tif (resreleased == NULL)\n\t\treturn 1;\n\tresreleased[0] = '\\0';\n\n\tchunk = parse_plus_spec(pexech->at_val.at_str, &rc);\n\tif (rc != 0) {\n\t\tfree(resreleased);\n\t\treturn 1;\n\t}\n\twhile (chunk) {\n\t\tno_res_rel = 1;\n\t\tstrcat(resreleased, \"(\");\n\t\tif (parse_node_resc(chunk, &noden, &nelem, &pkvp) == 0) {\n\t\t\tstrcat(resreleased, noden);\n\t\t\tif (is_sattr_set(SVR_ATR_restrict_res_to_release_on_suspend)) {\n\t\t\t\tstruct array_strings *pval = get_sattr_arst(SVR_ATR_restrict_res_to_release_on_suspend);\n\t\t\t\tfor (j = 0; pval != NULL && j < nelem; ++j) {\n\t\t\t\t\tint k;\n\t\t\t\t\tint np = pval->as_usedptr;\n\t\t\t\t\tfor (k = 0; np != 0 && k < np; k++) {\n\t\t\t\t\t\tchar *res;\n\t\t\t\t\t\tres = pval->as_string[k];\n\t\t\t\t\t\tif ((res != NULL) && (strcmp(pkvp[j].kv_keyw, res) == 0)) {\n\t\t\t\t\t\t\tsprintf(buf, \":%s=%s\", res, pkvp[j].kv_val);\n\t\t\t\t\t\t\tstrcat(resreleased, buf);\n\t\t\t\t\t\t\tno_res_rel = 0;\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tfree(resreleased);\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t} else 
{\n\t\t\tfree(resreleased);\n\t\t\treturn 1;\n\t\t}\n\t\t/* If there are no resources released on this vnode then add a dummy \"ncpus=0\"\n\t\t * This is needed otherwise scheduler will not be able to assign this chunk to\n\t\t * the job while trying to resume it\n\t\t */\n\t\tif (no_res_rel)\n\t\t\tstrcat(resreleased, dflt_ncpus_rel);\n\t\tstrcat(resreleased, \")\");\n\t\tchunk = parse_plus_spec(NULL, &rc);\n\t\tif (rc != 0) {\n\t\t\tfree(resreleased);\n\t\t\treturn 1;\n\t\t}\n\t\tif (chunk)\n\t\t\tstrcat(resreleased, \"+\");\n\t}\n\tif (resreleased[0] != '\\0')\n\t\tset_jattr_generic(pjob, JOB_ATR_resc_released, resreleased, NULL, SET);\n\n\tfree(resreleased);\n\treturn 0;\n}\n\n/**\n *\t@brief Handle admin-suspend/admin-resume on the job and nodes\n *\t\tset or remove the JOB_SVFLG_AdmSuspd flag on the job\n *\t\tset or remove nodes in state maintenance\n *\n *\t@param[in] pjob - job to act upon\n *\t@param[in] set_remove_nstate if 1, set flag/state if 0 remove flag/state\n *\n *\t@return void\n */\nvoid\nset_admin_suspend(job *pjob, int set_remove_nstate)\n{\n\tchar *chunk;\n\tchar *execvncopy;\n\tchar *last;\n\tchar *vname;\n\tstruct key_value_pair *pkvp;\n\tint hasprn;\n\tint nelem;\n\tstruct pbsnode *pnode;\n\tattribute new;\n\n\tif (pjob == NULL)\n\t\treturn;\n\n\texecvncopy = strdup(get_jattr_str(pjob, JOB_ATR_exec_vnode));\n\n\tif (execvncopy == NULL)\n\t\treturn;\n\n\tif (set_remove_nstate)\n\t\tpjob->ji_qs.ji_svrflags |= JOB_SVFLG_AdmSuspd;\n\telse\n\t\tpjob->ji_qs.ji_svrflags &= ~JOB_SVFLG_AdmSuspd;\n\n\tclear_attr(&new, &node_attr_def[(int) ND_ATR_MaintJobs]);\n\tdecode_arst(&new, ATTR_NODE_MaintJobs, NULL, pjob->ji_qs.ji_jobid);\n\n\tchunk = parse_plus_spec_r(execvncopy, &last, &hasprn);\n\n\twhile (chunk) {\n\n\t\tif (parse_node_resc(chunk, &vname, &nelem, &pkvp) == 0) {\n\t\t\tpnode = find_nodebyname(vname);\n\t\t\tif (pnode) {\n\t\t\t\tif (set_remove_nstate) {\n\t\t\t\t\tset_arst(get_nattr(pnode, ND_ATR_MaintJobs), &new, 
INCR);\n\t\t\t\t\tset_vnode_state(pnode, INUSE_MAINTENANCE, Nd_State_Or);\n\t\t\t\t} else {\n\t\t\t\t\tset_arst(get_nattr(pnode, ND_ATR_MaintJobs), &new, DECR);\n\t\t\t\t\tif ((get_nattr_arst(pnode, ND_ATR_MaintJobs))->as_usedptr == 0)\n\t\t\t\t\t\tset_vnode_state(pnode, ~INUSE_MAINTENANCE, Nd_State_And);\n\t\t\t\t}\n\t\t\t\tpnode->nd_modified = 1; /* mark modified only if the node was found */\n\t\t\t}\n\t\t}\n\t\tchunk = parse_plus_spec_r(last, &last, &hasprn);\n\t}\n\tsave_nodes_db(0, NULL);\n\tjob_save_db(pjob);\n\tfree_arst(&new);\n\tfree(execvncopy);\n}\n"
  },
  {
    "path": "src/server/req_stat.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\treq_stat.c\n *\n * @brief\n * \t\treq_stat.c - Functions relating to the Status Job, Status Queue, and\n * \t\tStatus Server Batch Requests.\n *\n * Functions included are:\n * \tdo_stat_of_a_job()\n * \tstat_a_jobidname()\n * \treq_stat_job()\n * \treq_stat_que()\n * \tstatus_que()\n * \treq_stat_node()\n * \tstatus_node()\n * \treq_stat_svr()\n * \treq_stat_sched()\n * \tupdate_state_ct()\n * \tupdate_license_ct()\n * \treq_stat_resv()\n * \tstatus_resv()\n * \tstatus_resc()\n * \treq_stat_resc()\n *\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#define STAT_CNTL 1\n\n#include <stdio.h>\n#include <sys/types.h>\n#include <stdlib.h>\n#include \"libpbs.h\"\n#include <ctype.h>\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"server.h\"\n#include \"credential.h\"\n#include \"batch_request.h\"\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"queue.h\"\n#include \"work_task.h\"\n#include \"pbs_entlim.h\"\n#include \"pbs_error.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"net_connect.h\"\n#include \"pbs_license.h\"\n#include \"resource.h\"\n#include \"pbs_sched.h\"\n#include \"liblicense.h\"\n#include \"ifl_internal.h\"\n\n/* Global Data Items: */\n\nextern struct server server;\nextern pbs_list_head svr_alljobs;\nextern 
pbs_list_head svr_queues;\nextern char server_name[];\nextern attribute_def svr_attr_def[];\nextern attribute_def que_attr_def[];\nextern attribute_def job_attr_def[];\nextern time_t time_now;\nextern char *msg_init_norerun;\nextern int resc_access_perm;\nextern long svr_history_enable;\nextern pbs_list_head svr_runjob_hooks;\n\nextern pbs_license_counts license_counts;\n\n/* Extern Functions */\n\nextern int status_attrib(svrattrl *, void *, attribute_def *, attribute *, int, int, pbs_list_head *, int *);\nextern int status_nodeattrib(svrattrl *, struct pbsnode *, int, int, pbs_list_head *, int *);\n\nextern int svr_chk_histjob(job *);\n\n/* Private Data Definitions */\n\nstatic int bad;\n\n/* The following private support functions are included */\n\nstatic int status_que(pbs_queue *, struct batch_request *, pbs_list_head *);\nstatic int status_node(struct pbsnode *, struct batch_request *, pbs_list_head *);\nstatic int status_resv(resc_resv *, struct batch_request *, pbs_list_head *);\n\n/**\n * @brief\n * \tSupport function for req_stat_job() and stat_a_jobidname().\n * \tBuilds status reply for normal job, Array job, and if requested all of the\n * \tsubjobs of the array (but not a single or range of subjobs).\n *\n * @note\n * \tIf dohistjobs is not set and the job is history, no status or error\n * \tis returned. 
If an error return is needed, the caller must make that check.\n *\n * @param[in,out] preq       - pointer to the stat job batch request, reply updated\n * @param[in]     pjob       - pointer to the job to be statused\n * @param[in]     dohistjobs - flag to include job if it is a history job\n * @param[in]     dosubjobs  - flag to expand an Array job to include all subjobs\n *\n * @return int\n * @retval PBSE_NONE  - no error\n * @retval !PBSE_NONE - PBS error code to return to client\n */\nstatic int\ndo_stat_of_a_job(struct batch_request *preq, job *pjob, int dohistjobs, int dosubjobs)\n{\n\tint i;\n\tsvrattrl *pal;\n\tint rc;\n\tstruct batch_reply *preply = &preq->rq_reply;\n\n\t/* if history job and not asking for them, just return */\n\tif (!dohistjobs && (check_job_state(pjob, JOB_STATE_LTR_FINISHED) || check_job_state(pjob, JOB_STATE_LTR_MOVED))) {\n\t\treturn (PBSE_NONE); /* just return nothing */\n\t}\n\n\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_SubJob) == 0) {\n\t\t/* this is not a subjob, go ahead and build the status reply for this job */\n\t\tpal = (svrattrl *) GET_NEXT(preq->rq_ind.rq_status.rq_attr);\n\t\trc = status_job(pjob, preq, pal, &preply->brp_un.brp_status, &bad, dosubjobs);\n\t\tif (dosubjobs && (pjob->ji_qs.ji_svrflags & JOB_SVFLG_ArrayJob) && (rc == PBSE_NONE || rc != PBSE_PERM) && pjob->ji_ajinfo != NULL && pjob->ji_ajinfo->tkm_ct != pjob->ji_ajinfo->tkm_subjsct[JOB_STATE_QUEUED]) {\n\t\t\tfor (i = pjob->ji_ajinfo->tkm_start; i <= pjob->ji_ajinfo->tkm_end; i += pjob->ji_ajinfo->tkm_step) {\n\t\t\t\tif (range_contains(pjob->ji_ajinfo->trm_quelist, i))\n\t\t\t\t\tcontinue;\n\t\t\t\trc = status_subjob(pjob, preq, pal, i, &preply->brp_un.brp_status, &bad, 1);\n\t\t\t\tif (rc && rc != PBSE_PERM)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tif (rc && rc != PBSE_PERM)\n\t\t\treturn (rc);\n\t}\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * \tSupport function for req_stat_job().\n * \tBuilds status reply for a single job id, which may be: a normal job,\n 
* \tan Array job, a single subjob or a range of subjobs.\n * \tFinds the job structure for the job id and calls either do_stat_of_a_job()\n * \tor status_subjob() to build the actual status reply.\n *\n * @param[in,out] preq       - pointer to the stat job batch request, reply updated\n * @param[in]     name       - job id to be statused\n * @param[in]     dohistjobs - flag to include job if it is a history job\n * @param[in]     dosubjobs  - flag to expand an Array job to include all subjobs\n *\n * @return int\n * @retval PBSE_NONE  - no error\n * @retval !PBSE_NONE - PBS error code to return to client\n */\nstatic int\nstat_a_jobidname(struct batch_request *preq, char *name, int dohistjobs, int dosubjobs)\n{\n\tint i;\n\tchar *pc;\n\tchar *range;\n\tint rc;\n\tjob *pjob;\n\tstruct batch_reply *preply = &preq->rq_reply;\n\tsvrattrl *pal;\n\n\ti = is_job_array(name);\n\tif (i == IS_ARRAY_Single) {\n\t\tint idx;\n\n\t\tpjob = find_arrayparent(name);\n\t\tif (pjob == NULL)\n\t\t\treturn PBSE_UNKJOBID;\n\t\telse if (!dohistjobs && (rc = svr_chk_histjob(pjob)))\n\t\t\treturn rc;\n\t\tidx = get_index_from_jid(name);\n\t\tif (idx != -1) {\n\t\t\tpal = (svrattrl *) GET_NEXT(preq->rq_ind.rq_status.rq_attr);\n\t\t\trc = status_subjob(pjob, preq, pal, idx, &preply->brp_un.brp_status, &bad, 0);\n\t\t} else\n\t\t\trc = PBSE_UNKJOBID;\n\t\treturn rc; /* no job still needs to be stat-ed */\n\n\t} else if ((i == IS_ARRAY_NO) || (i == IS_ARRAY_ArrayJob)) {\n\t\tpjob = find_job(name);\n\t\tif (pjob == NULL)\n\t\t\treturn PBSE_UNKJOBID;\n\t\telse if (!dohistjobs && (rc = svr_chk_histjob(pjob)) != PBSE_NONE)\n\t\t\treturn rc;\n\t\treturn do_stat_of_a_job(preq, pjob, dohistjobs, dosubjobs);\n\t} else {\n\t\t/* range of sub jobs */\n\t\trange = get_range_from_jid(name);\n\t\tif (range == NULL)\n\t\t\treturn PBSE_IVALREQ;\n\t\tpjob = find_arrayparent(name);\n\t\tif (pjob == NULL)\n\t\t\treturn PBSE_UNKJOBID;\n\t\telse if (!dohistjobs && (rc = svr_chk_histjob(pjob)) != 
PBSE_NONE)\n\t\t\treturn rc;\n\t\tpal = (svrattrl *) GET_NEXT(preq->rq_ind.rq_status.rq_attr);\n\t\twhile (1) {\n\t\t\tint start;\n\t\t\tint end;\n\t\t\tint step;\n\t\t\tint unused;\n\n\t\t\tif ((i = parse_subjob_index(range, &pc, &start, &end, &step, &unused)) == -1)\n\t\t\t\treturn PBSE_IVALREQ;\n\t\t\telse if (i == 1)\n\t\t\t\tbreak;\n\t\t\tfor (i = start; i <= end; i += step) {\n\t\t\t\tif (preply->brp_count >= MAX_JOBS_PER_REPLY) {\n\t\t\t\t\trc = reply_send_status_part(preq);\n\t\t\t\t\tif (rc != PBSE_NONE)\n\t\t\t\t\t\treturn rc;\n\t\t\t\t}\n\t\t\t\trc = status_subjob(pjob, preq, pal, i, &preply->brp_un.brp_status, &bad, 0);\n\t\t\t\tif (rc && rc != PBSE_PERM)\n\t\t\t\t\treturn rc;\n\t\t\t}\n\t\t\trange = pc;\n\t\t}\n\t\t/* stat-ed the range, no more to stat for this id */\n\t\treturn PBSE_NONE;\n\t}\n}\n\n/**\n * @brief\n * \tService the Status Job Request\n *\n * \tThis processes the request for status of a single job or\n * \tthe set of jobs at a destination.  It uses the currently known data\n * \tfor resources_used in the case of a running job.  If Mom for that\n * \tjob is down, the data is likely stale.\n * \tThe requested object may be a job id (either a single regular job, an Array\n * \tjob, a subjob or a range of subjobs), a comma separated list of the above,\n * \ta queue name or null (or @...) for all jobs in the Server.\n *\n * @param[in/out] preq - pointer to the stat job batch request, reply updated\n *\n * @return void\n *\n */\nvoid\nreq_stat_job(struct batch_request *preq)\n{\n\tint at_least_one_success = 0;\n\tint dosubjobs = 0;\n\tint dohistjobs = 0;\n\tchar *name;\n\tjob *pjob = NULL;\n\tpbs_queue *pque = NULL;\n\tstruct batch_reply *preply;\n\tint rc = 0;\n\tint type = 0;\n\tchar *pnxtjid = NULL;\n\n\t/* check for any extended flag in the batch request. 't' for\n\t * the sub jobs. If 'x' is there, then check if the server is\n\t * configured for history job info. If not set or set to FALSE,\n\t * return with PBSE_JOBHISTNOTSET error. 
Otherwise select history\n\t * jobs.\n\t */\n\tif (preq->rq_extend) {\n\t\tif (strchr(preq->rq_extend, (int) 't'))\n\t\t\tdosubjobs = 1; /* status sub jobs of an Array Job */\n\t\tif (strchr(preq->rq_extend, (int) 'x')) {\n\t\t\tif (svr_history_enable == 0) {\n\t\t\t\treq_reject(PBSE_JOBHISTNOTSET, 0, preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tdohistjobs = 1; /* status history jobs */\n\t\t}\n\t}\n\n\t/*\n\t * first, validate the name of the requested object, either\n\t * a job, a queue, or the whole server.\n\t * type = 1 for a job, Array job, subjob or range of subjobs, or\n\t *          a comma separated list of the above,\n\t *        2 for jobs in a queue, or\n\t *        3 for jobs in the server.\n\t */\n\n\tname = preq->rq_ind.rq_status.rq_id;\n\n\tif (isdigit((int) *name)) {\n\t\t/* a single job id */\n\t\ttype = 1;\n\t\trc = PBSE_UNKJOBID;\n\n\t} else if (isalpha((int) *name)) {\n\t\tpque = find_queuebyname(name); /* status jobs in a queue */\n#ifdef NAS /* localmod 075 */\n\t\tif (pque == NULL)\n\t\t\tpque = find_resvqueuebyname(name);\n#endif /* localmod 075 */\n\t\tif (pque)\n\t\t\ttype = 2;\n\t\telse\n\t\t\trc = PBSE_UNKQUE;\n\n\t} else if ((*name == '\\0') || (*name == '@'))\n\t\ttype = 3; /* status all jobs at server */\n\telse\n\t\trc = PBSE_IVALREQ;\n\n\tif (type == 0) { /* is invalid - an error */\n\t\treq_reject(rc, 0, preq);\n\t\treturn;\n\t}\n\tpreply = &preq->rq_reply;\n\tpreply->brp_choice = BATCH_REPLY_CHOICE_Status;\n\tCLEAR_HEAD(preply->brp_un.brp_status);\n\tpreply->brp_count = 0;\n\n\tif (dosubjobs && GET_NEXT(preq->rq_ind.rq_status.rq_attr) != NULL) {\n\t\tif (find_svrattrl_list_entry(&preq->rq_ind.rq_status.rq_attr, ATTR_array_indices_remaining, NULL) == NULL)\n\t\t\tadd_to_svrattrl_list(&preq->rq_ind.rq_status.rq_attr, ATTR_array_indices_remaining, NULL, \"\", SET, NULL);\n\t}\n\n\trc = PBSE_NONE;\n\tif (type == 1) {
\n\t\t/*\n\t\t * If more than one job id is given and status is returned\n\t\t * for at least one of them, no error is reported.\n\t\t * If a single job id is requested and there is an error,\n\t\t * the error is returned.\n\t\t */\n\t\tpnxtjid = name;\n\t\twhile ((name = parse_comma_string_r(&pnxtjid)) != NULL) {\n\t\t\tif ((rc = stat_a_jobidname(preq, name, dohistjobs, dosubjobs)) == PBSE_NONE)\n\t\t\t\tat_least_one_success = 1;\n\t\t}\n\t\tif (at_least_one_success == 1)\n\t\t\treply_send(preq);\n\t\telse\n\t\t\treq_reject(rc, 0, preq);\n\t\treturn;\n\n\t} else {\n\t\tpjob = (job *) GET_NEXT(type == 2 ? pque->qu_jobs : svr_alljobs);\n\t\twhile (pjob) {\n\t\t\trc = do_stat_of_a_job(preq, pjob, dohistjobs, dosubjobs);\n\t\t\tif (rc != PBSE_NONE) {\n\t\t\t\treq_reject(rc, bad, preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tpjob = (job *) GET_NEXT(type == 2 ? pjob->ji_jobque : pjob->ji_alljobs);\n\t\t\tif (preply->brp_count >= MAX_JOBS_PER_REPLY && pjob) {\n\t\t\t\trc = reply_send_status_part(preq);\n\t\t\t\tif (rc != PBSE_NONE)\n\t\t\t\t\treturn;\n\t\t\t}\n\t\t}\n\t}\n\n\tif (rc && rc != PBSE_PERM)\n\t\treq_reject(rc, bad, preq);\n\telse\n\t\treply_send(preq);\n}\n\n/**\n * @brief\n * \t\treq_stat_que - service the Status Queue Request\n *\n *\t\tThis request processes the request for status of a single queue or\n *\t\tthe set of queues at a destination.\n *\n * @param[in,out]\tpreq\t-\tptr to the decoded request\n */\n\nvoid\nreq_stat_que(struct batch_request *preq)\n{\n\tchar *name;\n\tpbs_queue *pque;\n\tstruct batch_reply *preply;\n\tint rc = 0;\n\tint type = 0;\n\n\t/*\n\t * first, validate the name of the requested object, either\n\t * a queue, or null for all queues\n\t */\n\n\tname = preq->rq_ind.rq_status.rq_id;\n\n\tif ((*name == '\\0') || (*name == '@'))\n\t\ttype = 1;\n\telse {\n\t\tpque = find_queuebyname(name);\n#ifdef NAS /* localmod 075 */\n\t\tif (pque == NULL)\n\t\t\tpque = find_resvqueuebyname(name);\n#endif /* localmod 075 */\n\t\tif (pque == NULL) {\n\t\t\treq_reject(PBSE_UNKQUE, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tpreply = &preq->rq_reply;\n\tpreply->brp_choice = 
BATCH_REPLY_CHOICE_Status;\n\tCLEAR_HEAD(preply->brp_un.brp_status);\n\tpreply->brp_count = 0;\n\n\tif (type == 0) { /* get status of the one named queue */\n\t\trc = status_que(pque, preq, &preply->brp_un.brp_status);\n\n\t} else { /* get status of queues */\n\n\t\tpque = (pbs_queue *) GET_NEXT(svr_queues);\n\t\twhile (pque) {\n\t\t\trc = status_que(pque, preq, &preply->brp_un.brp_status);\n\t\t\tif (rc != 0) {\n\t\t\t\tif (rc == PBSE_PERM)\n\t\t\t\t\trc = 0;\n\t\t\t\telse\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\tpque = (pbs_queue *) GET_NEXT(pque->qu_link);\n\t\t}\n\t}\n\tif (rc) {\n\t\treply_free(preply);\n\t\treq_reject(rc, bad, preq);\n\t} else {\n\t\treply_send(preq);\n\t}\n}\n\n/**\n * @brief\n * \t\tstatus_que - Build the status reply for a single queue.\n *\n * @param[in,out]\tpque\t-\tptr to que to status\n * @param[in]\t\tpreq\t-\tptr to the decoded request\n * @param[in,out]\tpstathd\t-\thead of list to append status to\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t!0\t: PBSE error code\n */\n\nstatic int\nstatus_que(pbs_queue *pque, struct batch_request *preq, pbs_list_head *pstathd)\n{\n\tstruct brp_status *pstat;\n\tsvrattrl *pal;\n\tlong total_jobs;\n\tint rc = 0;\n\tattribute *qattr;\n\n\tif ((preq->rq_perm & ATR_DFLAG_RDACC) == 0)\n\t\treturn (PBSE_PERM);\n\n\t/* ok going to do status, update count and state counts from qu_qs */\n\n\tif (!svr_chk_history_conf()) {\n\t\ttotal_jobs = pque->qu_numjobs;\n\t} else {\n\t\ttotal_jobs = pque->qu_numjobs - (pque->qu_njstate[JOB_STATE_MOVED] + pque->qu_njstate[JOB_STATE_FINISHED] + pque->qu_njstate[JOB_STATE_EXPIRED]);\n\t}\n\tset_qattr_l_slim(pque, QA_ATR_TotalJobs, total_jobs, SET);\n\n\tqattr = get_qattr(pque, QA_ATR_JobsByState);\n\tupdate_state_ct(qattr, pque->qu_njstate, &que_attr_def[QA_ATR_JobsByState]);\n\n\t/* allocate status sub-structure and fill in header portion */\n\n\tpstat = (struct brp_status *) malloc(sizeof(struct brp_status));\n\tif (pstat == NULL)\n\t\treturn 
(PBSE_SYSTEM);\n\tpstat->brp_objtype = MGR_OBJ_QUEUE;\n\tstrcpy(pstat->brp_objname, pque->qu_qs.qu_name);\n\tCLEAR_LINK(pstat->brp_stlink);\n\tCLEAR_HEAD(pstat->brp_attr);\n\tappend_link(pstathd, &pstat->brp_stlink, pstat);\n\tpreq->rq_reply.brp_count++;\n\n\t/* add attributes to the status reply */\n\n\tbad = 0;\n\tpal = (svrattrl *) GET_NEXT(preq->rq_ind.rq_status.rq_attr);\n\tif (status_attrib(pal, que_attr_idx, que_attr_def, pque->qu_attr, QA_ATR_LAST,\n\t\t\t  preq->rq_perm, &pstat->brp_attr, &bad))\n\t\trc = PBSE_NOATTR;\n\n\tif (is_attr_set(qattr))\n\t\tfree_attr(que_attr_def, qattr, QA_ATR_JobsByState);\n\treturn rc;\n}\n\n/**\n * @brief\n * \t\treq_stat_node - service the Status Node Request\n *\n *\t\tThis request processes the request for status of a single node or\n *\t\tset of nodes at a destination.\n *\n * @param[in]\tpreq\t-\tptr to the decoded request\n */\n\nvoid\nreq_stat_node(struct batch_request *preq)\n{\n\tchar *name;\n\tstruct batch_reply *preply;\n\tsvrattrl *pal;\n\tstruct pbsnode *pnode = NULL;\n\tint rc = 0;\n\tint type = 0;\n\tint i;\n\n\t/*\n\t * first, check that the server indeed has a list of nodes\n\t * and if it does, validate the name of the requested object--\n\t * either name is that of a specific node, or name[0] is null/@\n\t * meaning request is for all nodes in the server's jurisdiction\n\t */\n\n\tif (pbsndlist == 0 || svr_totnodes <= 0) {\n\t\treq_reject(PBSE_NONODES, 0, preq);\n\t\treturn;\n\t}\n\n\tresc_access_perm = preq->rq_perm;\n\n\tname = preq->rq_ind.rq_status.rq_id;\n\n\tif ((*name == '\\0') || (*name == '@'))\n\t\ttype = 1;\n\telse {\n\t\tpnode = find_nodebyname(name);\n\t\tif (pnode == NULL) {\n\t\t\treq_reject(PBSE_UNKNODE, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tpreply = &preq->rq_reply;\n\tpreply->brp_choice = BATCH_REPLY_CHOICE_Status;\n\tCLEAR_HEAD(preply->brp_un.brp_status);\n\tpreply->brp_count = 0;\n\n\tif (type == 0) { /* get status of the named node */\n\t\trc = status_node(pnode, preq, 
&preply->brp_un.brp_status);\n\n\t} else { /* get status of all nodes */\n\n\t\tfor (i = 0; i < svr_totnodes; i++) {\n\t\t\tpnode = pbsndlist[i];\n\n\t\t\trc = status_node(pnode, preq,\n\t\t\t\t\t &preply->brp_un.brp_status);\n\t\t\tif (rc)\n\t\t\t\tbreak;\n\t\t}\n\t}\n\n\tif (!rc) {\n\t\treply_send(preq);\n\t} else {\n\t\tif (rc != PBSE_UNKNODEATR)\n\t\t\treq_reject(rc, 0, preq);\n\n\t\telse {\n\t\t\tpal = (svrattrl *) GET_NEXT(preq->rq_ind.rq_status.rq_attr);\n\t\t\treply_badattr(rc, bad, pal, preq);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\tstatus_node - Build the status reply for a single node.\n *\n * @param[in,out]\tpnode\t-\tptr to node receiving status query\n * @param[in]\tpreq\t-\tptr to the decoded request\n * @param[in,out]\tpstathd\t-\thead of list to append status to\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t!0\t: PBSE error code\n */\n\nstatic int\nstatus_node(struct pbsnode *pnode, struct batch_request *preq, pbs_list_head *pstathd)\n{\n\tint rc = 0;\n\tstruct brp_status *pstat;\n\tsvrattrl *pal;\n\tunsigned long old_nd_state = VNODE_UNAVAILABLE;\n\n\tif (pnode->nd_state & INUSE_DELETED) /*node no longer valid*/\n\t\treturn (0);\n\n\tif ((preq->rq_perm & ATR_DFLAG_RDACC) == 0)\n\t\treturn (PBSE_PERM);\n\n\t/* sync state attribute with nd_state */\n\n\tif (pnode->nd_state != get_nattr_long(pnode, ND_ATR_state))\n\t\tset_nattr_l_slim(pnode, ND_ATR_state, pnode->nd_state, SET);\n\n\t/*node is provisioning - mask out the DOWN/UNKNOWN flags while prov is on*/\n\tif (get_nattr_long(pnode, ND_ATR_state) & (INUSE_PROV | INUSE_WAIT_PROV)) {\n\t\told_nd_state = get_nattr_long(pnode, ND_ATR_state);\n\n\t\t/* don't want to show job-busy, job/resv-excl while provisioning */\n\t\tset_nattr_l_slim(pnode, ND_ATR_state,\n\t\t\t\t old_nd_state & ~(INUSE_DOWN | INUSE_UNKNOWN | INUSE_JOB | INUSE_JOBEXCL | INUSE_RESVEXCL),\n\t\t\t\t SET);\n\t}\n\n\t/*allocate status sub-structure and fill in header portion*/\n\n\tpstat = (struct brp_status *) 
malloc(sizeof(struct brp_status));\n\tif (pstat == NULL)\n\t\treturn (PBSE_SYSTEM);\n\n\tpstat->brp_objtype = MGR_OBJ_NODE;\n\tstrcpy(pstat->brp_objname, pnode->nd_name);\n\tCLEAR_LINK(pstat->brp_stlink);\n\tCLEAR_HEAD(pstat->brp_attr);\n\n\t/*add this new brp_status structure to the list hanging off*/\n\t/*the request's reply substructure                         */\n\n\tappend_link(pstathd, &pstat->brp_stlink, pstat);\n\tpreq->rq_reply.brp_count++;\n\n\t/*point to the list of node-attributes about which we want status*/\n\t/*hang that status information from the brp_attr field for this  */\n\t/*brp_status structure                                           */\n\tbad = 0; /*global variable*/\n\tpal = (svrattrl *) GET_NEXT(preq->rq_ind.rq_status.rq_attr);\n\n\trc = status_nodeattrib(pal, pnode, ND_ATR_LAST, preq->rq_perm, &pstat->brp_attr, &bad);\n\n\t/*revert the state*/\n\n\tif (get_nattr_long(pnode, ND_ATR_state) & INUSE_PROV)\n\t\tset_nattr_l_slim(pnode, ND_ATR_state, old_nd_state, SET);\n\n\treturn (rc);\n}\n\n/**\n * @brief\n * \tupdate_isrunhook - update the value of has_runjob_hook\n *\n * @param[in]\tpattr - ptr to the server attribute object\n *\n * @return\tvoid\n */\nstatic void\nupdate_isrunhook(attribute *pattr)\n{\n\thook *phook = NULL;\n\tlong old_val = pattr->at_val.at_long;\n\tlong new_val = 0;\n\n\t/* Check if there are any valid runjob hooks */\n\tfor (phook = (hook *) GET_NEXT(svr_runjob_hooks);\n\t     phook != NULL;\n\t     phook = (hook *) GET_NEXT(phook->hi_runjob_hooks)) {\n\t\tif (phook->enabled) {\n\t\t\tnew_val = 1;\n\t\t\tbreak;\n\t\t}\n\t}\n\n\tif (new_val != old_val) {\n\t\tpattr->at_val.at_long = new_val;\n\t\tpost_attr_set(pattr);\n\t}\n}\n\n/**\n * @brief\n * \t\treq_stat_svr - service the Status Server Request\n * @par\n *\t\tThis request processes the request for status of the Server\n *\n * @param[in]\tpreq\t-\tptr to the decoded request\n */\n\nvoid\nreq_stat_svr(struct batch_request *preq)\n{\n\tsvrattrl 
*pal;\n\tstruct batch_reply *preply;\n\tstruct brp_status *pstat;\n\tconn_t *conn;\n\n\t/* update count and state counts from sv_numjobs and sv_jobstates */\n\tset_sattr_l_slim(SVR_ATR_TotalJobs, server.sv_qs.sv_numjobs, SET);\n\tupdate_state_ct(get_sattr(SVR_ATR_JobsByState), server.sv_jobstates, &svr_attr_def[SVR_ATR_JobsByState]);\n\n\tupdate_license_ct();\n\n\tconn = get_conn(preq->rq_conn);\n\tif (!conn) {\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\treturn;\n\t}\n\tif (conn->cn_origin == CONN_SCHED_PRIMARY) {\n\t\t/* Request is from sched so update \"has_runjob_hook\" */\n\t\tupdate_isrunhook(get_sattr(SVR_ATR_has_runjob_hook));\n\t}\n\n\t/* allocate a reply structure and a status sub-structure */\n\n\tpreply = &preq->rq_reply;\n\tpreply->brp_choice = BATCH_REPLY_CHOICE_Status;\n\tCLEAR_HEAD(preply->brp_un.brp_status);\n\tpreply->brp_count = 0;\n\n\tpstat = (struct brp_status *) malloc(sizeof(struct brp_status));\n\tif (pstat == NULL) {\n\t\treply_free(preply);\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\treturn;\n\t}\n\tCLEAR_LINK(pstat->brp_stlink);\n\tstrcpy(pstat->brp_objname, server_name);\n\tpstat->brp_objtype = MGR_OBJ_SERVER;\n\tCLEAR_HEAD(pstat->brp_attr);\n\tappend_link(&preply->brp_un.brp_status, &pstat->brp_stlink, pstat);\n\tpreply->brp_count++;\n\n\t/* add attributes to the status reply */\n\n\tbad = 0;\n\tpal = (svrattrl *) GET_NEXT(preq->rq_ind.rq_status.rq_attr);\n\tif (status_attrib(pal, svr_attr_idx, svr_attr_def, server.sv_attr, SVR_ATR_LAST,\n\t\t\t  preq->rq_perm, &pstat->brp_attr, &bad))\n\t\treply_badattr(PBSE_NOATTR, bad, pal, preq);\n\telse\n\t\treply_send(preq);\n}\n\n/**\n * @brief\n * \t\tstatus_sched - Build the status reply for single scheduler\n *\n * @param[in]\tpsched\t-\tptr to sched receiving status query\n * @param[in]\tpreq\t-\tptr to the decoded request\n * @param[out]\tpstathd\t-\thead of list to append status to\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t!0\t: PBSE error code\n */\nstatic 
int\nstatus_sched(pbs_sched *psched, struct batch_request *preq, pbs_list_head *pstathd)\n{\n\tint rc = 0;\n\tstruct brp_status *pstat;\n\tsvrattrl *pal;\n\n\tpstat = (struct brp_status *) malloc(sizeof(struct brp_status));\n\tif (pstat == NULL)\n\t\treturn (PBSE_SYSTEM);\n\n\tpstat->brp_objtype = MGR_OBJ_SCHED;\n\tstrncpy(pstat->brp_objname, psched->sc_name, (PBS_MAXSVRJOBID > PBS_MAXDEST ? PBS_MAXSVRJOBID : PBS_MAXDEST) - 1);\n\tpstat->brp_objname[(PBS_MAXSVRJOBID > PBS_MAXDEST ? PBS_MAXSVRJOBID : PBS_MAXDEST) - 1] = '\\0';\n\n\tCLEAR_LINK(pstat->brp_stlink);\n\tCLEAR_HEAD(pstat->brp_attr);\n\tappend_link(pstathd, &pstat->brp_stlink, pstat);\n\tpreq->rq_reply.brp_count++;\n\n\tbad = 0;\n\tpal = (svrattrl *) GET_NEXT(preq->rq_ind.rq_status.rq_attr);\n\tif (status_attrib(pal, sched_attr_idx, sched_attr_def, psched->sch_attr, SCHED_ATR_LAST,\n\t\t\t  preq->rq_perm, &pstat->brp_attr, &bad))\n\t\treply_badattr(PBSE_NOATTR, bad, pal, preq);\n\n\treturn (rc);\n}\n\n/**\n * @brief\n * \t\treq_stat_sched - service a PBS_BATCH_StatusSched request\n * @par\n *\t\tThis function processes a request regarding scheduler status\n *\n * @param[in]\tpreq\t-\tptr to the decoded request\n *\n * @par MT-safe: No\n */\n\nvoid\nreq_stat_sched(struct batch_request *preq)\n{\n\tsvrattrl *pal;\n\tstruct batch_reply *preply;\n\tint rc = 0;\n\tpbs_sched *psched;\n\n\t/* allocate a reply structure and a status sub-structure */\n\n\tpreply = &preq->rq_reply;\n\tpreply->brp_choice = BATCH_REPLY_CHOICE_Status;\n\tCLEAR_HEAD(preply->brp_un.brp_status);\n\tpreply->brp_count = 0;\n\n\tfor (psched = (pbs_sched *) GET_NEXT(svr_allscheds);\n\t     (psched != NULL);\n\t     psched = (pbs_sched *) GET_NEXT(psched->sc_link)) {\n\t\trc = status_sched(psched, preq, &preply->brp_un.brp_status);\n\t\tif (rc != 0) {\n\t\t\tbreak;\n\t\t}\n\t}\n\n\tif (!rc) {\n\t\treply_send(preq);\n\t} else {\n\t\tif (rc != PBSE_NOATTR)\n\t\t\treq_reject(rc, 0, preq);\n\t\telse {\n\t\t\tpal = (svrattrl *) 
GET_NEXT(preq->rq_ind.rq_status.rq_attr);\n\t\t\treply_badattr(rc, bad, pal, preq);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\tupdate_state_ct - update the count of jobs per state (in queue and server\n *\t\tattributes).\n *\n * @param[out]\tpattr\t-\tqueue or server attribute\n * @param[in]\tct_array\t-\tnumber of jobs per state\n * @param[in] attr_def - attribute def of pattr\n *\n * @par MT-safe: No\n */\n\nvoid\nupdate_state_ct(attribute *pattr, int *ct_array, attribute_def *attr_def)\n{\n\tstatic char *statename[] = {\"Transit\", \"Queued\", \"Held\", \"Waiting\",\n\t\t\t\t    \"Running\", \"Exiting\", \"Expired\", \"Begun\",\n\t\t\t\t    \"Moved\", \"Finished\"};\n\tint index;\n\tchar buf[BUF_SIZE];\n\n\tbuf[0] = '\\0';\n\tfor (index = 0; index < (PBS_NUMJOBSTATE); index++) {\n\t\tif ((index == JOB_STATE_EXPIRED) ||\n\t\t    (index == JOB_STATE_MOVED) ||\n\t\t    (index == JOB_STATE_FINISHED))\n\t\t\tcontinue; /* skip over Expired/Moved/Finished */\n\t\tsprintf(buf + strlen(buf), \"%s:%d \", statename[index],\n\t\t\t*(ct_array + index));\n\t}\n\tset_attr_generic(pattr, attr_def, buf, NULL, INTERNAL);\n}\n\n/**\n * @brief\n * \tupdate_license_ct - update the # of licenses (counters) in 'license_count' server attribute.\n */\nvoid\nupdate_license_ct(void)\n{\n\tchar buf[BUF_SIZE];\n\n\tbuf[0] = '\\0';\n\tsnprintf(buf, sizeof(buf), \"Avail_Global:%ld Avail_Local:%ld Used:%ld High_Use:%d\",\n\t\t license_counts.licenses_global,\n\t\t license_counts.licenses_local,\n\t\t license_counts.licenses_used,\n\t\t license_counts.licenses_high_use.lu_max_forever);\n\tset_sattr_str_slim(SVR_ATR_license_count, buf, NULL);\n}\n\n/**\n * @brief\n * \t\treq_stat_resv - service the Status Reservation Request\n * @par\n *\t\tThis request processes the request for status of a single\n *\t\treservation or the set of reservations at a destination.\n *\n * @param[in,out]\tpreq\t-\tptr to the decoded request\n */\n\nvoid\nreq_stat_resv(struct batch_request *preq)\n{\n\tchar 
*name;\n\tstruct batch_reply *preply;\n\tresc_resv *presv = NULL;\n\tint rc = 0;\n\tint type = 0;\n\n\t/*\n\t * first, validate the name sent in the request.\n\t * This is either the ID of a specific reservation\n\t * or a '\\0' or \"@...\" for all reservations.\n\t */\n\n\tname = preq->rq_ind.rq_status.rq_id;\n\n\tif ((*name == '\\0') || (*name == '@'))\n\t\ttype = 1;\n\telse {\n\t\tpresv = find_resv(name);\n\t\tif (presv == NULL) {\n\t\t\treq_reject(PBSE_UNKRESVID, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tpreply = &preq->rq_reply;\n\tpreply->brp_choice = BATCH_REPLY_CHOICE_Status;\n\tCLEAR_HEAD(preply->brp_un.brp_status);\n\tpreply->brp_count = 0;\n\n\tif (type == 0) {\n\t\t/* get status of the specifically named reservation */\n\t\trc = status_resv(presv, preq, &preply->brp_un.brp_status);\n\n\t} else {\n\t\t/* get status of all the reservations */\n\n\t\tpresv = (resc_resv *) GET_NEXT(svr_allresvs);\n\t\twhile (presv) {\n\t\t\trc = status_resv(presv, preq, &preply->brp_un.brp_status);\n\t\t\tif (rc == PBSE_PERM)\n\t\t\t\trc = 0;\n\t\t\tif (rc)\n\t\t\t\tbreak;\n\t\t\tpresv = (resc_resv *) GET_NEXT(presv->ri_allresvs);\n\t\t}\n\t}\n\n\tif (rc == 0)\n\t\treply_send(preq);\n\telse\n\t\treq_reject(rc, bad, preq);\n}\n\n/**\n * @brief\n * \t\tstatus_resv - Build the status reply for a single resv.\n *\n * @param[in]\tpresv\t-\tget status for this reservation\n * @param[in]\tpreq\t-\tptr to the decoded request\n * @param[in,out]\tpstathd\t-\tappend retrieved status to list\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t!0\t: PBSE error\n */\n\nstatic int\nstatus_resv(resc_resv *presv, struct batch_request *preq, pbs_list_head *pstathd)\n{\n\tstruct brp_status *pstat;\n\tsvrattrl *pal;\n\n\tif ((preq->rq_perm & ATR_DFLAG_RDACC) == 0)\n\t\treturn (PBSE_PERM);\n\n\t/*first do any need update to attributes from\n\t *\"quick save\" area of the resc_resv structure\n\t */\n\n\t/*now allocate status sub-structure and fill header portion*/\n\n\tpstat = (struct 
brp_status *) malloc(sizeof(struct brp_status));\n\tif (pstat == NULL)\n\t\treturn (PBSE_SYSTEM);\n\n\tpstat->brp_objtype = MGR_OBJ_RESV;\n\tstrcpy(pstat->brp_objname, presv->ri_qs.ri_resvID);\n\tCLEAR_LINK(pstat->brp_stlink);\n\tCLEAR_HEAD(pstat->brp_attr);\n\tappend_link(pstathd, &pstat->brp_stlink, pstat);\n\tpreq->rq_reply.brp_count++;\n\n\t/*finally, add the requested attributes to the status reply*/\n\n\tbad = 0; /*global: record ordinal position where got error*/\n\tpal = (svrattrl *) GET_NEXT(preq->rq_ind.rq_status.rq_attr);\n\n\tif (status_attrib(pal, resv_attr_idx, resv_attr_def, presv->ri_wattr,\n\t\t\t  RESV_ATR_LAST, preq->rq_perm, &pstat->brp_attr, &bad) == 0)\n\t\treturn (0);\n\telse\n\t\treturn (PBSE_NOATTR);\n}\n\n/**\n * @brief\n * \t\tstatus_resc - Build the status reply for a single resource.\n *\n * @param[in]\tprd\t-\tpointer to resource def to status\n * @param[in]\tpreq\t-\tpointer to the batch request to service\n * @param[in]\tpstathd\t-\tpointer to head of list to append status to\n * @param[in]\tprivate\t-\tif a pbs private request, the status returns numeric\n * \t\t\t\t\t\t\tvalues for type and flags. 
Otherwise it returns strings\n *\n * @par\n * \t\tAt the current time, the only things returned in the reply are\n *\t\tthe resource type and the flags, both as \"integers\".\n *\n * @return\twhether the operation was successful or not\n * @retval\t0\t: on success\n * @retval\tPBSE_SYSTEM\t: on error\n */\n\nstatic int\nstatus_resc(struct resource_def *prd, struct batch_request *preq, pbs_list_head *pstathd, int private)\n{\n\tattribute attr;\n\tstruct brp_status *pstat;\n\n\tif (((prd->rs_flags & ATR_DFLAG_USRD) == 0) &&\n\t    (preq->rq_perm & (ATR_DFLAG_MGRD | ATR_DFLAG_OPRD)) == 0)\n\t\treturn (PBSE_PERM);\n\n\t/* allocate status sub-structure and fill in header portion */\n\n\tpstat = (struct brp_status *) malloc(sizeof(struct brp_status));\n\tif (pstat == NULL)\n\t\treturn (PBSE_SYSTEM);\n\tpstat->brp_objtype = MGR_OBJ_RSC;\n\tstrcpy(pstat->brp_objname, prd->rs_name);\n\tCLEAR_LINK(pstat->brp_stlink);\n\tCLEAR_HEAD(pstat->brp_attr);\n\n\t/* add attributes to the status reply */\n\tif (private) {\n\t\tattr.at_val.at_long = prd->rs_type;\n\t\tattr.at_flags = ATR_VFLAG_SET;\n\t\tif (encode_l(&attr, &pstat->brp_attr, ATTR_RESC_TYPE, NULL, 0, NULL) == -1)\n\t\t\treturn PBSE_SYSTEM;\n\n\t\tattr.at_val.at_long = prd->rs_flags;\n\t\tattr.at_flags = ATR_VFLAG_SET;\n\t\tif (encode_l(&attr, &pstat->brp_attr, ATTR_RESC_FLAG, NULL, 0, NULL) == -1)\n\t\t\treturn PBSE_SYSTEM;\n\t} else {\n\t\tstruct resc_type_map *p_resc_type_map;\n\n\t\tp_resc_type_map = find_resc_type_map_by_typev(prd->rs_type);\n\t\tif (p_resc_type_map == NULL) {\n\t\t\treturn PBSE_SYSTEM;\n\t\t}\n\n\t\tattr.at_val.at_str = p_resc_type_map->rtm_rname;\n\t\tattr.at_flags = ATR_VFLAG_SET;\n\t\tif (encode_str(&attr, &pstat->brp_attr, ATTR_RESC_TYPE, NULL, 0, NULL) == -1)\n\t\t\treturn PBSE_SYSTEM;\n\n\t\tattr.at_val.at_str = find_resc_flag_map(prd->rs_flags);\n\t\tattr.at_flags = ATR_VFLAG_SET;\n\t\tif (encode_str(&attr, &pstat->brp_attr, ATTR_RESC_FLAG, NULL, 0, NULL) == -1)\n\t\t\treturn 
PBSE_SYSTEM;\n\t}\n\tappend_link(pstathd, &pstat->brp_stlink, pstat);\n\tpreq->rq_reply.brp_count++;\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\treq_stat_resc - service the Status Resource Request\n *\n *\t\tThis request processes the request for status of (information on)\n *\t\ta set of resources\n *\n * @param[in]\tpreq\t-\tptr to the decoded request\n */\n\nvoid\nreq_stat_resc(struct batch_request *preq)\n{\n\tint i;\n\tchar *name;\n\tchar *extend;\n\tstruct resource_def *prd = NULL;\n\tstruct batch_reply *preply;\n\tint rc = 0;\n\tint type;\n\tint private = 0;\n\n\tif (preq == NULL)\n\t\treturn;\n\t/*\n\t * first, validate the name of the requested object, either\n\t * a resource name, or null for all resources\n\t */\n\n\tname = preq->rq_ind.rq_status.rq_id;\n\n\tif ((*name == '\\0') || (*name == '@'))\n\t\ttype = 1;\n\telse {\n\t\ttype = 0;\n\t\tprd = find_resc_def(svr_resc_def, name);\n\t\tif (prd == NULL) {\n\t\t\treq_reject(PBSE_UNKRESC, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n\n\textend = preq->rq_extend;\n\tif (extend != NULL) {\n\t\tif (strchr(preq->rq_extend, (int) 'p'))\n\t\t\tprivate = 1;\n\t}\n\n\tpreply = &preq->rq_reply;\n\tpreply->brp_choice = BATCH_REPLY_CHOICE_Status;\n\tCLEAR_HEAD(preply->brp_un.brp_status);\n\tpreply->brp_count = 0;\n\n\tif (type == 0) { /* get status of the one named resource */\n\t\trc = status_resc(prd, preq, &preply->brp_un.brp_status, private);\n\n\t} else { /* get status of all resources */\n\n\t\ti = svr_resc_size;\n\t\tprd = &svr_resc_def[0];\n\t\twhile (i--) {\n\t\t\t/* skip the unknown resource because it would fail\n\t\t\t * to pass the string encoding routine\n\t\t\t */\n\t\t\tif (!private && (strcmp(prd->rs_name, RESOURCE_UNKNOWN) == 0)) {\n\t\t\t\tprd = prd->rs_next;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\trc = status_resc(prd, preq, &preply->brp_un.brp_status, private);\n\t\t\tif (rc == PBSE_PERM) {\n\t\t\t\t/* we skip resources that are disallowed to be\n\t\t\t\t * stat'ed by this user\n\t\t\t\t 
*/\n\t\t\t\trc = 0;\n\t\t\t}\n\t\t\tprd = prd->rs_next;\n\t\t}\n\t}\n\tif (rc) {\n\t\treply_free(preply);\n\t\treq_reject(rc, bad, preq);\n\t} else {\n\t\treply_send(preq);\n\t}\n}\n"
  },
  {
    "path": "src/server/req_track.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    req_track.c\n *\n * @brief\n * \treq_track.c\t-\tFunctions relation to the Track Job Request and job tracking.\n *\n * Functions included are:\n *\treq_track()\n *\ttrack_save()\n *\tissue_track()\n *\ttrack_history_job()\n *\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <unistd.h>\n#include <errno.h>\n#include <sys/types.h>\n#include <stdlib.h>\n#include \"libpbs.h\"\n#include <fcntl.h>\n#include <signal.h>\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"server.h\"\n#include \"credential.h\"\n#include \"batch_request.h\"\n#include \"job.h\"\n#include \"pbs_error.h\"\n#include \"work_task.h\"\n#include \"tracking.h\"\n#include \"log.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n\n/* External functions */\n\nextern int issue_to_svr(char *svr, struct batch_request *, void (*func)(struct work_task *));\n\n/* Local functions */\n\nstatic void track_history_job(struct rq_track *, char *);\n\n/* Global Data Items: */\n\nextern char *path_track;\nextern struct server server;\nextern time_t time_now;\nextern char server_name[];\n\n/**\n * @brief\n * \t\treq_track - record job tracking information\n *\n * @param[in,out]\tpreq\t-\trequest from the server.\n */\n\nvoid\nreq_track(struct batch_request *preq)\n{\n\tstruct tracking *empty = 
NULL;\n\tint i;\n\tint need;\n\tstruct tracking *new;\n\tstruct tracking *ptk;\n\tstruct rq_track *prqt;\n\n\t/*  make sure request is from a server */\n\n\tif (!preq->rq_fromsvr) {\n\t\treq_reject(PBSE_IVALREQ, 0, preq);\n\t\treturn;\n\t}\n\n\t/* attempt to locate tracking record for this job    */\n\t/* also remember first empty slot in case its needed */\n\n\tprqt = &preq->rq_ind.rq_track;\n\n\tptk = server.sv_track;\n\tfor (i = 0; i < server.sv_tracksize; i++) {\n\t\tif ((ptk + i)->tk_mtime) {\n\t\t\tif (!strcmp((ptk + i)->tk_jobid, prqt->rq_jid)) {\n\n\t\t\t\t/*\n\t\t\t\t * found record, discard it if state == exiting,\n\t\t\t\t * otherwise, update it if older\n\t\t\t\t */\n\n\t\t\t\tif (*prqt->rq_state == 'E') {\n\t\t\t\t\t(ptk + i)->tk_mtime = 0;\n\t\t\t\t\ttrack_history_job(prqt, NULL);\n\t\t\t\t} else if ((ptk + i)->tk_hopcount < prqt->rq_hopcount) {\n\t\t\t\t\t(ptk + i)->tk_hopcount = prqt->rq_hopcount;\n\t\t\t\t\t(void) strcpy((ptk + i)->tk_location, prqt->rq_location);\n\t\t\t\t\t(ptk + i)->tk_state = *prqt->rq_state;\n\t\t\t\t\t(ptk + i)->tk_mtime = time_now;\n\t\t\t\t\ttrack_history_job(prqt, preq->rq_extend);\n\t\t\t\t}\n\t\t\t\tserver.sv_trackmodifed = 1;\n\t\t\t\treply_ack(preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t} else if (empty == NULL) {\n\t\t\tempty = ptk + i;\n\t\t}\n\t}\n\n\t/* if we got here, didn't find it... 
*/\n\n\tif (*prqt->rq_state != 'E') {\n\n\t\t/* and need to add it */\n\n\t\tif (empty == NULL) {\n\n\t\t\t/* need to make room for more */\n\n\t\t\tneed = server.sv_tracksize * 3 / 2;\n\t\t\tnew = (struct tracking *) realloc(server.sv_track,\n\t\t\t\t\t\t\t  need * sizeof(struct tracking));\n\t\t\tif (new == NULL) {\n\t\t\t\tlog_err(errno, \"req_track\", \"malloc failed\");\n\t\t\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tempty = new + server.sv_tracksize; /* first new slot */\n\t\t\tfor (i = server.sv_tracksize; i < need; i++)\n\t\t\t\t(new + i)->tk_mtime = 0;\n\t\t\tserver.sv_tracksize = need;\n\t\t\tserver.sv_track = new;\n\t\t}\n\n\t\tempty->tk_mtime = time_now;\n\t\tempty->tk_hopcount = prqt->rq_hopcount;\n\t\t(void) strcpy(empty->tk_jobid, prqt->rq_jid);\n\t\t(void) strcpy(empty->tk_location, prqt->rq_location);\n\t\tempty->tk_state = *prqt->rq_state;\n\t\tserver.sv_trackmodifed = 1;\n\t}\n\treply_ack(preq);\n\treturn;\n}\n\n/**\n * @brief\n * \t\ttrack_save - save the tracking records to a file\n * @par\n *\t\tThis routine is invoked periodically by a timed work task entry.\n *\t\tThe first entry is created at server initialization time and then\n *\t\trecreated on each entry.\n * @par\n *\t\tOn server shutdown, track_save is called with a null work task pointer.\n *\n * @param[in]\tpwt\t-\tunused\n */\n\nvoid\ntrack_save(struct work_task *pwt)\n{\n\tint fd;\n\n\t/* set task for next round trip */\n\n\tif (pwt) { /* set up another work task for next time period */\n\t\tif (!set_task(WORK_Timed, (long) time_now + PBS_SAVE_TRACK_TM,\n\t\t\t      track_save, 0))\n\t\t\tlog_err(errno, __func__, \"Unable to set task for save\");\n\t}\n\n\tif (server.sv_trackmodifed == 0)\n\t\treturn; /* nothing to do this time */\n\n\tfd = open(path_track, O_WRONLY, 0);\n\tif (fd < 0) {\n\t\tlog_err(errno, __func__, \"Unable to open tracking file\");\n\t\treturn;\n\t}\n\n\tif (write(fd, (char *) server.sv_track, server.sv_tracksize * sizeof(struct 
tracking)) == -1) \n\t\tlog_errf(-1, __func__, \"write failed. ERR : %s\", strerror(errno));\n\t(void) close(fd);\n\tserver.sv_trackmodifed = 0;\n\treturn;\n}\n\n/**\n * @brief\n * \t\tissue_track - issue a Track Job Request to another server\n *\n * @param[in]\tpjob\t-\tptr to the job to be tracked\n */\n\nvoid\nissue_track(job *pjob)\n{\n\tstruct batch_request *preq;\n\tchar *pc;\n\n\tpreq = alloc_br(PBS_BATCH_TrackJob);\n\tif (preq == NULL)\n\t\treturn;\n\n\tpreq->rq_ind.rq_track.rq_hopcount = get_jattr_long(pjob, JOB_ATR_hopcount);\n\t(void) strcpy(preq->rq_ind.rq_track.rq_jid, pjob->ji_qs.ji_jobid);\n\t(void) strcpy(preq->rq_ind.rq_track.rq_location, pbs_server_name);\n\tpreq->rq_ind.rq_track.rq_state[0] = get_job_state(pjob);\n\tpreq->rq_extend = (char *) malloc(PBS_MAXROUTEDEST + 1);\n\tif (preq->rq_extend != NULL)\n\t\t(void) strncpy(preq->rq_extend, pjob->ji_qs.ji_queue, PBS_MAXROUTEDEST + 1);\n\n\tpc = pjob->ji_qs.ji_jobid;\n\twhile (*pc != '.')\n\t\tpc++;\n\t(void) issue_to_svr(++pc, preq, release_req);\n}\n\n/**\n * @brief\n * \t\ttrack_history_job()\t-\tIt updates the substate and comment attribute of\n * \t\ta history job (job state = JOB_STATE_LTR_MOVED).\n *\n * @param[in]\tprqt\t-\trequest track structure\n * @param[in]\textend\t-\trequest \"extension\" data\n *\n * @return\tNothing\n */\nstatic void\ntrack_history_job(struct rq_track *prqt, char *extend)\n{\n\tchar *comment = \"Job has been moved to\";\n\tjob *pjob = NULL;\n\tchar dest_queue[PBS_MAXROUTEDEST + 1] = {'\\0'};\n\n\t/* return if the server is not configured for job history */\n\tif (svr_chk_history_conf() == 0)\n\t\treturn;\n\n\tpjob = find_job(prqt->rq_jid);\n\n\t/*\n\t * Return if the job is not found, OR the job was not created here,\n\t * OR the job is not in state MOVED.\n\t */\n\tif ((pjob == NULL) ||\n\t    ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_HERE) == 0) ||\n\t    (!check_job_state(pjob, JOB_STATE_LTR_MOVED))) {\n\t\treturn;\n\t}\n\n\t/*\n\t * If the track state is 'E', then update the 
substate of\n\t * the history job substate=JOB_SUBSTATE_MOVED to JOB_SUBSTATE_FINISHED\n\t * and update the comment message.\n\t */\n\tif (*prqt->rq_state == 'E') {\n\t\tset_job_substate(pjob, JOB_SUBSTATE_FINISHED);\n\t\t/* over write the default comment message */\n\t\tcomment = \"Job finished at\";\n\t}\n\n\t/* If the track state is 'Q' and extend has data, update\n\t * history information with new destination queue.\n\t */\n\tif (*prqt->rq_state == 'Q' && extend != NULL) {\n\t\t(void) strncpy(dest_queue, extend, PBS_MAXQUEUENAME + 1);\n\t\t(void) strcat(dest_queue, \"@\");\n\t\t(void) strcat(dest_queue, prqt->rq_location);\n\t\t/* Set the new queue attribute to destination */\n\t\tset_jattr_generic(pjob, JOB_ATR_in_queue, dest_queue, NULL, SET);\n\t}\n\n\t/*\n\t * Populate the appropriate comment message in log_buffer\n\t * and call the decode API for the comment attribute of job\n\t * to update the modified comment message.\n\t */\n\tsprintf(log_buffer, \"%s \\\"%s\\\"\", comment, prqt->rq_location);\n\tset_jattr_str_slim(pjob, JOB_ATR_Comment, log_buffer, NULL);\n\tsvr_histjob_update(pjob, get_job_state(pjob), get_job_substate(pjob));\n}\n"
  },
  {
    "path": "src/server/resc_attr.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n *\n * @brief\n * \tFunctions relating to resource attributes and their action routines.\n *\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include <sys/types.h>\n#include <stdlib.h>\n#include <stdio.h>\n#include <ctype.h>\n#include \"pbs_ifl.h\"\n#include \"server_limits.h\"\n#include <string.h>\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"pbs_error.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"grunt.h\"\n#include \"pbs_share.h\"\n#include \"server.h\"\n#ifndef PBS_MOM\n#include \"queue.h\"\n#endif /* PBS_MOM */\n\nextern char *find_aoe_from_request(resc_resv *);\n/**\n * @brief\n * \t\tctnodes\t-\tcount the number of nodes in a node specification string\n *\n * @param[in]\tspec\t-\tnode specification string\n *\n * @return\tnumber of nodes\n */\nint\nctnodes(char *spec)\n{\n\tint ct = 0;\n\tchar *pc;\n\n\twhile (1) {\n\n\t\twhile (isspace((int) *spec))\n\t\t\t++spec;\n\n\t\tif (isdigit((int) *spec)) {\n\t\t\tpc = spec;\n\t\t\twhile (isdigit((int) *pc))\n\t\t\t\t++pc;\n\t\t\tif (!isalpha((int) *pc))\n\t\t\t\tct += atoi(spec);\n\t\t\telse\n\t\t\t\t++ct;\n\t\t} else\n\t\t\t++ct;\n\t\tif ((pc = strchr(spec, '+')) == NULL)\n\t\t\tbreak;\n\t\tspec = pc + 1;\n\t}\n\treturn (ct);\n}\n\n/**\n * @brief\n * \t\tset_node_ct - set node count\n * @par\n 
*\t\tThis is the \"at_action\" routine for the resource \"nodes\".\n *\t\tWhen the resource_list attribute changes, then set/update\n *\t\tthe value of the resource \"nodect\" for use by the scheduler.\n *\t\tAlso updates \"ncpus\" in most circumstances.\n *\n * @param[in]\tpnodesp\t-\tpointer to resource\n * @param[in,out]\tpattr\t-\tpointer to attribute\n * @param[in]\tpobj\t-\tunused here.\n * @param[in]\ttype\t-\tunused here.\n * @param[in]\tactmode\t-\tmode of action routine\n */\n\nint\nset_node_ct(resource *pnodesp, attribute *pattr, void *pobj, int type, int actmode)\n{\n#ifndef PBS_MOM\n\tint nn;\t      /* num of nodes */\n\tint nt;\t      /* num of tasks (processes) */\n\tint hcpp = 0; /* has :ccp in string */\n\tlong nc;\n\tresource *pnct;\n\tresource *pncpus;\n\tresource_def *pndef;\n\n\tif ((actmode == ATR_ACTION_RECOV) ||\n\t    ((is_attr_set(&pnodesp->rs_value)) == 0))\n\t\treturn (0);\n\n\t/* first validate the spec */\n\n\tif ((nn = validate_nodespec(pnodesp->rs_value.at_val.at_str)) != 0)\n\t\treturn nn;\n\n\t/* Set \"nodect\" to count of nodes in \"nodes\" */\n\n\tpndef = &svr_resc_def[RESC_NODECT];\n\tif (pndef == NULL)\n\t\treturn (PBSE_SYSTEM);\n\n\tif ((pnct = find_resc_entry(pattr, pndef)) == NULL) {\n\t\tif ((pnct = add_resource_entry(pattr, pndef)) == 0)\n\t\t\treturn (PBSE_SYSTEM);\n\t}\n\n\tnn = ctnodes(pnodesp->rs_value.at_val.at_str);\n\tpnct->rs_value.at_val.at_long = nn;\n\tpost_attr_set(&pnct->rs_value);\n\n\t/* find the number of cpus specified in the node string */\n\n\tnt = ctcpus(pnodesp->rs_value.at_val.at_str, &hcpp);\n\n\t/* Is \"ncpus\" set as a separate resource? 
*/\n\n\tpndef = &svr_resc_def[RESC_NCPUS];\n\tif (pndef == NULL)\n\t\treturn (PBSE_SYSTEM);\n\tif ((pncpus = find_resc_entry(pattr, pndef)) == NULL) {\n\t\tif ((pncpus = add_resource_entry(pattr, pndef)) == 0)\n\t\t\treturn (PBSE_SYSTEM);\n\t}\n\n\tif (((pncpus->rs_value.at_flags & (ATR_VFLAG_SET | ATR_VFLAG_DEFLT)) == ATR_VFLAG_SET) && (actmode == ATR_ACTION_NEW)) {\n\t\t/* ncpus is already set and not a default and new job */\n\n\t\tnc = pncpus->rs_value.at_val.at_long;\n\t\tif (hcpp && (nt != pncpus->rs_value.at_val.at_long)) {\n\t\t\t/* if cpp string specificed, this is an error */\n\t\t\treturn (PBSE_BADATVAL);\n\t\t} else if ((nc % nt) != 0) {\n\t\t\t/* ncpus must be multiple of number of tasks */\n\t\t\treturn (PBSE_BADATVAL);\n\t\t}\n\n\t} else {\n\t\t/* ncpus is not set or not a new job (qalter being done) */\n\t\t/* force ncpus to the correct thing */\n\t\tpncpus->rs_value.at_val.at_long = nt;\n\t\tpost_attr_set(&pncpus->rs_value);\n\t}\n\n#endif /* not MOM */\n\treturn (0);\n}\n\nstruct place_words {\n\tchar *pw_word;\t   /* place keyword */\n\tshort pw_oneof;\t   /* bit mask for which cannot be together */\n\tshort pw_equalstr; /* one if word has following \"=value\" */\n} place_words[] = {\n\t{PLACE_Group, 0, 1},\n\t{PLACE_Excl, 2, 0},\n\t{PLACE_ExclHost, 2, 0},\n\t{PLACE_Shared, 2, 0},\n\t{PLACE_Free, 1, 0},\n\t{PLACE_Pack, 1, 0},\n\t{PLACE_Scatter, 1, 0},\n\t{PLACE_VScatter, 1, 0}};\n/**\n * @brief\n * \t\tdecode_place\t-\tnot used.\n */\n\nint\ndecode_place(attribute *patr, char *name, char *rescn, char *val)\n{\n#ifndef PBS_MOM\n\tint have_oneof = 0;\n\tint i;\n\tsize_t ln;\n\tchar h;\n\tchar *pc;\n\tchar *px;\n\tstruct resource_def *pres;\n\n\tpc = val;\n\n\twhile (1) {\n\t\twhile (isspace((int) *pc))\n\t\t\t++pc;\n\t\tif (*pc == '\\0' || !isalpha((int) *pc))\n\t\t\treturn PBSE_BADATVAL;\n\t\t/* found start of word,  look for end of word */\n\t\tpx = pc + 1;\n\t\twhile (isalpha((int) *px))\n\t\t\tpx++;\n\n\t\tfor (i = 0; i < sizeof(place_words) / 
sizeof(place_words[0]); ++i) {\n\t\t\tif (strlen(place_words[i].pw_word) >= (size_t) (px - pc))\n\t\t\t\tln = strlen(place_words[i].pw_word);\n\t\t\telse\n\t\t\t\tln = (size_t) (px - pc);\n\t\t\tif (strncasecmp(pc, place_words[i].pw_word, ln) == 0) {\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tif (i == sizeof(place_words) / sizeof(place_words[0]))\n\t\t\treturn PBSE_BADATVAL;\n\n\t\tif (place_words[i].pw_oneof & have_oneof)\n\t\t\treturn PBSE_BADATVAL;\n\t\thave_oneof |= place_words[i].pw_oneof;\n\n\t\tif (place_words[i].pw_equalstr) {\n\t\t\tif (*px != '=')\n\t\t\t\treturn PBSE_BADATVAL;\n\t\t\tpc = ++px;\n\t\t\twhile ((isalnum((int) *px) || (*px == '_') || (*px == '-')) &&\n\t\t\t       (*px != ':'))\n\t\t\t\t++px;\n\t\t\tif (pc == px)\n\t\t\t\treturn PBSE_BADATVAL;\n\t\t\t/* now need to see if the value is a valid resource/type */\n\t\t\th = *px;\n\t\t\t*px = '\\0';\n\t\t\tpres = find_resc_def(svr_resc_def, pc);\n\t\t\tif (pres == NULL)\n\t\t\t\treturn PBSE_UNKRESC;\n\t\t\tif ((pres->rs_type != ATR_TYPE_STR) &&\n\t\t\t    (pres->rs_type != ATR_TYPE_ARST))\n\t\t\t\treturn PBSE_RESCNOTSTR;\n\t\t\t*px = h;\n\n\t\t\tif (*px == '\\0')\n\t\t\t\tbreak;\n\t\t\telse if (*px != ':')\n\t\t\t\treturn PBSE_BADATVAL;\n\t\t}\n\t\tpc = px;\n\t\tif (*pc == '\\0')\n\t\t\tbreak;\n\t\telse if (*pc != ':')\n\t\t\treturn PBSE_BADATVAL;\n\t\tpc++;\n\t}\n\n#endif /* not PBS_MOM */\n\n\treturn (decode_str(patr, name, rescn, val));\n}\n\n/**\n * @brief\n * \t\tto_kbsize - decode a \"size\" string to a value in kilobytes\n *\n * @param[in]\tval\t-\t\"size\" string\n *\n * @return\tlong\n * @retval\tvalue in kilobytes\n */\n\nlong long\nto_kbsize(char *val)\n{\n\tint havebw = 0;\n\tlong long sv_num;\n\tint sv_shift = 0;\n\tchar *pc;\n\n\tsv_num = strtol(val, &pc, 0);\n\tif (pc == val) /* no numeric part */\n\t\treturn (0);\n\n\tswitch (*pc) {\n\t\tcase '\\0':\n\t\t\tbreak;\n\t\tcase 'k':\n\t\tcase 'K':\n\t\t\tsv_shift = 10;\n\t\t\tbreak;\n\t\tcase 'm':\n\t\tcase 'M':\n\t\t\tsv_shift = 
20;\n\t\t\tbreak;\n\t\tcase 'g':\n\t\tcase 'G':\n\t\t\tsv_shift = 30;\n\t\t\tbreak;\n\t\tcase 't':\n\t\tcase 'T':\n\t\t\tsv_shift = 40;\n\t\t\tbreak;\n\t\tcase 'p':\n\t\tcase 'P':\n\t\t\tsv_shift = 50;\n\t\t\tbreak;\n\t\tcase 'b':\n\t\tcase 'B':\n\t\t\thavebw = 1;\n\t\t\tbreak;\n\t\tcase 'w':\n\t\tcase 'W':\n\t\t\thavebw = 1;\n\t\t\tsv_num *= SIZEOF_WORD;\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\treturn (0); /* invalid string */\n\t}\n\tif (*pc != '\\0')\n\t\tpc++;\n\tif (*pc != '\\0') {\n\t\tif (havebw)\n\t\t\treturn (0); /* invalid string */\n\t\tswitch (*pc) {\n\t\t\tcase 'b':\n\t\t\tcase 'B':\n\t\t\t\tbreak;\n\t\t\tcase 'w':\n\t\t\tcase 'W':\n\t\t\t\tsv_num *= sizeof(int);\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\treturn (0);\n\t\t}\n\t}\n\n\tif (sv_shift == 0) {\n\t\tsv_num = (sv_num + 1023) >> 10;\n\t} else {\n\t\tsv_num = sv_num << (sv_shift - 10);\n\t}\n\treturn (sv_num);\n}\n\n/**\n * @brief\n * \t\tpreempt_targets_action - A function which is used to validate the <attribute>\n *                          out of \"<attribute>=<value>\" pair assigned to preempt_targets\n *\n * @param[in]\tpresc    -       pointer to resource\n * @param[in]   pattr    -       pointer to attribute\n * @param[in]   pobj     -       pointer to job or reservation\n * @param[in]   type     -       if job or reservation\n * @param[in]   actmode  -       mode of action routine\n *\n * @return\tint\n * @retval\tPBSE_NONE\t: success\n * @retval\tPBSE_BADATVAL\t: if a non existent attribute is given\n */\n\nint\npreempt_targets_action(resource *presc, attribute *pattr, void *pobject, int type, int actmode)\n{\n\tchar *name;\n\tchar *p;\n\tint i;\n\tchar *res_name = NULL;\n\tresource_def *resdef = NULL;\n\tchar ch;\n\n\tif ((actmode == ATR_ACTION_FREE) || (actmode == ATR_ACTION_RECOV))\n\t\treturn PBSE_NONE;\n\n\tif (!is_attr_set(pattr))\n\t\treturn PBSE_NONE;\n\n\tif (presc->rs_value.at_val.at_arst == NULL)\n\t\treturn PBSE_BADATVAL;\n\n\tfor (i = 0; i < 
presc->rs_value.at_val.at_arst->as_usedptr; ++i) {\n\t\tname = presc->rs_value.at_val.at_arst->as_string[i];\n\n\t\tif (!strncasecmp(name, TARGET_NONE, strlen(TARGET_NONE))) {\n\t\t\tif (presc->rs_value.at_val.at_arst->as_usedptr > 1)\n\t\t\t\treturn PBSE_BADATVAL;\n\t\t\treturn PBSE_NONE;\n\t\t}\n\t\tp = strpbrk(name, \".=\");\n\t\tif (p) {\n\t\t\tch = *p;\n\t\t\t*p = '\\0';\n\t\t\tif (!(strcasecmp(name, ATTR_l))) {\n\t\t\t\t*p = ch;\n\t\t\t\tres_name = p + 1;\n\t\t\t\tp = strpbrk(res_name, \"=\");\n\t\t\t\tif (p) {\n\t\t\t\t\tch = *p;\n\t\t\t\t\t*p = '\\0';\n\t\t\t\t\tresdef = find_resc_def(svr_resc_def, res_name);\n\t\t\t\t\t*p = ch;\n\t\t\t\t\tif (resdef == NULL)\n\t\t\t\t\t\treturn PBSE_UNKRESC;\n\t\t\t\t\telse\n\t\t\t\t\t\tcontinue;\n\t\t\t\t} else\n\t\t\t\t\treturn PBSE_BADATVAL;\n\t\t\t} else if (!(strcasecmp(name, ATTR_queue))) {\n\t\t\t\t*p = ch;\n#ifndef PBS_MOM\n\t\t\t\tif (ch != '=')\n\t\t\t\t\treturn PBSE_BADATVAL;\n\t\t\t\tp++;\n\t\t\t\tif (find_queuebyname(p) != NULL) {\n\t\t\t\t\tcontinue;\n\t\t\t\t} else {\n\t\t\t\t\treturn PBSE_UNKQUE;\n\t\t\t\t}\n#endif\n\t\t\t} else {\n\t\t\t\t*p = ch;\n\t\t\t\treturn PBSE_BADATVAL;\n\t\t\t}\n\t\t} else {\n\t\t\treturn PBSE_BADATVAL;\n\t\t}\n\t}\n\treturn PBSE_NONE;\n}\n\n#ifndef PBS_MOM\n/**\n * @brief\n * \t\thost_action - action routine for job's resource_list host resource\n *\t\tvalidate the legality of the host name syntax\n *\n * @param[in]\tpresc    -       pointer to resource\n * @param[in]   pattr    -       not used here\n * @param[in]   pobj     -       not used here\n * @param[in]   type     -       not used here\n * @param[in]   actmode  -       mode of action routine\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\tPBSE_BADATVAL\t: host name is not alpha numerical\n * @retval\tPBSE_SYSTEM\t: strdup failed, probably due to malloc failure\n */\nint\nhost_action(resource *presc, attribute *pattr, void *pobj, int type, int actmode)\n{\n\tchar *name;\n\textern char *resc_in_err;\n\n\tif 
((actmode != ATR_ACTION_ALTER) && (actmode != ATR_ACTION_NEW))\n\t\treturn 0;\n\n\tname = presc->rs_value.at_val.at_str;\n\tif (name) {\n\t\tfor (; *name; ++name) {\n\t\t\tif (isalnum((int) *name) ||\n\t\t\t    *name == '-' ||\n\t\t\t    *name == '_' ||\n\t\t\t    *name == '.') {\n\t\t\t\tcontinue;\n\t\t\t} else {\n\t\t\t\tif ((resc_in_err = strdup(presc->rs_value.at_val.at_str)) == NULL)\n\t\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t\treturn PBSE_BADATVAL;\n\t\t\t}\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tCheck select string for key\n * @par\n *\t\tWe can't just use strstr because it could match something with\n *\t\tkey as the last part of a longer string.  For example, looking\n *\t\tfor \"eoe=\" in \"1:cooleoe=3\" would match with strstr but would\n *\t\tbe a false positive.\n * @param[in]\tstr\t\tstring from select to search\n * @param[in]\tkey\t\tstring to search for\n *\n * @return\tchar*\n * @retval\tNULL\tkey not found\n * @retval\t!NULL\tlocation of key\n */\nchar *\nselect_search(char *str, char *key)\n{\n\tchar *loc = strstr(str, key);\n\tchar *prev;\n\n\tif (loc == NULL) /* not found at all */\n\t\treturn NULL;\n\tif (str == loc) /* key is initial string */\n\t\treturn loc;\n\tprev = loc - 1;\t\t\t  /* look at char before key */\n\tif (*prev == ':' || *prev == '+') /* key is really there */\n\t\treturn loc;\n\treturn NULL; /* some other string like \"1:xeoe=42\" */\n}\n\n/**\n * @brief\n *      Wrapper routine for resc_select_action\n *\n * @par Functionality:\n *      It applies rules to validate eoe in the chunks. 
Rules are:\n *      (a) either all chunks request eoe or none request eoe\n *      (b) all chunks request same eoe\n *\n * @param[in]   presc    -       pointer to resource\n * @param[in]   pattr    -       pointer to attribute\n * @param[in]   pobj     -       pointer to job or reservation\n * @param[in]   type     -       if job or reservation\n *\n * @return\t\tint\n * @retval\t\tPBSE_NONE : success if no provisioning needed\n * @retval\t\tPBSE_IVAL_AOECHUNK : if rules not met, the message for\n *\t\t\t\t\t\tthis EOE define is what is needed here as well\n * @retval\t\tPBSE_SYSTEM : if error\n *\n * @par Side Effects:\n *\tNone\n *\n * @par MT-safe: Yes\n *\n */\nstatic int\napply_eoe_inchunk_rules(resource *presc, attribute *pattr, void *pobj,\n\t\t\tint type)\n{\n\tint c = 1, i; /* # of chunks, len of aoename */\n\tint ret = PBSE_NONE;\n\tstatic char key[] = \"eoe=\";\n\tchar *name;\n\tchar *peoe = NULL;    /* stores addr of eoe */\n\tchar *eoename = NULL; /* 1st eoe found in select */\n\tchar *tmpptr;\t      /* store temp addr */\n\n\tif ((name = presc->rs_value.at_val.at_str) == NULL)\n\t\treturn PBSE_NONE;\n\n\t/* eoe is requested? */\n\tif ((peoe = select_search(name, key)) == NULL)\n\t\treturn PBSE_NONE;\n\n\t/* count # of chunks; ignore chunk multiplier */\n\tfor (tmpptr = name; *tmpptr; tmpptr++)\n\t\tif (*tmpptr == '+')\n\t\t\tc++;\n\n\t/* find key ; reduce c each time pattern is found. */\n\tfor (; peoe; c--, (peoe = select_search(peoe, key))) {\n\t\t/* point to eoe name. */\n\t\tpeoe += strlen(key);\n\t\ttmpptr = peoe;\n\t\t/* get length of eoe name in i */\n\t\tfor (i = 0; *tmpptr && *tmpptr != ':' && *tmpptr != '+';\n\t\t     i++, tmpptr++)\n\t\t\t;\n\t\t/* if first appearance of eoe, store it. 
*/\n\t\tif (eoename == NULL) {\n\t\t\teoename = malloc(i + 1);\n\t\t\tif (eoename == NULL)\n\t\t\t\treturn PBSE_SYSTEM;\n\t\t\tstrncpy(eoename, peoe, i);\n\t\t\teoename[i] = '\\0';\n\t\t}\n\t\t/* compare previously stored eoe and this eoe  */\n\t\tif (strncmp(peoe, eoename, i)) {\n\t\t\tret = PBSE_IVAL_AOECHUNK; /* rule (b)*/\n\t\t\tbreak;\n\t\t}\n\t}\n\t/* there were chunks without eoe */\n\tif (c)\n\t\tret = PBSE_IVAL_AOECHUNK; /* rule (a) */\n\n\tif (eoename)\n\t\tfree(eoename);\n\treturn ret;\n}\n/**\n * @brief\n *      Action routine for resource 'select'\n *\n * @param[in]   presc    -       pointer to resource\n * @param[in]   pattr    -       pointer to attribute\n * @param[in]   pobj     -       pointer to job or reservation\n * @param[in]   type     -       if job or reservation\n * @param[in]   actmode  -       mode of action routine\n *\n * @return\tint\n * @retval\tPBSE_NONE\t: success if no provisioning needed\n * @retval\tPBSE_IVAL_AOECHUNK\t: if rules not met\n * @retval\tPBSE_SYSTEM\t: if error\n *\n * @par Side Effects:\n *\t\tNone\n *\n * @par\tMT-safe: Yes\n *\n */\n\nint\nresc_select_action(resource *presc, attribute *pattr, void *pobj,\n\t\t   int type, int actmode)\n{\n\tint rc = 0;\n\tif ((actmode != ATR_ACTION_NEW) && (actmode != ATR_ACTION_ALTER))\n\t\treturn PBSE_NONE;\n\trc = apply_select_inchunk_rules(presc, pattr, pobj, type, actmode);\n\tif (rc != PBSE_NONE)\n\t\treturn rc;\n\trc = apply_eoe_inchunk_rules(presc, pattr, pobj, type);\n\tif (rc != PBSE_NONE)\n\t\treturn rc;\n\n\t/* not performing check if job is being created since reservation\n\t * related data is not available yet.\n\t */\n\tif (type == PARENT_TYPE_JOB && actmode == ATR_ACTION_NEW)\n\t\treturn PBSE_NONE;\n\n\treturn apply_aoe_inchunk_rules(presc, pattr, pobj, type);\n}\n\n/**\n * @brief\n *      Wrapper routine for resc_select_action\n *\n * @par Functionality:\n *      It applies rules to validate aoe in the chunks. 
Rules are:\n *      (a) either all chunks request aoe or none request aoe\n *      (b) all chunks request same aoe\n *      (c) job with aoe cannot be in reservation without aoe\n *      (d) job without aoe cannot be in reservation with aoe\n *      (e) reservation and job in it, have same aoe\n *\n * @param[in]   presc    -       pointer to resource\n * @param[in]   pattr    -       pointer to attribute\n * @param[in]   pobj     -       pointer to job or reservation\n * @param[in]   type     -       if job or reservation\n *\n * @return\tint\n * @retval\tPBSE_NONE\t: success if no provisioning needed\n * @retval\tPBSE_IVAL_AOECHUNK\t: if rules not met\n * @retval\tPBSE_SYSTEM\t: if error\n *\n * @par Side Effects:\n *\t\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\napply_aoe_inchunk_rules(resource *presc, attribute *pattr, void *pobj,\n\t\t\tint type)\n{\n\tjob *jb = NULL;\n\tint c = 1, i; /* # of chunks, len of aoename */\n\tchar *name;\n\tchar *paoe = NULL;    /* stores addr of aoe */\n\tchar *aoename = NULL; /* 1st aoe found in select */\n\tchar *tmpptr;\t      /* store temp addr */\n\tchar *aoe_req = NULL; /* Null if job outside reservation,\n\t\t\t\t\t * Null if reservation has no aoe,\n\t\t\t\t\t * not NULL if reservation has aoe */\n\n\tif (type == PARENT_TYPE_JOB) {\n\t\tjb = (job *) pobj;\n\t\tif (jb->ji_myResv) /* Get aoe requested by reservation */\n\t\t\taoe_req = (char *) find_aoe_from_request(jb->ji_myResv);\n\t}\n\n\tname = presc->rs_value.at_val.at_str;\n\tif (name) {\n\t\t/* easy n quick check first: aoe is requested? 
*/\n\t\tif ((paoe = strstr(name, \"aoe=\")) == NULL) {\n\t\t\tif (aoe_req) {\n\t\t\t\tfree(aoe_req);\n\t\t\t\treturn PBSE_IVAL_AOECHUNK; /* rule (d) */\n\t\t\t}\n\t\t} else {\n\t\t\t/* aoe is requested, slow down for checks */\n\n\t\t\ttmpptr = name;\n\t\t\t/* count # of chunks; ignore chunk multiplier */\n\t\t\tfor (; *tmpptr; tmpptr++)\n\t\t\t\tif (*tmpptr == '+')\n\t\t\t\t\tc++;\n\n\t\t\t/* find pattern aoe= ; reduce c each time\n\t\t\t * pattern is found. paoe was set in 'if' block\n\t\t\t */\n\t\t\tfor (; paoe; c--, (paoe = strstr(paoe, \"aoe=\"))) {\n\t\t\t\t/* point to aoe name. */\n\t\t\t\tpaoe += 4;\n\t\t\t\ttmpptr = paoe;\n\t\t\t\t/* get length of aoe name in i. */\n\t\t\t\tfor (i = 0; *tmpptr && *tmpptr != ':' &&\n\t\t\t\t\t    *tmpptr != '+';\n\t\t\t\t     i++, tmpptr++)\n\t\t\t\t\t;\n\t\t\t\t/* if first appearance of aoe, store it. */\n\t\t\t\tif (aoename == NULL) {\n\t\t\t\t\taoename = malloc(i + 1);\n\t\t\t\t\tif (aoename == NULL) {\n\t\t\t\t\t\tif (aoe_req)\n\t\t\t\t\t\t\tfree(aoe_req);\n\t\t\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t\t\t}\n\t\t\t\t\tstrncpy(aoename, paoe, i);\n\t\t\t\t\taoename[i] = '\\0';\n\t\t\t\t}\n\t\t\t\t/* compare previously stored aoe and\n\t\t\t\t * this aoe.\n\t\t\t\t */\n\t\t\t\tif (strncmp(paoe, aoename, i)) {\n\t\t\t\t\tif (aoe_req)\n\t\t\t\t\t\tfree(aoe_req);\n\t\t\t\t\tif (aoename)\n\t\t\t\t\t\tfree(aoename);\n\t\t\t\t\treturn PBSE_IVAL_AOECHUNK; /* rule (b)*/\n\t\t\t\t}\n\t\t\t\t/* if job is in reservation, compare\n\t\t\t\t * with reservation's aoe.\n\t\t\t\t */\n\t\t\t\tif (type == PARENT_TYPE_JOB && jb->ji_myResv) {\n\t\t\t\t\tif (aoe_req == NULL ||\n\t\t\t\t\t    strncmp(aoe_req, aoename, i)) {\n\t\t\t\t\t\tif (aoe_req)\n\t\t\t\t\t\t\tfree(aoe_req);\n\t\t\t\t\t\tif (aoename)\n\t\t\t\t\t\t\tfree(aoename);\n\t\t\t\t\t\t/* rule (c/e) */\n\t\t\t\t\t\treturn PBSE_IVAL_AOECHUNK;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tif (aoe_req)\n\t\tfree(aoe_req);\n\tif (aoename)\n\t\tfree(aoename);\n\treturn PBSE_NONE;\n}\n#endif 
/* not PBS_MOM */\n/**\n * @brief\n *      action routine for built-in resources to check if its value is zero\n *      or positive whose datatype is long.\n *\n * @param[in]   presc\t-\tpointer to resource\n * @param[in]   pattr\t-\tpointer to attribute\n * @param[in]   pobj\t-\tpointer to job or reservation\n * @param[in]   type\t-\tif job or reservation\n * @param[in]   actmode\t-\tmode of action routine\n *\n * @return\tint\n * @retval\tPBSE_NONE\t: success(if value is greater than or equal to zero)\n * @retval\tPBSE_BADATVAL\t: if value is less than zero\n *\n * @par Side Effects:\n *\t\tNone\n *\n * @par MT-safe: Yes\n *\n */\nint\nzero_or_positive_action(resource *presc, attribute *pattr, void *pobj, int type, int actmode)\n{\n\tlong l;\n\tif ((actmode != ATR_ACTION_ALTER) && (actmode != ATR_ACTION_NEW))\n\t\treturn 0;\n\tl = presc->rs_value.at_val.at_long;\n\tif (l < 0)\n\t\treturn PBSE_BADATVAL;\n\treturn PBSE_NONE;\n}\n/**\n * @brief\n *      Wrapper routine for resc_select_action\n *      It applies rules to validate all individual resources in all the chunks.\n *\n * @par Functionality:\n *      1. Parses select specification by calling parse_chunk function.\n *      2. Decodes each chunk\n *      3. 
Calls resource action function for each resource in a chunk if\n *\t   the resource is of type long.\n *\n * @param[in]   presc\t-\tpointer to resource\n * @param[in]   pattr\t-\tpointer to attribute\n * @param[in]   pobj\t-\tpointer to job\n * @param[in]   type\t-\tif job or reservation\n * @param[in]   actmode\t-\tmode of action routine\n *\n * @return\tint\n * @retval\tPBSE_NONE\t: success\n * @retval\t> 0\t: if error\n *\n * @par Side Effects:\n *\t\tNone\n *\n */\nint\napply_select_inchunk_rules(resource *presc, attribute *pattr, void *pobj, int type, int actmode)\n{\n\tchar *chunk;\n\tint nchk;\n\tint nelem;\n\tstruct key_value_pair *pkvp;\n\tint rc = 0;\n\tint j;\n\tstruct resource tmp_resc;\n\tchar *select_str = NULL;\n\n\tselect_str = presc->rs_value.at_val.at_str;\n\tif ((select_str == NULL) || (select_str[0] == '\\0'))\n\t\treturn PBSE_BADATVAL;\n\tchunk = parse_plus_spec(select_str, &rc); /* break '+' seperated substrings */\n\tif (rc != 0)\n\t\treturn (rc);\n\twhile (chunk) {\n\t\tif (parse_chunk(chunk, &nchk, &nelem, &pkvp, NULL) == 0) {\n\t\t\tfor (j = 0; j < nelem; ++j) {\n\t\t\t\ttmp_resc.rs_defin = find_resc_def(svr_resc_def, pkvp[j].kv_keyw);\n\t\t\t\tif ((tmp_resc.rs_defin != NULL) && (tmp_resc.rs_defin->rs_type == ATR_TYPE_LONG)) {\n\t\t\t\t\ttmp_resc.rs_value.at_val.at_long = atol(pkvp[j].kv_val);\n\t\t\t\t\tif (tmp_resc.rs_defin->rs_action) {\n\t\t\t\t\t\tif ((rc = tmp_resc.rs_defin->rs_action(&tmp_resc, pattr, pobj,\n\t\t\t\t\t\t\t\t\t\t       type, actmode)) != 0)\n\t\t\t\t\t\t\treturn (rc);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\treturn PBSE_BADATVAL;\n\t\t}\n\t\tchunk = parse_plus_spec(NULL, &rc);\n\t\tif (rc != 0)\n\t\t\treturn (rc);\n\t} /* while */\n\treturn PBSE_NONE;\n}\n/**\n * @brief action_soft_walltime - action function for the soft_walltime resource.\n *\n * \treturns int\n * \t@retval PBSE_BADATVAL - soft_walltime > walltime\n * \t@retval PBSE_SOFTWT_STF - min_walltime is set\n * \t@retval PBSE_NONE - everything 
is fine\n */\nint\naction_soft_walltime(resource *presc, attribute *pattr, void *pobject, int type, int actmode)\n{\n\tjob *pjob;\n\n\tif ((actmode != ATR_ACTION_ALTER) && (actmode != ATR_ACTION_NEW))\n\t\treturn PBSE_NONE;\n\n\tif (pobject != NULL) {\n\t\tstatic resource_def *walltime_def = NULL;\n\t\tstatic resource_def *min_walltime_def = NULL;\n\t\tresource *entry;\n\n\t\tif (type != PARENT_TYPE_JOB)\n\t\t\treturn PBSE_NONE;\n\n\t\tpjob = (job *) pobject;\n\n\t\t/* Make sure soft_walltime < walltime */\n\t\tif (walltime_def == NULL)\n\t\t\twalltime_def = &svr_resc_def[RESC_WALLTIME];\n\t\tentry = find_resc_entry(get_jattr(pjob, JOB_ATR_resource), walltime_def);\n\t\tif (entry != NULL) {\n\t\t\tif (is_attr_set(&entry->rs_value)) {\n\t\t\t\tif (walltime_def->rs_comp(&(entry->rs_value), &(presc->rs_value)) < 0)\n\t\t\t\t\treturn PBSE_BADATVAL;\n\t\t\t}\n\t\t}\n\n\t\t/* soft_walltime and STF jobs are incompatible */\n\t\tif (min_walltime_def == NULL)\n\t\t\tmin_walltime_def = &svr_resc_def[RESC_MIN_WALLTIME];\n\t\tentry = find_resc_entry(get_jattr(pjob, JOB_ATR_resource), min_walltime_def);\n\t\tif (entry != NULL) {\n\t\t\tif (is_attr_set(&entry->rs_value))\n\t\t\t\treturn PBSE_SOFTWT_STF;\n\t\t}\n\t}\n\treturn PBSE_NONE;\n}\n/**\n * @brief action_walltime - action function for the walltime resource.\n *\n * \treturns int\n * \t@retval PBSE_BADATVAL - walltime < soft_walltime\n * \t@retval PBSE_NONE - everything is fine\n */\n\nint\naction_walltime(resource *presc, attribute *pattr, void *pobject, int type, int actmode)\n{\n\tjob *pjob;\n\tresource *entry;\n\n\tif ((actmode != ATR_ACTION_ALTER) && (actmode != ATR_ACTION_NEW))\n\t\treturn PBSE_NONE;\n\n\tif (pobject != NULL) {\n\t\tstatic resource_def *soft_walltime_def = NULL;\n\n\t\tif (type != PARENT_TYPE_JOB)\n\t\t\treturn PBSE_NONE;\n\n\t\tpjob = (job *) pobject;\n\n\t\t/* Make sure walltime > soft_walltime */\n\t\tif (soft_walltime_def == NULL)\n\t\t\tsoft_walltime_def = 
&svr_resc_def[RESC_SOFT_WALLTIME];\n\t\tentry = find_resc_entry(get_jattr(pjob, JOB_ATR_resource), soft_walltime_def);\n\t\tif (entry != NULL) {\n\t\t\tif (is_attr_set(&entry->rs_value)) {\n\t\t\t\tif (soft_walltime_def->rs_comp(&(entry->rs_value), &(presc->rs_value)) > 0)\n\t\t\t\t\treturn PBSE_BADATVAL;\n\t\t\t}\n\t\t}\n\t}\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief action_min_walltime - action function for min_walltime.\n * @return int\n * @retval PBSE_NOSTF_JOBARRAY - if min_walltime is on a job array\n * @retval PBSE_SOFTWT_STF - if min_walltime is set with soft_walltime\n * @retval PBSE_MIN_GT_MAXWT - if min_walltime > max_walltime\n * @retval PBSE_NONE - all is fine\n */\nint\naction_min_walltime(resource *presc, attribute *pattr, void *pobject, int type, int actmode)\n{\n\tjob *pjob;\n\n\tif ((actmode != ATR_ACTION_ALTER) && (actmode != ATR_ACTION_NEW))\n\t\treturn PBSE_NONE;\n\n\tif (pobject != NULL) {\n\t\tstatic resource_def *soft_walltime_def = NULL;\n\t\tstatic resource_def *max_walltime_def = NULL;\n\t\tresource *entry;\n\n\t\tif (type != PARENT_TYPE_JOB)\n\t\t\treturn PBSE_NONE;\n\n\t\tpjob = (job *) pobject;\n\n#ifndef PBS_MOM /* MOM doesn't call the action functions and doesn't have access to is_job_array() */\n\t\t/* Job arrays can't be STF jobs */\n\t\tif (is_job_array(pjob->ji_qs.ji_jobid) != IS_ARRAY_NO)\n\t\t\treturn PBSE_NOSTF_JOBARRAY;\n#endif\n\t\t/* STF jobs can't request soft_walltime */\n\t\tif (soft_walltime_def == NULL)\n\t\t\tsoft_walltime_def = &svr_resc_def[RESC_SOFT_WALLTIME];\n\t\tif (soft_walltime_def != NULL) {\n\t\t\tentry = find_resc_entry(get_jattr(pjob, JOB_ATR_resource), soft_walltime_def);\n\t\t\tif (entry != NULL) {\n\t\t\t\tif (is_attr_set(&entry->rs_value))\n\t\t\t\t\treturn PBSE_SOFTWT_STF;\n\t\t\t}\n\t\t}\n\n\t\t/* max_walltime needs to be greater than min_walltime */\n\t\tif (max_walltime_def == NULL)\n\t\t\tmax_walltime_def = &svr_resc_def[RESC_MAX_WALLTIME];\n\t\tif (max_walltime_def != NULL) {\n\t\t\tentry = 
find_resc_entry(get_jattr(pjob, JOB_ATR_resource), max_walltime_def);\n\t\t\tif (entry != NULL && (is_attr_set(&entry->rs_value)))\n\t\t\t\tif (max_walltime_def->rs_comp(&(entry->rs_value), &(presc->rs_value)) < 0)\n\t\t\t\t\treturn PBSE_MIN_GT_MAXWT;\n\t\t}\n\t}\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief action_max_walltime - action function for max_walltime.\n * @return int\n * @retval PBSE_SOFTWT_STF - if max_walltime is set with soft_walltime\n * @retval PBSE_MIN_GT_MAXWT - if max_walltime < min_walltime\n * @retval PBSE_MAX_NO_MINWT - max_walltime with no min_walltime\n * @retval PBSE_NONE - all is fine\n */\n\nint\naction_max_walltime(resource *presc, attribute *pattr, void *pobj, int type, int actmode)\n{\n\tjob *pjob;\n\n\tif ((actmode != ATR_ACTION_ALTER) && (actmode != ATR_ACTION_NEW))\n\t\treturn PBSE_NONE;\n\n\tif (pobj != NULL) {\n\t\tstatic resource_def *soft_walltime_def = NULL;\n\t\tstatic resource_def *min_walltime_def = NULL;\n\t\tresource *entry;\n\n\t\tif (type != PARENT_TYPE_JOB)\n\t\t\treturn PBSE_NONE;\n\n\t\tpjob = (job *) pobj;\n\n\t\t/* STF jobs can't request soft_walltime */\n\t\tif (soft_walltime_def == NULL)\n\t\t\tsoft_walltime_def = &svr_resc_def[RESC_SOFT_WALLTIME];\n\t\tif (soft_walltime_def != NULL) {\n\t\t\tentry = find_resc_entry(get_jattr(pjob, JOB_ATR_resource), soft_walltime_def);\n\t\t\tif (entry != NULL) {\n\t\t\t\tif (is_attr_set(&entry->rs_value))\n\t\t\t\t\treturn PBSE_SOFTWT_STF;\n\t\t\t}\n\t\t}\n\n\t\t/* max_walltime needs to be greater than min_walltime */\n\t\tif (min_walltime_def == NULL)\n\t\t\tmin_walltime_def = &svr_resc_def[RESC_MIN_WALLTIME];\n\t\tif (min_walltime_def != NULL) {\n\t\t\tentry = find_resc_entry(get_jattr(pjob, JOB_ATR_resource), min_walltime_def);\n\t\t\tif (entry != NULL) {\n\t\t\t\tif (is_attr_set(&entry->rs_value)) {\n\t\t\t\t\tif (min_walltime_def->rs_comp(&(entry->rs_value), &(presc->rs_value)) > 0)\n\t\t\t\t\t\treturn PBSE_MIN_GT_MAXWT;\n\t\t\t\t}\n\t\t\t} else\n\t\t\t\treturn 
PBSE_MAX_NO_MINWT;\n\t\t}\n\t}\n\treturn PBSE_NONE;\n}\n"
  },
  {
    "path": "src/server/run_sched.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h>\n#include \"libpbs.h\"\n#include \"server.h\"\n#include \"svrfunc.h\"\n\n/* Global Data */\n\nextern struct server server;\nextern char server_name[];\nextern char *msg_sched_called;\n\nint scheduler_jobs_stat = 0; /* set to 1 once the scheduler has queried jobs in a cycle */\nextern int svr_unsent_qrun_req;\n\n/**\n * @brief\n * \t\tam_jobs - array of pointers to jobs which were moved or which had certain\n * \t\tattributes altered (qalter) while a schedule cycle was in progress.\n *\t\tIf a job in the array is run by the scheduler in the cycle, that run\n *\t\trequest is rejected as the move/modification may impact the job's\n *\t\trequirements and placement.\n */\nstatic struct am_jobs {\n\tint am_used;\t/* number of jobs in the array  */\n\tint am_max;\t/* number of slots in the array */\n\tjob **am_array; /* pointer to the malloc-ed array */\n} am_jobs = {0, 0, NULL};\n\n/**\n * @brief\n *\tsend sched command 'cmd' to given sched\n *\tif cmd == SCH_SCHEDULE_AJOB send jobid also\n *\n * @param[in]\tsched\t-\tpointer to sched obj\n * @param[in]\tcmd\t-\tthe command to send\n * @param[in]\tjobid\t-\tthe jobid to send if 'cmd' is SCH_SCHEDULE_AJOB\n *\n * @return\tint\n * @retval\t1\tfor success\n * @retval\t0\tfor failure\n */\nint\nsend_sched_cmd(pbs_sched *sched, int cmd, char *jobid)\n{\n\tint ret = 
-1;\n\n\tDIS_tcp_funcs();\n\n\tif (sched->sc_secondary_conn < 0)\n\t\tgoto err;\n\n\tif ((ret = diswsi(sched->sc_secondary_conn, cmd)) != DIS_SUCCESS)\n\t\tgoto err;\n\n\tif (cmd == SCH_SCHEDULE_AJOB) {\n\t\tif ((ret = diswst(sched->sc_secondary_conn, jobid)) != DIS_SUCCESS)\n\t\t\tgoto err;\n\t}\n\n\tif (dis_flush(sched->sc_secondary_conn) != 0)\n\t\tgoto err;\n\n\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_SERVER, LOG_INFO, server_name, msg_sched_called, cmd);\n\n\tsched->sc_cycle_started = 1;\n\n\treturn 1;\n\nerr:\n\tlog_eventf(PBSEVENT_SCHED, PBS_EVENTCLASS_SERVER, LOG_INFO, server_name, \"write to scheduler failed, err=%d\", ret);\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tfind_assoc_sched_jid - find the corresponding scheduler which is responsible\n * \t\tfor handling this job.\n *\n * @param[in]\tjid - job id\n * @param[out]\ttarget_sched - pointer to the corresponding scheduler to which the job belongs to\n *\n * @retval - 1  if success\n * \t   - 0 if fail\n */\nint\nfind_assoc_sched_jid(char *jid, pbs_sched **target_sched)\n{\n\tjob *pj;\n\tint t;\n\n\t*target_sched = NULL;\n\n\tt = is_job_array(jid);\n\tif ((t == IS_ARRAY_NO) || (t == IS_ARRAY_ArrayJob))\n\t\tpj = find_job(jid); /* regular or ArrayJob itself */\n\telse\n\t\tpj = find_arrayparent(jid); /* subjob(s) */\n\n\tif (pj == NULL)\n\t\treturn 0;\n\n\treturn find_assoc_sched_pque(pj->ji_qhdr, target_sched);\n}\n\n/**\n * @brief\n * \t\tfind_assoc_sched_pque - find the corresponding scheduler which is responsible\n * \t\tfor handling this job.\n *\n * @param[in]\tpq\t\t- pointer to pbs_queue\n * @param[out]  target_sched\t- pointer to the corresponding scheduler to which the job belongs to\n *\n  * @retval - 1 if success\n * \t    - 0 if fail\n */\nint\nfind_assoc_sched_pque(pbs_queue *pq, pbs_sched **target_sched)\n{\n\tpbs_sched *psched;\n\n\t*target_sched = NULL;\n\tif (pq == NULL)\n\t\treturn 0;\n\n\tif (is_qattr_set(pq, QA_ATR_partition)) {\n\t\tchar *partition = get_qattr_str(pq, 
QA_ATR_partition);\n\n\t\tif (strcmp(partition, DEFAULT_PARTITION) == 0) {\n\t\t\t*target_sched = dflt_scheduler;\n\t\t\treturn 1;\n\t\t}\n\t\tfor (psched = (pbs_sched *) GET_NEXT(svr_allscheds); psched; psched = (pbs_sched *) GET_NEXT(psched->sc_link)) {\n\t\t\tif (is_sched_attr_set(psched, SCHED_ATR_partition)) {\n\t\t\t\tif (!strcmp(get_sched_attr_str(psched, SCHED_ATR_partition), partition)) {\n\t\t\t\t\t*target_sched = psched;\n\t\t\t\t\treturn 1;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t} else {\n\t\t*target_sched = dflt_scheduler;\n\t\treturn 1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tfind_sched_from_sock - find the corresponding scheduler which is having\n * \t\tthe given socket.\n *\n * @param[in]\tsock\t- socket descriptor\n * @param[in]\twhich\t- which connection to check, primary or secondary\n * \t\t\t  can be one of CONN_SCHED_PRIMARY or CONN_SCHED_SECONDARY\n *\n * @retval - pointer to the corresponding pbs_sched object if success\n * \t\t -  NULL if fail\n */\npbs_sched *\nfind_sched_from_sock(int sock, conn_origin_t which)\n{\n\tpbs_sched *psched;\n\n\tif (sock < 0 || (which != CONN_SCHED_PRIMARY && which != CONN_SCHED_SECONDARY && which != CONN_SCHED_ANY))\n\t\treturn NULL;\n\n\tfor (psched = (pbs_sched *) GET_NEXT(svr_allscheds); psched; psched = (pbs_sched *) GET_NEXT(psched->sc_link)) {\n\t\tif ((which == CONN_SCHED_PRIMARY || which == CONN_SCHED_ANY) && psched->sc_primary_conn == sock)\n\t\t\treturn psched;\n\t\tif ((which == CONN_SCHED_SECONDARY || which == CONN_SCHED_ANY) && psched->sc_secondary_conn == sock)\n\t\t\treturn psched;\n\t}\n\treturn NULL;\n}\n\n/**\n * @brief\n * Sets SCHED_ATR_sched_state and then sets flags on SVR_ATR_State if default scheduler.\n * We need to set MOD_MCACHE so the attribute can get re-encoded\n *\n * @param[in] psched - scheduler to set state on\n * @param[in] state - state of scheduler\n *\n */\nstatic void\nset_sched_state(pbs_sched *psched, char *state)\n{\n\tif (psched == 
NULL)\n\t\treturn;\n\n\tset_sched_attr_str_slim(psched, SCHED_ATR_sched_state, state, NULL);\n\tif (psched == dflt_scheduler)\n\t\t(get_sattr(SVR_ATR_State))->at_flags |= ATR_MOD_MCACHE;\n}\n\n/**\n * @brief\n * \tReceives end of cycle notification from the corresponding Scheduler\n *\n * @param[in] sock - socket to read\n *\n * @return int\n * @retval 0  - on success\n * @retval !0 - on error\n */\nint\nrecv_sched_cycle_end(int sock)\n{\n\tint rc = 0;\n\tpbs_sched *psched = find_sched_from_sock(sock, CONN_SCHED_SECONDARY);\n\tchar *state = SC_IDLE;\n\n\tif (!psched)\n\t\treturn 0;\n\n\tDIS_tcp_funcs();\n\t(void) disrsi(sock, &rc); /* read end cycle marker and ignore as we don't need its value */\n\tpsched->sc_cycle_started = 0;\n\n\tif (rc != 0)\n\t\tstate = SC_DOWN;\n\n\tset_sched_state(psched, state);\n\n\t/* clear list of jobs which were altered/modified during cycle */\n\tam_jobs.am_used = 0;\n\tscheduler_jobs_stat = 0;\n\thandle_deferred_cycle_close(psched);\n\n\tif (rc == DIS_EOF)\n\t\trc = -1;\n\n\treturn rc;\n}\n\n/**\n * @brief\n * \t\tschedule_high\t-\tsend high priority commands to the scheduler\n *\n * @return\tint\n * @retval  1\t: scheduler busy\n * @retval  0\t: scheduler notified\n * @retval\t-1\t: error\n */\nint\nschedule_high(pbs_sched *psched)\n{\n\tif (psched == NULL)\n\t\treturn -1;\n\tif (psched->sc_cycle_started == 0) {\n\t\tif (!send_sched_cmd(psched, psched->svr_do_sched_high, NULL)) {\n\t\t\tset_sched_state(psched, SC_DOWN);\n\t\t\treturn -1;\n\t\t}\n\t\tpsched->svr_do_sched_high = SCH_SCHEDULE_NULL;\n\t\tset_sched_state(psched, SC_SCHEDULING);\n\t\treturn 0;\n\t}\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tContact scheduler and direct it to run a scheduling cycle\n *\t\tIf a request is already outstanding, skip this one.\n *\n * @return\tint\n * @retval\t-1\t: error\n * @retval\t0\t: scheduler notified\n * @retval\t+1\t: scheduler busy\n *\n * @par Side Effects:\n *     the global variable (first_time) is changed.\n *\n * @par MT-safe: 
No\n */\n\nint\nschedule_jobs(pbs_sched *psched)\n{\n\tint cmd;\n\tint s;\n\tstatic int first_time = 1;\n\tstruct deferred_request *pdefr = NULL;\n\tpbs_list_head *deferred_req;\n\tchar *jid = NULL;\n\n\tif (psched == NULL)\n\t\treturn -1;\n\n\tif (first_time)\n\t\tcmd = SCH_SCHEDULE_FIRST;\n\telse\n\t\tcmd = psched->svr_do_schedule;\n\n\tif (psched->sc_cycle_started == 0) {\n\n\t\t/* are there any qrun requests from manager/operator */\n\t\t/* which haven't been sent,  they take priority      */\n\t\tdeferred_req = fetch_sched_deferred_request(psched, false);\n\t\tif (deferred_req) {\n\t\t\tpdefr = (struct deferred_request *) GET_NEXT(*deferred_req);\n\t\t} /* else pdefr is NULL */\n\t\twhile (pdefr) {\n\t\t\tif (pdefr->dr_sent == 0) {\n\t\t\t\ts = is_job_array(pdefr->dr_id);\n\t\t\t\tif (s == IS_ARRAY_NO) {\n\t\t\t\t\tif (find_job(pdefr->dr_id) != NULL) {\n\t\t\t\t\t\tjid = pdefr->dr_id;\n\t\t\t\t\t\tcmd = SCH_SCHEDULE_AJOB;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t} else if ((s == IS_ARRAY_Single) ||\n\t\t\t\t\t   (s == IS_ARRAY_Range)) {\n\t\t\t\t\tif (find_arrayparent(pdefr->dr_id) != NULL) {\n\t\t\t\t\t\tjid = pdefr->dr_id;\n\t\t\t\t\t\tcmd = SCH_SCHEDULE_AJOB;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tpdefr = (struct deferred_request *) GET_NEXT(pdefr->dr_link);\n\t\t}\n\n\t\tif (!send_sched_cmd(psched, cmd, jid)) {\n\t\t\tset_sched_state(psched, SC_DOWN);\n\t\t\treturn -1;\n\t\t} else if (pdefr != NULL)\n\t\t\tpdefr->dr_sent = 1; /* mark entry as sent to sched */\n\n\t\tpsched->svr_do_schedule = SCH_SCHEDULE_NULL;\n\t\tset_sched_state(psched, SC_SCHEDULING);\n\n\t\tfirst_time = 0;\n\n\t\t/* if there are more qrun requests queued up, reset cmd so */\n\t\t/* they are sent when the Scheduler completes this cycle   */\n\t\tif (deferred_req) {\n\t\t\tpdefr = GET_NEXT(*deferred_req);\n\t\t} /* else pdefr is NULL */\n\t\twhile (pdefr) {\n\t\t\tif (pdefr->dr_sent == 0) {\n\t\t\t\tpbs_sched *target_sched;\n\t\t\t\tif 
(find_assoc_sched_jid(pdefr->dr_preq->rq_ind.rq_queuejob.rq_jid, &target_sched))\n\t\t\t\t\ttarget_sched->svr_do_schedule = SCH_SCHEDULE_AJOB;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tpdefr = (struct deferred_request *) GET_NEXT(pdefr->dr_link);\n\t\t}\n\n\t\treturn (0);\n\t} else\n\t\treturn (1); /* scheduler was busy */\n}\n\n/**\n * @brief\n * \t\tscheduler_close - connection to scheduler has closed, clear scheduler_called\n * @par\n * \t\tConnection to scheduler has closed, mark scheduler sock as\n *\t\tclosed with -1 and if any clean up any outstanding deferred scheduler\n *\t\trequests (qrun).\n * @par\n * \t\tPerform some cleanup as connection to scheduler has closed\n *\n * @param[in]\tsock\t-\tcommunication endpoint.\n * \t\t\t\t\t\t\tclosed (scheduler connection) socket, not used but\n *\t\t\t\t\t\t\trequired to match general prototype of functions called when\n *\t\t\t\t\t\t\ta socket is closed.\n * @return\tvoid\n */\nvoid\nscheduler_close(int sock)\n{\n\tpbs_sched *psched;\n\tint other_conn = -1;\n\n\tpsched = find_sched_from_sock(sock, CONN_SCHED_ANY);\n\tif (psched == NULL)\n\t\treturn;\n\n\tif (sock == psched->sc_primary_conn)\n\t\tother_conn = psched->sc_secondary_conn;\n\telse if (sock == psched->sc_secondary_conn)\n\t\tother_conn = psched->sc_primary_conn;\n\telse\n\t\treturn;\n\tlog_eventf(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SCHED, LOG_CRIT, psched->sc_name, \"scheduler disconnected\");\n\tpsched->sc_secondary_conn = -1;\n\tpsched->sc_primary_conn = -1;\n\tif (other_conn != -1) {\n\t\tnet_add_close_func(other_conn, NULL);\n\t\tclose_conn(other_conn);\n\t}\n\tpsched->sc_cycle_started = 0;\n\tset_sched_state(psched, SC_DOWN);\n\n\t/* clear list of jobs which were altered/modified during cycle */\n\tam_jobs.am_used = 0;\n\tscheduler_jobs_stat = 0;\n\n\thandle_deferred_cycle_close(psched);\n}\n\n/**\n * @brief\n * \t\tAdd a job to the am_jobs array, called when a job is moved (locally)\n *\t\tor modified (qalter) during a scheduling cycle\n *\n * 
@param[in]\tpjob\t-\tpointer to job to add to the array.\n */\nvoid\nam_jobs_add(job *pjob)\n{\n\tif (am_jobs.am_used == am_jobs.am_max) {\n\t\t/* Need to expand the array, increase by 4 slots */\n\t\tjob **tmp = realloc(am_jobs.am_array, sizeof(job *) * (am_jobs.am_max + 4));\n\t\tif (tmp == NULL)\n\t\t\treturn; /* cannot increase array, so be it */\n\t\tam_jobs.am_array = tmp;\n\t\tam_jobs.am_max += 4;\n\t}\n\t*(am_jobs.am_array + am_jobs.am_used++) = pjob;\n}\n\n/**\n * @brief\n * \t\tDetermine if the job in question is in the list of moved/altered\n *\t\tjobs.  Called when a run request for a job comes from the Scheduler.\n *\n * @param[in]\tpjob\t-\tpointer to job in question.\n *\n * @return\tint\n * @retval\t0\t- job not in list\n * @retval\t1\t- job is in list\n */\nint\nwas_job_alteredmoved(job *pjob)\n{\n\tint i;\n\tfor (i = 0; i < am_jobs.am_used; ++i) {\n\t\tif (*(am_jobs.am_array + i) == pjob)\n\t\t\treturn 1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tset_scheduler_flag - set the flag to call the Scheduler\n *\t\tcertain flag values should not be overwritten\n *\n * @param[in]\tflag\t-\tscheduler command.\n * @param[in] psched -   pointer to sched object. Then set the flag only for this object.\n *                                     NULL. 
Then set the flag for all the scheduler objects.\n */\nvoid\nset_scheduler_flag(int flag, pbs_sched *psched)\n{\n\tint single_sched;\n\n\tif (psched)\n\t\tsingle_sched = 1;\n\telse {\n\t\tsingle_sched = 0;\n\t\tpsched = (pbs_sched *) GET_NEXT(svr_allscheds);\n\t}\n\n\tfor (; psched; psched = (pbs_sched *) GET_NEXT(psched->sc_link)) {\n\t\t/* high priority commands:\n\t\t * Note: A) usually SCH_QUIT is sent directly and not via here\n\t\t *       B) if we ever add a 3rd high prio command, we can lose them\n\t\t */\n\t\tif (flag == SCH_CONFIGURE || flag == SCH_QUIT) {\n\t\t\tif (psched->svr_do_sched_high == SCH_QUIT)\n\t\t\t\treturn; /* keep only SCH_QUIT */\n\n\t\t\tpsched->svr_do_sched_high = flag;\n\t\t} else\n\t\t\tpsched->svr_do_schedule = flag;\n\t\tif (single_sched)\n\t\t\tbreak;\n\t}\n}\n\n/**\n * @brief\n * \tHandles deferred requests during scheduling cycle closure\n *\n * @return void\n */\nvoid\nhandle_deferred_cycle_close(pbs_sched *psched)\n{\n\tpbs_list_head *deferred_req;\n\tstruct deferred_request *pdefr;\n\n\tdeferred_req = fetch_sched_deferred_request(psched, false);\n\tif (deferred_req == NULL) {\n\t\treturn;\n\t}\n\n\t/*\n\t * If a deferred (from qrun) had been sent to the Scheduler and is still\n\t * there, then the Scheduler must have closed the connection without\n\t * dealing with the job. 
Tell qrun it failed if the qrun connection\n\t * is still there.\n\t *\n\t * If any qrun request is pending in the deferred list, set svr_unsent_qrun_req so\n\t * it can be sent when the Scheduler completes this cycle\n\t */\n\tpdefr = (struct deferred_request *) GET_NEXT(*deferred_req);\n\n\twhile (pdefr) {\n\t\tstruct deferred_request *next_pdefr = (struct deferred_request *) GET_NEXT(pdefr->dr_link);\n\n\t\tif (pdefr->dr_sent != 0) {\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_NOTICE, pdefr->dr_id, \"deferred qrun request to scheduler failed\");\n\t\t\tif (pdefr->dr_preq != NULL)\n\t\t\t\treq_reject(PBSE_INTERNAL, 0, pdefr->dr_preq);\n\t\t\t/* unlink and free the deferred request entry */\n\t\t\tdelete_link(&pdefr->dr_link);\n\t\t\tfree(pdefr);\n\t\t} else if (pdefr->dr_sent == 0 && svr_unsent_qrun_req == 0)\n\t\t\tsvr_unsent_qrun_req = 1;\n\n\t\tpdefr = next_pdefr;\n\t}\n\n\tclear_sched_deferred_request(psched);\n}\n"
  },
  {
    "path": "src/server/sattr_get_set.c",
    "content": "/*\n * Copyright (C) 1994-2020 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h>\n\n#include \"server.h\"\n\n/**\n * @brief\tGet attribute of server based on given attr index\n *\n * @param[in] attr_idx - attribute index\n *\n * @return attribute *\n * @retval NULL  - failure\n * @retval !NULL - pointer to attribute struct\n */\nattribute *\nget_sattr(int attr_idx)\n{\n\treturn &(server.sv_attr[attr_idx]);\n}\n\n/**\n * @brief\tGetter function for server attribute of type string\n *\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tchar *\n * @retval\tstring value of the attribute\n * @retval\tNULL if the attribute has no string value\n */\nchar *\nget_sattr_str(int attr_idx)\n{\n\treturn get_attr_str(get_sattr(attr_idx));\n}\n\n/**\n * @brief\tGetter function for server attribute of type array of strings\n *\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tstruct array_strings *\n * @retval\tvalue of the attribute\n * @retval\tNULL if the attribute has no array value\n */\nstruct array_strings *\nget_sattr_arst(int attr_idx)\n{\n\treturn get_attr_arst(get_sattr(attr_idx));\n}\n\n/**\n * @brief\tGetter for server attribute's list value\n *\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tpbs_list_head\n * @retval\tvalue of attribute\n */\npbs_list_head\nget_sattr_list(int attr_idx)\n{\n\treturn get_attr_list(get_sattr(attr_idx));\n}\n\n/**\n * 
@brief\tGetter function for server attribute of type long\n *\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tlong\n * @retval\tlong value of the attribute\n * @retval\t-1 if pjob is NULL\n */\nlong\nget_sattr_long(int attr_idx)\n{\n\treturn get_attr_l(get_sattr(attr_idx));\n}\n\n/**\n * @brief\tGeneric server attribute setter (call if you want at_set() action functions to be called)\n *\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\trscn - new resource val to set, if applicable\n * @param[in]\top - batch_op operation, SET, INCR, DECR etc.\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t!0 for failure\n */\nint\nset_sattr_generic(int attr_idx, char *val, char *rscn, enum batch_op op)\n{\n\treturn set_attr_generic(get_sattr(attr_idx), &svr_attr_def[attr_idx], val, rscn, op);\n}\n\n/**\n * @brief\t\"fast\" server attribute setter for string values\n *\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\trscn - new resource val to set, if applicable\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t!0 for failure\n */\nint\nset_sattr_str_slim(int attr_idx, char *val, char *rscn)\n{\n\treturn set_attr_generic(get_sattr(attr_idx), &svr_attr_def[attr_idx], val, rscn, INTERNAL);\n}\n\n/**\n * @brief\t\"fast\" server attribute setter for long values\n *\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\top - batch_op operation, SET, INCR, DECR etc.\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t1 for failure\n */\nint\nset_sattr_l_slim(int attr_idx, long val, enum batch_op op)\n{\n\tset_attr_l(get_sattr(attr_idx), val, op);\n\treturn 0;\n}\n\n/**\n * @brief\t\"fast\" server attribute setter for boolean values\n *\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\top - batch_op operation, SET, INCR, 
DECR etc.\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t1 for failure\n */\nint\nset_sattr_b_slim(int attr_idx, long val, enum batch_op op)\n{\n\tset_attr_b(get_sattr(attr_idx), val, op);\n\treturn 0;\n}\n\n/**\n * @brief\t\"fast\" server attribute setter for char values\n *\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\top - batch_op operation, SET, INCR, DECR etc.\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t1 for failure\n */\nint\nset_sattr_c_slim(int attr_idx, char val, enum batch_op op)\n{\n\tset_attr_c(get_sattr(attr_idx), val, op);\n\treturn 0;\n}\n\n/**\n * @brief\tCheck if a server attribute is set\n *\n * @param[in]\tattr_idx - attribute index to check\n *\n * @return\tint\n * @retval\t1 if it is set\n * @retval\t0 otherwise\n */\nint\nis_sattr_set(int attr_idx)\n{\n\treturn is_attr_set(get_sattr(attr_idx));\n}\n\n/**\n * @brief\tFree a server attribute\n *\n * @param[in]\tattr_idx - attribute index to free\n *\n * @return\tvoid\n */\nvoid\nfree_sattr(int attr_idx)\n{\n\tfree_attr(svr_attr_def, get_sattr(attr_idx), attr_idx);\n}\n"
  },
  {
    "path": "src/server/sched_attr_get_set.c",
    "content": "/*\n * Copyright (C) 1994-2020 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h>\n\n#include \"pbs_sched.h\"\n\n/**\n * @brief\tGet attribute of sched based on given attr index\n *\n * @param[in] psched    - pointer to sched struct\n * @param[in] attr_idx - attribute index\n *\n * @return attribute *\n * @retval NULL  - failure\n * @retval !NULL - pointer to attribute struct\n */\nattribute *\nget_sched_attr(const pbs_sched *psched, int attr_idx)\n{\n\tif (psched != NULL)\n\t\treturn _get_attr_by_idx((attribute *) psched->sch_attr, attr_idx);\n\treturn NULL;\n}\n\n/**\n * @brief\tGetter function for sched attribute of type string\n *\n * @param[in]\tpsched - pointer to the sched\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tchar *\n * @retval\tstring value of the attribute\n * @retval\tNULL if psched is NULL\n */\nchar *\nget_sched_attr_str(const pbs_sched *psched, int attr_idx)\n{\n\tif (psched != NULL)\n\t\treturn get_attr_str(get_sched_attr(psched, attr_idx));\n\n\treturn NULL;\n}\n\n/**\n * @brief\tGetter function for sched attribute of type string of array\n *\n * @param[in]\tpsched - pointer to the sched\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tstruct array_strings *\n * @retval\tvalue of the attribute\n * @retval\tNULL if psched is NULL\n */\nstruct array_strings *\nget_sched_attr_arst(const pbs_sched *psched, int 
attr_idx)\n{\n\tif (psched != NULL)\n\t\treturn get_attr_arst(get_sched_attr(psched, attr_idx));\n\n\treturn NULL;\n}\n\n/**\n * @brief\tGetter for sched attribute's list value\n *\n * @param[in]\tpsched - pointer to the sched\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tpbs_list_head\n * @retval\tvalue of attribute\n */\npbs_list_head\nget_sched_attr_list(const pbs_sched *psched, int attr_idx)\n{\n\treturn get_attr_list(get_sched_attr(psched, attr_idx));\n}\n\n/**\n * @brief\tGetter function for sched attribute of type long\n *\n * @param[in]\tpsched - pointer to the sched\n * @param[in]\tattr_idx - index of the attribute to return\n *\n * @return\tlong\n * @retval\tlong value of the attribute\n * @retval\t-1 if psched is NULL\n */\nlong\nget_sched_attr_long(const pbs_sched *psched, int attr_idx)\n{\n\tif (psched != NULL)\n\t\treturn get_attr_l(get_sched_attr(psched, attr_idx));\n\n\treturn -1;\n}\n\n/**\n * @brief\tGeneric sched attribute setter (call if you want at_set() action functions to be called)\n *\n * @param[in]\tpsched - pointer to sched\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\trscn - new resource val to set, if applicable\n * @param[in]\top - batch_op operation, SET, INCR, DECR etc.\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t!0 for failure\n */\nint\nset_sched_attr_generic(pbs_sched *psched, int attr_idx, char *val, char *rscn, enum batch_op op)\n{\n\tif (psched == NULL || val == NULL)\n\t\treturn 1;\n\n\treturn set_attr_generic(get_sched_attr(psched, attr_idx), &sched_attr_def[attr_idx], val, rscn, op);\n}\n\n/**\n * @brief\t\"fast\" sched attribute setter for string values\n *\n * @param[in]\tpsched - pointer to sched\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\trscn - new resource val to set, if applicable\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t!0 for failure\n 
*/\nint\nset_sched_attr_str_slim(pbs_sched *psched, int attr_idx, char *val, char *rscn)\n{\n\tif (psched == NULL || val == NULL)\n\t\treturn 1;\n\n\treturn set_attr_generic(get_sched_attr(psched, attr_idx), &sched_attr_def[attr_idx], val, rscn, INTERNAL);\n}\n\n/**\n * @brief\t\"fast\" sched attribute setter for long values\n *\n * @param[in]\tpsched - pointer to sched\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\top - batch_op operation, SET, INCR, DECR etc.\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t1 for failure\n */\nint\nset_sched_attr_l_slim(pbs_sched *psched, int attr_idx, long val, enum batch_op op)\n{\n\tif (psched == NULL)\n\t\treturn 1;\n\n\tset_attr_l(get_sched_attr(psched, attr_idx), val, op);\n\n\treturn 0;\n}\n\n/**\n * @brief\t\"fast\" sched attribute setter for boolean values\n *\n * @param[in]\tpsched - pointer to sched\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\top - batch_op operation, SET, INCR, DECR etc.\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t1 for failure\n */\nint\nset_sched_attr_b_slim(pbs_sched *psched, int attr_idx, long val, enum batch_op op)\n{\n\tif (psched == NULL)\n\t\treturn 1;\n\n\tset_attr_b(get_sched_attr(psched, attr_idx), val, op);\n\n\treturn 0;\n}\n\n/**\n * @brief\t\"fast\" sched attribute setter for char values\n *\n * @param[in]\tpsched - pointer to sched\n * @param[in]\tattr_idx - attribute index to set\n * @param[in]\tval - new val to set\n * @param[in]\top - batch_op operation, SET, INCR, DECR etc.\n *\n * @return\tint\n * @retval\t0 for success\n * @retval\t1 for failure\n */\nint\nset_sched_attr_c_slim(pbs_sched *psched, int attr_idx, char val, enum batch_op op)\n{\n\tif (psched == NULL)\n\t\treturn 1;\n\n\tset_attr_c(get_sched_attr(psched, attr_idx), val, op);\n\n\treturn 0;\n}\n\n/**\n * @brief\tCheck if a sched attribute is set\n *\n * @param[in]\tpsched - 
pointer to sched\n * @param[in]\tattr_idx - attribute index to check\n *\n * @return\tint\n * @retval\t1 if it is set\n * @retval\t0 otherwise\n */\nint\nis_sched_attr_set(const pbs_sched *psched, int attr_idx)\n{\n\tif (psched != NULL)\n\t\treturn is_attr_set(get_sched_attr(psched, attr_idx));\n\n\treturn 0;\n}\n\n/**\n * @brief\tFree a sched attribute\n *\n * @param[in]\tpsched - pointer to sched\n * @param[in]\tattr_idx - attribute index to free\n *\n * @return\tvoid\n */\nvoid\nfree_sched_attr(pbs_sched *psched, int attr_idx)\n{\n\tif (psched != NULL)\n\t\tfree_attr(sched_attr_def, get_sched_attr(psched, attr_idx), attr_idx);\n}\n\n/**\n * @brief\tclear a sched attribute\n *\n * @param[in]\tpsched - pointer to sched\n * @param[in]\tattr_idx - attribute index to clear\n *\n * @return\tvoid\n */\nvoid\nclear_sched_attr(pbs_sched *psched, int attr_idx)\n{\n\tclear_attr(get_sched_attr(psched, attr_idx), &sched_attr_def[attr_idx]);\n}\n"
  },
  {
    "path": "src/server/sched_func.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    sched_func.c\n *\n * @brief\n * \t\tsched_func.c - various functions dealing with schedulers\n *\n */\n#include <pbs_config.h>\n\n#ifdef PYTHON\n#include <pbs_python_private.h>\n#include <Python.h>\n#endif\n\n#include <ctype.h>\n#include <errno.h>\n#include <string.h>\n#include <memory.h>\n\n#include <pbs_python.h>\n#include \"pbs_version.h\"\n#include \"pbs_share.h\"\n#include \"pbs_sched.h\"\n#include \"log.h\"\n#include \"pbs_ifl.h\"\n#include \"pbs_db.h\"\n#include \"pbs_error.h\"\n#include \"pbs_internal.h\"\n#include \"resource.h\"\n#include \"sched_cmds.h\"\n#include \"server.h\"\n#include <server_limits.h>\n#include \"svrfunc.h\"\n\nextern struct server server;\n\n/* Functions */\n#ifdef PYTHON\nextern char *pbs_python_object_str(PyObject *);\n#endif /* PYTHON */\n\nextern void *svr_db_conn;\n\n/**\n * @brief\tHelper function to write job sort formula to a sched's sched_priv\n *\n * @param[in]\tformula - the formula to write\n * @param[in]\tsched_priv_path - path to scheduler's sched_priv\n *\n * @return\tint\n * @retval\tPBSE_NONE for Success\n * @retval\tPBSE_ error code for Failure\n */\nstatic int\nwrite_job_sort_formula(char *formula, char *sched_priv_path)\n{\n\tchar pathbuf[MAXPATHLEN];\n\tFILE *fp;\n\n\tsnprintf(pathbuf, sizeof(pathbuf), \"%s/%s\", 
sched_priv_path, FORMULA_FILENAME);\n\tif ((fp = fopen(pathbuf, \"w\")) == NULL) {\n\t\treturn PBSE_SYSTEM;\n\t}\n\n\tfprintf(fp, \"### PBS INTERNAL FILE DO NOT MODIFY ###\\n\");\n\tfprintf(fp, \"%s\\n\", formula);\n\tfclose(fp);\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * \tvalidate_job_formula - validate that the sorting formula is in the\n *\tcorrect form.  We do this by calling python and having\n *\tit catch exceptions.\n *\n */\nint\nvalidate_job_formula(attribute *pattr, void *pobject, int actmode)\n{\n\tchar *formula;\n\tchar *errmsg = NULL;\n\tstruct resource_def *pres;\n\tchar buf[1024];\n\tchar *globals1 = NULL;\n\tint globals_size1 = 1024;\n\tchar *globals2 = NULL;\n\tint globals_size2 = 1024;\n\tchar *script = NULL;\n\tint script_size = 2048;\n\tPyThreadState *ts_main = NULL;\n\tPyThreadState *ts_sub = NULL;\n\tpbs_sched *psched = NULL;\n\tint rc = 0;\n\tint err = 0;\n\n\tif (actmode == ATR_ACTION_FREE)\n\t\treturn (0);\n\n#ifndef PYTHON\n\treturn PBSE_INTERNAL;\n#else\n\n\tformula = pattr->at_val.at_str;\n\tif (formula == NULL)\n\t\treturn PBSE_INTERNAL;\n\n\tif (pobject == &server) {\n\t\t/* check if any sched's JSF is set to a different, incompatible value */\n\t\tfor (psched = (pbs_sched *) GET_NEXT(svr_allscheds);\n\t\t     psched != NULL;\n\t\t     psched = (pbs_sched *) GET_NEXT(psched->sc_link)) {\n\t\t\tif (is_sched_attr_set(psched, SCHED_ATR_job_sort_formula)) {\n\t\t\t\tif (strcmp(get_sched_attr_str(psched, SCHED_ATR_job_sort_formula), formula) != 0)\n\t\t\t\t\treturn PBSE_SVR_SCHED_JSF_INCOMPAT;\n\t\t\t}\n\t\t}\n\t} else {\n\t\t/* Check if server's JSF is set to a different value */\n\t\tif (is_sattr_set(SVR_ATR_job_sort_formula) && strcmp(get_sattr_str(SVR_ATR_job_sort_formula), formula) != 0)\n\t\t\treturn PBSE_SVR_SCHED_JSF_INCOMPAT;\n\t}\n\n\tif (!Py_IsInitialized()) {\n\t\tif (actmode == ATR_ACTION_RECOV)\n\t\t\treturn 0;\n\t\treturn PBSE_INTERNAL;\n\t}\n\n\tglobals1 = malloc(globals_size1);\n\tif (globals1 == NULL) {\n\t\trc = 
PBSE_SYSTEM;\n\t\tgoto validate_job_formula_exit;\n\t}\n\n\tglobals2 = malloc(globals_size2);\n\tif (globals2 == NULL) {\n\t\trc = PBSE_SYSTEM;\n\t\tgoto validate_job_formula_exit;\n\t}\n\n\tstrcpy(globals1, \"globals1={\");\n\tstrcpy(globals2, \"globals2={\");\n\n\t/* We need to create a python dictionary to pass to python as a list\n\t * of valid symbols.\n\t */\n\n\tfor (pres = svr_resc_def; pres; pres = pres->rs_next) {\n\t\t/* unknown resource is used as a delimiter between builtin and custom resources */\n\t\tif (strcmp(pres->rs_name, RESOURCE_UNKNOWN) != 0) {\n\t\t\tsnprintf(buf, sizeof(buf), \"\\'%s\\':1,\", pres->rs_name);\n\t\t\tif (pbs_strcat(&globals1, &globals_size1, buf) == NULL) {\n\t\t\t\trc = PBSE_SYSTEM;\n\t\t\t\tgoto validate_job_formula_exit;\n\t\t\t}\n\t\t\tif (pres->rs_type == ATR_TYPE_LONG ||\n\t\t\t    pres->rs_type == ATR_TYPE_SIZE ||\n\t\t\t    pres->rs_type == ATR_TYPE_LL ||\n\t\t\t    pres->rs_type == ATR_TYPE_SHORT ||\n\t\t\t    pres->rs_type == ATR_TYPE_FLOAT) {\n\t\t\t\tif (pbs_strcat(&globals2, &globals_size2, buf) == NULL) {\n\t\t\t\t\trc = PBSE_SYSTEM;\n\t\t\t\t\tgoto validate_job_formula_exit;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tsnprintf(buf, sizeof(buf), \"\\'%s\\':1, '%s':1, \\'%s\\':1,\\'%s\\':1, \\'%s\\':1, \\'%s\\':1, \\'%s\\':1, \\'%s\\': 1}\\n\",\n\t\t FORMULA_ELIGIBLE_TIME, FORMULA_QUEUE_PRIO, FORMULA_JOB_PRIO,\n\t\t FORMULA_FSPERC, FORMULA_FSPERC_DEP, FORMULA_TREE_USAGE, FORMULA_FSFACTOR, FORMULA_ACCRUE_TYPE);\n\tif (pbs_strcat(&globals1, &globals_size1, buf) == NULL) {\n\t\trc = PBSE_SYSTEM;\n\t\tgoto validate_job_formula_exit;\n\t}\n\tif (pbs_strcat(&globals2, &globals_size2, buf) == NULL) {\n\t\trc = PBSE_SYSTEM;\n\t\tgoto validate_job_formula_exit;\n\t}\n\n\t/* Allocate a buffer for the Python code */\n\tscript = malloc(script_size);\n\tif (script == NULL) {\n\t\trc = PBSE_SYSTEM;\n\t\tgoto validate_job_formula_exit;\n\t}\n\t*script = '\\0';\n\n\t/* import math and initialize variables */\n\tsprintf(buf,\n\t\t\"ans = 
0\\n\"\n\t\t\"errnum = 0\\n\"\n\t\t\"errmsg = \\'\\'\\n\"\n\t\t\"try:\\n\"\n\t\t\"    from math import *\\n\"\n\t\t\"except ImportError as e:\\n\"\n\t\t\"    errnum=4\\n\"\n\t\t\"    errmsg=str(e)\\n\");\n\tif (pbs_strcat(&script, &script_size, buf) == NULL) {\n\t\trc = PBSE_SYSTEM;\n\t\tgoto validate_job_formula_exit;\n\t}\n\t/* set up our globals dictionary */\n\tif (pbs_strcat(&script, &script_size, globals1) == NULL) {\n\t\trc = PBSE_SYSTEM;\n\t\tgoto validate_job_formula_exit;\n\t}\n\tif (pbs_strcat(&script, &script_size, globals2) == NULL) {\n\t\trc = PBSE_SYSTEM;\n\t\tgoto validate_job_formula_exit;\n\t}\n\t/* Now for the real guts: The initial try/except block*/\n\tsprintf(buf,\n\t\t\"try:\\n\"\n\t\t\"    exec(\\'ans=\");\n\tif (pbs_strcat(&script, &script_size, buf) == NULL) {\n\t\trc = PBSE_SYSTEM;\n\t\tgoto validate_job_formula_exit;\n\t}\n\tif (pbs_strcat(&script, &script_size, formula) == NULL) {\n\t\trc = PBSE_SYSTEM;\n\t\tgoto validate_job_formula_exit;\n\t}\n\tsprintf(buf, \"\\', globals1, locals())\\n\"\n\t\t     \"except SyntaxError as e:\\n\"\n\t\t     \"    errnum=1\\n\"\n\t\t     \"    errmsg=str(e)\\n\"\n\t\t     \"except NameError as e:\\n\"\n\t\t     \"    errnum=2\\n\"\n\t\t     \"    errmsg=str(e)\\n\"\n\t\t     \"except Exception as e:\\n\"\n\t\t     \"    pass\\n\"\n\t\t     \"if errnum == 0:\\n\"\n\t\t     \"    try:\\n\"\n\t\t     \"        exec(\\'ans=\");\n\tif (pbs_strcat(&script, &script_size, buf) == NULL) {\n\t\trc = PBSE_SYSTEM;\n\t\tgoto validate_job_formula_exit;\n\t}\n\tif (pbs_strcat(&script, &script_size, formula) == NULL) {\n\t\trc = PBSE_SYSTEM;\n\t\tgoto validate_job_formula_exit;\n\t}\n\tsprintf(buf, \"\\', globals2, locals())\\n\"\n\t\t     \"    except NameError as e:\\n\"\n\t\t     \"        errnum=3\\n\"\n\t\t     \"        errmsg=str(e)\\n\"\n\t\t     \"    except Exception as e:\\n\"\n\t\t     \"        pass\\n\");\n\tif (pbs_strcat(&script, &script_size, buf) == NULL) {\n\t\trc = PBSE_SYSTEM;\n\t\tgoto 
validate_job_formula_exit;\n\t}\n\n\t/* run the script in a subinterpreter */\n\tts_main = PyThreadState_Get();\n\tts_sub = Py_NewInterpreter();\n\tif (!ts_sub) {\n\t\trc = PBSE_SYSTEM;\n\t\tgoto validate_job_formula_exit;\n\t}\n\terr = PyRun_SimpleString(script);\n\n\t/* peek into the interpreter to get the values of err and errmsg */\n\tif (err == 0) {\n\t\tPyObject *module;\n\t\tPyObject *dict;\n\t\tPyObject *val;\n\t\terr = -1;\n\t\tif ((module = PyImport_AddModule(\"__main__\"))) {\n\t\t\tif ((dict = PyModule_GetDict(module))) {\n\t\t\t\tchar *p;\n\t\t\t\tif ((val = PyDict_GetItemString(dict, \"errnum\"))) {\n\t\t\t\t\tp = pbs_python_object_str(val);\n\t\t\t\t\tif (*p != '\\0')\n\t\t\t\t\t\terr = atoi(p);\n\t\t\t\t}\n\t\t\t\tif ((val = PyDict_GetItemString(dict, \"errmsg\"))) {\n\t\t\t\t\tp = pbs_python_object_str(val);\n\t\t\t\t\tif (*p != '\\0')\n\t\t\t\t\t\terrmsg = strdup(p);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tswitch (err) {\n\t\tcase 0: /* Success */\n\t\t\trc = 0;\n\t\t\tbreak;\n\t\tcase 1: /* Syntax error in formula */\n\t\t\trc = PBSE_BAD_FORMULA;\n\t\t\tbreak;\n\t\tcase 2: /* unknown resource name */\n\t\t\trc = PBSE_BAD_FORMULA_KW;\n\t\t\tbreak;\n\t\tcase 3: /* resource of non-numeric type */\n\t\t\trc = PBSE_BAD_FORMULA_TYPE;\n\t\t\tbreak;\n\t\tcase 4: /* import error */\n\t\t\trc = PBSE_SYSTEM;\n\t\t\tbreak;\n\t\tdefault: /* unrecognized error */\n\t\t\trc = PBSE_INTERNAL;\n\t\t\tbreak;\n\t}\n\n\tif (err == 0) {\n\t\tif (pobject == &server) {\n\t\t\t/* Write formula to all scheds' sched_priv */\n\t\t\tfor (psched = (pbs_sched *) GET_NEXT(svr_allscheds);\n\t\t\t     psched != NULL;\n\t\t\t     psched = (pbs_sched *) GET_NEXT(psched->sc_link)) {\n\t\t\t\trc = write_job_sort_formula(formula, get_sched_attr_str(psched, SCHED_ATR_sched_priv));\n\t\t\t\tif (rc != PBSE_NONE)\n\t\t\t\t\tgoto validate_job_formula_exit;\n\t\t\t}\n\t\t} else { /* Write formula to a specific sched's sched_priv */\n\t\t\tpsched = (pbs_sched *) pobject;\n\t\t\trc = 
write_job_sort_formula(formula, get_sched_attr_str(psched, SCHED_ATR_sched_priv));\n\t\t\tif (rc != PBSE_NONE)\n\t\t\t\tgoto validate_job_formula_exit;\n\t\t}\n\n\t} else {\n\t\tsnprintf(buf, sizeof(buf), \"Validation Error: %s\", errmsg ? errmsg : \"Internal error\");\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_SERVER, LOG_DEBUG, __func__, buf);\n\t}\n\nvalidate_job_formula_exit:\n\tif (ts_main) {\n\t\tif (ts_sub)\n\t\t\tPy_EndInterpreter(ts_sub);\n\t\tPyThreadState_Swap(ts_main);\n\t}\n\tfree(script);\n\tfree(globals1);\n\tfree(globals2);\n\tfree(errmsg);\n\treturn rc;\n#endif\n}\n\n/**\n * @brief\n * \t\tsched_alloc - allocate space for a pbs_sched structure,\n * \t\tinitialize its attributes to \"unset\", and add the object\n * \t\tto the svr_allscheds list\n *\n * @param[in]\tsched_name\t- scheduler name\n *\n * @return\tpbs_sched *\n * @retval\tNULL\t- space not available.\n */\npbs_sched *\nsched_alloc(char *sched_name)\n{\n\tint i;\n\tpbs_sched *psched;\n\n\tpsched = calloc(1, sizeof(pbs_sched));\n\n\tif (psched == NULL) {\n\t\tlog_err(errno, __func__, \"Unable to allocate memory (calloc error)\");\n\t\treturn NULL;\n\t}\n\n\tCLEAR_LINK(psched->sc_link);\n\tstrncpy(psched->sc_name, sched_name, PBS_MAXSCHEDNAME);\n\tpsched->sc_name[PBS_MAXSCHEDNAME] = '\\0';\n\tpsched->svr_do_schedule = SCH_SCHEDULE_NULL;\n\tpsched->svr_do_sched_high = SCH_SCHEDULE_NULL;\n\tpsched->sc_primary_conn = -1;\n\tpsched->sc_secondary_conn = -1;\n\tpsched->newobj = 1;\n\tappend_link(&svr_allscheds, &psched->sc_link, psched);\n\n\t/* set the working attributes to \"unspecified\" */\n\n\tfor (i = 0; i < (int) SCHED_ATR_LAST; i++)\n\t\tclear_sched_attr(psched, i);\n\n\treturn (psched);\n}\n\n/**\n * @brief find a scheduler\n *\n * @param[in]\tsched_name - scheduler name\n *\n * @return\tpbs_sched *\n */\n\npbs_sched *\nfind_sched(char *sched_name)\n{\n\tpbs_sched *psched = NULL;\n\tif (!sched_name)\n\t\treturn NULL;\n\tpsched = (pbs_sched *) GET_NEXT(svr_allscheds);\n\twhile 
(psched != NULL) {\n\t\tif (strcmp(sched_name, psched->sc_name) == 0)\n\t\t\tbreak;\n\t\tpsched = (pbs_sched *) GET_NEXT(psched->sc_link);\n\t}\n\treturn (psched);\n}\n\n/**\n * @brief find a scheduler from partition name\n *\n * @param[in]\tpartition - partition name\n *\n * @return\tpbs_sched *\n */\n\npbs_sched *\nfind_sched_from_partition(char *partition)\n{\n\tpbs_sched *psched = NULL;\n\n\tif (!partition)\n\t\treturn NULL;\n\n\tfor (psched = (pbs_sched *) GET_NEXT(svr_allscheds); psched; psched = (pbs_sched *) GET_NEXT(psched->sc_link)) {\n\t\tif (is_sched_attr_set(psched, SCHED_ATR_partition)) {\n\t\t\tchar *value = get_sched_attr_str(psched, SCHED_ATR_partition);\n\t\t\tif (value != NULL && !strcmp(partition, value))\n\t\t\t\treturn psched;\n\t\t}\n\t}\n\treturn NULL;\n}\n\n/**\n * @brief free sched structure\n *\n * @param[in]\tpsched\t- The pointer to the sched to free\n *\n */\nvoid\nsched_free(pbs_sched *psched)\n{\n\tint i;\n\n\t/* remove any malloc working attribute space */\n\n\tfor (i = 0; i < (int) SCHED_ATR_LAST; i++)\n\t\tfree_sched_attr(psched, i);\n\n\t/* now free the main structure */\n\tdelete_link(&psched->sc_link);\n\t(void) free(psched);\n}\n\n/**\n * @brief - purge scheduler from system\n *\n * @param[in]\tpsched\t- The pointer to the scheduler to delete\n *\n * @return\terror code\n * @retval\t0\t- scheduler purged\n * @retval\tPBSE_OBJBUSY\t- scheduler deletion not allowed\n */\nint\nsched_delete(pbs_sched *psched)\n{\n\tpbs_db_obj_info_t obj;\n\tpbs_db_sched_info_t dbsched;\n\tvoid *conn = (void *) svr_db_conn;\n\n\tif (psched == NULL)\n\t\treturn (0);\n\n\t/* TODO check for scheduler activity and return PBSE_OBJBUSY */\n\t/* delete scheduler from database */\n\tstrcpy(dbsched.sched_name, psched->sc_name);\n\tobj.pbs_db_obj_type = PBS_DB_SCHED;\n\tobj.pbs_db_un.pbs_db_sched = &dbsched;\n\tif (pbs_db_delete_obj(conn, &obj) != 0) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE,\n\t\t\t \"delete of scheduler %s from datastore failed\",\n\t\t\t 
psched->sc_name);\n\t\tlog_err(errno, __func__, log_buffer);\n\t}\n\tsched_free(psched);\n\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\taction routine for the sched's \"sched_host\" attribute\n *\n * @param[in]\tpattr\t-\tattribute being set\n * @param[in]\tpobj\t-\tObject on which attribute is being set\n * @param[in]\tactmode\t-\tthe mode of setting, recovery or just alter\n *\n * @return\terror code\n * @retval\tPBSE_NONE\t-\tSuccess\n * @retval\t!PBSE_NONE\t-\tFailure\n *\n */\nint\naction_sched_host(attribute *pattr, void *pobj, int actmode)\n{\n\tpbs_sched *psched;\n\tpsched = (pbs_sched *) pobj;\n\n\tif (actmode == ATR_ACTION_NEW || actmode == ATR_ACTION_ALTER || actmode == ATR_ACTION_RECOV) {\n\t\tpsched->sc_conn_addr = get_hostaddr(pattr->at_val.at_str);\n\t\tif (psched->sc_conn_addr == (pbs_net_t) 0)\n\t\t\treturn PBSE_BADATVAL;\n\t}\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * \t\taction routine for the sched's \"sched_priv\" attribute\n *\n * @param[in]\tpattr\t-\tattribute being set\n * @param[in]\tpobj\t-\tObject on which attribute is being set\n * @param[in]\tactmode\t-\tthe mode of setting, recovery or just alter\n *\n * @return\terror code\n * @retval\tPBSE_NONE\t-\tSuccess\n * @retval\t!PBSE_NONE\t-\tFailure\n *\n */\nint\naction_sched_priv(attribute *pattr, void *pobj, int actmode)\n{\n\tpbs_sched *psched;\n\n\tpsched = (pbs_sched *) pobj;\n\n\tif (pobj == dflt_scheduler)\n\t\treturn PBSE_SCHED_OP_NOT_PERMITTED;\n\n\tif (actmode == ATR_ACTION_NEW || actmode == ATR_ACTION_ALTER || actmode == ATR_ACTION_RECOV) {\n\t\tpsched = (pbs_sched *) GET_NEXT(svr_allscheds);\n\t\twhile (psched != NULL) {\n\t\t\tif (is_sched_attr_set(psched, SCHED_ATR_sched_priv)) {\n\t\t\t\tif (!strcmp(get_sched_attr_str(psched, SCHED_ATR_sched_priv), pattr->at_val.at_str)) {\n\t\t\t\t\tif (psched != pobj) {\n\t\t\t\t\t\treturn PBSE_SCHED_PRIV_EXIST;\n\t\t\t\t\t} else\n\t\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\tpsched = (pbs_sched *) 
GET_NEXT(psched->sc_link);\n\t\t}\n\t}\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * \t\taction routine for the sched's \"sched_log\" attribute\n *\n * @param[in]\tpattr\t-\tattribute being set\n * @param[in]\tpobj\t-\tObject on which attribute is being set\n * @param[in]\tactmode\t-\tthe mode of setting, recovery or just alter\n *\n * @return\terror code\n * @retval\tPBSE_NONE\t-\tSuccess\n * @retval\t!PBSE_NONE\t-\tFailure\n *\n */\nint\naction_sched_log(attribute *pattr, void *pobj, int actmode)\n{\n\tpbs_sched *psched;\n\tpsched = (pbs_sched *) pobj;\n\n\tif (pobj == dflt_scheduler)\n\t\treturn PBSE_SCHED_OP_NOT_PERMITTED;\n\n\tif (actmode == ATR_ACTION_NEW || actmode == ATR_ACTION_ALTER || actmode == ATR_ACTION_RECOV) {\n\t\tpsched = (pbs_sched *) GET_NEXT(svr_allscheds);\n\t\twhile (psched != NULL) {\n\t\t\tif (is_sched_attr_set(psched, SCHED_ATR_sched_log)) {\n\t\t\t\tif (!strcmp(get_sched_attr_str(psched, SCHED_ATR_sched_log), pattr->at_val.at_str)) {\n\t\t\t\t\tif (psched != pobj) {\n\t\t\t\t\t\treturn PBSE_SCHED_LOG_EXIST;\n\t\t\t\t\t} else\n\t\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t\tpsched = (pbs_sched *) GET_NEXT(psched->sc_link);\n\t\t}\n\t}\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * \t\taction routine for the sched's \"sched_iteration\" attribute\n *\n * @param[in]\tpattr\t-\tattribute being set\n * @param[in]\tpobj\t-\tObject on which attribute is being set\n * @param[in]\tactmode\t-\tthe mode of setting, recovery or just alter\n *\n * @return\terror code\n * @retval\tPBSE_NONE\t-\tSuccess\n * @retval\t!PBSE_NONE\t-\tFailure\n *\n */\nint\naction_sched_iteration(attribute *pattr, void *pobj, int actmode)\n{\n\tif (pobj == dflt_scheduler) {\n\t\tset_sattr_l_slim(SVR_ATR_scheduler_iteration, pattr->at_val.at_long, SET);\n\t\tsvr_save_db(&server);\n\t}\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * \t\taction routine for the sched's \"sched_user\" attribute\n *\n * @param[in]\tpattr\t-\tattribute being set\n * @param[in]\tpobj\t-\tObject on 
which attribute is being set\n * @param[in]\tactmode\t-\tthe mode of setting, recovery or just alter\n *\n * @return\terror code\n * @retval\tPBSE_NONE\t-\tSuccess\n * @retval\t!PBSE_NONE\t-\tFailure\n *\n */\nint\naction_sched_user(attribute *pattr, void *pobj, int actmode)\n{\n\tif (actmode == ATR_ACTION_ALTER) {\n\t\t/*TODO*/\n\t}\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * \t\taction routine for the sched's \"preempt_order\" attribute\n *\n * @param[in]\tpattr\t-\tattribute being set\n * @param[in]\tpobj\t-\tObject on which attribute is being set\n * @param[in]\tactmode\t-\tthe mode of setting, recovery or just alter\n *\n * @return\terror code\n * @retval\tPBSE_NONE\t-\tSuccess\n * @retval\t!PBSE_NONE\t-\tFailure\n *\n */\nint\naction_sched_preempt_order(attribute *pattr, void *pobj, int actmode)\n{\n\tchar *tok = NULL;\n\tchar *endp = NULL;\n\tpbs_sched *psched = pobj;\n\n\tif ((actmode == ATR_ACTION_ALTER) || (actmode == ATR_ACTION_RECOV)) {\n\t\tchar copy[256] = {0};\n\n\t\tif (!(pattr->at_val.at_str))\n\t\t\treturn PBSE_BADATVAL;\n\t\tstrcpy(copy, pattr->at_val.at_str);\n\t\ttok = strtok(copy, \"\\t \");\n\n\t\tif (tok != NULL && !isdigit(tok[0])) {\n\t\t\tint i = 0;\n\t\t\tint num = 0;\n\t\t\tchar s_done = 0;\n\t\t\tchar c_done = 0;\n\t\t\tchar r_done = 0;\n\t\t\tchar d_done = 0;\n\t\t\tchar next_is_num = 0;\n\n\t\t\tpsched->preempt_order[0].order[0] = PREEMPT_METHOD_LOW;\n\t\t\tpsched->preempt_order[0].order[1] = PREEMPT_METHOD_LOW;\n\t\t\tpsched->preempt_order[0].order[2] = PREEMPT_METHOD_LOW;\n\t\t\tpsched->preempt_order[0].order[3] = PREEMPT_METHOD_LOW;\n\n\t\t\tpsched->preempt_order[0].high_range = 100;\n\t\t\ti = 0;\n\t\t\tdo {\n\t\t\t\tint j = 0;\n\t\t\t\tj = isdigit(tok[0]);\n\t\t\t\tif (j) {\n\t\t\t\t\tif (next_is_num) {\n\t\t\t\t\t\tnum = strtol(tok, &endp, 10);\n\t\t\t\t\t\tif (*endp == '\\0') {\n\t\t\t\t\t\t\tpsched->preempt_order[i].low_range = num + 1;\n\t\t\t\t\t\t\ti++;\n\t\t\t\t\t\t\tpsched->preempt_order[i].high_range = 
num;\n\t\t\t\t\t\t\tnext_is_num = 0;\n\t\t\t\t\t\t} else\n\t\t\t\t\t\t\treturn PBSE_BADATVAL;\n\t\t\t\t\t} else\n\t\t\t\t\t\treturn PBSE_BADATVAL;\n\t\t\t\t} else if (!next_is_num) {\n\t\t\t\t\tfor (j = 0; tok[j] != '\\0'; j++) {\n\t\t\t\t\t\tswitch (tok[j]) {\n\t\t\t\t\t\t\tcase 'S':\n\t\t\t\t\t\t\t\tif (!s_done) {\n\t\t\t\t\t\t\t\t\tpsched->preempt_order[i].order[j] = PREEMPT_METHOD_SUSPEND;\n\t\t\t\t\t\t\t\t\ts_done = 1;\n\t\t\t\t\t\t\t\t} else\n\t\t\t\t\t\t\t\t\treturn PBSE_BADATVAL;\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\tcase 'C':\n\t\t\t\t\t\t\t\tif (!c_done) {\n\t\t\t\t\t\t\t\t\tpsched->preempt_order[i].order[j] = PREEMPT_METHOD_CHECKPOINT;\n\t\t\t\t\t\t\t\t\tc_done = 1;\n\t\t\t\t\t\t\t\t} else\n\t\t\t\t\t\t\t\t\treturn PBSE_BADATVAL;\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\tcase 'R':\n\t\t\t\t\t\t\t\tif (!r_done) {\n\t\t\t\t\t\t\t\t\tpsched->preempt_order[i].order[j] = PREEMPT_METHOD_REQUEUE;\n\t\t\t\t\t\t\t\t\tr_done = 1;\n\t\t\t\t\t\t\t\t} else\n\t\t\t\t\t\t\t\t\treturn PBSE_BADATVAL;\n\t\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t\tcase 'D':\n\t\t\t\t\t\t\t\tif (!d_done) {\n\t\t\t\t\t\t\t\t\tpsched->preempt_order[i].order[j] = PREEMPT_METHOD_DELETE;\n\t\t\t\t\t\t\t\t\td_done = 1;\n\t\t\t\t\t\t\t\t} else\n\t\t\t\t\t\t\t\t\treturn PBSE_BADATVAL;\n\t\t\t\t\t\t\t\tbreak;\n\n\t\t\t\t\t\t\tdefault:\n\t\t\t\t\t\t\t\treturn PBSE_BADATVAL;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tnext_is_num = 1;\n\t\t\t\t\t}\n\t\t\t\t\ts_done = 0;\n\t\t\t\t\tc_done = 0;\n\t\t\t\t\tr_done = 0;\n\t\t\t\t\td_done = 0;\n\t\t\t\t} else\n\t\t\t\t\treturn PBSE_BADATVAL;\n\t\t\t\ttok = strtok(NULL, \"\\t \");\n\t\t\t} while (tok != NULL && i < PREEMPT_ORDER_MAX);\n\n\t\t\tif (tok != NULL)\n\t\t\t\treturn PBSE_BADATVAL;\n\n\t\t\tpsched->preempt_order[i].low_range = 0;\n\t\t} else\n\t\t\treturn PBSE_BADATVAL;\n\t}\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * \t\tpoke_scheduler - action routine for the server's \"scheduling\" attribute.\n *\t\tCall the scheduler whenever the attribute is set (or reset) to true.\n 
*\n * @param[in]\tpattr\t-\tpointer to attribute structure\n * @param[in]\tpobj\t-\tpointer to the server or scheduler being set\n * @param[in]\tactmode\t-\taction mode\n *\n * @return\tint\n * @retval\tzero\t: success\n */\n\nint\npoke_scheduler(attribute *pattr, void *pobj, int actmode)\n{\n\tif (pobj == &server || pobj == dflt_scheduler) {\n\t\tif (pobj == &server) {\n\t\t\t/* set this attribute on main scheduler */\n\t\t\tif (dflt_scheduler) {\n\t\t\t\tset_attr_with_attr(&sched_attr_def[SCHED_ATR_scheduling], get_sched_attr(dflt_scheduler, SCHED_ATR_scheduling), pattr, SET);\n\t\t\t\tsched_save_db(dflt_scheduler);\n\t\t\t}\n\t\t} else {\n\t\t\tset_sattr_l_slim(SVR_ATR_scheduling, pattr->at_val.at_long, SET);\n\t\t\tsvr_save_db(&server);\n\t\t}\n\t\tif (actmode == ATR_ACTION_ALTER) {\n\t\t\tif (pattr->at_val.at_long)\n\t\t\t\tset_scheduler_flag(SCH_SCHEDULE_CMD, dflt_scheduler);\n\t\t}\n\t} else {\n\t\tif (actmode == ATR_ACTION_ALTER) {\n\t\t\tif (pattr->at_val.at_long)\n\t\t\t\tset_scheduler_flag(SCH_SCHEDULE_CMD, (pbs_sched *) pobj);\n\t\t}\n\t}\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * \t\tSets default scheduler attributes\n *\n * @param[in] psched\t\t- Scheduler\n * @param[in] from_scheduler\t- flag to indicate if this function is called on a request from scheduler.\n *\n */\nvoid\nset_sched_default(pbs_sched *psched, int from_scheduler)\n{\n\tchar dir_path[MAXPATHLEN + 1] = {0};\n\n\tif (!psched)\n\t\treturn;\n\n\tif (!is_sched_attr_set(psched, SCHED_ATR_sched_cycle_len))\n\t\tset_sched_attr_l_slim(psched, SCHED_ATR_sched_cycle_len, PBS_SCHED_CYCLE_LEN_DEFAULT, SET);\n\tif (!is_sched_attr_set(psched, SCHED_ATR_schediteration))\n\t\tset_sched_attr_l_slim(psched, SCHED_ATR_schediteration, PBS_SCHEDULE_CYCLE, SET);\n\tif (!is_sched_attr_set(psched, SCHED_ATR_scheduling)) {\n\t\tif (psched != dflt_scheduler)\n\t\t\tset_sched_attr_b_slim(psched, SCHED_ATR_scheduling, FALSE, 
SET);\n\t\telse\n\t\t\tset_sched_attr_b_slim(psched, SCHED_ATR_scheduling, TRUE, SET);\n\t}\n\tif (!is_sched_attr_set(psched, SCHED_ATR_sched_state)) {\n\t\tif (psched != dflt_scheduler)\n\t\t\tset_sched_attr_str_slim(psched, SCHED_ATR_sched_state, SC_DOWN, NULL);\n\t\telse\n\t\t\tset_sched_attr_str_slim(psched, SCHED_ATR_sched_state, SC_IDLE, NULL);\n\t}\n\tif (!is_sched_attr_set(psched, SCHED_ATR_sched_priv)) {\n\t\tif (psched != dflt_scheduler)\n\t\t\t(void) snprintf(dir_path, MAXPATHLEN, \"%s/sched_priv_%s\", pbs_conf.pbs_home_path, psched->sc_name);\n\t\telse\n\t\t\t(void) snprintf(dir_path, MAXPATHLEN, \"%s/sched_priv\", pbs_conf.pbs_home_path);\n\t\tset_sched_attr_str_slim(psched, SCHED_ATR_sched_priv, dir_path, NULL);\n\t}\n\tif (!is_sched_attr_set(psched, SCHED_ATR_sched_log)) {\n\t\tif (psched != dflt_scheduler)\n\t\t\t(void) snprintf(dir_path, MAXPATHLEN, \"%s/sched_logs_%s\", pbs_conf.pbs_home_path, psched->sc_name);\n\t\telse\n\t\t\t(void) snprintf(dir_path, MAXPATHLEN, \"%s/sched_logs\", pbs_conf.pbs_home_path);\n\t\tset_sched_attr_str_slim(psched, SCHED_ATR_sched_log, dir_path, NULL);\n\t}\n\tif (!is_sched_attr_set(psched, SCHED_ATR_log_events)) {\n\t\tset_sched_attr_l_slim(psched, SCHED_ATR_log_events, SCHED_LOG_DFLT, SET);\n\t\t(get_sched_attr(psched, SCHED_ATR_log_events))->at_flags |= ATR_VFLAG_DEFLT;\n\t}\n\tif (!is_sched_attr_set(psched, SCHED_ATR_preempt_queue_prio)) {\n\t\tset_sched_attr_l_slim(psched, SCHED_ATR_preempt_queue_prio, PBS_PREEMPT_QUEUE_PRIO_DEFAULT, SET);\n\t\t(get_sched_attr(psched, SCHED_ATR_preempt_queue_prio))->at_flags |= ATR_VFLAG_DEFLT;\n\t}\n\tif (!is_sched_attr_set(psched, SCHED_ATR_preempt_prio)) {\n\t\tset_sched_attr_str_slim(psched, SCHED_ATR_preempt_prio, PBS_PREEMPT_PRIO_DEFAULT, NULL);\n\t\t(get_sched_attr(psched, SCHED_ATR_preempt_prio))->at_flags |= ATR_VFLAG_DEFLT;\n\t}\n\tif (!is_sched_attr_set(psched, SCHED_ATR_preempt_order)) {\n\t\tset_sched_attr_str_slim(psched, SCHED_ATR_preempt_order, 
PBS_PREEMPT_ORDER_DEFAULT, NULL);\n\t\taction_sched_preempt_order(get_sched_attr(psched, SCHED_ATR_preempt_order), psched, ATR_ACTION_ALTER);\n\t\t(get_sched_attr(psched, SCHED_ATR_preempt_order))->at_flags |= ATR_VFLAG_DEFLT;\n\t}\n\tif (!is_sched_attr_set(psched, SCHED_ATR_preempt_sort)) {\n\t\tset_sched_attr_str_slim(psched, SCHED_ATR_preempt_sort, PBS_PREEMPT_SORT_DEFAULT, NULL);\n\t\t(get_sched_attr(psched, SCHED_ATR_preempt_sort))->at_flags |= ATR_VFLAG_DEFLT;\n\t}\n\tif (!is_sched_attr_set(psched, SCHED_ATR_server_dyn_res_alarm)) {\n\t\tset_sched_attr_l_slim(psched, SCHED_ATR_server_dyn_res_alarm, PBS_SERVER_DYN_RES_ALARM_DEFAULT, SET);\n\t\t(get_sched_attr(psched, SCHED_ATR_server_dyn_res_alarm))->at_flags |= ATR_VFLAG_DEFLT;\n\t}\n\tif (!is_sched_attr_set(psched, SCHED_ATR_job_run_wait)) {\n\t\tset_sched_attr_str_slim(psched, SCHED_ATR_job_run_wait, RUN_WAIT_RUNJOB_HOOK, NULL);\n\t\t(get_sched_attr(psched, SCHED_ATR_job_run_wait))->at_flags |= ATR_VFLAG_DEFLT;\n\t}\n\tif (!is_sched_attr_set(psched, SCHED_ATR_throughput_mode) && strcmp(get_sched_attr_str(psched, SCHED_ATR_job_run_wait), RUN_WAIT_NONE)) {\n\t\tset_sched_attr_l_slim(psched, SCHED_ATR_throughput_mode, TRUE, SET);\n\t\t(get_sched_attr(psched, SCHED_ATR_throughput_mode))->at_flags |= ATR_VFLAG_DEFLT;\n\t}\n\tif (psched == dflt_scheduler) {\n\t\tif (!is_sched_attr_set(psched, SCHED_ATR_partition)) {\n\t\t\tset_sched_attr_str_slim(psched, SCHED_ATR_partition, DEFAULT_PARTITION, NULL);\n\t\t\t(get_sched_attr(psched, SCHED_ATR_partition))->at_flags |= ATR_VFLAG_DEFLT;\n\t\t}\n\t\tif (!is_sched_attr_set(psched, SCHED_ATR_SchedHost)) {\n\t\t\tset_sched_attr_str_slim(psched, SCHED_ATR_SchedHost, server_host, NULL);\n\t\t\t(get_sched_attr(psched, SCHED_ATR_SchedHost))->at_flags |= ATR_VFLAG_DEFLT;\n\t\t\tpsched->sc_conn_addr = get_hostaddr(server_host);\n\t\t}\n\t}\n\tset_scheduler_flag(SCH_CONFIGURE, psched);\n}\n\n/**\n * @brief\n * \t\taction routine for the scheduler's partition attribute\n *\n * 
@param[in]\tpattr\t-\tpointer to attribute structure\n * @param[in]\tpobj\t-\tnot used\n * @param[in]\tactmode\t-\taction mode\n *\n *\n * @return\terror code\n * @retval\tPBSE_NONE\t-\tSuccess\n * @retval\t!PBSE_NONE\t-\tFailure\n *\n */\n\nint\naction_sched_partition(attribute *pattr, void *pobj, int actmode)\n{\n\tpbs_sched *psched;\n\n\tif (actmode == ATR_ACTION_RECOV)\n\t\treturn PBSE_NONE;\n\n\tif (pobj == dflt_scheduler)\n\t\treturn PBSE_SCHED_OP_NOT_PERMITTED;\n\n\tif (pattr->at_val.at_str == NULL)\n\t\treturn PBSE_NONE;\n\tif (strcmp(pattr->at_val.at_str, DEFAULT_PARTITION) == 0)\n\t\treturn PBSE_DEFAULT_PARTITION;\n\tfor (psched = (pbs_sched *) GET_NEXT(svr_allscheds); psched; psched = (pbs_sched *) GET_NEXT(psched->sc_link)) {\n\t\tif (psched == pobj)\n\t\t\tcontinue;\n\t\tif (is_sched_attr_set(psched, SCHED_ATR_partition) && !strcmp(pattr->at_val.at_str, get_sched_attr_str(psched, SCHED_ATR_partition)))\n\t\t\treturn PBSE_SCHED_PARTITION_ALREADY_EXISTS;\n\t}\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\taction function for 'opt_backfill_fuzzy' sched attribute\n *\n * @param[in]\tpattr\t\tattribute being set\n * @param[in]\tpobj\t\tObject on which the attribute is being set\n * @param[in]\tactmode\t\tthe mode of setting\n *\n * @return error code\n */\nint\naction_opt_bf_fuzzy(attribute *pattr, void *pobj, int actmode)\n{\n\tchar *str = pattr->at_val.at_str;\n\tchar *endp = NULL;\n\n\tif (str == NULL)\n\t\treturn PBSE_BADATVAL;\n\n\tif (actmode == ATR_ACTION_ALTER || actmode == ATR_ACTION_RECOV) {\n\t\t/* Check if this is numeric, also acceptable */\n\t\tstrtol(str, &endp, 10);\n\t\tif (*endp == '\\0')\n\t\t\treturn PBSE_NONE;\n\n\t\tif (!strcasecmp(str, \"off\") ||\n\t\t    !strcasecmp(str, \"low\") ||\n\t\t    !strcasecmp(str, \"medium\") || !strcasecmp(str, \"med\") ||\n\t\t    !strcasecmp(str, \"high\"))\n\t\t\treturn PBSE_NONE;\n\t\telse\n\t\t\treturn PBSE_BADATVAL;\n\t}\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief action function for 'job_run_wait' 
sched attribute\n *\n * @param[in]\tpattr\t\tattribute being set\n * @param[in]\tpobj\t\tObject on which the attribute is being set\n * @param[in]\tactmode\t\tthe mode of setting\n *\n * @return error code\n */\nint\naction_job_run_wait(attribute *pattr, void *pobj, int actmode)\n{\n\tchar *str = pattr->at_val.at_str;\n\n\tif (str == NULL)\n\t\treturn PBSE_BADATVAL;\n\n\tif (actmode == ATR_ACTION_ALTER || actmode == ATR_ACTION_NEW || actmode == ATR_ACTION_RECOV) {\n\t\tpbs_sched *psched = NULL;\n\t\tchar *tp_val = NULL;\n\n\t\tif (!strcasecmp(str, RUN_WAIT_EXECJOB_HOOK))\n\t\t\ttp_val = ATR_FALSE;\n\t\telse if (!strcasecmp(str, RUN_WAIT_RUNJOB_HOOK))\n\t\t\ttp_val = ATR_TRUE;\n\t\telse if (!strcasecmp(str, RUN_WAIT_NONE))\n\t\t\ttp_val = NULL;\n\t\telse\n\t\t\treturn PBSE_BADATVAL;\n\n\t\tpsched = (pbs_sched *) pobj;\n\t\tif (tp_val == NULL)\n\t\t\t/* No equivalent value of 'none' for throughput_mode, so unset it */\n\t\t\tclear_sched_attr(psched, SCHED_ATR_throughput_mode);\n\t\telse\n\t\t\tset_sched_attr_str_slim(psched, SCHED_ATR_throughput_mode, tp_val, NULL);\n\t}\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief action function for 'throughput_mode' sched attribute\n *\n * @param[in]\tpattr\t\tattribute being set\n * @param[in]\tpobj\t\tObject on which the attribute is being set\n * @param[in]\tactmode\t\tthe mode of setting\n *\n * @return error code\n */\nint\naction_throughput_mode(attribute *pattr, void *pobj, int actmode)\n{\n\tlong val = pattr->at_val.at_long;\n\tpbs_sched *psched = NULL;\n\n\tpsched = (pbs_sched *) pobj;\n\tif (actmode == ATR_ACTION_ALTER || actmode == ATR_ACTION_NEW || actmode == ATR_ACTION_RECOV) {\n\t\tchar *jrw_val = NULL;\n\n\t\tif (val)\n\t\t\tjrw_val = RUN_WAIT_RUNJOB_HOOK;\n\t\telse\n\t\t\tjrw_val = RUN_WAIT_EXECJOB_HOOK;\n\n\t\tset_sched_attr_str_slim(psched, SCHED_ATR_job_run_wait, jrw_val, NULL);\n\t}\n\n\t/* Log a message letting user know that this attribute is deprecated */\n\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_REQUEST, 
LOG_WARNING, psched->sc_name,\n\t\t  \"'throughput_mode' is being deprecated, it is recommended to use 'job_run_wait'\");\n\n\treturn PBSE_NONE;\n}\n"
  },
  {
    "path": "src/server/setup_resc.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n *\n * @brief\n * \t\tFile contains functions related to setting up of resources.\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <ctype.h>\n#include <string.h>\n#include <fcntl.h>\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"pbs_error.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"log.h\"\n#include \"pbs_python.h\"\n#include \"sched_cmds.h\"\n#include \"pbs_nodes.h\"\n#include <sys/file.h>\n#include \"libutil.h\"\n#include \"pbs_sched.h\"\n\nextern char *msg_daemonname;\n#ifndef PBS_MOM\nextern struct python_interpreter_data svr_interp_data;\n\nstruct resc_sum *svr_resc_sum;\n\n/**\n * @brief\n * \t\tHelper function to restart the Python interpreter and record the\n * \t\toccurrence in the log.\n *\n * @param[in]\tcaller\t-\tThe name of the calling function (for logging)\n */\nint\nrestart_python_interpreter(const char *caller)\n{\n\tint rc;\n\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t  LOG_INFO, (char *) caller,\n\t\t  \"Restarting Python interpreter as resourcedef file has changed.\");\n\tpbs_python_ext_shutdown_interpreter(&svr_interp_data);\n\trc = pbs_python_ext_start_interpreter(&svr_interp_data);\n\tif (rc != 0) {\n\t\tlog_err(PBSE_INTERNAL, (char *) caller, \"Failed to 
restart Python interpreter\");\n\t}\n\treturn rc;\n}\n#endif\n\n/**\n * @brief\n * \t\tAdd a resource to the resource definition array and update the\n * \t\tresourcedef file\n *\n * @param[in]\tname\t-\tThe name of the resource to operate on\n * @param[in]\ttype\t-\tThe type of the resource\n * @param[in]\tperms\t-\tThe permissions/flags of the resource\n *\n * @return\tint\n * @retval\t-2\t: if resource already exists as a different type or the h flag\n * \t\t\t\t\tis being modified\n * @retval\t-1\t: on any other error\n * @retval\t0\t: if ok\n */\nint\nadd_resource_def(char *name, int type, int perms)\n{\n\tresource_def *prdef;\n\tint rc;\n\n\t/* first see if the resource \"name\" already exists */\n\tif ((prdef = find_resc_def(svr_resc_def, name)) != NULL) {\n\t\tif (prdef->rs_type != type)\n\t\t\treturn -2;\n\t\tif ((prdef->rs_flags & ATR_DFLAG_CVTSLT) != (perms & ATR_DFLAG_CVTSLT))\n\t\t\treturn -2;\n\t\treturn 0; /* these are correct, just return */\n\t}\n\n\tif (expand_resc_array(name, type, perms) == -1) {\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_RESC, LOG_ERR, msg_daemonname, \"Error creating resource\");\n\t\treturn -1;\n\t}\n\n\trc = update_resource_def_file(name, RESDEF_CREATE, type, perms);\n\tif (rc < 0) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"resource %s can not be defined\", name);\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_ERR, msg_daemonname, log_buffer);\n\t\treturn -1;\n\t}\n#ifndef PBS_MOM\n\tset_scheduler_flag(SCH_CONFIGURE, NULL);\n#endif\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tHelper function to determine whether a line in the resourcedef file\n * \t\tis an exact match to a given resource name\n *\n * @param[in]\tline\t-\tThe line in a resourcedef file\n * @param[in]\tname\t-\tThe name of the resource to match\n * @note\n * \t\tNote that entries in the resourcedef file are of the form\n *\t\t<resource name><white space>type=<type><white space>[flag=<flag>]\n *\n * @return\tint\n * 
@retval\t1\t: if a match is found\n * @retval\t0\t: otherwise\n */\nstatic int\nis_res_in_line(char *line, char *name)\n{\n\tint i, j;\n\n\tif ((line == NULL) || (name == NULL))\n\t\treturn 0;\n\n\tfor (i = 0; (line[i] != '\\0') && isspace(line[i]); i++)\n\t\t;\n\n\tfor (j = 0; (line[i] != '\\0') && (name[j] != '\\0') && (line[i] == name[j]); i++, j++)\n\t\t;\n\n\tif ((j == 0) || (name[j] != '\\0'))\n\t\treturn 0;\n\n\tif (!isspace(line[i]) && (line[i] != '\\0'))\n\t\treturn 0;\n\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tmodify a resource type/flag in the resourcedef file\n *\n * @param[in]\tname\t-\tThe name of the resource to operate on\n * @param[in]\top\t-\tThe operation to perform, one of RESDEF_CREATE,\n * \t\t\t\t\t\tRESDEF_UPDATE, RESDEF_DELETE\n * @param[in]\ttype\t-\tThe type of the resource\n * @param[in]\tperms\t-\tThe permissions/flags of the resource\n *\n * @return\tWhether the operation was successful or not\n * @retval\t-1\t: on error\n * @retval\t0\t: on success\n */\nint\nupdate_resource_def_file(char *name, resdef_op_t op, int type, int perms)\n{\n\tFILE *rfile;\n\tFILE *tmpfile;\n\tint tmp_fd;\n\tint fd;\n\textern char *path_rescdef;\n\tchar template[] = \"pbstmpXXXXXX\";\n\tchar *line;\n\tint line_len = 256;\n\tchar msg[LOG_BUF_SIZE];\n\tstruct resc_type_map *p_resc_type_map = NULL;\n\tchar *flags = NULL;\n\tint rc;\n\n\tfd = open(path_rescdef, O_CREAT | O_RDONLY, 0644);\n\tif (fd == -1)\n\t\treturn -1;\n\n\tif ((rfile = fdopen(fd, \"r\")) == NULL) {\n\t\tclose(fd);\n\t\treturn -1;\n\t}\n\ttmp_fd = mkstemp(template);\n\ttmpfile = fdopen(tmp_fd, \"w\");\n\t/* set mode bits because mkstemp() created files don't ensure 0644 */\n\tfchmod(tmp_fd, S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH);\n\tline = malloc(line_len * sizeof(char));\n\tif (line == NULL) {\n\t\tlog_err(errno, __func__, MALLOC_ERR_MSG);\n\t\tfclose(tmpfile);\n\t\tunlink(template);\n\t\treturn -1;\n\t}\n\n\tif (lock_file(fileno(rfile), F_RDLCK, path_rescdef, LOCK_RETRY_DEFAULT, msg, 
sizeof(msg)) != 0) {\n\t\tlog_err(errno, __func__, msg);\n\t\tfclose(rfile);\n\t\tfclose(tmpfile);\n\t\tunlink(template);\n\t\tfree(line);\n\t\treturn -1;\n\t}\n\tif ((op == RESDEF_UPDATE) || (op == RESDEF_CREATE)) {\n\t\tp_resc_type_map = find_resc_type_map_by_typev(type);\n\t\tif (p_resc_type_map == NULL) {\n\t\t\t(void) fclose(rfile);\n\t\t\tfree(line);\n\t\t\tfclose(tmpfile);\n\t\t\tunlink(template);\n\t\t\treturn -1;\n\t\t}\n\t\tflags = find_resc_flag_map(perms);\n\t}\n\n\twhile (pbs_fgets(&line, &line_len, rfile)) {\n\t\tif (((op == RESDEF_UPDATE) || (op == RESDEF_DELETE)) &&\n\t\t    ((line[0] != '#') && (is_res_in_line(line, name)))) {\n\t\t\tif (op == RESDEF_UPDATE) {\n\t\t\t\tfprintf(tmpfile, \"%s type=%s\", name, p_resc_type_map->rtm_rname);\n\t\t\t\tif ((flags != NULL) && (flags[0] != '\\0')) {\n\t\t\t\t\tfprintf(tmpfile, \" flag=%s\", flags);\n\t\t\t\t}\n\t\t\t\tfprintf(tmpfile, \"\\n\");\n\t\t\t} else if (op == RESDEF_DELETE) {\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t} else {\n\t\t\tfprintf(tmpfile, \"%s\", line);\n\t\t}\n\t}\n\tif (op == RESDEF_CREATE) {\n\t\tfprintf(tmpfile, \"%s type=%s\", name, p_resc_type_map->rtm_rname);\n\t\tif ((flags != NULL) && (strcmp(flags, \"\") != 0)) {\n\t\t\tfprintf(tmpfile, \" flag=%s\", flags);\n\t\t}\n\t\tfprintf(tmpfile, \"\\n\");\n\t}\n\n\tif (lock_file(fileno(rfile), F_UNLCK, path_rescdef, LOCK_RETRY_DEFAULT, msg, sizeof(msg)) != 0)\n\t\tlog_err(errno, __func__, msg);\n\n\t(void) fclose(rfile);\n\t(void) fclose(tmpfile);\n\n\tfree(line);\n\tfree(flags);\n\n\trc = 0;\n\tif (rename(template, path_rescdef) != 0) {\n\t\trc = 1;\n\t}\n\tif (rc != 0) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"error renaming resourcedef file\");\n\t\tlog_err(errno, __func__, log_buffer);\n\t\tunlink(template);\n\t\treturn -1;\n\t}\n\n\tunlink(template);\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\texpand_resc_array - expand the list (no longer an array) of resource\n *\t\tdefinitions, linking the new to the current last entry\n *\n * 
@param[in]\trname\t-\tThe name of the resource to operate on\n * @param[in]\trtype\t-\tThe type of the resource\n * @param[in]\trflag\t-\tThe permissions/flags of the resource\n *\n * @return\tint\n * @retval\t-1\t- error\n * @retval\t0\t- success\n */\nint\nexpand_resc_array(char *rname, int rtype, int rflag)\n{\n\tresource_def *pnew;\n\tresource_def *pold;\n\tstruct resc_type_map *p_resc_type_map;\n\textern void *resc_attrdef_idx;\n\n\t/* get mapping between type and functions */\n\n\tp_resc_type_map = find_resc_type_map_by_typev(rtype);\n\tif (p_resc_type_map == NULL)\n\t\treturn -1;\n\n\t/* find the old last entry */\n\tpold = svr_resc_def;\n\twhile (pold->rs_next)\n\t\tpold = pold->rs_next;\n\n\t/* allocate new resc_def entry */\n\n\tpnew = (resource_def *) malloc(sizeof(resource_def));\n\tif (pnew == NULL)\n\t\treturn (-1);\n\n\tif ((pnew->rs_name = strdup(rname)) == NULL) {\n\t\tfree(pnew);\n\t\treturn (-1);\n\t}\n\tpnew->rs_decode = p_resc_type_map->rtm_decode;\n\tpnew->rs_encode = p_resc_type_map->rtm_encode;\n\tpnew->rs_set = p_resc_type_map->rtm_set;\n\tpnew->rs_comp = p_resc_type_map->rtm_comp;\n\tpnew->rs_free = p_resc_type_map->rtm_free;\n\tpnew->rs_action = NULL_FUNC_RESC;\n\tpnew->rs_custom = 1; /*  built-in resources are loaded from XML defn and initialized to 0 */\n\tpnew->rs_flags = rflag;\n\tpnew->rs_type = rtype;\n\tpnew->rs_entlimflg = 0;\n\tpnew->rs_next = NULL;\n\n\tif (pbs_idx_insert(resc_attrdef_idx, pnew->rs_name, pnew) != PBS_IDX_RET_OK) {\n\t\tfree(pnew->rs_name);\n\t\tfree(pnew);\n\t\treturn (-1);\n\t}\n\n\tpold->rs_next = pnew;\n\tsvr_resc_size++;\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tSetup resource definitions\n * @par\n * \t\tRead the file, \"resourcedef\", which defines new resources.\n * \t\tExpand the array of resource_defs\n *\n * @param[in]\tautocorrect\t-\tWhether to autocorrect (when possible) erroneous\n * \t\t\t\t\t\t\t\tresource flags/type combinations.\n *\n * @retval\t-1\t: on error\n * @retval\t-2\t: on error that 
got auto-corrected\n * @retval\t0\t: otherwise.\n * @par\n *  \tFormat of entries in the file are:\n *\t    resource name type=x flag=y\n * \twhere\n *\t\tx is \"long\", \"float\",  \"size\", \"boolean\" or  \"string\"\n *\t\ty is a combination of the characters 'n' and 'q'\n * @par\n *\t\tIf routine returns -1 or -2, then \"log_buffer\" contains a message to\n *\t\tbe logged.\n *\n * @par MT-safe: No\n */\n\nint\nsetup_resc(int autocorrect)\n{\n\tFILE *nin;\n\tchar *line = NULL;\n\tchar buf[4096];\n\tchar *token;\n\tint linenum;\n\tint err;\n\tchar *val;\n\tchar xchar;\n\tchar *rescname;\n\tint resc_type;\n\tint resc_flag;\n\tint flag_ir = 0;\n\tint rc;\n\tint err_code = -1;\n\tint len = 0;\n\tresource_def *presc;\n\textern char *path_rescdef;\n\tstatic char *invalchar = \"invalid character in resource \"\n\t\t\t\t \"name \\\"%s\\\" on line %d of \";\n\tstatic char *invalchar_skip = \"invalid character in resource name \\\"%s\\\"\";\n\n\tif ((nin = fopen(path_rescdef, \"r\")) == NULL) {\n\t\treturn 0;\n\t}\n\n\tfor (linenum = 1; pbs_fgets(&line, &len, nin); linenum++) {\n\t\tresc_flag = READ_WRITE;\n\t\tresc_type = ATR_TYPE_LONG;\n\n\t\tif (line[0] == '#') /* comment */\n\t\t\tcontinue;\n\n\t\t/* first token is the resource name */\n\n\t\ttoken = parse_node_token(line, 1, &err, &xchar);\n\t\tif (token == NULL)\n\t\t\tcontinue; /* blank line */\n\t\tif (err) {\n\n\t\t\tif (autocorrect) {\n\t\t\t\tif (err_code != -2) {\n\t\t\t\t\terr_code = -2;\n\t\t\t\t}\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), invalchar_skip, token);\n\t\t\t\tfprintf(stderr, \"%s\\n\", log_buffer);\n\t\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t  LOG_WARNING, msg_daemonname, log_buffer);\n\t\t\t\tcontinue;\n\t\t\t} else {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), invalchar, token, linenum);\n\t\t\t\tgoto errtoken2;\n\t\t\t}\n\t\t}\n\n\t\trc = verify_resc_name(token);\n\t\tif (rc == -1) {\n\t\t\tif (autocorrect) {\n\t\t\t\tif (err_code != -2) 
{\n\t\t\t\t\terr_code = -2;\n\t\t\t\t}\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"resource name \\\"%s\\\" does not \"\n\t\t\t\t\t\t\t\t\t \"start with alpha; ignoring resource.\",\n\t\t\t\t\t token);\n\t\t\t\tfprintf(stderr, \"%s\\n\", log_buffer);\n\t\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t  LOG_WARNING, msg_daemonname, log_buffer);\n\t\t\t\tcontinue;\n\t\t\t} else {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"resource name \\\"%s\\\" does not \"\n\t\t\t\t\t\t\t\t\t \"start with alpha on line %d of \",\n\t\t\t\t\t token, linenum);\n\t\t\t\tgoto errtoken2;\n\t\t\t}\n\t\t} else if (rc == -2) {\n\t\t\tif (autocorrect) {\n\t\t\t\tif (err_code != -2) {\n\t\t\t\t\terr_code = -2;\n\t\t\t\t}\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), invalchar_skip, token);\n\t\t\t\tfprintf(stderr, \"%s\\n\", log_buffer);\n\t\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER, LOG_WARNING, msg_daemonname, log_buffer);\n\t\t\t\tcontinue;\n\t\t\t} else {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), invalchar, token, linenum);\n\t\t\t\tgoto errtoken2;\n\t\t\t}\n\t\t}\n\n\t\trescname = token;\n\n\t\t/* now process remaining tokens (if any), \t*/\n\t\t/* they must be of the form keyword=value\t*/\n\n\t\twhile (1) {\n\n\t\t\ttoken = parse_node_token(NULL, 0, &err, &xchar);\n\t\t\tif (err) {\n\t\t\t\tif (autocorrect) {\n\t\t\t\t\tif (err_code != -2) {\n\t\t\t\t\t\terr_code = -2;\n\t\t\t\t\t}\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), invalchar_skip, token);\n\t\t\t\t\tfprintf(stderr, \"%s\\n\", log_buffer);\n\t\t\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t\t  LOG_WARNING, msg_daemonname, log_buffer);\n\t\t\t\t\tbreak;\n\t\t\t\t} else {\n\t\t\t\t\tgoto errtoken1;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (token == NULL)\n\t\t\t\tbreak;\n\n\t\t\tif (xchar == '=') {\n\n\t\t\t\t/* have  keyword=value */\n\n\t\t\t\tval = parse_node_token(NULL, 0, &err, &xchar);\n\t\t\t\tif ((val == NULL) || err || (xchar == '=')) 
{\n\t\t\t\t\tif (autocorrect) {\n\t\t\t\t\t\tif (err_code != -2) {\n\t\t\t\t\t\t\terr_code = -2;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), invalchar_skip, token);\n\t\t\t\t\t\tfprintf(stderr, \"%s\\n\", log_buffer);\n\t\t\t\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t\t\t  LOG_WARNING, msg_daemonname, log_buffer);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tgoto errtoken1;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif (strcmp(token, \"type\") == 0) {\n\t\t\t\t\tif (parse_resc_type(val, &resc_type) == -1) {\n\t\t\t\t\t\tif (autocorrect) {\n\t\t\t\t\t\t\tif (err_code != -2) {\n\t\t\t\t\t\t\t\terr_code = -2;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"invalid resource type %s\", val);\n\t\t\t\t\t\t\tfprintf(stderr, \"%s\\n\", log_buffer);\n\t\t\t\t\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t\t\t\t  LOG_WARNING, msg_daemonname, log_buffer);\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tgoto errtoken1;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t} else if (strcmp(token, \"flag\") == 0) {\n\t\t\t\t\tif (parse_resc_flags(val, &flag_ir, &resc_flag) == -1) {\n\t\t\t\t\t\tif (autocorrect) {\n\t\t\t\t\t\t\tif (err_code != -2) {\n\t\t\t\t\t\t\t\terr_code = -2;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"Invalid resource flag %s\", val);\n\t\t\t\t\t\t\tfprintf(stderr, \"%s\\n\", log_buffer);\n\t\t\t\t\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t\t\t\t  LOG_WARNING, msg_daemonname, log_buffer);\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tgoto errtoken1;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tif (autocorrect) {\n\t\t\t\t\t\tif (err_code != -2) {\n\t\t\t\t\t\t\terr_code = -2;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"Unrecognized token %s; skipping\", token);\n\t\t\t\t\t\tfprintf(stderr, \"%s\\n\", log_buffer);\n\t\t\t\t\t\tlog_event(PBSEVENT_ADMIN, 
PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t\t\t  LOG_WARNING, msg_daemonname, log_buffer);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tgoto errtoken1;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif (autocorrect) {\n\t\t\t\t\tif (err_code != -2) {\n\t\t\t\t\t\terr_code = -2;\n\t\t\t\t\t}\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"Unrecognized token %s; skipping\", token);\n\t\t\t\t\tfprintf(stderr, \"%s\\n\", log_buffer);\n\t\t\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t\t  LOG_WARNING, msg_daemonname, log_buffer);\n\t\t\t\t\tbreak;\n\t\t\t\t} else {\n\t\t\t\t\tgoto errtoken1;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t\trc = verify_resc_type_and_flags(resc_type, &flag_ir, &resc_flag, rescname, buf, sizeof(buf), autocorrect);\n\t\tif (rc != 0) {\n\t\t\tfprintf(stderr, \"%s\\n\", buf);\n\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t\t  LOG_WARNING, msg_daemonname, buf);\n\t\t\t/* with autocorrect enabled a return code of -2 would be returned on error */\n\t\t\tif (rc == -1) {\n\t\t\t\tgoto errtoken3;\n\t\t\t}\n\t\t}\n\t\t/* create resource definition */\n\n\t\tpresc = find_resc_def(svr_resc_def, rescname);\n\t\tif (presc != NULL) {\n\t\t\tif (resc_type == presc->rs_type) {\n\t\t\t\tresc_flag &= (ATR_DFLAG_RASSN |\n\t\t\t\t\t      ATR_DFLAG_ANASSN |\n\t\t\t\t\t      ATR_DFLAG_FNASSN |\n\t\t\t\t\t      ATR_DFLAG_CVTSLT |\n\t\t\t\t\t      ATR_DFLAG_MOM |\n\t\t\t\t\t      READ_WRITE);\n\t\t\t\tpresc->rs_flags &= ~(ATR_DFLAG_RASSN |\n\t\t\t\t\t\t     ATR_DFLAG_ANASSN |\n\t\t\t\t\t\t     ATR_DFLAG_FNASSN |\n\t\t\t\t\t\t     ATR_DFLAG_CVTSLT |\n\t\t\t\t\t\t     ATR_DFLAG_MOM |\n\t\t\t\t\t\t     READ_WRITE);\n\t\t\t\tpresc->rs_flags |= resc_flag;\n#ifndef PBS_MOM\n\t\t\t} else {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"Erroneous to define duplicate \"\n\t\t\t\t\t\"resource \\\"%s\\\" with differing type \"\n\t\t\t\t\t\"specification, ignoring new definition\",\n\t\t\t\t\trescname);\n\t\t\t\tfprintf(stderr, \"%s\\n\", 
log_buffer);\n\t\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t\t\t  LOG_WARNING, msg_daemonname, log_buffer);\n#endif\n\t\t\t}\n\t\t} else {\n\t\t\terr = expand_resc_array(rescname, resc_type, resc_flag);\n\t\t\tif (err == -1) {\n\t\t\t\t(void) strcpy(log_buffer, \"error allocating memory in setup_resc\");\n\t\t\t\tgoto errtoken3;\n\t\t\t}\n\t\t}\n\t}\n\n\tfree(line);\n\tfclose(nin);\n\treturn 0;\n\nerrtoken1:\n\tsprintf(log_buffer, \"token \\\"%s\\\" in error on line %d of \",\n\t\ttoken, linenum);\nerrtoken2:\n\tstrcat(log_buffer, path_rescdef);\nerrtoken3:\n\tfree(line);\n\tfclose(nin);\n\treturn err_code;\n}\n\n#ifndef PBS_MOM\n/**\n * @brief\n * \t\tUpdate the global resource summation array that tracks the resources\n * \t\tthat need to be accumulated across chunks\n *\n * @see svr_resc_sum\n */\nvoid\nupdate_resc_sum(void)\n{\n\tresource_def *prdef;\n\tstruct resc_sum *tmp_resc_sum;\n\tint i = 0;\n\n\tfor (prdef = svr_resc_def; prdef; prdef = prdef->rs_next) {\n\t\tif (prdef->rs_flags & (ATR_DFLAG_RASSN | ATR_DFLAG_ANASSN | ATR_DFLAG_FNASSN))\n\t\t\ti++;\n\t}\n\n\t/* allocating i+1 for the NULL terminator */\n\ttmp_resc_sum = (struct resc_sum *) calloc((size_t) (i + 1), sizeof(struct resc_sum));\n\tif (tmp_resc_sum == NULL) {\n\t\tlog_err(-1, \"setup_resc\", \"unable to malloc for svr_resc_sum\");\n\t\treturn;\n\t}\n\n\tif (svr_resc_sum != NULL)\n\t\tfree(svr_resc_sum);\n\n\tsvr_resc_sum = tmp_resc_sum;\n\n\tfor (i = 0, prdef = svr_resc_def; prdef; prdef = prdef->rs_next) {\n\t\tif (prdef->rs_flags & (ATR_DFLAG_RASSN | ATR_DFLAG_ANASSN | ATR_DFLAG_FNASSN)) {\n\t\t\tsvr_resc_sum[i].rs_def = prdef;\n\t\t\tsvr_resc_sum[i].rs_prs = NULL;\n\t\t\t(void) memset((char *) &svr_resc_sum[i].rs_attr, 0, sizeof(struct attribute));\n\t\t\tsvr_resc_sum[i].rs_attr.at_type = prdef->rs_type;\n\t\t\tsvr_resc_sum[i].rs_set = 0;\n\t\t\ti++;\n\t\t}\n\t}\n\tsvr_resc_sum[i].rs_def = NULL;\n}\n#endif\n"
  },
  {
    "path": "src/server/stat_job.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n/*\n * @file\tstat_job.c\n *\n * @brief\n * \tstat_job.c\t-\tFunctions which support the Status Job Batch Request.\n *\n * Included functions are:\n *\tsvrcached()\n *\tstatus_attrib()\n *\tstatus_job()\n *\tstatus_subjob()\n *\n */\n#include <sys/types.h>\n#include <stdlib.h>\n#include \"libpbs.h\"\n#include <ctype.h>\n#include <time.h>\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"server.h\"\n#include \"credential.h\"\n#include \"batch_request.h\"\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"queue.h\"\n#include \"work_task.h\"\n#include \"pbs_error.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"pbs_ifl.h\"\n#include \"ifl_internal.h\"\n\n/* Global Data Items: */\n\nextern attribute_def job_attr_def[];\nextern int resc_access_perm; /* see encode_resc() in attr_fn_resc.c */\nextern struct server server;\nextern char statechars[];\nextern time_t time_now;\n\n/**\n * @brief\n * \t\tsvrcached - either link in (to phead) a cached svrattrl struct which is\n *\t\tpointed to by the attribute, or if the cached struct isn't there or\n *\t\tis out of date, then replace it with a new svrattrl structure.\n * @par\n *\t\tWhen replacing, unlink and delete old one if the reference count goes\n *\t\tto 
zero.\n *\n * @param[in,out]\tpat\t-\tattribute structure which contains a cached svrattrl struct\n * @param[in,out]\tphead\t-\tlist of new attribute values\n * @param[in]\tpdef\t-\tattribute for any parent object.\n *\n * @note\n *\tIf an attribute has the ATR_DFLAG_HIDDEN flag set, then no\n *\tneed to obtain and cache new svrattrl values.\n */\n\nstatic void\nsvrcached(attribute *pat, pbs_list_head *phead, attribute_def *pdef)\n{\n\tsvrattrl *working = NULL;\n\tsvrattrl *wcopy;\n\tsvrattrl *encoded;\n\n\tif (pdef == NULL)\n\t\treturn;\n\n\tif ((pdef->at_flags & ATR_DFLAG_HIDDEN) &&\n\t    (get_sattr_long(SVR_ATR_show_hidden_attribs) == 0)) {\n\t\treturn;\n\t}\n\tif (pat->at_flags & ATR_VFLAG_MODCACHE) {\n\t\t/* free old cache value if the value has changed */\n\t\tfree_svrcache(pat);\n\t\tencoded = NULL;\n\t} else {\n\t\tif (resc_access_perm & PRIV_READ)\n\t\t\tencoded = pat->at_priv_encoded;\n\t\telse\n\t\t\tencoded = pat->at_user_encoded;\n\t}\n\n\tif ((encoded == NULL) || (pat->at_flags & ATR_VFLAG_MODCACHE)) {\n\t\tif (is_attr_set(pat)) {\n\t\t\t/* encode and cache new svrattrl structure */\n\t\t\t(void) pdef->at_encode(pat, phead, pdef->at_name,\n\t\t\t\t\t       NULL, ATR_ENCODE_CLIENT, &working);\n\t\t\tif (resc_access_perm & PRIV_READ)\n\t\t\t\tpat->at_priv_encoded = working;\n\t\t\telse\n\t\t\t\tpat->at_user_encoded = working;\n\n\t\t\tpat->at_flags &= ~ATR_VFLAG_MODCACHE;\n\t\t\twhile (working) {\n\t\t\t\tworking->al_refct++; /* incr ref count */\n\t\t\t\tworking = working->al_sister;\n\t\t\t}\n\t\t}\n\t} else {\n\t\t/* can use the existing cached svrattrl structure */\n\n\t\tworking = encoded;\n\t\tif (working->al_refct < 2) {\n\t\t\twhile (working) {\n\t\t\t\tCLEAR_LINK(working->al_link);\n\t\t\t\tif (phead != NULL)\n\t\t\t\t\tappend_link(phead, &working->al_link, working);\n\t\t\t\tworking->al_refct++; /* incr ref count */\n\t\t\t\tworking = working->al_sister;\n\t\t\t}\n\t\t} else {\n\t\t\t/*\n\t\t\t * already linked in, must make a copy to link\n\t\t\t * 
NOTE: the copy points to the original's data\n\t\t\t * so it should be freed by itself, hence the\n\t\t\t * ref count is set to 1 and the sisters are not\n\t\t\t * linked in\n\t\t\t */\n\t\t\twhile (working) {\n\t\t\t\twcopy = malloc(sizeof(struct svrattrl));\n\t\t\t\tif (wcopy) {\n\t\t\t\t\t*wcopy = *working;\n\t\t\t\t\tworking = working->al_sister;\n\t\t\t\t\tCLEAR_LINK(wcopy->al_link);\n\t\t\t\t\tif (phead != NULL)\n\t\t\t\t\t\tappend_link(phead, &wcopy->al_link, wcopy);\n\t\t\t\t\twcopy->al_refct = 1;\n\t\t\t\t\twcopy->al_sister = NULL;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\n/*\n * status_attrib - add each requested or all attributes to the status reply\n *\n * @param[in,out]\tpal \t-\tspecific attributes to status\n * @param[in]\t\tpidx \t-\tSearch index of the attribute array\n * @param[in]\t\tpadef\t-\tattribute definition structure\n * @param[in,out]\tpattr\t-\tattribute structure\n * @param[in]\t\tlimit\t-\tlimit on size of def array\n * @param[in]\t\tpriv\t-\tuser-client privilege\n * @param[in,out]\tphead\t-\tpbs_list_head\n * @param[out]\t\tbad \t-\tRETURN: index of first bad attribute\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t-1\t: on error (bad attribute)\n */\n\nint\nstatus_attrib(svrattrl *pal, void *pidx, attribute_def *padef, attribute *pattr, int limit, int priv, pbs_list_head *phead, int *bad)\n{\n\tint index;\n\tint nth = 0;\n\n\tpriv &= (ATR_DFLAG_RDACC | ATR_DFLAG_SvWR); /* user-client privilege */\n\tresc_access_perm = priv;\t\t    /* pass privilege to encode_resc()\t*/\n\n\t/* for each attribute asked for or for all attributes, add to reply */\n\n\tif (pal) { /* client specified certain attributes */\n\t\twhile (pal) {\n\t\t\t++nth;\n\t\t\tindex = find_attr(pidx, padef, pal->al_name);\n\t\t\tif (index < 0) {\n\t\t\t\t*bad = nth;\n\t\t\t\treturn (-1);\n\t\t\t}\n\t\t\tif ((padef + index)->at_flags & priv) {\n\t\t\t\tsvrcached(pattr + index, phead, padef + index);\n\t\t\t}\n\t\t\tpal = (svrattrl *) 
GET_NEXT(pal->al_link);\n\t\t}\n\t} else { /* none specified, return all readable attributes */\n\t\tfor (index = 0; index < limit; index++) {\n\t\t\tif ((padef + index)->at_flags & priv) {\n\t\t\t\tsvrcached(pattr + index, phead, padef + index);\n\t\t\t}\n\t\t}\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tstatus_job - Build the status reply for a single job, regular or Array,\n *\t\tbut not a subjob of an Array Job.\n *\n * @param[in,out]\tpjob\t-\tptr to job to status\n * @param[in]\t\tpreq\t-\trequest structure\n * @param[in]\t\tpal\t-\tspecific attributes to status\n * @param[in,out]\tpstathd\t-\tRETURN: head of list to append status to\n * @param[out]\t\tbad\t-\tRETURN: index of first bad attribute\n * @param[in]\t\tdosubjobs -\tflag to expand an Array Job to include all subjobs\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\tPBSE_PERM\t: client is not authorized to status the job\n * @retval\tPBSE_SYSTEM\t: memory allocation error\n * @retval\tPBSE_NOATTR\t: attribute error\n */\n\nint\nstatus_job(job *pjob, struct batch_request *preq, svrattrl *pal, pbs_list_head *pstathd, int *bad, int dosubjobs)\n{\n\tstruct brp_status *pstat;\n\tlong oldtime = 0;\n\tint old_elig_flags = 0;\n\tint old_atyp_flags = 0;\n\tint revert_state_r = 0;\n\n\t/* see if the client is authorized to status this job */\n\n\tif (!get_sattr_long(SVR_ATR_query_others))\n\t\tif (svr_authorize_jobreq(preq, pjob))\n\t\t\treturn (PBSE_PERM);\n\n\t/* calc eligible time on the fly and return, don't save. 
*/\n\tif (get_sattr_long(SVR_ATR_EligibleTimeEnable) == TRUE) {\n\t\tif (get_jattr_long(pjob, JOB_ATR_accrue_type) == JOB_ELIGIBLE) {\n\t\t\toldtime = get_jattr_long(pjob, JOB_ATR_eligible_time);\n\t\t\tset_jattr_l_slim(pjob, JOB_ATR_eligible_time,\n\t\t\t\t\t time_now - get_jattr_long(pjob, JOB_ATR_sample_starttime), INCR);\n\t\t}\n\t} else {\n\t\t/* eligible_time_enable is off, so clear the set flag so that eligible_time and accrue_type don't show */\n\t\told_elig_flags = get_jattr(pjob, JOB_ATR_eligible_time)->at_flags;\n\t\tmark_jattr_not_set(pjob, JOB_ATR_eligible_time);\n\n\t\told_atyp_flags = get_jattr(pjob, JOB_ATR_accrue_type)->at_flags;\n\t\tmark_jattr_not_set(pjob, JOB_ATR_accrue_type);\n\t}\n\n\t/* allocate reply structure and fill in header portion */\n\n\tpstat = (struct brp_status *) malloc(sizeof(struct brp_status));\n\tif (pstat == NULL)\n\t\treturn (PBSE_SYSTEM);\n\tCLEAR_LINK(pstat->brp_stlink);\n\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_ArrayJob) != 0 && dosubjobs)\n\t\tpstat->brp_objtype = MGR_OBJ_JOBARRAY_PARENT;\n\telse if ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_SubJob) != 0 && dosubjobs)\n\t\tpstat->brp_objtype = MGR_OBJ_SUBJOB;\n\telse\n\t\tpstat->brp_objtype = MGR_OBJ_JOB;\n\t(void) strcpy(pstat->brp_objname, pjob->ji_qs.ji_jobid);\n\tCLEAR_HEAD(pstat->brp_attr);\n\tappend_link(pstathd, &pstat->brp_stlink, pstat);\n\tpreq->rq_reply.brp_count++;\n\n\t/* Temporarily set suspend/user suspend states for the stat */\n\tif (check_job_state(pjob, JOB_STATE_LTR_RUNNING)) {\n\t\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_Suspend) {\n\t\t\tset_job_state(pjob, JOB_STATE_LTR_SUSPENDED);\n\t\t\trevert_state_r = 1;\n\t\t} else if (pjob->ji_qs.ji_svrflags & JOB_SVFLG_Actsuspd) {\n\t\t\tset_job_state(pjob, JOB_STATE_LTR_USUSPENDED);\n\t\t\trevert_state_r = 1;\n\t\t}\n\t}\n\n\t/* add attributes to the status reply */\n\n\t*bad = 0;\n\tif (status_attrib(pal, job_attr_idx, job_attr_def, pjob->ji_wattr, JOB_ATR_LAST, preq->rq_perm, &pstat->brp_attr, bad))\n\t\treturn 
(PBSE_NOATTR);\n\n\t/* reset eligible time; it was calculated on the fly, the real calculation happens only when accrue_type changes */\n\n\tif (get_sattr_long(SVR_ATR_EligibleTimeEnable) != 0) {\n\t\tif (get_jattr_long(pjob, JOB_ATR_accrue_type) == JOB_ELIGIBLE)\n\t\t\tset_jattr_l_slim(pjob, JOB_ATR_eligible_time, oldtime, SET);\n\t} else {\n\t\t/* reset the set flags */\n\t\tget_jattr(pjob, JOB_ATR_eligible_time)->at_flags = old_elig_flags;\n\n\t\tget_jattr(pjob, JOB_ATR_accrue_type)->at_flags = old_atyp_flags;\n\t}\n\n\tif (revert_state_r)\n\t\tset_job_state(pjob, JOB_STATE_LTR_RUNNING);\n\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tstatus_subjob - status a single subjob (of an Array Job)\n *\t\tWorks by statusing the parent unless the subjob is actually running.\n *\n * @param[in,out]\tpjob\t-\tptr to parent Array\n * @param[in]\t\tpreq\t-\trequest structure\n * @param[in]\t\tpal\t-\tspecific attributes to status\n * @param[in]\t\tsubj\t-\tif not -1, include subjob [n]\n * @param[in,out]\tpstathd\t-\tRETURN: head of list to append status to\n * @param[out]\t\tbad\t-\tRETURN: index of first bad attribute\n * @param[in]\t\tdosubjobs -\tflag to expand an Array Job to include all subjobs\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\tPBSE_PERM\t: client is not authorized to status the job\n * @retval\tPBSE_SYSTEM\t: memory allocation error\n * @retval\tPBSE_IVALREQ\t: something wrong with the flags\n */\nint\nstatus_subjob(job *pjob, struct batch_request *preq, svrattrl *pal, int subj, pbs_list_head *pstathd, int *bad, int dosubjobs)\n{\n\tint limit = (int) JOB_ATR_LAST;\n\tstruct brp_status *pstat;\n\tjob *psubjob; /* ptr to job to status */\n\tchar realstate;\n\tint rc = 0;\n\tint oldeligflags = 0;\n\tint oldatypflags = 0;\n\tchar *old_subjob_comment = NULL;\n\tchar sjst;\n\tint sjsst;\n\tchar *objname;\n\n\t/* see if the client is authorized to status this job */\n\n\tif (!get_sattr_long(SVR_ATR_query_others))\n\t\tif (svr_authorize_jobreq(preq, pjob))\n\t\t\treturn 
(PBSE_PERM);\n\n\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_ArrayJob) == 0)\n\t\treturn PBSE_IVALREQ;\n\n\t/* if subjob job obj exists, use real job structure */\n\n\tpsubjob = get_subjob_and_state(pjob, subj, &sjst, &sjsst);\n\tif (psubjob)\n\t\treturn status_job(psubjob, preq, pal, pstathd, bad, dosubjobs);\n\n\tif (sjst == JOB_STATE_LTR_UNKNOWN)\n\t\treturn PBSE_UNKJOBID;\n\n\t/* otherwise we fake it with info from the parent      */\n\t/* allocate reply structure and fill in header portion */\n\n\tobjname = create_subjob_id(pjob->ji_qs.ji_jobid, subj);\n\tif (objname == NULL)\n\t\treturn PBSE_SYSTEM;\n\n\t/* for the general case, we don't want to include the parent's */\n\t/* array-related attributes as they belong only to the Array   */\n\tif (pal == NULL)\n\t\tlimit = JOB_ATR_array;\n\tpstat = (struct brp_status *) malloc(sizeof(struct brp_status));\n\tif (pstat == NULL)\n\t\treturn (PBSE_SYSTEM);\n\tCLEAR_LINK(pstat->brp_stlink);\n\tif (dosubjobs)\n\t\tpstat->brp_objtype = MGR_OBJ_SUBJOB;\n\telse\n\t\tpstat->brp_objtype = MGR_OBJ_JOB;\n\t(void) strcpy(pstat->brp_objname, objname);\n\tCLEAR_HEAD(pstat->brp_attr);\n\tappend_link(pstathd, &pstat->brp_stlink, pstat);\n\tpreq->rq_reply.brp_count++;\n\n\t/* add attributes to the status reply */\n\n\t*bad = 0;\n\n\t/*\n\t * fake the job state and comment by setting the parent job's state\n\t * and comment to that of the subjob\n\t */\n\trealstate = get_job_state(pjob);\n\tset_job_state(pjob, sjst);\n\n\tif (sjst == JOB_STATE_LTR_EXPIRED || sjst == JOB_STATE_LTR_FINISHED) {\n\t\tif (sjsst == JOB_SUBSTATE_FINISHED) {\n\t\t\tif (is_jattr_set(pjob, JOB_ATR_Comment)) {\n\t\t\t\told_subjob_comment = strdup(get_jattr_str(pjob, JOB_ATR_Comment));\n\t\t\t\tif (old_subjob_comment == NULL)\n\t\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t}\n\t\t\tif (set_jattr_str_slim(pjob, JOB_ATR_Comment, \"Subjob finished\", NULL)) {\n\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t}\n\t\t} else if (sjsst == JOB_SUBSTATE_FAILED) {\n\t\t\tif 
(is_jattr_set(pjob, JOB_ATR_Comment)) {\n\t\t\t\told_subjob_comment = strdup(get_jattr_str(pjob, JOB_ATR_Comment));\n\t\t\t\tif (old_subjob_comment == NULL)\n\t\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t}\n\t\t\tif (set_jattr_str_slim(pjob, JOB_ATR_Comment, \"Subjob failed\", NULL)) {\n\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t}\n\t\t} else if (sjsst == JOB_SUBSTATE_TERMINATED) {\n\t\t\tif (is_jattr_set(pjob, JOB_ATR_Comment)) {\n\t\t\t\told_subjob_comment = strdup(get_jattr_str(pjob, JOB_ATR_Comment));\n\t\t\t\tif (old_subjob_comment == NULL)\n\t\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t}\n\t\t\tif (set_jattr_str_slim(pjob, JOB_ATR_Comment, \"Subjob terminated\", NULL)) {\n\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t}\n\t\t}\n\t}\n\n\t/* when eligible_time_enable is off,\t\t\t\t      */\n\t/* clear the set flag so that eligible_time and accrue_type don't show */\n\tif (get_sattr_long(SVR_ATR_EligibleTimeEnable) == 0) {\n\t\tattribute *attr = get_jattr(pjob, JOB_ATR_eligible_time);\n\n\t\toldeligflags = attr->at_flags;\n\t\tmark_jattr_not_set(pjob, JOB_ATR_eligible_time);\n\n\t\tattr = get_jattr(pjob, JOB_ATR_accrue_type);\n\t\toldatypflags = attr->at_flags;\n\t\tmark_jattr_not_set(pjob, JOB_ATR_accrue_type);\n\t}\n\n\tif (status_attrib(pal, job_attr_idx, job_attr_def, pjob->ji_wattr, limit, preq->rq_perm, &pstat->brp_attr, bad))\n\t\trc = PBSE_NOATTR;\n\n\t/* Set the parent state back to what it really is */\n\tset_job_state(pjob, realstate);\n\n\t/* Set the parent comment back to what it really is */\n\tif (old_subjob_comment != NULL) {\n\t\tif (set_jattr_str_slim(pjob, JOB_ATR_Comment, old_subjob_comment, NULL)) {\n\t\t\tfree(old_subjob_comment);\n\t\t\treturn (PBSE_SYSTEM);\n\t\t}\n\n\t\tfree(old_subjob_comment);\n\t}\n\n\t/* reset the flags */\n\tif (get_sattr_long(SVR_ATR_EligibleTimeEnable) == 0) {\n\t\tattribute *attr = get_jattr(pjob, JOB_ATR_eligible_time);\n\t\tattr->at_flags = oldeligflags;\n\n\t\tattr = get_jattr(pjob, JOB_ATR_accrue_type);\n\t\tattr->at_flags = oldatypflags;\n\t}\n\n\treturn (rc);\n}\n"
  },
  {
    "path": "src/server/svr_chk_owner.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    svr_chk_owner.c\n *\n * @brief\n * \t\tsvr_chk_owner.c\t-\tThis file contains functions related to authorizing a job request.\n *\n * Functions included are:\n * \tsvr_chk_owner()\n *\tsvr_authorize_jobreq()\n *\tsvr_get_privilege()\n *\tauthenticate_user()\n *\tchk_job_request()\n *\tchk_rescResv_request()\n *\tsvr_chk_ownerResv()\n *\tsvr_authorize_resvReq()\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <sys/types.h>\n#include \"libpbs.h\"\n#include \"string.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"server_limits.h\"\n#include \"server.h\"\n#include \"credential.h\"\n#include \"batch_request.h\"\n#include \"net_connect.h\"\n\n#include <unistd.h>\n\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"pbs_error.h\"\n#include \"log.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"libutil.h\"\n\n/* Global Data */\n\nextern char *msg_badstate;\nextern char *msg_permlog;\nextern char *msg_unkjobid;\nextern char *msg_system;\nextern char *msg_unkresvID;\nextern char *msg_delProgress;\nextern time_t time_now;\n\n/* Global functions */\nextern int svr_chk_histjob(job *pjob);\n\n/* Non-global functions */\nstatic int svr_authorize_resvReq(struct batch_request *, resc_resv *);\n\n/**\n * @brief\n * \t\tsvr_chk_owner - 
compare a user name from a request and the name of\n *\t\tthe user who owns the job.\n *\n * @param[in]\tpreq\t-\trequest structure which contains the user name\n * @param[in]\tpjob\t-\tjob structure\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t!0\t: user is not the job owner\n */\n\nint\nsvr_chk_owner(struct batch_request *preq, job *pjob)\n{\n\tchar owner[PBS_MAXUSER + 1];\n\tchar *pu;\n\tchar *ph;\n\tchar rmtuser[PBS_MAXUSER + PBS_MAXHOSTNAME + 2];\n\textern int ruserok(const char *rhost, int suser, const char *ruser,\n\t\t\t   const char *luser);\n\n\t/* Are the owner and requestor the same? */\n\tsnprintf(rmtuser, sizeof(rmtuser), \"%s\", get_jattr_str(pjob, JOB_ATR_job_owner));\n\tpu = rmtuser;\n\tph = strchr(rmtuser, '@');\n\tif (!ph)\n\t\treturn -1;\n\t*ph++ = '\\0';\n\tif (strcmp(preq->rq_user, pu) == 0) {\n\t\t/* Avoid the lookup if they match. */\n\t\tif (strcmp(preq->rq_host, ph) == 0)\n\t\t\treturn 0;\n\t\t/* Perform the lookup. */\n\t\tif (is_same_host(preq->rq_host, ph))\n\t\t\treturn 0;\n\t}\n\n\t/* map requestor user@host to \"local\" name */\n\n\tpu = site_map_user(preq->rq_user, preq->rq_host);\n\tif (pu == NULL)\n\t\treturn (-1);\n\t(void) strncpy(rmtuser, pu, PBS_MAXUSER);\n\n\t/*\n\t * Get job owner name without \"@host\" and then map to \"local\" name.\n\t */\n\n\tget_jobowner(get_jattr_str(pjob, JOB_ATR_job_owner), owner);\n\tpu = site_map_user(owner, get_hostPart(get_jattr_str(pjob, JOB_ATR_job_owner)));\n\n\tif (get_sattr_long(SVR_ATR_FlatUID)) {\n\t\t/* with flatuid, all that must match is user names */\n\t\treturn (strcmp(rmtuser, pu));\n\t} else {\n\t\t/* non-flatuid space, must validate rmtuser vs owner */\n\t\treturn (ruserok(preq->rq_host, 0, rmtuser, pu));\n\t}\n}\n\n/**\n * @brief\n * \t\tsvr_authorize_jobreq - determine if requestor is authorized to make\n *\t\trequest against the job.  
This is only called for batch requests\n *\t\tagainst jobs, not manager requests against queues or the server.\n *\n * @param[in]\tpreq\t-\trequest structure which contains the user name\n * @param[in]\tpjob\t-\tjob structure\n *\n * @return\tint\n * @retval\t0\t: if authorized (job owner, operator, administrator)\n * @retval\t!0\t: not authorized.\n */\n\nint\nsvr_authorize_jobreq(struct batch_request *preq, job *pjob)\n{\n\t/* Is requestor special privileged? */\n\n\tif ((preq->rq_perm & (ATR_DFLAG_OPRD | ATR_DFLAG_OPWR |\n\t\t\t      ATR_DFLAG_MGRD | ATR_DFLAG_MGWR)) != 0)\n\t\treturn (0);\n\n\t/* if not, see if requestor is the job owner */\n\n\telse if (svr_chk_owner(preq, pjob) == 0)\n\t\treturn (0);\n\n\telse\n\t\treturn (-1);\n}\n\n/**\n * @brief\n * \t\tsvr_get_privilege - get privilege level of a user.\n *\n *\t\tPrivilege is granted to a user at a host.  A user is automatically\n *\t\tgranted \"user\" privilege.  The user@host pair must appear in\n *\t\tthe server's administrator attribute list to be granted \"manager\"\n *\t\tprivilege and/or appear in the operators attribute list to be\n *\t\tgranted \"operator\" privilege.  
If either acl is unset, then root\n *\ton the server machine is granted that privilege.\n *\n *\t\tIf \"PBS_ROOT_ALWAYS_ADMIN\" is defined, then root always has privilege\n *\t\teven if not in the list.\n *\n *\t\tThe returns are based on the access permissions of attributes, see\n *\t\tattribute.h.\n *\n * @param[in]\tuser\t-\tuser in user@host pair\n * @param[in]\thost\t-\thost in user@host pair\n *\n * @return\tint\n * @retval\taccess privilege of the user\n */\n\nint\nsvr_get_privilege(char *user, char *host)\n{\n\tint is_root = 0;\n\tint priv = (ATR_DFLAG_USRD | ATR_DFLAG_USWR);\n\tchar uh[PBS_MAXUSER + PBS_MAXHOSTNAME + 2];\n\n\t(void) strcpy(uh, user);\n\t(void) strcat(uh, \"@\");\n\t(void) strcat(uh, host);\n\n\tif (strcmp(user, PBS_DEFAULT_ADMIN) == 0) {\n\t\tchar myhostname[PBS_MAXHOSTNAME + 1];\n\t\t/* First try without DNS lookup. */\n\t\tif (strcasecmp(host, server_host) == 0) {\n\t\t\tis_root = 1;\n\t\t} else if (strcasecmp(host, LOCALHOST_SHORTNAME) == 0) {\n\t\t\tis_root = 1;\n\t\t} else if (strcasecmp(host, LOCALHOST_FULLNAME) == 0) {\n\t\t\tis_root = 1;\n\t\t} else {\n\t\t\tif (gethostname(myhostname, (sizeof(myhostname) - 1)) == -1) {\n\t\t\t\tmyhostname[0] = '\\0';\n\t\t\t}\n\t\t\tif (strcasecmp(host, myhostname) == 0) {\n\t\t\t\tis_root = 1;\n\t\t\t}\n\t\t}\n\t\tif (is_root == 0) {\n\t\t\t/* Now try with DNS lookup. */\n\t\t\tif (is_same_host(host, server_host)) {\n\t\t\t\tis_root = 1;\n\t\t\t} else if (is_same_host(host, myhostname)) {\n\t\t\t\tis_root = 1;\n\t\t\t}\n\t\t}\n\t}\n\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\tchar *privil_auth_user = pbs_conf.pbs_privileged_auth_user ? 
pbs_conf.pbs_privileged_auth_user : NULL;\n\tif (privil_auth_user &&\n\t    is_string_in_arr(pbs_conf.supported_auth_methods, AUTH_GSS_NAME)) {\n\t\tif (strcmp(uh, privil_auth_user) == 0) {\n\t\t\tis_root = 1;\n\t\t}\n\t}\n#endif\n\n#ifdef PBS_ROOT_ALWAYS_ADMIN\n\tif (is_root)\n\t\treturn (priv | ATR_DFLAG_MGRD | ATR_DFLAG_MGWR | ATR_DFLAG_OPRD | ATR_DFLAG_OPWR);\n#endif /* PBS_ROOT_ALWAYS_ADMIN */\n\n\tif (!is_sattr_set(SVR_ATR_managers)) {\n\t\tif (is_root)\n\t\t\tpriv |= (ATR_DFLAG_MGRD | ATR_DFLAG_MGWR);\n\n\t} else if (acl_check(get_sattr(SVR_ATR_managers), uh, ACL_User))\n\t\tpriv |= (ATR_DFLAG_MGRD | ATR_DFLAG_MGWR);\n\n\tif (!is_attr_set(get_sattr(SVR_ATR_operators))) {\n\t\tif (is_root)\n\t\t\tpriv |= (ATR_DFLAG_OPRD | ATR_DFLAG_OPWR);\n\n\t} else if (acl_check(get_sattr(SVR_ATR_operators), uh, ACL_User))\n\t\tpriv |= (ATR_DFLAG_OPRD | ATR_DFLAG_OPWR);\n\n\treturn (priv);\n}\n\n/**\n * @brief\n * \t\tauthenticate_user - authenticate user by checking name against credential\n *\t\t       provided on connection via Authenticate User request.\n *\n * @param[in]\tpreq\t-\tuser to be authenticated\n * @param[in]\tpcred\t-\tcredential provided on connection via Authenticate User request.\n *\n * @return\tint\n * @retval\t0\t: if user is who s/he claims\n * @retval\tnonzero\t: error code\n */\n\nint\nauthenticate_user(struct batch_request *preq, struct connection *pcred)\n{\n\tchar uath[PBS_MAXUSER + PBS_MAXHOSTNAME + 1];\n\n\tif (strncmp(preq->rq_user, pcred->cn_username, PBS_MAXUSER))\n\t\treturn (PBSE_BADCRED);\n\tif (strncasecmp(preq->rq_host, pcred->cn_hostname, PBS_MAXHOSTNAME))\n\t\treturn (PBSE_BADCRED);\n\tif (pcred->cn_timestamp) {\n\t\tif ((pcred->cn_timestamp - CREDENTIAL_TIME_DELTA > time_now) ||\n\t\t    (pcred->cn_timestamp + CREDENTIAL_LIFETIME < time_now))\n\t\t\treturn (PBSE_EXPIRED);\n\t}\n\n\t/* If Server's Acl_User enabled, check if user in list */\n\n\tif (get_sattr_long(SVR_ATR_AclUserEnabled)) {\n\n\t\t(void) strcpy(uath, 
preq->rq_user);\n\t\t(void) strcat(uath, \"@\");\n\t\t(void) strcat(uath, preq->rq_host);\n\t\tif (acl_check(get_sattr(SVR_ATR_AclUsers),\n\t\t\t      uath, ACL_User) == 0) {\n\t\t\t/* not in list, next check if listed as a manager */\n\n\t\t\tif ((svr_get_privilege(preq->rq_user, preq->rq_host) &\n\t\t\t     (ATR_DFLAG_MGWR | ATR_DFLAG_OPWR)) == 0)\n\t\t\t\treturn (PBSE_PERM);\n\t\t}\n\t}\n\n\t/* A site stub for additional checking */\n\n\treturn (site_allow_u(preq->rq_user, preq->rq_host));\n}\n\n/**\n * @brief\n * \t\tchk_job_request - check legality of a request against a job\n * @par\n *\t\tthis checks the conditions common to most job batch requests.\n *\t\tIt also returns a pointer to the job if found and the tests pass.\n *\t\tIf the request is for a single subjob or a range of subjobs (of a\n *\t\tJob Array), the return job pointer is to the parent Array Job.\n * @par\n *\t\tDepending on what the \"jobid\" identifies, the following is returned\n *\t\tin the integer pointed to by rc:\n *\t   \tIS_ARRAY_NO (0)       - for a regular job\n *\t   \tIS_ARRAY_ArrayJob (1) - for an Array Job\n *\t   \tIS_ARRAY_Single (2)   - for a single subjob\n *\t   \tIS_ARRAY_Range (3)    - for a range of subjobs\n *\n * @param[in]\tjobid\t-\tJob Id.\n * @param[in,out]\tpreq\t-\tjob batch request\n * @param[out]\trc\t-\tset to one of the IS_ARRAY_* values listed above\n * @param[out]\terr\t-\tPBSE reason why request was rejected\n *\n * @return\tjob *\n * @retval\ta pointer to the job\t: if found and the tests pass.\n * @retval\tNULL\t: failed\n */\n\njob *\nchk_job_request(char *jobid, struct batch_request *preq, int *rc, int *err)\n{\n\tint t;\n\tint histerr = 0;\n\tjob *pjob;\n\tint deletehist = 0;\n\tchar *p1;\n\tchar *p2;\n\n\tif (preq->rq_extend && 
strstr(preq->rq_extend, DELETEHISTORY))\n\t\tdeletehist = 1;\n\tt = is_job_array(jobid);\n\tif ((t == IS_ARRAY_NO) || (t == IS_ARRAY_ArrayJob))\n\t\tpjob = find_job(jobid); /* regular or ArrayJob itself */\n\telse\n\t\tpjob = find_arrayparent(jobid); /* subjob(s) */\n\n\t*rc = t;\n\n\tif (pjob == NULL) {\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  jobid, msg_unkjobid);\n\t\tif (err != NULL)\n\t\t\t*err = PBSE_UNKJOBID;\n\n\t\tif (preq->rq_type != PBS_BATCH_DeleteJobList)\n\t\t\treq_reject(PBSE_UNKJOBID, 0, preq);\n\t\treturn NULL;\n\t} else {\n\t\thisterr = svr_chk_histjob(pjob);\n\t\tif (histerr && deletehist == 0) {\n\t\t\tif (err != NULL)\n\t\t\t\t*err = histerr;\n\t\t\tif (preq->rq_type != PBS_BATCH_DeleteJobList)\n\t\t\t\treq_reject(histerr, 0, preq);\n\t\t\treturn NULL;\n\t\t}\n\t\tif (deletehist == 1 && check_job_state(pjob, JOB_STATE_LTR_MOVED) &&\n\t\t    !check_job_substate(pjob, JOB_SUBSTATE_FINISHED)) {\n\t\t\tjob_purge(pjob);\n\t\t\tif (preq->rq_type != PBS_BATCH_DeleteJobList)\n\t\t\t\treq_reject(PBSE_UNKJOBID, 0, preq);\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\t/*\n\t * The job was found using the job ID in the request, but it may not\n\t * match exactly (i.e. FQDN vs. unqualified hostname). Overwrite the\n\t * host portion of the job ID in the request with the host portion of\n\t * the one from the server job structure. 
Do not modify anything\n\t * before the first dot in the job ID because it may be an array job.\n\t * This will allow find_job() to look for an exact match when the\n\t * request is serviced by MoM.\n\t */\n\tp1 = strchr(pjob->ji_qs.ji_jobid, '.');\n\tif (p1) {\n\t\tp2 = strchr(jobid, '.');\n\t\tif (p2)\n\t\t\t*p2 = '\\0';\n\t\tstrncat(jobid, p1, PBS_MAXSVRJOBID - 1);\n\t}\n\n\tif (svr_authorize_jobreq(preq, pjob) == -1) {\n\t\t(void) sprintf(log_buffer, msg_permlog, preq->rq_type,\n\t\t\t       \"Job\", pjob->ji_qs.ji_jobid,\n\t\t\t       preq->rq_user, preq->rq_host);\n\t\tlog_event(PBSEVENT_SECURITY, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\tif (err != NULL)\n\t\t\t*err = PBSE_PERM;\n\t\tif (preq->rq_type != PBS_BATCH_DeleteJobList)\n\t\t\treq_reject(PBSE_PERM, 0, preq);\n\t\treturn NULL;\n\t}\n\n\tif ((t == IS_ARRAY_NO) && (check_job_state(pjob, JOB_STATE_LTR_EXITING))) {\n\n\t\t/* special case Deletejob with \"force\" */\n\t\tif (((preq->rq_type == PBS_BATCH_DeleteJob) || (preq->rq_type == PBS_BATCH_DeleteJobList)) &&\n\t\t    (preq->rq_extend != NULL) &&\n\t\t    (strcmp(preq->rq_extend, \"force\") == 0)) {\n\t\t\treturn pjob;\n\t\t}\n\n\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid,\n\t\t\t   \"%s, state=%c\", msg_badstate, get_job_state(pjob));\n\t\tif (err != NULL)\n\t\t\t*err = PBSE_BADSTATE;\n\t\tif (preq->rq_type != PBS_BATCH_DeleteJobList)\n\t\t\treq_reject(PBSE_BADSTATE, 0, preq);\n\t\treturn NULL;\n\t}\n\n\treturn pjob;\n}\n\n/**\n * @brief\n * \t\tchk_rescResv_request - check legality of a request against named\n *\t \tresource reservation\n * @par\n *\t\tThis checks the conditions common to most batch requests\n *\t\tagainst a resc_resv object.  
If the object is found in the\n *\t\tsystem and the tests are passed, a non-zero resc_resv\n *\t\tpointer is returned.\n * @par\n *\t\tIf the object can't be found or an applied test fails,\n *\t\tan appropriate error event is logged, an error code is passed\n *\t\tback to the requester via function req_reject(), the batch\n *\t\trequest structure is handled appropriately, and\n *\t\tvalue NULL is returned to the caller.\n * @note\n *\t\tNotes: On failure the reply back to the requester will be handled\n *\t       for the caller - as is the batch request structure itself.\n *\t       It would be better if the caller got back an error code\n *\t       and then called a function passing it that code and request\n *\t       and let it handle the event log and rejection of the\n *\t       request.  Currently, we are modeled along the lines of\n *\t       the function chk_job_request().\n *\n * @param[in]\tresvID\t-\treservation ID\n * @param[in,out]\tpreq\t-\tjob batch request\n *\n * @return\tresc_resv *\n * @retval\tresc_resv object ptr\t: successful\n * @retval\tNULL\t: failed test/no object\n */\n\nresc_resv *\nchk_rescResv_request(char *resvID, struct batch_request *preq)\n{\n\tresc_resv *presv;\n\n\tif ((presv = find_resv(resvID)) == NULL) {\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_RESV, LOG_INFO,\n\t\t\t  resvID, msg_unkresvID);\n\t\treq_reject(PBSE_UNKRESVID, 0, preq);\n\t\treturn NULL;\n\t}\n\n\tif (resvID[0] == PBS_MNTNC_RESV_ID_CHAR && !(preq->rq_perm & (ATR_DFLAG_OPWR | ATR_DFLAG_MGWR))) {\n\t\treq_reject(PBSE_PERM, 0, preq);\n\t\treturn NULL;\n\t}\n\n\tif (svr_authorize_resvReq(preq, presv) == -1) {\n\t\t(void) sprintf(log_buffer, msg_permlog, preq->rq_type,\n\t\t\t       \"RESCRESV\", presv->ri_qs.ri_resvID,\n\t\t\t       preq->rq_user, preq->rq_host);\n\t\tlog_event(PBSEVENT_SECURITY, PBS_EVENTCLASS_RESV, LOG_INFO,\n\t\t\t  presv->ri_qs.ri_resvID, log_buffer);\n\t\treq_reject(PBSE_PERM, 0, preq);\n\t\treturn NULL;\n\t}\n\n\treturn 
(presv);\n}\n\n/**\n * @brief\n * \t\tsvr_chk_ownerResv - compare a user name from a request and the name of\n *\t\tthe user who owns the resource reservation.\n *\n * @param[in]\tpreq\t-\trequest structure which contains the user name\n * @param[in]\tpresv\t-\tresource reservation.\n *\n * @return\tint\n * @retval\t0\t: if same\n * @retval\tnonzero\t: if user is not the reservation owner\n */\n\nint\nsvr_chk_ownerResv(struct batch_request *preq, resc_resv *presv)\n{\n\tchar owner[PBS_MAXUSER + 1];\n\tchar *host;\n\tchar *pu;\n\tchar rmtuser[PBS_MAXUSER + 1];\n\n\t/* map user@host to \"local\" name */\n\n\tpu = site_map_user(preq->rq_user, preq->rq_host);\n\tif (pu == NULL)\n\t\treturn (-1);\n\t(void) strncpy(rmtuser, pu, PBS_MAXUSER);\n\n\tget_jobowner(get_rattr_str(presv, RESV_ATR_resv_owner), owner);\n\thost = get_hostPart(get_rattr_str(presv, RESV_ATR_resv_owner));\n\tpu = site_map_user(owner, host);\n\n\treturn (strcmp(rmtuser, pu));\n}\n\n/**\n * @brief\n * \t\tsvr_authorize_resvReq - determine if requestor is authorized to make\n *\t\trequest against the reservation.  This is only called for batch requests\n *\t\tagainst reservations.\n *\n * @param[in]\tpreq\t-\tbatch request structure\n * @param[in]\tpresv\t-\tresource reservation.\n *\n * @return\tint\n * @retval\t0\t: if authorized (reservation owner, operator, administrator)\n * @retval\t-1\t: if not authorized.\n */\n\nstatic int\nsvr_authorize_resvReq(struct batch_request *preq, resc_resv *presv)\n{\n\t/* Is requestor special privileged? 
*/\n\n\tif ((preq->rq_perm & (ATR_DFLAG_OPRD | ATR_DFLAG_OPWR |\n\t\t\t      ATR_DFLAG_MGRD | ATR_DFLAG_MGWR)) != 0)\n\t\treturn (0);\n\t/* Only Manager has privilege to force modify reservation */\n\tif (preq->rq_type == PBS_BATCH_ModifyResv && (preq->rq_extend != NULL) &&\n\t    (strcmp(preq->rq_extend, FORCE) == 0) && ((preq->rq_perm & ATR_DFLAG_MGWR) == 0))\n\t\treturn (-1);\n\n\t/* if not, see if requestor is the reservation owner */\n\n\telse if (svr_chk_ownerResv(preq, presv) == 0)\n\t\treturn (0);\n\n\telse\n\t\treturn (-1);\n}\n"
  },
  {
    "path": "src/server/svr_connect.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    svr_connect.c\n *\n * @brief\n * \t\tsvr_connect.c - contains routines to tie the structures used by\n *\t\tnet_client and net_server together with those used by the\n *\t\tvarious PBS_*() routines in the API.\n *\n *\t\tsvr_connect() opens a connection which can be used with the\n *\t\tAPI routines and still be selected in wait_request().\n *\n *\t\tsvr_disconnect() closes the above connection.\n *\n *\t\tsvr_disconnect_with_wait_option() is like svr_disconnect(), but there's\n *\t\tan option to wait until the connection has completely closed.\n *\n *\t\tsvr_force_disconnect() directly closes the connection without asking\n *\t\tthe other end to close first.\n *\n * Functions included are:\n * \tsvr_connect()\n * \tsvr_disconnect()\n * \tsvr_disconnect_with_wait_option()\n * \tsvr_force_disconnect()\n *\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <sys/types.h>\n\n#include <unistd.h>\n#include <sys/socket.h>\n#include <signal.h>\n\n#include <errno.h>\n#include \"libpbs.h\"\n#include \"server_limits.h\"\n#include \"net_connect.h\"\n#include \"attribute.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"dis.h\"\n#include \"list_link.h\"\n#include \"work_task.h\"\n#include \"log.h\"\n#include \"server.h\"\n\n/* global data 
*/\nextern int errno;\nextern int pbs_errno;\nextern unsigned int pbs_mom_port;\nextern char *msg_daemonname;\nextern char *msg_noloopbackif;\n\nextern pbs_net_t pbs_server_addr;\n\nextern sigset_t allsigs; /* see pbsd_main.c */\n\n/**\n * @brief\n *\t\topens a connection which can be used with the\n *      API routines and still be selected in wait_request(). It is called by\n *      the server whenever we need to send a request to another server, or\n *      talk to MOM.\n *\n * @param[in]   hostaddr - address of the host\n * @param[in]   port - port number of the host\n * @param[in]   func - pointer to function\n * @param[in]   cntype - indicates whether a connection table entry is in\n *                  use or is free\n * @param[in]   prot    - PROT_TPP or PROT_TCP\n *\n * @return\tint\n * @retval\t>=0\t: connection handle returned on success. Note that a value\n *\t\t \t\t\tof PBS_LOCAL_CONNECTION is special, it means the server\n *\t\t \t\t\tis talking to itself.\n * @retval\t-1\t: PBS_NET_RC_FATAL (-1) is returned if the error is believed\n *               \tto be permanent\n * @retval\t-2\t: PBS_NET_RC_RETRY (-2) if the error is believed to be temporary,\n *               \ti.e., retry.\n */\nint\nsvr_connect(pbs_net_t hostaddr, unsigned int port, void (*func)(int), enum conn_type cntype, int prot)\n{\n\tint sock;\n\tmominfo_t *pmom = NULL;\n\tconn_t *conn = NULL;\n\tdmn_info_t *pdmninfo;\n\n\t/* First, determine if the request is to another server or ourselves */\n\n\tif ((hostaddr == pbs_server_addr) && (port == pbs_server_port_dis))\n\t\treturn (PBS_LOCAL_CONNECTION); /* special value for local */\n\tpmom = tfind2((unsigned long) hostaddr, port, &ipaddrs);\n\n\tif (pmom && (port == pmom->mi_port)) {\n\t\tpdmninfo = pmom->mi_dmn_info;\n\t\t/* connect to peer server associated with this mom */\n\t\tif (pdmninfo->dmn_state & INUSE_DOWN) {\n\t\t\tif (pdmninfo->dmn_state & INUSE_NEEDS_HELLOSVR) {\n\t\t\t\tif (open_conn_stream(pmom) < 0) {\n\t\t\t\t\tpbs_errno = 
PBSE_NORELYMOM;\n\t\t\t\t\treturn (PBS_NET_RC_FATAL);\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tpbs_errno = PBSE_NORELYMOM;\n\t\t\t\treturn (PBS_NET_RC_FATAL);\n\t\t\t}\n\t\t}\n\t}\n\n\tif (prot == PROT_TPP) {\n\t\tif (!pmom) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\treturn (PBS_NET_RC_RETRY);\n\t\t}\n\t\treturn pmom->mi_dmn_info->dmn_stream;\n\t}\n\n\t/* obtain the connection to the other server */\n\t/*  block signals while we attempt to connect */\n\n\tif (sigprocmask(SIG_BLOCK, &allsigs, NULL) == -1)\n\t\tlog_err(errno, msg_daemonname, \"sigprocmask(BLOCK)\");\n\n\tsock = client_to_svr(hostaddr, port, B_RESERVED);\n\tif (pbs_errno == PBSE_NOLOOPBACKIF)\n\t\tlog_err(PBSE_NOLOOPBACKIF, \"client_to_svr\", msg_noloopbackif);\n\n\tif ((sock < 0) && (errno == ECONNREFUSED)) {\n\t\t/* try one additional time */\n\t\tsock = client_to_svr(hostaddr, port, B_RESERVED);\n\t\tif (pbs_errno == PBSE_NOLOOPBACKIF)\n\t\t\tlog_err(PBSE_NOLOOPBACKIF, \"client_to_svr\", msg_noloopbackif);\n\t}\n\n\t/* unblock signals */\n\tif (sigprocmask(SIG_UNBLOCK, &allsigs, NULL) == -1)\n\t\tlog_err(errno, msg_daemonname, \"sigprocmask(UNBLOCK)\");\n\n\tif (sock < 0) {\n\t\t/* if execution node, mark it down  */\n\t\tif (pmom) {\n\t\t\tstatic char error_mess[256];\n\n\t\t\tsprintf(error_mess, \"cannot open TCP stream: %s (%d)\",\n\t\t\t\tstrerror(errno), errno);\n\t\t\tmomptr_down(pmom, error_mess);\n\t\t}\n\t\tpbs_errno = PBSE_NORELYMOM;\n\t\treturn (sock); /* PBS_NET_RC_RETRY or PBS_NET_RC_FATAL */\n\t}\n\n\t/* We need to further identify our identity with the remote execution node\n\t * to prevent against IP spoofing. 
The encoded cipher will include the local\n\t * port used to connect to the remote execution node.\n\t */\n\tif ((sock >= 0) && pmom && (prot == PROT_TCP)) {\n\t\tchar errbuf[LOG_BUF_SIZE] = \"\";\n\t\tstruct sockaddr sockname;\n\t\tpbs_socklen_t socknamelen;\n\t\tchar port_str[6]; /* TCP ports can go up to 65535, so 5 digits + 1 for null term */\n\t\tsocknamelen = sizeof(sockname);\n\n\t\tif (getsockname(sock, (struct sockaddr *) &sockname, &socknamelen) < 0) {\n\t\t\tlog_err(errno, __func__, \"Error getting the socket's bound address\");\n\t\t\treturn (PBS_NET_RC_RETRY);\n\t\t}\n\t\tsnprintf(port_str, sizeof(port_str), \"%d\", ntohs(GET_IP_PORT(&sockname)));\n\t\tif (client_cipher_auth(sock, port_str, errbuf, sizeof(errbuf)) != 0) {\n\t\t\tlog_err(errno, __func__, errbuf);\n\t\t\tpbs_errno = PBSE_BADCRED;\n\t\t\treturn (PBS_NET_RC_RETRY);\n\t\t}\n\t}\n\n\t/* add the connection to the server connection table and select list */\n\n\tif (func) {\n\t\tconn = add_conn(sock, ToServerDIS, hostaddr, port, NULL, func);\n\t} else {\n\t\tconn = add_conn(sock, ToServerDIS, 0, 0, NULL, NULL); /* empty slot */\n\t}\n\n\tif (!conn) {\n\t\t(void) close(sock);\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn (PBS_NET_RC_FATAL);\n\t}\n\n\tconn->cn_sock = sock;\n\tconn->cn_authen |= PBS_NET_CONN_AUTHENTICATED;\n\n\treturn (sock);\n}\n/**\n * @brief\n *\t\tClose a connection made with svr_connect() by sending a\n *\t\tPBS_BATCH_Disconnect request to the remote host.\n *\n * @note\n *\t\tThis will not wait for the remote host to close the\n *\t\tconnection. The calling program (like the main server) should\n *\t\ttake care of checking existing connections where the\n *\t\tremote end has closed the connection as a PBS_BATCH_Disconnect\n *\t\tresponse. 
If so, then proceed to locally close the connection.\n *\n * @param[in]\thandle\t-\tthe index to the connection table containing the socket\n *\t\t\t\t\t\t\tto communicate the PBS_BATCH_Disconnect request.\n * @return\tvoid\n */\n\nvoid\nsvr_disconnect(int handle)\n{\n\tsvr_disconnect_with_wait_option(handle, 0);\n}\n\n/**\n * @brief\n *\t\tClose a connection made with svr_connect() by sending a\n *\t\tPBS_BATCH_Disconnect to the remote host. If the parameter\n *\t\t'wait' is set to 1, then this function waits until the\n *\t\tconnection is completely closed by the remote host.\n *\n * @note\n *\t\tIn addition to closing the actual connection, both the\n *\t\tserver's connection table and the handle table used by\n *\t\tthe API routines must be cleaned-up.\n *\n * @param[in]\tsock\t-\tthe index to the connection table containing the socket\n *\t\t\t\t\t\t\tto communicate the PBS_BATCH_Disconnect request.\n * @param[in]\twait\t-\tif set to 1, then this function waits until the remote\n *\t\t\t\t\t\t\thost has closed the connection.\n *\n * @return\tvoid\n *\n */\n\nvoid\nsvr_disconnect_with_wait_option(int sock, int wait)\n{\n\tchar x;\n\n\tif (sock < 0 || sock >= PBS_LOCAL_CONNECTION)\n\t\treturn;\n\tif (pbs_client_thread_lock_connection(sock) != 0)\n\t\treturn;\n\tDIS_tcp_funcs();\n\tif ((encode_DIS_ReqHdr(sock, PBS_BATCH_Disconnect, pbs_current_user) == 0) && (dis_flush(sock) == 0)) {\n\t\tconn_t *conn = get_conn(sock);\n\n\t\t/* if no error, will be closed when process_request */\n\t\t/* sees the EOF\t\t\t\t\t    */\n\n\t\tif (wait) {\n\t\t\tfor (;;) {\n\t\t\t\t/* wait for EOF (closed connection) */\n\t\t\t\t/* from remote host, in response to */\n\t\t\t\t/* PBS_BATCH_Disconnect */\n\t\t\t\tif (read(sock, &x, 1) < 1)\n\t\t\t\t\tbreak;\n\t\t\t}\n\n\t\t\t(void) close(sock);\n\t\t} else if (conn) {\n\t\t\tconn->cn_func = close_conn;\n\t\t\tconn->cn_oncl = 0;\n\t\t}\n\t} else {\n\t\t/* error sending disconnect, just close now 
*/\n\t\tclose_conn(sock);\n\t}\n\tset_conn_errtxt(sock, NULL);\n\tset_conn_errno(sock, 0);\n\t(void) pbs_client_thread_unlock_connection(sock);\n\tpbs_client_thread_destroy_connect_context(sock);\n}\n\n/**\n * @brief\n * \t\tsvr_force_disconnect - force the close of a connection\n *\t\tUnlike svr_disconnect(), this does not send disconnect message\n *\t\tand wait for the connection to be closed by the other end;\n *\t\tjust force it closed now.\n *\n * @param[in]\tsock\t-\tconnection sock\n */\nvoid\nsvr_force_disconnect(int sock)\n{\n\tif (sock < 0 || sock > PBS_LOCAL_CONNECTION)\n\t\treturn;\n\tif (pbs_client_thread_lock_connection(sock) != 0)\n\t\treturn;\n\n\tclose_conn(sock);\n\tset_conn_errtxt(sock, NULL);\n\t(void) pbs_client_thread_unlock_connection(sock);\n\tpbs_client_thread_destroy_connect_context(sock);\n}\n"
  },
  {
    "path": "src/server/svr_credfunc.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tsvr_credfunc.c\n *\n * @brief\n *\tRoutines for the work task that takes care of renewing credentials for\n *\trunning jobs.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <errno.h>\n#include <unistd.h>\n\n#include \"attribute.h\"\n#include \"job.h\"\n#include \"work_task.h\"\n#include \"pbs_error.h\"\n#include \"log.h\"\n\n#include <stdlib.h>\n\n#define SVR_RENEW_CREDS_TM 300\t\t    /* each 5*60 seconds, reschedule the work task and spread renew within the 5*60 seconds */\n#define SVR_RENEW_PERIOD_DEFAULT 3600\t    /* default renew creds 1 hour before expiration */\n#define SVR_RENEW_CACHE_PERIOD_DEFAULT 7200 /* default cred usable 2 hours before expiration */\n\nlong svr_cred_renew_enable = 0; /* disabled by default */\nlong svr_cred_renew_period = SVR_RENEW_PERIOD_DEFAULT;\nlong svr_cred_renew_cache_period = SVR_RENEW_CACHE_PERIOD_DEFAULT;\n\nextern time_t time_now;\nextern pbs_list_head svr_alljobs;\n\nextern int send_cred(job *pjob);\n\n/**\n * @brief\n *\tThe work task for a particular job. 
This work task renews credentials for\n *\ta job specified in the work task and sends the credentials to the\n *\tsuperior mom.\n *\n * @param[in] pwt - work task structure\n *\n */\nvoid\nsvr_renew_job_cred(struct work_task *pwt)\n{\n\tchar *jobid = (char *) pwt->wt_parm1;\n\tjob *pjob = NULL;\n\tint rc;\n\tif ((pjob = find_job(jobid)) != NULL) {\n\t\tif (!check_job_state(pjob, JOB_STATE_LTR_RUNNING))\n\t\t\treturn;\n\n\t\t/* job without cred id */\n\t\tif ((is_jattr_set(pjob, JOB_ATR_cred_id)) == 0)\n\t\t\treturn;\n\n\t\trc = send_cred(pjob);\n\t\tif (rc != 0) {\n\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER,\n\t\t\t\t   LOG_NOTICE, msg_daemonname,\n\t\t\t\t   \"svr_renew_job_cred %s renew failed, send_cred returned: %d\", pjob->ji_qs.ji_jobid, rc);\n\t\t} else {\n\t\t\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t\t   LOG_NOTICE, msg_daemonname,\n\t\t\t\t   \"svr_renew_job_cred %s renew was successful\", pjob->ji_qs.ji_jobid);\n\t\t}\n\t} /* else job does not exist - job probably finished */\n}\n\n/**\n * @brief\n *\tThis is the main credentials renew work task. This work task runs every\n *\tSVR_RENEW_CREDS_TM seconds and checks the credential validity of all\n *\trunning jobs. 
If the credentials are too\n *\told then svr_renew_job_cred() work task is planned for the particular\n *\tjob.\n *\n * @param[in] pwt - work task structure\n *\n */\nvoid\nsvr_renew_creds(struct work_task *pwt)\n{\n\tjob *pjob = NULL;\n\tjob *nxpjob = NULL;\n\n\t/* first, set up another work task for next time period */\n\tif (pwt && svr_cred_renew_enable) {\n\t\tif (!set_task(WORK_Timed,\n\t\t\t      (time_now + SVR_RENEW_CREDS_TM),\n\t\t\t      svr_renew_creds, NULL)) {\n\t\t\tlog_err(errno,\n\t\t\t\t__func__,\n\t\t\t\t\"Unable to set task for renew credentials\");\n\t\t}\n\t}\n\n\t/*\n\t * Traverse through the SERVER job list and set renew task if necessary.\n\t * The renew tasks are spread within SVR_RENEW_CREDS_TM\n\t */\n\tpjob = (job *) GET_NEXT(svr_alljobs);\n\n\twhile (pjob) {\n\t\t/* save the next job */\n\t\tnxpjob = (job *) GET_NEXT(pjob->ji_alljobs);\n\n\t\tif ((is_jattr_set(pjob, JOB_ATR_cred_id)) &&\n\t\t    check_job_state(pjob, JOB_STATE_LTR_RUNNING)) {\n\n\t\t\tif ((is_jattr_set(pjob, JOB_ATR_cred_validity)) &&\n\t\t\t    (get_jattr_long(pjob, JOB_ATR_cred_validity) - svr_cred_renew_period <= time_now)) {\n\t\t\t\t/* spread the renew tasks to the SVR_RENEW_CREDS_TM interval */\n\t\t\t\tif (!set_task(WORK_Timed, (time_now + (rand() % SVR_RENEW_CREDS_TM)), svr_renew_job_cred, pjob->ji_qs.ji_jobid)) {\n\t\t\t\t\tlog_err(errno, __func__, \"Unable to set task for renew job credential\");\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\t/* restore the saved next in pjob */\n\t\tpjob = nxpjob;\n\t}\n}\n\n/* @brief\n *\tEnables renewing credentials for running jobs. It starts the renewing\n *\twork task.\n *\n * @param[in]\tpattr\t-\tpointer to attribute structure\n * @param[in]\tpobject -\tpointer to some parent object.(not used here)\n * @param[in]\tactmode\t-\tthe action to take (e.g. 
ATR_ACTION_ALTER)\n *\n * @return\tint\n * @retval\tPBSE_NONE on success\n * @retval\t!= PBSE_NONE on error\n */\nint\nset_cred_renew_enable(attribute *pattr, void *pobject, int actmode)\n{\n#if defined(PBS_SECURITY) && (PBS_SECURITY == KRB5)\n\tif ((actmode == ATR_ACTION_ALTER) ||\n\t    (actmode == ATR_ACTION_RECOV)) {\n\n\t\tsvr_cred_renew_enable = pattr->at_val.at_long;\n\t\tif (svr_cred_renew_enable) {\n\t\t\t(void) set_task(WORK_Timed,\n\t\t\t\t\t(long) (time_now + SVR_RENEW_CREDS_TM),\n\t\t\t\t\tsvr_renew_creds, 0);\n\t\t}\n\t}\n#endif\n\treturn (PBSE_NONE);\n}\n\n/**\n * @brief\n *\tSets the svr_cred_renew_period.\n *\n * @param[in]\tpattr\t-\tpointer to attribute structure\n * @param[in]\tpobject -\tpointer to some parent object (not used here)\n * @param[in]\tactmode\t-\tthe action to take (e.g. ATR_ACTION_ALTER)\n *\n * @return\tint\n * @retval\tPBSE_NONE on success\n * @retval\t!= PBSE_NONE on error\n */\nint\nset_cred_renew_period(attribute *pattr, void *pobject, int actmode)\n{\n\tif ((actmode == ATR_ACTION_ALTER) ||\n\t    (actmode == ATR_ACTION_RECOV)) {\n\n\t\tif ((pattr->at_val.at_long < SVR_RENEW_CREDS_TM)) {\n\t\t\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t\t   LOG_NOTICE, msg_daemonname,\n\t\t\t\t   \"%s value too low, using: %ld\",\n\t\t\t\t   ATTR_cred_renew_period,\n\t\t\t\t   svr_cred_renew_period);\n\t\t\treturn PBSE_BADATVAL;\n\t\t}\n\n\t\tsvr_cred_renew_period = pattr->at_val.at_long;\n\n\t\tif ((svr_cred_renew_period > svr_cred_renew_cache_period)) {\n\t\t\t/* warning */\n\t\t\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t\t   LOG_NOTICE, msg_daemonname,\n\t\t\t\t   \"%s: %ld should be lower than %s: %ld\",\n\t\t\t\t   ATTR_cred_renew_period,\n\t\t\t\t   pattr->at_val.at_long,\n\t\t\t\t   ATTR_cred_renew_cache_period,\n\t\t\t\t   svr_cred_renew_cache_period);\n\t\t}\n\n\t\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t   LOG_NOTICE, msg_daemonname,\n\t\t\t   \"svr_cred_renew_period set to val 
%ld\",\n\t\t\t   svr_cred_renew_period);\n\t}\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\tSets the svr_cred_renew_cache_period.\n *\n * @param[in]\tpattr\t-\tpointer to attribute structure\n * @param[in]\tpobject -\tpointer to some parent object (not used here)\n * @param[in]\tactmode\t-\tthe action to take (e.g. ATR_ACTION_ALTER)\n *\n * @return\tint\n * @retval\tPBSE_NONE on success\n * @retval\t!= PBSE_NONE on error\n */\nint\nset_cred_renew_cache_period(attribute *pattr, void *pobject, int actmode)\n{\n\tif ((actmode == ATR_ACTION_ALTER) ||\n\t    (actmode == ATR_ACTION_RECOV)) {\n\n\t\tif ((pattr->at_val.at_long < SVR_RENEW_CREDS_TM)) {\n\t\t\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t\t   LOG_NOTICE, msg_daemonname,\n\t\t\t\t   \"%s value too low, using: %ld\",\n\t\t\t\t   ATTR_cred_renew_cache_period,\n\t\t\t\t   svr_cred_renew_cache_period);\n\t\t\treturn PBSE_BADATVAL;\n\t\t}\n\n\t\tsvr_cred_renew_cache_period = pattr->at_val.at_long;\n\n\t\tif ((svr_cred_renew_cache_period < svr_cred_renew_period)) {\n\t\t\t/* warning */\n\t\t\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t\t   LOG_NOTICE, msg_daemonname,\n\t\t\t\t   \"%s: %ld should be greater than %s: %ld\",\n\t\t\t\t   ATTR_cred_renew_cache_period,\n\t\t\t\t   pattr->at_val.at_long,\n\t\t\t\t   ATTR_cred_renew_period,\n\t\t\t\t   svr_cred_renew_period);\n\t\t}\n\n\t\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t   LOG_NOTICE, msg_daemonname,\n\t\t\t   \"svr_cred_renew_cache_period set to val %ld\",\n\t\t\t   svr_cred_renew_cache_period);\n\t}\n\treturn PBSE_NONE;\n}\n"
  },
  {
    "path": "src/server/svr_func.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n *\n * @brief\n * \t\tmiscellaneous server functions\n *\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#ifdef PYTHON\n#include \"pbs_python_private.h\"\n#endif\n\n#include \"portability.h\"\n#include <assert.h>\n#include <sys/types.h>\n#include <ctype.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <errno.h>\n#include <sys/stat.h>\n#include <sys/wait.h>\n#include <unistd.h>\n#include <fcntl.h>\n#include <signal.h>\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"log.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"queue.h\"\n#include \"server.h\"\n#include \"pbs_error.h\"\n#include \"sched_cmds.h\"\n#include \"ticket.h\"\n#include \"pbs_nodes.h\"\n#include \"tpp.h\"\n#include \"pbs_license.h\"\n#include \"pbs_share.h\"\n#include \"pbs_entlim.h\"\n#include \"work_task.h\"\n#include \"acct.h\"\n#include \"provision.h\"\n#include \"hook.h\"\n#include \"net_connect.h\"\n#include \"libpbs.h\"\n#include \"batch_request.h\"\n#include \"svrfunc.h\"\n#include \"pbs_db.h\"\n#include \"libutil.h\"\n#include \"pbs_ecl.h\"\n#include \"pbs_sched.h\"\n#include \"liblicense.h\"\n\nextern struct python_interpreter_data svr_interp_data;\nextern pbs_list_head svr_runjob_hooks;\nextern pbs_list_head 
svr_deferred_req;\n\nextern time_t time_now;\nextern char *resc_in_err;\nextern char *msg_daemonname;\nextern char server_name[];\n\nextern pbs_list_head svr_allconns;\n\n#define ERR_MSG_SIZE 256\n#define MAXNLINE 2048\n#define SERVER_ID \"1\"\n\n/*\n * application provisioning returns success status as 1\n */\n#define APP_PROV_SUCCESS 1\n\nextern char *path_hooks_workdir;\nextern char *path_priv;\n\nchar *path_prov_track;\nint max_concurrent_prov = PBS_MAX_CONCURRENT_PROV;\nint provision_timeout;\n\n/*\n * the top level list of all vnodes queued for provisioning\n */\npbs_list_head prov_allvnodes;\n\nstatic int is_runnable(job *, struct prov_vnode_info *);\nextern void set_srv_prov_attributes();\nstatic void del_prov_vnode_entry(job *);\nextern int resize_prov_table(int);\nstatic void prov_startjob(struct work_task *ptask);\nextern enum failover_state are_we_primary(void);\n\n/*\n * Added for History jobs.\n */\nextern void svr_clean_job_history(struct work_task *);\nlong svr_history_enable = 0;\t\t\t /* disable by default */\nlong svr_history_duration = SVR_JOBHIST_DEFAULT; /* default 2 weeks */\n/* Added for Trillion Jobid*/\nlong long svr_max_job_sequence_id = SVR_MAX_JOB_SEQ_NUM_DEFAULT; /* default max job id 9999999 */\n\n/*\n * Added for Node_fail_requeue\n */\nlong node_fail_requeue = PBS_NODE_FAIL_REQUEUE_DEFAULT; /* default value for node_fail_requeue 310 */\n\n/*\n * Added for jobscript_max_size\n */\nstruct attribute attr_jobscript_max_size; /* to store default size value for jobscript_max_size */\n\nextern int do_sync_mom_hookfiles;\nextern int sync_mom_hookfiles_replies_pending;\n\n/*\n * Added for licensing\n */\nextern struct work_task *init_licensing_task;\nextern struct work_task *get_more_licenses_task;\nextern struct work_task *licenses_linger_time_task;\nextern void get_more_licenses(struct work_task *ptask);\nextern void return_lingering_licenses(struct work_task *ptask);\n\n/*\n * Miscellaneous server functions\n */\nextern void 
db_to_svr_svr(struct server *ps, pbs_db_svr_info_t *pdbsvr);\n#ifdef NAS /* localmod 005 */\nextern int write_single_node_state(struct pbsnode *np);\n#endif /* localmod 005 */\n\nchar primary_host[PBS_MAXHOSTNAME + 1]; /* host_name of primary */\n\n/*\n * the following array of strings is used in decoding/encoding the server state\n */\nstatic char *svr_idle = \"Idle\";\nstatic char *svr_sched = \"Scheduling\";\nstatic char *svr_state_names[] = {\n\t\"\",\t\t     /* SV_STATE_DOWN */\n\t\"\",\t\t     /* SV_STATE_INIT */\n\t\"Hot_Start\",\t     /* SV_STATE_HOT  */\n\t\"Active\",\t     /* SV_STATE_RUN  */\n\t\"Terminating_Delay\", /* SV_STATE_SHUTDEL */\n\t\"Terminating\",\t     /* SV_STATE_SHUTIMM */\n\t\"Terminating\"\t     /* SV_STATE_SHUTSIG */\n};\n\n/**\n * @brief\n * \t\tencode_svrstate - encode the current server state from the internal\n *\t\tinteger to a state name string.\n *\n * @param[in]\tpattr\t-\tptr to attribute\n * @param[in,out]\tphead\t-\thead of attrlist list\n * @param[in]\tatname\t-\tattribute name\n * @param[in]\trsname\t-\tresource name\n * @param[in]\tmode\t-\tencode mode\n * @param[out]\trtnl\t-\tRETURN: ptr to svrattrl\n *\n * @return\tint\n * @retval\t0\t: don't bother to encode it\n * @retval\t1\t: encoded.\n */\n\nint\nencode_svrstate(const attribute *pattr, pbs_list_head *phead, char *atname, char *rsname, int mode, svrattrl **rtnl)\n{\n\tsvrattrl *pal;\n\tchar *psname;\n\n\tif (!pattr)\n\t\treturn (-1);\n\tif ((mode == ATR_ENCODE_SAVE) ||\n\t    (pattr->at_val.at_long <= SV_STATE_DOWN) ||\n\t    (pattr->at_val.at_long > SV_STATE_SHUTSIG))\n\t\treturn (0); /* don't bother to encode it */\n\n\tpsname = svr_state_names[pattr->at_val.at_long];\n\tif (pattr->at_val.at_long == SV_STATE_RUN) {\n\t\tif (get_sattr_long(SVR_ATR_scheduling) == 0)\n\t\t\tpsname = svr_idle;\n\t\telse if (dflt_scheduler && dflt_scheduler->sc_cycle_started == 1)\n\t\t\tpsname = svr_sched;\n\t}\n\n\tpal = attrlist_create(atname, rsname, strlen(psname) + 1);\n\tif (pal 
== NULL)\n\t\treturn (-1);\n\t(void) strcpy(pal->al_value, psname);\n\tpal->al_flags = pattr->at_flags;\n\tappend_link(phead, &pal->al_link, pal);\n\tif (rtnl)\n\t\t*rtnl = pal;\n\treturn (1);\n}\n\n/**\n * @brief\n * \t\tset_resc_assigned - updates server and/or queue resources_assigned\n *\t\tattribute depending on to what kind of object the first argument\n *\t\tpoints and possibly on what value of \"state\" the object has\n *\n * @param[in,out]\tpobj\t-\tpointer to reservation or object based on the type\n * @param[in]\tobjtype\t-\t0=job, 1=reservation\n * @param[in]\top\t-\toperation to be performed.\n *\n */\nvoid\nset_resc_assigned(void *pobj, int objtype, enum batch_op op)\n{\n\tresc_resv *presv = NULL;\n\tresource_def *rscdef;\n\tjob *pjob = NULL;\n\tresource *pr = NULL;\n\tresource *rescp = NULL;\n\tattribute *queru = NULL;\n\tattribute *sysru = NULL;\n\n\t/*First part of this lengthy function figures out which\n\t *\"resources_assigned\" lists need to get updated.  Most of\n\t *the time it's two lists that will get updated, but it can\n\t *be only one (or even none, if the resources have already\n\t *been accounted earlier) if for example we have a job belonging\n\t *to a reservation and the job is told to run or the job exits\n\t */\n\n\tif (!objtype) {\n\t\tpjob = (job *) pobj;\n\n\t\tif ((pjob->ji_qhdr == 0) ||\n\t\t    (pjob->ji_qhdr->qu_qs.qu_type != QTYPE_Execution))\n\t\t\treturn;\n\n\t\tif (op == INCR) {\n\t\t\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_RescAssn)\n\t\t\t\treturn; /* already added in */\n\t\t\tpjob->ji_qs.ji_svrflags |= JOB_SVFLG_RescAssn;\n\t\t} else if (op == DECR) {\n\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_RescAssn) == 0)\n\t\t\t\treturn; /* not currently included */\n\t\t\tpjob->ji_qs.ji_svrflags &= ~JOB_SVFLG_RescAssn;\n\t\t} else {\n\t\t\treturn; /* invalid op */\n\t\t}\n\n\t\trescp = (resource *) GET_NEXT(get_jattr_list(pjob, JOB_ATR_resource));\n\t\tif ((check_job_substate(pjob, JOB_SUBSTATE_SUSPEND)) ||\n\t\t    
(check_job_substate(pjob, JOB_SUBSTATE_SCHSUSP))) {\n\t\t\t/* If the resources_released attribute is not set for this suspended job then release all\n\t\t\t * resources assigned to the job */\n\t\t\tif ((is_jattr_set(pjob, JOB_ATR_resc_released)) == 0)\n\t\t\t\trescp = (resource *) GET_NEXT(get_jattr_list(pjob, JOB_ATR_resource));\n\t\t\telse {\n\t\t\t\t/* Use resource_released_list for updating queue/server resources,\n\t\t\t\t * If resource_released_list is not present then create it by\n\t\t\t\t * using resources_released attribute.\n\t\t\t\t */\n\t\t\t\tif (is_jattr_set(pjob, JOB_ATR_resc_released_list))\n\t\t\t\t\trescp = (resource *) GET_NEXT(get_jattr_list(pjob, JOB_ATR_resc_released_list));\n\t\t\t\telse {\n\t\t\t\t\tif (update_resources_rel(pjob, get_jattr(pjob, JOB_ATR_resc_released), INCR) != 0)\n\t\t\t\t\t\trescp = (resource *) GET_NEXT(get_jattr_list(pjob, JOB_ATR_resource));\n\t\t\t\t\telse\n\t\t\t\t\t\trescp = (resource *) GET_NEXT(get_jattr_list(pjob, JOB_ATR_resc_released_list));\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\t/* If job is not suspended then just release all resources assigned to the job */\n\t\t\trescp = (resource *) GET_NEXT(get_jattr_list(pjob, JOB_ATR_resource));\n\t\t\tif (is_jattr_set(pjob, JOB_ATR_resc_released_list))\n\t\t\t\trescp = (resource *) GET_NEXT(get_jattr_list(pjob, JOB_ATR_resc_released_list));\n\t\t}\n\t\tsysru = get_sattr(SVR_ATR_resource_assn);\n\t\tqueru = get_qattr(pjob->ji_qhdr, QE_ATR_ResourceAssn);\n\n\t\tif (pjob->ji_myResv &&\n\t\t    (pjob->ji_myResv->ri_qs.ri_state == RESV_RUNNING ||\n\t\t     pjob->ji_myResv->ri_qs.ri_state == RESV_DELETED ||\n\t\t     pjob->ji_myResv->ri_qs.ri_state == RESV_BEING_DELETED ||\n\t\t     pjob->ji_myResv->ri_qs.ri_state == RESV_FINISHED)) {\n\n\t\t\t/*for jobs running under a reservation, server's\n\t\t\t *\"resources_assigned\" is updated when reservation\n\t\t\t *itself begins running or is terminated.  
So don't touch\n\t\t\t *the server's resources_assigned\n\t\t\t */\n\t\t\tsysru = NULL;\n\t\t}\n\t} else if (objtype == 1) {\n\n\t\tpresv = (resc_resv *) pobj;\n\t\tqueru = NULL;\n\t\tsysru = NULL;\n\t\trescp = (resource *) GET_NEXT(get_rattr_list(presv, RESV_ATR_resource));\n\t\tif (presv->ri_parent != NULL &&\n\t\t    (presv->ri_parent->ri_qs.ri_state == RESV_RUNNING ||\n\t\t     presv->ri_parent->ri_qs.ri_state == RESV_DELETED ||\n\t\t     presv->ri_parent->ri_qs.ri_state == RESV_BEING_DELETED ||\n\t\t     presv->ri_parent->ri_qs.ri_state == RESV_FINISHED)) {\n\t\t\t/*if the reservation has a parent (as reservation jobs can)\n\t\t\t *the parent's \"resources_assigned\" list is the relevant list\n\t\t\t *to modify\n\t\t\t *Remark: The -server's- \"resources_assigned\" updates when the\n\t\t\t *parent starts running or is terminated\n\t\t\t */\n\t\t\tsysru = get_qattr(presv->ri_parent->ri_qp, QE_ATR_ResourceAssn);\n\t\t} else if (presv->ri_parent == NULL &&\n\t\t\t   (presv->ri_qs.ri_state == RESV_RUNNING ||\n\t\t\t    presv->ri_qs.ri_state == RESV_DELETED ||\n\t\t\t    presv->ri_qs.ri_state == RESV_BEING_DELETED ||\n\t\t\t    presv->ri_qs.ri_state == RESV_FINISHED)) {\n\t\t\t/*when reservation object has no parent reservation, the server's\n\t\t\t *\"resources_assigned\" list is the one that's relevant in this case.\n\t\t\t *if the reservation object is that of a \"reservation job\",\n\t\t\t *the job's queue needs to have its \"resources_assigned\" list\n\t\t\t *modified.  Otherwise the \"queru\" should be set NULL\n\t\t\t */\n\n\t\t\tsysru = get_sattr(SVR_ATR_resource_assn);\n\t\t}\n\t}\n\n\t/*\n\t *for each resource in the job (or reservation's or reservation-job's)\n\t *list, check in the definition for that resource to see if the \"RASSN\"\n\t *flag is turned on.  
If the flag is set, modify the appropriate\n\t *\"resources_assigned\" lists (\"queue\", \"sys\" or both) to account for\n\t *the amount of the resources being consumed or relinquished by the object.\n\t *\n\t *Note: if we aren't supposed to be updating the server's or the queue's\n\t *\t\"resources_assigned\" the pointers \"sysru\"/\"queru\" should be NULL\n\t */\n\twhile (rescp) {\n\t\trscdef = rescp->rs_defin;\n\n\t\t/* if resource usage is to be tracked */\n\t\tif ((rscdef->rs_flags & ATR_DFLAG_RASSN) &&\n\t\t    (is_attr_set(&rescp->rs_value))) {\n\n\t\t\t/* update system attribute of resources assigned */\n\n\t\t\tif (sysru) {\n\t\t\t\tpr = find_resc_entry(sysru, rscdef);\n\t\t\t\tif (pr == NULL) {\n\t\t\t\t\tpr = add_resource_entry(sysru, rscdef);\n\t\t\t\t\tif (pr == NULL)\n\t\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\trscdef->rs_set(&pr->rs_value, &rescp->rs_value, op);\n\t\t\t\tsysru->at_flags |= ATR_MOD_MCACHE;\n\t\t\t}\n\n\t\t\t/* update queue attribute of resources assigned */\n\n\t\t\tif (queru) {\n\t\t\t\tpr = find_resc_entry(queru, rscdef);\n\t\t\t\tif (pr == NULL) {\n\t\t\t\t\tpr = add_resource_entry(queru, rscdef);\n\t\t\t\t\tif (pr == NULL)\n\t\t\t\t\t\treturn;\n\t\t\t\t}\n\t\t\t\trscdef->rs_set(&pr->rs_value, &rescp->rs_value, op);\n\t\t\t\tqueru->at_flags |= ATR_MOD_MCACHE;\n\t\t\t}\n\t\t}\n\t\trescp = (resource *) GET_NEXT(rescp->rs_link);\n\t}\n\n\t/* if a job, update resource_assigned at the node level */\n\tif (objtype == 1)\n\t\tupdate_node_rassn(get_rattr(presv, RESV_ATR_resv_nodes), op);\n\telse if ((objtype == 0) && (pjob->ji_myResv == NULL)) {\n\t\tif (is_jattr_set(pjob, JOB_ATR_resc_released))\n\t\t\t/* This is just the normal case when job was not suspended but trying to run| end */\n\t\t\tupdate_job_node_rassn(pjob, get_jattr(pjob, JOB_ATR_resc_released), op);\n\t\telse\n\t\t\t/* updating all resources from exec vnode attribute */\n\t\t\tupdate_job_node_rassn(pjob, get_jattr(pjob, JOB_ATR_exec_vnode), op);\n\t\tif (is_jattr_set(pjob, 
JOB_ATR_exec_vnode_deallocated)) {\n\t\t\tupdate_job_node_rassn(pjob, get_jattr(pjob, JOB_ATR_exec_vnode_deallocated), op);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\tck_chkpnt - check validity of job checkpoint attribute value\n *\n * @param[in]\tpattr\t-\tcheckpoint attribute\n * @param[in]\tpobject\t-\tjob object\n * @param[in]\tmode\t-\taction mode\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t!0\t: PBS Error Code\n */\nint\nck_chkpnt(attribute *pattr, void *pobject, int mode)\n{\n\tchar *val;\n\n\tval = pattr->at_val.at_str;\n\tif (val == NULL)\n\t\treturn (0);\n\n\tif ((*val == 'n') || (*val == 's') || (*val == 'u')) {\n\t\tif (*(val + 1) != '\\0')\n\t\t\treturn (PBSE_BADATVAL);\n\t} else if (*val == 'c') {\n\t\tval++;\n\t\tif (*val != '\\0') {\n\t\t\tif (*val++ != '=')\n\t\t\t\treturn (PBSE_BADATVAL);\n\t\t\tif (atoi(val) <= 0)\n\t\t\t\treturn (PBSE_BADATVAL);\n\t\t}\n\t} else if (*val == 'w') {\n\t\tval++;\n\t\tif (*val != '\\0') {\n\t\t\tif (*val++ != '=')\n\t\t\t\treturn (PBSE_BADATVAL);\n\t\t\tif (atoi(val) <= 0)\n\t\t\t\treturn (PBSE_BADATVAL);\n\t\t}\n\t} else\n\t\treturn (PBSE_BADATVAL);\n\n\t/* If the checkpoint attribute is being altered, then check    */\n\t/* against the queue's Checkpoint_min attribute as when queued */\n\tif (mode == ATR_ACTION_ALTER)\n\t\teval_chkpnt((job *) pobject, get_qattr(((job *) pobject)->ji_qhdr, QE_ATR_ChkptMin));\n\treturn (0);\n}\n\n/**\n * @brief\n *      keepfiles_action - check validity of job keepfiles attribute value\n *\n * @param[in]   pattr   -   keepfiles attribute\n * @param[in]   pobject -   job object\n * @param[in]   mode    -   action mode\n *\n * @return  int\n * @retval  0   : success\n * @retval  !0  : PBS Error Code\n */\nint\nkeepfiles_action(attribute *pattr, void *pobject, int mode)\n{\n\tif ((mode != ATR_ACTION_ALTER) && (mode != ATR_ACTION_NEW))\n\t\treturn PBSE_NONE;\n\tif (pobject && check_job_state((job *) pobject, JOB_STATE_LTR_RUNNING))\n\t\treturn PBSE_MODATRRUN;\n\treturn 
verify_keepfiles_common(pattr->at_val.at_str);\n}\n\n/**\n * @brief\n *      removefiles_action - check validity of job removefiles attribute value\n *\n * @param[in]   pattr   -   remove attribute\n * @param[in]   pobject -   job object\n * @param[in]   mode    -   action mode\n *\n * @return  int\n * @retval  0   : success\n * @retval  !0  : PBS Error Code\n */\nint\nremovefiles_action(attribute *pattr, void *pobject, int mode)\n{\n\tif ((mode != ATR_ACTION_ALTER) && (mode != ATR_ACTION_NEW))\n\t\treturn PBSE_NONE;\n\tif (pobject && check_job_state((job *) pobject, JOB_STATE_LTR_RUNNING))\n\t\treturn PBSE_MODATRRUN;\n\treturn verify_removefiles_common(pattr->at_val.at_str);\n}\n\n/**\n * @brief\n * \t\tcred_name_okay - action routine for the \"required_cred\" attribute.\n *\t\tCheck to make sure the cred name is okay.\n *\n * @param[in]\tpattr\t-\tpointer to attribute structure\n * @param[in]\tpobj\t-\tnot used\n * @param[in]\tactmode\t-\taction mode\n *\n * @return\tint\n * @retval\tzero\t: success\n * @retval\tnonzero\t: failure\n */\n\nint\ncred_name_okay(attribute *pattr, void *pobj, int actmode)\n{\n\tstatic const char *cred_list[] = {\n\t\tPBS_CREDNAME_AES,\n\t\tNULL /* must be last */\n\t};\n\n\tif (actmode == ATR_ACTION_ALTER) {\n\t\tchar *val = pattr->at_val.at_str;\n\t\tint i;\n\n\t\tfor (i = 0; cred_list[i]; i++) {\n\t\t\tif (strcmp(cred_list[i], val) == 0)\n\t\t\t\treturn PBSE_NONE;\n\t\t}\n\t\treturn PBSE_BADATVAL;\n\t}\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * \t\taction_resv_retry_time - action routine for the server's\n * \t\t\"reserve_retry_time\" attribute.\n *\n * @param[in]\tpattr\t-\tpointer to attribute structure\n * @param[in]\tpobj\t-\tnot used\n * @param[in]\tactmode\t-\taction mode\n *\n * @return\tint\n * @retval\tzero\t: success\n * @retval\tnonzero\t: failure\n */\nint\naction_reserve_retry_time(attribute *pattr, void *pobj, int actmode)\n{\n\tif (actmode == ATR_ACTION_ALTER ||\n\t    actmode == ATR_ACTION_RECOV) {\n\n\t\tif 
(pattr->at_val.at_long <= 0)\n\t\t\treturn PBSE_BADATVAL;\n\t\tATR_UNSET(get_sattr(SVR_ATR_resv_retry_init));\n\t\tresv_retry_time = pattr->at_val.at_long;\n\t}\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * \t\taction_resv_retry_init - action routine for the server's\n * \t\t\"reserve_retry_init\" attribute.\n *\n * @param[in]\tpattr\t-\tpointer to attribute structure\n * @param[in]\tpobj\t-\tnot used\n * @param[in]\tactmode\t-\taction mode\n *\n * @return\tint\n * @retval\tzero\t: success\n * @retval\tnonzero\t: failure\n */\nint\naction_reserve_retry_init(attribute *pattr, void *pobj, int actmode)\n{\n\tif (actmode == ATR_ACTION_ALTER ||\n\t    actmode == ATR_ACTION_RECOV) {\n\n\t\tif (pattr->at_val.at_long <= 0)\n\t\t\treturn PBSE_BADATVAL;\n\t\tset_sattr_l_slim(SVR_ATR_resv_retry_time, pattr->at_val.at_long, SET);\n\n\t\tresv_retry_time = pattr->at_val.at_long;\n\t}\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * \t\tdummy action function for rpp_retry\n *\n * @param[in]\tpattr\t-\tpointer to attribute structure\n * @param[in]\tpobj\t-\tnot used\n * @param[in]\tactmode\t-\taction mode\n *\n * @return\tint\n * @retval\tzero\t: success\n * @retval\tnonzero\t: failure\n */\nint\nset_rpp_retry(attribute *pattr, void *pobj, int actmode)\n{\n\tlog_err(-1, __func__, \"rpp_retry is deprecated. This functionality is now automatic without needing this attribute\");\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * \t\tdummy action function for rpp_highwater\n *\n * @param[in]\tpattr\t-\tpointer to attribute structure\n * @param[in]\tpobj\t-\tnot used\n * @param[in]\tactmode\t-\taction mode\n *\n * @return\tint\n * @retval\tzero\t: success\n * @retval\tnonzero\t: failure\n */\nint\nset_rpp_highwater(attribute *pattr, void *pobj, int actmode)\n{\n\tlog_err(-1, __func__, \"rpp_highwater is deprecated. 
This functionality is now automatic without needing this attribute\");\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\t\tis_valid_resource - action function to make sure attribute value is\n *\t\t\t        a valid resource of type string\n *\n * @param[in]\tpattr\t-\tpointer to attribute structure\n * @param[in]\tpobj\t-\tnot used\n * @param[in]\tactmode\t-\taction mode\n *\n * @return\tint\n * @retval\tzero\t: success\n * @retval\tnonzero\t: failure\n */\nint\nis_valid_resource(attribute *pattr, void *pobject, int actmode)\n{\n\tint i;\n\tstruct resource_def *pres;\n\n\tif (actmode == ATR_ACTION_FREE)\n\t\treturn (PBSE_NONE);\n\n\tif (is_attr_set(pattr) == 0)\n\t\treturn (PBSE_NONE);\n\n\tfor (i = 0; i < pattr->at_val.at_arst->as_usedptr; ++i) {\n\t\tpres = find_resc_def(svr_resc_def, pattr->at_val.at_arst->as_string[i]);\n\t\tif (pres == NULL)\n\t\t\treturn PBSE_UNKRESC;\n\n\t\tif ((pres->rs_type != ATR_TYPE_STR) &&\n\t\t    (pres->rs_type != ATR_TYPE_ARST))\n\t\t\treturn PBSE_RESCNOTSTR;\n\t}\n\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * \t\taction_svr_iteration - the \"action\" routine for the server\n *\t\tscheduler_iteration attribute\n * @param[in]\tpattr\t-\tpointer to attribute structure\n * @param[in]\tpobject -\tpointer to some parent object.\n * @param[in]\tactmode\t-\tthe action to take (e.g. 
ATR_ACTION_ALTER)\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t!0\t: PBSE Error Code\n */\nint\naction_svr_iteration(attribute *pattr, void *pobj, int mode)\n{\n\t/* set this attribute on main scheduler */\n\tif (dflt_scheduler) {\n\t\tif (mode == ATR_ACTION_NEW || mode == ATR_ACTION_ALTER || mode == ATR_ACTION_RECOV) {\n\t\t\tset_sched_attr_l_slim(dflt_scheduler, SCHED_ATR_schediteration, pattr->at_val.at_long, SET);\n\t\t\tsched_save_db(dflt_scheduler);\n\t\t}\n\t}\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * \t\tdeflt_chunk_action - the \"action\" routine for the queue and server\n *\t\tdefault_chunk attribute\n * @par\n *\t\tBuilds an array of key_value_pair structures for the defaults\n *\n * @param[in]\tpattr\t-\tpointer to attribute structure\n * @param[in]\tpobject -\tpointer to some parent object.\n * @param[in]\tactmode\t-\tthe action to take (e.g. ATR_ACTION_ALTER)\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t!0\t: PBSE Error Code\n */\nint\ndeflt_chunk_action(attribute *pattr, void *pobj, int mode)\n{\n\tint i;\n\tint j;\n\tint nelem;\n\tint *nkv;\n\tint old_perm;\n\tstruct key_value_pair **pkvp;\n\tresource *presc;\n\tpbs_list_head head;\n\tsvrattrl *psvratrl;\n\tint rc;\n\textern int resc_access_perm;\n\n\tCLEAR_HEAD(head);\n\n\tif (pobj == (void *) &server) {\n\t\tpkvp = &server.sv_seldft;\n\t\tnkv = &server.sv_nseldft;\n\t} else {\n\t\tpkvp = &((pbs_queue *) pobj)->qu_seldft;\n\t\tnkv = &((pbs_queue *) pobj)->qu_nseldft;\n\t}\n\n\t/* free any existing key_value_pair structure */\n\tif (*pkvp) {\n\t\tfor (i = 0; i < *nkv; ++i) {\n\t\t\tfree(((*pkvp) + i)->kv_keyw);\n\t\t\tfree(((*pkvp) + i)->kv_val);\n\t\t}\n\t\tfree(*pkvp);\n\t\t*pkvp = NULL;\n\t}\n\t*nkv = 0;\n\n\tif (((is_attr_set(pattr)) == 0) ||\n\t    (mode == ATR_ACTION_FREE))\n\t\treturn 0;\n\n\t/* validate and count the number of pairs in the default attribute */\n\tnelem = 0;\n\n\tpresc = GET_NEXT(pattr->at_val.at_list);\n\twhile (presc) {\n\t\tif 
((presc->rs_defin->rs_flags & ATR_DFLAG_CVTSLT) == 0) {\n\t\t\tif ((resc_in_err = strdup(presc->rs_defin->rs_name)) == NULL)\n\t\t\t\treturn PBSE_SYSTEM;\n\t\t\treturn PBSE_INVALJOBRESC;\n\t\t}\n\t\tnelem++;\n\t\tpresc = GET_NEXT(presc->rs_link);\n\t}\n\n\t/* encode the default resources so we can get the values */\n\t/* need to save & restore the current value in case we are recovering */\n\told_perm = resc_access_perm;\n\tresc_access_perm = ATR_DFLAG_RDACC;\n\trc = encode_resc(pattr, &head, ATTR_DefaultChunk, NULL, ATR_ENCODE_CLIENT, NULL);\n\tresc_access_perm = old_perm;\n\tif (rc < 0) {\n\t\treturn PBSE_SYSTEM;\n\t}\n\n\t*pkvp = (struct key_value_pair *) malloc((nelem + 1) * sizeof(struct key_value_pair));\n\tif (*pkvp == NULL) {\n\t\tfree_attrlist(&head);\n\t\treturn PBSE_SYSTEM;\n\t}\n\n\t/* now set the name and value words */\n\ti = 0;\n\tpsvratrl = GET_NEXT(head);\n\twhile (psvratrl && i < nelem) {\n\t\tif ((((*pkvp) + i)->kv_keyw = strdup(psvratrl->al_resc)) == NULL) {\n\t\t\tfree_attrlist(&head);\n\t\t\tif (*pkvp) {\n\t\t\t\tfor (j = 0; j < i; ++j) {\n\t\t\t\t\tfree(((*pkvp) + j)->kv_keyw);\n\t\t\t\t\tfree(((*pkvp) + j)->kv_val);\n\t\t\t\t}\n\t\t\t\tfree(*pkvp);\n\t\t\t\t*pkvp = NULL;\n\t\t\t}\n\t\t\treturn PBSE_SYSTEM;\n\t\t}\n\t\tif ((((*pkvp) + i)->kv_val = strdup(psvratrl->al_value)) == NULL) {\n\t\t\tfree_attrlist(&head);\n\t\t\tif (*pkvp) {\n\t\t\t\tfree(((*pkvp) + i)->kv_keyw); /* keyword for this entry was already duplicated */\n\t\t\t\tfor (j = 0; j < i; ++j) {\n\t\t\t\t\tfree(((*pkvp) + j)->kv_keyw);\n\t\t\t\t\tfree(((*pkvp) + j)->kv_val);\n\t\t\t\t}\n\t\t\t\tfree(*pkvp);\n\t\t\t\t*pkvp = NULL;\n\t\t\t}\n\t\t\treturn PBSE_SYSTEM;\n\t\t}\n\t\t++i;\n\t\tpsvratrl = GET_NEXT(psvratrl->al_link);\n\t}\n\tfree_attrlist(&head); /* free svrattrl list created by the encode */\n\n\t*nkv = i;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tset_license_location - action function for the pbs_license_info\n * \t\t\t\tserver attribute.\n *\n * @param[in]\tpattr\t-\tpointer to attribute structure\n * @param[in]\tpobject -\tpointer to some parent object.(not 
used here)\n * @param[in]\tactmode\t-\tthe action to take (e.g. ATR_ACTION_ALTER)\n *\n * @return\tint\n * @retval\tPBSE_NONE\t: success\n */\nint\nset_license_location(attribute *pattr, void *pobject, int actmode)\n{\n\tif (actmode == ATR_ACTION_FREE)\n\t\treturn (PBSE_NONE);\n\n\tif ((actmode == ATR_ACTION_ALTER) ||\n\t    (actmode == ATR_ACTION_RECOV)) {\n\t\tint delay = 5;\n\n\t\tif (pbs_licensing_location)\n\t\t\tfree(pbs_licensing_location);\n\n\t\tpbs_licensing_location = strdup(pattr->at_val.at_str ? pattr->at_val.at_str : \"\");\n\t\tif (pbs_licensing_location == NULL) {\n\t\t\tlog_err(errno, __func__, \"warning: strdup failed!\");\n\t\t\treturn PBSE_SYSTEM;\n\t\t}\n\n\t\tif (actmode == ATR_ACTION_RECOV)\n\t\t\tdelay = 0;\n\n\t\tinit_licensing_task = set_task(WORK_Timed, time_now + delay, init_licensing, NULL);\n\t}\n\n\treturn (PBSE_NONE);\n}\n\n/**\n * @brief\n *\t\tunset_license_location - set the floating licensing\n * \t\t\t\tserver attribute to default value.\n *\n */\nvoid\nunset_license_location(void)\n{\n\n\tif (pbs_licensing_location) {\n\n\t\tif (pbs_licensing_location[0] != '\\0') {\n\t\t\tlic_close();\n\t\t\tunlicense_nodes();\n\t\t\tmemset(&license_counts, 0, sizeof(license_counts));\n\t\t} else\n\t\t\treset_license_counters(&license_counts);\n\n\t\tfree(pbs_licensing_location);\n\t\tpbs_licensing_location = NULL;\n\t}\n}\n\n/*\n *\n * @brief\n *\tSet node_fail_requeue attribute.\n *\n * @par Functionality:\n *\tThis function sets the node_fail_requeue server attribute.\n *\tSince node_fail_requeue can be a negative value no check\n *\tfor < 0 is performed.\n *\n * @param[in]\tpattr\t-\tptr to attribute\n * @param[in]\tpobject\t-\tpointer to some parent object.(required but unused here)\n * @param[in]\tactmode\t-\tthe action to take (e.g. 
ATR_ACTION_ALTER)\n *\n * @return\tint\n * @retval\tPBSE_NONE\n *\n */\nint\nset_node_fail_requeue(attribute *pattr, void *pobject, int actmode)\n{\n\tif (actmode == ATR_ACTION_FREE)\n\t\treturn (PBSE_NONE);\n\n\tif ((actmode == ATR_ACTION_ALTER) ||\n\t    (actmode == ATR_ACTION_RECOV)) {\n\n\t\tnode_fail_requeue = pattr->at_val.at_long;\n\t\tsprintf(log_buffer,\n\t\t\t\"node_fail_requeue value changed to %ld\",\n\t\t\tnode_fail_requeue);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_NOTICE, msg_daemonname, log_buffer);\n\t}\n\n\treturn (PBSE_NONE);\n}\n\n/*\n *\n * @brief\n *\tUnset node_fail_requeue attribute.\n *\n * @par Functionality:\n *\tThis function unsets the node_fail_requeue server attribute\n *\tby reverting it back to its default value.\n *\n * @param[in]\tvoid\n *\n * @return\tvoid\n *\n */\nvoid\nunset_node_fail_requeue(void)\n{\n\tnode_fail_requeue = PBS_NODE_FAIL_REQUEUE_DEFAULT;\n\n\tsprintf(log_buffer,\n\t\t\"node_fail_requeue reverting back to default val %ld\",\n\t\tnode_fail_requeue);\n\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t  LOG_NOTICE, msg_daemonname, log_buffer);\n}\n\n/*\n *\n * @brief\n *\tSet resend_term_delay attribute.\n *\n * @par Functionality:\n *\tThis function sets the resend_term_delay server attribute.\n *\tresend_term_delay cannot be < 0 or > 1800.\n *\n * @param[in]\tpattr\t-\tptr to attribute\n * @param[in]\tpobject\t-\tpointer to some parent object.(required but unused here)\n * @param[in]\tactmode\t-\tthe action to take (e.g. 
ATR_ACTION_ALTER)\n *\n * @return\tint\n * @retval\tPBSE_NONE\n *\n */\nint\nset_resend_term_delay(attribute *pattr, void *pobject, int actmode)\n{\n\tif (actmode == ATR_ACTION_FREE)\n\t\treturn (PBSE_NONE);\n\n\tif ((actmode == ATR_ACTION_ALTER) ||\n\t    (actmode == ATR_ACTION_RECOV)) {\n\n\t\tif (pattr->at_val.at_long >= 0 && pattr->at_val.at_long <= 1800) {\n\t\t\tset_sattr_l_slim(SVR_ATR_ResendTermDelay, pattr->at_val.at_long, SET);\n\t\t} else {\n\t\t\treturn (PBSE_BADATVAL);\n\t\t}\n\t\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\tLOG_NOTICE, msg_daemonname, \"resend_term_delay value changed to %ld\",\n\t\t\tpattr->at_val.at_long);\n\t}\n\n\treturn (PBSE_NONE);\n}\n\n/*\n *\n * @brief\n *\tUnset resend_term_delay attribute.\n *\n * @par Functionality:\n *\tThis function unsets the resend_term_delay server attribute\n *\tby reverting it back to its default value.\n *\n * @param[in]\tvoid\n *\n * @return\tvoid\n *\n */\nvoid\nunset_resend_term_delay(void)\n{\n\tset_sattr_l_slim(SVR_ATR_ResendTermDelay,\n\t\tPBS_RESEND_TERM_DELAY_DEFAULT, SET);\n\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\tLOG_NOTICE, msg_daemonname,\n\t\t\"resend_term_delay reverting back to default val %ld\",\n\t\tPBS_RESEND_TERM_DELAY_DEFAULT);\n}\n\n/**\n * @brief\n *\t\tset_license_min - action function for the pbs_license_min server\n *\t\t\t  attribute.\n *\n * @param[in]\tpattr\t-\tpointer to attribute structure\n * @param[in]\tpobject -\tpointer to some parent object.(not used here)\n * @param[in]\tactmode\t-\tthe action to take (e.g. 
ATR_ACTION_ALTER)\n *\n * @return\tint\n * @retval\tPBSE_NONE\t: success\n */\nint\nset_license_min(attribute *pattr, void *pobject, int actmode)\n{\n\tif (actmode == ATR_ACTION_FREE)\n\t\treturn (PBSE_NONE);\n\n\tif ((actmode == ATR_ACTION_ALTER) ||\n\t    (actmode == ATR_ACTION_RECOV)) {\n\n\t\tif ((pattr->at_val.at_long < 0) ||\n\t\t    (pattr->at_val.at_long > licensing_control.licenses_max)) {\n\t\t\treturn (PBSE_LICENSE_MIN_BADVAL);\n\t\t}\n\t\tlicensing_control.licenses_min = pattr->at_val.at_long;\n\n\t\tif (licensing_control.licenses_min > licensing_control.licenses_checked_out)\n\t\t\tif (get_more_licenses_task == NULL)\n\t\t\t\tget_more_licenses_task = set_task(WORK_Timed, time(NULL) + 2, get_more_licenses, NULL);\n\t}\n\n\treturn (PBSE_NONE);\n}\n\n/**\n * @brief\n *\t\tunset_license_min - set the pbs_license_min server\n *\t\t\t  attribute to default value.\n */\nvoid\nunset_license_min(void)\n{\n\tlicensing_control.licenses_min = PBS_MIN_LICENSING_LICENSES;\n\n\tsprintf(log_buffer,\n\t\t\"pbs_license_min reverting back to default val %ld\",\n\t\tlicensing_control.licenses_min);\n\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t  LOG_NOTICE, msg_daemonname, log_buffer);\n}\n\n/**\n * @brief\n *\t\tset_license_max - action function for the pbs_license_max server\n *\t\t\t  attribute.\n *\n * @param[in]\tpattr\t-\tpointer to attribute structure\n * @param[in]\tpobject -\tpointer to some parent object.(not used here)\n * @param[in]\tactmode\t-\tthe action to take (e.g. 
ATR_ACTION_ALTER)\n *\n * @return\tint\n * @retval\tPBSE_NONE\t: success\n * @retval\tPBSE_LICENSE_MAX_BADVAL\t: wrong value for pbs_license_max attribute\n */\nint\nset_license_max(attribute *pattr, void *pobject, int actmode)\n{\n\tif (actmode == ATR_ACTION_FREE)\n\t\treturn (PBSE_NONE);\n\n\tif ((actmode == ATR_ACTION_ALTER) ||\n\t    (actmode == ATR_ACTION_RECOV)) {\n\n\t\tif ((pattr->at_val.at_long < 0) ||\n\t\t    (pattr->at_val.at_long < licensing_control.licenses_min)) {\n\t\t\treturn (PBSE_LICENSE_MAX_BADVAL);\n\t\t}\n\t\tlicensing_control.licenses_max = pattr->at_val.at_long;\n\n\t\tif ((licensing_control.licenses_max < licensing_control.licenses_checked_out) ||\n\t\t    ((licensing_control.licenses_checked_out < licensing_control.licenses_total_needed) &&\n\t\t     (licensing_control.licenses_checked_out < licensing_control.licenses_max)))\n\t\t\tif (get_more_licenses_task == NULL)\n\t\t\t\tget_more_licenses_task = set_task(WORK_Timed, time(NULL) + 2, get_more_licenses, NULL);\n\t}\n\n\treturn (PBSE_NONE);\n}\n\n/**\n * @brief\n *\t\tunset_license_max - set pbs_license_max server\n *\t\t\t  attribute to default value.\n */\nvoid\nunset_license_max(void)\n{\n\tlicensing_control.licenses_max = PBS_MAX_LICENSING_LICENSES;\n\n\tsprintf(log_buffer,\n\t\t\"pbs_license_max reverting back to default val %ld\",\n\t\tlicensing_control.licenses_max);\n\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t  LOG_NOTICE, msg_daemonname, log_buffer);\n}\n\n/**\n * @brief\n *\t\tset_license_linger - action function for the pbs_license_linger server\n *\t\t\t  attribute.\n *\n * @param[in]\tpattr\t-\tpointer to attribute structure\n * @param[in]\tpobject -\tpointer to some parent object.(not used here)\n * @param[in]\tactmode\t-\tthe action to take (e.g. 
ATR_ACTION_ALTER)\n *\n * @return\tint\n * @retval\tPBSE_NONE\t: success\n * @retval\tPBSE_LICENSE_LINGER_BADVAL\t: wrong value for pbs_license_linger attribute\n */\nint\nset_license_linger(attribute *pattr, void *pobject, int actmode)\n{\n\n\tif (actmode == ATR_ACTION_FREE)\n\t\treturn (PBSE_NONE);\n\n\tif ((actmode == ATR_ACTION_ALTER) ||\n\t    (actmode == ATR_ACTION_RECOV)) {\n\n\t\tif (pattr->at_val.at_long <= 0) {\n\t\t\treturn (PBSE_LICENSE_LINGER_BADVAL);\n\t\t}\n\t\tlicensing_control.licenses_linger_time = pattr->at_val.at_long;\n\n\t\tif (licenses_linger_time_task)\n\t\t\tdelete_task(licenses_linger_time_task);\n\n\t\tlicenses_linger_time_task = set_task(WORK_Timed,\n\t\t\t\t\t\t     licensing_control.licenses_checkout_time + licensing_control.licenses_linger_time,\n\t\t\t\t\t\t     return_lingering_licenses, NULL);\n\t}\n\n\treturn (PBSE_NONE);\n}\n\n/**\n * @brief\n *\t\tunset_license_linger - set pbs_license_linger server\n *\t\t\t  attribute to default value.\n */\nvoid\nunset_license_linger(void)\n{\n\tlicensing_control.licenses_linger_time = PBS_LIC_LINGER_TIME;\n\n\tsprintf(log_buffer,\n\t\t\"pbs_license_linger_time reverting back to default val %ld\",\n\t\tlicensing_control.licenses_linger_time);\n\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t  LOG_NOTICE, msg_daemonname, log_buffer);\n}\n\n/**\n * @brief\n *\t\tFunction name: unset_job_history_enable\n * @par\n *\t\tDescription: Called when the job_history_enable attribute is\n *\t\t     unset through \"qmgr\".\n * @par\n *\t\tPurpose: If the job_history_enable server attribute is unset, then\n *\t\t set the global svr_history_enable to '0' and purge all the\n *\t\t history jobs available in the server immediately. 
Also\n *\t\t will be called if job_history_enable set to 0.\n * @par\n *\t\tInput : None\n *\t\tOutput: None\n */\nvoid\nunset_job_history_enable(void)\n{\n\tjob *pjob = NULL;\n\tjob *nxpjob = NULL;\n\n\tsprintf(log_buffer, \"job_history_enable has been unset.\");\n\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t  LOG_NOTICE, msg_daemonname, log_buffer);\n\n\t/*\n\t * Reset the SERVER level global switch for job history\n\t * feature i.e. svr_history_enable. It will not keep the\n\t * job history information anymore.\n\t */\n\tsvr_history_enable = 0;\n\n\t/*\n\t * Find all the history jobs (jobs with state JOB_STATE_LTR_MOVED\n\t * and JOB_STATE_LTR_FINISHED) in the server and purge them right\n\t * now as job_history_enable has been UNSET OR SET to FALSE.\n\t */\n\tpjob = (job *) GET_NEXT(svr_alljobs);\n\twhile (pjob != NULL) {\n\t\t/* save the next */\n\t\tnxpjob = (job *) GET_NEXT(pjob->ji_alljobs);\n\n\t\tif ((check_job_state(pjob, JOB_STATE_LTR_MOVED)) ||\n\t\t    (check_job_state(pjob, JOB_STATE_LTR_FINISHED)) ||\n\t\t    (check_job_state(pjob, JOB_STATE_LTR_EXPIRED))) {\n\t\t\tjob_purge(pjob);\n\t\t\tpjob = NULL;\n\t\t}\n\t\t/* restore the next and continue */\n\t\tpjob = nxpjob;\n\t}\n}\n\n/**\n * @brief\n *\t\tset_job_history_enable - action function for the job_history_enable server\n *\t\t\t  attribute.\n *\n * @param[in]\tpattr\t-\tpointer to attribute structure\n * @param[in]\tpobject -\tpointer to some parent object.(not used here)\n * @param[in]\tactmode\t-\tthe action to take (e.g. 
ATR_ACTION_ALTER)\n *\n * @return\tint\n * @retval\tPBSE_NONE\t: success\n */\nint\nset_job_history_enable(attribute *pattr, void *pobject, int actmode)\n{\n\tif ((actmode == ATR_ACTION_ALTER) ||\n\t    (actmode == ATR_ACTION_RECOV)) {\n\n\t\tsvr_history_enable = pattr->at_val.at_long;\n\t\tif (svr_history_enable) {\n\t\t\t(void) set_task(WORK_Timed,\n\t\t\t\t\t(long) (time_now + SVR_CLEAN_JOBHIST_TM),\n\t\t\t\t\tsvr_clean_job_history, 0);\n\t\t} else {\n\t\t\tunset_job_history_enable();\n\t\t}\n\t}\n\treturn (PBSE_NONE);\n}\n\n/**\n * @brief\n *\t\tset_log_events - action function for the log_events\n *\t\t\t  server attribute, also sets the tpp logmask\n *\n * @param[in]\tpattr\t-\tpointer to attribute structure\n * @param[in]\tpobject -\tpointer to some parent object.(not used here)\n * @param[in]\tactmode\t-\tthe action to take (e.g. ATR_ACTION_ALTER)\n *\n * @return\tint\n * @retval\tPBSE_NONE\t: success\n */\nint\nset_log_events(attribute *pattr, void *pobject, int actmode)\n{\n\tif ((actmode == ATR_ACTION_ALTER) ||\n\t    (actmode == ATR_ACTION_RECOV)) {\n\t\ttpp_set_logmask(pattr->at_val.at_long);\n\t}\n\treturn (PBSE_NONE);\n}\n\n/**\n * @brief\n *\t\tset_job_history_duration - action function for the job_history_duration\n *\t\t\t  server attribute.\n *\n * @param[in]\tpattr\t-\tpointer to attribute structure\n * @param[in]\tpobject -\tpointer to some parent object.(not used here)\n * @param[in]\tactmode\t-\tthe action to take (e.g. 
ATR_ACTION_ALTER)\n *\n * @return\tint\n * @retval\tPBSE_NONE\t: success\n * @retval\tPBSE_BADATVAL\t: Invalid attribute value\n */\nint\nset_job_history_duration(attribute *pattr, void *pobject, int actmode)\n{\n\n\tif ((actmode == ATR_ACTION_ALTER) ||\n\t    (actmode == ATR_ACTION_RECOV)) {\n\n\t\tif ((pattr->at_val.at_long < 0))\n\t\t\treturn (PBSE_BADATVAL);\n\n\t\tsvr_history_duration = pattr->at_val.at_long;\n\t\tsprintf(log_buffer, \"svr_history_duration set to val %ld\",\n\t\t\tsvr_history_duration);\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_NOTICE, msg_daemonname, log_buffer);\n\t}\n\treturn (PBSE_NONE);\n}\n\n/**\n * @brief\n *\t\tunset_job_history_duration - set job_history_duration server\n *\t\t\t  attribute to default value.\n */\nvoid\nunset_job_history_duration(void)\n{\n\tsvr_history_duration = SVR_JOBHIST_DEFAULT;\n\n\tsprintf(log_buffer,\n\t\t\"svr_history_duration reverting back to default val %ld\",\n\t\tsvr_history_duration);\n\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t  LOG_NOTICE, msg_daemonname, log_buffer);\n}\n\n/**\n * @brief\n *\tset_max_job_sequence_id - action function for the max_job_sequence_id server\n *\t\t\t\t  attribute.\n *\n * @param[in]\tpattr\t-\tpointer to attribute structure\n * @param[in]\tpobject -\tpointer to some parent object.(not used here)\n * @param[in]\tactmode\t-\tthe action to take (e.g. 
ATR_ACTION_ALTER)\n *\n * @return\tint\n * @retval\tPBSE_NONE\t: success\n */\nint\nset_max_job_sequence_id(attribute *pattr, void *pobject, int actmode)\n{\n\n\tif ((actmode == ATR_ACTION_ALTER) ||\n\t    (actmode == ATR_ACTION_RECOV)) {\n\n\t\tif ((pattr->at_val.at_ll < SVR_MAX_JOB_SEQ_NUM_DEFAULT) ||\n\t\t    (pattr->at_val.at_ll > PBS_SEQNUMTOP)) {\n\t\t\treturn (PBSE_INVALID_MAX_JOB_SEQUENCE_ID);\n\t\t}\n\t\tsvr_max_job_sequence_id = pattr->at_val.at_ll;\n\t\t/* If the max_job_sequence_id is set to something smaller than current job id,\n\t\t * then it will wrap to 0(ZERO)*/\n\t\tif (server.sv_qs.sv_jobidnumber > svr_max_job_sequence_id) {\n\t\t\t(void) reset_svr_sequence_window(); /* wrap it*/\n\t\t\tsprintf(log_buffer, \"svr_max_job_sequence_id wrapped to 0\");\n\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t\t  LOG_NOTICE, msg_daemonname, log_buffer);\n\t\t} else {\n\t\t\tsprintf(log_buffer, \"svr_max_job_sequence_id set to val %lld\",\n\t\t\t\tsvr_max_job_sequence_id);\n\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t\t  LOG_NOTICE, msg_daemonname, log_buffer);\n\t\t}\n\t}\n\treturn (PBSE_NONE);\n}\n\n/**\n * @brief\n *\tunset_max_job_sequence_id - set server attribute \"max_job_sequence_id\" to\n *\t\t\t\t    default value.\n */\nvoid\nunset_max_job_sequence_id(void)\n{\n\tsvr_max_job_sequence_id = SVR_MAX_JOB_SEQ_NUM_DEFAULT;\n\t/* If the max_job_sequence_id is set to something smaller than current job id,\n\t * then it will wrap to 0(ZERO)*/\n\tif (server.sv_qs.sv_jobidnumber >= svr_max_job_sequence_id) {\n\t\t(void) reset_svr_sequence_window(); /* wrap it*/\n\t\tsprintf(log_buffer, \"svr_max_job_sequence_id wrapped to 0\");\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_NOTICE, msg_daemonname, log_buffer);\n\t}\n\tsprintf(log_buffer,\n\t\t\"svr_max_job_sequence_id reverting back to default val %lld\",\n\t\tsvr_max_job_sequence_id);\n\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t  LOG_NOTICE, 
msg_daemonname, log_buffer);\n}\n\n/**\n * @brief\n *\t\teligibletime_action - set/unset ATR_VFLAG_SET flag for\n *\t\t\t      all jobs in server.\n *\n * @param[in]\tpattr\t-\tpointer to attribute structure\n * @param[in]\tpobject -\tpointer to some parent object.(not used here)\n * @param[in]\tactmode\t-\tthe action to take (e.g. ATR_ACTION_ALTER)(not used here)\n *\n * @return\tint\n * @retval\tPBSE_NONE\t: success\n */\nint\neligibletime_action(attribute *pattr, void *pobject, int actmode)\n{\n\tjob *pj;\n\tlong accruetype;\n\n\t/* switching on eligible_time_enable. when switch happens,\n\t * job's old accrue_type is not reliable\n\t */\n\tif (pattr->at_val.at_long == 1) {\n\n\t\tpj = (job *) GET_NEXT(svr_alljobs);\n\t\twhile (pj != NULL) {\n\t\t\taccruetype = determine_accruetype(pj);\n\t\t\tupdate_eligible_time(accruetype, pj);\n\n\t\t\tpj = (job *) GET_NEXT(pj->ji_alljobs);\n\t\t}\n\n\t\t/* if scheduling is true, need to run the scheduling cycle */\n\t\t/* so that, accrue type is determined for cases */\n\t\tif (get_sattr_long(SVR_ATR_scheduling))\n\t\t\tset_scheduler_flag(SCH_SCHEDULE_ETE_ON, NULL);\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tdecode_formula - decode the job sort formula from a secure file\n * @par\n *\t\treturns value from decode_str\n *\n * @param[in]\tpatr\t-\tpointer to attribute structure\n * @param[in]\tname\t-\tattribute name\n * @param[in]\trescn\t-\tresource name - unused here\n * @param[in]\tval\t-\tattribute value\n *\n * @return\tint\n * @retval\tzero\t: success\n * @retval\tnonzero\t: PBSE Error Code\n */\nint\ndecode_formula(attribute *patr, char *name, char *rescn, char *val)\n{\n\tFILE *fp;\n\tchar pathbuf[MAXPATHLEN];\n\tchar *formula_buf;\n\tint formula_buf_len = 1024;\n\tint rc;\n\n\t/* when we are coming up, we need to read from the server's database */\n\tif (get_sattr_long(SVR_ATR_State) == SV_STATE_INIT)\n\t\treturn decode_str(patr, name, rescn, val);\n\n\tsprintf(pathbuf, \"%s/%s\", pbs_conf.pbs_home_path, 
FORMULA_ATTR_PATH);\n\n\tif ((fp = fopen(pathbuf, \"r\")) == NULL) {\n\t\treturn PBSE_PERM;\n\t}\n\n\tformula_buf = malloc(formula_buf_len);\n\tif (formula_buf == NULL) {\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_ALERT, msg_daemonname,\n\t\t\t  \"unable to decode formula, no memory\");\n\t\tfclose(fp);\n\t\tremove(pathbuf);\n\t\treturn PBSE_INTERNAL;\n\t}\n\tmemset(formula_buf, 0, formula_buf_len);\n\n\tif (pbs_fgets(&formula_buf, &formula_buf_len, fp) == NULL) {\n\t\tfclose(fp);\n\t\tremove(pathbuf);\n\t\tfree(formula_buf);\n\t\treturn PBSE_INTERNAL;\n\t}\n\n\tfclose(fp);\n\n\t/* now that we have the data, the file may be removed */\n\tremove(pathbuf);\n\n\t/* remove the newline */\n\tformula_buf[strlen(formula_buf) - 1] = '\\0';\n\n\trc = decode_str(patr, name, rescn, formula_buf);\n\tfree(formula_buf);\n\treturn rc;\n}\n\n/*\n *  Following datum and functions are used to enforce the rule that the\n * entity-limits (entlims) attributes cannot be used if the old style\n * user/group/run limits are in use and vice versa.\n *\n * The datum entlim_type_in_use is set to:\n *\t 0 - when neither type has been set (yet)\n *\t+1 - when newer \"entlims\" have been set\n *\t-1 - when older style limits have been set.\n *\n * If the datum is 0, the first limit of either style is allowed\n * and sets the datum accordingly.  If set to +1, then any additional new\n * style entlim is allowed without additional checks.  If set to -1, then\n * additional old style limits are allowed to be set.\n *\n * If the wrong type is being set, then an exhaustive search of the server\n * attributes and the queue attributes of all queues is required to see if the\n * other style is in use.  
This is needed because the datum cannot be reset to\n * zero when the last limit of a style is unset; the mechanism isn't in place to do so.\n *\n * There are two lists, one for the server and the other for queues, of the attributes\n * which must be checked.\n */\n\nstatic int entlim_type_in_use = 0;\n\nstatic int svr_oldstyle[] = {\n\t(int) SVR_ATR_max_running,\n\t(int) SVR_ATR_MaxUserRun,\n\t(int) SVR_ATR_MaxGrpRun,\n\t(int) SVR_ATR_MaxUserRes,\n\t(int) SVR_ATR_MaxGroupRes,\n\t(int) SVR_ATR_MaxUserRunSoft,\n\t(int) SVR_ATR_MaxGrpRunSoft,\n\t(int) SVR_ATR_MaxUserResSoft,\n\t(int) SVR_ATR_MaxGroupResSoft,\n\t-1};\nstatic int svr_newstyle[] = {\n\t(int) SVR_ATR_max_run,\n\t(int) SVR_ATR_max_run_res,\n\t(int) SVR_ATR_max_run_soft,\n\t(int) SVR_ATR_max_run_res_soft,\n\t-1};\nstatic int que_oldstyle[] = {\n\t(int) QA_ATR_MaxJobs,\n\t(int) QA_ATR_MaxRun,\n\t(int) QE_ATR_MaxUserRun,\n\t(int) QE_ATR_MaxGrpRun,\n\t(int) QE_ATR_MaxUserRes,\n\t(int) QE_ATR_MaxGroupRes,\n\t(int) QE_ATR_MaxUserRunSoft,\n\t(int) QE_ATR_MaxGrpRunSoft,\n\t(int) QE_ATR_MaxUserResSoft,\n\t(int) QE_ATR_MaxGroupResSoft,\n\t-1};\nstatic int que_newstyle[] = {\n\t(int) QA_ATR_max_queued,\n\t(int) QA_ATR_queued_jobs_threshold,\n\t(int) QE_ATR_max_run,\n\t(int) QE_ATR_max_run_res,\n\t(int) QE_ATR_max_run_soft,\n\t(int) QE_ATR_max_run_res_soft,\n\t-1};\n\nextern pbs_list_head svr_queues;\n/**\n * @brief\n * \t\tis_attrs_in_list_set - for a list of certain attributes, is any of them\n * \t\tset in the parent object's array of attributes\n *\n * @param[in]\twlist\t-\tstyle of queue/server\n * @param[in]\tattrs\t-\tpointer to attribute structure\n *\n *\tReturns >=0 index of the first attribute found to be set,\n *\t\t -1 if none set\n */\nstatic int\nis_attrs_in_list_set(int *wlist, attribute *attrs)\n{\n\tint i;\n\n\tfor (i = 0; *(wlist + i) != -1; ++i) {\n\t\tif (((attrs + *(wlist + i))->at_flags & ATR_VFLAG_SET) != 0)\n\t\t\treturn *(wlist + i);\n\t}\n\treturn -1;\n}\n\n/**\n * @brief\n * \t\tlog_mixed_limit_controls - 
log a message when the administrator attempts\n *\t\tto mix the type of queue/run limits.\n *\n * @param[in]\tpq\t-\tpointer to the queue\n * @param[in]\tindex\t-\tindex of queue/server attribute definition structure\n * @param[in]\ttype\t-\ttype of queue/run limits\n */\nstatic void\nlog_mixed_limit_controls(pbs_queue *pq, int index, char *type)\n{\n\tattribute_def *pdef;\n\tchar *objname;\n\n\tif (pq) {\n\t\tobjname = pq->qu_qs.qu_name;\n\t\tpdef = &que_attr_def[index];\n\t} else {\n\t\tobjname = \"Server\";\n\t\tpdef = &svr_attr_def[index];\n\t}\n\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t \"%s style attribute \\\"%s\\\" already set in %s %s, cannot mix types\",\n\t\t type, pdef->at_name, pq ? \"queue\" : \"\", objname);\n\tlog_buffer[LOG_BUF_SIZE - 1] = '\\0';\n\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER, LOG_ALERT,\n\t\t  msg_daemonname, log_buffer);\n}\n\n/**\n * @brief\n * \t\taction_entlim_chk - the at_action for the entity attribute\n *\t\tPrevents old and new type controls from being used at the same time\n *\n * @param[in]\tpattr\t-\tpointer to attribute structure(not used here)\n * @param[in]\tpobject -\tpointer to some parent object.(not used here)\n * @param[in]\tactmode\t-\tthe action to take (e.g. ATR_ACTION_ALTER)(not used here)\n *\n * @return\tint\n * @retval\tPBSE_NONE\t: success\n * @retval\tPBSE_MIXENTLIMS\t: mixing old and new limit enforcement\n */\nint\naction_entlim_chk(attribute *pattr, void *pobject, int actmode)\n{\n\tint i;\n\tpbs_queue *pq;\n\n\t/* first check if the new style limits cannot be used */\n\t/* due to a conflict with the old style.              
*/\n\tif (entlim_type_in_use == +1)\n\t\treturn PBSE_NONE;\n\telse if (entlim_type_in_use == 0) {\n\t\tentlim_type_in_use = +1; /* show new style in use */\n\t\treturn PBSE_NONE;\n\t}\n\n\t/* flag says wrong (old) style in use, but we need to double check */\n\tif ((i = is_attrs_in_list_set(svr_oldstyle, server.sv_attr)) != -1) {\n\t\tlog_mixed_limit_controls(NULL, i, \"old\");\n\t\treturn PBSE_MIXENTLIMS;\n\t}\n\tpq = (pbs_queue *) GET_NEXT(svr_queues);\n\twhile (pq) {\n\t\tif ((i = is_attrs_in_list_set(que_oldstyle, pq->qu_attr)) != -1) {\n\t\t\tlog_mixed_limit_controls(pq, i, \"old\");\n\t\t\treturn PBSE_MIXENTLIMS;\n\t\t}\n\t\tpq = (pbs_queue *) GET_NEXT(pq->qu_link);\n\t}\n\n\tentlim_type_in_use = +1; /* show new style in use */\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * \t\tentlim_resum - Re-totals the entity usage, either count or resources,\n *\t\tfor a specific entity limit attribute\n *\n * @param[in]\tpwt\t-\tpointer to work task structure\n */\n\nstatic void\nentlim_resum(struct work_task *pwt)\n{\n\tvoid *ctx;\n\tint is_resc;\n\tattribute *pattr;\n\tattribute *pattr2;\n\tchar *key = NULL;\n\tsvr_entlim_leaf_t *plf;\n\tjob *pj;\n\tvoid *pobject;\n\tpbs_queue *pque;\n\textern pbs_list_head svr_alljobs;\n\n\tpobject = pwt->wt_parm1; /* pointer to parent object */\n\tis_resc = pwt->wt_aux;\t /* 1=resource, 0=count */\n\n\t/* now determine if the parent object is a queue or is the Server */\n\t/* this tells us which list of jobs we need to walk.\t\t  */\n\tif ((struct server *) pobject == &server) {\n\t\t/* server is the parent */\n\t\tpque = NULL;\n\t\tif (is_resc) {\n\t\t\tpattr = get_sattr(SVR_ATR_max_queued_res);\n\t\t\tpattr2 = get_sattr(SVR_ATR_queued_jobs_threshold_res);\n\t\t} else {\n\t\t\tpattr = get_sattr(SVR_ATR_max_queued);\n\t\t\tpattr2 = get_sattr(SVR_ATR_queued_jobs_threshold);\n\t\t}\n\t\tpj = (job *) GET_NEXT(svr_alljobs);\n\t} else {\n\t\t/* a queue is the parent */\n\t\tpque = (pbs_queue *) pobject;\n\t\tif (is_resc) {\n\t\t\tpattr = 
get_qattr(pque, QA_ATR_max_queued_res);\n\t\t\tpattr2 = get_qattr(pque, QA_ATR_queued_jobs_threshold_res);\n\t\t} else {\n\t\t\tpattr = get_qattr(pque, QA_ATR_max_queued);\n\t\t\tpattr2 = get_qattr(pque, QA_ATR_queued_jobs_threshold);\n\t\t}\n\t\tpj = (job *) GET_NEXT(pque->qu_jobs);\n\t}\n\n\t/* Next, walk the limit tree and clear all current values */\n\n\tctx = pattr->at_val.at_enty.ae_tree;\n\twhile ((plf = entlim_get_next(ctx, (void **) &key)) != NULL) {\n\t\tif (is_attr_set(&plf->slf_sum)) {\n\t\t\tplf->slf_rescd->rs_free(&plf->slf_sum);\n\t\t\tDBPRT((\"clearing %s\\n\", key))\n\t\t}\n\t}\n\n\tctx = pattr2->at_val.at_enty.ae_tree;\n\tkey = NULL;\n\twhile ((plf = entlim_get_next(ctx, (void **) &key)) != NULL) {\n\t\tif (is_attr_set(&plf->slf_sum)) {\n\t\t\tplf->slf_rescd->rs_free(&plf->slf_sum);\n\t\t\tDBPRT((\"clearing %s\\n\", key))\n\t\t}\n\t}\n\n\t/* then for each job in the parent object, sum up its count/resource */\n\n\twhile (pj) {\n\t\tif ((pj->ji_qs.ji_svrflags & JOB_SVFLG_SubJob) == 0) {\n\t\t\tif (is_resc) {\n\t\t\t\taccount_entity_limit_usages(pj, pque, NULL, INCR, ETLIM_ACC_ALL_RES);\n\t\t\t} else {\n\t\t\t\taccount_entity_limit_usages(pj, pque, NULL, INCR, ETLIM_ACC_ALL_CT);\n\t\t\t}\n\t\t}\n\n\t\tif (pque)\n\t\t\tpj = (job *) GET_NEXT(pj->ji_jobque);\n\t\telse\n\t\t\tpj = (job *) GET_NEXT(pj->ji_alljobs);\n\t}\n}\n\n/**\n * @brief\n * \t\taction_entlim_ct - the at_action for the entity job count attributes\n *\t\tcalls the common \"action_entlim\" function with a zero flag to indicate\n *\t\tthat the entity limit is a count limit.\n *\n * @param[in]\tpattr\t-\tpointer to attribute structure\n * @param[in]\tpobject -\tpointer to some parent object.\n * @param[in]\tactmode\t-\tthe action to take (e.g. 
ATR_ACTION_ALTER)\n *\n * @return\tint\n * @retval\tPBSE_NONE\t: success\n * @retval\tnonzero\t: PBSE error code\n */\n\nint\naction_entlim_ct(attribute *pattr, void *pobject, int actmode)\n{\n\tstruct work_task *pwt;\n\tint rc;\n\n\trc = action_entlim_chk(pattr, pobject, actmode);\n\tif (rc != PBSE_NONE)\n\t\treturn rc;\n\n\tif (actmode == ATR_ACTION_ALTER) {\n\t\t/*\n\t\t * set up a work task to resum the count for this\n\t\t * limit after the \"set\" has actually been applied to the\n\t\t * attribute.  At this instant in time, the real attribute\n\t\t * still has the old information and may even be unset\n\t\t */\n\t\tpwt = set_task(WORK_Immed, 0, entlim_resum, pobject);\n\t\tif (pwt)\n\t\t\tpwt->wt_aux = 0; /* resum count of jobs */\n\t}\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * \t\taction_entlim_res - the at_action for the entity resource limit attributes\n *\t\tcalls the common \"action_entlim\" function with the flag indicating\n *\t\tthat the entity limit is a resource limit.\n *\n * @param[in]\tpattr\t-\tpointer to attribute structure\n * @param[in]\tpobject -\tpointer to some parent object.\n * @param[in]\tactmode\t-\tthe action to take (e.g. ATR_ACTION_ALTER)\n *\n * @return\tint\n * @retval\tPBSE_NONE\t: success\n * @retval\tnonzero\t: PBSE error code\n */\n\nint\naction_entlim_res(attribute *pattr, void *pobject, int actmode)\n{\n\tint rc;\n\tstruct work_task *pwt;\n\n\trc = action_entlim_chk(pattr, pobject, actmode);\n\tif (rc != PBSE_NONE)\n\t\treturn rc;\n\n\tif (actmode == ATR_ACTION_ALTER) {\n\t\t/*\n\t\t * set up a work task to resum the resource usage for this\n\t\t * limit after the \"set\" has actually been applied to the\n\t\t * attribute.  
At this instant in time, the real attribute\n\t\t * still has the old information and may even be unset\n\t\t */\n\t\tpwt = set_task(WORK_Immed, 0, entlim_resum, pobject);\n\t\tif (pwt)\n\t\t\tpwt->wt_aux = 1; /* resum resources */\n\t}\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * \t\tcheck_no_entlim - checks for conflicting attributes which restrict what can\n *\t\trun or be enqueued.  If an old style is being set, the newer \"entlim\"\n *\t\ttypes cannot be set.\n *\n * @param[in]\tpattr\t-\tpointer to attribute structure\n * @param[in]\tpobject -\tpointer to some parent object.\n * @param[in]\tactmode\t-\tthe action to take (e.g. ATR_ACTION_ALTER)\n *\n * @return\tint\n * @retval\tPBSE_NONE\t: no new entlim type currently set\n * @retval\tPBSE_MIXENTLIMS\t: there is a new style entlim limit set\n */\nint\ncheck_no_entlim(attribute *pattr, void *pobject, int actmode)\n{\n\tint i;\n\tpbs_queue *pq;\n\n\tif (entlim_type_in_use == -1)\n\t\treturn PBSE_NONE;\n\telse if (entlim_type_in_use == 0) {\n\t\tentlim_type_in_use = -1; /* show old style in use */\n\t\treturn PBSE_NONE;\n\t}\n\n\t/* flag says wrong (new) style in use, but we need to double check */\n\tif ((i = is_attrs_in_list_set(svr_newstyle, server.sv_attr)) != -1) {\n\t\tlog_mixed_limit_controls(NULL, i, \"new\");\n\t\treturn PBSE_MIXENTLIMS;\n\t}\n\tpq = (pbs_queue *) GET_NEXT(svr_queues);\n\twhile (pq) {\n\t\tif ((i = is_attrs_in_list_set(que_newstyle, pq->qu_attr)) != -1) {\n\t\t\tlog_mixed_limit_controls(pq, i, \"new\");\n\t\t\treturn PBSE_MIXENTLIMS;\n\t\t}\n\t\tpq = (pbs_queue *) GET_NEXT(pq->qu_link);\n\t}\n\n\tentlim_type_in_use = -1; /* show old style in use */\n\treturn PBSE_NONE;\n}\n\n/* Defines for return value of check_single_entity_* */\n#define Exceeds_Generic -2\n#define Exceeds_Limit -1\n#define No_Limit 0\n#define Within_Limit 1\n\n#define ET_LIM_DBG(format, ...)                                                                              
\\\n\tif (will_log_event(PBSEVENT_DEBUG4)) {                                                               \\\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"ET_LIM_DBG: %s: \" format, __VA_ARGS__);              \\\n\t\tlog_event(PBSEVENT_DEBUG4, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, log_buffer); \\\n\t}\n\nextern char statechars[];\n\n/**\n * @brief\n * \t\tcheck_single_entity_ct\t-\tcheck the single entity count\n *\n * @param[in]\tkt\t-\tKey type- user/group/project or overall.\n * @param[in]\tename\t-\tentity name.\n * @param[in]\tpatr\t-\tpointer to attribute structure\n * @param[in]\tsubjobs\t-\tnumber of subjobs if any.\n * @param[in]\tpjob\t-\tpointer to job\n *\n * @return\tint\n * @retval\tExceeds_Generic\t: count exceeds generic limit\n * @retval\tExceeds_Limit\t: count exceeds slf_limit\n * @retval\tNo_Limit\t: There is no limit\n * @retval\tWithin_Limit\t: count is within the limit\n */\nstatic int\ncheck_single_entity_ct(enum lim_keytypes kt, char *ename, attribute *patr, int subjobs, job *pjob)\n{\n\tchar *kstr;\n\tvoid *ctx;\n\tsvr_entlim_leaf_t *plf;\n\tint count = subjobs;\n\n\tkstr = entlim_mk_runkey(kt, ename);\n\tif (kstr == NULL) {\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_ALERT, msg_daemonname,\n\t\t\t  \"rejecting job,  unable to make entity limit key, no memory\");\n\t\tET_LIM_DBG(\"exiting, ret %d [kstr is NULL]\", __func__, LIM_OVERALL)\n\t\treturn LIM_OVERALL;\n\t}\n\tET_LIM_DBG(\"kstr %s, %d\", __func__, kstr, subjobs)\n\tctx = patr->at_val.at_enty.ae_tree;\n\tplf = (svr_entlim_leaf_t *) entlim_get(kstr, ctx);\n\n\tif (plf) {\n\t\tcount += plf->slf_sum.at_val.at_long;\n\t\tET_LIM_DBG(\"ct usage for %s is %ld\", __func__, kstr, plf->slf_sum.at_val.at_long)\n\t\tET_LIM_DBG(\"ct specific limit for %s is %ld\", __func__, kstr, plf->slf_limit.at_val.at_long)\n\t}\n\tfree(kstr);\n\n\tET_LIM_DBG(\"count is %d\", __func__, count)\n\tif (plf && (is_attr_set(&plf->slf_limit))) {\n\t\tif (count > 
plf->slf_limit.at_val.at_long) {\n\t\t\tET_LIM_DBG(\"exiting, ret Exceeds_Limit [specific limit]\", __func__)\n\t\t\treturn Exceeds_Limit;\n\t\t} else {\n\t\t\tET_LIM_DBG(\"exiting, ret Within_Limit [specific limit]\", __func__)\n\t\t\treturn Within_Limit;\n\t\t}\n\t} else if (kt != LIM_OVERALL) {\n\t\t/* compare against generic limit if one */\n\t\tkstr = entlim_mk_runkey(kt, PBS_GENERIC_ENTITY);\n\t\tif (kstr == NULL) {\n\t\t\tET_LIM_DBG(\"exiting, ret No_Limit [generic limit]\", __func__)\n\t\t\treturn No_Limit;\n\t\t}\n\t\tplf = (svr_entlim_leaf_t *) entlim_get(kstr, ctx);\n\t\tif (plf && (is_attr_set(&plf->slf_limit))) {\n\t\t\tET_LIM_DBG(\"ct generic limit for %s is %ld\", __func__, kstr, plf->slf_limit.at_val.at_long)\n\t\t\tfree(kstr);\n\t\t\tif (count > plf->slf_limit.at_val.at_long) {\n\t\t\t\tET_LIM_DBG(\"exiting, ret Exceeds_Generic [generic limit]\", __func__)\n\t\t\t\treturn Exceeds_Generic;\n\t\t\t} else {\n\t\t\t\tET_LIM_DBG(\"exiting, ret Within_Limit [generic limit]\", __func__)\n\t\t\t\treturn Within_Limit;\n\t\t\t}\n\t\t}\n\t\tfree(kstr);\n\t}\n\tET_LIM_DBG(\"exiting, ret No_Limit [all ok]\", __func__)\n\treturn No_Limit;\n}\n/**\n * @brief\n * \t\tcheck_single_entity_res\t-\tcheck single entity resource\n *\n * @param[in]\tkt\t-\tKey type- user/group/project or overall.\n * @param[in]\tename\t-\tentity name.\n * @param[in]\tpatr\t-\tpointer to attribute structure\n * @param[in]\tnewr\t-\tnew resource\n * @param[in]\toldr\t-\told resource\n * @param[in]\tsubjobs -\tnumber of subjobs if any.\n * @param[in]\tpjob\t-\tpointer to job\n *\n * @return\tint\n * @retval\tExceeds_Generic\t: count exceeds generic limit\n * @retval\tExceeds_Limit\t: count exceeds slf_limit\n * @retval\tNo_Limit\t: There is no limit\n * @retval\tWithin_Limit\t: count is within the limit\n */\nstatic int\ncheck_single_entity_res(enum lim_keytypes kt, char *ename,\n\t\t\tattribute *patr,\n\t\t\tresource *newr,\n\t\t\tresource *oldr,\n\t\t\tint subjobs,\n\t\t\tjob 
*pjob)\n{\n\tchar *kstr;\n\tvoid *ctx;\n\tsvr_entlim_leaf_t *plf;\n\tint rc;\n\tint i;\n\tattribute tmpval = {0};\n\n\tkstr = entlim_mk_reskey(kt, ename, newr->rs_defin->rs_name);\n\tif (kstr == NULL) {\n\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t  LOG_ALERT, msg_daemonname,\n\t\t\t  \"rejecting job,  unable to make entity limit key, no memory\");\n\t\tET_LIM_DBG(\"exiting, ret %d [kstr is NULL]\", __func__, LIM_OVERALL)\n\t\treturn LIM_OVERALL;\n\t}\n\tET_LIM_DBG(\"kstr %s, %d, oldr %p\", __func__, kstr, subjobs, oldr)\n\tctx = patr->at_val.at_enty.ae_tree;\n\tplf = (svr_entlim_leaf_t *) entlim_get(kstr, ctx);\n\n\tif (plf) {\n\t\ttmpval = plf->slf_sum;\n\t\tfor (i = 0; i < subjobs; i++) {\n\t\t\tif (oldr)\n\t\t\t\tplf->slf_rescd->rs_set(&tmpval, &oldr->rs_value, DECR);\n\t\t\t/* add in requested amount */\n\t\t\tplf->slf_rescd->rs_set(&tmpval, &newr->rs_value, INCR);\n\t\t}\n\t\tif (will_log_event(PBSEVENT_DEBUG4)) {\n\t\t\tsvrattrl *sum = NULL, *limit = NULL;\n\t\t\tchar *sum_val, *limit_val;\n\t\t\tif (is_attr_set(&plf->slf_sum)) {\n\t\t\t\tplf->slf_rescd->rs_encode(&plf->slf_sum, NULL, \"sumval\", NULL, ATR_ENCODE_CLIENT, &sum);\n\t\t\t\tsum_val = sum->al_value;\n\t\t\t} else\n\t\t\t\tsum_val = \"(not_set)\";\n\t\t\tif (is_attr_set(&plf->slf_limit)) {\n\t\t\t\tplf->slf_rescd->rs_encode(&plf->slf_limit, NULL, \"limval\", NULL, ATR_ENCODE_CLIENT, &limit);\n\t\t\t\tlimit_val = limit->al_value;\n\t\t\t} else\n\t\t\t\tlimit_val = \"(not_set)\";\n\t\t\tET_LIM_DBG(\"res usage for %s is %s\", __func__, kstr, sum_val)\n\t\t\tET_LIM_DBG(\"res specific limit for %s is %s\", __func__, kstr, limit_val)\n\t\t\tfree(sum);\n\t\t\tfree(limit);\n\t\t}\n\t}\n\tfree(kstr);\n\tif (plf && (is_attr_set(&plf->slf_limit))) {\n\t\t/* check the specific user's limit */\n\t\trc = plf->slf_rescd->rs_comp(&tmpval, &plf->slf_limit);\n\t\tif (rc > 0) {\n\t\t\tET_LIM_DBG(\"exiting, ret Exceeds_Limit, rc=%d [specific limit]\", __func__, rc)\n\t\t\treturn 
Exceeds_Limit;\n\t\t}\n\t\tET_LIM_DBG(\"exiting, ret Within_Limit, rc=%d [specific limit]\", __func__, rc)\n\t\treturn Within_Limit;\n\t} else if (kt != LIM_OVERALL) {\n\t\t/* check against the generic limit if one */\n\t\tkstr = entlim_mk_reskey(kt, PBS_GENERIC_ENTITY, newr->rs_defin->rs_name);\n\t\tif (kstr == NULL) {\n\t\t\tET_LIM_DBG(\"exiting, ret No_Limit [generic limit]\", __func__)\n\t\t\treturn No_Limit;\n\t\t}\n\t\tplf = (svr_entlim_leaf_t *) entlim_get(kstr, ctx);\n\t\tif (plf && (is_attr_set(&plf->slf_limit))) {\n\t\t\tif (!(is_attr_set(&tmpval))) { /* for no recorded usage for entity */\n\t\t\t\tplf->slf_rescd->rs_set(&tmpval, &newr->rs_value, SET);\n\t\t\t\tfor (i = 0; i < (subjobs - 1); i++) {\n\t\t\t\t\tplf->slf_rescd->rs_set(&tmpval, &newr->rs_value, INCR);\n\t\t\t\t}\n\t\t\t\tif (will_log_event(PBSEVENT_DEBUG4) && (is_attr_set(&tmpval))) {\n\t\t\t\t\tsvrattrl *count;\n\t\t\t\t\tplf->slf_rescd->rs_encode(&tmpval, NULL, \"tmpval\", NULL, ATR_ENCODE_CLIENT, &count);\n\t\t\t\t\tET_LIM_DBG(\"res generic limit for %s is %s\", __func__, kstr, count->al_value)\n\t\t\t\t\tfree(count);\n\t\t\t\t} else\n\t\t\t\t\tET_LIM_DBG(\"res generic limit for %s is (not_set)\", __func__, kstr)\n\t\t\t}\n\t\t\trc = plf->slf_rescd->rs_comp(&tmpval, &plf->slf_limit);\n\t\t\tfree(kstr);\n\t\t\tif (rc > 0) {\n\t\t\t\tET_LIM_DBG(\"exiting, ret Exceeds_Generic, rc=%d [generic limit]\", __func__, rc)\n\t\t\t\treturn Exceeds_Generic;\n\t\t\t}\n\t\t\tET_LIM_DBG(\"exiting, ret Within_Limit, rc=%d [generic limit]\", __func__, rc)\n\t\t\treturn Within_Limit;\n\t\t}\n\t\tfree(kstr);\n\t}\n\tET_LIM_DBG(\"exiting, ret No_Limit [all ok]\", __func__)\n\treturn No_Limit;\n}\n\n/**\n * @brief\n * \t\tcheck_entity_ct_limit_queued() - called to see if a job can be enqueued\n *\t\t1. Called when new job is arriving against server attributes:\n *\t   \t\t- pque will be null\n *\t\t2. 
Called to check against queue attributes on any enqueue\n *\t   \t(submit, move or route):\n *\t   \t- pque will point to queue struct, i.e. not null\n *\n * @param[in]\tpjob\t-\tnew job\n * @param[in]\tpque\t-\tany enqueue\n *\n * @return\twithin the limit or not\n * @retval\tzero\t: within defined limit\n * @retval\tPBS_Enumber\t: if limit exceeded\n * @note\n *\t\tOn an error, a formatted message is attached to the job in ji_\n */\nint\ncheck_entity_ct_limit_queued(job *pjob, pbs_queue *pque)\n{\n\tchar *egroup;\n\tchar *project;\n\tchar *euser;\n\tattribute *pqueued_jobs_threshold;\n\tint rc;\n\tint subjobs;\n\tchar ebuff[COMMENT_BUF_SIZE + 1];\n\textern char *msg_et_qct_q;\n\textern char *msg_et_sct_q;\n\textern char *msg_et_ggq_q;\n\textern char *msg_et_ggs_q;\n\textern char *msg_et_gpq_q;\n\textern char *msg_et_gps_q;\n\textern char *msg_et_guq_q;\n\textern char *msg_et_gus_q;\n\textern char *msg_et_sgq_q;\n\textern char *msg_et_sgs_q;\n\textern char *msg_et_spq_q;\n\textern char *msg_et_sps_q;\n\textern char *msg_et_suq_q;\n\textern char *msg_et_sus_q;\n\n\tET_LIM_DBG(\"entered for %s\", __func__, pque ? pque->qu_qs.qu_name : \"server\")\n\teuser = get_jattr_str(pjob, JOB_ATR_euser);\n\tegroup = get_jattr_str(pjob, JOB_ATR_egroup);\n\tproject = get_jattr_str(pjob, JOB_ATR_project);\n\tif (pjob->ji_clterrmsg) {\n\t\tfree(pjob->ji_clterrmsg);\n\t\tpjob->ji_clterrmsg = NULL;\n\t}\n\tif (pque)\n\t\tpqueued_jobs_threshold = get_qattr(pque, QA_ATR_queued_jobs_threshold);\n\telse\n\t\tpqueued_jobs_threshold = get_sattr(SVR_ATR_queued_jobs_threshold);\n\n\tif (!is_attr_set(pqueued_jobs_threshold)) {\n\t\tET_LIM_DBG(\"exiting, ret 0 [queued_jobs_threshold limit not set for %s]\", __func__, pque ? 
pque->qu_qs.qu_name : \"server\")\n\t\treturn PBSE_NONE; /* no limits set */\n\t}\n\n\tif ((subjobs = get_queued_subjobs_ct(pjob)) < 0) {\n\t\tET_LIM_DBG(\"exiting, ret %d [get_queued_subjobs_ct() returned %d]\", __func__,\n\t\t\t   PBSE_INTERNAL, subjobs)\n\t\treturn PBSE_INTERNAL;\n\t}\n\n\t/* I.  For jobs count limits */\n\n\t/* 1. Check against Overall limit, [o:PBS_ALL] */\n\trc = check_single_entity_ct(LIM_OVERALL, PBS_ALL_ENTITY, pqueued_jobs_threshold, subjobs, pjob);\n\tif (rc == Exceeds_Limit) {\n\t\tif (pque) {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_qct_q,\n\t\t\t\t pque->qu_qs.qu_name);\n\t\t} else {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, \"%s\", msg_et_sct_q);\n\t\t}\n\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\treturn PBSE_SYSTEM;\n\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_ct(o:\" PBS_ALL_ENTITY \",%d) returned Exceeds_Limit]\", __func__,\n\t\t\t   PBSE_ENTLIMCT, subjobs)\n\t\treturn PBSE_ENTLIMCT;\n\t}\n\n\t/* 2. 
Check against specific user limit, [u:user] */\n\trc = check_single_entity_ct(LIM_USER, euser, pqueued_jobs_threshold, subjobs, pjob);\n\tif (rc == Exceeds_Limit) {\n\t\tif (pque) {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_suq_q,\n\t\t\t\t euser, pque->qu_qs.qu_name);\n\t\t} else {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_sus_q, euser);\n\t\t}\n\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\treturn PBSE_SYSTEM;\n\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_ct(u:%s,%d) returned Exceeds_Limit]\", __func__,\n\t\t\t   PBSE_ENTLIMCT, euser, subjobs)\n\t\treturn PBSE_ENTLIMCT;\n\n\t} else if (rc == Exceeds_Generic) {\n\t\tif (pque) {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_guq_q,\n\t\t\t\t \"generic\", pque->qu_qs.qu_name);\n\t\t} else {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_gus_q, \"generic\");\n\t\t}\n\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\treturn PBSE_SYSTEM;\n\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_ct(u:%s,%d) returned Exceeds_Generic]\", __func__,\n\t\t\t   PBSE_ENTLIMCT, euser, subjobs)\n\t\treturn PBSE_ENTLIMCT;\n\t}\n\n\t/* 3. 
Check against specific group limit, [g:group] */\n\trc = check_single_entity_ct(LIM_GROUP, egroup, pqueued_jobs_threshold, subjobs, pjob);\n\tif (rc == Exceeds_Limit) {\n\t\tif (pque) {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_sgq_q,\n\t\t\t\t egroup, pque->qu_qs.qu_name);\n\t\t} else {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_sgs_q, egroup);\n\t\t}\n\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\treturn PBSE_SYSTEM;\n\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_ct(g:%s,%d) returned Exceeds_Limit]\", __func__,\n\t\t\t   PBSE_ENTLIMCT, egroup, subjobs)\n\t\treturn PBSE_ENTLIMCT;\n\n\t} else if (rc == Exceeds_Generic) {\n\t\tif (pque) {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_ggq_q,\n\t\t\t\t pque->qu_qs.qu_name);\n\t\t} else {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, \"%s\", msg_et_ggs_q);\n\t\t}\n\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\treturn PBSE_SYSTEM;\n\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_ct(g:%s,%d) returned Exceeds_Generic]\", __func__,\n\t\t\t   PBSE_ENTLIMCT, egroup, subjobs)\n\t\treturn PBSE_ENTLIMCT;\n\t}\n\n\t/* 4. 
Check against specific project limit, [p:project] */\n\trc = check_single_entity_ct(LIM_PROJECT, project, pqueued_jobs_threshold, subjobs, pjob);\n\tif (rc == Exceeds_Limit) {\n\t\tif (pque) {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_spq_q,\n\t\t\t\t project, pque->qu_qs.qu_name);\n\t\t} else {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_sps_q, project);\n\t\t}\n\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\treturn PBSE_SYSTEM;\n\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_ct(p:%s,%d) returned Exceeds_Limit]\", __func__,\n\t\t\t   PBSE_ENTLIMCT, project, subjobs)\n\t\treturn PBSE_ENTLIMCT;\n\n\t} else if (rc == Exceeds_Generic) {\n\t\tif (pque) {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_gpq_q,\n\t\t\t\t pque->qu_qs.qu_name);\n\t\t} else {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, \"%s\", msg_et_gps_q);\n\t\t}\n\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\treturn PBSE_SYSTEM;\n\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_ct(p:%s,%d) returned Exceeds_Generic]\", __func__,\n\t\t\t   PBSE_ENTLIMCT, project, subjobs)\n\t\treturn PBSE_ENTLIMCT;\n\t}\n\n\tET_LIM_DBG(\"exiting, ret 0 [all ok]\", __func__)\n\treturn 0; /* within all count limits */\n}\n\n/**\n * @brief\n * \t\tcheck_entity_ct_limit_max() - called to see if a job can be enqueued\n *\t\t1. Called when new job is arriving against server attributes:\n *\t   \t- pque will be null\n *\t\t2. Called to check against queue attributes on any enqueue\n *\t   \t(submit, move or route):\n *\t   \t- pque will point to queue struct, i.e. 
not null\n *\n * @param[in]\tpjob\t-\tnew job\n * @param[in]\tpque\t-\tany enqueue\n *\n * @return\twithin the limit or not\n * @retval\tzero\t: within defined limit\n * @retval\tPBS_Enumber\t: if limit exceeded\n * @note\n *\t\tOn an error, a formatted message is attached to the job in ji_\n */\nint\ncheck_entity_ct_limit_max(job *pjob, pbs_queue *pque)\n{\n\tchar *egroup;\n\tchar *project;\n\tchar *euser;\n\tattribute *pmax_queued;\n\tint rc;\n\tint subjobs;\n\tchar ebuff[COMMENT_BUF_SIZE + 1];\n\textern char *msg_et_qct;\n\textern char *msg_et_sct;\n\textern char *msg_et_ggq;\n\textern char *msg_et_ggs;\n\textern char *msg_et_gpq;\n\textern char *msg_et_gps;\n\textern char *msg_et_guq;\n\textern char *msg_et_gus;\n\textern char *msg_et_sgq;\n\textern char *msg_et_sgs;\n\textern char *msg_et_spq;\n\textern char *msg_et_sps;\n\textern char *msg_et_suq;\n\textern char *msg_et_sus;\n\n\tET_LIM_DBG(\"entered for %s\", __func__, pque ? pque->qu_qs.qu_name : \"server\")\n\teuser = get_jattr_str(pjob, JOB_ATR_euser);\n\tegroup = get_jattr_str(pjob, JOB_ATR_egroup);\n\tproject = get_jattr_str(pjob, JOB_ATR_project);\n\tif (pjob->ji_clterrmsg) {\n\t\tfree(pjob->ji_clterrmsg);\n\t\tpjob->ji_clterrmsg = NULL;\n\t}\n\tif (pque)\n\t\tpmax_queued = get_qattr(pque, QA_ATR_max_queued);\n\telse\n\t\tpmax_queued = get_sattr(SVR_ATR_max_queued);\n\n\tif (!is_attr_set(pmax_queued)) {\n\t\tET_LIM_DBG(\"exiting, ret 0 [max_queued limit not set for %s]\", __func__, pque ? pque->qu_qs.qu_name : \"server\")\n\t\treturn PBSE_NONE; /* no limits set */\n\t}\n\n\tif ((subjobs = get_queued_subjobs_ct(pjob)) < 0) {\n\t\tET_LIM_DBG(\"exiting, ret %d [get_queued_subjobs_ct() returned %d]\", __func__,\n\t\t\t   PBSE_INTERNAL, subjobs)\n\t\treturn PBSE_INTERNAL;\n\t}\n\n\t/* I.  For jobs count limits */\n\n\t/* 1. 
Check against Overall limit, [o:PBS_ALL] */\n\trc = check_single_entity_ct(LIM_OVERALL, PBS_ALL_ENTITY, pmax_queued, subjobs, pjob);\n\tif (rc == Exceeds_Limit) {\n\t\tif (pque) {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_qct,\n\t\t\t\t pque->qu_qs.qu_name);\n\t\t} else {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, \"%s\", msg_et_sct);\n\t\t}\n\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\treturn PBSE_SYSTEM;\n\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_ct(o:\" PBS_ALL_ENTITY \",%d) returned Exceeds_Limit]\", __func__,\n\t\t\t   PBSE_ENTLIMCT, subjobs)\n\t\treturn PBSE_ENTLIMCT;\n\t}\n\n\t/* 2. Check against specific user limit, [u:user] */\n\trc = check_single_entity_ct(LIM_USER, euser, pmax_queued, subjobs, pjob);\n\tif (rc == Exceeds_Limit) {\n\t\tif (pque) {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_suq,\n\t\t\t\t euser, pque->qu_qs.qu_name);\n\t\t} else {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_sus, euser);\n\t\t}\n\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\treturn PBSE_SYSTEM;\n\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_ct(u:%s,%d) returned Exceeds_Limit]\", __func__,\n\t\t\t   PBSE_ENTLIMCT, euser, subjobs)\n\t\treturn PBSE_ENTLIMCT;\n\n\t} else if (rc == Exceeds_Generic) {\n\t\tif (pque) {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_guq,\n\t\t\t\t \"generic\", pque->qu_qs.qu_name);\n\t\t} else {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_gus, \"generic\");\n\t\t}\n\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\treturn PBSE_SYSTEM;\n\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_ct(u:%s,%d) returned Exceeds_Generic]\", __func__,\n\t\t\t   PBSE_ENTLIMCT, euser, subjobs)\n\t\treturn PBSE_ENTLIMCT;\n\t}\n\n\t/* 3. 
Check against specific group limit, [g:group] */\n\trc = check_single_entity_ct(LIM_GROUP, egroup, pmax_queued, subjobs, pjob);\n\tif (rc == Exceeds_Limit) {\n\t\tif (pque) {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_sgq,\n\t\t\t\t egroup, pque->qu_qs.qu_name);\n\t\t} else {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_sgs, egroup);\n\t\t}\n\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\treturn PBSE_SYSTEM;\n\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_ct(g:%s,%d) returned Exceeds_Limit]\", __func__,\n\t\t\t   PBSE_ENTLIMCT, egroup, subjobs)\n\t\treturn PBSE_ENTLIMCT;\n\n\t} else if (rc == Exceeds_Generic) {\n\t\tif (pque) {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_ggq,\n\t\t\t\t pque->qu_qs.qu_name);\n\t\t} else {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, \"%s\", msg_et_ggs);\n\t\t}\n\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\treturn PBSE_SYSTEM;\n\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_ct(g:%s,%d) returned Exceeds_Generic]\", __func__,\n\t\t\t   PBSE_ENTLIMCT, egroup, subjobs)\n\t\treturn PBSE_ENTLIMCT;\n\t}\n\n\t/* 4. 
Check against specific project limit, [p:project] */\n\trc = check_single_entity_ct(LIM_PROJECT, project, pmax_queued, subjobs, pjob);\n\tif (rc == Exceeds_Limit) {\n\t\tif (pque) {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_spq,\n\t\t\t\t project, pque->qu_qs.qu_name);\n\t\t} else {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_sps, project);\n\t\t}\n\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\treturn PBSE_SYSTEM;\n\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_ct(p:%s,%d) returned Exceeds_Limit]\", __func__,\n\t\t\t   PBSE_ENTLIMCT, project, subjobs)\n\t\treturn PBSE_ENTLIMCT;\n\n\t} else if (rc == Exceeds_Generic) {\n\t\tif (pque) {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_gpq,\n\t\t\t\t pque->qu_qs.qu_name);\n\t\t} else {\n\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, \"%s\", msg_et_gps);\n\t\t}\n\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\treturn PBSE_SYSTEM;\n\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_ct(p:%s,%d) returned Exceeds_Generic]\", __func__,\n\t\t\t   PBSE_ENTLIMCT, project, subjobs)\n\t\treturn PBSE_ENTLIMCT;\n\t}\n\n\tET_LIM_DBG(\"exiting, ret 0 [all ok]\", __func__)\n\treturn 0; /* within all count limits */\n}\n\n/**\n * @brief\n * \t\tcheck_entity_res_limit_queued() - called to see if a job can be enqueued\n *\t\tbased on requested (or altered) job wide resources\n *\t\t1. Called when new job is arriving against server attributes:\n *\t   \t- pque will be null\n *\t\t2. Called to check against queue attributes on any enqueue\n *\t   \t(submit, move or route):\n *\t   \t- pque will point to queue struct, i.e. 
not null\n *\n * @param[in]\tpjob\t-\tnew job\n * @param[in]\tpque\t-\tany enqueue\n * @param[in]\taltered_resc\t-\taltered job wide resources\n *\n * @return\twithin the limit or not\n * @retval\tzero\t: within defined limit\n * @retval\tPBS_Enumber\t: if limit exceeded\n * @note\n *\t\tError message text returned in ebuffer if limit exceeded\n */\nint\ncheck_entity_resc_limit_queued(job *pjob, pbs_queue *pque, attribute *altered_resc)\n{\n\tchar *egroup;\n\tchar *project;\n\tchar *euser;\n\tint rc;\n\tint subjobs;\n\tattribute *pmaxqresc;\n\tattribute *pattr_new;\n\tattribute *pattr_old;\n\tresource *presc_new;\n\tresource *presc_old;\n\tchar ebuff[COMMENT_BUF_SIZE + 1];\n\n\textern char *msg_et_ggq_q;\n\textern char *msg_et_ggs_q;\n\textern char *msg_et_guq_q;\n\textern char *msg_et_gus_q;\n\textern char *msg_et_sgq_q;\n\textern char *msg_et_sgs_q;\n\textern char *msg_et_spq_q;\n\textern char *msg_et_sps_q;\n\textern char *msg_et_suq_q;\n\textern char *msg_et_sus_q;\n\textern char *msg_et_raq_q;\n\textern char *msg_et_ras_q;\n\textern char *msg_et_rggq_q;\n\textern char *msg_et_rggs_q;\n\textern char *msg_et_rgpq_q;\n\textern char *msg_et_rgps_q;\n\textern char *msg_et_rguq_q;\n\textern char *msg_et_rgus_q;\n\textern char *msg_et_rsgq_q;\n\textern char *msg_et_rsgs_q;\n\textern char *msg_et_rspq_q;\n\textern char *msg_et_rsps_q;\n\textern char *msg_et_rsuq_q;\n\textern char *msg_et_rsus_q;\n\n\tET_LIM_DBG(\"entered for %s, alt_res %p\", __func__, pque ? 
pque->qu_qs.qu_name : \"server\", altered_resc)\n\teuser = get_jattr_str(pjob, JOB_ATR_euser);\n\tegroup = get_jattr_str(pjob, JOB_ATR_egroup);\n\tproject = get_jattr_str(pjob, JOB_ATR_project);\n\tif (pjob->ji_clterrmsg) {\n\t\tfree(pjob->ji_clterrmsg);\n\t\tpjob->ji_clterrmsg = NULL;\n\t}\n\tif (pque)\n\t\tpmaxqresc = get_qattr(pque, QA_ATR_queued_jobs_threshold_res);\n\telse\n\t\tpmaxqresc = get_sattr(SVR_ATR_queued_jobs_threshold_res);\n\n\tif (!is_attr_set(pmaxqresc)) {\n\t\tET_LIM_DBG(\"exiting, ret 0 [queued_jobs_threshold_res limit not set for %s]\", __func__, pque ? pque->qu_qs.qu_name : \"server\")\n\t\treturn 0; /* no limits set */\n\t}\n\n\tif (altered_resc) {\n\t\tpattr_new = altered_resc;\n\t\tpattr_old = get_jattr(pjob, JOB_ATR_resource);\n\t} else {\n\t\tpattr_new = get_jattr(pjob, JOB_ATR_resource);\n\t\tpattr_old = NULL;\n\t}\n\n\tif ((subjobs = get_queued_subjobs_ct(pjob)) < 0) {\n\t\tET_LIM_DBG(\"exiting, ret %d [get_queued_subjobs_ct() returned %d]\", __func__,\n\t\t\t   PBSE_INTERNAL, subjobs)\n\t\treturn PBSE_INTERNAL;\n\t}\n\n\tfor (presc_new = (resource *) GET_NEXT(pattr_new->at_val.at_list);\n\t     presc_new != NULL;\n\t     presc_new = (resource *) GET_NEXT(presc_new->rs_link)) {\n\t\tchar *rescn = presc_new->rs_defin->rs_name;\n\t\t/* is there an entity limit set for this resource */\n\t\tif (!(is_attr_set(&presc_new->rs_value)) || (presc_new->rs_defin->rs_entlimflg != PBS_ENTLIM_LIMITSET))\n\t\t\tcontinue; /* no limit set */\n\n\t\t/* If this is from qalter where presc_old is set, see if the      */\n\t\t/* corresponding resource is in presc_old, i.e. had a prior value */\n\n\t\tif (pattr_old)\n\t\t\tpresc_old = find_resc_entry(pattr_old, presc_new->rs_defin);\n\t\telse\n\t\t\tpresc_old = NULL;\n\n\t\tET_LIM_DBG(\"checking for resc %s\", __func__, rescn)\n\t\t/* 1. 
check against overall limit o:PBS_ALL */\n\t\trc = check_single_entity_res(LIM_OVERALL, PBS_ALL_ENTITY,\n\t\t\t\t\t     pmaxqresc,\n\t\t\t\t\t     presc_new, presc_old, subjobs, pjob);\n\t\tif (rc == Exceeds_Limit) {\n\t\t\tif (pque) {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_raq_q,\n\t\t\t\t\t presc_new->rs_defin->rs_name,\n\t\t\t\t\t pque->qu_qs.qu_name);\n\t\t\t} else {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_ras_q,\n\t\t\t\t\t presc_new->rs_defin->rs_name);\n\t\t\t}\n\t\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\t\treturn PBSE_SYSTEM;\n\t\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_res(o:\" PBS_ALL_ENTITY \";%s,%d) returned Exceeds_Limit]\", __func__,\n\t\t\t\t   PBSE_ENTLIMRESC, rescn, subjobs)\n\t\t\treturn PBSE_ENTLIMRESC;\n\t\t}\n\n\t\t/* 2. check against user/generic-user limit */\n\t\trc = check_single_entity_res(LIM_USER, euser,\n\t\t\t\t\t     pmaxqresc,\n\t\t\t\t\t     presc_new, presc_old, subjobs, pjob);\n\t\tif (rc == Exceeds_Limit) {\n\t\t\tif (pque) {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_rsuq_q,\n\t\t\t\t\t euser,\n\t\t\t\t\t presc_new->rs_defin->rs_name,\n\t\t\t\t\t pque->qu_qs.qu_name);\n\t\t\t} else {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_rsus_q,\n\t\t\t\t\t euser, presc_new->rs_defin->rs_name);\n\t\t\t}\n\t\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\t\treturn PBSE_SYSTEM;\n\t\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_res(u:%s;%s,%d) returned Exceeds_Limit]\", __func__,\n\t\t\t\t   PBSE_ENTLIMRESC, euser, rescn, subjobs)\n\t\t\treturn PBSE_ENTLIMRESC;\n\n\t\t} else if (rc == Exceeds_Generic) {\n\t\t\tif (pque) {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_rguq_q,\n\t\t\t\t\t presc_new->rs_defin->rs_name,\n\t\t\t\t\t pque->qu_qs.qu_name);\n\t\t\t} else {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_rgus_q,\n\t\t\t\t\t 
presc_new->rs_defin->rs_name);\n\t\t\t}\n\t\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\t\treturn PBSE_SYSTEM;\n\t\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_res(u:%s;%s,%d) returned Exceeds_Generic]\", __func__,\n\t\t\t\t   PBSE_ENTLIMRESC, euser, rescn, subjobs)\n\t\t\treturn PBSE_ENTLIMRESC;\n\t\t}\n\n\t\t/* 3. check against specific/generic group limit */\n\t\trc = check_single_entity_res(LIM_GROUP, egroup,\n\t\t\t\t\t     pmaxqresc,\n\t\t\t\t\t     presc_new, presc_old, subjobs, pjob);\n\t\tif (rc == Exceeds_Limit) {\n\t\t\tif (pque) {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_rsgq_q,\n\t\t\t\t\t egroup,\n\t\t\t\t\t presc_new->rs_defin->rs_name,\n\t\t\t\t\t pque->qu_qs.qu_name);\n\t\t\t} else {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_rsgs_q,\n\t\t\t\t\t egroup,\n\t\t\t\t\t presc_new->rs_defin->rs_name);\n\t\t\t}\n\t\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\t\treturn PBSE_SYSTEM;\n\t\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_res(g:%s;%s,%d) returned Exceeds_Limit]\", __func__,\n\t\t\t\t   PBSE_ENTLIMRESC, egroup, rescn, subjobs)\n\t\t\treturn PBSE_ENTLIMRESC;\n\n\t\t} else if (rc == Exceeds_Generic) {\n\t\t\tif (pque) {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_rggq_q,\n\t\t\t\t\t presc_new->rs_defin->rs_name,\n\t\t\t\t\t pque->qu_qs.qu_name);\n\t\t\t} else {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_rggs_q,\n\t\t\t\t\t presc_new->rs_defin->rs_name);\n\t\t\t}\n\t\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\t\treturn PBSE_SYSTEM;\n\t\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_res(g:%s;%s,%d) returned Exceeds_Generic]\", __func__,\n\t\t\t\t   PBSE_ENTLIMRESC, egroup, rescn, subjobs)\n\t\t\treturn PBSE_ENTLIMRESC;\n\t\t}\n\n\t\t/* 4. 
check against specific/generic project limit */\n\t\trc = check_single_entity_res(LIM_PROJECT, project,\n\t\t\t\t\t     pmaxqresc,\n\t\t\t\t\t     presc_new, presc_old, subjobs, pjob);\n\t\tif (rc == Exceeds_Limit) {\n\t\t\tif (pque) {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_rspq_q,\n\t\t\t\t\t project,\n\t\t\t\t\t presc_new->rs_defin->rs_name,\n\t\t\t\t\t pque->qu_qs.qu_name);\n\t\t\t} else {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_rsps_q,\n\t\t\t\t\t project,\n\t\t\t\t\t presc_new->rs_defin->rs_name);\n\t\t\t}\n\t\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\t\treturn PBSE_SYSTEM;\n\t\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_res(p:%s;%s,%d) returned Exceeds_Limit]\", __func__,\n\t\t\t\t   PBSE_ENTLIMRESC, project, rescn, subjobs)\n\t\t\treturn PBSE_ENTLIMRESC;\n\n\t\t} else if (rc == Exceeds_Generic) {\n\t\t\tif (pque) {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_rgpq_q,\n\t\t\t\t\t presc_new->rs_defin->rs_name,\n\t\t\t\t\t pque->qu_qs.qu_name);\n\t\t\t} else {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_rgps_q,\n\t\t\t\t\t presc_new->rs_defin->rs_name);\n\t\t\t}\n\t\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\t\treturn PBSE_SYSTEM;\n\t\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_res(p:%s;%s,%d) returned Exceeds_Generic]\", __func__,\n\t\t\t\t   PBSE_ENTLIMRESC, project, rescn, subjobs)\n\t\t\treturn PBSE_ENTLIMRESC;\n\t\t}\n\t}\n\n\tET_LIM_DBG(\"exiting, ret 0 [all ok]\", __func__)\n\t/* At this point the job is good to go (into the queue/server) */\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tcheck_entity_resc_limit_max() - called to see if a job can be enqueued\n *\t\tbased on requested (or altered) job wide resources\n *\t\t1. Called when a new job is arriving, against server attributes:\n *\t   \t- pque will be null\n *\t\t2. 
Called to check against queue attributes on any enqueue\n *\t   \t(submit, move or route):\n *\t   \t- pque will point to queue struct, i.e. not null\n *\n * @param[in]\tpjob\t-\tnew job\n * @param[in]\tpque\t-\tpointer to queue; null when checking server limits\n * @param[in]\taltered_resc\t-\taltered job wide resources\n *\n * @return\twithin the limit or not\n * @retval\tzero\t: within defined limit\n * @retval\tPBS_Enumber\t: if limit exceeded\n * @note\n * \t\tError message text returned in pjob->ji_clterrmsg if limit exceeded\n */\nint\ncheck_entity_resc_limit_max(job *pjob, pbs_queue *pque, attribute *altered_resc)\n{\n\tchar *egroup;\n\tchar *project;\n\tchar *euser;\n\tint rc;\n\tint subjobs;\n\tattribute *pmaxqresc;\n\tattribute *pattr_new;\n\tattribute *pattr_old;\n\tresource *presc_new;\n\tresource *presc_old;\n\tchar ebuff[COMMENT_BUF_SIZE + 1];\n\n\textern char *msg_et_ggq;\n\textern char *msg_et_ggs;\n\textern char *msg_et_guq;\n\textern char *msg_et_gus;\n\textern char *msg_et_sgq;\n\textern char *msg_et_sgs;\n\textern char *msg_et_spq;\n\textern char *msg_et_sps;\n\textern char *msg_et_suq;\n\textern char *msg_et_sus;\n\textern char *msg_et_raq;\n\textern char *msg_et_ras;\n\textern char *msg_et_rggq;\n\textern char *msg_et_rggs;\n\textern char *msg_et_rgpq;\n\textern char *msg_et_rgps;\n\textern char *msg_et_rguq;\n\textern char *msg_et_rgus;\n\textern char *msg_et_rsgq;\n\textern char *msg_et_rsgs;\n\textern char *msg_et_rspq;\n\textern char *msg_et_rsps;\n\textern char *msg_et_rsuq;\n\textern char *msg_et_rsus;\n\n\tET_LIM_DBG(\"entered for %s, alt_res %p\", __func__, pque ? 
pque->qu_qs.qu_name : \"server\", altered_resc)\n\teuser = get_jattr_str(pjob, JOB_ATR_euser);\n\tegroup = get_jattr_str(pjob, JOB_ATR_egroup);\n\tproject = get_jattr_str(pjob, JOB_ATR_project);\n\tif (pjob->ji_clterrmsg) {\n\t\tfree(pjob->ji_clterrmsg);\n\t\tpjob->ji_clterrmsg = NULL;\n\t}\n\tif (pque)\n\t\tpmaxqresc = get_qattr(pque, QA_ATR_max_queued_res);\n\telse\n\t\tpmaxqresc = get_sattr(SVR_ATR_max_queued_res);\n\n\tif (!is_attr_set(pmaxqresc)) {\n\t\tET_LIM_DBG(\"exiting, ret 0 [max_queued_res limit not set for %s]\", __func__, pque ? pque->qu_qs.qu_name : \"server\")\n\t\treturn 0; /* no limits set */\n\t}\n\n\tif (altered_resc) {\n\t\tpattr_new = altered_resc;\n\t\tpattr_old = get_jattr(pjob, JOB_ATR_resource);\n\t} else {\n\t\tpattr_new = get_jattr(pjob, JOB_ATR_resource);\n\t\tpattr_old = NULL;\n\t}\n\n\tif ((subjobs = get_queued_subjobs_ct(pjob)) < 0) {\n\t\tET_LIM_DBG(\"exiting, ret %d [get_queued_subjobs_ct() returned %d]\", __func__,\n\t\t\t   PBSE_INTERNAL, subjobs)\n\t\treturn PBSE_INTERNAL;\n\t}\n\n\tfor (presc_new = (resource *) GET_NEXT(pattr_new->at_val.at_list);\n\t     presc_new != NULL;\n\t     presc_new = (resource *) GET_NEXT(presc_new->rs_link)) {\n\t\tchar *rescn = presc_new->rs_defin->rs_name;\n\t\t/* is there an entity limit set for this resource */\n\t\tif (!(is_attr_set(&presc_new->rs_value)) || (presc_new->rs_defin->rs_entlimflg != PBS_ENTLIM_LIMITSET))\n\t\t\tcontinue; /* no limit set */\n\n\t\t/* If this is from qalter where presc_old is set, see if the */\n\t\t/* corresponding resource in presc_old had a prior value     */\n\n\t\tif (pattr_old)\n\t\t\tpresc_old = find_resc_entry(pattr_old, presc_new->rs_defin);\n\t\telse\n\t\t\tpresc_old = NULL;\n\n\t\tET_LIM_DBG(\"checking for resc %s\", __func__, rescn)\n\t\t/* 1. 
check against overall limit o:PBS_ALL */\n\t\trc = check_single_entity_res(LIM_OVERALL, PBS_ALL_ENTITY,\n\t\t\t\t\t     pmaxqresc,\n\t\t\t\t\t     presc_new, presc_old, subjobs, pjob);\n\t\tif (rc == Exceeds_Limit) {\n\t\t\tif (pque) {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_raq,\n\t\t\t\t\t presc_new->rs_defin->rs_name,\n\t\t\t\t\t pque->qu_qs.qu_name);\n\t\t\t} else {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_ras,\n\t\t\t\t\t presc_new->rs_defin->rs_name);\n\t\t\t}\n\t\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\t\treturn PBSE_SYSTEM;\n\t\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_res(o:\" PBS_ALL_ENTITY \";%s,%d) returned Exceeds_Limit]\", __func__,\n\t\t\t\t   PBSE_ENTLIMRESC, rescn, subjobs)\n\t\t\treturn PBSE_ENTLIMRESC;\n\t\t}\n\n\t\t/* 2. check against user/generic-user limit */\n\t\trc = check_single_entity_res(LIM_USER, euser,\n\t\t\t\t\t     pmaxqresc,\n\t\t\t\t\t     presc_new, presc_old, subjobs, pjob);\n\t\tif (rc == Exceeds_Limit) {\n\t\t\tif (pque) {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_rsuq,\n\t\t\t\t\t euser,\n\t\t\t\t\t presc_new->rs_defin->rs_name,\n\t\t\t\t\t pque->qu_qs.qu_name);\n\t\t\t} else {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_rsus,\n\t\t\t\t\t euser, presc_new->rs_defin->rs_name);\n\t\t\t}\n\t\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\t\treturn PBSE_SYSTEM;\n\t\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_res(u:%s;%s,%d) returned Exceeds_Limit]\", __func__,\n\t\t\t\t   PBSE_ENTLIMRESC, euser, rescn, subjobs)\n\t\t\treturn PBSE_ENTLIMRESC;\n\n\t\t} else if (rc == Exceeds_Generic) {\n\t\t\tif (pque) {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_rguq,\n\t\t\t\t\t presc_new->rs_defin->rs_name,\n\t\t\t\t\t pque->qu_qs.qu_name);\n\t\t\t} else {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_rgus,\n\t\t\t\t\t 
presc_new->rs_defin->rs_name);\n\t\t\t}\n\t\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\t\treturn PBSE_SYSTEM;\n\t\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_res(u:%s;%s,%d) returned Exceeds_Generic]\", __func__,\n\t\t\t\t   PBSE_ENTLIMRESC, euser, rescn, subjobs)\n\t\t\treturn PBSE_ENTLIMRESC;\n\t\t}\n\n\t\t/* 3. check against specific/generic group limit */\n\t\trc = check_single_entity_res(LIM_GROUP, egroup,\n\t\t\t\t\t     pmaxqresc,\n\t\t\t\t\t     presc_new, presc_old, subjobs, pjob);\n\t\tif (rc == Exceeds_Limit) {\n\t\t\tif (pque) {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_rsgq,\n\t\t\t\t\t egroup,\n\t\t\t\t\t presc_new->rs_defin->rs_name,\n\t\t\t\t\t pque->qu_qs.qu_name);\n\t\t\t} else {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_rsgs,\n\t\t\t\t\t egroup,\n\t\t\t\t\t presc_new->rs_defin->rs_name);\n\t\t\t}\n\t\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\t\treturn PBSE_SYSTEM;\n\t\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_res(g:%s;%s,%d) returned Exceeds_Limit]\", __func__,\n\t\t\t\t   PBSE_ENTLIMRESC, egroup, rescn, subjobs)\n\t\t\treturn PBSE_ENTLIMRESC;\n\n\t\t} else if (rc == Exceeds_Generic) {\n\t\t\tif (pque) {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_rggq,\n\t\t\t\t\t presc_new->rs_defin->rs_name,\n\t\t\t\t\t pque->qu_qs.qu_name);\n\t\t\t} else {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_rggs,\n\t\t\t\t\t presc_new->rs_defin->rs_name);\n\t\t\t}\n\t\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\t\treturn PBSE_SYSTEM;\n\t\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_res(g:%s;%s,%d) returned Exceeds_Generic]\", __func__,\n\t\t\t\t   PBSE_ENTLIMRESC, egroup, rescn, subjobs)\n\t\t\treturn PBSE_ENTLIMRESC;\n\t\t}\n\n\t\t/* 4. 
check against specific/generic project limit */\n\t\trc = check_single_entity_res(LIM_PROJECT, project,\n\t\t\t\t\t     pmaxqresc,\n\t\t\t\t\t     presc_new, presc_old, subjobs, pjob);\n\t\tif (rc == Exceeds_Limit) {\n\t\t\tif (pque) {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_rspq,\n\t\t\t\t\t project,\n\t\t\t\t\t presc_new->rs_defin->rs_name,\n\t\t\t\t\t pque->qu_qs.qu_name);\n\t\t\t} else {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_rsps,\n\t\t\t\t\t project,\n\t\t\t\t\t presc_new->rs_defin->rs_name);\n\t\t\t}\n\t\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\t\treturn PBSE_SYSTEM;\n\t\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_res(p:%s;%s,%d) returned Exceeds_Limit]\", __func__,\n\t\t\t\t   PBSE_ENTLIMRESC, project, rescn, subjobs)\n\t\t\treturn PBSE_ENTLIMRESC;\n\n\t\t} else if (rc == Exceeds_Generic) {\n\t\t\tif (pque) {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_rgpq,\n\t\t\t\t\t presc_new->rs_defin->rs_name,\n\t\t\t\t\t pque->qu_qs.qu_name);\n\t\t\t} else {\n\t\t\t\tsnprintf(ebuff, COMMENT_BUF_SIZE, msg_et_rgps,\n\t\t\t\t\t presc_new->rs_defin->rs_name);\n\t\t\t}\n\t\t\tebuff[COMMENT_BUF_SIZE] = '\\0';\n\t\t\tif ((pjob->ji_clterrmsg = strdup(ebuff)) == NULL)\n\t\t\t\treturn PBSE_SYSTEM;\n\t\t\tET_LIM_DBG(\"exiting, ret %d [check_single_entity_res(p:%s;%s,%d) returned Exceeds_Generic]\", __func__,\n\t\t\t\t   PBSE_ENTLIMRESC, project, rescn, subjobs)\n\t\t\treturn PBSE_ENTLIMRESC;\n\t\t}\n\t}\n\n\tET_LIM_DBG(\"exiting, ret 0 [all ok]\", __func__)\n\t/* At this point the job is good to go (into the queue/server) */\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tset_single_entity_ct - incr/decr a count for a single entity (user/group/all)\n *\t\tfor an attribute owned by queue or server\n * @see\n *\t\tset_entity_ct_sum()\n *\n * @param[in]\tkt    - key type\n * @param[in]\tename - entity name\n * @param[in]\tpatr  - pointer to attribute\n * @param[in]\tpjob  - pointer to job\n * 
@param[in]\tsubjobs   - count of queued subjobs\n * @param[in]\top    - increment or decrement\n *\n * @return\twithin the limit or not\n * @retval\tzero\t: adjusted Entity count\n * @retval\tPBS_Enumber\t: something went wrong\n *\n */\n\nstatic int\nset_single_entity_ct(enum lim_keytypes kt, char *ename, attribute *patr, job *pjob, int subjobs, enum batch_op op)\n{\n\tchar *kstr;\n\tvoid *ctx;\n\tsvr_entlim_leaf_t *plf;\n\tint rc;\n\n\tkstr = entlim_mk_runkey(kt, ename);\n\tif (kstr == NULL) {\n\t\tET_LIM_DBG(\"exiting, ret %d [kstr is NULL]\", __func__, PBSE_SYSTEM)\n\t\treturn (PBSE_SYSTEM);\n\t}\n\tET_LIM_DBG(\"kstr %s, %d, %s\", __func__, kstr, subjobs, (op == INCR) ? \"INCR\" : \"DECR\")\n\tctx = patr->at_val.at_enty.ae_tree;\n\tplf = (svr_entlim_leaf_t *) entlim_get(kstr, ctx);\n\tif (op == INCR) {\n\t\tif (plf == NULL) {\n\t\t\t/* add leaf for this entity-limit */\n\t\t\tif ((rc = alloc_svrleaf(NULL, &plf)) != PBSE_NONE) {\n\t\t\t\tfree(kstr);\n\t\t\t\tET_LIM_DBG(\"exiting, ret %d [alloc_svrleaf failed]\", __func__, rc)\n\t\t\t\treturn (rc);\n\t\t\t}\n\t\t\tif (entlim_add(kstr, plf, ctx) == -1) {\n\t\t\t\tfree(kstr);\n\t\t\t\tfree(plf);\n\t\t\t\tET_LIM_DBG(\"exiting, ret %d [entlim_add failed]\", __func__, PBSE_SYSTEM)\n\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t}\n\t\t}\n\t\tplf->slf_sum.at_val.at_long += subjobs;\n\t\tmark_attr_set(&plf->slf_sum);\n\t\tET_LIM_DBG(\"usage INCR to %ld, by %d\", __func__, plf->slf_sum.at_val.at_long, subjobs)\n\t} else {\n\t\tif (plf == NULL) {\n\t\t\tfree(kstr);\n\t\t\t/* Do not decrement what isn't there */\n\t\t\tET_LIM_DBG(\"exiting, ret %d [plf is NULL]\", __func__, PBSE_INTERNAL)\n\t\t\treturn (PBSE_INTERNAL);\n\t\t}\n\t\tplf->slf_sum.at_val.at_long -= subjobs;\n\t\tmark_attr_set(&plf->slf_sum);\n\t\tET_LIM_DBG(\"usage DECR to %ld, by %d\", __func__, plf->slf_sum.at_val.at_long, subjobs)\n\n\t\tif (plf->slf_sum.at_val.at_long < 0L) {\n\t\t\tET_LIM_DBG(\"zeroing usage, was %ld, by %d\", __func__, plf->slf_sum.at_val.at_long, 
subjobs)\n\t\t\tplf->slf_sum.at_val.at_long = 0L;\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"set_single_entity_ct zeroing negative usage for %s\", kstr);\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER, LOG_WARNING, msg_daemonname, log_buffer);\n\t\t}\n\t}\n\tfree(kstr);\n\tET_LIM_DBG(\"exiting, ret 0 [all ok]\", __func__)\n\treturn PBSE_NONE;\n}\n\n/*\n * set_single_entity_res - incr/decr a single resource for a single\n *\tentity (user/group/all) for an attribute owned by queue or server,\n *\tsee set_entity_res_sum()\n *\n *\tkt     - key type\n *\tename  - entity name\n *\tpatr   - pointer to attribute owned by queue or server to update\n *\tnewval - ptr to resource new value\n *\toldval - ptr to resource old value, null except for qalter case\n *\t\t where value is being changed, not just set\n *\tpjob   - pointer to job\n *\tsubjobs - count of queued subjobs\n *\top     - increment or decrement\n *\n * @return\twithin the limit or not\n * @retval\tzero\t: adjusted Entity count\n * @retval\tPBS_Enumber\t: something went wrong\n */\nstatic int\nset_single_entity_res(enum lim_keytypes kt, char *ename,\n\t\t      attribute *patr, resource *newval,\n\t\t      resource *oldval, job *pjob, int subjobs, enum batch_op op)\n{\n\tchar *rescn = newval->rs_defin->rs_name;\n\tchar *kstr;\n\tvoid *ctx;\n\tsvr_entlim_leaf_t *plf;\n\tint rc;\n\tint i;\n\tattribute tmpval = newval->rs_value;\n\n\tkstr = entlim_mk_reskey(kt, ename, rescn);\n\tif (kstr == NULL) {\n\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"Error in entlim_mk_reskey for rescn %s\", rescn);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\tET_LIM_DBG(\"exiting, ret %d [kstr is NULL]\", __func__, PBSE_SYSTEM)\n\t\treturn (PBSE_SYSTEM);\n\t}\n\tET_LIM_DBG(\"kstr %s, %d, %s, res %s, %p\", __func__, kstr,\n\t\t   subjobs, (op == INCR) ? 
\"INCR\" : \"DECR\", rescn, oldval)\n\tctx = patr->at_val.at_enty.ae_tree;\n\tplf = (svr_entlim_leaf_t *) entlim_get(kstr, ctx);\n\n\tif (oldval && plf) {\n\t\tif (!(plf->slf_rescd->rs_comp(&tmpval, &oldval->rs_value))) {\n\t\t\tfree(kstr);\n\t\t\tET_LIM_DBG(\"exiting, ret 0 [newval == oldval]\", __func__)\n\t\t\treturn PBSE_NONE;\n\t\t}\n\t\tplf->slf_rescd->rs_set(&tmpval, &oldval->rs_value, DECR); /* subtract prior value (qalter case) */\n\t\tif (will_log_event(PBSEVENT_DEBUG4)) {\n\t\t\tsvrattrl *new, *old, *diff;\n\t\t\tchar *new_val, *old_val, *diff_val;\n\t\t\tnew = old = diff = NULL;\n\t\t\tif (is_attr_set(&newval->rs_value)) {\n\t\t\t\tplf->slf_rescd->rs_encode(&newval->rs_value, NULL, \"newval\", NULL, ATR_ENCODE_CLIENT, &new);\n\t\t\t\tnew_val = new->al_value;\n\t\t\t} else\n\t\t\t\tnew_val = \"(not_set)\";\n\t\t\tif (is_attr_set(&oldval->rs_value)) {\n\t\t\t\tplf->slf_rescd->rs_encode(&oldval->rs_value, NULL, \"oldval\", NULL, ATR_ENCODE_CLIENT, &old);\n\t\t\t\told_val = old->al_value;\n\t\t\t} else\n\t\t\t\told_val = \"(not_set)\";\n\t\t\tif (is_attr_set(&tmpval)) {\n\t\t\t\tplf->slf_rescd->rs_encode(&tmpval, NULL, \"diffval\", NULL, ATR_ENCODE_CLIENT, &diff);\n\t\t\t\tdiff_val = diff->al_value;\n\t\t\t} else\n\t\t\t\tdiff_val = \"(not_set)\";\n\t\t\tET_LIM_DBG(\"DECR old from new, %s - %s = %s\", __func__, new_val, old_val, diff_val)\n\t\t\tfree(new);\n\t\t\tfree(old);\n\t\t\tfree(diff);\n\t\t}\n\t}\n\n\tif (op == INCR) {\n\n\t\t/* increment resource by newval, subtracting oldval if there */\n\t\tif (plf == NULL) {\n\t\t\t/* add leaf for this entity-limit */\n\t\t\tif ((rc = alloc_svrleaf(rescn, &plf)) != PBSE_NONE) {\n\t\t\t\tfree(kstr);\n\t\t\t\tET_LIM_DBG(\"exiting, ret %d [alloc_svrleaf failed]\", __func__, rc)\n\t\t\t\treturn (rc);\n\t\t\t}\n\t\t\tif (entlim_add(kstr, plf, ctx) == -1) {\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"Error in entlim_add for reskey %s\", kstr);\n\t\t\t\tlog_err(-1, __func__, 
log_buffer);\n\t\t\t\tfree(kstr);\n\t\t\t\tfree(plf);\n\t\t\t\tET_LIM_DBG(\"exiting, ret %d [entlim_add failed]\", __func__, PBSE_SYSTEM)\n\t\t\t\treturn (PBSE_SYSTEM);\n\t\t\t}\n\t\t}\n\n\t\tfor (i = 0; i < subjobs; i++) {\n\t\t\t/* add in requested amount */\n\t\t\t(void) plf->slf_rescd->rs_set(&plf->slf_sum,\n\t\t\t\t\t\t      &tmpval, INCR);\n\t\t}\n\t\tif (will_log_event(PBSEVENT_DEBUG4) && (is_attr_set(&plf->slf_sum))) {\n\t\t\tsvrattrl *sum;\n\t\t\tplf->slf_rescd->rs_encode(&plf->slf_sum, NULL, \"sumval\", NULL, ATR_ENCODE_CLIENT, &sum);\n\t\t\tET_LIM_DBG(\"usage INCR to %s\", __func__, sum->al_value)\n\t\t\tfree(sum);\n\t\t} else\n\t\t\tET_LIM_DBG(\"usage INCR to (not_set)\", __func__)\n\n\t} else { /* DECR */\n\n\t\t/* decrement resource by newval, adding oldval if there */\n\t\tif (plf == NULL) {\n\t\t\t/* Do not decrement what isn't there */\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"cannot decrement resource for reskey %s: not found in attribute tree\", kstr);\n\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\tfree(kstr);\n\t\t\tET_LIM_DBG(\"exiting, ret %d [plf is NULL]\", __func__, PBSE_INTERNAL)\n\t\t\treturn (PBSE_INTERNAL);\n\t\t}\n\n\t\tfor (i = 0; i < subjobs; i++) {\n\t\t\t(void) plf->slf_rescd->rs_set(&plf->slf_sum, &tmpval, DECR);\n\t\t}\n\n\t\tif (will_log_event(PBSEVENT_DEBUG4) && (is_attr_set(&plf->slf_sum))) {\n\t\t\tsvrattrl *sum;\n\t\t\tplf->slf_rescd->rs_encode(&plf->slf_sum, NULL, \"sumval\", NULL, ATR_ENCODE_CLIENT, &sum);\n\t\t\tET_LIM_DBG(\"usage DECR to %s\", __func__, sum->al_value)\n\t\t\tfree(sum);\n\t\t} else\n\t\t\tET_LIM_DBG(\"usage DECR to (not_set)\", __func__)\n\n\t\ttmpval = plf->slf_sum;\n\t\tplf->slf_rescd->rs_decode(&tmpval, NULL, NULL, \"0\");\n\t\tif (plf->slf_rescd->rs_comp(&plf->slf_sum, &tmpval) < 0) {\n\t\t\tET_LIM_DBG(\"zeroing res usage\", __func__)\n\t\t\tplf->slf_sum = tmpval;\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"set_single_entity_res zeroing negative usage for %s-%s\", plf->slf_rescd->rs_name, 
kstr);\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER, LOG_WARNING, msg_daemonname, log_buffer);\n\t\t}\n\t}\n\n\tfree(kstr);\n\tET_LIM_DBG(\"exiting, ret 0 [all ok]\", __func__)\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n * \t\tset_entity_ct_sum_queued() - set (increment/decrement) entity usage sum\n *\t\t1. Against server attribute (pque will be null):\n *\t   \t\ta. Called when new job is arriving (INCR)\n *\t   \t\tb. Called when job is purged (DECR)\n *\t\t2. against queue attributes\n *\t   \t\ta. on any enqueue (INCR) or\n *\t   \t\tb. any dequeue (DECR)\n *\n * @param[in]\tpjob\t-\tpointer to job structure\n * @param[in]\tpque\t-\tpointer to queue structure; null for the server case\n * @param[in]\top\t-\toperation to perform, INCR or DECR\n *\n * @return\tint\n * @retval\tzero\t: all went ok\n * @retval\tPBS_Enumber\t: if error, typically a system or internal error\n */\nint\nset_entity_ct_sum_queued(job *pjob, pbs_queue *pque, enum batch_op op)\n{\n\tchar *egroup;\n\tchar *project;\n\tchar *euser;\n\tattribute *pqueued_jobs_threshold;\n\tenum batch_op rev_op;\n\tint rc;\n\tint subjobs;\n\n\t/* if the job is in states JOB_STATE_LTR_MOVED or JOB_STATE_LTR_FINISHED, */\n\t/* then just return,  the job's resources were removed from the   */\n\t/* entity sums when it went into the MOVED/FINISHED state\t  */\n\t/* also return if the entity limits for this job were \t\t  */\n\t/* decremented before.\t\t\t\t\t\t  */\n\n\tif ((check_job_state(pjob, JOB_STATE_LTR_MOVED)) ||\n\t    (check_job_state(pjob, JOB_STATE_LTR_FINISHED)) ||\n\t    (check_job_state(pjob, JOB_STATE_LTR_EXPIRED)) ||\n\t    ((check_job_state(pjob, JOB_STATE_LTR_RUNNING)) && (op == INCR))) {\n\t\tET_LIM_DBG(\"exiting, ret 0 [job in %c state]\", __func__, get_job_state(pjob))\n\t\treturn 0;\n\t}\n\n\t/* set reverse op in case we have to back up */\n\tif (op == INCR)\n\t\trev_op = DECR;\n\telse\n\t\trev_op = INCR;\n\n\tif (pque)\n\t\tpqueued_jobs_threshold = get_qattr(pque, 
QA_ATR_queued_jobs_threshold);\n\telse\n\t\tpqueued_jobs_threshold = get_sattr(SVR_ATR_queued_jobs_threshold);\n\n\tif (!is_attr_set(pqueued_jobs_threshold)) {\n\t\tET_LIM_DBG(\"exiting, ret 0 [queued_jobs_threshold limit not set for %s]\", __func__, pque ? pque->qu_qs.qu_name : \"server\")\n\t\treturn PBSE_NONE; /* no limits set */\n\t}\n\n\teuser = get_jattr_str(pjob, JOB_ATR_euser);\n\tegroup = get_jattr_str(pjob, JOB_ATR_egroup);\n\tproject = get_jattr_str(pjob, JOB_ATR_project);\n\n\tif ((subjobs = get_queued_subjobs_ct(pjob)) < 0) {\n\t\tET_LIM_DBG(\"exiting, ret %d [get_queued_subjobs_ct() returned %d]\", __func__,\n\t\t\t   PBSE_INTERNAL, subjobs)\n\t\treturn PBSE_INTERNAL;\n\t}\n\n\t/* 1. set Overall limit, [o:PBS_ALL] */\n\trc = set_single_entity_ct(LIM_OVERALL, PBS_ALL_ENTITY, pqueued_jobs_threshold, pjob, subjobs, op);\n\tif (rc != PBSE_NONE) {\n\t\tET_LIM_DBG(\"exiting, ret %d [set_single_entity_ct(o:\" PBS_ALL_ENTITY \",%d,%s) failed]\", __func__,\n\t\t\t   rc, subjobs, (op == INCR) ? \"INCR\" : \"DECR\")\n\t\treturn rc;\n\t}\n\n\t/* 2. set specific user limit, [u:user] */\n\trc = set_single_entity_ct(LIM_USER, euser, pqueued_jobs_threshold, pjob, subjobs, op);\n\tif (rc != PBSE_NONE) {\n\t\t/* undo what was done above */\n\t\t(void) set_single_entity_ct(LIM_OVERALL, PBS_ALL_ENTITY, pqueued_jobs_threshold, pjob,\n\t\t\t\t\t    subjobs, rev_op);\n\t\tET_LIM_DBG(\"exiting, ret %d [set_single_entity_ct(u:%s,%d,%s) failed]\", __func__,\n\t\t\t   rc, euser, subjobs, (op == INCR) ? \"INCR\" : \"DECR\")\n\t\treturn rc;\n\t}\n\n\t/* 3. 
set specific group limit, [g:group] */\n\trc = set_single_entity_ct(LIM_GROUP, egroup, pqueued_jobs_threshold, pjob, subjobs, op);\n\tif (rc != PBSE_NONE) {\n\t\t/* undo what was done above */\n\t\t(void) set_single_entity_ct(LIM_USER, euser, pqueued_jobs_threshold, pjob, subjobs, rev_op);\n\t\t(void) set_single_entity_ct(LIM_OVERALL, PBS_ALL_ENTITY, pqueued_jobs_threshold, pjob,\n\t\t\t\t\t    subjobs, rev_op);\n\t\tET_LIM_DBG(\"exiting, ret %d [set_single_entity_ct(g:%s,%d,%s) failed]\", __func__,\n\t\t\t   rc, egroup, subjobs, (op == INCR) ? \"INCR\" : \"DECR\")\n\t\treturn rc;\n\t}\n\n\t/* 4. set specific project limit, [p:project] */\n\trc = set_single_entity_ct(LIM_PROJECT, project, pqueued_jobs_threshold, pjob, subjobs, op);\n\tif (rc != PBSE_NONE) {\n\t\t/* undo what was done above */\n\t\t(void) set_single_entity_ct(LIM_GROUP, egroup, pqueued_jobs_threshold, pjob, subjobs, rev_op);\n\t\t(void) set_single_entity_ct(LIM_USER, euser, pqueued_jobs_threshold, pjob, subjobs, rev_op);\n\t\t(void) set_single_entity_ct(LIM_OVERALL, PBS_ALL_ENTITY, pqueued_jobs_threshold, pjob, subjobs, rev_op);\n\t\tET_LIM_DBG(\"exiting, ret %d [set_single_entity_ct(p:%s,%d,%s) failed]\", __func__,\n\t\t\t   rc, project, subjobs, (op == INCR) ? \"INCR\" : \"DECR\")\n\t\treturn rc;\n\t}\n\n\tET_LIM_DBG(\"exiting, ret 0 [all ok]\", __func__)\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tset_entity_ct_sum_max() - set (increment/decrement) entity usage sum\n *\t\t1. Against server attribute (pque will be null):\n *\t   \t\ta. Called when new job is arriving (INCR)\n *\t   \t\tb. Called when job is purged (DECR)\n *\t\t2. against queue attributes\n *\t   \t\ta. on any enqueue (INCR) or\n *\t   \t\tb. any dequeue (DECR)\n *\n * @param[in]\tpjob\t-\tpointer to job structure\n * @param[in]\tpque\t-\tpque will point to queue structure, i.e. 
not be null\n * @param[in]\top\t-\toperation to perform, INCR or DECR\n *\n * @return\tint\n * @retval\tzero\t: all went ok\n * @retval\tPBS_Enumber\t: if error, typically a system or internal error\n */\nint\nset_entity_ct_sum_max(job *pjob, pbs_queue *pque, enum batch_op op)\n{\n\tchar *egroup;\n\tchar *project;\n\tchar *euser;\n\tattribute *pmax_queued;\n\tenum batch_op rev_op;\n\tint rc;\n\tint subjobs;\n\n\t/* if the job is in states JOB_STATE_LTR_MOVED or JOB_STATE_LTR_FINISHED, */\n\t/* then just return,  the job's resources were removed from the   */\n\t/* entity sums when it went into the MOVED/FINISHED state\t  */\n\n\tif ((check_job_state(pjob, JOB_STATE_LTR_MOVED)) ||\n\t    (check_job_state(pjob, JOB_STATE_LTR_EXPIRED)) ||\n\t    (check_job_state(pjob, JOB_STATE_LTR_FINISHED))) {\n\t\tET_LIM_DBG(\"exiting, ret 0 [job in %c state]\", __func__, get_job_state(pjob))\n\t\treturn 0;\n\t}\n\n\t/* set reverse op in case we have to back up */\n\tif (op == INCR)\n\t\trev_op = DECR;\n\telse\n\t\trev_op = INCR;\n\n\tif (pque)\n\t\tpmax_queued = get_qattr(pque, QA_ATR_max_queued);\n\telse\n\t\tpmax_queued = get_sattr(SVR_ATR_max_queued);\n\n\tif (!is_attr_set(pmax_queued)) {\n\t\tET_LIM_DBG(\"exiting, ret 0 [max_queued limit not set for %s]\", __func__, pque ? pque->qu_qs.qu_name : \"server\")\n\t\treturn PBSE_NONE;\n\t}\n\n\teuser = get_jattr_str(pjob, JOB_ATR_euser);\n\tegroup = get_jattr_str(pjob, JOB_ATR_egroup);\n\tproject = get_jattr_str(pjob, JOB_ATR_project);\n\n\tif ((subjobs = get_queued_subjobs_ct(pjob)) < 0) {\n\t\tET_LIM_DBG(\"exiting, ret %d [get_queued_subjobs_ct() returned %d]\", __func__,\n\t\t\t   PBSE_INTERNAL, subjobs)\n\t\treturn PBSE_INTERNAL;\n\t}\n\n\t/* 1. set Overall limit, [o:PBS_ALL] */\n\trc = set_single_entity_ct(LIM_OVERALL, PBS_ALL_ENTITY, pmax_queued, pjob, subjobs, op);\n\tif (rc != PBSE_NONE) {\n\t\tET_LIM_DBG(\"exiting, ret %d [set_single_entity_ct(o:\" PBS_ALL_ENTITY \",%d,%s) failed]\", __func__,\n\t\t\t   rc, subjobs, (op == INCR) ? 
\"INCR\" : \"DECR\")\n\t\treturn rc;\n\t}\n\n\t/* 2. set specific user limit, [u:user] */\n\trc = set_single_entity_ct(LIM_USER, euser, pmax_queued, pjob, subjobs, op);\n\tif (rc != PBSE_NONE) {\n\t\t/* undo what was done above */\n\t\t(void) set_single_entity_ct(LIM_OVERALL, PBS_ALL_ENTITY, pmax_queued, pjob, subjobs, rev_op);\n\t\tET_LIM_DBG(\"exiting, ret %d [set_single_entity_ct(u:%s,%d,%s) failed]\", __func__,\n\t\t\t   rc, euser, subjobs, (op == INCR) ? \"INCR\" : \"DECR\")\n\t\treturn rc;\n\t}\n\n\t/* 3. set specific group limit, [g:group] */\n\trc = set_single_entity_ct(LIM_GROUP, egroup, pmax_queued, pjob, subjobs, op);\n\tif (rc != PBSE_NONE) {\n\t\t/* undo what was done above */\n\t\t(void) set_single_entity_ct(LIM_USER, euser, pmax_queued, pjob, subjobs, rev_op);\n\t\t(void) set_single_entity_ct(LIM_OVERALL, PBS_ALL_ENTITY, pmax_queued, pjob, subjobs, rev_op);\n\t\tET_LIM_DBG(\"exiting, ret %d [set_single_entity_ct(g:%s,%d,%s) failed]\", __func__,\n\t\t\t   rc, egroup, subjobs, (op == INCR) ? \"INCR\" : \"DECR\")\n\t\treturn rc;\n\t}\n\n\t/* 4. set specific project limit, [p:project] */\n\trc = set_single_entity_ct(LIM_PROJECT, project, pmax_queued, pjob, subjobs, op);\n\tif (rc != PBSE_NONE) {\n\t\t/* undo what was done above */\n\t\t(void) set_single_entity_ct(LIM_GROUP, egroup, pmax_queued, pjob, subjobs, rev_op);\n\t\t(void) set_single_entity_ct(LIM_USER, euser, pmax_queued, pjob, subjobs, rev_op);\n\t\t(void) set_single_entity_ct(LIM_OVERALL, PBS_ALL_ENTITY, pmax_queued, pjob, subjobs, rev_op);\n\t\tET_LIM_DBG(\"exiting, ret %d [set_single_entity_ct(p:%s,%d,%s) failed]\", __func__,\n\t\t\t   rc, project, subjobs, (op == INCR) ? 
\"INCR\" : \"DECR\")\n\t\treturn rc;\n\t}\n\tET_LIM_DBG(\"exiting, ret 0 [all ok]\", __func__)\n\treturn 0; /* within all count limits */\n}\n\n/**\n * @brief\n *\t\trevert_entity_resources - revert prior entity resource counts if any failure occurs.\n *\n *\n * @param[in]  pmaxqresc    -   pointer to queue attribute structure\n * @param[in]  pattr_old    -   pointer to job attribute\n * @param[in]  presc_new    -   pointer to current processing resource\n * @param[in]  presc_old    -   pointer to old resource before alter\n * @param[in]  presc_first  -   pointer to first resource, used to reach the start of the resource list\n * @param[in]  pjob         -   pointer to job\n * @param[in]  subjobs      -   number of subjobs, if any.\n * @param[in]  op           -   operation to perform, INCR or DECR\n *\n * @return      int\n * @retval      zero        -   all went ok\n * @retval      -1          -   error in input parameters\n */\n\nstatic int\nrevert_entity_resources(attribute *pmaxqresc, attribute *pattr_old,\n\t\t\tresource *presc_new, resource *presc_old, resource *presc_first,\n\t\t\tjob *pjob, int subjobs, enum batch_op op)\n{\n\n\tint res_flag = 1;\n\tchar *euser = get_jattr_str(pjob, JOB_ATR_euser);\n\tchar *egroup = get_jattr_str(pjob, JOB_ATR_egroup);\n\tchar *project = get_jattr_str(pjob, JOB_ATR_project);\n\n\tif (pmaxqresc && presc_new && presc_first && euser && egroup && project) {\n\n\t\tfor (presc_new = (resource *) GET_PRIOR(presc_new->rs_link);\n\t\t     (presc_new != NULL) && res_flag;\n\t\t     presc_new = (resource *) GET_PRIOR(presc_new->rs_link)) {\n\n\t\t\tif (presc_new == presc_first)\n\t\t\t\tres_flag = 0;\n\t\t\tif (!(is_attr_set(&presc_new->rs_value)) || ((presc_new->rs_defin->rs_entlimflg & PBS_ENTLIM_LIMITSET) == 0))\n\t\t\t\tcontinue;\n\n\t\t\t/* If this is from qalter where presc_old is set, see if the */\n\t\t\t/* corresponding resource in presc_old had a prior value     */\n\n\t\t\tif (pattr_old)\n\t\t\t\tpresc_old = 
find_resc_entry(pattr_old, presc_new->rs_defin);\n\t\t\telse\n\t\t\t\tpresc_old = NULL;\n\n\t\t\t(void) set_single_entity_res(LIM_OVERALL, PBS_ALL_ENTITY, pmaxqresc, presc_new, presc_old, pjob, subjobs, op);\n\t\t\t(void) set_single_entity_res(LIM_USER, euser, pmaxqresc, presc_new, presc_old, pjob, subjobs, op);\n\t\t\t(void) set_single_entity_res(LIM_GROUP, egroup, pmaxqresc, presc_new, presc_old, pjob, subjobs, op);\n\t\t\t(void) set_single_entity_res(LIM_PROJECT, project, pmaxqresc, presc_new, presc_old, pjob, subjobs, op);\n\t\t}\n\n\t\treturn (0);\n\t} else\n\t\treturn (-1);\n}\n\n/**\n * @brief\n * \t\tset_entity_resc_sum_queued() - set entity resource usage\n *\t\tbased on requested (or altered) job wide resources\n *\t\t1. Called against server attributes (pque will be null):\n *\t   \t\ta. When new job arrives (INCR)\n *\t   \t\tb. when job is purged (DECR)\n *\t\t2. Called against queue attributes (pque will point to queue struct):\n *\t   \t\ta. on any enqueue (INCR), or\n *\t   \t\tb. on any dequeue (DECR)\n *\n * @param[in,out]\tpjob\t-\tpointer to job structure\n * @param[in]\tpque\t-\tpque will point to queue structure, i.e. 
non-NULL when applying queue limits, or NULL when applying server limits\n * @param[in]\taltered_resc\t-\taltered resources.\n * @param[in]\top\t-\toperation to apply: INCR or DECR\n *\n * @return\tint\n * @retval\tzero\t: all went ok\n * @retval\tPBSE error number\t: if error, typically a system or internal error\n */\nint\nset_entity_resc_sum_queued(job *pjob, pbs_queue *pque, attribute *altered_resc,\n\t\t\t   enum batch_op op)\n{\n\tchar *egroup = NULL;\n\tchar *project = NULL;\n\tchar *euser = NULL;\n\tint rc = PBSE_NONE;\n\tint rc_final;\n\tint subjobs;\n\tattribute *pmaxqresc = NULL;\n\tattribute *pattr_new = NULL;\n\tattribute *pattr_old = NULL;\n\tresource *presc_new = NULL;\n\tresource *presc_old = NULL;\n\tresource *presc_first = NULL;\n\tenum batch_op rev_op;\n\n\tET_LIM_DBG(\"entered [alt_res %p]\", __func__, altered_resc)\n\t/* if the job is in states JOB_STATE_LTR_MOVED or JOB_STATE_LTR_FINISHED, */\n\t/* then just return, the job's resources were removed from the    */\n\t/* entity sums when it went into the MOVED/FINISHED state\t  */\n\t/* also return if the entity limits for this job were \t\t  */\n\t/* decremented before.\t\t\t\t\t\t  */\n\n\tif ((check_job_state(pjob, JOB_STATE_LTR_MOVED)) ||\n\t    (check_job_state(pjob, JOB_STATE_LTR_FINISHED)) ||\n\t    (check_job_state(pjob, JOB_STATE_LTR_EXPIRED)) ||\n\t    ((check_job_state(pjob, JOB_STATE_LTR_RUNNING)) && (op == INCR))) {\n\t\tET_LIM_DBG(\"exiting, ret 0 [job in %c state]\", __func__, get_job_state(pjob))\n\t\treturn 0;\n\t}\n\n\t/* set reverse op in case we have to back up */\n\tif (op == INCR)\n\t\trev_op = DECR;\n\telse\n\t\trev_op = INCR;\n\n\tif (pque)\n\t\tpmaxqresc = get_qattr(pque, QA_ATR_queued_jobs_threshold_res);\n\telse\n\t\tpmaxqresc = get_sattr(SVR_ATR_queued_jobs_threshold_res);\n\n\tif (!is_attr_set(pmaxqresc)) {\n\t\tET_LIM_DBG(\"exiting, ret 0 [queued_jobs_threshold_res limit not set for %s]\", __func__, pque ? 
pque->qu_qs.qu_name : \"server\")\n\t\treturn 0; /* no limits set */\n\t}\n\n\tif (altered_resc) {\n\t\tpattr_new = altered_resc;\n\t\tpattr_old = get_jattr(pjob, JOB_ATR_resource);\n\t} else {\n\t\tpattr_new = get_jattr(pjob, JOB_ATR_resource);\n\t\tpattr_old = NULL;\n\t}\n\n\teuser = get_jattr_str(pjob, JOB_ATR_euser);\n\tegroup = get_jattr_str(pjob, JOB_ATR_egroup);\n\tproject = get_jattr_str(pjob, JOB_ATR_project);\n\n\tif ((subjobs = get_queued_subjobs_ct(pjob)) < 0)\n\t\trc = PBSE_INTERNAL;\n\n\tif (!euser) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"EMPTY USER\");\n\t\trc = PBSE_INTERNAL;\n\t}\n\tif (!egroup) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"EMPTY GROUP\");\n\t\trc = PBSE_INTERNAL;\n\t}\n\tif (!project) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"EMPTY PROJECT\");\n\t\trc = PBSE_INTERNAL;\n\t}\n\n\tif (rc == PBSE_INTERNAL) {\n\t\tET_LIM_DBG(\"exiting, ret %d [something not right, subjobs %d, %p, %p, %p]\", __func__,\n\t\t\t   PBSE_INTERNAL, subjobs, euser, egroup, project)\n\t\treturn PBSE_INTERNAL;\n\t}\n\n\trc_final = 0;\n\n\tfor (presc_new = (resource *) GET_NEXT(pattr_new->at_val.at_list), presc_first = presc_new;\n\t     presc_new != NULL;\n\t     presc_new = (resource *) GET_NEXT(presc_new->rs_link)) {\n\n\t\tchar *rescn;\n\t\tif (!(is_attr_set(&presc_new->rs_value)) || ((presc_new->rs_defin->rs_entlimflg & PBS_ENTLIM_LIMITSET) == 0))\n\t\t\tcontinue;\n\n\t\t/* If this is from qalter where presc_old is set, see if the */\n\t\t/* corresponding resource in presc_old had a prior value     */\n\n\t\tif (pattr_old)\n\t\t\tpresc_old = find_resc_entry(pattr_old, presc_new->rs_defin);\n\t\telse\n\t\t\tpresc_old = NULL;\n\n\t\trescn = presc_new->rs_defin->rs_name;\n\t\tif (rescn == NULL) {\n\t\t\tif (presc_new != presc_first)\n\t\t\t\tif (revert_entity_resources(pmaxqresc, pattr_old, presc_new, presc_old, presc_first, pjob, subjobs, rev_op) != 0)\n\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__, \"Error in 
revert_entity_resources\");\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"EMPTY RESOURCE\");\n\t\t\tET_LIM_DBG(\"exiting, ret %d [rescn is NULL]\", __func__, PBSE_INTERNAL)\n\t\t\treturn PBSE_INTERNAL;\n\t\t}\n\n\t\tET_LIM_DBG(\"setting usage for res %s\", __func__, rescn)\n\t\t/* 1. set overall limit o:PBS_ALL */\n\t\trc = set_single_entity_res(LIM_OVERALL, PBS_ALL_ENTITY,\n\t\t\t\t\t   pmaxqresc, presc_new, presc_old, pjob, subjobs, op);\n\t\tif (rc) {\n\t\t\tET_LIM_DBG(\"set_single_entity_res(o:\" PBS_ALL_ENTITY \";%s,%d,%s) failed with rc %d\", __func__,\n\t\t\t\t   rescn, subjobs, (op == INCR) ? \"INCR\" : \"DECR\", rc)\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"Error in LIM_OVERALL for resource %s\", rescn);\n\t\t\tlog_err(rc, __func__, log_buffer);\n\t\t\tif (op == INCR) {\n\t\t\t\tif (presc_new != presc_first)\n\t\t\t\t\tif (revert_entity_resources(pmaxqresc, pattr_old, presc_new, presc_old, presc_first, pjob, subjobs, rev_op) != 0)\n\t\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__, \"Error in revert_entity_resources\");\n\t\t\t\treturn rc;\n\t\t\t} else {\n\t\t\t\tif (!rc_final)\n\t\t\t\t\trc_final = rc;\n\t\t\t}\n\t\t\tcontinue;\n\t\t}\n\n\t\t/* 2. sets user limit */\n\t\trc = set_single_entity_res(LIM_USER, euser,\n\t\t\t\t\t   pmaxqresc, presc_new, presc_old, pjob, subjobs, op);\n\t\tif (rc) {\n\t\t\tET_LIM_DBG(\"set_single_entity_res(u:%s;%s,%d,%s) failed with rc %d\", __func__,\n\t\t\t\t   euser, rescn, subjobs, (op == INCR) ? 
\"INCR\" : \"DECR\", rc)\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"Error in LIM_USER for euser %s for resource %s\", euser, rescn);\n\t\t\tlog_err(rc, __func__, log_buffer);\n\t\t\t/* reverse change made above */\n\t\t\t(void) set_single_entity_res(LIM_OVERALL, PBS_ALL_ENTITY, pmaxqresc,\n\t\t\t\t\t\t     presc_new, presc_old, pjob, subjobs, rev_op);\n\t\t\tif (op == INCR) {\n\t\t\t\tif (presc_new != presc_first)\n\t\t\t\t\tif (revert_entity_resources(pmaxqresc, pattr_old, presc_new, presc_old, presc_first, pjob, subjobs, rev_op) != 0)\n\t\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__, \"Error in revert_entity_resources\");\n\t\t\t\treturn rc;\n\t\t\t} else {\n\t\t\t\tif (!rc_final)\n\t\t\t\t\trc_final = rc;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t}\n\n\t\t/* 3. set specific group limit */\n\t\trc = set_single_entity_res(LIM_GROUP, egroup,\n\t\t\t\t\t   pmaxqresc, presc_new, presc_old, pjob, subjobs, op);\n\t\tif (rc) {\n\t\t\tET_LIM_DBG(\"set_single_entity_res(g:%s;%s,%d,%s) failed with rc %d\", __func__,\n\t\t\t\t   egroup, rescn, subjobs, (op == INCR) ? \"INCR\" : \"DECR\", rc)\n\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"Error in LIM_GROUP for egroup %s for resource %s\", egroup, rescn);\n\t\t\tlog_err(rc, __func__, log_buffer);\n\n\t\t\t/* reverse changes made above */\n\t\t\t(void) set_single_entity_res(LIM_OVERALL, PBS_ALL_ENTITY,\n\t\t\t\t\t\t     pmaxqresc, presc_new, presc_old, pjob, subjobs, rev_op);\n\t\t\t(void) set_single_entity_res(LIM_USER, euser,\n\t\t\t\t\t\t     pmaxqresc, presc_new, presc_old, pjob, subjobs, rev_op);\n\t\t\tif (op == INCR) {\n\t\t\t\tif (presc_new != presc_first)\n\t\t\t\t\tif (revert_entity_resources(pmaxqresc, pattr_old, presc_new, presc_old, presc_first, pjob, subjobs, rev_op) != 0)\n\t\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__, \"Error in revert_entity_resources\");\n\t\t\t\treturn rc;\n\t\t\t} else {\n\t\t\t\tif (!rc_final)\n\t\t\t\t\trc_final = rc;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t}\n\n\t\t/* 4. 
set specific project limit */\n\t\trc = set_single_entity_res(LIM_PROJECT, project,\n\t\t\t\t\t   pmaxqresc, presc_new, presc_old, pjob, subjobs, op);\n\t\tif (rc) {\n\t\t\tET_LIM_DBG(\"set_single_entity_res(p:%s;%s,%d,%s) failed with rc %d\", __func__,\n\t\t\t\t   project, rescn, subjobs, (op == INCR) ? \"INCR\" : \"DECR\", rc)\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"Error in LIM_PROJECT for project %s for resource %s\", project, rescn);\n\t\t\tlog_err(rc, __func__, log_buffer);\n\n\t\t\t/* reverse changes made above */\n\t\t\t(void) set_single_entity_res(LIM_OVERALL, PBS_ALL_ENTITY,\n\t\t\t\t\t\t     pmaxqresc, presc_new, presc_old, pjob, subjobs, rev_op);\n\t\t\t(void) set_single_entity_res(LIM_USER, euser,\n\t\t\t\t\t\t     pmaxqresc, presc_new, presc_old, pjob, subjobs, rev_op);\n\t\t\t(void) set_single_entity_res(LIM_GROUP, egroup,\n\t\t\t\t\t\t     pmaxqresc, presc_new, presc_old, pjob, subjobs, rev_op);\n\t\t\tif (op == INCR) {\n\t\t\t\tif (presc_new != presc_first)\n\t\t\t\t\tif (revert_entity_resources(pmaxqresc, pattr_old, presc_new, presc_old, presc_first, pjob, subjobs, rev_op) != 0)\n\t\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__, \"Error in revert_entity_resources\");\n\t\t\t\treturn rc;\n\t\t\t} else {\n\t\t\t\tif (!rc_final)\n\t\t\t\t\trc_final = rc;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t}\n\t}\n\n\tET_LIM_DBG(\"exiting, ret %d\", __func__, rc_final)\n\treturn rc_final;\n}\n\n/**\n * @brief\n * \t\tset_entity_resc_sum_max() - set entity resource usage\n *\t\tbased on requested (or altered) job wide resources\n *\t\t1. Called against server attributes (pque will be null):\n *\t   \t\ta. When new job arrives (INCR)\n *\t   \t\tb. when job is purged (DECR)\n *\t\t2. Called against queue attributes (pque will point to queue struct):\n *\t   \t\ta. on any enqueue (INCR), or\n *\t   \t\tb. on any dequeue (DECR)\n *\n * @param[in]\tpjob\t-\tpointer to job structure\n * @param[in]\tpque\t-\tpque will point to queue structure, i.e. 
non-NULL when applying queue limits, or NULL when applying server limits\n * @param[in]\taltered_resc\t-\taltered resources.\n * @param[in]\top\t-\toperation to apply: INCR or DECR\n *\n * @return\tint\n * @retval\tzero\t: all went ok\n * @retval\tPBSE error number\t: if error, typically a system or internal error\n */\nint\nset_entity_resc_sum_max(job *pjob, pbs_queue *pque, attribute *altered_resc,\n\t\t\tenum batch_op op)\n{\n\tchar *egroup = NULL;\n\tchar *project = NULL;\n\tchar *euser = NULL;\n\tint rc = PBSE_NONE;\n\tint rc_final;\n\tint subjobs;\n\tattribute *pmaxqresc = NULL;\n\tattribute *pattr_new = NULL;\n\tattribute *pattr_old = NULL;\n\tresource *presc_new = NULL;\n\tresource *presc_old = NULL;\n\tresource *presc_first = NULL;\n\tenum batch_op rev_op;\n\n\tET_LIM_DBG(\"entered [alt_res %p]\", __func__, altered_resc)\n\t/* if the job is in states JOB_STATE_LTR_MOVED or JOB_STATE_LTR_FINISHED, */\n\t/* then just return, the job's resources were removed from the    */\n\t/* entity sums when it went into the MOVED/FINISHED state\t  */\n\n\tif ((check_job_state(pjob, JOB_STATE_LTR_MOVED)) ||\n\t    (check_job_state(pjob, JOB_STATE_LTR_EXPIRED)) ||\n\t    (check_job_state(pjob, JOB_STATE_LTR_FINISHED))) {\n\t\tET_LIM_DBG(\"exiting, ret 0 [job in %c state]\", __func__, get_job_state(pjob))\n\t\treturn 0;\n\t}\n\n\t/* set reverse op in case we have to back up */\n\tif (op == INCR)\n\t\trev_op = DECR;\n\telse\n\t\trev_op = INCR;\n\n\tif (pque)\n\t\tpmaxqresc = get_qattr(pque, QA_ATR_max_queued_res);\n\telse\n\t\tpmaxqresc = get_sattr(SVR_ATR_max_queued_res);\n\n\tif (!is_attr_set(pmaxqresc)) {\n\t\tET_LIM_DBG(\"exiting, ret 0 [max_queued_res limit not set for %s]\", __func__, pque ? 
pque->qu_qs.qu_name : \"server\")\n\t\treturn 0; /* no limits set */\n\t}\n\n\tif (altered_resc) {\n\t\tpattr_new = altered_resc;\n\t\tpattr_old = get_jattr(pjob, JOB_ATR_resource);\n\t} else {\n\t\tpattr_new = get_jattr(pjob, JOB_ATR_resource);\n\t\tpattr_old = NULL;\n\t}\n\n\teuser = get_jattr_str(pjob, JOB_ATR_euser);\n\tegroup = get_jattr_str(pjob, JOB_ATR_egroup);\n\tproject = get_jattr_str(pjob, JOB_ATR_project);\n\n\tif ((subjobs = get_queued_subjobs_ct(pjob)) < 0) {\n\t\trc = PBSE_INTERNAL;\n\t}\n\n\tif (!euser) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"EMPTY USER\");\n\t\trc = PBSE_INTERNAL;\n\t}\n\tif (!egroup) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"EMPTY GROUP\");\n\t\trc = PBSE_INTERNAL;\n\t}\n\tif (!project) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"EMPTY PROJECT\");\n\t\trc = PBSE_INTERNAL;\n\t}\n\n\tif (rc == PBSE_INTERNAL) {\n\t\tET_LIM_DBG(\"exiting, ret %d [something not right, subjobs %d, %p, %p, %p]\", __func__,\n\t\t\t   PBSE_INTERNAL, subjobs, euser, egroup, project)\n\t\treturn PBSE_INTERNAL;\n\t}\n\n\trc_final = 0;\n\n\tfor (presc_new = (resource *) GET_NEXT(pattr_new->at_val.at_list), presc_first = presc_new;\n\t     presc_new != NULL;\n\t     presc_new = (resource *) GET_NEXT(presc_new->rs_link)) {\n\t\tchar *rescn;\n\t\tif (!(is_attr_set(&presc_new->rs_value)) || ((presc_new->rs_defin->rs_entlimflg & PBS_ENTLIM_LIMITSET) == 0))\n\t\t\tcontinue;\n\n\t\t/* If this is from qalter where presc_old is set, see if the */\n\t\t/* corresponding resource in presc_old had a prior value     */\n\n\t\tif (pattr_old)\n\t\t\tpresc_old = find_resc_entry(pattr_old, presc_new->rs_defin);\n\t\telse\n\t\t\tpresc_old = NULL;\n\n\t\trescn = presc_new->rs_defin->rs_name;\n\t\tif (rescn == NULL) {\n\t\t\tif (presc_new != presc_first)\n\t\t\t\tif (revert_entity_resources(pmaxqresc, pattr_old, presc_new, presc_old, presc_first, pjob, subjobs, rev_op) != 0)\n\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__, \"Error in 
revert_entity_resources\");\n\t\t\tlog_err(PBSE_INTERNAL, __func__, \"EMPTY RESOURCE\");\n\t\t\tET_LIM_DBG(\"exiting, ret %d [rescn is NULL]\", __func__, PBSE_INTERNAL)\n\t\t\treturn PBSE_INTERNAL;\n\t\t}\n\n\t\tET_LIM_DBG(\"setting usage for res %s\", __func__, rescn)\n\t\t/* 1. set overall limit o:PBS_ALL */\n\t\trc = set_single_entity_res(LIM_OVERALL, PBS_ALL_ENTITY,\n\t\t\t\t\t   pmaxqresc, presc_new, presc_old, pjob, subjobs, op);\n\t\tif (rc) {\n\t\t\tET_LIM_DBG(\"set_single_entity_res(o:\" PBS_ALL_ENTITY \";%s,%d,%s) failed with rc %d\", __func__,\n\t\t\t\t   rescn, subjobs, (op == INCR) ? \"INCR\" : \"DECR\", rc)\n\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"Error in  LIM_OVERALL for resource %s\", rescn);\n\t\t\tlog_err(rc, __func__, log_buffer);\n\t\t\tif (op == INCR) {\n\t\t\t\tif (presc_new != presc_first)\n\t\t\t\t\tif (revert_entity_resources(pmaxqresc, pattr_old, presc_new, presc_old, presc_first, pjob, subjobs, rev_op) != 0)\n\t\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__, \"Error in revert_entity_resources\");\n\t\t\t\treturn rc;\n\t\t\t} else {\n\t\t\t\tif (!rc_final)\n\t\t\t\t\trc_final = rc;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t}\n\n\t\t/* 2. sets user limit */\n\t\trc = set_single_entity_res(LIM_USER, euser,\n\t\t\t\t\t   pmaxqresc, presc_new, presc_old, pjob, subjobs, op);\n\t\tif (rc) {\n\t\t\tET_LIM_DBG(\"set_single_entity_res(u:%s;%s,%d,%s) failed with rc %d\", __func__,\n\t\t\t\t   euser, rescn, subjobs, (op == INCR) ? 
\"INCR\" : \"DECR\", rc)\n\t\t\t/* reverse change made above */\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"Error in  LIM_USER for euser %s for resource %s\", euser, rescn);\n\t\t\tlog_err(rc, __func__, log_buffer);\n\t\t\t(void) set_single_entity_res(LIM_OVERALL, PBS_ALL_ENTITY,\n\t\t\t\t\t\t     pmaxqresc, presc_new, presc_old, pjob, subjobs, rev_op);\n\t\t\tif (op == INCR) {\n\t\t\t\tif (presc_new != presc_first)\n\t\t\t\t\tif (revert_entity_resources(pmaxqresc, pattr_old, presc_new, presc_old, presc_first, pjob, subjobs, rev_op) != 0)\n\t\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__, \"Error in revert_entity_resources\");\n\t\t\t\treturn rc;\n\t\t\t} else {\n\t\t\t\tif (!rc_final)\n\t\t\t\t\trc_final = rc;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t}\n\n\t\t/* 3. set specific group limit */\n\t\trc = set_single_entity_res(LIM_GROUP, egroup,\n\t\t\t\t\t   pmaxqresc, presc_new, presc_old, pjob, subjobs, op);\n\t\tif (rc) {\n\t\t\tET_LIM_DBG(\"set_single_entity_res(g:%s;%s,%d,%s) failed with rc %d\", __func__,\n\t\t\t\t   egroup, rescn, subjobs, (op == INCR) ? \"INCR\" : \"DECR\", rc)\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"Error in  LIM_GROUP for egroup %s for resource %s\", egroup, rescn);\n\t\t\tlog_err(rc, __func__, log_buffer);\n\t\t\t/* reverse changes made above */\n\t\t\t(void) set_single_entity_res(LIM_OVERALL, PBS_ALL_ENTITY,\n\t\t\t\t\t\t     pmaxqresc, presc_new, presc_old, pjob, subjobs, rev_op);\n\t\t\t(void) set_single_entity_res(LIM_USER, euser,\n\t\t\t\t\t\t     pmaxqresc, presc_new, presc_old, pjob, subjobs, rev_op);\n\t\t\tif (op == INCR) {\n\t\t\t\tif (presc_new != presc_first)\n\t\t\t\t\tif ((revert_entity_resources(pmaxqresc, pattr_old, presc_new, presc_old, presc_first, pjob, subjobs, rev_op)) != 0)\n\t\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__, \"Error in revert_entity_resources\");\n\t\t\t\treturn rc;\n\t\t\t} else {\n\t\t\t\tif (!rc_final)\n\t\t\t\t\trc_final = rc;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t}\n\n\t\t/* 4. 
set specific project limit */\n\t\trc = set_single_entity_res(LIM_PROJECT, project,\n\t\t\t\t\t   pmaxqresc, presc_new, presc_old, pjob, subjobs, op);\n\t\tif (rc) {\n\t\t\tET_LIM_DBG(\"set_single_entity_res(p:%s;%s,%d,%s) failed with rc %d\", __func__,\n\t\t\t\t   project, rescn, subjobs, (op == INCR) ? \"INCR\" : \"DECR\", rc)\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"Error in LIM_PROJECT for project %s for resource %s\", project, rescn);\n\t\t\tlog_err(rc, __func__, log_buffer);\n\t\t\t/* reverse changes made above */\n\t\t\t(void) set_single_entity_res(LIM_OVERALL, PBS_ALL_ENTITY,\n\t\t\t\t\t\t     pmaxqresc, presc_new, presc_old, pjob, subjobs, rev_op);\n\t\t\t(void) set_single_entity_res(LIM_USER, euser,\n\t\t\t\t\t\t     pmaxqresc, presc_new, presc_old, pjob, subjobs, rev_op);\n\t\t\t(void) set_single_entity_res(LIM_GROUP, egroup,\n\t\t\t\t\t\t     pmaxqresc, presc_new, presc_old, pjob, subjobs, rev_op);\n\t\t\tif (op == INCR) {\n\t\t\t\tif (presc_new != presc_first)\n\t\t\t\t\tif (revert_entity_resources(pmaxqresc, pattr_old, presc_new, presc_old, presc_first, pjob, subjobs, rev_op) != 0)\n\t\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__, \"Error in revert_entity_resources\");\n\t\t\t\treturn rc;\n\t\t\t} else {\n\t\t\t\tif (!rc_final)\n\t\t\t\t\trc_final = rc;\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t}\n\t}\n\n\tET_LIM_DBG(\"exiting, ret %d\", __func__, rc_final)\n\treturn rc_final;\n}\n/**\n * @brief\n * \t\taccount_entity_limit_usages() - set entity usage\n *\t\tfor all four combination of entity limits res/ct and max/queued\n *\t\t1. Called against server attributes (pque will be null):\n *\t   \t\ta. When new job arrives (INCR)\n *\t   \t\tb. when job is purged (DECR)\n *\t\t2. Called against queue attributes (pque will point to queue struct):\n *\t   \t\ta. on any enqueue (INCR), or\n *\t   \t\tb. on any dequeue (DECR)\n *\n * @param[in]\tpjob\t-\tpointer to job structure\n * @param[in]\tpque\t-\tpque will point to queue structure, i.e. 
non-NULL when applying queue limits, or NULL when applying server limits\n * @param[in]\taltered_resc\t-\taltered resources.\n * @param[in]\top\t-\toperation to apply: INCR or DECR\n * @param[in]\top_flag\t-\toperation flag for selecting combinations of set_entity_*_sum_*()\n * \t\t\t\tuse ETLIM_ACC_* flag macros defined in pbs_entlim.h, e.g. ETLIM_ACC_ALL\n *\n * @return\tint\n * @retval\tzero\t: all went ok\n * @retval\tPBSE error number\t: if error, typically a system or internal error\n */\nint\naccount_entity_limit_usages(job *pjob, pbs_queue *pque, attribute *altered_resc,\n\t\t\t    enum batch_op op, int op_flag)\n{\n\tint rc, ret_error = PBSE_NONE;\n\n\t/* no NULL checks of parameters: all current callers pass valid pointers */\n\n\tET_LIM_DBG(\"entered, %s on %s %s, op_flag %x, alt_res_ptr %p\", __func__,\n\t\t   (op == INCR) ? \"INCR\" : \"DECR\", pque ? \"queue\" : \"server\", pque ? pque->qu_qs.qu_name : server_name, op_flag, altered_resc)\n\n\tif ((op_flag & ETLIM_ACC_CT_MAX) == ETLIM_ACC_CT_MAX)\n\t\tif ((rc = set_entity_ct_sum_max(pjob, pque, op)) != 0) {\n\t\t\tret_error = rc;\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"set_entity_ct_sum_max %s on %s %s failed with %d\",\n\t\t\t\t (op == INCR) ? \"INCR\" : \"DECR\", pque ? \"queue\" : \"server\", pque ? pque->qu_qs.qu_name : server_name, rc);\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_NOTICE, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t}\n\n\tif ((op_flag & ETLIM_ACC_CT_QUEUED) == ETLIM_ACC_CT_QUEUED)\n\t\tif ((rc = set_entity_ct_sum_queued(pjob, pque, op)) != 0) {\n\t\t\tret_error = rc;\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"set_entity_ct_sum_queued %s on %s %s failed with %d\",\n\t\t\t\t (op == INCR) ? \"INCR\" : \"DECR\", pque ? \"queue\" : \"server\", pque ? 
pque->qu_qs.qu_name : server_name, rc);\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_NOTICE, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t}\n\n\tif ((op_flag & ETLIM_ACC_RES_MAX) == ETLIM_ACC_RES_MAX)\n\t\tif ((rc = set_entity_resc_sum_max(pjob, pque, altered_resc, op)) != 0) {\n\t\t\tret_error = rc;\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"set_entity_resc_sum_max %s on %s %s failed with %d, (altered_resc %p)\",\n\t\t\t\t (op == INCR) ? \"INCR\" : \"DECR\", pque ? \"queue\" : \"server\", pque ? pque->qu_qs.qu_name : server_name, rc, altered_resc);\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_NOTICE, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t}\n\n\tif ((op_flag & ETLIM_ACC_RES_QUEUED) == ETLIM_ACC_RES_QUEUED)\n\t\tif ((rc = set_entity_resc_sum_queued(pjob, pque, altered_resc, op)) != 0) {\n\t\t\tret_error = rc;\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"set_entity_resc_sum_queued %s on %s %s failed with %d, (altered_resc %p)\",\n\t\t\t\t (op == INCR) ? \"INCR\" : \"DECR\", pque ? \"queue\" : \"server\", pque ? 
pque->qu_qs.qu_name : server_name, rc, altered_resc);\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_NOTICE, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t}\n\n\tET_LIM_DBG(\"exiting, ret_error %d\", __func__, ret_error)\n\n\treturn ret_error;\n}\n\n/**\n * @brief\n *\t\tAdds a record for a provisioning vnode.\n *\n * @par Functionality:\n *      This function loops through the 'sv_prov_track' table\n *\t\tand stores the input arguments in the first empty record it finds.\n *\n * @see\n *\t\tstart_vnode_provisioning\n *\n * @param[in]   pid\t\t-\tprovision process id\n * @param[in]   prov_vnode_info\t-\tpointer to prov_vnode_info structure\n *\n * @return\tint\n * @retval\t0\t: on successful addition of record\n * @retval\t-1\t: could not add provision record\n *\n * @par Side Effects:\n *      Unknown\n *\n * @par MT-safe: No\n *\n */\n\nstatic int\nadd_prov_record(prov_pid pid,\n\t\tstruct prov_vnode_info *prov_vnode_info)\n{\n\tint i;\n\n\tfor (i = 0; i < server.sv_provtracksize; i++) {\n\t\tif (server.sv_prov_track[i].pvtk_mtime == 0) {\n\t\t\t/* found an empty record */\n\t\t\tbreak;\n\t\t}\n\t}\n\tif (i == server.sv_provtracksize) {\n\t\tDBPRT((\"%s: Could not add records: current records = %d\\n\",\n\t\t       __func__, server.sv_cur_prov_records))\n\t\treturn -1;\n\t}\n\tserver.sv_prov_track[i].pvtk_mtime = time_now;\n\tif ((server.sv_prov_track[i].pvtk_vnode = strdup(prov_vnode_info->pvnfo_vnode)) == NULL) {\n\t\tserver.sv_prov_track[i].pvtk_mtime = 0; /* leave the record free on failure */\n\t\tDBPRT((\"%s: Unable to allocate Memory!\\n\", __func__));\n\t\treturn -1;\n\t}\n\tif ((server.sv_prov_track[i].pvtk_aoe_req = strdup(prov_vnode_info->pvnfo_aoe_req)) == NULL) {\n\t\tfree(server.sv_prov_track[i].pvtk_vnode);\n\t\tserver.sv_prov_track[i].pvtk_vnode = NULL;\n\t\tserver.sv_prov_track[i].pvtk_mtime = 0; /* leave the record free on failure */\n\t\tDBPRT((\"%s: Unable to allocate Memory!\\n\", __func__));\n\t\treturn -1;\n\t}\n\tserver.sv_prov_track[i].prov_vnode_info = prov_vnode_info;\n\tserver.sv_prov_track[i].pvtk_pid = pid;\n\tserver.sv_cur_prov_records++;\n\tserver.sv_provtrackmodifed = 1;\n\tDBPRT((\"%s: Added a record: current records = %d\\n\",\n\t       __func__, 
server.sv_cur_prov_records))\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tremove_prov_record\n *\n * @par Functionality:\n *      This function loops through 'sv_prov_track' table and resets the record\n *\t\tfor a vnode. It is called when vnode finishes provisioning or fails one.\n *\n * @see\n *\n * @param[in]   vnode\t-\tvnode name\n *\n * @return      void\n *\n * @par Side Effects:\n *      Unknown\n *\n * @par MT-safe: No\n *\n */\n\nstatic void\nremove_prov_record(char *vnode)\n{\n\tint i;\n\n\tfor (i = 0; i < server.sv_provtracksize; i++) {\n\t\tif (server.sv_prov_track[i].pvtk_mtime != 0 &&\n\t\t    (strcmp(vnode, server.sv_prov_track[i].pvtk_vnode) == 0)) {\n\t\t\tif (server.sv_prov_track[i].pvtk_aoe_req)\n\t\t\t\tfree(server.sv_prov_track[i].pvtk_aoe_req);\n\t\t\tif (server.sv_prov_track[i].pvtk_vnode)\n\t\t\t\tfree(server.sv_prov_track[i].pvtk_vnode);\n\t\t\tmemset(&server.sv_prov_track[i], 0,\n\t\t\t       sizeof(struct prov_tracking));\n\t\t\tserver.sv_prov_track[i].pvtk_mtime = 0;\n\t\t\tserver.sv_provtrackmodifed = 1;\n\t\t\tserver.sv_cur_prov_records--;\n\t\t\tbreak;\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\t\tSave the provisioning records to a file.\n *\n * @par Functionality:\n *      This function is invoked periodically by a timed work task. 
It saves\n *\t\tvnode name, aoe name and time stamp to the prov_tracking file.\n *\n * @see\n *\n * @return\tvoid\n *\n * @par Side Effects:\n *      Unknown\n *\n * @par MT-safe:\tNo\n *\n */\n\nvoid\nprov_track_save()\n{\n\tFILE *fd;\n\tint i;\n\n\tif (server.sv_provtrackmodifed == 0)\n\t\treturn; /* nothing to do this time */\n\n\tfd = fopen(path_prov_track, \"w\");\n\tif (fd == NULL) {\n\t\tDBPRT((\"%s: unable to open tracking file\\n\", __func__))\n\t\treturn;\n\t}\n\n\t/* we write only mtime, vnode name and AOE name to the file */\n\tfor (i = 0; i < server.sv_provtracksize; i++) {\n\t\t/* write in the file (size of each record may vary) */\n\t\tfprintf(fd, \"%ld|\", server.sv_prov_track[i].pvtk_mtime);\n\n\t\tif (server.sv_prov_track[i].pvtk_vnode)\n\t\t\tfprintf(fd, \"%s|\", server.sv_prov_track[i].pvtk_vnode);\n\t\telse\n\t\t\tfprintf(fd, \"0|\"); /* don't want to write (null) */\n\n\t\tif (server.sv_prov_track[i].pvtk_aoe_req)\n\t\t\tfprintf(fd, \"%s\", server.sv_prov_track[i].pvtk_aoe_req);\n\t\telse\n\t\t\tfprintf(fd, \"0\"); /* don't want to write (null) */\n\n\t\tif (i < server.sv_provtracksize - 1)\n\t\t\tfprintf(fd, \"|\");\n\t}\n\n\t(void) fclose(fd);\n\tserver.sv_provtrackmodifed = 0;\n}\n\n/**\n * @brief\n *\t\tLooks up a provisioning vnode record by a vnode name.\n *\n * @par Functionality:\n *      This function looks up the provisioning table entry matching the given\n *\t\tvnode name. 
It returns NULL if no match is found.\n *\n * @see\n *\t\t#prov_tracking in provision.h\n *\n * @param[in]\tvnode\t-\tvnode name\n *\n * @return\tpointer to prov_tracking\n * @retval\tpointer to prov_tracking\t: if prov_tracking record is found\n * @retval\tNULL\t: if prov_tracking record is not found\n *\n * @par Side Effects:\n *      Unknown\n *\n * @par MT-safe: No\n *\n */\n\nstruct prov_tracking *\nget_prov_record_by_vnode(char *vnode)\n{\n\tint i;\n\n\tfor (i = 0; i < server.sv_provtracksize; i++) {\n\t\tif ((server.sv_prov_track[i].pvtk_mtime != 0) &&\n\t\t    strcmp(vnode,\n\t\t\t   server.sv_prov_track[i].pvtk_vnode) == 0) {\n\t\t\treturn &(server.sv_prov_track[i]);\n\t\t}\n\t}\n\treturn NULL;\n}\n\n/**\n * @brief\n *\t\tLooks up a provisioning vnode record by a given pid.\n *\n * @par Functionality:\n *      This function takes 'pid' as an input and returns the address\n * \t\tof the provision record. If it doesn't find any entry then\n *\t\tit returns NULL.\n *\n * @see\n *\t\t#prov_tracking in provision.h\n *\n * @param[in]\tpid\t-\tprovision process id\n *\n * @return\tpointer to prov_tracking\n * @retval\tpointer to prov_tracking\t: if prov_tracking record is found\n * @retval\tNULL\t: if prov_tracking record is not found\n *\n * @par Side Effects:\n *      Unknown\n *\n * @par MT-safe: No\n *\n */\n\nstatic struct prov_tracking *\nget_prov_record_by_pid(prov_pid pid)\n{\n\tint i;\n\n\tfor (i = 0; i < server.sv_provtracksize; i++) {\n\t\tif (pid == server.sv_prov_track[i].pvtk_pid) {\n\t\t\treturn &(server.sv_prov_track[i]);\n\t\t}\n\t}\n\treturn NULL;\n}\n\n/**\n * @brief\n *\t\tDeletes a single prov_vnode_info record.\n *\n * @par Functionality:\n *      This function frees an entry of prov_vnode_info type\n *\n * @see\n *\n * @param[in]\tpvnfo\t-\tprovision vnode info structure.\n *\n * @return      void\n *\n * @par Side Effects:\n *      Unknown\n *\n * @par MT-safe: No\n *\n */\n\nstatic void\nfree_pvnfo(struct prov_vnode_info 
*pvnfo)\n{\n\tif (pvnfo == NULL)\n\t\treturn;\n\tif (pvnfo->pvnfo_vnode)\n\t\tfree(pvnfo->pvnfo_vnode);\n\tif (pvnfo->pvnfo_aoe_req)\n\t\tfree(pvnfo->pvnfo_aoe_req);\n\tfree(pvnfo);\n}\n\n/**\n * @brief\n *\t\tChecks if aoe is available on a vnode.\n *\n * @par Functionality:\n *      This function checks if aoe is available in node's\n *\t\tresources_available.aoe\n *\n * @see\n *\n * @param[in]\tpnode\t-\tpointer to pbsnode struct\n * @param[in]   aoe_req\t-\taoe requested\n *\n * @return\tint\n * @retval\t0\t: aoe is available on the vnode\n * @retval\t-1\t: aoe is unavailable on the vnode\n *\n * @par Side Effects:\n *      Unknown\n *\n * @par MT-safe: No\n *\n */\n\nint\ncheck_req_aoe_available(struct pbsnode *pnode, char *aoe_req)\n{\n\tresource_def *prd;\n\tresource *prc;\n\tstruct array_strings *pas;\n\tint i;\n\n\tif (!pnode || !aoe_req)\n\t\treturn -1;\n\n\tprd = &svr_resc_def[RESC_AOE];\n\tif (prd == NULL)\n\t\treturn -1;\n\tprc = find_resc_entry(get_nattr(pnode, ND_ATR_ResourceAvail), prd);\n\tif (prc) {\n\t\tpas = prc->rs_value.at_val.at_arst;\n\n\t\tif (pas != NULL) {\n\t\t\tfor (i = 0; i < pas->as_usedptr; i++) {\n\t\t\t\tif (strcmp(aoe_req, pas->as_string[i]) == 0)\n\t\t\t\t\treturn 0;\n\t\t\t}\n\t\t}\n\t}\n\treturn -1;\n}\n\n/**\n * @brief\n *\t\tThis function disables provisioning functionality.\n *\n * @par Functionality:\n *\t\tThis function disables the provision_enable (internal) attribute on the server.\n *\n * @see\n *\t\tmgr_hook_delete\n *\t\tmgr_hook_set\n *\t\tmgr_hook_unset\n *\n * @return\tvoid\n *\n * @par Side Effects:\n *      Unknown\n *\n * @par MT-safe: No\n *\n */\n\nvoid\ndisable_svr_prov()\n{\n\tif (is_sattr_set(SVR_ATR_ProvisionEnable))\n\t\tset_sattr_l_slim(SVR_ATR_ProvisionEnable, 0, SET);\n}\n\n/**\n * @brief\n *\t\tParses prov_vnode attribute of job\n *\n * @par Functionality:\n *      This function parses 'prov_vnode' attribute of job and returns\n *\t\tnumber of nodes required to run the job or -1 for 
failure.\n *\n * @see\n *\t\tis_runnable\n *\t\tfail_vnode_job\n *\n * @param[in]\tprov_vnode\t-\tprov_vnode attribute string of the job\n * @param[out]  prov_vnodes\t-\tpointer to the list of parsed vnode names, allocated here\n *\n * @return      int\n * @retval     >=1\t: number of nodes to be provisioned\n * @retval      -1\t: parsing failure.\n *\n * @par Side Effects:\n *      Unknown\n *\n * @par MT-safe: No\n *\n */\n\nint\nparse_prov_vnode(char *prov_vnode, exec_vnode_listtype *prov_vnodes)\n{\n\t/* Variables used in parsing the \"prov_vnode\" string */\n\tchar *psubspec;\n\tchar *slast;\n\tchar *sbuf = NULL;\n\tint hpn;\n\tint i = 0, k;\n\tint num_of_prov_vnodes = 1;\n\tchar *p = NULL;\n\n\tif (prov_vnode == NULL) {\n\t\tDBPRT((\"%s: invalid params\\n\", __func__))\n\t\treturn (-1);\n\t}\n\n\t/* Find number of nodes required to run the job */\n\tfor (p = prov_vnode; *p; p++) {\n\t\tif (*p == '+')\n\t\t\tnum_of_prov_vnodes++;\n\t}\n\t/* Allocate temporary memory to hold prov_vnode attribute */\n\tsbuf = strdup(prov_vnode);\n\tif (sbuf == NULL)\n\t\treturn -1;\n\n\t/* Allocate memory to hold vnodenames */\n\t*prov_vnodes = calloc(num_of_prov_vnodes, PBS_MAXHOSTNAME + 1);\n\tif (*prov_vnodes == NULL) {\n\t\tfree(sbuf);\n\t\treturn -1;\n\t}\n\n\tpsubspec = parse_plus_spec_r(sbuf, &slast, &hpn);\n\twhile (psubspec) {\n\t\t/* Read vnodename */\n\t\tk = 0;\n\t\tfor (p = psubspec; *p && *p != ':'; p++, k++) {\n\t\t\t(*prov_vnodes)[i][k] = *p;\n\t\t}\n\t\t(*prov_vnodes)[i][k] = '\\0';\n\t\tDBPRT((\"%s: %s\\n\", __func__, (*prov_vnodes)[i]))\n\t\t++i;\n\t\tpsubspec = parse_plus_spec_r(slast, &slast, &hpn);\n\t}\n\tfree(sbuf);\n\n\treturn num_of_prov_vnodes;\n}\n\n/**\n * @brief\n *\t\tChecks if node needs provisioning.\n *\n * @par Functionality:\n *\t\tChecks if node needs provisioning by matching the requested aoe and\n *\t\tcurrent aoe on the node. It also checks if requested aoe is available\n *\t\ton the node. The node need not be provisioned if the requested aoe is\n *\t\talready current on the node. 
If the requested aoe is not available on the node,\n *\t\tor the node's available list is empty, the job cannot run.\n *\n * @see\n *\t\tfind_prov_vnode_list\n *\n * @param[in]\tpnode\t-\tvnode\n * @param[in]\taoe_name\t-\taoe requested\n *\n * @return\tint\n * @retval\t-1\t: node cannot be provisioned\n * @retval\t0\t: node need not be provisioned\n * @retval\t1\t: node can be provisioned\n *\n * @par Side Effects:\n *      Unknown\n *\n * @par MT-safe:  Yes\n *\n */\n\nstatic int\nnode_need_prov(struct pbsnode *pnode, char *aoe_name)\n{\n\tresource *presc;\n\tresource_def *prdef;\n\tchar *aoe; /* hold current_aoe of pnode */\n\tint i;\n\tstruct array_strings *pas = NULL;\n\n\tif (pnode == NULL || aoe_name == NULL)\n\t\treturn -1;\n\n\tprdef = &svr_resc_def[RESC_AOE];\n\tpresc = find_resc_entry(get_nattr(pnode, ND_ATR_ResourceAvail), prdef);\n\n\t/* if resources_available.aoe not set */\n\tif (presc == NULL)\n\t\treturn -1;\n\n\tif (presc->rs_value.at_flags & ATR_VFLAG_MODIFY) {\n\t\t/* if aoe is not in resources_available.aoe */\n\n\t\tpas = presc->rs_value.at_val.at_arst;\n\t\tfor (i = 0; i < pas->as_usedptr; i++) {\n\t\t\tif (strcmp(pas->as_string[i], aoe_name) == 0) { /* aoe is available */\n\t\t\t\t/* if aoe is already instantiated */\n\t\t\t\taoe = get_nattr_str(pnode, ND_ATR_current_aoe);\n\t\t\t\tif (aoe != NULL) {\n\t\t\t\t\tif (strcmp(aoe_name, aoe) == 0)\n\t\t\t\t\t\treturn 0;\n\t\t\t\t}\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t}\n\t}\n\n\treturn -1;\n}\n\n/**\n * @brief\n *\t\tParses exec_vnode string sent by scheduler and sets prov_vnode of job\n *\n * @par Functionality:\n *      This function takes 'exec_vnode' attribute sent by scheduler\n *\t\tand on successful parsing returns the number of nodes with aoe in\n *\t\ttheir chunk. 
Vnodes appearing multiple times in exec_vnode are reported only once.\n *\t\taoe_name contains the name of aoe to be provisioned with.\n *\t\tprov_vnode attribute of job is also set.\n *\n * @see\n *\t\tcheck_and_enqueue_provisioning\n *\n * @param[in]\tpjob\t\t-\tpointer to job\n * @param[out]  prov_vnodes\t-\tlist of nodes\n * @param[out]  aoe_name\t-\taoe requested\n *\n * @return\tint\n * @retval\t>=0\t: number of nodes to be provisioned\n * @retval\t-1\t: parsing failure.\n *\n * @par Side Effects:\n *      Unknown\n *\n * @par MT-safe: No\n *\n */\n\nint\nfind_prov_vnode_list(job *pjob, exec_vnode_listtype *prov_vnodes, char **aoe_name)\n{\n\t/* Variables used in parsing the \"exec_vnode\" string */\n\tchar *psubspec;\n\tchar *slast;\n\tchar *sbuf = NULL;\n\tint hpn;\n\tint i = 0, k, j;\n\tint num_of_exec_vnodes = 1;\n\tchar *p = NULL;\n\tchar *aoe = NULL;\n\tchar *vname;\n\tint nelem;\n\tstruct key_value_pair *pkvp;\n\tint no_add = 0;\n\tstruct pbsnode *pnode;\n\tint ret; /* return code of node_need_prov() */\n\tchar *pbuf = NULL;\n\tchar *execvnod = NULL;\n\n\tif (is_jattr_set(pjob, JOB_ATR_exec_vnode))\n\t\texecvnod = get_jattr_str(pjob, JOB_ATR_exec_vnode);\n\n\tif (execvnod == NULL) {\n\t\tDBPRT((\"%s: invalid params\\n\", __func__))\n\t\treturn (-1);\n\t}\n\n\t/* Find number of nodes required to run the job */\n\tfor (p = execvnod; *p; p++) {\n\t\tif (*p == '+')\n\t\t\tnum_of_exec_vnodes++;\n\t}\n\t/* Allocate temporary memory to hold execvnod attribute */\n\tsbuf = strdup(execvnod);\n\tif (sbuf == NULL)\n\t\return -1;\n\n\t/* Allocate temp memory to hold prov_vnode attribute */\n\tpbuf = calloc(1, strlen(execvnod) + 1);\n\tif (pbuf == NULL) {\n\t\tfree(sbuf);\n\t\treturn -1;\n\t}\n\n\t/* Allocate memory to hold vnodenames and their aoe's */\n\t*prov_vnodes = calloc(num_of_exec_vnodes, PBS_MAXHOSTNAME + 1);\n\tif (*prov_vnodes == NULL) {\n\t\tfree(sbuf);\n\t\tfree(pbuf);\n\t\treturn -1;\n\t}\n\n\tpsubspec = parse_plus_spec_r(sbuf, &slast, &hpn);\n\twhile (psubspec) 
{\n\t\tif (parse_node_resc(psubspec, &vname, &nelem, &pkvp) == 0) {\n\t\t\tfor (k = 0; k < nelem; k++) {\n\t\t\t\tno_add = 0;\n\t\t\t\t/* Read vnodename if aoe requested */\n\t\t\t\tif (strcasecmp(\"aoe\", (pkvp + k)->kv_keyw) == 0) {\n\n\t\t\t\t\t/* check if same vnode is requested again */\n\t\t\t\t\tfor (j = 0; j <= i; j++) {\n\t\t\t\t\t\tif (strcmp(vname, (*prov_vnodes)[j]) == 0) {\n\t\t\t\t\t\t\tno_add = 1;\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tif (no_add)\n\t\t\t\t\t\tbreak;\n\n\t\t\t\t\tDBPRT((\"%s: Look up node %s\\n\", __func__, vname))\n\t\t\t\t\tpnode = find_nodebyname(vname);\n\t\t\t\t\t/* check if node really needs provisioning, if not, continue.\n\t\t\t\t\t * This is to stop qrun -H from provisioning a node (including\n\t\t\t\t\t * head node) that does not have aoe_req in its available list.\n\t\t\t\t\t */\n\t\t\t\t\tret = node_need_prov(pnode, (pkvp + k)->kv_val);\n\t\t\t\t\tif (ret == -1) {\n\t\t\t\t\t\tfree(sbuf);\n\t\t\t\t\t\tfree(pbuf);\n\t\t\t\t\t\treturn -1;\n\t\t\t\t\t}\n\t\t\t\t\tif (ret == 0)\n\t\t\t\t\t\tbreak;\n\n\t\t\t\t\tstrcpy((*prov_vnodes)[i], vname);\n\t\t\t\t\tDBPRT((\"%s: %s\\n\", __func__, (*prov_vnodes)[i]))\n\t\t\t\t\t++i;\n\t\t\t\t\tif (aoe_name != NULL) {\n\t\t\t\t\t\tif (*aoe_name) {\n\t\t\t\t\t\t\tif (strcmp(*aoe_name, ((pkvp + k)->kv_val)) != 0) {\n\t\t\t\t\t\t\t\t/* Aoe name cannot be different across chunks, it's an error */\n\t\t\t\t\t\t\t\tfree(sbuf);\n\t\t\t\t\t\t\t\tfree(pbuf);\n\t\t\t\t\t\t\t\tfree(*aoe_name);\n\t\t\t\t\t\t\t\treturn -1;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\taoe = strdup((pkvp + k)->kv_val);\n\t\t\t\t\t\t\tif (aoe == NULL) {\n\t\t\t\t\t\t\t\tfree(sbuf);\n\t\t\t\t\t\t\t\tfree(pbuf);\n\t\t\t\t\t\t\t\treturn -1;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t(*aoe_name) = aoe;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tDBPRT((\"%s: %s\\n\", __func__, (*aoe_name)))\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tpsubspec = parse_plus_spec_r(slast, &slast, &hpn);\n\t}\n\n\t/* 
prepare prov_vnode and assign to job. We do this because prov_vnode\n\t * is to be parsed again later to find vnodes that were provisioned.\n\t * exec_vnode cannot be parsed again since vnodes would have their\n\t * current_aoe set right.\n\t */\n\tfor (j = 0; j < i; j++) {\n\t\tif (j == 0) {\n\t\t\tstrcpy(pbuf, (*prov_vnodes)[j]);\n\t\t} else {\n\t\t\tstrcat(pbuf, \"+\");\n\t\t\tstrcat(pbuf, (*prov_vnodes)[j]);\n\t\t}\n\t\tstrcat(pbuf, \":aoe=\");\n\t\tstrcat(pbuf, (*aoe_name));\n\t}\n\tset_jattr_str_slim(pjob, JOB_ATR_prov_vnode, pbuf, NULL);\n\n\tDBPRT((\"%s: prov_vnode: %s\\n\", __func__, pbuf))\n\n\tfree(pbuf);\n\tfree(sbuf);\n\treturn i;\n}\n\n/**\n * @brief\n *\t\tFinds a vnode's entry in prov_vnode_info.\n *\n * @par Functionality:\n *      This function loops through list of provision vnodes\n *\t\tand returns prov_vnode_info. Returns NULL, if not able to find.\n *\n * @see\n *\t\tfree_prov_vnode\n *\t\t#prov_vnode_info in provision.h\n *\n * @param[in]\tpnode\t-\tpointer to pbsnode\n *\n * @return\tpointer to prov_vnode_info\n * @retval\tpointer to prov_vnode_info\t: if entry in prov_vnode_info is found\n * @retval\tNULL\t: if entry is not found\n *\n * @par Side Effects:\n *      Unknown\n *\n * @par MT-safe: No\n *\n */\n\nstatic struct prov_vnode_info *\nfind_prov_vnode(struct pbsnode *pnode)\n{\n\tstruct prov_vnode_info *prov_vnode_info = NULL;\n\n\tprov_vnode_info = GET_NEXT(prov_allvnodes);\n\twhile (prov_vnode_info) {\n\t\tif (strcmp(prov_vnode_info->pvnfo_vnode, pnode->nd_name) == 0) {\n\t\t\treturn prov_vnode_info;\n\t\t}\n\t\tprov_vnode_info = GET_NEXT(prov_vnode_info->al_link);\n\t}\n\treturn NULL;\n}\n\n/**\n * @brief\n *\t\tRemoves vnode's entry from prov_vnode_info.\n *\n * @par Functionality:\n *      This function removes the vnode from prov_vnode_info and\n *\t\tunsets vnode's INUSE_WAIT_PROV state.\n *\n * @see\n *\t\tfree_nodes\n *\n * @param[in]\tpnode\t-\tpointer to pbsnode\n *\n * @return\tvoid\n *\n * @par Side Effects:\n *      
Unknown\n *\n * @par MT-safe:\tNo\n *\n */\n\nvoid\nfree_prov_vnode(struct pbsnode *pnode)\n{\n\tstruct prov_vnode_info *prov_vnode_info = NULL;\n\n\tif (pnode->nd_state & INUSE_WAIT_PROV) {\n\t\tif ((prov_vnode_info = find_prov_vnode(pnode))) {\n\t\t\tdelete_link(&prov_vnode_info->al_link);\n\t\t\tfree_pvnfo(prov_vnode_info);\n\t\t}\n\n\t\tset_vnode_state(pnode, ~INUSE_WAIT_PROV, Nd_State_And);\n\t}\n}\n\n/**\n * @brief\n *\t\tDetermines if job can be run on account of a vnode finishing\n *\t\tprovisioning.\n *\n * @par Functionality:\n *      This function checks, for a job, whether all of its vnodes have finished\n *\t\tprovisioning. If at least one vnode is offline or in wait-provisioning\n *\t\tstate or has finished provisioning but has another aoe set then job\n *\t\tcannot be run.\n *\t\tIt also checks the case where a multi-vnode job has one\n *\t\tvnode failing provisioning while others are still provisioning.\n *\n * @see\n *\t\tcheck_and_run_jobs\n *\n * @param[in]   ptr\t-\tpointer to job struct\n * @param[in]   pvnfo\t-\tpointer to prov_vnode_info struct\n *\n * @return\tint\n * @retval\t0\t: job is eligible to run\n * @retval\t-1\t: job is not eligible to run since other vnodes\n *\t\t\t\t\tstill provisioning or error\n * @retval\t-2\t:\tnodes are done provisioning, but some are down\n * @retval\t-3\t:\tnodes are all done prov, but curr_aoe does not\n *\t\t\t\t\tmatch req_aoe\n * @retval\t-4\t:\tleft over provisioning just returned\n *\n * @par Side Effects:\n *      Unknown\n *\n * @par MT-safe: No\n *\n */\n\nstatic int\nis_runnable(job *ptr, struct prov_vnode_info *pvnfo)\n{\n\tstruct pbsnode *np = NULL;\n\tint i;\n\tint eflag = 0;\n\texec_vnode_listtype prov_vnode_list = NULL;\n\tint num_of_prov_vnodes = 1;\n\tjob *pjob;\n\tchar *aoe_req = NULL;\n\tchar *current_aoe;\n\n\tif (!ptr) {\n\t\tDBPRT((\"%s: ptr is NULL\\n\", __func__))\n\t\treturn -1;\n\t}\n\n\tpjob = (job *) ptr;\n\tDBPRT((\"%s: Entered jobid=%s\\n\", __func__, 
pjob->ji_qs.ji_jobid))\n\n\taoe_req = pvnfo->pvnfo_aoe_req;\n\n\tnum_of_prov_vnodes = parse_prov_vnode(get_jattr_str(pjob, JOB_ATR_prov_vnode), &prov_vnode_list);\n\n\tif (num_of_prov_vnodes == -1) {\n\t\tif (prov_vnode_list)\n\t\t\tfree(prov_vnode_list);\n\t\treturn -1;\n\t}\n\n\t/* it could happen that some vnode started provisioning but another */\n\t/* failed to provision. Since the first vnode will return later, this is */\n\t/* a catch to stop processing further since job would have already */\n\t/* been held or requeued */\n\tif (!check_job_substate(pjob, JOB_SUBSTATE_PROVISION)) {\n\t\tDBPRT((\"%s: stray provisioning for job %s\\n\", __func__,\n\t\t       pjob->ji_qs.ji_jobid))\n\t\teflag = -4;\n\t\tgoto label1;\n\t}\n\n\tfor (i = 0; i < num_of_prov_vnodes; i++) {\n\n\t\tnp = find_nodebyname(prov_vnode_list[i]);\n\t\tif (np == NULL) {\n\t\t\tDBPRT((\"%s: node %s is null\\n\",\n\t\t\t       __func__, prov_vnode_list[i]))\n\t\t\teflag = -2;\n\t\t\t/* let eflag get overwritten in next iterations\n\t\t\t by other conditions */\n\t\t\tbreak;\n\t\t}\n\n\t\t/* check if vnode offline, since it could have failed prov */\n\t\tif (np->nd_state & (INUSE_OFFLINE | INUSE_OFFLINE_BY_MOM)) {\n\n\t\t\tDBPRT((\"%s: vnode %s is offline (failed prov)\\n\",\n\t\t\t       __func__, np->nd_name))\n\t\t\teflag = -2;\n\t\t\tbreak;\n\n\t\t} else if ((np->nd_state & INUSE_PROV) ||\n\t\t\t   (np->nd_state & INUSE_WAIT_PROV)) {\n\t\t\t/* Check if any vnode is still provisioning */\n\t\t\teflag = -1;\n\t\t\tDBPRT((\"%s: Some nodes still provisioning\\n\", __func__))\n\t\t\tbreak;\n\t\t} else {\n\t\t\t/* check if node has the correct aoe or not */\n\t\t\tcurrent_aoe = NULL;\n\t\t\tif (is_nattr_set(np, ND_ATR_current_aoe))\n\t\t\t\tcurrent_aoe = get_nattr_str(np, ND_ATR_current_aoe);\n\n\t\t\tif ((current_aoe == NULL) ||\n\t\t\t    strcmp(current_aoe, aoe_req) != 0) {\n\t\t\t\teflag = -3;\n\t\t\t\tDBPRT((\"%s: req_aoe mismatch on %s\\n\",\n\t\t\t\t       __func__, 
prov_vnode_list[i]))\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\nlabel1:\n\n\tif (num_of_prov_vnodes > 0)\n\t\tfree(prov_vnode_list);\n\n\treturn eflag;\n}\n\n/**\n * @brief\n *\t\tRequeue/Hold job on provisioning failure.\n *\n * @par Functionality:\n * \t\tThis function requeues/holds the job for which provisioning failed.\n *\t \t- writes accounting log message for failure\n *\t \t- frees prov_vnode of job\n *\t \t- removes all pending provisioning requests\n *\t \t- releases resources held by job\n *\t \t- applies server hold or requeues the job\n *\n * @see\n *\t\tfail_vnode\n *\t\tcheck_and_run_jobs\n *\t\tdo_provisioning\n *\n * @param[in]   prov_vnode_info\t-\tpointer to prov_vnode_info struct\n * @param[in]   hold_or_que\t-\tindicates if job is to be held or queued\n *\t\t\t\t\t\t\t\thold_or_que = 0 if job is to be held\n *\t\t\t\t\t\t\t\thold_or_que > 0 if job is to be queued\n *\n * @return\tvoid\n *\n * @par Side Effects:\n *      Unknown\n *\n * @par MT-safe: No\n *\n */\n\nvoid\nfail_vnode_job(struct prov_vnode_info *prov_vnode_info, int hold_or_que)\n{\n\tjob *pjob;\n\tint cnt; /* no. 
of prov vnodes */\n\texec_vnode_listtype prov_vnode_list = NULL;\n\tint i;\n\tstruct pbsnode *np;\n\tstruct prov_tracking *ptracking = NULL;\n\n\tif (!prov_vnode_info) {\n\t\tDBPRT((\"%s: prov_vnode_info is NULL\\n\", __func__))\n\t\treturn;\n\t}\n\n\t/*\n\t * fail_vnode_job could be called by pending work tasks\n\t * of a job, which might have already been requeued/held.\n\t * However, in that case, prov_vnode_info->pvnfo_jobid\n\t * will be empty, return without performing any action\n\t */\n\tif (prov_vnode_info->pvnfo_jobid[0] == '\\0')\n\t\treturn;\n\n\tpjob = (job *) find_job(prov_vnode_info->pvnfo_jobid);\n\tif (!pjob)\n\t\treturn;\n\n\t/* add accounting log for provision failure for job */\n\tset_job_ProvAcctRcd(pjob, time_now, PROVISIONING_FAILURE);\n\n\t/* log job prov failed message */\n\tif (hold_or_que == 0) {\n\t\tsprintf(log_buffer,\n\t\t\t\"Provisioning for job %s failed, job held\",\n\t\t\tpjob->ji_qs.ji_jobid);\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\n\t} else if (hold_or_que == 1) {\n\t\tsprintf(log_buffer,\n\t\t\t\"Provisioning for job %s failed, job queued\",\n\t\t\tpjob->ji_qs.ji_jobid);\n\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t}\n\t/* remove from table other vnodes that might provision.*/\n\t/* vnodes that start provisioning are not within control. */\n\t/* These have not yet entered tracking table. 
*/\n\tdel_prov_vnode_entry(pjob);\n\n\t/* release resource, put system hold and move to held state */\n\tif (hold_or_que == 0) {\n\t\trel_resc(pjob);\n\t\tclear_exec_on_run_fail(pjob);\n\t\tset_jattr_b_slim(pjob, JOB_ATR_hold, HOLD_s, INCR);\n\t\tset_jattr_str_slim(pjob, JOB_ATR_Comment, \"job held, provisioning failed to start\", NULL);\n\t\tsvr_setjobstate(pjob, JOB_STATE_LTR_HELD, JOB_SUBSTATE_HELD);\n\t} else if (hold_or_que == 1) {\n\t\t/* don't purge job, instead requeue */\n\t\t(void) force_reque(pjob);\n\t}\n\n\t/*\n\t * The first time fail_vnode_job is called for a job, the\n\t * job is requeued/held, prov_vnode freed, and accounting\n\t * record written. However, pending work tasks for the same\n\t * job could trigger fail_vnode_job again later.\n\t *\n\t * Thus, on the first call to fail_vnode_job, loop through\n\t * all prov_vnode_info's for this job and remove the job_id\n\t * from them, so future calls to fail_vnode_job from pending\n\t * work tasks would return without performing any action\n\t */\n\tcnt = parse_prov_vnode(get_jattr_str(pjob, JOB_ATR_prov_vnode), &prov_vnode_list);\n\tfor (i = 0; i < cnt; i++) {\n\t\tif ((np = find_nodebyname(prov_vnode_list[i]))) {\n\t\t\tif ((ptracking = get_prov_record_by_vnode(np->nd_name))) {\n\t\t\t\tprov_vnode_info = ptracking->prov_vnode_info;\n\t\t\t\tif (prov_vnode_info)\n\t\t\t\t\tprov_vnode_info->pvnfo_jobid[0] = '\\0';\n\t\t\t}\n\t\t}\n\t}\n\tif (prov_vnode_list)\n\t\tfree(prov_vnode_list);\n\n\t/* remove the prov_vnode attribute from the job here */\n\tif (is_jattr_set(pjob, JOB_ATR_prov_vnode))\n\t\tfree_jattr(pjob, JOB_ATR_prov_vnode);\n}\n\n/**\n * @brief\n *\t\tMarks the vnode offline.\n *\n * @par Functionality:\n *      This function marks a given vnode as offline and logs a message\n *      explaining why the vnode was marked offline.\n *\n * @see\n *\t\tfail_vnode\n *\t\toffline_all_provisioning_vnodes\n *\n * @param[in]   pnode\t\t-\tpointer to pbsnode\n * @param[in]   comment\t\t-\tcomment to be set on 
vnode and logged\n *\n * @return      void\n *\n * @par Side Effects:\n *      Unknown\n *\n * @par MT-safe: No\n *\n */\n\nstatic void\nmark_prov_vnode_offline(pbsnode *pnode, char *comment)\n{\n\tif (!pnode) {\n\t\tDBPRT((\"%s: pnode is NULL\\n\", __func__))\n\t\treturn;\n\t}\n\n\t/* unset the current aoe setting, as it may no longer be correct */\n\tfree_nattr(pnode, ND_ATR_current_aoe);\n\n\tDBPRT((\"%s: node=%s set to offline, resetting current_aoe\\n\",\n\t       __func__, pnode->nd_name))\n\n\t/* set node to offline state */\n\tset_vnode_state(pnode, INUSE_OFFLINE, Nd_State_Or);\n\tset_vnode_state(pnode, ~INUSE_PROV, Nd_State_And);\n\n\t/* write the node state and current_aoe */\n\tnode_save_db(pnode);\n\n\tif (comment != NULL) {\n\t\t/* log msg about marking node as offline */\n\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_NODE, LOG_NOTICE, msg_daemonname, \"Vnode %s: %s\", pnode->nd_name, comment);\n\t\tset_nattr_str_slim(pnode, ND_ATR_Comment, comment, NULL);\n\t}\n}\n\n/**\n * @brief\n *\t\tOn provisioning failure, marks vnode offline and fails the job.\n *\n * @par Functionality:\n *      This function marks a vnode offline and requeues/holds the jobs on it.\n *\n * @see\n *\t\tprov_request_deferred\n *\t\tprov_request_timed\n *\n * @param[in]   prov_vnode_info\t-\tpointer to struct prov_vnode_info\n * @param[in]   hold_or_que\t-\tindicates if job is to be held or queued\n *\t\t\t\t\t\t\t\thold_or_que = 0 if job is to be held\n *\t\t\t\t\t\t\t\thold_or_que > 0 if job is to be queued\n *\n * @return\tvoid\n *\n * @par Side Effects:\n *      Unknown\n *\n * @par MT-safe:\tNo\n *\n */\n\nstatic void\nfail_vnode(struct prov_vnode_info *prov_vnode_info, int hold_or_que)\n{\n\tstruct pbsnode *pnode;\n\tchar comment[MAXNLINE];\n\n\tif (!prov_vnode_info) {\n\t\tDBPRT((\"%s: prov_vnode_info is NULL\\n\", __func__))\n\t\treturn;\n\t}\n\n\tpnode = find_nodebyname(prov_vnode_info->pvnfo_vnode);\n\n\tDBPRT((\"%s: node=%s entered\\n\", __func__, 
prov_vnode_info->pvnfo_vnode))\n\n\tif (pnode == NULL)\n\t\treturn;\n\n\tstrcpy(comment, \"Vnode offlined since it failed provisioning\");\n\tmark_prov_vnode_offline(pnode, comment);\n\n\tfail_vnode_job(prov_vnode_info, hold_or_que);\n}\n\n/**\n * @brief\n *\t\tMarks vnodes in prov_tracking table offline during startup.\n *\n * @par Functionality:\n *      This function marks all vnodes present in prov_tracking table as offline.\n *\t\tCalled from pbsd_init, when server recovers from a crash or server is\n *\t\tstarted. Since the status of vnodes undergoing provisioning is not known,\n *\t\tit marks them offline.\n *\n * @see\n *\t\t#prov_tracking in provision.h\n *\n * @return\tvoid\n *\n * @par Side Effects:\n *      Unknown\n *\n * @par MT-safe:\tNo\n *\n */\n\nvoid\noffline_all_provisioning_vnodes()\n{\n\tint i;\n\tint count = 0;\n\tstruct pbsnode *pnode;\n\tchar comment[MAXNLINE];\n\tchar *vnode;\n\n\tstrcpy(comment,\n\t       \"Vnode offlined since server went down during provisioning\");\n\n\tfor (i = 0; i < server.sv_provtracksize; i++) {\n\t\tif (server.sv_prov_track[i].pvtk_mtime != 0) {\n\t\t\t/* found a record in use */\n\t\t\tvnode = server.sv_prov_track[i].pvtk_vnode;\n\t\t\tpnode = find_nodebyname(vnode);\n\n\t\t\tif (pnode) {\n\t\t\t\tmark_prov_vnode_offline(pnode, comment);\n\t\t\t\t/*\n\t\t\t\t * reservations will take care of\n\t\t\t\t * themselves in pbsd_init\n\t\t\t\t */\n\t\t\t\tcount++;\n\t\t\t}\n\t\t}\n\t\tif (server.sv_prov_track[i].pvtk_vnode)\n\t\t\tfree(server.sv_prov_track[i].pvtk_vnode);\n\t\tif (server.sv_prov_track[i].pvtk_aoe_req)\n\t\t\tfree(server.sv_prov_track[i].pvtk_aoe_req);\n\t\tmemset(&(server.sv_prov_track[i]), 0,\n\t\t       sizeof(struct prov_tracking));\n\t\tserver.sv_prov_track[i].pvtk_mtime = 0; /* mark slot empty */\n\t}\n\n\tserver.sv_cur_prov_records = 0;\n\tserver.sv_provtrackmodifed = 1;\n\n\tDBPRT((\"%s: Marked %d nodes offline (from prov recovery)\\n\",\n\t       __func__, count))\n\n\t/* save the provisioning 
table to disk */\n\tprov_track_save();\n}\n\n/**\n * @brief\n *\t\tRuns a job if it can when a vnode finishes provisioning.\n *\n * @par Functionality:\n *      This function checks if job is runnable by calling is_runnable().\n *\t\tA vnode finished provisioning, so check if job can be run. If it can be\n *\t\trun, writes accounting log and server log and sends job to mom.\n *\t\tIf job cannot run because of provisioning failure, it calls fail_vnode_job\n *\t\tto requeue job and mark vnode offline.\n *\n * @see\n *\t\tis_runnable\n *\t\tis_vnode_prov_done\n *\t\tfail_vnode_job\n *\n * @param[in]\tprov_vnode_info\t-\tpointer to struct prov_vnode_info\n *\n * @return\tvoid\n *\n * @par Side Effects:\n *      Unknown\n *\n * @par MT-safe:\tNo\n *\n */\n\nstatic void\ncheck_and_run_jobs(struct prov_vnode_info *prov_vnode_info)\n{\n\tjob *pjob;\n\tint rc;\n\tstruct work_task task;\n\n\tif (!prov_vnode_info) {\n\t\tDBPRT((\"%s: prov_vnode_info is NULL\\n\", __func__))\n\t\treturn;\n\t}\n\n\t/*\n\t * job info is stale - this is from a pending work task for a job\n\t * which has already failed provisioning and been requeued/held.\n\t * So, don't process any further\n\t */\n\tif (prov_vnode_info->pvnfo_jobid[0] == '\\0')\n\t\treturn;\n\n\tDBPRT((\"%s: Entered, node=%s, jobid=%s\\n\", __func__,\n\t       prov_vnode_info->pvnfo_vnode, prov_vnode_info->pvnfo_jobid))\n\n\tpjob = (job *) find_job(prov_vnode_info->pvnfo_jobid);\n\tif (pjob == NULL)\n\t\treturn;\n\n\trc = is_runnable(pjob, prov_vnode_info);\n\n\tif (rc == 0) {\n\t\ttask.wt_parm1 = (void *) pjob;\n\t\tprov_startjob(&task);\n\n\t} else if (rc == -2 || rc == -3) {\n\t\t/*\n\t\t * prov over on all nodes,\n\t\t * but some nodes offline or curr_aoe bad\n\t\t */\n\t\tDBPRT((\"%s: Jobid: %s is_runnable returned %d\\n\", __func__,\n\t\t       pjob->ji_qs.ji_jobid, rc))\n\t\tif (rc == -3)\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  pjob->ji_qs.ji_jobid, \"provisioning error: AOE 
mis-match\");\n\t\telse\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t\t  pjob->ji_qs.ji_jobid, \"provisioning error: vnode offline\");\n\n\t\tif (rc == -3)\n\t\t\tfail_vnode_job(prov_vnode_info, 0);\n\t\telse\n\t\t\tfail_vnode_job(prov_vnode_info, 1);\n\t}\n}\n\n/**\n * @brief\n *\t\tChecks if vnode is up after provisioning.\n *\n * @par Functionality:\n *      This function checks whether the concerned vnode is up after\n *\t\tprovisioning. If vnode is up:\n *\t\t\t- cancels the timeout work task\n *\t\t\t- updates the nodes file to reflect the new state\n *\t\t\t- calls check_and_run_job to run jobs\n *\t\t\t- frees the prov_vnode_info structure allocated\n *\t\t  \t\tby do_provisioning\n *\t\tIf vnode is not yet up:\n *\t\t\t- it returns. It will get called again by set_vnode_state.\n *\n * @see\n *\t\tset_vnode_state\n *\t\tprov_request_deferred\n *\n * @param[in]\tvnode\t-\tpointer to string containing vnode name\n *\n * @return\tvoid\n *\n * @par Side Effects:\n *     starts a new work task to do more provisioning\n *\n * @par MT-safe:\tNo\n *\n */\n\nvoid\nis_vnode_prov_done(char *vnode)\n{\n\tstruct pbsnode *pnode = NULL;\n\tstruct prov_vnode_info *prov_vnode_info;\n\tstruct work_task *ptask_timeout;\n\tstruct prov_tracking *ptracking;\n\n\tptracking = get_prov_record_by_vnode(vnode);\n\tif (ptracking == NULL)\n\t\t/* prov tracking record not created */\n\t\treturn;\n\tif (ptracking->pvtk_pid > -1) {\n\t\tDBPRT((\"%s: Provisioning script not yet done\\n\", __func__))\n\t\treturn;\n\t}\n\n\tprov_vnode_info = ptracking->prov_vnode_info;\n\n\tpnode = (struct pbsnode *) find_nodebyname(prov_vnode_info->pvnfo_vnode);\n\tassert(pnode != NULL);\n\n\tptask_timeout = prov_vnode_info->ptask_timed;\n\n\tDBPRT((\"%s: Entered for node:%s\\n\", __func__, prov_vnode_info->pvnfo_vnode))\n\n\t/* check if this node is up or not */\n\tif ((pnode->nd_state & VNODE_UNAVAILABLE) ||\n\t    (pnode->nd_state & INUSE_INIT)) {\n\t\t/* node is is still not 
up\n\t\t return, since this will be called again\n\t\t when the vnode comes up (from set_vnode_state)\n\t\t */\n\t\tDBPRT((\"%s: node:%s not yet up\\n\",\n\t\t       __func__, prov_vnode_info->pvnfo_vnode))\n\t\treturn;\n\t}\n\n\tDBPRT((\"%s: node:%s is up - cancelling timeout task\\n\",\n\t       __func__, prov_vnode_info->pvnfo_vnode))\n\t/* delete the timeout task */\n\tdelete_task(ptask_timeout);\n\n\t/* unset the provisioning flag on this node */\n\tif (pnode->nd_state & INUSE_PROV) {\n\t\tDBPRT((\"%s: node:%s is up - removing prov\\n\",\n\t\t       __func__, prov_vnode_info->pvnfo_vnode))\n\t\tset_vnode_state(pnode, ~INUSE_PROV, Nd_State_And);\n\t}\n\n\t/* save the state of this node to the nodes file */\n\tnode_save_db(pnode);\n\n\t/* log msg about prov of node success */\n\tsprintf(log_buffer, \"Provisioning of Vnode %s successful\",\n\t\tprov_vnode_info->pvnfo_vnode);\n\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_NODE,\n\t\t  LOG_NOTICE, msg_daemonname, log_buffer);\n\n\tcheck_and_run_jobs(prov_vnode_info);\n\n\t/* Remove record from prov tracking table */\n\tremove_prov_record(pnode->nd_name);\n\tprov_track_save(); /* save tracking table since it was just modified */\n\n\tfree_pvnfo(prov_vnode_info);\n\n\t/*\n\t * since one provisioning was finished, we have space\n\t * to do more prov so start a task for looking at\n\t * other nodes in the provisioning queue\n\t */\n\tset_task(WORK_Immed, 0, do_provisioning, NULL);\n}\n\n/**\n * @brief\n *\t\tDetermines if any of the provisionable vnodes assigned to the job\n *\t\thas a pending mom hook-related file copy action.\n *\n * @param[in]   pjob\t-\tpointer to job struct\n *\n * @return\tint\n * @retval\t1\t: job has a pending hook-related copy action on at least one\n *\t\t\t  of its provisioning vnodes.\n * @retval\t0\t: either no pending hook-related action detected, or\n *\t\t\t  an error has occurred.\n *\n * @par Side Effects:\n *      Unknown\n *\n * @par MT-safe: No\n *\n */\n\nstatic 
int\nprov_vnode_pending_hook_copy(job *pjob)\n{\n\tstruct pbsnode *np = NULL;\n\tint i;\n\texec_vnode_listtype prov_vnode_list = NULL;\n\tint num_of_prov_vnodes = 1;\n\tint rcode = 0;\n\n\tif (pjob == NULL) {\n\t\tDBPRT((\"%s: job is NULL\\n\", __func__))\n\t\treturn 0;\n\t}\n\n\tDBPRT((\"%s: Entered jobid=%s\\n\", __func__, pjob->ji_qs.ji_jobid))\n\n\tnum_of_prov_vnodes = parse_prov_vnode(get_jattr_str(pjob, JOB_ATR_prov_vnode), &prov_vnode_list);\n\n\tif (num_of_prov_vnodes == -1) {\n\t\tif (prov_vnode_list)\n\t\t\tfree(prov_vnode_list);\n\t\treturn 0;\n\t}\n\n\tfor (i = 0; i < num_of_prov_vnodes; i++) {\n\t\tint j;\n\n\t\tnp = find_nodebyname(prov_vnode_list[i]);\n\t\tif (np == NULL) {\n\t\t\tDBPRT((\"%s: node %s is null\\n\",\n\t\t\t       __func__, prov_vnode_list[i]))\n\t\t\tgoto prov_vnode_label;\n\t\t}\n\t\t/* check whether any parent mom still has pending hook file syncs */\n\t\tfor (j = 0; j < np->nd_nummoms; j++) {\n\t\t\tif ((np->nd_moms[j] != NULL) && (sync_mom_hookfiles_count(np->nd_moms[j]) > 0)) {\n\t\t\t\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_NODE, LOG_WARNING, pjob->ji_qs.ji_jobid, \"prov vnode %s's parent mom %s:%d has a pending copy hook or delete hook request\", np->nd_name, np->nd_moms[j]->mi_host, np->nd_moms[j]->mi_port);\n\t\t\t\trcode = 1;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\nprov_vnode_label:\n\n\tif (num_of_prov_vnodes > 0)\n\t\tfree(prov_vnode_list);\n\n\treturn rcode;\n}\n\n/**\n * @brief\n * \tThis function ensures that the hooks are synced with the\n * \tprovisioned node before starting the job on it.\n *\n * @param[in,out]\n * \tptask - work task structure; wt_parm1 contains the pointer to the job\n *\n * @return\tvoid\n *\n */\n\nstatic void\nprov_startjob(struct work_task *ptask)\n{\n\tjob *pjob;\n\tint rc;\n\n\tassert(ptask->wt_parm1 != NULL);\n\tpjob = (job *) ptask->wt_parm1;\n\tif (pjob == NULL) {\n\t\tDBPRT((\"%s: pjob is NULL\\n\", __func__))\n\t\treturn;\n\t}\n\t/* task being serviced here */\n\tpjob->ji_prov_startjob_task = NULL;\n\tif ((do_sync_mom_hookfiles || 
sync_mom_hookfiles_replies_pending) &&\n\t    (prov_vnode_pending_hook_copy(pjob))) {\n\n\t\t/**\n\t\t * If the mom hook files sync is in progress, create\n\t\t * a timed task that performs this check again\n\t\t * and starts the job once the sync is done\n\t\t */\n\n\t\tDBPRT((\"%s: setting the timed task as sync mom \"\n\t\t       \"hookfiles is not completed\\n\",\n\t\t       __func__))\n\n\t\t/* set a work task to run after 5 sec from now */\n\t\tpjob->ji_prov_startjob_task = set_task(WORK_Timed, time_now + 5,\n\t\t\t\t\t\t       prov_startjob, pjob);\n\t\tif (pjob->ji_prov_startjob_task == NULL) {\n\t\t\tlog_err(errno, __func__, \"Unable to set task for prov_startjob; requeuing the job\");\n\t\t\t(void) force_reque(pjob);\n\t\t}\n\t\treturn;\n\t}\n\n\t/* accounting log about prov for job over */\n\tset_job_ProvAcctRcd(pjob, time_now,\n\t\t\t    PROVISIONING_SUCCESS);\n\n\t/* log msg about prov for job over */\n\tsprintf(log_buffer,\n\t\t\"Provisioning for Job %s succeeded, running job\",\n\t\tpjob->ji_qs.ji_jobid);\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB,\n\t\t  LOG_INFO, pjob->ji_qs.ji_jobid,\n\t\t  log_buffer);\n\n\tDBPRT((\"%s: Jobid: %s about to run after prov success\\n\",\n\t       __func__, pjob->ji_qs.ji_jobid))\n\n\t/* now prov_vnode is stale, remove it */\n\tif (is_jattr_set(pjob, JOB_ATR_prov_vnode))\n\t\tfree_jattr(pjob, JOB_ATR_prov_vnode);\n\n\tDBPRT((\"%s: calling [svr_startjob] from prov_startjob\\n\", __func__))\n\t/* Move the job to MOM */\n\tif ((rc = svr_startjob(pjob, 0)) != 0) {\n\t\tDBPRT((\"%s: Jobid: %s - startjob failed - rc:%d\\n\",\n\t\t       __func__, pjob->ji_qs.ji_jobid, rc))\n\t\tfree_nodes(pjob);\n\t}\n\tDBPRT((\"%s: Jobid: %s, startjob returned: %d\\n\",\n\t       __func__, pjob->ji_qs.ji_jobid, rc))\n}\n\n/**\n * @brief\n *\t\tPerforms provisioning cleanup when provisioning script returns.\n *\n * @par Functionality:\n *      This function is called when the deferred child task, set by\n *\t\tstart_vnode_provisioning, returns 
(i.e. provisioning script finishes,\n *\t\teither success or failure). This can get triggered before/after\n *\t\tprovision_timeout occurs.\n *\t\t\t1) Gets the child's exit status:\n *\t\tif provisioning script exited with success (0),\n *\t\t\t- updates vnode's current_aoe attribute to\n *\t\t\t the aoe for provisioning\n *\t\t\t- removes the provisioning record from the provisioning table\n *\t\t\t- saves the provisioning table to disk\n *\t\tif provisioning script exited with error (non-zero)\n *\t\t\t- cancels the timeout work task\n *\t\t\t- removes the provisioning record from the\n * \t\t\t prov table and saves to disk\n *\t\t\t- calls fail_vnode to mark node offline\n *\t\t\t and requeue all jobs on vnode\n *\n * @see\n *\t\tstart_vnode_provisioning\n *\n * @param[in]\twtask\t-\tpointer to work_task\n *\t\t\t\t\t\t\twtask->wt_parm1\t: should have pointer to\n *\t\t\t\t\t\t  \t\t\tprov_vnode_info structure\n *\t\t\t\t\t\t\twtask->wt_parm2\t: should have pointer to timeout task\n *\n * @return\tvoid\n *\n * @par Side Effects:\n *      Unknown\n *\n * @par MT-safe:\tNo\n *\n */\n\nstatic void\nprov_request_deferred(struct work_task *wtask)\n{\n\tstruct work_task *timeout_task;\n\tint stat;\n\tstruct pbsnode *pnode = NULL;\n\tstruct prov_vnode_info *prov_vnode_info;\n\tprov_pid this_pid;\n\tint exit_status = -1;\n\tstruct prov_tracking *prov_tracking;\n\n\tassert(wtask->wt_parm1 != NULL);\n\n\tprov_vnode_info = (struct prov_vnode_info *) wtask->wt_parm1;\n\tpnode = (struct pbsnode *) find_nodebyname(prov_vnode_info->pvnfo_vnode);\n\tthis_pid = (pid_t) wtask->wt_event;\n\tDBPRT((\"%s: pid = %ld\\n\", __func__, (long) this_pid))\n\ttimeout_task = (struct work_task *) prov_vnode_info->ptask_timed;\n\n\t/* Now, figure out exitvalue of the child process */\n\tstat = wtask->wt_aux;\n\n\t/* update the fact that the process is gone in the prov table */\n\tprov_tracking = get_prov_record_by_pid(this_pid);\n\tprov_tracking->pvtk_pid = -1; /* indicating the process has 
exited */\n\n\tif (WIFEXITED(stat))\n\t\texit_status = WEXITSTATUS(stat);\n\n\tDBPRT((\"%s: stat=%d, exit_status=%d\\n\", __func__, stat, exit_status))\n\n\t/* success or application prov over */\n\tif (exit_status == 0 || exit_status == APP_PROV_SUCCESS) {\n\n\t\tif (pnode == NULL) {\n\t\t\tdelete_task(timeout_task);\n\t\t\tfree_pvnfo(prov_vnode_info);\n\t\t\treturn;\n\t\t}\n\n\t\t/* Update Current aoe */\n\t\tset_nattr_str_slim(pnode, ND_ATR_current_aoe, prov_vnode_info->pvnfo_aoe_req, NULL);\n\n\t\tDBPRT((\"%s: node:%s current_aoe set: %s\\n\",\n\t\t       __func__, pnode->nd_name, prov_vnode_info->pvnfo_aoe_req))\n\n\t\t/* write the node current_aoe */\n\t\tnode_save_db(pnode);\n\n\t\t/* if exit_status says app_prov returned success, reset the down\n\t\t * state that we set. after setting the state, is_vnode_prov_done()\n\t\t * is called which would delete the timed work task.\n\t\t */\n\t\tif (exit_status == APP_PROV_SUCCESS &&\n\t\t    (pnode->nd_state & INUSE_DOWN))\n\t\t\tset_vnode_state(pnode, ~INUSE_DOWN, Nd_State_And);\n\n\t\tis_vnode_prov_done(pnode->nd_name);\n\n\t\treturn;\n\t}\n\n\t/* log msg about prov of node failure */\n\tsprintf(log_buffer,\n\t\t\"Provisioning of %s with %s for %s failed, provisioning exit status=%d\",\n\t\tprov_vnode_info->pvnfo_vnode, prov_vnode_info->pvnfo_aoe_req,\n\t\tprov_vnode_info->pvnfo_jobid, exit_status);\n\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_SERVER,\n\t\t  LOG_NOTICE, msg_daemonname, log_buffer);\n\n\t/* kill the timed task since we don't need it any more */\n\tdelete_task(timeout_task);\n\n\t/* Remove record from prov tracking table; use the recorded vnode\n\t * name since pnode may be NULL here */\n\tremove_prov_record(prov_vnode_info->pvnfo_vnode);\n\tprov_track_save(); /* save tracking table since it's modified now */\n\n\t/* Any other exit code */\n\t/* Failure, move all jobs to be run_err\n\t * on this node to failed state\n\t */\n\tfail_vnode(prov_vnode_info, 1);\n\tfree_pvnfo(prov_vnode_info);\n\n\t/*\n\t * since one provisioning failed, we have space to\n\t * do more prov so start a 
task for looking at other\n\t * nodes in the provisioning queue\n\t */\n\tset_task(WORK_Immed, 0, do_provisioning, NULL);\n}\n\n/**\n * @brief\n *\t\tPerforms provisioning cleanup if provisioning timed out.\n *\n * @par Functionality:\n *      This function performs provisioning cleanup if timed out.\n *      It is triggered after \"provision_timeout\" seconds have elapsed.\n *      This can get triggered before/after the deferred task finishes.\n *\t\t1) Kills the process group of the provisioning script, if the deferred\n *\t   \t\tchild task has not yet run.\n *      2) Cancels the deferred child work task if it's not yet complete.\n *      3) Calls fail_vnode (for the concerned vnode) to mark vnode offline and\n *\t   \t\trequeue all jobs on this vnode.\n *      4) Frees the prov_vnode_info structure, allocated by do_provisioning.\n *\n * @see\n *\t\tstart_vnode_provisioning\n *\n * @param[in]\twtask\t-\tpointer to work_task\n *\t\t\t\t\t\t\twtask->wt_parm1\t: should have pointer to\n *\t\t\t\t\t  \t\tprov_vnode_info structure\n *\t\t\t\t\t\t\twtask->wt_parm2 : should have pointer to\n *\t\t\t\t\t  \t\tdeferred child task\n *\n * @return\tvoid\n *\n * @par Side Effects:\n *      Unknown\n *\n * @par MT-safe:\tNo\n *\n */\n\nstatic void\nprov_request_timed(struct work_task *wtask)\n{\n\tstruct work_task *ptask_defer;\n\tstruct prov_vnode_info *prov_vnode_info;\n\tprov_pid this_pid;\n\tstruct prov_tracking *ptracking;\n\n\tassert(wtask->wt_parm1 != NULL);\n\n\tprov_vnode_info = (struct prov_vnode_info *) wtask->wt_parm1;\n\tptask_defer = (struct work_task *) prov_vnode_info->ptask_defer;\n\n\tsprintf(log_buffer,\n\t\t\"Provisioning of %s with %s for %s timed out\",\n\t\tprov_vnode_info->pvnfo_vnode, prov_vnode_info->pvnfo_aoe_req,\n\t\tprov_vnode_info->pvnfo_jobid);\n\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_SERVER, LOG_NOTICE,\n\t\t  msg_daemonname, log_buffer);\n\n\tDBPRT((\"%s: Entered node:%s Timed timeout work task\\n\",\n\t       __func__, 
prov_vnode_info->pvnfo_vnode))\n\n\tptracking = get_prov_record_by_vnode(prov_vnode_info->pvnfo_vnode);\n\tif (ptracking && ptracking->pvtk_pid > -1) {\n\t\t/* pid is part of the deferred task event */\n\t\tthis_pid = ptracking->pvtk_pid;\n\t\tDBPRT((\"%s: pid = %d\\n\", __func__, this_pid))\n\n\t\t/* Kill all processes belonging to this process group */\n\t\tif (kill(((-1) * this_pid), SIGKILL) == -1) {\n\t\t\tDBPRT((\"%s: couldn't kill prov process pgid = %d\\n\",\n\t\t\t       __func__, this_pid))\n\t\t} else {\n\t\t\tDBPRT((\"%s: killed provisioning process tree for pgid = %d\\n\",\n\t\t\t       __func__, this_pid))\n\t\t}\n\n\t\t/*\n\t\t * The script was still running, which means prov_request_deferred\n\t\t * has not run, so it is safe to delete the deferred task.\n\t\t */\n\t\tdelete_task(ptask_defer);\n\t}\n\n\t/* remove prov record */\n\tremove_prov_record(prov_vnode_info->pvnfo_vnode);\n\tprov_track_save();\n\n\t/* Move jobs on this node to the failed state */\n\tfail_vnode(prov_vnode_info, 1);\n\tfree_pvnfo(prov_vnode_info);\n\n\t/*\n\t * since one provisioning failed, we have space\n\t * to do more prov so start a task for looking at other nodes\n\t * in the provisioning queue\n\t */\n\tset_task(WORK_Immed, 0, do_provisioning, NULL);\n}\n\n/**\n * @brief\n *\t\tSets provision_enable and provision_timeout on the server every time\n *\t\tthe provisioning hook is modified.\n *\n * @par Functionality:\n *      This function sets the server level attributes SVR_ATR_ProvisionEnable and\n *\t\tSVR_ATR_provision_timeout from the provisioning hook. 
It checks whether\n *\t\tserver attributes should be set or not.\n *\n * @see\n *\t\tmgr_hook_import\n *\t\tmgr_hook_set\n *\t\tmgr_hook_unset\n *\n * @return\tvoid\n *\n * @par Side Effects:\n *      Unknown\n *\n * @par MT-safe:\tNo\n *\n */\n\nvoid\nset_srv_prov_attributes(void)\n{\n#ifdef PYTHON\n\thook *phook;\n\n\tDBPRT((\"Entered %s\\n\", __func__))\n\n\tphook = find_hookbyevent(HOOK_EVENT_PROVISION);\n\tif (!phook || !phook->script || !phook->enabled) {\n\t\tdisable_svr_prov();\n\t\tDBPRT((\"%s: script/enabled not set\\n\", __func__))\n\t\treturn;\n\t}\n\n\tprovision_timeout = phook->alarm;\n\tset_sattr_l_slim(SVR_ATR_provision_timeout, provision_timeout, SET);\n\tset_sattr_l_slim(SVR_ATR_ProvisionEnable, 1, SET);\n#else\n\tdisable_svr_prov();\n\tDBPRT((\"%s: Python not enabled\\n\", __func__))\n#endif\n}\n\n/**\n * @brief\n *\t\tExecutes provisioning hook script for a vnode.\n *\n * @par Functionality:\n *      This function initializes python environment and runs python top level\n *\t\tscript. 
If compiled without python support, it can run a shell script\n *\t\t(for testing).\n *\n * @see\n *\t\tstart_vnode_provisioning\n *\n * @param[in]\tphook\t-\tpointer to provisioning hook\n * @param[in]   prov_vnode_info\t-\tpointer to prov_vnode_info\n *\n * @return\tint\n * @retval\t>1\t: error code as returned by provisioning hook script\n * @retval\t1\t: success if doing application provisioning\n * @retval\t0\t: success if doing os provisioning\n * @retval\t-1\t: failure\n *\n * @par Side Effects:\n *      Unknown\n *\n * @par MT-safe: No\n *\n */\n\nint\nexecute_python_prov_script(hook *phook,\n\t\t\t   struct prov_vnode_info *prov_vnode_info)\n{\n\tint rc = 255;\n\tint exit_code = 255;\n#ifdef PYTHON\n\tunsigned int hook_event;\n\tchar *emsg = NULL;\n\thook_input_param_t req_ptr;\n\tchar perf_label[MAXBUFLEN];\n\n\tif (!phook || !prov_vnode_info)\n\t\treturn rc;\n\n\thook_event = HOOK_EVENT_PROVISION;\n\n\tif (phook->user != HOOK_PBSADMIN)\n\t\treturn rc;\n\n\tsnprintf(perf_label, sizeof(perf_label), \"hook_%s_%s_%d\", HOOKSTR_PROVISION, phook->hook_name, getpid());\n\treq_ptr.rq_prov = (struct prov_vnode_info *) prov_vnode_info;\n\trc = pbs_python_event_set(hook_event, \"root\",\n\t\t\t\t  \"server\", &req_ptr, perf_label);\n\tif (rc == -1) { /* internal server code failure */\n\t\tlog_event(PBSEVENT_DEBUG2,\n\t\t\t  PBS_EVENTCLASS_HOOK, LOG_ERR, __func__,\n\t\t\t  \"Failed to set event; request accepted by default\");\n\t\treturn (-1);\n\t}\n\n\t/* hook_name changes for each hook */\n\t/* This sets Python event object's hook_name value */\n\trc = pbs_python_event_set_attrval(PY_EVENT_HOOK_NAME,\n\t\t\t\t\t  phook->hook_name);\n\n\tif (rc == -1) {\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_ERR, phook->hook_name,\n\t\t\t  \"Failed to set event 'hook_name'.\");\n\t\treturn (-1);\n\t}\n\n\t/* hook_type needed for internal processing; */\n\t/* hook_type changes for each hook.\t     */\n\t/* This sets Python event object's hook_type value 
*/\n\trc = pbs_python_event_set_attrval(PY_EVENT_HOOK_TYPE,\n\t\t\t\t\t  hook_type_as_string(phook->type));\n\n\tif (rc == -1) {\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t  LOG_ERR, phook->hook_name,\n\t\t\t  \"Failed to set event 'hook_type'.\");\n\t\treturn (-1);\n\t}\n\n\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK,\n\t\t  LOG_INFO, phook->hook_name, \"started\");\n\n\tpbs_python_set_mode(PY_MODE); /* hook script mode */\n\n\t/* hook script may create files, and we don't want it to */\n\t/* be littering server's private directory. */\n\t/* NOTE: path_hooks_workdir is periodically cleaned up */\n\tif (chdir(path_hooks_workdir) != 0) {\n\t\tlog_event(PBSEVENT_DEBUG2,\n\t\t\t  PBS_EVENTCLASS_HOOK, LOG_WARNING, phook->hook_name,\n\t\t\t  \"unable to go to hooks tmp directory\");\n\t}\n\n\t/* let rc pass through */\n\thook_perf_stat_start(perf_label, HOOK_PERF_RUN_CODE, 0);\n\trc = pbs_python_run_code_in_namespace(&svr_interp_data,\n\t\t\t\t\t      phook->script,\n\t\t\t\t\t      &exit_code);\n\thook_perf_stat_stop(perf_label, HOOK_PERF_RUN_CODE, 0);\n\n\t/* go back to server's private directory */\n\tif (chdir(path_priv) != 0) {\n\t\tlog_event(PBSEVENT_DEBUG2,\n\t\t\t  PBS_EVENTCLASS_HOOK, LOG_WARNING, phook->hook_name,\n\t\t\t  \"unable to go back to server private directory\");\n\t}\n\n\tpbs_python_set_mode(C_MODE); /* PBS C mode - flexible */\n\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK,\n\t\t  LOG_INFO, phook->hook_name, \"finished\");\n\n\tswitch (rc) {\n\t\tcase 0:\n\t\t\t/* reject if at least one hook script rejects */\n\t\t\tif (pbs_python_event_get_accept_flag() == FALSE) { /* a reject occurred */\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t\t \"%s request rejected by '%s'\",\n\t\t\t\t\t hook_event_as_string(hook_event),\n\t\t\t\t\t phook->hook_name);\n\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t  LOG_ERR, phook->hook_name, log_buffer);\n\t\t\t\tif ((emsg = pbs_python_event_get_reject_msg()) != 
NULL) {\n\t\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1, \"%s\", emsg);\n\t\t\t\t\t/* log also the custom reject message */\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t\t  LOG_ERR, phook->hook_name, log_buffer);\n\t\t\t\t}\n\t\t\t}\n\t\t\treturn (exit_code);\n\n\t\tcase -1: /* internal error */\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_ERR, phook->hook_name,\n\t\t\t\t  \"Internal server error encountered. Skipping hook.\");\n\t\t\treturn (rc); /* should not happen */\n\n\t\tcase -2: /* unhandled exception */\n\t\t\tpbs_python_event_reject(NULL);\n\t\t\tpbs_python_event_param_mod_disallow();\n\n\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t \"%s hook '%s' encountered an exception, \"\n\t\t\t\t \"request rejected\",\n\t\t\t\t hook_event_as_string(hook_event), phook->hook_name);\n\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_ERR, phook->hook_name, log_buffer);\n\t\t\treturn (rc);\n\t}\n#endif\n\treturn rc;\n}\n\n/**\n * @brief\n *\t\tPerforms basic checks and then kicks off provisioning of a vnode.\n *\n * @par Functionality:\n *      This function starts provisioning of a vnode with aoe specified by\n *\t\tstarting provisioning hook in another process. do_provisioning() is\n *\t\tcalled in the end to drain the provisioning list. Deferred and Timed\n *\t\twork tasks are set and a provisioning record is added in server. vnode\n *\t\tstate is marked down and provisioning. 
wait-provisioning state flag is\n *\t\tcleared.\n *\n * @see\n *\t\tcheck_and_enqueue_provisioning\n *\n * @param[in]\tprov_vnode_info\t-\tpointer to prov_vnode_info entry in server\n *\n * @return\tint\n * @retval\tPBSE_NONE\t: success if provisioning started for a vnode\n * @retval\tPBS Error code\t: if failed to start provisioning\n *\n * @par Side Effects:\n *      Unknown\n *\n * @par MT-safe:\tNo\n *\n */\n\nstatic int\nstart_vnode_provisioning(struct prov_vnode_info *prov_vnode_info)\n{\n\tprov_pid pid;\n\tstruct work_task *ptask_defer;\n\tstruct work_task *ptask_timed;\n\tstruct pbsnode *pnode;\n\tjob *pjob;\n\tint rc = -1;\n\tstruct sigaction act;\n\thook *phook;\n\n\tDBPRT((\"%s: Provisioning vnode: %s with aoe: %s\\n\", __func__,\n\t       prov_vnode_info->pvnfo_vnode, prov_vnode_info->pvnfo_aoe_req))\n\n\tpnode = find_nodebyname(prov_vnode_info->pvnfo_vnode);\n\tif (!pnode) {\n\t\tDBPRT((\"%s: Could not find vnode %s\\n\", __func__,\n\t\t       prov_vnode_info->pvnfo_vnode))\n\t\treturn (PBSE_SYSTEM);\n\t}\n\n\tphook = find_hookbyevent(HOOK_EVENT_PROVISION);\n\tif (!phook) {\n\t\tDBPRT((\"%s: Provisioning hook not found\\n\", __func__))\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_SERVER, LOG_INFO,\n\t\t\t  msg_daemonname, \"Provisioning hook not found\");\n\t\treturn rc;\n\t}\n\n\tif ((rc = pbs_python_check_and_compile_script(&svr_interp_data,\n\t\t\t\t\t\t      phook->script)) != 0) {\n\t\tDBPRT((\"%s: Recompilation failed\\n\", __func__))\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER, LOG_INFO,\n\t\t\t  msg_daemonname, \"Provisioning script recompilation failed\");\n\t\treturn rc;\n\t}\n\n\t/* Create child process to run TOP-LEVEL provisioning script */\n\tpid = fork();\n\tif (pid == -1) { /* fork failed */\n\t\tDBPRT((\"%s: fork() failed\\n\", __func__))\n\t\treturn (PBSE_SYSTEM);\n\t} else if (pid == 0) { /* child process */\n\t\talarm(0);\n\t\t/* standard tpp closure and net close */\n\t\tnet_close(-1);\n\t\ttpp_terminate();\n\n\t\t/* 
Reset signal actions for most to SIG_DFL */\n\t\tsigemptyset(&act.sa_mask);\n\t\tact.sa_flags = 0;\n\t\tact.sa_handler = SIG_DFL;\n\t\t(void) sigaction(SIGCHLD, &act, NULL);\n\t\t(void) sigaction(SIGHUP, &act, NULL);\n\t\t(void) sigaction(SIGINT, &act, NULL);\n\t\t(void) sigaction(SIGTERM, &act, NULL);\n\n\t\t/* Reset signal mask */\n\t\t(void) sigprocmask(SIG_SETMASK, &act.sa_mask, NULL);\n\n\t\t/*\n\t\t * set process as session leader\n\t\t */\n\t\tif (setsid() < 0)\n\t\t\texit(13);\n\n\t\t/* Redirect standard input to /dev/null */\n\t\tif (freopen(\"/dev/null\", \"r\", stdin) == NULL)\n\t\t\tlog_errf(-1, __func__, \"freopen of null device failed. ERR : %s\", strerror(errno));\n\n\t\t/* Unprotect child from being killed by system */\n\t\tdaemon_protect(0, PBS_DAEMON_PROTECT_OFF);\n\n\t\t/* exit with the return code from the script */\n\t\trc = execute_python_prov_script(phook, prov_vnode_info);\n\n\t\t/* if python did sys.exit we won't be here */\n\t\texit(rc);\n\t}\n\n\t/* parent process */\n\t/* set node state to provisioning */\n\t/*\n\t * set_vnode_state(pnode, INUSE_PROV, Nd_State_Or);\n\t * This is now done earlier\n\t */\n\t/* unset the current_aoe for the node being provisioned */\n\tfree_nattr(pnode, ND_ATR_current_aoe);\n\n\t/* write the node current_aoe */\n\tnode_save_db(pnode);\n\n\t/*\n\t * Parent process creates two work tasks,\n\t * i.e. a deferred child work task and a timed work task. The deferred\n\t * child task is to capture the exit code of the provisioning script.\n\t * The timed task is to implement the timeout feature.\n\t */\n\n\t/*\n\t * wt_parm1 is passed the address of the prov_vnode_info\n\t * structure allocated earlier\n\t */\n\tptask_defer = set_task(WORK_Deferred_Child, pid,\n\t\t\t       prov_request_deferred,\n\t\t\t       (void *) prov_vnode_info);\n\tif (!ptask_defer)\n\t\treturn (PBSE_INTERNAL);\n\n\tptask_timed = set_task(WORK_Timed, time_now + provision_timeout,\n\t\t\t       prov_request_timed,\n\t\t\t       (void *) 
prov_vnode_info);\n\tif (!ptask_timed) {\n\t\t/* cancel deferred child work task */\n\t\tdelete_task(ptask_defer);\n\t\treturn (PBSE_INTERNAL);\n\t}\n\n\t/* store the addresses in prov_vnode_info */\n\tprov_vnode_info->ptask_defer = ptask_defer;\n\tprov_vnode_info->ptask_timed = ptask_timed;\n\n\t/*\n\t * add a provisioning record to the prov_record table,\n\t * used for server crash recovery\n\t */\n\tif (add_prov_record(pid, prov_vnode_info) == -1) {\n\t\t/* this actually should not fail, since we checked before */\n\t\tdelete_task(ptask_defer);\n\t\tdelete_task(ptask_timed);\n\t\treturn (PBSE_INTERNAL);\n\t}\n\n\tpjob = find_job(prov_vnode_info->pvnfo_jobid);\n\tif (pjob) {\n\t\t/* log job prov success message */\n\t\tsprintf(log_buffer, \"Provisioning vnode %s with AOE %s \"\n\t\t\t\t    \"started successfully\",\n\t\t\tprov_vnode_info->pvnfo_vnode,\n\t\t\tprov_vnode_info->pvnfo_aoe_req);\n\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB,\n\t\t\t  LOG_INFO, pjob->ji_qs.ji_jobid, log_buffer);\n\t}\n\n\t/* remove the INUSE_WAIT_PROV flag as it is prov now */\n\tset_vnode_state(pnode, ~INUSE_WAIT_PROV, Nd_State_And);\n\n\t/* set prov and down states */\n\tset_vnode_state(pnode, INUSE_PROV | INUSE_DOWN, Nd_State_Or);\n\n\treturn (PBSE_NONE);\n}\n\n/**\n * @brief\n *\t\tChecks if provisioning is required or not.\n *\n * @par Functionality:\n *      This function parses job's exec_vnode attribute, if set, it checks if\n *\t\tjob needs one or more vnodes to be provisioned. If exec_vnode is null,\n *\t\tneed_prov contains 0. 
If one or more vnodes need provisioning, need_prov is 1.\n *\n * @see\n *\t\tcheck_and_provision_job\n * @param[in]   pjob\t-\tpointer to job\n * @param[out]  need_prov\t-\tboolean value, whether job will provision\n *\n * @return\tint\n * @retval\tPBSE_NONE\t: success if no provisioning needed\n * @retval\tPBS Error code\t: if some error occurs\n *\n * @par Side Effects:\n *  \tUnknown\n *\n * @par MT-safe:\tNo\n *\n */\n\nint\ncheck_and_enqueue_provisioning(job *pjob, int *need_prov)\n{\n\texec_vnode_listtype prov_vnode_list = NULL;\n\tint num_of_prov_vnodes = -1;\n\tint i;\n\tstruct prov_vnode_info *prov_vnode_info;\n\tstruct pbsnode *pnode;\n\tstruct work_task *ptask_start_prov;\n\tchar *aoe_req = NULL; /* to point to aoe */\n\n\tDBPRT((\"%s: Entered\\n\", __func__))\n\n\tif (need_prov == NULL) {\n\t\tDBPRT((\"%s: bad params\\n\", __func__))\n\t\treturn (PBSE_IVALREQ);\n\t}\n\n\t*need_prov = 0;\n\n\t/* prov_vnode_list is of type exec_vnode_listtype.\n\t * This is an array of \"pointers to arrays[PBS_MAXCLTJOBID]\"\n\t */\n\tnum_of_prov_vnodes = find_prov_vnode_list(pjob, &prov_vnode_list, &aoe_req);\n\tif (num_of_prov_vnodes == -1) {\n\t\tif (prov_vnode_list)\n\t\t\tfree(prov_vnode_list);\n\t\tif (aoe_req)\n\t\t\tfree(aoe_req);\n\t\treturn (PBSE_IVALREQ);\n\t}\n\n\tDBPRT((\"%s: aoe_req: %s\\n\", __func__, (aoe_req ? 
aoe_req : \"NULL\")))\n\n\tif (num_of_prov_vnodes == 0) {\n\t\t*need_prov = 0;\n\t\tDBPRT((\"%s: Provisioning will not be done, \"\n\t\t       \"since no aoe requested or scheduler did not give provision vnode\\n\",\n\t\t       __func__))\n\t\tif (prov_vnode_list)\n\t\t\tfree(prov_vnode_list);\n\t\treturn (PBSE_NONE);\n\t}\n\n\t/* enqueue the provisioning request */\n\tfor (i = 0; i < num_of_prov_vnodes; i++) {\n\t\tprov_vnode_info =\n\t\t\t(struct prov_vnode_info *) calloc(1,\n\t\t\t\t\t\t\t  sizeof(struct prov_vnode_info));\n\t\tif (!prov_vnode_info) {\n\t\t\tfree(prov_vnode_list);\n\t\t\tif (aoe_req)\n\t\t\t\tfree(aoe_req);\n\t\t\treturn (PBSE_INTERNAL);\n\t\t}\n\t\t/*\n\t\t * prepare the prov_vnode_info structure that's\n\t\t * passed as an argument to the work tasks\n\t\t */\n\n\t\t/*\n\t\t * prov_vnode_info carries only the IDs of the\n\t\t * job/resv and not pointers. This is because the\n\t\t * structure will be used by work tasks later, and\n\t\t * at that point the job / resv pointers may not\n\t\t * be valid, as it's possible that they could be\n\t\t * deleted by the server\n\t\t */\n\t\tif ((prov_vnode_info->pvnfo_vnode = strdup(prov_vnode_list[i])) == NULL) {\n\t\t\tfree(prov_vnode_list);\n\t\t\tfree_pvnfo(prov_vnode_info);\n\t\t\tif (aoe_req)\n\t\t\t\tfree(aoe_req);\n\t\t\treturn PBSE_SYSTEM;\n\t\t}\n\t\tif ((prov_vnode_info->pvnfo_aoe_req = strdup(aoe_req)) == NULL) {\n\t\t\tfree(prov_vnode_list);\n\t\t\tfree_pvnfo(prov_vnode_info);\n\t\t\tif (aoe_req)\n\t\t\t\tfree(aoe_req);\n\t\t\treturn PBSE_SYSTEM;\n\t\t}\n\t\tstrcpy(prov_vnode_info->pvnfo_jobid, pjob->ji_qs.ji_jobid);\n\n\t\tCLEAR_LINK(prov_vnode_info->al_link);\n\t\tappend_link(&prov_allvnodes, &prov_vnode_info->al_link,\n\t\t\t    prov_vnode_info);\n\n\t\tpnode = find_nodebyname(prov_vnode_list[i]);\n\n\t\tif (pnode)\n\t\t\tset_vnode_state(pnode, INUSE_WAIT_PROV, Nd_State_Or);\n\t}\n\n\t/*\n\t * then start an immediate work task to start provisioning\n\t * based on max allowed provisionings - start an immediate\n\t * work task 
repeatable every PROV_POLL interval\n\t */\n\tptask_start_prov = set_task(WORK_Immed, 0,\n\t\t\t\t    do_provisioning, NULL);\n\n\tif (ptask_start_prov == NULL) {\n\t\tfree(prov_vnode_list);\n\t\tif (aoe_req)\n\t\t\tfree(aoe_req);\n\t\treturn (PBSE_INTERNAL);\n\t}\n\n\tDBPRT((\"%s: Provisioning will be done\\n\", __func__))\n\n\tfree(prov_vnode_list);\n\tif (aoe_req)\n\t\tfree(aoe_req);\n\n\t/* could be a good time to resize the prov table */\n\tresize_prov_table(max_concurrent_prov);\n\n\t*need_prov = 1;\n\treturn (PBSE_NONE);\n}\n\n/**\n * @brief\n *\t\tStarts as many provisionings as possible from the list available\n *\t\twith the server.\n *\n * @par Functionality:\n *      This function is called by a work task. It runs as many provisionings\n *\t\tfrom the linked list as allowed. It calls start_vnode_provisioning()\n *\t\tto start the provisioning for a vnode. If starting a provisioning fails,\n *\t\tit does not fail the vnode, but the jobs that were waiting on that vnode\n *\t\tare failed.\n *\n * @see\n *\t\tstart_vnode_provisioning\n *\n * @param[in]\twtask\t-\tpointer to work_task\n *\n * @return\tvoid\n *\n * @par Side Effects:\n *      Unknown\n *\n * @par MT-safe:\tNo\n *\n */\n\nvoid\ndo_provisioning(struct work_task *wtask)\n{\n\tstruct prov_vnode_info *prov_vnode_info;\n\tstruct pbsnode *pnode;\n\tint rc;\n\n\tprov_vnode_info = GET_NEXT(prov_allvnodes);\n\n\t/*\n\t * check number of provisionings needed to be done,\n\t * should not cross max limit\n\t */\n\twhile (prov_vnode_info &&\n\t       (server.sv_cur_prov_records < max_concurrent_prov)) {\n\n\t\t/*\n\t\t * prov_vnode_info was allocated earlier; it's kept as long as\n\t\t * provisioning goes on. 
This will be freed by fail_vnode,\n\t\t * prov_request_deferred (if the script failed), prov_request_timed\n\t\t * (always), or is_vnode_prov_done\n\t\t * (before running the job)\n\t\t */\n\n\t\t/* remove this node from the linked list */\n\t\tdelete_link(&prov_vnode_info->al_link);\n\n\t\tpnode = find_nodebyname(prov_vnode_info->pvnfo_vnode);\n\t\tif (pnode == NULL) {\n\t\t\tDBPRT((\"%s: node %s was deleted\\n\", __func__,\n\t\t\t       prov_vnode_info->pvnfo_vnode))\n\t\t\tfree_pvnfo(prov_vnode_info);\n\t\t\tprov_vnode_info = GET_NEXT(prov_allvnodes);\n\t\t\tcontinue;\n\t\t}\n\n\t\trc = start_vnode_provisioning(prov_vnode_info);\n\n\t\tif (rc != 0) {\n\t\t\t/* we want to fail jobs/resv but not the node */\n\t\t\t/* fail all the jobs that were logged on this vnode */\n\t\t\t/* vnode is not offlined */\n\t\t\tfail_vnode_job(prov_vnode_info, 0);\n\n\t\t\t/* this node will not provision, remove flag */\n\t\t\tpnode = find_nodebyname(prov_vnode_info->pvnfo_vnode);\n\t\t\tif (pnode) {\n\t\t\t\tDBPRT((\"%s: \\n\", __func__))\n\t\t\t\tset_vnode_state(pnode, ~(INUSE_PROV | INUSE_WAIT_PROV),\n\t\t\t\t\t\tNd_State_And);\n\t\t\t}\n\t\t\tfree_pvnfo(prov_vnode_info);\n\t\t}\n\t\tprov_vnode_info = GET_NEXT(prov_allvnodes);\n\t}\n\n\t/* Save provisioning records to file */\n\tprov_track_save();\n}\n\n/**\n * @brief\n *\t\tDeletes prov_vnode_info entry.\n *\n * @par Functionality:\n *      This function deletes all prov_vnode_info entries for a job in the server.\n *\n * @see\n *\t\tfail_vnode_job\n *\n * @param[in]\tpjob\t-\tpointer to job\n *\n * @return\tvoid\n *\n * @par Side Effects:\n *      Unknown\n *\n * @par MT-safe:\tNo\n *\n */\nstatic void\ndel_prov_vnode_entry(job *pjob)\n{\n\tstruct prov_vnode_info *tmp_record;\n\tstruct prov_vnode_info *nxt_record;\n\tstruct pbsnode *pnode;\n\n\t/* since entry is plucked from list, it won't come again */\n\ttmp_record = GET_NEXT(prov_allvnodes);\n\twhile (tmp_record) {\n\t\tnxt_record = GET_NEXT(tmp_record->al_link);\n\t\tif 
(strcmp(tmp_record->pvnfo_jobid, pjob->ji_qs.ji_jobid) == 0) {\n\t\t\tdelete_link(&tmp_record->al_link);\n\t\t\tDBPRT((\"%s: vnode %s\\n\", __func__, tmp_record->pvnfo_vnode))\n\t\t\t/* node is no longer going to provision */\n\t\t\tpnode = find_nodebyname(tmp_record->pvnfo_vnode);\n\t\t\tif (pnode)\n\t\t\t\tset_vnode_state(pnode,\n\t\t\t\t\t\t~(INUSE_PROV | INUSE_WAIT_PROV),\n\t\t\t\t\t\tNd_State_And);\n\t\t\tfree_pvnfo(tmp_record);\n\t\t}\n\t\ttmp_record = nxt_record;\n\t}\n}\n\n/**\n * @brief\n * Function to enable/disable power provisioning.\n *\n * Reflect the change to the server attribute from the enabled flag for\n * a PBS hook.\n *\n * @return\tNone.\n */\nvoid\nset_srv_pwr_prov_attribute()\n{\n\tchar hook_name[] = PBS_POWER;\n\thook *phook = NULL;\n\tint val = 0;\n\tunsigned int action = 0;\n\tchar str_val[2] = {0};\n\n\tphook = find_hook(hook_name);\n\tif (phook == NULL)\n\t\treturn;\n\n\tif (phook->enabled == TRUE)\n\t\tval = 1;\n\n\tsnprintf(str_val, sizeof(str_val), \"%d\", val);\n\tset_sattr_str_slim(SVR_ATR_PowerProvisioning, str_val, NULL);\n\n\t/*\n\t * The enabled attribute has changed, so send the attributes.\n\t * If enabled is true, we also need to send the hook.\n\t */\n\taction = MOM_HOOK_ACTION_SEND_ATTRS;\n\tif (val)\n\t\taction |= MOM_HOOK_ACTION_SEND_SCRIPT;\n\tadd_pending_mom_hook_action(NULL, hook_name, action);\n}\n\n/**\n * @brief\n *\t\taction_backfill_depth - action function for backfill_depth\n *\t\t\t\tvalid input range is >= 0\n *\n * @param[in]\tpattr\t-\tpointer to the backfill_depth attribute\n * @param[in]\tpobj\t-\tobject being considered\n * @param[in]\tactmode\t-\taction mode\n *\n * @return\tint\t-\tWhether the function completed successfully or not\n * @retval\tPBSE_NONE\t: when no errors are encountered\n * @retval\tPBSE_BADATVAL\t: if a bad attribute value is attempted to be set\n */\nint\naction_backfill_depth(attribute *pattr, void *pobj, int actmode)\n{\n\n\tif (pattr == NULL)\n\t\treturn PBSE_NONE;\n\n\tif (actmode == 
ATR_ACTION_ALTER || actmode == ATR_ACTION_RECOV) {\n\t\tif (pattr->at_val.at_long < 0)\n\t\t\treturn PBSE_BADATVAL;\n\t}\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\taction_jobscript_max_size - action function for jobscript_max_size\n *\tvalid input size is at most 2GB\n *\n * @param[in] pattr - server attribute (jobscript_max_size)\n * @param[in] pobj  - object being considered\n * @param[in] actmode - action mode\n *\n * @return Whether the function completed successfully or not\n * @retval PBSE_NONE when no errors are encountered\n * @retval PBSE_BADJOBSCRIPTMAXSIZE when size is set to more than 2GB\n *\n */\n\nint\naction_jobscript_max_size(attribute *pattr, void *pobj, int actmode)\n{\n\tattribute attrib;\n\tif (pattr == NULL)\n\t\treturn PBSE_NONE;\n\tset_attr_generic(&attrib, &svr_attr_def[SVR_ATR_jobscript_max_size], \"2gb\", NULL, INTERNAL);\n\tif (actmode == ATR_ACTION_ALTER || actmode == ATR_ACTION_RECOV) {\n\t\tif (comp_size(pattr, &attrib) > 0)\n\t\t\treturn PBSE_BADJOBSCRIPTMAXSIZE;\n\t}\n\tset_size(&attr_jobscript_max_size, pattr, SET);\n\treturn PBSE_NONE;\n}\n\n/**\n * @brief\n *\taction_check_res_to_release - action function for restrict_res_to_release_on_suspend\n *\tIt validates that the input is a list of legitimate resource names.\n *\n * @param[in] pattr - server attribute\n * @param[in] pobj  - object being considered\n * @param[in] actmode - action mode\n *\n * @return Whether the function completed successfully or not\n * @retval PBSE_NONE when no errors are encountered\n * @retval PBSE_UNKRESC when any of the resources is not known\n *\n */\n\nint\naction_check_res_to_release(attribute *pattr, void *pobj, int actmode)\n{\n\tint i;\n\tif (pattr == NULL)\n\t\treturn PBSE_NONE;\n\n\tif (actmode == ATR_ACTION_ALTER || actmode == ATR_ACTION_NEW) {\n\t\tfor (i = 0; i < pattr->at_val.at_arst->as_usedptr; i++) {\n\t\t\tif (find_resc_def(svr_resc_def, pattr->at_val.at_arst->as_string[i]) == NULL)\n\t\t\t\treturn PBSE_UNKRESC;\n\t\t}\n\t}\n\treturn 
PBSE_NONE;\n}\n\n/**\n * @brief\n *      Unset jobscript_max_size attribute.\n *\n * @par Functionality:\n *      This function unsets the jobscript_max_size server attribute\n *      by reverting it back to its default value.\n *\n * @param[in]   void\n *\n * @return      void\n *\n */\nvoid\nunset_jobscript_max_size(void)\n{\n\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER, LOG_NOTICE, msg_daemonname,\n\t\t   \"unsetting jobscript_max_size - reverting back to default val %s\",\n\t\t   DFLT_JOBSCRIPT_MAX_SIZE);\n\tset_attr_generic(&attr_jobscript_max_size, &svr_attr_def[SVR_ATR_jobscript_max_size], DFLT_JOBSCRIPT_MAX_SIZE, NULL, INTERNAL);\n}\n\n/**\n * @brief\n *\t\tCreate a copy of the job script from the database to a temporary file.\n *\t\tThis filename is then passed on to the sendjob process to send the\n *\t\tjob file to the target mom/server.\n *\n * @param[in]\tpj\t-\tJob pointer\n * @param[out]\tscript_name\t-\tName of the temporary file to which\n *\t\t\t\t\t\t\t\tthe job script was copied\n *\n * @return\tError code\n * @retval\t0\t: Success\n * @retval\t-1\t: Failure\n *\n */\nextern char *msg_script_open;\nextern char *msg_script_write;\nextern char *path_spool;\n\n/**\n * @brief\n *  \tLoads the job-script associated with the job from the database.\n *  \tIt populates the ji_script field of the job and returns\n *      a pointer to the script.\n *\n * @param[in, out] pj - Job pointer. 
pj->ji_script has the script loaded into it.\n *\n * @return Text buffer containing the job script\n * @retval NULL  - Failed to load job script\n * @retval !NULL - Job script\n *\n */\nchar *\nsvr_load_jobscript(job *pj)\n{\n\tvoid *conn = (void *) svr_db_conn;\n\tpbs_db_jobscr_info_t jobscr;\n\tpbs_db_obj_info_t obj;\n\n\tif (pj->ji_script) {\n\t\tfree(pj->ji_script);\n\t\tpj->ji_script = NULL;\n\t}\n\n\tif (pj->ji_qs.ji_svrflags & JOB_SVFLG_SubJob) {\n\t\tstrcpy(jobscr.ji_jobid, pj->ji_parentaj->ji_qs.ji_jobid);\n\t} else {\n\t\tstrcpy(jobscr.ji_jobid, pj->ji_qs.ji_jobid);\n\t}\n\tjobscr.script = NULL;\n\tobj.pbs_db_obj_type = PBS_DB_JOBSCR;\n\tobj.pbs_db_un.pbs_db_jobscr = &jobscr;\n\n\tif (pbs_db_load_obj(conn, &obj) != 0) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"Failed to load job script for job %s from PBS datastore\",\n\t\t\t pj->ji_qs.ji_jobid);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\treturn NULL;\n\t}\n\n\tif (jobscr.script == NULL) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t \"Out of memory loading script for job %s from PBS datastore\",\n\t\t\t pj->ji_qs.ji_jobid);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\treturn NULL;\n\t}\n\n\tpj->ji_script = jobscr.script;\n\n\treturn jobscr.script;\n}\n\n/*\n * @brief\n *  \tWrite the job script from the job structure into a temporary file\n *\n * @param[in] pj - Job pointer\n * @param[in] script_name - The name of the script file to be created in tmpdir\n *\n * @return Error code\n * @retval -1 - Failure\n * @retval  0 - Success\n */\nint\nsvr_create_tmp_jobscript(job *pj, char *script_name)\n{\n\tint fds;\n\tint filemode = 0600;\n\tint len;\n\n\tif (pj->ji_script == NULL) {\n\t\t(void) snprintf(log_buffer, sizeof(log_buffer), \"Job has no script loaded!! 
Can't write temp job script\");\n\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_INFO, pj->ji_qs.ji_jobid, log_buffer);\n\t\treturn -1;\n\t}\n\n\t(void) strcpy(script_name, pbs_conf.pbs_tmpdir);\n\t(void) strcat(script_name, \"/\");\n\n\tif (*pj->ji_qs.ji_fileprefix != '\\0')\n\t\t(void) strcat(script_name, pj->ji_qs.ji_fileprefix);\n\n\t(void) strcat(script_name, pj->ji_qs.ji_jobid);\n\t(void) strcat(script_name, JOB_SCRIPT_SUFFIX);\n\n\tfds = open(script_name, O_WRONLY | O_CREAT, filemode);\n\tif (fds < 0) {\n\t\tlog_err(errno, __func__, msg_script_open);\n\t\treturn -1;\n\t}\n\n\tlen = strlen(pj->ji_script);\n\tif (write(fds, pj->ji_script, len) != len) {\n\t\tlog_err(errno, __func__, msg_script_write);\n\t\t(void) close(fds);\n\t\treturn -1;\n\t}\n\n\t(void) close(fds);\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tDetermines type of place directive.\n *\n * @param[in]\tplace_str\t: The string representation of the place directive\n * @param[in]\tby\t: The type of exclusivity to check for.\n *\n * @return\tThe place sharing type\n *\n * @par MT-Safe: No\n */\nenum vnode_sharing\nplace_sharing_type(char *place_str, enum vnode_sharing by)\n{\n\tenum vnode_sharing ret = VNS_UNSET;\n\n\tif (place_str == NULL)\n\t\treturn ret;\n\n\tif (by == VNS_FORCE_EXCL) {\n\t\tif (place_sharing_check(place_str, PLACE_Excl))\n\t\t\tret = VNS_FORCE_EXCL;\n\t} else if (by == VNS_FORCE_EXCLHOST) {\n\t\tif (place_sharing_check(place_str, PLACE_ExclHost))\n\t\t\tret = VNS_FORCE_EXCLHOST;\n\t} else if (by == VNS_IGNORE_EXCL) {\n\t\tif (place_sharing_check(place_str, PLACE_Shared))\n\t\t\tret = VNS_IGNORE_EXCL;\n\t}\n\n\treturn ret;\n}\n\n/**\n * @brief\n * \t\tAction function for the default queue check.\n *\n * @param[in]\tpattr\t-\tpointer to the default_queue attribute\n * @param[in]\tpobj\t-\tobject being considered\n * @param[in]\tactmode\t-\taction mode\n *\n * @return\tWhether the function completed successfully or not\n * @retval\tPBSE_NONE\t: when no errors are encountered\n * 
@retval\tPBSE_UNKQUE\t: Unknown queue name\n * @retval\tPBSE_INTERNAL\t: on an internal error\n *\n */\nint\ndefault_queue_chk(attribute *pattr, void *pobj, int actmode)\n{\n\tpbs_queue *pq = NULL;\n\n\tif (pattr == NULL) {\n\t\treturn (PBSE_INTERNAL);\n\t}\n\n\tif (actmode == ATR_ACTION_ALTER) {\n\t\tif (is_attr_set(pattr)) {\n\t\t\tpq = find_queuebyname(pattr->at_val.at_str);\n\t\t\tif (pq == NULL) {\n\t\t\t\treturn (PBSE_UNKQUE);\n\t\t\t}\n\t\t}\n\t}\n\treturn (PBSE_NONE);\n}\n\n/**\n *\n * @brief\n *\t\tMarks a connection flag that tells a qsub daemon that something has\n *\t\tchanged in the server, and its req_queuejob request needs to be redone.\n *\n */\nvoid\nforce_qsub_daemons_update(void)\n{\n\tconn_t *cp = NULL;\n\tif (svr_allconns.ll_next == NULL)\n\t\treturn;\n\tfor (cp = (conn_t *) GET_NEXT(svr_allconns); cp; cp = GET_NEXT(cp->cn_link)) {\n\t\tif (cp->cn_authen & PBS_NET_CONN_FROM_QSUB_DAEMON)\n\t\t\tcp->cn_authen |= PBS_NET_CONN_FORCE_QSUB_UPDATE;\n\t}\n}\n\n/**\n * @brief\n *\t\tThe action function for the \"default_qsub_arguments\" server\n *\t\tattribute, which tells qsub daemons to redo some req_queuejob\n *\t\toperation as this attribute has changed.\n *\n * @param[in]\tpattr\t-\ttarget \"default_qsub_arguments\" attribute value\n * @param[in]\tpobj\t-\tpointer to the parent object (required but unused here)\n * @param[in]\tactmode\t-\tthe action to take (e.g. 
ATR_ACTION_ALTER)\n *\n * @return\tWhether or not okay to set to new value.\n * @retval\tPBSE_NONE\t: Action is okay.\n * @retval\tPBSE_INTERNAL\t: for any error.\n */\nint\nforce_qsub_daemons_update_action(attribute *pattr, void *pobj, int actmode)\n{\n\tif (pattr == NULL) {\n\t\treturn (PBSE_INTERNAL);\n\t}\n\tforce_qsub_daemons_update();\n\n\treturn (PBSE_NONE);\n}\n\n/**\n * @brief\n *\t\tare_we_primary - determines the failover role, are we the Primary\n *\t\tServer, the Secondary Server or the only Server (no failover)\n *\n * @return  int\t\t\t- failover server role\n * @retval  FAILOVER_NONE\t\t- failover not configured\n * @retval  FAILOVER_PRIMARY\t\t- Primary Server\n * @retval  FAILOVER_SECONDARY\t\t- Secondary Server\n * @retval  FAILOVER_CONFIG_ERROR\t- error in pbs.conf configuration\n */\nenum failover_state\nare_we_primary(void)\n{\n\tchar hn1[PBS_MAXHOSTNAME + 1];\n\n\t/* both secondary and primary should be set or neither set */\n\tif ((pbs_conf.pbs_secondary == NULL) && (pbs_conf.pbs_primary == NULL))\n\t\treturn FAILOVER_NONE;\n\tif ((pbs_conf.pbs_secondary == NULL) || (pbs_conf.pbs_primary == NULL))\n\t\treturn FAILOVER_CONFIG_ERROR;\n\n\tif (get_fullhostname(pbs_conf.pbs_primary, primary_host, (sizeof(primary_host) - 1)) == -1) {\n\t\tlog_err(-1, \"pbsd_main\", \"Unable to get full host name of primary\");\n\t\treturn FAILOVER_CONFIG_ERROR;\n\t}\n\n\tif (strcmp(primary_host, server_host) == 0)\n\t\treturn FAILOVER_PRIMARY; /* we are the listed primary */\n\n\tif (get_fullhostname(pbs_conf.pbs_secondary, hn1, (sizeof(hn1) - 1)) == -1) {\n\t\tlog_err(-1, \"pbsd_main\", \"Unable to get full host name of secondary\");\n\t\treturn FAILOVER_CONFIG_ERROR;\n\t}\n\tif (strcmp(hn1, server_host) == 0)\n\t\treturn FAILOVER_SECONDARY; /* we are the secondary */\n\n\treturn FAILOVER_CONFIG_ERROR; /* we matched neither primary nor secondary */\n}\n\n/**\n * @brief\n * \t\tdumps the heap memory usage into the server log every 10 minutes.\n *\n * @param[in]\tptask\t-\tpointer 
to the work task\n *\n * @return\tvoid\n *\n * @par MT-Safe: Yes\n * @par Side Effects: None\n *\n */\nvoid\nmemory_debug_log(struct work_task *ptask)\n{\n\n\tif (ptask)\n\t\t(void) set_task(WORK_Timed, time_now + 600, memory_debug_log, NULL);\n\tif (!will_log_event(PBSEVENT_DEBUG4))\n\t\treturn;\n\tsnprintf(log_buffer, LOG_BUF_SIZE, \"MEM_DEBUG: sbrk: %zu\", (size_t) sbrk(0));\n\tlog_event(PBSEVENT_DEBUG4, PBS_EVENTCLASS_SERVER, LOG_DEBUG, msg_daemonname, log_buffer);\n#ifdef HAVE_MALLOC_INFO\n\tchar *buf;\n\tbuf = get_mem_info();\n\tif (buf) {\n\t\tlog_event(PBSEVENT_DEBUG4, PBS_EVENTCLASS_SERVER, LOG_DEBUG, msg_daemonname, buf);\n\t\tfree(buf);\n\t}\n#endif /* malloc_info */\n}\n\n/**\n * @brief\n *\t\tGet list of deferred requests for a particular scheduler.\n * \t\tIf the list does not exist yet and the 'create' is TRUE,\n * \t\tthen create the list.\n *\n * @param[in]\tpsched\t-\tscheduler structure to identify list of deferred requests.\n * @param[in]\tcreate\t-\tboolean - if true, create non-existing list\n *\n * @return\tList of deferred requests for the particular scheduler\n * @retval\tNULL\t: list not found or not created\n * @retval\tpbs_list_head*\t: list of scheduler deferred requests.\n */\npbs_list_head *\nfetch_sched_deferred_request(pbs_sched *psched, bool create)\n{\n\tstruct sched_deferred_request *psdefr;\n\n\tfor (psdefr = (struct sched_deferred_request *) GET_NEXT(svr_deferred_req);\n\t     psdefr;\n\t     psdefr = (struct sched_deferred_request *) GET_NEXT(psdefr->sdr_link)) {\n\t\tif (psdefr->sdr_psched == psched)\n\t\t\tbreak;\n\t}\n\n\tif (psdefr) {\n\t\treturn &psdefr->sdr_deferred_req;\n\t}\n\n\tif (create == FALSE) {\n\t\treturn NULL;\n\t}\n\n\tpsdefr = (struct sched_deferred_request *) malloc(sizeof(struct sched_deferred_request));\n\tif (psdefr == NULL) {\n\t\tlog_err(-1, __func__, \"Failed to allocate memory.\");\n\t\treturn NULL;\n\t}\n\tCLEAR_LINK(psdefr->sdr_link);\n\tCLEAR_HEAD(psdefr->sdr_deferred_req);\n\tpsdefr->sdr_psched = 
psched;\n\tappend_link(&svr_deferred_req, &psdefr->sdr_link, psdefr);\n\n\treturn &psdefr->sdr_deferred_req;\n}\n\n/**\n * @brief\n *\t\tRemove list of deferred requests for a particular scheduler\n *\t\tif the list is empty.\n *\n * @param[in]\tpsched\t-\tscheduler structure to identify list of deferred requests.\n *\n */\nvoid\nclear_sched_deferred_request(pbs_sched *psched)\n{\n\tstruct sched_deferred_request *psdefr;\n\n\tfor (psdefr = (struct sched_deferred_request *) GET_NEXT(svr_deferred_req);\n\t     psdefr;\n\t     psdefr = (struct sched_deferred_request *) GET_NEXT(psdefr->sdr_link)) {\n\t\tif (psdefr->sdr_psched == psched)\n\t\t\tbreak;\n\t}\n\n\tif (psdefr && GET_NEXT(psdefr->sdr_deferred_req) == NULL) {\n\t\t/* no more requests in psdefr->sdr_deferred_req\n\t\t * lets remove the scheduler related list\n\t\t */\n\t\tdelete_link(&psdefr->sdr_link);\n\t\tfree(psdefr);\n\t}\n}\n\n/**\n * @brief\n * \t\taction_clear_topjob_estimates - action routine for the server's\n * \t\t\"clear_topjob_estimates_enable\" attribute.\n *\n * @param[in]\tpattr\t-\tpointer to attribute structure\n * @param[in]\tpobj\t-\tnot used\n * @param[in]\tactmode\t-\taction mode\n *\n * @return\tint\n * @retval\tzero\t: success\n * @retval\tnonzero\t: failure\n */\nint\naction_clear_topjob_estimates(attribute *pattr, void *pobj, int actmode)\n{\n\tif (actmode == ATR_ACTION_NEW ||\n\t    actmode == ATR_ACTION_ALTER) {\n\n\t\tif (is_attr_set(pattr) && pattr->at_val.at_long) {\n\t\t\tjob *pjob = (job *) GET_NEXT(svr_alljobs);\n\t\t\tfor (; pjob; pjob = (job *) GET_NEXT(pjob->ji_alljobs)) {\n\t\t\t\tif (check_job_substate(pjob, JOB_SUBSTATE_FINISHED)) {\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\n\t\t\t\tif (get_jattr_long(pjob, JOB_ATR_topjob)) {\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\n\t\t\t\tif (is_jattr_set(pjob, JOB_ATR_estimated)) {\n\t\t\t\t\tclear_jattr(pjob, JOB_ATR_estimated);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\treturn PBSE_NONE;\n}\n"
  },
  {
    "path": "src/server/svr_jobfunc.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n *\n * @brief\n * \t\tcontains server functions dealing with jobs\n *\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <unistd.h>\n#include <fcntl.h>\n#include <assert.h>\n#include <errno.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <errno.h>\n#include <string.h>\n#include <ctype.h>\n#include <time.h>\n#include <math.h>\n#include <netdb.h>\n#include <signal.h>\n#include <sys/types.h>\n#include <sys/socket.h>\n#include <sys/poll.h>\n#include <netinet/in.h>\n#include <arpa/inet.h>\n#include \"pbs_ifl.h\"\n#include \"libutil.h\"\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"libpbs.h\"\n#include \"credential.h\"\n#include \"batch_request.h\"\n#include \"resource.h\"\n#include \"server.h\"\n#include \"work_task.h\"\n#include \"resv_node.h\"\n#include \"queue.h\"\n#include \"job.h\"\n#include \"pbs_sched.h\"\n#include \"reservation.h\"\n#include \"pbs_error.h\"\n#include \"log.h\"\n#include \"acct.h\"\n#include \"pbs_idx.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"sched_cmds.h\"\n#include \"dis.h\"\n#include \"libsec.h\"\n#include \"pbs_license.h\"\n#include \"pbs_reliable.h\"\n#include <sys/wait.h>\n\n#define MIN_WALLTIME_LIMIT 0\n#define MAX_WALLTIME_LIMIT 1\n\nchar statechars[] = \"TQHWREXBMF\";\n\n/* Private Functions 
*/\n\nstatic void default_std(job *, int key, char *to);\nstatic void Time4reply(struct work_task *);\nstatic void Time4resv(struct work_task *);\nstatic void Time4resv1(struct work_task *);\nstatic void resvFinishReply(struct work_task *);\nint change_enableORstart(resc_resv *, int, char *);\nstatic void handle_qmgr_reply_to_startORenable(struct work_task *);\nstatic void delete_occurrence_jobs(resc_resv *presv);\nstatic void Time4occurrenceFinish(resc_resv *);\nstatic void running_jobs_count(struct work_task *);\n\n/* Global Data Items: */\nextern char *msg_noloopbackif;\nextern char *msg_mombadmodify;\n\nextern struct server server;\nextern int pbs_mom_port;\nextern pbs_list_head svr_alljobs;\nextern char *msg_badwait; /* error message */\nextern char *msg_daemonname;\nextern char *msg_also_deleted_job_history;\nextern char server_name[];\nextern pbs_list_head svr_queues;\nextern int comp_resc_lt;\nextern int comp_resc_gt;\nextern time_t time_now;\nextern char *resc_in_err;\n\nextern struct licenses_high_use usedlicenses;\n\n/* For history jobs only */\nextern long svr_history_enable;\nextern long svr_history_duration;\n\n/* Work Task Handlers */\n\nextern void resv_retry_handler(struct work_task *);\nextern long determine_resv_retry(resc_resv *);\n\n/* external functions */\nextern void free_job_work_tasks(job *);\n\n/* Private Functions */\n\n#ifndef NDEBUG\nstatic void correct_ct(pbs_queue *);\n#endif /* NDEBUG */\n\n/**\n * @brief\n * \t\tclear the default resource from structures\n *\n * @param[in]\tpjob\t-\tThe job to be enqueued.\n */\nstatic void\nclear_default_resc(job *pjob)\n{\n\tresource *presc;\n\n\tif (is_jattr_set(pjob, JOB_ATR_resource)) {\n\t\tpresc = (resource *) GET_NEXT(get_jattr_list(pjob, JOB_ATR_resource));\n\t\twhile (presc) {\n\t\t\tif (presc->rs_value.at_flags & ATR_VFLAG_DEFLT)\n\t\t\t\tpresc->rs_defin->rs_free(&presc->rs_value);\n\t\t\tpresc = (resource *) GET_NEXT(presc->rs_link);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * 
\t\ttickle_for_reply ()\n * \t\tFor internally generated requests to the server we would like\n * \t\tprocessing of the reply from the particular server subsystem\n * \t\tto happen as \"soon\" as the server gets back to its main loop -\n * \t\tsee server's main loop and \"next_task()\" and variable, \"waittime\".\n * \t\tBy placing a do nothing task on the \"timed_task_list\" whose time\n * \t\tis now (or already passed), we can get next_task() to look at the\n * \t\t\"task_list_immed\" tasks now rather than wait for a while\n */\nvoid\ntickle_for_reply(void)\n{\n\t(void) set_task(WORK_Timed, time_now - 1, 0, NULL);\n}\n\n/**\n * @brief\n * \t\tsvr_enquejob\t-\tEnqueue the job into specified queue.\n *\n * @param[in]\tpjob\t-\tThe job to be enqueued.\n * @param[in]\tselectspec -\tselect spec of the job.\n *\n * @return\tint\n * @retval\t0\t: on success\n * @retval\tPBSE\t: specified error number.\n *\n * @par MT-Safe:\tno\n *\n * @par Note:\n *\t\tEnqueue the job to the specific queue and update the queue state.\n *\t\tUpdate default attributes and resources specific to job type.\n */\nint\nsvr_enquejob(job *pjob, char *selectspec)\n{\n\tjob *pjcur;\n\tpbs_queue *pque;\n\tint rc;\n\tpbs_sched *psched;\n\tint state_num;\n\tchar *qtype;\n\tchar hook_msg[HOOK_MSG_SIZE] = {0};\n\n\tstate_num = get_job_state_num(pjob);\n\n\t/* make sure queue is still there, there exists a small window ... */\n\n\tpque = find_queuebyname(pjob->ji_qs.ji_queue);\n\tif (pque == NULL) {\n\t\t/*\n\t\t * If it is a history job, then don't return PBSE_UNKQUE\n\t\t * error but link the job to SERVER job list and update\n\t\t * job history timestamp and subjob state table and return\n\t\t * 0 (SUCCESS). 
INFO: The job is not associated with any\n\t\t * queue as the queue has been already purged.\n\t\t */\n\t\tif ((check_job_state(pjob, JOB_STATE_LTR_MOVED)) ||\n\t\t    (check_job_state(pjob, JOB_STATE_LTR_FINISHED))) {\n\t\t\tif (is_linked(&svr_alljobs, &pjob->ji_alljobs) == 0) {\n\t\t\t\tif (pbs_idx_insert(jobs_idx, pjob->ji_qs.ji_jobid, pjob) != PBS_IDX_RET_OK) {\n\t\t\t\t\tlog_joberr(PBSE_INTERNAL, __func__, \"Failed add history job in index\", pjob->ji_qs.ji_jobid);\n\t\t\t\t\treturn PBSE_INTERNAL;\n\t\t\t\t}\n\t\t\t\tappend_link(&svr_alljobs, &pjob->ji_alljobs, pjob);\n\t\t\t}\n\t\t\tserver.sv_qs.sv_numjobs++;\n\t\t\tif (state_num != -1)\n\t\t\t\tserver.sv_jobstates[state_num]++;\n\t\t\treturn (0);\n\t\t} else {\n\t\t\treturn (PBSE_UNKQUE);\n\t\t}\n\t}\n\n\t/* add job to server's all job list and update server counts */\n\n#ifndef NDEBUG\n\t(void) sprintf(log_buffer, \"enqueuing into %s, state %c hop %ld\",\n\t\t       pque->qu_qs.qu_name, get_job_state(pjob),\n\t\t       get_jattr_long(pjob, JOB_ATR_hopcount));\n\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n#endif /* NDEBUG */\n\n\tif (pbs_idx_insert(jobs_idx, pjob->ji_qs.ji_jobid, pjob) != PBS_IDX_RET_OK) {\n\t\tlog_joberr(PBSE_INTERNAL, __func__, \"Failed add job in index\", pjob->ji_qs.ji_jobid);\n\t\treturn PBSE_INTERNAL;\n\t}\n\n\tpjcur = (job *) GET_PRIOR(svr_alljobs);\n\twhile (pjcur) {\n\t\tif (get_jattr_ll(pjob, JOB_ATR_qrank) >= get_jattr_ll(pjcur, JOB_ATR_qrank))\n\t\t\tbreak;\n\t\tpjcur = (job *) GET_PRIOR(pjcur->ji_alljobs);\n\t}\n\tif (pjcur == 0) {\n\t\t/* link first in server's list */\n\t\tinsert_link(&svr_alljobs, &pjob->ji_alljobs, pjob,\n\t\t\t    LINK_INSET_AFTER);\n\t} else {\n\t\t/* link after 'current' job in server's list */\n\t\tinsert_link(&pjcur->ji_alljobs, &pjob->ji_alljobs, pjob,\n\t\t\t    LINK_INSET_AFTER);\n\t}\n\n\tserver.sv_qs.sv_numjobs++;\n\tif (state_num != -1)\n\t\tserver.sv_jobstates[state_num]++;\n\n\t/* place 
into queue in order of queue rank starting at end */\n\n\tpjob->ji_qhdr = pque;\n\n\tpjcur = (job *) GET_PRIOR(pque->qu_jobs);\n\twhile (pjcur) {\n\t\tif (get_jattr_ll(pjob, JOB_ATR_qrank) >= get_jattr_ll(pjcur, JOB_ATR_qrank))\n\t\t\tbreak;\n\t\tpjcur = (job *) GET_PRIOR(pjcur->ji_jobque);\n\t}\n\tif (pjcur == 0) {\n\t\t/* link first in list */\n\t\tinsert_link(&pque->qu_jobs, &pjob->ji_jobque, pjob,\n\t\t\t    LINK_INSET_AFTER);\n\t} else {\n\t\t/* link after 'current' job in list */\n\t\tinsert_link(&pjcur->ji_jobque, &pjob->ji_jobque, pjob,\n\t\t\t    LINK_INSET_AFTER);\n\t}\n\n\t/* update counts: queue and queue by state */\n\n\tpque->qu_numjobs++;\n\tif (state_num != -1)\n\t\tpque->qu_njstate[state_num]++;\n\n\tif ((check_job_state(pjob, JOB_STATE_LTR_MOVED)) || (check_job_state(pjob, JOB_STATE_LTR_FINISHED))) {\n\t\treturn (0);\n\t}\n\n\t/* update the current location and type attribute */\n\tset_jattr_generic(pjob, JOB_ATR_in_queue, pque->qu_qs.qu_name, NULL, SET);\n\n\tif ((qtype = get_qattr_str(pque, QA_ATR_QType)) == NULL) {\n\t\tlog_eventf(PBSEVENT_ADMIN, PBS_EVENTCLASS_QUEUE, LOG_ERR,\n\t\t\t   pjob->ji_qs.ji_jobid, \"queue type must be set for queue `%s`\",\n\t\t\t   pque->qu_qs.qu_name);\n\t\treturn PBSE_NEEDQUET;\n\t}\n\tset_jattr_c_slim(pjob, JOB_ATR_queuetype, *qtype, SET);\n\n\tif (!is_jattr_set(pjob, JOB_ATR_qtime))\n\t\tset_jattr_l_slim(pjob, JOB_ATR_qtime, time_now, SET);\n\n\t/*\n\t * set any \"unspecified\" resources which have default values,\n\t * first with queue defaults, then with server defaults\n\t */\n\n\trc = set_resc_deflt((void *) pjob, JOB_OBJECT, NULL);\n\tif (rc)\n\t\treturn rc;\n\n\t/*\n\t * Ensure that all jobs have JOB_ATR_project set.\n\t * It could be unset if coming from an overlay upgrade.\n\t */\n\tif (!is_jattr_set(pjob, JOB_ATR_project))\n\t\tset_jattr_str_slim(pjob, JOB_ATR_project, PBS_DEFAULT_PROJECT, NULL);\n\n\t/* update any entity count and entity resources usage for the queue */\n\n\tif 
(!(pjob->ji_qs.ji_svrflags & JOB_SVFLG_SubJob) ||\n\t    (get_sattr_long(SVR_ATR_State) == SV_STATE_INIT))\n\t\taccount_entity_limit_usages(pjob, pque, NULL, INCR, ETLIM_ACC_ALL);\n\n\t/*\n\t * See if we need to do anything special based on type of queue\n\t */\n\n\tif (pque->qu_qs.qu_type == QTYPE_Execution) {\n\n\t\t/* set union to \"EXEC\" and clear mom's address */\n\n\t\tif (pjob->ji_qs.ji_un_type != JOB_UNION_TYPE_EXEC) {\n\t\t\tpjob->ji_qs.ji_un_type = JOB_UNION_TYPE_EXEC;\n\t\t\tpjob->ji_qs.ji_un.ji_exect.ji_momaddr = 0;\n\t\t\tpjob->ji_qs.ji_un.ji_exect.ji_momport = 0;\n\t\t\tpjob->ji_qs.ji_un.ji_exect.ji_exitstat = 0;\n\t\t}\n\n\t\t/* check the job checkpoint against the queue's  min */\n\n\t\teval_chkpnt(pjob, get_qattr(pque, QE_ATR_ChkptMin));\n\n\t\t/*\n\t\t * do anything needed doing regarding job dependencies,\n\t\t * ignore this during Server recovery as the dependency\n\t\t * was registered when the job was first enqueued.\n\t\t */\n\n\t\tif (get_sattr_long(SVR_ATR_State) != SV_STATE_INIT) {\n\t\t\tif (is_jattr_set(pjob, JOB_ATR_depend)) {\n\t\t\t\trc = depend_on_que(get_jattr(pjob, JOB_ATR_depend), pjob, ATR_ACTION_NOOP);\n\t\t\t\tif (rc)\n\t\t\t\t\treturn rc;\n\t\t\t}\n\t\t}\n\n\t\t/* set eligible time */\n\n\t\tif (!is_jattr_set(pjob, JOB_ATR_etime) && check_job_state(pjob, JOB_STATE_LTR_QUEUED)) {\n\t\t\tset_jattr_l_slim(pjob, JOB_ATR_etime, time_now, SET);\n\n\t\t\t/* better notify the Scheduler we have a new job */\n\t\t\tif (!selectspec) {\n\t\t\t\tif (find_assoc_sched_jid(pjob->ji_qs.ji_jobid, &psched))\n\t\t\t\t\tset_scheduler_flag(SCH_SCHEDULE_NEW, psched);\n\t\t\t\telse {\n\t\t\t\t\tsprintf(log_buffer, \"Unable to reach scheduler associated with job %s\", pjob->ji_qs.ji_jobid);\n\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t}\n\t\t\t}\n\t\t} else if (get_sattr_long(SVR_ATR_EligibleTimeEnable) && get_sattr_long(SVR_ATR_scheduling) && !selectspec) {\n\n\t\t\t/* notify the Scheduler we have moved a job here */\n\n\t\t\tif (find_assoc_sched_jid(pjob->ji_qs.ji_jobid, &psched))\n\t\t\t\tset_scheduler_flag(SCH_SCHEDULE_MVLOCAL, psched);\n\t\t\telse {\n\t\t\t\tsprintf(log_buffer, \"Unable to reach scheduler associated with job %s\", pjob->ji_qs.ji_jobid);\n\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t}\n\t\t}\n\n\t} else if (pque->qu_qs.qu_type == QTYPE_RoutePush) {\n\n\t\t/* start attempts to route job */\n\n\t\tpjob->ji_qs.ji_un_type = JOB_UNION_TYPE_ROUTE;\n\t\tpjob->ji_qs.ji_un.ji_routet.ji_quetime = time_now;\n\t\tpjob->ji_qs.ji_un.ji_routet.ji_rteretry = 0;\n\t}\n\n\t/* start postqueuejob hook */\n\n\tstruct batch_request *preq;\n\tpreq = alloc_br(PBS_BATCH_PostQueueJob);\n\tif (preq == NULL) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"failed to alloc_br for PBS_BATCH_PostQueueJob\");\n\t} else {\n\t\tpreq->rq_ind.rq_postqueuejob.rq_pjob = pjob;\n\t\tstrcpy(preq->rq_ind.rq_postqueuejob.rq_jid, pjob->ji_qs.ji_jobid);\n\t\tstrncpy(preq->rq_user, pbs_current_user, PBS_MAXUSER);\n\t\tstrncpy(preq->rq_host, server_host, PBS_MAXHOSTNAME);\n\n\t\trc = process_hooks(preq, hook_msg, sizeof(hook_msg), pbs_python_set_interrupt);\n\t\tif (rc == -1) {\n\t\t\tlog_eventf(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid,\n\t\t\t\t   \"postqueuejob process_hooks call failed: %s\", hook_msg);\n\t\t}\n\t\tfree_br(preq);\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tsvr_dequejob() - remove job from whatever queue it is in and reduce counts\n *\n * @param[in]\tpjob\t-\tThe job to be dequeued.\n */\n\nvoid\nsvr_dequejob(job *pjob)\n{\n\tint bad_ct = 0;\n\tpbs_queue *pque;\n\tint state_num;\n\n\t/* remove job from server's all job list and reduce server counts */\n\n\tif (is_linked(&svr_alljobs, 
&pjob->ji_alljobs)) {\n\t\tdelete_link(&pjob->ji_alljobs);\n\t\tdelete_link(&pjob->ji_unlicjobs);\n\t\tif (pbs_idx_delete(jobs_idx, pjob->ji_qs.ji_jobid) != PBS_IDX_RET_OK)\n\t\t\tlog_joberr(PBSE_INTERNAL, __func__, \"Failed to delete job from index\", pjob->ji_qs.ji_jobid);\n\t\tif (--server.sv_qs.sv_numjobs < 0)\n\t\t\tbad_ct = 1;\n\n\t\tstate_num = get_job_state_num(pjob);\n\t\tif (state_num != -1 && --server.sv_jobstates[state_num] < 0)\n\t\t\tbad_ct = 1;\n\t}\n\n\tif ((pque = pjob->ji_qhdr) != NULL) {\n\n\t\t/* update any entity count and entity resources usage at que */\n\n\t\taccount_entity_limit_usages(pjob, pque, NULL, DECR,\n\t\t\t\t\t    pjob->ji_etlimit_decr_queued ? ETLIM_ACC_ALL_MAX : ETLIM_ACC_ALL);\n\n\t\tif (is_linked(&pque->qu_jobs, &pjob->ji_jobque)) {\n\t\t\tdelete_link(&pjob->ji_jobque);\n\t\t\tif (--pque->qu_numjobs < 0)\n\t\t\t\tbad_ct = 1;\n\n\t\t\tstate_num = get_job_state_num(pjob);\n\t\t\tif (state_num != -1 && --pque->qu_njstate[state_num] < 0)\n\t\t\t\tbad_ct = 1;\n\t\t}\n\t\tpjob->ji_qhdr = NULL;\n\t}\n\n#ifndef NDEBUG\n\tsprintf(log_buffer, \"dequeuing from %s, state %c\",\n\t\tpque ? 
pque->qu_qs.qu_name : \"\", get_job_state(pjob));\n\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\tif (bad_ct) /* state counts are all messed up */\n\t\tcorrect_ct(pque);\n#endif /* NDEBUG */\n\n\tmark_jattr_not_set(pjob, JOB_ATR_qtime);\n\n\t/* clear any default resource values */\n\tclear_default_resc(pjob);\n}\n\n/**\n * @brief\n * \t\tsvr_setjobstate - set the job state, update the server/queue state counts,\n *\t\tand save the job\n *\n * @param[in,out]\tpjob\t-\tThe job to be operated on.\n * @param[in]\tnewstate\t-\tnew job state\n * @param[in]\tnewsubstate\t-\tnew sub state of the job.\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t!=0\t: failure\n */\n\nint\nsvr_setjobstate(job *pjob, char newstate, int newsubstate)\n{\n\tpbs_queue *pque = pjob->ji_qhdr;\n\tpbs_sched *psched;\n\n\t/*\n\t * If the job has already finished, then do not make any new changes\n\t * to job state or substate.\n\t */\n\tif (check_job_state(pjob, JOB_STATE_LTR_FINISHED) ||\n\t    (check_job_state(pjob, newstate) && (check_job_substate(pjob, newsubstate))))\n\t\treturn (0);\n\n\tlog_eventf(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_INFO, pjob->ji_qs.ji_jobid,\n\t\t   \"Updated job state to %c and substate to %d\", newstate, newsubstate);\n\n\t/*\n\t * if it is a new job, then don't update counts, svr_enquejob() will\n\t * take care of that, also req_commit() will see that the job is saved.\n\t */\n\n\tif (!check_job_substate(pjob, JOB_SUBSTATE_TRANSICM)) {\n\t\tchar oldstate = get_job_state(pjob);\n\n\t\t/* if the state is changing, also update the state counts */\n\n\t\tif (oldstate != newstate) {\n\t\t\tint oldstatenum;\n\t\t\tint newstatenum;\n\n\t\t\toldstatenum = state_char2int(oldstate);\n\t\t\tnewstatenum = state_char2int(newstate);\n\t\t\tif (oldstatenum != -1)\n\t\t\t\tserver.sv_jobstates[oldstatenum]--;\n\t\t\tif (newstatenum != -1)\n\t\t\t\tserver.sv_jobstates[newstatenum]++;\n\t\t\tif (pque != NULL) 
{\n\t\t\t\tif (oldstatenum != -1)\n\t\t\t\t\tpque->qu_njstate[oldstatenum]--;\n\t\t\t\tif (newstatenum != -1)\n\t\t\t\t\tpque->qu_njstate[newstatenum]++;\n\n\t\t\t\t/*\n\t\t\t\t * if execution queue, and eligibility to run\n\t\t\t\t * has improved, kick the scheduler.\n\t\t\t\t */\n\n\t\t\t\tif ((pque->qu_qs.qu_type == QTYPE_Execution) &&\n\t\t\t\t    (newstate == JOB_STATE_LTR_QUEUED)) {\n\t\t\t\t\tif (find_assoc_sched_jid(pjob->ji_qs.ji_jobid, &psched))\n\t\t\t\t\t\tset_scheduler_flag(SCH_SCHEDULE_NEW, psched);\n\t\t\t\t\telse {\n\t\t\t\t\t\tsprintf(log_buffer, \"Unable to reach scheduler associated with job %s\", pjob->ji_qs.ji_jobid);\n\t\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t\t}\n\n\t\t\t\t\tif (!is_jattr_set(pjob, JOB_ATR_etime))\n\t\t\t\t\t\tset_jattr_l_slim(pjob, JOB_ATR_etime, time_now, SET);\n\n\t\t\t\t\t/* clear start time (stime) */\n\t\t\t\t\tfree_jattr(pjob, JOB_ATR_stime);\n\n\t\t\t\t} else if ((newstate == JOB_STATE_LTR_HELD) || (newstate == JOB_STATE_LTR_WAITING)) {\n\t\t\t\t\t/* on hold or wait, clear etime */\n\t\t\t\t\tfree_jattr(pjob, JOB_ATR_etime);\n\t\t\t\t\t/* TODO: remove attr etime from database */\n\t\t\t\t}\n\t\t\t}\n\t\t\t/* if subjob, update parent Array Job */\n\t\t\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_SubJob) {\n\t\t\t\tupdate_sj_parent(pjob->ji_parentaj, pjob, pjob->ji_qs.ji_jobid, oldstate, newstate);\n\t\t\t\tchk_array_doneness(pjob->ji_parentaj);\n\t\t\t}\n\t\t}\n\t}\n\n\t/* set the states accordingly */\n\tset_job_state(pjob, newstate);\n\tset_job_substate(pjob, newsubstate);\n\n\t/* eligible_time_enable */\n\tif (get_sattr_long(SVR_ATR_EligibleTimeEnable) == 1) {\n\t\tlong newaccruetype;\n\n\t\tnewaccruetype = determine_accruetype(pjob);\n\t\tupdate_eligible_time(newaccruetype, pjob);\n\t}\n\n\t/* update the job file */\n\n\tif (newstate == JOB_STATE_LTR_RUNNING) {\n\t\tif (pjob->ji_etlimit_decr_queued == FALSE) {\n\t\t\taccount_entity_limit_usages(pjob, NULL, NULL, DECR, 
ETLIM_ACC_ALL_QUEUED);\n\t\t\taccount_entity_limit_usages(pjob, pjob->ji_qhdr, NULL, DECR, ETLIM_ACC_ALL_QUEUED);\n\t\t\tpjob->ji_etlimit_decr_queued = TRUE;\n\t\t}\n\t}\n\n\tif (pjob->newobj) {\n\t\t/* object was never saved/loaded before, so new object */\n\t\treturn 0;\n\t}\n\n\treturn (job_save_db(pjob));\n}\n\n/**\n * @brief\tHelper function that re-evaluates job state and sub state.\n *\n * @param\tjobp - pointer to the job\n *\n * @return\tvoid\n */\nvoid\nsvr_evalsetjobstate(job *jobp)\n{\n\tchar newstate;\n\tint newsub;\n\n\t/* force re-eval of job state out of Transit */\n\tsvr_evaljobstate(jobp, &newstate, &newsub, 1);\n\tsvr_setjobstate(jobp, newstate, newsub);\n}\n\n/**\n * @brief\n * \t\tsvr_evaljobstate - evaluate and return the job state and substate\n *\t\taccording to the values of the hold, execution time, and\n *\t\tdependency attributes.  This is typically called after the job has been\n *\t\tenqueued or the (hold, execution-time) attributes have been modified.\n * @par\n *\t\tIF the job is a history job i.e. job state is JOB_STATE_MOVED\n *\t\tor JOB_STATE_FINISHED, then just return state/substate without\n *\t\tany change, irrespective of the value of \"forceeval\".\n *\n * @param[in]\tpjob\t-\tpointer to the job structure\n * @param[out]\tnewstate\t-\trecommended new state for job\n * @param[out]\tnewsub\t-\trecommended new substate for job\n * @param[in]\tforceeval\t-\twhether to forcefully change the value or not.\n *\n * @return\tvoid\n */\nvoid\nsvr_evaljobstate(job *pjob, char *newstate, int *newsub, int forceeval)\n{\n\t/*\n\t * A value MUST be assigned to newstate and newsub because\n\t * they may have been passed in uninitialized. We MUST put\n\t * the job in a valid state or the scheduler will bail out\n\t * on subsequent cycles and not schedule ANY work. 
The\n\t * safest thing to do is to hold the job by default.\n\t */\n\t*newstate = JOB_STATE_LTR_HELD;\n\t*newsub = JOB_SUBSTATE_HELD;\n\n\tif ((check_job_state(pjob, JOB_STATE_LTR_MOVED)) ||\n\t    (check_job_state(pjob, JOB_STATE_LTR_FINISHED))) {\n\n\t\t/* History job, just return state/sub-state. */\n\t\t*newstate = get_job_state(pjob);\n\t\t*newsub = get_job_substate(pjob);\n\n\t} else if ((forceeval == 0) &&\n\t\t   (check_job_state(pjob, JOB_STATE_LTR_TRANSIT) ||\n\t\t    check_job_state(pjob, JOB_STATE_LTR_RUNNING))) {\n\n\t\t/* Leave as is. */\n\t\t*newstate = get_job_state(pjob);\n\t\t*newsub = get_job_substate(pjob);\n\t} else if (get_jattr_long(pjob, JOB_ATR_hold)) {\n\n\t\t*newstate = JOB_STATE_LTR_HELD;\n\t\t/* is the hold due to a dependency? */\n\t\tif ((check_job_substate(pjob, JOB_SUBSTATE_SYNCHOLD)) ||\n\t\t    (check_job_substate(pjob, JOB_SUBSTATE_DEPNHOLD))) {\n\t\t\t/* Retain substate. */\n\t\t\t*newsub = get_job_substate(pjob);\n\t\t} else {\n\t\t\t*newsub = JOB_SUBSTATE_HELD;\n\t\t}\n\n\t} else if (get_jattr_long(pjob, JOB_ATR_exectime) > (long) time_now) {\n\n\t\t*newstate = JOB_STATE_LTR_WAITING;\n\t\t*newsub = JOB_SUBSTATE_WAITING;\n\n\t} else if (is_jattr_set(pjob, JOB_ATR_stagein)) {\n\n\t\t*newstate = JOB_STATE_LTR_QUEUED;\n\t\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_StagedIn) {\n\t\t\t*newsub = JOB_SUBSTATE_STAGECMP;\n\t\t} else {\n\t\t\t*newsub = JOB_SUBSTATE_PRESTAGEIN;\n\t\t}\n\n\t} else {\n\n\t\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_ArrayJob) {\n\t\t\t/* This is an array job. */\n\t\t\tajinfo_t *ptbl = pjob->ji_ajinfo;\n\t\t\tif (ptbl) {\n\t\t\t\tif (ptbl->tkm_subjsct[JOB_STATE_QUEUED] + ptbl->tkm_dsubjsct < ptbl->tkm_ct) {\n\t\t\t\t\t*newstate = JOB_STATE_LTR_BEGUN;\n\t\t\t\t\t*newsub = JOB_SUBSTATE_BEGUN;\n\t\t\t\t} else {\n\t\t\t\t\t/* All subjobs are queued. 
*/\n\t\t\t\t\t*newstate = JOB_STATE_LTR_QUEUED;\n\t\t\t\t\t*newsub = JOB_SUBSTATE_QUEUED;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tsprintf(log_buffer, \"Array job has no tracking table!\");\n\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_ERR,\n\t\t\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\t*newstate = JOB_STATE_LTR_HELD;\n\t\t\t\t*newsub = JOB_SUBSTATE_HELD;\n\t\t\t}\n\t\t} else {\n\t\t\t*newstate = JOB_STATE_LTR_QUEUED;\n\t\t\t*newsub = JOB_SUBSTATE_QUEUED;\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\tget_variable - get the value associated with a specified environment\n *\t\tvariable of a job\n *\n * @param[in]\tpjob\t-\tpointer to the job object\n * @param[in]\tvariable\t-\tstring variable which needs to be searched in object attribute.\n *\n * @return\tpointer to the start of the value\n * @retval\tNULL\t: if the variable is not found in the variable_list attribute.\n */\n\nchar *\nget_variable(job *pjob, char *variable)\n{\n\tchar *pc;\n\n\tpc = arst_string(variable, get_jattr(pjob, JOB_ATR_variables));\n\tif (pc) {\n\t\tif ((pc = strchr(pc, (int) '=')) != 0)\n\t\t\tpc++;\n\t}\n\treturn (pc);\n}\n\n/**\n * @brief\n * \t\tlookup_variable - lookup the value of a particular environment variable\n *\t\tassociated with the object.\n *\n * @param[in]\tpobj\t-\tpointer to the object structure\n * @param[in]\tobjtype\t-\tobject type\n * @param[in]\tvariable\t-\tstring variable which needs to be searched in object attribute.\n *\n * @return\ta pointer to the beginning of the value string\n * @retval\tNULL\t: pointer if the variable isn't found in the object's \"variable_list\"\n */\n\nchar *\nlookup_variable(void *pobj, int objtype, char *variable)\n{\n\tchar *pc;\n\tattribute *objattr;\n\n\tif (objtype == JOB_OBJECT)\n\t\tobjattr = get_jattr((job *) pobj, JOB_ATR_variables);\n\telse\n\t\tobjattr = get_rattr((resc_resv *) pobj, RESV_ATR_variables);\n\n\tpc = arst_string(variable, objattr);\n\tif (pc) {\n\t\tif ((pc = strchr(pc, (int) '=')) != 
0)\n\t\t\tpc++;\n\t}\n\treturn (pc);\n}\n\n/**\n * @brief\n * \t\tcompare the job resource limits against the system limits;\n * \t\tif a queue limit exists, it takes priority\n *\n * @param[in]\tjobatr\t-\tjob attribute\n * @param[in]\tqueatr\t-\tresource (value) entry\n * @param[in]\tsvratr\t-\tserver attribute.\n * @param[in]\tqtype\t-\ttype of queue (not used here)\n *\n * @return\tnumber of .gt. and .lt. comparison in comp_resc_gt and comp_resc_lt\n * \t\t\tdoes not make use of comp_resc_eq or comp_resc_nc\n */\n\nstatic void\nchk_svr_resc_limit(attribute *jobatr, attribute *queatr,\n\t\t   attribute *svratr, int qtype)\n{\n\tint rc;\n\tresource *jbrc;\n\tresource *qurc;\n\tresource *svrc;\n\tresource *cmpwith;\n\tstatic resource_def *noderesc = NULL;\n\n\tif (noderesc == NULL) {\n\t\tnoderesc = &svr_resc_def[RESC_NODES];\n\t}\n\tcomp_resc_gt = 0;\n\tcomp_resc_lt = 0;\n\n\tjbrc = (resource *) GET_NEXT(jobatr->at_val.at_list);\n\twhile (jbrc) {\n\t\tcmpwith = 0;\n\t\tif (is_attr_set(&jbrc->rs_value)) {\n\t\t\tqurc = find_resc_entry(queatr, jbrc->rs_defin);\n\t\t\tif ((qurc == 0) ||\n\t\t\t    ((is_attr_set(&qurc->rs_value)) == 0)) {\n\t\t\t\t/* queue limit not set, check server's */\n\n\t\t\t\tsvrc = find_resc_entry(svratr, jbrc->rs_defin);\n\t\t\t\tif ((svrc != 0) &&\n\t\t\t\t    (is_attr_set(&svrc->rs_value))) {\n\t\t\t\t\tcmpwith = svrc;\n\t\t\t\t}\n\n\t\t\t} else {\n\t\t\t\t/* queue limit is set, use it */\n\t\t\t\tcmpwith = qurc;\n\t\t\t}\n\n\t\t\tif ((jbrc->rs_defin != noderesc) && cmpwith) {\n\t\t\t\trc = jbrc->rs_defin->rs_comp(&cmpwith->rs_value,\n\t\t\t\t\t\t\t     &jbrc->rs_value);\n\t\t\t\tif (rc > 0)\n\t\t\t\t\tcomp_resc_gt++;\n\t\t\t\telse if (rc < 0)\n\t\t\t\t\tcomp_resc_lt++;\n\t\t\t}\n\t\t}\n\t\tjbrc = (resource *) GET_NEXT(jbrc->rs_link);\n\t}\n}\n/**\n * @brief\n * \t\tget_wt_limit - get limit set on walltime from the list of resource limits.\n *\n * @param[in]\tplimit_attr\t-\tlist of resource limits\n * @param[out]\twt_attr\t-\tA pointer to 
walltime limit\n *\n * @return\tint\n * @retval\t0\t: if walltime limit is set\n * @retval \t1\t: no walltime limit is set\n */\n\nint\nget_wt_limit(attribute *plimit_attr, attribute *wt_attr)\n{\n\tresource *wiresc = NULL;\n\tif (plimit_attr == NULL || wt_attr == NULL)\n\t\treturn 1;\n\t/* Search the resource limit list for a set \"walltime\" entry */\n\twiresc = (resource *) GET_NEXT(plimit_attr->at_val.at_list);\n\twhile (wiresc != NULL) {\n\t\tif ((strcasecmp(wiresc->rs_defin->rs_name, WALLTIME) == 0) && (is_attr_set(&wiresc->rs_value))) {\n\t\t\t*wt_attr = wiresc->rs_value;\n\t\t\treturn 0;\n\t\t}\n\t\twiresc = (resource *) GET_NEXT(wiresc->rs_link);\n\t}\n\treturn 1;\n}\n/**\n * @brief\n * \t\tcomp_wt_limits_STF - check a job's min_walltime OR max_walltime against\n * \t\tconfigured \"walltime\" limits.\n *\n * @param[in]\tresc_minmaxwt\t-\tresource(min_walltime or max_walltime) to be compared\n * @param[in] \tlimit_attr\t-\tattribute containing resource limit\n * @param[in] \tmin_or_max\t-\tcheck against minimum walltime limit if MIN_WALLTIME_LIMIT,\n * \t\t\t\t\t\t\t\telse check against maximum walltime limit\n * @return\tint\n * @retval \t0\t: within limits OR if resc_minmaxwt == NULL OR is unset.\n * @retval\tPBSE_EXCQRESC\t: not within limits\n */\nint\ncomp_wt_limits_STF(resource *resc_minmaxwt, attribute limit_attr, int min_or_max)\n{\n\tint rc = 0;\n\n\tif (resc_minmaxwt == NULL || !(is_attr_set(&resc_minmaxwt->rs_value)))\n\t\treturn 0;\n\n\t/* Check minimum walltime limit if min_or_max == MIN_WALLTIME_LIMIT */\n\tif (min_or_max == MIN_WALLTIME_LIMIT) {\n\t\tif ((rc = resc_minmaxwt->rs_defin->rs_comp(&(resc_minmaxwt->rs_value), &limit_attr)) < 0)\n\t\t\treturn (PBSE_EXCQRESC);\n\t} else {\n\t\t/* Check maximum walltime limit */\n\t\tif ((rc = resc_minmaxwt->rs_defin->rs_comp(&(resc_minmaxwt->rs_value), &limit_attr)) > 0)\n\t\t\treturn (PBSE_EXCQRESC);\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tchk_wt_limits_STF - check a STF job's min and max walltime 
against the queue\n * \t\tand server maximum and minimum \"walltime\" limits.\n * \t\tmax_walltime will be set to resources_max.walltime (if set), if resc_maxwt == NULL\n *\n * @param[in]\tresc_minwt\t-\tresource_list.min_walltime\n * @param[in]\tresc_maxwt\t-\tresource_list.max_walltime\n * @param[in]\tpque\t-\tqueue\n * @param[out]\tpattr\t-\tresource_list; resource_list.max_walltime will be set if resc_maxwt is NULL\n * @return\tint\n * @retval \t0\t: within limits OR if resc_minwt == NULL\n * @retval\tPBSE_EXCQRESC\t: not within limits\n */\nint\nchk_wt_limits_STF(resource *resc_minwt, resource *resc_maxwt, pbs_queue *pque, attribute *pattr)\n{\n\tattribute wt_min_queue_limit;\n\tattribute wt_max_queue_limit;\n\tattribute wt_max_server_limit;\n\tresource *new_res = NULL;\n\tresource_def *rscdef = NULL;\n\tint have_max_queue_limit = 0;\n\tint have_max_server_limit = 0;\n\n\tif (resc_minwt == NULL)\n\t\treturn 0;\n\t/* The following should be true:\n\t min_walltime >= resources_min.walltime\n\t min_walltime <= resources_max.walltime\n\t max_walltime >= resources_min.walltime\n\t max_walltime <= resources_max.walltime\n\t */\n\t/* Check against queue maximum */\n\tif (pque && get_wt_limit(get_qattr(pque, QA_ATR_ResourceMax), &wt_max_queue_limit) == 0)\n\t\thave_max_queue_limit = 1;\n\t/* Check server maximum limit only if queue maximum limit is not present */\n\tif (!have_max_queue_limit && pque && get_wt_limit(get_sattr(SVR_ATR_ResourceMax), &wt_max_server_limit) == 0)\n\t\thave_max_server_limit = 1;\n\n#ifndef NAS /* localmod 026 */\n\t/* If resc_maxwt is NULL and resources_max.walltime is set on either server/queue,\n\t * set resource_list.max_walltime on the server/queue to value of resources_max.walltime.\n\t * If resources_max.walltime is set on both server and queue, set resource_list.max_walltime\n\t * to queue's resources_max.walltime. 
*/\n\tif (resc_maxwt == NULL && pattr != NULL && (have_max_queue_limit || have_max_server_limit)) {\n\t\trscdef = &svr_resc_def[RESC_MAX_WALLTIME];\n\t\tnew_res = add_resource_entry(pattr, rscdef);\n\t\tif (new_res == NULL)\n\t\t\treturn (PBSE_SYSTEM); /* could not add max_walltime entry */\n\t\tif (have_max_queue_limit)\n\t\t\tnew_res->rs_defin->rs_set(&new_res->rs_value, &wt_max_queue_limit, SET);\n\t\telse if (have_max_server_limit)\n\t\t\tnew_res->rs_defin->rs_set(&new_res->rs_value, &wt_max_server_limit, SET);\n\t\tmark_attr_set(&new_res->rs_value);\n\t}\n#endif /* localmod 026 */\n\t/* Check against queue maximum */\n\tif (have_max_queue_limit) {\n\t\tif (PBSE_EXCQRESC == comp_wt_limits_STF(resc_minwt,\n\t\t\t\t\t\t\twt_max_queue_limit, MAX_WALLTIME_LIMIT) ||\n\t\t    PBSE_EXCQRESC == comp_wt_limits_STF(resc_maxwt,\n\t\t\t\t\t\t\twt_max_queue_limit, MAX_WALLTIME_LIMIT))\n\t\t\treturn (PBSE_EXCQRESC);\n\t}\n\t/* Queue limit not present, check against server maximum */\n\telse if (have_max_server_limit) {\n\t\tif ((PBSE_EXCQRESC == comp_wt_limits_STF(resc_maxwt,\n\t\t\t\t\t\t\t wt_max_server_limit, MAX_WALLTIME_LIMIT) ||\n\t\t     PBSE_EXCQRESC == comp_wt_limits_STF(resc_minwt,\n\t\t\t\t\t\t\t wt_max_server_limit, MAX_WALLTIME_LIMIT)))\n\t\t\treturn (PBSE_EXCQRESC);\n\t}\n\t/* Check against queue minimum */\n\tif (pque && (get_wt_limit(get_qattr(pque, QA_ATR_ResourceMin), &wt_min_queue_limit) == 0)) {\n\t\tif (PBSE_EXCQRESC == comp_wt_limits_STF(resc_minwt,\n\t\t\t\t\t\t\twt_min_queue_limit, MIN_WALLTIME_LIMIT) ||\n\t\t    PBSE_EXCQRESC == comp_wt_limits_STF(resc_maxwt,\n\t\t\t\t\t\t\twt_min_queue_limit, MIN_WALLTIME_LIMIT))\n\t\t\treturn (PBSE_EXCQRESC);\n\t}\n\treturn 0;\n}\n/**\n * @brief\n * \t\tchk_resc_limits - check job Resource_Limits attribute against the queue\n *\t\tand server maximum and minimum values.\n *\n * @param[in]\tpattr\t-\tattribute list containing resource request of the job\n * @param[in]\tpque\t-\tqueue\n *\n * @return\tint\n * @retval \t0\t: within limits\n * @retval \tPBSE_EXCQRESC\t: not within limits\n 
*/\nint\nchk_resc_limits(attribute *pattr, pbs_queue *pque)\n{\n\tresource *atresc;\n\tresource *resc_maxwt = NULL;\n\tresource *resc_minwt = NULL;\n\n\t/* Get resource_list.min_walltime and resource_list.max_walltime if it is a STF job */\n\tatresc = (resource *) GET_NEXT(pattr->at_val.at_list);\n\twhile (atresc != NULL) {\n\t\tif ((strcasecmp(atresc->rs_defin->rs_name, MIN_WALLTIME) == 0)) {\n\t\t\tresc_minwt = atresc;\n\t\t} else if ((strcasecmp(atresc->rs_defin->rs_name, MAX_WALLTIME) == 0)) {\n\t\t\tresc_maxwt = atresc;\n\t\t}\n\t\t/* No need to traverse further if both min_walltime and max_walltime are set */\n\t\tif (resc_minwt && resc_maxwt)\n\t\t\tbreak;\n\t\tatresc = (resource *) GET_NEXT(atresc->rs_link);\n\t}\n\n\t/* Check min and max walltime of a STF job against \"walltime\" resource limit on queue and server */\n\tif (resc_minwt != NULL && PBSE_EXCQRESC == chk_wt_limits_STF(resc_minwt, resc_maxwt, pque, pattr))\n\t\treturn (PBSE_EXCQRESC);\n\tif ((comp_resc(get_qattr(pque, QA_ATR_ResourceMin), pattr) == -1) ||\n\t    comp_resc_gt)\n\t\treturn (PBSE_EXCQRESC);\n\n\t/* now check individual resources against queue or server maximum */\n\tchk_svr_resc_limit(pattr,\n\t\t\t   get_qattr(pque, QA_ATR_ResourceMax),\n\t\t\t   get_sattr(SVR_ATR_ResourceMax),\n\t\t\t   pque->qu_qs.qu_type);\n\n\tif (comp_resc_lt > 0)\n\t\treturn (PBSE_EXCQRESC);\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tsvr_chkque - check if job can enter a queue\n *\n * @note\n * \t\tNote: job owner must be set before calling svr_chkque()\n * \t\tset_objexid() will be called to set a uid/gid/name if not already set\n *\n * @param[in]\tpjob\t-\tjob structure\n * @param[in]\tpque\t-\tqueue the job is to enter\n * @param[in]\tsubmithost\t-\tjob's submit machine\n * @param[in]\thostname\t-\thost machine that issued this check\n * @param[in]\tmtype\t-\tMOVE_TYPE_* type;  see server_limits.h\n *\n * @return\tint\n * @retval\t0\t: all ok, job can enter queue\n * @retval\tPBSE Number\t: error code\n */\n\nint\nsvr_chkque(job *pjob, pbs_queue *pque, char 
*submithost, char *hostname, int mtype)\n{\n\tint i;\n\n\t/* if not already set, set up a uid/gid/name */\n\n\tif (!is_jattr_set(pjob, JOB_ATR_euser) || !is_jattr_set(pjob, JOB_ATR_egroup)) {\n\t\tif ((i = set_objexid((void *) pjob, JOB_OBJECT, pjob->ji_wattr)) != 0)\n\t\t\treturn (i); /* PBSE_BADUSER or GRP */\n\t}\n\n\t/*\n\t * 1. If the queue is an Execution queue ...\n\t *    These are checked first because 1b - 1d are more damaging\n\t *    (see local_move() in svr_movejob.c)\n\t */\n\n\tif (pque->qu_qs.qu_type == QTYPE_Execution) {\n\n\t\t/* 1b. Check site restrictions */\n\n\t\tif (site_acl_check(pjob, pque))\n\t\t\treturn (PBSE_PERM);\n\n\t\t/* 1c. cannot have an unknown resource */\n\n\t\tif (find_resc_entry(get_jattr(pjob, JOB_ATR_resource),\n\t\t\t\t    svr_resc_def + svr_resc_unk))\n\t\t\treturn (PBSE_UNKRESC);\n\n\t\t/* 1d. cannot have an unknown attribute */\n\n\t\tif (is_jattr_set(pjob, JOB_ATR_UNKN))\n\t\t\treturn (PBSE_NOATTR);\n\t}\n\n\t/* checks 2, 2a, and 3 are bypassed for a move by manager or qorder */\n\n\tif ((mtype != MOVE_TYPE_MgrMv) && (mtype != MOVE_TYPE_Order)) {\n\n\t\t/* 2. the queue must be enabled and the job limit not exceeded */\n\n\t\tif (get_qattr_long(pque, QA_ATR_Enabled) == 0)\n\t\t\treturn (PBSE_QUNOENB);\n\n\t\tif (is_qattr_set(pque, QA_ATR_MaxJobs)) {\n\t\t\tint histjobs = 0;\n\t\t\tif (svr_chk_history_conf()) {\n\t\t\t\t/* calculate number of finished and moved jobs */\n\t\t\t\thistjobs = pque->qu_njstate[JOB_STATE_MOVED] +\n\t\t\t\t\t   pque->qu_njstate[JOB_STATE_FINISHED] +\n\t\t\t\t\t   pque->qu_njstate[JOB_STATE_EXPIRED];\n\t\t\t}\n\t\t\t/*\n\t\t\t * check number of jobs in queue excluding\n\t\t\t * finished and moved jobs\n\t\t\t */\n\t\t\tif ((pque->qu_numjobs - histjobs) >= get_qattr_long(pque, QA_ATR_MaxJobs))\n\t\t\t\treturn (PBSE_MAXQUED);\n\t\t}\n\n\t\t/* 2a. 
if job array, check for queue max_array_size */\n\n\t\tif (is_qattr_set(pque, QA_ATR_maxarraysize)) {\n\t\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_ArrayJob) &&\n\t\t\t    (pjob->ji_ajinfo != NULL)) {\n\t\t\t\tif (pjob->ji_ajinfo->tkm_ct > get_qattr_long(pque, QA_ATR_maxarraysize))\n\t\t\t\t\treturn (PBSE_MaxArraySize);\n\t\t\t}\n\t\t}\n\n\t\t/* 3. If \"from_route_only\" is true, only local route allowed */\n\n\t\tif (is_qattr_set(pque, QA_ATR_FromRouteOnly) && get_qattr_long(pque, QA_ATR_FromRouteOnly) == 1)\n\t\t\tif (mtype == MOVE_TYPE_Move) /* ok if not plain user or scheduler */\n\t\t\t\treturn (PBSE_QACESS);\n\t}\n\n\t/* 4. If enabled, check the queue's host ACL */\n\n\tif (get_qattr_long(pque, QA_ATR_AclHostEnabled))\n\t\tif ((acl_check(get_qattr(pque, QA_ATR_AclHost),\n\t\t\t      submithost, ACL_Host) == 0) &&\n\t\t\t(acl_check(get_qattr(pque, QA_ATR_AclHost),\n\t\t\t      hostname, ACL_Host) == 0))\n\t\t\tif (mtype != MOVE_TYPE_MgrMv) /* ok if mgr */\n\t\t\t\treturn (PBSE_BADHOST);\n\n\t/* 5a. If enabled, check the queue's user ACL */\n\n\tif (get_qattr_long(pque, QA_ATR_AclUserEnabled))\n\t\tif (acl_check(get_qattr(pque, QA_ATR_AclUsers),\n\t\t\t      get_jattr_str(pjob, JOB_ATR_job_owner), ACL_User) == 0)\n\t\t\tif (mtype != MOVE_TYPE_MgrMv) /* ok if mgr */\n\t\t\t\treturn (PBSE_PERM);\n\n\t/* 5b. If enabled, check the queue's group ACL */\n\n\tif (get_qattr_long(pque, QE_ATR_AclGroupEnabled))\n\t\tif (acl_check(get_qattr(pque, QE_ATR_AclGroup),\n\t\t\t      get_jattr_str(pjob, JOB_ATR_euser),\n\t\t\t      ACL_Group) == 0)\n\t\t\tif (mtype != MOVE_TYPE_MgrMv) /* ok if mgr */\n\t\t\t\treturn (PBSE_PERM);\n\n\t/* 6. 
If enabled, check the queue's required cred type */\n\n\tif (is_qattr_set(pque, QA_ATR_ReqCredEnable) &&\n\t    get_qattr_long(pque, QA_ATR_ReqCredEnable) &&\n\t    is_qattr_set(pque, QA_ATR_ReqCred)) {\n\t\tchar *reqc = get_qattr_str(pque, QA_ATR_ReqCred);\n\t\tchar *jobc = get_jattr_str(pjob, JOB_ATR_cred);\n\t\t/*\n\t\t **\tThe queue requires a cred, if job has none, or\n\t\t **\tit is the wrong one, and if not mgr, reject.\n\t\t */\n\t\tif ((!is_jattr_set(pjob, JOB_ATR_cred) || strcmp(reqc, jobc) != 0) && mtype != MOVE_TYPE_MgrMv)\n\t\t\treturn PBSE_BADCRED;\n\t}\n\n\t/* checks 7 and 7a are bypassed for a move by manager or qorder */\n\tif (mtype != MOVE_TYPE_MgrMv) {\n\t\t/* 7. resources of the job must be in the limits of the queue */\n\n\t\t/* 7a. Check limit on number of jobs per entity in queue */\n\n\t\ti = check_entity_ct_limit_max(pjob, pque);\n\t\tif (i != 0)\n\t\t\treturn i;\n\n\t\ti = check_entity_ct_limit_queued(pjob, pque);\n\t\tif (i != 0)\n\t\t\treturn i;\n\n\t\t/* 7b. Check limit on number of jobs per entity in server only if */\n\t\t/*     this is a new job defined by state == JOB_STATE_LTR_TRANSIT    */\n\t\tif (check_job_state(pjob, JOB_STATE_LTR_TRANSIT)) {\n\t\t\ti = check_entity_ct_limit_max(pjob, NULL);\n\t\t\tif (i != 0)\n\t\t\t\treturn i;\n\t\t\ti = check_entity_ct_limit_queued(pjob, NULL);\n\t\t\tif (i != 0)\n\t\t\t\treturn i;\n\t\t}\n\t}\n\n\t/* Need to unset current default resources and reset them */\n\t/* from new queue before check if can enter that queue    */\n\n\tclear_default_resc(pjob);\n\ti = set_resc_deflt(pjob, JOB_OBJECT, pque);\n\tif (i == 0) {\n\n\t\t/* checks 7c and 7d are bypassed for a move by manager or qorder */\n\t\tif (mtype != MOVE_TYPE_MgrMv) {\n\t\t\t/* 7c. Check resource limits per entity in queue */\n\t\t\ti = check_entity_resc_limit_max(pjob, pque, NULL);\n\t\t\tif (i == 0)\n\t\t\t\ti = check_entity_resc_limit_queued(pjob, pque, NULL);\n\n\t\t\tif (i == 0) {\n\t\t\t\t/* 7d. 
Check resource limits per entity in server if this */\n\t\t\t\t/*     is a new job defined by state == JOB_STATE_LTR_TRANSIT */\n\t\t\t\ti = check_entity_resc_limit_max(pjob, NULL, NULL);\n\t\t\t\tif (i == 0)\n\t\t\t\t\ti = check_entity_resc_limit_queued(pjob, NULL, NULL);\n\n\t\t\t\tif (i == 0) {\n\t\t\t\t\t/* 7e. test old gating limits */\n\t\t\t\t\ti = chk_resc_limits(get_jattr(pjob, JOB_ATR_resource), pque);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t/* after check unset defaults & reset based on current queue, if one */\n\tif (pjob->ji_qhdr) {\n\t\tclear_default_resc(pjob);\n\t\t(void) set_resc_deflt(pjob, JOB_OBJECT, NULL);\n\t}\n\n\tif (i != 0)\n\t\tif (mtype != MOVE_TYPE_MgrMv) /* ok if mgr */\n\t\t\treturn (i);\n\n\treturn (0); /* all ok, job can enter queue */\n}\n\n/**\n * @brief\n *\t\tcheck_block_wt\t-\tA work task to reply to the blocked job client\n *\n * @param[in]\tptask\t-\twork_task structure\n */\nvoid\ncheck_block_wt(struct work_task *ptask)\n{\n\tstruct block_job_reply *blockj = ptask->wt_parm1;\n\tstruct pollfd fds[1];\n\tint rc;\n\tpbs_socklen_t len = sizeof(rc);\n\tint conn = 0;\n\tint ret = 0;\n\tint check_error;\n\n\tif (blockj->fd == -1) {\n\t\tint sock_flags;\n\t\tstruct hostent *hp;\n\t\tstruct sockaddr_in remote;\n\n\t\tif ((hp = gethostbyname(blockj->client)) == NULL) {\n\t\t\tsprintf(log_buffer, \"client host %s not found for block job %s\",\n\t\t\t\tblockj->client, blockj->jobid);\n\t\t\tgoto err;\n\t\t}\n\n\t\tmemset(&remote, 0, sizeof(remote));\n\t\tmemcpy(&remote.sin_addr, hp->h_addr, hp->h_length);\n\t\tremote.sin_port = htons((unsigned short) blockj->port);\n\t\tremote.sin_family = hp->h_addrtype;\n\n\t\tif ((blockj->fd = socket(AF_INET, SOCK_STREAM, 0)) == -1) {\n\t\t\tsprintf(log_buffer, \"Failed to create socket for job %s\", blockj->jobid);\n\t\t\tgoto err;\n\t\t}\n\n\t\t/* Set socket to Non-blocking */\n\t\tsock_flags = fcntl(blockj->fd, F_GETFL, 0);\n\t\tif (fcntl(blockj->fd, F_SETFL, sock_flags | O_NONBLOCK) == -1) 
{\n\t\t\tsprintf(log_buffer, \"Failed to set non-blocking flag on socket for job %s\",\n\t\t\t\tblockj->jobid);\n\t\t\tgoto err;\n\t\t}\n\n\t\tconn = connect(blockj->fd, (struct sockaddr *) &remote, sizeof(remote));\n\t\tif ((conn == -1) && !(errno == EINPROGRESS || errno == EWOULDBLOCK)) {\n\t\t\tgoto retry;\n\t\t}\n\t}\n\n\twhile (1) {\n\t\tfds[0].fd = blockj->fd;\n\t\tfds[0].events = POLLOUT;\n\t\tfds[0].revents = 0;\n\n\t\trc = poll(fds, (nfds_t) 1, 0);\n\t\tif (rc == -1) {\n\t\t\tif ((errno != EAGAIN) && (errno != EINTR))\n\t\t\t\tbreak;\n\t\t} else\n\t\t\tbreak; /* no error */\n\t}\n\n\tif (rc <= 0)\n\t\tgoto retry;\n\n\trc = 0;\n\tcheck_error = getsockopt(fds[0].fd, SOL_SOCKET, SO_ERROR, &rc, &len);\n\tif ((rc != 0) || (check_error != 0) || (fds[0].revents != POLLOUT))\n\t\tgoto retry;\n\n\trc = CS_server_auth(blockj->fd);\n\tif ((rc != CS_SUCCESS) && (rc != CS_AUTH_CHECK_PORT)) {\n\t\tsprintf(log_buffer, \"Unable to authenticate with %s:%d\", blockj->client, blockj->port);\n\t\tgoto err;\n\t}\n\n\t/*\n\t**\tAll ready to talk... 
now send the info.\n\t*/\n\n\tDIS_tcp_funcs();\n\tret = diswsi(blockj->fd, 1); /* version */\n\tif (ret != DIS_SUCCESS)\n\t\tgoto err;\n\tret = diswst(blockj->fd, blockj->jobid);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto err;\n\tif (blockj->msg == NULL) {\n\t\tret = diswst(blockj->fd, \"\");\n\t} else {\n\t\tret = diswst(blockj->fd, blockj->msg);\n\t}\n\tif (ret != DIS_SUCCESS)\n\t\tgoto err;\n\tret = diswsi(blockj->fd, blockj->exitstat);\n\tif (ret != DIS_SUCCESS)\n\t\tgoto err;\n\t(void) dis_flush(blockj->fd);\n\n\tsprintf(log_buffer, \"%s: Write successful to client %s for job %s \", __func__,\n\t\tblockj->client, blockj->jobid);\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_NOTICE, blockj->jobid, log_buffer);\n\tdis_destroy_chan(blockj->fd);\n\tCS_close_socket(blockj->fd);\n\tgoto end;\n\nretry:\n\tif ((time(0) - blockj->reply_time) < BLOCK_JOB_REPLY_TIMEOUT) {\n\t\tset_task(WORK_Timed, time_now + 10, check_block_wt, blockj);\n\t\treturn;\n\t} else {\n\t\tsprintf(log_buffer, \"Unable to reply to client %s for job %s\",\n\t\t\tblockj->client, blockj->jobid);\n\t}\nerr:\n\tDIS_tcp_funcs();\n\tdis_destroy_chan(blockj->fd);\n\tif (ret != DIS_SUCCESS) {\n\t\tsprintf(log_buffer, \"DIS error while replying to client %s for job %s\",\n\t\t\tblockj->client, blockj->jobid);\n\t}\n\tlog_err(-1, __func__, log_buffer);\nend:\n\tif (blockj->fd != -1)\n\t\tclose(blockj->fd);\n\tfree(blockj->msg);\n\tfree(blockj);\n}\n\n/**\n * @brief\n *\t\tcheck_block\t-\tSee if \"block\" is set and send reply.\n *\n * @param[in,out]\tpjob\t-\tjob structure\n * @param[in]\tmessage\t-\tmessage needs to be send to the port.\n */\nvoid\ncheck_block(job *pjob, char *message)\n{\n\tint port;\n\tchar *phost;\n\tchar *jobid = pjob->ji_qs.ji_jobid;\n\tstruct block_job_reply *blockj;\n\n\tif ((is_jattr_set(pjob, JOB_ATR_block)) == 0)\n\t\treturn;\n\tif ((get_jattr_long(pjob, JOB_ATR_block)) == -1)\n\t\treturn;\n\n\tport = (int) get_jattr_long(pjob, JOB_ATR_block);\n\t/*\n\t * The blocking attribute 
of the job needs to be unset. It holds the port number on which the job's\n\t * submission host is waiting for the job's exit status. The unset is done here, in check_block(), because this\n\t * is the final step in processing a blocking job.\n\t *\n\t * For posterity it is useful to record that the job was a blocking job, so we set the\n\t * port number to an impossible value instead of clearing it; the database then retains only\n\t * the fact that a history job was a blocking job. The port number itself need not be recorded.\n\t */\n\tset_jattr_l_slim(pjob, JOB_ATR_block, -1, SET);\n\n\tphost = get_jattr_str(pjob, JOB_ATR_submit_host);\n\tif (port == 0 || phost == NULL) {\n\t\tsprintf(log_buffer, \"%s: cannot reply %s:%d\", __func__,\n\t\t\tphost == NULL ? \"<no host>\" : phost, port);\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_NOTICE,\n\t\t\t  jobid, log_buffer);\n\t\treturn;\n\t}\n\n\tblockj = (struct block_job_reply *) malloc(sizeof(struct block_job_reply));\n\tif (blockj == NULL) {\n\t\tsprintf(log_buffer, \"%s: Unable to allocate memory for the job %s\", __func__, jobid);\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_NOTICE,\n\t\t\t  jobid, log_buffer);\n\t\treturn;\n\t}\n\n\tblockj->msg = strdup(message);\n\tstrncpy(blockj->client, phost, PBS_MAXHOSTNAME);\n\tblockj->client[PBS_MAXHOSTNAME - 1] = '\\0';\n\tblockj->port = port;\n\tblockj->fd = -1;\n\tblockj->reply_time = time(NULL);\n\tblockj->exitstat = pjob->ji_qs.ji_un.ji_exect.ji_exitstat;\n\tstrcpy(blockj->jobid, pjob->ji_qs.ji_jobid);\n\n\tset_task(WORK_Immed, 0, check_block_wt, blockj);\n\treturn;\n}\n\n/**\n * @brief\n * \t\tjob_wait_over - The execution wait time of a job has been reached, at\n *\t\tleast according to a work_task entry via which we were invoked.\n * @par\n *\t\tIMP: Should not invoke/create any such work task(s) for history\n *\t     \tjobs (job with state JOB_STATE_MOVED or JOB_STATE_FINISHED).\n * @par\n *\t\tIf indeed the case, 
re-evaluate and set the job state.\n *\n * @param[in]\tpwt\t-\twork task structure\n */\n\nstatic void\njob_wait_over(struct work_task *pwt)\n{\n\tchar newstate;\n\tint newsub;\n\tjob *pjob;\n\n\tpjob = (job *) pwt->wt_parm1;\n\n\t/* If history job, just return from here */\n\tif ((check_job_state(pjob, JOB_STATE_LTR_MOVED)) ||\n\t    (check_job_state(pjob, JOB_STATE_LTR_FINISHED)))\n\t\treturn;\n\n#ifndef NDEBUG\n\t{\n\t\ttime_t now = time(NULL);\n\t\ttime_t when = get_jattr_long((job *) pjob, JOB_ATR_exectime);\n\t\tstruct work_task *ptask;\n\n\t\tif (when > now) {\n\t\t\tsprintf(log_buffer, msg_badwait, ((job *) pjob)->ji_qs.ji_jobid);\n\t\t\tlog_err(-1, \"job_wait_over\", log_buffer);\n\n\t\t\t/* recreate the work task entry */\n\n\t\t\tptask = set_task(WORK_Timed, when, job_wait_over, pjob);\n\t\t\tif (ptask) {\n\t\t\t\tappend_link(&pjob->ji_svrtask,\n\t\t\t\t\t    &ptask->wt_linkobj, ptask);\n\t\t\t}\n\t\t\treturn;\n\t\t}\n\t}\n#endif\n\tpjob->ji_qs.ji_svrflags &= ~JOB_SVFLG_HASWAIT;\n\n\t/* clear the exectime attribute */\n\tfree_jattr(pjob, JOB_ATR_exectime);\n\tsvr_evaljobstate(pjob, &newstate, &newsub, 0);\n\tsvr_setjobstate(pjob, newstate, newsub);\n}\n\n/**\n * @brief\n * \t\tjob_set_wait - set up a work task entry that will trigger at the execution\n *\t\twait time of the job.\n * @par\n *\t\tIMP: History jobs:\n *\t     If the SERVER is configured for history jobs and the job is\n *\t     in state JOB_STATE_MOVED or JOB_STATE_FINISHED, then do not\n *\t     create/schedule any further work task on this job which may\n *\t     modify the HISTORY jobs.\n * @par\n *\t\tThis is called as the at_action (see attribute.h) function associated\n *\t\twith the execution-time job attribute.\n * \t\tparameter pjob is a job * cast to a void *\n * \t\tparameter mode is unused;  do it for all action modes\n *\n * @param[in]\tpattr\t-\texecution-time job attribute.\n * @param[in]\tpjob\t-\tpjob is a job * cast to a void *\n * @param[in]\tmode\t-\tunused;  do 
it for all action modes\n */\n\nint\njob_set_wait(attribute *pattr, void *pjob, int mode)\n{\n\tstruct work_task *ptask;\n\tlong when;\n\n\t/* Return 0 if it is history job */\n\tif (check_job_state((job *) pjob, JOB_STATE_LTR_MOVED) || check_job_state((job *) pjob, JOB_STATE_LTR_FINISHED))\n\t\treturn (0);\n\n\tif (!is_attr_set(pattr))\n\t\treturn (0);\n\twhen = pattr->at_val.at_long;\n\tptask = (struct work_task *) GET_NEXT(((job *) pjob)->ji_svrtask);\n\n\t/* Is there already an entry for this job?  Then reuse it */\n\n\tif (((job *) pjob)->ji_qs.ji_svrflags & JOB_SVFLG_HASWAIT) {\n\t\twhile (ptask) {\n\t\t\tif ((ptask->wt_event == WORK_Timed) &&\n\t\t\t    (ptask->wt_func == job_wait_over) &&\n\t\t\t    (ptask->wt_parm1 == pjob)) {\n\t\t\t\tptask->wt_event = when;\n\t\t\t\treturn (0);\n\t\t\t}\n\t\t\tptask = (struct work_task *) GET_NEXT(ptask->wt_linkobj);\n\t\t}\n\t}\n\n\tptask = set_task(WORK_Timed, when, job_wait_over, pjob);\n\tif (ptask == NULL)\n\t\treturn (-1);\n\tappend_link(&((job *) pjob)->ji_svrtask, &ptask->wt_linkobj, ptask);\n\n\t/* set JOB_SVFLG_HASWAIT to show job has work task entry */\n\n\t((job *) pjob)->ji_qs.ji_svrflags |= JOB_SVFLG_HASWAIT;\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tdefault_std - make the default name for standard output or error\n *\t\t\"job_name\".[e|o]job_sequence_number\n *\t\tor\n *\t\t\"job_name\".[e|o]job_sequence_number^index^ for an Array Job\n * \t\tparameter key is 'e' for stderr, 'o' for stdout\n *\t \tparameter to points to a buffer into which the name is returned; callers\n *\t\tare responsible for ensuring that the buffer is of sufficient size\n *\n * @param[in]\tpjob\t-\tpointer to job structure\n * @param[in]\tkey\t-\tthe letter before the sequence number\n * @param[out]\tto\t-\toutput name\n */\n\nstatic void\ndefault_std(job *pjob, int key, char *to)\n{\n\tint len;\n\tchar *pd;\n\n\tpd = strrchr(get_jattr_str(pjob, JOB_ATR_jobname), '/');\n\tif (pd)\n\t\t++pd;\n\telse\n\t\tpd = get_jattr_str(pjob, 
JOB_ATR_jobname);\n\tlen = strlen(pd);\n\n\t(void) strcpy(to, pd);\t    /* start with the job name */\n\t*(to + len++) = '.';\t    /* the dot        */\n\t*(to + len++) = (char) key; /* the letter     */\n\tpd = pjob->ji_qs.ji_jobid;  /* the seq_number */\n\twhile (isdigit((int) *pd))\n\t\t*(to + len++) = *pd++;\n\t*(to + len) = '\\0';\n\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_ArrayJob) {\n\t\t/* Array Job - append special substitution string for index */\n\t\tstrcat(to, \".\");\n\t\tstrcat(to, PBS_FILE_ARRAY_INDEX_TAG);\n\t}\n}\n\n/**\n * @brief\n * \t\tprefix_std_file - build the absolute pathname for the job's standard\n * \t\toutput or error:\n *\t\toutputhost:$PBS_O_WORKDIR/job_name.[eo]job_sequence_number\n *\n * @param[in]\tpjob\t-\tpointer to job structure\n * @param[in]\tkey\t-\tinteger that is either 'e' or 'o'\n *\n * @return\tchar *\n * @retval\tNULL\t-\tFailed to construct prefix\n * @retval\t!NULL\t-\tPointer to the prefix string\n */\nchar *\nprefix_std_file(job *pjob, int key)\n{\n\tchar *name = NULL;\n\tchar *outputhost;\n\tchar *wdir;\n\n\tif (pbs_conf.pbs_output_host_name)\n\t\toutputhost = pbs_conf.pbs_output_host_name;\n\telse\n\t\toutputhost = get_jattr_str(pjob, JOB_ATR_submit_host);\n\twdir = get_variable(pjob, \"PBS_O_WORKDIR\");\n\tif (outputhost) {\n\t\tint len;\n\n\t\tlen = strlen(outputhost) +\n\t\t      strlen(get_jattr_str(pjob, JOB_ATR_jobname)) + PBS_MAXSEQNUM + strlen(PBS_FILE_ARRAY_INDEX_TAG) + 6;\n\t\tif (wdir)\n\t\t\tlen += strlen(wdir);\n\t\tname = malloc(len);\n\t\tif (name) {\n\t\t\tstrcpy(name, outputhost); /* the qsub host name\t*/\n\t\t\tstrcat(name, \":\");\t  /* the :\t\t*/\n\t\t\tif (wdir) {\n\t\t\t\tstrcat(name, wdir); /* the qsub cwd\t\t*/\n\t\t\t\tstrcat(name, \"/\");  /* the final /\t\t*/\n\t\t\t}\n\t\t\t/* now add the rest\t*/\n\t\t\tdefault_std(pjob, key, name + strlen(name));\n\t\t}\n\t}\n\treturn (name);\n}\n\n/**\n * @brief\n * \t\tcat_default_std  - function concatenates the default name for the job's\n * 
\t\tstdout/err filename to the string residing in buffer \"in\".  Space for the\n * \t\tnewly created string is dynamically acquired and returned to the caller\n * \t\tvia the argument \"out\".  It is the responsibility of some other function\n * \t\tto free this heap memory when it is no longer needed.\n * @par\n * \t\tparameter key is 'e' for stderr, 'o' for stdout\n * \t\tparameter in is the location of the input buffer\n * \t\tparameter out is the return location of the result string\n *\n * @param[in]\tpjob\t-\tjob structure\n * @param[in]\tkey\t-\tthe letter before the sequence number\n * @param[in]\tin\t-\tthe string residing in buffer \"in\".\n * @param[in]\tout\t-\tSpace for the newly created string.\n */\nvoid\ncat_default_std(job *pjob, int key, char *in, char **out)\n{\n\tchar *result;\n\tint len;\n\tlen = strlen(in) +\n\t      strlen(get_jattr_str(pjob, JOB_ATR_jobname)) +\n\t      PBS_MAXSEQNUM + 5 + strlen(PBS_FILE_ARRAY_INDEX_TAG) + 1;\n\tif ((result = malloc(len))) {\n\t\tstrcpy(result, in);\n\t\tdefault_std(pjob, key, &result[strlen(result)]);\n\t}\n\t*out = result;\n}\n\n/**\n * @brief\n * \t\tcvrt_fqn_to_name - copy the name only (no @host suffix) to the \"to\" buffer.\n *\t\t\"to\" buffer is (PBS_MAXUSER+1) characters long. 
Null terminate string.\n *\n * @param[in]\tfrom\t-\tjob owner's name, possibly fully qualified (name@host)\n * @param[out]\tto\t-\t\"to\" buffer where name is copied.\n */\nvoid\ncvrt_fqn_to_name(char *from, char *to)\n{\n\tint i;\n\n\tfor (i = 0; i < PBS_MAXUSER; ++i) {\n\t\tif ((*(from + i) == '@') || (*(from + i) == '\\0'))\n\t\t\tbreak;\n\t\t*(to + i) = *(from + i);\n\t}\n\t*(to + i) = '\\0';\n}\n\n/**\n * @brief\n *\t\tget_hostPart - return a pointer to the \"host\" part of a \"name@host\"\n *\t\tstring.\n *\n * @param[in]\tfrom\t-\tpointer to a string of the form user@host\n *\n * @return\tchar *\n * @retval\tpointer to host part of the input string\n * @retval \tNULL\t: if no '@' in input or host part following the '@' is null\n *\n * @par MT-safe:\tyes\n */\nchar *\nget_hostPart(char *from)\n{\n\tchar *pc;\n\n\tif ((pc = strchr(from, '@')) == NULL)\n\t\treturn NULL;\n\telse if (*(++pc) == '\\0')\n\t\treturn NULL;\n\treturn pc;\n}\n\n#define CVT_SIZE 1500\n\n/**\n * @brief\n * \t\tset_select_and_place - create a select and place resource for the job\n *\n * @param[in]\tobjtype\t-\ttype of the object\n * @param[in]\tpobj\t-\tpointer to void * object, not used here.\n * @param[in]\tpatr\t-\tpointer to attribute structure which contains the resources.\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\tPBSE_Error\t: Error Code\n *\n * @par MT-safe:\tNo.\n */\nint\nset_select_and_place(int objtype, void *pobj, attribute *patr)\n{\n\tpbs_list_head collectresc;\n\tstatic char *cvt = NULL;\n\tstatic size_t cvt_len;\n\tchar *ndspec;\n\tresource *presc;\n\tresource *prescsl;\n\tresource *prescpc;\n\tresource_def *prdefnd;\n\tresource_def *prdefpc;\n\tresource_def *prdefsl;\n\tint rc;\n\textern int resc_access_perm;\n\n\tif (cvt == NULL) {\n\t\tcvt = malloc(CVT_SIZE);\n\t\tif (cvt == NULL)\n\t\t\treturn PBSE_SYSTEM;\n\t\telse\n\t\t\tcvt_len = CVT_SIZE;\n\t}\n\n\tprdefpc = &svr_resc_def[RESC_PLACE];\n\tprdefnd = &svr_resc_def[RESC_NODES];\n\tprdefsl = &svr_resc_def[RESC_SELECT];\n\tpresc = 
find_resc_entry(patr, prdefnd);\n\n\t/* add \"select\" and \"place\" resource */\n\n\tprescsl = add_resource_entry(patr, prdefsl);\n\tif (prescsl == NULL)\n\t\treturn PBSE_SYSTEM;\n\n\tprescpc = find_resc_entry(patr, prdefpc);\n\tif (prescpc == NULL) {\n\t\tprescpc = add_resource_entry(patr, prdefpc);\n\t\tif (prescpc == NULL)\n\t\t\treturn PBSE_SYSTEM;\n\t}\n\n\tif (presc && ((ndspec = presc->rs_value.at_val.at_str) != NULL)) {\n\n\t\t/* Have a nodes spec, use it  to make select and place */\n\n\t\tif ((rc = cvt_nodespec_to_select(ndspec, &cvt, &cvt_len, patr)) != 0)\n\t\t\treturn rc;\n\n\t\tif ((rc = prdefsl->rs_decode(&prescsl->rs_value, NULL, \"select\", cvt)) != 0)\n\t\t\treturn rc;\n\t\tprescsl->rs_value.at_flags |= ATR_VFLAG_DEFLT;\n\n\t\tif (strstr(ndspec, \"#excl\") != NULL) {\n\t\t\tprdefpc->rs_decode(&prescpc->rs_value, NULL, \"place\", \"scatter:excl\");\n\t\t} else if (strstr(ndspec, \"#shared\") != NULL) {\n\t\t\tprdefpc->rs_decode(&prescpc->rs_value, NULL, \"place\", \"scatter:share\");\n\t\t} else {\n\t\t\tprdefpc->rs_decode(&prescpc->rs_value, NULL, \"place\", \"scatter\");\n\t\t}\n\n\t} else {\n\t\tattribute_def *objatrdef;\n\n\t\t/* No nodes spec, use ncpus/mem/arch/host/software\t*/\n\t\t/* from the resource_List attribute\t\t\t*/\n\n\t\tif (objtype == JOB_OBJECT)\n\t\t\tobjatrdef = &job_attr_def[(int) JOB_ATR_resource];\n\t\telse\n\t\t\tobjatrdef = &resv_attr_def[(int) RESV_ATR_resource];\n\n\t\tCLEAR_HEAD(collectresc);\n\t\tresc_access_perm = READ_ONLY;\n\t\tif (objatrdef->at_encode(patr, &collectresc, objatrdef->at_name, NULL, ATR_ENCODE_CLIENT, NULL) > 0) {\n\t\t\tsvrattrl *psvrl;\n\n\t\t\t*cvt = '1';\n\t\t\t*(cvt + 1) = '\\0';\n\t\t\tpsvrl = (svrattrl *) GET_NEXT(collectresc);\n\t\t\twhile (psvrl) {\n\t\t\t\tresource_def *prdefcopy;\n\n\t\t\t\tprdefcopy = find_resc_def(svr_resc_def, psvrl->al_resc);\n\t\t\t\tif (prdefcopy && (prdefcopy->rs_flags & ATR_DFLAG_CVTSLT)) {\n\t\t\t\t\tsize_t cvtneed;\n\n\t\t\t\t\t/* how much space is needed in 
cvt buffer, \t */\n\t\t\t\t\t/* +5 = one for : = possible quotes and null */\n\t\t\t\t\tcvtneed = strlen(psvrl->al_resc) +\n\t\t\t\t\t\t  strlen(psvrl->al_value) + 5;\n\t\t\t\t\tif ((strlen(cvt) + cvtneed) > cvt_len) {\n\t\t\t\t\t\t/* double cvt buffer */\n\t\t\t\t\t\tchar *tcvt;\n\t\t\t\t\t\ttcvt = realloc(cvt, 2 * cvt_len);\n\t\t\t\t\t\tif (tcvt) {\n\t\t\t\t\t\t\tcvt = tcvt;\n\t\t\t\t\t\t\tcvt_len *= 2;\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tlog_event(PBSEVENT_ERROR,\n\t\t\t\t\t\t\t\t  PBS_EVENTCLASS_SERVER, LOG_ALERT,\n\t\t\t\t\t\t\t\t  msg_daemonname,\n\t\t\t\t\t\t\t\t  \"unable to malloc space\");\n\t\t\t\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tstrcat(cvt, \":\");\n\t\t\t\t\tstrcat(cvt, psvrl->al_resc);\n\t\t\t\t\tstrcat(cvt, \"=\");\n\t\t\t\t\tif (strpbrk(psvrl->al_value, \"\\\"'+:=()\")) {\n\t\t\t\t\t\tchar *quotec;\n\t\t\t\t\t\tif (strchr(psvrl->al_value, (int) '\"'))\n\t\t\t\t\t\t\tquotec = \"'\";\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\tquotec = \"\\\"\";\n\t\t\t\t\t\tstrcat(cvt, quotec);\n\t\t\t\t\t\tstrcat(cvt, psvrl->al_value);\n\t\t\t\t\t\tstrcat(cvt, quotec);\n\t\t\t\t\t} else {\n\t\t\t\t\t\tstrcat(cvt, psvrl->al_value);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tpsvrl = (svrattrl *) GET_NEXT(psvrl->al_link);\n\t\t\t}\n\t\t\tfree_attrlist(&collectresc);\n\t\t} else {\n\t\t\tstrcpy(cvt, \"ncpus=1\");\n\t\t}\n\t\tif (prdefsl->rs_decode(&prescsl->rs_value, NULL, \"select\", cvt) == 0) {\n\n\t\t\tif (objtype == JOB_OBJECT) { /* set default flg only on jobs */\n\t\t\t\tprescsl->rs_value.at_flags |= ATR_VFLAG_DEFLT;\n\t\t\t\tif ((prescpc->rs_value.at_flags & (ATR_VFLAG_SET | ATR_VFLAG_DEFLT)) != ATR_VFLAG_SET)\n\t\t\t\t\tif (prdefpc->rs_decode(&prescpc->rs_value, NULL, \"place\", \"pack\") == 0)\n\n\t\t\t\t\t\tif (objtype == JOB_OBJECT) /* set default flg only on jobs */\n\t\t\t\t\t\t\tprescpc->rs_value.at_flags |= ATR_VFLAG_DEFLT;\n\t\t\t}\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tset_chunk_sums() - set the sums of consumable resources 
listed\n *\t\tin the chunks as job limits if not already set.\n *\n * @param[in]\tpselectattr\t-\tattribute structure from which we parse the select directive\n * @param[in]\tpattr\t-\tattribute structure where resource limit is set.\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\tPBSE_Error\t: Error Code\n *\n * @par MT-safe:\tNo.\n */\n\nint\nset_chunk_sum(attribute *pselectattr, attribute *pattr)\n{\n\tchar *chunk;\n\tint i;\n\tint j;\n\tint nchk;\n\tint nelem;\n\tint rc;\n\tint default_flag;\n\tint total_chunks = 0;\n\tstruct key_value_pair *pkvp;\n\tresource *presc;\n\tresource_def *pdef;\n\tstatic attribute tmpatr;\n\n\tif ((pselectattr == NULL) || (pattr == NULL))\n\t\treturn 0;\n\n\t/* first clear the summation table used later */\n\n\tfor (i = 0; svr_resc_sum[i].rs_def; ++i) {\n\t\t(void) memset((char *) &svr_resc_sum[i].rs_attr, 0, sizeof(struct attribute));\n\n\t\tsvr_resc_sum[i].rs_set = 0;\n\t\tsvr_resc_sum[i].rs_prs = NULL;\n\t}\n\n\t/* now, look through the resource limits specified for the job        */\n\t/* if any matches an entry in the table, set the pointer and set flag */\n\n\tpresc = (resource *) GET_NEXT(pattr->at_val.at_list);\n\twhile (presc) {\n\t\tfor (i = 0; svr_resc_sum[i].rs_def; ++i) {\n\t\t\tif (strcmp(presc->rs_defin->rs_name, svr_resc_sum[i].rs_def->rs_name) == 0) {\n\t\t\t\t/* found one, save the resource ptr in sum table */\n\t\t\t\tsvr_resc_sum[i].rs_prs = presc;\n\t\t\t}\n\t\t}\n\t\tpresc = (resource *) GET_NEXT(presc->rs_link);\n\t}\n\n\t/* now, parse the select directive */\n\n\tchunk = parse_plus_spec(pselectattr->at_val.at_str, &rc);\n\tif (rc != 0)\n\t\treturn rc;\n\twhile (chunk) {\n\t\tif (parse_chunk(chunk, &nchk, &nelem, &pkvp, NULL) == 0) {\n\t\t\ttotal_chunks += nchk;\n\t\t\tfor (j = 0; j < nelem; ++j) {\n\t\t\t\tfor (i = 0; svr_resc_sum[i].rs_def; ++i) {\n\t\t\t\t\tif (strcmp(svr_resc_sum[i].rs_def->rs_name, pkvp[j].kv_keyw) == 0) {\n\t\t\t\t\t\trc = svr_resc_sum[i].rs_def->rs_decode(&tmpatr, 
0,\n\t\t\t\t\t\t\t\t\t\t       0, pkvp[j].kv_val);\n\t\t\t\t\t\tif (rc != 0)\n\t\t\t\t\t\t\treturn rc;\n\t\t\t\t\t\telse if (!is_attr_set(&tmpatr))\n\t\t\t\t\t\t\treturn PBSE_BADATVAL; /* illegal null value */\n\t\t\t\t\t\tif (svr_resc_sum[i].rs_def->rs_type == ATR_TYPE_SIZE)\n\t\t\t\t\t\t\ttmpatr.at_val.at_size.atsv_num *= nchk;\n\t\t\t\t\t\telse if (svr_resc_sum[i].rs_def->rs_type == ATR_TYPE_FLOAT)\n\t\t\t\t\t\t\ttmpatr.at_val.at_float *= nchk;\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\ttmpatr.at_val.at_long *= nchk;\n\n\t\t\t\t\t\t(void) svr_resc_sum[i].rs_def->rs_set(&svr_resc_sum[i].rs_attr, &tmpatr, INCR);\n\t\t\t\t\t\tsvr_resc_sum[i].rs_set = 1;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\treturn (PBSE_BADATVAL);\n\t\t}\n\t\tchunk = parse_plus_spec(NULL, &rc);\n\t\tif (rc != 0)\n\t\t\treturn rc;\n\t}\n\n\t/* check that the user asked for at least one chunk total */\n\n\tif (total_chunks <= 0)\n\t\treturn (PBSE_BADATVAL);\n\n\t/*\n\t * now that we have summed up the chunks, for each one summed (set) ...\n\t * set or reset the corresponding job wide limit\n\t */\n\tfor (i = 0; svr_resc_sum[i].rs_def; ++i) {\n\t\tif (svr_resc_sum[i].rs_set) {\n\n\t\t\tif (svr_resc_sum[i].rs_prs) {\n\t\t\t\tpresc = svr_resc_sum[i].rs_prs;\n\t\t\t\tdefault_flag = presc->rs_value.at_flags & ATR_VFLAG_DEFLT;\n\t\t\t} else {\n\t\t\t\tdefault_flag = ATR_VFLAG_DEFLT;\n\t\t\t\tpresc = add_resource_entry(pattr, svr_resc_sum[i].rs_def);\n\t\t\t\tif (presc == NULL)\n\t\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t}\n\t\t\t(void) svr_resc_sum[i].rs_def->rs_set(&presc->rs_value, &svr_resc_sum[i].rs_attr, SET);\n\t\t\tpresc->rs_value.at_flags |= default_flag;\n\t\t}\n\t}\n\n\t/* set pseudo-resource \"nodect\" to the number of chunks */\n\n\tpdef = &svr_resc_def[RESC_NODECT];\n\tif (pdef) {\n\t\tpresc = find_resc_entry(pattr, pdef);\n\t\tif (presc == NULL)\n\t\t\tpresc = add_resource_entry(pattr, pdef);\n\t\tif (presc) {\n\t\t\tpresc->rs_value.at_val.at_long = 
total_chunks;\n\t\t\tpresc->rs_value.at_flags |= ATR_VFLAG_DEFLT | ATR_SET_MOD_MCACHE;\n\t\t}\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tmake_schedselect - decode a selection specification, and produce\n *\t\tthe \"schedselect\" attribute which contains any default resources\n *\t\tmissing from the chunks in the select spec.\n *\t\tAlso translates the value of any boolean resource to the \"formal\"\n *\t\tvalue of \"True\" or \"False\" for the Scheduler, which needs to know it\n *\t\tis a boolean and not a string or number.\n *\n *\t@param[in]\tpatrl\t-\t(not used)\n * \t@param[in]\tpselect -\tpointer to the select specification\n * \t@param[in]\tpque\t-\tused to obtain queue defaults\n *\t@param[in,out]\tpsched\t-\tscheduler attribute\n *\n *\t@return\tint\n *\t@retval\t0\t: success\n *\t@retval\tPBSE_Error\t: Error Code.\n *\n *\t@par MT-safe:\tNo.\n */\n\nextern int resc_access_perm;\n\nint\nmake_schedselect(attribute *patrl, resource *pselect,\n\t\t pbs_queue *pque, attribute *psched)\n{\n\tint rc;\n\tchar *sched_select_out = NULL;\n\n\tif ((pselect == NULL) || (psched == NULL)) {\n\t\treturn (PBSE_SYSTEM);\n\t}\n\n\trc = do_schedselect(pselect->rs_value.at_val.at_str, (struct server *) &server, (pbs_queue *) pque, &resc_in_err, &sched_select_out);\n\n\tif (rc == 0) {\n\t\tfree_str(psched);\n\t\t(void) decode_str(psched, NULL, NULL, sched_select_out);\n\t\tpsched->at_flags |= ATR_VFLAG_DEFLT;\n\t}\n\treturn (rc);\n}\n\n/**\n * @brief\n * \t\tset_deflt_resc - set resource attributes based on a set of defaults provided\n *\n *\t@param[in,out]\tjb\t-\tis the job resource list attribute\n *\t@param[in]\tdflt\t-\tis the parent object (queue, server, ...) 
list of defaults\n *\t@param[in]\tselflg\t-\tif set means set select/place from the defaults\n */\n\nstatic void\nset_deflt_resc(attribute *jb, attribute *dflt, int selflg)\n{\n\tresource *prescjb;\n\tresource *prescdt;\n\tresource_def *seldef;\n\tresource_def *plcdef;\n\n\tseldef = &svr_resc_def[RESC_SELECT];\n\tplcdef = &svr_resc_def[RESC_PLACE];\n\n\tif (is_attr_set(dflt)) {\n\n\t\t/* for each resource in the default value list */\n\n\t\tfor (prescdt = (resource *) GET_NEXT(dflt->at_val.at_list);\n\t\t     prescdt;\n\t\t     prescdt = (resource *) GET_NEXT(prescdt->rs_link)) {\n\n\t\t\tif ((prescdt->rs_defin == seldef) ||\n\t\t\t    (prescdt->rs_defin == plcdef)) {\n\t\t\t\tif (!selflg)\n\t\t\t\t\tcontinue; /* dont use select/place */\n\t\t\t}\n\n\t\t\tif (is_attr_set(&prescdt->rs_value)) {\n\t\t\t\t/* see if the job already has that resource */\n\t\t\t\tprescjb = find_resc_entry(jb, prescdt->rs_defin);\n\t\t\t\tif ((prescjb == NULL) ||\n\t\t\t\t    ((prescjb->rs_value.at_flags &\n\t\t\t\t      ATR_VFLAG_SET) == 0)) {\n\n\t\t\t\t\tif (prescjb == NULL)\n\t\t\t\t\t\tprescjb = add_resource_entry(jb,\n\t\t\t\t\t\t\t\t\t     prescdt->rs_defin);\n\t\t\t\t\tif (prescjb) {\n\t\t\t\t\t\tif (prescdt->rs_defin->rs_set(&prescjb->rs_value, &prescdt->rs_value, SET) == 0)\n\t\t\t\t\t\t\tprescjb->rs_value.at_flags |= (ATR_VFLAG_SET | ATR_VFLAG_DEFLT);\n\t\t\t\t\t\tjb->at_flags |= ATR_MOD_MCACHE;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\tset_resc_deflt - sets default resource limit values\n * \t\ton the object pointed to by input \"pobj\"\n *\n * @param[in]\tpobj\t-\tjob/reservation structure\n * @param[in]\tobjtype\t-\ttype of object - job or reservation.\n * @param[in,out]\tpque\t-\tQueue structure\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\tPBSE_Error\t: Error Code.\n *\n * @par MT-safe:\tNo.\n */\nint\nset_resc_deflt(void *pobj, int objtype, pbs_queue *pque)\n{\n\tstatic resc_resv *presv;\n\tjob *pjob;\n\tattribute *pdest = 
NULL;\n\tattribute *psched = NULL;\n\tresource *presc;\n\tresource_def *prdefsl;\n\tresource_def *prdefpc;\n\tint rc;\n\n\tswitch (objtype) {\n\t\tcase JOB_OBJECT:\n\t\t\tpjob = (job *) pobj;\n\t\t\tassert(pjob != NULL);\n\t\t\tif (pque == NULL)\n\t\t\t\tpque = pjob->ji_qhdr;\n\t\t\tassert(pque != NULL);\n\t\t\tpdest = get_jattr(pjob, JOB_ATR_resource);\n\t\t\tpsched = get_jattr(pjob, JOB_ATR_SchedSelect);\n\t\t\tbreak;\n\n\t\tcase RESC_RESV_OBJECT:\n\t\t\tpresv = (resc_resv *) pobj;\n\t\t\tassert(presv != NULL);\n\t\t\tpque = NULL;\n\t\t\tpdest = get_rattr(presv, RESV_ATR_resource);\n\t\t\tpsched = get_rattr(presv, RESV_ATR_SchedSelect);\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\tbreak;\n\t}\n\n\t/* set defaults based on the Queue's resources_default */\n\tif (pque) {\n\t\tset_deflt_resc(pdest,\n\t\t\t       get_qattr(pque, QA_ATR_ResourceDefault), 1);\n\t}\n\n\t/* set defaults based on the Server's resources_default */\n\tset_deflt_resc(pdest, get_sattr(SVR_ATR_resource_deflt), 1);\n\n\t/* set defaults based on the Queue's resources_max */\n\tif (pque) {\n\t\tset_deflt_resc(pdest,\n\t\t\t       get_qattr(pque, QA_ATR_ResourceMax), 0);\n\t}\n\n\t/* set defaults based on the Server's resources_max */\n\tset_deflt_resc(pdest, get_sattr(SVR_ATR_ResourceMax), 0);\n\n\t/* if needed, set \"select\" and \"place\" from the other resources */\n\n\tprdefsl = &svr_resc_def[RESC_SELECT];\n\tpresc = find_resc_entry(pdest, prdefsl);\n\t/* if not set, set select/place */\n\tif ((presc == NULL) || ((is_attr_set(&presc->rs_value)) == 0))\n\t\tif ((rc = set_select_and_place(objtype, pobj, pdest)) != 0)\n\t\t\treturn rc;\n\n\tprdefpc = &svr_resc_def[RESC_PLACE];\n\tpresc = find_resc_entry(pdest, prdefpc);\n\t/* if \"place\" still not set, force to \"free\" */\n\tif ((presc == NULL) || ((is_attr_set(&presc->rs_value)) == 0)) {\n\t\tpresc = add_resource_entry(pdest, prdefpc);\n\t\tif (presc == NULL)\n\t\t\treturn PBSE_SYSTEM;\n\t\tif (prdefpc->rs_decode(&presc->rs_value, NULL, \"place\", 
\"free\") == 0)\n\t\t\tif (objtype == JOB_OBJECT) /* only for jobs, set DEFLT */\n\t\t\t\tpresc->rs_value.at_flags |= ATR_VFLAG_DEFLT;\n\t}\n\n\t/* now set up the Scheduler's version of select JOB_ATR_SchedSelect */\n\tpresc = find_resc_entry(pdest, prdefsl);\n\tif (presc) {\n\t\tif ((rc = make_schedselect(pdest, presc, pque, psched)) == 0)\n\t\t\trc = set_chunk_sum(psched, pdest);\n\n\t} else\n\t\trc = PBSE_SYSTEM;\n\treturn rc;\n}\n\n/**\n * @brief\n * \t\teval_chkpnt - if the job's checkpoint attribute is \"c=nnnn\" and\n * \t\tnnnn is less than the queue's minimum checkpoint time, reset\n *\t\tto the queue min time.\n *\n * @param[in,out]\tpjob\t-\tthe job whose checkpoint attribute is evaluated\n * @param[in]\tqueckp\t-\tthe queue's checkpoint attribute\n */\n\nvoid\neval_chkpnt(job *pjob, attribute *queckp)\n{\n\tchar *pv = get_jattr_str(pjob, JOB_ATR_chkpnt);\n\n\tif (!is_jattr_set(pjob, JOB_ATR_chkpnt) || !is_attr_set(queckp))\n\t\treturn; /* need do nothing */\n\n\tif ((*pv == 'c') || (*pv == 'w')) {\n\t\tint jobs;\n\t\tchar queues[30];\n\t\tchar ckt;\n\n\t\tckt = *pv;\n\t\tif (*++pv == '=')\n\t\t\tpv++;\n\t\tjobs = atoi(pv);\n\t\tif (jobs < queckp->at_val.at_long) {\n\t\t\tsprintf(queues, \"%c=%ld\", ckt, queckp->at_val.at_long);\n\t\t\tset_jattr_generic(pjob, JOB_ATR_chkpnt, queues, NULL, INTERNAL);\n\t\t}\n\t}\n}\n\n#ifndef NDEBUG\n/**\n * @brief\n * \t\tcorrect_ct - This is a work-around for an as yet unfound bug where\n *\t\tthe counts of jobs in each state sometimes (rarely) become wrong.\n *\t\tWhen this happens, the count for a state can become negative.\n *\t\tIf this is detected (see above), this routine is called to reset\n *\t\tall of the counts and log a message.\n *\n * @param[in]\tpqj\t-\tpbs queue structure\n */\n\nstatic void\ncorrect_ct(pbs_queue *pqj)\n{\n\tint i;\n\tchar *pc;\n\tjob *pjob;\n\tpbs_queue *pque;\n\n\t(void) sprintf(log_buffer, \"Job state counts incorrect, server %d: \",\n\t\t       server.sv_qs.sv_numjobs);\n\tserver.sv_qs.sv_numjobs = 
0;\n\tfor (i = 0; i < PBS_NUMJOBSTATE - 4; ++i) {\n\t\tpc = log_buffer + strlen(log_buffer);\n\t\t(void) sprintf(pc, \"%d \", server.sv_jobstates[i]);\n\t\tserver.sv_jobstates[i] = 0;\n\t}\n\tif (pqj) {\n\t\tpc = log_buffer + strlen(log_buffer);\n\t\t(void) sprintf(pc, \"; queue %s %d: \", pqj->qu_qs.qu_name,\n\t\t\t       pqj->qu_numjobs);\n\t\tfor (i = 0; i < PBS_NUMJOBSTATE - 4; ++i) {\n\t\t\tpc = log_buffer + strlen(log_buffer);\n\t\t\t(void) sprintf(pc, \"%d \", pqj->qu_njstate[i]);\n\t\t}\n\t}\n\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_SERVER, LOG_DEBUG,\n\t\t  msg_daemonname, log_buffer);\n\n\tfor (pque = (pbs_queue *) GET_NEXT(svr_queues); pque;\n\t     pque = (pbs_queue *) GET_NEXT(pque->qu_link)) {\n\t\tpque->qu_numjobs = 0;\n\t\tfor (i = 0; i < PBS_NUMJOBSTATE - 4; ++i)\n\t\t\tpque->qu_njstate[i] = 0;\n\t}\n\n\tfor (pjob = (job *) GET_NEXT(svr_alljobs); pjob;\n\t     pjob = (job *) GET_NEXT(pjob->ji_alljobs)) {\n\t\tint state_num;\n\n\t\tstate_num = get_job_state_num(pjob);\n\t\tserver.sv_qs.sv_numjobs++;\n\t\tif (state_num != -1)\n\t\t\tserver.sv_jobstates[state_num]++;\n\t\tif (pjob->ji_qhdr) {\n\t\t\t(pjob->ji_qhdr)->qu_numjobs++;\n\t\t\tif (state_num != -1)\n\t\t\t\t(pjob->ji_qhdr)->qu_njstate[state_num]++;\n\t\t}\n\t}\n\treturn;\n}\n#endif /* NDEBUG */\n\n/**\n * @brief\n * \t\tget_wall - get the value of \"walltime\" for the job\n *\n * @param[in]\tjp\t-\tjp is a valid job pointer\n *\n * @return\tint\n * @retval\t-1\t: function failed\n * @retval\twalltime value\t: function succeeded\n *\n * @note\n * \t\tAssumption: input jp is a valid job pointer\n */\nint\nget_wall(job *jp)\n{\n\tresource_def *rscdef;\n\tresource *pres;\n\n\trscdef = &svr_resc_def[RESC_WALLTIME];\n\tif (rscdef == 0)\n\t\treturn (-1);\n\tpres = find_resc_entry(get_jattr(jp, JOB_ATR_resource), rscdef);\n\tif (pres == 0)\n\t\treturn (-1);\n\telse if (!is_attr_set(&pres->rs_value))\n\t\treturn (-1);\n\telse\n\t\treturn pres->rs_value.at_val.at_long; /*wall time value*/\n}\n\n/**\n 
* @brief\n * \t\tget the amount of \"walltime\" resource USED for the job\n *\n * @param[in]\tjp\t-\tPointer to a job\n *\n * @return\tint\n * @retval\t-1\t: function failed\n * @retval\twalltime used\t: function succeeded\n *\n * @note\n * \t\tAssumption: input jp is a valid job pointer\n */\nint\nget_used_wall(job *jp)\n{\n\tresource_def *rscdef;\n\tresource *pres;\n\n\trscdef = &svr_resc_def[RESC_WALLTIME];\n\tif (rscdef == 0)\n\t\treturn (-1);\n\tpres = find_resc_entry(get_jattr(jp, JOB_ATR_resc_used), rscdef);\n\tif (pres == 0)\n\t\treturn (-1);\n\telse if (!is_attr_set(&pres->rs_value))\n\t\treturn (-1);\n\telse\n\t\treturn pres->rs_value.at_val.at_long; /*wall time value*/\n}\n\n/**\n * @brief\n * \t\tget_softwall - get the value of \"soft_walltime\" for the job\n *\n * @param[in]\tjp\t-\tjp is a valid job pointer\n *\n * @return\tint\n * @retval\t-1\t: function failed\n * @retval\tsoft walltime value\t: function succeeded\n *\n * @note\n * \t\tAssumption: input jp is a valid job pointer\n */\nint\nget_softwall(job *jp)\n{\n\tresource_def *rscdef;\n\tresource *pres;\n\n\trscdef = &svr_resc_def[RESC_SOFT_WALLTIME];\n\tif (rscdef == 0)\n\t\treturn (-1);\n\tpres = find_resc_entry(get_jattr(jp, JOB_ATR_resource), rscdef);\n\tif (pres == 0)\n\t\treturn (-1);\n\telse if (!is_attr_set(&pres->rs_value))\n\t\treturn (-1);\n\telse\n\t\treturn pres->rs_value.at_val.at_long; /*soft walltime value*/\n}\n\n/**\n * @brief\n * \t\tget_cput - get the value of \"cput\" for the job\n *\n * @param[in]\tjp\t-\tjp is a valid job pointer\n *\n * @return\tint\n * @retval\t-1\t: function failed\n * @retval\tcput value\t: function succeeded\n *\n * @note\n * \t\tAssumption: input jp is a valid job pointer\n */\nint\nget_cput(job *jp)\n{\n\tresource_def *rscdef;\n\tresource *pres;\n\n\trscdef = &svr_resc_def[RESC_CPUT];\n\tif (rscdef == 0)\n\t\treturn (-1);\n\tpres = find_resc_entry(get_jattr(jp, JOB_ATR_resource), 
rscdef);\n\tif (pres == 0)\n\t\treturn (-1);\n\telse if (!is_attr_set(&pres->rs_value))\n\t\treturn (-1);\n\telse\n\t\treturn pres->rs_value.at_val.at_long; /*cput value*/\n}\n\n/**\n * @brief\n * \t\tget the amount of \"cput\" resource USED for the job\n *\n * @param[in]\tjp\t-\tPointer to a job\n *\n * @return\tint\n * @retval\t-1\t: function failed\n * @retval\tcput used\t: function succeeded\n *\n * @note\n * \t\tAssumption: input jp is a valid job pointer\n */\nint\nget_used_cput(job *jp)\n{\n\tresource_def *rscdef;\n\tresource *pres;\n\n\trscdef = &svr_resc_def[RESC_CPUT];\n\tif (rscdef == 0)\n\t\treturn (-1);\n\tpres = find_resc_entry(get_jattr(jp, JOB_ATR_resc_used), rscdef);\n\tif (pres == 0)\n\t\treturn (-1);\n\telse if (!is_attr_set(&pres->rs_value))\n\t\treturn (-1);\n\telse\n\t\treturn pres->rs_value.at_val.at_long; /*cput value*/\n}\n\n/*-------------------------------------------------------------------------------\n Functions for establishing reservation related tasks\n --------------------------------------------------------------------------------*/\n/**\n * @brief\n * \t\tTime4reply\t-\treply when reservation becomes unconfirmed.\n *\n * @param[in,out]\tptask\t-\twork task structure which contains reservation structure.\n */\nstatic void\nTime4reply(struct work_task *ptask)\n{\n\tresc_resv *presv = ptask->wt_parm1;\n\n\tif (presv->ri_brp) {\n\t\tchar buf[512] = {0};\n\t\tif (presv->ri_qs.ri_state == RESV_UNCONFIRMED ||\n\t\t    presv->ri_qs.ri_state == RESV_BEING_ALTERED)\n\t\t\tsnprintf(buf, sizeof(buf), \"%s UNCONFIRMED\", presv->ri_qs.ri_resvID);\n\t\telse if (presv->ri_qs.ri_state == RESV_CONFIRMED) {\n\t\t\t/*Remark: this part of the if is unlikely to happen*/\n\t\t\t/*        reply would happen in req_rescreserve()  */\n\t\t\tsnprintf(buf, sizeof(buf), \"%s CONFIRMED\", presv->ri_qs.ri_resvID);\n\t\t}\n\n\t\t(void) reply_text(presv->ri_brp, PBSE_NONE, 
buf);\n\t\tpresv->ri_brp = NULL;\n\t}\n}\n\n/**\n * @brief\n * \t\tTime4resv - function to execute when the \"start time\" for a\n *\t\tCONFIRMED reservation finally arrives.\n *\t\tAt some prior point in time a task that's to be processed\n *\t\tat \"start time\" is put onto  the \"timed-tasks\" list.  This\n *\t\ttask's function pointer field points at this function -\n *\t\tsee \"gen_task_Time4resv\" regards the task on the \"timed_tasks\"\n *\t\tlist.\n *\t\tA pointer to the resc_resv structure is put in the task's\n *\t\t\"wt_parm1\" void* field.\n * @note\n *\t\tNote: function \"dispatch_task\" unlinks the task structure\n *\t      from whatever list(s) it's on and it frees the memory\n *\t      consumed by the work_task struct\n *\n * @param[in,out]\tptask\t-\twork task structure which contains reservation structure.\n *\n *\tReturns   none\n */\nstatic void\nTime4resv(struct work_task *ptask)\n{\n\tresc_resv *presv = ptask->wt_parm1;\n\tint pbs_ecode;\n\tint state, sub;\n\n\t/* cause to have issued to the qmgr subsystem\n\t * a request to start the reservation's queue\n\t * note: if presv is for a job reservation no\n\t * request gets made\n\t */\n\n\tpbs_ecode = change_enableORstart(presv, Q_CHNG_START, \"True\");\n\tif (!pbs_ecode) {\n\t\t/*\n\t\t *this is really the line we want once the scheduler\n\t\t *has the capability to say \"begin this reservation\"\n\t\t */\n\n\t\teval_resvState(presv, RESVSTATE_Time4resv, 0, &state, &sub);\n\t\tresv_setResvState(presv, state, sub);\n\n\t\t/*ok, time for the reservation to be running so adjust\n\t\t *server's/queue's resource accounting to reflect that\n\t\t *their \"resources_assigned\" values are now higher by the\n\t\t *amounts requested by the reservation.  
Also, set a flag to\n\t\t *indicate that in the future resources have to be returned\n\t\t *and, setup so that the scheduler gets notified\n\t\t */\n\t\tif (!presv->resv_from_job)\n\t\t\tset_resc_assigned((void *) presv, 1, INCR);\n\t\tpresv->ri_giveback = 1;\n\n\t\tresv_exclusive_handler(presv);\n\t\tnotify_scheds_about_resv(SCH_SCHEDULE_JOBRESV, presv);\n\n\t\t/*notify the relevant persons that the reservation time has arrived*/\n\t\tif (presv->ri_qs.ri_tactive == time_now) {\n\t\t\tsvr_mailownerResv(presv, MAIL_BEGIN, MAIL_NORMAL, \"\");\n\t\t\taccount_resvstart(presv);\n\n\t\t\t/* make an artificial request so we can fire process hooks */\n\t\t\tstruct batch_request *preq = alloc_br(PBS_BATCH_BeginResv);\n\t\t\tpreq->rq_perm |= ATR_DFLAG_MGWR;\n\t\t\tstrncpy(preq->rq_user, pbs_current_user, PBS_MAXUSER);\n\t\t\tstrncpy(preq->rq_host, server_host, PBS_MAXHOSTNAME);\n\t\t\tstrncpy(preq->rq_ind.rq_manager.rq_objname, presv->ri_qs.ri_resvID, PBS_MAXSVRRESVID);\n\t\t\t/* handle truncation warning */\n\t\t\tpreq->rq_ind.rq_manager.rq_objname[PBS_MAXSVRJOBID] = '\\0';\n\n\t\t\tchar hook_msg[HOOK_MSG_SIZE] = {0};\n\t\t\tswitch (process_hooks(preq, hook_msg, sizeof(hook_msg), pbs_python_set_interrupt)) {\n\t\t\t\tcase 0: /* explicit reject */\n\t\t\t\tcase 1: /* no recreate request as there are only read permissions */\n\t\t\t\tcase 2: /* no hook script executed - go ahead and accept event*/\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_INFO, __func__,\n\t\t\t\t\t\t  \"resv_begin event: accept req by default\");\n\t\t\t}\n\t\t\tfree_br(preq);\n\t\t}\n\n\t\tpresv->resv_start_task = NULL;\n\t\tif ((ptask = set_task(WORK_Timed, time_now + 60,\n\t\t\t\t      Time4resv1, presv)) != 0) {\n\n\t\t\tptask->wt_aux = 4; /*we will attempt up to 5 times*/\n\n\t\t\t/* set things so that the reservation going away causes */\n\t\t\t/* any \"yet to be processed\" work tasks also going away */\n\n\t\t\tappend_link(&presv->ri_svrtask, 
&ptask->wt_linkobj, ptask);\n\t\t}\n\t}\n\n\tif (is_rattr_set(presv, RESV_ATR_del_idle_time)) {\n\t\t/* Catch the idle case where the reservation never has any jobs in it */\n\t\tset_idle_delete_task(presv);\n\t}\n}\n\n/**\n * @brief\n * \t\tTime4resv1 - function that's executed when a \"n-th reminder\"\n *\t\ton the timed-tasks list gets dispatched.\n * @par\n *\t\tThis function checks if the reservation is still in state\n *\t\tRESV_TIME_TO_RUN and, if it is, it sets into the global\n *\t\tserver variable \"svr_do_schedule\" an appropriate command\n *\t\tfor the scheduler, notes how many times it has done this\n *\t\tand, from that, determines whether or not to put itself\n *\t\tback on the task list.\n *\n * @param[in,out]\tptask\t-\twork task structure which contains reservation structure.\n *\n *\t@return   none\n */\nstatic void\nTime4resv1(struct work_task *ptask)\n{\n\tstruct work_task *pwt;\n\tresc_resv *presv = ptask->wt_parm1;\n\n\tif (get_rattr_long(presv, RESV_ATR_state) != RESV_TIME_TO_RUN)\n\t\treturn; /*no more reminders needed*/\n\n\t/*put on another reminder timed for 60 seconds in the future*/\n\tif (ptask->wt_aux > 0) {\n\t\tif ((pwt = set_task(WORK_Timed, time_now + 60,\n\t\t\t\t    Time4resv1, presv)) != 0) {\n\n\t\t\tpwt->wt_aux = ptask->wt_aux - 1;\n\n\t\t\t/* set things so that the job going away will result in */\n\t\t\t/* any \"yet to be processed\" work tasks also going away */\n\n\t\t\tappend_link(&presv->ri_svrtask, &pwt->wt_linkobj, pwt);\n\t\t}\n\t}\n\n#if 0\n\tFor general reservation, where duration can be less than end - start\n\ttime, the scheduler can(eventually) choose when to run the reservation\n\tso going to want something different than what we currently have for\n\t\treservation jobs\n#endif\n\n\t/* specify the scheduling command for the scheduler */\n\tset_scheduler_flag(SCH_SCHEDULE_JOBRESV, dflt_scheduler);\n}\n\n/**\n * @brief\n * \t\tTime4resvFinish - function that's to execute when \"time_now\" exceeds\n *\t\tthe 
ending time of the reservation\n *\n * @param[in,out]\tptask\t-\twork task structure which contains reservation structure.\n *\n *\tReturns   none\n */\nvoid\nTime4resvFinish(struct work_task *ptask)\n{\n\tresc_resv *presv = ptask->wt_parm1;\n\tstruct batch_request *preq;\n\n\t/* If more than one occurrence then process the occurrence end. The sequence\n\t * of events that are needed for the end of a standing reservation are:\n\t *\n\t * 1) change the queue state from started True to False\n\t * 2) Delete all Running Jobs and Keep Queued Jobs\n\t * 3) Once all Obits are received (see running_jobs_count):\n\t *    3.a) Determine if occurrences were missed\n\t *    3.b) Add the next occurrence start and end event on the work task\n\t */\n\tpresv->resv_end_task = NULL;\n\tif (get_rattr_long(presv, RESV_ATR_resv_count) > 1) {\n\t\tint ridx = get_rattr_long(presv, RESV_ATR_resv_idx);\n\t\tint rcount = get_rattr_long(presv, RESV_ATR_resv_count);\n\n\t\tDBPRT((\"reached end of occurrence %d/%d\\n\", ridx, rcount))\n\t\tlog_eventf(PBSEVENT_DEBUG, PBS_EVENTCLASS_RESV, LOG_NOTICE, presv->ri_qs.ri_resvID,\n\t\t\t\t       \"reached end of occurrence %d/%d\", ridx, rcount);\n\n\t\t/* When recovering past the last occurrence the standing reservation is purged\n\t\t * in a manner similar to an advance reservation\n\t\t */\n\t\tif (ridx < rcount) {\n\t\t\t/*\n\t\t\t * Invoke the reservation end hook for every occurrence\n\t\t\t */\n\t\t\tstruct batch_request *newreq;\n\t\t\tnewreq = alloc_br(PBS_BATCH_ResvOccurEnd);\n\t\t\tif (newreq != NULL) {\n\t\t\t\tnewreq->rq_perm |= ATR_DFLAG_MGWR;\n\t\t\t\tstrcpy(newreq->rq_user, pbs_current_user);\n\t\t\t\tstrcpy(newreq->rq_host, server_host);\n\t\t\t\tstrcpy(newreq->rq_ind.rq_manager.rq_objname, presv->ri_qs.ri_resvID);\n\t\t\t\tif (issue_Drequest(PBS_LOCAL_CONNECTION, newreq, resvFinishReply, NULL, 0) == -1) {\n\t\t\t\t\tfree_br(newreq);\n\t\t\t\t}\n\t\t\t\ttickle_for_reply();\n\t\t\t}\n\t\t\t/* 1) Change queue state from started True to 
False and change\n\t\t\t * state of the reservation queue\n\t\t\t */\n\t\t\tchange_enableORstart(presv, Q_CHNG_START, \"FALSE\");\n\t\t\tresv_setResvState(presv, RESV_DELETING_JOBS, presv->ri_qs.ri_substate);\n\n\t\t\t/* 2) Issue delete messages to jobs in running state and keep jobs in\n\t\t\t * Queued state. Server periodically monitors the reservation queue\n\t\t\t * to determine if all jobs in run state have been purged.\n\t\t\t */\n\t\t\tdelete_occurrence_jobs(presv);\n\n\t\t\t/* Done processing the current occurrence. If MOM locks up during cleanup\n\t\t\t * or stage out for a duration that exceeds the time of the last occurrence\n\t\t\t * then this handler will be invoked again with an occurrence index (ridx)\n\t\t\t * equal to the last occurrence (rcount) and be processed as a DeleteReservation\n\t\t\t * event in the next block.\n\t\t\t */\n\t\t\treturn;\n\t\t}\n\t}\n\t/* If an advance reservation or last occurrence of a standing reservation,\n\t * construct a \"deleteResv\" batch request for the dummy connection\n\t * PBS_LOCAL_CONNECTION; Issue that request via \"issue_Drequest\".\n\t * \"issue_Drequest\" will notice this request is to be handled here\n\t * and call upon \"dispatch_request\", the mechanism for dispatching of\n\t * incoming requests.  \"issue_Drequest\" is passed a function that's\n\t * to deal with the reply to the request when it arrives.  A task\n\t * having this reply handling function (pointer) is placed on the\n\t * global list, \"task_list_event\".  Request dispatching proceeds\n\t * as it normally does and invokes the function \"reply_send\", which\n\t * is to send back a reply to the batch request.  
In this instance,\n\t * (response recipient local) reply_send moves the task of dealing\n\t * with the \"reply to request\" on to \"task_list_immediate\", so it\n\t * can get recognized the next time function \"next_task\" in the\n\t * server's main loop gets invoked\n\t */\n\tif ((preq = alloc_br(PBS_BATCH_DeleteResv)) != 0) {\n\t\t/*setup field so don't fail a check on perm*/\n\t\tpreq->rq_perm |= ATR_DFLAG_MGWR;\n\n\t\tstrcpy(preq->rq_user, pbs_current_user);\n\t\tstrcpy(preq->rq_host, server_host);\n\t\tstrcpy(preq->rq_ind.rq_manager.rq_objname,\n\t\t       presv->ri_qs.ri_resvID);\n\n\t\t/*notify relevant parties that the reservation's\n\t\t *ending time has arrived and reservation is being deleted\n\t\t */\n\t\tsvr_mailownerResv(presv, MAIL_END, MAIL_NORMAL, \"\");\n\n\t\tset_last_used_time_node(presv, 1);\n\t\t(void) issue_Drequest(PBS_LOCAL_CONNECTION, preq,\n\t\t\t\t      resvFinishReply, NULL, 0);\n\t\ttickle_for_reply();\n\t}\n}\n\n/**\n * @brief\n * \t\tIf processing a Standing Reservation\n * \t\t1) Get the occurrence index and the total number of occurrences,\n *    \t\tif this is the last occurrence, an event to purge reservation is added to\n *    \t\tthe work task.\n * \t\t2) If not last, then set the next occurrence's start and end time and\n *    \t\tappropriate execvnodes and add to server's work task\n * \t\t3) Update state and save reservation\n * @par\n * \t\tThis function is also entered upon reservation recovery to handle skipped\n * \t\toccurrences.\n *\n * @param[in]\tpresv\t-\tStanding Reservation\n */\nstatic void\nTime4occurrenceFinish(resc_resv *presv)\n{\n\ttime_t newend;\n\ttime_t newstart;\n\tint state = 0;\n\tint sub = 0;\n\tint rc = 0;\n\tint rcount_adjusted = 0;\n\tchar *execvnodes_orig = NULL;\n\tchar *execvnodes = NULL;\n\tchar *newxc = NULL;\n\tchar **short_xc = NULL;\n\tchar **tofree = NULL;\n\ttime_t dtstart;\n\ttime_t dtend;\n\ttime_t next;\n\ttime_t now;\n\tstruct work_task *ptask = NULL;\n\tpbsnode_list_t *pl = 
NULL;\n\tchar start_time[9] = {0}; /* 9 = sizeof(\"%H:%M:%S\")[=8] + 1('\\0') */\n\tresource_def *rscdef = NULL;\n\tresource *prsc = NULL;\n\tattribute atemp = {0};\n\tint j = 2;\n\tint occurrence_ended_early = 0;\n\tint ridx = get_rattr_long(presv, RESV_ATR_resv_idx);\n\tint rcount = get_rattr_long(presv, RESV_ATR_resv_count);\n\tchar *rrule = get_rattr_str(presv, RESV_ATR_resv_rrule);\n\tchar *tz = get_rattr_str(presv, RESV_ATR_resv_timezone);\n\n\t/* the next occurrence returned by get_occurrence is counted from the current\n\t * one which is at index 1. */\n\n\t/* If the reservation was altered,\n\t * use the stored values in RESV_ATR_standing_revert.\n\t */\n\tif (is_rattr_set(presv, RESV_ATR_standing_revert)) {\n\t\tresource *resc, *resc2;\n\t\tattribute *stnd_revert = get_rattr(presv, RESV_ATR_standing_revert);\n\t\tattribute *resc_attr = get_rattr(presv, RESV_ATR_resource);\n\n\t\tresc = find_resc_entry(stnd_revert, &svr_resc_def[RESC_START_TIME]);\n\t\tdtstart = resc->rs_value.at_val.at_long;\n\n\t\tresc = find_resc_entry(stnd_revert, &svr_resc_def[RESC_WALLTIME]);\n\t\tset_rattr_l_slim(presv, RESV_ATR_duration, resc->rs_value.at_val.at_long, SET);\n\t\tpresv->ri_qs.ri_duration = resc->rs_value.at_val.at_long;\n\n\t\tresc = find_resc_entry(resc_attr, &svr_resc_def[RESC_SELECT]);\n\t\tresc2 = find_resc_entry(stnd_revert, &svr_resc_def[RESC_SELECT]);\n\t\tfree(resc->rs_value.at_val.at_str);\n\t\tresc->rs_value.at_val.at_str = strdup(resc2->rs_value.at_val.at_str);\n\t\tpost_attr_set(resc_attr);\n\t\tmake_schedselect(resc_attr, resc, NULL, get_rattr(presv, RESV_ATR_SchedSelect));\n\t\tset_chunk_sum(&resc->rs_value, resc_attr);\n\t} else\n\t\tdtstart = get_rattr_long(presv, RESV_ATR_start);\n\n\tdtend = get_rattr_long(presv, RESV_ATR_end);\n\tnext = dtstart;\n\tnow = time(NULL);\n\n\t/* Add next occurrence and account for missed occurrences.\n\t * There are three ways we can get into this function:\n\t * 1) When the server is initializing.  
We need to account for all occurrences we have missed.\n\t * 2) The end of an occurrence.  We need to move onto the next.\n\t * 3) If an occurrence ends early.  We need to move onto the next.\n\t */\n\tif (presv->ri_qs.ri_substate == RESV_RUNNING && next < now)\n\t\toccurrence_ended_early = 1;\n\twhile (occurrence_ended_early || dtend <= now) {\n\t\t/* We may loop and skip several occurrences for different reasons,\n\t\t * if an occurrence ended early, it can only be the one we are in\n\t\t */\n\t\toccurrence_ended_early = 0;\n\t\t/* get occurrence that is \"j\" numbers away from dtstart. */\n\t\tnext = get_occurrence(rrule, dtstart, tz, j);\n\t\tdtend = next + presv->ri_qs.ri_duration;\n\n\t\t/* Index of next occurrence from dtstart */\n\t\tj++;\n\n\t\t/* Log information notifying of missed occurrences. An occurrence is\n\t\t * \"missed\" either if it was interrupted, in which case it never was\n\t\t * instructed to \"give back\" its allocated resources, or if the server\n\t\t * was down for an extended period of time extending over a number of\n\t\t * occurrences.\n\t\t * The first time around j has the value 2 and is incremented to 3 to\n\t\t * account for the next occurrence. Any increments after that characterize\n\t\t * missed occurrences that are noted in the log file. 
*/\n\t\tif (j > 3 || presv->ri_giveback == 0) {\n\t\t\tif (strftime(start_time, sizeof(start_time),\n\t\t\t\t     \"%H:%M:%S\", localtime(&dtstart))) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"reservation occurrence %d/%d \"\n\t\t\t\t\t\"scheduled at %s was skipped because \"\n\t\t\t\t\t\"its end time is in the past\",\n\t\t\t\t\tridx, rcount, start_time);\n\t\t\t} else {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"reservation occurrence %d/%d was \"\n\t\t\t\t\t\"skipped because its end time is in \"\n\t\t\t\t\t\"the past\",\n\t\t\t\t\tridx, rcount);\n\t\t\t}\n\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_RESV,\n\t\t\t\t  LOG_NOTICE, presv->ri_qs.ri_resvID,\n\t\t\t\t  log_buffer);\n\t\t}\n\n\t\t/* The reservation index is incremented */\n\t\tridx++;\n\n\t\t/* If skipped past the last occurrence then return to the\n\t\t * caller which will handle issuing a reservation delete\n\t\t * message\n\t\t */\n\t\tif (ridx > rcount) {\n\t\t\tset_rattr_l_slim(presv, RESV_ATR_resv_idx, rcount, SET);\n\n\t\t\tif ((ptask = set_task(WORK_Immed, 0, Time4resvFinish, presv)) != 0)\n\t\t\t\tappend_link(&presv->ri_svrtask, &ptask->wt_linkobj, ptask);\n\n\t\t\treturn;\n\t\t}\n\n\t\tDBPRT((\"stdg_resv: next occurrence start = %s\", ctime(&next)))\n\t\tDBPRT((\"stdg_resv: next occurrence end   = %s\", ctime(&dtend)))\n\t}\n\tif (is_rattr_set(presv, RESV_ATR_resv_execvnodes))\n\t\texecvnodes_orig = get_rattr_str(presv, RESV_ATR_resv_execvnodes);\n\tif (execvnodes_orig != NULL) {\n\t\tDBPRT((\"stdg_resv: execvnodes sequence   = %s\\n\", execvnodes_orig))\n\t\texecvnodes = strdup(execvnodes_orig);\n\t} else {\n\t\tDBPRT((\"stdg_resv: execvnodes sequence missing\"))\n\t\t;\n\t}\n\tshort_xc = (char **) unroll_execvnode_seq(execvnodes, &tofree);\n\n\t/* when a reservation is reconfirmed, the 'count' of occurrences may differ\n\t * from the original 'count', we need to adjust for the actual remaining\n\t * count\n\t */\n\trcount_adjusted = rcount - get_execvnodes_count(execvnodes);\n\n\t/* The 
reservation index starts at 1 but the short_xc array at 0. Occurrence 1\n\t * is therefore given by array element 0.\n\t */\n\tif (ridx - rcount_adjusted >= 1 && short_xc != NULL)\n\t\tnewxc = strdup(short_xc[ridx - rcount_adjusted - 1]);\n\telse {\n\t\tnewxc = NULL;\n\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_RESV, LOG_NOTICE, presv->ri_qs.ri_resvID,\n\t\t           \"%s: attempt to find vnodes for occurrence %d failed; using empty set\",\n\t\t           __func__, ridx);\n\t}\n\n\t/* clean up helper variables */\n\tfree(short_xc);\n\tfree(execvnodes);\n\tfree_execvnode_seq(tofree);\n\n\t/* Set reservation state to finished. Will re-evaluate\n\t * the state for the next occurrence later in the function.\n\t */\n\tresv_setResvState(presv, RESV_FINISHED, RESV_FINISHED);\n\n\t/* Decrement resources assigned */\n\tif (presv->ri_giveback == 1) {\n\t\tset_resc_assigned((void *) presv, 1, DECR);\n\t\tpresv->ri_giveback = 0;\n\t}\n\n\t/* Reservation Nodes are freed and a -possibly- new set assigned */\n\tfree_resvNodes(presv);\n\t/* set ri_vnodes_down to 0 because the previous occurrence's downed nodes might\n\t * not exist in the following occurrence.  
The new occurrence's ri_vnodes_down\n\t * will be set properly in set_nodes()\n\t */\n\tpresv->ri_vnodes_down = 0;\n\n\t/* Set the new start time, end time, and occurrence index */\n\tnewstart = next;\n\tnewend = (time_t)(newstart + presv->ri_qs.ri_duration);\n\n\tset_rattr_l_slim(presv, RESV_ATR_start, newstart, SET);\n\tpresv->ri_qs.ri_stime = newstart;\n\n\tset_rattr_l_slim(presv, RESV_ATR_end, newend, SET);\n\tpresv->ri_qs.ri_etime = newend;\n\n\tset_rattr_l_slim(presv, RESV_ATR_resv_idx, ridx, SET);\n\tset_rattr_l_slim(presv, RESV_ATR_duration, presv->ri_qs.ri_duration, SET);\n\n\trscdef = &svr_resc_def[RESC_WALLTIME];\n\tprsc = find_resc_entry(get_rattr(presv, RESV_ATR_resource), rscdef);\n\tatemp.at_flags = ATR_VFLAG_SET;\n\tatemp.at_type = ATR_TYPE_LONG;\n\tatemp.at_val.at_long = presv->ri_qs.ri_duration;\n\trscdef->rs_set(&prsc->rs_value, &atemp, SET);\n\tpost_attr_set(get_rattr(presv, RESV_ATR_resource));\n\n\t/* Assign the allocated resources to the reservation\n\t * and the reservation to the associated vnodes\n\t */\n\trc = assign_resv_resc(presv, newxc, FALSE);\n\tfree(newxc);\n\n\tif (rc != PBSE_NONE) {\n\t\tsprintf(log_buffer, \"problem assigning resource to reservation occurrence (%d)\", rc);\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_RESV, LOG_NOTICE, presv->ri_qs.ri_resvID, log_buffer);\n\t\tresv_setResvState(presv, RESV_DEGRADED, RESV_DEGRADED);\n\t\t/* avoid skipping a reconfirmation */\n\t\tpresv->ri_degraded_time = newstart;\n\t\tforce_resv_retry(presv, determine_resv_retry(presv));\n\t\tresv_save_db(presv);\n\t\treturn;\n\t}\n\n\t/* place \"Time4resv\" task on \"task_list_timed\" */\n\tif ((rc = gen_task_Time4resv(presv)) != 0) {\n\t\tsprintf(log_buffer, \"problem generating task Time for occurrence (%d)\", rc);\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_RESV, LOG_NOTICE, presv->ri_qs.ri_resvID, log_buffer);\n\t\tresv_setResvState(presv, RESV_DEGRADED, RESV_DEGRADED);\n\t\t/* avoid skipping a reconfirmation 
*/\n\t\tpresv->ri_degraded_time = newstart;\n\t\tforce_resv_retry(presv, determine_resv_retry(presv));\n\t\tresv_save_db(presv);\n\t\treturn;\n\t}\n\t/* add task to handle the end of the next occurrence */\n\tif ((rc = gen_task_EndResvWindow(presv)) != 0) {\n\t\t(void) resv_purge(presv);\n\t\tsprintf(log_buffer, \"problem generating reservation end task for occurrence (%d); \"\n\t\t        \"purging reservation\", rc);\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_RESV, LOG_NOTICE, presv->ri_qs.ri_resvID, log_buffer);\n\t\treturn;\n\t}\n\n\t/* compute new values for state and substate */\n\teval_resvState(presv, RESVSTATE_gen_task_Time4resv, 1, &state, &sub);\n\n\t/*\n\t * Walk the nodes list associated to this reservation to determine if any\n\t * node is unavailable. If so, mark this next occurrence as degraded\n\t */\n\tfor (pl = presv->ri_pbsnode_list; pl != NULL; pl = pl->next) {\n\t\tif ((pl->vnode->nd_state & (INUSE_OFFLINE | INUSE_OFFLINE_BY_MOM | INUSE_DOWN | INUSE_UNKNOWN)) != 0) {\n\t\t\tDBPRT((\"vnode %s unavailable\\n\", pl->vnode->nd_name))\n\t\t\tstate = RESV_DEGRADED;\n\t\t\tsub = RESV_DEGRADED;\n\n\t\t\tpresv->ri_degraded_time = newstart;\n\t\t\tbreak;\n\t\t}\n\t}\n\n\t/* All nodes of this occurrence are up, mark reservation confirmed */\n\tif (pl == NULL)\n\t\tstate = RESV_CONFIRMED;\n\n\t/* If the reservation already has a retry time set then its substate is\n\t * marked degraded.  
If all degraded occurrences are in the past, the\n\t * scheduler will fix this on the next retry attempt.\n\t */\n\tif (is_rattr_set(presv, RESV_ATR_retry)) {\n\t\tsub = RESV_DEGRADED;\n\t\tif (get_rattr_long(presv, RESV_ATR_retry) > 0 && get_rattr_long(presv, RESV_ATR_retry) <= time_now)\n\t\t\tset_resv_retry(presv, time_now + 120);\n\t}\n\n\tif (sub == RESV_DEGRADED) {\n\t\tDBPRT((\"degraded_time of %s is %s\", presv->ri_qs.ri_resvID, ctime(&presv->ri_degraded_time)))\n\t}\n\n\t/* Set the reservation state and substate */\n\tresv_setResvState(presv, state, sub);\n\n\tresv_save_db(presv);\n}\n\n/**\n * @brief\n * \t\tHandler to check on number of remaining jobs in RUNNING/EXITING state.\n * \t\tThe jobs asynchronously update their state as they get purged from the system.\n * \t\tOnce all jobs have been purged, the process for adding the next occurrence is\n * \t\ttriggered.\n *\n * @param[in,out]\tptask\t-\twork task structure which contains reservation\n */\nstatic void\nrunning_jobs_count(struct work_task *ptask)\n{\n\tresc_resv *presv;\n\tint rj;\n\tpresv = (resc_resv *) ptask->wt_parm1;\n\n\t/* Number of remaining jobs in RUNNING/EXITING state */\n\trj = presv->ri_qp->qu_njstate[JOB_STATE_RUNNING] + presv->ri_qp->qu_njstate[JOB_STATE_EXITING];\n\n\tif (rj == 0)\n\t\t/* If none are left then process the next occurrence */\n\t\tTime4occurrenceFinish(presv);\n\telse\n\t\t/* If some are left then issue another set of requests to clean up */\n\t\tdelete_occurrence_jobs(presv);\n}\n\n/**\n * @brief\n * \t\tDelete all Running jobs associated to a standing reservation queue.\n * \t\tThe queued jobs will remain queued\n *\n * @param[in,out]\tpresv\t-\tThe reservation to obtain queue and jobs from\n *\n */\nstatic void\ndelete_occurrence_jobs(resc_resv *presv)\n{\n\tjob *pjob, *pnxj;\n\tstruct work_task *ptask;\n\n\tpjob = (job *) GET_NEXT(presv->ri_qp->qu_jobs);\n\twhile (pjob != NULL) {\n\t\t/* Get the next job from the queue before the job is unlinked as a 
result\n\t\t * of job_abt\n\t\t */\n\t\tpnxj = (job *) GET_NEXT(pjob->ji_jobque);\n\t\tif (check_job_state(pjob, JOB_STATE_LTR_RUNNING) && !check_job_substate(pjob, JOB_SUBSTATE_ABORT))\n\t\t\t(void) job_abt(pjob, \"Deleting running job at end of reservation occurrence\");\n\n\t\tpjob = pnxj;\n\t}\n\t/* Check if all running jobs have been cleaned up every 5 seconds.\n\t * Link the work task into the server's reservation info work tasks such that\n\t * the work task gets deleted when the reservation is deleted.\n\t * This can happen if a pbs_rdel is invoked on the reservation while it is\n\t * processing the deletion of running jobs. */\n\tif ((ptask = set_task(WORK_Timed, time_now + 5, running_jobs_count, presv)) != 0)\n\t\tappend_link(&presv->ri_svrtask, &ptask->wt_linkobj, ptask);\n}\n\n/**\n * @brief\n * \t\tTime4_term - function that's to execute when the server wants to delete\n *\t\ta reservation, e.g. when the server needs to delete the reservation\n *\t\tpart of a \"reservation job\".\n *\n * @param[in]\tptask\t-\tThe reservation to be deleted.\n *\n *\t@return\tnone\n */\nvoid\nTime4_term(struct work_task *ptask)\n{\n\tresc_resv *presv = ptask->wt_parm1;\n\tstruct batch_request *preq;\n\n\t/*construct a \"deleteResv\" batch request for the dummy connection\n\t *PBS_LOCAL_CONNECTION; Issue that request via \"issue_Drequest\".\n\t *\"issue_Drequest\" will notice this request is to be handled here\n\t *and call upon \"dispatch_request\", the mechanism for dispatching of\n\t *incoming requests.  \"issue_Drequest\" is passed a function that's\n\t *to deal with the reply to the request when it arrives.  A task\n\t *having this reply handling function (pointer) is placed on the\n\t *global list, \"task_list_event\".  Request dispatching proceeds\n\t *as it normally does and invokes the function \"reply_send\", which\n\t *is to send back a reply to the batch request.  
In this instance,\n\t *(response recipient local) reply_send moves the task of dealing\n\t *with the \"reply to request\" on to \"task_list_immediate\", so it\n\t *can get recognized the next time function \"next_task\" in the\n\t *server's main loop gets invoked\n\t */\n\n\tif ((preq = alloc_br(PBS_BATCH_DeleteResv)) != 0) {\n\t\t/*setup field so don't fail a check on perm*/\n\t\tpreq->rq_perm |= ATR_DFLAG_MGWR;\n\n\t\tstrcpy(preq->rq_user, pbs_current_user);\n\t\tstrcpy(preq->rq_host, server_host);\n\t\tstrcpy(preq->rq_ind.rq_manager.rq_objname,\n\t\t       presv->ri_qs.ri_resvID);\n\n\t\t(void) issue_Drequest(PBS_LOCAL_CONNECTION, preq,\n\t\t\t\t      resvFinishReply, NULL, 0);\n\n\t\t/*notify relevant parties that the reservation's\n\t\t *ending time has arrived and reservation is being deleted\n\t\t */\n\t\tsvr_mailownerResv(presv, MAIL_END, MAIL_NORMAL, \"\");\n\n\t\ttickle_for_reply();\n\t\tset_last_used_time_node(presv, 1);\n\t}\n}\n\n/**\n * @brief\n * \t\tTime4_I_term - function that's to execute when an \"interactive\"\n *\t\treservation is submitted with a negative \"I\" value.  If the state\n *\t\ton the reservation is UNCONFIRMED a delete request is generated.\n *\t\tIf the state is not UNCONFIRMED, this task function does nothing.\n *\n * @param[in]\tptask\t-\tThe reservation submitted with a negative \"I\" value.\n *\n *\t@return\tnone\n */\nvoid\nTime4_I_term(struct work_task *ptask)\n{\n\tresc_resv *presv = ptask->wt_parm1;\n\tstruct batch_request *preq;\n\n\tif (presv->ri_qs.ri_state != RESV_UNCONFIRMED)\n\t\treturn;\n\n\t/*construct a \"deleteResv\" batch request for the dummy connection\n\t *PBS_LOCAL_CONNECTION; Issue that request via \"issue_Drequest\".\n\t *\"issue_Drequest\" will notice this request is to be handled here\n\t *and call upon \"dispatch_request\", the mechanism for dispatching of\n\t *incoming requests.  \"issue_Drequest\" is passed a function that's\n\t *to deal with the reply to the request when it arrives.  
A task\n\t *having this reply handling function (pointer) is placed on the\n\t *global list, \"task_list_event\".  Request dispatching proceeds\n\t *as it normally does and invokes the function \"reply_send\", which\n\t *is to send back a reply to the batch request.  In this instance,\n\t *(response recipient local) reply_send moves the task of dealing\n\t *with the \"reply to request\" on to \"task_list_immediate\", so it\n\t *can get recognized the next time function \"next_task\" in the\n\t *server's main loop gets invoked\n\t */\n\n\tif ((preq = alloc_br(PBS_BATCH_DeleteResv)) != 0) {\n\t\t/*setup field so don't fail a check on perm*/\n\t\tpreq->rq_perm |= ATR_DFLAG_MGWR;\n\n\t\tstrcpy(preq->rq_user, pbs_current_user);\n\t\tstrcpy(preq->rq_host, server_host);\n\t\tstrcpy(preq->rq_ind.rq_manager.rq_objname,\n\t\t       presv->ri_qs.ri_resvID);\n\n\t\t(void) issue_Drequest(PBS_LOCAL_CONNECTION, preq,\n\t\t\t\t      resvFinishReply, NULL, 0);\n\n\t\t/*notify relevant parties that the reservation's\n\t\t *ending time has arrived and reservation is being deleted\n\t\t */\n\t\tsvr_mailownerResv(presv, MAIL_END, MAIL_NORMAL, \"\");\n\n\t\ttickle_for_reply();\n\t}\n}\n\n/**\n * @brief\n * \t\tresvFinishReply - function gets executed to dispatch the reply\n *\t\tto an internally generated request to delete a reservation\n *\t\twhose time has passed (it's FINISHED).\n *\t\tHere, we just delete the batch_request structure.  
If the\n *\t\trequest bombs for some internal reason, mail should go back\n *\t\tto those on the reservation's mail list as well as an error\n *\t\tbeing entered into the server's logging files.\n * @param[in,out]\tptask\t-\twt_parm1 holds the address of the batch_request structure,\n *\t\t\t\t\t\t\t\twhich needs to be freed\n *\n * @return\tnone\n */\nstatic void\nresvFinishReply(struct work_task *ptask)\n{\n\tif (ptask->wt_event == PBS_LOCAL_CONNECTION) {\n\t\t/*we passed the little sanity check so do the free*/\n\t\tfree_br((struct batch_request *) ptask->wt_parm1);\n\t}\n}\n\n/**\n * @brief\n * \t\teval_resvState - does an evaluation to determine\n * \t\twhat should be set for state and substate values on\n * \t\tthe reservation in question.\n * @par\n * \t\tEvaluation is based on current time, current state\n * \t\tand substate, pointer to some relevant function and\n * \t\tpossibly its success or failure return value\n *\n * @param[in]\tpresv\t-\treservation in question.\n * @param[in]\ts\t-\tidentifies the caller\n * @param[in]\trelVal\t-\trelVal can have the following possible values: 0, 1, 2.\n * @param[out]\tpstate\t-\tinternal copy of state\n * @param[out]\tpsub\t-\tsubstate of resv state\n */\nvoid\neval_resvState(resc_resv *presv, enum resvState_discrim s, int relVal, int *pstate, int *psub)\n{\n\tint is_running = 0;\n\n\t*pstate = presv->ri_qs.ri_state;\n\t*psub = presv->ri_qs.ri_substate;\n\n\tif (time_now >= presv->ri_qs.ri_stime && time_now < presv->ri_qs.ri_etime)\n\t\tis_running = 1;\n\n\tif (s == RESVSTATE_gen_task_Time4resv) {\n\t\t/* from a successful confirmation */\n\t\tif (relVal == 0) {\n\t\t\tif (*psub == RESV_DEGRADED) {\n\t\t\t\tif (is_running) {\n\t\t\t\t\t*pstate = RESV_RUNNING;\n\t\t\t\t\t*psub = RESV_RUNNING;\n\t\t\t\t} else {\n\t\t\t\t\t*pstate = RESV_CONFIRMED;\n\t\t\t\t\t*psub = RESV_CONFIRMED;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif (*pstate == RESV_BEING_ALTERED) {\n\t\t\t\t\tif (is_running) {\n\t\t\t\t\t\t*pstate = 
RESV_RUNNING;\n\t\t\t\t\t\t*psub = RESV_RUNNING;\n\n\t\t\t\t\t} else {\n\t\t\t\t\t\t/* Altering a reservation after its start time */\n\t\t\t\t\t\t*pstate = RESV_CONFIRMED;\n\t\t\t\t\t\t*psub = RESV_CONFIRMED;\n\t\t\t\t\t}\n\t\t\t\t} else if (presv->ri_qs.ri_etime > time_now) {\n\t\t\t\t\t*pstate = RESV_CONFIRMED;\n\t\t\t\t\t*psub = RESV_CONFIRMED;\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\t/* End of standing occurrence */\n\t\t\tif (*psub == RESV_DEGRADED)\n\t\t\t\t*pstate = RESV_DEGRADED;\n\t\t\telse {\n\t\t\t\t*pstate = RESV_CONFIRMED;\n\t\t\t\t*psub = RESV_CONFIRMED;\n\t\t\t}\n\t\t}\n\t} else if (s == RESVSTATE_Time4resv) {\n\t\tif (relVal == 0) {\n\t\t\tif (presv->ri_qs.ri_stime <= time_now &&\n\t\t\t    time_now <= presv->ri_qs.ri_etime) {\n\t\t\t\tif (*pstate == RESV_DEGRADED || *psub == RESV_DEGRADED)\n\t\t\t\t\t*psub = RESV_DEGRADED;\n\t\t\t\telse\n\t\t\t\t\t*psub = RESV_RUNNING;\n\t\t\t\t*pstate = RESV_RUNNING;\n\t\t\t\tif (presv->ri_qs.ri_tactive < get_rattr_long(presv, RESV_ATR_start))\n\t\t\t\t\t/* Assigning time_now to indicate when the reservation became active\n\t\t\t\t\t * to help fend off accounting on server restart\n\t\t\t\t\t */\n\t\t\t\t\tpresv->ri_qs.ri_tactive = time_now;\n\t\t\t}\n\t\t}\n\t} else if (s == RESVSTATE_req_deleteReservation) {\n\t\tif (relVal == 0) {\n\n\t\t\t*pstate = RESV_BEING_DELETED;\n\t\t\t*psub = RESV_BEING_DELETED;\n\t\t} else if (relVal == 1) {\n\n\t\t\t*pstate = RESV_BEING_DELETED;\n\t\t\t*psub = RESV_DELETING_JOBS;\n\t\t} else if (relVal == 2) {\n\n\t\t\t*pstate = RESV_DELETED;\n\t\t\t*psub = RESV_DELETED;\n\t\t}\n\t} else if (s == RESVSTATE_add_resc_resv_to_job) {\n\t\t*pstate = RESV_UNCONFIRMED;\n\t\t*psub = RESV_UNCONFIRMED;\n\t} else if (s == RESVSTATE_is_resv_window_in_future) {\n\t\tif (presv->ri_qs.ri_etime < time_now) {\n\t\t\t*pstate = RESV_FINISHED;\n\t\t\t*psub = RESV_FINISHED;\n\t\t}\n\t} else if (s == RESVSTATE_req_resvSub) {\n\t\t*pstate = RESV_UNCONFIRMED;\n\t\t*psub = RESV_UNCONFIRMED;\n\t} else if (s == 
RESVSTATE_alter_failed) {\n\t\tif (presv->ri_alter.ra_state) {\n\t\t\t*pstate = presv->ri_alter.ra_state;\n\t\t} else if (*psub == RESV_IN_CONFLICT || *psub == RESV_DEGRADED) {\n\t\t\tif (is_running) {\n\t\t\t\t*pstate = RESV_RUNNING;\n\t\t\t} else {\n\t\t\t\t*pstate = RESV_DEGRADED;\n\t\t\t}\n\t\t} else if (is_running) {\n\t\t\t*pstate = RESV_RUNNING;\n\t\t\t*psub = RESV_RUNNING;\n\t\t} else if (is_rattr_set(presv, RESV_ATR_resv_nodes)) {\n\t\t\t*pstate = RESV_CONFIRMED;\n\t\t\t*psub = RESV_CONFIRMED;\n\t\t} else {\n\t\t\t*pstate = RESV_UNCONFIRMED;\n\t\t\t*psub = RESV_UNCONFIRMED;\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\tresv_setResvState - function modifies the state, substate\n * \t\tand related fields of the resc_resv object and updates\n * \t\tthe local backing store for the object as appropriate -\n * \t\teither a full save of the structure or a quick save of the\n * \t\tstructure\n *\n * @param[out]\tpresv\t-\tresc_resv object\n * @param[in]\tstate\t-\tinternal copy of state\n * @param[in]\tsub\t-\tsubstate of resv state\n */\nvoid\nresv_setResvState(resc_resv *presv, int state, int sub)\n{\n\tif ((presv->ri_qs.ri_state == state) &&\n\t    (presv->ri_qs.ri_substate == sub))\n\t\treturn;\n\n\tDBPRT((\"resv_name=%s, o_state=%d, o_sub=%d, state=%d, sub=%d\",\n\t       presv->ri_qs.ri_resvID, presv->ri_qs.ri_state, presv->ri_qs.ri_substate,\n\t       state, sub))\n\n\tpresv->ri_qs.ri_state = state;\n\tpresv->ri_qs.ri_substate = sub;\n\n\tset_rattr_l_slim(presv, RESV_ATR_state, state, SET);\n\tset_rattr_l_slim(presv, RESV_ATR_substate, sub, SET);\n\n\tresv_save_db(presv);\n\treturn;\n}\n\n/**\n * @brief\n * \t\tSet a scheduler flag to initiate a scheduling cycle when a reservation is\n * \t\tin degraded mode and needs to have nodes replaced.\n *\n * @param[in]\tptask\t-\twork task structure which contains reservation.\n * @param[in]\tforced \t- \twhether to neuter scheduler call if ri_vnodes_down is 0\n */\nvoid\nresv_retry_handler2(struct work_task *ptask, int 
forced)\n{\n\tresc_resv *presv = ptask->wt_parm1;\n\n\tif (!presv)\n\t\treturn;\n\n\t/* If all nodes associated to this reservation are back to available (due to\n\t * a change in the system setup or a recovery) then no action is required as\n\t * the handler vnode_available takes care of updating the reservation state\n\t */\n\tif (!forced && presv->ri_vnodes_down == 0)\n\t\treturn;\n\n\t/* Notify scheduler that a reservation needs to be reconfirmed */\n\tnotify_scheds_about_resv(SCH_SCHEDULE_RESV_RECONFIRM, presv);\n}\n\n/**\n * @brief\n * \t\tSet a scheduler flag to initiate a scheduling cycle when a reservation is\n * \t\tin degraded mode and needs to have nodes replaced.\n *\t\tthis version will only kick scheduler if ri_vnodes_down > 0\n *\n * @param[in]\tptask\t-\twork task structure which contains reservation.\n */\nvoid\nresv_retry_handler(struct work_task *ptask)\n{\n\tresv_retry_handler2(ptask, 0);\n}\n\n/**\n * @brief\n * \t\tSet a scheduler flag to initiate a scheduling cycle when a reservation is\n * \t\tin degraded mode and needs to have nodes replaced.\n *\t\tthis version will also kick scheduler if ri_vnodes_down is 0\n *\n * @param[in]\tptask\t-\twork task structure which contains reservation.\n */\nvoid\nresv_retry_handler_forced(struct work_task *ptask)\n{\n\tresv_retry_handler2(ptask, 1);\n}\n\n/**\n * @brief\n * \t\tchk_resvReq_viable - checks if scheduler's request to reserve is viable\n *\n * @param[in]\tpresv\t-\tpointer to reservation.\n *\n * @return\tint\n * @retval\t0\t: no problems occur\n * @retval\terror code\t: if problem detected\n */\nint\nchk_resvReq_viable(resc_resv *presv)\n{\n\tlong state = get_rattr_long(presv, RESV_ATR_state);\n\tint rc;\n\n\tif (state == RESV_NONE)\n\t\treturn PBSE_INTERNAL;\n\n\trc = 0; /*assume no problems occur*/\n\n\tif (state == RESV_FINISHED || state == RESV_DELETED || state == RESV_BEING_DELETED)\n\t\trc = PBSE_INTERNAL;\n\n\treturn rc;\n}\n\n/**\n * @brief\n * \t\tgen_task_Time4resv - creates a 
work_task structure and puts it onto\n * \t\tthe \"WORK_Timed\" work_task list at the appropriate (time sequential)\n * \t\tlocation.\n * @par\n * \t\tThe assumption here is that gen_task_Time4resv () won't be called\n * \t\tif the \"rescreserve\" is not viable - see chk_resvReq_viable ()\n *\n * @param[in]\tpresv\t-\tpointer to reservation.\n *\n * @return\tint\n * @retval\t0\t: work_task created and put on timed task list\n * @retval\terror code\t: if problem was detected\n */\nint\ngen_task_Time4resv(resc_resv *presv)\n{\n\tstruct work_task *ptask;\n\tint rc;\n\tlong startTime;\n\n\tif (get_rattr_long(presv, RESV_ATR_state) == RESV_NONE)\n\t\treturn PBSE_INTERNAL;\n\n\tif (presv->resv_start_task)\n\t\tdelete_task(presv->resv_start_task);\n\tpresv->resv_start_task = NULL;\n\tstartTime = get_rattr_long(presv, RESV_ATR_start);\n\tif ((ptask = set_task(WORK_Timed, startTime,\n\t\t\t      Time4resv, presv)) != 0) {\n\t\t/* set things so that the reservation going away causes\n\t\t * any \"yet to be processed\" work tasks to also go away\n\t\t */\n\n\t\tappend_link(&presv->ri_svrtask, &ptask->wt_linkobj, ptask);\n\n\t\t/* cause to have issued to the qmgr subsystem\n\t\t * a request to enable the reservation's queue\n\t\t */\n\t\trc = change_enableORstart(presv, Q_CHNG_ENABLE, \"True\");\n\t\tpresv->resv_start_task = ptask;\n\n\t} else\n\t\trc = PBSE_SYSTEM;\n\n\treturn (rc);\n}\n\n/**\n * @brief\n * \t\tgen_task_EndResvWindow - creates a work_task for deleting a reservation\n * \t\twhose window has expired and puts it on the \"WORK_Timed\" work_task list\n * \t\tat the appropriate (time sequential) location.\n *\n * @param[in]\tpresv\t-\tpointer to reservation.\n *\n * @return\tint\n * @retval\t0\t: task was created and put on timed task list\n * @retval\terror code\t: if a problem was detected\n */\nint\ngen_task_EndResvWindow(resc_resv *presv)\n{\n\tint rc;\n\tlong fromNow;\n\n\tif (presv == NULL)\n\t\treturn (PBSE_INTERNAL);\n\n\tfromNow = presv->ri_qs.ri_etime - 
(long) time_now;\n\tif (is_sattr_set(SVR_ATR_resv_post_processing))\n\t\tfromNow -= get_sattr_long(SVR_ATR_resv_post_processing);\n\trc = gen_future_deleteResv(presv, fromNow);\n\treturn (rc);\n}\n\n/**\n * @brief\n * \t\tgen_deleteResv - creates a work_task for deleting a reservation.\n * \t\tArgument \"fromNow\" needs to be a non-negative value.  It's the number of\n * \t\tseconds into the future (measured from global variable \"time_now\")\n * \t\tthat this task is to be activated.\n *\n * @param[in,out]\tpresv\t-\tpointer to reservation.\n * @param[in]\tfromNow\t-\tIt's the number of seconds into the future that this task is to be activated.\n *\n * @return\tint\n * @retval\t0\t: task was created and put on timed task list\n * @retval\terror code\t: if a problem was detected\n */\nint\ngen_deleteResv(resc_resv *presv, long fromNow)\n{\n\tstruct work_task *ptask;\n\tint rc = 0; /*assume success*/\n\tlong event = (long) time_now + fromNow;\n\n\tif ((ptask = set_task(WORK_Timed, event,\n\t\t\t      Time4_term, presv)) != 0) {\n\n\t\t/* set things so that the reservation going away results in\n\t\t * any \"yet to be processed\" work tasks also going away\n\t\t * and set up to notify Scheduler of new reservation-job\n\t\t */\n\n\t\tappend_link(&presv->ri_svrtask, &ptask->wt_linkobj, ptask);\n\t\tpresv->ri_futuredr = 1;\n\t} else\n\t\trc = PBSE_SYSTEM;\n\n\treturn (rc);\n}\n\n/**\n * @brief\n * \t\tgen_negI_deleteResv - creates a work_task for deleting a reservation\n * \t\tif the reservation was submitted with a negative value for \"I\" attribute -\n * \t\tmeaning: willing to wait \"n\" seconds, but after that forget it.\n * \t\tArgument \"fromNow\" needs to be a non-negative value.  
It's the number of\n * \t\tseconds into the future (measured from global variable \"time_now\")\n * \t\tthat this task is to be activated.\n *\n * @param[in,out]\tpresv\t-\tpointer to reservation.\n * @param[in]\tfromNow\t-\tIt's the number of seconds into the future that this task is to be activated.\n *\n * @return\tint\n * @retval\t0\t: task was created and put on timed task list\n * @retval\terror code\t: if a problem was detected\n */\nint\ngen_negI_deleteResv(resc_resv *presv, long fromNow)\n{\n\tstruct work_task *ptask;\n\tint rc = 0; /*assume success*/\n\tlong event = (long) time_now + fromNow;\n\n\tif ((ptask = set_task(WORK_Timed, event,\n\t\t\t      Time4_I_term, presv)) != 0) {\n\n\t\t/* set things so that the reservation going away results in\n\t\t * any \"yet to be processed\" work tasks also going away\n\t\t * and set up to notify Scheduler of new reservation-job\n\t\t */\n\n\t\tappend_link(&presv->ri_svrtask, &ptask->wt_linkobj, ptask);\n\t\tpresv->ri_futuredr = 1;\n\t} else\n\t\trc = PBSE_SYSTEM;\n\n\treturn (rc);\n}\n\n/**\n * @brief\n * \t\tgen_future_deleteResv - creates a work_task for deleting a reservation\n * \t\tin the future and puts it on the \"WORK_Timed\" work_task list at the\n * \t\tappropriate (time sequential) location.  Argument \"fromNow\" is to be a\n * \t\tnon-negative value.  
It's the number of seconds into the future\n * \t\t(measured from global variable \"time_now\") that the task is to become\n * \t\tactive.\n *\n * @param[in,out]\tpresv\t-\tpointer to reservation.\n * @param[in]\tfromNow\t-\tIt's the number of seconds into the future that this task is to be activated.\n *\n * @return\tint\n * @retval\t0\t: task was created and put on timed task list\n * @retval\terror code\t: if a problem was detected\n */\nint\ngen_future_deleteResv(resc_resv *presv, long fromNow)\n{\n\tstruct work_task *ptask = NULL;\n\tint rc = 0; /*assume success*/\n\tlong event = (long) time_now + fromNow;\n\n\tif (presv->resv_end_task)\n\t\tdelete_task(presv->resv_end_task);\n\tpresv->resv_end_task = NULL;\n\tif ((ptask = set_task(WORK_Timed, event,\n\t\t\t      Time4resvFinish, presv)) != 0) {\n\n\t\t/* set things so that the reservation going away results in\n\t\t * any \"yet to be processed\" work tasks also going away\n\t\t * and set up to notify Scheduler of new reservation-job\n\t\t */\n\n\t\tappend_link(&presv->ri_svrtask, &ptask->wt_linkobj, ptask);\n\t\tpresv->ri_futuredr = 1;\n\t\tpresv->resv_end_task = ptask;\n\t} else\n\t\trc = PBSE_SYSTEM;\n\n\treturn (rc);\n}\n\n/**\n * @brief\n * \t\tgen_future_reply - creates a work_task to reply in the future to a\n * \t\treservation request submitted now. Place on the \"WORK_Timed\" work_task\n * \t\tlist at the appropriate (time sequential) location.  Argument \"fromNow\"\n * \t\tis to be a non-negative value.  
It's the number of seconds into the future\n * \t\t(measured from global variable \"time_now\") that the task is to become\n * \t\tactive.\n *\n * @param[in,out]\tpresv\t-\tpointer to reservation.\n * @param[in]\tfromNow\t-\tIt's the number of seconds into the future that this task is to be activated.\n *\n * @return\tint\n * @retval\t0\t: task was created and put on timed task list\n * @retval\terror code\t: if a problem was detected\n */\nint\ngen_future_reply(resc_resv *presv, long fromNow)\n{\n\tstruct work_task *ptask;\n\tint rc = 0; /*assume success*/\n\tlong event = (long) time_now + fromNow;\n\n\tif ((ptask = set_task(WORK_Timed, event,\n\t\t\t      Time4reply, presv)) != 0) {\n\n\t\t/* set things so that the reservation going away results in\n\t\t * any \"yet to be processed\" work tasks also going away\n\t\t * and set up to notify Scheduler of new reservation-job\n\t\t */\n\n\t\tappend_link(&presv->ri_svrtask, &ptask->wt_linkobj, ptask);\n\t\tpresv->ri_futuredr = 1;\n\t} else\n\t\trc = PBSE_SYSTEM;\n\n\treturn (rc);\n}\n\n/**\n * @brief\n * \t\tchange_enableORstart - call this function to build and issue an internally\n *\t\tgenerated request to the qmgr subsystem to change the value of either\n *\t\tattributes \"start\" or \"enable\" for the queue associated with a general\n *\t\tresources reservation.\n *\n * @note\n *\t\tNotes:  the \"issue_Drequest\" function called in the body of this\n *\t\tcode causes the request to be dispatched to qmgr immediately\n *\t\t(since it's local) and the reply from qmgr will get handled\n *\t\tby the \"reply handling function\" passed to issue_Drequest.\n *\t\tThe reply handler gets triggered by invocation of \"next_task()\"\n *\t\tin the server's main loop, since it's put into a work_task on the\n *\t\t\"immediate_tasks\" work_list.\n *\n * @param[in]\tpresv\t-\tpointer to reservation structure\n * @param[in]\twhich\t-\tQ_CHNG_START, Q_CHNG_ENABLE\n * @param[in]\tvalue\t-\t\"True\", \"False\"\n *\n * @return\tint\n 
* @retval\t0\t: if build and issuance successful\n * @retval\t!=0\t: error code if function fails\n */\nint\nchange_enableORstart(resc_resv *presv, int which, char *value)\n{\n\textern char *msg_internalReqFail;\n\tstruct batch_request *newreq;\n\tpbs_list_head *plhed;\n\tint len;\n\tsvrattrl *psatl;\n\tstruct work_task *pwt;\n\tchar *at_name;\n\tint index;\n\n\tif (which == Q_CHNG_START && strcmp(value, ATR_TRUE) == 0 && !is_rattr_set(presv, RESV_ATR_resv_nodes))\n\t\treturn (0);\n\n\tnewreq = alloc_br(PBS_BATCH_Manager);\n\tif (newreq == NULL) {\n\t\t(void) sprintf(log_buffer, \"batch request allocation failed\");\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_RESV, LOG_NOTICE,\n\t\t\t  presv->ri_qs.ri_resvID, log_buffer);\n\t\treturn (PBSE_SYSTEM);\n\t}\n\n\tnewreq->rq_ind.rq_manager.rq_cmd = MGR_CMD_SET;\n\tnewreq->rq_ind.rq_manager.rq_objtype = MGR_OBJ_QUEUE;\n\tnewreq->rq_perm = ATR_DFLAG_MGWR | ATR_DFLAG_OPWR;\n\t(void) strcpy(newreq->rq_user, \"pbs_server\");\n\t(void) strcpy(newreq->rq_host, pbs_server_name);\n\n\tstrcpy(newreq->rq_ind.rq_manager.rq_objname, get_rattr_str(presv, RESV_ATR_queue));\n\n\tCLEAR_HEAD(newreq->rq_ind.rq_manager.rq_attr);\n\tplhed = &newreq->rq_ind.rq_manager.rq_attr;\n\n\tif (which == Q_CHNG_ENABLE) {\n\t\tindex = QA_ATR_Enabled;\n\t\tat_name = que_attr_def[index].at_name;\n\t} else if (which == Q_CHNG_START) {\n\t\tindex = QA_ATR_Started;\n\t\tat_name = que_attr_def[index].at_name;\n\t} else\n\t\treturn (PBSE_INTERNAL);\n\n\tlen = strlen(value) + 1;\n\tif ((psatl = attrlist_create(at_name, NULL, len)) != NULL) {\n\t\tpsatl->al_flags = que_attr_def[index].at_flags;\n\t\tstrcpy(psatl->al_value, value);\n\t\tappend_link(plhed, &psatl->al_link, psatl);\n\t} else {\n\t\tfree_br(newreq);\n\t\treturn (PBSE_INTERNAL);\n\t}\n\n\tif (issue_Drequest(PBS_LOCAL_CONNECTION, newreq,\n\t\t\t   handle_qmgr_reply_to_startORenable, &pwt, 0) == -1) {\n\t\tfree_br(newreq);\n\n\t\t(void) sprintf(log_buffer, \"%s\", 
msg_internalReqFail);\n\t\tlog_event(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_NOTICE,\n\t\t\t  presv->ri_qs.ri_resvID, log_buffer);\n\n\t\treturn (PBSE_mgrBatchReq);\n\t}\n\ttickle_for_reply();\n\tif (pwt)\n\t\tpwt->wt_parm2 = presv; /*needed to handle qmgr's response*/\n\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\thandle_qmgr_reply_to_startORenable - this is the function that's to be\n *\t\tcalled to handle the qmgr's response to the request issued in\n *\t\t\"change_enableORstart()\".  If not successful, log a message.\n * @par\n *\t\tThis function should only be called through an INTERNALLY GENERATED\n *\t\trequest to another server (including ourself).\n *\t\tIt frees the request structure and closes the connection (handle).\n * @par\n *\t\tIn the work task entry, wt_event is the connection handle and\n *\t\twt_parm1 is a pointer to the request structure (that contains the reply).\n *\t\twt_parm2 should have the address of the reservation structure.\n * @par\n *\t\tTHIS SHOULD NOT BE USED IF AN EXTERNAL (CLIENT) REQUEST IS \"relayed\",\n *\t\tbecause the request/reply structure is still needed to reply back\n *\t\tto the client.\n *\n * @param[in]\tpwt\t-\twork task entry\n */\nstatic void\nhandle_qmgr_reply_to_startORenable(struct work_task *pwt)\n{\n\textern char *msg_qEnabStartFail;\n\tstruct batch_request *preq = pwt->wt_parm1;\n\tresc_resv *presv = pwt->wt_parm2;\n\n\tif (preq->rq_reply.brp_code) {\n\n\t\t(void) sprintf(log_buffer, \"%s\", msg_qEnabStartFail);\n\t\tlog_event(PBSEVENT_RESV, PBS_EVENTCLASS_RESV, LOG_NOTICE,\n\t\t\t  presv->ri_qs.ri_resvID, log_buffer);\n\t}\n\n\tfree_br((struct batch_request *) pwt->wt_parm1);\n\tif (pwt->wt_event != -1)\n\t\tsvr_disconnect(pwt->wt_event);\n\n\t/*I don't know why, except for a system error,\n\t *the server couldn't set \"start\" or \"enable\" on one\n\t *of its own queues.  
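The work-task flow described above (set_task queues a handler against a fire time, next_task in the server's main loop dispatches it, and wt_parm1/wt_parm2 carry the request and the reservation) can be modeled in miniature. The following is an illustrative sketch with simplified stand-in names (`mini_task`, `mini_set_task`, `mini_next_task` are ours), not the server's actual work-task implementation:

```c
#include <assert.h>
#include <stddef.h>
#include <time.h>

#define MAX_TASKS 16

/* simplified stand-in for the server's work_task entry */
struct mini_task {
	time_t wt_event;		     /* when the task should fire */
	void (*wt_func)(struct mini_task *); /* handler to invoke */
	void *wt_parm1;			     /* e.g. the batch_request */
	void *wt_parm2;			     /* e.g. the reservation */
	int done;
};

static struct mini_task task_list[MAX_TASKS];
static int ntasks = 0;
static int replies_handled = 0;

/* queue a handler with a fire time, as set_task() does */
static struct mini_task *
mini_set_task(time_t when, void (*func)(struct mini_task *), void *parm1)
{
	struct mini_task *t;

	if (ntasks >= MAX_TASKS)
		return NULL;
	t = &task_list[ntasks++];
	t->wt_event = when;
	t->wt_func = func;
	t->wt_parm1 = parm1;
	t->wt_parm2 = NULL;
	t->done = 0;
	return t;
}

/* dispatch every pending task whose fire time has arrived */
static int
mini_next_task(time_t now)
{
	int i, fired = 0;

	for (i = 0; i < ntasks; i++) {
		if (!task_list[i].done && task_list[i].wt_event <= now) {
			task_list[i].done = 1;
			task_list[i].wt_func(&task_list[i]);
			fired++;
		}
	}
	return fired;
}

/* stand-in for a reply handler such as handle_qmgr_reply_to_startORenable */
static void
mini_reply_handler(struct mini_task *t)
{
	(void) t;
	replies_handled++;
}
```

In the real server, the reply handler additionally frees the request (`free_br`) and closes the connection, which is why the comment above warns against reusing it for relayed client requests.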
However, if this happens it probably\n\t *should result in the reservation being deleted by the\n\t *server and a message sent back to the owner regarding the\n\t *action.  We will pass on that for now.\n\t */\n}\n\n/**\n * @brief\n * \t\tremove_deleted_resvs - Walk the server's \"svr_allresvs\"\n *\t\tlist and cause to be removed any reservation whose state\n *\t\tis marked RESV_FINISHED.  Function used in \"pbsd_init\" code\n *\n *\t@return\tNothing\n */\nvoid\nremove_deleted_resvs(void)\n{\n\tresc_resv *presv, *nxresv;\n\tstruct work_task *ptask;\n\n\tpresv = (resc_resv *) GET_NEXT(svr_allresvs);\n\twhile (presv) {\n\t\tnxresv = (resc_resv *) GET_NEXT(presv->ri_allresvs);\n\n\t\tif (presv->ri_qs.ri_state == RESV_FINISHED) {\n\t\t\t/*put a task on the server's \"task_list_timed\" that causes\n\t\t\t *an internal BATCH_REQUEST_DeleteResv to be generated\n\t\t\t *and issued against this reservation\n\t\t\t */\n\n\t\t\tif ((ptask = set_task(WORK_Timed, time_now + 5,\n\t\t\t\t\t      Time4resvFinish, presv)) != 0) {\n\n\t\t\t\t/* set things so that the reservation going away results in\n\t\t\t\t * any \"yet to be processed\" work tasks also going away\n\t\t\t\t * and set up to notify Scheduler of new reservation-job\n\t\t\t\t */\n\n\t\t\t\tappend_link(&presv->ri_svrtask, &ptask->wt_linkobj, ptask);\n\t\t\t}\n\t\t} else if (presv->ri_qs.ri_state == RESV_DELETING_JOBS) {\n\t\t\t\t/* this will set up the task to finally move it to RESV_FINISHED */\n\t\t\t\tdelete_occurrence_jobs(presv);\n\t\t}\n\t\tpresv = nxresv;\n\t}\n}\n\n/**\n * @brief\n *  \tdegrade_corrupted_confirmed_resvs - Walk the server's \"svr_allresvs\"\n *  \tlist and cause to be degraded any reservation whose state\n *  \tis marked RESV_CONFIRMED but is missing resv_nodes or resv_execvnodes.\n *  \tFunction used in \"pbsd_init\" code\n *\n * @return Nothing\n */\nvoid\ndegrade_corrupted_confirmed_resvs(void)\n{\n\tint is_degraded = 0;\n\tresc_resv *presv, *nxresv;\n\tlong retry_time = 0;\n\tchar 
*str_time;\n\n\tpresv = (resc_resv *) GET_NEXT(svr_allresvs);\n\twhile (presv) {\n\t\tnxresv = (resc_resv *) GET_NEXT(presv->ri_allresvs);\n\t\t/* if corrupted and already degraded we still need to set a retry time for the scheduler to be prodded again */\n\t\tif (presv->ri_qs.ri_state == RESV_CONFIRMED || presv->ri_qs.ri_state == RESV_DEGRADED) {\n\t\t\tif (get_rattr_long(presv, RESV_ATR_resv_standing))\n\t\t\t\tif (!(is_rattr_set(presv, RESV_ATR_resv_execvnodes)) || get_rattr_str(presv, RESV_ATR_resv_execvnodes) == NULL)\n\t\t\t\t\tis_degraded = 1;\n\t\t\tif (!(is_rattr_set(presv, RESV_ATR_resv_nodes)) || get_rattr_str(presv, RESV_ATR_resv_nodes) == NULL)\n\t\t\t\tis_degraded = 1;\n\t\t} else if (presv->ri_qs.ri_state == RESV_FINISHED && get_rattr_long(presv, RESV_ATR_resv_standing))\n\t\t\tif (get_rattr_long(presv, RESV_ATR_resv_idx) < get_rattr_long(presv, RESV_ATR_resv_count))\n\t\t\t\t/* should never keep a standing reservation in RESV_FINISHED state for anything but the last occurrence */\n\t\t\t\t/* if we don't degrade it then remove_deleted_resvs may create a task to nuke it */\n\t\t\t\tis_degraded = 1;\n\t\tif (is_degraded) {\n\t\t\tresv_setResvState(presv, RESV_DEGRADED, RESV_DEGRADED);\n\t\t\t/* there is no point in trying to reconfirm it immediately at server start,\n\t\t\t * since the nodes will not have reported as free yet.\n\t\t\t * One minute is a reasonable time to try, but jobs may already have filled\n\t\t\t * some nodes by then. Tough luck, but it's the best we can do. 
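The retry-time capping used here is simple enough to isolate: the retry time proposed by determine_resv_retry is clamped so it is never more than 60 seconds out. A standalone sketch (the helper name `clamp_retry_time` is ours, not part of the source):

```c
#include <assert.h>
#include <time.h>

/* Illustrative only: mirror the clamp applied at server start so a
 * degraded reservation is retried within ~60 seconds instead of
 * waiting out the long default retry interval. */
static time_t
clamp_retry_time(time_t now, time_t proposed)
{
	if (now + 60 < proposed)
		return now + 60; /* don't wait for the long default */
	return proposed;
}
```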
It beats\n\t\t\t * waiting for the default 600 seconds.\n\t\t\t */\n\t\t\tretry_time = determine_resv_retry(presv);\n\t\t\tif (time_now + 60 < retry_time)\n\t\t\t\tretry_time = time_now + 60;\n\t\t\tstr_time = ctime(&retry_time);\n\t\t\tif (str_time == NULL)\n\t\t\t\tstr_time = \"\";\n\t\t\tpresv->ri_degraded_time = get_rattr_long(presv, RESV_ATR_start);\n\t\t\t/* bogus value, but avoid skipping a reconfirmation */\n\t\t\tlog_eventf(PBSEVENT_ERROR, PBS_EVENTCLASS_RESV, LOG_NOTICE, presv->ri_qs.ri_resvID,\n\t\t\t\t   \"Reservation with corrupted nodes, degrading with retry time set to %s\", str_time);\n\t\t\tforce_resv_retry(presv, retry_time);\n\t\t}\n\t\tpresv = nxresv;\n\t}\n}\n\n/**\n * @brief\n *  \tadd_resv_beginEnd_tasks - for each reservation not in state\n *  \tRESV_FINISHED add to \"task_list_timed\" the \"begin\" and\n *  \t\"end\" reservation tasks as appropriate.  Function used\n *  \tin \"pbsd_init\" code\n *\n * @return\tnone\n */\nvoid\nadd_resv_beginEnd_tasks(void)\n{\n\tresc_resv *presv;\n\tchar txt[PBS_MAXSVRRESVID + 100];\n\tint rc;\n\n\tpresv = (resc_resv *) GET_NEXT(svr_allresvs);\n\twhile (presv) {\n\t\trc = 0;\n\t\tif (presv->ri_qs.ri_state == RESV_CONFIRMED ||\n\t\t    presv->ri_qs.ri_state == RESV_RUNNING) {\n\n\t\t\t/* add \"begin\" and \"end\" tasks onto \"task_list_timed\" */\n\n\t\t\tif ((rc = gen_task_EndResvWindow(presv)) != 0) {\n\t\t\t\tsprintf(txt, \"%s : EndResvWindow task creation failed\",\n\t\t\t\t\tpresv->ri_qs.ri_resvID);\n\t\t\t\tlog_err(rc, \"add_resv_beginEnd_tasks\", txt);\n\t\t\t}\n\t\t\tif ((rc = gen_task_Time4resv(presv)) != 0) {\n\t\t\t\tsprintf(txt, \"%s : Time4resv task creation failed\",\n\t\t\t\t\tpresv->ri_qs.ri_resvID);\n\t\t\t\tlog_err(rc, \"add_resv_beginEnd_tasks\", txt);\n\t\t\t}\n\t\t} else if (presv->ri_qs.ri_state == RESV_UNCONFIRMED) {\n\n\t\t\t/* add \"end\" task onto \"task_list_timed\" */\n\n\t\t\tif ((rc = gen_task_EndResvWindow(presv)) != 0) {\n\t\t\t\tsprintf(txt, \"%s : EndResvWindow task creation 
failed\",\n\t\t\t\t\tpresv->ri_qs.ri_resvID);\n\t\t\t\tlog_err(rc, \"add_resv_beginEnd_tasks\", txt);\n\t\t\t}\n\t\t}\n\n\t\tpresv = (resc_resv *) GET_NEXT(presv->ri_allresvs);\n\t}\n}\n\n/**\n * @brief\n * \t\tuniq_nameANDfile - develop a unique name and file in the directory\n *\t\tpointed to by \"pdir\".  The root name of the file is initially\n *\t\tgiven by \"pname\".  The name pointed to by pname may be modified\n *\t\tin place in the process of arriving at a unique filename for\n *\t\tthe file.  The file, if generated, will have zero length.\n *\n * @param[in]\tpname\t-\tThe root name of the file is initially given by \"pname\"\n * @param[in]\tpsuffix\t-\tsuffix of the name\n * @param[in]\tpdir\t-\tpoints to the directory in which file contains.\n *\n * @return\tint\n * @return\t0\t: on success\n * @retval\tPBSE_*\t: code on failure\n */\nint\nuniq_nameANDfile(char *pname, char *psuffix, char *pdir)\n{\n\tint fds, L1, L2;\n\tint rc = 0;\n\tchar *pc;\n\tchar namebuf[MAXPATHLEN + 1];\n\n\tif (!pname || !psuffix || !pdir ||\n\t    !(L1 = strlen(pname)) ||\n\t    !(L2 = strlen(pdir)) ||\n\t    ((L1 + L2 + strlen(psuffix)) >= MAXPATHLEN))\n\t\treturn (PBSE_INTERNAL);\n\n\tdo {\n\t\t(void) strcpy(namebuf, pdir);\n\t\t(void) strcat(namebuf, pname);\n\t\t(void) strcat(namebuf, psuffix);\n\t\tfds = open(namebuf, O_CREAT | O_EXCL | O_WRONLY, 0600);\n\t\tif (fds < 0) {\n\t\t\tif (errno == EEXIST) {\n\t\t\t\tpc = pname + strlen(pname) - 1;\n\t\t\t\twhile (!isprint((int) *pc)) {\n\t\t\t\t\tpc--;\n\t\t\t\t\tif (pc <= pname) {\n\t\t\t\t\t\trc = PBSE_INTERNAL;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\t(*pc)++;\n\t\t\t} else {\n\t\t\t\trc = PBSE_SYSTEM;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t} while (fds < 0);\n\n\tif (fds)\n\t\t(void) close(fds);\n\treturn (rc);\n}\n\n/**\n * @brief\n *\t\tstart_end_dur_wall - This function considers the information specified for\n *\t\tstart_time, end_time, duration and walltime.  
Using what was\n *\t\tspecified, it computes those unspecified values that are\n *\t\tpossible to compute. If the initially supplied information\n *\t\tis bogus, or not enough information is specified or results\n *\t\tare inconsistent, the function returns \"failure\" otherwise,\n *\t\tit returns \"success\".\n * @par\n * \t\treservation attributes dealing with start, end, duration times\n *\t\tcan be modified by this function.  In addition, if this is\n *\t\thappens to be a RESC_RESV_OBJECT  its \"ri_qs.ri_stime\",\n *\t\t\"ri_qs.ri_etime\", and \"ri_qs.ri_duration\" fields are subject\n *\t\tto modification.\n *\n * @param[in,out]\tpresv\t-\tthe \"resc_resv\" object\n *\n * @return\tint\n * @retval\t0\t: Success\n * @retval\t!= 0\t: don't have a complete or consistent set of\n * \t\t\t\t\tinformation, or possibly some other error\n * \t\t\t\t\toccurred - e.g. problem adding the \"walltime\"\n * \t\t\t\t\tresource entry if it doesn't exist\n */\nint\nstart_end_dur_wall(resc_resv *presv)\n{\n\tresource_def *rscdef = NULL;\n\tresource *prsc = NULL;\n\tattribute *pattr = NULL;\n\tattribute atemp = {0};\n\tint pstate = 0;\n\tlong stime, etime, duration;\n\n\tint swcode = 0; /* \"switch code\" */\n\tint rc = 0;\t/* return code, assume success */\n\tshort check_start = 1;\n\n\tif (presv == 0)\n\t\treturn (-1);\n\n\trscdef = &svr_resc_def[RESC_WALLTIME];\n\tpstate = get_rattr_long(presv, RESV_ATR_state);\n\tstime = get_rattr_long(presv, RESV_ATR_start);\n\tetime = get_rattr_long(presv, RESV_ATR_end);\n\tduration = get_rattr_long(presv, RESV_ATR_duration);\n\n\tpattr = get_rattr(presv, RESV_ATR_resource);\n\tprsc = find_resc_entry(pattr, rscdef);\n\tcheck_start = !is_rattr_set(presv, RESV_ATR_job);\n\n\tif (pstate != RESV_BEING_ALTERED) {\n\t\tif (is_rattr_set(presv, RESV_ATR_start))\n\t\t\tswcode += 1; /* have start */\n\t\tif (is_rattr_set(presv, RESV_ATR_end))\n\t\t\tswcode += 2; /* have end */\n\t\tif (is_rattr_set(presv, RESV_ATR_duration))\n\t\t\tswcode += 4; /* 
have duration */\n\t\tif (prsc)\n\t\t\tswcode += 8; /* have walltime */\n\t\telse if (!(prsc = add_resource_entry(pattr, rscdef)))\n\t\t\treturn (-1);\n\t} else {\n\t\tif (presv->ri_alter.ra_flags & RESV_DURATION_MODIFIED)\n\t\t\tswcode += 4;\n\t\tif (presv->ri_alter.ra_flags & RESV_END_TIME_MODIFIED)\n\t\t\tswcode += 2; /* calculate start time */\n\t\tif (presv->ri_alter.ra_flags & RESV_START_TIME_MODIFIED)\n\t\t\tswcode += 1; /* calculate end time */\n\t\tif (presv->ri_alter.ra_flags == RESV_START_TIME_MODIFIED || presv->ri_alter.ra_flags == RESV_END_TIME_MODIFIED) {\n\t\t\tswcode = 3;\n\t\t}\n\t}\n\n\tatemp.at_flags = ATR_VFLAG_SET;\n\tatemp.at_type = ATR_TYPE_LONG;\n\tswitch (swcode) {\n\t\tcase 3: /* start, end */\n\t\t\tif (((check_start && (stime < time_now)) && (pstate != RESV_BEING_ALTERED)) ||\n\t\t\t    (etime <= stime))\n\t\t\t\trc = -1;\n\t\t\telse {\n\t\t\t\tif (pstate == RESV_BEING_ALTERED) {\n\t\t\t\t\tpresv->ri_alter.ra_flags |= RESV_DURATION_MODIFIED;\n\t\t\t\t}\n\t\t\t\tatemp.at_val.at_long = etime - stime;\n\t\t\t\tset_rattr_l_slim(presv, RESV_ATR_duration, atemp.at_val.at_long, SET);\n\t\t\t\trscdef->rs_set(&prsc->rs_value, &atemp, SET);\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase 4:\n\t\tcase 5: /* start, duration */\n\t\t\tif (((check_start && stime < time_now) && (pstate != RESV_BEING_ALTERED)) ||\n\t\t\t    (duration <= 0))\n\t\t\t\trc = -1;\n\t\t\telse {\n\t\t\t\tif (pstate == RESV_BEING_ALTERED) {\n\t\t\t\t\tpresv->ri_alter.ra_flags |= RESV_END_TIME_MODIFIED;\n\t\t\t\t}\n\t\t\t\tset_rattr_l_slim(presv, RESV_ATR_end, stime + duration, SET);\n\t\t\t\tset_attr_l(&atemp, duration, SET);\n\t\t\t\trscdef->rs_set(&prsc->rs_value, &atemp, SET);\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase 7: /* start, end, duration */\n\t\t\tif (((check_start) && (stime < time_now)) ||\n\t\t\t    (etime < stime) ||\n\t\t\t    (duration <= 0) ||\n\t\t\t    ((etime - stime) !=\n\t\t\t     duration))\n\t\t\t\trc = -1;\n\t\t\telse {\n\t\t\t\tatemp.at_val.at_long = 
duration;\n\t\t\t\trscdef->rs_set(&prsc->rs_value, &atemp, SET);\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase 6:\n\t\tcase 8: /* end, duration */\n\t\t\tif ((duration <= 0) ||\n\t\t\t    (etime - duration <\n\t\t\t     time_now)) {\n\t\t\t\trc = -1;\n\t\t\t} else {\n\t\t\t\tif (pstate == RESV_BEING_ALTERED) {\n\t\t\t\t\tpresv->ri_alter.ra_flags |= RESV_START_TIME_MODIFIED;\n\t\t\t\t}\n\t\t\t\tset_rattr_l_slim(presv, RESV_ATR_start, etime - duration, SET);\n\t\t\t\tatemp.at_val.at_long = duration;\n\t\t\t\trscdef->rs_set(&prsc->rs_value, &atemp, SET);\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase 9: /* start, wall */\n\t\t\tif (((check_start) && (stime < time_now)) ||\n\t\t\t    (prsc->rs_value.at_val.at_long <= 0))\n\t\t\t\trc = -1;\n\t\t\telse {\n\t\t\t\tif (pstate == RESV_BEING_ALTERED) {\n\t\t\t\t\tpresv->ri_alter.ra_flags |= RESV_END_TIME_MODIFIED | RESV_DURATION_MODIFIED;\n\t\t\t\t}\n\t\t\t\tset_rattr_l_slim(presv, RESV_ATR_end, stime + prsc->rs_value.at_val.at_long, SET);\n\t\t\t\tset_rattr_l_slim(presv, RESV_ATR_duration, prsc->rs_value.at_val.at_long, SET);\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase 10: /* end, wall */\n\t\t\tif ((prsc->rs_value.at_val.at_long <= 0) ||\n\t\t\t    (etime - prsc->rs_value.at_val.at_long <\n\t\t\t     time_now)) {\n\t\t\t\trc = -1;\n\t\t\t} else {\n\t\t\t\tif (pstate == RESV_BEING_ALTERED) {\n\t\t\t\t\tpresv->ri_alter.ra_flags |= RESV_START_TIME_MODIFIED;\n\t\t\t\t}\n\t\t\t\tset_rattr_l_slim(presv, RESV_ATR_start, etime - prsc->rs_value.at_val.at_long, SET);\n\t\t\t\tset_rattr_l_slim(presv, RESV_ATR_duration, prsc->rs_value.at_val.at_long, SET);\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase 11: /* start, end, wall */\n\t\t\tif (((check_start) && (stime < time_now)) ||\n\t\t\t    (prsc->rs_value.at_val.at_long <= 0) ||\n\t\t\t    (etime - stime !=\n\t\t\t     prsc->rs_value.at_val.at_long))\n\t\t\t\trc = -1;\n\t\t\telse {\n\t\t\t\tif (pstate == RESV_BEING_ALTERED) {\n\t\t\t\t\tpresv->ri_alter.ra_flags |= 
RESV_DURATION_MODIFIED;\n\t\t\t\t}\n\t\t\t\tset_rattr_l_slim(presv, RESV_ATR_duration, prsc->rs_value.at_val.at_long, SET);\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase 13: /* start, duration & wall */\n\t\t\tif (((check_start) && (stime < time_now)) ||\n\t\t\t    (prsc->rs_value.at_val.at_long != duration) ||\n\t\t\t    (duration <= 0))\n\t\t\t\trc = -1;\n\t\t\telse {\n\t\t\t\tif (pstate == RESV_BEING_ALTERED) {\n\t\t\t\t\tpresv->ri_alter.ra_flags |= RESV_END_TIME_MODIFIED;\n\t\t\t\t}\n\t\t\t\tset_rattr_l_slim(presv, RESV_ATR_end, stime + presv->ri_qs.ri_duration, SET);\n\t\t\t}\n\t\t\tbreak;\n\n\t\tcase 15: /* start, end, duration & wall */\n\t\t\tif (((check_start) && (stime < time_now)) ||\n\t\t\t    (etime < stime) ||\n\t\t\t    (duration <= 0) ||\n\t\t\t    (prsc->rs_value.at_val.at_long != duration) ||\n\t\t\t    ((etime - stime) !=\n\t\t\t     duration))\n\t\t\t\trc = -1;\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\trc = -1;\n\t}\n\n\tif (is_sattr_set(SVR_ATR_resv_post_processing)) {\n\t\tduration += get_sattr_long(SVR_ATR_resv_post_processing);\n\t\tetime += get_sattr_long(SVR_ATR_resv_post_processing);\n\t}\n\n\tpresv->ri_qs.ri_stime = get_rattr_long(presv, RESV_ATR_start);\n\tpresv->ri_qs.ri_etime = get_rattr_long(presv, RESV_ATR_end);\n\tpresv->ri_qs.ri_duration = get_rattr_long(presv, RESV_ATR_duration);\n\n\treturn (rc);\n}\n\n/**\n * @brief\n * \t\tis_resv_window_in_future - Updates the reservation's state\n *\t\tto RESV_FINISHED if the current time is already beyond\n *\t\tthe end of the reservation's window; otherwise, it\n *\t\tjust returns.  
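The restart-time decision described here reduces to comparing the current time against the end of the reservation window. A trivial standalone sketch (the predicate name is hypothetical, not in the source):

```c
#include <assert.h>
#include <time.h>

/* Illustrative predicate: a recovered reservation survives restart only
 * while some part of its window is still ahead of the current time;
 * past the end it would be marked RESV_FINISHED. */
static int
resv_window_still_open(time_t now, time_t etime)
{
	return etime > now;
}
```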
It's used in pbsd_init.c to decide if\n *\t\tan existing reservation, that's read back into the system\n *\t\tfrom the disk at server restart, should continue to remain.\n *\n * @param[in,out]\tpresv\t-\treservation structure\n *\n * @return\tNothing\n */\nvoid\nis_resv_window_in_future(resc_resv *presv)\n{\n\tint state, sub;\n\n\teval_resvState(presv, RESVSTATE_is_resv_window_in_future, 0, &state,\n\t\t       &sub);\n\tresv_setResvState(presv, state, sub);\n}\n\n/**\n * @brief\n * \t\tresv_mailAction - Based on what was requested on reservation submission\n * \t\tand on who is issuing the current *request*, generate (or not) a mail\n * \t\tmessage about some aspect of the reservation to the appropriate parties.\n * @par\n * \t\tThis function makes use of the mail function svr_mailownerResv() but\n * \t\tit is not intended that this be the only way that svr_mailownerResv\n * \t\tshould be called, for we may not be presented with any related\n * \t\tbatch_request at the point where some message ought to be issued -\n * \t\te.g. 
the case, \"reservation start-time has finally arrived.\"\n *\n * @param[in]\tpresv\t-\treservation structure\n * @param[in]\tpreq\t-\tbatch_request structure\n */\nvoid\nresv_mailAction(resc_resv *presv, struct batch_request *preq)\n{\n\tint force;\n\tchar text[PBS_MAXUSER + PBS_MAXHOSTNAME + 64];\n\n\tif (preq->rq_type != PBS_BATCH_DeleteResv)\n\t\treturn;\n\n\tsnprintf(text, sizeof(text), \"Requesting party: %s@%s\",\n\t\t preq->rq_user, preq->rq_host);\n#ifdef NAS /* localmod 028 */\n\t/*\n\t * The extend attribute can contain additional explanation\n\t */\n\tif (preq->rq_extend) {\n\t\tsize_t len;\n\t\tlen = strlen(text);\n\t\tsnprintf(text + len, sizeof(text) - len,\n\t\t\t \"\\nReason: %s\\n\", preq->rq_extend);\n\t}\n#endif /* localmod 028 */\n\tif (preq->rq_fromsvr != 0)\n\t\tforce = MAIL_FORCE;\n\telse\n\t\tforce = MAIL_NORMAL;\n\tsvr_mailownerResv(presv, MAIL_ABORT, force, text);\n}\n\n/**\n * @brief\n * \t\tThis function converts long to hh:mm:ss format\n *\n * @param[in]\tl\t-\ttime passed as a long number\n *\n * @return\tpointer to the converted time string, allocated\n *         \tby the function. 
Memory deallocation rests in\n *         \tthe hands of the caller.\n * @retval\tNULL\t: failure\n */\n\nchar *\nconvert_long_to_time(long l)\n{\n\tunsigned int h;\n\tunsigned int temp;\n\tint m;\n\tint s;\n\tint hr_len = 0;\n\tchar *str;\n\n\ttemp = h = l / 3600;\n\tl = l % 3600;\n\tm = l / 60;\n\tl = l % 60;\n\ts = l;\n\twhile (temp > 0) {\n\t\thr_len++;\n\t\ttemp = temp / 10;\n\t}\n\t/* Allocate room for the hours field plus 9 more chars, enough\n\t * to accommodate \"hh:mm:ss\\0\"\n\t */\n\tstr = (char *) malloc(hr_len + 9);\n\tif (str == NULL)\n\t\treturn NULL;\n\n\tsprintf(str, \"%02u:%02d:%02d\", h, m, s);\n\treturn str;\n}\n\n/**\n * @brief\n * \t\t  determine_accruetype\n *        determine accrue_type for a new job, after an overlay upgrade,\n *        or after recovery.\n *        If coming after an overlay upgrade, or after enabling accrual after a long time,\n *        jobs which entered the system when accrual was off, or before the upgrade,\n *        begin accrual from the start time of the scheduling cycle.\n *        If the scheduler cannot determine the accrual type, the server determines it and\n *        accrual begins from the time the job was created in the server.\n *\n *        precedence of accrual:\n *\t        1) run time, exit time 2) ineligible time 3) eligible time\n * @param[in]\tpjob\t-\tJob whose accrue type needs to be determined\n * @return\tlong\n * @retval\tJOB_ELIGIBLE\t-\twhen job is eligible to accrue eligible_time\n * @retval\tJOB_INELIGIBLE\t-\twhen job is ineligible to accrue eligible_time\n * @retval\tJOB_RUNNING\t-\twhen job is running or provisioning\n * @retval\tJOB_EXIT\t-\twhen job is exiting\n * @retval\t-1\t- when accrue type couldn't be determined\n */\nlong\ndetermine_accruetype(job *pjob)\n{\n\tstruct pbs_queue *pque;\n\tlong temphold;\n\n\t/* have to determine accrue type */\n\n\t/* if job is truly running or provisioning */\n\tif (check_job_state(pjob, JOB_STATE_LTR_RUNNING) &&\n\t    (check_job_substate(pjob, 
JOB_SUBSTATE_RUNNING) ||\n\t     check_job_substate(pjob, JOB_SUBSTATE_PROVISION)))\n\t\treturn JOB_RUNNING;\n\n\t/* if job exit */\n\tif (check_job_state(pjob, JOB_STATE_LTR_EXITING))\n\t\treturn JOB_EXIT;\n\n\t/* handling qsub -a, waiting with substate 30 ; accrue ineligible time */\n\tif (get_jattr_long(pjob, JOB_ATR_exectime))\n\t\treturn JOB_INELIGIBLE;\n\n\t/* 'user' hold applied ; accrue ineligible time */\n\tif (get_jattr_long(pjob, JOB_ATR_hold) & HOLD_u)\n\t\treturn JOB_INELIGIBLE;\n\n\t/* other than 'user' hold applied */\n\t/* accrue type is set to JOB_INELIGIBLE incase a job has dependency */\n\t/* on another job and hold type is set to system hold. */\n\t/* For all other cases accrue type is set to JOB_ELIGIBLE. */\n\ttemphold = get_jattr_long(pjob, JOB_ATR_hold);\n\tif (temphold & HOLD_o || temphold & HOLD_bad_password || temphold & HOLD_s) {\n\t\tif ((check_job_substate(pjob, JOB_SUBSTATE_DEPNHOLD)) && (temphold & HOLD_s))\n\t\t\treturn JOB_INELIGIBLE;\n\n\t\treturn JOB_ELIGIBLE;\n\t}\n\n\t/* scheduler suspend job ; accrue eligible time */\n\tif (check_job_substate(pjob, JOB_SUBSTATE_SCHSUSP))\n\t\treturn JOB_ELIGIBLE;\n\n\t/* qsig suspended job ; accrue eligible time */\n\tif (check_job_substate(pjob, JOB_SUBSTATE_SUSPEND))\n\t\treturn JOB_ELIGIBLE;\n\n\t/* check for stopped queue: routing and execute ; accrue eligible time */\n\tpque = find_queuebyname(pjob->ji_qs.ji_queue);\n\tif (pque != NULL)\n\t\tif (get_qattr_long(pque, QA_ATR_Started) == 0)\n\t\t\treturn JOB_ELIGIBLE;\n\n\t/* The job doesn't have any reason to not accrue eligible time (e.g. on hold), so it should accrue it */\n\tif (check_job_state(pjob, JOB_STATE_LTR_TRANSIT) &&\n\t    check_job_substate(pjob, JOB_SUBSTATE_TRANSIN))\n\t\treturn JOB_ELIGIBLE;\n\n\treturn -1;\n}\n\n/**\n * @brief\n * \t\tupdate_eligible_time - this function is responsible for calculating eligible time accrued\n *\t\t\t  for a job. 
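The accrual bookkeeping that update_eligible_time performs can be sketched in isolation: time since the last sample point is credited to eligible_time only if the outgoing accrue type was "eligible", then the type and sample timestamp are reset. The struct fields and constants below are simplified stand-ins for the real job attributes, not the PBS types:

```c
#include <assert.h>

/* simplified stand-ins for the JOB_ATR_* accrual attributes */
#define ACC_INELIGIBLE 1
#define ACC_ELIGIBLE   2

struct mini_accrual {
	long accrue_type;   /* current accrue type */
	long sample_start;  /* when the current accrue type began */
	long eligible_time; /* total credited eligible seconds */
};

/* Illustrative helper: returns 0 when a change is applied,
 * 1 when there is nothing to do (same type, or type unknown). */
static int
mini_update_accrual(struct mini_accrual *a, long newtype, long now)
{
	long accrued;

	if (newtype == a->accrue_type || newtype == -1)
		return 1;
	accrued = now - a->sample_start;
	if (a->accrue_type == ACC_ELIGIBLE && accrued > 0)
		a->eligible_time += accrued; /* credit the elapsed span */
	a->accrue_type = newtype;
	a->sample_start = now; /* accrual of the new type starts now */
	return 0;
}
```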
it also updates the accrue type and sample start time\n *\n * @param[in]\tnewaccruetype\t-\tnew accrue type to be set\n * @param[in,out]\tpjob\t-\tpointer to job\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t1\t: if updating same accrue type or do nothing\n *\n * @par MT-Safe: No\n */\n\nint\nupdate_eligible_time(long newaccruetype, job *pjob)\n{\n\tstatic char *msg[] = {\"initial_time\", \"ineligible_time\", \"eligible_time\", \"run_time\", \"exiting\"};\n\tchar *strtime;\n\tstatic char errtime[] = \"00:00:00\";\n\tchar str[256];\n\tlong accrued_time = 0; /* accrued time */\n\tlong oldaccruetype = get_jattr_long(pjob, JOB_ATR_accrue_type);\n\tlong timestamp = (long) time_now; /* time since accrual begins */\n\n\t/* check if updating same accrue type or do nothing */\n\tif (newaccruetype == oldaccruetype || newaccruetype == -1)\n\t\treturn 1;\n\n\t/* time since accrue type last changed  */\n\taccrued_time = timestamp - get_jattr_long(pjob, JOB_ATR_sample_starttime);\n\n\tif (oldaccruetype == JOB_ELIGIBLE && accrued_time > 0)\n\t\tset_jattr_l_slim(pjob, JOB_ATR_eligible_time, accrued_time, INCR);\n\n\t/* change type to new accrue type, update start time to mark change of accrue type */\n\tset_jattr_l_slim(pjob, JOB_ATR_accrue_type, newaccruetype, SET);\n\tset_jattr_l_slim(pjob, JOB_ATR_sample_starttime, timestamp, SET);\n\n\t/* Prepare and print log message */\n\tstrtime = convert_long_to_time(get_jattr_long(pjob, JOB_ATR_eligible_time));\n\tif (strtime == NULL)\n\t\tstrtime = errtime;\n\n\tsprintf(str, \"Accrue type has changed to %s, previous accrue type was %s for %ld secs, total eligible_time=%s\",\n\t\tmsg[newaccruetype], msg[oldaccruetype], accrued_time, strtime);\n\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, str);\n\n\tif (strtime != NULL && strtime != errtime)\n\t\tfree(strtime);\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\talter_eligibletime \tthis is action function for eligible_time.\n *\t\t\tqalter will 
alter the value of eligible_time,\n *\t\t\thence need to set sample_starttime to now so that\n *\t\t\taccrual begins with this change. log message is\n *\t\t\tprinted to mark the change. the time accrued while\n *\t\t\tin present accruetype from last change is printed.\n *\t\t\tAccrual continues from now.\n *\n * @param[in]\tpattr\t-\tattribute structure\n * @param[in,out]\tpobject\t-\tobject which will be later typecasted into job type.\n * @param[in]\tactmode\t-\taction mode\n *\n * @par MT-Safe: No\n */\nint\nalter_eligibletime(attribute *pattr, void *pobject, int actmode)\n{\n\tstatic char errtime[] = \"00:00:00\";\n\tlong timestamp = (long) time_now; /* accrual begins from here */\n\tjob *pjob = (job *) pobject;\n\tlong oldaccruetype = get_jattr_long(pjob, JOB_ATR_accrue_type);\n\tlong newaccruetype = oldaccruetype; /* We are not changing accrue type */\n\n\t/* distinguish between genuine qalter and call by action */\n\tif (actmode == ATR_ACTION_ALTER) {\n\n\t\t/* eligible_time_enable is OFF, then error */\n\t\tif (!get_sattr_long(SVR_ATR_EligibleTimeEnable)) {\n\t\t\treturn PBSE_ETEERROR;\n\t\t} else {\n\t\t\tlong accrued_time;\n\t\t\tchar *strtime;\n\t\t\tchar logstr[256];\n\t\t\tstatic char *msg[] = {\n\t\t\t\t\"initial_time\",\n\t\t\t\t\"ineligible_time\",\n\t\t\t\t\"eligible_time\",\n\t\t\t\t\"run_time\",\n\t\t\t\t\"exiting\"};\n\n\t\t\taccrued_time = (long) time_now -\n\t\t\t\t       get_jattr_long(pjob, JOB_ATR_sample_starttime);\n\n\t\t\t/* Sample time accrual continues with this time .... 
*/\n\t\t\tset_jattr_l_slim(pjob, JOB_ATR_sample_starttime, timestamp, SET);\n\t\t\t/* eligible_time is set to the new value again in modify_job_attr;\n\t\t\t * this is for the log message, we have the new value anyway.\n\t\t\t */\n\t\t\tstrtime = convert_long_to_time(pattr->at_val.at_long);\n\n\t\t\tsprintf(logstr, \"Accrue type is %s, previous accrue type was %s for %ld secs, due to qalter total eligible_time=%s\",\n\t\t\t\tmsg[newaccruetype], msg[oldaccruetype], accrued_time, strtime != NULL ? strtime : errtime);\n\t\t\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\t  pjob->ji_qs.ji_jobid, logstr);\n\n\t\t\tfree(strtime);\n\n\t\t\treturn PBSE_NONE;\n\t\t}\n\t}\n\treturn PBSE_NONE;\n}\n/**\n * @brief\n *\t\tCheck if the history of the finished job needs to be saved or purged.\n *\t\tIf it needs to be saved and history management is ON, call svr_setjob_histinfo() to\n *\t\tstore the data in the server's history. Otherwise, if history management is OFF or the\n *\t\trequest is not to store the job's history, call job_purge().\n *\n * @param[in]\tpjob\t-\tPointer to the job structure\n *\n */\nvoid\nsvr_saveorpurge_finjobhist(job *pjob)\n{\n\tint flag = 0;\n\tresc_resv *presv;\n\n\tpresv = pjob->ji_myResv;\n\n\tflag = svr_chk_history_conf();\n\tif (flag && !pjob->ji_deletehistory) {\n\t\tsvr_setjob_histinfo(pjob, T_FIN_JOB);\n\t\tif (pjob->ji_ajinfo != NULL)\n\t\t\tpjob->ji_ajinfo->tkm_flags &= ~TKMFLG_CHK_ARRAY;\n\t\tif (pjob->ji_terminated &&\n\t\t    (pjob->ji_qs.ji_svrflags & JOB_SVFLG_SubJob) &&\n\t\t    pjob->ji_parentaj != NULL &&\n\t\t    pjob->ji_parentaj->ji_ajinfo != NULL)\n\t\t\tpjob->ji_parentaj->ji_ajinfo->tkm_dsubjsct++;\n\t} else {\n\t\tif (pjob->ji_deletehistory && flag) {\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB,\n\t\t\t\t  LOG_INFO, pjob->ji_qs.ji_jobid,\n\t\t\t\t  msg_also_deleted_job_history);\n\t\t}\n\t\t/* For an array subjob if exit status is non-zero mark sub state\n\t\t * as JOB_SUBSTATE_FAILED. 
Otherwise set to JOB_SUBSTATE_FINISHED\n\t\t * when current sub state is JOB_SUBSTATE_EXITED.\n\t\t */\n\t\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_SubJob) {\n\t\t\tif (pjob->ji_terminated)\n\t\t\t\tset_job_substate(pjob, JOB_SUBSTATE_TERMINATED);\n\t\t\telse if (is_jattr_set(pjob, JOB_ATR_exit_status)) {\n\t\t\t\tif (get_jattr_long(pjob, JOB_ATR_exit_status))\n\t\t\t\t\tset_job_substate(pjob, JOB_SUBSTATE_FAILED);\n\t\t\t\telse if (check_job_substate(pjob, JOB_SUBSTATE_EXITED))\n\t\t\t\t\tset_job_substate(pjob, JOB_SUBSTATE_FINISHED);\n\t\t\t}\n\t\t}\n\t\tjob_purge(pjob);\n\t}\n\tset_idle_delete_task(presv);\n}\n/**\n * @brief\n *\t\tFunction name: svr_clean_job_history\n * @par Purpose: Periodically checks for the history jobs in the server and\n *\t\t purges the history jobs whose history duration exceeds the\n *\t\t configured job_history_duration server attribute.\n * @par Functionality: It is a work_task and reschedules itself after 2 mins if\n *\t\t and only if job_history_enable is set.\n *\t\tOutput: None\n *\n * @param[in]\tpwt\t-\twork_task structure\n */\nvoid\nsvr_clean_job_history(struct work_task *pwt)\n{\n\tjob *pjob;\n\tjob *nxpjob = NULL;\n\tint walltime_used = 0;\n\n\t/*\n\t * Keep track of time spent purging jobs, interrupting the purge if necessary.\n\t * Timed task in nearby future set if purge took too long.\n\t * Timed task in far future only set if this task completes.\n\t * Autotunes time between job history purges:\n\t * - raised if last purge was short\n\t * - lowered if this purge needs to be interrupted\n\t */\n\n\ttime_t begin_time;\n\ttime_t end_time;\n\tstatic time_t time_between_tasks = SVR_CLEAN_JOBHIST_TM;\n\n\tbegin_time = time(NULL);\n\t/* Initialize end_time, in case we do not get into the while loop */\n\tend_time = begin_time;\n\n\t/*\n\t * Traverse through the SERVER job list and find the history\n\t * jobs (jobs with state JOB_STATE_LTR_MOVED or JOB_STATE_LTR_FINISHED)\n\t * which exceed the configured job_history_duration value 
and\n\t * purge them immediately.\n\t */\n\tpjob = (job *) GET_NEXT(svr_alljobs);\n\n\twhile (pjob != NULL) {\n\t\t/* save the next job */\n\t\tnxpjob = (job *) GET_NEXT(pjob->ji_alljobs);\n\n\t\tif ((check_job_state(pjob, JOB_STATE_LTR_MOVED) && check_job_substate(pjob, JOB_SUBSTATE_FINISHED)) ||\n\t\t    (check_job_state(pjob, JOB_STATE_LTR_FINISHED)) ||\n\t\t    (check_job_state(pjob, JOB_STATE_LTR_EXPIRED))) {\n\n\t\t\tif (!(is_jattr_set(pjob, JOB_ATR_history_timestamp))) {\n\t\t\t\tif (check_job_state(pjob, JOB_STATE_LTR_MOVED))\n\t\t\t\t\tset_jattr_l_slim(pjob, JOB_ATR_history_timestamp, time_now, SET);\n\t\t\t\telse {\n\t\t\t\t\tif (((walltime_used = get_used_wall(pjob)) == -1) ||\n\t\t\t\t\t    !(is_jattr_set(pjob, JOB_ATR_stime))) {\n\t\t\t\t\t\tlog_err(-1, \"svr_clean_job_history\",\n\t\t\t\t\t\t\t\"Finished job missing start-time/walltime used, cannot clean history\");\n\t\t\t\t\t\tpjob = nxpjob;\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t\tset_jattr_l_slim(pjob, JOB_ATR_history_timestamp,\n\t\t\t\t\t\t\t get_jattr_long(pjob, JOB_ATR_stime) + walltime_used, SET);\n\t\t\t\t}\n\t\t\t\tjob_save_db(pjob);\n\t\t\t}\n\n\t\t\tif (time_now >= (get_jattr_long(pjob, JOB_ATR_history_timestamp) + svr_history_duration)) {\n\t\t\t\tjob_purge(pjob);\n\t\t\t\tpjob = NULL;\n\t\t\t}\n\t\t}\n\t\t/* restore the saved next in pjob */\n\t\tpjob = nxpjob;\n\n\t\t/* check if we spent too long hogging the pbs_server process here */\n\t\tend_time = time(NULL);\n\t\tif ((end_time - begin_time) > SVR_CLEAN_JOBHIST_SECS) {\n\t\t\t/* Apparently the interval between history purges is too long.\n\t\t\t * reduce it using factor 0.7\n\t\t\t */\n\t\t\ttime_between_tasks = (floor((double) time_between_tasks * 0.7));\n\n\t\t\t/* no use reducing to less than 4 * SVR_CLEAN_JOBHIST_SECS\n\t\t\t * since we'll already schedule a continuation task here\n\t\t\t */\n\t\t\tif (time_between_tasks < (4 * SVR_CLEAN_JOBHIST_SECS))\n\t\t\t\ttime_between_tasks = 4 * SVR_CLEAN_JOBHIST_SECS;\n\n\t\t\t/* 
set up another work task in near future,\n\t\t\t * but leave as much time as we spent in this routine for other work first\n\t\t\t */\n\t\t\tif (!set_task(WORK_Timed,\n\t\t\t\t      (end_time + SVR_CLEAN_JOBHIST_SECS),\n\t\t\t\t      svr_clean_job_history, NULL)) {\n\t\t\t\tlog_err(errno,\n\t\t\t\t\t\"svr_clean_job_history\",\n\t\t\t\t\t\"Unable to set task for clean job history\");\n\t\t\t\t/* on error to set task\n\t\t\t\t\t * just continue purging the history\n\t\t\t\t\t */\n\t\t\t} else\n\t\t\t\t/* but if we managed to set a task in near future, return;\n\t\t\t\t * that task will continue where we left off\n\t\t\t\t */\n\t\t\t\treturn;\n\t\t}\n\t} /* end of while loop through jobs */\n\n\t/* We purged everything necessary in this task if we get here.\n\t * set up another work task for next time period.\n\t */\n\tif (pwt && svr_history_enable) {\n\t\tif (!set_task(WORK_Timed,\n\t\t\t      (time_now + time_between_tasks),\n\t\t\t      svr_clean_job_history, NULL)) {\n\t\t\tlog_err(errno,\n\t\t\t\t\"svr_clean_job_history\",\n\t\t\t\t\"Unable to set task for clean job history\");\n\t\t}\n\t}\n\n\t/* try to move the time between tasks up again\n\t * but only if we spent less than 2/3rds of what we were allowed to spend\n\t * Note the last purge of a chain of purges will often tend to undo part of the lowering\n\t * in the earlier incomplete purges -- that's OK: 0.7*1.1 is still smaller than 1\n\t */\n\n\tif ((time_between_tasks < SVR_CLEAN_JOBHIST_TM) &&\n\t    ((end_time - begin_time) < floor((double) SVR_CLEAN_JOBHIST_SECS * 2 / 3))) {\n\t\ttime_between_tasks = ceil((double) time_between_tasks * 1.1);\n\t\tif (time_between_tasks > SVR_CLEAN_JOBHIST_TM)\n\t\t\ttime_between_tasks = SVR_CLEAN_JOBHIST_TM;\n\t}\n}\n\n/**\n * @brief\n * \t\tFunction name: svr_histjob_update()\n * \t\tDescription: Update the state/substate of the history job and save\n *\t\tthe job structure to the disk.\n * \t\tInput: 1) pjob 2) newstate 3) newsubstate\n *\n * 
@param[in,out]\tpjob\t-\tjob which needs to be updated.\n * @param[in]\tnewstate\t-\tinternal copy of state\n * @param[in]\tnewsubstate\t-\tjob sub-state\n *\n * @return\tNothing\n */\nvoid\nsvr_histjob_update(job *pjob, char newstate, int newsubstate)\n{\n\tchar oldstate = get_job_state(pjob);\n\tpbs_queue *pque = pjob->ji_qhdr;\n\n\t/* update the state count in queue and server */\n\tif (oldstate != newstate) {\n\t\tint oldstatenum;\n\t\tint newstatenum;\n\n\t\toldstatenum = state_char2int(oldstate);\n\t\tnewstatenum = state_char2int(newstate);\n\t\tif (oldstatenum != -1)\n\t\t\tserver.sv_jobstates[oldstatenum]--;\n\t\tif (newstatenum != -1)\n\t\t\tserver.sv_jobstates[newstatenum]++;\n\t\tif (pque != NULL) {\n\t\t\tif (oldstatenum != -1)\n\t\t\t\tpque->qu_njstate[oldstatenum]--;\n\t\t\tif (newstatenum != -1)\n\t\t\t\tpque->qu_njstate[newstatenum]++;\n\t\t}\n\t}\n\t/* set the job state and state char */\n\tset_job_state(pjob, newstate);\n\tset_job_substate(pjob, newsubstate);\n\n\t/* For subjob update the state */\n\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_SubJob) {\n\t\tupdate_sj_parent(pjob->ji_parentaj, pjob, pjob->ji_qs.ji_jobid, oldstate, newstate);\n\t\tchk_array_doneness(pjob->ji_parentaj);\n\t}\n\n\t/* set the status of each subjob if it is an array job */\n\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_ArrayJob) {\n\t\tint i;\n\t\tajinfo_t *ptbl = pjob->ji_ajinfo;\n\t\tif (ptbl) {\n\t\t\tfor (i = ptbl->tkm_start; i <= ptbl->tkm_end; i += ptbl->tkm_step) {\n\t\t\t\tint sjsst;\n\t\t\t\tchar sjst;\n\t\t\t\tjob *psubj = get_subjob_and_state(pjob, i, &sjst, &sjsst);\n\t\t\t\tif (psubj) {\n\t\t\t\t\tif (sjsst != JOB_SUBSTATE_TERMINATED &&\n\t\t\t\t\t    sjsst != JOB_SUBSTATE_FINISHED &&\n\t\t\t\t\t    sjsst != JOB_SUBSTATE_FAILED &&\n\t\t\t\t\t    sjsst != JOB_SUBSTATE_MOVED)\n\t\t\t\t\t\tsvr_histjob_update(psubj, newstate, newsubstate);\n\t\t\t\t\telse\n\t\t\t\t\t\tsvr_histjob_update(psubj, newstate, sjsst);\n\t\t\t\t} else\n\t\t\t\t\tupdate_sj_parent(pjob, NULL, 
create_subjob_id(pjob->ji_qs.ji_jobid, i), sjst, newstate);\n\t\t\t}\n\t\t}\n\t}\n\n\tjob_save_db(pjob);\n}\n\n/**\n * @brief\n * \t\t svr_chk_history_conf - Check if server is configured to keep job history info.\n *\n * @return\tBoolean value\n * @retval\t1\t: if the server is configured for job history info\n * @retval\t0\t: otherwise. i.e. one/both of svr_history_enable and\n * \t\t\t\t\t svr_history_duration is/are zero.\n */\nint\nsvr_chk_history_conf()\n{\n\treturn (svr_history_enable && svr_history_duration);\n}\n\n/**\n * @brief:\n *        update_job_finish_comment\t- Append job comment on exit (finished/terminated/failed) of job.\n *\n * @param[in]  *pjob       -\tjob structure\n * @param[in]  newsubstate -\tnew substate of the job\n * @param[in]  user        -\tusername who invoked job termination\n *\n * @return void\n *\n */\nvoid\nupdate_job_finish_comment(job *pjob, int newsubstate, char *user)\n{\n\tchar buffer[LOG_BUF_SIZE + 1] = {'\\0'};\n\tif ((is_jattr_set(pjob, JOB_ATR_Comment)) == 0) {\n\t\treturn;\n\t}\n\n\tif (newsubstate == JOB_SUBSTATE_FINISHED) {\n\t\tsnprintf(buffer, LOG_BUF_SIZE, \"%s and finished\",\n\t\t\t get_jattr_str(pjob, JOB_ATR_Comment));\n\t} else if (newsubstate == JOB_SUBSTATE_FAILED) {\n\t\tif (is_jattr_set(pjob, JOB_ATR_exit_status)) {\n\t\t\tswitch (get_jattr_long(pjob, JOB_ATR_exit_status)) {\n\t\t\t\tcase JOB_EXEC_KILL_NCPUS_BURST:\n\t\t\t\t\tsnprintf(buffer, LOG_BUF_SIZE, \"%s and exceeded resource ncpus (burst)\",\n\t\t\t\t\t\t get_jattr_str(pjob, JOB_ATR_Comment));\n\t\t\t\t\tbreak;\n\t\t\t\tcase JOB_EXEC_KILL_NCPUS_SUM:\n\t\t\t\t\tsnprintf(buffer, LOG_BUF_SIZE, \"%s and exceeded resource ncpus (sum)\",\n\t\t\t\t\t\t get_jattr_str(pjob, JOB_ATR_Comment));\n\t\t\t\t\tbreak;\n\t\t\t\tcase JOB_EXEC_KILL_VMEM:\n\t\t\t\t\tsnprintf(buffer, LOG_BUF_SIZE, \"%s and exceeded resource vmem\",\n\t\t\t\t\t\t get_jattr_str(pjob, JOB_ATR_Comment));\n\t\t\t\t\tbreak;\n\t\t\t\tcase JOB_EXEC_KILL_MEM:\n\t\t\t\t\tsnprintf(buffer, 
LOG_BUF_SIZE, \"%s and exceeded resource mem\",\n\t\t\t\t\t\t get_jattr_str(pjob, JOB_ATR_Comment));\n\t\t\t\t\tbreak;\n\t\t\t\tcase JOB_EXEC_KILL_CPUT:\n\t\t\t\t\tsnprintf(buffer, LOG_BUF_SIZE, \"%s and exceeded resource cput\",\n\t\t\t\t\t\t get_jattr_str(pjob, JOB_ATR_Comment));\n\t\t\t\t\tbreak;\n\t\t\t\tcase JOB_EXEC_KILL_WALLTIME:\n\t\t\t\t\tsnprintf(buffer, LOG_BUF_SIZE, \"%s and exceeded resource walltime\",\n\t\t\t\t\t\t get_jattr_str(pjob, JOB_ATR_Comment));\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\tsnprintf(buffer, LOG_BUF_SIZE, \"%s and failed\",\n\t\t\t\t\t\t get_jattr_str(pjob, JOB_ATR_Comment));\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t} else {\n\t\t\tsnprintf(buffer, LOG_BUF_SIZE, \"%s and failed\",\n\t\t\t\t get_jattr_str(pjob, JOB_ATR_Comment));\n\t\t}\n\t} else if (newsubstate == JOB_SUBSTATE_TERMINATED) {\n\t\t/* Don't overwrite the comment if already set by req_deletejob2 */\n\t\tif (strstr(get_jattr_str(pjob, JOB_ATR_Comment), \"terminated\") == NULL) {\n\t\t\tif (user != NULL) {\n\t\t\t\tsnprintf(buffer, LOG_BUF_SIZE, \"%s and terminated by %s\",\n\t\t\t\t\t get_jattr_str(pjob, JOB_ATR_Comment),\n\t\t\t\t\t user);\n\t\t\t} else {\n\t\t\t\tsnprintf(buffer, LOG_BUF_SIZE, \"%s and terminated\",\n\t\t\t\t\t get_jattr_str(pjob, JOB_ATR_Comment));\n\t\t\t}\n\t\t}\n\t}\n\tif (buffer[0] != '\\0') {\n\t\tset_jattr_str_slim(pjob, JOB_ATR_Comment, buffer, NULL);\n\t}\n}\n\n/**\n * @brief\n *\t\tSet the history info for the job and keep until cleaned up by the\n *\t\tserver after the svr_history_duration period.\n *\t\tCalled on the execution completion of jobs i.e. 
normal job (non-array)\n *\t\tor sub jobs; and when an array job is done (all subjobs complete).\n *\t\tAlso called when locally created normal jobs and array jobs are\n *\t\tmoved/routed to a new server.\n *\n * @param[in]\t*pjob\t-\tjob structure\n * @param[in]\ttype\t-\ttype of history\n *\t  \t\t\t\t\t\ttype = T_FIN_JOB for FINISHED jobs\n *\t  \t\t\t\t\t\ttype = T_MOV_JOB for MOVED jobs\n *\t  \t\t\t\t\t\ttype = T_MOM_DOWN for non-rerunnable jobs FAILED\n *\t  \t\t\t\t\t\tbecause MOM went down.\n *\n * @return\tvoid\n */\n\nvoid\nsvr_setjob_histinfo(job *pjob, histjob_type type)\n{\n\tchar newstate = 'T';\n\tint newsubstate = 0;\n\tajinfo_t *ptbl = NULL;\n\n\tif (type == T_MOV_JOB) { /* MOVED job */\n\t\tchar *destination = pjob->ji_qs.ji_destin;\n\t\tchar *tmpstr = NULL;\n\t\tchar qname[PBS_MAXROUTEDEST + 1];\n\n\t\tif (destination == NULL || *destination == '\\0') {\n\t\t\treturn;\n\t\t}\n\n\t\t/*\n\t\t * If the move_job request comes from the scheduler because of\n\t\t * peer-2-peer scheduling, then destin will have port number\n\t\t * (format: \"[<queue>]<server>:<portno>\")which is not required,\n\t\t * so strip the string after ':'.\n\t\t */\n\t\ttmpstr = strchr(destination, ':');\n\t\tif (tmpstr != NULL) {\n\t\t\t*tmpstr = '\\0';\n\t\t}\n\n\t\tsprintf(log_buffer,\n\t\t\t\"Job Moved to destination: \\\"%s\\\"\", destination);\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO,\n\t\t\t  pjob->ji_qs.ji_jobid, log_buffer);\n\n\t\t/* put the accounting log for MOVED job */\n\t\tsprintf(log_buffer, \"destination=%s\", destination);\n\t\taccount_record(PBS_ACCT_MOVED, pjob, log_buffer);\n\n\t\t/*\n\t\t * parse the queue name from the destination\n\t\t * set the queue attribute to the destination\n\t\t * set the server's ji_queue info to just the queue name\n\t\t */\n\t\tsnprintf(qname, sizeof(qname), \"%s\", destination);\n\n\t\t/* strip off the portion that isn't the queue name */\n\t\ttmpstr = strchr(qname, '@');\n\t\tif (tmpstr != NULL) 
{\n\t\t\t*tmpstr = '\\0';\n\t\t}\n\t\tsnprintf(pjob->ji_qs.ji_queue, sizeof(pjob->ji_qs.ji_queue),\n\t\t\t \"%.*s\", PBS_MAXQUEUENAME, qname);\n\n\t\t/* Set the queue attribute to destination */\n\t\tset_jattr_generic(pjob, JOB_ATR_in_queue, destination, NULL, SET);\n\n\t\t/* set the job comment attr with destination */\n\t\tsprintf(log_buffer, \"Job has been moved to \\\"%s\\\"\", destination);\n\t\tset_jattr_generic(pjob, JOB_ATR_Comment, log_buffer, NULL, SET);\n\n\t\t/*\n\t\t * SET the NEW STATE/SUB-STATE for the job (which is moved).\n\t\t * New STATE for the job will be JOB_STATE_LTR_MOVED and new\n\t\t * SUBSTATE will be JOB_SUBSTATE_MOVED.\n\t\t */\n\t\tnewstate = JOB_STATE_LTR_MOVED;\n\t\tnewsubstate = JOB_SUBSTATE_MOVED;\n\n\t} else if (type == T_FIN_JOB) {\n\t\t/*\n\t\t * FINISHED job:\n\t\t * set the OGSA-BES compliant substate for the job.\n\t\t * If it is terminated by deljob batch request, then\n\t\t * set it as TERMINATED. Else, if the job has run and\n\t\t * exited with non-zero exit status, then it is FAILED,\n\t\t * otherwise FINISHED.\n\t\t */\n\t\tnewstate = (pjob->ji_qs.ji_svrflags & JOB_SVFLG_SubJob) ? 
JOB_STATE_LTR_EXPIRED : JOB_STATE_LTR_FINISHED; /* default X for subjob, F for other jobs */\n\t\tnewsubstate = JOB_SUBSTATE_FINISHED;\t\t\t\t\t\t\t\t\t  /* default */\n\n\t\t/* If Array job, handle here */\n\t\tif ((pjob->ji_qs.ji_svrflags & JOB_SVFLG_ArrayJob) &&\n\t\t    (ptbl = pjob->ji_ajinfo)) {\n\t\t\tif (pjob->ji_terminated)\n\t\t\t\tnewsubstate = JOB_SUBSTATE_TERMINATED;\n\t\t\telse {\n\t\t\t\tint i;\n\t\t\t\tfor (i = ptbl->tkm_start; i <= ptbl->tkm_end; i += ptbl->tkm_step) {\n\t\t\t\t\tint sjsst;\n\t\t\t\t\tget_subjob_and_state(pjob, i, NULL, &sjsst);\n\t\t\t\t\tif (sjsst == JOB_SUBSTATE_FAILED || sjsst == JOB_SUBSTATE_TERMINATED) {\n\t\t\t\t\t\tnewsubstate = sjsst;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t} else { /* Non-Array job */\n\t\t\tif (pjob->ji_terminated) {\n\t\t\t\tnewsubstate = JOB_SUBSTATE_TERMINATED;\n\t\t\t} else if (is_jattr_set(pjob, JOB_ATR_exit_status)) {\n\t\t\t\tif (get_jattr_long(pjob, JOB_ATR_exit_status))\n\t\t\t\t\tnewsubstate = JOB_SUBSTATE_FAILED;\n\t\t\t}\n\t\t}\n\t\tupdate_job_finish_comment(pjob, newsubstate, NULL);\n\t} else if (type == T_MOM_DOWN) {\n\t\tnewstate = JOB_STATE_LTR_FINISHED;\n\t\tnewsubstate = JOB_SUBSTATE_FAILED;\n\t}\n\n\t/* if the job is not already in MOVED or FINISHED state, then */\n\t/* decrement the entity job counts and entity resource sums   */\n\n\tif (!check_job_state(pjob, JOB_STATE_LTR_MOVED) &&\n\t    !check_job_state(pjob, JOB_STATE_LTR_EXPIRED) &&\n\t    !check_job_state(pjob, JOB_STATE_LTR_FINISHED)) {\n\t\taccount_entity_limit_usages(pjob, NULL, NULL, DECR,\n\t\t\t\t\t    pjob->ji_etlimit_decr_queued ? ETLIM_ACC_ALL_MAX : ETLIM_ACC_ALL);\n\t\taccount_entity_limit_usages(pjob, pjob->ji_qhdr, NULL, DECR,\n\t\t\t\t\t    pjob->ji_etlimit_decr_queued ? 
ETLIM_ACC_ALL_MAX : ETLIM_ACC_ALL);\n\t}\n\n\t/* set the history timestamp */\n\tset_jattr_l_slim(pjob, JOB_ATR_history_timestamp, time_now, SET);\n\t/* update the history job state and substate */\n\tsvr_histjob_update(pjob, newstate, newsubstate);\n\n\t/*\n\t * Work tasks on history jobs are not required and may change the\n\t * history info which is dangerous, so better delete them.\n\t */\n\tfree_job_work_tasks(pjob);\n}\n\n/**\n * @brief\n * \t\tsvr_chk_histjob - check whether job is a history job: called from\n * \t\t\treq_stat_job() if type = 1;\n *\n * @param[in]\tpjob\t-\tjob structure to be checked\n *\n * @return\tPBSE_*\n * @retval\tPBSE_NONE\t: if it is not a history job or feature is not enabled.\n * @retval\tPBSE_UNKJOBID\t: if it is a moved job but active at remote server\n * @retval\tPBSE_HISTJOBID\t: if it is a Finished job or moved and finished\n *\t    \t\t    \t\t\tat remote server.\n */\n\nint\nsvr_chk_histjob(job *pjob)\n{\n\tint rc = PBSE_NONE;\n\n\t/*\n\t * If server is not configured for history job information,\n\t * no need to check further, return PBSE_NONE.\n\t */\n\tif (svr_history_enable == 0)\n\t\treturn PBSE_NONE;\n\n\t/*\n\t * If it is FINISHED or MOVED with substate FINISHED, then\n\t * return PBSE_HISTJOBID otherwise PBSE_NONE.\n\t */\n\tif (pjob) {\n\t\tswitch (get_job_state(pjob)) {\n\t\t\tcase JOB_STATE_LTR_FINISHED:\n\t\t\t\trc = PBSE_HISTJOBID;\n\t\t\t\tbreak;\n\t\t\tcase JOB_STATE_LTR_MOVED:\n\t\t\t\tif (check_job_substate(pjob, JOB_SUBSTATE_FINISHED))\n\t\t\t\t\trc = PBSE_HISTJOBID;\n\t\t\t\telse /* other than JOB_SUBSTATE_FINISHED */\n\t\t\t\t\trc = PBSE_UNKJOBID;\n\t\t\t\tbreak;\n\t\t}\n\t}\n\treturn rc;\n}\n\n/**\n * @brief\n * \tFinish the request to mom to update a job's exec_* values.\n *\tBoth the mom request and the originating client request are acknowledged.\n *\n * @param[in,out]\tpwt -\twork_task structure, containing info\n *\t\t\t\tabout the mom request and the client request.\n * @return none\n */\nstatic 
void\npost_send_job_exec_update_req(struct work_task *pwt)\n{\n\tstruct batch_request *mom_preq = NULL;\n\tstruct batch_request *cli_preq = NULL;\n\tint bcode = 0;\n\n\tif (pwt == NULL)\n\t\treturn;\n\n\tif (pwt->wt_aux2 != PROT_TPP)\n\t\tsvr_disconnect(pwt->wt_event); /* close connection to MOM */\n\tmom_preq = pwt->wt_parm1;\n\tmom_preq->rq_conn = mom_preq->rq_orgconn; /* restore socket to client */\n\tbcode = mom_preq->rq_reply.brp_code;\n\n\tcli_preq = pwt->wt_parm2;\n\n\tif (bcode) {\n\t\tchar err_msg[LOG_BUF_SIZE];\n\n\t\t/* also take note of the reject msg if any */\n\t\tif (mom_preq->rq_reply.brp_choice == BATCH_REPLY_CHOICE_Text) {\n\t\t\t(void) snprintf(err_msg, sizeof(err_msg), \"%s\", mom_preq->rq_reply.brp_un.brp_txt.brp_str);\n\t\t} else {\n\t\t\t(void) snprintf(err_msg, sizeof(err_msg), msg_mombadmodify, bcode);\n\t\t}\n\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, mom_preq->rq_ind.rq_modify.rq_objname, err_msg);\n\t\treq_reject(bcode, 0, mom_preq);\n\t\treply_text(cli_preq, bcode, err_msg);\n\t} else {\n\t\treply_ack(mom_preq);\n\t\tif (cli_preq != NULL) {\n\t\t\tif (cli_preq->rq_extend == NULL) {\n\t\t\t\treply_ack(cli_preq);\n\t\t\t} else {\n\t\t\t\treply_text(cli_preq, PBSE_NONE, cli_preq->rq_extend);\n\t\t\t}\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\n * Communicate to the MS mom pjob's exec_vnode, exec_host,\n * exec_host2, and schedselect attributes.\n *\n * @param[in]\tpjob - job structure\n * @param[out]  err_msg - a buffer of size 'err_msg_sz' supplied by the\n *       \t\t  caller and upon a failure will contain an appropriate\n *       \t\t  error message\n * @param[in]\terr_msg_sz - size of 'err_msg' buf\n * @param[in]\treply_req - the batch request to reply to if any\n *\n * @return int\n * @retval\t0\t- success\n * @retval\t1\t- fail\n */\n\nint\nsend_job_exec_update_to_mom(job *pjob, char *err_msg, int err_msg_sz,\n\t\t\t    struct batch_request *reply_req)\n{\n\tstruct batch_request *newreq;\n\tchar *new_exec_vnode = 
NULL;\n\tchar *new_exec_host = NULL;\n\tchar *new_exec_host2 = NULL;\n\tint rc = 1;\n\tint num_updates = 0;\n\tstruct work_task *pwt = NULL;\n\n\tif (pjob == NULL) {\n\t\tlog_err(-1, __func__, \"bad job parameter\");\n\t\treturn (1);\n\t}\n\n\tif ((err_msg[0] != '\\0') && (reply_req != NULL)) {\n\t\t/*\n\t\t * be sure to save/send this extra info in\n\t\t * 'err_msg' buf\n\t\t */\n\t\treply_req->rq_extend = strdup(err_msg);\n\t\tif (reply_req->rq_extend == NULL) {\n\t\t\tlog_err(-1, __func__, \"strdup failed\");\n\t\t\treturn (1);\n\t\t}\n\t}\n\n\tnewreq = alloc_br(PBS_BATCH_ModifyJob);\n\n\tif (newreq == (struct batch_request *) 0) {\n\t\tlog_err(-1, __func__, \"failed to alloc_br for PBS_BATCH_ModifyJob\");\n\t\treturn (1);\n\t}\n\tCLEAR_HEAD(newreq->rq_ind.rq_modify.rq_attr);\n\n\t(void) strcpy(newreq->rq_ind.rq_modify.rq_objname,\n\t\t      pjob->ji_qs.ji_jobid);\n\n\tif (is_jattr_set(pjob, JOB_ATR_exec_vnode)) {\n\t\tnew_exec_vnode =\n\t\t\tget_jattr_str(pjob, JOB_ATR_exec_vnode);\n\n\t\tif (add_to_svrattrl_list(\n\t\t\t    &(newreq->rq_ind.rq_modify.rq_attr),\n\t\t\t    ATTR_execvnode, NULL, new_exec_vnode, 0,\n\t\t\t    NULL) == -1) {\n\t\t\tif ((err_msg != NULL) && (err_msg_sz > 0)) {\n\t\t\t\tsnprintf(err_msg, err_msg_sz, \"failed to add_to_svrattrl_list(%s,%s,%s)\", ATTR_execvnode, \"\", new_exec_vnode);\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, err_msg);\n\t\t\t}\n\t\t\tgoto send_job_exec_update_exit;\n\t\t}\n\t\tnum_updates++;\n\t}\n\n\tif (is_jattr_set(pjob, JOB_ATR_exec_host)) {\n\t\tnew_exec_host =\n\t\t\tget_jattr_str(pjob, JOB_ATR_exec_host);\n\n\t\tif (add_to_svrattrl_list(\n\t\t\t    &(newreq->rq_ind.rq_modify.rq_attr),\n\t\t\t    ATTR_exechost, NULL, new_exec_host, 0, NULL) == -1) {\n\t\t\tif ((err_msg != NULL) && (err_msg_sz > 0)) {\n\t\t\t\tsnprintf(err_msg, err_msg_sz, \"failed to add_to_svrattrl_list(%s,%s,%s)\", ATTR_exechost, \"\", new_exec_host);\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, 
PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, err_msg);\n\t\t\t}\n\t\t\tgoto send_job_exec_update_exit;\n\t\t}\n\t\tnum_updates++;\n\t}\n\n\tif (is_jattr_set(pjob, JOB_ATR_exec_host2)) {\n\t\tnew_exec_host2 =\n\t\t\tget_jattr_str(pjob, JOB_ATR_exec_host2);\n\n\t\tif (add_to_svrattrl_list(\n\t\t\t    &(newreq->rq_ind.rq_modify.rq_attr),\n\t\t\t    ATTR_exechost2, NULL, new_exec_host2, 0, NULL) == -1) {\n\t\t\tif ((err_msg != NULL) && (err_msg_sz > 0)) {\n\t\t\t\tsnprintf(err_msg, err_msg_sz, \"failed to add_to_svrattrl_list(%s,%s,%s)\", ATTR_exechost2, \"\", new_exec_host2);\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, err_msg);\n\t\t\t}\n\t\t\tgoto send_job_exec_update_exit;\n\t\t}\n\t\tnum_updates++;\n\t}\n\n\tif (is_jattr_set(pjob, JOB_ATR_SchedSelect)) {\n\t\tif (add_to_svrattrl_list(\n\t\t\t    &(newreq->rq_ind.rq_modify.rq_attr),\n\t\t\t    ATTR_SchedSelect, NULL,\n\t\t\t    get_jattr_str(pjob, JOB_ATR_SchedSelect),\n\t\t\t    0, NULL) == -1) {\n\t\t\tif ((err_msg != NULL) && (err_msg_sz > 0)) {\n\t\t\t\tsnprintf(err_msg, err_msg_sz, \"failed to add_to_svrattrl_list(%s,%s,%s)\", ATTR_SchedSelect, \"\", get_jattr_str(pjob, JOB_ATR_SchedSelect));\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, err_msg);\n\t\t\t}\n\t\t\tgoto send_job_exec_update_exit;\n\t\t}\n\t\tnum_updates++;\n\t}\n\n\tif ((is_jattr_set(pjob, JOB_ATR_resource)) != 0) {\n\t\tpbs_list_head collectresc;\n\t\tsvrattrl *psvrl;\n\t\tattribute_def *objatrdef;\n\t\textern int resc_access_perm;\n\n\t\tobjatrdef = &job_attr_def[(int) JOB_ATR_resource];\n\t\tCLEAR_HEAD(collectresc);\n\t\tresc_access_perm = READ_ONLY;\n\t\tif (objatrdef->at_encode(get_jattr(pjob, JOB_ATR_resource), &collectresc, objatrdef->at_name, NULL, ATR_ENCODE_CLIENT, NULL) > 0) {\n\n\t\t\tpsvrl = (svrattrl *) GET_NEXT(collectresc);\n\t\t\twhile (psvrl) {\n\t\t\t\tif (add_to_svrattrl_list(\n\t\t\t\t\t    
&(newreq->rq_ind.rq_modify.rq_attr),\n\t\t\t\t\t    objatrdef->at_name, psvrl->al_resc,\n\t\t\t\t\t    psvrl->al_value, 0, NULL) == -1) {\n\t\t\t\t\tfree_attrlist(&collectresc);\n\t\t\t\t\tif ((err_msg != NULL) && (err_msg_sz > 0)) {\n\t\t\t\t\t\tsnprintf(err_msg, err_msg_sz, \"failed to add_to_svrattrl_list(%s,%s,%s)\", objatrdef->at_name, psvrl->al_resc, psvrl->al_value);\n\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, err_msg);\n\t\t\t\t\t}\n\t\t\t\t\tgoto send_job_exec_update_exit;\n\t\t\t\t}\n\t\t\t\tnum_updates++;\n\t\t\t\tpsvrl = (svrattrl *) GET_NEXT(psvrl->al_link);\n\t\t\t}\n\t\t\tfree_attrlist(&collectresc);\n\t\t}\n\t}\n\n\t/* pass the request on to MOM */\n\n\tif (num_updates > 0) {\n\t\trc = relay_to_mom2(pjob, newreq,\n\t\t\t\t   post_send_job_exec_update_req, &pwt);\n\t\tif (rc != 0) {\n\t\t\tlog_err(-1, __func__, \"failed telling mom of the request\");\n\t\t} else {\n\t\t\tpwt->wt_parm2 = reply_req;\n\t\t}\n\t} else {\n\t\t/* no updates, ok */\n\t\trc = 0;\n\t}\n\nsend_job_exec_update_exit:\n\n\tif ((rc != 0) || (num_updates == 0)) {\n\t\tfree_br(newreq);\n\t}\n\n\treturn (rc);\n}\n\n/**\n *\n * @brief\n *\tExtracts mom hostnames from exechostx and adds it to list\n *\n * @param[in/out]\tto_head - destination reliable_job_node list\n * @param[in]\texechostx - string in exechost format\n *\n * @return \tvoid\n */\nstatic void\npopulate_mom_list(pbs_list_head *to_head, char *exechostx)\n{\n\tchar *hostn = NULL, *last = NULL, *peh;\n\tint hasprn = 0;\n\n\tif (!to_head || !exechostx || !*exechostx) {\n\t\tlog_err(-1, __func__, \"bad param passed\");\n\t\treturn;\n\t}\n\n\tpeh = strdup(exechostx);\n\tif (peh == NULL) {\n\t\tlog_err(errno, __func__, \"strdup error\");\n\t\treturn;\n\t}\n\n\tCLEAR_HEAD((*to_head));\n\n\tfor (hostn = parse_plus_spec_r(peh, &last, &hasprn);\n\t     hostn;\n\t     hostn = parse_plus_spec_r(last, &last, &hasprn)) {\n\t\tif (reliable_job_node_add(to_head, strtok(hostn, \":/\")) == 
-1) {\n\t\t\tfree(peh);\n\t\t\treturn;\n\t\t}\n\t}\n\tfree(peh);\n\treturn;\n}\n\n/**\n * @brief\n *\treturns a copy of partial select string representing the MS (first) chunk\n *\tNote - caller to free the returned string pointer\n *\n * @param[in]\t\tselect_str - pointer to complete schedselect string\n *\n * @return char *\n * @retval ptr\tpointer to malloc'd string containing ms (first) chunk's select str\n *\n * @note\n * caller to free the returned string pointer\n*/\nstatic char *\nget_ms_select_chunk(char *select_str)\n{\n\tchar *slast, *selbuf, *psubspec, *retval = NULL;\n\tint hpn;\n\n\tif (select_str == NULL) {\n\t\tlog_err(-1, __func__, \"bad param passed\");\n\t\treturn (NULL);\n\t}\n\n\tselbuf = strdup(select_str);\n\tif (selbuf == NULL) {\n\t\tlog_err(errno, __func__, \"strdup fail\");\n\t\treturn (NULL);\n\t}\n\tpsubspec = parse_plus_spec_r(selbuf, &slast, &hpn);\n\n\tif (psubspec) {\n\t\twhile (*psubspec && !isalpha(*(psubspec++)))\n\t\t\t; /* one line loop */\n\n\t\tif (!(retval = strdup(--psubspec)))\n\t\t\tlog_err(errno, __func__, \"strdup fail\");\n\t}\n\n\tfree(selbuf);\n\treturn retval;\n}\n/**\n * @brief\n *\tRecreates the pjob's exec_vnode, updating at the same time\n *\tits corresponding exec_host and exec_host2 attributes\n *\tby taking out the vnodes managed by sister moms.\n *\n * @param[in,out]\tpjob - job structure\n * @param[in]\t\tvnodelist - if non-NULL, lists the vnodes to be\n *\t\t\t\tfreed whose parent mom is a sister mom.\n *\t\t\t\tif NULL, releases all the sister\n *\t\t\t\tvnodes assigned to 'pjob'\n * @param[in]\t\tkeep_select - non-NULL means it's a select string that\n *\t\t\t\tdescribes vnodes to be kept while freeing all other vnodes\n *\t\t\t\tassigned to 'pjob' whose parent mom is a sister mom.\n * @param[out]  err_msg - if function returns != 0 (failure), return\n *\t\t\t  any error message in this buffer.\n * @param[in]\terr_msg_sz - size of 'err_msg' buf.\n * @return int\n * @retval 0\tfor success\n * @retval 
1\tfor error\n*/\nint\nrecreate_exec_vnode(job *pjob, char *vnodelist, char *keep_select, char *err_msg,\n\t\t    int err_msg_sz)\n{\n\tchar *exec_vnode = NULL;\n\tchar *exec_host = NULL;\n\tchar *exec_host2 = NULL;\n\tchar *new_exec_vnode = NULL;\n\tchar *new_exec_host = NULL;\n\tchar *new_exec_host2 = NULL;\n\tchar *new_select = NULL;\n\tchar *schedselect = NULL;\n\tchar *deallocated_execvnode = NULL;\n\tchar *new_deallocated_execvnode = NULL;\n\tresource_def *prdefsl = NULL;\n\tresource *presc;\n\tint rc = 1;\n\trelnodes_input_t r_input;\n\trelnodes_input_vnodelist_t r_input_vnlist;\n\trelnodes_input_select_t r_input_keep_select;\n\tpbs_list_head succeeded_mom_list;\n\n\tif (pjob == NULL) {\n\t\tlog_err(-1, __func__, \"bad job parameter\");\n\t\treturn (1);\n\t}\n\n\tif ((!check_job_state(pjob, JOB_STATE_LTR_RUNNING)) &&\n\t    (!check_job_state(pjob, JOB_STATE_LTR_EXITING))) {\n\t\tlog_err(-1, __func__, \"job not in running or exiting state\");\n\t\treturn (1);\n\t}\n\n\tif ((is_jattr_set(pjob, JOB_ATR_exec_vnode)) == 0) {\n\t\tlog_err(-1, __func__, \"exec_vnode is not set\");\n\t\treturn (1);\n\t}\n\n\tif ((is_jattr_set(pjob, JOB_ATR_exec_host)) == 0) {\n\t\tlog_err(-1, __func__, \"exec_host is not set\");\n\t\treturn (1);\n\t}\n\n\tif ((is_jattr_set(pjob, JOB_ATR_exec_host2)) == 0) {\n\t\tlog_err(-1, __func__, \"exec_host2 is not set\");\n\t\treturn (1);\n\t}\n\n\tif ((is_jattr_set(pjob, JOB_ATR_SchedSelect)) == 0) {\n\t\tlog_err(-1, __func__, \"schedselect is not set\");\n\t\treturn (1);\n\t}\n\n\texec_vnode = get_jattr_str(pjob, JOB_ATR_exec_vnode);\n\n\texec_host = get_jattr_str(pjob, JOB_ATR_exec_host);\n\n\texec_host2 = get_jattr_str(pjob, JOB_ATR_exec_host2);\n\n\tschedselect = get_jattr_str(pjob, JOB_ATR_SchedSelect);\n\n\tif (is_jattr_set(pjob, JOB_ATR_exec_vnode_deallocated))\n\t\tdeallocated_execvnode = get_jattr_str(pjob, JOB_ATR_exec_vnode_deallocated);\n\n\trelnodes_input_init(&r_input);\n\tr_input.jobid = 
pjob->ji_qs.ji_jobid;\n\tr_input.execvnode = exec_vnode;\n\tr_input.exechost = exec_host;\n\tr_input.exechost2 = exec_host2;\n\tr_input.schedselect = schedselect;\n\tr_input.p_new_exec_vnode = &new_exec_vnode;\n\tr_input.p_new_exec_host[0] = &new_exec_host;\n\tr_input.p_new_exec_host[1] = &new_exec_host2;\n\tr_input.p_new_schedselect = &new_select;\n\n\tif (keep_select == NULL) {\n\t\trelnodes_input_vnodelist_init(&r_input_vnlist);\n\t\tr_input_vnlist.vnodelist = vnodelist;\n\t\tr_input_vnlist.deallocated_nodes_orig = deallocated_execvnode;\n\t\tr_input_vnlist.p_new_deallocated_execvnode = &new_deallocated_execvnode;\n\n\t\trc = pbs_release_nodes_given_nodelist(&r_input, &r_input_vnlist, err_msg, err_msg_sz);\n\t} else {\n\t\tint select_str_sz = 0;\n\t\trelnodes_input_select_init(&r_input_keep_select);\n\t\tr_input_keep_select.select_str = get_ms_select_chunk(schedselect); /* has to be freed later */\n\t\tselect_str_sz = strlen(r_input_keep_select.select_str) + 1;\n\t\tpbs_strcat(&r_input_keep_select.select_str, &select_str_sz, \"+\");\n\t\tpbs_strcat(&r_input_keep_select.select_str, &select_str_sz, keep_select);\n\t\tpopulate_mom_list(&succeeded_mom_list, exec_host2);\n\t\tr_input_keep_select.succeeded_mom_list = &succeeded_mom_list;\n\n\t\trc = pbs_release_nodes_given_select(&r_input, &r_input_keep_select, err_msg, err_msg_sz);\n\t\tfree(r_input_keep_select.select_str);\n\t\treliable_job_node_free(&succeeded_mom_list);\n\t}\n\n\tif (rc != 0) {\n\t\tgoto recreate_exec_vnode_exit;\n\t}\n\n\tif (new_exec_vnode && (new_exec_vnode[0] != '\\0')) {\n\n\t\tif (strcmp(get_jattr_str(pjob, JOB_ATR_exec_vnode),\n\t\t\t   new_exec_vnode) == 0) {\n\t\t\t/* no change */\n\n\t\t\tif ((err_msg != NULL) && (err_msg_sz > 0)) {\n\t\t\t\tsnprintf(err_msg, err_msg_sz, \"node(s) requested to be released not part of the job: %s\", vnodelist ? 
vnodelist : \"\");\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, pjob->ji_qs.ji_jobid, err_msg);\n\t\t\t}\n\t\t\tgoto recreate_exec_vnode_exit;\n\t\t}\n\t\tset_jattr_str_slim(pjob, JOB_ATR_exec_vnode_acct, get_jattr_str(pjob, JOB_ATR_exec_vnode), NULL);\n\n\t\t/* save original value which will be used later in the accounting end record */\n\t\tif ((is_jattr_set(pjob, JOB_ATR_exec_vnode_orig)) == 0) {\n\t\t\tset_jattr_str_slim(pjob, JOB_ATR_exec_vnode_orig,\n\t\t\t\t\t   get_jattr_str(pjob, JOB_ATR_exec_vnode), NULL);\n\t\t}\n\n\t\tif ((is_jattr_set(pjob, JOB_ATR_resource_acct)) != 0) {\n\t\t\tfree_jattr(pjob, JOB_ATR_resource_acct);\n\t\t\tmark_jattr_not_set(pjob, JOB_ATR_resource_acct);\n\t\t}\n\t\tset_attr_with_attr(&job_attr_def[JOB_ATR_resource_acct], get_jattr(pjob, JOB_ATR_resource_acct), get_jattr(pjob, JOB_ATR_resource), INCR);\n\t\tset_jattr_str_slim(pjob, JOB_ATR_exec_vnode, new_exec_vnode, NULL);\n\n\t\t(void) update_resources_list(pjob, ATTR_l,\n\t\t\t\t\t     JOB_ATR_resource, new_exec_vnode, INCR, 0,\n\t\t\t\t\t     JOB_ATR_resource_orig);\n\t} else {\n\t\tlog_err(-1, __func__, \"new_exec_vnode is null or empty string\");\n\t\tgoto recreate_exec_vnode_exit;\n\t}\n\n\tif (!keep_select && new_deallocated_execvnode && *new_deallocated_execvnode) {\n\t\tset_jattr_str_slim(pjob, JOB_ATR_exec_vnode_deallocated, new_deallocated_execvnode, NULL);\n\t}\n\n\tif (new_exec_host && *new_exec_host) {\n\n\t\tset_jattr_str_slim(pjob, JOB_ATR_exec_host_acct, get_jattr_str(pjob, JOB_ATR_exec_host), NULL);\n\n\t\t/* save original value which will be used later in the accounting end record */\n\t\tif ((is_jattr_set(pjob, JOB_ATR_exec_host_orig)) == 0) {\n\t\t\tset_jattr_str_slim(pjob, JOB_ATR_exec_host_orig, get_jattr_str(pjob, JOB_ATR_exec_host), NULL);\n\t\t}\n\n\t\tset_jattr_str_slim(pjob, JOB_ATR_exec_host, new_exec_host, NULL);\n\t} else {\n\t\tlog_err(-1, __func__, \"new_exec_host is null or empty string\");\n\t\tgoto 
recreate_exec_vnode_exit;\n\t}\n\n\tif (new_exec_host2 && *new_exec_host2) {\n\n\t\tset_jattr_str_slim(pjob, JOB_ATR_exec_host2, new_exec_host2, NULL);\n\t} else {\n\t\tlog_err(-1, __func__, \"new_exec_host2 is null or empty string\");\n\t\tgoto recreate_exec_vnode_exit;\n\t}\n\n\tif (new_select && *new_select) {\n\t\tprdefsl = &svr_resc_def[RESC_SELECT];\n\t\t/* re-generate \"select\" resource */\n\t\tif (prdefsl != NULL) {\n\t\t\tpresc = find_resc_entry(get_jattr(pjob, JOB_ATR_resource), prdefsl);\n\t\t\tif (presc == NULL)\n\t\t\t\tpresc = add_resource_entry(get_jattr(pjob, JOB_ATR_resource), prdefsl);\n\t\t\tif (presc != NULL) {\n\t\t\t\t(void) prdefsl->rs_decode(\n\t\t\t\t\t&presc->rs_value, NULL, \"select\", new_select);\n\t\t\t}\n\t\t}\n\t\t/* re-generate \"schedselect\" attribute */\n\n\t\tif (is_jattr_set(pjob, JOB_ATR_SchedSelect)) {\n\t\t\t/* Save current SchedSelect value if not */\n\t\t\t/* already saved in *_orig */\n\t\t\tif (!is_jattr_set(pjob, JOB_ATR_SchedSelect_orig))\n\t\t\t\tset_jattr_str_slim(pjob, JOB_ATR_SchedSelect_orig, get_jattr_str(pjob, JOB_ATR_SchedSelect), NULL);\n\t\t}\n\t\tset_jattr_str_slim(pjob, JOB_ATR_SchedSelect, new_select, NULL);\n\t\t/* re-generate nodect */\n\t\tset_chunk_sum(get_jattr(pjob, JOB_ATR_SchedSelect), get_jattr(pjob, JOB_ATR_resource));\n\n\t} else {\n\t\tlog_err(-1, __func__, \"new_select is null or empty string\");\n\t\tgoto recreate_exec_vnode_exit;\n\t}\nrecreate_exec_vnode_exit:\n\tfree(new_exec_vnode);\n\tfree(new_exec_host);\n\tfree(new_exec_host2);\n\tfree(new_select);\n\tfree(new_deallocated_execvnode);\n\n\treturn (rc);\n}\n\n/**\n * @brief\n *  action_max_run_subjobs - This is the action function for the max_run_subjobs attribute.\n *\t\t\t   It verifies that the attribute is being set only on array jobs.\n *\n * @param[in]\tpattr\t-\tattribute structure\n * @param[in]\tpobject\t-\tjob object\n * @param[in]\tactmode\t-\taction mode\n */\nint\naction_max_run_subjobs(attribute *pattr, void *pobject, int 
actmode)\n{\n\tjob *pjob = (job *) pobject;\n\tint jtype;\n\n\tif (pjob == NULL)\n\t\treturn PBSE_INTERNAL;\n\n\tjtype = is_job_array(pjob->ji_qs.ji_jobid);\n\tif (jtype != IS_ARRAY_ArrayJob)\n\t\treturn PBSE_NOTARRAY_ATTR;\n\n\treturn PBSE_NONE;\n}\n"
  },
  {
    "path": "src/server/svr_mail.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    svr_mail.c\n *\n * @brief\n * \t\tsvr_mail.c - send mail to mail list or owner of job on\n *\t\tjob begin, job end, and/or job abort\n *\n * \tIncluded public functions are:\n *\t\tcreate_socket_and_connect()\n *\t\tread_smtp_reply()\n *\t\twrite3_smtp_data()\n *\t\tsend_mail()\n *\t\tsend_mail_detach()\n *\t\tsvr_mailowner_id()\n *\t\tsvr_mailowner()\n *\t\tsvr_mailownerResv()\n *\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <unistd.h>\n#include <errno.h>\n#include <sys/types.h>\n#include \"pbs_ifl.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"server_limits.h\"\n#include \"log.h\"\n\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"server.h\"\n#include \"tpp.h\"\n\n/* External Functions Called */\n\nextern void net_close(int);\n\n/* Globol Data */\n\nextern struct server server;\nextern char *msg_job_abort;\nextern char *msg_job_start;\nextern char *msg_job_end;\nextern char *msg_resv_abort;\nextern char *msg_resv_start;\nextern char *msg_resv_end;\nextern char *msg_resv_confirm;\nextern char *msg_job_stageinfail;\n\n#define MAIL_ADDR_BUF_LEN 1024\n\n/**\n * @brief\n * \t\tExec mailer (sendmail like) and return a descriptor with a pipe\n *\t\twhere the mailer is waiting for data on 
stdin. The descriptor\n *\t\tmust be closed after conveying data.\n *\n * @param[in]\tmailer - path to sendmail/mailer\n * @param[in]\tmailfrom - the sender of the email\n * @param[in]\tmailto - the recipient of the email\n *\n * @return\tFILE *\n * @retval\tfile descriptor : if fdopen succeed\n * @retval\tNULL : failed\n */\nstatic FILE *\nsvr_exec_mailer(char *mailer, char *mailfrom, char *mailto)\n{\n\tchar *margs[5];\n\tint mfds[2];\n\tpid_t mcpid;\n\n\t/* setup sendmail/mailer command line with -f from_whom */\n\n\tmargs[0] = mailer;\n\tmargs[1] = \"-f\";\n\tmargs[2] = mailfrom;\n\tmargs[3] = mailto;\n\tmargs[4] = NULL;\n\n\tif (pipe(mfds) == -1)\n\t\texit(1);\n\n\tmcpid = fork();\n\tif (mcpid == 0) {\n\t\t/* this child will be sendmail with its stdin set to the pipe */\n\t\tclose(mfds[1]);\n\t\tif (mfds[0] != 0) {\n\t\t\t(void) close(0);\n\t\t\tif (dup(mfds[0]) == -1)\n\t\t\t\texit(1);\n\t\t}\n\t\t(void) close(1);\n\t\t(void) close(2);\n\t\tif (execv(mailer, margs) == -1)\n\t\t\texit(1);\n\t}\n\tif (mcpid == -1) { /* Error on fork */\n\t\tlog_err(errno, __func__, \"fork failed\\n\");\n\t\t(void) close(mfds[0]);\n\t\texit(1);\n\t}\n\n\t/* parent (not the real server though) will write body of message on pipe */\n\t(void) close(mfds[0]);\n\n\treturn (fdopen(mfds[1], \"w\"));\n}\n\n/**\n * @brief\n * \t\tSend mail to owner of a job when an event happens that\n *\t\trequires mail, such as the job starts, ends or is aborted.\n *\t\tThe event is matched against those requested by the user.\n *\t\tFor Unix/Linux, a child is forked to not hold up the Server.  
This child\n *\t\twill fork/exec sendmail and pipe the To, Subject and body to it.\n *\n * @param[in]\tjid\t-\tthe Job ID (string)\n * @param[in]\tpjob\t-\tpointer to the job structure\n * @param[in]\tmailpoint\t-\twhich mail event is triggering the send\n * @param[in]\tforce\t-\tif non-zero, force the mail even if not requested\n * @param[in]\ttext\t-\tthe body text of the mail message\n *\n * @return\tnone\n */\nvoid\nsvr_mailowner_id(char *jid, job *pjob, int mailpoint, int force, char *text)\n{\n\tint addmailhost;\n\tint i;\n\tchar *mailer;\n\tchar *mailfrom;\n\tchar mailto[MAIL_ADDR_BUF_LEN];\n\tint mailaddrlen = 0;\n\tstruct array_strings *pas;\n\tchar *stdmessage = NULL;\n\tchar *pat;\n\n\tFILE *outmail;\n\tpid_t mcpid;\n\n\t/* if force is true, force the mail out regardless of mailpoint */\n\n\tif (force != MAIL_FORCE) {\n\t\tif (pjob != 0) {\n\n\t\t\tif (pjob->ji_qs.ji_svrflags & JOB_SVFLG_SubJob) {\n\t\t\t\tif (is_jattr_set(pjob, JOB_ATR_mailpnts)) {\n\t\t\t\t\tif (strchr(get_jattr_str(pjob, JOB_ATR_mailpnts), MAIL_SUBJOB) == NULL)\n\t\t\t\t\t\treturn;\n\t\t\t\t} else\n\t\t\t\t\treturn;\n\t\t\t}\n\n\t\t\t/* see if user specified mail of this type */\n\n\t\t\tif (is_jattr_set(pjob, JOB_ATR_mailpnts)) {\n\t\t\t\tif (strchr(get_jattr_str(pjob, JOB_ATR_mailpnts), mailpoint) == NULL)\n\t\t\t\t\treturn;\n\t\t\t} else if (mailpoint != MAIL_ABORT) /* not set, default to abort */\n\t\t\t\treturn;\n\n\t\t} else if (!is_sattr_set(SVR_ATR_mailfrom)) {\n\n\t\t\t/* not job related, must be system related;  not sent unless */\n\t\t\t/* forced or if \"mailfrom\" attribute set         \t\t */\n\t\t\treturn;\n\t\t}\n\t}\n\n\t/*\n\t * ok, now we will fork a process to do the mailing to not\n\t * hold up the server's other work.\n\t */\n\n\tmcpid = fork();\n\tif (mcpid == -1) { /* Error on fork */\n\t\tlog_err(errno, __func__, \"fork failed\\n\");\n\t\treturn;\n\t}\n\tif (mcpid > 0)\n\t\treturn; /* its all up to the child now */\n\n\t/*\n\t * From here on, we are a child 
process of the server.\n\t * Fix up file descriptors and signal handlers.\n\t */\n\tnet_close(-1);\n\ttpp_terminate();\n\n\t/* Unprotect child from being killed by kernel */\n\tdaemon_protect(0, PBS_DAEMON_PROTECT_OFF);\n\n\tif (is_sattr_set(SVR_ATR_mailer))\n\t\tmailer = get_sattr_str(SVR_ATR_mailer);\n\telse\n\t\tmailer = SENDMAIL_CMD;\n\n\t/* Who is mail from, if SVR_ATR_mailfrom not set use default */\n\n\tif (is_sattr_set(SVR_ATR_mailfrom))\n\t\tmailfrom = get_sattr_str(SVR_ATR_mailfrom);\n\telse\n\t\tmailfrom = PBS_DEFAULT_MAIL;\n\n\t/* Who does the mail go to?  If mail-list, them; else owner */\n\n\t*mailto = '\\0';\n\tif (pjob != 0) {\n\t\tif (jid == NULL)\n\t\t\tjid = pjob->ji_qs.ji_jobid;\n\n\t\tif (is_jattr_set(pjob, JOB_ATR_mailuser)) {\n\n\t\t\t/* has mail user list, send to them rather than owner */\n\n\t\t\tpas = get_jattr_arst(pjob, JOB_ATR_mailuser);\n\t\t\tif (pas != NULL) {\n\t\t\t\tfor (i = 0; i < pas->as_usedptr; i++) {\n\t\t\t\t\taddmailhost = 0;\n\t\t\t\t\tmailaddrlen += strlen(pas->as_string[i]) + 2;\n\t\t\t\t\tif ((pbs_conf.pbs_mail_host_name) &&\n\t\t\t\t\t    (strchr(pas->as_string[i], (int) '@') == NULL)) {\n\t\t\t\t\t\t/* no host specified in address and      */\n\t\t\t\t\t\t/* pbs_mail_host_name is defined, use it */\n\t\t\t\t\t\tmailaddrlen += strlen(pbs_conf.pbs_mail_host_name) + 1;\n\t\t\t\t\t\taddmailhost = 1;\n\t\t\t\t\t}\n\t\t\t\t\tif (mailaddrlen < sizeof(mailto)) {\n\t\t\t\t\t\t(void) strcat(mailto, pas->as_string[i]);\n\t\t\t\t\t\tif (addmailhost) {\n\t\t\t\t\t\t\t/* append pbs_mail_host_name */\n\t\t\t\t\t\t\t(void) strcat(mailto, \"@\");\n\t\t\t\t\t\t\t(void) strcat(mailto, pbs_conf.pbs_mail_host_name);\n\t\t\t\t\t\t}\n\t\t\t\t\t\t(void) strcat(mailto, \" \");\n\t\t\t\t\t} else {\n\t\t\t\t\t\tsprintf(log_buffer, \"Email list is too long: \\\"%.77s...\\\"\", mailto);\n\t\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_WARNING, pjob->ji_qs.ji_jobid, 
log_buffer);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t} else {\n\n\t\t\t/* no mail user list, just send to owner */\n\n\t\t\tpbs_strncpy(mailto, get_jattr_str(pjob, JOB_ATR_job_owner), sizeof(mailto));\n\t\t\t/* if pbs_mail_host_name is set in pbs.conf, then replace the */\n\t\t\t/* host name with the name specified in pbs_mail_host_name    */\n\t\t\tif (pbs_conf.pbs_mail_host_name) {\n\t\t\t\tif ((pat = strchr(mailto, (int) '@')) != NULL)\n\t\t\t\t\t*pat = '\\0'; /* remove existing @host */\n\t\t\t\tif ((strlen(mailto) + strlen(pbs_conf.pbs_mail_host_name) + 1) < sizeof(mailto)) {\n\t\t\t\t\t/* append the pbs_mail_host_name since it fits */\n\t\t\t\t\tstrcat(mailto, \"@\");\n\t\t\t\t\tstrcat(mailto, pbs_conf.pbs_mail_host_name);\n\t\t\t\t} else {\n\t\t\t\t\tif (pat)\n\t\t\t\t\t\t*pat = '@'; /* didn't fit, restore the \"at\" sign */\n\t\t\t\t\tsprintf(log_buffer, \"Email address is too long: \\\"%.77s...\\\"\", mailto);\n\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_WARNING, pjob->ji_qs.ji_jobid, log_buffer);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t} else {\n\t\t/* send system related mail to \"mailfrom\" */\n\t\tstrcpy(mailto, mailfrom);\n\t}\n\n\tif ((outmail = svr_exec_mailer(mailer, mailfrom, mailto)) == NULL)\n\t\texit(1);\n\n\t/* Pipe in mail headers: To: and Subject: */\n\n\tfprintf(outmail, \"To: %s\\n\", mailto);\n\n\tif (pjob)\n\t\tfprintf(outmail, \"Subject: PBS JOB %s\\n\\n\", jid);\n\telse\n\t\tfprintf(outmail, \"Subject: PBS Server on %s\\n\\n\", server_host);\n\n\t/* Now pipe in \"standard\" message */\n\n\tswitch (mailpoint) {\n\n\t\tcase MAIL_ABORT:\n\t\t\tstdmessage = msg_job_abort;\n\t\t\tbreak;\n\n\t\tcase MAIL_BEGIN:\n\t\t\tstdmessage = msg_job_start;\n\t\t\tbreak;\n\n\t\tcase MAIL_END:\n\t\t\tstdmessage = msg_job_end;\n\t\t\tbreak;\n\n\t\tcase MAIL_STAGEIN:\n\t\t\tstdmessage = msg_job_stageinfail;\n\t\t\tbreak;\n\t}\n\n\tif (pjob) {\n\t\tfprintf(outmail, \"PBS Job Id: %s\\n\", jid);\n\t\tfprintf(outmail, \"Job Name:   
%s\\n\",\n\t\t\tget_jattr_str(pjob, JOB_ATR_jobname));\n\t}\n\tif (stdmessage)\n\t\tfprintf(outmail, \"%s\\n\", stdmessage);\n\tif (text != NULL)\n\t\tfprintf(outmail, \"%s\\n\", text);\n\tfclose(outmail);\n\n\texit(0);\n}\n/**\n * @brief\n * \t\tsvr_mailowner - Send mail to owner of a job when an event happens that\n *\t\trequires mail, such as the job starts, ends or is aborted.\n *\t\tThe event is matched against those requested by the user.\n *\t\tFor Unix/Linux, a child is forked to not hold up the Server.  This child\n *\t\twill fork/exec sendmail and pipe the To, Subject and body to it.\n *\n * @param[in]\tpjob\t-\tptr to job (null for server based mail)\n * @param[in]\tmailpoint\t-\tnote, single character\n * @param[in]\tforce\t-\tif set, force mail delivery\n * @param[in]\ttext\t-\tadditional message text\n */\nvoid\nsvr_mailowner(job *pjob, int mailpoint, int force, char *text)\n{\n\tsvr_mailowner_id(NULL, pjob, mailpoint, force, text);\n}\n\n/**\n * @brief\n * \t\tSend mail to owner of a reservation when an event happens that\n *\t\trequires mail, such as the reservation starts, ends or is aborted.\n *\t\tThe event is matched against those requested by the user.\n *\t\tFor Unix/Linux, a child is forked to not hold up the Server.  
This child\n *\t\twill fork/exec sendmail and pipe the To, Subject and body to it.\n *\n * @param[in]\tpresv\t-\tpointer to the reservation structure\n * @param[in]\tmailpoint\t-\twhich mail event is triggering the send\n * @param[in]\tforce\t-\tif non-zero, force the mail even if not requested\n * @param[in]\ttext\t-\tthe body text of the mail message\n *\n * @return\tnone\n */\nvoid\nsvr_mailownerResv(resc_resv *presv, int mailpoint, int force, char *text)\n{\n\tint i;\n\tint addmailhost;\n\tchar *mailer;\n\tchar *mailfrom;\n\tchar mailto[MAIL_ADDR_BUF_LEN];\n\tint mailaddrlen = 0;\n\tstruct array_strings *pas;\n\tchar *pat;\n\tchar *stdmessage = NULL;\n\n\tFILE *outmail;\n\tpid_t mcpid;\n\n\tif (force != MAIL_FORCE) {\n\t\t/*Not forcing out mail regardless of mailpoint */\n\n\t\tif (is_rattr_set(presv, RESV_ATR_mailpnts)) {\n\t\t\t/*user has set one or more mailpoints; is this one included?*/\n\t\t\tif (strchr(get_rattr_str(presv, RESV_ATR_mailpnts), mailpoint) == NULL)\n\t\t\t\treturn;\n\t\t} else {\n\t\t\t/*user hasn't bothered to set any mailpoints so default to\n\t\t\t *sending mail only in the case of reservation deletion and\n\t\t\t *reservation confirmation\n\t\t\t */\n\t\t\tif ((mailpoint != MAIL_ABORT) && (mailpoint != MAIL_CONFIRM))\n\t\t\t\treturn;\n\t\t}\n\t}\n\n\tif (is_rattr_set(presv, RESV_ATR_mailpnts)) {\n\t\tif (strchr(get_rattr_str(presv, RESV_ATR_mailpnts), MAIL_NONE) != NULL)\n\t\t\treturn;\n\t}\n\n\t/*\n\t * ok, now we will fork a process to do the mailing to not\n\t * hold up the server's other work.\n\t */\n\n\tmcpid = fork();\n\tif (mcpid == -1) { /* Error on fork */\n\t\tlog_err(errno, __func__, \"fork failed\\n\");\n\t\treturn;\n\t}\n\tif (mcpid > 0)\n\t\treturn; /* it's all up to the child now */\n\n\t/*\n\t * From here on, we are a child process of the server.\n\t * Fix up file descriptors and signal handlers.\n\t */\n\n\tnet_close(-1);\n\ttpp_terminate();\n\n\t/* Unprotect child from being killed by kernel */\n\tdaemon_protect(0, 
PBS_DAEMON_PROTECT_OFF);\n\n\tif (is_sattr_set(SVR_ATR_mailer))\n\t\tmailer = get_sattr_str(SVR_ATR_mailer);\n\telse\n\t\tmailer = SENDMAIL_CMD;\n\n\t/* Who is mail from, if SVR_ATR_mailfrom not set use default */\n\n\tif (is_sattr_set(SVR_ATR_mailfrom))\n\t\tmailfrom = get_sattr_str(SVR_ATR_mailfrom);\n\telse\n\t\tmailfrom = PBS_DEFAULT_MAIL;\n\n\t/* Who does the mail go to?  If mail-list, them; else owner */\n\n\t*mailto = '\\0';\n\tif (is_rattr_set(presv, RESV_ATR_mailuser)) {\n\n\t\t/* has mail user list, send to them rather than owner */\n\n\t\tpas = get_rattr_arst(presv, RESV_ATR_mailuser);\n\t\tif (pas != NULL) {\n\t\t\tfor (i = 0; i < pas->as_usedptr; i++) {\n\t\t\t\taddmailhost = 0;\n\t\t\t\tmailaddrlen += strlen(pas->as_string[i]) + 2;\n\t\t\t\tif ((pbs_conf.pbs_mail_host_name) &&\n\t\t\t\t    (strchr(pas->as_string[i], (int) '@') == NULL)) {\n\t\t\t\t\t/* no host specified in address and      */\n\t\t\t\t\t/* pbs_mail_host_name is defined, use it */\n\t\t\t\t\tmailaddrlen += strlen(pbs_conf.pbs_mail_host_name) + 1;\n\t\t\t\t\taddmailhost = 1;\n\t\t\t\t}\n\t\t\t\tif (mailaddrlen < sizeof(mailto)) {\n\t\t\t\t\t(void) strcat(mailto, pas->as_string[i]);\n\t\t\t\t\tif (addmailhost) {\n\t\t\t\t\t\t/* append pbs_mail_host_name */\n\t\t\t\t\t\t(void) strcat(mailto, \"@\");\n\t\t\t\t\t\t(void) strcat(mailto, pbs_conf.pbs_mail_host_name);\n\t\t\t\t\t}\n\t\t\t\t\t(void) strcat(mailto, \" \");\n\t\t\t\t} else {\n\t\t\t\t\tsprintf(log_buffer, \"Email list is too long: \\\"%.77s...\\\"\", mailto);\n\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_WARNING, presv->ri_qs.ri_resvID, log_buffer);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\n\t} else {\n\n\t\t/* no mail user list, just send to owner */\n\n\t\t(void) pbs_strncpy(mailto, get_rattr_str(presv, RESV_ATR_resv_owner), sizeof(mailto));\n\t\t/* if pbs_mail_host_name is set in pbs.conf, then replace the */\n\t\t/* host name with the name specified in pbs_mail_host_name    */\n\t\tif 
(pbs_conf.pbs_mail_host_name) {\n\t\t\tif ((pat = strchr(mailto, (int) '@')) != NULL)\n\t\t\t\t*pat = '\\0'; /* remove existing @host */\n\t\t\tif ((strlen(mailto) + strlen(pbs_conf.pbs_mail_host_name) + 1) < sizeof(mailto)) {\n\t\t\t\t/* append the pbs_mail_host_name since it fits */\n\t\t\t\tstrcat(mailto, \"@\");\n\t\t\t\tstrcat(mailto, pbs_conf.pbs_mail_host_name);\n\t\t\t} else {\n\t\t\t\tif (pat)\n\t\t\t\t\t*pat = '@'; /* didn't fit, restore the \"at\" sign */\n\t\t\t\tsprintf(log_buffer, \"Email address is too long: \\\"%.77s...\\\"\", mailto);\n\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_WARNING, presv->ri_qs.ri_resvID, log_buffer);\n\t\t\t}\n\t\t}\n\t}\n\n\tif ((outmail = svr_exec_mailer(mailer, mailfrom, mailto)) == NULL)\n\t\texit(1);\n\n\t/* Pipe in mail headers: To: and Subject: */\n\n\tfprintf(outmail, \"To: %s\\n\", mailto);\n\tfprintf(outmail, \"Subject: PBS RESERVATION %s\\n\\n\", presv->ri_qs.ri_resvID);\n\n\t/* Now pipe in \"standard\" message */\n\n\tswitch (mailpoint) {\n\n\t\tcase MAIL_ABORT:\n\t\t\t/*\"Aborted by Server, Scheduler, or User \"*/\n\t\t\tstdmessage = msg_resv_abort;\n\t\t\tbreak;\n\n\t\tcase MAIL_BEGIN:\n\t\t\t/*\"Reservation period starting\"*/\n\t\t\tstdmessage = msg_resv_start;\n\t\t\tbreak;\n\n\t\tcase MAIL_END:\n\t\t\t/*\"Reservation terminated\"*/\n\t\t\tstdmessage = msg_resv_end;\n\t\t\tbreak;\n\n\t\tcase MAIL_CONFIRM:\n\t\t\t/*scheduler requested, \"CONFIRM reservation\"*/\n\t\t\tstdmessage = msg_resv_confirm;\n\t\t\tbreak;\n\t}\n\n\tfprintf(outmail, \"PBS Reservation Id: %s\\n\", presv->ri_qs.ri_resvID);\n\tfprintf(outmail, \"Reservation Name:   %s\\n\", get_rattr_str(presv, RESV_ATR_resv_name));\n\tif (stdmessage)\n\t\tfprintf(outmail, \"%s\\n\", stdmessage);\n\tif (text != NULL)\n\t\tfprintf(outmail, \"%s\\n\", text);\n\tfclose(outmail);\n\n\texit(0);\n}\n"
  },
  {
    "path": "src/server/svr_movejob.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <errno.h>\n#include <string.h>\n#include <signal.h>\n#include <sys/types.h>\n\n#include <unistd.h>\n#include <fcntl.h>\n#include <time.h>\n#include <sys/time.h>\n#include <sys/param.h>\n#include <sys/wait.h>\n#include <netdb.h>\n#include <sys/socket.h>\n#include <netinet/in.h>\n#include <arpa/inet.h>\n\n#include \"libpbs.h\"\n#include \"pbs_error.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"server_limits.h\"\n#include \"work_task.h\"\n#include \"log.h\"\n#include \"pbs_db.h\"\n#include \"batch_request.h\"\n#include \"resv_node.h\"\n#include \"queue.h\"\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"credential.h\"\n#include \"ticket.h\"\n#include \"queue.h\"\n#include \"job.h\"\n#include \"net_connect.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include <libutil.h>\n#include \"tpp.h\"\n#include <memory.h>\n#include \"server.h\"\n#include \"hook.h\"\n#include \"pbs_sched.h\"\n#include \"acct.h\"\n\n#define RETRY 3 /* number of times to retry network move */\n\n/* External functions called */\n\nextern void stat_mom_job(job *);\nint local_move(job *, struct batch_request *);\n\n/* Private Functions local to this file */\n\nstatic void post_movejob(struct work_task *);\nstatic void 
post_routejob(struct work_task *);\nstatic int small_job_files(job *pjob);\nextern int should_retry_route(int err);\nextern int move_job_file(int con, job *pjob, enum job_file which, int prot, char **msgid);\nextern void post_sendmom(struct work_task *pwt);\n\n/* Global Data */\n\n#if !defined(H_ERRNO_DECLARED)\nextern int h_errno;\n#endif\nextern char *path_jobs;\nextern char *path_spool;\nextern attribute_def job_attr_def[];\nextern char *msg_badexit;\nextern char *msg_routebad;\nextern char *msg_routexceed;\nextern char *msg_manager;\nextern char *msg_movejob;\nextern char *msg_err_malloc;\nextern int comp_resc_gt, comp_resc_eq, comp_resc_lt;\nextern int pbs_errno;\nextern pbs_net_t pbs_server_addr;\nextern int resc_access_perm;\nextern time_t time_now;\nextern int svr_create_tmp_jobscript(job *pj, char *script_name);\nextern int scheduler_jobs_stat;\nextern char *path_hooks_workdir;\nextern struct work_task *add_mom_deferred_list(int stream, mominfo_t *minfo, void (*func)(struct work_task *), char *msgid, void *parm1, void *parm2);\n\n/**\n * @brief\n * \t\tsvr_movejob - Test if the destination is local or not and call a routine to\n * \t\tdo the appropriate move.\n *\n * @param[in,out]\tjobp\t-\tpointer to job to move\n * @param[in]\tdestination\t-\tdestination to be moved\n * @param[in]\treq\t-\tclient request from a qmove client, null if a route\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t-1\t: permanent failure or rejection\n * @retval\t1\t: failed but try again\n * @retval\t2\t: deferred (i.e. move in progress), check later\n */\nint\nsvr_movejob(job *jobp, char *destination, struct batch_request *req)\n{\n\tunsigned int port = pbs_server_port_dis;\n\tchar *toserver;\n\n\tif (strlen(destination) >= (size_t) PBS_MAXROUTEDEST) {\n\t\tsprintf(log_buffer, \"name %s over maximum length of %d\",\n\t\t\tdestination, PBS_MAXROUTEDEST);\n\t\tlog_err(-1, \"svr_movejob\", log_buffer);\n\t\tpbs_errno = PBSE_QUENBIG;\n\t\treturn 
-1;\n\t}\n\n\tstrncpy(jobp->ji_qs.ji_destin, destination, PBS_MAXROUTEDEST);\n\tjobp->ji_qs.ji_un_type = JOB_UNION_TYPE_ROUTE;\n\n\tif ((toserver = strchr(destination, '@')) != NULL) {\n\t\t/* check to see if the part after '@' is this server */\n\t\tint comp = -1;\n\t\tcomp = comp_svraddr(pbs_server_addr, parse_servername(++toserver, &port), NULL);\n\t\tif ((comp == 1) ||\n\t\t    (port != pbs_server_port_dis)) {\n\t\t\treturn (net_move(jobp, req)); /* not a local dest */\n\t\t} else if (comp == 2)\n\t\t\treturn -1;\n\t}\n\n\t/* if get to here, it is a local destination */\n\treturn (local_move(jobp, req));\n}\n\n/**\n * @brief\n * \t\tMove a job to another queue in this Server.\n *\n * @par\n * \t\tCheck the destination to see if it can accept the job.\n * \t\tIf the job can enter the new queue, dequeue from the existing queue and\n * \t\tenqueue into the new queue\n *\n * @par\n * \t\tNote - the destination is specified by the queue's name in the\n *\t\tji_qs.ji_destin element of the job structure.\n *\n * param[in]\tjobp\t-\tpointer to job to move\n * param[in]\treq\t-\tclient request from a qmove client, null if a route\n *\n * @return\tint\n * @retval  0\t: success\n * @retval -1\t: permanent failure or rejection, see pbs_errno\n * @retval  1\t: failed but try again later\n */\nint\nlocal_move(job *jobp, struct batch_request *req)\n{\n\tpbs_queue *qp;\n\tchar *destination = jobp->ji_qs.ji_destin;\n\tint mtype;\n\tlong newtype = -1;\n\tlong long time_usec;\n\tstruct timeval tval;\n\tconn_t *conn = NULL;\n\tchar *physhost = NULL;\n\n\t/* search for destination queue */\n\tif ((qp = find_queuebyname(destination)) == NULL) {\n\t\tsprintf(log_buffer,\n\t\t\t\"queue %s does not exist\",\n\t\t\tdestination);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\tpbs_errno = PBSE_UNKQUE;\n\t\treturn -1;\n\t}\n\n\t/*\n\t * if being moved at specific request of administrator, then\n\t * checks on queue availability, etc. 
are skipped;\n\t * otherwise all checks are enforced.\n\t */\n\n\tif (req == NULL) {\n\t\tmtype = MOVE_TYPE_Route; /* route */\n\t} else if (req->rq_perm & (ATR_DFLAG_MGRD | ATR_DFLAG_MGWR) && find_sched_from_sock(req->rq_conn, CONN_SCHED_PRIMARY) == NULL) {\n\t\tmtype = MOVE_TYPE_MgrMv; /* privileged move */\n\t} else {\n\t\tmtype = MOVE_TYPE_Move; /* non-privileged move */\n\t}\n\n\tif (req != NULL) {\n\t\tconn = get_conn(req->rq_conn);\n\t\tif (conn) {\n\t\t\tphyshost = conn->cn_physhost;\n\t\t}\n\t}\n\n\tpbs_errno = svr_chkque(jobp, qp, get_jattr_str(jobp, JOB_ATR_submit_host), physhost, mtype);\n\tif (pbs_errno) {\n\t\t/* should this queue be retried? */\n\t\treturn (should_retry_route(pbs_errno));\n\t}\n\n\t/* dequeue job from present queue, update destination and\t*/\n\t/* queue rank for new queue and enqueue into destination\t*/\n\n\tsvr_dequejob(jobp);\n\tjobp->ji_myResv = NULL;\n\tpbs_strncpy(jobp->ji_qs.ji_queue, qp->qu_qs.qu_name, PBS_MAXQUEUENAME + 1);\n\n\tgettimeofday(&tval, NULL);\n\ttime_usec = (tval.tv_sec * 1000000L) + tval.tv_usec;\n\n\tset_jattr_ll_slim(jobp, JOB_ATR_qrank, time_usec, SET);\n\n\tif (qp->qu_resvp) {\n\t\tset_jattr_generic(jobp, JOB_ATR_reserve_ID, qp->qu_resvp->ri_qs.ri_resvID, NULL, INTERNAL);\n\t\tjobp->ji_myResv = qp->qu_resvp;\n\t} else\n\t\tset_jattr_generic(jobp, JOB_ATR_reserve_ID, NULL, NULL, INTERNAL);\n\n\tif (get_sattr_long(SVR_ATR_EligibleTimeEnable) == 1) {\n\t\tnewtype = determine_accruetype(jobp);\n\t\tupdate_eligible_time(newtype, jobp);\n\t}\n\n\tif ((pbs_errno = svr_enquejob(jobp, NULL)) != 0)\n\t\treturn -1; /* should never ever get here */\n\taccount_jobstr(jobp, PBS_ACCT_QUEUE);\n\n\tjobp->ji_lastdest = 0; /* reset in case of another route */\n\n\tjob_save_db(jobp);\n\n\t/* If a scheduling cycle is in progress, then this moved job may have\n\t * had changes resulting from the move that would impact scheduling or\n\t * placement, add job to list of jobs which cannot be run in this cycle.\n\t */\n\tif ((req == 
NULL || (find_sched_from_sock(req->rq_conn, CONN_SCHED_PRIMARY) == NULL)) && (scheduler_jobs_stat))\n\t\tam_jobs_add(jobp);\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tpost_routejob - clean up action for child started in net_move/send_job\n *\t\t   to \"route\" a job to another server\n * @par\n * \t\tIf route was successful, delete job.\n * @par\n * \t\tIf route didn't work, mark destination not to be tried again for this\n * \t\tjob and call route again.\n *\n * @param[in]\tpwt\t-\twork task structure\n *\n * @return\tnone.\n */\nstatic void\npost_routejob(struct work_task *pwt)\n{\n\tchar newstate;\n\tint newsub;\n\tint r;\n\tint stat = pwt->wt_aux;\n\tjob *jobp = (job *) pwt->wt_parm2;\n\n\tif (jobp == NULL) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_INFO, \"\", \"post_routejob failed, jobp NULL\");\n\t\treturn;\n\t}\n\n\tif (WIFEXITED(stat)) {\n\t\tr = WEXITSTATUS(stat);\n\t} else {\n\t\tr = SEND_JOB_FATAL;\n\t\t(void) sprintf(log_buffer, msg_badexit, stat);\n\t\t(void) strcat(log_buffer, __func__);\n\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_JOB, LOG_NOTICE,\n\t\t\t  jobp->ji_qs.ji_jobid, log_buffer);\n\t}\n\n\tswitch (r) {\n\t\tcase SEND_JOB_OK: /* normal return, job was routed */\n\n\t\t\tif (jobp->ji_qs.ji_svrflags & JOB_SVFLG_StagedIn)\n\t\t\t\tremove_stagein(jobp);\n\t\t\t/*\n\t\t\t * If the server is configured to keep job history and the job\n\t\t\t * is created here, do not purge the job structure but save\n\t\t\t * it for history purpose. 
No need to check for sub-jobs as\n\t\t\t * sub jobs can not be routed.\n\t\t\t */\n\t\t\tif (svr_chk_history_conf())\n\t\t\t\tsvr_setjob_histinfo(jobp, T_MOV_JOB);\n\t\t\telse\n\t\t\t\tjob_purge(jobp); /* need to remove server job struct */\n\t\t\treturn;\n\t\tcase SEND_JOB_FATAL: /* permanent rejection (or signal) */\n\t\t\tif (check_job_substate(jobp, JOB_SUBSTATE_ABORT)) {\n\n\t\t\t\t/* Job Delete in progress, just set to queued status */\n\n\t\t\t\tsvr_setjobstate(jobp, JOB_STATE_LTR_QUEUED,\n\t\t\t\t\t\tJOB_SUBSTATE_ABORT);\n\t\t\t\treturn;\n\t\t\t}\n\t\t\tadd_dest(jobp); /* else mark destination as bad */\n\t\t\t\t\t/* fall through */\n\t\tdefault:\t\t/* try routing again */\n\t\t\t/* force re-eval of job state out of Transit */\n\t\t\tsvr_evaljobstate(jobp, &newstate, &newsub, 1);\n\t\t\tsvr_setjobstate(jobp, newstate, newsub);\n\t\t\tjobp->ji_retryok = 1;\n\t\t\tif ((r = job_route(jobp)) == PBSE_ROUTEREJ)\n\t\t\t\t(void) job_abt(jobp, msg_routebad);\n\t\t\telse if (r != 0)\n\t\t\t\t(void) job_abt(jobp, msg_routexceed);\n\t\t\tbreak;\n\t}\n\treturn;\n}\n\nextern pbs_list_head task_list_event;\nextern pbs_list_head task_list_immed;\n\n/**\n * @brief\n * \t\tcheck_move_status - check if the move was successful or not\n * @param[in]\tjobp\t-\tpointer to job being moved\n * @param[in]\tprot\t-\trequest protocol\n * @param[in]\twstat\t-\tchild status\n * @param[in]\trq_move\t-\tpointer to move job structure\n * @param[in]\trq_user\t-\trequesting user\n * @param[in]\trq_host\t-\trequesting host\n *\n * @return\tmove status\n * @retval  0 \tsuccess\n * @retval  !0 \tfailure\n */\nint\ncheck_move_status(job *jobp, int prot, int wstat, struct rq_move *rq_move, char *rq_user, char *rq_host)\n{\n\tint r = 0;\n\n\tif (prot == PROT_TCP) {\n\t\tif (WIFEXITED(wstat)) {\n\t\t\tr = WEXITSTATUS(wstat);\n\t\t\tif (r != SEND_JOB_OK) {\n\t\t\t\tr = PBSE_ROUTEREJ;\n\t\t\t}\n\t\t} else\n\t\t\tr = PBSE_SYSTEM;\n\n\t} else {\n\t\tswitch (wstat) {\n\t\t\tcase PBSE_NONE:\n\t\t\t\tr = 
SEND_JOB_OK;\n\t\t\t\tbreak;\n\t\t\tcase PBSE_SYSTEM:\n\t\t\t\tr = PBSE_SYSTEM;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tr = PBSE_ROUTEREJ;\n\t\t\t\tbreak;\n\t\t}\n\t}\n\n\tswitch (r) {\n\t\tcase SEND_JOB_OK:\n\t\t\tif (jobp && jobp->ji_qs.ji_svrflags & JOB_SVFLG_StagedIn)\n\t\t\t\tremove_stagein(jobp);\n\t\t\tstrcpy(log_buffer, msg_movejob);\n\t\t\tsprintf(log_buffer + strlen(log_buffer),\n\t\t\t\tmsg_manager,\n\t\t\t\trq_move->rq_destin,\n\t\t\t\trq_user, rq_host);\n\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_JOB, LOG_NOTICE,\n\t\t\t\t  rq_move->rq_jid, log_buffer);\n\t\t\tbreak;\n\t\tcase PBSE_SYSTEM:\n\t\t\tsprintf(log_buffer, msg_badexit, wstat);\n\t\t\tstrcat(log_buffer, __func__);\n\t\t\tlog_event(PBSEVENT_SYSTEM, PBS_EVENTCLASS_JOB, LOG_NOTICE,\n\t\t\t\t  rq_move->rq_jid, log_buffer);\n\t\t\tbreak;\n\t\tcase PBSE_ROUTEREJ:\n\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_NOTICE,\n\t\t\t\t  rq_move->rq_jid, msg_routebad);\n\t}\n\n\treturn r;\n}\n\n/**\n * @brief\n * \t\tpost_movejob - clean up action for child started in net_move/send_job\n *\t\t   to \"move\" a job to another server\n * @par\n * \t\tIf move was successful, delete server's copy of the job structure,\n * \t\tand reply to request.\n * @par\n * \t\tIf the move didn't work, reject the request.\n *\n * @param[in]\tpwt\t-\twork task structure\n *\n * @return\tnone.\n */\nstatic void\npost_movejob(struct work_task *pwt)\n{\n\tstruct batch_request *req;\n\tint r;\n\tjob *jobp;\n\tint prot = pwt->wt_aux2;\n\tint wstat = pwt->wt_aux;\n\tstruct rq_move *rq_move;\n\n\treq = (struct batch_request *) pwt->wt_parm1;\n\tpbs_errno = PBSE_NONE;\n\tif (req->rq_type != PBS_BATCH_MoveJob) {\n\t\tsprintf(log_buffer, \"bad request type %d\", req->rq_type);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\treturn;\n\t}\n\n\trq_move = &req->rq_ind.rq_move;\n\tjobp = find_job(rq_move->rq_jid);\n\n\tif ((jobp == NULL) || (jobp != (job *) pwt->wt_parm2))\n\t\tlog_errf(-1, __func__, \"job %s not found\", 
rq_move->rq_jid);\n\n\tr = check_move_status(jobp, prot, wstat, rq_move, req->rq_user, req->rq_host);\n\n\tif (r == SEND_JOB_OK) {\n\t\t/*\n\t\t* If server is configured to keep job history info and\n\t\t* the job is created here, then keep the job structure\n\t\t* for history purposes without purging. No need to check\n\t\t* for sub-jobs as sub jobs can't be moved.\n\t\t*/\n\t\tif (jobp) {\n\t\t\tif (svr_chk_history_conf())\n\t\t\t\tsvr_setjob_histinfo(jobp, T_MOV_JOB);\n\t\t\telse\n\t\t\t\tjob_purge(jobp);\n\t\t}\n\t\treply_ack(req);\n\t} else {\n\t\tif (jobp)\n\t\t\tsvr_evalsetjobstate(jobp);\n\t\treq_reject(r, 0, req);\n\t}\n\n\treturn;\n}\n\n/**\n *\n * @brief\n * \tSend execution job on connected tpp stream.\n *\tNote: Job structure has been loaded with the script by now (ji_script populated)\n *\n * @param[in]\tjobp - pointer to the job being sent\n * @param[in]\thostaddr - the address of host to send job to, host byte order\n * @param[in]\tport - the destination port, host byte order\n * @param[in]\tmove_type - the type of move (e.g. 
MOVE_TYPE_exec)\n * @param[in]\trequest - The batch request associated with this send job call\n *\n * @return int\n * @retval  2 \tsuccess\n * @retval  -1 \tfailure (pbs_errno set to error number)\n *\n */\nint\nsend_job_exec(job *jobp, pbs_net_t hostaddr, int port, int move_type, struct batch_request *request)\n{\n\tpbs_list_head attrl;\n\tmominfo_t *pmom = NULL;\n\tint stream = -1;\n\tint encode_type;\n\tchar *destin = jobp->ji_qs.ji_destin;\n\tint i;\n\tsize_t credlen = 0;\n\tchar *credbuf = NULL;\n\tchar job_id[PBS_MAXSVRJOBID + 1];\n\tstruct attropl *pqjatr; /* list (single) of attropl for quejob */\n\tint rc;\n\tchar *jobid = NULL;\n\tchar *msgid = NULL;\n\tchar *dup_msgid = NULL;\n\tstruct work_task *ptask = NULL;\n\tint save_resc_access_perm;\n\tchar *extend = NULL;\n\tvoid (*post_func)(struct work_task *) = post_sendmom;\n\tchar *extend_commit = NULL;\n\n\t/* saving resc_access_perm global variable as backup */\n\tsave_resc_access_perm = resc_access_perm;\n\tpbs_errno = PBSE_NONE;\n\n\tstream = svr_connect(hostaddr, port, NULL, ToServerDIS, PROT_TPP);\n\tif (stream < 0) {\n\t\tsprintf(log_buffer, \"Could not connect to Mom, svr_connect returned %d\", stream);\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_REQUEST, LOG_WARNING, \"\", log_buffer);\n\t\tgoto send_err;\n\t}\n\n\tpmom = tfind2(hostaddr, port, &ipaddrs);\n\tif (!pmom || (pmom->mi_dmn_info->dmn_state & INUSE_DOWN)) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_REQUEST, LOG_WARNING, \"\", \"Mom is down\");\n\t\tpbs_errno = PBSE_NORELYMOM;\n\t\tgoto send_err;\n\t}\n\n\tCLEAR_HEAD(attrl);\n\n\tresc_access_perm = ATR_DFLAG_MOM;\n\tencode_type = ATR_ENCODE_MOM;\n\n\tfor (i = 0; i < (int) JOB_ATR_LAST; i++) {\n\t\tif ((job_attr_def + i)->at_flags & resc_access_perm) {\n\t\t\t(void) (job_attr_def + i)->at_encode(get_jattr(jobp, i), &attrl, (job_attr_def + i)->at_name, NULL, encode_type, NULL);\n\t\t}\n\t}\n\tattrl_fixlink(&attrl);\n\t/* save the job id for use after we purge the job */\n\n\t/* read 
any credential file */\n\tget_credential(pmom->mi_host, jobp, PBS_GC_BATREQ, &credbuf, &credlen);\n\n\tstrcpy(job_id, jobp->ji_qs.ji_jobid);\n\n\tif (((jobp->ji_qs.ji_svrflags & JOB_SVFLG_SCRIPT) == 0) && (credlen <= 0) && ((jobp->ji_qs.ji_svrflags & JOB_SVFLG_HASRUN) == 0))\n\t\textend = EXTEND_OPT_IMPLICIT_COMMIT;\n\n\tpqjatr = &((svrattrl *) GET_NEXT(attrl))->al_atopl;\n\tjobid = PBSD_queuejob(stream, jobp->ji_qs.ji_jobid, destin, pqjatr, extend, PROT_TPP, &msgid, NULL);\n\tfree_attrlist(&attrl);\n\tif (jobid == NULL)\n\t\tgoto send_err;\n\n\ttpp_add_close_func(stream, process_DreplyTPP); /* register a close handler */\n\n\t/* adding msgid to deferred list, don't free msgid */\n\tif ((ptask = add_mom_deferred_list(stream, pmom, post_func, msgid, request, jobp)) == NULL) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_REQUEST, LOG_WARNING, \"\", \"add_mom_deferred_list returned NULL\");\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\tgoto send_err;\n\t}\n\n\t/* add to pjob->svrtask list so it's automatically cleared when job is purged */\n\tappend_link(&jobp->ji_svrtask, &ptask->wt_linkobj, ptask);\n\n\t/*\n\t * svr-mom communication is asynchronous, so PBSD_queuejob does not return in this flow\n\t * hence commit_done is meaningless here\n\t * We only need to check extend and skip sending the other messages\n\t */\n\tif (extend)\n\t\tgoto done;\n\n\t/* we cannot use the same msgid, since it is not part of the preq,\n\t * make a dup of it, and we can freely free it\n\t */\n\tif ((dup_msgid = strdup(msgid)) == NULL) {\n\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_REQUEST, LOG_WARNING, \"\", \"strdup returned NULL\");\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\tgoto send_err;\n\t}\n\n\t/*\n\t * henceforth use the same msgid, since we mean to say all this is\n\t * part of a single logical request to the mom\n\t * and we will be hanging off one request to be answered to finally\n\t */\n\tif (jobp->ji_qs.ji_svrflags & JOB_SVFLG_SCRIPT) {\n\t\tif (PBSD_jscript_direct(stream, jobp->ji_script, 
PROT_TPP, &dup_msgid) != 0)\n\t\t\tgoto send_err;\n\t}\n\tif (jobp->ji_script) {\n\t\tfree(jobp->ji_script);\n\t\tjobp->ji_script = NULL;\n\t}\n\n\tif (credlen > 0) {\n\t\trc = PBSD_jcred(stream, jobp->ji_extended.ji_ext.ji_credtype, credbuf, credlen, PROT_TPP, &dup_msgid);\n\t\tif (credbuf)\n\t\t\tfree(credbuf);\n\t\tif (rc != 0)\n\t\t\tgoto send_err;\n\t}\n\n\tif ((jobp->ji_qs.ji_svrflags & JOB_SVFLG_HASRUN) && (hostaddr != pbs_server_addr)) {\n\t\tif ((move_job_file(stream, jobp, StdOut, PROT_TPP, &dup_msgid) != 0) ||\n\t\t    (move_job_file(stream, jobp, StdErr, PROT_TPP, &dup_msgid) != 0) ||\n\t\t    (move_job_file(stream, jobp, Chkpt, PROT_TPP, &dup_msgid) != 0))\n\t\t\tgoto send_err;\n\t}\n\n\tif (PBSD_commit(stream, job_id, PROT_TPP, &dup_msgid, extend_commit) != 0)\n\t\tgoto send_err;\n\ndone:\n\tfree(dup_msgid);\t\t\t  /* free this as it is not part of any work task */\n\tresc_access_perm = save_resc_access_perm; /* reset back to its old value */\n\treturn 2;\n\nsend_err:\n\tfree(dup_msgid);\n\n\tif (jobp->ji_script) {\n\t\tfree(jobp->ji_script);\n\t\tjobp->ji_script = NULL;\n\t}\n\n\tif (ptask) {\n\t\tif (ptask->wt_event2)\n\t\t\tfree(ptask->wt_event2);\n\t\tdelete_task(ptask);\n\t}\n\n\tsprintf(log_buffer, \"send of job to %s failed error = %d\", destin, pbs_errno);\n\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, jobp->ji_qs.ji_jobid, log_buffer);\n\tresc_access_perm = save_resc_access_perm; /* reset back to its old value */\n\treturn (-1);\n}\n\n/**\n *\n * @brief\n * \t\tSend a job over the network to some other server or MOM.\n * @par\n * \t\tUnder Linux/Unix, this starts a child process to do the work.\n *\t\tConnect to the destination host and port,\n * \t\tand go through the protocol to transfer the job.\n * \t\tSignals are blocked.\n *\n * @param[in]\tjobp\t-\tpointer to the job being sent.\n * @param[in]\thostaddr\t-\tthe address of host to send job to, host byte order.\n * @param[in]\tport\t-\tthe destination port, host byte order\n 
* @param[in]\tmove_type\t-\tthe type of move (e.g. MOVE_TYPE_exec)\n * @param[in]\tpost_func\t-\tthe function to execute once the child process\n *\t\t\t\t\t\t\t\tsending job completes (Linux/Unix only)\n * @param[in]\tpreq\t-\tinput data to 'post_func'\n *\n * @return\tint\n * @retval\t2\tparent\t: success (child forked)\n * @retval\t-1\tparent\t: on failure (pbs_errno set to error number)\n * @retval\tSEND_JOB_OK\tchild\t: 0 success, job sent\n * @retval\tSEND_JOB_FATAL\tchild\t: 1 permanent failure or rejection,\n * @retval\tSEND_JOB_RETRY\tchild\t: 2 failed but try again\n * @retval\tSEND_JOB_NODEDW child\t: 3 execution node down, retry different node\n */\nint\nsend_job(job *jobp, pbs_net_t hostaddr, int port, int move_type,\n\t void (*post_func)(struct work_task *), struct batch_request *preq)\n{\n\tpbs_list_head attrl;\n\tenum conn_type cntype = ToServerDIS;\n\tint con;\n\tchar *credbuf = NULL;\n\tsize_t credlen = 0;\n\tchar *destin = jobp->ji_qs.ji_destin;\n\tint encode_type;\n\tint i;\n\tchar job_id[PBS_MAXSVRJOBID + 1];\n\tpid_t pid;\n\tstruct attropl *pqjatr; /* list (single) of attropl for quejob */\n\tchar script_name[MAXPATHLEN + 1];\n\tstruct work_task *ptask;\n\tstruct hostent *hp;\n\tstruct in_addr addr;\n\tlong tempval;\n\n\t/* if job has a script read it from database */\n\tif (jobp->ji_qs.ji_svrflags & JOB_SVFLG_SCRIPT) {\n\t\tif (svr_load_jobscript(jobp) == NULL) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"Failed to load job script for job %s\",\n\t\t\t\t jobp->ji_qs.ji_jobid);\n\t\t\tlog_err(pbs_errno, __func__, log_buffer);\n\t\t\treturn (-1);\n\t\t}\n\t}\n\n\tif (move_type == MOVE_TYPE_Exec && small_job_files(jobp))\n\t\treturn send_job_exec(jobp, hostaddr, port, move_type, preq);\n\n\tsnprintf(log_buffer, sizeof(log_buffer), \"big job files, sending via subprocess\");\n\tlog_event(PBSEVENT_DEBUG3, PBS_EVENTCLASS_JOB, LOG_INFO, jobp->ji_qs.ji_jobid, log_buffer);\n\n\tscript_name[0] = 
'\\0';\n\t/* if job has a script read it from database */\n\tif (jobp->ji_qs.ji_svrflags & JOB_SVFLG_SCRIPT) {\n\t\t/* write the job script to a temporary file */\n\t\tif (svr_create_tmp_jobscript(jobp, script_name) != 0) {\n\t\t\tpbs_errno = PBSE_SYSTEM;\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"Failed to create temporary job script for job %s\",\n\t\t\t\t jobp->ji_qs.ji_jobid);\n\t\t\tlog_err(pbs_errno, __func__, log_buffer);\n\n\t\t\tif (jobp->ji_script) {\n\t\t\t\tfree(jobp->ji_script);\n\t\t\t\tjobp->ji_script = NULL;\n\t\t\t}\n\n\t\t\treturn -1;\n\t\t}\n\t}\n\n\tif (jobp->ji_script) {\n\t\tfree(jobp->ji_script);\n\t\tjobp->ji_script = NULL;\n\t}\n\n\tpid = fork();\n\tif (pid == -1) { /* Error on fork */\n\t\tlog_err(errno, __func__, \"fork failed\\n\");\n\t\tpbs_errno = PBSE_SYSTEM;\n\t\treturn -1;\n\t}\n\n\tif (pid != 0) { /* The parent (main server) */\n\n\t\tptask = set_task(WORK_Deferred_Child, pid, post_func, preq);\n\t\tif (!ptask) {\n\t\t\tlog_err(errno, __func__, msg_err_malloc);\n\t\t\treturn (-1);\n\t\t} else {\n\t\t\tptask->wt_parm2 = jobp;\n\t\t\tappend_link(&((job *) jobp)->ji_svrtask,\n\t\t\t\t    &ptask->wt_linkobj, ptask);\n\t\t}\n\t\treturn 2;\n\t}\n\n\t/*\n\t * the child process\n\t *\n\t * set up signal catcher for error return\n\t */\n\tDBPRT((\"%s: child started, sending to port %d\\n\", __func__, port))\n\ttpp_terminate();\n\n\t/* Unprotect child from being killed by kernel */\n\tdaemon_protect(0, PBS_DAEMON_PROTECT_OFF);\n\taddr.s_addr = htonl(hostaddr);\n\thp = gethostbyaddr((void *) &addr, sizeof(struct in_addr), AF_INET);\n\tif (hp == NULL) {\n\t\tsprintf(log_buffer, \"%s: h_errno=%d\",\n\t\t\tinet_ntoa(addr), h_errno);\n\t\tlog_err(-1, __func__, log_buffer);\n\t} else {\n\t\t/* read any credential file */\n\t\t(void) get_credential(hp->h_name, jobp, PBS_GC_BATREQ,\n\t\t\t\t      &credbuf, &credlen);\n\t}\n\t/* encode job attributes to be moved */\n\n\tCLEAR_HEAD(attrl);\n\n\t/* select attributes/resources to send 
based on move type */\n\n\tif (move_type == MOVE_TYPE_Exec) {\n\t\tresc_access_perm = ATR_DFLAG_MOM;\n\t\tencode_type = ATR_ENCODE_MOM;\n\t\tcntype = ToServerDIS;\n\t} else {\n\t\tresc_access_perm = ATR_DFLAG_USWR | ATR_DFLAG_OPWR |\n\t\t\t\t   ATR_DFLAG_MGWR | ATR_DFLAG_SvRD;\n\t\tencode_type = ATR_ENCODE_SVR;\n\t\tsvr_dequejob(jobp); /* clears default resource settings */\n\t}\n\n\t/* our job is to calc eligible time accurately and save it */\n\t/* on new server, accrue type should be calc afresh */\n\t/* Note: if job is being sent for execution on mom, then don't calc eligible time */\n\n\tif ((get_jattr_long(jobp, JOB_ATR_accrue_type) == JOB_ELIGIBLE) &&\n\t    (get_sattr_long(SVR_ATR_EligibleTimeEnable) == 1) &&\n\t    (move_type != MOVE_TYPE_Exec)) {\n\t\ttempval = ((long) time_now - get_jattr_long(jobp, JOB_ATR_sample_starttime));\n\t\tset_jattr_l_slim(jobp, JOB_ATR_eligible_time, tempval, INCR);\n\t}\n\n\tfor (i = 0; i < (int) JOB_ATR_LAST; i++) {\n\t\tif ((job_attr_def + i)->at_flags & resc_access_perm) {\n\t\t\t(void) (job_attr_def + i)->at_encode(get_jattr(jobp, i), &attrl, (job_attr_def + i)->at_name, NULL, encode_type, NULL);\n\t\t}\n\t}\n\tattrl_fixlink(&attrl);\n\n\t/* save the job id for use after we purge the job */\n\n\t(void) strcpy(job_id, jobp->ji_qs.ji_jobid);\n\n\tpbs_errno = 0;\n\tcon = -1;\n\n\tfor (i = 0; i < RETRY; i++) {\n\n\t\t/* connect to receiving server with retries */\n\n\t\tif (i > 0) { /* recycle after an error */\n\t\t\tif (con >= 0)\n\t\t\t\tsvr_disconnect(con);\n\t\t\tif (should_retry_route(pbs_errno) == -1) {\n\t\t\t\t/* delete the temp script file */\n\t\t\t\tunlink(script_name);\n\t\t\t\texit(SEND_JOB_FATAL); /* fatal error, don't retry */\n\t\t\t}\n\t\t\tsleep(1 << i);\n\t\t}\n\t\tif ((con = svr_connect(hostaddr, port, 0, cntype, PROT_TCP)) == PBS_NET_RC_FATAL) {\n\t\t\tlog_errf(pbs_errno, __func__, \"send_job failed to %lx port %d\",\n\t\t\t\t hostaddr, port);\n\n\t\t\t/* delete the temp script file 
*/\n\t\t\tunlink(script_name);\n\n\t\t\tif ((move_type == MOVE_TYPE_Exec) && (pbs_errno == PBSE_BADCRED))\n\t\t\t\texit(SEND_JOB_NODEDW);\n\n\t\t\texit(SEND_JOB_FATAL);\n\t\t} else if (con == PBS_NET_RC_RETRY) {\n\t\t\tpbs_errno = ECONNREFUSED; /* should retry */\n\t\t\tcontinue;\n\t\t}\n\n\t\t/*\n\t\t * if the job is in substate JOB_SUBSTATE_TRNOUTCM, which means\n\t\t * we are recovering after being down or a late failure, we\n\t\t * just want to send the commit\n\t\t */\n\n\t\tif (!check_job_substate(jobp, JOB_SUBSTATE_TRNOUTCM)) {\n\n\t\t\tif (!check_job_substate(jobp, JOB_SUBSTATE_TRNOUT))\n\t\t\t\tset_job_substate(jobp, JOB_SUBSTATE_TRNOUT);\n\n\t\t\tpqjatr = &((svrattrl *) GET_NEXT(attrl))->al_atopl;\n\t\t\tif (PBSD_queuejob(con, jobp->ji_qs.ji_jobid, destin, pqjatr, NULL, PROT_TCP, NULL, NULL) == 0) {\n\t\t\t\tif (pbs_errno == PBSE_JOBEXIST && move_type == MOVE_TYPE_Exec) {\n\t\t\t\t\t/* already running, mark it so */\n\t\t\t\t\tlog_event(PBSEVENT_ERROR, PBS_EVENTCLASS_JOB, LOG_INFO, jobp->ji_qs.ji_jobid, \"Mom reports job already running\");\n\t\t\t\t\texit(SEND_JOB_OK);\n\t\t\t\t} else if ((pbs_errno == PBSE_HOOKERROR) || (pbs_errno == PBSE_HOOK_REJECT) ||\n\t\t\t\t\t   (pbs_errno == PBSE_HOOK_REJECT_RERUNJOB) || (pbs_errno == PBSE_HOOK_REJECT_DELETEJOB)) {\n\t\t\t\t\tchar name_buf[MAXPATHLEN + 1];\n\t\t\t\t\tint rfd;\n\t\t\t\t\tint len;\n\t\t\t\t\tchar *reject_msg;\n\t\t\t\t\tint err;\n\n\t\t\t\t\terr = pbs_errno;\n\n\t\t\t\t\treject_msg = pbs_geterrmsg(con);\n\t\t\t\t\t(void) sprintf(log_buffer, \"send of job to %s failed error = %d reject_msg=%s\", destin, err, reject_msg ? 
reject_msg : \"\");\n\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, jobp->ji_qs.ji_jobid, log_buffer);\n\n\t\t\t\t\t(void) strcpy(name_buf, path_hooks_workdir);\n\t\t\t\t\t(void) strcat(name_buf, jobp->ji_qs.ji_jobid);\n\t\t\t\t\t(void) strcat(name_buf, HOOK_REJECT_SUFFIX);\n\n\t\t\t\t\tif ((reject_msg != NULL) && (reject_msg[0] != '\\0')) {\n\t\t\t\t\t\tif ((rfd = open(name_buf, O_RDWR | O_CREAT | O_TRUNC, 0600)) == -1) {\n\t\t\t\t\t\t\tsprintf(log_buffer, \"open of reject file %s failed: errno %d\", name_buf, errno);\n\t\t\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, jobp->ji_qs.ji_jobid, log_buffer);\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tlen = strlen(reject_msg) + 1;\n\t\t\t\t\t\t\t/* write also trailing null char */\n\t\t\t\t\t\t\tif (write(rfd, reject_msg, len) != len) {\n\t\t\t\t\t\t\t\tsprintf(log_buffer, \"write to file %s incomplete: errno %d\", name_buf, errno);\n\t\t\t\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, jobp->ji_qs.ji_jobid, log_buffer);\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tclose(rfd);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tif (err == PBSE_HOOKERROR)\n\t\t\t\t\t\texit(SEND_JOB_HOOKERR);\n\t\t\t\t\tif (err == PBSE_HOOK_REJECT)\n\t\t\t\t\t\texit(SEND_JOB_HOOK_REJECT);\n\t\t\t\t\tif (err == PBSE_HOOK_REJECT_RERUNJOB)\n\t\t\t\t\t\texit(SEND_JOB_HOOK_REJECT_RERUNJOB);\n\t\t\t\t\tif (err == PBSE_HOOK_REJECT_DELETEJOB)\n\t\t\t\t\t\texit(SEND_JOB_HOOK_REJECT_DELETEJOB);\n\t\t\t\t} else {\n\t\t\t\t\t(void) sprintf(log_buffer, \"send of job to %s failed error = %d\", destin, pbs_errno);\n\t\t\t\t\tlog_event(PBSEVENT_JOB, PBS_EVENTCLASS_JOB, LOG_INFO, jobp->ji_qs.ji_jobid, log_buffer);\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (jobp->ji_qs.ji_svrflags & JOB_SVFLG_SCRIPT) {\n\t\t\t\tif (PBSD_jscript(con, script_name, PROT_TCP, NULL) != 0)\n\t\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tif ((move_type == MOVE_TYPE_Exec) &&\n\t\t\t    (jobp->ji_qs.ji_svrflags & JOB_SVFLG_HASRUN) &&\n\t\t\t    (hostaddr != 
pbs_server_addr)) {\n\t\t\t\t/* send files created on prior run */\n\t\t\t\tif ((move_job_file(con, jobp, StdOut, PROT_TCP, NULL) != 0) ||\n\t\t\t\t    (move_job_file(con, jobp, StdErr, PROT_TCP, NULL) != 0) ||\n\t\t\t\t    (move_job_file(con, jobp, Chkpt, PROT_TCP, NULL) != 0))\n\t\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tset_job_substate(jobp, JOB_SUBSTATE_TRNOUTCM);\n\t\t}\n\n\t\tif (PBSD_commit(con, job_id, PROT_TCP, NULL, NULL) != 0) {\n\t\t\t/* delete the temp script file */\n\t\t\tunlink(script_name);\n\t\t\texit(SEND_JOB_FATAL);\n\t\t}\n\n\t\tsvr_disconnect(con);\n\n\t\t/* delete the temp script file */\n\t\tunlink(script_name);\n\n\t\texit(SEND_JOB_OK); /* This child process is all done */\n\t}\n\tif (con >= 0)\n\t\tsvr_disconnect(con);\n\t/*\n\t * If connection is actively refused by the execution node(or mother superior) OR\n\t * the execution node(or mother superior) is rejecting request with error\n\t * PBSE_BADHOST(failing to authorize server host), the node should be marked down.\n\t */\n\tif ((move_type == MOVE_TYPE_Exec) && (pbs_errno == ECONNREFUSED || pbs_errno == PBSE_BADHOST)) {\n\t\ti = SEND_JOB_NODEDW;\n\t} else if (should_retry_route(pbs_errno) == -1) {\n\t\ti = SEND_JOB_FATAL;\n\t} else {\n\t\ti = SEND_JOB_RETRY;\n\t}\n\t(void) sprintf(log_buffer, \"send_job failed with error %d\", pbs_errno);\n\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_JOB, LOG_NOTICE, jobp->ji_qs.ji_jobid, log_buffer);\n\n\t/* delete the temp script file */\n\tunlink(script_name);\n\n\texit(i);\n\treturn -1; /* NOT REACHED */\n}\n\n/**\n * @brief\n * \t\tnet_move - move a job over the network to another queue.\n * @par\n * \t\tGet the address of the destination server and call send_job()\n *\n * @return\tint\n * @retval\t2\t: success (child started, see send_job())\n * @retval\t-1\t: error\n */\n\nint\nnet_move(job *jobp, struct batch_request *req)\n{\n\tvoid *data;\n\tchar *destination = jobp->ji_qs.ji_destin;\n\tpbs_net_t hostaddr;\n\tchar *hostname;\n\tint 
move_type;\n\tunsigned int port = pbs_server_port_dis;\n\tvoid (*post_func)(struct work_task *);\n\tchar *toserver;\n\n\t/* Determine to whom we are sending the job */\n\n\tif ((toserver = strchr(destination, '@')) == NULL) {\n\t\tsprintf(log_buffer,\n\t\t\t\"no server specified in %s\", destination);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\treturn (-1);\n\t}\n\n\ttoserver++; /* point to server name */\n\thostname = parse_servername(toserver, &port);\n\thostaddr = get_hostaddr(hostname);\n\n\tif (req) {\n\t\t/* note, in this case, req is the original Move Request */\n\t\tmove_type = MOVE_TYPE_Move;\n\t\tpost_func = post_movejob;\n\t\tdata = req;\n\t} else {\n\t\t/* note, in this case req is NULL */\n\t\tmove_type = MOVE_TYPE_Route;\n\t\tpost_func = post_routejob;\n\t\tdata = 0;\n\t}\n\n\tsvr_setjobstate(jobp, JOB_STATE_LTR_TRANSIT, JOB_SUBSTATE_TRNOUT);\n\treturn (send_job(jobp, hostaddr, port, move_type, post_func, data));\n}\n\n/**\n * @brief\n * \t\tshould_retry_route - should the route be retried based on the error return\n * @par\n *\t\tCertain errors are temporary, and that destination should not be\n *\t\tconsidered bad.\n *\n * @param[in]\terr\t-\terror return\n *\n * @return\tint\n * @retval\t1\t: it should retry this destination\n * @retval\t-1\t: if destination should not be retried\n */\n\nint\nshould_retry_route(int err)\n{\n\tswitch (err) {\n\t\tcase 0:\n\t\tcase EADDRINUSE:\n\t\tcase EADDRNOTAVAIL:\n\t\tcase ECONNREFUSED:\n\t\tcase PBSE_JOBEXIST:\n\t\tcase PBSE_SYSTEM:\n\t\tcase PBSE_INTERNAL:\n\t\tcase PBSE_EXPIRED:\n\t\tcase PBSE_MAXQUED:\n\t\tcase PBSE_QUNOENB:\n\t\tcase PBSE_NOCONNECTS:\n\t\tcase PBSE_ENTLIMCT:\n\t\tcase PBSE_ENTLIMRESC:\n\t\t\treturn (1);\n\n\t\tdefault:\n\t\t\treturn (-1);\n\t}\n}\n/**\n * @brief\n * \t\tmove_job_file - send files created on prior run\n *\n * @param[in]\tconn\t-\tconnection handle\n * @param[in]\tpjob\t-\tpointer to job structure\n * @param[in]\twhich\t-\tstandard file type, see libpbs.h\n * 
@param[in]\tprot\t-\tPROT_TPP or PROT_TCP\n * @param[out]\tmsgid\t-\tmessage id\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t!=0\t: error code\n */\nint\nmove_job_file(int conn, job *pjob, enum job_file which, int prot, char **msgid)\n{\n\tchar path[MAXPATHLEN + 1];\n\n\t(void) strcpy(path, path_spool);\n\tif (*pjob->ji_qs.ji_fileprefix != '\\0')\n\t\t(void) strcat(path, pjob->ji_qs.ji_fileprefix);\n\telse\n\t\t(void) strcat(path, pjob->ji_qs.ji_jobid);\n\tif (which == StdOut)\n\t\t(void) strcat(path, JOB_STDOUT_SUFFIX);\n\telse if (which == StdErr)\n\t\t(void) strcat(path, JOB_STDERR_SUFFIX);\n\telse if (which == Chkpt)\n\t\t(void) strcat(path, JOB_CKPT_SUFFIX);\n\n\tif (access(path, F_OK) < 0) {\n\t\tif (errno == ENOENT)\n\t\t\treturn (0);\n\t\telse\n\t\t\treturn (errno);\n\t}\n\treturn PBSD_jobfile(conn, PBS_BATCH_MvJobFile, path, pjob->ji_qs.ji_jobid, which, prot, msgid);\n}\n\n/**\n * @brief\n * \t\tcnvrt_local_move - internally move a job to another queue\n * @par\n * \t\tCheck the destination to see if it can accept the job.\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t-1\t: permanent failure or rejection, see pbs_errno\n * @retval\t1\t: failed but try again\n */\nint\ncnvrt_local_move(job *jobp, struct batch_request *req)\n{\n\treturn (local_move(jobp, req));\n}\n\n/**\n * @brief\n * \t\tcheck size of job files\n * @par\n * \t\tChecks the size of the job-script/output/error/checkpoint files for a job.\n * \t\tIf the job is not being rerun, simply returns 1.\n *\n * @return\tint\n * @retval\t0\t: at least one file is larger than 2MB.\n * @retval\t1\t: all job files are smaller than 2MB.\n */\nstatic int\nsmall_job_files(job *pjob)\n{\n\tint max_bytes_over_tpp = 2 * 1024 * 1024;\n\tchar path[MAXPATHLEN + 1] = {0};\n\tstruct stat sb;\n\tint have_file_prefix = 0;\n\n\tif (pjob->ji_script && (strlen(pjob->ji_script) > max_bytes_over_tpp))\n\t\treturn 0;\n\n\t/*\n\t * If the job is not being rerun, we need not check\n\t * the size of 
the spool files.\n\t */\n\tif (!(pjob->ji_qs.ji_svrflags & JOB_SVFLG_HASRUN))\n\t\treturn 1;\n\n\tif (*pjob->ji_qs.ji_fileprefix != '\\0')\n\t\thave_file_prefix = 1;\n\n\tif (have_file_prefix)\n\t\tsnprintf(path, MAXPATHLEN, \"%s%s%s\", path_spool, pjob->ji_qs.ji_fileprefix, JOB_STDOUT_SUFFIX);\n\telse\n\t\tsnprintf(path, MAXPATHLEN, \"%s%s%s\", path_spool, pjob->ji_qs.ji_jobid, JOB_STDOUT_SUFFIX);\n\tif ((access(path, F_OK) == 0) && !stat(path, &sb))\n\t\tif (sb.st_size > max_bytes_over_tpp)\n\t\t\treturn 0;\n\n\tmemset(path, 0, sizeof(path));\n\tif (have_file_prefix)\n\t\tsnprintf(path, MAXPATHLEN, \"%s%s%s\", path_spool, pjob->ji_qs.ji_fileprefix, JOB_STDERR_SUFFIX);\n\telse\n\t\tsnprintf(path, MAXPATHLEN, \"%s%s%s\", path_spool, pjob->ji_qs.ji_jobid, JOB_STDERR_SUFFIX);\n\tif ((access(path, F_OK) == 0) && !stat(path, &sb))\n\t\tif (sb.st_size > max_bytes_over_tpp)\n\t\t\treturn 0;\n\n\tmemset(path, 0, sizeof(path));\n\tif (have_file_prefix)\n\t\tsnprintf(path, MAXPATHLEN, \"%s%s%s\", path_spool, pjob->ji_qs.ji_fileprefix, JOB_CKPT_SUFFIX);\n\telse\n\t\tsnprintf(path, MAXPATHLEN, \"%s%s%s\", path_spool, pjob->ji_qs.ji_jobid, JOB_CKPT_SUFFIX);\n\tif ((access(path, F_OK) == 0) && !stat(path, &sb))\n\t\tif (sb.st_size > max_bytes_over_tpp)\n\t\t\treturn 0;\n\n\treturn 1;\n}\n"
  },
  {
    "path": "src/server/svr_recov_db.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    svr_recov_db.c\n *\n * @brief\n * \t\tsvr_recov_db.c - contains functions to save server state and recover\n *\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <unistd.h>\n#include <stdlib.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <string.h>\n#include <sys/types.h>\n#include <sys/param.h>\n#include <sys/stat.h>\n#include <sys/time.h>\n#include \"pbs_ifl.h\"\n#include \"server_limits.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"queue.h\"\n#include \"server.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n#include \"log.h\"\n#include \"pbs_db.h\"\n#include \"pbs_sched.h\"\n#include \"pbs_share.h\"\n\n/* Global Data Items: */\n\nextern struct server server;\nextern pbs_list_head svr_queues;\nextern attribute_def svr_attr_def[];\nextern char *path_priv;\nextern time_t time_now;\nextern char *msg_svdbopen;\nextern char *msg_svdbnosv;\nextern char *path_svrlive;\nextern void *svr_db_conn;\nextern void sched_free(pbs_sched *psched);\n\nextern pbs_sched *sched_alloc(char *sched_name);\n\n/**\n * @brief\n *\t\tUpdate the $PBS_HOME/server_priv/svrlive file timestamp\n *\n * @return\tError code\n * @retval\t0\t: Success\n * @retval\t-1\t: Failed to update timestamp\n *\n 
*/\nint\nupdate_svrlive()\n{\n\tstatic int fdlive = -1;\n\tif (fdlive == -1) {\n\t\t/* first time open the file */\n\t\tfdlive = open(path_svrlive, O_WRONLY | O_CREAT, 0600);\n\t\tif (fdlive < 0)\n\t\t\treturn -1;\n\t}\n\t(void) utimes(path_svrlive, NULL);\n\treturn 0;\n}\n\n/**\n * @brief\n *\tconvert server structure to DB format\n *\n * @param[in]\tps\t-\tAddress of the server in pbs server\n * @param[out]\tpdbsvr\t-\tAddress of the database server object\n *\n * @retval   -1  Failure\n * @retval\t>=0 What to save: 0=nothing, OBJ_SAVE_NEW or OBJ_SAVE_QS\n *\n */\nstatic int\nsvr_to_db(struct server *ps, pbs_db_svr_info_t *pdbsvr)\n{\n\tint savetype = 0;\n\n\tpdbsvr->sv_jobidnumber = ps->sv_qs.sv_lastid;\n\n\tif ((encode_attr_db(svr_attr_def, ps->sv_attr, (int) SVR_ATR_LAST, &pdbsvr->db_attr_list, 1)) != 0) /* encode all attributes */\n\t\treturn -1;\n\n\tif (ps->newobj) /* object was never saved or loaded before */\n\t\tsavetype |= (OBJ_SAVE_NEW | OBJ_SAVE_QS);\n\n\treturn savetype;\n}\n\n/**\n * @brief\n *\tconvert from DB to server structure\n *\n * @param[out]\tps\t-\tAddress of the server in pbs server\n * @param[in]\tpdbsvr\t-\tAddress of the database server object\n *\n * @return   !=0   - Failure\n * @return   0     - Success\n */\nint\ndb_to_svr(struct server *ps, pbs_db_svr_info_t *pdbsvr)\n{\n\tif ((decode_attr_db(ps, &pdbsvr->db_attr_list.attrs, svr_attr_idx, svr_attr_def, ps->sv_attr, SVR_ATR_LAST, 0)) != 0)\n\t\treturn -1;\n\n\tps->newobj = 0;\n\tps->sv_qs.sv_jobidnumber = pdbsvr->sv_jobidnumber;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tconvert sched structure to DB format\n *\n * @param[in]\tps - Address of the scheduler in pbs server\n * @param[out]  pdbsched  - Address of the database scheduler object\n *\n * @retval   -1  Failure\n * @retval\t>=0 What to save: 0=nothing, OBJ_SAVE_NEW or OBJ_SAVE_QS\n */\nstatic int\nsched_to_db(struct pbs_sched *ps, pbs_db_sched_info_t *pdbsched)\n{\n\tint savetype = 0;\n\n\tstrcpy(pdbsched->sched_name, 
ps->sc_name);\n\n\tif ((encode_attr_db(sched_attr_def, ps->sch_attr, (int) SCHED_ATR_LAST, &pdbsched->db_attr_list, 0)) != 0)\n\t\treturn -1;\n\n\tif (ps->newobj) /* was never loaded or saved before */\n\t\tsavetype |= OBJ_SAVE_NEW;\n\n\treturn savetype;\n}\n\n/**\n * @brief\n *\tconvert from DB to sched structure\n *\n * @param[out] ps - Address of the scheduler in pbs server\n * @param[in]  pdbsched  - Address of the database scheduler object\n *\n */\nstatic int\ndb_to_sched(struct pbs_sched *ps, pbs_db_sched_info_t *pdbsched)\n{\n\tstrcpy(ps->sc_name, pdbsched->sched_name);\n\n\tif ((decode_attr_db(ps, &pdbsched->db_attr_list.attrs, sched_attr_idx, sched_attr_def, ps->sch_attr, SCHED_ATR_LAST, 0)) != 0)\n\t\treturn -1;\n\n\tps->newobj = 0;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tRecover server information and attributes from server database\n *\n * @return\tError code\n * @retval\t0\t: On successful recovery and creation of server structure\n * @retval\t-1\t: On failure\n *\n */\nint\nsvr_recov_db(void)\n{\n\tvoid *conn = (void *) svr_db_conn;\n\tpbs_db_svr_info_t dbsvr = {0};\n\tpbs_db_obj_info_t obj;\n\tint rc = -1;\n\n\tobj.pbs_db_obj_type = PBS_DB_SVR;\n\tobj.pbs_db_un.pbs_db_svr = &dbsvr;\n\n\trc = pbs_db_load_obj(conn, &obj);\n\tif (rc == -2)\n\t\treturn 0; /* no change in server, return 0 */\n\n\tif (rc == 0)\n\t\trc = db_to_svr(&server, &dbsvr);\n\n\tfree_db_attr_list(&dbsvr.db_attr_list);\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\t\tSave the state of the server, server quick save sub structure and\n *\t\toptionally the attributes.\n *\n * @param[in]\tps   -\tPointer to struct server\n * @param[in]\tmode -  type of save, either SVR_SAVE_QUICK or SVR_SAVE_FULL\n *\n * @return\tError code\n * @retval\t 0\t: Successful save of data.\n * @retval\t-1\t: Failure\n *\n */\n\nint\nsvr_save_db(struct server *ps)\n{\n\tvoid *conn = (void *) svr_db_conn;\n\tpbs_db_svr_info_t dbsvr = {0};\n\tpbs_db_obj_info_t obj;\n\tint savetype;\n\tint rc = -1;\n\tchar 
*conn_db_err = NULL;\n\n\t/* as part of the server save, update svrlive file now,\n\t * used in failover\n\t */\n\tif (update_svrlive() != 0)\n\t\tgoto done;\n\n\tif ((savetype = svr_to_db(ps, &dbsvr)) == -1)\n\t\tgoto done;\n\n\tobj.pbs_db_obj_type = PBS_DB_SVR;\n\tobj.pbs_db_un.pbs_db_svr = &dbsvr;\n\n\tif ((rc = pbs_db_save_obj(conn, &obj, savetype)) == 0)\n\t\tps->newobj = 0;\n\ndone:\n\tfree_db_attr_list(&dbsvr.db_attr_list);\n\n\tif (rc != 0) {\n\t\tpbs_db_get_errmsg(PBS_DB_ERR, &conn_db_err);\n\t\tlog_errf(PBSE_INTERNAL, __func__, \"Failed to save server %s\", conn_db_err ? conn_db_err : \"\");\n\t\tpanic_stop_db();\n\t\tfree(conn_db_err);\n\t}\n\n\treturn (rc);\n}\n\n/**\n * @brief Recover Schedulers\n *\n * @param[in]\tsname\t- scheduler name\n * @param[in]\tps\t- scheduler pointer, if any, to be updated\n *\n * @return\tThe recovered sched structure\n * @retval\tNULL - Failure\n * @retval\t!NULL - Success - address of recovered sched returned\n * */\n\npbs_sched *\nsched_recov_db(char *sname, pbs_sched *ps)\n{\n\tpbs_db_sched_info_t dbsched = {{0}};\n\tpbs_db_obj_info_t obj;\n\tvoid *conn = (void *) svr_db_conn;\n\tint rc = -1;\n\tchar *conn_db_err = NULL;\n\n\tif (!ps) {\n\t\tif ((ps = sched_alloc(sname)) == NULL) {\n\t\t\tlog_err(-1, __func__, \"sched_alloc failed\");\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\tobj.pbs_db_obj_type = PBS_DB_SCHED;\n\tobj.pbs_db_un.pbs_db_sched = &dbsched;\n\n\t/* load sched */\n\tsnprintf(dbsched.sched_name, sizeof(dbsched.sched_name), \"%s\", sname);\n\n\trc = pbs_db_load_obj(conn, &obj);\n\tif (rc == -2)\n\t\treturn ps; /* no change in sched */\n\n\tif (rc == 0)\n\t\trc = db_to_sched(ps, &dbsched);\n\telse {\n\t\tpbs_db_get_errmsg(PBS_DB_ERR, &conn_db_err);\n\t\tlog_errf(PBSE_INTERNAL, __func__, \"Failed to load sched %s %s\", sname, conn_db_err ? 
conn_db_err : \"\");\n\t\tfree(conn_db_err);\n\t}\n\n\tfree_db_attr_list(&dbsched.db_attr_list);\n\n\tif (rc != 0) {\n\t\tif (ps)\n\t\t\tsched_free(ps); /* free if we allocated here */\n\t\tps = NULL;\t\t/* so we return NULL */\n\t}\n\treturn ps;\n}\n\n/**\n * @brief\n *\t\tSave the state of the scheduler structure which consists only of attributes\n *\n * @param[in]\tps   -\tPointer to struct sched\n *\n * @return\tError code\n * @retval\t 0 :\tSuccessful save of data.\n * @retval\t-1 :\tFailure\n *\n */\n\nint\nsched_save_db(pbs_sched *ps)\n{\n\tvoid *conn = (void *) svr_db_conn;\n\tpbs_db_sched_info_t dbsched = {{0}};\n\tpbs_db_obj_info_t obj;\n\tint savetype;\n\tint rc = -1;\n\tchar *conn_db_err = NULL;\n\n\tif ((savetype = sched_to_db(ps, &dbsched)) == -1)\n\t\tgoto done;\n\n\tobj.pbs_db_obj_type = PBS_DB_SCHED;\n\tobj.pbs_db_un.pbs_db_sched = &dbsched;\n\n\tif ((rc = pbs_db_save_obj(conn, &obj, savetype)) == 0)\n\t\tps->newobj = 0;\n\ndone:\n\tfree_db_attr_list(&dbsched.db_attr_list);\n\n\tif (rc != 0) {\n\t\tpbs_db_get_errmsg(PBS_DB_ERR, &conn_db_err);\n\t\tlog_errf(PBSE_INTERNAL, __func__, \"Failed to save sched %s %s\", ps->sc_name, conn_db_err ? 
conn_db_err : \"\");\n\t\tpanic_stop_db();\n\t\tfree(conn_db_err);\n\t}\n\n\treturn rc;\n}\n\n/**\n * @brief\n *\trecov_sched_cb - callback function to process a sched database result\n *\tand load it into a pbs_sched structure.\n *\n * @param[in]\tdbobj\t- database sched object to convert\n * @param[out]\trefreshed - set to 1 if a row was processed\n *\n * @return\tsched structure - on success\n * @return\tNULL - on failure\n */\npbs_sched *\nrecov_sched_cb(pbs_db_obj_info_t *dbobj, int *refreshed)\n{\n\tpbs_sched *psched = NULL;\n\tpbs_db_sched_info_t *dbsched = dbobj->pbs_db_un.pbs_db_sched;\n\n\t*refreshed = 0;\n\t/* recover sched */\n\tif ((psched = sched_recov_db(dbsched->sched_name, NULL)) != NULL) {\n\t\tif (!strncmp(dbsched->sched_name, PBS_DFLT_SCHED_NAME, strlen(PBS_DFLT_SCHED_NAME)))\n\t\t\tdflt_scheduler = psched;\n\t\tpsched->sc_conn_addr = get_hostaddr(get_sched_attr_str(psched, SCHED_ATR_SchedHost));\n\t\tset_scheduler_flag(SCH_CONFIGURE, psched);\n\t\t*refreshed = 1;\n\t}\n\n\tfree_db_attr_list(&dbsched->db_attr_list);\n\treturn psched;\n}\n"
  },
  {
    "path": "src/server/svr_resccost.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    svr_resccost.c\n *\n * @brief\n * \t\tsvr_resccost.c - This file contains the functions for manipulating the server\n * \t\tattribute \"resource cost\", which is of type ATR_TYPE_LIST\n *\n *  \tIt contains functions for:\n *\t\tDecoding the value string to the machine representation,\n *\t\ta long integer within the resource cost structure.\n *\t\tEncoding the long integer value to external form\n *\t\tSetting the value by =, + or - operators.\n *\t\tFreeing the storage space used by the list.\n *\n *\t\tnote - it was my original intent to have the cost be an integer recorded\n *\t\tin the resource_defination structure itself.  It seemed logical, one\n *\t\tvalue per definition, why not.  But \"the old atomic set\" destroys that\n *\t\tidea.  Have to be able to have temporary attributes with their own\n *\t\tvalues...  
Hence it came down to another linked-list of values.\n *\n *\t\tResource_cost entry, one per resource type which has been set.\n * \t\tThe list is headed in the resource_cost attribute.\n *\n * Included functions are:\n *\tdecode_rcost()\n *\tencode_rcost()\n *\tset_rcost()\n *\tfree_rcost()\n *\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <assert.h>\n#include <ctype.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <time.h>\n#include <sys/types.h>\n#include \"pbs_ifl.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"resource.h\"\n#include \"pbs_error.h\"\n#include \"server_limits.h\"\n#include \"server.h\"\n#include \"job.h\"\n\nstruct resource_cost {\n\tpbs_list_link rc_link;\n\tresource_def *rc_def;\n\tlong rc_cost;\n};\n\n/**\n * @brief\n * \t\tadd_cost_entry\t-\tadd a new cost entry to the resource_cost list.\n *\n * @param[in,out]\tpatr\t-\tattribute structure\n * @param[in]\tprdef\t-\tresource definition structure\n *\n * @return\tresource_cost *\n */\n\nstatic struct resource_cost *\nadd_cost_entry(attribute *patr, resource_def *prdef)\n{\n\tstruct resource_cost *pcost;\n\n\tpcost = malloc(sizeof(struct resource_cost));\n\tif (pcost) {\n\t\tCLEAR_LINK(pcost->rc_link);\n\t\tpcost->rc_def = prdef;\n\t\tpcost->rc_cost = 0;\n\t\tappend_link(&patr->at_val.at_list, &pcost->rc_link, pcost);\n\t}\n\treturn (pcost);\n}\n\n/**\n * @brief\n * \t\tdecode_rcost - decode string into resource cost value\n *\n * @param[in,out]\tpatr\t-\tattribute name\n * @param[in]\tname\t-\tattribute name\n * @param[in]\trescn\t-\tresource name, unused here\n * @param[in]\tval\t-\tattribute value\n *\n * @return\tint\n * @retval\t0\t: if ok\n * @retval\t>0\t: error number if error\n * @retval\t*patr\n *\tReturns: 0 if ok\n *\t\t>0 error number if error\n */\n\nint\ndecode_rcost(attribute *patr, char *name, char *rescn, char *val)\n{\n\tresource_def *prdef;\n\tstruct resource_cost *pcost;\n\tvoid 
free_rcost(attribute *);\n\n\tif ((val == NULL) || (rescn == NULL)) {\n\t\tATR_UNSET(patr);\n\t\treturn (0);\n\t}\n\tif (is_attr_set(patr))\n\t\tfree_rcost(patr);\n\n\tprdef = find_resc_def(svr_resc_def, rescn);\n\tif (prdef == NULL)\n\t\treturn (PBSE_UNKRESC);\n\tpcost = (struct resource_cost *) GET_NEXT(patr->at_val.at_list);\n\twhile (pcost) {\n\t\tif (pcost->rc_def == prdef)\n\t\t\tbreak; /* have entry in attr already */\n\t\tpcost = (struct resource_cost *) GET_NEXT(pcost->rc_link);\n\t}\n\tif (pcost == NULL) { /* add entry */\n\t\tif ((pcost = add_cost_entry(patr, prdef)) == NULL)\n\t\t\treturn (PBSE_SYSTEM);\n\t}\n\tpcost->rc_cost = atol(val);\n\tpost_attr_set(patr);\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tencode_rcost - encode attribute of type long into attr_extern\n *\n * @param[in]\tattr\t-\tptr to attribute\n * @param[in,out]\tphead\t-\thead of attrlist list\n * @param[in]\tatname\t-\tattribute name\n * @param[in]\trsname\t-\tresource name or null\n * @param[in]\tmode\t-\tencode mode, unused here\n * @param[out]\trtnl\t-\tRETURN: ptr to svrattrl\n *\n * @return\tint\n * @retval\t>0\t: if ok\n * @retval\t=0\t: if no value, no attrlist link added\n * @retval\t<0\t: if error\n */\n/*ARGSUSED*/\n\nint\nencode_rcost(const attribute *attr, pbs_list_head *phead, char *atname, char *rsname, int mode, svrattrl **rtnl)\n{\n\tsvrattrl *pal;\n\tstruct resource_cost *pcost;\n\tint first = 1;\n\tsvrattrl *xprior = NULL;\n\n\tif (!attr)\n\t\treturn (-1);\n\tif (!(is_attr_set(attr)))\n\t\treturn (0);\n\n\tpcost = (struct resource_cost *) GET_NEXT(attr->at_val.at_list);\n\twhile (pcost) {\n\t\trsname = pcost->rc_def->rs_name;\n\t\tif ((pal = attrlist_create(atname, rsname, 23)) == NULL)\n\t\t\treturn (-1);\n\n\t\t(void) sprintf(pal->al_value, \"%ld\", pcost->rc_cost);\n\t\tpal->al_flags = attr->at_flags;\n\t\tappend_link(phead, &pal->al_link, pal);\n\t\tif (first) {\n\t\t\tif (rtnl)\n\t\t\t\t*rtnl = pal;\n\t\t\tfirst = 0;\n\t\t} else {\n\t\t\txprior->al_sister = 
pal;\n\t\t}\n\t\txprior = pal;\n\n\t\tpcost = (struct resource_cost *) GET_NEXT(pcost->rc_link);\n\t}\n\n\treturn (1);\n}\n\n/**\n * @brief\n * \t\tset_rcost - set attribute A to attribute B,\n *\t\teither A=B, A += B, or A -= B\n *\n * @param[in,out]\told\t-\tattribute A\n * @param[in]\tnew\t-\tattribute B\n * @param[in]\top\t-\tbatch operator. Ex: SET, INCR, DECR.\n *\n * @return\tint\n * @retval\t0\t: if ok\n * @retval\t>0 \t: if error\n */\n\nint\nset_rcost(attribute *old, attribute *new, enum batch_op op)\n{\n\tstruct resource_cost *pcnew;\n\tstruct resource_cost *pcold;\n\n\tassert(old && new && (is_attr_set(new)));\n\n\tpcnew = (struct resource_cost *) GET_NEXT(new->at_val.at_list);\n\twhile (pcnew) {\n\t\tpcold = (struct resource_cost *) GET_NEXT(old->at_val.at_list);\n\t\twhile (pcold) {\n\t\t\tif (pcnew->rc_def == pcold->rc_def)\n\t\t\t\tbreak;\n\t\t\tpcold = (struct resource_cost *) GET_NEXT(pcold->rc_link);\n\t\t}\n\t\tif (pcold == NULL)\n\t\t\tif ((pcold = add_cost_entry(old, pcnew->rc_def)) == NULL)\n\t\t\t\treturn (PBSE_SYSTEM);\n\n\t\tswitch (op) {\n\t\t\tcase SET:\n\t\t\t\tpcold->rc_cost = pcnew->rc_cost;\n\t\t\t\tbreak;\n\n\t\t\tcase INCR:\n\t\t\t\tpcold->rc_cost += pcnew->rc_cost;\n\t\t\t\tbreak;\n\n\t\t\tcase DECR:\n\t\t\t\tpcold->rc_cost -= pcnew->rc_cost;\n\t\t\t\tbreak;\n\n\t\t\tdefault:\n\t\t\t\treturn (PBSE_INTERNAL);\n\t\t}\n\t\tpcnew = (struct resource_cost *) GET_NEXT(pcnew->rc_link);\n\t}\n\tpost_attr_set(old);\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tfree_rcost - free space used by resource cost attribute\n *\n * @param[in]\tpattr\t-\tattribute structure\n */\n\nvoid\nfree_rcost(attribute *pattr)\n{\n\tstruct resource_cost *pcost;\n\n\twhile ((pcost = (struct resource_cost *) GET_NEXT(\n\t\t\tpattr->at_val.at_list)) != NULL) {\n\t\tdelete_link(&pcost->rc_link);\n\t\t(void) free(pcost);\n\t}\n\tmark_attr_not_set(pattr);\n}\n"
  },
  {
    "path": "src/server/user_func.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    user_func.c\n *\n *@brief\n * \t\tuser_func.c - Functions which provide basic operation on the user concept\n *\n * Included public functions are:\n *\n *   user_write_password  saves a user password to a file\n *   user_read_password   reads user's saved password from a file\n *   req_usercredential   receive save per user/per server password request\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <unistd.h>\n#include <sys/param.h>\n#include <dirent.h>\n\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <ctype.h>\n#include <errno.h>\n#include <assert.h>\n\n#include <fcntl.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\n#include <netdb.h>\n\n#include \"pbs_ifl.h\"\n#include \"log.h\"\n#include \"user.h\"\n#include \"list_link.h\"\n#include \"server_limits.h\"\n#include \"attribute.h\"\n#include \"credential.h\"\n#include \"ticket.h\"\n#include \"libpbs.h\"\n#include \"batch_request.h\"\n#include \"server.h\"\n#include \"net_connect.h\"\n#include \"pbs_nodes.h\"\n#include \"svrfunc.h\"\n\n#define MIG_RETRY_LIMIT 3\n\nextern struct server server;\nextern char *path_users;\n\n/* External functions */\nextern int should_retry_route(int err);\n\n/* Local Private Functions */\n\n/* Global Data items */\n\n/**\n * @brief\n *  \tuser_write_password - Output 
password into user password file.\n *\n * @param[in]\tuser\t-\tThe user name.\n * @param[in]\tcred\t-\tCredential\n * @param[in]\tlen\t-\tlength of cred.\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t-2\t: if a request was made to delete a non-existent user password file.\n * @retval\t-1\t: on all other errors.\n * @note\n *      NOTE: The well known 'log_buffer' array will be overwritten as to what\n *\t\taction took place.\n *      Also, if cred is \"\" and len is 0, then the user password file is\n *      deleted!\n */\nint\nuser_write_password(char *user, char *cred, size_t len)\n{\n\textern char *path_users;\n\tchar name_buf[MAXPATHLEN + 1];\n\tint cred_fd;\n\tint ret = -1;\n\n\tassert(user != NULL);\n\tassert(cred != NULL);\n\n\t(void) strcpy(name_buf, path_users);\n\t(void) strcat(name_buf, user);\n\t(void) strcat(name_buf, USER_PASSWORD_SUFFIX);\n\n\tif (len == 0) {\n\t\tstruct stat sbuf;\n\t\tif ((stat(name_buf, &sbuf) == -1) &&\n\t\t    (errno == ENOENT)) {\n\t\t\tsprintf(log_buffer, \"user %s has no password file!\",\n\t\t\t\tuser);\n\t\t\treturn (-2);\n\t\t}\n\t\tif (unlink(name_buf) == -1) {\n\t\t\tsprintf(log_buffer, \"Deleting user %s failed: error %d\",\n\t\t\t\tuser, errno);\n\t\t\treturn (-1);\n\t\t}\n\t\treturn (0);\n\t}\n\n\tif ((cred_fd = open(name_buf, O_RDWR | O_CREAT | O_TRUNC, 0600)) == -1) {\n\t\tsprintf(log_buffer,\n\t\t\t\"open of user password file %s failed: errno %d\",\n\t\t\tname_buf, errno);\n\t\treturn -1;\n\t}\n\n\tif (write(cred_fd, cred, len) != len) {\n\t\tsprintf(log_buffer,\n\t\t\t\"write to file %s incomplete: errno %d\", name_buf, errno);\n\t\tgoto done;\n\t}\n\tsprintf(log_buffer, \"saved user %s's per server password\", user);\n\tret = 0;\n\ndone:\n\tif (cred_fd > 0)\n\t\tclose(cred_fd);\n\treturn ret;\n}\n\n/**\n * @brief\n * \t\tuser_read_password\n *\t\tCheck if this user has an associated user password file.  
If it does,\n *\t\tthe user password file is opened and the password is read into\n *\t\tmalloc'ed memory.\n *\n * @param[in]\tuser\t-\tThe user name.\n * @param[out]\tcred\t-\tCredential\n * @param[out]\tlen\t-\tlength of cred.\n *\n * @return\tint\n * @retval\t1\t: if there is no password\n * @retval\t0\t: if there is a password\n * @retval\t-1\t: error.\n */\nint\nuser_read_password(char *user, char **cred, size_t *len)\n{\n\textern char *path_users;\n\tchar name_buf[MAXPATHLEN + 1];\n\tchar *hold = NULL;\n\tstruct stat sbuf;\n\tint fd;\n\tint ret = -1;\n\n\tassert(user != NULL);\n\tassert(cred != NULL);\n\n\t(void) strcpy(name_buf, path_users);\n\t(void) strcat(name_buf, user);\n\t(void) strcat(name_buf, USER_PASSWORD_SUFFIX);\n\n\tif ((fd = open(name_buf, O_RDONLY)) == -1) {\n\t\tif (errno == ENOENT)\n\t\t\treturn 1;\n\n\t\tsprintf(log_buffer, \"failed to open %s errno\", name_buf);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\treturn ret;\n\t}\n\n\tif (fstat(fd, &sbuf) == -1) {\n\t\tsprintf(log_buffer, \"failed to fstat %s\", name_buf);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\tgoto done;\n\t}\n\n\thold = malloc(sbuf.st_size);\n\tassert(hold != NULL);\n\n\tif (read(fd, hold, sbuf.st_size) != sbuf.st_size) {\n\t\tsprintf(log_buffer, \"read %s is incomplete\", name_buf);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\tgoto done;\n\t}\n\t*len = sbuf.st_size;\n\t*cred = hold;\n\thold = NULL;\n\tret = 0;\n\ndone:\n\tif (fd > 0)\n\t\tclose(fd);\n\n\tif (hold != NULL)\n\t\tfree(hold);\n\n\treturn ret;\n}\n\n/**\n * @brief\n * \t\treq_usercredential - receive password credential of a user that\n *                      is to be saved by the server.\n *\n * @param[in,out]\tpreq\t-\tptr to the decoded request\n */\nvoid\nreq_usercredential(struct batch_request *preq)\n{\n\tchar *user;\n\tint type;\n\tchar *cred;\n\tsize_t len;\n\tchar info[PBS_MAXUSER + PBS_MAXHOSTNAME + 2];\n\tint rval;\n\n\tDBPRT((\"%s: entered\\n\", __func__))\n\tuser = 
preq->rq_ind.rq_usercred.rq_user;\n\ttype = preq->rq_ind.rq_usercred.rq_type;\n\tcred = preq->rq_ind.rq_usercred.rq_data;\n\tlen = (size_t) preq->rq_ind.rq_usercred.rq_size;\n\n\tif ((preq->rq_host[0] == '\\0') || (preq->rq_user[0] == '\\0') ||\n\t    user == NULL) { /* no user */\n\t\treq_reject(PBSE_INTERNAL, 0, preq);\n\t\treturn;\n\t}\n\n\tsnprintf(info, sizeof(info), \"%s@%s\", preq->rq_user, preq->rq_host);\n\n\tif (strcasecmp(preq->rq_user, user) != 0) {\n\t\t/* ok if request coming from another server */\n\t\tif (!preq->rq_fromsvr &&\n\t\t    (strcasecmp(preq->rq_user, PBS_DEFAULT_ADMIN) != 0) &&\n\t\t    (ruserok(preq->rq_host, 0, preq->rq_user, user) != 0)) {\n\t\t\treq_reject(PBSE_PERM, 0, preq);\n\t\t\treturn;\n\t\t}\n\t}\n\n\tif (type != PBS_CREDTYPE_AES) {\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t\treturn;\n\t}\n\n\tstrcpy(log_buffer, \"\");\n\n\trval = user_write_password(user, cred, len);\n\tif (rval == -2) {\n\t\tif (strlen(log_buffer) > 0)\n\t\t\tlog_err(-1, info, log_buffer);\n\t\treq_reject(PBSE_BADUSER, 0, preq);\n\t} else if (rval == -1) {\n\t\tif (strlen(log_buffer) > 0)\n\t\t\tlog_err(-1, info, log_buffer);\n\t\treq_reject(PBSE_SYSTEM, 0, preq);\n\t} else {\n\t\tif (strlen(log_buffer) > 0)\n\t\t\tlog_event(PBSEVENT_ADMIN, PBS_EVENTCLASS_SERVER,\n\t\t\t\t  LOG_INFO, info, log_buffer);\n\t\treply_ack(preq);\n\t}\n\n\treturn;\n}\n"
  },
  {
    "path": "src/server/vnparse.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n *\n *@brief\n * \t\tFunctions which provide basic operation on the parsing of vnl files.\n *\n */\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <assert.h>\n#include <ctype.h>\n#include <errno.h>\n#include <stdio.h>\n#include <string.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <time.h>\n#include \"dis.h\"\n#include \"pbs_error.h\"\n#include \"log.h\"\n#include \"placementsets.h\"\n#include \"pbs_config.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"pbs_nodes.h\"\n#include \"cmds.h\"\n#include \"server.h\"\n#include \"queue.h\"\n#include \"pbs_reliable.h\"\n\nstatic vnal_t *vnal_alloc(vnal_t **);\nstatic vnal_t *id2vnrl(vnl_t *, char *);\nstatic vna_t *attr2vnr(vnal_t *, char *);\n\nstatic const char iddelim = ':';\nstatic const char attrdelim = '=';\n\nextern char *msg_err_malloc;\n\n/**\n * @brief\n *\t\tExported interfaces responsible for opening and reading the given file\n *\t\t(which should contain vnode-specific data in the form described in the\n *\t\tdesign document), parsing it into a vnl_t (see \"placementsets.h\").\n *\t\tOn error, NULL is returned;  on success, a pointer to the resulting\n *\t\tvnl_t structure is returned.  
Space allocated by the parse functions\n *\t\tshould be freed with vnl_free() below.\n *\n *\t\tIn order to allow the user to effect actions based on the attributes\n *\t\tor vnode IDs, a callback function may be supplied.  It will be called\n *\t\tbefore inserting a new name/value pair, and supplied the vnode ID,\n *\t\tattribute name and value;  if it returns zero, the insertion of the\n *\t\tgiven <ID, name, value> tuple will not occur but processing of the\n *\t\tfile will continue normally.\n *\n * @param[in]\tfile\t-\tfile which should contain vnode-specific data\n * \t\t\t\t\t\t\tin the form described in the design document.\n * @param[in]\tcallback\t-\tcallback function which will be called\n * \t\t\t\t\t\t\t\tbefore inserting a new name/value pair.\n *\n * @return\tvnl_t structure\n */\nvnl_t *\nvn_parse(const char *file, callfunc_t callback)\n{\n\tFILE *fp;\n\tvnl_t *vnlp;\n\n\tif ((fp = fopen(file, \"r\")) == NULL) {\n\t\tsprintf(log_buffer, \"%s\", file);\n\t\tlog_err(errno, __func__, log_buffer);\n\t\treturn NULL;\n\t}\n\n\tvnlp = vn_parse_stream(fp, callback);\n\n\t(void) fclose(fp);\n\treturn (vnlp);\n}\n\n/**\n * @brief\n * \t\tRead a configuration file.  
The lines of the file have the form:\n * @par\n * \t\t@verbatim\n * \t\t<ID><IDDELIM><ATTRNAME><ATTRDELIM><ATTRVAL> [<TYPE> <ATTRDELIM> <TYPEVAL>]\n * @par\n * \t\tFor example:\n * \t\tfred: thing = blue   type = string_array\n * \t\t@endverbatim\n * @par\n * \t\twhere <ID>, <ATTRNAME>, <ATTRVAL> and <TYPEVAL>\n * \t\tare all strings; <IDDELIM> and <ATTRDELIM> are\n * \t\tcharacters (':' and '=' respectively - see iddelim,\n * \t\tattrdelim above); <TYPE> is the literal string \"type\" and\n * \t\tbegins an optional section used to define the data type for <ATTRNAME>.\n *\n * @param[in]\tfile\t-\tfile which should contain vnode-specific data\n * \t\t\t\t\t\t\tin the form described in the design document.\n * @param[in]\tcallback\t-\tcallback function which will be called\n * \t\t\t\t\t\t\t\tbefore inserting a new name/value pair.\n *\n * @return\tvnl_t structure\n */\nvnl_t *\nvn_parse_stream(FILE *fp, callfunc_t callback)\n{\n\tint linenum;\n\tchar linebuf[BUFSIZ];\n\tvnl_t *vnlp = NULL;\n\tstruct stat sb;\n\tstatic char type[] = \"type\";\n\n\tif (vnl_alloc(&vnlp) == NULL) {\n\t\treturn NULL;\n\t}\n\n\tif (fstat(fileno(fp), &sb) == -1) {\n\t\tlog_err(errno, __func__, \"fstat\");\n\t\tvnl_free(vnlp);\n\t\treturn NULL;\n\t} else\n\t\tvnlp->vnl_modtime = sb.st_mtime;\n\n\t/*\n\t *\tlinenum begins at 1, not 0, because of the implicit assumption\n\t *\tthat each file we're asked to parse must have begun with a line\n\t *\tof the form\n\t *\n\t *\t\t$configversion\t...\n\t */\n\tlinenum = 1;\n\twhile (fgets(linebuf, sizeof(linebuf), fp) != NULL) {\n\t\tchar *p, *opt;\n\t\tchar *tokbegin, *tokend;\n\t\tchar *pdelim;\n\t\tchar *vnid;\t  /* vnode ID */\n\t\tchar *attrname;\t  /* attribute name */\n\t\tchar *attrval;\t  /* attribute value */\n\t\tchar *vnp;\t  /* vnode ID ptr*/\n\t\tint typecode = 0; /* internal attribute type */\n\t\t/* internal attribute flag, default */\n\t\tint typeflag = READ_WRITE | ATR_DFLAG_CVTSLT;\n\t\tstruct resc_type_map *ptmap;\n\n\t\t/* cost 
of using fgets() - have to remove trailing newline */\n\t\tif ((p = strrchr(linebuf, '\\n')) != NULL) {\n\t\t\t*p = '\\0';\n\t\t\tlinenum++;\n\t\t} else {\n\t\t\tsprintf(log_buffer, \"line %d not newline-terminated\",\n\t\t\t\tlinenum);\n\t\t\tlog_err(PBSE_SYSTEM, __func__, log_buffer);\n\t\t\tvnl_free(vnlp);\n\t\t\treturn NULL;\n\t\t}\n\n\t\t/* ignore initial white space;  skip blank lines */\n\t\tp = linebuf;\n\t\twhile ((*p != '\\0') && isspace(*p))\n\t\t\tp++;\n\t\tif (*p == '\\0')\n\t\t\tcontinue;\n\n\t\t/* <ID> <IDDELIM> */\n\t\tif ((pdelim = strchr(linebuf, iddelim)) == NULL) {\n\t\t\tsprintf(log_buffer, \"line %d:  missing '%c'\", linenum,\n\t\t\t\tiddelim);\n\t\t\tlog_err(PBSE_SYSTEM, __func__, log_buffer);\n\t\t\tvnl_free(vnlp);\n\t\t\treturn NULL;\n\t\t}\n\t\twhile ((p < pdelim) && isspace(*p))\n\t\t\tp++;\n\t\tif (p == pdelim) {\n\t\t\tsprintf(log_buffer, \"line %d:  no vnode id\",\n\t\t\t\tlinenum);\n\t\t\tlog_err(PBSE_SYSTEM, __func__, log_buffer);\n\t\t\tvnl_free(vnlp);\n\t\t\treturn NULL;\n\t\t} else {\n\t\t\ttokbegin = p;\n\t\t\twhile ((p < pdelim) && !isspace(*p))\n\t\t\t\tp++;\n\t\t\ttokend = p;\n\t\t\t*tokend = '\\0';\n\t\t\tvnid = tokbegin;\n\t\t}\n\n\t\t/*\n\t\t * Validate the vnode name here in MOM before sending UPDATE2\n\t\t * command to SERVER (is_request()->update2_to_vnode()->\n\t\t * create_pbs_node()) to create the vnode. MOM does not allow\n\t\t * any invalid character in vnode name which is not supported\n\t\t * by PBS server.\n\t\t */\n\t\tfor (vnp = vnid; *vnp && legal_vnode_char(*vnp, 1); vnp++)\n\t\t\t;\n\t\tif (*vnp) {\n\t\t\tlog_errf(PBSE_SYSTEM, __func__, \"invalid character in vnode name \\\"%s\\\"\", vnid);\n\t\t\tvnl_free(vnlp);\n\t\t\treturn NULL;\n\t\t}\n\t\t/* Condition to make sure that vnode name should not exceed\n\t\t * PBS_MAXHOSTNAME i.e. 64 characters. 
This is because the\n\t\t * corresponding column nd_name in the database table pbs.node\n\t\t * is defined as string of length 64.\n\t\t */\n\t\tif (strlen(vnid) > PBS_MAXHOSTNAME) {\n\t\t\tlog_errf(PBSE_SYSTEM, __func__, \"Node name \\\"%s\\\" is too big\", vnid);\n\t\t\treturn NULL;\n\t\t}\n\t\t/* <ATTRNAME> <ATTRDELIM> */\n\t\tp = pdelim + 1; /* advance past iddelim */\n\t\tif ((pdelim = strchr(p, attrdelim)) == NULL) {\n\t\t\tsprintf(log_buffer, \"line %d:  missing '%c'\", linenum,\n\t\t\t\tattrdelim);\n\t\t\tlog_err(PBSE_SYSTEM, __func__, log_buffer);\n\t\t\tvnl_free(vnlp);\n\t\t\treturn NULL;\n\t\t}\n\t\twhile ((p < pdelim) && isspace(*p))\n\t\t\tp++;\n\t\tif (p == pdelim) {\n\t\t\tsprintf(log_buffer, \"line %d:  no attribute name\",\n\t\t\t\tlinenum);\n\t\t\tlog_err(PBSE_SYSTEM, __func__, log_buffer);\n\t\t\tvnl_free(vnlp);\n\t\t\treturn NULL;\n\t\t} else {\n\t\t\ttokbegin = p;\n\t\t\twhile ((p < pdelim) && !isspace(*p))\n\t\t\t\tp++;\n\t\t\ttokend = p;\n\t\t\t*tokend = '\\0';\n\t\t\tattrname = tokbegin;\n\t\t}\n\n\t\t/* <ATTRVAL> */\n\t\tp = pdelim + 1; /* advance past attrdelim */\n\t\twhile (isspace(*p))\n\t\t\tp++;\n\t\tif (*p == '\\0') {\n\t\t\tsprintf(log_buffer, \"line %d:  no attribute value\",\n\t\t\t\tlinenum);\n\t\t\tlog_err(PBSE_SYSTEM, __func__, log_buffer);\n\t\t\tvnl_free(vnlp);\n\t\t\treturn NULL;\n\t\t}\n\n\t\t/*\n\t\t * Check to see if the optional \"type\" section exists.\n\t\t */\n\t\ttokbegin = NULL;\n\t\topt = strchr(p, attrdelim);\n\t\tif (opt != NULL) { /* found one */\n\t\t\topt--;\t   /* skip backward from '=' */\n\t\t\twhile ((p < opt) && isspace(*opt))\n\t\t\t\topt--;\n\t\t\tif (p < opt) { /* check for \"type\" */\n\t\t\t\t/*\n\t\t\t\t * We want to see if the string value\n\t\t\t\t * of \"type\" exists.  
opt is pointing to\n\t\t\t\t * the first non-space char so back up\n\t\t\t\t * enough to get to the beginning of \"type\".\n\t\t\t\t * The sizeof(type) is the same as strlen+1\n\t\t\t\t * so we need to backup sizeof(type)-2 to\n\t\t\t\t * get to the beginning of \"type\".\n\t\t\t\t */\n\t\t\t\topt -= (sizeof(type) - 2);\n\t\t\t\tif ((p < opt) && (strncmp(opt, type,\n\t\t\t\t\t\t\t  sizeof(type) - 1) == 0)) {\n\t\t\t\t\ttokend = opt - 1;\n\t\t\t\t\t/* must have a space before \"type\" */\n\t\t\t\t\tif (isspace(*tokend)) {\n\t\t\t\t\t\ttokbegin = p;\n\t\t\t\t\t\t*tokend = '\\0';\n\t\t\t\t\t\tp = opt;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif (tokbegin == NULL) { /* no optional section */\n\t\t\ttokbegin = p;\n\t\t\twhile (*p != '\\0')\n\t\t\t\tp++;\n\t\t\ttokend = p;\n\t\t}\n\n\t\t/*\n\t\t * The attribute value needs to be checked for\n\t\t * bad chars.  The only one is attrdelim '='.\n\t\t */\n\t\tattrval = tokbegin;\n\t\tif (strchr(attrval, attrdelim) != NULL) {\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"line %d:  illegal char '%c' in value\",\n\t\t\t\tlinenum, attrdelim);\n\t\t\tlog_err(PBSE_SYSTEM, __func__, log_buffer);\n\t\t\tvnl_free(vnlp);\n\t\t\treturn NULL;\n\t\t}\n\n\t\t/* look for optional \"keyword = typeval\" */\n\t\twhile ((*p != '\\0') && isspace(*p))\n\t\t\t++p;\n\t\tif (*p != '\\0') {\n\t\t\t/* there is a keyword (\"type\") */\n\t\t\tif ((pdelim = strchr(p, attrdelim)) == NULL) {\n\t\t\t\tsprintf(log_buffer, \"line %d:  missing '%c'\",\n\t\t\t\t\tlinenum, attrdelim);\n\t\t\t\tlog_err(PBSE_SYSTEM, __func__, log_buffer);\n\t\t\t\tvnl_free(vnlp);\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t\ttokbegin = p;\n\t\t\twhile ((p < pdelim) && !isspace(*p))\n\t\t\t\tp++;\n\t\t\ttokend = p;\n\t\t\t*tokend = '\\0';\n\t\t\tp = pdelim + 1;\n\t\t\tif (strcmp(tokbegin, type) == 0) {\n\t\t\t\twhile (isspace(*p))\n\t\t\t\t\t++p;\n\t\t\t\tif (*p == '\\0') {\n\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\"line %d:  no keyword 
value\",\n\t\t\t\t\t\tlinenum);\n\t\t\t\t\tlog_err(PBSE_SYSTEM, __func__, log_buffer);\n\t\t\t\t\tvnl_free(vnlp);\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n\t\t\t\ttokbegin = p;\n\t\t\t\twhile ((*p != '\\0') && !isspace(*p))\n\t\t\t\t\t++p;\n\t\t\t\ttokend = p;\n\t\t\t\t*tokend = '\\0';\n\t\t\t\tptmap = find_resc_type_map_by_typest(tokbegin);\n\t\t\t\tif (ptmap == NULL) {\n\t\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\t\"line %d: invalid type '%s'\",\n\t\t\t\t\t\tlinenum, tokbegin);\n\t\t\t\t\tlog_err(PBSE_SYSTEM, __func__,\n\t\t\t\t\t\tlog_buffer);\n\t\t\t\t\tvnl_free(vnlp);\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n\t\t\t\ttypecode = ptmap->rtm_type;\n\n\t\t\t} else {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"line %d:  invalid keyword '%s'\",\n\t\t\t\t\tlinenum, tokbegin);\n\t\t\t\tlog_err(PBSE_SYSTEM, __func__, log_buffer);\n\t\t\t\tvnl_free(vnlp);\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t}\n\n\t\tif (vn_addvnr(vnlp, vnid, attrname, attrval, typecode,\n\t\t\t      typeflag, callback) == -1) {\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"line %d:  vn_addvnr failed\", linenum);\n\t\t\tlog_err(PBSE_SYSTEM, __func__, log_buffer);\n\t\t\tvnl_free(vnlp);\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\treturn ((vnl_t *) vnlp);\n}\n\n/**\n * @brief\n *\t\tMerge data from the newly-parse vnode list (new) into a previously-\n *\t\tparsed one (cur) adding any attribute/value pairs found in new to\n *\t\tcur, and overwriting any duplicate attributes with new's values.\n *\t\tIf successful, cur is returned, otherwise NULL.\n *\n * @param[in,out]\tcur\t-\tpreviously-parsed one (cur)\n * @param[in]\tnew\t-\tnewly-parse vnode list (new)\n * @param[in]\tcallback\t-\tcallback function which will be called\n * \t\t\t\t\t\t\t\tbefore inserting a new name/value pair.\n */\nvnl_t *\nvn_merge(vnl_t *cur, vnl_t *new, callfunc_t callback)\n{\n\tunsigned long i, j;\n\n\tfor (i = 0; i < new->vnl_used; i++) {\n\t\tvnal_t *newreslist = VNL_NODENUM(new, i);\n\n\t\tfor (j = 0; j < newreslist->vnal_used; j++) {\n\t\t\tvna_t *newres 
= VNAL_NODENUM(newreslist, j);\n\n\t\t\tif (vn_addvnr(cur, newreslist->vnal_id,\n\t\t\t\t      newres->vna_name, newres->vna_val,\n\t\t\t\t      newres->vna_type, newres->vna_flag,\n\t\t\t\t      callback) == -1)\n\t\t\t\treturn NULL;\n\t\t}\n\t}\n\n\tcur->vnl_modtime = (cur->vnl_modtime > new->vnl_modtime) ? cur->vnl_modtime : new->vnl_modtime;\n\treturn (cur);\n}\n\n/**\n * @brief\n *\t\tMerge data from the newly-parsed vnode list (new) into a previously-\n *\t\tparsed one (cur), adding any attribute/value pairs whose attribute\n *\t\tnames are listed in 'allow_attribs'.\n *\t\tThis overwrites any duplicate attributes with new's values.\n * @note\n *\t\tAn entry in 'new' is matched on the part of its name before a\n *\t\tdot (.), if one exists.\n *\t\tFor example, a 'new' entry of \"resources_available.ncpus\" will\n *\t\tmatch with 'allow_attribs' entry of \"resources_available\".\n *\n * @param[in]\tcur - previously parsed vnode list\n * @param[in]\tnew - newly parsed vnode list\n * @param[in]\tallow_attribs - list of attribute names to match\n * @param[in]\tcallback - callback function which will be called\n *\t\t\t\t  before inserting a new name/value pair.\n *\n * @return\tvnl_t *\n * @retval\tcur\t- if successful\n * @retval\tNULL\t- if not successful.\n */\nvnl_t *\nvn_merge2(vnl_t *cur, vnl_t *new, char **allow_attribs, callfunc_t callback)\n{\n\tunsigned long i, j;\n\tchar *vna_name, *dot;\n\tint match;\n\n\tfor (i = 0; i < new->vnl_used; i++) {\n\t\tvnal_t *newreslist = VNL_NODENUM(new, i);\n\n\t\tfor (j = 0; j < newreslist->vnal_used; j++) {\n\t\t\tvna_t *newres = VNAL_NODENUM(newreslist, j);\n\n\t\t\tvna_name = newres->vna_name;\n\t\t\tdot = strchr(vna_name, (int) '.');\n\t\t\tif (dot)\n\t\t\t\t*dot = '\\0';\n\n\t\t\t/* match up to but not including dot */\n\t\t\tmatch = is_string_in_arr(allow_attribs, vna_name);\n\t\t\tif (dot)\n\t\t\t\t*dot = '.'; /* restore */\n\t\t\tif (!match)\n\t\t\t\tcontinue;\n\n\t\t\tif (vn_addvnr(cur, newreslist->vnal_id,\n\t\t\t\t      newres->vna_name, newres->vna_val,\n\t\t\t\t      newres->vna_type, newres->vna_flag,\n\t\t\t\t      
callback) == -1)\n\t\t\t\treturn NULL;\n\t\t}\n\t}\n\n\tcur->vnl_modtime = cur->vnl_modtime > new->vnl_modtime ? cur->vnl_modtime : new->vnl_modtime;\n\treturn cur;\n}\n\n/**\n * @brief\n * \t\tSearch for an attribute in a vnode.\n *\n * @param[in]\tvnrlp\t-\tvnode to check\n * @param[in]\tattr\t-\tcheck for the existence of the given attribute\n *\n * @return\tattribute value\n * @retval\tNULL\t: not found\n */\nchar *\nattr_exist(vnal_t *vnrlp, char *attr)\n{\n\tvna_t *vnrp;\n\n\tif (vnrlp == NULL)\n\t\treturn NULL;\n\n\tif ((vnrp = attr2vnr(vnrlp, attr)) == NULL)\n\t\treturn NULL;\n\n\treturn vnrp->vna_val;\n}\n\n/**\n * @brief\n * \t\tCheck if a vnode exists.\n *\n * @param[in]\tvnlp\t-\tvnode to check\n * @param[in]\tid\t-\tvnode name to look for\n *\n * @return\tvnal_t *\n */\nvnal_t *\nvn_vnode(vnl_t *vnlp, char *id)\n{\n\tif (vnlp == NULL)\n\t\treturn NULL;\n\treturn id2vnrl(vnlp, id);\n}\n\n/**\n * @brief\n * \t\tSearch for a named vnode, then search for an attribute in that vnode.\n *\n * @param[in]\tvnlp\t-\tvnode list to search\n * @param[in]\tid\t-\tvnode name to look for\n * @param[in]\tattr\t-\tcheck for the existence of the given attribute\n *\n * @return\tattribute value\n * @retval\tNULL\t: not found\n */\nchar *\nvn_exist(vnl_t *vnlp, char *id, char *attr)\n{\n\tvnal_t *vnrlp;\n\n\tif (vnlp == NULL)\n\t\treturn NULL;\n\tif ((vnrlp = id2vnrl(vnlp, id)) == NULL)\n\t\treturn NULL;\n\n\treturn attr_exist(vnrlp, attr);\n}\n\n/**\n * @brief\n *\t\tAdd the given attribute (attr) and value (attrval) to the vnode with\n *\t\tID id;  if no vnode with the given ID is found, one is created.\n *\n * @param[in,out]\tvnlp\t-\tvnode list to search\n * @param[in]\tid\t-\tvnode name to look for\n * @param[in]\tattr\t-\tname of the attribute to add\n * @param[in]\tattrval\t-\tattribute value\n * @param[in]\tattrtype\t-\tattribute type\n * @param[in]\tattrflags\t-\tattribute flags\n * @param[in]\tcallback\t-\tcallback function which will be 
called\n * \t\t\t\t\t\t\t\tbefore inserting a new name/value pair.\n *\n * @return\tint\n * @retval\t-1\t: error\n * @retval\t0\t: success\n */\nint\nvn_addvnr(vnl_t *vnlp, char *id, char *attr, char *attrval,\n\t  int attrtype, int attrflags, callfunc_t callback)\n{\n\tvnal_t *vnrlp;\n\tvna_t *vnrp;\n\tchar *newid, *newname, *newval;\n\n\tif ((callback != NULL) && (callback(id, attr, attrval) == 0))\n\t\treturn (0);\n\n\tif ((newname = strdup(attr)) == NULL) {\n\t\treturn (-1);\n\t} else if ((newval = strdup(attrval)) == NULL) {\n\t\tfree(newname);\n\t\treturn (-1);\n\t}\n\n\tif ((vnrlp = id2vnrl(vnlp, id)) == NULL) {\n\t\tif ((newid = strdup(id)) == NULL) {\n\t\t\tfree(newval);\n\t\t\tfree(newname);\n\t\t\treturn (-1);\n\t\t}\n\n\t\t/*\n\t\t *\tNo vnode_attrlist with this ID - add one.\n\t\t */\n\t\tif ((vnlp->vnl_used >= vnlp->vnl_nelem) &&\n\t\t    (vnl_alloc(&vnlp) == NULL)) {\n\t\t\tfree(newid);\n\t\t\tfree(newval);\n\t\t\tfree(newname);\n\t\t\treturn (-1);\n\t\t}\n\t\tvnlp->vnl_cur = vnlp->vnl_used++;\n\t\tif (pbs_idx_insert(vnlp->vnl_ix, id, (void *) vnlp->vnl_dl.dl_cur) != PBS_IDX_RET_OK) {\n\t\t\tfree(newid);\n\t\t\tfree(newval);\n\t\t\tfree(newname);\n\t\t\treturn (-1);\n\t\t}\n\t\tvnrlp = CURVNLNODE(vnlp);\n\t\tvnrlp->vnal_id = newid;\n\t}\n\n\tif ((vnrp = attr2vnr(vnrlp, attr)) == NULL) {\n\t\t/*\n\t\t *\tNo vnode_attr for this attribute - add one.\n\t\t */\n\t\tif ((vnrlp->vnal_used >= vnrlp->vnal_nelem) &&\n\t\t    (vnal_alloc(&vnrlp) == NULL)) {\n\t\t\tfree(newval);\n\t\t\tfree(newname);\n\t\t\treturn (-1);\n\t\t}\n\t\tvnrlp->vnal_cur = vnrlp->vnal_used++;\n\t\tvnrp = CURVNRLNODE(vnrlp);\n\t} else {\n\t\tfree(vnrp->vna_name);\n\t\tfree(vnrp->vna_val);\n\t}\n\n\tvnrp->vna_name = newname;\n\tvnrp->vna_val = newval;\n\tvnrp->vna_type = attrtype;\n\tvnrp->vna_flag = attrflags;\n\treturn (0);\n}\n\n/**\n * @brief\n *\t\tIf a vnal_t entry with the given ID (id) exists, return a pointer\n *\t\tto it;  otherwise NULL is returned.\n *\n * 
@param[in,out]\tvnlp\t-\tvnode list to search\n * @param[in]\tid\t-\tvnode name to look for\n *\n * @return\tvnal_t *\n * @retval\ta pointer to vnal_t\t: entry with the given ID (id) exists\n * @retval\tNULL\t: does not exist.\n */\nstatic vnal_t *\nid2vnrl(vnl_t *vnlp, char *id)\n{\n\tunsigned long i = 0;\n\tif (vnlp != NULL && pbs_idx_find(vnlp->vnl_ix, (void **) &id, (void **) &i, NULL) == PBS_IDX_RET_OK) {\n\t\tvnal_t *vnrlp = VNL_NODENUM(vnlp, i);\n\n\t\treturn (vnrlp);\n\t}\n\n\treturn NULL;\n}\n\n/**\n * @brief\n *\t\tIf a vna_t entry with the given attribute name (attr) exists, return\n *\t\ta pointer to it;  otherwise NULL is returned.\n *\n * @param[in]\tvnrlp\t-\tvnode attribute list to search\n * @param[in]\tattr\t-\tcheck for the existence of the given attribute\n *\n * @return\tvna_t *\n * @retval\ta pointer to vna_t\t: entry with the given attribute (attr) exists\n * @retval\tNULL\t: does not exist.\n */\nstatic vna_t *\nattr2vnr(vnal_t *vnrlp, char *attr)\n{\n\tunsigned long i;\n\n\tif (vnrlp == NULL || attr == NULL)\n\t\treturn NULL;\n\n\tfor (i = 0; i < vnrlp->vnal_used; i++) {\n\t\tvna_t *vnrp = VNAL_NODENUM(vnrlp, i);\n\n\t\tif (strcmp(vnrp->vna_name, attr) == 0)\n\t\t\treturn vnrp;\n\t}\n\n\treturn NULL;\n}\n\n/**\n * @brief\n * \t\tfree the given vnl_t\n *\n * @param[in]\tvnlp\t-\tvnl_t which needs to be freed.\n */\nvoid\nvnl_free(vnl_t *vnlp)\n{\n\tunsigned long i, j;\n\n\tif (vnlp) {\n\t\tassert(vnlp->vnl_list != NULL);\n\t\tif (vnlp->vnl_used == 0 && vnlp->vnl_nelem && vnlp->vnl_list) {\n\t\t\tvnal_t *vnrlp = (vnal_t *) vnlp->vnl_list;\n\t\t\tfree(vnrlp->vnal_list);\n\t\t}\n\t\tfor (i = 0; i < vnlp->vnl_used; i++) {\n\t\t\tvnal_t *vnrlp = VNL_NODENUM(vnlp, i);\n\n\t\t\tassert(vnrlp->vnal_list != NULL);\n\t\t\tfor (j = 0; j < vnrlp->vnal_used; j++) {\n\t\t\t\tvna_t *vnrp = VNAL_NODENUM(vnrlp, 
j);\n\n\t\t\t\tfree(vnrp->vna_name);\n\t\t\t\tfree(vnrp->vna_val);\n\t\t\t}\n\t\t\tfree(vnrlp->vnal_list);\n\t\t\tfree(vnrlp->vnal_id);\n\t\t}\n\t\tfree(vnlp->vnl_list);\n#ifdef PBS_MOM\n\t\tpbs_idx_destroy(vnlp->vnl_ix);\n#endif /* PBS_MOM */\n\t\tfree(vnlp);\n\t}\n}\n\n/**\n * @brief\n *\t\tCheck character in a vnode name.\n *\n * @param[in]\tc\t-\tcharacter in a vnode name.\n * @param[in]\textra\t-\textra should be non-zero if a period, '.', is to be accepted\n *\n * @return\tint\n * @retval\t1\t: character is legal in a vnode name\n * @retval\t0\t: character is not legal in a vnode name\n */\nint\nlegal_vnode_char(char c, int extra)\n{\n\tif (isalnum((int) c) ||\n\t    (c == '-') || (c == '_') || (c == '@') ||\n\t    (c == '[') || (c == ']') || (c == '#') || (c == '^') ||\n\t    (c == '/') || (c == '\\\\'))\n\t\treturn 1; /* ok */\n\tif (extra == 1) {\n\t\t/* extra character, the period,  allowed */\n\t\tif (c == '.')\n\t\t\treturn 1;\n\t} else if (extra == 2) {\n\t\t/* extra characters, the period and comma,  allowed */\n\t\tif ((c == '.') || (c == ','))\n\t\t\treturn 1;\n\t} else {\n\t\tif (c == ',')\n\t\t\treturn 1;\n\t}\n\treturn 0;\n}\n\n/**\n *\n * @brief\n * \t\tParse tokens in the nodes file\n *\n * @par\n *\t\tThe token is returned; if NULL, there was none.\n *\t\tIf there is an error, then \"err\" is set non-zero.\n *\t\tOn a following call with argument \"start\" as a null pointer,\n *\t\tparsing resumes where it left off.\n *\n * @param[in]\tstart\t-\twhere the parsing last left off. If start is NULL,\n *\t\t\t\t\t\t\tthen resume where the function last left off.\n * @param[in]\tcok\t-\tstates if certain characters are legal, separators,\n *\t\t\t\t\t\tor are illegal.   If cok is\n *\t\t\t\t\t\t0: '.' and '=' are separators, and ',' is ok:\n *\t\t   \t   \t\t\t'=' as separator between \"keyword\" and \"value\", and\n *\t\t   \t   \t\t\t'.' between attribute and resource.\n *\t\t\t\t\t\t1: '.' 
is allowed as a character and '=' is illegal\n *\t\t\t\t\t\t2: use quoted string parsing rules\n *\t\t\t\t\t\tTypically \"cok\" is 1 when parsing what should be the\n *\t\t\t\t\t\tvnode name:\n *\t\t\t   \t\t\t\t0 when parsing attribute/resource names\n *\t\t\t   \t\t\t\t2 when parsing (resource) values\n * @param[out]\terr\t-\treturns in '*err' the error return code\n * @param[out]\tterm\t-\tcharacter terminating token.\n *\n * @return\tchar *\n * @retval\t<value>\tReturns the next element (resource or value) as the\n *\t\t\tnext token.\n *\n * @note\n * \t\tIf called with cok = 2, the returned value, if non-null, will be on the\n * \t\theap and must be freed by the caller.\n *\n * @par MT-safe: No\n */\n\nchar *\nparse_node_token(char *start, int cok, int *err, char *term)\n{\n\tstatic char *pt;\n\tchar *ts;\n\tchar quote;\n\tchar *rn;\n\n\t*err = 0;\n\tif (start)\n\t\tpt = start;\n\n\tif (cok == 2) {\n\t\t/* apply quoted value parsing rules */\n\t\tif ((*err = pbs_quote_parse(pt, &rn, &ts, QMGR_NO_WHITE_IN_VALUE)) == 0) {\n\t\t\t*term = *ts;\n\t\t\tif (*ts != '\\0')\n\t\t\t\tpt = ts + 1;\n\t\t\telse\n\t\t\t\tpt = ts;\n\t\t\treturn rn;\n\t\t} else {\n\t\t\treturn NULL;\n\t\t}\n\t}\n\n\twhile (*pt && isspace((int) *pt)) /* skip leading whitespace */\n\t\tpt++;\n\tif (*pt == '\\0')\n\t\treturn NULL; /* no token */\n\n\tts = pt;\n\n\t/* test for legal characters in token */\n\n\tfor (; *pt; pt++) {\n\t\tif (*pt == '\\\"') {\n\t\t\tquote = *pt;\n\t\t\t++pt;\n\t\t\twhile (*pt != '\\0' && *pt != quote)\n\t\t\t\tpt++;\n\t\t\tquote = 0;\n\t\t} else {\n\t\t\tif (legal_vnode_char(*pt, cok) || (*pt == ':'))\n\t\t\t\tcontinue; /* valid anywhere */\n\t\t\telse if (isspace((int) *pt))\n\t\t\t\tbreak; /* separator anywhere */\n\t\t\telse if (!cok && (*pt == '.'))\n\t\t\t\tbreak; /* separator attr.resource */\n\t\t\telse if (!cok && (*pt == '='))\n\t\t\t\tbreak; /* separate attr(.resc)=value */\n\t\t\telse\n\t\t\t\t*err = 1;\n\t\t}\n\t}\n\t*term = *pt;\n\t*pt = 
'\\0';\n\tpt++;\n\treturn (ts);\n}\n\n#define VN_NCHUNKS 4 /* number of chunks to allocate initially */\n#define VN_MULT 4    /* multiplier for next allocation size */\n\n/**\n * @brief\n *\t\tHandle initial allocation of a vnl_t as well as reallocation when\n *\t\twe run out of space.  The list of vnodes (vnl_t) and the attributes\n *\t\tfor each (vnal_t) are initially allocated VN_NCHUNKS entries;  when\n *\t\tthat size is outgrown, a list VN_MULT times the current size is\n *\t\treallocated.\n *\n * @param[out]\tvp\t-\tvnl_t which requires allocation or reallocation.\n */\nvnl_t *\nvnl_alloc(vnl_t **vp)\n{\n\tvnl_t *newchunk;\n\tvnal_t *newlist;\n\n\tassert(vp != NULL);\n\tif (*vp == NULL) {\n\t\t/*\n\t\t *\tAllocate chunk structure and first chunk of\n\t\t *\tVN_NCHUNKS attribute list entries.\n\t\t */\n\t\tif ((newchunk = malloc(sizeof(vnl_t))) == NULL) {\n\t\t\tsprintf(log_buffer, \"malloc vnl_t\");\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\treturn NULL;\n\t\t}\n\n\t\tnewlist = NULL;\n\t\tif (vnal_alloc(&newlist) == NULL) {\n\t\t\tfree(newchunk);\n\t\t\treturn NULL;\n\t\t}\n\t\tif ((newchunk->vnl_ix = pbs_idx_create(0, 0)) == NULL) {\n\t\t\tfree(newchunk);\n\t\t\treturn NULL;\n\t\t}\n\t\tnewchunk->vnl_list = newlist;\n\t\tnewchunk->vnl_nelem = 1;\n\t\tnewchunk->vnl_cur = 0;\n\t\tnewchunk->vnl_used = 0;\n\t\tnewchunk->vnl_modtime = time(NULL);\n\t\treturn (*vp = newchunk);\n\t} else {\n\t\t/*\n\t\t *\tReallocate a larger chunk, multiplying the number of\n\t\t *\tentries by VN_MULT and initializing the new ones to 0.\n\t\t */\n\t\tint cursize = (*vp)->vnl_nelem;\n\t\tint newsize = cursize * VN_MULT;\n\n\t\tassert((*vp)->vnl_list != NULL);\n\t\tif ((newlist = realloc((*vp)->vnl_list,\n\t\t\t\t       newsize * sizeof(vnal_t))) == NULL) {\n\t\t\tsprintf(log_buffer, \"realloc vnl_list\");\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\treturn NULL;\n\t\t} else {\n\t\t\t(*vp)->vnl_list = newlist;\n\t\t\tmemset(((vnal_t *) (*vp)->vnl_list) + cursize, 
0,\n\t\t\t       ((newsize - cursize) * sizeof(vnal_t)));\n\t\t\t(*vp)->vnl_nelem = newsize;\n\t\t\treturn (*vp);\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\t\tHandle initial allocation of a vnal_t as well as reallocation when\n *\t\twe run out of space.  The list of vnode attributes for a given vnode\n *\t\t(vnal_t) is initially allocated VN_NCHUNKS entries;  when that size\n *\t\tis outgrown, a list VN_MULT times the current size is reallocated.\n *\n * @param[out]\tvp\t-\tvnl_t which requires allocation or reallocation.\n */\nstatic vnal_t *\nvnal_alloc(vnal_t **vp)\n{\n\tvnal_t *newchunk;\n\tvna_t *newlist;\n\n\tassert(vp != NULL);\n\tif (*vp == NULL) {\n\t\t/*\n\t\t *\tAllocate chunk structure and first chunk of\n\t\t *\tVN_NCHUNKS attribute list entries.\n\t\t */\n\t\tif ((newchunk = malloc(sizeof(vnal_t))) == NULL) {\n\t\t\tsprintf(log_buffer, \"malloc vnal_t\");\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\treturn NULL;\n\t\t}\n\t\tif ((newlist = calloc(VN_NCHUNKS, sizeof(vna_t))) == NULL) {\n\t\t\tsprintf(log_buffer, \"calloc vna_t\");\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\tfree(newchunk);\n\t\t\treturn NULL;\n\t\t} else {\n\t\t\tnewchunk->vnal_nelem = VN_NCHUNKS;\n\t\t\tnewchunk->vnal_cur = 0;\n\t\t\tnewchunk->vnal_used = 0;\n\t\t\tnewchunk->vnal_list = newlist;\n\t\t\treturn (*vp = newchunk);\n\t\t}\n\t} else {\n\t\t/*\n\t\t *\tReallocate a larger chunk, multiplying the number of\n\t\t *\tentries by VN_MULT and initializing the new ones to 0.\n\t\t */\n\t\tint cursize = (*vp)->vnal_nelem;\n\t\tint newsize = (cursize == 0 ? 
1 : cursize) * VN_MULT;\n\n\tif ((newlist = realloc((*vp)->vnal_list,\n\t\t\t       newsize * sizeof(vna_t))) == NULL) {\n\t\tsprintf(log_buffer, \"realloc vnal_list\");\n\t\tlog_err(errno, __func__, log_buffer);\n\t\treturn NULL;\n\t} else {\n\t\t(*vp)->vnal_list = newlist;\n\t\tmemset(((vna_t *) (*vp)->vnal_list) + cursize, 0,\n\t\t       ((newsize - cursize) * sizeof(vna_t)));\n\t\t(*vp)->vnal_nelem = newsize;\n\t\treturn (*vp);\n\t}\n}\n\n/**\n * @brief\n *\tThis returns 1 if the given 'host' and 'port' match the\n *\tparent mom of node 'pnode'.\n *\n * @param[in]\tpnode - the node to match host against\n * @param[in]\tnode_parent_host - if pnode is NULL, consults this as node parent host.\n * @param[in]\thost - hostname to match\n * @param[in]\tport - port to match\n *\n * @return int\n * @retval 1\t- if true\n * @retval 0 \t- if false\n */\nstatic int\nis_parent_host_of_node(pbsnode *pnode, char *node_parent_host, char *host, int port)\n{\n\tif (((pnode == NULL) && (node_parent_host == NULL)) || (host == NULL))\n\t\treturn (0);\n\n\tif (pnode == NULL) {\n\t\tif (strcmp(node_parent_host, host) == 0)\n\t\t\treturn (1);\n\n\t} else {\n\t\tint i;\n\t\tfor (i = 0; i < pnode->nd_nummoms; i++) {\n\t\t\tif ((strcmp(pnode->nd_moms[i]->mi_host, host) == 0) &&\n\t\t\t    (pnode->nd_moms[i]->mi_port == port)) {\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t}\n\t}\n\treturn (0);\n}\n\n/**\n *\n * @brief\n *\tReturn <resource>=<value> entries in 'chunk' where\n *\t<resource> does not appear in the comma-separated\n *\tlist 'res_list'.\n * @par\n *\tFor example, suppose:\n *\t\tres_list = <resA>,<resB>\n *\tand\n *\t\tchunk = <resB>=<valB>:<resC>=<valC>:<resD>=<valD>\n *\n *\tthen this function returns:\n *\t\t<resC>=<valC>:<resD>=<valD>\n *\n * @param[in]\tres_list - the resources list\n * @param[in]\tchunk - the chunk to check for new resources.\n *\n * @return char *\n * @retval != NULL\tthe resources that are used in 'chunk',\n *\t\t\tbut not in 
'res_list'.\n * @retval == NULL\tif error encountered.\n *\n * @note\n *\tThe returned string points to a statically allocated buffer\n *\tthat must not be freed, and will get overwritten on the\n *\tnext call to this function.\n *\n */\nstatic char *\nreturn_missing_resources(char *chunk, char *res_list)\n{\n\tint snc;\n\tint snelma;\n\tstatic int snelmt = 0;\t\t   /* must be static per parse_chunk_r() */\n\tstatic key_value_pair *skv = NULL; /* must be static per parse_chunk_r() */\n\tint rc = 0;\n\tstatic char *ret_buf = NULL;\n\tstatic int ret_buf_size = 0;\n\tint l;\n\tchar *chunk_dup = NULL;\n\n\tif ((res_list == NULL) || (chunk == NULL)) {\n\t\tlog_err(-1, __func__, \"bad params passed\");\n\t\treturn (NULL);\n\t}\n\n\tif (ret_buf == NULL) {\n\t\tint chunk_len;\n\n\t\tchunk_len = strlen(chunk);\n\t\tret_buf = malloc(chunk_len + 1);\n\t\tif (ret_buf == NULL) {\n\t\t\tlog_err(errno, __func__, \"malloc failed\");\n\t\t\treturn NULL;\n\t\t}\n\t\tret_buf_size = chunk_len;\n\t}\n\n\tchunk_dup = strdup(chunk);\n\tif (chunk_dup == NULL) {\n\t\tlog_err(errno, __func__, \"strdup failed on chunk\");\n\t\treturn (NULL);\n\t}\n\trc = parse_chunk_r(chunk_dup, &snc, &snelma,\n\t\t\t   &snelmt, &skv, NULL);\n\tif (rc != 0) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"bad parse of %s\", chunk_dup);\n\t\tlog_err(-1, __func__, log_buffer);\n\t\tfree(chunk_dup);\n\t\treturn (NULL);\n\t}\n\tret_buf[0] = '\\0';\n\tfor (l = 0; l < snelma; ++l) {\n\t\tif (!in_string_list(skv[l].kv_keyw, ',', res_list)) {\n\t\t\tif (ret_buf[0] != '\\0') {\n\t\t\t\tif (pbs_strcat(&ret_buf, &ret_buf_size, \":\") == NULL)\n\t\t\t\t\treturn NULL;\n\t\t\t}\n\n\t\t\tif (pbs_strcat(&ret_buf, &ret_buf_size, skv[l].kv_keyw) == NULL)\n\t\t\t\treturn NULL;\n\t\t\tif (pbs_strcat(&ret_buf, &ret_buf_size, \"=\") == NULL)\n\t\t\t\treturn NULL;\n\t\t\tif (pbs_strcat(&ret_buf, &ret_buf_size, skv[l].kv_val) == NULL)\n\t\t\t\treturn NULL;\n\t\t}\n\t}\n\tfree(chunk_dup);\n\treturn (ret_buf);\n}\n\n/**\n *\n * 
@brief\n *\tReturn a comma-separated list of resource names\n *\tused/assigned in the given 'exec_vnode' string.\n *\n * @param[in]\texec_vnode - the master exec_vnode to search on.\n * @return char *\n * @retval != NULL\tthe resources from 'exec_vnode'.\n * @retval == NULL\tif error encountered.\n *\n * @note\n *\tThe returned string can have duplicate resource\n *\tnames in them.\n *\tThe returned string points to a malloced area that\n *\tmust be freed when not needed.\n *\n */\nstatic char *\nresources_seen(char *exec_vnode)\n{\n\tchar *selbuf = NULL;\n\tint hasprn;\n\tchar *last = NULL;\n\tint snelma;\n\tstatic key_value_pair *skv = NULL; /* must be static */\n\tint j;\n\tchar *psubspec;\n\tchar *res_list = NULL;\n\tchar *noden = NULL;\n\tsize_t ssize = 0;\n\tsize_t slen = 0;\n\n\tif (exec_vnode == NULL) {\n\t\tlog_err(-1, __func__, \"bad params passed\");\n\t\treturn (NULL);\n\t}\n\n\tselbuf = strdup(exec_vnode);\n\tif (selbuf == NULL) {\n\t\tlog_err(errno, __func__, \"strdup failed on exec_vnode\");\n\t\treturn (NULL);\n\t}\n\tssize = strlen(exec_vnode) + 1;\n\tres_list = (char *) calloc(1, strlen(exec_vnode) + 1);\n\tif (res_list == NULL) {\n\t\tlog_err(errno, __func__, \"calloc failed on exec_vnode\");\n\t\tfree(selbuf);\n\t\treturn (NULL);\n\t}\n\n\tfor (psubspec = parse_plus_spec_r(selbuf, &last, &hasprn); psubspec != NULL;\n\t     psubspec = parse_plus_spec_r(last, &last, &hasprn)) {\n\n\t\tif (parse_node_resc(psubspec, &noden, &snelma, &skv) != 0) {\n\t\t\tfree(selbuf);\n\t\t\tfree(res_list);\n\t\t\treturn (NULL);\n\t\t}\n\n\t\tfor (j = 0; j < snelma; ++j) {\n\t\t\tif (res_list[0] == '\\0') {\n\t\t\t\tstrncpy(res_list, skv[j].kv_keyw, ssize - 1);\n\t\t\t} else {\n\t\t\t\tslen = strlen(res_list);\n\t\t\t\tstrncat(res_list, \",\", ssize - slen - 1);\n\t\t\t\tslen += 1;\n\t\t\t\tstrncat(res_list, skv[j].kv_keyw, ssize - slen - 1);\n\t\t\t}\n\t\t}\n\t}\n\tfree(selbuf);\n\treturn (res_list);\n}\n\n/**\n * @brief\n *\tLook into a job's exec_host2 or 
exec_host attribute\n *\tfor the first entry which is considered the MS host and its\n *\tport. 'exec_host2' is consulted first if it is non-NULL, then 'exec_host'.\n * @param[in]\texec_host - exechost to consult\n * @param[in]\texec_host2 - exechost to consult\n * @param[out]  port\t- where the corresponding port is returned.\n *\n * @return char *\n * @retval\t!= NULL - mother superior full hostname\n * @retval\tNULL - if error obtaining hostname.\n *\n * @note\n *\tReturned string is in a malloc-ed area which must be freed\n *\toutside after use.\n */\nstatic char *\nfind_ms_full_host_and_port(char *exec_host, char *exec_host2, int *port)\n{\n\tchar *ms_exec_host = NULL;\n\tchar *p;\n\n\tif (((exec_host == NULL) && (exec_host2 == NULL)) || (port == NULL)) {\n\t\tlog_err(PBSE_INTERNAL, __func__, \"bad input parameter\");\n\t\treturn (NULL);\n\t}\n\n\t*port = pbs_conf.mom_service_port;\n\n\tif (exec_host2 != NULL) {\n\t\tms_exec_host = strdup(exec_host2);\n\t\tif (ms_exec_host == NULL) {\n\t\t\tlog_err(errno, __func__, \"strdup failed\");\n\t\t\treturn (NULL);\n\t\t}\n\t\tif ((p = strchr(ms_exec_host, '/')) != NULL)\n\t\t\t*p = '\\0';\n\n\t\tif ((p = strchr(ms_exec_host, ':')) != NULL) {\n\t\t\tchar *endp;\n\t\t\tlong pnum;\n\n\t\t\tpnum = strtol(p + 1, &endp, 10);\n\t\t\tif ((*endp != '\\0') || (pnum == LONG_MIN) || (pnum == LONG_MAX)) {\n\t\t\t\tlog_err(errno, __func__, \"strtol error\");\n\t\t\t\tfree(ms_exec_host);\n\t\t\t\treturn (NULL);\n\t\t\t}\n\t\t\t*p = '\\0';\n\t\t\t*port = pnum;\n\t\t}\n\t} else if (exec_host != NULL) {\n\t\tms_exec_host = strdup(exec_host);\n\t\tif (ms_exec_host == NULL) {\n\t\t\tlog_err(errno, __func__, \"strdup failed\");\n\t\t\treturn (NULL);\n\t\t}\n\t\tif ((p = strchr(ms_exec_host, '/')) != NULL)\n\t\t\t*p = '\\0';\n\t}\n\treturn (ms_exec_host);\n}\n\n/**\n * @brief\n *\tGiven a select string specification of the form:\n *\t\t<num>:<resA>=<valA>:<resB>=<valB>+<resN>=<valN>\n *\texpand the spec to write out the repeated chunks\n *\tcompletely. 
For example, given:\n *\t\t2:ncpus=1:mem=3gb:mpiprocs=5\n *\tthis expands to:\n *\t   1:ncpus=1:mem=3gb:mpiprocs=5+1:ncpus=1:mem=3gb:mpiprocs=5\n * @param[in]\tselect_str - the select/schedselect specification\n *\n * @return char *\n * @retval\t!= NULL - the expanded select string\n * @retval\tNULL - if an unexpected error is encountered during processing.\n *\n * @note\n *\tReturned string is in a malloc-ed area which must be freed\n *\toutside after use.\n */\nstatic char *\nexpand_select_spec(char *select_str)\n{\n\tchar *selbuf = NULL;\n\tint hasprn3;\n\tchar *last3 = NULL;\n\tint snc;\n\tint snelma;\n\tstatic int snelmt = 0;\t\t   /* must be static per parse_chunk_r() */\n\tstatic key_value_pair *skv = NULL; /* must be static per parse_chunk_r() */\n\tint i, j;\n\tchar *psubspec;\n\tchar buf[LOG_BUF_SIZE + 1];\n\tint ns_malloced = 0;\n\tchar *new_sel = NULL;\n\n\tif (select_str == NULL) {\n\t\tlog_err(-1, __func__, \"bad param passed\");\n\t\treturn (NULL);\n\t}\n\n\tselbuf = strdup(select_str);\n\tif (selbuf == NULL) {\n\t\tlog_err(errno, __func__, \"strdup fail\");\n\t\treturn (NULL);\n\t}\n\n\t/* parse chunk from select spec */\n\tfor (psubspec = parse_plus_spec_r(selbuf, &last3, &hasprn3); psubspec != NULL;\n\t     psubspec = parse_plus_spec_r(last3, &last3, &hasprn3)) {\n\t\tint rc = 0;\n\t\trc = parse_chunk_r(psubspec, &snc, &snelma, &snelmt, &skv, NULL);\n\t\t/* snc = number of chunks */\n\t\tif (rc != 0) {\n\t\t\tfree(selbuf);\n\t\t\tfree(new_sel);\n\t\t\treturn (NULL);\n\t\t}\n\n\t\tfor (i = 0; i < snc; ++i) { /* for each chunk in select.. 
*/\n\n\t\t\tfor (j = 0; j < snelma; ++j) {\n\t\t\t\tif (j == 0) {\n\t\t\t\t\tsnprintf(buf, sizeof(buf), \"1:%s=%s\",\n\t\t\t\t\t\t skv[j].kv_keyw, skv[j].kv_val);\n\t\t\t\t} else {\n\t\t\t\t\tsnprintf(buf, sizeof(buf), \":%s=%s\",\n\t\t\t\t\t\t skv[j].kv_keyw, skv[j].kv_val);\n\t\t\t\t}\n\t\t\t\tif ((new_sel != NULL) && (new_sel[0] != '\\0') && (j == 0)) {\n\t\t\t\t\tif (pbs_strcat(&new_sel, &ns_malloced, \"+\") == NULL) {\n\t\t\t\t\t\tif (ns_malloced > 0)\n\t\t\t\t\t\t\tfree(new_sel);\n\t\t\t\t\t\tlog_err(errno, __func__, \"pbs_strcat failed\");\n\t\t\t\t\t\tfree(selbuf);\n\t\t\t\t\t\treturn (NULL);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (pbs_strcat(&new_sel, &ns_malloced, buf) == NULL) {\n\t\t\t\t\tif (ns_malloced > 0)\n\t\t\t\t\t\tfree(new_sel);\n\t\t\t\t\tlog_err(errno, __func__, \"pbs_strcat failed\");\n\t\t\t\t\tfree(selbuf);\n\t\t\t\t\treturn (NULL);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\tfree(selbuf);\n\treturn (new_sel);\n}\n\nenum resc_sum_action {\n\tRESC_SUM_ADD,\n\tRESC_SUM_GET_CLEAR\n};\n\n/**\n * @brief\n *\tmanage_resc_sum_values: perform some 'action' on the internal resc_sum_values\n *\tarray, whether adding a new entry, getting an entry, or clearing/initializing\n *\tan entry.\n *\n * @param[in]\taction\t- can either be 'RESC_SUM_ADD' to add an entry (resc_def,\n *\t\t\t  keyw, value) into the internal resc_sum_values array,\n *\t\t\t  or 'RESC_SUM_GET_CLEAR' to return the contents of the\n *\t\t\t  resc_sum_values array.\n * @param[in]\tresc_def- resource definition of the resource to be added to the array.\n *\t\t\t- must be non-NULL if 'action' is 'RESC_SUM_ADD'.\n * @param[in]\tkeyw\t- resource name of the resource to be added to the array.\n *\t\t\t- must be non-NULL if 'action' is 'RESC_SUM_ADD'.\n * @param[in]\tvalue\t- value of the resource to be added to the array.\n *\t\t\t  must be non-NULL if 'action' is 'RESC_SUM_ADD'.\n * @param[out]\terr_msg\t- error message buffer filled in if there's an error executing\n *\t\t\t  this function.\n * 
@param[in]\terr_msg_sz - size of 'err_msg' buffer.\n *\n * @return \tchar *\n * @retval\t<string> If 'action' is RESC_SUM_ADD, then this returns the 'keyw' to\n *\t\t\t signal success adding the <resc_def, keyw, value>.\n *\t\t\t If 'action' is RESC_SUM_GET_CLEAR, then this returns the\n *\t\t\t <res>=<value> entries in the internal resc_sum_values\n *\t\t\t array, as well as clearing/initializing entries in the\n *\t\t\t resc_sum_values array. The returned string is of the form:\n *\t\t\t\t\":<res>=<value>:<res1>=<value1>:<res2>=<value2>...\"\n * @retval\tNULL\t If an error has occurred, filling in the 'err_msg' with the error\n *\t\t\t message.\n * @par\tMT-safe: No.\n */\nstatic char *\nmanage_resc_sum_values(enum resc_sum_action action, resource_def *resc_def, char *keyw, char *value,\n\t\t       char *err_msg, int err_msg_sz)\n{\n\tstatic struct resc_sum *resc_sum_values = NULL;\n\tstatic int resc_sum_values_size = 0;\n\tstruct resc_sum *rs;\n\tint k;\n\n\tif ((action == RESC_SUM_ADD) && ((resc_def == NULL) || (keyw == NULL) || (value == NULL))) {\n\t\tlog_err(-1, __func__, \"RESC_SUM_ADD: resc_def, keyw, or value is NULL\");\n\t\treturn (NULL);\n\t}\n\n\tif (resc_sum_values_size == 0) {\n\t\tresc_sum_values = (struct resc_sum *) calloc(20,\n\t\t\t\t\t\t\t     sizeof(struct resc_sum));\n\t\tif (resc_sum_values == NULL) {\n\t\t\tlog_err(-1, __func__, \"resc_sum_values calloc error\");\n\t\t\treturn (NULL);\n\t\t}\n\t\tresc_sum_values_size = 20;\n\t}\n\n\tif (action == RESC_SUM_ADD) {\n\t\tint r;\n\t\tstruct resc_sum *tmp_rs;\n\t\tint found_match = 0;\n\t\tstruct attribute tmpatr;\n\n\t\tfound_match = 0;\n\t\tfor (k = 0; k < resc_sum_values_size; k++) {\n\t\t\trs = resc_sum_values;\n\t\t\tif (rs[k].rs_def == NULL)\n\t\t\t\tbreak;\n\n\t\t\tif (strcmp(rs[k].rs_def->rs_name, keyw) == 0) {\n\t\t\t\tr = rs[k].rs_def->rs_decode(&tmpatr, keyw, NULL, value);\n\t\t\t\tif (r == 0)\n\t\t\t\t\trs[k].rs_def->rs_set(&rs[k].rs_attr, &tmpatr, INCR);\n\t\t\t\tfound_match = 
1;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\n\t\tif (k == resc_sum_values_size) {\n\t\t\tint t;\n\t\t\t/* add a new entry */\n\n\t\t\tt = resc_sum_values_size + 5;\n\t\t\ttmp_rs = (struct resc_sum *) realloc(resc_sum_values,\n\t\t\t\t\t\t\t     t * sizeof(struct resc_sum));\n\t\t\tif (tmp_rs == NULL) {\n\t\t\t\tlog_err(-1, __func__, \"resc_sum_values realloc error\");\n\t\t\t\treturn (NULL);\n\t\t\t}\n\t\t\tresc_sum_values = tmp_rs;\n\t\t\tfor (k = resc_sum_values_size; k < t; k++) {\n\t\t\t\trs = resc_sum_values;\n\t\t\t\trs[k].rs_def = NULL;\n\t\t\t\tmemset(&rs[k].rs_attr, 0, sizeof(struct attribute));\n\t\t\t}\n\t\t\t/* k becomes the index to the new entry */\n\t\t\tk = resc_sum_values_size;\n\t\t\tresc_sum_values_size = t;\n\t\t}\n\n\t\tif (!found_match) {\n\t\t\trs = resc_sum_values;\n\t\t\trs[k].rs_def = resc_def;\n\t\t\trs[k].rs_def->rs_decode(&rs[k].rs_attr, keyw, NULL,\n\t\t\t\t\t\tvalue);\n\t\t}\n\t\treturn (keyw);\n\n\t} else if (action == RESC_SUM_GET_CLEAR) {\n\t\tsvrattrl *val = NULL;\n\t\tstatic char *buf = NULL;\n\t\tstatic int buf_size = 0;\n\n\t\tif (buf_size == 0) {\n\t\t\tbuf = (char *) malloc(LOG_BUF_SIZE);\n\n\t\t\tif (buf == NULL) {\n\t\t\t\tlog_err(-1, __func__, \"local buf malloc error\");\n\t\t\t\treturn (NULL);\n\t\t\t}\n\t\t\tbuf_size = LOG_BUF_SIZE;\n\t\t}\n\t\tbuf[0] = '\\0';\n\n\t\tfor (k = 0; k < resc_sum_values_size; k++) {\n\t\t\tint rc;\n\n\t\t\trs = resc_sum_values;\n\t\t\tif (rs[k].rs_def == NULL)\n\t\t\t\tbreak;\n\n\t\t\trc = rs[k].rs_def->rs_encode(&rs[k].rs_attr,\n\t\t\t\t\t\t     NULL, ATTR_l, rs[k].rs_def->rs_name,\n\t\t\t\t\t\t     ATR_ENCODE_CLIENT, &val);\n\t\t\tif (rc > 0) {\n\t\t\t\tif (pbs_strcat(&buf, &buf_size, \":\") == NULL)\n\t\t\t\t\treturn (NULL);\n\t\t\t\tif (pbs_strcat(&buf, &buf_size, val->al_resc) == NULL)\n\t\t\t\t\treturn (NULL);\n\t\t\t\tif (pbs_strcat(&buf, &buf_size, \"=\") == NULL)\n\t\t\t\t\treturn (NULL);\n\t\t\t\tif (pbs_strcat(&buf, &buf_size, val->al_value) == NULL)\n\t\t\t\t\treturn 
(NULL);\n\t\t\t}\n\t\t\tfree(val);\n\n\t\t\trs[k].rs_def->rs_free(&rs[k].rs_attr);\n\t\t\trs[k].rs_def = NULL;\n\t\t\tmemset(&rs[k].rs_attr, 0, sizeof(struct attribute));\n\t\t}\n\t\treturn (buf);\n\t}\n\treturn (NULL);\n}\n\n/*\n * @brief\n *\tInitialize the relnodes_input_vnodelist_t structure used as argument to\n *\tpbs_release_nodes_given_nodelist() function.\n *\n * @param[out]\tr_input\t- structure to initialize\n * @return none\n */\nvoid\nrelnodes_input_vnodelist_init(relnodes_input_vnodelist_t *r_input)\n{\n\tr_input->vnodelist = NULL;\n\tr_input->deallocated_nodes_orig = NULL;\n\tr_input->p_new_deallocated_execvnode = NULL;\n}\n\n/*\n * @brief\n *\tRelease node resources from a job whose node/vnode are appearing in\n *\tspecified nodelist.\n *\n * @param[in]\t\tr_input\t- contains various input including the job id\n * @param[in,out]\tr_input2 - contains various input and output parameters\n *\t\t\t\t   including the list of nodes/vnodes to release\n *\t\t\t\t   resources from, as well\n *\t\t\t\t   the resulting new values to job's exec_vnode,\n *\t\t\t\t   exec_host, exec_host2, and schedselect.\n * @param[out]\t\terr_msg - gets filled in with the error message if this\n *\t\t\t\t  function returns a non-zero value.\n * @param[in]\t\terr_sz - size of the 'err_msg' buffer.\n * @return int\n * @retval 0 - success\n * @retval 1 - fail with 'err_msg' filled in with message.\n */\nint\npbs_release_nodes_given_nodelist(relnodes_input_t *r_input, relnodes_input_vnodelist_t *r_input2, char *err_msg, int err_msg_sz)\n{\n\tchar *new_exec_vnode = NULL;\n\tchar *new_exec_host = NULL;\n\tchar *new_exec_host2 = NULL;\n\tchar *new_select = NULL;\n\tchar *chunk_buf = NULL;\n\tint chunk_buf_sz = 0;\n\tchar *chunk = NULL;\n\tchar *chunk1 = NULL;\n\tchar *chunk2 = NULL;\n\tchar *chunk3 = NULL;\n\tchar *last = NULL;\n\tchar *last1 = NULL;\n\tchar *last2 = NULL;\n\tchar *last3 = NULL;\n\tint hasprn = 0;\n\tint hasprn1 = 0;\n\tint hasprn2 = 0;\n\tint hasprn3 = 0;\n\tint 
entry = 0;\n\tint f_entry = 0;\n\tint h_entry = 0;\n\tint sel_entry = 0;\n\tint j;\n\tint nelem;\n\tchar *noden;\n\tstruct key_value_pair *pkvp;\n\tchar buf[LOG_BUF_SIZE] = {0};\n\tstruct pbsnode *pnode = NULL;\n\tint rc = 1;\n\tint ns_malloced = 0;\n\tchar *buf_sum = NULL;\n\tint paren = 0;\n\tint found_paren = 0;\n\tint found_paren_dealloc = 0;\n\tresource_def *resc_def = NULL;\n\tchar *deallocated_execvnode = NULL;\n\tint deallocated_execvnode_sz = 0;\n\tchar *extra_res = NULL;\n\tresource *prs;\n\tresource_def *prdefvntype;\n\tchar *parent_mom;\n\tchar prev_noden[PBS_MAXNODENAME + 1];\n\tchar *res_in_exec_vnode = NULL;\n\tchar *ms_fullhost = NULL;\n\tint ms_port = 0;\n\tchar *exec_vnode = NULL;\n\tchar *exec_host = NULL;\n\tchar *exec_host2 = NULL;\n\tchar *sched_select = NULL;\n#ifdef PBS_MOM\n\tmomvmap_t *vn_vmap = NULL;\n#endif\n\n\tif ((r_input == NULL) || (r_input->jobid == NULL) || (r_input->execvnode == NULL) || (r_input->exechost == NULL) || (r_input->exechost2 == NULL) || (r_input->schedselect == NULL) || (err_msg == NULL) || (err_msg_sz <= 0)) {\n\n\t\tlog_err(errno, __func__, \"required parameter is null\");\n\t\treturn (1);\n\t}\n\n\terr_msg[0] = '\\0';\n\n\texec_vnode = strdup(r_input->execvnode);\n\tif (exec_vnode == NULL) {\n\t\tlog_err(errno, __func__, \"strdup error\");\n\t\tgoto release_nodeslist_exit;\n\t}\n\n\texec_host = strdup(r_input->exechost);\n\tif (exec_host == NULL) {\n\t\tlog_err(errno, __func__, \"strdup error\");\n\t\tgoto release_nodeslist_exit;\n\t}\n\n\texec_host2 = strdup(r_input->exechost2);\n\tif (exec_host2 == NULL) {\n\t\tlog_err(errno, __func__, \"strdup error\");\n\t\tgoto release_nodeslist_exit;\n\t}\n\n\tsched_select = expand_select_spec(r_input->schedselect);\n\tif (sched_select == NULL) {\n\t\tlog_err(errno, __func__, \"expand_select_spec error\");\n\t\tgoto release_nodeslist_exit;\n\t}\n\n\tms_fullhost = find_ms_full_host_and_port(exec_host, exec_host2, &ms_port);\n\tif (ms_fullhost == NULL) {\n\t\tlog_err(-1, __func__, \"can't 
determine primary execution host and port\");\n\t\tgoto release_nodeslist_exit;\n\t}\n\n\tres_in_exec_vnode = resources_seen(exec_vnode);\n\n\tnew_exec_vnode = (char *) calloc(1, strlen(exec_vnode) + 1);\n\tif (new_exec_vnode == NULL) {\n\t\tlog_err(-1, __func__, \"new_exec_vnode calloc error\");\n\t\tgoto release_nodeslist_exit;\n\t}\n\tnew_exec_vnode[0] = '\\0';\n\n\tchunk_buf_sz = strlen(exec_vnode) + 1;\n\tchunk_buf = (char *) calloc(1, chunk_buf_sz);\n\tif (chunk_buf == NULL) {\n\t\tlog_err(-1, __func__, \"chunk_buf calloc error\");\n\t\tgoto release_nodeslist_exit;\n\t}\n\n\tdeallocated_execvnode_sz = strlen(exec_vnode) + 1;\n\tdeallocated_execvnode = (char *) calloc(1, deallocated_execvnode_sz);\n\tif (deallocated_execvnode == NULL) {\n\t\tlog_err(-1, __func__, \"deallocated_execvnode calloc error\");\n\t\tgoto release_nodeslist_exit;\n\t}\n\n\tif (exec_host != NULL) {\n\t\tnew_exec_host = (char *) calloc(1, strlen(exec_host) + 1);\n\t\tif (new_exec_host == NULL) {\n\t\t\tlog_err(-1, __func__, \"new_exec_host calloc error\");\n\t\t\tgoto release_nodeslist_exit;\n\t\t}\n\t\tnew_exec_host[0] = '\\0';\n\t}\n\n\tif (exec_host2 != NULL) {\n\t\tnew_exec_host2 = (char *) calloc(1, strlen(exec_host2) + 1);\n\t\tif (new_exec_host2 == NULL) {\n\t\t\tlog_err(-1, __func__, \"new_exec_host2 calloc error\");\n\t\t\tgoto release_nodeslist_exit;\n\t\t}\n\t\tnew_exec_host2[0] = '\\0';\n\t}\n\n\tprdefvntype = &svr_resc_def[RESC_VNTYPE];\n\t/* There's a 1:1:1 mapping among exec_vnode parenthesized\n\t * entries, exec_host, and exec_host2.\n\t */\n\tentry = 0;     /* exec_vnode entries */\n\th_entry = 0;   /* exec_host* entries */\n\tsel_entry = 0; /* select and schedselect entries */\n\tf_entry = 0;   /* number of freed sister nodes */\n\tparen = 0;\n\tprev_noden[0] = '\\0';\n\tparent_mom = NULL;\n\tfor (chunk = parse_plus_spec_r(exec_vnode, &last, &hasprn),\n\t    chunk1 = parse_plus_spec_r(exec_host, &last1, &hasprn1),\n\t    chunk2 = parse_plus_spec_r(exec_host2, &last2, 
&hasprn2),\n\t    chunk3 = parse_plus_spec_r(sched_select, &last3, &hasprn3);\n\t     (chunk != NULL) && (chunk1 != NULL) && (chunk2 != NULL) && (chunk3 != NULL);\n\t     chunk = parse_plus_spec_r(last, &last, &hasprn)) {\n\n\t\tparen += hasprn;\n\t\tstrncpy(chunk_buf, chunk, chunk_buf_sz - 1);\n\t\tif (parse_node_resc(chunk, &noden, &nelem, &pkvp) == 0) {\n\n#ifdef PBS_MOM\n\t\t\t/* see if previous entry already matches this */\n\t\t\tif ((strcmp(prev_noden, noden) != 0)) {\n\t\t\t\tvn_vmap = find_vmap_entry(noden);\n\t\t\t\tif (vn_vmap == NULL) { /* should not happen */\n\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"no vmap entry for %s\", noden);\n\t\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\t\tgoto release_nodeslist_exit;\n\t\t\t\t}\n\t\t\t\tif (vn_vmap->mvm_hostn != NULL)\n\t\t\t\t\tparent_mom = vn_vmap->mvm_hostn;\n\t\t\t\telse\n\t\t\t\t\tparent_mom = vn_vmap->mvm_name;\n\t\t\t}\n\n\t\t\tif (parent_mom == NULL) { /* should not happen */\n\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"no parent_mom for %s\", noden);\n\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\tgoto release_nodeslist_exit;\n\t\t\t}\n\n\t\t\tstrncpy(prev_noden, noden, PBS_MAXNODENAME);\n#else\n\t\t\tif (r_input->vnodes_data != NULL) {\n\t\t\t\t/* see if previous entry already matches this */\n\t\t\t\tif ((strcmp(prev_noden, noden) != 0)) {\n\t\t\t\t\tchar key_buf[BUF_SIZE];\n\t\t\t\t\tsvrattrl *svrattrl_e;\n\n\t\t\t\t\tsnprintf(key_buf, BUF_SIZE, \"%s.resources_assigned\", noden);\n\t\t\t\t\tif ((svrattrl_e = find_svrattrl_list_entry(r_input->vnodes_data, key_buf, \"host,string\")) != NULL) {\n\t\t\t\t\t\tparent_mom = svrattrl_e->al_value;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif (parent_mom == NULL) { /* should not happen */\n\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"no parent_mom for %s\", noden);\n\t\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\t\tgoto release_nodeslist_exit;\n\t\t\t\t}\n\n\t\t\t\tstrncpy(prev_noden, noden, 
PBS_MAXNODENAME);\n\t\t\t} else {\n\t\t\t\t/* see if previous entry already matches this */\n\n\t\t\t\tif ((pnode == NULL) ||\n\t\t\t\t    (strcmp(pnode->nd_name, noden) != 0)) {\n\t\t\t\t\tpnode = find_nodebyname(noden);\n\t\t\t\t}\n\n\t\t\t\tif (pnode == NULL) { /* should not happen */\n\t\t\t\t\tif ((err_msg != NULL) && (err_msg_sz > 0)) {\n\t\t\t\t\t\tsnprintf(err_msg, err_msg_sz, \"no node entry for %s\", noden);\n\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, r_input->jobid, err_msg);\n\t\t\t\t\t}\n\t\t\t\t\tgoto release_nodeslist_exit;\n\t\t\t\t}\n\t\t\t}\n#endif\n\n\t\t\tif (is_parent_host_of_node(pnode, parent_mom, ms_fullhost, ms_port) &&\n\t\t\t    (r_input2->vnodelist != NULL) &&\n\t\t\t    in_string_list(noden, '+', r_input2->vnodelist)) {\n\t\t\t\tif ((err_msg != NULL) && (err_msg_sz > 0)) {\n\t\t\t\t\tsnprintf(err_msg, err_msg_sz,\n\t\t\t\t\t\t \"Can't free '%s' since it's on a primary execution host\", noden);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, r_input->jobid, err_msg);\n\t\t\t\t}\n\t\t\t\tgoto release_nodeslist_exit;\n\t\t\t}\n\n\t\t\tif ((r_input2->vnodelist != NULL) &&\n\t\t\t    in_string_list(noden, '+', r_input2->vnodelist) && (pnode != NULL) &&\n\t\t\t    (is_nattr_set(pnode, ND_ATR_ResourceAvail) != 0)) {\n\t\t\t\tfor (prs = (resource *) GET_NEXT(get_nattr_list(pnode, ND_ATR_ResourceAvail)); prs != NULL; prs = (resource *) GET_NEXT(prs->rs_link)) {\n\t\t\t\t\tif ((prdefvntype != NULL) &&\n\t\t\t\t\t    (prs->rs_defin == prdefvntype) &&\n\t\t\t\t\t    (is_attr_set(&prs->rs_value)) != 0) {\n\t\t\t\t\t\tstruct array_strings *as;\n\t\t\t\t\t\tint l;\n\t\t\t\t\t\tas = prs->rs_value.at_val.at_arst;\n\t\t\t\t\t\tfor (l = 0; l < as->as_usedptr; l++) {\n\t\t\t\t\t\t\tif (strncmp(as->as_string[l], \"cray_\", 5) == 0) {\n\t\t\t\t\t\t\t\tif ((err_msg != NULL) && (err_msg_sz > 0)) {\n\t\t\t\t\t\t\t\t\tsnprintf(err_msg, err_msg_sz, \"not currently supported on Cray X* series nodes: %s\", 
noden);\n\t\t\t\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, r_input->jobid, err_msg);\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t\tgoto release_nodeslist_exit;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (is_parent_host_of_node(pnode, parent_mom, ms_fullhost, ms_port) ||\n\t\t\t    ((r_input2->vnodelist != NULL) && !in_string_list(noden, '+', r_input2->vnodelist))) {\n\n\t\t\t\tif (entry > 0) /* there's something put in previously */\n\t\t\t\t\tstrcat(new_exec_vnode, \"+\");\n\n\t\t\t\t/* at the beginning of chunk for current host */\n\t\t\t\tif (!found_paren) {\n\t\t\t\t\tstrcat(new_exec_vnode, \"(\");\n\t\t\t\t\tfound_paren = 1;\n\n\t\t\t\t\tif (h_entry > 0) {\n\t\t\t\t\t\t/* there's already previous exec_host entry */\n\t\t\t\t\t\tif (new_exec_host != NULL)\n\t\t\t\t\t\t\tstrcat(new_exec_host, \"+\");\n\t\t\t\t\t\tif (new_exec_host2 != NULL)\n\t\t\t\t\t\t\tstrcat(new_exec_host2, \"+\");\n\t\t\t\t\t}\n\n\t\t\t\t\tif (new_exec_host != NULL)\n\t\t\t\t\t\tstrcat(new_exec_host, chunk1);\n\t\t\t\t\tif (new_exec_host2 != NULL)\n\t\t\t\t\t\tstrcat(new_exec_host2, chunk2);\n\t\t\t\t\th_entry++;\n\t\t\t\t}\n\t\t\t\tstrcat(new_exec_vnode, noden);\n\t\t\t\tentry++;\n\n\t\t\t\tfor (j = 0; j < nelem; ++j) 
{\n\n\t\t\t\t\tresc_def = find_resc_def(svr_resc_def, pkvp[j].kv_keyw);\n\t\t\t\t\tif (resc_def == NULL) {\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\n\t\t\t\t\tif (manage_resc_sum_values(RESC_SUM_ADD, resc_def,\n\t\t\t\t\t\t\t\t   pkvp[j].kv_keyw, pkvp[j].kv_val, err_msg, err_msg_sz) == NULL) {\n\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\t\t\t\t  __func__, err_msg);\n\t\t\t\t\t\tgoto release_nodeslist_exit;\n\t\t\t\t\t}\n\n\t\t\t\t\tsnprintf(buf, sizeof(buf),\n\t\t\t\t\t\t \":%s=%s\", pkvp[j].kv_keyw, pkvp[j].kv_val);\n\t\t\t\t\tstrcat(new_exec_vnode, buf);\n\t\t\t\t}\n\n\t\t\t\tif (paren == 0) { /* have all chunks for current host */\n\n\t\t\t\t\tif (found_paren) {\n\t\t\t\t\t\tstrcat(new_exec_vnode, \")\");\n\t\t\t\t\t\tfound_paren = 0;\n\t\t\t\t\t}\n\n\t\t\t\t\tif (found_paren_dealloc) {\n\t\t\t\t\t\tstrcat(deallocated_execvnode, \")\");\n\t\t\t\t\t\tfound_paren_dealloc = 0;\n\t\t\t\t\t}\n\n\t\t\t\t\tbuf_sum = manage_resc_sum_values(RESC_SUM_GET_CLEAR,\n\t\t\t\t\t\t\t\t\t NULL, NULL, NULL, err_msg, err_msg_sz);\n\n\t\t\t\t\tif (buf_sum == NULL) {\n\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\t\t\t\t  __func__, err_msg);\n\t\t\t\t\t\tgoto release_nodeslist_exit;\n\t\t\t\t\t}\n\n\t\t\t\t\tif (buf_sum[0] != '\\0') {\n\t\t\t\t\t\textra_res = return_missing_resources(chunk3,\n\t\t\t\t\t\t\t\t\t\t     res_in_exec_vnode);\n\n\t\t\t\t\t\tif (sel_entry > 0) {\n\t\t\t\t\t\t\t/* there's already previous select/schedselect entry */\n\t\t\t\t\t\t\tif (pbs_strcat(\n\t\t\t\t\t\t\t\t    &new_select,\n\t\t\t\t\t\t\t\t    &ns_malloced,\n\t\t\t\t\t\t\t\t    \"+\") == NULL) {\n\t\t\t\t\t\t\t\tlog_err(-1, __func__, \"pbs_strcat failed\");\n\t\t\t\t\t\t\t\tgoto release_nodeslist_exit;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (pbs_strcat(&new_select, &ns_malloced, \"1\") == NULL) {\n\t\t\t\t\t\t\tlog_err(-1, __func__, \"pbs_strcat failed\");\n\t\t\t\t\t\t\tgoto release_nodeslist_exit;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif 
(pbs_strcat(&new_select, &ns_malloced, buf_sum) == NULL) {\n\t\t\t\t\t\t\tlog_err(-1, __func__, \"pbs_strcat failed\");\n\t\t\t\t\t\t\tgoto release_nodeslist_exit;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif ((extra_res != NULL) && (extra_res[0] != '\\0')) {\n\t\t\t\t\t\t\tif (pbs_strcat(&new_select, &ns_malloced, \":\") == NULL) {\n\t\t\t\t\t\t\t\tlog_err(-1, __func__, \"pbs_strcat failed\");\n\t\t\t\t\t\t\t\tgoto release_nodeslist_exit;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif (pbs_strcat(&new_select, &ns_malloced, extra_res) == NULL) {\n\t\t\t\t\t\t\t\tlog_err(-1, __func__, \"pbs_strcat failed\");\n\t\t\t\t\t\t\t\tgoto release_nodeslist_exit;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\tsel_entry++;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif (!is_parent_host_of_node(pnode, parent_mom, ms_fullhost, ms_port)) {\n\t\t\t\t\tif (f_entry > 0) { /* there's something put in previously */\n\t\t\t\t\t\tstrcat(deallocated_execvnode, \"+\");\n\t\t\t\t\t}\n\n\t\t\t\t\t/* at the beginning of chunk for current host */\n\t\t\t\t\tif (!found_paren_dealloc) {\n\t\t\t\t\t\tstrcat(deallocated_execvnode, \"(\");\n\t\t\t\t\t\tfound_paren_dealloc = 1;\n\t\t\t\t\t}\n\t\t\t\t\tstrcat(deallocated_execvnode, chunk_buf);\n\t\t\t\t\tf_entry++;\n\n\t\t\t\t\tif (paren == 0) { /* have all chunks for current host */\n\n\t\t\t\t\t\tif (found_paren) {\n\t\t\t\t\t\t\tstrcat(new_exec_vnode, \")\");\n\t\t\t\t\t\t\tfound_paren = 0;\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tif (found_paren_dealloc) {\n\t\t\t\t\t\t\tstrcat(deallocated_execvnode, \")\");\n\t\t\t\t\t\t\tfound_paren_dealloc = 0;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif (hasprn < 0) {\n\t\t\t\t\t/* matched ')' in chunk, so need to balance the parenthesis */\n\t\t\t\t\tif (found_paren) 
{\n\t\t\t\t\t\tstrcat(new_exec_vnode, \")\");\n\t\t\t\t\t\tfound_paren = 0;\n\t\t\t\t\t}\n\t\t\t\t\tif (found_paren_dealloc) {\n\t\t\t\t\t\tstrcat(deallocated_execvnode, \")\");\n\t\t\t\t\t\tfound_paren_dealloc = 0;\n\t\t\t\t\t}\n\n\t\t\t\t\tbuf_sum = manage_resc_sum_values(RESC_SUM_GET_CLEAR,\n\t\t\t\t\t\t\t\t\t NULL, NULL, NULL, err_msg, err_msg_sz);\n\n\t\t\t\t\tif (buf_sum == NULL) {\n\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG,\n\t\t\t\t\t\t\t  __func__, err_msg);\n\t\t\t\t\t\tgoto release_nodeslist_exit;\n\t\t\t\t\t}\n\n\t\t\t\t\tif (buf_sum[0] != '\\0') {\n\t\t\t\t\t\textra_res = return_missing_resources(chunk3,\n\t\t\t\t\t\t\t\t\t\t     res_in_exec_vnode);\n\n\t\t\t\t\t\tif (sel_entry > 0) {\n\t\t\t\t\t\t\t/* there's already previous select/schedselect entry */\n\t\t\t\t\t\t\tif (pbs_strcat(&new_select, &ns_malloced, \"+\") == NULL) {\n\t\t\t\t\t\t\t\tlog_err(-1, __func__, \"pbs_strcat failed\");\n\t\t\t\t\t\t\t\tgoto release_nodeslist_exit;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (pbs_strcat(&new_select, &ns_malloced, \"1\") == NULL) {\n\t\t\t\t\t\t\tlog_err(-1, __func__, \"pbs_strcat failed\");\n\t\t\t\t\t\t\tgoto release_nodeslist_exit;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (pbs_strcat(&new_select, &ns_malloced, buf_sum) == NULL) {\n\t\t\t\t\t\t\tlog_err(-1, __func__, \"pbs_strcat failed\");\n\t\t\t\t\t\t\tgoto release_nodeslist_exit;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif ((extra_res != NULL) && (extra_res[0] != '\\0')) {\n\t\t\t\t\t\t\tif (pbs_strcat(&new_select, &ns_malloced, \":\") == NULL) {\n\t\t\t\t\t\t\t\tlog_err(-1, __func__, \"pbs_strcat failed\");\n\t\t\t\t\t\t\t\tgoto release_nodeslist_exit;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif (pbs_strcat(&new_select, &ns_malloced, extra_res) == NULL) {\n\t\t\t\t\t\t\t\tlog_err(-1, __func__, \"pbs_strcat failed\");\n\t\t\t\t\t\t\t\tgoto release_nodeslist_exit;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t\tsel_entry++;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\tlog_err(-1, __func__, 
\"parse_node_resc error\");\n\t\t\tgoto release_nodeslist_exit;\n\t\t}\n\n\t\tif (paren == 0) {\n\t\t\tchunk1 = parse_plus_spec_r(last1, &last1, &hasprn1);\n\t\t\tchunk2 = parse_plus_spec_r(last2, &last2, &hasprn2);\n\t\t\tchunk3 = parse_plus_spec_r(last3, &last3, &hasprn3);\n\t\t}\n\t}\n\tentry = strlen(new_exec_vnode) - 1;\n\tif ((entry >= 0) && (new_exec_vnode[entry] == '+'))\n\t\tnew_exec_vnode[entry] = '\\0';\n\n\tif (strcmp(new_exec_vnode, r_input->execvnode) == 0) {\n\t\t/* no change, don't bother setting the new_* return values */\n\t\tgoto release_nodeslist_exit;\n\t}\n\n\tif (new_exec_host != NULL) {\n\t\tentry = strlen(new_exec_host) - 1;\n\t\tif ((entry >= 0) && (new_exec_host[entry] == '+'))\n\t\t\tnew_exec_host[entry] = '\\0';\n\t}\n\n\tif (new_exec_host2 != NULL) {\n\t\tentry = strlen(new_exec_host2) - 1;\n\t\tif ((entry >= 0) && (new_exec_host2[entry] == '+'))\n\t\t\tnew_exec_host2[entry] = '\\0';\n\t}\n\n\tif (new_select != NULL) {\n\t\tentry = strlen(new_select) - 1;\n\t\tif ((entry >= 0) && (new_select[entry] == '+'))\n\t\t\tnew_select[entry] = '\\0';\n\t}\n\n\tentry = strlen(deallocated_execvnode) - 1;\n\tif ((entry >= 0) && (deallocated_execvnode[entry] == '+'))\n\t\tdeallocated_execvnode[entry] = '\\0';\n\n\tif (deallocated_execvnode[0] != '\\0') {\n\t\tif ((r_input2->deallocated_nodes_orig != NULL) && (r_input2->deallocated_nodes_orig[0] != '\\0')) {\n\t\t\tif (pbs_strcat(&deallocated_execvnode,\n\t\t\t\t       &deallocated_execvnode_sz, \"+\") == NULL) {\n\t\t\t\tlog_err(-1, __func__,\n\t\t\t\t\t\"pbs_strcat deallocated_execvnode failed\");\n\t\t\t\tgoto release_nodeslist_exit;\n\t\t\t}\n\t\t\tif (pbs_strcat(&deallocated_execvnode, &deallocated_execvnode_sz,\n\t\t\t\t       r_input2->deallocated_nodes_orig) == NULL) {\n\t\t\t\tlog_err(-1, __func__,\n\t\t\t\t\t\"pbs_strcat deallocated_execvnode failed\");\n\t\t\t\tgoto release_nodeslist_exit;\n\t\t\t}\n\t\t}\n\t}\n\n\t/* output message about nodes to be freed but not part of the job */\n\tif ((r_input2->vnodelist != NULL) && (err_msg != 
NULL) &&\n\t    (err_msg_sz > 0)) {\n\t\tchar *tmpbuf;\n\t\tchar *tmpbuf2;\n\t\tchar *pc = NULL;\n\t\tchar *pc1 = NULL;\n\t\tchar *save_ptr; /* posn for strtok_r() */\n\n\t\ttmpbuf = strdup(r_input2->vnodelist);\n\t\t/* will contain nodes that are in 'vnodelist' but not in deallocated_execvnode */\n\t\ttmpbuf2 = strdup(r_input2->vnodelist);\n\t\tif ((tmpbuf != NULL) && (tmpbuf2 != NULL)) {\n\n\t\t\ttmpbuf2[0] = '\\0';\n\n\t\t\tpc = strtok_r(tmpbuf, \"+\", &save_ptr);\n\t\t\twhile (pc != NULL) {\n\t\t\t\t/* trying to match '(<vnode_name>:'\n\t\t\t\t *  or '+<vnode_name>:'\n\t\t\t\t */\n\t\t\t\tsnprintf(chunk_buf, chunk_buf_sz, \"(%s:\", pc);\n\t\t\t\tpc1 = strstr(deallocated_execvnode, chunk_buf);\n\t\t\t\tif (pc1 == NULL) {\n\t\t\t\t\tsnprintf(chunk_buf, chunk_buf_sz, \"+%s:\", pc);\n\t\t\t\t\tpc1 = strstr(deallocated_execvnode, chunk_buf);\n\t\t\t\t}\n\t\t\t\tif (pc1 == NULL) {\n\t\t\t\t\tif (tmpbuf2[0] != '\\0')\n\t\t\t\t\t\tstrcat(tmpbuf2, \" \");\n\t\t\t\t\tstrcat(tmpbuf2, pc);\n\t\t\t\t}\n\t\t\t\tpc = strtok_r(NULL, \"+\", &save_ptr);\n\t\t\t}\n\n\t\t\tif (tmpbuf2[0] != '\\0') {\n\t\t\t\tsnprintf(err_msg, err_msg_sz,\n\t\t\t\t\t \"node(s) requested to be released not part of the job: %s\", tmpbuf2);\n\t\t\t\tfree(tmpbuf);\n\t\t\t\tfree(tmpbuf2);\n\t\t\t\tgoto release_nodeslist_exit;\n\t\t\t}\n\t\t}\n\t\tfree(tmpbuf);\n\t\tfree(tmpbuf2);\n\t}\n\n\tif (new_exec_vnode[0] != '\\0') {\n\n\t\tif (strcmp(r_input->execvnode, new_exec_vnode) == 0) {\n\t\t\t/* no change */\n\t\t\tif ((err_msg != NULL) && (err_msg_sz > 0)) {\n\t\t\t\tsnprintf(err_msg, err_msg_sz, \"node(s) requested to be released not part of the job: %s\", r_input2->vnodelist ? 
r_input2->vnodelist : \"\");\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, r_input->jobid, err_msg);\n\t\t\t}\n\t\t\tgoto release_nodeslist_exit;\n\t\t}\n\t\tif (r_input->p_new_exec_vnode != NULL)\n\t\t\t*(r_input->p_new_exec_vnode) = new_exec_vnode;\n\t}\n\n\tif (deallocated_execvnode[0] != '\\0') {\n\t\tif (r_input2->p_new_deallocated_execvnode != NULL)\n\t\t\t*(r_input2->p_new_deallocated_execvnode) = deallocated_execvnode;\n\t}\n\n\tif ((new_exec_host != NULL) && (new_exec_host[0] != '\\0')) {\n\n\t\tif (r_input->p_new_exec_host[0] != NULL)\n\t\t\t*(r_input->p_new_exec_host[0]) = new_exec_host;\n\t}\n\n\tif ((new_exec_host2 != NULL) && (new_exec_host2[0] != '\\0')) {\n\n\t\tif (r_input->p_new_exec_host[1] != NULL)\n\t\t\t*(r_input->p_new_exec_host[1]) = new_exec_host2;\n\t}\n\n\tif ((new_select != NULL) && (new_select[0] != '\\0')) {\n\t\tif (r_input->p_new_schedselect != NULL)\n\t\t\t*(r_input->p_new_schedselect) = new_select;\n\t}\n\trc = 0;\n\nrelease_nodeslist_exit:\n\tfree(ms_fullhost);\n\tfree(res_in_exec_vnode);\n\tfree(chunk_buf);\n\tfree(exec_vnode);\n\tfree(exec_host);\n\tfree(exec_host2);\n\tfree(sched_select);\n\tif ((rc != 0) || (new_exec_vnode == NULL) || (strcmp(new_exec_vnode, r_input->execvnode) == 0)) {\n\t\tfree(new_exec_vnode);\n\t\tfree(new_exec_host);\n\t\tfree(new_exec_host2);\n\t\tfree(new_select);\n\t\tfree(deallocated_execvnode);\n\t}\n\t/* clear the summation buffer */\n\t(void) manage_resc_sum_values(RESC_SUM_GET_CLEAR, NULL, NULL, NULL, buf, sizeof(buf));\n\n\treturn (rc);\n}\n\n/**\n *\n * @brief\n *\t Print/log all the entries in the list of nodes for a job.\n *\n * @param[in]\theader_str\t- header string for logging\n * @param[in]\tnode_list\t- the PBS node list\n * @param[in]\tlogtype\t\t- log event type to use for the messages\n *\n * @return none\n */\nvoid\nreliable_job_node_print(char *header_str, pbs_list_head *node_list, int logtype)\n{\n\treliable_job_node *rjn;\n\n\tif ((header_str == NULL) || (node_list == NULL))\n\t\treturn;\n\n\tfor (rjn = (reliable_job_node *) GET_NEXT(*node_list); rjn != NULL;\n\t     rjn = 
(reliable_job_node *) GET_NEXT(rjn->rjn_link)) {\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"%s: node %s\", header_str, rjn->rjn_host);\n\t\tlog_event(logtype, PBS_EVENTCLASS_NODE,\n\t\t\t  LOG_INFO, __func__, log_buffer);\n\t}\n}\n\n/**\n *\n * @brief\n *\t Free up all the entries in the list of nodes for a job.\n *\n * @param[in]\tnode_list\t- the PBS node list\n *\n * @return none\n */\nvoid\nreliable_job_node_free(pbs_list_head *node_list)\n{\n\treliable_job_node *rjn;\n\n\tif (node_list == NULL)\n\t\treturn;\n\n\tfor (rjn = (reliable_job_node *) GET_NEXT(*node_list); rjn != NULL; rjn = (reliable_job_node *) GET_NEXT(*node_list)) {\n\t\tdelete_link(&rjn->rjn_link);\n\t\tfree(rjn);\n\t}\n}\n\n/**\n *\n * @brief\n *\t Find an entry from the list of nodes for a job.\n *\n * @param[in]\tnode_list\t- the PBS node list\n * @param[in]\tnname\t\t- node hostname to search for.\n *\n * @return reliable_job_node *\n *\n * @retval <reliable_job_node entry>\t- if one found.\n * @retval NULL  \t\t\t- if no entry found.\n *\n */\nreliable_job_node *\nreliable_job_node_find(pbs_list_head *node_list, char *nname)\n{\n\treliable_job_node *rjn = NULL;\n\n\tif ((node_list == NULL) || (nname == NULL))\n\t\treturn (NULL);\n\n\tfor (rjn = (reliable_job_node *) GET_NEXT(*node_list); rjn != NULL; rjn = (reliable_job_node *) GET_NEXT(rjn->rjn_link)) {\n\t\tif (strcmp(rjn->rjn_host, nname) == 0) {\n\t\t\treturn (rjn);\n\t\t}\n\t}\n\treturn (NULL);\n}\n\n/**\n *\n * @brief\n * \tAdd a unique entry to the list of mom nodes for a job.\n *\n * @param[in]\tnode_list\t- the PBS node list\n * @param[in]\tnname\t\t- node hostname\n *\n * @return int\n * @retval 0\t- success\n * @retval -1\t- error encountered\n *\n */\nint\nreliable_job_node_add(pbs_list_head *node_list, char *nname)\n{\n\treliable_job_node *rjn = NULL;\n\n\tif ((node_list == NULL) || (nname == NULL) || (nname[0] == '\\0')) {\n\t\tlog_err(-1, __func__, \"unexpected input\");\n\t\treturn (-1);\n\t}\n\n\tif 
(reliable_job_node_find(node_list, nname) != NULL) {\n\t\treturn (0);\n\t}\n\n\trjn = (reliable_job_node *) malloc(sizeof(reliable_job_node));\n\tif (rjn == NULL) {\n\t\tlog_err(errno, __func__, msg_err_malloc);\n\t\treturn (-1);\n\t}\n\tCLEAR_LINK(rjn->rjn_link);\n\n\tsnprintf(rjn->rjn_host, sizeof(rjn->rjn_host), \"%s\", nname);\n\n\trjn->prologue_hook_success = 0;\n\n\tappend_link(node_list, &rjn->rjn_link, rjn);\n\n\treturn (0);\n}\n\n/**\n *\n * @brief\n * \tDelete an entry from the list of nodes for a job.\n *\n * @param[in]\tnode_list\t- the PBS node list\n * @param[in]\tnname\t\t- node hostname to delete\n *\n * @return none\n */\nvoid\nreliable_job_node_delete(pbs_list_head *node_list, char *nname)\n{\n\treliable_job_node *rjn;\n\n\tif ((node_list == NULL) || (nname == NULL)) {\n\t\treturn;\n\t}\n\n\tfor (rjn = (reliable_job_node *) GET_NEXT(*node_list); rjn != NULL; rjn = (reliable_job_node *) GET_NEXT(rjn->rjn_link)) {\n\t\tif (strcmp(rjn->rjn_host, nname) == 0) {\n\t\t\tdelete_link(&rjn->rjn_link);\n\t\t\tfree(rjn);\n\t\t\treturn;\n\t\t}\n\t}\n}\n\n/**\n *\n * @brief\n *\tFind an entry from the list of nodes for a job\n *\tnamed 'nname', and mark this node host as having\n *\tsuccessfully executed the execjob_prologue hook,\n *\tresulting in a hook event accept.\n *\tIf no existing node host was matched, then add one.\n *\n * @param[in]\tnode_list\t- the PBS node list\n * @param[in]\tnname\t\t- node hostname to search for.\n *\n * @return reliable_job_node *\n *\n * @retval <reliable_job_node entry>\t- the updated/added node entry\n * @retval NULL  \t\t\t- if an error occurred.\n *\n */\nreliable_job_node *\nreliable_job_node_set_prologue_hook_success(pbs_list_head *node_list, char *nname)\n{\n\treliable_job_node *rjn = NULL;\n\n\tif ((node_list == NULL) || (nname == NULL))\n\t\treturn (NULL);\n\n\tfor (rjn = (reliable_job_node *) GET_NEXT(*node_list); rjn != NULL; rjn = (reliable_job_node *) GET_NEXT(rjn->rjn_link)) {\n\t\tif (strcmp(rjn->rjn_host, nname) == 
0) {\n\t\t\trjn->prologue_hook_success = 1;\n\t\t\treturn (rjn);\n\t\t}\n\t}\n\t/* no entry matched so add one */\n\trjn = (reliable_job_node *) malloc(sizeof(reliable_job_node));\n\tif (rjn == NULL) {\n\t\tlog_err(errno, __func__, msg_err_malloc);\n\t\treturn (NULL);\n\t}\n\tCLEAR_LINK(rjn->rjn_link);\n\n\tsnprintf(rjn->rjn_host, sizeof(rjn->rjn_host), \"%s\", nname);\n\n\trjn->prologue_hook_success = 1;\n\n\tappend_link(node_list, &rjn->rjn_link, rjn);\n\n\treturn (rjn);\n}\n\n/* Functions and structure in support of releasing node resources to satisfy\n * a new select spec.\n */\n\ntypedef struct resc_limit_entry {\n\tpbs_list_link rl_link;\n\tresc_limit_t *resc;\n} rl_entry;\n\n#if !(defined(PBS_MOM) || defined(PBS_PYTHON))\n/**\n * @brief\n * compare lexicographically the names of resources in the resource list contained in\n * two resc_limit_t structures\n *\n * @param[in]\tleft - the left resource limit.\n * @param[in]\tright - the right resource limit.\n *\n * @return int\n * @retval -1\t- left < right\n * @retval 0\t- left = right\n * @retval 1\t- left > right\n * @retval -2\t- error\n */\nstatic int\nresc_limit_list_cmp_name(resc_limit_t *left, resc_limit_t *right)\n{\n\tresource *pres_l, *pres_r;\n\n\tif ((left == NULL) || (right == NULL))\n\t\treturn -2;\n\n\tif (left->rl_ncpus && !right->rl_ncpus)\n\t\treturn 1;\n\tif (!left->rl_ncpus && right->rl_ncpus)\n\t\treturn -1;\n\n\tif (left->rl_ssi && !right->rl_ssi)\n\t\treturn 1;\n\tif (!left->rl_ssi && right->rl_ssi)\n\t\treturn -1;\n\n\tif (left->rl_mem && !right->rl_mem)\n\t\treturn 1;\n\tif (!left->rl_mem && right->rl_mem)\n\t\treturn -1;\n\n\tif (left->rl_vmem && !right->rl_vmem)\n\t\treturn 1;\n\tif (!left->rl_vmem && right->rl_vmem)\n\t\treturn -1;\n\n\tif (left->rl_naccels && !right->rl_naccels)\n\t\treturn 1;\n\tif (!left->rl_naccels && right->rl_naccels)\n\t\treturn -1;\n\n\tif (left->rl_accel_mem && !right->rl_accel_mem)\n\t\treturn 1;\n\tif (!left->rl_accel_mem && 
right->rl_accel_mem)\n\t\treturn -1;\n\n\tfor (pres_l = (resource *) GET_NEXT(left->rl_other_res),\n\t    pres_r = (resource *) GET_NEXT(right->rl_other_res);\n\t     pres_l && pres_r;\n\t     pres_l = (resource *) GET_NEXT(pres_l->rs_link),\n\t    pres_r = (resource *) GET_NEXT(pres_r->rs_link)) {\n\t\tint cmp_res;\n\t\tif ((cmp_res = strcasecmp(pres_l->rs_defin->rs_name, pres_r->rs_defin->rs_name)))\n\t\t\treturn cmp_res;\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n * compare the values of resources in the resource list contained in\n * two resc_limit_t structures\n *\n * @param[in]\tleft - the left resource limit.\n * @param[in]\tright - the right resource limit.\n *\n * @return int\n * @retval -1\t- left < right\n * @retval 0\t- left = right\n * @retval 1\t- left > right\n * @retval -2\t- error\n */\nstatic int\nresc_limit_list_cmp_val(resc_limit_t *left, resc_limit_t *right)\n{\n\tresource *pres_l, *pres_r;\n\n\tif ((left == NULL) || (right == NULL))\n\t\treturn -2;\n\n\tif (left->rl_ncpus > right->rl_ncpus)\n\t\treturn 1;\n\tif (left->rl_ncpus < right->rl_ncpus)\n\t\treturn -1;\n\n\tif (left->rl_ssi > right->rl_ssi)\n\t\treturn 1;\n\tif (left->rl_ssi < right->rl_ssi)\n\t\treturn -1;\n\n\tif (left->rl_mem > right->rl_mem)\n\t\treturn 1;\n\tif (left->rl_mem < right->rl_mem)\n\t\treturn -1;\n\n\tif (left->rl_vmem > right->rl_vmem)\n\t\treturn 1;\n\tif (left->rl_vmem < right->rl_vmem)\n\t\treturn -1;\n\n\tif (left->rl_naccels > right->rl_naccels)\n\t\treturn 1;\n\tif (left->rl_naccels < right->rl_naccels)\n\t\treturn -1;\n\n\tif (left->rl_accel_mem > right->rl_accel_mem)\n\t\treturn 1;\n\tif (left->rl_accel_mem < right->rl_accel_mem)\n\t\treturn -1;\n\n\tfor (pres_l = (resource *) GET_NEXT(left->rl_other_res),\n\t    pres_r = (resource *) GET_NEXT(right->rl_other_res);\n\t     pres_l && pres_r;\n\t     pres_l = (resource *) GET_NEXT(pres_l->rs_link),\n\t    pres_r = (resource *) GET_NEXT(pres_r->rs_link)) {\n\t\tint cmp_res;\n\t\tif (pres_l->rs_defin->rs_type == 
ATR_TYPE_BOOL)\n\t\t\tcmp_res = pres_l->rs_value.at_val.at_long - pres_r->rs_value.at_val.at_long;\n\t\telse\n\t\t\tcmp_res = pres_l->rs_defin->rs_comp(&pres_l->rs_value, &pres_r->rs_value);\n\n\t\tif (cmp_res)\n\t\t\treturn cmp_res;\n\t}\n\n\treturn 0;\n}\n#endif\n/**\n * @brief\n *\tAdd the resc_limit entry 'resc' into pbs list 'phead' in\n *\ta sorted manner, in the order of increasing cpus or mem.\n *\n * @param[in,out]\tphead - the list to add to.\n * @param[in]\t\tresc - the resource limit to add.\n *\n * @return int\n * @retval 0\t- successful operation.\n * @retval 1\t- unsuccessful operation.\n */\nstatic int\nadd_to_resc_limit_list_sorted(pbs_list_head *phead, resc_limit_t *resc)\n{\n\tpbs_list_link *plink_cur;\n\trl_entry *p_entry_cur = NULL;\n\trl_entry *new_resc = NULL;\n\tresc_limit_t *p_res_cur;\n\n\tif ((phead == NULL) || (resc == NULL))\n\t\treturn (1);\n\n\tfor (plink_cur = phead,\n\t    p_entry_cur = (rl_entry *) GET_NEXT(*phead);\n\t     p_entry_cur;\n\t     p_entry_cur = (rl_entry *) GET_NEXT(*plink_cur)) {\n\t\tplink_cur = &p_entry_cur->rl_link;\n\t\tp_res_cur = p_entry_cur->resc;\n\n\t\tif (p_res_cur != NULL) {\n#if defined(PBS_MOM) || defined(PBS_PYTHON)\n\t\t\t/* order according to increasing # of cpus\n\t\t\t * if same # of cpus, use increasing amt of mem\n\t\t\t */\n\t\t\tif ((p_res_cur->rl_ncpus > resc->rl_ncpus) || ((p_res_cur->rl_ncpus == resc->rl_ncpus) && (p_res_cur->rl_mem > resc->rl_mem))) {\n\t\t\t\tbreak;\n\t\t\t}\n#else\n\t\t\tint cmp_res_name;\n\n\t\t\tif (p_res_cur->rl_res_count < resc->rl_res_count)\n\t\t\t\tcontinue;\n\t\t\telse if (p_res_cur->rl_res_count > resc->rl_res_count)\n\t\t\t\tbreak;\n\n\t\t\tcmp_res_name = resc_limit_list_cmp_name(p_res_cur, resc);\n\n\t\t\tif (cmp_res_name < 0)\n\t\t\t\tcontinue;\n\t\t\telse if (cmp_res_name > 0)\n\t\t\t\tbreak;\n\t\t\telse {\n\t\t\t\tint cmp_res_val = resc_limit_list_cmp_val(p_res_cur, resc);\n\t\t\t\tif (cmp_res_val < 
0)\n\t\t\t\t\tcontinue;\n\t\t\t\telse\n\t\t\t\t\tbreak;\n\t\t\t}\n#endif\n\t\t}\n\t}\n\n\tnew_resc = (rl_entry *) malloc(sizeof(rl_entry));\n\tif (new_resc == NULL) {\n\t\tlog_err(errno, __func__, msg_err_malloc);\n\t\treturn (1);\n\t}\n\tCLEAR_LINK(new_resc->rl_link);\n\n\tnew_resc->resc = resc;\n\n\t/* link after 'current' (or end) of resc_limit_entry in list */\n\tif (p_entry_cur != NULL) {\n\t\tinsert_link(plink_cur, &new_resc->rl_link, new_resc, LINK_INSET_BEFORE);\n\t} else {\n\t\tinsert_link(plink_cur, &new_resc->rl_link, new_resc, LINK_INSET_AFTER);\n\t}\n\n\treturn (0);\n}\n\n/**\n * @brief\n *\tAdd the resc_limit entry 'resc' into the beginning of pbs list\n *\t'pbs_head'.\n *\n * @param[in,out]\tphead - the list to add to.\n * @param[in]\t\tresc - the resource limit to add as head.\n *\n * @return int\n * @retval 0\t- successful operation.\n * @retval 1\t- unsuccessful operation.\n */\nstatic int\nadd_to_resc_limit_list_as_head(pbs_list_head *phead, resc_limit_t *resc)\n{\n\tpbs_list_link *plink_cur;\n\trl_entry *new_resc = NULL;\n\trl_entry *p_entry_cur = NULL;\n\n\tif ((phead == NULL) || (resc == NULL))\n\t\treturn (1);\n\n\tplink_cur = phead;\n\tp_entry_cur = (rl_entry *) GET_NEXT(*phead);\n\n\tif (p_entry_cur) {\n\t\tplink_cur = &p_entry_cur->rl_link;\n\t}\n\n\tnew_resc = (rl_entry *) malloc(sizeof(rl_entry));\n\tif (new_resc == NULL) {\n\t\tlog_err(errno, __func__, msg_err_malloc);\n\t\treturn (1);\n\t}\n\tCLEAR_LINK(new_resc->rl_link);\n\n\tnew_resc->resc = resc;\n\n\t/* link after 'current' (or end) of resc_limit_entry in list */\n\tif (p_entry_cur) {\n\t\tinsert_link(plink_cur, &new_resc->rl_link, new_resc, LINK_INSET_BEFORE);\n\t} else {\n\t\tinsert_link(plink_cur, &new_resc->rl_link, new_resc, LINK_INSET_AFTER);\n\t}\n\treturn (0);\n}\n\n#if !(defined(PBS_MOM) || defined(PBS_PYTHON))\n/**\n * @brief\n *\tinserts the resource in the resc_limit_t in lexically increasing order of\n *\tresource name\n *\n * @param[in]\thave - the resc_limit 
structure.\n * @param[in]\tkv_keyw - resource name.\n * @param[in]\tkv_val - resource value.\n * @param[in]\texecv_f - flag to indicate if the resource was found in execvnode\n *\n * @return int\n * @retval 0\t- successful operation.\n * @retval PBSE_Error\t- Error Code.\n */\nint\nresc_limit_insert_other_res(resc_limit_t *have, char *kv_keyw, char *kv_val, int execv_f)\n{\n\tresource *pres, *pnewres;\n\tresource_def *resc_def = NULL;\n\tint cmp_res = -1;\n\tint rc;\n\n\tif (have == NULL) {\n\t\tlog_err(-1, __func__, \"have is NULL\");\n\t\treturn PBSE_INTERNAL;\n\t}\n\n\tif (kv_keyw == NULL) {\n\t\tlog_err(-1, __func__, \"kv_keyw is NULL\");\n\t\treturn PBSE_INVALJOBRESC;\n\t}\n\n\tif (kv_val == NULL) {\n\t\tlog_err(-1, __func__, \"kv_val is NULL\");\n\t\treturn PBSE_INVALJOBRESC;\n\t}\n\n\tresc_def = find_resc_def(svr_resc_def, kv_keyw);\n\tif (resc_def == NULL) {\n\t\tlog_err(-1, __func__, \"resc_def is NULL\");\n\t\treturn PBSE_UNKRESC;\n\t}\n\n\tfor (pres = (resource *) GET_NEXT(have->rl_other_res);\n\t     pres != NULL;\n\t     pres = (resource *) GET_NEXT(pres->rs_link)) {\n\t\tif ((cmp_res = strcasecmp(pres->rs_defin->rs_name, kv_keyw)) >= 0)\n\t\t\tbreak;\n\t}\n\n\tif (!cmp_res) {\n\t\tattribute tmp = {0};\n\t\tif ((rc = pres->rs_defin->rs_decode(&tmp, NULL, NULL, kv_val))) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"failed to decode res %s=%s, (rc=%d)\", kv_keyw, kv_val, rc);\n\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\treturn rc;\n\t\t}\n\t\tpres->rs_defin->rs_set(&pres->rs_value, &tmp, INCR);\n\t\tfree_svrcache(&pres->rs_value);\n\t\tpres->rs_defin->rs_encode(&pres->rs_value, NULL, pres->rs_defin->rs_name,\n\t\t\t\t\t  NULL, ATR_ENCODE_CLIENT, &pres->rs_value.at_priv_encoded);\n\t\tpres->rs_defin->rs_free(&tmp);\n\t} else {\n\t\tpnewres = (resource *) calloc(1, sizeof(resource));\n\t\tif (pnewres == NULL) {\n\t\t\tlog_err(-1, __func__, \"unable to calloc resource\");\n\t\t\treturn 
1;\n\t\t}\n\t\tCLEAR_LINK(pnewres->rs_link);\n\t\tpnewres->rs_defin = resc_def;\n\t\tif ((rc = resc_def->rs_decode(&pnewres->rs_value, NULL, NULL, kv_val))) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"failed to decode res %s=%s, (rc=%d)\", kv_keyw, kv_val, rc);\n\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\tfree(pnewres);\n\t\t\treturn rc;\n\t\t}\n\t\tresc_def->rs_encode(&pnewres->rs_value, NULL, resc_def->rs_name,\n\t\t\t\t    NULL, ATR_ENCODE_CLIENT, &pnewres->rs_value.at_priv_encoded);\n\t\tif (execv_f)\n\t\t\tpnewres->rs_value.at_flags |= ATR_VFLAG_IN_EXECVNODE_FLAG;\n\t\tif (cmp_res < 0) /* pres will be NULL */\n\t\t\tappend_link(&have->rl_other_res, &pnewres->rs_link, pnewres);\n\t\telse /* cmp_res > 0, pres wont be NULL */\n\t\t\tinsert_link(&pres->rs_link, &pnewres->rs_link, pnewres, LINK_INSET_BEFORE);\n\t}\n\thave->rl_res_count++;\n\n\treturn 0;\n}\n#endif\n\n/**\n * @brief\n *\tInitialize to zero the resc_limit_t structure.\n *\n * @param[in]\thave - the resc_limit_t structure.\n *\n * @return none\n */\nvoid\nresc_limit_init(resc_limit_t *have)\n{\n\tif (have == NULL)\n\t\treturn;\n\n\thave->rl_ncpus = 0;\n\thave->rl_ssi = 0;\n\thave->rl_mem = 0LL;\n\thave->rl_vmem = 0LL;\n\thave->rl_naccels = 0;\n\thave->rl_accel_mem = 0LL;\n\tCLEAR_HEAD(have->rl_other_res);\n\thave->rl_res_count = 0U;\n\thave->chunkstr = NULL;\n\thave->chunkstr_sz = 0;\n\thave->chunkspec = NULL;\n\thave->host_chunk[0].str = NULL;\n\thave->host_chunk[0].num = 0;\n\thave->host_chunk[1].str = NULL;\n\thave->host_chunk[1].num = 0;\n}\n\n/**\n * @brief\n *\tFree resource list.\n *\n * @param[in]\tpl_head - pointer to pbs_list_head of res list\n *\n * @return none\n */\nvoid\nresc_limit_free_res_list(pbs_list_head *pl_head)\n{\n\tresource *next;\n\tresource *pr;\n\n\tif ((pl_head == NULL) || (pl_head->ll_next == NULL))\n\t\treturn;\n\n\tpr = (resource *) GET_NEXT(*pl_head);\n\twhile (pr != NULL) {\n\t\tnext = (resource *) 
GET_NEXT(pr->rs_link);\n\t\tdelete_link(&pr->rs_link);\n\t\tpr->rs_defin->rs_free(&pr->rs_value);\n\t\tfree(pr);\n\t\tpr = next;\n\t}\n\tCLEAR_HEAD((*pl_head));\n}\n\n/**\n * @brief\n *\tFree any malloced entries in resc_limit structure.\n *\n * @param[in]\thave - the resc_limit structure.\n *\n * @return none\n */\nvoid\nresc_limit_free(resc_limit_t *have)\n{\n\tif (have == NULL)\n\t\treturn;\n\n\tresc_limit_free_res_list(&have->rl_other_res);\n\thave->rl_res_count = 0U;\n\tfree(have->chunkstr);\n\thave->chunkstr = NULL;\n\thave->chunkstr_sz = 0;\n\tfree(have->chunkspec);\n\thave->chunkspec = NULL;\n\tfree(have->host_chunk[0].str);\n\thave->host_chunk[0].str = NULL;\n\thave->host_chunk[0].num = 0;\n\tfree(have->host_chunk[1].str);\n\thave->host_chunk[1].str = NULL;\n\thave->host_chunk[1].num = 0;\n}\n\n/**\n * @brief\n *\tFree any malloced entries in the 'res_list' parameter.\n *\n * @param[in]\tres_list - the list of resc_limit entries.\n *\n * @return none\n */\nvoid\nresc_limit_list_free(pbs_list_head *res_list)\n{\n\trl_entry *p_entry = NULL;\n\n\tif (res_list == NULL)\n\t\treturn;\n\n\tfor (p_entry = (rl_entry *) GET_NEXT(*res_list); p_entry != NULL; p_entry = (rl_entry *) GET_NEXT(*res_list)) {\n\t\tresc_limit_free(p_entry->resc);\n\t\tfree(p_entry->resc);\n\t\tdelete_link(&p_entry->rl_link);\n\t\tfree(p_entry);\n\t}\n}\n\n/**\n * @brief\n *\tPrint out the entries in the 'res_list' to the server log under 'logtype'\n *\n * @param[in]\theader_str - a string to accompany the logged message\n * @param[in]\tres_list - the resc_limit structure list\n * @param[in]\tlogtype -  log level type\n *\n * @return none\n */\nvoid\nresc_limit_list_print(char *header_str, pbs_list_head *res_list, int logtype)\n{\n\trl_entry *p_entry = NULL;\n\tint i;\n\tresource *phave;\n\n\tif ((header_str == NULL) || (res_list == NULL))\n\t\treturn;\n\n\tp_entry = (rl_entry *) GET_NEXT(*res_list);\n\ti = 0;\n\twhile (p_entry) {\n\t\tresc_limit_t *have;\n\n\t\thave = 
p_entry->resc;\n\n\t\tsnprintf(log_buffer, sizeof(log_buffer), \"%s[%d]: ncpus=%d ssi=%d mem=%lld vmem=%lld naccels=%d accel_mem=%lld chunkstr=%s host_chunk[0].str=%s host_chunk[1].str=%s\",\n\t\t\t header_str,\n\t\t\t i,\n\t\t\t have->rl_ncpus,\n\t\t\t have->rl_ssi,\n\t\t\t have->rl_mem,\n\t\t\t have->rl_vmem,\n\t\t\t have->rl_naccels,\n\t\t\t have->rl_accel_mem,\n\t\t\t have->chunkstr ? have->chunkstr : \"\",\n\t\t\t have->host_chunk[0].str ? have->host_chunk[0].str : \"\",\n\t\t\t have->host_chunk[1].str ? have->host_chunk[1].str : \"\");\n\t\tlog_event(logtype, PBS_EVENTCLASS_RESC,\n\t\t\t  LOG_INFO, __func__, log_buffer);\n\t\tfor (phave = (resource *) GET_NEXT(have->rl_other_res);\n\t\t     phave;\n\t\t     phave = (resource *) GET_NEXT(phave->rs_link)) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"%s[%d]: other res %s=%s\",\n\t\t\t\t header_str, i, phave->rs_defin->rs_name, phave->rs_value.at_priv_encoded->al_value);\n\t\t\tlog_event(logtype, PBS_EVENTCLASS_RESC,\n\t\t\t\t  LOG_INFO, __func__, log_buffer);\n\t\t}\n\t\tp_entry = (rl_entry *) GET_NEXT(p_entry->rl_link);\n\t\ti++;\n\t}\n}\n\n/**\n * @brief\n * \tReturn in 'buf' of size 'sz' a string of the form:\n *\n *\t\t<resource_name>=<resource_value>\n *\n * \twhere <resource_name> and <resource_val> maps 'have_resc',\n *\t'have_val'  against map_need value. <resource_value> must be of type int.\n *\n * @param[out]\t\tbuf - the buffer to fill\n * @param[in]\t\tbuf_sz - the size of 'buf'.\n * @param[in]\t\thave_resc \t- the resource name being matched.\n * @param[in]\t\thave_val\t- the resource value available.\n * @param[in,out]\tmap_need_val \t- the needed value. 
If have_val is <\n *\t\t\t\t\t  map_need_val, then map_need_val is\n *\t\t\t\t\t  decremented by have_val amount and\n *\t\t\t\t\t  returned.\n * @return none\n */\nstatic void\nintmap_need_to_have_resources(char *buf, size_t buf_sz,\n\t\t\t      char *have_resc, char *have_val, int *map_need_val)\n{\n\tint have_int;\n\tchar *endp;\n\n\tif ((have_resc == NULL) || (have_val == NULL) || (buf == NULL) ||\n\t    (buf_sz == 0) || (map_need_val == NULL)) {\n\t\tlog_err(-1, __func__, \"map_need_to_have_resources\");\n\t\treturn;\n\t}\n\n\tif (*map_need_val == 0)\n\t\treturn;\n\n\thave_int = (int) strtol(have_val, &endp, 10);\n\tif (*endp != '\\0') {\n\t\tlog_err(-1, __func__, \"strtol error\");\n\t\treturn;\n\t}\n\n\tif (have_int > *map_need_val) {\n\t\tsnprintf(buf, buf_sz, \":%s=%d\", have_resc, *map_need_val);\n\t\t*map_need_val = 0;\n\t} else {\n\t\t*map_need_val -= have_int;\n\t\tsnprintf(buf, buf_sz, \":%s=%s\", have_resc, have_val);\n\t}\n}\n\n/**\n * @brief\n * \tReturn in 'buf' of size 'sz' a string of the form:\n *\n *\t\t<resource_name>=<resource_value>kb\n *\n * \twhere <resource_name> and <resource_val> map 'have_resc',\n *\t'have_val' against the map_need value. <resource_value> must be a size\n *\tvalue.\n *\n * @param[out]\t\tbuf - the buffer to fill\n * @param[in]\t\tbuf_sz - the size of 'buf'.\n * @param[in]\t\thave_resc \t- the resource name being matched.\n * @param[in]\t\thave_val\t- the resource value we have in stock.\n * @param[in,out]\tmap_need_val \t- the needed value. 
If have_val is <\n *\t\t\t\t\t  map_need_val, then map_need_val is\n *\t\t\t\t\t  decremented by have_val amount and\n *\t\t\t\t\t  returned.\n * @return none\n */\nstatic void\nsizemap_need_to_have_resources(char *buf, size_t buf_sz, char *have_resc, char *have_val,\n\t\t\t       long long *map_need_val)\n{\n\tlong long have_size;\n\n\tif ((have_resc == NULL) || (have_val == NULL) || (buf == NULL) ||\n\t    (buf_sz == 0) || (map_need_val == NULL)) {\n\t\tlog_err(-1, __func__, \"map_need_to_have_resources\");\n\t\treturn;\n\t}\n\n\tif (*map_need_val == 0LL)\n\t\treturn;\n\n\thave_size = to_kbsize(have_val);\n\n\tif (have_size > *map_need_val) {\n\t\tsnprintf(buf, buf_sz, \":%s=%lldkb\", have_resc, *map_need_val);\n\t\t*map_need_val = 0LL;\n\t} else {\n\t\t*map_need_val -= have_size;\n\t\tsnprintf(buf, buf_sz, \":%s=%s\", have_resc, have_val);\n\t}\n}\n\n/**\n * @brief\n * \tReturn in 'buf' of size 'sz' a string of the form:\n *\n *\t\t<resource_name>=<resource_value>kb\n *\n * \twhere <resource_name> and <resource_val> map 'have_resc',\n *\t'have_val' against the resc_limit 'need' value.\n *\n * @param[out]\t\tbuf - the buffer to fill\n * @param[in]\t\tbuf_sz - the size of 'buf'.\n * @param[in]\t\thave_resc \t- the resource name being matched.\n * @param[in]\t\thave_val\t- the resource value we have in stock.\n * @param[in,out]\tneed\t\t- a resc_limit_t structure.\n *\t\t\t\t\t  The 'need' value is\n *\t\t\t\t\t  decremented by have_val amount and\n *\t\t\t\t\t  returned.\n * @return none\n */\nstatic void\nmap_need_to_have_resources(char *buf, size_t buf_sz, char *have_resc,\n\t\t\t   char *have_val, resc_limit_t *need)\n{\n\n\tif ((buf == NULL) || (buf_sz == 0) || (have_resc == NULL) ||\n\t    (have_val == NULL) || (need == NULL)) {\n\t\treturn;\n\t}\n\n\tif (strcmp(have_resc, \"ncpus\") == 0) {\n\t\tintmap_need_to_have_resources(buf, buf_sz,\n\t\t\t\t\t      have_resc, have_val, &need->rl_ncpus);\n\n\t} else if (strcmp(have_resc, \"mem\") == 0) 
{\n\t\tsizemap_need_to_have_resources(buf, buf_sz,\n\t\t\t\t\t       have_resc, have_val, &need->rl_mem);\n\n\t} else if (strcmp(have_resc, \"vmem\") == 0) {\n\t\tsizemap_need_to_have_resources(buf, buf_sz,\n\t\t\t\t\t       have_resc, have_val, &need->rl_vmem);\n\n\t} else if (strcmp(have_resc, \"naccelerators\") == 0) {\n\t\tintmap_need_to_have_resources(buf, buf_sz,\n\t\t\t\t\t      have_resc, have_val, &need->rl_naccels);\n\n\t} else if (strcmp(have_resc, \"accelerator_memory\") == 0) {\n\t\tsizemap_need_to_have_resources(buf, buf_sz,\n\t\t\t\t\t       have_resc, have_val, &need->rl_accel_mem);\n#if !(defined(PBS_MOM) || defined(PBS_PYTHON))\n\t} else {\n\t\tresource *pneed;\n\t\tfor (pneed = (resource *) GET_NEXT(need->rl_other_res);\n\t\t     pneed;\n\t\t     pneed = (resource *) GET_NEXT(pneed->rs_link)) {\n\t\t\tif (strcasecmp(have_resc, pneed->rs_defin->rs_name) == 0) {\n\t\t\t\tattribute hattr = {0};\n\t\t\t\tint cmp_res;\n\t\t\t\tif (!(pneed->rs_value.at_flags & ATR_VFLAG_IN_EXECVNODE_FLAG))\n\t\t\t\t\treturn;\n\t\t\t\tpneed->rs_defin->rs_decode(&hattr, NULL, NULL, have_val);\n\t\t\t\tcmp_res = pneed->rs_defin->rs_comp(&hattr, &pneed->rs_value);\n\t\t\t\tif (!cmp_res) {\n\t\t\t\t\tresource *tmp = pneed;\n\t\t\t\t\tpneed = (resource *) pneed->rs_link.ll_prior;\n\t\t\t\t\tsnprintf(buf, buf_sz, \":%s=%s\", have_resc, have_val);\n\t\t\t\t\tdelete_link(&tmp->rs_link);\n\t\t\t\t\ttmp->rs_defin->rs_free(&tmp->rs_value);\n\t\t\t\t\tfree(tmp);\n\t\t\t\t} else {\n\t\t\t\t\tif (cmp_res > 0) {\n\t\t\t\t\t\tsnprintf(buf, buf_sz, \":%s=%s\", have_resc, pneed->rs_value.at_priv_encoded->al_value);\n\t\t\t\t\t\tpneed->rs_defin->rs_decode(&pneed->rs_value, NULL, NULL, \"0\");\n\t\t\t\t\t} else {\n\t\t\t\t\t\tpneed->rs_defin->rs_set(&pneed->rs_value, &hattr, DECR);\n\t\t\t\t\t\tsnprintf(buf, buf_sz, \":%s=%s\", have_resc, have_val);\n\t\t\t\t\t}\n\t\t\t\t\tfree_svrcache(&pneed->rs_value);\n\t\t\t\t\tpneed->rs_defin->rs_encode(&pneed->rs_value, NULL, 
pneed->rs_defin->rs_name,\n\t\t\t\t\t\t\t\t   NULL, ATR_ENCODE_CLIENT, &pneed->rs_value.at_priv_encoded);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n#endif\n\t}\n}\n\n/**\n * @brief\n *\tHelper function that adds 'noden' resources assigned\n *\t'keyw' = 'keyval' values to 'vnlp'.\n * @param[in]\tvnlp\t- vnode list structure\n * @param[in]\tnoden   - the node/vnode represented\n * @param[in]\tkeyw\t- resource name\n * @param[in]\tkeyval\t- resource value\n * @return int\n * @retval\t0\t- success\n * @retval\t1\t- failure\n */\nstatic int\nadd_to_vnl(vnl_t **vnlp, char *noden, char *keyw, char *keyval)\n{\n#if defined(PBS_MOM) || defined(PBS_PYTHON)\n\tint rc;\n\tchar buf[LOG_BUF_SIZE];\n\tchar buf_val[LOG_BUF_SIZE];\n\tchar *attr_val = NULL;\n\n\tif ((vnlp == NULL) || (noden == NULL) || (keyw == NULL) || (keyval == NULL))\n\t\treturn (0);\n\n\tif (*vnlp == NULL) {\n\t\tif (vnl_alloc(vnlp) == NULL) {\n\t\t\tlog_err(errno, __func__,\n\t\t\t\t\"Failed to allocate a vnlp structure\");\n\t\t\treturn (1);\n\t\t}\n\t}\n\n\tsnprintf(buf, sizeof(buf), \"resources_assigned.%s\", keyw);\n\n\tsnprintf(buf_val, sizeof(buf_val), \"%s\", keyval);\n\n\tattr_val = vn_exist(*vnlp, noden, buf);\n\tif (attr_val != NULL) {\n\n\t\tif ((strcmp(buf, \"resources_assigned.mem\") == 0) ||\n\t\t    (strcmp(buf, \"resources_assigned.vmem\") == 0) ||\n\t\t    (strcmp(buf, \"resources_assigned.accelerator_memory\") == 0)) {\n\t\t\tlong long size1;\n\t\t\tlong long size2;\n\n\t\t\tsize1 = to_kbsize(attr_val);\n\t\t\tsize2 = to_kbsize(keyval);\n\n\t\t\tsnprintf(buf_val, sizeof(buf_val), \"%lldkb\", size1 + size2);\n\n\t\t} else {\n\t\t\tint val1;\n\t\t\tint val2;\n\n\t\t\tval1 = atol(attr_val);\n\t\t\tval2 = atol(keyval);\n\t\t\tsnprintf(buf_val, sizeof(buf_val) - 1, \"%d\", val1 + val2);\n\t\t}\n\t}\n\trc = vn_addvnr(*vnlp, noden, buf, buf_val, 0, 0, NULL);\n\tif (rc == -1) {\n\t\tchar *msgbuf;\n\t\tpbs_asprintf(&msgbuf, \"failed to add '%s=%s'\", buf, keyval);\n\t\tlog_err(-1, __func__, 
msgbuf);\n\t\tfree(msgbuf);\n\t\treturn (1);\n\t}\n#endif\n\treturn (0);\n}\n\n#if !(defined(PBS_MOM) || defined(PBS_PYTHON))\n/**\n * @brief\n *\tCheck if other resources in 'have' satisfy the remaining\n *\t resources from 'need'.\n *\n * @param[in]\tneed - the need value of resc_limit_t type.\n * @param[in]\thave - the have value of resc_limit_t type.\n *\n * @return int\n *\t 1 - if not satisfied\n *\t 0 - otherwise\n *\t-1 - error\n */\nstatic int\ncheck_other_res(resc_limit_t *need, resc_limit_t *have)\n{\n\tresource *pneed, *phave;\n\n\tif ((need == NULL) || (have == NULL))\n\t\treturn -1;\n\n\tif (!GET_NEXT(need->rl_other_res))\n\t\treturn 0;\n\n\tfor (pneed = (resource *) GET_NEXT(need->rl_other_res);\n\t     pneed;\n\t     pneed = (resource *) GET_NEXT(pneed->rs_link)) {\n\t\tint matched = 0;\n\t\tfor (phave = (resource *) GET_NEXT(have->rl_other_res);\n\t\t     phave;\n\t\t     phave = (resource *) GET_NEXT(phave->rs_link)) {\n\t\t\tif (pneed->rs_defin == phave->rs_defin) {\n\t\t\t\tresource_def *prdef = pneed->rs_defin;\n\t\t\t\tunsigned int atr_type = prdef->rs_type;\n\t\t\t\tif ((atr_type == ATR_TYPE_STR) || (atr_type == ATR_TYPE_BOOL)) {\n\t\t\t\t\tif (!prdef->rs_comp(&pneed->rs_value, &phave->rs_value)) {\n\t\t\t\t\t\tmatched = 1;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t} else { /* for atr type long, float and size */\n\t\t\t\t\tif (prdef->rs_comp(&pneed->rs_value, &phave->rs_value) <= 0) {\n\t\t\t\t\t\tmatched = 1;\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t\tif (!matched)\n\t\t\treturn 1;\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\tappends a chunk spec to sched select and also tries to\n *\tgroup identical chunk specs by increasing the chunk count\n *\n * @param[in/out]\tnew_schedselect - pointer to new sched select\n * @param[in]\t\tchunkstr - current chunk spec to append\n * @param[in/out]\ttmp_chunk_spec - buffer to accumulate identical chunk specs\n * @param[in/out]\ttmp_chunk_ct - count of gathered identical chunk 
specs\n *\n * @return none\n */\nstatic void\nappend_and_group_sched_sel(char *new_schedselect, char *chunkstr, char *tmp_chunk_spec, int *tmp_chunk_ct)\n{\n\tif ((new_schedselect == NULL) || (chunkstr == NULL) || (tmp_chunk_spec == NULL) || (tmp_chunk_ct == NULL)) {\n\t\tlog_err(-1, __func__, \"a parameter is NULL\");\n\t\treturn;\n\t}\n\n\tif (*tmp_chunk_spec) {\n\t\tif (!strcasecmp(tmp_chunk_spec, chunkstr)) {\n\t\t\t(*tmp_chunk_ct)++;\n\t\t\treturn;\n\t\t}\n\t\tif (*new_schedselect)\n\t\t\tstrcat(new_schedselect, \"+\");\n\t\tsprintf(new_schedselect + strlen(new_schedselect), \"%d:%s\", *tmp_chunk_ct, tmp_chunk_spec);\n\t}\n\n\t(*tmp_chunk_ct) = 1;\n\tstrcpy(tmp_chunk_spec, chunkstr);\n}\n#endif\n\n/**\n * @brief\n *\tReturn a string representing a chunk that satisfies 'need'\n *\t resources from the 'have' pool.\n *\n * @param[in]\tneed - the need value of resc_limit_t type.\n * @param[in]\thave - the have value of resc_limit_t type.\n * @param[in]\tvnlp - if not-NULL, add the vnodes and\n *\t\t\tresources that satisfy 'need'\n *\t\t\trequest against 'have' resources.\n *\n * @return char *\n *\t<chunk string> - if successful\n *\tNULL \t\t- if could not find a chunk string to satisfy need\n *\t\t\t  against have, or if an error occurred.\n */\nstatic char *\nsatisfy_chunk_need(resc_limit_t *need, resc_limit_t *have, vnl_t **vnlp)\n{\n\tresc_limit_t map_need;\n\tchar *chunk = NULL;\n\tchar *noden;\n\tint nelem;\n\tchar *chunkstr = NULL;\n\tstatic char *ret_chunkstr = NULL;\n\tstatic size_t ret_chunkstr_size = 0;\n\tsize_t data_size = 0;\n\tchar buf[LOG_BUF_SIZE];\n\tstruct key_value_pair *pkvp;\n\tint paren = 0;\n\tint found_paren = 0;\n\tchar *last = NULL;\n\tint hasprn = 0;\n\tint j;\n\tint entry = 0;\n#if !(defined(PBS_MOM) || defined(PBS_PYTHON))\n\tresource *presnew, *pres, *pneed;\n#endif\n\n\tif ((need == NULL) || (have == NULL))\n\t\treturn (NULL);\n\n\tif ((have->chunkstr == NULL) || (have->chunkstr[0] == '\\0'))\n\t\treturn (NULL);\n\n\tif 
((need->rl_ncpus > have->rl_ncpus) ||\n\t    (need->rl_mem > have->rl_mem) ||\n\t    (need->rl_ssi > have->rl_ssi) ||\n\t    (need->rl_vmem > have->rl_vmem) ||\n\t    (need->rl_naccels > have->rl_naccels) ||\n\t    (need->rl_accel_mem > have->rl_accel_mem)\n#if !(defined(PBS_MOM) || defined(PBS_PYTHON))\n\t    || check_other_res(need, have)\n#endif\n\t) {\n\t\treturn (NULL);\n\t}\n\n\tmemset(&map_need, 0, sizeof(resc_limit_t));\n\tresc_limit_init(&map_need);\n\n#if defined(PBS_MOM) || defined(PBS_PYTHON)\n\tmap_need.rl_ncpus = need->rl_ncpus;\n\tmap_need.rl_mem = need->rl_mem;\n\tmap_need.rl_ssi = need->rl_ssi;\n\tmap_need.rl_vmem = need->rl_vmem;\n\tmap_need.rl_naccels = need->rl_naccels;\n\tmap_need.rl_accel_mem = need->rl_accel_mem;\n#else\n\tmap_need.rl_ncpus = have->rl_ncpus;\n\tmap_need.rl_mem = have->rl_mem;\n\tmap_need.rl_ssi = have->rl_ssi;\n\tmap_need.rl_vmem = have->rl_vmem;\n\tmap_need.rl_naccels = have->rl_naccels;\n\tmap_need.rl_accel_mem = have->rl_accel_mem;\n\n\tfor (pres = (resource *) GET_NEXT(have->rl_other_res);\n\t     pres;\n\t     pres = (resource *) GET_NEXT(pres->rs_link)) {\n\t\tpresnew = (resource *) calloc(1, sizeof(resource));\n\t\tif (presnew == NULL) {\n\t\t\tlog_err(-1, __func__, \"unable to calloc resource\");\n\t\t\treturn (NULL);\n\t\t}\n\t\tCLEAR_LINK(presnew->rs_link);\n\t\tpresnew->rs_defin = pres->rs_defin;\n\t\tappend_link(&map_need.rl_other_res, &presnew->rs_link, presnew);\n\t\tpresnew->rs_defin->rs_set(&presnew->rs_value, &pres->rs_value, SET);\n\t\tpresnew->rs_defin->rs_encode(&presnew->rs_value, NULL, presnew->rs_defin->rs_name,\n\t\t\t\t\t     NULL, ATR_ENCODE_CLIENT, &presnew->rs_value.at_priv_encoded);\n\t\tif (pres->rs_value.at_flags & ATR_VFLAG_IN_EXECVNODE_FLAG)\n\t\t\tpresnew->rs_value.at_flags |= ATR_VFLAG_IN_EXECVNODE_FLAG;\n\t}\n\n\tif (need->rl_ncpus)\n\t\tmap_need.rl_ncpus = need->rl_ncpus;\n\tif (need->rl_mem)\n\t\tmap_need.rl_mem = need->rl_mem;\n\tif (need->rl_ssi)\n\t\tmap_need.rl_ssi = 
need->rl_ssi;\n\tif (need->rl_vmem)\n\t\tmap_need.rl_vmem = need->rl_vmem;\n\tif (need->rl_naccels)\n\t\tmap_need.rl_naccels = need->rl_naccels;\n\tif (need->rl_accel_mem)\n\t\tmap_need.rl_accel_mem = need->rl_accel_mem;\n\n\tpres = (resource *) GET_NEXT(map_need.rl_other_res);\n\tpneed = (resource *) GET_NEXT(need->rl_other_res);\n\twhile (pres && pneed) {\n\t\tunsigned int res_type = pneed->rs_defin->rs_type;\n\t\twhile (pres && (pneed->rs_defin != pres->rs_defin))\n\t\t\tpres = (resource *) GET_NEXT(pres->rs_link);\n\t\tif (!pres)\n\t\t\tbreak;\n\n\t\tif (((res_type == ATR_TYPE_LONG) || (res_type == ATR_TYPE_SIZE) || (res_type == ATR_TYPE_FLOAT)) && (pres->rs_defin->rs_comp(&pres->rs_value, &pneed->rs_value))) {\n\t\t\tpres->rs_defin->rs_free(&pres->rs_value); /* ATR_VFLAG_IN_EXECVNODE_FLAG gets preserved */\n\t\t\tpres->rs_defin->rs_set(&pres->rs_value, &pneed->rs_value, SET);\n\t\t\tpres->rs_defin->rs_encode(&pres->rs_value, NULL, pres->rs_defin->rs_name,\n\t\t\t\t\t\t  NULL, ATR_ENCODE_CLIENT, &pres->rs_value.at_priv_encoded);\n\t\t}\n\t\tpneed = (resource *) GET_NEXT(pneed->rs_link);\n\t}\n#endif\n\n\tdata_size = strlen(have->chunkstr) + 1;\n\tif (data_size > ret_chunkstr_size) {\n\t\tchar *tpbuf;\n\n\t\ttpbuf = realloc(ret_chunkstr, data_size);\n\t\tif (tpbuf == NULL) {\n\t\t\tlog_err(-1, __func__, \"realloc failure\");\n\t\t\tresc_limit_free(&map_need);\n\t\t\treturn (NULL);\n\t\t}\n\t\tret_chunkstr = tpbuf;\n\t\tret_chunkstr_size = data_size;\n\t}\n\tret_chunkstr[0] = '\\0';\n\n\tchunkstr = strdup(have->chunkstr);\n\tif (chunkstr == NULL) {\n\t\tlog_err(errno, __func__, \"strdup 1 failure\");\n\t\tresc_limit_free(&map_need);\n\t\treturn (NULL);\n\t}\n\n\tfor (chunk = parse_plus_spec_r(chunkstr, &last, &hasprn);\n\t     chunk != NULL;\n\t     chunk = parse_plus_spec_r(last, &last, &hasprn)) {\n\n\t\tparen += hasprn;\n\n\t\tif (parse_node_resc(chunk, &noden, &nelem, &pkvp) == 0) {\n\t\t\tint vnode_in = 0;\n\n\t\t\tif (((hasprn > 0) && (paren > 0)) || 
((hasprn == 0) && (paren == 0))) {\n\t\t\t\t/* at the beginning of chunk for current host */\n\t\t\t\tif (!found_paren) {\n\t\t\t\t\tstrcat(ret_chunkstr, \"(\");\n\t\t\t\t\tfound_paren = 1;\n\t\t\t\t}\n\t\t\t\tfor (j = 0; j < nelem; ++j) {\n\n\t\t\t\t\tbuf[0] = '\\0';\n\t\t\t\t\tmap_need_to_have_resources(buf,\n\t\t\t\t\t\t\t\t   sizeof(buf) - 1, pkvp[j].kv_keyw, pkvp[j].kv_val, &map_need);\n\n\t\t\t\t\tif (buf[0] != '\\0') {\n\t\t\t\t\t\tif (!vnode_in) {\n\t\t\t\t\t\t\tif (entry > 0) {\n\t\t\t\t\t\t\t\tstrcat(ret_chunkstr, \"+\");\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tstrcat(ret_chunkstr, noden);\n\t\t\t\t\t\t\tentry++;\n\t\t\t\t\t\t\tvnode_in = 1;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tstrcat(ret_chunkstr, buf);\n\t\t\t\t\t\t(void) add_to_vnl(vnlp, noden, pkvp[j].kv_keyw, pkvp[j].kv_val);\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif (paren == 0) { /* have all chunks for current host */\n\n\t\t\t\t\tif (found_paren) {\n\t\t\t\t\t\tstrcat(ret_chunkstr, \")\");\n\t\t\t\t\t\tfound_paren = 0;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else {\n\n\t\t\t\tif (!found_paren) {\n\t\t\t\t\tstrcat(ret_chunkstr, \"(\");\n\t\t\t\t\tfound_paren = 1;\n\t\t\t\t}\n\t\t\t\tfor (j = 0; j < nelem; ++j) {\n\t\t\t\t\tbuf[0] = '\\0';\n\t\t\t\t\tmap_need_to_have_resources(buf, sizeof(buf) - 1, pkvp[j].kv_keyw, pkvp[j].kv_val, &map_need);\n\n\t\t\t\t\tif (buf[0] != '\\0') {\n\t\t\t\t\t\tif (!vnode_in) {\n\t\t\t\t\t\t\tif (entry > 0) {\n\t\t\t\t\t\t\t\tstrcat(ret_chunkstr, \"+\");\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tstrcat(ret_chunkstr, noden);\n\t\t\t\t\t\t\tentry++;\n\t\t\t\t\t\t\tvnode_in = 1;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tstrcat(ret_chunkstr, buf);\n\t\t\t\t\t\t(void) add_to_vnl(vnlp, noden, pkvp[j].kv_keyw, pkvp[j].kv_val);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (paren == 0) { /* have all chunks for current host */\n\n\t\t\t\tif (found_paren) {\n\t\t\t\t\tstrcat(ret_chunkstr, \")\");\n\t\t\t\t\tfound_paren = 0;\n\t\t\t\t}\n\t\t\t}\n\n\t\t} else {\n\t\t\tlog_err(errno, __func__, 
\"parse_node_resc_error\");\n\t\t\tfree(chunkstr);\n\t\t\tresc_limit_free(&map_need);\n\t\t\treturn (NULL);\n\t\t}\n\t}\n\n\tfree(chunkstr);\n\tresc_limit_free(&map_need);\n\treturn (ret_chunkstr);\n}\n\n/*\n * @brief\n *\tInitialize the relnodes_input_t structure used as argument to\n *\tpbs_release_nodes_given_nodelist() and pbs_release_nodes_given_select()\n *\tfunctions.\n * @param[out]\tr_input\t- structure to initialize\n * @return none\n */\nvoid\nrelnodes_input_init(relnodes_input_t *r_input)\n{\n\tr_input->jobid = NULL;\n\tr_input->vnodes_data = NULL;\n\tr_input->execvnode = NULL;\n\tr_input->exechost = NULL;\n\tr_input->exechost2 = NULL;\n\tr_input->schedselect = NULL;\n\tr_input->p_new_exec_vnode = NULL;\n\tr_input->p_new_exec_host[0] = NULL;\n\tr_input->p_new_exec_host[1] = NULL;\n\tr_input->p_new_schedselect = NULL;\n}\n\n/*\n * @brief\n *\tInitialize the relnodes_input_select_t structure used as argument to\n *\tpbs_release_nodes_given_select() function.\n *\n * @param[out]\tr_input\t- structure to initialize\n * @return none\n */\nvoid\nrelnodes_input_select_init(relnodes_input_select_t *r_input)\n{\n\tr_input->select_str = NULL;\n\tr_input->failed_mom_list = NULL;\n\tr_input->succeeded_mom_list = NULL;\n\tr_input->failed_vnodes = NULL;\n\tr_input->good_vnodes = NULL;\n}\n\n/**\n *\n * @brief\n *\tReturn a subset of the exec_vnode string 'e_vnode'\n *\twhere only vnode chunks matching the vnode names\n *\tin 'vnl_good' (list of healthy vnodes) are shown.\n * @par\n *\tFor example, given:\n *\t\te_vnode = (vn1:<r1>=<v1>)+(vn2:<r2>=<v2>)+(vn3:<r3>=<v3>)\n *\tand\n *\t\tvnl_good lists: vn1 and vn3 (vn2's parent mom has failed)\n *\n *\tthen this function returns:\n *\t\te_vnode = (vn1:<r1>=<v1>)+(vn3:<r3>=<v3>)\n *\n * @param[in]\te_vnode - exec_vnode string\n * @param[in]\tvnl_good - list of names of healthy vnodes\n *\n * @return char *\n * @retval != NULL\tthe exec_vnode string of healthy vnodes\n * @retval == NULL\tif error encountered.\n *\n * @note\n *\tThe returned string points 
to a statically allocated buffer\n *\tthat must not be freed, and will get overwritten on the\n *\tnext call to this function.\n *\n */\nstatic char *\nreturn_available_vnodes(char *e_vnode, vnl_t *vnl_good)\n{\n\tchar *save_ptr;\n\tchar *pc;\n\tchar *tmpbuf;\n\tstatic char *ebuf = NULL;\n\tstatic int ebuf_size = 0;\n\n\tif (e_vnode == NULL)\n\t\treturn NULL;\n\n\ttmpbuf = strdup(e_vnode);\n\tif (tmpbuf == NULL) {\n\t\tlog_err(errno, __func__, \"strdup failed\");\n\t\treturn NULL;\n\t}\n\n\tif (ebuf == NULL) {\n\t\tebuf = malloc(LOG_BUF_SIZE + 1);\n\t\tif (ebuf == NULL) {\n\t\t\tlog_err(errno, __func__, \"malloc failed\");\n\t\t\tfree(tmpbuf);\n\t\t\treturn NULL;\n\t\t}\n\t\tebuf_size = LOG_BUF_SIZE;\n\t}\n\n\tebuf[0] = '\\0';\n\tfor (pc = strtok_r(tmpbuf, \"+\", &save_ptr); pc != NULL; pc = strtok_r(NULL, \"+\", &save_ptr)) {\n\t\tchar *vn = pc;\n\t\tchar *p;\n\n\t\tp = strchr(pc, ':');\n\t\tif (p != NULL) {\n\t\t\t*p = '\\0';\n\t\t\tif (*vn == '(')\n\t\t\t\tvn++;\n\t\t\tif (vn_vnode(vnl_good, vn)) {\n\t\t\t\t*p = ':';\n\t\t\t\tif (ebuf[0] != '\\0') {\n\t\t\t\t\tif (pbs_strcat(&ebuf, &ebuf_size, \"+\") == NULL) {\n\t\t\t\t\t\tfree(tmpbuf);\n\t\t\t\t\t\treturn NULL;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (pbs_strcat(&ebuf, &ebuf_size, pc) == NULL) {\n\t\t\t\t\tfree(tmpbuf);\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\t*p = ':';\n\t\t\t}\n\n\t\t} else {\n\t\t\tif (ebuf[0] != '\\0') {\n\t\t\t\tif (pbs_strcat(&ebuf, &ebuf_size, \"+\") == NULL) {\n\t\t\t\t\tfree(tmpbuf);\n\t\t\t\t\treturn NULL;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (pbs_strcat(&ebuf, &ebuf_size, pc) == NULL) {\n\t\t\t\tfree(tmpbuf);\n\t\t\t\treturn NULL;\n\t\t\t}\n\t\t}\n\t}\n\n\tfree(tmpbuf);\n\treturn (ebuf);\n}\n\n/**\n * @brief\n *\tRelease node resources from a job in such a way that it satisfies some\n *\tspecified select value.\n *\n * @param[in]\t\tr_input\t- contains various input including the job id\n * @param[in,out]\tr_input2 - contains various input and output parameters\n *\t\t\t\t   including the desired 
'select' spec, as well\n *\t\t\t\t   as the resulting new values to job's exec_vnode,\n *\t\t\t\t   exec_host, exec_host2, and schedselect.\n * @param[out]\t\terr_msg - gets filled in with the error message if this\n *\t\t\t\t  function returns a non-zero value.\n * @param[in]\t\terr_msg_sz - size of the 'err_msg' buffer.\n * @return int\n * @retval 0 - success\n * @retval 1 - fail with 'err_msg' filled in with message.\n */\nint\npbs_release_nodes_given_select(relnodes_input_t *r_input, relnodes_input_select_t *r_input2, char *err_msg, int err_msg_sz)\n{\n\tchar *new_exec_vnode = NULL;\n\tchar *new_exec_host = NULL;\n\tchar *new_exec_host2 = NULL;\n\tchar *new_schedselect = NULL;\n\tchar *noden;\n\tint nelem;\n\tint paren = 0;\n\tint found_paren = 0;\n\tchar *chunk = NULL;\n\tchar *chunk1 = NULL;\n\tchar *chunk2 = NULL;\n\tint entry = 0;\n\tint h_entry = 0;\n\tchar buf[LOG_BUF_SIZE];\n\n\tstruct key_value_pair *pkvp;\n\tchar *last = NULL;\n\tchar *last1 = NULL;\n\tchar *last2 = NULL;\n\tchar *last3 = NULL;\n\tint hasprn = 0;\n\tint hasprn1 = 0;\n\tint hasprn2 = 0;\n\tint hasprn3;\n\tchar *exec_vnode = NULL;\n\tchar *exec_host = NULL;\n\tchar *exec_host2 = NULL;\n\n\tresc_limit_t *have = NULL;\n\tresc_limit_t *have2 = NULL;\n\tpbs_list_head resc_limit_list;\n\tresc_limit_t *have0 = NULL;\n\n\tchar *selbuf = NULL;\n\tchar *psubspec;\n\tint rc = 1;\n\tresc_limit_t need;\n\tint snelma;\n\tstatic int snelmt = 0;\n\tstatic key_value_pair *skv = NULL;\n\trl_entry *p_entry = NULL;\n\tint matched = 0;\n\tint snc;\n\tint h, i, j, k, l;\n\tchar *new_chunkstr = NULL;\n\t/* job vnodes that have been taken out as their parent moms are non-functioning */\n\tvnl_t *vnl_fails = NULL;\n\t/* job vnodes that have functioning parent moms */\n\tvnl_t *vnl_good = NULL;\n\tvnl_t *vnl_good_master = NULL;\n\tchar prev_noden[PBS_MAXNODENAME + 1];\n\tchar *parent_mom;\n\tchar *tmpstr;\n\tchar *chunk_buf = NULL;\n\tint chunk_buf_sz = 0;\n#ifdef PBS_MOM\n\tresource_def *resc_def = NULL;\n\tmomvmap_t 
*vn_vmap = NULL;\n#else\n\tstruct pbsnode *pnode = NULL;\n\tchar e2buf[PBS_MAXHOSTNAME + 1 + 6 + 16];\n#ifndef PBS_PYTHON\n\tchar *extra_res = NULL;\n\tresource *pres;\n\tchar *sched_select = NULL;\n\tchar *chunkschsel = NULL;\n\tchar *res_in_exec_vnode = NULL;\n\tchar *lastschsel = NULL;\n\tint hasprnschsel = 0;\n\tchar *tmp_chunk_spec = NULL;\n\tint tmp_chunk_ct;\n#endif\n#endif\n\n\tif ((r_input == NULL) || (r_input2 == NULL) || (r_input->jobid == NULL) || (r_input->execvnode == NULL) || (r_input->exechost == NULL) || (r_input->exechost2 == NULL) || (r_input->schedselect == NULL) || (err_msg == NULL) || (err_msg_sz <= 0)) {\n\t\tlog_err(-1, __func__, \"required parameter is null\");\n\t\treturn (1);\n\t}\n\terr_msg[0] = '\\0';\n\tresc_limit_init(&need);\n\texec_vnode = strdup(r_input->execvnode);\n\tif (exec_vnode == NULL) {\n\t\tlog_err(errno, __func__, \"strdup error\");\n\t\tgoto release_nodes_exit;\n\t}\n\n\texec_host = strdup(r_input->exechost);\n\tif (exec_host == NULL) {\n\t\tlog_err(errno, __func__, \"strdup error\");\n\t\tgoto release_nodes_exit;\n\t}\n\n\texec_host2 = strdup(r_input->exechost2);\n\tif (exec_host2 == NULL) {\n\t\tlog_err(errno, __func__, \"strdup error\");\n\t\tgoto release_nodes_exit;\n\t}\n\n#if !(defined(PBS_MOM) || defined(PBS_PYTHON))\n\tsched_select = expand_select_spec(r_input->schedselect);\n\tif (sched_select == NULL) {\n\t\tlog_err(errno, __func__, \"expand_select_spec error\");\n\t\tgoto release_nodes_exit;\n\t}\n\tif (!(new_schedselect = malloc(strlen(sched_select) + 1))) {\n\t\tlog_err(errno, __func__, \"new_schedselect malloc failed\");\n\t\tgoto release_nodes_exit;\n\t}\n\t*new_schedselect = '\\0';\n\n\tif (!(tmp_chunk_spec = malloc(strlen(r_input->schedselect) + 1))) {\n\t\tlog_err(errno, __func__, \"tmp_chunk_spec malloc failed\");\n\t\tgoto release_nodes_exit;\n\t}\n\t*tmp_chunk_spec = '\\0';\n\tres_in_exec_vnode = resources_seen(exec_vnode);\n#endif\n\n\tchunk_buf_sz = strlen(exec_vnode) + 1;\n\tchunk_buf = (char *) calloc(1, 
chunk_buf_sz);\n\tif (chunk_buf == NULL) {\n\t\tlog_err(errno, __func__, \"chunk_buf calloc error\");\n\t\tgoto release_nodes_exit;\n\t}\n\n\treliable_job_node_print(\"job failed_mom_list\",\n\t\t\t\tr_input2->failed_mom_list, PBSEVENT_DEBUG3);\n\treliable_job_node_print(\"job succeeded_mom_list\",\n\t\t\t\tr_input2->succeeded_mom_list, PBSEVENT_DEBUG3);\n\n\t/* now parse exec_vnode to build up the 'have' resources list */\n\tCLEAR_HEAD(resc_limit_list);\n\n\t/* There's a 1:1:1 mapping among exec_vnode parenthesized entries, exec_host, */\n\t/* and exec_host2,  */\n\tentry = 0;   /* exec_vnode entries */\n\th_entry = 0; /* exec_host* entries */\n\tparen = 0;\n\tprev_noden[0] = '\\0';\n\tk = 0;\n\tparent_mom = NULL;\n\tfor (chunk = parse_plus_spec_r(exec_vnode, &last, &hasprn),\n\t    chunk1 = parse_plus_spec_r(exec_host, &last1, &hasprn1),\n#if !(defined(PBS_MOM) || defined(PBS_PYTHON))\n\t    chunkschsel = parse_plus_spec_r(sched_select, &lastschsel, &hasprnschsel),\n#endif\n\t    chunk2 = parse_plus_spec_r(exec_host2, &last2, &hasprn2);\n\t     (chunk != NULL) && (chunk1 != NULL)\n#if !(defined(PBS_MOM) || defined(PBS_PYTHON))\n\t     && (chunkschsel != NULL)\n#endif\n\t     && (chunk2 != NULL);\n\t     chunk = parse_plus_spec_r(last, &last, &hasprn)) {\n\n\t\tparen += hasprn;\n\t\tstrncpy(chunk_buf, chunk, chunk_buf_sz - 1);\n\t\tif (parse_node_resc(chunk, &noden, &nelem, &pkvp) == 0) {\n\n#ifdef PBS_MOM\n\t\t\t/* see if previous entry already matches this */\n\t\t\tif ((strcmp(prev_noden, noden) != 0)) {\n\t\t\t\tvn_vmap = find_vmap_entry(noden);\n\t\t\t\tif (vn_vmap == NULL) { /* should not happen */\n\n\t\t\t\t\tif ((err_msg != NULL) && (err_msg_sz > 0)) {\n\t\t\t\t\t\tsnprintf(err_msg, err_msg_sz, \"no vmap entry for %s\", noden);\n\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, r_input->jobid, err_msg);\n\t\t\t\t\t}\n\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\t}\n\t\t\t\tif (vn_vmap->mvm_hostn != NULL) {\n\t\t\t\t\tparent_mom = 
vn_vmap->mvm_hostn;\n\t\t\t\t} else {\n\t\t\t\t\tparent_mom = vn_vmap->mvm_name;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (parent_mom == NULL) { /* should not happen */\n\n\t\t\t\tif ((err_msg != NULL) && (err_msg_sz > 0)) {\n\t\t\t\t\tsnprintf(err_msg, err_msg_sz, \"no parent_mom for %s\", noden);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, r_input->jobid, err_msg);\n\t\t\t\t}\n\t\t\t\tgoto release_nodes_exit;\n\t\t\t}\n\n\t\t\tstrncpy(prev_noden, noden, PBS_MAXNODENAME);\n#else\n\t\t\te2buf[0] = '\\0';\n\t\t\tif (r_input->vnodes_data != NULL) {\n\t\t\t\t/* see if previous entry already matches this */\n\t\t\t\tif ((strcmp(prev_noden, noden) != 0)) {\n\t\t\t\t\tchar key_buf[BUF_SIZE];\n\t\t\t\t\tsvrattrl *svrattrl_e;\n\n\t\t\t\t\tsnprintf(key_buf, BUF_SIZE, \"%s.resources_assigned\", noden);\n\t\t\t\t\tif ((svrattrl_e = find_svrattrl_list_entry(r_input->vnodes_data, key_buf, \"host,string\")) != NULL) {\n\t\t\t\t\t\tparent_mom = svrattrl_e->al_value;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tstrncpy(prev_noden, noden, PBS_MAXNODENAME);\n\t\t\t} else {\n\t\t\t\t/* see if previous entry already matches this */\n\t\t\t\tif ((pnode == NULL) ||\n\t\t\t\t    (strcmp(pnode->nd_name, noden) != 0)) {\n\t\t\t\t\tpnode = find_nodebyname(noden);\n\t\t\t\t}\n\n\t\t\t\tif (pnode == NULL) { /* should not happen */\n\t\t\t\t\tif ((err_msg != NULL) && (err_msg_sz > 0)) {\n\t\t\t\t\t\tsnprintf(err_msg, err_msg_sz, \"no node entry for %s\", noden);\n\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, r_input->jobid, err_msg);\n\t\t\t\t\t}\n\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\t}\n\n\t\t\t\tparent_mom = NULL;\n\n\t\t\t\tif (chunk2 && *chunk2) {\n\t\t\t\t\tchar *tmp;\n\t\t\t\t\tint i;\n\t\t\t\t\tsnprintf(e2buf, sizeof(e2buf), \"%s\", chunk2);\n\t\t\t\t\ttmp = strtok(e2buf, \":/\");\n\n\t\t\t\t\tfor (i = 0; tmp && (i < pnode->nd_nummoms); i++) {\n\t\t\t\t\t\tif ((strcmp(pnode->nd_moms[i]->mi_host, tmp) == 0)) {\n\t\t\t\t\t\t\tparent_mom = 
tmp;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (parent_mom == NULL) { /* should not happen */\n\n\t\t\t\tif ((err_msg != NULL) && (err_msg_sz > 0)) {\n\t\t\t\t\tsnprintf(err_msg, err_msg_sz, \"no parent_mom for %s\", noden);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_DEBUG, r_input->jobid, err_msg);\n\t\t\t\t}\n\t\t\t\tgoto release_nodes_exit;\n\t\t\t}\n#endif\n\n\t\t\tif (reliable_job_node_find(r_input2->succeeded_mom_list, parent_mom) != NULL) {\n\t\t\t\tif (entry > 0) { /* there's something */\n\t\t\t\t\t\t /* put in previously */\n\t\t\t\t\tif (have != NULL) {\n\t\t\t\t\t\tif (pbs_strcat(&have->chunkstr, &have->chunkstr_sz, \"+\") == NULL)\n\t\t\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif (((hasprn > 0) && (paren > 0)) ||\n\t\t\t\t    ((hasprn == 0) && (paren == 0))) {\n\t\t\t\t\t/* at the beginning of chunk for current host */\n\t\t\t\t\tif (!found_paren) {\n\n\t\t\t\t\t\tfree(have);\n\t\t\t\t\t\thave = (resc_limit_t *) malloc(sizeof(resc_limit_t));\n\t\t\t\t\t\tif (have == NULL) {\n\t\t\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\t\t\t}\n\t\t\t\t\t\t/* clear \"have\" counts */\n\t\t\t\t\t\tresc_limit_init(have);\n\n\t\t\t\t\t\tif (pbs_strcat(&have->chunkstr, &have->chunkstr_sz, \"(\") == NULL)\n\t\t\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\t\t\tfound_paren = 1;\n\n\t\t\t\t\t\tif (h_entry > 0) {\n\t\t\t\t\t\t\t/* there's already previous exec_host entry */\n\t\t\t\t\t\t\tif ((have->host_chunk[0].str != NULL) &&\n\t\t\t\t\t\t\t    (have->host_chunk[0].str[0] != '\\0')) {\n\t\t\t\t\t\t\t\tif (pbs_strcat(&have->host_chunk[0].str, &have->host_chunk[0].num, \"+\") == NULL)\n\t\t\t\t\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif ((have->host_chunk[1].str != NULL) &&\n\t\t\t\t\t\t\t    (have->host_chunk[1].str[0] != '\\0')) {\n\t\t\t\t\t\t\t\tif (pbs_strcat(&have->host_chunk[1].str, &have->host_chunk[1].num, \"+\") == NULL)\n\t\t\t\t\t\t\t\t\tgoto 
release_nodes_exit;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\n\t\t\t\t\t\tif (pbs_strcat(&have->host_chunk[0].str, &have->host_chunk[0].num, chunk1) == NULL)\n\t\t\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\t\t\tif (pbs_strcat(&have->host_chunk[1].str, &have->host_chunk[1].num, chunk2) == NULL)\n\t\t\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\t\t\th_entry++;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tif (have == NULL) {\n\t\t\t\t\tlog_err(-1, __func__, \"unexpected NULL 'have' value\");\n\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\t}\n\n\t\t\t\tif (!found_paren) {\n\t\t\t\t\tif (pbs_strcat(&have->chunkstr, &have->chunkstr_sz, \"(\") == NULL)\n\t\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\t\tfound_paren = 1;\n\n\t\t\t\t\tif (h_entry > 0) {\n\t\t\t\t\t\t/* there's already previous */\n\t\t\t\t\t\t/* exec_host entry */\n\t\t\t\t\t\tif ((have->host_chunk[0].str != NULL) &&\n\t\t\t\t\t\t    (have->host_chunk[0].str[0] != '\\0')) {\n\t\t\t\t\t\t\tif (pbs_strcat(&have->host_chunk[0].str, &have->host_chunk[0].num, \"+\") == NULL)\n\t\t\t\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif ((have->host_chunk[1].str != NULL) &&\n\t\t\t\t\t\t    (have->host_chunk[1].str[0] != '\\0')) {\n\t\t\t\t\t\t\tif (pbs_strcat(&have->host_chunk[1].str, &have->host_chunk[1].num, \"+\") == NULL)\n\t\t\t\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tif (pbs_strcat(&have->host_chunk[0].str, &have->host_chunk[0].num, chunk1) == NULL)\n\t\t\t\t\t\tgoto release_nodes_exit;\n\n\t\t\t\t\tif (pbs_strcat(&have->host_chunk[1].str, &have->host_chunk[1].num, chunk2) == NULL)\n\t\t\t\t\t\tgoto release_nodes_exit;\n\n\t\t\t\t\th_entry++;\n\t\t\t\t}\n\t\t\t\tif (pbs_strcat(&have->chunkstr, &have->chunkstr_sz, noden) == NULL)\n\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\tentry++;\n\n\t\t\t\tfor (j = 0; j < nelem; ++j) {\n\n#ifdef PBS_MOM\n\t\t\t\t\tresc_def = find_resc_def(svr_resc_def, pkvp[j].kv_keyw);\n\t\t\t\t\tif (resc_def == NULL) {\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t\tif 
(add_to_vnl(&vnl_good, noden, pkvp[j].kv_keyw, pkvp[j].kv_val) != 0) {\n\t\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\t\t}\n#endif\n\t\t\t\t\tsnprintf(buf, sizeof(buf),\n\t\t\t\t\t\t \":%s=%s\", pkvp[j].kv_keyw, pkvp[j].kv_val);\n\t\t\t\t\tif (pbs_strcat(&have->chunkstr, &have->chunkstr_sz, buf) == NULL)\n\t\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\t\tif (strcmp(pkvp[j].kv_keyw, \"ncpus\") == 0) {\n\t\t\t\t\t\thave->rl_ncpus += atol(pkvp[j].kv_val);\n\t\t\t\t\t} else if (strcmp(pkvp[j].kv_keyw, \"mem\") == 0) {\n\t\t\t\t\t\thave->rl_mem += to_kbsize(pkvp[j].kv_val);\n\t\t\t\t\t} else if (strcmp(pkvp[j].kv_keyw, \"vmem\") == 0) {\n\t\t\t\t\t\thave->rl_vmem += to_kbsize(pkvp[j].kv_val);\n\t\t\t\t\t} else if (strcmp(pkvp[j].kv_keyw, \"ssinodes\") == 0) {\n\t\t\t\t\t\thave->rl_ssi += atol(pkvp[j].kv_val);\n\t\t\t\t\t} else if (strcmp(pkvp[j].kv_keyw, \"naccelerators\") == 0) {\n\t\t\t\t\t\thave->rl_naccels += atol(pkvp[j].kv_val);\n\t\t\t\t\t} else if (\n\t\t\t\t\t\tstrcmp(pkvp[j].kv_keyw, \"accelerator_memory\") == 0) {\n\t\t\t\t\t\thave->rl_accel_mem += to_kbsize(pkvp[j].kv_val);\n#if !(defined(PBS_MOM) || defined(PBS_PYTHON))\n\t\t\t\t\t} else {\n\t\t\t\t\t\trc = resc_limit_insert_other_res(have, pkvp[j].kv_keyw, pkvp[j].kv_val, TRUE);\n\t\t\t\t\t\tif (rc != 0) {\n\t\t\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"failed to insert resource %s\", pkvp[j].kv_keyw);\n\t\t\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\t\t\t}\n#endif\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif (paren == 0) { /* have all chunks for current host */\n\n\t\t\t\t\tif (found_paren) {\n\t\t\t\t\t\tif (pbs_strcat(&have->chunkstr, &have->chunkstr_sz, \")\") == NULL)\n\t\t\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\t\t\tfound_paren = 0;\n\t\t\t\t\t}\n\n#if !(defined(PBS_MOM) || defined(PBS_PYTHON))\n\t\t\t\t\tif (!(have->chunkspec = strdup(chunkschsel + 2))) { /* +2 is to skip past '1:' */\n\t\t\t\t\t\tlog_err(errno, __func__, \"strdup 
error\");\n\t\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\t\t}\n\t\t\t\t\textra_res = return_missing_resources(chunkschsel,\n\t\t\t\t\t\t\t\t\t     res_in_exec_vnode);\n\t\t\t\t\tif ((extra_res) && (*extra_res)) {\n\t\t\t\t\t\tchar *word, *value, *last;\n\t\t\t\t\t\tint x;\n\t\t\t\t\t\tfor (x = parse_resc_equal_string(extra_res, &word, &value, &last);\n\t\t\t\t\t\t     x == 1;\n\t\t\t\t\t\t     x = parse_resc_equal_string(last, &word, &value, &last)) {\n\t\t\t\t\t\t\tif ((rc = resc_limit_insert_other_res(have, word, value, FALSE))) {\n\t\t\t\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"failed to insert resource %s\", word);\n\t\t\t\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\n\t\t\t\t\tif (have->rl_ncpus)\n\t\t\t\t\t\thave->rl_res_count++;\n\t\t\t\t\tif (have->rl_ssi)\n\t\t\t\t\t\thave->rl_res_count++;\n\t\t\t\t\tif (have->rl_mem)\n\t\t\t\t\t\thave->rl_res_count++;\n\t\t\t\t\tif (have->rl_vmem)\n\t\t\t\t\t\thave->rl_res_count++;\n\t\t\t\t\tif (have->rl_naccels)\n\t\t\t\t\t\thave->rl_res_count++;\n\t\t\t\t\tif (have->rl_accel_mem)\n\t\t\t\t\t\thave->rl_res_count++;\n#endif\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tif (paren == 0) { /* have all chunks for current host */\n\n\t\t\t\t\tif (found_paren) {\n\t\t\t\t\t\tif (have == NULL) {\n\t\t\t\t\t\t\tlog_err(-1, __func__, \"unexpected NULL 'have' value\");\n\t\t\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (pbs_strcat(&have->chunkstr, &have->chunkstr_sz, \")\") == NULL)\n\t\t\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\t\t\tfound_paren = 0;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\t/* only save in 'failed_nodes' those nodes that are non in good mom_List but in failed_mom_list */\n\t\t\t\tif ((r_input2->failed_vnodes != NULL) && (reliable_job_node_find(r_input2->failed_mom_list, parent_mom) != NULL)) {\n\t\t\t\t\tfor (j = 0; j < nelem; ++j) {\n\t\t\t\t\t\tif (add_to_vnl(&vnl_fails, noden, pkvp[j].kv_keyw, pkvp[j].kv_val) != 
0) {\n\t\t\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (hasprn < 0) {\n\t\t\t\t/* matched ')' in chunk, so need to */\n\t\t\t\t/* balance the parenthesis */\n\t\t\t\tif (found_paren) {\n\t\t\t\t\tif (have == NULL) {\n\t\t\t\t\t\tlog_err(-1, __func__, \"unexpected NULL 'have' value\");\n\t\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\t\t}\n\t\t\t\t\tif (pbs_strcat(&have->chunkstr, &have->chunkstr_sz, \")\") == NULL)\n\t\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\t\tfound_paren = 0;\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\tlog_err(-1, __func__, \"parse_node_resc error\");\n\t\t\tgoto release_nodes_exit;\n\t\t}\n\n\t\tif (paren == 0) {\n\t\t\tif (k == 0) {\n\t\t\t\thave0 = have;\n\t\t\t\thave = NULL;\n\t\t\t} else if (add_to_resc_limit_list_sorted(&resc_limit_list, have) == 0) {\n\t\t\t\thave = NULL;\n\t\t\t\t/* already saved in list */\n\t\t\t} else if (have != NULL) {\n\t\t\t\tlog_err(-1, __func__, \"problem saving 'have' value\");\n\t\t\t\tgoto release_nodes_exit;\n\t\t\t}\n\t\t\tchunk1 = parse_plus_spec_r(last1, &last1,\n\t\t\t\t\t\t   &hasprn1),\n\t\t\tchunk2 = parse_plus_spec_r(last2, &last2,\n\t\t\t\t\t\t   &hasprn2);\n#if !(defined(PBS_MOM) || defined(PBS_PYTHON))\n\t\t\tchunkschsel = parse_plus_spec_r(lastschsel, &lastschsel,\n\t\t\t\t\t\t\t&hasprnschsel);\n#endif\n\t\t\tk++;\n\t\t}\n\t}\n\t/* First chunk in the 'have' list must always be the first */\n\t/* entry since it pertains to the MS */\n\n\tif (have0 != NULL) {\n\t\tif (add_to_resc_limit_list_as_head(&resc_limit_list, have0) == 0) {\n\t\t\thave0 = NULL; /* already saved in list */\n\t\t}\n\t}\n\n\tnew_exec_vnode = (char *) calloc(1, strlen(r_input->execvnode) + 1);\n\tif (new_exec_vnode == NULL) {\n\t\tlog_err(-1, __func__, \"calloc error\");\n\t\tgoto release_nodes_exit;\n\t}\n\tnew_exec_vnode[0] = '\\0';\n\n\tif (r_input->exechost != NULL) {\n\t\tnew_exec_host = (char *) calloc(1, strlen(r_input->exechost) + 1);\n\t\tif (new_exec_host == NULL) 
{\n\t\t\tlog_err(-1, __func__, \"calloc error\");\n\t\t\tgoto release_nodes_exit;\n\t\t}\n\t\tnew_exec_host[0] = '\\0';\n\t}\n\n\tif (r_input->exechost2 != NULL) {\n\t\tnew_exec_host2 = (char *) calloc(1, strlen(r_input->exechost2) + 1);\n\t\tif (new_exec_host2 == NULL) {\n\t\t\tlog_err(-1, __func__, \"calloc error\");\n\t\t\tgoto release_nodes_exit;\n\t\t}\n\t\tnew_exec_host2[0] = '\\0';\n\t}\n\n\tif (r_input2->select_str == NULL) {\n\t\t/* not satisfying some schedselect */\n\t\trc = 0;\n\t\tgoto release_nodes_exit;\n\t}\n\n\t/* save vnl_good for logging later, as we now try to satisfy */\n\t/* select_str */\n\tvnl_good_master = vnl_good;\n\tvnl_good = NULL;\n\n\tselbuf = strdup(r_input2->select_str);\n\tif (selbuf == NULL) {\n\t\tlog_err(-1, __func__, \"strdup failed\");\n\t\tgoto release_nodes_exit;\n\t}\n\tresc_limit_list_print(\"HAVE\", &resc_limit_list, PBSEVENT_DEBUG4);\n\treliable_job_node_print(\"job failed_mom_list\", r_input2->failed_mom_list, PBSEVENT_DEBUG4);\n\treliable_job_node_print(\"job succeeded_mom_list\", r_input2->succeeded_mom_list, PBSEVENT_DEBUG4);\n\n\t/* (1) parse chunk from select spec */\n\tpsubspec = parse_plus_spec_r(selbuf, &last3, &hasprn3);\n\th = 0; /* tracks # of plus entries in select */\n\tl = 0; /* tracks # of chunks in new_exec_vnode */\n\twhile (psubspec) {\n\t\trc = parse_chunk_r(psubspec, &snc, &snelma, &snelmt, &skv, NULL);\n\t\t/* snc = number of chunks */\n\t\tif (rc != 0)\n\t\t\tgoto release_nodes_exit;\n\n\t\tfor (i = 0; i < snc; ++i) { /* for each chunk in select.. 
*/\n\t\t\tchar *have_exec_vnode;\n\n\t\t\t/* clear \"need\" counts */\n\t\t\tmemset(&need, 0, sizeof(need));\n\t\t\tresc_limit_init(&need);\n\n\t\t\t/* figure out what is \"need\"ed */\n\t\t\tfor (j = 0; j < snelma; ++j) {\n\t\t\t\tif (strcmp(skv[j].kv_keyw, \"ncpus\") == 0)\n\t\t\t\t\tneed.rl_ncpus = atol(skv[j].kv_val);\n\t\t\t\telse if (strcmp(skv[j].kv_keyw, \"ssinodes\") == 0)\n\t\t\t\t\tneed.rl_ssi = atol(skv[j].kv_val);\n\t\t\t\telse if (strcmp(skv[j].kv_keyw, \"mem\") == 0)\n\t\t\t\t\tneed.rl_mem = to_kbsize(skv[j].kv_val);\n\t\t\t\telse if (strcmp(skv[j].kv_keyw, \"vmem\") == 0)\n\t\t\t\t\tneed.rl_vmem = to_kbsize(skv[j].kv_val);\n\t\t\t\telse if (strcmp(skv[j].kv_keyw, \"naccelerators\") == 0)\n\t\t\t\t\tneed.rl_naccels = atol(skv[j].kv_val);\n\t\t\t\telse if (strcmp(skv[j].kv_keyw, \"accelerator_memory\") == 0)\n\t\t\t\t\tneed.rl_accel_mem = to_kbsize(skv[j].kv_val);\n#if !(defined(PBS_MOM) || defined(PBS_PYTHON))\n\t\t\t\telse {\n\t\t\t\t\tif ((rc = resc_limit_insert_other_res(&need, skv[j].kv_keyw, skv[j].kv_val, FALSE))) {\n\t\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"failed to insert resource %s\", skv[j].kv_keyw);\n\t\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t\t\tgoto release_nodes_exit;\n\t\t\t\t\t}\n\t\t\t\t}\n#endif\n\t\t\t}\n\n\t\t\t/* go through the list of chunk resources */\n\t\t\t/* we have and find a matching chunk */\n\t\t\t/* If none matched, then we return */\n\t\t\tp_entry = (rl_entry *) GET_NEXT(resc_limit_list);\n\t\t\tmatched = 0; /* set to 1 if an entry is matched for current select chunk */\n\t\t\tk = 0;\n\t\t\twhile (p_entry) {\n\t\t\t\thave2 = p_entry->resc;\n\t\t\t\tnew_chunkstr = satisfy_chunk_need(&need, have2, &vnl_good);\n\t\t\t\tif (new_chunkstr != NULL) {\n#if !(defined(PBS_MOM) || defined(PBS_PYTHON))\n\t\t\t\t\tappend_and_group_sched_sel(new_schedselect, have2->chunkspec, tmp_chunk_spec, &tmp_chunk_ct);\n#endif\n\t\t\t\t\tif (l > 0) {\n\t\t\t\t\t\tstrcat(new_exec_vnode, \"+\");\n\t\t\t\t\t\tif (have2->host_chunk[0].str) 
{\n\t\t\t\t\t\t\tif (new_exec_host != NULL)\n\t\t\t\t\t\t\t\tstrcat(new_exec_host, \"+\");\n\t\t\t\t\t\t}\n\t\t\t\t\t\tif (have2->host_chunk[1].str) {\n\t\t\t\t\t\t\tif (new_exec_host2 != NULL)\n\t\t\t\t\t\t\t\tstrcat(new_exec_host2, \"+\");\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tstrcat(new_exec_vnode, new_chunkstr);\n\t\t\t\t\tfree(have2->chunkstr);\n\t\t\t\t\thave2->chunkstr = NULL;\n\n\t\t\t\t\tif (have2->host_chunk[0].str) {\n\t\t\t\t\t\tif (new_exec_host != NULL)\n\t\t\t\t\t\t\tstrcat(new_exec_host, have2->host_chunk[0].str);\n\t\t\t\t\t\tfree(have2->host_chunk[0].str);\n\t\t\t\t\t\thave2->host_chunk[0].str = NULL;\n\t\t\t\t\t}\n\n\t\t\t\t\tif (have2->host_chunk[1].str) {\n\t\t\t\t\t\tif (new_exec_host2 != NULL)\n\t\t\t\t\t\t\tstrcat(new_exec_host2, have2->host_chunk[1].str);\n\t\t\t\t\t\tfree(have2->host_chunk[1].str);\n\t\t\t\t\t\thave2->host_chunk[1].str = NULL;\n\t\t\t\t\t}\n\t\t\t\t\tmatched = 1;\n\t\t\t\t\tl++;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tk++;\n\t\t\t\tp_entry = (rl_entry *) GET_NEXT(p_entry->rl_link);\n\t\t\t}\n\t\t\tif (matched == 0) {\n\t\t\t\t/* did not find a matching chunk */\n\t\t\t\t/* for current select chunk. 
*/\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"could not satisfy select \"\n\t\t\t\t\t \"chunk (ncpus=%d \"\n\t\t\t\t\t \"ssi=%d \"\n\t\t\t\t\t \"mem=%lld vmem=%lld naccels=%d \"\n\t\t\t\t\t \"accel_mem=%lld)\",\n\t\t\t\t\t need.rl_ncpus, need.rl_ssi, need.rl_mem,\n\t\t\t\t\t need.rl_vmem, need.rl_naccels,\n\t\t\t\t\t need.rl_accel_mem);\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_ERR, r_input->jobid, log_buffer);\n#if !(defined(PBS_MOM) || defined(PBS_PYTHON))\n\t\t\t\tfor (pres = (resource *) GET_NEXT(need.rl_other_res);\n\t\t\t\t     pres != NULL;\n\t\t\t\t     pres = (resource *) GET_NEXT(pres->rs_link)) {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"(%s=%s)\", pres->rs_defin->rs_name, pres->rs_value.at_priv_encoded->al_value);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_ERR, r_input->jobid, log_buffer);\n\t\t\t\t}\n#endif\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"NEED chunks for keep_select (%s)\", (r_input2->select_str ? r_input2->select_str : \"\"));\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_ERR, r_input->jobid, log_buffer);\n\t\t\t\thave_exec_vnode = return_available_vnodes(r_input->execvnode, vnl_good_master);\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"HAVE chunks from job's exec_vnode: %s\", have_exec_vnode ? 
have_exec_vnode : \"none\");\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_ERR, r_input->jobid, log_buffer);\n\t\t\t\tresc_limit_list_print(\"HAVE\", &resc_limit_list, PBSEVENT_DEBUG3);\n\t\t\t\treliable_job_node_print(\"job failed_mom_list\", r_input2->failed_mom_list, PBSEVENT_DEBUG3);\n\t\t\t\treliable_job_node_print(\"job succeeded_mom_list\", r_input2->succeeded_mom_list, PBSEVENT_DEBUG3);\n\t\t\t\trc = 1;\n\t\t\t\tgoto release_nodes_exit;\n\t\t\t} else if ((h == 0) && (i == 0) && (k != 0)) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"could not satisfy 1st \"\n\t\t\t\t\t \"select chunk (ncpus=%d \"\n\t\t\t\t\t \"ssi=%d \"\n\t\t\t\t\t \"mem=%lld vmem=%lld naccels=%d \"\n\t\t\t\t\t \"accel_mem=%lld) \"\n\t\t\t\t\t \" with first available chunk\",\n\t\t\t\t\t need.rl_ncpus, need.rl_ssi, need.rl_mem,\n\t\t\t\t\t need.rl_vmem, need.rl_naccels,\n\t\t\t\t\t need.rl_accel_mem);\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_ERR, r_input->jobid, log_buffer);\n#if !(defined(PBS_MOM) || defined(PBS_PYTHON))\n\t\t\t\tfor (pres = (resource *) GET_NEXT(need.rl_other_res);\n\t\t\t\t     pres != NULL;\n\t\t\t\t     pres = (resource *) GET_NEXT(pres->rs_link)) {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"(%s=%s)\", pres->rs_defin->rs_name, pres->rs_value.at_priv_encoded->al_value);\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_ERR, r_input->jobid, log_buffer);\n\t\t\t\t}\n#endif\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"NEED chunks for keep_select (%s)\", (r_input2->select_str ? r_input2->select_str : \"\"));\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_ERR, r_input->jobid, log_buffer);\n\t\t\t\thave_exec_vnode = return_available_vnodes(r_input->execvnode, vnl_good_master);\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"HAVE chunks from job's exec_vnode: %s\", have_exec_vnode ? 
have_exec_vnode : \"none\");\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_JOB, LOG_ERR, r_input->jobid, log_buffer);\n\t\t\t\tresc_limit_list_print(\"HAVE\", &resc_limit_list, PBSEVENT_DEBUG3);\n\n\t\t\t\treliable_job_node_print(\"job failed_mom_list\", r_input2->failed_mom_list, PBSEVENT_DEBUG3);\n\t\t\t\treliable_job_node_print(\"job succeeded_mom_list\", r_input2->succeeded_mom_list, PBSEVENT_DEBUG3);\n\t\t\t\trc = 1;\n\t\t\t\tgoto release_nodes_exit;\n\t\t\t}\n\t\t\tresc_limit_free(&need);\n\t\t}\n\n\t\t/* do next section of select */\n\t\tpsubspec = parse_plus_spec_r(last3, &last3, &hasprn3);\n\t\th++;\n\t}\n\n\tif (strcmp(new_exec_vnode, r_input->execvnode) == 0) {\n\t\t/* no change, don't bother setting the new_* return values */\n\t\trc = 0;\n\t\tgoto release_nodes_exit;\n\t}\n\n#if !(defined(PBS_MOM) || defined(PBS_PYTHON))\n\tappend_and_group_sched_sel(new_schedselect, \"\", tmp_chunk_spec, &tmp_chunk_ct);\n\trc = do_schedselect(new_schedselect, NULL, NULL, NULL, &tmpstr);\n\tfree(new_schedselect);\n#else\n\trc = do_schedselect(r_input2->select_str, NULL, NULL, NULL, &tmpstr);\n#endif\n\tif (rc != 0) {\n\t\trc = 1;\n\t\tgoto release_nodes_exit;\n\t}\n\tnew_schedselect = strdup(tmpstr);\n\tif (new_schedselect == NULL) {\n\t\tlog_err(errno, __func__, msg_err_malloc);\n\t\trc = 1;\n\t\tgoto release_nodes_exit;\n\t}\n\n\trc = 0;\nrelease_nodes_exit:\n\tfree(exec_vnode);\n\tfree(exec_host);\n\tfree(exec_host2);\n#if !(defined(PBS_MOM) || defined(PBS_PYTHON))\n\tfree(sched_select);\n\tfree(res_in_exec_vnode);\n\tif (tmp_chunk_spec)\n\t\tfree(tmp_chunk_spec);\n#endif\n\tfree(chunk_buf);\n\tresc_limit_list_free(&resc_limit_list);\n\tresc_limit_free(have);\n\tfree(have);\n\tresc_limit_free(&need);\n\tresc_limit_free(have0);\n\tfree(have0);\n\tfree(selbuf);\n\tvnl_free(vnl_good_master);\n\n\tif ((rc != 0) || (strcmp(r_input->execvnode, new_exec_vnode) == 0)) {\n\t\t/* error or if there was no change 
*/\n\t\tfree(new_exec_vnode);\n\t\tfree(new_exec_host);\n\t\tfree(new_exec_host2);\n\t\tfree(new_schedselect);\n\t\tvnl_free(vnl_fails);\n\t\tvnl_free(vnl_good);\n\t} else if (r_input2->select_str == NULL) {\n\t\tif (r_input2->failed_vnodes != NULL) {\n\t\t\t*(r_input2->failed_vnodes) = vnl_fails;\n\t\t} else {\n\t\t\tvnl_free(vnl_fails);\n\t\t}\n\n\t\tif (r_input2->good_vnodes != NULL) {\n\t\t\t*(r_input2->good_vnodes) = vnl_good;\n\t\t} else {\n\t\t\tvnl_free(vnl_good);\n\t\t}\n\t\tfree(new_exec_vnode);\n\t\tfree(new_exec_host);\n\t\tfree(new_exec_host2);\n\t\tfree(new_schedselect);\n\n\t} else {\n\t\tif (r_input2->failed_vnodes != NULL) {\n\t\t\t*(r_input2->failed_vnodes) = vnl_fails;\n\t\t} else {\n\t\t\tvnl_free(vnl_fails);\n\t\t}\n\n\t\tif (r_input2->good_vnodes != NULL) {\n\t\t\t*(r_input2->good_vnodes) = vnl_good;\n\t\t} else {\n\t\t\tvnl_free(vnl_good);\n\t\t}\n\t\tif (r_input->p_new_exec_vnode != NULL)\n\t\t\t*(r_input->p_new_exec_vnode) = new_exec_vnode;\n\t\tif ((r_input->p_new_exec_host[0] != NULL) && (new_exec_host != NULL))\n\t\t\t*(r_input->p_new_exec_host[0]) = new_exec_host;\n\t\tif ((r_input->p_new_exec_host[1] != NULL) && (new_exec_host2 != NULL))\n\t\t\t*(r_input->p_new_exec_host[1]) = new_exec_host2;\n\t\tif (r_input->p_new_schedselect != NULL)\n\t\t\t*(r_input->p_new_schedselect) = new_schedselect;\n\t}\n\n\treturn (rc);\n}\n\n/**\n * @brief\n *\t\tstrcat_grow\n *\n * @par Functionality:\n *\t\tIf the buffer, buf, whose size is lenbuf, is too small to concatenate source,\n *\t\tincrease the size of buf by the length of \"source\" plus an extra\n *\t\tPBS_STRCAT_GROW_INCR bytes.\n *\t\tMakes sure there are at least PBS_STRCAT_GROW_MIN free bytes\n *\t\tin \"buf\" for simple one- or two-byte direct additions.\n *\t\tAssumes both the current string in buf and source are null terminated.\n *\n * @param[in,out]\tbuf    -\tpointer to the address of the buffer, may be updated.\n * @param[in,out]\tcurr   -\tcurrent point to which source string will be\n 
*\t\t\t\t\t\t\tconcatenated.  This is within the current string or at the end\n *\t\t\t\t\t\t\tof the current string in \"buf\"; there may be data after\n *\t\t\t\t\t\t\t\"curr\" in the buffer.\n * @param[in,out]\tlenbuf -\tcurrent size of buf, may be updated.\n * @param[in]\tsource - \tstring which is to be concatenated to \"curr\".\n *\n * @return\tint\n * @retval\t 0\t: success\n * @retval\t-1\t: realloc failure, out of memory\n *\n * @par Side-effect:\tIf the buffer is increased in size, \"buf\", \"curr\" and \"lenbuf\"\n *\t\t\t\t\t\tare updated.\n *\n * @par MT-safe:\tNo\n *\n * @par\tFuture extension\t- This function is currently designed as a drop-in\n *\tfor make_schedselect().  It could be simplified for more general use\n *\tby removing the use of \"curr\".\n */\n\n#define PBS_STRCAT_GROW_MIN 16\n#define PBS_STRCAT_GROW_INCR 512\nstatic int\nstrcat_grow(char **buf, char **curr, size_t *lenbuf, char *source)\n{\n\n\tsize_t add;\n\tsize_t currsize;\n\tssize_t delta;\n\tif ((lenbuf == NULL) || (curr == NULL) || (buf == NULL) || (source == NULL))\n\t\treturn -1;\n\n\tcurrsize = *lenbuf;\n\tdelta = *curr - *buf; /* offset in buffer */\n\tadd = strlen(source);\n\tif ((delta + strlen(*curr) + add + PBS_STRCAT_GROW_MIN) >= currsize) {\n\t\t/* need to grow buffer */\n\t\tchar *newbuf;\n\t\tsize_t newlen;\n\n\t\tnewlen = currsize + add + PBS_STRCAT_GROW_INCR;\n\n\t\tnewbuf = realloc((void *) *buf, newlen);\n\t\tif (newbuf) {\n\t\t\t*buf = newbuf;\n\t\t\t*curr = newbuf + delta;\n\t\t\t*lenbuf = newlen;\n\t\t} else {\n\t\t\treturn -1; /* error */\n\t\t}\n\t}\n\t(void) strcat(*curr, source);\n\treturn 0;\n}\n/*\n *\n * @note\n *\tThe return value *p_sched_select points into a static, dynamically\n *\tallocated buffer that will get overwritten by the next call.\n */\n/**\n * @par\n * \t\tDecode a selection specification, and produce\n *\t\tthe \"schedselect\" attribute which contains any default resources\n *\t\tmissing from the chunks in the select spec.\n *\t\tAlso translates the value of any 
boolean resource to the \"formal\"\n *\t\tvalue of \"True\" or \"False\" for the Scheduler who needs to know it\n *\t\tis a boolean and not a string or number.\n *\n *\t@param[in]\tselect_val -\tthe select specification being decoded\n * \t@param[in]\tserver -\tused to obtain server defaults\n * \t@param[in]\tdestin -\tused to obtain queue defaults\n *\t@param[out]\tpresc_in_err -\terror information filled in here\n *\t@param[out]\tp_schedselect -\tthe resulting schedselect value.\n *\n *\t@return\tint\n *\t@retval\t0\t: success\n *\t@retval\tPBSE_Error\t: Error Code.\n *\n *\t@par MT-safe:\tNo.\n */\nint\ndo_schedselect(char *select_val, void *server, void *destin, char **presc_in_err, char **p_sched_select)\n{\n\tchar *chunk;\n\tint i;\n\tint firstchunk;\n\tsize_t len;\n\tint nchk;\n\tint already_set = 0;\n\tint nchunk_internally_set;\n\tint nelem;\n\tstatic char *outbuf = NULL;\n\tstruct key_value_pair *pkvp;\n\tstruct key_value_pair *qdkvp;\n\tint qndft;\n\tstruct key_value_pair *sdkvp;\n\tint sndft;\n\tchar *quotec;\n\tresource_def *presc;\n\tchar *pc;\n\tint rc;\n\tchar *tb;\n\tint validate_resource_exist = 0;\n\tstatic size_t bufsz = 0;\n\tstruct server *pserver = NULL;\n\tpbs_queue *pque = NULL;\n\n\tif ((select_val == NULL) || (p_sched_select == NULL)) {\n\t\treturn (PBSE_SYSTEM);\n\t}\n\n\tpserver = server;\n\tpque = destin;\n\n\t/* allocate or realloc bigger the out buffer for parsing */\n\tif ((len = (strlen(select_val) + 100)) >= (bufsz >> 1)) {\n\t\tlen = (2 * len) + 500 + bufsz;\n\t\tif (bufsz) {\n\t\t\ttb = (char *) realloc(outbuf, len);\n\t\t\tif (tb == NULL)\n\t\t\t\treturn PBSE_SYSTEM;\n\t\t\toutbuf = tb;\n\t\t} else {\n\t\t\toutbuf = (char *) malloc(len);\n\t\t\tif (outbuf == NULL)\n\t\t\t\treturn PBSE_SYSTEM;\n\t\t}\n\t\tbufsz = len;\n\t}\n\n\tif (pque == NULL || pque->qu_qs.qu_type == QTYPE_Execution)\n\t\tvalidate_resource_exist = 1;\n\n\t*outbuf = '\\0';\n\t/* copy input, the string will be broken during parsing */\n\tfirstchunk = 
1;\n\tchunk = parse_plus_spec(select_val, &rc); /* break '+' separated substrings */\n\tif (rc != 0)\n\t\treturn rc;\n\twhile (chunk) {\n\t\tif (firstchunk)\n\t\t\tfirstchunk = 0;\n\t\telse\n\t\t\tstrcat(outbuf, \"+\");\n\n\t\tif (parse_chunk(chunk, &nchk, &nelem, &pkvp, &nchunk_internally_set) == 0) {\n\t\t\tint j;\n\n\t\t\t/* first check for any invalid resources in the select */\n\t\t\tfor (j = 0; j < nelem; ++j) {\n\n\t\t\t\t/* see if resource is repeated within the chunk - an err */\n\t\t\t\tfor (i = 0; i < j; ++i) {\n\t\t\t\t\tif (strcmp(pkvp[j].kv_keyw, pkvp[i].kv_keyw) == 0) {\n\t\t\t\t\t\tif (presc_in_err != NULL) {\n\t\t\t\t\t\t\tif ((*presc_in_err = strdup(pkvp[j].kv_keyw)) == NULL)\n\t\t\t\t\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t\t\t\t}\n\t\t\t\t\t\treturn PBSE_DUPRESC;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tpresc = find_resc_def(svr_resc_def, pkvp[j].kv_keyw);\n\t\t\t\tif (presc) {\n\t\t\t\t\tif ((presc->rs_flags & ATR_DFLAG_CVTSLT) == 0) {\n\t\t\t\t\t\tif (presc_in_err != NULL) {\n\t\t\t\t\t\t\tif ((*presc_in_err = strdup(pkvp[j].kv_keyw)) == NULL)\n\t\t\t\t\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t\t\t\t}\n\t\t\t\t\t\treturn PBSE_INVALSELECTRESC;\n\t\t\t\t\t}\n\t\t\t\t} else if (validate_resource_exist) {\n\t\t\t\t\tif (presc_in_err != NULL) {\n\t\t\t\t\t\tif ((*presc_in_err = strdup(pkvp[j].kv_keyw)) == NULL)\n\t\t\t\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t\t\t}\n\t\t\t\t\treturn PBSE_UNKRESC;\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tpc = outbuf + strlen(outbuf);\n\n\t\t\t/* add in any defaults, first from the queue,... 
*/\n\t\t\t/* then add in any defaults from the server */\n\n\t\t\tif (pque) {\n\t\t\t\tqndft = pque->qu_nseldft;\n\t\t\t\tqdkvp = pque->qu_seldft;\n\t\t\t\trc = parse_chunk_make_room(nelem, qndft, &pkvp);\n\t\t\t\tif (rc)\n\t\t\t\t\treturn rc;\n\t\t\t\tfor (i = 0; i < qndft; ++i) {\n\t\t\t\t\tfor (j = 0; j < nelem; ++j) {\n\t\t\t\t\t\tif (strcasecmp(qdkvp[i].kv_keyw, pkvp[j].kv_keyw) == 0)\n\t\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\tif (j == nelem) {\n\t\t\t\t\t\t/* check to see if the value is \"nchunk\" */\n\t\t\t\t\t\t/* If nchunk_internally_set is set, then */\n\t\t\t\t\t\t/* the user did not specify a chunk size in the */\n\t\t\t\t\t\t/* select line.  Set nchk to the \"nchunk\" value  */\n\t\t\t\t\t\tif (strcasecmp(qdkvp[i].kv_keyw, \"nchunk\") == 0) {\n\t\t\t\t\t\t\tif (nchunk_internally_set) {\n\t\t\t\t\t\t\t\tnchk = atoi(qdkvp[i].kv_val);\n\t\t\t\t\t\t\t\talready_set = 1;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t/* Add in the defaults from the Queue */\n\t\t\t\t\t\t\tpkvp[nelem].kv_keyw = qdkvp[i].kv_keyw;\n\t\t\t\t\t\t\tpkvp[nelem++].kv_val = qdkvp[i].kv_val;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tif (pserver != NULL) {\n\t\t\t\tsndft = pserver->sv_nseldft;\n\t\t\t\tsdkvp = pserver->sv_seldft;\n\t\t\t\trc = parse_chunk_make_room(nelem, sndft, &pkvp);\n\t\t\t\tif (rc)\n\t\t\t\t\treturn rc;\n\t\t\t} else {\n\t\t\t\tsndft = 0;\n\t\t\t\tsdkvp = NULL;\n\t\t\t}\n\t\t\tfor (i = 0; i < sndft; ++i) {\n\t\t\t\tfor (j = 0; j < nelem; ++j) {\n\t\t\t\t\tif (strcasecmp(sdkvp[i].kv_keyw, pkvp[j].kv_keyw) == 0)\n\t\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tif (j == nelem) {\n\t\t\t\t\t/* check to see if the value is \"nchunk\" */\n\t\t\t\t\t/* If nchunk_internally_set is set, then    */\n\t\t\t\t\t/* the user did not specify a chunk size in the */\n\t\t\t\t\t/* select line, so set nchk to the \"nchunk\" value  */\n\t\t\t\t\tif (strcasecmp(sdkvp[i].kv_keyw, \"nchunk\") == 0) {\n\t\t\t\t\t\tif (nchunk_internally_set && 
(!already_set))\n\t\t\t\t\t\t\tnchk = atoi(sdkvp[i].kv_val);\n\t\t\t\t\t} else {\n\t\t\t\t\t\t/* Add in the defaults from the Server */\n\t\t\t\t\t\tpkvp[nelem].kv_keyw = sdkvp[i].kv_keyw;\n\t\t\t\t\t\tpkvp[nelem++].kv_val = sdkvp[i].kv_val;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t\tsprintf(pc, \"%d\", nchk);\n\t\t\tif (nelem > 0) {\n\t\t\t\t/*\n\t\t\t\t * if the resource is known to be of type boolean, then\n\t\t\t\t * replace its value with exactly \"True\" or \"False\" as\n\t\t\t\t * appropriate for the Scheduler.  Then rebuild it in\n\t\t\t\t * the out buf.\n\t\t\t\t */\n\t\t\t\tpresc = find_resc_def(svr_resc_def, pkvp[0].kv_keyw);\n\t\t\t\tfor (i = 0; i < nelem; ++i) {\n\t\t\t\t\tstrcat(pc, \":\");\n\t\t\t\t\tif (strcat_grow(&outbuf, &pc, &bufsz, pkvp[i].kv_keyw) == -1)\n\t\t\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t\t\tstrcat(pc, \"=\");\n\t\t\t\t\tpresc = find_resc_def(svr_resc_def, pkvp[i].kv_keyw);\n\t\t\t\t\tif (presc && (presc->rs_type == ATR_TYPE_BOOL)) {\n\t\t\t\t\t\tj = is_true_or_false(pkvp[i].kv_val);\n\t\t\t\t\t\tif (j == 1)\n\t\t\t\t\t\t\tstrcat(pc, ATR_TRUE);\n\t\t\t\t\t\telse if (j == 0)\n\t\t\t\t\t\t\tstrcat(pc, ATR_FALSE);\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\treturn PBSE_BADATVAL;\n\n\t\t\t\t\t} else {\n\t\t\t\t\t\tif (presc && (presc->rs_type == ATR_TYPE_SIZE)) {\n\t\t\t\t\t\t\tif (strcat_grow(&outbuf, &pc, &bufsz, pkvp[i].kv_val) == -1)\n\t\t\t\t\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t\t\t\t\ttb = pkvp[i].kv_val + strlen(pkvp[i].kv_val) - 1;\n\t\t\t\t\t\t\tif (*tb != 'b' && *tb != 'w' &&\n\t\t\t\t\t\t\t    *tb != 'B' && *tb != 'W')\n\t\t\t\t\t\t\t\tstrcat(pc, \"b\");\n\t\t\t\t\t\t} else if (presc &&\n\t\t\t\t\t\t\t   ((presc->rs_type == ATR_TYPE_STR) ||\n\t\t\t\t\t\t\t    (presc->rs_type == ATR_TYPE_ARST))) {\n\t\t\t\t\t\t\tif (strpbrk(pkvp[i].kv_val, \"\\\"'+:=()\")) {\n\t\t\t\t\t\t\t\tif (strchr(pkvp[i].kv_val, (int) '\"'))\n\t\t\t\t\t\t\t\t\tquotec = \"'\";\n\t\t\t\t\t\t\t\telse\n\t\t\t\t\t\t\t\t\tquotec = \"\\\"\";\n\t\t\t\t\t\t\t\tstrcat(pc, 
quotec);\n\t\t\t\t\t\t\t\tif (strcat_grow(&outbuf, &pc, &bufsz, pkvp[i].kv_val) == -1)\n\t\t\t\t\t\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t\t\t\t\t\tstrcat(pc, quotec);\n\t\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\t\tif (strcat_grow(&outbuf, &pc, &bufsz, pkvp[i].kv_val) == -1)\n\t\t\t\t\t\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t\t\t\t\t}\n\n\t\t\t\t\t\t} else {\n\t\t\t\t\t\t\tif (strcat_grow(&outbuf, &pc, &bufsz, pkvp[i].kv_val) == -1)\n\t\t\t\t\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else\n\t\t\t\treturn (PBSE_INVALSELECTRESC);\n\n\t\t} else {\n\t\t\tif (presc_in_err != NULL) {\n\t\t\t\tif ((*presc_in_err = strdup(chunk)) == NULL)\n\t\t\t\t\treturn PBSE_SYSTEM;\n\t\t\t}\n\t\t\treturn (PBSE_UNKRESC);\n\t\t}\n\t\tchunk = parse_plus_spec(NULL, &rc);\n\t\tif (rc != 0)\n\t\t\treturn (rc);\n\t}\n\n\t*p_sched_select = outbuf;\n\treturn 0;\n}\n"
  },
  {
    "path": "src/tools/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nbin_PROGRAMS = \\\n\tpbs_hostn \\\n\tpbs_python \\\n\tpbs_tclsh \\\n\tpbs_wish \\\n\tprintjob.bin \\\n\tprintjob_svr.bin \\\n\ttracejob \\\n\tpbs_sleep\n\nsbin_PROGRAMS = \\\n\tpbs_ds_monitor \\\n\tpbs_idled \\\n\tpbs_probe \\\n\tpbs_upgrade_job\n\nEXTRA_PROGRAMS = \\\n\tchk_tree \\\n\trstester\n\ncommon_cflags = \\\n\t-I$(top_srcdir)/src/include \\\n\t@KRB5_CFLAGS@\n\ncommon_libs = \\\n\t$(top_builddir)/src/lib/Libpbs/libpbs.la \\\n\t$(top_builddir)/src/lib/Libutil/libutil.a \\\n\t$(top_builddir)/src/lib/Libnet/libnet.a \\\n\t$(top_builddir)/src/lib/Libsec/libsec.a \\\n\t@KRB5_LIBS@ \\\n\t-lpthread \\\n\t@socket_lib@\n\npbs_sleep_LDFLAGS = -all-static\npbs_sleep_SOURCES = pbs_sleep.c\n\nchk_tree_CPPFLAGS = ${common_cflags}\nchk_tree_LDADD = ${common_libs}\nchk_tree_SOURCES = chk_tree.c\n\npbs_ds_monitor_CPPFLAGS = ${common_cflags}\npbs_ds_monitor_LDADD = \\\n\t$(top_builddir)/src/lib/Libdb/libpbsdb.la \\\n\t${common_libs} \\\n\t-lssl \\\n\t-lcrypto\n\npbs_ds_monitor_SOURCES = pbs_ds_monitor.c $(top_srcdir)/src/lib/Libcmds/cmds_common.c\n\npbs_idled_CPPFLAGS = ${X_CFLAGS} ${common_cflags}\npbs_idled_LDADD = \\\n\t${common_libs} \\\n\t${X_PRE_LIBS} \\\n\t${X_LIBS} \\\n\t-lX11\npbs_idled_SOURCES = pbs_idled.c $(top_srcdir)/src/lib/Libcmds/cmds_common.c\n\npbs_hostn_CPPFLAGS = ${common_cflags}\npbs_hostn_LDADD = ${common_libs}\npbs_hostn_SOURCES = 
hostn.c\n\npbs_probe_CPPFLAGS = \\\n\t${common_cflags} \\\n\t@PYTHON_INCLUDES@\npbs_probe_LDADD = \\\n\t${common_libs} \\\n\t@PYTHON_LDFLAGS@ \\\n\t@PYTHON_LIBS@\npbs_probe_SOURCES = pbs_probe.c $(top_srcdir)/src/lib/Libcmds/cmds_common.c\n\npbs_python_CPPFLAGS =  \\\n\t-DPBS_PYTHON=1.1 \\\n\t${common_cflags} \\\n\t@PYTHON_INCLUDES@\n\npbs_python_LDADD = \\\n\t$(top_builddir)/src/lib/Libpbs/libpbs.la \\\n\t$(top_builddir)/src/lib/Liblog/liblog.a \\\n\t$(top_builddir)/src/lib/Libattr/libattr.a \\\n\t$(top_builddir)/src/lib/Libutil/libutil.a \\\n\t$(top_builddir)/src/lib/Libnet/libnet.a \\\n\t$(top_builddir)/src/lib/Libsec/libsec.a \\\n\t$(top_builddir)/src/lib/Libpython/libpbspython.a \\\n\t@PYTHON_LDFLAGS@ \\\n\t@PYTHON_LIBS@ \\\n\t-lpthread \\\n\t@socket_lib@ \\\n\t@KRB5_LIBS@\n\npbs_python_SOURCES = \\\n\t$(top_srcdir)/src/server/resc_attr.c \\\n\t$(top_srcdir)/src/server/jattr_get_set.c \\\n\t$(top_srcdir)/src/server/sattr_get_set.c \\\n\t$(top_srcdir)/src/server/qattr_get_set.c \\\n\t$(top_srcdir)/src/server/nattr_get_set.c \\\n\t$(top_srcdir)/src/server/setup_resc.c \\\n\t$(top_srcdir)/src/server/vnparse.c \\\n\t$(top_srcdir)/src/lib/Libcmds/cmds_common.c \\\n\tpbs_python.c\n\npbs_tclsh_CPPFLAGS = \\\n\t${common_cflags} \\\n\t@libz_inc@ \\\n\t@tcl_inc@\n\npbs_tclsh_LDADD = \\\n\t$(top_builddir)/src/lib/Libpbs/libpbs.la \\\n\t$(top_builddir)/src/lib/Libtpp/libtpp.a \\\n\t$(top_builddir)/src/lib/Liblog/liblog.a \\\n\t$(top_builddir)/src/lib/Libutil/libutil.a \\\n\t$(top_builddir)/src/lib/Libnet/libnet.a \\\n\t$(top_builddir)/src/lib/Libsec/libsec.a \\\n\t-lpthread \\\n\t@KRB5_LIBS@ \\\n\t@socket_lib@ \\\n\t@libz_lib@ \\\n\t@tcl_lib@\n\npbs_tclsh_SOURCES = \\\n\tpbs_tclWrap.c \\\n\tsite_tclWrap.c \\\n\tpbsTclInit.c\n\npbs_upgrade_job_CPPFLAGS = ${common_cflags}\npbs_upgrade_job_LDADD = ${common_libs}\npbs_upgrade_job_SOURCES = pbs_upgrade_job.c\n\npbs_wish_CPPFLAGS = \\\n\t${common_cflags} \\\n\t@libz_inc@ \\\n\t@tk_inc@\n\npbs_wish_LDADD = 
\\\n\t$(top_builddir)/src/lib/Libpbs/libpbs.la \\\n\t$(top_builddir)/src/lib/Libtpp/libtpp.a \\\n\t$(top_builddir)/src/lib/Liblog/liblog.a \\\n\t$(top_builddir)/src/lib/Libutil/libutil.a \\\n\t$(top_builddir)/src/lib/Libnet/libnet.a \\\n\t$(top_builddir)/src/lib/Libsec/libsec.a \\\n\t-lpthread \\\n\t@KRB5_LIBS@ \\\n\t@socket_lib@ \\\n\t@libz_lib@ \\\n\t@tk_lib@\n\npbs_wish_SOURCES = \\\n\tpbs_tclWrap.c \\\n\tsite_tclWrap.c \\\n\tpbsTkInit.c\n\nprintjob_bin_CPPFLAGS = ${common_cflags}\nprintjob_bin_LDADD = ${common_libs}\nprintjob_bin_SOURCES = printjob.c $(top_srcdir)/src/lib/Libcmds/cmds_common.c\n\nprintjob_svr_bin_CPPFLAGS = \\\n\t${common_cflags} \\\n\t-I$(top_srcdir)/src/lib/Libdb \\\n\t-DPRINTJOBSVR\n\nprintjob_svr_bin_LDADD = \\\n\t$(top_builddir)/src/lib/Libdb/libpbsdb.la \\\n\t${common_libs} \\\n\t-lssl \\\n\t-lcrypto\n\nprintjob_svr_bin_SOURCES = \\\n\t$(top_srcdir)/src/server/jattr_get_set.c \\\n\t$(top_srcdir)/src/lib/Libcmds/cmds_common.c \\\n\tprintjob.c\n\nrstester_CPPFLAGS = ${common_cflags}\nrstester_LDADD = ${common_libs}\nrstester_SOURCES = rstester.c\n\ntracejob_CPPFLAGS = ${common_cflags}\ntracejob_LDADD = ${common_libs}\ntracejob_SOURCES = \\\n\t$(top_srcdir)/src/lib/Libcmds/cmds_common.c \\\n\ttracejob.c \\\n\ttracejob.h\n"
  },
  {
    "path": "src/tools/chk_tree.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    chk_tree.c\n *\n * @brief\n * \t\tchk_tree.c - Check permissions on the PBS Tree Built\n *\t\tFiles should be owned by root, and not world writable.\n *\n * Functions included are:\n * \tmain()\n * \tlog_err()\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n#include \"pbs_version.h\"\n\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <errno.h>\n#include <limits.h>\n#include <stdio.h>\n#include <unistd.h>\n#include <string.h>\n#include \"cmds.h\"\n#include \"portability.h\"\n#include \"log.h\"\n\n/**\n * @brief\n * \t\tmain\t-\tThe main function of chk_tree\n *\n * @param[in]\targc\t-\targument count\n * @param[in]\targv\t-\targument variables.\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t!=0\t: some error.\n */\nint\nmain(int argc, char *argv[])\n{\n\tint err = 0;\n\tint i;\n\tint j;\n\tint chk_file_sec();\n\tint dir = 0;\n\tint no_err = 0;\n\tint sticky = 0;\n\textern int optind;\n\n\t/*the real deal or output pbs_version and exit?*/\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\tif (set_msgdaemonname(\"chk_tree\")) {\n\t\tfprintf(stderr, \"Out of memory\\n\");\n\t\treturn 1;\n\t}\n\tset_logfile(stderr);\n\n\twhile ((i = getopt(argc, argv, \"dns-:\")) != EOF) {\n\t\tswitch (i) {\n\t\t\tcase 'd':\n\t\t\t\tdir = 1;\n\t\t\t\tbreak;\n\t\t\tcase 'n':\n\t\t\t\tno_err = 
1;\n\t\t\t\tbreak;\n\t\t\tcase 's':\n\t\t\t\tsticky = 1;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\terr = 1;\n\t\t}\n\t}\n\n\tif (err || (optind == argc)) {\n\t\tfprintf(stderr, \"Usage %s -d -s -n path ...\\n\\twhere:\\t-d indicates directory (file otherwise)\\n\\t\\t-s indicates world write allowed if sticky set\\n\\t\\t-n indicates do not return the error status, exit with 0\\n\", argv[0]);\n\t\tfprintf(stderr, \"      %s --version display version, exit with 0\\n\", argv[0]);\n\t\treturn 1;\n\t}\n\n\tfor (i = optind; i < argc; ++i) {\n#ifdef WIN32\n\t\t/* we're not checking fullpath */\n\t\tj = chk_file_sec(argv[i], dir, sticky, WRITES_MASK ^ FILE_WRITE_EA, 0);\n#else\n\t\tj = chk_file_sec(argv[i], dir, sticky, S_IWGRP | S_IWOTH, 1);\n#endif\n\t\tif (j)\n\t\t\terr = 1;\n\t}\n\n\tif (no_err)\n\t\treturn 0;\n\telse\n\t\treturn (err);\n}\n"
  },
  {
    "path": "src/tools/create_env_file.sh",
    "content": "#!/bin/sh\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n#\n# Script to create an environment variables files for the PBS daemons\n#\nif [ $# -eq 0 ] ; then\n\techo Usage: $0 filename\n\texit 1\nfi\nF=$1\nans=y\nED=${EDITOR:-vi}\nif [ -f $F ] ; then\n\tif [ -w $F ] ; then\n\t\techo \"\u0007\"\n\t\techo PBS environment file $F exists and is writable.\n\t\techo 'Do you wish to overwrite it [y|(n)]?'\n\t\tread ans\n\t\tif [ X$ans = X ] ; then ans=n ; fi\n\t\tif [ $ans = y ] ; then\n\t\t\techo 'Are you sure [y\\|(n)]?'\n\t\t\tread ans\n\t\t\tif [ X$ans = X ] ; then ans=n ; fi\n\t\t\tif [ $ans = y ] ; then\n\t\t\t\trm $F\n\t\t\tfi\n\t\tfi\n\telif [ -r $F ] ; then\n\t\techo WARNING, file $F exists and is not writable.\n\t\texit 1\n\tfi\nfi\nif [ $ans = y ] ; then\n\techo \"\u0007\"\n\techo Creating PBS environment file $F\n\tprintenv > $F\n\tchmod 700 $F\nfi\necho 'Do you wish to edit it [(y)\\|n]?'\nread ans\nif [ X$ans = X ] ; then ans=y ; fi\nif [ $ans = y ] ; then\n\t$ED $F\nfi\n"
  },
  {
    "path": "src/tools/hostn.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    hostn.c\n *\n * @brief\n * \t\thostn.c - Functions related to get host by name.\n *\n * Functions included are:\n * \tusage()\n * \tmain()\n * \tprt_herrno()\n */\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <netdb.h>\n#include <string.h>\n\n#include <sys/types.h>\n#include <netinet/in.h>\n#include <arpa/inet.h>\n#include \"cmds.h\"\n#include \"pbs_version.h\"\n\n#if !defined(H_ERRNO_DECLARED)\nextern int h_errno;\n#endif\n\n/**\n * @brief\n * \t\tusage - shows the usage of the module\n *\n * @param[in]\tname\t-\thostname\n */\nvoid\nusage(char *name)\n{\n\tfprintf(stderr, \"Usage: %s [-v] hostname\\n\", name);\n\tfprintf(stderr, \"\\t -v turns on verbose output\\n\");\n\tfprintf(stderr, \"       %s --version\\n\", name);\n}\n/**\n * @brief\n * \t\tmain - the entry point in hostn.c\n *\n * @param[in]\targc\t-\targument count\n * @param[in]\targv\t-\targument variables.\n * @param[in]\tenv\t-\tenvironment values.\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t!=0\t: error code\n */\nint\nmain(int argc, char *argv[], char *env[])\n{\n\tint i;\n\tstruct hostent *host;\n\tstruct hostent *hosta;\n\tstruct in_addr *ina;\n\tint naddr;\n\tint vflag = 0;\n\tvoid prt_herrno();\n\textern int optind;\n\n\t/*the real deal or output pbs_version and 
exit?*/\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\twhile ((i = getopt(argc, argv, \"v-:\")) != EOF) {\n\t\tswitch (i) {\n\t\t\tcase 'v':\n\t\t\t\tvflag = 1;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tusage(argv[0]);\n\t\t\t\treturn 1;\n\t\t}\n\t}\n\n\tif (optind != argc - 1) {\n\t\tusage(argv[0]);\n\t\treturn 1;\n\t}\n\n#ifndef WIN32\n\th_errno = 0;\n#endif\n\n\ti = 0;\n\twhile (env[i]) {\n\t\tif (!strncmp(env[i], \"LOCALDOMAIN\", 11)) {\n\t\t\tprintf(\"%s\\n\", env[i]);\n\t\t\tenv[i] = \"\";\n\t\t\tbreak;\n\t\t}\n\t\t++i;\n\t}\n\n\thost = gethostbyname(argv[optind]);\n\tif (host) {\n\t\tif (vflag)\n\t\t\tprintf(\"primary name: \");\n\t\tprintf(\"%s\", host->h_name);\n\t\tif (vflag)\n\t\t\tprintf(\" (from gethostbyname())\");\n\t\tprintf(\"\\n\");\n\t\tif (vflag) {\n\t\t\tif (host->h_aliases && *host->h_aliases) {\n\t\t\t\tfor (i = 0; host->h_aliases[i]; ++i)\n\t\t\t\t\tprintf(\"aliases:           %s\\n\",\n\t\t\t\t\t       host->h_aliases[i]);\n\t\t\t} else {\n\t\t\t\tprintf(\"aliases:            -none-\\n\");\n\t\t\t}\n\n\t\t\tprintf(\"     address length:  %d bytes\\n\", host->h_length);\n\t\t}\n\n\t\t/* need to save address because they will be over writen on */\n\t\t/* next call to gethostby*()\t\t\t\t    */\n\n\t\tnaddr = 0;\n\t\tfor (i = 0; host->h_addr_list[i]; ++i) {\n\t\t\t++naddr;\n\t\t}\n\t\tina = (struct in_addr *) malloc(sizeof(struct in_addr) * naddr);\n\t\tif (ina == NULL) {\n\t\t\tfprintf(stderr, \"%s: out of memory\\n\", argv[0]);\n\t\t\treturn 1;\n\t\t}\n\n\t\tfor (i = 0; i < naddr; ++i) {\n\t\t\t(void) memcpy((char *) (ina + i), host->h_addr_list[i],\n\t\t\t\t      host->h_length);\n\t\t}\n\t\tif (vflag) {\n\t\t\tfor (i = 0; i < naddr; ++i) {\n\t\t\t\tprintf(\"     address:      %15.15s  \", inet_ntoa(*(ina + i)));\n\t\t\t\tprintf(\" (%u dec)  \", (int) (ina + i)->s_addr);\n\n#ifndef WIN32\n\t\t\t\th_errno = 0;\n#endif\n\t\t\t\thosta = gethostbyaddr((char *) (ina + i), host->h_length,\n\t\t\t\t\t\t      
host->h_addrtype);\n\t\t\t\tif (hosta) {\n\t\t\t\t\tprintf(\"name:  %s\", hosta->h_name);\n\t\t\t\t} else {\n\t\t\t\t\tprintf(\"name:  -null-\");\n\t\t\t\t\tprt_herrno();\n\t\t\t\t}\n\t\t\t\tprintf(\"\\n\");\n\t\t\t}\n\t\t}\n\n\t} else {\n\t\tfprintf(stderr, \"no name entry found for %s\\n\", argv[optind]);\n\t\tprt_herrno();\n\t}\n\treturn 0;\n}\n/**\n * @brief\n * \t\tprt_herrno - prints the error description corresponding to the current h_errno value.\n */\nvoid\nprt_herrno()\n{\n\tchar *txt;\n\n\tswitch (h_errno) {\n\t\tcase 0:\n\t\t\treturn;\n\n\t\tcase HOST_NOT_FOUND:\n\t\t\ttxt = \"Answer Host Not Found\";\n\t\t\tbreak;\n\n\t\tcase TRY_AGAIN:\n\t\t\ttxt = \"Try Again\";\n\t\t\tbreak;\n\n\t\tcase NO_RECOVERY:\n\t\t\ttxt = \"No Recovery\";\n\t\t\tbreak;\n\n\t\tcase NO_DATA:\n\t\t\ttxt = \"No Data\";\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\ttxt = \"unknown error\";\n\t\t\tbreak;\n\t}\n\tfprintf(stderr, \" ** h_errno is %d %s\\n\", h_errno, txt);\n}\n"
  },
  {
    "path": "src/tools/pbsTclInit.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file    pbsTclInit.c\n *\n * @brief\n *\t\tpbsTclInit - All the pbs specific code needed to make pbstclsh\n *\t\twork is here.  
The routine pbsTcl_Init() is to be called\n *\t\tin place of Tcl_Init().\n *\n * Functions included are:\n * \tpbsTcl_Init()\n * \tmain()\n */\n#include \"pbs_config.h\"\n#include \"pbs_version.h\"\n#include <stdlib.h>\n#include <string.h>\n#include <unistd.h>\n#include \"tcl.h\"\n#include \"rm.h\"\n#include \"pbs_ifl.h\"\n#include \"pbs_internal.h\"\n#include \"log.h\"\n#include \"tpp.h\"\n\n#ifdef NAS /* localmod 099 */\nextern int quiet;\n#endif /* localmod 099 */\n\nextern void add_cmds(Tcl_Interp *interp);\n\n#define SHOW_NONE 0xff\n\n/**\n * @brief\n * \t\tpbsTcl_Init\t- Function to initialize Tcl interpreter based on the environment.\n *\n * @param[in,out]\tinterp\t-\tInterpreter for application.\n *\n * @return\tint\n * @retval\tTCL_OK\t: everything looks good.\n * @retval\tTCL_ERROR\t: something got wrong!\n */\nint\npbsTcl_Init(Tcl_Interp *interp)\n{\n\tif (Tcl_Init(interp) == TCL_ERROR)\n\t\treturn TCL_ERROR;\n#if TCLX\n\tif (Tclx_Init(interp) == TCL_ERROR)\n\t\treturn TCL_ERROR;\n#endif\n\n\tfullresp(0);\n\tadd_cmds(interp);\n\n\tTcl_SetVar(interp, \"tcl_rcFileName\", \"~/.tclshrc\", TCL_GLOBAL_ONLY);\n\treturn TCL_OK;\n}\n/**\n * @brief\n * \t\tmain - the entry point in pbsTclInit.c\n *\n * @param[in]\targc\t-\targument count.\n * @param[in]\targv\t-\targument variables.\n *\n * @return\tint\n * @retval\t0\t: success\n */\nint\nmain(int argc, char *argv[])\n{\n\tchar tbuf_env[256];\n\tint rc;\n\tstruct tpp_config tpp_conf;\n\tfd_set selset;\n\tstruct timeval tv;\n\n\t/*the real deal or just pbs_version and exit?*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\tif (set_msgdaemonname(\"pbs_tclsh\")) {\n\t\tfprintf(stderr, \"Out of memory\\n\");\n\t\treturn 1;\n\t}\n\tset_logfile(stderr);\n\n\t/* load the pbs conf file */\n\tif (pbs_loadconf(0) == 0) {\n\t\tfprintf(stderr, \"%s: Configuration error\\n\", argv[0]);\n\t\treturn (1);\n\t}\n\n\tset_log_conf(pbs_conf.pbs_leaf_name, pbs_conf.pbs_mom_node_name,\n\t\t     pbs_conf.locallog, pbs_conf.syslogfac,\n\t\t 
    pbs_conf.syslogsvr, pbs_conf.pbs_log_highres_timestamp);\n\n\tif (!getenv(\"TCL_LIBRARY\")) {\n\t\tif (pbs_conf.pbs_exec_path) {\n\t\t\tsprintf(tbuf_env, \"%s/tcltk/lib/tcl%s\", pbs_conf.pbs_exec_path, TCL_VERSION);\n\t\t\tsetenv(\"TCL_LIBRARY\", tbuf_env, 1);\n\t\t}\n\t}\n\n\tif (!pbs_conf.pbs_leaf_name) {\n\t\tchar my_hostname[PBS_MAXHOSTNAME + 1];\n\t\tif (gethostname(my_hostname, (sizeof(my_hostname) - 1)) < 0) {\n\t\t\tfprintf(stderr, \"Failed to get hostname\\n\");\n\t\t\treturn -1;\n\t\t}\n\t\tpbs_conf.pbs_leaf_name = get_all_ips(my_hostname, log_buffer, sizeof(log_buffer) - 1);\n\t\tif (!pbs_conf.pbs_leaf_name) {\n\t\t\tfprintf(stderr, \"%s\\n\", log_buffer);\n\t\t\tfprintf(stderr, \"%s\\n\", \"Unable to determine TPP node name\");\n\t\t\treturn -1;\n\t\t}\n\t}\n\n\t/* call tpp_init */\n\trc = set_tpp_config(&pbs_conf, &tpp_conf, pbs_conf.pbs_leaf_name, -1, pbs_conf.pbs_leaf_routers);\n\tif (rc == -1) {\n\t\tfprintf(stderr, \"Error setting TPP config\\n\");\n\t\treturn -1;\n\t}\n\n\tif ((tpp_fd = tpp_init(&tpp_conf)) == -1) {\n\t\tfprintf(stderr, \"tpp_init failed\\n\");\n\t\treturn -1;\n\t}\n\n\t/*\n\t * Wait for net to get restored, ie, app to connect to routers\n\t */\n\tFD_ZERO(&selset);\n\tFD_SET(tpp_fd, &selset);\n\ttv.tv_sec = 5;\n\ttv.tv_usec = 0;\n\tselect(FD_SETSIZE, &selset, NULL, NULL, &tv);\n\n\ttpp_poll(); /* to clear off the read notification */\n\n\tTcl_Main(argc, argv, pbsTcl_Init);\n\treturn 0;\n}\n"
  },
  {
    "path": "src/tools/pbsTkInit.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n *  @file    pbsTkInit.c\n *\n *  @brief\n *\t\tpbsTkInit - All the pbs specific code needed to make pbswish\n *\t\twork is here.  
The routine pbsTk_Init() is to be called\n *\t\tin place of Tk_Init().\n *\n * Functions included are:\n * \tpbsTcl_Init()\n * \tmain()\n */\n#include \"pbs_config.h\"\n#include \"pbs_version.h\"\n\n#include \"tcl.h\"\n#include \"tk.h\"\n#include <string.h>\n#include <stdlib.h>\n#include \"rm.h\"\n#include \"pbs_ifl.h\"\n#include \"pbs_internal.h\"\n#include \"log.h\"\n\nextern void add_cmds(Tcl_Interp *interp);\n\n/**\n * @brief\n * \t\tpbsTcl_Init\t- Function to initialize Tcl interpreter based on the environment.\n *\n * @param[in,out]\tinterp\t-\tInterpreter for application.\n *\n * @return\tint\n * @retval\tTCL_OK\t: everything looks good.\n * @retval\tTCL_ERROR\t: something got wrong!\n */\nint\npbsTcl_Init(Tcl_Interp *interp)\n{\n\tif (Tcl_Init(interp) == TCL_ERROR)\n\t\treturn TCL_ERROR;\n\tif (Tk_Init(interp) == TCL_ERROR)\n\t\treturn TCL_ERROR;\n\tTcl_StaticPackage(interp, \"Tk\", Tk_Init, Tk_SafeInit);\n#if TCLX\n\tif (Tclx_Init(interp) == TCL_ERROR)\n\t\treturn TCL_ERROR;\n\tif (Tkx_Init(interp) == TCL_ERROR)\n\t\treturn TCL_ERROR;\n#endif\n\n\tfullresp(0);\n\tadd_cmds(interp);\n\n\tTcl_SetVar(interp, \"tcl_rcFileName\", \"~/.wishrc\", TCL_GLOBAL_ONLY);\n\treturn TCL_OK;\n}\n/**\n * @brief\n * \t\tmain - the entry point in pbsTkInit.c\n *\n * @param[in]\targc\t-\targument count.\n * @param[in]\targv\t-\targument variables.\n *\n * @return\tint\n * @retval\t0\t: success\n */\nint\nmain(int argc, char *argv[])\n{\n\n\tchar tbuf_env[256];\n\n\t/*the real deal or just pbs_version and exit?*/\n\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\tset_logfile(stderr);\n\n\tpbs_loadconf(0);\n\n\tset_log_conf(pbs_conf.pbs_leaf_name, pbs_conf.pbs_mom_node_name,\n\t\t     pbs_conf.locallog, pbs_conf.syslogfac,\n\t\t     pbs_conf.syslogsvr, pbs_conf.pbs_log_highres_timestamp);\n\n\tif (!getenv(\"TCL_LIBRARY\")) {\n\t\tif (pbs_conf.pbs_exec_path) {\n\t\t\tsprintf(tbuf_env, \"%s/tcltk/lib/tcl%s\", pbs_conf.pbs_exec_path, TCL_VERSION);\n\t\t\tsetenv(\"TCL_LIBRARY\", tbuf_env, 
1);\n\t\t}\n\t}\n\n\tif (!getenv(\"TK_LIBRARY\")) {\n\t\tif (pbs_conf.pbs_exec_path) {\n\t\t\tsprintf(tbuf_env, \"%s/tcltk/lib/tk%s\", pbs_conf.pbs_exec_path, TK_VERSION);\n\t\t\tsetenv(\"TK_LIBRARY\", tbuf_env, 1);\n\t\t}\n\t}\n\n\tTk_Main(argc, argv, pbsTcl_Init);\n\treturn 0;\n}\n"
  },
  {
    "path": "src/tools/pbs_ds_monitor.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n *  @file    pbs_ds_monitor.c\n *\n *  @brief\n *\t\tpbs_ds_monitor - This file contains functions related to database and serialization.\n *\n * Functions included are:\n * \tclear_stop_db_file()\n * \tcheck_and_stop_db()\n * \tget_pid()\n * \tlock_out()\n * \tacquire_lock()\n * \twin_db_monitor()\n * \tclear_tmp_files()\n * \tcheckpid()\n * \twin_db_monitor_child()\n * \tacquire_lock()\n * \tunix_db_monitor()\n * \tmain()\n */\n#include <pbs_config.h>\n#include <pbs_version.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <string.h>\n#include <fcntl.h>\n#include <signal.h>\n#include <sys/stat.h>\n#include <sys/time.h>\n#include <errno.h>\n#include \"server_limits.h\"\n#include <pbs_internal.h>\n#include \"pbs_db.h\"\n#include \"pbs_ifl.h\"\n\n#define MAX_LOCK_ATTEMPTS 5\n#define MAX_DBPID_ATTEMPTS 20\n#define TEMP_BUF_SIZE 100\n#define RES_BUF_SIZE 4096\n\nchar thishost[PBS_MAXHOSTNAME + 1];\n\n/**\n * @brief\n * \t\tclear_stop_db_file - Function to clear the db stop file\n *\n * @return\tvoid\n *\n * @par MT-safe: Yes\n */\nvoid\nclear_stop_db_file(void)\n{\n\tchar closefile[MAXPATHLEN + 1];\n\tsnprintf(closefile, MAXPATHLEN, \"%s/datastore/pbs_dbclose\", pbs_conf.pbs_home_path);\n\tunlink(closefile);\n}\n\n/**\n * @brief\n * \t\tcheck_and_stop_db - Function to check for db stop file and stop the database\n 
 *          if such a file exists\n *\n * @param[in]\tdbpid\t-\tPid of the database process (unused for now)\n *\n * @return\tvoid\n *\n * @par MT-safe: Yes\n */\nvoid\ncheck_and_stop_db(int dbpid)\n{\n\tchar closefile[MAXPATHLEN + 1];\n\n\tsnprintf(closefile, MAXPATHLEN, \"%s/datastore/pbs_dbclose\", pbs_conf.pbs_home_path);\n\n\tif (access(closefile, R_OK) == 0) {\n\t\t/* file present, somebody is asking us to quit the database */\n\t\t/* first clear the file */\n\t\tunlink(closefile);\n\t\t/* now stop the database */\n\t\tpbs_stop_db(thishost, pbs_conf.pbs_data_service_port);\n\t}\n}\n\n/**\n * @brief\n * \t\tGet the pid of the database from the postmaster.pid\n *\t\t\tfile located inside the $PBS_HOME/datastore directory\n *\n * @retval\t0\t-\tFunction failed\n * @retval\t>0\t-\tPid of the postmaster master process\n *\n * @par MT-safe:\tYes\n */\nstatic pid_t\nget_pid()\n{\n\tchar pidfile[MAXPATHLEN + 1];\n\tFILE *fp;\n\tchar buf[TEMP_BUF_SIZE + 1];\n\tpid_t pid = 0;\n\n\tsnprintf(pidfile, MAXPATHLEN, \"%s/datastore/postmaster.pid\", pbs_conf.pbs_home_path);\n\tif (access(pidfile, R_OK) != 0)\n\t\treturn 0;\n\n\tif ((fp = fopen(pidfile, \"r\")) == NULL)\n\t\treturn 0;\n\n\tmemset(buf, 0, TEMP_BUF_SIZE + 1);\n\tif (fgets(buf, TEMP_BUF_SIZE, fp) == NULL) \n\t\tfprintf(stderr, \"%s fgets failed. 
\\n\", __func__);\n\tbuf[TEMP_BUF_SIZE] = '\\0';\n\n\tfclose(fp);\n\n\tif (strlen(buf) == 0)\n\t\treturn 0;\n\n\tpid = atol(buf);\n\tif (pid == 0)\n\t\treturn 0;\n\n\tif (kill(pid, 0) != 0)\n\t\treturn 0;\n\treturn pid;\n}\n\n/**\n * @brief\n * \t\tlock_out - Function to lock/unlock a file.\n *\n *\t\tFor Unix, this uses fcntl lock (not inheritable).\n *\t\tIf the operand is F_WRLCK, then this also writes\n *\t\tthe pid of this process to the lockfile.\n *\n * @param[in]\tfds\t-\tThe descriptor of the file to be locked\n * @param[in]\top\t- \tOperation to perform\n *\t\t\t\t\t\tF_WRLCK - To obtain a \"write lock\"\n *\t\t\t\t\t\tF_UNLCK\t- To release a previous lock\n * \t\t\t\t\t\tThese contants are defined in win.h.\n *\n * @retval\t0\t-\tFunction succeeded for the given operation\n * @retval\t1\t-\tFailed (eg to lock the file).\n *\n * @par MT-safe:\tYes\n */\nstatic int\nlock_out(int fds, int op)\n{\n\tstruct flock flock;\n\tchar buf[PBS_MAXHOSTNAME + 10];\n\n\t(void) lseek(fds, (off_t) 0, SEEK_SET);\n\tflock.l_type = op;\n\tflock.l_whence = SEEK_SET;\n\tflock.l_start = 0;\n\tflock.l_len = 0;\n\n\tif (fcntl(fds, F_SETLK, &flock) != -1) {\n\t\tif (op == F_WRLCK) {\n\t\t\t/* if write-lock, record hostname and pid in file */\n\t\t\tif (ftruncate(fds, (off_t) 0) == -1) \n\t\t\t\tfprintf(stderr, \"ftruncate failed, ERR = %s\\n\", strerror(errno));\n\t\t\t(void) sprintf(buf, \"%s:%d\\n\", thishost, getpid());\n\t\t\tif (write(fds, buf, strlen(buf)) == -1) \n\t\t\t\tfprintf(stderr, \"write failed, ERR = %s\\n\", strerror(errno));\n\t\t}\n\t\treturn 0;\n\t}\n\treturn 1;\n}\n\n/**\n * @brief\n * \t\tThis is the Unix counterpart of acquire_lock\n * @par\n *  \tThis function creates/opens the lock file, and locks the file.\n *  \tIn case of a failover environment, the whole operation is retried\n *  \tseveral times in a loop.\n *\n * @param[in]  lockfile         - Path of db_lock file.\n * @param[out] reason           - Reason for failure, if not able to accquire lock\n * 
@param[in]  reasonlen        - reason buffer length.\n * @param[out] is_lock_hld_by_thishost  - This flag is set if the lock is held by the host\n *                                         requesting acquire_lock in check mode.\n *\n * @return\tFile descriptor of the open and locked file\n * @retval\t-1\t: Function failed to acquire lock\n * @retval\t!=-1\t: Function succeeded (file descriptor returned)\n *\n * @par MT-safe:\tYes\n */\nint\nacquire_lock(char *lockfile, char *reason, int reasonlen, int *is_lock_hld_by_thishost)\n{\n\tint fd;\n\tstruct stat st;\n\tint i, j;\n\ttime_t lasttime = 0;\n\tint rc;\n\tchar who[PBS_MAXHOSTNAME + 10];\n\tchar *p;\n\n\tif (reasonlen > 0)\n\t\treason[0] = '\\0';\n\n\tif (pbs_conf.pbs_secondary == NULL)\n\t\tj = 1; /* not fail over, try lock one time */\n\telse\n\t\tj = MAX_LOCK_ATTEMPTS; /* fail over, try X times */\n\n#ifndef O_RSYNC\n#define O_RSYNC 0\n#endif\n\nagain:\n\tif ((fd = open(lockfile, O_RDWR | O_CREAT | O_RSYNC, 0600)) == -1) {\n\t\tsnprintf(reason, reasonlen, \"Could not access lockfile, errno=%d\", errno);\n\t\treturn -1;\n\t}\n\n\t/* check time stamp of lock file */\n\tif (fstat(fd, &st) == -1) {\n\t\tsnprintf(reason, reasonlen, \"Failed to stat lockfile, errno=%d\", errno);\n\t\tclose(fd);\n\t\treturn -1;\n\t}\n\n\t/* record the last modified timestamp */\n\tlasttime = st.st_mtime;\n\n\tfor (i = 0; i < j; i++) { /* try X times where X is MAX_LOCK_ATTEMPTS */\n\t\tif (i > 0)\n\t\t\tsleep(1);\n\t\t/* attempt to lock the datastore directory */\n\t\tif (lock_out(fd, F_WRLCK) == 0)\n\t\t\treturn fd;\n\t}\n\n\t/* do this only if failover is configured */\n\tif (pbs_conf.pbs_secondary != NULL) {\n\t\t/*\n\t\t * If we reach here, we could not lock even after j attempts.\n\t\t *\n\t\t * 2 levels of check will be performed (based on the last modified timestamp):\n\t\t *\n\t\t * 1) Check the lock file's modified timestamp and compare with \"lasttime\" to see if the file was modified\n\t\t *    in between. 
If the file was modified, then the other side is up and so we give up.\n\t\t *\n\t\t * 2) We know that the modified timestamp is not updating; however, we need to make\n\t\t *    sure that the other side is really gone. Therefore we check the difference of last\n\t\t *    updated timestamp from now (current system time). If the difference > (4*j) seconds,\n\t\t *    then the other side has vanished at the OS level itself, and NFS cannot unlock it.\n\t\t *    So delete the lockfile and start afresh. For this to work, make sure that the\n\t\t *    time on primary, secondary and the pbs_home server (NFS server) are synced.\n\t\t */\n\n\t\t/* Re-check time stamp of lock file */\n\t\tif (fstat(fd, &st) == -1) {\n\t\t\tsnprintf(reason, reasonlen, \"Failed to stat lockfile, errno=%d\", errno);\n\t\t\tclose(fd);\n\t\t\treturn -1;\n\t\t}\n\n\t\t/* Check if time stamp of lock file has updated at all */\n\t\tif (st.st_mtime == lasttime) {\n\t\t\t/* Modified timestamp did not update in the given window. 
Re-check how long it has been stale */\n\t\t\tif (time(0) - lasttime >= (MAX_LOCK_ATTEMPTS * 4)) {\n\t\t\t\t/* other side is long dead, clear up stuff */\n\t\t\t\tclose(fd);\n\t\t\t\tunlink(lockfile);\n\t\t\t\tfd = -1;\n\t\t\t\tlasttime = 0;\n\t\t\t\tgoto again;\n\t\t\t}\n\t\t}\n\t}\n\n\t/* all attempts to lock failed, try to see who has it locked */\n\t(void) lseek(fd, (off_t) 0, SEEK_SET);\n\tif ((rc = read(fd, who, sizeof(who) - 1)) > 0) {\n\t\twho[rc - 1] = '\\0';\n\t\tp = strchr(who, ':');\n\t\tif (p) {\n\t\t\t*p = '\\0';\n\t\t\tsnprintf(reason, reasonlen,\n\t\t\t\t \"Lock seems to be held by pid: %s running on host: %s\",\n\t\t\t\t (p + 1), who);\n\t\t} else {\n\t\t\tsnprintf(reason, reasonlen, \"Lock seems to be held by %s\", who);\n\t\t}\n\t\tif (is_lock_hld_by_thishost != NULL) {\n\t\t\tif (strcmp(thishost, who) == 0)\n\t\t\t\t*is_lock_hld_by_thishost = 1;\n\t\t\telse\n\t\t\t\t*is_lock_hld_by_thishost = 0;\n\t\t}\n\t}\n\n\tclose(fd);\n\tfd = -1;\n\n\treturn fd;\n}\n\n/**\n * @brief\tThis is the Unix counterpart of the monitoring\n *\t\t\tcode.\n * @par\n *\t\tThis function does the following:\n *\t\ta) Creates a pipe, forks itself, parent waits to read on pipe.\n *\t\tb) Child creates/opens a file $PBS_HOME/datastore/pbs_dblock.\n *\t\tc) Attempts to lock the file. If locking\n *\t\t\tsucceeds, unlocks the file and writes 0 (success) to the write\n *\t\t\tend of the pipe. If locking fails, writes 1 (failure) to pipe.\n *\t\t\tParent reads from pipe and exits with the code read from pipe.\n *\t\td) If mode is \"check\" then child quits.\n *\t\te) If mode is \"monitor\", continues in the background, checking\n *\t\t\tthe database pid in a loop forever. 
If the database pid goes\n *\t\t\tdown, then it unlocks the file and exits.\n *\n * @param[in]\tmode\t-\t\"check\" - to just check if lockfile can be locked\n *\t\t     \t\t\t\t\"monitor\" - to launch a monitoring child process that\n *\t\t\t\t\t\t\tholds onto the file lock.\n *\n * @retval\t1\t-\tFunction failed to acquire lock\n * @retval\t0\t-\tFunction succeeded in the requested operation\n * @par\n * \t\tThe return values are not used by the caller (parent process) since\n * \t\tin the success case this function does not return. Instead, the parent\n * \t\twaits on the read end of the pipe to read a status from the monitoring\n * \t\tchild process.\n *\n * @par MT-safe: Yes\n */\nint\nunix_db_monitor(char *mode)\n{\n\tint fd;\n\tint rc;\n\tint i;\n\tpid_t dbpid;\n\tchar lockfile[MAXPATHLEN + 1];\n\tint pipefd[2];\n\tint res;\n\tint is_lock_local = 0;\n\tchar reason[RES_BUF_SIZE];\n\n\treason[0] = '\\0';\n\n\tif (pipe(pipefd) != 0) {\n\t\tfprintf(stderr, \"Unable to create pipe, errno = %d\\n\", errno);\n\t\treturn 1;\n\t}\n\n\tsnprintf(lockfile, MAXPATHLEN, \"%s/datastore/pbs_dblock\", pbs_conf.pbs_home_path);\n\n\t/* first fork off */\n\trc = fork();\n\tif (rc == -1) {\n\t\tfprintf(stderr, \"Unable to create process, errno = %d\\n\", errno);\n\t\treturn 1;\n\t}\n\n\tif (rc > 0) {\n\t\tclose(pipefd[1]);\n\t\t/*\n\t\t * child can continue to execute in case of \"monitor\",\n\t\t * so don't wait for child to exit, rather read code\n\t\t * from pipe that child will write to\n\t\t */\n\t\tif (read(pipefd[0], &res, sizeof(int)) != sizeof(int))\n\t\t\treturn 1;\n\n\t\tif (res != 0) {\n\t\t\tif (read(pipefd[0], &reason, sizeof(reason)) == -1) \n\t\t\t\tfprintf(stderr, \"read failed, ERR = %s\\n\", strerror(errno));\n\t\t\tfprintf(stderr, \"Failed to acquire lock on %s. 
%s\\n\", lockfile, reason);\n\t\t}\n\n\t\treturn (res); /* return parent with success */\n\t}\n\n\tclose(pipefd[0]);\n\n\t/* child */\n\tif (setsid() == -1) {\n\t\tclose(pipefd[1]);\n\t\treturn 1;\n\t}\n\n\t(void) fclose(stdin);\n\t(void) fclose(stdout);\n\t(void) fclose(stderr);\n\n\t/* Protect from being killed by kernel */\n\tdaemon_protect(0, PBS_DAEMON_PROTECT_ON);\n\n\tif ((fd = acquire_lock(lockfile, reason, sizeof(reason), &is_lock_local)) == -1) {\n\t\tif (is_lock_local && strcmp(mode, \"check\") == 0) {\n\t\t\t/* write success to parent since lock is already held by the localhost */\n\t\t\tres = 0;\n\t\t\tif (write(pipefd[1], &res, sizeof(int)) == -1) \n\t\t\t\tfprintf(stderr, \"write failed, ERR = %s\\n\", strerror(errno));\n\t\t\tclose(pipefd[1]);\n\t\t\treturn 0;\n\t\t}\n\t\tres = 1;\n\t\tif (write(pipefd[1], &res, sizeof(int)) == -1) \n\t\t\tfprintf(stderr, \"write failed, ERR = %s\\n\", strerror(errno));\n\t\tif (write(pipefd[1], reason, sizeof(reason)) == -1) \n\t\t\tfprintf(stderr, \"write failed, ERR = %s\\n\", strerror(errno));\n\t\tclose(pipefd[1]);\n\t\treturn 1;\n\t}\n\n\t/* unlock before writing success to parent, to avoid race */\n\tif (strcmp(mode, \"check\") == 0) {\n\t\tlock_out(fd, F_UNLCK);\n\t\tclose(fd);\n\t\tunlink(lockfile);\n\t}\n\n\t/* write success to parent since we acquired the lock */\n\tres = 0;\n\tif (write(pipefd[1], &res, sizeof(int)) == -1) \n\t\tfprintf(stderr, \"%s : write failed, ERR = %s\\n\", __func__ , strerror(errno));\n\tclose(pipefd[1]);\n\n\tif (strcmp(mode, \"check\") == 0)\n\t\treturn 0;\n\n\t/* clear any residual stop db file before starting monitoring */\n\tclear_stop_db_file();\n\n\t/*\n\t * first find out the pid of the postgres process from dbstore/postmaster.pid\n\t * wait for a while till it is found\n\t * if not found within MAX_DBPID_ATTEMPTS then break with error\n\t * if found, start monitoring the pid\n\t *\n\t */\n\tdbpid = 0;\n\tfor (i = 0; i < MAX_DBPID_ATTEMPTS; i++) {\n\t\tif ((dbpid = 
get_pid()) > 0)\n\t\t\tbreak;\n\t\t(void) utimes(lockfile, NULL);\n\t\tsleep(1);\n\t}\n\n\tif (dbpid == 0) {\n\t\t/* database did not come up, so quit after unlocking file */\n\t\tlock_out(fd, F_UNLCK);\n\t\tclose(fd);\n\t\tunlink(lockfile);\n\t\treturn 0;\n\t}\n\n\twhile (1) {\n\t\t(void) utimes(lockfile, NULL);\n\n\t\tif (kill(dbpid, 0) != 0)\n\t\t\tbreak;\n\t\tif (!((dbpid = get_pid()) > 0))\n\t\t\tbreak;\n\n\t\t/* check if stop db file exists */\n\t\tcheck_and_stop_db(dbpid);\n\n\t\tsleep(1);\n\t}\n\n\tlock_out(fd, F_UNLCK);\n\tclose(fd);\n\tunlink(lockfile);\n\n\treturn 0;\n}\n\n/**\n * @brief\n * \t\tmain - the entry point in pbs_ds_monitor.c\n *\n * @param[in]\targc\t-\targument count\n * @param[in]\targv\t-\targument variables.\n *\n * @return\tint\n * @retval\t1\t-\tFunction failed to perform the requested operation.\n * @retval\t0\t-\tFunction succeeded in the requested operation\n */\nint\nmain(int argc, char *argv[])\n{\n\tchar *mode;\n\n\tif (argc < 2) {\n\t\tfprintf(stderr, \"Usage: %s check|monitor\\n\", argv[0]);\n\t\treturn 1;\n\t}\n\tmode = argv[1];\n\n\tif (pbs_loadconf(0) == 0) {\n\t\tfprintf(stderr, \"Failed to load PBS conf file\\n\");\n\t\treturn 1;\n\t}\n\n\tif (gethostname(thishost, (sizeof(thishost) - 1)) == -1) {\n\t\tfprintf(stderr, \"Failed to detect hostname\\n\");\n\t\treturn -1;\n\t}\n\n\treturn unix_db_monitor(mode);\n}\n"
  },
  {
    "path": "src/tools/pbs_idled.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\tpbs_idled.c\n *\n * @brief\n * \t\tpbs_idled.c\t- This file contains functions related to PBS idle detection.\n *\n * Functions included are:\n * \tmain()\n * \tevent_setup()\n * \tpointer_query()\n * \tupdate_utime()\n * \tX_handler()\n */\n#include <pbs_config.h>\n#include <pbs_ifl.h>\n#include \"cmds.h\"\n#include \"pbs_version.h\"\n\n#include <X11/X.h>\n#include <X11/Xlib.h>\n#include <time.h>\n#include <X11/Intrinsic.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <fcntl.h>\n#include <utime.h>\n#include <errno.h>\n\n#define EVER 1\n\n/* used to pass back pointer locations from pointer_query() */\nstruct xy {\n\tint x;\n\tint y;\n};\n\n/* prototypes */\nint event_setup(Window w, Display *dsp);\nint pointer_query(Display *dsp, Window w, struct xy *p);\nvoid update_utime(char *filename);\nint X_handler(Display *dsp);\n\n/* globals */\nchar **argv_save;\nchar **env_save;\n/**\n * @brief\n * \t\tmain - the entry point in pbs_idled.c\n *\n * @param[in]\targc\t-\targument count\n * @param[in]\targv\t-\targument variables.\n * @param[in]\tenvp\t-\tenvironment values.\n *\n * @return\tint\n * @retval\t1\t-\tFunction failed to perform the requested operation.\n * @retval\t0\t-\tFunction succeeded in the requested operation\n */\nint\nmain(int argc, char 
*argv[], char *envp[])\n{\n\tWindow w;\n\tchar *env_dsp = NULL;\n\tDisplay *dsp = NULL;\n\tXEvent event;\n\tint delay = 5;\n\tint reconnect_delay = 180;\n\tint do_update = 0;\n\tint is_daemon = 0;\n\ttime_t create_time = 0;\n\tchar *filename = NULL;\n\tchar filename_buf[MAXPATHLEN];\n\tchar *username;\n\tstruct xy cur_xy, prev_xy;\n\tstruct stat st;\n\tchar errbuf[BUFSIZ]; /* BUFSIZ is sufficient to hold buffer msg */\n\tint fd;\n\tint c;\n\n\tcur_xy.x = -1;\n\tcur_xy.y = -1;\n\n\t/*the real deal or output pbs_version and exit?*/\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\tpbs_loadconf(0);\n\n\twhile ((c = getopt(argc, argv, \"D:w:f:r:t:-:\")) != -1)\n\t\tswitch (c) {\n\t\t\tcase 'w':\n\t\t\t\tdelay = atoi(optarg);\n\t\t\t\tbreak;\n\t\t\tcase 'r':\n\t\t\t\treconnect_delay = atoi(optarg);\n\t\t\t\tbreak;\n\t\t\tcase 'f':\n\t\t\t\tfilename = optarg;\n\t\t\t\tbreak;\n\t\t\tcase 'D':\n\t\t\t\tenv_dsp = optarg;\n\t\t\t\tbreak;\n\t\t\tcase 't':\n\t\t\t\tif (!strcmp(optarg, \"daemon\"))\n\t\t\t\t\tis_daemon = 1;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\t/* show usage and exit */\n\t\t\t\tfprintf(stderr, \"USAGE: %s [-w wait between X queries] [-f idle_file] [-D Display] [-r reconnect_delay]\\n\", argv[0]);\n\t\t\t\tfprintf(stderr, \"       %s --version\\n\", argv[0]);\n\t\t\t\texit(1);\n\t\t}\n\n\tprev_xy.x = -1;\n\tprev_xy.y = -1;\n\n\tif (filename == NULL) {\n\t\tusername = getlogin();\n\t\tif (username == NULL)\n\t\t\tusername = getenv(\"USER\");\n\t\tif (username == NULL)\n\t\t\tusername = \"UNKNOWN\";\n\t\tsprintf(filename_buf, \"%s/%s/%s\", pbs_conf.pbs_home_path, \"spool/idledir\", username);\n\t\tfilename = filename_buf;\n\t}\n\n\tif (stat(filename, &st) == -1) {\n\t\tif (errno == ENOENT) { /* file doesn't exist... 
let's create it */\n\t\t\tif ((fd = creat(filename, S_IRUSR | S_IWUSR)) == -1) {\n\t\t\t\tsprintf(errbuf, \"Cannot open %s\", filename);\n\t\t\t\tperror(errbuf);\n\t\t\t\texit(1);\n\t\t\t}\n\t\t\tclose(fd);\n\t\t} else {\n\t\t\tperror(\"File Error\");\n\t\t\texit(1);\n\t\t}\n\t}\n\n\targv_save = argv;\n\tenv_save = envp;\n\n\tif (env_dsp == NULL)\n\t\tenv_dsp = getenv(\"DISPLAY\");\n\n\tif (env_dsp == NULL)\n\t\tenv_dsp = \":0\";\n\n\twhile (dsp == NULL) {\n\t\tdsp = XOpenDisplay(env_dsp);\n\n\t\tif (dsp == NULL) {\n#ifdef DEBUG\n\t\t\tprintf(\"Could not open display %s\\n\", env_dsp == NULL ? \"(null)\" : env_dsp);\n#endif\n\t\t\tsleep(reconnect_delay);\n\t\t}\n\t}\n\n\t/* only set the io error handler to ignore X connection closes IF\n\t * we're running as a daemon.  If we are run out of Xsession, and\n\t * ignore the close of the X connection, we'll stick around forever... not\n\t * good.\n\t */\n\tif (is_daemon)\n\t\tXSetIOErrorHandler(X_handler);\n\n\tw = RootWindow(dsp, XDefaultScreen(dsp));\n\n\tevent_setup(w, dsp);\n\n\tfor (; EVER;) {\n\t\tsleep(delay);\n\n\t\twhile (XCheckMaskEvent(dsp, KeyPressMask | KeyReleaseMask | SubstructureNotifyMask, &event)) {\n\t\t\tswitch (event.type) {\n\t\t\t\tcase KeyPress:\n\t\t\t\tcase KeyRelease:\n\t\t\t\t\tdo_update = 1;\n\t\t\t\t\tbreak;\n\t\t\t\tcase CreateNotify:\n\t\t\t\t\tcreate_time = time(NULL) + 30;\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\n\t\tif (create_time != 0) {\n\t\t\tif (time(NULL) >= create_time) {\n\t\t\t\tevent_setup(w, dsp);\n\t\t\t\tcreate_time = 0;\n\t\t\t}\n\t\t}\n\n\t\tif (pointer_query(dsp, w, &cur_xy))\n\t\t\tif (cur_xy.x != prev_xy.x || cur_xy.y != prev_xy.y) {\n\t\t\t\tdo_update = 1;\n\t\t\t\tprev_xy = cur_xy;\n\t\t\t}\n\n\t\tif (do_update) {\n\t\t\tupdate_utime(filename);\n\t\t\tdo_update = 0;\n\t\t}\n\t}\n}\n\n/**\n * @brief\n * \t\tset up event selection on a window and, recursively, its children.\n *\n * @param[in]\tw\t-\tSpecifies the window whose events you are interested in.\n * @param[in]\tdsp\t-\tSpecifies the connection to the X server.\n *\n * @return\tint\n * @retval\t0\t: XQueryTree has failed\n * @retval\t1\t: successfully completed.\n */\nint\nevent_setup(Window w, Display *dsp)\n{\n\tWindow root, parent, *kids;\n\tunsigned int nkids;\n\tunsigned int mask;\n\tint i;\n\n\tif (!XQueryTree(dsp, w, &root, &parent, &kids, &nkids))\n\t\treturn 0;\n\n\tmask = (KeyPressMask | KeyReleaseMask | SubstructureNotifyMask);\n\n\tXSelectInput(dsp, w, mask);\n\n\tif (kids) {\n\t\tfor (i = 0; i < nkids; i++)\n\t\t\tevent_setup(kids[i], dsp);\n\t}\n\tif (kids != NULL)\n\t\tXFree(kids);\n\n\treturn 1;\n}\n/**\n * @brief\n * \t\tIt returns the root window the pointer is logically on and the pointer\n * \t\tcoordinates relative to the root window's origin.\n *\n * @param[in]\tdsp\t-\tSpecifies the connection to the X server.\n * @param[in]\tw\t-\tSpecifies the window whose pointer position is queried.\n * @param[out]\tp\t-\tX,Y coordinate\n *\n * @return\tint\n * @retval\t0\t: p is NULL\n * @retval\t1\t: successfully completed.\n */\nint\npointer_query(Display *dsp, Window w, struct xy *p)\n{\n\tWindow root_return;\n\tWindow child_return;\n\tint root_x;\n\tint root_y;\n\tint win_x;\n\tint win_y;\n\tunsigned int mask;\n\n\tif (p == NULL)\n\t\treturn 0;\n\n\tif (XQueryPointer(dsp, w,\n\t\t\t  &root_return, &child_return, &root_x, &root_y, &win_x, &win_y, &mask)) {\n\t\tp->x = root_x;\n\t\tp->y = root_y;\n\t} else\n\t\tprintf(\"XQueryPointer failed\\n\");\n\n\treturn 1;\n}\n/**\n * @brief\n * \t\tset access time and modify time to current time\n *\n * @param[in]\tfilename\t-\tfile for which access time needs to be updated.\n */\nvoid\nupdate_utime(char *filename)\n{\n\tutime(filename, NULL);\n\n#ifdef DEBUG\n\tprintf(\"Updating utime\\n\");\n#endif\n}\n\n/**\n * @brief\n * \t\twe lost our X display... 
let's just re-exec ourselves\n *\n * @param[in]\tdsp\t-\tpointer to display (not used here)\n *\n * @return\tint\n * @retval\t0\t: success\n */\nint\nX_handler(Display *dsp)\n{\n\n#ifdef DEBUG\n\tprintf(\"Lost X connection, restarting!\\n\");\n#endif\n\n\texecve(argv_save[0], argv_save, env_save);\n\n\treturn 0;\n}\n"
  },
  {
    "path": "src/tools/pbs_probe.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\n *\t\tpbs_probe.c\n *\n * @brief\n * \t\tMuch of this program derives from the PBS utility chk_tree\n * \t\tand the manner in which things were done in that program.\n *\n * Functions included are:\n * \tmain()\n * \tam_i_authorized()\n * \tinfrastruct_params()\n * \tprint_infrastruct()\n * \ttitle_string()\n * \tprint_problems()\n * \tmsg_table_set_defaults()\n * \tget_primary_values()\n * \tpbsdotconf()\n * \tget_realpath_values()\n * \tis_parent_rpathnull()\n * \tinspect_dir_entries()\n * \twhich_suffixset()\n * \tis_suffix_ok()\n * \twhich_knwn_mpugs()\n * \tchk_entries()\n * \tpbs_dirtype()\n * \tnon_db_resident()\n * \tis_a_numericname()\n * \tcheck_paths()\n * \tcheck_owner_modes()\n * \tmbits_and_owner()\n * \tperm_owner_msg()\n * \tperm_string()\n * \towner_string()\n * \tprocess_ret_code()\n * \tconf4primary()\n * \tenv4primary()\n * \tfix()\n * \tfix_perm_owner()\n *\n */\n#include <pbs_config.h>\n\n#include <pbs_python_private.h>\n#include <Python.h>\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <sys/utsname.h>\n#include <fcntl.h>\n#include <pwd.h>\n#include <limits.h>\n#include <stdlib.h>\n#include <sys/stat.h>\n#include <errno.h>\n#include <stdio.h>\n#include <unistd.h>\n#include <string.h>\n#include <assert.h>\n#include <dirent.h>\n#include <grp.h>\n#include \"cmds.h\"\n#include 
\"pbs_version.h\"\n#include \"pbs_ifl.h\"\n#include \"glob.h\"\n\n// clang-format off\n\n#ifndef\tS_ISLNK\n#define\tS_ISLNK(m)\t(((m) & S_IFMT) == S_IFLNK)\n#endif\n\n#define\tDEMARC\t'/'\n#define DFLT_MSGTBL_SZ (1024)\n\n/* ---- required and disallowed dir/file modes ----*/\n\n#define DFLT_REQ_DIR_MODES (S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH | S_IWOTH)\n#define DFLT_DIS_DIR_MODES (S_IWGRP | S_IWOTH)\n\n#define rwxrxrx\t\t(S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH | S_IXOTH)\n#define frwxrxrx\t(S_IFREG | S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH | S_IXOTH)\n#define drwxrxrx\t(S_IFDIR | S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH | S_IXOTH)\n#define tdrwxrwxrwx\t(S_ISVTX | S_IFDIR | S_IRWXU | S_IRWXG | S_IRWXO)\n#define tgworwx\t\t(S_ISVTX | S_IWGRP | S_IRWXO)\n\n#define drwxgo\t\t(S_IFDIR | S_IRWXU)\n#define drwxrxo\t\t(S_IFDIR | S_IRWXU | S_IRGRP | S_IXGRP)\n#define tgrwxorwx\t(S_ISVTX | S_IRWXG | S_IRWXO)\n#define tgwow\t\t(S_ISVTX | S_IWGRP | S_IWOTH)\n\n#define drwxrxx\t\t(S_IFDIR | S_IRWXU | S_IRGRP | S_IXGRP | S_IXOTH)\n#define tgworw\t\t(S_ISVTX | S_IWGRP | S_IROTH | S_IWOTH)\n#define dtgwow\t\t(S_IFDIR | S_ISVTX | S_IWGRP | S_IWOTH)\n\n#define frwxrxrx\t(S_IFREG | S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH | S_IXOTH)\n#define sgswow\t\t(S_ISUID | S_ISGID | S_IWGRP | S_IWOTH)\n\n#define fsrwxrxrx\t(S_IFREG | S_ISUID | S_IRWXU | S_IRGRP | S_IXGRP | S_IROTH | S_IXOTH)\n#define gswow\t\t(S_ISGID | S_IWGRP | S_IWOTH)\n\n#define frwxgo\t\t(S_IFREG | S_IRWXU)\n#define sgsrwxorwx\t(S_ISUID | S_ISGID | S_IRWXG | S_IRWXO)\n\n#define frwrr\t\t(S_IFREG | S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH)\n#define xsgswxowx\t(S_IXUSR | S_ISUID | S_ISGID | S_IWGRP | S_IXGRP | S_IWOTH | S_IXOTH)\n\n#define frwgo\t\t(S_IFREG | S_IRUSR | S_IWUSR)\n#define xsgsrwxorwx\t(S_IXUSR | S_ISUID | S_ISGID | S_IRWXG | S_IRWXO)\n\n#define frgror\t\t(S_IFREG | S_IRUSR | S_IRGRP | S_IROTH)\n#define sgswxowx\t(S_ISUID | S_ISGID | S_IWGRP | S_IXGRP | S_IWOTH | S_IXOTH)\n\n#define\tdrwxrr\t\t(S_IFDIR | S_IRWXU | S_IRGRP 
| S_IROTH)\n#define tgwxowx\t\t(S_ISVTX | S_IWGRP | S_IXGRP | S_IWOTH | S_IXOTH)\n\n\n/* ---- Codes to identify the source of various data items ----*/\n\n#define\tSRC_NONE\t0\t/* no source/can't determine */\n#define\tSRC_DFLT\t1\t/* source is default value */\n#define\tSRC_ENV\t\t2\t/* source is environment variable */\n#define\tSRC_CONF\t3\t/* source is PBS config file */\n\n\n/* -----------  error values -------------*/\n\n#define\tPBS_CONF_NO_EXIST\t1\n#define\tPBS_CONF_CAN_NOT_OPEN\t2\n\n#define LSTAT_PATH_ERR\t-1\n#define PATH_ERR\t 1\n\n\ntypedef struct  statdata {\n\tint populated;\t\t/* member \"sb\" populated */\n\tstruct stat\tsb;\t/* stat  \"buffer\" */\n} STATDATA;\n\ntypedef struct  utsdata {\n\tint\tpopulated;\t/* member \"ub\" populated */\n\tstruct  utsname ub;\t/* uname \"buffer\" */\n} UTSDATA;\n\ntypedef struct vld_ug {\n\t/*\n\t * each directory/file in the PBS infrastructure should have\n\t * one of these \"valid users, valid groups\" structures\n\t * associated with it\n\t */\n\tint\t*uids;\t\t/* -1 terminated array UID values */\n\tint\t*gids;\t\t/* -1 terminated array GID values */\n\n\tchar\t**unames; \t/* null terminated table user names */\n\tchar\t**gnames; \t/* null terminated table group names */\n} VLD_UG;\n\n\ntypedef struct modes_path_user_group {\n\t/*\n\t * each directory/file in the PBS infrastructure should have\n\t * one of these \"modes, path, user, group, type\" structures\n\t * associated with it\n\t */\n\tint\tfc;\t\t/* fix code: 0=no, 1=perm/own, 2=create */\n\n\tint\tnotReq;\t\t/* bit1 (0x1): 0=always required, 1=never required\n\t\t\t\t   bit2 (0x2): 1=not required for command-only install\n\t\t\t\t   bit3 (0x4): 1=not required for execution-only install\n\t\t\t\t   Note: used in conjunction with \"notbits\"\n\t\t\t\t */\n\n\tint\tchkfull;\t/* 1=check each path component */\n\n\tint\treq_modes;\t/* required permissions (modes) */\n\tint\tdis_modes;\t/* disallowed permissions (modes) */\n\tVLD_UG\t*vld_ug;\t/* tables of 
valid users and groups */\n\tchar\t*path;\t\t/* location of file/directory */\n\tchar\t*realpath;\t/* canonicalized absolute location */\n} MPUG;\n\ntypedef struct modeadjustments {\n\t/*\n\t * an instance of this structure will contain modification\n\t * data that should be used in conjunction with the mode data\n\t * found in a corresponding MPUG data structure.\n\t */\n\tint\treq;\t/* required modes   */\n\tint\tdis;\t/* disallowed modes */\n} ADJ;\n\n\ntypedef struct primary {\n\n\tMPUG\t*pbs_mpug;\t/* MPUGs: PBS primary dirs/files */\n\n\t/*\n\t * record values and sources for \"path\" and \"started\"\n\t */\n\n\tstruct {\n\t\tunsigned\tserver:1, mom:1, sched:1;\n\t} started;\n\n\tstruct {\n\t\tunsigned\tserver:2, mom:2, sched:2;\n\t} src_started;\n\n\tstruct {\n\t\tunsigned\tconf:2, home:2, exec:2;\n\t} src_path;\n\n} PRIMARY;\n\n/*\n * Numeric codes - for use in title generation (see function, title_string)\n */\nenum code_title { TC_top, TC_sys, TC_ro, TC_fx, TC_pri, TC_ho, TC_ex,\n\tTC_cnt, TC_tvrb, TC_datpri, TC_datho, TC_datex, TC_noerr,\n\tTC_use };\n/*\n * Numeric codes - for use in function process_ret_code()\n */\nenum func_names { GET_PRIMARY_VALUES, END_FUNC_NAMES };\n\n/*\n * Message Header data - used in output of pbs_probe's \"Primary\" variables\n */\nenum\tmhp { MHP_cnf, MHP_home, MHP_exec, MHP_svr, MHP_mom, MHP_sched };\nstatic char mhp[][20] = {\n\t\"PBS_CONF_FILE\",\n\t\"PBS_HOME\",\n\t\"PBS_EXEC\",\n\t\"PBS_START_SERVER\",\n\t\"PBS_START_MOM\",\n\t\"PBS_START_SCHED\"\n};\n\n/* ---- default values for uid/gid, user names, group names ----*/\n\nstatic int pbsdata[] = {-1, -1};\t\t  /* PBS datastore */\nstatic int pbsservice[] = {0, -1}; /* PBS daemon service user */\nstatic int pbsu[] = {0, -1};\t\t  /* PBS UID, default */\nstatic int du[] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, -1}; /* non-PBS UIDs, default */\n\nstatic char *pbs_dataname[] = {\"pbsdata\", NULL}; /* PBS data name, default */\nstatic char *pbs_servicename[] = {\"root\", NULL}; /* PBS 
daemon service name, default */\nstatic char *pbs_unames[] = {\"root\", NULL}; /* PBS user name, default */\nstatic char *pbs_gnames[] = {NULL}; /* PBS group name, default*/\n\nstatic\tint\tdg[] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, -1}; /* non-PBS GIDs, default */\n\n\n/* ---------- default VLD_UG structures, PBS and non-PBS -----------*/\n\nstatic VLD_UG dflt_pbs_data = { pbsdata, dg, &pbs_dataname[0], &pbs_gnames[0] };\nstatic VLD_UG dflt_pbs_service = { pbsservice, dg, &pbs_servicename[0], &pbs_gnames[0] };\nstatic VLD_UG dflt_pbs_ug = { pbsu, dg, &pbs_unames[0], &pbs_gnames[0] };\nstatic VLD_UG dflt_ext_ug = { du, dg, &pbs_unames[0], &pbs_gnames[0] };\n\n/* ============  PBS path names ============ */\n\nstatic  char default_pbsconf[] = \"/etc/pbs.conf\";\n\n/* ------------ PBS HOME: relative paths -----------*/\n\n\nstatic char svrhome[][80] = {\n\t/* 00 */ \"server_logs\",\n\t/* 01 */ \"spool\",\n\t/* 02 */ \"server_priv\",\n\t/* 03 */ \"server_priv/resourcedef\",\n\t/* 04 */ \"server_priv/server.lock\",\n\t/* 05 */ \"server_priv/tracking\",\n\t/* 06 */ \"server_priv/accounting\",\n\t/* 07 */ \"server_priv/jobs\",\n\t/* 08 */ \"server_priv/users\",\n\t/* 09 */ \"server_priv/hooks\",\n\t/* 10 */ \"server_priv/hooks/tmp\",\n\t/* 11 */ \"server_priv/prov_tracking\",\n\t/* 12 */ \"server_priv/db_password\",\n\t/* 13 */ \"server_priv/db_svrhost\",\n\t/* 14 */ \"server_priv/db_svrhost.new\",\n\t/* 15 */ \"server_priv/svrlive\",\n\t/* 16 */ \"datastore\"\n};\n\nstatic char momhome[][80] = {\n\t/* 0 */ \"aux\",\n\t/* 1 */ \"checkpoint\",\n\t/* 2 */ \"mom_logs\",\n\t/* 3 */ \"mom_priv\",\n\t/* 4 */ \"mom_priv/mom.lock\",\n\t/* 5 */ \"mom_priv/config\",\n\t/* 6 */ \"mom_priv/jobs\",\n\t/* 7 */ \"spool\",\n\t/* 8 */ \"undelivered\",\n\t/* 9 */ \"mom_priv/config.d\",\n\t/* 10 */ \"mom_priv/hooks\",\n\t/* 11 */ \"mom_priv/hooks/tmp\"\n};\n\nstatic char schedhome[][80] = {\n\t/* 0 */ \"sched_logs\",\n\t/* 1 */ \"sched_priv\",\n\t/* 2 */ \"sched_priv/dedicated_time\",\n\t/* 3 
*/ \"sched_priv/holidays\",\n\t/* 4 */ \"sched_priv/sched_config\",\n\t/* 5 */ \"sched_priv/resource_group\",\n\t/* 6 */ \"sched_priv/sched.lock\",\n\t/* 7 */ \"sched_priv/sched_out\"\n};\n\nstatic char exec[][80] = {\n\t/* 0 */ \"bin\",\n\t/* 1 */ \"etc\",\n\t/* 2 */ \"include\",\n\t/* 3 */ \"lib\",\n\t/* 4 */ \"man\",\n\t/* 5 */ \"sbin\",\n\t/* 6 */ \"tcltk\",\n\t/* 7 */ \"python\",\n\t/* 8 */ \"pgsql\"\n};\n\n\n\n/* ------------ PBS EXEC: relative paths ----------*/\n\nstatic char exbin[][80] = {\n\t/* 00 */ \"bin/pbs_topologyinfo\",\n\t/* 01 */ \"bin/pbs_hostn\",\n\t/* 02 */ \"bin/pbs_rdel\",\n\t/* 03 */ \"bin/pbs_rstat\",\n\t/* 04 */ \"bin/pbs_rsub\",\n\t/* 05 */ \"bin/pbs_tclsh\",\n\t/* 06 */ \"bin/pbs_wish\",\n\t/* 07 */ \"bin/pbsdsh\",\n\t/* 08 */ \"bin/pbsnodes\",\n\t/* 09 */ \"bin/printjob\",\n\t/* 10 */ \"bin/qalter\",\n\t/* 11 */ \"bin/qdel\",\n\t/* 12 */ \"bin/qdisable\",\n\t/* 13 */ \"bin/qenable\",\n\t/* 14 */ \"bin/qhold\",\n\t/* 15 */ \"bin/qmgr\",\n\t/* 16 */ \"bin/qmove\",\n\t/* 17 */ \"bin/qmsg\",\n\t/* 18 */ \"bin/qorder\",\n\t/* 19 */ \"bin/qrerun\",\n\t/* 20 */ \"bin/qrls\",\n\t/* 21 */ \"bin/qrun\",\n\t/* 22 */ \"bin/qselect\",\n\t/* 23 */ \"bin/qsig\",\n\t/* 24 */ \"bin/qstart\",\n\t/* 25 */ \"bin/qstat\",\n\t/* 26 */ \"bin/qstop\",\n\t/* 27 */ \"bin/qsub\",\n\t/* 28 */ \"bin/qterm\",\n\t/* 29 */ \"bin/tracejob\",\n\t/* 30 */ \"bin/pbs_lamboot\",\n\t/* 31 */ \"bin/pbs_mpilam\",\n\t/* 32 */ \"bin/pbs_mpirun\",\n\t/* 33 */ \"bin/pbs_mpihp\",\n\t/* 34 */ \"bin/pbs_attach\",\n\t/* 35 */ \"bin/pbs_remsh\",\n\t/* 36 */ \"bin/pbs_tmrsh\",\n\t/* 37 */ \"bin/mpiexec\",\n\t/* 38 */ \"bin/pbsrun\",\n\t/* 39 */ \"bin/pbsrun_wrap\",\n\t/* 40 */ \"bin/pbsrun_unwrap\",\n\t/* 41 */ \"bin/pbs_python\",\n\t/* 42 */ \"bin/pbs_ds_password\",\n\t/* 43 */ \"bin/pbs_dataservice\"\n};\n\nstatic char exsbin[][80] = {\n\t/* 00 */ \"sbin/pbs-report\",\n\t/* 01 */ \"sbin/pbs_demux\",\n\t/* 02 */ \"sbin/pbs_idled\",\n\t/* 03 */ \"sbin/pbs_iff\",\n\t/* 04 */ 
\"sbin/pbs_mom\",\n\t/* 05 */ \"XXX\",\t\t\t\t/* slot available for use */\n\t/* 06 */ \"XXX\",\t\t\t\t/* slot available for use */\n\t/* 07 */ \"sbin/pbs_rcp\",\n\t/* 08 */ \"sbin/pbs_sched\",\n\t/* 09 */ \"sbin/pbs_server\",\n\t/* 10 */ \"sbin/pbsfs\",\n\t/* 11 */ \"sbin/pbs_probe\",\n\t/* 12 */ \"sbin/pbs_upgrade_job\"\n};\n\nstatic char exetc[][80] = {\n\t/* 00 */ \"etc/modulefile\",\n\t/* 01 */ \"etc/pbs_dedicated\",\n\t/* 02 */ \"etc/pbs_habitat\",\n\t/* 03 */ \"etc/pbs_holidays\",\n\t/* 04 */ \"etc/pbs_init.d\",\n\t/* 05 */ \"etc/pbs_postinstall\",\n\t/* 06 */ \"etc/pbs_resource_group\",\n\t/* 07 */ \"etc/pbs_sched_config\",\n\t/* 08 */ \"etc/pbs_db_utility\",\n\t/* 09 */ \"etc/pbs_topologyinfo\"\n};\n\nstatic char exinc[][80] = {\n\t/* 00 */ \"include/pbs_error.h\",\n\t/* 01 */ \"include/pbs_ifl.h\",\n\t/* 02 */ \"include/rm.h\",\n\t/* 03 */ \"include/tm.h\",\n\t/* 04 */ \"include/tm_.h\"\n};\n\nstatic char exlib[][80] = {\n\t/* 00 */ \"lib/libattr.a\",\n\t/* 01 */ \"SLOT_AVAILABLE\",\n\t/* 02 */ \"lib/liblog.a\",\n\t/* 03 */ \"lib/libnet.a\",\n\t/* 04 */ \"lib/libpbs.a\",\n\t/* 05 */ \"lib/libsite.a\",\n\t/* 06 */ \"lib/pbs_sched.a\",\n\t/* 07 */ \"lib/pm\",\n\t/* 08 */ \"lib/pm/PBS.pm\",\n\t/* 09 */ \"lib/MPI\",\n\t/* 10 */ \"lib/MPI/sgiMPI.awk\",\n\t/* 11 */ \"lib/MPI/pbsrun.ch_gm.init.in\",\n\t/* 12 */ \"lib/MPI/pbsrun.ch_mx.init.in\",\n\t/* 13 */ \"lib/MPI/pbsrun.gm_mpd.init.in\",\n\t/* 14 */ \"lib/MPI/pbsrun.mx_mpd.init.in\",\n\t/* 15 */ \"lib/MPI/pbsrun.mpich2.init.in\",\n\t/* 16 */ \"lib/MPI/pbsrun.intelmpi.init.in\",\n\t/* 17 */ \"SLOT_AVAILABLE\",\n\t/* 18 */ \"lib/python\",\n\t/* 19 */ \"lib/python/altair\",\n\t/* 20 */ \"lib/python/altair/pbs\",\n\t/* 21 */ \"lib/python/altair/pbs/__pycache__\",\n\t/* 22 */ \"lib/python/altair/pbs/__pycache__/__init__.cpython-3?.pyc\",\n\t/* 23 */ \"lib/python/altair/pbs/__init__.py\",\n\t/* 24 */ \"lib/python/altair/pbs/v1\",\n\t/* 25 */ \"lib/python/altair/pbs/v1/__pycache__\",\n\t/* 26 */ 
\"lib/python/altair/pbs/v1/__pycache__/__init__.cpython-3?.pyc\",\n\t/* 27 */ \"lib/python/altair/pbs/v1/__init__.py\",\n\t/* 28 */ \"lib/python/altair/pbs/v1/_export_types.py\",\n\t/* 29 */ \"lib/python/altair/pbs/v1/_attr_types.py\",\n\t/* 30 */ \"lib/python/altair/pbs/v1/__pycache__/_attr_types.cpython-3?.pyc\",\n\t/* 31 */ \"lib/python/altair/pbs/v1/_base_types.py\",\n\t/* 32 */ \"lib/python/altair/pbs/v1/__pycache__/_base_types.cpython-3?.pyc\",\n\t/* 33 */ \"lib/python/altair/pbs/v1/_exc_types.py\",\n\t/* 34 */ \"lib/python/altair/pbs/v1/__pycache__/_exc_types.cpython-3?.pyc\",\n\t/* 35 */ \"lib/python/altair/pbs/v1/__pycache__/_export_types.cpython-3?.pyc\",\n\t/* 36 */ \"lib/python/altair/pbs/v1/_svr_types.py\",\n\t/* 37 */ \"lib/python/altair/pbs/v1/__pycache__/_svr_types.cpython-3?.pyc\",\n};\n\n#if 0\nstatic char exec_man1[] = \"man/man1\";\nstatic char exec_man3[] = \"man/man3\";\nstatic char exec_man7[] = \"man/man7\";\nstatic char exec_man8[] = \"man/man8\";\n#endif\n\nstatic char exman1[][80] = {\n\t/* 00 */ \"man/man1\",\n\t/* 01 */ \"man/man1/pbs_python.1B\",\n\t/* 02 */ \"man/man1/pbs_rdel.1B\",\n\t/* 03 */ \"man/man1/pbs_rstat.1B\",\n\t/* 04 */ \"man/man1/pbs_rsub.1B\",\n\t/* 05 */ \"man/man1/pbsdsh.1B\",\n\t/* 06 */ \"man/man1/qalter.1B\",\n\t/* 07 */ \"man/man1/qdel.1B\",\n\t/* 08 */ \"man/man1/qhold.1B\",\n\t/* 09 */ \"man/man1/qmove.1B\",\n\t/* 10 */ \"man/man1/qmsg.1B\",\n\t/* 11 */ \"man/man1/qorder.1B\",\n\t/* 12 */ \"man/man1/qrerun.1B\",\n\t/* 13 */ \"man/man1/qrls.1B\",\n\t/* 14 */ \"man/man1/qselect.1B\",\n\t/* 15 */ \"man/man1/qsig.1B\",\n\t/* 16 */ \"man/man1/qstat.1B\",\n\t/* 17 */ \"man/man1/qsub.1B\"\n};\n\nstatic char exman3[][80] = {\n\t/* 00 */ \"man/man3\",\n\t/* 01 */ \"man/man3/pbs_alterjob.3B\",\n\t/* 02 */ \"man/man3/pbs_connect.3B\",\n\t/* 03 */ \"man/man3/pbs_default.3B\",\n\t/* 04 */ \"man/man3/pbs_deljob.3B\",\n\t/* 05 */ \"man/man3/pbs_disconnect.3B\",\n\t/* 06 */ \"man/man3/pbs_geterrmsg.3B\",\n\t/* 07 */ 
\"man/man3/pbs_holdjob.3B\",\n\t/* 08 */ \"man/man3/pbs_manager.3B\",\n\t/* 09 */ \"man/man3/pbs_movejob.3B\",\n\t/* 10 */ \"man/man3/pbs_msgjob.3B\",\n\t/* 11 */ \"man/man3/pbs_orderjob.3B\",\n\t/* 12 */ \"man/man3/pbs_rerunjob.3B\",\n\t/* 13 */ \"man/man3/pbs_statsched.3B\",\n\t/* 14 */ \"man/man3/pbs_rescreserve.3B\",\n\t/* 15 */ \"man/man3/pbs_rlsjob.3B\",\n\t/* 16 */ \"man/man3/pbs_runjob.3B\",\n\t/* 17 */ \"man/man3/pbs_selectjob.3B\",\n\t/* 18 */ \"man/man3/pbs_sigjob.3B\",\n\t/* 19 */ \"man/man3/pbs_stagein.3B\",\n\t/* 20 */ \"man/man3/pbs_statjob.3B\",\n\t/* 21 */ \"man/man3/pbs_statnode.3B\",\n\t/* 22 */ \"man/man3/pbs_statque.3B\",\n\t/* 23 */ \"man/man3/pbs_statserver.3B\",\n\t/* 24 */ \"man/man3/pbs_submit.3B\",\n\t/* 25 */ \"man/man3/pbs_terminate.3B\",\n\t/* 26 */ \"man/man3/tm.3\",\n\t/* 27 */ \"man/man3/pbs_tclapi.3B\",\n\t/* 28 */ \"man/man3/pbs_delresv.3B\",\n\t/* 29 */ \"man/man3/pbs_locjob.3B\",\n\t/* 30 */ \"man/man3/pbs_selstat.3B\",\n\t/* 31 */ \"man/man3/pbs_statresv.3B\",\n\t/* 32 */ \"man/man3/pbs_statfree.3B\"\n};\n\nstatic char exman7[][80] = {\n\t/* 00 */ \"man/man7\",\n\t/* 01 */ \"man/man7/pbs_job_attributes.7B\",\n\t/* 02 */ \"man/man7/pbs_node_attributes.7B\",\n\t/* 03 */ \"man/man7/pbs_queue_attributes.7B\",\n\t/* 04 */ \"man/man7/pbs_resources.7B\",\n\t/* 05 */ \"man/man7/pbs_resv_attributes.7B\",\n\t/* 06 */ \"man/man7/pbs_server_attributes.7B\",\n\t/* 07 */ \"man/man7/pbs_sched_attributes.7B\",\n\t/* 08 */ \"man/man7/pbs_professional.7B\"\n};\n\nstatic char exman8[][80] = {\n\t/* 00 */ \"man/man8\",\n\t/* 01 */ \"man/man8/pbs_idled.8B\",\n\t/* 02 */ \"man/man8/pbs_mom.8B\",\n\t/* 03 */ \"man/man8/pbs_sched.8B\",\n\t/* 04 */ \"man/man8/pbs_server.8B\",\n\t/* 05 */ \"man/man8/pbsfs.8B\",\n\t/* 06 */ \"man/man8/pbsnodes.8B\",\n\t/* 07 */ \"man/man8/qdisable.8B\",\n\t/* 08 */ \"man/man8/qenable.8B\",\n\t/* 09 */ \"man/man8/qmgr.8B\",\n\t/* 10 */ \"man/man8/qrun.8B\",\n\t/* 11 */ \"man/man8/qstart.8B\",\n\t/* 12 */ 
\"man/man8/qstop.8B\",\n\t/* 13 */ \"man/man8/qterm.8B\",\n\t/* 14 */ \"man/man8/pbs_lamboot.8B\",\n\t/* 15 */ \"man/man8/pbs_mpilam.8B\",\n\t/* 16 */ \"man/man8/pbs_mpirun.8B\",\n\t/* 17 */ \"man/man8/pbs_attach.8B\",\n\t/* 18 */ \"man/man8/pbs_mkdirs.8B\",\n\t/* 19 */ \"man/man8/pbs_hostn.8B\",\n\t/* 20 */ \"man/man8/pbs_probe.8B\",\n\t/* 21 */ \"man/man8/pbs-report.8B\",\n\t/* 22 */ \"man/man8/pbs_tclsh.8B\",\n\t/* 23 */ \"man/man8/pbs_tmrsh.8B\",\n\t/* 24 */ \"man/man8/pbs_wish.8B\",\n\t/* 25 */ \"man/man8/printjob.8B\",\n\t/* 26 */ \"man/man8/pbs.8B\",\n\t/* 27 */ \"man/man8/pbs_interactive.8B\"\n};\n\nstatic char extcltk[][80] = {\n\t/* 0 */ \"tcltk/bin\",\n\t/* 1 */ \"tcltk/include\",\n\t/* 2 */ \"tcltk/lib\",\n\t/* 3 */ \"tcltk/license.terms\"\n};\n\nstatic char expython[][80] = {\n\t/* 0 */ \"python/bin\",\n\t/* 1 */ \"python/include\",\n\t/* 2 */ \"python/lib\",\n\t/* 3 */ \"python/man\",\n\t/* 4 */ \"python/python.changes.txt\",\n\t/* 5 */ \"python/bin/python\"\n};\n\nstatic char expgsql[][80] = {\n\t/* 0 */ \"pgsql/bin\",\n\t/* 1 */ \"pgsql/include\",\n\t/* 2 */ \"pgsql/lib\",\n\t/* 3 */ \"pgsql/share\"\n};\n\n/* -------- global static PBS variables -------- */\n\nADJ dflt_modeadjustments = { S_IFDIR | S_IXUSR | S_IXGRP | S_IXOTH, S_IFREG };\n\nenum fixcodes\t{ FIX_none, FIX_po };\nenum pbsdirtype { PBS_niltype, PBS_logsdir, PBS_acctdir,\n\tPBS_spooldir, PBS_jobsdir,\n\tPBS_usersdir,\n\tPBS_hooksdir, PBS_hookswdir };\n\n\nenum pbs_mpugs { PBS_conf, PBS_home, PBS_exec, PBS_last };\nchar\t*origin_names[] = {\"PBS CONF FILE\", \"PBS HOME\", \"PBS EXEC\"};\n\n/*\n * The following definitions simplify setting bit field codes\n */\n\n#define C000\t{0, 0, 0}\t\t/* no fix,    Req'd, !ckfull */\n#define C010\t{0, 1, 0}\t\t/* no fix,    noReq, !ckfull */\n#define C100\t{1, 0, 0}\t\t/* fix perms, Req'd, !ckfull */\n#define C110\t{1, 1, 0}\t\t/* fix perms, noReq, !ckfull */\n#define C111\t{1, 1, 1}\t\t/* fix perms, noReq,  ckfull */\n#define C200\t{2, 0, 0}\t\t/* 
fix exist, Req'd, !ckfull */\n#define C201\t{2, 0, 1}\t\t/* fix exist, Req'd,  ckfull */\n\n/*\n * MPUG arrays and mask to use with MPUG's \"notReq\" member\n */\n\n/*\n * This variable records which bits in an MPUG's \"notReq\" member need to be considered\n *\tnotbits starts as 0x1\n *\tFor execution only (Mom) set to 0x5\n *\tFor commands only\t set to 0x3\n *\n *\t\"notReq\" is set to\n *\t0 - required for all\n *\t1 - not required ever\n *\t2 - not required for commands only install\n *\t4 - not required for execution only (Mom) install\n *\n * The two are \"and\"ed together.  If the result is 0 the file should be there,\n * if the result is non-zero, the file need not be present.\n */\nstatic int\tnotbits = 0x1;\n\n\nstatic MPUG\tpbs_mpugs[] = {\n\t/*\n\t * infrastructure data associated with PBS origins\n\t * dir, chkfull, required and disallowed modes, pointer\n\t * to \"valid users, valid groups\", path, realpath\n\t */\n\t{1, 0, 0, frwrr,  xsgswxowx, &dflt_ext_ug, NULL, NULL},\n\t{1, 0, 1, drwxrxrx,   tgwow, &dflt_ext_ug, NULL, NULL},\n\t{1, 0, 1, drwxrxrx,   tgwow, &dflt_ext_ug, NULL, NULL} };\n\nenum exec_mpugs { EXEC_exec, EXEC_bin, EXEC_sbin,  EXEC_etc, EXEC_include,\n\tEXEC_lib, EXEC_man, EXEC_man1, EXEC_man3, EXEC_man7,\n\tEXEC_man8, EXEC_tcltk, EXEC_python, EXEC_pgsql, EXEC_last };\n\nchar *exec_mpug_set[EXEC_last] = {\"exec\", \"bin\", \"sbin\", \"etc\", \"include\",\n\t\"lib\", \"man\", \"man1\", \"man3\", \"man7\",\n\t\"man8\", \"tcltk\", \"python\", \"pgsql\"};\n\nint  exec_sizes[EXEC_last];\n\nstatic MPUG\t exec_mpugs[] = {\n\t/*\n\t * infrastructure data associated with PBS execution -\n\t * bin, sbin, etc, include, lib, man, tcltk, python, pgsql\n\t */\n\t{1, 0, 0, drwxrxrx,    tgwow, &dflt_pbs_ug, exec[0], NULL}, /* bin */\n\t{1, 0, 0, drwxrxrx,    tgwow, &dflt_pbs_ug, exec[1], NULL}, /* etc */\n\t{1, 0, 0, drwxrxrx,    tgwow, &dflt_pbs_ug, exec[2], NULL}, /* include */\n\t{1, 0, 0, drwxrxrx,    tgwow, &dflt_pbs_ug, exec[3], NULL}, /* lib 
*/\n\t{1, 0, 0, drwxrxrx,    tgwow, &dflt_pbs_ug, exec[4], NULL}, /* man */\n\t{1, 0, 0, drwxrxrx,    tgwow, &dflt_pbs_ug, exec[5], NULL}, /* sbin */\n\t{1, 0, 0, drwxrxrx,    tgwow, &dflt_pbs_ug, exec[6], NULL}, /* tcltk */\n\t{1, 0, 0, drwxrxrx,    tgwow, &dflt_pbs_ug, exec[7], NULL}, /* python */\n\t{1, 0, 0, drwxrxrx,    tgwow, &dflt_pbs_ug, exec[8], NULL}  /* pgsql */\n};\n\n\n\nstatic MPUG\tbin_mpugs[] = {\n\t/*\n\t * infrastructure data associated with PBS_EXEC/bin\n\t */\n\t{1, 0, 0, drwxrxrx,    tgwow, &dflt_pbs_ug, exec[0],    NULL},\n\t{1, 6, 0,   frwxgo,   sgsrwxorwx, &dflt_pbs_ug, exbin[ 0], NULL }, /* pbs_topologyinfo */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[ 1], NULL }, /* pbs_hostn */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[ 2], NULL }, /* pbs_rdel */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[ 3], NULL }, /* pbs_rstat */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[ 4], NULL }, /* pbs_rsub */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[ 5], NULL }, /* pbs_tclsh */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[ 6], NULL }, /* pbs_wish */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[ 7], NULL }, /* pbsdsh */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[ 8], NULL }, /* pbsnodes */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[ 9], NULL }, /* printjob */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[10], NULL }, /* qalter */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[11], NULL }, /* qdel */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[12], NULL }, /* qdisable */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[13], NULL }, /* qenable */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[14], NULL }, /* qhold */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[15], NULL }, /* qmgr */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[16], NULL }, /* qmove */\n\t{1, 0, 0, 
  frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[17], NULL }, /* qmsg */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[18], NULL }, /* qorder */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[19], NULL }, /* qrerun */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[20], NULL }, /* qrls */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[21], NULL }, /* qrun */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[22], NULL }, /* qselect */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[23], NULL }, /* qsig */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[24], NULL }, /* qstart */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[25], NULL }, /* qstat */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[26], NULL }, /* qstop */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[27], NULL }, /* qsub */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[28], NULL }, /* qterm */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[29], NULL }, /* tracejob */\n\t{1, 1, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[30], NULL }, /* pbs_lamboot */\n\t{1, 1, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[31], NULL }, /* pbs_mpilam */\n\t{1, 1, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[32], NULL }, /* pbs_mpirun */\n\t{1, 1, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[33], NULL }, /* pbs_mpihp */\n\t{1, 1, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[34], NULL }, /* pbs_attach */\n\t{1, 1, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[35], NULL }, /* pbs_remsh */\n\t{1, 1, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[36], NULL }, /* pbs_tmrsh */\n\t{1, 2, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[37], NULL }, /* mpiexec */\n\t{1, 1, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[38], NULL }, /* pbsrun */\n\t{1, 1, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[39], NULL }, /* pbsrun_wrap */\n\t{1, 1, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[40], NULL }, /* 
pbsrun_unwrap */\n\t{1, 2, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exbin[41], NULL },  /* pbs_python */\n\t{1, 6, 0,   frwxgo,     tgrwxorwx, &dflt_pbs_ug, exbin[42], NULL },  /* pbs_ds_password */\n\t{1, 6, 0,   frwxgo,     tgrwxorwx, &dflt_pbs_ug, exbin[43], NULL }  /* pbs_dataservice */\n};\n\nstatic MPUG\tsbin_mpugs[] = {\n\t/*\n\t * infrastructure data associated with PBS_EXEC/sbin\n\t */\n\t{1, 0, 0,   drwxrxrx,      tgwow, &dflt_pbs_ug, exec[5], NULL},\n\t{1, 2, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exsbin[ 0], NULL }, /* pbs-report */\n\t{1, 2, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exsbin[ 1], NULL }, /* pbs_demux */\n\t{1, 2, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exsbin[ 2], NULL }, /* pbs_idled */\n\t{1, 0, 0,  fsrwxrxrx,      gswow, &dflt_pbs_ug, exsbin[ 3], NULL }, /* pbs_iff */\n\t{1, 2, 0,     frwxgo, sgsrwxorwx, &dflt_pbs_ug, exsbin[ 4], NULL }, /* pbs_mom */\n\t{1, 1, 0,     frwxgo, sgsrwxorwx, &dflt_pbs_ug, exsbin[ 5], NULL }, /* slot available for use */\n\t{1, 1, 0,     frwxgo, sgsrwxorwx, &dflt_pbs_ug, exsbin[ 6], NULL }, /* slot available for use */\n\t{1, 2, 0,  fsrwxrxrx,      gswow, &dflt_pbs_ug, exsbin[ 7], NULL }, /* pbs_rcp */\n\t{1, 6, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exsbin[ 8], NULL }, /* pbs_sched */\n\t{1, 6, 0,     frwxgo, sgsrwxorwx, &dflt_pbs_ug, exsbin[ 9], NULL }, /* pbs_server */\n\t{1, 6, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exsbin[10], NULL }, /* pbsfs */\n\t{1, 0, 0,   frwxrxrx,     sgswow, &dflt_pbs_ug, exsbin[11], NULL }, /* pbs_probe */\n\t{1, 2, 0,     frwxgo, sgsrwxorwx, &dflt_pbs_ug, exsbin[12], NULL } /* pbs_upgrade_job */\n};\n\n\nstatic MPUG\tetc_mpugs[] = {\n\t/*\n\t * infrastructure data associated with PBS_EXEC/etc\n\t */\n\t{1, 0, 0, drwxrxrx,      tgwow, &dflt_pbs_ug,  exec[ 1], NULL },\n\t{1, 0, 0,    frwrr,  xsgswxowx, &dflt_pbs_ug, exetc[ 0], NULL }, /* modulefile */\n\t{1, 6, 0,    frwrr,  xsgswxowx, &dflt_pbs_ug, exetc[ 1], NULL }, /* pbs_dedicated */\n\t{1, 2, 0,   frwxgo, 
sgsrwxorwx, &dflt_pbs_ug, exetc[ 2], NULL }, /* pbs_habitat */\n\t{1, 6, 0,    frwrr,  xsgswxowx, &dflt_pbs_ug, exetc[ 3], NULL }, /* pbs_holidays */\n\t{1, 2, 0,   frwxgo, sgsrwxorwx, &dflt_pbs_ug, exetc[ 4], NULL }, /* pbs_init.d */\n\t{1, 0, 0,   frwxgo, sgsrwxorwx, &dflt_pbs_ug, exetc[ 5], NULL }, /* pbs_postinstall */\n\t{1, 6, 0,    frwrr,  xsgswxowx, &dflt_pbs_ug, exetc[ 6], NULL }, /* pbs_resource_group */\n\t{1, 6, 0,   frgror,   sgswxowx, &dflt_pbs_ug, exetc[ 7], NULL }, /* pbs_sched_config */\n\t{1, 6, 0,   frwxgo,  tgrwxorwx, &dflt_pbs_ug, exetc[ 8], NULL }, /* pbs_db_utility */\n\t{1, 6, 0,   frwxgo, sgsrwxorwx, &dflt_pbs_ug, exetc[ 9], NULL }  /* pbs_topologyinfo */\n};\n\n\nstatic MPUG\tinclude_mpugs[] = {\n\t/*\n\t * infrastructure data associated with PBS_EXEC/include\n\t */\n\t{1, 1, 0,   drwxrxrx,    tgwow, &dflt_pbs_ug, exec[2], NULL},\n\t{1, 1, 0,     frgror,   sgswxowx, &dflt_pbs_ug, exinc[0], NULL }, /* pbs_error.h */\n\t{1, 1, 0,     frgror,   sgswxowx, &dflt_pbs_ug, exinc[1], NULL }, /* pbs_ifl.h */\n\t{1, 1, 0,     frgror,   sgswxowx, &dflt_pbs_ug, exinc[2], NULL }, /* rm.h */\n\t{1, 1, 0,     frgror,   sgswxowx, &dflt_pbs_ug, exinc[3], NULL }, /* tm.h */\n\t{1, 1, 0,     frgror,   sgswxowx, &dflt_pbs_ug, exinc[4], NULL } }; /* tm_.h */\n\n\nstatic MPUG\tlib_mpugs[] = {\n\t/*\n\t * infrastructure data associated with PBS_EXEC/lib\n\t */\n\t{1, 0, 0, drwxrxrx,    tgwow, &dflt_pbs_ug, exec[3],    NULL},\n\t{1, 1, 0,    frwrr,  xsgswxowx, &dflt_pbs_ug, exlib[ 0], NULL }, /* libattr.a */\n\t{1, 1, 0,    frwrr,  xsgswxowx, &dflt_pbs_ug, exlib[ 1], NULL }, /* SLOT_AVAILABLE */\n\t{1, 1, 0,    frwrr,  xsgswxowx, &dflt_pbs_ug, exlib[ 2], NULL }, /* liblog.a */\n\t{1, 1, 0,    frwrr,  xsgswxowx, &dflt_pbs_ug, exlib[ 3], NULL }, /* libnet.a */\n\t{1, 1, 0,    frwrr,  xsgswxowx, &dflt_pbs_ug, exlib[ 4], NULL }, /* libpbs.a */\n\t{1, 1, 0,    frwrr,  xsgswxowx, &dflt_pbs_ug, exlib[ 5], NULL }, /* libsite.a */\n\t{1, 1, 0,    frwrr,  xsgswxowx, 
&dflt_pbs_ug, exlib[ 6], NULL }, /* pbs_sched.a */\n\t{1, 2, 0,   drwxrxrx,    tgwow, &dflt_pbs_ug, exlib[ 7], NULL }, /* pm */\n\t{1, 0, 0,    frwrr,  xsgswxowx, &dflt_pbs_ug, exlib[ 8], NULL }, /* PBS.pm */\n\t{1, 2, 0,   drwxrxrx,    tgwow, &dflt_pbs_ug, exlib[ 9], NULL}, /* MPI */\n\t{1, 1, 0,    frwrr,  xsgswxowx, &dflt_pbs_ug, exlib[10], NULL}, /* sgiMPI.awk */\n\t{1, 1, 0,    frwrr,  xsgswxowx, &dflt_pbs_ug, exlib[11], NULL}, /* pbsrun.ch_gm.init.in */\n\t{1, 1, 0,    frwrr,  xsgswxowx, &dflt_pbs_ug, exlib[12], NULL}, /* pbsrun.ch_mx.init.in */\n\t{1, 1, 0,    frwrr,  xsgswxowx, &dflt_pbs_ug, exlib[13], NULL}, /* pbsrun.gm_mpd.init.in */\n\t{1, 1, 0,    frwrr,  xsgswxowx, &dflt_pbs_ug, exlib[14], NULL}, /* pbsrun.mx_mpd.init.in */\n\t{1, 1, 0,    frwrr,  xsgswxowx, &dflt_pbs_ug, exlib[15], NULL}, /* pbsrun.mpich2.init.in */\n\t{1, 1, 0,    frwrr,  xsgswxowx, &dflt_pbs_ug, exlib[16], NULL},  /* pbsrun.intelmpi.init.in */\n\t{1, 1, 0,    frwrr,  xsgswxowx, &dflt_pbs_ug, exlib[17], NULL},  /* SLOT_AVAILABLE */\n\t{1, 6, 0,    drwxrxrx,   tgwow, &dflt_pbs_ug, exlib[18], NULL},  /* lib/python */\n\t{1, 2, 0,    drwxrxrx,   tgwow, &dflt_pbs_ug, exlib[19], NULL},  /* lib/python/altair */\n\t{1, 2, 0,    drwxrxrx,   tgwow, &dflt_pbs_ug, exlib[20], NULL},  /* lib/python/altair/pbs */\n\t{1, 2, 0,    drwxrxrx,   tgwow, &dflt_pbs_ug, exlib[21], NULL},  /* lib/python/altair/pbs/__pycache__ */\n\t{1, 2, 0,    frgror,  sgswxowx, &dflt_pbs_ug, exlib[22], NULL},  /* lib/python/altair/pbs/__pycache__\n\t/__init__.cpython-3?.pyc */\n\t{1, 2, 0,    frgror,  sgswxowx, &dflt_pbs_ug, exlib[23], NULL},  /* lib/python/altair/pbs/__init__.py */\n\t{1, 2, 0,    drwxrxrx,   tgwow, &dflt_pbs_ug, exlib[24], NULL},  /* lib/python/altair/pbs/v1 */\n\t{1, 2, 0,    drwxrxrx,   tgwow, &dflt_pbs_ug, exlib[25], NULL},  /* lib/python/altair/pbs/v1/__pycache__ */\n\t{1, 2, 0,    frgror,  sgswxowx, &dflt_pbs_ug, exlib[26], NULL},  /* 
lib/python/altair/pbs/v1/__pycache__\n\t/__init__.cpython-3?.pyc */\n\t{1, 2, 0,    frgror,  sgswxowx, &dflt_pbs_ug, exlib[27], NULL},  /* lib/python/altair/pbs/v1/__init__.py */\n\t{1, 2, 0,    frgror,  sgswxowx, &dflt_pbs_ug, exlib[28], NULL},  /* lib/python/altair/pbs/v1/_export_types.py */\n\t{1, 2, 0,    frgror,  sgswxowx, &dflt_pbs_ug, exlib[29], NULL},  /* lib/python/altair/pbs/v1/_attr_types.py */\n\t{1, 2, 0,    frgror,  sgswxowx, &dflt_pbs_ug, exlib[30], NULL},  /* lib/python/altair/pbs/v1/__pycache__\n\t/_attr_types.cpython-3?.pyc */\n\t{1, 2, 0,    frgror,  sgswxowx, &dflt_pbs_ug, exlib[31], NULL},  /* lib/python/altair/pbs/v1/_base_types.py */\n\t{1, 2, 0,    frgror,  sgswxowx, &dflt_pbs_ug, exlib[32], NULL},  /* lib/python/altair/pbs/v1/__pycache__\n\t/_base_types.cpython-3?.pyc */\n\t{1, 2, 0,    frgror,  sgswxowx, &dflt_pbs_ug, exlib[33], NULL},  /* lib/python/altair/pbs/v1/_exc_types.py */\n\t{1, 2, 0,    frgror,  sgswxowx, &dflt_pbs_ug, exlib[34], NULL},  /* lib/python/altair/pbs/v1/__pycache__\n\t/_exc_types.cpython-3?.pyc */\n\t{1, 2, 0,    frgror,  sgswxowx, &dflt_pbs_ug, exlib[35], NULL},  /* lib/python/altair/pbs/v1/__pycache__\n\t/_export_types.cpython-3?.pyc */\n\t{1, 2, 0,    frgror,  sgswxowx, &dflt_pbs_ug, exlib[36], NULL},  /* lib/python/altair/pbs/v1/_svr_types.py */\n\t{1, 2, 0,    frgror,  sgswxowx, &dflt_pbs_ug, exlib[37], NULL},  /* lib/python/altair/pbs/v1/__pycache__\n\t/_svr_types.cpython-3?.pyc */\n};\n\nstatic MPUG\tman_mpugs[] = {\n\t/*\n\t * infrastructure data associated with PBS_EXEC/man\n\t */\n\t{1, 0, 0, drwxrxrx,    tgwow, &dflt_pbs_ug, exec[4],    NULL},\n\n\t/*\n\t * infrastructure data associated with PBS_EXEC/man/man1\n\t */\n\t{1, 0, 0,   drwxrxrx,      tgwow, &dflt_pbs_ug, exman1[ 0], NULL }, /* man1 */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman1[ 1], NULL }, /* pbs_python.1B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman1[ 2], NULL }, /* pbs_rdel.1B */\n\t{1, 0, 0,      frwrr,  
xsgswxowx, &dflt_pbs_ug, exman1[ 3], NULL }, /* pbs_rstat.1B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman1[ 4], NULL }, /* pbs_rsub.1B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman1[ 5], NULL }, /* pbsdsh.1B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman1[ 6], NULL }, /* qalter.1B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman1[ 7], NULL }, /* qdel.1B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman1[ 8], NULL }, /* qhold.1B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman1[ 9], NULL }, /* qmove.1B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman1[10], NULL }, /* qmsg.1B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman1[11], NULL }, /* qorder.1B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman1[12], NULL }, /* qrerun.1B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman1[13], NULL }, /* qrls.1B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman1[14], NULL }, /* qselect.1B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman1[15], NULL }, /* qsig.1B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman1[16], NULL }, /* qstat.1B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman1[17], NULL }, /* qsub.1B */\n\n\t/*\n\t * infrastructure data associated with PBS_EXEC/man/man3\n\t */\n\t{1, 0, 0,   drwxrxrx,      tgwow, &dflt_pbs_ug, exman3[ 0], NULL }, /* man3 */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[ 1], NULL }, /* pbs_alterjob.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[ 2], NULL }, /* pbs_connect.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[ 3], NULL }, /* pbs_default.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[ 4], NULL }, /* pbs_deljob.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[ 5], NULL }, /* pbs_disconnect.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[ 6], NULL }, /* pbs_geterrmsg.3B */\n\t{1, 0, 0,  
    frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[ 7], NULL }, /* pbs_holdjob.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[ 8], NULL }, /* pbs_manager.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[ 9], NULL }, /* pbs_movejob.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[10], NULL }, /* pbs_msgjob.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[11], NULL }, /* pbs_orderjob.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[12], NULL }, /* pbs_rerunjob.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[13], NULL }, /* pbs_statsched.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[14], NULL }, /* pbs_rescreserve.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[15], NULL }, /* pbs_rlsjob.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[16], NULL }, /* pbs_runjob.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[17], NULL }, /* pbs_selectjob.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[18], NULL }, /* pbs_sigjob.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[19], NULL }, /* pbs_stagein.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[20], NULL }, /* pbs_statjob.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[21], NULL }, /* pbs_statnode.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[22], NULL }, /* pbs_statque.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[23], NULL }, /* pbs_statserver.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[24], NULL }, /* pbs_submit.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[25], NULL }, /* pbs_terminate.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[26], NULL }, /* tm.3 */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[27], NULL }, /* pbs_tclapi.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[28], NULL }, /* 
pbs_delresv.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[29], NULL }, /* pbs_locjob.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[30], NULL }, /* pbs_selstat.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[31], NULL }, /* pbs_statresv.3B */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, exman3[32], NULL }, /* pbs_statfree.3B */\n\n\t/*\n\t * infrastructure data associated with PBS_EXEC/man/man7\n\t */\n\t{1, 0, 0, drwxrxrx,  tgwow, &dflt_pbs_ug, exman7[ 0], NULL }, /* man7 */\n\t{1, 0, 0, frwrr, xsgswxowx, &dflt_pbs_ug, exman7[ 1], NULL }, /* pbs_job_attributes.7B */\n\t{1, 0, 0, frwrr, xsgswxowx, &dflt_pbs_ug, exman7[ 2], NULL }, /* pbs_node_attributes.7B */\n\t{1, 0, 0, frwrr, xsgswxowx, &dflt_pbs_ug, exman7[ 3], NULL }, /* pbs_queue_attributes.7B */\n\t{1, 0, 0, frwrr, xsgswxowx, &dflt_pbs_ug, exman7[ 4], NULL }, /* pbs_resources.7B */\n\t{1, 0, 0, frwrr, xsgswxowx, &dflt_pbs_ug, exman7[ 5], NULL }, /* pbs_resv_attributes.7B */\n\t{1, 0, 0, frwrr, xsgswxowx, &dflt_pbs_ug, exman7[ 6], NULL }, /* pbs_server_attributes.7B */\n\t{1, 0, 0, frwrr, xsgswxowx, &dflt_pbs_ug, exman7[ 7], NULL }, /* pbs_sched_attributes.7B */\n\t{1, 0, 0, frwrr, xsgswxowx, &dflt_pbs_ug, exman7[ 8], NULL }, /* pbs_professional.7B */\n\n\t/*\n\t * infrastructure data associated with PBS_EXEC/man/man8\n\t */\n\t{1, 0, 0,  drwxrxrx,      tgwow, &dflt_pbs_ug, exman8[ 0], NULL }, /* man8 */\n\t{1, 2, 0,     frwrr,  xsgswxowx, &dflt_pbs_ug, exman8[ 1], NULL }, /* pbs_idled.8B */\n\t{1, 0, 0,     frwrr,  xsgswxowx, &dflt_pbs_ug, exman8[ 2], NULL }, /* pbs_mom.8B */\n\t{1, 0, 0,     frwrr,  xsgswxowx, &dflt_pbs_ug, exman8[ 3], NULL }, /* pbs_sched.8B */\n\t{1, 0, 0,     frwrr,  xsgswxowx, &dflt_pbs_ug, exman8[ 4], NULL }, /* pbs_server.8B */\n\t{1, 2, 0,     frwrr,  xsgswxowx, &dflt_pbs_ug, exman8[ 5], NULL }, /* pbsfs.8B */\n\t{1, 0, 0,     frwrr,  xsgswxowx, &dflt_pbs_ug, exman8[ 6], NULL }, /* pbsnodes.8B */\n\t{1, 0, 0,     frwrr,  
xsgswxowx, &dflt_pbs_ug, exman8[ 7], NULL }, /* qdisable.8B */\n\t{1, 0, 0,     frwrr,  xsgswxowx, &dflt_pbs_ug, exman8[ 8], NULL }, /* qenable.8B */\n\t{1, 0, 0,     frwrr,  xsgswxowx, &dflt_pbs_ug, exman8[ 9], NULL }, /* qmgr.8B */\n\t{1, 0, 0,     frwrr,  xsgswxowx, &dflt_pbs_ug, exman8[10], NULL }, /* qrun.8B */\n\t{1, 0, 0,     frwrr,  xsgswxowx, &dflt_pbs_ug, exman8[11], NULL }, /* qstart.8B */\n\t{1, 0, 0,     frwrr,  xsgswxowx, &dflt_pbs_ug, exman8[12], NULL }, /* qstop.8B */\n\t{1, 0, 0,     frwrr,  xsgswxowx, &dflt_pbs_ug, exman8[13], NULL }, /* qterm.8B */\n\t{1, 0, 0,     frwrr,  xsgswxowx, &dflt_pbs_ug, exman8[14], NULL }, /* pbs_lamboot.8B */\n\t{1, 0, 0,     frwrr,  xsgswxowx, &dflt_pbs_ug, exman8[15], NULL }, /* pbs_mpilam.8B */\n\t{1, 0, 0,     frwrr,  xsgswxowx, &dflt_pbs_ug, exman8[16], NULL }, /* pbs_mpirun.8B */\n\t{1, 0, 0,     frwrr,  xsgswxowx, &dflt_pbs_ug, exman8[17], NULL }, /* pbs_attach.8B */\n\t{1, 0, 0,     frwrr,  xsgswxowx, &dflt_pbs_ug, exman8[18], NULL }, /* pbs_mkdirs.8B */\n\t{1, 0, 0,     frwrr,  xsgswxowx, &dflt_pbs_ug, exman8[19], NULL }, /* pbs_hostn.8B */\n\t{1, 0, 0,     frwrr,  xsgswxowx, &dflt_pbs_ug, exman8[20], NULL }, /* pbs_probe.8B */\n\t{1, 0, 0,     frwrr,  xsgswxowx, &dflt_pbs_ug, exman8[21], NULL }, /* pbs-report.8B */\n\t{1, 0, 0,     frwrr,  xsgswxowx, &dflt_pbs_ug, exman8[22], NULL }, /* pbs_tclsh.8B */\n\t{1, 0, 0,     frwrr,  xsgswxowx, &dflt_pbs_ug, exman8[23], NULL }, /* pbs_tmrsh.8B */\n\t{1, 0, 0,     frwrr,  xsgswxowx, &dflt_pbs_ug, exman8[24], NULL }, /* pbs_wish.8B */\n\t{1, 0, 0,     frwrr,  xsgswxowx, &dflt_pbs_ug, exman8[25], NULL }, /* printjob.8B */\n\t{1, 0, 0,     frwrr,  xsgswxowx, &dflt_pbs_ug, exman8[26], NULL }, /* pbs.8B */\n\t{1, 0, 0,     frwrr,  xsgswxowx, &dflt_pbs_ug, exman8[27], NULL } }; /* pbs_interactive.8B */\n\nstatic MPUG\ttcltk_mpugs[] = {\n\t/*\n\t * infrastructure data associated with PBS_EXEC/tcltk\n\t */\n\t{1, 0, 0,   drwxrxrx,      tgwow, &dflt_pbs_ug, exec[6],  
NULL},\n\t{1, 0, 0,   drwxrxrx,      tgwow, &dflt_pbs_ug, extcltk[0], NULL }, /* tcltk/bin */\n\t{1, 0, 0,   drwxrxrx,      tgwow, &dflt_pbs_ug, extcltk[1], NULL }, /* tcltk/include */\n\t{1, 0, 0,   drwxrxrx,      tgwow, &dflt_pbs_ug, extcltk[2], NULL }, /* tcltk/lib */\n\t{1, 0, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, extcltk[3], NULL } }; /* tcltk/license.terms */\n\nstatic MPUG\tpython_mpugs[] = {\n\t/*\n\t * infrastructure data associated with PBS_EXEC/python\n\t */\n\t{1, 2, 0,   drwxrxrx,      tgwow, &dflt_pbs_ug, exec[7],  NULL},\n\t{1, 2, 0,   drwxrxrx,      tgwow, &dflt_pbs_ug, expython[0], NULL }, /* python/bin */\n\t{1, 2, 0,   drwxrxrx,      tgwow, &dflt_pbs_ug, expython[1], NULL }, /* python/include */\n\t{1, 2, 0,   drwxrxrx,      tgwow, &dflt_pbs_ug, expython[2], NULL }, /* python/lib */\n\t{1, 2, 0,   drwxrxrx,      tgwow, &dflt_pbs_ug, expython[3], NULL }, /* python/man */\n\t{1, 2, 0,      frwrr,  xsgswxowx, &dflt_pbs_ug, expython[4], NULL }, /* python/python.changes.txt */\n\t{1, 2, 0,      frwxrxrx,  sgswow, &dflt_pbs_ug, expython[5], NULL } }; /* python/bin/python */\n\nstatic MPUG\tpgsql_mpugs[] = {\n\t/*\n\t * infrastructure data associated with PBS_EXEC/pgsql\n\t */\n\t{1, 6, 0,   drwxrxrx,      tgwow, &dflt_pbs_ug, exec[8],  NULL},\n\t{1, 6, 0,   drwxrxrx,      tgwow, &dflt_pbs_ug, expgsql[0], NULL }, /* pgsql/bin */\n\t{1, 6, 0,   drwxrxrx,      tgwow, &dflt_pbs_ug, expgsql[1], NULL }, /* pgsql/include */\n\t{1, 6, 0,   drwxrxrx,      tgwow, &dflt_pbs_ug, expgsql[2], NULL }, /* pgsql/lib */\n\t{1, 6, 0,   drwxrxrx,      tgwow, &dflt_pbs_ug, expgsql[3], NULL } }; /* pgsql/share */\n\n\nenum pbshome_mpugs { PH_server, PH_mom, PH_sched, PH_last };\n\nint  home_sizes[PH_last];\nchar *home_mpug_set[PH_last] = {\"pbs_server\", \"pbs_mom\", \"pbs_sched\"};\n\nenum svr_mpugs { SVR_logs,  SVR_spool, SVR_priv,  SVR_acct,\n\tSVR_jobs,\n\tSVR_users, SVR_hooks, SVR_hookswdir, SVR_last };\n\nstatic MPUG\t svr_mpugs[] = {\n\t/*\n\t * infrastructure 
data associated with server daemon\n\t * home dir, chkfull, required and disallowed modes,\n\t * pointer to \"valid users, valid groups\", path, realpath\n\t */\n\t{2, 0, 0, drwxrxrx,  tgwow,  &dflt_pbs_ug, svrhome[ 0], NULL}, /* logs */\n\t{2, 0, 0, tdrwxrwxrwx,   0,  &dflt_pbs_ug, svrhome[ 1], NULL}, /* spool */\n\t{2, 0, 0, drwxrxo, tgworwx,  &dflt_pbs_ug, svrhome[ 2], NULL}, /* priv */\n\t{1, 1, 0, frwrr, xsgswxowx,  &dflt_pbs_ug, svrhome[ 3], NULL}, /* resourcedef */\n\t{0, 1, 0, frwgo, sgsrwxorwx, &dflt_pbs_ug, svrhome[ 4], NULL}, /* server.lock */\n\t{2, 0, 0, frwgo, sgsrwxorwx, &dflt_pbs_ug, svrhome[ 5], NULL}, /* tracking */\n\t{2, 0, 0, drwxrxrx,  tgwow,  &dflt_pbs_ug, svrhome[ 6], NULL}, /* accounting */\n\t{2, 0, 0, drwxrxo, tgworwx,  &dflt_pbs_ug, svrhome[ 7], NULL}, /* jobs */\n\t{2, 0, 0, drwxrxo, tgworwx,  &dflt_pbs_ug, svrhome[ 8], NULL}, /* users */\n\t{2, 0, 0, drwxrxo, tgworwx,  &dflt_pbs_ug, svrhome[ 9], NULL}, /* hooks */\n\t{2, 0, 0, drwxrxo, tgworwx,  &dflt_pbs_ug, svrhome[10], NULL}, /* hooks' workdir */\n\t{1, 0, 0, frwgo, sgsrwxorwx, &dflt_pbs_ug, svrhome[11], NULL}, /* prov_tracking */\n\t{1, 6, 0, frwgo, sgsrwxorwx, &dflt_pbs_ug, svrhome[12], NULL}, /* db_password */\n\t{1, 1, 0, frwgo, sgsrwxorwx, &dflt_pbs_ug, svrhome[13], NULL}, /* db_svrhost */\n\t{1, 1, 0, frwgo, sgsrwxorwx, &dflt_pbs_ug, svrhome[14], NULL}, /* db_svrhost.new */\n\t{1, 6, 0, frwgo, sgsrwxorwx, &dflt_pbs_ug, svrhome[15], NULL}, /* svrlive */\n\t{1, 6, 0, drwxgo, tgworwx, &dflt_pbs_data, svrhome[16], NULL}  /* datastore (must be last) */\n};\n\n\nenum mom_mpugs { MOM_aux, MOM_checkpoint, MOM_logs, MOM_priv,\n\tMOM_jobs, MOM_spool, MOM_undelivered, MOM_last };\n\nstatic MPUG\tmom_mpugs[] = {\n\t/*\n\t * infrastructure data associated with mom daemon\n\t * dir, chkfull, required and disallowed modes, pointer\n\t * to \"valid users, valid groups\", path, realpath\n\t */\n\t{2, 0, 0, drwxrxrx,    tgwow, &dflt_pbs_ug, momhome[0], NULL}, /* aux */\n\t{2, 0, 0, drwxgo,  
tgrwxorwx, &dflt_pbs_ug, momhome[1], NULL}, /* checkpoint */\n\t{2, 0, 0, drwxrxrx,    tgwow, &dflt_pbs_ug, momhome[2], NULL}, /* mom_logs */\n\t{2, 0, 0, drwxrxx,    tgworw, &dflt_pbs_ug, momhome[3], NULL}, /* mom_priv */\n\t{0, 1, 0, frwrr,   xsgswxowx, &dflt_pbs_ug, momhome[4], NULL}, /* mom.lock */\n\t{2, 0, 0, frwrr,   xsgswxowx, &dflt_pbs_ug, momhome[5], NULL}, /* config */\n\t{2, 0, 0, drwxrxx,    tgworw, &dflt_pbs_ug, momhome[6], NULL}, /* jobs */\n\t{2, 0, 0, tdrwxrwxrwx,     0, &dflt_pbs_ug, momhome[7], NULL}, /* spool */\n\t{2, 0, 0, tdrwxrwxrwx,     0, &dflt_pbs_ug, momhome[8], NULL}, /* undelivered */\n\t{0, 1, 0, drwxgo,     tgworw, &dflt_pbs_ug, momhome[9], NULL}, /* config.d */\n\t{0, 1, 0, drwxgo,     tgworw, &dflt_pbs_ug, momhome[10], NULL}, /* mom_priv/hooks */\n\t{0, 1, 0, drwxgo,     tgworw, &dflt_pbs_ug, momhome[11], NULL}};/* mom_priv/hooks/tmp */\n\n\n\nenum sched_mpugs { SCHED_logs, SCHED_priv, SCHED_last };\n\nstatic MPUG\tsched_mpugs[] = {\n\t/*\n\t * infrastructure data associated with sched daemon\n\t * dir, chkfull, required and disallowed modes, pointer\n\t * to \"valid users, valid groups\", path, realpath\n\t */\n\t{2, 0, 0, drwxrxrx,    tgwow, &dflt_pbs_service, schedhome[0], NULL}, /* sched_logs */\n\t{2, 0, 0, drwxrxo,   tgworwx, &dflt_pbs_service, schedhome[1], NULL}, /* sched_priv */\n\t{2, 0, 0, frwrr,   xsgswxowx, &dflt_pbs_service, schedhome[2], NULL}, /* dedicated_time */\n\t{2, 0, 0, frwrr,   xsgswxowx, &dflt_pbs_service, schedhome[3], NULL}, /* holidays */\n\t{2, 0, 0, frwrr,   xsgswxowx, &dflt_pbs_service, schedhome[4], NULL}, /* sched_config */\n\t{2, 0, 0, frwrr,   xsgswxowx, &dflt_pbs_service, schedhome[5], NULL}, /* resource_group */\n\t{0, 1, 0, frwrr,   xsgswxowx, &dflt_pbs_service, schedhome[6], NULL}, /* sched.lock */\n\t{2, 1, 0, frwrr,   xsgswxowx, &dflt_pbs_service, schedhome[7], NULL} }; /* sched_out */\n\n\n\n\nenum  msg_sources { SRC_pri, SRC_home, SRC_exec, SRC_last, SRC_none };\nenum  msg_categories { 
MSG_md, MSG_mf, MSG_po, MSG_unr, MSG_real, MSG_pri, MSG_oth, MSG_last, MSG_none };\n/*\n * MSG_md   - missing directories\n * MSG_mf   - missing files\n * MSG_po   - permission/owner errors\n * MSG_unr  - unrecognized directory entry\n * MSG_real - real path problem\n * MSG_pri  - primary data\n * MSG_oth  - other problems\n * MSG_last - last enumeration value\n */\n\n/*\n * The structure below is used in mechanizing storage of messages in\n * memory for subsequent output.  Things are done this way in order that\n * there is more flexibility/control over how information flowing to\n * stdout is organized.\n *\n * For each \"source\" of output messages, there will be an instance of\n * the PROBEMSGS data structure - i.e.,\n * there will be an instance for messages relating to the PRIMARY data,\n * one relating to messages associated with PBS HOME data, and one\n * relating to messages associated with PBS EXEC data.\n */\n\ntypedef struct\tprobemsgs {\n\t/*\n\t * each pointer in mtbls will point to an array of\n\t * pointers to messages.  The message pointers in each\n\t * array point to output messages from pbs_probe that\n\t * belong to the same \"category\" of message - e.g. messages\n\t * about a file being \"missing\". (see enum msg_categories)\n\t *\n\t * Structure member \"idx\" is an array of index values, one for\n\t * each array of message pointers - e.g. 
idx[MSG_mf] holds the\n\t * index of the last \"missing file\" message placed in the array.\n\t */\n\tchar **mtbls[MSG_last];\n\tint  idx[MSG_last];\n}  PROBEMSGS;\n\n\ntypedef struct\tinfrastruct {\n\n\tint\tmode;\t\t/* pbs_probe \"mode\" */\n\tchar*\tphost;\t\t/* host running pbs_probe */\n\n\t/* PRIMARY related MPUGS and their sources */\n\n\tPRIMARY\tpri;\n\n\t/* pointers to PBS HOME related MPUG arrays */\n\n\tMPUG\t*home[PH_last];\n\n\t/* pointers to PBS EXEC related MPUG arrays */\n\n\tMPUG\t*exec[EXEC_last];\n\n\tPROBEMSGS *msgs[SRC_last];\n\n\tstruct utsdata\tutsd;\n\tstruct statdata statd;\n} INFRA;\n\nstatic void\tam_i_authorized(void);\nstatic void\tinfrastruct_params(struct infrastruct *, int);\nstatic void\tadjust_for_os(struct infrastruct *pinf);\nstatic void\tprint_infrastruct(struct infrastruct *);\nstatic void\ttitle_string(enum code_title, int, INFRA *);\nstatic void\tprint_problems(INFRA *);\nstatic void\tmsg_table_set_defaults(INFRA *, int, int);\nstatic int\tput_msg_in_table(INFRA *, int, int, char*);\nstatic int\tget_primary_values(struct infrastruct *);\nstatic int\tget_realpath_values(struct infrastruct *);\nstatic int\tis_parent_rpathnull(char *, MPUG **, int, int *);\n#if 0\nstatic int\tis_suffix_ok(char *, char *);\nstatic int\tinspect_dir_entries(struct infrastruct *);\nstatic const\tchar *which_suffixset(MPUG *);\nstatic MPUG\t**which_knwn_mpugs(MPUG *, MPUG *[], int *, int asz);\nstatic MPUG\t**which_Knwn_mpugs(MPUG *, MPUG *, int);\nstatic void\tchk_entries(MPUG *, MPUG **);\nstatic void\tpbs_dirtype(int *, MPUG *);\nstatic int\tnon_db_resident(MPUG*, char*, int, char *entryname);\nstatic int\tis_a_numericname(char *);\n#endif\nstatic int\tcheck_paths(struct infrastruct *);\nstatic int\tcheck_owner_modes(char *, MPUG *, int);\nstatic int\tmbits_and_owner(struct stat *, MPUG *, int);\nstatic char\t*perm_string(mode_t);\n\nstatic const char *perm_owner_msg(struct stat *, MPUG *, ADJ *, int);\nstatic const char *owner_string(struct stat 
*, MPUG *, int);\n\nstatic int\tprocess_ret_code(enum func_names, int, struct infrastruct *);\nstatic int\tconf4primary(FILE *, struct infrastruct *);\nstatic int\tenv4primary(struct infrastruct *);\nstatic void     fix(int, int, int, MPUG *, ADJ *, struct stat *, int);\nstatic void     fix_perm_owner(MPUG *, struct stat *, ADJ *);\n\n/* Variables visible to all functions in this file */\n\nstatic int   flag_verbose;\nstatic\tint  mode = 0;\nstatic\tint  max_level = FIX_po;\nint\tnonlocaldata = 0;\n/**\n * @brief\n *      This is the main function of the pbs_probe process.\n *\n * @return\tint\n * @retval\t0\t: On Success\n * @retval\t!=0\t: failure\n *\n */\nint\nmain(int argc, char *argv[])\n{\n\tint rc;\n\tint i=0;\n\tint err=0;\n\tstruct infrastruct infra;\n\n\textern int optind;\n\n\t/*the real deal or output pbs_version and exit?*/\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\t/* If not authorized, don't proceed any further */\n\tam_i_authorized();\n\n\t/*\n\t * Check that this invocation of pbs_probe is properly formed\n\t * and compute the \"run mode\"\n\t */\n\n\twhile (err == 0 && (i = getopt(argc, argv, \"fcv\")) != EOF) {\n\n\t\tswitch (i) {\n\n\t\t\t\t/*other two \"recognized\" modes*/\n\n\t\t\tcase 'v':\n\t\t\t\tflag_verbose = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 'f':\n\t\t\t\tif (mode)\n\t\t\t\t\t/*\n\t\t\t\t\t * program allows only one mode at a time, so if\n\t\t\t\t\t * mode already has a value, that is an error\n\t\t\t\t\t */\n\t\t\t\t\terr = 1;\n\t\t\t\telse\n\t\t\t\t\tmode = i;\n\t\t\t\tbreak;\n\n\t\t\t\t/*\n\t\t\t\t * Currently, the only options are: \"none\", f, v\n\t\t\t\t * Also, for any non-recognized option getopt outputs\n\t\t\t\t * an error message to stderr.\n\t\t\t\t */\n\n\t\t\tcase 'c':\n\t\t\tdefault: err = 1;\n\t\t}\n\t}\n\n\n\tif (err == 0 && mode != 'c' && argv[optind] == NULL) {\n\n\t\t/*\n\t\t * Determine name of this host, pbs.conf pathname,\n\t\t * and values for certain primary parameters\n\t\t */\n\n\t\tinfrastruct_params(&infra, 
mode);\n\t\tmsg_table_set_defaults(&infra, SRC_pri, MSG_oth);\n\n\t\t/*\n\t\t * generate for each infrastructure file/directory\n\t\t * the canonicalized, absolute pathname\n\t\t */\n\n\t\tif ((rc = get_realpath_values(&infra)))\n\t\t\texit(rc);\n\n\t\t/*\n\t\t * check modes/ownership on those database paths which\n\t\t * successfully mapped to a realpath\n\t\t */\n\n\t\tcheck_paths(&infra);\n\n\t\t/*\n\t\t * For existing infrastructure directories, inspect their\n\t\t * entries for validity:\n\t\t *   - valid name  (name in database, other general criteria)\n\t\t *   - valid modes (modes from database or suitable default)\n\t\t *\n\t\t * example:\n\t\t * checking content of server's \"jobs\" directory by checking\n\t\t * entry's suffix and mode\n\t\t */\n\n#if 0\n\t\tinspect_dir_entries(&infra);\n#endif\n\t\tprint_problems(&infra);\n\n\t} else {\n\n\t\t/*\n\t\t * err != 0 || (for the time being) mode == 'c'\n\t\t * output typical kind of usage message\n\t\t */\n\t\ttitle_string(TC_use, mode, &infra);\n\t\texit(1);\n\t}\n\n\tif (flag_verbose)\n\t\tprint_infrastruct(&infra);\n\texit(0);\n}\n/**\n * @brief\n * \t\tCheck whether user is authorized to use pbs_probe.\n *\n * @par MT-safe:\tNo\n */\nstatic void\nam_i_authorized(void)\n{\n\tstatic char allow[] = \"root\";\n\tuid_t getuid(void);\n\tstruct passwd *ppwd = getpwuid(getuid());\n\n\tif (ppwd && strcmp(ppwd->pw_name, allow) == 0)\n\t\treturn;\n\n\t/*problem encountered*/\n\n\tif (ppwd)\n\t\tfprintf(stderr, \"User %s not authorized to use pbs_probe\\n\", ppwd->pw_name);\n\telse\n\t\tfprintf(stderr, \"Problem checking user authorization for utility\\n\");\n\texit(1);\n}\n/**\n * @brief\n * \t\tconfigure values for various infrastructure parameters.\n *\n * @param[out]\tpinf\t-\t pointer to struct infrastruct\n * @param[in]\tmode\t-\t pbs_probe \"mode\"\n */\nstatic void\ninfrastruct_params(struct infrastruct *pinf, int mode)\n{\n\n\tint\ti, rc;\t\t\t/* return code 
*/\n\tchar\thname[PBS_MAXHOSTNAME+1];\n\n\tmemset((void *)pinf, 0, (size_t)sizeof(struct infrastruct));\n\n\tfor (i=0; i<SRC_last; ++i) {\n\t\tpinf->msgs[i] = (PROBEMSGS *)malloc(sizeof(PROBEMSGS));\n\t\tif (pinf->msgs[i] == NULL) {\n\t\t\tfprintf(stderr, \"pbs_probe: Out of Memory\\n\");\n\t\t\texit(1);\n\t\t}\n\t\tmemset((void *)pinf->msgs[i], 0, sizeof(PROBEMSGS));\n\t}\n\n\tpinf->mode = mode;\n\tpinf->pri.pbs_mpug = &pbs_mpugs[0];\n\n\tif (gethostname(hname, (sizeof(hname) - 1)))\n\t\tstrcpy(hname, \"localhost\");\n\tpinf->phost = strdup(hname);\n\n\tif (uname(&pinf->utsd.ub) >= 0) {\n\t\tpinf->utsd.populated = 1;\n\t\tadjust_for_os(pinf);\n\t}\n\n\t/*\n\t * output a title and accompanying system information\n\t */\n\n\ttitle_string(TC_sys, mode, pinf);\n\n\t/*\n\t * determine values for the primary variables:\n\t * paths - pbs_home, pbs_exec,\n\t * pbs_start_server, pbs_start_mom, pbs_start_sched\n\t */\n\n\tif ((rc = get_primary_values(pinf)))\n\t\tif (process_ret_code(GET_PRIMARY_VALUES, rc, pinf)) {\n\t\t\tprint_problems(pinf);\n\t\t\texit(1);\n\t\t}\n\n\t/*\n\t * PBS HOME:  load pointers to the various arrays of MPUG data\n\t * relevant to value of PBS HOME stored in *pinf's pri element\n\t */\n\n\tif (pinf->pri.started.server)\n\t\tpinf->home[PH_server] = &svr_mpugs[0];\n\n\tif (pinf->pri.started.mom)\n\t\tpinf->home[PH_mom] =    &mom_mpugs[0];\n\n\tif (pinf->pri.started.sched)\n\t\tpinf->home[PH_sched] =  &sched_mpugs[0];\n\n\t/*\n\t * Record install type in the \"notbits\" variable\n\t */\n\n\tif (pinf->pri.started.server == 0 &&\n\t\tpinf->pri.started.sched == 0  &&  pinf->pri.started.mom) {\n\n\t\t/* did \"execution-only\" install */\n\n\t\tnotbits |= 0x4;\n\t}\n\telse if (!pinf->pri.started.server && !pinf->pri.started.sched &&\n\t\t!pinf->pri.started.mom) {\n\t\t/* did \"cmds-only\" install */\n\t\tnotbits |= 0x2;\n\t}\n\n\thome_sizes[PH_server] = sizeof(svr_mpugs)/sizeof(MPUG);\n\thome_sizes[PH_mom] =  sizeof(mom_mpugs)/sizeof(MPUG);\n\thome_sizes[PH_sched] = sizeof(sched_mpugs)/sizeof(MPUG);\n\n\tif (nonlocaldata)\t/* don't check datastore if it is nonlocal */\n\t\thome_sizes[PH_server] -= 1;\n\n\t/*\n\t * PBS EXEC:  load pointers to the various arrays of MPUG data\n\t * relevant to value of PBS EXEC stored in *pinf's pri element\n\t */\n\n\tpinf->exec[EXEC_exec] =  &exec_mpugs[0]; /* make irix compiler happy */\n\tpinf->exec[EXEC_exec] =  NULL;\n\tpinf->exec[EXEC_bin] =   &bin_mpugs[0];\n\tpinf->exec[EXEC_sbin] =  &sbin_mpugs[0];\n\tpinf->exec[EXEC_etc] =   &etc_mpugs[0];\n\tpinf->exec[EXEC_lib] =   &lib_mpugs[0];\n\tpinf->exec[EXEC_man] =   &man_mpugs[0];\n\tpinf->exec[EXEC_man1] =  NULL;\n\tpinf->exec[EXEC_man3] =  NULL;\n\tpinf->exec[EXEC_man7] =  NULL;\n\tpinf->exec[EXEC_man8] =  NULL;\n\n\tpinf->exec[EXEC_tcltk] =   &tcltk_mpugs[0];\n\tpinf->exec[EXEC_python] =   &python_mpugs[0];\n\tpinf->exec[EXEC_include] = &include_mpugs[0];\n\tpinf->exec[EXEC_pgsql] =   &pgsql_mpugs[0];\n\n\texec_sizes[EXEC_exec] = sizeof(exec_mpugs)/sizeof(MPUG);\n\texec_sizes[EXEC_bin] =  sizeof(bin_mpugs)/sizeof(MPUG);\n\texec_sizes[EXEC_sbin] = sizeof(sbin_mpugs)/sizeof(MPUG);\n\texec_sizes[EXEC_etc] =  sizeof(etc_mpugs)/sizeof(MPUG);\n\texec_sizes[EXEC_lib] =  sizeof(lib_mpugs)/sizeof(MPUG);\n\texec_sizes[EXEC_man] =  sizeof(man_mpugs)/sizeof(MPUG);\n\n\texec_sizes[EXEC_man1] = 0;\n\texec_sizes[EXEC_man3] = 0;\n\texec_sizes[EXEC_man7] = 0;\n\texec_sizes[EXEC_man8] = 0;\n\n\texec_sizes[EXEC_tcltk] =   sizeof(tcltk_mpugs)/sizeof(MPUG);\n\texec_sizes[EXEC_python] =   sizeof(python_mpugs)/sizeof(MPUG);\n\texec_sizes[EXEC_include] = sizeof(include_mpugs)/sizeof(MPUG);\n\texec_sizes[EXEC_pgsql] =   sizeof(pgsql_mpugs)/sizeof(MPUG);\n}\n/**\n * @brief\n * \t\tadjust the infrastruct parameter values based on the OS.\n *\n * @param[out]\tpinf\t-\t pointer to infrastruct\n */\nstatic void\nadjust_for_os(struct infrastruct *pinf)\n{\n\t/* offset to use with specific MPUG array */\n\n\tint\tofs_bin = 1;  /* use with bin_mpugs[] 
*/\n\tint\tofs_lib = 1;  /* use with lib_mpugs[] */\n\n\tif (strstr(pinf->utsd.ub.sysname, \"Linux\") != NULL) {\n\n\t\t/* Linux: pbs_lamboot, pbs_mpilam, pbs_mpirun, mpiexec, pbsrun, pbsrun_wrap, pbsrun_unwrap  */\n\n\t\tbin_mpugs[ofs_bin + 31].notReq &= ~(0x1);\n\t\tbin_mpugs[ofs_bin + 32].notReq &= ~(0x1);\n\t\tbin_mpugs[ofs_bin + 33].notReq &= ~(0x1);\n\t\tbin_mpugs[ofs_bin + 38].notReq &= ~(0x1);\n\t\tbin_mpugs[ofs_bin + 39].notReq &= ~(0x1);\n\t\tbin_mpugs[ofs_bin + 40].notReq &= ~(0x1);\n\t\tbin_mpugs[ofs_bin + 41].notReq &= ~(0x1);\n\n\t\t/* Linux + /etc/sgi-compute-node-release => SGI ICE\t*/\n\t\tif (access(\"/etc/sgi-compute-node-release\", R_OK) == 0) {\n\t\t\tlib_mpugs[ofs_lib + 23].notReq = 0;    /* sgiMPI.awk       */\n\t\t}\n\n\t\t/* Linux: pbsrun.<keyword>.init.in files must exist */\n\t\tlib_mpugs[ofs_lib + 24].notReq &= ~(0x1);\n\t\tlib_mpugs[ofs_lib + 25].notReq &= ~(0x1);\n\t\tlib_mpugs[ofs_lib + 26].notReq &= ~(0x1);\n\t\tlib_mpugs[ofs_lib + 27].notReq &= ~(0x1);\n\t\tlib_mpugs[ofs_lib + 28].notReq &= ~(0x1);\n\t\tlib_mpugs[ofs_lib + 29].notReq &= ~(0x1);\n\t\tbin_mpugs[ofs_bin + 30].notReq &= ~(0x1);\n\t}\n}\n\n/**\n * @brief\n * \t\tprint the values of infrastruct.\n *\n * @param[in]\tpinf\t-\t pointer to infrastruct\n */\nstatic void\nprint_infrastruct(struct infrastruct *pinf)\n{\n\tint\ti, j;\n\tint\ttflag;\n\tMPUG\t*pmpug;\n\n\ttflag = 0;\n\tfor (i=0; i<PBS_last; ++i) {\n\n\t\tif (!tflag) {\n\t\t\t++tflag;\n\t\t\ttitle_string(TC_tvrb, mode, pinf);\n\t\t}\n\n\t\tif (pinf->pri.pbs_mpug[i].path) {\n\n\t\t\tif (tflag == 1) {\n\t\t\t\t++tflag;\n\t\t\t\ttitle_string(TC_datpri, mode, pinf);\n\t\t\t}\n\t\t\tfprintf(stdout, \"%s: %s\\n\", mhp[i], pinf->pri.pbs_mpug[i].path);\n\t\t}\n\t}\n\tif (tflag) {\n\t\tfprintf(stdout, \"%s: %d\\n\", mhp[MHP_svr], pinf->pri.started.server);\n\t\tfprintf(stdout, \"%s: %d\\n\", mhp[MHP_mom], pinf->pri.started.mom);\n\t\tfprintf(stdout, \"%s: %d\\n\", mhp[MHP_sched], pinf->pri.started.sched);\n\t}\n\n\ttflag = 
0;\n\tfor (i=0; i<PH_last; ++i) {\n\n\t\tif ((pmpug = pinf->home[i]) == NULL || (pmpug->notReq & notbits))\n\t\t\tcontinue;\n\n\t\tif (!tflag++)\n\t\t\ttitle_string(TC_datho, mode, pinf);\n\n\t\tfprintf(stdout, \"\\nHierarchy %s:\\n\", home_mpug_set[i]);\n\n\t\tfor (j=0; j<home_sizes[i]; ++j, ++pmpug) {\n\t\t\tif (pmpug->path == NULL || (pmpug->notReq & notbits))\n\t\t\t\tcontinue;\n\t\t\tfprintf(stdout, \"%-70s(%s, %s)\\n\", pmpug->path, perm_string((mode_t)pmpug->req_modes), owner_string(NULL, pmpug, 0));\n\t\t}\n\t}\n\n\ttflag = 0;\n\tfor (i=0; i<EXEC_last; ++i) {\n\t\tif ((pmpug = pinf->exec[i]) == NULL || (pmpug->notReq & notbits))\n\t\t\tcontinue;\n\n\t\tif (!tflag++)\n\t\t\ttitle_string(TC_datex, mode, pinf);\n\n\t\tfprintf(stdout, \"\\nHierarchy %s:\\n\\n\", exec_mpug_set[i]);\n\n\t\tfor (j=0; j<exec_sizes[i]; ++j, ++pmpug) {\n\n\t\t\tif (pmpug->path == NULL || (pmpug->notReq & notbits))\n\t\t\t\tcontinue;\n\t\t\tfprintf(stdout, \"%-70s(%s, %s)\\n\", pmpug->path, perm_string((mode_t)pmpug->req_modes), owner_string(NULL, pmpug, 0));\n\t\t}\n\t}\n}\n/**\n * @brief\n * \t\tprint the title string based on the code_title value.\n *\n * @param[in]\ttc\t-\tcode_title value.\n * @param[in]\tmode\t-\tmode (not used here)\n * @param[in]\tpinf\t-\tpointer to infrastruct\n */\nstatic void\ntitle_string(enum code_title tc, int mode, INFRA *pinf)\n{\n\tswitch (tc) {\n\t\tcase TC_sys:\n\t\t\tfprintf(stdout, \"\\n\\n====== System Information =======\\n\\n\");\n\t\t\tfprintf(stdout,\n\t\t\t\t\"\\nsysname=%s\\nnodename=%s\\nrelease=%s\\nversion=%s\\nmachine=%s\\n\",\n\t\t\t\tpinf->utsd.ub.sysname, pinf->utsd.ub.nodename,\n\t\t\t\tpinf->utsd.ub.release, pinf->utsd.ub.version,\n\t\t\t\tpinf->utsd.ub.machine);\n\t\t\tbreak;\n\n\t\tcase TC_top:\n\t\t\tfprintf(stdout,\n\t\t\t\t\"\\n\\n====== PBS Infrastructure Report =======\\n\\n\");\n\t\t\tbreak;\n\n\t\tcase TC_pri:\n\t\t\tfprintf(stdout,\n\t\t\t\t\"\\n\\n====== Problems in pbs_probe's Primary Data 
=======\\n\\n\");\n\t\t\tbreak;\n\n\t\tcase TC_ho:\n\t\t\tfprintf(stdout,\n\t\t\t\t\"\\n\\n====== Problems in PBS HOME Hierarchy =======\\n\\n\");\n\t\t\tbreak;\n\n\t\tcase TC_ex:\n\t\t\tfprintf(stdout,\n\t\t\t\t\"\\n\\n====== Problems in PBS EXEC Hierarchy =======\\n\\n\");\n\t\t\tbreak;\n\n\t\tcase TC_ro:\n\t\tcase TC_fx:\n\t\tcase TC_cnt:\n\t\t\t/* Not explicitely handled, but keeps compiler quiet. */\n\t\t\tbreak;\n\n\t\t\t/*\n\t\t\t * verbose related strings\n\t\t\t */\n\n\t\tcase TC_tvrb:\n\t\t\tfprintf(stdout,\n\t\t\t\t\"\\n\\n=== Primary variables and specific hierarchies checked by pbs_probe ===\\n\\n\");\n\t\t\tbreak;\n\n\t\tcase TC_datpri:\n\t\t\tfprintf(stdout, \"\\nPbs_probe's Primary variables:\\n\\n\");\n\t\t\tbreak;\n\n\t\tcase TC_datho:\n\t\t\tfprintf(stdout, \"\\n\\n=== PBS HOME Infrastructure ===\\n\");\n\t\t\tbreak;\n\n\t\tcase TC_datex:\n\t\t\tfprintf(stdout, \"\\n\\n=== PBS EXEC Infrastructure ===\\n\");\n\t\t\tbreak;\n\n\t\tcase TC_noerr:\n\t\t\tfprintf(stdout,\n\t\t\t\t\"\\n\\n=== No PBS Infrastructure Problems Detected ===\\n\");\n\t\t\tbreak;\n\n\t\tcase TC_use:\n\t\t\tfprintf(stderr, \"Usage: pbs_probe [ -fv ]\\n\");\n\t\t\tfprintf(stderr, \"       pbs_probe --version\\n\");\n\t\t\tfprintf(stderr,\n\t\t\t\t\"\\tno option - run in 'report' mode\\n\");\n\t\t\tfprintf(stderr,\n\t\t\t\t\"\\t-f        - run in 'fix' mode\\n\");\n\t\t\tfprintf(stderr,\n\t\t\t\t\"\\t-v        - show hierarchy examined\\n\");\n\t\t\tfprintf(stderr,\n\t\t\t\t\"\\t--version - show version and exit\\n\");\n\t\t\tbreak;\n\t}\n}\n/**\n * @brief\n * \t\tprint the title string based on the code_title value.\n *\n * @param[in]\ttc\t-\tcode_title value.\n * @param[in]\tmode\t-\tmode (not used here)\n * @param[in]\tpinf\t-\tpointer to infrastruct\n */\nstatic void\nprint_problems(INFRA *pinf)\n{\n\tint i, j, k;\n\tint idx;\n\tint tflag, output_err = 0;\n\tchar **pa;\n\n\tfor (i=0; i<SRC_last; ++i) {\n\t\ttflag = 0;\n\t\tfor (j=0; j < MSG_last; ++j) {\n\n\t\t\tif 
(pinf->msgs[i]->mtbls[j] == NULL)\n\t\t\t\tcontinue;\n\n\t\t\tif (!tflag++)\n\t\t\t\tswitch (i) {\n\t\t\t\t\tcase SRC_pri:\n\t\t\t\t\t\ttitle_string(TC_pri, mode, pinf);\n\t\t\t\t\t\tbreak;\n\n\t\t\t\t\tcase SRC_home:\n\t\t\t\t\t\ttitle_string(TC_ho, mode, pinf);\n\t\t\t\t\t\tbreak;\n\n\t\t\t\t\tcase SRC_exec:\n\t\t\t\t\t\ttitle_string(TC_ex, mode, pinf);\n\t\t\t\t\t\tbreak;\n\n\t\t\t\t\tdefault:\n\t\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\tidx = pinf->msgs[i]->idx[j];\n\t\t\tpa = pinf->msgs[i]->mtbls[j];\n\t\t\tfor (k=0; k < idx; ++k) {\n\t\t\t\toutput_err = 1;\n\t\t\t\tfprintf(stdout, \"%s\\n\", pa[k]);\n\t\t\t}\n\t\t}\n\t}\n\tif (output_err == 0)\n\t\ttitle_string(TC_noerr, mode, pinf);\n}\n\n/**\n * @brief\n * \t\tCalling put_msg_in_table with a NULL message pointer is the\n * \t\tmechanism used to cause the loading of new values into the\n *\t \tfunction's three internal static variables: dflt_pinf, dflt_src,\n * \t\tdflt_cat - i.e. causing new default values to be established.\n *\n * @param[in]\tpinf\t-\tpointer to infrastruct\n * @param[in]\tsrc\t-\tdefault source.\n * @param[in]\tcategory\t-default category.\n */\nstatic void\nmsg_table_set_defaults(INFRA *pinf, int src, int category)\n{\n\n\tput_msg_in_table(pinf, src, category, NULL);\n}\n/**\n * @brief\n * \t\tput values in message table.\n *\n * @param[in]\tpinf\t-\tpointer to infrastruct\n * @param[in]\tsrc\t-\tsource. 
can be SRC_pri, SRC_home,\n * \t\t\t\t\t\tSRC_exec, SRC_last, SRC_none\n * @param[in]\tcategory\t-\tmessage category.\n * @param[in]\tmsg\t-\tmessage which needs to be put into table.\n *\n * @return\tint\n * @retval\t0\t: things are fine.\n * @retval\t1\t: something smells bad!\n *\n * @par MT-safe: No\n */\nstatic int\nput_msg_in_table(INFRA *pinf, int src, int category, char* msg)\n{\n\tstatic INFRA\t\t\t*dflt_pinf = NULL;\n\tstatic enum  msg_sources\tdflt_src = SRC_none;\n\tstatic enum  msg_categories\tdflt_cat = MSG_none;\n\tstatic char  *msg_headers[MSG_last] = { NULL};\n\n\tchar   **ptbl;\n\tchar   **mtb;\n\tint    idx;\n\n\t/*\n\t * One shot: Create pointers to table headings\n\t */\n\n\tif (msg_headers[0] == NULL) {\n\t\tmsg_headers[MSG_md]   = \"Missing Directory Problems:\";\n\t\tmsg_headers[MSG_mf]   = \"Missing File Problems:\";\n\t\tmsg_headers[MSG_po]   = \"Permission/Ownership Problems:\";\n\t\tmsg_headers[MSG_unr]  = \"Directory Entry Problems:\";\n\t\tmsg_headers[MSG_pri]  = \"Primary Data Problems:\";\n\t\tmsg_headers[MSG_oth]  = \"Other Problems:\";\n\t\tmsg_headers[MSG_real] = \"Real Path Problems:\";\n\t}\n\n\tif (msg == NULL) {\n\n\t\t/* Load new default values */\n\n\t\tif (pinf)\n\t\t\tdflt_pinf = pinf;\n\n\t\tif (src != SRC_none) {\n\t\t\tif (src != SRC_pri && src != SRC_home && src != SRC_exec) {\n\n\t\t\t\tfprintf(stderr, \"put_msg_in_table: Bad value for argument \\\"src\\\"\\n\");\n\t\t\t\texit(1);\n\t\t\t}\n\t\t\tdflt_src = (enum  msg_sources)src;\n\t\t}\n\n\t\tif (category != MSG_none) {\n\n\t\t\tif (category != MSG_mf   &&  category  != MSG_md  &&\n\t\t\t\tcategory != MSG_po   &&  category  != MSG_unr &&\n\t\t\t\tcategory != MSG_real &&  category  != MSG_pri &&\n\t\t\t\tcategory != MSG_oth) {\n\n\t\t\t\tfprintf(stderr, \"put_msg_in_table: Bad value for argument \\\"category\\\"\\n\");\n\t\t\t\texit(1);\n\t\t\t}\n\t\t\tdflt_cat = (enum  msg_categories)category;\n\t\t}\n\n\t\treturn (0);\n\t} /* end of msg == NULL */\n\n\tif (pinf == 
NULL) {\n\t\tif (dflt_pinf == NULL) {\n\t\t\tfprintf(stderr, \"put_msg_in_table: No default set for pinf\\n\");\n\t\t\texit(1);\n\t\t}\n\t\tpinf = dflt_pinf;\n\t}\n\n\tif (src == SRC_none) {\n\t\tif (dflt_src == SRC_none) {\n\t\t\tfprintf(stderr, \"put_msg_in_table: No default value for \\\"argument\\\" src\\n\");\n\t\t\texit(1);\n\t\t}\n\t\tsrc = dflt_src;\n\t}\n\n\tif (src != SRC_pri && src != SRC_home && src != SRC_exec) {\n\n\t\tfprintf(stderr, \"put_msg_in_table: Bad value for message source\\n\");\n\t\tfprintf(stderr, \"message %s:  not saved to table\\n\\n\", msg);\n\t\treturn (1);\n\t}\n\n\tif (category == MSG_none) {\n\t\tif (dflt_cat == MSG_none) {\n\t\t\tfprintf(stderr, \"put_msg_in_table: No default value for \\\"argument\\\" category\\n\");\n\t\t\texit(1);\n\t\t}\n\t\tcategory = dflt_cat;\n\t}\n\n\tif (category != MSG_mf  && category != MSG_md  && category != MSG_po &&\n\t\tcategory != MSG_unr && category != MSG_pri && category != MSG_oth &&\n\t\tcategory != MSG_real) {\n\n\t\tfprintf(stderr, \"put_msg_in_table: Bad value for message category\\n\");\n\t\tfprintf(stderr, \"message %s:  not saved to table\\n\\n\", msg);\n\t\treturn (1);\n\t}\n\n\tif (pinf->msgs[src] != NULL) {\n\t\tif (pinf->msgs[src]->mtbls[category] == NULL) {\n\n\t\t\t/*\n\t\t\t * No table exists, malloc memory for one and store\n\t\t\t * in the first location a pointer to a header message\n\t\t\t */\n\n\t\t\tmtb = (char **)malloc(DFLT_MSGTBL_SZ * sizeof(char *));\n\t\t\tif (mtb == NULL) {\n\t\t\t\tfprintf(stderr, \"pbs_probe: Out of Memory\\n\");\n\t\t\t\treturn (1);\n\t\t\t}\n\t\t\tpinf->msgs[src]->mtbls[category] = mtb;\n\t\t\tidx = pinf->msgs[src]->idx[category];\n\n\t\t\tpinf->msgs[src]->mtbls[category][idx] = msg_headers[category];\n\t\t\t++pinf->msgs[src]->idx[category];\n\t\t}\n\t} else {\n\n\t\tfprintf(stderr, \"put_msg_in_table: No initialization of pinf->msgs\\n\");\n\t\texit(1);\n\t}\n\n\tidx = pinf->msgs[src]->idx[category];\n\tif (idx >= DFLT_MSGTBL_SZ) 
{\n\t\tfprintf(stderr, \"put_msg_in_table: Table full\\n\");\n\t\tfprintf(stderr, \"message %s:  not saved to table\\n\\n\", msg);\n\t\treturn (1);\n\t}\n\n\t/* add pointer to the message into the table and bump table index */\n\n\tptbl = pinf->msgs[src]->mtbls[category];\n\tptbl[idx] = strdup(msg);\n\t++pinf->msgs[src]->idx[category];\n\treturn (0);\n}\n/**\n * @brief\n * \t\tget primary values from the configuration file.  Then\n * \t\toverride conf-derived settings with any values\n * \t\tset in the process's environment.\n *\n * @param[in]\tpinf\t-\tpointer to infrastruct\n *\n * @return\tint\n * @retval\t0\t: things are fine.\n * @retval\t!=0\t: something smells bad!\n */\nstatic int\nget_primary_values(struct infrastruct *pinf)\n{\n\tFILE\t*fp;\n\tint\trc;\n\tchar\t*gvalue;\t\t/* used with getenv() */\n\n\n\torigin_names[PBS_conf] = \"PBS CONF FILE\";\n\torigin_names[PBS_home] = \"PBS HOME\";\n\torigin_names[PBS_exec] = \"PBS EXEC\";\n\n\t/*\n\t * determine path for PBS infrastructure configuration file\n\t */\n\n\tpinf->pri.pbs_mpug = &pbs_mpugs[0];\n\n\tgvalue = getenv(\"PBS_CONF_FILE\");\n\tif (gvalue == NULL || *gvalue == '\\0') {\n\t\tpinf->pri.pbs_mpug[PBS_conf].path = default_pbsconf;\n\t\tpinf->pri.src_path.conf = SRC_DFLT;\n\t} else {\n\t\tpinf->pri.pbs_mpug[PBS_conf].path = strdup(gvalue);\n\t\tpinf->pri.src_path.conf = SRC_ENV;\n\t}\n\n\tif ((fp = fopen(pinf->pri.pbs_mpug[PBS_conf].path, \"r\")) == NULL) {\n\t\tif (stat(pinf->pri.pbs_mpug[PBS_conf].path, &pinf->statd.sb)) {\n\t\t\tif (errno == ENOENT) {\n\t\t\t\tif (mode != 'f')\n\t\t\t\t\treturn (PBS_CONF_NO_EXIST);\n\t\t\t\telse {\n#if 0\n\t\t\t\t/*\n\t\t\t\t * In \"fix\" mode and pbs.conf doesn't exist\n\t\t\t\t * try to find and run pbs_postinstall to create it\n\t\t\t\t */\n\t\t\t\tpath = pinf->pri.pbs_mpug[PBS_conf].path;\n\t\t\t\treturn (pbsdotconf(path));\n\t\t\t\treturn (PBS_CONF_NO_EXIST);\n#endif\n\t\t\t\t\treturn (PBS_CONF_CAN_NOT_OPEN);\n\t\t\t\t}\n\t\t\t}\n\t\t\telse\n\t\t\t\treturn 
(PBS_CONF_CAN_NOT_OPEN);\n\t\t}\n\t}\n\n\t/*\n\t * first source for the primary variables is the config file\n\t * Then, override conf-derived settings with any values set\n\t * in the process's environment\n\t */\n\n\tif ((rc = conf4primary(fp, pinf))) {\n\t}\n\n\tif ((rc = env4primary(pinf))) {\n\t}\n\treturn (rc);\n}\n/**\n * @brief\n * \t\tRead the PBS_CONF_FILE path and store in a buffer.\n *\n * @param[in]\tpath\t-\tPBS_CONF_FILE path\n *\n * @return\tint\n * @retval\tPBS_CONF_NO_EXIST\t: No PBS_CONF_FILE.\n */\n#if 0\nstatic int\npbsdotconf(char * path)\n{\n\n\tchar\t*gvalue_save = NULL;\n\n\tif (path == NULL)\n\t\treturn (PBS_CONF_NO_EXIST);\n\n\tpbuf = malloc(strlen(\"PBS_CONF_FILE=\") + strlen(path) + 1);\n\tif (pbuf == NULL)\n\t\treturn (PBS_CONF_NO_EXIST);\n\n\tif ((gvalue = getenv(\"PBS_CONF_FILE\")) != NULL)\n\t\tif ((len = strlen(gvalue)))\n\t\t\tgvalue_save = strdup(gvalue);\n\n\tstrcpy(pbuf, \"PBS_CONF_FILE=\");\n\tstrcat(pbuf, path);\n\tif ((rc = putenv(pbuf))) {\n\t\tfree(pbuf);\n\t\tif (gvalue_save)\n\t\t\tfree(gvalue_save);\n\t\treturn (PBS_CONF_NO_EXIST);\n\t}\n}\n#endif\n/**\n * @brief\n * \t\tFirst, try to resolve to a real path the MPUG path\n * \t\tdata belonging to *pinf's \"pri\" member.\n * @par\n * \t\tIf the PBS HOME pathname was resolvable to a realpath in\n * \t\tthe file system - i.e. 
we have a good PBS HOME primary value,\n * \t\tcompute MPUG data for PBS_HOME hierarchy.\n *\n * @param[in]\tpinf\t-\tpointer to infrastruct\n *\n * @return\tint\n * @retval\t0\t: good to go.\n */\nstatic\tint\nget_realpath_values(struct infrastruct *pinf)\n{\n\tchar *real = NULL;\n\tchar path[MAXPATHLEN + 1];\n\tchar *endhead;\n\tchar demarc[]=\"/\";\n\tint  i, j;\n\tMPUG *pmpug;\n\tint  good_prime[PBS_last];\n\tchar *msgbuf;\n\tconst char *pycptr;\n\t/*\n\t * First, try to resolve to a real path the MPUG path\n\t * data belonging to *pinf's \"pri\" member\n\t */\n\tfor (i = 0; i < PBS_last; ++i) {\n\t\tgood_prime[i] = 0;\n\t\tif (pinf->pri.pbs_mpug[i].path) {\n\t\t\tif ((real = realpath(pinf->pri.pbs_mpug[i].path, NULL)) != NULL) {\n\t\t\t\tpinf->pri.pbs_mpug[i].realpath = strdup(real);\n\t\t\t\tgood_prime[i] = 1;\n\t\t\t\tfree(real);\n\t\t\t} else if (pinf->pri.pbs_mpug[i].notReq == 0) {\n\t\t\t\t/*\n\t\t\t\t * system not able to convert path string to valid\n\t\t\t\t * file system path\n\t\t\t\t */\n\t\t\t\tpbs_asprintf(&msgbuf,\n\t\t\t\t\t\"Unable to convert the primary, %s, string to a real path\\n%s\\n\",\n\t\t\t\t\torigin_names[i], strerror(errno));\n\t\t\t\tput_msg_in_table(pinf, SRC_pri, MSG_pri, msgbuf);\n\t\t\t\tfree(msgbuf);\n\t\t\t\tpbs_asprintf(&msgbuf, \"%s: %s\\n\",\n\t\t\t\t\torigin_names[i], pinf->pri.pbs_mpug[i].path);\n\t\t\t\tput_msg_in_table(pinf, SRC_pri, MSG_pri, msgbuf);\n\t\t\t\tfree(msgbuf);\n\t\t\t\t/* good_prime[i] = 0; */\n\t\t\t}\n\t\t} else {\n\t\t\tif (pinf->pri.pbs_mpug[i].notReq == 0) {\n\t\t\t\tpbs_asprintf(&msgbuf, \"Missing primary path %s\",\n\t\t\t\t\torigin_names[i]);\n\t\t\t\tput_msg_in_table(pinf, SRC_pri, MSG_pri, msgbuf);\n\t\t\t\tfree(msgbuf);\n\t\t\t}\n\t\t}\n\t}\n\tfor (i = 0; i < PBS_last; ++i) {\n\t\tif (good_prime[i] == 0 && pinf->pri.pbs_mpug[i].notReq == 0) {\n\t\t\tprint_problems(pinf);\n\t\t\texit(0);\n\t\t}\n\t}\n\n\t/*\n\t * If the PBS HOME pathname was resolvable to a realpath in\n\t * the file system - i.e. 
have good PBS HOME primary value,\n\t * compute MPUG data for PBS_HOME hierarchy.\n\t */\n\n\tif (good_prime[PBS_home]) {\n\t\t/* only need to check database directory if it is local */\n\t\tif (nonlocaldata == 0) {\n\t\t\tint fd;\n\t\t\tstruct stat st;\n\t\t\tchar buf[MAXPATHLEN+1];\n\t\t\tstruct passwd\t*pw;\n\n\t\t\t/*\n\t\t\t * create path for db_user\n\t\t\t * This is done outside of the table driven files\n\t\t\t * because it is optional and no message should\n\t\t\t * be generated if it does not exist.\n\t\t\t */\n\t\t\tstrcpy(path, pinf->pri.pbs_mpug[PBS_home].path);\n\t\t\tstrcat(path, \"/server_priv/db_user\");\n\n\t\t\tif ((fd = open(path, O_RDONLY)) != -1) {\n\t\t\t\tif (fstat(fd, &st) != -1) {\n\t\t\t\t\tif ((st.st_mode & 0777) != 0600) {\n\t\t\t\t\t\tpbs_asprintf(&msgbuf,\n\t\t\t\t\t\t\t\"%s, permission must be 0600\\n\",\n\t\t\t\t\t\t\tpath);\n\t\t\t\t\t\tput_msg_in_table(NULL,\n\t\t\t\t\t\t\tSRC_home, MSG_real, msgbuf);\n\t\t\t\t\t\tfree(msgbuf);\n\t\t\t\t\t}\n\t\t\t\t\tif (st.st_uid != 0) {\n\t\t\t\t\t\tpbs_asprintf(&msgbuf,\n\t\t\t\t\t\t\t\"%s, owner must be root\\n\",\n\t\t\t\t\t\t\tpath);\n\t\t\t\t\t\tput_msg_in_table(NULL,\n\t\t\t\t\t\t\tSRC_home, MSG_real, msgbuf);\n\t\t\t\t\t\tfree(msgbuf);\n\t\t\t\t\t}\n\t\t\t\t\tif (st.st_size < sizeof(buf) &&\n\t\t\t\t\t\tread(fd, buf, st.st_size) ==\n\t\t\t\t\t\tst.st_size) {\n\t\t\t\t\t\tbuf[st.st_size] = 0;\n\t\t\t\t\t\tpw = getpwnam(buf);\n\t\t\t\t\t\tif (pw != NULL)\n\t\t\t\t\t\t\tpbs_dataname[0] = strdup(buf);\n\t\t\t\t\t\telse {\n\t\t\t\t\t\t\tpbs_asprintf(&msgbuf,\n\t\t\t\t\t\t\t\t\"db_user %s does not exist\\n\",\n\t\t\t\t\t\t\t\tbuf);\n\t\t\t\t\t\t\tput_msg_in_table(NULL,\n\t\t\t\t\t\t\t\tSRC_home, MSG_real, msgbuf);\n\t\t\t\t\t\t\tfree(msgbuf);\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t\tclose(fd);\n\t\t\t}\n\t\t\tpw = getpwnam(pbs_dataname[0]);\n\t\t\tif (pw != NULL)\n\t\t\t\tpbsdata[0] = pw->pw_uid;\n\t\t}\n\n\t\tstrcpy(path, pinf->pri.pbs_mpug[PBS_home].path);\n\t\tstrcat(path, 
demarc);\n\t\tendhead = &path[strlen(path)];\n\n\t\tfor (i = 0; i < PH_last; ++i) {\n\t\t\tif ((pmpug = pinf->home[i]) == NULL)\n\t\t\t\tcontinue;\n\n\t\t\tfor (j = 0; j < home_sizes[i]; ++j) {\n\t\t\t\tif (pmpug[j].path) {\n\n\t\t\t\t\t/* proceed only if parent realpath is not NULL */\n\n\t\t\t\t\tif (is_parent_rpathnull(pmpug[j].path, pinf->home, PH_last, home_sizes))\n\t\t\t\t\t\tcontinue;\n\n\t\t\t\t\tstrcpy(endhead, pmpug[j].path);\n\n\t\t\t\t\tif ((real = realpath(path, NULL)) != NULL) {\n\t\t\t\t\t\tpmpug[j].realpath = strdup(real);\n\t\t\t\t\t\tfree(real);\n\t\t\t\t\t} else if ((pmpug[j].notReq & notbits) == 0) {\n\n\t\t\t\t\t\tif (errno == ENOENT)\n\t\t\t\t\t\t\tpbs_asprintf(&msgbuf, \"%s, %s\\n\",\n\t\t\t\t\t\t\t\tpath, strerror(errno));\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\tpbs_asprintf(&msgbuf,\n\t\t\t\t\t\t\t\t\"%s,  errno = %d\\n\",\n\t\t\t\t\t\t\t\tpath, errno);\n\n\t\t\t\t\t\tput_msg_in_table(NULL, SRC_home, MSG_real, msgbuf);\n\t\t\t\t\t\tfree(msgbuf);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\n\t/*\n\t * If the PBS EXEC string was  resolvable to a realpath in\n\t * the file system - i.e. 
have good PBS HOME primary value,\n\t * compute MPUG data for PBS EXEC hierarchy\n\t */\n\n\tif (good_prime[PBS_exec]) {\n\t\tstrcpy(path, pinf->pri.pbs_mpug[PBS_exec].path);\n\t\tstrcat(path, demarc);\n\t\tendhead = &path[strlen(path)];\n\n\t\tfor (i = 0; i < EXEC_last; ++i) {\n\n\t\t\tif ((pmpug = pinf->exec[i]) == NULL)\n\t\t\t\tcontinue;\n\n\t\t\tfor (j = 0; j < exec_sizes[i]; ++j) {\n\t\t\t\tif (pmpug[j].path) {\n\n\t\t\t\t\t/* proceed only if parent realpath is not NULL */\n\n\t\t\t\t\tif ((is_parent_rpathnull(pmpug[j].path, pinf->exec, EXEC_last, exec_sizes)))\n\t\t\t\t\t\tcontinue;\n\n\t\t\t\t\tstrcpy(endhead, pmpug[j].path);\n\t\t\t\t\tif ((real = realpath(path, NULL)) != NULL) {\n\t\t\t\t\t\tpmpug[j].realpath = strdup(real);\n\t\t\t\t\t\tfree(real);\n\t\t\t\t\t} else if ((pycptr = strstr(path, \".pyc\")) != NULL){\n\t\t\t\t\t\tglob_t pycbuf;\n\t\t\t\t\t\tglob(path, 0, NULL, &pycbuf);\n\t\t\t\t\t\tif (pycbuf.gl_pathc == 1){\n\t\t\t\t\t\t\tpmpug[j].realpath = strdup(pycbuf.gl_pathv[0]);\n\t\t\t\t\t\t\tpmpug[j].path = strdup((pycbuf.gl_pathv[0] + strlen(pinf->pri.pbs_mpug[PBS_exec].path) + strlen(demarc)));\n\t\t\t\t\t\t}\n\t\t\t\t\t\tglobfree(&pycbuf);\n\t\t\t\t\t} else if ((pmpug[j].notReq & notbits) == 0) {\n                        \t\t\tif (errno == ENOENT)\n\t\t\t\t\t\t\tpbs_asprintf(&msgbuf, \"%s, %s\\n\",\n\t\t\t\t\t\t\t\tpath, strerror(errno));\n\t\t\t\t\t\telse\n\t\t\t\t\t\t\tpbs_asprintf(&msgbuf,\n\t\t\t\t\t\t\t\t\"%s,  errno = %d\\n\",\n\t\t\t\t\t\t\t\tpath, errno);\n\t\t\t\t\t\tput_msg_in_table(NULL, SRC_exec, MSG_real, msgbuf);\n\t\t\t\t\t\tfree(msgbuf);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tDoes path have a parent directory?\n * @par\n * \t\treplace demarc value\n *\n * @param[in]\tpath\t-\tlocation of file/directory\n * @param[in]\tmpa\t-\tpointer to MPUG structure which stores file/directory properties.\n * @param[in]\tnelts\t-\tnumber of elements in mpa\n * @param[in]\tnmems\t-\tarray 
storing number of members for each MPUG.\n *\n * @return\tint\n * @retval\t0\t: good to go.\n * @retval\t1\t: parent unresolved.\n */\nstatic int\nis_parent_rpathnull(char *path, MPUG **mpa, int nelts, int * nmems)\n{\n\tchar *dp;\n\tMPUG *mpug;\n\tint  i, j;\n\tint  rc = 0, done = 0;\n\n\t/*\n\t * Does path have a parent directory?\n\t */\n\n\tif (path == NULL)\n\t\treturn (0);\n\telse if (!((dp = strrchr(path, DEMARC)) && (dp != path)))\n\t\treturn (0);\n\n\t*dp = '\\0';\t/* temporarily overwrite demarc */\n\n\tfor (i=0; i<nelts; ++i) {\n\n\t\tif ((mpug = mpa[i]) == NULL)\n\t\t\tcontinue;\n\n\t\tfor (j=0; j<nmems[i]; ++j, ++mpug)\n\n\t\t\tif (strcmp(path, mpug->path) == 0) {\n\n\t\t\t\t/* is parent unresolved */\n\t\t\t\tif (mpug->realpath == NULL) {\n\t\t\t\t\trc = 1;\n\t\t\t\t}\n\t\t\t\tdone = 1;\n\t\t\t\tbreak;\n\t\t\t}\n\n\t\tif (done)\n\t\t\tbreak;\n\t}\n\n\t/* replace demarc value */\n\n\t*dp = DEMARC;\n\treturn rc;\n}\n/**\n * @brief\n * \t\tinspect the directory entries in infrastruct\n *\n * @param[in]\tpinf\t-\tpointer to infrastruct\n *\n * @return\tint\n * @retval\t0\t: good to go.\n */\n#if 0\nstatic\tint\ninspect_dir_entries(struct infrastruct *pinf)\n{\n\tint\ti, j;\n\tMPUG\t*pmpug;\n\tMPUG\t**knwn_set;\n\tint\ttsz;\n\n\n\tfor (i=0; i<PH_last; ++i) {\n\n\t\tif ((pmpug = pinf->home[i]) == NULL)\n\t\t\tcontinue;\n\t\ttsz = home_sizes[i];\n\n\t\tfor (j=0; j<home_sizes[i]; ++j) {\n\n\t\t\tif (pmpug[j].path && pmpug[j].realpath) {\n\n\t\t\t\t/*\n\t\t\t\t * If pmpug[j] relates to a directory, find all database\n\t\t\t\t * MPUG's for entries that belong to that directory.\n\t\t\t\t *\n\t\t\t\t * A pointer to an array of MPUG pointers is returned.\n\t\t\t\t * These are MPUGs gleaned from pbs_probe's database and\n\t\t\t\t * are thought of as the \"known set of MPUGs\".\n\t\t\t\t */\n\n\t\t\t\tknwn_set = which_Knwn_mpugs(&pmpug[j], pmpug, tsz);\n\t\t\t\tmsg_table_set_defaults(pinf, SRC_home, MSG_none);\n\n\t\t\t\t/*\n\t\t\t\t * If *pmpug[j] happens to be the 
MPUG data for a directory,\n\t\t\t\t * check each entry of that directory either against\n\t\t\t\t * MPUG data from the \"known set\" or against some other\n\t\t\t\t * criteria to ascertain its \"correctness\"\n\t\t\t\t */\n\n\t\t\t\tchk_entries(&pmpug[j], knwn_set);\n\t\t\t}\n\t\t}\n\t}\n\n\tfor (i=0; i<EXEC_last; ++i) {\n\n\t\tif ((pmpug = pinf->exec[i]) == NULL)\n\t\t\tcontinue;\n\t\ttsz = exec_sizes[i];\n\n\t\tfor (j=0; j<exec_sizes[i]; ++j) {\n\n\t\t\tif ((pmpug[j].path)) {\n\t\t\t\t/*\n\t\t\t\t * Refer to block comments in previous code block\n\t\t\t\t * for explanation of what this code block does\n\t\t\t\t */\n\n\t\t\t\tknwn_set = which_Knwn_mpugs(&pmpug[j], pmpug, tsz);\n\t\t\t\tmsg_table_set_defaults(pinf, SRC_exec, MSG_none);\n\t\t\t\tchk_entries(&pmpug[j], knwn_set);\n\t\t\t}\n\t\t}\n\t}\n\n\treturn 0;\n}\n#endif\t/* 0 */\n/**\n * @brief\n * \t\tdefines the suffix set and returns the set corresponding to the path of the directory.\n *\n * @param[in]\tpmpug\t-\tpointer to MPUG struct\n *\n * @return\tchar *\n * @retval\tNULL\t: No match found.\n * @retval\tthe suffix set\t: corresponding to the directory.\n */\n#if 0\nstatic const char *\nwhich_suffixset(MPUG *pmpug)\n{\n\n\tstatic char vld_job[] = \".JB,.SC,.CR,.XP,.OU,.ER,.CK,.TK,.CS,.BD\";\n\tstatic char vld_hooks[] = \".HK,.PY\";\n\tstatic char vld_resv[] = \".RB,.RBD\";\n\tstatic char vld_tcltk[] = \".h,8.3,8.3.a,.sh\";\n\tstatic char vld_python[] = \".py,.pyc,.so\";\n\tchar buf[MAXPATHLEN];\n\tchar py_version[4];\n\t/* Get version of the Python interpreter; copy at most 3 chars and terminate in-bounds */\n\tstrncpy(py_version, Py_GetVersion(), 3);\n\tpy_version[3] = '\\0';\n\n\tif (pmpug->path == NULL)\n\t\treturn NULL;\n\tif (strcmp(\"server_priv/jobs\", pmpug->path) == 0)\n\t\treturn (vld_job);\n\tif (strcmp(\"server_priv/users\", pmpug->path) == 0)\n\t\treturn (vld_job);\n\tif (strcmp(\"server_priv/hooks\", pmpug->path) == 0)\n\t\treturn (vld_hooks);\n\tif (strcmp(\"mom_priv/jobs\", pmpug->path) == 0)\n\t\treturn (vld_job);\n\tif 
(strcmp(\"undelivered\", pmpug->path) == 0)\n\t\treturn (vld_job);\n\tif (strcmp(\"spool\", pmpug->path) == 0)\n\t\treturn (vld_job);\n\tif (strcmp(\"tcltk/bin\", pmpug->path) == 0)\n\t\treturn (vld_tcltk);\n\tif (strcmp(\"tcltk/include\", pmpug->path) == 0)\n\t\treturn (vld_tcltk);\n\tif (strcmp(\"tcltk/lib\", pmpug->path) == 0)\n\t\treturn (vld_tcltk);\n\tif (strcmp(\"lib/python\", pmpug->path) == 0)\n\t\treturn (vld_python);\n\tif (strcmp(\"lib/python/altair\", pmpug->path) == 0)\n\t\treturn (vld_python);\n\tif (strcmp(\"lib/python/altair/pbs\", pmpug->path) == 0)\n\t\treturn (vld_python);\n\tif (strcmp(\"lib/python/altair/pbs/v1\", pmpug->path) == 0)\n\t\treturn (vld_python);\n\tsnprintf(buf, sizeof(buf), \"lib/python/python%s\", py_version);\n\tif (strcmp(buf, pmpug->path) == 0)\n\t\treturn (vld_python);\n\tsnprintf(buf, sizeof(buf), \"lib/python/python%s/logging\", py_version);\n\tif (strcmp(buf, pmpug->path) == 0)\n\t\treturn (vld_python);\n\tsnprintf(buf, sizeof(buf), \"lib/python/python%s/shared\", py_version);\n\tif (strcmp(buf, pmpug->path) == 0)\n\t\treturn (vld_python);\n\tsnprintf(buf, sizeof(buf), \"lib/python/python%s/xml\", py_version);\n\tif (strcmp(buf, pmpug->path) == 0)\n\t\treturn (vld_python);\n\tsnprintf(buf, sizeof(buf), \"lib/python/python%s/xml/dom\", py_version);\n\tif (strcmp(buf, pmpug->path) == 0)\n\t\treturn (vld_python);\n\tsnprintf(buf, sizeof(buf), \"lib/python/python%s/xml/etree\", py_version);\n\tif (strcmp(buf, pmpug->path) == 0)\n\t\treturn (vld_python);\n\tsnprintf(buf, sizeof(buf), \"lib/python/python%s/xml/parsers\", py_version);\n\tif (strcmp(buf, pmpug->path) == 0)\n\t\treturn (vld_python);\n\tsnprintf(buf, sizeof(buf), \"lib/python/python%s/xml/sax\", py_version);\n\tif (strcmp(buf, pmpug->path) == 0)\n\t\treturn (vld_python);\n\treturn NULL;\n}\n#endif /* 0 */\n\n#if 0\nstatic\tint\nis_suffix_ok(char *entryname, char *psuf)\n{\n\tchar\ttbuf[100];\n\tchar\t*tok;\n\tint\tlen;\n\tint\telen = strlen(entryname);\n\n\tif 
(psuf == NULL)\n\t\treturn 1;\n\n\tstrcpy(tbuf, psuf);\n\n\ttok = strtok(tbuf, \",\");\n\tfor (; tok; tok = strtok(NULL,  \",\")) {\n\n\t\tlen = strlen(tok);\n\t\tif (elen <= len)\n\t\t\tcontinue;\n\t\telse if (strcmp(&entryname[elen - len], tok))\n\t\t\tcontinue;\n\t\telse {\n\t\t\t/* matched */\n\t\t\treturn (1);\n\t\t}\n\t}\n\treturn 0;\n}\n#endif\t/* 0 */\n\n\n#if 0\nstatic \tMPUG  **\nwhich_knwn_mpugs(MPUG *pmpug, MPUG *sets[], int *ssizes, int asz)\n{\n\t/* Assumption being made that argument setsz < 100 */\n\n\tstatic\tMPUG *knwn_mpugs[100];\n\tMPUG\t     *pm;\n\tchar\t     *dp;\n\tchar\t     tmp_path[MAXPATHLEN];\n\tint i, j, idx=0;\n\n\tknwn_mpugs[0] = NULL;\n\n\tif ((pmpug == NULL) ||  !(pmpug->req_modes & S_IFDIR))\n\t\treturn knwn_mpugs;\n\n\tfor (i=0, pm=sets[0]; i<asz; ++i, pm = sets[i]) {\n\t\tfor (j=0; j<ssizes[i]; ++j, ++pm) {\n\n\t\t\t/*\n\t\t\t * copy to temporary avoids a problem when pmpug->path\n\t\t\t * and pm->path never point to the same memory location\n\t\t\t */\n\n\t\t\tstrcpy(tmp_path, pm->path);\n\n\t\t\tif ((dp = strrchr(tmp_path, (int)'/')) == NULL)\n\t\t\t\tcontinue;\n\n\t\t\t*dp = '\\0';\n\t\t\tif (strcmp(pmpug->path, tmp_path) == 0)\n\t\t\t\tknwn_mpugs[idx++] = pm;\n\t\t\t*dp = DEMARC;\n\t\t}\n\t}\n\n\t/*\n\t * MPUG pointer array _must_ end with a NULL pointer\n\t */\n\n\tknwn_mpugs[idx] = NULL;\n\treturn\t(knwn_mpugs);\n}\n#endif\t/* 0 */\n\n#if 0\nstatic \tMPUG  **\nwhich_Knwn_mpugs(MPUG *pmpug, MPUG *base, int tsize)\n{\n\t/* Assumption being made that argument setsz < 100 */\n\n\tstatic\tMPUG *knwn_mpugs[100];\n\tMPUG\t     *pm;\n\tchar\t     *dp;\n\tchar\t     tmp_path[MAXPATHLEN];\n\tint\t     j, idx=0;\n\n\tknwn_mpugs[0] = NULL;\n\n\tif ((pmpug == NULL) ||  !(pmpug->req_modes & S_IFDIR))\n\t\treturn knwn_mpugs;\n\n\tfor (j=0, pm=base; j<tsize; ++j, ++pm) {\n\n\t\t/*\n\t\t * copy to temporary avoids a problem when pmpug->path\n\t\t * and pm->path never point to the same memory location\n\t\t */\n\n\t\tstrcpy(tmp_path, 
pm->path);\n\n\t\tif ((dp = strrchr(tmp_path, (int)'/')) == NULL)\n\t\t\tcontinue;\n\n\t\t*dp = '\\0';\n\t\tif (strcmp(pmpug->path, tmp_path) == 0)\n\t\t\tknwn_mpugs[idx++] = pm;\n\n\t\t*dp = DEMARC;\n\t}\n\n\t/*\n\t * MPUG pointer array _must_ end with a NULL pointer\n\t */\n\n\tknwn_mpugs[idx] = NULL;\n\treturn\t(knwn_mpugs);\n}\n#endif\t/* 0 */\n\n#if 0\n/**\n * @brief\n * If pmpug happens to the MPUG data for a directory,\n * check each entry of that directory either against\n * MPUG data from the \"known set\" or against some other\n * criteria to ascertain its \"correctness\"\n *\n * @param[in] pmpug - pointer to struct modes_path_user_group\n * @param[in] knwn_set - known set of MPUGS\n */\nstatic void\nchk_entries(MPUG *pmpug, MPUG **knwn_set)\n{\n\tDIR\t\t*dir;\n\tstruct dirent\t*pdirent;\n\tchar\t\t*dirpath = pmpug->realpath;\n\tint\t\ti;\n\tint\t\tdirtype;\n\tchar\t\t*name;\n\tchar\t\t*psuf;\n\tchar\t\tmsg[1024];\n\n\tif ((dirpath == NULL) || !(pmpug->req_modes & S_IFDIR))\n\t\treturn;\n\n\tif ((dir = opendir(dirpath)) == NULL) {\n\n\t\tsnprintf(msg, sizeof(msg), \"Can't open directory %s for inspection\\n\", dirpath);\n\t\tput_msg_in_table(NULL, SRC_none, MSG_oth, msg);\n\t\treturn;\n\t}\n\n\t/*\n\t * Certain directories will have a list of associated suffixes.\n\t * Get the location of any such list.\n\t */\n\n\tpsuf = (char *)which_suffixset(pmpug);\n\n\t/*\n\t * Determine PBS directory type\n\t */\n\n\tpbs_dirtype(&dirtype, pmpug);\n\n\twhile (errno = 0, (pdirent = readdir(dir)) != NULL) {\n\n\t\t/*\n\t\t * Ignore non-relevant directory entries\n\t\t */\n\n\t\tif (strcmp(\".\", pdirent->d_name) == 0)\n\t\t\tcontinue;\n\t\telse if (strcmp(\"..\", pdirent->d_name) == 0)\n\t\t\tcontinue;\n\n\t\t/*\n\t\t * Begin by checking if the name of this entry matches any one\n\t\t * of the names stored in the \"known names (MPUGS)\" subset\n\t\t * supplied as input to this function\n\t\t */\n\n\t\tfor (i=0; knwn_set[i]; ++i) {\n\t\t\tif ((name = 
strrchr(knwn_set[i]->path, DEMARC))) {\n\t\t\t\t++name;\n\t\t\t\tif (strcmp(name, pdirent->d_name) == 0)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\n\t\tif (knwn_set[i] != NULL) {\n\n\t\t\t/* matched a known entry, call readdir again */\n\t\t\tcontinue;\n\t\t}\n\n\t\t/*\n\t\t * See if there is any, \"name is outside the database,\"\n\t\t * kind of processing that would apply, and do it\n\t\t */\n\n\t\tif (non_db_resident(pmpug, psuf, dirtype, pdirent->d_name))\n\t\t\tcontinue;\n\n\t\t/*\n\t\t * entry is not a known name in pbs_probe's database and none\n\t\t * of the other mechanisms for evaluating, in so way, the\n\t\t * fitness of this entry were found to apply.\n\t\t */\n\n\t\tsnprintf(msg, sizeof(msg), \"%s, unrecognized entry appears in %s\\n\", pdirent->d_name, pmpug->path);\n\t\tput_msg_in_table(NULL, SRC_none, MSG_unr, msg);\n\t}\n\tif (errno != 0 && errno != ENOENT) {\n\n\t\tsnprintf(msg, sizeof(msg), \"Can't read directory %s for inspection\\n\", dirpath);\n\t\tput_msg_in_table(NULL, SRC_none, MSG_oth, msg);\n\t\t(void)closedir(dir);\n\t\treturn;\n\t}\n\t(void)closedir(dir);\n}\n#endif\t/* 0 */\n\n#if 0\nstatic\tvoid\npbs_dirtype(int *dirtype, MPUG *pmpug)\n{\n\tif (strstr(pmpug->path, \"logs\"))\n\t\t*dirtype = PBS_logsdir;\n\telse if (strstr(pmpug->path, \"accounting\"))\n\t\t*dirtype = PBS_acctdir;\n\telse if (strstr(pmpug->path, \"spool\"))\n\t\t*dirtype = PBS_spooldir;\n\telse if (strstr(pmpug->path, \"jobs\"))\n\t\t*dirtype = PBS_jobsdir;\n\telse if (strstr(pmpug->path, \"users\"))\n\t\t*dirtype = PBS_usersdir;\n\telse if (strstr(pmpug->path, \"hooks\"))\n\t\t*dirtype = PBS_hooksdir;\n\telse if (strstr(pmpug->path, \"hooks/tmp\"))\n\t\t*dirtype = PBS_hookswdir;\n\telse\n\t\t*dirtype = PBS_niltype;\n}\n#endif\t/* 0 */\n\n#if 0\nstatic\tint\nnon_db_resident(MPUG *pmpug, char* psuf, int dirtype, char *entryname)\n{\n\tchar\tmsg[1024];\n\n\tswitch (dirtype) {\n\t\tcase PBS_acctdir:\n\t\tcase PBS_logsdir:\n\t\t\tif (is_a_numericname(entryname) == 0) 
{\n\n\t\t\t\tsnprintf(msg, sizeof(msg), \"%s, unrecognized entry appears in %s\\n\", entryname, pmpug->path);\n\t\t\t\tput_msg_in_table(NULL, SRC_none, MSG_unr, msg);\n\t\t\t}\n\t}\n\n\tif (psuf && is_suffix_ok(entryname, psuf)) {\n\t\treturn (1);\n\t}\n\n\t/* needs further examination to decide */\n\n\treturn (0);\n}\n#endif\t/* 0 */\n\n#if 0\nstatic\tint\nis_a_numericname(char *entryname)\n{\n\n\tchar\t*endptr;\n\n\t(void)strtol(entryname, &endptr, 0);\n\tif (*endptr == '\\0')\n\t\treturn (1);\n\telse\n\t\treturn (0);\n}\n#endif\t/* 0 */\n/**\n * @brief\n * \t\tcheck owner and modes on PBS directories.\n *\n * @param[in]\tpinf\t-\tpointer to infrastruct\n *\n * @return\tint\n * @retval\t0\t: success\n */\nstatic\tint\ncheck_paths(struct infrastruct *pinf)\n{\n\tint\ti, j;\n\tMPUG\t*pmpug;\n\tchar\t*realpath;\n\n\n\tfor (i=0; i<PBS_last; ++i) {\n\t\tmsg_table_set_defaults(pinf, SRC_pri, MSG_po);\n\t\tif ((realpath = pinf->pri.pbs_mpug[i].realpath))\n\t\t\tcheck_owner_modes(realpath, &pinf->pri.pbs_mpug[i], 0);\n\t}\n\n\tfor (i=0; i<PH_last; ++i) {\n\t\tmsg_table_set_defaults(pinf, SRC_home, MSG_po);\n\n\t\tif ((pmpug = pinf->home[i]) == NULL)\n\t\t\tcontinue;\n\n\t\tfor (j=0; j<home_sizes[i]; ++j) {\n\t\t\tif ((realpath = pmpug[j].realpath))\n\t\t\t\tcheck_owner_modes(realpath, pmpug + j, 0);\n\t\t}\n\t}\n\n\tfor (i=0; i<EXEC_last; ++i) {\n\n\t\tmsg_table_set_defaults(pinf, SRC_exec, MSG_po);\n\n\t\tif ((pmpug = pinf->exec[i]) == NULL)\n\t\t\tcontinue;\n\n\t\tfor (j=0; j<exec_sizes[i]; ++j) {\n\t\t\tif ((realpath = pmpug[j].realpath) &&\n\t\t\t\t!(pmpug[j].notReq & notbits)) {\n\t\t\t\tcheck_owner_modes(realpath, pmpug + j, 0);\n\t\t\t}\n\t\t}\n\t}\n\treturn 0;\n}\n/**\n * @brief\n * \t\tif full path check is required, see if the path contains\n * \t\ta sub-path and if it does, call check_owner_modes on that\n * \t\tsub-path\n * @par\n * \t\tif lstat on the path is successful, check perms and owners\n * \t\tagainst values stored in MPUG 
structure\n *\n * @param[in]\tpath\t-\tpath which needs to be checked.\n * @param[in]\tp_mpug\t-\tpointer to MPUG structure\n * @param[in]\tsys\t-\tindicates ownerID < 10, group id < 10\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t!=0\t: something got wrong!\n *\n * @par MT-safe: No\n */\nstatic\tint\ncheck_owner_modes(char *path, MPUG *p_mpug, int sys)\n{\n\tint\t    rc = 0;\t\t/* encountered no mode problem */\n\tchar\t    *dp;\n\tchar\t    msg[256];\n\tconst char  *perm_msg;\n\n\tstruct stat sbuf;\n\tstatic int  cnt_recursive = 0;\n\n\t/*\n\t * if full path check is required, see if the path contains\n\t * a sub-path and if it does, call check_owner_modes on that\n\t * sub-path\n\t */\n\n\tif (p_mpug->chkfull &&\n\t\t(dp = strrchr(path, DEMARC)) && (dp != path)) {\n\t\t/* temporarily overwrite demarc */\n\n\t\t*dp = '\\0';\n\n\t\t++cnt_recursive;\n\t\trc = check_owner_modes(path, p_mpug, 0);\n\n\t\t/* replace demarc value and stat this component of real path */\n\n\t\t*dp = DEMARC;\n\n\t}\n\n\t/*\n\t * if lstat on the path is successful, check perms and owners\n\t * against values stored in MPUG structure\n\t */\n\n\tif (rc == LSTAT_PATH_ERR) {\n\t\tif (cnt_recursive > 0)\n\t\t\t--cnt_recursive;\n\t\treturn (rc);\n\t}\n\n\t/*\n\t * Clarification for reader may be in order:\n\t *\n\t * For a fullpath check, getting to this point in the\n\t * code means the prior subpath was ok, else would be\n\t * taking the above return.\n\t *\n\t * For a non-fullpath check, we are immediately here\n\t */\n\n\tif (! 
lstat(path, &sbuf)) {\n\n\t\t/* successful on the lstat */\n\t\trc = mbits_and_owner(&sbuf, p_mpug, sys);\n\t\tif (rc) {\n\t\t\tsnprintf(msg, sizeof(msg), \"\\n%s\", path);\n\t\t\tput_msg_in_table(NULL, SRC_none, MSG_po, msg);\n\t\t\tperm_msg = perm_owner_msg(&sbuf, p_mpug, NULL, sys);\n\t\t\tstrcpy(msg, perm_msg);\n\t\t\tput_msg_in_table(NULL, SRC_none, MSG_po, msg);\n\t\t}\n\t\t/*\n\t\t * if running in \"fix\" mode, do fixing up to and including\n\t\t * the maximum authorized level (max_level).\n\t\t */\n\t\tfix(mode, rc, max_level, p_mpug, NULL, &sbuf, FIX_po);\n\n\t} else {\n\n\t\t/* lstat complained about something */\n\n\t\tif (errno != ENOENT || ! p_mpug->notReq) {\n\t\t\t/* this PBS file is required */\n\n\t\t\tsnprintf(msg, sizeof(msg), \"lstat error: %s, \\\"%s\\\"\\n\", path, strerror(errno));\n\t\t\tput_msg_in_table(NULL, SRC_none, MSG_real, msg);\n\t\t\trc = LSTAT_PATH_ERR;\n\t\t}\n\t}\n\n\tif (cnt_recursive > 0)\n\t\t--cnt_recursive;\n\treturn (rc);\n}\n\n/**\n * @brief\n * \t\tTest mode bits and ownerships\n *\n * @param[in]\tps\t-\t stat  \"buffer\"\n * @param[in]\tp_mpug\t-\tpointer to MPUG structure\n * @param[in]\tsys\t-\tindicates ownerID < 10, group id < 10\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\tPATH_ERR\t: something got wrong!\n */\n\nstatic\tint\nmbits_and_owner(struct stat *ps, MPUG *p_mpug, int sys)\n{\n\tint\t    i;\n\tmode_t\t    modes;\n\n\t/*\n\t * first adjust bits from the MPUG by turning off mode bits that should\n\t * be disallowed at this level in the hierarchy and turn on those bits\n\t * that are required, before testing the modes produced by lstat call\n\t */\n\n\tif (sys == 0) {\n\n\t\tmodes = (mode_t)p_mpug->req_modes;\n\t\tif ((ps->st_mode & modes) != modes)\n\t\t\treturn (PATH_ERR);\n\n\t\tmodes = (mode_t)p_mpug->dis_modes;\n\t\tif (ps->st_mode & modes)\n\t\t\treturn (PATH_ERR);\n\t}\n\n\t/*\n\t * if the MPUG has associated \"user and group\"\n\t * data, test if this file's user and group is 
consistent\n\t * with what is in the database\n\t */\n\n\tif (p_mpug->vld_ug) {\n\t\tfor (i=0; p_mpug->vld_ug->uids[i] != -1; ++i) {\n\t\t\tif (p_mpug->vld_ug->uids[i] == ps->st_uid)\n\t\t\t\tbreak;\n\t\t}\n\t\tif (p_mpug->vld_ug->uids[i] == -1)\n\t\t\treturn (PATH_ERR);\n\t}\n\n\tif (p_mpug->vld_ug) {\n\n\t\tfor (i=0; p_mpug->vld_ug->gids[i] != -1; ++i) {\n\t\t\tif (p_mpug->vld_ug->gids[i] == ps->st_gid)\n\t\t\t\tbreak;\n\t\t}\n\t\tif (p_mpug->vld_ug->gids[i] == -1)\n\t\t\treturn (PATH_ERR);\n\t}\n\n\treturn (0);\n}\n/**\n * @brief\n * \t\tprepare a permission owner message in the following format\n * \t\tperm_is, owner_is, perm_need, owner_need\n *\n * @param[in]\tps\t-\t stat  \"buffer\"\n * @param[in]\tp_mpug\t-\tpointer to MPUG structure\n * @param[in]\tp_adj\t-\tpointer to ADJ structure\n * @param[in]\tsys\t-\tindicates ownerID < 10, group id < 10\n *\n * @return\tpermission owner message\n *\n * @par MT-safe: No\n */\nstatic const\tchar *\nperm_owner_msg(struct stat *ps, MPUG *p_mpug,\n\tADJ  *p_adj, int sys)\n{\n\tmode_t\t    modes;\n\tchar\t    *perm_is;\n\tchar\t    *perm_need;\n\tchar\t    *owner_is;\n\tchar\t    *owner_need;\n\tstatic char buf[1024];\n\n\t/*\n\t * first adjust bits from the MPUG by turning off mode bits that should\n\t * be disallowed at this level in the hierarchy and turn on those bits\n\t * that are required, before testing the modes produced by lstat call\n\t */\n\n\towner_is = strdup(owner_string(ps, NULL, sys));\n\towner_need = strdup(owner_string(NULL, p_mpug, sys));\n\n\tif (sys) {\n\n\t\tsnprintf(buf, sizeof(buf), \"(%s) needs to be (%s)\", owner_is, owner_need);\n\t\tfree(owner_is);\n\t\tfree(owner_need);\n\t\treturn (buf);\n\t}\n\n\t/* continue with this part if part of PBS hierarchy proper */\n\n\tmodes = (mode_t)p_mpug->req_modes;\n\tif (p_adj)\n\t\tmodes = (modes & ~p_adj->dis) | p_adj->req;\n\n\tperm_is = strdup(perm_string(ps->st_mode));\n\tperm_need = strdup(perm_string(modes));\n\n\tsnprintf(buf, sizeof(buf), \"(%s , %s) 
needs to be (%s , %s)\",\n\t\tperm_is, owner_is, perm_need, owner_need);\n\n\tfree(perm_is);\n\tfree(perm_need);\n\tfree(owner_is);\n\tfree(owner_need);\n\treturn (buf);\n}\n\n/**\n * @brief\n * \t\tperm_string - create permission string from mode.\n *\n * @param[in]\tmodes\t-\trequired permissions (modes)\n *\n * @return\tpermission string\n *\n * @par MT-safe: No\n */\nstatic\tchar *\nperm_string(mode_t modes)\n{\n\tstatic char buf[12];\n\n\tstrcpy(buf, \"----------\");\n\n\tif (S_IFDIR & modes)\n\t\tbuf[0] = 'd';\n\n\tif (S_IRUSR & modes)\n\t\tbuf[1] = 'r';\n\tif (S_IWUSR & modes)\n\t\tbuf[2] = 'w';\n\n\tif (S_IXUSR & modes)\n\t\tbuf[3] = 'x';\n\n\tif (S_ISUID & modes)\n\t\tbuf[3] = 's';\n\n\tif (S_IRGRP & modes)\n\t\tbuf[4] = 'r';\n\tif (S_IWGRP & modes)\n\t\tbuf[5] = 'w';\n\tif (S_IXGRP & modes)\n\t\tbuf[6] = 'x';\n\n\tif (S_ISGID & modes)\n\t\tbuf[6] = 's';\n\n\tif (S_IROTH & modes)\n\t\tbuf[7] = 'r';\n\tif (S_IWOTH & modes)\n\t\tbuf[8] = 'w';\n\tif (S_IXOTH & modes)\n\t\tbuf[9] = 'x';\n\n\tif (S_ISVTX & modes)\n\t\tbuf[9] = 't';\n\n\treturn buf;\n}\n/**\n * @brief\n * \t\tformulate a string contains owner user info and group info.\n *\n * @param[in]\tps\t-\tdata returned by the stat() function\n * @param[in]\tp_mpug\t-\tmodes_path_user_group structure\n * @param[in]\tsys\t-\tindicates ownerID < 10, group id < 10\n *\n * @return\tstring contains owner user info and group info.\n *\n * @par MT-safe: No\n */\nstatic\tconst char *\nowner_string(struct stat *ps, MPUG *p_mpug, int sys)\n{\n\tstruct passwd\t*ppw;\n\tstruct group\t*pgrp;\n\n\tstatic char buf[1024];\n\n\tbuf[0] = '\\0';\n\tif (ps) {\n\t\tppw = getpwuid(ps->st_uid);\n\t\tpgrp = getgrgid(ps->st_gid);\n\n\t\tif (ppw != NULL && pgrp != NULL &&\n\t\t\tppw->pw_name != NULL && pgrp->gr_name != NULL)\n\n\t\t\tsnprintf(buf, sizeof(buf), \"%s , %s\", ppw->pw_name, pgrp->gr_name);\n\t\telse\n\t\t\tsnprintf(buf, sizeof(buf), \"%d , %d\", ps->st_uid, ps->st_gid);\n\n\t} else if (p_mpug) {\n\t\tif (p_mpug->vld_ug) 
{\n\t\t\tif (sys)\n\t\t\t\tsnprintf(buf, sizeof(buf), \"ownerID < 10, group id < 10\");\n\t\t\telse\n\t\t\t\tsnprintf(buf, sizeof(buf), \"%s, group id < 10\", p_mpug->vld_ug->unames[0]);\n\t\t} else\n\t\t\tsnprintf(buf, sizeof(buf), \" \");\n\t}\n\treturn buf;\n}\n\n/**\n * @brief\n * \t\tprocess return code and arrive at a return code that will determine the fate of pbs_probe\n *\n * @param[in]\tfrom\t-\tNumeric codes, options - GET_PRIMARY_VALUES, END_FUNC_NAMES.\n * @param[in]\trc\t-\treturn code to be processed.\n * @param[in]\tpinf\t-\tpointer to infrastruct\n *\n * @return\tint\n * @retval\t0\t: primary data is fine, continue with pbs_probe.\n * @retval\t1\t: primary data is bogus, pbs_probe must exit\n */\nstatic int\nprocess_ret_code(enum func_names from, int rc, struct infrastruct *pinf)\n{\n\tint  ret = 0;\n\tchar msg[1024];\n\n\tif (from == GET_PRIMARY_VALUES) {\n\n\t\tif (rc != 0) {\n\t\t\tif (pinf->pri.pbs_mpug[PBS_conf].path) {\n\n\t\t\t\tif (rc == PBS_CONF_NO_EXIST)\n\t\t\t\t\tsnprintf(msg, sizeof(msg), \"File %s does not exist\\n\",\n\t\t\t\t\t\tpinf->pri.pbs_mpug[PBS_conf].path);\n\t\t\t\telse if (rc == PBS_CONF_CAN_NOT_OPEN)\n\t\t\t\t\tsnprintf(msg, sizeof(msg), \"Could not open PBS configuration file %s\\n\",\n\t\t\t\t\t\tpinf->pri.pbs_mpug[PBS_conf].path);\n\t\t\t\telse\n\t\t\t\t\tsnprintf(msg, sizeof(msg),\n\t\t\t\t\t\t\"Internal pbs_probe problem, unknown return code\\n\");\n\n\t\t\t\tput_msg_in_table(pinf, SRC_pri, MSG_pri, msg);\n\t\t\t}\n\t\t\tret = 1;\t/* primary data is bogus, pbs_probe must exit */\n\t\t}\n\t}\n\n\treturn (ret);\n}\n\n/**\n * @brief\n * \t\tread the configuration file and obtain primary data for the infrastruct structure\n *\n * @param[in]\tfp\t-\tfile pointer to config file\n * @param[out]\tpinf\t-\tpointer to infrastruct structure\n *\n * @return\tint\n * @retval\t0\t: success\n */\nstatic\tint\nconf4primary(FILE *fp, struct infrastruct *pinf)\n{\n\tchar buf[1024];\n\tchar *conf_name;              /* the name of the conf parameter */\n\tchar *conf_value;             /* the 
value from the conf file or env*/\n\tunsigned int uvalue;          /* used with sscanf() */\n\n\t/* should not be calling with a NULL value for fp */\n\n\tassert(fp != NULL);\n\n\twhile (fgets(buf, 1024, fp) != NULL) {\n\t\tif (buf[0] != '#') {\n\t\t\t/* replace '\\n' with '\\0'; safe even if the line has no trailing newline */\n\t\t\tbuf[strcspn(buf, \"\\n\")] = '\\0';\n\t\t\tconf_name = strtok(buf, \"=\");\n\t\t\tconf_value = strtok(NULL, \"     \");\n\n\t\t\t/* ignore the unexpected (inserted blank line?) */\n\n\t\t\tif ((conf_name == NULL) || (conf_value == NULL))\n\t\t\t\tcontinue;\n\n\t\t\tif (!strcmp(conf_name, \"PBS_START_SERVER\")) {\n\t\t\t\tif (sscanf(conf_value, \"%u\", &uvalue) == 1)\n\t\t\t\t\tpinf->pri.started.server = ((uvalue > 0) ? 1 : 0);\n\t\t\t\tpinf->pri.src_started.server = SRC_CONF;\n\t\t\t}\n\t\t\telse if (!strcmp(conf_name, \"PBS_START_MOM\")) {\n\t\t\t\tif (sscanf(conf_value, \"%u\", &uvalue) == 1)\n\t\t\t\t\tpinf->pri.started.mom = ((uvalue > 0) ? 1 : 0);\n\t\t\t\tpinf->pri.src_started.mom = SRC_CONF;\n\t\t\t}\n\t\t\telse if (!strcmp(conf_name, \"PBS_START_SCHED\")) {\n\t\t\t\tif (sscanf(conf_value, \"%u\", &uvalue) == 1)\n\t\t\t\t\tpinf->pri.started.sched = ((uvalue > 0) ? 
1 : 0);\n\t\t\t\tpinf->pri.src_started.sched = SRC_CONF;\n\t\t\t}\n\t\t\telse if (!strcmp(conf_name, \"PBS_HOME\")) {\n\t\t\t\tif (pinf->pri.pbs_mpug[PBS_home].path)\n\t\t\t\t\tfree(pinf->pri.pbs_mpug[PBS_home].path);\n\t\t\t\tpinf->pri.pbs_mpug[PBS_home].path = strdup(conf_value);\n\t\t\t\tpinf->pri.src_path.home = SRC_CONF;\n\t\t\t}\n\t\t\telse if (!strcmp(conf_name, \"PBS_CONF_DATA_SERVICE_HOST\")) {\n\t\t\t\tnonlocaldata = 1;\n\t\t\t}\n\t\t\telse if (!strcmp(conf_name, \"PBS_EXEC\")) {\n\t\t\t\tif (pinf->pri.pbs_mpug[PBS_exec].path)\n\t\t\t\t\tfree(pinf->pri.pbs_mpug[PBS_exec].path);\n\t\t\t\tpinf->pri.pbs_mpug[PBS_exec].path = strdup(conf_value);\n\t\t\t\tpinf->pri.src_path.exec = SRC_CONF;\n\t\t\t}\n\t\t\telse if (!strcmp(conf_name, \"PBS_DAEMON_SERVICE_USER\")) {\n\t\t\t\tstruct passwd *pw;\n\t\t\t\tpw = getpwnam(conf_value);\n\t\t\t\tif (pw != NULL) {\n\t\t\t\t\tpbs_servicename[0] = strdup(conf_value);\n\t\t\t\t\tpbsservice[0] = pw->pw_uid;\n\t\t\t\t}\n\t\t\t\telse {\n\t\t\t\t\tchar *msgbuf;\n\t\t\t\t\tpbs_asprintf(&msgbuf, \"Service user %s does not exist\\n\", conf_value);\n\t\t\t\t\tput_msg_in_table(NULL, SRC_CONF, MSG_real, msgbuf);\n\t\t\t\t\tfree(msgbuf);\n\t\t\t\t}\n\t\t\t}\n\n\t\t} else {\n\t\t\t/* ignore comment lines (# in column 1) */\n\t\t\tcontinue;\n\t\t}\n\t}\n\treturn (0);\n}\n/**\n * @brief\n * \t\tread the environment values and store in infrastruct.\n *\n * @param[out]\tpinf\t-\tpointer to environment structure.\n *\n * @return\tint\n * @retval\t0\t: success\n */\nstatic\tint\nenv4primary(struct infrastruct *pinf)\n{\n\tchar *gvalue;                 /* used with getenv() */\n\tunsigned int uvalue;          /* used with sscanf() */\n\n\tif ((gvalue = getenv(\"PBS_START_SERVER\"))) {\n\t\tif (sscanf(gvalue, \"%u\", &uvalue) == 1) {\n\t\t\tpinf->pri.started.server = ((uvalue > 0) ? 
1 : 0);\n\t\t\tpinf->pri.src_started.server = SRC_ENV;\n\t\t}\n\t}\n\tif ((gvalue = getenv(\"PBS_START_MOM\"))) {\n\t\tif (sscanf(gvalue, \"%u\", &uvalue) == 1) {\n\t\t\tpinf->pri.started.mom = ((uvalue > 0) ? 1 : 0);\n\t\t\tpinf->pri.src_started.mom = SRC_ENV;\n\t\t}\n\t}\n\tif ((gvalue = getenv(\"PBS_START_SCHED\"))) {\n\t\tif (sscanf(gvalue, \"%u\", &uvalue) == 1) {\n\t\t\tpinf->pri.started.sched = ((uvalue > 0) ? 1 : 0);\n\t\t\tpinf->pri.src_started.sched = SRC_ENV;\n\t\t}\n\t}\n\tif ((gvalue = getenv(\"PBS_HOME\"))) {\n\n\t\tif (pinf->pri.pbs_mpug[PBS_home].path)\n\t\t\tfree(pinf->pri.pbs_mpug[PBS_home].path);\n\n\t\tpinf->pri.pbs_mpug[PBS_home].path = strdup(gvalue);\n\t\tpinf->pri.src_path.home = SRC_ENV;\n\t}\n\tif ((gvalue = getenv(\"PBS_EXEC\"))) {\n\n\t\tif (pinf->pri.pbs_mpug[PBS_exec].path)\n\t\t\tfree(pinf->pri.pbs_mpug[PBS_exec].path);\n\n\t\tpinf->pri.pbs_mpug[PBS_exec].path = strdup(gvalue);\n\t\tpinf->pri.src_path.exec = SRC_ENV;\n\t}\n\tif ((gvalue = getenv(\"PBS_CONF_DATA_SERVICE_HOST\")) != NULL) {\n\t\tnonlocaldata = 1;\n\t}\n\tif ((gvalue = getenv(\"PBS_DAEMON_SERVICE_USER\")) != NULL) {\n\t\tstruct passwd *pw;\n\t\tpw = getpwnam(gvalue);\n\t\tif (pw != NULL) {\n\t\t\tpbs_servicename[0] = strdup(gvalue);\n\t\t\tpbsservice[0] = pw->pw_uid;\n\t\t}\n\t\telse {\n\t\t\tchar *msgbuf;\n\t\t\tpbs_asprintf(&msgbuf, \"Service user %s does not exist\\n\", gvalue);\n\t\t\tput_msg_in_table(NULL, SRC_CONF, MSG_real, msgbuf);\n\t\t\tfree(msgbuf);\n\t\t}\n\t}\n\n\n\treturn (0);\n}\n\n/**\n * @brief\n * \t\tfix - check is a fix is necessary and, if it is, attempt to do\n * \t\tthe fix, being careful not to attempt a fix whose code is higher\n * \t\tthan the maximum allowed (max_level.)\n * @par\n * \t\tIf a fix is attempted, add an appropriate message(s) to the end of\n * \t\tthe relevant message category.\n *\n * @param[in]\tprobemode\t-\tprobe mode\n * @param[in]\tneed\t-\trequired or not ?\n * @param[in]\tmax_level\t-\tmaximum allowed\n * 
@param[in]\tpmpug\t-\tpointer to modes_path_user_group structure\n * @param[in]\tpadj\t-\tpointer to a structure which holds data for modeadjustments.\n * @param[in]\tps\t-\tdata returned by the stat() function\n * @param[in]\tfc\t-\tfix codes.\n *\n * @return\tnone\n */\nstatic void\nfix(int probemode, int need, int max_level, MPUG *pmpug, ADJ *padj, struct stat *ps, int fc)\n{\n\tif ((need == 0) || (probemode != (int)'f') || (fc > max_level))\n\t\treturn;\n\n\tswitch (fc) {\n\t\tcase FIX_po:\n\t\t\tif (padj == NULL)\n\t\t\t\tfix_perm_owner(pmpug, ps, padj);\n\t\t\tbreak;\n\t}\n}\n\n/**\n * @brief\n * \t\tfix_perm_owner - attempt a fix of permission or ownership problems on\n * \t\tthe input file and add a message(s) to whatever default message category\n * \t\tis currently set.\n *\n * @param[in]\tp_mpug\t-\tpointer to modes_path_user_group structure\n * @param[in]\tps\t-\tdata returned by the stat() function\n * @param[in]\tp_adj\t-\tpointer to a structure which holds data for modeadjustments.\n *\n * @return\tnone\n */\nstatic void\nfix_perm_owner(MPUG *p_mpug, struct stat *ps, ADJ *p_adj)\n{\n\tchar\t\tmsg[512];\n\tmode_t\t\tmodes;\n\tmode_t\t\tdis_modes;\n\tunsigned\tfixes = 0;\n\tint\t\ti, rc;\n\n\tif (ps == NULL)\n\t\treturn;\n\n\tmodes = (mode_t)p_mpug->req_modes;\n\tif (p_adj)\n\t\tmodes = (modes & ~p_adj->dis) | p_adj->req;\n\n\tif (p_adj)\n\t\tdis_modes = (~modes & (mode_t)p_mpug->dis_modes) | p_adj->dis;\n\telse\n\t\tdis_modes = (mode_t)p_mpug->dis_modes;\n\n\tif (dis_modes & modes) {\n\t\tsnprintf(msg, sizeof(msg), \"%s: database problem, 'allowed/disallowed' modes overlap\", p_mpug->path);\n\t\tput_msg_in_table(NULL, SRC_none, MSG_po, msg);\n\t\treturn;\n\t}\n\n\tif (ps->st_mode != modes) {\n\t\tif ((rc = chmod(p_mpug->realpath, modes))) {\n\t\t\tsnprintf(msg, sizeof(msg), \"%s: permission correction failed, %s\", p_mpug->path, strerror(errno));\n\t\t\tput_msg_in_table(NULL, SRC_none, MSG_po, msg);\n\t\t} else {\n\t\t\tfixes |= 0x1;\t/* permission 
*/\n\t\t}\n\t}\n\n\t/*\n\t * Fix any ownership problems (user/group) if they exist\n\t */\n\n\tif (p_mpug->vld_ug) {\n\t\tfor (i=0; p_mpug->vld_ug->uids[i] != -1; ++i) {\n\t\t\tif (p_mpug->vld_ug->uids[i] == ps->st_uid)\n\t\t\t\tbreak;\n\t\t}\n\t\tif (p_mpug->vld_ug->uids[i] == -1) {\n\t\t\trc = chown(p_mpug->realpath, p_mpug->vld_ug->uids[0], -1);\n\t\t\tif (rc) {\n\t\t\t\tsnprintf(msg, sizeof(msg), \"%s: ownership correction failed, %s\", p_mpug->path, strerror(errno));\n\t\t\t\tput_msg_in_table(NULL, SRC_none, MSG_po, msg);\n\t\t\t} else {\n\t\t\t\tfixes |= 0x2;\n\t\t\t}\n\t\t}\n\t}\n\n\tif (p_mpug->vld_ug) {\n\t\tfor (i=0; p_mpug->vld_ug->gids[i] != -1; ++i) {\n\t\t\tif (p_mpug->vld_ug->gids[i] == ps->st_gid)\n\t\t\t\tbreak;\n\t\t}\n\t\tif (p_mpug->vld_ug->gids[i] == -1) {\n\t\t\t/*\n\t\t\t * Remark: we are using the gid value \"0\" because\n\t\t\t * on most of the systems checked the group value was\n\t\t\t * this value.  On a few it was \"1\".\n\t\t\t */\n\t\t\trc = chown(p_mpug->realpath, -1, p_mpug->vld_ug->gids[0]);\n\t\t\tif (rc) {\n\t\t\t\tsnprintf(msg, sizeof(msg), \"%s: group correction failed, %s\", p_mpug->path, strerror(errno));\n\t\t\t\tput_msg_in_table(NULL, SRC_none, MSG_po, msg);\n\t\t\t} else {\n\t\t\t\tfixes |= 0x4;\n\t\t\t}\n\t\t}\n\t}\n\n\tswitch (fixes) {\n\t\tcase 1:\n\t\t\tsnprintf(msg, sizeof(msg), \"%s: corrected permissions\", p_mpug->path);\n\t\t\tput_msg_in_table(NULL, SRC_none, MSG_po, msg);\n\t\t\tbreak;\n\n\t\tcase 2:\n\t\tcase 4:\n\t\tcase 6:\n\t\t\tsnprintf(msg, sizeof(msg), \"%s: corrected ownership(s)\", p_mpug->path);\n\t\t\tput_msg_in_table(NULL, SRC_none, MSG_po, msg);\n\t\t\tbreak;\n\n\t\tcase 3:\n\t\tcase 5:\n\t\tcase 7:\n\t\t\tsnprintf(msg, sizeof(msg), \"%s: corrected permissions and ownership(s)\", p_mpug->path);\n\t\t\tput_msg_in_table(NULL, SRC_none, MSG_po, msg);\n\t\t\tbreak;\n\t}\n}\n// clang-format on\n"
  },
  {
    "path": "src/tools/pbs_python.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\n *\t\tpbs_python.c\n *\n * @brief\n *\t\tThis file contains functions related to Pbs  with Python.\n *\n * Functions included are:\n * \tcheck_que_enable()\n * \tdecode_rcost()\n * \tencode_rcost()\n * \tencode_svrstate()\n * \tdecode_depend()\n * \tencode_depend()\n * \tnode_queue_action()\n * \tnode_np_action()\n * \tnode_pcpu_action()\n * \tpbs_python_populate_svrattrl_from_file()\n * \tpbs_python_populate_server_svrattrl_from_file()\n * \tfprint_svrattrl_list()\n * \tfprint_str_array()\n * \targv_list_to_str()\n * \tmain()\n */\n#include <pbs_config.h>\n\n#include <pbs_python_private.h>\n#include <Python.h>\n\n#include <pbs_ifl.h>\n#include <pbs_internal.h>\n#include <pbs_version.h>\n\n#ifdef NAS /* localmod 005 */\n#include <ctype.h>\n#endif /* localmod 005 */\n#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <string.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <pbs_python.h>\n#include <pbs_error.h>\n#include <pbs_entlim.h>\n#include <work_task.h>\n#include <resource.h>\n#include <list_link.h>\n#include <attribute.h>\n#include \"libpbs.h\"\n#include \"batch_request.h\"\n#include \"hook.h\"\n#include <signal.h>\n#include \"job.h\"\n#include \"reservation.h\"\n#include \"server.h\"\n#include \"queue.h\"\n#include <pbs_nodes.h>\n#include \"libutil.h\"\n#include \"cmds.h\"\n#include 
\"svrfunc.h\"\n#include \"pbs_sched.h\"\n#include \"portability.h\"\n\n#define PBS_V1_COMMON_MODULE_DEFINE_STUB_FUNCS 1\n#include \"pbs_v1_module_common.i\"\n\n#define MAXBUF 4096\n#define PYHOME \"PYTHONHOME\"\n#define PYHOME_EQUAL \"PYTHONHOME=\"\n\n#define HOOK_MODE \"--hook\"\n\nextern char *vnode_state_to_str(int state_bit);\nextern char *vnode_sharing_to_str(enum vnode_sharing vns);\nextern char *vnode_ntype_to_str(int type);\n\nextern int str_to_vnode_state(char *state_str);\nextern int str_to_vnode_ntype(char *ntype_str);\nextern enum vnode_sharing str_to_vnode_sharing(char *sharing_str);\n\n/**\n * @brief\n *\t\tTakes data from input file or stdin of the form:\n *\t\t<attribute_name>=<attribute value>\n *\t\t<attribute_name>[<resource_name>]=<resource value>\n *\t\tand populate the given various lists with the value obtained.\n *\n * @param[in]\tinput_file\t-\tif NULL, will get data from stdin.\n * @param[in]\tdefault_svrattrl\t-\tthe \"catch all\" list\n * @param[in]\tevent_svrattrl\t-\tgets <attribute_name>=EVENT_OBJECT data\n * @param[in]\tevent_job_svrattrl\t-\tgets <attribute_name>=EVENT_JOB_OBJECT data\n * @param[in]\tevent_job_o_svrattrl\t-\tgets <attribute_name>=EVENT_JOB_O_OBJECT data\n * @param[in]\tevent_resv_svrattrl\t-\tgets <attribute_name>=EVENT_RESV_OBJECT data\n * @param[in]\tevent_vnode_svrattrl\t-\tgets <attribute_name>=EVENT_VNODE_OBJECT data\n * \t\t\t             \t\tCaution: svrattrl values stored in sorted order\n * @param[in] event_vnode_fail_svrattrl -\tgets <attribute_name>=EVENT_VNODELIST_FAIL_OBJECT data\n * \t\t\t             \t\tCaution: svrattrl values stored in sorted order\n * @param[in] job_failed_mom_list_svrattrl -\tgets <attribute_name>=JOB_FAILED_MOM_LIST_OBJECT data\n * @param[in] job_succeeded_mom_list_svrattrl -\tgets <attribute_name>=JOB_SUCCEEDED_MOM_LIST_OBJECT data\n * @param[in]\tevent_src_queue_svrattrl\t-\tgets <attribute_name>=EVENT_SRC_QUEUE_OBJECT data\n * @param[in]\tevent_aoe_svrattrl\t-\tgets 
<attribute_name=EVENT_AOE_OBJECT data\n * @param[in]\tevent_argv_svrattrl\t-\tgets <attribute_name=EVENT_ARGV_OBJECT data\n *\n * @param[in]\tevent_jobs_svrattrl\t-\tgets <attribute_name>=EVENT_JOBLIST_OBJECT data\n * \t\t\t            \t\t\t\tCaution: svrattrl values stored in sorted order\n * @param[in]\tperf_label - passed on to hook_perf_stat* call.\n * @param[in]\tperf_action - passed on to hook_perf_stat* call.\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t-1\t: failure, and free_attrlist() is used to free the memory\n *\t\t\t\t\tassociated with each non-NULL list parameter.\n *\n * @note\n *\t\tThis function calls a single hook_perf_stat_start()\n *\t\tthat has some malloc-ed data that are freed in the\n *\t\thook_perf_stat_stop() call, which is done at the end of\n *\t\tthis function.\n *\t\tEnsure that after the hook_perf_stat_start(), all\n *\t\tprogram execution path lead to hook_perf_stat_stop()\n *\t\tcall.\n */\nint\npbs_python_populate_svrattrl_from_file(char *input_file,\n\t\t\t\t       pbs_list_head *default_svrattrl, pbs_list_head *event_svrattrl,\n\t\t\t\t       pbs_list_head *event_job_svrattrl, pbs_list_head *event_job_o_svrattrl,\n\t\t\t\t       pbs_list_head *event_resv_svrattrl, pbs_list_head *event_vnode_svrattrl,\n\t\t\t\t       pbs_list_head *event_vnode_fail_svrattrl,\n\t\t\t\t       pbs_list_head *job_failed_mom_list_svrattrl,\n\t\t\t\t       pbs_list_head *job_succeeded_mom_list_svrattrl,\n\t\t\t\t       pbs_list_head *event_src_queue_svrattrl, pbs_list_head *event_aoe_svrattrl,\n\t\t\t\t       pbs_list_head *event_argv_svrattrl, pbs_list_head *event_jobs_svrattrl,\n\t\t\t\t       char *perf_label, char *perf_action)\n{\n\n\tchar *attr_name;\n\tchar *name_str;\n\tchar name_str_buf[STRBUF + 1] = {'\\0'};\n\tchar *resc_str;\n\tchar argv_index[STRBUF + 1] = {'\\0'};\n\tchar *val_str;\n\tchar *obj_name;\n\tint rc = -1;\n\tchar *pc, *pc1, *pc2, *pc3, *pc4;\n\tchar *in_data = NULL;\n\tlong int endpos;\n\tint in_data_sz;\n\tchar 
*data_value;\n\tsize_t ll;\n\tFILE *fp = NULL;\n\tchar *p;\n\tint vn_obj_len = strlen(EVENT_VNODELIST_OBJECT);\n\tint vn_fail_obj_len = strlen(EVENT_VNODELIST_FAIL_OBJECT);\n\tint job_obj_len = strlen(EVENT_JOBLIST_OBJECT);\n\tint b_triple_quotes = 0;\n\tint e_triple_quotes = 0;\n\tchar buf_data[STRBUF];\n\n\tif ((default_svrattrl == NULL) || (event_svrattrl == NULL) ||\n\t    (event_job_svrattrl == NULL) || (event_job_o_svrattrl == NULL) ||\n\t    (event_resv_svrattrl == NULL) || (event_vnode_svrattrl == NULL) ||\n\t    (event_src_queue_svrattrl == NULL) || (event_aoe_svrattrl == NULL) ||\n\t    (event_argv_svrattrl == NULL) || (event_vnode_fail_svrattrl == NULL) ||\n\t    (job_failed_mom_list_svrattrl == NULL) ||\n\t    (job_succeeded_mom_list_svrattrl == NULL) ||\n\t    (event_jobs_svrattrl == NULL)) {\n\t\tlog_err(-1, __func__, \"Bad input parameter!\");\n\t\trc = -1;\n\t\tgoto populate_svrattrl_fail;\n\t}\n\n\tif ((input_file != NULL) && (*input_file != '\\0')) {\n\t\tfp = fopen(input_file, \"r\");\n\n\t\tif (fp == NULL) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"failed to open input file %s\", input_file);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\trc = -1;\n\t\t\tgoto populate_svrattrl_fail;\n\t\t}\n\t} else {\n\t\tfp = stdin;\n\t}\n\n\thook_perf_stat_start(perf_label, perf_action, 0);\n\tif (default_svrattrl)\n\t\tfree_attrlist(default_svrattrl);\n\tif (event_svrattrl)\n\t\tfree_attrlist(event_svrattrl);\n\tif (event_job_svrattrl)\n\t\tfree_attrlist(event_job_svrattrl);\n\tif (event_job_o_svrattrl)\n\t\tfree_attrlist(event_job_o_svrattrl);\n\tif (event_resv_svrattrl)\n\t\tfree_attrlist(event_resv_svrattrl);\n\tif (event_vnode_svrattrl)\n\t\tfree_attrlist(event_vnode_svrattrl);\n\tif (event_vnode_fail_svrattrl)\n\t\tfree_attrlist(event_vnode_fail_svrattrl);\n\tif (job_failed_mom_list_svrattrl)\n\t\tfree_attrlist(job_failed_mom_list_svrattrl);\n\tif 
(job_succeeded_mom_list_svrattrl)\n\t\tfree_attrlist(job_succeeded_mom_list_svrattrl);\n\tif (event_src_queue_svrattrl)\n\t\tfree_attrlist(event_src_queue_svrattrl);\n\tif (event_aoe_svrattrl)\n\t\tfree_attrlist(event_aoe_svrattrl);\n\tif (event_argv_svrattrl)\n\t\tfree_attrlist(event_argv_svrattrl);\n\tif (event_jobs_svrattrl)\n\t\tfree_attrlist(event_jobs_svrattrl);\n\n\tin_data_sz = STRBUF;\n\tin_data = (char *) malloc(in_data_sz);\n\tif (in_data == NULL) {\n\t\tlog_err(errno, __func__, \"malloc failed\");\n\t\trc = -1;\n\t\tgoto populate_svrattrl_fail;\n\t}\n\tin_data[0] = '\\0';\n\n\tif (fseek(fp, 0, SEEK_END) != 0) {\n\t\tlog_err(errno, __func__, \"fseek to end failed\");\n\t\trc = 1;\n\t\tgoto populate_svrattrl_fail;\n\t}\n\tendpos = ftell(fp);\n\tif (fseek(fp, 0, SEEK_SET) != 0) {\n\t\tlog_err(errno, __func__, \"fseek to beginning failed\");\n\t\trc = 1;\n\t\tgoto populate_svrattrl_fail;\n\t}\n\twhile (fgets(buf_data, STRBUF, fp) != NULL) {\n\n\t\tb_triple_quotes = 0;\n\t\te_triple_quotes = 0;\n\n\t\tif (pbs_strcat(&in_data, &in_data_sz, buf_data) == NULL) {\n\t\t\tgoto populate_svrattrl_fail;\n\t\t}\n\n\t\tll = strlen(in_data);\n#ifdef WIN32\n\t\t/* The file is being read in O_BINARY mode (see _fmode setting) */\n\t\t/* so on Windows, there's a carriage return (\\r) line feed (\\n), */\n\t\t/* then the linefeed needs to get processed out */\n\t\tif (ll >= 2) {\n\t\t\tif (in_data[ll - 2] == '\\r') {\n\t\t\t\t/* remove newline */\n\t\t\t\tin_data[ll - 2] = '\\0';\n\t\t\t}\n\t\t}\n#endif\n\t\tif ((p = strchr(in_data, '=')) != NULL) {\n\t\t\tb_triple_quotes = starts_with_triple_quotes(p + 1);\n\t\t}\n\n\t\tif (in_data[ll - 1] == '\\n') {\n\t\t\te_triple_quotes = ends_with_triple_quotes(in_data, 0);\n\n\t\t\tif (b_triple_quotes && !e_triple_quotes) {\n\t\t\t\tint jj;\n\n\t\t\t\twhile (fgets(buf_data, STRBUF, fp) != NULL) {\n\t\t\t\t\tif (pbs_strcat(&in_data, &in_data_sz,\n\t\t\t\t\t\t       buf_data) == NULL) {\n\t\t\t\t\t\tgoto 
populate_svrattrl_fail;\n\t\t\t\t\t}\n\n\t\t\t\t\tjj = strlen(in_data);\n\t\t\t\t\tif ((in_data[jj - 1] != '\\n') &&\n\t\t\t\t\t    (ftell(fp) != endpos)) {\n\t\t\t\t\t\t/* get more input for\n\t\t\t\t\t\t * current item.\n\t\t\t\t\t\t */\n\t\t\t\t\t\tcontinue;\n\t\t\t\t\t}\n\t\t\t\t\te_triple_quotes =\n\t\t\t\t\t\tends_with_triple_quotes(in_data, 0);\n\n\t\t\t\t\tif (e_triple_quotes) {\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t}\n\n\t\t\t\tif ((!b_triple_quotes && e_triple_quotes) ||\n\t\t\t\t    (b_triple_quotes && !e_triple_quotes)) {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"unmatched triple quotes! Skipping  line %s\",\n\t\t\t\t\t\t in_data);\n\t\t\t\t\tlog_err(PBSE_INTERNAL, __func__, log_buffer);\n\t\t\t\t\t/* process a new line */\n\t\t\t\t\tin_data[0] = '\\0';\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\tin_data[strlen(in_data) - 1] = '\\0';\n\n\t\t\t} else {\n\t\t\t\t/* remove newline */\n\t\t\t\tin_data[ll - 1] = '\\0';\n\t\t\t}\n\t\t} else if (ftell(fp) != endpos) { /* continued on next line */\n\t\t\t/* get more input for current item.  
*/\n\t\t\tcontinue;\n\t\t}\n\t\tdata_value = NULL;\n\t\tif ((p = strchr(in_data, '=')) != NULL) {\n\t\t\tint i;\n\t\t\t*p = '\\0';\n\t\t\tp++;\n\t\t\t/* Given '<obj_name>=<data_value>' line, */\n\t\t\t/* strip off leading spaces from <data_value> */\n\t\t\twhile (isspace(*p))\n\t\t\t\tp++;\n\t\t\tif (b_triple_quotes) {\n\t\t\t\t/* strip triple quotes */\n\t\t\t\tp += 3;\n\t\t\t}\n\t\t\tdata_value = p;\n\t\t\tif (e_triple_quotes) {\n\t\t\t\t(void) ends_with_triple_quotes(p, 1);\n\t\t\t}\n\n\t\t\ti = strlen(p);\n\t\t\twhile (--i > 0) { /* strip trailing blanks */\n\t\t\t\tif (!isspace((int) *(p + i)))\n\t\t\t\t\tbreak;\n\t\t\t\t*(p + i) = '\\0';\n\t\t\t}\n\t\t}\n\t\tobj_name = in_data;\n\n\t\tpc = strrchr(in_data, '.');\n\t\tif (pc) {\n\t\t\t*pc = '\\0';\n\t\t\tpc++;\n\t\t} else {\n\t\t\tpc = in_data;\n\t\t}\n\t\tname_str = pc;\n\n\t\tpc1 = strchr(pc, '[');\n\t\tpc2 = strchr(pc, ']');\n\t\tresc_str = NULL;\n\t\tif (pc1 && pc2 && (pc2 > pc1)) {\n\t\t\t*pc1 = '\\0';\n\t\t\tpc1++;\n\t\t\tresc_str = pc1;\n\t\t\t*pc2 = '\\0';\n\t\t\tpc2++;\n\n\t\t\t/* now let's if there's anything quoted inside */\n\t\t\tpc3 = strchr(pc1, '\"');\n\t\t\tif (pc3 != NULL)\n\t\t\t\tpc4 = strchr(pc3 + 1, '\"');\n\t\t\telse\n\t\t\t\tpc4 = NULL;\n\n\t\t\tif (pc3 && pc4 && (pc4 > pc3)) {\n\t\t\t\tpc3++;\n\t\t\t\t*pc4 = '\\0';\n\t\t\t\tresc_str = pc3;\n\t\t\t}\n\t\t}\n\n\t\tval_str = NULL;\n\t\tif (data_value) {\n\t\t\tval_str = data_value;\n\n\t\t\tif (strcmp(obj_name, EVENT_OBJECT) == 0) {\n\t\t\t\tif (strcmp(name_str, PY_EVENT_PARAM_ARGLIST) == 0) {\n\t\t\t\t\tif (event_argv_svrattrl) {\n\t\t\t\t\t\t/* 'resc_str' holds the */\n\t\t\t\t\t\t/* numeric index to argv (0,1,...). */\n\t\t\t\t\t\t/* enumerating argv[0], argv[1],... 
*/\n\t\t\t\t\t\t/* argv_index is the 'resc_str' value */\n\t\t\t\t\t\t/* that is padded to have a fixed */\n\t\t\t\t\t\t/* length, so that when use as a sort */\n\t\t\t\t\t\t/* key, natural ordering is respected */\n\t\t\t\t\t\t/* in the lexicographical comparison */\n\t\t\t\t\t\t/* done by add_to_svrattrl_list_sorted(). */\n\t\t\t\t\t\t/* Without the padding, the argv index */\n\t\t\t\t\t\t/* would get ordered as 0,1,10,11,12,2,3,.. */\n\t\t\t\t\t\t/* With padding, then the order would be */\n\t\t\t\t\t\t/* 000,001,002,003,...,010,011,... */\n\t\t\t\t\t\t/* respecting natural order. */\n\t\t\t\t\t\t/* leading zeros added up to a length of 8 */\n\t\t\t\t\t\tsnprintf(argv_index, sizeof(argv_index) - 1,\n\t\t\t\t\t\t\t \"%08d\", atoi(resc_str));\n\n\t\t\t\t\t\trc = add_to_svrattrl_list_sorted(event_argv_svrattrl, name_str, resc_str, val_str, 0, argv_index);\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\tif (event_svrattrl)\n\t\t\t\t\t\trc = add_to_svrattrl_list(event_svrattrl, name_str, resc_str, val_str, 0, NULL);\n\t\t\t\t}\n\t\t\t} else if (event_job_svrattrl &&\n\t\t\t\t   (strcmp(obj_name, EVENT_JOB_OBJECT) == 0)) {\n\t\t\t\tif (strcmp(name_str, PY_JOB_FAILED_MOM_LIST) == 0) {\n\t\t\t\t\tif (job_failed_mom_list_svrattrl) {\n\t\t\t\t\t\trc = add_to_svrattrl_list(job_failed_mom_list_svrattrl, val_str, NULL, NULL, 0, NULL);\n\t\t\t\t\t}\n\t\t\t\t} else if (strcmp(name_str, PY_JOB_SUCCEEDED_MOM_LIST) == 0) {\n\t\t\t\t\tif (job_succeeded_mom_list_svrattrl) {\n\t\t\t\t\t\trc = add_to_svrattrl_list(job_succeeded_mom_list_svrattrl, val_str, NULL, NULL, 0, NULL);\n\t\t\t\t\t}\n\t\t\t\t} else {\n\t\t\t\t\trc = add_to_svrattrl_list(event_job_svrattrl, name_str, resc_str, val_str, 0, NULL);\n\t\t\t\t}\n\t\t\t} else if (event_job_o_svrattrl &&\n\t\t\t\t   (strcmp(obj_name, EVENT_JOB_O_OBJECT) == 0)) {\n\t\t\t\trc = add_to_svrattrl_list(event_job_o_svrattrl, name_str,\n\t\t\t\t\t\t\t  resc_str, val_str, 0, NULL);\n\t\t\t} else if (event_resv_svrattrl &&\n\t\t\t\t   
(strcmp(obj_name, EVENT_RESV_OBJECT) == 0)) {\n\t\t\t\trc = add_to_svrattrl_list(event_resv_svrattrl, name_str,\n\t\t\t\t\t\t\t  resc_str, val_str, 0, NULL);\n\t\t\t} else if ((event_vnode_fail_svrattrl &&\n\t\t\t\t    (strncmp(obj_name, EVENT_VNODELIST_FAIL_OBJECT, vn_fail_obj_len) == 0)) ||\n\t\t\t\t   (event_vnode_svrattrl &&\n\t\t\t\t    (strncmp(obj_name, EVENT_VNODELIST_OBJECT, vn_obj_len) == 0))) {\n\n\t\t\t\t/* pbs.event().vnode_list_fail[<vnode_name>]\\0<attribute name>\\0<resource name>\\0<value>\n\t\t\t\t * where obj_name = pbs.event().vnode_list_fail[<vnode_name>]\n\t\t\t\t *\t  name_str = <attribute name>\n\t\t\t\t *     - or -\n\t\t\t\t * pbs.event().vnode_list[<vnode_name>]\\0<attribute name>\\0<resource name>\\0<value>\n\t\t\t\t * where obj_name = pbs.event().vnode_list[<vnode_name>]\n\t\t\t\t *\t  name_str = <attribute name>\n\t\t\t\t */\n\n\t\t\t\t/* import here to look for the leftmost '[' (using strchr)\n\t\t\t\t * and the rightmost ']' (using strrchr)\n\t\t\t\t * as we can have:\n\t\t\t\t *\tpbs.event().vnode_list_fail[\"altix[5]\"].<attr>=<val>\n\t\t\t\t *    - or -\n\t\t\t\t *\tpbs.event().vnode_list[\"altix[5]\"].<attr>=<val>\n\t\t\t\t * and \"altix[5]\" is a valid vnode id.\n\t\t\t\t */\n\t\t\t\tif (((pc1 = strchr(obj_name, '[')) != NULL) &&\n\t\t\t\t    ((pc2 = strrchr(obj_name, ']')) != NULL) &&\n\t\t\t\t    (pc2 > pc1)) {\n\t\t\t\t\tpc1++; /* <vnode_name> part */\n\n\t\t\t\t\t*pc2 = '.'; /* pbs.event().vnode_list_fail[<vnode_name>. or pbs.event().vnode_list[<vnode_nam.. 
*/\n\t\t\t\t\tpc2++;\n\n\t\t\t\t\t/* now let's if there's anything quoted inside */\n\t\t\t\t\tpc3 = strchr(pc1, '\"');\n\t\t\t\t\tif (pc3 != NULL)\n\t\t\t\t\t\tpc4 = strchr(pc3 + 1, '\"');\n\t\t\t\t\telse\n\t\t\t\t\t\tpc4 = NULL;\n\n\t\t\t\t\tif (pc3 && pc4 && (pc4 > pc3)) {\n\t\t\t\t\t\tpc3++;\n\t\t\t\t\t\t*pc4 = '.';\n\t\t\t\t\t\tpc4++;\n\t\t\t\t\t\t/* we're saving 'name_str' in a separate array (name_str_buf), */\n\t\t\t\t\t\t/* as strcpy() does something odd under rhel6/centos if the */\n\t\t\t\t\t\t/* destination (pc4)  and the source (name_str) are in the same */\n\t\t\t\t\t\t/* memory area, even though non-overlapping. */\n\t\t\t\t\t\tstrncpy(name_str_buf, name_str, sizeof(name_str_buf) - 1);\n\t\t\t\t\t\tstrcpy(pc4, name_str_buf); /* <vnode_name>.<attr name> */\n\t\t\t\t\t\tname_str = pc3;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tstrncpy(name_str_buf, name_str, sizeof(name_str_buf) - 1);\n\t\t\t\t\t\tstrcpy(pc2, name_str_buf); /* <vnode_name>.<attr name> */\n\t\t\t\t\t\tname_str = pc1;\n\t\t\t\t\t}\n\t\t\t\t\tattr_name = strrchr(name_str, '.');\n\t\t\t\t\tif (attr_name == NULL)\n\t\t\t\t\t\tattr_name = name_str;\n\t\t\t\t\telse\n\t\t\t\t\t\tattr_name++;\n\n\t\t\t\t} else {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"object '%s' does not have a vnode name!\", obj_name);\n\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t\t/* process a new line */\n\t\t\t\t\tin_data[0] = '\\0';\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\tif (strncmp(obj_name, EVENT_VNODELIST_FAIL_OBJECT, vn_fail_obj_len) == 0) {\n\t\t\t\t\trc = add_to_svrattrl_list_sorted(event_vnode_fail_svrattrl, name_str, resc_str, return_internal_value(attr_name, val_str), 0, NULL);\n\t\t\t\t} else {\n\t\t\t\t\trc = add_to_svrattrl_list_sorted(event_vnode_svrattrl, name_str, resc_str, return_internal_value(attr_name, val_str), 0, NULL);\n\t\t\t\t}\n\n\t\t\t} else if (event_jobs_svrattrl && (strncmp(obj_name, EVENT_JOBLIST_OBJECT, job_obj_len) == 0)) {\n\n\t\t\t\t/* 
pbs.event().job_list[<jobid>]\\0<attribute name>\\0<resource name>\\0<value>\n\t\t\t\t * where obj_name = pbs.event().job_list[<jobid>]\n\t\t\t\t *\t  name_str = <attribute name>\n\t\t\t\t */\n\n\t\t\t\t/* import here to look for the leftmost '[' (using strchr)\n\t\t\t\t * and the rightmost ']' (using strrchr)\n\t\t\t\t * as we can have:\n\t\t\t\t *\t\tpbs.event().job_list[\"5.altix\"].<attr>=<val>\n\t\t\t\t * and \"5.altix\" is a valid job id.\n\t\t\t\t */\n\t\t\t\tif (((pc1 = strchr(obj_name, '[')) != NULL) &&\n\t\t\t\t    ((pc2 = strrchr(obj_name, ']')) != NULL) &&\n\t\t\t\t    (pc2 > pc1)) {\n\t\t\t\t\tpc1++; /* <jobid> part */\n\n\t\t\t\t\t*pc2 = '.'; /* pbs.event().job_list[<jobid>. */\n\t\t\t\t\tpc2++;\n\n\t\t\t\t\t/* now let's if there's anything quoted inside */\n\t\t\t\t\tpc3 = strchr(pc1, '\"');\n\t\t\t\t\tif (pc3 != NULL)\n\t\t\t\t\t\tpc4 = strchr(pc3 + 1, '\"');\n\t\t\t\t\telse\n\t\t\t\t\t\tpc4 = NULL;\n\n\t\t\t\t\tif (pc3 && pc4 && (pc4 > pc3)) {\n\t\t\t\t\t\tpc3++;\n\t\t\t\t\t\t*pc4 = '.';\n\t\t\t\t\t\tpc4++;\n\t\t\t\t\t\t/* we're saving 'name_str' in a separate array (name_str_buf), */\n\t\t\t\t\t\t/* as strcpy() does something odd under rhel6/centos if the */\n\t\t\t\t\t\t/* destination (pc4)  and the source (name_str) are in the same */\n\t\t\t\t\t\t/* memory area, even though non-overlapping. 
*/\n\t\t\t\t\t\tstrncpy(name_str_buf, name_str, sizeof(name_str_buf) - 1);\n\t\t\t\t\t\tstrcpy(pc4, name_str_buf); /* <jobid>.<attr name> */\n\t\t\t\t\t\tname_str = pc3;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tstrncpy(name_str_buf, name_str, sizeof(name_str_buf) - 1);\n\t\t\t\t\t\tstrcpy(pc2, name_str_buf); /* <jobid>.<attr name> */\n\t\t\t\t\t\tname_str = pc1;\n\t\t\t\t\t}\n\t\t\t\t\tattr_name = strrchr(name_str, '.');\n\t\t\t\t\tif (attr_name == NULL)\n\t\t\t\t\t\tattr_name = name_str;\n\t\t\t\t\telse\n\t\t\t\t\t\tattr_name++;\n\n\t\t\t\t} else {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"object '%s' does not have a job name!\", obj_name);\n\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t\t/* process a new line */\n\t\t\t\t\tin_data[0] = '\\0';\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\trc = add_to_svrattrl_list_sorted(event_jobs_svrattrl,\n\t\t\t\t\t\t\t\t name_str, resc_str, val_str, 0, NULL);\n\t\t\t} else if (event_src_queue_svrattrl && (strcmp(obj_name, EVENT_SRC_QUEUE_OBJECT) == 0)) {\n\t\t\t\trc = add_to_svrattrl_list(event_src_queue_svrattrl,\n\t\t\t\t\t\t\t  name_str, resc_str, val_str, 0, NULL);\n\t\t\t} else if (event_aoe_svrattrl && (strcmp(obj_name, EVENT_AOE_OBJECT) == 0)) {\n\t\t\t\trc = add_to_svrattrl_list(event_aoe_svrattrl, name_str,\n\t\t\t\t\t\t\t  resc_str, val_str, 0, NULL);\n\t\t\t} else if ((strcmp(obj_name, PBS_OBJ) == 0) &&\n\t\t\t\t   (strcmp(name_str, GET_NODE_NAME_FUNC) == 0)) {\n\t\t\t\tstrncpy(svr_interp_data.local_host_name, val_str,\n\t\t\t\t\tPBS_MAXHOSTNAME);\n\t\t\t\trc = 0;\n\t\t\t} else {\n\t\t\t\trc = add_to_svrattrl_list(default_svrattrl,\n\t\t\t\t\t\t\t  name_str, resc_str, val_str, 0, NULL);\n\t\t\t}\n\n\t\t\tif (rc == -1) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"failed to add_to_svrattrl_list(%s,%s,%s\",\n\t\t\t\t\t name_str, resc_str, (val_str ? 
val_str : \"\"));\n\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\tgoto populate_svrattrl_fail;\n\t\t\t}\n\t\t}\n\t\tin_data[0] = '\\0';\n\t}\n\n\tif (fp != stdin)\n\t\tfclose(fp);\n\n\tif (in_data != NULL) {\n\t\tfree(in_data);\n\t}\n\thook_perf_stat_stop(perf_label, perf_action, 0);\n\treturn (0);\n\npopulate_svrattrl_fail:\n\tif (default_svrattrl)\n\t\tfree_attrlist(default_svrattrl);\n\tif (event_svrattrl)\n\t\tfree_attrlist(event_svrattrl);\n\tif (event_job_svrattrl)\n\t\tfree_attrlist(event_job_svrattrl);\n\tif (event_job_o_svrattrl)\n\t\tfree_attrlist(event_job_o_svrattrl);\n\tif (event_resv_svrattrl)\n\t\tfree_attrlist(event_resv_svrattrl);\n\tif (event_vnode_svrattrl)\n\t\tfree_attrlist(event_vnode_svrattrl);\n\tif (event_vnode_fail_svrattrl)\n\t\tfree_attrlist(event_vnode_fail_svrattrl);\n\tif (event_src_queue_svrattrl)\n\t\tfree_attrlist(event_src_queue_svrattrl);\n\tif (event_aoe_svrattrl)\n\t\tfree_attrlist(event_aoe_svrattrl);\n\tif (event_argv_svrattrl)\n\t\tfree_attrlist(event_argv_svrattrl);\n\tif (event_jobs_svrattrl)\n\t\tfree_attrlist(event_jobs_svrattrl);\n\n\tif ((fp != NULL) && (fp != stdin))\n\t\tfclose(fp);\n\n\tif (in_data != NULL) {\n\t\tfree(in_data);\n\t}\n\n\thook_perf_stat_stop(perf_label, perf_action, 0);\n\treturn (rc);\n}\n\n/**\n *\n * @brief\n *\n * \t\tThis is like populate_svrattrl_from_file() but data is focused\n * \t\ton pbs.server() type of data.\n *\t\tTakes data from input file or stdin of the form:\n *\t\t<attribute_name>=<attribute value>\n *\t\t<attribute_name>[<resource_name>]=<resource value>\n *\t\tand populate the given various lists with the value obtained.\n *\n * @param[out]\tinput_file\t-\tif NULL, will get data from stdin.\n * @param[out]\tdefault_svrattrl\t-\tthe \"catch all\" list\n * @param[out]\tserver_svrattrl\t-\tgets <attribute_name>=SERVER_OBJECT data\n * @param[out]\tserver_jobs_svrattrl\t-\tgets <attribute_name>=SERVER_JOB_OBJECT data\n * \t\t\t              \t\t\t\t\tCaution: stored in sorted 
order.\n * @param[out]\tserver_jobs_ids_svrattrl\t-\tgets the list of job ids obtained\n * @param[out]\tserver_queues_svrattrl\t-\tgets <attribute_name>=SERVER_QUEUE_OBJECT data\n * \t\t\t                \t\t\t\tCaution: stored in sorted order.\n * @param[out]\tserver_queues_names_svrattrl\t-\tgets list of queue names obtained\n * @param[out]\tserver_resvs_svrattrl\t-\tgets <attribute_name>=SERVER_RESV_OBJECT data\n * \t\t\t               \t\t\t\t\tCaution: stored in sorted order.\n * @param[out]\tserver_resvs_resvids_svrattrl\t-\tgets list of reservation ids obtained\n * @param[out]\tserver_vnodes_svrattrl\t-\tgets <attribute_name>=SERVER_VNODE_OBJECT data\n * \t\t\t                \t\t\t\tCaution: stored in sorted order.\n * @param[out]\tserver_vnodes_names_svrattrl\t-\tgets list of vnode names obtained.\n * @param[in]\tperf_label - passed on to hook_perf_stat* call.\n * @param[in]\tperf_action - passed on to hook_perf_stat* call.\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t-1\t: failure, and free_attrlist() is used to free the memory\n *\t\t\t\t\tassociated with each non-NULL list parameter.\n *\n * @note\n *\t\tThis function calls a single hook_perf_stat_start()\n *\t\tthat has some malloc-ed data that are freed in the\n *\t\thook_perf_stat_stop() call, which is done at the end of\n *\t\tthis function.\n *\t\tEnsure that after the hook_perf_stat_start(), all\n *\t\tprogram execution path lead to hook_perf_stat_stop()\n *\t\tcall.\n */\nint\npbs_python_populate_server_svrattrl_from_file(char *input_file,\n\t\t\t\t\t      pbs_list_head *default_svrattrl,\n\t\t\t\t\t      pbs_list_head *server_svrattrl,\n\t\t\t\t\t      pbs_list_head *server_jobs_svrattrl,\n\t\t\t\t\t      pbs_list_head *server_jobs_ids_svrattrl,\n\t\t\t\t\t      pbs_list_head *server_queues_svrattrl,\n\t\t\t\t\t      pbs_list_head *server_queues_names_svrattrl,\n\t\t\t\t\t      pbs_list_head *server_resvs_svrattrl,\n\t\t\t\t\t      pbs_list_head 
*server_resvs_resvids_svrattrl,\n\t\t\t\t\t      pbs_list_head *server_vnodes_svrattrl,\n\t\t\t\t\t      pbs_list_head *server_vnodes_names_svrattrl,\n\t\t\t\t\t      char *perf_label, char *perf_action)\n{\n\n\tchar *attr_name;\n\tchar *name_str;\n\tchar name_str_buf[STRBUF + 1] = {'\\0'};\n\tchar *resc_str;\n\tchar *val_str;\n\tchar *obj_name;\n\tchar *obj_name2;\n\tint rc = -1;\n\tint rc2 = -1;\n\tchar *pc, *pc1, *pc2, *pc3, *pc4;\n\tchar *in_data = NULL;\n\tchar *tmp_data = NULL;\n\tlong int curpos;\n\tlong int endpos;\n\tsize_t in_data_sz;\n\tchar *data_value;\n\tsize_t ll;\n\tFILE *fp = NULL;\n\tchar *p, *p2;\n\tint jobs_obj_len = strlen(SERVER_JOB_OBJECT);\n\tint queue_obj_len = strlen(SERVER_QUEUE_OBJECT);\n\tint resv_obj_len = strlen(SERVER_RESV_OBJECT);\n\tint vnode_obj_len = strlen(SERVER_VNODE_OBJECT);\n\n\tif ((default_svrattrl == NULL) ||\n\t    (server_svrattrl == NULL) ||\n\t    (server_jobs_svrattrl == NULL) ||\n\t    (server_jobs_ids_svrattrl == NULL) ||\n\t    (server_queues_svrattrl == NULL) ||\n\t    (server_queues_names_svrattrl == NULL) ||\n\t    (server_vnodes_svrattrl == NULL) ||\n\t    (server_vnodes_names_svrattrl == NULL) ||\n\t    (server_resvs_svrattrl == NULL) ||\n\t    (server_resvs_resvids_svrattrl == NULL)) {\n\t\tlog_err(errno, __func__, \"Bad input parameter!\");\n\t\trc = -1;\n\t\tgoto populate_server_svrattrl_fail;\n\t}\n\n\tif ((input_file != NULL) && (*input_file != '\\0')) {\n\t\tfp = fopen(input_file, \"r\");\n\n\t\tif (fp == NULL) {\n\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t \"failed to open input file %s\", input_file);\n\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\trc = -1;\n\t\t\tgoto populate_server_svrattrl_fail;\n\t\t}\n\t} else {\n\t\tfp = stdin;\n\t}\n\n\thook_perf_stat_start(perf_label, perf_action, 0);\n\n\tif (default_svrattrl) {\n\t\tfree_attrlist(default_svrattrl);\n\t\tCLEAR_HEAD((*default_svrattrl));\n\t}\n\tif (server_svrattrl) 
{\n\t\tfree_attrlist(server_svrattrl);\n\t\tCLEAR_HEAD((*server_svrattrl));\n\t}\n\tif (server_jobs_svrattrl) {\n\t\tfree_attrlist(server_jobs_svrattrl);\n\t\tCLEAR_HEAD((*server_jobs_svrattrl));\n\t}\n\tif (server_jobs_ids_svrattrl) {\n\t\tfree_attrlist(server_jobs_ids_svrattrl);\n\t\tCLEAR_HEAD((*server_jobs_ids_svrattrl));\n\t}\n\tif (server_queues_svrattrl) {\n\t\tfree_attrlist(server_queues_svrattrl);\n\t\tCLEAR_HEAD((*server_queues_svrattrl));\n\t}\n\tif (server_queues_names_svrattrl) {\n\t\tfree_attrlist(server_queues_names_svrattrl);\n\t\tCLEAR_HEAD((*server_queues_names_svrattrl));\n\t}\n\tif (server_vnodes_svrattrl) {\n\t\tfree_attrlist(server_vnodes_svrattrl);\n\t\tCLEAR_HEAD((*server_vnodes_svrattrl));\n\t}\n\tif (server_vnodes_names_svrattrl) {\n\t\tfree_attrlist(server_vnodes_names_svrattrl);\n\t\tCLEAR_HEAD((*server_vnodes_names_svrattrl));\n\t}\n\tif (server_resvs_svrattrl) {\n\t\tfree_attrlist(server_resvs_svrattrl);\n\t\tCLEAR_HEAD((*server_resvs_svrattrl));\n\t}\n\tif (server_resvs_resvids_svrattrl) {\n\t\tfree_attrlist(server_resvs_resvids_svrattrl);\n\t\tCLEAR_HEAD((*server_resvs_resvids_svrattrl));\n\t}\n\n\tin_data_sz = STRBUF;\n\tin_data = (char *) malloc(in_data_sz);\n\tif (in_data == NULL) {\n\t\tlog_err(errno, __func__, \"malloc failed\");\n\t\trc = -1;\n\t\tgoto populate_server_svrattrl_fail;\n\t}\n\n\tif (fseek(fp, 0, SEEK_END) != 0) {\n\t\tlog_err(errno, __func__, \"fseek to end failed\");\n\t\trc = -1;\n\t\tgoto populate_server_svrattrl_fail;\n\t}\n\tendpos = ftell(fp);\n\tif (fseek(fp, 0, SEEK_SET) != 0) {\n\t\tlog_err(errno, __func__, \"fseek to beginning failed\");\n\t\trc = -1;\n\t\tgoto populate_server_svrattrl_fail;\n\t}\n\tcurpos = ftell(fp);\n\twhile (fgets(in_data, in_data_sz, fp) != NULL) {\n\n\t\tll = strlen(in_data);\n#ifdef WIN32\n\t\t/* The file is being read in O_BINARY mode (see _fmode setting) */\n\t\t/* so on Windows, there's a carriage return (\\r) line feed (\\n), */\n\t\t/* then the linefeed needs to get processed 
out */\n\t\tif (ll >= 2) {\n\t\t\tif (in_data[ll - 2] == '\\r') {\n\t\t\t\t/* remove carriage return and newline */\n\t\t\t\tin_data[ll - 2] = '\\0';\n\t\t\t}\n\t\t}\n#endif\n\t\tif (in_data[ll - 1] == '\\n') {\n\t\t\t/* remove newline */\n\t\t\tin_data[ll - 1] = '\\0';\n\t\t} else if (ftell(fp) != endpos) { /* continued on next line */\n\t\t\tin_data_sz = 2 * in_data_sz;\n\t\t\ttmp_data = (char *) realloc(in_data, in_data_sz);\n\t\t\tif (tmp_data == NULL) {\n\t\t\t\tlog_err(errno, __func__, \"realloc failed\");\n\t\t\t\trc = -1;\n\t\t\t\tgoto populate_server_svrattrl_fail;\n\t\t\t}\n\t\t\tin_data = tmp_data;\n\t\t\tif (fseek(fp, curpos, SEEK_SET) != 0) {\n\t\t\t\tlog_err(errno, __func__, \"failed to fseek\");\n\t\t\t\trc = -1;\n\t\t\t\tgoto populate_server_svrattrl_fail;\n\t\t\t}\n\t\t\tcontinue;\n\t\t}\n\t\tcurpos = ftell(fp);\n\t\tdata_value = NULL;\n\t\tif ((p = strchr(in_data, '=')) != NULL) {\n\t\t\tint i;\n\t\t\t*p = '\\0';\n\t\t\tp++;\n\t\t\t/* Given '<obj_name>=<data_value>' line, */\n\t\t\t/* strip off leading spaces from <data_value> */\n\t\t\twhile (isspace(*p))\n\t\t\t\tp++;\n\t\t\tdata_value = p;\n\t\t\t/* and strip off trailing spaces from <data_value> */\n\t\t\ti = strlen(p);\n\t\t\twhile (--i > 0) { /* strip trailing blanks */\n\t\t\t\tif (!isspace((int) *(p + i)))\n\t\t\t\t\tbreak;\n\t\t\t\t*(p + i) = '\\0';\n\t\t\t}\n\t\t}\n\t\tobj_name = in_data;\n\n\t\tpc = strrchr(in_data, '.');\n\t\tif (pc) {\n\t\t\t*pc = '\\0';\n\t\t\tpc++;\n\t\t} else {\n\t\t\tpc = in_data;\n\t\t}\n\t\tname_str = pc;\n\n\t\tpc1 = strchr(pc, '[');\n\t\tpc2 = strchr(pc, ']');\n\t\tresc_str = NULL;\n\t\tif (pc1 && pc2 && (pc2 > pc1)) {\n\t\t\t*pc1 = '\\0';\n\t\t\tpc1++;\n\t\t\tresc_str = pc1;\n\t\t\t*pc2 = '\\0';\n\t\t\tpc2++;\n\n\t\t\t/* now let's see if there's anything quoted inside */\n\t\t\tpc3 = strchr(pc1, '\"');\n\t\t\tif (pc3 != NULL)\n\t\t\t\tpc4 = strchr(pc3 + 1, '\"');\n\t\t\telse\n\t\t\t\tpc4 = NULL;\n\n\t\t\tif (pc3 && pc4 && (pc4 > pc3)) {\n\t\t\t\tpc3++;\n\t\t\t\t*pc4 = 
'\\0';\n\t\t\t\tresc_str = pc3;\n\t\t\t}\n\t\t}\n\n\t\tval_str = NULL;\n\t\tif (data_value) {\n\t\t\tval_str = data_value;\n\n\t\t\tif (strcmp(obj_name, SERVER_OBJECT) == 0) {\n\t\t\t\tif (server_svrattrl) {\n\t\t\t\t\trc = add_to_svrattrl_list(server_svrattrl, name_str, resc_str, val_str, 0, NULL);\n\t\t\t\t}\n\t\t\t\trc2 = 0;\n\t\t\t} else if (server_jobs_svrattrl &&\n\t\t\t\t   (strncmp(obj_name, SERVER_JOB_OBJECT,\n\t\t\t\t\t    jobs_obj_len) == 0)) {\n\t\t\t\tobj_name2 = obj_name + jobs_obj_len;\n\n\t\t\t\t/* pbs.server().job(<jobid>)\\0<attribute name>\\0<resource name>\\0<value>\n\t\t\t\t * where obj_name = pbs.server().job(<jobid>)\n\t\t\t\t *       obj_name2 = (<jobid>)\n\t\t\t\t *\t  name_str = <attribute name>\n\t\t\t\t *\n\t\t\t\t */\n\n\t\t\t\t/* important here to look for the first '(' (using strchr)\n\t\t\t\t * and the first ')' (using strchr)\n\t\t\t\t * as we can have:\n\t\t\t\t *\t\tpbs.server().job(\"23.ricardo\").<attr>=<val>\n\t\t\t\t * and \"23.ricardo\" is a valid job id.\n\t\t\t\t */\n\t\t\t\tif (((pc1 = strchr(obj_name2, '(')) != NULL) &&\n\t\t\t\t    ((pc2 = strchr(obj_name2, ')')) != NULL) &&\n\t\t\t\t    (pc2 > pc1)) {\n\t\t\t\t\tpc1++; /* <jobid> part */\n\n\t\t\t\t\t*pc2 = '.'; /* pbs.server().job(<jobid>. */\n\t\t\t\t\tpc2++;\n\n\t\t\t\t\t/* now let's see if there's anything quoted inside */\n\t\t\t\t\tpc3 = strchr(pc1, '\"');\n\t\t\t\t\tif (pc3 != NULL)\n\t\t\t\t\t\tpc4 = strchr(pc3 + 1, '\"');\n\t\t\t\t\telse\n\t\t\t\t\t\tpc4 = NULL;\n\n\t\t\t\t\tif (pc3 && pc4 && (pc4 > pc3)) {\n\t\t\t\t\t\tpc3++;\n\t\t\t\t\t\t*pc4 = '.';\n\t\t\t\t\t\tpc4++;\n\t\t\t\t\t\t/* we're saving 'name_str' in a separate array (name_str_buf), */\n\t\t\t\t\t\t/* as strcpy() does something odd under rhel6/centos if the */\n\t\t\t\t\t\t/* destination (pc4) and the source (name_str) are in the same */\n\t\t\t\t\t\t/* memory area, even though non-overlapping. 
*/\n\t\t\t\t\t\tstrncpy(name_str_buf, name_str, sizeof(name_str_buf) - 1);\n\t\t\t\t\t\tstrcpy(pc4, name_str_buf); /* <jobid>.<attr name> */\n\t\t\t\t\t\tname_str = pc3;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tstrncpy(name_str_buf, name_str, sizeof(name_str_buf) - 1);\n\t\t\t\t\t\tstrcpy(pc2, name_str_buf); /* <jobid>.<attr name> */\n\t\t\t\t\t\tname_str = pc1;\n\t\t\t\t\t}\n\t\t\t\t\tattr_name = strrchr(name_str, '.');\n\t\t\t\t\tif (attr_name == NULL)\n\t\t\t\t\t\tattr_name = name_str;\n\t\t\t\t\telse\n\t\t\t\t\t\tattr_name++;\n\n\t\t\t\t} else {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"object '%s' does not have a job id!\", obj_name);\n\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\trc = add_to_svrattrl_list_sorted(server_jobs_svrattrl,\n\t\t\t\t\t\t\t\t name_str, resc_str, val_str, 0, NULL);\n\n\t\t\t\tif ((p2 = strrchr(name_str, '.')) != NULL)\n\t\t\t\t\t*p2 = '\\0'; /* name_str=<jobid> */\n\n\t\t\t\tif (!find_svrattrl_list_entry(server_jobs_ids_svrattrl,\n\t\t\t\t\t\t\t      name_str, NULL))\n\t\t\t\t\trc2 = add_to_svrattrl_list(server_jobs_ids_svrattrl, name_str, NULL, \"\", 0, NULL);\n\n\t\t\t\tif (p2 != NULL)\n\t\t\t\t\t*p2 = '.'; /* name_str=<jobid>.<attr> */\n\n\t\t\t} else if (server_vnodes_svrattrl &&\n\t\t\t\t   (strncmp(obj_name, SERVER_VNODE_OBJECT,\n\t\t\t\t\t    vnode_obj_len) == 0)) {\n\n\t\t\t\tobj_name2 = obj_name + vnode_obj_len;\n\t\t\t\t/* pbs.server().vnode(<vnode_name>)\\0<attribute name>\\0<resource name>\\0<value>\n\t\t\t\t * where obj_name = pbs.server().vnode(<vnode_name>)\n\t\t\t\t *       obj_name2 = (<vnode_name>)\n\t\t\t\t *\t  name_str = <attribute name>\n\t\t\t\t */\n\n\t\t\t\t/* important here to look for the leftmost '(' (using strchr)\n\t\t\t\t * and the rightmost ')' (using strrchr)\n\t\t\t\t * as we can have:\n\t\t\t\t *\t\tpbs.server().vnode(\"altix[5]\").<attr>=<val>\n\t\t\t\t * and \"altix[5]\" is a valid vnode id.\n\t\t\t\t */\n\t\t\t\tif (((pc1 = strchr(obj_name2, 
'(')) != NULL) &&\n\t\t\t\t    ((pc2 = strrchr(obj_name2, ')')) != NULL) &&\n\t\t\t\t    (pc2 > pc1)) {\n\t\t\t\t\tpc1++; /* <vnode_name> part */\n\n\t\t\t\t\t*pc2 = '.'; /* pbs.server().vnode(<vnode_name>. */\n\t\t\t\t\tpc2++;\n\n\t\t\t\t\t/* now let's see if there's anything quoted inside */\n\t\t\t\t\tpc3 = strchr(pc1, '\"');\n\t\t\t\t\tif (pc3 != NULL)\n\t\t\t\t\t\tpc4 = strchr(pc3 + 1, '\"');\n\t\t\t\t\telse\n\t\t\t\t\t\tpc4 = NULL;\n\n\t\t\t\t\tif (pc3 && pc4 && (pc4 > pc3)) {\n\t\t\t\t\t\tpc3++;\n\t\t\t\t\t\t*pc4 = '.';\n\t\t\t\t\t\tpc4++;\n\t\t\t\t\t\t/* we're saving 'name_str' in a separate array (name_str_buf), */\n\t\t\t\t\t\t/* as strcpy() does something odd under rhel6/centos if the */\n\t\t\t\t\t\t/* destination (pc4) and the source (name_str) are in the same */\n\t\t\t\t\t\t/* memory area, even though non-overlapping. */\n\t\t\t\t\t\tstrncpy(name_str_buf, name_str, sizeof(name_str_buf) - 1);\n\t\t\t\t\t\tstrcpy(pc4, name_str_buf); /* <vnode_name>.<attr name> */\n\t\t\t\t\t\tname_str = pc3;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tstrncpy(name_str_buf, name_str, sizeof(name_str_buf) - 1);\n\t\t\t\t\t\tstrcpy(pc2, name_str_buf); /* <vnode_name>.<attr name> */\n\t\t\t\t\t\tname_str = pc1;\n\t\t\t\t\t}\n\t\t\t\t\tattr_name = strrchr(name_str, '.');\n\t\t\t\t\tif (attr_name == NULL)\n\t\t\t\t\t\tattr_name = name_str;\n\t\t\t\t\telse\n\t\t\t\t\t\tattr_name++;\n\n\t\t\t\t} else {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"object '%s' does not have a vnode name!\", obj_name);\n\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\trc = add_to_svrattrl_list_sorted(server_vnodes_svrattrl,\n\t\t\t\t\t\t\t\t name_str, resc_str,\n\t\t\t\t\t\t\t\t return_internal_value(attr_name, val_str), 0, NULL);\n\t\t\t\tif ((p2 = strrchr(name_str, '.')) != NULL)\n\t\t\t\t\t*p2 = '\\0'; /* name_str=<vname> */\n\n\t\t\t\tif (!find_svrattrl_list_entry(server_vnodes_names_svrattrl,\n\t\t\t\t\t\t\t      name_str, NULL))\n\t\t\t\t\trc2 = 
add_to_svrattrl_list(server_vnodes_names_svrattrl, name_str, NULL, \"\", 0, NULL);\n\n\t\t\t\tif (p2 != NULL)\n\t\t\t\t\t*p2 = '.'; /* name_str=<vname>.<attr> */\n\n\t\t\t} else if (server_queues_svrattrl &&\n\t\t\t\t   (strncmp(obj_name, SERVER_QUEUE_OBJECT,\n\t\t\t\t\t    queue_obj_len) == 0)) {\n\n\t\t\t\tobj_name2 = obj_name + queue_obj_len;\n\t\t\t\t/* pbs.server().queue(<qname>)\\0<attribute name>\\0<resource name>\\0<value>\n\t\t\t\t * where obj_name = pbs.server().queue(<qname>)\n\t\t\t\t *       obj_name2 = (<qname>)\n\t\t\t\t *\t  name_str = <attribute name>\n\t\t\t\t */\n\n\t\t\t\t/* important here to look for the rightmost '(' (using strrchr)\n\t\t\t\t * and the rightmost ')' (using strrchr)\n\t\t\t\t * as we can have:\n\t\t\t\t *\t\tpbs.server().queue(\"workq\").<attr>=<val>\n\t\t\t\t * and \"workq\" is a valid queue id.\n\t\t\t\t */\n\t\t\t\tif (((pc1 = strrchr(obj_name2, '(')) != NULL) &&\n\t\t\t\t    ((pc2 = strrchr(obj_name2, ')')) != NULL) &&\n\t\t\t\t    (pc2 > pc1)) {\n\t\t\t\t\tpc1++; /* <qname> part */\n\n\t\t\t\t\t*pc2 = '.'; /* pbs.server().queue(<qname>. */\n\t\t\t\t\tpc2++;\n\n\t\t\t\t\t/* now let's see if there's anything quoted inside */\n\t\t\t\t\tpc3 = strchr(pc1, '\"');\n\t\t\t\t\tif (pc3 != NULL)\n\t\t\t\t\t\tpc4 = strchr(pc3 + 1, '\"');\n\t\t\t\t\telse\n\t\t\t\t\t\tpc4 = NULL;\n\n\t\t\t\t\tif (pc3 && pc4 && (pc4 > pc3)) {\n\t\t\t\t\t\tpc3++;\n\t\t\t\t\t\t*pc4 = '.';\n\t\t\t\t\t\tpc4++;\n\t\t\t\t\t\t/* we're saving 'name_str' in a separate array (name_str_buf), */\n\t\t\t\t\t\t/* as strcpy() does something odd under rhel6/centos if the */\n\t\t\t\t\t\t/* destination (pc4) and the source (name_str) are in the same */\n\t\t\t\t\t\t/* memory area, even though non-overlapping. 
*/\n\t\t\t\t\t\tstrncpy(name_str_buf, name_str, sizeof(name_str_buf) - 1);\n\t\t\t\t\t\tstrcpy(pc4, name_str_buf); /* <qname>.<attr name> */\n\t\t\t\t\t\tname_str = pc3;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tstrncpy(name_str_buf, name_str, sizeof(name_str_buf) - 1);\n\t\t\t\t\t\tstrcpy(pc2, name_str_buf); /* <qname>.<attr name> */\n\t\t\t\t\t\tname_str = pc1;\n\t\t\t\t\t}\n\t\t\t\t\tattr_name = strrchr(name_str, '.');\n\t\t\t\t\tif (attr_name == NULL)\n\t\t\t\t\t\tattr_name = name_str;\n\t\t\t\t\telse\n\t\t\t\t\t\tattr_name++;\n\n\t\t\t\t} else {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"object '%s' does not have a queue name!\", obj_name);\n\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\trc = add_to_svrattrl_list_sorted(server_queues_svrattrl,\n\t\t\t\t\t\t\t\t name_str, resc_str, val_str, 0, NULL);\n\t\t\t\tif ((p2 = strrchr(name_str, '.')) != NULL)\n\t\t\t\t\t*p2 = '\\0'; /* name_str=<qname> */\n\n\t\t\t\tif (!find_svrattrl_list_entry(server_queues_names_svrattrl,\n\t\t\t\t\t\t\t      name_str, NULL))\n\t\t\t\t\trc2 = add_to_svrattrl_list(server_queues_names_svrattrl, name_str, NULL, \"\", 0, NULL);\n\n\t\t\t\tif (p2 != NULL)\n\t\t\t\t\t*p2 = '.'; /* name_str=<qname>.<attr> */\n\t\t\t} else if (server_resvs_svrattrl &&\n\t\t\t\t   (strncmp(obj_name, SERVER_RESV_OBJECT,\n\t\t\t\t\t    resv_obj_len) == 0)) {\n\n\t\t\t\tobj_name2 = obj_name + resv_obj_len;\n\t\t\t\t/* pbs.server().resv(<resv_name>)\\0<attribute name>\\0<resource name>\\0<value>\n\t\t\t\t * where obj_name = pbs.server().resv(<resv_name>)\n\t\t\t\t * \t obj_name2 = (<resv_name>)\n\t\t\t\t *\t  name_str = <attribute name>\n\t\t\t\t */\n\n\t\t\t\t/* important here to look for the rightmost '(' (using strrchr)\n\t\t\t\t * and the rightmost ')' (using strrchr)\n\t\t\t\t * as we can have:\n\t\t\t\t *\t\tpbs.server().resv(\"R5\").<attr>=<val>\n\t\t\t\t * and \"R5\" is a valid resv id.\n\t\t\t\t */\n\t\t\t\tif (((pc1 = strrchr(obj_name2, '(')) != NULL) 
&&\n\t\t\t\t    ((pc2 = strrchr(obj_name2, ')')) != NULL) &&\n\t\t\t\t    (pc2 > pc1)) {\n\t\t\t\t\tpc1++; /* <resv_name> part */\n\n\t\t\t\t\t*pc2 = '.'; /* pbs.server().resv(<resv_name>. */\n\t\t\t\t\tpc2++;\n\n\t\t\t\t\t/* now let's see if there's anything quoted inside */\n\t\t\t\t\tpc3 = strchr(pc1, '\"');\n\t\t\t\t\tif (pc3 != NULL)\n\t\t\t\t\t\tpc4 = strchr(pc3 + 1, '\"');\n\t\t\t\t\telse\n\t\t\t\t\t\tpc4 = NULL;\n\n\t\t\t\t\tif (pc3 && pc4 && (pc4 > pc3)) {\n\t\t\t\t\t\tpc3++;\n\t\t\t\t\t\t*pc4 = '.';\n\t\t\t\t\t\tpc4++;\n\t\t\t\t\t\t/* we're saving 'name_str' in a separate array (name_str_buf), */\n\t\t\t\t\t\t/* as strcpy() does something odd under rhel6/centos if the */\n\t\t\t\t\t\t/* destination (pc4) and the source (name_str) are in the same */\n\t\t\t\t\t\t/* memory area, even though non-overlapping. */\n\t\t\t\t\t\tstrncpy(name_str_buf, name_str, sizeof(name_str_buf) - 1);\n\t\t\t\t\t\tstrcpy(pc4, name_str_buf); /* <resv_name>.<attr name> */\n\t\t\t\t\t\tname_str = pc3;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tstrncpy(name_str_buf, name_str, sizeof(name_str_buf) - 1);\n\t\t\t\t\t\tstrcpy(pc2, name_str_buf); /* <resv_name>.<attr name> */\n\t\t\t\t\t\tname_str = pc1;\n\t\t\t\t\t}\n\t\t\t\t\tattr_name = strrchr(name_str, '.');\n\t\t\t\t\tif (attr_name == NULL)\n\t\t\t\t\t\tattr_name = name_str;\n\t\t\t\t\telse\n\t\t\t\t\t\tattr_name++;\n\n\t\t\t\t} else {\n\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t\t \"object '%s' does not have a resv name!\", obj_name);\n\t\t\t\t\tlog_err(-1, __func__, log_buffer);\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\t\t\t\trc = add_to_svrattrl_list_sorted(server_resvs_svrattrl,\n\t\t\t\t\t\t\t\t name_str, resc_str, val_str, 0, NULL);\n\t\t\t\tif ((p2 = strrchr(name_str, '.')) != NULL)\n\t\t\t\t\t*p2 = '\\0'; /* name_str=<resv_name> */\n\n\t\t\t\tif (!find_svrattrl_list_entry(server_resvs_resvids_svrattrl, name_str, NULL))\n\t\t\t\t\trc2 = add_to_svrattrl_list(server_resvs_resvids_svrattrl, name_str, NULL, \"\", 0, 
NULL);\n\n\t\t\t\tif (p2 != NULL)\n\t\t\t\t\t*p2 = '.'; /* name_str=<resv_name>.<attr> */\n\t\t\t} else {\n\t\t\t\trc = add_to_svrattrl_list(default_svrattrl,\n\t\t\t\t\t\t\t  name_str, resc_str, val_str, 0, NULL);\n\t\t\t\trc2 = 0;\n\t\t\t}\n\n\t\t\tif (rc == -1) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"failed to add_to_svrattrl_list(%s,%s,%s)\",\n\t\t\t\t\t name_str, resc_str, (val_str ? val_str : \"\"));\n\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\tgoto populate_server_svrattrl_fail;\n\t\t\t}\n\n\t\t\tif (rc2 == -1) {\n\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer),\n\t\t\t\t\t \"failed to add %s to list of names\",\n\t\t\t\t\t name_str);\n\t\t\t\tlog_err(errno, __func__, log_buffer);\n\t\t\t\tgoto populate_server_svrattrl_fail;\n\t\t\t}\n\t\t}\n\t}\n\n\tif (fp != stdin)\n\t\tfclose(fp);\n\n\tif (in_data != NULL) {\n\t\tfree(in_data);\n\t}\n\thook_perf_stat_stop(perf_label, perf_action, 0);\n\treturn (0);\n\npopulate_server_svrattrl_fail:\n\n\tif (default_svrattrl) {\n\t\tfree_attrlist(default_svrattrl);\n\t\tCLEAR_HEAD((*default_svrattrl));\n\t}\n\tif (server_svrattrl) {\n\t\tfree_attrlist(server_svrattrl);\n\t\tCLEAR_HEAD((*server_svrattrl));\n\t}\n\tif (server_jobs_svrattrl) {\n\t\tfree_attrlist(server_jobs_svrattrl);\n\t\tCLEAR_HEAD((*server_jobs_svrattrl));\n\t}\n\tif (server_jobs_ids_svrattrl) {\n\t\tfree_attrlist(server_jobs_ids_svrattrl);\n\t\tCLEAR_HEAD((*server_jobs_ids_svrattrl));\n\t}\n\tif (server_queues_svrattrl) {\n\t\tfree_attrlist(server_queues_svrattrl);\n\t\tCLEAR_HEAD((*server_queues_svrattrl));\n\t}\n\tif (server_queues_names_svrattrl) {\n\t\tfree_attrlist(server_queues_names_svrattrl);\n\t\tCLEAR_HEAD((*server_queues_names_svrattrl));\n\t}\n\tif (server_resvs_svrattrl) {\n\t\tfree_attrlist(server_resvs_svrattrl);\n\t\tCLEAR_HEAD((*server_resvs_svrattrl));\n\t}\n\tif (server_resvs_resvids_svrattrl) 
{\n\t\tfree_attrlist(server_resvs_resvids_svrattrl);\n\t\tCLEAR_HEAD((*server_resvs_resvids_svrattrl));\n\t}\n\tif (server_vnodes_svrattrl) {\n\t\tfree_attrlist(server_vnodes_svrattrl);\n\t\tCLEAR_HEAD((*server_vnodes_svrattrl));\n\t}\n\tif (server_vnodes_names_svrattrl) {\n\t\tfree_attrlist(server_vnodes_names_svrattrl);\n\t\tCLEAR_HEAD((*server_vnodes_names_svrattrl));\n\t}\n\n\tif ((fp != NULL) && (fp != stdin))\n\t\tfclose(fp);\n\n\tif (in_data != NULL) {\n\t\tfree(in_data);\n\t}\n\n\thook_perf_stat_stop(perf_label, perf_action, 0);\n\treturn (rc);\n}\n\n/**\n *\n * @brief\n *\t\tPrints out to the file opened in stream 'fp' the contents of the\n *\t\tstring array 'str_array'.\n *\n * @param[in]\tfp\t-\tthe stream pointer of the file to write output into\n * @param[in]\thead_str\t-\tsome string to print out at the beginning.\n * @param[in]\tstr_array\t-\tthe array whose contents are being printed.\n *\n * @return\tnone\n */\nvoid\nfprint_str_array(FILE *fp, char *head_str, void **str_array)\n{\n\tint i;\n\n\tfor (i = 0; str_array[i]; i++)\n\t\tfprintf(fp, \"%s[%d]=%s\\n\", head_str, i, (char *) str_array[i]);\n}\n\n/**\n * @brief\n * \t\tGiven an 'argv_list', return a malloc-ed\n * \t\tstring, containing the argv_list->al_values separated\n *\t\tby spaces.\n * @note\n *\t\tNeed to free() returned value.\n *\n * @param[in]\targv_list\t-\tan argv list.\n *\n * @return\tchar *\n * @retval\t<string>\t-\tpointer to a malloced area holding\n *\t\t\t\t  \t\t\tthe values of 'argv_list'.\n * @retval\tNULL\t: error\n *\n */\nstatic char *\nargv_list_to_str(pbs_list_head *argv_list)\n{\n\tint i, len;\n\tchar *ret_string = NULL;\n\tsvrattrl *plist = NULL;\n\n\tif (argv_list == NULL)\n\t\treturn NULL;\n\n\tlen = 0;\n\ti = 0;\n\n\t/* calculate the list size */\n\tplist = (svrattrl *) GET_NEXT(*argv_list);\n\twhile (plist) {\n\t\tif (plist->al_value == NULL) {\n\t\t\treturn NULL;\n\t\t}\n\t\tlen += strlen(plist->al_value);\n\t\tlen++; /* for ' ' (space) */\n\t\ti++;\n\t\tplist = 
(svrattrl *) GET_NEXT(plist->al_link);\n\t}\n\n\tlen++; /* for trailing '\\0' */\n\n\tif (len > 1) { /* not an empty list */\n\t\tret_string = (char *) malloc(len);\n\n\t\tif (ret_string == NULL)\n\t\t\treturn NULL;\n\t\ti = 0;\n\t\tplist = (svrattrl *) GET_NEXT(*argv_list);\n\t\twhile (plist) {\n\t\t\tif (i == 0) {\n\t\t\t\tstrcpy(ret_string, plist->al_value);\n\t\t\t} else {\n\t\t\t\tstrcat(ret_string, \" \");\n\t\t\t\tstrcat(ret_string, plist->al_value);\n\t\t\t}\n\t\t\ti++;\n\t\t\tplist = (svrattrl *) GET_NEXT(plist->al_link);\n\t\t}\n\t}\n\treturn (ret_string);\n}\n\n/**\n *\n * @brief\n * \t\tpbs_python is a wrapper for the Python interpreter shipped with\n *      PBS. It constructs a Python search path for modules\n *      (i.e. sys.path/PYTHONPATH) that points to directories in\n *      $PBS_EXEC/python, and then calls the Python interpreter, taking\n * \t  \tinput arguments from the command line if they exist; otherwise,\n *\t  \tthe name of the script file to execute is taken from STDIN.\n */\n\nint\nmain(int argc, char *argv[], char *envp[])\n{\n#ifndef WIN32\n\tchar dirname[MAXPATHLEN + 1];\n\tint env_len = 0;\n#else\n\tchar python_cmdline[MAXBUF + 1];\n#endif\n\tchar **lenvp = NULL;\n\tint i, rc;\n\n\t/* python externs */\n\textern void pbs_python_svr_initialize_interpreter_data(struct python_interpreter_data * interp_data);\n\textern void pbs_python_svr_destroy_interpreter_data(struct python_interpreter_data * interp_data);\n\n\tif (set_msgdaemonname(PBS_PYTHON_PROGRAM)) {\n\t\tfprintf(stderr, \"Out of memory\\n\");\n\t\treturn 1;\n\t}\n\n#ifdef WIN32\n\t/* The following is needed so that buffered writes (e.g. 
fprintf) */\n\t/* won't end up getting ^M */\n\t_set_fmode(_O_BINARY);\n#endif\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\t/*the real deal or output pbs_version and exit?*/\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\tif (pbs_loadconf(0) == 0) {\n\t\tfprintf(stderr, \"Failed to load pbs.conf!\\n\");\n\t\treturn 1;\n\t}\n\n\tset_log_conf(pbs_conf.pbs_leaf_name, pbs_conf.pbs_mom_node_name,\n\t\t     pbs_conf.locallog, pbs_conf.syslogfac,\n\t\t     pbs_conf.syslogsvr, pbs_conf.pbs_log_highres_timestamp);\n\n\t/* by default, server_name is what is set in /etc/pbs.conf */\n\t(void) strcpy(server_name, pbs_conf.pbs_server_name);\n\n\t/* determine the actual server name */\n\tpbs_server_name = pbs_default();\n\tif ((!pbs_server_name) || (*pbs_server_name == '\\0')) {\n\t\tlog_err(-1, PBS_PYTHON_PROGRAM, \"Unable to get server name\");\n\t\treturn (-1);\n\t}\n\n\t/* determine the server host name */\n\tif (get_fullhostname(pbs_server_name, server_host, PBS_MAXSERVERNAME) != 0) {\n\t\tlog_err(-1, PBS_PYTHON_PROGRAM, \"Unable to get server host name\");\n\t\treturn (-1);\n\t}\n\n\tif ((job_attr_idx = cr_attrdef_idx(job_attr_def, JOB_ATR_LAST)) == NULL) {\n\t\tlog_err(errno, PBS_PYTHON_PROGRAM, \"Failed creating job attribute search index\");\n\t\treturn (-1);\n\t}\n\tif ((node_attr_idx = cr_attrdef_idx(node_attr_def, ND_ATR_LAST)) == NULL) {\n\t\tlog_err(errno, PBS_PYTHON_PROGRAM, \"Failed creating node attribute search index\");\n\t\treturn (-1);\n\t}\n\tif ((que_attr_idx = cr_attrdef_idx(que_attr_def, QA_ATR_LAST)) == NULL) {\n\t\tlog_err(errno, PBS_PYTHON_PROGRAM, \"Failed creating queue attribute search index\");\n\t\treturn (-1);\n\t}\n\tif ((svr_attr_idx = cr_attrdef_idx(svr_attr_def, SVR_ATR_LAST)) == NULL) {\n\t\tlog_err(errno, PBS_PYTHON_PROGRAM, \"Failed creating server attribute search index\");\n\t\treturn (-1);\n\t}\n\tif ((sched_attr_idx = cr_attrdef_idx(sched_attr_def, SCHED_ATR_LAST)) == NULL) {\n\t\tlog_err(errno, PBS_PYTHON_PROGRAM, \"Failed creating sched 
attribute search index\");\n\t\treturn (-1);\n\t}\n\tif ((resv_attr_idx = cr_attrdef_idx(resv_attr_def, RESV_ATR_LAST)) == NULL) {\n\t\tlog_err(errno, PBS_PYTHON_PROGRAM, \"Failed creating resv attribute search index\");\n\t\treturn (-1);\n\t}\n\tif (cr_rescdef_idx(svr_resc_def, svr_resc_size) != 0) {\n\t\tlog_err(errno, PBS_PYTHON_PROGRAM, \"Failed creating resc definition search index\");\n\t\treturn (-1);\n\t}\n\n\t/* initialize the pointers in the resource_def array */\n\n\tfor (i = 0; i < (svr_resc_size - 1); ++i)\n\t\tsvr_resc_def[i].rs_next = &svr_resc_def[i + 1];\n\t/* last entry is left with null pointer */\n\n\tif ((argv[1] == NULL) || (strcmp(argv[1], HOOK_MODE) != 0)) {\n\t\tchar *python_path = NULL;\n\t\tif (get_py_progname(&python_path)) {\n\t\t\tlog_err(-1, PBS_PYTHON_PROGRAM, \"Failed to find python binary path!\");\n\t\t\treturn -1;\n\t\t}\n#ifdef WIN32\n\t\t/* unset PYTHONHOME if any */\n\t\tSetEnvironmentVariable(PYHOME, NULL);\n\n\t\t/* Just pass on the command line arguments onto Python */\n\n\t\tsnprintf(python_cmdline, sizeof(python_cmdline), \"%s\", python_path);\n\t\tfor (i = 1; i < argc; i++) {\n\t\t\tstrncat(python_cmdline, \" \\\"\", sizeof(python_cmdline) - strlen(python_cmdline) - 1);\n\t\t\tstrncat(python_cmdline, argv[i], sizeof(python_cmdline) - strlen(python_cmdline) - 1);\n\t\t\tstrncat(python_cmdline, \"\\\"\", sizeof(python_cmdline) - strlen(python_cmdline) - 1);\n\t\t}\n\t\trc = wsystem(python_cmdline, INVALID_HANDLE_VALUE, NULL);\n#else\n\t\tchar in_data[MAXBUF + 1];\n\t\tchar *largv[3];\n\t\tint ll;\n\t\tchar *pc, *pc2;\n\n\t\t/* Linux/Unix: Create a local environment block (i.e. 
lenvp)    */\n\t\t/* containing PYTHONHOME setting, and give to execve() when it\t*/\n\t\t/* executes the python script.\t\t\t\t\t*/\n\t\tlenvp = (char **) envp;\n\t\tdo\n\t\t\tenv_len += 1;\n\t\twhile (*lenvp++);\n\n\t\tlenvp = (char **) malloc((env_len + 1) * sizeof(char *));\n\t\tif (lenvp == NULL) {\n\t\t\terrno = ENOMEM;\n\t\t\treturn 1;\n\t\t}\n\n\t\t/* Copy envp to lenvp */\n\t\tfor (i = 0; envp[i] != NULL; i++) {\n\t\t\t/* Ignore PYTHONHOME as it will be set by python itself */\n\t\t\tif (strncmp(envp[i], PYHOME_EQUAL, sizeof(PYHOME_EQUAL) - 1) != 0) {\n\t\t\t\tlenvp[i] = envp[i];\n\t\t\t}\n\t\t}\n\t\tlenvp[i] = NULL;\n\n\t\tif (argc == 1) {\n\t\t\t/* If no command line options, just check stdin for input */\n\t\t\t/* name, which is what mom does. Also, under              */\n\t\t\t/* sandbox=private, mom passes                            */\n\t\t\t/* \"cd <homedir>;<input script>\" so we'll need to extract */\n\t\t\t/* the script name this way.                              */\n\n\t\t\tif (fgets(in_data, sizeof(in_data), stdin) == NULL) {\n\t\t\t\tfprintf(stderr, \"No python script file found!\\n\");\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t\tll = strlen(in_data);\n\n\t\t\tif (in_data[ll - 1] == '\\n')\n\t\t\t\t/* remove newline */\n\t\t\t\tin_data[ll - 1] = '\\0';\n\n\t\t\tpc = strchr(in_data, ';');\n\t\t\tif (pc) {\n\t\t\t\tpc++;\n\t\t\t\twhile (isspace(*pc))\n\t\t\t\t\tpc++;\n\t\t\t\tlargv[1] = pc;\n\n\t\t\t\t/* looking for the \"cd <homedir>\" part */\n\t\t\t\tif ((pc = strstr(in_data, \"cd\"))) { /* found a chdir */\n\t\t\t\t\tpc2 = in_data + 2;\n\t\t\t\t\twhile (isspace(*pc2))\n\t\t\t\t\t\tpc2++;\n\t\t\t\t\tpbs_strncpy(dirname, pc2, MAXPATHLEN);\n\t\t\t\t\tif ((pc = strrchr(dirname, ';')))\n\t\t\t\t\t\t*pc = '\\0';\n\t\t\t\t\tif (chdir(dirname) == -1) {\n\t\t\t\t\t\tfprintf(stderr,\n\t\t\t\t\t\t\t\"Failed to chdir to %s (errno %d)\\n\",\n\t\t\t\t\t\t\tdirname, errno);\n\t\t\t\t\t\treturn 1;\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t} else {\n\t\t\t\tpc = 
in_data;\n\t\t\t\twhile (isspace(*pc))\n\t\t\t\t\tpc++;\n\t\t\t\tlargv[1] = pc;\n\t\t\t}\n\n\t\t\tif (largv[1][0] == '\\0') {\n\t\t\t\tfprintf(stderr, \"Failed to obtain python script\\n\");\n\t\t\t\treturn 1;\n\t\t\t}\n\n\t\t\tlargv[0] = python_path;\n\t\t\tlargv[2] = NULL;\n\n\t\t\trc = execve(python_path, largv, lenvp);\n\t\t} else {\n\t\t\targv[0] = python_path;\n\t\t\trc = execve(python_path, argv, lenvp);\n\t\t}\n#endif\n\t\tfree(python_path);\n\t} else { /* hook mode */\n\n\t\tchar **argv2 = NULL;\n\t\tint argc2;\n\t\tint argv_len = 0;\n\t\tchar hook_script[MAXPATHLEN + 1] = {'\\0'};\n\t\tchar the_input[MAXPATHLEN + 1] = {'\\0'};\n\t\tchar the_output[MAXPATHLEN + 1] = {'\\0'};\n\t\tchar the_server_output[MAXPATHLEN + 1] = {'\\0'};\n\t\tchar the_data[MAXPATHLEN + 1] = {'\\0'};\n\t\tchar path_log[MAXPATHLEN + 1] = {'\\0'};\n\t\tchar logname[MAXPATHLEN + 1] = {'\\0'};\n\n\t\tchar hook_name[MAXBUF + 1] = {'\\0'};\n\t\tchar req_user[PBS_MAXUSER + 1] = {'\\0'};\n\t\tchar req_host[PBS_MAXHOSTNAME + 1] = {'\\0'};\n\t\tchar hookstr_type[MAXBUF + 1] = {'\\0'};\n\t\tchar hookstr_event[MAXBUF + 1] = {'\\0'};\n\t\tint hook_alarm = 0;\n\t\tint c, j;\n\t\tint errflg = 0;\n\t\tunsigned int hook_event = 0;\n\t\tstruct python_script *py_script = NULL;\n\t\tpbs_list_head default_list, event, event_job, event_job_o,\n\t\t\tevent_resv, event_vnode, event_src_queue, event_vnode_fail,\n\t\t\tevent_aoe, event_argv, event_jobs,\n\t\t\tserver, server_jobs, server_jobs_ids,\n\t\t\tserver_queues, server_queues_names,\n\t\t\tserver_resvs, server_resvs_resvids,\n\t\t\tserver_vnodes, server_vnodes_names,\n\t\t\tjob_failed_mom_list, job_succeeded_mom_list;\n\t\tsvrattrl *svrattrl_e;\n\t\tFILE *fp_out = NULL;\n\t\tFILE *fp_server_out = NULL;\n\t\tsvrattrl *plist = NULL;\n\t\tstruct rq_queuejob rqj;\n\t\tstruct rq_manage rqm;\n\t\tstruct rq_move rqmv;\n\t\tstruct rq_runjob rqrun;\n\t\tchar *rej_msg = NULL;\n\t\tchar *rerunjob_str = NULL;\n\t\tchar *deletejob_str = NULL;\n\t\tchar 
*new_exec_time_str = NULL;\n\t\tchar *new_hold_types_str = NULL;\n\t\tchar *new_project_str = NULL;\n\t\thook_input_param_t req_params;\n\t\thook_output_param_t req_params_out;\n\t\tchar *progname = NULL;\n\t\tchar *progname_orig = NULL;\n\t\tchar *env_str = NULL;\n\t\tchar *env_str_orig = NULL;\n\t\tchar *argv_str_orig = NULL;\n\t\tchar *argv_str = NULL;\n\t\tint print_progname = 0;\n\t\tint print_argv = 0;\n\t\tint print_env = 0;\n\t\tchar *tmp_str = NULL;\n\t\tchar perf_label[MAXBUF];\n\t\tchar perf_action[MAXBUFLEN + 13]; /* Additional 13 bytes for the description string */\n\t\tchar *sp;\n\n\t\tthe_input[0] = '\\0';\n\t\tthe_output[0] = '\\0';\n\t\tthe_server_output[0] = '\\0';\n\t\tthe_data[0] = '\\0';\n\t\thook_name[0] = '\\0';\n\t\treq_user[0] = '\\0';\n\t\treq_host[0] = '\\0';\n\t\thookstr_type[0] = '\\0';\n\t\thookstr_event[0] = '\\0';\n\t\thook_script[0] = '\\0';\n\t\tlogname[0] = '\\0';\n\t\tstrcpy(path_log, \".\");\n\n\t\tif (*(argv + 2) == NULL) {\n\t\t\tfprintf(stderr, \"%s --hook -i <input_file> [-s <data_file>] [-o <output_file>] [-L <path_log>] [-l <logname>] [-r <resourcedef>] [-e <log_event_mask>] [<python_script>]\\n\", argv[0]);\n\t\t\texit(2);\n\t\t}\n\t\targv2 = (char **) argv;\n\t\tdo\n\t\t\targv_len += 1;\n\t\twhile (*argv2++);\n\n\t\targv2 = (char **) malloc((argv_len + 1) * sizeof(char *));\n\t\tif (argv2 == NULL) {\n\t\t\treturn 1;\n\t\t}\n\n\t\targc2 = 0;\n\t\tfor (i = 0, j = 0; argv[i] != NULL; i++) {\n\t\t\tif (strncmp(argv[i], HOOK_MODE,\n\t\t\t\t    sizeof(HOOK_MODE) - 1) == 0)\n\t\t\t\tcontinue;\n\t\t\targv2[j++] = argv[i];\n\t\t\targc2++;\n\t\t}\n\t\targv2[j] = NULL;\n\n\t\tpbs_python_set_use_static_data_value(0);\n\t\twhile ((c = getopt(argc2, argv2, \"i:o:l:L:e:r:s:\")) != EOF) {\n\n\t\t\tswitch (c) {\n\t\t\t\tcase 'i':\n\t\t\t\t\twhile (isspace((int) *optarg))\n\t\t\t\t\t\toptarg++;\n\n\t\t\t\t\tif (optarg[0] == '\\0') {\n\t\t\t\t\t\tfprintf(stderr, \"pbs_python: illegal -i value\\n\");\n\t\t\t\t\t\terrflg++;\n\t\t\t\t\t} else 
{\n\t\t\t\t\t\tsnprintf(the_input, sizeof(the_input), \"%s\", optarg);\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\t\t\t\tcase 'o':\n\t\t\t\t\twhile (isspace((int) *optarg))\n\t\t\t\t\t\toptarg++;\n\n\t\t\t\t\tif (optarg[0] == '\\0') {\n\t\t\t\t\t\tfprintf(stderr, \"pbs_python: illegal -o value\\n\");\n\t\t\t\t\t\terrflg++;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tsnprintf(the_output, sizeof(the_output), \"%s\", optarg);\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\t\t\t\tcase 's':\n\t\t\t\t\twhile (isspace((int) *optarg))\n\t\t\t\t\t\toptarg++;\n\n\t\t\t\t\tif (optarg[0] == '\\0') {\n\t\t\t\t\t\tfprintf(stderr, \"pbs_python: illegal -s value\\n\");\n\t\t\t\t\t\terrflg++;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tsnprintf(the_data, sizeof(the_data), \"%s\", optarg);\n\t\t\t\t\t\tpbs_python_set_use_static_data_value(1);\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\t\t\t\tcase 'L':\n\t\t\t\t\twhile (isspace((int) *optarg))\n\t\t\t\t\t\toptarg++;\n\n\t\t\t\t\tif (optarg[0] == '\\0') {\n\t\t\t\t\t\tfprintf(stderr, \"pbs_python: illegal -L value\\n\");\n\t\t\t\t\t\terrflg++;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tsnprintf(path_log, sizeof(path_log), \"%s\", optarg);\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\t\t\t\tcase 'l':\n\t\t\t\t\twhile (isspace((int) *optarg))\n\t\t\t\t\t\toptarg++;\n\n\t\t\t\t\tif (optarg[0] == '\\0') {\n\t\t\t\t\t\tfprintf(stderr, \"pbs_python: illegal -l value\\n\");\n\t\t\t\t\t\terrflg++;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tsnprintf(logname, sizeof(logname), \"%s\", optarg);\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\t\t\t\tcase 'e':\n\t\t\t\t\twhile (isspace((int) *optarg))\n\t\t\t\t\t\toptarg++;\n\n\t\t\t\t\tif (optarg[0] == '\\0') {\n\t\t\t\t\t\tfprintf(stderr, \"pbs_python: illegal -e value\\n\");\n\t\t\t\t\t\terrflg++;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tchar *bad;\n\n\t\t\t\t\t\t*log_event_mask = strtol(optarg, &bad, 0);\n\t\t\t\t\t\tif ((*bad != '\\0') && !isspace((int) *bad)) {\n\t\t\t\t\t\t\tfprintf(stderr,\n\t\t\t\t\t\t\t\t\"pbs_python: bad -e value 
%s\\n\",\n\t\t\t\t\t\t\t\toptarg);\n\t\t\t\t\t\t\terrflg++;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\t\t\t\tcase 'r':\n\t\t\t\t\twhile (isspace((int) *optarg))\n\t\t\t\t\t\toptarg++;\n\n\t\t\t\t\tif (optarg[0] == '\\0') {\n\t\t\t\t\t\tfprintf(stderr, \"pbs_python: illegal -r value\\n\");\n\t\t\t\t\t\terrflg++;\n\t\t\t\t\t} else {\n\t\t\t\t\t\tpath_rescdef = strdup(optarg);\n\t\t\t\t\t\tif (path_rescdef == NULL) {\n\t\t\t\t\t\t\tfprintf(stderr,\n\t\t\t\t\t\t\t\t\"pbs_python: errno %d mallocing path_rescdef\\n\", errno);\n\t\t\t\t\t\t\terrflg++;\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\terrflg++;\n\t\t\t}\n\t\t\tif (errflg) {\n\t\t\t\tfprintf(stderr, \"%s --hook -i <hook_input> [-s <data_file>] [-o <hook_output>] [-L <path_log>] [-l <logname>] [-r <resourcedef>] [-e <log_event_mask>] [<python_script>]\\n\", argv[0]);\n\t\t\t\texit(2);\n\t\t\t}\n\t\t}\n\n\t\tif (the_input[0] == '\\0') {\n\t\t\tfprintf(stderr, \"%s: No -i <input_file> given\\n\",\n\t\t\t\targv[0]);\n\t\t\texit(2);\n\t\t}\n\n\t\tif (path_rescdef != NULL) {\n\t\t\tif (setup_resc(1) == -1) {\n\t\t\t\tfprintf(stderr, \"setup_resc() of %s failed!\",\n\t\t\t\t\tpath_rescdef);\n\t\t\t\texit(2);\n\t\t\t}\n\t\t}\n\n\t\tif ((optind < argc2) && (argv2[optind] != NULL)) {\n\t\t\tstrncpy(hook_script, argv2[optind],\n\t\t\t\tsizeof(hook_script) - 1);\n\t\t}\n\t\tif (log_open_main(logname, path_log, 1) != 0) { /* use given name */\n\t\t\tfprintf(stderr, \"pbs_python: Unable to open logfile\\n\");\n\t\t\texit(1);\n\t\t}\n\n\t\tsp = NULL;\n\t\tif (the_input[0] != '\\0') {\n\t\t\tsp = strrchr(the_input, '/');\n\t\t\tif (sp != NULL)\n\t\t\t\tsp++;\n\t\t\telse\n\t\t\t\tsp = the_input;\n\t\t}\n\t\tsnprintf(perf_label, sizeof(perf_label), \"%s\", sp ? 
sp : \"stdin\");\n\t\thook_perf_stat_start(perf_label, PBS_PYTHON_PROGRAM, 1);\n\n\t\tCLEAR_HEAD(default_list);\n\t\tCLEAR_HEAD(event);\n\t\tCLEAR_HEAD(event_job);\n\t\tCLEAR_HEAD(event_job_o);\n\t\tCLEAR_HEAD(event_resv);\n\t\tCLEAR_HEAD(event_vnode);\n\t\tCLEAR_HEAD(event_vnode_fail);\n\t\tCLEAR_HEAD(job_failed_mom_list);\n\t\tCLEAR_HEAD(job_succeeded_mom_list);\n\t\tCLEAR_HEAD(event_src_queue);\n\t\tCLEAR_HEAD(event_aoe);\n\t\tCLEAR_HEAD(event_argv);\n\t\tCLEAR_HEAD(event_jobs);\n\n\t\trc = pbs_python_populate_svrattrl_from_file(the_input,\n\t\t\t\t\t\t\t    &default_list,\n\t\t\t\t\t\t\t    &event, &event_job, &event_job_o, &event_resv,\n\t\t\t\t\t\t\t    &event_vnode, &event_vnode_fail, &job_failed_mom_list,\n\t\t\t\t\t\t\t    &job_succeeded_mom_list, &event_src_queue,\n\t\t\t\t\t\t\t    &event_aoe, &event_argv, &event_jobs,\n\t\t\t\t\t\t\t    perf_label, HOOK_PERF_LOAD_INPUT);\n\t\tif (rc == -1) {\n\t\t\tfprintf(stderr, \"%s: failed to populate svrattrl \\n\", argv[0]);\n\t\t\texit(2);\n\t\t}\n\t\tif (the_data[0] != '\\0') {\n\t\t\tCLEAR_HEAD(server);\n\t\t\tCLEAR_HEAD(server_jobs);\n\t\t\tCLEAR_HEAD(server_jobs_ids);\n\t\t\tCLEAR_HEAD(server_queues);\n\t\t\tCLEAR_HEAD(server_queues_names);\n\t\t\tCLEAR_HEAD(server_resvs);\n\t\t\tCLEAR_HEAD(server_resvs_resvids);\n\t\t\tCLEAR_HEAD(server_vnodes);\n\t\t\tCLEAR_HEAD(server_vnodes_names);\n\t\t\tpbs_python_unset_server_info();\n\t\t\tpbs_python_unset_server_jobs_info();\n\t\t\tpbs_python_unset_server_queues_info();\n\t\t\tpbs_python_unset_server_resvs_info();\n\t\t\tpbs_python_unset_server_vnodes_info();\n\n\t\t\trc = pbs_python_populate_server_svrattrl_from_file(the_data,\n\t\t\t\t\t\t\t\t\t   &default_list, &server,\n\t\t\t\t\t\t\t\t\t   &server_jobs, &server_jobs_ids,\n\t\t\t\t\t\t\t\t\t   &server_queues, &server_queues_names,\n\t\t\t\t\t\t\t\t\t   &server_resvs, &server_resvs_resvids,\n\t\t\t\t\t\t\t\t\t   &server_vnodes, &server_vnodes_names,\n\t\t\t\t\t\t\t\t\t   the_data, HOOK_PERF_LOAD_DATA);\n\t\t\tif 
(rc == -1) {\n\t\t\t\tfprintf(stderr,\n\t\t\t\t\t\"%s: failed to populate svrattrl \\n\",\n\t\t\t\t\targv[0]);\n\t\t\t\texit(2);\n\t\t\t}\n\t\t\tpbs_python_set_server_info(&server);\n\t\t\tpbs_python_set_server_jobs_info(&server_jobs,\n\t\t\t\t\t\t\t&server_jobs_ids);\n\t\t\tpbs_python_set_server_queues_info(&server_queues,\n\t\t\t\t\t\t\t  &server_queues_names);\n\t\t\tpbs_python_set_server_resvs_info(&server_resvs,\n\t\t\t\t\t\t\t &server_resvs_resvids);\n\t\t\tpbs_python_set_server_vnodes_info(&server_vnodes,\n\t\t\t\t\t\t\t  &server_vnodes_names);\n\t\t}\n\n\t\tplist = (svrattrl *) GET_NEXT(event);\n\t\twhile (plist) {\n\n\t\t\tif (strcmp(plist->al_name, \"type\") == 0) {\n\t\t\t\thook_event =\n\t\t\t\t\thookstr_event_toint(plist->al_value);\n\t\t\t\tsprintf(hookstr_event, \"%u\", hook_event);\n\t\t\t} else if (strcmp(plist->al_name, \"hook_name\") == 0) {\n\t\t\t\tstrcpy(hook_name, plist->al_value);\n\t\t\t} else if (strcmp(plist->al_name, \"requestor\") == 0) {\n\t\t\t\tstrcpy(req_user, plist->al_value);\n\t\t\t} else if (strcmp(plist->al_name,\n\t\t\t\t\t  \"requestor_host\") == 0) {\n\t\t\t\tstrcpy(req_host, plist->al_value);\n\t\t\t} else if (strcmp(plist->al_name, \"hook_type\") == 0) {\n\t\t\t\tstrcpy(hookstr_type, plist->al_value);\n\t\t\t} else if (strcmp(plist->al_name, \"alarm\") == 0) {\n\t\t\t\thook_alarm = atoi(plist->al_value);\n\t\t\t} else if (strcmp(plist->al_name, \"debug\") == 0) {\n\t\t\t\tstrncpy(the_server_output, plist->al_value,\n\t\t\t\t\tsizeof(the_server_output) - 1);\n\t\t\t\tfp_server_out = fopen(the_server_output,\n\t\t\t\t\t\t      \"w\");\n\t\t\t\tif (fp_server_out == NULL) {\n\t\t\t\t\tlog_eventf(PBSEVENT_DEBUG,\n\t\t\t\t\t\t   PBS_EVENTCLASS_HOOK, LOG_WARNING,\n\t\t\t\t\t\t   __func__,\n\t\t\t\t\t\t   \"warning: error opening debug data file %s\",\n\t\t\t\t\t\t   the_server_output);\n\t\t\t\t\tpbs_python_set_hook_debug_data_fp(NULL);\n\t\t\t\t\tpbs_python_set_hook_debug_data_file(\"\");\n\t\t\t\t} else 
{\n\t\t\t\t\tpbs_python_set_hook_debug_data_fp(fp_server_out);\n\t\t\t\t\tpbs_python_set_hook_debug_data_file(the_server_output);\n\t\t\t\t}\n\t\t\t} else if ((strcmp(plist->al_name, HOOKATT_USER) != 0) &&\n\t\t\t\t   (strcmp(plist->al_name, HOOKATT_FREQ) != 0) &&\n\t\t\t\t   (strcmp(plist->al_name, PY_EVENT_PARAM_PROGNAME) != 0) &&\n\t\t\t\t   (strcmp(plist->al_name, PY_EVENT_PARAM_ARGLIST) != 0) &&\n\t\t\t\t   (strcmp(plist->al_name, PY_EVENT_PARAM_ENV) != 0) &&\n\t\t\t\t   (strcmp(plist->al_name, PY_EVENT_PARAM_PID) != 0) &&\n\t\t\t\t   (strcmp(plist->al_name, HOOKATT_FAIL_ACTION) != 0)) {\n\t\t\t\tfprintf(stderr, \"%s: unknown event attribute '%s'\\n\", argv[0], plist->al_name);\n\t\t\t\texit(2);\n\t\t\t}\n\n\t\t\tplist = (svrattrl *) GET_NEXT(plist->al_link);\n\t\t}\n\n\t\tif (req_host[0] == '\\0')\n\t\t\tgethostname(req_host, PBS_MAXHOSTNAME);\n\n\t\tfix_path(logname, 3);\n\n\t\tif ((logname[0] != '\\0') && (!is_full_path(logname))) {\n\t\t\tchar curdir[MAXPATHLEN + 1];\n\t\t\tchar full_logname[MAXPATHLEN + 1];\n\t\t\tchar *slash;\n#ifdef WIN32\n\t\t\tslash = \"\\\\\";\n#else\n\t\t\tslash = \"/\";\n#endif\n\t\t\t/* save current working dir before any chdirs */\n\t\t\tif (getcwd(curdir, MAXPATHLEN) == NULL) {\n\t\t\t\tfprintf(stderr, \"getcwd failed\\n\");\n\t\t\t\texit(2);\n\t\t\t}\n\t\t\tif ((strlen(curdir) + strlen(logname) + 1) >= sizeof(full_logname)) {\n\t\t\t\tfprintf(stderr, \"log file path too long\\n\");\n\t\t\t\texit(2);\n\t\t\t}\n\t\t\t/*\n\t\t\t * The following silliness is brought to you by gcc version 8.\n\t\t\t * Having checked to ensure full_logname is large enough we\n\t\t\t * should be able to snprintf() the entire string in one call.\n\t\t\t * However, the bounds checking in gcc version 8 is overzealous\n\t\t\t * and generates a format-overflow warning forcing us to use\n\t\t\t * strcat() instead.\n\t\t\t */\n\t\t\tsnprintf(full_logname, sizeof(full_logname), \"%s\", curdir);\n\t\t\tstrncat(full_logname, slash, sizeof(full_logname) - 
strlen(full_logname) - 1);\n\t\t\tstrncat(full_logname, logname, sizeof(full_logname) - strlen(full_logname) - 1);\n\t\t\tsnprintf(logname, sizeof(logname), \"%s\", full_logname);\n\t\t}\n\n\t\t/* set python interp data */\n\t\tsvr_interp_data.data_initialized = 0;\n\t\tsvr_interp_data.init_interpreter_data = pbs_python_svr_initialize_interpreter_data;\n\t\tsvr_interp_data.destroy_interpreter_data = pbs_python_svr_destroy_interpreter_data;\n\n\t\tsvr_interp_data.daemon_name = strdup(PBS_PYTHON_PROGRAM);\n\n\t\tif (svr_interp_data.daemon_name == NULL) { /* should not happen */\n\t\t\tfprintf(stderr, \"strdup failed\\n\");\n\t\t\texit(1);\n\t\t}\n\n\t\t(void) pbs_python_ext_alloc_python_script(hook_script,\n\t\t\t\t\t\t\t  (struct python_script **) &py_script);\n\n\t\thook_perf_stat_start(perf_label, HOOK_PERF_START_PYTHON, 0);\n\t\tif (pbs_python_ext_start_interpreter(&svr_interp_data) != 0) {\n\t\t\tfprintf(stderr, \"Failed to start Python interpreter\\n\");\n\t\t\texit(1);\n\t\t}\n\t\thook_perf_stat_stop(perf_label, HOOK_PERF_START_PYTHON, 0);\n\t\thook_input_param_init(&req_params);\n\t\tswitch (hook_event) {\n\n\t\t\tcase HOOK_EVENT_QUEUEJOB:\n\t\t\t\trqj.rq_jid[0] = '\\0';\n\t\t\t\tif ((svrattrl_e = find_svrattrl_list_entry(&event_job,\n\t\t\t\t\t\t\t\t\t   \"id\", NULL)) != NULL) {\n\t\t\t\t\tstrcpy((char *) rqj.rq_jid,\n\t\t\t\t\t       svrattrl_e->al_value);\n\t\t\t\t}\n\t\t\t\trqj.rq_destin[0] = '\\0';\n\t\t\t\tif ((svrattrl_e = find_svrattrl_list_entry(&event_job,\n\t\t\t\t\t\t\t\t\t   ATTR_queue, NULL)) != NULL) {\n\t\t\t\t\tstrcpy((char *) rqj.rq_destin,\n\t\t\t\t\t       svrattrl_e->al_value);\n\t\t\t\t}\n\t\t\t\tif (copy_svrattrl_list(&event_job,\n\t\t\t\t\t\t       &rqj.rq_attr) == -1) {\n\t\t\t\t\tlog_err(errno, PBS_PYTHON_PROGRAM, \"failed to copy event_job\");\n\t\t\t\t\trc = 1;\n\t\t\t\t\tgoto pbs_python_end;\n\t\t\t\t}\n\n\t\t\t\treq_params.rq_job = (struct rq_quejob *) &rqj;\n\t\t\t\treq_params.vns_list = (pbs_list_head *) &event_vnode;\n\t\t\t\trc = 
pbs_python_event_set(hook_event, req_user, req_host, &req_params, perf_label);\n\n\t\t\t\tif (rc == -1) { /* internal server code failure */\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG,\n\t\t\t\t\t\t  PBS_EVENTCLASS_HOOK, LOG_ERR,\n\t\t\t\t\t\t  hook_name,\n\t\t\t\t\t\t  \"Encountered an error while setting event\");\n\t\t\t\t}\n\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_POSTQUEUEJOB:\n\t\t\t\trqj.rq_jid[0] = '\\0';\n\t\t\t\tif ((svrattrl_e = find_svrattrl_list_entry(&event_job,\n\t\t\t\t\t\t\t\t\t   \"id\", NULL)) != NULL) {\n\t\t\t\t\tstrcpy((char *) rqj.rq_jid,\n\t\t\t\t\t       svrattrl_e->al_value);\n\t\t\t\t}\n\t\t\t\trqj.rq_destin[0] = '\\0';\n\t\t\t\tif ((svrattrl_e = find_svrattrl_list_entry(&event_job,\n\t\t\t\t\t\t\t\t\t   ATTR_queue, NULL)) != NULL) {\n\t\t\t\t\tstrcpy((char *) rqj.rq_destin,\n\t\t\t\t\t       svrattrl_e->al_value);\n\t\t\t\t}\n\t\t\t\tif (copy_svrattrl_list(&event_job,\n\t\t\t\t\t\t       &rqj.rq_attr) == -1) {\n\t\t\t\t\tlog_err(errno, PBS_PYTHON_PROGRAM, \"failed to copy event_job\");\n\t\t\t\t\trc = 1;\n\t\t\t\t\tgoto pbs_python_end;\n\t\t\t\t}\n\n\t\t\t\treq_params.rq_job = (struct rq_postqueuejob *) &rqj;\n\t\t\t\treq_params.vns_list = (pbs_list_head *) &event_vnode;\n\t\t\t\trc = pbs_python_event_set(hook_event, req_user, req_host, &req_params, perf_label);\n\n\t\t\t\tif (rc == -1) { /* internal server code failure */\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG,\n\t\t\t\t\t\t  PBS_EVENTCLASS_HOOK, LOG_ERR,\n\t\t\t\t\t\t  hook_name,\n\t\t\t\t\t\t  \"Encountered an error while setting event\");\n\t\t\t\t}\n\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_MODIFYJOB:\n\t\t\t\trqm.rq_objname[0] = '\\0';\n\t\t\t\tif ((svrattrl_e = find_svrattrl_list_entry(&event_job,\n\t\t\t\t\t\t\t\t\t   \"id\", NULL)) != NULL) {\n\t\t\t\t\tstrcpy((char *) rqm.rq_objname,\n\t\t\t\t\t       svrattrl_e->al_value);\n\t\t\t\t}\n\t\t\t\tif (copy_svrattrl_list(&event_job,\n\t\t\t\t\t\t       &rqm.rq_attr) == -1) {\n\t\t\t\t\tlog_err(errno, PBS_PYTHON_PROGRAM, \"failed to copy 
event_job\");\n\t\t\t\t\trc = 1;\n\t\t\t\t\tgoto pbs_python_end;\n\t\t\t\t}\n\n\t\t\t\treq_params.rq_manage = (struct rq_manage *) &rqm;\n\t\t\t\trc = pbs_python_event_set(hook_event, req_user, req_host, &req_params, perf_label);\n\n\t\t\t\tif (rc == -1) { /* internal server code failure */\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG,\n\t\t\t\t\t\t  PBS_EVENTCLASS_HOOK, LOG_ERR,\n\t\t\t\t\t\t  hook_name,\n\t\t\t\t\t\t  \"Encountered an error while setting event\");\n\t\t\t\t}\n\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_MOVEJOB:\n\t\t\t\trqmv.rq_jid[0] = '\\0';\n\t\t\t\tif ((svrattrl_e = find_svrattrl_list_entry(&event_job,\n\t\t\t\t\t\t\t\t\t   \"id\", NULL)) != NULL) {\n\t\t\t\t\tstrcpy((char *) rqmv.rq_jid,\n\t\t\t\t\t       svrattrl_e->al_value);\n\t\t\t\t}\n\n\t\t\t\treq_params.rq_move = (struct rq_move *) &rqmv;\n\t\t\t\trc = pbs_python_event_set(hook_event, req_user, req_host, &req_params, perf_label);\n\n\t\t\t\tif (rc == -1) { /* internal server code failure */\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG,\n\t\t\t\t\t\t  PBS_EVENTCLASS_HOOK, LOG_ERR,\n\t\t\t\t\t\t  hook_name,\n\t\t\t\t\t\t  \"Encountered an error while setting event\");\n\t\t\t\t}\n\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_RUNJOB:\n\t\t\t\trqrun.rq_jid[0] = '\\0';\n\t\t\t\tif ((svrattrl_e = find_svrattrl_list_entry(&event_job,\n\t\t\t\t\t\t\t\t\t   \"id\", NULL)) != NULL) {\n\t\t\t\t\tstrcpy((char *) rqrun.rq_jid,\n\t\t\t\t\t       svrattrl_e->al_value);\n\t\t\t\t}\n\t\t\t\treq_params.rq_run = (struct rq_runjob *) &rqrun;\n\n\t\t\t\trc = pbs_python_event_set(hook_event, req_user, req_host, &req_params, perf_label);\n\n\t\t\t\tif (rc == -1) { /* internal server code failure */\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG,\n\t\t\t\t\t\t  PBS_EVENTCLASS_HOOK, LOG_ERR,\n\t\t\t\t\t\t  hook_name,\n\t\t\t\t\t\t  \"Encountered an error while setting event\");\n\t\t\t\t}\n\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_RESVSUB:\n\t\t\t\trqj.rq_jid[0] = '\\0';\n\t\t\t\tif ((svrattrl_e = 
find_svrattrl_list_entry(&event_resv,\n\t\t\t\t\t\t\t\t\t   \"resvid\", NULL)) != NULL) {\n\t\t\t\t\tstrcpy((char *) rqj.rq_jid,\n\t\t\t\t\t       svrattrl_e->al_value);\n\t\t\t\t}\n\t\t\t\tif (copy_svrattrl_list(&event_resv,\n\t\t\t\t\t\t       &rqj.rq_attr) == -1) {\n\t\t\t\t\tlog_err(errno, PBS_PYTHON_PROGRAM, \"failed to copy event_resv\");\n\t\t\t\t\trc = 1;\n\t\t\t\t\tgoto pbs_python_end;\n\t\t\t\t}\n\t\t\t\treq_params.rq_job = (struct rq_queuejob *) &rqj;\n\t\t\t\treq_params.vns_list = (pbs_list_head *) &event_vnode;\n\t\t\t\trc = pbs_python_event_set(hook_event, req_user, req_host, &req_params, perf_label);\n\n\t\t\t\tif (rc == -1) { /* internal server code failure */\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG,\n\t\t\t\t\t\t  PBS_EVENTCLASS_HOOK, LOG_ERR,\n\t\t\t\t\t\t  hook_name,\n\t\t\t\t\t\t  \"Encountered an error while setting event\");\n\t\t\t\t}\n\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECJOB_BEGIN:\n\t\t\tcase HOOK_EVENT_EXECJOB_PROLOGUE:\n\t\t\tcase HOOK_EVENT_EXECJOB_EPILOGUE:\n\t\t\tcase HOOK_EVENT_EXECJOB_END:\n\t\t\tcase HOOK_EVENT_EXECJOB_PRETERM:\n\t\t\tcase HOOK_EVENT_EXECJOB_RESIZE:\n\t\t\tcase HOOK_EVENT_EXECJOB_ABORT:\n\t\t\tcase HOOK_EVENT_EXECJOB_POSTSUSPEND:\n\t\t\tcase HOOK_EVENT_EXECJOB_PRERESUME:\n\n\t\t\t\trqj.rq_jid[0] = '\\0';\n\t\t\t\tif ((svrattrl_e = find_svrattrl_list_entry(&event_job,\n\t\t\t\t\t\t\t\t\t   \"id\", NULL)) != NULL) {\n\t\t\t\t\tstrcpy((char *) rqj.rq_jid,\n\t\t\t\t\t       svrattrl_e->al_value);\n\t\t\t\t}\n\t\t\t\trqj.rq_destin[0] = '\\0';\n\n\t\t\t\tif (copy_svrattrl_list(&event_job,\n\t\t\t\t\t\t       &rqj.rq_attr) == -1) {\n\t\t\t\t\tlog_err(errno, PBS_PYTHON_PROGRAM, \"failed to copy event_job\");\n\t\t\t\t\trc = 1;\n\t\t\t\t\tgoto pbs_python_end;\n\t\t\t\t}\n\t\t\t\treq_params.rq_job = (struct rq_queuejob *) &rqj;\n\t\t\t\treq_params.vns_list = (pbs_list_head *) &event_vnode;\n\n\t\t\t\tif (hook_event == HOOK_EVENT_EXECJOB_PROLOGUE) {\n\t\t\t\t\treq_params.vns_list_fail = (pbs_list_head *) 
&event_vnode_fail;\n\t\t\t\t\treq_params.failed_mom_list = &job_failed_mom_list;\n\t\t\t\t\treq_params.succeeded_mom_list = &job_succeeded_mom_list;\n\t\t\t\t}\n\n\t\t\t\trc = pbs_python_event_set(hook_event, req_user, req_host, &req_params, perf_label);\n\n\t\t\t\tif (rc == -1) { /* internal server code failure */\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG,\n\t\t\t\t\t\t  PBS_EVENTCLASS_HOOK, LOG_ERR,\n\t\t\t\t\t\t  hook_name,\n\t\t\t\t\t\t  \"Encountered an error while setting event\");\n\t\t\t\t}\n\n\t\t\t\tbreak;\n\n\t\t\tcase HOOK_EVENT_EXECJOB_LAUNCH:\n\n\t\t\t\tif ((svrattrl_e = find_svrattrl_list_entry(&event_job,\n\t\t\t\t\t\t\t\t\t   \"id\", NULL)) != NULL) {\n\t\t\t\t\tstrcpy((char *) rqj.rq_jid,\n\t\t\t\t\t       svrattrl_e->al_value);\n\t\t\t\t}\n\t\t\t\trqj.rq_destin[0] = '\\0';\n\n\t\t\t\tif (copy_svrattrl_list(&event_job,\n\t\t\t\t\t\t       &rqj.rq_attr) == -1) {\n\t\t\t\t\tlog_err(errno, PBS_PYTHON_PROGRAM, \"failed to copy event_job\");\n\t\t\t\t\trc = 1;\n\t\t\t\t\tgoto pbs_python_end;\n\t\t\t\t}\n\n\t\t\t\treq_params.rq_job = (struct rq_queuejob *) &rqj;\n\t\t\t\treq_params.vns_list = &event_vnode;\n\t\t\t\treq_params.vns_list_fail = &event_vnode_fail;\n\t\t\t\treq_params.failed_mom_list = &job_failed_mom_list;\n\t\t\t\treq_params.succeeded_mom_list = &job_succeeded_mom_list;\n\n\t\t\t\tif ((svrattrl_e = find_svrattrl_list_entry(&event,\n\t\t\t\t\t\t\t\t\t   PY_EVENT_PARAM_PROGNAME, NULL)) != NULL) {\n\t\t\t\t\treq_params.progname = svrattrl_e->al_value;\n\t\t\t\t\tprogname_orig = svrattrl_e->al_value;\n\t\t\t\t} else {\n\t\t\t\t\tprogname_orig = \"\";\n\t\t\t\t}\n\n\t\t\t\treq_params.argv_list = (pbs_list_head *) &event_argv;\n\n\t\t\t\targv_str_orig = argv_list_to_str((pbs_list_head *) &event_argv);\n\t\t\t\tif ((svrattrl_e = find_svrattrl_list_entry(&event,\n\t\t\t\t\t\t\t\t\t   PY_EVENT_PARAM_ENV, NULL)) != NULL) {\n\t\t\t\t\treq_params.env = svrattrl_e->al_value;\n\t\t\t\t\tenv_str_orig = svrattrl_e->al_value;\n\t\t\t\t} else 
{\n\t\t\t\t\tenv_str_orig = \"\";\n\t\t\t\t}\n\n\t\t\t\trc = pbs_python_event_set(hook_event, req_user, req_host, &req_params, perf_label);\n\n\t\t\t\tif (rc == -1) { /* internal server code failure */\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_HOOK, LOG_ERR,\n\t\t\t\t\t\t  hook_name,\n\t\t\t\t\t\t  \"Encountered an error while setting event\");\n\t\t\t\t}\n\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECJOB_ATTACH:\n\n\t\t\t\tif ((svrattrl_e = find_svrattrl_list_entry(&event_job,\n\t\t\t\t\t\t\t\t\t   \"id\", NULL)) != NULL) {\n\t\t\t\t\tstrcpy((char *) rqj.rq_jid,\n\t\t\t\t\t       svrattrl_e->al_value);\n\t\t\t\t}\n\t\t\t\trqj.rq_destin[0] = '\\0';\n\n\t\t\t\tif (copy_svrattrl_list(&event_job,\n\t\t\t\t\t\t       &rqj.rq_attr) == -1) {\n\t\t\t\t\tlog_err(errno, PBS_PYTHON_PROGRAM, \"failed to copy event_job\");\n\t\t\t\t\trc = 1;\n\t\t\t\t\tgoto pbs_python_end;\n\t\t\t\t}\n\n\t\t\t\treq_params.rq_job = (struct rq_queuejob *) &rqj;\n\n\t\t\t\tif ((svrattrl_e = find_svrattrl_list_entry(&event,\n\t\t\t\t\t\t\t\t\t   PY_EVENT_PARAM_PID, NULL)) != NULL) {\n\t\t\t\t\treq_params.pid = atoi(svrattrl_e->al_value);\n\t\t\t\t} else {\n\t\t\t\t\treq_params.pid = -1;\n\t\t\t\t}\n\n\t\t\t\treq_params.vns_list = (pbs_list_head *) &event_vnode;\n\n\t\t\t\trc = pbs_python_event_set(hook_event, req_user, req_host, &req_params, perf_label);\n\n\t\t\t\tif (rc == -1) { /* internal server code failure */\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_HOOK, LOG_ERR,\n\t\t\t\t\t\t  hook_name,\n\t\t\t\t\t\t  \"Encountered an error while setting event\");\n\t\t\t\t}\n\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECHOST_PERIODIC:\n\t\t\tcase HOOK_EVENT_EXECHOST_STARTUP:\n\t\t\t\treq_params.vns_list = &event_vnode;\n\t\t\t\tif (hook_event == HOOK_EVENT_EXECHOST_PERIODIC) {\n\t\t\t\t\treq_params.jobs_list = &event_jobs;\n\t\t\t\t}\n\t\t\t\trc = pbs_python_event_set(hook_event, req_user, req_host, &req_params, perf_label);\n\n\t\t\t\tif (rc == -1) { /* internal server code failure 
*/\n\t\t\t\t\tlog_event(PBSEVENT_DEBUG,\n\t\t\t\t\t\t  PBS_EVENTCLASS_HOOK, LOG_ERR,\n\t\t\t\t\t\t  hook_name,\n\t\t\t\t\t\t  \"Encountered an error while setting event\");\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_HOOK, LOG_ERR,\n\t\t\t\t\t  hook_name, \"Unexpected event\");\n\t\t\t\trc = 1;\n\t\t\t\tgoto pbs_python_end;\n\t\t}\n\n\t\t/* This sets Python event object's hook_name value */\n\t\trc = pbs_python_event_set_attrval(PY_EVENT_HOOK_NAME,\n\t\t\t\t\t\t  hook_name);\n\n\t\tif (rc == -1) {\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_ERR, hook_name, \"Failed to set event 'hook_name'.\");\n\t\t}\n\n\t\trc = pbs_python_event_set_attrval(PY_EVENT_HOOK_TYPE,\n\t\t\t\t\t\t  hookstr_type);\n\n\t\tif (rc == -1) {\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_ERR, hook_name, \"Failed to set event 'hook_type'.\");\n\t\t}\n\n\t\trc = pbs_python_event_set_attrval(PY_EVENT_TYPE,\n\t\t\t\t\t\t  hookstr_event);\n\n\t\tif (rc == -1) {\n\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_HOOK,\n\t\t\t\t  LOG_ERR, hook_name,\n\t\t\t\t  \"Failed to set event 'type'.\");\n\t\t}\n\n\t\tpbs_python_set_mode(PY_MODE); /* hook script mode */\n\n\t\t/* reset global flag to allow modification of             */\n\t\t/* attributes and resources for every new hook execution. 
*/\n\t\tpbs_python_event_param_mod_allow();\n\n\t\tset_alarm(hook_alarm, pbs_python_set_interrupt);\n\t\tif (hook_script[0] == '\\0') {\n\t\t\twchar_t *tmp_argv[2];\n\n\t\t\ttmp_argv[0] = Py_DecodeLocale(argv[0], NULL);\n\t\t\tif (tmp_argv[0] == NULL) {\n\t\t\t\tfprintf(stderr, \"Fatal error: cannot decode script name\\n\");\n\t\t\t\texit(2);\n\t\t\t}\n\t\t\ttmp_argv[1] = NULL;\n\n\t\t\trc = Py_Main(1, tmp_argv);\n\t\t\tPyMem_RawFree(tmp_argv[0]);\n\t\t} else {\n\t\t\thook_perf_stat_start(perf_label, HOOK_PERF_RUN_CODE, 0);\n\t\t\trc = pbs_python_run_code_in_namespace(&svr_interp_data,\n\t\t\t\t\t\t\t      py_script, 0);\n\t\t\thook_perf_stat_stop(perf_label, HOOK_PERF_RUN_CODE, 0);\n\t\t}\n\t\tset_alarm(0, pbs_python_set_interrupt);\n\n\t\tpbs_python_set_mode(C_MODE); /* PBS C mode - flexible */\n\n\t\t/* Prepare output file */\n\t\tif (*the_output != '\\0') {\n\t\t\tfp_out = fopen(the_output, \"w\");\n\n\t\t\tif (fp_out == NULL) {\n\t\t\t\tfprintf(stderr, \"failed to open event output file %s\\n\", the_output);\n\t\t\t\texit(2);\n\t\t\t}\n\t\t} else {\n\t\t\tfp_out = stdout;\n\t\t}\n\n\t\tswitch (rc) {\n\t\t\tcase -1: /* internal error */\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t  LOG_ERR, hook_name,\n\t\t\t\t\t  \"Internal server error encountered. 
Skipping hook.\");\n\t\t\t\trc = -1; /* should not happen */\n\t\t\t\tgoto pbs_python_end;\n\t\t\tcase -2: /* unhandled exception */\n\t\t\t\tpbs_python_event_reject(NULL);\n\t\t\t\tpbs_python_event_param_mod_disallow();\n\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t\t \"%s hook '%s' encountered an exception, \"\n\t\t\t\t\t \"request rejected\",\n\t\t\t\t\t hook_event_as_string(hook_event), hook_name);\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t  LOG_ERR, hook_name, log_buffer);\n\t\t\t\trc = -2; /* should not happen */\n\t\t\t\tbreak;\n\t\t\tcase -3: /* alarm timeout */\n\t\t\t\tpbs_python_event_reject(NULL);\n\t\t\t\tpbs_python_event_param_mod_disallow();\n\n\t\t\t\tsnprintf(log_buffer, LOG_BUF_SIZE - 1,\n\t\t\t\t\t \"alarm call while running %s hook '%s', \"\n\t\t\t\t\t \"request rejected\",\n\t\t\t\t\t hook_event_as_string(hook_event), hook_name);\n\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK,\n\t\t\t\t\t  LOG_ERR, hook_name, log_buffer);\n\t\t\t\trc = -3; /* should not happen */\n\t\t\t\tbreak;\n\t\t}\n\n\t\thook_output_param_init(&req_params_out);\n\n\t\tsp = NULL;\n\t\tif (the_output[0] != '\\0') {\n\t\t\tsp = strrchr(the_output, '/');\n\t\t\tif (sp != NULL)\n\t\t\t\tsp++;\n\t\t\telse\n\t\t\t\tsp = the_output;\n\t\t}\n\t\tsnprintf(perf_action, sizeof(perf_action), \"%s:%s\", HOOK_PERF_HOOK_OUTPUT, sp ? 
sp : \"stdout\");\n\n\t\tswitch (hook_event) {\n\n\t\t\tcase HOOK_EVENT_QUEUEJOB:\n\n\t\t\t\tif (pbs_python_event_get_accept_flag() == FALSE) {\n\t\t\t\t\trej_msg = pbs_python_event_get_reject_msg();\n\n\t\t\t\t\tfprintf(fp_out, \"%s=True\\n\", EVENT_REJECT_OBJECT);\n\t\t\t\t\tfprintf(fp_out, \"%s=False\\n\", EVENT_ACCEPT_OBJECT);\n\t\t\t\t\tif (rej_msg != NULL)\n\t\t\t\t\t\tfprintf(fp_out, \"%s=%s\\n\", EVENT_REJECT_MSG_OBJECT,\n\t\t\t\t\t\t\trej_msg);\n\t\t\t\t} else {\n\t\t\t\t\tfprintf(fp_out, \"%s=True\\n\", EVENT_ACCEPT_OBJECT);\n\t\t\t\t\tfprintf(fp_out, \"%s=False\\n\", EVENT_REJECT_OBJECT);\n\n\t\t\t\t\treq_params_out.rq_job = (struct rq_quejob *) &rqj;\n\t\t\t\t\tpbs_python_event_to_request(hook_event, &req_params_out, perf_label, perf_action);\n\n\t\t\t\t\tfprint_svrattrl_list(fp_out, EVENT_JOB_OBJECT,\n\t\t\t\t\t\t\t     &rqj.rq_attr);\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase HOOK_EVENT_POSTQUEUEJOB:\n\n\t\t\t\tif (pbs_python_event_get_accept_flag() == FALSE) {\n\t\t\t\t\trej_msg = pbs_python_event_get_reject_msg();\n\n\t\t\t\t\tfprintf(fp_out, \"%s=True\\n\", EVENT_REJECT_OBJECT);\n\t\t\t\t\tfprintf(fp_out, \"%s=False\\n\", EVENT_ACCEPT_OBJECT);\n\t\t\t\t\tif (rej_msg != NULL)\n\t\t\t\t\t\tfprintf(fp_out, \"%s=%s\\n\", EVENT_REJECT_MSG_OBJECT,\n\t\t\t\t\t\t\trej_msg);\n\t\t\t\t} else {\n\t\t\t\t\tfprintf(fp_out, \"%s=True\\n\", EVENT_ACCEPT_OBJECT);\n\t\t\t\t\tfprintf(fp_out, \"%s=False\\n\", EVENT_REJECT_OBJECT);\n\n\t\t\t\t\treq_params_out.rq_job = (struct rq_postqueuejob *) &rqj;\n\t\t\t\t\tpbs_python_event_to_request(hook_event, &req_params_out, perf_label, perf_action);\n\n\t\t\t\t\tfprint_svrattrl_list(fp_out, EVENT_JOB_OBJECT,\n\t\t\t\t\t\t\t     &rqj.rq_attr);\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase HOOK_EVENT_MODIFYJOB:\n\n\t\t\t\tif (pbs_python_event_get_accept_flag() == FALSE) {\n\t\t\t\t\trej_msg = pbs_python_event_get_reject_msg();\n\n\t\t\t\t\tfprintf(fp_out, \"%s=True\\n\", EVENT_REJECT_OBJECT);\n\t\t\t\t\tfprintf(fp_out, 
\"%s=False\\n\", EVENT_ACCEPT_OBJECT);\n\t\t\t\t\tif (rej_msg != NULL)\n\t\t\t\t\t\tfprintf(fp_out, \"%s=%s\\n\", EVENT_REJECT_MSG_OBJECT,\n\t\t\t\t\t\t\trej_msg);\n\n\t\t\t\t} else {\n\t\t\t\t\tfprintf(fp_out, \"%s=True\\n\", EVENT_ACCEPT_OBJECT);\n\t\t\t\t\tfprintf(fp_out, \"%s=False\\n\", EVENT_REJECT_OBJECT);\n\t\t\t\t\treq_params_out.rq_manage = (struct rq_manage *) &rqm;\n\t\t\t\t\tpbs_python_event_to_request(hook_event, &req_params_out, perf_label, perf_action);\n\t\t\t\t\tfprint_svrattrl_list(fp_out, EVENT_JOB_OBJECT,\n\t\t\t\t\t\t\t     &rqm.rq_attr);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_MOVEJOB:\n\n\t\t\t\tif (pbs_python_event_get_accept_flag() == FALSE) {\n\t\t\t\t\trej_msg = pbs_python_event_get_reject_msg();\n\n\t\t\t\t\tfprintf(fp_out, \"%s=True\\n\", EVENT_REJECT_OBJECT);\n\t\t\t\t\tfprintf(fp_out, \"%s=False\\n\", EVENT_ACCEPT_OBJECT);\n\t\t\t\t\tif (rej_msg != NULL)\n\t\t\t\t\t\tfprintf(fp_out, \"%s=%s\\n\", EVENT_REJECT_MSG_OBJECT,\n\t\t\t\t\t\t\trej_msg);\n\n\t\t\t\t} else {\n\t\t\t\t\tfprintf(fp_out, \"%s=True\\n\", EVENT_ACCEPT_OBJECT);\n\t\t\t\t\tfprintf(fp_out, \"%s=False\\n\", EVENT_REJECT_OBJECT);\n\t\t\t\t\treq_params_out.rq_move = (struct rq_manage *) &rqmv;\n\t\t\t\t\tpbs_python_event_to_request(hook_event, &req_params_out, perf_label, perf_action);\n\t\t\t\t\tif (rqmv.rq_destin[0] != '\\0')\n\t\t\t\t\t\tfprintf(fp_out, \"%s.%s=%s\\n\", EVENT_OBJECT,\n\t\t\t\t\t\t\tPY_EVENT_PARAM_SRC_QUEUE, rqmv.rq_destin);\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase HOOK_EVENT_RUNJOB:\n\n\t\t\t\tif (pbs_python_event_get_accept_flag() == FALSE) {\n\t\t\t\t\trej_msg = pbs_python_event_get_reject_msg();\n\n\t\t\t\t\tfprintf(fp_out, \"%s=True\\n\", EVENT_REJECT_OBJECT);\n\t\t\t\t\tfprintf(fp_out, \"%s=False\\n\", EVENT_ACCEPT_OBJECT);\n\t\t\t\t\tif (rej_msg != NULL)\n\t\t\t\t\t\tfprintf(fp_out, \"%s=%s\\n\", EVENT_REJECT_MSG_OBJECT,\n\t\t\t\t\t\t\trej_msg);\n\n\t\t\t\t\tnew_exec_time_str 
=\n\t\t\t\t\t\tpbs_python_event_job_getval_hookset(ATTR_a,\n\t\t\t\t\t\t\t\t\t\t    NULL, 0, NULL, 0);\n\n\t\t\t\t\tif (new_exec_time_str != NULL)\n\t\t\t\t\t\tfprintf(fp_out, \"%s.%s=%s\\n\", EVENT_JOB_OBJECT,\n\t\t\t\t\t\t\tATTR_a, new_exec_time_str);\n\n\t\t\t\t\tnew_hold_types_str =\n\t\t\t\t\t\tpbs_python_event_job_getval_hookset(ATTR_h,\n\t\t\t\t\t\t\t\t\t\t    NULL, 0, NULL, 0);\n\n\t\t\t\t\tif (new_hold_types_str != NULL)\n\t\t\t\t\t\tfprintf(fp_out, \"%s.%s=%s\\n\", EVENT_JOB_OBJECT,\n\t\t\t\t\t\t\tATTR_h, new_hold_types_str);\n\n\t\t\t\t\tnew_project_str =\n\t\t\t\t\t\tpbs_python_event_job_getval_hookset(ATTR_project,\n\t\t\t\t\t\t\t\t\t\t    NULL, 0, NULL, 0);\n\t\t\t\t\tif (new_project_str != NULL)\n\t\t\t\t\t\tfprintf(fp_out, \"%s.%s=%s\\n\", EVENT_JOB_OBJECT,\n\t\t\t\t\t\t\tATTR_project, new_project_str);\n\n\t\t\t\t} else {\n\t\t\t\t\tfprintf(fp_out, \"%s=True\\n\", EVENT_ACCEPT_OBJECT);\n\t\t\t\t\tfprintf(fp_out, \"%s=False\\n\", EVENT_REJECT_OBJECT);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_RESVSUB:\n\n\t\t\t\tif (pbs_python_event_get_accept_flag() == FALSE) {\n\t\t\t\t\trej_msg = pbs_python_event_get_reject_msg();\n\n\t\t\t\t\tfprintf(fp_out, \"%s=True\\n\", EVENT_REJECT_OBJECT);\n\t\t\t\t\tfprintf(fp_out, \"%s=False\\n\", EVENT_ACCEPT_OBJECT);\n\t\t\t\t\tif (rej_msg != NULL)\n\t\t\t\t\t\tfprintf(fp_out, \"%s=%s\\n\", EVENT_REJECT_MSG_OBJECT,\n\t\t\t\t\t\t\trej_msg);\n\n\t\t\t\t} else {\n\t\t\t\t\tfprintf(fp_out, \"%s=True\\n\", EVENT_ACCEPT_OBJECT);\n\t\t\t\t\tfprintf(fp_out, \"%s=False\\n\", EVENT_REJECT_OBJECT);\n\t\t\t\t\treq_params_out.rq_job = (struct rq_quejob *) &rqj;\n\t\t\t\t\tpbs_python_event_to_request(hook_event, &req_params_out, perf_label, perf_action);\n\t\t\t\t\tfprint_svrattrl_list(fp_out, EVENT_RESV_OBJECT,\n\t\t\t\t\t\t\t     &rqj.rq_attr);\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase HOOK_EVENT_EXECJOB_BEGIN:\n\t\t\tcase HOOK_EVENT_EXECJOB_PROLOGUE:\n\t\t\tcase HOOK_EVENT_EXECJOB_EPILOGUE:\n\t\t\tcase 
HOOK_EVENT_EXECJOB_END:\n\t\t\tcase HOOK_EVENT_EXECJOB_PRETERM:\n\t\t\tcase HOOK_EVENT_EXECJOB_LAUNCH:\n\t\t\tcase HOOK_EVENT_EXECJOB_ABORT:\n\t\t\tcase HOOK_EVENT_EXECJOB_POSTSUSPEND:\n\t\t\tcase HOOK_EVENT_EXECJOB_PRERESUME:\n\n\t\t\t\tif (pbs_python_event_get_accept_flag() == FALSE) {\n\n\t\t\t\t\trej_msg = pbs_python_event_get_reject_msg();\n\n\t\t\t\t\tfprintf(fp_out, \"%s=True\\n\", EVENT_REJECT_OBJECT);\n\t\t\t\t\tfprintf(fp_out, \"%s=False\\n\", EVENT_ACCEPT_OBJECT);\n\t\t\t\t\tif (rej_msg != NULL)\n\t\t\t\t\t\tfprintf(fp_out, \"%s=%s\\n\", EVENT_REJECT_MSG_OBJECT,\n\t\t\t\t\t\t\trej_msg);\n\n\t\t\t\t} else {\n\t\t\t\t\tfprintf(fp_out, \"%s=True\\n\", EVENT_ACCEPT_OBJECT);\n\t\t\t\t\tfprintf(fp_out, \"%s=False\\n\", EVENT_REJECT_OBJECT);\n\t\t\t\t}\n\n\t\t\t\t/* Whether accept or reject, show job, vnode_list changes and job actions */\n\t\t\t\tfree_attrlist(&event_vnode);\n\t\t\t\tCLEAR_HEAD(event_vnode);\n\n\t\t\t\tif (hook_event == HOOK_EVENT_EXECJOB_LAUNCH) {\n\t\t\t\t\tif (progname != NULL) {\n\t\t\t\t\t\tfree(progname);\n\t\t\t\t\t\tprogname = NULL;\n\t\t\t\t\t}\n\t\t\t\t\tfree_attrlist(&event_argv);\n\t\t\t\t\tCLEAR_HEAD(event_argv);\n\t\t\t\t\tif (env_str != NULL) {\n\t\t\t\t\t\tfree(env_str);\n\t\t\t\t\t\tenv_str = NULL;\n\t\t\t\t\t}\n\n\t\t\t\t\tfree_attrlist(&event_vnode_fail);\n\t\t\t\t\tCLEAR_HEAD(event_vnode_fail);\n\n\t\t\t\t\treq_params_out.progname = (char **) &progname;\n\t\t\t\t\treq_params_out.argv_list = (pbs_list_head *) &event_argv;\n\t\t\t\t\treq_params_out.env = (char **) &env_str;\n\t\t\t\t\treq_params_out.vns_list = (pbs_list_head *) &event_vnode;\n\t\t\t\t\treq_params_out.vns_list_fail = (pbs_list_head *) &event_vnode_fail;\n\t\t\t\t} else if (hook_event == HOOK_EVENT_EXECJOB_PROLOGUE) {\n\n\t\t\t\t\tfree_attrlist(&event_vnode_fail);\n\t\t\t\t\tCLEAR_HEAD(event_vnode_fail);\n\t\t\t\t\treq_params_out.vns_list_fail = (pbs_list_head *) &event_vnode_fail;\n\t\t\t\t}\n\n\t\t\t\treq_params_out.rq_job = (struct rq_quejob *) 
&rqj;\n\t\t\t\treq_params_out.vns_list = (pbs_list_head *) &event_vnode;\n\t\t\t\tpbs_python_event_to_request(hook_event,\n\t\t\t\t\t\t\t    &req_params_out, perf_label, perf_action);\n\t\t\t\tfprint_svrattrl_list(fp_out, EVENT_JOB_OBJECT,\n\t\t\t\t\t\t     &rqj.rq_attr);\n\t\t\t\tfprint_svrattrl_list(fp_out,\n\t\t\t\t\t\t     EVENT_VNODELIST_OBJECT,\n\t\t\t\t\t\t     &event_vnode);\n\n\t\t\t\tif (hook_event == HOOK_EVENT_EXECJOB_LAUNCH) {\n\t\t\t\t\tfprint_svrattrl_list(fp_out, EVENT_VNODELIST_FAIL_OBJECT, &event_vnode_fail);\n\t\t\t\t\tfprintf(fp_out, \"%s=%s\\n\", EVENT_PROGNAME_OBJECT, progname);\n\t\t\t\t\tfprint_svrattrl_list(fp_out, EVENT_OBJECT, &event_argv);\n\t\t\t\t\tfprintf(fp_out, \"%s=\\\"\\\"\\\"%s\\\"\\\"\\\"\\n\", EVENT_ENV_OBJECT, env_str);\n\t\t\t\t\tif (strcmp(progname_orig, progname) != 0)\n\t\t\t\t\t\tprint_progname = 1;\n\n\t\t\t\t\targv_str = argv_list_to_str((pbs_list_head *) &event_argv);\n\n\t\t\t\t\tif (((argv_str_orig == NULL) && (argv_str != NULL)) ||\n\t\t\t\t\t    ((argv_str_orig != NULL) && (argv_str == NULL)) ||\n\t\t\t\t\t    ((argv_str_orig != NULL) && (argv_str != NULL) &&\n\t\t\t\t\t     (strcmp(argv_str_orig, argv_str) != 0)))\n\t\t\t\t\t\tprint_argv = 1;\n\n\t\t\t\t\tif (!varlist_same(env_str_orig, env_str))\n\t\t\t\t\t\tprint_env = 1;\n\n\t\t\t\t\tif (print_progname) {\n\t\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"progname orig: %s\", progname_orig);\n\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_INFO, hook_name, log_buffer);\n\t\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"progname new: %s\", progname);\n\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_INFO, hook_name, log_buffer);\n\t\t\t\t\t}\n\t\t\t\t\tif (print_argv) {\n\t\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"argv orig: %s\", argv_str_orig ? 
argv_str_orig : \"\");\n\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_INFO, hook_name, log_buffer);\n\t\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"argv new: %s\", argv_str ? argv_str : \"\");\n\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_INFO, hook_name, log_buffer);\n\t\t\t\t\t}\n\t\t\t\t\tif (print_env) {\n\t\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"env orig: %s\", env_str_orig);\n\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_INFO, hook_name, log_buffer);\n\t\t\t\t\t\tsnprintf(log_buffer, sizeof(log_buffer), \"env new: %s\", env_str);\n\t\t\t\t\t\tlog_event(PBSEVENT_DEBUG2, PBS_EVENTCLASS_HOOK, LOG_INFO, hook_name, log_buffer);\n\t\t\t\t\t}\n\t\t\t\t\tfree(argv_str_orig);\n\t\t\t\t\tfree(argv_str);\n\t\t\t\t\t/* show the job attributes modified by the hook */\n\t\t\t\t\ttmp_str = pbs_python_event_job_getval_hookset(ATTR_execvnode, NULL, 0, NULL, 0);\n\t\t\t\t\tif (tmp_str != NULL)\n\t\t\t\t\t\tfprintf(fp_out, \"%s.%s=%s\\n\", EVENT_JOB_OBJECT, ATTR_execvnode, tmp_str);\n\n\t\t\t\t\ttmp_str = pbs_python_event_job_getval_hookset(ATTR_exechost, NULL, 0, NULL, 0);\n\t\t\t\t\tif (tmp_str != NULL)\n\t\t\t\t\t\tfprintf(fp_out, \"%s.%s=%s\\n\", EVENT_JOB_OBJECT, ATTR_exechost, tmp_str);\n\n\t\t\t\t\ttmp_str = pbs_python_event_job_getval_hookset(ATTR_exechost2, NULL, 0, NULL, 0);\n\t\t\t\t\tif (tmp_str != NULL)\n\t\t\t\t\t\tfprintf(fp_out, \"%s.%s=%s\\n\", EVENT_JOB_OBJECT, ATTR_exechost2, tmp_str);\n\n\t\t\t\t\ttmp_str = pbs_python_event_job_getval_hookset(ATTR_SchedSelect, NULL, 0, NULL, 0);\n\t\t\t\t\tif (tmp_str != NULL)\n\t\t\t\t\t\tfprintf(fp_out, \"%s.%s=%s\\n\", EVENT_JOB_OBJECT, ATTR_SchedSelect, tmp_str);\n\t\t\t\t} else if (hook_event == HOOK_EVENT_EXECJOB_PROLOGUE) {\n\t\t\t\t\tfprint_svrattrl_list(fp_out, EVENT_VNODELIST_FAIL_OBJECT, &event_vnode_fail);\n\t\t\t\t}\n\n\t\t\t\t/* job actions */\n\t\t\t\trerunjob_str = pbs_python_event_job_getval_hookset(\n\t\t\t\t\tPY_RERUNJOB_FLAG, NULL, 0, 
NULL, 0);\n\t\t\t\tif (rerunjob_str != NULL) {\n\t\t\t\t\tfprintf(fp_out, \"%s.%s=%s\\n\", EVENT_JOB_OBJECT,\n\t\t\t\t\t\tPY_RERUNJOB_FLAG, rerunjob_str);\n\t\t\t\t}\n\t\t\t\tdeletejob_str = pbs_python_event_job_getval_hookset(\n\t\t\t\t\tPY_DELETEJOB_FLAG, NULL, 0, NULL, 0);\n\t\t\t\tif (deletejob_str != NULL) {\n\t\t\t\t\tfprintf(fp_out, \"%s.%s=%s\\n\", EVENT_JOB_OBJECT,\n\t\t\t\t\t\tPY_DELETEJOB_FLAG,\n\t\t\t\t\t\tdeletejob_str);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase HOOK_EVENT_EXECHOST_PERIODIC:\n\t\t\tcase HOOK_EVENT_EXECHOST_STARTUP:\n\t\t\t\tif (pbs_python_event_get_accept_flag() == FALSE) {\n\t\t\t\t\trej_msg = pbs_python_event_get_reject_msg();\n\t\t\t\t\tfprintf(fp_out, \"%s=True\\n\", EVENT_REJECT_OBJECT);\n\t\t\t\t\tfprintf(fp_out, \"%s=False\\n\", EVENT_ACCEPT_OBJECT);\n\t\t\t\t\tif (rej_msg != NULL)\n\t\t\t\t\t\tfprintf(fp_out, \"%s=%s\\n\", EVENT_REJECT_MSG_OBJECT,\n\t\t\t\t\t\t\trej_msg);\n\t\t\t\t} else {\n\t\t\t\t\tfprintf(fp_out, \"%s=True\\n\", EVENT_ACCEPT_OBJECT);\n\t\t\t\t\tfprintf(fp_out, \"%s=False\\n\", EVENT_REJECT_OBJECT);\n\t\t\t\t}\n\t\t\t\t/* show vnode_list changes whether accepted or rejected */\n\t\t\t\tfree_attrlist(&event_vnode);\n\t\t\t\tCLEAR_HEAD(event_vnode);\n\t\t\t\tfree_attrlist(&event_jobs);\n\t\t\t\tCLEAR_HEAD(event_jobs);\n\t\t\t\treq_params_out.vns_list = (pbs_list_head *) &event_vnode;\n\t\t\t\tif (hook_event == HOOK_EVENT_EXECHOST_PERIODIC) {\n\t\t\t\t\tfree_attrlist(&event_jobs);\n\t\t\t\t\tCLEAR_HEAD(event_jobs);\n\t\t\t\t\treq_params_out.jobs_list = (pbs_list_head *) &event_jobs;\n\t\t\t\t}\n\t\t\t\tpbs_python_event_to_request(hook_event,\n\t\t\t\t\t\t\t    &req_params_out, perf_label, perf_action);\n\n\t\t\t\tfprint_svrattrl_list(fp_out, EVENT_VNODELIST_OBJECT,\n\t\t\t\t\t\t     &event_vnode);\n\t\t\t\tif (hook_event == HOOK_EVENT_EXECHOST_PERIODIC) {\n\t\t\t\t\tfprint_svrattrl_list(fp_out, EVENT_JOBLIST_OBJECT,\n\t\t\t\t\t\t\t     &event_jobs);\n\t\t\t\t}\n\t\t\t\tbreak;\n\t\t\tcase 
HOOK_EVENT_EXECJOB_ATTACH:\n\t\t\tcase HOOK_EVENT_EXECJOB_RESIZE:\n\n\t\t\t\tif (pbs_python_event_get_accept_flag() == FALSE) {\n\n\t\t\t\t\trej_msg = pbs_python_event_get_reject_msg();\n\n\t\t\t\t\tfprintf(fp_out, \"%s=True\\n\", EVENT_REJECT_OBJECT);\n\t\t\t\t\tfprintf(fp_out, \"%s=False\\n\", EVENT_ACCEPT_OBJECT);\n\t\t\t\t\tif (rej_msg != NULL)\n\t\t\t\t\t\tfprintf(fp_out, \"%s=%s\\n\", EVENT_REJECT_MSG_OBJECT,\n\t\t\t\t\t\t\trej_msg);\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tfprintf(fp_out, \"%s=True\\n\", EVENT_ACCEPT_OBJECT);\n\t\t\t\tfprintf(fp_out, \"%s=False\\n\", EVENT_REJECT_OBJECT);\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tlog_event(PBSEVENT_DEBUG, PBS_EVENTCLASS_HOOK, LOG_ERR,\n\t\t\t\t\t  hook_name, \"event_to_request: Unexpected event\");\n\t\t\t\trc = 1;\n\t\t}\n\tpbs_python_end:\n\t\tif (pbs_python_get_reboot_host_flag() == TRUE) {\n\t\t\tchar *reboot_cmd;\n\n\t\t\tfprintf(fp_out, \"%s.%s=True\\n\", PBS_OBJ,\n\t\t\t\tPBS_REBOOT_OBJECT);\n\t\t\treboot_cmd = pbs_python_get_reboot_host_cmd();\n\t\t\tif (reboot_cmd != NULL)\n\t\t\t\tfprintf(fp_out, \"%s.%s=%s\\n\", PBS_OBJ,\n\t\t\t\t\tPBS_REBOOT_CMD_OBJECT, reboot_cmd);\n\t\t}\n\t\tif (pbs_python_get_scheduler_restart_cycle_flag() == TRUE) {\n\n\t\t\tfprintf(fp_out, \"%s.%s=True\\n\",\n\t\t\t\tSERVER_OBJECT,\n\t\t\t\tPY_SCHEDULER_RESTART_CYCLE_METHOD);\n\t\t}\n\n\t\tif ((fp_out != NULL) && (fp_out != stdout))\n\t\t\tfclose(fp_out);\n\n\t\tif ((fp_server_out != NULL) && (fp_server_out != stdout))\n\t\t\tfclose(fp_server_out);\n\n\t\tpbs_python_ext_free_global_dict(py_script);\n\t\tpbs_python_clear_attributes();\n\t\tpbs_python_ext_shutdown_interpreter(&svr_interp_data);\n\n\t\tfree_attrlist(&event_vnode);\n\t\tCLEAR_HEAD(event_vnode);\n\t\tfree_attrlist(&event_vnode_fail);\n\t\tCLEAR_HEAD(event_vnode_fail);\n\t\tfree_attrlist(&event_argv);\n\t\tCLEAR_HEAD(event_argv);\n\t\tfree_attrlist(&event_jobs);\n\t\tCLEAR_HEAD(event_jobs);\n\t\tif (progname != NULL)\n\t\t\tfree(progname);\n\t\tif (env_str != 
NULL)\n\t\t\tfree(env_str);\n\n\t\thook_perf_stat_stop(perf_label, PBS_PYTHON_PROGRAM, 1);\n\t}\n\n\treturn rc;\n}\n"
  },
  {
    "path": "src/tools/pbs_sleep.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file\n *\t\tpbs_sleep.c\n *\n * @brief\n *\t\tThis file implements the pbs_sleep utility for PBS.\n *\n * Functions included are:\n * \tmain()\n *\n */\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <string.h>\n#include <signal.h>\n\n/**\n * @brief\n *      This is the main function of the pbs_sleep process.\n *      It sleeps for the number of seconds passed to it; -1 means sleep indefinitely.\n *\n */\n\nint\nmain(int argc, char *argv[])\n{\n\tint i;\n\tint forever = 0;\n\tint secs = 0;\n\n\tif (argc != 2) {\n\t\tfprintf(stderr, \"usage: %s secs\\n\", argv[0]);\n\t\texit(1);\n\t}\n\n\t/* if argv[1] is -1, loop with sleep 1 indefinitely */\n\tif (strcmp(argv[1], \"-1\") == 0)\n\t\tforever = 1;\n\telse\n\t\tsecs = atoi(argv[1]);\n\n\tfor (i = 0; i < secs || forever; i++)\n\t\tsleep(1);\n\n\treturn 0;\n}\n"
  },
  {
    "path": "src/tools/pbs_tclWrap.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <ctype.h>\n#include <errno.h>\n#include <time.h>\n#include <string.h>\n#include <assert.h>\n#include <tcl.h>\n\n#include \"pbs_error.h\"\n#include \"pbs_ifl.h\"\n#include \"ifl_internal.h\"\n#include \"log.h\"\n#include \"resmon.h\"\n#include \"rm.h\"\n#include \"cmds.h\"\n#include \"attribute.h\"\n\n#if !defined(HAVE_TCL_SIZE)\ntypedef int Tcl_Size;\n#endif\n\nchar badparm[] = \"%s: bad parameter\";\nchar missingfd[] = \"%s: missing file descriptor\";\nchar not_connected[] = \"not connected\";\nchar fail[] = \"failed\";\n#ifdef NAS /* localmod 071 */\nchar *tcl_atrsep = NULL;\n#endif /* localmod 071 */\nTcl_Obj *pbserr;\nTcl_Obj *pbsmsg;\n\nint connector = -1;\nint (*local_disconnect)(int connection) = __pbs_disconnect;\n\n#define SET_PBSERR(value)                           \\\n\t(void) Tcl_ObjSetVar2(interp, pbserr, NULL, \\\n\t\t\t      Tcl_NewIntObj((value)), TCL_GLOBAL_ONLY | TCL_LEAVE_ERR_MSG)\n\n#define SET_PBSMSG(msg)                             \\\n\t(void) Tcl_ObjSetVar2(interp, pbsmsg, NULL, \\\n\t\t\t      Tcl_NewStringObj((msg), -1), TCL_GLOBAL_ONLY)\n\n#ifdef NAS\n#define PBS_CALL(function)                                         \\\n\tif (function) {                                            
\\\n\t\tTcl_SetObjResult(interp, Tcl_NewIntObj(-1));       \\\n\t\tmsg = pbs_geterrmsg(connector);                    \\\n\t\tsprintf(log_buffer, \"%s: %s (%d)\", argv[1],        \\\n\t\t\tmsg ? msg : fail, pbs_errno);              \\\n\t\tif (!quiet)                                        \\\n\t\t\tlog_err(-1, (char *) argv[0], log_buffer); \\\n\t} else                                                     \\\n\t\tTcl_SetObjResult(interp, Tcl_NewIntObj(0));\n#else\n#define PBS_CALL(function)                                   \\\n\tif (function) {                                      \\\n\t\tTcl_SetObjResult(interp, Tcl_NewIntObj(-1)); \\\n\t\tmsg = pbs_geterrmsg(connector);              \\\n\t\tsprintf(log_buffer, \"%s: %s (%d)\", argv[1],  \\\n\t\t\tmsg ? msg : fail, pbs_errno);        \\\n\t\tlog_err(-1, (char *) argv[0], log_buffer);   \\\n\t} else                                               \\\n\t\tTcl_SetObjResult(interp, Tcl_NewIntObj(0));\n#endif\n\nint\nOpenRM(ClientData clientData, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])\n{\n\tint port = 0;\n\tint fd;\n\tchar *host;\n\n\tif (objc == 3) {\n\t\tif (Tcl_GetIntFromObj(interp, objv[2], &port) != TCL_OK)\n\t\t\treturn TCL_ERROR;\n\t} else if (objc != 2) {\n\t\tTcl_WrongNumArgs(interp, 1, objv, \"host ?port?\");\n\t\treturn TCL_ERROR;\n\t}\n\n\thost = Tcl_GetStringFromObj(objv[1], NULL);\n\tif ((fd = openrm(host, port)) < 0) {\n\t\tTcl_PosixError(interp);\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(pbs_errno, Tcl_GetStringFromObj(objv[0], NULL), host);\n\t}\n\n\tSET_PBSERR(pbs_errno);\n\tTcl_SetObjResult(interp, Tcl_NewIntObj(fd));\n\treturn TCL_OK;\n}\n\nint\nCloseRM(ClientData clientData, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])\n{\n\tint fd, ret;\n\tchar *cmd;\n\n\tcmd = Tcl_GetStringFromObj(objv[0], NULL);\n\tif (objc != 2) {\n\t\tsprintf(log_buffer, missingfd, cmd);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\tif 
(Tcl_GetIntFromObj(interp, objv[1], &fd) != TCL_OK)\n\t\treturn TCL_ERROR;\n\n\tif ((ret = closerm(fd)) == -1) {\n\t\tTcl_PosixError(interp);\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(pbs_errno, cmd, Tcl_GetStringFromObj(objv[1], NULL));\n\t}\n\n\tSET_PBSERR(pbs_errno);\n\tTcl_SetObjResult(interp, Tcl_NewIntObj(ret));\n\treturn TCL_OK;\n}\n\nint\nDownRM(ClientData clientData, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])\n{\n\tint fd, ret;\n\tchar *cmd;\n\n\tcmd = Tcl_GetStringFromObj(objv[0], NULL);\n\tif (objc != 2) {\n\t\tsprintf(log_buffer, missingfd, cmd);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\tif (Tcl_GetIntFromObj(interp, objv[1], &fd) != TCL_OK)\n\t\treturn TCL_ERROR;\n\n\tif ((ret = downrm(fd)) == -1) {\n\t\tTcl_PosixError(interp);\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(pbs_errno, cmd, Tcl_GetStringFromObj(objv[1], NULL));\n\t}\n\n\tSET_PBSERR(pbs_errno);\n\tTcl_SetObjResult(interp, Tcl_NewIntObj(ret));\n\treturn TCL_OK;\n}\n\nint\nConfigRM(ClientData clientData, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])\n{\n\tint fd, ret;\n\tchar *cmd, *filename;\n\n\tcmd = Tcl_GetStringFromObj(objv[0], NULL);\n\tif (objc != 3) {\n\t\tsprintf(log_buffer,\n\t\t\t\"%s: missing file descriptor or filename\", cmd);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\tif (Tcl_GetIntFromObj(interp, objv[1], &fd) != TCL_OK)\n\t\treturn TCL_ERROR;\n\n\tfilename = Tcl_GetStringFromObj(objv[2], NULL);\n\tret = configrm(fd, filename);\n\tif (ret == -1) {\n\t\tTcl_PosixError(interp);\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(pbs_errno, cmd, filename);\n\t}\n\n\tSET_PBSERR(pbs_errno);\n\tTcl_SetObjResult(interp, Tcl_NewIntObj(ret));\n\treturn TCL_OK;\n}\n\nint\nAddREQ(ClientData clientData, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])\n{\n\tint fd, ret;\n\tchar *cmd, *request;\n\n\tcmd = Tcl_GetStringFromObj(objv[0], NULL);\n\tif (objc != 3) 
{\n\t\tsprintf(log_buffer,\n\t\t\t\"%s: missing file descriptor or request\", cmd);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\tif (Tcl_GetIntFromObj(interp, objv[1], &fd) != TCL_OK)\n\t\treturn TCL_ERROR;\n\n\trequest = Tcl_GetStringFromObj(objv[2], NULL);\n\tret = addreq(fd, request);\n\tif (ret == -1) {\n\t\tTcl_PosixError(interp);\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(pbs_errno, cmd, request);\n\t}\n\n\tSET_PBSERR(pbs_errno);\n\tTcl_SetObjResult(interp, Tcl_NewIntObj(ret));\n\treturn TCL_OK;\n}\n\nint\nAllREQ(ClientData clientData, Tcl_Interp *interp, int argc, const char *argv[])\n{\n\tint ret;\n\n\tif (argc != 2) {\n\t\tsprintf(log_buffer, \"%s: missing request\", argv[0]);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\tret = allreq((char *) argv[1]);\n\tSET_PBSERR(pbs_errno);\n\tTcl_SetObjResult(interp, Tcl_NewIntObj(ret));\n\treturn TCL_OK;\n}\n\nint\nGetREQ(ClientData clientData, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])\n{\n\tint fd;\n\tchar *ret;\n\tchar *cmd;\n\n\tcmd = Tcl_GetStringFromObj(objv[0], NULL);\n\tif (objc != 2) {\n\t\tsprintf(log_buffer, missingfd, cmd);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\tif (Tcl_GetIntFromObj(interp, objv[1], &fd) != TCL_OK)\n\t\treturn TCL_ERROR;\n\n\tif ((ret = getreq(fd)) == NULL) {\n\t\tif (pbs_errno) {\n\t\t\tTcl_PosixError(interp);\n#ifdef NAS\n\t\t\tif (!quiet)\n#endif\n\t\t\t\tlog_err(pbs_errno, cmd,\n\t\t\t\t\tTcl_GetStringFromObj(objv[1], NULL));\n\t\t}\n\t\tSET_PBSERR(pbs_errno);\n\t} else {\n\t\tint err = 0;\n\n\t\tTcl_SetResult(interp, ret, (Tcl_FreeProc *) free);\n\t\tif (*ret == '?') {\n\t\t\tif (strlen(ret) > (size_t) 2 && /* look for err num */\n\t\t\t    Tcl_GetInt(interp, &ret[2], &err) != TCL_OK)\n\t\t\t\treturn TCL_ERROR;\n\t\t}\n\t\tSET_PBSERR(err);\n\t}\n\n\treturn TCL_OK;\n}\n\nint\nFlushREQ(ClientData clientData, Tcl_Interp *interp, int 
argc, const char *argv[])\n{\n\tif (argc != 1) {\n\t\tsprintf(log_buffer, badparm, (char *) argv[0]);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\tflushreq();\n\n\tSET_PBSERR(pbs_errno);\n\treturn TCL_OK;\n}\n\nint\nActiveREQ(ClientData clientData, Tcl_Interp *interp, int argc, const char *argv[])\n{\n\tint ret;\n\n\tif (argc != 1) {\n\t\tsprintf(log_buffer, badparm, argv[0]);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\tret = activereq();\n\tif (ret == -1) {\n\t\tTcl_PosixError(interp);\n\t\tsprintf(log_buffer, \"result %d\", ret);\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(pbs_errno, (char *) argv[0], log_buffer);\n\t}\n\n\tSET_PBSERR(pbs_errno);\n\tTcl_SetObjResult(interp, Tcl_NewIntObj(ret));\n\treturn TCL_OK;\n}\n\nint\nFullResp(ClientData clientData, Tcl_Interp *interp, int argc, const char *argv[])\n{\n\tint flag;\n\n\tif (argc != 2) {\n\t\tsprintf(log_buffer, \"%s: missing flag\", argv[0]);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\tif (Tcl_GetBoolean(interp, (char *) argv[1], &flag) != TCL_OK)\n\t\treturn TCL_ERROR;\n\n\tfullresp(flag);\n\tSET_PBSERR(0);\n\treturn TCL_OK;\n}\n\nint\nPBS_Connect(ClientData clientData, Tcl_Interp *interp, int argc, const char *argv[])\n{\n\tconst char *server = NULL;\n\n\tif (argc == 2)\n\t\tserver = argv[1];\n\telse if (argc != 1) {\n\t\tsprintf(log_buffer, \"%s: wrong # args: ?server?\", argv[0]);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\tlocal_disconnect(connector);\n\tpbs_errno = PBSE_NONE;\n\tif ((connector = pbs_connect(server)) < 0) {\n\t\tTcl_SetObjResult(interp, Tcl_NewIntObj(-1));\n\t\tsprintf(log_buffer, \"%s (%d)\",\n\t\t\tserver ? 
server : \"DefaultServer\",\n\t\t\tpbs_errno);\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(-1, (char *) argv[0], log_buffer);\n\t} else\n\t\tTcl_SetObjResult(interp, Tcl_NewIntObj(0));\n\n\tSET_PBSERR(pbs_errno);\n\treturn TCL_OK;\n}\n\nint\nPBS_Disconnect(ClientData clientData, Tcl_Interp *interp, int argc, const char *argv[])\n{\n\tif (argc != 1) {\n\t\tsprintf(log_buffer, badparm, argv[0]);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\tpbs_errno = PBSE_NONE;\n\tlocal_disconnect(connector);\n\tTcl_SetObjResult(interp, Tcl_NewIntObj(0));\n\tconnector = -1;\n\n\tSET_PBSERR(pbs_errno);\n\treturn TCL_OK;\n}\n\nTcl_Obj *\nattrlist(Tcl_Interp *interp, struct attrl *ap)\n{\n\tTcl_Obj *ret;\n\n\tret = Tcl_NewListObj(0, NULL); /* null list */\n\twhile (ap) {\n\t\tTcl_Obj *twol[2];\n\n\t\ttwol[0] = Tcl_NewStringObj(ap->name, -1);\n\t\tif (ap->resource) {\n\t\t\tTcl_AppendStringsToObj(twol[0],\n#ifdef NAS /* localmod 071 */\n\t\t\t\t\t       tcl_atrsep, ap->resource, NULL);\n#else\n\t\t\t\t\t       TCL_ATRSEP, ap->resource, NULL);\n#endif /* localmod 071 */\n\t\t}\n\t\ttwol[1] = Tcl_NewStringObj(ap->value, -1);\n\t\tTcl_ListObjAppendElement(interp, ret, Tcl_NewListObj(2, twol));\n\n\t\tap = ap->next;\n\t}\n\treturn (ret);\n}\n\nvoid\n\tbatresult(Tcl_Interp *interp, struct batch_status *bs)\n{\n\tTcl_Obj *batchl;\n\tstruct batch_status *bp;\n\n\tbatchl = Tcl_NewObj(); /* empty list */\n\tfor (bp = bs; bp; bp = bp->next) {\n\t\tTcl_Obj *threel[3];\n\n\t\tthreel[0] = Tcl_NewStringObj(bp->name, -1);\n\t\tthreel[1] = attrlist(interp, bp->attribs);\n\t\tthreel[2] = Tcl_NewStringObj(bp->text, -1);\n\n\t\tTcl_ListObjAppendElement(interp, batchl,\n\t\t\t\t\t Tcl_NewListObj(3, threel));\n\t}\n\tTcl_SetObjResult(interp, batchl);\n\tpbs_statfree(bs);\n}\n\nint\nPBS_StatServ(ClientData clientData, Tcl_Interp *interp, int argc, const char *argv[])\n{\n\tchar *msg;\n\tstruct batch_status *bs;\n\tTcl_Obj *threel[3];\n\n\tif (argc != 1) 
{\n\t\tsprintf(log_buffer, badparm, argv[0]);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\tif (connector < 0) {\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(-1, (char *) argv[0], not_connected);\n\t\tSET_PBSERR(PBSE_NOSERVER);\n\t\treturn TCL_OK;\n\t}\n\n\tif ((bs = pbs_statserver(connector, NULL, NULL)) == NULL) {\n\t\tif (pbs_errno != PBSE_NONE) {\n\t\t\tmsg = pbs_geterrmsg(connector);\n\t\t\tsprintf(log_buffer, \"%s (%d)\",\n\t\t\t\tmsg ? msg : fail, pbs_errno);\n#ifdef NAS\n\t\t\tif (!quiet)\n#endif\n\t\t\t\tlog_err(-1, (char *) argv[0], log_buffer);\n\t\t}\n\t} else {\n\t\tthreel[0] = Tcl_NewStringObj(bs->name, -1);\n\t\tthreel[1] = attrlist(interp, bs->attribs);\n\t\tthreel[2] = Tcl_NewStringObj(bs->text, -1);\n\n\t\tTcl_SetObjResult(interp, Tcl_NewListObj(3, threel));\n\n\t\tpbs_statfree(bs);\n\t}\n\n\tSET_PBSERR(pbs_errno);\n\treturn TCL_OK;\n}\n\nint\nPBS_StatJob(ClientData clientData, Tcl_Interp *interp, int argc, const char *argv[])\n{\n\tchar *msg;\n\tstruct batch_status *bs;\n\tchar *extend = NULL;\n\n\tif (argc > 2) { /* can have one argument for extend field */\n\t\tsprintf(log_buffer, badparm, argv[0]);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\tif (argc == 2) {\n\t\textend = (char *) argv[1];\n\t}\n\n\tif (connector < 0) {\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(-1, (char *) argv[0], not_connected);\n\t\tSET_PBSERR(PBSE_NOSERVER);\n\t\treturn TCL_OK;\n\t}\n\n\tif ((bs = pbs_statjob(connector, NULL, NULL, extend)) == NULL) {\n\t\tif (pbs_errno != PBSE_NONE) {\n\t\t\tmsg = pbs_geterrmsg(connector);\n\t\t\tsprintf(log_buffer, \"%s (%d)\",\n\t\t\t\tmsg ? 
msg : fail, pbs_errno);\n#ifdef NAS\n\t\t\tif (!quiet)\n#endif\n\t\t\t\tlog_err(-1, (char *) argv[0], log_buffer);\n\t\t}\n\t} else\n\t\tbatresult(interp, bs);\n\n\tSET_PBSERR(pbs_errno);\n\treturn TCL_OK;\n}\n\nint\nPBS_SelStat(ClientData clientData, Tcl_Interp *interp, int argc, const char *argv[])\n{\n\tchar *msg;\n\tstruct batch_status *bs;\n\n\tstatic struct attropl att1 = {\n\t\tNULL,\n\t\t\"queue_type\",\n\t\tNULL,\n\t\t\"E\",\n\t\tEQ};\n\tstatic struct attropl att2 = {\n\t\t&att1,\n\t\t\"job_state\",\n\t\tNULL,\n\t\t\"Q\",\n\t\tEQ};\n\n\tif (argc != 1) {\n\t\tsprintf(log_buffer, badparm, argv[0]);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\tif (connector < 0) {\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(-1, (char *) argv[0], not_connected);\n\t\tSET_PBSERR(PBSE_NOSERVER);\n\t\treturn TCL_OK;\n\t}\n\n\tif ((bs = pbs_selstat(connector, &att2, NULL, NULL)) == NULL) {\n\t\tif (pbs_errno != PBSE_NONE) {\n\t\t\tmsg = pbs_geterrmsg(connector);\n\t\t\tsprintf(log_buffer, \"%s (%d)\",\n\t\t\t\tmsg ? msg : fail, pbs_errno);\n#ifdef NAS\n\t\t\tif (!quiet)\n#endif\n\t\t\t\tlog_err(-1, (char *) argv[0], log_buffer);\n\t\t}\n\t} else\n\t\tbatresult(interp, bs);\n\n\tSET_PBSERR(pbs_errno);\n\treturn TCL_OK;\n}\n\nint\nPBS_StatQue(ClientData clientData, Tcl_Interp *interp, int argc, const char *argv[])\n{\n\tchar *msg;\n\tstruct batch_status *bs;\n\n\tif (argc != 1) {\n\t\tsprintf(log_buffer, badparm, argv[0]);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\tif (connector < 0) {\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(-1, (char *) argv[0], not_connected);\n\t\tSET_PBSERR(PBSE_NOSERVER);\n\t\treturn TCL_OK;\n\t}\n\n\tif ((bs = pbs_statque(connector, NULL, NULL, NULL)) == NULL) {\n\t\tif (pbs_errno != PBSE_NONE) {\n\t\t\tmsg = pbs_geterrmsg(connector);\n\t\t\tsprintf(log_buffer, \"%s (%d)\",\n\t\t\t\tmsg ? 
msg : fail, pbs_errno);\n#ifdef NAS\n\t\t\tif (!quiet)\n#endif\n\t\t\t\tlog_err(-1, (char *) argv[0], log_buffer);\n\t\t}\n\t} else\n\t\tbatresult(interp, bs);\n\n\tSET_PBSERR(pbs_errno);\n\treturn TCL_OK;\n}\n\nint\nPBS_StatNode(ClientData clientData, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])\n{\n\tchar *msg, *cmd;\n\tchar *node = NULL;\n\tstruct batch_status *bs;\n\n\tif (objc == 2)\n\t\tnode = Tcl_GetStringFromObj(objv[1], NULL);\n\telse if (objc != 1) {\n\t\tTcl_WrongNumArgs(interp, 1, objv, \"?node?\");\n\t\treturn TCL_ERROR;\n\t}\n\n\tcmd = Tcl_GetStringFromObj(objv[0], NULL);\n\tif (connector < 0) {\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(-1, cmd, not_connected);\n\t\tSET_PBSERR(PBSE_NOSERVER);\n\t\treturn TCL_OK;\n\t}\n\n\tif ((bs = pbs_statnode(connector, node, NULL, NULL)) == NULL) {\n\t\tif (pbs_errno != PBSE_NONE) {\n\t\t\tmsg = pbs_geterrmsg(connector);\n\t\t\tsprintf(log_buffer, \"%s (%d)\",\n\t\t\t\tmsg ? msg : fail, pbs_errno);\n#ifdef NAS\n\t\t\tif (!quiet)\n#endif\n\t\t\t\tlog_err(-1, cmd, log_buffer);\n\t\t}\n\t} else\n\t\tbatresult(interp, bs);\n\n\tSET_PBSERR(pbs_errno);\n\treturn TCL_OK;\n}\n\nint\nPBS_AsyRunJob(ClientData clientData, Tcl_Interp *interp, int argc, const char *argv[])\n{\n\tchar *msg;\n\tchar *location = NULL;\n\n\tif (argc == 3)\n\t\tlocation = (char *) argv[2];\n\telse if (argc != 2) {\n\t\tsprintf(log_buffer,\n\t\t\t\"%s: wrong # args: job_id ?location?\", argv[0]);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\tif (connector < 0) {\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(-1, (char *) argv[0], not_connected);\n\t\tSET_PBSERR(PBSE_NOSERVER);\n\t\treturn TCL_OK;\n\t}\n\n\tPBS_CALL(pbs_asyrunjob(connector, (char *) argv[1], location, NULL))\n\n\tSET_PBSERR(pbs_errno);\n\treturn TCL_OK;\n}\n\nint\nPBS_RunJob(ClientData clientData, Tcl_Interp *interp, int argc, const char *argv[])\n{\n\tchar *msg;\n\tchar *location = NULL;\n\n\tif (argc == 3)\n\t\tlocation 
= (char *) argv[2];\n\telse if (argc != 2) {\n\t\tsprintf(log_buffer,\n\t\t\t\"%s: wrong # args: job_id ?location?\", argv[0]);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\tif (connector < 0) {\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(-1, (char *) argv[0], not_connected);\n\t\tSET_PBSERR(PBSE_NOSERVER);\n\t\treturn TCL_OK;\n\t}\n\n\tPBS_CALL(pbs_runjob(connector, (char *) argv[1], location, NULL))\n\n\tSET_PBSERR(pbs_errno);\n\treturn TCL_OK;\n}\n\nint\nPBS_ReRun(ClientData clientData, Tcl_Interp *interp, int argc, const char *argv[])\n{\n\tchar *msg;\n\tchar *extend = \"0\";\n\n\tif (argc != 2) {\n\t\tsprintf(log_buffer,\n\t\t\t\"%s: wrong # args: job_id\", argv[0]);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\tif (connector < 0) {\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(-1, (char *) argv[0], not_connected);\n\t\tSET_PBSERR(PBSE_NOSERVER);\n\t\treturn TCL_OK;\n\t}\n\n\tPBS_CALL(pbs_rerunjob(connector, (char *) argv[1], extend))\n\n\tSET_PBSERR(pbs_errno);\n\n\treturn TCL_OK;\n}\n\nint\nPBS_MoveJob(ClientData clientData, Tcl_Interp *interp, int argc, const char *argv[])\n{\n\tchar *msg;\n\tchar *location = NULL;\n\tchar job_id_out[PBS_MAXCLTJOBID];\n\tchar server_out[MAXSERVERNAME];\n\n\tif (argc == 3)\n\t\tlocation = (char *) argv[2];\n\telse if (argc != 2) {\n\t\tsprintf(log_buffer,\n\t\t\t\"%s: wrong # args: job_id ?location?\", argv[0]);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\tif (connector < 0) {\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(-1, (char *) argv[0], not_connected);\n\t\tSET_PBSERR(PBSE_NOSERVER);\n\t\treturn TCL_OK;\n\t}\n\n\tif (get_server((char *) argv[1], job_id_out, server_out)) {\n\t\tmsg = pbs_geterrmsg(connector);\n\t\tsprintf(log_buffer, \"%s: %s (%d)\", argv[1],\n\t\t\tmsg ? 
msg : fail, pbs_errno);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\tPBS_CALL(pbs_movejob(connector, job_id_out, location, NULL))\n\n\tSET_PBSERR(pbs_errno);\n\treturn TCL_OK;\n}\n\nint\nPBS_DelJob(ClientData clientData, Tcl_Interp *interp, int argc, const char *argv[])\n{\n\tchar *msg;\n\tchar *message = NULL;\n\n\tif (argc == 3)\n\t\tmessage = (char *) argv[2];\n\telse if (argc != 2) {\n\t\tsprintf(log_buffer,\n\t\t\t\"%s: wrong # args: job_id ?message?\", argv[0]);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\tif (connector < 0) {\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(-1, (char *) argv[0], not_connected);\n\t\tSET_PBSERR(PBSE_NOSERVER);\n\t\treturn TCL_OK;\n\t}\n\n\tPBS_CALL(pbs_deljob(connector, (char *) argv[1], message))\n\n\tSET_PBSERR(pbs_errno);\n\treturn TCL_OK;\n}\n\nint\nPBS_HoldJob(ClientData clientData, Tcl_Interp *interp, int argc, const char *argv[])\n{\n\tchar *msg;\n\n\tif (argc != 2) {\n\t\tsprintf(log_buffer,\n\t\t\t\"%s: wrong # args: job_id\", argv[0]);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\tif (connector < 0) {\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(-1, (char *) argv[0], not_connected);\n\t\tSET_PBSERR(PBSE_NOSERVER);\n\t\treturn TCL_OK;\n\t}\n\n\tPBS_CALL(pbs_holdjob(connector, (char *) argv[1], SYSTEM_HOLD, NULL))\n\n\tSET_PBSERR(pbs_errno);\n\treturn TCL_OK;\n}\n\nint\nPBS_QueueOp(ClientData clientData, Tcl_Interp *interp, int argc, const char *argv[], struct attropl *attr)\n{\n\tint merr;\n\n\tif (argc != 2) {\n\t\tsprintf(log_buffer,\n\t\t\t\"%s: wrong # args: queue\", argv[0]);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\tif (connector < 0) {\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(-1, (char *) argv[0], not_connected);\n\t\tSET_PBSERR(PBSE_NOSERVER);\n\t\treturn TCL_OK;\n\t}\n\n\tmerr = pbs_manager(connector, MGR_CMD_SET, 
MGR_OBJ_QUEUE,\n\t\t\t   (char *) argv[1], attr, NULL);\n\tif (merr != 0) {\n\t\tsprintf(log_buffer, \"%s: %s %s\", argv[0], argv[1],\n\t\t\tpbs_geterrmsg(connector));\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\tTcl_SetObjResult(interp, Tcl_NewIntObj(0));\n\tSET_PBSERR(pbs_errno);\n\treturn TCL_OK;\n}\n\nint\nPBS_EnableQueue(ClientData clientData, Tcl_Interp *interp, int argc, const char *argv[])\n{\n\tstatic struct attropl attr = {NULL, \"enabled\", NULL, \"TRUE\", SET};\n\treturn PBS_QueueOp(clientData, interp, argc, argv, &attr);\n}\n\nint\nPBS_DisableQueue(ClientData clientData, Tcl_Interp *interp, int argc, const char *argv[])\n{\n\tstatic struct attropl attr = {NULL, \"enabled\", NULL, \"FALSE\", SET};\n\treturn PBS_QueueOp(clientData, interp, argc, argv, &attr);\n}\n\nint\nPBS_StartQueue(ClientData clientData, Tcl_Interp *interp, int argc, const char *argv[])\n{\n\tstatic struct attropl attr = {NULL, \"started\", NULL, \"TRUE\", SET};\n\treturn PBS_QueueOp(clientData, interp, argc, argv, &attr);\n}\n\nint\nPBS_StopQueue(ClientData clientData, Tcl_Interp *interp, int argc, const char *argv[])\n{\n\tstatic struct attropl attr = {NULL, \"started\", NULL, \"FALSE\", SET};\n\treturn PBS_QueueOp(clientData, interp, argc, argv, &attr);\n}\n\nint\nPBS_AlterJob(ClientData clientData, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])\n{\n\tstatic char id[] = \"PBS_AlterJob\";\n\tchar *msg;\n\tint i, ret;\n\tTcl_Size num;\n\tTcl_Size tre;\n\tTcl_Obj **listp, **indp;\n\tstruct attrl *attrs, *atp = NULL;\n\tchar *cmd, *jobid;\n\n\tif (objc != 3) {\n\t\tTcl_WrongNumArgs(interp, 1, objv, \"job_id attribute(s)\");\n\t\treturn TCL_ERROR;\n\t}\n\n\tif ((ret = Tcl_ListObjGetElements(interp, objv[2],\n\t\t\t\t\t  &num, &listp)) != TCL_OK)\n\t\treturn ret;\n\tcmd = Tcl_GetStringFromObj(objv[0], NULL);\n\tattrs = NULL;\n\tfor (i = 0; i < num; i++) {\n\t\tif ((ret = Tcl_ListObjGetElements(interp, listp[i],\n\t\t\t\t\t\t  &tre, &indp)) 
!= TCL_OK)\n\t\t\tgoto done;\n\t\tif (tre != 3) {\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"%s: bad attribute format: %s\",\n\t\t\t\tcmd, Tcl_GetStringFromObj(listp[i], NULL));\n\t\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\t\tret = TCL_ERROR;\n\t\t\tgoto done;\n\t\t}\n\t\tatp = new_attrl();\n\t\tif (atp == NULL) {\n\t\t\tsprintf(log_buffer, \"Unable to allocate memory (malloc error)\");\n#ifdef NAS\n\t\t\tif (!quiet)\n#endif\n\t\t\t\tlog_err(errno, id, log_buffer);\n\t\t\tret = TCL_ERROR;\n\t\t\tgoto done; /* free any attrl entries built so far */\n\t\t}\n\t\tif ((atp->name = strdup(Tcl_GetStringFromObj(indp[0], NULL))) == NULL) {\n\t\t\tsprintf(log_buffer, \"Unable to allocate memory (malloc error)\");\n#ifdef NAS\n\t\t\tif (!quiet)\n#endif\n\t\t\t\tlog_err(errno, id, log_buffer);\n\t\t\tfree(atp);\n\t\t\tret = TCL_ERROR;\n\t\t\tgoto done;\n\t\t}\n\t\tif ((atp->resource = strdup(Tcl_GetStringFromObj(indp[1], NULL))) == NULL) {\n\t\t\tsprintf(log_buffer, \"Unable to allocate memory (malloc error)\");\n#ifdef NAS\n\t\t\tif (!quiet)\n#endif\n\t\t\t\tlog_err(errno, id, log_buffer);\n\t\t\tfree(atp->name);\n\t\t\tfree(atp);\n\t\t\tret = TCL_ERROR;\n\t\t\tgoto done;\n\t\t}\n\t\tif ((atp->value = strdup(Tcl_GetStringFromObj(indp[2], NULL))) == NULL) {\n\t\t\tsprintf(log_buffer, \"Unable to allocate memory (malloc error)\");\n#ifdef NAS\n\t\t\tif (!quiet)\n#endif\n\t\t\t\tlog_err(errno, id, log_buffer);\n\t\t\tfree(atp->resource);\n\t\t\tfree(atp->name);\n\t\t\tfree(atp);\n\t\t\tret = TCL_ERROR;\n\t\t\tgoto done;\n\t\t}\n\t\tatp->next = attrs;\n\t\tattrs = atp;\n\t}\n\n\tif (connector < 0) {\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(-1, cmd, not_connected);\n\t\tSET_PBSERR(PBSE_NOSERVER);\n\t\tgoto done;\n\t}\n\n\tjobid = Tcl_GetStringFromObj(objv[1], NULL);\n\tif (pbs_alterjob(connector, jobid, attrs, NULL)) {\n\t\tTcl_SetObjResult(interp, Tcl_NewIntObj(-1));\n\t\tmsg = pbs_geterrmsg(connector);\n\t\tsprintf(log_buffer, \"%s: %s (%d)\", jobid,\n\t\t\tmsg ? 
msg : fail, pbs_errno);\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(-1, cmd, log_buffer);\n\t} else\n\t\tTcl_SetObjResult(interp, Tcl_NewIntObj(0));\n\ndone:\n\tfor (atp = attrs; attrs; atp = attrs) {\n\t\tattrs = atp->next;\n\t\tfree(atp->name);\n\t\tfree(atp->resource);\n\t\tfree(atp->value);\n\t\tfree(atp);\n\t}\n\n\tSET_PBSERR(pbs_errno);\n\treturn ret;\n}\n\nint\nPBS_RescQuery(ClientData clientData, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])\n{\n\tstatic char id[] = \"PBS_RescQuery\";\n\tchar *msg;\n\tint i, ret;\n\tTcl_Size num;\n\tTcl_Obj **listp, *fourl[4], *retl;\n\tchar *cmd;\n\tchar **res_array;\n\tint *avail_array, *alloc_array, *reser_array, *down_array;\n\n\tif (objc != 2) {\n\t\tTcl_WrongNumArgs(interp, 1, objv, \"{resource1 resource2 ...}\");\n\t\treturn TCL_ERROR;\n\t}\n\n\tcmd = Tcl_GetStringFromObj(objv[0], NULL);\n\tif ((ret = Tcl_ListObjGetElements(interp, objv[1],\n\t\t\t\t\t  &num, &listp)) != TCL_OK)\n\t\treturn ret;\n\tif (num == 0) {\n\t\tsprintf(log_buffer, \"%s: null resource list\", cmd);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\tif (connector < 0) {\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(-1, cmd, not_connected);\n\t\tSET_PBSERR(PBSE_NOSERVER);\n\t\treturn TCL_OK;\n\t}\n\n\tres_array = (char **) malloc(sizeof(char *) * num);\n\tif (res_array == NULL) {\n\t\tsprintf(log_buffer, \"Unable to allocate memory (malloc error)\");\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(errno, id, log_buffer);\n\t\treturn TCL_ERROR;\n\t}\n\tavail_array = (int *) malloc(sizeof(int) * num);\n\tif (avail_array == NULL) {\n\t\tsprintf(log_buffer, \"Unable to allocate memory (malloc error)\");\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(errno, id, log_buffer);\n\t\tfree(res_array);\n\t\treturn TCL_ERROR;\n\t}\n\talloc_array = (int *) malloc(sizeof(int) * num);\n\tif (alloc_array == NULL) {\n\t\tsprintf(log_buffer, \"Unable to allocate memory (malloc error)\");\n#ifdef 
NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(errno, id, log_buffer);\n\t\tfree(res_array);\n\t\tfree(avail_array);\n\t\treturn TCL_ERROR;\n\t}\n\treser_array = (int *) malloc(sizeof(int) * num);\n\tif (reser_array == NULL) {\n\t\tsprintf(log_buffer, \"Unable to allocate memory (malloc error)\");\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(errno, id, log_buffer);\n\t\tfree(res_array);\n\t\tfree(avail_array);\n\t\tfree(alloc_array);\n\t\treturn TCL_ERROR;\n\t}\n\tdown_array = (int *) malloc(sizeof(int) * num);\n\tif (down_array == NULL) {\n\t\tsprintf(log_buffer, \"Unable to allocate memory (malloc error)\");\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(errno, id, log_buffer);\n\t\tfree(res_array);\n\t\tfree(avail_array);\n\t\tfree(alloc_array);\n\t\tfree(reser_array);\n\t\treturn TCL_ERROR;\n\t}\n\tfor (i = 0; i < num; i++)\n\t\tres_array[i] = Tcl_GetStringFromObj(listp[i], NULL);\n\n\tretl = Tcl_NewObj(); /* empty list */\n\tif (pbs_rescquery(connector, res_array, num,\n\t\t\t  avail_array, alloc_array, reser_array, down_array)) {\n\t\tmsg = pbs_geterrmsg(connector);\n\t\tsprintf(log_buffer, \"%s (%d)\", msg ? 
msg : fail, pbs_errno);\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(-1, cmd, log_buffer);\n\t} else {\n\t\tfor (i = 0; i < num; i++) {\n\t\t\tfourl[0] = Tcl_NewIntObj(avail_array[i]);\n\t\t\tfourl[1] = Tcl_NewIntObj(alloc_array[i]);\n\t\t\tfourl[2] = Tcl_NewIntObj(reser_array[i]);\n\t\t\tfourl[3] = Tcl_NewIntObj(down_array[i]);\n\n\t\t\tTcl_ListObjAppendElement(interp, retl,\n\t\t\t\t\t\t Tcl_NewListObj(4, fourl));\n\t\t}\n\t}\n\tTcl_SetObjResult(interp, retl);\n\n\tfree(res_array);\n\tfree(avail_array);\n\tfree(alloc_array);\n\tfree(reser_array);\n\tfree(down_array);\n\n\tSET_PBSERR(pbs_errno);\n\treturn TCL_OK;\n}\n\nint\nPBS_RescReserve(ClientData clientData, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])\n{\n\tstatic char id[] = \"PBS_RescReserve\";\n\tchar *msg;\n\tint i, ret;\n\tTcl_Size num;\n\tTcl_Obj **listp;\n\tchar *cmd;\n\tchar **res_array;\n\tpbs_resource_t resid;\n\n\tif (objc != 3) {\n\t\tTcl_WrongNumArgs(interp, 1, objv,\n\t\t\t\t \"resource_id {resource1 resource2 ...}\");\n\t\treturn TCL_ERROR;\n\t}\n\n\tif (Tcl_GetIntFromObj(interp, objv[1], &resid) != TCL_OK)\n\t\treturn TCL_ERROR;\n\n\tcmd = Tcl_GetStringFromObj(objv[0], NULL);\n\tif ((ret = Tcl_ListObjGetElements(interp, objv[2],\n\t\t\t\t\t  &num, &listp)) != TCL_OK)\n\t\treturn ret;\n\tif (num == 0) {\n\t\tsprintf(log_buffer, \"%s: null resource list\", cmd);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\tif (connector < 0) {\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(-1, cmd, not_connected);\n\t\tSET_PBSERR(PBSE_NOSERVER);\n\t\treturn TCL_OK;\n\t}\n\n\tres_array = (char **) malloc(sizeof(char *) * num);\n\tif (res_array == NULL) {\n\t\tsprintf(log_buffer, \"Unable to allocate memory (malloc error)\");\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(errno, id, log_buffer);\n\t\treturn TCL_ERROR;\n\t}\n\tfor (i = 0; i < num; i++)\n\t\tres_array[i] = Tcl_GetStringFromObj(listp[i], NULL);\n\n\tpbs_errno = 0;\n\tif 
(pbs_rescreserve(connector, res_array, num, &resid) != 0) {\n\t\tmsg = pbs_geterrmsg(connector);\n\t\tsprintf(log_buffer, \"%s (%d)\", msg ? msg : fail, pbs_errno);\n\t\tSET_PBSMSG(log_buffer);\n\t}\n\tTcl_SetObjResult(interp, Tcl_NewIntObj(resid));\n\n\tSET_PBSERR(pbs_errno);\n\treturn TCL_OK;\n}\n\nint\nPBS_RescRelease(ClientData clientData, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])\n{\n\tchar *msg;\n\tint ret;\n\tchar *cmd;\n\tpbs_resource_t resid;\n\n\tif (objc != 2) {\n\t\tTcl_WrongNumArgs(interp, 1, objv, \"resource_id\");\n\t\treturn TCL_ERROR;\n\t}\n\n\tif (Tcl_GetIntFromObj(interp, objv[1], &resid) != TCL_OK)\n\t\treturn TCL_ERROR;\n\n\tcmd = Tcl_GetStringFromObj(objv[0], NULL);\n\n\tif (connector < 0) {\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(-1, cmd, not_connected);\n\t\tSET_PBSERR(PBSE_NOSERVER);\n\t\treturn TCL_OK;\n\t}\n\n\tif ((ret = pbs_rescrelease(connector, resid)) != 0) {\n\t\tmsg = pbs_geterrmsg(connector);\n\t\tsprintf(log_buffer, \"%s (%d)\", msg ? msg : fail, pbs_errno);\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(-1, cmd, log_buffer);\n\t}\n\tTcl_SetObjResult(interp, Tcl_NewIntObj(ret));\n\n\tSET_PBSERR(pbs_errno);\n\treturn TCL_OK;\n}\n\nint\nPBS_ResvStatus(ClientData clientData, Tcl_Interp *interp, int argc, const char *argv[])\n{\n\tchar *msg;\n\tstruct batch_status *bs;\n\n\tif (argc != 1) {\n\t\tsprintf(log_buffer, badparm, argv[0]);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\tif (connector < 0) {\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(-1, argv[0], not_connected);\n\t\tSET_PBSERR(PBSE_NOSERVER);\n\t\treturn TCL_OK;\n\t}\n\tif ((bs = pbs_statresv(connector, NULL, NULL, NULL)) == NULL) {\n\t\tif (pbs_errno != PBSE_NONE) {\n\t\t\tmsg = pbs_geterrmsg(connector);\n\t\t\tsprintf(log_buffer, \"%s (%d)\",\n\t\t\t\tmsg ? 
msg : fail, pbs_errno);\n#ifdef NAS\n\t\t\tif (!quiet)\n#endif\n\t\t\t\tlog_err(-1, argv[0], log_buffer);\n\t\t}\n\t} else\n\t\tbatresult(interp, bs);\n\tSET_PBSERR(pbs_errno);\n\treturn TCL_OK;\n}\n\nint\nPBS_ResvConfirm(ClientData clientData, Tcl_Interp *interp, int argc, const char *argv[])\n{\n\tconst char *msg = NULL;\n\tunsigned long stime = 0;\n\n\tif (argc < 2 || argc > 4) {\n\t\tsprintf(log_buffer,\n\t\t\t\"%s: wrong # args: resv_id vnodes ?stime? ?reason?\",\n\t\t\targv[0]);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\tif (argc == 4) {\n\t\tmsg = argv[3];\n\t}\n\tif (argc == 3) {\n\t\tstime = strtoul(argv[2], NULL, 10);\n\t}\n\tif (connector < 0) {\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(-1, argv[0], not_connected);\n\t\tSET_PBSERR(PBSE_NOSERVER);\n\t\treturn TCL_OK;\n\t}\n\tPBS_CALL(pbs_confirmresv(connector, argv[1], argv[2], stime, msg))\n\tSET_PBSERR(pbs_errno);\n\treturn TCL_OK;\n}\n\nint\nPBS_ResvDelete(ClientData clientData, Tcl_Interp *interp, int argc, const char *argv[])\n{\n\tconst char *msg = NULL;\n\n\tif (argc < 2 || argc > 3) {\n\t\tsprintf(log_buffer,\n\t\t\t\"%s: wrong # args: resv_id ?reason?\", argv[0]);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\tif (argc == 3)\n\t\tmsg = argv[2];\n\tif (connector < 0) {\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(-1, argv[0], not_connected);\n\t\tSET_PBSERR(PBSE_NOSERVER);\n\t\treturn TCL_OK;\n\t}\n\tPBS_CALL(pbs_delresv(connector, argv[1], msg))\n\tSET_PBSERR(pbs_errno);\n\treturn TCL_OK;\n}\n\nint\nLogMsg(ClientData clientData, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])\n{\n\tchar *tag = NULL;\n\tchar *msg = NULL;\n\n\tif (objc != 3) {\n\t\tTcl_WrongNumArgs(interp, 1, objv, \"tag message\");\n\t\treturn TCL_ERROR;\n\t} else {\n\t\ttag = Tcl_GetStringFromObj(objv[1], NULL);\n\t\tmsg = Tcl_GetStringFromObj(objv[2], NULL);\n\t}\n\n\tif (connector < 0) {\n#ifdef NAS\n\t\tif 
(!quiet)\n#endif\n\t\t\tlog_err(-1, tag, not_connected);\n\t\tTcl_SetObjResult(interp, Tcl_NewIntObj(-1));\n\t} else {\n#ifdef NAS\n\t\tif (!quiet)\n#endif\n\t\t\tlog_err(-1, tag, msg);\n\t\tTcl_SetObjResult(interp, Tcl_NewIntObj(0));\n\t}\n\n\treturn TCL_OK;\n}\n\nint\nDateTime(ClientData clientData, Tcl_Interp *interp, int argc, const char *argv[])\n{\n\ttime_t when;\n\tstruct tm tm, *t = NULL;\n\tint i, yyy, len;\n\tchar rtime[64], hold[8];\n\tstatic char *wkday[] = {\"Sun\", \"Mon\", \"Tue\", \"Wed\",\n\t\t\t\t\"Thu\", \"Fri\", \"Sat\", NULL};\n\n\tswitch (argc) {\n\n\t\tcase 1: /* current date/time */\n\t\t\twhen = time(NULL);\n\t\t\tsprintf(log_buffer, \"%ld\", (long) when);\n\t\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\t\treturn TCL_OK;\n\n\t\tcase 2:\n\t\t\tsnprintf(rtime, sizeof(rtime), \"%s\", argv[1]);\n\t\t\tlen = strlen(rtime);\n\t\t\twhen = 0;\n\t\t\tif (len < 12)\n\t\t\t\tbreak;\n\n\t\t\t/* absolute date/time */\n\t\t\tfor (i = 0; i < len; i++) {\n\t\t\t\tif (!isdigit(rtime[i]))\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\tif (i != len || len > 14) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"%s: bad absolute date format: %s\",\n\t\t\t\t\targv[0], rtime);\n\t\t\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\t\t\treturn TCL_ERROR;\n\t\t\t}\n\n\t\t\tyyy = len - 10;\n\t\t\tfor (i = 0; i < yyy; i++)\n\t\t\t\thold[i] = rtime[i];\n\t\t\thold[i] = '\\0';\n\t\t\ttm.tm_year = atoi(hold);\n\n\t\t\thold[0] = rtime[i++];\n\t\t\thold[1] = rtime[i++];\n\t\t\thold[2] = '\\0';\n\t\t\ttm.tm_mon = atoi(hold) - 1;\n\n\t\t\thold[0] = rtime[i++];\n\t\t\thold[1] = rtime[i++];\n\t\t\ttm.tm_mday = atoi(hold);\n\n\t\t\thold[0] = rtime[i++];\n\t\t\thold[1] = rtime[i++];\n\t\t\ttm.tm_hour = atoi(hold);\n\n\t\t\thold[0] = rtime[i++];\n\t\t\thold[1] = rtime[i++];\n\t\t\ttm.tm_min = atoi(hold);\n\n\t\t\thold[0] = rtime[i++];\n\t\t\thold[1] = rtime[i];\n\t\t\ttm.tm_sec = atoi(hold);\n\t\t\ttm.tm_isdst = -1;\n\n\t\t\twhen = mktime(&tm);\n\t\t\tif (when == -1) 
{\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"%s: could not convert date: %s\",\n\t\t\t\t\targv[0], rtime);\n\t\t\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\t\t\treturn TCL_ERROR;\n\t\t\t}\n\t\t\tTcl_SetObjResult(interp, Tcl_NewLongObj((long) when));\n\t\t\treturn TCL_OK;\n\n\t\tcase 3: /* relative weekday */\n\t\t\tfor (i = 0; wkday[i]; i++) {\n\t\t\t\tif (strcmp(argv[1], wkday[i]) == 0)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\tif (wkday[i] == NULL) {\n\t\t\t\tsprintf(log_buffer,\n\t\t\t\t\t\"%s: unrecognized weekday: %s\",\n\t\t\t\t\targv[0], argv[1]);\n\t\t\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\t\t\treturn TCL_ERROR;\n\t\t\t}\n\t\t\twhen = time(NULL);\n\t\t\tt = localtime(&when);\n\t\t\tt->tm_mday += (i - t->tm_wday + 7) % 7;\n\t\t\tt->tm_hour = 0;\n\t\t\tt->tm_min = 0;\n\t\t\tt->tm_sec = 0;\n\t\t\tt->tm_isdst = -1;\n\t\t\twhen = mktime(t);\n\t\t\tsnprintf(rtime, sizeof(rtime), \"%s\", argv[2]);\n\t\t\tlen = strlen(rtime);\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\tsprintf(log_buffer,\n\t\t\t\t\"%s: wrong # args: ?day? 
?time?\", argv[0]);\n\t\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\t\treturn TCL_ERROR;\n\t}\n\n\tif (len != 8 || rtime[2] != ':' || rtime[5] != ':' ||\n\t    !isdigit(rtime[0]) || !isdigit(rtime[1]) ||\n\t    !isdigit(rtime[3]) || !isdigit(rtime[4]) ||\n\t    !isdigit(rtime[6]) || !isdigit(rtime[7])) {\n\t\tsprintf(log_buffer,\n\t\t\t\"%s: bad relative time format: %s\", argv[0], rtime);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\trtime[2] = rtime[5] = '\\0';\n\twhen += atoi(&rtime[0]) * 3600 +\n\t\tatoi(&rtime[3]) * 60 + atoi(&rtime[6]);\n\n\tTcl_SetObjResult(interp, Tcl_NewLongObj((long) when));\n\treturn TCL_OK;\n}\n\nint\nStrFtime(ClientData clientData, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])\n{\n\tstruct tm *t;\n\tlong hold;\n\ttime_t when;\n\n\tif (objc != 3) {\n\t\tsprintf(log_buffer,\n\t\t\t\"%s: wrong # args: format time\",\n\t\t\tTcl_GetStringFromObj(objv[0], NULL));\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\tif (Tcl_GetLongFromObj(interp, objv[2], &hold) != TCL_OK)\n\t\treturn TCL_ERROR;\n\n\twhen = (time_t) hold;\n\tt = localtime(&when);\n\t(void) strftime(log_buffer, LOG_BUF_SIZE,\n\t\t\tTcl_GetStringFromObj(objv[1], NULL), t);\n\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\n\treturn TCL_OK;\n}\n\nint\nPBS_PbsPortInfoCmd(ClientData clientData, Tcl_Interp *interp, int objc, Tcl_Obj *const objv[])\n{\n\tint index, result;\n\tstatic const char *subCmds[] = {\n\t\t\"batch_service_port\", \"batch_service_port_dis\",\n\t\t\"mom_service_port\", \"manager_service_port\",\n\t\tNULL};\n\tenum ISubCmdIdx {\n\t\tIBatchSvcIdx,\n\t\tIBatchSvcDisIdx,\n\t\tIMomSvcIdx,\n\t\tIManSvcIdx\n\t};\n\n\tif (objc != 2) {\n\t\tTcl_WrongNumArgs(interp, 1, objv, \"batch_service_port|batch_service_port_dis|mom_service_port|manager_service_port\");\n\t\treturn TCL_ERROR;\n\t}\n\n\tresult = Tcl_GetIndexFromObj(interp, objv[1], subCmds, \"option\", 0, &index);\n\tif 
(result != TCL_OK) {\n\t\treturn result;\n\t}\n\n\tswitch (index) {\n\t\tcase IBatchSvcIdx:\n\t\t\tresult = pbs_conf.batch_service_port;\n\t\t\tbreak;\n\t\tcase IBatchSvcDisIdx:\n\t\t\tresult = pbs_conf.batch_service_port_dis;\n\t\t\tbreak;\n\t\tcase IMomSvcIdx:\n\t\t\tresult = pbs_conf.mom_service_port;\n\t\t\tbreak;\n\t\tcase IManSvcIdx:\n\t\t\tresult = pbs_conf.manager_service_port;\n\t\t\tbreak;\n\t}\n\n\tTcl_SetObjResult(interp, Tcl_NewIntObj(result));\n\treturn TCL_OK;\n}\n\nvoid\n\tadd_cmds(Tcl_Interp *interp)\n{\n\textern void site_cmds(Tcl_Interp * interp);\n\n\tTcl_CreateObjCommand(interp, \"openrm\", OpenRM, NULL, NULL);\n\tTcl_CreateObjCommand(interp, \"closerm\", CloseRM, NULL, NULL);\n\tTcl_CreateObjCommand(interp, \"downrm\", DownRM, NULL, NULL);\n\tTcl_CreateObjCommand(interp, \"configrm\", ConfigRM, NULL, NULL);\n\tTcl_CreateObjCommand(interp, \"getreq\", GetREQ, NULL, NULL);\n\tTcl_CreateObjCommand(interp, \"addreq\", AddREQ, NULL, NULL);\n\tTcl_CreateCommand(interp, \"allreq\", AllREQ, NULL, NULL);\n\tTcl_CreateCommand(interp, \"flushreq\", FlushREQ, NULL, NULL);\n\tTcl_CreateCommand(interp, \"activereq\", ActiveREQ, NULL, NULL);\n\tTcl_CreateCommand(interp, \"fullresp\", FullResp, NULL, NULL);\n\tTcl_CreateObjCommand(interp, \"pbsportinfo\", PBS_PbsPortInfoCmd,\n\t\t\t     NULL, NULL);\n\tTcl_CreateCommand(interp, \"pbsconnect\", PBS_Connect, NULL, NULL);\n\tTcl_CreateCommand(interp, \"pbsdisconnect\", PBS_Disconnect, NULL, NULL);\n\tTcl_CreateCommand(interp, \"pbsstatserv\", PBS_StatServ, NULL, NULL);\n\tTcl_CreateCommand(interp, \"pbsstatjob\", PBS_StatJob, NULL, NULL);\n\tTcl_CreateCommand(interp, \"pbsstatque\", PBS_StatQue, NULL, NULL);\n\tTcl_CreateObjCommand(interp, \"pbsstatnode\", PBS_StatNode, NULL, NULL);\n\tTcl_CreateCommand(interp, \"pbsselstat\", PBS_SelStat, NULL, NULL);\n\tTcl_CreateCommand(interp, \"pbsrunjob\", PBS_RunJob, NULL, NULL);\n\tTcl_CreateCommand(interp, \"pbsmovejob\", PBS_MoveJob, NULL, 
NULL);\n\tTcl_CreateCommand(interp, \"pbsqenable\", PBS_EnableQueue, NULL, NULL);\n\tTcl_CreateCommand(interp, \"pbsqdisable\", PBS_DisableQueue, NULL, NULL);\n\tTcl_CreateCommand(interp, \"pbsqstart\", PBS_StartQueue, NULL, NULL);\n\tTcl_CreateCommand(interp, \"pbsqstop\", PBS_StopQueue, NULL, NULL);\n\tTcl_CreateCommand(interp, \"pbsasyrunjob\", PBS_AsyRunJob, NULL, NULL);\n\tTcl_CreateCommand(interp, \"pbsdeljob\", PBS_DelJob, NULL, NULL);\n\tTcl_CreateCommand(interp, \"pbsholdjob\", PBS_HoldJob, NULL, NULL);\n\tTcl_CreateObjCommand(interp, \"pbsalterjob\", PBS_AlterJob, NULL, NULL);\n\tTcl_CreateObjCommand(interp, \"pbsrescquery\", PBS_RescQuery, NULL, NULL);\n\tTcl_CreateObjCommand(interp, \"pbsrescreserve\", PBS_RescReserve,\n\t\t\t     NULL, NULL);\n\tTcl_CreateObjCommand(interp, \"pbsrescrelease\", PBS_RescRelease,\n\t\t\t     NULL, NULL);\n\tTcl_CreateCommand(interp, \"pbsresvstat\", PBS_ResvStatus, NULL, NULL);\n\tTcl_CreateCommand(interp, \"pbsresvconf\", PBS_ResvConfirm, NULL, NULL);\n\tTcl_CreateCommand(interp, \"pbsresvdel\", PBS_ResvDelete, NULL, NULL);\n\tTcl_CreateObjCommand(interp, \"logmsg\", LogMsg, NULL, NULL);\n\tTcl_CreateCommand(interp, \"datetime\", DateTime, NULL, NULL);\n\tTcl_CreateObjCommand(interp, \"strftime\", StrFtime, NULL, NULL);\n\n\t/*\n\t * Extended scheduler commands from Univ. of Colorado\n\t */\n\tTcl_CreateCommand(interp, \"pbsrerunjob\", PBS_ReRun, NULL, NULL);\n\n\t/*\n\t * Initialize global variables pbs_errno and pbs_errmsg\n\t */\n\tpbserr = Tcl_NewStringObj(\"pbs_errno\", -1);\n\tTcl_ObjSetVar2(interp, pbserr, NULL, Tcl_NewIntObj((0)),\n\t\t       TCL_GLOBAL_ONLY | TCL_LEAVE_ERR_MSG);\n\tpbsmsg = Tcl_NewStringObj(\"pbs_errmsg\", -1);\n\tTcl_ObjSetVar2(interp, pbsmsg, NULL, Tcl_NewStringObj(\"no msg\", -1),\n\t\t       TCL_GLOBAL_ONLY | TCL_LEAVE_ERR_MSG);\n\n#ifdef NAS /* localmod 071 */\n\tif (tcl_atrsep)\n\t\tfree(tcl_atrsep);\n\ttcl_atrsep = strdup(TCL_ATRSEP);\n#endif /* localmod 071 */\n\tsite_cmds(interp);\n}\n"
  },
  {
    "path": "src/tools/pbs_upgrade_job.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file pbs_upgrade_job.c\n *\n * @brief\n *\t\tpbs_upgrade_job.c - This file contains the functions to read in an older .JB file (from 13.x to 19.x versions)\n *\t\tand convert it into the newer format.\n * @par\n *\t\tThis tool is required because changes to the PBS macros defined in pbs_ifl.h, server_limits.h,\n *\t\tor other headers - macros that PBS also uses in the job structure (see job.h) - alter the\n *\t\tsize of the jobfix and taskfix structures.\n *\n * Functions included are:\n * \tmain()\n * \tprint_usage()\n * \tcheck_job_file()\n * \tupgrade_job_file()\n * \tupgrade_task_file()\n */\n\n/* Need to define PBS_MOM to get the pbs_task structure from job.h */\n#define PBS_MOM\n\n#include \"pbs_config.h\"\n\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <unistd.h>\n#include <fcntl.h>\n#include <dirent.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <errno.h>\n#include \"pbs_ifl.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"job.h\"\n#include \"tm.h\"\n#include \"server_limits.h\"\n#include \"pbs_version.h\"\n\n#define O_BINARY 0\n\n/*\n * Define macros that controlled the size of the jobfix and taskfix structure (see job.h)\n * from 13.x to pre 19.x versions. 
Append the _PRE19 suffix to each.\n */\n\n/* From pbs_ifl.h */\n#define PBS_MAXSEQNUM_PRE19 7\n#define PBS_MAXSVRJOBID_PRE19 (PBS_MAXSEQNUM_PRE19 - 1 + PBS_MAXSERVERNAME + PBS_MAXPORTNUM + 2)\n#define PBS_MAXSVRJOBID_19_21 (PBS_MAXSEQNUM - 1 + PBS_MAXSERVERNAME + PBS_MAXPORTNUM + 2)\n\n/*\n * Replicate the jobfix structure as it was defined in versions 19.x - 21.x\n */\ntypedef struct jobfix_19_21 {\n\tint ji_jsversion;   /* job structure version - JSVERSION */\n\tint ji_state;\t    /* internal copy of state */\n\tint ji_substate;    /* job sub-state */\n\tint ji_svrflags;    /* server flags */\n\tint ji_numattr;\t    /* not used */\n\tint ji_ordering;    /* special scheduling ordering */\n\tint ji_priority;    /* internal priority */\n\ttime_t ji_stime;    /* time job started execution */\n\ttime_t ji_endtBdry; /* estimate upper bound on end time */\n\n\tchar ji_jobid[PBS_MAXSVRJOBID_19_21 + 1]; /* job identifier */\n\tchar ji_fileprefix[PBS_JOBBASE + 1];\t  /* no longer used */\n\tchar ji_queue[PBS_MAXQUEUENAME + 1];\t  /* name of current queue */\n\tchar ji_destin[PBS_MAXROUTEDEST + 1];\t  /* dest from qmove/route */\n\t/* MomS for execution    */\n\n\tint ji_un_type;\t\t\t\t /* type of ji_un union */\n\tunion {\t\t\t\t\t /* depends on type of queue currently in */\n\t\tstruct {\t\t\t /* if in execution queue .. 
*/\n\t\t\tpbs_net_t ji_momaddr;\t /* host addr of Server */\n\t\t\tunsigned int ji_momport; /* port # */\n\t\t\tint ji_exitstat;\t /* job exit status from MOM */\n\t\t} ji_exect;\n\t\tstruct {\n\t\t\ttime_t ji_quetime;  /* time entered queue */\n\t\t\ttime_t ji_rteretry; /* route retry time */\n\t\t} ji_routet;\n\t\tstruct {\n\t\t\tint ji_fromsock;\t  /* socket job coming over */\n\t\t\tpbs_net_t ji_fromaddr;\t  /* host job coming from   */\n\t\t\tunsigned int ji_scriptsz; /* script size */\n\t\t} ji_newt;\n\t\tstruct {\n\t\t\tpbs_net_t ji_svraddr; /* host addr of Server */\n\t\t\tint ji_exitstat;      /* job exit status from MOM */\n\t\t\tuid_t ji_exuid;\t      /* execution uid */\n\t\t\tgid_t ji_exgid;\t      /* execution gid */\n\t\t} ji_momt;\n\t} ji_un;\n} jobfix_19_21;\n\nunion jobextend_19_21 {\n\tchar fill[256]; /* fill to keep same size */\n\tstruct {\n#if defined(__sgi)\n\t\tjid_t ji_jid;\n\t\tash_t ji_ash;\n#else\n\t\tchar ji_4jid[8];\n\t\tchar ji_4ash[8];\n#endif /* sgi */\n\t\tint ji_credtype;\n#ifdef PBS_MOM\n\t\ttm_host_id ji_nodeidx; /* my node id */\n\t\ttm_task_id ji_taskidx; /* generate task id's for job */\n#if MOM_ALPS\n\t\tlong ji_reservation;\n\t\t/* ALPS reservation identifier */\n\t\tunsigned long long ji_pagg;\n\t\t/* ALPS process aggregate ID */\n#endif /* MOM_ALPS */\n#endif /* PBS_MOM */\n\t} ji_ext;\n};\n\n/*\n * Replicate the jobfix and taskfix structures as they were defined in 13.x to pre 19.x versions.\n * Use the macros defined above for convenience.\n */\ntypedef struct jobfix_PRE19 {\n\tint ji_jsversion;   /* job structure version - JSVERSION */\n\tint ji_state;\t    /* internal copy of state */\n\tint ji_substate;    /* job sub-state */\n\tint ji_svrflags;    /* server flags */\n\tint ji_numattr;\t    /* not used */\n\tint ji_ordering;    /* special scheduling ordering */\n\tint ji_priority;    /* internal priority */\n\ttime_t ji_stime;    /* time job started execution */\n\ttime_t ji_endtBdry; /* estimate upper bound on end 
time */\n\n\tchar ji_jobid[PBS_MAXSVRJOBID_PRE19 + 1]; /* job identifier */\n\tchar ji_fileprefix[PBS_JOBBASE + 1];\t  /* no longer used */\n\tchar ji_queue[PBS_MAXQUEUENAME + 1];\t  /* name of current queue */\n\tchar ji_destin[PBS_MAXROUTEDEST + 1];\t  /* dest from qmove/route */\n\t/* MomS for execution    */\n\n\tint ji_un_type; /* type of ji_un union */\n\tunion {\t\t/* depends on type of queue currently in */\n\t\tstruct\n\t\t{\t\t\t\t /* if in execution queue .. */\n\t\t\tpbs_net_t ji_momaddr;\t /* host addr of Server */\n\t\t\tunsigned int ji_momport; /* port # */\n\t\t\tint ji_exitstat;\t /* job exit status from MOM */\n\t\t} ji_exect;\n\t\tstruct\n\t\t{\n\t\t\ttime_t ji_quetime;  /* time entered queue */\n\t\t\ttime_t ji_rteretry; /* route retry time */\n\t\t} ji_routet;\n\t\tstruct\n\t\t{\n\t\t\tint ji_fromsock;\t  /* socket job coming over */\n\t\t\tpbs_net_t ji_fromaddr;\t  /* host job coming from   */\n\t\t\tunsigned int ji_scriptsz; /* script size */\n\t\t} ji_newt;\n\t\tstruct\n\t\t{\n\t\t\tpbs_net_t ji_svraddr; /* host addr of Server */\n\t\t\tint ji_exitstat;      /* job exit status from MOM */\n\t\t\tuid_t ji_exuid;\t      /* execution uid */\n\t\t\tgid_t ji_exgid;\t      /* execution gid */\n\t\t} ji_momt;\n\t} ji_un;\n} jobfix_PRE19;\n\ntypedef struct taskfix_PRE19 {\n\tchar ti_parentjobid[PBS_MAXSVRJOBID_PRE19 + 1];\n\ttm_node_id ti_parentnode; /* parent vnode */\n\ttm_node_id ti_myvnode;\t  /* my vnode */\n\ttm_task_id ti_parenttask; /* parent task */\n\ttm_task_id ti_task;\t  /* task's taskid */\n\tint ti_status;\t\t  /* status of task */\n\tpid_t ti_sid;\t\t  /* session id */\n\tint ti_exitstat;\t  /* exit status */\n\tunion {\n\t\tint ti_hold[16]; /* reserved space */\n\t} ti_u;\n} taskfix_PRE19;\n\n/* Create a global buffer for reading and writing data. 
*/\n#define BUFSZ 4096\nchar buf[BUFSZ];\n\nsvrattrl *read_all_attrs_from_jbfile(int fd, char **state, char **substate, char **errbuf);\n\n/**\n * @brief\n *\t\tPrint usage text to stderr.\n *\n * @return\tvoid\n */\nvoid\nprint_usage(void)\n{\n\tfprintf(stderr, \"Invalid parameter specified. Usage:\\n\");\n\tfprintf(stderr, \"pbs_upgrade_job [-c] -f file.JB\\n\");\n}\n\n/**\n * @brief\n *\t\tAttempt to identify the format version of a job file.\n *\n * @param[in]\tfd\t- File descriptor from which to read\n *\n * @return\tint\n * @retval\t-1\t: failure\n * @retval\t>=0\t: version number\n */\nint\ncheck_job_file(int fd)\n{\n\toff_t pos_saved;\n\toff_t pos_new;\n\tint ret_version = -1;\n\tint length = -1;\n\tjobfix_PRE19 old_jobfix_pre19;\n\terrno = 0;\n\n\t/* Save our current position so we can come back to it */\n\tpos_saved = lseek(fd, 0, SEEK_CUR);\n\n\t/* ---------- jobfix structure for PBS versions 13.x through 18.x ---------- */\n\tpos_new = lseek(fd, pos_saved, SEEK_SET);\n\tif (pos_new != 0) {\n\t\tfprintf(stderr, \"Couldn't set the file position to zero [%s]\\n\",\n\t\t\terrno ? 
strerror(errno) : \"No error\");\n\t\tgoto check_job_file_exit;\n\t}\n\tif (old_jobfix_pre19.ji_jsversion == JSVERSION_18) {\n\t\t/* covers all jobfix structures from PBS versions 13.x through 18.x */\n\t\tret_version = 18;\n\t\tgoto check_job_file_exit;\n\t} else if (old_jobfix_pre19.ji_jsversion == JSVERSION_19) {\n\t\t/* job already has the 19.x structure */\n\t\tret_version = 19;\n\t\tgoto check_job_file_exit;\n\t} else if (old_jobfix_pre19.ji_jsversion == JSVERSION) {\n\t\t/* job already has the latest structure */\n\t\tret_version = 21;\n\t\tgoto check_job_file_exit;\n\t} else {\n\t\tfprintf(stderr, \"Job structure version (JSVERSION) not recognized, found=%d.\\n\",\n\t\t\told_jobfix_pre19.ji_jsversion);\n\t\tgoto check_job_file_exit;\n\t}\n\ncheck_job_file_exit:\n\tpos_new = lseek(fd, pos_saved, SEEK_SET);\n\tif (pos_new != 0) {\n\t\tfprintf(stderr, \"Couldn't set the file position back to zero [%s]\\n\",\n\t\t\terrno ? strerror(errno) : \"No error\");\n\t\tret_version = -1; /* do not jump back to the label; that would loop forever */\n\t}\n\treturn ret_version;\n}\n\n/**\n * @brief\tUpgrade a pre-19.x jobfix structure to 19.x\n *\n * @param[in]\told_jobfix_pre19 - pre-19 jobfix struct\n *\n * @return\tjobfix_19_21\n * @retval\tv19 converted jobfix\n */\njobfix_19_21\nconvert_pre19jf_to_19(jobfix_PRE19 old_jobfix_pre19)\n{\n\tjobfix_19_21 jf_19_21;\n\n\t/* Copy the data to the new jobfix structure */\n\tmemset(&jf_19_21, 0, sizeof(jf_19_21));\n\tjf_19_21.ji_jsversion = JSVERSION_19;\n\tjf_19_21.ji_state = old_jobfix_pre19.ji_state;\n\tjf_19_21.ji_substate = old_jobfix_pre19.ji_substate;\n\tjf_19_21.ji_svrflags = old_jobfix_pre19.ji_svrflags;\n\tjf_19_21.ji_numattr = old_jobfix_pre19.ji_numattr;\n\tjf_19_21.ji_ordering = old_jobfix_pre19.ji_ordering;\n\tjf_19_21.ji_priority = old_jobfix_pre19.ji_priority;\n\tjf_19_21.ji_stime = old_jobfix_pre19.ji_stime;\n\tjf_19_21.ji_endtBdry = old_jobfix_pre19.ji_endtBdry;\n\tsnprintf(jf_19_21.ji_jobid, sizeof(jf_19_21.ji_jobid),\n\t\t \"%s\", 
old_jobfix_pre19.ji_jobid);\n\tsnprintf(jf_19_21.ji_fileprefix, sizeof(jf_19_21.ji_fileprefix),\n\t\t \"%s\", old_jobfix_pre19.ji_fileprefix);\n\tsnprintf(jf_19_21.ji_queue, sizeof(jf_19_21.ji_queue),\n\t\t \"%s\", old_jobfix_pre19.ji_queue);\n\tsnprintf(jf_19_21.ji_destin, sizeof(jf_19_21.ji_destin),\n\t\t \"%s\", old_jobfix_pre19.ji_destin);\n\tjf_19_21.ji_un_type = old_jobfix_pre19.ji_un_type;\n\tmemcpy(&jf_19_21.ji_un, &old_jobfix_pre19.ji_un, sizeof(jf_19_21.ji_un));\n\n\treturn jf_19_21;\n}\n\n/**\n * @brief\tUpgrade 19-21 jobfix structure to latest\n *\n * @param[in]\told_jf - 19-21 jobfix struct\n *\n * @return\tstruct jobfix\n * @retval\tconverted jobfix\n */\nstruct jobfix\nconvert_19jf_to_22(jobfix_19_21 old_jf)\n{\n\tstruct jobfix jf;\n\n\tmemset(&jf, 0, sizeof(jf));\n\tjf.ji_jsversion = JSVERSION;\n\tjf.ji_svrflags = old_jf.ji_svrflags;\n\tjf.ji_stime = old_jf.ji_stime;\n\tsnprintf(jf.ji_jobid, sizeof(jf.ji_jobid), \"%s\", old_jf.ji_jobid);\n\tsnprintf(jf.ji_fileprefix, sizeof(jf.ji_fileprefix), \"%s\", old_jf.ji_fileprefix);\n\tsnprintf(jf.ji_queue, sizeof(jf.ji_queue), \"%s\", old_jf.ji_queue);\n\tsnprintf(jf.ji_destin, sizeof(jf.ji_destin), \"%s\", old_jf.ji_destin);\n\tjf.ji_un_type = old_jf.ji_un_type;\n\tmemcpy(&jf.ji_un, &old_jf.ji_un, sizeof(jf.ji_un));\n\n\treturn jf;\n}\n\n/**\n * @brief\tUpgrade 19-21 jobextend structure to latest\n *\n * @param[in]\told_extend - 19-21 jobextend struct\n *\n * @return\tunion jobextend\n * @retval\tconverted jobextend\n */\nunion jobextend\nconvert_19ext_to_22(union jobextend_19_21 old_extend)\n{\n\tunion jobextend je;\n\n\tmemset(&je, 0, sizeof(je));\n\tsnprintf(je.fill, sizeof(je.fill), \"%s\", old_extend.fill);\n#if !defined(__sgi)\n\tsnprintf(je.ji_ext.ji_jid, sizeof(je.ji_ext.ji_jid), \"%s\", old_extend.ji_ext.ji_4jid);\n#endif\n\tje.ji_ext.ji_credtype = old_extend.ji_ext.ji_credtype;\n#ifdef PBS_MOM\n\tje.ji_ext.ji_nodeidx = old_extend.ji_ext.ji_nodeidx;\n\tje.ji_ext.ji_taskidx = 
old_extend.ji_ext.ji_taskidx;\n#if MOM_ALPS\n\tje.ji_ext.ji_reservation = old_extend.ji_ext.ji_reservation;\n\tje.ji_ext.ji_pagg = old_extend.ji_ext.ji_pagg;\n#endif\n#endif\n\n\treturn je;\n}\n\n/**\n * @brief\n *\t\tUpgrade a job file from an earlier version.\n *\n * @param[in]\tfd\t\t-\tFile descriptor from which to read\n * @param[in]\tver\t\t-\tOld version\n *\n * @return\tint\n * @retval\t1\t: failure\n * @retval\t 0\t: success\n */\nint\nupgrade_job_file(int fd, int ver)\n{\n\tFILE *tmp = NULL;\n\tint tmpfd = -1;\n\tjobfix_19_21 qs_19_21;\n\tjobfix_PRE19 old_jobfix_pre19;\n\tunion jobextend_19_21 old_ji_extended;\n\tint ret;\n\toff_t pos;\n\tjob new_job;\n\terrno = 0;\n\tint len;\n\tsvrattrl *pal = NULL;\n\tchar statechar;\n\tsvrattrl *pali;\n\tsvrattrl dummy;\n\tchar statebuf[2];\n\tchar ssbuf[5];\n\tchar *charstrm;\n\n\t/* The following code has been modeled after job_recov_fs() */\n\n\tif (ver == 18) {\n\t\t/* Read in the pre19 jobfix structure */\n\t\tmemset(&old_jobfix_pre19, 0, sizeof(old_jobfix_pre19));\n\t\tlen = read(fd, (char *) &old_jobfix_pre19, sizeof(old_jobfix_pre19));\n\t\tif (len < 0) {\n\t\t\tfprintf(stderr, \"Failed to read input file [%s]\\n\",\n\t\t\t\terrno ? strerror(errno) : \"No error\");\n\t\t\treturn 1;\n\t\t}\n\t\tif (len != sizeof(old_jobfix_pre19)) {\n\t\t\tfprintf(stderr, \"Format not recognized, not enough fixed data.\\n\");\n\t\t\treturn 1;\n\t\t}\n\n\t\tqs_19_21 = convert_pre19jf_to_19(old_jobfix_pre19);\n\t} else {\n\t\t/* Read in the 19_21 jobfix structure */\n\t\tmemset(&qs_19_21, 0, sizeof(qs_19_21));\n\t\tlen = read(fd, (char *) &qs_19_21, sizeof(qs_19_21));\n\t\tif (len < 0) {\n\t\t\tfprintf(stderr, \"Failed to read input file [%s]\\n\",\n\t\t\t\terrno ? 
strerror(errno) : \"No error\");\n\t\t\treturn 1;\n\t\t}\n\t\tif (len != sizeof(qs_19_21)) {\n\t\t\tfprintf(stderr, \"Format not recognized, not enough fixed data.\\n\");\n\t\t\treturn 1;\n\t\t}\n\t}\n\tmemset(&new_job, 0, sizeof(new_job));\n\tnew_job.ji_qs = convert_19jf_to_22(qs_19_21);\n\n\t/* Convert old extended data to new */\n\tmemset(&old_ji_extended, 0, sizeof(old_ji_extended));\n\tlen = read(fd, (char *) &old_ji_extended, sizeof(union jobextend_19_21));\n\tif (len < 0) {\n\t\tfprintf(stderr, \"Failed to read input file [%s]\\n\", errno ? strerror(errno) : \"No error\");\n\t\treturn 1;\n\t}\n\tif (len != sizeof(union jobextend_19_21)) {\n\t\tfprintf(stderr, \"Format not recognized, not enough extended data.\\n\");\n\t\treturn 1;\n\t}\n\tnew_job.ji_extended = convert_19ext_to_22(old_ji_extended);\n\n\t/* previous versions may not have updated values of state and substate in the attribute list\n\t * since we now rely on these attributes instead of the quick save area, it's important\n\t * to make sure that state and substate attributes are set correctly */\n\tstatechar = state_int2char(qs_19_21.ji_state);\n\tif (statechar != JOB_STATE_LTR_UNKNOWN) {\n\t\tbool stateset = false;\n\t\tbool substateset = false;\n\t\tchar *errbuf = malloc(1024);\n\n\t\tif (errbuf == NULL) {\n\t\t\tfprintf(stderr, \"Malloc error\\n\");\n\t\t\treturn 1;\n\t\t}\n\t\tsnprintf(statebuf, sizeof(statebuf), \"%c\", statechar);\n\t\tsnprintf(ssbuf, sizeof(ssbuf), \"%d\", qs_19_21.ji_substate);\n\t\tpal = read_all_attrs_from_jbfile(fd, NULL, NULL, &errbuf);\n\t\tif (pal == NULL && errbuf[0] != '\\0') {\n\t\t\tfprintf(stderr, \"%s\\n\", errbuf);\n\t\t\treturn 1;\n\t\t}\n\t\tpali = pal;\n\t\twhile (pali != NULL) {\n\t\t\tif (strcmp(pali->al_name, ATTR_state) == 0) {\n\t\t\t\tpali->al_valln = strlen(statebuf) + 1;\n\t\t\t\tpali->al_value = pali->al_name + pali->al_nameln + pali->al_rescln;\n\t\t\t\tstrcpy(pali->al_value, statebuf);\n\t\t\t\tpali->al_tsize = sizeof(svrattrl) + pali->al_nameln 
+ pali->al_valln;\n\t\t\t\tstateset = true;\n\t\t\t\tif (substateset)\n\t\t\t\t\tbreak;\n\t\t\t} else if (strcmp(pali->al_name, ATTR_substate) == 0) {\n\t\t\t\tpali->al_valln = strlen(ssbuf) + 1;\n\t\t\t\tpali->al_value = pali->al_name + pali->al_nameln + pali->al_rescln;\n\t\t\t\tstrcpy(pali->al_value, ssbuf);\n\t\t\t\tpali->al_tsize = sizeof(svrattrl) + pali->al_nameln + pali->al_valln;\n\t\t\t\tsubstateset = true;\n\t\t\t\tif (stateset)\n\t\t\t\t\tbreak;\n\t\t\t}\n\t\t\tif (pali->al_link.ll_next == NULL)\n\t\t\t\tbreak;\n\t\t\tpali = GET_NEXT(pali->al_link);\n\t\t}\n\t}\n\n\t/* Open a temporary file to stage data */\n\ttmp = tmpfile();\n\tif (!tmp) {\n\t\tfprintf(stderr, \"Failed to open temporary file [%s]\\n\", errno ? strerror(errno) : \"No error\");\n\t\treturn 1;\n\t}\n\ttmpfd = fileno(tmp);\n\tif (tmpfd < 0) {\n\t\tfprintf(stderr, \"Failed to find temporary file descriptor [%s]\\n\", errno ? strerror(errno) : \"No error\");\n\t\treturn 1;\n\t}\n\n\t/* Write the new jobfix structure to the output file */\n\tlen = write(tmpfd, &new_job.ji_qs, sizeof(new_job.ji_qs));\n\tif (len != sizeof(new_job.ji_qs)) {\n\t\tfprintf(stderr, \"Failed to write jobfix to output file [%s]\\n\", errno ? strerror(errno) : \"No error\");\n\t\treturn 1;\n\t}\n\n\t/* Write the new extend structure to the output file */\n\tlen = write(tmpfd, &new_job.ji_extended, sizeof(new_job.ji_extended));\n\tif (len != sizeof(new_job.ji_extended)) {\n\t\tfprintf(stderr, \"Failed to write job extend data to output file [%s]\\n\",\n\t\t\terrno ? 
strerror(errno) : \"No error\");\n\t\treturn 1;\n\t}\n\n\t/* Write the job attribute list to the output file */\n\tpali = pal;\n\twhile (pali != NULL) { /* Modeled after save_struct() */\n\t\tint copysize;\n\t\tint objsize;\n\n\t\tobjsize = pali->al_tsize;\n\t\tcharstrm = (char *) pali;\n\t\twhile (objsize > 0) {\n\t\t\tif (objsize > BUFSZ)\n\t\t\t\tcopysize = BUFSZ;\n\t\t\telse\n\t\t\t\tcopysize = objsize;\n\t\t\tmemcpy(buf, charstrm, copysize);\n\t\t\tlen = write(tmpfd, buf, copysize);\n\t\t\tif (len < 0) {\n\t\t\t\tfprintf(stderr, \"Failed to write output file [%s]\\n\", errno ? strerror(errno) : \"No error\");\n\t\t\t\treturn 1;\n\t\t\t}\n\t\t\tobjsize -= len;\n\t\t\tcharstrm += len;\n\t\t}\n\t\tif (pali->al_link.ll_next == NULL)\n\t\t\tbreak;\n\t\tpali = GET_NEXT(pali->al_link);\n\t}\n\n\t/* Write a dummy attribute to indicate the end of attribute list, refer to save_attr_fs */\n\tdummy.al_tsize = ENDATTRIBUTES;\n\tcharstrm = (char *) &dummy;\n\tmemcpy(buf, charstrm, sizeof(dummy));\n\tlen = write(tmpfd, buf, sizeof(dummy));\n\tif (len < 0) {\n\t\tfprintf(stderr, \"Failed to write dummy to output file [%s]\\n\", errno ? strerror(errno) : \"No error\");\n\t\treturn 1;\n\t}\n\n\t/* Read the rest of the input and write it to the temporary file */\n\tdo {\n\t\tlen = read(fd, buf, BUFSZ);\n\t\tif (len < 0) {\n\t\t\tfprintf(stderr, \"Failed to read input file [%s]\\n\", errno ? strerror(errno) : \"No error\");\n\t\t\treturn 1;\n\t\t}\n\t\tif (len < 1)\n\t\t\tbreak;\n\t\tlen = write(tmpfd, buf, len);\n\t\tif (len < 0) {\n\t\t\tfprintf(stderr, \"Failed to write output file [%s]\\n\", errno ? strerror(errno) : \"No error\");\n\t\t\treturn 1;\n\t\t}\n\t} while (len > 0);\n\n\t/* Reset the file descriptors to zero */\n\tpos = lseek(fd, 0, SEEK_SET);\n\tif (pos != 0) {\n\t\tfprintf(stderr, \"Failed to reset job file position [%s]\\n\", errno ? 
strerror(errno) : \"No error\");\n\t\treturn 1;\n\t}\n\tpos = lseek(tmpfd, 0, SEEK_SET);\n\tif (pos != 0) {\n\t\tfprintf(stderr, \"Failed to reset temporary file position [%s]\\n\",\n\t\t\terrno ? strerror(errno) : \"No error\");\n\t\treturn 1;\n\t}\n\n\t/* truncate the original file before writing new contents */\n\tif (ftruncate(fd, 0) != 0) {\n\t\tfprintf(stderr, \"Failed to truncate the job file [%s]\\n\",\n\t\t\terrno ? strerror(errno) : \"No error\");\n\t\treturn 1;\n\t}\n\n\t/* Copy the data from the temporary file back to the original */\n\tdo {\n\t\tlen = read(tmpfd, buf, BUFSZ);\n\t\tif (len < 0) {\n\t\t\tfprintf(stderr, \"Failed to read temporary file [%s]\\n\", errno ? strerror(errno) : \"No error\");\n\t\t\treturn 1;\n\t\t}\n\t\tif (len < 1)\n\t\t\tbreak;\n\t\tlen = write(fd, buf, len);\n\t\tif (len < 0) {\n\t\t\tfprintf(stderr, \"Failed to write job file [%s]\\n\", errno ? strerror(errno) : \"No error\");\n\t\t\treturn 1;\n\t\t}\n\t} while (len > 0);\n\n\tret = fclose(tmp);\n\tif (ret != 0) {\n\t\tfprintf(stderr, \"Failed to close temporary file [%s]\\n\", errno ? strerror(errno) : \"No error\");\n\t\treturn 1;\n\t}\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tUpgrade a task file from an earlier version.\n *\n * @param[in]\ttaskfile\t\t-\tFile name of the task file\n * @return\tint\n * @retval\t1\t: failure\n * @retval\t0\t: success\n */\nint\nupgrade_task_file(char *taskfile)\n{\n\tFILE *tmp = NULL;\n\tint fd;\n\tint tmpfd = -1;\n\tint len;\n\tint ret;\n\toff_t pos;\n\ttaskfix_PRE19 old_taskfix_pre19;\n\tpbs_task new_task;\n\terrno = 0;\n\n\t/* The following code has been modeled after task_recov() */\n\n\t/* Open the task file */\n\tfd = open(taskfile, O_BINARY | O_RDWR);\n\tif (fd < 0) {\n\t\tfprintf(stderr, \"Failed to open %s [%s]\\n\", taskfile,\n\t\t\terrno ? 
strerror(errno) : \"No error\");\n\t\treturn 1;\n\t}\n\n\t/* Read in the pre19 task structure */\n\tmemset(&old_taskfix_pre19, 0, sizeof(old_taskfix_pre19));\n\tlen = read(fd, (char *) &old_taskfix_pre19, sizeof(old_taskfix_pre19));\n\tif (len < 0) {\n\t\tfprintf(stderr, \"Failed to read input file [%s]\\n\",\n\t\t\terrno ? strerror(errno) : \"No error\");\n\t\treturn 1;\n\t}\n\tif (len != sizeof(old_taskfix_pre19)) {\n\t\tfprintf(stderr, \"Format not recognized, not enough fixed data.\\n\");\n\t\treturn 1;\n\t}\n\t/* Copy the data to the new task structure */\n\tmemset(&new_task, 0, sizeof(new_task));\n\tstrncpy(new_task.ti_qs.ti_parentjobid, old_taskfix_pre19.ti_parentjobid,\n\t\tsizeof(new_task.ti_qs.ti_parentjobid));\n\tnew_task.ti_qs.ti_parentnode = old_taskfix_pre19.ti_parentnode;\n\tnew_task.ti_qs.ti_myvnode = old_taskfix_pre19.ti_myvnode;\n\tnew_task.ti_qs.ti_parenttask = old_taskfix_pre19.ti_parenttask;\n\tnew_task.ti_qs.ti_task = old_taskfix_pre19.ti_task;\n\tnew_task.ti_qs.ti_status = old_taskfix_pre19.ti_status;\n\tnew_task.ti_qs.ti_sid = old_taskfix_pre19.ti_sid;\n\tnew_task.ti_qs.ti_exitstat = old_taskfix_pre19.ti_exitstat;\n\tmemcpy(&new_task.ti_qs.ti_u, &old_taskfix_pre19.ti_u, sizeof(old_taskfix_pre19.ti_u));\n\n\t/* Open a temporary file to stage data */\n\ttmp = tmpfile();\n\tif (!tmp) {\n\t\tfprintf(stderr, \"Failed to open temporary file [%s]\\n\",\n\t\t\terrno ? strerror(errno) : \"No error\");\n\t\treturn 1;\n\t}\n\ttmpfd = fileno(tmp);\n\tif (tmpfd < 0) {\n\t\tfprintf(stderr, \"Failed to find temporary file descriptor [%s]\\n\",\n\t\t\terrno ? strerror(errno) : \"No error\");\n\t\treturn 1;\n\t}\n\n\t/* Write the new taskfix structure to the output file */\n\tlen = write(tmpfd, &new_task.ti_qs, sizeof(new_task.ti_qs));\n\tif (len != sizeof(new_task.ti_qs)) {\n\t\tfprintf(stderr, \"Failed to write taskfix to output file [%s]\\n\",\n\t\t\terrno ? 
strerror(errno) : \"No error\");\n\t\treturn 1;\n\t}\n\n\t/* Read the rest of the input and write it to the temporary file */\n\tdo {\n\t\tlen = read(fd, buf, sizeof(buf));\n\t\tif (len < 0) {\n\t\t\tfprintf(stderr, \"Failed to read input file [%s]\\n\",\n\t\t\t\terrno ? strerror(errno) : \"No error\");\n\t\t\treturn 1;\n\t\t}\n\t\tif (len < 1)\n\t\t\tbreak;\n\t\tlen = write(tmpfd, buf, len);\n\t\tif (len < 0) {\n\t\t\tfprintf(stderr, \"Failed to write output file [%s]\\n\",\n\t\t\t\terrno ? strerror(errno) : \"No error\");\n\t\t\treturn 1;\n\t\t}\n\t} while (len > 0);\n\n\t/* Reset the file descriptors to zero */\n\tpos = lseek(fd, 0, SEEK_SET);\n\tif (pos != 0) {\n\t\tfprintf(stderr, \"Failed to reset task file position [%s]\\n\",\n\t\t\terrno ? strerror(errno) : \"No error\");\n\t\treturn 1;\n\t}\n\tpos = lseek(tmpfd, 0, SEEK_SET);\n\tif (pos != 0) {\n\t\tfprintf(stderr, \"Failed to reset temporary file position [%s]\\n\",\n\t\t\terrno ? strerror(errno) : \"No error\");\n\t\treturn 1;\n\t}\n\n\t/* Copy the data from the temporary file back to the original */\n\tdo {\n\t\tlen = read(tmpfd, buf, sizeof(buf));\n\t\tif (len < 0) {\n\t\t\tfprintf(stderr, \"Failed to read temporary file [%s]\\n\",\n\t\t\t\terrno ? strerror(errno) : \"No error\");\n\t\t\treturn 1;\n\t\t}\n\t\tif (len < 1)\n\t\t\tbreak;\n\t\tlen = write(fd, buf, len);\n\t\tif (len < 0) {\n\t\t\tfprintf(stderr, \"Failed to write task file [%s]\\n\",\n\t\t\t\terrno ? strerror(errno) : \"No error\");\n\t\t\treturn 1;\n\t\t}\n\t} while (len > 0);\n\n\tret = fclose(tmp);\n\tif (ret != 0) {\n\t\tfprintf(stderr, \"Failed to close temporary file [%s]\\n\",\n\t\t\terrno ? 
strerror(errno) : \"No error\");\n\t\treturn 1;\n\t}\n\n\treturn 0;\n}\n\n/**\n * @brief\n *      This is the main function of the pbs_upgrade_job process.\n */\nint\nmain(int argc, char *argv[])\n{\n\tDIR *dir;\n\tstruct stat statbuf;\n\tstruct dirent *dirent;\n\tchar taskdir[MAXPATHLEN + 1] = {'\\0'};\n\tchar namebuf[MAXPATHLEN + 1] = {'\\0'};\n\tchar *jobfile = NULL;\n\tchar *p;\n\tchar *task_start;\n\tint fd = -1;\n\tint flags = 0;\n\tint err = 0;\n\tint check_flag = 0;\n\tint i;\n\tint ret;\n\n\terrno = 0;\n\n\t/* Print pbs_version and exit if --version specified */\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n\t/* Parse the command line parameters */\n\twhile (!err && ((i = getopt(argc, argv, \"cf:\")) != EOF)) {\n\t\tswitch (i) {\n\t\t\tcase 'c':\n\t\t\t\tcheck_flag = 1;\n\t\t\t\tbreak;\n\t\t\tcase 'f':\n\t\t\t\tif (jobfile) {\n\t\t\t\t\terr = 1;\n\t\t\t\t\tbreak;\n\t\t\t\t}\n\t\t\t\tjobfile = optarg;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\terr = 1;\n\t\t\t\tbreak;\n\t\t}\n\t}\n\tif (!jobfile)\n\t\terr = 1;\n\tif (err) {\n\t\tprint_usage();\n\t\treturn 1;\n\t}\n\n\t/* Ensure the tasks directory exists */\n\tsnprintf(namebuf, sizeof(namebuf), \"%s\", jobfile);\n\tp = strrchr(namebuf, '.');\n\tif (!p) {\n\t\tfprintf(stderr, \"Missing job file suffix\\n\");\n\t\treturn 1;\n\t}\n\tif (strncmp(p, JOB_FILE_SUFFIX, strlen(JOB_FILE_SUFFIX)) != 0) {\n\t\tfprintf(stderr, \"Invalid job file suffix\\n\");\n\t\treturn 1;\n\t}\n\tstrcpy(p, JOB_TASKDIR_SUFFIX);\n\tp += strlen(JOB_TASKDIR_SUFFIX);\n\tret = stat(namebuf, &statbuf);\n\tif (ret < 0) {\n\t\tfprintf(stderr, \"Failed to stat task directory %s [%s]\\n\",\n\t\t\tnamebuf, errno ? 
strerror(errno) : \"No error\");\n\t\treturn 1;\n\t}\n\tif (!S_ISDIR(statbuf.st_mode)) {\n\t\tfprintf(stderr, \"Expected directory at %s\\n\", namebuf);\n\t\treturn 1;\n\t}\n\tstrncpy(taskdir, namebuf, sizeof(taskdir));\n\tstrcat(p, \"/\");\n\ttask_start = ++p;\n\n\tif (check_flag)\n\t\tflags = O_BINARY | O_RDONLY;\n\telse\n\t\tflags = O_BINARY | O_RDWR;\n\n\t/* Open the job file for reading */\n\tfd = open(jobfile, flags);\n\tif (fd < 0) {\n\t\tfprintf(stderr, \"Failed to open %s [%s]\\n\", jobfile,\n\t\t\terrno ? strerror(errno) : \"No error\");\n\t\treturn 1;\n\t}\n\n\t/* Determine the format of the file */\n\tret = check_job_file(fd);\n\tif (ret < 0) {\n\t\tfprintf(stderr, \"Unknown job format: %s\\n\", jobfile);\n\t\treturn 1;\n\t}\n\tif (check_flag) {\n\t\tprintf(\"%d\\n\", ret);\n\t\tclose(fd);\n\t\treturn 0;\n\t}\n\n\tswitch (ret) {\n\t\tcase 18:\n\t\t\t/* this case will execute for all PBS versions from 13.x through 18.x */\n\t\tcase 19:\n\t\t\t/* this case will execute for all PBS versions from 19.x through 21.x */\n\t\t\tbreak;\n\t\tcase 21:\n\t\t\t/* no need to update the job structure */\n\t\t\treturn 0;\n\t\tdefault:\n\t\t\tfprintf(stderr, \"Unsupported version, job_name=%s\\n\", jobfile);\n\t\t\treturn 1;\n\t}\n\n\t/* Upgrade the job file */\n\tret = upgrade_job_file(fd, ret);\n\tif (ret != 0) {\n\t\tfprintf(stderr, \"Failed to upgrade the job file: %s\\n\", jobfile);\n\t\treturn 1;\n\t}\n\n\t/* Close the job file */\n\tret = close(fd);\n\tif (ret < 0) {\n\t\tfprintf(stderr, \"Failed to close the job file [%s]\\n\",\n\t\t\terrno ? strerror(errno) : \"No error\");\n\t\treturn 1;\n\t}\n\n\t/* Upgrade the task files */\n\tdir = opendir(taskdir);\n\tif (!dir) {\n\t\tfprintf(stderr, \"Failed to open the task directory [%s]\\n\",\n\t\t\terrno ? strerror(errno) : \"No error\");\n\t\treturn 1;\n\t}\n\terrno = 0;\n\twhile ((dirent = readdir(dir)) != NULL) {\n\t\tif (errno != 0) {\n\t\t\tfprintf(stderr, \"Failed to read directory [%s]\\n\",\n\t\t\t\terrno ? 
strerror(errno) : \"No error\");\n\t\t\tclosedir(dir);\n\t\t\treturn 1;\n\t\t}\n\t\tif (dirent->d_name[0] == '.')\n\t\t\tcontinue;\n\t\tstrcpy(task_start, dirent->d_name);\n\t\tret = upgrade_task_file(namebuf);\n\t\tif (ret != 0) {\n\t\t\tfprintf(stderr, \"Failed to upgrade the task file: %s\\n\", namebuf);\n\t\t\tclosedir(dir);\n\t\t\treturn 1;\n\t\t}\n\t}\n\tclosedir(dir);\n\treturn 0;\n}\n"
  },
  {
    "path": "src/tools/printjob.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file printjob.c\n *\n * @brief\n *\t\tprintjob.c - This file contains the functions related to the print job task.\n *\n * Functions included are:\n * \tprint_usage()\n * \tprt_job_struct()\n * \tprt_task_struct()\n * \tread_attr()\n * \tprint_db_job()\n * \tmain()\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <sys/types.h>\n#include <errno.h>\n#include <fcntl.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <strings.h>\n#include <unistd.h>\n#include <dirent.h>\n\n#define PBS_MOM 1 /* this is so we can use the task struct */\n\n#include \"cmds.h\"\n#include \"pbs_version.h\"\n#include \"portability.h\"\n#include \"list_link.h\"\n#include \"attribute.h\"\n#include \"server_limits.h\"\n#include \"job.h\"\n#ifdef PRINTJOBSVR\n#include \"pbs_db.h\"\nvoid *conn = NULL;\n#endif\n\n#ifdef PRINTJOBSVR\n/* just to make jattr_get_set.c happy */\nattribute_def job_attr_def[1] = {{0}};\n#endif\n\n#define BUF_SIZE 512\nint display_script = 0; /* to track if job-script is required or not */\n\nsvrattrl *read_all_attrs_from_jbfile(int fd, char **state, char **substate, char **errbuf);\n\n/**\n * @brief\n *\t\tPrint usage text to stderr and exit.\n *\n * @return\tvoid\n *\n */\nvoid\nprint_usage()\n{\n\tfprintf(stderr, \"Usage: %s [-a] (jobid|file)\\n\", 
\"printjob\");\n\tfprintf(stderr, \"       %s -s jobid\\n\", \"printjob\");\n\tfprintf(stderr, \"       %s --version\\n\", \"printjob\");\n}\n/**\n * @brief\n *\t\tprint the job struct.\n *\n * @param[in]\tpjob\t-\tpointer to the job struct.\n */\nstatic void\nprt_job_struct(job *pjob, char *state, char *substate)\n{\n\tunsigned int ss_num;\n\tunsigned int s_num;\n\tchar *endp = NULL;\n\n\tss_num = strtol(substate, &endp, 10);\n\ts_num = state_char2int(state[0]);\n\n\tprintf(\"---------------------------------------------------\\n\");\n\tprintf(\"jobid:\\t%s\\n\", pjob->ji_qs.ji_jobid);\n\tprintf(\"---------------------------------------------------\\n\");\n\tprintf(\"state:\\t\\t0x%x\\n\", s_num);\n\tprintf(\"substate:\\t0x%x (%d)\\n\", ss_num, ss_num);\n\tprintf(\"svrflgs:\\t0x%x (%d)\\n\", pjob->ji_qs.ji_svrflags,\n\t       pjob->ji_qs.ji_svrflags);\n\tprintf(\"stime:\\t\\t%ld\\n\", (long) pjob->ji_qs.ji_stime);\n\tprintf(\"obittime:\\t\\t%ld\\n\", (long) pjob->ji_qs.ji_obittime);\n\tprintf(\"file base:\\t%s\\n\", pjob->ji_qs.ji_fileprefix);\n\tprintf(\"queue:\\t\\t%s\\n\", pjob->ji_qs.ji_queue);\n\tswitch (pjob->ji_qs.ji_un_type) {\n\t\tcase JOB_UNION_TYPE_NEW:\n\t\t\tprintf(\"union type new:\\n\");\n\t\t\tprintf(\"\\tsocket\\t%d\\n\", pjob->ji_qs.ji_un.ji_newt.ji_fromsock);\n\t\t\tprintf(\"\\taddr\\t%lu\\n\", pjob->ji_qs.ji_un.ji_newt.ji_fromaddr);\n\t\t\tprintf(\"\\tscript\\t%d\\n\", pjob->ji_qs.ji_un.ji_newt.ji_scriptsz);\n\t\t\tbreak;\n\t\tcase JOB_UNION_TYPE_EXEC:\n\t\t\tprintf(\"union type exec:\\n\");\n\t\t\tprintf(\"\\texits\\t%d\\n\",\n\t\t\t       pjob->ji_qs.ji_un.ji_exect.ji_exitstat);\n\t\t\tbreak;\n\t\tcase JOB_UNION_TYPE_ROUTE:\n\t\t\tprintf(\"union type route:\\n\");\n\t\t\tprintf(\"\\tquetime\\t%ld\\n\",\n\t\t\t       (long) pjob->ji_qs.ji_un.ji_routet.ji_quetime);\n\t\t\tprintf(\"\\tretry\\t%ld\\n\",\n\t\t\t       (long) pjob->ji_qs.ji_un.ji_routet.ji_rteretry);\n\t\t\tbreak;\n\t\tcase JOB_UNION_TYPE_MOM:\n\t\t\tprintf(\"union type 
mom:\\n\");\n\t\t\tprintf(\"\\tsvraddr\\t%lu\\n\",\n\t\t\t       pjob->ji_qs.ji_un.ji_momt.ji_svraddr);\n\t\t\tprintf(\"\\texitst\\t%d\\n\", pjob->ji_qs.ji_un.ji_momt.ji_exitstat);\n\t\t\tprintf(\"\\tuid\\t%d\\n\", pjob->ji_qs.ji_un.ji_momt.ji_exuid);\n\t\t\tprintf(\"\\tgid\\t%d\\n\", pjob->ji_qs.ji_un.ji_momt.ji_exgid);\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tprintf(\"--bad union type %d\\n\", pjob->ji_qs.ji_un_type);\n\t}\n}\n/**\n * @brief\n *\t\tprint the pbs_task struct.\n *\n * @param[in]\tptask\t-\tpointer to the task struct.\n */\nvoid\nprt_task_struct(pbs_task *ptask)\n{\n\tprintf(\"\\n\");\n\tprintf(\"\\tparentjobid:\\t%s\\n\", ptask->ti_qs.ti_parentjobid);\n\tprintf(\"\\tparentnode:\\t%d\\n\", ptask->ti_qs.ti_parentnode);\n\tprintf(\"\\tmyvnode:\\t%d\\n\", ptask->ti_qs.ti_myvnode);\n\tprintf(\"\\tparenttask:\\t%d\\n\", ptask->ti_qs.ti_parenttask);\n\tprintf(\"\\ttask:\\t\\t%d\\n\", ptask->ti_qs.ti_task);\n\tprintf(\"\\tstatus:\\t\\t%d\\t\", ptask->ti_qs.ti_status);\n\tswitch (ptask->ti_qs.ti_status) {\n\n\t\tcase TI_STATE_EMBRYO:\n\t\t\tprintf(\"TI_STATE_EMBRYO\\n\");\n\t\t\tbreak;\n\n\t\tcase TI_STATE_RUNNING:\n\t\t\tprintf(\"TI_STATE_RUNNING\\n\");\n\t\t\tbreak;\n\n\t\tcase TI_STATE_EXITED:\n\t\t\tprintf(\"TI_STATE_EXITED\\n\");\n\t\t\tbreak;\n\n\t\tcase TI_STATE_DEAD:\n\t\t\tprintf(\"TI_STATE_DEAD\\n\");\n\t\t\tbreak;\n\n\t\tdefault:\n\t\t\tprintf(\"unknown value\\n\");\n\t\t\tbreak;\n\t}\n\n\tprintf(\"\\tsid:\\t\\t%d\\n\", ptask->ti_qs.ti_sid);\n\tprintf(\"\\texitstat:\\t%d\\n\", ptask->ti_qs.ti_exitstat);\n}\n\n#define ENDATTRIBUTES -711\n\n/**\n * @brief\tPrint an attribute\n *\n * @param[in]\tpal  - pointer to attribute\n *\n * @return\tvoid\n */\nstatic void\nprint_attr(svrattrl *pal)\n{\n\tprintf(\"%s\", pal->al_name);\n\tif (pal->al_resc)\n\t\tprintf(\".%s\", pal->al_resc);\n\tprintf(\" = \");\n\tif (pal->al_value)\n\t\tprintf(\"%s\", show_nonprint_chars(pal->al_value));\n\tprintf(\"\\n\");\n}\n\n/**\n * @brief\n * \t\tsave the db info into job 
structure\n *\n * @param[out]\tpjob\t-\tpointer to job struct\n * @param[in]\tpdjob\t-\tpbs DB job info.\n */\n#ifdef PRINTJOBSVR\nstatic void\ndb_2_job(job *pjob, pbs_db_job_info_t *pdjob)\n{\n\tchar statec;\n\n\tstrcpy(pjob->ji_qs.ji_jobid, pdjob->ji_jobid);\n\tstatec = state_int2char(pdjob->ji_state);\n\tif (statec != '0')\n\t\tset_job_state(pjob, statec);\n\n\tset_job_substate(pjob, pdjob->ji_substate);\n\tpjob->ji_qs.ji_svrflags = pdjob->ji_svrflags;\n\tpjob->ji_qs.ji_stime = pdjob->ji_stime;\n\tpjob->ji_qs.ji_fileprefix[0] = 0;\n\tstrcpy(pjob->ji_qs.ji_queue, pdjob->ji_queue);\n\tstrcpy(pjob->ji_qs.ji_destin, pdjob->ji_destin);\n\tpjob->ji_qs.ji_un_type = pdjob->ji_un_type;\n\tif (pjob->ji_qs.ji_un_type == JOB_UNION_TYPE_NEW) {\n\t\tpjob->ji_qs.ji_un.ji_newt.ji_fromsock = pdjob->ji_fromsock;\n\t\tpjob->ji_qs.ji_un.ji_newt.ji_fromaddr = pdjob->ji_fromaddr;\n\t} else if (pjob->ji_qs.ji_un_type == JOB_UNION_TYPE_EXEC)\n\t\tpjob->ji_qs.ji_un.ji_exect.ji_exitstat = pdjob->ji_exitstat;\n\telse if (pjob->ji_qs.ji_un_type == JOB_UNION_TYPE_ROUTE) {\n\t\tpjob->ji_qs.ji_un.ji_routet.ji_quetime = pdjob->ji_quetime;\n\t\tpjob->ji_qs.ji_un.ji_routet.ji_rteretry = pdjob->ji_rteretry;\n\t}\n\n\t/* extended portion */\n\tstrcpy(pjob->ji_extended.ji_ext.ji_jid, pdjob->ji_jid);\n\tpjob->ji_extended.ji_ext.ji_credtype = pdjob->ji_credtype;\n}\n\n/**\n * @brief\n * \t\tenable db lookup only for server version of printjob\n *\n * @param[in]\tid\t-\tJob Id.\n * @param[in]\tno_attributes\t-\tif set means no need to set attr_info\n *\n * @return\tint\n */\nint\nprint_db_job(char *id, int no_attributes)\n{\n\tpbs_db_obj_info_t obj;\n\tpbs_db_job_info_t dbjob;\n\tpbs_db_jobscr_info_t jobscr;\n\tjob xjob;\n\tchar *db_errmsg = NULL;\n\tint failcode;\n\n\tif (conn == NULL) {\n\n\t\t/* connect to database */\n#ifdef NAS /* localmod 111 */\n\t\tif (pbs_conf.pbs_data_service_host) {\n\t\t\tfailcode = pbs_db_connect(&conn, pbs_conf.pbs_data_service_host, pbs_conf.pbs_data_service_port, 
PBS_DB_CNT_TIMEOUT_NORMAL);\n\t\t} else\n#endif /* localmod 111 */\n\t\t\tfailcode = pbs_db_connect(&conn, pbs_conf.pbs_server_name, pbs_conf.pbs_data_service_port, PBS_DB_CNT_TIMEOUT_NORMAL);\n\t\tif (!conn && pbs_conf.pbs_secondary != NULL) {\n\t\t\tfailcode = pbs_db_connect(&conn, pbs_conf.pbs_secondary, pbs_conf.pbs_data_service_port, PBS_DB_CNT_TIMEOUT_NORMAL);\n\t\t\tif (!conn) {\n\t\t\t\tpbs_db_get_errmsg(failcode, &db_errmsg);\n\t\t\t\tfprintf(stderr, \"%s\\n\", db_errmsg);\n\t\t\t\tfree(db_errmsg);\n\t\t\t\treturn -1;\n\t\t\t}\n\t\t}\n\t}\n\n\t/*\n\t * On a server machine, if display_script is set,\n\t * retrieve the job-script from database.\n\t */\n\tif (display_script) {\n\t\tobj.pbs_db_obj_type = PBS_DB_JOBSCR;\n\t\tobj.pbs_db_un.pbs_db_jobscr = &jobscr;\n\t\tstrcpy(jobscr.ji_jobid, id);\n\t\tif (strchr(id, '.') == 0) {\n\t\t\tstrcat(jobscr.ji_jobid, \".\");\n\t\t\tstrcat(jobscr.ji_jobid, pbs_conf.pbs_server_name);\n\t\t}\n\n\t\tif (pbs_db_load_obj(conn, &obj) != 0) {\n\t\t\tfprintf(stderr, \"Job %s not found\\n\", jobscr.ji_jobid);\n\t\t\treturn (1);\n\t\t} else {\n\t\t\tprintf(\"---------------------------------------------------\\n\");\n\t\t\tprintf(\"Jobscript for jobid:%s\\n\", jobscr.ji_jobid);\n\t\t\tprintf(\"---------------------------------------------------\\n\");\n\n\t\t\tprintf(\"%s \\n\", jobscr.script);\n\t\t}\n\t}\n\n\t/*\n\t * On a server machine, if display_script is not set,\n\t * retrieve the job info from database.\n\t */\n\telse {\n\t\tchar state[2];\n\t\tchar substate[4];\n\t\tobj.pbs_db_obj_type = PBS_DB_JOB;\n\t\tobj.pbs_db_un.pbs_db_job = &dbjob;\n\t\tstrcpy(dbjob.ji_jobid, id);\n\t\tif (strchr(id, '.') == 0) {\n\t\t\tstrcat(dbjob.ji_jobid, \".\");\n\t\t\tstrcat(dbjob.ji_jobid, pbs_conf.pbs_server_name);\n\t\t}\n\n\t\tif (pbs_db_load_obj(conn, &obj) != 0) {\n\t\t\tfprintf(stderr, \"Job %s not found\\n\", dbjob.ji_jobid);\n\t\t\treturn (1);\n\t\t}\n\t\tdb_2_job(&xjob, &dbjob);\n\t\tsnprintf(state, sizeof(state), \"%c\", 
get_job_state(&xjob));\n\t\tsnprintf(substate, sizeof(substate), \"%ld\", get_job_substate(&xjob));\n\t\tprt_job_struct(&xjob, state, substate);\n\n\t\tif (no_attributes == 0) {\n\t\t\tsvrattrl *pal;\n\t\t\tprintf(\"--attributes--\\n\");\n\t\t\tfor (pal = (svrattrl *) GET_NEXT(dbjob.db_attr_list.attrs); pal != NULL; pal = (svrattrl *) GET_NEXT(pal->al_link)) {\n\t\t\t\tprintf(\"%s\", pal->al_atopl.name);\n\t\t\t\tif (pal->al_atopl.resource && pal->al_atopl.resource[0] != 0)\n\t\t\t\t\tprintf(\".%s\", pal->al_atopl.resource);\n\t\t\t\tprintf(\" = \");\n\t\t\t\tif (pal->al_atopl.value)\n\t\t\t\t\tprintf(\"%s\", pal->al_atopl.value);\n\t\t\t\tprintf(\"\\n\");\n\t\t\t}\n\t\t}\n\t\tprintf(\"\\n\");\n\t\tfree_attrlist(&dbjob.db_attr_list.attrs);\n\t}\n\n\treturn 0;\n}\n#endif\n/**\n * @brief\n *      This is main function of printjob.\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t1\t: failure\n */\nint\nmain(int argc, char *argv[])\n{\n\tint amt;\n\tint err = 0;\n\tint f;\n\tint fp;\n\tint no_attributes = 0;\n\tjob xjob;\n\tpbs_task xtask;\n\textern int optopt;\n\textern int opterr;\n\textern int optind;\n\tchar *job_id = NULL;\n\tchar job_script[BUF_SIZE];\n\n\t/*\n\t * Check for the user. If the user is not root/administrator,\n\t * display appropriate error message and exit. 
Else, continue.\n\t */\n\n#ifdef WIN32\n\tif (!isAdminPrivilege(getlogin())) {\n\t\tfprintf(stderr, \"printjob must be run by Admin\\n\");\n\t\texit(1);\n\t}\n\n#else\n\tif ((getuid() != 0) || (geteuid() != 0)) {\n\t\tfprintf(stderr, \"printjob must be run by root\\n\");\n\t\texit(1);\n\t}\n#endif\n\n\tif (pbs_loadconf(0) == 0) {\n\t\tfprintf(stderr, \"%s\\n\", \"could not load conf file\");\n\t\texit(1);\n\t}\n\n\t/* Print pbs_version and exit if --version specified */\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\topterr = 0;\n\twhile ((f = getopt(argc, argv, \"as\")) != EOF) {\n\t\tswitch (f) {\n\t\t\tcase 'a':\n\t\t\t\tif (display_script) {\n\t\t\t\t\tprint_usage();\n\t\t\t\t\texit(1);\n\t\t\t\t}\n\t\t\t\tno_attributes = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 's':\n\t\t\t\t/* set display_script if job-script is required */\n\t\t\t\tif (no_attributes) {\n\t\t\t\t\tprint_usage();\n\t\t\t\t\texit(1);\n\t\t\t\t}\n\t\t\t\tdisplay_script = 1;\n\t\t\t\tbreak;\n\n\t\t\tdefault:\n\t\t\t\terr = 1;\n\t\t\t\tfprintf(stderr, \"printjob: invalid option -- %c\\n\", optopt);\n\t\t}\n\t}\n\tif (err || (argc - optind < 1)) {\n\t\tprint_usage();\n\t\treturn 1;\n\t}\n\n\tfor (f = optind; f < argc; ++f) {\n\t\tchar *jobfile = argv[f];\n\t\tint len;\n\t\tchar *dirname;\n\t\tDIR *dirp;\n\t\tFILE *fp_script = NULL;\n\t\tstruct dirent *dp;\n\n\t\tfp = open(jobfile, O_RDONLY, 0);\n\n\t\tif (display_script) {\n\n\t\t\t/*\n\t\t\t * if open() succeeds, it means argument is jobfile-path which\n\t\t\t * is not allowed with -s option. 
Print the usage error and exit\n\t\t\t */\n\t\t\tif (fp > 0) {\n\t\t\t\tprint_usage();\n\t\t\t\tclose(fp);\n\t\t\t\texit(1);\n\t\t\t}\n\t\t}\n\n\t\t/* If open () fails to open the jobfile, assume argument is jobid */\n\t\tif (fp < 0) {\n\n#ifdef PRINTJOBSVR\n\t\t\tif (print_db_job(jobfile, no_attributes) == 0) {\n\t\t\t\tcontinue;\n\t\t\t} else {\n\t\t\t\tif (conn != NULL) {\n\t\t\t\t\tpbs_db_disconnect(conn);\n\t\t\t\t}\n\t\t\t\texit(1);\n\t\t\t}\n#else\n\t\t\t/*\n\t\t\t * On non-server host, execute the following code when\n\t\t\t * the job-id is given to open the job file in mom_priv\n\t\t\t */\n\t\t\tjob_id = (char *) malloc(strlen(jobfile) + strlen(pbs_conf.pbs_server_name) + 2);\n\t\t\tif (job_id == NULL) {\n\t\t\t\tperror(\"malloc failed\");\n\t\t\t\texit(1);\n\t\t\t}\n\t\t\tstrcpy(job_id, jobfile);\n\t\t\tif (strchr(job_id, '.') == 0) {\n\t\t\t\tstrcat(job_id, \".\");\n\t\t\t\tstrcat(job_id, pbs_conf.pbs_server_name);\n\t\t\t}\n\n\t\t\t/*frame the jobfile to contain $PBS_HOME/mom_priv/jobs/jobid.JB */\n\t\t\tjobfile = (char *) malloc(strlen(pbs_conf.pbs_home_path) + (strlen(job_id)) + (strlen(\"/mom_priv/jobs/.JB\")) + 1);\n\t\t\tif (jobfile == NULL) {\n\t\t\t\tperror(\"malloc failed\");\n\t\t\t\tfree(job_id);\n\t\t\t\texit(1);\n\t\t\t}\n\t\t\tsprintf(jobfile, \"%s/mom_priv/jobs/%s.JB\", pbs_conf.pbs_home_path, job_id);\n\t\t\tfp = open(jobfile, O_RDONLY, 0);\n\n\t\t\t/* If open() fails, the jobfile formed by jobid is not found in $PBS_HOME */\n\t\t\tif (fp < 0) {\n\t\t\t\tfprintf(stderr, \"Job %s not found\\n\", job_id);\n\t\t\t\tfree(job_id);\n\t\t\t\tfree(jobfile);\n\t\t\t\texit(1);\n\t\t\t}\n#endif\n\t\t}\n\n\t\t/* If not asked for displaying of script, execute below code */\n\t\tif (!display_script) {\n\t\t\tsvrattrl *pal, *pali, *ppal;\n\t\t\tchar *state = \"\";\n\t\t\tchar *substate = \"\";\n\t\t\tchar *errbuf = malloc(1024);\n\n\t\t\tif (errbuf == NULL) {\n\t\t\t\tfprintf(stderr, \"Malloc error\\n\");\n\t\t\t\texit(1);\n\t\t\t}\n\n\t\t\tamt = 
read(fp, &xjob.ji_qs, sizeof(xjob.ji_qs));\n\t\t\tif (amt != sizeof(xjob.ji_qs)) {\n\t\t\t\tfprintf(stderr, \"Short read of %d bytes, file %s\\n\",\n\t\t\t\t\tamt, jobfile);\n\t\t\t}\n\n\t\t\t/* if present, skip over extended area */\n\t\t\tif (xjob.ji_qs.ji_jsversion > 500) {\n\t\t\t\tamt = read(fp, &xjob.ji_extended, sizeof(xjob.ji_extended));\n\t\t\t\tif (amt != sizeof(xjob.ji_extended)) {\n\t\t\t\t\tfprintf(stderr, \"Short read of %d bytes, file %s\\n\",\n\t\t\t\t\t\tamt, jobfile);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\t/* if array job, skip over sub job table */\n\t\t\tif (xjob.ji_qs.ji_svrflags & JOB_SVFLG_ArrayJob) {\n\t\t\t\tsize_t xs;\n\t\t\t\tajinfo_t *ajtrk;\n\n\t\t\t\tif (read(fp, (char *) &xs, sizeof(xs)) == sizeof(xs)) {\n\t\t\t\t\tif ((ajtrk = (ajinfo_t *) malloc(xs)) == NULL) {\n\t\t\t\t\t\t(void) close(fp);\n\t\t\t\t\t\treturn 1;\n\t\t\t\t\t}\n\t\t\t\t\tif (read(fp, (char *) ajtrk + sizeof(xs), xs - sizeof(xs)) == -1)\n\t\t\t\t\t\tfprintf(stderr, \"read failed, ERR = %s\\n\", strerror(errno));\n\t\t\t\t\tfree(ajtrk);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tpal = read_all_attrs_from_jbfile(fp, &state, &substate, &errbuf);\n\t\t\tif (pal == NULL && errbuf[0] != '\\0') {\n\t\t\t\tfprintf(stderr, \"%s\\n\", errbuf);\n\t\t\t\texit(1);\n\t\t\t}\n\n\t\t\t/* Print the summary first */\n\t\t\tprt_job_struct(&xjob, state, substate);\n\n\t\t\t/* now do attributes, one at a time */\n\t\t\tif (no_attributes == 0 && pal != NULL) {\n\t\t\t\t/* Now print all attributes */\n\t\t\t\tprintf(\"--attributes--\\n\");\n\n\t\t\t\tpali = GET_NEXT(pal->al_link);\n\t\t\t\twhile (pali != NULL) {\n\t\t\t\t\tprint_attr(pali);\n\t\t\t\t\tif (pali->al_link.ll_next == NULL) {\n\t\t\t\t\t\tfree(pali);\n\t\t\t\t\t\tbreak;\n\t\t\t\t\t}\n\t\t\t\t\tppal = pali;\n\t\t\t\t\tpali = GET_NEXT(pali->al_link);\n\t\t\t\t\tfree(ppal);\n\t\t\t\t}\n\t\t\t\tfree(pal);\n\t\t\t}\n\n\t\t\t(void) close(fp);\n\t\t\tprintf(\"\\n\");\n\n\t\t\tlen = strlen(jobfile);\n\t\t\tif (len <= 2 ||\n\t\t\t    jobfile[len - 2] != 'J' 
||\n\t\t\t    jobfile[len - 1] != 'B')\n\t\t\t\tcontinue;\n\t\t\tdirname = malloc(len + 50);\n\t\t\tstrcpy(dirname, jobfile);\n\n\t\t\tdirname[len - 2] = 'T';\n\t\t\tdirname[len - 1] = 'K';\n\t\t\tdirp = opendir(dirname);\n\n\t\t\tif (dirp == NULL) {\n\t\t\t\tfree(dirname);\n\t\t\t\tcontinue;\n\t\t\t}\n\n\t\t\tdirname[len++] = '/';\n\t\t\tdirname[len] = '\\0';\n\t\t\twhile (errno = 0, (dp = readdir(dirp)) != NULL) {\n\t\t\t\tif (dp->d_name[0] == '.')\n\t\t\t\t\tcontinue;\n\t\t\t\tstrcpy(&dirname[len], dp->d_name);\n\n\t\t\t\tprintf(\"task file %s\\n\", dirname);\n\t\t\t\tfp = open(dirname, O_RDONLY, 0);\n\t\t\t\tif (fp < 0) {\n\t\t\t\t\tperror(\"open failed\");\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\n\t\t\t\tamt = read(fp, &xtask.ti_qs, sizeof(xtask.ti_qs));\n\t\t\t\tif (amt != sizeof(xtask.ti_qs)) {\n\t\t\t\t\tfprintf(stderr,\n\t\t\t\t\t\t\"Short read of %d bytes\\n\", amt);\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\n\t\t\t\tprt_task_struct(&xtask);\n\t\t\t\tclose(fp);\n\t\t\t}\n\t\t\tif (errno != 0 && errno != ENOENT) {\n\t\t\t\tperror(\"readdir failed\");\n\t\t\t\tfree(dirname);\n\t\t\t\tclosedir(dirp);\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\tfree(dirname);\n\t\t\tif (jobfile != argv[f])\n\t\t\t\tfree(jobfile);\n\t\t\tif (job_id)\n\t\t\t\tfree(job_id);\n\t\t\tclosedir(dirp);\n\t\t\tfree(errbuf);\n\t\t}\n\t\t/* if asked for displaying of script, execute below code  (for mom-side) */\n\t\telse {\n\n\t\t\tlen = strlen(jobfile);\n\t\t\tjobfile[len - 2] = 'S';\n\t\t\tjobfile[len - 1] = 'C';\n\n\t\t\tfp_script = fopen(jobfile, \"r\");\n\n\t\t\t/* If fopen fails, display the usage error */\n\t\t\tif (fp_script == NULL) {\n\t\t\t\tprint_usage();\n\t\t\t\texit(1);\n\t\t\t}\n\t\t\tif (job_id) {\n\t\t\t\tprintf(\"--------------------------------------------------\\n\");\n\t\t\t\tprintf(\"jobscript for %s\\n\", job_id);\n\t\t\t\tprintf(\"--------------------------------------------------\\n\");\n\t\t\t}\n\t\t\twhile ((fgets(job_script, BUF_SIZE - 1, fp_script)) != NULL) {\n\t\t\t\tif 
(fputs(job_script, stdout) < 0) {\n\t\t\t\t\tfprintf(stderr, \"Error writing job-script file\\n\");\n\t\t\t\t\texit(1);\n\t\t\t\t}\n\t\t\t}\n\t\t\tprintf(\"\\n\");\n\t\t\tfree(jobfile);\n\t\t\tfclose(fp_script);\n\t\t\tfree(job_id);\n\t\t\tclose(fp);\n\t\t}\n\t}\n#ifdef PRINTJOBSVR\n\tif (conn != NULL) {\n\t\tpbs_db_disconnect(conn);\n\t}\n#endif\n\treturn (0);\n}\n"
  },
  {
    "path": "src/tools/rstester.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file rstester.c\n *\n * @brief\n *\t\trstester.c - This file contains the functions related to resource testing.\n *\n * Functions included are:\n * \tmain()\n * \tread_attrs()\n */\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <unistd.h>\n#include <pbs_ifl.h>\n#include \"attribute.h\"\n\n/* prototypes */\nstatic struct attrl *read_attrs(FILE *fp);\n/**\n * @brief\n *      This is main function of rstester.\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t1\t: failure\n *\n */\nint\nmain(int argc, char *argv[])\n{\n\tint c;\n\tint print_parse = 0;\n\tint print_resc = 0;\n\tint print_assn = 0;\n\tint read_values = 0;\n\tFILE *fp = NULL;\n\trescspec *parse_tree;\n\tstruct batch_status *bs;\n\tstruct attrl *al = NULL;\n\tint eval_value;\n\tchar logbuf[256];\n\n\twhile ((c = getopt(argc, argv, \"parv:\")) != -1)\n\t\tswitch (c) {\n\t\t\tcase 'p':\n\t\t\t\tprint_parse = 1;\n\t\t\t\tbreak;\n\t\t\tcase 'a':\n\t\t\t\tprint_assn = 1;\n\t\t\t\tbreak;\n\t\t\tcase 'r':\n\t\t\t\tprint_resc = 1;\n\t\t\t\tbreak;\n\t\t\tcase 'v':\n\t\t\t\tread_values = 1;\n\t\t\t\tfp = fopen(optarg, \"r\");\n\t\t\t\tbreak;\n\n\t\t\tdefault:\n\t\t\t\tfprintf(stderr, \"Invalid Option: -%c\\n\", c);\n\t\t}\n\n\tif (argc < optind) {\n\t\tfprintf(stderr, \"no rescspec!\\n\");\n\t\treturn 1;\n\t}\n\n\tif 
(read_values && (fp == NULL || (al = read_attrs(fp)) == NULL)) {\n\t\tfprintf(stderr, \"No file to read attribs from!\\n\");\n\t\treturn 1;\n\t}\n\n\t/* turn on error output to stdout */\n\trescspec_print_errors(1);\n\n\tparse_tree = rescspec_parse(argv[optind]);\n\n\tif (parse_tree != NULL) {\n\t\tif (read_values) {\n\t\t\tlogbuf[0] = '\\0';\n\t\t\teval_value = rescspec_evaluate(parse_tree, al, logbuf);\n\t\t\tif (eval_value > 0)\n\t\t\t\tprintf(\"Evaluate: yes\\n\");\n\t\t\telse if (eval_value == 0)\n\t\t\t\tprintf(\"Evaluate: no: %s\\n\", logbuf);\n\t\t\telse\n\t\t\t\tprintf(\"Evaluate: Error\\n\");\n\t\t}\n\n\t\tif (print_parse) {\n\t\t\tprintf(\"The Parse Tree:\\n\");\n\t\t\tprint_rescspec_tree(parse_tree, NULL);\n\t\t}\n\n\t\tif (print_resc) {\n\t\t\tprintf(\"The Resources: \\n\");\n\t\t\tbs = rescspec_get_resources(parse_tree);\n\t\t\tif (bs != NULL) {\n\t\t\t\tprint_attrl(bs->attribs);\n\t\t\t\t/*      pbs_statfree(bs); */\n\t\t\t}\n\t\t}\n\n\t\tif (print_assn) {\n\t\t\tprintf(\"The Assignments: \\n\");\n\t\t\tbs = rescspec_get_assignments(parse_tree);\n\t\t\tif (bs != NULL) {\n\t\t\t\tprint_attrl(bs->attribs);\n\t\t\t\t/*      pbs_statfree(bs); */\n\t\t\t}\n\t\t}\n\t}\n\tif (fp != NULL)\n\t\tfclose(fp);\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tread_attrs - read attribvalue pairs from file\n *\n * @param[in]\tfp\t-\tthe file to read from\n *\n * @return\tattrl\n * @retval\tlist of attrib value pairs\t: success\n * @retval\tNULL\t: failed\n *\n */\nstatic struct attrl *\nread_attrs(FILE *fp)\n{\n\tchar buf[1024];\t\t   /* buf to read into */\n\tstruct attrl *head = NULL; /* head of list */\n\tstruct attrl *cur = NULL;  /* current entry in list */\n\tstruct attrl *prev = NULL; /* prev entry to add current one to */\n\n\tif (fp == NULL)\n\t\treturn NULL;\n\n\twhile (fgets(buf, 1024, fp) != NULL) {\n\t\tif ((cur = new_attrl()) == NULL)\n\t\t\treturn NULL;\n\n\t\t/* chop the \\n */\n\t\tbuf[strlen(buf) - 1] = '\\0';\n\n\t\tcur->name = ATTR_l;\n\t\tcur->resource 
= strdup(strtok(buf, \"= \t\"));\n\t\tcur->value = strdup(strtok(NULL, \"= \t\"));\n\n\t\tif (prev != NULL) {\n\t\t\tprev->next = cur;\n\t\t\tprev = cur;\n\t\t} else {\n\t\t\tprev = cur;\n\t\t\thead = cur;\n\t\t}\n\t}\n\n\treturn head;\n}\n"
  },
  {
    "path": "src/tools/site_tclWrap.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <tcl.h>\n#ifdef NAS\n#include <string.h>\n#include <stdlib.h>\n#endif\n#include \"portability.h\"\n#include \"pbs_error.h\"\n#ifdef NAS\n#include \"pbs_ifl.h\"\n#include \"pbs_internal.h\"\n#endif\n#include \"log.h\"\n\n#ifdef NAS\n/* localmod 071 */\nextern char *tcl_atrsep;\n\n/* localmod 099 */\nint quiet = 0;\n\n/* localmod 098 */\nextern char badparm[];\nextern char not_connected[];\nextern char fail[];\nextern Tcl_Obj *pbserr;\nextern Tcl_Obj *pbsmsg;\nextern int connector;\nvoid batresult(Tcl_Interp *interp, struct batch_status *bs);\nTcl_Obj *attrlist(Tcl_Interp *interp, struct attrl *ap);\n\n#define SET_PBSERR(value)                           \\\n\t(void) Tcl_ObjSetVar2(interp, pbserr, NULL, \\\n\t\t\t      Tcl_NewIntObj((value)), TCL_GLOBAL_ONLY | TCL_LEAVE_ERR_MSG)\n\n#define SET_PBSMSG(msg)                             \\\n\t(void) Tcl_ObjSetVar2(interp, pbsmsg, NULL, \\\n\t\t\t      Tcl_NewStringObj((msg), -1), TCL_GLOBAL_ONLY)\n\n#define PBS_CALL(function, note)                             \\\n\tif (function) {                                      \\\n\t\tTcl_SetObjResult(interp, Tcl_NewIntObj(-1)); \\\n\t\tmsg = pbs_geterrmsg(connector);              \\\n\t\tsprintf(log_buffer, \"%s: %s (%d)\", note,     \\\n\t\t\tmsg ? 
msg : fail, pbs_errno);        \\\n\t\tif (!quiet)                                  \\\n\t\t\tlog_err(-1, cmd, log_buffer);        \\\n\t} else                                               \\\n\t\tTcl_SetObjResult(interp, Tcl_NewIntObj(0));\n\n/* localmod 071 */\nint\nPBS_atrsep(ClientData clientData, Tcl_Interp *interp, int objc,\n\t   Tcl_Obj *CONST objv[])\n{\n\tint ret;\n\tchar *newvalue;\n\n\tnewvalue = NULL;\n\tswitch (objc) {\n\t\tcase 2:\n\t\t\tnewvalue = Tcl_GetString(objv[1]);\n\t\tcase 1:\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tTcl_WrongNumArgs(interp, 1, objv, \"?string?\");\n\t\t\treturn TCL_ERROR;\n\t}\n\tTcl_SetObjResult(interp, Tcl_NewStringObj(tcl_atrsep, strlen(tcl_atrsep)));\n\tif (newvalue) {\n\t\tfree(tcl_atrsep);\n\t\ttcl_atrsep = strdup(newvalue);\n\t}\n\treturn TCL_OK;\n}\n\n/* localmod 098 */\nint\nPBS_confirm(ClientData clientData, Tcl_Interp *interp, int objc,\n\t    Tcl_Obj *CONST objv[])\n{\n\tchar *cmd;\n\tchar *reqid;\n\tchar *exechost;\n\tunsigned long start = 0;\n\tchar *extend = NULL;\n\tchar *msg;\n\tint ret;\n\n\tswitch (objc) {\n\t\tcase 5:\n\t\t\textend = Tcl_GetString(objv[4]);\n\t\t\t/* Fall through */\n\t\tcase 4:\n\t\t\tret = Tcl_GetLongFromObj(interp, objv[3], (long *) &start);\n\t\t\tif (ret != TCL_OK) {\n\t\t\t\treturn ret;\n\t\t\t}\n\t\t\t/* Fall through */\n\t\tcase 3:\n\t\t\texechost = Tcl_GetString(objv[2]);\n\t\t\treqid = Tcl_GetString(objv[1]);\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tTcl_WrongNumArgs(interp, 1, objv, \"resvid exechost ?start_time? 
?extra?\");\n\t\t\treturn TCL_ERROR;\n\t}\n\tcmd = Tcl_GetString(objv[0]);\n\tif (connector < 0) {\n\t\tif (!quiet)\n\t\t\tlog_err(-1, cmd, not_connected);\n\t\tSET_PBSERR(PBSE_NOSERVER);\n\t\treturn TCL_OK;\n\t}\n\n\tPBS_CALL(pbs_confirmresv(connector, reqid, exechost, start, extend), reqid)\n\tSET_PBSERR(pbs_errno);\n\treturn TCL_OK;\n}\n\n/* localmod 099 */\nint\nPBS_quiet(ClientData clientData, Tcl_Interp *interp, int objc,\n\t  Tcl_Obj *CONST objv[])\n{\n\tint ret;\n\tint newvalue;\n\n\tnewvalue = quiet;\n\tswitch (objc) {\n\t\tcase 2:\n\t\t\tret = Tcl_GetBooleanFromObj(interp, objv[1], &newvalue);\n\t\t\tif (ret != TCL_OK) {\n\t\t\t\treturn ret;\n\t\t\t}\n\t\tcase 1:\n\t\t\tbreak;\n\t\tdefault:\n\t\t\tTcl_WrongNumArgs(interp, 1, objv, \"?bool?\");\n\t\t\treturn TCL_ERROR;\n\t}\n\tTcl_SetObjResult(interp, Tcl_NewIntObj(quiet));\n\tquiet = newvalue;\n\treturn TCL_OK;\n}\n\n/* localmod 098 */\nint\nPBS_StatResv(ClientData clientData, Tcl_Interp *interp, int objc,\n\t     Tcl_Obj *CONST objv[])\n{\n\tchar *msg;\n\tstruct batch_status *bs;\n\tchar *extend = NULL;\n\n\tif (objc > 2) { /* can have one argument for extend field */\n\t\tsprintf(log_buffer, badparm, Tcl_GetString(objv[0]));\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\tif (objc == 2) {\n\t\textend = Tcl_GetString(objv[1]);\n\t}\n\n\tif (connector < 0) {\n\t\tif (!quiet)\n\t\t\tlog_err(-1, Tcl_GetString(objv[0]), not_connected);\n\t\tSET_PBSERR(PBSE_NOSERVER);\n\t\treturn TCL_OK;\n\t}\n\n\tif ((bs = pbs_statresv(connector, NULL, NULL, extend)) == NULL) {\n\t\tif (pbs_errno != PBSE_NONE) {\n\t\t\tmsg = pbs_geterrmsg(connector);\n\t\t\tsprintf(log_buffer, \"%s (%d)\",\n\t\t\t\tmsg ? 
msg : fail, pbs_errno);\n\t\t\tif (!quiet)\n\t\t\t\tlog_err(-1, Tcl_GetString(objv[0]), log_buffer);\n\t\t}\n\t} else\n\t\tbatresult(interp, bs);\n\n\tSET_PBSERR(pbs_errno);\n\treturn TCL_OK;\n}\n\nint\nPBS_StatSched(clientData, interp, argc, argv)\nClientData clientData;\nTcl_Interp *interp;\nint argc;\nchar *argv[];\n{\n\tchar *msg;\n\tstruct batch_status *bs;\n\tTcl_Obj *threel[3];\n\n\tif (argc != 1) {\n\t\tsprintf(log_buffer, badparm, argv[0]);\n\t\tTcl_SetResult(interp, log_buffer, TCL_VOLATILE);\n\t\treturn TCL_ERROR;\n\t}\n\n\tif (connector < 0) {\n\t\tif (!quiet)\n\t\t\tlog_err(-1, (char *) argv[0], not_connected);\n\t\tSET_PBSERR(PBSE_NOSERVER);\n\t\treturn TCL_OK;\n\t}\n\n\tif ((bs = pbs_statsched(connector, NULL, NULL)) == NULL) {\n\t\tif (pbs_errno != PBSE_NONE) {\n\t\t\tmsg = pbs_geterrmsg(connector);\n\t\t\tsprintf(log_buffer, \"%s (%d)\",\n\t\t\t\tmsg ? msg : fail, pbs_errno);\n\t\t\tif (!quiet)\n\t\t\t\tlog_err(-1, (char *) argv[0], log_buffer);\n\t\t}\n\t} else {\n\t\tthreel[0] = Tcl_NewStringObj(bs->name, -1);\n\t\tthreel[1] = attrlist(interp, bs->attribs);\n\t\tthreel[2] = Tcl_NewStringObj(bs->text, -1);\n\n\t\tTcl_SetObjResult(interp, Tcl_NewListObj(3, threel));\n\n\t\tpbs_statfree(bs);\n\t}\n\n\tSET_PBSERR(pbs_errno);\n\treturn TCL_OK;\n}\n\nint\nPBS_StatVnode(clientData, interp, objc, objv)\nClientData clientData;\nTcl_Interp *interp;\nint objc;\nTcl_Obj *CONST objv[];\n{\n\tchar *msg, *cmd;\n\tchar *node = NULL;\n\tstruct batch_status *bs;\n\n\tif (objc == 2)\n\t\tnode = Tcl_GetStringFromObj(objv[1], NULL);\n\telse if (objc != 1) {\n\t\tTcl_WrongNumArgs(interp, 1, objv, \"?node?\");\n\t\treturn TCL_ERROR;\n\t}\n\n\tcmd = Tcl_GetStringFromObj(objv[0], NULL);\n\tif (connector < 0) {\n\t\tif (!quiet)\n\t\t\tlog_err(-1, cmd, not_connected);\n\t\tSET_PBSERR(PBSE_NOSERVER);\n\t\treturn TCL_OK;\n\t}\n\n\tif ((bs = pbs_statvnode(connector, node, NULL, NULL)) == NULL) {\n\t\tif (pbs_errno != PBSE_NONE) {\n\t\t\tmsg = 
pbs_geterrmsg(connector);\n\t\t\tsprintf(log_buffer, \"%s (%d)\",\n\t\t\t\tmsg ? msg : fail, pbs_errno);\n\t\t\tif (!quiet)\n\t\t\t\tlog_err(-1, cmd, log_buffer);\n\t\t}\n\t} else\n\t\tbatresult(interp, bs);\n\n\tSET_PBSERR(pbs_errno);\n\treturn TCL_OK;\n}\n#endif\n\n/*\n **\tThis is a site dependent routine provided as a place holder\n **\tfor whatever C code which may be required for your scheduler.\n */\nvoid\n\tsite_cmds(Tcl_Interp *interp)\n{\n\tDBPRT((\"%s: entered\\n\", __func__))\n#ifdef NAS\n\t/* localmod 071 */\n\tTcl_CreateObjCommand(interp, \"pbsatrsep\", PBS_atrsep, NULL, NULL);\n\t/* localmod 099 */\n\tTcl_CreateObjCommand(interp, \"pbsquiet\", PBS_quiet, NULL, NULL);\n\t/* localmod 098 */\n\tTcl_CreateObjCommand(interp, \"pbsconfirm\", PBS_confirm, NULL, NULL);\n\tTcl_CreateObjCommand(interp, \"pbsstatresv\", PBS_StatResv, NULL, NULL);\n\tTcl_CreateObjCommand(interp, \"pbsstatsched\", PBS_StatSched, NULL, NULL);\n\tTcl_CreateObjCommand(interp, \"pbsstatvnode\", PBS_StatVnode, NULL, NULL);\n#endif\n\treturn;\n}\n"
  },
  {
    "path": "src/tools/tracejob.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n/**\n * @file tracejob.c\n *\n * @brief\n *\t\ttracejob.c - This file contains the functions related to the tracejob command.\n *\n * Functions included are:\n * \tget_cols()\n * \tmain()\n * \tparse_log()\n * \tsort_by_date()\n * \tsort_by_message()\n * \tstrip_path()\n * \tfree_log_entry()\n * \tline_wrap()\n * \tlog_path()\n * \talloc_more_space()\n * \tfilter_excess()\n *\n */\n#include <pbs_config.h> /* the master config generated by configure */\n\n#include <stdio.h>\n#include <string.h>\n#include <time.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <ctype.h>\n#include <termios.h>\n#if defined(HAVE_SYS_IOCTL_H)\n#include <sys/ioctl.h>\n#endif\n#include \"cmds.h\"\n#include \"pbs_version.h\"\n#include \"pbs_ifl.h\"\n#include \"log.h\"\n#include \"tracejob.h\"\n\n/* path from pbs home to the log files:\n index in mid_path must match with enum field in header */\nconst char *mid_path[] = {\"server_priv/accounting\", \"server_logs\", \"mom_logs\",\n\t\t\t  \"sched_logs\"};\n\nstruct log_entry *log_lines;\nint ll_cur_amm;\nint ll_max_amm;\nint has_high_res_timestamp = 0;\n\nstatic char none[1] = {'\\0'};\n/**\n * @brief\n * \t\treturns columns, in characters from winsize struct.\n *\n * @return\tint\n * @retval\t0\t: failed.\n * @retval\tcolumns, in characters\t: success\n */\nint\nget_cols()\n{\n\n#ifdef 
WIN32\n\tCONSOLE_SCREEN_BUFFER_INFO csbi;\n\n\tif (GetConsoleScreenBufferInfo(GetStdHandle(STD_OUTPUT_HANDLE), &csbi)) {\n\t\treturn (csbi.dwSize.X);\n\t}\n\treturn (0);\n#else\n\tstruct winsize ws;\n\n\tif (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) != -1) {\n\t\treturn ws.ws_col;\n\t}\n\treturn (0);\n#endif\n}\n/**\n * @brief\n * \t\tThis is main function of tracejob.\n *\n * @return\tint\n * @retval\t0\t: success\n * @retval\t1\t: failure\n */\nint\nmain(int argc, char *argv[])\n{\n\t/* Array for the log entries for the specified job */\n\tFILE *fp;\n\tint i, j;\n\tchar *filename; /* full path of logfile to read */\n\tstruct tm *tm_ptr;\n\tint month, day, year;\n\ttime_t t, t_save;\n\tsigned char c;\n\tchar *prefix_path = NULL;\n\tint number_of_days = 1;\n\tchar *endp;\n\tshort error = 0;\n\tint opt;\n\tchar no_acct = 0, no_svr = 0, no_mom = 0, no_schd = 0;\n\tchar verbose = 0;\n\tint wrap = -1;\n\tint log_filter = 0;\n\tint event_type;\n\tchar filter_excessive = 0;\n\tint excessive_count;\n#ifdef NAS /* localmod 022 */\n\tstruct stat sbuf;\n#endif /* localmod 022 */\n\tint unknw_job = 0;\n\n\t/*the real deal or output pbs_version and exit?*/\n\tPRINT_VERSION_AND_EXIT(argc, argv);\n\n#if defined(FILTER_EXCESSIVE)\n\tfilter_excessive = 1;\n#endif\n\n#if defined(EXCESSIVE_COUNT)\n\texcessive_count = EXCESSIVE_COUNT;\n#endif\n\n\tpbs_loadconf(0);\n\n\twhile ((c = getopt(argc, argv, \"zvamslw:p:n:f:c:-:\")) != EOF) {\n\t\tswitch (c) {\n\t\t\tcase 'v':\n\t\t\t\tverbose = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 'a':\n\t\t\t\tno_acct = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 's':\n\t\t\t\tno_svr = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 'm':\n\t\t\t\tno_mom = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 'l':\n\t\t\t\tno_schd = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 'z':\n\t\t\t\tfilter_excessive = filter_excessive ? 
0 : 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 'c':\n\t\t\t\texcessive_count = strtol(optarg, &endp, 10);\n\t\t\t\tif (*endp != '\\0')\n\t\t\t\t\terror = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 'w':\n\t\t\t\twrap = strtol(optarg, &endp, 10);\n\t\t\t\tif (*endp != '\\0')\n\t\t\t\t\terror = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 'p':\n\t\t\t\tprefix_path = optarg;\n\t\t\t\tbreak;\n\n\t\t\tcase 'n':\n\t\t\t\tnumber_of_days = strtol(optarg, &endp, 10);\n\t\t\t\tif (*endp != '\\0')\n\t\t\t\t\terror = 1;\n\t\t\t\tbreak;\n\n\t\t\tcase 'f':\n\t\t\t\tif (strcmp(optarg, \"error\") == 0)\n\t\t\t\t\tlog_filter |= PBSEVENT_ERROR;\n\t\t\t\telse if (strcmp(optarg, \"system\") == 0)\n\t\t\t\t\tlog_filter |= PBSEVENT_SYSTEM;\n\t\t\t\telse if (strcmp(optarg, \"admin\") == 0)\n\t\t\t\t\tlog_filter |= PBSEVENT_ADMIN;\n\t\t\t\telse if (strcmp(optarg, \"job\") == 0)\n\t\t\t\t\tlog_filter |= PBSEVENT_JOB;\n\t\t\t\telse if (strcmp(optarg, \"job_usage\") == 0)\n\t\t\t\t\tlog_filter |= PBSEVENT_JOB_USAGE;\n\t\t\t\telse if (strcmp(optarg, \"security\") == 0)\n\t\t\t\t\tlog_filter |= PBSEVENT_SECURITY;\n\t\t\t\telse if (strcmp(optarg, \"sched\") == 0)\n\t\t\t\t\tlog_filter |= PBSEVENT_SCHED;\n\t\t\t\telse if (strcmp(optarg, \"debug\") == 0)\n\t\t\t\t\tlog_filter |= PBSEVENT_DEBUG;\n\t\t\t\telse if (strcmp(optarg, \"debug2\") == 0)\n\t\t\t\t\tlog_filter |= PBSEVENT_DEBUG2;\n\t\t\t\telse if (strcmp(optarg, \"resv\") == 0)\n\t\t\t\t\tlog_filter |= PBSEVENT_RESV;\n\t\t\t\telse if (strcmp(optarg, \"debug3\") == 0)\n\t\t\t\t\tlog_filter |= PBSEVENT_DEBUG3;\n\t\t\t\telse if (strcmp(optarg, \"debug4\") == 0)\n\t\t\t\t\tlog_filter |= PBSEVENT_DEBUG4;\n\t\t\t\telse if (strcmp(optarg, \"force\") == 0)\n\t\t\t\t\tlog_filter |= PBSEVENT_FORCE;\n\t\t\t\telse if (isdigit(optarg[0])) {\n\t\t\t\t\tlog_filter = strtol(optarg, &endp, 16);\n\t\t\t\t\tif (*endp != '\\0')\n\t\t\t\t\t\terror = 1;\n\t\t\t\t} else\n\t\t\t\t\terror = 1;\n\t\t\t\tbreak;\n\n\t\t\tdefault:\n\t\t\t\terror = 1;\n\t\t}\n\t}\n\n\t/* no jobs */\n\tif (error || argc 
== optind) {\n\t\tprintf(\n\t\t\t\"USAGE: %s [-a|s|l|m|v] [-w size] [-p path] [-n days] [-f filter_type] job_identifier...\\n\",\n\t\t\tstrip_path(argv[0]));\n\n\t\tprintf(\n\t\t\t\"   -p : path to PBS_HOME\\n\"\n\t\t\t\"   -w : number of columns of your terminal\\n\"\n\t\t\t\"   -n : number of days in the past to look for job(s) [default 1]\\n\"\n\t\t\t\"   -f : filter out types of log entries, multiple -f's can be specified\\n\"\n\t\t\t\"        error, system, admin, job, job_usage, security, sched, debug, \\n\"\n\t\t\t\"        debug2, resv, debug3, debug4, force or absolute numeric equiv\\n\"\n\t\t\t\"   -z : toggle filtering excessive messages\\n\"\n\t\t\t\"   -c : what message count is considered excessive\\n\"\n\t\t\t\"   -a : don't use accounting log files\\n\"\n\t\t\t\"   -s : don't use server log files\\n\"\n\t\t\t\"   -l : don't use scheduler log files\\n\"\n\t\t\t\"   -m : don't use mom log files\\n\"\n\t\t\t\"   -v : verbose mode - show more error messages\\n\");\n\n\t\tprintf(\"\\n       %s --version\\n\", strip_path(argv[0]));\n\t\tprintf(\"   --version : display version only\\n\\n\");\n\n\t\tprintf(\"default prefix path = %s\\n\", pbs_conf.pbs_home_path);\n#if defined(FILTER_EXCESSIVE)\n\t\tprintf(\"filter_excessive: ON\\n\");\n#else\n\t\tprintf(\"filter_excessive: OFF\\n\");\n#endif\n\t\treturn 1;\n\t}\n\n\tif (wrap == -1)\n\t\twrap = get_cols();\n\n\ttime(&t);\n\tt_save = t;\n\n\tfor (opt = optind; opt < argc; opt++) {\n\t\tll_cur_amm = 0; /* reset line count to zero */\n\t\tfor (i = 0, t = t_save; i < number_of_days; i++, t -= SECONDS_IN_DAY) {\n\t\t\ttm_ptr = localtime(&t);\n\t\t\tmonth = tm_ptr->tm_mon;\n\t\t\tday = tm_ptr->tm_mday;\n\t\t\tyear = tm_ptr->tm_year;\n\n\t\t\tfor (j = 0; j < 4; j++) {\n\t\t\t\tif ((j == IND_ACCT && no_acct) || (j == IND_SERVER && no_svr) ||\n\t\t\t\t    (j == IND_MOM && no_mom) || (j == IND_SCHED && no_schd))\n\t\t\t\t\tcontinue;\n\n#ifdef NAS /* localmod 022 */\n\t\t\t\tfilename = log_path(prefix_path, j, 1, 
month, day, year);\n\t\t\t\tif (stat(filename, &sbuf) == -1) {\n\t\t\t\t\tfilename = log_path(prefix_path, j, 0, month, day, year);\n\t\t\t\t}\n#else\n\t\t\t\tfilename = log_path(prefix_path, j, month, day, year);\n#endif /* localmod 022 */\n\n\t\t\t\tif ((fp = fopen(filename, \"r\")) == NULL) {\n\t\t\t\t\tif (verbose)\n\t\t\t\t\t\tperror(filename);\n\t\t\t\t\tcontinue;\n\t\t\t\t}\n\n\t\t\t\tparse_log(fp, argv[opt], j);\n\n\t\t\t\tfclose(fp);\n\t\t\t}\n\t\t}\n\n\t\tif (filter_excessive)\n\t\t\tfilter_excess(excessive_count);\n\n\t\tqsort(log_lines, ll_cur_amm, sizeof(struct log_entry), sort_by_date);\n\n\t\tif (ll_cur_amm != 0) {\n\t\t\tprintf(\"\\nJob: %s\\n\\n\", log_lines[0].name);\n\t\t\tfor (i = 0; i < ll_cur_amm; i++) {\n\t\t\t\tif (log_lines[i].log_file == 'A')\n\t\t\t\t\tevent_type = 0;\n\t\t\t\telse\n\t\t\t\t\tevent_type = strtol(log_lines[i].event, &endp, 16);\n\t\t\t\tif (!(log_filter & event_type) && !(log_lines[i].no_print)) {\n\t\t\t\t\tif (has_high_res_timestamp) {\n\t\t\t\t\t\tprintf(\"%-27s %-5c\", log_lines[i].date, log_lines[i].log_file);\n\t\t\t\t\t\tline_wrap(log_lines[i].msg, 33, wrap);\n\t\t\t\t\t} else {\n\t\t\t\t\t\tprintf(\"%-20s %-5c\", log_lines[i].date, log_lines[i].log_file);\n\t\t\t\t\t\tline_wrap(log_lines[i].msg, 26, wrap);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t} else {\n\t\t\t/*\n\t\t\t * if line count is zero, it means that there is\n\t\t\t * no job associated with the job-id given.\n\t\t\t */\n\t\t\tunknw_job = 1;\n\t\t\tif (strchr(argv[opt], '.') == NULL)\n\t\t\t\tfprintf(stderr, \"\\ntracejob: Couldn't find Job Id %s.%s in logs of past %d day%s\\n\\n\", argv[opt],\n\t\t\t\t\tpbs_conf.pbs_server_name, number_of_days, number_of_days == 1 ? \"\" : \"s\");\n\t\t\telse\n\t\t\t\tfprintf(stderr, \"\\ntracejob: Couldn't find Job Id %s in logs of past %d day%s\\n\\n\", argv[opt],\n\t\t\t\t\tnumber_of_days, number_of_days == 1 ? 
\"\" : \"s\");\n\t\t}\n\t}\n\n\t/* return 1 if unknown job-id */\n\tif (unknw_job)\n\t\treturn 1;\n\n\treturn 0;\n}\n\n/**\n * @brief\n *\t\tparse_log - parse out entires of a log file for a specific job\n *\t\t    and return them in log_entry structures\n *\n * @param[in]\tfp\t-\tthe log file\n * @param[in]\tjob\t-\tthe name of the job\n * @param[in]\tind\t-\twhich log file - index in enum index\n *\n *\t@return\tnothing\n *\t@note\n *\t\tmodifies global variables: loglines, ll_cur_amm, ll_max_amm\n *\n * @par MT-safe: No\n */\nvoid\nparse_log(FILE *fp, char *job, int ind)\n{\n\tstruct log_entry tmp; /* temporary log entry */\n\tchar *buf;\t      /* buffer to read in from file */\n\tchar *tbuf;\t      /* temporarily hold realloc's for main buffer */\n\tchar job_buf[128];    /* hold the jobid and the . */\n\tchar *p;\t      /* pointer to use for strtok */\n\tint field_count;      /* which field in log entry */\n\tint j = 0;\n\tstruct tm tms; /* used to convert date to unix date */\n\tint lineno = 0;\n\tint slen;\n\tchar *pdot;\n\tint buf_size = 16384; /* initial buffer size */\n\tint break_fl = 0;\n\n\tbuf = (char *) calloc(buf_size, sizeof(char));\n\tif (!buf)\n\t\treturn;\n\n\ttms.tm_isdst = -1; /* mktime() will attempt to figure it out */\n\n\tstrcpy(job_buf, job);\n\n\twhile (fgets(buf, buf_size, fp) != NULL) {\n\t\twhile (buf_size == (strlen(buf) + 1)) {\n\t\t\tbuf_size *= 2;\n\t\t\ttbuf = (char *) realloc(buf, (buf_size + 1) * sizeof(char));\n\t\t\tif (!tbuf) {\n\t\t\t\tbreak_fl = 1;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t\tbuf = tbuf;\n\t\t\tif (fgets(buf + strlen(buf), buf_size / 2 + 1, fp) == NULL) {\n\t\t\t\tbreak_fl = 1;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t\tif (break_fl)\n\t\t\tbreak;\n\t\tlineno++;\n\t\tj++;\n\t\tbuf[strlen(buf) - 1] = '\\0';\n\t\tp = strtok(buf, \";\");\n\t\tfield_count = 0;\n\t\tmemset(&tmp, 0, sizeof(struct log_entry));\n\n\t\tfor (field_count = 0; field_count < 6 && p != NULL; field_count++) {\n\t\t\tswitch (field_count) {\n\t\t\t\tcase 
FLD_DATE:\n\t\t\t\t\ttmp.date = p;\n\t\t\t\t\tif (ind == IND_ACCT)\n\t\t\t\t\t\tfield_count = 2;\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase FLD_EVENT:\n\t\t\t\t\ttmp.event = p;\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase FLD_OBJ:\n\t\t\t\t\ttmp.obj = p;\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase FLD_TYPE:\n\t\t\t\t\ttmp.type = p;\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase FLD_NAME:\n\t\t\t\t\ttmp.name = p;\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase FLD_MSG:\n\t\t\t\t\ttmp.msg = p;\n\t\t\t\t\tbreak;\n\n\t\t\t\tdefault:\n\t\t\t\t\tprintf(\"Field count too big!\\n\");\n\t\t\t\t\tprintf(\"%s\\n\", p);\n\t\t\t}\n\n\t\t\tp = strtok(NULL, \";\");\n\t\t}\n\n\t\tpdot = strchr(job_buf, (int) '.');\n\t\tif (pdot == NULL && tmp.name != NULL) {\n\t\t\tint tlen = strlen(job_buf);\n\n\t\t\tslen = strcspn(tmp.name, \".\");\n\t\t\tif (tlen > slen)\n\t\t\t\tslen = tlen;\n\t\t} else\n\t\t\tslen = strlen(job_buf);\n\n\t\tif (tmp.name != NULL && strncmp(job_buf, tmp.name, slen) == 0) {\n\t\t\tif (ll_cur_amm >= ll_max_amm)\n\t\t\t\talloc_more_space();\n\n\t\t\tfree_log_entry(&log_lines[ll_cur_amm]);\n\n\t\t\tif (tmp.date != NULL) {\n\t\t\t\t/*\n\t\t\t\t * We need to parse the time string.\n\t\t\t\t * The string will either have high res logging or not.\n\t\t\t\t * The high res logging is after the dot after the seconds field.\n\t\t\t\t */\n\t\t\t\tlog_lines[ll_cur_amm].date = strdup(tmp.date);\n\t\t\t\tif ((ind != IND_ACCT) && (strchr(tmp.date, '.'))) {\n\t\t\t\t\t/* Parse time string looking for high res logging.  If we don't parse 7 fields, we have a invalid log time. 
*/\n\t\t\t\t\tif (sscanf(tmp.date, \"%d/%d/%d %d:%d:%d.%ld\", &tms.tm_mon,\n\t\t\t\t\t\t   &tms.tm_mday, &tms.tm_year, &tms.tm_hour, &tms.tm_min,\n\t\t\t\t\t\t   &tms.tm_sec, &(log_lines[ll_cur_amm].highres)) != 7) {\n\t\t\t\t\t\tlog_lines[ll_cur_amm].date_time = -1; /* error in date field */\n\t\t\t\t\t\tlog_lines[ll_cur_amm].highres = NO_HIGH_RES_TIMESTAMP;\n\t\t\t\t\t} else { /* We found all 7 fields, correctly formed time string */\n\t\t\t\t\t\thas_high_res_timestamp = 1;\n\t\t\t\t\t\tif (tms.tm_year > 1900)\n\t\t\t\t\t\t\ttms.tm_year -= 1900;\n\t\t\t\t\t\t/* The number of months since January,\n \t\t\t\t\t\t * in the range 0 to 11 for mktime()\n \t\t\t\t\t\t */\n\t\t\t\t\t\ttms.tm_mon--;\n\t\t\t\t\t\tlog_lines[ll_cur_amm].date_time = mktime(&tms);\n\t\t\t\t\t}\n\t\t\t\t} else { /* Normal time string */\n\t\t\t\t\tif (sscanf(tmp.date, \"%d/%d/%d %d:%d:%d\", &tms.tm_mon, &tms.tm_mday,\n\t\t\t\t\t\t   &tms.tm_year, &tms.tm_hour, &tms.tm_min, &tms.tm_sec) != 6) {\n\t\t\t\t\t\tlog_lines[ll_cur_amm].date_time = -1; /* error in date field */\n\t\t\t\t\t} else {\t\t\t\t      /* We found all 6 fields, correctly formed time string */\n\t\t\t\t\t\tif (tms.tm_year > 1900)\n\t\t\t\t\t\t\ttms.tm_year -= 1900;\n\t\t\t\t\t\ttms.tm_mon--; /* The number of months since January, in the range 0 to 11 for mktime */\n\t\t\t\t\t\tlog_lines[ll_cur_amm].date_time = mktime(&tms);\n\t\t\t\t\t}\n\t\t\t\t\tlog_lines[ll_cur_amm].highres = NO_HIGH_RES_TIMESTAMP;\n\t\t\t\t}\n\t\t\t}\n\t\t\tif (tmp.event != NULL)\n\t\t\t\tlog_lines[ll_cur_amm].event = strdup(tmp.event);\n\t\t\telse\n\t\t\t\tlog_lines[ll_cur_amm].event = none;\n\t\t\tif (tmp.obj != NULL)\n\t\t\t\tlog_lines[ll_cur_amm].obj = strdup(tmp.obj);\n\t\t\telse\n\t\t\t\tlog_lines[ll_cur_amm].obj = none;\n\t\t\tif (tmp.type != NULL)\n\t\t\t\tlog_lines[ll_cur_amm].type = strdup(tmp.type);\n\t\t\telse\n\t\t\t\tlog_lines[ll_cur_amm].type = none;\n\t\t\tif (tmp.name != NULL)\n\t\t\t\tlog_lines[ll_cur_amm].name = 
strdup(tmp.name);\n\t\t\telse\n\t\t\t\tlog_lines[ll_cur_amm].name = none;\n\t\t\tif (tmp.msg != NULL)\n\t\t\t\tlog_lines[ll_cur_amm].msg = strdup(tmp.msg);\n\t\t\telse\n\t\t\t\tlog_lines[ll_cur_amm].msg = none;\n\t\t\tswitch (ind) {\n\t\t\t\tcase IND_SERVER:\n\t\t\t\t\tlog_lines[ll_cur_amm].log_file = 'S';\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase IND_SCHED:\n\t\t\t\t\tlog_lines[ll_cur_amm].log_file = 'L';\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase IND_ACCT:\n\t\t\t\t\tlog_lines[ll_cur_amm].log_file = 'A';\n\t\t\t\t\tbreak;\n\n\t\t\t\tcase IND_MOM:\n\t\t\t\t\tlog_lines[ll_cur_amm].log_file = 'M';\n\t\t\t\t\tbreak;\n\t\t\t\tdefault:\n\t\t\t\t\tlog_lines[ll_cur_amm].log_file = 'U'; /* undefined */\n\t\t\t}\n\t\t\tlog_lines[ll_cur_amm].lineno = lineno;\n\t\t\tll_cur_amm++;\n\t\t}\n\t}\n\tfree(buf);\n}\n\n/**\n * @brief\n *\t\tsort_by_date - compare function for qsort.  It compares two time_t\n *\t\t\tvariables and high resolution time stamp (if set)\n *\n * @param[in]\tv1\t-\tlog_entry structure1 which contains time_t variables.\n * @param[in]\tv2\t-\tlog_entry structure2 which contains time_t variables.\n *\n * @return\tint\n * @retval\t0\t: both are the same.\n * @retval\t-1\t: v1 is less than v2\n * @retval\t1\t: v1 is greater than v2\n */\n\nint\nsort_by_date(const void *v1, const void *v2)\n{\n\tstruct log_entry *l1, *l2;\n\n\tl1 = (struct log_entry *) v1;\n\tl2 = (struct log_entry *) v2;\n\n\tif (l1->date_time < l2->date_time)\n\t\treturn -1;\n\telse if (l1->date_time > l2->date_time)\n\t\treturn 1;\n\telse {\n\t\tif ((l1->highres != NO_HIGH_RES_TIMESTAMP) && (l2->highres != NO_HIGH_RES_TIMESTAMP)) {\n\t\t\tif (l1->highres < l2->highres)\n\t\t\t\treturn -1;\n\t\t\telse if (l1->highres > l2->highres)\n\t\t\t\treturn 1;\n\t\t}\n\n\t\tif (l1->log_file == l2->log_file) {\n\t\t\tif (l1->lineno < l2->lineno)\n\t\t\t\treturn -1;\n\t\t\telse if (l1->lineno > l2->lineno)\n\t\t\t\treturn 1;\n\t\t}\n\t\treturn 0;\n\t}\n}\n\n/**\n * @brief\n *\t\tsort_by_message - compare function used by 
qsort.  Compares the message\n *\n * @param[in]\tv1\t-\tlog_entry structure1 which contains message to be compared.\n * @param[in]\tv2\t-\tlog_entry structure2 which contains message to be compared.\n *\n * @return\treturn value from strcmp by passing v1 and v2 as arguments.\n */\nint\nsort_by_message(const void *v1, const void *v2)\n{\n\treturn strcmp(((struct log_entry *) v1)->msg, ((struct log_entry *) v2)->msg);\n}\n\n/**\n * @brief\n *\t\tstrip_path - strips all leading path and returns the command name\n *\t\t\ti.e. /usr/bin/vi => vi\n *\n * @param[in]\tpath\t-\tthe path to strip\n *\n * @return\tstripped path\n *\n */\nchar *\nstrip_path(char *path)\n{\n\tchar *p;\n\n\tp = path + strlen(path);\n\n\twhile (p != path && *p != '/')\n\t\tp--;\n\n\tif (*p == '/')\n\t\tp++;\n\n\treturn p;\n}\n\n/**\n * @brief\n *\t\tfree_log_entry - free the internal data used by a log entry\n *\n * @param[in,out]\tlg\t-\tlog entry to free\n *\n * @return nothing\n *\n */\nvoid\nfree_log_entry(struct log_entry *lg)\n{\n\tif (lg->date != NULL && lg->date != none)\n\t\tfree(lg->date);\n\n\tlg->date = NULL;\n\n\tif (lg->event != NULL && lg->event != none)\n\t\tfree(lg->event);\n\n\tlg->event = NULL;\n\n\tif (lg->obj != NULL && lg->obj != none)\n\t\tfree(lg->obj);\n\n\tlg->obj = NULL;\n\n\tif (lg->type != NULL && lg->type != none)\n\t\tfree(lg->type);\n\n\tlg->type = NULL;\n\n\tif (lg->name != NULL && lg->name != none)\n\t\tfree(lg->name);\n\n\tlg->name = NULL;\n\n\tif (lg->msg != NULL && lg->msg != none)\n\t\tfree(lg->msg);\n\n\tlg->msg = NULL;\n\n\tlg->log_file = '\\0';\n}\n\n/**\n * @brief\n *\t\tline_wrap - wrap lines at word margin and print\n *\t\t    The first line will be printed.  
The rest will be indented\n *\t\t    by start and will not go over a max line length of end chars\n *\n *\n * @param[in]\tline\t-\tthe line to wrap\n * @param[in]\tstart\t-\tamount of whitespace to indent subsequent lines\n * @param[in]\tend\t-\tnumber of columns in the terminal\n *\n * @return\tnothing\n *\n */\nvoid\nline_wrap(char *line, int start, int end)\n{\n\tint wrap_at;\n\tint total_size;\n\tint start_index;\n\tchar *cur_ptr;\n\tchar *start_ptr;\n\n\tstart_ptr = line;\n\ttotal_size = strlen(line);\n\twrap_at = (end > start) ? (end - start) : 0;\n\tstart_index = 0;\n\n\tif (end == 0)\n\t\tprintf(\"%s\\n\", show_nonprint_chars(line));\n\telse {\n\t\twhile (start_index < total_size) {\n\t\t\tif (start_index + wrap_at < total_size) {\n\t\t\t\tcur_ptr = start_ptr + wrap_at;\n\n\t\t\t\twhile (cur_ptr > start_ptr && *cur_ptr != ' ')\n\t\t\t\t\tcur_ptr--;\n\n\t\t\t\tif (cur_ptr == start_ptr) {\n\t\t\t\t\tcur_ptr = start_ptr + wrap_at;\n\t\t\t\t\twhile (*cur_ptr != ' ' && *cur_ptr != '\\0')\n\t\t\t\t\t\tcur_ptr++;\n\t\t\t\t}\n\n\t\t\t\t*cur_ptr = '\\0';\n\t\t\t} else\n\t\t\t\tcur_ptr = line + total_size;\n\n\t\t\t/* first line, don't indent */\n\t\t\tif (start_ptr == line)\n\t\t\t\tprintf(\"%s\\n\", show_nonprint_chars(start_ptr));\n\t\t\telse\n\t\t\t\tprintf(\"%*s%s\\n\", start, \" \", show_nonprint_chars(start_ptr));\n\n\t\t\tstart_ptr = cur_ptr + 1;\n\t\t\tstart_index = cur_ptr - line;\n\t\t}\n\t}\n}\n\n/**\n * @brief\n *\t\tlog_path - create the path to a log file\n *\n * @param[out]\tpath\t-\tprefix path\n * @param[in]\tindex\t-\tindex into the prefix_path array\n * @param[in]\told\t-\told date or new\n * @param[in]\tmonth\t-\tmonth in numeric starts from 0.\n * @param[in]\tday\t-\tday in numeric.\n * @param[in]\tyear\t-\tyear as a count from 1900.\n *\n * @return path to log file\n *\n * @par MT-safe:\tNo\n */\n#ifdef NAS /* localmod 022 */\nchar *\nlog_path(char *path, int index, int old, int month, int day, int year)\n{\n\tstatic char buf[256];\n\tchar 
*oldd;\n\n\toldd = old ? \"/old.d\" : \"\";\n\tif (pbs_conf.pbs_mom_home && index == IND_MOM)\n\t\tpath = pbs_conf.pbs_mom_home;\n\n\tif (path != NULL)\n\t\tsprintf(buf, \"%s/%s%s/%04d%02d%02d\", path, mid_path[index], oldd,\n\t\t\tyear + 1900, month + 1, day);\n\telse\n\t\tsprintf(buf, \"%s/%s%s/%04d%02d%02d\", pbs_conf.pbs_home_path,\n\t\t\tmid_path[index], oldd,\n\t\t\tyear + 1900, month + 1, day);\n\n\treturn buf;\n}\n#else\n/**\n * @brief\n *\t\tlog_path - create the path to a log file\n *\n * @param[out]\tpath\t-\tprefix path\n * @param[in]\tindex\t-\tindex into the prefix_path array\n * @param[in]\tmonth\t-\tmonth in numeric starts from 0.\n * @param[in]\tday\t-\tday in numeric.\n * @param[in]\tyear\t-\tyear as a count from 1900.\n *\n * @return path to log file\n *\n * @par MT-safe:\tNo\n */\nchar *\nlog_path(char *path, int index, int month, int day, int year)\n{\n\tstatic char buf[256];\n\n\tif (pbs_conf.pbs_mom_home && index == IND_MOM)\n\t\tpath = pbs_conf.pbs_mom_home;\n\n\tif (path != NULL)\n\t\tsprintf(buf, \"%s/%s/%04d%02d%02d\", path, mid_path[index],\n\t\t\tyear + 1900, month + 1, day);\n\telse\n\t\tsprintf(buf, \"%s/%s/%04d%02d%02d\", pbs_conf.pbs_home_path, mid_path[index],\n\t\t\tyear + 1900, month + 1, day);\n\n\treturn buf;\n}\n#endif /* localmod 022 */\n\n/**\n * @brief\n *\t\talloc_more_space - double the allocation of current log entries\n *\n */\nvoid\nalloc_more_space()\n{\n\tint old_amm = ll_max_amm;\n\tstruct log_entry *temp_log_lines;\n\n\tif (ll_max_amm == 0)\n\t\tll_max_amm = DEFAULT_LOG_LINES;\n\telse\n\t\tll_max_amm *= 2;\n\n\tif ((temp_log_lines = realloc(log_lines, ll_max_amm * sizeof(struct log_entry))) == NULL) {\n\t\tperror(\"Error allocating memory\");\n\t\texit(1);\n\t} else\n\t\tlog_lines = temp_log_lines;\n\n\tmemset(&log_lines[old_amm], 0, (ll_max_amm - old_amm) * sizeof(struct log_entry));\n}\n\n/**\n * @brief\n *\t\tfilter_excess - count and set the no_print flags if the count goes over\n *\t\t       the message threshold\n 
*\n * @param[in]\tthreshold\t-\tif the number of messages exceeds this, don't print them\n *\n * @return\tnothing\n *\n * @note\n *\t\tlog_lines array will be sorted in place\n */\nvoid\nfilter_excess(int threshold)\n{\n\tint cur_count = 1;\n\tchar *msg;\n\tint i;\n\tint j = 0;\n\n\tif (ll_cur_amm) {\n\t\tqsort(log_lines, ll_cur_amm, sizeof(struct log_entry), sort_by_message);\n\t\tmsg = log_lines[0].msg;\n\n\t\tfor (i = 1; i < ll_cur_amm; i++) {\n\t\t\tif (strcmp(log_lines[i].msg, msg) == 0)\n\t\t\t\tcur_count++;\n\t\t\telse {\n\t\t\t\tif (cur_count >= threshold) {\n\t\t\t\t\t/* we want to print 1 of the many messages */\n\t\t\t\t\tfor (; j < i - 1; j++)\n\t\t\t\t\t\tlog_lines[j].no_print = 1;\n\t\t\t\t}\n\n\t\t\t\tj = i;\n\t\t\t\tcur_count = 1;\n\t\t\t\tmsg = log_lines[i].msg;\n\t\t\t}\n\t\t}\n\n\t\tif (cur_count >= threshold) {\n\t\t\tj++;\n\t\t\tfor (; j < i; j++)\n\t\t\t\tlog_lines[j].no_print = 1;\n\t\t}\n\t}\n}\n"
  },
  {
    "path": "src/tools/tracejob.h",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef _TRACEJOB_H\n#define _TRACEJOB_H\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\n/* Symbolic constants */\n\n/* default number of columns on a terminal */\n#ifndef DEFAULT_WRAP\n#define DEFAULT_WRAP 80\n#endif\n\n/*\n * filter excessive log entries\n */\n#ifndef FILTER_EXCESSIVE\n#define FILTER_EXCESSIVE\n#endif\n\n/* if filter excessive is turned on and there are at least this many\n * log entries, it is considered excessive\n */\n#ifndef EXCESSIVE_COUNT\n#define EXCESSIVE_COUNT 15\n#endif\n\n/* number of entries to start */\n#ifndef DEFAULT_LOG_LINES\n#define DEFAULT_LOG_LINES 1024\n#endif\n\n#define SECONDS_IN_DAY 86400\n\n/* indices into the mid_path array */\nenum index {\n\tIND_ACCT = 0,\n\tIND_SERVER = 1,\n\tIND_MOM = 2,\n\tIND_SCHED = 3\n};\n\n/* fields of a log entry */\nenum field {\n\tFLD_DATE = 0,\n\tFLD_EVENT = 1,\n\tFLD_OBJ = 2,\n\tFLD_TYPE = 3,\n\tFLD_NAME = 4,\n\tFLD_MSG = 5\n};\n\n/* A PBS log entry */\nstruct log_entry {\n\tchar *date;\t       /* date of log entry */\n\ttime_t date_time;      /* number of seconds from the epoch to date */\n\tlong highres;\t       /* high resolution portion of the log entry (number smaller than seconds) */\n\tchar *event;\t       /* event type */\n\tchar *obj;\t       /* what entity is writing the log */\n\tchar *type;\t       /* type of object Job/Svr/etc */\n\tchar 
*name;\t       /* name of object */\n\tchar *msg;\t       /* log message */\n\tchar log_file;\t       /* What log file: A=accounting S=server M=Mom L=Scheduler */\n\tint lineno;\t       /* what line in the file.  used to stabilize the sort */\n\tunsigned no_print : 1; /* whether or not to print the message */\n};\n\n/* prototypes */\nint sort_by_date(const void *v1, const void *v2);\nvoid parse_log(FILE *fp, char *job, int ind);\nchar *strip_path(char *path);\nvoid free_log_entry(struct log_entry *lg);\nvoid line_wrap(char *line, int start, int end);\n#ifdef NAS /* localmod 022 */\nchar *log_path(char *path, int index, int old, int month, int day, int year);\n#else\nchar *log_path(char *path, int index, int month, int day, int year);\n#endif /* localmod 022 */\nvoid alloc_more_space();\nvoid filter_excess(int threshold);\nint sort_by_message(const void *v1, const void *v2);\n\n/* Macros */\n#define NO_HIGH_RES_TIMESTAMP -1\n\n/* used by getopt(3) */\nextern char *optarg;\nextern int optind;\n#ifdef __cplusplus\n}\n#endif\n#endif /* _TRACEJOB_H */\n"
  },
  {
    "path": "src/tools/wrap_tcl.sh.in",
    "content": "#!/bin/sh\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nif [ $# -eq 1 ] && [ $1 = \"--version\" ]; then\n   echo pbs_version = @PBS_VERSION@\n   exit 0\nfi\n\nEXEC_FULL_NAME=$0\nEXEC_SHORT_NAME=`basename $0`\nEXEC_DIR_NAME=`dirname $0`\nEXEC_ARGS=$*\n\nLIBS_TO_LOAD=/opt/pbs/lib:/usr/pbs/lib:/usr/lib:/usr/local/lib\n\nPBS_CONF_FILE=${PBS_CONF_FILE:-@PBS_CONF_FILE@}\n\ntest -f $PBS_CONF_FILE && . $PBS_CONF_FILE\n\nPBS_LIB_PATH=${PBS_EXEC}/lib\nif [ ! -d ${PBS_LIB_PATH} -a -d ${PBS_EXEC}/lib64 ] ; then\n\tPBS_LIB_PATH=${PBS_EXEC}/lib64\nfi\n\nif  test -f ${EXEC_DIR_NAME}/../pbs_wish &&  \\\n    test -f ${EXEC_DIR_NAME}/${EXEC_SHORT_NAME}.src.tk ; then\n\tPBS_WISH_PATH=\"${EXEC_DIR_NAME}/..\"\n\tEXEC_PATH=\"${EXEC_DIR_NAME}\"\nelif test \"$PBS_EXEC\" != \"\" ; then\n\tPBS_WISH_PATH=${PBS_EXEC}/bin\n\tEXEC_PATH=${PBS_LIB_PATH}/${EXEC_SHORT_NAME}\nelse\n\tPBS_WISH_PATH=/usr/local/bin\n\tEXEC_PATH=/usr/local/lib/${EXEC_SHORT_NAME}\nfi\n\nenv LD_LIBRARY_PATH=${LIBS_TO_LOAD}:${LD_LIBRARY_PATH} LD_LIBRARY_PATH_64=${LIBS_TO_LOAD}:${LD_LIBRARY_PATH_64} ${PBS_WISH_PATH}/pbs_wish ${EXEC_PATH}/${EXEC_SHORT_NAME}.src.tk ${EXEC_ARGS}\n"
  },
  {
    "path": "src/unsupported/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nunsupporteddir = ${exec_prefix}/unsupported\n\nunsupported_PROGRAMS = pbs_rmget\n\ndist_unsupported_SCRIPTS = \\\n\tpbs_loganalyzer \\\n\tpbs_stat \\\n\tpbs_config \\\n\tsgiICEvnode.sh \\\n\tsgiICEplacement.sh \\\n\tsgigenvnodelist.awk\n\n# Marking all *.py files as data as these files are meant to be used as hooks and\n# need no compilation.\ndist_unsupported_DATA = \\\n\tNodeHealthCheck.py \\\n\tload_balance.py \\\n\tmom_dyn_res.py \\\n\trapid_inter.py \\\n\trun_pelog_shell.py \\\n\tNodeHealthCheck.json \\\n\tREADME \\\n\tpbs_jobs_at.8B \\\n\tpbs_rescquery.3B \\\n\trun_pelog_shell.ini \\\n\tcray_readme \\\n\tReliableJobStartup.py \\\n\tpbs_output.py\n\npbs_rmget_CPPFLAGS = \\\n\t-I$(top_srcdir)/src/include \\\n\t@libz_inc@ \\\n\t@KRB5_CFLAGS@\n\npbs_rmget_LDADD = \\\n\t$(top_builddir)/src/lib/Libpbs/libpbs.la \\\n\t$(top_builddir)/src/lib/Libtpp/libtpp.a \\\n\t$(top_builddir)/src/lib/Liblog/liblog.a \\\n\t$(top_builddir)/src/lib/Libnet/libnet.a \\\n\t$(top_builddir)/src/lib/Libutil/libutil.a \\\n\t-lpthread \\\n\t@KRB5_LIBS@ \\\n\t@libz_lib@\n\npbs_rmget_SOURCES = pbs_rmget.c\n"
  },
  {
    "path": "src/unsupported/NodeHealthCheck.json",
    "content": "{\n   \"mounts\": {\n\n      \"comment\": [ \"Check these mount points on the system\",\n         \"This should be the actual mount point and not a link. \",\n         \"Consider looking to see if it is a link and points to a mount point \",\n         \"Possible actions upon failure: Warn,Offline,Reboot\"\n      ],\n      \"check\":true,\n      \"mount_points\" : { \n         \"/home\":\"Warn\", \n         \"/scratch\":\"Warn\"\n      }\n   },\n\n   \"disk_space\": {\n      \"check\":true,\n\n      \"comment\": [\n         \"Check the disk space on the system.\",\n         \"Format 'Directory':['Min free space','Action upon fail']\",\n         \"if % is used then it refers to the amount of space used. example '15%'\",\n         \"if a number is used then its units are bytes. example 1073741824\",\n         \"if a string is used then units must be defined. example '1gb'\"\n      ],\n      \"dirs\" : {\n         \"/tmp\":[\"1gb\",\"Warn\"],\n         \"/scratch\":[1073741824,\"Warn\"]\n      },\n\n      \"comment1\": [\n         \"Return the values in decimal (KB, MB, GB, etc) or in binary (KiB, MiB, GiB, etc)\",\n         \"values:decimal,binary\"\n      ],\n      \"units\" : \"decimal\"\n   },\n\n   \"permissions\": {\n      \"check\":true,\n\n      \"comment\": [\n         \"Check the system permissions.\",\n         \"Format 'File':['Required permissions','Action upon fail']\",\n         \"Format 'Directory':['Required permissions','Action upon fail']\"\n      ],\n      \"check_dirs_and_files\" : {\n         \"/tmp\":[\"1777\",\"Warn\"]\n      }\n   },\n\n   \"processes\": {\n      \"check\":true,\n\n      \"comment\": [\n         \"Check the system processes.\",\n         \"Format 'Process':['User','Action upon fail']\"\n      ],\n      \"running\" : {\n         
\"nscd\":[\"root\",\"Warn\"]\n      }\n   },\n\n   \"as_user_operations\": {\n      \"comment0\": \"To disable the touch file check set the value to False\",\n      \"check\" : true,\n\n      \"comment\": [\n         \"Touch a file in the following directories\",\n         \"Format 'Directory':['Touch file as this user','Action upon fail']\",\n         \"Possible options for Directory: \",\n         \"<user_home> (Will be replaced with the user's home dir)\",\n         \"<userid> (Will be replaced with the user's id)\",\n         \"'/actual/path' (No changes)\",\n         \"Possible options for user: pbsuser (user running job) or pbsadmin (root)\",\n         \"To look in the Variable list place a $ in front of the variable (i.e. $PBS_O_WORKDIR)\"\n      ],\n      \"touch_files\" : { \n         \"<user_home>\":[\"pbsuser\",\"Warn\"], \n         \"/scratch/<userid>\":[\"pbsuser\",\"Warn\"], \n         \"$PBS_O_WORKDIR\":[\"pbsuser\",\"Warn\"],\n         \"/var/spool/pbs/mom_priv\":[\"pbsadmin\",\"Warn\"]\n      }\n   }\n}\n\n"
  },
  {
    "path": "src/unsupported/NodeHealthCheck.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n##############################################################################\n# Purpose: To create a class for performing node disk checks\n# Date: 20141114\n##############################################################################\n\n'''\ncreate hook NHC\nset hook NHC event = 'execjob_begin,execjob_prologue'\nset hook NHC fail_action = offline_vnodes\nimport hook NHC application/x-python default NodeHealthCheck.py\nimport hook NHC application/x-config default NodeHealthCheck.json\n\nOne can also optionally add exechost_periodic to NHC event\n'''\n\nimport json\nimport os\n# Import the needed modules for the program\nimport platform\nimport signal\nimport subprocess\nimport sys\nimport time\nfrom pwd import getpwnam\n\ntry:\n    import pbs\n\n    # Remember, periodic events do not have a job associated with them.\n    if pbs.event().type != pbs.EXECHOST_PERIODIC:\n        who = pbs.event().job.euser\n\n# For limiting testing to 1 user's jobs, uncomment this and change username\n#        pbs.logmsg(pbs.EVENT_DEBUG3,'User: %s'%who)\n#        if who != 'jshelley':\n#            pbs.logmsg(pbs.EVENT_DEBUG,'jshelley != %s'%who)\n#            pbs.event().accept()\n\n    pbs.logmsg(pbs.EVENT_DEBUG3, 'Event: %s' % pbs.event().type)\n\n    # Add the site-packages paths to the sys path\n    pbs_conf = pbs.pbs_conf\n#    py_path = 
'/opt/pbs/default/python/lib'\n    py_path = pbs_conf['PBS_EXEC'] + os.sep + 'python/lib'\n    py_version = str(sys.version_info.major) + \".\" + \\\n        str(sys.version_info.minor)\n    my_paths = [py_path + '/python' + py_version + '.zip',\n                py_path + '/python' + py_version,\n                py_path + '/python' + py_version + '/plat-linux2',\n                py_path + '/python' + py_version + '/lib-tk',\n                py_path + '/python' + py_version + '/lib-dynload',\n                py_path + '/python' + py_version + '/site-packages']\n\n    if not sys.path.__contains__(py_path + '/python' + py_version):\n        for my_path in my_paths:\n            if my_path not in sys.path:\n                sys.path.append(my_path)\n\nexcept ImportError:\n    pass\n\n\n\n\nclass NodeHealthCheck:\n    def __init__(self, **kwords):\n        self.host = ''\n        self.user = ''\n        self.job_id = ''\n        self.nhc_cfg = None\n\n        # Set up the values for host and user\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"get node name\")\n        self.host = pbs.get_local_nodename()\n\n        # Read in the configurations file\n        pbs_hook_cfg = pbs.hook_config_filename\n        if pbs_hook_cfg is None:\n            pbs.logmsg(pbs.EVENT_DEBUG3, \"%s\" % os.environ)\n            pbs_hook_cfg = os.environ[\"PBS_HOOK_CONFIG_FILE\"]\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"read config file: %s\" %\n                   pbs.hook_config_filename)\n        config_file = open(pbs.hook_config_filename).read()\n\n        self.nhc_cfg = json.loads(config_file)\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"config file: %s\" % self.nhc_cfg)\n\n        # Check to make sure the event has a user associated with it\n        pbs.logmsg(pbs.EVENT_DEBUG3, 'Event: %s' % pbs.event().type)\n        if pbs.event().type != pbs.EXECHOST_PERIODIC:\n            self.user = repr(pbs.event().job.Job_Owner).split(\"@\")[\n                0].replace(\"'\", \"\")\n            self.job_id = 
pbs.event().job.id\n        else:\n            self.user = 'EXECHOST_PERIODIC'\n            self.job_id = str(time.time())\n\n        pbs.logmsg(pbs.EVENT_DEBUG3, 'Done initializing NodeHealthCheck')\n\n    def ChkMountPoints(self):\n        if not self.nhc_cfg['mounts']['check']:\n            pbs.logmsg(pbs.EVENT_DEBUG3, \"Skipping mounts check\")\n            return True\n\n        for mnt_pnt in self.nhc_cfg[\"mounts\"][\"mount_points\"]:\n            pbs.logmsg(pbs.EVENT_DEBUG3, \"mount point: %s, %s\" % (\n                mnt_pnt, self.nhc_cfg[\"mounts\"][\"mount_points\"][mnt_pnt]))\n            try:\n                # Added the line below to check to see if the real path is a\n                # mount or not\n                if not os.path.ismount(os.path.realpath(mnt_pnt)):\n                    pbs.logmsg(\n                        pbs.EVENT_DEBUG3, \"Mount: %s\\tAction: %s\" %\n                        (mnt_pnt,\n                         self.nhc_cfg[\"mounts\"][\"mount_points\"][mnt_pnt]))\n                    return [self.nhc_cfg[\"mounts\"][\"mount_points\"][mnt_pnt],\n                            '%s does not appear to be mounted' % mnt_pnt]\n            except Exception as e:\n                pbs.logmsg(pbs.EVENT_DEBUG, \"Mount check error: %s\" % e)\n                return False\n            pbs.logmsg(pbs.EVENT_DEBUG3,\n                       \"mount point %s checked out\" % (mnt_pnt))\n        return True\n\n    def ConvertToBytes(self, value):\n        # Determine what units the user would like to use.\n        if self.nhc_cfg[\"disk_space\"][\"units\"].lower() == 'binary':\n            units = {'kb': 1024, 'mb': 1048576,\n                     'gb': 1073741824, 'tb': 1099511627776}\n        elif self.nhc_cfg[\"disk_space\"][\"units\"].lower() == 'decimal':\n            units = {'kb': 1000, 'mb': 1000000,\n                     'gb': 1000000000, 'tb': 1000000000000}\n        else:\n            pbs.logmsg(\n                pbs.EVENT_DEBUG3,\n             
   (\"I'm not sure how to handle units: %s\\n\"\n                 \"So I will default to binary\") %\n                (self.nhc_cfg[\"disk_space\"][\"units\"]))\n            units = {'kb': 1024, 'mb': 1048576,\n                     'gb': 1073741824, 'tb': 1099511627776}\n\n        value = value.lower()\n        if value.find('%') != -1:\n            pbs.logmsg(pbs.EVENT_DEBUG3, \"found a % symbol\")\n            # Returned as a float so that I can distinguish between percentage\n            # vs free space\n            value = float(value.strip('%'))\n            pbs.logmsg(pbs.EVENT_DEBUG3, \"value: %s\" % value)\n        else:\n            for key in list(units.keys()):\n                if value.find(key) != -1:\n                    try:\n                        value = int(value[:-2].strip()) * units[key]\n                    except Exception as e:\n                        pbs.logmsg(\n                            pbs.EVENT_DEBUG,\n                            \"Error converting value to int: %s\\tkey: %s\" %\n                            (value, key))\n                        return False\n                    break\n        return value\n\n    def ChkDiskUsage(self):\n        \"\"\"\n            Checks the disk usage. 
Returns True if the tests pass.\n        \"\"\"\n        if not self.nhc_cfg[\"disk_space\"][\"check\"]:\n            pbs.logmsg(pbs.EVENT_DEBUG3, \"Skipping disk space check\")\n            return True\n\n        for check_dir in self.nhc_cfg[\"disk_space\"][\"dirs\"]:\n            pbs.logmsg(pbs.EVENT_DEBUG3, \"Dir: %s\\tSpace: %s\" % (\n                check_dir, self.nhc_cfg[\"disk_space\"][\"dirs\"][check_dir]))\n            # Get the requested space required for the check\n            spaceVal = self.nhc_cfg[\"disk_space\"][\"dirs\"][check_dir][0]\n            if isinstance(spaceVal, int) or isinstance(spaceVal, (float)):\n                spaceVal = int(spaceVal)\n            else:\n                spaceVal = self.ConvertToBytes(spaceVal)\n                if not spaceVal:\n                    return False\n\n            try:\n                st = os.statvfs(check_dir)\n                free = (st.f_bavail * st.f_frsize)\n                total = (st.f_blocks * st.f_frsize)\n                used = (st.f_blocks - st.f_bfree) * st.f_frsize\n            except OSError:\n                line = \"No file or directory: %s\" % check_dir\n                return [self.nhc_cfg[\"disk_space\"][\"dirs\"][check_dir]\n                        [1], \"No file or directory: %s\" % check_dir]\n            except Exception as e:\n                pbs.logmsg(pbs.EVENT_DEBUG, \"Check Disk Usage Error: %s\" % (e))\n                return False\n\n            gb_unit = 1073741824\n            if self.nhc_cfg[\"disk_space\"]['units'].lower() == 'decimal':\n                gb_unit = 1000000000\n\n            if isinstance(spaceVal, int):\n                pbs.logmsg(\n                    pbs.EVENT_DEBUG3, \"Free: %0.2lfgb\\tRequested: %0.2lfgb\" %\n                    (float(free) / float(gb_unit),\n                     float(spaceVal) / float(gb_unit)))\n                if free < spaceVal:\n                    return [\n                        
self.nhc_cfg[\"disk_space\"][\"dirs\"][check_dir][1],\n                        ('%s failed disk space check. Free: %0.2lfgb\\t'\n                         'Requested: %0.2lfgb') %\n                        (check_dir,\n                            float(free) /\n                            float(gb_unit),\n                            float(spaceVal) /\n                            float(gb_unit))]\n\n            elif isinstance(spaceVal, (float)):\n                try:\n                    pbs.logmsg(\n                        pbs.EVENT_DEBUG3,\n                        (\"Free: %d\\tTotal: %d\\tUsed: %d\\tUsed+Free: %d\\t\"\n                         \"SpaceVal: %d\") %\n                        (free, total, used, used + free, int(spaceVal)))\n                    percent = 100 - \\\n                        int((float(used) / float(used + free)) * 100)\n                    pbs.logmsg(\n                        pbs.EVENT_DEBUG3, \"Free: %d%%\\tRequested: %d%%\" %\n                        (percent, int(spaceVal)))\n\n                    if percent < int(spaceVal):\n                        return [\n                            self.nhc_cfg[\"disk_space\"][\"dirs\"][check_dir][1],\n                            ('%s failed disk space check. Free: %d%%\\t'\n                             'Requested: %d%%') %\n                            (check_dir,\n                             percent,\n                             int(spaceVal))]\n                except Exception as e:\n                    pbs.logmsg(pbs.EVENT_DEBUG, \"Error: %s\" % e)\n\n        return True\n\n    def ChkDirFilePermissions(self):\n        \"\"\"\n            Returns True if the permissions match. The permissions from python\n            are returned as string with the '0100600'. 
The last three digits\n            are the file permissions for user,group, world Return action if the\n            permissions don't match and NoFileOrDir if it can't find the\n            file/dir\n        \"\"\"\n\n        if not self.nhc_cfg[\"permissions\"][\"check\"]:\n            pbs.logmsg(pbs.EVENT_DEBUG3, \"Skipping permissions check\")\n            return True\n\n        for file_dir in self.nhc_cfg[\"permissions\"][\"check_dirs_and_files\"]:\n            pbs.logmsg(\n                pbs.EVENT_DEBUG3, \"File/Dir: %s\\t%s\" %\n                (file_dir, str(\n                    self.nhc_cfg[\"permissions\"][\"check_dirs_and_files\"]\n                    [file_dir][0])))\n            try:\n                st = os.stat(file_dir)\n                permissions = oct(st.st_mode)\n\n                if (permissions[-len(self.nhc_cfg[\"permissions\"]\n                                     [\"check_dirs_and_files\"][file_dir][0]):]\n                        != str(self.nhc_cfg[\"permissions\"]\n                               [\"check_dirs_and_files\"][file_dir][0])):\n                    pbs.logmsg(\n                        pbs.EVENT_DEBUG3,\n                        \"Required permissions: %s\\tpermissions: %s\" %\n                        (str(self.nhc_cfg[\"permissions\"]\n                             [\"check_dirs_and_files\"][file_dir][0]),\n                         permissions[-len(self.nhc_cfg[\"permissions\"]\n                                          [\"check_dirs_and_files\"][file_dir][0]\n                                          ):]))\n                    return ([\n                        self.nhc_cfg[\"permissions\"][\"check_dirs_and_files\"]\n                        [file_dir][1],\n                        \"File/Dir: %s\\tRequired permissions: %s\\tpermissions: \"\n                        \"%s\" %\n                        (file_dir,\n                         str(self.nhc_cfg[\"permissions\"]\n                             
[\"check_dirs_and_files\"][file_dir][0]),\n                         permissions[-len(self.nhc_cfg[\"permissions\"]\n                                          [\"check_dirs_and_files\"][file_dir][0]\n                                          ):])])\n            except OSError:\n                return [self.nhc_cfg[\"permissions\"][\"check_dirs_and_files\"]\n                        [file_dir][1], \"Can not find file/dir: %s\" % file_dir]\n            except BaseException:\n                return False\n\n        return True\n\n    def ChkProcesses(self):\n        if not self.nhc_cfg[\"processes\"][\"check\"]:\n            pbs.logmsg(pbs.EVENT_DEBUG3, \"Skipping processes check\")\n            return True\n\n        # List all of the processes\n        procs = {}\n        if platform.uname()[0] == 'Linux':\n            # out, err = subprocess.Popen(\n            #   ['ps', '-Af'], stdout=subprocess.PIPE).communicate()\n            out, err = subprocess.Popen(\n                ['top', '-bn1'], stdout=subprocess.PIPE).communicate()\n            lines = out.split('\\n')\n            for line in lines[1:]:\n                if line != \"\":\n                    line = line.split()\n                    # If ps -Af is used\n                    # procs[os.path.split(line[-1].split()[0])[-1]] = line[0]\n\n                    # If top -bn1 is used\n                    procs[os.path.split(line[-1].split()[0])[-1]] = line[1]\n\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"Processes: %s\" % procs)\n\n        # store procs that violate the checks\n        chk_procs = {}\n        chk_procs['running'] = []\n        chk_procs['stopped'] = []\n        chk_action = \"\"\n\n        # Loop through processes\n        for proc in self.nhc_cfg[\"processes\"][\"running\"]:\n            if proc not in list(procs.keys()):\n                pbs.logmsg(\n                    pbs.EVENT_DEBUG,\n                    \"Process: %s is not in the running process list but \"\n                    \"should be\" 
%\n                    proc)\n                chk_procs['running'].append(proc)\n                if chk_action == \"\":\n                    chk_action = self.nhc_cfg['processes']['running'][proc][1]\n\n        for proc in self.nhc_cfg['processes']['stopped']:\n            if proc in list(procs.keys()):\n                pbs.logmsg(\n                    pbs.EVENT_DEBUG,\n                    \"Process: %s is in the stopped process list but was found \"\n                    \"to be running\" %\n                    proc)\n                chk_procs['stopped'].append(proc)\n                if chk_action == \"\":\n                    chk_action = self.nhc_cfg['processes']['stopped'][proc][1]\n\n        if len(chk_procs['running']) > 0 or len(chk_procs['stopped']) > 0:\n            line = \"running: %s\\nstopped: %s\" % (\n                \",\".join(chk_procs['running']), \",\".join(chk_procs['stopped']))\n            return [\n                chk_action,\n                \"CheckProcesses: One or more processes were found which \"\n                \"violate the check\\n%s\" %\n                line]\n\n        return True\n\n    def ChkTouchFileAsUser(self):\n        if not self.nhc_cfg[\"as_user_operations\"][\"check\"]:\n            pbs.logmsg(pbs.EVENT_DEBUG3, \"Skipping touch file as user check\")\n            return True\n\n        for file_dir in self.nhc_cfg[\"as_user_operations\"][\"touch_files\"]:\n            file_dir_orig = file_dir\n            # Check to see if this is a periodic hook. 
If so skip pbsuser file\n            # touches\n            if (pbs.event().type == pbs.EXECHOST_PERIODIC and\n                    self.nhc_cfg[\"as_user_operations\"][\"touch_files\"]\n                [file_dir_orig][0] == 'pbsuser'):\n                pbs.logmsg(\n                    pbs.EVENT_DEBUG3,\n                    \"Skipping this check dir: %s, since this is a periodic \"\n                    \"hook\" %\n                    file_dir)\n                continue\n\n#            pbs.logmsg(pbs.EVENT_DEBUG3, \"Dir: %s\\tUser: %s\" %\n#                       (file_dir, str(self.nhc_cfg[\"as_user_operations\"]\n#                        [\"touch_files\"][file_dir_orig][0])))\n#            pbs.logmsg(pbs.EVENT_DEBUG3,\"Job User: %s\"%(self.user))\n\n            try:\n                new_file_dir = ''\n                if file_dir.startswith('$'):\n                    # I need to flesh out how to best handle this.\n                    # It will require looking through the job environment\n                    # variables\n                    V = pbs.event().job.Variable_List\n                    pbs.logmsg(pbs.EVENT_DEBUG3, \"Type(V): %s\" % (type(V)))\n                    pbs.logmsg(pbs.EVENT_DEBUG3, \"Job variable list: %s\" % (V))\n                    for var in V:\n                        pbs.logmsg(pbs.EVENT_DEBUG3,\n                                   \"var: %s, file_dir: %s\" % (var, file_dir))\n                        pbs.logmsg(pbs.EVENT_DEBUG3, \"V[var]: %s\" % (V[var]))\n                        if var.startswith(file_dir[1:]):\n                            new_file_dir = V[var]\n                            pbs.logmsg(pbs.EVENT_DEBUG3,\n                                       \"New dir: %s\" % (new_file_dir))\n                            break\n\n                    pass\n\n                # Check to see what user this test should be run as.\n                # Options: pbsuser or pbsadmin\n                status = ''\n                if 
(self.nhc_cfg[\"as_user_operations\"][\"touch_files\"]\n                    [file_dir_orig][0] == 'pbsadmin'):\n                    pbs.logmsg(pbs.EVENT_DEBUG3,\n                               \"TouchFileAsAdmin: %s\" % (file_dir))\n                    if new_file_dir != '':\n                        status = self.TouchFileAsUser(\n                            'root', new_file_dir, file_dir_orig)\n                    else:\n                        status = self.TouchFileAsUser(\n                            'root', file_dir, file_dir_orig)\n\n                elif (self.nhc_cfg[\"as_user_operations\"][\"touch_files\"]\n                      [file_dir_orig][0] == 'pbsuser'):\n                    # Check to see if check is to be written to a specific user\n                    # dir\n                    pbs.logmsg(\n                        pbs.EVENT_DEBUG3,\n                        \"TouchFileAsUser: User: %s, Dir: %s\" %\n                        (self.user, file_dir))\n                    if file_dir.find('<userid>') != -1:\n                        file_dir = file_dir.replace('<userid>', self.user)\n\n                    # Try to touch the file\n                    if new_file_dir != '':\n                        status = self.TouchFileAsUser(\n                            self.user, new_file_dir, file_dir_orig)\n                    else:\n                        status = self.TouchFileAsUser(\n                            self.user, file_dir, file_dir_orig)\n                else:\n                    pbs.logmsg(\n                        pbs.EVENT_DEBUG,\n                        \"Unknown User: %s. 
Please specify either pbsadmin or \"\n                        \"pbsuser\" %\n                        (str(\n                            self.nhc_cfg[\"as_user_operations\"][\"touch_files\"]\n                            [file_dir_orig][0])))\n                    return [\n                        self.nhc_cfg[\"as_user_operations\"][\"touch_files\"]\n                        [file_dir_orig][1],\n                        \"Unknown User: %s. Please specify either pbsadmin or \"\n                        \"pbsuser\" %\n                        (str(\n                            self.nhc_cfg[\"as_user_operations\"][\"touch_files\"]\n                            [file_dir_orig][0]))]\n\n                if not status:\n                    return status\n\n            except OSError:\n                return [self.nhc_cfg[\"as_user_operations\"][\"touch_files\"][\n                    file_dir_orig][1], 'Can not find file/dir: %s' % file_dir]\n            except Exception as e:\n                return [\n                    self.nhc_cfg[\"as_user_operations\"][\"touch_files\"]\n                    [file_dir_orig][1],\n                    'Encountered an error %s for file/dir: %s' % (e, file_dir)\n                ]\n                # return False\n\n        return True\n\n    def TouchFileAsUser(self, user, file_dir, file_dir_orig):\n        # file_dir_orig is needed to access the \"Warn\" or \"Offline\" information\n        # for the file/directory in question from the config file when variable\n        # substitution has taken place\n        # Define the child var\n        child = 0\n        user_data = None\n\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"User name: %s\\tFile dir: %s\" %\n                   (user, file_dir))\n        try:\n            # user_data = getpwnam(self.user)\n            user_data = getpwnam(user)\n\n            if file_dir.find('<user_home>') != -1:\n                file_dir = file_dir.replace('<user_home>', user_data[5])\n\n            
pbs.logmsg(pbs.EVENT_DEBUG3, \"User name: %s\\tDir to write to: %s\" %\n                       (user_data[4], file_dir))\n\n# This is a special case where the user account does not exist on the\n# node.  Offlining here is a good alternative to the job failing to run 20\n# times and being held, but it can be changed if desired\n        except KeyError:\n            pbs.logmsg(pbs.EVENT_DEBUG, \"Unable to find user: %s\" % user)\n            #\n            return ['Offline', 'unable to find user: %s' % user]\n\n        # Fork the process for touching a file as the user\n        r, w = os.pipe()\n\n        pid = os.fork()\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"pid: %d\" % pid)\n\n        if pid:\n            # We are the parent\n            os.close(w)\n\n            r = os.fdopen(r)  # turn r into a file object\n\n            child = pid\n\n            pbs.logmsg(pbs.EVENT_DEBUG3,\n                       \"Ready to read from the child process: %d\" % pid)\n            lines = r.read()\n\n            pbs.logmsg(pbs.EVENT_DEBUG3, lines)\n            # Wait for the child process to complete\n            os.waitpid(child, 0)\n\n            # Close the pipes\n            r.close()\n\n            # Check to see if the file was successfully touched\n            if lines.find('Successfully touched file') == - \\\n                    1 or lines.find('Failed to remove file') != -1:\n                pbs.logmsg(\n                    pbs.EVENT_DEBUG3,\n                    \"Failed to touch/remove file in %s as %s\" %\n                    (file_dir, user))\n                return [\n                    self.nhc_cfg[\"as_user_operations\"][\"touch_files\"]\n                    [file_dir_orig][1],\n                    'Failed to touch/remove file for %s in %s' %\n                    (user,\n                     file_dir)]\n            else:\n                pbs.logmsg(\n                    pbs.EVENT_DEBUG3,\n                    \"Successfully touched and removed file for %s in %s\" 
%\n                    (user, file_dir))\n\n        else:\n            try:\n                # Close the reading pipe\n                os.close(r)\n\n                # Turn w into a file object\n                w = os.fdopen(w, 'w')\n\n                # Switch to the user\n                w.write(\"Ready to switch to user: %s\\tuid: %s\\n\" %\n                        (user, user_data[2]))\n                os.setuid(user_data[2])\n\n                # Change to the user home dir\n                w.write(\"Changing dir to: %s\\n\" % (file_dir))\n                if os.path.isdir(file_dir):\n                    os.chdir(file_dir)\n\n                    # Touch a file in the user's home directory\n                    touch_file_name = (\n                        \"__user_%s_jobid_%s_host_%s_pbs_test.txt\" %\n                        (user, self.job_id, self.host))\n                    w.write(\"Ready to touch file: %s\\n\" % (touch_file_name))\n                    touchFileSuccess = self.TouchFile(touch_file_name)\n\n                    if touchFileSuccess:\n                        w.write(\"Successfully touched file\\n\")\n\n                    try:\n                        os.remove(touch_file_name)\n                    except OSError:\n                        w.write(\"Failed to remove file: %s\" % touch_file_name)\n                    except Exception as e:\n                        w.write(\"Remove file exception: %s\\n\" % (e))\n                else:\n                    w.write(\"%s does not appear to be a directory\" % file_dir)\n\n            except Exception as e:\n                w.write(\"Exception: %s\\n\" % (e))\n            finally:\n                # Close the pipe\n                w.close()\n                # Exit the child thread\n                os._exit(0)\n\n        return True\n\n    def TouchFile(self, fname, times=None):\n        try:\n            open(fname, 'a').close()\n            os.utime(fname, times)\n            return True\n        except 
IOError:\n            pbs.logmsg(pbs.EVENT_DEBUG3, \"Failed to touch file: %s\" % (fname))\n            return False\n\n    def CheckNode(self):\n        # Setup the fail counter\n        failCnt = 0\n\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"Ready to check the mounts\")\n        if not self.ContinueChk(self.ChkMountPoints()):\n            failCnt += 1\n\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"Ready to check the disk usage\")\n        if not self.ContinueChk(self.ChkDiskUsage()):\n            failCnt += 1\n\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"Ready to check the file permissions\")\n        if not self.ContinueChk(self.ChkDirFilePermissions()):\n            failCnt += 1\n\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"Ready to check the processes\")\n        if not self.ContinueChk(self.ChkProcesses()):\n            failCnt += 1\n\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"Ready to touch file as user\")\n        if not self.ContinueChk(self.ChkTouchFileAsUser()):\n            failCnt += 1\n\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"Exiting CheckNode function\")\n\n        return failCnt\n\n    def CheckNodePeriodic(self):\n        # Setup the fail counter\n        failCnt = 0\n\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"Ready to perform the periodic node check\")\n\n        # Run block of code with timeouts\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"Ready to check the mounts\")\n        if not self.ContinueChk(self.ChkMountPoints()):\n            failCnt += 1\n\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"Ready to check the disk usage\")\n        if not self.ContinueChk(self.ChkDiskUsage()):\n            failCnt += 1\n\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"Ready to check the file permissions\")\n        if not self.ContinueChk(self.ChkDirFilePermissions()):\n            failCnt += 1\n\n        pbs.logmsg(pbs.EVENT_DEBUG3, \"Exiting CheckNodePeriodic function\")\n\n        return failCnt\n\n    def CheckOfflineNode(self):\n\n        failCnt = self.CheckNodePeriodic()\n\n        if failCnt == 0:\n            localtime = 
time.asctime(time.localtime(time.time()))\n            self.ContinueChk(\n                ['Online', 'Passed the periodic test at %s' % localtime])\n        return True\n\n    def ContinueChk(self, status, comment=''):\n        if isinstance(status, list):\n            comment = str(status[1])\n            status = status[0].lower()\n        elif not isinstance(status, bool):\n            status = status.lower()\n\n        # Check to see how to handle the status\n        pbs.logmsg(pbs.EVENT_DEBUG3, 'Status: %s\\tComment: %s' %\n                   (status, comment))\n        if not status:\n            return False\n        elif status == 'warn':\n            pbs.logmsg(pbs.EVENT_DEBUG, 'WARNING: %s' % comment)\n            return True\n        elif status == 'offline' or status == 'reboot':\n            pbs.logmsg(pbs.EVENT_DEBUG, \"Status: %s\\tComment: %s\" %\n                       (status, comment))\n            # Get the node, offline it,\n            pbs.logmsg(pbs.EVENT_DEBUG, \"Offline node: %s\" % (self.host))\n            myvnode = pbs.event().vnode_list[self.host]\n            myvnode.state = pbs.ND_OFFLINE\n            pbs.logmsg(\n                pbs.EVENT_DEBUG,\n                \"Offline node type: %s, comment: %s\" %\n                (type(\n                    str(comment)),\n                    comment))\n            myvnode.comment = \"-attn_nhc: \" + comment\n            # pbs.logmsg(\n            #     pbs.EVENT_DEBUG, \"restart scheduler: %s %s\" %\n            #     (self.host, repr(myvnode.state)))\n            # pbs.server().scheduler_restart_cycle()\n\n            # Check to see if the node should be rebooted\n            if status == 'reboot':\n                pbs.logmsg(\n                    pbs.EVENT_DEBUG,\n                    \"Comment: %s\\nOfflined node: %s and rebooted\" %\n                    (comment, self.host))\n                pbs.event().job.rerun()\n                pbs.reboot('reboot')\n\n                # Run this command 
if the node is rebooted\n                # The event().reject function ends the script\n                pbs.logmsg(\n                    pbs.EVENT_DEBUG,\n                    \"Comment: %s\\nOfflined node: %s and restarted scheduling \"\n                    \"cycle\" %\n                    (comment,\n                     self.host))\n                pbs.event().reject(\"Offlined node, sent the reboot signal, \"\n                                   \"and restarted scheduling cycle\")\n\n            # Reject the job\n            pbs.event().reject(\"Offlined node and restarted scheduling cycle\")\n\n        elif status == 'online':\n            pbs.logmsg(pbs.EVENT_DEBUG, \"Onlined node: %s\" % (self.host))\n            mynodename = pbs.get_local_nodename()\n            myvnode = pbs.event().vnode_list[mynodename]\n            mynodename = pbs.get_local_nodename()\n            pbs.logmsg(pbs.EVENT_DEBUG3, \"got node: %s\" % (mynodename))\n            myvnode.state = pbs.ND_FREE\n            pbs.logmsg(pbs.EVENT_DEBUG,\n                       \"Changed node state to ND_FREE: %s\" % (mynodename))\n            myvnode.comment = None\n            pbs.logmsg(pbs.EVENT_DEBUG, \"Onlined node: %s\" % (mynodename))\n\n        else:\n            return True\n\n\nif __name__ == \"__builtin__\":\n    start = time.time()\n    pbs.logmsg(pbs.EVENT_DEBUG3, \"Starting the node health check\")\n    c = NodeHealthCheck()\n\n    if pbs.event().type == pbs.EXECHOST_PERIODIC:\n        vnode = pbs.server().vnode(c.host)\n        if vnode.state == pbs.ND_OFFLINE and vnode.comment.startswith(\n                '-attn_nhc:'):\n            # Still need to flesh out CheckOfflineNode function\n            c.CheckOfflineNode()\n        else:\n            c.CheckNodePeriodic()\n    else:\n        c.CheckNode()\n\n    pbs.logmsg(pbs.EVENT_DEBUG3, \"Finished check disk hook: %0.5lf (s)\" %\n               (time.time() - start))\n"
  },
  {
    "path": "src/unsupported/README",
    "content": "The material in this directory is unsupported.  It is provided as is,\nand its inclusion with the distribution of PBS does not\nimply any warranty or support.  Scripts, programs, etc. in this\ndirectory are provided as examples and/or for diagnostic purposes.\n\nThe scripts, programs, etc. included here may or may not run on all\nsupported platforms.  The behavior of scripts, programs, etc. in this\ndirectory may change without notice with a new release.\n\nThe material in this directory may or may not have documentation.  The\nmaterial in this directory may or may not contain proper commenting.\n\nThe material in this directory may appear or disappear without notice\nwith a new release.\n"
  },
  {
    "path": "src/unsupported/ReliableJobStartup.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n# ReliableJobStartup.py:\n#\n# A job is started and one or more of the sister moms in a cluster fails to\n# join the job due to a reject from an execjob_begin hook or a reject from an\n# execjob_prologue hook, and that sister mom goes offline.\n# See NodeHealthCheck.py, which might be used for this purpose.\n#\n# Using the reliable job startup feature, if the job's tolerate_node_failures\n# is set to \"job_start\" (or \"all\"), the job's original select spec is\n# expanded in increment_chunks() in a queuejob event.\n#\n# If run by the primary mom, in an execjob_launch event the list of selected\n# nodes for the job and the ones that failed after the job started are logged\n# and release_nodes(keep_select) will use good nodes to satisfy the select\n# spec.  The failed nodes are offlined.  
The 's' accounting record is generated.\n\n# To register the hook, as root via qmgr:\n# qmgr << RJS\n# create hook rjs_hook\n# set hook rjs_hook event = 'queuejob,execjob_launch'\n# set hook rjs_hook enabled = true\n# import hook rjs_hook application/x-python default ReliableJobStartup.py\n# RJS\n\nimport pbs\ne = pbs.event()\n\nif e.type == pbs.QUEUEJOB:\n    # add a log entry in server logs\n    pbs.logmsg(pbs.LOG_DEBUG, \"queuejob hook executed\")\n    e.job.tolerate_node_failures = \"job_start\"\n\n    # Save current select spec in resource 'site'\n    selspec = e.job.Resource_List[\"select\"]\n    if selspec is None:\n        e.reject(\"Event job does not have select spec!\")\n    e.job.Resource_List[\"site\"] = str(selspec)\n\n    # increment_chunks() can use a percentage argument or an integer. For\n    # example add 1 chunk to each chunk (except the first) in the job's\n    # select spec\n    new_select = selspec.increment_chunks(1)\n    e.job.Resource_List[\"select\"] = new_select\n    pbs.logmsg(pbs.LOG_DEBUG, \"job's select spec changed to %s\" % new_select)\n\nelif e.type == pbs.EXECJOB_LAUNCH:\n    # PBS_TASKNUM exists on primary Mom when executing launch hook, has value:\n    # 1  - for the first time when launching top-level shell, or\n    # >1 - for the spawned tasks servicing TM_SPAWN requests\n    if not e.job.in_ms_mom() or (\n            ('PBS_TASKNUM' in e.env) and (int(e.env['PBS_TASKNUM']) > 1)):\n        e.accept()\n    # add a log entry in primary mom logs\n    pbs.logmsg(pbs.LOG_DEBUG, \"Executing launch\")\n\n    # print out the vnode_list[] values\n    for vn in e.vnode_list:\n        v = e.vnode_list[vn]\n        pbs.logjobmsg(e.job.id, \"launch: found vnode_list[\" + v.name + \"]\")\n\n    # print out the vnodes in vnode_list_fail[] and offline them\n    for vn in e.vnode_list_fail:\n        v = e.vnode_list_fail[vn]\n        pbs.logjobmsg(\n            e.job.id, \"launch: found vnode_list_fail[\" + v.name + \"]\")\n        v.state = 
pbs.ND_OFFLINE\n\n    # prune the job's vnodes to satisfy the select spec in resource 'site'\n    # and vnodes in vnode_list_fail[] are not used.\n    pj = e.job.release_nodes(keep_select=e.job.Resource_List[\"site\"])\n    if pj is None:\n        e.job.Hold_Types = pbs.hold_types(\"s\")\n        e.job.rerun()\n        e.reject(\"unsuccessful at LAUNCH\")\n"
  },
  {
    "path": "src/unsupported/cray_readme",
    "content": "Contents\n--------\n\n\tpbs_output.py\n\nThis script is not intended to be run by users or Administrators.\nIt is run by the Cray RUR system as an \"output plugin\" that will\nwrite data specific to a PBS job to a well known path that PBS hooks\ncan access:\n\nPBS_HOME/spool/<jobid>.rur\n\nThe format of the file will be:\n\npluginName : apid : pluginOutput\n\nSee the Cray document: \"Managing System Software for the Cray®\nLinux Environment\" section 12.7 \"RUR Plugins\" for more\ninformation on the RUR plugin interface.\nhttp://docs.cray.com/books/S-2393-5101/S-2393-5101.pdf\n"
  },
  {
    "path": "src/unsupported/load_balance.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n# This is a periodic hook script that monitors the load average on the local\n# host, and offlines or frees the vnode representing the host depending on\n# the CPU load.\n#\n# A site can modify the \"ideal_load\" and \"max_load\" values below, so that:\n# if the system's CPU load average rises above \"max_load\", then the\n# vnode corresponding to the current host is offlined.\n# This prevents the scheduler from scheduling jobs on this vnode.\n#\n# If the system's CPU load average falls below the \"ideal_load\" value,\n# then the vnode representing the current host is set to free.\n# This ensures the scheduler can again schedule jobs on this vnode.\n#\n# To instantiate this hook, specify the following:\n#    qmgr -c \"create hook load_balance event=exechost_periodic,freq=10\"\n#    qmgr -c \"import hook load_balance application/x-python default\n#             load_balance.py\"\n\nimport os\nimport re\n\nimport pbs\n\nideal_load = 1.5\nmax_load = 2.0\n\n# get_la: returns the list of load averages for the past 1-minute, 5-minute,\n#         and 15-minute intervals.\ndef get_la():\n    line = os.popen(\"uptime\").read()\n    r = re.search(r'load average: (\\S+), (\\S+), (\\S+)$', line).groups()\n    return list(map(float, r))\n\nlocal_node = pbs.get_local_nodename()\n\nvnl = pbs.event().vnode_list\ncurrent_state = pbs.server().vnode(local_node).state\nmla = get_la()[0]\nif (mla >= 
max_load) and ((current_state & pbs.ND_OFFLINE) == 0):\n    vnl[local_node].state = pbs.ND_OFFLINE\n    vnl[local_node].comment = \"offlined node as it is heavily loaded\"\nelif (mla < ideal_load) and ((current_state & pbs.ND_OFFLINE) != 0):\n    vnl[local_node].state = pbs.ND_FREE\n    vnl[local_node].comment = None\n"
  },
  {
    "path": "src/unsupported/mom_dyn_res.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n# A hook script that periodically updates the values of a set of\n# custom resources for the vnode representing the current mom.\n#\n# The current set includes 2 size types: scratch, home\n#\n# Prerequisites:\n#\n#    1. 
Define the following custom resources in server's resourcedef file, and\n#       restart pbs_server.\n#\n#       % cat PBS_HOME/server_priv/resourcedef\n#           scratch type=size flag=nh\n#           home type=size flag=nh\n#\n#    2. Add the new resources to the \"resources:\" line in sched_config file and\n#       restart pbs_sched:\n#\n#       % cat PBS_HOME/sched_priv/sched_config\n#           resources: ncpus, mem, arch, [...], scratch, home\n#\n#    3. Install this hook as:\n#       qmgr -c \"create hook mom_dyn_res event=exechost_periodic,freq=30\"\n#       qmgr -c \"import hook mom_dyn_res application/x-python default\n#                mom_dyn_res.py\"\n#\n# NOTE:\n#    Update the dyn_res[] array below to include any other custom resources\n#    to be included in the updates. Ensure that each resource added has an\n#    entry in the server's resourcedef file and scheduler's sched_config file.\n\nimport os\n\nimport pbs\n\n\n# get_filesystem_avail_unprivileged: returns available size in kbytes\n# (in pbs.size type) to unprivileged users, of the filesystem where 'dirname'\n# resides.\n#\ndef get_filesystem_avail_unprivileged(dirname):\n    o = os.statvfs(dirname)\n    # integer division keeps the value a whole number of kbytes\n    return pbs.size(\"%dkb\" % ((o.f_bsize * o.f_bavail) // 1024))\n\n# get_filesystem_avail_privileged: returns available size in kbytes\n# (in pbs.size type) to privileged users, of the filesystem where 'dirname'\n# resides.\n#\ndef get_filesystem_avail_privileged(dirname):\n    o = os.statvfs(dirname)\n    return pbs.size(\"%dkb\" % ((o.f_bsize * o.f_bfree) // 1024))\n\n\n# Define here the custom resources as key, and the function and its argument\n# for obtaining the value of the custom resource:\n#    Format: dyn_res[<resource_name>] = [<function_name>, <function_argument>]\n# So \"<function_name>(<function_argument>)\" is called to return the value\n# for custom <resource_name>.\ndyn_res = {}\ndyn_res[\"scratch\"] = [get_filesystem_avail_unprivileged, \"/tmp\"]\ndyn_res[\"home\"]    
= [get_filesystem_avail_unprivileged, \"/home\"]\n\nvnl = pbs.event().vnode_list\nlocal_node = pbs.get_local_nodename()\n\nfor k in list(dyn_res.keys()):\n    vnl[local_node].resources_available[k] = dyn_res[k][0](dyn_res[k][1])\n"
  },
  {
    "path": "src/unsupported/pbs-mailer/README.md",
    "content": "### pbs-mailer\n\nOpenPBS can easily send a huge number of emails to a single user/address. pbs-mailer is a service that aggregates the emails OpenPBS sends to the same user/address. The first email is sent immediately; subsequent emails are gathered for a configurable time period and then squashed into a single email.\n\n### Building package\n\n* RPM package: run 'release-rpm.sh'\n* DEB package: run 'release-deb.sh'\n\n### Manual installation\n\nMove pbs_mail.json, pbs_mail_saver, and pbs_mail_sender into an appropriate location.\n\n### Configure the mailer\n\nThe configuration file is pbs_mail.json. This file is located in /opt/pbs/etc/ by default.\n\n* pidfile - the daemon's PID file location\n* sqlite_db - sqlite database location\n* sendmail - path to sendmail\n* gathering_period - the period during which emails are gathered\n* mailer_cycle_sleep - the length of sleep before the next sender cycle begins\n* add_servername - add the server name to the email\n* send_begin_immediately - if true, the notification of a beginning job or reservation is sent immediately (together with already gathered emails) - the gathering_period is shortened\n\n### Configure the pbs_server\n\n* Set the server attribute 'mailer' to 'pbs_mail_saver'. E.g.: 'set server mailer = /opt/pbs/bin/pbs_mail_saver'\n* Start the pbs-mailer (pbs_mail_sender) service.\n\n"
  },
  {
    "path": "src/unsupported/pbs-mailer/debian/changelog",
    "content": "pbs-mailer (1.0-1) unstable; urgency=low\n\n  * initial package\n\n -- Altair Engineering, Inc. <https://www.openpbs.org/>  Mon, 14 Sep 2020 11:30:00 +0100\n"
  },
  {
    "path": "src/unsupported/pbs-mailer/debian/compat",
    "content": "9\n"
  },
  {
    "path": "src/unsupported/pbs-mailer/debian/conffiles",
    "content": "/opt/pbs/etc/pbs_mail.json\n"
  },
  {
    "path": "src/unsupported/pbs-mailer/debian/control",
    "content": "Source: pbs-mailer\nSection: unknown\nPriority: optional\nMaintainer: Altair Engineering, Inc. <https://www.openpbs.org/>\nBuild-Depends: debhelper\nStandards-Version: 3.8.4\n \nPackage: pbs-mailer\nDepends: python3\nSection: utils\nArchitecture: all\nDescription: Mailing service for PBS software.\n"
  },
  {
    "path": "src/unsupported/pbs-mailer/debian/pbs-mailer.service",
    "content": "[Unit]\nDescription=pbs-mailer\n\n[Service]\nType=forking\nExecStart=/opt/pbs/bin/pbs_mail_sender start\nExecStop=/opt/pbs/bin/pbs_mail_sender stop\n\n[Install]\nWantedBy=multi-user.target\n"
  },
  {
    "path": "src/unsupported/pbs-mailer/debian/rules",
    "content": "#!/usr/bin/make -f\n\n%:\n\tdh $@ --with-systemd\n\noverride_dh_auto_install:\n\tmkdir -p $(CURDIR)/debian/pbs-mailer/opt/pbs/bin\n\tmkdir -p $(CURDIR)/debian/pbs-mailer/opt/pbs/etc\n\tcp $(CURDIR)/pbs_mail.json $(CURDIR)/debian/pbs-mailer/opt/pbs/etc/\n\tcp $(CURDIR)/pbs_mail_saver $(CURDIR)/debian/pbs-mailer/opt/pbs/bin/\n\tcp $(CURDIR)/pbs_mail_sender $(CURDIR)/debian/pbs-mailer/opt/pbs/bin/\n\tdh_auto_install\n\tdh_systemd_enable || true\n\tdh_systemd_start || true\n\nclean:\n\tdh_testdir\n\tdh_testroot\n\tdh_clean\n"
  },
  {
    "path": "src/unsupported/pbs-mailer/pbs_mail.json",
    "content": "{\n\t\"pidfile\": \"/var/spool/pbs/pbs_mail.pid\",\n\t\"sqlite_db\": \"/var/spool/pbs/pbs_mail.sqlite\",\n\t\"sendmail\": \"/usr/sbin/sendmail\",\n\t\"gathering_period\": 1800,\n\t\"mailer_cycle_sleep\": 60,\n\t\"add_servername\": true,\n\t\"send_begin_immediately\": false\n}\n"
  },
  {
    "path": "src/unsupported/pbs-mailer/pbs_mail_saver",
    "content": "#!/usr/bin/env python3\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nimport sys\nimport time\nimport sqlite3\nimport json\nimport os\n\n\n#\n# CLASS saver\n#\nclass PBS_mail_saver(object):\n    \"\"\"\n    Email saver class\n    \"\"\"\n\n    sqlite_db = \"/var/spool/pbs/pbs_mail.sqlite\"\n    tb_name_emails = \"pbs_emails\"\n\n    email_from = \"adm\"\n    email_to = \"\"\n    email_subject = \"\"\n    email_body = []\n\n    def __init__(self):\n        config = {}\n        try:\n            config_file = \"pbs_mail.json\"\n            paths = []\n\n            abspath = os.path.dirname(os.path.abspath(__file__))\n            paths.append(os.path.join(abspath, config_file))\n            paths.append(os.path.join(abspath, '..', 'etc', config_file))\n            paths.append(os.path.join('/etc', config_file))\n            paths.append(os.path.join('/opt', 'pbs', 'etc', config_file))\n\n            for path in paths:\n                if os.path.isfile(path):\n                    config_file = path\n                    break\n\n            # config_file now holds the first existing path from the list\n            f = open(config_file)\n            config = json.load(f)\n            f.close()\n\n            self.sqlite_db = config[\"sqlite_db\"]\n\n        except Exception as err:\n            print(\"Failed to load configuration: %s\" % err)\n            sys.exit(1)\n\n        if len(sys.argv) > 1:\n\n            pos = 1\n            if sys.argv[1] == \"-f\" and len(sys.argv) >= 3:\n 
               self.email_from = sys.argv[2]\n                pos = 3\n            if len(sys.argv) > pos:\n                self.email_to = sys.argv[pos]\n\n    def read_mail(self):\n        \"\"\"\n        Read email from stdin\n        \"\"\"\n\n        data = sys.stdin.readlines()\n\n        for line in data:\n            if not line.strip():\n                continue\n\n            if line.startswith(\"To: \"):\n                self.email_to = line[4:].strip()\n            elif line.startswith(\"Subject: \"):\n                self.email_subject = line[9:].strip()\n            else:\n                self.email_body.append(line.strip())\n\n    def save_mail_db(self):\n        \"\"\"\n        Save email into sqlite database\n        \"\"\"\n\n        now = int(time.time())\n\n        try:\n            conn = sqlite3.connect(self.sqlite_db)\n            c = conn.cursor()\n        except Exception as err:\n            print(str(err))\n            return\n\n        req = \"SELECT name FROM sqlite_master \\\n              WHERE type='table' AND name='%s'\"\\\n              % self.tb_name_emails\n        c.execute(req)\n        if c.fetchone() is None:\n            req = \"CREATE TABLE %s ( \\\n                  date integer, \\\n                  email_to text, \\\n                  email_from text, \\\n                  subject text, \\\n                  body text)\" % self.tb_name_emails\n            c.execute(req)\n\n        req = \"INSERT INTO %s \\\n              VALUES (%d, '%s', '%s', '%s', '%s')\" % (\n              self.tb_name_emails,\n              now, self.email_to,\n              self.email_from,\n              self.email_subject,\n              \"\\n\".join(self.email_body))\n        c.execute(req)\n\n        conn.commit()\n        conn.close()\n\n\nif __name__ == \"__main__\":\n    saver = PBS_mail_saver()\n    saver.read_mail()\n    saver.save_mail_db()\n"
  },
  {
    "path": "src/unsupported/pbs-mailer/pbs_mail_sender",
    "content": "#!/usr/bin/env python3\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nimport sys\nimport time\nimport sqlite3\nimport subprocess\nimport socket\nimport json\nimport os\nimport atexit\nimport re\nfrom signal import SIGTERM\n\n\n#\n# CLASS Daemon\n#\nclass Daemon(object):\n    \"\"\"\n    Daemon class\n    \"\"\"\n\n    sleeptime = 5\n\n    def __init__(self, pidfile, debug):\n        self.pidfile = pidfile\n        self.debug = debug\n\n    def daemonize(self):\n        \"\"\"\n        Daemonize me\n        \"\"\"\n\n        try:\n            pid = os.fork()\n            if pid > 0:\n                sys.exit(0)\n        except Exception as err:\n            print(\"fork failed: %s\" % err)\n            sys.exit(1)\n\n        os.chdir(\"/\")\n        os.setsid()\n        os.umask(0)\n\n        atexit.register(self.delPID)\n        pid = str(os.getpid())\n        open(self.pidfile, 'w+').write(\"%s\\n\" % pid)\n\n    def delPID(self):\n        \"\"\"\n        Delete PID file\n        \"\"\"\n\n        try:\n            os.remove(self.pidfile)\n        except OSError:\n            pass\n\n    def status(self):\n        \"\"\"\n        Check status of daemon\n        \"\"\"\n\n        try:\n            f = open(self.pidfile, 'r')\n            pid = int(f.read().strip())\n            f.close()\n        except IOError:\n            return None\n\n        try:\n            os.kill(pid, 0)\n        except Exception:\n            
self.delPID()\n            return None\n\n        return pid\n\n    def start(self):\n        \"\"\"\n        Daemonize and start main process: run()\n        \"\"\"\n\n        pid = self.status()\n\n        if pid:\n            print(\"Daemon already running with pid %d\" % pid)\n            sys.exit(1)\n\n        if not self.debug:\n            if self.daemonize():\n                sys.exit(1)\n\n        try:\n            self.run()\n        except Exception as err:\n            if self.debug:\n                print(\"run() failed: %s\" % err)\n            return(1)\n\n        return(0)\n\n    def stop(self):\n        \"\"\"\n        Stop process\n        \"\"\"\n\n        try:\n            f = open(self.pidfile, 'r')\n            pid = int(f.read().strip())\n            f.close()\n        except IOError:\n            pid = None\n\n        if not pid:\n            return(0)\n\n        try:\n            while True:\n                os.kill(pid, SIGTERM)\n                time.sleep(0.1)\n        except OSError as err:\n            err = str(err)\n            if err.find(\"No such process\") > 0:\n                if os.path.exists(self.pidfile):\n                    self.delPID()\n            else:\n                return(1)\n\n        return(0)\n\n    def restart(self):\n        \"\"\"\n        Restart process - stop and start again\n        \"\"\"\n\n        if self.stop() == 0:\n            return self.start()\n\n        return 1\n\n    def run(self):\n        \"\"\"\n        Main loop\n        \"\"\"\n\n        while True:\n            time.sleep(self.sleeptime)\n\n\n#\n# CLASS sender\n#\nclass PBS_mail_sender(Daemon):\n    \"\"\"\n    Email sender class\n    \"\"\"\n\n    pidfile = \"/var/spool/pbs/pbs_mail.pid\"\n    sqlite_db = \"/var/spool/pbs/mail.sqlite\"\n    tb_name_emails = \"pbs_emails\"\n    tb_name_timestamps = \"pbs_users_timestamp\"\n    gathering_period = 1800\n    mailer_cycle_sleep = 60\n    sendmail = \"/usr/sbin/sendmail\"\n    add_servername 
= True\n    send_begin_immediately = False\n\n    def __init__(self, debug):\n        config = {}\n        try:\n            config_file = \"pbs_mail.json\"\n            paths = []\n\n            abspath = os.path.dirname(os.path.abspath(__file__))\n            paths.append(os.path.join(abspath, config_file))\n            paths.append(os.path.join(abspath, '..', 'etc', config_file))\n            paths.append(os.path.join('/etc', config_file))\n            paths.append(os.path.join('/opt', 'pbs', 'etc', config_file))\n\n            for path in paths:\n                if os.path.isfile(path):\n                    config_file = path\n                    break\n\n            # config_file now holds the first existing path from the list\n            f = open(config_file)\n            config = json.load(f)\n            f.close()\n\n            self.pidfile = config[\"pidfile\"]\n            self.sqlite_db = config[\"sqlite_db\"]\n            self.gathering_period = config[\"gathering_period\"]\n            self.mailer_cycle_sleep = config[\"mailer_cycle_sleep\"]\n            self.sendmail = config[\"sendmail\"]\n            self.add_servername = config[\"add_servername\"]\n            self.send_begin_immediately = \\\n                config[\"send_begin_immediately\"]\n\n        except Exception as err:\n            print(\"Failed to load configuration: %s\" % err)\n            sys.exit(1)\n\n        super(PBS_mail_sender, self).__init__(self.pidfile, debug)\n\n    def db_delete_emails(self, c):\n        \"\"\"\n        Delete 'emails_to_delete' from sqlite db\n        \"\"\"\n\n        for rowid in self.emails_to_delete:\n            c.execute(\"DELETE FROM '%s' WHERE rowid == %d\" % (\n                self.tb_name_emails,\n                rowid))\n        self.emails_to_delete = []\n\n    def send_mail(self, email_to, email_from, subject, body):\n        \"\"\"\n        Real email sending\n        \"\"\"\n\n        # pass the envelope sender as its own argument to sendmail\n        p = subprocess.Popen([self.sendmail, '-f', email_from, email_to],\n                             
stdout=subprocess.PIPE,\n                             stdin=subprocess.PIPE,\n                             stderr=subprocess.STDOUT)\n\n        email_input = \"To: %s\\n\" % email_to\n        email_input += \"Subject: %s\\n\\n\" % subject\n        email_input += body\n        p.communicate(input=str.encode(email_input))\n\n    def run(self):\n        \"\"\"\n        Main\n        \"\"\"\n\n        while True:\n            now = int(time.time())\n\n            try:\n                conn = sqlite3.connect(self.sqlite_db)\n\n                def regexp(expr, item):\n                    reg = re.compile(expr)\n                    return reg.search(item) is not None\n                conn.create_function(\"REGEXP\", 2, regexp)\n\n                c = conn.cursor()\n                req = \"SELECT name FROM sqlite_master \\\n                      WHERE type='table' AND name='%s'\" % \\\n                      self.tb_name_emails\n                c.execute(req)\n                if c.fetchone() is None:\n                    conn.commit()\n                    conn.close()\n                    time.sleep(self.mailer_cycle_sleep)\n                    continue\n            except Exception as err:\n                print(str(err))\n                conn.commit()\n                conn.close()\n                time.sleep(self.mailer_cycle_sleep)\n                continue\n\n            emails_to_send = {}\n            self.emails_to_delete = []\n            threshold = now - self.gathering_period\n\n            if self.send_begin_immediately:\n                req = \"SELECT rowid, date, email_to, email_from, subject, body \\\n                      FROM '%s' \\\n                      WHERE body REGEXP \\\n                      '.*Begun execution.*|\\\n.*Reservation period starting.*'\" % self.tb_name_emails\n                for email in c.execute(req):\n                    (rowid,\n                        timestamp,\n                        email_to,\n                        
email_from,\n                        subject,\n                        body) = email\n\n                    if email_to not in emails_to_send.keys():\n                        emails_to_send[email_to] = []\n\n                    emails_to_send[email_to].append([email_from,\n                                                    subject,\n                                                    body])\n                    self.emails_to_delete.append(rowid)\n\n            self.db_delete_emails(c)\n\n            req = \"SELECT name FROM sqlite_master \\\n                  WHERE type='table' AND name='%s'\" % \\\n                  self.tb_name_timestamps\n            c.execute(req)\n            if c.fetchone() is None:\n                req = \"CREATE TABLE %s \\\n                      (date integer, recipient text UNIQUE)\" % \\\n                      self.tb_name_timestamps\n                c.execute(req)\n\n            recipients = {}\n\n            req = \"SELECT DISTINCT email_to FROM '%s'\" % self.tb_name_emails\n            for email_to, in c.execute(req):\n                recipients[email_to] = 1\n\n            req = \"SELECT date, recipient FROM '%s'\" % self.tb_name_timestamps\n            for timestamp, email_to in c.execute(req):\n                recipients[email_to] = timestamp\n\n            for i in recipients.keys():\n                req = \"SELECT rowid, date, email_to, email_from, subject, body \\\n                      FROM '%s' WHERE email_to = '%s'\" % (\n                      self.tb_name_emails,\n                      i)\n                for email in c.execute(req):\n                    (rowid,\n                        timestamp,\n                        email_to,\n                        email_from,\n                        subject,\n                        body) = email\n\n                    is_time = recipients[email_to] < \\\n                        now - self.gathering_period\n                    is_present = email_to in emails_to_send.keys()\n      
              if is_time or is_present:\n                        if email_to not in emails_to_send.keys():\n                            emails_to_send[email_to] = []\n\n                        emails_to_send[email_to].append([\n                            email_from,\n                            subject,\n                            body])\n                        self.emails_to_delete.append(rowid)\n\n            self.db_delete_emails(c)\n\n            for email_to in emails_to_send.keys():\n                req = \"SELECT recipient FROM '%s' \\\n                      WHERE recipient='%s'\" % (\n                      self.tb_name_timestamps,\n                      email_to)\n                c.execute(req)\n                if c.fetchone() is None:\n                    req = \"INSERT INTO %s VALUES (%d, '%s')\" % (\n                          self.tb_name_timestamps,\n                          now,\n                          email_to)\n                else:\n                    req = \"UPDATE '%s' SET date = %d \\\n                          WHERE recipient = '%s'\" % (\n                          self.tb_name_timestamps,\n                          now,\n                          email_to)\n                c.execute(req)\n\n            conn.commit()\n            conn.close()\n\n            for email_to in emails_to_send.keys():\n                email_from = \"\"\n                email_body = \"\"\n                subject = \"PBS report\"\n                subjects = []\n\n                for email in emails_to_send[email_to]:\n                    email_from = email[0]\n                    if self.add_servername:\n                        email_body += email[1] \\\n                                      + \"@\" \\\n                                      + socket.gethostname()\\\n                                      + \"\\n\\t\"\n                    else:\n                        email_body += email[1] + \"\\n\\t\"\n                    email_body += email[2].replace(\"\\n\", 
\"\\n\\t\") + \"\\n\\n\"\n                    subjects.append(email[1])\n\n                if len(subjects) == 1:\n                    subject = subjects[0]\n                else:\n                    is_job = False\n                    is_resv = False\n                    for s in subjects:\n                        if s.startswith(\"PBS JOB\"):\n                            is_job = True\n                        if s.startswith(\"PBS RESERVATION\"):\n                            is_resv = True\n                    subject = \"PBS \"\n                    if is_job:\n                        subject += \"JOB\"\n                    if is_job and is_resv:\n                        subject += \"|\"\n                    if is_resv:\n                        subject += \"RESERVATION\"\n                    subject += \" squashed report\"\n                    email_body = \"This e-mail is a squashed report of \" \\\n                                 + str(len(subjects)) \\\n                                 + \" e-mails from PBS.\\n\\n\" \\\n                                 + email_body\n                self.send_mail(email_to, email_from, subject, email_body)\n\n            time.sleep(self.mailer_cycle_sleep)\n\n\nif __name__ == \"__main__\":\n    debug = False\n    if \"--debug\" in sys.argv or \"-d\" in sys.argv:\n        debug = True\n\n    sender = PBS_mail_sender(debug)\n\n    if len(sys.argv) > 1:\n        if 'restart' in sys.argv:\n            if sender.stop() != 0 or sender.start() != 0:\n                print(\"Restarting failed\")\n\n        elif 'stop' in sys.argv:\n            if sender.stop() > 0:\n                print(\"Stopping failed\")\n\n        elif 'start' in sys.argv:\n            if sender.start() > 0:\n                print(\"Starting failed\")\n\n        else:\n            print(\"Unknown daemon command: %s\" % \" \".join(sys.argv))\n            print(\"usage: %s start|stop|restart [-d|--debug]\" % sys.argv[0])\n            sys.exit(1)\n\n        
sys.exit(0)\n\n    else:\n        print(\"usage: %s start|stop|restart [-d|--debug]\" % sys.argv[0])\n        sys.exit(1)\n"
  },
  {
    "path": "src/unsupported/pbs-mailer/release-deb.sh",
    "content": "#!/bin/bash\n\ndpkg-buildpackage -us -uc\n"
  },
  {
    "path": "src/unsupported/pbs-mailer/release-rpm.sh",
    "content": "#!/bin/bash\n\nVERSION=\"1.0\"\n\nrm pbs-mailer-$VERSION -rf\nmkdir -p pbs-mailer-$VERSION\n\ncp pbs-mailer.spec pbs-mailer-$VERSION/\ncp pbs_mail_saver pbs-mailer-$VERSION/\ncp pbs_mail_sender pbs-mailer-$VERSION/\ncp pbs_mail.json pbs-mailer-$VERSION/\ncp debian/pbs-mailer.service pbs-mailer-$VERSION/\n\ntar -cvzf pbs-mailer-${VERSION}.tar.gz pbs-mailer-$VERSION\nrm pbs-mailer-$VERSION -rf\n\nmkdir -p ~/rpmbuild/SOURCES/\nmv pbs-mailer-${VERSION}.tar.gz ~/rpmbuild/SOURCES/\n\nrpmbuild -ba pbs-mailer.spec\n"
  },
  {
    "path": "src/unsupported/pbs_config",
    "content": "#!/bin/bash\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\npbs_conf=${PBS_CONF_FILE:-/etc/pbs.conf}\nif [ -r \"${pbs_conf}\" ]; then\n\t. 
${pbs_conf}\nelse\n\techo \"***\" >&2\n\techo \"*** No PBS_CONF_FILE found, exiting\" >&2\n\techo \"***\" >&2\n\texit 1\nfi\n\nexport PYTHONPATH=${PBS_EXEC}/unsupported/fw:${PYTHONPATH}\n\nif [ -x \"${PBS_EXEC}/python/bin/python\" ]; then\n\t${PBS_EXEC}/python/bin/python ${PBS_EXEC}/unsupported/fw/bin/pbs_config.py \"${@}\"\nelse\n\tpython3 ${PBS_EXEC}/unsupported/fw/bin/pbs_config.py \"${@}\"\nfi\n"
  },
  {
    "path": "src/unsupported/pbs_jobs_at.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n\n.TH pbs_jobs_at 8B \"22 September 2015\" Local \"PBS\"\n.SH NAME\n.B pbs_jobs_at\n\\- tool to identify historical PBS jobs\n.SH SYNOPSIS\n.B pbs_jobs_at\n[-s <start date>]\n[-e <end date>]\n[-n <nodes>]\n.br\n\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\\n[-a <accounting logs directory>]\n[-w <max walltime>]\n.br\n\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\\n[-v]\n[-h]\n\n.SH DESCRIPTION\nThe\n.B pbs_jobs_at\ncommand allows you to identify historical jobs that were running on\nnode(s) during a certain period of time, perhaps because the cluster\nmanager identified possible issues and we need to know which jobs may\nhave been affected.\n\nThe tool reports jobs that were running on node(s) during a specified\nperiod of time. This means jobs that started or finished as well as\nwere running when the period started (jobs that spanned the period of\ntime). During testing this has highlighted a few old jobs that have\nfinished but don't have end records due to hardware/software issues\nthat meant PBS wasn't able to properly clean up.\n\nThe tool queries either the system accounting logs on the host it is\nrun on or a named directory containing copies of accounting logs.\n\nWhen specifying nodes of interest it is possible to define node ranges\nusing host[xx-yy], e.g. node[1-100] or node[01-09]. 
It is also\npossible to specify multiple node(s) as a comma-separated list, for\nexample node[1-10],node[20-29].\n\nAll arguments are optional. If no arguments are specified it will\nreport all jobs on all nodes in all the accounting logs.\n\nWhen giving a date and time for start or end the search for jobs is\naccurate to the second specified. This can be useful if an exact time\nis known for a historical issue.\n\n.SH OPTIONS\n.IP \"-s <start date>\" 10\nStart date/time of time period. If no start date specified, begins\nat first accounting log.\n\n.IP \"-e <end date>\" 10\nEnd date/time of time period. If no end date specified, finishes at\nlast accounting log.\n\n.IP \"-n <nodes>\" 10\nNodes of interest. If no nodes specified, reports jobs on all nodes.\n\n.IP \"-a <accounting logs directory>\" 10\nPath to directory containing accounting logs.  If no accounting logs\ndirectory specified, uses system logs in\n$PBS_HOME/server_priv/accounting\n\n.IP \"-w <max walltime>\" 10\nSpecify maximum job length to optimize search for jobs still running\nat <start date>.  If not specified, searches from first accounting log.\n\n.IP \"-v\" 10\nVerbose output. Display Nodes-Per-Job and Jobs-Per-Node.  If not\nspecified, displays list of jobs.\n\n.IP \"-h\"  10\nDisplays help message.\n"
  },
  {
    "path": "src/unsupported/pbs_loganalyzer",
    "content": "#!/bin/bash\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\npbs_conf=${PBS_CONF_FILE:-/etc/pbs.conf}\nif [ -r \"${pbs_conf}\" ]; then\n\t. 
${pbs_conf}\nelse\n\techo \"***\" >&2\n\techo \"*** No PBS_CONF_FILE found, exiting\" >&2\n\techo \"***\" >&2\n\texit 1\nfi\n\nexport PYTHONPATH=${PBS_EXEC}/unsupported/fw:${PYTHONPATH}\n\nif [ -x \"${PBS_EXEC}/python/bin/python\" ]; then\n\t${PBS_EXEC}/python/bin/python ${PBS_EXEC}/unsupported/fw/bin/pbs_loganalyzer.py \"${@}\"\nelse\n\tpython3 ${PBS_EXEC}/unsupported/fw/bin/pbs_loganalyzer.py \"${@}\"\nfi\n"
  },
  {
    "path": "src/unsupported/pbs_output.py",
    "content": "#!/usr/bin/env python3\n# coding: utf-8\n\"\"\"\n/*\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n *\n */\n\"\"\"\n__doc__ = \"\"\"\nThis script is not intended to be run by users or Administrators.\nIt is run by the Cray RUR system as an \"output plugin\".\n\"\"\"\n\nimport os\nimport sys\nrur_path = os.path.join(os.path.sep, 'opt', 'cray', 'rur', 'default', 'bin')\nif rur_path not in sys.path:\n    sys.path.append(rur_path)\ntry:\n    from rur_plugins import rur_output_args, get_plugin_name, rur_errorlog\nexcept Exception:\n    sys.stderr.write(\"Failed to import from rur_plugins\\n\")\n    raise ImportError\n\n\ndef outname(jobid):\n    # Create the pathname to write the RUR data for PBS.\n    # By default it will be \"/var/spool/pbs/spool/<jobid>.rur\"\n    home = \"PBS_HOME\"\n    dirname = \"/var/spool/pbs\"\n    if home in os.environ:\n        dirname = os.environ[home]\n    else:\n        conf = 'PBS_CONF_FILE'\n        if conf in os.environ:\n            confile = os.environ[conf]\n        else:\n            confile = '/etc/pbs.conf'\n        with open(confile, \"r\") as fp:\n            for line in fp:\n                line = line.strip()\n                if line == \"\":\n                    continue\n                var, _, val = line.partition('=')\n                if var == home:\n                    dirname = val\n                    break\n    dirname = os.path.join(dirname, \"spool\")\n    if os.path.isdir(dirname):\n        return 
os.path.join(dirname, \"%s.rur\" % jobid)\n    else:\n        raise IOError(\"not a directory\")\n\n\ndef main():\n    # An RUR output plugin that will write data specific to a PBS job\n    # to a well known path that PBS hooks can access.  The format of\n    # the file will be:\n    # pluginName : apid : pluginOutput\n    #\n    # See the Cray document: \"Managing System Software for the Cray\n    # Linux Environment\" section 12.7 \"RUR Plugins\" for more\n    # information on the RUR plugin interface.\n    # http://docs.cray.com/books/S-2393-5101/S-2393-5101.pdf\n\n    try:\n        rur_output = list()\n        rur_output = rur_output_args(sys.argv[1:], True)\n        apid = rur_output[0]\n        jobid = rur_output[1]\n        inputfilelist = rur_output[4]\n    except Exception as e:\n        rur_errorlog(\"RUR PBS output plugin rur_output_args error '%s'\" %\n                     str(e))\n        exit(1)\n\n    # If an aprun runs within a PBS job, the jobid will have the PBS\n    # jobid set.  It will have the short servername like \"77.sdb\".\n    # If an aprun is run interactively, the jobid will be \"0\".\n    if jobid == \"0\":\t\t# not from a PBS job\n        exit(0)\n\n    try:\n        output = outname(jobid)\n        outfile = open(output, \"a\")\n    except Exception:\n        rur_errorlog(\"RUR PBS output plugin cannot access output file %s\" %\n                     output)\n        exit(1)\n\n    # copy input to job specific output file\n    for inputfile in inputfilelist:\n        try:\n            plugin = get_plugin_name(inputfile)\n            plugin = plugin.split()[1]\t\t# keep just the plugin name\n            with open(inputfile, \"r\") as infile:\n                for line in infile:\n                    outfile.write(\"%s : %s : %s\" % (plugin, apid, line))\n        except Exception:\n            pass\n    outfile.close()\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "src/unsupported/pbs_rescquery.3B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n\n.if \\n(Pb .ig Ig\n.TH pbs_rescquery 3B \"\" Local \"PBS Pro\"\n.\\\" The following macros are style for object names and values.\n.de Ar\t\t\\\" command/function arguments and operands (italic)\n.ft 2\n.if \\\\n(.$>0 \\&\\\\$1\\f1\\\\$2\n..\n.de Av\t\t\\\" data item values  (Helv)\n.if  \\n(Pb .ft 6\n.if !\\n(Pb .ft 3\n.ps -1\n.if \\\\n(.$>0 \\&\\\\$1\\s+1\\f1\\\\$2\n..\n.de At\t\t\\\" attribute and data item names (Helv Bold)\n.if  \\n(Pb .ft 6\n.if !\\n(Pb .ft 2\n.ps -1\n.if \\\\n(.$>0 \\&\\\\$1\\s+1\\f1\\\\$2\n..\n.de Ty\t\t\\\" Type-ins and examples (typewritter)\n.if  \\n(Pb .ft 5\n.if !\\n(Pb .ft 3\n.if \\\\n(.$>0 \\&\\\\$1\\f1\\\\$2\n..\n.de Er\t\t\\\" Error values ( [Helv] )\n.if  \\n(Pb .ft 6\n.if !\\n(Pb .ft 3\n\\&\\s-1[\\^\\\\$1\\^]\\s+1\\f1\\\\$2\n..\n.de Sc\t\t\\\" Symbolic constants ( {Helv} )\n.if  \\n(Pb .ft 6\n.if !\\n(Pb .ft 3\n\\&\\s-1{\\^\\\\$1\\^}\\s+1\\f1\\\\$2\n..\n.de Al\t\t\\\" Attribute list item, like .IP but set font and size\n.if !\\n(Pb .ig Ig\n.ft 6\n.IP \"\\&\\s-1\\\\$1\\s+1\\f1\"\n.Ig\n.if  \\n(Pb .ig Ig\n.ft 2\n.IP \"\\&\\\\$1\\s+1\\f1\"\n.Ig\n..\n.\\\" the following pair of macros are used to bracket sections of code\n.de Cs\n.ft 5\n.nf\n..\n.de Ce\n.sp\n.fi\n.ft 1\n..\n.\\\" End of macros\n.Ig\n.SH NAME\npbs_rescquery, avail, totpool, usepool - query resource 
availability\n.SH SYNOPSIS\n#include <pbs_error.h>\n.br\n#include <pbs_ifl.h>\n.sp\n.ft 3\n.nf\nint pbs_rescquery\\^(\\^int\\ connect, char\\ **resourcelist, int *arraysize,\nint *available, int *allocated, int *reserved, int *down \\^)\n.sp\nchar *avail\\^(\\^int connect, char *resc\\^)\n.sp\nint totpool\\^(\\^int connect, int update\\^)\n.sp\nint usepool\\^(\\^int connect, int update\\^)\n.fi\n.ft 1\n.SH DESCRIPTION\n.if \\n(Pb .ig Ig\n.HP 2\n.Ig\n.if !\\n(Pb .ig Ig\n.sp\n.Ig\n.B pbs_rescquery\n.br\nIssue a request to the batch server to query the availability of resources.\n.Ar connect\nis the connection returned by \\f3pbs_connect\\fP().\n.Ar resourcelist\nis an array of one or more strings specifying the resources to be queried.\n.Ar arraysize\nis the number of strings in resourcelist.\n.Ar available ,\n.Ar allocated ,\n.Ar reserved ,\nand\n.Ar down\nare integer arrays of size arraysize.  The amount of resource specified in\nthe corresponding resourcelist string which is available, already allocated,\nreserved, and down/off-line is returned in the integer arrays.\n.IP\nAt the present time the only resource which may be specified is \"nodes\".\nIt may be specified as\n.br\n.Ty \\ \\ \\ \\ nodes\n.br\n.Ty \\ \\ \\ \\ nodes=\n.br\n.Ty \\ \\ \\ \\ nodes=\\f2specification\\f1\n.br\nwhere specification is what a user specifies in the -l option argument list\nfor nodes, see qsub(1B) and the various pbs_resource_* man pages.\n.IP\nWhere the node resourcelist is a simple type, such as \"nodes\", \"nodes=\",\nor \"nodes=\\f2type\\fP\", the numbers returned reflect the actual number of nodes\n(of the specified type) which are \\f2available\\fP, \\f2allocated\\fP,\n\\f2reserved\\fP, or \\f2down\\fP.\n.IP\nFor a more complex node resourcelist, such as\n\"nodes=2\" or \"nodes=type1:type2\", only the value returned in\n.I available\nhas meaning.\nIf the number in\n.I available\nis positive, it is the number of nodes required to satisfy the specification\nand that
 some set of nodes are available which will satisfy it, see\n.I avail ().\nIf the number in\n.I available\nis zero, some number of nodes required for the specification are\ncurrently unavailable, the request might be satisfied at a later time.\nIf the number in\n.I available\nis negative, no combination of known nodes can fulfill the specification.\n.if \\n(Pb .ig Ig\n.HP 2\n.Ig\n.if !\\n(Pb .ig Ig\n.sp\n.Ig\n.B avail\n.br\nThe\n.I avail ()\ncall is provided as a conversion aid for schedulers written for early versions\nof PBS.   The avail() routine uses pbs_rescquery() and returns a character\nstring answer.\n.Ar connect\nis the connection returned by \\f3pbs_connect\\fP().\n.Ar resc\nis a single\n.I node=specification\nspecification as discussed above.  If the nodes to satisfy the specification\nare currently available, the return value is the character string\n.B yes .\nIf the nodes are currently unavailable, the return is the character string\n.B no .\nIf the specification could never be satisfied, the return is the string\n.B never .\nAn error in the specification returns the character string\n.B ? .\n.if \\n(Pb .ig Ig\n.HP 2\n.Ig\n.if !\\n(Pb .ig Ig\n.sp\n.Ig\n.B totpool\n.br\nThe\n.I totpool ()\nfunction returns the total number of nodes known to the PBS server.  This is\nthe sum of the number of nodes available, allocated, reserved, and down.\nThe parameter\n.Ar connection\nis the connection returned by pbs_connect().\nThe parameter\n.Ar update\nif non-zero, causes totpool() to issue a pbs_rescquery() call to obtain\nfresh information.
   If zero, numbers from the prior pbs_rescquery() are used.\n.if \\n(Pb .ig Ig\n.HP 2\n.Ig\n.if !\\n(Pb .ig Ig\n.sp\n.Ig\n.B usepool\n.br\n.I usepool ()\nreturns the number of nodes currently in use, the sum of allocated, reserved,\nand down.\nThe parameter\n.Ar connection\nis the connection returned by pbs_connect().\nThe parameter\n.Ar update\nif non-zero, causes usepool() to issue a pbs_rescquery() call to obtain\nfresh information.   If zero, numbers from the prior pbs_rescquery() are used.\n.SH \"SEE ALSO\"\nqsub(1B), pbs_connect(3B), pbs_disconnect(3B), pbs_rescreserve(3B) and\npbs_resources(7B)\n.SH DIAGNOSTICS\nWhen the batch request generated by the \\f3pbs_rescquery\\f1()\nfunction has been completed successfully\nby a batch server, the routine will return 0 (zero).\nOtherwise, a non-zero error is returned.  The error number is also set\nin pbs_errno.\n.LP\nThe functions usepool() and totpool() return -1 on error.\n"
  },
  {
    "path": "src/unsupported/pbs_rmget.8B",
    "content": ".\\\"\n.\\\" Copyright (C) 1994-2021 Altair Engineering, Inc.\n.\\\" For more information, contact Altair at www.altair.com.\n.\\\"\n.\\\" This file is part of both the OpenPBS software (\"OpenPBS\")\n.\\\" and the PBS Professional (\"PBS Pro\") software.\n.\\\"\n.\\\" Open Source License Information:\n.\\\"\n.\\\" OpenPBS is free software. You can redistribute it and/or modify it under\n.\\\" the terms of the GNU Affero General Public License as published by the\n.\\\" Free Software Foundation, either version 3 of the License, or (at your\n.\\\" option) any later version.\n.\\\"\n.\\\" OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n.\\\" ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n.\\\" FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n.\\\" License for more details.\n.\\\"\n.\\\" You should have received a copy of the GNU Affero General Public License\n.\\\" along with this program.  If not, see <http://www.gnu.org/licenses/>.\n.\\\"\n.\\\" Commercial License Information:\n.\\\"\n.\\\" PBS Pro is commercially licensed software that shares a common core with\n.\\\" the OpenPBS software.  
For a copy of the commercial license terms and\n.\\\" conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n.\\\" Altair Legal Department.\n.\\\"\n.\\\" Altair's dual-license business model allows companies, individuals, and\n.\\\" organizations to create proprietary derivative works of OpenPBS and\n.\\\" distribute them - whether embedded or bundled with other software -\n.\\\" under a commercial license agreement.\n.\\\"\n.\\\" Use of Altair's trademarks, including but not limited to \"PBS™\",\n.\\\" \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n.\\\" subject to Altair's trademark licensing policies.\n.\\\"\n\n.if \\n(Pb .ig Iq\n.TH pbs_rmget 8B \"8 February 2007\" Local \"PBS\"\n.\\\" The following macros are style for object names and values.\n.de Ar\t\t\\\" command/function arguments and operands (italic)\n.ft 2\n.if \\\\n(.$>0 \\&\\\\$1\\f1\\\\$2\n..\n.de Av\t\t\\\" data item values  (Helv)\n.if  \\n(Pb .ft 6\n.if !\\n(Pb .ft 3\n.ps -1\n.if \\\\n(.$>0 \\&\\\\$1\\s+1\\f1\\\\$2\n..\n.de At\t\t\\\" attribute and data item names (Helv Bold)\n.if  \\n(Pb .ft 6\n.if !\\n(Pb .ft 2\n.ps -1\n.if \\\\n(.$>0 \\&\\\\$1\\s+1\\f1\\\\$2\n..\n.de Ty\t\t\\\" Type-ins and examples (typewriter)\n.if  \\n(Pb .ft 5\n.if !\\n(Pb .ft 3\n.if \\\\n(.$>0 \\&\\\\$1\\f1\\\\$2\n..\n.de Er\t\t\\\" Error values ( [Helv] )\n.if  \\n(Pb .ft 6\n.if !\\n(Pb .ft 3\n\\&\\s-1[\\^\\\\$1\\^]\\s+1\\f1\\\\$2\n..\n.de Sc\t\t\\\" Symbolic constants ( {Helv} )\n.if  \\n(Pb .ft 6\n.if !\\n(Pb .ft 3\n\\&\\s-1{\\^\\\\$1\\^}\\s+1\\f1\\\\$2\n..\n.de Al\t\t\\\" Attribute list item, like .IP but set font and size\n.if !\\n(Pb .ig Ig\n.ft 6\n.IP \"\\&\\s-1\\\\$1\\s+1\\f1\"\n.Ig\n.if  \\n(Pb .ig Ig\n.ft 2\n.IP \"\\&\\\\$1\\s+1\\f1\"\n.Ig\n..\n.\\\" the following pair of macros are used to bracket sections of code\n.de Cs\n.ft 5\n.nf\n..\n.de Ce\n.sp\n.fi\n.ft 1\n..\n.\\\" End of macros\n.Iq\n\n\n.SH NAME\n.B pbs_rmget\n\\- queries MOM for resource values\n\n.SH 
SYNOPSIS\npbs_rmget [-m MOM name] [-p port] [resource list]\n\n.SH DESCRIPTION\nThe\n.B pbs_rmget\ncommand uses the resource monitor interface to query the MOM\nfor resource values.\n\n.SH OPTIONS\n.IP \"-m MOM name\" 15\nThe\n.I MOM name\n(hostname) to query.\nIf the\n.I MOM name\nis not specified, the MOM on the current host is queried.\n\n.IP \"-p port\" 15\nSpecifies the MOM's RM\n.I port\nto query.  If the\n.I port\nis not specified, the default port is queried.\n\n.SH OPERANDS\n.IP \"resource list\" 15\nSpace-separated list of one or more resources.\nIf no\n.I resource list\nis given, the\n.B pbs_rmget\ncommand returns its usage.\n\n.SH OUTPUT\nGiven\n.B pbs_rmget RES_A RES_B,\nthe output is:\n.RS 5\n[0] RES_A=<value of RES_A>\n.br\n[1] RES_B=<value of RES_B>\n.RE\n.br\n\nQuerying a nonexistent resource:\n.br\nGiven\n.B pbs_rmget RES_C,\nwhere RES_C is nonexistent, the output is:\n.RS 5\n[0] RES_C=? 15201\n.RE\n\n.SH EXIT STATUS\n.IP \"0\" 15\nSuccess\n.IP \"1\" 15\nif MOM name, option, or port is unrecognized.\nA message is printed to standard error.\n.LP\n\n.SH ERROR MESSAGES\nIf the\n.B pbs_rmget\ncommand fails to open a connection to the MOM name given in the\n.I -m\noption:\n.RS 5\n\"Unable to open connection to mom: <MOM name>, <MOM port>\"\n.RE\nThe default MOM port is reported as zero.\n.br\n\nIf the\n.B pbs_rmget\ncommand fails to get a message back from MOM:\n.RS 5\n\"Error getting response <resource request> from mom.\"\n.RE\n\n.SH SEE ALSO\nThe\n.B PBS Professional External Reference Specification,\nThe\n.B PBS Administrator's Guide,\n.br\nrm(3B),\npbs_mom(8B), pbs_tclsh(8B)\n"
  },
  {
    "path": "src/unsupported/pbs_rmget.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <pbs_config.h>\n\n#include <stdio.h>\n#include <string.h>\n#include <stdlib.h>\n#include <unistd.h>\n#include <pbs_ifl.h>\n#include <rm.h>\n#include \"pbs_internal.h\"\n#include \"tpp.h\"\n#include \"log.h\"\n\n#define SHOW_NONE 0xff\n\nint\nmain(int argc, char *argv[])\n{\n\tint i;\n\tchar mom_name[PBS_MAXHOSTNAME + 1];\n\tint mom_port = 0;\n\tint c;\n\tint mom_sd;\n\tchar *req;\n\n\tif (initsocketlib())\n\t\treturn 1;\n\n\tif (gethostname(mom_name, (sizeof(mom_name) - 1)) < 0)\n\t\tmom_name[0] = '\\0';\n\n\twhile ((c = getopt(argc, argv, \"m:p:\")) != EOF) {\n\t\tswitch (c) {\n\t\t\tcase 'm':\n\t\t\t\tstrcpy(mom_name, optarg);\n\t\t\t\tbreak;\n\t\t\tcase 'p':\n\t\t\t\tmom_port = atoi(optarg);\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\tfprintf(stderr, \"Bad option: %c\\n\", c);\n\t\t}\n\t}\n\n\tif (mom_name[0] == '\\0' || optind == argc) {\n\t\tfprintf(stderr,\n\t\t\t\"Error in usage: pbs_rmget [-m mom name] [-p mom port] <req1>...[reqN]\\n\");\n\t\treturn 1;\n\t}\n\n\tif (set_msgdaemonname(\"pbs_rmget\")) {\n\t\tfprintf(stderr, \"Out of memory\\n\");\n\t\treturn 1;\n\t}\n\n\t/* load the pbs conf file */\n\tif (pbs_loadconf(0) == 0) {\n\t\tfprintf(stderr, \"%s: Configuration error\\n\", argv[0]);\n\t\treturn (1);\n\t}\n\n\tset_log_conf(pbs_conf.pbs_leaf_name, pbs_conf.pbs_mom_node_name,\n\t\t     pbs_conf.locallog, 
pbs_conf.syslogfac,\n\t\t     pbs_conf.syslogsvr, pbs_conf.pbs_log_highres_timestamp);\n\n\tif (!pbs_conf.pbs_leaf_name) {\n\t\tchar my_hostname[PBS_MAXHOSTNAME + 1];\n\t\tif (gethostname(my_hostname, (sizeof(my_hostname) - 1)) < 0) {\n\t\t\tfprintf(stderr, \"Failed to get hostname\\n\");\n\t\t\treturn -1;\n\t\t}\n\t\tpbs_conf.pbs_leaf_name = get_all_ips(my_hostname, log_buffer, sizeof(log_buffer) - 1);\n\t\tif (!pbs_conf.pbs_leaf_name) {\n\t\t\tfprintf(stderr, \"%s\\n\", log_buffer);\n\t\t\tfprintf(stderr, \"%s\\n\", \"Unable to determine TPP node name\");\n\t\t\treturn -1;\n\t\t}\n\t}\n\n\tif ((mom_sd = openrm(mom_name, mom_port)) < 0) {\n\t\tfprintf(stderr, \"Unable to open connection to mom: %s:%d\\n\", mom_name, mom_port);\n\t\treturn 1;\n\t}\n\n\tfor (i = optind; i < argc; i++)\n\t\taddreq(mom_sd, argv[i]);\n\n\tfor (i = optind; i < argc; i++) {\n\t\treq = getreq(mom_sd);\n\t\tif (req == NULL) {\n\t\t\tfprintf(stderr, \"Error getting response %d from mom.\\n\", i - optind);\n\t\t\treturn 1;\n\t\t}\n\t\tprintf(\"[%d] %s\\n\", i - optind, req);\n\t\tfree(req);\n\t}\n\n\tcloserm(mom_sd);\n\n\treturn 0;\n}\n"
  },
  {
    "path": "src/unsupported/pbs_stat",
    "content": "#!/bin/bash\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\npbs_conf=${PBS_CONF_FILE:-/etc/pbs.conf}\nif [ -r \"${pbs_conf}\" ]; then\n\t. 
${pbs_conf}\nelse\n\techo \"***\" >&2\n\techo \"*** No PBS_CONF_FILE found, exiting\" >&2\n\techo \"***\" >&2\n\texit 1\nfi\n\nexport PYTHONPATH=${PBS_EXEC}/unsupported/fw:${PYTHONPATH}\n\nif [ -x \"${PBS_EXEC}/python/bin/python\" ]; then\n\t${PBS_EXEC}/python/bin/python ${PBS_EXEC}/unsupported/fw/bin/pbs_stat.py \"${@}\"\nelse\n\tpython3 ${PBS_EXEC}/unsupported/fw/bin/pbs_stat.py \"${@}\"\nfi\n"
  },
  {
    "path": "src/unsupported/rapid_inter.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n# This is a queuejob hook script that determines if a job entering the system\n# is an interactive job. And if so, directs it to the high priority queue\n# specified in 'high_priority_queue', and also tells the server to restart\n# the scheduling cycle. 
This is done for faster qsub -Is throughput.\n#\n# Prerequisite:\n#    Site must define a \"high\" queue as follows:\n#        qmgr -c \"create queue high queue_type=e,Priority=150\"\n#        qenable high\n#        qstart high\n#    NOTE:\n#        A) 150 is the default priority for an express (high) queue.\n#           This allows the interactive job to preempt currently running\n#           work.\n#        B) If the site does not want this, lower the priority of the high\n#           priority queue.  The job might not run right away, but the\n#           scheduler will still try to start it promptly.\n#\n#    This hook is instantiated as follows:\n#        qmgr -c \"create hook rapid_inter event=queuejob\"\n#        qmgr -c \"import hook rapid_inter application/x-python default\n#                 rapid_inter.py\"\nimport pbs\n\nhigh_priority_queue = \"high\"\n\ne = pbs.event()\nif e.job.interactive:\n    high = pbs.server().queue(high_priority_queue)\n    if high is not None:\n        e.job.queue = high\n        pbs.logmsg(pbs.LOG_DEBUG, \"quick start interactive job\")\n        pbs.server().scheduler_restart_cycle()\n"
  },
  {
    "path": "src/unsupported/renew-test/base64.c",
    "content": "/* -*- mode: c; c-basic-offset: 4; indent-tabs-mode: nil -*- */\n/* util/support/base64.c - base64 encoder and decoder */\n/*\n * Copyright (c) 1995-2001 Kungliga Tekniska Högskolan\n * (Royal Institute of Technology, Stockholm, Sweden).\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions\n * are met:\n *\n * 1. Redistributions of source code must retain the above copyright\n *    notice, this list of conditions and the following disclaimer.\n *\n * 2. Redistributions in binary form must reproduce the above copyright\n *    notice, this list of conditions and the following disclaimer in the\n *    documentation and/or other materials provided with the distribution.\n *\n * 3. Neither the name of the Institute nor the names of its contributors\n *    may be used to endorse or promote products derived from this software\n *    without specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE INSTITUTE AND CONTRIBUTORS ``AS IS'' AND\n * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED.  
IN NO EVENT SHALL THE INSTITUTE OR CONTRIBUTORS BE LIABLE\n * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS\n * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\n * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\n * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\n * SUCH DAMAGE.\n */\n/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n//#include <k5-platform.h>\n//#include <k5-base64.h>\n#include <string.h>\n#include <stdlib.h>\n\n#ifndef SIZE_MAX\n#define SIZE_MAX ((size_t) ((size_t) 0 - 1))\n#endif\n\nstatic const char base64_chars[] =\n\t\"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/\";\n\nchar *\nk5_base64_encode(const void *data, size_t len)\n{\n\tchar *s, *p;\n\tsize_t i;\n\tunsigned int c;\n\tconst unsigned char *q;\n\n\tif (len > SIZE_MAX / 4)\n\t\treturn NULL;\n\n\tp = s = malloc(len * 4 / 3 + 4);\n\tif (p == NULL)\n\t\treturn NULL;\n\tq = (const unsigned char *) data;\n\n\tfor (i = 0; i < len;) {\n\t\tc = q[i++];\n\t\tc *= 256;\n\t\tif (i < len)\n\t\t\tc += q[i];\n\t\ti++;\n\t\tc *= 256;\n\t\tif (i < len)\n\t\t\tc += q[i];\n\t\ti++;\n\t\tp[0] = base64_chars[(c & 0x00fc0000) >> 18];\n\t\tp[1] = base64_chars[(c & 0x0003f000) >> 12];\n\t\tp[2] = base64_chars[(c & 0x00000fc0) >> 6];\n\t\tp[3] = base64_chars[(c & 0x0000003f) >> 0];\n\t\tif (i > len)\n\t\t\tp[3] = '=';\n\t\tif (i > len + 1)\n\t\t\tp[2] = '=';\n\t\tp += 4;\n\t}\n\t*p = '\\0';\n\treturn s;\n}\n\n#define DECODE_ERROR 0xffffffff\n\n/* Decode token, which must be four bytes long. 
*/\nstatic unsigned int\ndecode_token(const char *token)\n{\n\tint i, marker = 0;\n\tunsigned int val = 0;\n\tconst char *p;\n\n\tfor (i = 0; i < 4; i++) {\n\t\tval *= 64;\n\t\tif (token[i] == '=') {\n\t\t\tmarker++;\n\t\t} else if (marker > 0) {\n\t\t\treturn DECODE_ERROR;\n\t\t} else {\n\t\t\tp = strchr(base64_chars, token[i]);\n\t\t\tif (p == NULL)\n\t\t\t\treturn DECODE_ERROR;\n\t\t\tval += p - base64_chars;\n\t\t}\n\t}\n\tif (marker > 2)\n\t\treturn DECODE_ERROR;\n\treturn (marker << 24) | val;\n}\n\nvoid *\nk5_base64_decode(const char *str, size_t *len_out)\n{\n\tunsigned char *data, *q;\n\tunsigned int val, marker;\n\tsize_t len;\n\n\t*len_out = SIZE_MAX;\n\n\t/* Allocate the output buffer. */\n\tlen = strlen(str);\n\tif (len % 4)\n\t\treturn NULL;\n\tq = data = malloc(len / 4 * 3);\n\tif (data == NULL) {\n\t\t*len_out = 0;\n\t\treturn NULL;\n\t}\n\n\t/* Decode the string. */\n\tfor (; *str != '\\0'; str += 4) {\n\t\tval = decode_token(str);\n\t\tif (val == DECODE_ERROR) {\n\t\t\tfree(data);\n\t\t\treturn NULL;\n\t\t}\n\t\tmarker = (val >> 24) & 0xff;\n\t\t*q++ = (val >> 16) & 0xff;\n\t\tif (marker < 2)\n\t\t\t*q++ = (val >> 8) & 0xff;\n\t\tif (marker < 1)\n\t\t\t*q++ = val & 0xff;\n\t}\n\t*len_out = q - data;\n\treturn data;\n}\n"
  },
  {
    "path": "src/unsupported/renew-test/base64.h",
    "content": "/* -*- mode: c; c-basic-offset: 4; indent-tabs-mode: nil -*- */\n/* include/k5-base64.h - base64 declarations */\n/*\n * Copyright (c) 1995, 1996, 1997 Kungliga Tekniska Högskolan\n * (Royal Institute of Technology, Stockholm, Sweden).\n * All rights reserved.\n *\n * Redistribution and use in source and binary forms, with or without\n * modification, are permitted provided that the following conditions\n * are met:\n *\n * 1. Redistributions of source code must retain the above copyright\n *    notice, this list of conditions and the following disclaimer.\n *\n * 2. Redistributions in binary form must reproduce the above copyright\n *    notice, this list of conditions and the following disclaimer in the\n *    documentation and/or other materials provided with the distribution.\n *\n * 3. Neither the name of the Institute nor the names of its contributors\n *    may be used to endorse or promote products derived from this software\n *    without specific prior written permission.\n *\n * THIS SOFTWARE IS PROVIDED BY THE INSTITUTE AND CONTRIBUTORS ``AS IS'' AND\n * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n * ARE DISCLAIMED.  
IN NO EVENT SHALL THE INSTITUTE OR CONTRIBUTORS BE LIABLE\n * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS\n * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\n * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\n * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\n * SUCH DAMAGE.\n */\n/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#ifndef K5_BASE64_H\n#define K5_BASE64_H\n\n#include <stddef.h>\n\n/* base64-encode data and return it in an allocated buffer.  Return NULL if out\n * of memory. */\nchar *k5_base64_encode(const void *data, size_t len);\n\n/*\n * Decode str as base64 and return the result in an allocated buffer, setting\n * *len_out to the length.  Return NULL and *len_out == 0 if out of memory,\n * NULL and *len_out == SIZE_MAX on invalid input.\n */\nvoid *k5_base64_decode(const char *str, size_t *len_out);\n\n#endif /* K5_BASE64_H */\n"
  },
  {
    "path": "src/unsupported/renew-test/renew-test.c",
    "content": "/*\n * Copyright (C) 1994-2021 Altair Engineering, Inc.\n * For more information, contact Altair at www.altair.com.\n *\n * This file is part of both the OpenPBS software (\"OpenPBS\")\n * and the PBS Professional (\"PBS Pro\") software.\n *\n * Open Source License Information:\n *\n * OpenPBS is free software. You can redistribute it and/or modify it under\n * the terms of the GNU Affero General Public License as published by the\n * Free Software Foundation, either version 3 of the License, or (at your\n * option) any later version.\n *\n * OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n * License for more details.\n *\n * You should have received a copy of the GNU Affero General Public License\n * along with this program.  If not, see <http://www.gnu.org/licenses/>.\n *\n * Commercial License Information:\n *\n * PBS Pro is commercially licensed software that shares a common core with\n * the OpenPBS software.  
For a copy of the commercial license terms and\n * conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n * Altair Legal Department.\n *\n * Altair's dual-license business model allows companies, individuals, and\n * organizations to create proprietary derivative works of OpenPBS and\n * distribute them - whether embedded or bundled with other software -\n * under a commercial license agreement.\n *\n * Use of Altair's trademarks, including but not limited to \"PBS™\",\n * \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n * subject to Altair's trademark licensing policies.\n */\n\n#include <stdlib.h>\n#include <stdio.h>\n#include <string.h>\n#include <krb5.h>\n\n#include \"base64.h\"\n\n#define VAR_NAME_KEYTAB \"PBS_RENEW_KRB_KEYTAB\"\n\nstatic krb5_error_code\nprepare_ccache(krb5_context context, krb5_creds *creds, krb5_ccache *cc)\n{\n\tkrb5_error_code ret;\n\tkrb5_ccache ccache = NULL;\n\n\tret = krb5_cc_new_unique(context, \"MEMORY\", NULL, &ccache);\n\tif (ret) {\n\t\tfprintf(stderr, \"krb5_cc_new_unique() failed (%s)\",\n\t\t\tkrb5_get_error_message(context, ret));\n\t\tgoto end;\n\t}\n\n\tret = krb5_cc_initialize(context, ccache, creds->client);\n\tif (ret) {\n\t\tfprintf(stderr, \"krb5_cc_initialize() failed (%s)\",\n\t\t\tkrb5_get_error_message(context, ret));\n\t\tgoto end;\n\t}\n\n\tret = krb5_cc_store_cred(context, ccache, creds);\n\tif (ret) {\n\t\tfprintf(stderr, \"krb5_cc_store_cred() failed (%s)\",\n\t\t\tkrb5_get_error_message(context, ret));\n\t\tgoto end;\n\t}\n\n\t*cc = ccache;\n\tccache = NULL;\n\nend:\n\tif (ccache)\n\t\tkrb5_cc_destroy(context, ccache);\n\n\treturn ret;\n}\n\nstatic krb5_error_code\nget_init_creds_user(krb5_context context, const char *username, krb5_creds *creds)\n{\n\tkrb5_error_code ret;\n\tkrb5_get_init_creds_opt *opt = NULL;\n\tkrb5_keytab keytab = NULL;\n\tkrb5_principal user = NULL;\n\n\tret = krb5_parse_name(context, username, &user);\n\tif (ret) {\n\t\tfprintf(stderr, 
\"Parsing user principal (%s) failed: %s.\\n\",\n\t\t\tusername, krb5_get_error_message(context, ret));\n\t\tgoto end;\n\t}\n\n\tif (getenv(VAR_NAME_KEYTAB))\n\t\tret = krb5_kt_resolve(context, getenv(VAR_NAME_KEYTAB), &keytab);\n\telse\n\t\tret = krb5_kt_default(context, &keytab);\n\tif (ret) {\n\t\tfprintf(stderr, \"Cannot open keytab: %s\\n\",\n\t\t\tkrb5_get_error_message(context, ret));\n\t\tgoto end;\n\t}\n\n\tret = krb5_get_init_creds_opt_alloc(context, &opt);\n\tif (ret) {\n\t\tfprintf(stderr, \"krb5_get_init_creds_opt_alloc() failed (%s)\\n\",\n\t\t\tkrb5_get_error_message(context, ret));\n\t\tgoto end;\n\t}\n\n\tkrb5_get_init_creds_opt_set_forwardable(opt, 1);\n\n\tret = krb5_get_init_creds_keytab(context, creds, user, keytab, 0, NULL, opt);\n\tif (ret) {\n\t\tfprintf(stderr, \"krb5_get_init_creds_keytab() failed (%s)\\n\",\n\t\t\tkrb5_get_error_message(context, ret));\n\t\tgoto end;\n\t}\n\nend:\n\tif (opt)\n\t\tkrb5_get_init_creds_opt_free(context, opt);\n\tif (user)\n\t\tkrb5_free_principal(context, user);\n\tif (keytab)\n\t\tkrb5_kt_close(context, keytab);\n\n\treturn (ret);\n}\n\nstatic krb5_error_code\ninit_auth_context(krb5_context context, krb5_auth_context *auth_context)\n{\n\tint32_t flags;\n\tkrb5_error_code ret;\n\n\tret = krb5_auth_con_init(context, auth_context);\n\tif (ret) {\n\t\tfprintf(stderr, \"krb5_auth_con_init() failed: %s.\\n\",\n\t\t\tkrb5_get_error_message(context, ret));\n\t\treturn ret;\n\t}\n\n\tkrb5_auth_con_getflags(context, *auth_context, &flags);\n\t/* We disable putting times in the message so the message could be cached\n\t   and re-sent in the future. If caching isn't needed, it could be enabled\n\t   again (but read below) */\n\t/* N.B. The semantics of KRB5_AUTH_CONTEXT_DO_TIME applied in\n\t   krb5_fwd_tgt_creds() seems to differ between Heimdal and MIT. MIT uses\n\t   it to (also) enable replay cache checks (that are useless and\n\t   troublesome for us). 
Heimdal uses it to just specify whether or not the\n\t   timestamp is included in the forwarded message. */\n\tflags &= ~(KRB5_AUTH_CONTEXT_DO_TIME);\n#ifdef HEIMDAL\n\t/* With Heimdal, we need to explicitly set that the credential is in cleartext.\n\t * MIT does not have the flag KRB5_AUTH_CONTEXT_CLEAR_FORWARDED_CRED */\n\tflags |= KRB5_AUTH_CONTEXT_CLEAR_FORWARDED_CRED;\n#endif\n\tkrb5_auth_con_setflags(context, *auth_context, flags);\n\n\treturn 0;\n}\n\n/* Creates a KRB_CRED message containing serialized credentials. The credentials\n   aren't encrypted, relying on protection by the application protocol; see RFC 6448 */\nstatic krb5_error_code\nget_fwd_creds(krb5_context context, krb5_creds *creds, krb5_data *creds_data)\n{\n\tkrb5_error_code ret;\n\tkrb5_auth_context auth_context = NULL;\n\tkrb5_ccache ccache = NULL;\n\n\tret = init_auth_context(context, &auth_context);\n\tif (ret)\n\t\tgoto end;\n\n\tret = prepare_ccache(context, creds, &ccache);\n\tif (ret)\n\t\tgoto end;\n\n\t/* It's necessary to pass a hostname here (Heimdal segfaults\n\t * otherwise); MIT tries to get a credential for the host if a session key\n\t * doesn't exist. 
It should be noted that the krb5 configuration should set\n\t * the no-address flags for tickets (otherwise tickets couldn't be cached,\n\t * wouldn't work with multi-homed machines etc.).\n     */\n\tret = krb5_fwd_tgt_creds(context, auth_context, \"localhost\", creds->client,\n\t\t\t\t NULL, ccache, 1, creds_data);\n\tif (ret) {\n\t\tfprintf(stderr, \"krb5_fwd_tgt_creds() failed: %s.\\n\",\n\t\t\tkrb5_get_error_message(context, ret));\n\t\tgoto end;\n\t}\n\nend:\n\tif (auth_context)\n\t\tkrb5_auth_con_free(context, auth_context);\n\tif (ccache)\n\t\tkrb5_cc_destroy(context, ccache);\n\n\treturn (ret);\n}\n\nstatic int\noutput_creds(krb5_context context, krb5_creds *target_creds)\n{\n\tkrb5_error_code ret;\n\tkrb5_auth_context auth_context = NULL;\n\tkrb5_creds **creds = NULL, **c;\n\tchar *encoded = NULL;\n\tkrb5_data _creds_data, *creds_data = &_creds_data;\n\n\tmemset(&_creds_data, 0, sizeof(_creds_data));\n\n\tret = get_fwd_creds(context, target_creds, creds_data);\n\tif (ret)\n\t\tgoto end;\n\n\tret = init_auth_context(context, &auth_context);\n\tif (ret)\n\t\tgoto end;\n\n\tencoded = k5_base64_encode(creds_data->data, creds_data->length);\n\tif (encoded == NULL) {\n\t\tfprintf(stderr, \"failed to encode the credentials, exiting.\\n\");\n\t\tret = -1;\n\t\tgoto end;\n\t}\n\n\tret = krb5_rd_cred(context, auth_context, creds_data, &creds, NULL);\n\tif (ret) {\n\t\tfprintf(stderr, \"krb5_rd_cred() failed: %s.\\n\",\n\t\t\tkrb5_get_error_message(context, ret));\n\t\tgoto end;\n\t}\n\n\tprintf(\"Type: Kerberos\\n\");\n\t/* there might be multiple credentials exported, which we silently ignore */\n\tprintf(\"Valid until: %ld\\n\", (long int) creds[0]->times.endtime);\n\tprintf(\"%s\\n\", encoded);\n\n\tret = 0;\n\nend:\n\tkrb5_free_data_contents(context, &_creds_data);\n\tif (auth_context)\n\t\tkrb5_auth_con_free(context, auth_context);\n\tif (encoded)\n\t\tfree(encoded);\n\tif (creds) {\n\t\tfor (c = creds; c != NULL && *c != NULL; 
c++)\n\t\t\tkrb5_free_creds(context, *c);\n\t\tfree(creds);\n\t}\n\n\treturn (ret);\n}\n\nstatic int\ndoit(const char *user)\n{\n\tint ret;\n\tkrb5_creds my_creds;\n\tkrb5_context context = NULL;\n\n\tmemset((char *) &my_creds, 0, sizeof(my_creds));\n\n\tret = krb5_init_context(&context);\n\tif (ret) {\n\t\tfprintf(stderr, \"Cannot initialize Kerberos, exiting.\\n\");\n\t\treturn (ret);\n\t}\n\n\tret = get_init_creds_user(context, user, &my_creds);\n\tif (ret)\n\t\tgoto end;\n\n\tret = output_creds(context, &my_creds);\n\nend:\n\tkrb5_free_cred_contents(context, &my_creds);\n\tkrb5_free_context(context);\n\n\treturn (ret);\n}\n\nint\nmain(int argc, char *argv[])\n{\n\tchar *progname;\n\tint ret;\n\n\tif ((progname = strrchr(argv[0], '/')))\n\t\tprogname++;\n\telse\n\t\tprogname = argv[0];\n\n\tif (argc != 2) {\n\t\tfprintf(stderr, \"Usage: %s principal_name\\n\", progname);\n\t\texit(1);\n\t}\n\n\tret = doit(argv[1]);\n\n\tif (ret != 0)\n\t\tret = 1;\n\treturn (ret);\n}\n"
  },
  {
    "path": "src/unsupported/run_pelog_shell.ini",
    "content": "[run_pelog_shell]\n# Enable parallel prologues/epilogues that run on sister moms. Note that all\n# the normal requirements apply, except the scripts should be named pprologue \n# and pepilogue.\nENABLE_PARALLEL=False\n\n# Provide verbose hook output to the user's .o/.e file\nVERBOSE_USER_OUTPUT=False\n\n# DEFAULT_ACTION can be one of DELETE or RERUN\nDEFAULT_ACTION=RERUN\n\n# Enable Torque argument compatibility\nTORQUE_COMPAT=False\n"
  },
  {
    "path": "src/unsupported/run_pelog_shell.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\"\"\"\nrun_pelog_shell.py - PBS hook that runs the classic shell script prologue or\nepilogue, if it exists, while still being able to use execjob_prologue or\nexecjob_epilogue hooks. 
Also adds the capability of running parallel prologue\nor epilogue shell scripts and Torque compatibility.\n\nOn the primary execution host (the first host listed in PBS_NODEFILE), the\nstandard naming conventions of 'prologue' and 'epilogue' apply. Parallel\nprologues and epilogues use the naming conventions 'pprologue' and 'pepilogue',\nrespectively, but will only run on the secondary execution hosts. Classic\nprologues and epilogues on Windows are not currently implemented.\n\nParallel prologues will not run until a task associated with the job (e.g. via\npbs_attach, pbs_tmrsh, pbsdsh) begins on the secondary execution hosts.\n\nParallel epilogues will not run unless the prologue ran successfully on the\nprimary execution host. Only the primary execution host will have a value for\nresources_used in epilogue argument $7.\n\nWe assume the same requirements as listed in PBS 13.0\nAdministrator's Guide 11.5.4 for running all types of prologue and epilogue\nshell scripts:\n    - The script must be in the PBS_HOME/mom_priv directory\n    - The prologue must have the exact name \"prologue\" under UNIX/Linux, or\n      \"prologue.bat\" under Windows\n    - The epilogue must have the exact name \"epilogue\" under UNIX/Linux, or\n      \"epilogue.bat\" under Windows\n    - The script must be written to exit with one of the zero or positive exit\n      values listed in section 11.5.12, \"Prologue and Epilogue Exit Codes\". The\n      negative values are set by MOM\n    - Under UNIX/Linux, the script must be owned by root, be readable and\n      executable by root, and cannot be writable by anyone but root\n    - Under Windows, the script's permissions must give \"Full Access\" to the\n      local Administrators group on the local computer\n\nThe hook will kill the prologue/epilogue after hook_alarm - 5 seconds have\nelapsed. At this point the job will be requeued/deleted depending on the value\nof DEFAULT_ACTION below. 
If the hook_alarm time is not available, the default\nvalue of 30 seconds is assumed, giving the prologue/epilogue approximately 25\nseconds to complete.\n\nInstallation:\nTechnically you could create a single hook that fires on both the\nexecjob_prologue and the execjob_epilogue events, but to ensure execution order\nwe separate the two into individual events by creating two separate hooks\nthat refer to the same hook script.\n\nEdit run_pelog_shell.ini to make configuration changes, then create and\nimport the hooks as follows.\n\nAs root, run the following:\nqmgr << EOF\ncreate hook run_prologue_shell\nset hook run_prologue_shell event = execjob_prologue\nset hook run_prologue_shell enabled = true\nset hook run_prologue_shell order = 1\nset hook run_prologue_shell alarm = 35\nimport hook run_prologue_shell application/x-python default run_pelog_shell.py\nimport hook run_prologue_shell application/x-config default run_pelog_shell.ini\n\ncreate hook run_epilogue_shell\nset hook run_epilogue_shell event = execjob_epilogue\nset hook run_epilogue_shell enabled = true\nset hook run_epilogue_shell order = 999\nset hook run_epilogue_shell alarm = 35\nimport hook run_epilogue_shell application/x-python default run_pelog_shell.py\nimport hook run_epilogue_shell application/x-config default run_pelog_shell.ini\nEOF\n\nAny further configuration changes to run_pelog_shell.ini will require\nre-importing the file to both hooks:\nqmgr << EOF\nimport hook run_prologue_shell application/x-config default run_pelog_shell.ini\nimport hook run_epilogue_shell application/x-config default run_pelog_shell.ini\nEOF\n\nDirect modifications to this hook are not recommended.\nProceed at your own risk.\n\"\"\"\n\nimport os\nimport sys\nimport time\n\nimport pbs\n\nRERUN = 14\nDELETE = 6\n\n# The following constants can be modified in run_pelog_shell.ini to match\n# site preferences.\n\nENABLE_PARALLEL = False\nVERBOSE_USER_OUTPUT = False\nDEFAULT_ACTION = RERUN\nTORQUE_COMPAT = 
False\n\n\n# Set up a few variables\nstart_time = time.time()\npbs_event = pbs.event()\nhook_name = pbs_event.hook_name\nhook_alarm = 30  # default, we'll read it from the .HK later\nDEBUG = False  # default, we'll read it from the .HK later\njob = pbs_event.job\n\n# The trace_hook function has been written to be portable between hooks.\n\n\ndef trace_hook(**kwargs):\n    \"\"\"Simple exception trace logger for PBS hooks\n    loglevel=<int> (pbs.LOG_ERROR): log level to pass to pbs.logmsg()\n    reject=True: reject the job upon completion of logging trace\n    trace_in_reject=<bool> (False): pass trace to pbs.event().reject()\n    trace_in_reject=<str>: message to pass to pbs.event().reject() with trace\n    \"\"\"\n    import sys\n\n    if 'loglevel' in kwargs:\n        loglevel = kwargs['loglevel']\n    else:\n        loglevel = pbs.LOG_ERROR\n    if 'reject' in kwargs:\n        reject = kwargs['reject']\n    else:\n        reject = True\n    if 'trace_in_reject' in kwargs:\n        trace_in_reject = kwargs['trace_in_reject']\n    else:\n        trace_in_reject = False\n\n    # Associate hook events with the appropriate PBS constant. This is a list\n    # of all hook events as of PBS 13.0. If the event does not exist, it is\n    # removed from the list.\n    hook_events = ['queuejob', 'modifyjob', 'movejob', 'runjob', 'execjob_begin',\n                   'execjob_prologue', 'execjob_launch', 'execjob_attach',\n                   'execjob_preterm', 'execjob_epilogue', 'execjob_end',\n                   'resvsub', 'provision', 'exechost_periodic',\n                   'exechost_startup', 'execjob_resize', 'execjob_abort',\n                   'execjob_postsuspend', 'execjob_preresume']\n\n    hook_event = {}\n    # Iterate over a copy so that removals do not skip elements.\n    for he in list(hook_events):\n        # Only set available hooks for the current version of PBS.\n        if hasattr(pbs, he.upper()):\n            event_code = eval('pbs.' 
+ he.upper())\n            hook_event[event_code] = he\n            hook_event[he] = event_code\n            hook_event[he.upper()] = event_code\n            del event_code\n        else:\n            del hook_events[hook_events.index(he)]\n\n    trace = {\n        'line': sys.exc_info()[2].tb_lineno,\n        'module': sys.exc_info()[2].tb_frame.f_code.co_name,\n        'exception': sys.exc_info()[0].__name__,\n        'message': str(sys.exc_info()[1]),\n    }\n    tracemsg = '%s hook %s encountered an exception: Line %s in %s %s: %s' % (\n        hook_event[pbs.event().type], pbs.event().hook_name,\n        trace['line'], trace['module'], trace['exception'], trace['message']\n    )\n    rejectmsg = \"Hook Error: request rejected as filter hook '%s' \" \\\n        \"encountered an exception. Please inform Admin\" % pbs.event().hook_name\n    if not isinstance(loglevel, int):\n        loglevel = pbs.LOG_ERROR\n        tracemsg = 'trace_hook() called with invalid argument (loglevel=%s), '\\\n            'setting to pbs.LOG_ERROR. 
' % loglevel + tracemsg\n\n    pbs.logmsg(pbs.LOG_ERROR, tracemsg)\n\n    if reject:\n        tracemsg += ', request rejected'\n        if isinstance(trace_in_reject, bool):\n            if trace_in_reject:\n                pbs.event().reject(tracemsg)\n            else:\n                pbs.event().reject(rejectmsg)\n        else:\n            pbs.event().reject(str(trace_in_reject) + 'Line %s in %s %s:\\n%s' %\n                               (trace['line'], trace['module'],\n                                trace['exception'], trace['message']))\n\n\nclass JobLog:\n    \"\"\" Class for managing output to job stdout and stderr.\"\"\"\n\n    def __init__(self):\n        PBS_SPOOL = os.path.join(pbs_conf()['PBS_MOM_HOME'], 'spool')\n        self.stdout_log = os.path.join(PBS_SPOOL,\n                                       '%s.OU' % str(pbs.event().job.id))\n        self.stderr_log = os.path.join(PBS_SPOOL,\n                                       '%s.ER' % str(pbs.event().job.id))\n\n        if str(pbs.event().job.Join_Path) == 'oe':\n            self.stderr_log = self.stdout_log\n        elif str(pbs.event().job.Join_Path) == 'eo':\n            self.stdout_log = self.stderr_log\n\n    def stdout(self, msg):\n        \"\"\"Write msg to appropriate file handle for stdout\"\"\"\n        import sys\n\n        try:\n            if not pbs.event().job.interactive and pbs.event().job.in_ms_mom():\n                logfile = open(self.stdout_log, 'ab+')\n            else:\n                logfile = sys.stdout\n\n            if DEBUG:\n                pbs.logmsg(pbs.EVENT_DEBUG3,\n                           '%s;%s;[DEBUG3]: writing %s to %s' %\n                           (pbs.event().hook_name,\n                            pbs.event().job.id,\n                            repr(msg),\n                            logfile.name))\n\n            logfile.write(msg)\n            logfile.flush()\n            logfile.close()\n        except IOError:\n            trace_hook()\n\n    def stderr(self, 
msg):\n        \"\"\"Write msg to appropriate file handle for stderr\"\"\"\n        import sys\n\n        try:\n            if not pbs.event().job.interactive and pbs.event().job.in_ms_mom():\n                logfile = open(self.stderr_log, 'ab+')\n            else:\n                logfile = sys.stderr\n\n            if DEBUG:\n                pbs.logmsg(pbs.EVENT_DEBUG3,\n                           '%s;%s;[DEBUG3]: writing %s to %s' %\n                           (pbs.event().hook_name,\n                            pbs.event().job.id,\n                            repr(msg),\n                            logfile.name))\n\n            logfile.write(msg)\n            logfile.flush()\n            logfile.close()\n        except IOError:\n            trace_hook()\n\n\n# Read in pbs.conf\ndef pbs_conf(pbs_key=None):\n    \"\"\"Function to return the values from /etc/pbs.conf\n    If the PBS python interpreter hasn't been recycled, it is not necessary\n    to re-read and re-parse /etc/pbs.conf. 
This function will simply return\n    the variable that exists from the first time this function ran.\n    Creates a dict containing the key/value pairs in pbs.conf, accounting for\n    comments in lines and empty lines.\n    Returns a string representing the pbs.conf setting for pbs_key if set, or\n    the dict of all pbs.conf settings if pbs_key is not set.\n    \"\"\"\n    import os\n\n    if hasattr(pbs_conf, 'pbs_keys'):\n        return pbs_conf.pbs_keys[pbs_key] if pbs_key else pbs_conf.pbs_keys\n\n    if 'PBS_CONF_FILE' in list(os.environ.keys()):\n        pbs_conf_file = os.environ['PBS_CONF_FILE']\n    elif sys.platform == 'win32':\n        if 'ProgramFiles(x86)' in list(os.environ.keys()):\n            program_files = os.environ['ProgramFiles(x86)']\n        else:\n            program_files = os.environ['ProgramFiles']\n        pbs_conf_file = '%s\\\\PBS\\\\pbs.conf' % program_files\n    else:\n        pbs_conf_file = '/etc/pbs.conf'\n\n    pbs_conf.pbs_keys = dict([line.split('#')[0].strip().split('=')\n                              for line in open(pbs_conf_file)\n                              if not line.startswith('#') and '=' in line])\n\n    if 'PBS_MOM_HOME' not in list(pbs_conf.pbs_keys.keys()):\n        pbs_conf.pbs_keys['PBS_MOM_HOME'] = \\\n            pbs_conf.pbs_keys['PBS_HOME']\n\n    return pbs_conf.pbs_keys[pbs_key] if pbs_key else pbs_conf.pbs_keys\n\n\n# Primary hook execution begins here\ntry:\n\n    def rejectjob(reason, action=DEFAULT_ACTION):\n        \"\"\"Log job rejection and then call pbs.event().reject()\"\"\"\n\n        # Arguments to pbs.event().reject() do nothing in execjob events. 
Log a\n        # warning instead, update the job comment, then reject the job.\n        if action == RERUN:\n            job.rerun()\n            reason = 'Requeued - %s' % reason\n        elif action == DELETE:\n            job.delete()\n            reason = 'Deleted - %s' % reason\n        else:\n            reason = 'Rejected - %s' % reason\n\n        job.comment = '%s: %s' % (hook_name, reason)\n        pbs.logmsg(pbs.LOG_WARNING, ';'.join([hook_name, job.id, reason]))\n        pbs.logjobmsg(job.id, reason)  # Add a message that can be tracejob'd\n        if VERBOSE_USER_OUTPUT:\n            print(reason)\n        pbs_event.reject()\n\n    # For the path to mom_priv, we use PBS_MOM_HOME in case that is set,\n    # pbs_conf() will return PBS_HOME if it is not.\n    mom_priv = os.path.abspath(os.path.join(\n        pbs_conf()['PBS_MOM_HOME'], 'mom_priv'))\n\n    # Get the hook alarm time from the .HK file if it exists.\n    hk_file = os.path.join(mom_priv, 'hooks', '%s.HK' % hook_name)\n    if os.path.exists(hk_file):\n        hook_settings = dict([l.strip().split('=') for l in\n                              open(hk_file, 'r').readlines()])\n        if 'alarm' in list(hook_settings.keys()):\n            hook_alarm = int(hook_settings['alarm'])\n        if 'debug' in list(hook_settings.keys()):\n            DEBUG = True if hook_settings['debug'] == 'true' else False\n\n    if DEBUG:\n        pbs.logmsg(pbs.LOG_DEBUG, '%s;%s;[DEBUG] starting.' 
%\n                   (hook_name, job.id))\n\n    if 'PBS_HOOK_CONFIG_FILE' in os.environ:\n        config_file = os.environ[\"PBS_HOOK_CONFIG_FILE\"]\n        config = dict([l.split('#')[0].strip().split('=')\n                       for l in open(config_file, 'r').readlines()\n                       if '=' in l])\n\n        # Set the true/false configurations\n        if 'ENABLE_PARALLEL' in list(config.keys()):\n            ENABLE_PARALLEL = config['ENABLE_PARALLEL'].lower()[0] in [\n                't', '1']\n        if 'VERBOSE_USER_OUTPUT' in list(config.keys()):\n            VERBOSE_USER_OUTPUT = config['VERBOSE_USER_OUTPUT'].lower()[0] in [\n                't', '1']\n        if 'DEFAULT_ACTION' in list(config.keys()):\n            if config['DEFAULT_ACTION'].upper() == 'DELETE':\n                DEFAULT_ACTION = DELETE\n            elif config['DEFAULT_ACTION'].upper() == 'RERUN':\n                DEFAULT_ACTION = RERUN\n            else:\n                pbs.logmsg(\n                    pbs.LOG_WARNING, '%s;%s;[ERROR] ' % (hook_name, job.id) +\n                    'DEFAULT_ACTION in %s.ini must be one ' % (hook_name) +\n                    'of DELETE or RERUN.')\n        if 'TORQUE_COMPAT' in list(config.keys()):\n            TORQUE_COMPAT = config['TORQUE_COMPAT'].lower()[0] in ['t', '1']\n\n    # Skip sister mom if parallel pelogs aren't enabled.\n    if not ENABLE_PARALLEL and not job.in_ms_mom():\n        pbs_event.accept()\n\n    # Prologues and epilogues have different arguments\n    if pbs_event.type == pbs.EXECJOB_PROLOGUE:\n        event = 'prologue'\n        args = [\n            job.id,                         # argv[1]\n            job.euser,                      # argv[2]\n            job.egroup                      # argv[3]\n        ]\n        if TORQUE_COMPAT:\n            args.extend([\n                job.Job_Name,               # argv[4]\n                job.Resource_List,          # argv[5]\n                job.queue.name,             # 
argv[6]\n                job.Account_Name or ''      # argv[7]\n            ])\n    elif pbs_event.type == pbs.EXECJOB_EPILOGUE:\n        null = 'null' if not TORQUE_COMPAT else ''\n        event = 'epilogue'\n        args = [\n            job.id,                         # argv[1]\n            job.euser,                      # argv[2]\n            job.egroup,                     # argv[3]\n            job.Job_Name,                   # argv[4]\n            job.session_id,                 # argv[5]\n            job.Resource_List,              # argv[6]\n            job.resources_used,             # argv[7]\n            job.queue.name,                 # argv[8]\n            job.Account_Name or null,       # argv[9]\n            job.Exit_status                 # argv[10]\n        ]\n    else:  # hook has wrong events added\n        pbs.logmsg(\n            pbs.LOG_WARNING,\n            '%s;%s;[ERROR] PBS event type %s not supported in this hook.' %\n            (hook_name, job.id, pbs_event.type))\n        pbs_event.accept()\n\n    # Handle empty arguments\n    args = [str(a) if (a or a == 0) else '' for a in args]\n\n    if DEBUG:\n        pbs.logmsg(pbs.LOG_DEBUG,\n                   '%s;%s;[DEBUG] %s event triggered.' %\n                   (hook_name, job.id, event))\n\n    if DEBUG:\n        pbs.logmsg(pbs.LOG_DEBUG, '%s;%s;[DEBUG3] args=%s' %\n                   (hook_name, job.id, repr(args)))\n\n    # execjob_prologue and execjob_epilogue hooks can run on all nodes, so use\n    # pprologue/pepilogue if available and not on primary execution node.\n    p = '' if job.in_ms_mom() else 'p'\n\n    if DEBUG:\n        pbs.logmsg(pbs.LOG_DEBUG, '%s;%s;[DEBUG] %s.' 
%\n                   (pbs_event.hook_name,\n                    job.id,\n                    'in sister mom' if p else 'in mother superior'))\n\n    script = os.path.join(mom_priv, p + event)\n\n    if sys.platform == 'win32':\n        script = script + '.bat'\n\n    if DEBUG:\n        pbs.logmsg(pbs.EVENT_DEBUG3, '%s;%s;[DEBUG3] script set to %s.' % (\n            pbs_event.hook_name, job.id, script))\n\n    correct_permissions = False\n    if not script:\n        pbs_event.accept()\n\n    if not os.path.exists(script):\n        pbs_event.accept()\n\n    if sys.platform == 'win32':\n        # Windows support is currently not implemented.\n        pbs.logmsg(pbs.LOG_WARNING,\n                   '%s;%s;[ERROR] ' % (hook_name, job.id) +\n                   'Classic prologues and epilogues on Windows are not ' +\n                   'currently implemented in this hook.')\n        pbs_event.accept()\n\n    else:\n        try:\n            struct_stat = os.stat(script)\n        except OSError:\n            rejectjob('Could not stat the %s script (%s).' %\n                      (event, script), RERUN)\n\n        # Mask with 0o522 (r-x-w--w-): the owner must have read and execute,\n        # and no one else may have write. Permissions such as 0777 masked by\n        # 0o522 will return 0o522. Acceptable permissions will return 0o500.\n        correct_permissions = bool(struct_stat.st_mode & 0o522 == 0o500 and\n                                   struct_stat.st_uid == 0)\n\n    if correct_permissions:\n        import signal\n        import subprocess\n        import shlex\n\n        # Correction for subprocess SIGPIPE handling courtesy of Colin Watson:\n        # http://www.chiark.greenend.org.uk/~cjwatson/blog/python-sigpipe.html\n        def subprocess_setup():\n            \"\"\"subprocess_setup corrects a known bug where python installs a\n            SIGPIPE handler by default. 
This is usually not what non-Python\n            subprocesses expect\"\"\"\n            signal.signal(signal.SIGPIPE, signal.SIG_DFL)\n\n        if DEBUG:\n            pbs.logmsg(\n                pbs.EVENT_DEBUG2,\n                '%s;%s;[DEBUG2] script %s has appropriate permissions.' %\n                (hook_name, job.id, script))\n\n        # change to the correct working directory (PBS_HOME):\n        os.chdir(pbs_conf()['PBS_MOM_HOME'])\n\n        # add PBS_JOBDIR environment variable, accounting for empty job.jobdir\n        os.environ['PBS_JOBDIR'] = job.jobdir or ''\n\n        shell = \"\"\n        if sys.platform == 'win32':  # win32 is _always_ cmd\n            shell = \"cmd /c\"\n        else:\n            # check the script for the interpreter line\n            shebang = open(script, 'r').readline().strip().split('#!')\n            if len(shebang) == 2:\n                shell = shebang[1].split()[0]\n                if not os.path.exists(shell):\n                    rejectjob(\n                        'Interpreter specified in %s (%s) does not exist.' %\n                        (p + event, shell),\n                        RERUN)\n            else:\n                rejectjob(\n                    'No interpreter specified in %s.' %\n                    (p + event), RERUN)\n\n        if DEBUG:\n            pbs.logmsg(pbs.EVENT_DEBUG2,\n                       '%s;%s;[DEBUG2] interpreter set to \"%s\".' %\n                       (hook_name, job.id, shell))\n\n        pbs.logmsg(pbs.LOG_DEBUG, '%s;%s;running %s.' %\n                   (hook_name, job.id, p + event))\n\n        # We perform a shlex.split to make sure we capture any #! 
arguments\n        cmd = shlex.split('%s %s' % (shell, script))\n        cmd.extend(args)\n\n        if DEBUG:\n            pbs.logmsg(\n                pbs.EVENT_DEBUG3, '%s;%s;[DEBUG3] cmd=%s' %\n                (hook_name, job.id, repr(cmd)))\n\n        if str(job.Join_Path) in ['oe', 'eo']:\n            proc = subprocess.Popen(\n                cmd,\n                stdout=subprocess.PIPE,\n                stderr=subprocess.STDOUT,\n                preexec_fn=subprocess_setup)\n        else:\n            proc = subprocess.Popen(\n                cmd,\n                stdout=subprocess.PIPE,\n                stderr=subprocess.PIPE,\n                preexec_fn=subprocess_setup)\n\n        # Wait for the script to gracefully exit.\n        while time.time() < start_time + hook_alarm - 5:\n            if proc.poll() is not None:\n                break\n            time.sleep(1)\n\n        # If we reach the alarm time - 5 seconds, send a SIGTERM\n        if proc.poll() is None:\n            pbs.logmsg(\n                pbs.LOG_WARNING,\n                '%s;%s;[WARNING] Terminating %s after %s seconds' %\n                (hook_name, job.id, event, int(time.time() - start_time)))\n            os.kill(proc.pid, signal.SIGTERM)\n            while time.time() < start_time + hook_alarm - 3:\n                if proc.poll() is not None:\n                    break\n                time.sleep(0.5)\n\n        # If we reach an alarm time - 3 seconds, send a SIGKILL\n        if proc.poll() is None:\n            pbs.logmsg(\n                pbs.LOG_WARNING,\n                '%s;%s;[WARNING] Killing %s after %s seconds' %\n                (hook_name, job.id, event, int(time.time() - start_time)))\n            os.kill(proc.pid, signal.SIGKILL)\n            while time.time() < start_time + hook_alarm - 1:\n                if proc.poll() is not None:\n                    break\n                time.sleep(0.5)\n\n        # If we still can't kill the script, log a warning and let pbs 
kill it\n        if proc.poll() is None:\n            pbs.logmsg(pbs.LOG_WARNING,\n                       '%s;%s;[WARNING] Unable to kill %s after %s seconds' %\n                       (hook_name, job.id, event, int(time.time() - start_time)))\n\n        # Get the stdout and stderr from the pelog\n        (o, e) = proc.communicate()\n\n        if DEBUG:\n            pbs.logmsg(\n                pbs.EVENT_DEBUG2,\n                '%s;%s;[DEBUG2]: stdout=%s, stderr=%s.' %\n                (hook_name, job.id, repr(o), repr(e)))\n\n        joblog = JobLog()\n        if o:\n            joblog.stdout(o)\n        if e:\n            joblog.stderr(e)\n\n        if proc.returncode:\n            return_action = RERUN\n            if event == 'prologue':\n                return_action = RERUN\n                if proc.returncode == 1:\n                    return_action = DELETE\n            elif event == 'epilogue':\n                return_action = DELETE\n                if proc.returncode == 2:\n                    return_action = RERUN\n\n            rejectjob(\n                '%s exited with a status of %s.' % (\n                    p + event, proc.returncode),\n                return_action)\n        else:\n            if DEBUG:\n                pbs.logmsg(pbs.LOG_DEBUG,\n                           '%s;%s;[DEBUG] %s exited with a status of 0.' %\n                           (hook_name, job.id, p + event))\n\n            if pbs_event.type == pbs.EXECJOB_PROLOGUE and VERBOSE_USER_OUTPUT:\n                print('%s: attached as primary execution host.' %\n                      pbs.get_local_nodename())\n\n            pbs_event.accept()\n    else:\n        rejectjob(\"The %s does not have the correct \" % (p + event) +\n                  'permissions. See the section entitled, ' +\n                  '\"Prologue and Epilogue Requirements\" in the PBS ' +\n                  \"Administrator's Guide.\", RERUN)\n\nexcept SystemExit:\n    pass\nexcept BaseException:\n    trace_hook()\n"
  },
  {
    "path": "src/unsupported/sgiICEplacement.sh",
    "content": "#!/bin/sh\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n. 
${PBS_CONF_FILE:-/etc/pbs.conf}\n\necho Setting placement set data ...\n${PBS_EXEC}/bin/pbsnodes -a | grep \"^[a-zA-Z]\" | while read node\ndo\n    if [ -n \"`echo $node | grep 'r[0-9][0-9]*i[0-9]n[0-9][0-9]*'`\" ]\n    then\n\tL1=`echo $node | sed -e \"s/\\(r[0-9][0-9]*i[0-9][0-9]*\\)n.*/\\1/\"`\n\tL2=`echo $node | sed -e \"s/\\(r[0-9][0-9]*\\)i[0-9][0-9]*n.*/\\1/\"`\n\techo \"  for $node as resources_available.router=\\\"${L1},${L2}\\\"\"\n\t${PBS_EXEC}/bin/qmgr -c \"s node $node resources_available.router=\\\"${L1},${L2}\\\"\"\n    else\n\techo \" \"\n\techo Node ${node} name is not in standard SGI naming convention,\n\techo no placement set created for ${node}\n\techo \" \"\n    fi\n\ndone\nexit 0\n"
  },
  {
    "path": "src/unsupported/sgiICEvnode.sh",
    "content": "#!/bin/sh\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n# writevnode - write cpuset information for a vnode\n#    $1 is the subdirectory (node0, node1, ...) 
under SYSDIRNODE\n#    $2 is the vnode name of the form \"host[n]\" for multiple nodes\n#       or \"host\" for a single node\n#    $3 is the node number, 0 to n-1\n#\nwritevnode ()\n{\n        numcpus=`ls -1d ${SYSDIRNODE}/${1}/cpu* | grep cpu\\[0-9\\] | wc -l`\n\tnumcpus=`echo $numcpus | sed -e \"s/^ *//\"`\n        cpustr=`ls -1d ${SYSDIRNODE}/${1}/cpu* | grep cpu\\[0-9\\] | \\\n                sed -e \"s/^.*cpu\\([0-9]*\\)/\\1/\" | sort -n`\n        cpustr=`echo $cpustr | sed -e \"s/ /,/g\"`\n\techo \"${2}: resources_available.ncpus = $numcpus\"\n\techo \"${2}: cpus = $cpustr\"\n\tamtmem=`grep MemTotal ${SYSDIRNODE}/${1}/meminfo | sed -e \"s/^.*: *//\"`\n\tamtmem=`echo $amtmem | sed -e \"s/ .*$//\"`\n\techo \"${2}: resources_available.mem = ${amtmem}kb\"\n\techo \"${2}: mems = $3\"\n}\n\nhost=`/bin/hostname | sed -e 's/\\..*//'`\n\necho \"\\$configversion 2\"\necho \"$host: pnames = router\"\n\nif [ -n \"$1\" ]\nthen\n    if [ \"$1\" = \"cpuset\" ]\n    then\n\n#\tcreate and write cpuset related information\n\n\tSYSDIRNODE=\"/sys/devices/system/node\"\n\tNODECT=`ls -1d $SYSDIRNODE/node[0-9]* | wc -l`\n\tif [ $NODECT -eq 1 ] ; then\n            echo \"$host: sharing = default_excl\"\n            writevnode \"node0\" $host \"0\"\n\telse\n            echo \"$host: sharing = ignore_excl\"\n            echo \"$host: resources_available.ncpus = 0\"\n            echo \"$host: resources_available.mem = 0\"\n\t    JN=0\n            while [ $JN -lt $NODECT ] ; do\n                echo \"${host}[${JN}]: sharing = default_excl\"\n                writevnode \"node${JN}\" \"${host}[${JN}]\" ${JN}\n                JN=`expr $JN + 1`\n            done\n        fi\n    else\n\techo invalid argument to script $0\n    fi\nfi\n\nexit 0\n"
  },
  {
    "path": "src/unsupported/sgigenvnodelist.awk",
    "content": "# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n#\n#\tUsage:\n#\n#\t\tawk -f ThisScript [ -v type=m ] [ inputfile ]\n#\t\tawk -f ThisScript [ -v type=q ] [ inputfile ]\n#\n#\tIf no input file is given, the value of topology_file (below) is used.\n#\n#\tThis script is designed to consume the Altix ProPack 4+ system topology\n#\tfile and emit placement set information in one of two forms, one for\n#\tconsumption by pbs_mom and the other in qmgr form.  The form of output\n#\tis controlled by the command-line variable \"type\", whose value should\n#\tbe either 'm' (for the pbs_mom form) or 'q' (for the qmgr form).\n#\n#\tFor the pbs_mom form, we first emit a prologue describing the types of\n#\tplacement sets and any global options.  This prologue uses the special\n#\tvnode ID of NODE and the special attribute name \"pnames\":\n#\n#\t\tNODE:\tpnames = TYPE1 [, TYPE2 ...]\n#\n#\tthen a list of the various placement sets by node\n#\n#\t\tNODE[N]:\tNAME = ps1 [ , ps2 ... ]\n#\n#\twhere NODE is derived from the host's FQDN by dropping the domain\n#\tqualifier(s), N is a node number, NAME is the name of a resource\n#\t(e.g. \"cbrick\" or \"router\"), and ps1, ... is a list of placement\n#\tset names.\n#\n#\tThis script currently computes only one type of placement set\n#\t(\"router\").  It does this by associating with each router that\n#\tis directly connected to a node the list of node IDs.  
Then, for\n#\teach router that is two hops removed from a node, we associate\n#\tthe union of the lists of nodes of each directly-connected router.\n#\tThe expanding ring computation goes on until we either run out of\n#\trouters or have failed to make progress.\n#\n#\tThe qmgr form of output is a series of lines like this:\n#\n#\t\tset NAME[N] resources_available.TYPE = \"ps1 [ , ps2 ... ]\"\n#\n#\twhere NAME is the host's name, N is a node number, TYPE is the name\n#\tof the resource type (e.g. \"cbrick\" or \"router\", as above), and ps1,\n#\t... is a list of placement set names.\n#\n#\tAdditionally the script emits generic vnode information (vnode ID,\n#\tnumber of CPUs, amount of memory) for consumption by pbs_mom in this\n#\tform\n#\n#\t\tNODE[ID]:\tcpus = <CPUlist>\n#\t\tNODE[ID]:\tresources_available.ncpus = X\n#\t\tNODE[ID]:\tresources_available.mem = Y\n#\n#\twhere NODE is derived from the host's FQDN by dropping the domain\n#\tqualifier(s), ID is a vnode ID, X is the number of CPUs belonging\n#\tto the vnode with this ID, and Y is the amount of memory (in KB).\n#\t<CPUlist> is a list of CPUs in the given NODE[ID].\n#\n#\tNote:  the list of vnodes is culled to ensure that it excludes CPUs\n#\tthat belong to CPU sets not claimed by PBS.\n\nBEGIN {\n\t# Sort cpus, mems, and vnodes numerically\n\tPROCINFO[\"sorted_in\"] = \"@ind_num_asc\";\n\n\tdeftype = \"m\";\t\t\t#\tby default, output for pbs_mom\n\texitval = 0;\t\t\t#\tused to elide END actions\n\tlistsep = \", \"\n\tncpus = 0;\n\tnnodes = 0;\n\tnpnames = 0;\n\tnnumalinks = 0;\n\tnrouters = 0;\n\n\tptype = \"router\";\t\t#\tplacement set type\n\tpshort = \"R\";\t\t\t#\tshorthand used in resource value\n\t\t\t\t\t#\t(later modified by prepending\n\t\t\t\t\t#\tnodename to uniquify values\n\n\t#\tcommand to find \"cpus\" and \"mems\" files that do not belong to\n\t#\tPBS (CPU sets that are not the root, and not under /PBSPro)\n\t#\tin the \"cpuset\" file system\n\tfindcmd = \"find /dev/cpuset -name cpus -o 
-name cpuset.cpus \\\n\t\t-o -name mems -o -name cpuset.mems | \\\n\t\tegrep -v -e '/dev/cpuset/cpus$' -e '/dev/cpuset/cpuset.cpus$' \\\n\t\t-e '/dev/cpuset/mems$' -e '/dev/cpuset/cpuset.mems$' \\\n\t\t-e '/PBSPro/'\";\n\n\ttopology_version_min = 1;\t#\tthe versions we understand\n\ttopology_version_max = 2;\t#\tthe versions we understand\n\tUVtopology_version_min = 1;\t#\tthe versions we understand\n\tUVtopology_version_max = 1;\t#\tthe versions we understand\n\ttopology_file = \"/proc/sgi_sn/sn_topology\"\n\tUVtopology_file = \"/proc/sgi_uv/topology\"\n\n\t#\toverride standard input default if no input file is given\n\tif (ARGC == 1) {\n\t\tif ((getline < topology_file) > 0) {\n\t\t\tARGV[ARGC++] = topology_file;\n\t\t\tclose(topology_file);\n\t\t} else if ((getline < UVtopology_file) > 0) {\n\t\t\tARGV[ARGC++] = UVtopology_file;\n\t\t\tclose(UVtopology_file);\n\t\t}\n\t\tif (ARGC == 1) {\n\t\t\tprintf(\"no input files given, no known topology files found\\n\");\n\t\t\texitval = 1;\n\t\t\texit(1);\n\t\t}\n\t}\n\n\t\"uname -n | sed -e 's/\\\\..*//'\" | getline nodename\n\tclose(\"uname -n | sed -e 's/\\\\..*//'\");\n\tpshort = nodename \"-\" pshort;\t#\tshorthand is now \"H-RN\" where\n\t\t\t\t\t#\tH is the unFQDNed host name and\n\t\t\t\t\t#\tN is a nonnegative integer\n\n\t#\tmake two lists, excludedCPUs[] and excludedmems[],\n\t#\tof all CPUs and memory boards found in CPU sets\n\t#\tthat do not belong to PBS\n\twhile ((findcmd | getline cpumemfile) > 0)\n\t\tread_excludelist(cpumemfile);\n\tclose(findcmd);\n\n\tif (type == \"\")\n\t\ttype = deftype;\n\telse if ((type != \"m\") && (type != \"q\")) {\n\t\tprintf(\"type should be one of 'm' or 'q'\\n\");\n\t\texitval = 1;\n\t\texit(1);\n\t}\n}\n\n#\tcheck for supported version(s)\n$2 == \"sn_topology\" && $3 == \"version\" {\n\tis_UV = 0\n\ttopology_version = $4;\n\tverscheck($4, topology_version_min, topology_version_max)\n\tif (debug)\n\t\tprintf(\"SN topology file (version %d)\\n\", 
topology_version);\n}\n\n#\tcheck for supported version(s) for UV\n$2 == \"uv_topology\" && $3 == \"version\" {\n\tis_UV = 1\n\ttopology_version = $4;\n\tverscheck($4, UVtopology_version_min, UVtopology_version_max)\n\tif (debug)\n\t\tprintf(\"UV topology file (version %d)\\n\", topology_version);\n}\n\n#\tcpu 0 001c05#0a local freq 1500MHz, arch ia64, dist 10:10:45:...\n#\tcpu 000 r001i01b00#00_00-000 local freq 2666MHz, arch UV , dist 10:10:10:...\n/^cpu[[:space:]]+[[:digit:]]+/ {\n\tcpuid[int($2)] = $3;\n\tif (debug)\n\t\tprintf(\"cpu[%d]:  id %s\\n\", ncpus, $3);\n\n\tif (is_UV == 0) {\n\t\t#\tEven though the sn_topology version stays the same, the\n\t\t#\tdistance vector may appear in multiple places within the\n\t\t#\tline.\n\t\tif ($9 == \"dist\") {\n\t\t\tsplit($10, tmp, \":\");\n\t\t\tdo_cpudist = 1;\n\t\t} else if ($13 == \"dist\") {\n\t\t\tsplit($14, tmp, \":\");\n\t\t\tdo_cpudist = 1;\n\t\t} else\n\t\t\tdo_cpudist = 0;\n\t} else {\n\t\tif ($10 == \"dist\") {\n\t\t\tsplit($11, tmp, \":\");\n\t\t\tdo_cpudist = 1;\n\t\t} else\n\t\t\tdo_cpudist = 0;\n\t}\n\n\tif (do_cpudist)\n\t\tfor (tmpindex in tmp)\n\t\t\tcpudist[int($2), tmpindex - 1] = tmp[tmpindex];\n\tncpus++;\n}\n\n#\tnode 0 001c05#0 local asic SHub_1.2, nasid 0x0, dist 10:45:...\n/^node[[:space:]]+[[:digit:]]+/ {\n\tnodeid[$2] = $3;\n\tnodenums[$3] = $2;\n\tif (debug)\n\t\tprintf(\"node[%d]:  id %s\\n\", nnodes, $3);\n\n\t#\tEven though the sn_topology version stays the same, the\n\t#\tdistance vector may appear in multiple places within the\n\t#\tline.\n\tif ($9 == \"dist\") {\n\t\tsplit($10, tmp, \":\");\n\t\tdo_nodedist = 1;\n\t} else if ($13 == \"dist\") {\n\t\tsplit($14, tmp, \":\");\n\t\tdo_nodedist = 1;\n\t} else\n\t\tdo_nodedist = 0;\n\n\tif (do_nodedist)\n\t\tfor (tmpindex in tmp) {\n\t\t\t#\tDoes the distance list terminate the line, or\n\t\t\t#\tis it followed by \", near_mem_nodeid ...\"?  
In\n\t\t\t#\tthe latter case, remove any possible trailing\n\t\t\t#\t',' from the distance number.\n\t\t\tif (topology_version == 2)\n\t\t\t\tsub(\",$\", \"\", tmp[tmpindex]);\n\t\t\tnodedist[$2, tmpindex - 1] = tmp[tmpindex];\n\t\t}\n\tnnodes++;\n}\n\n#\tInfo from SGI:\n#\tA \"local\" NUMAlink connection connects to another device that resides\n#\tin the same partition as the source.\n#\tA \"foreign\" NUMAlink connection terminates to another device that is\n#\tin a non-local partition.\n#\tA \"shared\" connection terminates on a device that is shared between\n#\tseveral partitions (always routers presently.)\n#\n#\tnumalink 1 001c05#0-1 local endpoint 001c05#5-1, protocol LLP4\n#\t\t\t^\t\t\t    ^\n#\t\t\t|\t\t\t    |\n#\t\t\t-- stored in\t\t    -- stored in\n#\t\t\t   numalinkremoteid[]  \t       numalinklocalid[]\n#\n/^numalink[[:space:]]+[[:digit:]]+/ {\n\tif ($6 == \"disconnected,\")\n\t\tnext;\n\tif (($4 == \"foreign\") || ($4 == \"local\") || ($4 == \"shared\")) {\n\t\tsub(\",$\", \"\", $6);\n\t\tnumalinklocalid[$2] = $3;\n\t\tnumalinkremoteid[$2] = $6;\n\t\tridtmp = NUMAlink2router($3);\n\n\t\t#\tnrouterconnections[] is an array (indexed by router ID,\n\t\t#\tridtmp) that records the number of routers connected to\n\t\t#\tthe router with ID ridtmp.  
routerconnections[] holds\n\t\t#\tthe names of those routers.\n\t\tif (!(ridtmp in nrouterconnections))\n\t\t\tnrouterconnections[ridtmp] = 0;\n\t\trouterconnections[ridtmp, nrouterconnections[ridtmp]] = NUMAlink2router($6);\n\t\tnrouterconnections[ridtmp]++;\n\t\tif (debug)\n\t\t\tprintf(\"NUMAlink %d:  %s -> %s\\n\", nnumalinks,\n\t\t\t       numalinklocalid[$2], numalinkremoteid[$2]);\n\t}\n\tnnumalinks++;\n}\n\n#\trouter 0 001c05#4 local asic NL4Router\n#\trouter 1 001c21#5 shared asic NL4Router\n/^router[[:space:]]+[[:digit:]]+/ {\n\trouternum[$3] = $2;\n\trouterid[$2] = $3;\n\tif (debug)\n\t\tprintf(\"router[%d]:  id %s\\n\", nrouters, $3);\n\tnrouters++;\n}\n\n#\tread a list of \"cpus\" and \"mems\" files from the \"cpuset\" file system\n#\tand remember those CPUs and memory board numbers found there.  Later\n#\t(in printmompsdefs() and printqmgrpsdefs()) any vnode containing an\n#\texcluded CPU or memory board will not appear in the resulting placement\n#\tdefinitions.\nfunction read_excludelist(listfile)\n{\n\tif (debug)\n\t\tprintf(\"read_excludelist:  listfile %s\\n\", listfile);\n\tif (listfile ~ \"/cpus$\" || listfile ~ \"/cpuset.cpus$\") {\n\t\twhile ((getline cpumemlist < listfile) > 0)\n\t\t\tparse_CPUs(cpumemlist);\n\t\tclose(listfile);\n\t} else if (listfile ~ \"/mems$\" || listfile ~ \"/cpuset.mems$\") {\n\t\twhile ((getline cpumemlist < listfile) > 0)\n\t\t\tparse_mems(cpumemlist);\n\t\tclose(listfile);\n\t} else {\n\t\tprintf(\"read_excludelist:  not reading from cpus or mems\\n\");\n\t\texit (1);\n\t}\n}\n\n#\tbreak list at ',' characters, if any, then handle lists of\n#\tthe form M or \"M-N\"\nfunction parse_CPUs(cpulist,\t\tcpusublist, rangenum)\n{\n\tsplit(cpulist, cpusublist, \",\");\n\tfor (rangenum in cpusublist)\n\t\texclude_range(cpusublist[rangenum], \"cpus\");\n}\n\n\n#\tbreak list at ',' characters, if any, then handle lists of\n#\tthe form M or \"M-N\"\nfunction parse_mems(memlist,\t\tmemsublist, rangenum)\n{\n\t#\tbreak list at 
',' characters, if any, then handle lists of\n\t#\tthe form M or \"M-N\"\n\n\tsplit(memlist, memsublist, \",\");\n\tfor (rangenum in memsublist)\n\t\texclude_range(memsublist[rangenum], \"mems\");\n}\n\n#\tNote:  the counters nexcludedcpus and nexcludedmems are deliberately\n#\tglobal (not in the locals list) so that exclusions accumulate across\n#\tcalls rather than overwriting earlier entries.\nfunction exclude_range(range, excltype,\t\tnnums, nums, exclindex)\n{\n\tnnums = split(range, nums, \"-\");\n\n\tif (nnums == 1) {\n\t\tif (debug)\n\t\t\tprintf(\"exclude %s %d\\n\", excltype, nums[1]);\n\t\tnums[2] = nums[1];\n\t} else if (nnums == 2) {\n\t\tif (debug)\n\t\t\tprintf(\"exclude %s %d - %d\\n\", excltype,\n\t\t\t    nums[1], nums[2]);\n\t} else {\n\t\tprintf(\"exclude_range:  internal error - nnums (%d) > 2\\n\",\n\t\t    nnums);\n\t\texit (1);\n\t}\n\n\tif (excltype == \"cpus\")\n\t\tfor (exclindex = nums[1]; exclindex <= nums[2]; exclindex++)\n\t\t\texcludedCPUs[nexcludedcpus++] = exclindex;\n\telse\n\t\tfor (exclindex = nums[1]; exclindex <= nums[2]; exclindex++)\n\t\t\texcludedmems[nexcludedmems++] = exclindex;\n}\n\nfunction report_memory(nodenum,\t\tmeminfofile)\n{\n\t# use \"-v sysdir=/...\" to direct this script to an alternate /sys\n\tmeminfofile = sysdir \"/sys/devices/system/node/node\" nodenum \"/meminfo\";\n\n\tif (debug)\n\t\tprintf(\"report_memory(node %d, meminfofile %s)\\n\", nodenum,\n\t\t    meminfofile);\n\twhile ((getline meminfo < meminfofile) > 0)\n\t\tif (meminfo ~ /MemTotal/) {\n\t\t\tsub(\".*MemTotal:[[:space:]]*\", \"\", meminfo);\n\t\t\tsub(\"[[:space:]]*[kK][bB]$\", \"\", meminfo);\n\t\t\tclose(meminfofile);\n\t\t\treturn (meminfo);\n\t\t}\n\tclose(meminfofile);\n\n\tprintf(\"No memory information for node %d\\n\", nodenum) | \"cat 1>&2\";\n\treturn (-1);\n}\n\n#\temit the virtual partition list prologue, if any\nfunction momprologue(\t\t\tfirsttime,\n\t\t\t\t\tnocpusfmt, nomemfmt, novmemfmt,\n\t\t\t\t\tpnamefmt, sharedfmt, t, versionfmt)\n{\n\tversionfmt = \"$configversion 2\\n\";\t# same as CONFIG_VNODEVERS\n\tpnamefmt = \"%s:  pnames = %s\";\n\tsharedfmt = \"%s:  sharing = 
ignore_excl\\n\";\n\tnocpusfmt = \"%s:  resources_available.ncpus = 0\\n\";\n\tnomemfmt = \"%s:  resources_available.mem = 0\\n\";\n\tnovmemfmt = \"%s:  resources_available.vmem = 0\\n\";\n\n\tprintf(versionfmt);\n\n\tfirsttime = 1;\n\tfor (t in pnames) {\n\t\tif (firsttime == 1) {\n\t\t\tfirsttime = 0;\n\t\t\tprintf(pnamefmt, nodename, pnames[t]);\n\t\t} else {\n\t\t\tprintf(\",%s\", pnames[t]);\n\t\t}\n\t}\n\tif (npnames > 0)\n\t\tprintf(\"\\n\");\n\n\tprintf(sharedfmt, nodename);\n\tprintf(nocpusfmt, nodename);\n\tprintf(nomemfmt, nodename);\n\tprintf(novmemfmt, nodename);\n}\n\n#\temit the virtual partition list prologue, if any\nfunction qprologue(\t\t\tfirsttime,\n\t\t\t\t\tnocpusfmt, nomemfmt, novmemfmt,\n\t\t\t\t\tpnamefmt, sharedfmt, t)\n{\n\tpnamefmt = \"set node %s pnames = \\\"%s\"\n\tsharedfmt = \"set node %s sharing = ignore_excl\\n\";\n\tnocpusfmt = \"set node %s resources_available.ncpus = 0\\n\";\n\tnomemfmt = \"set node %s resources_available.mem = 0\\n\";\n\tnovmemfmt = \"set node %s resources_available.vmem = 0\\n\";\n\n\tfirsttime = 1;\n\tfor (t in pnames) {\n\t\tif (firsttime == 1) {\n\t\t\tfirsttime = 0;\n\t\t\tprintf(pnamefmt, nodename, pnames[t]);\n\t\t} else {\n\t\t\tprintf(\",%s\", pnames[t]);\n\t\t}\n\t}\n\tif (npnames > 0)\n\t\tprintf(\"\\\"\\n\");\n\n\tprintf(sharedfmt, nodename);\n\tprintf(nocpusfmt, nodename);\n\tprintf(nomemfmt, nodename);\n\tprintf(novmemfmt, nodename);\n}\n\n#\tfor debugging\n#\tconsistency checks:  distance should be symmetric\nfunction doconsistencychecks(\t\ti, j)\n{\n\tfor (i in cpuid)\n\t\tfor (j = 0; j < ncpus; j++)\n\t\t\tif (cpudist[i, j] != cpudist[j, i]) {\n\t\t\t\tprintf(\"cpudist[%d, %d] (%s) != \", i, j,\n\t\t\t\t    cpudist[i, j]);\n\t\t\t\tprintf(\"cpudist[%d, %d] (%s)\\n\", j, i,\n\t\t\t\t    cpudist[j, i]);\n\t\t\t}\n\tif (is_UV == 0)\n\t\tfor (i in nodeid) {\n\t\t\tfor (j = 0; j < nnodes; j++) {\n\t\t\t\tif (nodedist[i, j] != nodedist[j, i]) {\n\t\t\t\t\tprintf(\"nodedist[%s, %d] (%s) != \", i, 
j,\n\t\t\t\t\t    nodedist[i, j]);\n\t\t\t\t\tprintf(\"nodedist[%d, %s] (%s)\\n\", j, i,\n\t\t\t\t\t    nodedist[j, i]);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n}\n\n#\tfor debugging\nfunction dumpnodeinfo(\t\t\tcid, i, ix, j, UVnode)\n{\n\tfor (i in nodeid) {\n\t\tfor (j in cpuid) {\n\t\t\tif (is_UV) {\n\t\t\t\tsplit(cpuid[j], UVnode, \"_\");\n\t\t\t\tcid = UVnode[1];\n\t\t\t} else {\n\t\t\t\tcid = cpuid[j];\n\t\t\t}\n\t\t\t\t\n\t\t\tif ((nodeid[i] != \"\") && (cid != \"\")) {\n\t\t\t\tif ((ix = index(cid, nodeid[i])) == 1) {\n\t\t\t\t\tprintf(\"dumpnodeinfo:  cpuid[%s] = \\\"%s\\\", nodeid[%s] = \\\"%s\\\", index(cid, nodeid[i]) = %d\\n\", j, cpuid[j], i, nodeid[i], ix);\n\t\t\t\t\tprintf(\"node %s contains CPU[%s]\\n\", nodeid[i], j);\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n}\n\n#\tfor debugging\nfunction dumprouterinfo(\t\ti, j, k, rconn, rid)\n{\n\tfor (i in routerid) {\n\t\tfor (j in numalinklocalid) {\n\t\t\tif (index(numalinklocalid[j], routerid[i]) == 1) {\n\t\t\t\trid = numalinkremoteid[j];\n\t\t\t\tfor (k in nodeid) {\n\t\t\t\t\tif (index(rid, nodeid[k]) == 1) {\n\t\t\t\t\t\tprintf(\"router[%s] -> node %s\\n\",\n\t\t\t\t\t\t    i, nodeid[k]);\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\tfor (i = 0; i < nrouters; i++) {\n\t\trid = routerid[i];\n\t\tfor (j = 0; j < nrouterconnections[rid]; j++) {\n\t\t\trconn = routerconnections[rid, j];\n\t\t\tprintf(\"router %s connected to %s %s\\n\", rid,\n\t\t\t    (rconn in nodenums) ? 
\"node\" : \"router\", rconn);\n\t\t}\n\t}\n\n}\n\n#\tAdd a new placement set type, t, to the list of known types.\nfunction newpstype(t)\n{\n\tif (!(t in pnames)) {\n\t\tif (debug)\n\t\t\tprintf(\"newpstype:  \\\"%s\\\"\\n\", t);\n\t\tpnames[npnames++] = t \"\";\n\t}\n}\n\n#\tEmit vnode definitions for consumption by pbs_mom.\n#\tIn constructing vnode names, we are careful to ensure that the index\n#\t(in name[index]) is the same as the node number that appears in the\n#\tsn_topology file.\n#\n#\tIn constructing the values for the placement sets, we are also careful\n#\tto ensure that the index (in R[index]) is the same as the router number\n#\tin the sn_topology file.  We must also be sure when concatenating names\n#\tto form the a placement set value that we always concatenate in the same\n#\torder (that is, even though a human can tell that placement set \"ABC\" is\n#\tthe same as placement set \"BCA\", PBS regards them as different);  we\n#\tachieve this by always traversing the router list the same way.\nfunction printmompsdefs(\t\t\tcpusfmt, cpuspernode, i, j, id,\n\t\t\t\t\t\texclCPUfmt, exclmemfmt, extmp,\n\t\t\t\t\t\tfirsttime, \n\t\t\t\t\t\tmemfmt, ncpusfmt, psfmt,\n\t\t\t\t\t\tptmp, rid, vnodename)\n{\n\tpsfmt = \"%s:  resources_available.%s = %s\\n\";\n\tcpusfmt = \"%s:  cpus = %s\\n\";\n\tmemnodefmt = \"%s:  mems = %d\\n\";\n\tncpusfmt = \"%s:  resources_available.ncpus = %d\\n\";\n\tmemfmt = \"%s:  resources_available.mem = %dkb\\n\";\n\texclCPUfmt = \"deleting node[%d] (ID %s) - contains excluded CPU %d\\n\";\n\texclmemfmt = \"deleting node[%d] (ID %s) - contains excluded memory %d\\n\";\n\n\t#\tRefrain from reporting any vnode which contains an excluded CPU.\n\tfor (i in excludedCPUs) {\n\t\textmp = excludedCPUs[i];\n\t\tfor (j in nodeid)\n\t\t\tif (index(cpuid[extmp], nodeid[j]) == 1) {\n\t\t\t\tif (debug)\n\t\t\t\t\tprintf(exclCPUfmt, j, nodeid[j], extmp);\n\t\t\t\tdelete nodeid[j];\n\t\t\t\tnnodes--;\n\t\t\t}\n\t}\n\n\t#\tRefrain from reporting 
any vnode which contains an excluded\n\t#\tmemory board.  This relies on the fact that there is exactly\n\t#\tone memory board per topology file node.\n\tfor (i in excludedmems) {\n\t\textmp = excludedmems[i];\n\t\tif (nodeid[extmp] != \"\") {\n\t\t\tif (debug)\n\t\t\t\tprintf(exclmemfmt, extmp, nodeid[extmp], extmp);\n\t\t\tdelete nodeid[extmp];\n\t\t\tnnodes--;\n\t\t}\n\t}\n\n\tfor (i in nodeid) {\n\t\tif ((id = nodeid[i]) == \"\")\n\t\t\tcontinue;\n\n\t\t#\tMake sure that the vnode name we construct maps directly\n\t\t#\tto the node number in the sn_topology file.\n\t\tvnodename = nodename \"[\" nodenums[id] \"]\";\n\n\t\tprintf(\"%s:  sharing = default_excl\\n\", vnodename);\n\n\t\tcpuspernode = 0;\n\t\tvnodeCPUs = \"\";\n\t\tfor (j in cpuid)\n\t\t\tif (index(cpuid[j], id) == 1) {\n\t\t\t\tif (cpuspernode == 0)\n\t\t\t\t\tvnodeCPUs = j;\n\t\t\t\telse\n\t\t\t\t\tvnodeCPUs = vnodeCPUs \",\" j;\n\t\t\t\tcpuspernode++;\n\t\t\t}\n\n\t\tif (cpuspernode > 0) {\n\t\t\tprintf(ncpusfmt, vnodename, cpuspernode);\n\t\t\tprintf(cpusfmt, vnodename, vnodeCPUs);\n\t\t} else\n\t\t\tprintf(ncpusfmt, vnodename, 0);\n\n\t\tmeminfo = report_memory(i);\n\t\tif (meminfo >= 0) {\n\t\t\tprintf(memfmt, vnodename, meminfo);\n\t\t\tprintf(memnodefmt, vnodename, i)\n\t\t} else\n\t\t\tprintf(memfmt, vnodename, 0);\n\n\t\tif ((cpuspernode > 0) || (meminfo > 0)) {\n\t\t\tfirsttime = 1;\n\t\t\tptmp = \"\";\n\t\t\t#\tMake sure that the router names in the placement\n\t\t\t#\tset values map directly to the router numbers in\n\t\t\t#\tthe sn_topology file.\n\t\t\tfor (j = 0; j < nrouters; j++) {\n\t\t\t\trid = routerid[j];\n\t\t\t\tif (index(nodesof[rid], id))\n\t\t\t\t\tif (firsttime) {\n\t\t\t\t\t\tfirsttime = 0;\n\t\t\t\t\t\tptmp = pshort routernum[rid];\n\t\t\t\t\t} else\n\t\t\t\t\t\tptmp = ptmp \",\" pshort routernum[rid];\n\t\t\t}\n\t\t\tif (ptmp != \"\") {\n\t\t\t\t#\tadd a value for the whole machine\n\t\t\t\tptmp = ptmp \",\" nodename;\n\t\t\t\tprintf(psfmt, vnodename, ptype, 
ptmp);\n\t\t\t}\n\t\t}\n\t}\n}\n\n#\tEmit vnode definitions for consumption by qmgr.\n#\tIn constructing vnode names, we are careful to ensure that the index\n#\t(in name[index]) is the same as the node number that appears in the\n#\tsn_topology file.\n#\n#\tIn constructing the values for the placement sets, we are also careful\n#\tto ensure that the index (in R[index]) is the same as the router number\n#\tin the sn_topology file.  We must also be sure when concatenating names\n#\tto form the a placement set value that we always concatenate in the same\n#\torder (that is, even though a human can tell that placement set \"ABC\" is\n#\tthe same as placement set \"BCA\", PBS regards them as different);  we\n#\tachieve this by always traversing the router list the same way.\nfunction printqmgrpsdefs(\t\ti, j, id, cpuspernode,\n\t\t\t\t\texclCPUfmt, exclmemfmt, extmp,\n\t\t\t\t\tfirsttime,\n\t\t\t\t\tmemfmt, ncpusfmt, psfmt,\n\t\t\t\t\tptmp, rid, vnodename)\n{\n\tncpusfmt = \"set node %s resources_available.ncpus = %d\\n\";\n\tmemfmt = \"set node %s resources_available.mem = %dkb\\n\";\n\tpsfmt = \"set node %s resources_available.%s = %s\\n\";\n\texclCPUfmt = \"deleting node[%d] (ID %s) - contains excluded CPU %d\\n\";\n\texclmemfmt = \"deleting node[%d] (ID %s) - contains excluded memory %d\\n\";\n\n\t#\tRefrain from reporting any vnode which contains an excluded CPU.\n\tfor (i in excludedCPUs) {\n\t\textmp = excludedCPUs[i];\n\t\tfor (j in nodeid)\n\t\t\tif (index(cpuid[extmp], nodeid[j]) == 1) {\n\t\t\t\tif (debug)\n\t\t\t\t\tprintf(exclCPUfmt, j, nodeid[j], extmp);\n\t\t\t\tdelete nodeid[j];\n\t\t\t\tnnodes--;\n\t\t\t}\n\t}\n\n\t#\tRefrain from reporting any vnode which contains an excluded\n\t#\tmemory board.\n\tfor (i in excludedmems) {\n\t\textmp = excludedmems[i];\n\t\tif (nodeid[extmp] != \"\") {\n\t\t\tif (debug)\n\t\t\t\tprintf(exclmemfmt, extmp, nodeid[extmp], extmp);\n\t\t\tdelete nodeid[extmp];\n\t\t\tnnodes--;\n\t\t}\n\t}\n\n\tfor (i in nodeid) {\n\t\tif 
((id = nodeid[i]) == \"\")\n\t\t\tcontinue;\n\n\t\t#\tMake sure that the vnode name we construct maps directly\n\t\t#\tto the node number in the sn_topology file.\n\t\tvnodename = nodename \"[\" nodenums[id] \"]\";\n\n\t\tprintf(\"set node %s sharing = default_excl\\n\",\n\t\t    vnodename);\n\n\t\tcpuspernode = 0;\n\t\tvnodeCPUs = \"\";\n\t\tfor (j in cpuid)\n\t\t\tif (index(cpuid[j], id) == 1) {\n\t\t\t\tif (cpuspernode == 0)\n\t\t\t\t\tvnodeCPUs = j;\n\t\t\t\telse\n\t\t\t\t\tvnodeCPUs = vnodeCPUs \",\" j;\n\t\t\t\tcpuspernode++;\n\t\t\t}\n\n\t\tif (cpuspernode > 0) {\n\t\t\tprintf(ncpusfmt, vnodename, cpuspernode);\n\n\t\t\tmeminfo = report_memory(i);\n\t\t\tif (meminfo >= 0)\n\t\t\t\tprintf(memfmt, vnodename, meminfo);\n\t\t\tfirsttime = 1;\n\t\t\tptmp = \"\";\n\t\t\t#\tMake sure that the router names in the placement\n\t\t\t#\tset values map directly to the router numbers in\n\t\t\t#\tthe sn_topology file.\n\t\t\tfor (j = 0; j < nrouters; j++) {\n\t\t\t\trid = routerid[j];\n\t\t\t\tif (index(nodesof[rid], id))\n\t\t\t\t\tif (firsttime) {\n\t\t\t\t\t\tfirsttime = 0;\n\t\t\t\t\t\tptmp = pshort routernum[rid];\n\t\t\t\t\t} else\n\t\t\t\t\t\tptmp = ptmp \",\" pshort routernum[rid];\n\t\t\t}\n\t\t\tif (ptmp != \"\") {\n\t\t\t\t#\tadd a value for the whole machine\n\t\t\t\tptmp = ptmp \",\" nodename;\n\t\t\t\tprintf(psfmt, vnodename, ptype, ptmp);\n\t\t\t}\n\t\t}\n\t}\n}\n\n#\tThese are functions to convert between various IDs (CPU, node, NUMAlink,\n#\trouter).  
The IDs follow a pattern illustrated by this excerpt\n#\n#\t\tcpu 1 001c01^3#0a local freq 1596MHz, arch ia64, ...\n#\t\tnode 1 001c01^3#0 local asic SHub_2.0, nasid 0x2, ...\n#\t\tnumalink 4 001.01^10#0-2 local endpoint 001c01^3#0-0, ...\n#\t\trouter 0 001.01^10#0 local asic NL4Router\n#\t\tnumalink 14 001.01^11#1-4 local endpoint 001c01^3#0-1, ...\n#\t\trouter 1 001.01^11#1 local asic NL4Router\n#\n#\tfrom an sn_topology file.\n#\tA node ID is derived from a CPU ID by dropping the last character:\nfunction CPU2node(id,\t\t\t\ttmpid)\n{\n\ttmpid = id;\n\tsub(/.$/, \"\", tmpid)\n\n\treturn (tmpid);\n}\n\n#\tA router ID is derived from a NUMAlink ID by dropping the trailing\n#\t\"-[[:digit:]]\":\nfunction NUMAlink2router(id,\t\t\ttmpid)\n{\n\ttmpid = id;\n\tsub(/-[[:digit:]]$/, \"\", tmpid)\n\n\treturn (tmpid);\n}\n\n#\tFind routers h hops removed from a node.  We depend on having already\n#\tcomputed the list of routers that are less than h hops removed, in\n#\twhich case any router whose hop count is still not known and which is\n#\tone hop removed from a router with hop count h-1 must have hop count h.\n#\tThe function returns the number of new assignments done.\nfunction nexthop(h,\t\t\t\ti, j, ndone, rid, rtmp)\n{\n\tndone = 0;\n\tfor (i = 0; i < nrouters; i++) {\n\t\trid = routerid[i];\n\t\tif (hops[rid] != -1)\n\t\t\tcontinue;\n\t\tfor (j = 0; j < nrouterconnections[rid]; j++) {\n\t\t\trtmp = routerconnections[rid, j];\n\t\t\tif ((rtmp in hops) && (hops[rtmp] == (h - 1))) {\n\t\t\t\thops[rid] = h;\n\t\t\t\tndone++;\n\t\t\t\tbreak;\n\t\t\t}\n\t\t}\n\t}\n\n\treturn (ndone);\n}\n\n#\tUse an expanding ring to assign hop counts (number of hops removed from\n#\ta node) to routers, and return the maximum possible hop count.\nfunction genhops(\t\t\t\tcurhop, i, j, nroutersleft,\n\t\t\t\t\t\tprogress, rid)\n{\n\tcurhop = 1;\n\tnroutersleft = nrouters;\n\n\t#\tThe first pass is special since we care only about routers that\n\t#\tare directly connected to 
nodes.\n\tfor (i = 0; i < nrouters; i++) {\n\t\trid = routerid[i];\n\t\tprogress = 0;\n\t\tfor (j = 0; j < nrouterconnections[rid]; j++)\n\t\t\tif (routerconnections[rid, j] in nodenums) {\n\t\t\t\thops[rid] = curhop;\n\t\t\t\tnroutersleft--;\n\t\t\t\tprogress = 1;\n\t\t\t\tbreak;\n\t\t\t}\n\t\tif (progress == 0)\n\t\t\thops[rid] = -1;\n\t}\n\n\t#\tNow derive the count for the rest of the routers, one hop at a\n\t#\ttime.  As a safety measure, we terminate the loop if we have\n\t#\tmade no progress (found no routers with a given hop count).\n\tdo {\n\t\tcurhop++;\n\t\tprogress = nexthop(curhop);\n\t\tnroutersleft -= progress;\n\t} while ((progress > 0) && (nroutersleft > 0));\n\n\tif (debug == 1)\n\t\tfor (i = 0; i < nrouters; i++) {\n\t\t\trid = routerid[i];\n\t\t\tprintf(\"router %s:  hop %d\\n\", rid, hops[rid]);\n\t\t}\n\n\treturn (curhop);\n}\n\n#\tThis function generates placement sets for each router based on\n#\tthe nodes to which the router is connected (through one or more\n#\thops).  For each router, we build a list (nodesof[router ID]),\n#\twhich is a concatenation of the node IDs for each node (if any)\n#\tto which the router is directly connected.  
For subsequent hops\n#\tH (2 through nhops), the list of nodes for each new router, R,\n#\tis formed by concatenating the lists for every router directly\n#\tconnected to R whose hop count is less than H.\nfunction genps(nhops,\t\t\t\tcurhop, i, j, nid, rconn, rid)\n{\n\tcurhop = 1;\n\n\t#\tThe first pass is special since we care only about routers that\n\t#\tare directly connected to nodes.\n\tnewpstype(ptype);\n\tfor (i = 0; i < nrouters; i++) {\n\t\trid = routerid[i];\n\t\tif (hops[rid] == curhop) {\n\t\t\tnodesof[rid] = \"\";\n\t\t\tfor (j = 0; j < nrouterconnections[rid]; j++) {\n\t\t\t\trconn = routerconnections[rid, j];\n\t\t\t\tif (is_UV) {\n\t\t\t\t\t#\tnid[1] will be the router ID\n\t\t\t\t\t#\t(currently dysfunctional)\n\t\t\t\t\tsplit(j, nid, \"#\");\n\t\t\t\t\tif (rid == nid[1])\n\t\t\t\t\t\tnodesof[rid] = nodesof[rid] \"\" rconn;\n\t\t\t\t} else {\n\t\t\t\t\tif (rconn in nodenums)\n\t\t\t\t\t\tnodesof[rid] = nodesof[rid] \"\" rconn;\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t}\n\n\t#\tFor each subsequent hop, build each router's node list by\n\t#\tconcatenating the lists of its connected routers with smaller\n\t#\thop counts.\n\tdo {\n\t\tcurhop++;\n\t\tfor (i = 0; i < nrouters; i++) {\n\t\t\trid = routerid[i];\n\t\t\tif (hops[rid] == curhop) {\n\t\t\t\t#\tThis should not happen ...\n\t\t\t\tif ((debug == 1) && (rid in nodesof))\n\t\t\t\t\tprintf(\"router %s in nodesof[]\\n\", rid);\n\t\t\t\tnodesof[rid] = \"\";\n\t\t\t\tfor (j = 0; j < nrouterconnections[rid]; j++) {\n\t\t\t\t\trconn = routerconnections[rid, j];\n\t\t\t\t\tif (hops[rconn] < curhop)\n\t\t\t\t\t\tnodesof[rid] = nodesof[rid] \"\" nodesof[rconn];\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t} while (curhop <= nhops);\n\n\tif (debug == 1)\n\t\tfor (i = 0; i < nrouters; i++) {\n\t\t\trid = routerid[i];\n\t\t\tprintf(\"router %s:  hop %d, nodesof %s\\n\",\n\t\t\t    rid, hops[rid], nodesof[rid]);\n\t\t}\n}\n\nfunction verscheck(vers, version_min, version_max)\n{\n\tif ((vers < version_min) || (vers > version_max)) {\n\t\tprintf(\"unsupported version (%d) - not between %d and %d\\n\",\n\t\t    vers, version_min, version_max);\n\t\texitval = 
1;\n\t\texit(1);\n\t}\n}\n\n#\tThis function infers the existence of nodes and routers in order to\n#\tallow this script to work for both UV and non-UV systems with minimal\n#\tchanges.  It does this by assuming that the blade IDs (a.k.a. nodes)\n#\tare derived from CPU IDs by truncating the ID string at an '_' character\n#\tand that routers are derived from blade IDs by truncating at a '#'\n#\tcharacter.\nfunction UV_infer_nodes_and_routers(\t\ti, n, nid, UVnodeID, UVnode, nlid)\n{\n\tif (nnodes == 0) {\n\t\tn = asort(cpuid, temp)\n\t\tfor (i = 0; i < n; i++) {\n\t\t\tsplit(cpuid[i], UVnode, \"_\");\n\t\t\tUVnodeID = UVnode[1];\n\n\t\t\tif (UVnodeID in nodenums)\n\t\t\t\tcontinue;\n\t\t\telse {\n\t\t\t\t#\tnid is currently unused;\n\t\t\t\t#\tnid[1] would be a router ID\n\t\t\t\tsplit(UVnodeID, nid, \"#\");\n\t\t\t\tnodenums[UVnodeID] = nnodes;\n\t\t\t\tnodeid[nnodes] = UVnodeID \"\";\n\t\t\t\tif (debug) {\n\t\t\t\t\tprintf(\"nodenums[%s]:  %d\\n\", UVnodeID, nnodes);\n\t\t\t\t\tprintf(\"nodeid[%d]:  %s\\n\", nnodes, nodeid[nnodes]);\n\t\t\t\t}\n\t\t\t\tnnodes++;\n\t\t\t}\n\t\t}\n\t}\n\tif (nrouters == 0) {\n\t\tfor (i in numalinklocalid) {\n\t\t\tnlid = NUMAlink2router(numalinklocalid[i]);\n\t\t\tif (nlid in routernum) {\n\t\t\t\tif (debug)\n\t\t\t\t\tprintf(\"router %s already in routernum\\n\", nlid);\n\t\t\t\tcontinue;\n\t\t\t}\n\t\t\trouternum[nlid] = nrouters;\n\t\t\trouterid[nrouters] = nlid;\n\t\t\tif (debug) {\n\t\t\t\tprintf(\"routernum[%s]:  %d\\n\", nlid, nrouters);\n\t\t\t\tprintf(\"routerid[%d]:  %s\\n\", nrouters, nlid);\n\t\t\t}\n\t\t\tnrouters++;\n\t\t}\n\t}\n}\n\nEND {\n\t#\tEven though BEGIN may have called exit(), the END rule will\n\t#\tstill be executed.  
Avoid any actions that shouldn't occur\n\t#\tin that case.\n\tif (exitval)\n\t\texit(exitval);\n\n\tif (is_UV)\n\t\tUV_infer_nodes_and_routers();\n\tif (debug) {\n\t\tdoconsistencychecks();\n\t\tdumpnodeinfo();\n\t\tdumprouterinfo();\n\t}\n\n\tnumhops = genhops();\n\tgenps(numhops);\n\n\tif (type == \"m\") {\n\t\tmomprologue();\n\t\tprintmompsdefs();\n\t} else if (type == \"q\") {\n\t\tqprologue();\n\t\tprintqmgrpsdefs();\n\t}\n}\n"
  },
  {
    "path": "test/Makefile.am",
    "content": "\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\nSUBDIRS = fw tests\n"
  },
  {
    "path": "test/fw/MANIFEST.in",
    "content": "# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\ninclude *.txt\ngraft bin\ngraft ptl\n"
  },
  {
    "path": "test/fw/Makefile.am",
    "content": "#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\n\n# making aptl installation in case it is enabled with configure\n# all directories for ptl install start with 'ptl_'\n\nif ENABLEPTL\nptlpkg_bindir = ${ptl_prefix}/bin\n\ndist_ptlpkg_bin_SCRIPTS = $(wildcard $(srcdir)/bin/*)\n\nptlpkg_fwdir = ${ptl_prefix}/fw\n\ndist_ptlpkg_fw_DATA =  $(srcdir)/requirements.txt\n\nptlpkg_pylib_topdir = ${ptl_prefix}/lib/python$(PYTHON_VERSION)/site-packages/ptl\n\ndist_ptlpkg_pylib_top_PYTHON = $(wildcard $(builddir)/ptl/*.py)\n\nptlpkg_pylib_libdir = $(ptlpkg_pylib_topdir)/lib\n\ndist_ptlpkg_pylib_lib_PYTHON = $(wildcard $(srcdir)/ptl/lib/*.py)\n\nptlpkg_pylib_utilsdir = $(ptlpkg_pylib_topdir)/utils\n\ndist_ptlpkg_pylib_utils_PYTHON = $(wildcard $(srcdir)/ptl/utils/*.py)\n\nptlpkg_pylib_pluginsdir = $(ptlpkg_pylib_utilsdir)/plugins\n\ndist_ptlpkg_pylib_plugins_PYTHON = $(wildcard $(srcdir)/ptl/utils/plugins/*.py)\n\nsysprofiledir = /etc/profile.d\n\ndist_sysprofile_DATA = \\\n\tptl.csh \\\n\tptl.sh\nendif\n\nptlmoduledir = $(exec_prefix)/unsupported/fw/ptl\nptlbindir = $(exec_prefix)/unsupported/fw/bin\n\ndist_ptlbin_SCRIPTS = \\\n\tbin/pbs_stat \\\n\tbin/pbs_loganalyzer \\\n\tbin/pbs_snapshot \\\n\tbin/pbs_config \\\n\tbin/pbs_compare_results\n\ndist_ptlmodule_PYTHON = ptl/__init__.py\n\nptlmodulelibdir = $(ptlmoduledir)/lib\n\ndist_ptlmodulelib_PYTHON = \\\n\tptl/lib/pbs_api_to_cli.py \\\n\tptl/lib/pbs_ifl_mock.py 
\\\n\tptl/lib/pbs_testlib.py \\\n\tptl/lib/__init__.py \\\n\tptl/lib/ptl_config.py \\\n\tptl/lib/ptl_error.py \\\n\tptl/lib/ptl_types.py \\\n\tptl/lib/ptl_batchutils.py \\\n\tptl/lib/ptl_constants.py \\\n\tptl/lib/ptl_expect_action.py \\\n\tptl/lib/ptl_object.py \\\n\tptl/lib/ptl_mom.py \\\n\tptl/lib/ptl_sched.py \\\n\tptl/lib/ptl_server.py \\\n\tptl/lib/ptl_comm.py \\\n\tptl/lib/ptl_entities.py \\\n\tptl/lib/ptl_fairshare.py \\\n\tptl/lib/ptl_resourceresv.py \\\n\tptl/lib/ptl_service.py \\\n\tptl/lib/ptl_wrappers.py\n\nptlmoduleutilsdir = $(ptlmoduledir)/utils\n\ndist_ptlmoduleutils_PYTHON = \\\n\tptl/utils/pbs_procutils.py \\\n\tptl/utils/pbs_dshutils.py \\\n\tptl/utils/pbs_covutils.py \\\n\tptl/utils/pbs_cliutils.py \\\n\tptl/utils/pbs_logutils.py \\\n\tptl/utils/pbs_testsuite.py \\\n\tptl/utils/pbs_anonutils.py \\\n\tptl/utils/pbs_snaputils.py \\\n\tptl/utils/pbs_testusers.py \\\n\tptl/utils/__init__.py\n\nptlmoduleutilspluginsdir = $(ptlmoduleutilsdir)/plugins\n\ndist_ptlmoduleutilsplugins_PYTHON = \\\n\tptl/utils/plugins/ptl_test_tags.py \\\n\tptl/utils/plugins/ptl_test_loader.py \\\n\tptl/utils/plugins/ptl_test_db.py \\\n\tptl/utils/plugins/ptl_test_info.py \\\n\tptl/utils/plugins/ptl_test_runner.py \\\n\tptl/utils/plugins/ptl_test_data.py \\\n\tptl/utils/plugins/__init__.py\n\ninstall-data-hook:\n\tcd $(DESTDIR)$(ptlbindir) && \\\n\tmv pbs_stat pbs_stat.py && \\\n\tmv pbs_loganalyzer pbs_loganalyzer.py && \\\n\tmv pbs_snapshot pbs_snapshot.py && \\\n\tmv pbs_config pbs_config.py\n\nuninstall-hook:\n\tcd $(DESTDIR)$(ptlbindir) && \\\n\trm -f pbs_stat.py pbs_loganalyzer.py pbs_snapshot.py pbs_config.py\n"
  },
  {
    "path": "test/fw/bin/pbs_as",
    "content": "#!/usr/bin/env python3\n# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport getopt\nimport os\nimport pickle\nimport pwd\nimport sys\n\nfrom ptl.lib.pbs_testlib import (PbsAlterError, PbsDeleteError, PbsDeljobError,\n                                 PbsDelresvError, PbsHoldError,\n                                 PbsManagerError, PbsMessageError,\n                                 PbsMoveError, PbsOrderError, PbsQtermError,\n                                 PbsReleaseError, PbsRerunError, PbsRunError,\n                                 PbsSignalError, PbsStatusError,\n                                 PbsSubmitError, PtlConfig, Server)\n\n\ndef usage():\n    msg = []\n    msg += ['PBS user impersonation tool. 
This tool is an internal ']\n    msg += ['tool and is not intended to be used by end-users\\n\\n']\n    msg += ['Usage: ' + os.path.basename(sys.argv[0]) + ' [OPTION]\\n\\n']\n    msg += ['-c <cmd>: command to perform, one of submit, status\\n']\n    msg += ['-e <extend>: extend options to commands\\n']\n    msg += ['-o <objid>: identifier of object to act upon\\n']\n    msg += ['-u <user>: username to perform command as\\n']\n    msg += ['-f <serialized_file>: filename containing serialized data\\n']\n    msg += ['-s <hostname>: name of host on which to perform command\\n']\n    msg += ['-h: usage help\\n']\n\n    print(\"\".join(msg))\n\n\ndef _load_data(objfile=None, servername=None, user=None):\n    if objfile:\n        # pickled data is binary; open the file in binary mode\n        f = open(objfile, 'rb')\n        _data = pickle.load(f)\n        f.close()\n    else:\n        _data = None\n    if user is not None:\n        uid = pwd.getpwnam(user)[2]\n        if os.getuid() == 0 and uid != 0:\n            os.setuid(uid)\n    s = Server(servername, stat=False)\n    return (s, _data)\n\n\nif __name__ == '__main__':\n\n    if len(sys.argv) < 2:\n        usage()\n        sys.exit(1)\n\n    try:\n        opts, args = getopt.getopt(sys.argv[1:], \"c:e:o:u:f:s:h\")\n    except BaseException:\n        usage()\n        sys.exit(1)\n\n    cmd = None\n    objid = None\n    user = None\n    objfile = None\n    servername = None\n    extend = None\n\n    for o, val in opts:\n        if o == '-e':\n            extend = val\n        elif o == '-c':\n            cmd = val\n        elif o == '-o':\n            objid = val.split(',')\n        elif o == '-u':\n            user = val\n        elif o == '-f':\n            objfile = val\n        elif o == '-s':\n            servername = val\n        elif o == '-h':\n            usage()\n            sys.exit(0)\n        else:\n            sys.stderr.write('unrecognized option. 
Exiting')\n            usage()\n            sys.exit(1)\n\n    if cmd is None or user is None or servername is None:\n        print(None)\n        sys.exit(0)\n\n    PtlConfig()\n
\n    if cmd == 'submit':\n        (s, job) = _load_data(objfile, servername, user)\n        job.attrl = s.utils.dict_to_attrl(job.attributes)\n        job.attropl = s.utils.dict_to_attropl(job.attributes)\n        try:\n            jid = s.submit(job)\n            sys.stdout.write(jid.strip())\n        except PbsSubmitError as e:\n            sys.stdout.write(str(e.rv))\n            sys.stderr.write(repr(e))\n            sys.exit(e.rc)\n
\n    elif cmd == 'status':\n        (s, _data) = _load_data(objfile, servername, user)\n        if 'obj_type' in _data:\n            obj_type = int(_data['obj_type'])\n        else:\n            obj_type = None\n        if 'attrib' in _data:\n            attrib = _data['attrib']\n        else:\n            attrib = None\n        if 'id' in _data:\n            id = _data['id']\n        else:\n            id = None\n\n        try:\n            rv = s.status(obj_type, attrib, id, extend=extend)\n            sys.stdout.write(str(rv))\n        except PbsStatusError as e:\n            rv = e.rv\n            sys.stdout.write(str(rv))\n            sys.stderr.write(repr(e))\n            sys.stderr.flush()\n            sys.exit(e.rc)\n
\n    elif cmd == 'delete':\n        if objid is None:\n            print('1')\n        else:\n            (s, data) = _load_data(objfile, servername, user)\n            try:\n                rc = s.delete(objid, extend=extend)\n                sys.stdout.write(str(rc))\n            except PbsDeleteError as e:\n                sys.stdout.write(str(e.rv))\n                sys.stderr.write(repr(e))\n                sys.exit(e.rc)\n
\n    elif cmd == 'deljob':\n        if objid is None:\n            print('1')\n        else:\n            (s, data) = _load_data(objfile, servername, user)\n            try:\n                rc = s.deljob(objid, extend=extend)\n                sys.stdout.write(str(rc))\n            except PbsDeljobError as e:\n                sys.stdout.write(str(e.rv))\n                sys.stderr.write(repr(e))\n                sys.exit(e.rc)\n
\n    elif cmd == 'delresv':\n        if objid is None:\n            print('1')\n        else:\n            (s, data) = _load_data(objfile, servername, user)\n            try:\n                rc = s.delresv(objid, extend=extend)\n                sys.stdout.write(str(rc))\n            except PbsDelresvError as e:\n                sys.stdout.write(str(e.rv))\n                sys.stderr.write(repr(e))\n                sys.exit(e.rc)\n
\n    elif cmd == 'select':\n        (s, attrib) = _load_data(objfile, servername, user)\n        rv = s.select(attrib, extend=extend)\n        print(rv)\n
\n    elif cmd == 'alterjob':\n        if objid is None:\n            print('1')\n        else:\n            (s, attrib) = _load_data(objfile, servername, user)\n            try:\n                rc = s.alterjob(objid, attrib, extend=extend)\n                sys.stdout.write(str(rc))\n            except PbsAlterError as e:\n                sys.stdout.write(str(e.rv))\n                sys.stderr.write(repr(e))\n                sys.exit(e.rc)\n
\n    elif cmd == 'holdjob':\n        if objid is None:\n            print('1')\n        else:\n            (s, hold_list) = _load_data(objfile, servername, user)\n            try:\n                rc = s.holdjob(objid, str(hold_list), extend=extend)\n                sys.stdout.write(str(rc))\n            except PbsHoldError as e:\n                sys.stdout.write(str(e.rv))\n                sys.stderr.write(repr(e))\n                sys.exit(e.rc)\n
\n    elif cmd == 'sigjob':\n        if objid is None:\n            print('1')\n        else:\n            (s, signal) = _load_data(objfile, servername, user)\n            try:\n                rc = s.sigjob(objid, str(signal), extend=extend)\n                sys.stdout.write(str(rc))\n            except PbsSignalError as e:\n                sys.stdout.write(str(e.rv))\n                sys.stderr.write(repr(e))\n                sys.exit(e.rc)\n
\n    elif cmd == 'msgjob':\n        if objid is None:\n            print('1')\n        else:\n            (s, msg) = _load_data(objfile, servername, user)\n            try:\n                rc = s.msgjob(objid, str(msg), extend=extend)\n                sys.stdout.write(str(rc))\n            except PbsMessageError as e:\n                sys.stdout.write(str(e.rv))\n                sys.stderr.write(repr(e))\n                sys.exit(e.rc)\n
\n    elif cmd == 'rlsjob':\n        if objid is None:\n            print('1')\n        else:\n            (s, hold_list) = _load_data(objfile, servername, user)\n            try:\n                rc = s.rlsjob(objid, str(hold_list), extend=extend)\n                sys.stdout.write(str(rc))\n            except PbsReleaseError as e:\n                sys.stdout.write(str(e.rv))\n                sys.stderr.write(repr(e))\n                sys.exit(e.rc)\n
\n    elif cmd == 'rerunjob':\n        if objid is None:\n            print('1')\n        else:\n            (s, data) = _load_data(objfile, servername, user)\n            try:\n                rc = s.rerunjob(objid, extend=extend)\n                sys.stdout.write(str(rc))\n            except PbsRerunError as e:\n                sys.stdout.write(str(e.rv))\n                sys.stderr.write(repr(e))\n                sys.exit(e.rc)\n
\n    elif cmd == 'orderjob':\n        if objid is None:\n            print('1')\n        else:\n            (s, jobid2) = _load_data(objfile, servername, user)\n            try:\n                rc = s.orderjob(objid, str(jobid2), extend=extend)\n                sys.stdout.write(str(rc))\n            except PbsOrderError as e:\n                sys.stdout.write(str(e.rv))\n                sys.stderr.write(repr(e))\n                sys.exit(e.rc)\n
\n    elif cmd == 'runjob':\n        if objid is None:\n            print('1')\n        else:\n            (s, location) = _load_data(objfile, servername, user)\n            try:\n                rc = s.runjob(objid, str(location), extend=extend)\n            except PbsRunError as e:\n                rc = e.rc\n            print(str(rc))\n
\n    elif cmd == 'movejob':\n        if objid is None:\n            print('1')\n        else:\n            (s, destination) = _load_data(objfile, servername, user)\n            try:\n                rc = s.movejob(objid, str(destination), extend=extend)\n                sys.stdout.write(str(rc))\n            except PbsMoveError as e:\n                sys.stdout.write(str(e.rv))\n                sys.stderr.write(repr(e))\n                sys.exit(e.rc)\n
\n    # elif cmd == 'alterresv':\n    #    if objid is None:\n    #        print '1'\n    #    else:\n    #        (s, attrib) = _load_data(objfile, servername, user)\n    #        try:\n    #            rc = s.alterresv(objid, attrib, extend=extend)\n    #            sys.stdout.write(str(rc))\n    #        except PbsAlterError, e:\n    #            sys.stdout.write(str(rc))\n    #            sys.stderr.write(repr(e))\n    #            sys.exit(e.rc)\n
\n    elif cmd == 'manager':\n        (s, _data) = _load_data(objfile, servername, user)\n        if 'cmd' in _data:\n            cmd = int(_data['cmd'])\n        else:\n            cmd = None\n        if 'obj_type' in _data:\n            obj_type = int(_data['obj_type'])\n        else:\n            obj_type = None\n        if 'attrib' in _data:\n            attrib = _data['attrib']\n        else:\n            attrib = None\n        if 'id' in _data:\n            id = _data['id']\n        else:\n            id = None\n        try:\n            rc = s.manager(cmd, obj_type, attrib, id, extend=extend)\n            sys.stdout.write(str(rc))\n        except PbsManagerError as e:\n            sys.stderr.write(repr(e))\n            sys.exit(e.rc)\n
\n    elif cmd == 'terminate':\n        (s, data) = _load_data(objfile, servername, user)\n        rc = s.terminate(manner=data['manner'],\n                         server_name=data['server_name'], extend=extend)\n        print(str(rc))\n\n    sys.exit(0)\n"
  },
  {
    "path": "test/fw/bin/pbs_benchpress",
    "content": "#!/usr/bin/env python3\n# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport sys\nimport os\nimport getopt\nimport logging\nimport logging.config\nimport platform\nimport errno\nimport signal\nimport importlib\nimport ptl\nimport nose\nfrom nose.plugins.base import Plugin\nfrom nose.plugins.manager import PluginManager\nfrom ptl.lib.pbs_testlib import PtlConfig\nfrom distutils.version import LooseVersion\nfrom ptl.utils.pbs_cliutils import CliUtils\nfrom ptl.utils.pbs_dshutils import TimeOut\nfrom ptl.utils.plugins.ptl_test_loader import PTLTestLoader\nfrom ptl.utils.plugins.ptl_test_runner import PTLTestRunner\nfrom ptl.utils.plugins.ptl_test_db import PTLTestDb\nfrom ptl.utils.plugins.ptl_test_info import PTLTestInfo\nfrom ptl.utils.plugins.ptl_test_tags import PTLTestTags\nfrom ptl.utils.plugins.ptl_test_data import PTLTestData\n\n\nclass PTLNoseConfig(nose.config.Config):\n    def __init__(self, log_format=None, outfile_handler=None, **kw):\n        super().__init__(**kw)\n        self.outfile_handler = outfile_handler\n        self.log_format = log_format\n\n    def configureLogging(self):\n        super().configureLogging()\n        nose_logger = logging.getLogger('nose')\n        if self.log_format:\n            formatter = logging.Formatter(self.log_format)\n            for handler in nose_logger.handlers:\n                handler.setFormatter(formatter)\n        if self.outfile_handler:\n            
nose_logger.addHandler(self.outfile_handler)\n\n\n# trap SIGINT and SIGPIPE\ndef trap_exceptions(etype, value, tb):\n    sys.excepthook = sys.__excepthook__\n    if issubclass(etype, IOError) and value.errno == errno.EPIPE:\n        pass\n    else:\n        sys.__excepthook__(etype, value, tb)\n\n\nsys.excepthook = trap_exceptions\n
\n\ndef sighandler(signum, frames):\n    signal.alarm(0)\n    raise KeyboardInterrupt('Signal %d received' % (signum))\n\n\ndef timeout_handler(signum, frames):\n    raise TimeOut('pbs_benchpress timed out by signal %d' % (signum))\n
\n\n# joining the caller's process group makes it possible to interrupt\n# programmatically when run in a subshell\nif os.getpgrp() != os.getpid():\n    os.setpgrp()\nsignal.signal(signal.SIGINT, sighandler)\nsignal.signal(signal.SIGTERM, sighandler)\n
\n\ndef usage():\n    msg = []\n    msg += ['Usage: ' + os.path.basename(sys.argv[0]) + ' [OPTION]\\n\\n']\n    msg += ['  Test harness used to run or list test suites and test ' +\n            'cases\\n\\n']\n    msg += ['-f <file names>: comma-separated list of file names to run\\n']\n    msg += ['-F: set logging format to include timestamp and level\\n']\n    msg += ['-g <testgroup file>: path to file containing comma-separated']\n    msg += [' list of testsuites\\n']\n    msg += ['-h: display usage information\\n']\n    msg += ['-i: show test info\\n']\n    msg += ['-l <level>: log level\\n']\n    msg += ['-L: display list of tests\\n']\n    msg += ['-o <logfile>: log file name\\n']\n    msg += ['-p <param>: test parameter. Comma-separated list of key=val']\n    msg += [' pairs. 
Note that the comma cannot be used in val\\n']\n    msg += ['-t <test suites>: comma-separated list of test suites to run\\n']\n    msg += ['--exclude=<names>: comma-separated string of tests to exclude\\n']\n    msg += ['--user-plugins=<names>: comma-separated list of key=val of']\n    msg += [' user plugins to load, where key is module and val is']\n    msg += [' classname of plugin which is subclass of nose.plugins.base']\n    msg += ['.Plugin\\n']\n    msg += ['--db-type=<type>: Type of database to use.']\n    msg += [' Can be one of \"html\", \"file\", \"sqlite\", \"pgsql\", \"json\".']\n    msg += [' Defaults to \"json\"\\n']\n    msg += ['--db-name=<name>: database name. Defaults to']\n    msg += [' ptl_test_results.json\\n']\n    msg += ['--db-access=<path>: Path to a file that defines db options '\n            '(PostgreSQL only)\\n']\n    msg += ['--lcov-bin=<bin>: path to lcov binary. Defaults to lcov\\n']\n    msg += ['--genhtml-bin=<bin>: path to genhtml binary. '\n            'Defaults to genhtml\\n']\n    msg += ['--lcov-data=<dir>: path to directory containing .gcno files\\n']\n    msg += ['--lcov-out=<dir>: path to output directory\\n']\n    msg += ['--lcov-baseurl=<url>: use <url> as baseurl in html report\\n']\n    msg += ['--lcov-nosrc: don\\'t include PBS source in coverage analysis.']\n    msg += [' By default PBS source will be included in coverage analysis\\n']\n    msg += ['--log-conf=<file>: logging config file\\n']\n    msg += ['--min-pyver=<version>: minimum Python version\\n']\n    msg += ['--max-pyver=<version>: maximum Python version\\n']\n    msg += ['--param-file=<file>: get params from file. Overrides -p\\n']\n    msg += ['--post-analysis-data=<dir>: path to post analysis data' +\n            ' directory\\n']\n    msg += ['--max-postdata-threshold=<count>: max post analysis data' +\n            ' threshold per testsuite. Defaults to 10. <count>=0 will' +\n            ' disable this threshold\\n']\n    msg += ['--tc-failure-threshold=<count>: test case failure threshold' +\n            ' per testsuite. Defaults to 10. <count>=0 will disable this' +\n            ' threshold\\n']\n    msg += ['--cumulative-tc-failure-threshold=<count>: cumulative test' +\n            ' case failure threshold. Defaults to 100. <count>=0 will' +\n            ' disable this threshold.\\n                              ' +\n            '             Must be greater than or equal to' +\n            ' \\'tc-failure-threshold\\'\\n']\n    msg += ['--stop-on-failure: if set, stop when one of multiple tests ' +\n            'fails\\n']\n    msg += ['--timeout=<seconds>: duration after which no test suites are '\n            'run\\n']\n    msg += ['--repeat-count=<count>: repeat all tests <count> times\\n']\n    msg += ['--repeat-delay=<seconds>: delay between two repetitions\\n']\n    msg += ['--follow-child: if set, walk the test hierarchy and run ' +\n            'each test\\n']\n    msg += ['--tags=<tag>: Select only tests that have <tag> tag.']\n    msg += [' Can be applied multiple times\\n']\n    msg += ['               Format: [!]tag[,tag]\\n']\n    msg += ['               Example:\\n']\n    msg += ['                 smoke - This will select all tests which have']\n    msg += [' the \"smoke\" tag\\n']\n    msg += ['                 !smoke - This will select all tests which']\n    msg += [' don\\'t have the \"smoke\" tag\\n']\n    msg += ['                 smoke,regression - This will select all tests']\n    msg += [' which have both \"smoke\" and \"regression\" tags\\n']\n    msg += ['--eval-tags=\\'<Python expression>\\': Select only tests whose']\n    msg += [' tags evaluate <Python expression> to True.']\n    msg += [' Can be applied multiple times\\n']\n    msg += ['            Example:\\n']\n    msg += ['               \\'smoke and (not regression)\\' - This will select']\n    msg += [' all tests which have the \"smoke\" tag and don\\'t have the \"regression\"']\n    msg += [' tag\\n']\n    msg += ['               \\'priority>4\\' - This will select all tests which']\n    msg += [' have a \"priority\" tag with a value >4\\n']\n    msg += ['--tags-info: List all selected test suites (or test cases if ']\n    msg += ['--verbose applied)\\n']\n    msg += [\n        '            used with --tags or --eval-tags (also -t or --exclude']\n    msg += [' can be applied to limit selection)\\n']\n    msg += ['--list-tags: List all currently used tags\\n']\n    msg += [\n        '--verbose: show verbose output (used with -i, -L or --tag-info)\\n']\n    msg += ['--use-current-setup: Run tests on the current PBS setup\\n']\n    msg += ['--version: show version number and exit\\n']\n\n    print(''.join(msg))\n
\n\nif __name__ == '__main__':\n\n    if len(sys.argv) < 2:\n        usage()\n        sys.exit(1)\n
\n    level = 'INFOCLI2'\n    fmt = '%(asctime)-15s %(levelname)-8s %(message)s'\n    outfile = None\n    dbtype = None\n    dbname = None\n    use_cur_setup = False\n    dbaccess = None\n    testfiles = None\n    testsuites = None\n    testparam = None\n    testgroup = None\n    list_test = False\n    showinfo = False\n    follow = False\n    excludes = None\n    stoponfail = False\n    paramfile = None\n    logconf = None\n    minpyver = None\n    maxpyver = None\n    verbose = False\n    lcov_data = None\n    lcov_bin = None\n    lcov_out = None\n    lcov_nosrc = False\n    lcov_baseurl = None\n    genhtml_bin = None\n    timeout = None\n    repeat_count = 1\n    repeat_delay = 0\n    nosedebug = False\n    only_info = False\n    tags = []\n    eval_tags = []\n    tags_info = False\n    list_tags = False\n    post_data_dir = None\n    gen_ts_tree = False\n    tc_failure_threshold = 10\n    cumulative_tc_failure_threshold = 100\n    max_postdata_threshold = 10\n    user_plugins = None\n    PtlConfig()\n
\n    largs = ['exclude=', 'log-conf=', 'timeout=', 'repeat-count=']\n    largs += ['param-file=', 
'min-pyver=', 'max-pyver=']\n    largs += ['db-name=', 'db-access=', 'db-type=', 'genhtml-bin=']\n    largs += ['lcov-bin=', 'lcov-data=', 'lcov-out=', 'lcov-nosrc']\n    largs += ['lcov-baseurl=', 'tags=', 'eval-tags=', 'tags-info', 'list-tags']\n    largs += ['version', 'verbose', 'follow-child']\n    largs += ['stop-on-failure', 'enable-nose-debug']\n    largs += ['post-analysis-data=', 'gen-ts-tree', 'repeat-delay=']\n    largs += ['tc-failure-threshold=', 'cumulative-tc-failure-threshold=']\n    largs += ['max-postdata-threshold=', 'user-plugins=', 'use-current-setup']\n\n    try:\n        opts, args = getopt.getopt(\n            sys.argv[1:], 'f:il:t:o:p:g:hLF', largs)\n    except Exception:\n        sys.stderr.write('Unrecognized option. Exiting\\n')\n        usage()\n        sys.exit(1)\n\n    if args:\n        sys.stderr.write('Invalid usage. Exiting\\n')\n        usage()\n        sys.exit(1)\n\n    for o, val in opts:\n        if o == '-i':\n            showinfo = True\n            list_test = False\n            gen_ts_tree = False\n            only_info = True\n        elif o == '-L':\n            showinfo = False\n            list_test = True\n            gen_ts_tree = False\n            only_info = True\n        elif o == '--gen-ts-tree':\n            showinfo = False\n            list_test = False\n            gen_ts_tree = True\n            only_info = True\n        elif o == '-l':\n            level = val\n        elif o == '-o':\n            outfile = CliUtils.expand_abs_path(val)\n        elif o == '-f':\n            testfiles = CliUtils.expand_abs_path(val)\n        elif o == '-t':\n            testsuites = val\n        elif o == '--user-plugins':\n            user_plugins = val\n        elif o == '--tags':\n            tags.append(val.strip())\n        elif o == '--eval-tags':\n            eval_tags.append(val.strip())\n        elif o == '--tags-info':\n            tags_info = True\n        elif o == '--list-tags':\n            list_tags = 
True\n        elif o == '--exclude':\n            excludes = val\n        elif o == '-F':\n            fmt = '%(asctime)-15s %(levelname)-8s %(name)s %(message)s'\n        elif o == '-p':\n            testparam = val\n        elif o == '-g':\n            testgroup = val\n        elif o == '--timeout':\n            timeout = int(val)\n        elif o == '--repeat-count':\n            repeat_count = int(val)\n        elif o == '--repeat-delay':\n            repeat_delay = int(val)\n        elif o == '--db-type':\n            dbtype = val\n        elif o == '--db-name':\n            dbname = CliUtils.expand_abs_path(val)\n        elif o == '--db-access':\n            dbaccess = CliUtils.expand_abs_path(val)\n        elif o == '--genhtml-bin':\n            genhtml_bin = CliUtils.expand_abs_path(val)\n        elif o == '--lcov-bin':\n            lcov_bin = CliUtils.expand_abs_path(val)\n        elif o == '--lcov-data':\n            lcov_data = CliUtils.expand_abs_path(val)\n        elif o == '--lcov-out':\n            lcov_out = CliUtils.expand_abs_path(val)\n        elif o == '--lcov-nosrc':\n            lcov_nosrc = True\n        elif o == '--lcov-baseurl':\n            lcov_baseurl = val\n        elif o == '--param-file':\n            paramfile = CliUtils.expand_abs_path(val)\n        elif o == '--stop-on-failure':\n            stoponfail = True\n        elif o == '--follow-child':\n            follow = True\n        elif o == '--log-conf':\n            logconf = val\n        elif o == '--min-pyver':\n            minpyver = val\n        elif o == '--max-pyver':\n            maxpyver = val\n        elif o == '--enable-nose-debug':\n            nosedebug = True\n        elif o == '--verbose':\n            verbose = True\n        elif o == '--post-analysis-data':\n            post_data_dir = CliUtils.expand_abs_path(val)\n        elif o == '--tc-failure-threshold':\n            tc_failure_threshold = val\n        elif o == '--cumulative-tc-failure-threshold':\n           
 cumulative_tc_failure_threshold = val\n        elif o == '--max-postdata-threshold':\n            max_postdata_threshold = val\n        elif o == '-h':\n            usage()\n            sys.exit(0)\n        elif o == '--use-current-setup':\n            use_cur_setup = True\n        elif o == '--version':\n            print(ptl.__version__)\n            sys.exit(0)\n        else:\n            sys.stderr.write('Unrecognized option %s\\n' % o)\n            usage()\n            sys.exit(1)\n
\n    if nosedebug:\n        level = 'DEBUG'\n\n    log_lvl = CliUtils.get_logging_level(level)\n    if logconf:\n        logging.config.fileConfig(logconf)\n    else:\n        logging.basicConfig(level=log_lvl, format=fmt)\n\n    if not log_lvl and level:\n        logging.error('Invalid log level: %s', level)\n        sys.exit(1)\n    if outfile:\n        outfile_hdlr = logging.FileHandler(outfile)\n        outfile_hdlr.setLevel(log_lvl)\n        outfile_hdlr.setFormatter(logging.Formatter(fmt))\n        ptl_logger = logging.getLogger('ptl')\n        ptl_logger.addHandler(outfile_hdlr)\n        ptl_logger.setLevel(log_lvl)\n    else:\n        outfile_hdlr = None\n
\n    pyver = platform.python_version()\n    if minpyver is not None and LooseVersion(pyver) < LooseVersion(minpyver):\n        logging.error('Python version ' + str(pyver) + ' does not meet ' +\n                      'required minimum version of ' + minpyver)\n        sys.exit(1)\n    if maxpyver is not None and LooseVersion(pyver) > LooseVersion(maxpyver):\n        logging.error('Python version ' + str(pyver) + ' does not meet ' +\n                      'required max version of ' + maxpyver)\n        sys.exit(1)\n
\n    if showinfo and testsuites is None:\n        logging.error(\n            'Test suite names (-t) are required along with the -i option!')\n        sys.exit(1)\n
\n    try:\n        tc_failure_threshold = int(tc_failure_threshold)\n        if tc_failure_threshold < 0:\n            raise ValueError\n    except ValueError:\n        _msg = 'Invalid value provided for testcase failure threshold, '\n        _msg += 'please provide an integer'\n        logging.error(_msg)\n        sys.exit(1)\n
\n    try:\n        cumulative_tc_failure_threshold = int(cumulative_tc_failure_threshold)\n        if cumulative_tc_failure_threshold < 0:\n            raise ValueError\n    except ValueError:\n        _msg = 'Invalid value provided for cumulative-tc-failure-threshold, '\n        _msg += 'please provide an integer'\n        logging.error(_msg)\n        sys.exit(1)\n
\n    if cumulative_tc_failure_threshold < tc_failure_threshold:\n        _msg = 'Value for cumulative-tc-failure-threshold should'\n        _msg += ' be greater than or equal to \\'tc-failure-threshold\\''\n        logging.error(_msg)\n        sys.exit(1)\n
\n    try:\n        max_postdata_threshold = int(max_postdata_threshold)\n        if max_postdata_threshold < 0:\n            raise ValueError\n    except ValueError:\n        _msg = 'Invalid value provided for max-postdata-threshold, '\n        _msg += 'please provide an integer'\n        logging.error(_msg)\n        sys.exit(1)\n
\n    if outfile is not None and not os.path.isdir(os.path.dirname(outfile)):\n        os.mkdir(os.path.dirname(outfile))\n\n    if timeout is not None:\n        PTLTestRunner.timeout = timeout\n        signal.signal(signal.SIGALRM, timeout_handler)\n        signal.alarm(timeout)\n
\n    if list_test:\n        excludes = None\n        testgroup = None\n        follow = True\n    if testfiles is not None:\n        tests = testfiles.split(',')\n    else:\n        tests = os.getcwd()\n\n    if testsuites is None:\n        testsuites = 'PBSTestSuite'\n        follow = True\n\n    loader = PTLTestLoader()\n
\n    if only_info:\n        testinfo = PTLTestInfo()\n        loader.set_data(testgroup, testsuites, excludes, True,\n                        testfiles)\n        testinfo.set_data(testsuites, list_test,\n                          showinfo, verbose, 
gen_ts_tree)\n        plugins = (loader, testinfo)\n    elif (tags_info or list_tags):\n        testtags = PTLTestTags()\n        loader.set_data(testgroup, testsuites, excludes, True)\n        testtags.set_data(tags, eval_tags, tags_info, list_tags, verbose)\n        plugins = (loader, testtags)\n    else:\n        testtags = PTLTestTags()\n        runner = PTLTestRunner()\n        db = PTLTestDb()\n        data = PTLTestData()\n        loader.set_data(testgroup, testsuites, excludes, follow, testfiles)\n        testtags.set_data(tags, eval_tags)\n        runner.set_data(paramfile, testparam, repeat_count,\n                        repeat_delay, lcov_bin, lcov_data,\n                        lcov_out, genhtml_bin, lcov_nosrc,\n                        lcov_baseurl, tc_failure_threshold,\n                        cumulative_tc_failure_threshold, use_cur_setup)\n        db.set_data(dbtype, dbname, dbaccess)\n        data.set_data(post_data_dir, max_postdata_threshold)\n        plugins = (loader, testtags, runner, db, data)\n    if user_plugins:\n        for plugin in user_plugins.split(','):\n            if '=' not in plugin:\n                _msg = 'Invalid value (%s)' % (plugin)\n                _msg += ' provided in user-plugins, it should be key value'\n                _msg += ' pair where key is module name and value is class'\n                _msg += ' name of plugin'\n                logging.error(_msg)\n                sys.exit(1)\n            mod, clsname = plugin.split('=', 1)\n            try:\n                loaded_mod = importlib.import_module(mod)\n            except ImportError:\n                _msg = 'Failed to load module (%s)' % mod\n                _msg += ' for plugin (%s)' % plugin\n                logging.error(_msg)\n                sys.exit(1)\n            _plugin = getattr(loaded_mod, clsname, None)\n            if not _plugin:\n                _msg = 'Could not find class named \"%s\"' % clsname\n                _msg += ' in module (%s)' % 
mod\n                logging.error(_msg)\n                sys.exit(1)\n            if not issubclass(_plugin, Plugin):\n                _msg = 'Plugin class (%s) should be subclass of ' % (clsname)\n                _msg += 'nose.plugins.base.Plugin'\n                logging.error(_msg)\n                sys.exit(1)\n            plugins += (_plugin(),)\n    test_regex = r'(^(?:[\\w]+|^)Test|pbs_|^test_[\\(]*)'\n    os.environ['NOSE_TESTMATCH'] = test_regex\n    nose_config = PTLNoseConfig(\n        env=os.environ,\n        log_format=fmt,\n        outfile_handler=outfile_hdlr,\n        plugins=PluginManager(plugins=plugins),\n        verbosity=7 if nosedebug else 2,\n        stopOnError=stoponfail)\n    nose.main(defaultTest=tests, argv=[sys.argv[0]], config=nose_config)\n    if outfile_hdlr:\n        outfile_hdlr.close()\n"
  },
  {
    "path": "test/fw/bin/pbs_compare_results",
    "content": "#!/usr/bin/env python3\n# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n
\nimport os\nimport csv\nimport json\nimport sys\nimport getopt\nimport time\n
\n\ndef usage():\n    msg = []\n    msg += ['Usage: ' + os.path.basename(sys.argv[0])]\n    msg += [' [benchmark_json_file] [tocompare_json_file] [OPTION]\\n\\n']\n    msg += [' Performance test results comparison tool']\n    msg += [' to generate csv and html report\\n\\n']\n    msg += ['--html-report : option to generate html report\\n']\n    msg += ['--output-file : path to generate csv and html file\\n']\n    msg += ['--help or -h : display usage information\\n']\n    msg += ['--append : Append results to an existing file\\n']\n    print(''.join(msg))\n
\n\ndef generate_html_report(filepath, append):\n    \"\"\"\n    Generate html performance comparison report\n    \"\"\"\n    HTML = '''<html>\n    <head>\n      <style>\n        table {\n          font-family: sans-serif, \"Times New Roman\", serif;\n          border-collapse: collapse;\n          width: 100%%;\n        }\n        td, th {\n          border: 1px solid #dddddd;\n          text-align: left;\n          padding: 8px;\n        }\n      </style>\n    </head>\n    <body>\n      <table>\n        <tr><th><b>Performance tests benchmark comparison results</b>\n        </th></tr>\n        <tr><td><b>user:</b> %s</td></tr>\n        <tr><td><b>host:</b> %s</td></tr>\n      </table>\n      <table>\n      %s\n      
</table>\n     </body>\n    </html>\n    '''\n\n    HTML_add = '''\n      %s\n      </table>\n     </body>\n    </html>\n    '''\n\n    if not filepath.endswith('.html'):\n        filepath = filepath + '.html'\n    if append:\n        with open(filepath, \"r\") as fd:\n            d = fd.read()\n        m = d.split(\"\\n\")\n        s = \"\\n\".join(m[:-4])\n        with open(filepath, \"w+\") as fd:\n            fd.write(s)\n\n        with open(filepath, 'a+') as fp:\n            fp.write(HTML_add % (''.join(_data)))\n    else:\n        with open(filepath, 'w+') as fp:\n            fp.write(HTML % (oldv['user'],\n                             list(oldv['machine_info'].keys())[0],\n                             _h + ''.join(_data)))\n\n\ndef generate_csv_report(filepath, append):\n    \"\"\"\n    compare 2 json results and generate csv report\n    \"\"\"\n    if not filepath.endswith('.csv'):\n        filepath = filepath + '.csv'\n    if append:\n        with open(filepath, 'a+') as fp:\n            csv.writer(fp).writerows(mdata)\n    else:\n        with open(filepath, 'w+') as fp:\n            csv.writer(fp).writerows([header] + mdata)\n\n\ndef percent_change(nv, ov, unit):\n    \"\"\"\n    swap the values to find the appropriate percent\n    change for the given unit\n    \"\"\"\n    if unit == 'jobs/sec':\n        nv, ov = ov, nv\n    diff = ov - nv\n    pchange = 0\n    if nv == 0:\n        nv = 1\n    if diff > 0:\n        pchange = (diff / nv) * 100\n    elif diff < 0:\n        diff = nv - ov\n        pchange = -(diff / nv) * 100\n    pchange = round(pchange, 2)\n    return str(pchange) + '%'\n\n\nif __name__ == '__main__':\n    if len(sys.argv) < 3:\n        usage()\n        sys.exit(1)\n\n    html_report = False\n    try:\n        opts, args = getopt.getopt(sys.argv[3:], \"h\",\n                                   [\"help\", \"html-report\", \"output-file=\",\n                                 
   \"append\"])\n    except getopt.GetoptError as err:\n        print(err)\n        usage()\n        sys.exit(1)\n\n    filepath = None\n    append = 0\n    for o, val in opts:\n        if o == '--html-report':\n            html_report = True\n        elif o in (\"-h\", \"--help\"):\n            usage()\n            sys.exit(0)\n        elif o == \"--output-file\":\n            filepath = val\n        elif o == \"--append\":\n            append = 1\n\n    with open(sys.argv[1]) as fp:\n        oldv = json.load(fp)\n\n    newfiles = sys.argv[2].split(',')\n    header = ['TestCase', 'Test Measure', 'Unit',\n              oldv['product_version'] + ' baseline PBS']\n    TR = '    <tr>\\n%s    </tr>\\n'\n    TH = '      <th>%s</th>\\n'\n    TD = '      <td>%s</td>\\n'\n    TDR = '      <td rowspan=%d>%s</td>\\n'\n    filenum = 0\n    mult_data = {}\n    for newfile in newfiles:\n        with open(newfile) as fp:\n            newv = json.load(fp)\n        header.extend([newv['product_version'], '% Improvement'])\n        for k, v in sorted(oldv['avg_measurements']['testsuites'].items()):\n            assert k in newv['avg_measurements']['testsuites'], k\n            _ntcs = newv['avg_measurements']['testsuites'][k]['testcases']\n            _otcs = v['testcases']\n            mn = 0\n            for _k, _v in sorted(v['testcases'].items()):\n                _tcn = k + '.' 
+ _k\n                assert _k in _ntcs, _tcn\n                _om = _v\n                _om = [x for x in _om if 'test_measure' in x]\n                _om = sorted(_om, key=lambda x: x['test_measure'])\n                _nm = _ntcs[_k]\n                _nm = [x for x in _nm if 'test_measure' in x]\n                _nm = sorted(_nm, key=lambda x: x['test_measure'])\n                _nm_ms = [x['test_measure'] for x in _nm]\n                for key, val in sorted(oldv['testsuites'].items()):\n                    assert key in newv['testsuites'], key\n                    for tc, doc in sorted(val['testcases'].items()):\n                        if _k == tc:\n                            _docs = doc['docstring']\n                for i, _m in enumerate(_om):\n                    data = []\n                    _mn = _m['test_measure']\n                    _msg = 'test measure %s missing' % _mn\n                    _msg += ' in new %s' % _tcn\n                    assert _mn in _nm_ms, _msg\n                    _os = _m['test_data']['std_dev']\n                    _o = _m['test_data']['mean']\n                    _omi = _m['test_data']['minimum']\n                    _oma = _m['test_data']['maximum']\n                    _omt = _m['test_data']['total_samples']\n                    _oms = _m['test_data']['samples_considered']\n                    _n = _nm[i]['test_data']['mean']\n                    _ns = _nm[i]['test_data']['std_dev']\n                    _nsmi = _nm[i]['test_data']['minimum']\n                    _nsma = _nm[i]['test_data']['maximum']\n                    _nst = _nm[i]['test_data']['total_samples']\n                    _nss = _nm[i]['test_data']['samples_considered']\n                    _old_vals = ('mean:' + str(round(_o, 2)) +\n                                 ', std_dev:' + str(round(_os, 2)) +\n                                 ', minimum:' + str(round(_omi, 2)) +\n                                 ', maximum:' + str(round(_oma, 2)) +\n                  
               ', mean_samples:' + str(round(_omt, 2)) +\n                                 ', samples_considered:' + str(round(_oms, 2)))\n                    _new_vals = ('mean:' + str(round(_n, 2)) +\n                                 ', std_dev:' + str(round(_ns, 2)) +\n                                 ', minimum:' + str(round(_nsmi, 2)) +\n                                 ', maximum:' + str(round(_nsma, 2)) +\n                                 ', mean_samples:' + str(round(_nst, 2)) +\n                                 ', samples_considered:' + str(round(_nss, 2)))\n                    _row = [_tcn, _mn, _m['unit'], _old_vals]\n                    _rowadd = [_new_vals, percent_change(_n, _o, _m['unit'])]\n                    if filenum == 0:\n                        data = _row\n                        mult_data[mn] = data\n                        data.extend(_rowadd)\n                    else:\n                        data.extend(_rowadd)\n                        mult_data[mn].extend(data)\n                    mn = mn + 1\n        filenum += 1\n    mdata = []\n    for ind, dat in mult_data.items():\n        mdata.append(dat)\n    _h = TR % ''.join([TH % x for x in header])\n    _data = []\n    _rsns = {}\n    _adf = []\n    for i, d in enumerate(mdata):\n        if d[1] in _rsns:\n            _rsns[d[1]] += 1\n        else:\n            _rsns.setdefault(d[1], 1)\n    for i, d in enumerate(mdata):\n        if _rsns[d[1]] > 1:\n            if d[1] in _adf:\n                _data.append(TR % ''.join([TD % x for x in d[2:]]))\n            else:\n                _d = [TDR % (_rsns[d[1]], x) for x in d[:2]]\n                _d1 = [TD % x for x in d[2:]]\n                _data.append(TR % ''.join(_d + _d1))\n                _adf.append(d[1])\n        else:\n            _data.append(TR % ''.join([TD % x for x in d]))\n    if not filepath:\n        filepath = 'performance_test_report'\n    if html_report:\n        generate_html_report(filepath, append)\n    
generate_csv_report(filepath, append)\n"
  },
  {
    "path": "test/fw/bin/pbs_config",
    "content": "#!/usr/bin/env python3\n# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport getopt\nimport logging\nimport logging.config\nimport errno\n\nimport ptl\nfrom ptl.lib.pbs_testlib import *\nfrom ptl.utils.pbs_testsuite import PBS_GROUPS\n\n# trap SIGINT and SIGPIPE\n\n\ndef trap_exceptions(etype, value, tb):\n    sys.excepthook = sys.__excepthook__\n    if issubclass(etype, KeyboardInterrupt):\n        pass\n    elif issubclass(etype, IOError) and value.errno == errno.EPIPE:\n        pass\n    else:\n        sys.__excepthook__(etype, value, tb)\n\n\nsys.excepthook = trap_exceptions\n\n\ndef usage():\n    msg = []\n    msg += ['Usage: ' + os.path.basename(sys.argv[0]) + ' [OPTION]\\n\\n']\n    msg += ['-t <hostnames>: comma-separated hosts to operate on. 
Defaults '\n            'to localhost\\n']\n    msg += ['-l <log level>: one of DEBUG, INFO, ERROR, FATAL, WARNING\\n']\n    msg += ['\\n']\n    msg += ['--log-conf=<file>: logging config file\\n']\n    msg += ['--snap=<pbs_snapshot>: Mimic pbs_snapshot\\n']\n    msg += ['\\t --acct-logs=<path to acct logs>: path to accounting logs'\n            ', used to create users & groups\\n']\n    msg += ['--revert-config: revert services to their default ' +\n            'configuration\\n']\n    msg += ['\\t --scheduler: operate on scheduler\\n']\n    msg += ['\\t --server: operate on server\\n']\n    msg += ['\\t --mom: operate on MoM\\n']\n    msg += ['\\t --del-hooks=<True|False>: If True delete hooks.']\n    msg += [' Defaults to True\\n']\n    msg += ['\\t --del-queues=<True|False>: Delete non-default queues.']\n    msg += [' Defaults to True\\n']\n    msg += ['--save-config=<config>: save configuration to file\\n']\n    msg += ['\\t revert-config and save-config can operate on the following\\n']\n    msg += ['\\t --scheduler: operate on scheduler\\n']\n    msg += ['\\t --server: operate on server\\n']\n    msg += ['\\t --mom: operate on MoM\\n']\n    msg += ['--load-config=<config>: load configuration from saved file.\\n']\n    msg += ['\\n']\n\n    msg += ['--vnodify: define vnodes using the following suboptions:\\n']\n    msg += ['\\t-a <attrs>: comma separated list of attributes to set ' +\n            'on vnodes.\\n']\n    msg += ['\\t            format: <name>=<value>. Defaults to 8 cpus ' +\n            '8gb of mem.\\n']\n    msg += ['\\t-A: set additive mode, leave vnode definitions in ' +\n            'place. \\n']\n    msg += ['\\t    Default is to clear all existing vnode definition ' +\n            'files.\\n']\n    msg += ['\\t-d <y|n>: if y, delete all server nodes. 
Defaults to y.\\n']\n    msg += ['\\t-f <filename>: use output of pbsnodes -av from file as ' +\n            'definition\\n']\n    msg += ['\\t-P <num>: number of vnodes per host\\n']\n    msg += ['\\t-o <filename>: output vnode definition to filename\\n']\n    msg += ['\\t-M <mom>: MoM to operate on, format <host>@<path/to/conf>.\\n'\n            '\\t          Defaults to localhost.\\n']\n    msg += ['\\t-N <num vnodes>: number of vnodes to create. No default.\\n']\n    msg += ['\\t-n <name>: name of the natural vnode to create. ' +\n            'Defaults to MoM FQDN\\n']\n    msg += ['\\t-p <name>: prefix of name of node to create. ' +\n            'Output format: \\n']\n    msg += ['\\t           prefix followed by [<num]. Defaults to vnode\\n']\n    msg += ['\\t-r <y|n>: restart MoM or not, defaults to y\\n']\n    msg += ['\\t-s: if set, share vnodes on the host. ' +\n            'Default is \"standalone\" hosts\\n']\n    msg += ['\\t-u: if set, allocate the natural vnode\\n']\n    msg += ['\\n']\n\n    msg += ['--multi-mom: Define and create multiple MoMs on a host\\n']\n    msg += ['\\t--create=<num>: number of MoMs to create. No default.\\n']\n    msg += ['\\t--restart=<[seq]>: restart MoMs in sequence\\n']\n    msg += ['\\t--stop=<[seq]>: stop MoMs in sequence\\n']\n    msg += ['\\t--serverhost=<host>: hostname of server, defaults to '\n            'localhost\\n']\n    msg += ['\\t--home-prefix=<path>: prefix to PBS_HOME directory, defaults\\n'\n            '\\t                      to /var/spool/PBS_m\\n']\n    msg += ['\\t--conf-prefix=<path>: prefix to pbs.conf file. Defaults to\\n'\n            '\\t                      /etc/pbs.conf.m\\n']\n    msg += ['\\t--init-port=<number>: initial port to allocate. Defaults to\\n'\n            '\\t                      15011\\n']\n    msg += ['\\t--step-port=<number>: step for port sequence. 
Defaults to 2\\n']\n    msg += ['\\n']\n    msg += ['--switch-version=<version>: switch to a given installed ' +\n            'version of PBS\\n']\n    msg += ['\\tcurrently only works for \"vanilla\" installs, i.e, not '\n            'developer installs\\n']\n    msg += ['\\tbased on /etc/pbs.conf and \"default\" PBS_EXEC\\n']\n    msg += ['\\n']\n    msg += ['--check-ug: verifies whether test users and groups are ']\n    msg += [' defined as expected.\\n               Note that -t option '\n            'will be ignored.\\n']\n    msg += ['--make-ug: create users and groups to match what is expected\\n']\n    msg += ['           Note that -t option will be ignored\\n']\n    msg += ['--del-ug: delete users and groups which is expected for PTL\\n']\n    msg += ['           Note that -t option will be ignored\\n']\n    msg += ['--version: print version number and exit\\n']\n\n    print(\"\".join(msg))\n\n\ndef process_config(hosts, process_obj, conf_file=None, type='default',\n                   delqueues=False, delhooks=False):\n    for host in hosts:\n        svr_obj = Server(host)\n        if MGR_OBJ_SCHED in process_obj:\n            if type == 'default':\n                Scheduler(svr_obj, host).revert_to_defaults()\n            elif type == 'load':\n                Scheduler(svr_obj, host).load_configuration(conf_file)\n            elif type == 'save':\n                Scheduler(svr_obj, host).save_configuration(conf_file)\n        if MGR_OBJ_SERVER in process_obj:\n            if type == 'default':\n                Server(host).revert_to_defaults(delhooks=delhooks,\n                                                delqueues=delqueues)\n            elif type == 'load':\n                Server(host).load_configuration(conf_file)\n            elif type == 'save':\n                Server(host).save_configuration(conf_file)\n        if MGR_OBJ_NODE in process_obj:\n            if type == 'default':\n                MoM(svr_obj, host).revert_to_defaults()\n           
 elif type == 'load':\n                MoM(svr_obj, host).load_configuration(conf_file)\n            elif type == 'save':\n                MoM(svr_obj, host).save_configuration(conf_file)\n\n\ndef process_attributes(attrs):\n    nattrs = {}\n    for a in attrs.split(','):\n        if '=' not in a:\n            logging.error('attributes must be of the form' +\n                          ' <name>=<value>')\n            sys.exit(1)\n        k, v = a.split('=', 1)\n        nattrs[k] = v\n    return nattrs\n\n\ndef common_users_groups_ops():\n    du = DshUtils()\n    g_create = []\n    u_create = []\n    gm_expected = {}\n    gm_actual = du.group_memberships([str(g) for g in PBS_GROUPS])\n    for g in PBS_GROUPS:\n        gm_expected[g] = g.users\n    for k, v in gm_expected.items():\n        if str(k) not in gm_actual:\n            g_create.append(k)\n            for _u in v:\n                if _u not in u_create:\n                    u_create.append(_u)\n        else:\n            for _u in v:\n                if ((str(_u) not in gm_actual[str(k)]) and\n                        (_u not in u_create)):\n                    u_create.append(_u)\n    return (gm_expected, gm_actual, g_create, u_create)\n\n\ndef check_users_groups():\n    gm_expected, gm_actual, g_create, u_create = common_users_groups_ops()\n    if ((len(g_create) > 0) or (len(u_create) > 0)):\n        out = ['Expected (format is <group name>: <user> [, <user2>...]) 
']\n        for k, v in gm_expected.items():\n            out += [str(k) + ': ' + ', '.join([str(u) for u in v])]\n        out += ['\\n', 'Actual: ']\n        for k, v in gm_actual.items():\n            out += [k + ': ' + ', '.join(v)]\n        print('\\n'.join(out))\n        return False\n    else:\n        return True\n\n\ndef make_users_groups():\n    du = DshUtils()\n    _, _, g_create, u_create = common_users_groups_ops()\n    for g in g_create:\n        du.groupadd(g, g.gid, logerr=False)\n    for u in u_create:\n        du.useradd(name=u, uid=u.uid, gid=u.groups[0], groups=u.groups,\n                   logerr=False)\n    return True\n\n\ndef delete_users_groups():\n    du = DshUtils()\n    _, gm_actual, _, _ = common_users_groups_ops()\n    for v in gm_actual.values():\n        for u in v:\n            du.userdel(u, logerr=False)\n    for k in gm_actual.keys():\n        du.groupdel(k, logerr=False)\n    return True\n\n\nif __name__ == '__main__':\n\n    if len(sys.argv) < 2:\n        usage()\n        sys.exit(0)\n\n    # vnodify options\n    vnodify = False\n    vnodeprefix = 'vnode'\n    num_vnodes = None\n    additive = False\n    sharedhost = False\n    filename = None\n    attrs = \"resources_available.ncpus=8,resources_available.mem=8gb\"\n    hostname = None\n    conf_file = None\n    restart = True\n    delall = True\n    natvnode = None\n    usenatvnode = False\n    vdefname = None\n    # end of vnodify options\n\n    hosts = None\n    revert = False\n    op = None\n    loadconf = None\n    saveconf = None\n    logconf = None\n    vnodes_per_host = 1\n    delqueues = True\n    delhooks = True\n    lvl = logging.INFO\n\n    switchversion = None\n\n    check_ug = False\n    make_ug = False\n    del_ug = False\n\n    multimom = False\n    num_moms = None\n    restart_moms = None\n    stop_moms = None\n    clienthost = None\n    serverhost = None\n    init_port = 15011\n    step_port = 2\n\n    import_jobs = False\n    home_prefix = 'PBS_m'\n    
conf_prefix = 'pbs.conf_m'\n    acct_logs = None\n\n    as_snap = None\n\n    process_obj = []\n    vnodify_args = \"a:d:f:N:n:o:P:p:M:l:r:v:Asu\"\n    generic_args = \"l:t:h\"\n    largs = [\"scheduler\", \"server\", \"mom\", \"revert-config\", \"load-config=\",\n             \"save-config=\", \"vnodify\", \"import-jobs\", \"del-ug\",\n             \"switch-version=\", \"log-conf=\", \"check-ug\", \"del-hooks=\",\n             \"del-queues=\", \"version\", \"make-ug\", \"multi-mom\", \"clienthost=\",\n             \"home-prefix=\", \"conf-prefix=\", \"serverhost=\", \"init-port=\",\n             \"step-port=\", \"snap=\", \"acct-logs=\", \"create=\", \"restart=\",\n             \"stop=\"]\n\n    try:\n        opts, args = getopt.getopt(sys.argv[1:], vnodify_args + generic_args,\n                                   largs)\n    except getopt.GetoptError:\n        usage()\n        sys.exit(1)\n\n    for o, val in opts:\n        if o == '-l':\n            lvl = CliUtils().get_logging_level(val)\n        elif o == '-t':\n            hosts = val\n        elif o == '-a':\n            attrs = val\n        elif o == '-A':\n            additive = True\n        elif o == '-d':\n            if val.startswith('y'):\n                delall = True\n            else:\n                delall = False\n        elif o == '-f':\n            filename = CliUtils.expand_abs_path(val)\n        elif o == '-P':\n            vnodes_per_host = int(val)\n        elif o == '-p':\n            vnodeprefix = val\n        elif o == '-M':\n            if '@' in val:\n                (hostname, conf_file) = val.split('@')\n            else:\n                hostname = val\n        elif o == '-N':\n            num_vnodes = int(val)\n        elif o == '-o':\n            vdefname = val\n        elif o == '-s':\n            sharedhost = True\n        elif o == '-r':\n            if val.startswith('y'):\n                restart = True\n        elif o == '-n':\n            natvnode = val\n        elif o == 
'-u':\n            usenatvnode = True\n        elif o == '--check-ug':\n            check_ug = True\n        elif o == '--make-ug':\n            make_ug = True\n        elif o == '--del-ug':\n            del_ug = True\n        elif o == '--del-hooks':\n            delhooks = eval(val)\n        elif o == '--del-queues':\n            delqueues = eval(val)\n        elif o == '--snap':\n            as_snap = CliUtils.expand_abs_path(val)\n        elif o == '--acct-logs':\n            confirm = input(\"--acct-logs will create users & groups \"\n                            \"from accounting log trace\\n\"\n                            \"Ok to do so? (Y/N)\")\n            if confirm in (\"Y\", \"y\"):\n                acct_logs = CliUtils.expand_abs_path(val)\n            else:\n                acct_logs = None\n        elif o == '--import-jobs':\n            import_jobs = True\n        elif o == '--log-conf':\n            logconf = val\n        elif o == '--multi-mom':\n            multimom = True\n        elif o == '--create':\n            num_moms = int(val)\n        elif o == '--home-prefix':\n            home_prefix = val\n        elif o == '--conf-prefix':\n            conf_prefix = val\n        elif o == '--scheduler':\n            process_obj.append(MGR_OBJ_SCHED)\n        elif o == '--server':\n            process_obj.append(MGR_OBJ_SERVER)\n        elif o == '--mom':\n            process_obj.append(MGR_OBJ_NODE)\n        elif o == '--restart':\n            restart_moms = eval(val, {}, {})\n        elif o == '--stop':\n            stop_moms = eval(val, {}, {})\n        elif o == '--vnodify':\n            vnodify = True\n        elif o == '--revert-config':\n            revert = True\n        elif o == '--load-config':\n            loadconf = CliUtils.expand_abs_path(val)\n        elif o == '--save-config':\n            saveconf = CliUtils.expand_abs_path(val)\n        elif o == '--serverhost':\n            serverhost = val\n        elif o == 
'--init-port':\n            init_port = int(val)\n        elif o == '--step-port':\n            step_port = int(val)\n        elif o == '--switch-version':\n            switchversion = val\n        elif o == '--version':\n            print(ptl.__version__)\n            sys.exit(0)\n        else:\n            sys.stderr.write(\"Unrecognized option \" + o + \"\\n\")\n            usage()\n            sys.exit(1)\n\n    PtlConfig()\n\n    if logconf:\n        logging.config.fileConfig(logconf)\n    else:\n        logging.basicConfig(level=lvl)\n\n    if hosts is None:\n        hosts = [socket.gethostname()]\n    else:\n        hosts = hosts.split(',')\n\n    if check_ug:\n        rv = check_users_groups()\n        if rv:\n            sys.exit(0)\n        sys.exit(1)\n\n    if del_ug:\n        rv = delete_users_groups()\n        if rv:\n            sys.exit(0)\n        sys.exit(1)\n\n    if make_ug:\n        rv = make_users_groups()\n        if rv:\n            sys.exit(0)\n        sys.exit(1)\n\n    if revert:\n        process_config(hosts, process_obj, type='default', delqueues=delqueues,\n                       delhooks=delhooks)\n    elif loadconf:\n        # when loading configuration apply the saved configuration based\n        # on what was saved, regardless of which object types were passed in\n        allobjs = [MGR_OBJ_SCHED, MGR_OBJ_SERVER, MGR_OBJ_NODE]\n        process_config(hosts, allobjs, loadconf, type='load',\n                       delqueues=delqueues, delhooks=delhooks)\n    elif saveconf:\n        if os.path.isfile(saveconf):\n            answer = input('file ' + saveconf + ' exists, overwrite? 
'\n                           '[y]/n: ')\n            if answer == 'n':\n                sys.exit(1)\n        if not process_obj:\n            process_obj = [MGR_OBJ_SERVER, MGR_OBJ_SCHED, MGR_OBJ_NODE]\n        process_config(hosts, process_obj, saveconf, type='save',\n                       delqueues=delqueues, delhooks=delhooks)\n    elif vnodify:\n        if filename:\n            vdef = BatchUtils().file_to_vnodedef(filename)\n            if vdef:\n                svr_obj = Server(hostname, pbsconf_file=conf_file)\n                MoM(svr_obj, hostname,\n                    pbsconf_file=conf_file).insert_vnode_def(vdef)\n        elif num_vnodes is None:\n            logging.error('A number of vnodes to create is required\\n')\n            sys.exit(1)\n        else:\n            nattrs = process_attributes(attrs)\n            for hostname in hosts:\n                svr_obj = Server(hostname, pbsconf_file=conf_file)\n                m = MoM(svr_obj, hostname, pbsconf_file=conf_file)\n                m.create_vnodes(nattrs, num_vnodes,\n                                additive, sharedhost, restart, delall,\n                                natvnode, usenatvnode, fname=vdefname,\n                                vnodes_per_host=vnodes_per_host)\n    elif switchversion:\n        pi = PBSInitServices()\n        for host in hosts:\n            pi.switch_version(host, switchversion)\n\n    elif multimom:\n        if num_moms is not None:\n            if os.getuid() != 0:\n                logging.error('Must be run as root')\n                sys.exit(1)\n            du = DshUtils()\n            conf = du.parse_pbs_config(serverhost)\n            serverhost = DshUtils().get_pbs_server_name(conf)\n            s = Server(serverhost)\n            nattrs = process_attributes(attrs)\n            s.create_moms(num=num_moms, attrib=nattrs, conf_prefix=conf_prefix,\n                          home_prefix=home_prefix, momhosts=hosts,\n                          init_port=init_port, 
step_port=step_port)\n        if (restart_moms or stop_moms):\n            mom_op = []\n            if restart_moms:\n                mom_op = restart_moms\n            if stop_moms:\n                mom_op += stop_moms\n            for i in mom_op:\n                c = os.path.join('/etc', conf_prefix + str(i))\n                pi = PBSInitServices(serverhost, conf=c)\n                if restart_moms:\n                    ret = pi.restart()\n                if stop_moms:\n                    ret = pi.stop()\n                if ret['rc'] != 0:\n                    logging.error(ret['err'])\n                del pi\n    elif as_snap is not None:\n        if os.getuid() != 0:\n            logging.error('Must be run as root')\n            sys.exit(1)\n        Server(snap=as_snap).clusterize(conf_file, hosts, acct_logs=acct_logs,\n                                        import_jobs=import_jobs)\n"
  },
  {
    "path": "test/fw/bin/pbs_cov",
    "content": "#!/usr/bin/env python3\n# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport os\nimport sys\nimport getopt\nimport logging\nimport logging.config\nimport errno\n\nimport ptl\nfrom ptl.utils.pbs_cliutils import CliUtils\nfrom ptl.utils.pbs_covutils import LcovUtils\nfrom ptl.lib.pbs_testlib import PtlConfig\n\n\n# trap SIGINT and SIGPIPE\ndef trap_exceptions(etype, value, tb):\n    sys.excepthook = sys.__excepthook__\n    if issubclass(etype, IOError) and value.errno == errno.EPIPE:\n        pass\n    else:\n        sys.__excepthook__(etype, value, tb)\n\n\nsys.excepthook = trap_exceptions\n\n\ndef usage():\n    msg = []\n    msg += ['Usage: ' + os.path.basename(sys.argv[0]) + ' [OPTION]\\n\\n']\n    msg += ['    code coverage tools\\n\\n']\n    msg += ['-c: capture coverage\\n']\n    msg += ['-d <path>: path to directory that contains coverage data\\n']\n    msg += ['-i: initialize coverage\\n']\n    msg += ['-o <path>: path to output directory\\n']\n    msg += ['-m <f1,f2>: merge comma-separated coverage files\\n']\n    msg += ['-r <path>: path to file to remove coverage patterns from\\n']\n    msg += ['-z: reset coverage counters\\n']\n    msg += ['--exclude=<p1,p2>: comma-separated pattern of files to exclude\\n']\n    msg += ['--summarize: summarize coverage analysis\\n']\n    msg += ['--html: Generate HTML from coverage analysis\\n']\n    msg += ['--no-source: don\\'t include PBS source in coverage analysis']\n    msg 
+= [' (Must be used with --html)\\n']\n    msg += ['--baseurl=<url>: use <url> as baseurl in html report']\n    msg += [' (Must be used with --html)\\n']\n    msg += [' By default, PBS source is included in coverage analysis\\n']\n    msg += ['--log-conf=<file>: logging config file\\n']\n    msg += ['--version: print version number and exit\\n']\n\n    print(\"\".join(msg))\n\n\nif __name__ == '__main__':\n\n    if len(sys.argv) < 2:\n        usage()\n        sys.exit(1)\n\n    data_dir = None\n    capture = None\n    initialize = None\n    merge = None\n    reset = None\n    remove = None\n    out = None\n    html_nosrc = False\n    html = False\n    html_baseurl = None\n    exclude = ['\"*work/gSOAP/*\"', '\"*/pbs/doc/*\"', 'lex.yy.c',\n               'pbs_ifl_wrap.c', 'usr/include/*', 'unsupported/*']\n\n    summarize = None\n    lvl = logging.INFO\n    logconf = None\n\n    lopts = [\"version\", \"exclude=\", \"summarize\", 'no-source', 'html']\n    lopts += ['baseurl=', 'log-conf=']\n\n    try:\n        opts, args = getopt.getopt(sys.argv[1:], \"ciszd:m:o:l:r:h\", lopts)\n    except getopt.GetoptError:\n        usage()\n        sys.exit(1)\n\n    for o, val in opts:\n        if o == '-d':\n            data_dir = CliUtils.expand_abs_path(val)\n        elif o == '-c':\n            capture = True\n        elif o == '-o':\n            out = CliUtils.expand_abs_path(val)\n        elif o == '-i':\n            initialize = True\n        elif o == '-l':\n            lvl = CliUtils().get_logging_level(val)\n        elif o == '-m':\n            merge = val\n        elif o == '-r':\n            remove = CliUtils.expand_abs_path(val)\n        elif o == '-z':\n            reset = True\n        elif o == '-h':\n            usage()\n            sys.exit(0)\n        elif o == '--exclude':\n            exclude = val.split(',')\n        elif o == '--log-conf':\n            logconf = val\n        elif o in ('-s', '--summarize'):\n            summarize = True\n        elif o == '--html':\n            html = True\n        elif o == '--no-source':\n            html_nosrc = True\n        elif o == '--baseurl':\n            html_baseurl = val\n        elif o == '--version':\n            print(ptl.__version__)\n            sys.exit(0)\n        else:\n            sys.stderr.write(\"Unrecognized option \" + o + \"\\n\")\n            usage()\n            sys.exit(1)\n\n    PtlConfig()\n\n    if logconf:\n        logging.config.fileConfig(logconf)\n    else:\n        logging.basicConfig(level=lvl)\n\n    if html_nosrc and not html:\n        logging.error('--no-source must be used with --html')\n        sys.exit(1)\n\n    if html_baseurl and not html:\n        logging.error('--baseurl must be used with --html')\n        sys.exit(1)\n\n    cu = LcovUtils(cov_out=out, data_dir=data_dir, html_nosrc=html_nosrc,\n                   html_baseurl=html_baseurl)\n\n    if reset:\n        cu.zero_coverage()\n    if initialize:\n        cu.initialize_coverage()\n    if capture:\n        cu.capture_coverage()\n    if merge is not None:\n        for m in merge.split(','):\n            cu.add_trace(m)\n        cu.merge_coverage_traces(exclude=exclude)\n    if html:\n        cu.generate_html()\n        if html_baseurl:\n            cu.change_baseurl()\n    if summarize:\n        cu.summarize_coverage()\n"
  },
  {
    "path": "test/fw/bin/pbs_loganalyzer",
    "content": "#!/usr/bin/env python3\n# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport getopt\nimport sys\nimport os\nimport traceback\nimport logging\nimport logging.config\nimport time\nimport errno\nimport ptl\nfrom ptl.utils.pbs_logutils import PBSLogUtils, PBSLogAnalyzer\nfrom ptl.utils.pbs_cliutils import CliUtils\nfrom ptl.utils.plugins.ptl_test_db import PTLTestDb\nfrom ptl.lib.pbs_testlib import PtlConfig\n\n\n# trap SIGINT and SIGPIPE\ndef trap_exceptions(etype, value, tb):\n    sys.excepthook = sys.__excepthook__\n    if issubclass(etype, KeyboardInterrupt):\n        pass\n    elif issubclass(etype, IOError) and value.errno == errno.EPIPE:\n        pass\n    else:\n        sys.__excepthook__(etype, value, tb)\n\n\nsys.excepthook = trap_exceptions\n\n\ndef usage():\n    msg = []\n    msg += ['Usage: ' + os.path.basename(sys.argv[0]).split('.pyc')[0]]\n    msg += [' [OPTION]\\n\\n']\n    msg += ['  Analyze PBS logs and return various throughput metrics\\n\\n']\n    msg += ['-a <acctlog>: path to accounting log file/dir to analyze\\n']\n    msg += ['-b: process log from corresponding begin/start time\\n']\n    msg += ['    format: %m/%d/%Y %H:%M:%S\\n']\n    msg += ['-c: output cycle summary\\n']\n    msg += ['-d <diag>: path to a pbs_diag directory\\n']\n    msg += ['-e: process log up to corresponding end time\\n']\n    msg += ['    format: %m/%d/%Y %H:%M:%S\\n']\n    msg += ['-f <log>: generic log file for analysis\\n']\n  
  msg += ['-h: display usage information\\n']\n    msg += ['-L <level>: logging level (e.g., INFO, DEBUG)\\n']\n    msg += ['-t <hostname>: hostname to analyze. Defaults to the FQDN of '\n            'the local host\\n']\n    msg += ['-l <schedlog>: path to scheduler log file/dir to analyze\\n']\n    msg += ['-m <momlog>: path to mom log file/dir to analyze\\n']\n    msg += ['-s <serverlog>: path to server log file/dir to analyze\\n']\n    msg += ['-S: show per job scheduling details, time to '\n            'run/discard/calendar\\n']\n    msg += ['-U: show utilization. Requires paths to jobs and nodes info\\n']\n    msg += ['--estimated-info: show job start time estimate info. '\n            'Requires scheduler log(s)\\n']\n    msg += ['--estimated-info-only: write only estimated info to the DB.'\n            ' Requires --db-out\\n']\n    msg += ['--last-week: analyze logs of the last 7 days\\n']\n    msg += ['--last-month: analyze logs of the last month\\n']\n    msg += ['--re-interval=<regexp>: report time interval between '\n            'occurrences of regexp\\n']\n    msg += ['--re-frequency=<seconds>: report frequency of occurrences of '\n            'the re-interval\\n']\n    msg += ['                          expression for every <seconds>\\n']\n    msg += ['--silent: do not display progress bar. 
Defaults to False\\n']\n    msg += ['--log-conf=<file>: logging config file\\n']\n    msg += ['--nodes-file=<path>: path to file with output of pbsnodes -av\\n']\n    msg += ['--jobs-file=<path>: path to file with output of qstat -f\\n']\n    msg += ['--db-out=<file>: send results to db file\\n']\n    msg += ['--db-type=<type>: database type\\n']\n    msg += ['--db-access=<path>: Path to a file that defines db options '\n            '(PostgreSQL only)\\n']\n    msg += ['--version: print version number and exit\\n']\n\n    print(\"\".join(msg))\n\n\nif __name__ == '__main__':\n    if len(sys.argv) < 2:\n        usage()\n        sys.exit(0)\n\n    diag = None\n    schedulerlog = None\n    serverlog = None\n    momlog = None\n    acctlog = None\n    genericlog = None\n    hostname = None\n    sj = False\n    compact = False\n    begin = None\n    end = None\n    cyclesummary = False\n    nodesfile = None\n    jobsfile = None\n    utilization = None\n    silent = False\n    logconf = None\n    estimated_info = False\n    estimated_info_only = False\n    dbout = None\n    dbtype = None\n    dbaccess = None\n    re_interval = None\n    re_frequency = None\n    re_conditional = None\n    json_on = False\n    level = logging.FATAL\n    logutils = PBSLogUtils()\n    dbutils = PTLTestDb()\n\n    try:\n        shortopt = \"a:b:d:e:f:t:l:L:s:m:cCShU\"\n        longopt = [\"nodes-file=\", \"jobs-file=\", \"version\", \"log-conf=\",\n                   \"estimated-info\", \"db-out=\", \"json\", \"re-interval=\",\n                   \"re-frequency=\", \"last-week\", \"last-month\",\n                   \"re-conditional=\", \"estimated-info-only\", \"silent\",\n                   \"db-type=\", \"db-access=\"]\n        opts, args = getopt.getopt(sys.argv[1:], shortopt, longopt)\n    except Exception:\n        usage()\n        sys.exit(1)\n\n    for o, val in opts:\n        if o == '-a':\n            acctlog = CliUtils.expand_abs_path(val)\n        elif o == '-b':\n            try:\n                begin = logutils.convert_date_time(val)\n            except Exception:\n                print('Error converting time, expected format '\n                      '%m/%d/%Y %H:%M:%S')\n                sys.exit(1)\n        elif o == '-e':\n            try:\n                end = logutils.convert_date_time(val)\n            except Exception:\n                print('Error converting time, expected format '\n                      '%m/%d/%Y %H:%M:%S')\n                traceback.print_exc()\n                sys.exit(1)\n        elif o == '-d':\n            diag = CliUtils.expand_abs_path(val)\n        elif o == '-f':\n            genericlog = CliUtils.expand_abs_path(val)\n        elif o == '-t':\n            hostname = val\n        elif o == '-l':\n            schedulerlog = CliUtils.expand_abs_path(val)\n        elif o == '-s':\n            serverlog = CliUtils.expand_abs_path(val)\n        elif o == '-m':\n            momlog = CliUtils.expand_abs_path(val)\n        elif o == '-c':\n            cyclesummary = True\n        elif o == '-C':\n            compact = True\n        elif o == '-L':\n            level = CliUtils.get_logging_level(val)\n        elif o == '-S':\n            sj = True\n        elif o == '-U':\n            utilization = True\n        elif o == '--db-out':\n            dbout = CliUtils.expand_abs_path(val)\n        elif o == '--db-type':\n            dbtype = val\n        elif o == '--db-access':\n            dbaccess = CliUtils.expand_abs_path(val)\n        elif o == '--estimated-info':\n            estimated_info = True\n        elif o == '--estimated-info-only':\n            estimated_info_only = True\n        elif o == '--json':\n            json_on = True\n        elif o == '--last-week':\n            s = time.localtime(time.time() - (7 * 24 * 3600))\n            begin = time.mktime(time.strptime(time.strftime(\"%m/%d/%Y\", s),\n                                              \"%m/%d/%Y\"))\n            end = time.time()\n        
elif o == '--last-month':\n            s = time.localtime(time.time() - (30 * 24 * 3600))\n            begin = time.mktime(time.strptime(time.strftime(\"%m/%d/%Y\", s),\n                                              \"%m/%d/%Y\"))\n            end = time.time()\n        elif o == '--log-conf':\n            logconf = CliUtils.expand_abs_path(val)\n        elif o == '--nodes-file':\n            nodesfile = CliUtils.expand_abs_path(val)\n        elif o == '--jobs-file':\n            jobsfile = CliUtils.expand_abs_path(val)\n        elif o == '--re-conditional':\n            re_conditional = eval(val, {}, {})\n        elif o == '--re-interval':\n            re_interval = val\n        elif o == '--silent':\n            silent = True\n        elif o == '--re-frequency':\n            re_frequency = int(val)\n        elif o == '--version':\n            print(ptl.__version__)\n            sys.exit(0)\n        elif o == '-h':\n            usage()\n            sys.exit(0)\n        else:\n            sys.stderr.write(\"Unrecognized option \" + o)\n            usage()\n            sys.exit(1)\n\n    if logconf:\n        logging.config.fileConfig(logconf)\n    else:\n        logging.basicConfig(level=level)\n\n    PtlConfig()\n\n    if diag:\n        if nodesfile is None:\n            if os.path.isfile(os.path.join(diag, 'pbsnodes_va.out')):\n                nodesfile = os.path.join(diag, 'pbsnodes_va.out')\n        if jobsfile is None:\n            if os.path.isfile(os.path.join(diag, 'qstat_f.out')):\n                jobsfile = os.path.join(diag, 'qstat_f.out')\n\n    if ((re_interval is not None or re_conditional is not None) and\n            genericlog is None):\n        if schedulerlog is not None:\n            genericlog = schedulerlog\n            schedulerlog = None\n        elif serverlog is not None:\n            genericlog = serverlog\n            serverlog = None\n        elif momlog is not None:\n            genericlog = momlog\n            momlog = None\n        
elif acctlog is not None:\n            genericlog = acctlog\n            acctlog = None\n\n    show_progress = not silent\n    pla = PBSLogAnalyzer(schedulerlog, serverlog, momlog, acctlog,\n                         genericlog, hostname, show_progress)\n\n    if utilization:\n        if acctlog is None:\n            logging.error(\"Accounting log is required to compute utilization\")\n            sys.exit(1)\n        pla.accounting.enable_utilization_parsing(hostname, nodesfile,\n                                                  jobsfile)\n\n    if re_interval is not None:\n        pla.set_custom_match(re_interval, re_frequency)\n\n    if re_conditional is not None:\n        pla.set_conditional_match(re_conditional)\n\n    if estimated_info or estimated_info_only:\n        if schedulerlog is None:\n            logging.error(\"Scheduler log is required for estimated start time \"\n                          \"analysis\")\n            sys.exit(1)\n        pla.scheduler.estimated_parsing_enabled = True\n        if estimated_info_only:\n            pla.scheduler.parse_estimated_only = True\n\n    info = pla.analyze_logs(start=begin, end=end, showjob=sj)\n\n    if genericlog:\n        dbutils.process_output(pla.info)\n\n    # Drift analysis and custom regex matching require additional\n    # post-processing and can't currently be passed through to JSON\n    if json_on:\n        if cyclesummary:\n            info['scheduler'] = info['scheduler']['summary']\n        print(CliUtils.__json__(info))\n        sys.exit(0)\n\n    if acctlog:\n        dbutils.process_output(info['accounting'], dbout, dbtype, dbaccess,\n                               name=acctlog, logtype='accounting')\n    if schedulerlog:\n        dbutils.process_output(info['scheduler'], dbout, dbtype, dbaccess,\n                               name=schedulerlog, logtype='scheduler',\n                               summary=cyclesummary)\n    if serverlog:\n        dbutils.process_output(info['server'], dbout, 
dbtype, dbaccess,\n                               name=serverlog, logtype='server')\n    if momlog:\n        dbutils.process_output(info['mom'], dbout, dbtype, dbaccess,\n                               name=momlog, logtype='mom')\n"
  },
  {
    "path": "test/fw/bin/pbs_py_spawn",
    "content": "#!/usr/bin/env python3\n# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport sys\nimport os\nimport getopt\nimport logging\nimport tempfile\nimport errno\n\nimport ptl\ntry:\n    from ptl.lib.pbs_ifl import pbs_py_spawn\n    from ptl.lib.pbs_testlib import Server, MoM, JOB, ResourceResv\n    from ptl.utils.pbs_cliutils import CliUtils\nexcept Exception:\n    sys.stderr.write(\"API wrapping is required, see pbs_swigify utility\")\n    exit(1)\n\n\n# trap SIGINT and SIGPIPE\ndef trap_exceptions(etype, value, tb):\n    sys.excepthook = sys.__excepthook__\n    if issubclass(etype, KeyboardInterrupt):\n        pass\n    elif issubclass(etype, IOError) and value.errno == errno.EPIPE:\n        pass\n    else:\n        sys.__excepthook__(etype, value, tb)\n\n\nsys.excepthook = trap_exceptions\n\n# Helper script to allow a py_spawned script to run detached from the\n# session associated to a PBS Job\n\n# sys.argv[1] must be a valid path to the pbs_attach command\n# sys.argv[2] must be a valid job identifier\n# sys.argv[3:] must be a valid path to a Python script and args to run in\n# the background. 
The first output of this script must be its PID (e.g., a\n# shell script would echo $$)\n_wrapper_body = \"\"\"import os\nimport sys\nfrom subprocess import Popen, PIPE\n\nif len(sys.argv) < 3:\n    exit(1)\n\n(r, w) = os.pipe()\np = os.fork()\nif p < 0:\n    exit(1)\nelif p > 0:\n    os.close(w)\n    # os.read() returns bytes; decode before passing to pbs_attach\n    pid = os.read(r, 256).decode().strip()\n    p = Popen([sys.argv[1], \"-j\", sys.argv[2], \"-p\", pid], stdout=PIPE)\n    p.communicate()\n    os.close(r)\n    exit(p.returncode)\n# child\nos.close(r)\np = Popen([\"setsid\"] + sys.argv[3:], stdout=PIPE)\nos.write(w, p.stdout.readline())\nos.close(w)\n\"\"\"\n\n\ndef usage():\n    msg = []\n    msg += ['Usage: ' + os.path.basename(sys.argv[0]) + ' [OPTION] '\n            '<path> <args>\\n\\n']\n    msg += ['   Run a py_spawn command\\n\\n']\n    msg += ['-e <envs>: comma-separated list of environment options\\n']\n    msg += ['-j <jobid>: id of the job to which the spawned process is '\n            'attached\\n']\n    msg += ['-l <level>: logging level, INFO, DEBUG, ..., defaults to ERROR\\n']\n    msg += ['-t <hostname>: target hostname to operate on\\n']\n    msg += ['--detach: If set, run Python script in background\\n']\n    msg += ['--wrapper=<path>: Optional path to wrapper script, to use with '\n            'detach\\n']\n    msg += ['--version: print version number and exit\\n']\n\n    print(\"\".join(msg))\n\n\nif __name__ == '__main__':\n\n    if len(sys.argv) < 2:\n        usage()\n        sys.exit(0)\n\n    jobid = None\n    envs = []\n    detach = False\n    lvl = logging.ERROR\n    wrapper = None\n    cleanup_wrapper = False\n    hostname = None\n\n    try:\n        opts, args = getopt.getopt(sys.argv[1:], \"e:j:l:t:h\",\n                                   ['detach', 'wrapper='])\n    except Exception:\n        usage()\n        sys.exit(1)\n\n    for o, val in opts:\n        if o == '-j':\n            jobid = val\n        elif o == '-e':\n            envs = val.split(',')\n        elif o == '-l':\n            lvl = CliUtils.get_logging_level(val)\n        elif o == '-h':\n            usage()\n            sys.exit(0)\n        elif o == '-t':\n            hostname = val\n        elif o == '--detach':\n            detach = True\n        elif o == '--wrapper':\n            wrapper = val\n        else:\n            sys.stderr.write(\"Unrecognized option \" + o)\n            usage()\n            sys.exit(1)\n\n    logging.basicConfig(level=lvl)\n\n    s = Server(hostname)\n    if detach:\n        d = s.status(JOB, 'exec_host', id=jobid)\n        if d and 'exec_host' in d[0]:\n            hosts = ResourceResv.get_hosts(d[0]['exec_host'])\n            svr_obj = Server(hosts[0])\n            pconf = MoM(svr_obj, hosts[0]).pbs_conf['PBS_EXEC']\n            # Use path to pbs_attach on natural vnode of the job\n            pbs_attach = os.path.join(pconf, 'bin', 'pbs_attach')\n            if wrapper is None:\n                (fd, fn) = tempfile.mkstemp()\n                # os.write() requires bytes in Python 3\n                os.write(fd, _wrapper_body.encode())\n                os.close(fd)\n                os.chmod(fn, 0o755)\n                wrapper = fn\n                cleanup_wrapper = True\n            a = [wrapper, pbs_attach, jobid] + args\n            logging.debug(str(a))\n            pbs_py_spawn(s._conn, jobid, a, envs)\n            if cleanup_wrapper:\n                os.remove(wrapper)\n    else:\n        pbs_py_spawn(s._conn, jobid, args, envs)\n\n    sys.exit(0)\n"
  },
  {
    "path": "test/fw/bin/pbs_snapshot",
    "content": "#!/usr/bin/env python3\n# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport os\nimport sys\nimport getopt\nimport errno\nimport logging\nimport ptl\nimport time\nimport tarfile\n\nfrom getopt import GetoptError\nfrom threading import Thread\nfrom pathlib import Path\n\nfrom ptl.lib.pbs_testlib import PtlConfig\nfrom ptl.utils.pbs_snaputils import PBSSnapUtils, ObfuscateSnapshot\nfrom ptl.utils.pbs_cliutils import CliUtils\nfrom ptl.utils.pbs_dshutils import DshUtils\n\n\ndef trap_exceptions(etype, value, tb):\n    \"\"\"\n    Trap SIGINT and SIGPIPE\n    \"\"\"\n    # This is done so that any exceptions created by this method itself\n    # are caught by the default excepthook to prevent endless recursion\n    sys.excepthook = sys.__excepthook__\n\n    if issubclass(etype, KeyboardInterrupt):\n        pass\n    elif issubclass(etype, IOError) and value.errno == errno.EPIPE:\n        pass\n    else:\n        sys.__excepthook__(etype, value, tb)\n\n    # Set sys.excepthook back to trap_exceptions to catch future exceptions\n    sys.excepthook = trap_exceptions\n\n\nsys.excepthook = trap_exceptions\n\n\ndef usage():\n    msg = \"\"\"\nUsage: pbs_snapshot -o <path to existing output directory> [OPTION]\n\n    Take snapshot of a PBS system and optionally capture logs for diagnostics\n\n    -H <hostname>                     primary hostname to operate on\n                                      Defaults to local host\n    -l 
<loglevel>                     set log level to one of INFO, INFOCLI,\n                                      INFOCLI2, DEBUG, DEBUG2, WARNING, ERROR\n                                      or FATAL\n    -h, --help                        display this usage message\n    --basic                           Capture only basic config & state data\n    --daemon-logs=<num days>          number of daemon logs to collect\n    --accounting-logs=<num days>      number of accounting logs to collect\n    --additional-hosts=<hostname>     collect data from additional hosts\n                                      'hostname' is a comma separated list\n    --map=<file>                      file to store the map of obfuscated data\n    --obfuscate                       obfuscates sensitive data\n    --with-sudo                       Uses sudo to capture privileged data\n    --version                         print version number and exit\n    --obf-snap                        path to existing snapshot to obfuscate\n\"\"\"\n    print(msg)\n\n\ndef create_snapshot_tar(snappath):\n    \"\"\"\n    Create a compressed tar for the snapshot given\n    Warning: Deletes the snapshot directory after creating its tar\n\n    snappath: path to snapshot to compress and tar\n    type: str\n\n    return: str - path to the compressed tar file\n    \"\"\"\n    outtar = snappath + \".tgz\"\n    with tarfile.open(outtar, \"w:gz\") as tar:\n        tar.add(snappath, arcname=os.path.basename(snappath))\n\n    # Delete the snapshot directory itself\n    du.rm(path=snappath, recursive=True, force=True)\n\n    return outtar\n\n\ndef get_snapshot_from_tar(logger, tarpath):\n    \"\"\"\n    Extract the snapshot from tarfile given\n\n    :tarpath - path to the tar\n    :type - str\n\n    :return tuple of (string path to extracted snapshot, True/False),\n        True: if the extraction was a success\n        False: if there was already a snapshot by the name of the one\n            being extracted, so the tar was 
not untar'd\n    \"\"\"\n    parentdir = Path(tarpath).parent\n    with tarfile.open(tarpath) as tar:\n        main_snap = tar.getnames()[0].split(os.sep, 1)[0]\n        main_snap = os.path.join(os.path.abspath(parentdir), main_snap)\n        if os.path.isdir(main_snap):\n            logger.error(\"Existing snapshot %s found, cannot extract tar\" %\n                         main_snap)\n            return (main_snap, False)\n        tar.extractall(path=parentdir)\n\n    return (main_snap, True)\n\n\ndef remotesnap_thread(logger, host):\n    \"\"\"\n    Routine to capture snapshot from a remote host\n\n    :param logger - Logging object\n    :type logger - logging.Logger\n    :param host - the hostname for remote host\n    :type host - str\n    \"\"\"\n    logger.info(\"Capturing snapshot from host %s\" % (host))\n\n    du = DshUtils()\n\n    # Get path to pbs_snapshot on remote host\n    host_pbsconf = du.parse_pbs_config(hostname=host)\n    try:\n        pbs_exec_path = host_pbsconf[\"PBS_EXEC\"]\n    except KeyError:\n        logger.error(\"Couldn't find PBS_EXEC on host %s\"\n                     \", won't capture snapshot on this host\" % (\n                         host))\n        return\n    host_pbssnappath = os.path.join(pbs_exec_path, \"sbin\",\n                                    \"pbs_snapshot\")\n\n    # Create a directory on the remote host with a unique name\n    # We will create the snapshot here\n    timestamp = str(int(time.time()))\n    snap_home = \"host_\" + timestamp\n    du.mkdir(hostname=host, path=snap_home)\n\n    # Run pbs_snapshot on the remote host\n    cmd = [host_pbssnappath, \"-o\", snap_home,\n           \"--daemon-logs=\" + str(daemon_logs),\n           \"--accounting-logs=\" + str(acct_logs)]\n    if obfuscate:\n        cmd.extend([\"--obfuscate\", \"--map=\" + map_file])\n    if with_sudo:\n        cmd.append(\"--with-sudo\")\n\n    ret = du.run_cmd(hosts=host, cmd=cmd, logerr=False)\n    if ret['rc'] != 0:\n        
logger.error(\"Error capturing snapshot from host %s\" % (host))\n        print(ret['err'])\n        return\n\n    # Get the snapshot tar filename from stdout\n    child_stdout = ret['out'][-1]\n    snaptarname = child_stdout.split(\"Snapshot available at: \")[1]\n\n    # Copy over the snapshot tar file as <hostname>_snapshot.tgz\n    dest_path = os.path.join(out_dir, host + \"_snapshot.tgz\")\n    ret = du.run_copy(srchost=host, src=snaptarname, dest=dest_path)\n    if ret['rc'] != 0:\n        logger.error(\"Error copying child snapshot from host %s\" % (host))\n\n    # Copy over map file if any as 'host_<map filename>'\n    if map_file is not None:\n        mfilename = os.path.basename(map_file)\n        dest_path = os.path.join(out_dir, host + \"_\" + mfilename)\n        src_path = os.path.join(snap_home, map_file)\n        ret = du.run_copy(srchost=host, src=src_path, dest=dest_path)\n        if ret['rc'] != 0:\n            logger.error(\"Error copying map file from host %s\" % (host))\n\n    # Delete the snapshot home from remote host\n    du.rm(hostname=host, path=snap_home, recursive=True, force=True)\n\n\ndef obfuscate_snapshot_wrapper(snap_name, map_file=None, sudo_val=False):\n    if map_file is None:\n        snappath = Path(snap_name)\n        # Path() always returns an object, so test for existence instead\n        if not snappath.exists():\n            sys.stderr.write(\"snapshot path not found\")\n            usage()\n            sys.exit(1)\n        map_file = os.path.join(snappath.parent, \"obfuscate.map\")\n    obj = ObfuscateSnapshot()\n    obj.obfuscate_snapshot(snap_name, map_file, sudo_val)\n\n\ndef capture_local_snap(sudo_val):\n    \"\"\"\n    Helper method to capture snapshot of the local host\n\n    :param sudo_val - Value of the --with-sudo option\n    :type sudo_val - bool\n\n    :returns Name of the snapshot directory/tar file captured\n    \"\"\"\n    if obfuscate or additional_hosts is not None:\n        # We will need the captured snapshot directory in these two cases\n        create_tar = False\n    else:\n        create_tar = True\n\n    with PBSSnapUtils(out_dir, basic=basic, acct_logs=acct_logs,\n                      daemon_logs=daemon_logs, create_tar=create_tar,\n                      log_path=log_path, with_sudo=sudo_val) as snap_utils:\n        snap_name = snap_utils.capture_all()\n\n    if obfuscate:\n        obfuscate_snapshot_wrapper(snap_name, map_file, sudo_val)\n        if additional_hosts is None:\n            # Now create the tar\n            snap_name = create_snapshot_tar(snap_name)\n\n    return snap_name\n\n\nif __name__ == '__main__':\n\n    # Arguments to PBSSnapUtils\n    out_dir = None\n    primary_host = None\n    log_level = \"INFOCLI2\"\n    acct_logs = None\n    daemon_logs = None\n    additional_hosts = None\n    map_file = None\n    obfuscate = False\n    log_file = \"pbs_snapshot.log\"\n    with_sudo = False\n    du = DshUtils()\n    basic = False\n    obf_snap = None\n\n    PtlConfig()\n\n    # Parse the options provided to pbs_snapshot\n    try:\n        sopt = \"H:l:o:h\"\n        lopt = [\"basic\", \"accounting-logs=\", \"daemon-logs=\", \"help\",\n                \"additional-hosts=\", \"map=\", \"obfuscate\", \"with-sudo\",\n                \"version\", \"obf-snap=\"]\n        opts, args = getopt.getopt(sys.argv[1:], sopt, lopt)\n    except GetoptError:\n        usage()\n        sys.exit(1)\n\n    for o, val in opts:\n        if o == \"-o\":\n            out_dir = val\n        elif o == \"-H\":\n            primary_host = val\n        elif o == \"-l\":\n            log_level = val\n        elif o == \"-h\" or o == \"--help\":\n            usage()\n            sys.exit(0)\n        elif o == \"--basic\":\n            basic = True\n        elif o == \"--accounting-logs\":\n            try:\n                acct_logs = int(val)\n            except ValueError:\n                raise ValueError(\"Invalid value for --accounting-logs \" +\n                                 \"option, should be an integer\")\n        elif o == 
\"--daemon-logs\":\n            try:\n                daemon_logs = int(val)\n            except ValueError:\n                raise ValueError(\"Invalid value for --daemon-logs \" +\n                                 \"option, should be an integer\")\n        elif o == \"--additional-hosts\":\n            additional_hosts = val\n        elif o == \"--map\":\n            map_file = val\n        elif o == \"--obfuscate\":\n            obfuscate = True\n        elif o == \"--with-sudo\":\n            with_sudo = True\n        elif o == \"--version\":\n            print(ptl.__version__)\n            sys.exit(0)\n        elif o == \"--obf-snap\":\n            obf_snap = val\n        else:\n            sys.stderr.write(\"Unrecognized option \" + o)\n            usage()\n            sys.exit(1)\n\n    if obf_snap:\n        obfuscate = True\n        out_dir = str(Path(obf_snap).parent)\n\n    # Check that parent of snapshot directory exists\n    if out_dir is None:\n        sys.stderr.write(\"-o option not provided\")\n        usage()\n        sys.exit(1)\n    elif not os.path.isdir(out_dir):\n        sys.stderr.write(\"-o path should exist,\"\n                         \" this is where the snapshot is captured\")\n        usage()\n        sys.exit(1)\n\n    fmt = '%(asctime)-15s %(levelname)-8s %(message)s'\n    level_int = CliUtils.get_logging_level(log_level)\n    log_path = os.path.join(out_dir, log_file)\n    logging.basicConfig(filename=log_path, filemode='w+',\n                        level=level_int, format=fmt)\n    stream_hdlr = logging.StreamHandler()\n    stream_hdlr.setLevel(level_int)\n    stream_hdlr.setFormatter(logging.Formatter(fmt))\n    ptl_logger = logging.getLogger('ptl')\n    ptl_logger.addHandler(stream_hdlr)\n    ptl_logger.setLevel(level_int)\n\n    if obfuscate:\n        # Find the parent directory of the snapshot\n        # This will be used to store the map file\n        out_abspath = 
os.path.abspath(out_dir)\n        if map_file is None:\n            map_file = os.path.join(out_abspath, \"obfuscate.map\")\n\n    # Obfuscate an existing snapshot\n    if obf_snap:\n        istar = False\n        if os.path.isfile(obf_snap):\n            if tarfile.is_tarfile(obf_snap):\n                istar = True\n                obf_snap, ret = get_snapshot_from_tar(ptl_logger, obf_snap)\n                if ret is False:\n                    sys.exit(1)\n            else:\n                ptl_logger.error(\"Path is not a valid snapshot\")\n                sys.exit(1)\n\n        # The obfuscated snapshot will be at path: <obf_snap>_obf\n        obfout = obf_snap + \"_obf\"\n        if os.path.isfile(obfout + \".tgz\"):\n            ptl_logger.error(\"%s.tgz already exists, \"\n                             \"delete it to create a new obfuscated snapshot\" %\n                             obfout)\n            if istar:\n                # If input was a tar file, then we'd have extracted it\n                # So, delete the dir that was extracted from the input tar\n                du.rm(path=obf_snap, force=True, recursive=True)\n            sys.exit(1)\n        du.run_copy(src=obf_snap, dest=obfout, recursive=True)\n        obfuscate_snapshot_wrapper(obfout, map_file, with_sudo)\n        # Create the tar\n        obfout = create_snapshot_tar(obfout)\n        if istar:\n            du.rm(path=obf_snap, force=True, recursive=True)\n\n        print(\"Obfuscated snapshot at: \" + str(obfout))\n        sys.exit(0)\n\n    if not basic:\n        # Capture 5 days of daemon logs and 30 days of accounting logs\n        # by default\n        if daemon_logs is None:\n            daemon_logs = 5\n        if acct_logs is None:\n            acct_logs = 30\n\n    if additional_hosts is not None:\n        # Capture snapshot of remote hosts in addition to the main host\n\n        hostnames = additional_hosts.split(\",\")\n        remotesnap_threads = {}\n        for host in 
hostnames:\n            thread = Thread(target=remotesnap_thread, args=(\n                ptl_logger, host))\n            thread.start()\n            remotesnap_threads[host] = thread\n\n        # Capture snapshot of the main host in the meantime\n        thread_p = None\n        if not du.is_localhost(primary_host):\n            # The main host is remote\n            thread_p = Thread(target=remotesnap_thread,\n                              args=(ptl_logger, primary_host))\n            thread_p.start()\n        else:\n            # Capture a local snapshot\n            main_snap = capture_local_snap(with_sudo)\n\n        if thread_p is not None:\n            # Let's get the main host's snapshot first\n            thread_p.join()\n\n            # We need to copy additional hosts' snapshots into\n            # the main snapshot, so un-tar the snapshot\n            p_snappath = os.path.join(out_dir, primary_host + \"_snapshot.tgz\")\n            main_snap, _ = get_snapshot_from_tar(ptl_logger, p_snappath)\n\n            # Delete the tar file\n            du.rm(path=p_snappath, force=True)\n\n        # Let's reconcile the child snapshots\n        for host, thread in remotesnap_threads.items():\n            thread.join()\n            host_snappath = os.path.join(out_dir, host + \"_snapshot.tgz\")\n            if os.path.isfile(host_snappath):\n                # Move the tar file to the main snapshot\n                du.run_copy(src=host_snappath, dest=main_snap)\n                du.rm(path=host_snappath, force=True)\n\n        # Finally, create a tar of the whole snapshot\n        outtar = create_snapshot_tar(main_snap)\n    elif not du.is_localhost(primary_host):\n        # Capture snapshot of a remote host\n        remotesnap_thread(ptl_logger, primary_host)\n        p_snappath = os.path.join(out_dir, primary_host + \"_snapshot.tgz\")\n        with tarfile.open(p_snappath) as tar:\n            main_snap = tar.getnames()[0].split(os.sep, 1)[0]\n            main_snap = 
os.path.join(os.path.abspath(out_dir), main_snap)\n\n        outtar = main_snap + \".tgz\"\n\n        # remotesnap_thread named the tar as <hostname>_snapshot.tgz\n        # rename it to the timestamp name of the original snapshot\n        du.run_copy(src=p_snappath, dest=outtar)\n        du.rm(path=p_snappath, force=True)\n    else:\n        # Capture snapshot of the local host\n        outtar = capture_local_snap(with_sudo)\n\n    if outtar is not None:\n        print(\"Snapshot available at: \" + outtar)\n"
  },
  {
    "path": "test/fw/bin/pbs_stat",
    "content": "#!/usr/bin/env python3\n# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport errno\nimport getopt\nimport logging\nimport logging.config\nimport os\nimport re\nimport sys\nimport time\n\nimport ptl\nfrom ptl.lib.pbs_testlib import (HOOK, JOB, PTL_AND, PTL_CLI, PTL_OR, QUEUE,\n                                 RESOURCES_AVAILABLE, RESOURCES_TOTAL, RESV,\n                                 RSC, SCHED, SERVER, VNODE, BatchUtils,\n                                 PbsStatusError, PbsTypeSize, PtlConfig,\n                                 Scheduler, Server)\nfrom ptl.utils.pbs_cliutils import CliUtils\nfrom ptl.utils.pbs_dshutils import DshUtils\nfrom ptl.utils.pbs_logutils import PBSAccountingLog\n\n# trap SIGINT and SIGPIPE\n\n\ndef trap_exceptions(etype, value, tb):\n    sys.excepthook = sys.__excepthook__\n    if issubclass(etype, KeyboardInterrupt):\n        pass\n    elif issubclass(etype, IOError) and value.errno == errno.EPIPE:\n        pass\n    else:\n        sys.__excepthook__(etype, value, tb)\n\n\nsys.excepthook = trap_exceptions\n\n\ndef usage():\n    msg = []\n    msg += ['Usage: ' + os.path.basename(sys.argv[0]).split('.pyc')[0]]\n    msg += [' [OPTION]\\n\\n']\n    msg += ['\\tStatus and filter PBS entities on given attributes\\n\\n']\n    msg += ['\\tNote: All output of this unsupported tool is experimental, \\n']\n    msg += ['\\tdo not rely on it remaining as-is across releases.\\n\\n']\n    msg += ['-A <path>: 
Path to accounting log\\n']\n    msg += ['-a <attrs>: comma-separated list of attributes\\n']\n    msg += ['    attrs can be in key[OP]val format where OP is one ' +\n            'of <,<=,=,>=,>,~\\n']\n    msg += ['-b: show available resources (a.k.a backfill hole)\\n']\n    msg += ['-c: counts number of matching items\\n']\n    msg += ['-C: counts grand total of given attribute values in complex\\n']\n    msg += ['-j: report job equivalence classes\\n']\n    msg += ['-n: report node equivalence classes\\n']\n    msg += ['-r <name>: comma-separated list of resource names, ' +\n            'e.g. ncpus, mem.\\n']\n    msg += ['-s: show selected objects in qselect-like format\\n']\n    msg += ['-t <hostname>: target hostname\\n']\n    msg += ['-T: report total resources available. Operates only on '\n            'equivalence classes\\n']\n    msg += ['-U: show current utilization of the system, use -r to \\n' +\n            '    specify resources, default to ncpus, mem, nodes\\n']\n    msg += ['--snap=<pbs_snapshot>: path to snap directory\\n']\n    msg += ['--id=<obj_id>: identifier of the object to query\\n']\n    msg += ['--key=<account record>: Accounting record key, one of Q or E\\n']\n    msg += ['--user=<username>: username to query for limits or '\n            'utilization\\n']\n    msg += ['--group=<groupname>: group to query for limits or '\n            'utilization\\n']\n    msg += ['--project=<project>: project to query for limits\\n']\n    msg += ['--json: display object type information in json format.\\n']\n    msg += ['--mode: show operating mode of PTL, one of cli or api\\n']\n    msg += ['--cli: force stat to query PBS server over cli\\n']\n    msg += ['--report-twiki: produce report in Twiki format\\n']\n    msg += ['--nodes: operate on node\\n']\n    msg += ['--queues: operate on queues\\n']\n    msg += ['--jobs: operate on jobs\\n']\n    msg += ['--resvs: operate on reservations\\n']\n    msg += ['--server: operate on server\\n']\n    msg += 
['--scheduler: operate on scheduler\\n']\n    msg += ['--report: produce a site report\\n']\n    msg += ['--resources: operate on resources\\n']\n    msg += ['--resource=<name>: name of resource to stat\\n']\n    msg += ['--resources-set: list resources set for a given object type\\n']\n    msg += ['--fairshare-info=<entity>: query and display fairshare ' +\n            ' info of entity\\n']\n    msg += ['--fairshare-tree: query and display fairshare tree \\n']\n    msg += ['--sline: show selected objects on a single line\\n'\n            '\\t will not affect output of --json, -j, -s, nor -n\\n']\n    msg += ['--eval-formula: evaluate job priority\\n']\n    msg += ['--include-running-jobs: include running jobs in formula'\n            ' evaluation\\n']\n    msg += ['--pports: show number of privileged ports in use\\n']\n    msg += ['--resolve-indirectness: If set, dereference indirect '\n            ' resources\\n']\n    msg += ['--db-access=<cred>: file to credentials to access db\\n']\n    msg += ['--limits-info: show limit information per entity and '\n            'object\\n']\n    msg += ['--over-soft-limits: show entities that are over soft '\n            'limits\\n']\n    msg += [\n        '--server-file=<path>: path to file with output of qstat -Bf\\n']\n    msg += ['--dedtime-file=<path>: path to a dedicated time file\\n']\n    msg += [\n        '--nodes-file=<path>: path to file with output of pbsnodes -av\\n']\n    msg += [\n        '--queues-file=<path>: path to file with output of qstat -Qf\\n']\n    msg += ['--jobs-file=<path>: path to file with output of qstat -f\\n']\n    msg += [\n        '--resvs-file=<path>: path to file with output of pbs_rstat -f\\n']\n    msg += ['--log-conf=<file>: logging config file\\n']\n    msg += ['--version: print version number and exit\\n']\n\n    print(\"\".join(msg))\n\n\nclass SiteReportFormatter:\n\n    def __init__(self, snap, version, sched_version, jeq, neq, utilization,\n                 limits, qtypes, users, 
groups, sc, formula, backfill, hooks,\n                 job_states, osrelease):\n\n        self.report_tm = time.ctime()\n        if snap and len(snap) > 1:\n            snap = snap.replace(\".\", \"\")\n            snap = snap.replace(\"/\", \"_\")\n            snap = snap.replace(\"snapshot_\", \"\")\n            if snap[0] == '_':\n                snap = snap[1:]\n            m = re.search(r\"(?P<dtm>\\d{6}_\\d{6})\", snap)\n            if m:\n                _tm = m.group(\"dtm\")\n                _kt = time.mktime(time.strptime(_tm, \"%y%m%d_%H%M%S\"))\n                self.report_tm = time.ctime(_kt)\n\n        self.snap = snap\n        self.version = version\n        self.sched_version = sched_version\n        self.jeq = sorted(jeq, key=lambda e: len(e.entities))\n        self.neq = sorted(neq, key=lambda e: len(e.entities))\n        self.utilization = utilization\n        self.limits = limits\n        self.qtypes = qtypes\n        self.users = users\n        self.groups = groups\n        self.sc = sc\n        self.formula = formula\n        self.backfill = backfill\n        self.hooks = hooks\n        self.job_states = job_states\n        self.osrelease = osrelease\n\n        self.delim = \"\\n\"\n        self.open_tag = []\n        self.close_tag = []\n\n    def __twiki__(self):\n        self.delim = \"---+++++\"\n        self.open_tag = [\"<verbatim>\"]\n        self.close_tag = [\"</verbatim>\"]\n        return self.__str__()\n\n    def __str__(self):\n        title = 'PBS cluster report on ' + self.report_tm\n        if len(self.open_tag) == 0 or self.snap is None:\n            msg = []\n            self.sep = ['-' * len(title)]\n        else:\n            msg = [\"---+++\" + str(self.snap)]\n            self.sep = []\n\n        msg += [title]\n        msg += self.sep\n        msg += [self.delim + 'PBS version']\n        msg += self.sep\n        msg += self.open_tag\n        if self.version == self.sched_version:\n            msg += [self.version]\n        else:\n            msg += ['Server: ' + self.version]\n            msg += ['Scheduler: ' + self.sched_version]\n        msg += self.close_tag\n        if self.osrelease is not None:\n            msg += [self.delim + 'OS release']\n            msg += self.sep\n            msg += self.open_tag\n            msg += [self.osrelease]\n            msg += self.close_tag\n        msg += [self.delim + 'Utilization']\n        msg += self.sep\n        msg += self.open_tag\n        msg += self.utilization\n        msg += self.close_tag\n        msg += [self.delim + 'Job States']\n        msg += self.sep\n        msg += self.open_tag\n        for k, v in self.job_states.items():\n            msg += [\"%s: %d\" % (k.split('=')[1], v)]\n        msg += self.close_tag\n        if self.limits:\n            qsl = ssl = qhl = shl = 0\n            if SERVER in self.limits:\n                for l in self.limits[SERVER]:\n                    if '_soft' in l.limit_type:\n                        ssl += 1\n                    else:\n                        shl += 1\n            if QUEUE in self.limits:\n                for l in self.limits[QUEUE]:\n                    if '_soft' in l.limit_type:\n                        qsl += 1\n                    else:\n                        qhl += 1\n            msg += [self.delim + 'Limits']\n            msg += self.sep\n            msg += self.open_tag\n            msg += [\"Queue soft limits: %d\" % qsl]\n            msg += [\"Queue hard limits: %d\" % qhl]\n            msg += [\"Server soft limits: %d\" % ssl]\n            msg += [\"Server hard limits: %d\" % shl]\n            msg += self.close_tag\n        msg += [self.delim + 'Queue types']\n        msg += self.sep\n        msg += self.open_tag\n        for k, v in self.qtypes.items():\n            msg += [\"%s: %d\" % (k.split('=')[1], v)]\n        msg += self.close_tag\n        msg += [self.delim + 'Number of users and groups']\n        msg += self.sep\n        msg += self.open_tag\n        msg += [\"users: %d\" % (self.users)]\n        msg += [\"groups: %d\" % (self.groups)]\n        msg += self.close_tag\n        if self.hooks:\n            msg += [self.delim + 'Hooks']\n            msg += self.sep\n            msg += self.open_tag\n            msg += self.hooks\n            msg += self.close_tag\n        msg += [self.delim + 'Scheduling policies']\n        msg += self.sep\n        msg += self.open_tag\n        if 'preemptive_sched' in self.sc:\n            msg += [\"preemption: %s\" % self.sc['preemptive_sched']]\n        if 'backfill' in self.sc:\n            msg += [\"backfilling: %s\" % self.sc['backfill']]\n        if 'fair_share' in self.sc:\n            msg += [\"fair share: %s\" % self.sc['fair_share']]\n        if self.formula is not None:\n            msg += [\"formula: %s\" % self.formula]\n        if self.backfill is not None:\n            msg += [\"backfill depth: %s\" % self.backfill]\n        msg += self.close_tag\n        msg += [self.delim + 'Node Equivalence Classes']\n        msg += self.sep\n        msg += self.open_tag\n        for e in self.neq:\n            msg += [str(e)]\n        msg += self.close_tag\n        msg += [self.delim + 'Job Equivalence Classes']\n        msg += self.sep\n        msg += self.open_tag\n        for e in self.jeq:\n            msg += [str(e)]\n        msg += self.close_tag\n        return \"\\n\".join(msg)\n\n\ndef utilization_to_str(u):\n    msg = []\n    for k, v in u.items():\n        if len(v) != 2:\n            continue\n\n        if v[1] == 0:\n            perc = 100\n        else:\n            perc = 100 * v[0] / v[1]\n\n        if 'mem' in k:\n            v[0] = PbsTypeSize(str(v[0]) + 'kb')\n            v[1] = PbsTypeSize(str(v[1]) + 'kb')\n            msg += [k + ': (' + str(v[0]) + '/' + str(v[1]) + ') ' +\n                    str(perc) + '%']\n        else:\n            msg += [k + ': (' + str(v[0]) + '/' + str(v[1]) + ') ' +\n                    str(perc) + '%']\n\n    return msg\n\n\nif __name__ == '__main__':\n\n    if len(sys.argv) < 2:\n        usage()\n        sys.exit(1)\n\n    snap = 
None\n    lvl = logging.ERROR\n    hostname = None\n    objtype = None\n    jobclasses = False\n    nodeclasses = False\n    attributes = None\n    backfillhole = None\n    attrop = PTL_OR\n    accumulate = False\n    grandtotal = False\n    inputfile = {}\n    qselectfmt = False\n    resources = None\n    utilization = False\n    fsusage_entity = None\n    fstree = False\n    fsentity = None\n    fsperc_entity = None\n    pbsfs = False\n    eval_formula = False\n    entity = {}\n    objid = None\n    resourcesset = None\n    dedtimefile = None\n    limits_info = False\n    over_soft_limits = False\n    json_on = False\n    pports = False\n    db_access = None\n    logconf = None\n    scheduler = None\n    server = None\n    report = False\n    report_twiki = False\n    force_cli = False\n    get_mode = False\n    fmt = {'%6': '\\n'}\n    acct = None\n    key = 'E'\n    indirectness = False\n    osrelease = None\n    include_running_jobs = False\n    restotal = RESOURCES_AVAILABLE  # equivalence classes report avail - assgnd\n\n    lopts = [\"nodes\", \"queues\", \"server\", \"scheduler\", \"jobs\", \"resvs\"]\n    lopts += [\"fairshare-tree\", \"eval-formula\", \"user=\", \"group=\", \"project=\"]\n    lopts += [\"fairshare-info=\", \"resource=\", \"resources-set\", \"nodes-file=\"]\n    lopts += [\"queues-file=\", \"jobs-file=\", \"resvs-file=\", \"server-file=\"]\n    lopts += [\"dedtime-file=\", \"limits-info\", \"json\", \"pports\", \"db-access=\"]\n    lopts += [\"over-soft-limits\", \"id=\", \"resources\", \"log-conf=\", \"version\"]\n    lopts += [\"mode\", \"cli\", \"report\", \"sline\", \"key=\", \"resolve-indirectness\"]\n    lopts += [\"report-twiki\", \"include-running-jobs\", \"snap=\"]\n    try:\n        opts, args = getopt.getopt(sys.argv[1:], \"a:A:l:r:t:fUbcChjnsT\",\n                                   lopts)\n    except getopt.GetoptError:\n        logging.error('unhandled option')\n        usage()\n        sys.exit(1)\n\n    for o, val in 
opts:\n        if o == '-a':\n            attributes = val\n        elif o == '-A':\n            acct = CliUtils.expand_abs_path(val)\n        elif o == '-b':\n            backfillhole = True\n            objtype = VNODE\n        elif o == '-c':\n            accumulate = True\n        elif o == '-C':\n            grandtotal = True\n        elif o == '-j':\n            jobclasses = True\n        elif o == '-l':\n            lvl = CliUtils().get_logging_level(val)\n        elif o == '-n':\n            nodeclasses = True\n        elif o == '-r':\n            resources = val\n        elif o == '-s':\n            qselectfmt = True\n        elif o == '-t':\n            hostname = val\n        elif o == '-U':\n            utilization = True\n        elif o == '-h':\n            usage()\n            sys.exit(0)\n        elif o == '--snap':\n            snap = CliUtils.expand_abs_path(val)\n            if not os.path.isdir(snap):\n                sys.stderr.write('Cannot access snapshot at ' +\n                                 str(snap) + '\\n')\n                sys.exit(1)\n        elif o == '--cli':\n            force_cli = True\n        elif o == '--key':\n            key = val\n        elif o == '--id':\n            objid = val\n        elif o == '--sline':\n            fmt = {'%1': ': ', '%2': '', '%3': '=', '%4': ', ', '%5': '\\n'}\n        elif o == \"--pports\":\n            pports = True\n        elif o == '--version':\n            print(ptl.__version__)\n            sys.exit(0)\n        elif o == \"--user\":\n            entity['euser'] = val\n        elif o == \"--group\":\n            entity['egroup'] = val\n        elif o == \"--mode\":\n            get_mode = True\n        elif o == \"--project\":\n            entity['project'] = val\n        elif o == \"--fairshare-info\":\n            fsentity = val\n            fstree = True\n        elif o == \"--fairshare-tree\":\n            fstree = True\n        elif o == \"--eval-formula\":\n            eval_formula 
= True\n        elif o == '--include-running-jobs':\n            include_running_jobs = True\n        elif o == \"--db-access\":\n            db_access = CliUtils.expand_abs_path(val)\n        elif o == \"--json\":\n            json_on = True\n        elif o == \"--limits-info\":\n            limits_info = True\n        elif o == \"--over-soft-limits\":\n            over_soft_limits = True\n        elif o == \"--log-conf\":\n            logconf = val\n        elif o == \"--nodes\":\n            objtype = VNODE\n        elif o == \"--queues\":\n            objtype = QUEUE\n        elif o == \"--jobs\":\n            objtype = JOB\n        elif o == \"--resvs\":\n            objtype = RESV\n        elif o == \"--server\":\n            objtype = SERVER\n        elif o == \"--scheduler\":\n            objtype = SCHED\n        elif o == \"--report\":\n            report = True\n        elif o == \"--report-twiki\":\n            report_twiki = True\n        elif o == \"--resources\":\n            objtype = RSC\n        elif o == \"--resource\":\n            objtype = RSC\n            objid = val\n        elif o == \"--resources-set\":\n            resourcesset = True\n        elif o == \"--resolve-indirectness\":\n            indirectness = True\n        elif o == \"--nodes-file\":\n            objtype = VNODE\n            inputfile[VNODE] = CliUtils.expand_abs_path(val)\n        elif o == \"--queues-file\":\n            objtype = QUEUE\n            inputfile[QUEUE] = CliUtils.expand_abs_path(val)\n        elif o == \"--jobs-file\":\n            objtype = JOB\n            inputfile[JOB] = CliUtils.expand_abs_path(val)\n        elif o == \"--resvs-file\":\n            objtype = RESV\n            inputfile[RESV] = CliUtils.expand_abs_path(val)\n        elif o == \"--server-file\":\n            objtype = SERVER\n            inputfile[SERVER] = CliUtils.expand_abs_path(val)\n        elif o == \"--dedtime-file\":\n            dedtimefile = CliUtils.expand_abs_path(val)\n       
 elif o == '-T':\n            restotal = None\n        else:\n            sys.stderr.write(\"Unrecognized option\\n\")\n            usage()\n            sys.exit(1)\n\n    PtlConfig()\n\n    if pports:\n        msg = CliUtils().priv_ports_info(hostname)\n        if not msg:\n            sys.exit(1)\n        print(\"\\n\".join(msg))\n        sys.exit(0)\n\n    if logconf:\n        logging.config.fileConfig(logconf)\n    else:\n        logging.basicConfig(level=lvl)\n    bu = BatchUtils()\n\n    if len(inputfile) > 0:\n        server = Server(snapmap=inputfile, db_access=db_access)\n    elif snap is not None or db_access is not None:\n        server = Server(hostname, snap=snap, db_access=db_access)\n        scheduler = Scheduler(server=server, snap=snap, db_access=db_access)\n    else:\n        if hostname is None:\n            if 'PBS_SERVER' in os.environ:\n                _h = os.environ['PBS_SERVER']\n            else:\n                # default to empty config so _c is always bound\n                _c = {}\n                if 'PBS_CONF_FILE' in os.environ:\n                    _c = DshUtils().parse_pbs_config(\n                        file=os.environ['PBS_CONF_FILE'])\n                elif os.path.isfile('/etc/pbs.conf'):\n                    _c = DshUtils().parse_pbs_config()\n                if 'PBS_SERVER' in _c:\n                    _h = _c['PBS_SERVER']\n                else:\n                    _h = None\n        else:\n            _h = hostname\n        server = Server(_h, stat=False)\n\n    if force_cli:\n        if server.get_op_mode() != PTL_CLI:\n            server.set_op_mode(PTL_CLI)\n\n    if get_mode:\n        print(server.get_op_mode())\n        sys.exit(0)\n\n    if dedtimefile:\n        if scheduler is None:\n            scheduler = Scheduler(\n                server=server, snap=snap, db_access=db_access)\n        scheduler.set_dedicated_time_file(dedtimefile)\n\n    # server is assumed up in snap mode\n    if not server.isUp() and objtype != RSC and\\\n            not db_access:\n        logging.error('PBS Server is down, 
exiting')\n        sys.exit(1)\n\n    if report or report_twiki:\n        jobs = server.status(JOB)\n        nodes = server.status(VNODE)\n        queues = server.status(QUEUE)\n\n        jeq = server.equivalence_classes(JOB, bslist=jobs)\n        if scheduler is not None:\n            res = scheduler.get_resources(exclude=['host', 'vnode', 'arch'])\n            if res:\n                for i in range(len(res)):\n                    res[i] = \"resources_available.\" + str(res[i])\n        else:\n            res = ['resources_available.ncpus', 'resources_available.mem']\n\n        neq = server.equivalence_classes(VNODE, res, bslist=nodes,\n                                         show_zero_resources=True,\n                                         op=RESOURCES_TOTAL,\n                                         resolve_indirectness=indirectness)\n\n        job_states = server.counter(JOB, 'job_state', bslist=jobs)\n\n        u = utilization_to_str(server.utilization(entity=entity, nodes=nodes,\n                                                  jobs=jobs))\n        lims = server.parse_all_limits(\n            server=[server.attributes], queues=queues)\n        qtypes = server.counter(QUEUE, 'queue_type', bslist=queues)\n        d = server.counter(JOB, ['euser', 'egroup'], bslist=jobs)\n        users = groups = 0\n        for k, v in d.items():\n            if 'euser' in k:\n                users += 1\n            elif 'egroup' in k:\n                groups += 1\n        if scheduler is None:\n            scheduler = Scheduler(server=server, snap=server.snap,\n                                  snapmap=server.snapmap)\n        sc = scheduler.sched_config\n        formula = backfill = None\n        version = 'unavailable'\n        sched_version = 'unavailable'\n        hooks = []\n        _hlist = server.status(HOOK)\n        for hook in _hlist:\n            if 'enabled' in hook and hook['enabled'] == 'false':\n                _disabled = '[disabled]'\n            else:\n   
             _disabled = ''\n            hooks.append(hook['id'] + ': ' + hook['event'] + ' ' +\n                         _disabled)\n        try:\n            if 'pbs_version' not in server.attributes:\n                d = server.status(SERVER, ['pbs_version', 'job_sort_formula',\n                                           'backfill_depth'])\n            else:\n                d = [server.attributes]\n\n            if 'job_sort_formula' in d[0]:\n                formula = d[0]['job_sort_formula']\n            if 'backfill_depth' in d[0]:\n                backfill = d[0]['backfill_depth']\n            if 'pbs_version' in d[0]:\n                version = d[0]['pbs_version']\n        except PbsStatusError:\n            pass\n        try:\n            d = server.status(SCHED, 'pbs_version')\n        except PbsStatusError:\n            pass\n        if d and 'pbs_version' in d[0]:\n            sched_version = d[0]['pbs_version']\n\n        if snap is not None:\n            f = os.path.join(snap, 'OSrelease')\n            if os.path.isfile(f):\n                fos = open(f)\n                osrelease = fos.readline()\n                fos.close()\n\n        sr = SiteReportFormatter(snap, version, sched_version, jeq, neq, u,\n                                 lims, qtypes, users, groups, sc, formula,\n                                 backfill, hooks, job_states, osrelease)\n        if report_twiki:\n            print(str(sr.__twiki__()))\n        else:\n            print(str(sr))\n        sys.exit(0)\n\n    if utilization:\n        if resources:\n            resources = resources.split(',')\n        u = server.utilization(resources, entity=entity)\n        msg = utilization_to_str(u)\n        if msg:\n            print(\"\\n\".join(msg))\n        sys.exit(0)\n\n    if fstree:\n        if scheduler is None:\n            scheduler = Scheduler(server=server, snap=server.snap,\n                                  snapmap=server.snapmap, db_access=db_access)\n\n        fs_as_bs = 
scheduler.fairshare_tree.__batch_status__()\n        if json_on:\n            print(CliUtils.__json__(fs_as_bs))\n        else:\n            scheduler.utils.show(fs_as_bs, fsentity)\n        sys.exit(0)\n\n    if eval_formula:\n        f = server.evaluate_formula(include_running_jobs=include_running_jobs)\n        if f:\n            d = server.status(SERVER, 'job_sort_formula')\n            print('Formula: ' + d[0]['job_sort_formula'])\n            ret = sorted(list(f.items()), key=lambda x: x[1][1], reverse=True)\n            for (jobid, (fml, val)) in ret:\n                print(jobid + ': ' + fml + ' = ' + str(val))\n        sys.exit(0)\n\n    # parse resources and attributes 'language', i.e., handling of && and ||\n    if resources is not None:\n        if objtype in (JOB, RESV):\n            _r = \"Resource_List.\"\n        else:\n            _r = \"resources_available.\"\n        # add the resources to any attributes that may have been specified\n        if attributes is None:\n            attributes = ''\n        else:\n            attributes += \",\"\n        if \"&&\" in resources:\n            attrop = PTL_AND\n            resources = resources.replace(\"&&\", \",\")\n        elif \"||\" in resources:\n            resources = resources.replace(\"||\", ',')\n        attributes += \",\".join([_r + n for n in resources.split(',')])\n\n    if attributes is not None:\n        attributes = attributes.replace(\" \", \"\")\n        if \"&&\" in attributes:\n            attrop = PTL_AND\n            attributes = attributes.replace(\"&&\", \",\")\n        elif \"||\" in attributes:\n            attributes = attributes.replace(\"||\", ',')\n\n        attributes = attributes.split(\",\")\n\n    if attributes:\n        setattrs = False\n        operators = ('<=', '>=', '!=', '=', '>', '<', '~')\n        for a in attributes:\n            for op in operators:\n                if op in a:\n                    setattrs = True\n                    break\n\n        d = 
bu.convert_attributes_by_op(attributes, setattrs)\n        if len(d) > 0:\n            attributes = d\n\n    if backfillhole is not None:\n        server.show_whats_available(attrib=attributes)\n        sys.exit(0)\n\n    # other than the backfill hole that requires working with server objects,\n    # since updating object attributes can be an expensive operation on large\n    # systems, we disable it on these select calls where they are needed\n    server.ptl_conf['update_attributes'] = False\n\n    if limits_info or over_soft_limits:\n        etype = None\n        ename = None\n        if 'euser' in entity:\n            etype = 'u'\n            ename = entity['euser']\n        elif 'egroup' in entity:\n            etype = 'g'\n            ename = entity['egroup']\n        elif 'project' in entity:\n            etype = 'p'\n            ename = entity['project']\n\n        linfo = server.limits_info(etype=etype,\n                                   ename=ename,\n                                   db_access=db_access,\n                                   over=over_soft_limits)\n        if json_on:\n            print(CliUtils.__json__(server.utils.decode_dictlist(linfo)))\n        else:\n            server.utils.show(linfo)\n        sys.exit(0)\n\n    if nodeclasses:\n        if scheduler is None and os.getuid() == 0:\n            scheduler = Scheduler(server=server, snap=server.snap,\n                                  snapmap=server.snapmap, db_access=db_access)\n\n        if attributes:\n            res = attributes\n        elif scheduler is not None:\n            res = scheduler.get_resources(exclude=['host', 'vnode', 'arch'])\n            if res:\n                # ncpus and mem may have been removed from the resources line\n                # in which case we must add them back\n                if 'ncpus' not in res:\n                    res.append('ncpus')\n                if 'mem' not in res:\n                    res.append('mem')\n                for i in 
range(len(res)):\n                    res[i] = \"resources_available.\" + str(res[i])\n\n        else:\n            res = ['resources_available.ncpus', 'resources_available.mem']\n\n        server.show_equivalence_classes(None, VNODE, res, op=restotal,\n                                        show_zero_resources=True,\n                                        db_access=db_access,\n                                        resolve_indirectness=indirectness)\n\n    if jobclasses or acct is not None:\n        if acct is not None:\n            eqclasses = {}\n            # no need to ping a live server, so pretend\n            sm = {SERVER: None}\n            # disable logging to avoid displaying server instantiation messages\n            logging.disable(logging.INFO)\n            server = Server('__snapserver__', snapmap=sm)\n            logging.disable(logging.NOTSET)\n\n            alog = PBSAccountingLog(show_progress=True)\n            alog.enable_accounting_workload_parsing()\n            alog.analyze(acct)\n            attrs = list(alog.job_attrs.values())\n            eq = server.equivalence_classes(JOB, attributes, bslist=attrs)\n            server.show_equivalence_classes(eq)\n            if alog.parser_errors > 0:\n                print('Failed to parse: ' + str(alog.parser_errors))\n            sys.exit(0)\n        else:\n            server.show_equivalence_classes(None, JOB, attributes, op=restotal,\n                                            db_access=db_access)\n\n    # remaining operations are to filter an object type, skip if an\n    # equivalence class or resource was requested\n    if (jobclasses or nodeclasses):\n        sys.exit(0)\n\n    if restotal is not None:\n        logging.error('-T option operates only on equivalence classes')\n        sys.exit(1)\n\n    if resourcesset is not None:\n        if objtype is None:\n            logging.error('no object type specified')\n            sys.exit(1)\n        res = bu.list_resources(objtype, 
server.status(objtype))\n        if res:\n            print(\"\\n\".join(res))\n        sys.exit(0)\n\n    if (accumulate or grandtotal) and (attributes is not None):\n        if objtype is None:\n            logging.error('no object type specified')\n            sys.exit(1)\n\n        d = server.counter(objtype, attributes, attrop=attrop,\n                           grandtotal=grandtotal, db_access=db_access,\n                           extend='t', resolve_indirectness=indirectness)\n        for k, v in d.items():\n            if grandtotal and 'mem' in k:\n                d[k] = PbsTypeSize().encode(value=v)\n            else:\n                d[k] = v\n    else:\n        if objtype is None:\n            logging.error('no object type specified')\n            sys.exit(1)\n        idonly = True\n        if not qselectfmt:\n            idonly = False\n        if attributes:\n            d = server.filter(objtype, attributes, attrop=attrop,\n                              idonly=idonly, id=objid, extend='t',\n                              db_access=db_access, grandtotal=grandtotal,\n                              resolve_indirectness=indirectness)\n        else:\n            statinfo = server.status(objtype, id=objid, extend='t',\n                                     db_access=db_access,\n                                     resolve_indirectness=indirectness)\n            if json_on:\n                print(\n                    CliUtils.__json__(\n                        server.utils.decode_dictlist(statinfo)))\n            else:\n                server.utils.show(statinfo, fmt=fmt)\n            sys.exit(0)\n\n    if not d:\n        sys.exit(0)\n\n    if not qselectfmt and not (grandtotal or accumulate):\n        if objtype is None:\n            logging.error('no object type specified')\n            sys.exit(1)\n        visited = []\n        toshow = []\n        for objs in d.values():\n            for obj in objs:\n                if obj['id'] in visited:\n            
        continue\n                else:\n                    toshow.append(obj)\n                visited.append(obj['id'])\n\n        if json_on:\n            CliUtils.__json__(server.utils.decode_dictlist(toshow))\n        else:\n            server.utils.show(toshow, fmt=fmt)\n\n    elif attrop == PTL_AND and len(d) > 0:\n        if objtype is None:\n            logging.error('no object type specified')\n            sys.exit(1)\n        if isinstance(attributes, (list, dict)):\n            if grandtotal:\n                for k, v in d.items():\n                    print(k + \": \" + str(v))\n            elif accumulate:\n                print(\" && \".join(d.keys()) + \": \" + str(list(d.values())[0]))\n            else:\n                print(\" && \".join(d.keys()) + \": \\n\" +\n                      \"\\n\".join(str(list(d.values())[0])))\n        else:\n            print(str(\" && \".join(d.keys())) + \": \" + str(list(d.values())[0]))\n    else:\n        if objtype is None:\n            logging.error('no object type specified')\n            sys.exit(1)\n        for k, v in d.items():\n            if isinstance(v, list):\n                print(k + \":\")\n                for val in v:\n                    print(val)\n            else:\n                print(k + \":\" + str(v))\n"
  },
  {
    "path": "test/fw/bin/pbs_swigify",
    "content": "#!/usr/bin/env python3\n# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport os\nimport sys\nimport socket\nimport getopt\nimport tempfile\nimport logging\nimport logging.config\nimport errno\n\nimport ptl\nimport ptl.lib\nfrom ptl.utils.pbs_cliutils import CliUtils\nfrom ptl.utils.pbs_dshutils import DshUtils\nfrom ptl.lib.pbs_testlib import PtlConfig\n\n\n# trap SIGINT and SIGPIPE\ndef trap_exceptions(etype, value, tb):\n    sys.excepthook = sys.__excepthook__\n    if issubclass(etype, KeyboardInterrupt):\n        pass\n    elif issubclass(etype, IOError) and value.errno == errno.EPIPE:\n        pass\n    else:\n        sys.__excepthook__(etype, value, tb)\n\n\nsys.excepthook = trap_exceptions\n\n# A basic SWIG interface definition to wrap PBS IFL\nswiginter = '''\\\n%module pbs_ifl\n\n%typemap(out) char ** {\n  int len,i;\n  len = 0;\n  while ($1[len]) len++;\n  $result = PyList_New(len);\n  for (i = 0; i < len; i++) {\n    PyList_SetItem($result,i,PyString_FromString($1[i]));\n  }\n}\n\n%typemap(in) char ** {\n  /* Check if is a list */\n  if (PyList_Check($input)) {\n    int size = PyList_Size($input);\n    int i = 0;\n    $1 = (char **) malloc((size+1)*sizeof(char *));\n    for (i = 0; i < size; i++) {\n      PyObject *o = PyList_GetItem($input,i);\n      if (PyString_Check(o))\n    $1[i] = PyString_AsString(PyList_GetItem($input,i));\n      else {\n    PyErr_SetString(PyExc_TypeError,\"list must contain strings\");\n 
   free($1);\n    return NULL;\n      }\n    }\n    $1[i] = 0;\n  } else {\n    PyErr_SetString(PyExc_TypeError,\"not a list\");\n    return NULL;\n  }\n}\n\n%typemap(out) struct batch_status * {\n    struct batch_status *head_bs, *bs;\n    struct attrl *attribs;\n    char *resource;\n    char *str;\n    int i, j;\n    int len;\n    char buf[4096];\n    static char *id = \"id\";\n\n    head_bs = $1;\n    bs = $1;\n\n    for (len=0; bs != NULL; len++)\n        bs = bs->next;\n\n    $result = PyList_New(len);\n\n    bs = head_bs;\n\n    for (i=0; i < len; i++) {\n        PyObject *dict;\n        PyObject *a, *v, *tmpv;\n\n        dict = PyDict_New();\n        PyList_SetItem($result, i, dict);\n\n        a = PyString_FromString(id);\n        v = PyString_FromString(bs->name);\n        PyDict_SetItem(dict, a, v);\n\n        attribs = bs->attribs;\n        while (attribs) {\n            resource = attribs->resource;\n            if (resource != NULL) {\n                /* +2 to account for the '.' 
between name and resource */\n                str = malloc(strlen(attribs->name) + strlen(resource) + 2);\n                sprintf(str, \"%s.%s\", attribs->name, attribs->resource);\n                a = PyString_FromString(str);\n            }\n            else {\n                a = PyString_FromString(attribs->name);\n            }\n            tmpv = PyDict_GetItem(dict, a);\n            /* if the key already exists, append as comma-separated */\n            if (tmpv != NULL) {\n                char *s = PyString_AsString(tmpv);\n                /* +4 for the quotes, the comma, and a NULL byte */\n                str = malloc(strlen(attribs->value) + strlen(s) + 4);\n                sprintf(str, \"%s,%s\", attribs->value, s);\n                v = PyString_FromString(str);\n            }\n            else {\n                v = PyString_FromString(attribs->value);\n            }\n            PyDict_SetItem(dict, a, v);\n            attribs = attribs->next;\n        }\n        bs = bs->next;\n    }\n}\n%{\n#include \"pbs_ifl.h\"\nint pbs_py_spawn(int, char *, char **, char **);\n%}\n\n%include \"pbs_ifl.h\"\nint pbs_py_spawn(int, char *, char **, char **);\n'''\n\n\ndef _remove_file(workdir, filename):\n    f = os.path.join(workdir, filename)\n    if os.path.isfile(f):\n        logging.debug('removing intermediary file ' + filename)\n        os.remove(f)\n\n\ndef usage():\n    msg = []\n    msg += ['Usage: ' + os.path.basename(sys.argv[0]) + ' [OPTION]\\n\\n']\n    msg += ['  Produce Python wrappers for PBS IFL API\\n\\n']\n    msg += ['-c <pbs_conf>: path to pbs.conf\\n']\n    msg += [\n        '-f: force overwrite of _pbs_ifl.so and pbs_ifl.py when present\\n']\n    msg += ['-h: display usage information\\n']\n    msg += ['-i <swig.i>: path to swig interface file\\n']\n    msg += ['-I <python_include>: path to python include directory\\n']\n    msg += ['-l <level>: logging level\\n']\n    msg += ['-s <swig>: path to swig binary to use\\n']\n    msg += ['-t 
<targethost>: hostname to operate on\\n']\n    msg += ['-w <workdir>: path to working directory\\n']\n    msg += ['--log-conf=<file>: logging config file\\n']\n    msg += ['--version: print version number and exit\\n']\n\n    print(\"\".join(msg))\n\n\nif __name__ == '__main__':\n\n    workdir = tempfile.gettempdir()\n    targethost = socket.gethostname()\n    config = None\n    interface = None\n    pythoninc = None\n    level = 'INFO'\n    force = False\n    swigbin = 'swig'\n    logconf = None\n    complete_options = {}\n    duplicate_options = []\n\n    if 'PBS_CONF_FILE' in os.environ:\n        config = os.environ['PBS_CONF_FILE']\n    else:\n        config = '/etc/pbs.conf'\n\n    opts, args = getopt.getopt(sys.argv[1:], \"t:i:I:c:w:l:s:hf\",\n                               [\"log-conf=\", \"version\"])\n\n    for o, val in opts:\n        if o == '-t':\n            targethost = val\n        elif o == '-w':\n            workdir = CliUtils.expand_abs_path(val)\n        elif o == '-i':\n            interface = val\n        elif o == '-l':\n            level = val\n        elif o == '-c':\n            config = val\n        elif o == '-I':\n            pythoninc = CliUtils.expand_abs_path(val)\n        elif o == '-f':\n            force = True\n        elif o == '-s':\n            swigbin = CliUtils.expand_abs_path(val)\n        elif o == '--log-conf':\n            logconf = val\n        elif o == '--version':\n            print(ptl.__version__)\n            sys.exit(0)\n        elif o == '-h':\n            usage()\n            sys.exit(0)\n        else:\n            sys.stderr.write(\"Unrecognized option\\n\")\n            usage()\n            sys.exit(1)\n        if o in complete_options:\n            duplicate_options.append(\"option %s -> %s already used.\\n\"\n                                     % (o, val))\n        complete_options[o] = val\n\n    if len(duplicate_options) > 0:\n        sys.stderr.write(\"Please use an option only once, exiting.\\n\")\n     
   for option in duplicate_options:\n            sys.stderr.write(\"    %s\" % option)\n        sys.exit(1)\n\n    cu = CliUtils()\n\n    if logconf:\n        logging.config.fileConfig(logconf)\n    else:\n        log_lvl = cu.get_logging_level(level)\n        logging.basicConfig(level=log_lvl)\n\n    b = cu.check_bin(swigbin)\n    if not b:\n        logging.error(\"swig is missing, exiting\")\n        sys.exit(1)\n\n    b = cu.check_bin(\"gcc\")\n    if not b:\n        logging.error(\"gcc is missing, exiting\")\n        sys.exit(1)\n\n    if pythoninc is None:\n        logging.error(\"Path to Python include directory is mandatory\")\n        usage()\n        sys.exit(1)\n\n    if targethost != socket.gethostname():\n        logging.error(\"This command only works on localhost\")\n        sys.exit(1)\n\n    PtlConfig()\n    du = DshUtils()\n    pbs_conf = du.parse_pbs_config(targethost, file=config)\n\n    os.chdir(workdir)\n\n    if interface is None:\n        interface = os.path.join(workdir, \"pbs_ifl.i\")\n        f = open(interface, 'w')\n        f.write(swiginter)\n        f.close()\n        srcdir = os.getcwd()\n    else:\n        srcdir = os.path.dirname(interface)\n\n    if 'PBS_EXEC' in pbs_conf:\n        pbsinclude = os.path.join(pbs_conf['PBS_EXEC'], 'include')\n        cmd = [swigbin, '-python', '-I' + pbsinclude, interface]\n        logging.debug(du.run_cmd(targethost, cmd))\n        if srcdir != os.getcwd():\n            logging.debug(du.run_copy(targethost,\n                                      src=os.path.join(\n                                          srcdir, \"pbs_ifl_wrap.c\"),\n                                      dest=workdir))\n            logging.debug(du.run_copy(targethost,\n                                      src=os.path.join(srcdir, \"pbs_ifl.py\"),\n                                      dest=workdir))\n        cmd = ['gcc', '-Wall', '-Wno-unused-variable', '-fPIC', '-shared',\n               '-I' + pbsinclude]\n        cmd += ['-I' 
+ pythoninc]\n        cmd += ['pbs_ifl_wrap.c']\n        cmd += ['-L' + os.path.join(pbs_conf['PBS_EXEC'], 'lib')]\n        cmd += ['-lpbs']\n        cmd += ['-o', '_pbs_ifl.so']\n        cmd += ['-lcrypto', '-lssl']\n        logging.debug(du.run_cmd(targethost, cmd))\n\n    libdir = os.path.dirname(ptl.lib.__file__)\n    if force or not os.path.isfile(os.path.join(libdir, '_pbs_ifl.so')):\n        du.run_copy(targethost,\n                    src=os.path.join(workdir, '_pbs_ifl.so'),\n                    dest=os.path.join(libdir, '_pbs_ifl.so'), sudo=True)\n    if force or not os.path.isfile(os.path.join(libdir, 'pbs_ifl.py')):\n        du.run_copy(targethost,\n                    src=os.path.join(workdir, 'pbs_ifl.py'),\n                    dest=os.path.join(libdir, 'pbs_ifl.py'), sudo=True)\n\n    _remove_file(workdir, \"pbs_ifl.py\")\n    _remove_file(workdir, \"_pbs_ifl.so\")\n    _remove_file(workdir, \"pbs_ifl_wrap.c\")\n    _remove_file(workdir, \"pbs_ifl.i\")\n"
  },
  {
    "path": "test/fw/bin/pbs_sys_report",
    "content": "#!/usr/bin/env python3\n# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nimport csv\nimport json\nimport sys\nimport os\nimport argparse\n\n\ndef write_to_csv(head, csvrows, i, filepath):\n    with open(filepath, 'a+') as fp:\n        if i == 0:\n            csv.writer(fp).writerow(head)\n        csv.writer(fp).writerow(csvrows)\n\n\nif __name__ == '__main__':\n\n    msg = 'System monitoring results to csv conversion tool'\n    parser = argparse.ArgumentParser(description=msg)\n    parser.add_argument('PTL_JSON_FILE', help='path to ptl_test_results.json')\n    parser.add_argument(\n        '--outputfile', default='sys_report.csv', help='path to generate file')\n    args = parser.parse_args()\n    filepath = args.outputfile\n\n    if os.path.exists(filepath):\n        os.remove(filepath)\n\n    with open(args.PTL_JSON_FILE) as fp:\n        td = json.load(fp)\n\n    for k, v in sorted(td['testsuites'].items()):\n        _otcs = v['testcases']\n        for _k, _v in sorted(v['testcases'].items()):\n            _tcn = k + '.' 
+ _k\n            _om = _v['results']\n            for key, val in _om.items():\n                for item in val['measurements']:\n                    for ky, vl in item.items():\n                        if ky == 'procs':\n                            timelist = []\n                            name = ''\n                            time = ''\n                            for mes in vl:\n                                timelist.append(mes['time'])\n                                timelist = list(set(timelist))\n                            timelist.sort()\n                            i = 0\n                            for tm in timelist:\n                                head = ['Time']\n                                csvrows = []\n                                csvrows.append(tm)\n                                for mes in vl:\n                                    name = mes['name']\n                                    if mes['time'] == tm:\n                                        for nm, value in sorted(mes.items()):\n                                            if nm != 'name' and nm != 'time':\n                                                head.append(nm + '_' + name)\n                                                csvrows.append(value)\n                                write_to_csv(head, csvrows, i, filepath)\n                                i += 1\n"
  },
  {
    "path": "test/fw/doc/caveats.rst",
    "content": "Caveats\n=======\n\nStanding reservation PBS_TZID\n-----------------------------\n\nStanding reservations cannot be submitted using the API interface alone due\nto the need to set the PBS_TZID environment variable, such reservations are\nalways submitted using a CLI.\n\nqmgr operations for hooks and formula\n-------------------------------------\n\nQmgr operations for hooks and for the job_sort_formula must be done as root,\nthey are performed over the CLI.\n\nCLI and API differences\n-----------------------\n\nPTL redefines the PBS IFL such that it can dynamically call them via either\nthe API or the CLI. The methods are typically named after their PBS IFL\ncounterpart omitting the `pbs_` prefix, for example pbs_manager() becomes\nmanager() in PTL. Each method will typically either return the return code\nof its API/CLI counterpart, or raise a specific PTL exception. In some cases\n(e.g. manager) the return value may be that of the call to the expect() method.\n\nWhen calling expect on an attribute value, the value may be different\ndepending on whether the library is operating in CLI or API mode; as an\nexample, when submitting a reservation, expecting it to be confirmed via the\nAPI calls for an expect of {'reserve_state':'2'} whereas using the CLI one\nwould expect {'reserve_state':'RESV_CONFIRMED'}.\nThis can be handled in several ways:\nThe preferred way is to use the MATCH_RE operation on the attribute and\ncheck for either one of the possible values: for example to match either\nRESV_CONFIRMED or 2 one can write::\n\n   Server().expect(RESV, {'reserve_state':(MATCH_RE,\"RESV_CONFIRMED|2\")})\n\nAn alternative way is to set the operating mode to the one desired at the\nbeginning of the test (to one of PTL_API, or PTL_CLI) and ensure it is set\naccordingly by calling get_op_mode(), or handle the response in the test by\nchecking if the operating mode is CLI or API, which is generally speaking\nmore robust and the favored approach as the 
automation may be run in either\nmode on different systems.\n\nA non-exhaustive list of attribute type differences between CLI and API:\n\n - reserve_state\n - all times: ctime, mtime, qtime, reserve_start, reserve_end, estimated.start_time, Execution_Time\n\nCreating temp files\n-------------------\n\nWhen creating temp files, favor the use of DshUtils().mkstemp.\n\nUnsetting attributes\n--------------------\n\nTo unset attributes in alterjob, set the attribute value to '' (two single\nquotes) in order to escape special quote handling in Popen.\n\nExample::\n\n obj.unset_attributes([ATTR_Arglist])\n\nStat'ing objects via db-access\n------------------------------\n\nNot all object attributes are written to the DB; as a result, when using\npbs_stat with db-access enabled, information may appear to be missing.\n\nScheduler holidays file handling\n--------------------------------\n\nWhen reverting the scheduler's default configuration, the holidays file is\nreverted only if it was specifically parsed, either by calling parse_holidays\nor by calling set_prime_time, and it is reverted to the contents of the file\nas first parsed. In other words, if the contents of the file were updated\noutside PbsTestLab and then edited in PbsTestLab, the file will be reverted\nto that version rather than to the vanilla file that ships with PBS.\n\nInteractive Jobs\n----------------\n\nInteractive jobs are only supported through CLI operations and require the\npexpect module to be installed.\n\nInteractive jobs are submitted as a thread that sets the jobid as soon as it\nis returned by qsub -I, so that the caller can get back to monitoring\nthe state of PBS while the interactive session goes on in the thread.\n\nThe commands to be run within an interactive session are specified in the\njob's interactive_script attribute as a list of tuples, where the first\nitem in each tuple is the command to run, and the subsequent items are\nthe expected returned data.\n\n.. 
topic:: Implementation details:\n\n  The submission of an interactive job requires passing in job attributes,\n  the command to execute (i.e. the path to qsub -I), the hostname, and a\n  user-to-password map; details follow:\n\n  On Linux/Unix:\n\n    - when not impersonating:\n\n      pexpect spawns the qsub -I command and expects a prompt back; for each\n      tuple in the interactive_script, it sends the command and expects to\n      match the return value.\n\n    - when impersonating:\n\n      pexpect spawns sudo -u <user> qsub -I. The rest is as described in\n      non-impersonating mode.\n"
  },
  {
    "path": "test/fw/doc/commands.rst",
    "content": "Overview of commands\n=====================\n\nHere is an overview of the most common usage of the PTL commands, there are many\nmore options to control the commands, see the --help option of each command for\ndetails.\n\n.. _pbs_benchpress:\n\nHow to use pbs_benchpress\n-------------------------\n\npbs_benchpress is PTL's test harness, it is used to drive testing, logging\nand reporting of test suites and test cases.\n\nTo list information about a test suite::\n\n  pbs_benchpress -t <TestSuiteName> -i\n\nTo check for compilation errors use below command::\n\n  python -m py_compile /path/to/your/test/file.py\n\nBefore running any test we have to export below 2 paths::\n\n  export PYTHONPATH=</path/to/install/location>/lib/python<python version>/site-packages\n\n::\n\n  export PATH=</path/to/install/location>/bin\n\nTo Run a test suite and/or a test case\n\n   1. To run the entire test suite::\n\n        pbs_benchpress -t <TestSuiteName>\n\n    where `TestSuiteName` is the name of the class in the .py file you created\n\n   2. To run a test case part of a test suite::\n\n        pbs_benchpress -t <TestSuiteName>.<test_case_name>\n\n    where `TestSuiteName` is as described above and `test_case_name` is the name\n    of the test method in the class\n\n   3. You can run the under various logging levels using the -l option::\n\n        pbs_benchpress -t <TestSuiteName> -l DEBUG\n\n    To see various logging levels see :ref:`log_levels`\n\n   4. To run all tests that inherit from a parent test suite class run the\n      parent test suite passing the `--follow-child` param to pbs_benchpress::\n\n        pbs_benchpress -t <TestSuite> --follow-child\n\n   5. To exclude specific testsuites, use the --excluding option as such::\n\n        pbs_benchpress -t <TestSuite> --follow-child --exclude=<SomeTest>\n\n   6. 
To run a test by the name of its test file: for example, if a test\n      class is defined in a file named pbs_XYZ.py, you can run it using::\n\n        pbs_benchpress -f ./path/to/pbs_XYZ.py\n\n   7. To pass custom parameters to a test suite::\n\n        pbs_benchpress -t <TestSuite> -p \"<key1>=<val1>,<key2>=<val2>,...\"\n\n    Alternatively you can pass --param-file pointing to a file where parameters\n    are specified. The contents of the file should be one parameter per line::\n\n        pbs_benchpress -t <TestSuite> --param-file=</path/to/file>\n\n        Example: for a file named \"param_file\", the file contents should be\n        as below.\n\n        key1=val1\n        key2=val2\n        .\n        .\n\n    Once params are specified, a class variable called param is set in the Test\n    that can then be parsed out to be used in the test. When inheriting from\n    PBSTestSuite, the key=val pairs are parsed out and made available in the\n    class variable ``conf``, so the test can retrieve the information using::\n\n        if key1 in self.conf:\n            ...\n\n   8. To check that the available Python version is above a minimum::\n\n        pbs_benchpress --min-pyver=<version>\n\n   9. To check that the available Python version is less than a maximum::\n\n        pbs_benchpress --max-pyver=<version>\n\n\n   10. On Linux, you can generate PBS coverage data using PTL.\n       To collect coverage data using LCOV/LTP, first ensure that PBS was\n       compiled using --set-cflags=\"--coverage\" and make sure that you have\n       the lcov utility installed. 
The lcov utility can be obtained at http://ltp.sourceforge.net/coverage/lcov.php\n       Then, to collect PBS coverage data, run pbs_benchpress as follows::\n\n        pbs_benchpress -t <TestName> --lcov-data=</path/to/gcov/build/dir>\n\n       By default the output data will be written to TMPDIR/pbscov-YYYYMMDD_HHMMSS;\n       this can be controlled using the option --lcov-out.\n       By default the lcov binary is expected to be available in the environment;\n       if it isn't, you can set the path using the option --lcov-bin.\n\n\n   11. For tests that inherit from PBSTestSuite, to collect process information::\n\n        pbs_benchpress -t <TestSuite> -p \"procmon=<proc name>[:<proc name>],procmon-freq=<seconds>\"\n\n       where `proc name` is a process name such as pbs_server, pbs_sched, or\n       pbs_mom. RSS, VSZ, and PCPU info will be collected for each\n       colon-separated name.\n\n\n   12. To run PTL on a multinode cluster, there are two basic requirements:\n\n         A. PTL must be installed on all the nodes.\n         B. Passwordless ssh must be set up between all the nodes.\n\n       Suppose we have a multinode cluster of three nodes (M1-type1, M2-type2,\n       M3-type3). We can invoke the pbs_benchpress command as below::\n\n        pbs_benchpress -t <TestSuite> -p \"servers=M1,moms=M1:M2:M3\"\n\n\n.. _log_levels:\n\nLogging levels\n~~~~~~~~~~~~~~\n\nPTL uses the generic unittest log levels INFO, WARNING, DEBUG, ERROR, and\nFATAL, and three custom log levels: INFOCLI, INFOCLI2, DEBUG2.\n\nINFOCLI is used to log command line calls so that the output of a test run\ncan be read by anyone familiar with the PBS commands.\n\nINFOCLI2 is used to log a wider set of commands run through PTL.\n\nDEBUG2 is a verbose debugging level. It will log commands, including return\ncode, stdout and stderr.\n\n.. 
_pbs_loganalyzer:\n\nHow to use pbs_loganalyzer\n--------------------------\n\nTo analyze scheduler logs::\n\n  pbs_loganalyzer -l </path/to/schedlog>\n\nTo display only a summary of the scheduling cycles::\n\n  pbs_loganalyzer -l </path/to/schedlog> -c\n\nTo analyze server logs::\n\n  pbs_loganalyzer -s </path/to/serverlog>\n\nTo analyze mom logs::\n\n  pbs_loganalyzer -m </path/to/momlog>\n\nTo analyze accounting logs::\n\n  pbs_loganalyzer -a </path/to/accountinglog>\n\nTo specify a begin and/or end time::\n\n  pbs_loganalyzer -b \"02/20/2013 21:00:00\" -e \"02/20/2013 22:00:00\" <rest>\n\nNote that for accounting logs, the file will be 'cat' using the sudo command,\nso the tool can be run as a regular user with sudo privilege.\n\nTo compute cpu/hour utilization against a given snapshot of nodes::\n\n  pbs_loganalyzer -U --nodes-file=/path/to/pbsnodes-av-file\n                     --jobs-file=/path/to/qstat-f-file\n                     -a /path/acct\n\nA progress bar can be displayed by issuing::\n\n  pbs_loganalyzer --show-progress ...\n\nTo analyze the scheduler's estimated start time::\n\n  pbs_loganalyzer --estimated-info -l <path/to/sched/log>\n\nTo analyze per-job scheduler performance metrics: time to run, time to\ndiscard, time in scheduler (solver time as opposed to I/O with the server),\nand time to calendar::\n\n  pbs_loganalyzer -l </path/to/schedlog> -S\n\nIn addition to a scheduler log, a server log is required to compute the time\nin scheduler metric; this is because the time in sched is measured as the\ndifference between a sched log's \"Considering job to run\" message and the\ncorresponding server log's \"Job Run\" message.\n\nTo output analysis to a SQLite file::\n\n  pbs_loganalyzer --db-name=<name or path of database> --db-type=sqlite\n\nNote that the sqlite3 module is needed to write out to the DB file.\n\nTo output to a PostgreSQL database::\n\n  pbs_loganalyzer --db-access=</path/to/pgsql/cred/file>\n                  --db-name=<name or path of 
database>\n                  --db-type=psql\n\nNote that the psycopg2 module is needed to write out to the PostgreSQL database.\nThe cred file should specify the following::\n\n  user=<db username> password=<user's password> dbname=<databasename> port=<val>\n\nTo analyze the time (i.e., log record time) between occurrences of a regular\nexpression in any log file::\n\n  pbs_loganalyzer --re-interval=<regex expression>\n\nThis can be used, for example, to measure the interval of occurrences between\nE records in an accounting log::\n\n  pbs_loganalyzer -a <path/to/accountlog> --re-interval=\";E;\"\n\nA useful extended option to the occurrences interval is to compute the number\nof regular expression matches over a given period of time::\n\n  pbs_loganalyzer --re-interval=<regex> --re-frequency=<seconds>\n\nFor example, to count how many E records are emitted over a 60 second window::\n\n  pbs_loganalyzer -a <acctlog> --re-interval=\";E;\" --re-frequency=60\n\nWhen using --re-interval, the -f option can be used to point to an arbitrary\nlog file instead of depending on -a, -l, -s, or -m; however, all of these\nlog-specific options will still work.\n\nA note about the regular expression used: every Python named group, i.e., an\nexpression of the form (?P<name>...), will be reported out as a dictionary of\nitems mapped to each named group.\n\n.. _pbs_stat:\n\nHow to use pbs_stat\n-------------------\n\npbs_stat is a useful tool to display filtered information from querying\nPBS objects. 
The supported objects are nodes, jobs, resvs, server, queues.\nThe supported operators on filtering attributes or resources are >,\n<, >=, <=, and ~, the latter being for a regular expression match on the value\nassociated to an attribute or resource.\n\nIn the examples below one can replace the object type by any of\nthose alternative ones, with the appropriate changes in attribute or resource\nnames.\n\nEach command can be run by passing a -t <hostname> option to specify a\ndesired target hostname; the default (no -t) will query the localhost.\n\nTo list a summary of all job equivalence classes on Resource_List.select, use::\n\n  pbs_stat -j -a \"Resource_List.select\"\n\nTo list a summary of all node equivalence classes::\n\n  pbs_stat -n\n\nNote that node equivalence classes are collected by default on\nresources_available.ncpus, resources_available.mem, and state. To specify the\nattributes to create the equivalence class on, use -a/-r.\n\nTo list all nodes that have more than 2 cpus::\n\n  pbs_stat --nodes -a \"resources_available.ncpus>2\"\n\nor equivalently (for resources)::\n\n  pbs_stat --nodes -r \"ncpus>2\"\n\nTo list all jobs that request more than 2 cpus and are in state 'R'::\n\n  pbs_stat --jobs -a \"Resource_List.ncpus>2&&job_state='R'\"\n\nTo filter all nodes that have a host value that starts with n and ends with a,\ni.e., \"n.*a\"::\n\n  pbs_stat --nodes -r \"host~n.*a\"\n\nTo display information in a qselect-like format, pass the option -s to each\ncommand. Using -s, the selected attributes are displayed first, followed by a\nlist of names that match the selection criteria.\n\nTo display data with one entity per line, use the --sline option::\n\n  pbs_stat --nodes --sline\n\nTo show what is available now in the complex (a.k.a. the backfill hole), use::\n\n  pbs_stat -b\n\nBy default the backfill hole is computed based on ncpus, mem, and state; you\ncan specify the attributes to compute it on by passing a comma-separated list\nof attributes to the -a option. 
An alternative to compute the backfill hole is\nto use pbs_sim -b.\n\nTo show utilization of the system, use::\n\n  pbs_stat -U [-r \"<resource1,resource2,...>\"]\n\nResources default to ncpus, memory, and nodes.\n\nTo show utilization of a specific user::\n\n  pbs_stat -U --user=<name>\n\nTo show utilization of a specific group::\n\n  pbs_stat -U --group=<name>\n\nTo show utilization of a specific project::\n\n  pbs_stat -U --project=<name>\n\nTo count the grand total of a resource's values in the complex for the queried resource::\n\n  pbs_stat -r <resource, e.g. ncpus> -C --nodes\n\nNote that nodes that are not up are not counted.\n\nTo count the number of resources having the same values in the complex for the queried resource::\n\n  pbs_stat -r <resource, e.g. ncpus> -c --nodes\n\nTo show an evaluation of the formula for all non-running jobs::\n\n  pbs_stat --eval-formula\n\nTo show the fairshare tree and fairshare usage::\n\n  pbs_stat --fairshare\n\nTo read information from a file, use for example::\n\n  pbs_stat -f /path/to/pbsnodes/or/qstat_f/output --nodes -r ncpus\n\nTo list all resources currently set on a given object type::\n\n  pbs_stat --nodes --resources-set\n\nTo list all resources defined in resourcedef::\n\n  pbs_stat --resources\n\nTo list a specific resource by name from resourcedef (if it exists)::\n\n  pbs_stat --resource=<custom_resource>\n\nTo show limits associated to all entities::\n\n  pbs_stat --limits-info\n\nTo show limits associated to a specific user::\n\n  pbs_stat --limits-info --user=<name>\n\nTo show limits associated to a specific group::\n\n  pbs_stat --limits-info --group=<name>\n\nTo show limits associated to a specific project::\n\n  pbs_stat --limits-info --project=<name>\n\nTo show entities that are over their soft limits::\n\n  pbs_stat --over-soft-limits\n\nThe output of limits information shows named entities associated to each\ncontainer (server or queue) to which a limit is applied. 
The entity's usage\nand limit set are displayed, along with a remainder usage value that\nindicates whether an entity is over a limit (represented by a negative value)\nor under a limit (represented by a positive or zero value). In the case of a\nPBS_ALL or PBS_GENERIC limit setting, each entity is displayed using\nthe entity's name followed by \"/PBS_ALL\" or \"/PBS_GENERIC\" as the case may be.\n\nHere are a few examples. If a server soft limit is set to 0::\n\n    qmgr -c \"set server max_run_soft=[u:user1=0]\"\n\nfor user user1 on the server object, then pbs_stat --limits-info will show::\n\n    u:user1\n        container = server:minita.pbs.com\n        limit_type = max_run_soft\n        remainder = -1\n        usage/limit = 1/0\n\n\nIf a server soft limit is set to 0 on generic users::\n\n    qmgr -c \"set server max_run_soft=[u:PBS_GENERIC=0]\"\n\nthen pbs_stat --limits-info will show::\n\n    u:user1/PBS_GENERIC\n        container = server:minita.pbs.com\n        limit_type = max_run_soft\n        remainder = -1\n        usage/limit = 1/0\n\nTo print a site report that summarizes some key metrics from a site::\n\n  pbs_stat --report\n\nOptionally, pass the path to a pbs_snapshot using the -d option to summarize\nthat site's information.\n\nTo show the number of privileged ports in use::\n\n  pbs_stat --pports\n\nTo show information directly from the database (requires the psycopg2 module)::\n\n  pbs_stat --db-access=<path/to/dbaccess_file> --db-type=psql\n           --<objtype> [-a <attribs>]\n\nwhere the dbaccess file is of the form::\n\n  user=<value>\n  password=<value>\n  # and optionally\n  [port=<value>]\n  [dbname=<value>]\n\n.. _pbs_config:\n\nHow to use pbs_config\n---------------------\n\npbs_config is useful in the following cases; use:\n\n.. option:: --revert-config\n\n    To revert a configuration of PBS entities specified as one or\n    more of --scheduler, --server, --mom to its default configuration. 
Note that\n    for the server, non-default queues and hooks are not deleted but disabled\n    instead.\n\n.. option:: --save-config\n\n    To save the configuration of a PBS entity, one of --scheduler,\n    --server, --mom, to file. The server saves the resourcedef, a qmgr print\n    server, qmgr print sched, qmgr print hook. The scheduler saves sched_config,\n    resource_group, dedicated_time, holidays. The mom saves the config file.\n\n.. option:: --load-config\n\n    To load configuration from a file. The changes will be applied to\n    all PBS entities as saved in the file.\n\n.. option:: --vnodify\n\n    To create a vnode definition and insert it into a given MoM. There are\n    many options to this command; see the help page for details.\n\n.. option:: --switch-version\n\n    To switch to a version of PBS installed on the system. This\n    only supports modifying the PBS installed on a system that matches\n    PBS_CONF_FILE.\n\n.. option:: --check-ug\n\n    To check if the users and groups required for automated testing are defined as\n    expected on the system.\n\n.. option:: --make-ug\n\n    To create the users and groups required for automated testing. This will\n    create user home directories with 755 permissions. If the test users are not\n    created with this command, the tester has to make sure that their home\n    directories have 755 permissions.\n\nTo set up, start, and add (to the server) multiple MoMs::\n\n  pbs_config --multi-mom=<num> -a <attributes> --serverhost=<host>\n\nThe multi-mom option creates <num> pbs.conf files, prefixed by pbs.conf_m\nfollowed by an incrementing number by default, for which each configuration\nfile has a unique PBS_HOME directory that is defined by default to be PBS_m\nfollowed by the same incrementing number as the configuration file. 
The\nconfiguration prefix can be changed by passing the --conf-prefix option and\nthe PBS_HOME prefix can be changed via --home-prefix.\n\nTo make the PBS daemons mimic the state captured in a pbs_snapshot::\n\n  pbs_config --as-snap=<path/to/snap>\n\nThis will set all server and queue attributes from the snapshot, copy sched_config,\nresource_group, holidays, resourcedef, and all site hooks, and create and insert a\nvnode definition that translates all of the nodes reported by pbsnodes -av.\nSome specific attributes, such as pbs_license_info, or users and groups, may\nneed to be adjusted, as they may prevent submission of jobs.\n\n.. _pbs_py_spawn:\n\nHow to use pbs_py_spawn\n-----------------------\n\nThe pbs_py_spawn wrapper can only be used when the pbs_ifl.h API is\nSWIG-wrapped. The tool can be used to invoke a pbs_py_spawn action associated\nto a job running on a MoM.\n\nTo call a Python script during the runtime of a job::\n\n  pbs_py_spawn -j <jobid> <path/to/python/script/on/MoM>\n\nTo call a Python script that will detach from the job's session::\n\n  pbs_py_spawn --detach -j <jobid> </path/to/python/script/on/MoM>\n\nDetached scripts essentially background themselves and are attached back to\nthe job through pbs_attach for monitoring, such that they are terminated when\nthe job terminates. 
The detached script must write out its PID as its first\noutput.\n\n.. _pbs_compare_results:\n\nHow to use pbs_compare_results\n------------------------------\n\npbs_compare_results is a tool that compares performance test results by\ncomparing the JSON output generated by pbs_benchpress.\n\nTo run pbs_compare_results and generate a CSV-only report::\n\n  pbs_compare_results <benchmark_version>.json <tocompare_version>.json\n\nTo run pbs_compare_results and generate an HTML report along with the CSV::\n\n  pbs_compare_results <benchmark_version>.json <tocompare_version>.json --html-report\n\nTo run pbs_compare_results and generate reports at a user-defined location::\n\n  pbs_compare_results <benchmark_version>.json <tocompare_version>.json --output-file=<path>\n\n"
  },
  {
    "path": "test/fw/doc/conf.py",
    "content": "# -*- coding: utf-8 -*-\n#\n# PbsTestLab documentation build configuration file, created by\n# sphinx-quickstart on Fri May 27 11:57:52 2016.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport os\nimport sys\n\nHAS_RTD = False\ntry:\n    import sphinx_rtd_theme\n    HAS_RTD = True\nexcept Exception:\n    HAS_RTD = False\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\nsys.path.insert(0, os.path.abspath('..'))\nsys.path.insert(0, os.path.abspath('.'))\n# import ptl\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\nneeds_sphinx = '1.3'\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = ['sphinx.ext.autodoc']\n\nautodoc_member_order = 'bysource'\n# Add any paths that contain templates here, relative to this directory.\n# templates_path = []\n\n# The suffix of source filenames.\nsource_suffix = '.rst'\n\n# The encoding of source files.\n# source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = 'PbsTestLab'\ncopyright = '(C) 1994-2020 Altair Engineering, Inc'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\n__version__ = 'unknown'\nexec(compile(open('../ptl/__init__.py').read(), '../ptl/__init__.py', 'exec'))\nversion = __version__\n# The full version, including alpha/beta/rc tags.\nrelease = '1.0.0'\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n# language = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n# today = ''\n# Else, today_fmt is used as the format for a strftime call.\n# today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['target']\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n# default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n# add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n# add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. 
They are ignored by default.\n# show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n# modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n# keep_warnings = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages.  See the documentation for\n# a list of builtin themes.\nif HAS_RTD:\n    html_theme = 'sphinx_rtd_theme'\n    html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\nelse:\n    html_theme = 'sphinxdoc'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further.  For a list of options available for each theme, see the\n# documentation.\n# html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\n# html_theme_path = []\n\n# The name for this set of Sphinx documents.  If None, it defaults to\n# \"<project> v<release> documentation\".\n# html_title = None\n\n# A shorter title for the navigation bar.  Default is the same as html_title.\n# html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n# html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs.  This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n# html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\n# html_static_path = []\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. 
These files are copied\n# directly to the root of the documentation.\n# html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n# html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n# html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n# html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n# html_additional_pages = {}\n\n# If false, no module index is generated.\n# html_domain_indices = True\n\n# If false, no index is generated.\n# html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n# html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\nhtml_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\nhtml_show_sphinx = False\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\nhtml_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it.  The value of this option must be the\n# base URL from which the finished HTML is served.\n# html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n# html_file_suffix = None\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'PbsTestLabdoc'\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {'papersize': 'a4paper', }\n# The paper size ('letterpaper' or 'a4paper').\n\n# The font size ('10pt', '11pt' or '12pt').\n# 'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n# 'preamble': '',\n\n\n# Grouping the document tree into LaTeX files. 
List of tuples\n# (source start file, target name, title,\n#  author, documentclass [howto, manual, or own class]).\nlatex_documents = [('index', 'PbsTestLab.tex', 'PbsTestLab Documentation',\n                    'Copyright (C) 1994-2021 Altair Engineering, Inc',\n                    'manual'), ]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n# latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n# latex_use_parts = False\n\n# If true, show page references after internal links.\n# latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\nlatex_show_urls = 'True'\n\n# Documents to append as an appendix to all manuals.\n# latex_appendices = []\n\n# If false, no module index is generated.\n# latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [('index', 'pbstestlab', 'PbsTestLab Documentation',\n              ['Copyright (C) 1994-2021 Altair Engineering, Inc'], 1)]\n\n# If true, show URL addresses after external links.\n# man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n#  dir menu entry, description, category)\ntexinfo_documents = [('index', 'PbsTestLab', 'PbsTestLab Documentation',\n                      'Copyright (C) 1994-2021 Altair Engineering, Inc',\n                      'PbsTestLab', 'PBS Testing and Benchmarking\\\n                      Framework', 'Miscellaneous'), ]\n\n# Documents to append as an appendix to all manuals.\n# texinfo_appendices = []\n\n# If false, no module index is generated.\n# texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\ntexinfo_show_urls = 'inline'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n# texinfo_no_detailmenu = False\n\n\n# Skip the document for unwanted members iside API documentation\ndef autodoc_skip_member(app, what, name, obj, skip, options):\n    exclusions = ('chunks_tag', 'chunk_tag', 'array_tag', 'subjob_tag',\n                  'pbsobjname_re', 'pbsobjattrval_re', 'dt_tag',\n                  'hms_tag', 'lim_tag', 'fgc_attr_pat', 'fgc_val_pat',\n                  'version_tag', 'fs_tag', 'conf_re', 'generic_tag',\n                  'node_type_tag', 'queue_type_tag', 'job_type_tag',\n                  'job_exit_tag', 'tm_tag', 'server_run_tag',\n                  'server_nodeup_tag', 'server_enquejob_tag',\n                  'server_endjob_tag', 'startcycle_tag', 'endcycle_tag',\n                  'alarm_tag', 'considering_job_tag', 'sched_job_run_tag',\n                  'estimated_tag', 'run_failure_tag', 'calendarjob_tag',\n                  'preempt_failure_tag', 'preempt_tag', 'record_tag',\n                  'mom_run_tag', 'mom_end_tag', 'mom_enquejob_tag',\n                  'record_tag', 'S_sub_record_tag', 'E_sub_record_tag',\n                  'sub_record_tag', 'generic_tag', 'node_type_tag',\n                  'queue_type_tag', 'job_type_tag', 'job_exit_tag', 'tm_tag',\n                  'server_run_tag', 'server_nodeup_tag', 
'server_enquejob_tag',\n                  'server_endjob_tag', 'startcycle_tag', 'endcycle_tag',\n                  'alarm_tag', 'considering_job_tag', 'sched_job_run_tag',\n                  'estimated_tag', 'run_failure_tag', 'calendarjob_tag',\n                  'preempt_failure_tag', 'preempt_tag', 'record_tag',\n                  'mom_run_tag', 'mom_end_tag', 'mom_enquejob_tag',\n                  'record_tag', 'S_sub_record_tag', 'E_sub_record_tag',\n                  'sub_record_tag')\n    exclude = name in exclusions\n    return skip or exclude\n\n\ndef setup(app):\n    app.connect('autodoc-skip-member', autodoc_skip_member)\n\n\n# Default autodoc members for API rst file\nautodoc_default_flags = ['members', 'no-undoc-members', 'no-private-members']\n"
  },
  {
    "path": "test/fw/doc/howtotest.rst",
"content": "How to write test suite/case\n============================\n\nAssumptions\n-----------\n\nThe library and utility make several assumptions about the environment:\n\n- OS should be Unix or Linux\n- Password-less authentication should be set up on all systems\n- Required users are available (see ``pbs_config --check-ug``)\n- The PTL package should be installed on all systems\n- The file system layout should be the same on all systems\n- The PBS_CONF_FILE variable should be set on all systems\n\nNaming conventions, recommended practices, and guidelines\n---------------------------------------------------------\n\nWrite the test in a filename prefixed by ``pbs_`` followed by the feature name:\n``pbs_<featurename>.py``\n\nThe name of the test class should be prefixed by ``Test`` and followed by a\nunique, explanatory name: ``Test<feature>``\n\nEach test case name should start with ``test_`` followed by lower case\ncharacters: ``test_<testname>``. Test case names should be unique, accurate,\nand explanatory, yet concise; they can have multiple words if needed. The\nsequence in which test cases run is unordered (some claim that it is\nlexicographic, but it is best not to write your test suites based on such\nassumptions).\n\nPut functionality that is common to all test cases in its own method,\nand consider adding it to the library or utility if it is a generic interface to\nPBS.\n\nPTL strictly follows the PEP8 Python coding style, so please style your code to\nfollow PEP8 as well. You can find PEP8 at https://www.python.org/dev/peps/pep-0008/\n\nSome info about PBSTestSuite\n----------------------------\n\nTests that inherit functionality from a parent class such as PBSTestSuite have\navailable to them predefined functionality for setUpClass, setUp, tearDownClass, tearDown,\nor whatever capability they make available in the parent class.\n\nPBSTestSuite offers the following:\n\n.. 
topic:: setUpClass:\n\n  - Parse custom parameters that are passed in to the class variable called 'param' (i.e. the -p option to pbs_benchpress).\n    The built-in parameters are:\n\n    - servers: Colon-separated list of hostnames hosting a PBS server/scheduler.\n    - server: The hostname on which the PBS Server/scheduler is running\n    - moms: Colon-separated list of hostnames hosting PBS MoMs.\n    - mom: The hostname on which the PBS MoM is running\n    - nomom: Colon-separated list of hostnames where no MoM is expected to be running.\n    - comms: Colon-separated list of hostnames hosting PBS Comms.\n    - comm: The hostname on which the PBS Comm is running\n    - client: For CLI mode only, name of the host on which the PBS client commands are to be run.\n    - clienthost: the hostnames to set in the MoM config file\n    - mode: Mode of operation to the PBS server. Can be either 'cli' or 'api'.\n    - conn_timeout: set a timeout in seconds after which a pbs_connect IFL call is refreshed (i.e., disconnected)\n    - skip-setup: Bypasses setUp of PBSTestSuite (not custom ones)\n    - skip-teardown: Bypasses tearDown of PBSTestSuite (not custom ones)\n    - repeat-count: Number of test repetitions\n    - repeat-delay: Delay between two repetitions\n    - procinfo: Enables the process monitoring thread, logged into ptl_proc_info test metrics.\n    - procmon: Colon-separated process names to monitor. For example, to monitor server, sched, and mom use procmon=pbs_server:pbs_sched:pbs_mom\n    - procmon-freq: Sets a polling frequency for the process monitoring tool. Defaults to 10 seconds.\n    - revert-to-defaults=<True|False>: If False, will not revert to defaults. Defaults to True.\n    - revert-hooks=<True|False>: If False, do not revert hooks to defaults. Defaults to True. revert-to-defaults set to False overrides this setting.\n    - del-hooks=<True|False>: If False, do not delete hooks. Defaults to False. 
revert-to-defaults set to False overrides this setting.\n    - revert-queues=<True|False>: If False, do not revert queues to defaults. Defaults to True. revert-to-defaults set to False overrides this setting.\n    - revert-resources=<True|False>: If False, do not revert resources to defaults. Defaults to True. revert-to-defaults set to False overrides this setting.\n    - del-queues=<True|False>: If False, do not delete queues. Defaults to False. revert-to-defaults set to False overrides this setting.\n    - del-vnodes=<True|False>: If False, do not delete vnodes on MoM instances. Defaults to True.\n    - server-revert-to-defaults=<True|False>: If False, don't revert the Server to defaults\n    - comm-revert-to-defaults=<True|False>: If False, don't revert the Comm to defaults\n    - mom-revert-to-defaults=<True|False>: If False, don't revert the MoM to defaults\n    - sched-revert-to-defaults=<True|False>: If False, don't revert the Scheduler to defaults\n    - test-users: colon-separated list of users to use as test users. The users specified override the default users in the order in which they appear in the PBS_USERS list.\n    - data-users: colon-separated list of data users.\n    - oper-users: colon-separated list of operator users.\n    - mgr-users: colon-separated list of manager users.\n    - root-users: colon-separated list of root users.\n    - build-users: colon-separated list of build users.\n    - daemon-users: colon-separated list of daemon users.\n\n  - Check that the required users are available\n  - Create servers, moms, schedulers, and comms objects\n\n.. 
topic:: setUp:\n\n  - Check whether the servers, schedulers, moms, and comms services are up.\n  - If any of the services is down, start that service.\n  - Add the current user to the list of managers\n  - Bring servers, schedulers, moms, and comms configurations back to out-of-box defaults\n  - Clean up jobs and reservations\n  - If no nodes are defined in the system, a single 8 cpu node is defined.\n  - Start the process monitoring thread if process monitoring is enabled\n\n.. topic:: setUpClass:\n\n  - If setUpClass is overridden, use super() instead of the class you are overriding to call setUpClass of the parent.\n\n.. topic:: tearDown:\n\n  - If process monitoring is enabled, stop the process monitoring thread and collect process metrics\n\n.. topic:: tearDownClass:\n\n  - If tearDownClass is overridden, use super() instead of the class you are overriding to call tearDownClass of the parent.\n\n.. topic:: analyze_logs:\n\n  - Analyze all PBS daemon and accounting logs and collect log metrics\n\nYou can take advantage of PBSTestSuite's setUp and tearDown methods and extend\ntheir functionality by overriding the setUp and/or tearDown methods in your\nown class, for example\n\n::\n\n      class TestMyFix(PBSTestSuite):\n\n            def setUp(self):\n                PBSTestSuite.setUp(self)\n                # create custom nodes, server/sched config, etc...\n\nFor the detailed test directory structure, please check the documentation.\n\nWriting a test suite\n--------------------\n\nSee ptl/tests/pbs_smoketest.py for some basic examples of how to write a test\nsuite.\n\nWhenever possible, consider making the test class inherit from PBSTestSuite; it\nis a generic setup and teardown class that deletes all jobs and reservations,\nreverts the PBS daemons' configuration to defaults, and ensures that there\nis at least one cpu to schedule work on.\n\nHow to mark a test as skipped\n------------------------------\n\nThe unittest module in Python versions less than 2.7 does not support\nregistering 
skipped tests. PTL offers a mechanism to skip tests; it\nis, however, up to the test writer to ensure that a test is not run if\nit needs to be skipped.\n\n.. topic:: skipTest:\n\n  Tests that inherit from PBSTestSuite inherit a method called ``skipTest`` that\n  is used to skip tests; whenever a test is to be skipped, that method should be\n  called and the test should return.\n\n.. topic:: checkModule:\n\n  Tests that inherit from PBSTestSuite inherit a method called ``checkModule`` that\n  is used to skip tests if a required Python module is not installed.\n\n.. topic:: skipOnCray:\n\n  Tests that inherit from PBSTestSuite inherit a method called ``skipOnCray`` that\n  is used to skip tests on the Cray platform.\n\n.. topic:: skipOnShasta:\n\n  Tests that inherit from PBSTestSuite inherit a method called ``skipOnShasta`` that\n  is used to skip tests on the Cray Shasta platform.\n\nHow to add a new attribute to the library\n-----------------------------------------\n\nThis section is targeted at PBS developers who may be adding a new job, queue,\nserver, or node attribute and need to write tests that depend on such a new\nattribute.\nPTL does not automatically generate mappings from API to CLI, so when adding\nnew attributes, it is the responsibility of the test writer to define the\nattribute conversion in ptl/lib/pbs_api_to_cli.py. The new attribute must also\nbe defined in ptl/lib/pbs_ifl_mock.py so that the attribute name can be\ndereferenced if the SWIG wrapping was not performed.\n\nHere is an example: let's assume we are introducing a new job attribute called\nATTR_geometry that maps to the string \"job_geometry\". In order to be able to\nset the attribute on a job, we need to define it in pbs_api_to_cli.py as:\nATTR_geometry: \"W job_geometry=\"\nand add it to ptl/lib/pbs_ifl_mock.py as:\nATTR_geometry: \"job_geometry\".\nIn order to get the API to take the new attribute into consideration,\npbs_swigify must be rerun so that symbols from pbs_ifl.h are read in.\n"
  },
  {
    "path": "test/fw/doc/index.rst",
    "content": "Welcome to PbsTestLab's documentation!\n======================================\nA unit testing and benchmarking framework to write, execute, and catalog\nPBS tests.\n\n.. toctree::\n   :numbered:\n\n   install\n   intro\n   tutorial\n   howtotest\n   commands\n   caveats\n   ptl\n\n\n"
  },
  {
    "path": "test/fw/doc/install.rst",
"content": "Installation\n============\n\nPrerequisites\n-------------\n    - Python >= 2.6\n    - `Pip`_ >= 8\n\nInstall\n-------\nTo install the package, run the following::\n\n    pip install -r requirements.txt .\n\nTo install the package in a non-default location, run the following::\n\n    pip install -r requirements.txt --prefix=/path/to/install/location .\n\nIf you install in a non-default location, export the `PYTHONPATH` variable before using PTL, as follows::\n\n    export PYTHONPATH=</path/to/install/location>/lib/python<python version>/site-packages\n\n::\n\n    </path/to/install/location/bin>/pbs_benchpress -h\n\n\nUpgrade\n-------\n\nTo upgrade the package, run the following::\n\n    pip install -U -r requirements.txt .\n\nUninstall\n---------\n\nTo uninstall the package, run the following::\n\n    pip uninstall PbsTestLab\n\nIf you installed in a non-default location, export `PYTHONPATH` before running the uninstall command::\n\n    export PYTHONPATH=</path/to/install/location>/lib/python<python version>/site-packages\n    pip uninstall PbsTestLab\n\n.. _Pip: https://pip.pypa.io/en/stable\n"
  },
  {
    "path": "test/fw/doc/intro.rst",
"content": "Introduction to PbsTestLab\n==========================\n\nCommand line tools\n------------------\n\n- :ref:`pbs_benchpress <pbs_benchpress>` used to run unit tests\n- :ref:`pbs_loganalyzer <pbs_loganalyzer>` used to analyze PBS logs\n- :ref:`pbs_stat <pbs_stat>` used to filter PBS objects based on select properties\n- :ref:`pbs_config <pbs_config>` used to configure services, e.g., create vnodes\n- :ref:`pbs_py_spawn <pbs_py_spawn>` used to invoke a pbs_py_spawn action associated with a job running on a MoM\n- :ref:`pbs_compare_results <pbs_compare_results>` used to compare performance test results\n\nLibrary\n-------\n\n- Provides PBS IFL operations through either SWIG wrappers or the PBS CLI, e.g., qstat, qsub, etc.\n- Encapsulates PBS entities: :py:class:`~ptl.lib.pbs_testlib.Server`, :py:class:`~ptl.lib.pbs_testlib.Scheduler`,\n  :py:class:`~ptl.lib.pbs_testlib.MoM`, :py:class:`~ptl.lib.pbs_testlib.Comm`, :py:class:`~ptl.lib.pbs_testlib.Queue`,\n  :py:class:`~ptl.lib.pbs_testlib.Job`, :py:class:`~ptl.lib.pbs_testlib.Reservation`, :py:class:`~ptl.lib.pbs_testlib.Hook`,\n  :py:class:`~ptl.lib.pbs_testlib.Resource`\n- Utility class to convert batch status and attributes to Python lists, strings, and dictionaries\n- High-level PBS operations to operate on PBS entities including nodes, queues, jobs, reservations, resources, and the server\n\nUtilities\n---------\n\n- Logging to parse and report metrics from :py:class:`Server <ptl.utils.pbs_logutils.PBSServerLog>`, :py:class:`Scheduler <ptl.utils.pbs_logutils.PBSSchedulerLog>`,\n  :py:class:`MoM <ptl.utils.pbs_logutils.PBSMoMLog>` and :py:class:`Accounting <ptl.utils.pbs_logutils.PBSAccountingLog>` logs.\n- Distributed tools to transparently run commands locally or remotely, including file copying.\n\nPlugins\n-------\n\n- Provides utilities to load, run, and get info on test cases in the form of `Nose framework`_ plugins\n\nDocumentation\n-------------\n\n- API documentation describing the capabilities of the framework and utilities\n- For the command-line tools, use the -h option for help\n\nDirectory structure\n-------------------\n\n::\n\n    fw\n    |- bin -- Command line tools\n    |- doc -- Documentation\n    `- ptl -- PTL package\n       |- lib -- Library\n       `- utils -- Utilities\n          `- plugins -- PTL plugins for the Nose framework\n\n.. _Nose framework: http://readthedocs.org/docs/nose/\n"
  },
  {
    "path": "test/fw/doc/make.bat",
    "content": "@ECHO OFF\r\n\r\nREM Command file for Sphinx documentation\r\n\r\nif \"%SPHINXBUILD%\" == \"\" (\r\n\tset SPHINXBUILD=sphinx-build\r\n)\r\nset BUILDDIR=target\r\nset ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% .\r\nset I18NSPHINXOPTS=%SPHINXOPTS% .\r\nif NOT \"%PAPER%\" == \"\" (\r\n\tset ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%\r\n\tset I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS%\r\n)\r\n\r\nif \"%1\" == \"\" goto help\r\n\r\nif \"%1\" == \"help\" (\r\n\t:help\r\n\techo.Please use `make ^<target^>` where ^<target^> is one of\r\n\techo.  html       to make standalone HTML files\r\n\techo.  dirhtml    to make HTML files named index.html in directories\r\n\techo.  singlehtml to make a single large HTML file\r\n\techo.  pickle     to make pickle files\r\n\techo.  json       to make JSON files\r\n\techo.  htmlhelp   to make HTML files and a HTML help project\r\n\techo.  qthelp     to make HTML files and a qthelp project\r\n\techo.  devhelp    to make HTML files and a Devhelp project\r\n\techo.  epub       to make an epub\r\n\techo.  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter\r\n\techo.  latexpdf   to make LaTeX files and run them through pdflatex\r\n\techo.  latexpdfja to make LaTeX files and run them through platex/dvipdfmx\r\n\techo.  text       to make text files\r\n\techo.  man        to make manual pages\r\n\techo.  texinfo    to make Texinfo files\r\n\techo.  gettext    to make PO message catalogs\r\n\techo.  changes    to make an overview over all changed/added/deprecated items\r\n\techo.  xml        to make Docutils-native XML files\r\n\techo.  pseudoxml  to make pseudoxml-XML files for display purposes\r\n\techo.  linkcheck  to check all external links for integrity\r\n\techo.  
doctest    to run all doctests embedded in the documentation if enabled\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"clean\" (\r\n\tfor /d %%i in (%BUILDDIR%\\*) do rmdir /q /s %%i\r\n\tdel /q /s %BUILDDIR%\\*\r\n\tgoto end\r\n)\r\n\r\n\r\n%SPHINXBUILD% 2> nul\r\nif errorlevel 9009 (\r\n\techo.\r\n\techo.The 'sphinx-build' command was not found. Make sure you have Sphinx\r\n\techo.installed, then set the SPHINXBUILD environment variable to point\r\n\techo.to the full path of the 'sphinx-build' executable. Alternatively you\r\n\techo.may add the Sphinx directory to PATH.\r\n\techo.\r\n\techo.If you don't have Sphinx installed, grab it from\r\n\techo.http://sphinx-doc.org/\r\n\texit /b 1\r\n)\r\n\r\nif \"%1\" == \"html\" (\r\n\t%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished. The HTML pages are in %BUILDDIR%/html.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"dirhtml\" (\r\n\t%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"singlehtml\" (\r\n\t%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished. 
The HTML pages are in %BUILDDIR%/singlehtml.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"pickle\" (\r\n\t%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished; now you can process the pickle files.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"json\" (\r\n\t%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished; now you can process the JSON files.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"htmlhelp\" (\r\n\t%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished; now you can run HTML Help Workshop with the ^\r\n.hhp project file in %BUILDDIR%/htmlhelp.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"qthelp\" (\r\n\t%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished; now you can run \"qcollectiongenerator\" with the ^\r\n.qhcp project file in %BUILDDIR%/qthelp, like this:\r\n\techo.^> qcollectiongenerator %BUILDDIR%\\qthelp\\PbsTestLab.qhcp\r\n\techo.To view the help file:\r\n\techo.^> assistant -collectionFile %BUILDDIR%\\qthelp\\PbsTestLab.ghc\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"devhelp\" (\r\n\t%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"epub\" (\r\n\t%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished. 
The epub file is in %BUILDDIR%/epub.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"latex\" (\r\n\t%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished; the LaTeX files are in %BUILDDIR%/latex.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"latexpdf\" (\r\n\t%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex\r\n\tcd %BUILDDIR%/latex\r\n\tmake all-pdf\r\n\tcd %BUILDDIR%/..\r\n\techo.\r\n\techo.Build finished; the PDF files are in %BUILDDIR%/latex.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"latexpdfja\" (\r\n\t%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex\r\n\tcd %BUILDDIR%/latex\r\n\tmake all-pdf-ja\r\n\tcd %BUILDDIR%/..\r\n\techo.\r\n\techo.Build finished; the PDF files are in %BUILDDIR%/latex.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"text\" (\r\n\t%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished. The text files are in %BUILDDIR%/text.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"man\" (\r\n\t%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished. The manual pages are in %BUILDDIR%/man.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"texinfo\" (\r\n\t%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"gettext\" (\r\n\t%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished. 
The message catalogs are in %BUILDDIR%/locale.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"changes\" (\r\n\t%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.The overview file is in %BUILDDIR%/changes.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"linkcheck\" (\r\n\t%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Link check complete; look for any errors in the above output ^\r\nor in %BUILDDIR%/linkcheck/output.txt.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"doctest\" (\r\n\t%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Testing of doctests in the sources finished, look at the ^\r\nresults in %BUILDDIR%/doctest/output.txt.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"xml\" (\r\n\t%SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished. The XML files are in %BUILDDIR%/xml.\r\n\tgoto end\r\n)\r\n\r\nif \"%1\" == \"pseudoxml\" (\r\n\t%SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml\r\n\tif errorlevel 1 exit /b 1\r\n\techo.\r\n\techo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml.\r\n\tgoto end\r\n)\r\n\r\n:end\r\n"
  },
  {
    "path": "test/fw/doc/ptl.rst",
    "content": ".. _full-api:\n\n=================\nAPI documentation\n=================\n\nLibrary\n=======\n\npbs_api_to_cli\n--------------\n\n.. automodule:: ptl.lib.pbs_api_to_cli\n\npbs_ifl_mock\n------------\n\n.. automodule:: ptl.lib.pbs_ifl_mock\n\npbs_testlib\n-----------\n\n.. automodule:: ptl.lib.pbs_testlib\n\nUtilities\n=========\n\npbs_cliutils\n------------\n\n.. automodule:: ptl.utils.pbs_cliutils\n\npbs_covutils\n------------\n\n.. automodule:: ptl.utils.pbs_covutils\n\npbs_dshutils\n------------\n\n.. automodule:: ptl.utils.pbs_dshutils\n\npbs_logutils\n------------\n\n.. automodule:: ptl.utils.pbs_logutils\n\npbs_procutils\n-------------\n\n.. automodule:: ptl.utils.pbs_procutils\n\npbs_testsuite\n-------------\n\n.. automodule:: ptl.utils.pbs_testsuite\n\nPlugins\n-------\n\nptl_test_data\n~~~~~~~~~~~~~\n\n.. automodule:: ptl.utils.plugins.ptl_test_data\n\nptl_test_db\n~~~~~~~~~~~\n\n.. automodule:: ptl.utils.plugins.ptl_test_db\n\nptl_test_info\n~~~~~~~~~~~~~\n\n.. automodule:: ptl.utils.plugins.ptl_test_info\n\nptl_test_loader\n~~~~~~~~~~~~~~~\n\n.. automodule:: ptl.utils.plugins.ptl_test_loader\n\nptl_test_runner\n~~~~~~~~~~~~~~~\n\n.. automodule:: ptl.utils.plugins.ptl_test_runner\n\nptl_test_tags\n~~~~~~~~~~~~~\n\n.. automodule:: ptl.utils.plugins.ptl_test_tags\n"
  },
  {
    "path": "test/fw/doc/tutorial.rst",
"content": "Brief tutorial about common library API\n=======================================\n\nMost of the examples below show specific calls to the library functions;\nthere are typically many more variations possible. Check the :ref:`full API documentation <full-api>`\nfor details.\n\nImporting the library\n---------------------\nBecause the library may leverage SWIG wrappers, it is preferred to import everything so that all of the pbs_ifl module's IFL symbols are available, as shown below:\n\n::\n\n  from ptl.lib.pbs_testlib import *\n\nInstantiating a Server\n----------------------\nInstantiate a Server object and populate its attribute values after stat'ing the PBS server.\n\n::\n\n  server = Server('remotehost')\n  OR\n  server = Server() # no hostname defaults to the FQDN of the current host\n\nAdding a user as manager\n------------------------\n\n::\n\n  server.manager(MGR_CMD_SET, SERVER, {ATTR_managers: (INCR, 'user@host')})\n\nReverting server's configuration to defaults\n--------------------------------------------\n\n::\n\n  server.revert_to_defaults()\n\n\nInstantiating a Job\n-------------------\n\n::\n\n  job = Job()\n\nSetting job attributes\n----------------------\n\n::\n\n  job.set_attributes({'Resource_List.select':'2:ncpus=1','Resource_List.place':'scatter'})\n\nSubmitting a job\n----------------\n\n::\n\n  server.submit(job)\n\nStat'ing a server\n-----------------\n\n::\n\n  server.status()\n\nStat'ing all jobs' job_state attribute\n--------------------------------------\n\n::\n\n  server.status(JOB, 'job_state')\n\nCounting all vnodes by state\n----------------------------\n\n::\n\n  server.counter(NODE, 'state')\n\nExpecting a job to be running\n-----------------------------\n\n::\n\n  server.expect(JOB, {'job_state':'R','substate':42}, attrop=PTL_AND, id=jid)\n\nwhere `jid` is the result of server.submit(job).\n\nEach attribute can be given an operator, one of LT, LE, EQ, GE, GT, NE.\nFor example, to expect a job to be in state R and substate != 41::\n\n  server.expect(JOB, {'job_state':(EQ,'R'), 'substate':(NE,41)}, id=jid)\n\nInstantiating a Scheduler object\n--------------------------------\n\n::\n\n  sched = Scheduler('hostname')\n  OR\n  sched = Scheduler() # no hostname defaults to the FQDN of the current host\n\nSetting scheduler configuration\n-------------------------------\n\n::\n\n  sched.set_sched_config({'backfill':'true  ALL'})\n\nReverting scheduler's configuration to defaults\n-----------------------------------------------\n\n::\n\n  sched.revert_to_defaults()\n\n\nInstantiating a MoM\n-------------------\n\n::\n\n  mom = MoM('hostname')\n\nCreating a vnode definition file\n--------------------------------\n\n::\n\n  attrs = {'resources_available.ncpus':8,'resources_available.mem':'8gb'}\n  vdef = mom.create_vnode_def('vn', attrs, 10)\n\nInserting a vnode definition into a MoM\n---------------------------------------\n\n::\n\n  mom.insert_vnode_def(vdef)\n\nReverting MoM's configuration to defaults\n-----------------------------------------\n\n::\n\n  mom.revert_to_defaults()\n"
  },
  {
    "path": "test/fw/ptl/__init__.py.in",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n__version__ = '@PBS_VERSION@'\n"
  },
  {
    "path": "test/fw/ptl/lib/__init__.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n"
  },
  {
    "path": "test/fw/ptl/lib/pbs_api_to_cli.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom ptl.lib.pbs_ifl_mock import *\n\napi_to_cli = {\n    ATTR_a: 'a',\n    ATTR_c: 'c',\n    ATTR_e: 'e',\n    ATTR_g: 'W group_list=',\n    ATTR_h: 'h',\n    ATTR_j: 'j',\n    ATTR_J: 'J',\n    ATTR_k: 'k',\n    ATTR_l: 'l',\n    ATTR_m: 'm',\n    ATTR_o: 'o',\n    ATTR_p: 'p',\n    ATTR_q: 'q',\n    ATTR_R: 'R',\n    ATTR_r: 'r',\n    ATTR_u: 'u',\n    ATTR_v: 'v',\n    ATTR_A: 'A',\n    ATTR_M: 'M',\n    ATTR_N: 'N',\n    ATTR_S: 'S',\n    ATTR_W: 'W',\n    ATTR_array_indices_submitted: 'J',\n    ATTR_depend: 'W depend=',\n    ATTR_inter: 'I',\n    ATTR_sandbox: 'W sandbox=',\n    ATTR_stagein: 'W stagein=',\n    ATTR_stageout: 'W stageout=',\n    ATTR_resvTag: 'reserve_Tag',\n    ATTR_resv_start: 'R',\n    ATTR_resv_end: 'E',\n    ATTR_resv_duration: 'D',\n    ATTR_resv_state: 'reserve_state',\n    ATTR_resv_substate: 'reserve_substate',\n    ATTR_del_idle_time: 'W delete_idle_time=',\n    ATTR_auth_u: 'U',\n    ATTR_auth_g: 'G',\n    ATTR_auth_h: 'Authorized_Hosts',\n    ATTR_cred: 'cred',\n    ATTR_nodemux: 'no_stdio_sockets',\n    ATTR_umask: 'W umask=',\n    ATTR_block: 'W block=',\n    ATTR_convert: 'W qmove=',\n    ATTR_DefaultChunk: 'default_chunk',\n    ATTR_X11_cookie: 'forward_x11_cookie',\n    ATTR_X11_port: 'forward_x11_port',\n    ATTR_resv_standing: '',\n    ATTR_resv_count: 'reserve_count',\n    ATTR_resv_idx: 'reserve_index',\n    
ATTR_resv_rrule: 'r',\n    ATTR_resv_execvnodes: 'reserve_execvnodes',\n    ATTR_resv_timezone: '',\n    ATTR_ctime: 'c',\n    ATTR_estimated: 't',\n    ATTR_exechost: 'exec_host',\n    ATTR_exechost2: 'exec_host2',\n    ATTR_execvnode: 'exec_vnode',\n    ATTR_resv_nodes: 'resv_nodes',\n    ATTR_mtime: 'm',\n    ATTR_qtime: 'q',\n    ATTR_session: 'session_id',\n    ATTR_jobdir: 'jobdir',\n    ATTR_euser: 'euser',\n    ATTR_egroup: 'egroup',\n    ATTR_project: 'P',\n    ATTR_hashname: 'hashname',\n    ATTR_hopcount: 'hop_count',\n    ATTR_security: 'security',\n    ATTR_sched_hint: 'sched_hint',\n    ATTR_SchedSelect: 'schedselect',\n    ATTR_substate: 'substate',\n    ATTR_name: 'N',\n    ATTR_owner: 'Job_Owner',\n    ATTR_used: 'resources_used',\n    ATTR_state: 's',\n    ATTR_queue: 'q',\n    ATTR_server: 'server',\n    ATTR_maxrun: 'max_running',\n    ATTR_max_run: 'max_run',\n    ATTR_max_run_res: 'max_run_res',\n    ATTR_max_run_soft: 'max_run_soft',\n    ATTR_max_run_res_soft: 'max_run_res_soft',\n    ATTR_total: 'total_jobs',\n    ATTR_comment: 'W comment=',\n    ATTR_cookie: 'cookie',\n    ATTR_qrank: 'queue_rank',\n    ATTR_altid: 'alt_id',\n    ATTR_altid2: 'alt_id2',\n    ATTR_acct_id: 'accounting_id',\n    ATTR_array: 'J',\n    ATTR_array_id: 'array_id',\n    ATTR_array_index: 'array_index',\n    ATTR_array_state_count: 'array_state_count',\n    ATTR_array_indices_remaining: 'array_indices_remaining',\n    ATTR_etime: 'e',\n    ATTR_gridname: 'gridname',\n    ATTR_refresh: 'last_context_refresh',\n    ATTR_ReqCredEnable: 'require_cred_enable',\n    ATTR_ReqCred: 'require_cred',\n    ATTR_runcount: 'W run_count=',\n    ATTR_stime: 's',\n    ATTR_executable: 'executable',\n    ATTR_Arglist: 'argument_list',\n    ATTR_version: 'pbs_version',\n    ATTR_eligible_time: 'g',\n    ATTR_accrue_type: 'accrue_type',\n    ATTR_sample_starttime: 'sample_starttime',\n    ATTR_job_kill_delay: 'job_kill_delay',\n    ATTR_history_timestamp: 'history_timestamp',\n    
ATTR_stageout_status: 'Stageout_status',\n    ATTR_exit_status: 'Exit_status',\n    ATTR_submit_arguments: 'Submit_arguments',\n    ATTR_resv_name: 'Reserve_Name',\n    ATTR_resv_owner: 'Reserve_Owner',\n    ATTR_resv_Tag: 'reservation_Tag',\n    ATTR_resv_ID: 'reserve_ID',\n    ATTR_resv_retry: 'reserve_retry',\n    ATTR_aclgren: 'acl_group_enable',\n    ATTR_aclgroup: 'acl_groups',\n    ATTR_aclhten: 'acl_host_enable',\n    ATTR_aclhost: 'acl_hosts',\n    ATTR_acluren: 'acl_user_enable',\n    ATTR_acluser: 'acl_users',\n    ATTR_altrouter: 'alt_router',\n    ATTR_chkptmin: 'checkpoint_min',\n    ATTR_enable: 'enabled',\n    ATTR_fromroute: 'from_route_only',\n    ATTR_HasNodes: 'hasnodes',\n    ATTR_killdelay: 'kill_delay',\n    ATTR_maxgrprun: 'max_group_run',\n    ATTR_maxgrprunsoft: 'max_group_run_soft',\n    ATTR_maxque: 'max_queuable',\n    ATTR_max_queued: 'max_queued',\n    ATTR_max_queued_res: 'max_queued_res',\n    ATTR_maxuserrun: 'max_user_run',\n    ATTR_maxuserrunsoft: 'max_user_run_soft',\n    ATTR_qtype: 'queue_type',\n    ATTR_rescassn: 'resources_assigned',\n    ATTR_rescdflt: 'resources_default',\n    ATTR_rescmax: 'resources_max',\n    ATTR_rescmin: 'resources_min',\n    ATTR_rndzretry: 'rendezvous_retry',\n    ATTR_routedest: 'route_destinations',\n    ATTR_routeheld: 'route_held_jobs',\n    ATTR_routewait: 'route_waiting_jobs',\n    ATTR_routeretry: 'route_retry_time',\n    ATTR_routelife: 'route_lifetime',\n    ATTR_rsvexpdt: 'reserved_expedite',\n    ATTR_rsvsync: 'reserved_sync',\n    ATTR_start: 'started',\n    ATTR_count: 'state_count',\n    ATTR_number: 'number_jobs',\n    ATTR_SvrHost: 'server_host',\n    ATTR_aclroot: 'acl_roots',\n    ATTR_managers: 'managers',\n    ATTR_dfltque: 'default_queue',\n    ATTR_defnode: 'default_node',\n    ATTR_locsvrs: 'location_servers',\n    ATTR_logevents: 'log_events',\n    ATTR_logfile: 'log_file',\n    ATTR_mailfrom: 'mail_from',\n    ATTR_nodepack: 'node_pack',\n    ATTR_nodefailrq: 
'node_fail_requeue',\n    ATTR_resendtermdelay: 'resend_term_delay',\n    ATTR_operators: 'operators',\n    ATTR_queryother: 'query_other_jobs',\n    ATTR_resccost: 'resources_cost',\n    ATTR_rescavail: 'resources_available',\n    ATTR_maxuserres: 'max_user_res',\n    ATTR_maxuserressoft: 'max_user_res_soft',\n    ATTR_maxgroupres: 'max_group_res',\n    ATTR_maxgroupressoft: 'max_group_res_soft',\n    ATTR_maxarraysize: 'max_array_size',\n    ATTR_PNames: 'pnames',\n    ATTR_schedit: 'scheduler_iteration',\n    ATTR_scheduling: 'scheduling',\n    ATTR_status: 'server_state',\n    ATTR_syscost: 'system_cost',\n    ATTR_FlatUID: 'flatuid',\n    ATTR_FLicenses: 'FLicenses',\n    ATTR_ResvEnable: 'resv_enable',\n    ATTR_aclResvgren: 'acl_resv_group_enable',\n    ATTR_aclResvgroup: 'acl_resv_groups',\n    ATTR_aclResvhten: 'acl_resv_host_enable',\n    ATTR_aclResvhost: 'acl_resv_hosts',\n    ATTR_aclResvuren: 'acl_resv_user_enable',\n    ATTR_aclResvuser: 'acl_resv_users',\n    ATTR_NodeGroupEnable: 'node_group_enable',\n    ATTR_NodeGroupKey: 'node_group_key',\n    ATTR_dfltqdelargs: 'default_qdel_arguments',\n    ATTR_dfltqsubargs: 'default_qsub_arguments',\n    ATTR_rpp_retry: 'rpp_retry',\n    ATTR_rpp_highwater: 'rpp_highwater',\n    ATTR_pbs_license_info: 'pbs_license_info',\n    ATTR_license_min: 'pbs_license_min',\n    ATTR_license_max: 'pbs_license_max',\n    ATTR_license_linger: 'pbs_license_linger_time',\n    ATTR_license_count: 'license_count',\n    ATTR_job_sort_formula: 'job_sort_formula',\n    ATTR_EligibleTimeEnable: 'eligible_time_enable',\n    ATTR_resv_retry_init: 'reserve_retry_init',\n    ATTR_resv_retry_time: 'reserve_retry_time',\n    ATTR_JobHistoryEnable: 'job_history_enable',\n    ATTR_JobHistoryDuration: 'job_history_duration',\n    ATTR_max_concurrent_prov: 'max_concurrent_provision',\n    ATTR_resv_post_processing: 'resv_post_processing_time',\n    ATTR_backfill_depth: 'backfill_depth',\n    ATTR_job_requeue_timeout: 
'job_requeue_timeout',\n    ATTR_SchedHost: 'sched_host',\n    ATTR_sched_cycle_len: 'sched_cycle_length',\n    ATTR_do_not_span_psets: 'do_not_span_psets',\n    ATTR_soft_time: 'Wsoft_limit_time',\n    ATTR_power_provisioning: 'power_provisioning',\n    ATTR_max_job_sequence_id: 'max_job_sequence_id',\n    ATTR_tolerate_node_failures: 'Wtolerate_node_failures=',\n    ATTR_NODE_Host: 'Host',\n    ATTR_NODE_Mom: 'Mom',\n    ATTR_NODE_Port: 'Port',\n    ATTR_NODE_state: 'state',\n    ATTR_NODE_ntype: 'ntype',\n    ATTR_NODE_jobs: 'jobs',\n    ATTR_NODE_resvs: 'resv',\n    ATTR_NODE_resv_enable: 'resv_enable',\n    ATTR_NODE_np: 'np',\n    ATTR_NODE_pcpus: 'pcpus',\n    ATTR_NODE_properties: 'properties',\n    ATTR_NODE_NoMultiNode: 'no_multinode_jobs',\n    ATTR_NODE_No_Tasks: 'no_tasks',\n    ATTR_NODE_Sharing: 'sharing',\n    ATTR_NODE_HPCBP_User_name: 'hpcbp_user_name',\n    ATTR_NODE_HPCBP_WS_address: 'hpcbp_webservice_address',\n    ATTR_NODE_HPCBP_Stage_protocol: 'hpcbp_stage_protocol',\n    ATTR_NODE_HPCBP_enable: 'hpcbp_enable',\n    ATTR_NODE_ProvisionEnable: 'provision_enable',\n    ATTR_NODE_current_aoe: 'current_aoe',\n    ATTR_NODE_in_multivnode_host: 'in_multivnode_host',\n    ATTR_NODE_License: 'license',\n    ATTR_NODE_LicenseInfo: 'license_info',\n    ATTR_NODE_TopologyInfo: 'topology_info',\n    ATTR_NODE_last_used_time: 'last_used_time',\n    ATTR_NODE_last_state_change_time: 'last_state_change_time',\n    ATTR_sched_server_dyn_res_alarm: 'server_dyn_res_alarm',\n    ATTR_RESC_TYPE: 'type',\n    ATTR_RESC_FLAG: 'flag',\n    SHUT_QUICK: 't quick',\n    SHUT_DELAY: 't delay',\n    SHUT_IMMEDIATE: 't immediate',\n    SHUT_WHO_SCHED: 's',\n    SHUT_WHO_MOM: 'm',\n    SHUT_WHO_SECDRY: 'f',\n    SHUT_WHO_IDLESECDRY: 'i',\n    SHUT_WHO_SECDONLY: 'F',\n}\n\n\ndef convert_api_to_cli(attrs):\n    ret = []\n    for a in attrs:\n        if '.' 
in a:\n            (attribute, resource) = a.split('.')\n            ret.append(api_to_cli[attribute] + resource)\n        else:\n            ret.append(api_to_cli[a])\n    return ret\n"
  },
  {
    "path": "test/fw/ptl/lib/pbs_ifl_mock.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nMGR_OBJ_NONE = -1\nMGR_OBJ_SERVER = 0\nMGR_OBJ_QUEUE = 1\nMGR_OBJ_JOB = 2\nMGR_OBJ_NODE = 3\nMGR_OBJ_RESV = 4\nMGR_OBJ_RSC = 5\nMGR_OBJ_SCHED = 6\nMGR_OBJ_HOST = 7\nMGR_OBJ_HOOK = 8\nMGR_OBJ_PBS_HOOK = 9\n\nMGR_CMD_NONE = 10\nMGR_CMD_CREATE = 11\nMGR_CMD_DELETE = 12\nMGR_CMD_SET = 13\nMGR_CMD_UNSET = 14\nMGR_CMD_LIST = 15\nMGR_CMD_PRINT = 16\nMGR_CMD_ACTIVE = 17\nMGR_CMD_IMPORT = 18\nMGR_CMD_EXPORT = 19\n\nMSG_OUT = 1\nMSG_ERR = 2\n\nATTR_a = 'Execution_Time'\nATTR_c = 'Checkpoint'\nATTR_e = 'Error_Path'\nATTR_g = 'group_list'\nATTR_h = 'Hold_Types'\nATTR_j = 'Join_Path'\nATTR_J = 'array_indices_submitted'\nATTR_k = 'Keep_Files'\nATTR_l = 'Resource_List'\nATTR_m = 'Mail_Points'\nATTR_o = 'Output_Path'\nATTR_p = 'Priority'\nATTR_q = 'destination'\nATTR_R = 'Remove_Files'\nATTR_r = 'Rerunable'\nATTR_u = 'User_List'\nATTR_v = 'Variable_List'\nATTR_A = 'Account_Name'\nATTR_M = 'Mail_Users'\nATTR_N = 'Job_Name'\nATTR_S = 'Shell_Path_List'\nATTR_W = 'Additional_Attributes'  # Not in pbs_ifl.h\nATTR_array_indices_submitted = ATTR_J\nATTR_max_run_subjobs = 'max_run_subjobs'\nATTR_depend = 'depend'\nATTR_inter = 'interactive'\nATTR_sandbox = 'sandbox'\nATTR_stagein = 'stagein'\nATTR_stageout = 'stageout'\nATTR_resvTag = 'reserve_Tag'\nATTR_resv_start = 'reserve_start'\nATTR_resv_end = 'reserve_end'\nATTR_resv_duration = 'reserve_duration'\nATTR_resv_alter_revert = 
'reserve_alter_revert'\nATTR_resv_state = 'reserve_state'\nATTR_resv_substate = 'reserve_substate'\nATTR_del_idle_time = 'delete_idle_time'\nATTR_auth_u = 'Authorized_Users'\nATTR_auth_g = 'Authorized_Groups'\nATTR_auth_h = 'Authorized_Hosts'\nATTR_cred = 'cred'\nATTR_nodemux = 'no_stdio_sockets'\nATTR_umask = 'umask'\nATTR_block = 'block'\nATTR_convert = 'qmove'\nATTR_DefaultChunk = 'default_chunk'\nATTR_X11_cookie = 'forward_x11_cookie'\nATTR_X11_port = 'forward_x11_port'\nATTR_resv_standing = 'reserve_standing'\nATTR_resv_count = 'reserve_count'\nATTR_resv_idx = 'reserve_index'\nATTR_resv_rrule = 'reserve_rrule'\nATTR_resv_execvnodes = 'reserve_execvnodes'\nATTR_resv_timezone = 'reserve_timezone'\nATTR_ctime = 'ctime'\nATTR_estimated = 'estimated'\nATTR_exechost = 'exec_host'\nATTR_exechost2 = 'exec_host2'\nATTR_execvnode = 'exec_vnode'\nATTR_resv_nodes = 'resv_nodes'\nATTR_mtime = 'mtime'\nATTR_qtime = 'qtime'\nATTR_session = 'session_id'\nATTR_jobdir = 'jobdir'\nATTR_job = 'reserve_job'\nATTR_euser = 'euser'\nATTR_egroup = 'egroup'\nATTR_project = 'project'\nATTR_hashname = 'hashname'\nATTR_hopcount = 'hop_count'\nATTR_security = 'security'\nATTR_sched_hint = 'sched_hint'\nATTR_SchedSelect = 'schedselect'\nATTR_substate = 'substate'\nATTR_name = 'Job_Name'\nATTR_owner = 'Job_Owner'\nATTR_used = 'resources_used'\nATTR_state = 'job_state'\nATTR_queue = 'queue'\nATTR_server = 'server'\nATTR_maxrun = 'max_running'\nATTR_max_run = 'max_run'\nATTR_max_run_res = 'max_run_res'\nATTR_max_run_soft = 'max_run_soft'\nATTR_max_run_res_soft = 'max_run_res_soft'\nATTR_total = 'total_jobs'\nATTR_comment = 'comment'\nATTR_cookie = 'cookie'\nATTR_qrank = 'queue_rank'\nATTR_altid = 'alt_id'\nATTR_altid2 = 'alt_id2'\nATTR_metaid = 'meta_id'\nATTR_acct_id = 'accounting_id'\nATTR_array = 'array'\nATTR_array_id = 'array_id'\nATTR_array_index = 'array_index'\nATTR_array_state_count = 'array_state_count'\nATTR_array_indices_remaining = 'array_indices_remaining'\nATTR_etime = 
'etime'\nATTR_gridname = 'gridname'\nATTR_refresh = 'last_context_refresh'\nATTR_ReqCredEnable = 'require_cred_enable'\nATTR_ReqCred = 'require_cred'\nATTR_runcount = 'run_count'\nATTR_stime = 'stime'\nATTR_executable = 'executable'\nATTR_Arglist = 'argument_list'\nATTR_version = 'pbs_version'\nATTR_eligible_time = 'eligible_time'\nATTR_accrue_type = 'accrue_type'\nATTR_sample_starttime = 'sample_starttime'\nATTR_job_kill_delay = 'job_kill_delay'\nATTR_history_timestamp = 'history_timestamp'\nATTR_stageout_status = 'Stageout_status'\nATTR_exit_status = 'Exit_status'\nATTR_submit_arguments = 'Submit_arguments'\nATTR_resv_name = 'Reserve_Name'\nATTR_resv_owner = 'Reserve_Owner'\nATTR_resv_Tag = 'reservation_Tag'\nATTR_resv_ID = 'reserve_ID'\nATTR_resv_retry = 'reserve_retry'\nATTR_aclgren = 'acl_group_enable'\nATTR_aclgroup = 'acl_groups'\nATTR_aclhten = 'acl_host_enable'\nATTR_aclhost = 'acl_hosts'\nATTR_acluren = 'acl_user_enable'\nATTR_acluser = 'acl_users'\nATTR_altrouter = 'alt_router'\nATTR_chkptmin = 'checkpoint_min'\nATTR_enable = 'enabled'\nATTR_fromroute = 'from_route_only'\nATTR_HasNodes = 'hasnodes'\nATTR_killdelay = 'kill_delay'\nATTR_maxgrprun = 'max_group_run'\nATTR_maxgrprunsoft = 'max_group_run_soft'\nATTR_maxque = 'max_queuable'\nATTR_max_queued = 'max_queued'\nATTR_max_queued_res = 'max_queued_res'\nATTR_maxuserrun = 'max_user_run'\nATTR_maxuserrunsoft = 'max_user_run_soft'\nATTR_qtype = 'queue_type'\nATTR_rescassn = 'resources_assigned'\nATTR_rescdflt = 'resources_default'\nATTR_rescmax = 'resources_max'\nATTR_rescmin = 'resources_min'\nATTR_rndzretry = 'rendezvous_retry'\nATTR_routedest = 'route_destinations'\nATTR_routeheld = 'route_held_jobs'\nATTR_routewait = 'route_waiting_jobs'\nATTR_routeretry = 'route_retry_time'\nATTR_routelife = 'route_lifetime'\nATTR_rsvexpdt = 'reserved_expedite'\nATTR_rsvsync = 'reserved_sync'\nATTR_start = 'started'\nATTR_count = 'state_count'\nATTR_number = 'number_jobs'\nATTR_SvrHost = 'server_host'\nATTR_aclroot = 
'acl_roots'\nATTR_managers = 'managers'\nATTR_dfltque = 'default_queue'\nATTR_defnode = 'default_node'\nATTR_locsvrs = 'location_servers'\nATTR_logevents = 'log_events'\nATTR_logfile = 'log_file'\nATTR_mailfrom = 'mail_from'\nATTR_nodepack = 'node_pack'\nATTR_nodefailrq = 'node_fail_requeue'\nATTR_resendtermdelay = 'resend_term_delay'\nATTR_operators = 'operators'\nATTR_queryother = 'query_other_jobs'\nATTR_resccost = 'resources_cost'\nATTR_rescavail = 'resources_available'\nATTR_maxuserres = 'max_user_res'\nATTR_maxuserressoft = 'max_user_res_soft'\nATTR_maxgroupres = 'max_group_res'\nATTR_maxgroupressoft = 'max_group_res_soft'\nATTR_maxarraysize = 'max_array_size'\nATTR_PNames = 'pnames'\nATTR_schedit = 'scheduler_iteration'\nATTR_scheduling = 'scheduling'\nATTR_status = 'server_state'\nATTR_syscost = 'system_cost'\nATTR_FlatUID = 'flatuid'\nATTR_FLicenses = 'FLicenses'\nATTR_ResvEnable = 'resv_enable'\nATTR_aclResvgren = 'acl_resv_group_enable'\nATTR_aclResvgroup = 'acl_resv_groups'\nATTR_aclResvhten = 'acl_resv_host_enable'\nATTR_aclResvhost = 'acl_resv_hosts'\nATTR_aclResvuren = 'acl_resv_user_enable'\nATTR_aclResvuser = 'acl_resv_users'\nATTR_NodeGroupEnable = 'node_group_enable'\nATTR_NodeGroupKey = 'node_group_key'\nATTR_dfltqdelargs = 'default_qdel_arguments'\nATTR_dfltqsubargs = 'default_qsub_arguments'\nATTR_rpp_retry = 'rpp_retry'\nATTR_rpp_highwater = 'rpp_highwater'\nATTR_rpp_max_pkt_check = 'rpp_max_pkt_check'\nATTR_pbs_license_info = 'pbs_license_info'\nATTR_license_min = 'pbs_license_min'\nATTR_license_max = 'pbs_license_max'\nATTR_license_linger = 'pbs_license_linger_time'\nATTR_license_count = 'license_count'\nATTR_job_sort_formula = 'job_sort_formula'\nATTR_EligibleTimeEnable = 'eligible_time_enable'\nATTR_resv_retry_init = 'reserve_retry_init'\nATTR_resv_retry_time = 'reserve_retry_time'\nATTR_JobHistoryEnable = 'job_history_enable'\nATTR_JobHistoryDuration = 'job_history_duration'\nATTR_max_concurrent_prov = 
'max_concurrent_provision'\nATTR_resv_post_processing = 'resv_post_processing_time'\nATTR_backfill_depth = 'backfill_depth'\nATTR_job_requeue_timeout = 'job_requeue_timeout'\nATTR_SchedHost = 'sched_host'\nATTR_sched_cycle_len = 'sched_cycle_length'\nATTR_do_not_span_psets = 'do_not_span_psets'\nATTR_soft_time = 'soft_limit_time'\nATTR_power_provisioning = 'power_provisioning'\nATTR_max_job_sequence_id = 'max_job_sequence_id'\nATTR_rel_list = 'resource_released_list'\nATTR_released = 'resources_released'\nATTR_restrict_res_to_release_on_suspend = 'restrict_res_to_release_on_suspend'\nATTR_sched_preempt_enforce_resumption = 'sched_preempt_enforce_resumption'\nATTR_tolerate_node_failures = 'tolerate_node_failures'\nATTR_HOOK_type = 'type'\nATTR_HOOK_enable = 'enable'\nATTR_HOOK_event = 'event'\nATTR_HOOK_alarm = 'alarm'\nATTR_HOOK_order = 'order'\nATTR_HOOK_debug = 'debug'\nATTR_HOOK_fail_action = 'fail_action'\nATTR_HOOK_user = 'user'\nATTR_NODE_Host = 'Host'\nATTR_NODE_Mom = 'Mom'\nATTR_NODE_Port = 'Port'\nATTR_NODE_state = 'state'\nATTR_NODE_ntype = 'ntype'\nATTR_NODE_jobs = 'jobs'\nATTR_NODE_resvs = 'resv'\nATTR_NODE_resv_enable = 'resv_enable'\nATTR_NODE_np = 'np'\nATTR_NODE_pcpus = 'pcpus'\nATTR_NODE_properties = 'properties'\nATTR_NODE_NoMultiNode = 'no_multinode_jobs'\nATTR_NODE_No_Tasks = 'no_tasks'\nATTR_NODE_Sharing = 'sharing'\nATTR_NODE_HPCBP_User_name = 'hpcbp_user_name'\nATTR_NODE_HPCBP_WS_address = 'hpcbp_webservice_address'\nATTR_NODE_HPCBP_Stage_protocol = 'hpcbp_stage_protocol'\nATTR_NODE_HPCBP_enable = 'hpcbp_enable'\nATTR_NODE_ProvisionEnable = 'provision_enable'\nATTR_NODE_current_aoe = 'current_aoe'\nATTR_NODE_in_multivnode_host = 'in_multivnode_host'\nATTR_NODE_License = 'license'\nATTR_NODE_LicenseInfo = 'license_info'\nATTR_NODE_TopologyInfo = 'topology_info'\nATTR_NODE_last_used_time = 'last_used_time'\nATTR_NODE_last_state_change_time = 'last_state_change_time'\nATTR_sched_server_dyn_res_alarm = 'server_dyn_res_alarm'\nATTR_RESC_TYPE = 
'type'\nATTR_RESC_FLAG = 'flag'\n\nSHUT_IMMEDIATE = 0x0\nSHUT_DELAY = 0x01\nSHUT_QUICK = 0x02\nSHUT_WHO_SCHED = 0x10\nSHUT_WHO_MOM = 0x20\nSHUT_WHO_SECDRY = 0x40\nSHUT_WHO_IDLESECDRY = 0x80\nSHUT_WHO_SECDONLY = 0x100\n\nUSER_HOLD = 'u'\nOTHER_HOLD = 'o'\nSYSTEM_HOLD = 's'\nBAD_PASSWORD_HOLD = 'p'\n\n\nclass attropl:\n\n    def __init__(self):\n        self.name = None\n        self.value = None\n        self.attribute = None\n        self.next = None\n        self.resource = None\n        self.op = None\n\n\nclass attrl:\n\n    def __init__(self):\n        self.name = None\n        self.value = None\n        self.attribute = None\n        self.next = None\n        self.resource = None\n        self.op = None\n\n\nclass batch_status:\n\n    def __init__(self):\n        self.next = None\n        self.name = None\n        self.attribs = None\n        self.text = None\n\n\nclass ecl_attrerr:\n\n    def __init__(self):\n        self.ecl_attribute = None\n        self.ecl_errcode = None\n        self.ecl_errmsg = None\n\n\nclass ecl_attribute_errors:\n\n    def __init__(self):\n        self.ecl_numerrors = None\n        self.ecl_attrerr = None\n\n\ndef pbs_asyrunjob(c, jobid, attrib, extend):\n    pass\n\n\ndef pbs_alterjob(c, jobid, attrib, extend):\n    pass\n\n\ndef pbs_connect(c):\n    pass\n\n\ndef pbs_connect_extend(c, extend):\n    pass\n\n\ndef pbs_default():\n    pass\n\n\ndef pbs_deljob(c, jobid, extend):\n    pass\n\n\ndef pbs_disconnect(c):\n    pass\n\n\ndef pbs_geterrmsg(c):\n    pass\n\n\ndef pbs_holdjob(c, jobid, hold, extend):\n    pass\n\n\ndef pbs_locjob(c, jobid, extend):\n    pass\n\n\ndef pbs_manager(c, cmd, type, id, attropl, extend):\n    pass\n\n\ndef pbs_movejob(c, jobid, destin, extend):\n    pass\n\n\ndef pbs_msgjob(c, jobid, file, msg, extend):\n    pass\n\n\ndef pbs_orderjob(c, jobid1, jobid2, extend):\n    pass\n\n\ndef pbs_rerunjob(c, jobid, extend):\n    pass\n\n\ndef pbs_rlsjob(c, jobid, hold, extend):\n    pass\n\n\ndef pbs_runjob(c, 
jobid, loc, extend):\n    pass\n\n\ndef pbs_selectjob(c, attropl, extend):\n    pass\n\n\ndef pbs_sigjob(c, jobid, sig, extend):\n    pass\n\n\ndef pbs_statfree(batch_status):\n    pass\n\n\ndef pbs_statjob(c, jobid, attrl, extend):\n    pass\n\n\ndef pbs_selstat(c, attropl, attrl, extend):\n    pass\n\n\ndef pbs_statque(c, q, attrl, extend):\n    pass\n\n\ndef pbs_statserver(c, attrl, extend):\n    pass\n\n\ndef pbs_statsched(c, attrl, extend):\n    pass\n\n\ndef pbs_stathost(c, id, attrl, extend):\n    pass\n\n\ndef pbs_statnode(c, id, attrl, extend):\n    pass\n\n\ndef pbs_statvnode(c, id, attrl, extend):\n    pass\n\n\ndef pbs_statresv(c, id, attrl, extend):\n    pass\n\n\ndef pbs_stathook(c, id, attrl, s1):\n    pass\n\n\ndef pbs_statrsc(c, id, attrl, extend):\n    pass\n\n\ndef pbs_get_attributes_in_error(c):\n    pass\n\n\ndef pbs_submit(c, attropl, script, destin, extend):\n    pass\n\n\ndef pbs_submit_resv(c, attropl, jobid):\n    pass\n\n\ndef pbs_delresv(c, id, extend):\n    pass\n\n\ndef pbs_terminate(c, manner, extend):\n    pass\n\n\ndef pbs_modify_resv(c, resvid, attrib, extend):\n    pass\n"
  },
  {
    "path": "test/fw/ptl/lib/pbs_testlib.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom ptl.lib.ptl_error import *\nfrom ptl.lib.ptl_expect_action import *\nfrom ptl.lib.ptl_batchutils import *\nfrom ptl.lib.ptl_types import *\nfrom ptl.lib.ptl_object import *\nfrom ptl.lib.ptl_service import *\nfrom ptl.lib.ptl_config import *\nfrom ptl.lib.ptl_constants import *\nfrom ptl.lib.ptl_server import *\nfrom ptl.lib.ptl_sched import *\nfrom ptl.lib.ptl_mom import *\nfrom ptl.lib.ptl_comm import *\nfrom ptl.lib.ptl_resourceresv import *\nfrom ptl.lib.ptl_fairshare import *\nfrom ptl.lib.ptl_entities import *\n"
  },
  {
    "path": "test/fw/ptl/lib/ptl_batchutils.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport collections\nimport copy\nimport datetime\nimport json\nimport logging\nimport os\nimport random\nimport re\nimport string\nimport sys\nimport time\nfrom collections import OrderedDict\nfrom distutils.version import LooseVersion\n\nfrom ptl.lib.pbs_api_to_cli import api_to_cli\nfrom ptl.utils.pbs_dshutils import DshUtils\nfrom ptl.lib.ptl_constants import *\n\nfrom ptl.lib.ptl_types import (PbsTypeSize, PbsTypeChunk,\n                               PbsTypeDuration, PbsAttribute)\n\n\nclass BatchUtils(object):\n\n    \"\"\"\n    Utility class to create/convert/display various PBS\n    data structures\n    \"\"\"\n\n    legal = r\"\\d\\w:\\+=\\[\\]~\"\n    chunks_tag = re.compile(r\"(?P<chunk>\\([\\d\\w:\\+=\\[\\]~]\\)[\\+]?)\")\n    chunk_tag = re.compile(r\"(?P<vnode>[\\w\\d\\[\\]]+):\" +\n                           r\"(?P<resources>[\\d\\w:\\+=\\[\\]~])+\\)\")\n\n    array_tag = re.compile(r\"(?P<jobid>[\\d]+)\\[(?P<subjobid>[0-9]*)\\]*\" +\n                           r\"[.]*[(?P<server>.*)]*\")\n    subjob_tag = re.compile(r\"(?P<jobid>[\\d]+)\\[(?P<subjobid>[0-9]+)\\]*\" +\n                            r\"[.]*[(?P<server>.*)]*\")\n\n    pbsobjname_re = re.compile(r\"^(?P<tag>[\\w\\d][\\d\\w\\s]*:?[\\s]+)\" +\n                               r\"*(?P<name>[\\^\\\\\\w@\\.\\d\\[\\]-]+)$\")\n    pbsobjattrval_re = re.compile(r\"\"\"\n                   
         [\\s]*(?P<attribute>[\\w\\d\\.-]+)\n                            [\\s]*=[\\s]*\n                            (?P<value>.*)\n                            [\\s]*\"\"\",\n                                  re.VERBOSE)\n    dt_re = r'(?P<dt_from>\\d\\d/\\d\\d/\\d\\d\\d\\d \\d\\d:\\d\\d)' + \\\n            r'[\\s]+' + \\\n            r'(?P<dt_to>\\d\\d/\\d\\d/\\d\\d\\d\\d \\d\\d:\\d\\d)'\n    dt_tag = re.compile(dt_re)\n    hms_tag = re.compile(r'(?P<hr>\\d\\d):(?P<mn>\\d\\d):(?P<sc>\\d\\d)')\n    lim_tag = re.compile(r\"(?P<limtype>[a-z_]+)[\\.]*(?P<resource>[\\w\\d-]*)\"\n                         r\"=[\\s]*\\[(?P<entity_type>[ugpo]):\"\n                         r\"(?P<entity_name>[\\w\\d-]+)\"\n                         r\"=(?P<entity_value>[\\d\\w]+)\\][\\s]*\")\n\n    def __init__(self):\n        self.logger = logging.getLogger(__name__)\n        self.du = DshUtils()\n        self.platform = self.du.get_platform()\n\n    def list_to_attrl(self, l):\n        \"\"\"\n        Convert a list to a PBS attribute list\n\n        :param l: List to be converted\n        :type l: List\n        :returns: PBS attribute list\n        \"\"\"\n        return self.list_to_attropl(l, None)\n\n    def list_to_attropl(self, l, op=SET):\n        \"\"\"\n        Convert a list to a PBS attribute operation list\n\n        :param l: List to be converted\n        :type l: List\n        :returns: PBS attribute operation list\n        \"\"\"\n        head = None\n        prev = None\n\n        for i in l:\n            a = self.str_to_attropl(i, op)\n            if prev is None:\n                head = a\n            else:\n                prev.next = a\n            prev = a\n            if op is not None:\n                a.op = op\n        return head\n\n    def str_to_attrl(self, s):\n        \"\"\"\n        Convert a string to a PBS attribute list\n\n        :param s: String to be converted\n        :type s: str\n        :returns: PBS attribute list\n        \"\"\"\n        return 
self.str_to_attropl(s, None)\n\n    def str_to_attropl(self, s, op=SET):\n        \"\"\"\n        Convert a string to a PBS attribute operation list\n\n        :param s: String to be converted\n        :type s: str\n        :returns: PBS attribute operation list\n        \"\"\"\n        if op is not None:\n            a = attropl()\n        else:\n            a = attrl()\n        if '.' in s:\n            (attribute, resource) = s.split('.')\n            a.name = attribute\n            a.resource = resource.strip()\n        else:\n            a.name = s\n        a.value = ''\n        a.next = None\n        if op:\n            a.op = op\n        return a\n\n    def dict_to_attrl(self, d={}):\n        \"\"\"\n        Convert a dictionary to a PBS attribute list\n\n        :param d: Dictionary to be converted\n        :type d: Dictionary\n        :returns: PBS attribute list\n        \"\"\"\n        return self.dict_to_attropl(d, None)\n\n    def dict_to_attropl(self, d={}, op=SET):\n        \"\"\"\n        Convert a dictionary to a PBS attribute operation list\n\n        :param d: Dictionary to be converted\n        :type d: Dictionary\n        :returns: PBS attribute operation list\n        \"\"\"\n        if len(d.keys()) == 0:\n            return None\n\n        prev = None\n        head = None\n\n        for k, v in d.items():\n            if isinstance(v, tuple):\n                op = v[0]\n                v = v[1]\n            if op is not None:\n                a = attropl()\n            else:\n                a = attrl()\n            if '.' 
in k:\n                (attribute, resource) = k.split('.')\n                a.name = attribute\n                a.resource = resource\n            else:\n                a.name = k\n            a.value = str(v)\n            if op is not None:\n                a.op = op\n            a.next = None\n\n            if prev is None:\n                head = a\n            else:\n                prev.next = a\n            prev = a\n        return head\n\n    def convert_to_attrl(self, attrib):\n        \"\"\"\n        Generic call to convert Python type to PBS attribute list\n\n        :param attrib: Attributes to be converted\n        :type attrib: List or tuple or dictionary or str\n        :returns: PBS attribute list\n        \"\"\"\n        return self.convert_to_attropl(attrib, None)\n\n    def convert_to_attropl(self, attrib, cmd=MGR_CMD_SET, op=None):\n        \"\"\"\n        Generic call to convert Python type to PBS attribute\n        operation list\n\n        :param attrib: Attributes to be converted\n        :type attrib: List or tuple or dictionary or str\n        :returns: PBS attribute operation list\n        \"\"\"\n        if op is None:\n            op = self.command_to_op(cmd)\n\n        if isinstance(attrib, (list, tuple)):\n            a = self.list_to_attropl(attrib, op)\n        elif isinstance(attrib, (dict, OrderedDict)):\n            a = self.dict_to_attropl(attrib, op)\n        elif isinstance(attrib, str):\n            a = self.str_to_attropl(attrib, op)\n        else:\n            a = None\n        return a\n\n    def command_to_op(self, cmd=None):\n        \"\"\"\n        Map command to a ``SET`` or ``UNSET`` Operation. An unrecognized\n        command will return SET. 
No command will return None.\n\n        :param cmd: Command to be mapped\n        :type cmd: str\n        :returns: ``SET`` or ``UNSET`` operation for the command\n        \"\"\"\n\n        if cmd is None:\n            return None\n        if cmd in (MGR_CMD_SET, MGR_CMD_EXPORT, MGR_CMD_IMPORT):\n            return SET\n        if cmd == MGR_CMD_UNSET:\n            return UNSET\n        return SET\n\n    def display_attrl(self, a=None, writer=sys.stdout):\n        \"\"\"\n        Display an attribute list using writer, defaults to sys.stdout\n\n        :param a: Attributes\n        :type a: List\n        :returns: Displays attribute list\n        \"\"\"\n        return self.display_attropl(a, writer)\n\n    def display_attropl(self, attropl=None, writer=sys.stdout):\n        \"\"\"\n        Display an attribute operation list with writer, defaults to\n        sys.stdout\n\n        :param attropl: Attribute operation list\n        :type attropl: List\n        :returns: Displays an attribute operation list\n        \"\"\"\n        attrs = attropl\n        while attrs is not None:\n            if attrs.resource:\n                writer.write('\\t' + attrs.name + '.' 
+ attrs.resource + '= ' +\n                             attrs.value + '\\n')\n            else:\n                writer.write('\\t' + attrs.name + '= ' + attrs.value + '\\n')\n            attrs = attrs.next\n\n    def display_dict(self, d, writer=sys.stdout):\n        \"\"\"\n        Display a dictionary using writer, defaults to sys.stdout\n\n        :param d: Dictionary\n        :type d: Dictionary\n        :returns: Displays a dictionary\n        \"\"\"\n        if not d:\n            return\n        for k, v in d.items():\n            writer.write(k + ': ' + v + '\\n')\n\n    def batch_status_to_dictlist(self, bs=None, attr_names=None, id=None):\n        \"\"\"\n        Convert a batch status to a list of dictionaries.\n        Version 0.1a6 added this conversion as a typemap(out) as\n        part of the swig wrapping itself, so there are fewer uses\n        for this function. Returns a list of dictionary\n        representations of batch status\n\n        :param bs: Batch status\n        :param attr_names: Attribute names\n        :returns: List of dictionaries\n        \"\"\"\n        attr_time = (\n            'ctime', 'mtime', 'qtime', 'start', 'end', 'reserve_start',\n            'reserve_end', 'estimated.start_time')\n        ret = []\n        while bs:\n            if id is not None and bs.name != id:\n                bs = bs.next\n                continue\n            d = {}\n            attrs = bs.attribs\n            while attrs is not None:\n                if attrs.resource:\n                    key = attrs.name + '.' 
+ attrs.resource\n                else:\n                    key = attrs.name\n                if attr_names is not None:\n                    if key not in attr_names:\n                        attrs = attrs.next\n                        continue\n                val = attrs.value\n                if attrs.name in attr_time:\n                    val = self.convert_time(val)\n                # for attributes that may occur multiple times (e.g., max_run)\n                # append the value in a comma-separated representation\n                if key in d:\n                    d[key] = d[key] + ',' + str(val)\n                else:\n                    d[key] = str(val)\n                attrs = attrs.next\n            if len(d.keys()) > 0:\n                ret.append(d)\n                d['id'] = bs.name\n            bs = bs.next\n        return ret\n\n    def display_batch_status(self, bs=None, attr_names=None,\n                             writer=sys.stdout):\n        \"\"\"\n        Display a batch status using writer, defaults to sys.stdout\n\n        :param bs: Batch status\n        :param attr_names: Attribute names\n        :type attr_names: list\n        :returns: Displays batch status\n        \"\"\"\n        if bs is None:\n            return\n\n        attr_list = self.batch_status_to_dictlist(bs, attr_names)\n        self.display_batch_status_as_dictlist(attr_list, writer)\n\n    def display_dictlist(self, dict_list=[], writer=sys.stdout, fmt=None):\n        \"\"\"\n        Display a list of dictionaries using writer, defaults to\n        sys.stdout\n\n        :param dict_list: The list to display\n        :type dict_list: List\n        :param writer: The stream on which to write\n        :param fmt: An optional formatting string\n        :type fmt: str or None\n        :returns: Displays list of dictionaries\n        \"\"\"\n        self.display_batch_status_as_dictlist(dict_list, writer, fmt)\n\n    def dictlist_to_file(self, dict_list=[], filename=None, 
mode='w'):\n        \"\"\"\n        Write a dictlist to file\n\n        :param dict_list: Dictlist\n        :type dict_list: List\n        :param filename: File to which the dictlist needs to be written\n        :type filename: str\n        :param mode: Mode of file\n        :type mode: str\n        :raises: Exception writing to file\n        \"\"\"\n        if filename is None:\n            self.logger.error('a filename is required')\n            return\n\n        d = os.path.dirname(filename)\n        if d != '' and not os.path.isdir(d):\n            os.makedirs(d)\n        try:\n            with open(filename, mode) as f:\n                self.display_dictlist(dict_list, f)\n        except Exception:\n            self.logger.error('error writing to file ' + filename)\n            raise\n\n    def batch_status_as_dictlist_to_file(self, dictlist=[], writer=sys.stdout):\n        \"\"\"\n        Write a dictlist to file\n\n        :param dictlist: Dictlist\n        :type dictlist: List\n        :raises: Exception writing to file\n        \"\"\"\n        return self.dictlist_to_file(dictlist, writer)\n\n    def file_to_dictlist(self, fpath=None, attribs=None, id=None):\n        \"\"\"\n        Convert a file to a batch dictlist format\n\n        :param fpath: File to be converted\n        :type fpath: str\n        :param attribs: Attributes\n        :returns: File converted to a batch dictlist format\n        \"\"\"\n        if fpath is None:\n            return []\n\n        try:\n            with open(fpath, 'r') as f:\n                lines = f.readlines()\n        except Exception as e:\n            self.logger.error('error reading file ' + fpath + ': ' +\n                              str(e))\n            return []\n\n        return self.convert_to_dictlist(lines, attribs, id=id)\n\n    def file_to_vnodedef(self, fpath=None):\n        \"\"\"\n        Convert a file output of pbsnodes -av to a vnode\n        definition format\n\n        :param fpath: File to be 
converted\n        :type fpath: str\n        :returns: Vnode definition format\n        \"\"\"\n        if fpath is None:\n            return None\n        try:\n            with open(fpath, 'r') as f:\n                lines = f.readlines()\n        except Exception:\n            self.logger.error('error converting nodes to vnode def')\n            return None\n\n        dl = self.convert_to_dictlist(lines)\n\n        return self.dictlist_to_vnodedef(dl)\n\n    def show(self, obj_list=[], name=None, fmt=None):\n        \"\"\"\n        Alias to display_dictlist with sys.stdout as writer\n\n        :param name: if specified only show the object of\n                     that name\n        :type name: str\n        :param fmt: Optional formatting string, uses %n for\n                    object name, %a for attributes, for example\n                    a format of r'%nE{\\n}E{\\t}%aE{\\n}' will display\n                    objects with their name starting on the first\n                    column, a new line, and attributes indented by\n                    a tab followed by a new line at the end.\n        :type fmt: str\n        \"\"\"\n        if name:\n            i = 0\n            for obj in obj_list:\n                if obj['id'] == name:\n                    obj_list = [obj_list[i]]\n                    break\n                i += 1\n        self.display_dictlist(obj_list, fmt=fmt)\n\n    def get_objtype(self, d={}):\n        \"\"\"\n        Get the type of a given object\n\n        :param d: Dictionary\n        :type d: Dictionary\n        :returns: Type of the object\n        \"\"\"\n        if 'Job_Name' in d:\n            return JOB\n        elif 'queue_type' in d:\n            return QUEUE\n        elif 'Reserve_Name' in d:\n            return RESV\n        elif 'server_state' in d:\n            return SERVER\n        elif 'Mom' in d:\n            return NODE\n        elif 'event' in d:\n            return HOOK\n        elif 'type' in d:\n            return RSC\n      
  return None\n\n    def display_batch_status_as_dictlist(self, dict_list=[], writer=sys.stdout,\n                                         fmt=None):\n        \"\"\"\n        Display a batch status as a list of dictionaries\n        using writer, defaults to sys.stdout\n\n        :param dict_list: List\n        :type dict_list: List\n        :param fmt: Optional dictionary of formatting delimiters\n        :type fmt: dict or None\n        :returns: Displays batch status as a list of dictionaries\n        \"\"\"\n        if dict_list is None:\n            return\n\n        for d in dict_list:\n            self.display_batch_status_as_dict(d, writer, fmt)\n\n    def batch_status_as_dict_to_str(self, d={}, fmt=None):\n        \"\"\"\n        Return a string representation of a batch status dictionary\n\n        :param d: Dictionary\n        :type d: Dictionary\n        :param fmt: Optional dictionary of formatting delimiters,\n                    keyed '%1' through '%6'\n        :type fmt: dict or None\n        :returns: String representation of a batch status dictionary\n        \"\"\"\n        objtype = self.get_objtype(d)\n\n        if fmt is not None:\n            if '%1' in fmt:\n                _d1 = fmt['%1']\n            else:\n                _d1 = '\\n'\n            if '%2' in fmt:\n                _d2 = fmt['%2']\n            else:\n                _d2 = '    '\n            if '%3' in fmt:\n                _d3 = fmt['%3']\n            else:\n                _d3 = ' = '\n            if '%4' in fmt:\n                _d4 = fmt['%4']\n            else:\n                _d4 = '\\n'\n            if '%5' in fmt:\n                _d5 = fmt['%5']\n            else:\n                _d5 = '\\n'\n            if '%6' in fmt:\n                _d6 = fmt['%6']\n            else:\n                _d6 = ''\n        else:\n            _d1 = '\\n'\n            _d2 = '    '\n            _d3 = ' = '\n            _d4 = '\\n'\n            _d5 = '\\n'\n            _d6 = ''\n\n        if objtype == JOB:\n            _n = 'Job Id: ' + d['id'] + _d1\n        elif objtype 
== QUEUE:\n            _n = 'Queue: ' + d['id'] + _d1\n        elif objtype == RESV:\n            _n = 'Name: ' + d['id'] + _d1\n        elif objtype == SERVER:\n            _n = 'Server: ' + d['id'] + _d1\n        elif objtype == RSC:\n            _n = 'Resource: ' + d['id'] + _d1\n        elif 'id' in d:\n            _n = d['id'] + _d1\n            del d['id']\n        else:\n            _n = ''\n\n        _a = []\n        for k, v in sorted(d.items()):\n            if k == 'id':\n                continue\n            _a += [_d2 + k + _d3 + str(v)]\n\n        return _n + _d4.join(_a) + _d5 + _d6\n\n    def display_batch_status_as_dict(self, d={}, writer=sys.stdout, fmt=None):\n        \"\"\"\n        Display a dictionary representation of a batch status\n        using writer, defaults to sys.stdout\n\n        :param d: Dictionary\n        :type d: Dictionary\n        :param fmt: Optional dictionary of formatting delimiters\n        :type fmt: dict or None\n        :returns: Displays dictionary representation of a batch\n                  status\n        \"\"\"\n        writer.write(self.batch_status_as_dict_to_str(d, fmt))\n\n    def decode_dictlist(self, dict_list=None, json=True):\n        \"\"\"\n        decode a list of dictionaries\n\n        :param dict_list: List of dictionaries\n        :type dict_list: List\n        :param json: The target of the decode is meant for ``JSON``\n                     formatting\n        :type json: bool\n        :returns: Decoded list of dictionaries\n        \"\"\"\n        if dict_list is None:\n            return ''\n\n        _js = []\n        for d in dict_list:\n            _jdict = {}\n            for k, v in d.items():\n                if ',' in v:\n                    _jdict[k] = v.split(',')\n                else:\n                    _jdict[k] = PbsAttribute.decode_value(v)\n            _js.append(_jdict)\n        return _js\n\n    def convert_to_ascii(self, s):\n        \"\"\"\n        Convert char sequences within string like ^A, ^B, ... 
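The `'%1'` through `'%6'` delimiter scheme used by `batch_status_as_dict_to_str` is easiest to see in a standalone sketch. `dict_to_qstat_str` is a hypothetical name, and only the JOB branch is reproduced here:

```python
# Standalone sketch of the delimiter scheme: '%1'..'%6' override the
# object-name terminator, attribute indent, key/value separator,
# attribute joiner, record terminator, and trailing string.
def dict_to_qstat_str(d, fmt=None):
    fmt = fmt or {}
    d1 = fmt.get('%1', '\n')    # after the object name
    d2 = fmt.get('%2', '    ')  # attribute indent
    d3 = fmt.get('%3', ' = ')   # key/value separator
    d4 = fmt.get('%4', '\n')    # between attributes
    d5 = fmt.get('%5', '\n')    # after the last attribute
    d6 = fmt.get('%6', '')      # trailing string
    name = 'Job Id: ' + d['id'] + d1
    attrs = [d2 + k + d3 + str(v) for k, v in sorted(d.items()) if k != 'id']
    return name + d4.join(attrs) + d5 + d6
```

With the defaults this reproduces the familiar `qstat -f` layout: name on the first column, attributes indented four spaces.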
to\n        ASCII 0x01, ...\n\n        :param s: string to convert\n        :type s: string\n        :returns: converted string\n        \"\"\"\n        def repl(m):\n            c = m.group(1)\n            return chr(ord(c) - 64) if \"@\" < c <= \"_\" else m.group(0)\n        return re.sub(r\"\\^(.)\", repl, s)\n\n    def convert_to_dictlist(self, l, attribs=None, mergelines=True, id=None,\n                            obj_type=None):\n        \"\"\"\n        Convert a list of records into a dictlist format.\n\n        :param l: array of records to convert\n        :type l: List\n        :param mergelines: merge qstat broken lines into one\n        :param obj_type: The type of object to query, one of the *\n                         objects.\n        :returns: Record list converted into dictlist format\n        \"\"\"\n\n        if mergelines:\n            lines = []\n            count = 1\n            for i in range(len(l)):\n                if l[i].startswith('\\t'):\n                    _e = len(lines) - 1\n                    lines[_e] = lines[_e].strip('\\r\\n\\t') + \\\n                        l[i].strip('\\r\\n\\t')\n                elif (not l[i].startswith(' ') and i > count and\n                      l[i - count].startswith('\\t')):\n                    _e = len(lines) - count\n                    lines[_e] = lines[_e] + l[i]\n                    if ((i + 1) < len(l) and not\n                            l[i + 1].startswith(('\\t', ' '))):\n                        count += 1\n                    else:\n                        count = 1\n                else:\n                    lines.append(l[i])\n        else:\n            lines = l\n\n        objlist = []\n        d = {}\n\n        for l in lines:\n            strip_line = l.strip()\n            m = self.pbsobjname_re.match(strip_line)\n            if m:\n                if len(d.keys()) > 1:\n                    if id is None or (id is not None and d['id'] == id):\n                        
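The caret decoding in `convert_to_ascii` can be exercised on its own; a self-contained copy of the same logic:

```python
import re

# Caret notation ^A..^_ maps to control characters 0x01..0x1f;
# anything else after a caret is left untouched.
def caret_to_ascii(s):
    def repl(m):
        c = m.group(1)
        # '@' (0x40) < c <= '_' (0x5f) selects the control range
        return chr(ord(c) - 64) if "@" < c <= "_" else m.group(0)
    return re.sub(r"\^(.)", repl, s)
```

For example, `^A` becomes `\x01` and `^[` becomes ESC (`\x1b`), while `^2` is passed through unchanged.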
objlist.append(d.copy())\n                d = {}\n                d['id'] = self.convert_to_ascii(m.group('name'))\n                _t = m.group('tag')\n                if _t == 'Resv ID: ':\n                    d[_t.replace(': ', '')] = d['id']\n            else:\n                m = self.pbsobjattrval_re.match(strip_line)\n                if m:\n                    attr = m.group('attribute')\n                    # Revisit this after having separate VNODE class\n                    if (attribs is None or attr.lower() in attribs or\n                            attr in attribs or (obj_type == MGR_OBJ_NODE and\n                                                attr == 'Mom')):\n                        if attr in d:\n                            d[attr] = d[attr] + \",\" + m.group('value')\n                        else:\n                            d[attr] = m.group('value')\n        # add the last element\n        if len(d.keys()) > 1:\n            if id is None or (id is not None and d['id'] == id):\n                objlist.append(d.copy())\n\n        return objlist\n\n    def convert_to_batch(self, l, mergelines=True):\n        \"\"\"\n        Convert a list of records into a batch format.\n\n        :param l: array of records to convert\n        :type l: List\n        :param mergelines: qstat breaks long lines over\n                           multiple lines, merge them\\\n                           to one by default.\n        :type mergelines: bool\n        :returns: A linked list of batch status\n        \"\"\"\n\n        if mergelines:\n            lines = []\n            for i in range(len(l)):\n                if l[i].startswith('\\t'):\n                    _e = len(lines) - 1\n                    lines[_e] = lines[_e].strip('\\r\\t') + \\\n                        l[i].strip('\\r\\n')\n                else:\n                    lines.append(l[i])\n        else:\n            lines = l\n\n        head_bs = None\n        prev_bs = None\n        prev_attr = None\n\n 
       for l in lines:\n            strip_line = l.strip()\n            m = self.pbsobjname_re.match(strip_line)\n            if m:\n                bs = batch_status()\n                bs.name = m.group('name')\n                bs.attribs = None\n                bs.next = None\n                if prev_bs:\n                    prev_bs.next = bs\n                if head_bs is None:\n                    head_bs = bs\n                prev_bs = bs\n                prev_attr = None\n            else:\n                m = self.pbsobjattrval_re.match(strip_line)\n                if m:\n                    attr = attrl()\n                    attr.name = m.group('attribute')\n                    attr.value = m.group('value')\n                    attr.next = None\n                    if bs.attribs is None:\n                        bs.attribs = attr\n                    if prev_attr:\n                        prev_attr.next = attr\n                    prev_attr = attr\n\n        return head_bs\n\n    def file_to_batch(self, fpath=None):\n        \"\"\"\n        Convert a file to batch format\n\n        :param fpath: File to be converted\n        :type fpath: str or None\n        :returns: File converted into batch format\n        \"\"\"\n        if fpath is None:\n            return None\n\n        try:\n            with open(fpath, 'r') as f:\n                lines = f.readlines()\n        except Exception:\n            self.logger.error('error converting file ' + fpath + ' to batch')\n            return None\n\n        return self.convert_to_batch(lines)\n\n    def batch_to_file(self, bs=None, fpath=None):\n        \"\"\"\n        Write a batch object to file\n\n        :param bs: Batch status\n        :param fpath: File to which batch object is to be written\n        :type fpath: str\n        \"\"\"\n        if bs is None or fpath is None:\n            return\n\n        try:\n            with open(fpath, 'w') as f:\n                self.display_batch_status(bs, writer=f)\n  
       except Exception:\n            self.logger.error('error converting batch status to file')\n\n    def batch_to_vnodedef(self, bs):\n        \"\"\"\n        :param bs: Batch status\n        :returns: The vnode definition string representation\n                  of nodes batch_status\n        \"\"\"\n        out = [\"$configversion 2\\n\"]\n\n        while bs is not None:\n            attr = bs.attribs\n            while attr is not None:\n                if attr.name.startswith(\"resources_available\") or \\\n                        attr.name.startswith(\"sharing\"):\n                    out += [bs.name + \": \"]\n                    out += [attr.name + \"=\" + attr.value + \"\\n\"]\n                attr = attr.next\n            bs = bs.next\n        return \"\".join(out)\n\n    def dictlist_to_vnodedef(self, dl=None):\n        \"\"\"\n        :param dl: Dictionary list\n        :type dl: List\n        :returns: The vnode definition string representation\n                  of a dictlist\n        \"\"\"\n        if dl is None:\n            return ''\n\n        out = [\"$configversion 2\\n\"]\n\n        for node in dl:\n            for k, v in node.items():\n                if (k.startswith(\"resources_available\") or\n                        k.startswith(\"sharing\") or\n                        k.startswith(\"provision_enable\") or\n                        k.startswith(\"queue\")):\n                    out += [node['id'] + \": \"]\n                    # MoM dislikes empty values reported in vnode defs so\n                    # we substitute no value for an actual empty string\n                    if not v:\n                        v = '\"\"'\n                    out += [k + \"=\" + str(v) + \"\\n\"]\n\n        return \"\".join(out)\n\n    def objlist_to_dictlist(self, objlist=None):\n        \"\"\"\n        Convert a list of PBS/PTL objects ``(e.g. 
Server/Job...)``\n        into a dictionary list representation of the batch status\n\n        :param objlist: List of ``PBS/PTL`` objects\n        :type objlist: List\n        :returns: Dictionary list representation of the batch status\n        \"\"\"\n        if objlist is None:\n            return None\n\n        bsdlist = []\n        for obj in objlist:\n            newobj = self.obj_to_dict(obj)\n            bsdlist.append(newobj)\n        return bsdlist\n\n    def obj_to_dict(self, obj):\n        \"\"\"\n        Convert a PBS/PTL object (e.g. Server/Job...) into a\n        dictionary format\n\n        :param obj: ``PBS/PTL`` object\n        :returns: Dictionary of ``PBS/PTL`` objects\n        \"\"\"\n        newobj = dict(obj.attributes.items())\n        newobj['id'] = obj.name\n        return newobj\n\n    def parse_execvnode(self, s=None):\n        \"\"\"\n        Parse an execvnode string into chunk objects\n\n        :param s: Execvnode string\n        :type s: str or None\n        :returns: Chunk objects for parsed execvnode string\n        \"\"\"\n        if s is None:\n            return None\n\n        chunks = []\n        start = 0\n        for c in range(len(s)):\n            if s[c] == '(':\n                start = c + 1\n            if s[c] == ')':\n                chunks.append(PbsTypeChunk(chunkstr=s[start:c]).info)\n        return chunks\n\n    def anupbs_exechost_numhosts(self, s=None):\n        \"\"\"\n        :param s: Exechost string\n        :type s: str or None\n        :returns: Number of hosts in the exechost string\n        \"\"\"\n        n = 0\n        if '[' in s:\n            eh = re.sub(r'.*\\[(.*)\\].*', r'\\1', s)\n            hosts = eh.split(',')\n            for hid in hosts:\n                elm = hid.split('-')\n                if len(elm) == 2:\n                    n += int(elm[1]) - int(elm[0]) + 1\n                else:\n                    n += 1\n        else:\n            n += 1\n        return n\n\n    def parse_exechost(self, s=None):\n        \"\"\"\n        Parse an 
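The bracketed-range counting in `anupbs_exechost_numhosts` handles NAS-style exechost strings that compress host names into ranges; a standalone copy of the logic (`count_hosts` is a hypothetical name):

```python
import re

# A NAS-style exechost string may compress hosts into bracketed
# ranges, e.g. 'r1i0n[0-3,5]' meaning r1i0n0..r1i0n3 plus r1i0n5.
def count_hosts(s):
    n = 0
    if '[' in s:
        eh = re.sub(r'.*\[(.*)\].*', r'\1', s)  # keep only '0-3,5'
        for hid in eh.split(','):
            elm = hid.split('-')
            if len(elm) == 2:
                n += int(elm[1]) - int(elm[0]) + 1  # inclusive range
            else:
                n += 1
    else:
        n += 1  # plain host name counts as one
    return n
```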
exechost string into a dictionary representation\n\n        :param s: String to be parsed\n        :type s: str or None\n        :returns: Dictionary format of the exechost string\n        \"\"\"\n        if s is None:\n            return None\n\n        hosts = []\n        hsts = s.split('+')\n        for h in hsts:\n            hi = {}\n            ti = {}\n            (host, task) = h.split('/',)\n            d = task.split('*')\n            if len(d) == 1:\n                taskslot = d[0]\n                ncpus = 1\n            elif len(d) == 2:\n                (taskslot, ncpus) = d\n            else:\n                (taskslot, ncpus) = (0, 1)\n            ti['task'] = taskslot\n            ti['ncpus'] = ncpus\n            hi[host] = ti\n            hosts.append(hi)\n        return hosts\n\n    def parse_select(self, s=None):\n        \"\"\"\n        Parse a ``select/schedselect`` string into a list\n        of dictionaries.\n\n        :param s: select/schedselect string\n        :type s: str or None\n        :returns: List of dictonaries\n        \"\"\"\n        if s is None:\n            return\n        info = []\n        chunks = s.split('+')\n        for chunk in chunks:\n            d = chunk.split(':')\n            numchunks = int(d[0])\n            resources = {}\n            for e in d[1:]:\n                k, v = e.split('=')\n                resources[k] = v\n            for _ in range(numchunks):\n                info.append(resources)\n\n        return info\n\n    def convert_time(self, val, fmt='%a %b %d %H:%M:%S %Y'):\n        \"\"\"\n        Convert a date time format into number of seconds\n        since epoch\n\n        :param val: date time value\n        :param fmt: date time format\n        :type fmt: str\n        :returns: seconds\n        \"\"\"\n        # Tweak for NAS format that puts the number of seconds since epoch\n        # in between\n        if val.split()[0].isdigit():\n            val = int(val.split()[0])\n        elif not 
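`parse_select` expands a select specification chunk-count by chunk-count; a standalone copy (minus the `None` guard) makes the behavior, and one subtlety, visible:

```python
# Sketch of parse_select: '2:ncpus=4:mem=2gb+1:ncpus=8' expands to one
# dict per chunk. As in the method above, the *same* resources dict is
# appended numchunks times, so repeated chunks share one underlying
# dict object.
def parse_select(s):
    info = []
    for chunk in s.split('+'):
        d = chunk.split(':')
        numchunks = int(d[0])       # leading count, e.g. the '2' in '2:ncpus=4'
        resources = {}
        for e in d[1:]:
            k, v = e.split('=')
            resources[k] = v
        for _ in range(numchunks):
            info.append(resources)  # shared reference, not a copy
    return info
```

Because of the shared reference, mutating one expanded chunk mutates its siblings; callers that modify the result should copy each dict first.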
val.isdigit():\n            val = time.strptime(val, fmt)\n            val = int(time.mktime(val))\n        return val\n\n    def convert_duration(self, val):\n        \"\"\"\n        Convert HH:MM:SS into a number of seconds.\n        If a plain number is fed in, that number is returned.\n        If the input is in neither format, 0 is returned and an\n        error is logged.\n\n        :param val: duration value\n        :type val: str\n        :returns: seconds\n        \"\"\"\n        if val.isdigit():\n            return int(val)\n\n        hhmmss = val.split(':')\n        if len(hhmmss) != 3:\n            self.logger.error('Incorrect format, expected HH:MM:SS')\n            return 0\n        return int(hhmmss[0]) * 3600 + int(hhmmss[1]) * 60 + int(hhmmss[2])\n\n    def convert_seconds_to_datetime(self, tm, fmt=None, seconds=True):\n        \"\"\"\n        Convert a number of seconds since epoch into a formatted\n        date time string\n\n        :param tm: seconds since epoch to convert\n        :type tm: int or str\n        :param fmt: optional format string. If used, the seconds\n                    parameter is ignored. Defaults to ``%Y%m%d%H%M``\n        :type fmt: str or None\n        :param seconds: if True, convert time with seconds\n                        granularity. 
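The duration conversion above is self-contained arithmetic; a copy without the logger shows the three cases:

```python
# Sketch of convert_duration: 'HH:MM:SS' becomes seconds; a bare digit
# string is returned as an int; anything else yields 0.
def convert_duration(val):
    if val.isdigit():
        return int(val)
    hhmmss = val.split(':')
    if len(hhmmss) != 3:
        return 0  # the method above also logs an error here
    return int(hhmmss[0]) * 3600 + int(hhmmss[1]) * 60 + int(hhmmss[2])
```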
Defaults to True.\n        :type seconds: bool\n        :returns: Formatted date time string\n        \"\"\"\n        if fmt is None:\n            fmt = \"%Y%m%d%H%M\"\n            if seconds:\n                fmt += \".%S\"\n\n        return time.strftime(fmt, time.localtime(int(tm)))\n\n    def convert_stime_to_seconds(self, st):\n        \"\"\"\n        Convert a time to seconds; on failure return the\n        original time\n\n        :param st: Time to be converted\n        :type st: str\n        :returns: Number of seconds\n        \"\"\"\n        try:\n            ret = time.mktime(time.strptime(st, '%a %b %d %H:%M:%S %Y'))\n        except Exception:\n            ret = st\n        return ret\n\n    def convert_dedtime(self, dtime):\n        \"\"\"\n        Convert dedicated time string of form %m/%d/%Y %H:%M.\n\n        :param dtime: A datetime string, as an entry in the\n                      dedicated_time file\n        :type dtime: str\n        :returns: A tuple of (from,to) of time since epoch\n        \"\"\"\n        dtime_from = None\n        dtime_to = None\n\n        m = self.dt_tag.match(dtime.strip())\n        if m:\n            try:\n                _f = \"%m/%d/%Y %H:%M\"\n                dtime_from = self.convert_datetime_to_epoch(m.group('dt_from'),\n                                                            fmt=_f)\n                dtime_to = self.convert_datetime_to_epoch(m.group('dt_to'),\n                                                          fmt=_f)\n            except Exception:\n                self.logger.error('error converting dedicated time')\n        return (dtime_from, dtime_to)\n\n    def convert_datetime_to_epoch(self, mdyhms, fmt=\"%m/%d/%Y %H:%M:%S\"):\n        \"\"\"\n        Convert the date time to epoch\n\n        :param mdyhms: date time\n        :type mdyhms: str\n        :param fmt: Format for date time\n        :type fmt: str\n        :returns: Epoch time\n        \"\"\"\n        return 
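`convert_datetime_to_epoch` is a thin `strptime`/`mktime` wrapper; since both directions use local time, formatting the epoch value back with the same format string recovers the original text (a sketch; the chosen timestamp is arbitrary and avoids DST-ambiguous hours):

```python
import time

# strptime parses the text into a struct_time (no timezone attached);
# mktime interprets that struct_time as local time and returns epoch
# seconds, so strftime(localtime(...)) is an exact inverse.
def datetime_to_epoch(mdyhms, fmt="%m/%d/%Y %H:%M:%S"):
    return int(time.mktime(time.strptime(mdyhms, fmt)))
```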
int(time.mktime(time.strptime(mdyhms, fmt)))\n\n    def compare_versions(self, v1, v2, op=None):\n        \"\"\"\n        Compare v1 to v2 with respect to operation op\n\n        :param v1: If not a looseversion, it gets converted\n                   to it\n        :param v2: If not a looseversion, it gets converted\n                   to it\n        :param op: An operation, one of ``LT``, ``LE``, ``EQ``,\n                   ``GE``, ``GT``\n        :type op: str\n        :returns: True or False\n        \"\"\"\n        if op is None:\n            self.logger.error('missing operator, one of LT,LE,EQ,GE,GT')\n            return None\n\n        if v1 is None or v2 is None:\n            return False\n\n        if isinstance(v1, str):\n            v1 = LooseVersion(v1)\n        if isinstance(v2, str):\n            v2 = LooseVersion(v2)\n\n        if op == GT:\n            if v1 > v2:\n                return True\n        elif op == GE:\n            if v1 >= v2:\n                return True\n        elif op == EQ:\n            if v1 == v2:\n                return True\n        elif op == LT:\n            if v1 < v2:\n                return True\n        elif op == LE:\n            if v1 <= v2:\n                return True\n\n        return False\n\n    def convert_arglist(self, attr):\n        \"\"\"\n        strip the XML attributes from the argument list attribute\n\n        :param attr: Argument list attributes\n        :type attr: List\n        :returns: Stripped XML attributes\n        \"\"\"\n\n        xmls = \"<jsdl-hpcpa:Argument>\"\n        xmle = \"</jsdl-hpcpa:Argument>\"\n        nattr = attr.replace(xmls, \" \")\n        nattr = nattr.replace(xmle, \" \")\n\n        return nattr.strip()\n\n    def convert_to_cli(self, attrs, op=None, hostname=None, dflt_conf=True,\n                       exclude_attrs=None):\n        \"\"\"\n        Convert attributes into their CLI format counterpart. 
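`compare_versions` relies on `LooseVersion`, which comes from `distutils.version` and disappeared with `distutils` in Python 3.12. A minimal numeric-only stand-in (assumption: purely dotted-numeric versions, whereas `LooseVersion` also tolerates letter components) shows the same operator dispatch:

```python
import operator

# Operator names mirror the LT/LE/EQ/GE/GT constants used by
# compare_versions; the mapping itself is an assumption of this sketch.
_OPS = {'LT': operator.lt, 'LE': operator.le, 'EQ': operator.eq,
        'GE': operator.ge, 'GT': operator.gt}

def compare_versions(v1, v2, op):
    # Tuple comparison gives numeric ordering per component, so
    # '19.1.10' correctly sorts after '19.1.3' (string comparison
    # would not).
    key = lambda v: tuple(int(p) for p in v.split('.'))
    return _OPS[op](key(v1), key(v2))
```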
This\n        method is far from complete; it grows as needs arise and\n        could use a rewrite, especially going along with a rewrite\n        of pbs_api_to_cli\n\n        :param attrs: Attributes to convert\n        :type attrs: List or str or dictionary\n        :param op: The qualifier of the operation being performed,\n                   such as ``IFL_SUBMIT``, ``IFL_DELETE``,\n                   ``IFL_TERMINATE``...\n        :type op: str or None\n        :param hostname: The name of the host on which to operate\n        :type hostname: str or None\n        :param dflt_conf: Whether we are using the default PBS\n                          configuration\n        :type dflt_conf: bool\n        :param exclude_attrs: Optional list of attributes to not\n                              convert\n        :type exclude_attrs: List\n        :returns: CLI format of attributes\n        \"\"\"\n        ret = []\n\n        if op == IFL_SUBMIT:\n            executable = arglist = None\n\n        elif op == IFL_DELETE:\n            _c = []\n            if isinstance(attrs, str):\n                attrs = [attrs]\n\n            if isinstance(attrs, list):\n                for a in attrs:\n                    if 'force' in a:\n                        _c.append('-Wforce')\n                    if 'deletehist' in a:\n                        _c.append('-x')\n                    if 'nomail' in a:\n                        _c.append('-Wsuppress_email=-1')\n            return _c\n\n        elif op == IFL_TERMINATE:\n            _c = []\n            if attrs is None:\n                _c = []\n            elif isinstance(attrs, str):\n                _c = ['-t', attrs]\n            else:\n                if ((attrs & SHUT_QUICK) == SHUT_QUICK):\n                    _c = ['-t', 'quick']\n                if ((attrs & SHUT_IMMEDIATE) == SHUT_IMMEDIATE):\n                    _c = ['-t', 'immediate']\n                if ((attrs & SHUT_DELAY) == SHUT_DELAY):\n                    _c = ['-t', 
'delay']\n                if ((attrs & SHUT_WHO_SCHED) == SHUT_WHO_SCHED):\n                    _c.append('-s')\n                if ((attrs & SHUT_WHO_MOM) == SHUT_WHO_MOM):\n                    _c.append('-m')\n                if ((attrs & SHUT_WHO_SECDRY) == SHUT_WHO_SECDRY):\n                    _c.append('-f')\n                if ((attrs & SHUT_WHO_IDLESECDRY) == SHUT_WHO_IDLESECDRY):\n                    _c.append('-F')\n                if ((attrs & SHUT_WHO_SECDONLY) == SHUT_WHO_SECDONLY):\n                    _c.append('-i')\n            return _c\n\n        elif op == IFL_RALTER:\n            if isinstance(attrs, dict):\n                if 'extend' in attrs and attrs['extend'] == 'force':\n                    ret.append('-Wforce')\n                    del attrs['extend']\n\n        if attrs is None or len(attrs) == 0:\n            return ret\n\n        # if a list, convert to a dictionary to fall into a single processing\n        # of the attributes\n        if (isinstance(attrs, list) and len(attrs) > 0 and\n                not isinstance(attrs[0], tuple)):\n            tmp_attrs = {}\n            for each_attr in attrs:\n                tmp_attrs[each_attr] = ''\n            del attrs\n            attrs = tmp_attrs\n            del tmp_attrs\n\n        if isinstance(attrs, (dict, OrderedDict)):\n            attrs = attrs.items()\n\n        for a, v in attrs:\n            # In a job name string, prefix special characters with \"\\\"\n            # so they are read as ordinary characters on the\n            # cray, craysim, and shasta platforms\n            if (a == \"Job_Name\") and (self.platform == 'cray' or\n                                      self.platform == 'craysim' or\n                                      self.platform == 'shasta'):\n                v = v.translate({ord(c): \"\\\\\" +\n                                 c for c in r\"~`!@#$%^&*()[]{};:,/<>?\\|=\"})\n            if exclude_attrs is not None and a in exclude_attrs:\n                continue\n\n 
           if op == IFL_SUBMIT:\n                if a == ATTR_executable:\n                    executable = v\n                    continue\n                if a == ATTR_Arglist:\n                    if v is not None:\n                        arglist = self.convert_arglist(v)\n                        if len(arglist) == 0:\n                            return []\n                    continue\n            if isinstance(v, list):\n                v = ','.join(v)\n\n            # when issuing remote commands, escape spaces in attribute values\n            if (((hostname is not None) and\n                 (not self.du.is_localhost(hostname))) or\n                    (not dflt_conf)):\n                if ' ' in str(v):\n                    v = '\"' + v + '\"'\n\n            if '.' in a:\n                (attribute, resource) = a.split('.')\n                ret.append('-' + api_to_cli[attribute])\n                rv = resource\n                if v is not None:\n                    rv += '=' + str(v)\n                ret.append(rv)\n            else:\n                try:\n                    val = api_to_cli[a]\n                except KeyError:\n                    self.logger.error('error  retrieving key ' + str(a))\n                    # for unknown or junk options\n                    ret.append(a)\n                    if v is not None:\n                        ret.append(str(v))\n                    continue\n                # on a remote job submit append the remote server name\n                # to the queue name\n                if ((op == IFL_SUBMIT) and (hostname is not None)):\n                    if ((not self.du.is_localhost(hostname)) and\n                            (val == 'q') and (v is not None) and\n                            ('@' not in v) and (v != '')):\n                        v += '@' + hostname\n                val = '-' + val\n                if '=' in val:\n                    if v is not None:\n                        ret.append(val + str(v))\n 
                   else:\n                        ret.append(val)\n                else:\n                    ret.append(val)\n                    if v is not None:\n                        ret.append(str(v))\n\n        # Executable and argument list must come last in a job submission\n        if ((op == IFL_SUBMIT) and (executable is not None)):\n            ret.append('--')\n            ret.append(executable)\n            if arglist is not None:\n                ret.append(arglist)\n        return ret\n\n    def filter_batch_status(self, bs, attrib):\n        \"\"\"\n        Filter out elements that don't have the attributes requested\n        This is needed to adapt to the fact that requesting a\n        resource attribute returns all ``'<resource-name>.*'``\n        attributes so we need to ensure that the specific resource\n        requested is present in the stat'ed object.\n\n        This is needed especially when calling expect with an op=NE\n        because we need to filter on objects that have exactly\n        the attributes requested\n\n        :param bs: Batch status\n        :param attrib: Requested attributes\n        :type attrib: str or dictionary\n        :returns: Filtered batch status\n        \"\"\"\n\n        if isinstance(attrib, dict):\n            keys = attrib.keys()\n        elif isinstance(attrib, str):\n            keys = attrib.split(',')\n        else:\n            keys = attrib\n\n        if keys:\n            del_indices = []\n            for idx in range(len(bs)):\n                # iterate over a copy of the keys since entries may be\n                # deleted from the dictionary below\n                for k in list(bs[idx].keys()):\n                    if '.' 
not in k:\n                        continue\n                    if k != 'id' and k not in keys:\n                        del bs[idx][k]\n                # if no matching resources, remove the object\n                if len(bs[idx]) == 1:\n                    del_indices.append(idx)\n\n            for i in sorted(del_indices, reverse=True):\n                del bs[i]\n\n        return bs\n\n    def convert_attributes_by_op(self, attributes, setattrs=False):\n        \"\"\"\n        Convert attributes by operator, i.e. convert an attribute\n        of the form\n\n        ``<attr_name><op><value>`` (e.g. resources_available.ncpus>4)\n\n        to\n\n        ``<attr_name>: (<op>, <value>)``\n        (e.g. resources_available.ncpus: (GT, 4))\n\n        :param attributes: the attributes to convert\n        :type attributes: List\n        :param setattrs: if True, set the attributes with no operator\n                         as (SET, '')\n        :type setattrs: bool\n        :returns: Converted attributes by operator\n        \"\"\"\n        # the order of the operators matters: the longer strings must be\n        # searched for first, otherwise '>=' would match on '>'\n        operators = ('<=', '>=', '!=', '=', '>', '<', '~')\n        d = {}\n        for attr in attributes:\n            found = False\n            for op in operators:\n                if op in attr:\n                    a = attr.split(op)\n                    d[a[0]] = (PTL_STR_TO_OP[op], a[1])\n                    found = True\n                    break\n            if not found and setattrs:\n                d[attr] = (SET, '')\n        return d\n\n    def operator_in_attribute(self, attrib):\n        \"\"\"\n        Returns True if an operator string is present in any\n        attribute name\n\n        :param attrib: Attribute names\n        :type attrib: List or dictionary\n        :returns: True or False\n        \"\"\"\n        operators = PTL_STR_TO_OP.keys()\n        for a in attrib:\n            for op in 
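The operator-ordering point in `convert_attributes_by_op` can be demonstrated standalone. The mapping below stands in for PTL's `PTL_STR_TO_OP` (which maps to operator constants rather than the plain strings used here):

```python
# Two-character operators must be tried first, otherwise
# 'ncpus>=4' would split on '>' (or '=') instead of '>='.
OPS = {'<=': 'LE', '>=': 'GE', '!=': 'NE', '=': 'EQ',
       '>': 'GT', '<': 'LT', '~': 'MATCH_RE'}
ORDERED = ('<=', '>=', '!=', '=', '>', '<', '~')

def attrs_by_op(attributes):
    d = {}
    for attr in attributes:
        for op in ORDERED:
            if op in attr:
                # partition splits on the first occurrence, like the
                # split()[0]/[1] pair in the method above
                name, _, value = attr.partition(op)
                d[name] = (OPS[op], value)
                break
    return d
```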
operators:\n                if op in a:\n                    return True\n        return False\n\n    def list_resources(self, objtype=None, objs=[]):\n        \"\"\"\n        Lists the resources\n\n        :param objtype: Type of the object\n        :type objtype: str\n        :param objs: Object list\n        :type objs: List\n        :returns: List of resources\n        \"\"\"\n        if objtype in (VNODE, NODE, SERVER, QUEUE, SCHED):\n            prefix = 'resources_available.'\n        elif objtype in (JOB, RESV):\n            prefix = 'Resource_List.'\n        else:\n            return\n\n        resources = []\n        for o in objs:\n            for a in o.keys():\n                if a.startswith(prefix):\n                    res = a.replace(prefix, '')\n                    if res not in resources:\n                        resources.append(res)\n        return resources\n\n    def compare(self, obj1, obj2, showdiff=False):\n        \"\"\"\n        Compare two objects.\n\n        :param showdiff: whether to print the specific differences,\n                         defaults to False\n        :type showdiff: bool\n        :returns: 0 if objects are identical and non zero otherwise\n        \"\"\"\n        if not showdiff:\n            # cmp() no longer exists in Python 3; an equality check is\n            # all the documented 0/non-zero contract requires\n            ret = 0 if obj1 == obj2 else 1\n            if ret != 0:\n                self.logger.info('objects differ')\n            return ret\n\n        if not isinstance(obj1, type(obj2)):\n            self.logger.error('objects are of different type')\n            return 1\n\n        if isinstance(obj1, list):\n            if len(obj1) != len(obj2):\n                self.logger.info(\n                    'comparing ' + str(\n                        obj1) + ' and ' + str(\n                        obj2))\n                self.logger.info('objects are of different lengths')\n                return\n            for i in range(len(obj1)):\n                self.compare(obj1[i], obj2[i], showdiff=showdiff)\n            return\n\n        if 
isinstance(obj1, dict):\n            self.logger.info('comparing ' + str(obj1) + ' and ' + str(obj2))\n            onlyobj1 = []\n            diffobjs = []\n            onlyobj2 = []\n            for k1, v1 in obj1.items():\n                if k1 not in obj2:\n                    onlyobj1.append(k1 + '=' + str(v1))\n\n                if k1 in obj2 and obj2[k1] != v1:\n                    diffobjs.append(\n                        k1 + '=' + str(v1) + ' vs ' + k1 + '=' + str(obj2[k1]))\n\n            for k2, v2 in obj2.items():\n                if k2 not in obj1:\n                    onlyobj2.append(k2 + '=' + str(v2))\n\n            if len(onlyobj1) > 0:\n                self.logger.info(\"only in first object: \" + \" \".join(onlyobj1))\n            if len(onlyobj2) > 0:\n                self.logger.info(\n                    \"only in second object: \" + \" \".join(onlyobj2))\n            if len(diffobjs) > 0:\n                self.logger.info(\"diff between objects: \" + \" \".join(diffobjs))\n            if len(onlyobj1) == len(onlyobj2) == len(diffobjs) == 0:\n                self.logger.info(\"objects are identical\")\n                return 0\n\n            return 1\n\n    def _make_template_formula(self, formula):\n        \"\"\"\n        Create a template of the formula\n\n        :param formula: Formula for which template is to be created\n        :type formula: str\n        :returns: Template\n        \"\"\"\n        tformula = []\n        skip = False\n        for c in formula:\n            if not skip and c.isalpha():\n                tformula.append('$')\n                skip = True\n            if c in ('+', '-', '/', ' ', '*', '%'):\n                skip = False\n            tformula.append(c)\n        return \"\".join(tformula)\n\n    def update_attributes_list(self, obj):\n        \"\"\"\n        Updates the attribute list\n\n        :param obj: Objects\n        :returns: Updated attribute list\n        \"\"\"\n        if not hasattr(obj, 
'attributes'):\n            return\n        if not hasattr(obj, 'Resource_List'):\n            setattr(obj, 'Resource_List', {})\n\n        for attr, val in obj.attributes.items():\n            if attr.startswith('Resource_List.'):\n                (_, resource) = attr.split('.')\n                obj.Resource_List[resource] = val\n\n    def parse_fgc_limit(self, limstr=None):\n        \"\"\"\n        Parse an ``FGC`` limit entry, of the form:\n\n        ``<limtype>[.<resource>]=[<entity_type>:<entity_name>\n        =<entity_value>]``\n\n        :param limstr: FGC limit string\n        :type limstr: str or None\n        :returns: Parsed FGC string in given format\n        \"\"\"\n        m = self.lim_tag.match(limstr)\n        if m:\n            _v = str(PbsAttribute.decode_value(m.group('entity_value')))\n            return (m.group('limtype'), m.group('resource'),\n                    m.group('entity_type'), m.group('entity_name'), _v)\n        return None\n\n    def is_job_array(self, jobid):\n        \"\"\"\n        If a job array return True, otherwise return False\n\n        :param jobid: PBS jobid\n        :returns: True or False\n        \"\"\"\n        if self.array_tag.match(jobid):\n            return True\n        return False\n\n    def is_subjob(self, jobid):\n        \"\"\"\n        If a subjob of a job array, return the subjob id\n        otherwise return False\n\n        :param jobid: PBS job id\n        :type jobid: str\n        :returns: True or False\n        \"\"\"\n        m = self.subjob_tag.match(jobid)\n        if m:\n            return m.group('subjobid')\n        return False\n\n\nclass PbsBatchObject(list):\n\n    def __init__(self, bs):\n        self.set_batch_status(bs)\n\n    def set_batch_status(self, bs):\n        \"\"\"\n        Sets the batch status\n\n        :param bs: Batch status\n        \"\"\"\n        if 'id' in bs:\n            self.name = bs['id']\n        for k, v in bs.items():\n            self.append(PbsAttribute(k, 
v))\n\n\nclass PbsBatchStatus(list):\n\n    \"\"\"\n    Wrapper class for a Batch Status object.\n    Converts a batch status (as dictlist) into a list of\n    PbsBatchObjects.\n\n    :param bs: Batch status\n    :type bs: List or dictionary\n    :returns: List of PBS batch objects\n    \"\"\"\n\n    def __init__(self, bs):\n        if not isinstance(bs, (list, dict)):\n            raise TypeError(\"Expected a list or dictionary\")\n\n        if isinstance(bs, dict):\n            self.append(PbsBatchObject(bs))\n        else:\n            for b in bs:\n                self.append(PbsBatchObject(b))\n\n    def __str__(self):\n        # this instance is itself the list of PbsBatchObjects\n        rv = []\n        for b in self:\n            rv += [str(b)]\n        return \"\\n\".join(rv)\n"
  },
  {
    "path": "test/fw/ptl/lib/ptl_comm.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport copy\nimport datetime\nimport logging\nimport os\nimport re\nimport socket\nimport string\nimport sys\nimport time\nfrom ptl.lib.ptl_service import PBSService, PBSInitServices\n\n\nclass Comm(PBSService):\n\n    \"\"\"\n    PBS ``Comm`` configuration and control\n    \"\"\"\n\n    \"\"\"\n    :param name: The hostname of the Comm. Defaults to current hostname.\n    :type name: str\n    :param attrs: Dictionary of attributes to set, these will override\n                  defaults.\n    :type attrs: dictionary\n    :param pbsconf_file: path to config file to parse for PBS_HOME,\n                         PBS_EXEC, etc\n    :type pbsconf_file: str or None\n    :param snapmap: A dictionary of PBS objects (node,server,etc) to\n                    mapped files from PBS snap directory\n    :type snapmap: dictionary\n    :param snap: path to PBS snap directory (This will override snapmap)\n    :type snap: str or None\n    :param server: A PBS server instance to which this Comm is associated\n    :type server: str\n    :param db_access: set to either file containing credentials to DB access or\n                      dictionary containing {'dbname':...,'user':...,\n                      'port':...}\n    :type db_access: str or dictionary\n        \"\"\"\n    dflt_attributes = {}\n\n    def __init__(self, server, name=None, attrs={}, pbsconf_file=None,\n       
          snapmap={}, snap=None, db_access=None):\n        self.server = server\n        if snap is None and self.server.snap is not None:\n            snap = self.server.snap\n        if (len(snapmap) == 0) and (len(self.server.snapmap) != 0):\n            snapmap = self.server.snapmap\n        super().__init__(name, attrs, self.dflt_attributes,\n                         pbsconf_file, snapmap, snap)\n        _m = ['Comm ', self.shortname]\n        if pbsconf_file is not None:\n            _m += ['@', pbsconf_file]\n        _m += [': ']\n        self.logprefix = \"\".join(_m)\n        self.conf_to_cmd_map = {\n            'PBS_COMM_ROUTERS': '-r',\n            'PBS_COMM_THREADS': '-t'\n        }\n        self.pi = PBSInitServices(hostname=self.hostname,\n                                  conf=self.pbs_conf_file)\n\n    def start(self, args=None, launcher=None):\n        \"\"\"\n        Start the comm\n\n        :param args: Argument required to start the comm\n        :type args: str\n        :param launcher: Optional utility to invoke the launch of the service\n        :type launcher: str or list\n        \"\"\"\n        if args is not None or launcher is not None:\n            return super()._start(inst=self, args=args,\n                                  cmd_map=self.conf_to_cmd_map,\n                                  launcher=launcher)\n        else:\n            try:\n                rv = self.pi.start_comm()\n                pid = self._validate_pid(self)\n                if pid is None:\n                    raise PbsServiceError(rv=False, rc=-1,\n                                          msg=\"Could not find PID\")\n            except PbsInitServicesError as e:\n                raise PbsServiceError(rc=e.rc, rv=e.rv, msg=e.msg)\n            return rv\n\n    def stop(self, sig=None):\n        \"\"\"\n        Stop the comm.\n\n        :param sig: Signal to stop the comm\n        :type sig: str\n        \"\"\"\n        if sig is not None:\n            
self.logger.info(self.logprefix + 'stopping Comm on host ' +\n                             self.hostname)\n            return super()._stop(sig, inst=self)\n        else:\n            try:\n                self.pi.stop_comm()\n            except PbsInitServicesError as e:\n                raise PbsServiceError(rc=e.rc, rv=e.rv, msg=e.msg)\n            return True\n\n    def restart(self):\n        \"\"\"\n        Restart the comm.\n        \"\"\"\n        if self.isUp():\n            if not self.stop():\n                return False\n        return self.start()\n\n    def log_match(self, msg=None, id=None, n=50, tail=True, allmatch=False,\n                  regexp=False, max_attempts=None, interval=None,\n                  starttime=None, endtime=None, level=logging.INFO,\n                  existence=True):\n        \"\"\"\n        Match given ``msg`` in given ``n`` lines of Comm log\n\n        :param msg: log message to match, can be regex also when\n                    ``regexp`` is True\n        :type msg: str\n        :param id: The id of the object to trace. Only used for\n                   tracejob\n        :type id: str\n        :param n: 'ALL' or the number of lines to search through,\n                  defaults to 50\n        :type n: str or int\n        :param tail: If true (default), starts from the end of\n                     the file\n        :type tail: bool\n        :param allmatch: If True, all matching lines out of the n\n                         parsed are returned as a list. 
Defaults\n                         to False\n        :type allmatch: bool\n        :param regexp: If true msg is a Python regular expression.\n                       Defaults to False\n        :type regexp: bool\n        :param max_attempts: the number of attempts to make to find\n                             a matching entry\n        :type max_attempts: int\n        :param interval: the interval between attempts\n        :type interval: int\n        :param starttime: If set ignore matches that occur before\n                          specified time\n        :type starttime: float\n        :param endtime: If set ignore matches that occur after\n                        specified time\n        :type endtime: float\n        :param level: The logging level, defaults to INFO\n        :type level: int\n        :param existence: If True (default), check for existence of\n                        given msg, else check for non-existence of\n                        given msg.\n        :type existence: bool\n\n        :return: (x,y) where x is the matching line\n                 number and y the line itself. If allmatch is True,\n                 a list of tuples is returned.\n        :rtype: tuple\n        :raises PtlLogMatchError:\n                When ``existence`` is True and given\n                ``msg`` is not found in ``n`` line\n                Or\n                When ``existence`` is False and given\n                ``msg`` found in ``n`` line.\n\n        .. note:: The matching line number is relative to the record\n                  number, not the absolute line number in the file.\n        \"\"\"\n        return self._log_match(self, msg, id, n, tail, allmatch, regexp,\n                               max_attempts, interval, starttime, endtime,\n                               level=level, existence=existence)\n"
  },
  {
    "path": "test/fw/ptl/lib/ptl_config.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport copy\nimport datetime\nimport logging\nimport os\nimport random\nimport re\nimport sys\nimport time\n\nfrom ptl.utils.pbs_dshutils import DshUtils\nfrom ptl.lib.ptl_object import PBSObject\n\n\nclass PtlConfig(object):\n\n    \"\"\"\n    Holds configuration options.\n    The options can be stored in a file as well as in the OS environment\n    variables. When set, the environment variables will override\n    definitions in the file. By default, on Unix-like systems, the file\n    read is ``/etc/ptl.conf``; the environment variable ``PTL_CONF_FILE``\n    can be used to set the path to the file to read.\n\n    The format of the file is a series of ``<key> = <value>`` properties.\n    A line that starts with a '#' is ignored and can be used for comments.\n\n    :param conf: Path to PTL configuration file\n    :type conf: str or None\n    \"\"\"\n    logger = logging.getLogger(__name__)\n\n    def __init__(self, conf=None):\n        self.options = {\n            'PTL_SUDO_CMD': 'sudo -H',\n            'PTL_RSH_CMD': 'ssh',\n            'PTL_CP_CMD': 'scp -p',\n            'PTL_MAX_ATTEMPTS': 180,\n            'PTL_ATTEMPT_INTERVAL': 0.5,\n            'PTL_UPDATE_ATTRIBUTES': True,\n        }\n        self.handlers = {\n            'PTL_SUDO_CMD': DshUtils.set_sudo_cmd,\n            'PTL_RSH_CMD': DshUtils.set_rsh_cmd,\n            'PTL_CP_CMD': 
DshUtils.set_copy_cmd,\n            'PTL_MAX_ATTEMPTS': PBSObject.set_max_attempts,\n            'PTL_ATTEMPT_INTERVAL': PBSObject.set_attempt_interval,\n            'PTL_UPDATE_ATTRIBUTES': PBSObject.set_update_attributes\n        }\n        if conf is None:\n            conf = os.environ.get('PTL_CONF_FILE', '/etc/ptl.conf')\n        try:\n            with open(conf) as f:\n                lines = f.readlines()\n        except IOError:\n            lines = []\n        for line in lines:\n            line = line.strip()\n            if (line.startswith('#') or (line == '')):\n                continue\n            try:\n                k, v = line.split('=', 1)\n                k = k.strip()\n                v = v.strip()\n                self.options[k] = v\n            except Exception:\n                self.logger.error('Error parsing line ' + line)\n        # the two if blocks below are for backward compatibility\n        if 'PTL_EXPECT_MAX_ATTEMPTS' in self.options:\n            _o = self.options['PTL_EXPECT_MAX_ATTEMPTS']\n            _m = self.options['PTL_MAX_ATTEMPTS']\n            _e = os.environ.get('PTL_EXPECT_MAX_ATTEMPTS', _m)\n            del self.options['PTL_EXPECT_MAX_ATTEMPTS']\n            self.options['PTL_MAX_ATTEMPTS'] = max([int(_o), int(_m), int(_e)])\n            _msg = 'PTL_EXPECT_MAX_ATTEMPTS is deprecated,'\n            _msg += ' use PTL_MAX_ATTEMPTS instead'\n            self.logger.warning(_msg)\n        if 'PTL_EXPECT_INTERVAL' in self.options:\n            _o = self.options['PTL_EXPECT_INTERVAL']\n            _m = self.options['PTL_ATTEMPT_INTERVAL']\n            _e = os.environ.get('PTL_EXPECT_INTERVAL', _m)\n            del self.options['PTL_EXPECT_INTERVAL']\n            # the interval may be fractional (e.g. 0.5), so compare as float\n            self.options['PTL_ATTEMPT_INTERVAL'] = \\\n                max([float(_o), float(_m), float(_e)])\n            _msg = 'PTL_EXPECT_INTERVAL is deprecated,'\n            _msg += ' use PTL_ATTEMPT_INTERVAL instead'\n            self.logger.warning(_msg)\n        for k, v in self.options.items():\n            if k in os.environ:\n                v = os.environ[k]\n            else:\n                os.environ[k] = str(v)\n            if k in self.handlers:\n                self.handlers[k](v)\n"
  },
  {
    "path": "test/fw/ptl/lib/ptl_constants.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport copy\nimport datetime\nimport logging\nimport os\nimport random\nimport re\nimport sys\nimport time\n\ntry:\n    from ptl.lib.pbs_ifl import *\n    API_OK = True\nexcept Exception:\n    try:\n        from ptl.lib.pbs_ifl_mock import *\n    except Exception:\n        sys.stderr.write(\"failed to import pbs_ifl, run pbs_swigify \" +\n                         \"to make it\\n\")\n        raise ImportError\n    API_OK = False\n# suppress logging exceptions\nlogging.raiseExceptions = False\n\n# Various mappings and aliases\nMGR_OBJ_VNODE = MGR_OBJ_NODE\n\nVNODE = MGR_OBJ_VNODE\nNODE = MGR_OBJ_NODE\nHOST = MGR_OBJ_HOST\nJOB = MGR_OBJ_JOB\nRESV = MGR_OBJ_RESV\nSERVER = MGR_OBJ_SERVER\nQUEUE = MGR_OBJ_QUEUE\nSCHED = MGR_OBJ_SCHED\nHOOK = MGR_OBJ_HOOK\nRSC = MGR_OBJ_RSC\nPBS_HOOK = MGR_OBJ_PBS_HOOK\n\n# the order of these symbols matters, see pbs_ifl.h\n(SET, UNSET, INCR, DECR, EQ, NE, GE, GT,\n LE, LT, MATCH, MATCH_RE, NOT, DFLT) = list(range(14))\n\n(PTL_OR, PTL_AND) = [0, 1]\n\n(IFL_SUBMIT, IFL_SELECT, IFL_TERMINATE, IFL_ALTER,\n IFL_MSG, IFL_DELETE, IFL_RALTER) = [0, 1, 2, 3, 4, 5, 6]\n\n(PTL_API, PTL_CLI) = ['api', 'cli']\n\n(PTL_COUNTER, PTL_FILTER) = [0, 1]\n\nPTL_STR_TO_OP = {\n    '<': LT,\n    '<=': LE,\n    '=': EQ,\n    '>=': GE,\n    '>': GT,\n    '!=': NE,\n    ' set ': SET,\n    ' unset ': UNSET,\n    ' match ': MATCH,\n    '~': MATCH_RE,\n    
'!': NOT\n}\n\nPTL_OP_TO_STR = {\n    LT: '<',\n    LE: '<=',\n    EQ: '=',\n    GE: '>=',\n    GT: '>',\n    SET: ' set ',\n    NE: '!=',\n    UNSET: ' unset ',\n    MATCH: ' match ',\n    MATCH_RE: '~',\n    NOT: 'is not'\n}\n\nPTL_ATTROP_TO_STR = {PTL_AND: '&&', PTL_OR: '||'}\n\n(RESOURCES_AVAILABLE, RESOURCES_TOTAL) = [0, 1]\n\nEXPECT_MAP = {\n    UNSET: 'Unset',\n    SET: 'Set',\n    EQ: 'Equal',\n    NE: 'Not Equal',\n    LT: 'Less Than',\n    GT: 'Greater Than',\n    LE: 'Less Equal Than',\n    GE: 'Greater Equal Than',\n    MATCH_RE: 'Matches regexp',\n    MATCH: 'Matches',\n    NOT: 'Not'\n}\n\nPBS_CMD_MAP = {\n    MGR_CMD_CREATE: 'create',\n    MGR_CMD_SET: 'set',\n    MGR_CMD_DELETE: 'delete',\n    MGR_CMD_UNSET: 'unset',\n    MGR_CMD_IMPORT: 'import',\n    MGR_CMD_EXPORT: 'export',\n    MGR_CMD_LIST: 'list',\n}\n\nPBS_CMD_TO_OP = {\n    MGR_CMD_SET: SET,\n    MGR_CMD_UNSET: UNSET,\n    MGR_CMD_DELETE: UNSET,\n    MGR_CMD_CREATE: SET,\n}\n\nPBS_OBJ_MAP = {\n    MGR_OBJ_NONE: 'none',\n    SERVER: 'server',\n    QUEUE: 'queue',\n    JOB: 'job',\n    NODE: 'node',\n    RESV: 'reservation',\n    RSC: 'resource',\n    SCHED: 'sched',\n    HOST: 'host',\n    HOOK: 'hook',\n    VNODE: 'node',\n    PBS_HOOK: 'pbshook'\n}\n\nPTL_TRUE = ('1', 'true', 't', 'yes', 'y', 'enable', 'enabled', 'True', True)\nPTL_FALSE = ('0', 'false', 'f', 'no', 'n', 'disable', 'disabled', 'False',\n             False)\nPTL_NONE = ('None', None)\nPTL_FORMULA = '__formula__'\nPTL_NOARG = '__noarg__'\nPTL_ALL = '__ALL__'\n\nCMD_ERROR_MAP = {\n    'alterjob': 'PbsAlterError',\n    'holdjob': 'PbsHoldError',\n    'sigjob': 'PbsSignalError',\n    'msgjob': 'PbsMessageError',\n    'rlsjob': 'PbsReleaseError',\n    'rerunjob': 'PbsRerunError',\n    'orderjob': 'PbsOrderError',\n    'runjob': 'PbsRunError',\n    'movejob': 'PbsMoveError',\n    'delete': 'PbsDeleteError',\n    'deljob': 'PbsDeljobError',\n    'delresv': 'PbsDelresvError',\n    'status': 'PbsStatusError',\n    'manager': 
'PbsManagerError',\n    'submit': 'PbsSubmitError',\n    'terminate': 'PbsQtermError',\n    'alterresv': 'PbsResvAlterError'\n}\n"
  },
  {
    "path": "test/fw/ptl/lib/ptl_entities.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport copy\nimport datetime\nimport logging\nimport os\nimport pwd\nimport re\nimport sys\nimport time\n\nfrom ptl.utils.pbs_testusers import (ROOT_USER, TEST_USER, PbsUser,\n                                     DAEMON_SERVICE_USER)\nfrom ptl.lib.ptl_error import PbsManagerError\nfrom ptl.lib.ptl_object import PBSObject\nfrom ptl.lib.ptl_constants import (ATTR_resv_start, ATTR_job,\n                                   ATTR_resv_end, ATTR_resv_duration,\n                                   ATTR_count, ATTR_rescassn, ATTR_qtype,\n                                   ATTR_enable, ATTR_start, ATTR_total,\n                                   MGR_CMD_SET, MGR_CMD_UNSET, MGR_OBJ_QUEUE,\n                                   QUEUE)\n\n\nclass Resource(PBSObject):\n\n    \"\"\"\n    PBS resource referenced by name, type and flag\n\n    :param name: Resource name\n    :type name: str or None\n    :param type: Type of resource\n    \"\"\"\n\n    def __init__(self, name=None, type=None, flag=None):\n        PBSObject.__init__(self, name)\n        self.set_name(name)\n        self.set_type(type)\n        self.set_flag(flag)\n\n    def __del__(self):\n        del self.__dict__\n\n    def set_name(self, name):\n        \"\"\"\n        Set the resource name\n        \"\"\"\n        self.name = name\n        self.attributes['id'] = name\n\n    def set_type(self, type):\n        
\"\"\"\n        Set the resource type\n        \"\"\"\n        self.type = type\n        self.attributes['type'] = type\n\n    def set_flag(self, flag):\n        \"\"\"\n        Set the flag\n        \"\"\"\n        self.flag = flag\n        self.attributes['flag'] = flag\n\n    def __str__(self):\n        s = [self.attributes['id']]\n        if 'type' in self.attributes:\n            s.append('type=' + self.attributes['type'])\n        if 'flag' in self.attributes:\n            s.append('flag=' + self.attributes['flag'])\n        return \" \".join(s)\n\n\nclass Hook(PBSObject):\n\n    \"\"\"\n    PBS hook objects. Holds attributes information and pointer\n    to server\n\n    :param name: Hook name\n    :type name: str or None\n    :param attrs: Hook attributes\n    :type attrs: Dictionary\n    :param server: Pointer to server\n    \"\"\"\n\n    dflt_attributes = {}\n\n    def __init__(self, name=None, attrs={}, server=None):\n        PBSObject.__init__(self, name, attrs, self.dflt_attributes)\n        self.server = server\n\n\nclass Queue(PBSObject):\n\n    \"\"\"\n    PBS Queue container, holds attributes of the queue and\n    pointer to server\n\n    :param name: Queue name\n    :type name: str or None\n    :param attrs: Queue attributes\n    :type attrs: Dictionary\n    \"\"\"\n\n    dflt_attributes = {}\n\n    def __init__(self, name=None, attrs={}, server=None):\n        PBSObject.__init__(self, name, attrs, self.dflt_attributes)\n\n        self.server = server\n        m = ['queue']\n        if server is not None:\n            m += ['@' + server.shortname]\n        if self.name is not None:\n            m += [' ', self.name]\n        m += [': ']\n        self.logprefix = \"\".join(m)\n\n    def __del__(self):\n        del self.__dict__\n\n    def revert_to_defaults(self):\n        \"\"\"\n        reset queue attributes to defaults\n        \"\"\"\n\n        ignore_attrs = ['id', ATTR_count, ATTR_rescassn]\n        ignore_attrs += [ATTR_qtype, ATTR_enable, 
ATTR_start, ATTR_total]\n        ignore_attrs += ['THE_END']\n\n        len_attrs = len(ignore_attrs)\n        unsetlist = []\n        setdict = {}\n\n        self.logger.info(\n            self.logprefix +\n            \"reverting configuration to defaults\")\n        if self.server is not None:\n            self.server.status(QUEUE, id=self.name, level=logging.DEBUG)\n\n        for k in self.attributes.keys():\n            for i in range(len_attrs):\n                if k.startswith(ignore_attrs[i]):\n                    break\n            if (i == (len_attrs - 1)) and k not in self.dflt_attributes:\n                unsetlist.append(k)\n\n        if len(unsetlist) != 0 and self.server is not None:\n            try:\n                self.server.manager(MGR_CMD_UNSET, MGR_OBJ_QUEUE, unsetlist,\n                                    self.name)\n            except PbsManagerError as e:\n                self.logger.error(e.msg)\n\n        for k in self.dflt_attributes.keys():\n            if (k not in self.attributes or\n                    self.attributes[k] != self.dflt_attributes[k]):\n                setdict[k] = self.dflt_attributes[k]\n\n        if len(setdict.keys()) != 0 and self.server is not None:\n            self.server.manager(MGR_CMD_SET, MGR_OBJ_QUEUE, setdict)\n\n\nclass Entity(object):\n\n    \"\"\"\n    Abstract representation of a PBS consumer that has an\n    external relationship to the PBS system. 
For example, a\n    user associated with an OS identifier (uid) maps to a PBS\n    user entity.\n\n    Entities may be subject to policies such as limits, and may consume\n    a certain amount of resources and/or fairshare usage.\n\n    :param etype: Entity type\n    :type etype: str or None\n    :param name: Entity name\n    :type name: str or None\n    \"\"\"\n\n    def __init__(self, etype=None, name=None):\n        self.type = etype\n        self.name = name\n        self.limits = []\n        self.resource_usage = {}\n        self.fairshare_usage = 0\n\n    def set_limit(self, limit=None):\n        \"\"\"\n        Set a limit on the entity\n\n        :param limit: Limit to be set\n        :type limit: str or None\n        \"\"\"\n        for l in self.limits:\n            if str(limit) == str(l):\n                return\n        self.limits.append(limit)\n\n    def set_resource_usage(self, container=None, resource=None, usage=None):\n        \"\"\"\n        Set the resource usage\n\n        :param container: Container under which the usage is tracked\n        :type container: str or None\n        :param resource: PBS resource\n        :type resource: str or None\n        :param usage: Resource usage value\n        :type usage: str or None\n        \"\"\"\n        if self.type:\n            if container in self.resource_usage:\n                if resource in self.resource_usage[container]:\n                    self.resource_usage[container][resource] += usage\n                else:\n                    self.resource_usage[container][resource] = usage\n            else:\n                self.resource_usage[container] = {resource: usage}\n\n    def set_fairshare_usage(self, usage=0):\n        \"\"\"\n        Set fairshare usage\n\n        :param usage: Fairshare usage value\n        :type usage: int\n        \"\"\"\n        self.fairshare_usage += usage\n\n    def __repr__(self):\n        return self.__str__()\n\n    def __str__(self):\n        return str(self.limits) + ' ' + str(self.resource_usage) + ' ' + \\\n            
str(self.fairshare_usage)\n\n\nclass Policy(object):\n\n    \"\"\"\n    Abstract PBS policy. Can be one of ``limits``,\n    ``access control``, ``scheduling policy``, etc. This\n    class does not currently support any operations.\n    \"\"\"\n\n    def __init__(self):\n        pass\n\n\nclass Limit(Policy):\n\n    \"\"\"\n    Representation of a PBS limit.\n    Limits apply to containers, are of a certain type\n    (e.g., max_run_res.ncpus), are associated with a given\n    resource, apply to a given entity (e.g., user Bob), and\n    have a certain value.\n\n    :param limit_type: Type of the limit\n    :type limit_type: str or None\n    :param resource: PBS resource\n    :type resource: str or None\n    :param entity_obj: Entity object\n    :param value: Limit value\n    :type value: int\n    \"\"\"\n\n    def __init__(self, limit_type=None, resource=None,\n                 entity_obj=None, value=None, container=None,\n                 container_id=None):\n        self.set_container(container, container_id)\n        self.soft_limit = False\n        self.hard_limit = False\n        self.set_limit_type(limit_type)\n        self.set_resource(resource)\n        self.set_value(value)\n        self.entity = entity_obj\n\n    def set_container(self, container, container_id):\n        \"\"\"\n        Set the container\n\n        :param container: Container which is to be set\n        :type container: str\n        :param container_id: Container id\n        \"\"\"\n        self.container = container\n        self.container_id = container_id\n\n    def set_limit_type(self, t):\n        \"\"\"\n        Set the limit type\n\n        :param t: Limit type\n        :type t: str\n        \"\"\"\n        self.limit_type = t\n        if t is not None and '_soft' in t:\n            self.soft_limit = True\n        else:\n            self.hard_limit = True\n\n    def set_resource(self, resource):\n        \"\"\"\n        Set the resource\n\n        :param resource: resource value to set\n        :type 
resource: str\n        \"\"\"\n        self.resource = resource\n\n    def set_value(self, value):\n        \"\"\"\n        Set the resource value\n\n        :param value: Resource value\n        :type value: str\n        \"\"\"\n        self.value = value\n\n    def __eq__(self, value):\n        if str(self) == str(value):\n            return True\n        return False\n\n    def __str__(self):\n        return self.__repr__()\n\n    def __repr__(self):\n        limit_list = [self.container_id, self.limit_type, self.resource, '[',\n                      self.entity.type, ':', self.entity.name, '=',\n                      self.value, ']']\n        return \" \".join(map(str, limit_list))\n\n\nclass EquivClass(PBSObject):\n\n    \"\"\"\n    Equivalence class holds information on a collection of entities\n    grouped according to a set of attributes.\n\n    :param attributes: Dictionary of attributes\n    :type attributes: Dictionary\n    :param entities: List of entities\n    :type entities: List\n    \"\"\"\n\n    def __init__(self, name, attributes=None, entities=None):\n        self.name = name\n        self.attributes = attributes if attributes is not None else {}\n        self.entities = entities if entities is not None else []\n\n    def add_entity(self, entity):\n        \"\"\"\n        Add an entity\n\n        :param entity: Entity to add\n        :type entity: str\n        \"\"\"\n        if entity not in self.entities:\n            self.entities.append(entity)\n\n    def __str__(self):\n        s = [str(len(self.entities)), \":\", \":\".join(self.name)]\n        return \"\".join(s)\n\n    def show(self, showobj=False):\n        \"\"\"\n        Show the entities\n\n        :param showobj: If True then show the entities\n        :type showobj: bool\n        \"\"\"\n        s = \" && \".join(self.name) + ': '\n        if showobj:\n            s += str(self.entities)\n        else:\n            s += str(len(self.entities))\n        print(s)\n        return s\n\n\nclass Holidays():\n    \"\"\"\n    Descriptive class for the holidays file.\n    
\"\"\"\n\n    def __init__(self):\n        self.year = {'id': \"YEAR\", 'value': None, 'valid': False}\n        self.weekday = {'id': \"weekday\", 'p': None, 'np': None, 'valid': None,\n                        'position': None}\n        self.monday = {'id': \"monday\", 'p': None, 'np': None, 'valid': None,\n                       'position': None}\n        self.tuesday = {'id': \"tuesday\", 'p': None, 'np': None, 'valid': None,\n                        'position': None}\n        self.wednesday = {'id': \"wednesday\", 'p': None, 'np': None,\n                          'valid': None, 'position': None}\n        self.thursday = {'id': \"thursday\", 'p': None, 'np': None,\n                         'valid': None, 'position': None}\n        self.friday = {'id': \"friday\", 'p': None, 'np': None, 'valid': None,\n                       'position': None}\n        self.saturday = {'id': \"saturday\", 'p': None, 'np': None,\n                         'valid': None, 'position': None}\n        self.sunday = {'id': \"sunday\", 'p': None, 'np': None, 'valid': None,\n                       'position': None}\n\n        self.days_set = []  # list of set days\n        self._days_map = {'weekday': self.weekday, 'monday': self.monday,\n                          'tuesday': self.tuesday, 'wednesday': self.wednesday,\n                          'thursday': self.thursday, 'friday': self.friday,\n                          'saturday': self.saturday, 'sunday': self.sunday}\n        self.holidays = []  # list of calendar holidays\n\n    def __str__(self):\n        \"\"\"\n        Return the content to write to the holidays file as a string\n        \"\"\"\n        content = []\n        if self.year['valid']:\n            content.append(self.year['id'] + \"\\t\" +\n                           self.year['value'])\n\n        for day in self.days_set:\n            content.append(day['id'] + \"\\t\" + day['p'] + \"\\t\" + day['np'])\n\n        # Add calendar holidays\n        for day in self.holidays:\n            content.append(day)\n\n        return \"\\n\".join(content)\n"
  },
  {
    "path": "test/fw/ptl/lib/ptl_error.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nclass PtlException(Exception):\n\n    \"\"\"\n    Generic errors raised by PTL operations.\n    Sets a ``return value``, a ``return code``, and a ``message``.\n    A post function and associated positional and named arguments\n    are available to perform any necessary cleanup.\n\n    :param rv: Return value set for the error occurred during PTL\n               operation\n    :type rv: int or None.\n    :param rc: Return code set for the error occurred during PTL\n               operation\n    :type rc: int or None.\n    :param msg: Message set for the error occurred during PTL operation\n    :type msg: str or None.\n    :param post: Execute given post callable function if not None.\n    :type post: callable or None.\n    :raises: PTL exceptions\n    \"\"\"\n\n    def __init__(self, rv=None, rc=None, msg=None, post=None, *args, **kwargs):\n        self.rv = rv\n        self.rc = rc\n        self.msg = msg\n        if post is not None:\n            post(*args, **kwargs)\n\n    def __str__(self):\n        return ('rc=' + str(self.rc) + ', rv=' + str(self.rv) +\n                ', msg=' + str(self.msg))\n\n    def __repr__(self):\n        return (self.__class__.__name__ + '(rc=' + str(self.rc) + ', rv=' +\n                str(self.rv) + ', msg=' + str(self.msg) + ')')\n\n\nclass PtlFailureException(AssertionError):\n\n    \"\"\"\n    Generic failure exception 
raised by PTL operations.\n    Sets a ``return value``, a ``return code``, and a ``message``.\n    A post function and associated positional and named arguments\n    are available to perform any necessary cleanup.\n\n    :param rv: Return value set for the failure occurred during PTL\n               operation\n    :type rv: int or None.\n    :param rc: Return code set for the failure occurred during PTL\n               operation\n    :type rc: int or None.\n    :param msg: Message set for the failure occurred during PTL operation\n    :type msg: str or None.\n    :param post: Execute given post callable function if not None.\n    :type post: callable or None.\n    :raises: PTL exceptions\n    \"\"\"\n\n    def __init__(self, rv=None, rc=None, msg=None, post=None, *args, **kwargs):\n        self.rv = rv\n        self.rc = rc\n        self.msg = msg\n        if post is not None:\n            post(*args, **kwargs)\n\n    def __str__(self):\n        return ('rc=' + str(self.rc) + ', rv=' + str(self.rv) +\n                ', msg=' + str(self.msg))\n\n    def __repr__(self):\n        return (self.__class__.__name__ + '(rc=' + str(self.rc) + ', rv=' +\n                str(self.rv) + ', msg=' + str(self.msg) + ')')\n\n\nclass PbsServiceError(PtlException):\n    pass\n\n\nclass PbsConnectError(PtlException):\n    pass\n\n\nclass PbsStatusError(PtlException):\n    pass\n\n\nclass PbsSubmitError(PtlException):\n    pass\n\n\nclass PbsManagerError(PtlException):\n    pass\n\n\nclass PbsDeljobError(PtlException):\n    pass\n\n\nclass PbsDelresvError(PtlException):\n    pass\n\n\nclass PbsDeleteError(PtlException):\n    pass\n\n\nclass PbsRunError(PtlException):\n    pass\n\n\nclass PbsSignalError(PtlException):\n    pass\n\n\nclass PbsMessageError(PtlException):\n    pass\n\n\nclass PbsHoldError(PtlException):\n    pass\n\n\nclass PbsReleaseError(PtlException):\n    pass\n\n\nclass PbsOrderError(PtlException):\n    pass\n\n\nclass PbsRerunError(PtlException):\n    pass\n\n\nclass 
PbsMoveError(PtlException):\n    pass\n\n\nclass PbsAlterError(PtlException):\n    pass\n\n\nclass PbsResourceError(PtlException):\n    pass\n\n\nclass PbsSelectError(PtlException):\n    pass\n\n\nclass PbsSchedConfigError(PtlException):\n    pass\n\n\nclass PbsMomConfigError(PtlException):\n    pass\n\n\nclass PbsFairshareError(PtlException):\n    pass\n\n\nclass PbsQdisableError(PtlException):\n    pass\n\n\nclass PbsQenableError(PtlException):\n    pass\n\n\nclass PbsQstartError(PtlException):\n    pass\n\n\nclass PbsQstopError(PtlException):\n    pass\n\n\nclass PtlExpectError(PtlFailureException):\n    pass\n\n\nclass PbsInitServicesError(PtlException):\n    pass\n\n\nclass PbsQtermError(PtlException):\n    pass\n\n\nclass PtlLogMatchError(PtlFailureException):\n    pass\n\n\nclass PbsResvAlterError(PtlException):\n    pass\n"
  },
  {
    "path": "test/fw/ptl/lib/ptl_expect_action.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nimport logging\n\n\nclass ExpectActions(object):\n\n    \"\"\"\n    List of action handlers to run when Server's expect\n    function does not get the expected result\n\n    :param action: Action to run\n    :type action: str\n    :param level: Logging level\n    \"\"\"\n\n    actions = {}\n\n    def __init__(self, action=None, level=logging.INFO):\n        self.logger = logging.getLogger(__name__)\n        self.add_action(action, level=level)\n\n    def add_action(self, action=None, hostname=None, level=logging.INFO):\n        \"\"\"\n        Add an action\n\n        :param action: Action to add\n        :param hostname: Machine hostname\n        :type hostname: str\n        :param level: Logging level\n        \"\"\"\n        if action is not None and action.name is not None and\\\n           action.name not in self.actions:\n            self.actions[action.name] = action\n            msg = ['expect action: added action ' + action.name]\n            if hostname:\n                msg += [' to server ' + hostname]\n            if level >= logging.INFO:\n                self.logger.info(\"\".join(msg))\n            else:\n                self.logger.debug(\"\".join(msg))\n\n    def has_action(self, name):\n        \"\"\"\n        check whether action exists or not\n\n        :param name: Name of action\n        :type name: str\n        \"\"\"\n        if 
name in self.actions:\n            return True\n        return False\n\n    def get_action(self, name):\n        \"\"\"\n        Get an action if it exists\n\n        :param name: Name of action\n        :type name: str\n        \"\"\"\n        if name in self.actions:\n            return self.actions[name]\n        return None\n\n    def list_actions(self, level=logging.INFO):\n        \"\"\"\n        List all actions\n\n        :param level: Logging level\n        \"\"\"\n        if level >= logging.INFO:\n            self.logger.info(self.get_all_actions())\n        else:\n            self.logger.debug(self.get_all_actions())\n\n    def get_all_actions(self):\n        \"\"\"\n        Get all the actions\n        \"\"\"\n        return list(self.actions.values())\n\n    def get_actions_by_type(self, atype=None):\n        \"\"\"\n        Get all actions of a given type\n\n        :param atype: Action type\n        :type atype: str\n        \"\"\"\n        if atype is None:\n            return None\n\n        ret_actions = []\n        for action in self.actions.values():\n            if action.type is not None and action.type == atype:\n                ret_actions.append(action)\n        return ret_actions\n\n    def _control_action(self, action=None, name=None, enable=None):\n        if action:\n            action.enabled = enable\n            name = action.name\n        elif name is not None:\n            if name == 'ALL':\n                for a in self.actions.values():\n                    a.enabled = enable\n            else:\n                a = self.get_action(name)\n                a.enabled = enable\n        else:\n            return\n\n        if enable:\n            msg = 'enabled'\n        else:\n            msg = 'disabled'\n\n        self.logger.info('expect action: ' + name + ' ' + msg)\n\n    def disable_action(self, action=None, name=None):\n        \"\"\"\n        Disable an action\n        \"\"\"\n        self._control_action(action, name, enable=False)\n\n    def 
enable_action(self, action=None, name=None):\n        \"\"\"\n        Enable an action\n        \"\"\"\n        self._control_action(action, name, enable=True)\n\n    def disable_all_actions(self):\n        \"\"\"\n        Disable all actions\n        \"\"\"\n        for a in self.actions.values():\n            a.enabled = False\n\n    def enable_all_actions(self):\n        \"\"\"\n        Enable all actions\n        \"\"\"\n        for a in self.actions.values():\n            a.enabled = True\n\n\nclass ExpectAction(object):\n\n    \"\"\"\n    Action function to run when Server's expect function does\n    not get the expected result\n\n    :param atype: Action type\n    :type atype: str\n    \"\"\"\n\n    def __init__(self, name=None, enabled=True, atype=None, action=None,\n                 level=logging.INFO):\n        self.logger = logging.getLogger(__name__)\n        self.set_name(name, level=level)\n        self.set_enabled(enabled)\n        self.set_type(atype)\n        self.set_action(action)\n\n    def set_name(self, name, level=logging.INFO):\n        \"\"\"\n        Set the action name\n\n        :param name: Action name\n        :type name: str\n        \"\"\"\n        if level >= logging.INFO:\n            self.logger.info('expect action: created new action ' + name)\n        else:\n            self.logger.debug('expect action: created new action ' + name)\n        self.name = name\n\n    def set_enabled(self, enabled):\n        self.enabled = enabled\n\n    def set_type(self, atype):\n        self.type = atype\n\n    def set_action(self, action):\n        self.action = action\n"
  },
  {
    "path": "test/fw/ptl/lib/ptl_fairshare.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport copy\nimport logging\nimport os\nimport pwd\nimport re\nimport grp\nimport sys\nimport time\n\nfrom ptl.utils.pbs_dshutils import DshUtils\nfrom ptl.utils.pbs_testusers import ROOT_USER, PbsUser\nfrom ptl.lib.ptl_error import PbsFairshareError\n\n\nclass FairshareTree(object):\n\n    \"\"\"\n    Object representation of the Scheduler's resource_group\n    file and pbsfs data\n\n    :param hostname: Hostname of the machine\n    :type hostname: str\n    \"\"\"\n    du = DshUtils()\n\n    def __init__(self, hostname=None, resource_group=None):\n        self.logger = logging.getLogger(__name__)\n        self.hostname = hostname\n        self.resource_group = resource_group\n        self.nodes = {}\n        self.root = None\n        self._next_id = -1\n\n    def update_resource_group(self):\n        if self.resource_group:\n            fn = self.du.create_temp_file(body=self.__str__())\n            ret = self.du.run_copy(self.hostname, src=fn,\n                                   dest=self.resource_group,\n                                   preserve_permission=False, sudo=True)\n            os.remove(fn)\n\n            if ret['rc'] != 0:\n                raise PbsFairshareError(rc=1, rv=False,\n                                        msg='error updating resource group')\n        return True\n\n    def update(self):\n        for node in 
self.nodes.values():\n            if node._parent is None:\n                pnode = self.get_node(id=node.parent_id)\n                if pnode:\n                    node._parent = pnode\n                    if node not in pnode._child:\n                        pnode._child.append(node)\n\n    def _add_node(self, node):\n        if node.name == 'TREEROOT' or node.name == 'root':\n            self.root = node\n        self.nodes[node.name] = node\n        if node.parent_name in self.nodes:\n            self.nodes[node.parent_name]._child.append(node)\n            node._parent = self.nodes[node.parent_name]\n\n    def add_node(self, node, apply=True):\n        \"\"\"\n        Add a node to the fairshare tree\n        \"\"\"\n        self._add_node(node)\n        if apply:\n            return self.update_resource_group()\n        return True\n\n    def create_node(self, name, id, parent_name, nshares):\n        \"\"\"\n        Add an entry to the ``resource_group`` file\n\n        :param name: The name of the entity to add\n        :type name: str\n        :param id: The unique numeric identifier of the entity\n        :type id: int\n        :param parent_name: The name of the parent/group of the entity\n        :type parent_name: str\n        :param nshares: The number of shares assigned to this entity\n        :type nshares: int\n        :returns: True on success, False otherwise\n        \"\"\"\n        if name in self.nodes:\n            self.logger.warning('fairshare: node ' + name + ' already defined')\n            return True\n        self.logger.info('creating tree node: ' + name)\n\n        node = FairshareNode(name, id, parent_name=parent_name,\n                             nshares=nshares)\n        self._add_node(node)\n        return self.update_resource_group()\n\n    def get_node(self, name=None, id=None):\n        \"\"\"\n        Return a node of the fairshare tree identified by either\n        name or id.\n\n        :param name: The name of the entity to query\n     
   :type name: str or None\n        :param id: The id of the entity to query\n        :returns: The fairshare information of the entity when\n                  found, if not, returns None\n\n        .. note:: The name takes precedence over the id.\n        \"\"\"\n        for node in self.nodes.values():\n            if name is not None and node.name == name:\n                return node\n            if id is not None and node.id == id:\n                return node\n        return None\n\n    def __batch_status__(self):\n        \"\"\"\n        Convert fairshare tree object to a batch status format\n        \"\"\"\n        dat = []\n        for node in self.nodes.values():\n            if node.name == 'root':\n                continue\n            einfo = {}\n            einfo['cgroup'] = node.id\n            einfo['id'] = node.name\n            einfo['group'] = node.parent_id\n            einfo['nshares'] = node.nshares\n            if len(node.prio) > 0:\n                p = []\n                for k, v in node.prio.items():\n                    p += [\"%s:%d\" % (k, int(v))]\n                einfo['penalty'] = \", \".join(p)\n            einfo['usage'] = node.usage\n            if node.perc:\n                p = []\n                for k, v in node.perc.items():\n                    p += [\"%s:%.3f\" % (k, float(v))]\n                einfo['shares_perc'] = \", \".join(p)\n            ppnode = self.get_node(id=node.parent_id)\n            if ppnode:\n                ppname = ppnode.name\n                ppid = ppnode.id\n            else:\n                ppnode = self.get_node(name=node.parent_name)\n                if ppnode:\n                    ppname = ppnode.name\n                    ppid = ppnode.id\n                else:\n                    ppname = ''\n                    ppid = None\n            einfo['parent'] = \"%s (%s) \" % (str(ppid), ppname)\n            dat.append(einfo)\n        return dat\n\n    def get_next_id(self):\n        
self._next_id -= 1\n        return self._next_id\n\n    def __repr__(self):\n        return self.__str__()\n\n    def _dfs(self, node, dat):\n        if node.name != 'root':\n            s = []\n            if node.name is not None:\n                s += [node.name]\n            if node.id is not None:\n                s += [str(node.id)]\n            if node.parent_name is not None:\n                s += [node.parent_name]\n            if node.nshares is not None:\n                s += [str(node.nshares)]\n            if node.usage is not None:\n                s += [str(node.usage)]\n            dat.append(\"\\t\".join(s))\n        for n in node._child:\n            self._dfs(n, dat)\n\n    def __str__(self):\n        dat = []\n        if self.root:\n            self._dfs(self.root, dat)\n        if len(dat) > 0:\n            dat += ['\\n']\n        return \"\\n\".join(dat)\n\n\nclass FairshareNode(object):\n\n    \"\"\"\n    Object representation of the fairshare data as queryable through\n    the command ``pbsfs``.\n\n    :param name: Name of fairshare node\n    :type name: str or None\n    :param nshares: Number of shares\n    :type nshares: int or None\n    :param usage: Fairshare usage\n    :param perc: Percentage the entity has of the tree\n    \"\"\"\n\n    def __init__(self, name=None, id=None, parent_name=None, parent_id=None,\n                 nshares=None, usage=None, perc=None):\n        self.name = name\n        self.id = id\n        self.parent_name = parent_name\n        self.parent_id = parent_id\n        self.nshares = nshares\n        self.usage = usage\n        self.perc = perc\n        self.prio = {}\n        self._parent = None\n        self._child = []\n\n    def __str__(self):\n        ret = []\n        if self.name is not None:\n            ret.append(self.name)\n        if self.id is not None:\n            ret.append(str(self.id))\n        if self.parent_name is not None:\n            ret.append(str(self.parent_name))\n        if 
self.nshares is not None:\n            ret.append(str(self.nshares))\n        if self.usage is not None:\n            ret.append(str(self.usage))\n        if self.perc is not None:\n            ret.append(str(self.perc))\n        return \"\\t\".join(ret)\n\n\nclass Fairshare(object):\n    du = DshUtils()\n    logger = logging.getLogger(__name__)\n    fs_re = r'(?P<name>[\\S]+)[\\s]*:[\\s]*Grp:[\\s]*(?P<Grp>[-]*[0-9]*)' + \\\n            r'[\\s]*cgrp:[\\s]*(?P<cgrp>[-]*[0-9]*)[\\s]*' + \\\n            r'Shares:[\\s]*(?P<Shares>[-]*[0-9]*)[\\s]*Usage:[\\s]*' + \\\n            r'(?P<Usage>[0-9]+)[\\s]*Perc:[\\s]*(?P<Perc>.*)%'\n    fs_tag = re.compile(fs_re)\n\n    def __init__(self, has_snap=None, pbs_conf={}, sc_name=None,\n                 hostname=None, user=None):\n        self.has_snap = has_snap\n        self.pbs_conf = pbs_conf\n        self.sc_name = sc_name\n        self.hostname = hostname\n        self.user = user\n        _m = ['fairshare']\n        if self.sc_name is not None:\n            _m += ['-', str(self.sc_name)]\n        if self.user is not None:\n            _m += ['-', str(self.user)]\n        _m += [':']\n        self.logprefix = \"\".join(_m)\n\n    def revert_fairshare(self):\n        \"\"\"\n        Helper method to revert scheduler's fairshare tree.\n        \"\"\"\n        cmd = [os.path.join(self.pbs_conf['PBS_EXEC'], 'sbin', 'pbsfs'), '-e']\n        if self.sc_name != 'default':\n            cmd += ['-I', self.sc_name]\n        self.du.run_cmd(self.hostname, cmd=cmd, runas=self.user)\n\n    def query_fairshare(self, name=None, id=None):\n        \"\"\"\n        Parse fairshare data using ``pbsfs`` and populate the\n        fairshare tree. If name or id is specified, return the data\n        associated with that entity. Otherwise return the entire\n        fairshare tree.\n        \"\"\"\n        if self.has_snap:\n            return None\n\n        tree = FairshareTree()\n        cmd = [os.path.join(self.pbs_conf['PBS_EXEC'], 'sbin', 
'pbsfs')]\n        if self.sc_name != 'default':\n            cmd += ['-I', self.sc_name]\n\n        ret = self.du.run_cmd(self.hostname, cmd=cmd,\n                              sudo=True, logerr=False)\n\n        if ret['rc'] != 0:\n            raise PbsFairshareError(rc=ret['rc'], rv=None,\n                                    msg=str(ret['err']))\n        pbsfs = ret['out']\n        for p in pbsfs:\n            m = self.fs_tag.match(p)\n            if m:\n                usage = int(m.group('Usage'))\n                perc = float(m.group('Perc'))\n                nm = m.group('name')\n                cgrp = int(m.group('cgrp'))\n                pid = int(m.group('Grp'))\n                nd = tree.get_node(id=pid)\n                if nd:\n                    pname = nd.parent_name\n                else:\n                    pname = None\n                # if an entity has a negative cgroup it should belong\n                # to the unknown resource, we work around the fact that\n                # PBS (up to 13.0) sets this cgroup id to -1 by\n                # reassigning it to 0\n                # TODO: cleanup once PBS code is updated\n                if cgrp < 0:\n                    cgrp = 0\n                node = FairshareNode(name=nm,\n                                     id=cgrp,\n                                     parent_id=pid,\n                                     parent_name=pname,\n                                     nshares=int(m.group('Shares')),\n                                     usage=usage,\n                                     perc={'TREEROOT': perc})\n                if perc:\n                    node.prio['TREEROOT'] = float(usage) / perc\n                if nm == name or id == cgrp:\n                    return node\n\n                tree.add_node(node, apply=False)\n        # now that all nodes are known, update parent and child\n        # relationship of the tree\n        tree.update()\n\n        for node in tree.nodes.values():\n     
       pnode = node._parent\n            while pnode is not None and pnode.id != 0:\n                if pnode.perc['TREEROOT']:\n                    node.perc[pnode.name] = \\\n                        (node.perc['TREEROOT'] * 100 / pnode.perc[\n                         'TREEROOT'])\n                if pnode.name in node.perc and node.perc[pnode.name]:\n                    node.prio[pnode.name] = (\n                        node.usage / node.perc[pnode.name])\n                pnode = pnode._parent\n\n        if name:\n            n = tree.get_node(name)\n            if n is None:\n                raise PbsFairshareError(rc=1, rv=None,\n                                        msg='Unknown entity ' + name)\n            return n\n        if id:\n            n = tree.get_node(id=id)\n            if n is None:\n                raise PbsFairshareError(rc=1, rv=None,\n                                        msg='Unknown entity ' + str(id))\n            return n\n        return tree\n\n    def set_fairshare_usage(self, name=None, usage=None):\n        \"\"\"\n        Set the fairshare usage associated to a given entity.\n\n        :param name: The entity to set the fairshare usage of\n        :type name: str or :py:class:`~ptl.lib.pbs_testlib.PbsUser` or None\n        :param usage: The usage value to set\n        \"\"\"\n        if self.has_snap:\n            return True\n\n        if name is None:\n            self.logger.error(self.logprefix + ' an entity name required')\n            return False\n\n        if isinstance(name, PbsUser):\n            name = str(name)\n\n        if usage is None:\n            self.logger.error(self.logprefix + ' a usage is required')\n            return False\n\n        cmd = [os.path.join(self.pbs_conf['PBS_EXEC'], 'sbin', 'pbsfs')]\n        if self.sc_name != 'default':\n            cmd += ['-I', self.sc_name]\n        cmd += ['-s', name, str(usage)]\n        ret = self.du.run_cmd(self.hostname, cmd, runas=self.user)\n        if ret['rc'] == 0:\n            return True\n    
    return False\n\n    def cmp_fairshare_entities(self, name1=None, name2=None):\n        \"\"\"\n        Compare two fairshare entities. Wrapper of ``pbsfs -c e1 e2``\n\n        :param name1: name of first entity to compare\n        :type name1: str or :py:class:`~ptl.lib.pbs_testlib.PbsUser` or None\n        :param name2: name of second entity to compare\n        :type name2: str or :py:class:`~ptl.lib.pbs_testlib.PbsUser` or None\n        :returns: the name of the entity of higher priority or None on error\n        \"\"\"\n        if self.has_snap:\n            return None\n\n        if name1 is None or name2 is None:\n            self.logger.error(self.logprefix + 'two fairshare entity names ' +\n                              'required')\n            return None\n\n        if isinstance(name1, PbsUser):\n            name1 = str(name1)\n\n        if isinstance(name2, PbsUser):\n            name2 = str(name2)\n\n        cmd = [os.path.join(self.pbs_conf['PBS_EXEC'], 'sbin', 'pbsfs')]\n        if self.sc_name != 'default':\n            cmd += ['-I', self.sc_name]\n        cmd += ['-c', name1, name2]\n        ret = self.du.run_cmd(self.hostname, cmd, runas=self.user)\n        if ret['rc'] == 0:\n            return ret['out'][0]\n        return None\n"
  },
  {
    "path": "test/fw/ptl/lib/ptl_mom.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport ast\nimport base64\nimport copy\nimport datetime\nimport json\nimport logging\nimport os\nimport pwd\nimport re\nimport socket\nimport string\nimport sys\nimport tempfile\nimport time\nimport pkg_resources\n\n\nfrom ptl.utils.pbs_testusers import (ROOT_USER, TEST_USER, PbsUser,\n                                     DAEMON_SERVICE_USER)\n\ntry:\n    from nose.plugins.skip import SkipTest\nexcept ImportError:\n    class SkipTest(Exception):\n        pass\n\nfrom ptl.lib.ptl_error import (PtlExpectError, PbsServiceError,\n                               PbsInitServicesError, PtlLogMatchError,\n                               PbsStatusError, PbsManagerError,\n                               PbsMomConfigError)\nfrom ptl.lib.ptl_constants import (MGR_CMD_DELETE, MGR_OBJ_NODE,\n                                   MGR_CMD_CREATE, MGR_CMD_IMPORT,\n                                   MGR_CMD_SET, ATTR_rescavail,\n                                   NODE, VNODE, HOOK, HOST, MATCH_RE)\nfrom ptl.lib.ptl_service import PBSService, PBSInitServices\n\n\ndef get_mom_obj(server, name=None, attrs={}, pbsconf_file=None,\n                snapmap={}, snap=None, db_access=None,\n                pbs_conf=None, platform=None):\n    return MoM(server, name, attrs, pbsconf_file, snapmap,\n               snap, db_access, pbs_conf, platform)\n\n\nclass MoM(PBSService):\n\n    
\"\"\"\n    Container for MoM properties.\n    Provides various MoM operations, such as creation, insertion,\n    deletion of vnodes.\n\n    :param name: The hostname of the server. Defaults to calling\n                 pbs_default()\n    :type name: str or None\n    :param attrs: Dictionary of attributes to set, these will\n                  override defaults.\n    :type attrs: Dictionary\n    :param pbsconf_file: path to config file to parse for\n                         ``PBS_HOME``, ``PBS_EXEC``, etc\n    :type pbsconf_file: str or None\n    :param snapmap: A dictionary of PBS objects ``(node,server,etc)``\n                    to mapped files from PBS snap directory\n    :type snapmap: Dictionary\n    :param snap: path to PBS snap directory (This overrides\n                 snapmap)\n    :type snap: str or None\n    :param server: A PBS server instance to which this mom is associated\n    :param db_access: set to either file containing credentials to DB\n                      access or dictionary containing\n                      {'dbname':...,'user':...,'port':...}\n    :type db_access: str or dictionary\n    :param pbs_conf: Parsed pbs.conf in dictionary format\n    :type pbs_conf: Dictionary or None\n    :param platform: Mom's platform\n    :type platform: str or None\n    \"\"\"\n    dflt_attributes = {}\n    conf_to_cmd_map = {'PBS_MOM_SERVICE_PORT': '-M',\n                       'PBS_MANAGER_SERVICE_PORT': '-R',\n                       'PBS_HOME': '-d'}\n\n    def __init__(self, server, name=None, attrs={}, pbsconf_file=None,\n                 snapmap={}, snap=None, db_access=None, pbs_conf=None,\n                 platform=None):\n        self.server = server\n        if snap is None and self.server.snap is not None:\n            snap = self.server.snap\n        if (len(snapmap) == 0) and (len(self.server.snapmap) != 0):\n            snapmap = self.server.snapmap\n\n        super().__init__(name, attrs, self.dflt_attributes,\n                      
   pbsconf_file, snap=snap, snapmap=snapmap,\n                         pbs_conf=pbs_conf, platform=platform)\n        _m = ['mom ', self.shortname]\n        if pbsconf_file is not None:\n            _m += ['@', pbsconf_file]\n        _m += [': ']\n        self.logprefix = \"\".join(_m)\n        self.pi = PBSInitServices(hostname=self.hostname,\n                                  conf=self.pbs_conf_file)\n        self.configd = os.path.join(self.pbs_conf['PBS_HOME'], 'mom_priv',\n                                    'config.d')\n        self.config = {}\n        if self.platform == 'cray' or self.platform == 'craysim':\n            usecp = os.path.realpath('/home')\n            if self.platform == 'cray':\n                if os.path.exists('/opt/cray/alps/default/bin/apbasil'):\n                    alps_client = '/opt/cray/alps/default/bin/apbasil'\n                else:\n                    alps_client = self.du.which(exe='apbasil')\n            else:\n                alps_client = \"/opt/alps/apbasil.sh\"\n            self.dflt_config = {'$vnodedef_additive': 0,\n                                '$alps_client': alps_client,\n                                '$usecp': '*:%s %s' % (usecp, usecp)}\n        elif self.platform == 'shasta':\n            usecp = os.path.realpath('/lus')\n            self.dflt_config = {'$usecp': '*:%s %s' % (usecp, usecp)}\n        else:\n            self.dflt_config = {}\n        self._is_cpuset_mom = None\n\n        # If this is true, the mom will revert to default.\n        # This is true by default, but can be set to False if\n        # required by a test\n        self.revert_to_default = True\n        pbs_conf = self.du.parse_pbs_config()\n        self.sleep_cmd = os.path.join(pbs_conf['PBS_EXEC'], 'bin', 'pbs_sleep')\n        if not os.path.isfile(self.sleep_cmd):\n            self.sleep_cmd = '/bin/sleep'\n\n    def get_formed_path(self, *argv):\n        \"\"\"\n        :param argv: argument variables\n        :type argv: str\n        
:returns: A string of formed path\n        \"\"\"\n\n        if len(argv) == 0:\n            return None\n        return os.path.join(*argv)\n\n    def rm(self, path=None, sudo=False, runas=None, recursive=False,\n           force=False, logerr=True, as_script=False):\n        \"\"\"\n        :param path: the path to the files or directories to remove;\n                     for more than one file or directory, pass a\n                     list\n        :type path: str or None\n        :param sudo: whether to remove files or directories as root\n                     or not. Defaults to False\n        :type sudo: boolean\n        :param runas: remove files or directories as given user.\n                      Defaults to calling user\n        :param recursive: remove files or directories and their\n                          contents recursively\n        :type recursive: boolean\n        :param force: force remove files or directories\n        :type force: boolean\n        :param logerr: whether to log error messages or not.\n                       Defaults to True.\n        :type logerr: boolean\n        :param as_script: if True, run the rm in a script created\n                          as a temporary file that gets deleted after\n                          being run. This is used mainly to handle\n                          wildcard in path list. 
Defaults to False.\n        :type as_script: boolean\n        \"\"\"\n        return self.du.rm(hostname=self.hostname, path=path, sudo=sudo,\n                          runas=runas, recursive=recursive, force=force,\n                          logerr=logerr, as_script=as_script)\n\n    def listdir(self, path=None, sudo=False, runas=None, fullpath=True):\n        \"\"\"\n        :param path: The path to directory to list\n        :type path: str or None\n        :param sudo: Whether to list directory as root or not.\n                     Defaults to False\n        :type sudo: bool\n        :param runas: run command as user\n        :type runas: str or None\n        :param fullpath: Whether to return full path of contents.\n        :type fullpath: bool\n        :returns: A list containing the names of the entries in\n                  the directory or an empty list in case no files exist\n        \"\"\"\n        return self.du.listdir(self.hostname, path, sudo, runas, fullpath)\n\n    def isfile(self, path=None, sudo=False, runas=None):\n        \"\"\"\n        :param path: The path to the file to check\n        :type path: str or None\n        :param sudo: Whether to run the command as a privileged user\n        :type sudo: boolean\n        :param runas: run command as user\n        :type runas: str or None\n        :returns: True if file pointed to by path exists, and False\n                  otherwise\n        \"\"\"\n        return self.du.isfile(self.hostname, path, sudo, runas)\n\n    def create_and_format_stagein_path(self, storage_info={}, asuser=None):\n        \"\"\"\n        Return the formatted stagein path\n        :param storage_info: The name of the process to query.\n        :type storage_info: Dictionary\n        :param asuser: Optional username of temp file owner\n        :type asuser: str or None\n        :returns: Formatted stageout path\n        \"\"\"\n        if 'hostname' in storage_info:\n            storage_host = storage_info['hostname']\n   
     else:\n            storage_host = self.server.hostname\n\n        if 'suffix' in storage_info:\n            storage_file_suffix = storage_info['suffix']\n        else:\n            storage_file_suffix = None\n\n        if 'prefix' in storage_info:\n            storage_file_prefix = storage_info['prefix']\n        else:\n            storage_file_prefix = 'PtlPbs'\n        storage_path = self.du.create_temp_file(storage_host,\n                                                storage_file_suffix,\n                                                storage_file_prefix,\n                                                asuser=asuser)\n\n        execution_path = self.du.create_temp_file(self.hostname, asuser=asuser)\n\n        path = '%s@%s:%s' % (execution_path, storage_host, storage_path)\n        return path\n\n    def create_and_format_stageout_path(self, execution_info={},\n                                        storage_info={},\n                                        asuser=None):\n        \"\"\"\n        Return the formatted stageout path\n        :param execution_info: Contains execution host information\n        :type execution_info: Dictionary\n        :param storage_info: The name of the process to query.\n        :type storage_info: Dictionary\n        :param asuser: Optional username of temp file owner\n        :type asuser: str or None\n        :returns: Formatted stageout path\n        \"\"\"\n        if 'hostname' in storage_info:\n            storage_host = storage_info['hostname']\n        else:\n            storage_host = self.server.hostname\n\n        if 'hostname' in execution_info:\n            execution_host = execution_info['hostname']\n        else:\n            execution_host = self.hostname\n\n        if 'suffix' in execution_info:\n            execution_file_suffix = execution_info['suffix']\n        else:\n            execution_file_suffix = None\n\n        if 'prefix' in execution_info:\n            execution_file_prefix = 
execution_info['prefix']\n        else:\n            execution_file_prefix = 'PtlPbs'\n\n        # create temp file on execution host which will be staged out\n        execution_path = self.du.create_temp_file(execution_host,\n                                                  execution_file_suffix,\n                                                  execution_file_prefix,\n                                                  asuser=asuser)\n\n        # Path where file will be staged out after job completion\n        storage_path = tempfile.gettempdir()\n        path = '%s@%s:%s' % (execution_path, storage_host, storage_path)\n        return path\n\n    def printjob(self, job_id=None):\n        \"\"\"\n        Run the printjob command for the given job id\n        :param job_id: job's id for which to run printjob cmd\n        :type job_id: string\n        \"\"\"\n\n        if job_id is None:\n            return None\n\n        printjob = os.path.join(self.pbs_conf['PBS_EXEC'], 'bin',\n                                'printjob')\n        jbfile = os.path.join(self.pbs_conf['PBS_HOME'], 'mom_priv',\n                              'jobs', job_id + '.JB')\n        ret = self.du.run_cmd(self.hostname, cmd=[printjob, jbfile],\n                              sudo=True)\n        return ret\n\n    def is_proc_suspended(self, pid=None):\n        \"\"\"\n        Check if given process is in suspended state or not\n        :param pid: process ID\n        :type pid: string\n        \"\"\"\n\n        if pid is None:\n            self.logger.error(\"Could not get pid to check the state\")\n            return False\n        state = 'T'\n        rv = self.pu.get_proc_state(self.hostname, pid)\n        if rv != state:\n            return False\n        childlist = self.pu.get_proc_children(self.hostname, pid)\n        for child in childlist:\n            rv = self.pu.get_proc_state(self.hostname, child)\n            if rv != state:\n                return False\n        return True\n\n    
def isUp(self, max_attempts=None):\n        \"\"\"\n        Check for PBS mom up\n        \"\"\"\n        # Poll for few seconds to see if mom is up and node is free\n        if max_attempts is None:\n            max_attempts = self.ptl_conf['max_attempts']\n        for _ in range(max_attempts):\n            rv = super(MoM, self)._isUp()\n            if rv:\n                break\n            time.sleep(1)\n        if rv:\n            try:\n                nodes = self.server.status(NODE, id=self.shortname)\n                if nodes:\n                    attr = {'state': (MATCH_RE,\n                                      'free|provisioning|offline|job-busy')}\n                    self.server.expect(NODE, attr, id=self.shortname)\n            # Ignore PbsStatusError if mom daemon is up but there aren't\n            # any mom nodes\n            except PbsStatusError:\n                pass\n            except PtlExpectError:\n                rv = False\n        return rv\n\n    def start(self, args=None, launcher=None):\n        \"\"\"\n        Start the PBS mom\n\n        :param args: Arguments to start the mom\n        :type args: str or None\n        :param launcher: Optional utility to invoke the launch of the service\n        :type launcher: str or list or None\n        \"\"\"\n        if args is not None or launcher is not None:\n            return super(MoM, self)._start(inst=self, args=args,\n                                           cmd_map=self.conf_to_cmd_map,\n                                           launcher=launcher)\n        else:\n            try:\n                rv = self.pi.start_mom()\n                pid = self._validate_pid(self)\n                if pid is None:\n                    raise PbsServiceError(rv=False, rc=-1,\n                                          msg=\"Could not find PID\")\n            except PbsInitServicesError as e:\n                raise PbsServiceError(rc=e.rc, rv=e.rv, msg=e.msg)\n            return rv\n\n    def 
stop(self, sig=None):\n        \"\"\"\n        Stop the PBS mom\n\n        :param sig: Signal to stop the PBS mom\n        :type sig: str\n        \"\"\"\n        if sig is not None:\n            self.logger.info(self.logprefix + 'stopping MoM on host ' +\n                             self.hostname)\n            return super(MoM, self)._stop(sig, inst=self)\n        else:\n            try:\n                self.pi.stop_mom()\n            except PbsInitServicesError as e:\n                raise PbsServiceError(rc=e.rc, rv=e.rv, msg=e.msg)\n            return True\n\n    def restart(self, args=None):\n        \"\"\"\n        Restart the PBS mom\n        \"\"\"\n        if self.isUp():\n            if not self.stop():\n                return False\n        return self.start(args=args)\n\n    def log_match(self, msg=None, id=None, n=50, tail=True, allmatch=False,\n                  regexp=False, max_attempts=None, interval=None,\n                  starttime=None, endtime=None, level=logging.INFO,\n                  existence=True):\n        \"\"\"\n        Match given ``msg`` in given ``n`` lines of MoM log\n\n        :param msg: log message to match, can be regex also when\n                    ``regexp`` is True\n        :type msg: str\n        :param id: The id of the object to trace. Only used for\n                   tracejob\n        :type id: str\n        :param n: 'ALL' or the number of lines to search through,\n                  defaults to 50\n        :type n: str or int\n        :param tail: If true (default), starts from the end of\n                     the file\n        :type tail: bool\n        :param allmatch: If True all matching lines out of then\n                         parsed are returned as a list. 
Defaults\n                         to False\n        :type allmatch: bool\n        :param regexp: If true msg is a Python regular expression.\n                       Defaults to False\n        :type regexp: bool\n        :param max_attempts: the number of attempts to make to find\n                             a matching entry\n        :type max_attempts: int\n        :param interval: the interval between attempts\n        :type interval: int\n        :param starttime: If set ignore matches that occur before\n                          specified time\n        :type starttime: float\n        :param endtime: If set ignore matches that occur after\n                        specified time\n        :type endtime: float\n        :param level: The logging level, defaults to INFO\n        :type level: int\n        :param existence: If True (default), check for existence of\n                        given msg, else check for non-existence of\n                        given msg.\n        :type existence: bool\n\n        :return: (x,y) where x is the matching line\n                 number and y the line itself. If allmatch is True,\n                 a list of tuples is returned.\n        :rtype: tuple\n        :raises PtlLogMatchError:\n                When ``existence`` is True and given\n                ``msg`` is not found in ``n`` line\n                Or\n                When ``existence`` is False and given\n                ``msg`` found in ``n`` line.\n\n        .. 
note:: The matching line number is relative to the record\n                  number, not the absolute line number in the file.\n        \"\"\"\n        return self._log_match(self, msg, id, n, tail, allmatch, regexp,\n                               max_attempts, interval, starttime, endtime,\n                               level, existence)\n\n    def delete_vnodes(self):\n        rah = ATTR_rescavail + '.host'\n        rav = ATTR_rescavail + '.vnode'\n        a = {rah: self.hostname, rav: None}\n        try:\n            _vs = self.server.status(HOST, a, id=self.hostname)\n        except PbsStatusError:\n            try:\n                _vs = self.server.status(HOST, a, id=self.shortname)\n            except PbsStatusError as e:\n                err_msg = e.msg[0].rstrip()\n                if (err_msg.endswith('Server has no node list') or\n                        err_msg.endswith('Unknown node')):\n                    _vs = []\n                else:\n                    raise e\n        vs = []\n        for v in _vs:\n            if v[rav].split('.')[0] != v[rah].split('.')[0]:\n                vs.append(v['id'])\n        if len(vs) > 0:\n            self.server.manager(MGR_CMD_DELETE, VNODE, id=vs)\n\n    def revert_to_defaults(self, delvnodedefs=True):\n        \"\"\"\n        1. ``Revert MoM configuration to defaults.``\n\n        2. ``Remove epilogue and prologue``\n\n        3. 
``Delete all vnode definitions``\n\n        4. ``HUP MoM``\n\n        :param delvnodedefs: if True (the default) delete all vnode\n                             definitions and restart the MoM\n        :type delvnodedefs: bool\n\n        :returns: True on success and False otherwise\n        \"\"\"\n        self.logger.info(self.logprefix +\n                         'reverting configuration to defaults')\n        restart = False\n        if not self.has_snap:\n            self.delete_pelog()\n            if delvnodedefs and self.has_vnode_defs():\n                restart = True\n                if not self.delete_vnode_defs():\n                    return False\n                self.delete_vnodes()\n            if not (self.config == self.dflt_config):\n                # Clear older mom configuration. Apply default.\n                self.config = {}\n                self.apply_config(self.dflt_config, hup=False, restart=True)\n            if restart:\n                self.restart()\n            else:\n                self.signal('-HUP')\n            return self.isUp()\n        return True\n\n    def _get_dflt_pbsconfval(self, conf, svr_hostname, hosttype, hostobj):\n        \"\"\"\n        Helper function to revert_pbsconf, tries to determine and return\n        default value for the pbs.conf variable given\n        :param conf: the pbs.conf variable\n        :type conf: str\n        :param svr_hostname: hostname of the server host\n        :type svr_hostname: str\n        :param hosttype: type of host being reverted\n        :type hosttype: str\n        :param hostobj: PTL object associated with the host\n        :type hostobj: PBSService\n        :returns: default value of the pbs.conf variable if it can be determined\n        as a string, otherwise None\n        \"\"\"\n        if conf == \"PBS_SERVER\":\n            return svr_hostname\n        elif conf == \"PBS_START_SCHED\":\n            return \"0\"\n        elif conf == \"PBS_START_COMM\":\n            return \"0\"\n   
     elif conf == \"PBS_START_SERVER\":\n            return \"0\"\n        elif conf == \"PBS_START_MOM\":\n            return \"1\"\n        elif conf == \"PBS_CORE_LIMIT\":\n            return \"unlimited\"\n        elif conf == \"PBS_SCP\":\n            scppath = self.du.which(hostobj.hostname, \"scp\")\n            if scppath != \"scp\":\n                return scppath\n        elif conf == \"PBS_LOG_HIGHRES_TIMESTAMP\":\n            return \"1\"\n        elif conf == \"PBS_PUBLIC_HOST_NAME\":\n            return None\n        elif conf == \"PBS_DAEMON_SERVICE_USER\":\n            # Only set if scheduler user is not default\n            if DAEMON_SERVICE_USER.name == 'root':\n                return None\n            else:\n                return DAEMON_SERVICE_USER.name\n\n        return None\n\n    def cat(self, filename=None, sudo=False, runas=None,\n            logerr=True, level=logging.INFOCLI2, option=None):\n        \"\"\"\n        Wrapper for cat function\n        \"\"\"\n        return self.du.cat(self.hostname, filename, sudo, runas,\n                           logerr, level, option)\n\n    def revert_mom_pbs_conf(self, primary_server, vals_to_set):\n        \"\"\"\n        Helper function to revert_pbsconf to revert all mom daemons' pbs.conf\n        :param primary_server: object of the primary PBS server\n        :type primary_server: PBSService\n        :param vals_to_set: dict of pbs.conf values to set\n        :type vals_to_set: dict\n        \"\"\"\n\n        new_pbsconf = dict(vals_to_set)\n        restart_mom = False\n        pbs_conf_val = self.du.parse_pbs_config(self.hostname)\n        if not pbs_conf_val:\n            raise ValueError(\"Could not parse pbs.conf on host %s\" %\n                             (self.hostname))\n\n        # to start with, set all keys in new_pbsconf with values from the\n        # existing pbs.conf\n        keys_to_delete = []\n        for conf in new_pbsconf:\n            if conf in pbs_conf_val:\n              
  new_pbsconf[conf] = pbs_conf_val[conf]\n            else:\n                # existing pbs.conf doesn't have a default variable set\n                # Try to determine the default\n                val = self._get_dflt_pbsconfval(conf,\n                                                primary_server.hostname,\n                                                \"mom\", self)\n                if val is None:\n                    self.logger.error(\"Couldn't revert %s in pbs.conf\"\n                                      \" to its default value\" %\n                                      (conf))\n                    keys_to_delete.append(conf)\n                else:\n                    new_pbsconf[conf] = val\n\n        for key in keys_to_delete:\n            del(new_pbsconf[key])\n\n        # Set the mom start bit to 1\n        if (new_pbsconf[\"PBS_START_MOM\"] != \"1\"):\n            new_pbsconf[\"PBS_START_MOM\"] = \"1\"\n            restart_mom = True\n\n        # Set PBS_CORE_LIMIT, PBS_SCP and PBS_SERVER\n        if new_pbsconf[\"PBS_CORE_LIMIT\"] != \"unlimited\":\n            new_pbsconf[\"PBS_CORE_LIMIT\"] = \"unlimited\"\n            restart_mom = True\n        if new_pbsconf[\"PBS_SERVER\"] != primary_server.hostname:\n            new_pbsconf[\"PBS_SERVER\"] = primary_server.hostname\n            restart_mom = True\n        if \"PBS_SCP\" not in new_pbsconf:\n            scppath = self.du.which(self.hostname, \"scp\")\n            if scppath != \"scp\":\n                new_pbsconf[\"PBS_SCP\"] = scppath\n                restart_mom = True\n        if new_pbsconf[\"PBS_LOG_HIGHRES_TIMESTAMP\"] != \"1\":\n            new_pbsconf[\"PBS_LOG_HIGHRES_TIMESTAMP\"] = \"1\"\n            restart_mom = True\n\n        # Check if existing pbs.conf has more/less entries than the\n        # default list\n        if len(pbs_conf_val) != len(new_pbsconf):\n            restart_mom = True\n        # Check if existing pbs.conf has correct ownership\n        dest = 
self.du.get_pbs_conf_file(self.hostname)\n        (cf_uid, cf_gid) = (os.stat(dest).st_uid, os.stat(dest).st_gid)\n        if cf_uid != 0 or cf_gid > 10:\n            restart_mom = True\n\n        if restart_mom:\n            self.du.set_pbs_config(self.hostname, confs=new_pbsconf,\n                                   append=False)\n            self.pbs_conf = new_pbsconf\n            self.pi.initd(self.hostname, \"restart\", daemon=\"mom\")\n            if not self.isUp():\n                self.fail(\"Mom is not up\")\n\n    def save_configuration(self, outfile=None, mode='w'):\n        \"\"\"\n        Save a MoM ``mom_priv/config``\n\n        :param outfile: Optional Path to a file to which configuration\n                        is saved, when not provided, data is saved in\n                        class variable saved_config\n        :type outfile: str\n        :param mode: the mode in which to open outfile to save\n                     configuration.\n        :type mode: str\n        :returns: True on success, False on error\n\n        .. note:: first object being saved should open this file\n                  with 'w' and subsequent calls from other objects\n                  should save with mode 'a' or 'a+'. 
Defaults to 'w'.\n        \"\"\"\n        conf = {}\n        mpriv = os.path.join(self.pbs_conf['PBS_HOME'], 'mom_priv')\n        cf = os.path.join(mpriv, 'config')\n        self._save_config_file(conf, cf)\n\n        if os.path.isdir(os.path.join(mpriv, 'config.d')):\n            for f in self.du.listdir(path=os.path.join(mpriv, 'config.d'),\n                                     sudo=True):\n                self._save_config_file(conf,\n                                       os.path.join(mpriv, 'config.d', f))\n        mconf = {self.hostname: conf}\n        if MGR_OBJ_NODE not in self.server.saved_config:\n            self.server.saved_config[MGR_OBJ_NODE] = {}\n        self.server.saved_config[MGR_OBJ_NODE].update(mconf)\n        if outfile is not None:\n            try:\n                with open(outfile, mode) as f:\n                    json.dump(self.server.saved_config, f)\n            except Exception:\n                self.logger.error('error saving configuration to ' + outfile)\n                return False\n        return True\n\n    def load_configuration(self, infile):\n        \"\"\"\n        load mom configuration from saved file infile\n        \"\"\"\n        rv = self._load_configuration(infile, MGR_OBJ_NODE)\n        self.signal('-HUP')\n        return rv\n\n    def is_cray(self):\n        \"\"\"\n        Returns True if the version of PBS used was built for Cray platforms\n        \"\"\"\n        try:\n            self.log_match(\"alps_client\", n='ALL', tail=False, max_attempts=1)\n        except PtlLogMatchError:\n            return False\n        else:\n            return True\n\n    def is_shasta(self):\n        \"\"\"\n        Returns True if the version of PBS used is installed on Shasta platform\n        \"\"\"\n        if self.platform == 'shasta':\n            return True\n        else:\n            return False\n\n    def is_only_linux(self):\n        \"\"\"\n        Returns True if MoM is only Linux\n        \"\"\"\n        return 
True\n\n    def check_mom_bash_version(self):\n        \"\"\"\n        Return True if bash version on mom is greater than or equal to 4.2.46\n        \"\"\"\n        cmd = ['echo', '${BASH_VERSION%%[^0-9.]*}']\n        ret = self.du.run_cmd(self.hostname, cmd=cmd, sudo=True,\n                              as_script=True)\n        req_bash_version = \"4.2.46\"\n        if len(ret['out']) > 0:\n            mom_bash_version = ret['out'][0]\n        else:\n            # If we can't get the bash version, there is no harm\n            # in trying to run the test. It might fail with an error,\n            # but at least we tried.\n            return True\n        # Compare version components numerically; a plain string\n        # comparison would rank e.g. '4.10' below '4.2.46'.\n        try:\n            mom_ver = tuple(int(x) for x in mom_bash_version.split('.'))\n            req_ver = tuple(int(x) for x in req_bash_version.split('.'))\n        except ValueError:\n            return True\n        return mom_ver >= req_ver\n\n    def is_cpuset_mom(self):\n        \"\"\"\n        Check for cgroup cpuset enabled system\n        \"\"\"\n        if self._is_cpuset_mom is not None:\n            return self._is_cpuset_mom\n        hpe_file1 = \"/etc/sgi-compute-node-release\"\n        hpe_file2 = \"/etc/sgi-known-distributions\"\n        ret1 = self.du.isfile(self.hostname, path=hpe_file1)\n        ret2 = self.du.isfile(self.hostname, path=hpe_file2)\n        self._is_cpuset_mom = bool(ret1 or ret2)\n        return self._is_cpuset_mom\n\n    def skipTest(self, reason=None):\n        \"\"\"\n        Skip Test\n\n        :param reason: message to indicate why test is skipped\n        :type reason: str or None\n        \"\"\"\n        if reason:\n            self.logger.warning('test skipped: ' + reason)\n        else:\n            reason = 'unknown'\n        raise SkipTest(reason)\n\n    def check_mem_request(self, attrib):\n        if 'resources_available.mem' in attrib:\n            del attrib['resources_available.mem']\n            self.skipTest(\n                'mem requested cannot be set on cpuset mom')\n\n    def check_ncpus_request(self, attrib, vn):\n        
skipmsg = 'ncpus requested are not the same as available'\n        skipmsg += ' on a cpuset node'\n        at = 'resources_available.ncpus'\n        if at in attrib:\n            if int(attrib[at]) != int(vn[at]):\n                self.skipTest(skipmsg)\n            else:\n                del attrib['resources_available.ncpus']\n\n    def set_node_attrib(self, vnode, attrib):\n        \"\"\"\n        Set an attribute on a node\n        \"\"\"\n        for res, value in attrib.items():\n            ecmd = 'set node '\n            ecmd += vnode['id'] + ' ' + res + '=' + str(value)\n            pcmd = [os.path.join(self.pbs_conf['PBS_EXEC'],\n                                 'bin', 'qmgr'), '-c', ecmd]\n            self.du.run_cmd(self.hostname, pcmd,\n                            sudo=True,\n                            level=logging.INFOCLI,\n                            logerr=True)\n\n    def create_vnodes(self, attrib=None, num=1,\n                      additive=False, sharednode=True, restart=True,\n                      delall=True, natvnode=None, usenatvnode=False,\n                      attrfunc=None, fname=None, vnodes_per_host=1,\n                      createnode=True, expect=True, vname=None):\n        \"\"\"\n        Helper function to create vnodes.\n\n        :param attrib: attributes to assign to each node\n        :type attrib: dict\n        :param num: the number of vnodes to create. 
Defaults to 1\n        :type num: int\n        :param additive: If True, vnodes are added to the existing\n                         vnode defs. Defaults to False.\n        :type additive: bool\n        :param sharednode: If True, all vnodes will share the same\n                           host. Defaults to True.\n        :type sharednode: bool\n        :param restart: If True, the MoM will be restarted.\n        :type restart: bool\n        :param delall: If True, delete all server nodes prior to\n                       inserting vnodes\n        :type delall: bool\n        :param natvnode: name of the natural vnode, i.e. the node\n                         name in qmgr -c \"create node <name>\"\n        :type natvnode: str or None\n        :param usenatvnode: count the natural vnode as an\n                            allocatable node.\n        :type usenatvnode: bool\n        :param attrfunc: an attribute=value function generator,\n                         see create_vnode_def\n        :param fname: optional name of the vnode def file\n        :type fname: str or None\n        :param vnodes_per_host: number of vnodes per host\n        :type vnodes_per_host: int\n        :param createnode: whether to create the node via manage or\n                           not. Defaults to True\n        :type createnode: bool\n        :param expect: whether to expect attributes to be set or\n                       not. 
Defaults to True\n        :type expect: bool\n        :returns: True on success and False otherwise\n        :param vname: optional vnode prefix name to be used\n                      only if vnodes cannot have mom hostname\n                      as vnode prefix under some condition\n        :type vname: str or None\n        \"\"\"\n        if attrib is None:\n            self.logger.error(\"attributes are required\")\n            return False\n\n        if self.is_cpuset_mom():\n            if vname:\n                msg = \"cpuset nodes cannot have vnode names\"\n                self.skipTest(msg)\n            self.check_mem_request(attrib)\n            if len(attrib) == 0:\n                return True\n            nodes = self.server.status(HOST, id=self.shortname)\n            del nodes[0]  # don't set any attribute on natural node\n            if len(nodes) < num:\n                msg = 'cpuset mom does not have required number of nodes'\n                self.skipTest(msg)\n            elif len(nodes) == num:\n                if attrib:\n                    for vnode in nodes:\n                        self.check_ncpus_request(attrib, vnode)\n                        if len(attrib) != 0:\n                            self.set_node_attrib(vnode, attrib)\n                return True\n            else:\n                i = 0\n                for vnode in nodes:\n                    if i < num:\n                        i += 1\n                        self.check_ncpus_request(attrib, vnode)\n                        if len(attrib) != 0:\n                            self.set_node_attrib(vnode, attrib)\n                    else:\n                        at = {'state': 'offline'}\n                        self.set_node_attrib(vnode, at)\n                return True\n\n        if natvnode is None:\n            natvnode = self.shortname\n\n        if vname is None:\n            vname = self.shortname\n\n        if delall:\n            try:\n                rv = 
self.server.manager(MGR_CMD_DELETE, NODE, None, \"\")\n                if rv != 0:\n                    return False\n            except PbsManagerError:\n                pass\n\n        vdef = self.create_vnode_def(vname, attrib, num, sharednode,\n                                     usenatvnode=usenatvnode,\n                                     attrfunc=attrfunc,\n                                     vnodes_per_host=vnodes_per_host)\n        self.insert_vnode_def(vdef, fname=fname, additive=additive,\n                              restart=restart)\n\n        new_vnodelist = []\n        if usenatvnode:\n            new_vnodelist.append(natvnode)\n            num_check = num - 1\n        else:\n            num_check = num\n        for i in range(num_check):\n            new_vnodelist.append(\"%s[%s]\" % (vname, i))\n\n        if createnode:\n            try:\n                statm = self.server.status(NODE, id=natvnode)\n            except Exception:\n                statm = []\n            if len(statm) >= 1:\n                _m = 'Mom %s already exists, not creating' % (natvnode)\n                self.logger.info(_m)\n            else:\n                if self.pbs_conf and 'PBS_MOM_SERVICE_PORT' in self.pbs_conf:\n                    m_attr = {'port': self.pbs_conf['PBS_MOM_SERVICE_PORT']}\n                else:\n                    m_attr = None\n                self.server.manager(MGR_CMD_CREATE, NODE, m_attr, natvnode)\n        # only expect if vnodes were added rather than the nat vnode modified\n        if expect and num > 0:\n            attrs = {'state': 'free'}\n            attrs.update(attrib)\n            for vn in new_vnodelist:\n                self.server.expect(VNODE, attrs, id=vn)\n        return True\n\n    def create_vnode_def(self, name, attrs={}, numnodes=1, sharednode=True,\n                         pre='[', post=']', usenatvnode=False, attrfunc=None,\n                         vnodes_per_host=1):\n        \"\"\"\n        Create a vnode 
definition string representation\n\n        :param name: The prefix for name of vnode to create,\n                     name of vnode will be prefix + pre + <num> +\n                     post\n        :type name: str\n        :param attrs: Dictionary of attributes to set on each vnode\n        :type attrs: Dictionary\n        :param numnodes: The number of vnodes to create\n        :type numnodes: int\n        :param sharednode: If True, vnodes are shared on a host\n        :type sharednode: bool\n        :param pre: The symbol preceding the numeric value of that\n                    vnode.\n        :type pre: str\n        :param post: The symbol following the numeric value of that\n                     vnode.\n        :type post: str\n        :param usenatvnode: use the natural vnode as the first vnode\n                            to allocate; this only makes sense\n                            starting with PBS 11.3 when natural\n                            vnodes are reported as allocatable\n        :type usenatvnode: bool\n        :param attrfunc: function to customize the attributes,\n                         signature is (name, numnodes, curnodenum,\n                         attrs), must return a dict that contains\n                         new or modified attrs that will be added to\n                         the vnode def. 
The function is called once\n                         per vnode being created, it does not modify\n                         attrs itself across calls.\n        :param vnodes_per_host: number of vnodes per host\n        :type vnodes_per_host: int\n        :returns: A string representation of the vnode definition\n                  file\n        \"\"\"\n        sethost = False\n\n        attribs = attrs.copy()\n        if not sharednode and 'resources_available.host' not in attrs:\n            sethost = True\n\n        if attrfunc is None:\n            customattrs = attribs\n\n        vdef = [\"$configversion 2\"]\n\n        # altering the natural vnode information\n        if numnodes == 0:\n            for k, v in attribs.items():\n                vdef += [name + \": \" + str(k) + \"=\" + str(v)]\n        else:\n            if usenatvnode:\n                if attrfunc:\n                    customattrs = attrfunc(name, numnodes, \"\", attribs)\n                for k, v in customattrs.items():\n                    vdef += [self.shortname + \": \" + str(k) + \"=\" + str(v)]\n                # account for the use of the natural vnode\n                numnodes -= 1\n            else:\n                # ensure that natural vnode is not allocatable by the scheduler\n                vdef += [self.shortname + \": resources_available.ncpus=0\"]\n                vdef += [self.shortname + \": resources_available.mem=0\"]\n\n        for n in range(numnodes):\n            vnid = name + pre + str(n) + post\n            if sethost:\n                if vnodes_per_host > 1:\n                    if n % vnodes_per_host == 0:\n                        _nid = vnid\n                    else:\n                        _nid = name + pre + str(n - n % vnodes_per_host) + post\n                    attribs['resources_available.host'] = _nid\n                else:\n                    attribs['resources_available.host'] = vnid\n\n            if attrfunc:\n                customattrs = 
attrfunc(vnid, numnodes, n, attribs)\n            for k, v in customattrs.items():\n                vdef += [vnid + \": \" + str(k) + \"=\" + str(v)]\n\n        if numnodes == 0:\n            nn = 1\n        else:\n            nn = numnodes\n        if numnodes > 1:\n            vnn_msg = ' vnodes '\n        else:\n            vnn_msg = ' vnode '\n\n        self.logger.info(self.logprefix + 'created ' + str(nn) +\n                         vnn_msg + name + ' with attr ' +\n                         str(attribs) + ' on host ' + self.hostname)\n        vdef += [\"\\n\"]\n        del attribs\n        return \"\\n\".join(vdef)\n\n    def add_checkpoint_abort_script(self, dirname=None, body=None,\n                                    abort_time=30):\n        \"\"\"\n        Add checkpoint script in the mom config.\n        returns: a temp file for checkpoint script\n        \"\"\"\n        chk_file = self.du.create_temp_file(hostname=self.hostname, body=body,\n                                            dirname=dirname)\n        self.du.chmod(hostname=self.hostname, path=chk_file, mode=0o700)\n        self.du.chown(hostname=self.hostname, path=chk_file, runas=ROOT_USER,\n                      uid=0, gid=0)\n        c = {'$action checkpoint_abort':\n             str(abort_time) + ' !' + chk_file + ' %sid'}\n        self.add_config(c)\n        return chk_file\n\n    def add_restart_script(self, dirname=None, body=None,\n                           abort_time=30):\n        \"\"\"\n        Add restart script in the mom config.\n        returns: a temp file for restart script\n        \"\"\"\n        rst_file = self.du.create_temp_file(hostname=self.hostname, body=body,\n                                            dirname=dirname)\n        self.du.chmod(hostname=self.hostname, path=rst_file, mode=0o700)\n        self.du.chown(hostname=self.hostname, path=rst_file, runas=ROOT_USER,\n                      uid=0, gid=0)\n        c = {'$action restart': str(abort_time) + ' !' 
+ rst_file + ' %sid'}\n        self.add_config(c)\n        return rst_file\n\n    def parse_config(self):\n        \"\"\"\n        Parse mom config file into a dictionary of configuration\n        options.\n\n        :returns: A dictionary of configuration options on success,\n                  and None otherwise\n        \"\"\"\n        try:\n            mconf = os.path.join(self.pbs_conf['PBS_HOME'], 'mom_priv',\n                                 'config')\n            ret = self.du.cat(self.hostname, mconf, sudo=True)\n            if ret['rc'] != 0:\n                self.logger.error('error parsing configuration file')\n                return None\n\n            self.config = {}\n            lines = ret['out']\n            for line in lines:\n                if line.startswith('$action'):\n                    (ac, k, v) = line.split(' ', 2)\n                    k = ac + ' ' + k\n                else:\n                    (k, v) = line.split(' ', 1)\n                if k in self.config:\n                    if isinstance(self.config[k], list):\n                        self.config[k].append(v)\n                    else:\n                        self.config[k] = [self.config[k], v]\n                else:\n                    self.config[k] = v\n        except Exception:\n            self.logger.error('error in parse_config')\n            return None\n\n        return self.config\n\n    def add_config(self, conf={}, hup=True):\n        \"\"\"\n        Add config options to mom_priv_config.\n\n        :param conf: The configurations to add to ``mom_priv/config``\n        :type conf: Dictionary\n        :param hup: If True (default) ``HUP`` the MoM\n        :type hup: bool\n        :returns: True on success and False otherwise\n        \"\"\"\n\n        doconfig = False\n\n        if not self.config:\n            self.parse_config()\n\n        mc = self.config\n\n        if mc is None:\n            mc = {}\n\n        for k, v in conf.items():\n            if k in mc 
and (mc[k] == v or (isinstance(v, list) and\n                                           mc[k] in v)):\n                self.logger.debug(self.logprefix + 'config ' + k +\n                                  ' already set to ' + str(v))\n                continue\n            else:\n                doconfig = True\n                break\n\n        if not doconfig:\n            return True\n\n        self.logger.info(self.logprefix + \"config \" + str(conf))\n\n        return self.apply_config(conf, hup)\n\n    def unset_mom_config(self, name, hup=True):\n        \"\"\"\n        Delete a mom_config entry\n\n        :param name: The entry to remove from ``mom_priv/config``\n        :type name: String\n        :param hup: if True (default) ``HUP`` the MoM\n        :type hup: bool\n        :returns: True on success and False otherwise\n        \"\"\"\n        mc = self.parse_config()\n        if mc is None or name not in mc:\n            return True\n        self.logger.info(self.logprefix + \"unsetting config \" + name)\n        del mc[name]\n\n        return self.apply_config(mc, hup)\n\n    def apply_config(self, conf={}, hup=True, restart=False):\n        \"\"\"\n        Apply configuration options to MoM.\n\n        :param conf: A dictionary of configuration options to apply\n                     to MoM\n        :type conf: Dictionary\n        :param hup: If True (default) , HUP the MoM to apply the\n                    configuration\n        :type hup: bool\n        :returns: True on success and False otherwise.\n        \"\"\"\n        self.config = {**self.config, **conf}\n        try:\n            fn = self.du.create_temp_file()\n            with open(fn, 'w+') as f:\n                for k, v in self.config.items():\n                    if isinstance(v, list):\n                        for eachprop in v:\n                            f.write(str(k) + ' ' + str(eachprop) + '\\n')\n                    else:\n                        f.write(str(k) + ' ' + str(v) + 
'\\n')\n            dest = os.path.join(\n                self.pbs_conf['PBS_HOME'], 'mom_priv', 'config')\n            self.du.run_copy(self.hostname, src=fn, dest=dest,\n                             preserve_permission=False, sudo=True)\n            os.remove(fn)\n        except Exception:\n            raise PbsMomConfigError(rc=1, rv=False,\n                                    msg='error processing add_config')\n        if restart:\n            return self.restart()\n        elif hup:\n            return self.signal('-HUP')\n\n        return True\n\n    def get_vnode_def(self, vnodefile=None):\n        \"\"\"\n        :returns: A vnode def file as a single string\n        \"\"\"\n        if vnodefile is None:\n            return None\n        with open(vnodefile) as f:\n            lines = f.readlines()\n        return \"\".join(lines)\n\n    def insert_vnode_def(self, vdef, fname=None, additive=False, restart=True):\n        \"\"\"\n        Insert and enable a vnode definition. Root privilege\n        is required\n\n        :param vdef: The vnode definition string as created by\n                     create_vnode_def\n        :type vdef: str\n        :param fname: The filename to write the vnode def string to\n        :type fname: str or None\n        :param additive: If True, keep all other vnode def files\n                         under config.d. Default is False\n        :type additive: bool\n        :param restart: If True, restart the MoM. 
Default is True\n        :type restart: bool\n        \"\"\"\n\n        if self.is_cpuset_mom():\n            msg = 'Creating multiple vnodes is not supported on cpuset mom'\n            self.skipTest(msg)\n\n        try:\n            fn = self.du.create_temp_file(self.hostname, body=vdef)\n        except Exception:\n            raise PbsMomConfigError(rc=1, rv=False,\n                                    msg=\"Failed to insert vnode definition\")\n        if fname is None:\n            fname = 'pbs_vnode_' + str(int(time.time())) + '.def'\n        if not additive:\n            self.delete_vnode_defs()\n        cmd = [os.path.join(self.pbs_conf['PBS_EXEC'], 'sbin', 'pbs_mom')]\n        cmd += ['-s', 'insert', fname, fn]\n        ret = self.du.run_cmd(self.hostname, cmd, sudo=True, logerr=False,\n                              level=logging.INFOCLI)\n        self.du.rm(hostname=self.hostname, path=fn, force=True)\n        if ret['rc'] != 0:\n            raise PbsMomConfigError(rc=1, rv=False, msg=\"\\n\".join(ret['err']))\n        msg = self.logprefix + 'inserted vnode definition file '\n        msg += fname + ' on host: ' + self.hostname\n        self.logger.info(msg)\n        if restart:\n            self.restart()\n\n    def has_vnode_defs(self):\n        \"\"\"\n        Check for vnode definition(s)\n        \"\"\"\n        cmd = [os.path.join(self.pbs_conf['PBS_EXEC'], 'sbin', 'pbs_mom')]\n        cmd += ['-s', 'list']\n        ret = self.du.run_cmd(self.hostname, cmd, sudo=True, logerr=False,\n                              level=logging.INFOCLI)\n        if ret['rc'] == 0:\n            files = [x for x in ret['out'] if not x.startswith('PBS')]\n            if len(files) > 0:\n                return True\n            else:\n                return False\n        else:\n            return False\n\n    def delete_vnode_defs(self, vdefname=None):\n        \"\"\"\n        delete vnode definition(s) on this MoM\n\n        :param vdefname: name of a vnode definition 
file to delete,\n                         if None all vnode definitions are deleted\n        :type vdefname: str\n        :returns: True if delete succeeds, otherwise False\n        \"\"\"\n        cmd = [os.path.join(self.pbs_conf['PBS_EXEC'], 'sbin', 'pbs_mom')]\n        cmd += ['-s', 'list']\n        ret = self.du.run_cmd(self.hostname, cmd, sudo=True, logerr=False,\n                              level=logging.INFOCLI)\n        if ret['rc'] != 0:\n            return False\n        rv = True\n        if len(ret['out']) > 0:\n            for vnodedef in ret['out']:\n                vnodedef = vnodedef.strip()\n                if (vnodedef == vdefname) or vdefname is None:\n                    if vnodedef.startswith('PBS'):\n                        continue\n                    cmd = [os.path.join(self.pbs_conf['PBS_EXEC'], 'sbin',\n                                        'pbs_mom')]\n                    cmd += ['-s', 'remove', vnodedef]\n                    ret = self.du.run_cmd(self.hostname, cmd, sudo=True,\n                                          logerr=False, level=logging.INFOCLI)\n                    if ret['rc'] != 0:\n                        return False\n        return rv\n\n    def has_pelog(self, filename=None):\n        \"\"\"\n        Check for prologue and epilogue\n        \"\"\"\n        _has_pro = False\n        _has_epi = False\n        phome = self.pbs_conf['PBS_HOME']\n        prolog = os.path.join(phome, 'mom_priv', 'prologue')\n        epilog = os.path.join(phome, 'mom_priv', 'epilogue')\n        if self.du.isfile(self.hostname, path=prolog, sudo=True):\n            _has_pro = True\n        if filename == 'prologue':\n            return _has_pro\n        if self.du.isfile(self.hostname, path=epilog, sudo=True):\n            _has_epi = True\n        if filename == 'epilogue':\n            return _has_epi\n        if _has_epi or _has_pro:\n            return True\n        return 
False\n\n    def has_prologue(self):\n        \"\"\"\n        Check for prologue\n        \"\"\"\n        return self.has_pelog('prologue')\n\n    def has_epilogue(self):\n        \"\"\"\n        Check for epilogue\n        \"\"\"\n        return self.has_pelog('epilogue')\n\n    def delete_pelog(self):\n        \"\"\"\n        Delete any prologue and epilogue files that may have been\n        defined on this MoM\n        \"\"\"\n        phome = self.pbs_conf['PBS_HOME']\n        prolog = os.path.join(phome, 'mom_priv', 'prologue')\n        epilog = os.path.join(phome, 'mom_priv', 'epilogue')\n        ret = self.du.rm(self.hostname, epilog, force=True,\n                         sudo=True, logerr=False)\n        if ret:\n            ret = self.du.rm(self.hostname, prolog, force=True,\n                             sudo=True, logerr=False)\n        if not ret:\n            self.logger.error('problem deleting prologue/epilogue')\n            # we don't bail because the problem may be that files did not\n            # exist. 
Let tester fix the issue\n        return ret\n\n    def create_pelog(self, body=None, src=None, filename=None):\n        \"\"\"\n        Create ``prologue`` and ``epilogue`` files; accepts either\n        a body of the script or a source file.\n\n        :returns: True on success and False on error\n        \"\"\"\n\n        if self.has_snap:\n            _msg = 'MoM is loaded from snap so bypassing pelog creation'\n            self.logger.info(_msg)\n            return False\n\n        if (src is None and body is None) or (filename is None):\n            self.logger.error('filename and either body or src are required')\n            return False\n\n        pelog = os.path.join(self.pbs_conf['PBS_HOME'], 'mom_priv', filename)\n\n        self.logger.info(self.logprefix +\n                         ' creating ' + filename + ' with body\\n' + '---')\n        if body is not None:\n            self.logger.info(body)\n            src = self.du.create_temp_file(prefix='pbs-pelog', body=body)\n        elif src is not None:\n            with open(src) as _b:\n                self.logger.info(\"\\n\".join(_b.readlines()))\n        self.logger.info('---')\n\n        ret = self.du.run_copy(self.hostname, src=src, dest=pelog,\n                               preserve_permission=False, sudo=True)\n        if body is not None:\n            os.remove(src)\n        if ret['rc'] != 0:\n            self.logger.error('error creating pelog')\n            return False\n\n        ret = self.du.chown(self.hostname, path=pelog, uid=0, gid=0, sudo=True,\n                            logerr=False)\n        if not ret:\n            self.logger.error('error chowning pelog to root')\n            return False\n        ret = self.du.chmod(self.hostname, path=pelog, mode=0o755, sudo=True)\n        return ret\n\n    def prologue(self, body=None, src=None):\n        \"\"\"\n        Create prologue\n        \"\"\"\n        return self.create_pelog(body, src, 'prologue')\n\n    def 
epilogue(self, body=None, src=None):\n        \"\"\"\n        Create epilogue\n        \"\"\"\n        return self.create_pelog(body, src, 'epilogue')\n\n    def action(self, act, script):\n        \"\"\"\n        Define action script. Not currently implemented\n        \"\"\"\n        pass\n\n    def enable_cgroup_cset(self):\n        \"\"\"\n        Configure and enable cgroups hook\n        \"\"\"\n        # check if cgroups subsystems including cpusets are mounted\n        file = os.path.join(os.sep, 'proc', 'mounts')\n        mounts = self.du.cat(self.hostname, file)['out']\n        pat = 'cgroup /sys/fs/cgroup'\n        enablemem = False\n        for line in mounts:\n            entries = line.split()\n            if entries[2] != 'cgroup':\n                continue\n            flags = entries[3].split(',')\n            if 'memory' in flags:\n                enablemem = True\n        if str(mounts).count(pat) >= 6 and str(mounts).count('cpuset') >= 2:\n            pbs_conf_val = self.du.parse_pbs_config(self.hostname)\n            f1 = os.path.join(pbs_conf_val['PBS_EXEC'], 'lib',\n                              'python', 'altair', 'pbs_hooks',\n                              'pbs_cgroups.CF')\n            # set vnode_per_numa_node = true, use_hyperthreads = true\n            with open(f1, \"r\") as cfg:\n                cfg_dict = json.load(cfg)\n            cfg_dict['vnode_per_numa_node'] = True\n            cfg_dict['use_hyperthreads'] = True\n\n            # if the memory subsystem is not mounted, do not enable mem\n            # in the cgroups config otherwise PTL tests will fail.\n            # This matches what is documented for cgroups and mem.\n            cfg_dict['cgroup']['memory']['enabled'] = enablemem\n            _, path = tempfile.mkstemp(prefix=\"cfg\", suffix=\".json\")\n            with open(path, \"w\") as cfg1:\n                json.dump(cfg_dict, cfg1, indent=4)\n            # read in the cgroup hook configuration\n            a = 
{'content-type': 'application/x-config',\n                 'content-encoding': 'default',\n                 'input-file': path}\n            # check that mom is ready before importing hook\n            self.server.expect(NODE, {'state': 'free'}, id=self.shortname)\n            self.server.manager(MGR_CMD_IMPORT, HOOK, a,\n                                'pbs_cgroups')\n            os.remove(path)\n            # enable cgroups hook\n            self.server.manager(MGR_CMD_SET, HOOK,\n                                {'enabled': 'True'}, 'pbs_cgroups')\n        else:\n            self.logger.error('%s: cgroup subsystems not mounted' %\n                              self.hostname)\n            raise AssertionError('cgroup subsystems not mounted')\n"
  },
  {
    "path": "test/fw/ptl/lib/ptl_object.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nimport copy\nimport logging\nimport os\nimport pwd\nimport re\nimport sys\nimport time\nfrom collections import OrderedDict\n\nfrom ptl.lib.ptl_batchutils import *\n\n\nclass PBSObject(object):\n\n    \"\"\"\n    Generic PBS Object encapsulating attributes and defaults\n\n    The ptl_conf dictionary holds general configuration for the\n    framework's operations; specifically, one can control:\n\n    mode: set to ``PTL_CLI`` to operate in ``CLI`` mode or\n    ``PTL_API`` to operate in ``API`` mode\n\n    max_attempts: the default maximum number of attempts\n    to be used by different methods like expect, log_match.\n    Defaults to 60\n\n    attempt_interval: the default time interval (in seconds)\n    between requests. Defaults to 0.5\n\n    update_attributes: the default on whether Object attributes\n    should be updated using a list of dictionaries. Defaults\n    to True\n\n    :param name: The name associated with the object\n    :type name: str\n    :param attrs: Dictionary of attributes to set on object\n    :type attrs: Dictionary\n    :param defaults: Dictionary of default attributes. 
Setting\n                     this overrides any built-in object defaults\n    :type defaults: Dictionary\n    \"\"\"\n\n    logger = logging.getLogger(__name__)\n    utils = BatchUtils()\n    platform = sys.platform\n\n    ptl_conf = {\n        'mode': PTL_API,\n        'max_attempts': 60,\n        'attempt_interval': 0.5,\n        'update_attributes': True,\n    }\n\n    def __init__(self, name, attrs={}, defaults={}):\n        self.attributes = OrderedDict()\n        self.name = name\n        self.dflt_attributes = defaults\n        self.attropl = None\n        self.custom_attrs = OrderedDict()\n        self.ctime = time.time()\n\n        self.set_attributes(attrs)\n\n    @classmethod\n    def set_update_attributes(cls, val):\n        \"\"\"\n        Set update attributes\n        \"\"\"\n        cls.logger.info('setting update attributes ' + str(val))\n        # accept booleans as-is; treat strings such as '1', 't...' or\n        # 'T...' as True and everything else as False\n        if isinstance(val, str):\n            val = (val.isdigit() and int(val) == 1) or val[:1] in ('t', 'T')\n        else:\n            val = bool(val)\n        cls.ptl_conf['update_attributes'] = val\n\n    @classmethod\n    def set_max_attempts(cls, val):\n        \"\"\"\n        Set max attempts\n        \"\"\"\n        cls.logger.info('setting max attempts ' + str(val))\n        cls.ptl_conf['max_attempts'] = int(val)\n\n    @classmethod\n    def set_attempt_interval(cls, val):\n        \"\"\"\n        Set attempt interval\n        \"\"\"\n        cls.logger.info('setting attempt interval ' + str(val))\n        cls.ptl_conf['attempt_interval'] = float(val)\n\n    def set_attributes(self, a={}):\n        \"\"\"\n        set attributes and custom attributes on this object.\n        custom attributes are used when converting attributes\n        to CLI\n\n        :param a: Attribute dictionary\n        :type a: Dictionary\n        \"\"\"\n        if isinstance(a, list):\n            a = OrderedDict(a)\n\n        self.attributes = OrderedDict(list(self.dflt_attributes.items()) +\n                                      
list(self.attributes.items()) +\n                                      list(a.items()))\n\n        self.custom_attrs = OrderedDict(list(self.custom_attrs.items()) +\n                                        list(a.items()))\n\n    def unset_attributes(self, attrl=[]):\n        \"\"\"\n        Unset attributes from object's attributes and custom\n        attributes\n\n        :param attrl: Attribute list\n        :type attrl: List\n        \"\"\"\n        for attr in attrl:\n            if attr in self.attributes:\n                del self.attributes[attr]\n            if attr in self.custom_attrs:\n                del self.custom_attrs[attr]\n\n    def __str__(self):\n        \"\"\"\n        Return a string representation of this PBSObject\n        \"\"\"\n        if self.name is None:\n            return \"\"\n\n        s = []\n        if isinstance(self, Job):\n            s += [\"Job Id: \" + self.name + \"\\n\"]\n        elif isinstance(self, Queue):\n            s += [\"Queue: \" + self.name + \"\\n\"]\n        elif isinstance(self, Server):\n            s += [\"Server: \" + self.hostname + \"\\n\"]\n        elif isinstance(self, Reservation):\n            s += [\"Name: \" + self.name + \"\\n\"]\n        else:\n            s += [self.name + \"\\n\"]\n        for k, v in self.attributes.items():\n            s += [\"    \" + k + \" = \" + str(v) + \"\\n\"]\n        return \"\".join(s)\n\n    def __repr__(self):\n        return str(self.attributes)\n"
  },
  {
    "path": "test/fw/ptl/lib/ptl_resourceresv.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport ast\nimport copy\nimport logging\nimport os\nimport pwd\nimport re\nimport string\nimport sys\nimport threading\nimport time\nimport traceback\nfrom collections import OrderedDict\n\nfrom ptl.utils.pbs_dshutils import DshUtils\nfrom ptl.utils.pbs_procutils import ProcUtils\nfrom ptl.utils.pbs_testusers import (ROOT_USER, TEST_USER, PbsUser,\n                                     DAEMON_SERVICE_USER)\n\nfrom ptl.lib.ptl_object import PBSObject\nfrom ptl.lib.ptl_constants import (ATTR_N, ATTR_j, ATTR_m, ATTR_v, ATTR_k,\n                                   ATTR_p, ATTR_r, ATTR_Arglist,\n                                   ATTR_executable, ATTR_S, ATTR_resv_start,\n                                   ATTR_job, ATTR_resv_end,\n                                   ATTR_resv_duration)\nfrom ptl.lib.ptl_types import (PbsTypeExecVnode, PbsTypeExecHost,\n                               PbsTypeSelect)\n\n\nclass ResourceResv(PBSObject):\n\n    \"\"\"\n    Generic PBS resource reservation, i.e., job or\n    ``advance/standing`` reservation\n    \"\"\"\n\n    def execvnode(self, attr='exec_vnode'):\n        \"\"\"\n        PBS type execution vnode\n        \"\"\"\n        if attr in self.attributes:\n            return PbsTypeExecVnode(self.attributes[attr])\n        else:\n            return None\n\n    def exechost(self):\n        \"\"\"\n        PBS type execution 
host\n        \"\"\"\n        if 'exec_host' in self.attributes:\n            return PbsTypeExecHost(self.attributes['exec_host'])\n        else:\n            return None\n\n    def resvnodes(self):\n        \"\"\"\n        nodes assigned to a reservation\n        \"\"\"\n        if 'resv_nodes' in self.attributes:\n            return self.attributes['resv_nodes']\n        else:\n            return None\n\n    def select(self):\n        if hasattr(self, '_select') and self._select is not None:\n            return self._select\n\n        if 'schedselect' in self.attributes:\n            self._select = PbsTypeSelect(self.attributes['schedselect'])\n\n        elif 'select' in self.attributes:\n            self._select = PbsTypeSelect(self.attributes['select'])\n        else:\n            return None\n\n        return self._select\n\n    @classmethod\n    def get_hosts(cls, exechost=None):\n        \"\"\"\n        :returns: The hosts portion of the exec_host\n        \"\"\"\n        hosts = []\n        exechosts = cls.utils.parse_exechost(exechost)\n        if exechosts:\n            for h in exechosts:\n                eh = list(h.keys())[0]\n                if eh not in hosts:\n                    hosts.append(eh)\n        return hosts\n\n    def get_vnodes(self, execvnode=None):\n        \"\"\"\n        :returns: The unique vnode names of an execvnode as a list\n        \"\"\"\n        if execvnode is None:\n            if 'exec_vnode' in self.attributes:\n                execvnode = self.attributes['exec_vnode']\n            elif 'resv_nodes' in self.attributes:\n                execvnode = self.attributes['resv_nodes']\n            else:\n                return []\n\n        vnodes = []\n        execvnodes = PbsTypeExecVnode(execvnode)\n        if execvnodes:\n            for n in execvnodes:\n                ev = list(n.keys())[0]\n                if ev not in vnodes:\n                    vnodes.append(ev)\n        return vnodes\n\n    def walltime(self, 
attr='Resource_List.walltime'):\n        if attr in self.attributes:\n            return self.utils.convert_duration(self.attributes[attr])\n\n\nclass Reservation(ResourceResv):\n\n    \"\"\"\n    PBS Reservation. Attributes and Resources\n\n    :param attrs: Reservation attributes\n    :type attrs: Dictionary\n    :param hosts: List of hosts for maintenance\n    :type hosts: List\n    \"\"\"\n\n    dflt_attributes = {}\n\n    def __init__(self, username=TEST_USER, attrs=None, hosts=None):\n        self.server = {}\n        self.script = None\n\n        if attrs:\n            self.attributes = attrs\n        else:\n            self.attributes = {}\n\n        if hosts:\n            self.hosts = hosts\n        else:\n            self.hosts = []\n\n        if username is None:\n            userinfo = pwd.getpwuid(os.getuid())\n            self.username = userinfo[0]\n        else:\n            self.username = str(username)\n\n        # These are not in dflt_attributes because the conversion to CLI\n        # options is done strictly\n        if ATTR_resv_start not in self.attributes and \\\n           ATTR_job not in self.attributes:\n            self.attributes[ATTR_resv_start] = str(int(time.time()) +\n                                                   36 * 3600)\n\n        if ATTR_resv_end not in self.attributes and \\\n           ATTR_job not in self.attributes:\n            if ATTR_resv_duration not in self.attributes:\n                self.attributes[ATTR_resv_end] = str(int(time.time()) +\n                                                     72 * 3600)\n\n        PBSObject.__init__(self, None, self.attributes, self.dflt_attributes)\n        self.set_attributes()\n\n    def __del__(self):\n        del self.__dict__\n\n    def set_variable_list(self, user, workdir=None):\n        pass\n\n\nclass Job(ResourceResv):\n\n    \"\"\"\n    PBS Job. 
Attributes and Resources\n\n    :param username: Job username\n    :type username: str or None\n    :param attrs: Job attributes\n    :type attrs: Dictionary\n    :param jobname: Name of the PBS job\n    :type jobname: str or None\n    \"\"\"\n\n    dflt_attributes = {\n        ATTR_N: 'STDIN',\n        ATTR_j: 'n',\n        ATTR_m: 'a',\n        ATTR_p: '0',\n        ATTR_r: 'y',\n        ATTR_k: 'oe',\n    }\n    runtime = 100\n    du = DshUtils()\n\n    def __init__(self, username=TEST_USER, attrs={}, jobname=None):\n        self.platform = self.du.get_platform()\n        self.server = {}\n        self.script = None\n        self.script_body = None\n        if username is not None:\n            self.username = str(username)\n        else:\n            self.username = None\n        self.du = None\n        self.interactive_handle = None\n        if self.platform == 'cray' or self.platform == 'craysim':\n            if 'Resource_List.select' in attrs:\n                select = attrs['Resource_List.select']\n                attrs['Resource_List.select'] = self.add_cray_vntype(select)\n            elif 'Resource_List.vntype' not in attrs:\n                attrs['Resource_List.vntype'] = 'cray_compute'\n\n        PBSObject.__init__(self, None, attrs, self.dflt_attributes)\n\n        if jobname is not None:\n            self.custom_attrs[ATTR_N] = jobname\n            self.attributes[ATTR_N] = jobname\n        self.set_variable_list(self.username)\n        self.set_sleep_time(100)\n\n    def __del__(self):\n        del self.__dict__\n\n    def add_cray_vntype(self, select=None):\n        \"\"\"\n        Cray specific function to add vntype as ``cray_compute`` to each\n        select chunk\n\n        :param select: PBS select statement\n        :type select: str or None\n        \"\"\"\n        ra = []\n        r = select.split('+')\n        for i in r:\n            select = PbsTypeSelect(i)\n            novntype = 'vntype' not in select.resources\n            nohost = 
'host' not in select.resources\n            novnode = 'vnode' not in select.resources\n            if novntype and nohost and novnode:\n                i = i + \":vntype=cray_compute\"\n            ra.append(i)\n        select_str = ''\n        for l in ra:\n            select_str = select_str + \"+\" + l\n        select_str = select_str[1:]\n        return select_str\n\n    def set_attributes(self, a={}):\n        \"\"\"\n        set attributes and custom attributes on this job.\n        custom attributes are used when converting attributes to CLI.\n        In case of Cray platform if 'Resource_List.vntype' is set\n        already then remove it and add vntype value to each chunk of a\n        select statement.\n\n        :param a: Attribute dictionary\n        :type a: Dictionary\n        \"\"\"\n        if isinstance(a, list):\n            a = OrderedDict(a)\n\n        self.attributes = OrderedDict(list(self.dflt_attributes.items()) +\n                                      list(self.attributes.items()) +\n                                      list(a.items()))\n\n        if self.platform == 'cray' or self.platform == 'craysim':\n            s = 'Resource_List.select' in a\n            v = 'Resource_List.vntype' in self.custom_attrs\n            if s and v:\n                del self.custom_attrs['Resource_List.vntype']\n                select = a['Resource_List.select']\n                a['Resource_List.select'] = self.add_cray_vntype(select)\n\n        self.custom_attrs = OrderedDict(list(self.custom_attrs.items()) +\n                                        list(a.items()))\n\n    def set_variable_list(self, user=None, workdir=None):\n        \"\"\"\n        Customize the ``Variable_List`` job attribute to ``<user>``\n        \"\"\"\n        if user is None:\n            userinfo = pwd.getpwuid(os.getuid())\n            user = userinfo[0]\n            homedir = userinfo[5]\n        else:\n            try:\n                homedir = pwd.getpwnam(user)[5]\n         
   except Exception:\n                homedir = \"\"\n\n        self.username = user\n\n        s = ['PBS_O_HOME=' + homedir]\n        s += ['PBS_O_LANG=en_US.UTF-8']\n        s += ['PBS_O_LOGNAME=' + user]\n        s += ['PBS_O_PATH=/usr/bin:/bin:/usr/bin:/usr/local/bin']\n        s += ['PBS_O_MAIL=/var/spool/mail/' + user]\n        s += ['PBS_O_SHELL=/bin/bash']\n        s += ['PBS_O_SYSTEM=Linux']\n        if workdir is not None:\n            wd = workdir\n        else:\n            wd = os.getcwd()\n        s += ['PBS_O_WORKDIR=' + str(wd)]\n\n        self.attributes[ATTR_v] = \",\".join(s)\n        self.set_attributes()\n\n    def set_sleep_time(self, duration):\n        \"\"\"\n        Set the sleep duration for this job.\n\n        :param duration: The duration, in seconds, to sleep\n        :type duration: int\n        \"\"\"\n        pbs_conf = DshUtils().parse_pbs_config()\n        exe_path = os.path.join(pbs_conf['PBS_EXEC'], 'bin', 'pbs_sleep')\n        if not os.path.isfile(exe_path):\n            exe_path = '/bin/sleep'\n        self.set_execargs(exe_path, duration)\n\n    def set_execargs(self, executable, arguments=None):\n        \"\"\"\n        Set the executable and arguments to use for this job\n\n        :param executable: path to an executable. 
No checks are made.\n        :type executable: str\n        :param arguments: arguments to executable.\n        :type arguments: str or list or int\n        \"\"\"\n        msg = ['job: executable set to ' + str(executable)]\n        if arguments is not None:\n            msg += [' with arguments: ' + str(arguments)]\n\n        self.logger.info(\"\".join(msg))\n        self.attributes[ATTR_executable] = executable\n        if arguments is not None:\n            args = ''\n            xml_beginargs = '<jsdl-hpcpa:Argument>'\n            xml_endargs = '</jsdl-hpcpa:Argument>'\n            if isinstance(arguments, list):\n                for a in arguments:\n                    args += xml_beginargs + str(a) + xml_endargs\n            elif isinstance(arguments, str):\n                args = xml_beginargs + arguments + xml_endargs\n            elif isinstance(arguments, int):\n                args = xml_beginargs + str(arguments) + xml_endargs\n            self.attributes[ATTR_Arglist] = args\n        else:\n            self.unset_attributes([ATTR_Arglist])\n        self.set_attributes()\n\n    def create_script(self, body=None, asuser=None, hostname=None):\n        \"\"\"\n        Create a job script from a given body of text in a\n        temporary location\n\n        :param body: the body of the script\n        :type body: str or None\n        :param asuser: Optionally the user to own this script,\n                      defaults to current user\n        :type asuser: str or None\n        :param hostname: The host on which the job script is to\n                         be created\n        :type hostname: str or None\n        \"\"\"\n\n        if body is None:\n            return None\n\n        if isinstance(body, list):\n            body = '\\n'.join(body)\n\n        if self.platform == 'cray' or self.platform == 'craysim':\n            body = body.split(\"\\n\")\n            for i, line in enumerate(body):\n                if line.startswith(\"#PBS\") and 
\"select=\" in line:\n                    if 'Resource_List.vntype' in self.attributes:\n                        self.unset_attributes(['Resource_List.vntype'])\n                    line_arr = line.split(\" \")\n                    for j, element in enumerate(line_arr):\n                        select = element.startswith(\"select=\")\n                        lselect = element.startswith(\"-lselect=\")\n                        if select or lselect:\n                            if lselect:\n                                sel_str = element[9:]\n                            else:\n                                sel_str = element[7:]\n                            sel_str = self.add_cray_vntype(select=sel_str)\n                            if lselect:\n                                line_arr[j] = \"-lselect=\" + sel_str\n                            else:\n                                line_arr[j] = \"select=\" + sel_str\n                    body[i] = \" \".join(line_arr)\n            body = '\\n'.join(body)\n\n        # If the user has a userhost, the job will run from there\n        # so the script should be made there\n        if self.username:\n            user = PbsUser.get_user(self.username)\n            if user.host:\n                hostname = user.host\n                asuser = user.name\n\n        self.script_body = body\n        if self.du is None:\n            self.du = DshUtils()\n        # First create the temporary file as current user and only change\n        # its mode once the current user has written to it\n        fn = self.du.create_temp_file(hostname, prefix='PtlPbsJobScript',\n                                      asuser=asuser, body=body)\n        self.du.chmod(hostname, fn, mode=0o755)\n        self.script = fn\n        return fn\n\n    def create_subjob_id(self, job_array_id, subjob_index):\n        \"\"\"\n        insert subjob index into the square brackets of job array id\n\n        :param job_array_id: PBS parent array job id\n        
:type job_array_id: str\n        :param subjob_index: index of subjob\n        :type subjob_index: int\n        :returns: subjob id string\n        \"\"\"\n        idx = job_array_id.find('[]')\n        return job_array_id[:idx + 1] + str(subjob_index) + \\\n            job_array_id[idx + 1:]\n\n    def create_eatcpu_job(self, duration=None, hostname=None):\n        \"\"\"\n        Create a job that eats CPU indefinitely or for the given\n        duration of time\n\n        :param duration: The duration, in seconds, to consume CPU\n        :type duration: int\n        :param hostname: hostname on which to execute the job\n        :type hostname: str or None\n        \"\"\"\n        if self.du is None:\n            self.du = DshUtils()\n        shebang_line = '#!' + self.du.which(hostname, exe='python3')\n        body = \"\"\"\nimport signal\nimport sys\n\nx = 0\n\n\ndef receive_alarm(signum, stack):\n    sys.exit()\n\nsignal.signal(signal.SIGALRM, receive_alarm)\n\nif (len(sys.argv) > 1):\n    input_time = sys.argv[1]\n    print('Terminating after %s seconds' % input_time)\n    signal.alarm(int(input_time))\nelse:\n    print('Running indefinitely')\n\nwhile True:\n    x += 1\n\"\"\"\n        script_body = shebang_line + body\n        script_path = self.du.create_temp_file(hostname=hostname,\n                                               body=script_body,\n                                               suffix='.py')\n        pbs_conf = self.du.parse_pbs_config(hostname)\n        shell_path = os.path.join(pbs_conf['PBS_EXEC'],\n                                  'bin', 'pbs_python')\n        a = {ATTR_S: shell_path}\n        self.set_attributes(a)\n        mode = 0o755\n        if not self.du.chmod(hostname=hostname, path=script_path, mode=mode,\n                             sudo=True):\n            raise AssertionError(\"Failed to set permissions for file %s\"\n                                 \" to %s\" % (script_path, oct(mode)))\n        self.set_execargs(script_path, 
duration)\n        return script_path\n\n\nclass InteractiveJob(threading.Thread):\n\n    \"\"\"\n    An Interactive Job thread\n\n    Interactive Jobs are submitted as a thread that sets the jobid\n    as soon as it is returned by ``qsub -I``, such that the caller\n    can get back to monitoring the state of PBS while the interactive\n    session goes on in the thread.\n\n    The commands to be run within an interactive session are\n    specified in the job's interactive_script attribute as a list of\n    tuples, where the first item in each tuple is the command to run,\n    and the subsequent items are the expected returned data.\n\n    Implementation details:\n\n    Support for interactive jobs is currently done through the\n    pexpect module, which must be installed separately from PTL.\n    Interactive jobs are submitted through ``CLI`` only; there is no\n    API support for this operation yet.\n\n    The submission of an interactive job requires passing in job\n    attributes, the command to execute ``(i.e. path to qsub -I)``\n    and the hostname.\n\n    when not impersonating:\n\n    pexpect spawns the ``qsub -I`` command and expects a prompt\n    back; for each tuple in the interactive_script, it sends the\n    command and expects to match the return value.\n\n    when impersonating:\n\n    pexpect spawns ``sudo -u <user> qsub -I``. 
The rest is as\n    described in non-impersonating mode.\n    \"\"\"\n\n    logger = logging.getLogger(__name__)\n\n    pexpect_timeout = 15\n    pexpect_sleep_time = .1\n    du = DshUtils()\n\n    def __init__(self, job, cmd, host):\n        threading.Thread.__init__(self)\n        self.job = job\n        self.cmd = cmd\n        self.jobid = None\n        self.hostname = host\n        self._ru = \"\"\n        if self.du.get_platform() == \"shasta\":\n            self._ru = PbsUser.get_user(job.username)\n            if self._ru.host:\n                self.hostname = self._ru.host\n\n    def __del__(self):\n        del self.__dict__\n\n    def run(self):\n        \"\"\"\n        Run the interactive job\n        \"\"\"\n        try:\n            import pexpect\n        except Exception:\n            self.logger.error('pexpect module is required for '\n                              'interactive jobs')\n            return None\n\n        job = self.job\n        cmd = self.cmd\n\n        self.jobid = None\n        self.logger.info(\"submit interactive job as \" + job.username +\n                         \": \" + \" \".join(cmd))\n        if not hasattr(job, 'interactive_script'):\n            self.logger.debug('no interactive_script attribute on job')\n            return None\n\n        try:\n            # sleep to allow server to communicate with client\n            # this value is set empirically so tweaking may be\n            # needed\n            _st = self.pexpect_sleep_time\n            _to = self.pexpect_timeout\n            _sc = job.interactive_script\n            current_user = pwd.getpwuid(os.getuid())[0]\n            if current_user != job.username:\n                if hasattr(job, 'preserve_env') and job.preserve_env is True:\n                    cmd = (copy.copy(self.du.sudo_cmd) +\n                           ['-E', '-u', job.username] + cmd)\n                else:\n                    cmd = (copy.copy(self.du.sudo_cmd) +\n                           
['-u', job.username] + cmd)\n\n            self.logger.debug(cmd)\n            is_local = self.du.is_localhost(self.hostname)\n            _p = \"\"\n            if is_local:\n                _p = pexpect.spawn(\" \".join(cmd), timeout=_to)\n            else:\n                self.logger.info(\"Submit interactive job from a remote host\")\n                if self.du.get_platform() == \"shasta\":\n                    ssh_cmd = self.du.rsh_cmd + \\\n                        ['-p', self._ru.port,\n                         self._ru.name + '@' + self.hostname]\n                    _p = pexpect.spawn(\" \".join(ssh_cmd), timeout=_to)\n                    _p.sendline(\" \".join(self.cmd))\n                else:\n                    ssh_cmd = self.du.rsh_cmd + [self.hostname]\n                    _p = pexpect.spawn(\" \".join(ssh_cmd), timeout=_to)\n                    _p.sendline(\" \".join(cmd))\n            self.job.interactive_handle = _p\n            time.sleep(_st)\n            expstr = \"qsub: waiting for job \"\n            expstr += r\"(?P<jobid>\\d+.[0-9A-Za-z-.]+) to start\"\n            _p.expect(expstr)\n            if _p.match:\n                self.jobid = _p.match.group('jobid').decode()\n            else:\n                _p.close()\n                self.job.interactive_handle = None\n                return None\n            self.logger.debug(_p.after.decode())\n            _to = 5\n            for _l in _sc:\n                (cmd, out) = _l\n                if 'sleep ' in cmd:\n                    timev = cmd.split(' ')[1]\n                    if timev.isnumeric():\n                        _to = int(timev)\n                self.logger.info('sending: ' + cmd)\n                _p.sendline(cmd)\n                self.logger.info('expecting: ' + out)\n                _p.expect(out)\n            self.logger.info('sending exit')\n            _p.sendline(\"exit\")\n            while True:\n                try:\n                    # timeout value is same as sleep 
time of job\n                    _p.read_nonblocking(timeout=_to)\n                except Exception:\n                    break\n            if _p.isalive():\n                _p.close()\n            self.job.interactive_handle = None\n        except Exception:\n            self.logger.error(traceback.format_exc())\n            return None\n        return self.jobid\n"
  },
  {
    "path": "test/fw/ptl/lib/ptl_sched.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport ast\nimport copy\nimport datetime\nimport grp\nimport json\nimport logging\nimport os\nimport pwd\nimport re\nimport string\nimport sys\nimport tempfile\nimport time\nimport traceback\nfrom distutils.version import LooseVersion\nfrom operator import itemgetter\n\nfrom ptl.utils.pbs_cliutils import CliUtils\nfrom ptl.utils.pbs_testusers import (ROOT_USER, TEST_USER, PbsUser,\n                                     DAEMON_SERVICE_USER)\nfrom ptl.lib.ptl_service import PBSService, PBSInitServices\nfrom ptl.lib.ptl_fairshare import (FairshareTree, FairshareNode,\n                                   Fairshare)\nfrom ptl.lib.ptl_entities import Holidays\nfrom ptl.lib.ptl_error import (PbsManagerError, PbsStatusError,\n                               PbsInitServicesError, PbsServiceError,\n                               PtlLogMatchError, PbsSchedConfigError,\n                               PbsFairshareError)\nfrom ptl.lib.ptl_constants import (SCHED, MGR_CMD_SET, MGR_CMD_UNSET,\n                                   MGR_CMD_LIST, MGR_OBJ_SCHED, NE)\n\n\nclass Scheduler(PBSService):\n\n    \"\"\"\n    Container of Scheduler related properties\n\n    :param hostname: The hostname on which the scheduler instance\n                     is operating\n    :type hostname: str or None\n    :param server: A PBS server instance to which this scheduler\n                   
is associated\n    :param pbsconf_file: path to a PBS configuration file\n    :type pbsconf_file: str or None\n    :param snapmap: A dictionary of PBS objects (node, server, etc.)\n                    to mapped files from PBS snap directory\n    :type snapmap: Dictionary\n    :param snap: path to PBS snap directory (this overrides\n                 snapmap)\n    :type snap: str or None\n    :param db_access: set to either a file containing credentials\n                       for DB access or a dictionary containing\n                       ``{'dbname':...,'user':...,'port':...}``\n    :type db_access: str or dictionary\n    \"\"\"\n\n    # A vanilla scheduler configuration. This set may change based on\n    # updates to PBS\n    sched_dflt_config = {\n        \"backfill\": \"true        ALL\",\n        \"backfill_prime\": \"false ALL\",\n        \"strict_ordering\": \"false ALL\",\n        \"provision_policy\": \"\\\"aggressive_provision\\\"\",\n        \"preempt_order\": \"\\\"SCR\\\"\",\n        \"fairshare_entity\": \"euser\",\n        \"dedicated_prefix\": \"ded\",\n        \"primetime_prefix\": \"p_\",\n        \"nonprimetime_prefix\": \"np_\",\n        \"preempt_queue_prio\": \"150\",\n        \"preempt_prio\": \"\\\"express_queue, normal_jobs\\\"\",\n        \"prime_exempt_anytime_queues\": \"false\",\n        \"round_robin\": \"False    all\",\n        \"fairshare_usage_res\": \"cput\",\n        \"smp_cluster_dist\": \"pack\",\n        \"fair_share\": \"false     ALL\",\n        \"preempt_sort\": \"min_time_since_start\",\n        \"node_sort_key\": \"\\\"sort_priority HIGH\\\" ALL\",\n        \"sort_queues\": \"true     ALL\",\n        \"by_queue\": \"True                ALL\",\n        \"preemptive_sched\": \"true        ALL\",\n        \"resources\": \"\\\"ncpus, mem, arch, host, vnode, aoe\\\"\",\n    }\n\n    sched_config_options = [\"node_group_key\",\n                            \"fairshare_enforce_no_shares\",\n                            
\"strict_ordering\",\n                            \"resource_unset_infinite\",\n                            \"unknown_shares\",\n                            \"dedicated_prefix\",\n                            \"sort_queues\",\n                            \"backfill\",\n                            \"primetime_prefix\",\n                            \"nonprimetime_prefix\",\n                            \"backfill_prime\",\n                            \"prime_exempt_anytime_queues\",\n                            \"prime_spill\",\n                            \"prime_exempt_anytime_queues\",\n                            \"prime_spill\",\n                            \"resources\",\n                            \"mom_resources\",\n                            \"smp_cluster_dist\",\n                            \"preempt_queue_prio\",\n                            \"preempt_suspend\",\n                            \"preempt_checkpoint\",\n                            \"preempt_requeue\",\n                            \"preemptive_sched\",\n                            \"node_group_key\",\n                            \"fairshare_enforce_no_shares\",\n                            \"strict_ordering\",\n                            \"resource_unset_infinite\",\n                            \"provision_policy\",\n                            \"resv_confirm_ignore\",\n                            \"allow_aoe_calendar\",\n                            \"max_job_check\",\n                            \"preempt_attempts\",\n                            \"update_comments\",\n                            \"sort_by\",\n                            \"key\",\n                            \"assign_ssinodes\",\n                            \"cpus_per_ssinode\",\n                            \"mem_per_ssinode\",\n                            \"strict_fifo\",\n                            \"mem_per_ssinode\",\n                            \"strict_fifo\"\n                            ]\n\n    def __init__(self, 
server, hostname=None, pbsconf_file=None,\n                 snapmap={}, snap=None, db_access=None, id='default',\n                 sched_priv=None):\n\n        self.sched_config_file = None\n        self.dflt_holidays_file = None\n        self.holidays_file = None\n        self.sched_config = {}\n        self._sched_config_comments = {}\n        self._config_order = []\n        self.dedicated_time_file = None\n        self.dedicated_time = None\n        self.dedicated_time_as_str = None\n        self.fairshare_tree = None\n        self.resource_group = None\n        self.holidays_obj = None\n        self.server = None\n        self.db_access = None\n        self.user = None\n\n        self.server = server\n        if snap is None and self.server.snap is not None:\n            snap = self.server.snap\n        if (len(snapmap) == 0) and (len(self.server.snapmap) != 0):\n            snapmap = self.server.snapmap\n\n        if hostname is None:\n            hostname = self.server.hostname\n\n        super().__init__(hostname, pbsconf_file=pbsconf_file,\n                         snap=snap, snapmap=snapmap)\n        _m = ['scheduler ', self.shortname]\n        if pbsconf_file is not None:\n            _m += ['@', pbsconf_file]\n        _m += [': ']\n        self.logprefix = \"\".join(_m)\n        self.pi = PBSInitServices(hostname=self.hostname,\n                                  conf=self.pbs_conf_file)\n        self.pbs_conf = self.server.pbs_conf\n        self.sc_name = id\n\n        self.user = DAEMON_SERVICE_USER\n        self.fairshare = Fairshare(self.has_snap, self.pbs_conf, self.sc_name,\n                                   self.hostname, self.user)\n\n        self.dflt_sched_config_file = os.path.join(self.pbs_conf['PBS_EXEC'],\n                                                   'etc', 'pbs_sched_config')\n\n        self.dflt_holidays_file = os.path.join(self.pbs_conf['PBS_EXEC'],\n                                               'etc', 'pbs_holidays')\n\n        
self.dflt_resource_group_file = os.path.join(self.pbs_conf['PBS_EXEC'],\n                                                     'etc',\n                                                     'pbs_resource_group')\n        self.dflt_dedicated_file = os.path.join(self.pbs_conf['PBS_EXEC'],\n                                                'etc',\n                                                'pbs_dedicated')\n        self.setup_sched_priv(sched_priv)\n        self.setup_sched_logs()\n\n        self.db_access = db_access\n\n        self.version = None\n\n    def setup_sched_priv(self, sched_priv=None):\n        \"\"\"\n        Initialize Scheduler() member variables on initialization or if\n        sched_priv changes\n        \"\"\"\n        if sched_priv is None:\n            if 'sched_priv' in self.attributes:\n                sched_priv = self.attributes['sched_priv']\n            else:\n                sched_priv = os.path.join(self.pbs_conf['PBS_HOME'],\n                                          'sched_priv')\n\n        self.du.chown(self.hostname, sched_priv, uid=self.user,\n                      recursive=True, sudo=True)\n\n        self.sched_config_file = os.path.join(sched_priv, 'sched_config')\n        self.resource_group_file = os.path.join(sched_priv, 'resource_group')\n        self.holidays_file = os.path.join(sched_priv, 'holidays')\n        self.set_dedicated_time_file(os.path.join(sched_priv,\n                                                  'dedicated_time'))\n\n        if not os.path.exists(sched_priv):\n            return\n\n        self.parse_sched_config()\n\n        self.fairshare_tree = self.fairshare.query_fairshare()\n        rg = self.parse_resource_group(self.hostname, self.resource_group_file)\n        self.resource_group = rg\n\n        self.holidays_obj = Holidays()\n        self.holidays_parse_file(level=logging.DEBUG)\n\n    def setup_sched_logs(self):\n        if 'sched_log' in self.attributes:\n            sched_logs = 
self.attributes['sched_log']\n        else:\n            sched_logs = os.path.join(self.pbs_conf['PBS_HOME'],\n                                      'sched_logs')\n\n        self.du.chown(self.hostname, sched_logs, uid=self.user,\n                      recursive=True, sudo=True)\n\n    def initialise_service(self):\n        \"\"\"\n        initialise the scheduler object\n        \"\"\"\n        super().initialise_service()\n        try:\n            attrs = self.server.status(SCHED, level=logging.DEBUG,\n                                       db_access=self.db_access,\n                                       id=self.sc_name)\n            if attrs is not None and len(attrs) > 0:\n                self.attributes = attrs[0]\n        except (PbsManagerError, PbsStatusError) as e:\n            self.logger.error('Error querying scheduler %s' % e.msg)\n\n    def start(self, sched_home=None, args=None, launcher=None):\n        \"\"\"\n        Start the scheduler\n        :param sched_home: Path to scheduler log and home directory\n        :type sched_home: str\n        :param args: Arguments required to start the scheduler\n        :type args: str\n        :param launcher: Optional utility to invoke the launch of the service\n        :type launcher: str or list\n        \"\"\"\n        if self.attributes['id'] != 'default':\n            cmd = [os.path.join(self.pbs_conf['PBS_EXEC'],\n                                'sbin', 'pbs_sched')]\n            cmd += ['-I', self.attributes['id']]\n            if sched_home is not None:\n                cmd += ['-d', sched_home]\n            try:\n                ret = self.du.run_cmd(self.hostname, cmd, sudo=True,\n                                      logerr=False, level=logging.INFOCLI)\n            except PbsInitServicesError as e:\n                raise PbsServiceError(rc=e.rc, rv=e.rv, msg=e.msg)\n            self.server.manager(MGR_CMD_LIST, SCHED)\n            return ret\n\n        if args is not None or launcher is not 
None:\n            return super()._start(inst=self, args=args,\n                                  launcher=launcher)\n        else:\n            try:\n                rv = self.pi.start_sched()\n                pid = self._validate_pid(self)\n                if pid is None:\n                    raise PbsServiceError(rv=False, rc=-1,\n                                          msg=\"Could not find PID\")\n            except PbsInitServicesError as e:\n                raise PbsServiceError(rc=e.rc, rv=e.rv, msg=e.msg)\n            return rv\n\n    def stop(self, sig=None):\n        \"\"\"\n        Stop the PBS scheduler\n\n        :param sig: Signal to stop the PBS scheduler\n        :type sig: str\n        \"\"\"\n        if sig is not None:\n            self.logger.info(self.logprefix + 'stopping Scheduler on host ' +\n                             self.hostname)\n            return super()._stop(sig, inst=self)\n        elif self.attributes['id'] != 'default':\n            self.logger.info(self.logprefix + 'stopping MultiSched ' +\n                             self.attributes['id'] + ' on host ' +\n                             self.hostname)\n            return super()._stop(inst=self)\n        else:\n            try:\n                self.pi.stop_sched()\n            except PbsInitServicesError as e:\n                raise PbsServiceError(rc=e.rc, rv=e.rv, msg=e.msg)\n            return True\n\n    def restart(self):\n        \"\"\"\n        Restart the PBS scheduler\n        \"\"\"\n        if self.isUp():\n            if not self.stop():\n                return False\n        return self.start()\n\n    def log_match(self, msg=None, id=None, n=50, tail=True, allmatch=False,\n                  regexp=False, max_attempts=None, interval=None,\n                  starttime=None, endtime=None, level=logging.INFO,\n                  existence=True):\n        \"\"\"\n        Match given ``msg`` in given ``n`` lines of Scheduler log\n\n        :param msg: log message to 
match, can be regex also when\n                    ``regexp`` is True\n    :type msg: str\n    :param id: The id of the object to trace. Only used for\n                   tracejob\n    :type id: str\n        :param n: 'ALL' or the number of lines to search through,\n                  defaults to 50\n        :type n: str or int\n        :param tail: If true (default), starts from the end of\n                     the file\n        :type tail: bool\n        :param allmatch: If True, all matching lines out of those\n                         parsed are returned as a list. Defaults\n                         to False\n        :type allmatch: bool\n        :param regexp: If true msg is a Python regular expression.\n                       Defaults to False\n        :type regexp: bool\n        :param max_attempts: the number of attempts to make to find\n                             a matching entry\n        :type max_attempts: int\n        :param interval: the interval between attempts\n        :type interval: int\n        :param starttime: If set ignore matches that occur before\n                          specified time\n        :type starttime: float\n        :param endtime: If set ignore matches that occur after\n                        specified time\n        :type endtime: float\n        :param level: The logging level, defaults to INFO\n        :type level: int\n        :param existence: If True (default), check for existence of\n                        given msg, else check for non-existence of\n                        given msg.\n        :type existence: bool\n\n        :return: (x,y) where x is the matching line\n                 number and y the line itself. 
If allmatch is True,\n                 a list of tuples is returned.\n        :rtype: tuple\n        :raises PtlLogMatchError:\n                When ``existence`` is True and given\n                ``msg`` is not found in ``n`` line\n                Or\n                When ``existence`` is False and given\n                ``msg`` found in ``n`` line.\n\n        .. note:: The matching line number is relative to the record\n                  number, not the absolute line number in the file.\n        \"\"\"\n        return self._log_match(self, msg, id, n, tail, allmatch, regexp,\n                               max_attempts, interval, starttime, endtime,\n                               level=level, existence=existence)\n\n    def run_scheduling_cycle(self):\n        \"\"\"\n        Convenience method to start and finish a sched cycle\n        \"\"\"\n        sched = self.attributes['id']\n        old_val = self.server.status(SCHED, 'scheduling', id=sched)[\n            0]['scheduling']\n\n        # Make sure that we aren't in a sched cycle already\n        self.server.manager(MGR_CMD_SET, SCHED, {\n                            'scheduling': 'False'}, id=sched)\n\n        # Kick a new cycle\n        tbefore = time.time()\n        self.server.manager(MGR_CMD_SET, SCHED, {\n                            'scheduling': 'True'}, id=sched)\n        self.log_match(\"Starting Scheduling\",\n                       starttime=tbefore)\n\n        if old_val == 'False':\n            # This will also ensure that the sched cycle is over before\n            # returning\n            self.server.manager(MGR_CMD_SET, SCHED, {'scheduling': 'False'},\n                                id=sched)\n        else:\n            self.server.expect(SCHED, {'state': 'scheduling'}, op=NE,\n                               id=sched, interval=1, max_attempts=1200,\n                               trigger_sched_cycle=False)\n\n    def pbs_version(self):\n        \"\"\"\n        Get the version of the 
scheduler instance\n        \"\"\"\n        if self.version:\n            return self.version\n\n        version = self.log_match('pbs_version', tail=False)\n        if version:\n            version = version[1].strip().split('=')[1]\n        else:\n            version = \"unknown\"\n\n        self.version = LooseVersion(version)\n\n        return self.version\n\n    def parse_sched_config(self, schd_cnfg=None):\n        \"\"\"\n        Parse a scheduling configuration file into a dictionary.\n        Special handling of identical keys ``(e.g., node_sort_key)``\n        is done using a list of values as the value of the key.\n        When printed back to file, each entry in the list\n        gets written on a line of its own. For example, the Python\n        dictionary entry:\n\n        ``{'node_sort_key':\n        ['\"ncpus HIGH unused\" prime', '\"node_priority HIGH\" non-prime']}``\n\n        will get written as:\n\n        ``node_sort_key: \"ncpus HIGH unused\" prime``\n        ``node_sort_key: \"node_priority HIGH\" non-prime``\n\n        Returns sched_config dictionary that gets reinitialized\n        every time this method is called.\n        \"\"\"\n        # sched_config is initialized\n        if self.sched_config:\n            del self.sched_config\n            self.sched_config = {}\n            self._sched_config_comments = {}\n            self._config_order = []\n        if schd_cnfg is None:\n            if self.sched_config_file is not None:\n                schd_cnfg = self.sched_config_file\n            else:\n                self.logger.error('no scheduler configuration file to parse')\n                return False\n\n        try:\n            conf_opts = self.du.cat(self.hostname, schd_cnfg,\n                                    sudo=(not self.has_snap),\n                                    level=logging.DEBUG2)['out']\n        except Exception:\n            self.logger.error('error parsing scheduler configuration')\n            return False\n\n  
      _comment = []\n        conf_re = re.compile(\n            r'[#]?[\\s]*(?P<conf_id>[\\w]+):[\\s]*(?P<conf_val>.*)')\n        for line in conf_opts:\n            m = conf_re.match(line)\n            if m:\n                key = m.group('conf_id')\n                val = m.group('conf_val')\n                # line is a comment, it could be a commented out scheduling\n                # option, or the description of an option. It could also be\n                # that part of the description is an example setting of the\n                # option.\n                # We must keep track of commented out options in order to\n                # rewrite the configuration in the same order as it was defined\n                if line.startswith('#'):\n                    if key in self.sched_config_options:\n                        _comment += [line]\n                        if key in self._sched_config_comments:\n                            self._sched_config_comments[key] += _comment\n                            _comment = []\n                        else:\n                            self._sched_config_comments[key] = _comment\n                            _comment = []\n                        if key not in self._config_order:\n                            self._config_order.append(key)\n                    else:\n                        _comment += [line]\n                    continue\n\n                if key not in self._sched_config_comments:\n                    self._sched_config_comments[key] = _comment\n                else:\n                    self._sched_config_comments[key] += _comment\n                if key not in self._config_order:\n                    self._config_order.append(key)\n\n                _comment = []\n                if key in self.sched_config:\n                    if isinstance(self.sched_config[key], list):\n                        if isinstance(val, list):\n                            self.sched_config[key].extend(val)\n                    
    else:\n                            self.sched_config[key].append(val)\n                    else:\n                        if isinstance(val, list):\n                            self.sched_config[key] = [self.sched_config[key]]\n                            self.sched_config[key].extend(val)\n                        else:\n                            self.sched_config[key] = [self.sched_config[key],\n                                                      val]\n                else:\n                    self.sched_config[key] = val\n            else:\n                _comment += [line]\n        self._sched_config_comments['PTL_SCHED_CONFIG_TAIL'] = _comment\n        return True\n\n    def check_defaults(self, config):\n        \"\"\"\n        Check the values in argument config against default values\n        \"\"\"\n\n        if len(config.keys()) == 0:\n            return\n        for k, v in self.sched_dflt_config.items():\n            if k in config:\n                s1 = v\n                s1 = s1.replace(\" \", \"\")\n                s1 = s1.replace(\"\\t\", \"\").strip()\n                s2 = config[k]\n                s2 = s2.replace(\" \", \"\")\n                s2 = s2.replace(\"\\t\", \"\").strip()\n\n                if s1 != s2:\n                    self.logger.debug(k + ' non-default: ' + v +\n                                      ' != ' + config[k])\n\n    def apply_config(self, config=None, validate=True, path=None):\n        \"\"\"\n        Apply the configuration specified by config\n\n        :param config: Configurations to set. 
Default: self.\n                       sched_config\n        :param validate: If True (the default) validate that\n                         settings did not yield an error.\n                         Validation is done by parsing the\n                         scheduler log which, in some cases may\n                         be slow and therefore undesirable.\n        :type validate: bool\n        :param path: Optional path to file to which configuration\n                     is written. If None, the configuration is\n                     written to PBS_HOME/sched_priv/sched_config\n        :type path: str\n        :returns: True on success and False otherwise. Success\n                  means that upon applying the new configuration\n                  the scheduler did not emit an\n                  \"Error reading line\" in its log file.\n        \"\"\"\n\n        if config is None:\n            config = self.sched_config\n\n        if len(config) == 0:\n            return True\n\n        reconfig_time = time.time()\n        try:\n            fn = self.du.create_temp_file()\n            with open(fn, \"w\", encoding=\"utf-8\") as fd:\n                for k in self._config_order:\n                    if k in config:\n                        if k in self._sched_config_comments:\n                            fd.write(\"\\n\".join(self._sched_config_comments[k]))\n                            fd.write(\"\\n\")\n                        v = config[k]\n                        if isinstance(v, list):\n                            for val in v:\n                                fd.write(k + \": \" + str(val) + \"\\n\")\n                        else:\n                            fd.write(k + \": \" + str(v) + \"\\n\")\n                    elif k in self._sched_config_comments:\n                        fd.write(\"\\n\".join(self._sched_config_comments[k]))\n                        fd.write(\"\\n\")\n                for k, v in self.sched_config.items():\n                    if k 
not in self._config_order:\n                        if isinstance(v, list):\n                            for val in v:\n                                fd.write(k + \": \" + str(val).strip() + \"\\n\")\n                        else:\n                            fd.write(k + \": \" + str(v).strip() + \"\\n\")\n\n                if 'PTL_SCHED_CONFIG_TAIL' in self._sched_config_comments:\n                    fd.write(\"\\n\".join(\n                        self._sched_config_comments['PTL_SCHED_CONFIG_TAIL']))\n                    fd.write(\"\\n\")\n\n            if path is None:\n                if 'sched_priv' in self.attributes:\n                    sched_priv = self.attributes['sched_priv']\n                else:\n                    sched_priv = os.path.join(self.pbs_conf['PBS_HOME'],\n                                              \"sched_priv\")\n                sp = os.path.join(sched_priv, \"sched_config\")\n            else:\n                sp = path\n            self.du.run_copy(self.hostname, src=fn, dest=sp,\n                             preserve_permission=False,\n                             sudo=True, uid=self.user)\n            os.remove(fn)\n\n            self.logger.debug(self.logprefix + \"updated configuration\")\n        except Exception:\n            m = self.logprefix + 'error in apply_config '\n            # format_exc() returns the traceback as a string;\n            # print_exc() prints it and returns None\n            self.logger.error(m + traceback.format_exc())\n            raise PbsSchedConfigError(rc=1, rv=False, msg=m)\n\n        if validate:\n            self.get_pid()\n            self.signal('-HUP')\n            try:\n                self.log_match(\"Sched;reconfigure;Scheduler is reconfiguring\",\n                               starttime=reconfig_time)\n                self.log_match(\"Error reading line\", max_attempts=2,\n                               starttime=reconfig_time, existence=False)\n            except PtlLogMatchError as log_error:\n                self.logger.error(log_error.msg)\n                _msg = 'Error in validating 
sched_config changes'\n                raise PbsSchedConfigError(rc=1, rv=False,\n                                          msg=_msg)\n        return True\n\n    def set_sched_config(self, confs={}, apply=True, validate=True):\n        \"\"\"\n        set a ``sched_config`` property\n\n        :param confs: dictionary of key value sched_config entries\n        :type confs: Dictionary\n        :param apply: if True (the default), apply configuration.\n        :type apply: bool\n        :param validate: if True (the default), validate the\n                         configuration settings.\n        :type validate: bool\n        \"\"\"\n        self.parse_sched_config()\n        self.logger.info(self.logprefix + \"config \" + str(confs))\n        self.sched_config = {**self.sched_config, **confs}\n        if apply:\n            try:\n                self.apply_config(validate=validate)\n            except PbsSchedConfigError as sched_error:\n                _msg = sched_error.msg\n                self.logger.error(_msg)\n                for k in confs:\n                    del self.sched_config[k]\n                self.apply_config(validate=validate)\n                raise PbsSchedConfigError(rc=1, rv=False, msg=_msg)\n        return True\n\n    def add_server_dyn_res(self, custom_resource, script_body=None,\n                           res_file=None, apply=True, validate=True,\n                           dirname=None, host=None, perm=0o700,\n                           prefix='PtlPbsSvrDynRes', suffix='.scr'):\n        \"\"\"\n        Add a root owned server dynamic resource script or file to the\n        scheduler configuration.\n\n        :param custom_resource: The name of the custom resource to\n                                define\n        :type custom_resource: str\n        :param script_body: The body of the server dynamic resource\n        :param res_file: Alternatively to passing the script body, use\n                     the file instead\n        :type 
res_file: str or None\n        :param apply: if True (the default), apply configuration.\n        :type apply: bool\n        :param validate: if True (the default), validate the\n                         configuration settings.\n        :type validate: bool\n        :param dirname: the file will be created in this directory\n        :type dirname: str or None\n        :param host: the hostname on which dyn res script is created\n        :type host: str or None\n        :param perm: perm to use while creating scripts\n                     (must be octal like 0o777)\n        :param prefix: the file name will begin with this prefix\n        :type prefix: str\n        :param suffix: the file name will end with this suffix\n        :type suffix: str\n        :returns: Absolute path of the dynamic resource script\n        \"\"\"\n        if res_file is not None:\n            with open(res_file) as f:\n                script_body = f.readlines()\n                self.du.chmod(hostname=host, path=res_file, mode=perm,\n                              sudo=True)\n        else:\n            if dirname is None:\n                dirname = self.pbs_conf['PBS_HOME']\n            tmp_file = self.du.create_temp_file(prefix=prefix, suffix=suffix,\n                                                body=script_body,\n                                                hostname=host)\n            res_file = os.path.join(dirname, tmp_file.split(os.path.sep)[-1])\n            self.du.run_copy(host, src=tmp_file, dest=res_file, sudo=True,\n                             preserve_permission=False)\n\n            user = self.user\n            group = pwd.getpwnam(str(user)).pw_gid\n\n            self.du.chown(hostname=host, path=res_file, uid=user, gid=group,\n                          sudo=True)\n            self.du.chmod(hostname=host, path=res_file, mode=perm, sudo=True)\n            if host is None:\n                self.dyn_created_files.append(res_file)\n\n        self.logger.info(self.logprefix 
+ \"adding server dyn res \" + res_file)\n        self.logger.info(\"-\" * 30)\n        self.logger.info(script_body)\n        self.logger.info(\"-\" * 30)\n\n        a = {'server_dyn_res': '\"' + custom_resource + ' !' + res_file + '\"'}\n        self.set_sched_config(a, apply=apply, validate=validate)\n        return res_file\n\n    def unset_sched_config(self, name, apply=True):\n        \"\"\"\n        Delete a ``sched_config`` entry\n\n        :param name: the entry to delete from sched_config\n        :type name: str\n        :param apply: if True, apply configuration. Defaults to True\n        :type apply: bool\n        \"\"\"\n        self.parse_sched_config()\n        if name not in self.sched_config:\n            return True\n        self.logger.info(self.logprefix + \"unsetting config \" + name)\n        del self.sched_config[name]\n\n        if apply:\n            return self.apply_config()\n\n    def set_dedicated_time_file(self, filename):\n        \"\"\"\n        Set the path to a dedicated time\n        \"\"\"\n        self.logger.info(self.logprefix + \" setting dedicated time file to \" +\n                         str(filename))\n        self.dedicated_time_file = filename\n\n    def revert_to_defaults(self):\n        \"\"\"\n        Revert scheduler configuration to defaults.\n\n        :returns: True on success, False otherwise\n        \"\"\"\n        self.logger.info(self.logprefix +\n                         \"reverting configuration to defaults\")\n\n        ignore_attrs = ['id', 'pbs_version', 'sched_host', 'state']\n        unsetattrs = []\n        for k in self.attributes.keys():\n            if k not in ignore_attrs:\n                unsetattrs.append(k)\n        if len(unsetattrs) > 0:\n            self.server.manager(MGR_CMD_UNSET, SCHED, unsetattrs)\n        self.clear_dedicated_time(hup=False)\n        if self.du.cmp(self.hostname, self.dflt_resource_group_file,\n                       self.resource_group_file, sudo=True) != 0:\n     
       self.du.run_copy(self.hostname, src=self.dflt_resource_group_file,\n                             dest=self.resource_group_file,\n                             preserve_permission=False,\n                             sudo=True, uid=self.user)\n        rc = self.holidays_revert_to_default()\n        if self.du.cmp(self.hostname, self.dflt_sched_config_file,\n                       self.sched_config_file, sudo=True) != 0:\n            self.du.run_copy(self.hostname, src=self.dflt_sched_config_file,\n                             dest=self.sched_config_file,\n                             preserve_permission=False,\n                             sudo=True, uid=self.user)\n        if self.du.cmp(self.hostname, self.dflt_dedicated_file,\n                       self.dedicated_time_file, sudo=True) != 0:\n            self.du.run_copy(self.hostname, src=self.dflt_dedicated_file,\n                             dest=self.dedicated_time_file,\n                             preserve_permission=False, sudo=True,\n                             uid=self.user)\n\n        self.signal('-HUP')\n        if self.platform == 'cray' or self.platform == 'craysim':\n            self.add_resource('vntype')\n            self.add_resource('hbmem')\n        # Revert fairshare usage\n        self.fairshare.revert_fairshare()\n        self.fairshare_tree = None\n        self.resource_group = None\n        self.parse_sched_config()\n        return self.isUp()\n\n    def create_scheduler(self, sched_home=None):\n        \"\"\"\n        Start the scheduler, creating the required directories for it\n        :param sched_home: path of scheduler home and log directory\n        :type sched_home: str\n        \"\"\"\n        if sched_home is None:\n            sched_home = self.server.pbs_conf['PBS_HOME']\n        sched_priv_dir = os.path.join(sched_home,\n                                      self.attributes['sched_priv'])\n        sched_logs_dir = os.path.join(sched_home,\n                                
      self.attributes['sched_log'])\n        self.server.update_special_attr(SCHED, id=self.attributes['id'])\n        if not os.path.exists(sched_priv_dir):\n            self.du.mkdir(path=sched_priv_dir, sudo=True)\n            if self.user.name != 'root':\n                self.du.chown(hostname=self.hostname, path=sched_priv_dir,\n                              sudo=True, uid=self.user)\n            self.du.run_copy(self.hostname, src=self.dflt_resource_group_file,\n                             dest=self.resource_group_file, mode=0o644,\n                             sudo=True, uid=self.user)\n            self.du.run_copy(self.hostname, src=self.dflt_holidays_file,\n                             dest=self.holidays_file, mode=0o644,\n                             sudo=True, uid=self.user)\n            self.du.run_copy(self.hostname, src=self.dflt_sched_config_file,\n                             dest=self.sched_config_file, mode=0o644,\n                             sudo=True, uid=self.user)\n            self.du.run_copy(self.hostname, src=self.dflt_dedicated_file,\n                             dest=self.dedicated_time_file, mode=0o644,\n                             sudo=True, uid=self.user)\n        if not os.path.exists(sched_logs_dir):\n            self.du.mkdir(path=sched_logs_dir, sudo=True)\n            if self.user.name != 'root':\n                self.du.chown(hostname=self.hostname, path=sched_logs_dir,\n                              sudo=True, uid=self.user)\n\n        self.setup_sched_priv(sched_priv=sched_priv_dir)\n\n    def save_configuration(self, outfile=None, mode='w'):\n        \"\"\"\n        Save scheduler configuration\n\n        :param outfile: Optional Path to a file to which configuration\n                        is saved, when not provided, data is saved in\n                        class variable saved_config\n        :type outfile: str\n        :param mode: mode to use to access outfile. 
Defaults to\n                     'w' (overwrite).\n        :type mode: str\n        :returns: True on success and False otherwise\n        \"\"\"\n        conf = {}\n        if 'sched_priv' in self.attributes:\n            sched_priv = self.attributes['sched_priv']\n        else:\n            sched_priv = os.path.join(\n                self.pbs_conf['PBS_HOME'], 'sched_priv')\n        sc = os.path.join(sched_priv, 'sched_config')\n        self._save_config_file(conf, sc)\n        rg = os.path.join(sched_priv, 'resource_group')\n        self._save_config_file(conf, rg)\n        dt = os.path.join(sched_priv, 'dedicated_time')\n        self._save_config_file(conf, dt)\n        hd = os.path.join(sched_priv, 'holidays')\n        self._save_config_file(conf, hd)\n\n        self.server.saved_config[MGR_OBJ_SCHED] = conf\n        if outfile is not None:\n            try:\n                with open(outfile, mode) as f:\n                    json.dump(self.server.saved_config, f)\n                    self.server.saved_config[MGR_OBJ_SCHED].clear()\n            except Exception:\n                self.logger.error('error saving configuration ' + outfile)\n                return False\n\n        return True\n\n    def load_configuration(self, infile):\n        \"\"\"\n        Load scheduler configuration from saved file infile\n        \"\"\"\n        rv = self._load_configuration(infile, MGR_OBJ_SCHED)\n        self.signal('-HUP')\n        return rv\n\n    def get_resources(self, exclude=[]):\n        \"\"\"\n        Returns a list of allocatable resources.\n\n        :param exclude: if set, excludes the named resources, if\n                        they exist, from the resulting list\n        :type exclude: List\n        \"\"\"\n        if 'resources' not in self.sched_config:\n            return None\n        resources = self.sched_config['resources']\n        resources = resources.replace('\"', '')\n        resources = resources.replace(' ', '')\n        res = 
resources.split(',')\n        if len(exclude) > 0:\n            for e in exclude:\n                if e in res:\n                    res.remove(e)\n        return res\n\n    def add_resource(self, name, apply=True):\n        \"\"\"\n        Add a resource to ``sched_config``.\n\n        :param name: the resource name to add\n        :type name: str\n        :param apply: if True, apply configuration. Defaults to True\n        :type apply: bool\n        :returns: True on success and False otherwise.\n                  Returns True if the resource is already defined.\n        \"\"\"\n        # if the sched_config has not been read in yet, parse it\n        if not self.sched_config:\n            self.parse_sched_config()\n\n        if 'resources' in self.sched_config:\n            resources = self.sched_config['resources']\n            resources = resources.replace('\"', '')\n            splitres = [r.strip() for r in resources.split(\",\")]\n            if name in splitres:\n                return True\n            resources = '\"' + resources + ', ' + name + '\"'\n        else:\n            resources = '\"' + name + '\"'\n\n        return self.set_sched_config({'resources': resources}, apply=apply)\n\n    def remove_resource(self, name, apply=True):\n        \"\"\"\n        Remove a resource from ``sched_config``.\n\n        :param name: the resource name to remove\n        :type name: str\n        :param apply: if True, apply configuration. 
Defaults to True\n        :type apply: bool\n        :returns: True on success and False otherwise\n        \"\"\"\n        # if the sched_config has not been read in yet, parse it\n        if not self.sched_config:\n            self.parse_sched_config()\n\n        if 'resources' in self.sched_config:\n            resources = self.sched_config['resources']\n            resources = resources.replace('\"', '')\n            splitres = [r.strip() for r in resources.split(\",\")]\n            if name not in splitres:\n                return True\n\n            newres = []\n            for r in splitres:\n                if r != name:\n                    newres.append(r)\n\n            resources = '\"' + \",\".join(newres) + '\"'\n            return self.set_sched_config({'resources': resources}, apply=apply)\n        return True\n\n    def holidays_revert_to_default(self, level=logging.INFO):\n        \"\"\"\n        Revert holidays file to default\n        \"\"\"\n        self.logger.log(level, self.logprefix +\n                        \"reverting holidays file to default\")\n\n        rc = None\n        # Copy over the holidays file from PBS_EXEC if it exists\n        if self.du.cmp(self.hostname, self.dflt_holidays_file,\n                       self.holidays_file, sudo=True) != 0:\n            ret = self.du.run_copy(self.hostname, src=self.dflt_holidays_file,\n                                   dest=self.holidays_file,\n                                   preserve_permission=False, sudo=True,\n                                   logerr=True)\n            rc = ret['rc']\n            # Update the internal data structures for the updated file\n            self.holidays_parse_file(level=level)\n        else:\n            rc = 1\n        return rc\n\n    def holidays_parse_file(self, path=None, obj=None, level=logging.INFO):\n        \"\"\"\n        Parse the existing holidays file\n\n        :param path: optional path to the holidays file to parse\n        :type path: str or None\n        
:param obj: optional holidays object to be used instead\n                    of internal\n        :returns: The content of holidays file as a list of lines\n        \"\"\"\n        self.logger.log(level, self.logprefix + \"Parsing holidays file\")\n\n        if obj is None:\n            obj = self.holidays_obj\n\n        days_map = obj._days_map\n        days_set = obj.days_set\n        if path is None:\n            path = self.holidays_file\n        lines = self.du.cat(self.hostname, path, sudo=True)['out']\n\n        content = []    # valid content to return\n\n        self.holidays_delete_entry(\n            'a', apply=False, obj=obj, level=logging.DEBUG)\n\n        for line in lines:\n            entry = str(line).split()\n            if len(entry) == 0:\n                continue\n            tag = entry[0].lower()\n            if tag == \"year\":   # initialize year\n                content.append(\"\\t\".join(entry))\n                obj.year['valid'] = True\n                if len(entry) > 1:\n                    obj.year['value'] = entry[1]\n            elif tag in days_map.keys():   # initialize a day\n                content.append(\"\\t\".join(entry))\n                day = days_map[tag]\n                day['valid'] = True\n                days_set.append(day)\n                day['position'] = len(days_set) - 1\n                if len(entry) > 1:\n                    day['p'] = entry[1]\n                if len(entry) > 2:\n                    day['np'] = entry[2]\n            elif tag.isdigit():   # initialize a holiday\n                content.append(\"\\t\".join(entry))\n                obj.holidays.append(tag)\n            else:\n                pass\n        return content\n\n    def holidays_set_day(self, day_id, prime=\"\", nonprime=\"\", apply=True,\n                         obj=None, level=logging.INFO):\n        \"\"\"\n        Set prime time values for a day\n\n        :param day_id: the day to be set (string)\n        :type day_id: str\n     
   :param prime: the prime time value\n        :param nonprime: the non-prime time value\n        :param apply: to reflect the changes to file\n        :type apply: bool\n        :param obj: optional holidays object to be used instead\n                    of internal\n        :returns: The position ``(0-7)`` of the set day\n        \"\"\"\n        self.logger.log(level, self.logprefix +\n                        \"setting holidays file entry for %s\",\n                        day_id)\n\n        if obj is None:\n            obj = self.holidays_obj\n\n        day = obj._days_map[str(day_id).lower()]\n        days_set = obj.days_set\n\n        if day['valid'] is None:    # Fresh entry\n            days_set.append(day)\n            day['position'] = len(days_set) - 1\n        elif day['valid'] is False:  # Previously invalidated entry\n            days_set.insert(day['position'], day)\n        else:\n            pass\n\n        day['valid'] = True\n        day['p'] = str(prime)\n        day['np'] = str(nonprime)\n\n        self.logger.debug(\"holidays_set_day(): changed day struct: \" +\n                          str(day))\n\n        if apply:\n            self.holidays_write_file(obj=obj, level=logging.DEBUG)\n\n        return day['position']\n\n    def holidays_get_day(self, day_id, obj=None, level=logging.INFO):\n        \"\"\"\n        :param obj: optional holidays object to be used instead\n                    of internal\n        :param day_id: either a day's name or \"all\"\n        :type day_id: str\n        :returns: A copy of info about a day/all set days\n        \"\"\"\n        self.logger.log(level, self.logprefix +\n                        \"getting holidays file entry for \" +\n                        day_id)\n\n        if obj is None:\n            obj = self.holidays_obj\n\n        days_set = obj.days_set\n        days_map = obj._days_map\n\n        if day_id == \"all\":\n            return days_set[:]\n        else:\n            return 
days_map[day_id].copy()\n\n    def holidays_reposition_day(self, day_id, new_pos, apply=True, obj=None,\n                                level=logging.INFO):\n        \"\"\"\n        Change position of a day ``(0-7)`` as it appears in the\n        holidays file\n\n        :param day_id: name of the day\n        :type day_id: str\n        :param new_pos: new position\n        :param apply: to reflect the changes to file\n        :type apply: bool\n        :param obj: optional holidays object to be used instead\n                    of internal\n        :returns: The new position of the day\n        \"\"\"\n        self.logger.log(level, self.logprefix +\n                        \"repositioning holidays file entry for \" +\n                        day_id + \" to position \" + str(new_pos))\n\n        if obj is None:\n            obj = self.holidays_obj\n\n        days_map = obj._days_map\n        days_set = obj.days_set\n        day = days_map[str(day_id).lower()]\n\n        if new_pos == day['position']:\n            return\n\n        # We also want to update order of invalid days, so add them to\n        # days_set temporarily\n        invalid_days = []\n        for name in days_map:\n            if days_map[name]['valid'] is False:\n                invalid_days.append(days_map[name])\n        days_set += invalid_days\n\n        # Sort the old list\n        days_set.sort(key=itemgetter('position'))\n\n        # Change position of 'day_id'\n        day['position'] = new_pos\n        days_set.remove(day)\n        days_set.insert(new_pos, day)\n\n        # Update the 'position' field\n        for i in range(0, len(days_set)):\n            days_set[i]['position'] = i\n\n        # Remove invalid days from days_set in place so obj.days_set\n        # (which days_set aliases) is updated as well\n        days_set[:] = [d for d in days_set if d not in invalid_days]\n\n        self.logger.debug(\"holidays_reposition_day(): List of days after \" +\n                   
       \" re-positioning \" + str(day_id) + \" is:\\n\" +\n                          str(days_set))\n\n        if apply:\n            self.holidays_write_file(obj=obj, level=logging.DEBUG)\n\n        return new_pos\n\n    def holidays_unset_day(self, day_id, apply=True, obj=None,\n                           level=logging.INFO):\n        \"\"\"\n        Unset prime time values for a day\n\n        :param day_id: day to unset (string)\n        :type day_id: str\n        :param apply: to reflect the changes to file\n        :param obj: optional holidays object to be used instead\n                    of internal\n\n        .. note:: we do not unset the 'valid' field here so the entry\n                  will still be displayed but without any values\n        \"\"\"\n        self.logger.log(level, self.logprefix +\n                        \"unsetting holidays file entry for \" + day_id)\n\n        if obj is None:\n            obj = self.holidays_obj\n\n        day = obj._days_map[str(day_id).lower()]\n        day['p'] = \"\"\n        day['np'] = \"\"\n\n        if apply:\n            self.holidays_write_file(obj=obj, level=logging.DEBUG)\n\n    def holidays_invalidate_day(self, day_id, apply=True, obj=None,\n                                level=logging.INFO):\n        \"\"\"\n        Remove a day's entry from the holidays file\n\n        :param day_id: the day to remove (string)\n        :type day_id: str\n        :param apply: to reflect the changes to file\n        :type apply: bool\n        :param obj: optional holidays object to be used instead\n                    of internal\n        \"\"\"\n        self.logger.log(level, self.logprefix +\n                        \"invalidating holidays file entry for \" +\n                        day_id)\n\n        if obj is None:\n            obj = self.holidays_obj\n\n        days_map = obj._days_map\n        days_set = obj.days_set\n\n        day = days_map[str(day_id).lower()]\n        day['valid'] = False\n        
days_set.remove(day)\n\n        if apply:\n            self.holidays_write_file(obj=obj, level=logging.DEBUG)\n\n    def holidays_validate_day(self, day_id, apply=True, obj=None,\n                              level=logging.INFO):\n        \"\"\"\n        Make valid a previously set day's entry\n\n        :param day_id: the day to validate (string)\n        :type day_id: str\n        :param apply: to reflect the changes to file\n        :type apply: bool\n        :param obj: optional holidays object to be used instead\n                    of internal\n\n        .. note:: The day will retain its previous position in\n                  the file\n        \"\"\"\n        self.logger.log(level, self.logprefix +\n                        \"validating holidays file entry for \" +\n                        day_id)\n\n        if obj is None:\n            obj = self.holidays_obj\n\n        days_map = obj._days_map\n        days_set = obj.days_set\n\n        day = days_map[str(day_id).lower()]\n        if day in days_set:  # do not insert a pre-existing day\n            self.logger.debug(\"holidays_validate_day(): \" +\n                              day_id + \" is already valid!\")\n            return\n\n        day['valid'] = True\n        days_set.insert(day['position'], day)\n\n        if apply:\n            self.holidays_write_file(obj=obj, level=logging.DEBUG)\n\n    def holidays_delete_entry(self, entry_type, idx=None, apply=True,\n                              obj=None, level=logging.INFO):\n        \"\"\"\n        Delete ``one/all`` entries from holidays file\n\n        :param entry_type: 'y':year, 'd':day, 'h':holiday or 'a': all\n        :type entry_type: str\n        :param idx: either a day of week (monday, tuesday etc.)\n                    or Julian date  of a holiday\n        :type idx: str or None\n        :param apply: to reflect the changes to file\n        :type apply: bool\n        :param obj: optional holidays object to be used instead of\n                  
  internal\n        :returns: False if entry_type is invalid, otherwise True\n\n        .. note:: The day cannot be validated and will lose its\n                  position in the file\n        \"\"\"\n        self.logger.log(level, self.logprefix +\n                        \"Deleting entries from holidays file\")\n\n        if obj is None:\n            obj = self.holidays_obj\n\n        days_map = obj._days_map\n        days_set = obj.days_set\n        holiday_list = obj.holidays\n        year = obj.year\n\n        if entry_type not in ['a', 'y', 'd', 'h']:\n            return False\n\n        if entry_type == 'y' or entry_type == 'a':\n            self.logger.debug(self.logprefix +\n                              \"deleting year entry from holidays file\")\n            # Delete year entry\n            year['value'] = None\n            year['valid'] = False\n\n        if entry_type == 'd' or entry_type == 'a':\n            # Delete one/all day entries\n            num_days_to_delete = 1\n            if entry_type == 'a':\n                self.logger.debug(self.logprefix +\n                                  \"deleting all days from holidays file\")\n                num_days_to_delete = len(days_set)\n            for i in range(0, num_days_to_delete):\n                if entry_type == 'd':\n                    self.logger.debug(self.logprefix +\n                                      \"deleting \" + str(idx) +\n                                      \" entry from holidays file\")\n                    day = days_map[str(idx).lower()]\n                else:\n                    day = days_set[0]\n\n                day['p'] = None\n                day['np'] = None\n                day['valid'] = None\n                day['position'] = None\n                days_set.remove(day)\n                if entry_type == 'd':\n                    # Correct 'position' field of every day\n                    for i in range(0, len(days_set)):\n                        
days_set[i]['position'] = i\n\n        if entry_type == 'h' or entry_type == 'a':\n            # Delete one/all calendar holiday entries\n            if entry_type == 'a':\n                self.logger.debug(self.logprefix +\n                                  \"deleting all holidays from holidays file\")\n                del holiday_list[:]\n            else:\n                self.logger.debug(self.logprefix +\n                                  \"deleting holiday on \" + str(idx) +\n                                  \" from holidays file\")\n                holiday_list.remove(str(idx))\n\n        if apply:\n            self.holidays_write_file(obj=obj, level=logging.DEBUG)\n\n        return True\n\n    def holidays_set_year(self, new_year=\"\", apply=True, obj=None,\n                          level=logging.INFO):\n        \"\"\"\n        Set the year value\n\n        :param new_year: year value to set\n        :type new_year: str\n        :param apply: to reflect the changes to file\n        :type apply: bool\n        :param obj: optional holidays object to be used instead\n                    of internal\n        \"\"\"\n        self.logger.log(level, self.logprefix +\n                        \"setting holidays file year entry to \" +\n                        str(new_year))\n        if obj is None:\n            obj = self.holidays_obj\n\n        year = obj.year\n\n        year['value'] = str(new_year)\n        year['valid'] = True\n\n        if apply:\n            self.holidays_write_file(obj=obj, level=logging.DEBUG)\n\n    def holidays_unset_year(self, apply=True, obj=None, level=logging.INFO):\n        \"\"\"\n        Unset the year value\n\n        :param apply: to reflect the changes to file\n        :type apply: bool\n        :param obj: optional holidays object to be used instead\n                    of internal\n        \"\"\"\n        self.logger.log(level, self.logprefix +\n                        \"unsetting holidays file year entry\")\n        if obj is 
None:\n            obj = self.holidays_obj\n\n        obj.year['value'] = \"\"\n\n        if apply:\n            self.holidays_write_file(obj=obj, level=logging.DEBUG)\n\n    def holidays_get_year(self, obj=None, level=logging.INFO):\n        \"\"\"\n        :param obj: optional holidays object to be used instead\n                    of internal\n        :returns: The year entry of holidays file\n        \"\"\"\n        self.logger.log(level, self.logprefix +\n                        \"getting holidays file year entry\")\n        if obj is None:\n            obj = self.holidays_obj\n\n        year = obj.year\n        return year.copy()\n\n    def holidays_add_holiday(self, date=None, apply=True, obj=None,\n                             level=logging.INFO):\n        \"\"\"\n        Add a calendar holiday to the holidays file\n\n        :param date: Date value for the holiday\n        :param apply: to reflect the changes to file\n        :type apply: bool\n        :param obj: optional holidays object to be used instead\n                    of internal\n        \"\"\"\n        self.logger.log(level, self.logprefix +\n                        \"adding holiday \" + str(date) +\n                        \" to holidays file\")\n        if obj is None:\n            obj = self.holidays_obj\n\n        holiday_list = obj.holidays\n\n        if date is not None:\n            holiday_list.append(str(date))\n        else:\n            pass\n        self.logger.debug(\"holidays list after adding one: \" +\n                          str(holiday_list))\n        if apply:\n            self.holidays_write_file(obj=obj, level=logging.DEBUG)\n\n    def holidays_get_holidays(self, obj=None, level=logging.INFO):\n        \"\"\"\n        :param obj: optional holidays object to be used instead\n                    of internal\n        :returns: The list of holidays in holidays file\n        \"\"\"\n        self.logger.log(level, self.logprefix +\n                        \"retrieving list of 
holidays\")\n\n        if obj is None:\n            obj = self.holidays_obj\n\n        holiday_list = obj.holidays\n        return holiday_list[:]\n\n    def _holidays_process_content(self, content, obj=None):\n        \"\"\"\n        Process a user provided list of holidays file content\n\n        :param obj: optional holidays object to be used instead\n                    of internal\n        \"\"\"\n        self.logger.debug(\"_holidays_process_content(): \" +\n                          \"Processing user provided holidays content:\\n\" +\n                          str(content))\n        if obj is None:\n            obj = self.holidays_obj\n\n        days_map = obj._days_map\n        year = obj.year\n        holiday_list = obj.holidays\n        days_set = obj.days_set\n\n        self.holidays_delete_entry(\n            'a', apply=False, obj=obj, level=logging.DEBUG)\n\n        if content is None:\n            self.logger.debug(\"Holidays file was wiped out\")\n            return\n\n        for line in content:\n            entry = line.split()\n            if len(entry) == 0:\n                continue\n            tag = entry[0].lower()\n            if tag == \"year\":   # initialize self.year\n                year['valid'] = True\n                if len(entry) > 1:\n                    year['value'] = entry[1]\n            elif tag in days_map.keys():   # initialize self.<day>\n                day = days_map[tag]\n                day['valid'] = True\n                days_set.append(day)\n                day['position'] = len(days_set) - 1\n                if len(entry) > 1:\n                    day['p'] = entry[1]\n                if len(entry) > 2:\n                    day['np'] = entry[2]\n            elif tag.isdigit():   # initialize self.holiday\n                holiday_list.append(tag)\n            else:\n                pass\n\n    def holidays_write_file(self, content=None, out_path=None,\n                            hup=True, obj=None, 
level=logging.INFO):\n        \"\"\"\n        Write to the holidays file with content ``given/generated``\n\n        :param hup: SIGHUP the scheduler after writing the holidays\n                    file\n        :type hup: bool\n        :param obj: optional holidays object to be used instead of\n                    internal\n        \"\"\"\n        self.logger.log(level, self.logprefix +\n                        \"Writing to the holidays file\")\n\n        if obj is None:\n            obj = self.holidays_obj\n\n        if out_path is None:\n            out_path = self.holidays_file\n\n        if content is not None:\n            self._holidays_process_content(content, obj)\n        else:\n            content = str(obj)\n\n        self.logger.debug(\"content being written:\\n\" + str(content))\n\n        fn = self.du.create_temp_file(self.hostname, body=content)\n        ret = self.du.run_copy(self.hostname, src=fn, dest=out_path,\n                               preserve_permission=False, sudo=True)\n        self.du.rm(self.hostname, fn)\n\n        if ret['rc'] != 0:\n            raise PbsSchedConfigError(rc=ret['rc'], rv=ret['out'],\n                                      msg=('error applying holidays file ' +\n                                           ret['err']))\n        if hup:\n            rv = self.signal('-HUP')\n            if not rv:\n                raise PbsSchedConfigError(rc=1, rv=False,\n                                          msg='error applying holidays file')\n        return True\n\n    def parse_dedicated_time(self, file=None):\n        \"\"\"\n        Parse the dedicated_time file and populate dedicated times\n        both as a dedicated_time array of dictionaries defined\n        as ``[{'from': datetime, 'to': datetime}, ...]`` and as a\n        dedicated_time_as_str array with a string representation of\n        each entry\n\n        :param file: optional file to parse. 
Defaults to the one under\n                     ``PBS_HOME/sched_priv``\n        :type file: str or None\n\n        :returns: The dedicated_time list of dictionaries or None on\n                  error. Returns an empty array if the dedicated time\n                  file is empty.\n        \"\"\"\n        self.dedicated_time_as_str = []\n        self.dedicated_time = []\n\n        if file:\n            dt_file = file\n        elif self.dedicated_time_file:\n            dt_file = self.dedicated_time_file\n        else:\n            dt_file = os.path.join(self.pbs_conf['PBS_HOME'], 'sched_priv',\n                                   'dedicated_time')\n        try:\n            lines = self.du.cat(self.hostname, dt_file, sudo=True)['out']\n            if lines is None:\n                return []\n\n            for line in lines:\n                if not line.startswith('#') and len(line) > 0:\n                    self.dedicated_time_as_str.append(line)\n                    (dtime_from, dtime_to) = self.utils.convert_dedtime(line)\n                    self.dedicated_time.append({'from': dtime_from,\n                                                'to': dtime_to})\n        except Exception:\n            self.logger.error('error in parse_dedicated_time')\n            return None\n\n        return self.dedicated_time\n\n    def clear_dedicated_time(self, hup=True):\n        \"\"\"\n        Clear the dedicated time file\n        \"\"\"\n        self.parse_dedicated_time()\n        if ((len(self.dedicated_time) == 0) and\n                (len(self.dedicated_time_as_str) == 0)):\n            return True\n        self.dedicated_time = []\n        self.dedicated_time_as_str = []\n        dt = \"# FORMAT: MM/DD/YYYY HH:MM MM/DD/YYYY HH:MM\"\n        return 
self.add_dedicated_time(dt, hup=hup)\n\n    def add_dedicated_time(self, as_str=None, start=None, end=None, hup=True):\n        \"\"\"\n        Append a dedicated time entry. The function can be called\n        in one of two ways: either by passing in start and end as\n        time values, or by passing as_str, a string that gets\n        appended to the dedicated time entries and must be formatted\n        as ``MM/DD/YYYY HH:MM MM/DD/YYYY HH:MM``. Note that no check\n        on the validity of the format is made; the function uses\n        strftime to parse the datetime and will fail if strftime\n        cannot convert the string.\n\n        :returns: True on success and False otherwise\n        \"\"\"\n        if self.dedicated_time is None:\n            self.parse_dedicated_time()\n\n        if start is not None and end is not None:\n            dtime_from = time.strftime(\"%m/%d/%Y %H:%M\", time.localtime(start))\n            dtime_to = time.strftime(\"%m/%d/%Y %H:%M\", time.localtime(end))\n            dedtime = dtime_from + \" \" + dtime_to\n        elif as_str is not None:\n            (dtime_from, dtime_to) = self.utils.convert_dedtime(as_str)\n            dedtime = as_str\n        else:\n            self.logger.warning(\"no dedicated from/to specified\")\n            return True\n\n        for d in self.dedicated_time_as_str:\n            if dedtime == d:\n                if dtime_from is None or dtime_to is None:\n                    self.logger.info(self.logprefix +\n                                     \"dedicated time already defined\")\n                else:\n                    self.logger.info(self.logprefix +\n                                     \"dedicated time from \" + dtime_from +\n                                     \" to \" + dtime_to + \" already defined\")\n                return True\n\n        if dtime_from is not None and dtime_to is not None:\n            self.logger.info(self.logprefix +\n                             \"adding 
dedicated time \" + dedtime)\n\n        self.dedicated_time_as_str.append(dedtime)\n        if dtime_from is not None and dtime_to is not None:\n            self.dedicated_time.append({'from': dtime_from, 'to': dtime_to})\n        try:\n            fn = self.du.create_temp_file()\n            with open(fn, \"w\") as fd:\n                for l in self.dedicated_time_as_str:\n                    fd.write(l + '\\n')\n            ddfile = os.path.join(self.pbs_conf['PBS_HOME'], 'sched_priv',\n                                  'dedicated_time')\n            self.du.run_copy(self.hostname, src=fn, dest=ddfile, sudo=True,\n                             preserve_permission=False)\n            os.remove(fn)\n        except Exception:\n            raise PbsSchedConfigError(rc=1, rv=False,\n                                      msg='error adding dedicated time')\n\n        if hup:\n            ret = self.signal('-HUP')\n            if ret['rc'] != 0:\n                raise PbsSchedConfigError(rc=1, rv=False,\n                                          msg='error adding dedicated time')\n\n        return True\n\n    def terminate(self):\n        self.signal('-KILL')\n\n    def valgrind(self):\n        \"\"\"\n        run scheduler instance through valgrind\n        \"\"\"\n        if self.isUp():\n            self.terminate()\n\n        rv = CliUtils().check_bin('valgrind')\n        if not rv:\n            self.logger.error(self.logprefix + 'valgrind not available')\n            return None\n\n        cmd = ['valgrind']\n\n        cmd += [\"--log-file=\" + os.path.join(tempfile.gettempdir(),\n                                             'schd.vlgrd')]\n        cmd += [os.path.join(self.pbs_conf['PBS_EXEC'], 'sbin', 'pbs_sched')]\n\n        return self.du.run_cmd(self.hostname, cmd, sudo=True)\n\n    def alloc_to_execvnode(self, chunks):\n        \"\"\"\n        convert a resource allocation to an execvnode string representation\n        \"\"\"\n        execvnode = []\n        
for chunk in chunks:\n            execvnode += [\"(\" + chunk.vnode]\n            for res, val in chunk.resources.items():\n                execvnode += [\":\" + str(res) + \"=\" + str(val)]\n            for vchk in chunk.vchunk:\n                execvnode += [\"+\" + vchk.vnode]\n                for res, val in vchk.resources.items():\n                    execvnode += [\":\" + str(res) + \"=\" + str(val)]\n            execvnode += [\")+\"]\n\n        if len(execvnode) != 0:\n            ev = execvnode[len(execvnode) - 1]\n            ev = ev[:-1]\n            execvnode[len(execvnode) - 1] = ev\n\n        return \"\".join(execvnode)\n\n    def cycles(self, start=None, end=None, firstN=None, lastN=None):\n        \"\"\"\n        Analyze scheduler log and return cycle information\n\n        :param start: Optional setting of the start time to consider\n        :param end: Optional setting of the end time to consider\n        :param firstN: Optional setting to consider the given first\n                       N cycles\n        :param lastN: Optional setting to consider only the given\n                      last N cycles\n        \"\"\"\n        try:\n            from ptl.utils.pbs_logutils import PBSSchedulerLog\n        except Exception:\n            self.logger.error('error loading ptl.utils.pbs_logutils')\n            return None\n\n        if 'sched_log' in self.attributes:\n            logdir = self.attributes['sched_log']\n        else:\n            logdir = os.path.join(self.pbs_conf['PBS_HOME'], 'sched_logs')\n\n        tm = time.strftime(\"%Y%m%d\", time.localtime())\n        log_file = os.path.join(logdir, tm)\n\n        if start is not None or end is not None:\n            analyze_path = os.path.dirname(log_file)\n        else:\n            analyze_path = log_file\n\n        sl = PBSSchedulerLog()\n        sl.analyze(analyze_path, start, end, self.hostname)\n        cycles = sl.cycles\n        if cycles is None or len(cycles) == 0:\n            return []\n\n        
if lastN is not None:\n            return cycles[-lastN:]\n        elif firstN is not None:\n            return cycles[:firstN]\n\n        return cycles\n\n    def decay_fairshare_tree(self):\n        \"\"\"\n        Decay the fairshare tree through pbsfs\n        \"\"\"\n        if self.has_snap:\n            return True\n\n        cmd = [os.path.join(self.pbs_conf['PBS_EXEC'], 'sbin', 'pbsfs')]\n        if self.sc_name != 'default':\n            cmd += ['-I', self.sc_name]\n        cmd += ['-d']\n\n        ret = self.du.run_cmd(self.hostname, cmd, runas=self.user)\n        if ret['rc'] == 0:\n            self.fairshare_tree = self.fairshare.query_fairshare()\n            return True\n        return False\n\n    def parse_resource_group(self, hostname=None, resource_group=None):\n        \"\"\"\n        Parse the Scheduler's ``resource_group`` file\n\n        :param hostname: The name of the host from which to parse\n                         resource_group\n        :type hostname: str or None\n        :param resource_group: The path to a resource_group file\n        :type resource_group: str or None\n        :returns: A fairshare tree\n        \"\"\"\n\n        if hostname is None:\n            hostname = self.hostname\n        if resource_group is None:\n            resource_group = self.resource_group_file\n        # if has_snap is True, access to sched_priv may not require\n        # su privilege\n        ret = self.du.cat(hostname, resource_group, sudo=(not self.has_snap))\n        if ret['rc'] != 0:\n            self.logger.error(hostname + ' error reading ' + resource_group)\n        tree = FairshareTree(hostname, resource_group)\n        root = FairshareNode('root', -1, parent_id=0, nshares=100)\n        tree.add_node(root, apply=False)\n        lines = ret['out']\n        for line in lines:\n            line = line.strip()\n            if not line.startswith(\"#\") and len(line) > 0:\n                # could have 5th column but we only need the first 4\n                
(name, id, parent, nshares) = line.split()[:4]\n                node = FairshareNode(name, id, parent_name=parent,\n                                     nshares=nshares)\n                tree.add_node(node, apply=False)\n        tree.update()\n        return tree\n\n    def add_to_resource_group(self, name, fairshare_id, parent, nshares,\n                              validate=True):\n        \"\"\"\n        Add an entry to the resource group file\n\n        :param name: The name of the entity to add\n        :type name: str or :py:class:`~ptl.lib.pbs_testlib.PbsUser`\n        :param fairshare_id: The numeric identifier of the entity to add\n        :type fairshare_id: int\n        :param parent: The name of the parent group\n        :type parent: str\n        :param nshares: The number of shares associated to the entity\n        :type nshares: int\n        :param validate: if True (the default), validate the\n                         configuration settings.\n        :type validate: bool\n        \"\"\"\n        if self.resource_group is None:\n            self.resource_group = self.parse_resource_group(\n                self.hostname, self.resource_group_file)\n        if not self.resource_group:\n            self.resource_group = FairshareTree(\n                self.hostname, self.resource_group_file)\n        if isinstance(name, PbsUser):\n            name = str(name)\n        reconfig_time = time.time()\n        rc = self.resource_group.create_node(name, fairshare_id,\n                                             parent_name=parent,\n                                             nshares=nshares)\n        if validate:\n            self.get_pid()\n            self.signal('-HUP')\n            try:\n                self.log_match(\"Sched;reconfigure;Scheduler is reconfiguring\",\n                               starttime=reconfig_time)\n                self.log_match(\"fairshare;resgroup: error \",\n                               starttime=reconfig_time, 
existence=False,\n                               max_attempts=2)\n            except PtlLogMatchError:\n                _msg = 'Error in validating resource_group changes'\n                raise PbsSchedConfigError(rc=1, rv=False,\n                                          msg=_msg)\n        return rc\n\n    def job_formula(self, jobid=None, starttime=None, max_attempts=None):\n        \"\"\"\n        Extract formula value out of scheduler log\n\n        :param jobid: Optional, the job identifier for which to get\n                      the formula.\n        :type jobid: str or int\n        :param starttime: The time at which to start parsing the\n                          scheduler log\n        :param max_attempts: The number of attempts to search for\n                             formula in the logs\n        :type max_attempts: int\n        :returns: If jobid is specified, return the formula value\n                  associated with that job; if no jobid is specified,\n                  return a dictionary mapping job ids to formula\n                  values\n        \"\"\"\n        if jobid is None:\n            jobid = \"(?P<jobid>.*)\"\n            _alljobs = True\n        else:\n            if isinstance(jobid, int):\n                jobid = str(jobid)\n            _alljobs = False\n\n        formula_pat = (\".*Job;\" + jobid +\n                       \".*;Formula Evaluation = (?P<fval>.*)\")\n        if max_attempts is None:\n            max_attempts = self.ptl_conf['max_attempts']\n        rv = self.log_match(formula_pat, regexp=True, starttime=starttime,\n                            n='ALL', allmatch=True,\n                            max_attempts=max_attempts)\n        ret = {}\n        if rv:\n            for _, l in rv:\n                m = re.match(formula_pat, l)\n                if m:\n                    if _alljobs:\n                        jobid = m.group('jobid')\n                    ret[jobid] = float(m.group('fval').strip())\n\n        if not _alljobs:\n            if jobid 
in ret:\n                return ret[jobid]\n            else:\n                return\n        return ret\n"
  },
  {
    "path": "test/fw/ptl/lib/ptl_server.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport ast\nimport base64\nimport copy\nimport datetime\nimport grp\nimport json\nimport logging\nimport os\nimport socket\nimport string\nimport sys\nimport time\n\nfrom ptl.utils.pbs_dshutils import DshUtils, PtlUtilError\nfrom ptl.utils.pbs_testusers import (ROOT_USER, TEST_USER, PbsUser,\n                                     DAEMON_SERVICE_USER)\n\ntry:\n    import psycopg2\n    PSYCOPG = True\nexcept Exception:\n    PSYCOPG = False\nfrom ptl.lib.ptl_error import (PbsStatusError, PbsSubmitError,\n                               PbsDeljobError, PbsDelresvError,\n                               PbsDeleteError, PbsSelectError,\n                               PbsManagerError, PbsSignalError,\n                               PbsAlterError, PbsHoldError,\n                               PbsRerunError, PbsOrderError,\n                               PbsRunError, PbsMoveError,\n                               PbsQtermError, PbsQdisableError,\n                               PbsQenableError, PbsQstartError,\n                               PbsQstopError, PbsResourceError,\n                               PbsResvAlterError, PtlExpectError,\n                               PbsConnectError, PbsServiceError,\n                               PbsInitServicesError, PbsMessageError,\n                               PtlLogMatchError)\nfrom ptl.lib.ptl_types import 
PbsAttribute\nfrom ptl.lib.ptl_constants import *\nfrom ptl.lib.ptl_entities import (Hook, Queue, Entity, Limit,\n                                  EquivClass, Resource)\nfrom ptl.lib.ptl_sched import Scheduler\nfrom ptl.lib.ptl_mom import MoM, get_mom_obj\nfrom ptl.lib.ptl_service import PBSService, PBSInitServices\nfrom ptl.lib.ptl_wrappers import *\n\n\nclass Server(Wrappers):\n\n    \"\"\"\n    PBS server ``configuration`` and ``control``\n    The Server class is a container for PBS server attributes\n    and implements wrappers to the ``IFL API`` to perform\n    operations on the server, for example to submit, status,\n    delete, and manage jobs, reservations, and configurations.\n    This class also offers higher-level routines to ease testing,\n    see, for example, the functions ``revert_to_defaults,\n    init_logging, expect, counter``.\n    :param name: The hostname of the server. Defaults to\n                 calling pbs_default()\n    :type name: str\n    :param attrs: Dictionary of attributes to set, these will\n                  override defaults.\n    :type attrs: Dictionary\n    :param defaults: Dictionary of default attributes.\n                     Default: dflt_attributes\n    :type defaults: Dictionary\n    :param pbsconf_file: path to config file to parse for PBS_HOME,\n                         PBS_EXEC, etc\n    :type pbsconf_file: str\n    :param snapmap: A dictionary of PBS objects (node,server,etc)\n                    to mapped files from PBS snap directory\n    :type snapmap: Dictionary\n    :param snap: path to PBS snap directory (This overrides\n                 snapmap)\n    :type snap: str\n    :param client: The host to use as client for CLI queries.\n                   Defaults to the local hostname.\n    :type client: str\n    :param client_pbsconf_file: The path to a custom PBS_CONF_FILE\n                                on the client host. 
Defaults to\n                                the same path as pbsconf_file.\n    :type client_pbsconf_file: str\n    :param db_access: set to either file containing credentials\n                      to DB access or dictionary containing\n                      {'dbname':...,'user':...,'port':...}\n    :param stat: if True, stat the server attributes\n    :type stat: bool\n    \"\"\"\n\n    def __init__(self, name=None, attrs={}, defaults={}, pbsconf_file=None,\n                 snapmap={}, snap=None, client=None, client_pbsconf_file=None,\n                 db_access=None, stat=True):\n        super().__init__(name, attrs, defaults, pbsconf_file, snapmap,\n                         snap, client, client_pbsconf_file, db_access, stat)\n\n    def add_expect_action(self, name=None, action=None):\n        \"\"\"\n        Add an action handler to expect. Expect Actions are\n        custom handlers that are triggered when an unexpected\n        value is encountered\n        :param name: Action name\n        :type name: str or None\n        :param action: Action to add\n        \"\"\"\n        if name is None and action.name is None:\n            return\n        if name is None and action.name is not None:\n            name = action.name\n\n        if not self.actions.has_action(name):\n            self.actions.add_action(action, self.shortname)\n\n    def counter(self, obj_type=None, attrib=None, id=None, extend=None,\n                op=None, attrop=None, bslist=None, level=logging.INFO,\n                idonly=True, grandtotal=False, db_access=None, runas=None,\n                resolve_indirectness=False):\n        \"\"\"\n        Accumulate properties set on an object. 
For example, to\n        count number of free nodes:\n        ``server.counter(VNODE,{'state':'free'})``\n        :param obj_type: The type of object to query, one of the\n                         * objects\n        :param attrib: Attributes to query, can be a string, a\n                       list, a dictionary\n        :type attrib: str or list or dictionary\n        :param id: The id of the object to act upon\n        :param extend: The extended parameter to pass to the stat\n                       call\n        :param op: The operation used to match attrib to what is\n                   queried. SET or None\n        :type op: str or None\n        :param attrop: Operation on multiple attributes, either\n                       PTL_AND, PTL_OR\n        :param bslist: Optional, use a batch status dict list\n                       instead of an obj_type\n        :param idonly: if true, return the name/id of the matching\n                       objects\n        :type idonly: bool\n        :param db_access: credentials to access db, either a path\n                          to file or dictionary\n        :type db_access: str or dictionary\n        :param runas: run as user\n        :type runas: str or None\n        \"\"\"\n        self.logit('counter: ', obj_type, attrib, id, level=level)\n        return self._filter(obj_type, attrib, id, extend, op, attrop, bslist,\n                            PTL_COUNTER, idonly, grandtotal, db_access,\n                            runas=runas, level=level,\n                            resolve_indirectness=resolve_indirectness)\n\n    def set_attributes(self, a={}):\n        \"\"\"\n        set server attributes\n        :param a: Attribute dictionary\n        :type a: Dictionary\n        \"\"\"\n        super(Server, self).set_attributes(a)\n        self.__dict__.update(a)\n\n    def isUp(self, max_attempts=None):\n        \"\"\"\n        returns ``True`` if server is up and ``False`` otherwise\n        \"\"\"\n        if 
max_attempts is None:\n            max_attempts = self.ptl_conf['max_attempts']\n        if self.has_snap:\n            return True\n        i = 0\n        op_mode = self.get_op_mode()\n        if ((op_mode == PTL_API) and (self._conn is not None)):\n            self._disconnect(self._conn, force=True)\n        while i < max_attempts:\n            rv = False\n            try:\n                if op_mode == PTL_CLI:\n                    self.status(SERVER, level=logging.DEBUG, logerr=False)\n                else:\n                    c = self._connect(self.hostname)\n                    self._disconnect(c, force=True)\n                return True\n            except (PbsConnectError, PbsStatusError):\n                # if the status/connect operation fails then there is a\n                # chance that the server process is running but not\n                # responsive, so we wait until the server is reported\n                # operational.\n                rv = self._isUp()\n                # We really mean to check != False rather than just \"rv\"\n                if str(rv) != 'False':\n                    self.logger.warning('Server process started ' +\n                                        'but not up yet')\n                    time.sleep(1)\n                    i += 1\n                else:\n                    # status/connect failed + no server process means\n                    # server is actually down\n                    return False\n        return False\n\n    def start(self, args=None, launcher=None):\n        \"\"\"\n        Start the PBS server\n        :param args: Argument required to start the server\n        :type args: str\n        :param launcher: Optional utility to invoke the launch of the service\n        :type launcher: str or list\n        \"\"\"\n        if args is not None or launcher is not None:\n            rv = super(Server, self)._start(inst=self, args=args,\n                                            launcher=launcher)\n        else:\n            
try:\n                rv = self.pi.start_server()\n                pid = self._validate_pid(self)\n                if pid is None:\n                    raise PbsServiceError(rv=False, rc=-1,\n                                          msg=\"Could not find PID\")\n            except PbsInitServicesError as e:\n                raise PbsServiceError(rc=e.rc, rv=e.rv, msg=e.msg)\n        if self.isUp():\n            return rv\n        else:\n            raise PbsServiceError(rv=False, rc=1, msg=rv['err'])\n\n    def stop(self, sig=None):\n        \"\"\"\n        Stop the PBS server\n        :param sig: Signal to stop PBS server\n        :type sig: str\n        \"\"\"\n        if sig is not None:\n            self.logger.info(self.logprefix + 'stopping Server on host ' +\n                             self.hostname)\n            rc = super(Server, self)._stop(sig, inst=self)\n        else:\n            try:\n                self.pi.stop_server()\n            except PbsInitServicesError as e:\n                raise PbsServiceError(rc=e.rc, rv=e.rv, msg=e.msg,\n                                      post=self._disconnect, conn=self._conn,\n                                      force=True)\n            rc = True\n        self._disconnect(self._conn, force=True)\n        return rc\n\n    def restart(self):\n        \"\"\"\n        Terminate and start a PBS server.\n        \"\"\"\n        if self.isUp():\n            if not self.stop():\n                return False\n        start_rc = self.start()\n        self.expect(NODE, {'state=state-unknown,down': 0})\n        return start_rc\n\n    def log_match(self, msg=None, id=None, n=50, tail=True, allmatch=False,\n                  regexp=False, max_attempts=None, interval=None,\n                  starttime=None, endtime=None, level=logging.INFO,\n                  existence=True):\n        \"\"\"\n        Match given ``msg`` in given ``n`` lines of Server log\n        :param msg: log message to match, can be regex also when\n     
               ``regexp`` is True\n        :type msg: str\n        :param id: The id of the object to trace. Only used for\n                   tracejob\n        :type id: str\n        :param n: 'ALL' or the number of lines to search through,\n                  defaults to 50\n        :type n: str or int\n        :param tail: If true (default), starts from the end of\n                     the file\n        :type tail: bool\n        :param allmatch: If True all matching lines out of then\n                         parsed are returned as a list. Defaults\n                         to False\n        :type allmatch: bool\n        :param regexp: If true msg is a Python regular expression.\n                       Defaults to False\n        :type regexp: bool\n        :param max_attempts: the number of attempts to make to find\n                             a matching entry\n        :type max_attempts: int\n        :param interval: the interval between attempts\n        :type interval: int\n        :param starttime: If set ignore matches that occur before\n                          specified time\n        :type starttime: float\n        :param endtime: If set ignore matches that occur after\n                        specified time\n        :type endtime: float\n        :param level: The logging level, defaults to INFO\n        :type level: int\n        :param existence: If True (default), check for existence of\n                        given msg, else check for non-existence of\n                        given msg.\n        :type existence: bool\n        :return: (x,y) where x is the matching line\n                 number and y the line itself. 
If allmatch is True,\n                 a list of tuples is returned.\n        :rtype: tuple\n        :raises PtlLogMatchError:\n                When ``existence`` is True and given\n                ``msg`` is not found in ``n`` line\n                Or\n                When ``existence`` is False and given\n                ``msg`` found in ``n`` line.\n        .. note:: The matching line number is relative to the record\n                  number, not the absolute line number in the file.\n        \"\"\"\n        return self._log_match(self, msg, id, n, tail, allmatch, regexp,\n                               max_attempts, interval, starttime, endtime,\n                               level=level, existence=existence)\n\n    def revert_to_defaults(self, reverthooks=True, revertqueues=True,\n                           revertresources=True, delhooks=True,\n                           delqueues=True, delscheds=True, delnodes=True,\n                           server_stat=None):\n        \"\"\"\n        reset server attributes back to out of box defaults.\n        :param reverthooks: If True disable all hooks. Defaults\n                            to True\n        :type reverthooks: bool\n        :param revertqueues: If True disable all non-default\n                             queues. Defaults to True\n        :type revertqueues: bool\n        :param revertresources: If True, resourcedef file is\n                                removed. Defaults to True.\n                                Reverting resources causes a server\n                                restart to occur.\n        :type revertresources: bool\n        :param delhooks: If True, hooks are deleted, if deletion\n                         fails, fall back to reverting hooks. 
Defaults\n                         to True.\n        :type delhooks: bool\n        :param delqueues: If True, all non-default queues are deleted;\n                          all jobs will be deleted first. If that\n                          fails, revertqueues will be honored;\n                          otherwise, revertqueues is ignored. Defaults\n                          to True\n        :type delqueues: bool\n        :param delscheds: If True all non-default schedulers are deleted.\n                          The sched_priv and sched_logs directories will be\n                          deleted.\n        :type delscheds: bool\n        :param delnodes: If True all vnodes are deleted\n        :type delnodes: bool\n        :returns: True upon success and False if an error is\n                  encountered.\n        :raises: PbsStatusError or PbsManagerError\n        \"\"\"\n        setdict = {}\n        skip_site_hooks = ['pbs_cgroups']\n        self.logger.info(self.logprefix +\n                         'reverting configuration to defaults')\n        self.cleanup_jobs_and_reservations()\n        self.atom_hk = os.path.join(self.pbs_conf['PBS_HOME'],\n                                    'server_priv', 'hooks',\n                                    'PBS_cray_atom.HK')\n        self.dflt_atom_hk = os.path.join(self.pbs_conf['PBS_EXEC'],\n                                         'lib', 'python', 'altair',\n                                         'pbs_hooks',\n                                         'PBS_cray_atom.HK')\n        self.atom_cf = os.path.join(self.pbs_conf['PBS_HOME'],\n                                    'server_priv', 'hooks',\n                                    'PBS_cray_atom.CF')\n        self.dflt_atom_cf = os.path.join(self.pbs_conf['PBS_EXEC'],\n                                         'lib', 'python', 'altair',\n                                         'pbs_hooks',\n                                         'PBS_cray_atom.CF')\n        
self.unset_svr_attrib()\n        for k in self.dflt_attributes.keys():\n            if(k not in self.attributes or\n               self.attributes[k] != self.dflt_attributes[k]):\n                setdict[k] = self.dflt_attributes[k]\n        if self.platform == 'cray' or self.platform == 'craysim':\n            setdict[ATTR_restrict_res_to_release_on_suspend] = 'ncpus'\n        if delhooks:\n            self.delete_site_hooks()\n        if delqueues:\n            revertqueues = False\n            self.delete_queues()\n            a = {ATTR_qtype: 'Execution',\n                 ATTR_enable: 'True',\n                 ATTR_start: 'True'}\n            self.manager(MGR_CMD_CREATE, QUEUE, a, id='workq')\n            setdict.update({ATTR_dfltque: 'workq'})\n        if delscheds:\n            self.delete_sched_config()\n\n        if delnodes:\n            self.delete_nodes()\n        if reverthooks:\n            if self.platform == 'shasta':\n                dohup = False\n                if (self.du.cmp(self.hostname, self.dflt_atom_hk,\n                                self.atom_hk, sudo=True) != 0):\n                    self.du.run_copy(self.hostname, src=self.dflt_atom_hk,\n                                     dest=self.atom_hk, mode=0o644, sudo=True)\n                    dohup = True\n                if self.du.cmp(self.hostname, self.dflt_atom_cf,\n                               self.atom_cf, sudo=True) != 0:\n                    self.du.run_copy(self.hostname, src=self.dflt_atom_cf,\n                                     dest=self.atom_cf, mode=0o644, sudo=True)\n                    dohup = True\n                if dohup:\n                    self.signal('-HUP')\n            hooks = self.status(HOOK, level=logging.DEBUG)\n            hooks = [h['id'] for h in hooks]\n            a = {ATTR_enable: 'false'}\n            if len(hooks) > 0:\n                self.manager(MGR_CMD_SET, MGR_OBJ_HOOK, a, hooks)\n        if revertqueues:\n            self.status(QUEUE, 
level=logging.DEBUG)\n            queues = []\n            for (qname, qobj) in self.queues.items():\n                # skip reservation queues. This syntax for Python 2.4\n                # compatibility\n                if (qname.startswith('R') or qname.startswith('S') or\n                        qname == server_stat[ATTR_dfltque]):\n                    continue\n                qobj.revert_to_defaults()\n                queues.append(qname)\n                a = {ATTR_enable: 'false'}\n                self.manager(MGR_CMD_SET, QUEUE, a, id=queues)\n            a = {ATTR_enable: 'True', ATTR_start: 'True'}\n            self.manager(MGR_CMD_SET, MGR_OBJ_QUEUE, a,\n                         id=server_stat[ATTR_dfltque])\n        if len(setdict) > 0:\n            self.manager(MGR_CMD_SET, MGR_OBJ_SERVER, setdict)\n        if revertresources:\n            self.delete_resources()\n        return True\n\n    def delete_resources(self):\n        \"\"\"\n        Delete all resources\n        \"\"\"\n        try:\n            rescs = self.status(RSC)\n            rescs = [r['id'] for r in rescs]\n        except Exception:\n            rescs = []\n        if len(rescs) > 0:\n            self.manager(MGR_CMD_DELETE, RSC, id=rescs)\n\n    def unset_svr_attrib(self, server_stat=None):\n        \"\"\"\n        Unset server attributes\n        \"\"\"\n        ignore_attrs = ['id', 'pbs_license', ATTR_NODE_ProvisionEnable]\n        ignore_attrs += [ATTR_status, ATTR_total, ATTR_count]\n        ignore_attrs += [ATTR_rescassn, ATTR_FLicenses, ATTR_SvrHost]\n        ignore_attrs += [ATTR_license_count, ATTR_version, ATTR_managers]\n        ignore_attrs += [ATTR_operators, ATTR_license_min]\n        ignore_attrs += [ATTR_pbs_license_info, ATTR_power_provisioning]\n        unsetlist = []\n        self.cleanup_jobs_and_reservations()\n        if server_stat is None:\n            server_stat = self.status(SERVER, level=logging.DEBUG)[0]\n        for k in server_stat.keys():\n            
if (k in ignore_attrs) or (k in self.dflt_attributes.keys()):\n                continue\n            elif (('.' in k) and (k.split('.')[0] in ignore_attrs)):\n                continue\n            else:\n                unsetlist.append(k)\n        if len(unsetlist) != 0:\n            self.manager(MGR_CMD_UNSET, MGR_OBJ_SERVER, unsetlist)\n\n    def delete_site_hooks(self):\n        \"\"\"\n        Delete site hooks from PBS\n        \"\"\"\n        skip_site_hooks = ['pbs_cgroups']\n        hooks = self.status(HOOK, level=logging.DEBUG)\n        hooks = [h['id'] for h in hooks]\n        for h in skip_site_hooks:\n            if h in hooks:\n                hooks.remove(h)\n        if len(hooks) > 0:\n            self.manager(MGR_CMD_DELETE, HOOK, id=hooks)\n\n    def delete_queues(self):\n        \"\"\"\n        Delete queues\n        \"\"\"\n        queues = self.status(QUEUE, level=logging.DEBUG)\n        queues = [q['id'] for q in queues]\n        if len(queues) > 0:\n            try:\n                nodes = self.status(VNODE, logerr=False)\n                for node in nodes:\n                    if 'queue' in node.keys():\n                        self.manager(MGR_CMD_UNSET, NODE, 'queue',\n                                     node['id'])\n            except Exception:\n                pass\n            self.manager(MGR_CMD_DELETE, QUEUE, id=queues)\n\n    def delete_sched_config(self):\n        \"\"\"\n        Delete sched_priv & sched_log files\n        \"\"\"\n        self.manager(MGR_CMD_LIST, SCHED)\n        for name in list(self.schedulers.keys()):\n            if name != 'default':\n                self.schedulers[name].terminate()\n                sched_log = self.schedulers[\n                    name].attributes['sched_log']\n                sched_priv = self.schedulers[\n                    name].attributes['sched_priv']\n                self.du.rm(path=sched_log, sudo=True,\n                           recursive=True, force=True)\n                
self.du.rm(path=sched_priv, sudo=True,\n                           recursive=True, force=True)\n                self.manager(MGR_CMD_DELETE, SCHED, id=name)\n\n    def create_node(self, name, level=\"INFO\", logerr=False):\n        \"\"\"\n        Add a node to PBS\n        \"\"\"\n        ret = self.manager(MGR_CMD_CREATE, VNODE, name,\n                           level=level, logerr=logerr)\n        return ret\n\n    def delete_node(self, name, level=\"INFO\", logerr=False):\n        \"\"\"\n        Remove a node from PBS\n        \"\"\"\n        try:\n            ret = self.manager(MGR_CMD_DELETE, VNODE, name,\n                               level=level, logerr=logerr)\n        except PbsManagerError as err:\n            if \"Unknown node\" not in err.msg[0]:\n                raise\n            else:\n                ret = 15062\n        return ret\n\n    def delete_nodes(self):\n        \"\"\"\n        Remove all the nodes from PBS\n        \"\"\"\n        try:\n            self.manager(MGR_CMD_DELETE, VNODE, id=\"@default\",\n                         runas=ROOT_USER)\n        except PbsManagerError as e:\n            if \"Unknown node\" not in e.msg[0]:\n                raise\n\n    def save_configuration(self, outfile=None, mode='w'):\n        \"\"\"\n        Save a server configuration, this includes:\n          - ``server_priv/resourcedef``\n          - ``qmgr -c \"print server\"``\n          - ``qmgr -c \"print sched\"``\n          - ``qmgr -c \"print hook\"``\n        :param outfile: the output file to which configuration is\n                        saved\n        :type outfile: str\n        :param mode: The mode in which to open outfile to save\n                     configuration. The first object being saved\n                     should open this file with 'w' and subsequent\n                     calls from other objects should save with\n                     mode 'a' or 'a+'. 
Defaults to 'w'\n        :type mode: str\n        :returns: True on success, False on error\n        \"\"\"\n        conf = {}\n        # save pbs.conf file\n        cfg_path = self.du.get_pbs_conf_file()\n        with open(cfg_path, 'r') as p:\n            pbs_cfg = p.readlines()\n            config = self.utils.convert_to_dictlist(pbs_cfg)\n            cfg_str = str(config[0])\n            encode_utf = cfg_str.encode('UTF-8')\n            pbs_cfg_b64 = base64.b64encode(encode_utf)\n            decode_utf = pbs_cfg_b64.decode('UTF-8')\n        conf['pbs_conf'] = decode_utf\n        # save hook files\n        hooks_str = self._save_hook_files()\n        if hooks_str:\n            conf.update(hooks_str)\n            conf['hooks'] = hooks_str\n        else:\n            self.logger.error('Failed to save site hooks')\n            return False\n        qmgr = os.path.join(self.client_conf['PBS_EXEC'], 'bin', 'qmgr')\n        pbsnodes = os.path.join(\n            self.client_conf['PBS_EXEC'], 'bin', 'pbsnodes')\n        ret = self.du.run_cmd(\n            self.hostname, [\n                qmgr, '-c', 'print server'], sudo=True,\n            logerr=False, level=logging.DEBUG)\n        if ret['rc'] != 0:\n            self.logger.error('Failed to get Server attributes')\n            return False\n        else:\n            conf['qmgr_print_server'] = ret['out']\n        ret = self.du.run_cmd(self.hostname, [qmgr, '-c', 'print sched'],\n                              logerr=False, level=logging.DEBUG, sudo=True)\n        if ret['rc'] != 0:\n            self.logger.error('Failed to get sched attributes')\n            return False\n        else:\n            conf['qmgr_print_sched'] = ret['out']\n\n        # sudo=True is added while running \"pbsnodes -av\", to make\n        # sure that all the node attributes are preserved in\n        # save_configuration. 
If this command is run without sudo,\n        # some of the node attributes like port, version is not listed.\n        ret = self.du.run_cmd(self.hostname, [pbsnodes, '-av'],\n                              logerr=False, level=logging.DEBUG, sudo=True)\n        err_msg = \"Server has no node list\"\n        # pbsnodes -av returns a non zero exit code when there are\n        # no nodes in cluster\n        if ret['rc'] != 0 and err_msg in ret['err']:\n            self.logger.error('Failed to get nodes info')\n            return False\n        else:\n            nodes_val = self.utils.convert_to_dictlist(ret['out'])\n            conf['pbsnodes'] = nodes_val\n        self.saved_config[MGR_OBJ_SERVER] = conf\n        if outfile is not None:\n            try:\n                with open(outfile, mode) as f:\n                    json.dump(self.saved_config, f)\n                    self.saved_config[MGR_OBJ_SERVER].clear()\n            except Exception:\n                self.logger.error('Error processing file ' + outfile)\n                return False\n\n        return True\n\n    def _save_hook_files(self):\n        \"\"\"\n        save all the hooks .CF, .PY, .HK files\n        \"\"\"\n        qmgr = os.path.join(self.client_conf['PBS_EXEC'], 'bin', 'qmgr')\n        cfg = {\"hooks\": \"\"}\n        cmd = [qmgr, '-c', 'print hook @default']\n        ret = self.du.run_cmd(self.hostname, cmd,\n                              sudo=True)\n        if ret['rc'] != 0:\n            self.logger.error('Failed to save hook files ')\n            return False\n        else:\n            cfg['qmgr_print_hook'] = ret['out']\n        return cfg\n\n    def load_configuration(self, infile):\n        \"\"\"\n        load server configuration from saved file ``infile``\n        \"\"\"\n        rv = self._load_configuration(infile, MGR_OBJ_SERVER)\n        return rv\n\n    def get_hostname(self):\n        \"\"\"\n        return the default server hostname\n        \"\"\"\n\n        if 
self.get_op_mode() == PTL_CLI:\n            return self.hostname\n        return pbs_default()\n\n    def _db_connect(self, db_access=None):\n        if self._db_conn is None:\n            if 'user' not in db_access or\\\n               'password' not in db_access:\n                self.logger.error('missing credentials to access DB')\n                return None\n\n            if 'dbname' not in db_access:\n                db_access['dbname'] = 'pbs_datastore'\n            if 'port' not in db_access:\n                db_access['port'] = '15007'\n\n            if 'host' not in db_access:\n                db_access['host'] = self.hostname\n\n            user = db_access['user']\n            dbname = db_access['dbname']\n            port = db_access['port']\n            password = db_access['password']\n            host = db_access['host']\n\n            cred = \"host=%s dbname=%s user=%s password=%s port=%s\" % \\\n                (host, dbname, user, password, port)\n            self._db_conn = psycopg2.connect(cred)\n\n        return self._db_conn\n\n    def _db_server_host(self, cur=None, db_access=None):\n        \"\"\"\n        Get the server host name from the database. The server\n        host name is stored in the pbs.server table and not in\n        pbs.server_attr.\n        :param cur: Optional, a predefined cursor to use to\n                    operate on the DB\n        :param db_access: set to either file containing\n                           credentials to DB access or\n                           dictionary containing\n                           ``{'dbname':...,'user':...,'port':...}``\n        :type db_access: str or dictionary\n        \"\"\"\n        local_init = False\n\n        if cur is None:\n            conn = self._db_connect(db_access)\n            local_init = True\n            if conn is None:\n                return None\n            cur = conn.cursor()\n\n        # obtain server name. 
The server hostname is stored in table\n        # pbs.server\n        cur.execute('SELECT sv_hostname from pbs.server')\n        if local_init:\n            conn.commit()\n\n        tmp_query = cur.fetchone()\n        if len(tmp_query) > 0:\n            svr_host = tmp_query[0]\n        else:\n            svr_host = \"unknown\"\n        return svr_host\n\n    def status_db(self, obj_type=None, attrib=None, id=None, db_access=None,\n                  logerr=True):\n        \"\"\"\n        Status PBS objects from the SQL database\n        :param obj_type: The type of object to query, one of the\n                         * objects, Default: SERVER\n        :param attrib: Attributes to query, can a string, a list,\n                       a dictionary Default: None. All attributes\n                       will be queried\n        :type attrib: str or list or dictionary\n        :param id: An optional identifier, the name of the object\n                   to status\n        :type id: str\n        :param db_access: information needed to access the database,\n                          can be either a file containing user,\n                          port, dbname, password info or a\n                          dictionary of key/value entries\n        :type db_access: str or dictionary\n        \"\"\"\n        if not PSYCOPG:\n            self.logger.error('psycopg module unavailable, install from ' +\n                              'http://initd.org/psycopg/ and retry')\n            return None\n\n        if not isinstance(db_access, dict):\n            try:\n                with open(db_access, 'r') as f:\n                    lines = f.readlines()\n            except IOError:\n                self.logger.error('Unable to access ' + db_access)\n                return None\n            db_access = {}\n            for line in lines:\n                (k, v) = line.split('=')\n                db_access[k] = v\n\n        conn = self._db_connect(db_access)\n        if conn is None:\n  
          return None\n\n        cur = conn.cursor()\n\n        stmt = []\n        if obj_type == SERVER:\n            stmt = [\"SELECT sv_name,attr_name,attr_resource,attr_value \" +\n                    \"FROM pbs.server_attr\"]\n            svr_host = self.hostname  # self._db_server_host(cur)\n        elif obj_type == SCHED:\n            stmt = [\"SELECT sched_name,attr_name,attr_resource,attr_value \" +\n                    \"FROM pbs.scheduler_attr\"]\n            # reuse server host name for sched host\n            svr_host = self.hostname\n        elif obj_type == JOB:\n            stmt = [\"SELECT ji_jobid,attr_name,attr_resource,attr_value \" +\n                    \"FROM pbs.job_attr\"]\n            if id:\n                id_stmt = [\"ji_jobid='\" + id + \"'\"]\n        elif obj_type == QUEUE:\n            stmt = [\"SELECT qu_name,attr_name,attr_resource,attr_value \" +\n                    \"FROM pbs.queue_attr\"]\n            if id:\n                id_stmt = [\"qu_name='\" + id + \"'\"]\n        elif obj_type == RESV:\n            stmt = [\"SELECT ri_resvid,attr_name,attr_resource,attr_value \" +\n                    \"FROM pbs.resv_attr\"]\n            if id:\n                id_stmt = [\"ri_resvid='\" + id + \"'\"]\n        elif obj_type in (NODE, VNODE):\n            stmt = [\"SELECT nd_name,attr_name,attr_resource,attr_value \" +\n                    \"FROM pbs.node_attr\"]\n            if id:\n                id_stmt = [\"nd_name='\" + id + \"'\"]\n        else:\n            self.logger.error('status: object type not handled')\n            return None\n\n        if attrib or id:\n            stmt += [\"WHERE\"]\n            extra_stmt = []\n            if attrib:\n                if isinstance(attrib, dict):\n                    attrs = attrib.keys()\n                elif isinstance(attrib, list):\n                    attrs = attrib\n                elif isinstance(attrib, str):\n                    attrs = attrib.split(',')\n                for 
a in attrs:\n                    extra_stmt += [\"attr_name='\" + a + \"'\"]\n                stmt += [\" OR \".join(extra_stmt)]\n            if id:\n                stmt += [\" AND \", \" AND \".join(id_stmt)]\n\n        exec_stmt = \" \".join(stmt)\n        self.logger.debug('server: executing db statement: ' + exec_stmt)\n        cur.execute(exec_stmt)\n        conn.commit()\n        _results = cur.fetchall()\n        obj_dict = {}\n        for _res in _results:\n            if obj_type in (SERVER, SCHED):\n                obj_name = svr_host\n            else:\n                obj_name = _res[0]\n            if obj_name not in obj_dict:\n                obj_dict[obj_name] = {'id': obj_name}\n            attr = _res[1]\n            if _res[2]:\n                attr += '.' + _res[2]\n\n            obj_dict[obj_name][attr] = _res[3]\n\n        return list(obj_dict.values())\n\n    def qdisable(self, queue=None, runas=None, logerr=True):\n        \"\"\"\n        Disable queue. ``CLI`` mode only\n        :param queue: The name of the queue or list of queues to\n                      disable\n        :type queue: str or list\n        :param runas: Optional name of user to run command as\n        :type runas: str or None\n        :param logerr: Set to False to disable logging command\n                       errors. Defaults to True.\n        :type logerr: bool\n        :raises: PbsQdisableError\n        \"\"\"\n        prefix = 'qdisable on ' + self.shortname\n        if runas is not None:\n            prefix += ' as ' + str(runas)\n        prefix += ': '\n        if queue is not None:\n            if not isinstance(queue, list):\n                queue = queue.split(',')\n            prefix += ', '.join(queue)\n        self.logger.info(prefix)\n\n        if self.get_op_mode() == PTL_CLI:\n            pcmd = [os.path.join(self.client_conf['PBS_EXEC'], 'bin',\n                                 'qdisable')]\n            if queue is not None:\n                pcmd += 
queue\n            if not self.default_client_pbs_conf:\n                pcmd = ['PBS_CONF_FILE=' + self.client_pbs_conf_file] + pcmd\n                as_script = True\n            else:\n                as_script = False\n            ret = self.du.run_cmd(self.client, pcmd, runas=runas,\n                                  as_script=as_script, level=logging.INFOCLI,\n                                  logerr=logerr)\n            if ret['err'] != ['']:\n                self.last_error = ret['err']\n            self.last_rc = ret['rc']\n            if self.last_rc != 0:\n                raise PbsQdisableError(rc=self.last_rc, rv=False,\n                                       msg=self.last_error)\n        else:\n            _msg = 'qdisable: currently not supported in API mode'\n            raise PbsQdisableError(rv=False, rc=1, msg=_msg)\n\n    def qenable(self, queue=None, runas=None, logerr=True):\n        \"\"\"\n        Enable queue. ``CLI`` mode only\n        :param queue: The name of the queue or list of queues to\n                      enable\n        :type queue: str or list\n        :param runas: Optional name of user to run command as\n        :type runas: str or None\n        :param logerr: Set to False to disable logging command\n                       errors. Defaults to True.\n        :type logerr: bool\n        :raises: PbsQenableError\n        \"\"\"\n        prefix = 'qenable on ' + self.shortname\n        if runas is not None:\n            prefix += ' as ' + str(runas)\n        prefix += ': '\n        if queue is not None:\n            if not isinstance(queue, list):\n                queue = queue.split(',')\n            prefix += ', '.join(queue)\n        self.logger.info(prefix)\n\n        if self.get_op_mode() == PTL_CLI:\n            pcmd = [os.path.join(self.client_conf['PBS_EXEC'], 'bin',\n                                 'qenable')]\n            if queue is not None:\n                pcmd += queue\n            if not self.default_client_pbs_conf:\n                pcmd = ['PBS_CONF_FILE=' + self.client_pbs_conf_file] + pcmd\n                as_script = True\n            else:\n                as_script = False\n            ret = self.du.run_cmd(self.client, pcmd, runas=runas,\n                                  as_script=as_script, level=logging.INFOCLI,\n                                  logerr=logerr)\n            if ret['err'] != ['']:\n                self.last_error = ret['err']\n            self.last_rc = ret['rc']\n            if self.last_rc != 0:\n                raise PbsQenableError(rc=self.last_rc, rv=False,\n                                      msg=self.last_error)\n        else:\n            _msg = 'qenable: currently not supported in API mode'\n            raise PbsQenableError(rv=False, rc=1, msg=_msg)\n\n    def qstart(self, queue=None, runas=None, logerr=True):\n        \"\"\"\n        Start queue. ``CLI`` mode only\n        :param queue: The name of the queue or list of queues\n                      to start\n        :type queue: str or list\n        :param runas: Optional name of user to run command as\n        :type runas: str or None\n        :param logerr: Set to False to disable logging command\n                       errors. Defaults to True.\n        :type logerr: bool\n        :raises: PbsQstartError\n        \"\"\"\n        prefix = 'qstart on ' + self.shortname\n        if runas is not None:\n            prefix += ' as ' + str(runas)\n        prefix += ': '\n        if queue is not None:\n            if not isinstance(queue, list):\n                queue = queue.split(',')\n            prefix += ', '.join(queue)\n        self.logger.info(prefix)\n\n        if self.get_op_mode() == PTL_CLI:\n            pcmd = [os.path.join(self.client_conf['PBS_EXEC'], 'bin',\n                                 'qstart')]\n            if queue is not None:\n                pcmd += queue\n            if not self.default_client_pbs_conf:\n                pcmd = 
['PBS_CONF_FILE=' + self.client_pbs_conf_file] + pcmd\n                as_script = True\n            else:\n                as_script = False\n            ret = self.du.run_cmd(self.client, pcmd, runas=runas,\n                                  as_script=as_script, level=logging.INFOCLI,\n                                  logerr=logerr)\n            if ret['err'] != ['']:\n                self.last_error = ret['err']\n            self.last_rc = ret['rc']\n            if self.last_rc != 0:\n                raise PbsQstartError(rc=self.last_rc, rv=False,\n                                     msg=self.last_error)\n        else:\n            _msg = 'qstart: currently not supported in API mode'\n            raise PbsQstartError(rv=False, rc=1, msg=_msg)\n\n    def qstop(self, queue=None, runas=None, logerr=True):\n        \"\"\"\n        Stop queue. ``CLI`` mode only\n        :param queue: The name of the queue or list of queues to stop\n        :type queue: str or list\n        :param runas: Optional name of user to run command as\n        :type runas: str or None\n        :param logerr: Set to False to disable logging command errors.\n                       Defaults to True.\n        :type logerr: bool\n        :raises: PbsQstopError\n        \"\"\"\n        prefix = 'qstop on ' + self.shortname\n        if runas is not None:\n            prefix += ' as ' + str(runas)\n        prefix += ': '\n        if queue is not None:\n            if not isinstance(queue, list):\n                queue = queue.split(',')\n            prefix += ', '.join(queue)\n        self.logger.info(prefix)\n\n        if self.get_op_mode() == PTL_CLI:\n            pcmd = [os.path.join(self.client_conf['PBS_EXEC'], 'bin',\n                                 'qstop')]\n            if queue is not None:\n                pcmd += queue\n            if not self.default_client_pbs_conf:\n                pcmd = ['PBS_CONF_FILE=' + self.client_pbs_conf_file] + pcmd\n                as_script = True\n            else:\n                as_script = False\n            ret = self.du.run_cmd(self.client, pcmd, runas=runas,\n                                  as_script=as_script, level=logging.INFOCLI,\n                                  logerr=logerr)\n            if ret['err'] != ['']:\n                self.last_error = ret['err']\n            self.last_rc = ret['rc']\n            if self.last_rc != 0:\n                raise PbsQstopError(rc=self.last_rc, rv=False,\n                                    msg=self.last_error)\n        else:\n            _msg = 'qstop: currently not supported in API mode'\n            raise PbsQstopError(rv=False, rc=1, msg=_msg)\n\n    def parse_resources(self):\n        \"\"\"\n        Parse server resources as defined in the resourcedef file.\n        Populates instance variable self.resources\n        :returns: The resources as a dictionary\n        \"\"\"\n        if not self.has_snap:\n            self.manager(MGR_CMD_LIST, RSC)\n        return self.resources\n\n    def remove_resource(self, name):\n        \"\"\"\n        Remove an entry from resourcedef\n        :param name: The name of the resource to remove\n        :type name: str\n        :param restart: Whether to restart the server or not.\n                        Applicable to update_mode 'file'\n                        operations only.\n        :param update_mode: one of 'file' or 'auto' (the default).\n                            If 'file', updates the resourcedef file\n                            only and will not use the qmgr\n                            operations on resources introduced in\n                            12.3. 
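For reference, each resourcedef entry is a single line of the form ``name [type=<t>] [flag=<f>]``; a minimal standalone parser sketch of that file format (illustrative only, not PTL's implementation, which is ``Server.parse_resourcedef``):

```python
def parse_resourcedef_line(line):
    """Split one resourcedef line into (name, type, flag).

    Sketch of the line format only; 'type' and 'flag' fields are
    optional and may appear in either order.
    """
    fields = line.strip().split()
    name, rtype, flag = fields[0], None, None
    for field in fields[1:]:
        key, _, value = field.partition('=')
        if key == 'type':
            rtype = value
        elif key == 'flag':
            flag = value
    return name, rtype, flag
```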
If 'auto', will automatically\n                            handle the update on resourcedef or\n                            using qmgr based on the version of the\n                            Server.\n        \"\"\"\n        self.parse_resources()\n        if not self.has_snap:\n            if name in self.resources:\n                self.manager(MGR_CMD_DELETE, RSC, id=name)\n\n    def add_resource(self, name, type=None, flag=None):\n        \"\"\"\n        Define a server resource\n        :param name: The name of the resource to add to the\n                     resourcedef file\n        :type name: str\n        :param type: The type of the resource, one of string,\n                     long, boolean, float\n        :param flag: The target of the resource, one of n, h, q,\n                     or none\n        :type flag: str or None\n        :param restart: Whether to restart the server after adding\n                        a resource. Applicable to update_mode 'file'\n                        operations only.\n        :param update_mode: one of 'file' or 'auto' (the default).\n                            If 'file', updates the resourcedef file\n                            only and will not use the qmgr\n                            operations on resources introduced in\n                            12.3. 
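The create-or-redefine logic in the method body boils down to the following decision; this is an illustrative standalone sketch (plain tuples standing in for Resource objects, function name invented), not the PTL code itself:

```python
def resource_action(existing, name, rtype, flag):
    """Return the qmgr-style action add_resource would take.

    existing maps resource name -> (type, flag).  Sketch of the
    decision only: create a new resource, leave an identical
    definition alone, or redefine a differing one.
    """
    if name not in existing:
        return 'create'
    if existing[name] == (rtype, flag):
        return 'noop'   # identical definition already present
    return 'set'        # redefine the existing resource
```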
If 'auto', will automatically\n                            handle the update on resourcedef or\n                            using qmgr based on the version of the\n                            Server.\n        :returns: True on success False on error\n        \"\"\"\n        rv = self.parse_resources()\n        if rv is None:\n            return False\n\n        resource_exists = False\n        if name in self.resources:\n            msg = [self.logprefix + \"resource \" + name]\n            if type:\n                msg += [\"type: \" + type]\n            if flag:\n                msg += [\"flag: \" + flag]\n            msg += [\" already defined\"]\n            self.logger.info(\" \".join(msg))\n\n            (t, f) = (self.resources[name].type, self.resources[name].flag)\n            if type == t and flag == f:\n                return True\n\n            self.logger.info(\"resource: redefining resource \" + name +\n                             \" type: \" + str(type) + \" and flag: \" + str(flag))\n            del self.resources[name]\n            resource_exists = True\n\n        r = Resource(name, type, flag)\n        self.resources[name] = r\n        a = {}\n        if type:\n            a['type'] = type\n        if flag:\n            a['flag'] = flag\n        if resource_exists:\n            self.manager(MGR_CMD_SET, RSC, a, id=name)\n        else:\n            self.manager(MGR_CMD_CREATE, RSC, a, id=name)\n        return True\n\n    def write_resourcedef(self, resources=None, filename=None, restart=True):\n        \"\"\"\n        Write into resource def file\n        :param resources: PBS resources\n        :type resources: dictionary\n        :param filename: resourcedef file name\n        :type filename: str or None\n        \"\"\"\n        if resources is None:\n            resources = self.resources\n        if isinstance(resources, Resource):\n            resources = {resources.name: resources}\n        fn = self.du.create_temp_file()\n        with 
open(fn, 'w+') as f:\n            for r in resources.values():\n                f.write(r.attributes['id'])\n                if r.attributes['type'] is not None:\n                    f.write(' type=' + r.attributes['type'])\n                if r.attributes['flag'] is not None:\n                    f.write(' flag=' + r.attributes['flag'])\n                f.write('\\n')\n        if filename is None:\n            dest = os.path.join(self.pbs_conf['PBS_HOME'], 'server_priv',\n                                'resourcedef')\n        else:\n            dest = filename\n        self.du.run_copy(self.hostname, src=fn, dest=dest, sudo=True,\n                         preserve_permission=False)\n        os.remove(fn)\n        if restart:\n            return self.restart()\n        return True\n\n    def parse_resourcedef(self, file=None):\n        \"\"\"\n        Parse an arbitrary resource definition file passed as\n        input and return a dictionary of resources\n        :param file: resource definition file\n        :type file: str or None\n        :returns: Dictionary of resource\n        :raises: PbsResourceError\n        \"\"\"\n        if file is None:\n            file = os.path.join(self.pbs_conf['PBS_HOME'], 'server_priv',\n                                'resourcedef')\n        ret = self.du.cat(self.hostname, file, logerr=False, sudo=True)\n        if ret['rc'] != 0 or len(ret['out']) == 0:\n            # Most probable error is that file does not exist, we'll let it\n            # be created\n            return {}\n\n        resources = {}\n        lines = ret['out']\n        try:\n            for l in lines:\n                strip_line = l.strip()\n                if strip_line == '' or strip_line.startswith('#'):\n                    continue\n                name = None\n                rtype = None\n                flag = None\n                res = strip_line.split()\n                e0 = res[0]\n                if len(res) > 1:\n                    e1 = 
res[1].split('=')\n                else:\n                    e1 = None\n                if len(res) > 2:\n                    e2 = res[2].split('=')\n                else:\n                    e2 = None\n                if e1 is not None and e1[0] == 'type':\n                    rtype = e1[1]\n                elif e2 is not None and e2[0] == 'type':\n                    rtype = e2[1]\n                if e1 is not None and e1[0] == 'flag':\n                    flag = e1[1]\n                elif e2 is not None and e2[0] == 'flag':\n                    flag = e2[1]\n                name = e0\n                r = Resource(name, rtype, flag)\n                resources[name] = r\n        except Exception:\n            raise PbsResourceError(rc=1, rv=False,\n                                   msg=\"error in parse_resources\")\n        return resources\n\n    def is_history_enabled(self):\n        \"\"\"\n        Short-hand method to return the value of job_history_enable\n        \"\"\"\n        a = ATTR_JobHistoryEnable\n        attrs = self.status(SERVER, level=logging.DEBUG)[0]\n        if ((a in attrs.keys()) and attrs[a] == 'True'):\n            return True\n        return False\n\n    def cleanup_jobs(self):\n        \"\"\"\n        Helper function to delete all jobs.\n        By default this method will determine whether\n        job_history_enable is on and will cleanup all history\n        jobs. 
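The method body builds a host-to-session-id map by trimming each job's ``exec_host`` string down to its first host name; the trimming can be sketched standalone as follows (sample exec_host values and session ids are invented):

```python
def exec_hostname(exec_host):
    """First host of an exec_host spec, with port and vnode chunk
    suffixes stripped, mirroring the split('/')[0].split(':')[0]
    trimming used in cleanup_jobs."""
    return exec_host.split('/')[0].split(':')[0]

# Group session ids by the host they run on.
host_pid_map = {}
for job in [{'exec_host': 'nodeA:15002/0*2', 'session_id': '101'},
            {'exec_host': 'nodeA/1', 'session_id': '102'},
            {'exec_host': 'nodeB/0', 'session_id': '103'}]:
    host_pid_map.setdefault(exec_hostname(job['exec_host']), []).append(
        job['session_id'])
```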
Specifying an extend parameter could override\n        this behavior.\n        \"\"\"\n        delete_xt = 'force'\n        select_xt = None\n        if self.is_history_enabled():\n            delete_xt += 'deletehist'\n            select_xt = 'x'\n        jobs = self.status(JOB, extend=select_xt)\n        job_ids = sorted(list(set([x['id'] for x in jobs])))\n        running_jobs = sorted([j['id'] for j in jobs if j['job_state'] == 'R'])\n        host_pid_map = {}\n        for job in jobs:\n            exec_host = job.get('exec_host', None)\n            if not exec_host or 'session_id' not in job:\n                continue\n            _host = exec_host.split('/')[0].split(':')[0]\n            if _host not in host_pid_map:\n                host_pid_map.setdefault(_host, [])\n            host_pid_map[_host].append(job['session_id'])\n\n        # Turn off scheduling so jobs don't start when trying to\n        # delete. Restore the original scheduling state\n        # once jobs are deleted.\n        sched_state = []\n        scheds = self.status(SCHED)\n        for sc in scheds:\n            if sc['scheduling'] == 'True':\n                sched_state.append(sc['id'])\n                # runas is required here because some tests remove\n                # current user from managers list\n                a = {'scheduling': 'False'}\n                self.manager(MGR_CMD_SET, SCHED, a, id=sc['id'],\n                             runas=ROOT_USER)\n        try:\n            self.deljob(id=job_ids, extend=delete_xt,\n                        runas=ROOT_USER, wait=False)\n        except PbsDeljobError:\n            pass\n        st = time.time()\n        if len(job_ids) > 100:\n            for host, pids in host_pid_map.items():\n                chunks = [pids[i:i + 5000] for i in range(0, len(pids), 5000)]\n                pbsnodes = os.path.join(\n                    self.client_conf['PBS_EXEC'], 'bin', 'pbsnodes')\n                ret = self.du.run_cmd(\n                    
self.hostname, [pbsnodes, '-v', host, '-F', 'json'],\n                    logerr=False, level=logging.DEBUG, sudo=True)\n                pbsnodes_json = json.loads('\\n'.join(ret['out']))\n                host = pbsnodes_json['nodes'][host]['Mom']\n                for chunk in chunks:\n                    self.du.run_cmd(host, ['kill', '-9'] + chunk,\n                                    runas=ROOT_USER, logerr=False)\n            if running_jobs:\n                last_running_job = running_jobs[-1]\n                _msg = last_running_job + ';'\n                _msg += 'Job Obit notice received has error 15001'\n                try:\n                    self.log_match(_msg, starttime=st, interval=10,\n                                   max_attempts=10)\n                except PtlLogMatchError:\n                    # don't fail on log match error as here purpose\n                    # of log match is to allow mom to catch up with\n                    # sigchild but we don't want to wait too long\n                    # so limit max attempts to 10 ~ total 100 sec\n                    # of wait\n                    pass\n        rv = self.expect(JOB, {'job_state': 0}, count=True, op=SET)\n        # restore 'scheduling' state\n        for sc in sched_state:\n            a = {'scheduling': 'True'}\n            self.manager(MGR_CMD_SET, SCHED, a, id=sc, runas=ROOT_USER)\n            self.expect(SCHED, a, id=sc)\n        return rv\n\n    def cleanup_reservations(self):\n        \"\"\"\n        Helper function to delete all reservations\n        \"\"\"\n        reservations = self.status(RESV, runas=ROOT_USER)\n        while reservations:\n            resvs = [r['id'] for r in reservations]\n            if len(resvs) > 0:\n                try:\n                    self.delresv(resvs, runas=ROOT_USER)\n                except Exception:\n                    pass\n                reservations = self.status(RESV, runas=ROOT_USER)\n\n    def 
cleanup_jobs_and_reservations(self):\n        \"\"\"\n        Helper function to delete all jobs and reservations\n        \"\"\"\n        rv = self.cleanup_jobs()\n        self.cleanup_reservations()\n        return rv\n\n    def filter(self, obj_type=None, attrib=None, id=None, extend=None, op=None,\n               attrop=None, bslist=None, idonly=True, grandtotal=False,\n               db_access=None, runas=None, resolve_indirectness=False):\n        \"\"\"\n        Filter objects by properties. For example, to filter all\n        free nodes:``server.filter(VNODE,{'state':'free'})``\n        For each attribute queried, if idonly is True, a list of\n        matching object names is returned; if idonly is False, then\n        the value of each attribute queried is returned.\n        This is unlike Python's built-in 'filter' that returns a\n        subset of objects matching from a pool of objects. The\n        Python filtering mechanism remains very useful in some\n        situations and should be used programmatically to achieve\n        desired filtering goals that can not be met easily with\n        PTL's filter method.\n        :param obj_type: The type of object to query, one of the\n                         * objects\n        :param attrib: Attributes to query, can be a string, a\n                       list, a dictionary\n        :type attrib: str or list or dictionary\n        :param id: The id of the object to act upon\n        :param extend: The extended parameter to pass to the stat\n                       call\n        :param op: The operation used to match attrib to what is\n                   queried. 
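The effect of this attribute filter can be approximated on a plain batch-status dict list; a standalone sketch of the ``idonly=True`` semantics (function name and sample node data are invented):

```python
def filter_ids(bslist, attrib):
    """Return the ids of batch-status entries whose attributes match
    every key/value pair in attrib.

    Rough stand-in for server.filter(..., idonly=True) applied to an
    already-fetched dict list.
    """
    return [bs['id'] for bs in bslist
            if all(bs.get(k) == v for k, v in attrib.items())]

nodes = [{'id': 'n1', 'state': 'free'},
         {'id': 'n2', 'state': 'job-busy'},
         {'id': 'n3', 'state': 'free'}]
free_nodes = filter_ids(nodes, {'state': 'free'})
```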
SET or None\n        :type op: str or None\n        :param bslist: Optional, use a batch status dict list\n                       instead of an obj_type\n        :type bslist: List or None\n        :param idonly: if true, return the name/id of the matching\n                       objects\n        :type idonly: bool\n        :param db_access: credentials to access db, either path to\n                          file or dictionary\n        :type db_access: str or dictionary\n        :param runas: run as user\n        :type runas: str or None\n        \"\"\"\n        self.logit('filter: ', obj_type, attrib, id)\n        return self._filter(obj_type, attrib, id, extend, op, attrop, bslist,\n                            PTL_FILTER, idonly, db_access, runas=runas,\n                            resolve_indirectness=resolve_indirectness)\n\n    def equivalence_classes(self, obj_type=None, attrib={}, bslist=None,\n                            op=RESOURCES_AVAILABLE, show_zero_resources=True,\n                            db_access=None, resolve_indirectness=False):\n        \"\"\"\n        :param obj_type: PBS Object to query, one of *\n        :param attrib: attributes to build equivalence classes\n                       out of.\n        :type attrib: dictionary\n        :param bslist: Optional, list of dictionary representation\n                       of a batch status\n        :type bslist: List\n        :param op: set to RESOURCES_AVAILABLE uses the dynamic\n                   amount of resources available, i.e., available -\n                   assigned, otherwise uses static amount of\n                   resources available\n        :param db_acccess: set to either file containing credentials\n                           to DB access or dictionary containing\n                           ``{'dbname':...,'user':...,'port':...}``\n        :type db_access: str or dictionary\n        \"\"\"\n\n        if attrib is None:\n            attrib = {}\n\n        if len(attrib) == 0 and 
obj_type is not None:\n            if obj_type in (VNODE, NODE):\n                attrib = ['resources_available.ncpus',\n                          'resources_available.mem', 'state']\n            elif obj_type == JOB:\n                attrib = ['Resource_List.select',\n                          'queue', 'array_indices_submitted']\n            elif obj_type == RESV:\n                attrib = ['Resource_List.select']\n            else:\n                return {}\n\n        if bslist is None and obj_type is not None:\n            # To get the resources_assigned we must stat the entire object so\n            # bypass the specific attributes that would filter out assigned\n            if op == RESOURCES_AVAILABLE:\n                bslist = self.status(obj_type, None, level=logging.DEBUG,\n                                     db_access=db_access,\n                                     resolve_indirectness=resolve_indirectness)\n            else:\n                bslist = self.status(obj_type, attrib, level=logging.DEBUG,\n                                     db_access=db_access,\n                                     resolve_indirectness=resolve_indirectness)\n\n        if bslist is None or len(bslist) == 0:\n            return {}\n\n        # automatically convert an objectlist into a batch status dict list\n        # for ease of use.\n        if not isinstance(bslist[0], dict):\n            bslist = self.utils.objlist_to_dictlist(bslist)\n\n        if isinstance(attrib, str):\n            attrib = attrib.split(',')\n\n        self.logger.debug(\"building equivalence class\")\n        equiv = {}\n        for bs in bslist:\n            cls = ()\n            skip_cls = False\n            # attrs will be part of the EquivClass object\n            attrs = {}\n            # Filter the batch attributes by the attribs requested\n            for a in attrib:\n                if a in bs:\n                    amt = PbsAttribute.decode_value(bs[a])\n                    if 
a.startswith('resources_available.'):\n                        val = a.replace('resources_available.', '')\n                        if (op == RESOURCES_AVAILABLE and\n                                'resources_assigned.' + val in bs):\n                            amt = (int(amt) - int(PbsAttribute.decode_value(\n                                   bs['resources_assigned.' + val])))\n                        # A negative amt is not a bug: whats_available\n                        # computes availability subtractively. Resources are\n                        # added back when jobs/reservations end, but only\n                        # what is available now for a given duration\n                        # matters, so a negative amount is clamped to 0\n                        if amt < 0:\n                            amt = 0\n\n                        # TODO: not a foolproof way to catch a memory type,\n                        # but encoding fails for values that are not valid\n                        # memory sizes, in which case the raw amount is kept\n                        if a.endswith('mem'):\n                            try:\n                                amt = PbsTypeSize().encode(amt)\n                            except Exception:\n                                # we guessed the type incorrectly\n                                pass\n                    else:\n                        val = a\n                    if amt == 0 and not show_zero_resources:\n                        skip_cls = True\n                        break\n                    # Build the key of the equivalence class\n                    cls += (val + '=' + str(amt),)\n                    attrs[val] = amt\n            # Now that we are done with this object, add it to an equiv class\n            if len(cls) > 0 and not skip_cls:\n                if 
cls in equiv:\n                    equiv[cls].add_entity(bs['id'])\n                else:\n                    equiv[cls] = EquivClass(cls, attrs, [bs['id']])\n\n        return list(equiv.values())\n\n    def show_equivalence_classes(self, eq=None, obj_type=None, attrib=None,\n                                 bslist=None, op=RESOURCES_AVAILABLE,\n                                 show_zero_resources=True, db_access=None,\n                                 resolve_indirectness=False):\n        \"\"\"\n        Helper function to show the equivalence classes\n        :param eq: equivalence classes as computed by\n                   equivalence_classes; see equivalence_classes\n                   for a description of the remaining parameters\n        :param db_access: set to either file containing credentials\n                          to DB access or dictionary containing\n                          ``{'dbname':...,'user':...,'port':...}``\n        :type db_access: str or dictionary\n        \"\"\"\n        if eq is None:\n            equiv = self.equivalence_classes(obj_type, attrib, bslist, op,\n                                             show_zero_resources, db_access,\n                                             resolve_indirectness)\n        else:\n            equiv = eq\n        equiv = sorted(equiv, key=lambda e: len(e.entities))\n        for e in equiv:\n            print(str(e))\n\n    def whats_available(self, attrib=None, jobs=None, resvs=None, nodes=None):\n        \"\"\"\n        Returns what's available as a list of node equivalence\n        classes listed by availability over time.\n        :param attrib: attributes to consider\n        :type attrib: List\n        :param jobs: jobs to consider, if None, jobs are queried\n                     locally\n        :param resvs: reservations to consider, if None, they are\n                      queried locally\n        :param nodes: nodes to consider, if None, they are queried\n                  
    locally\n        \"\"\"\n\n        if attrib is None:\n            attrib = ['resources_available.ncpus',\n                      'resources_available.mem', 'state']\n\n        if resvs is None:\n            self.status(RESV)\n            resvs = self.reservations\n\n        if jobs is None:\n            self.status(JOB)\n            jobs = self.jobs\n\n        if nodes is None:\n            self.status(NODE)\n            nodes = self.nodes\n\n        nodes_id = list(nodes.keys())\n        avail_nodes_by_time = {}\n\n        def alloc_resource(self, node, resources):\n            # helper function. Must work on a scratch copy of nodes otherwise\n            # resources_available will get corrupted\n            for rsc, value in resources.items():\n                if isinstance(value, int) or value.isdigit():\n                    avail = node.attributes['resources_available.' + rsc]\n                    nvalue = int(avail) - int(value)\n                    node.attributes['resources_available.' 
+ rsc] = nvalue\n\n        # Account for reservations\n        for resv in resvs.values():\n            resvnodes = resv.execvnode('resv_nodes')\n            if resvnodes:\n                starttime = self.utils.convert_stime_to_seconds(\n                    resv.attributes['reserve_start'])\n                for node in resvnodes:\n                    for n, resc in node.items():\n                        tm = int(starttime) - int(self.ctime)\n                        if tm < 0 or n not in nodes_id:\n                            continue\n                        if tm not in avail_nodes_by_time:\n                            avail_nodes_by_time[tm] = []\n                        if nodes[n].attributes['sharing'] in ('default_excl',\n                                                              'force_excl'):\n                            avail_nodes_by_time[tm].append(nodes[n])\n                            try:\n                                nodes_id.remove(n)\n                            except Exception:\n                                pass\n                        else:\n                            ncopy = copy.copy(nodes[n])\n                            ncopy.attributes = copy.deepcopy(\n                                nodes[n].attributes)\n                            avail_nodes_by_time[tm].append(ncopy)\n                            alloc_resource(self, nodes[n], resc)\n\n        # go on to look at the calendar of scheduled jobs to run and set\n        # the node availability according to when the job is estimated to\n        # start on the node\n        for job in self.jobs.values():\n            if (job.attributes['job_state'] != 'R' and\n                    'estimated.exec_vnode' in job.attributes):\n                estimatednodes = job.execvnode('estimated.exec_vnode')\n                if estimatednodes:\n                    st = job.attributes['estimated.start_time']\n                    # Tweak for NAS format of estimated time that has\n                    # 
num seconds from epoch followed by datetime\n                    if st.split()[0].isdigit():\n                        starttime = st.split()[0]\n                    else:\n                        starttime = self.utils.convert_stime_to_seconds(st)\n                    for node in estimatednodes:\n                        for n, resc in node.items():\n                            tm = int(starttime) - int(self.ctime)\n                            if (tm < 0 or n not in nodes_id or\n                                    nodes[n].state != 'free'):\n                                continue\n                            if tm not in avail_nodes_by_time:\n                                avail_nodes_by_time[tm] = []\n                            if (nodes[n].attributes['sharing'] in\n                                    ('default_excl', 'force_excl')):\n                                avail_nodes_by_time[tm].append(nodes[n])\n                                try:\n                                    nodes_id.remove(n)\n                                except Exception:\n                                    pass\n                            else:\n                                ncopy = copy.copy(nodes[n])\n                                ncopy.attributes = copy.deepcopy(\n                                    nodes[n].attributes)\n                                avail_nodes_by_time[tm].append(ncopy)\n                                alloc_resource(self, nodes[n], resc)\n\n        # remaining nodes are free \"forever\"\n        for node in nodes_id:\n            if self.nodes[node].state == 'free':\n                if 'infinity' not in avail_nodes_by_time:\n                    avail_nodes_by_time['infinity'] = [nodes[node]]\n                else:\n                    avail_nodes_by_time['infinity'].append(nodes[node])\n\n        # if there is a dedicated time, move the availability time up to that\n        # time as necessary\n        if self.schedulers[self.dflt_sched_name] is None:\n    
        self.schedulers[self.dflt_sched_name] = Scheduler(server=self)\n\n        self.schedulers[self.dflt_sched_name].parse_dedicated_time()\n\n        if self.schedulers[self.dflt_sched_name].dedicated_time:\n            dedtime = self.schedulers[\n                self.dflt_sched_name].dedicated_time[0]['from'] - int(\n                self.ctime)\n            if dedtime <= int(time.time()):\n                dedtime = None\n        else:\n            dedtime = None\n\n        # finally, build the equivalence classes off of the nodes availability\n        # over time\n        self.logger.debug(\"Building equivalence classes\")\n        whazzup = {}\n        if 'state' in attrib:\n            attrib.remove('state')\n        for tm, nds in avail_nodes_by_time.items():\n            equiv = self.equivalence_classes(VNODE, attrib, bslist=nds,\n                                             show_zero_resources=False)\n            if dedtime and (tm > dedtime or tm == 'infinity'):\n                tm = dedtime\n            if tm != 'infinity':\n                tm = str(datetime.timedelta(seconds=int(tm)))\n            whazzup[tm] = equiv\n\n        return whazzup\n\n    def show_whats_available(self, wa=None, attrib=None, jobs=None,\n                             resvs=None, nodes=None):\n        \"\"\"\n        helper function to show availability as computed by\n        whats_available\n        :param wa: a dictionary of available attributes. 
See\n                   whats_available for a\n                   description of the remaining parameters\n        :type wa: Dictionary\n        \"\"\"\n        if wa is None:\n            wa = self.whats_available(attrib, jobs, resvs, nodes)\n        if len(wa) > 0:\n            print(\"%24s\\t%s\" % (\"Duration of availability\", \"Resources\"))\n            print(\"-------------------------\\t----------\")\n        swa = sorted(wa.items(), key=lambda x: x[0])\n        for (k, eq_classes) in swa:\n            for eq_cl in eq_classes:\n                print(\"%24s\\t%s\" % (str(k), str(eq_cl)))\n\n    def utilization(self, resources=None, nodes=None, jobs=None, entity={}):\n        \"\"\"\n        Return utilization of consumable resources on a set of\n        nodes\n        :param nodes: A list of node dictionaries on which to\n                      compute utilization. Defaults to nodes\n                      resulting from a stat call to the current\n                      server.\n        :type nodes: List\n        :param resources: comma-separated list of resources to\n                          compute utilization on. The name of the\n                          resource is for example, ncpus or mem\n        :type resources: List\n        :param entity: An optional dictionary of entities to\n                       compute utilization of,\n                       ``e.g. 
{'user': u1, 'group': g1, 'project': p1}``\n        :type entity: Dictionary\n        The utilization is returned as a dictionary mapping each\n        resource to an ``[assigned, available]`` pair.\n        Non-consumable resources are silently ignored.\n        \"\"\"\n        if nodes is None:\n            nodes = self.status(NODE)\n\n        if jobs is None:\n            jobs = self.status(JOB)\n\n        if resources is None:\n            rescs = ['ncpus', 'mem']\n        else:\n            rescs = resources\n\n        utilization = {}\n        resavail = {}\n        resassigned = {}\n        usednodes = 0\n        totnodes = 0\n        nodes_set = set()\n\n        for res in rescs:\n            resavail[res] = 0\n            resassigned[res] = 0\n\n        # If an entity is specified, utilization must be collected from\n        # the jobs' usage; otherwise we can get the information directly\n        # from the nodes.\n        if len(entity) > 0 and jobs is not None:\n            for job in jobs:\n                if 'job_state' in job and job['job_state'] != 'R':\n                    continue\n                entity_match = True\n                for k, v in entity.items():\n                    if k not in job or job[k] != v:\n                        entity_match = False\n                        break\n                if entity_match:\n                    for res in rescs:\n                        r = 'Resource_List.' 
+ res\n                        if r in job:\n                            tmpr = int(PbsAttribute.decode_value(job[r]))\n                            resassigned[res] += tmpr\n                    if 'exec_host' in job:\n                        hosts = ResourceResv.get_hosts(job['exec_host'])\n                        nodes_set |= set(hosts)\n\n        for node in nodes:\n            # skip nodes in non-schedulable state\n            nstate = node['state']\n            if ('down' in nstate or 'unavailable' in nstate or\n                    'unknown' in nstate or 'Stale' in nstate):\n                continue\n\n            totnodes += 1\n\n            # If an entity utilization was requested, all used nodes were\n            # already filtered into the nodes_set specific to that entity, we\n            # simply add them up. If no entity was requested, it suffices to\n            # have the node have a jobs attribute to count it towards total\n            # used nodes\n            if len(entity) > 0:\n                if node['id'] in nodes_set:\n                    usednodes += 1\n            elif 'jobs' in node:\n                usednodes += 1\n\n            for res in rescs:\n                avail = 'resources_available.' + res\n                if avail in node:\n                    val = PbsAttribute.decode_value(node[avail])\n                    if isinstance(val, int):\n                        resavail[res] += val\n\n                        # When entity matching all resources assigned are\n                        # accounted for by the job usage\n                        if len(entity) == 0:\n                            assigned = 'resources_assigned.' 
+ res\n                            if assigned in node:\n                                val = PbsAttribute.decode_value(\n                                    node[assigned])\n                                if isinstance(val, int):\n                                    resassigned[res] += val\n\n        for res in rescs:\n            if (res in resavail and res in resassigned and\n                    resavail[res] > 0):\n                utilization[res] = [resassigned[res], resavail[res]]\n\n        # Only report nodes utilization if no specific resources were requested\n        if resources is None:\n            utilization['nodes'] = [usednodes, totnodes]\n\n        return utilization\n\n    def create_moms(self, name=None, attrib=None, num=1, delall=True,\n                    createnode=True, conf_prefix='pbs.conf_m',\n                    home_prefix='pbs_m', momhosts=None, init_port=15011,\n                    step_port=2):\n        \"\"\"\n        Create MoM configurations and optionally add them to the\n        server. 
Unique ``pbs.conf`` files are defined and created\n        on each host on which MoMs are to be created.\n        :param name: Optional prefix name of the nodes to create.\n                     Defaults to the name of the MoM host.\n        :type name: str or None\n        :param attrib: Optional node attributes to assign to the\n                       MoM.\n        :param num: Number of MoMs to create\n        :type num: int\n        :param delall: Whether to delete all nodes on the server.\n                       Defaults to True.\n        :type delall: bool\n        :param createnode: Whether to create the nodes and add them\n                           to the server. Defaults to True.\n        :type createnode: bool\n        :param conf_prefix: The prefix of the PBS conf file. Defaults\n                            to pbs.conf_m\n        :type conf_prefix: str\n        :param home_prefix: The prefix of the PBS_HOME directory.\n                            Defaults to pbs_m\n        :type home_prefix: str\n        :param momhosts: A list of hosts on which to deploy num\n                         MoMs.\n        :type momhosts: List\n        :param init_port: The initial port number from which to\n                          start assigning ``PBS_MOM_SERVICE_PORT``.\n                          Defaults to 15011.\n        :type init_port: int\n        :param step_port: The increment at which ports are\n                          allocated. Defaults to 2.\n        :type step_port: int\n        .. 
note:: Since PBS requires that\n                  PBS_MANAGER_SERVICE_PORT = PBS_MOM_SERVICE_PORT + 1,\n                  the step port must be greater than or equal to 2.\n        \"\"\"\n\n        if not self.isUp():\n            self.logger.error(\"An up and running PBS server on \" +\n                              self.hostname + \" is required\")\n            return False\n\n        if delall:\n            try:\n                rc = self.manager(MGR_CMD_DELETE, NODE, None, \"\")\n            except PbsManagerError as e:\n                rc = e.rc\n            if rc:\n                node_length = 0\n                try:\n                    node_length = len(self.status(NODE))\n                except PbsStatusError as err:\n                    if \"Server has no node list\" not in err.msg[0]:\n                        self.logger.error(\n                            \"Error while checking node length: \" + str(err))\n                        return False\n                if node_length > 0:\n                    self.logger.error(\"create_moms: Error deleting all nodes\")\n                    return False\n\n        pi = PBSInitServices()\n        if momhosts is None:\n            momhosts = [self.hostname]\n\n        if attrib is None:\n            attrib = {}\n\n        error = False\n        momnum = 0\n        for hostname in momhosts:\n            momnum += 1\n            _pconf = self.du.parse_pbs_config(hostname)\n            if 'PBS_HOME' in _pconf:\n                _hp = _pconf['PBS_HOME']\n                if _hp.endswith('/'):\n                    _hp = _hp[:-1]\n                _hp = os.path.dirname(_hp)\n            else:\n                _hp = '/var/spool'\n            _np_conf = _pconf\n            _np_conf['PBS_START_SERVER'] = '0'\n            _np_conf['PBS_START_SCHED'] = '0'\n            _np_conf['PBS_START_COMM'] = '0'\n            _np_conf['PBS_START_MOM'] = '1'\n            for i in range(0, num * step_port, step_port):\n                _np = 
os.path.join(_hp, home_prefix + str(i))\n                _n_pbsconf = os.path.join('/etc', conf_prefix + str(i))\n                _np_conf['PBS_HOME'] = _np\n                port = init_port + i\n                _np_conf['PBS_MOM_SERVICE_PORT'] = str(port)\n                _np_conf['PBS_MANAGER_SERVICE_PORT'] = str(port + 1)\n                self.du.set_pbs_config(hostname, fout=_n_pbsconf,\n                                       confs=_np_conf)\n                pi.initd(hostname, conf_file=_n_pbsconf, op='start')\n                m = MoM(self, hostname, pbsconf_file=_n_pbsconf)\n                if m.isUp():\n                    m.stop()\n                try:\n                    m.start()\n                except PbsServiceError:\n                    # The service failed to start\n                    self.logger.error(\"Service failed to start using port \" +\n                                      str(port) + \"...skipping\")\n                    self.du.rm(hostname, _n_pbsconf)\n                    continue\n                if createnode:\n                    attrib['Mom'] = hostname\n                    attrib['port'] = port\n                    if name is None:\n                        name = hostname.split('.')[0]\n                    if momnum == 1:\n                        _n = name + '-' + str(i)\n                    else:\n                        _n = name + str(momnum) + '-' + str(i)\n                    rc = self.manager(MGR_CMD_CREATE, NODE, attrib, id=_n)\n                    if rc != 0:\n                        self.logger.error(\"error creating node \" + _n)\n                        error = True\n        if error:\n            return False\n\n        return True\n\n    def create_hook(self, name, attrs):\n        \"\"\"\n        Helper function to create a hook by name.\n        :param name: The name of the hook to create\n        :type name: str\n        :param attrs: The attributes to create the hook with.\n        :type attrs: str\n        
:returns: True on success, False if the hook already\n                  exists\n        :raises: PbsManagerError\n        \"\"\"\n        hooks = self.status(HOOK)\n        if ((hooks is None or len(hooks) == 0) or\n                (name not in [x['id'] for x in hooks])):\n            self.manager(MGR_CMD_CREATE, HOOK, None, name)\n        else:\n            self.logger.error('hook named ' + name + ' exists')\n            return False\n        self.update_special_attr(HOOK, id=name)\n        self.manager(MGR_CMD_SET, HOOK, attrs, id=name)\n        return True\n\n    def delete_hook(self, name):\n        \"\"\"\n        Helper function to delete a hook by name.\n        :param name: The name of the hook to delete\n        :type name: str\n        :returns: True on success\n        :raises: PbsManagerError\n        \"\"\"\n        hooks = self.status(HOOK, level=logging.DEBUG)\n        for hook in hooks:\n            if hook['id'] == name:\n                self.logger.info(\"Removing hook:%s\" % name)\n                self.manager(MGR_CMD_DELETE, HOOK, id=name)\n        return True\n\n    def import_hook(self, name, body, level=logging.INFO):\n        \"\"\"\n        Helper function to import hook body into hook by name.\n        The hook must have been created prior to calling this\n        function.\n        :param name: The name of the hook to import body to\n        :type name: str\n        :param body: The body of the hook as a string.\n        :type body: str\n        :returns: True on success.\n        :raises: PbsManagerError\n        \"\"\"\n        # sync_mom_hookfiles_timeout is 15min by default\n        # Set it to a lower value to avoid a race condition during hook copy\n        srv_stat = self.status(SERVER, 'sync_mom_hookfiles_timeout')\n        try:\n            sync_val = srv_stat[0]['sync_mom_hookfiles_timeout']\n        except Exception:\n            self.logger.info(\"Setting sync_mom_hookfiles_timeout to 15s\")\n            
self.manager(MGR_CMD_SET, SERVER,\n                         {\"sync_mom_hookfiles_timeout\": 15})\n\n        fn = self.du.create_temp_file(body=body)\n\n        if not self._is_local:\n            tmpdir = self.du.get_tempdir(self.hostname)\n            rfile = os.path.join(tmpdir, os.path.basename(fn))\n            self.du.run_copy(self.hostname, src=fn, dest=rfile)\n        else:\n            rfile = fn\n\n        a = {'content-type': 'application/x-python',\n             'content-encoding': 'default',\n             'input-file': rfile}\n        self.manager(MGR_CMD_IMPORT, HOOK, a, name)\n\n        os.remove(rfile)\n        if not self._is_local:\n            self.du.rm(self.hostname, rfile)\n        self.logger.log(level, 'server ' + self.shortname +\n                        ': imported hook body\\n---\\n' +\n                        body + '---')\n        return True\n\n    def create_import_hook(self, name, attrs=None, body=None, overwrite=True,\n                           level=logging.INFO):\n        \"\"\"\n        Helper function to create a hook, import content into it,\n        set the event and enable it.\n        :param name: The name of the hook to create\n        :type name: str\n        :param attrs: The attributes to create the hook with.\n                      Event and Enabled are mandatory. 
No defaults.\n        :type attrs: str\n        :param body: The hook body as a string\n        :type body: str\n        :param overwrite: If True, if a hook of the same name\n                          already exists, bypass its creation.\n                          Defaults to True\n        :returns: True on success and False otherwise\n        \"\"\"\n        # Check for log messages 20 seconds earlier, to account for\n        # server and mom system time differences\n        t = time.time() - 20\n\n        if attrs is None or 'event' not in attrs:\n            self.logger.error('attrs must specify at least an event')\n            return False\n\n        hook_exists = False\n        hooks = self.status(HOOK)\n        for h in hooks:\n            if h['id'] == name:\n                hook_exists = True\n\n        if not hook_exists or not overwrite:\n            rv = self.create_hook(name, attrs)\n            if not rv:\n                return False\n        else:\n            rc = self.manager(MGR_CMD_SET, HOOK, attrs, id=name)\n            if rc != 0:\n                return False\n\n        # In 12.0, a MoM hook must be enabled and the event set prior to\n        # importing, otherwise the MoM does not get the hook content\n        ret = self.import_hook(name, body, level)\n\n        # In case of mom hooks, make sure that the hook related files\n        # are successfully copied to the MoM\n        events = attrs['event']\n        if not isinstance(events, list):\n            events = [events]\n        events = [hk for hk in events if 'exec' in hk]\n        msg = \"successfully sent hook file\"\n        for hook in events:\n            hook_py = name + '.PY'\n            hook_hk = name + '.HK'\n            pyfile = os.path.join(self.pbs_conf['PBS_HOME'],\n                                  \"server_priv\", \"hooks\", hook_py)\n            hfile = os.path.join(self.pbs_conf['PBS_HOME'],\n          
                       \"server_priv\", \"hooks\", hook_hk)\n            logmsg = hook_py + \";copy hook-related file request received\"\n            cmd = os.path.join(self.client_conf['PBS_EXEC'], 'bin',\n                               'pbsnodes') + ' -a' + ' -Fjson'\n            cmd_out = self.du.run_cmd(self.hostname, cmd, sudo=True)\n            if cmd_out['rc'] != 0:\n                return False\n            pbsnodes_json = json.loads('\\n'.join(cmd_out['out']))\n            for m in pbsnodes_json['nodes']:\n                if m in self.moms:\n                    try:\n                        self.log_match(\"%s %s to %s\" %\n                                       (msg, hfile, m), interval=1)\n                        self.log_match(\"%s %s to %s\" %\n                                       (msg, pyfile, m), interval=1)\n                        self.moms[m].log_match(logmsg, starttime=t)\n                    except PtlLogMatchError:\n                        return False\n        return ret\n\n    def import_hook_config(self, hook_name, hook_conf, hook_type,\n                           level=logging.INFO):\n        \"\"\"\n        Helper function to import hook config body into hook by name.\n        The hook must have been created prior to calling this\n        function.\n        :param hook_name: The name of the hook to import hook config\n        :type name: str\n        :param hook_conf: The body of the hook config as a dict.\n        :type hook_conf: dict\n        :param hook_type: The hook type \"site\" or \"pbshook\"\n        :type hook_type: str\n        :returns: True on success.\n        :raises: PbsManagerError\n        \"\"\"\n        if hook_type == \"site\":\n            hook_t = HOOK\n        else:\n            hook_t = PBS_HOOK\n\n        hook_config_data = json.dumps(hook_conf, indent=4)\n        fn = self.du.create_temp_file(body=hook_config_data)\n\n        if not self._is_local:\n            tmpdir = self.du.get_tempdir(self.hostname)\n      
      rfile = os.path.join(tmpdir, os.path.basename(fn))\n            rc = self.du.run_copy(self.hostname, src=fn, dest=rfile)\n            if rc != 0:\n                raise AssertionError(\"Failed to copy file %s\"\n                                     % (rfile))\n        else:\n            rfile = fn\n\n        a = {'content-type': 'application/x-config',\n             'content-encoding': 'default',\n             'input-file': rfile}\n\n        self.manager(MGR_CMD_IMPORT, hook_t, a, hook_name)\n\n        os.remove(rfile)\n        if not self._is_local:\n            self.du.rm(self.hostname, rfile)\n        self.logger.log(level, 'server ' + self.shortname +\n                        ': imported hook config\\n---\\n' +\n                        str(hook_config_data) + '\\n---\\n')\n        return True\n\n    def export_hook_config(self, hook_name, hook_type):\n        \"\"\"\n        Helper function to export hook config body.\n        The hook must have been created prior to calling this\n        function.\n        :param hook_name: The name of the hook to export config from\n        :type hook_name: str\n        :param hook_type: The hook type \"site\" or \"pbshook\"\n        :type hook_type: str\n        :returns: Dictionary on success\n        :raises: AssertionError on failure\n        \"\"\"\n        if hook_type == \"site\":\n            hook_t = \"hook\"\n        else:\n            hook_t = \"pbshook\"\n        cmd = [\"export\", hook_t, hook_name]\n        cmd += [\"application/x-config\", \"default\"]\n        if not self._is_local:\n            cmd = '\\'' + \" \".join(cmd) + '\\''\n        else:\n            cmd = \" \".join(cmd)\n        pcmd = [os.path.join(self.pbs_conf['PBS_EXEC'], 'bin', 'qmgr'),\n                '-c', cmd]\n        ret = self.du.run_cmd(self.hostname, pcmd, sudo=True)\n        if ret['rc'] == 0:\n            config_out = ''.join(ret['out'])\n            config_dict = json.loads(config_out)\n            return config_dict\n        else:\n            raise 
AssertionError(\"Failed to export hook config, %s\"\n                                 % (ret['err']))\n\n    def evaluate_formula(self, jobid=None, formula=None, full=True,\n                         include_running_jobs=False, exclude_subjobs=True):\n        \"\"\"\n        Evaluate the job sort formula\n        :param jobid: If set, evaluate the formula for the given\n                      jobid, if not set,formula is evaluated for\n                      all jobs in state Q\n        :type jobid: str or None\n        :param formula: If set use the given formula. If not set,\n                        the server's formula, if any, is used\n        :param full: If True, returns a dictionary of job\n                     identifiers as keys and the evaluated formula\n                     as values. Returns None if no formula is used.\n                     Each job id formula is returned as a tuple\n                     (s,e) where s is the formula expression\n                     associated to the job and e is the evaluated\n                     numeric value of that expression, for example,\n                     if job_sort_formula is ncpus + mem\n                     a job requesting 2 cpus and 100kb of memory\n                     would return ('2 + 100', 102). 
If False, if\n                     a jobid is specified, return the integer\n                     value of the evaluated formula.\n        :type full: bool\n        :param include_running_jobs: If True, reports formula\n                                     value of running jobs.\n                                     Defaults to False.\n        :type include_running_jobs: bool\n        :param exclude_subjobs: If True, only report formula of\n                                parent job array\n        :type exclude_subjobs: bool\n        \"\"\"\n        _f_builtins = ['queue_priority', 'job_priority', 'eligible_time',\n                       'fair_share_perc']\n        if formula is None:\n            d = self.status(SERVER, 'job_sort_formula')\n            if len(d) > 0 and 'job_sort_formula' in d[0]:\n                formula = d[0]['job_sort_formula']\n            else:\n                return None\n\n        template_formula = self.utils._make_template_formula(formula)\n        # to split up the formula into keywords, first convert all possible\n        # operators into spaces and split the string.\n        # TODO: The list of operators may need to be expanded\n        T = formula.maketrans('()%+*/-', ' ' * 7)\n        fres = formula.translate(T).split()\n        if jobid:\n            d = self.status(JOB, id=jobid, extend='t')\n        else:\n            d = self.status(JOB, extend='t')\n        ret = {}\n        for job in d:\n            if not include_running_jobs and job['job_state'] != 'Q':\n                continue\n            f_value = {}\n            # initialize the formula values to 0\n            for res in fres:\n                f_value[res] = 0\n            if 'queue_priority' in fres:\n                queue = self.status(JOB, 'queue', id=job['id'])[0]['queue']\n                d = self.status(QUEUE, 'Priority', id=queue)\n                if d and 'Priority' in d[0]:\n                    qprio = int(d[0]['Priority'])\n                    f_value['queue_priority'] = qprio\n                else:\n                    continue\n            if 'job_priority' in fres:\n                if 'Priority' in job:\n                    jprio = int(job['Priority'])\n                    f_value['job_priority'] = jprio\n                else:\n                    continue\n            if 'eligible_time' in fres:\n                if 'eligible_time' in job:\n                    f_value['eligible_time'] = self.utils.convert_duration(\n                        job['eligible_time'])\n            if 'fair_share_perc' in fres:\n                if self.schedulers[self.dflt_sched_name] is None:\n                    self.schedulers[self.dflt_sched_name] = Scheduler(\n                        server=self)\n\n                if 'fairshare_entity' in self.schedulers[\n                    self.dflt_sched_name\n                ].sched_config:\n                    entity = self.schedulers[\n                        self.dflt_sched_name\n                    ].sched_config['fairshare_entity']\n                else:\n                    self.logger.error(self.logprefix +\n                                      ' no fairshare entity in sched config')\n                    continue\n                if entity not in job:\n                    self.logger.error(self.logprefix +\n                                      ' job does not have property ' + entity)\n                    continue\n                try:\n                    fs_info = self.schedulers[\n                        self.dflt_sched_name\n                    ].fairshare.query_fairshare(\n                        name=job[entity])\n                    if fs_info is not None and 'TREEROOT' in fs_info.perc:\n                        f_value['fair_share_perc'] = \\\n                            (fs_info.perc['TREEROOT'] / 100)\n                except PbsFairshareError:\n                    f_value['fair_share_perc'] = 0\n\n            for job_res, 
val in job.items():\n                val = PbsAttribute.decode_value(val)\n                if job_res.startswith('Resource_List.'):\n                    job_res = job_res.replace('Resource_List.', '')\n                if job_res in fres and job_res not in _f_builtins:\n                    f_value[job_res] = val\n            tf = string.Template(template_formula)\n            tfstr = tf.safe_substitute(f_value)\n            if (jobid is not None or not exclude_subjobs or\n                    (exclude_subjobs and not self.utils.is_subjob(job['id']))):\n                ret[job['id']] = (tfstr, eval(tfstr))\n        if not full and jobid is not None and jobid in ret:\n            return ret[jobid][1]\n        return ret\n\n    def _parse_limits(self, container=None, dictlist=None, id=None,\n                      db_access=None):\n        \"\"\"\n        Helper function to parse limits syntax on a given\n        container.\n        :param container: The PBS object to query, one of ``QUEUE``\n                          or ``SERVER``. Metascheduling node group\n                          limits are not yet queryable\n        :type container: str or None\n        :param dictlist: A list of dictionaries from a batch\n                         status\n        :type dictlist: List\n        :param id: Optional id of the object to query\n        :param db_access: set to either file containing credentials\n                           to DB access or dictionary containing\n                           ``{'dbname':...,'user':...,'port':...}``\n        :type db_access: str or dictionary\n        \"\"\"\n        if container is None:\n            self.logger.error('parse_limits expects container to be set')\n            return {}\n\n        if dictlist is None:\n            d = self.status(container, db_access=db_access)\n        else:\n            d = dictlist\n\n        if not d:\n            return {}\n\n        limits = {}\n        for obj in d:\n            # filter the id here 
instead of during the stat call so that\n            # we can call a full stat once rather than one stat per object\n            if id is not None and obj['id'] != id:\n                continue\n            for k, v in obj.items():\n                if k.startswith('max_run'):\n                    v = v.split(',')\n                    for rval in v:\n                        rval = rval.strip(\"'\")\n                        limit_list = self.utils.parse_fgc_limit(k + '=' + rval)\n                        if limit_list is None:\n                            self.logger.error(\"Couldn't parse limit: \" +\n                                              k + str(rval))\n                            continue\n\n                        (lim_type, resource, etype, ename, value) = limit_list\n                        if (etype, ename) not in self.entities:\n                            entity = Entity(etype, ename)\n                            self.entities[(etype, ename)] = entity\n                        else:\n                            entity = self.entities[(etype, ename)]\n\n                        lim = Limit(lim_type, resource, entity, value,\n                                    container, obj['id'])\n\n                        if container in limits:\n                            limits[container].append(lim)\n                        else:\n                            limits[container] = [lim]\n\n                        entity.set_limit(lim)\n        return limits\n\n    def parse_server_limits(self, server=None, db_access=None):\n        \"\"\"\n        Parse all server limits\n        :param server: list of dictionaries of server data\n        :type server: List\n        :param db_access: set to either file containing credentials\n                           to DB access or dictionary containing\n                           ``{'dbname':...,'user':...,'port':...}``\n        :type db_access: str or dictionary\n        \"\"\"\n        return self._parse_limits(SERVER, server, 
db_access=db_access)\n\n    def parse_queue_limits(self, queues=None, id=None, db_access=None):\n        \"\"\"\n        Parse queue limits\n        :param queues: list of dictionaries of queue data\n        :type queues: List\n        :param id: The id of the queue to parse limit for. If None,\n                   all queue limits are parsed\n        :param db_access: set to either file containing credentials\n                           to DB access or dictionary containing\n                           ``{'dbname':...,'user':...,'port':...}``\n        :type db_access: str or dictionary\n        \"\"\"\n        return self._parse_limits(QUEUE, queues, id=id, db_access=db_access)\n\n    def parse_all_limits(self, server=None, queues=None, db_access=None):\n        \"\"\"\n        Parse all server and queue limits\n        :param server: list of dictionaries of server data\n        :type server: List\n        :param queues: list of dictionaries of queue data\n        :type queues: List\n        :param db_access: set to either file containing credentials\n                           to DB access or dictionary containing\n                           ``{'dbname':...,'user':...,'port':...}``\n        :type db_access: str or dictionary\n        \"\"\"\n        if hasattr(self, 'limits'):\n            del self.limits\n\n        slim = self.parse_server_limits(server, db_access=db_access)\n        qlim = self.parse_queue_limits(queues, id=None, db_access=db_access)\n        self.limits = dict(list(slim.items()) + list(qlim.items()))\n        del slim\n        del qlim\n        return self.limits\n\n    def limits_info(self, etype=None, ename=None, server=None, queues=None,\n                    jobs=None, db_access=None, over=False):\n        \"\"\"\n        Collect limit information for each entity on which a\n        ``server/queue`` limit is applied.\n        :param etype: entity type, one of u, g, p, o\n        :type etype: str or None\n        :param ename: entity name\n        
:type ename: str or None\n        :param server: optional list of dictionary representation\n                       of server object\n        :type server: List\n        :param queues: optional list of dictionary representation\n                       of queues object\n        :type queues: List\n        :param jobs: optional list of dictionary representation of\n                     jobs object\n        :type jobs: List\n        :param db_access: set to either file containing credentials\n                           to DB access or dictionary containing\n                           ``{'dbname':...,'user':...,'port':...}``\n        :type db_access: str or dictionary\n        :param over: If True, show only entities that are over their\n                     limit. Default is False.\n        :type over: bool\n        :returns: A list of dictionaries similar to that returned by\n                  a converted batch_status object, i.e., can be\n                  displayed using the Utils.show method\n        \"\"\"\n        def create_linfo(lim, entity_type, id, used):\n            \"\"\"\n            Create limit information\n            :param lim: Limit to apply\n            :param entity_type: Type of entity\n            \"\"\"\n            tmp = {}\n            tmp['id'] = entity_type + ':' + id\n            c = [PBS_OBJ_MAP[lim.container]]\n            if lim.container_id:\n                c += [':', lim.container_id]\n            tmp['container'] = \"\".join(c)\n            s = [str(lim.limit_type)]\n            if lim.resource:\n                s += ['.', lim.resource]\n            tmp['limit_type'] = \"\".join(s)\n            tmp['usage/limit'] = \"\".join([str(used), '/', str(lim.value)])\n            tmp['remainder'] = int(lim.value) - int(used)\n\n            return tmp\n\n        def calc_usage(jobs, attr, name=None, resource=None):\n            \"\"\"\n            Calculate the usage for the entity\n            :param attr: Job attribute\n            :param 
name: Entity name\n            :type name: str or None\n            :param resource: PBS resource\n            :type resource: str or None\n            :returns: The usage\n            \"\"\"\n            usage = {}\n            # initialize usage of the named entity\n            if name is not None and name not in ('PBS_GENERIC', 'PBS_ALL'):\n                usage[name] = 0\n            for j in jobs:\n                entity = j[attr]\n                if entity not in usage:\n                    if resource:\n                        usage[entity] = int(\n                            PbsAttribute.decode_value(\n                                j['Resource_List.' + resource]))\n                    else:\n                        usage[entity] = 1\n                else:\n                    if resource:\n                        usage[entity] += int(\n                            PbsAttribute.decode_value(\n                                j['Resource_List.' + resource]))\n                    else:\n                        usage[entity] += 1\n            return usage\n\n        self.parse_all_limits(server, queues, db_access)\n        entities_p = self.entities.values()\n\n        linfo = []\n        cache = {}\n\n        if jobs is None:\n            jobs = self.status(JOB)\n\n        for entity in sorted(entities_p, key=lambda e: e.name):\n            for lim in entity.limits:\n                _t = entity.type\n                # skip non-matching entity types. 
We can't skip the entity\n                # name due to proper handling of the PBS_GENERIC limits;\n                # we also can't skip overall limits\n                if (_t != 'o') and (etype is not None and etype != _t):\n                    continue\n\n                _n = entity.name\n\n                a = {}\n                if lim.container == QUEUE and lim.container_id is not None:\n                    a['queue'] = (EQ, lim.container_id)\n                if lim.resource:\n                    resource = 'Resource_List.' + lim.resource\n                    a[resource] = (GT, 0)\n                a['job_state'] = (EQ, 'R')\n                a['substate'] = (EQ, 42)\n                if etype == 'u' and ename is not None:\n                    a['euser'] = (EQ, ename)\n                else:\n                    a['euser'] = (SET, '')\n                if etype == 'g' and ename is not None:\n                    a['egroup'] = (EQ, ename)\n                else:\n                    a['egroup'] = (SET, '')\n                if etype == 'p' and ename is not None:\n                    a['project'] = (EQ, ename)\n                else:\n                    a['project'] = (SET, '')\n\n                # optimization: cache filtered results\n                d = None\n                for v in cache.keys():\n                    if a == eval(v):\n                        d = cache[v]\n                        break\n                if d is None:\n                    d = self.filter(JOB, a, bslist=jobs, attrop=PTL_AND,\n                                    idonly=False, db_access=db_access)\n                    cache[str(a)] = d\n                if not d or 'job_state=R' not in d:\n                    # in the absence of jobs, display limits defined with usage\n                    # of 0\n                    if ename is not None:\n                        _u = {ename: 0}\n                    else:\n                        _u = {_n: 0}\n                else:\n                    if _t in ('u', 'o'):\n                        _u = calc_usage(\n                            d['job_state=R'], 'euser', _n, lim.resource)\n                        # an overall limit applies across all running jobs\n                        if _t == 'o':\n                            all_used = sum(_u.values())\n                            for k in _u.keys():\n                                _u[k] = all_used\n                    elif _t == 'g':\n                        _u = calc_usage(\n                            d['job_state=R'], 'egroup', _n, lim.resource)\n                    elif _t == 'p':\n                        _u = calc_usage(\n                            d['job_state=R'], 'project', _n, lim.resource)\n\n                for k, used in _u.items():\n                    if not over or (int(used) > int(lim.value)):\n                        if ename is not None and k != ename:\n                            continue\n                        if _n in ('PBS_GENERIC', 'PBS_ALL'):\n                            if k not in ('PBS_GENERIC', 'PBS_ALL'):\n                                k += '/' + _n\n                        elif _n != k:\n                            continue\n                        tmp_linfo = create_linfo(lim, _t, k, used)\n                        linfo.append(tmp_linfo)\n                del a\n        del cache\n        return linfo\n\n    def __insert_jobs_in_db(self, jobs, hostname=None):\n        \"\"\"\n        An experimental interface that converts jobs from file\n        into entries in the PBS database that can be recovered\n        upon server restart if all other ``objects``, ``queues``,\n        ``resources``, etc... are already defined.\n        The interface to PBS used in this method is incomplete\n        and will most likely cause serious issues. 
Use only for\n        development purposes\n        \"\"\"\n\n        if not jobs:\n            return []\n\n        if hostname is None:\n            hostname = socket.gethostname()\n\n        # a very crude, and not quite maintainable way to get the flag value\n        # of an attribute. This is one of the reasons why this conversion\n        # of jobs is highly experimental\n        flag_map = {'ctime': 9, 'qtime': 9, 'hop_count': 9, 'queue_rank': 9,\n                    'queue_type': 9, 'etime': 9, 'job_kill_delay': 9,\n                    'run_version': 9, 'job_state': 9, 'exec_host': 9,\n                    'exec_host2': 9, 'exec_vnode': 9, 'mtime': 9, 'stime': 9,\n                    'substate': 9, 'hashname': 9, 'comment': 9, 'run_count': 9,\n                    'schedselect': 13}\n\n        state_map = {'Q': 1, 'H': 2, 'W': 3, 'R': 4, 'E': 5, 'X': 6, 'B': 7}\n\n        job_attr_stmt = (\"INSERT INTO pbs.job_attr (ji_jobid, attr_name, \"\n                         \"attr_resource, attr_value, attr_flags)\")\n\n        job_stmt = (\"INSERT INTO pbs.job (ji_jobid, ji_sv_name, ji_state, \"\n                    \"ji_substate, ji_svrflags, ji_stime, \"\n                    \"ji_queue, ji_destin, ji_un_type, \"\n                    \"ji_exitstat, ji_quetime, ji_rteretry, \"\n                    \"ji_fromsock, ji_fromaddr, ji_jid, \"\n                    \"ji_credtype, ji_savetm, ji_creattm)\")\n\n        all_stmts = []\n\n        for job in jobs:\n\n            keys = []\n            values = []\n            flags = []\n\n            for k, v in job.items():\n                if k in ('id', 'Mail_Points', 'Mail_Users'):\n                    continue\n                keys.append(k)\n                if not v.isdigit():\n                    values.append(\"'\" + v + \"'\")\n                else:\n                    values.append(v)\n                if k in flag_map:\n                    flags.append(flag_map[k])\n                elif k.startswith('Resource_List'):\n                    flags.append(15)\n                else:\n                    flags.append(11)\n\n            jobid = job['id'].split('.')[0] + '.' + hostname\n\n            for i in range(len(keys)):\n                stmt = job_attr_stmt\n                stmt += \" VALUES('\" + jobid + \"', \"\n                if '.' in keys[i]:\n                    k, v = keys[i].split('.')\n                    stmt += \"'\" + k + \"', '\" + v + \"'\" + \", \"\n                else:\n                    stmt += \"'\" + keys[i] + \"', ''\" + \", \"\n                stmt += values[i] + \",\" + str(flags[i])\n                stmt += \");\"\n                self.logger.debug(stmt)\n                all_stmts.append(stmt)\n\n            js = job['job_state']\n            svrflags = 1\n            state = 1\n            if js in state_map:\n                state = state_map[js]\n                if state == 4:\n                    # Other states svrflags aren't handled and will\n                    # cause issues, another reason this is highly experimental\n                    svrflags = 12289\n\n            tm = time.strftime(\"%Y-%m-%d %H:%M:%S\", time.localtime())\n            stmt = job_stmt\n            stmt += \" VALUES('\" + jobid + \"', 1, \"\n            stmt += str(state) + \", \" + job['substate']\n            stmt += \", \" + str(svrflags)\n            stmt += \", 0, 0, 0\"\n            if 'stime' in job:\n                self.logger.debug(job['stime'])\n                st = time.strptime(job['stime'], \"%a %b %d %H:%M:%S %Y\")\n                stmt += \", \" + str(time.mktime(st))\n            else:\n                stmt += \", 0\"\n            stmt += \", 0\"\n            stmt += \", '\" + job['queue'] + \"'\"\n            if 'exec_host2' in job:\n                stmt += \", \" + job['exec_host2']\n            else:\n                stmt += \", ''\"\n            stmt += \", 0, 0, 0, 0, 0, 0, 0, 0, '', '', 0, 0\"\n            stmt += \", '\" + tm + \"', '\" + tm + \"');\"\n            self.logger.debug(stmt)\n\n            all_stmts.append(stmt)\n\n        return all_stmts\n\n    def clusterize(self, conf_file=None, hosts=None, acct_logs=None,\n                   import_jobs=False, db_creds_file=None):\n        \"\"\"\n        Mimic a ``pbs_snapshot`` snapshot onto a set of hosts running\n        a PBS ``server``, ``scheduler``, and ``MoM``.\n        This method clones the following information from the snap:\n        ``Server attributes``\n        ``Server resourcedef``\n        ``Hooks``\n        ``Scheduler configuration``\n        ``Scheduler resource_group``\n        ``Scheduler holiday file``\n        ``Per Queue attributes``\n        Nodes are copied as a vnode definition file inserted into\n        each host's MoM instance.\n        Currently no support for cloning the server 'sched' object,\n        nor to copy nodes to multi-mom instances.\n        Jobs are copied over only if import_jobs is True; see below\n        for details\n        :param conf_file: Configuration file for the MoM instance\n        :param hosts: List of hosts on which to clone the snap\n                      snapshot\n        :type hosts: List\n        :param acct_logs: path to accounting logs\n        :type acct_logs: str\n        :param import_jobs: [Experimental] if True, jobs from the\n                             pbs_snapshot are imported into the host's\n                             database. 
There are several caveats to\n                             this option:\n                             The scripts are not imported.\n                             The users and groups are not created on\n                             the local system. There are no actual\n                             processes created on the MoM for each\n                             job, so operations on the job such as\n                             signals or delete will fail (delete -W\n                             force will still work).\n        :type import_jobs: bool\n        :param db_creds_file: Path to file containing credentials\n                              to access the DB\n        :type db_creds_file: str or None\n        \"\"\"\n        if not self.has_snap:\n            return\n        if hosts is None:\n            return\n\n        # Create users & groups (need to associate users to groups)\n        if acct_logs is not None:\n            self.logger.info(\"Parsing accounting logs to find \"\n                             \"users & groups to create\")\n            groups = set()\n            users = {}\n            for name in os.listdir(acct_logs):\n                fpath = os.path.join(acct_logs, name)\n                with open(fpath, \"r\") as fd:\n                    for line in fd:\n                        rec_list = line.split(\";\", 3)\n                        if len(rec_list) < 4 or rec_list[1] != \"E\":\n                            continue\n                        try:\n                            uname = rec_list[3].split(\n                                \"user=\")[1].split()[0]\n                            if uname not in users:\n                                users[uname] = set()\n                            gname = rec_list[3].split(\n                                \"group=\")[1].split()[0]\n                            users[uname].add(gname)\n                            groups.add(gname)\n                        except IndexError:\n                            continue\n            # Create groups first\n            for grp in groups:\n                try:\n                    self.du.groupadd(name=grp)\n                except PtlUtilError as e:\n                    if \"already exists\" not in e.msg[0]:\n                        raise\n            # Now create users and add them to their associated groups\n            for user, u_grps in users.items():\n                try:\n                    self.du.useradd(name=user, groups=list(u_grps))\n                except PtlUtilError as e:\n                    if \"already exists\" not in e.msg[0]:\n                        raise\n\n        for h in hosts:\n            svr = Server(h)\n            sched = Scheduler(server=svr, snap=self.snap,\n                              snapmap=self.snapmap)\n            try:\n                svr.manager(MGR_CMD_DELETE, NODE, None, id=\"\")\n            except Exception:\n                pass\n            svr.revert_to_defaults(delqueues=True, delhooks=True)\n            local = svr.pbs_conf['PBS_HOME']\n\n            snap_rdef = os.path.join(self.snap, 'server_priv', 'resourcedef')\n            snap_sc = os.path.join(self.snap, 'sched_priv', 'sched_config')\n            snap_rg = os.path.join(self.snap, 'sched_priv', 'resource_group')\n            snap_hldy = os.path.join(self.snap, 'sched_priv', 'holidays')\n            nodes = os.path.join(self.snap, 'node', 'pbsnodes_va.out')\n            snap_hooks = os.path.join(self.snap, 'hook',\n                                      'qmgr_ph_default.out')\n            snap_ps = os.path.join(self.snap, 'server', 'qmgr_ps.out')\n            snap_psched = os.path.join(self.snap, 'scheduler',\n                                       'qmgr_psched.out')\n            snap_pq = os.path.join(self.snap, 'server', 'qmgr_pq.out')\n\n            local_rdef = os.path.join(local, 'server_priv', 'resourcedef')\n            local_sc = os.path.join(local, 'sched_priv', 'sched_config')\n            local_rg = 
os.path.join(local, 'sched_priv', 'resource_group')\n            local_hldy = os.path.join(local, 'sched_priv', 'holidays')\n\n            _fcopy = [(snap_rdef, local_rdef), (snap_sc, local_sc),\n                      (snap_rg, local_rg), (snap_hldy, local_hldy)]\n\n            # Restart since resourcedef may have changed\n            svr.restart()\n\n            if os.path.isfile(snap_ps):\n                with open(snap_ps) as tmp_ps:\n                    cmd = [os.path.join(svr.pbs_conf['PBS_EXEC'], 'bin',\n                                        'qmgr')]\n                    self.du.run_cmd(h, cmd, stdin=tmp_ps, sudo=True,\n                                    logerr=False)\n            else:\n                self.logger.error(\"server information not found in snapshot\")\n\n            # Unset any site-sensitive attributes\n            for a in ['pbs_license_info', 'mail_from', 'acl_hosts']:\n                try:\n                    svr.manager(MGR_CMD_UNSET, SERVER, a, sudo=True)\n                except Exception:\n                    pass\n\n            for (d, l) in _fcopy:\n                if os.path.isfile(d):\n                    self.logger.info('copying ' + d + ' to ' + l)\n                    self.du.run_copy(h, src=d, dest=l, sudo=True)\n\n            if os.path.isfile(snap_pq):\n                with open(snap_pq) as tmp_pq:\n                    cmd = [os.path.join(svr.pbs_conf['PBS_EXEC'], 'bin',\n                                        'qmgr')]\n                    self.du.run_cmd(h, cmd, stdin=tmp_pq, sudo=True,\n                                    logerr=False)\n            else:\n                self.logger.error(\"queue information not found in snapshot\")\n\n            if os.path.isfile(snap_psched):\n                with open(snap_psched) as tmp_psched:\n                    cmd = [os.path.join(svr.pbs_conf['PBS_EXEC'], 'bin',\n                                        'qmgr')]\n                    self.du.run_cmd(h, cmd, stdin=tmp_psched, 
sudo=True,\n                                    logerr=False)\n            else:\n                self.logger.error(\"sched information not found in snapshot\")\n\n            if os.path.isfile(nodes):\n                with open(nodes) as f:\n                    lines = f.readlines()\n                dl = self.utils.convert_to_dictlist(lines)\n                vdef = self.utils.dictlist_to_vnodedef(dl)\n                if vdef:\n                    try:\n                        svr.manager(MGR_CMD_DELETE, NODE, None, \"\")\n                    except Exception:\n                        pass\n                    MoM(self, h, pbsconf_file=conf_file).insert_vnode_def(\n                        vdef)\n                    svr.restart()\n                    svr.manager(MGR_CMD_CREATE, NODE, id=svr.shortname)\n                # check if any node is associated to a queue.\n                # This is needed because the queues 'hasnodes' attribute\n                # does not get set through vnode def update and must be set\n                # via qmgr. 
It only needs to be set once, not for each node\n                qtoset = {}\n                for n in dl:\n                    if 'queue' in n and n['queue'] not in qtoset:\n                        qtoset[n['queue']] = n['id']\n\n                # before setting queue on nodes make sure that the vnode\n                # def is all set\n                svr.expect(NODE, {'state=free': (GE, len(dl))}, interval=3)\n                for k, v in qtoset.items():\n                    svr.manager(MGR_CMD_SET, NODE, {'queue': k}, id=v)\n            else:\n                self.logger.error(\"nodes information not found in snapshot\")\n\n            # populate hooks\n            if os.path.isfile(snap_hooks):\n                hooks = svr.status(HOOK, level=logging.DEBUG)\n                hooks = [hk['id'] for hk in hooks]\n                if len(hooks) > 0:\n                    svr.manager(MGR_CMD_DELETE, HOOK, id=hooks)\n                with open(snap_hooks) as tmp_hook:\n                    cmd = [os.path.join(svr.pbs_conf['PBS_EXEC'], 'bin',\n                                        'qmgr')]\n                    self.du.run_cmd(h, cmd, stdin=tmp_hook, sudo=True)\n            else:\n                self.logger.error(\"hooks information not found in snapshot\")\n\n            # import jobs\n            if import_jobs:\n                jobs = self.status(JOB)\n                sql_stmt = self.__insert_jobs_in_db(jobs, h)\n                print(\"\\n\".join(sql_stmt))\n                if db_creds_file is not None:\n                    pass\n"
  },
  {
    "path": "test/fw/ptl/lib/ptl_service.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport ast\nimport base64\nimport copy\nimport datetime\nimport json\nimport logging\nimport os\nimport re\nimport socket\nimport string\nimport sys\nimport time\nimport traceback\nimport platform\n\nfrom ptl.utils.pbs_dshutils import DshUtils\nfrom ptl.utils.pbs_procutils import ProcUtils\nfrom ptl.utils.pbs_testusers import (ROOT_USER, TEST_USER, PbsUser,\n                                     DAEMON_SERVICE_USER)\nfrom ptl.lib.ptl_error import (PbsInitServicesError, PbsServiceError,\n                               PtlLogMatchError)\nfrom ptl.lib.ptl_object import PBSObject\n\nfrom ptl.lib.ptl_constants import (SERVER, VNODE, QUEUE, JOB,\n                                   RESV, SCHED, HOOK)\n\n\nclass PBSInitServices(object):\n    \"\"\"\n    PBS initialization services\n\n    :param hostname: Machine hostname\n    :type hostname: str or None\n    :param conf: PBS configuration file\n    :type conf: str or None\n    \"\"\"\n\n    def __init__(self, hostname=None, conf=None):\n        self.logger = logging.getLogger(__name__)\n        self.hostname = hostname\n        if self.hostname is None:\n            self.hostname = socket.gethostname()\n        self.dflt_conf_file = os.environ.get('PBS_CONF_FILE', '/etc/pbs.conf')\n        self.conf_file = conf\n        self.du = DshUtils()\n        self.is_linux = sys.platform.startswith('linux')\n\n    def 
initd(self, hostname=None, op='status', conf_file=None,\n              init_script=None, daemon='all'):\n        \"\"\"\n        Run the init script for a given operation\n\n        :param hostname: hostname on which to execute the init script\n        :type hostname: str or None\n        :param op: one of status, start, stop, restart\n        :type op: str\n        :param conf_file: optional path to a configuration file\n        :type conf_file: str or None\n        :param init_script: optional path to a PBS init script\n        :type init_script: str or None\n        :param daemon: name of daemon to operate on. one of server, mom,\n                       sched, comm or all\n        :type daemon: str\n        \"\"\"\n        if hostname is None:\n            hostname = self.hostname\n        if conf_file is None:\n            conf_file = self.conf_file\n        return self._unix_initd(hostname, op, conf_file, init_script, daemon)\n\n    def restart(self, hostname=None, init_script=None):\n        \"\"\"\n        Run the init script for a restart operation\n\n        :param hostname: hostname on which to execute the init script\n        :type hostname: str or None\n        :param init_script: optional path to a PBS init script\n        :type init_script: str or None\n        \"\"\"\n        return self.initd(hostname, op='restart', init_script=init_script)\n\n    def restart_server(self, hostname=None, init_script=None):\n        \"\"\"\n        Run the init script for a restart server\n\n        :param hostname: hostname on which to restart server\n        :type hostname: str or None\n        :param init_script: optional path to a PBS init script\n        :type init_script: str or None\n        \"\"\"\n        return self.initd(hostname, op='restart', init_script=init_script,\n                          daemon='server')\n\n    def restart_mom(self, hostname=None, init_script=None):\n        \"\"\"\n        Run the init script for a restart mom\n\n        :param 
hostname: hostname on which to restart mom\n        :type hostname: str or None\n        :param init_script: optional path to a PBS init script\n        :type init_script: str or None\n        \"\"\"\n        return self.initd(hostname, op='restart', init_script=init_script,\n                          daemon='mom')\n\n    def restart_sched(self, hostname=None, init_script=None):\n        \"\"\"\n        Run the init script for a restart sched\n\n        :param hostname: hostname on which to restart sched\n        :type hostname: str or None\n        :param init_script: optional path to a PBS init script\n        :type init_script: str or None\n        \"\"\"\n        return self.initd(hostname, op='restart', init_script=init_script,\n                          daemon='sched')\n\n    def restart_comm(self, hostname=None, init_script=None):\n        \"\"\"\n        Run the init script for a restart comm\n\n        :param hostname: hostname on which to restart comm\n        :type hostname: str or None\n        :param init_script: optional path to a PBS init script\n        :type init_script: str or None\n        \"\"\"\n        return self.initd(hostname, op='restart', init_script=init_script,\n                          daemon='comm')\n\n    def start(self, hostname=None, init_script=None):\n        \"\"\"\n        Run the init script for a start operation\n\n        :param hostname: hostname on which to execute the init script\n        :type hostname: str or None\n        :param init_script: optional path to a PBS init script\n        :type init_script: str or None\n        \"\"\"\n        return self.initd(hostname, op='start', init_script=init_script)\n\n    def start_server(self, hostname=None, init_script=None):\n        \"\"\"\n        Run the init script for a start server\n\n        :param hostname: hostname on which to start server\n        :type hostname: str or None\n        :param init_script: optional path to a PBS init script\n        :type init_script: 
str or None\n        \"\"\"\n        return self.initd(hostname, op='start', init_script=init_script,\n                          daemon='server')\n\n    def start_mom(self, hostname=None, init_script=None):\n        \"\"\"\n        Run the init script for a start mom\n\n        :param hostname: hostname on which to start mom\n        :type hostname: str or None\n        :param init_script: optional path to a PBS init script\n        :type init_script: str or None\n        \"\"\"\n        return self.initd(hostname, op='start', init_script=init_script,\n                          daemon='mom')\n\n    def start_sched(self, hostname=None, init_script=None):\n        \"\"\"\n        Run the init script for a start sched\n\n        :param hostname: hostname on which to start sched\n        :type hostname: str or None\n        :param init_script: optional path to a PBS init script\n        :type init_script: str or None\n        \"\"\"\n        return self.initd(hostname, op='start', init_script=init_script,\n                          daemon='sched')\n\n    def start_comm(self, hostname=None, init_script=None):\n        \"\"\"\n        Run the init script for a start comm\n\n        :param hostname: hostname on which to start comm\n        :type hostname: str or None\n        :param init_script: optional path to a PBS init script\n        :type init_script: str or None\n        \"\"\"\n        return self.initd(hostname, op='start', init_script=init_script,\n                          daemon='comm')\n\n    def stop(self, hostname=None, init_script=None):\n        \"\"\"\n        Run the init script for a stop operation\n\n        :param hostname: hostname on which to execute the init script\n        :type hostname: str or None\n        :param init_script: optional path to a PBS init script\n        :type init_script: str or None\n        \"\"\"\n        return self.initd(hostname, op='stop', init_script=init_script)\n\n    def stop_server(self, hostname=None, 
init_script=None):\n        \"\"\"\n        Run the init script for a stop server\n\n        :param hostname: hostname on which to stop server\n        :type hostname: str or None\n        :param init_script: optional path to a PBS init script\n        :type init_script: str or None\n        \"\"\"\n        return self.initd(hostname, op='stop', init_script=init_script,\n                          daemon='server')\n\n    def stop_mom(self, hostname=None, init_script=None):\n        \"\"\"\n        Run the init script for a stop mom\n\n        :param hostname: hostname on which to stop mom\n        :type hostname: str or None\n        :param init_script: optional path to a PBS init script\n        :type init_script: str or None\n        \"\"\"\n        return self.initd(hostname, op='stop', init_script=init_script,\n                          daemon='mom')\n\n    def stop_sched(self, hostname=None, init_script=None):\n        \"\"\"\n        Run the init script for a stop sched\n\n        :param hostname: hostname on which to stop sched\n        :type hostname: str or None\n        :param init_script: optional path to a PBS init script\n        :type init_script: str or None\n        \"\"\"\n        return self.initd(hostname, op='stop', init_script=init_script,\n                          daemon='sched')\n\n    def stop_comm(self, hostname=None, init_script=None):\n        \"\"\"\n        Run the init script for a stop comm\n\n        :param hostname: hostname on which to stop comm\n        :type hostname: str or None\n        :param init_script: optional path to a PBS init script\n        :type init_script: str or None\n        \"\"\"\n        return self.initd(hostname, op='stop', init_script=init_script,\n                          daemon='comm')\n\n    def status(self, hostname=None, init_script=None):\n        \"\"\"\n        Run the init script for a status operation\n\n        :param hostname: hostname on which to execute the init script\n        :type hostname: 
str or None\n        :param init_script: optional path to a PBS init script\n        :type init_script: str or None\n        \"\"\"\n        return self.initd(hostname, op='status', init_script=init_script)\n\n    def status_server(self, hostname=None, init_script=None):\n        \"\"\"\n        Run the init script for a status server\n\n        :param hostname: hostname on which to status server\n        :type hostname: str or None\n        :param init_script: optional path to a PBS init script\n        :type init_script: str or None\n        \"\"\"\n        return self.initd(hostname, op='status', init_script=init_script,\n                          daemon='server')\n\n    def status_mom(self, hostname=None, init_script=None):\n        \"\"\"\n        Run the init script for a status mom\n\n        :param hostname: hostname on which to status mom\n        :type hostname: str or None\n        :param init_script: optional path to a PBS init script\n        :type init_script: str or None\n        \"\"\"\n        return self.initd(hostname, op='status', init_script=init_script,\n                          daemon='mom')\n\n    def status_sched(self, hostname=None, init_script=None):\n        \"\"\"\n        Run the init script for a status sched\n\n        :param hostname: hostname on which to status sched\n        :type hostname: str or None\n        :param init_script: optional path to a PBS init script\n        :type init_script: str or None\n        \"\"\"\n        return self.initd(hostname, op='status', init_script=init_script,\n                          daemon='sched')\n\n    def status_comm(self, hostname=None, init_script=None):\n        \"\"\"\n        Run the init script for a status comm\n\n        :param hostname: hostname on which to status comm\n        :type hostname: str or None\n        :param init_script: optional path to a PBS init script\n        :type init_script: str or None\n        \"\"\"\n        return self.initd(hostname, op='status', 
init_script=init_script,\n                          daemon='comm')\n\n    def _unix_initd(self, hostname, op, conf_file, init_script, daemon):\n        \"\"\"\n        Helper function for initd ``(*nix version)``\n\n        :param hostname: hostname on which init script should run\n        :type hostname: str\n        :param op: Operation on daemons - start, stop, restart or status\n        :type op: str\n        :param conf_file: Optional path to the pbs configuration file\n        :type conf_file: str or None\n        :param init_script: optional path to a PBS init script\n        :type init_script: str or None\n        :param daemon: name of daemon to operate on. one of server, mom,\n                       sched, comm or all\n        :type daemon: str\n        \"\"\"\n        init_cmd = copy.copy(self.du.sudo_cmd)\n        if daemon is not None and daemon != 'all':\n            conf = self.du.parse_pbs_config(hostname, conf_file)\n            dconf = {\n                'PBS_START_SERVER': 0,\n                'PBS_START_MOM': 0,\n                'PBS_START_SCHED': 0,\n                'PBS_START_COMM': 0\n            }\n            if daemon == 'server' and conf.get('PBS_START_SERVER', 0) != 0:\n                dconf['PBS_START_SERVER'] = 1\n            elif daemon == 'mom' and conf.get('PBS_START_MOM', 0) != 0:\n                dconf['PBS_START_MOM'] = 1\n            elif daemon == 'sched' and conf.get('PBS_START_SCHED', 0) != 0:\n                dconf['PBS_START_SCHED'] = 1\n            elif daemon == 'comm' and conf.get('PBS_START_COMM', 0) != 0:\n                dconf['PBS_START_COMM'] = 1\n            for k, v in dconf.items():\n                init_cmd += [\"%s=%s\" % (k, str(v))]\n            _as = True\n        else:\n            fn = None\n            if (conf_file is not None) and (conf_file != self.dflt_conf_file):\n                init_cmd += ['PBS_CONF_FILE=' + conf_file]\n                _as = True\n            else:\n                _as = False\n    
        conf = self.du.parse_pbs_config(hostname, conf_file)\n        if (init_script is None) or (not init_script.startswith('/')):\n            if 'PBS_EXEC' not in conf:\n                msg = 'Missing PBS_EXEC setting in pbs config'\n                raise PbsInitServicesError(rc=1, rv=False, msg=msg)\n            if init_script is None:\n                init_script = os.path.join(conf['PBS_EXEC'], 'libexec',\n                                           'pbs_init.d')\n            else:\n                init_script = os.path.join(conf['PBS_EXEC'], 'etc',\n                                           init_script)\n            if not self.du.isfile(hostname, path=init_script, sudo=True):\n                # Could be Type 3 installation where we will not have\n                # PBS_EXEC/libexec/pbs_init.d\n                return []\n        init_cmd += [init_script, op]\n        msg = 'running init script to ' + op + ' pbs'\n        if daemon is not None and daemon != 'all':\n            msg += ' ' + daemon\n        msg += ' on ' + hostname\n        if conf_file is not None:\n            msg += ' using ' + conf_file\n        msg += ' init_cmd=%s' % (str(init_cmd))\n        self.logger.info(msg)\n        ret = self.du.run_cmd(hostname, init_cmd, as_script=_as,\n                              logerr=False)\n        if ret['rc'] != 0:\n            raise PbsInitServicesError(rc=ret['rc'], rv=False,\n                                       msg='\\n'.join(ret['err']))\n        else:\n            return ret\n\n    def switch_version(self, hostname=None, version=None):\n        \"\"\"\n        Switch to another version of PBS installed on the system\n\n        :param hostname: The hostname to operate on\n        :type hostname: str or None\n        :param version: version to switch\n        \"\"\"\n        pbs_conf = self.du.parse_pbs_config(hostname)\n        if 'PBS_EXEC' in pbs_conf:\n            dn = os.path.dirname(pbs_conf['PBS_EXEC'])\n            newver = os.path.join(dn, 
version)\n            ret = self.du.isdir(hostname, path=newver)\n            if not ret:\n                msg = 'no version ' + version + ' on host ' + hostname\n                raise PbsInitServicesError(rc=0, rv=False, msg=msg)\n            self.stop(hostname)\n            dflt = os.path.join(dn, 'default')\n            ret = self.du.isfile(hostname, path=dflt)\n            if ret:\n                self.logger.info('removing symbolic link ' + dflt)\n                self.du.rm(hostname, dflt, sudo=True, logerr=False)\n                self.du.set_pbs_config(hostname, confs={'PBS_EXEC': dflt})\n            else:\n                self.du.set_pbs_config(hostname, confs={'PBS_EXEC': newver})\n\n            self.logger.info('linking ' + newver + ' to ' + dflt)\n            self.du.run_cmd(hostname, ['ln', '-s', newver, dflt],\n                            sudo=True, logerr=False)\n            self.start(hostname)\n\n\nclass PBSService(PBSObject):\n\n    \"\"\"\n    Generic PBS service object to hold properties of PBS daemons\n\n    :param name: The name associated to the object\n    :type name: str or None\n    :param attrs: Dictionary of attributes to set on object\n    :type attrs: Dictionary\n    :param defaults: Dictionary of default attributes. 
Setting\n                     this will override any other object's default\n    :type defaults: Dictionary\n    :param pbsconf_file: Optional path to the pbs configuration\n                         file\n    :type pbsconf_file: str or None\n    :param snapmap: A dictionary of PBS objects (node,server,etc)\n                    to mapped files from PBS snapshot directory\n    :type snapmap: Dictionary\n    :param snap: path to PBS snap directory\n                 (This will override snapmap)\n    :type snap: str or None\n    :param pbs_conf: Parsed pbs.conf in dictionary format\n    :type pbs_conf: Dictionary or None\n    :param platform: PBS service's platform\n    :type platform: str or None\n    \"\"\"\n    du = DshUtils()\n    pu = ProcUtils()\n\n    def __init__(self, name=None, attrs=None, defaults=None, pbsconf_file=None,\n                 snapmap=None, snap=None, pbs_conf=None, platform=None):\n        if attrs is None:\n            attrs = {}\n        if defaults is None:\n            defaults = {}\n        if snapmap is None:\n            snapmap = {}\n        if name is None:\n            self.hostname = socket.gethostname()\n        else:\n            self.hostname = name\n        if snap:\n            self.snapmap = self._load_from_snap(snap)\n            self.has_snap = True\n            self.snap = snap\n        elif len(snapmap) > 0:\n            self.snapmap = snapmap\n            self.snap = None\n            self.has_snap = True\n        else:\n            self.snapmap = {}\n            self.snap = None\n            self.has_snap = False\n        if not self.has_snap:\n            try:\n                self.fqdn = socket.gethostbyaddr(self.hostname)[0]\n                if self.hostname != self.fqdn:\n                    self.logger.info('FQDN name ' + self.fqdn + ' differs '\n                                     'from name provided ' + self.hostname)\n                    self.hostname = self.fqdn\n            except Exception:\n                
pass\n        else:\n            self.fqdn = self.hostname\n\n        self.shortname = self.hostname.split('.')[0]\n        if platform is None:\n            self.platform = self.du.get_platform()\n        else:\n            self.platform = platform\n\n        self.logutils = None\n        self.logfile = None\n        self.acctlogfile = None\n        self.pbs_conf = {}\n        self.pbs_env = {}\n        self._is_local = True\n        self.launcher = None\n        self.dyn_created_files = []\n        self.saved_config = {}\n\n        PBSObject.__init__(self, name, attrs, defaults)\n\n        if not self.has_snap:\n            if not self.du.is_localhost(self.hostname):\n                self._is_local = False\n\n        if pbsconf_file is None and not self.has_snap:\n            self.pbs_conf_file = self.du.get_pbs_conf_file(name)\n        else:\n            self.pbs_conf_file = pbsconf_file\n\n        if self.pbs_conf_file == '/etc/pbs.conf':\n            self.default_pbs_conf = True\n        elif (('PBS_CONF_FILE' not in os.environ) or\n              (os.environ['PBS_CONF_FILE'] != self.pbs_conf_file)):\n            self.default_pbs_conf = False\n        else:\n            self.default_pbs_conf = True\n\n        # default pbs_server_name to hostname, it will get set again once the\n        # config file is processed\n        self.pbs_server_name = self.hostname\n\n        # If snap is given then bypass parsing pbs.conf\n        if self.has_snap:\n            if snap is None:\n                t = 'snapshot_%s' % (time.strftime(\"%y%m%d_%H%M%S\"))\n                self.snap = os.path.join(self.du.get_tempdir(), t)\n            self.pbs_conf['PBS_HOME'] = self.snap\n            self.pbs_conf['PBS_EXEC'] = self.snap\n            self.pbs_conf['PBS_SERVER'] = self.hostname\n            m = re.match(r'.*snapshot_(?P<datetime>\\d{6,6}_\\d{6,6}).*',\n                         self.snap)\n            if m:\n                tm = time.strptime(m.group('datetime'), 
\"%y%m%d_%H%M%S\")\n                self.ctime = int(time.mktime(tm))\n        elif pbs_conf is not None:\n            self.pbs_conf = pbs_conf\n            self.pbs_server_name = self.du.get_pbs_server_name(self.pbs_conf)\n        else:\n            self.pbs_conf = self.du.parse_pbs_config(self.hostname,\n                                                     self.pbs_conf_file)\n            if self.pbs_conf is None or len(self.pbs_conf) == 0:\n                self.pbs_conf = {'PBS_HOME': \"\", 'PBS_EXEC': \"\"}\n            else:\n                ef = os.path.join(self.pbs_conf['PBS_HOME'], 'pbs_environment')\n                self.pbs_env = self.du.parse_pbs_environment(self.hostname, ef)\n                self.pbs_server_name = self.du.get_pbs_server_name(\n                    self.pbs_conf)\n\n        self.init_logfile_path(self.pbs_conf)\n\n    def _load_from_snap(self, snap):\n        snapmap = {}\n        snapmap[SERVER] = os.path.join(snap, 'server', 'qstat_Bf.out')\n        snapmap[VNODE] = os.path.join(snap, 'node', 'pbsnodes_va.out')\n        snapmap[QUEUE] = os.path.join(snap, 'server', 'qstat_Qf.out')\n        snapmap[JOB] = os.path.join(snap, 'job', 'qstat_tf.out')\n        if not os.path.isfile(snapmap[JOB]):\n            snapmap[JOB] = os.path.join(snap, 'job', 'qstat_f.out')\n        snapmap[RESV] = os.path.join(snap, 'reservation', 'pbs_rstat_f.out')\n        snapmap[SCHED] = os.path.join(snap, 'scheduler', 'qmgr_psched.out')\n        snapmap[HOOK] = []\n        if (os.path.isdir(os.path.join(snap, 'server_priv')) and\n                os.path.isdir(os.path.join(snap, 'server_priv', 'hooks'))):\n            _ld = os.listdir(os.path.join(snap, 'server_priv', 'hooks'))\n            for f in _ld:\n                if f.endswith('.HK'):\n                    snapmap[HOOK].append(\n                        os.path.join(snap, 'server_priv', 'hooks', f))\n\n        return snapmap\n\n    def init_logfile_path(self, conf=None):\n        \"\"\"\n        Initialize 
path to log files for this service\n\n        :param conf: PBS conf file parameters\n        :type conf: Dictionary\n        \"\"\"\n        elmt = self._instance_to_logpath(self)\n        if elmt is None:\n            return\n\n        if conf is not None and 'PBS_HOME' in conf:\n            tm = time.strftime(\"%Y%m%d\", time.localtime())\n            self.logfile = os.path.join(conf['PBS_HOME'], elmt, tm)\n            self.acctlogfile = os.path.join(conf['PBS_HOME'], 'server_priv',\n                                            'accounting', tm)\n\n    def _instance_to_logpath(self, inst):\n        \"\"\"\n        returns the log path associated to this service\n        \"\"\"\n        if inst.__class__.__name__ == \"Scheduler\":\n            logval = 'sched_logs'\n        elif inst.__class__.__name__ == \"Server\":\n            logval = 'server_logs'\n        elif inst.__class__.__name__ == \"MoM\":\n            logval = 'mom_logs'\n        elif inst.__class__.__name__ == \"Comm\":\n            logval = 'comm_logs'\n        else:\n            logval = None\n        return logval\n\n    def _instance_to_cmd(self, inst):\n        \"\"\"\n        returns the command associated to this service\n        \"\"\"\n        if inst.__class__.__name__ == \"Scheduler\":\n            cmd = 'pbs_sched'\n        elif inst.__class__.__name__ == \"Server\":\n            cmd = 'pbs_server'\n        elif inst.__class__.__name__ == \"MoM\":\n            cmd = 'pbs_mom'\n        elif inst.__class__.__name__ == \"Comm\":\n            cmd = 'pbs_comm'\n        else:\n            cmd = None\n        return cmd\n\n    def _instance_to_servicename(self, inst):\n        \"\"\"\n        return the service name associated to the instance. 
One of\n        ``server, scheduler, mom, or comm.``\n        \"\"\"\n        if inst.__class__.__name__ == \"Scheduler\":\n            nm = 'scheduler'\n        elif inst.__class__.__name__ == \"Server\":\n            nm = 'server'\n        elif inst.__class__.__name__ == \"MoM\":\n            nm = 'mom'\n        elif inst.__class__.__name__ == \"Comm\":\n            nm = 'comm'\n        else:\n            nm = ''\n        return nm\n\n    def _instance_to_privpath(self, inst):\n        \"\"\"\n        returns the path to priv associated to this service\n        \"\"\"\n        if inst.__class__.__name__ == \"Scheduler\":\n            priv = 'sched_priv'\n        elif inst.__class__.__name__ == \"Server\":\n            priv = 'server_priv'\n        elif inst.__class__.__name__ == \"MoM\":\n            priv = 'mom_priv'\n        elif inst.__class__.__name__ == \"Comm\":\n            priv = 'server_priv'\n        else:\n            priv = None\n        return priv\n\n    def _instance_to_lock(self, inst):\n        \"\"\"\n        returns the path to lock file associated to this service\n        \"\"\"\n        if inst.__class__.__name__ == \"Scheduler\":\n            lock = 'sched.lock'\n        elif inst.__class__.__name__ == \"Server\":\n            lock = 'server.lock'\n        elif inst.__class__.__name__ == \"MoM\":\n            lock = 'mom.lock'\n        elif inst.__class__.__name__ == \"Comm\":\n            lock = 'comm.lock'\n        else:\n            lock = None\n        return lock\n\n    def set_launcher(self, execargs=None):\n        self.launcher = execargs\n\n    def _isUp(self):\n        \"\"\"\n        returns True if service is up and False otherwise\n        \"\"\"\n        live_pids = self._all_instance_pids()\n        pid = self._get_pid()\n        if live_pids is not None and pid in live_pids:\n            return True\n        return False\n\n    def _signal(self, sig, procname=None):\n        \"\"\"\n        Send signal ``sig`` to service. 
sig is the signal name\n        as it would be sent to the program kill, e.g. -HUP.\n\n        Return the ``out/err/rc`` from the command run to send\n        the signal. See DshUtils.run_cmd\n\n        :param procname: Process name\n        :type procname: str or None\n        \"\"\"\n        pid = self._get_pid()\n\n        if procname is not None:\n            pi = self.pu.get_proc_info(self.hostname, procname)\n            if pi is not None and pi.values() and list(pi.values())[0]:\n                for _p in list(pi.values())[0]:\n                    ret = self.du.run_cmd(self.hostname, ['kill', sig, _p.pid],\n                                          sudo=True)\n                return ret\n\n        if pid is None:\n            return {'rc': 0, 'err': '', 'out': 'no pid to signal'}\n\n        return self.du.run_cmd(self.hostname, ['kill', sig, pid], sudo=True)\n\n    def _all_instance_pids(self):\n        \"\"\"\n        Return a list of all ``PIDS`` that match the\n        instance name or None.\n        \"\"\"\n        cmd = self._instance_to_cmd(self)\n        self.pu.get_proc_info(self.hostname, \".*\" + cmd + \".*\",\n                              regexp=True)\n        _procs = self.pu.processes.values()\n        if _procs:\n            _pids = []\n            for _p in _procs:\n                _pids.extend([x.pid for x in _p])\n            return _pids\n        return None\n\n    def _get_pid(self):\n        \"\"\"\n        Get the ``PID`` associated to this instance.\n        Implementation note, the pid is read from the\n        daemon's lock file.\n\n        This is different than _all_instance_pids in that\n        the PID of the last running instance can be retrieved\n        with ``_get_pid`` but not with ``_all_instance_pids``\n        \"\"\"\n        priv = self._instance_to_privpath(self)\n        lock = self._instance_to_lock(self)\n        if ((self.__class__.__name__ == \"Scheduler\") and\n                'sched_priv' in self.attributes):\n   
         path = os.path.join(self.attributes['sched_priv'], lock)\n        else:\n            path = os.path.join(self.pbs_conf['PBS_HOME'], priv, lock)\n        rv = self.du.cat(self.hostname, path, sudo=True, logerr=False)\n        if ((rv['rc'] == 0) and (len(rv['out']) > 0)):\n            pid = rv['out'][0].strip()\n        else:\n            pid = None\n        return pid\n\n    def _validate_pid(self, inst):\n        \"\"\"\n        Get pid and validate\n        :param inst: inst to update pid\n        :type inst: object\n        \"\"\"\n        for i in range(30):\n            live_pids = self._all_instance_pids()\n            pid = self._get_pid()\n            if live_pids is not None and pid in live_pids:\n                return pid\n            time.sleep(1)\n        return None\n\n    def _start(self, inst=None, args=None, cmd_map=None, launcher=None):\n        \"\"\"\n        Generic service startup\n\n        :param inst: The instance to act upon\n        :type inst: str\n        :param args: Optional command-line arguments\n        :type args: List\n        :param cmd_map: Optional dictionary of command line\n                        options to configuration variables\n        :type cmd_map: Dictionary\n        :param launcher: Optional utility to invoke the launch\n                         of the service. This option only takes\n                         effect on ``Unix/Linux``. 
The option can\n                         be a string or a list. Options may be passed\n                         to the launcher, for example to start a\n                         service through the valgrind utility\n                         redirecting to a log file, launcher could be\n                         set to e.g.\n                         ``['valgrind', '--log-file=/tmp/vlgrd.out']``\n                         or ``'valgrind --log-file=/tmp/vlgrd.out'``\n        \"\"\"\n        if launcher is None and self.launcher is not None:\n            launcher = self.launcher\n\n        app = self._instance_to_cmd(inst)\n        if app is None:\n            return\n        _m = ['service: starting', app]\n        if args is not None:\n            _m += ['with args: ']\n            _m += args\n\n        as_script = False\n        wait_on = True\n        if launcher is not None:\n            if isinstance(launcher, str):\n                launcher = launcher.split()\n            if app == 'pbs_server':\n                # running the pbs server through valgrind requires a bit of\n                # a dance because the pbs_server binary is pbs_server.bin\n                # and to run it requires being able to find libraries, so\n                # LD_LIBRARY_PATH is set and pbs_server.bin is run as a\n                # script\n                pexec = inst.pbs_conf['PBS_EXEC']\n                ldlib = ['LD_LIBRARY_PATH=' +\n                         os.path.join(pexec, 'lib') + ':' +\n                         os.path.join(pexec, 'pgsql', 'lib')]\n                app = 'pbs_server.bin'\n            else:\n                ldlib = []\n            cmd = ldlib + launcher\n            as_script = True\n            wait_on = False\n        else:\n            cmd = []\n\n        cmd += [os.path.join(self.pbs_conf['PBS_EXEC'], 'sbin', app)]\n        if args is not None:\n            cmd += args\n        if not self.default_pbs_conf:\n            cmd = ['PBS_CONF_FILE=' + 
inst.pbs_conf_file] + cmd\n            as_script = True\n        if cmd_map is not None:\n            conf_cmd = self.du.map_pbs_conf_to_cmd(cmd_map,\n                                                   pconf=self.pbs_conf)\n            cmd.extend(conf_cmd)\n            _m += conf_cmd\n\n        self.logger.info(\" \".join(_m))\n\n        ret = self.du.run_cmd(self.hostname, cmd, sudo=True,\n                              as_script=as_script, wait_on_script=wait_on,\n                              level=logging.INFOCLI, logerr=False)\n        if ret['rc'] != 0:\n            raise PbsServiceError(rv=False, rc=ret['rc'], msg=ret['err'])\n\n        ret_msg = True\n        if ret['err']:\n            ret_msg = ret['err']\n        pid = self._validate_pid(inst)\n        if pid is None:\n            raise PbsServiceError(rv=False, rc=-1, msg=\"Could not find PID\")\n        return ret_msg\n\n    def _stop(self, sig='-TERM', inst=None):\n        if inst is None:\n            return True\n        self._signal(sig)\n        pid = self._get_pid()\n        chk_pid = self._all_instance_pids()\n        if pid is None or chk_pid is None:\n            return True\n        num_seconds = 0\n        while (chk_pid is not None) and (str(pid) in chk_pid):\n            if num_seconds > 60:\n                m = (self.logprefix + 'could not stop service ' +\n                     self._instance_to_servicename(inst))\n                raise PbsServiceError(rv=False, rc=-1, msg=m)\n            time.sleep(1)\n            num_seconds += 1\n            chk_pid = self._all_instance_pids()\n        return True\n\n    def initialise_service(self):\n        \"\"\"\n        This method is meant to be overridden by subclasses to\n        initialise the service\n        \"\"\"\n\n    def log_lines(self, logtype, id=None, n=50, tail=True, starttime=None,\n                  endtime=None, host=None):\n        \"\"\"\n        Return the last ``<n>`` lines of a PBS log file, which\n        can be one of ``server``, 
``scheduler``, ``MoM``, or\n        ``tracejob``\n\n        :param logtype: The entity requested, an instance of a\n                        Scheduler, Server or MoM object, or the\n                        string 'tracejob' for tracejob\n        :type logtype: str or object\n        :param id: The id of the object to trace. Only used for\n                   tracejob\n        :param n: One of 'ALL' or the number of lines to\n                  process/display, defaults to 50.\n        :type n: str or int\n        :param tail: if True, parse log from the end to the start,\n                     otherwise parse from the start to the end.\n                     Defaults to True.\n        :type tail: bool\n        :param starttime: date timestamp to start matching\n        :type starttime: float\n        :param endtime: date timestamp to end matching\n        :type endtime: float\n        :param host: Hostname\n        :type host: str\n        :returns: Last ``<n>`` lines of logfile for ``Server``,\n                  ``Scheduler``, ``MoM``, or ``tracejob``\n        \"\"\"\n        logval = None\n        lines = []\n        sudo = False\n        if endtime is None:\n            endtime = time.time()\n        if starttime is None:\n            starttime = self.ctime\n        if host is None:\n            host = self.hostname\n        try:\n            if logtype == 'tracejob':\n                if id is None:\n                    return None\n                cmd = [os.path.join(\n                       self.pbs_conf['PBS_EXEC'],\n                       'bin',\n                       'tracejob')]\n                cmd += [str(id)]\n                lines = self.du.run_cmd(host, cmd)['out']\n                if n != 'ALL':\n                    lines = lines[-n:]\n            else:\n                daystart = time.strftime(\"%Y%m%d\", time.localtime(starttime))\n  
              dayend = time.strftime(\"%Y%m%d\", time.localtime(endtime))\n                firstday_obj = datetime.datetime.strptime(daystart, '%Y%m%d')\n                lastday_obj = datetime.datetime.strptime(dayend, '%Y%m%d')\n                if logtype == 'accounting':\n                    logdir = os.path.join(self.pbs_conf['PBS_HOME'],\n                                          'server_priv', 'accounting')\n                    sudo = True\n                elif ((self.__class__.__name__ == \"Scheduler\") and\n                        'sched_log' in self.attributes):\n                    # if setup is multi-sched then get logdir from\n                    # its attributes\n                    logdir = self.attributes['sched_log']\n                else:\n                    logval = self._instance_to_logpath(logtype)\n                    if logval is None:\n                        m = 'Invalid logtype'\n                        raise PtlLogMatchError(rv=False, rc=-1, msg=m)\n                    logdir = os.path.join(self.pbs_conf['PBS_HOME'], logval)\n                while firstday_obj <= lastday_obj:\n                    day = firstday_obj.strftime(\"%Y%m%d\")\n                    filename = os.path.join(logdir, day)\n                    if n == 'ALL':\n                        day_lines = self.du.cat(\n                            host, filename, sudo=sudo,\n                            level=logging.DEBUG2)['out']\n                    else:\n                        if tail:\n                            cmd = ['/usr/bin/tail']\n                        else:\n                            cmd = ['/usr/bin/head']\n\n                        cmd += ['-n']\n                        cmd += [str(n), filename]\n                        day_lines = self.du.run_cmd(\n                            host, cmd, sudo=sudo,\n                            level=logging.DEBUG2)['out']\n                    lines.extend(day_lines)\n                    firstday_obj = firstday_obj + 
datetime.timedelta(days=1)\n                    if n == 'ALL':\n                        continue\n                    n = n - len(day_lines)\n                    if n <= 0:\n                        break\n        except (Exception, IOError, PtlLogMatchError):\n            self.logger.error('error in log_lines')\n            self.logger.error(traceback.format_exc())\n            return None\n\n        return lines\n\n    def _log_match(self, logtype, msg, id=None, n=50, tail=True,\n                   allmatch=False, regexp=False, max_attempts=None,\n                   interval=None, starttime=None, endtime=None,\n                   level=logging.INFO, existence=True):\n        \"\"\"\n        Match given ``msg`` in given ``n`` lines of log file\n\n        :param logtype: The entity requested, an instance of a\n                        Scheduler, Server, or MoM object, or the\n                        strings 'tracejob' for tracejob or\n                        'accounting' for accounting logs.\n        :type logtype: object\n        :param msg: log message to match, can be regex also when\n                    ``regexp`` is True\n        :type msg: str\n        :param id: The id of the object to trace. Only used for\n                   tracejob\n        :type id: str\n        :param n: 'ALL' or the number of lines to search through,\n                  defaults to 50\n        :type n: str or int\n        :param tail: If true (default), starts from the end of\n                     the file\n        :type tail: bool\n        :param allmatch: If True all matching lines out of the ones\n                         parsed are returned as a list. 
Defaults\n                         to False\n        :type allmatch: bool\n        :param regexp: If true msg is a Python regular expression.\n                       Defaults to False\n        :type regexp: bool\n        :param max_attempts: the number of attempts to make to find\n                             a matching entry\n        :type max_attempts: int\n        :param interval: the interval between attempts\n        :type interval: int\n        :param starttime: If set ignore matches that occur before\n                          specified time\n        :type starttime: float\n        :param endtime: If set ignore matches that occur after\n                        specified time\n        :type endtime: float\n        :param level: The logging level, defaults to INFO\n        :type level: int\n        :param existence: If True (default), check for existence of\n                        given msg, else check for non-existence of\n                        given msg.\n        :type existence: bool\n\n        :return: (x,y) where x is the matching line\n                 number and y the line itself. If allmatch is True,\n                 a list of tuples is returned.\n        :rtype: tuple\n        :raises PtlLogMatchError:\n                When ``existence`` is True and given\n                ``msg`` is not found in ``n`` lines\n                Or\n                When ``existence`` is False and given\n                ``msg`` is found in ``n`` lines.\n\n        .. 
note:: The matching line number is relative to the record\n                  number, not the absolute line number in the file.\n        \"\"\"\n        try:\n            from ptl.utils.pbs_logutils import PBSLogUtils\n        except Exception:\n            _msg = 'error loading ptl.utils.pbs_logutils'\n            raise ImportError(_msg)\n\n        if self.logutils is None:\n            self.logutils = PBSLogUtils()\n        if max_attempts is None:\n            max_attempts = self.ptl_conf['max_attempts']\n        if interval is None:\n            interval = self.ptl_conf['attempt_interval']\n        rv = (None, None)\n        attempt = 1\n        lines = None\n        name = self._instance_to_servicename(logtype)\n        infomsg = (name + ' ' + self.shortname +\n                   ' log match: searching for \"' + msg + '\"')\n        if regexp:\n            infomsg += ' - using regular expression '\n        if allmatch:\n            infomsg += ' - on all matches '\n        if existence:\n            infomsg += ' - with existence'\n        else:\n            infomsg += ' - with non-existence'\n        if starttime:\n            starttimestr = time.strftime(\n                \"%Y/%m/%d %H:%M:%S\", time.localtime(starttime))\n            infomsg += \" - from %s\" % starttimestr\n        if endtime:\n            endtimestr = time.strftime(\n                \"%Y/%m/%d %H:%M:%S\", time.localtime(endtime))\n            infomsg += \" - to %s\" % endtimestr\n        attemptmsg = ' - No match'\n        while attempt <= max_attempts:\n            if attempt > 1:\n                attemptmsg = ' - attempt ' + str(attempt)\n            lines = self.log_lines(logtype, id, n=n, tail=tail,\n                                   starttime=starttime, endtime=endtime)\n            rv = self.logutils.match_msg(lines, msg, allmatch=allmatch,\n                                         regexp=regexp, starttime=starttime,\n                                         endtime=endtime)\n          
  if not existence:\n                if rv:\n                    _msg = infomsg + ' - but exists'\n                    raise PtlLogMatchError(rc=1, rv=False, msg=_msg)\n                else:\n                    self.logger.log(level, infomsg + attemptmsg + '... OK')\n                    break\n            if rv:\n                self.logger.log(level, infomsg + '... OK')\n                break\n            else:\n                if n != 'ALL':\n                    if attempt == max_attempts:\n                        # We will do one last attempt to match in case the\n                        # number of lines that were provided did not capture\n                        # the start or end time of interest\n                        max_attempts += 1\n                    n = 'ALL'\n                self.logger.log(level, infomsg + attemptmsg)\n            attempt += 1\n            time.sleep(interval)\n        try:\n            # Depending on whether the hostname is local or remote and whether\n            # sudo privileges were required, lines returned by log_lines can be\n            # an open file descriptor, we close here but ignore errors in case\n            # any were raised for all irrelevant cases\n            lines.close()\n        except Exception:\n            pass\n        if (rv is None and existence):\n            _msg = infomsg + attemptmsg\n            raise PtlLogMatchError(rc=1, rv=False, msg=_msg)\n        return rv\n\n    def accounting_match(self, msg, id=None, n=50, tail=True,\n                         allmatch=False, regexp=False, max_attempts=None,\n                         interval=None, starttime=None, endtime=None,\n                         level=logging.INFO, existence=True):\n        \"\"\"\n        Match given ``msg`` in given ``n`` lines of accounting log\n\n        :param msg: log message to match, can be regex also when\n                    ``regexp`` is True\n        :type msg: str\n        :param id: The id of the object to trace. 
Only used for\n                   tracejob\n        :type id: str\n        :param n: 'ALL' or the number of lines to search through,\n                  defaults to 50\n        :type n: str or int\n        :param tail: If true (default), starts from the end of\n                     the file\n        :type tail: bool\n        :param allmatch: If True all matching lines out of the ones\n                         parsed are returned as a list. Defaults\n                         to False\n        :type allmatch: bool\n        :param regexp: If true msg is a Python regular expression.\n                       Defaults to False\n        :type regexp: bool\n        :param max_attempts: the number of attempts to make to find\n                             a matching entry\n        :type max_attempts: int\n        :param interval: the interval between attempts\n        :type interval: int\n        :param starttime: If set ignore matches that occur before\n                          specified time\n        :type starttime: float\n        :param endtime: If set ignore matches that occur after\n                        specified time\n        :type endtime: float\n        :param level: The logging level, defaults to INFO\n        :type level: int\n        :param existence: If True (default), check for existence of\n                        given msg, else check for non-existence of\n                        given msg.\n        :type existence: bool\n\n        :return: (x,y) where x is the matching line\n                 number and y the line itself. If allmatch is True,\n                 a list of tuples is returned.\n        :rtype: tuple\n        :raises PtlLogMatchError:\n                When ``existence`` is True and given\n                ``msg`` is not found in ``n`` lines\n                Or\n                When ``existence`` is False and given\n                ``msg`` is found in ``n`` lines.\n\n        .. 
note:: The matching line number is relative to the record\n                  number, not the absolute line number in the file.\n        \"\"\"\n        return self._log_match('accounting', msg, id, n, tail, allmatch,\n                               regexp, max_attempts, interval, starttime,\n                               endtime, level, existence)\n\n    def tracejob_match(self, msg, id=None, n=50, tail=True,\n                       allmatch=False, regexp=False, max_attempts=None,\n                       interval=None, starttime=None, endtime=None,\n                       level=logging.INFO, existence=True):\n        \"\"\"\n        Match given ``msg`` in given ``n`` lines of tracejob log\n\n        :param msg: log message to match, can be regex also when\n                    ``regexp`` is True\n        :type msg: str\n        :param id: The id of the object to trace.\n        :type id: str\n        :param n: 'ALL' or the number of lines to search through,\n                  defaults to 50\n        :type n: str or int\n        :param tail: If true (default), starts from the end of\n                     the file\n        :type tail: bool\n        :param allmatch: If True all matching lines out of the ones\n                         parsed are returned as a list. 
Defaults\n                         to False\n        :type allmatch: bool\n        :param regexp: If true msg is a Python regular expression.\n                       Defaults to False\n        :type regexp: bool\n        :param max_attempts: the number of attempts to make to find\n                             a matching entry\n        :type max_attempts: int\n        :param interval: the interval between attempts\n        :type interval: int\n        :param starttime: If set ignore matches that occur before\n                          specified time\n        :type starttime: float\n        :param endtime: If set ignore matches that occur after\n                        specified time\n        :type endtime: float\n        :param level: The logging level, defaults to INFO\n        :type level: int\n        :param existence: If True (default), check for existence of\n                        given msg, else check for non-existence of\n                        given msg.\n        :type existence: bool\n\n        :return: (x,y) where x is the matching line\n                 number and y the line itself. If allmatch is True,\n                 a list of tuples is returned.\n        :rtype: tuple\n        :raises PtlLogMatchError:\n                When ``existence`` is True and given\n                ``msg`` is not found in ``n`` lines\n                Or\n                When ``existence`` is False and given\n                ``msg`` is found in ``n`` lines.\n\n        .. 
note:: The matching line number is relative to the record\n                  number, not the absolute line number in the file.\n        \"\"\"\n        return self._log_match('tracejob', msg, id, n, tail, allmatch,\n                               regexp, max_attempts, interval, starttime,\n                               endtime, level, existence)\n\n    def _save_config_file(self, dict_conf, fname):\n        ret = self.du.cat(self.hostname, fname, sudo=True)\n        if ret['rc'] == 0:\n            dict_conf[fname] = ret['out']\n        else:\n            self.logger.error('error saving configuration ' + fname)\n\n    def _load_configuration(self, infile, objtype=None):\n        \"\"\"\n        Load configuration as was saved in infile\n\n        :param infile: the file in which configuration\n                       was saved\n        :type infile: str\n        :param objtype: the object type to load configuration\n                        for, one of server, scheduler, mom or\n                        if None, load all objects in infile\n        \"\"\"\n        if os.path.isfile(infile):\n            conf = {}\n            sconf = {}\n            with open(infile, 'r') as f:\n                try:\n                    sconf = json.load(f)\n                except ValueError:\n                    self.logger.info(\"Error loading JSON file: %s\"\n                                     % infile)\n                    return False\n            conf = sconf[str(objtype)]\n            if objtype == MGR_OBJ_SERVER:\n                qmgr = os.path.join(self.client_conf['PBS_EXEC'],\n                                    'bin', 'qmgr')\n                for k, v in conf.items():\n                    # Load server configuration\n                    if k.startswith('qmgr_'):\n                        fpath = self.du.create_temp_file()\n                        print_svr = '\\n'.join(v)\n                        with open(fpath, 'w') as f:\n                            f.write(print_svr)\n 
                       file_qmgr = open(fpath)\n                        d = self.du.run_cmd(\n                            self.hostname, [qmgr], stdin=file_qmgr, sudo=True,\n                            logerr=False, level=logging.DEBUG)\n                        err_msg = \"Failed to load server configurations\"\n                        file_qmgr.close()\n                        if d['rc'] != 0:\n                            self.logger.error(\"%s\" % err_msg)\n                            return False\n                    # Load pbs.conf file\n                    elif k == \"pbs_conf\":\n                        enc_utf = v.encode('UTF-8')\n                        dec_b64 = base64.b64decode(enc_utf)\n                        cfg_vals = dec_b64.decode('UTF-8')\n                        config = ast.literal_eval(cfg_vals)\n                        self.du.set_pbs_config(self.hostname, confs=config)\n                    # Load hooks\n                    elif k == \"hooks\":\n                        fpath = self.du.create_temp_file()\n                        print_hooks = '\\n'.join(v['qmgr_print_hook'])\n                        with open(fpath, 'w') as f:\n                            f.write(print_hooks)\n                        file_qmgr = open(fpath)\n                        d = self.du.run_cmd(\n                            self.hostname, [qmgr], stdin=file_qmgr, sudo=True,\n                            level=logging.DEBUG)\n                        file_qmgr.close()\n                        if d['rc'] != 0:\n                            self.logger.error(\"Failed to load site hooks\")\n                if 'pbsnodes' in conf:\n                    nodes = conf['pbsnodes']\n                    for node in nodes:\n                        node_name = str(node['id'])\n                        nodes_created = self.create_pbsnode(node_name, node)\n                        if not nodes_created:\n                            self.logger.error(\"Failed to create node: %s\"\n               
                               % node)\n                            return False\n                return True\n            elif objtype == MGR_OBJ_SCHED:\n                for k, v in conf.items():\n                    fn = self.du.create_temp_file()\n                    try:\n                        rv = self.du.chmod(path=fn, mode=0o644)\n                        if not rv:\n                            self.logger.error(\"Failed to restore \" +\n                                              \"configuration: %s\" % k)\n                            return False\n                        with open(fn, 'w') as fd:\n                            fd.write(\"\\n\".join(v))\n                        rv = self.du.run_copy(\n                            self.hostname, src=fn, dest=k, sudo=True)\n                        if rv['rc'] != 0:\n                            self.logger.error(\"Failed to restore \" +\n                                              \"configuration: %s\" % k)\n                            return False\n                        rv = self.du.chown(path=k, runas=ROOT_USER,\n                                           uid=0, gid=0, sudo=True)\n                        if not rv:\n                            self.logger.error(\"Failed to restore \" +\n                                              \"configuration: %s\" % k)\n                            return False\n                    except Exception:\n                        self.logger.error(\"Failed to restore \" +\n                                          \"configuration: %s\" % k)\n                        return False\n                    finally:\n                        if os.path.isfile(fn):\n                            self.du.rm(path=fn, force=True, sudo=True)\n                return True\n            elif objtype == MGR_OBJ_NODE:\n                nconf = conf[str(self.hostname)]\n                for k, v in nconf.items():\n                    try:\n                        fn = self.du.create_temp_file()\n  
                      rv = self.du.chmod(path=fn, mode=0o644)\n                        if not rv:\n                            self.logger.error(\"Failed to restore \" +\n                                              \"configuration: %s\" % k)\n                            return False\n                        with open(fn, 'w') as fd:\n                            mom_config_data = \"\\n\".join(v) + \"\\n\"\n                            fd.write(mom_config_data)\n                        rv = self.du.run_copy(\n                            self.hostname, src=fn, dest=k, sudo=True)\n                        if rv['rc'] != 0:\n                            self.logger.error(\"Failed to restore \" +\n                                              \"configuration: %s\" % k)\n                            return False\n                        rv = self.du.chown(path=k, runas=ROOT_USER,\n                                           uid=0, gid=0, sudo=True)\n                        if not rv:\n                            self.logger.error(\"Failed to restore \" +\n                                              \"configuration: %s\" % k)\n                            return False\n                    except Exception:\n                        self.logger.error(\"Failed to restore \" +\n                                          \"configuration: %s\" % k)\n                        return False\n                    finally:\n                        if os.path.isfile(fn):\n                            self.du.rm(path=fn, force=True, sudo=True)\n                return True\n\n    def create_pbsnode(self, node_name, attrs):\n        \"\"\"\n        Create node in PBS with given attributes\n        \"\"\"\n        qmgr = os.path.join(self.client_conf['PBS_EXEC'],\n                            'bin', 'qmgr')\n        execcmd = \"create node \" + node_name\n        execcmd += \" Port=\" + attrs['Port']\n        cmd = [qmgr, \"-c\", execcmd]\n        ret = self.du.run_cmd(self.hostname, cmd, 
sudo=True)\n        if ret['rc'] != 0:\n            self.logger.info(\"Failed to create node: %s\" % node_name)\n            self.logger.error(\"Error: %s\" % ret['err'])\n            return False\n        # skip all read-only attributes\n        skip_atb_list = ['id', 'pbs_version', 'pcpus',\n                         'last_state_change_time', 'ntype',\n                         'Mom', 'sharing', 'resources_available.vnode',\n                         'resources_available.host', 'last_used_time',\n                         'resource_assigned', 'resv', 'Port'\n                         ]\n        for node_atb, val in attrs.items():\n            # 'state' is a read-write attribute only for the 'offline' value\n            if (node_atb in skip_atb_list or\n               'resources_assigned' in node_atb or\n               (node_atb == 'state' and val != 'offline')):\n                continue\n            k = str(node_atb)\n            v = str(val)\n            execcmd = \"set node %s %s='%s'\" % (node_name, k, v)\n            cmd = [qmgr, \"-c\", execcmd]\n            ret = self.du.run_cmd(self.hostname, cmd, sudo=True,\n                                  level=logging.DEBUG)\n            if ret['rc'] != 0:\n                self.logger.info(\"Failed to set node attribute %s=%s\" % (k, v))\n                return False\n        return True\n\n    def get_tempdir(self):\n        \"\"\"\n        Platform-independent call to get a temporary directory\n        \"\"\"\n        return self.du.get_tempdir(self.hostname)\n\n    def get_uname(self, hostname=None, pyexec=None):\n        \"\"\"\n        Get local or remote platform info in uname format, essentially\n        the value of Python's platform.uname\n        :param hostname: The hostname to query for platform info\n        :type hostname: str or None\n        :param pyexec: A path to a Python interpreter to use to query\n                       a remote host for platform info\n        :type pyexec: str or None\n        For efficiency 
the value is cached and retrieved from the\n        cache upon subsequent request\n        \"\"\"\n        uplatform = ' '.join(platform.uname())\n        if hostname is None:\n            hostname = socket.gethostname()\n        if not self.du.is_localhost(hostname):\n            if pyexec is None:\n                pyexec = self.du.which(\n                    hostname, 'python3', level=logging.DEBUG2)\n            _cmdstr = '\"import platform;'\n            _cmdstr += 'print(\\' \\'.join(platform.uname()))\"'\n            cmd = [pyexec, '-c', _cmdstr]\n            ret = self.du.run_cmd(hostname, cmd=cmd)\n            if ret['rc'] != 0 or len(ret['out']) == 0:\n                _msg = 'Unable to retrieve platform info, '\n                _msg += 'defaulting to local platform'\n                self.logger.warning(_msg)\n            else:\n                uplatform = ret['out'][0]\n        return uplatform\n\n    def get_os_info(self, hostname=None, pyexec=None):\n        \"\"\"\n        Get local or remote OS info\n        :param hostname: The hostname to query for platform info\n        :type hostname: str or None\n        :param pyexec: A path to a Python interpreter to use to query\n                       a remote host for platform info\n        :type pyexec: str or None\n        :returns: a 'str' object containing os info\n        \"\"\"\n\n        local_info = platform.platform()\n\n        if hostname is None or self.du.is_localhost(hostname):\n            return local_info\n\n        if pyexec is None:\n            pyexec = self.du.which(hostname, 'python3', level=logging.DEBUG2)\n\n        cmd = [pyexec, '-c',\n               '\"import platform; print(platform.platform())\"']\n        ret = self.du.run_cmd(hostname, cmd=cmd)\n        if ret['rc'] != 0 or len(ret['out']) == 0:\n            self.logger.warning(\"Unable to retrieve OS info, defaulting \"\n                                \"to local\")\n            ret_info = local_info\n        else:\n           
 ret_info = ret['out'][0]\n\n        return ret_info\n\n    def __str__(self):\n        return (self.__class__.__name__ + ' ' + self.hostname + ' config ' +\n                self.pbs_conf_file)\n\n    def __repr__(self):\n        return (self.__class__.__name__ + '/' + self.pbs_conf_file + '@' +\n                self.hostname)\n\n    def cleanup_files(self):\n        \"\"\"\n        This function removes any dynamic resource files created by server/mom\n        objects\n        \"\"\"\n        for dyn_files in self.dyn_created_files:\n            self.du.rm(path=dyn_files, sudo=True, force=True)\n        self.dyn_created_files = []\n\n    def isUp(self, max_attempts=None):\n        \"\"\"\n        Check whether the daemon is up, retrying up to\n        ``max_attempts`` times\n        \"\"\"\n        if max_attempts is None:\n            max_attempts = self.ptl_conf['max_attempts']\n        rv = False\n        for _ in range(max_attempts):\n            rv = self._isUp()\n            if rv:\n                break\n            time.sleep(1)\n        return rv\n\n    def signal(self, sig):\n        \"\"\"\n        Send a signal to the daemon\n        \"\"\"\n        self.logger.info(self.__class__.__name__ + \" sent signal \" + sig)\n        return self._signal(sig)\n\n    def get_pid(self):\n        \"\"\"\n        Get the daemon's pid\n        \"\"\"\n        return self._get_pid()\n\n    def all_instance_pids(self):\n        \"\"\"\n        Get all pids of the given instance\n        \"\"\"\n        return self._all_instance_pids()\n
  },
  {
    "path": "test/fw/ptl/lib/ptl_types.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nimport collections\nimport datetime\nimport os\nimport re\nimport sys\nimport time\nimport string\nimport random\ntry:\n    from collections.abc import Callable  # Python 3.10+\nexcept ImportError:\n    from collections import Callable  # For Python versions before 3.10\n\n\nclass PbsAttribute(object):\n    \"\"\"\n    Descriptor class for PBS attribute\n\n    :param name: PBS attribute name\n    :type name: str\n    :param value: Value for the attribute\n    :type value: str or int or float\n    \"\"\"\n\n    def __init__(self, name=None, value=None):\n        self.set_name(name)\n        self.set_value(value)\n\n    @classmethod\n    def isfloat(cls, value):\n        \"\"\"\n        Returns True if value is a float or a string representation\n        of a float; returns False otherwise\n\n        :param value: value to be checked\n        :type value: str or int or float\n        :returns: True or False\n        \"\"\"\n        if isinstance(value, float):\n            return True\n        if isinstance(value, str):\n            try:\n                float(value)\n                return True\n            except ValueError:\n                return False\n        return False\n\n    @classmethod\n    def decode_value(cls, value):\n        \"\"\"\n        Decode an attribute/resource value, if a value is\n        made up of digits only then return the numeric value\n        of it, 
if it is made of alphanumeric values only, return\n        it as a string, if it is of type size, i.e., with a memory\n        unit such as b, kb, mb or gb, then return the converted size in\n        kb without the unit\n\n        :param value: attribute/resource value\n        :type value: str or int\n        :returns: int or float or string\n        \"\"\"\n\n        if value is None or isinstance(value, Callable):\n            return value\n\n        if isinstance(value, (int, float)):\n            return value\n\n        if value.isdigit():\n            return int(value)\n\n        if value.isalpha() or value == '':\n            return value\n\n        if cls.isfloat(value):\n            return float(value)\n\n        if ':' in value:\n            try:\n                value = int(PbsTypeDuration(value))\n            except ValueError:\n                pass\n            return value\n\n        # TODO revisit:  assume (this could be the wrong type, need a real\n        # data model anyway) that the remaining is a memory expression\n        try:\n            value = PbsTypeSize(value)\n            return value.value\n        except ValueError:\n            pass\n        except TypeError:\n            # if not then we pass to return the value as is\n            pass\n\n        return value\n\n    @classmethod\n    def random_str(cls, length=1, prefix=''):\n        \"\"\"\n        Generates a random string, unique across calls\n\n        :param length: Length of the string\n        :type length: int\n        :param prefix: Prefix of the string\n        :type prefix: str\n        :returns: Random string\n        \"\"\"\n        r = [random.choice(string.ascii_letters) for _ in range(length)]\n        r = ''.join([prefix] + r)\n        # check for the name-mangled form of cls.__uniq_rstr; the unmangled\n        # name never exists, so the cache would be re-created on every call\n        if hasattr(cls, '_PbsAttribute__uniq_rstr'):\n            while r in cls.__uniq_rstr:\n                r = [random.choice(string.ascii_letters)\n                     for _ in range(length)]\n                r = ''.join([prefix] + r)\n            cls.__uniq_rstr.append(r)\n 
       else:\n            cls.__uniq_rstr = [r]\n\n        return r\n\n    def set_name(self, name):\n        \"\"\"\n        Set PBS attribute name\n\n        :param name: PBS attribute\n        :type name: str\n        \"\"\"\n        self.name = name\n        if name is not None and '.' in name:\n            self.is_resource = True\n            self.resource_type, self.resource_name = self.name.split('.')\n        else:\n            self.is_resource = False\n            self.resource_type = self.resource_name = None\n\n    def set_value(self, value):\n        \"\"\"\n        Set PBS attribute value\n\n        :param value: Value of PBS attribute\n        :type value: str or int or float\n        \"\"\"\n        self.value = value\n        if isinstance(value, (int, float)) or str(value).isdigit():\n            self.is_consumable = True\n        else:\n            self.is_consumable = False\n\n    def obfuscate_name(self, a=None):\n        \"\"\"\n        Obfuscate PBS attribute name\n        \"\"\"\n        if a is not None:\n            on = a\n        else:\n            on = self.random_str(len(self.name))\n\n        self.decoded_name = self.name\n        if self.is_resource:\n            self.set_name(self.resource_type + '.' + on)\n\n    def obfuscate_value(self, v=None):\n        \"\"\"\n        Obfuscate PBS attribute value\n        \"\"\"\n        if not self.is_consumable:\n            self.decoded_value = self.value\n            return\n\n        if v is not None:\n            ov = v\n        else:\n            # the value may be numeric, so cast to str before measuring it\n            ov = self.random_str(len(str(self.value)))\n\n        self.decoded_value = self.value\n        self.set_value(ov)\n\n\nclass PbsTypeSize(str):\n\n    \"\"\"\n    Descriptor class for memory as a numeric entity.\n    Units can be one of ``b``, ``kb``, ``mb``, ``gb``, ``tb``, ``pb``\n\n    :param unit: The unit type associated to the memory value\n    :type unit: str\n    :param value: The numeric value of the memory\n    :type value: int or None\n    :raises: ValueError and TypeError\n    \"\"\"\n\n    def __init__(self, value=None):\n        if value is None:\n            return\n\n        if len(value) < 2:\n            raise ValueError\n\n        if value[-1:] in ('b', 'B') and value[:-1].isdigit():\n            self.unit = 'b'\n            self.value = int(int(value[:-1]) / 1024)\n            return\n\n        # lower() applied to ignore case\n        unit = value[-2:].lower()\n        self.value = value[:-2]\n        if not self.value.isdigit():\n            raise ValueError\n        if unit == 'kb':\n            self.value = int(self.value)\n        elif unit == 'mb':\n            self.value = int(self.value) * 1024\n        elif unit == 'gb':\n            self.value = int(self.value) * 1024 * 1024\n        elif unit == 'tb':\n            self.value = int(self.value) * 1024 * 1024 * 1024\n        elif unit == 'pb':\n            self.value = int(self.value) * 1024 * 1024 * 1024 * 1024\n        else:\n            raise TypeError\n        self.unit = 'kb'\n\n    def encode(self, value=None, valtype='kb', precision=1):\n        \"\"\"\n        Encode numeric memory input in kilobytes to a string, including\n        unit\n\n        :param value: The numeric value of memory to 
encode\n        :type value: int or None.\n        :param valtype: The unit of the input value, defaults to kb\n        :type valtype: str\n        :param precision: Precision of the encoded value, defaults to 1\n        :type precision: int\n        :returns: Encoded memory in kb to string\n        \"\"\"\n        if value is None:\n            value = self.value\n\n        if valtype == 'b':\n            val = value\n        elif valtype == 'kb':\n            val = value * 1024\n        elif valtype == 'mb':\n            val = value * 1024 * 1024\n        elif valtype == 'gb':\n            val = value * 1024 * 1024 * 1024\n        elif valtype == 'tb':\n            val = value * 1024 * 1024 * 1024 * 1024\n        elif valtype == 'pb':\n            val = value * 1024 * 1024 * 1024 * 1024 * 1024\n        else:\n            raise ValueError\n\n        m = (\n            (1 << 50, 'pb'),\n            (1 << 40, 'tb'),\n            (1 << 30, 'gb'),\n            (1 << 20, 'mb'),\n            (1 << 10, 'kb'),\n            (1, 'b')\n        )\n\n        for factor, suffix in m:\n            if val >= factor:\n                break\n\n        return '%.*f%s' % (precision, float(val) / factor, suffix)\n\n    def __cmp__(self, other):\n        if self.value < other.value:\n            return -1\n        if self.value == other.value:\n            return 0\n        return 1\n\n    def __lt__(self, other):\n        if self.value < other.value:\n            return True\n        return False\n\n    def __le__(self, other):\n        if self.value <= other.value:\n            return True\n        return False\n\n    def __gt__(self, other):\n        if self.value > other.value:\n            return True\n        return False\n\n    def __ge__(self, other):\n        if self.value >= other.value:\n            return True\n        return False\n\n    def __eq__(self, other):\n        if self.value == other.value:\n            return True\n        return False\n\n    def __get__(self):\n        return 
self.value\n\n    def __add__(self, other):\n        if isinstance(other, int):\n            self.value += other\n        else:\n            self.value += other.value\n        return self\n\n    def __mul__(self, other):\n        if isinstance(other, int):\n            self.value *= other\n        else:\n            self.value *= other.value\n        return self\n\n    def __floordiv__(self, other):\n        self.value //= other.value\n        return self\n\n    def __sub__(self, other):\n        self.value -= other.value\n        return self\n\n    def __repr__(self):\n        return self.__str__()\n\n    def __str__(self):\n        return self.encode(valtype=self.unit)\n\n\nclass PbsTypeDuration(str):\n\n    \"\"\"\n    Descriptor class for a duration represented as ``hours``,\n    ``minutes``, and ``seconds``, in the form of ``[HH:][MM:]SS``\n\n    :param as_seconds: HH:MM:SS represented in seconds\n    :type as_seconds: int\n    :param as_str: duration represented in HH:MM:SS\n    :type as_str: str\n    \"\"\"\n\n    def __init__(self, val):\n        if isinstance(val, str):\n            if ':' in val:\n                s = val.split(':')\n                fields = len(s)\n                if fields > 3:\n                    raise ValueError\n                hr = mn = sc = 0\n                if fields >= 2:\n                    sc = s[fields - 1]\n                    mn = s[fields - 2]\n                    if fields == 3:\n                        hr = s[0]\n                self.duration = int(hr) * 3600 + int(mn) * 60 + int(sc)\n            elif val.isdigit():\n                self.duration = int(val)\n        elif isinstance(val, int) or isinstance(val, float):\n            self.duration = val\n\n    def __add__(self, other):\n        self.duration += other.duration\n        return self\n\n    def __sub__(self, other):\n        self.duration -= other.duration\n        return self\n\n    def __cmp__(self, other):\n        if self.duration < other.duration:\n         
   return -1\n        if self.duration == other.duration:\n            return 0\n        return 1\n\n    def __lt__(self, other):\n        if self.duration < other.duration:\n            return True\n        return False\n\n    def __le__(self, other):\n        if self.duration <= other.duration:\n            return True\n        return False\n\n    def __gt__(self, other):\n        if self.duration > other.duration:\n            return True\n        return False\n\n    def __ge__(self, other):\n        if self.duration >= other.duration:\n            return True\n        return False\n\n    def __eq__(self, other):\n        if self.duration == other.duration:\n            return True\n        return False\n\n    def __get__(self):\n        return self.as_str\n\n    def __repr__(self):\n        return self.__str__()\n\n    def __int__(self):\n        return int(self.duration)\n\n    def __str__(self):\n        return str(datetime.timedelta(seconds=self.duration))\n\n\nclass PbsTypeArray(list):\n\n    \"\"\"\n    Descriptor class for a PBS array list type, e.g. 
String array\n\n    :param value: Array value to be passed\n    :param sep: Separator for two array elements\n    :type sep: str\n    :returns: List\n    \"\"\"\n\n    def __init__(self, value=None, sep=','):\n        self.separator = sep\n        list.__init__(self, value.split(sep))\n\n    def __str__(self):\n        return self.separator.join(self)\n\n\nclass PbsTypeList(dict):\n\n    \"\"\"\n    Descriptor class for a generic PBS list made up of delimited\n    key/value pairs\n\n    :param value: List value to be passed\n    :param sep: Separator for two key/value pairs\n    :type sep: str\n    :param kvsep: Separator for key and value\n    :type kvsep: str\n    :returns: Dictionary\n    \"\"\"\n\n    def __init__(self, value=None, sep=',', kvsep='='):\n        self.kvsep = kvsep\n        self.separator = sep\n        d = {}\n        as_list = [v.split(kvsep) for v in value.split(sep)]\n        if as_list:\n            for k, v in as_list:\n                d[k] = v\n            del as_list\n        dict.__init__(self, d)\n\n    def __str__(self):\n        s = []\n        for k, v in self.items():\n            s += [str(k) + self.kvsep + str(v)]\n        return self.separator.join(s)\n\n\nclass PbsTypeLicenseCount(PbsTypeList):\n\n    \"\"\"\n    Descriptor class for a PBS license_count attribute.\n\n    It is a specialized list where key/values are ':' delimited, separated\n    by a ' ' (space)\n\n    :param value: PBS license_count attribute value\n    :returns: Specialized list\n    \"\"\"\n\n    def __init__(self, value=None):\n        super(PbsTypeLicenseCount, self).__init__(value, sep=' ', kvsep=':')\n\n\nclass PbsTypeVariableList(PbsTypeList):\n\n    \"\"\"\n    Descriptor class for a PBS Variable_List attribute\n\n    It is a specialized list where key/values are '=' delimited, separated\n    by a ',' (comma)\n\n    :param value: PBS Variable_List attribute value\n    :returns: Specialized list\n    \"\"\"\n\n    def __init__(self, value=None):\n        super(PbsTypeVariableList, self).__init__(value, sep=',', kvsep='=')\n\n\nclass PbsTypeSelect(list):\n\n    \"\"\"\n    Descriptor class for PBS select/schedselect specification.\n    Select is of the form:\n\n    ``<select> ::= <m>\":\"<chunk> | <select>\"+\"<select>``\n\n    ``<m> ::= <digit> | <digit><m>``\n\n    ``<chunk> ::= <resc_name>\":\"<resc_value> | <chunk>\":\"<chunk>``\n\n    ``<m>`` is a multiplying factor for each chunk requested\n\n    ``<chunk>`` are resource key/value pairs\n\n    The type populates a list of single chunk of resource\n    ``key/value`` pairs, the list can be walked by iterating over\n    the type itself.\n\n    :param num_chunks: The total number of chunks in the select\n    :type num_chunks: int\n    :param resources: A dictionary of all resource counts in the select\n    :type resources: Dictionary\n    \"\"\"\n\n    def __init__(self, s=None):\n        if s is not None:\n            self._as_str = s\n            self.resources = {}\n            self.num_chunks = 0\n            nc = s.split('+')\n            for chunk in nc:\n                self._parse_chunk(chunk)\n\n    def _parse_chunk(self, chunk):\n        d = chunk.split(':')\n        # number of chunks\n        _num_chunks = int(d[0])\n        self.num_chunks += _num_chunks\n        r = {}\n        for e in d[1:]:\n            k, v = e.split('=')\n            r[k] = v\n            if 'mem' in k:\n                try:\n                    v = PbsTypeSize(v).value\n                except Exception:\n                    # failed so we guessed wrong on the type\n                    pass\n            if isinstance(v, int) or v.isdigit():\n                if k not in self.resources:\n                    self.resources[k] = _num_chunks * int(v)\n                else:\n                    self.resources[k] += _num_chunks * int(v)\n            else:\n                if k not in self.resources:\n                    self.resources[k] = v\n                else:\n                  
  self.resources[k] = [self.resources[k], v]\n\n        # explicitly expose the multiplying factor\n        for _ in range(_num_chunks):\n            self.append(r)\n\n    def __add__(self, chunk=None):\n        if chunk is None:\n            return self\n        self._parse_chunk(chunk)\n        self._as_str = self._as_str + \"+\" + chunk\n        return self\n\n    def __repr__(self):\n        return str(self)\n\n    def __str__(self):\n        return self._as_str\n\n\nclass PbsTypeChunk(dict):\n\n    \"\"\"\n    Descriptor class for a PBS chunk associated to a\n    ``PbsTypeExecVnode``.This type of chunk corresponds to\n    a node solution to a resource request,not to the select\n    specification.\n\n    ``chunk ::= <subchk> | <chunk>\"+\"<chunk>``\n\n    ``subchk ::= <node>\":\"<resource>``\n\n    ``resource ::= <key>\":\"<val> | <resource>\":\"<resource>``\n\n    A chunk expresses a solution to a specific select-chunk\n    request. If multiple chunks are needed to solve a single\n    select-chunk, e.g., on a shared memory system, the chunk\n    will be extended into virtual chunk,vchunk.\n\n    :param vnode: the vnode name corresponding to the chunk\n    :type vnode: str or None\n    :param resources: the key value pair of resources in\n                      dictionary form\n    :type resources: Dictionary or None\n    :param vchunk: a list of virtual chunks needed to solve\n                   the select-chunk, vchunk is only set if more\n                   than one vchunk are required to solve the\n                   select-chunk\n    :type vchunk: list\n    \"\"\"\n\n    def __init__(self, vnode=None, resources=None, chunkstr=None):\n        self.vnode = vnode\n        if resources is not None:\n            self.resources = resources\n        else:\n            self.resources = {}\n        self.vchunk = []\n        self.as_str = chunkstr\n        self.__parse_chunk(chunkstr)\n\n    def __parse_chunk(self, chunkstr=None):\n        if chunkstr is None:\n      
      return\n\n        vchunks = chunkstr.split('+')\n        if len(vchunks) == 1:\n            entities = chunkstr.split(':')\n            self.vnode = entities[0]\n            if len(entities) > 1:\n                for e in entities[1:]:\n                    (r, v) = e.split('=')\n                    self.resources[r] = v\n            self[self.vnode] = self.resources\n        else:\n            for sc in vchunks:\n                chk = PbsTypeChunk(chunkstr=sc)\n                self.vchunk.append(chk)\n                self[chk.vnode] = chk.resources\n\n    def add(self, vnode, resources):\n        \"\"\"\n        Add a chunk specification. If a chunk is already\n        defined, add the chunk as a vchunk.\n\n        :param vnode: The vnode to add\n        :type vnode: str\n        :param resources: The resources associated to the\n                          vnode\n        :type resources: str\n        :returns: Added chunk specification\n        \"\"\"\n        if self.vnode == vnode:\n            self.resources = {**self.resources, **resources}\n            return self\n        elif len(self.vchunk) != 0:\n            for chk in self.vchunk:\n                if chk.vnode == vnode:\n                    chk.resources = {**chk.resources, **resources}\n                    return self\n        chk = PbsTypeChunk(vnode, resources)\n        self.vchunk.append(chk)\n        return self\n\n    def __repr__(self):\n        return self.__str__()\n\n    def __str__(self):\n        _s = [\"(\"]\n        _s += [self.vnode, \":\"]\n        for resc_k, resc_v in self.resources.items():\n            _s += [resc_k, \"=\", str(resc_v)]\n        if self.vchunk:\n            for _v in self.vchunk:\n                _s += [\"+\", _v.vnode, \":\"]\n                for resc_k, resc_v in _v.resources.items():\n                    _s += [resc_k, \"=\", str(resc_v)]\n        _s += [\")\"]\n        return \"\".join(_s)\n\n\nclass PbsTypeExecVnode(list):\n\n    \"\"\"\n    Execvnode 
representation, expressed as a list of\n    PbsTypeChunk\n\n    :param vchunk: List of virtual chunks, only set when\n                   more than one vnode is allocated to a\n                   host to satisfy a chunk requested\n    :type vchunk: List\n    :param num_chunks: The number of chunks satisfied by\n                       this execvnode\n    :type num_chunks: int\n    :param vnodes: List of vnode names allocated to the execvnode\n    :type vnodes: List\n    :param resource: method to return the amount of a named\n                     resource satisfied by this execvnode\n    \"\"\"\n\n    def __init__(self, s=None):\n        if s is None:\n            return None\n\n        self._as_str = s\n        start = 0\n        self.num_chunks = 0\n        for c in range(len(s)):\n            # must split on '+' between parens because '+' can occur within\n            # paren for complex specs\n            if s[c] == '(':\n                start = c + 1\n            if s[c] == ')':\n                self.append(PbsTypeChunk(chunkstr=s[start:c]))\n                self.num_chunks += 1\n\n    def resource(self, name=None):\n        \"\"\"\n        :param name: Name of the resource\n        :type name: str or None\n        \"\"\"\n        if name is None:\n            return None\n        _total = 0\n        for _c in self:\n            if _c.vchunk:\n                for _v in _c.vchunk:\n                    if name in _v.resources:\n                        _total += int(_v.resources[name])\n            if name in _c.resources:\n                _total += int(_c.resources[name])\n        return _total\n\n    @property\n    def vnodes(self):\n        vnodes = []\n        for e in self:\n            vnodes += [e.vnode]\n            if e.vchunk:\n                vnodes += [n.vnode for n in e.vchunk]\n\n        return list(set(vnodes))\n\n    def __str__(self):\n        return self._as_str\n        # below would be to verify that the 
converted type maps back correctly\n        _s = []\n        for _c in self:\n            _s += [str(_c)]\n        return \"+\".join(_s)\n\n\nclass PbsTypeExecHost(str):\n\n    \"\"\"\n    Descriptor class for exec_host attribute\n\n    :param hosts: List of hosts in the exec_host. Each entry is\n                  a host info dictionary that maps the number of\n                  cpus and its task number\n    :type hosts: List\n    \"\"\"\n\n    def __init__(self, s=None):\n        if s is None:\n            return None\n\n        self._as_str = s\n\n        self.hosts = []\n        hsts = s.split('+')\n        for h in hsts:\n            hi = {}\n            ti = {}\n            (host, task) = h.split('/',)\n            d = task.split('*')\n            if len(d) == 1:\n                taskslot = d[0]\n                ncpus = 1\n            elif len(d) == 2:\n                (taskslot, ncpus) = d\n            else:\n                (taskslot, ncpus) = (0, 1)\n            ti['task'] = taskslot\n            ti['ncpus'] = ncpus\n            hi[host] = ti\n            self.hosts.append(hi)\n\n    def __repr__(self):\n        return str(self.hosts)\n\n    def __str__(self):\n        return self._as_str\n\n\nclass PbsTypeJobId(str):\n\n    \"\"\"\n    Descriptor class for a Job identifier\n\n    :param id: The numeric portion of a job identifier\n    :type id: int\n    :param server_name: The pbs server name\n    :type server_name: str\n    :param server_shortname: The first portion of a FQDN server\n                             name\n    :type server_shortname: str\n    \"\"\"\n\n    def __init__(self, value=None):\n        if value is None:\n            return\n\n        self.value = value\n\n        r = value.split('.', 1)\n        if len(r) != 2:\n            return\n\n        self.id = int(r[0])\n        self.server_name = r[1]\n        self.server_shortname = r[1].split('.', 1)[0]\n\n    def __str__(self):\n        return str(self.value)\n\n\nclass 
PbsTypeFGCLimit(object):\n\n    \"\"\"\n    FGC limit entry, of the form:\n\n    ``<limtype>[.<resource>]=[<entity_type>:<entity_name>=\n    <entity_value>]``\n\n    :param attr: FGC limit attribute\n    :type attr: str\n    :param value: Value of attribute\n    :type value: int\n    :returns: FGC limit entry of given format\n    \"\"\"\n\n    fgc_attr_pat = re.compile(r\"(?P<ltype>[a-z_]+)[\\.]*(?P<resource>[\\w\\d-]*)\")\n    fgc_val_pat = re.compile(r\"[\\s]*\\[(?P<etype>[ugpo]):(?P<ename>[\\w\\d-]+)\"\n                             r\"=(?P<eval>[\\d]+)\\][\\s]*\")\n\n    def __init__(self, attr, val):\n\n        self.attr = attr\n        self.val = val\n\n        a = self.fgc_attr_pat.match(attr)\n        if a:\n            self.limit_type = a.group('ltype')\n            self.resource_name = a.group('resource')\n        else:\n            self.limit_type = None\n            self.resource_name = None\n\n        v = self.fgc_val_pat.match(val)\n        if v:\n            self.lim_value = PbsAttribute.decode_value(v.group('eval'))\n            self.entity_type = v.group('etype')\n            self.entity_name = v.group('ename')\n        else:\n            self.lim_value = None\n            self.entity_type = None\n            self.entity_name = None\n\n    def __val__(self):\n        return ('[' + str(self.entity_type) + ':' +\n                str(self.entity_name) + '=' + str(self.lim_value) + ']')\n\n    def __str__(self):\n        return (self.attr + ' = ' + self.__val__())\n\n\nclass PbsTypeAttribute(dict):\n\n    \"\"\"\n    Experimental. This is a placeholder object that will be used\n    in the future to map attribute information and circumvent\n    the error-prone dynamic type detection that is currently done\n    using ``decode_value()``\n    \"\"\"\n\n    def __getitem__(self, name):\n        return PbsAttribute.decode_value(super(PbsTypeAttribute,\n                                               self).__getitem__(name))\n"
  },
  {
    "path": "test/fw/ptl/lib/ptl_wrappers.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport ast\nimport base64\nimport collections\nimport copy\nimport datetime\nimport grp\nimport json\nimport logging\nimport os\nimport pickle\nimport pwd\nimport random\nimport re\nimport socket\nimport string\nimport sys\nimport tempfile\nimport threading\nimport time\nimport traceback\nfrom collections import OrderedDict\nfrom distutils.version import LooseVersion\nfrom operator import itemgetter\n\nfrom ptl.lib.pbs_api_to_cli import api_to_cli\nfrom ptl.lib.ptl_batchutils import BatchUtils\nfrom ptl.utils.pbs_cliutils import CliUtils\nfrom ptl.utils.pbs_dshutils import DshUtils, PtlUtilError, get_method_name\nfrom ptl.utils.pbs_procutils import ProcUtils\nfrom ptl.utils.pbs_testusers import (ROOT_USER, TEST_USER, PbsUser,\n                                     DAEMON_SERVICE_USER)\n\ntry:\n    import psycopg2\n    PSYCOPG = True\nexcept Exception:\n    PSYCOPG = False\nfrom ptl.lib.ptl_error import (PbsStatusError, PbsSubmitError,\n                               PbsDeljobError, PbsDelresvError,\n                               PbsDeleteError, PbsSelectError,\n                               PbsManagerError, PbsSignalError,\n                               PbsAlterError, PbsHoldError,\n                               PbsRerunError, PbsOrderError,\n                               PbsRunError, PbsMoveError,\n                               PbsQtermError, 
PbsQdisableError,\n                               PbsQenableError, PbsQstartError,\n                               PbsQstopError, PbsResourceError,\n                               PbsResvAlterError, PtlExpectError,\n                               PbsConnectError, PbsServiceError,\n                               PbsInitServicesError, PbsMessageError,\n                               PtlLogMatchError)\nfrom ptl.lib.ptl_types import PbsAttribute\nfrom ptl.lib.ptl_constants import *\nfrom ptl.lib.ptl_entities import (Hook, Queue, Entity, Limit,\n                                  EquivClass, Resource)\nfrom ptl.lib.ptl_resourceresv import Job, Reservation, InteractiveJob\nfrom ptl.lib.ptl_sched import Scheduler\nfrom ptl.lib.ptl_mom import MoM, get_mom_obj\nfrom ptl.lib.ptl_service import PBSService, PBSInitServices\nfrom ptl.lib.ptl_expect_action import ExpectActions\ntry:\n    from nose.plugins.skip import SkipTest\nexcept ImportError:\n    class SkipTest(Exception):\n        pass\n\ntry:\n    from collections.abc import Callable  # Python 3.10+\nexcept ImportError:\n    from collections import Callable  # For Python versions before 3.10\n\n\nclass Wrappers(PBSService):\n    dflt_attributes = {\n        ATTR_dfltque: \"workq\",\n        ATTR_nodefailrq: \"310\",\n        ATTR_FlatUID: 'True',\n        ATTR_DefaultChunk + \".ncpus\": \"1\",\n    }\n\n    dflt_sched_name = 'default'\n\n    # this pattern is a bit relaxed to match common developer build numbers\n    version_tag = re.compile(r\"[a-zA-Z_]*(?P<version>[\\d\\.]+.[\\w\\d\\.]*)[\\s]*\")\n\n    actions = ExpectActions()\n\n    # these server attributes revert back to default value when unset\n    __special_attr_keys = {SERVER: [ATTR_scheduling, ATTR_logevents,\n                                    ATTR_mailfrom, ATTR_queryother,\n                                    ATTR_rescdflt + '.ncpus', ATTR_schedit,\n                                    ATTR_ResvEnable, ATTR_maxarraysize,\n                                    
ATTR_license_min, ATTR_license_max,\n                                    ATTR_license_linger,\n                                    ATTR_EligibleTimeEnable,\n                                    ATTR_max_concurrent_prov],\n                           SCHED: [ATTR_sched_cycle_len, ATTR_scheduling,\n                                   ATTR_schedit, ATTR_logevents,\n                                   ATTR_sched_server_dyn_res_alarm,\n                                   ATTR_SchedHost,\n                                   'preempt_prio', 'preempt_queue_prio',\n                                   'throughput_mode', 'job_run_wait',\n                                   'partition', 'sched_priv', 'sched_log'],\n                           NODE: [ATTR_rescavail + '.ncpus'],\n                           HOOK: [ATTR_HOOK_type,\n                                  ATTR_HOOK_enable,\n                                  ATTR_HOOK_event,\n                                  ATTR_HOOK_alarm,\n                                  ATTR_HOOK_order,\n                                  ATTR_HOOK_debug,\n                                  ATTR_HOOK_fail_action,\n                                  ATTR_HOOK_user]}\n\n    __special_attr = {SERVER: None,\n                      SCHED: None,\n                      NODE: None,\n                      HOOK: None}\n\n    def __init__(self, name=None, attrs={}, defaults={}, pbsconf_file=None,\n                 snapmap={}, snap=None, client=None, client_pbsconf_file=None,\n                 db_access=None, stat=True):\n        self.jobs = {}\n        self.nodes = {}\n        self.reservations = {}\n        self.queues = {}\n        self.resources = {}\n        self.hooks = {}\n        self.pbshooks = {}\n        self.entities = {}\n        self.schedulers = {}\n        self.version = None\n        self.default_queue = None\n        self.last_error = []  # type: array. Set for CLI IFL errors. Not reset\n        self.last_out = []  # type: array. Set for CLI IFL output. 
Not reset\n        self.last_rc = None  # Set for CLI IFL return code. Not thread-safe\n        self.moms = {}\n\n        # default timeout on connect/disconnect set to 60s to mimic the qsub\n        # buffer introduced in PBS 11\n        self._conn_timeout = 60\n        self._conn_timer = None\n        self._conn = None\n        self._db_conn = None\n        self.current_user = pwd.getpwuid(os.getuid())[0]\n\n        if len(defaults.keys()) == 0:\n            defaults = self.dflt_attributes\n\n        self.pexpect_timeout = 15\n        self.pexpect_sleep_time = .1\n        super().__init__(name, attrs, defaults, pbsconf_file, snapmap,\n                         snap)\n        _m = ['server ', self.shortname]\n        if pbsconf_file is not None:\n            _m += ['@', pbsconf_file]\n        _m += [': ']\n        self.logprefix = \"\".join(_m)\n        self.pi = PBSInitServices(hostname=self.hostname,\n                                  conf=self.pbs_conf_file)\n        self.set_client(client)\n\n        if client_pbsconf_file is None:\n            self.client_pbs_conf_file = self.du.get_pbs_conf_file(self.client)\n        else:\n            self.client_pbs_conf_file = client_pbsconf_file\n\n        self.client_conf = self.du.parse_pbs_config(\n            self.client, file=self.client_pbs_conf_file)\n\n        if self.client_pbs_conf_file == '/etc/pbs.conf':\n            self.default_client_pbs_conf = True\n        elif (('PBS_CONF_FILE' not in os.environ) or\n              (os.environ['PBS_CONF_FILE'] != self.client_pbs_conf_file)):\n            self.default_client_pbs_conf = False\n        else:\n            self.default_client_pbs_conf = True\n\n        a = {}\n        if os.getuid() == 0:\n            a = {ATTR_aclroot: 'root'}\n        self.dflt_attributes.update(a)\n\n        if not API_OK:\n            # mode must be set before the first stat call\n            self.set_op_mode(PTL_CLI)\n\n        if stat:\n            try:\n                tmp_attrs = 
self.status(SERVER, level=logging.DEBUG,\n                                        db_access=db_access)\n            except (PbsConnectError, PbsStatusError):\n                tmp_attrs = None\n\n            if tmp_attrs is not None and len(tmp_attrs) > 0:\n                self.attributes = tmp_attrs[0]\n\n            if ATTR_dfltque in self.attributes:\n                self.default_queue = self.attributes[ATTR_dfltque]\n\n            self.update_version_info()\n\n    def update_version_info(self):\n        \"\"\"\n        Update the version information.\n        \"\"\"\n        if ATTR_version not in self.attributes:\n            self.attributes[ATTR_version] = 'unknown'\n        else:\n            m = self.version_tag.match(self.attributes[ATTR_version])\n            if m:\n                v = m.group('version')\n                self.version = LooseVersion(v)\n        self.logger.info(self.logprefix + 'version ' +\n                         self.attributes[ATTR_version])\n\n    def set_client(self, name=None):\n        \"\"\"\n        Set server client\n\n        :param name: Client name\n        :type name: str\n        \"\"\"\n        if name is None:\n            self.client = socket.gethostname()\n        else:\n            self.client = name\n\n    def get_op_mode(self):\n        \"\"\"\n        Returns operating mode for calls to the PBS server.\n        Currently, two modes are supported, either the ``API``\n        or the ``CLI``. 
Default is ``API``\n        \"\"\"\n        if (not API_OK or (self.ptl_conf['mode'] == PTL_CLI)):\n            return PTL_CLI\n        return PTL_API\n\n    def set_connect_timeout(self, timeout=0):\n        \"\"\"\n        Set server connection timeout\n        :param timeout: Timeout value\n        :type timeout: int\n        \"\"\"\n        self._conn_timeout = timeout\n\n    def set_op_mode(self, mode):\n        \"\"\"\n        Set operating mode to one of either ``PTL_CLI`` or\n        ``PTL_API``. Returns the mode that was set, which can\n        be different from the value requested, for example, if\n        requesting to set ``PTL_API``, in the absence of the\n        appropriate SWIG wrappers, the library will fall back to\n        ``CLI``, or if requesting ``PTL_CLI`` and there is no\n        ``PBS_EXEC`` on the system, None is returned.\n        :param mode: Operating mode\n        :type mode: str\n        \"\"\"\n        if mode == PTL_API:\n            if self._conn is None or self._conn < 0:\n                self._conn = None\n            if not API_OK:\n                self.logger.error(self.logprefix +\n                                  'API submission is not available')\n                return PTL_CLI\n        elif mode == PTL_CLI:\n            if ((not self.has_snap) and\n                not os.path.isdir(os.path.join(self.client_conf['PBS_EXEC'],\n                                               'bin'))):\n                self.logger.error(self.logprefix +\n                                  'PBS commands are not available')\n                return None\n        else:\n            self.logger.error(self.logprefix + \"Unrecognized operating mode\")\n            return None\n\n        self.ptl_conf['mode'] = mode\n        self.logger.info(self.logprefix + 'server operating mode set to ' +\n                         mode)\n        return mode\n\n    def update_special_attr(self, obj_type, id=None):\n        \"\"\"\n        Update special 
attributes (__special_attr) dictionary\n        :param obj_type: The type of object to update attribute values\n                         in special attribute dictionary.\n        :type obj_type: str\n        :param id: The id of the object to act upon\n        :type id: str\n        \"\"\"\n        if not id:\n            if obj_type in (SERVER, NODE):\n                id = self.hostname\n            elif obj_type == SCHED:\n                id = 'default'\n        id_attr_dict = {}\n        obj_stat = self.status(obj_type, id=id)[0]\n        for key in obj_stat.keys():\n            if key in self.__special_attr_keys[obj_type]:\n                id_attr_dict[key] = obj_stat[key]\n\n        id_attr = {id: id_attr_dict}\n        self.__special_attr[obj_type] = id_attr\n\n    def get_special_attr_val(self, obj_type, attr, id=None):\n        \"\"\"\n        Get value for given attribute from\n        special attributes (__special_attr) dictionary\n        :param obj_type: The type of object to look up in the special\n                         attribute dictionary.\n        :type obj_type: str\n        :param attr: The attribute for which the value is requested.\n        :type attr: str\n        :param id: The id of the object to act upon\n        :type id: str\n        \"\"\"\n\n        if not id:\n            if obj_type in (SERVER, NODE):\n                id = self.hostname\n            elif obj_type == SCHED:\n                id = 'default'\n        res_val = ATTR_rescavail + '.ncpus'\n        if obj_type in (NODE, VNODE) and attr == res_val:\n            obj_stat = self.status(obj_type, id=id)[0]\n            if 'pcpus' not in obj_stat.keys():\n                return 1\n            else:\n                return self.__special_attr[obj_type][id][attr]\n        elif obj_type == HOOK and (id == 'pbs_cgroups' and attr == 'freq'):\n            return 120\n        else:\n            return self.__special_attr[obj_type][id][attr]\n\n    def _filter(self, obj_type=None, 
attrib=None, id=None, extend=None,\n                op=None, attrop=None, bslist=None, mode=PTL_COUNTER,\n                idonly=True, grandtotal=False, db_access=None, runas=None,\n                resolve_indirectness=False, level=logging.DEBUG):\n\n        if bslist is None:\n            try:\n                _a = resolve_indirectness\n                tmp_bsl = self.status(obj_type, attrib, id,\n                                      level=level, extend=extend,\n                                      db_access=db_access, runas=runas,\n                                      resolve_indirectness=_a)\n                del _a\n            except PbsStatusError:\n                return None\n\n            bslist = self.utils.filter_batch_status(tmp_bsl, attrib)\n            del tmp_bsl\n\n        if bslist is None:\n            return None\n\n        if isinstance(attrib, str):\n            attrib = attrib.split(',')\n\n        total = {}\n        for bs in bslist:\n            if isinstance(attrib, list):\n                # when filtering on multiple values, ensure that they are\n                # all present on the object, otherwise skip\n                if attrop == PTL_AND:\n                    match = True\n                    for k in attrib:\n                        if k not in bs:\n                            match = False\n                    if not match:\n                        continue\n\n                for a in attrib:\n                    if a in bs:\n                        if op == SET:\n                            k = a\n                        else:\n                            # Since this is a list of attributes, no operator\n                            # was provided so we settle on \"equal\"\n                            k = a + '=' + str(bs[a])\n                        if mode == PTL_COUNTER:\n                            amt = 1\n                            if grandtotal:\n                                amt = PbsAttribute.decode_value(bs[a])\n     
                           if not isinstance(amt, (int, float)):\n                                    amt = 1\n                                if a in total:\n                                    total[a] += amt\n                                else:\n                                    total[a] = amt\n                            else:\n                                if k in total:\n                                    total[k] += amt\n                                else:\n                                    total[k] = amt\n                        elif mode == PTL_FILTER:\n                            if k in total:\n                                if idonly:\n                                    total[k].append(bs['id'])\n                                else:\n                                    total[k].append(bs)\n                            else:\n                                if idonly:\n                                    total[k] = [bs['id']]\n                                else:\n                                    total[k] = [bs]\n                        else:\n                            self.logger.error(\"Unhandled mode \" + str(mode))\n                            return None\n\n            elif isinstance(attrib, dict):\n                tmptotal = {}  # The running count that will be used for total\n\n                # when filtering on multiple values, ensure that they are\n                # all present on the object, otherwise skip\n                match = True\n                for k, v in attrib.items():\n                    if k not in bs:\n                        match = False\n                        if attrop == PTL_AND:\n                            break\n                        else:\n                            continue\n                    amt = PbsAttribute.decode_value(bs[k])\n                    if isinstance(v, tuple):\n                        op = v[0]\n                        val = PbsAttribute.decode_value(v[1])\n                    
elif op == SET:\n                        val = None\n                        pass\n                    else:\n                        op = EQ\n                        val = PbsAttribute.decode_value(v)\n\n                    if ((op == LT and amt < val) or\n                            (op == LE and amt <= val) or\n                            (op == EQ and amt == val) or\n                            (op == GE and amt >= val) or\n                            (op == GT and amt > val) or\n                            (op == NE and amt != val) or\n                            (op == MATCH and str(amt).find(str(val)) != -1) or\n                            (op == MATCH_RE and\n                             re.search(str(val), str(amt))) or\n                            (op == SET)):\n                        # There is a match, proceed to track the attribute\n                        self._filter_helper(bs, k, val, amt, op, mode,\n                                            tmptotal, idonly, grandtotal)\n                    elif attrop == PTL_AND:\n                        match = False\n                        if mode == PTL_COUNTER:\n                            # requesting specific key/value pairs should result\n                            # in 0 available elements\n                            tmptotal[str(k) + PTL_OP_TO_STR[op] + str(val)] = 0\n                        break\n                    elif mode == PTL_COUNTER:\n                        tmptotal[str(k) + PTL_OP_TO_STR[op] + str(val)] = 0\n\n                if attrop != PTL_AND or (attrop == PTL_AND and match):\n                    for k, v in tmptotal.items():\n                        if k not in total:\n                            total[k] = v\n                        else:\n                            total[k] += v\n        return total\n\n    def _filter_helper(self, bs, k, v, amt, op, mode, total, idonly,\n                       grandtotal):\n        # default operation to '='\n        if op is None or op not in 
PTL_OP_TO_STR:\n            op = '='\n        op_str = PTL_OP_TO_STR[op]\n\n        if op == SET:\n            # override PTL_OP_TO_STR for SET operations\n            op_str = ''\n            v = ''\n\n        ky = k + op_str + str(v)\n        if mode == PTL_COUNTER:\n            incr = 1\n            if grandtotal:\n                if not isinstance(amt, (int, float)):\n                    incr = 1\n                else:\n                    incr = amt\n            if ky in total:\n                total[ky] += incr\n            else:\n                total[ky] = incr\n        elif mode == PTL_FILTER:\n            if ky in total:\n                if idonly:\n                    total[ky].append(bs['id'])\n                else:\n                    total[ky].append(bs)\n            else:\n                if idonly:\n                    total[ky] = [bs['id']]\n                else:\n                    total[ky] = [bs]\n\n    def _connect(self, hostname, attempt=1):\n        if ((self._conn is None or self._conn < 0) or\n                (self._conn_timeout == 0 or self._conn_timer is None)):\n            self._conn = pbs_connect(hostname)\n            self._conn_timer = time.time()\n\n        if self._conn is None or self._conn < 0:\n            if attempt > 5:\n                m = self.logprefix + 'unable to connect'\n                raise PbsConnectError(rv=None, rc=-1, msg=m)\n            else:\n                self._disconnect(self._conn, force=True)\n                time.sleep(1)\n                return self._connect(hostname, attempt + 1)\n\n        return self._conn\n\n    def _disconnect(self, conn, force=False):\n        \"\"\"\n        Disconnect a connection to a Server.\n        For performance of the API calls, a connection is\n        maintained up to _conn_timer, unless the force parameter\n        is set to True\n        :param conn: Server connection\n        :param force: If True then disconnect forcefully\n        :type force: bool\n        
\"\"\"\n        if ((conn is not None and conn >= 0) and\n            (force or\n             (self._conn_timeout == 0 or\n              (self._conn_timer is not None and\n               (time.time() - self._conn_timer > self._conn_timeout))))):\n            pbs_disconnect(conn)\n            self._conn_timer = None\n            self._conn = None\n\n    def update_attributes(self, obj_type, bs, overwrite=False):\n        \"\"\"\n        Populate objects from batch status data\n        \"\"\"\n        if bs is None:\n            return\n\n        for binfo in bs:\n            if 'id' not in binfo:\n                continue\n            id = binfo['id']\n            obj = None\n            if obj_type == JOB:\n                if ATTR_owner in binfo:\n                    user = binfo[ATTR_owner].split('@')[0]\n                else:\n                    user = None\n                if id in self.jobs:\n                    if overwrite:\n                        self.jobs[id].attributes = copy.deepcopy(binfo)\n                    else:\n                        self.jobs[id].attributes.update(binfo)\n                    if self.jobs[id].username != user:\n                        self.jobs[id].username = user\n                else:\n                    self.jobs[id] = Job(user, binfo)\n                obj = self.jobs[id]\n            elif obj_type in (VNODE, NODE):\n                if id in self.nodes:\n                    if overwrite:\n                        self.nodes[id].attributes = copy.deepcopy(binfo)\n                    else:\n                        self.nodes[id].attributes.update(binfo)\n                else:\n                    if 'Mom' in binfo:\n                        self.nodes[id] = get_mom_obj(self, binfo['Mom'], binfo,\n                                                     snapmap={NODE: None})\n                    else:\n                        self.nodes[id] = get_mom_obj(self, id, binfo,\n                                                     
snapmap={NODE: None})\n                obj = self.nodes[id]\n            elif obj_type == SERVER:\n                if overwrite:\n                    self.attributes = copy.deepcopy(binfo)\n                else:\n                    self.attributes.update(binfo)\n                obj = self\n            elif obj_type == QUEUE:\n                if id in self.queues:\n                    if overwrite:\n                        self.queues[id].attributes = copy.deepcopy(binfo)\n                    else:\n                        self.queues[id].attributes.update(binfo)\n                else:\n                    self.queues[id] = Queue(id, binfo, server=self)\n                obj = self.queues[id]\n            elif obj_type == RESV:\n                if id in self.reservations:\n                    if overwrite:\n                        self.reservations[id].attributes = copy.deepcopy(binfo)\n                    else:\n                        self.reservations[id].attributes.update(binfo)\n                else:\n                    self.reservations[id] = Reservation(id, binfo)\n                obj = self.reservations[id]\n            elif obj_type == HOOK:\n                if id in self.hooks:\n                    if overwrite:\n                        self.hooks[id].attributes = copy.deepcopy(binfo)\n                    else:\n                        self.hooks[id].attributes.update(binfo)\n                else:\n                    self.hooks[id] = Hook(id, binfo, server=self)\n                obj = self.hooks[id]\n            elif obj_type == PBS_HOOK:\n                if id in self.pbshooks:\n                    if overwrite:\n                        self.pbshooks[id].attributes = copy.deepcopy(binfo)\n                    else:\n                        self.pbshooks[id].attributes.update(binfo)\n                else:\n                    self.pbshooks[id] = Hook(id, binfo, server=self)\n                obj = self.pbshooks[id]\n            elif obj_type == SCHED:\n    
            if id in self.schedulers:\n                    if overwrite:\n                        self.schedulers[id].attributes = copy.deepcopy(binfo)\n                    else:\n                        self.schedulers[id].attributes.update(binfo)\n                    if 'sched_priv' in binfo:\n                        self.schedulers[id].setup_sched_priv(\n                            binfo['sched_priv'])\n                else:\n                    if 'sched_host' not in binfo:\n                        hostname = self.hostname\n                    else:\n                        hostname = binfo['sched_host']\n                    if SCHED in self.snapmap:\n                        snap = self.snap\n                        snapmap = self.snapmap\n                    else:\n                        snap = None\n                        snapmap = {}\n                    spriv = None\n                    if 'sched_priv' in binfo:\n                        spriv = binfo['sched_priv']\n                    self.schedulers[id] = Scheduler(server=self,\n                                                    hostname=hostname,\n                                                    snap=snap,\n                                                    snapmap=snapmap,\n                                                    id=id,\n                                                    sched_priv=spriv)\n                    if overwrite:\n                        self.schedulers[id].attributes = copy.deepcopy(binfo)\n                    else:\n                        self.schedulers[id].attributes.update(binfo)\n                obj = self.schedulers[id]\n\n            elif obj_type == RSC:\n                if id in self.resources:\n                    if overwrite:\n                        self.resources[id].attributes = copy.deepcopy(binfo)\n                    else:\n                        self.resources[id].attributes.update(binfo)\n                else:\n                    rtype = None\n        
rflag = None\n                    if 'type' in binfo:\n                        rtype = binfo['type']\n                    if 'flag' in binfo:\n                        rflag = binfo['flag']\n                    self.resources[id] = Resource(id, rtype, rflag)\n\n            if obj is not None:\n                self.utils.update_attributes_list(obj)\n                obj.__dict__.update(binfo)\n\n    def pbs_api_as(self, cmd=None, obj=None, user=None, **kwargs):\n        \"\"\"\n        Generic handler to run an ``API`` call impersonating\n        a given user. This method is only used for impersonation\n        over the ``API`` because ``CLI`` impersonation takes place\n        through the generic ``DshUtils`` run_cmd mechanism.\n        :param cmd: PBS command\n        :type cmd: str or None\n        :param user: PBS user or current user\n        :type user: str or None\n        :raises: eval\n        \"\"\"\n        fn = None\n        objid = None\n        _data = None\n\n        if user is None:\n            user = self.du.get_current_user()\n        else:\n            # user may be a PbsUser object, cast it to string for the remainder\n            # of the function\n            user = str(user)\n\n        if cmd == 'submit':\n            if obj is None:\n                return None\n\n            _data = copy.copy(obj)\n            # the following attributes cause problems 'pickling',\n            # since they are not needed we unset them\n            _data.attrl = None\n            _data.attropl = None\n            _data.logger = None\n            _data.utils = None\n\n        elif cmd in ('alterjob', 'holdjob', 'sigjob', 'msgjob', 'rlsjob',\n                     'rerunjob', 'orderjob', 'runjob', 'movejob',\n                     'select', 'delete', 'status', 'manager', 'terminate',\n                     'deljob', 'delresv', 'alterresv'):\n            objid = obj\n            if 'data' in kwargs:\n                _data = kwargs['data']\n\n        if 
_data is not None:\n            fn = self.du.create_temp_file()\n            with open(fn, 'w+b') as tmpfile:\n                pickle.dump(_data, tmpfile)\n\n            os.chmod(fn, 0o755)\n\n            if self._is_local:\n                os.chdir(tempfile.gettempdir())\n            else:\n                self.du.run_copy(self.hostname, src=fn, dest=fn)\n\n        if not self._is_local:\n            p_env = '\"import os; print(os.environ[\\'PTL_EXEC\\'])\"'\n            ret = self.du.run_cmd(self.hostname, ['python3', '-c', p_env],\n                                  logerr=False)\n            if ret['out']:\n                runcmd = [os.path.join(ret['out'][0], 'pbs_as')]\n            else:\n                runcmd = ['pbs_as']\n        elif 'PTL_EXEC' in os.environ:\n            runcmd = [os.path.join(os.environ['PTL_EXEC'], 'pbs_as')]\n        else:\n            runcmd = ['pbs_as']\n\n        runcmd += ['-c', cmd, '-u', user]\n\n        if objid is not None:\n            runcmd += ['-o']\n            if isinstance(objid, list):\n                runcmd += [','.join(objid)]\n            else:\n                runcmd += [objid]\n\n        if fn is not None:\n            runcmd += ['-f', fn]\n\n        if 'hostname' in kwargs:\n            hostname = kwargs['hostname']\n        else:\n            hostname = self.hostname\n        runcmd += ['-s', hostname]\n\n        if 'extend' in kwargs and kwargs['extend'] is not None:\n            runcmd += ['-e', kwargs['extend']]\n\n        ret = self.du.run_cmd(self.hostname, runcmd, logerr=False, runas=user)\n        out = ret['out']\n        if ret['err']:\n            if cmd in CMD_ERROR_MAP:\n                m = CMD_ERROR_MAP[cmd]\n                if m in ret['err'][0]:\n                    if fn is not None:\n                        os.remove(fn)\n                        if not self._is_local:\n                            self.du.rm(self.hostname, fn)\n                    raise eval(str(ret['err'][0]))\n            
self.logger.debug(\"<\" + get_method_name(self) + '>err: ' +\n                              str(ret['err']))\n\n        if fn is not None:\n            os.remove(fn)\n            if not self._is_local:\n                self.du.rm(self.hostname, fn)\n\n        if cmd == 'submit':\n            if out:\n                return out[0].strip()\n            else:\n                return None\n        elif cmd in ('alterjob', 'holdjob', 'sigjob', 'msgjob', 'rlsjob',\n                     'rerunjob', 'orderjob', 'runjob', 'movejob', 'delete',\n                     'terminate', 'alterresv'):\n            if ret['out']:\n                return int(ret['out'][0])\n            else:\n                return 1\n\n        elif cmd in ('manager', 'select', 'status'):\n            return eval(out[0])\n\n    def logit(self, msg, obj_type, attrib, id, level=logging.INFO):\n        \"\"\"\n        Generic logging routine for ``IFL`` commands\n        :param msg: The message to log\n        :type msg: str\n        :param obj_type: object type, i.e *\n        :param attrib: attributes to log\n        :param id: name of object to log\n        :type id: str or list\n        :param level: log level, defaults to ``INFO``\n        \"\"\"\n        s = []\n        if self.logger is not None:\n            if obj_type is None:\n                obj_type = MGR_OBJ_NONE\n            s = [msg + PBS_OBJ_MAP[obj_type]]\n            if id:\n                if isinstance(id, list):\n                    s += [' ' + \",\".join(id)]\n                else:\n                    s += [' ' + str(id)]\n            if attrib:\n                s += [' ' + str(attrib)]\n            self.logger.log(level, \"\".join(s))\n\n    def status(self, obj_type=SERVER, attrib=None, id=None,\n               extend=None, level=logging.INFO, db_access=None, runas=None,\n               resolve_indirectness=False, logerr=True):\n        \"\"\"\n        Stat any PBS object ``[queue, server, node, hook, job,\n        resv, 
sched]``. If the Server is setup from snap input,\n        see snap or snapmap member, the status calls are routed\n        directly to the data on files from snap.\n        The server can be queried either through the 'qstat'\n        command line tool or through the wrapped PBS IFL api,\n        see set_op_mode.\n        Return a dictionary representation of a batch status object\n        raises ``PbsStatusError`` on error.\n        :param obj_type: The type of object to query, one of the *\n                         objects. Default: SERVER\n        :param attrib: Attributes to query, can be a string, a\n                       list, a dictionary. Default is to query all\n                       attributes.\n        :type attrib: str or list or dictionary\n        :param id: An optional id, the name of the object to status\n        :type id: str\n        :param extend: Optional extension to the IFL call\n        :param level: The logging level, defaults to INFO\n        :type level: int\n        :param db_access: set to either file containing credentials\n                           to DB access or dictionary containing\n                           ``{'dbname':...,'user':...,'port':...}``\n        :type db_access: str or dictionary\n        :param runas: run stat as user\n        :type runas: str\n        :param resolve_indirectness: If True resolves indirect node\n                                     resources values\n        :type resolve_indirectness: bool\n        :param logerr: If True (default) logs run_cmd errors\n        :type logerr: bool\n        In addition to standard IFL stat call, this wrapper handles\n        a few cases that aren't implicitly offered by pbs_stat*,\n        those are for Hooks, Resources, and a formula evaluation.\n        \"\"\"\n\n        prefix = 'status on ' + self.shortname\n        if runas:\n            prefix += ' as ' + str(runas)\n        prefix += ': '\n        self.logit(prefix, obj_type, attrib, id, level)\n\n        bs = 
None\n        bsl = []\n        freebs = False\n        # 2 - Special handling for gathering the job formula value.\n        if attrib is not None and PTL_FORMULA in attrib:\n            if (((isinstance(attrib, list) or isinstance(attrib, dict)) and\n                 (len(attrib) == 1)) or\n                    (isinstance(attrib, str) and len(attrib.split(',')) == 1)):\n                bsl = self.status(\n                    JOB, 'Resource_List.select', id=id, extend='t')\n            if self.schedulers[self.dflt_sched_name] is None:\n                self.schedulers[self.dflt_sched_name] = Scheduler(\n                    self, name=self.hostname)\n            _prev_events = self.status(SCHED, 'log_events',\n                                       id=self.dflt_sched_name)[0]['log_events']\n\n            # Job sort formula events are logged at DEBUG2 (256)\n            if not int(_prev_events) & 256:\n                self.manager(MGR_CMD_SET, SCHED, {'log_events': 2047},\n                             id=self.dflt_sched_name)\n            self.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n            if id is None:\n                _formulas = self.schedulers[self.dflt_sched_name].job_formula()\n            else:\n                _formulas = {\n                    id: self.schedulers[\n                        self.dflt_sched_name].job_formula(\n                        jobid=id)\n                }\n            if not int(_prev_events) & 256:\n                self.manager(MGR_CMD_SET, SCHED, {'log_events': _prev_events},\n                             id=self.dflt_sched_name)\n            if len(bsl) == 0:\n                bsl = [{'id': id}]\n            for _b in bsl:\n                if _b['id'] in _formulas:\n                    _b[PTL_FORMULA] = _formulas[_b['id']]\n            return bsl\n\n        # 3- Serve data from database if requested... 
and available for the\n        # given object type\n        if db_access and obj_type in (SERVER, SCHED, NODE, QUEUE, RESV, JOB):\n            bsl = self.status_db(obj_type, attrib, id, db_access=db_access,\n                                 logerr=logerr)\n\n        # 4- Serve data from snap files\n        elif obj_type in self.snapmap:\n            if obj_type in (HOOK, PBS_HOOK):\n                for f in self.snapmap[obj_type]:\n                    _b = self.utils.file_to_dictlist(f, attrib)\n                    if _b and 'hook_name' in _b[0]:\n                        _b[0]['id'] = _b[0]['hook_name']\n                    else:\n                        _b[0]['id'] = os.path.basename(f)\n                    if id is None or id == _b[0]['id']:\n                        bsl.extend(_b)\n            else:\n                bsl = self.utils.file_to_dictlist(self.snapmap[obj_type],\n                                                  attrib, id=id)\n        # 6- Stat using PBS CLI commands\n        elif self.get_op_mode() == PTL_CLI:\n            tgt = self.client\n            if obj_type in (JOB, QUEUE, SERVER):\n                pcmd = [os.path.join(\n                        self.client_conf['PBS_EXEC'],\n                        'bin',\n                        'qstat')]\n\n                if extend:\n                    pcmd += ['-' + extend]\n\n                if obj_type == JOB:\n                    pcmd += ['-f']\n                    if id:\n                        pcmd += [id]\n                    else:\n                        pcmd += ['@' + self.hostname]\n                elif obj_type == QUEUE:\n                    pcmd += ['-Qf']\n                    if id:\n                        if '@' not in id:\n                            pcmd += [id + '@' + self.hostname]\n                        else:\n                            pcmd += [id]\n                    else:\n                        pcmd += ['@' + self.hostname]\n                elif obj_type == SERVER:\n         
           pcmd += ['-Bf', self.hostname]\n\n            elif obj_type in (NODE, VNODE, HOST):\n                pcmd = [os.path.join(self.client_conf['PBS_EXEC'], 'bin',\n                                     'pbsnodes')]\n                pcmd += ['-s', self.hostname]\n                if obj_type in (NODE, VNODE):\n                    pcmd += ['-v']\n                if obj_type == HOST:\n                    pcmd += ['-H']\n                if id:\n                    pcmd += [id]\n                else:\n                    pcmd += ['-a']\n            elif obj_type == RESV:\n                pcmd = [os.path.join(self.client_conf['PBS_EXEC'], 'bin',\n                                     'pbs_rstat')]\n                pcmd += ['-f']\n                if id:\n                    pcmd += [id]\n            elif obj_type in (SCHED, PBS_HOOK, HOOK, RSC):\n                try:\n                    rc = self.manager(MGR_CMD_LIST, obj_type, attrib, id,\n                                      runas=runas, level=level, logerr=logerr)\n                except PbsManagerError as e:\n                    rc = e.rc\n                    # PBS bug, no hooks yields a return code of 1, we ignore\n                    if obj_type != HOOK:\n                        raise PbsStatusError(\n                            rc=rc, rv=[], msg=self.geterrmsg())\n                if rc == 0:\n                    if obj_type == HOOK:\n                        o = self.hooks\n                    elif obj_type == PBS_HOOK:\n                        o = self.pbshooks\n                    elif obj_type == SCHED:\n                        o = self.schedulers\n                    elif obj_type == RSC:\n                        o = self.resources\n                    if id:\n                        if id in o:\n                            return [o[id].attributes]\n                        else:\n                            return None\n                    return [h.attributes for h in o.values()]\n                return 
[]\n\n            else:\n                self.logger.error(self.logprefix + \"unrecognized object type\")\n                raise PbsStatusError(rc=-1, rv=[],\n                                     msg=\"unrecognized object type\")\n\n            # as_script is used to circumvent some shells that will not pass\n            # along environment variables when invoking a command through sudo\n            if not self.default_client_pbs_conf:\n                pcmd = ['PBS_CONF_FILE=' + self.client_pbs_conf_file] + pcmd\n                as_script = True\n            elif obj_type == RESV and not self._is_local:\n                pcmd = ['PBS_SERVER=' + self.hostname] + pcmd\n                as_script = True\n            else:\n                as_script = False\n\n            ret = self.du.run_cmd(tgt, pcmd, runas=runas, as_script=as_script,\n                                  level=logging.INFOCLI, logerr=logerr)\n            o = ret['out']\n            if ret['err'] != ['']:\n                self.last_error = ret['err']\n            self.last_rc = ret['rc']\n            if ret['rc'] != 0:\n                raise PbsStatusError(rc=ret['rc'], rv=[], msg=self.geterrmsg())\n\n            bsl = self.utils.convert_to_dictlist(\n                o, attrib, mergelines=True, obj_type=obj_type)\n\n        # 7- Stat with impersonation over PBS IFL swig-wrapped API\n        elif runas is not None:\n            _data = {'obj_type': obj_type, 'attrib': attrib, 'id': id}\n            bsl = self.pbs_api_as('status', user=runas, data=_data,\n                                  extend=extend)\n        else:\n            # 8- Stat over PBS IFL API\n            #\n            # resources are special attributes; all resources are queried as\n            # a single attribute.\n            # e.g., querying the resources_available attribute returns all\n            # resources such as ncpus, mem, etc. 
when querying for\n            # resources_available.ncpus and resources_available.mem, only query\n            # resources_available once and retrieve the desired resources\n            # from there\n            if isinstance(attrib, dict):\n                attribcopy = {}\n                restype = []\n                for k, v in attrib.items():\n                    if isinstance(v, tuple):\n                        # SET requires special handling because status may\n                        # have been called through counter to count the number\n                        # of objects that have a given attribute set; in this\n                        # case we set the attribute to an empty string rather\n                        # than the number of elements requested. This is a\n                        # side-effect of the way pbs_statjob works\n                        if v[0] in (SET, MATCH_RE):\n                            v = ''\n                        else:\n                            v = v[1]\n                    if isinstance(v, Callable):\n                        v = ''\n                    if '.' in k:\n                        _r = k.split('.')[0]\n                        if _r not in restype:\n                            attribcopy[k] = v\n                            restype.append(_r)\n                    else:\n                        attribcopy[k] = v\n            elif isinstance(attrib, list):\n                attribcopy = []\n                for k in attrib:\n                    if '.' 
in k:\n                        _found = False\n                        for _e in attribcopy:\n                            _r = k.split('.')[0]\n                            if _r == _e.split('.')[0]:\n                                _found = True\n                                break\n                        if not _found:\n                            attribcopy.append(k)\n                    else:\n                        attribcopy.append(k)\n            else:\n                attribcopy = attrib\n\n            a = self.utils.convert_to_attrl(attribcopy)\n            c = self._connect(self.hostname)\n\n            if obj_type == JOB:\n                bs = pbs_statjob(c, id, a, extend)\n            elif obj_type == QUEUE:\n                bs = pbs_statque(c, id, a, extend)\n            elif obj_type == SERVER:\n                bs = pbs_statserver(c, a, extend)\n            elif obj_type == HOST:\n                bs = pbs_statnode(c, id, a, extend)\n            elif obj_type == VNODE:\n                bs = pbs_statvnode(c, id, a, extend)\n            elif obj_type == RESV:\n                bs = pbs_statresv(c, id, a, extend)\n            elif obj_type == SCHED:\n                bs = pbs_statsched(c, a, extend)\n            elif obj_type == RSC:\n                # up to PBS 12.3 pbs_statrsc was not in pbs_ifl.h\n                bs = pbs_statrsc(c, id, a, extend)\n            elif obj_type in (HOOK, PBS_HOOK):\n                if os.getuid() != 0:\n                    try:\n                        rc = self.manager(MGR_CMD_LIST, obj_type, attrib,\n                                          id, level=level)\n                        if rc == 0:\n                            if id:\n                                if (obj_type == HOOK and\n                                        id in self.hooks):\n                                    return [self.hooks[id].attributes]\n                                elif (obj_type == PBS_HOOK and\n                                      id 
in self.pbshooks):\n                                    return [self.pbshooks[id].attributes]\n                                else:\n                                    return None\n                            if obj_type == HOOK:\n                                return [h.attributes for h in\n                                        self.hooks.values()]\n                            elif obj_type == PBS_HOOK:\n                                return [h.attributes for h in\n                                        self.pbshooks.values()]\n                    except Exception:\n                        pass\n                else:\n                    bs = pbs_stathook(c, id, a, extend)\n            else:\n                self.logger.error(self.logprefix +\n                                  \"unrecognized object type \" + str(obj_type))\n\n            freebs = True\n            err = self.geterrmsg()\n            self._disconnect(c)\n\n            if err:\n                raise PbsStatusError(rc=-1, rv=[], msg=err)\n\n            if not isinstance(bs, list):\n                bsl = self.utils.batch_status_to_dictlist(bs, attrib)\n            else:\n                bsl = self.utils.filter_batch_status(bs, attrib)\n\n        # Update each object's dictionary with corresponding attributes and\n        # values\n        self.update_attributes(obj_type, bsl)\n\n        # Hook stat is done through CLI, no need to free the batch_status\n        if (not isinstance(bs, list) and freebs and\n                obj_type not in (HOOK, PBS_HOOK) and os.getuid() != 0):\n            pbs_statfree(bs)\n\n        # 9- Resolve indirect resources\n        if obj_type in (NODE, VNODE) and resolve_indirectness:\n            nodes = {}\n            for _b in bsl:\n                for k, v in _b.items():\n                    if v.startswith('@'):\n                        if v[1:] in nodes:\n                            _b[k] = nodes[v[1:]][k]\n                        else:\n                          
  for l in bsl:\n                                if l['id'] == v[1:]:\n                                    # cache the resolved node by its id so\n                                    # later lookups of nodes[v[1:]][k] work\n                                    nodes[v[1:]] = l\n                                    _b[k] = l[k]\n                                    break\n            del nodes\n        return bsl\n\n    def submit_interactive_job(self, job, cmd):\n        \"\"\"\n        submit an ``interactive`` job. Returns a job identifier\n        or raises PbsSubmitError on error\n        :param cmd: The command to run to submit the interactive\n                    job\n        :type cmd: str\n        :param job: the job object. The job must have the attribute\n                    'interactive_job' populated. That attribute is\n                    a list of tuples of the form:\n                    (<command>, <expected output>, <...>)\n                    For example, to send the command\n                    hostname and expect 'myhost.mydomain' one would\n                    set: job.interactive_job =\n                    [('hostname', 'myhost.mydomain')]\n                    If more than one line is expected, the extra\n                    lines are appended to the tuple.\n        :raises: PbsSubmitError\n        \"\"\"\n        ij = InteractiveJob(job, cmd, self.client)\n        # start the interactive job submission thread and wait to pick up\n        # the actual job identifier\n        ij.start()\n        while ij.jobid is None:\n            continue\n        return ij.jobid\n\n    def expect(self, obj_type, attrib=None, id=None, op=EQ, attrop=PTL_AND,\n               attempt=0, max_attempts=None, interval=None, count=None,\n               extend=None, offset=0, runas=None, level=logging.INFO,\n               msg=None, trigger_sched_cycle=True):\n        \"\"\"\n        expect an attribute to match a given value as per an\n        operation.\n        :param obj_type: The type of object to query, JOB, SERVER,\n                         SCHEDULER, QUEUE, NODE\n        :type obj_type: str\n        :param attrib: Attributes to 
query, can be a string, a list,\n                       or a dict\n        :type attrib: str or list or dictionary\n        :param id: The id of the object to act upon\n        :param op: An operation to perform on the queried data,\n                   e.g., EQ, SET, LT, ...\n        :param attrop: Operation on multiple attributes, either\n                       PTL_AND or PTL_OR; when PTL_AND is used, only\n                       batch objects matching all attributes are\n                       returned, otherwise an OR is applied\n        :param attempt: The number of times this function has been\n                        called\n        :type attempt: int\n        :param max_attempts: The maximum number of attempts to\n                             perform\n        :type max_attempts: int or None\n        :param interval: The interval time between attempts.\n        :param count: If True, attrib will be accumulated using\n                      function counter\n        :type count: bool\n        :param extend: passed to the stat call\n        :param offset: the time to wait before the initial check.\n                       Defaults to 0.\n        :type offset: int\n        :param runas: query as a given user. 
Defaults to current\n                      user\n        :type runas: str or None\n        :param msg: Message from last call of this function, this\n                    message will be used while raising\n                    PtlExpectError.\n        :type msg: str or None\n        :param trigger_sched_cycle: True by default can be set to False if\n                          kicksched_action is not supposed to be called\n        :type trigger_sched_cycle: Boolean\n        :returns: True if attributes are as expected\n        :raises: PtlExpectError if attributes are not as expected\n        \"\"\"\n\n        if attempt == 0 and offset > 0:\n            self.logger.log(level, self.logprefix + 'expect offset set to ' +\n                            str(offset))\n            time.sleep(offset)\n\n        if attrib is None:\n            attrib = {}\n\n        if ATTR_version in attrib and max_attempts is None:\n            max_attempts = 3\n\n        if max_attempts is None:\n            max_attempts = self.ptl_conf['max_attempts']\n\n        if interval is None:\n            interval = self.ptl_conf['attempt_interval']\n\n        if attempt >= max_attempts:\n            _msg = \"expected on \" + self.logprefix + msg\n            raise PtlExpectError(rc=1, rv=False, msg=_msg)\n\n        if obj_type == SERVER and id is None:\n            id = self.hostname\n\n        if isinstance(attrib, str):\n            attrib = {attrib: ''}\n        elif isinstance(attrib, list):\n            d = {}\n            for l in attrib:\n                d[l] = ''\n            attrib = d\n\n        # Add check for substate=42 for jobstate=R, if not added explicitly.\n        if obj_type == JOB:\n            add_attribs = {}\n            substate = False\n            for k, v in attrib.items():\n                if k == 'job_state' and ((isinstance(v, tuple) and\n                                          'R' in v[-1]) or v == 'R'):\n                    add_attribs['substate'] = 42\n             
   elif k == 'job_state=R':\n                    add_attribs['substate=42'] = v\n                elif 'substate' in k:\n                    substate = True\n            if add_attribs and not substate:\n                attrib.update(add_attribs)\n                attrop = PTL_AND\n            del add_attribs, substate\n\n        prefix = 'expect on ' + self.logprefix\n        msg = []\n        attrs_to_ignore = []\n        for k, v in attrib.items():\n            args = None\n            if isinstance(v, tuple):\n                operator = v[0]\n                if len(v) > 2:\n                    args = v[2:]\n                val = v[1]\n            else:\n                operator = op\n                val = v\n            if operator not in PTL_OP_TO_STR:\n                self.logger.log(level, \"Operator not supported by expect(), \"\n                                \"cannot verify change in \" + str(k))\n                attrs_to_ignore.append(k)\n                continue\n            msg += [k, PTL_OP_TO_STR[operator].strip()]\n            if isinstance(val, Callable):\n                msg += ['callable(' + val.__name__ + ')']\n                if args is not None:\n                    msg.extend([str(x) for x in args])\n            else:\n                msg += [str(val)]\n            msg += [PTL_ATTROP_TO_STR[attrop]]\n\n        # Delete the attributes that we cannot verify\n        for k in attrs_to_ignore:\n            del(attrib[k])\n\n        if attrs_to_ignore and len(attrib) < 1 and op == SET:\n            return True\n\n        # remove the last converted PTL_ATTROP_TO_STR\n        if len(msg) > 1:\n            msg = msg[:-1]\n\n        if len(attrib) == 0:\n            msg += [PTL_OP_TO_STR[op]]\n\n        msg += [PBS_OBJ_MAP[obj_type]]\n        if id is not None:\n            msg += [str(id)]\n        if attempt > 0:\n            msg += ['attempt:', str(attempt + 1)]\n\n        # Default count to True if the attribute contains an '=' in its name\n       
 # for example 'job_state=R' implies that a count of job_state is needed\n        if count is None and self.utils.operator_in_attribute(attrib):\n            count = True\n\n        if count:\n            newattr = self.utils.convert_attributes_by_op(attrib)\n            if len(newattr) == 0:\n                newattr = attrib\n\n            statlist = [self._filter(obj_type, newattr, id, extend, op=op,\n                                     attrop=attrop, runas=runas,\n                                     level=logging.DEBUG)]\n        else:\n            try:\n                statlist = self.status(obj_type, attrib, id=id,\n                                       level=logging.DEBUG, extend=extend,\n                                       runas=runas, logerr=False)\n            except PbsStatusError:\n                statlist = []\n\n        if (statlist is None or len(statlist) == 0 or\n                statlist[0] is None or len(statlist[0]) == 0):\n            if op == UNSET or list(set(attrib.values())) == [0]:\n                self.logger.log(level, prefix + \" \".join(msg) + ' ...  
OK')\n                return True\n            else:\n                time.sleep(interval)\n                msg = \" no data for \" + \" \".join(msg)\n                self.logger.log(level, prefix + msg)\n                return self.expect(obj_type, attrib, id, op, attrop,\n                                   attempt + 1, max_attempts, interval, count,\n                                   extend, level=level, msg=msg)\n        else:\n            if op == UNSET and obj_type in (SERVER, SCHED, NODE, HOOK, QUEUE):\n                for key in attrib.keys():\n                    if key in self.__special_attr_keys[obj_type]:\n                        val = self.get_special_attr_val(obj_type, key, id)\n                        attrib = {key: val}\n                        op = EQ\n                        return self.expect(obj_type, attrib, id, op, attrop,\n                                           attempt, max_attempts, interval,\n                                           count, extend, runas=runas,\n                                           level=level, msg=msg)\n\n        if attrib is None:\n            time.sleep(interval)\n            return self.expect(obj_type, attrib, id, op, attrop, attempt + 1,\n                               max_attempts, interval, count, extend,\n                               runas=runas, level=level, msg=\" \".join(msg))\n        inp_op = op\n        for k, v in attrib.items():\n            varargs = None\n            if isinstance(v, tuple):\n                op = v[0]\n                if len(v) > 2:\n                    varargs = v[2:]\n                v = v[1]\n            else:\n                op = inp_op\n\n            for stat in statlist:\n                if k not in stat:\n                    if op == UNSET:\n                        continue\n\n                    # Sometimes users provide the wrong case for attributes\n                    # Convert to lowercase and compare\n                    attrs_lower = {\n                        
ks.lower(): [ks, vs] for ks, vs in stat.items()}\n                    k_lower = k.lower()\n                    if k_lower not in attrs_lower:\n                        if (statlist.index(stat) + 1) < len(statlist):\n                            continue\n                        time.sleep(interval)\n                        _tsc = trigger_sched_cycle\n                        return self.expect(obj_type, attrib, id, op, attrop,\n                                           attempt + 1, max_attempts,\n                                           interval, count, extend,\n                                           level=level, msg=\" \".join(msg),\n                                           trigger_sched_cycle=_tsc)\n                    stat_v = attrs_lower[k_lower][1]\n                    stat_k = attrs_lower[k_lower][0]\n                else:\n                    stat_v = stat[k]\n                    stat_k = k\n\n                if stat_k == ATTR_version:\n                    m = self.version_tag.match(stat_v)\n                    if m:\n                        stat_v = m.group('version')\n                    else:\n                        time.sleep(interval)\n                        return self.expect(obj_type, attrib, id, op, attrop,\n                                           attempt + 1, max_attempts, interval,\n                                           count, extend, runas=runas,\n                                           level=level, msg=\" \".join(msg))\n\n                # functions/methods are invoked and their return value is\n                # used by expect\n                if isinstance(v, Callable):\n                    if varargs is not None:\n                        rv = v(stat_v, *varargs)\n                    else:\n                        rv = v(stat_v)\n                    if isinstance(rv, bool):\n                        if op == NOT:\n                            if not rv:\n                                continue\n                        # with NOT, a True return must fall through and\n                        # register as a mismatch; only match rv otherwise\n                        elif rv:\n    
                        continue\n                    else:\n                        v = rv\n\n                stat_v = PbsAttribute.decode_value(stat_v)\n                v = PbsAttribute.decode_value(str(v))\n\n                if stat_k == ATTR_version:\n                    stat_v = LooseVersion(str(stat_v))\n                    v = LooseVersion(str(v))\n\n                if op == EQ and stat_v == v:\n                    continue\n                elif op == SET and count and stat_v == v:\n                    continue\n                elif op == SET and count in (False, None):\n                    continue\n                elif op == NE and stat_v != v:\n                    continue\n                elif op == LT:\n                    if stat_v < v:\n                        continue\n                elif op == GT:\n                    if stat_v > v:\n                        continue\n                elif op == LE:\n                    if stat_v <= v:\n                        continue\n                elif op == GE:\n                    if stat_v >= v:\n                        continue\n                elif op == MATCH_RE:\n                    if re.search(str(v), str(stat_v)):\n                        continue\n                elif op == MATCH:\n                    if str(stat_v).find(str(v)) != -1:\n                        continue\n\n                msg += [' got: ' + stat_k + ' = ' + str(stat_v)]\n                self.logger.info(prefix + \" \".join(msg))\n                time.sleep(interval)\n\n                # run custom actions defined for this object type\n                if trigger_sched_cycle and self.actions:\n                    for act_obj in self.actions.get_actions_by_type(obj_type):\n                        if act_obj.enabled:\n                            act_obj.action(self, obj_type, attrib, id, op,\n                                           attrop)\n                return self.expect(obj_type, attrib, id, op, attrop,\n                            
       attempt + 1, max_attempts, interval, count,\n                                   extend, level=level, msg=\" \".join(msg),\n                                   trigger_sched_cycle=trigger_sched_cycle)\n\n        self.logger.log(level, prefix + \" \".join(msg) + ' ...  OK')\n        return True\n\n    def submit(self, obj, script=None, extend=None, submit_dir=None,\n               env=None):\n        \"\"\"\n        Submit a job or reservation. Returns a job identifier\n        or raises PbsSubmitError on error\n        :param obj: The Job or Reservation instance to submit\n        :param script: Path to a script to submit. Defaults to None;\n                       when no script is given, the executable\n                       /bin/sleep 100 is submitted\n        :type script: str or None\n        :param extend: Optional extension to the IFL call.\n                       See pbs_ifl.h\n        :type extend: str or None\n        :param submit_dir: directory from which the job is submitted.\n                           Defaults to a temporary directory\n        :type submit_dir: str or None\n        :raises: PbsSubmitError\n        \"\"\"\n\n        _interactive_job = False\n        as_script = False\n        rc = None\n        if isinstance(obj, Job):\n            if self.platform == 'cray' or self.platform == 'craysim':\n                m = False\n                vncompute = False\n                if 'Resource_List.select' in obj.attributes:\n                    select = obj.attributes['Resource_List.select']\n                    start = select.startswith('vntype=cray_compute')\n                    m = start or ':vntype=cray_compute' in select\n                if 'Resource_List.vntype' in obj.attributes:\n                    vn_type = obj.attributes['Resource_List.vntype']\n                    if vn_type == 'cray_compute':\n                        vncompute = True\n                if obj.script is not None:\n                    script = obj.script\n                elif m or vncompute:\n                    aprun_cmd = 
\"aprun -b -B\"\n                    executable = obj.attributes[ATTR_executable]\n                    start = executable.startswith('aprun ')\n                    aprun_exist = start or '/aprun' in executable\n                    if script:\n                        aprun_cmd += \" \" + script\n                    else:\n                        if aprun_exist:\n                            aprun_cmd = executable\n                        else:\n                            aprun_cmd += \" \" + executable\n                        arg_list = obj.attributes[ATTR_Arglist]\n                        aprun_cmd += \" \" + self.utils.convert_arglist(arg_list)\n                    fn = self.du.create_temp_file(hostname=None,\n                                                  prefix='PtlPbsJobScript',\n                                                  asuser=obj.username,\n                                                  body=aprun_cmd)\n                    self.du.chmod(path=fn, mode=0o755)\n                    script = fn\n            elif script is None and obj.script is not None:\n                script = obj.script\n            if ATTR_inter in obj.attributes:\n                _interactive_job = True\n                if ATTR_executable in obj.attributes:\n                    del obj.attributes[ATTR_executable]\n                if ATTR_Arglist in obj.attributes:\n                    del obj.attributes[ATTR_Arglist]\n        elif not isinstance(obj, Reservation):\n            m = self.logprefix + \"unrecognized object type\"\n            self.logger.error(m)\n            return None\n\n        if not submit_dir:\n            submit_dir = pwd.getpwnam(obj.username)[5]\n\n        cwd = os.getcwd()\n        if self.platform != 'shasta':\n            if submit_dir:\n                os.chdir(submit_dir)\n        c = None\n\n        # Revisit this after fixing submitting of executables without\n        # the whole path\n        # Get sleep command depending on which Mom the job 
will run\n        if ((ATTR_executable in obj.attributes) and\n                ('sleep' in obj.attributes[ATTR_executable])):\n            obj.attributes[ATTR_executable] = (\n                list(self.moms.values())[0]).sleep_cmd\n\n        # 1- Submission using the command line tools\n        runcmd = []\n        if env:\n            runcmd += ['#!/bin/bash\\n']\n            for k, v in env.items():\n                if '()' in k:\n                    f_name = k.replace('()', '')\n                    runcmd += [k, v, \"\\n\", \"export\", \"-f\", f_name]\n                else:\n                    runcmd += ['export %s=\\\"%s\\\"' % (k, v)]\n                runcmd += [\"\\n\"]\n\n        script_file = None\n        if self.get_op_mode() == PTL_CLI:\n            exclude_attrs = []  # list of attributes to not convert to CLI\n            if isinstance(obj, Job):\n                runcmd += [os.path.join(self.client_conf['PBS_EXEC'], 'bin',\n                                        'qsub')]\n            elif isinstance(obj, Reservation):\n                runcmd += [os.path.join(self.client_conf['PBS_EXEC'], 'bin',\n                                        'pbs_rsub')]\n                if ATTR_resv_start in obj.custom_attrs:\n                    start = obj.custom_attrs[ATTR_resv_start]\n                    obj.custom_attrs[ATTR_resv_start] = \\\n                        self.utils.convert_seconds_to_datetime(start)\n                if ATTR_resv_end in obj.custom_attrs:\n                    end = obj.custom_attrs[ATTR_resv_end]\n                    obj.custom_attrs[ATTR_resv_end] = \\\n                        self.utils.convert_seconds_to_datetime(end)\n                if ATTR_resv_timezone in obj.custom_attrs:\n                    exclude_attrs += [ATTR_resv_timezone, ATTR_resv_standing]\n                    # handling of impersonation differs widely across OS's,\n                    # when setting PBS_TZID we standardize on running the cmd\n                    # as a 
script instead of customizing for each OS flavor\n                    _tz = obj.custom_attrs[ATTR_resv_timezone]\n                    runcmd = ['PBS_TZID=' + _tz] + runcmd\n                    as_script = True\n                    if ATTR_resv_rrule in obj.custom_attrs:\n                        _rrule = obj.custom_attrs[ATTR_resv_rrule]\n                        if _rrule[0] not in (\"'\", '\"'):\n                            _rrule = \"'\" + _rrule + \"'\"\n                        obj.custom_attrs[ATTR_resv_rrule] = _rrule\n                if ATTR_job in obj.attributes:\n                    runcmd += ['--job', obj.attributes[ATTR_job]]\n                    exclude_attrs += [ATTR_job]\n\n            if not self._is_local:\n                if ATTR_queue not in obj.attributes:\n                    runcmd += ['-q@' + self.hostname]\n                elif '@' not in obj.attributes[ATTR_queue]:\n                    curq = obj.attributes[ATTR_queue]\n                    runcmd += ['-q' + curq + '@' + self.hostname]\n                if obj.custom_attrs and (ATTR_queue in obj.custom_attrs):\n                    del obj.custom_attrs[ATTR_queue]\n\n            _conf = self.default_client_pbs_conf\n            cmd = self.utils.convert_to_cli(obj.custom_attrs, IFL_SUBMIT,\n                                            self.hostname, dflt_conf=_conf,\n                                            exclude_attrs=exclude_attrs)\n\n            if cmd is None:\n                try:\n                    os.chdir(cwd)\n                except OSError:\n                    pass\n                return None\n            runcmd += cmd\n\n            if script:\n                runcmd += [script]\n            else:\n                if ATTR_executable in obj.attributes:\n                    runcmd += ['--', obj.attributes[ATTR_executable]]\n                    if ((ATTR_Arglist in obj.attributes) and\n                            (obj.attributes[ATTR_Arglist] is not None)):\n                        
args = obj.attributes[ATTR_Arglist]\n                        arglist = self.utils.convert_arglist(args)\n                        if arglist is None:\n                            try:\n                                os.chdir(cwd)\n                            except OSError:\n                                pass\n                            return None\n                        runcmd += [arglist]\n            if obj.username != self.current_user:\n                runas = obj.username\n            else:\n                runas = None\n\n            if isinstance(obj, Reservation) and obj.hosts:\n                runcmd += ['--hosts'] + obj.hosts\n\n            if _interactive_job:\n                ijid = self.submit_interactive_job(obj, runcmd)\n                try:\n                    os.chdir(cwd)\n                except OSError:\n                    pass\n                return ijid\n\n            if not self.default_client_pbs_conf:\n                runcmd = [\n                    'PBS_CONF_FILE=' + self.client_pbs_conf_file] + runcmd\n                as_script = True\n            if env:\n                user = PbsUser.get_user(obj.username)\n                host = user.host\n                run_str = \" \".join(runcmd)\n                script_file = self.du.create_temp_file(hostname=host,\n                                                       body=run_str)\n                self.du.chmod(hostname=host, path=script_file, mode=0o755)\n                runcmd = [script_file]\n            ret = self.du.run_cmd(self.client, runcmd, runas=runas,\n                                  level=logging.INFOCLI, as_script=as_script,\n                                  env=env, logerr=False)\n            if ret['rc'] != 0:\n                objid = None\n            else:\n                objid = ret['out'][0]\n            if ret['err'] != ['']:\n                self.last_error = ret['err']\n            self.last_rc = rc = ret['rc']\n\n        # 2- Submission with impersonation 
over API\n        elif obj.username != self.current_user:\n            # submit job as a user requires setting uid to that user. It's\n            # done in a separate process\n            obj.set_variable_list(obj.username, submit_dir)\n            obj.set_attributes()\n            if (obj.script is not None and not self._is_local):\n                # This copy assumes that the file system layout on the\n                # remote host is identical to the local host. When not\n                # the case, this code will need to be updated to copy\n                # to a known remote location and update the obj.script\n                self.du.run_copy(\n                    self.hostname, src=obj.script, dest=obj.script)\n                os.remove(obj.script)\n            objid = self.pbs_api_as('submit', obj, user=obj.username,\n                                    extend=extend)\n        # 3- Submission as current user over API\n        else:\n            c = self._connect(self.hostname)\n\n            if isinstance(obj, Job):\n                if script:\n                    if ATTR_o not in obj.attributes:\n                        obj.attributes[ATTR_o] = (self.hostname + ':' +\n                                                  obj.script + '.o')\n                    if ATTR_e not in obj.attributes:\n                        obj.attributes[ATTR_e] = (self.hostname + ':' +\n                                                  obj.script + '.e')\n                    sc = os.path.basename(script)\n                    obj.unset_attributes([ATTR_executable, ATTR_Arglist])\n                    if ATTR_N not in obj.custom_attrs:\n                        obj.attributes[ATTR_N] = sc\n                if ATTR_queue in obj.attributes:\n                    destination = obj.attributes[ATTR_queue]\n                    # queue must be removed otherwise will cause the submit\n                    # to fail silently\n                    del obj.attributes[ATTR_queue]\n                
else:\n                    destination = None\n\n                    if (ATTR_o not in obj.attributes or\n                            ATTR_e not in obj.attributes):\n                        fn = PbsAttribute.random_str(\n                            length=4, prefix='PtlPbsJob')\n                        tmp = self.du.get_tempdir(self.hostname)\n                        fn = os.path.join(tmp, fn)\n                    if ATTR_o not in obj.attributes:\n                        obj.attributes[ATTR_o] = (self.hostname + ':' +\n                                                  fn + '.o')\n                    if ATTR_e not in obj.attributes:\n                        obj.attributes[ATTR_e] = (self.hostname + ':' +\n                                                  fn + '.e')\n\n                obj.attropl = self.utils.dict_to_attropl(obj.attributes)\n                objid = pbs_submit(c, obj.attropl, script, destination,\n                                   extend)\n            elif isinstance(obj, Reservation):\n                if ATTR_resv_duration in obj.attributes:\n                    # reserve_duration is not a valid attribute, the API call\n                    # will get rejected if it is used\n                    wlt = ATTR_l + '.walltime'\n                    obj.attributes[wlt] = obj.attributes[ATTR_resv_duration]\n                    del obj.attributes[ATTR_resv_duration]\n\n                obj.attropl = self.utils.dict_to_attropl(obj.attributes)\n                objid = pbs_submit_resv(c, obj.attropl, extend)\n\n        prefix = 'submit to ' + self.shortname + ' as '\n        if isinstance(obj, Job):\n            self.logit(prefix + '%s: ' % obj.username, JOB, obj.custom_attrs,\n                       objid)\n            if obj.script_body:\n                self.logger.log(logging.INFOCLI, 'job script ' + script +\n                                '\\n---\\n' + obj.script_body + '\\n---')\n            if objid is not None:\n                self.jobs[objid] = obj\n   
     elif isinstance(obj, Reservation):\n            # Reservations without -I option return as 'R123 UNCONFIRMED'\n            # so split to get the R123 only\n\n            self.logit(prefix + '%s: ' % obj.username, RESV, obj.attributes,\n                       objid)\n            if objid is not None:\n                objid = objid.split()[0]\n                self.reservations[objid] = obj\n\n        if objid is not None:\n            obj.server[self.hostname] = objid\n        else:\n            try:\n                os.chdir(cwd)\n            except OSError:\n                pass\n            raise PbsSubmitError(rc=rc, rv=None, msg=self.geterrmsg(),\n                                 post=self._disconnect, conn=c)\n\n        if c:\n            self._disconnect(c)\n\n        try:\n            os.chdir(cwd)\n        except OSError:\n            pass\n\n        return objid\n\n    def submit_resv(self, offset, duration, select='1:ncpus=1', rrule='',\n                    conf=None, confirmed=True):\n        \"\"\"\n        Helper function to submit an advance/a standing reservation.\n        :param int offset: Time in seconds from time this is called to set the\n                       advance reservation's start time.\n        :param int duration: Duration in seconds of advance reservation\n        :param str select: Select statement for reservation placement.\n                           Default: \"1:ncpus=1\"\n        :param str rrule: Recurrence rule.  
Default is an empty string.\n        :param conf: Configuration for test case for PBS_TZID information.\n        :param boolean confirmed: Wait until the reservation is confirmed if\n                                  True.\n                                  Default: True\n        :return A tuple of reservation id, start time and end time of the\n                created reservation.\n\n        \"\"\"\n        start_time = int(time.time()) + offset\n        end_time = start_time + duration\n\n        attrs = {\n            'reserve_start': start_time,\n            'reserve_end': end_time,\n            'Resource_List.select': select\n        }\n\n        if rrule:\n            if conf is None:\n                self.logger.info('conf not set. 
Falling back to Asia/Kolkata')\n                tzone = 'Asia/Kolkata'\n            elif 'PBS_TZID' in conf:\n                tzone = conf['PBS_TZID']\n            elif 'PBS_TZID' in os.environ:\n                tzone = os.environ['PBS_TZID']\n            else:\n                self.logger.info('Missing timezone, using Asia/Kolkata')\n                tzone = 'Asia/Kolkata'\n            attrs[ATTR_resv_rrule] = rrule\n            attrs[ATTR_resv_timezone] = tzone\n\n        rid = self.submit(Reservation(TEST_USER, attrs))\n        time_format = \"%Y-%m-%d %H:%M:%S\"\n        self.logger.info(\"Submitted reservation: %s, start=%s, end=%s\", rid,\n                         time.strftime(time_format,\n                                       time.localtime(start_time)),\n                         time.strftime(time_format,\n                                       time.localtime(end_time)))\n        if confirmed:\n            attrs = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n            self.expect(RESV, attrs, id=rid)\n\n        return rid, start_time, end_time\n\n    def alter_a_reservation(self, r, start, end, shift=0,\n                            alter_s=False, alter_e=False,\n                            whichMessage=1, confirm=True, check_log=True,\n                            interactive=0, sequence=1,\n                            a_duration=None, select=None, extend=None,\n                            runas=None, sched_down=False):\n        \"\"\"\n        Helper method for altering a reservation.\n        This method also checks for the server and accounting logs.\n\n        :param r: Reservation id.\n        :type  r: string.\n\n        :param start: Start time of the reservation.\n        :type  start: int.\n\n        :param end: End time of the reservation.\n        :type  end: int\n\n        :param shift: Time in seconds the reservation times will be moved.\n        :type  shift: int.\n\n        :param alter_s: Whether the caller intends to change the 
start time.\n                       Default - False.\n        :type  alter_s: bool.\n\n        :param alter_e: Whether the caller intends to change the end time.\n                       Default - False.\n        :type  alter_e: bool.\n\n        :param whichMessage: Which message is expected to be returned.\n                            Default: 1.\n                             =-1 - No exception, don't check logs\n                             =0 - PbsResvAlterError exception will be raised,\n                                  so check for appropriate error response.\n                             =1 - No exception, check for \"CONFIRMED\" message\n                             =2 - No exception, check for \"UNCONFIRMED\" message\n                             =3 - No exception, check for \"DENIED\" message\n        :type  whichMessage: int.\n        :param check_log: If False, do not check the log for confirmation of\n                          the reservation.  Default: True\n        :type  check_log: bool\n\n        :param confirm: The expected state of the reservation after it is\n                       altered. 
It can be either Confirmed or Running.\n                       Default - Confirmed State.\n        :type  confirm: bool.\n\n        :param sched_down: The test is being run with the scheduler down.\n                           Don't wait for confirmed or running states.\n                           Default - False\n        :type sched_down: bool\n\n        :param interactive: Time in seconds the CLI waits for a reply.\n                           Default - 0 seconds.\n        :type  interactive: int.\n\n        :param sequence: The expected number of log matches corresponding\n                        to the alter.\n                        Default: 1\n        :type  sequence: int.\n\n        :param a_duration: The duration to modify.\n        :type a_duration: int.\n        :param extend: extend parameter.\n        :type extend: str.\n        :param runas: User who alters the reservation.\n                      Default: user running the test.\n        :type runas: PbsUser.\n\n        :raises: PBSResvAlterError\n        \"\"\"\n        fmt = \"%a %b %d %H:%M:%S %Y\"\n        new_start = start\n        new_end = end\n        attrs = {}\n        bu = BatchUtils()\n\n        if alter_s:\n            new_start = start + shift\n            new_start_conv = bu.convert_seconds_to_datetime(\n                new_start)\n            attrs['reserve_start'] = new_start_conv\n\n        if alter_e:\n            new_end = end + shift\n            new_end_conv = bu.convert_seconds_to_datetime(new_end)\n            attrs['reserve_end'] = new_end_conv\n\n        if interactive > 0:\n            attrs['interactive'] = interactive\n\n        if a_duration:\n            if isinstance(a_duration, str) and ':' in a_duration:\n                new_duration_conv = bu.convert_duration(a_duration)\n            else:\n                new_duration_conv = a_duration\n\n            if not alter_s and not alter_e:\n                new_end = start + new_duration_conv + shift\n            elif alter_s and 
not alter_e:\n                new_end = new_start + new_duration_conv\n            elif not alter_s and alter_e:\n                new_start = new_end - new_duration_conv\n            # else new_start and new_end have already been calculated\n        else:\n            new_duration_conv = new_end - new_start\n\n        if a_duration:\n            attrs['reserve_duration'] = new_duration_conv\n\n        if select:\n            attrs['Resource_List.select'] = select\n\n        if runas is None:\n            runas = self.du.get_current_user()\n\n        if whichMessage:\n            msg = ['']\n            acct_msg = ['']\n\n            if interactive:\n                if whichMessage == 1:\n                    msg = \"pbs_ralter: \" + r + \" CONFIRMED\"\n                elif whichMessage == 2:\n                    msg = \"pbs_ralter: \" + r + \" UNCONFIRMED\"\n                else:\n                    msg = \"pbs_ralter: \" + r + \" DENIED\"\n            else:\n                msg = \"pbs_ralter: \" + r + \" ALTER REQUESTED\"\n\n            self.alterresv(r, attrs, extend=extend, runas=runas)\n\n            if msg != self.last_out[0]:\n                raise PBSResvAlterError(\n                    msg=f\"Wrong Message expected {msg} got {self.last_out[0]}\")\n            self.logger.info(msg + \" displayed\")\n\n            if check_log:\n                msg = \"Resv;\" + r + \";Attempting to modify reservation \"\n                if start != new_start:\n                    msg += \"start=\"\n                    msg += time.strftime(fmt,\n                                         time.localtime(int(new_start)))\n                    msg += \" \"\n\n                if end != new_end:\n                    msg += \"end=\"\n                    msg += time.strftime(fmt,\n                                         time.localtime(int(new_end)))\n                    msg += \" \"\n\n                if select:\n                    msg += \"select=\" + select + \" \"\n\n             
   # strip the last space\n                msg = msg[:-1]\n                self.log_match(msg, interval=2, max_attempts=30)\n\n            if whichMessage == -1:\n                return new_start, new_end\n            elif whichMessage == 1:\n                if alter_s:\n                    new_start_conv = bu.convert_seconds_to_datetime(\n                        new_start, fmt)\n                    attrs['reserve_start'] = new_start_conv\n\n                if alter_e:\n                    new_end_conv = bu.convert_seconds_to_datetime(\n                        new_end, fmt)\n                    attrs['reserve_end'] = new_end_conv\n\n                if a_duration:\n                    attrs['reserve_duration'] = new_duration_conv\n\n                if sched_down:\n                    attrs['reserve_state'] = (MATCH_RE,\n                                              'RESV_BEING_ALTERED|11')\n                elif confirm:\n                    attrs['reserve_state'] = (MATCH_RE, 'RESV_CONFIRMED|2')\n                else:\n                    attrs['reserve_state'] = (MATCH_RE, 'RESV_RUNNING|5')\n\n                self.expect(RESV, attrs, id=r)\n                if check_log:\n                    acct_msg = \"Y;\" + r + \";requestor=Scheduler@.*\" + \" start=\"\n                    acct_msg += str(new_start) + \" end=\" + str(new_end)\n                    self.status(RESV, 'resv_nodes', id=r)\n                    acct_msg += \" nodes=\"\n                    acct_msg += re.escape(self.reservations[r].\n                                          resvnodes())\n\n                    if r[0] == 'S':\n                        self.status(RESV, 'reserve_count', id=r)\n                        count = self.reservations[r].attributes[\n                            'reserve_count']\n                        acct_msg += \" count=\" + count\n\n                    self.accounting_match(acct_msg, regexp=True,\n                                          interval=2,\n                          
                max_attempts=30, n='ALL')\n\n                # Check if reservation reports new start time\n                # and updated duration.\n\n                msg = \"Resv;\" + r + \";Reservation alter confirmed\"\n            else:\n                msg = \"Resv;\" + r + \";Reservation alter denied\"\n            interval = 0.5\n            max_attempts = 20\n            if sched_down:\n                self.logger.info(\"Scheduler Down: Modify should not succeed.\")\n                return start, end\n            for attempts in range(1, max_attempts + 1):\n                lines = self.log_match(msg, n='ALL', allmatch=True,\n                                       max_attempts=5)\n                info_msg = \"log_match: searching for \" + \\\n                    str(sequence) + \" occurrence(s) of message: \" + \\\n                    msg + \": Got: \" + str(len(lines))\n                self.logger.info(info_msg)\n                if len(lines) == sequence:\n                    break\n                time.sleep(interval)\n            else:\n                # loop exhausted without finding the expected match count\n                raise PtlLogMatchError(rc=1, rv=False, msg=info_msg)\n            return new_start, new_end\n        else:\n            try:\n                self.alterresv(r, attrs, extend=extend, runas=runas)\n            except PbsResvAlterError:\n                self.logger.info(\n                    \"Reservation Alteration failed.  
This is expected.\")\n                return start, end\n            else:\n                self.assertFalse(\"Reservation alter allowed when it should\" +\n                                 \"not be.\")\n\n    def deljob(self, id=None, extend=None, runas=None, wait=False,\n               logerr=True, attr_W=None):\n        \"\"\"\n        delete a single job or list of jobs specified by id\n        raises ``PbsDeljobError`` on error\n        :param id: The identifier(s) of the jobs to delete\n        :type id: str or list\n        :param extend: Optional parameters to pass along to PBS\n        :type extend: str or None\n        :param runas: run as user\n        :type runas: str or None\n        :param wait: Set to True to wait for job(s) to no longer\n                     be reported by PBS. False by default\n        :type wait: bool\n        :param logerr: Whether to log errors. Defaults to True.\n        :type logerr: bool\n        :param attr_w: -W args to qdel (Only for cli mode)\n        :type attr_w: str\n        :raises: PbsDeljobError\n        \"\"\"\n        prefix = 'delete job on ' + self.shortname\n        if runas is not None:\n            prefix += ' as ' + str(runas)\n        prefix += ': '\n        if id is not None:\n            if not isinstance(id, list):\n                id = id.split(',')\n            prefix += ', '.join(id)\n        self.logger.info(prefix)\n        c = None\n        rc = 0\n        if self.get_op_mode() == PTL_CLI:\n            pcmd = [os.path.join(self.client_conf['PBS_EXEC'], 'bin', 'qdel')]\n            if extend is not None:\n                pcmd += self.utils.convert_to_cli(extend, op=IFL_DELETE,\n                                                  hostname=self.hostname)\n            if attr_W is not None:\n                pcmd += ['-W']\n                if attr_W != PTL_NOARG:\n                    pcmd += [attr_W]\n            if not self.default_client_pbs_conf:\n                pcmd = ['PBS_CONF_FILE=' + 
self.client_pbs_conf_file] + pcmd\n                as_script = True\n            elif not self._is_local:\n                pcmd = ['PBS_SERVER=' + self.hostname] + pcmd\n                as_script = True\n            else:\n                as_script = False\n            if id is not None:\n                chunks = [id[i:i + 2000] for i in range(0, len(id), 2000)]\n                for chunk in chunks:\n                    ret = self.du.run_cmd(self.client, pcmd + chunk,\n                                          runas=runas, as_script=as_script,\n                                          logerr=logerr, level=logging.INFOCLI)\n                    rc = ret['rc']\n                    if ret['err'] != ['']:\n                        self.last_error = ret['err']\n                    self.last_rc = rc\n                    if rc != 0:\n                        break\n            else:\n                ret = self.du.run_cmd(self.client, pcmd, runas=runas,\n                                      as_script=as_script, logerr=logerr,\n                                      level=logging.INFOCLI)\n                rc = ret['rc']\n                if ret['err'] != ['']:\n                    self.last_error = ret['err']\n                self.last_rc = rc\n        elif runas is not None:\n            rc = self.pbs_api_as('deljob', id, user=runas, extend=extend)\n        else:\n            c = self._connect(self.hostname)\n            rc = 0\n            for ajob in id:\n                tmp_rc = pbs_deljob(c, ajob, extend)\n                if tmp_rc != 0:\n                    rc = tmp_rc\n        if rc != 0:\n            raise PbsDeljobError(rc=rc, rv=False, msg=self.geterrmsg(),\n                                 post=self._disconnect, conn=c)\n        if self.jobs is not None:\n            for j in id:\n                if j in self.jobs:\n                    if self.jobs[j].interactive_handle is not None:\n                        self.jobs[j].interactive_handle.close()\n                   
 del self.jobs[j]\n        if c:\n            self._disconnect(c)\n        if wait and id is not None:\n            for oid in id:\n                self.expect(JOB, 'queue', id=oid, op=UNSET, runas=runas,\n                            level=logging.DEBUG)\n        return rc\n\n    def delresv(self, id=None, extend=None, runas=None, wait=False,\n                logerr=True):\n        \"\"\"\n        delete a single reservation or list of reservations specified by id\n        raises ``PbsDelresvError`` on error\n        :param id: The identifier(s) of the reservations to delete\n        :type id: str or list\n        :param extend: Optional parameters to pass along to PBS\n        :type extend: str or None\n        :param runas: run as user\n        :type runas: str or None\n        :param wait: Set to True to wait for reservation(s) to no longer\n                     be reported by PBS. False by default\n        :type wait: bool\n        :param logerr: Whether to log errors. Defaults to True.\n        :type logerr: bool\n        :raises: PbsDelresvError\n        \"\"\"\n        prefix = 'delete resv on ' + self.shortname\n        if runas is not None:\n            prefix += ' as ' + str(runas)\n        prefix += ': '\n        if id is not None:\n            if not isinstance(id, list):\n                id = id.split(',')\n            prefix += ', '.join(id)\n        self.logger.info(prefix)\n        c = None\n        rc = 0\n        if self.get_op_mode() == PTL_CLI:\n            pcmd = [os.path.join(self.client_conf['PBS_EXEC'], 'bin',\n                                 'pbs_rdel')]\n            if not self.default_client_pbs_conf:\n                pcmd = ['PBS_CONF_FILE=' + self.client_pbs_conf_file] + pcmd\n                as_script = True\n            elif not self._is_local:\n                pcmd = ['PBS_SERVER=' + self.hostname] + pcmd\n                as_script = True\n            else:\n                as_script = False\n            if id is not None:\n                chunks = [id[i:i + 2000] 
for i in range(0, len(id), 2000)]\n                for chunk in chunks:\n                    ret = self.du.run_cmd(self.client, pcmd + chunk,\n                                          runas=runas, as_script=as_script,\n                                          logerr=logerr, level=logging.INFOCLI)\n                    rc = ret['rc']\n                    if ret['err'] != ['']:\n                        self.last_error = ret['err']\n                    self.last_rc = rc\n                    if rc != 0:\n                        break\n            else:\n                ret = self.du.run_cmd(self.client, pcmd, runas=runas,\n                                      as_script=as_script, logerr=logerr,\n                                      level=logging.INFOCLI)\n                rc = ret['rc']\n                if ret['err'] != ['']:\n                    self.last_error = ret['err']\n                self.last_rc = rc\n        elif runas is not None:\n            rc = self.pbs_api_as('delresv', id, user=runas, extend=extend)\n        else:\n            c = self._connect(self.hostname)\n            rc = 0\n            for ajob in id:\n                tmp_rc = pbs_delresv(c, ajob, extend)\n                if tmp_rc != 0:\n                    rc = tmp_rc\n        if rc != 0:\n            raise PbsDelresvError(rc=rc, rv=False, msg=self.geterrmsg(),\n                                  post=self._disconnect, conn=c)\n        if self.reservations is not None:\n            for j in id:\n                if j in self.reservations:\n                    del self.reservations[j]\n        if c:\n            self._disconnect(c)\n        if wait and id is not None:\n            for oid in id:\n                self.expect(RESV, 'queue', id=oid, op=UNSET, runas=runas,\n                            level=logging.DEBUG)\n        return rc\n\n    def delete(self, id=None, extend=None, runas=None, wait=False,\n               logerr=True):\n        \"\"\"\n        delete a single job or list of jobs 
specified by id\n        raises ``PbsDeleteError`` on error\n        :param id: The identifier(s) of the jobs/resvs to delete\n        :type id: str or list\n        :param extend: Optional parameters to pass along to PBS\n        :type extend: str or None\n        :param runas: run as user\n        :type runas: str\n        :param wait: Set to True to wait for job(s)/resv(s) to\n                     no longer be reported by PBS. False by default\n        :type wait: bool\n        :param logerr: Whether to log errors. Defaults to True.\n        :type logerr: bool\n        :raises: PbsDeleteError\n        \"\"\"\n        prefix = 'delete on ' + self.shortname\n        if runas is not None:\n            prefix += ' as ' + str(runas)\n        prefix += ': '\n        if id is not None:\n            if not isinstance(id, list):\n                id = id.split(',')\n            prefix += ','.join(id)\n        if extend is not None:\n            prefix += ' with ' + str(extend)\n        self.logger.info(prefix)\n\n        if id is None or len(id) == 0:\n            return 0\n\n        obj_type = {}\n        job_list = []\n        resv_list = []\n        for j in id:\n            if j[0] in ('R', 'S', 'M'):\n                obj_type[j] = RESV\n                resv_list.append(j)\n            else:\n                obj_type[j] = JOB\n                job_list.append(j)\n\n        if resv_list:\n            try:\n                rc = self.delresv(resv_list, extend, runas, logerr=logerr)\n            except PbsDelresvError as e:\n                rc = e.rc\n                msg = e.msg\n                rv = e.rv\n        if job_list:\n            try:\n                rc = self.deljob(job_list, extend, runas, logerr=logerr)\n            except PbsDeljobError as e:\n                rc = e.rc\n                msg = e.msg\n                rv = e.rv\n\n        if rc != 0:\n            raise PbsDeleteError(rc=rc, rv=rv, msg=msg)\n\n        if wait:\n            for 
oid in id:\n                self.expect(obj_type[oid], 'queue', id=oid, op=UNSET,\n                            runas=runas, level=logging.DEBUG)\n\n        return rc\n\n    def select(self, attrib=None, extend=None, runas=None, logerr=True):\n        \"\"\"\n        Select jobs that match attributes list or all jobs if no\n        attributes raises ``PbsSelectError`` on error\n        :param attrib: A string, list, or dictionary of attributes\n        :type attrib: str or list or dictionary\n        :param extend: the extended attributes to pass to select\n        :type extend: str or None\n        :param runas: run as user\n        :type runas: str or None\n        :param logerr: If True (default) logs run_cmd errors\n        :type logerr: bool\n        :returns: A list of job identifiers that match the\n                  attributes specified\n        :raises: PbsSelectError\n        \"\"\"\n        prefix = \"select on \" + self.shortname\n        if runas is not None:\n            prefix += \" as \" + str(runas)\n        prefix += \": \"\n        if attrib is None:\n            s = PTL_ALL\n        elif not isinstance(attrib, dict):\n            self.logger.error(prefix + \"attributes must be a dictionary\")\n            return\n        else:\n            s = str(attrib)\n        self.logger.info(prefix + s)\n\n        c = None\n        if self.get_op_mode() == PTL_CLI:\n            pcmd = [os.path.join(self.client_conf['PBS_EXEC'],\n                                 'bin', 'qselect')]\n\n            cmd = self.utils.convert_to_cli(attrib, op=IFL_SELECT,\n                                            hostname=self.hostname)\n            if extend is not None:\n                pcmd += ['-' + extend]\n\n            if not self._is_local and ((attrib is None) or\n                                       (ATTR_queue not in attrib)):\n                pcmd += ['-q', '@' + self.hostname]\n\n            pcmd += cmd\n            if not self.default_client_pbs_conf:\n          
      pcmd = ['PBS_CONF_FILE=' + self.client_pbs_conf_file] + pcmd\n                as_script = True\n            else:\n                as_script = False\n\n            ret = self.du.run_cmd(self.client, pcmd, runas=runas,\n                                  as_script=as_script, level=logging.INFOCLI,\n                                  logerr=logerr)\n            if ret['err'] != ['']:\n                self.last_error = ret['err']\n            self.last_rc = ret['rc']\n            if self.last_rc != 0:\n                raise PbsSelectError(rc=self.last_rc, rv=False,\n                                     msg=self.geterrmsg())\n            jobs = ret['out']\n            # command returns no jobs as empty, since we expect a valid id,\n            # we reset the jobs to an empty array\n            if len(jobs) == 1 and jobs[0] == '':\n                jobs = []\n        elif runas is not None:\n            jobs = self.pbs_api_as('select', user=runas, data=attrib,\n                                   extend=extend)\n        else:\n            attropl = self.utils.convert_to_attropl(attrib, op=EQ)\n            c = self._connect(self.hostname)\n            jobs = pbs_selectjob(c, attropl, extend)\n            err = self.geterrmsg()\n            if err:\n                raise PbsSelectError(rc=-1, rv=False, msg=err,\n                                     post=self._disconnect, conn=c)\n            self._disconnect(c)\n\n        return jobs\n\n    def selstat(self, select_list, rattrib, runas=None, extend=None):\n        \"\"\"\n        stat and filter job attributes.\n        :param select_list: The filter criteria\n        :type select_list: List\n        :param rattrib: The attributes to query\n        :type rattrib: List\n        :param runas: run as user\n        :type runas: str or None\n        :param extend: Optional extension to the IFL call\n        :type extend: str or None\n        .. 
note:: No ``CLI`` counterpart for this call\n        \"\"\"\n\n        attrl = self.utils.convert_to_attrl(rattrib)\n        attropl = self.utils.convert_to_attropl(select_list)\n\n        c = self._connect(self.hostname)\n        bs = pbs_selstat(c, attropl, attrl, extend)\n        self._disconnect(c)\n        return bs\n\n    def skipTest(self, reason=None):\n        \"\"\"\n        Skip Test\n        :param reason: message to indicate why test is skipped\n        :type reason: str or None\n        \"\"\"\n        if reason:\n            self.logger.warning('test skipped: ' + reason)\n        else:\n            reason = 'unknown'\n        raise SkipTest(reason)\n\n    def manager(self, cmd, obj_type, attrib=None, id=None, extend=None,\n                level=logging.INFO, sudo=None, runas=None, logerr=True):\n        \"\"\"\n        issue a management command to the server, e.g. to set an\n        attribute\n        Returns 0 for success and a non-zero number for failure\n        :param cmd: The command to issue,\n                    ``MGR_CMD_[SET, UNSET, LIST, ...]`` see pbs_ifl.h\n        :type cmd: str\n        :param obj_type: The type of object to query, one of\n                         the * objects\n        :param attrib: Attributes to operate on, can be a string, a\n                       list, a dictionary\n        :type attrib: str or list or dictionary\n        :param id: The name or list of names of the object(s) to act\n                   upon.\n        :type id: str or list\n        :param extend: Optional extension to the IFL call. see\n                       pbs_ifl.h\n        :type extend: str or None\n        :param level: logging level\n        :param sudo: If True, run the manager command as super user.\n                     Defaults to None. 
Some attribute settings
                     should be run with sudo set to True; these are
                     acl_roots, job_sort_formula, hook operations, and
                     no_sched_hook_event. In those cases, setting
                     sudo to False is only needed for testing
                     purposes.
        :type sudo: bool
        :param runas: run as user
        :type runas: str
        :param logerr: If False, CLI commands do not log error,
                       i.e. silent mode
        :type logerr: bool
        :raises: PbsManagerError
        """

        if cmd == MGR_CMD_DELETE and obj_type == NODE and id != '@default':
            for cmom, momobj in self.moms.items():
                if momobj.is_cpuset_mom():
                    self.skipTest("Do not delete nodes on cpuset moms")

        if ((cmd == MGR_CMD_SET or cmd == MGR_CMD_CREATE) and
                id is not None and obj_type == NODE):
            for cmom, momobj in self.moms.items():
                if momobj.is_cpuset_mom() and attrib:
                    momobj.check_mem_request(attrib)
                    if len(attrib) == 0:
                        return True
                    vnodes = self.status(HOST, id=momobj.shortname)
                    del vnodes[0]  # don't set anything on a natural node
                    for vn in vnodes:
                        momobj.check_ncpus_request(attrib, vn)
                    if len(attrib) == 0:
                        return True

        if isinstance(id, str):
            oid = id.split(',')
        else:
            oid = id

        self.logit('manager on ' + self.shortname +
                   [' as ' + str(runas), ''][runas is None] + ': ' +
                   PBS_CMD_MAP[cmd] + ' ', obj_type, attrib, oid, level=level)

        c = None  # connection handle

        if (self.get_op_mode() == PTL_CLI or
            sudo is not None or
    
        obj_type in (HOOK, PBS_HOOK) or\n            (attrib is not None and ('job_sort_formula' in attrib or\n                                     'acl_roots' in attrib or\n                                     'no_sched_hook_event' in attrib))):\n\n            execcmd = [PBS_CMD_MAP[cmd], PBS_OBJ_MAP[obj_type]]\n\n            if oid is not None:\n                if cmd == MGR_CMD_DELETE and obj_type == NODE and oid[0] == \"\":\n                    oid[0] = \"@default\"\n                execcmd += [\",\".join(oid)]\n\n            if attrib is not None and cmd != MGR_CMD_LIST:\n                if cmd == MGR_CMD_IMPORT:\n                    execcmd += [attrib['content-type'],\n                                attrib['content-encoding'],\n                                attrib['input-file']]\n                else:\n                    if isinstance(attrib, (dict, OrderedDict)):\n                        kvpairs = []\n                        for k, v in attrib.items():\n                            if isinstance(v, tuple):\n                                if v[0] == INCR:\n                                    op = '+='\n                                elif v[0] == DECR:\n                                    op = '-='\n                                else:\n                                    msg = 'Invalid operation: %s' % (v[0])\n                                    raise PbsManagerError(rc=1, rv=False,\n                                                          msg=msg)\n                                v = v[1]\n                            else:\n                                op = '='\n                            if isinstance(v, list) and not isinstance(v, str):\n                                # handle string arrays with strings\n                                # that contain special characters\n                                # with multiple manager calls\n                                if any((c in vv) for c in set(', \\'\\n\"')\n                                      
 for vv in v):\n                                    if op == '+=' or op == '=':\n                                        oper = INCR\n                                    else:\n                                        oper = DECR\n                                    for vv in v:\n                                        a = {k: (oper, vv)}\n                                        rc = self.manager(cmd=cmd,\n                                                          obj_type=obj_type,\n                                                          attrib=a, id=id,\n                                                          extend=extend,\n                                                          level=level,\n                                                          sudo=sudo,\n                                                          runas=runas,\n                                                          logerr=logerr)\n                                        if rc:\n                                            return rc\n                                    return 0\n                                # if there are no special characters, then\n                                # join the list and parse it normally.\n                                v = ','.join(v)\n                            if isinstance(v, str):\n                                # don't quote if already quoted\n                                if v[0] == v[-1] and v[0] in set('\"\\''):\n                                    pass\n                                # handle string arrays\n                                elif ',' in v and v[0] != '\"':\n                                    v = '\"' + v + '\"'\n                                # handle strings that need to be quoted\n                                elif any((c in v) for c in set(', \\'\\n\"')):\n                                    if '\"' in v:\n                                        v = \"'%s'\" % v\n                                    else:\n           
                             v = '\"%s\"' % v\n                            kvpairs += [str(k) + op + str(v)]\n                        if kvpairs:\n                            execcmd += [\",\".join(kvpairs)]\n                            del kvpairs\n                    elif isinstance(attrib, list):\n                        execcmd += [\",\".join(attrib)]\n                    elif isinstance(attrib, str):\n                        execcmd += [attrib]\n\n            if not self.default_pbs_conf or not self.default_client_pbs_conf:\n                as_script = True\n            else:\n                as_script = False\n\n            if (not self._is_local or as_script or\n                (runas and\n                 not self.du.is_localhost(PbsUser.get_user(runas).host))):\n                execcmd = '\\'' + \" \".join(execcmd) + '\\''\n            else:\n                execcmd = \" \".join(execcmd)\n\n            # Hooks can only be queried as a privileged user on the host where\n            # the server is running, care must be taken to use the appropriate\n            # path to qmgr and appropriate escaping sequences\n            # VERSION INFO: no_sched_hook_event introduced in 11.3.120 only\n            if sudo is None:\n                if (obj_type in (HOOK, PBS_HOOK) or\n                    (attrib is not None and\n                     ('job_sort_formula' in attrib or\n                      'acl_roots' in attrib or\n                      'no_sched_hook_event' in attrib))):\n                    sudo = True\n                else:\n                    sudo = False\n\n            pcmd = [os.path.join(self.pbs_conf['PBS_EXEC'], 'bin', 'qmgr'),\n                    '-c', execcmd]\n\n            if as_script:\n                pcmd = ['PBS_CONF_FILE=' + self.client_pbs_conf_file] + pcmd\n\n            ret = self.du.run_cmd(self.hostname, pcmd, sudo=sudo, runas=runas,\n                                  level=logging.INFOCLI, as_script=as_script,\n                        
          logerr=logerr)
            rc = ret['rc']
            # NOTE: workaround for the fact that qmgr overloads the return
            # code: when the list returned is empty, an error flag is set even
            # though there is no error. Handled here by checking whether both
            # the out and err messages are empty, in which case the return
            # code is reset to 0
            if rc != 0 and (ret['out'] == [''] and ret['err'] == ['']):
                rc = 0
            if rc == 0:
                if cmd == MGR_CMD_LIST:
                    bsl = self.utils.convert_to_dictlist(ret['out'],
                                                         mergelines=True,
                                                         obj_type=obj_type)
                    # Since we stat everything, overwrite the cache
                    self.update_attributes(obj_type, bsl, overwrite=True)
                    # Filter out the attributes requested
                    if attrib:
                        bsl_attr = []
                        for obj in bsl:
                            dnew = {}
                            for k in obj.keys():
                                if k in attrib:
                                    dnew[k] = obj[k]
                            bsl_attr.append(dnew)
                        bsl = bsl_attr
            else:
                # Need to rework setting error, this is not thread safe
                self.last_error = ret['err']
            self.last_rc = ret['rc']
        elif runas is not None:
            _data = {'cmd': cmd, 'obj_type': obj_type, 'attrib': attrib,
                     'id': oid}
            rc = self.pbs_api_as('manager', user=runas, data=_data,
                                 extend=extend)
        else:
            a = self.utils.convert_to_attropl(attrib, cmd)
            c = self._connect(self.hostname)
            rc = 0
            if obj_type == SERVER and oid is None:
                oid 
= [self.hostname]
            if oid is None:
                # server will run strlen on id, it can not be NULL
                oid = ['']
            # oid is guaranteed to be a non-empty list at this point
            if cmd == MGR_CMD_LIST:
                bsl = None
                for i in oid:
                    tmpbsl = self.status(obj_type, attrib, i, extend)
                    if tmpbsl is None:
                        rc = 1
                    elif bsl is None:
                        bsl = tmpbsl
                    else:
                        bsl += tmpbsl
            else:
                for i in oid:
                    rc = pbs_manager(c, cmd, obj_type, i, a, extend)
                    if rc != 0:
                        break

        if id is None and obj_type == SERVER:
            id = self.pbs_conf['PBS_SERVER']
        bs_list = []
        if cmd == MGR_CMD_DELETE and oid is not None:
            if rc == 0:
                for i in oid:
                    if obj_type == MGR_OBJ_HOOK and i in self.hooks:
                        del self.hooks[i]
                    if obj_type in (NODE, VNODE) and i in self.nodes:
                        del self.nodes[i]
                    if obj_type == MGR_OBJ_QUEUE and i in self.queues:
                        del self.queues[i]
                    if obj_type == MGR_OBJ_RSC and i in self.resources:
                        del self.resources[i]
                    if obj_type == SCHED and i in self.schedulers:
                        del 
self.schedulers[i]\n            else:\n                if obj_type == MGR_OBJ_RSC:\n                    res_ret = self.du.run_cmd(cmd=[\n                        os.path.join(\n                            self.pbs_conf['PBS_EXEC'],\n                            'bin',\n                            'qmgr'),\n                        '-c',\n                        \"list resource\"],\n                        logerr=True)\n                    ress = [x.split()[1].strip()\n                            for x in res_ret['out'] if 'Resource' in x]\n                    tmp_res = copy.deepcopy(self.resources)\n                    for i in tmp_res:\n                        if i not in ress:\n                            del self.resources[i]\n\n        elif cmd == MGR_CMD_SET and rc == 0 and id is not None:\n            if isinstance(id, list):\n                for name in id:\n                    tbsl = copy.deepcopy(attrib)\n                    tbsl['name'] = name\n                    bs_list.append(tbsl)\n                    self.update_attributes(obj_type, bs_list)\n            else:\n                tbsl = copy.deepcopy(attrib)\n                tbsl['id'] = id\n                bs_list.append(tbsl)\n                self.update_attributes(obj_type, bs_list)\n\n        elif cmd == MGR_CMD_CREATE and rc == 0:\n            if isinstance(id, list):\n                for name in id:\n                    bsl = self.status(obj_type, id=name, extend=extend)\n                    self.update_attributes(obj_type, bsl)\n            else:\n                bsl = self.status(obj_type, id=id, extend=extend)\n                self.update_attributes(obj_type, bsl)\n\n        if rc != 0:\n            raise PbsManagerError(rv=False, rc=rc, msg=self.geterrmsg(),\n                                  post=self._disconnect, conn=c)\n\n        if c is not None:\n            self._disconnect(c)\n        if cmd == MGR_CMD_SET and 'scheduling' in attrib:\n            if attrib['scheduling'] in PTL_FALSE:\n    
            if obj_type == SERVER:\n                    sname = 'default'\n                else:\n                    sname = id\n\n                # Default max cycle length is 1200 seconds (20m)\n                self.expect(SCHED, {'state': 'scheduling'}, op=NE, id=sname,\n                            interval=1, max_attempts=1200,\n                            trigger_sched_cycle=False)\n        return rc\n\n    def sigjob(self, jobid=None, signal=None, extend=None, runas=None,\n               logerr=True):\n        \"\"\"\n        Send a signal to a job. Raises ``PbsSignalError`` on error.\n        :param jobid: identifier of the job or list of jobs to send\n                      the signal to\n        :type jobid: str or list\n        :param signal: The signal to send to the job, see pbs_ifl.h\n        :type signal: str or None\n        :param extend: extend options\n        :param runas: run as user\n        :type runas: str or None\n        :param logerr: If True (default) logs run_cmd errors\n        :type logerr: bool\n        :raises: PbsSignalError\n        \"\"\"\n\n        prefix = 'signal on ' + self.shortname\n        if runas is not None:\n            prefix += ' as ' + str(runas)\n        prefix += ': '\n        if jobid is not None:\n            if not isinstance(jobid, list):\n                jobid = jobid.split(',')\n            prefix += ', '.join(jobid)\n        if signal is not None:\n            prefix += ' with signal = ' + str(signal)\n        self.logger.info(prefix)\n\n        c = None\n        if self.get_op_mode() == PTL_CLI:\n            pcmd = [os.path.join(self.client_conf['PBS_EXEC'], 'bin', 'qsig')]\n            if signal is not None:\n                pcmd += ['-s']\n                if signal != PTL_NOARG:\n                    pcmd += [str(signal)]\n            if jobid is not None:\n                pcmd += jobid\n            if not self.default_client_pbs_conf:\n                pcmd = ['PBS_CONF_FILE=' + self.client_pbs_conf_file] 
+ pcmd\n                as_script = True\n            else:\n                as_script = False\n            ret = self.du.run_cmd(self.client, pcmd, runas=runas,\n                                  as_script=as_script, level=logging.INFOCLI,\n                                  logerr=logerr)\n            rc = ret['rc']\n            if ret['err'] != ['']:\n                self.last_error = ret['err']\n            self.last_rc = rc\n        elif runas is not None:\n            rc = self.pbs_api_as('sigjob', jobid, runas, data=signal)\n        else:\n            c = self._connect(self.hostname)\n            rc = 0\n            for ajob in jobid:\n                tmp_rc = pbs_sigjob(c, ajob, signal, extend)\n                if tmp_rc != 0:\n                    rc = tmp_rc\n        if rc != 0:\n            raise PbsSignalError(rc=rc, rv=False, msg=self.geterrmsg(),\n                                 post=self._disconnect, conn=c)\n\n        if c:\n            self._disconnect(c)\n\n        return rc\n\n    def msgjob(self, jobid=None, to_file=None, msg=None, extend=None,\n               runas=None, logerr=True):\n        \"\"\"\n        Send a message to a job. 
Raises ``PbsMessageError`` on\n        error.\n        :param jobid: identifier of the job or list of jobs to\n                      send the message to\n        :type jobid: str or List\n        :param msg: The message to send to the job\n        :type msg: str or None\n        :param to_file: one of ``MSG_ERR`` or ``MSG_OUT`` or\n                        ``MSG_ERR|MSG_OUT``\n        :type to_file: str or None\n        :param extend: extend options\n        :param runas: run as user\n        :type runas: str or None\n        :param logerr: If True (default) logs run_cmd errors\n        :type logerr: bool\n        :raises: PbsMessageError\n        \"\"\"\n        prefix = 'msgjob on ' + self.shortname\n        if runas is not None:\n            prefix += ' as ' + str(runas)\n        prefix += ': '\n        if jobid is not None:\n            if not isinstance(jobid, list):\n                jobid = jobid.split(',')\n            prefix += ', '.join(jobid)\n        if to_file is not None:\n            prefix += ' with to_file = '\n            if MSG_ERR == to_file:\n                prefix += 'MSG_ERR'\n            elif MSG_OUT == to_file:\n                prefix += 'MSG_OUT'\n            elif MSG_OUT | MSG_ERR == to_file:\n                prefix += 'MSG_ERR|MSG_OUT'\n            else:\n                prefix += str(to_file)\n        if msg is not None:\n            prefix += ' msg = %s' % (str(msg))\n        if extend is not None:\n            prefix += ' extend = %s' % (str(extend))\n        self.logger.info(prefix)\n\n        c = None\n        if self.get_op_mode() == PTL_CLI:\n            pcmd = [os.path.join(self.client_conf['PBS_EXEC'], 'bin', 'qmsg')]\n            if to_file is not None:\n                if MSG_ERR == to_file:\n                    pcmd += ['-E']\n                elif MSG_OUT == to_file:\n                    pcmd += ['-O']\n                elif MSG_OUT | MSG_ERR == to_file:\n                    pcmd += ['-E', '-O']\n                else:\n          
          pcmd += ['-' + str(to_file)]
            if msg is not None:
                pcmd += [msg]
            if jobid is not None:
                pcmd += jobid
            if not self.default_client_pbs_conf:
                pcmd = ['PBS_CONF_FILE=' + self.client_pbs_conf_file] + pcmd
                as_script = True
            else:
                as_script = False

            ret = self.du.run_cmd(self.client, pcmd, runas=runas,
                                  as_script=as_script, level=logging.INFOCLI,
                                  logerr=logerr)
            rc = ret['rc']
            if ret['err'] != ['']:
                self.last_error = ret['err']
            self.last_rc = rc
        elif runas is not None:
            data = {'msg': msg, 'to_file': to_file}
            rc = self.pbs_api_as('msgjob', jobid, runas, data=data,
                                 extend=extend)
        else:
            c = self._connect(self.hostname)
            if c < 0:
                return c
            rc = 0
            for ajob in jobid:
                tmp_rc = pbs_msgjob(c, ajob, to_file, msg, extend)
                if tmp_rc != 0:
                    rc = tmp_rc

        if rc != 0:
            raise PbsMessageError(rc=rc, rv=False, msg=self.geterrmsg(),
                                  post=self._disconnect, conn=c)

        if c:
            self._disconnect(c)

        return rc

    def alterjob(self, jobid=None, attrib=None, extend=None, runas=None,
                 logerr=True):
        """
        Alter attributes associated to a job. 
Raises\n        ``PbsAlterError`` on error.\n        :param jobid: identifier of the job or list of jobs to\n                      operate on\n        :type jobid: str or list\n        :param attrib: A dictionary of attributes to set\n        :type attrib: dictionary\n        :param extend: extend options\n        :param runas: run as user\n        :type runas: str or None\n        :param logerr: If False, CLI commands do not log error,\n                       i.e. silent mode\n        :type logerr: bool\n        :raises: PbsAlterError\n        \"\"\"\n        prefix = 'alter on ' + self.shortname\n        if runas is not None:\n            prefix += ' as ' + str(runas)\n        prefix += ': '\n        if jobid is not None:\n            if not isinstance(jobid, list):\n                jobid = jobid.split(',')\n            prefix += ', '.join(jobid)\n        if attrib is not None:\n            prefix += ' %s' % (str(attrib))\n        self.logger.info(prefix)\n\n        c = None\n        if self.get_op_mode() == PTL_CLI:\n            pcmd = [os.path.join(self.client_conf['PBS_EXEC'], 'bin',\n                                 'qalter')]\n            if attrib is not None:\n                _conf = self.default_client_pbs_conf\n                pcmd += self.utils.convert_to_cli(attrib, op=IFL_ALTER,\n                                                  hostname=self.client,\n                                                  dflt_conf=_conf)\n            if jobid is not None:\n                pcmd += jobid\n            if not self.default_client_pbs_conf:\n                pcmd = ['PBS_CONF_FILE=' + self.client_pbs_conf_file] + pcmd\n                as_script = True\n            else:\n                as_script = False\n            ret = self.du.run_cmd(self.client, pcmd, runas=runas,\n                                  as_script=as_script, level=logging.INFOCLI,\n                                  logerr=logerr)\n            rc = ret['rc']\n            if ret['err'] != ['']:\n  
              self.last_error = ret['err']\n            self.last_rc = rc\n        elif runas is not None:\n            rc = self.pbs_api_as('alterjob', jobid, runas, data=attrib)\n        else:\n            c = self._connect(self.hostname)\n            if c < 0:\n                return c\n            a = self.utils.convert_to_attrl(attrib)\n            rc = 0\n            for ajob in jobid:\n                tmp_rc = pbs_alterjob(c, ajob, a, extend)\n                if tmp_rc != 0:\n                    rc = tmp_rc\n        if rc != 0:\n            raise PbsAlterError(rc=rc, rv=False, msg=self.geterrmsg(),\n                                post=self._disconnect, conn=c)\n\n        if c:\n            self._disconnect(c)\n\n        return rc\n\n    def alterresv(self, resvid, attrib, extend=None, runas=None,\n                  logerr=True):\n        \"\"\"\n        Alter attributes associated to a reservation. Raises\n        ``PbsResvAlterError`` on error.\n        :param resvid: identifier of the reservation.\n        :type resvid: str.\n        :param attrib: A dictionary of attributes to set.\n        :type attrib: dictionary.\n        :param extend: extend options.\n        :param runas: run as user.\n        :type runas: str or None.\n        :param logerr: If False, CLI commands do not log error,\n                       i.e. 
silent mode.\n        :type logerr: bool.\n        :raises: PbsResvAlterError.\n        \"\"\"\n        prefix = 'reservation alter on ' + self.shortname\n        if runas is not None:\n            prefix += ' as ' + str(runas)\n        prefix += ': ' + resvid\n\n        if attrib is not None:\n            prefix += ' %s' % (str(attrib))\n        self.logger.info(prefix)\n\n        c = None\n        resvid = resvid.split()\n        if self.get_op_mode() == PTL_CLI:\n            pcmd = [os.path.join(self.client_conf['PBS_EXEC'], 'bin',\n                                 'pbs_ralter')]\n            if attrib is not None:\n                if extend is not None:\n                    attrib['extend'] = extend\n                _conf = self.default_client_pbs_conf\n                pcmd += self.utils.convert_to_cli(attrib, op=IFL_RALTER,\n                                                  hostname=self.client,\n                                                  dflt_conf=_conf)\n            pcmd += resvid\n            if not self.default_client_pbs_conf:\n                pcmd = ['PBS_CONF_FILE=' + self.client_pbs_conf_file] + pcmd\n                as_script = True\n            else:\n                as_script = False\n            ret = self.du.run_cmd(self.client, pcmd, runas=runas,\n                                  as_script=as_script, level=logging.INFOCLI,\n                                  logerr=logerr)\n            rc = ret['rc']\n            if ret['err'] != ['']:\n                self.last_error = ret['err']\n            if ret['out'] != ['']:\n                self.last_out = ret['out']\n            self.last_rc = rc\n        elif runas is not None:\n            rc = self.pbs_api_as('alterresv', resvid, runas, data=attrib,\n                                 extend=extend)\n        else:\n            c = self._connect(self.hostname)\n            if c < 0:\n                return c\n            a = self.utils.convert_to_attrl(attrib)\n            rc = pbs_modify_resv(c, 
resvid, a, extend)

        if rc != 0:
            raise PbsResvAlterError(rc=rc, rv=False, msg=self.geterrmsg(),
                                    post=self._disconnect, conn=c)

        if c:
            self._disconnect(c)

        return rc

    def holdjob(self, jobid=None, holdtype=None, extend=None, runas=None,
                logerr=True):
        """
        Hold a job. Raises ``PbsHoldError`` on error.
        :param jobid: identifier of the job or list of jobs to hold
        :type jobid: str or list
        :param holdtype: The type of hold to put on the job
        :type holdtype: str or None
        :param extend: extend options
        :param runas: run as user
        :type runas: str or None
        :param logerr: If True (default) logs run_cmd errors
        :type logerr: bool
        :raises: PbsHoldError
        """
        prefix = 'holdjob on ' + self.shortname
        if runas is not None:
            prefix += ' as ' + str(runas)
        prefix += ': '
        if jobid is not None:
            if not isinstance(jobid, list):
                jobid = jobid.split(',')
            prefix += ', '.join(jobid)
        if holdtype is not None:
            prefix += ' with hold_list = %s' % (holdtype)
        self.logger.info(prefix)

        c = None
        if self.get_op_mode() == PTL_CLI:
            pcmd = [os.path.join(self.client_conf['PBS_EXEC'], 'bin', 'qhold')]
            if holdtype is not None:
                pcmd += ['-h']
                if holdtype != PTL_NOARG:
                    pcmd += [holdtype]
            if jobid is not None:
                pcmd += jobid
            if not self.default_client_pbs_conf:
                pcmd = ['PBS_CONF_FILE=' + self.client_pbs_conf_file] + pcmd
                as_script = True
            else:
                as_script = False
            ret = self.du.run_cmd(self.client, pcmd, runas=runas,
                      
            logerr=logerr, as_script=as_script,\n                                  level=logging.INFOCLI)\n            rc = ret['rc']\n            if ret['err'] != ['']:\n                self.last_error = ret['err']\n            self.last_rc = rc\n        elif runas is not None:\n            rc = self.pbs_api_as('holdjob', jobid, runas, data=holdtype,\n                                 logerr=logerr)\n        else:\n            c = self._connect(self.hostname)\n            if c < 0:\n                return c\n            rc = 0\n            for ajob in jobid:\n                tmp_rc = pbs_holdjob(c, ajob, holdtype, extend)\n                if tmp_rc != 0:\n                    rc = tmp_rc\n        if rc != 0:\n            raise PbsHoldError(rc=rc, rv=False, msg=self.geterrmsg(),\n                               post=self._disconnect, conn=c)\n\n        if c:\n            self._disconnect(c)\n\n        return rc\n\n    def rlsjob(self, jobid, holdtype, extend=None, runas=None, logerr=True):\n        \"\"\"\n        Release a job. 
Raises ``PbsReleaseError`` on error.\n        :param jobid: job or list of jobs to release\n        :type jobid: str or list\n        :param holdtype: The type of hold to release on the job\n        :type holdtype: str\n        :param extend: extend options\n        :param runas: run as user\n        :type runas: str or None\n        :param logerr: If True (default) logs run_cmd errors\n        :type logerr: bool\n        :raises: PbsReleaseError\n        \"\"\"\n        prefix = 'release on ' + self.shortname\n        if runas is not None:\n            prefix += ' as ' + str(runas)\n        prefix += ': '\n        if jobid is not None:\n            if not isinstance(jobid, list):\n                jobid = jobid.split(',')\n            prefix += ', '.join(jobid)\n        if holdtype is not None:\n            prefix += ' with hold_list = %s' % (holdtype)\n        self.logger.info(prefix)\n\n        c = None\n        if self.get_op_mode() == PTL_CLI:\n            pcmd = [os.path.join(self.client_conf['PBS_EXEC'], 'bin', 'qrls')]\n            if holdtype is not None:\n                pcmd += ['-h']\n                if holdtype != PTL_NOARG:\n                    pcmd += [holdtype]\n            if jobid is not None:\n                pcmd += jobid\n            if not self.default_client_pbs_conf:\n                pcmd = ['PBS_CONF_FILE=' + self.client_pbs_conf_file] + pcmd\n                as_script = True\n            else:\n                as_script = False\n            ret = self.du.run_cmd(self.client, pcmd, runas=runas,\n                                  as_script=as_script, level=logging.INFOCLI,\n                                  logerr=logerr)\n            rc = ret['rc']\n            if ret['err'] != ['']:\n                self.last_error = ret['err']\n            self.last_rc = rc\n        elif runas is not None:\n            rc = self.pbs_api_as('rlsjob', jobid, runas, data=holdtype)\n        else:\n            c = self._connect(self.hostname)\n            if c 
< 0:
                return c
            rc = 0
            for ajob in jobid:
                tmp_rc = pbs_rlsjob(c, ajob, holdtype, extend)
                if tmp_rc != 0:
                    rc = tmp_rc
        if rc != 0:
            raise PbsReleaseError(rc=rc, rv=False, msg=self.geterrmsg(),
                                  post=self._disconnect, conn=c)

        if c:
            self._disconnect(c)

        return rc

    def rerunjob(self, jobid=None, extend=None, runas=None, logerr=True):
        """
        Rerun a job. Raises ``PbsRerunError`` on error.
        :param jobid: job or list of jobs to rerun
        :type jobid: str or list
        :param extend: extend options
        :param runas: run as user
        :type runas: str or None
        :param logerr: If True (default) logs run_cmd errors
        :type logerr: bool
        :raises: PbsRerunError
        """
        prefix = 'rerun on ' + self.shortname
        if runas is not None:
            prefix += ' as ' + str(runas)
        prefix += ': '
        if jobid is not None:
            if not isinstance(jobid, list):
                jobid = jobid.split(',')
            prefix += ', '.join(jobid)
        if extend is not None:
            prefix += extend
        self.logger.info(prefix)

        c = None
        if self.get_op_mode() == PTL_CLI:
            pcmd = [os.path.join(self.client_conf['PBS_EXEC'], 'bin',
                                 'qrerun')]
            if extend:
                pcmd += ['-W', extend]
            if jobid is not None:
                pcmd += jobid
            if not self.default_client_pbs_conf:
                pcmd = ['PBS_CONF_FILE=' + self.client_pbs_conf_file] + pcmd
                as_script = True
            else:
                as_script = False
            ret = self.du.run_cmd(self.client, pcmd, runas=runas,
                                  as_script=as_script, level=logging.INFOCLI,
      
                            logerr=logerr)\n            rc = ret['rc']\n            if ret['err'] != ['']:\n                self.last_error = ret['err']\n            self.last_rc = rc\n        elif runas is not None:\n            rc = self.pbs_api_as('rerunjob', jobid, runas, extend=extend)\n        else:\n            c = self._connect(self.hostname)\n            if c < 0:\n                return c\n            rc = 0\n            for ajob in jobid:\n                tmp_rc = pbs_rerunjob(c, ajob, extend)\n                if tmp_rc != 0:\n                    rc = tmp_rc\n        if rc != 0:\n            raise PbsRerunError(rc=rc, rv=False, msg=self.geterrmsg(),\n                                post=self._disconnect, conn=c)\n\n        if c:\n            self._disconnect(c)\n\n        return rc\n\n    def orderjob(self, jobid1=None, jobid2=None, extend=None, runas=None,\n                 logerr=True):\n        \"\"\"\n        Reorder the position of ``jobid1`` and ``jobid2``. Raises\n        ``PbsOrderError`` on error.\n        :param jobid1: first jobid\n        :type jobid1: str or None\n        :param jobid2: second jobid\n        :type jobid2: str or None\n        :param extend: extend options\n        :param runas: run as user\n        :type runas: str or None\n        :param logerr: If True (default) logs run_cmd errors\n        :type logerr: bool\n        :raises: PbsOrderError\n        \"\"\"\n        prefix = 'orderjob on ' + self.shortname\n        if runas is not None:\n            prefix += ' as ' + str(runas)\n        prefix += ': '\n        prefix += str(jobid1) + ', ' + str(jobid2)\n        if extend is not None:\n            prefix += ' ' + str(extend)\n        self.logger.info(prefix)\n\n        c = None\n        if self.get_op_mode() == PTL_CLI:\n            pcmd = [os.path.join(self.client_conf['PBS_EXEC'], 'bin',\n                                 'qorder')]\n            if jobid1 is not None:\n                pcmd += [jobid1]\n            if jobid2 is not 
None:\n                pcmd += [jobid2]\n            if not self.default_client_pbs_conf:\n                pcmd = ['PBS_CONF_FILE=' + self.client_pbs_conf_file] + pcmd\n                as_script = True\n            else:\n                as_script = False\n            ret = self.du.run_cmd(self.client, pcmd, runas=runas,\n                                  as_script=as_script, level=logging.INFOCLI,\n                                  logerr=logerr)\n            rc = ret['rc']\n            if ret['err'] != ['']:\n                self.last_error = ret['err']\n            self.last_rc = rc\n        elif runas is not None:\n            rc = self.pbs_api_as('orderjob', jobid1, runas, data=jobid2,\n                                 extend=extend)\n        else:\n            c = self._connect(self.hostname)\n            if c < 0:\n                return c\n            rc = pbs_orderjob(c, jobid1, jobid2, extend)\n        if rc != 0:\n            raise PbsOrderError(rc=rc, rv=False, msg=self.geterrmsg(),\n                                post=self._disconnect, conn=c)\n\n        if c:\n            self._disconnect(c)\n\n        return rc\n\n    def runjob(self, jobid=None, location=None, run_async=False, extend=None,\n               runas=None, logerr=False):\n        \"\"\"\n        Run a job on given nodes. 
Raises ``PbsRunError`` on error.\n        :param jobid: job or list of jobs to run\n        :type jobid: str or list\n        :param location: An execvnode on which to run the job\n        :type location: str or None\n        :param run_async: If True the call will return immediately\n                      assuming success.\n        :type run_async: bool\n        :param extend: extend options\n        :param runas: run as user\n        :type runas: str or None\n        :param logerr: If True logs run_cmd errors, defaults to False\n        :type logerr: bool\n        :raises: PbsRunError\n        \"\"\"\n        if run_async:\n            prefix = 'Async run on ' + self.shortname\n        else:\n            prefix = 'run on ' + self.shortname\n        if runas is not None:\n            prefix += ' as ' + str(runas)\n        prefix += ': '\n        if jobid is not None:\n            if not isinstance(jobid, list):\n                jobid = jobid.split(',')\n            prefix += ', '.join(jobid)\n        if location is not None:\n            prefix += ' with location = %s' % (location)\n        self.logger.info(prefix)\n\n        if self.has_snap:\n            return 0\n\n        c = None\n        if self.get_op_mode() == PTL_CLI:\n            pcmd = [os.path.join(self.client_conf['PBS_EXEC'], 'bin', 'qrun')]\n            if run_async:\n                pcmd += ['-a']\n            if location is not None:\n                pcmd += ['-H']\n                if location != PTL_NOARG:\n                    pcmd += [location]\n            if jobid:\n                pcmd += jobid\n            if not self.default_client_pbs_conf:\n                pcmd = ['PBS_CONF_FILE=' + self.client_pbs_conf_file] + pcmd\n                as_script = True\n            else:\n                as_script = False\n            ret = self.du.run_cmd(self.client, pcmd, runas=runas,\n                                  as_script=as_script, level=logging.INFOCLI,\n                                  logerr=logerr)\n  
          rc = ret['rc']\n            if ret['err'] != ['']:\n                self.last_error = ret['err']\n            self.last_rc = rc\n        elif runas is not None:\n            rc = self.pbs_api_as(\n                'runjob', jobid, runas, data=location, extend=extend)\n        else:\n            c = self._connect(self.hostname)\n            if c < 0:\n                return c\n            rc = 0\n            for ajob in jobid:\n                if run_async:\n                    tmp_rc = pbs_asyrunjob(c, ajob, location, extend)\n                else:\n                    tmp_rc = pbs_runjob(c, ajob, location, extend)\n                if tmp_rc != 0:\n                    rc = tmp_rc\n        if rc != 0:\n            raise PbsRunError(rc=rc, rv=False, msg=self.geterrmsg(),\n                              post=self._disconnect, conn=c)\n\n        if c:\n            self._disconnect(c)\n\n        return rc\n\n    def movejob(self, jobid=None, destination=None, extend=None, runas=None,\n                logerr=True):\n        \"\"\"\n        Move a job or list of job ids to a given destination queue.\n        Raises ``PbsMoveError`` on error.\n        :param jobid: A job or list of job ids to move\n        :type jobid: str or list\n        :param destination: The destination queue@server\n        :type destination: str or None\n        :param extend: extend options\n        :param runas: run as user\n        :type runas: str or None\n        :param logerr: If True (default) logs run_cmd errors\n        :type logerr: bool\n        :raises: PbsMoveError\n        \"\"\"\n        prefix = 'movejob on ' + self.shortname\n        if runas is not None:\n            prefix += ' as ' + str(runas)\n        prefix += ': '\n        if jobid is not None:\n            if not isinstance(jobid, list):\n                jobid = jobid.split(',')\n            prefix += ', '.join(jobid)\n        if destination is not None:\n            prefix += ' destination = %s' % (destination)\n    
    self.logger.info(prefix)\n\n        c = None\n        rc = 0\n\n        if self.get_op_mode() == PTL_CLI:\n            pcmd = [os.path.join(self.client_conf['PBS_EXEC'], 'bin', 'qmove')]\n            if destination is not None:\n                pcmd += [destination]\n            if jobid is not None:\n                pcmd += jobid\n            if not self.default_client_pbs_conf:\n                pcmd = ['PBS_CONF_FILE=' + self.client_pbs_conf_file] + pcmd\n                as_script = True\n            else:\n                as_script = False\n\n            ret = self.du.run_cmd(self.client, pcmd, runas=runas,\n                                  logerr=logerr, as_script=as_script,\n                                  level=logging.INFOCLI)\n            rc = ret['rc']\n            if ret['err'] != ['']:\n                self.last_error = ret['err']\n            self.last_rc = rc\n        elif runas is not None:\n            rc = self.pbs_api_as('movejob', jobid, runas, data=destination,\n                                 extend=extend)\n        else:\n            c = self._connect(self.hostname)\n            if c < 0:\n                return c\n            for ajob in jobid:\n                tmp_rc = pbs_movejob(c, ajob, destination, extend)\n                if tmp_rc != 0:\n                    rc = tmp_rc\n\n        if rc != 0:\n            raise PbsMoveError(rc=rc, rv=False, msg=self.geterrmsg(),\n                               post=self._disconnect, conn=c)\n\n        if c:\n            self._disconnect(c)\n\n        return rc\n\n    def qterm(self, manner=None, extend=None, server_name=None, runas=None,\n              logerr=True):\n        \"\"\"\n        Terminate the ``pbs_server`` daemon\n        :param manner: one of ``(SHUT_IMMEDIATE | SHUT_DELAY |\n                       SHUT_QUICK)`` and can be\\\n                       combined with SHUT_WHO_SCHED, SHUT_WHO_MOM,\n                       SHUT_WHO_SECDRY, \\\n                       SHUT_WHO_IDLESECDRY, 
SHUT_WHO_SECDONLY. \\\n        :param extend: extend options\n        :param server_name: name of the pbs server\n        :type server_name: str or None\n        :param runas: run as user\n        :type runas: str or None\n        :param logerr: If True (default) logs run_cmd errors\n        :type logerr: bool\n        :raises: PbsQtermError\n        \"\"\"\n        prefix = 'terminate ' + self.shortname\n        if runas is not None:\n            prefix += ' as ' + str(runas)\n        prefix += ': with manner '\n        attrs = manner\n        if attrs is None:\n            prefix += \"None \"\n        elif isinstance(attrs, str):\n            prefix += attrs\n        else:\n            if ((attrs & SHUT_QUICK) == SHUT_QUICK):\n                prefix += \"quick \"\n            if ((attrs & SHUT_IMMEDIATE) == SHUT_IMMEDIATE):\n                prefix += \"immediate \"\n            if ((attrs & SHUT_DELAY) == SHUT_DELAY):\n                prefix += \"delay \"\n            if ((attrs & SHUT_WHO_SCHED) == SHUT_WHO_SCHED):\n                prefix += \"scheduler \"\n            if ((attrs & SHUT_WHO_MOM) == SHUT_WHO_MOM):\n                prefix += \"mom \"\n            if ((attrs & SHUT_WHO_SECDRY) == SHUT_WHO_SECDRY):\n                prefix += \"secondary server \"\n            if ((attrs & SHUT_WHO_IDLESECDRY) == SHUT_WHO_IDLESECDRY):\n                prefix += \"idle secondary \"\n            if ((attrs & SHUT_WHO_SECDONLY) == SHUT_WHO_SECDONLY):\n                prefix += \"shutdown secondary only \"\n\n        self.logger.info(prefix)\n\n        if self.has_snap:\n            return 0\n\n        c = None\n        rc = 0\n\n        if self.get_op_mode() == PTL_CLI:\n            pcmd = [os.path.join(self.client_conf['PBS_EXEC'], 'bin', 'qterm')]\n            _conf = self.default_client_pbs_conf\n            pcmd += self.utils.convert_to_cli(manner, op=IFL_TERMINATE,\n                                              hostname=self.hostname,\n                             
                 dflt_conf=_conf)\n            if server_name is not None:\n                pcmd += [server_name]\n\n            if not self.default_client_pbs_conf:\n                pcmd = ['PBS_CONF_FILE=' + self.client_pbs_conf_file] + pcmd\n                as_script = True\n            else:\n                as_script = False\n\n            ret = self.du.run_cmd(self.client, pcmd, runas=runas,\n                                  level=logging.INFOCLI, as_script=as_script)\n            rc = ret['rc']\n            if ret['err'] != ['']:\n                self.last_error = ret['err']\n            self.last_rc = rc\n        elif runas is not None:\n            attrs = {'manner': manner, 'server_name': server_name}\n            rc = self.pbs_api_as('terminate', None, runas, data=attrs,\n                                 extend=extend)\n        else:\n            if server_name is None:\n                server_name = self.hostname\n            c = self._connect(self.hostname)\n            rc = pbs_terminate(c, manner, extend)\n        if rc != 0:\n            raise PbsQtermError(rc=rc, rv=False, msg=self.geterrmsg(),\n                                post=self._disconnect, conn=c, force=True)\n\n        if c:\n            self._disconnect(c, force=True)\n\n        return rc\n    terminate = qterm\n    # keep the historical misspelling as an alias for backward compatibility\n    teminate = qterm\n\n    def geterrmsg(self):\n        \"\"\"\n        Get the error message\n        \"\"\"\n        mode = self.get_op_mode()\n        if mode == PTL_CLI:\n            return self.last_error\n        elif self._conn is not None and self._conn >= 0:\n            m = pbs_geterrmsg(self._conn)\n            if m is not None:\n                m = m.split('\\n')\n            return m\n"
  },
  {
    "path": "test/fw/ptl/utils/__init__.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n"
  },
  {
    "path": "test/fw/ptl/utils/pbs_anonutils.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport logging\nimport os\nimport copy\nimport shlex\nimport re\n\nfrom ptl.lib.pbs_testlib import (BatchUtils,\n                                 PbsTypeFGCLimit,\n                                 PbsAttribute)\nfrom ptl.lib.pbs_ifl_mock import *\nfrom ptl.utils.pbs_dshutils import DshUtils\n\n\nANON_USER_K = \"user\"\nANON_GROUP_K = \"group\"\nANON_HOST_K = \"host\"\nANON_JOBNAME_K = ATTR_name\nANON_ACCTNAME_K = ATTR_A\n\n\nclass PBSAnonymizer(object):\n\n    \"\"\"\n    Holds and controls anonymizing operations of PBS data\n\n    The anonymizer operates on attributes or resources.\n    Resources operate on the resource name itself rather than\n    the entire name, for example, to obfuscate the values associated\n    with a custom resource \"foo\" that could be set as resources_available.foo,\n    resources_default.foo, or Resource_List.foo, all that needs to be\n    passed in to the function is \"foo\" in the list to obfuscate.\n\n    :param attr_key: Attributes for which the attribute names themselves\n                    should be obfuscated\n    :type attr_key: list or None\n    :param attr_val: Attributes for which the values should be obfuscated\n    :type attr_val: list or None\n    :param resc_key: Resources for which the resource names themselves should\n                    be obfuscated\n    :type resc_key: list or None\n    :param resc_val: Resources for which the values should be obfuscated\n 
   :type resc_val: list or None\n    \"\"\"\n\n    logger = logging.getLogger(__name__)\n    utils = BatchUtils()\n    du = DshUtils()\n\n    def __init__(self, attr_delete=None, resc_delete=None,\n                 attr_key=None, attr_val=None,\n                 resc_key=None, resc_val=None):\n\n        # special cases\n        self._entity = False\n        self.job_sort_formula = None\n        self.schedselect = None\n        self.select = None\n\n        self.set_attr_delete(attr_delete)\n        self.set_resc_delete(resc_delete)\n        self.set_attr_key(attr_key)\n        self.set_attr_val(attr_val)\n        self.set_resc_key(resc_key)\n        self.set_resc_val(resc_val)\n        self.anonymize = self.anonymize_batch_status\n\n        # global anonymized mapping data\n        self.gmap_attr_val = {}\n        self.gmap_resc_val = {}\n        self.gmap_attr_key = {}\n        self.gmap_resc_key = {}\n        self.num_bad_acct_records = 0\n\n    def __get_anon_key(self, key, attr_map):\n        \"\"\"\n        Get an anonymized string for the 'key' belonging to attr_map\n\n        :param key: the key to anonymize\n        :type key: String\n        :param attr_map: the attr_map to which the key belongs\n        :type attr_map: dict\n\n        :returns: an anonymized string for the key\n        \"\"\"\n        key = self.__refactor_key(key)\n\n        if key in attr_map.keys():\n            anon_key = attr_map[key]\n        else:\n            anon_key = PbsAttribute.random_str(len(key))\n            attr_map[key] = anon_key\n\n        return anon_key\n\n    @staticmethod\n    def __refactor_key(key):\n        \"\"\"\n        There are some attributes which are aliases of each other\n        and others which are lists like user/group lists, lists of hosts etc.\n        Set a common key for them.\n        \"\"\"\n        key_lower = key.lower()\n        if \"user\" in key_lower or key == \"requestor\":\n            key = ANON_USER_K\n        elif \"group\" in 
key_lower:\n            key = ANON_GROUP_K\n        elif \"host\" in key_lower:\n            key = ANON_HOST_K\n        elif key == \"Name\" or key == \"Jobname\":\n            key = ANON_JOBNAME_K\n        elif key == \"account\":\n            key = ANON_ACCTNAME_K\n\n        return key\n\n    def __get_anon_value(self, key, value, kv_map):\n        \"\"\"\n        Get an anonymized string for the 'value' belonging to the kv_map\n        provided.\n        The kv_map will be in the following format:\n            key:{val1:anon_val1, val2:anon_val2, ...}\n\n        :param key: the key for this value\n        :type key: String\n        :param value: the value to anonymize\n        :type value: String\n        :param kv_map: the kv_map to which the key belongs\n        :type kv_map: dict\n\n        :returns: an anonymized string for the value\n        \"\"\"\n\n        if key == \"project\" and value == \"_pbs_project_default\":\n            return \"_pbs_project_default\"\n\n        # Deal with attributes which have a list of values\n        if key in (ATTR_u, ATTR_managers, ATTR_M, ATTR_g, ATTR_aclResvhost,\n                   ATTR_aclhost, ATTR_auth_g, ATTR_auth_u):\n            value_temp = \"\".join(value.split())\n            value_list = value_temp.split(\",\")\n        elif key == ATTR_exechost:\n            value_list = []\n            value_list_temp = value.split(\"+\")\n            for item in value_list_temp:\n                value_list.append(item.split(\"/\")[0])\n        else:\n            value_list = [value]\n\n        key = self.__refactor_key(key)\n\n        # Go through the list of values and anonymize each in the value string\n        for val in value_list:\n            if \"@\" in val:\n                # value is of type \"user@host\"\n                # anonymize the user and host parts separately\n                if ANON_HOST_K in self.attr_val:\n                    try:\n                        user, host = val.split(\"@\")\n                   
     host = self.__get_anon_value(ANON_HOST_K, host,\n                                                     self.gmap_attr_val)\n                        user = self.__get_anon_value(ANON_USER_K, user,\n                                                     self.gmap_attr_val)\n                        anon_val = user + \"@\" + host\n                        value = value.replace(val, anon_val)\n                        continue\n                    except Exception:\n                        pass\n\n            if key in kv_map:\n                value_map = kv_map[key]\n                anon_val = self.__get_anon_key(val, value_map)\n            else:\n                anon_val = PbsAttribute.random_str(len(val))\n                kv_map[key] = {val: anon_val}\n            value = value.replace(val, anon_val)\n\n        return value\n\n    def _initialize_key_map(self, keys):\n        k = {}\n        if keys is not None:\n            if isinstance(keys, dict):\n                return keys\n            elif isinstance(keys, list):\n                for i in keys:\n                    k[i] = None\n            elif isinstance(keys, str):\n                for i in keys.split(\",\"):\n                    k[i] = None\n            else:\n                self.logger.error(\"unhandled map type\")\n                k = {None: None}\n        return k\n\n    def _initialize_value_map(self, keys):\n        k = {}\n        if keys is not None:\n            if isinstance(keys, dict):\n                return keys\n            elif isinstance(keys, list):\n                for i in keys:\n                    k[i] = {}\n            elif isinstance(keys, str):\n                for i in keys.split(\",\"):\n                    k[i] = {}\n            else:\n                self.logger.error(\"unhandled map type\")\n                k = {None: None}\n        return k\n\n    def set_attr_delete(self, ad):\n        \"\"\"\n        Name of attributes to delete\n\n        :param ad: Attributes to delete\n 
       :type ad: str or list or dictionary\n        \"\"\"\n        self.attr_delete = self._initialize_value_map(ad)\n\n    def set_resc_delete(self, rd):\n        \"\"\"\n        Name of resources to delete\n\n        :param rd: Resources to delete\n        :type rd: str or list or dictionary\n        \"\"\"\n        self.resc_delete = self._initialize_value_map(rd)\n\n    def set_attr_key(self, ak):\n        \"\"\"\n        Name of attributes to obfuscate.\n\n        :param ak: Attribute keys\n        :type ak: str or list or dictionary\n        \"\"\"\n        self.attr_key = self._initialize_key_map(ak)\n\n    def set_attr_val(self, av):\n        \"\"\"\n        Name of attributes for which to obfuscate the value\n\n        :param av: Attributes value to obfuscate\n        :type av: str or list or dictionary\n        \"\"\"\n        self.attr_val = self._initialize_value_map(av)\n        if any(k in self.attr_val for k in (\"euser\", \"egroup\", \"project\")):\n            self._entity = True\n\n    def set_resc_key(self, rk):\n        \"\"\"\n        Name of resources to obfuscate\n\n        :param rk: Resource key\n        :type rk: str or list or dictionary\n        \"\"\"\n        self.resc_key = self._initialize_key_map(rk)\n\n    def set_resc_val(self, rv):\n        \"\"\"\n        Name of resources for which to obfuscate the value\n\n        :param rv: Resource value to obfuscate\n        :type rv: str or list or dictionary\n        \"\"\"\n        self.resc_val = self._initialize_value_map(rv)\n\n    def set_anon_map_file(self, name):\n        \"\"\"\n        Name of file in which to store anonymized map data.\n        This file is meant to remain private to a site as it\n        contains the sensitive anonymized data.\n\n        :param name: Name of file to which anonymized data to store.\n        :type name: str\n        \"\"\"\n        self.anon_map_file = name\n\n    def anonymize_resource_group(self, filename):\n        \"\"\"\n        Anonymize the user and 
group fields of a resource\n        group filename\n\n        :param filename: Resource group filename\n        :type filename: str\n        \"\"\"\n        anon_rg = []\n\n        try:\n            f = open(filename)\n            lines = f.readlines()\n            f.close()\n        except IOError:\n            self.logger.error(\"Error processing \" + filename)\n            return None\n\n        for data in lines:\n            data = data.strip()\n            if data:\n                if data[0] == \"#\":\n                    continue\n\n                _d = data.split()\n                ug = _d[0]\n                if \":\" in ug:\n                    (euser, egroup) = ug.split(\":\")\n                else:\n                    euser = ug\n                    egroup = None\n\n                if \"euser\" not in self.attr_val:\n                    anon_euser = euser\n                else:\n                    anon_euser = None\n                    if ANON_USER_K in self.gmap_attr_val:\n                        if euser in self.gmap_attr_val[ANON_USER_K]:\n                            anon_euser = self.gmap_attr_val[ANON_USER_K][euser]\n                    else:\n                        self.gmap_attr_val[ANON_USER_K] = {}\n\n                if euser is not None and anon_euser is None:\n                    anon_euser = PbsAttribute.random_str(len(euser))\n                    self.gmap_attr_val[ANON_USER_K][euser] = anon_euser\n\n                if \"egroup\" not in self.attr_val:\n                    anon_egroup = egroup\n                else:\n                    anon_egroup = None\n\n                    if egroup is not None:\n                        if ANON_GROUP_K in self.gmap_attr_val:\n                            if egroup in self.gmap_attr_val[ANON_GROUP_K]:\n                                anon_egroup = (self.gmap_attr_val[ANON_GROUP_K]\n                                               [egroup])\n                        else:\n                            
self.gmap_attr_val[ANON_GROUP_K] = {}\n\n                if egroup is not None and anon_egroup is None:\n                    anon_egroup = PbsAttribute.random_str(len(egroup))\n                    self.gmap_attr_val[ANON_GROUP_K][egroup] = anon_egroup\n\n                # reconstruct the fairshare info by combining euser and egroup\n                out = [anon_euser]\n                if anon_egroup is not None:\n                    out[0] += \":\" + anon_egroup\n                # and appending the rest of the original line\n                out.append(_d[1])\n                if len(_d) > 2:\n                    p = _d[2].strip()\n                    if (ANON_USER_K in self.gmap_attr_val and\n                            p in self.gmap_attr_val[ANON_USER_K]):\n                        out.append(self.gmap_attr_val[ANON_USER_K][p])\n                    else:\n                        out.append(_d[2])\n                if len(_d) > 3:\n                    out += _d[3:]\n                anon_rg.append(\" \".join(out))\n\n        return anon_rg\n\n    def anonymize_resource_def(self, resources):\n        \"\"\"\n        Anonymize the resource definition\n        \"\"\"\n        if not self.resc_key:\n            return resources\n\n        for curr_anon_resc, val in self.resc_key.items():\n            if curr_anon_resc in resources:\n                tmp_resc = copy.copy(resources[curr_anon_resc])\n                del resources[curr_anon_resc]\n                if val is None:\n                    if curr_anon_resc in self.gmap_resc_key:\n                        val = self.gmap_resc_key[curr_anon_resc]\n                    else:\n                        val = PbsAttribute.random_str(len(curr_anon_resc))\n                elif curr_anon_resc not in self.gmap_resc_key:\n                    self.gmap_resc_key[curr_anon_resc] = val\n                tmp_resc.set_name(val)\n                resources[val] = tmp_resc\n        return resources\n\n    def __anonymize_fgc(self, d, attr, 
ar, val):\n        \"\"\"\n        Anonymize an FGC limit value\n        \"\"\"\n\n        m = {\"u\": \"euser\", \"g\": \"egroup\", \"p\": \"project\"}\n\n        if \",\" in val:\n            fgc_lim = val.split(\",\")\n        else:\n            fgc_lim = [val]\n\n        nfgc = []\n        for lim in fgc_lim:\n            _fgc = PbsTypeFGCLimit(attr, lim)\n            ename = _fgc.entity_name\n            if ename in (\"PBS_GENERIC\", \"PBS_ALL\"):\n                nfgc.append(lim)\n                continue\n\n            obf_ename = ename\n            for etype, nm in m.items():\n                if _fgc.entity_type == etype:\n                    if nm not in self.gmap_attr_val:\n                        if nm in ar and ename in ar[nm]:\n                            obf_ename = ar[nm][ename]\n                        else:\n                            obf_ename = PbsAttribute.random_str(len(ename))\n                        self.gmap_attr_val[nm] = {ename: obf_ename}\n                    elif ename in self.gmap_attr_val[nm]:\n                        obf_ename = self.gmap_attr_val[nm][ename]\n                    break\n            _fgc.entity_name = obf_ename\n            nfgc.append(_fgc.__val__())\n\n        d[attr] = \",\".join(nfgc)\n\n    def __anonymize_attr_val(self, d, attr, ar, name, val):\n        \"\"\"\n        Obfuscate attribute/resource values\n        \"\"\"\n\n        # don't obfuscate default project\n        if attr == \"project\" and val == \"_pbs_project_default\":\n            return\n\n        nstr = []\n        if \".\" in attr:\n            m = self.gmap_resc_val\n        else:\n            m = self.gmap_attr_val\n\n        if val in ar[name]:\n            nstr.append(ar[name][val])\n            if name in self.lmap:\n                self.lmap[name][val] = ar[name][val]\n            else:\n                self.lmap[name] = {val: ar[name][val]}\n            if name not in 
m:\n                m[name] = {val: ar[name][val]}\n            elif val not in m[name]:\n                m[name][val] = ar[name][val]\n        else:\n            # Obfuscate by randomizing with a value of the same length\n            tmp_v = val.split(\",\")\n            for v in tmp_v:\n                if v in ar[name]:\n                    r = ar[name][v]\n                elif name in m and v in m[name]:\n                    r = m[name][v]\n                else:\n                    r = PbsAttribute.random_str(len(v))\n                    if not isinstance(ar[name], dict):\n                        ar[name] = {}\n                    ar[name][v] = r\n                self.lmap[name] = {v: r}\n                if name not in m:\n                    m[name] = {v: r}\n                elif v not in m[name]:\n                    m[name][v] = r\n\n                nstr.append(r)\n\n        if d is not None:\n            d[attr] = \",\".join(nstr)\n\n    def __anonymize_attr_key(self, d, attr, ar, name, res):\n        \"\"\"\n        Obfuscate an attribute/resource key\n        \"\"\"\n\n        if res is not None:\n            m = self.gmap_resc_key\n        else:\n            m = self.gmap_attr_key\n\n        if not ar[name]:\n            if name in m:\n                ar[name] = m[name]\n            else:\n                randstr = PbsAttribute.random_str(len(name))\n                ar[name] = randstr\n                m[name] = randstr\n\n        if d is not None:\n            tmp_val = d[attr]\n            del d[attr]\n\n            if res is not None:\n                d[res + \".\" + ar[name]] = tmp_val\n            else:\n                d[ar[name]] = tmp_val\n\n        if name not in self.lmap:\n            self.lmap[name] = ar[name]\n\n        if name not in m:\n            m[name] = ar[name]\n\n    def anonymize_batch_status(self, data=None):\n        \"\"\"\n        Anonymize arbitrary batch_status data\n\n        :param data: Batch status data\n        :type 
data: List or dictionary\n        \"\"\"\n        if not isinstance(data, (list, dict)):\n            self.logger.error(\"data expected to be dict or list\")\n            return None\n\n        if isinstance(data, dict):\n            dat = [data]\n        else:\n            dat = data\n\n        # Local mapping data used to store obfuscation mapping data for this\n        # specific item, d\n        self.lmap = {}\n\n        # loop over each \"batch_status\" entry to obfuscate\n        for d in dat:\n\n            if self.attr_delete is not None:\n                for todel in self.attr_delete:\n                    if todel in d:\n                        del d[todel]\n\n            if self.resc_delete is not None:\n                for todel in self.resc_delete:\n                    # iterate over a copy of the keys since entries may\n                    # be deleted during iteration\n                    for tmpk in list(d):\n                        if \".\" in tmpk and todel == tmpk.split(\".\")[1]:\n                            del d[tmpk]\n\n            # Loop over each object's attributes; this is where the special\n            # cases are handled (e.g., FGC limits, formula, select spec...)\n            # Iterate over a copy of the keys since keys may be renamed\n            for attr in list(d):\n                val = d[attr]\n\n                if \".\" in attr:\n                    (res_type, res_name) = attr.split(\".\")\n                else:\n                    res_type = None\n                    res_name = attr\n\n                if res_type is not None:\n                    if self._entity and (attr.startswith(\"max_run\") or\n                                         attr.startswith(\"max_queued\")):\n                        self.__anonymize_fgc(d, attr, self.attr_val,\n                                             val)\n\n                    if res_name in self.resc_val:\n                        if (attr.startswith(\"max_run\") or\n                                attr.startswith(\"max_queued\")):\n                            self.__anonymize_fgc(d, attr, self.attr_val,\n                                                 val)\n                        
self.__anonymize_attr_val(d, attr, self.resc_val,\n                                                  res_name, val)\n\n                    if res_name in self.resc_key:\n                        self.__anonymize_attr_key(d, attr, self.resc_key,\n                                                  res_name, res_type)\n                else:\n                    if attr in self.attr_val:\n                        self.__anonymize_attr_val(d, attr, self.attr_val,\n                                                  attr, val)\n\n                    if attr in self.attr_key:\n                        self.__anonymize_attr_key(d, attr, self.attr_key,\n                                                  attr, None)\n\n                    if ((attr in (\"job_sort_formula\", \"schedselect\",\n                                  \"select\")) and self.resc_key):\n                        for r in self.resc_key:\n                            if r in val:\n                                if r not in self.gmap_resc_key:\n                                    self.gmap_resc_key[\n                                        r] = PbsAttribute.random_str(len(r))\n                                val = val.replace(r, self.gmap_resc_key[r])\n                                setattr(self, attr, val)\n\n                        d[attr] = val\n\n    @staticmethod\n    def __verify_key(line, key):\n        \"\"\"\n        Verify that a given key is actually a key in the context of the line\n        given.\n\n        :param line: the line to check in\n        :type line: String\n        :param key: the key to find\n        :type key: String\n\n        :returns a tuple of (key index, 1st character of key's value)\n        :returns None if the key is invalid\n        \"\"\"\n        line_len = len(line)\n        key_len = len(key)\n        key_index = line.find(key, 0, line_len)\n        line_nospaces = \"\".join(line.split())\n        len_nospaces = len(line_nospaces)\n        key_idx_nospaces = 
line_nospaces.find(key, 0, len_nospaces)\n        value_char = None\n\n        # Find all instances of the string representing key in the line\n        # Find the instance which is a valid key\n        while key_index >= 0 and key_index < line_len:\n            valid_key = True\n\n            # Make sure that the characters before & after are not\n            # alphanumeric\n            if key_index != 0:\n                index_before = key_index - 1\n                char_before = line[index_before]\n                if char_before.isalnum() is True:\n                    valid_key = False\n            else:\n                char_before = None\n            if valid_key is True:\n                index_after = key_index + key_len\n                if index_after < line_len:\n                    char_after = line[index_after]\n                    if char_after.isalnum() is True:\n                        valid_key = False\n                else:\n                    char_after = None\n                if valid_key is True:\n                    # if 'char_after' is not \"=\", then the characters before\n                    # and after should be the delimiter, and be equal\n                    if char_before is not None and char_after is not None:\n                        if char_after != \"=\":\n                            if char_before != char_after:\n                                valid_key = False\n                    if valid_key is True:\n                        # Now, let's look at the whitespace stripped line\n                        index_after = key_idx_nospaces + key_len\n                        if index_after >= len_nospaces:\n                            # Nothing after the key, can't be a key\n                            valid_key = False\n                        else:\n                            # Find a valid operator after the key\n                            # valid operators: =, +=, -=, ==\n                            if line_nospaces[index_after] != \"=\":\n               
                 # Check for this case: \"key +=/-=/== value\"\n                                if line_nospaces[index_after] in (\"+\", \"-\"):\n                                    index_after = index_after + 1\n                                    if line_nospaces[index_after] != \"=\":\n                                        valid_key = False\n                                else:\n                                    valid_key = False\n                            if valid_key is True:\n                                val_idx_nospaces = index_after + 1\n                                if val_idx_nospaces >= len_nospaces:\n                                    # There's no value!, can't be a valid key\n                                    valid_key = False\n\n            if valid_key is False:\n                # Find the next instance of the key\n                key_index = line.find(key, key_index + len(key), line_len)\n                key_idx_nospaces = line_nospaces.find(key,\n                                                      key_idx_nospaces +\n                                                      len(key),\n                                                      len_nospaces)\n            else:\n                # Seems like a valid key!\n                # Break out of the loop\n                value_char = line_nospaces[val_idx_nospaces]\n                break\n\n        if key_index == -1 or key_idx_nospaces == -1:\n            return None\n\n        return (key_index, value_char)\n\n    def __get_value(self, line, key):\n        \"\"\"\n        Get the 'value' of a kv pair for the key given, from the line given\n\n        :param line: the line to search in\n        :type line: String\n        :param key: the key for the value\n        :type key: String\n\n        :returns: String containing the value or None\n        \"\"\"\n        # Check if the line is of type:\n        #     <attribute name> = <value>\n        line_list_spaces = line.split()\n        if 
len(line_list_spaces) >= 2:\n            first_word = line_list_spaces[0]\n            if key == first_word:\n                # Check that this word is followed by an '=' sign\n                equals_sign = line_list_spaces[1]\n                if equals_sign == \"=\":\n                    # Ok, we are going to assume that this is enough to\n                    # determine that this is the correct type\n                    # return everything after the '=' as value\n                    val_index = line.index(\"=\") + 1\n                    value = line[val_index:].strip()\n                    return value\n\n        # Check that a valid instance of this key exists in the string\n        kv = self.__verify_key(line, key)\n        if kv is None:\n            return None\n        key_index, val_char = kv\n\n        # Assumption: the character before the key is the delimiter\n        # for the k-v pair\n        if key_index > 0:\n            delimiter = line[key_index - 1]\n        else:\n            # Hard luck, now there's no way to know, let's just assume\n            # that space is the delimiter and hope for the best\n            delimiter = \" \"\n\n        # Determine the value's start index\n        index_after_key = key_index + len(key)\n        value_index = line[index_after_key:].find(val_char) + index_after_key\n\n        # Get the value\n        lexer = shlex.shlex(line[value_index:], posix=True)\n        lexer.whitespace = delimiter\n        lexer.whitespace_split = True\n        try:\n            value = lexer.get_token()\n        except ValueError:\n            # Sometimes, the data can be incoherent with things like\n            # unclosed quotes, which makes get_token() throw an exception\n            # Just return None\n            return None\n\n        # In posix mode, get_token() returns None when there is no token\n        if value is None:\n            return None\n\n        # Strip the value of any trailing whitespace (like newlines)\n        value = value.rstrip()\n\n        return value\n\n    @staticmethod\n    def __delete_kv(line, key, value):\n        \"\"\"\n        Delete a 
key-value pair from a line\n        If, after deleting the k-v pair, the leftover string has\n        no alphanumeric characters, then delete the line\n\n        :param line: the line in question\n        :type line: String\n        :param key: the key of the kv pair\n        :type key: String\n        :param value: the value of the kv pair\n        :type value: String\n\n        :returns: the line without the kv pair\n        :returns: None if the line should be deleted\n        \"\"\"\n        key_index = line.find(key)\n        index_after_key = key_index + len(key)\n        line_afterkey = line[index_after_key:]\n        value_index = line_afterkey.find(value) + index_after_key\n\n        # find the index of the last character of value\n        end_index = value_index + len(value)\n\n        # Find the start index of the kv pair\n        # Also include the character before the key\n        # This will remove an extra delimiter that would be\n        # left after the kv pair is deleted\n        start_index = key_index - 1\n        if start_index < 0:\n            start_index = 0\n\n        # Remove the kv pair\n        line = line[:start_index] + line[end_index:]\n\n        # Check if there are any alphanumeric characters left in the line\n        if re.search(\"[A-Za-z0-9]\", line) is None:\n            # Delete the whole line\n            return None\n\n        return line\n\n    def __add_alias_attr(self, key, alias_key):\n        \"\"\"\n        Some attributes have aliases. 
Add the alias of a given attribute to the\n        global maps\n\n        :param key: the original attribute\n        :type key: str\n        :param alias_key: the alias\n        :type alias_key: str\n        \"\"\"\n        if key in self.attr_delete:\n            self.attr_delete[alias_key] = self.attr_delete[key]\n        if key in self.attr_key:\n            self.attr_key[alias_key] = self.attr_key[key]\n        if key in self.attr_val:\n            self.attr_val[alias_key] = self.attr_val[key]\n        if key in self.resc_delete:\n            self.resc_delete[alias_key] = self.resc_delete[key]\n        if key in self.resc_key:\n            self.resc_key[alias_key] = self.resc_key[key]\n        if key in self.resc_val:\n            self.resc_val[alias_key] = self.resc_val[key]\n\n    def anonymize_file_tabular(self, filename, extension=\".anon\",\n                               inplace=False):\n        \"\"\"\n        Anonymize PBS short-format (tabular) outputs\n        (e.g., qstat, pbsnodes -aS)\n        The 'titles' of the various columns are used to look up keys inside\n        the global attribute maps, and the columns are anonymized/removed\n        accordingly.\n\n        Warning: only works with PBS tabular outputs, not generic ones.\n\n        :param filename: Name of the file to anonymize\n        :type filename: str\n        :param extension: Extension of the anonymized file\n        :type extension: str\n        :param inplace: If true returns the original file name for\n                        which contents have been replaced\n        :type inplace: bool\n\n        :returns: a str object containing filename of the anonymized file\n        \"\"\"\n        fn = self.du.create_temp_file()\n\n        # qstat outputs sometimes have different names for some attributes\n        self.__add_alias_attr(ATTR_euser, \"User\")\n        self.__add_alias_attr(ATTR_euser, \"Username\")\n        
self.__add_alias_attr(ATTR_name, \"Jobname\")\n        self.__add_alias_attr(ATTR_name, \"Name\")\n\n        # pbsnodes -aS output has a 'host' field which should be anonymized\n        self.__add_alias_attr(ATTR_NODE_Host, \"host\")\n\n        header = None\n        with open(filename) as f, open(fn, \"w\") as nf:\n            # Get the header and the line with '-'s\n            # Also write out the header and dash lines to the output file\n            line_num = 0\n            for line in f:\n                nf.write(line)\n                line_num += 1\n                line_strip = line.strip()\n\n                if len(line_strip) == 0:\n                    continue\n\n                if line_strip[0].isalpha():\n                    header = line\n                    continue\n                # Dash line is the line after header\n                if header is not None:\n                    dash_line = line\n                    break\n\n            if header is None:  # Couldn't find the header\n                # Remove the aliases\n\n                return filename\n\n            # The dash line tells us the length of each column\n            dash_list = dash_line.split()\n            col_length = {}\n            # Store each column's length\n            col_index = 0\n            for item in dash_list:\n                col_len = len(item)\n                col_length[col_index] = col_len\n                col_index += 1\n\n            # Find out the columns to anonymize/delete\n            del_columns = []\n            anon_columns = {}\n            start_index = 0\n            end_index = 0\n            for col_index, length in col_length.items():\n                start_index = end_index\n                end_index = start_index + length + 1\n\n                # Get the column's title\n                title = header[start_index:end_index]\n                title = title.strip()\n\n                if title in self.attr_delete.keys():\n                    # Need 
to delete this whole column\n                    del_columns.append(col_index)\n                elif title in self.attr_val.keys():\n                    # Need to anonymize all values in the column\n                    anon_columns[col_index] = title\n\n            anon_col_keys = anon_columns.keys()\n            # Go through the file and anonymize/delete columns\n            for line in f:\n                start_index = 0\n                end_index = 0\n                # Iterate over the different fields\n                col_index = 0\n                for col_index in range(len(col_length)):\n                    length = col_length[col_index]\n                    start_index = end_index\n                    end_index = start_index + length\n\n                    if col_index in del_columns:\n                        # Need to delete the value of this column\n                        # Just replace the value by blank spaces\n                        line2 = list(line)\n                        for i in range(len(line2)):\n                            if i >= start_index and i < end_index:\n                                line2[i] = \" \"\n                        line = \"\".join(line2)\n                    elif col_index in anon_col_keys:\n                        # Need to anonymize this column's value\n                        # Get the value\n                        value = line[start_index:end_index]\n                        value_strip = value.strip()\n                        anon_val = self.__get_anon_value(\n                            anon_columns[col_index],\n                            value_strip,\n                            self.gmap_attr_val)\n                        line = line.replace(value_strip, anon_val)\n\n                nf.write(line)\n\n        if inplace:\n            out_filename = filename\n\n        else:\n            out_filename = filename + extension\n\n        os.rename(fn, out_filename)\n\n        return out_filename\n\n    def 
anonymize_file_kv(self, filename, extension=\".anon\", inplace=False):\n        \"\"\"\n        Anonymize a file which has data in the form of key-value pairs.\n        Replace every occurrence of any entry in the global\n        map for the given file by its anonymized values.\n\n        :param filename: Name of the file to anonymize\n        :type filename: str\n        :param extension: Extension of the anonymized file\n        :type extension: str\n        :param inplace: If true returns the original file name for\n                        which contents have been replaced\n        :type inplace: bool\n\n        :returns: a str object containing filename of the anonymized file\n        \"\"\"\n        if not os.path.isfile(filename):\n            self.logger.debug(\"%s not found, nothing to anonymize\" % filename)\n            return filename\n\n        fn = self.du.create_temp_file()\n\n        with open(filename) as f, open(fn, \"w\") as nf:\n            delete_line = False\n            for line in f:\n                # Check if this is a line extension for an attr being deleted\n                if delete_line is True and line[0] == \"\\t\":\n                    continue\n\n                delete_line = False\n\n                # Check if any of the attributes to delete are in the line\n                for key in self.attr_delete.keys():\n                    if key in line:\n                        value = self.__get_value(line, key)\n                        if value is None:\n                            continue\n                        # Delete the key-value pair\n                        line = self.__delete_kv(line, key, value)\n                        if line is None:\n                            delete_line = True\n                            break\n\n                if delete_line is True:\n                    continue\n\n                # Anonymize key-value pairs\n                for key in self.attr_key.keys():\n                    if key in line:\n   
                     if self.__verify_key(line, key) is None:\n                            continue\n                        anon_key = self.__get_anon_key(key, self.gmap_attr_key)\n                        line = line.replace(key, anon_key)\n\n                for key in self.resc_key.keys():\n                    if key in line:\n                        if self.__verify_key(line, key) is None:\n                            continue\n                        anon_key = self.__get_anon_key(key, self.gmap_resc_key)\n                        line = line.replace(key, anon_key)\n\n                for key in self.attr_val.keys():\n                    if key in line:\n                        value = self.__get_value(line, key)\n                        if value is None:\n                            continue\n                        anon_value = self.__get_anon_value(key, value,\n                                                           self.gmap_attr_val)\n                        line = line.replace(value, anon_value)\n\n                for key in self.resc_val.keys():\n                    if key in line:\n                        value = self.__get_value(line, key)\n                        if value is None:\n                            continue\n                        anon_value = self.__get_anon_value(key, value,\n                                                           self.gmap_resc_val)\n                        line = line.replace(value, anon_value)\n\n                # Anonymize IP addresses\n                pattern = re.compile(\n                    r\"\\b\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\b\")\n                match_obj = re.search(pattern, line)\n                if match_obj:\n                    ip = match_obj.group(0)\n                    anon_key = self.__get_anon_key(ip, self.gmap_attr_key)\n                    line = line.replace(ip, anon_key)\n\n                nf.write(line)\n\n        if inplace:\n            out_filename = filename\n        
else:\n            out_filename = filename + extension\n\n        os.rename(fn, out_filename)\n\n        return out_filename\n\n    def anonymize_accounting_log(self, logfile):\n        \"\"\"\n        Anonymize the accounting log\n\n        :param logfile: Accounting log file\n        :type logfile: str\n        \"\"\"\n        try:\n            f = open(logfile)\n        except IOError:\n            self.logger.error(\"Error processing \" + logfile)\n            return None\n\n        self.__add_alias_attr(ATTR_euser, \"user\")\n        self.__add_alias_attr(ATTR_euser, \"requestor\")\n        self.__add_alias_attr(ATTR_egroup, \"group\")\n        self.__add_alias_attr(ATTR_A, \"account\")\n\n        anon_data = []\n        for data in f:\n            # accounting log format is\n            # %Y/%m/%d %H:%M:%S;<Key>;<Id>;<key1=val1> <key2=val2> ...\n            curr = data.split(\";\", 3)\n            if len(curr) < 4:\n                continue\n            if curr[1] in (\"A\", \"L\"):\n                anon_data.append(data.strip())\n                continue\n            buf = shlex.split(curr[3].strip())\n\n            skip_record = False\n            # Split the attribute list into key value pairs\n            kvl_list = [n.split(\"=\", 1) for n in buf]\n            for kvl in kvl_list:\n                try:\n                    k, v = kvl\n                except ValueError:\n                    self.num_bad_acct_records += 1\n                    self.logger.debug(\"Bad accounting record found:\\n\" +\n                                      data)\n                    skip_record = True\n                    break\n\n                if k in self.attr_val:\n                    anon_kv = self.__get_anon_value(k, v, 
self.gmap_attr_val)\n                    kvl[1] = anon_kv\n\n                if k in self.attr_key:\n                    anon_ak = self.__get_anon_key(k, self.gmap_attr_key)\n                    kvl[0] = anon_ak\n\n                if \".\" in k:\n                    restype, resname = k.split(\".\")\n                    if resname in self.resc_val:\n                        anon_rv = self.__get_anon_value(\n                            resname, v, self.gmap_resc_val)\n                        kvl[1] = anon_rv\n\n                    if resname in self.resc_key:\n                        anon_rk = self.__get_anon_key(resname,\n                                                      self.gmap_resc_key)\n                        kvl[0] = restype + \".\" + anon_rk\n            if not skip_record:\n                anon_data.append(\";\".join(curr[:3]) + \";\" +\n                                 \" \".join([\"=\".join(n) for n in kvl_list]))\n        f.close()\n\n        return anon_data\n\n    def anonymize_sched_config(self, scheduler):\n        \"\"\"\n        Anonymize the scheduler config\n\n        :param scheduler: PBS scheduler object\n        \"\"\"\n        if len(self.resc_key) == 0:\n            return\n\n        # when anonymizing we get rid of the comments as they may contain\n        # sensitive information\n        scheduler._sched_config_comments = {}\n\n        # If resources need to be anonymized then update the resources line,\n        # job_sort_key, and node_sort_key\n        sr = scheduler.get_resources()\n        if sr:\n            for i, sres in enumerate(sr):\n                if sres in self.resc_key:\n                    if sres in self.gmap_resc_key:\n                        sr[i] = self.gmap_resc_key[sres]\n                    else:\n                        anon_res = PbsAttribute.random_str(len(sres))\n                        self.gmap_resc_key[sres] = anon_res\n                        sr[i] = 
anon_res\n\n            scheduler.sched_config[\"resources\"] = \",\".join(sr)\n\n        for k in [\"job_sort_key\", \"node_sort_key\"]:\n            if k in scheduler.sched_config:\n                sc_jsk = scheduler.sched_config[k]\n                if not isinstance(sc_jsk, list):\n                    # a single entry; wrap it in a list\n                    sc_jsk = [sc_jsk]\n\n                for r in self.resc_key:\n                    for i, key in enumerate(sc_jsk):\n                        if r in key:\n                            if r not in self.gmap_resc_key:\n                                self.gmap_resc_key[\n                                    r] = PbsAttribute.random_str(len(r))\n                            sc_jsk[i] = key.replace(r,\n                                                    self.gmap_resc_key[r])\n\n                scheduler.sched_config[k] = sc_jsk\n\n    def __str__(self):\n        return (\"Attributes Values: \" + str(self.gmap_attr_val) + \"\\n\" +\n                \"Resources Values: \" + str(self.gmap_resc_val) + \"\\n\" +\n                \"Attributes Keys: \" + str(self.gmap_attr_key) + \"\\n\" +\n                \"Resources Keys: \" + str(self.gmap_resc_key))\n"
  },
  {
    "path": "test/fw/ptl/utils/pbs_cliutils.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport logging\nimport os\nimport re\n\n\nclass CliUtils(object):\n    \"\"\"\n    Command line interface utility\n    \"\"\"\n\n    @classmethod\n    def get_logging_level(cls, level):\n        \"\"\"\n        Get the logging level constant for a given level name\n\n        :param level: Name of the logging level\n        :type level: str\n        \"\"\"\n        logging.DEBUG2 = logging.DEBUG - 1\n        logging.INFOCLI = logging.INFO - 1\n        logging.INFOCLI2 = logging.INFOCLI - 1\n\n        log_lvl = None\n        level = str(level).upper()\n        if level == 'INFO':\n            log_lvl = logging.INFO\n        elif level == 'INFOCLI':\n            log_lvl = logging.INFOCLI\n        elif level == 'INFOCLI2':\n            log_lvl = logging.INFOCLI2\n        elif level == 'DEBUG':\n            log_lvl = logging.DEBUG\n        elif level == 'DEBUG2':\n            log_lvl = logging.DEBUG2\n        elif level == 'WARNING':\n            log_lvl = logging.WARNING\n        elif level == 'ERROR':\n            log_lvl = logging.ERROR\n        elif level == 'FATAL':\n            log_lvl = logging.FATAL\n\n        return log_lvl\n\n    @staticmethod\n    def check_bin(bin_name):\n        \"\"\"\n        Check whether a command exists\n\n        :param bin_name: Command to be checked\n        :type bin_name: str\n        :returns: True if the command exists, else False\n        \"\"\"\n       
 ec = os.system(\"/usr/bin/which \" + bin_name + \" > /dev/null\")\n        if ec == 0:\n            return True\n        return False\n\n    @staticmethod\n    def __json__(data):\n        try:\n            import json\n            return json.dumps(data, sort_keys=True, indent=4)\n        except Exception:\n            # first escape any existing double quotes\n            _pre = str(data).replace('\"', '\\\\\"')\n            # only then, replace the single quotes with double quotes\n            return _pre.replace('\\'', '\"')\n\n    @staticmethod\n    def expand_abs_path(path):\n        \"\"\"\n        Expand a path to an absolute path\n        \"\"\"\n        if path.startswith('~'):\n            return os.path.expanduser(path)\n        return os.path.abspath(path)\n\n    @staticmethod\n    def priv_ports_info(hostname=None):\n        \"\"\"\n        Return a list of privileged ports in use on a given host\n\n        :param hostname: The host on which to query privileged ports\n                         usage. 
Defaults to the local host\n        :type hostname: str or None\n        \"\"\"\n        from ptl.utils.pbs_dshutils import DshUtils\n\n        netstat_tag = re.compile(r\"tcp[\\s]+[\\d]+[\\s]+[\\d]+[\\s]+\"\n                                 r\"(?P<srchost>[\\w\\*\\.]+):(?P<srcport>[\\d]+)\"\n                                 r\"[\\s]+(?P<desthost>[\\.\\w\\*:]+):\"\n                                 r\"(?P<destport>[\\d]+)\"\n                                 r\"[\\s]+(?P<state>[\\w]+).*\")\n        du = DshUtils()\n        ret = du.run_cmd(hostname, ['netstat', '-at', '--numeric-ports'])\n        if ret['rc'] != 0:\n            return False\n\n        msg = []\n        lines = ret['out']\n        resv_ports = {}\n        source_hosts = []\n        for line in lines:\n            m = netstat_tag.match(line)\n            if m:\n                srcport = int(m.group('srcport'))\n                srchost = m.group('srchost')\n                destport = int(m.group('destport'))\n                desthost = m.group('desthost')\n                if srcport < 1024:\n                    if srchost not in source_hosts:\n                        source_hosts.append(srchost)\n                    msg.append(line)\n                    if srchost not in resv_ports:\n                        resv_ports[srchost] = [srcport]\n                    elif srcport not in resv_ports[srchost]:\n                        resv_ports[srchost].append(srcport)\n                if destport < 1024:\n                    msg.append(line)\n                    if desthost not in resv_ports:\n                        resv_ports[desthost] = [destport]\n                    elif destport not in resv_ports[desthost]:\n                        resv_ports[desthost].append(destport)\n\n        if len(resv_ports) > 0:\n            msg.append('\\nPrivileged ports in use: ')\n            for k, v in resv_ports.items():\n                msg.append('\\t' + k + ': ' +\n                           str(\",\".join([str(l) for l in 
v])))\n            for sh in source_hosts:\n                msg.append('\\nTotal on ' + sh + ': ' +\n                           str(len(resv_ports[sh])))\n        else:\n            msg.append('No privileged ports currently allocated')\n\n        return msg\n"
  },
  {
    "path": "test/fw/ptl/utils/pbs_covutils.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport os\nimport sys\nimport time\nimport logging\nimport tempfile\nfrom stat import S_IWOTH\nfrom urllib.parse import urljoin\nfrom ptl.utils.pbs_dshutils import DshUtils\nfrom ptl.utils.pbs_cliutils import CliUtils\n\ntry:\n    from BeautifulSoup import BeautifulSoup\nexcept Exception:\n    pass\n\n\nclass LcovUtils(object):\n    \"\"\"\n    Coverage Utils\n\n    :param cov_bin: Coverage binary\n    :param html_bin: Coverage html binary\n    :param cov_out: Coverage output directory\n    :type cov_out: str or None\n    :param data_dir: Coverage data directory\n    :type data_dir: str or None\n    :param html_nosrc: HTML reports without PBS source\n    :type html_nosrc: bool\n    :param html_baseurl: HTML base url\n    :type html_baseurl: str or None\n    \"\"\"\n    du = DshUtils()\n    logger = logging.getLogger(__name__)\n\n    def __init__(self, cov_bin=None, html_bin=None, cov_out=None,\n                 data_dir=None, html_nosrc=False, html_baseurl=None):\n        self.set_coverage_data_dir(data_dir)\n        self.set_coverage_bin(cov_bin)\n        self.set_genhtml_bin(html_bin)\n        self.set_coverage_out(cov_out)\n        self.set_html_nosource(html_nosrc)\n        self.set_html_baseurl(html_baseurl)\n        self.coverage_traces = []\n\n    def set_html_baseurl(self, baseurl):\n        \"\"\"\n        Set ``HTML`` base url\n        \"\"\"\n  
      self.logger.info('coverage baseurl set to ' + str(baseurl))\n        self.html_baseurl = baseurl\n\n    def set_html_nosource(self, nosource=False):\n        \"\"\"\n        Set the HTML no-source parameter.\n        \"\"\"\n        self.logger.info('coverage no-source set to ' + str(nosource))\n        self.html_nosrc = nosource\n\n    def set_coverage_bin(self, cov_bin=None):\n        \"\"\"\n        Set the coverage binary\n\n        :param cov_bin: Coverage bin to set\n        \"\"\"\n        if cov_bin is None:\n            cov_bin = 'lcov'\n        rv = CliUtils.check_bin(cov_bin)\n        if not rv:\n            self.logger.error('%s tool not found' % (cov_bin))\n            sys.exit(1)\n        else:\n            self.logger.info('coverage utility set to ' + cov_bin)\n            self.cov_bin = cov_bin\n        return rv\n\n    def set_genhtml_bin(self, html_bin=None):\n        \"\"\"\n        Set HTML generation utility.\n\n        :param html_bin: HTML bin to set\n        \"\"\"\n        if html_bin is None:\n            html_bin = 'genhtml'\n        rv = CliUtils.check_bin(html_bin)\n        if not rv:\n            self.logger.error('%s tool not found' % (html_bin))\n            self.html_bin = None\n        else:\n            self.logger.info('HTML generation utility set to ' + html_bin)\n            self.html_bin = html_bin\n        return rv\n\n    def set_coverage_out(self, cov_out=None):\n        \"\"\"\n        Set the coverage output directory\n\n        :param cov_out: Coverage output directory path.\n        \"\"\"\n        if cov_out is None:\n            d = 'pbscov-' + time.strftime('%Y%m%d_%H%M%S', time.localtime())\n            cov_out = os.path.join(tempfile.gettempdir(), d)\n        if not os.path.isdir(cov_out):\n            os.mkdir(cov_out)\n        self.logger.info('coverage output directory set to ' + cov_out)\n        self.cov_out = cov_out\n\n    def set_coverage_data_dir(self, data=None):\n        \"\"\"\n        Set the 
coverage data directory\n\n        :param data: Data directory path\n        :returns: True if file name ends with .gcno else return False\n        \"\"\"\n        self.data_dir = data\n        if self.data_dir is not None:\n            walker = os.walk(self.data_dir)\n            for _, _, files in walker:\n                for f in files:\n                    if f.endswith('.gcno'):\n                        return True\n        return False\n\n    def add_trace(self, trace):\n        \"\"\"\n        Add coverage trace\n\n        :param trace: Coverage trace\n        \"\"\"\n        if trace not in self.coverage_traces:\n            self.logger.info('Adding coverage trace: %s' % (trace))\n            self.coverage_traces.append(trace)\n\n    def create_coverage_data_files(self, path):\n        \"\"\"\n        Create .gcda counterpart files for every .gcno file and give it\n        read/write permissions\n        \"\"\"\n        walker = os.walk(path)\n        for root, _, files in walker:\n            for f in files:\n                if f.endswith('.gcda'):\n                    pf = os.path.join(root, f)\n                    s = os.stat(pf)\n                    if (s.st_mode & S_IWOTH) == 0:\n                        self.du.run_cmd(cmd=['chmod', '666', pf],\n                                        level=logging.DEBUG, sudo=True)\n                elif f.endswith('.gcno'):\n                    nf = f.replace('.gcno', '.gcda')\n                    pf = os.path.join(root, nf)\n                    if not os.path.isfile(pf):\n                        self.du.run_cmd(cmd=['touch', pf],\n                                        level=logging.DEBUG, sudo=True)\n                        self.du.run_cmd(cmd=['chmod', '666', pf],\n                                        level=logging.DEBUG, sudo=True)\n\n    def initialize_coverage(self, out=None, name=None):\n        \"\"\"\n        Initialize coverage\n\n        :param out: Output path\n        :type out: str or None\n        
:param name: name of the command\n        :type name: str or None\n        \"\"\"\n        if self.data_dir is not None:\n            if out is None:\n                out = os.path.join(self.cov_out, 'baseline.info')\n            self.logger.info('Initializing coverage data to ' + out)\n            self.create_coverage_data_files(self.data_dir)\n            cmd = [self.cov_bin]\n            if name is not None:\n                cmd += ['-t', name]\n            cmd += ['-i', '-d', self.data_dir, '-c', '-o', out]\n            self.du.run_cmd(cmd=cmd, logerr=False)\n            self.add_trace(out)\n\n    def capture_coverage(self, out=None, name=None):\n        \"\"\"\n        Capture the coverage parameters\n        \"\"\"\n        if self.data_dir is not None:\n            if out is None:\n                out = os.path.join(self.cov_out, 'tests.info')\n            self.logger.info('Capturing coverage data to ' + out)\n            cmd = [self.cov_bin]\n            if name is not None:\n                cmd += ['-t', name]\n            cmd += ['-c', '-d', self.data_dir, '-o', out]\n            self.du.run_cmd(cmd=cmd, logerr=False)\n            self.add_trace(out)\n\n    def zero_coverage(self):\n        \"\"\"\n        Zero the data counters. 
Note that a process would need to be restarted\n        in order to collect data again; running ``--initialize`` will not\n        repopulate the data counters\n        \"\"\"\n        if self.data_dir is not None:\n            self.logger.info('Resetting coverage data')\n            cmd = [self.cov_bin, '-z', '-d', self.data_dir]\n            self.du.run_cmd(cmd=cmd, logerr=False)\n\n    def merge_coverage_traces(self, out=None, name=None, exclude=None):\n        \"\"\"\n        Merge the coverage traces\n        \"\"\"\n        if not self.coverage_traces:\n            return\n        if out is None:\n            out = os.path.join(self.cov_out, 'total.info')\n        self.logger.info('Merging coverage traces to ' + out)\n        if exclude is not None:\n            tmpout = out + '.tmp'\n        else:\n            tmpout = out\n        cmd = [self.cov_bin]\n        if name is not None:\n            cmd += ['-t', name]\n        for t in self.coverage_traces:\n            cmd += ['-a', t]\n        cmd += ['-o', tmpout]\n        self.du.run_cmd(cmd=cmd, logerr=False)\n        if exclude is not None:\n            cmd = [self.cov_bin]\n            if name is not None:\n                cmd += ['-t', name]\n            cmd += ['-r', tmpout] + exclude + ['-o', out]\n            self.du.run_cmd(cmd=cmd, logerr=False)\n            self.du.rm(path=tmpout, logerr=False)\n\n    def generate_html(self, out=None, html_out=None, html_nosrc=False):\n        \"\"\"\n        Generate the ``HTML`` report\n        \"\"\"\n        if self.html_bin is None:\n            self.logger.warn('No genhtml bin is defined')\n            return\n        if out is None:\n            out = os.path.join(self.cov_out, 'total.info')\n        if not os.path.isfile(out):\n            return\n        if html_out is None:\n            html_out = os.path.join(self.cov_out, 'html')\n        if (self.html_nosrc or html_nosrc):\n            self.logger.info('Generating HTML reports (without PBS source)'\n 
                            ' from  coverage data')\n            cmd = [self.html_bin, '--no-source', out]\n            cmd += ['-o', html_out]\n            self.du.run_cmd(cmd=cmd, logerr=False)\n        else:\n            self.logger.info('Generating HTML reports (with PBS Source) from'\n                             ' coverage data')\n            cmd = [self.html_bin, out, '-o', html_out]\n            self.du.run_cmd(cmd=cmd, logerr=False)\n\n    def change_baseurl(self, html_out=None, html_baseurl=None):\n        \"\"\"\n        Change the ``HTML`` base url\n        \"\"\"\n        if html_baseurl is None:\n            html_baseurl = self.html_baseurl\n        if html_baseurl is None:\n            return\n        if html_out is None:\n            html_out = os.path.join(self.cov_out, 'html')\n        if not os.path.isdir(html_out):\n            return\n        html_out_bu = os.path.join(os.path.dirname(html_out),\n                                   os.path.basename(html_out) + '_baseurl')\n        if html_baseurl[-1] != '/':\n            html_baseurl += '/'\n        self.logger.info('Changing baseurl to %s' % (html_baseurl))\n        self.du.run_copy(src=html_out, dest=html_out_bu, recursive=True)\n        for root, _, files in os.walk(html_out_bu):\n            newroot = root.split(html_out_bu)[1]\n            if ((len(newroot) > 0) and (newroot[0] == '/')):\n                newroot = newroot[1:]\n            newroot = urljoin(html_baseurl, newroot)\n            if newroot[-1] != '/':\n                newroot += '/'\n            print(root, newroot)\n            for f in files:\n                if not f.endswith('.html'):\n                    continue\n                f = os.path.join(root, f)\n                fd = open(f, 'r')\n                line = ''.join(fd.readlines())\n                fd.close()\n                tree = BeautifulSoup(line)\n                for a in tree.findAll('a'):\n                    href = a['href']\n                    if 
href.startswith('http://'):\n                        continue\n                    a['href'] = urljoin(newroot, href)\n                for img in tree.findAll('img'):\n                    img['src'] = urljoin(newroot, img['src'])\n                for css in tree.findAll('link', rel='stylesheet'):\n                    css['href'] = urljoin(newroot, css['href'])\n                fd = open(f, 'w+')\n                fd.write(str(tree))\n                fd.close()\n\n    def summarize_coverage(self, out=None):\n        \"\"\"\n        Summarize the coverage output\n        \"\"\"\n        if out is None:\n            out = os.path.join(self.cov_out, 'total.info')\n        if not os.path.isfile(out):\n            return ''\n        self.logger.info('Summarizing coverage data from ' + out)\n        cmd = [self.cov_bin, '--summary', out]\n        return self.du.run_cmd(cmd=cmd, logerr=False)['err']\n"
  },
  {
    "path": "test/fw/ptl/utils/pbs_crayutils.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport socket\nimport os\n\nfrom ptl.utils.pbs_dshutils import DshUtils\nfrom ptl.lib.pbs_ifl_mock import *\n\n\nclass CrayUtils(object):\n\n    \"\"\"\n    Cray specific utility class\n    \"\"\"\n    node_status = []\n    node_summary = {}\n    cmd_output = []\n    du = None\n\n    def __init__(self):\n        self.du = DshUtils()\n        (self.node_status, self.node_summary) = self.parse_apstat_rn()\n\n    def call_apstat(self, options):\n        \"\"\"\n        Build the apstat command and run it.  
Return the output of the command.\n\n        :param options: options to pass to apstat command\n        :type options: str\n        :returns: the command output\n        \"\"\"\n        hostname = socket.gethostname()\n        platform = self.du.get_platform(hostname)\n        apstat_env = os.environ\n        apstat_cmd = \"apstat\"\n        if 'cray' not in platform:\n            return None\n        if 'craysim' in platform:\n            lib_path = '$LD_LIBRARY_PATH:/opt/alps/tester/usr/lib/'\n            apstat_env['LD_LIBRARY_PATH'] = lib_path\n            apstat_env['ALPS_CONFIG_FILE'] = '/opt/alps/tester/alps.conf'\n            apstat_env['apsched_sharedDir'] = '/opt/alps/tester/'\n            apstat_cmd = \"/opt/alps/tester/usr/bin/apstat -d .\"\n        cmd_run = self.du.run_cmd(hostname, [apstat_cmd, options],\n                                  as_script=True, wait_on_script=True,\n                                  env=apstat_env)\n        return cmd_run\n\n    def parse_apstat_rn(self):\n        \"\"\"\n        Parse the apstat command output for node status and summary\n\n        :type options: str\n        :returns: tuple of (node status, node summary)\n        \"\"\"\n        status = []\n        summary = {}\n        count = 0\n        options = '-rn'\n        cmd_run = self.call_apstat(options)\n        if cmd_run is None:\n            return (status, summary)\n        cmd_result = cmd_run['out']\n        keys = cmd_result[0].split()\n        # Add a key 'Mode' because 'State' is composed of two list items, e.g:\n        # State = 'UP  B', where Mode = 'B'\n        k2 = ['Mode']\n        keys = keys[0:3] + k2 + keys[3:]\n        cmd_iter = iter(cmd_result)\n        for line in cmd_iter:\n            if count == 0:\n                count = 1\n                continue\n            if \"Compute node summary\" in line:\n                summary_line = next(cmd_iter)\n                summary_keys = summary_line.split()\n                summary_data = 
next(cmd_iter).split()\n                sum_index = 0\n                for a in summary_keys:\n                    summary[a] = summary_data[sum_index]\n                    sum_index += 1\n                break\n            obj = {}\n            line = line.split()\n            for i, value in enumerate(line):\n                obj[keys[i]] = value\n                if keys[i] == 'State':\n                    obj[keys[i]] = value + \"  \" + line[i + 1]\n            # If there is no Apids in the apstat then use 'None' as the value\n            if \"Apids\" in obj:\n                pass\n            else:\n                obj[\"Apids\"] = None\n            status.append(obj)\n        return (status, summary)\n\n    def count_node_summ(self, cnsumm='up'):\n        \"\"\"\n        Return the value of any one of the following parameters as shown in\n        the 'Compute Node Summary' section of 'apstat -rn' output:\n        arch, config, up, resv, use, avail, down\n\n        :param cnsumm: parameter which is being queried, defaults to 'up'\n        :type cnsumm: str\n        :returns: value of parameter being queried\n        \"\"\"\n        return int(self.node_summary[cnsumm])\n\n    def count_node_state(self, state='UP  B'):\n        \"\"\"\n        Return how many nodes have a certain 'State' value.\n\n        :param state: parameter which is being queried, defaults to 'UP  B'\n        :type state: str\n        :returns: count of how many nodes have the state\n        \"\"\"\n        count = 0\n        status = self.node_status\n        for stat in status:\n            if stat['State'] == state:\n                count += 1\n        return count\n\n    def get_numthreads(self, nid):\n        \"\"\"\n        Returns the number of hyperthread for the given node\n        \"\"\"\n        options = '-N %d -n -f \"nid,c/cu\"' % int(nid)\n        cmd_run = self.call_apstat(options)\n        if cmd_run is None:\n            return None\n        cmd_result = cmd_run['out']\n    
    cmd_iter = iter(cmd_result)\n        numthreads = 0\n        for line in cmd_iter:\n            if \"Compute node summary\" in line:\n                break\n            elif \"NID\" in line:\n                continue\n            else:\n                key = line.split()\n                numthreads = int(key[1])\n        return numthreads\n\n    def num_compute_vnodes(self, server):\n        \"\"\"\n        Count the Cray compute nodes and return the value.\n        \"\"\"\n        vnl = server.filter(MGR_OBJ_NODE,\n                            {'resources_available.vntype': 'cray_compute'})\n        return len(vnl[\"resources_available.vntype=cray_compute\"])\n"
  },
  {
    "path": "test/fw/ptl/utils/pbs_dshutils.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport copy\nimport grp\nimport logging\nimport os\nimport platform\nimport pwd\nimport re\nimport socket\nimport stat\nimport sys\nimport tempfile\nimport traceback\nimport inspect\nfrom subprocess import PIPE, Popen\n\nfrom ptl.utils.pbs_testusers import PBS_ALL_USERS, PbsUser, PbsGroup\n\nDFLT_RSYNC_CMD = ['rsync', '-e', 'ssh', '--progress', '--partial', '-ravz']\nDFLT_COPY_CMD = ['scp', '-p']\nDFLT_RSH_CMD = ['ssh']\nDFLT_SUDO_CMD = ['sudo', '-H']\n\nlogging.DEBUG2 = logging.DEBUG - 1\nlogging.INFOCLI = logging.INFO - 1\nlogging.INFOCLI2 = logging.INFOCLI - 1\n\n\ndef get_method_name(slf):\n    try:\n        curr_method = inspect.currentframe().f_back.f_code.co_name\n        method_name = \"%s.%s\" % (slf.__class__.__name__, curr_method)\n    except AttributeError:\n        method_name = \"***UNKNOWN***\"\n    return method_name\n\n\nclass TimeOut(Exception):\n\n    \"\"\"\n    Raise this exception to mark a test as timed out.\n    \"\"\"\n    pass\n\n\nclass PbsConfigError(Exception):\n    \"\"\"\n    Initialize PBS configuration error\n    \"\"\"\n\n    def __init__(self, message=None, rv=None, rc=None, msg=None):\n        self.message = message\n        self.rv = rv\n        self.rc = rc\n        self.msg = msg\n\n    def __str__(self):\n        return ('rc=' + str(self.rc) + ', rv=' + str(self.rv) +\n                ',msg=' + str(self.msg))\n\n    
def __repr__(self):\n        return (self.__class__.__name__ + '(rc=' + str(self.rc) + ', rv=' +\n                str(self.rv) + ', msg=' + str(self.msg) + ')')\n\n\nclass PtlUtilError(Exception):\n    \"\"\"\n    Initialize PTL Util error\n    \"\"\"\n\n    def __init__(self, message=None, rv=None, rc=None, msg=None):\n        self.message = message\n        self.rv = rv\n        self.rc = rc\n        self.msg = msg\n\n    def __str__(self):\n        return ('rc=' + str(self.rc) + ', rv=' + str(self.rv) +\n                ',msg=' + str(self.msg))\n\n    def __repr__(self):\n        return (self.__class__.__name__ + '(rc=' + str(self.rc) + ', rv=' +\n                str(self.rv) + ', msg=' + str(self.msg) + ')')\n\n\nclass DshUtils(object):\n\n    \"\"\"\n    PBS shell utilities\n\n    A set of tools to run commands, copy files, get process\n    information and parse a PBS configuration on an arbitrary host\n    \"\"\"\n\n    logger = logging.getLogger(__name__)\n    _h2osinfo = {}  # host to OS info cache\n    _h2p = {}  # host to platform cache\n    _h2pu = {}  # host to uname cache\n    _h2c = {}  # host to pbs_conf file cache\n    _h2l = {}  # host to islocal cache\n    _h2which = {}  # host to which cache\n    rsh_cmd = DFLT_RSH_CMD\n    sudo_cmd = DFLT_SUDO_CMD\n    copy_cmd = DFLT_COPY_CMD\n    tmpfilelist = []\n\n    def __init__(self):\n\n        self._current_user = None\n\n        logging.addLevelName('INFOCLI', logging.INFOCLI)\n        setattr(self.logger, 'infocli',\n                lambda *args: self.logger.log(logging.INFOCLI, *args))\n\n        logging.addLevelName('DEBUG2', logging.DEBUG2)\n        setattr(self.logger, 'debug2',\n                lambda *args: self.logger.log(logging.DEBUG2, *args))\n\n        logging.addLevelName('INFOCLI2', logging.INFOCLI2)\n        setattr(self.logger, 'infocli2',\n                lambda *args: self.logger.log(logging.INFOCLI2, *args))\n\n        self.mom_conf_map = {'PBS_MOM_SERVICE_PORT': '-M',\n              
               'PBS_MANAGER_SERVICE_PORT': '-R',\n                             'PBS_HOME': '-d',\n                             'PBS_BATCH_SERVICE_PORT': '-S',\n                             }\n        self.server_conf_map = {'PBS_MOM_SERVICE_PORT': '-M',\n                                'PBS_MANAGER_SERVICE_PORT': '-R',\n                                'PBS_HOME': '-d',\n                                'PBS_BATCH_SERVICE_PORT': '-p',\n                                'PBS_SCHEDULER_SERVICE_PORT': '-S',\n                                }\n        self.sched_conf_map = {'PBS_HOME': '-d',\n                               'PBS_BATCH_SERVICE_PORT': '-p',\n                               'PBS_SCHEDULER_SERVICE_PORT': '-S',\n                               }\n        self._tempdir = {}\n        self.platform = self.get_platform()\n\n    def get_platform(self, hostname=None, pyexec=None):\n        \"\"\"\n        Get a local or remote platform info, essentially the value of\n        Python's sys.platform, in case of Cray it will return a string\n        as \"cray\" or \"shasta\" for actual Cray cluster and \"craysim\"\n        for Cray ALPS simulator\n\n        :param hostname: The hostname to query for platform info\n        :type hostname: str or None\n        :param pyexec: A path to a Python interpreter to use to query\n                       a remote host for platform info\n        :type pyexec: str or None\n        For efficiency the value is cached and retrieved from the\n        cache upon subsequent request\n        \"\"\"\n        splatform = sys.platform\n        found_already = False\n        if hostname is None:\n            hostname = socket.gethostname()\n        if hostname in self._h2p:\n            return self._h2p[hostname]\n        if self.isfile(hostname=hostname, path='/etc/xthostname',\n                       level=logging.DEBUG2):\n            if self.isfile(hostname=hostname, path='/proc/cray_xt/cname',\n                           
level=logging.DEBUG2):\n                splatform = 'cray'\n            else:\n                splatform = 'craysim'\n            found_already = True\n        if self.isfile(hostname=hostname, path='/etc/cray/xname',\n                       level=logging.DEBUG2):\n            splatform = 'shasta'\n            found_already = True\n        if not self.is_localhost(hostname) and not found_already:\n            if pyexec is None:\n                pyexec = self.which(hostname, 'python3', level=logging.DEBUG2)\n            cmd = [pyexec, '-c', '\"import sys; print(sys.platform)\"']\n            ret = self.run_cmd(hostname, cmd=cmd)\n            if ret['rc'] != 0 or len(ret['out']) == 0:\n                _msg = 'Unable to retrieve platform info,'\n                _msg += 'defaulting to local platform'\n                self.logger.warning(_msg)\n                splatform = sys.platform\n            else:\n                splatform = ret['out'][0]\n        self._h2p[hostname] = splatform\n        return splatform\n\n    def _parse_file(self, hostname, file):\n        \"\"\"\n         helper function to parse a file containing entries of the\n         form ``<key>=<value>`` into a Python dictionary format\n        \"\"\"\n        if hostname is None:\n            hostname = socket.gethostname()\n\n        try:\n            rv = self.cat(hostname, file, level=logging.DEBUG2, logerr=False)\n            if rv['rc'] != 0:\n                return {}\n\n            props = {}\n            for l in rv['out']:\n                if l.find('=') != -1 and l[0] != '#':\n                    c = l.split('=')\n                    props[c[0]] = c[1].strip()\n        except BaseException:\n            self.logger.error('error parsing file ' + str(file))\n            self.logger.error(traceback.print_exc())\n            return {}\n\n        return props\n\n    def _set_file(self, hostname, fin, fout, append, variables, sudo=False):\n        \"\"\"\n        Create a file out of a set of 
dictionaries, possibly parsed\n        from an input file. @see _parse_file.\n\n        :param hostname: the name of the host on which to operate.\n                         Defaults to localhost\n        :type hostname: str\n        :param fin: the input file to read from\n        :type fin: str\n        :param fout: the output file to write to\n        :type fout: str\n        :param append: If true, append to the output file.\n        :type append: bool\n        :param variables: The ``key/value`` pairs to write to fout\n        :type variables: dictionary\n        :param sudo: copy file to destination through sudo\n        :type sudo: boolean\n        :return dictionary of items set\n        :raises PbsConfigError:\n        \"\"\"\n        if hostname is None:\n            hostname = socket.gethostname()\n\n        if append:\n            conf = self._parse_file(hostname, fin)\n        else:\n            conf = {}\n        conf = {**conf, **variables}\n        if os.path.isfile(fout):\n            fout_stat = os.stat(fout)\n            user = fout_stat.st_uid\n            group = fout_stat.st_gid\n        else:\n            user = None\n            group = None\n\n        try:\n            fn = self.create_temp_file()\n            self.chmod(path=fn, mode=0o644)\n            with open(fn, 'w') as fd:\n                for k, v in conf.items():\n                    fd.write(str(k) + '=' + str(v) + '\\n')\n            rv = self.run_copy(hostname, src=fn, dest=fout, uid=user,\n                               gid=group, level=logging.DEBUG2, sudo=sudo)\n            if rv['rc'] != 0:\n                raise PbsConfigError\n        except BaseException:\n            raise PbsConfigError(rc=1, rv=None,\n                                 msg='error writing to file ' + str(fout))\n        finally:\n            if os.path.isfile(fn):\n                self.rm(path=fn)\n\n        return conf\n\n    def get_pbs_conf_file(self, hostname=None):\n        \"\"\"\n        Get the 
path of the pbs conf file. Defaults back to\n        ``/etc/pbs.conf`` if unsuccessful\n\n        :param hostname: Hostname of the machine\n        :type hostname: str or None\n        :returns: Path to pbs conf file\n        \"\"\"\n        dflt_conf = '/etc/pbs.conf'\n        dflt_python = '/opt/pbs/python/bin/python'\n\n        if hostname is None:\n            hostname = socket.gethostname()\n        if hostname in self._h2c:\n            return self._h2c[hostname]\n\n        if self.is_localhost(hostname):\n            if 'PBS_CONF_FILE' in os.environ:\n                dflt_conf = os.environ['PBS_CONF_FILE']\n        else:\n            pc = ('\"import os;'\n                  'print(os.environ.get(\\\"PBS_CONF_FILE\\\", False))\"')\n            cmd = ['ls', '-1', dflt_python]\n            ret = self.run_cmd(hostname, cmd, logerr=False)\n            if ret['rc'] == 0:\n                pyexec = dflt_python\n            else:\n                pyexec = 'python3'\n            cmd = [pyexec, '-c', pc]\n            ret = self.run_cmd(hostname, cmd, logerr=False)\n            if ((ret['rc'] == 0) and (len(ret['out']) > 0) and\n                    (ret['out'][0] != 'False')):\n                dflt_conf = ret['out'][0]\n\n        self._h2c[hostname] = dflt_conf\n        return dflt_conf\n\n    def parse_pbs_config(self, hostname=None, file=None):\n        \"\"\"\n        Initialize ``pbs_conf`` dictionary by parsing pbs config file\n\n        :param file: PBS conf file\n        :type file: str or None\n        \"\"\"\n        if file is None:\n            file = self.get_pbs_conf_file(hostname)\n        return self._parse_file(hostname, file)\n\n    def set_pbs_config(self, hostname=None, fin=None, fout=None,\n                       append=True, confs=None):\n        \"\"\"\n        Set ``environment/configuration`` variables in a\n        ``pbs.conf`` file\n\n        :param hostname: the name of the host on which to operate\n        :type hostname: str or None\n        
:param fin: the input pbs.conf file\n        :type fin: str or None\n        :param fout: the name of the output pbs.conf file, defaults\n                     to ``/etc/pbs.conf``\n        :type fout: str or None\n        :param append: whether to append to fout or not, defaults\n                       to True\n        :type append: boolean\n        :param confs: The ``key/value`` pairs to create\n        :type confs: Dictionary or None\n        \"\"\"\n        if fin is None:\n            fin = self.get_pbs_conf_file(hostname)\n        if fout is None and fin is not None:\n            fout = fin\n        if confs is not None:\n            self.logger.info('Set ' + str(confs) + ' in ' + fout)\n        else:\n            confs = {}\n        return self._set_file(hostname, fin, fout, append, confs, sudo=True)\n\n    def unset_pbs_config(self, hostname=None, fin=None, fout=None,\n                         confs=None):\n        \"\"\"\n        Unset ``environment/configuration`` variables in a pbs.conf\n        file\n\n        :param hostname: the name of the host on which to operate\n        :type hostname: str or None\n        :param fin: the input pbs.conf file\n        :type fin: str or None\n        :param fout: the name of the output pbs.conf file, defaults\n                     to ``/etc/pbs.conf``\n        :type fout: str or None\n        :param confs: The configuration keys to unset\n        :type confs: List or str or dict or None\n        \"\"\"\n        if fin is None:\n            fin = self.get_pbs_conf_file(hostname)\n\n        if fout is None and fin is not None:\n            fout = fin\n        if confs is None:\n            confs = []\n        elif isinstance(confs, str):\n            confs = confs.split(',')\n        elif isinstance(confs, dict):\n            confs = list(confs.keys())\n\n        tounset = []\n        cur_confs = self.parse_pbs_config(hostname, fin)\n        for k in confs:\n            if k in cur_confs:\n                
tounset.append(k)\n                del cur_confs[k]\n        if tounset:\n            self.logger.info('Unset ' + \",\".join(tounset) + ' from ' + fout)\n\n        return self._set_file(hostname, fin, fout, append=False,\n                              variables=cur_confs, sudo=True)\n\n    def get_pbs_server_name(self, pbs_conf=None):\n        \"\"\"\n        Return the name of the server, which may be different from\n        ``PBS_SERVER``. In order, this method looks at\n        ``PBS_PRIMARY``, ``PBS_SERVER_HOST_NAME``,\n        ``PBS_LEAF_NAME``, and ``PBS_SERVER``\n        \"\"\"\n        if pbs_conf is None:\n            pbs_conf = self.parse_pbs_config()\n\n        if 'PBS_PRIMARY' in pbs_conf:\n            return pbs_conf['PBS_PRIMARY']\n        elif 'PBS_SERVER_HOST_NAME' in pbs_conf:\n            return pbs_conf['PBS_SERVER_HOST_NAME']\n        elif 'PBS_LEAF_NAME' in pbs_conf:\n            return pbs_conf['PBS_LEAF_NAME']\n\n        return pbs_conf['PBS_SERVER']\n\n    def parse_pbs_environment(self, hostname=None,\n                              file='/var/spool/pbs/pbs_environment'):\n        \"\"\"\n        Initialize a dictionary by parsing the pbs_environment file\n        \"\"\"\n        return self._parse_file(hostname, file)\n\n    def set_pbs_environment(self, hostname=None,\n                            fin='/var/spool/pbs/pbs_environment', fout=None,\n                            append=True, environ=None):\n        \"\"\"\n        Set the PBS environment\n\n        :param environ: variables to set\n        :type environ: dict or None\n        :param hostname: Hostname of the machine\n        :type hostname: str or None\n        :param fin: pbs_environment input file\n        :type fin: str\n        :param fout: pbs_environment output file\n        :type fout: str or None\n        :param append: whether to append to fout or not, defaults\n                       to True\n        :type append: bool\n        \"\"\"\n        if fout is None 
and fin is not None:\n            fout = fin\n        if environ is None:\n            environ = {}\n        return self._set_file(hostname, fin, fout, append, environ, sudo=True)\n\n    def unset_pbs_environment(self, hostname=None,\n                              fin='/var/spool/pbs/pbs_environment', fout=None,\n                              environ=None):\n        \"\"\"\n        Unset environment variables in a pbs_environment file\n\n        :param hostname: the name of the host on which to operate\n        :type hostname: str or None\n        :param fin: the input pbs_environment file\n        :type fin: str\n        :param fout: the name of the output pbs_environment file,\n                     defaults to ``/var/spool/pbs/pbs_environment``\n        :type fout: str or None\n        :param environ: The environment keys to unset\n        :type environ: List or str or dict or None\n        \"\"\"\n        if fout is None and fin is not None:\n            fout = fin\n        if environ is None:\n            environ = []\n        elif isinstance(environ, str):\n            environ = environ.split(',')\n        elif isinstance(environ, dict):\n            environ = list(environ.keys())\n\n        tounset = []\n        cur_environ = self.parse_pbs_environment(hostname, fin)\n        for k in environ:\n            if k in cur_environ:\n                tounset.append(k)\n                del cur_environ[k]\n        if tounset:\n            self.logger.info('Unset ' + \",\".join(tounset) + ' from ' + fout)\n\n        return self._set_file(hostname, fin, fout, append=False,\n                              variables=cur_environ, sudo=True)\n\n    def parse_rhosts(self, hostname=None, user=None):\n        \"\"\"\n        Parse a user's ``.rhosts`` file\n\n        :param hostname: Hostname of the machine\n        :type hostname: str or None\n        :param user: User name\n        :type user: str or None\n        \"\"\"\n        if hostname is None:\n            hostname = socket.gethostname()\n 
       if user is None:\n            user = os.getuid()\n        try:\n            # currently assumes identical file system layout on every host\n            if isinstance(user, int):\n                home = pwd.getpwuid(user).pw_dir\n            else:\n                home = pwd.getpwnam(user).pw_dir\n            rhost = os.path.join(home, '.rhosts')\n            rv = self.cat(hostname, rhost, level=logging.DEBUG2, runas=user,\n                          logerr=False)\n            if rv['rc'] != 0:\n                return {}\n            props = {}\n            for l in rv['out']:\n                if l and l[0] != '#':\n                    k, v = l.split()\n                    v = v.strip()\n                    if k in props:\n                        if isinstance(props[k], list):\n                            props[k].append(v)\n                        else:\n                            props[k] = [props[k], v]\n                    else:\n                        props[k] = v\n        except BaseException:\n            self.logger.error('error parsing .rhosts')\n            self.logger.error(traceback.format_exc())\n            return {}\n        return props\n\n    def set_rhosts(self, hostname=None, user=None, entry={}, append=True):\n        \"\"\"\n        Set the remote host attributes\n\n        :param entry: remote hostname user dictionary\n        :type entry: Dictionary\n        :param append: If True, append to existing entries, else\n                       overwrite them\n        :type append: boolean\n        \"\"\"\n        if hostname is None:\n            hostname = socket.gethostname()\n        if user is None:\n            user = os.getuid()\n        if append:\n            conf = self.parse_rhosts(hostname, user)\n            for k, v in entry.items():\n                if k in conf:\n                    if isinstance(conf[k], list):\n                        if isinstance(v, list):\n                            conf[k].extend(v)\n                        else:\n              
              conf[k].append(v)\n                    else:\n                        if isinstance(v, list):\n                            conf[k] = [conf[k]] + v\n                        else:\n                            conf[k] = [conf[k], v]\n                else:\n                    conf[k] = v\n        else:\n            conf = entry\n        try:\n            # currently assumes identical file system layout on every host\n            if isinstance(user, int):\n                _user = pwd.getpwuid(user)\n                home = _user.pw_dir\n                uid = _user.pw_uid\n            else:\n                # user might be PbsUser object\n                _user = pwd.getpwnam(str(user))\n                home = _user.pw_dir\n                uid = _user.pw_uid\n            rhost = os.path.join(home, '.rhosts')\n            fn = self.create_temp_file(hostname)\n            self.chmod(hostname, fn, mode=0o755)\n            with open(fn, 'w') as fd:\n                fd.write('#!/bin/bash\\n')\n                fd.write('cd %s\\n' % (home))\n                fd.write('%s -rf %s\\n' % (self.which(hostname, 'rm',\n                                                     level=logging.DEBUG2),\n                                          rhost))\n                fd.write('touch %s\\n' % (rhost))\n                for k, v in conf.items():\n                    if isinstance(v, list):\n                        for eachprop in v:\n                            fields = 'echo \"%s %s\" >> %s\\n' % (\n                                str(k),\n                                str(eachprop),\n                                rhost)\n                            fd.write(fields)\n                    else:\n                        fields = 'echo \"%s %s\" >> %s\\n' % (str(k), str(v),\n                                                           rhost)\n                        fd.write(fields)\n                fd.write('%s 0600 %s\\n' % (self.which(hostname, 'chmod',\n                           
                            level=logging.DEBUG2),\n                                           rhost))\n            ret = self.run_cmd(hostname, cmd=fn, runas=uid)\n            self.rm(hostname, path=fn)\n            if ret['rc'] != 0:\n                raise Exception(ret['out'] + ret['err'])\n        except Exception as e:\n            raise PbsConfigError(rc=1, rv=None, msg='error writing .rhosts ' +\n                                 str(e))\n        return conf\n\n    def map_pbs_conf_to_cmd(self, cmd_map={}, pconf={}):\n        \"\"\"\n        Map PBS configuration parameters to command-line arguments\n\n        :param cmd_map: command mapping\n        :type cmd_map: Dictionary\n        :param pconf: PBS conf parameter dictionary\n        :type pconf: Dictionary\n        \"\"\"\n        cmd = []\n        for k, v in pconf.items():\n            if k in cmd_map:\n                cmd += [cmd_map[k], str(v)]\n        return cmd\n\n    def get_current_user(self):\n        \"\"\"\n        Helper function to return the name of the current user\n        \"\"\"\n        if self._current_user is not None:\n            return self._current_user\n        self._current_user = pwd.getpwuid(os.getuid())[0]\n        return self._current_user\n\n    def check_user_exists(self, username=None, hostname=None, port=None):\n        \"\"\"\n        Check whether a user exists\n\n        :param username: Username to check\n        :type username: str or None\n        :param hostname: Machine hostname\n        :type hostname: str or None\n        :param port: port used to ssh to the other host\n        :type port: str or None\n        :returns: True if the user exists, else False\n        \"\"\"\n        if hostname is None:\n            hostname = socket.gethostname()\n        if self.get_platform() == \"shasta\":\n            runas = username\n        else:\n            runas = None\n        ret = self.run_cmd(hostname, ['id', username], port=port, runas=runas)\n        if ret['rc'] == 0:\n            return 
True\n        return False\n\n    def check_group_membership(self, username=None, uid=None, grpname=None,\n                               gid=None):\n        \"\"\"\n        Checks whether a user, passed in as username or uid, is a\n        member of a group, passed in as group name or group id.\n\n        :param username: The username to inquire about\n        :type username: str or None\n        :param uid: The uid of the user to inquire about (alternative\n                    to username)\n        :param grpname: The groupname to check for user membership\n        :type grpname: str or None\n        :param gid: The group id to check for user membership\n                    (alternative to grpname)\n        \"\"\"\n        if username is None and uid is None:\n            self.logger.warning('A username or uid was expected')\n            return True\n        if grpname is None and gid is None:\n            self.logger.warning('A grpname or gid was expected')\n            return True\n        if grpname:\n            try:\n                _g = grp.getgrnam(grpname)\n                if username and username in _g.gr_mem:\n                    return True\n                elif uid is not None:\n                    _u = pwd.getpwuid(uid)\n                    if _u.pw_name in _g.gr_mem:\n                        return True\n            except BaseException:\n                self.logger.error('Unknown user or group')\n        return False\n\n    def group_memberships(self, group_list=[]):\n        \"\"\"\n        Returns all group memberships as a dictionary of group names\n        and associated memberships\n        \"\"\"\n        groups = {}\n        if not group_list:\n            return groups\n        users_list = [u.pw_name for u in pwd.getpwall()]\n        glist = {}\n        for u in users_list:\n            info = self.get_id_info(u)\n            if not info['pgroup'] in list(glist.keys()):\n                glist[info['pgroup']] = [info['name']]\n            else:\n     
           glist[info['pgroup']].append(info['name'])\n            for g in info['groups']:\n                if g not in list(glist.keys()):\n                    glist[g] = []\n                if not info['name'] in glist[g]:\n                    glist[g].append(info['name'])\n        for g in group_list:\n            if g in list(glist.keys()):\n                groups[g] = glist[g]\n            else:\n                try:\n                    i = grp.getgrnam(g)\n                    groups[g] = i.gr_mem\n                except KeyError:\n                    pass\n        return groups\n\n    def get_id_info(self, user):\n        \"\"\"\n        Return user info in dic format\n        obtained by ``\"id -a <user>\"`` command for given user\n\n        :param user: The username to inquire about\n        :type user: str\n        :returns: dic format:\n\n                {\n\n                   \"uid\": <uid of given user>,\n\n                   \"gid\": <gid of given user's primary group>,\n\n                   \"name\": <name of given user>,\n\n                   \"pgroup\": <name of primary group of given user>,\n\n                   \"groups\": <list of names of groups of given user>\n\n                }\n        \"\"\"\n        info = {'uid': None, 'gid': None, 'name': None, 'pgroup': None,\n                'groups': None}\n        ret = self.run_cmd(cmd=['id', '-a', str(user)], logerr=True)\n        if ret['rc'] == 0:\n            p = re.compile(r'(?P<uid>\\d+)\\((?P<name>[\\w\\s.\"\\'-]+)\\)')\n            map_list = re.findall(p, ret['out'][0])\n            info['uid'] = int(map_list[0][0])\n            info['name'] = map_list[0][1].strip()\n            info['gid'] = int(map_list[1][0])\n            info['pgroup'] = map_list[1][1].strip()\n            groups = []\n            if len(map_list) > 2:\n                for g in map_list[2:]:\n                    groups.append(g[1].strip().strip('\"').strip(\"'\"))\n            info['groups'] = groups\n        return 
info\n\n    def get_tempdir(self, hostname=None):\n        \"\"\"\n        :returns: The temporary directory on the given host\n                  Default host is localhost.\n        \"\"\"\n        # return the cached value whenever possible\n\n        if hostname is None:\n            hostname = socket.gethostname()\n\n        if hostname in self._tempdir:\n            return self._tempdir[hostname]\n\n        if self.is_localhost(hostname):\n            self._tempdir[hostname] = tempfile.gettempdir()\n        else:\n            pyexec = self.which(hostname, 'python3', level=logging.DEBUG2)\n            cmd = [pyexec, '-c',\n                   '\"import tempfile; print(tempfile.gettempdir())\"']\n            ret = self.run_cmd(hostname, cmd, level=logging.DEBUG)\n            if ret['rc'] == 0:\n                self._tempdir[hostname] = ret['out'][0].strip()\n            else:\n                # Optimistically fall back to /tmp.\n                self._tempdir[hostname] = '/tmp'\n        return self._tempdir[hostname]\n\n    def run_cmd(self, hosts=None, cmd=None, sudo=False, stdin=None,\n                stdout=PIPE, stderr=PIPE, input=None, cwd=None, env=None,\n                runas=None, logerr=True, as_script=False, wait_on_script=True,\n                level=logging.INFOCLI2, port=None):\n        \"\"\"\n        Run a command on a host or list of hosts.\n\n        :param hosts: the name of hosts on which to run the command,\n                      can be a comma-separated string or a list.\n                      Defaults to localhost\n        :type hosts: str or None\n        :param cmd: the command to run\n        :type cmd: str or None\n        :param sudo: whether to run the command as root or not.\n                     Defaults to False.\n        :type sudo: boolean\n        :param stdin: custom stdin. Defaults to PIPE\n        :param stdout: custom stdout. Defaults to PIPE\n        :param stderr: custom stderr. 
Defaults to PIPE\n        :param input: input to pass to the pipe on target host,\n                      e.g. PBS answer file\n        :param cwd: working directory on local host from which\n                    command is run\n        :param env: environment variables to set on local host\n        :param runas: run command as given user. Defaults to calling\n                      user\n        :param logerr: whether to log error messages or not. Defaults\n                       to True\n        :type logerr: boolean\n        :param as_script: if True, run the command in a script\n                          created as a temporary file that gets\n                          deleted after being run. This is used\n                          mainly to circumvent some implementations\n                          of sudo that prevent passing environment\n                          variables through sudo.\n        :type as_script: boolean\n        :param wait_on_script: If True (default) waits on process\n                               launched as script to return.\n        :type wait_on_script: boolean\n        :type port: str\n        :param port: port number used with remote host IP address\n                     for ssh\n        :returns: error, output, return code as a dictionary:\n                  ``{'out':...,'err':...,'rc':...}``\n        \"\"\"\n\n        rshcmd = []\n        sudocmd = []\n        platform = self.get_platform()\n        _runas_user = None\n\n        if level is None:\n            level = self.logger.level\n\n        _user = self.get_current_user()\n\n        # runas may be a PbsUser object, ensure it is a string for the\n        # remainder of the function\n        if runas is not None:\n            if isinstance(runas, int):\n                runas = pwd.getpwuid(runas).pw_name\n            elif not isinstance(runas, str):\n                # must be as PbsUser object\n                runas = str(runas)\n\n        if runas:\n            _runas_user = 
PbsUser.get_user(runas)\n\n        if isinstance(cmd, str):\n            cmd = cmd.split()\n\n        if hosts is None:\n            hosts = socket.gethostname()\n\n        if isinstance(hosts, str):\n            hosts = hosts.split(',')\n\n        if not isinstance(hosts, list):\n            err_msg = 'target hostnames must be a comma-separated ' + \\\n                'string or list'\n            self.logger.error(err_msg)\n            return {'out': '', 'err': err_msg, 'rc': 1}\n\n        ret = {'out': '', 'err': '', 'rc': 0}\n\n        for hostname in hosts:\n            if (platform == \"shasta\") and _runas_user:\n                hostname = _runas_user.host if _runas_user.host else hostname\n                port = _runas_user.port\n            islocal = self.is_localhost(hostname)\n            if islocal is None:\n                # an error occurred processing that name, move on\n                # the error is logged in is_localhost.\n                ret['err'] = 'error getting host by name in run_cmd'\n                ret['rc'] = 1\n                continue\n            if not islocal:\n                if port and platform == \"shasta\":\n                    if runas is None:\n                        user = _user\n                    else:\n                        user = _runas_user.name\n                    rshcmd = self.rsh_cmd + ['-p', port, user + '@' + hostname]\n                else:\n                    rshcmd = self.rsh_cmd + [hostname]\n            if platform != \"shasta\":\n                if sudo or ((runas is not None) and (runas != _user)):\n                    sudocmd = copy.copy(self.sudo_cmd)\n                    if runas is not None:\n                        sudocmd += ['-u', runas]\n\n            # Initialize information to return\n            ret = {'out': None, 'err': None, 'rc': None}\n            rc = rshcmd + sudocmd + cmd\n            if as_script:\n                _script = self.create_temp_file()\n                script_body = 
['#!/bin/bash']\n                if cwd is not None:\n                    script_body += ['cd \"%s\"' % (cwd)]\n                    cwd = None\n                if isinstance(cmd, str):\n                    script_body += [cmd]\n                elif isinstance(cmd, list):\n                    script_body += [\" \".join(cmd)]\n                with open(_script, 'w') as f:\n                    f.write('\\n'.join(script_body))\n                os.chmod(_script, 0o755)\n                if not islocal:\n                    # TODO: get a valid remote temporary file rather than\n                    # assume that the remote host has a similar file\n                    # system layout\n                    self.run_copy(hostname, src=_script, dest=_script,\n                                  runas=runas, level=level)\n                    os.remove(_script)\n                runcmd = rshcmd + sudocmd + [_script]\n            else:\n                runcmd = rc\n\n            _msg = hostname.split('.')[0] + '(run_cmd): '\n            _runcmd = ['\\'\\'' if x == '' else str(x) for x in runcmd]\n            _msg += ' '.join(_runcmd)\n            _msg = [_msg]\n            if as_script:\n                _msg += ['Contents of ' + _script + ':']\n                _msg += ['-' * 40, '\\n'.join(script_body), '-' * 40]\n            self.logger.log(level, '\\n'.join(_msg))\n\n            if input:\n                self.logger.log(level, input)\n\n            try:\n                p = Popen(runcmd, bufsize=-1, stdin=stdin, stdout=stdout,\n                          stderr=stderr, cwd=cwd, env=env)\n            except Exception as e:\n                self.logger.error(\"Error running command \" + str(runcmd))\n                if as_script:\n                    self.logger.error('Script contents: \\n' +\n                                      '\\n'.join(script_body))\n                self.logger.debug(str(e))\n                raise\n\n            if as_script and not wait_on_script:\n            
    o = p.stdout.readline()\n                e = p.stderr.readline()\n                ret['rc'] = 0\n            else:\n                try:\n                    (o, e) = p.communicate(input)\n                except TimeOut:\n                    self.logger.error(\"TimeOut Exception, cmd:%s\" %\n                                      str(runcmd))\n                    raise\n                ret['rc'] = p.returncode\n\n            if as_script:\n                # Remove the script file. If we ran remotely, the file will\n                # be owned by the runas user. If we ran locally, the file\n                # is owned by the current user.\n                # must pass as_script=False, otherwise it would recurse\n                # infinitely\n                self.rm(hostname, path=_script, as_script=False,\n                        level=level, runas=(_user if islocal else runas))\n\n            # handle the case where stdout is not a PIPE\n            if o is not None:\n                ret['out'] = [i.decode(\"utf-8\", 'backslashreplace')\n                              for i in o.splitlines()]\n            else:\n                ret['out'] = []\n            # Some output can be very verbose, for example when listing many\n            # lines of a log file; such messages are typically channeled at\n            # level DEBUG2. To avoid polluting the output with overly\n            # verbose information, we log at most at level DEBUG\n            if level < logging.DEBUG:\n                self.logger.log(level, 'out: ' + str(ret['out']))\n            else:\n                self.logger.debug('out: ' + str(ret['out']))\n            if e is not None:\n                ret['err'] = [i.decode(\"utf-8\", 'backslashreplace')\n                              for i in e.splitlines()]\n            else:\n                ret['err'] = []\n            if ret['err'] and logerr:\n                self.logger.error(\"<\" + get_method_name(self) + '>cmd:' +\n                                  ' '.join(cmd) 
+ ' err: ' + str(ret['err']))\n            else:\n                self.logger.debug(\"<\" + get_method_name(self) + '>cmd:' +\n                                  ' '.join(cmd) + ' err: ' + str(ret['err']))\n            self.logger.debug('rc: ' + str(ret['rc']))\n\n        return ret\n\n    def run_copy(self, hosts=None, srchost=None, src=None, dest=None,\n                 sudo=False, uid=None, gid=None, mode=None, env=None,\n                 logerr=True, recursive=False, runas=None,\n                 preserve_permission=True, level=logging.INFOCLI2):\n        \"\"\"\n        copy a file or directory to specified target hosts.\n\n        :param hosts: the host(s) to which to copy the data. Can be\n                      a comma-separated string or a list\n        :type hosts: str or None\n        :param srchost: the host on which the src file resides.\n        :type srchost: str or None\n        :param src: the path to the file or directory to copy.\n        :type src: str or None\n        :param dest: the destination path.\n        :type dest: str or None\n        :param sudo: whether to copy as root or not. 
Defaults to\n                     False\n        :type sudo: boolean\n        :param uid: optionally change ownership of dest to the\n                    specified user id, referenced by uid number or\n                    username\n        :param gid: optionally change ownership of dest to the\n                    specified group ``name/id``\n        :param mode: optionally set mode bits of dest\n        :param env: environment variables to set on the calling host\n        :param logerr: whether to log error messages or not.\n                       Defaults to True.\n        :param recursive: whether to copy a directory (when True) or\n                          a file. Defaults to False.\n        :type recursive: boolean\n        :param runas: run command as user\n        :type runas: str or None\n        :param preserve_permission: Preserve file permission while\n                                    copying file (cp cmd with -p flag)\n                                    Defaults to True\n        :type preserve_permission: boolean\n        :param level: logging level, defaults to INFOCLI2\n        :type level: int\n        :returns: {'out':<outdata>, 'err': <errdata>, 'rc':<retcode>}\n                  upon completion, and None if no source file is\n                  specified\n        \"\"\"\n\n        if src is None:\n            self.logger.warning('no source file specified')\n            return None\n\n        if hosts is None:\n            hosts = socket.gethostname()\n\n        if isinstance(hosts, str):\n            hosts = hosts.split(',')\n\n        if not isinstance(hosts, list):\n            self.logger.error('destination must be a string or a list')\n            return 1\n\n        if dest is None:\n            dest = src\n\n        # If PTL_SUDO_CMD were to be unset we should assume no sudo\n        if sudo is True and not self.sudo_cmd:\n            sudo = False\n\n        runas = PbsUser.get_user(runas)\n        issrclocal = None\n        if srchost:\n            issrclocal = 
self.is_localhost(srchost)\n        for targethost in hosts:\n            _msg = 'run_copy: '\n            _msg += \" src:%s\" % src\n            _msg += \" to:%s dest:%s\" % (targethost, dest)\n            _msg += \" sudo:%s\" % sudo\n            self.logger.debug(_msg)\n\n            islocal = self.is_localhost(targethost)\n            if sudo and not islocal and not issrclocal:\n                # to avoid a file copy as root, we copy it as current user\n                # and move it remotely to the desired path/name.\n                # First, get a remote temporary filename\n                pyexec = self.which(targethost, 'python3',\n                                    level=logging.DEBUG2)\n                cmd = [pyexec, '-c',\n                       '\"import tempfile;print(' +\n                       'tempfile.mkstemp(\\'PtlPbstmpcopy\\')[1])\"']\n                # save original destination\n                sudo_save_dest = dest\n                # Make the target of the copy the temporary file\n                dest = self.run_cmd(targethost, cmd,\n                                    level=level,\n                                    logerr=logerr)['out'][0]\n                cmd = []\n            else:\n                # if not using sudo or target is local, initialize the\n                # command to run accordingly\n                sudo_save_dest = None\n                if sudo:\n                    cmd = copy.copy(self.sudo_cmd)\n                else:\n                    cmd = []\n\n            # Remote copy if target host is remote or if source file/dir is\n            # remote.\n            if srchost:\n                srchost = socket.getfqdn(srchost)\n            if ((not islocal) or (srchost)):\n                copy_cmd = copy.deepcopy(self.copy_cmd)\n                targethost = socket.getfqdn(targethost)\n                if (srchost == targethost):\n                    cmd += [self.which(targethost, 'cp', level=level)]\n                    if 
preserve_permission:\n                        cmd += ['-p']\n                    if recursive:\n                        cmd += ['-r']\n                    cmd += [src]\n                    cmd += [dest]\n                else:\n                    if not preserve_permission:\n                        copy_cmd.remove('-p')\n                    if copy_cmd[0][0] != '/':\n                        copy_cmd[0] = self.which(targethost, copy_cmd[0],\n                                                 level=level)\n                    cmd += copy_cmd\n                    if recursive:\n                        cmd += ['-r']\n                    if runas and runas.port:\n                        cmd += ['-P', runas.port]\n                    if srchost:\n                        src = srchost + ':' + src\n                    cmd += [src]\n                    if islocal:\n                        cmd += [dest]\n                    else:\n                        if self.get_platform() == 'shasta' and runas:\n                            cmd += [str(runas) + '@' + targethost + ':' + dest]\n                        else:\n                            cmd += [targethost + ':' + dest]\n            else:\n                cmd += [self.which(targethost, 'cp', level=level)]\n                if preserve_permission:\n                    cmd += ['-p']\n                if recursive:\n                    cmd += ['-r']\n                cmd += [src]\n                cmd += [dest]\n\n            if srchost == targethost:\n                ret = self.run_cmd(targethost, cmd, env=env,\n                                   runas=runas, logerr=logerr, level=level)\n            elif self.get_platform() == 'shasta':\n                ret = self.run_cmd(socket.gethostname(), cmd, env=env,\n                                   logerr=logerr, level=level)\n            else:\n                ret = self.run_cmd(socket.gethostname(), cmd, env=env,\n                                   runas=runas, logerr=logerr, 
level=level)\n\n            if ret['rc'] != 0:\n                self.logger.error(ret['err'])\n            elif sudo_save_dest:\n                cmd = [self.which(targethost, 'cp', level=level)]\n                cmd += [dest, sudo_save_dest]\n                ret = self.run_cmd(targethost, cmd=cmd, sudo=True, level=level)\n                self.rm(targethost, path=dest, level=level)\n                dest = sudo_save_dest\n                if ret['rc'] != 0:\n                    self.logger.error(ret['err'])\n\n            if mode is not None:\n                self.chmod(targethost, path=dest, mode=mode, sudo=sudo,\n                           recursive=recursive, runas=runas)\n            if ((uid is not None and uid != self.get_current_user()) or\n                    gid is not None):\n                if dest == self.get_pbs_conf_file(targethost):\n                    uid = pwd.getpwnam('root').pw_uid\n                    gid = pwd.getpwnam('root').pw_gid\n                self.chown(targethost, path=dest, uid=uid, gid=gid, sudo=True,\n                           recursive=False)\n\n        # return the result of the last copy performed\n        return ret\n\n    def run_ptl_cmd(self, hostname, cmd, sudo=False, stdin=None, stdout=PIPE,\n                    stderr=PIPE, input=None, cwd=None, env=None, runas=None,\n                    logerr=True, as_script=False, wait_on_script=True,\n                    level=logging.INFOCLI2):\n        \"\"\"\n        Wrapper method of run_cmd to run a PTL command\n        \"\"\"\n        # resolve the absolute path of the command and prepend the log level\n        self.logger.infocli('running command \"%s\" on %s' % (' '.join(cmd),\n                                                            hostname))\n        _cmd = [self.which(exe=cmd[0], level=level)]\n        _cmd += ['-l', logging.getLevelName(self.logger.parent.level)]\n        _cmd += cmd[1:]\n        cmd = _cmd\n        self.logger.debug(' '.join(cmd))\n        dest = None\n        if ('PYTHONPATH' in os.environ and\n               
 not self.is_localhost(hostname)):\n            body = ['#!/bin/bash']\n            body += ['PYTHONPATH=%s exec %s' % (os.environ['PYTHONPATH'],\n                                                ' '.join(cmd))]\n            fn = self.create_temp_file(body='\\n'.join(body))\n            tmpdir = self.get_tempdir(hostname)\n            dest = os.path.join(tmpdir, os.path.basename(fn))\n            oldc = self.copy_cmd[:]\n            self.set_copy_cmd('scp -p')\n            self.run_copy(hostname, src=fn, dest=dest, mode=0o755, level=level)\n            self.set_copy_cmd(' '.join(oldc))\n            self.rm(None, path=fn, force=True, logerr=False)\n            cmd = dest\n        ret = self.run_cmd(hostname, cmd, sudo, stdin, stdout, stderr, input,\n                           cwd, env, runas, logerr, as_script, wait_on_script,\n                           level)\n        if dest is not None:\n            self.rm(hostname, path=dest, force=True, logerr=False)\n        # TODO: check why output is coming to ret['err']\n        if ret['rc'] == 0:\n            ret['out'] = ret['err']\n            ret['err'] = []\n        return ret\n\n    @classmethod\n    def set_sudo_cmd(cls, cmd):\n        \"\"\"\n        set the sudo command\n        \"\"\"\n        cls.logger.infocli('setting sudo command to ' + cmd)\n        cls.sudo_cmd = cmd.split()\n\n    @classmethod\n    def set_copy_cmd(cls, cmd):\n        \"\"\"\n        set the copy command\n        \"\"\"\n        cls.logger.infocli('setting copy command to ' + cmd)\n        cls.copy_cmd = cmd.split()\n\n    @classmethod\n    def set_rsh_cmd(cls, cmd):\n        \"\"\"\n        set the remote shell command\n        \"\"\"\n        cls.logger.infocli('setting remote shell command to ' + cmd)\n        cls.rsh_cmd = cmd.split()\n\n    def is_localhost(self, host=None):\n        \"\"\"\n        :param host: Hostname of machine\n        :type host: str or None\n        :returns: true if specified host (by name) is the localhost\n  
                all aliases matching the hostname are searched\n        \"\"\"\n        if host is None:\n            return True\n\n        if host in self._h2l:\n            return self._h2l[host]\n\n        try:\n            (hostname, aliaslist, iplist) = socket.gethostbyname_ex(host)\n        except BaseException:\n            self.logger.error('error getting host by name: ' + host)\n            traceback.print_stack()\n            return None\n\n        localhost = socket.gethostname()\n        if localhost == hostname or localhost in aliaslist:\n            self._h2l[host] = True\n            return True\n        try:\n            ipaddr = socket.gethostbyname(localhost)\n        except BaseException:\n            self.logger.error('could not resolve local host name')\n            return False\n        if ipaddr in iplist:\n            self._h2l[host] = True\n            return True\n        # on a shasta machine, the name returned by `hostname` (pbs-host) is\n        # different than the one we tell PTL to use (pbs-service-nmn). 
This\n        # causes a name mismatch, so we should just set it to be True\n        if (self.get_platform() == 'shasta' and host == 'pbs-service-nmn' and\n                localhost == 'pbs-host'):\n            self._h2l[host] = True\n            return True\n        self._h2l[host] = False\n        return False\n\n    def isdir(self, hostname=None, path=None, sudo=False, runas=None,\n              level=logging.INFOCLI2):\n        \"\"\"\n        :param hostname: The name of the host on which to check for\n                         directory\n        :type hostname: str or None\n        :param path: The path to the directory to check\n        :type path: str or None\n        :param sudo: Whether to run the command as a privileged user\n        :type sudo: boolean\n        :param runas: run command as user\n        :type runas: str or None\n        :param level: Logging level\n        :returns: True if directory pointed to by path exists and\n                  False otherwise\n        \"\"\"\n        if path is None:\n            return False\n\n        if (self.is_localhost(hostname) and (not sudo) and (runas is None)):\n            return os.path.isdir(path)\n        else:\n            # Constraints on the build system prevent running commands as\n            # a privileged user through python, fall back to ls\n            dirname = os.path.dirname(path)\n            basename = os.path.basename(path)\n            cmd = ['ls', '-l', dirname]\n            self.logger.log(level, \"grep'ing for \" + basename + \" in \" +\n                            dirname)\n            ret = self.run_cmd(hostname, cmd=cmd, sudo=sudo, runas=runas,\n                               logerr=False, level=level)\n            if ret['rc'] != 0:\n                return False\n            else:\n                for l in ret['out']:\n                    if basename == l[-len(basename):] and l.startswith('d'):\n                        return True\n\n        return False\n\n    def isfile(self, 
hostname=None, path=None, sudo=False, runas=None,\n               level=logging.INFOCLI2):\n        \"\"\"\n        :param hostname: The name of the host on which to check for\n                         file\n        :type hostname: str or None\n        :param path: The path to the file to check\n        :type path: str or None\n        :param sudo: Whether to run the command as a privileged user\n        :type sudo: boolean\n        :param runas: run command as user\n        :type runas: str or None\n        :param level: Logging level\n        :returns: True if file pointed to by path exists, and False\n                  otherwise\n        \"\"\"\n\n        if path is None:\n            return False\n\n        if (self.is_localhost(hostname) and (not sudo) and (runas is None)):\n            return os.path.isfile(path)\n        else:\n            # Constraints on the build system prevent running commands as\n            # a privileged user through python, fall back to ls\n            cmd = ['ls', '-l', path]\n            ret = self.run_cmd(hostname, cmd=cmd, sudo=sudo, runas=runas,\n                               logerr=False, level=level)\n            if ret['rc'] != 0:\n                return False\n            elif ret['out']:\n                if not ret['out'][0].startswith('d'):\n                    return True\n\n        return False\n\n    def getmtime(self, hostname=None, path=None, sudo=False, runas=None,\n                 level=logging.INFOCLI2):\n        \"\"\"\n        :param hostname: The name of the host on which file exists\n        :type hostname: str or None\n        :param path: The path to the file to get mtime\n        :type path: str or None\n        :param sudo: Whether to run the command as a privileged user\n        :type sudo: boolean\n        :param runas: run command as user\n        :type runas: str or None\n        :param level: Logging level\n        :returns: Modified time of given file\n        \"\"\"\n\n        if path is None:\n    
        return None\n\n        if (self.is_localhost(hostname) and (not sudo) and (runas is None)):\n            return os.path.getmtime(path)\n        else:\n            py_cmd = 'import os; print(os.path.getmtime(\\'%s\\'))' % (path)\n            if not self.is_localhost(hostname):\n                py_cmd = '\\\"' + py_cmd + '\\\"'\n            pyexec = self.which(hostname, 'python3', level=logging.DEBUG2)\n            cmd = [pyexec, '-c', py_cmd]\n            ret = self.run_cmd(hostname, cmd=cmd, sudo=sudo, runas=runas,\n                               logerr=False, level=level)\n            if ret['rc'] == 0 and len(ret['out']) == 1:\n                # evaluate the printed timestamp once and accept only numbers\n                mtime = eval(ret['out'][0].strip())\n                if isinstance(mtime, (int, float)):\n                    return mtime\n        return None\n\n    def listdir(self, hostname=None, path=None, sudo=False, runas=None,\n                fullpath=True, level=logging.INFOCLI2):\n        \"\"\"\n        :param hostname: The name of the host on which to list the\n                         directory\n        :type hostname: str or None\n        :param path: The path to the directory to list\n        :type path: str or None\n        :param sudo: Whether to run the listing as root or not. 
Defaults to\n                     False\n        :type sudo: bool\n        :param runas: run command as user\n        :type runas: str or None\n        :param fullpath: Return full paths?\n        :type fullpath: bool\n        :param level: Logging level.\n        :type level: int\n        :returns: A list containing the names of the entries in\n                  the directory or an empty list in case no files exist\n        \"\"\"\n        retvalerr = []\n\n        if path is None:\n            return retvalerr\n\n        if (self.is_localhost(hostname) and (not sudo) and (runas is None)):\n            try:\n                files = os.listdir(path)\n            except OSError:\n                return retvalerr\n        else:\n            ret = self.run_cmd(hostname, cmd=['ls', path], sudo=sudo,\n                               runas=runas, logerr=False, level=level)\n            if ret['rc'] == 0:\n                files = ret['out']\n            else:\n                return retvalerr\n        if fullpath is True:\n            return [os.path.join(path, p.strip()) for p in files]\n        else:\n            return [p.strip() for p in files]\n\n    def chmod(self, hostname=None, path=None, mode=None, sudo=False,\n              runas=None, recursive=False, logerr=True,\n              level=logging.INFOCLI2):\n        \"\"\"\n        Generic function of chmod with remote host support\n\n        :param hostname: hostname (default current host)\n        :type hostname: str or None\n        :param path: the path to the file or directory to chmod\n        :type path: str or None\n        :param mode: mode to apply as octal number like 0777,\n                     0666 etc.\n        :param sudo: whether to chmod as root or not. 
Defaults\n                     to False\n        :type sudo: boolean\n        :param runas: run command as user\n        :type runas: str or None\n        :param recursive: whether to chmod a directory recursively (when\n                          true) or a single file. Defaults to False.\n        :type recursive: boolean\n        :param logerr: whether to log error messages or not. Defaults\n                       to True.\n        :type logerr: boolean\n        :param level: logging level, defaults to INFOCLI2\n        :returns: True on success otherwise False\n        \"\"\"\n        if (path is None) or (mode is None):\n            return False\n        islocal = self.is_localhost(hostname)\n        if islocal and not runas and not sudo and not recursive:\n            self.logger.debug('os.chmod %s %s' % (path, oct(mode)))\n            try:\n                os.chmod(path, mode)\n            except OSError as err:\n                if logerr:\n                    self.logger.error(\"os.chmod failed with err:%s\" % str(err))\n                else:\n                    self.logger.debug(\"os.chmod failed with err:%s\" % str(err))\n                return False\n            return True\n        else:\n            cmd = [self.which(hostname, 'chmod', level=level)]\n            if recursive:\n                cmd += ['-R']\n            mode = '{:o}'.format(mode)\n            cmd += [mode, path]\n            ret = self.run_cmd(hostname, cmd=cmd, sudo=sudo, logerr=logerr,\n                               runas=runas, level=level)\n            if ret['rc'] == 0:\n                return True\n        return False\n\n    def chown(self, hostname=None, path=None, uid=None, gid=None, sudo=False,\n              recursive=False, runas=None, logerr=True,\n              level=logging.INFOCLI2):\n        \"\"\"\n        Generic function of chown with remote host support\n\n        :param hostname: hostname (default current host)\n        :type hostname: str or None\n        :param path: the path to the 
file or directory to chown\n        :type path: str or None\n        :param uid: uid to apply (must be either user name or\n                    uid or -1)\n        :param gid: gid to apply (must be either group name or\n                    gid or -1)\n        :param sudo: whether to chown as root or not. Defaults\n                     to False\n        :type sudo: boolean\n        :param recursive: whether to chown a directory recursively (when\n                          true) or a single file. Defaults to False.\n        :type recursive: boolean\n        :param runas: run command as user\n        :type runas: str or None\n        :param logerr: whether to log error messages or not. Defaults\n                       to True.\n        :type logerr: boolean\n        :param level: logging level, defaults to INFOCLI2\n        :returns: True on success otherwise False\n        \"\"\"\n        if path is None or (uid is None and gid is None):\n            return False\n        _u = ''\n        if isinstance(uid, int) and uid != -1:\n            _u = pwd.getpwuid(uid).pw_name\n        elif (isinstance(uid, str) and (uid != '-1')):\n            _u = uid\n        elif uid is not None:\n            # must be a PbsUser object\n            if str(uid) != '-1':\n                _u = str(uid)\n        if _u == '':\n            if gid is not None:\n                # no uid to change; fall through to changing only the group\n                return self.chgrp(hostname, path, gid=gid, sudo=sudo,\n                                  level=level, recursive=recursive,\n                                  runas=runas, logerr=logerr)\n            return False\n        cmd = [self.which(hostname, 'chown', level=level)]\n        if recursive:\n            cmd += ['-R']\n        cmd += [_u, path]\n        ret = self.run_cmd(hostname, cmd=cmd, sudo=sudo, logerr=logerr,\n                           runas=runas, level=level)\n        if ret['rc'] == 0:\n            if gid is not None:\n                if runas is None:\n                    runas = _u\n                rv = self.chgrp(hostname, path, gid=gid, sudo=sudo,\n                                level=level, recursive=recursive, runas=runas,\n                                logerr=logerr)\n                if not rv:\n                    return False\n            return True\n        
return False\n\n    def chgrp(self, hostname=None, path=None, gid=None, sudo=False,\n              recursive=False, runas=None, logerr=True,\n              level=logging.INFOCLI2):\n        \"\"\"\n        Generic function of chgrp with remote host support\n\n        :param hostname: hostname (default current host)\n        :type hostname: str or None\n        :param path: the path to the file or directory to chgrp\n        :type path: str or None\n        :param gid: gid to apply (must be either group name or\n                    gid or -1)\n        :param sudo: whether to chgrp as root or not. Defaults\n                     to False\n        :type sudo: boolean\n        :param recursive: whether to chgrp a directory recursively (when\n                          true) or a single file. Defaults to False.\n        :type recursive: boolean\n        :param runas: run command as user\n        :type runas: str or None\n        :param logerr: whether to log error messages or not. Defaults\n                       to True.\n        :type logerr: boolean\n        :param level: logging level, defaults to INFOCLI2\n        :returns: True on success otherwise False\n        \"\"\"\n        if path is None or gid is None:\n            return False\n\n        _g = ''\n        if isinstance(gid, int) and gid != -1:\n            _g = grp.getgrgid(gid).gr_name\n        elif (isinstance(gid, str) and (gid != '-1')):\n            _g = gid\n        else:\n            # must be a PbsGroup object\n            if str(gid) != '-1':\n                _g = str(gid)\n\n        if _g == '':\n            return False\n\n        cmd = [self.which(hostname, 'chgrp', level=level)]\n        if recursive:\n            cmd += ['-R']\n        cmd += [_g, path]\n\n        ret = self.run_cmd(hostname, cmd=cmd, sudo=sudo, logerr=logerr,\n                           runas=runas, level=level)\n        if ret['rc'] == 0:\n            return True\n\n        return False\n\n    def which(self, hostname=None, exe=None, 
level=logging.INFOCLI2):\n        \"\"\"\n        Generic function of which with remote host support\n\n        :param hostname: hostname (default current host)\n        :type hostname: str or None\n        :param exe: executable to locate (can be full path also)\n                    (if exe is full path then only basename will\n                    be used to locate)\n        :type exe: str or None\n        :param level: logging level, defaults to INFOCLI2\n        \"\"\"\n        if exe is None:\n            return None\n\n        if hostname is None:\n            hostname = socket.gethostname()\n\n        oexe = exe\n        exe = os.path.basename(exe)\n        if hostname in list(self._h2which.keys()):\n            if exe in self._h2which[hostname]:\n                return self._h2which[hostname][exe]\n\n        sudo_wrappers_dir = '/opt/tools/wrappers'\n        _exe = os.path.join(sudo_wrappers_dir, exe)\n        if os.path.isfile(_exe) and os.access(_exe, os.X_OK):\n            if hostname not in list(self._h2which.keys()):\n                self._h2which.setdefault(hostname, {exe: _exe})\n            else:\n                self._h2which[hostname].setdefault(exe, _exe)\n            return _exe\n\n        # Changes specific to python\n        # Use PBS Python if available before looking for system Python\n        if exe == 'python3':\n            pbs_conf = self.parse_pbs_config(hostname)\n            py_path = os.path.join(pbs_conf['PBS_EXEC'], 'python',\n                                   'bin', 'python')\n            cmd = ['ls', '-1', py_path]\n            ret = self.run_cmd(hostname, cmd, logerr=False)\n            if ret['rc'] == 0:\n                if hostname not in self._h2which.keys():\n                    self._h2which.setdefault(hostname, {exe: py_path})\n                else:\n                    self._h2which[hostname].setdefault(exe, py_path)\n                return py_path\n\n        cmd = ['which', exe]\n        ret = self.run_cmd(hostname, 
cmd=cmd, logerr=False,\n                           level=level)\n        if ((ret['rc'] == 0) and (len(ret['out']) == 1) and\n                os.path.isabs(ret['out'][0].strip())):\n            path = ret['out'][0].strip()\n            if hostname not in self._h2which:\n                self._h2which.setdefault(hostname, {exe: path})\n            else:\n                self._h2which[hostname].setdefault(exe, path)\n            return path\n        else:\n            return oexe\n\n    def rm(self, hostname=None, path=None, sudo=False, runas=None,\n           recursive=False, force=False, cwd=None, logerr=True,\n           as_script=False, level=logging.INFOCLI2):\n        \"\"\"\n        Generic function of rm with remote host support\n\n        :param hostname: hostname (default current host)\n        :type hostname: str or None\n        :param path: the path to the files or directories to remove;\n                     for more than one file or directory, pass a\n                     list\n        :type path: str or None\n        :param sudo: whether to remove files or directories as root\n                     or not. Defaults to False\n        :type sudo: boolean\n        :param runas: remove files or directories as given user.\n                      Defaults to calling user\n        :param recursive: remove files or directories and their\n                          contents recursively\n        :type recursive: boolean\n        :param force: force remove files or directories\n        :type force: boolean\n        :param cwd: working directory on local host from which\n                    command is run\n        :param logerr: whether to log error messages or not.\n                       Defaults to True.\n        :type logerr: boolean\n        :param as_script: if True, run the rm in a script created\n                          as a temporary file that gets deleted after\n                          being run. 
This is used mainly to handle\n                          wildcards in the path list. Defaults to False.\n        :type as_script: boolean\n        :param level: logging level, defaults to INFOCLI2\n        :returns: True on success otherwise False\n        \"\"\"\n        if (path is None) or (len(path) == 0):\n            return True\n\n        cmd = [self.which(hostname, 'rm', level=level)]\n        if recursive and force:\n            cmd += ['-rf']\n        else:\n            if recursive:\n                cmd += ['-r']\n            if force:\n                cmd += ['-f']\n\n        if isinstance(path, list):\n            for p in path:\n                if p == '/':\n                    msg = 'encountered a dangerous package path ' + p\n                    self.logger.error(msg)\n                    return False\n            cmd += path\n        else:\n            if path == '/':\n                msg = 'encountered a dangerous package path ' + path\n                self.logger.error(msg)\n                return False\n            cmd += [path]\n\n        ret = self.run_cmd(hostname, cmd=cmd, sudo=sudo, logerr=logerr,\n                           runas=runas, cwd=cwd, level=level,\n                           as_script=as_script)\n        if ret['rc'] != 0:\n            return False\n        return True\n\n    def mkdir(self, hostname=None, path=None, mode=None, sudo=False,\n              runas=None, parents=True, cwd=None, logerr=True,\n              as_script=False, level=logging.INFOCLI2):\n        \"\"\"\n        Generic function of mkdir with remote host support\n\n        :param hostname: hostname (default current host)\n        :type hostname: str or None\n        :param path: the path to the directories to create;\n                     for more than one directory, pass a list\n        :type path: str or None\n        :param mode: mode to use while creating directories\n                     (must be octal like 0777)\n        :param sudo: whether to create 
directories as root or not.\n                     Defaults to False\n        :type sudo: boolean\n        :param runas: create directories as given user. Defaults to\n                      calling user\n        :param parents: create parent directories as needed. Defaults\n                        to True\n        :type parents: boolean\n        :param cwd: working directory on local host from which\n                    command is run\n        :type cwd: str or None\n        :param logerr: whether to log error messages or not. Defaults\n                       to True.\n        :type logerr: boolean\n        :param as_script: if True, run the command in a script\n                          created as a temporary file that gets\n                          deleted after being run. This is used\n                          mainly to handle wildcard in path list.\n                          Defaults to False.\n        :type as_script: boolean\n        :param level: logging level, defaults to INFOCLI2\n        :returns: True on success otherwise False\n        \"\"\"\n        if (path is None) or (len(path) == 0):\n            return True\n\n        cmd = [self.which(hostname, 'mkdir', level=level)]\n        if parents:\n            cmd += ['-p']\n        if mode is not None:\n            mode = '{:o}'.format(mode)\n            cmd += ['-m', mode]\n        if isinstance(path, list):\n            cmd += path\n        else:\n            cmd += [path]\n        ret = self.run_cmd(hostname, cmd=cmd, sudo=sudo, logerr=logerr,\n                           runas=runas, cwd=cwd, level=level,\n                           as_script=as_script)\n        if ret['rc'] != 0:\n            return False\n        return True\n\n    def cat(self, hostname=None, filename=None, sudo=False, runas=None,\n            logerr=True, level=logging.INFOCLI2, option=None):\n        \"\"\"\n        Generic function of cat with remote host support\n\n        :param hostname: hostname (default current host)\n     
   :type hostname: str or None\n        :param filename: the path to the filename to cat\n        :type filename: str or None\n        :param sudo: whether to run cat as root or not.\n                     Defaults to False\n        :type sudo: boolean\n        :param runas: run cat as given user. Defaults\n                      to calling user\n        :type runas: str or None\n        :param logerr: whether to log error messages or not. Defaults\n                       to True.\n        :type logerr: boolean\n        :returns: output of run_cmd\n        \"\"\"\n        cmd = [self.which(hostname, 'cat', level=level)]\n        if option:\n            cmd += [option, filename]\n        else:\n            cmd.append(filename)\n        return self.run_cmd(hostname, cmd=cmd, sudo=sudo,\n                            runas=runas, logerr=logerr, level=level)\n\n    def tail(self, hostname=None, filename=None, sudo=False, runas=None,\n             logerr=True, level=logging.INFOCLI2, option=None):\n        \"\"\"\n        Generic function of tail with remote host support\n\n        :param hostname: hostname (default current host)\n        :type hostname: str or None\n        :param filename: the path to the filename to tail\n        :type filename: str or None\n        :param sudo: whether to run tail as root or not.\n                     Defaults to False\n        :type sudo: boolean\n        :param runas: run tail as given user. Defaults\n                      to calling user\n        :type runas: str or None\n        :param logerr: whether to log error messages or not. 
Defaults\n                       to True.\n        :type logerr: boolean\n        :returns: output of run_cmd\n        \"\"\"\n        cmd = [self.which(hostname, 'tail', level=level)]\n        if option:\n            cmd += [option, filename]\n        else:\n            cmd.append(filename)\n        return self.run_cmd(hostname, cmd=cmd, sudo=sudo,\n                            runas=runas, logerr=logerr, level=level)\n\n    def cmp(self, hostname=None, fileA=None, fileB=None, sudo=False,\n            runas=None, logerr=True):\n        \"\"\"\n        Compare two files and return 0 if they are identical or\n        non-zero if not\n\n        :param hostname: the name of the host to operate on\n        :type hostname: str or None\n        :param fileA: the first file to compare\n        :type fileA: str or None\n        :param fileB: the file to compare fileA to\n        :type fileB: str or None\n        :param sudo: run the command as a privileged user\n        :type sudo: boolean\n        :param runas: run the cmp command as given user\n        :type runas: str or None\n        :param logerr: whether to log error messages or not.\n                       Defaults to True.\n        :type logerr: boolean\n        \"\"\"\n\n        if fileA is None and fileB is None:\n            return 0\n\n        if fileA is None or fileB is None:\n            return 1\n\n        cmd = ['cmp', fileA, fileB]\n        ret = self.run_cmd(hostname, cmd=cmd, sudo=sudo, runas=runas,\n                           logerr=logerr)\n        return ret['rc']\n\n    def useradd(self, name, uid=None, gid=None, shell='/bin/bash',\n                create_home_dir=True, home_dir=None, groups=None, logerr=True,\n                level=logging.INFOCLI2):\n        \"\"\"\n        Add the user\n\n        :param name: User name\n        :type name: str\n        :param shell: shell to use\n        :param create_home_dir: If true then create home directory\n        :type create_home_dir: boolean\n        
:param home_dir: path to home directory\n        :type home_dir: str or None\n        :param groups: User groups\n        \"\"\"\n        self.logger.info('adding user ' + str(name))\n        cmd = ['useradd']\n        cmd += ['-K', 'UMASK=0022']\n        if uid is not None:\n            cmd += ['-u', str(uid)]\n        if shell is not None:\n            cmd += ['-s', shell]\n        if gid is not None:\n            cmd += ['-g', str(gid)]\n        if create_home_dir:\n            cmd += ['-m']\n        if home_dir is not None:\n            cmd += ['-d', home_dir]\n        if ((groups is not None) and (len(groups) > 0)):\n            cmd += ['-G', ','.join([str(g) for g in groups])]\n        cmd += [str(name)]\n        ret = self.run_cmd(cmd=cmd, logerr=logerr, sudo=True, level=level)\n        if ((ret['rc'] != 0) and logerr):\n            raise PtlUtilError(rc=ret['rc'], rv=False, msg=ret['err'])\n\n    def userdel(self, name, del_home=True, force=True, logerr=True,\n                level=logging.INFOCLI2):\n        \"\"\"\n        Delete the user\n\n        :param del_home: If true then delete user home\n        :type del_home: boolean\n        :param force: If true then delete forcefully\n        :type force: boolean\n        \"\"\"\n        cmd = ['userdel']\n        if del_home:\n            cmd += ['-r']\n        if force:\n            cmd += ['-f']\n        cmd += [str(name)]\n        self.logger.info('deleting user ' + str(name))\n        ret = self.run_cmd(cmd=cmd, sudo=True, logerr=False, level=level)\n        if ((ret['rc'] != 0) and logerr):\n            raise PtlUtilError(rc=ret['rc'], rv=False, msg=ret['err'])\n\n    def groupadd(self, name, gid=None, logerr=True, level=logging.INFOCLI2):\n        \"\"\"\n        Add a group\n        \"\"\"\n        self.logger.info('adding group ' + str(name))\n        cmd = ['groupadd']\n        if gid is not None:\n            cmd += ['-g', str(gid)]\n        cmd += [str(name)]\n        ret = self.run_cmd(cmd=cmd, 
sudo=True, logerr=False, level=level)\n        if ((ret['rc'] != 0) and logerr):\n            raise PtlUtilError(rc=ret['rc'], rv=False, msg=ret['err'])\n\n    def groupdel(self, name, logerr=True, level=logging.INFOCLI2):\n        \"\"\"\n        Delete a group\n        \"\"\"\n        self.logger.info('deleting group ' + str(name))\n        cmd = ['groupdel', str(name)]\n        ret = self.run_cmd(cmd=cmd, sudo=True, logerr=logerr, level=level)\n        if ((ret['rc'] != 0) and logerr):\n            raise PtlUtilError(rc=ret['rc'], rv=False, msg=ret['err'])\n\n    def create_temp_file(self, hostname=None, suffix='', prefix='PtlPbs',\n                         dirname=None, text=False, asuser=None, body=None,\n                         level=logging.INFOCLI2):\n        \"\"\"\n        Create a temp file by calling tempfile.mkstemp\n\n        :param hostname: the hostname on which to query tempdir from\n        :type hostname: str or None\n        :param suffix: the file name will end with this suffix\n        :type suffix: str\n        :param prefix: the file name will begin with this prefix\n        :type prefix: str\n        :param dirname: the file will be created in this directory\n        :type dirname: str or None\n        :param text: the file is opened in text mode if this is true,\n                     else in binary mode\n        :type text: boolean\n        :param asuser: Optional username or uid of temp file owner\n        :type asuser: str or None\n        :param body: Optional content to write to the temporary file\n        :type body: str or None\n        :param level: logging level, defaults to INFOCLI2\n        :type level: int\n        \"\"\"\n        _msg = 'create_temp_file(vvv start vvv):'\n        self.logger.debug(_msg)\n\n        # create a temp file as current user\n        (fd, tmpfile) = tempfile.mkstemp(suffix, prefix, dirname, text)\n\n        # write user provided contents to file\n        if body is not None:\n            if isinstance(body, list):\n                os.write(fd, 
\"\\n\".join(body).encode())\n            else:\n                os.write(fd, body.encode())\n        os.close(fd)\n\n        if not hostname and asuser:\n            asuser = PbsUser.get_user(asuser)\n            if asuser.host:\n                hostname = asuser.host\n\n        # if temp file to be created on remote host\n        if not self.is_localhost(hostname):\n            if asuser is not None:\n                # by default mkstemp creates file with 0600 permission\n                # to create file as different user first change the file\n                # permission to 0644 so that other user has read permission\n                self.chmod(path=tmpfile, mode=0o644)\n                # copy temp file created on local host to remote host\n                # as different user\n                self.run_copy(hostname, src=tmpfile, dest=tmpfile,\n                              runas=asuser, preserve_permission=False,\n                              level=level)\n            else:\n                # copy temp file created on localhost to remote as current user\n                self.run_copy(hostname, src=tmpfile, dest=tmpfile,\n                              preserve_permission=False, level=level)\n                # remove local temp file\n                os.unlink(tmpfile)\n        if asuser is not None:\n            # by default mkstemp creates file with 0600 permission\n            # to create file as different user first change the file\n            # permission to 0644 so that other user has read permission\n            self.chmod(hostname, tmpfile, mode=0o644)\n            # since we need to create as a different user than the current\n            # user, create a temp file just to get a temp file name with an\n            # absolute path\n            (_, tmpfile2) = tempfile.mkstemp(suffix, prefix, dirname, text)\n            # remove the newly created temp file\n            os.unlink(tmpfile2)\n            # copy the original temp file as the new temp file\n            self.run_copy(hostname, 
src=tmpfile, dest=tmpfile2, runas=asuser,\n                          preserve_permission=False, level=level)\n            # remove original temp file\n            os.unlink(tmpfile)\n            self.tmpfilelist.append(tmpfile2)\n            return tmpfile2\n        self.tmpfilelist.append(tmpfile)\n        _msg = 'create_temp_file(^^^ end ^^^): '\n        _msg += \" hostname:%s\" % hostname\n        _msg += \" tmpfile:%s\" % tmpfile\n        self.logger.debug(_msg)\n        return tmpfile\n\n    def create_temp_dir(self, hostname=None, suffix='', prefix='PtlPbs',\n                        dirname=None, asuser=None, asgroup=None, mode=0o755,\n                        level=logging.INFOCLI2):\n        \"\"\"\n        Create a temp dir by calling ``tempfile.mkdtemp``\n\n        :param hostname: the hostname on which to query tempdir from\n        :type hostname: str or None\n        :param suffix: the directory name will end with this suffix\n        :type suffix: str\n        :param prefix: the directory name will begin with this prefix\n        :type prefix: str\n        :param dirname: the directory will be created in this directory\n        :type dirname: str or None\n        :param asuser: Optional username of temp directory owner\n        :type asuser: str\n        :param asgroup: Optional group name of temp directory\n                    group owner\n        :type asgroup: str\n        :param mode: Optional mode bits to assign to the temporary\n                     directory\n        :type mode: octal integer\n        :param level: logging level, defaults to INFOCLI2\n        \"\"\"\n        current_user_info = self.get_id_info(self.get_current_user())\n        uid = current_user_info['uid']\n        if asuser is not None:\n            uid = PbsUser.get_user(asuser).uid\n        if asgroup is not None:\n            gid = PbsGroup.get_group(asgroup).gid\n        else:\n            gid = None\n        # create a temp dir as current user\n        tmpdir = 
tempfile.mkdtemp(suffix, prefix)\n        # By default mkdtemp creates dir according to umask.\n        # To create dir as different user first change the dir\n        # permission to 0755 so that other user has read permission\n        self.chmod(path=tmpdir, mode=0o755)\n        if dirname is not None:\n            dirname = str(dirname)\n            self.run_copy(hostname, src=tmpdir, dest=dirname, runas=asuser,\n                          recursive=True, gid=gid, uid=uid,\n                          level=level, preserve_permission=False)\n            self.chmod(hostname, path=dirname, mode=mode, runas=asuser)\n\n            tmpdir = dirname + tmpdir[4:]\n\n        # if temp dir to be created on remote host\n        if not self.is_localhost(hostname):\n            self.run_copy(hostname, src=tmpdir, dest=tmpdir,\n                          level=level, preserve_permission=False,\n                          recursive=True, uid=uid, gid=gid)\n            self.chmod(hostname, path=tmpdir, mode=mode, runas=asuser)\n            # remove local temp dir\n            os.rmdir(tmpdir)\n            return tmpdir\n        elif asuser is not None:\n            # since we need to create as a different user than the current\n            # user, create a temp dir just to get a temp dir name with an\n            # absolute path\n            tmpdir2 = tempfile.mkdtemp(suffix, prefix, dirname)\n            os.rmdir(tmpdir2)\n            # copy the original temp dir as the new temp dir\n            self.run_copy(hostname, src=tmpdir, dest=tmpdir2, runas=asuser,\n                          recursive=True, uid=uid, gid=gid, level=level,\n                          preserve_permission=False)\n            self.chmod(hostname, path=tmpdir2, mode=mode, runas=asuser)\n            # remove original temp dir\n            os.rmdir(tmpdir)\n            return tmpdir2\n        # It's a local directory and no user name was provided\n        self.chmod(path=tmpdir, mode=mode)\n        return tmpdir\n\n    def parse_strace(self, 
lines):\n        \"\"\"\n        Parse strace output. Just the regular expressions for now\n        \"\"\"\n        timestamp_pat = r'(^(\\d{2}:\\d{2}:\\d{2})(\\.\\d+){0,1} |^(\\d+\\.\\d+) ){0,1}'\n        exec_pat = r'execve\\((\"[^\"]+\"), \\[([^]]+)\\], [^,]+ = (\\d+)$'\n\n        timestamp_exec_re = re.compile(timestamp_pat + exec_pat)\n\n        for line in lines:\n            m = timestamp_exec_re.match(line)\n            if m:\n                print(line)\n"
  },
  {
    "path": "test/fw/ptl/utils/pbs_logutils.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport collections\nimport copy\nimport logging\nimport math\nimport re\nimport sys\nimport time\nimport traceback\nfrom datetime import datetime, timedelta, tzinfo\nfrom subprocess import PIPE, Popen\n\nfrom ptl.lib.pbs_testlib import (EQ, JOB, NODE, SET, BatchUtils, ResourceResv,\n                                 Server, PbsAttribute)\nfrom ptl.utils.pbs_dshutils import DshUtils\n\n\"\"\"\nAnalyze ``server``, ``scheduler``, ``MoM``, and ``accounting`` logs.\n\n- Scheduler log analysis:\n\n    Extraction of per cycle information including:\n        cycle start time\n        cycle duration\n        time to query objects from server\n        number of jobs considered\n        number of jobs run\n        number of jobs that failed to run\n        Number of jobs preempted\n        Number of jobs that failed to preempt\n        Number of jobs calendared\n        time to determine that a job can run\n        time to determine that a job can not run\n        time spent calendaring\n        time spent in scheduler solver\n    Summary of all cycles information\n\n- Server log analysis:\n    job submit rate\n    number of jobs ended\n    number of jobs run\n    job run rate\n    job submit rate\n    job end rate\n    job wait time distribution\n    PBS versions\n    node up rate\n    wait time\n\n- Mom log analysis:\n    job submit rate\n    number of jobs ended\n 
   number of jobs run\n    job run rate\n    job submit rate\n    job end rate\n    PBS versions\n\n- Accounting log analysis:\n    job submit rate\n    number of jobs ended\n    number of jobs run\n    job run rate\n    job submit rate\n    job end rate\n    job size (cpu and node) distribution\n    job wait time distribution\n    utilization\n\"\"\"\n\ntm_re = r'(?P<datetime>\\d\\d/\\d\\d/\\d{4}\\s\\d\\d:\\d\\d:\\d\\d(\\.\\d{6})?)'\njob_re = r';(?P<jobid>[\\d\\[\\d*\\]]+)\\.'\nfail_re = r';(?P<jobid>[\\d\\[\\[]+)\\.'\n\n# Server metrics\nNUR = 'node_up_rate'\n\n# Scheduler metrics\nNC = 'num_cycles'\nmCD = 'cycle_duration_min'\nMCD = 'cycle_duration_max'\nmCT = 'min_cycle_time'\nMCT = 'max_cycle_time'\nCDA = 'cycle_duration_mean'\nCD25 = 'cycle_duration_25p'\nCD50 = 'cycle_duration_median'\nCD75 = 'cycle_duration_75p'\nCST = 'cycle_start_time'\nCD = 'cycle_duration'\nQD = 'query_duration'\nNJC = 'num_jobs_considered'\nNJFR = 'num_jobs_failed_to_run'\nSST = 'scheduler_solver_time'\nNJCAL = 'num_jobs_calendared'\nNJFP = 'num_jobs_failed_to_preempt'\nNJP = 'num_jobs_preempted'\nT2R = 'time_to_run'\nT2D = 'time_to_discard'\nTiS = 'time_in_sched'\nTTC = 'time_to_calendar'\n\n# Scheduling Estimated Start Time\nEST = 'estimates'\nEJ = 'estimated_jobs'\nEat = 'estimated'\nDDm = 'drift_duration_min'\nDDM = 'drift_duration_max'\nDDA = 'drift_duration_mean'\nDD50 = 'drift_duration_median'\nND = 'num_drifts'\nNJD = 'num_jobs_drifted'\nNJND = 'num_jobs_no_drift'\nNEST = 'num_estimates'\nJDD = 'job_drift_duration'\nESTR = 'estimated_start_time_range'\nESTA = 'estimated_start_time_accuracy'\nJST = 'job_start_time'\nESTS = 'estimated_start_time_summary'\nDs15mn = 'drifted_sub_15mn'\nDs1hr = 'drifted_sub_1hr'\nDs3hr = 'drifted_sub_3hr'\nDo3hr = 'drifted_over_3hr'\n\n# Accounting metrics\nJWTm = 'job_wait_time_min'\nJWTM = 'job_wait_time_max'\nJWTA = 'job_wait_time_mean'\nJWT25 = 'job_wait_time_25p'\nJWT50 = 'job_wait_time_median'\nJWT75 = 
'job_wait_time_75p'\nJRTm = 'job_run_time_min'\nJRT25 = 'job_run_time_25p'\nJRT50 = 'job_run_time_median'\nJRTA = 'job_run_time_mean'\nJRT75 = 'job_run_time_75p'\nJRTM = 'job_run_time_max'\nJNSm = 'job_node_size_min'\nJNS25 = 'job_node_size_25p'\nJNS50 = 'job_node_size_median'\nJNSA = 'job_node_size_mean'\nJNS75 = 'job_node_size_75p'\nJNSM = 'job_node_size_max'\nJCSm = 'job_cpu_size_min'\nJCS25 = 'job_cpu_size_25p'\nJCS50 = 'job_cpu_size_median'\nJCSA = 'job_cpu_size_mean'\nJCS75 = 'job_cpu_size_75p'\nJCSM = 'job_cpu_size_max'\nCPH = 'cpu_hours'\nNPH = 'node_hours'\nUSRS = 'unique_users'\nUNCPUS = 'utilization_ncpus'\nUNODES = 'utilization_nodes'\n\n# Generic metrics\nVER = 'pbs_version'\nJID = 'job_id'\nJRR = 'job_run_rate'\nJSR = 'job_submit_rate'\nJER = 'job_end_rate'\nJTR = 'job_throughput'\nNJQ = 'num_jobs_queued'\nNJR = 'num_jobs_run'\nNJE = 'num_jobs_ended'\nDUR = 'duration'\nRI = 'custom_interval'\nIT = 'init_time'\nCF = 'custom_freq'\nCFC = 'custom_freq_counts'\nCG = 'custom_groups'\n\nPARSER_OK_CONTINUE = 0\nPARSER_OK_STOP = 1\nPARSER_ERROR_CONTINUE = 2\nPARSER_ERROR_STOP = 3\n\n\nclass PBSLogUtils(object):\n\n    \"\"\"\n    Miscellaneous utilities to process log files\n    \"\"\"\n\n    logger = logging.getLogger(__name__)\n    du = DshUtils()\n\n    @classmethod\n    def convert_date_time(cls, dt=None, fmt=None):\n        \"\"\"\n        convert a date time string of the form given by fmt into\n        number of seconds since epoch (with possible microseconds).\n        it considers the current system's timezone to convert\n        the datetime to epoch time\n\n        :param dt: the datetime string to convert\n        :type dt: str or None\n        :param fmt: Format to which datetime is to be converted\n        :type fmt: str\n        :returns: timestamp in seconds since epoch,\n                or None if conversion fails\n        \"\"\"\n        if dt is None:\n            return None\n\n        micro = False\n        if fmt is None:\n            if 
'.' in dt:\n                micro = True\n                fmt = \"%m/%d/%Y %H:%M:%S.%f\"\n            else:\n                fmt = \"%m/%d/%Y %H:%M:%S\"\n\n        try:\n            # Get datetime object\n            t = datetime.strptime(dt, fmt)\n            # Get epoch-timestamp assuming local timezone\n            tm = t.timestamp()\n        except ValueError:\n            cls.logger.debug(\"could not convert date time: \" + str(dt))\n            return None\n\n        if micro is True:\n            return tm\n        else:\n            return int(tm)\n\n    def get_num_lines(self, log, hostname=None, sudo=False):\n        \"\"\"\n        Get the number of lines of a particular log\n\n        :param log: the log file name\n        :type log: str\n        \"\"\"\n        f = self.open_log(log, hostname, sudo=sudo)\n        nl = sum([1 for _ in f])\n        f.close()\n        return nl\n\n    def open_log(self, log, hostname=None, sudo=False, start=None,\n                 num_records=None):\n        \"\"\"\n        :param log: the log file name to read from\n        :type log: str\n        :param hostname: the hostname from which to read the file\n        :type hostname: str or None\n        :param sudo: Whether to access log file as a privileged user.\n        :type sudo: boolean\n        :param start: optional epoch time; when set, tail backwards\n                      through the log in growing chunks until records\n                      at or before this time are covered\n        :param num_records: total number of records in the log; if the\n                            tail window grows beyond it, the whole\n                            file is read instead\n        :returns: A file instance\n        \"\"\"\n        readcmd = ['cat', log]\n        taillogs = 10000\n        tailcmd = [self.du.which(hostname, 'tail')]\n        if start:\n            i = 0\n            while True:\n                i += 1\n                taillogs = 10000 * i\n                tail_out = self.du.tail(hostname, log, sudo,\n                                        option='-n ' + str(taillogs))\n                line = tail_out['out'][0]\n                ts = line.split(';')[0]\n                epoch = self.convert_date_time(ts)\n                readcmd = tailcmd + ['-n', str(taillogs), log]\n                if start > epoch:\n                    break\n              
  elif taillogs > num_records:\n                    readcmd = ['cat', log]\n                    break\n\n        try:\n            if hostname is None or self.du.is_localhost(hostname):\n                if sudo:\n                    cmd = self.du.sudo_cmd + readcmd\n                    self.logger.info('running ' + \" \".join(cmd))\n                    p = Popen(cmd, stdout=PIPE)\n                    f = p.stdout\n                else:\n                    cmd = readcmd\n                    p = Popen(cmd, stdout=PIPE)\n                    f = p.stdout\n            else:\n                cmd = ['ssh', hostname]\n                if sudo:\n                    cmd += self.du.sudo_cmd\n                cmd += readcmd\n                self.logger.debug('running ' + \" \".join(cmd))\n                p = Popen(cmd, stdout=PIPE)\n                f = p.stdout\n        except Exception:\n            self.logger.error(traceback.format_exc())\n            self.logger.error('Problem processing file ' + log)\n            f = None\n\n        return f\n\n    def get_timestamps(self, logfile=None, hostname=None, num=None,\n                       sudo=False):\n        \"\"\"\n        Helper function to parse logfile\n\n        :returns: Each timestamp in a list as number of seconds since epoch\n        \"\"\"\n        if logfile is None:\n            return\n\n        records = self.open_log(logfile, hostname, sudo=sudo)\n        if records is None:\n            return\n\n        rec_times = []\n        tm_tag = re.compile(tm_re)\n        num_rec = 0\n        for record in records:\n            num_rec += 1\n            if num is not None and num_rec > num:\n                break\n\n            if isinstance(record, bytes):\n                record = record.decode(\"utf-8\")\n\n            m = tm_tag.match(record)\n            if m:\n                rec_times.append(\n                    self.convert_date_time(m.group('datetime')))\n        records.close()\n        return rec_times\n\n    
def match_msg(self, lines, msg, allmatch=False, regexp=False,\n                  starttime=None, endtime=None):\n        \"\"\"\n        Returns a tuple ``(x, y)`` where ``x`` is the line number and\n        ``y`` is the matching line, or None if nothing is found.\n\n        :param allmatch: If True (False by default), return a list\n                         of matching tuples.\n        :type allmatch: boolean\n        :param regexp: If True, msg is a Python regular expression.\n                       Defaults to False.\n        :type regexp: bool\n        :param starttime: If set, ignore matches that occur before\n                          the specified time\n        :param endtime: If set, ignore matches that occur after\n                        the specified time\n        \"\"\"\n        linecount = 0\n        ret = []\n        if lines:\n            for l in lines:\n                # l.split(';', 1)[0] gets the time stamp string\n                dt_str = l.split(';', 1)[0]\n                if starttime is not None:\n                    tm = self.convert_date_time(dt_str)\n                    if tm is None or tm < starttime:\n                        continue\n                if endtime is not None:\n                    tm = self.convert_date_time(dt_str)\n                    if tm is None or tm > endtime:\n                        continue\n                if ((regexp and re.search(msg, l)) or\n                        (not regexp and l.find(msg) != -1)):\n                    m = (linecount, l)\n                    if allmatch:\n                        ret.append(m)\n                    else:\n                        return m\n                linecount += 1\n        if len(ret) > 0:\n            return ret\n        return None\n\n    @staticmethod\n    def convert_resv_date_time(date_time):\n        \"\"\"\n        Convert reservation datetime to seconds\n        \"\"\"\n        try:\n            t = time.strptime(date_time, \"%a %b %d %H:%M:%S %Y\")\n        except Exception:\n            t = time.localtime()\n      
  return int(time.mktime(t))\n\n    @staticmethod\n    def convert_hhmmss_time(tm):\n        \"\"\"\n        Convert datetime in hhmmss format to seconds\n        \"\"\"\n        if ':' not in tm:\n            return tm\n\n        hms = tm.split(':')\n        return int(int(hms[0]) * 3600 + int(hms[1]) * 60 + int(hms[2]))\n\n    def get_rate(self, in_list=[]):\n        \"\"\"\n        :returns: The frequency of occurrences of the input list.\n                  The list is expected to be sorted\n        \"\"\"\n        if len(in_list) > 0:\n            duration = in_list[len(in_list) - 1] - in_list[0]\n            if duration > 0:\n                tm_factor = [1, 60, 60, 24]\n                _rate = float(len(in_list)) / float(duration)\n                index = 0\n                while _rate < 1 and index < len(tm_factor) - 1:\n                    index += 1\n                    _rate *= tm_factor[index]\n                _rate = \"%.2f\" % (_rate)\n                if index == 0:\n                    _rate = str(_rate) + '/s'\n                elif index == 1:\n                    _rate = str(_rate) + '/mn'\n                elif index == 2:\n                    _rate = str(_rate) + '/hr'\n                else:\n                    _rate = str(_rate) + '/day'\n            else:\n                _rate = str(len(in_list)) + '/s'\n            return _rate\n        return 0\n\n    def in_range(self, tm, start=None, end=None):\n        \"\"\"\n        :param tm: time to check within a provided range\n        :param start: Lower limit for the time range\n        :param end: Higher limit for the time range\n        :returns: True if time is in the range, else False\n        \"\"\"\n        if start is None and end is None:\n            return True\n\n        if start is None and end is not None:\n            if tm <= end:\n                return True\n            else:\n                return False\n\n        if start is not None and end is None:\n            if tm >= start:\n      
          return True\n            else:\n                return False\n        else:\n            if tm >= start and tm <= end:\n                return True\n        return False\n\n    @staticmethod\n    def _duration(val=None):\n        if val is not None:\n            return str(timedelta(seconds=int(float(val))))\n\n    @staticmethod\n    def get_day(tm=None):\n        \"\"\"\n        :param tm: Time for which to get a day\n        \"\"\"\n        if tm is None:\n            tm = time.time()\n        return time.strftime(\"%Y%m%d\", time.localtime(tm))\n\n    @staticmethod\n    def percentile(N, percent):\n        \"\"\"\n        Find the percentile of a list of values.\n\n        :param N: A list of values. Note N MUST BE already sorted.\n        :type N: List\n        :param percent: A float value from 0.0 to 1.0.\n        :type percent: Float\n        :returns: The percentile of the values\n        \"\"\"\n        if not N:\n            return None\n        k = (len(N) - 1) * percent\n        f = math.floor(k)\n        c = math.ceil(k)\n        if f == c:\n            return N[int(k)]\n        d0 = N[int(f)] * (c - k)\n        d1 = N[int(c)] * (k - f)\n        return d0 + d1\n\n    @staticmethod\n    def process_intervals(intervals, groups, frequency=60):\n        \"\"\"\n        Process the intervals\n        \"\"\"\n        info = {}\n        if not intervals:\n            return info\n\n        val = [x - intervals[i - 1] for i, x in enumerate(intervals) if i > 0]\n        info[RI] = \", \".join([str(v) for v in val])\n        if intervals:\n            info[IT] = intervals[0]\n        if frequency is not None:\n            _cf = []\n            j = 0\n            i = 1\n            while i < len(intervals):\n                if (intervals[i] - intervals[j]) > frequency:\n                    _cf.append(((intervals[j], intervals[i - 1]), i - j))\n                    j = i\n                i += 1\n            if i != j + 1:\n                
_cf.append(((intervals[j], intervals[i - 1]), i - j))\n            else:\n                _cf.append(((intervals[j], intervals[j]), 1))\n            info[CFC] = _cf\n            info[CF] = frequency\n        if groups:\n            info[CG] = groups\n        return info\n\n    def get_log_files(self, hostname, path, start, end, sudo=False):\n        \"\"\"\n        :param hostname: Hostname of the machine\n        :type hostname: str\n        :param path: Path for the log file\n        :type path: str\n        :param start: Start time for the log file\n        :param end: End time for the log file\n        :returns: list of log file(s) found or an empty list\n        \"\"\"\n        paths = []\n        if self.du.isdir(hostname, path, sudo=sudo):\n            logs = self.du.listdir(hostname, path, sudo=sudo)\n            for f in sorted(logs):\n                if start is not None or end is not None:\n                    tm = self.get_timestamps(f, hostname, num=1, sudo=sudo)\n                    if not tm:\n                        continue\n                    d1 = time.strftime(\"%Y%m%d\", time.localtime(tm[0]))\n                    if start is not None:\n                        d2 = time.strftime(\"%Y%m%d\", time.localtime(start))\n                        if d1 < d2:\n                            continue\n                    if end is not None:\n                        d2 = time.strftime(\"%Y%m%d\", time.localtime(end))\n                        if d1 > d2:\n                            continue\n                paths.append(f)\n        elif self.du.isfile(hostname, path, sudo=sudo):\n            paths = [path]\n\n        return paths\n\n\nclass PBSLogAnalyzer(object):\n    \"\"\"\n    Utility to analyze the PBS logs\n    \"\"\"\n    logger = logging.getLogger(__name__)\n    logutils = PBSLogUtils()\n\n    generic_tag = re.compile(tm_re + \".*\")\n    node_type_tag = re.compile(tm_re + \".*\" + \"Type 58 request.*\")\n    queue_type_tag = re.compile(tm_re + \".*\" 
+ \"Type 20 request.*\")\n    job_type_tag = re.compile(tm_re + \".*\" + \"Type 51 request.*\")\n    job_exit_tag = re.compile(tm_re + \".*\" + job_re + \";Exit_status.*\")\n\n    def __init__(self, schedlog=None, serverlog=None,\n                 momlog=None, acctlog=None, genericlog=None,\n                 hostname=None, show_progress=False):\n\n        self.hostname = hostname\n        self.schedlog = schedlog\n        self.serverlog = serverlog\n        self.acctlog = acctlog\n        self.momlog = momlog\n        self.genericlog = genericlog\n        self.show_progress = show_progress\n\n        self._custom_tag = None\n        self._custom_freq = None\n        self._custom_id = False\n        self._re_interval = []\n        self._re_group = {}\n\n        self.num_conditional_matches = 0\n        self.re_conditional = None\n        self.num_conditionals = 0\n        self.prev_records = []\n\n        self.info = {}\n\n        self.scheduler = None\n        self.server = None\n        self.mom = None\n        self.accounting = None\n\n        if schedlog:\n            self.scheduler = PBSSchedulerLog(schedlog, hostname, show_progress)\n\n        if serverlog:\n            self.server = PBSServerLog(serverlog, hostname, show_progress)\n        if momlog:\n            self.mom = PBSMoMLog(momlog, hostname, show_progress)\n\n        if acctlog:\n            self.accounting = PBSAccountingLog(acctlog, hostname,\n                                               show_progress)\n\n    def set_custom_match(self, pattern, frequency=None):\n        \"\"\"\n        Set the custom matching pattern\n\n        :param pattern: Matching pattern\n        :type pattern: str\n        :param frequency: Frequency of match\n        :type frequency: int\n        \"\"\"\n        self._custom_tag = re.compile(tm_re + \".*\" + pattern + \".*\")\n        self._custom_freq = frequency\n\n    def set_conditional_match(self, conditions):\n        \"\"\"\n        Set the conditional match\n\n        
:param conditions: Conditions for matching\n        \"\"\"\n        if not isinstance(conditions, list):\n            return False\n        self.re_conditional = conditions\n        self.num_conditionals = len(conditions)\n        self.prev_records = ['' for n in range(self.num_conditionals)]\n        self.info['matches'] = []\n\n    def analyze_scheduler_log(self, filename=None, start=None, end=None,\n                              hostname=None, summarize=True):\n        \"\"\"\n        Analyze the scheduler log\n\n        :param filename: Scheduler log file name\n        :type filename: str or None\n        :param start: Time from which the log is to be analyzed\n        :param end: Time up to which the log is to be analyzed\n        :param hostname: Hostname of the machine\n        :type hostname: str or None\n        :param summarize: If True, summarize the parsed data\n        :type summarize: bool\n        \"\"\"\n        if self.scheduler is None:\n            self.scheduler = PBSSchedulerLog(filename, hostname=hostname)\n        return self.scheduler.analyze(filename, start, end, hostname,\n                                      summarize)\n\n    def analyze_server_log(self, filename=None, start=None, end=None,\n                           hostname=None, summarize=True):\n        \"\"\"\n        Analyze the server log\n        \"\"\"\n        if self.server is None:\n            self.server = PBSServerLog(filename, hostname=hostname)\n\n        return self.server.analyze(filename, start, end, hostname,\n                                   summarize)\n\n    def analyze_accounting_log(self, filename=None, start=None, end=None,\n                               hostname=None, summarize=True):\n        \"\"\"\n        Analyze the accounting log\n        \"\"\"\n        if self.accounting is None:\n            self.accounting = PBSAccountingLog(filename, hostname=hostname)\n\n        return self.accounting.analyze(filename, start, end, hostname,\n                                
       summarize=summarize, sudo=True)\n\n    def analyze_mom_log(self, filename=None, start=None, end=None,\n                        hostname=None, summarize=True):\n        \"\"\"\n        Analyze the mom log\n        \"\"\"\n        if self.mom is None:\n            self.mom = PBSMoMLog(filename, hostname=hostname)\n\n        return self.mom.analyze(filename, start, end, hostname, summarize)\n\n    def parse_conditional(self, rec, start, end):\n        \"\"\"\n        Match a sequence of regular expressions against multiple\n        consecutive lines in a generic log. Calculate the number\n        of conditional matching lines.\n\n        Example usage: to find the number of times the scheduler\n        stat'ing the server causes the scheduler to miss jobs ending,\n        which could possibly indicate a race condition between the\n        view of resources assigned to nodes and the actual jobs\n        running, one would call this function by setting\n        re_conditional to\n        ``['Type 20 request received from Scheduler', 'Exit_status']``\n        Which can be read as counting the number of times that the\n        Type 20 message is preceded by an ``Exit_status`` message\n        \"\"\"\n        match = True\n        for rc in range(self.num_conditionals):\n            if not re.search(self.re_conditional[rc], self.prev_records[rc]):\n                match = False\n        if match:\n            self.num_conditional_matches += 1\n            self.info['matches'].extend(self.prev_records)\n        for i in range(self.num_conditionals - 1, -1, -1):\n            self.prev_records[i] = self.prev_records[i - 1]\n        self.prev_records[0] = rec\n        return PARSER_OK_CONTINUE\n\n    def parse_custom_tag(self, rec, start, end):\n        m = self._custom_tag.match(rec)\n        if m:\n            tm = self.logutils.convert_date_time(m.group('datetime'))\n            if ((start is None and end is None) or\n                    self.logutils.in_range(tm, 
start, end)):\n                self._re_interval.append(tm)\n                for k, v in m.groupdict().items():\n                    if k in self._re_group:\n                        self._re_group[k].append(v)\n                    else:\n                        self._re_group[k] = [v]\n            elif end is not None and tm > end:\n                return PARSER_OK_STOP\n\n        return PARSER_OK_CONTINUE\n\n    def parse_block(self, rec, start, end):\n        m = self.generic_tag.match(rec)\n        if m:\n            tm = self.logutils.convert_date_time(m.group('datetime'))\n            if ((start is None and end is None) or\n                    self.logutils.in_range(tm, start, end)):\n                print(rec, end=' ')\n\n    def comp_analyze(self, rec, start, end):\n        if self.re_conditional is not None:\n            return self.parse_conditional(rec, start, end)\n        elif self._custom_tag is not None:\n            return self.parse_custom_tag(rec, start, end)\n        elif start is not None or end is not None:\n            return self.parse_block(rec, start, end)\n\n    def analyze(self, path=None, start=None, end=None, hostname=None,\n                summarize=True, sudo=False):\n        \"\"\"\n        Parse any log file. This method is not ``context-specific``\n        to each log file type.\n\n        :param path: name of ``file/dir`` to parse\n        :type path: str or None\n        :param start: optional record time at which to start analyzing\n        :param end: optional record time after which to stop analyzing\n        :param hostname: name of host on which to operate. Defaults to\n                         localhost\n        :type hostname: str or None\n        :param summarize: if True, summarize data parsed. 
Defaults to\n                          True.\n        :type summarize: bool\n        :param sudo: If True, access log file(s) as privileged user.\n        :type sudo: bool\n        \"\"\"\n        if hostname is None and self.hostname is not None:\n            hostname = self.hostname\n\n        for f in self.logutils.get_log_files(hostname, path, start, end,\n                                             sudo=sudo):\n            self._log_parser(f, start, end, hostname, sudo=sudo)\n\n        if summarize:\n            return self.summary()\n\n    def _log_parser(self, filename, start, end, hostname=None, sudo=False):\n        num_records = self.logutils.get_num_lines(filename, hostname,\n                                                  sudo=sudo)\n        if filename is not None:\n            records = self.logutils.open_log(filename, hostname, sudo=sudo,\n                                             start=start,\n                                             num_records=num_records)\n        else:\n            return None\n\n        if records is None:\n            return None\n\n        num_line = 0\n        last_rec = None\n        if self.show_progress:\n            perc_range = list(range(10, 110, 10))\n            perc_records = [num_records * x / 100 for x in perc_range]\n            sys.stderr.write('Parsing ' + filename + ': |0%')\n            sys.stderr.flush()\n\n        for rec in records:\n            rec = rec.decode(\"utf-8\")\n            num_line += 1\n            if self.show_progress and (num_line > perc_records[0]):\n                sys.stderr.write('-' + str(perc_range[0]) + '%')\n                sys.stderr.flush()\n                perc_range.remove(perc_range[0])\n                perc_records.remove(perc_records[0])\n            last_rec = rec\n            rv = self.comp_analyze(rec, start, end)\n            if (rv in (PARSER_OK_STOP, PARSER_ERROR_STOP) or\n                    (self.show_progress and len(perc_records) == 0)):\n                
break\n        if self.show_progress:\n            sys.stderr.write('-100%|\\n')\n            sys.stderr.flush()\n        records.close()\n\n        if last_rec is not None:\n            self.epilogue(last_rec)\n\n    def analyze_logs(self, schedlog=None, serverlog=None, momlog=None,\n                     acctlog=None, genericlog=None, start=None, end=None,\n                     hostname=None, showjob=False):\n        \"\"\"\n        Analyze logs\n        \"\"\"\n        if hostname is None and self.hostname is not None:\n            hostname = self.hostname\n\n        if schedlog is None and self.schedlog is not None:\n            schedlog = self.schedlog\n        if serverlog is None and self.serverlog is not None:\n            serverlog = self.serverlog\n        if momlog is None and self.momlog is not None:\n            momlog = self.momlog\n        if acctlog is None and self.acctlog is not None:\n            acctlog = self.acctlog\n        if genericlog is None and self.genericlog is not None:\n            genericlog = self.genericlog\n\n        cycles = None\n        sjr = {}\n\n        if schedlog:\n            self.analyze_scheduler_log(schedlog, start, end, hostname,\n                                       summarize=False)\n            cycles = self.scheduler.cycles\n\n        if serverlog:\n            self.analyze_server_log(serverlog, start, end, hostname,\n                                    summarize=False)\n            sjr = self.server.server_job_run\n\n        if momlog:\n            self.analyze_mom_log(momlog, start, end, hostname,\n                                 summarize=False)\n\n        if acctlog:\n            self.analyze_accounting_log(acctlog, start, end, hostname,\n                                        summarize=False)\n\n        if genericlog:\n            self.analyze(genericlog, start, end, hostname, sudo=True,\n                         summarize=False)\n\n        if cycles is not None and len(sjr.keys()) != 0:\n            for 
cycle in cycles:\n                for jid, tm in cycle.sched_job_run.items():\n                    # skip job arrays: scheduler runs a subjob\n                    # but we don't keep track of which Considering job to run\n                    # message it is associated with because the consider\n                    # message doesn't show the subjob\n                    if '[' in jid:\n                        continue\n                    if jid in sjr:\n                        for tm in sjr[jid]:\n                            if tm > cycle.start and tm < cycle.end:\n                                cycle.inschedduration[jid] = \\\n                                    tm - cycle.consider[jid]\n\n        return self.summary(showjob)\n\n    def epilogue(self, line):\n        pass\n\n    def summary(self, showjob=False, writer=None):\n\n        info = {}\n\n        if self._custom_tag is not None:\n            self.info = self.logutils.process_intervals(self._re_interval,\n                                                        self._re_group,\n                                                        self._custom_freq)\n            return self.info\n\n        if self.re_conditional is not None:\n            self.info['num_conditional_matches'] = self.num_conditional_matches\n            return self.info\n\n        if self.scheduler is not None:\n            info['scheduler'] = self.scheduler.summary(self.scheduler.cycles,\n                                                       showjob)\n        if self.server is not None:\n            info['server'] = self.server.summary()\n\n        if self.accounting is not None:\n            info['accounting'] = self.accounting.summary()\n\n        if self.mom is not None:\n            info['mom'] = self.mom.summary()\n\n        return info\n\n\nclass PBSServerLog(PBSLogAnalyzer):\n    \"\"\"\n    :param filename: Server log filename\n    :type filename: str or None\n    :param hostname: Hostname of the machine\n    :type hostname: str 
or None\n    \"\"\"\n    tm_tag = re.compile(tm_re)\n    server_run_tag = re.compile(tm_re + \".*\" + job_re + \".*;Job Run at.*\")\n    server_nodeup_tag = re.compile(tm_re + \".*Node;.*;node up.*\")\n    server_enquejob_tag = re.compile(tm_re + \".*\" + job_re +\n                                     \".*enqueuing into.*state Q .*\")\n    server_endjob_tag = re.compile(tm_re + \".*\" + job_re +\n                                   \".*;Exit_status.*\")\n\n    def __init__(self, filename=None, hostname=None, show_progress=False):\n\n        self.server_job_queued = {}\n        self.server_job_run = {}\n        self.server_job_end = {}\n        self.records = None\n        self.nodeup = []\n        self.enquejob = []\n        self.record_tm = []\n        self.jobsrun = []\n        self.jobsend = []\n        self.wait_time = []\n        self.run_time = []\n\n        self.hostname = hostname\n\n        self.info = {}\n        self.version = []\n\n        self.filename = filename\n        self.show_progress = show_progress\n\n    def parse_runjob(self, line):\n        \"\"\"\n        Parse server log for run job records.\n        For each record keep track of the job id, and time in a\n        dedicated array\n        \"\"\"\n        m = self.server_run_tag.match(line)\n        if m:\n            tm = self.logutils.convert_date_time(m.group('datetime'))\n            self.jobsrun.append(tm)\n            jobid = str(m.group('jobid'))\n            if jobid in self.server_job_run:\n                self.server_job_run[jobid].append(tm)\n            else:\n                self.server_job_run[jobid] = [tm]\n            if jobid in self.server_job_queued:\n                self.wait_time.append(tm - self.server_job_queued[jobid])\n\n    def parse_endjob(self, line):\n        \"\"\"\n        Parse server log for run job records.\n        For each record keep track of the job id, and time in a\n        dedicated array\n        \"\"\"\n        m = 
self.server_endjob_tag.match(line)\n        if m:\n            tm = self.logutils.convert_date_time(m.group('datetime'))\n            self.jobsend.append(tm)\n            jobid = str(m.group('jobid'))\n            if jobid in self.server_job_end:\n                self.server_job_end[jobid].append(tm)\n            else:\n                self.server_job_end[jobid] = [tm]\n            if jobid in self.server_job_run:\n                self.run_time.append(tm - self.server_job_run[jobid][-1])\n\n    def parse_nodeup(self, line):\n        \"\"\"\n        Parse server log for nodes that are up\n        \"\"\"\n        m = self.server_nodeup_tag.match(line)\n        if m:\n            tm = self.logutils.convert_date_time(m.group('datetime'))\n            self.nodeup.append(tm)\n\n    def parse_enquejob(self, line):\n        \"\"\"\n        Parse server log for enqueued jobs\n        \"\"\"\n        m = self.server_enquejob_tag.match(line)\n        if m:\n            tm = self.logutils.convert_date_time(m.group('datetime'))\n            self.enquejob.append(tm)\n            jobid = str(m.group('jobid'))\n            self.server_job_queued[jobid] = tm\n\n    def comp_analyze(self, rec, start=None, end=None):\n        m = self.tm_tag.match(rec)\n        if m:\n            tm = self.logutils.convert_date_time(m.group('datetime'))\n            self.record_tm.append(tm)\n            if not self.logutils.in_range(tm, start, end):\n                if end and tm > end:\n                    return PARSER_OK_STOP\n                return PARSER_OK_CONTINUE\n\n        if 'pbs_version=' in rec:\n            version = rec.split('pbs_version=')[1].strip()\n            if version not in self.version:\n                self.version.append(version)\n        self.parse_enquejob(rec)\n        self.parse_nodeup(rec)\n        self.parse_runjob(rec)\n        self.parse_endjob(rec)\n\n        return PARSER_OK_CONTINUE\n\n    def summary(self):\n        self.info[JSR] = 
self.logutils.get_rate(self.enquejob)\n        self.info[NJE] = len(self.server_job_end.keys())\n        self.info[NJQ] = len(self.enquejob)\n        self.info[NUR] = self.logutils.get_rate(self.nodeup)\n        self.info[JRR] = self.logutils.get_rate(self.jobsrun)\n        self.info[JER] = self.logutils.get_rate(self.jobsend)\n        if self.jobsend and self.enquejob:\n            tjr = self.jobsend[-1] - self.enquejob[0]\n            if tjr > 0:\n                self.info[JTR] = str(len(self.server_job_end) / tjr) + '/s'\n        if len(self.wait_time) > 0:\n            wt = sorted(self.wait_time)\n            wta = float(sum(self.wait_time)) / len(self.wait_time)\n            self.info[JWTm] = self.logutils._duration(min(wt))\n            self.info[JWTM] = self.logutils._duration(max(wt))\n            self.info[JWTA] = self.logutils._duration(wta)\n            self.info[JWT25] = self.logutils._duration(\n                self.logutils.percentile(wt, .25))\n            self.info[JWT50] = self.logutils._duration(\n                self.logutils.percentile(wt, .5))\n            self.info[JWT75] = self.logutils._duration(\n                self.logutils.percentile(wt, .75))\n        njr = 0\n        for v in self.server_job_run.values():\n            njr += len(v)\n        self.info[NJR] = njr\n        self.info[VER] = \",\".join(self.version)\n\n        if len(self.run_time) > 0:\n            rt = sorted(self.run_time)\n            self.info[JRTm] = self.logutils._duration(min(rt))\n            self.info[JRT25] = self.logutils._duration(\n                self.logutils.percentile(rt, 0.25))\n            self.info[JRT50] = self.logutils._duration(\n                self.logutils.percentile(rt, 0.50))\n            self.info[JRTA] = self.logutils._duration(\n                sum(rt) / len(rt))\n            self.info[JRT75] = self.logutils._duration(\n                self.logutils.percentile(rt, 0.75))\n            self.info[JRTM] = self.logutils._duration(max(rt))\n        return 
self.info\n\n\nclass JobEstimatedStartTimeInfo(object):\n    \"\"\"\n    Information regarding Job estimated start time\n    \"\"\"\n\n    def __init__(self, jobid):\n        self.jobid = jobid\n        self.started_at = None\n        self.estimated_at = []\n        self.num_drifts = 0\n        self.num_estimates = 0\n        self.drift_time = 0\n\n    def add_estimate(self, tm):\n        \"\"\"\n        Add a job's new estimated start time.\n        If the new estimate is later than the previous one, the\n        difference is added to the drift time. If the new estimate\n        is pulled in earlier, the drift time is left unchanged.\n\n        Drift time is a measure of the ``\"negative perception\"``\n        caused by a job being estimated to run at a later date than\n        previously ``\"advertised\"``.\n        \"\"\"\n        if self.estimated_at:\n            prev_tm = self.estimated_at[-1]\n            if tm > prev_tm:\n                self.num_drifts += 1\n                self.drift_time += tm - prev_tm\n\n        self.estimated_at.append(tm)\n        self.num_estimates += 1\n\n    def __repr__(self):\n        estimated_at_str = [str(t) for t in self.estimated_at]\n        return \" \".join([str(self.jobid), 'started: ', str(self.started_at),\n                         'estimated: ', \",\".join(estimated_at_str)])\n\n    def __str__(self):\n        return self.__repr__()\n\n\nclass PBSSchedulerLog(PBSLogAnalyzer):\n\n    tm_tag = re.compile(tm_re)\n    startcycle_tag = re.compile(tm_re + \".*Starting Scheduling.*\")\n    endcycle_tag = re.compile(tm_re + \".*Leaving [(the )]*[sS]cheduling.*\")\n    alarm_tag = re.compile(tm_re + \".*alarm.*\")\n    considering_job_tag = re.compile(tm_re + \".*\" + job_re +\n                                     \".*;Considering job to run.*\")\n    sched_job_run_tag = re.compile(tm_re + \".*\" + job_re + \".*;Job run.*\")\n    estimated_tag = re.compile(tm_re + \".*\" + job_re +\n                 
              \".*;Job is a top job and will run at \"\n                               \"(?P<est_tm>.*)\")\n    run_failure_tag = re.compile(tm_re + \".*\" + fail_re + \".*;Failed to run.*\")\n    calendarjob_tag = re.compile(\n        tm_re +\n        \".*\" +\n        job_re +\n        \".*;Job is a top job.*\")\n    preempt_failure_tag = re.compile(tm_re + \".*;Job failed to be preempted.*\")\n    preempt_tag = re.compile(tm_re + \".*\" + job_re + \".*;Job preempted.*\")\n    record_tag = re.compile(tm_re + \".*\")\n\n    def __init__(self, filename=None, hostname=None, show_progress=False):\n\n        self.filename = filename\n        self.hostname = hostname\n        self.show_progress = show_progress\n\n        self.record_tm = []\n        self.version = []\n\n        self.cycle = None\n        self.cycles = []\n\n        self.estimated_jobs = {}\n        self.estimated_parsing_enabled = False\n        self.parse_estimated_only = False\n\n        self.info = {}\n        self.summary_info = {}\n\n    def _parse_line(self, line):\n        \"\"\"\n        Parse scheduling cycle Starting, Leaving, and alarm records\n        From each record, keep track of the record time in a\n        dedicated array\n        \"\"\"\n        m = self.startcycle_tag.match(line)\n\n        if m:\n            tm = self.logutils.convert_date_time(m.group('datetime'))\n            # if cycle was interrupted assume previous cycle ended now\n            if self.cycle is not None and self.cycle.end == -1:\n                self.cycle.end = tm\n            self.cycle = PBSCycleInfo()\n            self.cycles.append(self.cycle)\n            self.cycle.start = tm\n            self.cycle.end = -1\n            return PARSER_OK_CONTINUE\n\n        m = self.endcycle_tag.match(line)\n        if m is not None and self.cycle is not None:\n            tm = self.logutils.convert_date_time(m.group('datetime'))\n            self.cycle.end = tm\n            self.cycle.duration = tm - self.cycle.start\n  
          if (self.cycle.lastjob is not None and\n                    self.cycle.lastjob not in self.cycle.sched_job_run and\n                    self.cycle.lastjob not in self.cycle.calendared_jobs):\n                self.cycle.cantrunduration[self.cycle.lastjob] = (\n                    tm - self.cycle.consider[self.cycle.lastjob])\n            return PARSER_OK_CONTINUE\n\n        m = self.alarm_tag.match(line)\n        if m is not None and self.cycle is not None:\n            tm = self.logutils.convert_date_time(m.group('datetime'))\n            self.cycle.end = tm\n            return PARSER_OK_CONTINUE\n\n        m = self.considering_job_tag.match(line)\n        if m is not None and self.cycle is not None:\n            self.cycle.num_considered += 1\n            jid = str(m.group('jobid'))\n            tm = self.logutils.convert_date_time(m.group('datetime'))\n            self.cycle.consider[jid] = tm\n            self.cycle.political_order.append(jid)\n            if (self.cycle.lastjob is not None and\n                    self.cycle.lastjob not in self.cycle.sched_job_run and\n                    self.cycle.lastjob not in self.cycle.calendared_jobs):\n                self.cycle.cantrunduration[self.cycle.lastjob] = (\n                    tm - self.cycle.consider[self.cycle.lastjob])\n            self.cycle.lastjob = jid\n            if self.cycle.queryduration == 0:\n                self.cycle.queryduration = tm - self.cycle.start\n            return PARSER_OK_CONTINUE\n\n        m = self.sched_job_run_tag.match(line)\n        if m is not None and self.cycle is not None:\n            jid = str(m.group('jobid'))\n            tm = self.logutils.convert_date_time(m.group('datetime'))\n            self.cycle.sched_job_run[jid] = tm\n            # job arrays require special handling because the considering\n            # job to run message does not have the subjob index but only []\n            if '[' in jid:\n                subjid = jid\n                if 
subjid not in self.cycle.consider:\n                    jid = jid.split('[')[0] + '[]'\n                    self.cycle.consider[subjid] = self.cycle.consider[jid]\n                self.cycle.runduration[subjid] = tm - self.cycle.consider[jid]\n            # jobs rerun due to a preemption failure aren't considered; skip\n            elif jid in self.cycle.consider:\n                self.cycle.runduration[jid] = tm - self.cycle.consider[jid]\n            return PARSER_OK_CONTINUE\n\n        m = self.run_failure_tag.match(line)\n        if m is not None:\n            if self.cycle is not None:\n                jid = str(m.group('jobid'))\n                tm = self.logutils.convert_date_time(m.group('datetime'))\n                self.cycle.run_failure[jid] = tm\n            return PARSER_OK_CONTINUE\n\n        m = self.preempt_failure_tag.match(line)\n        if m is not None:\n            if self.cycle is not None:\n                self.cycle.num_preempt_failure += 1\n            return PARSER_OK_CONTINUE\n\n        m = self.preempt_tag.match(line)\n        if m is not None:\n            if self.cycle is not None:\n                jid = str(m.group('jobid'))\n                if self.cycle.lastjob in self.cycle.preempted_jobs:\n                    self.cycle.preempted_jobs[self.cycle.lastjob].append(jid)\n                else:\n                    self.cycle.preempted_jobs[self.cycle.lastjob] = [jid]\n                self.cycle.num_preempted += 1\n            return PARSER_OK_CONTINUE\n\n        m = self.calendarjob_tag.match(line)\n        if m is not None:\n            if self.cycle is not None:\n                jid = str(m.group('jobid'))\n                tm = self.logutils.convert_date_time(m.group('datetime'))\n                self.cycle.calendared_jobs[jid] = tm\n                if jid in self.cycle.consider:\n                    self.cycle.calendarduration[jid] = \\\n                        (tm - self.cycle.consider[jid])\n                elif '[' in jid:\n         
           arrjid = re.sub(r\"(\\[\\d+\\])\", '[]', jid)\n                    if arrjid in self.cycle.consider:\n                        self.cycle.consider[jid] = self.cycle.consider[arrjid]\n                        self.cycle.calendarduration[jid] = \\\n                            (tm - self.cycle.consider[arrjid])\n            return PARSER_OK_CONTINUE\n\n    def get_cycles(self, start=None, end=None):\n        \"\"\"\n        Get the scheduler cycles\n\n        :param start: Start time\n        :param end: End time\n        :returns: Scheduling cycles\n        \"\"\"\n        if start is None and end is None:\n            return self.cycles\n\n        cycles = []\n        if start is None:\n            start = 0\n        if end is None:\n            end = time.time()\n        for c in self.cycles:\n            if c.start >= start and c.end < end:\n                cycles.append(c)\n        return cycles\n\n    def comp_analyze(self, rec, start, end):\n        if self.estimated_parsing_enabled:\n            rv = self.estimated_info_parsing(rec)\n            if self.parse_estimated_only:\n                return rv\n        return self.scheduler_parsing(rec, start, end)\n\n    def scheduler_parsing(self, rec, start, end):\n        m = self.tm_tag.match(rec)\n        if m:\n            tm = self.logutils.convert_date_time(m.group('datetime'))\n            self.record_tm.append(tm)\n            if self.logutils.in_range(tm, start, end):\n                rv = self._parse_line(rec)\n                if rv in (PARSER_OK_STOP, PARSER_ERROR_STOP):\n                    return rv\n            elif end is not None and tm > end:\n                return PARSER_OK_STOP\n            if 'pbs_version=' in rec:\n                version = rec.split('pbs_version=')[1].strip()\n                if version not in self.version:\n                    self.version.append(version)\n\n        return PARSER_OK_CONTINUE\n\n    def estimated_info_parsing(self, line):\n        \"\"\"\n        Parse estimated start time information for a job\n        \"\"\"\n    
    m = self.sched_job_run_tag.match(line)\n        if m is not None:\n            jid = str(m.group('jobid'))\n            tm = self.logutils.convert_date_time(m.group('datetime'))\n            if jid in self.estimated_jobs:\n                self.estimated_jobs[jid].started_at = tm\n            else:\n                ej = JobEstimatedStartTimeInfo(jid)\n                ej.started_at = tm\n                self.estimated_jobs[jid] = ej\n\n        m = self.estimated_tag.match(line)\n        if m is not None:\n            jid = str(m.group('jobid'))\n            try:\n                tm = self.logutils.convert_date_time(m.group('est_tm'),\n                                                     \"%a %b %d %H:%M:%S %Y\")\n            except Exception:\n                logging.error('error converting time: ' +\n                              str(m.group('est_tm')))\n                return PARSER_ERROR_STOP\n\n            if jid in self.estimated_jobs:\n                self.estimated_jobs[jid].add_estimate(tm)\n            else:\n                ej = JobEstimatedStartTimeInfo(jid)\n                ej.add_estimate(tm)\n                self.estimated_jobs[jid] = ej\n\n        return PARSER_OK_CONTINUE\n\n    def epilogue(self, line):\n        # if log ends in the middle of a cycle there is no 'Leaving cycle'\n        # message, in this case the last cycle duration is computed as\n        # from start to the last record in the log file\n        if self.cycle is not None and self.cycle.end <= 0:\n            m = self.record_tag.match(line)\n            if m:\n                self.cycle.end = self.logutils.convert_date_time(\n                    m.group('datetime'))\n\n    def summarize_estimated_analysis(self, estimated_jobs=None):\n        \"\"\"\n        Summarize estimated job analysis\n        \"\"\"\n        if estimated_jobs is None and self.estimated_jobs is not None:\n            estimated_jobs = self.estimated_jobs\n\n        einfo = {EJ: []}\n        sub15mn = 0\n      
  sub1hr = 0\n        sub3hr = 0\n        sup3hr = 0\n        total_drifters = 0\n        total_nondrifters = 0\n        drift_times = []\n        for e in estimated_jobs.values():\n            info = {}\n            if len(e.estimated_at) > 0:\n                info[JID] = e.jobid\n                e_sorted = sorted(e.estimated_at)\n                info[Eat] = e.estimated_at\n                if e.started_at is not None:\n                    info[JST] = e.started_at\n                    e_diff = e_sorted[-1] - e_sorted[0]\n                    e_accuracy = e.started_at - e.estimated_at[-1]\n                    info[ESTR] = e_diff\n                    info[ESTA] = e_accuracy\n\n                info[NEST] = e.num_estimates\n                info[ND] = e.num_drifts\n                info[JDD] = e.drift_time\n                drift_times.append(e.drift_time)\n\n                if e.drift_time > 0:\n                    total_drifters += 1\n                    if e.drift_time < 15 * 60:\n                        sub15mn += 1\n                    elif e.drift_time < 3600:\n                        sub1hr += 1\n                    elif e.drift_time < 3 * 3600:\n                        sub3hr += 1\n                    else:\n                        sup3hr += 1\n                else:\n                    total_nondrifters += 1\n                einfo[EJ].append(info)\n\n        info = {}\n        info[Ds15mn] = sub15mn\n        info[Ds1hr] = sub1hr\n        info[Ds3hr] = sub3hr\n        info[Do3hr] = sup3hr\n        info[NJD] = total_drifters\n        info[NJND] = total_nondrifters\n        if drift_times:\n            info[DDm] = min(drift_times)\n            info[DDM] = max(drift_times)\n            info[DDA] = (sum(drift_times) / len(drift_times))\n            info[DD50] = sorted(drift_times)[len(drift_times) // 2]\n        einfo[ESTS] = info\n\n        return einfo\n\n    def summary(self, cycles=None, 
showjobs=False):\n        \"\"\"\n        Scheduler log summary\n        \"\"\"\n        if self.estimated_parsing_enabled:\n            self.info[EST] = self.summarize_estimated_analysis()\n            if self.parse_estimated_only:\n                return self.info\n\n        if cycles is None and self.cycles is not None:\n            cycles = self.cycles\n\n        num_cycle = 0\n        run = 0\n        failed = 0\n        total_considered = 0\n        run_tm = []\n        cycle_duration = []\n        min_duration = None\n        max_duration = None\n        mint = maxt = None\n        calendarduration = 0\n        schedsolvertime = 0\n\n        for c in cycles:\n            c.summary(showjobs)\n            self.info[num_cycle] = c.info\n            run += len(c.sched_job_run.keys())\n            run_tm.extend(list(c.sched_job_run.values()))\n            failed += len(c.run_failure.keys())\n            total_considered += c.num_considered\n\n            if max_duration is None or c.duration > max_duration:\n                max_duration = c.duration\n                maxt = time.strftime(\"%Y-%m-%d %H:%M:%S\",\n                                     time.localtime(c.start))\n\n            if min_duration is None or c.duration < min_duration:\n                min_duration = c.duration\n                mint = time.strftime(\"%Y-%m-%d %H:%M:%S\",\n                                     time.localtime(c.start))\n\n            cycle_duration.append(c.duration)\n            num_cycle += 1\n            calendarduration += sum(c.calendarduration.values())\n            schedsolvertime += c.scheduler_solver_time\n\n        run_rate = self.logutils.get_rate(sorted(run_tm))\n\n        sorted_cd = sorted(cycle_duration)\n\n        self.summary_info[NC] = len(cycles)\n        self.summary_info[NJR] = run\n        self.summary_info[NJFR] = failed\n        self.summary_info[JRR] = run_rate\n        self.summary_info[NJC] = total_considered\n        self.summary_info[mCD] = 
self.logutils._duration(min_duration)\n        self.summary_info[MCD] = self.logutils._duration(max_duration)\n        self.summary_info[CD25] = self.logutils._duration(\n            self.logutils.percentile(sorted_cd, .25))\n        if len(sorted_cd) > 0:\n            self.summary_info[CDA] = self.logutils._duration(\n                sum(sorted_cd) / len(sorted_cd))\n        self.summary_info[CD50] = self.logutils._duration(\n            self.logutils.percentile(sorted_cd, .5))\n        self.summary_info[CD75] = self.logutils._duration(\n            self.logutils.percentile(sorted_cd, .75))\n\n        if mint is not None:\n            self.summary_info[mCT] = mint\n        if maxt is not None:\n            self.summary_info[MCT] = maxt\n        self.summary_info[DUR] = self.logutils._duration(sum(cycle_duration))\n        self.summary_info[TTC] = self.logutils._duration(calendarduration)\n        self.summary_info[SST] = self.logutils._duration(schedsolvertime)\n        self.summary_info[VER] = \",\".join(self.version)\n\n        self.info['summary'] = dict(self.summary_info.items())\n        return self.info\n\n\nclass PBSCycleInfo(object):\n\n    def __init__(self):\n\n        self.info = {}\n\n        \"\"\"\n        Time between end and start of a cycle, which may be on alarm,\n        or signal, not only Leaving - Starting\n        \"\"\"\n        self.duration = 0\n        \" Time of a Starting scheduling cycle message \"\n        self.start = 0\n        \" Time of a Leaving scheduling cycle message \"\n        self.end = 0\n        \" Time at which Considering job to run message \"\n        self.consider = {}\n        \" Number of jobs considered \"\n        self.num_considered = 0\n        \" Time at which job run message in scheduler. 
This includes time to \"\n        \" start the job by the server \"\n        self.sched_job_run = {}\n        \"\"\"\n        number of jobs added to the calendar, i.e.,\n        number of backfilling jobs\n        \"\"\"\n        self.calendared_jobs = {}\n        \" Time between Considering job to run to Job run message \"\n        self.runduration = {}\n        \" Time to determine that job couldn't run \"\n        self.cantrunduration = {}\n        \" List of jobs preempted in order to run high priority job\"\n        self.preempted_jobs = {}\n        \"\"\"\n        Time between considering job to run to server logging\n        'Job Run at request...\n        \"\"\"\n        self.inschedduration = {}\n        \" Total time spent in scheduler solver, insched + cantrun + calendar\"\n        self.scheduler_solver_time = 0\n        \" Error 15XXX in the sched log corresponds to a failure to run\"\n        self.run_failure = {}\n        \" Job failed to be preempted\"\n        self.num_preempt_failure = 0\n        \" Job preempted by \"\n        self.num_preempted = 0\n        \" Time between start of cycle and first job considered to run \"\n        self.queryduration = 0\n        \" The order in which jobs are considered \"\n        self.political_order = []\n        \" Time to calendar \"\n        self.calendarduration = {}\n\n        self.lastjob = None\n\n    def summary(self, showjobs=False):\n        \"\"\"\n        Summary regarding cycle\n        \"\"\"\n        self.info[CST] = time.strftime(\n            \"%Y-%m-%d %H:%M:%S\", time.localtime(self.start))\n        self.info[CD] = PBSLogUtils._duration(self.end - self.start)\n        self.info[QD] = PBSLogUtils._duration(self.queryduration)\n        # number of jobs considered may be different than length of\n        # the consider dictionary due to job arrays being considered once\n        # per subjob using the parent array job id\n        self.info[NJC] = self.num_considered\n        self.info[NJR] = 
len(self.sched_job_run.keys())\n        self.info[NJFR] = len(self.run_failure)\n        self.scheduler_solver_time = (sum(self.inschedduration.values()) +\n                                      sum(self.cantrunduration.values()) +\n                                      sum(self.calendarduration.values()))\n        self.info[SST] = self.scheduler_solver_time\n        self.info[NJCAL] = len(self.calendared_jobs.keys())\n        self.info[NJFP] = self.num_preempt_failure\n        self.info[NJP] = self.num_preempted\n        self.info[TTC] = sum(self.calendarduration.values())\n\n        if showjobs:\n            for j in self.consider.keys():\n                s = {JID: j}\n                if j in self.runduration:\n                    s[T2R] = self.runduration[j]\n                if j in self.cantrunduration:\n                    s[T2D] = self.cantrunduration[j]\n                if j in self.inschedduration:\n                    s[TiS] = self.inschedduration[j]\n                if j in self.calendarduration:\n                    s[TTC] = self.calendarduration[j]\n                if 'jobs' in self.info:\n                    self.info['jobs'].append(s)\n                else:\n                    self.info['jobs'] = [s]\n\n\nclass PBSMoMLog(PBSLogAnalyzer):\n\n    \"\"\"\n    Container and Parser of a PBS ``MoM`` log\n    \"\"\"\n    tm_tag = re.compile(tm_re)\n    mom_run_tag = re.compile(tm_re + \".*\" + job_re + \".*;Started, pid.*\")\n    mom_end_tag = re.compile(tm_re + \".*\" + job_re +\n                             \".*;delete job request received.*\")\n    mom_enquejob_tag = re.compile(tm_re + \".*;Type 5 .*\")\n\n    def __init__(self, filename=None, hostname=None, show_progress=False):\n\n        self.filename = filename\n        self.hostname = hostname\n        self.show_progress = show_progress\n\n        self.start = []\n        self.end = []\n        self.queued = []\n\n        self.info = {}\n        self.version = []\n\n    def comp_analyze(self, rec, 
start, end):\n        m = self.mom_run_tag.match(rec)\n        if m:\n            tm = self.logutils.convert_date_time(m.group('datetime'))\n            if ((start is None and end is None) or\n                    self.logutils.in_range(tm, start, end)):\n                self.start.append(tm)\n                return PARSER_OK_CONTINUE\n            elif end is not None and tm > end:\n                return PARSER_OK_STOP\n\n        m = self.mom_end_tag.match(rec)\n        if m:\n            tm = self.logutils.convert_date_time(m.group('datetime'))\n            if ((start is None and end is None) or\n                    self.logutils.in_range(tm, start, end)):\n                self.end.append(tm)\n                return PARSER_OK_CONTINUE\n            elif end is not None and tm > end:\n                return PARSER_OK_STOP\n\n        m = self.mom_enquejob_tag.match(rec)\n        if m:\n            tm = self.logutils.convert_date_time(m.group('datetime'))\n            if ((start is None and end is None) or\n                    self.logutils.in_range(tm, start, end)):\n                self.queued.append(tm)\n                return PARSER_OK_CONTINUE\n            elif end is not None and tm > end:\n                return PARSER_OK_STOP\n\n        if 'pbs_version=' in rec:\n            version = rec.split('pbs_version=')[1].strip()\n            if version not in self.version:\n                self.version.append(version)\n\n        return PARSER_OK_CONTINUE\n\n    def summary(self):\n        \"\"\"\n        Mom log summary\n        \"\"\"\n        run_rate = self.logutils.get_rate(self.start)\n        queue_rate = self.logutils.get_rate(self.queued)\n        end_rate = self.logutils.get_rate(self.end)\n\n        self.info[NJQ] = len(self.queued)\n        self.info[NJR] = len(self.start)\n        self.info[NJE] = len(self.end)\n        self.info[JRR] = run_rate\n        self.info[JSR] = queue_rate\n        self.info[JER] = end_rate\n        self.info[VER] = 
\",\".join(self.version)\n\n        return self.info\n\n\nclass PBSAccountingLog(PBSLogAnalyzer):\n\n    \"\"\"\n    Container and Parser of a PBS accounting log\n    \"\"\"\n\n    tm_tag = re.compile(tm_re)\n\n    record_tag = re.compile(r\"\"\"\n                        (?P<date>\\d\\d/\\d\\d/\\d{4,4})[\\s]+\n                        (?P<time>\\d\\d:\\d\\d:\\d\\d);\n                        (?P<type>[A-Z]);\n                        (?P<id>[0-9\\[\\]].*);\n                        (?P<msg>.*)\n                        \"\"\", re.VERBOSE)\n\n    S_sub_record_tag = re.compile(r\"\"\"\n                        .*user=(?P<user>[\\w\\d]+)[\\s]+\n                        .*qtime=(?P<qtime>[0-9]+)[\\s]+\n                        .*start=(?P<start>[0-9]+)[\\s]+\n                        .*exec_host=(?P<exechost>[\\[\\],\\-\\=\\/\\.\\w/*\\d\\+]+)[\\s]+\n                        .*Resource_List.ncpus=(?P<ncpus>[0-9]+)[\\s]+\n                        .*\n                        \"\"\", re.VERBOSE)\n\n    E_sub_record_tag = re.compile(r\"\"\"\n                        .*user=(?P<user>[\\w\\d]+)[\\s]+\n                        .*qtime=(?P<qtime>[0-9]+)[\\s]+\n                        .*start=(?P<start>[0-9]+)[\\s]+\n                        .*exec_host=(?P<exechost>[\\[\\],\\-\\=\\/\\.\\w/*\\d\\+]+)[\\s]+\n                        .*Resource_List.ncpus=(?P<ncpus>[0-9]+)[\\s]+\n                        .*resources_used.walltime=(?P<walltime>[0-9:]+)\n                        .*\n                        \"\"\", re.VERBOSE)\n\n    __E_sub_record_tag = re.compile(r\"\"\"\n                        .*user=(?P<user>[\\w\\d]+)[\\s]+\n                        .*qtime=(?P<qtime>[0-9]+)[\\s]+\n                        .*start=(?P<start>[0-9]+)[\\s]+\n                        .*exec_host=(?P<exechost>[\\[\\],\\-\\=\\/\\.\\w/*\\d\\+]+)[\\s]+\n                        .*Resource_List.ncpus=(?P<ncpus>[0-9]+)[\\s]+\n                        .*resources_used.walltime=(?P<walltime>[0-9:]+)\n                        
.*\n                        \"\"\", re.VERBOSE)\n\n    sub_record_tag = re.compile(r\"\"\"\n                .*qtime=(?P<qtime>[0-9]+)[\\s]+\n                .*start=(?P<start>[0-9]+)[\\s]+\n                .*exec_host=(?P<exechost>[\\[\\],\\-\\=\\/\\.\\w/*\\d\\+]+)[\\s]+\n                .*exec_vnode=(?P<execvnode>[\\(\\)\\[\\],:\\-\\=\\/\\.\\w/*\\d\\+]+)[\\s]+\n                .*Resource_List.ncpus=(?P<ncpus>[\\d]+)[\\s]+\n                .*\n                \"\"\", re.VERBOSE)\n\n    logger = logging.getLogger(__name__)\n    utils = BatchUtils()\n\n    def __init__(self, filename=None, hostname=None, show_progress=False):\n\n        self.filename = filename\n        self.hostname = hostname\n        self.show_progress = show_progress\n\n        self.record_tm = []\n\n        self.entries = {}\n        self.queue = []\n        self.start = []\n        self.end = []\n        self.wait_time = []\n        self.run_time = []\n        self.job_node_size = []\n        self.job_cpu_size = []\n        self.used_cph = 0\n        self.nodes_cph = 0\n        self.used_nph = 0\n        self.jobs_started = []\n        self.jobs_ended = []\n        self.users = {}\n        self.tmp_wait_time = {}\n\n        self.duration = 0\n\n        self.utilization_parsing = False\n        self.running_jobs_parsing = False\n        self.job_info_parsing = False\n        self.accounting_workload_parsing = False\n\n        self._total_ncpus = 0\n        self._num_nodes = 0\n        self._running_jobids = []\n        self._server = None\n\n        self.running_jobs = {}\n        self.job_start = {}\n        self.job_end = {}\n        self.job_nodes = {}\n        self.job_cpus = {}\n        self.job_rectypes = {}\n\n        self.job_attrs = {}\n        self.parser_errors = 0\n\n        self.info = {}\n\n    def enable_running_jobs_parsing(self):\n        \"\"\"\n        Enable parsing for running jobs\n        \"\"\"\n        self.running_jobs_parsing = True\n\n    def 
enable_utilization_parsing(self, hostname=None, nodesfile=None,\n                                   jobsfile=None):\n        \"\"\"\n        Enable utilization parsing\n\n        :param hostname: Hostname of the machine\n        :type hostname: str or None\n        :param nodesfile: optional file containing output of\n                          pbsnodes -av\n        :type nodesfile: str or None\n        :param jobsfile: optional file containing output of\n                         qstat -f\n        :type jobsfile: str or None\n        \"\"\"\n        self.utilization_parsing = True\n        self.process_nodes_data(hostname, nodesfile, jobsfile)\n\n    def enable_job_info_parsing(self):\n        \"\"\"\n        Enable job information parsing\n        \"\"\"\n        self.job_info_res = {}\n        self.job_info_parsing = True\n\n    def enable_accounting_workload_parsing(self):\n        \"\"\"\n        Enable accounting workload parsing\n        \"\"\"\n        self.accounting_workload_parsing = True\n\n    def process_nodes_data(self, hostname=None, nodesfile=None, jobsfile=None):\n        \"\"\"\n        Get job and node information by stat'ing and parsing node\n        data from the server.\n        Compute the number of nodes and populate a list of running\n        job ids on those nodes.\n\n        :param hostname: The host to query\n        :type hostname: str or None\n        :param nodesfile: optional file containing output of\n                          pbsnodes -av\n        :type nodesfile: str or None\n        :param jobsfile: optional file containing output of\n                         qstat -f\n        :type jobsfile: str or None\n\n        The node data is needed to compute counts of nodes and cpus\n        The job data is needed to compute the amount of resources\n        requested\n        \"\"\"\n        if nodesfile or jobsfile:\n            self._server = Server(snapmap={NODE: nodesfile, JOB: jobsfile})\n        else:\n            self._server = 
Server(hostname)\n\n        ncpus = self._server.counter(NODE, 'resources_available.ncpus',\n                                     grandtotal=True, level=logging.DEBUG)\n\n        if 'resources_available.ncpus' in ncpus:\n            self._total_ncpus = ncpus['resources_available.ncpus']\n\n        self._num_nodes = len(self._server.status(NODE))\n\n        jobs = self._server.status(NODE, 'jobs')\n        running_jobids = []\n        for cur_job in jobs:\n            if 'jobs' not in cur_job:\n                continue\n            job = cur_job['jobs']\n            jlist = job.split(',')\n            for j in jlist:\n                running_jobids.append(j.split('/')[0].strip())\n        self._running_jobids = list(set(running_jobids))\n\n    def comp_analyze(self, rec, start, end, **kwargs):\n        if self.job_info_parsing:\n            return self.job_info(rec)\n        else:\n            return self.accounting_parsing(rec, start, end)\n\n    def accounting_parsing(self, rec, start, end):\n        \"\"\"\n        Parsing accounting log\n        \"\"\"\n        r = self.record_tag.match(rec.decode(\"utf-8\"))\n        if not r:\n            return PARSER_ERROR_CONTINUE\n\n        tm = self.logutils.convert_date_time(r.group('date') +\n                                             ' ' + r.group('time'))\n        if ((start is None and end is None) or\n                self.logutils.in_range(tm, start, end)):\n            self.record_tm.append(tm)\n            rec_type = r.group('type')\n            jobid = r.group('id')\n\n            if not self.accounting_workload_parsing and rec_type == 'S':\n                # Precompute metrics about the S record just in case\n                # it does not have an E record. 
The differences are\n                # resolved after all records are processed\n                if jobid in self._running_jobids:\n                    self._running_jobids.remove(jobid)\n                m = self.S_sub_record_tag.match(r.group('msg'))\n                if m:\n                    self.users[jobid] = m.group('user')\n                    qtime = int(m.group('qtime'))\n                    starttime = int(m.group('start'))\n                    ncpus = int(m.group('ncpus'))\n                    self.job_cpus[jobid] = ncpus\n\n                    if starttime != 0 and qtime != 0:\n                        self.tmp_wait_time[jobid] = starttime - qtime\n                        self.job_start[jobid] = starttime\n                    ehost = m.group('exechost')\n                    self.job_nodes[jobid] = ResourceResv.get_hosts(ehost)\n            elif rec_type == 'E':\n                if self.accounting_workload_parsing:\n                    try:\n                        msg = r.group('msg').split()\n                        attrs = dict([l.split('=', 1) for l in msg])\n                    except Exception:\n                        self.parser_errors += 1\n                        return PARSER_OK_CONTINUE\n                    for k in attrs.keys():\n                        attrs[k] = PbsAttribute.decode_value(attrs[k])\n                    running_time = (int(attrs['end']) - int(attrs['start']))\n                    attrs['running_time'] = str(running_time)\n                    attrs['schedselect'] = attrs['Resource_List.select']\n                    if 'euser' not in attrs:\n                        attrs['euser'] = 'unknown_user'\n\n                    attrs['id'] = r.group('id')\n                    self.job_attrs[r.group('id')] = attrs\n\n                m = self.E_sub_record_tag.match(r.group('msg'))\n                if m:\n                    if jobid not in self.users:\n                        self.users[jobid] = m.group('user')\n                    ehost 
= m.group('exechost')\n                    self.job_nodes[jobid] = ResourceResv.get_hosts(ehost)\n                    ncpus = int(m.group('ncpus'))\n                    self.job_cpus[jobid] = ncpus\n                    self.job_end[jobid] = tm\n\n                    qtime = int(m.group('qtime'))\n                    starttime = int(m.group('start'))\n                    if starttime != 0 and qtime != 0:\n                        # jobs enqueued prior to start of time range\n                        # considered should be reset to start of time\n                        # range. Only matters when computing\n                        # utilization\n                        if (self.utilization_parsing and\n                                qtime < self.record_tm[0]):\n                            qtime = self.record_tm[0]\n                            if starttime < self.record_tm[0]:\n                                starttime = self.record_tm[0]\n                        self.wait_time.append(starttime - qtime)\n                        if m.group('walltime'):\n                            try:\n                                walltime = self.logutils.convert_hhmmss_time(\n                                    m.group('walltime').strip())\n                                self.run_time.append(walltime)\n                            except Exception:\n                                pass\n                        else:\n                            walltime = tm - starttime\n                            self.run_time.append(walltime)\n\n                        if self.utilization_parsing:\n                            self.used_cph += ncpus * (walltime / 60)\n                            if self.utils:\n                                self.used_nph += (len(self.job_nodes[jobid]) *\n                                                  (walltime / 60))\n            elif rec_type == 'Q':\n                self.queue.append(tm)\n            elif rec_type == 'D':\n                if jobid not in 
self.job_end:\n                    self.job_end[jobid] = tm\n\n        elif end is not None and tm > end:\n            return PARSER_OK_STOP\n\n        return PARSER_OK_CONTINUE\n\n    def epilogue(self, line):\n        if self.running_jobs_parsing or self.accounting_workload_parsing:\n            return\n\n        if len(self.record_tm) > 0:\n            last_record_tm = self.record_tm[len(self.record_tm) - 1]\n            self.duration = last_record_tm - self.record_tm[0]\n            self.info[DUR] = self.logutils._duration(self.duration)\n\n        self.jobs_started = list(self.job_start.keys())\n        self.jobs_ended = list(self.job_end.keys())\n        self.job_node_size = [len(n) for n in self.job_nodes.values()]\n        self.job_cpu_size = list(self.job_cpus.values())\n        self.start = sorted(self.job_start.values())\n        self.end = sorted(self.job_end.values())\n\n        # list of jobs that have not yet ended, those are jobs that\n        # have an S record but no E record. 
We port back the precomputed\n        # metrics from the S record into the data to \"publish\"\n        sjobs = set(self.jobs_started).difference(self.jobs_ended)\n        for job in sjobs:\n            if job in self.tmp_wait_time:\n                self.wait_time.append(self.tmp_wait_time[job])\n            if job in self.job_nodes:\n                self.job_node_size.append(len(self.job_nodes[job]))\n            if job in self.job_cpus:\n                self.job_cpu_size.append(self.job_cpus[job])\n            if self.utilization_parsing:\n                if job in self.job_start:\n                    if job in self.job_cpus:\n                        self.used_cph += self.job_cpus[job] * \\\n                            ((last_record_tm - self.job_start[\n                             job]) / 60)\n                    if job in self.job_nodes:\n                        self.used_nph += len(self.job_nodes[job]) * \\\n                            ((last_record_tm - self.job_start[\n                             job]) / 60)\n\n        # Process jobs currently running, those may have an S record\n        # that is older than the time window considered or not.\n        # If they have an S record, then they were already processed\n        # by the S record routine, otherwise, they are processed here\n        if self.utilization_parsing:\n            first_record_tm = self.record_tm[0]\n            a = {'job_state': (EQ, 'R'),\n                 'Resource_List.ncpus': (SET, ''),\n                 'exec_host': (SET, ''),\n                 'stime': (SET, '')}\n            alljobs = self._server.status(JOB, a)\n            for job in alljobs:\n                # the running_jobids is populated from the node's jobs\n                # attribute. 
If a job id is not in the running jobids\n                # list, then its S record was already processed\n                if job['id'] not in self._running_jobids:\n                    continue\n\n                if ('job_state' not in job or\n                        'Resource_List.ncpus' not in job or\n                        'exec_host' not in job or 'stime' not in job):\n                    continue\n                # split to catch a customer tweak\n                stime = int(job['stime'].split()[0])\n                if stime < first_record_tm:\n                    stime = first_record_tm\n                self.used_cph += int(job['Resource_List.ncpus']) * \\\n                    (last_record_tm - stime)\n                nodes = len(self.utils.parse_exechost(\n                    job['exec_host']))\n                self.used_nph += nodes * (last_record_tm - stime)\n\n    def job_info(self, rec):\n        \"\"\"\n        PBS Job information\n        \"\"\"\n        m = self.record_tag.match(rec)\n        if m:\n            d = {}\n            if m.group('type') == 'E':\n                if getattr(self, 'jobid', None) != m.group('id'):\n                    return PARSER_OK_CONTINUE\n                if not hasattr(self, 'job_info_res'):\n                    self.job_info_res = {}\n                for a in m.group('msg').split():\n                    (k, v) = a.split('=', 1)\n                    d[k] = v\n                self.job_info_res[m.group('id')] = d\n\n        return PARSER_OK_CONTINUE\n\n    def summary(self):\n        \"\"\"\n        Accounting log summary\n        \"\"\"\n        if self.running_jobs_parsing or self.accounting_workload_parsing:\n            return\n\n        run_rate = self.logutils.get_rate(self.start)\n        queue_rate = self.logutils.get_rate(self.queue)\n        end_rate = self.logutils.get_rate(self.end)\n\n        self.info[NJQ] = len(self.queue)\n        self.info[NJR] = len(self.start)\n        self.info[NJE] = len(self.end)\n 
       self.info[JRR] = run_rate\n        self.info[JSR] = queue_rate\n        self.info[JER] = end_rate\n        if len(self.wait_time) > 0:\n            wt = sorted(self.wait_time)\n            wta = float(sum(self.wait_time)) / len(self.wait_time)\n            self.info[JWTm] = self.logutils._duration(min(wt))\n            self.info[JWTM] = self.logutils._duration(max(wt))\n            self.info[JWTA] = self.logutils._duration(wta)\n            self.info[JWT25] = self.logutils._duration(\n                self.logutils.percentile(wt, .25))\n            self.info[JWT50] = self.logutils._duration(\n                self.logutils.percentile(wt, .5))\n            self.info[JWT75] = self.logutils._duration(\n                self.logutils.percentile(wt, .75))\n\n        if len(self.run_time) > 0:\n            rt = sorted(self.run_time)\n            self.info[JRTm] = self.logutils._duration(min(rt))\n            self.info[JRT25] = self.logutils._duration(\n                self.logutils.percentile(rt, 0.25))\n            self.info[JRT50] = self.logutils._duration(\n                self.logutils.percentile(rt, 0.50))\n            self.info[JRTA] = self.logutils._duration(\n                str(sum(rt) / len(rt)))\n            self.info[JRT75] = self.logutils._duration(\n                self.logutils.percentile(rt, 0.75))\n            self.info[JRTM] = self.logutils._duration(max(rt))\n\n        if len(self.job_node_size) > 0:\n            js = sorted(self.job_node_size)\n            self.info[JNSm] = min(js)\n            self.info[JNS25] = self.logutils.percentile(js, 0.25)\n            self.info[JNS50] = self.logutils.percentile(js, 0.50)\n            self.info[JNSA] = str(\"%.2f\" % (float(sum(js)) / len(js)))\n            self.info[JNS75] = self.logutils.percentile(js, 0.75)\n            self.info[JNSM] = max(js)\n\n        if len(self.job_cpu_size) > 0:\n            js = sorted(self.job_cpu_size)\n            self.info[JCSm] = min(js)\n            self.info[JCS25] = 
self.logutils.percentile(js, 0.25)\n            self.info[JCS50] = self.logutils.percentile(js, 0.50)\n            self.info[JCSA] = str(\"%.2f\" % (float(sum(js)) / len(js)))\n            self.info[JCS75] = self.logutils.percentile(js, 0.75)\n            self.info[JCSM] = max(js)\n\n        if self.utilization_parsing:\n            ncph = self._total_ncpus * self.duration\n            nph = self._num_nodes * self.duration\n            if ncph > 0:\n                self.info[UNCPUS] = str(\"%.2f\" %\n                                        (100 * float(self.used_cph) / ncph) +\n                                        '%')\n            if nph > 0:\n                self.info[UNODES] = str(\"%.2f\" %\n                                        (100 * float(self.used_nph) / nph) +\n                                        '%')\n            self.info[CPH] = self.used_cph\n            self.info[NPH] = self.used_nph\n\n        self.info[USRS] = len(set(self.users.values()))\n\n        return self.info\n"
  },
  {
    "path": "test/fw/ptl/utils/pbs_procutils.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport sys\nimport time\nimport re\nimport threading\nimport logging\nimport socket\nimport os\nimport json\nfrom ptl.utils.pbs_dshutils import DshUtils\n\n\nclass ProcUtils(object):\n\n    \"\"\"\n    Utilities to query process information\n    \"\"\"\n\n    logger = logging.getLogger(__name__)\n    du = DshUtils()\n    platform = sys.platform\n\n    def __init__(self):\n        self.processes = {}\n        self.__h2ps = {}\n\n    def get_ps_cmd(self, hostname=None):\n        \"\"\"\n        Get the ps command\n\n        :param hostname: hostname of the machine\n        :type hostname: str or None\n        \"\"\"\n        if hostname is None:\n            hostname = socket.gethostname()\n\n        if hostname in self.__h2ps:\n            return self.__h2ps[hostname]\n\n        if not self.du.is_localhost(hostname):\n            platform = self.du.get_platform(hostname)\n        else:\n            platform = self.platform\n\n        # set some platform-specific arguments to ps\n        ps_arg = '-C'\n        ps_cmd = ['ps', '-o', 'pid,rss,vsz,pcpu,pmem,size,cputime,command']\n        self.__h2ps[hostname] = (ps_cmd, ps_arg)\n\n        return (ps_cmd, ps_arg)\n\n    def _init_processes(self):\n        self.processes = {}\n\n    def _get_proc_info_unix(self, hostname=None, name=None,\n                            pid=None, regexp=False):\n        \"\"\"\n    
    Helper function to ``get_proc_info`` for Unix only system\n        \"\"\"\n        (ps_cmd, ps_arg) = self.get_ps_cmd(hostname)\n        if name is not None:\n            if not regexp:\n                cr = self.du.run_cmd(hostname, (ps_cmd + [ps_arg, name]),\n                                     level=logging.DEBUG2)\n            else:\n                cr = self.du.run_cmd(hostname, ps_cmd + ['-e'],\n                                     level=logging.DEBUG2)\n        elif pid is not None:\n            cr = self.du.run_cmd(hostname, ps_cmd + ['-p', pid],\n                                 level=logging.DEBUG2)\n        else:\n            return\n\n        if cr['rc'] == 0 and cr['out']:\n            for proc in cr['out']:\n                _pi = None\n                try:\n                    _s = proc.split()\n                    p = _s[0]\n                    rss = _s[1]\n                    vsz = _s[2]\n                    pcpu = _s[3]\n                    pmem = _s[4]\n                    size = _s[5]\n                    cputime = _s[6]\n                    command = \" \".join(_s[7:])\n                except BaseException:\n                    continue\n\n                if ((pid is not None and p == str(pid)) or\n                    (name is not None and (\n                        (regexp and re.search(name, command) is not None) or\n                        (not regexp and name in command)))):\n                    _pi = ProcInfo(name=command)\n                    _pi.pid = p\n                    _pi.rss = rss\n                    _pi.vsz = vsz\n                    _pi.pcpu = pcpu\n                    _pi.pmem = pmem\n                    _pi.size = size\n                    _pi.cputime = cputime\n                    _pi.command = command\n\n                if _pi is not None:\n                    if command in self.processes:\n                        self.processes[command].append(_pi)\n                    else:\n                        
self.processes[command] = [_pi]\n        return self.processes\n\n    def get_proc_info(self, hostname=None, name=None, pid=None, regexp=False):\n        \"\"\"\n        Return process information for a given process name or pid\n        on a given host\n\n        :param hostname: The hostname on which to query the process\n                         info. On Windows, only localhost is queried.\n        :type hostname: str or None\n        :param name: The name of the process to query.\n        :type name: str or None\n        :param pid: The pid of the process to query\n        :type pid: int or None\n        :param regexp: Match processes by regular expression. Defaults\n                       to False. Does not apply to matching by PID.\n        :type regexp: bool\n        :returns: A list of ProcInfo objects, one for each matching\n                  process.\n\n        .. note:: If both name and pid are specified, name is used.\n        \"\"\"\n        self._init_processes()\n        return self._get_proc_info_unix(hostname, name, pid, regexp)\n\n    def get_proc_state(self, hostname=None, pid=None):\n        \"\"\"\n        :returns: PID's process state on host hostname\n\n        On error, the empty string is returned.\n        \"\"\"\n        if not self.du.is_localhost(hostname):\n            platform = self.du.get_platform(hostname)\n        else:\n            platform = sys.platform\n\n        try:\n            if platform.startswith('linux') or platform.startswith('shasta'):\n                cmd = ['ps', '-o', 'stat', '-p', str(pid), '--no-heading']\n                rv = self.du.run_cmd(hostname, cmd, level=logging.DEBUG2)\n                return rv['out'][0][0]\n        except BaseException:\n            self.logger.error('Error getting process state for pid ' +\n                              str(pid))\n            return ''\n\n    def get_proc_children(self, hostname=None, ppid=None):\n        \"\"\"\n        :returns: A list of children PIDs associated with ``PPID`` on\n                  host 
hostname.\n\n        On error, an empty list is returned.\n        \"\"\"\n        try:\n            if not isinstance(ppid, str):\n                ppid = str(ppid)\n\n            if int(ppid) <= 0:\n                raise ValueError('invalid ppid: %s' % ppid)\n\n            if not self.du.is_localhost(hostname):\n                platform = self.du.get_platform(hostname)\n            else:\n                platform = sys.platform\n\n            childlist = []\n\n            if platform.startswith('linux') or platform.startswith('shasta'):\n                cmd = ['ps', '-o', 'pid', '--ppid', ppid, '--no-heading']\n                rv = self.du.run_cmd(hostname, cmd)\n                children = rv['out'][:-1]\n            else:\n                children = []\n\n            for child in children:\n                child = child.strip()\n                if child != '':\n                    childlist.append(child)\n                    childlist.extend(self.get_proc_children(hostname, child))\n\n            return childlist\n        except BaseException:\n            self.logger.error('Error getting children processes of parent ' +\n                              ppid)\n            return []\n\n\nclass ProcInfo(object):\n\n    \"\"\"\n    Process information: reports ``PID``, ``RSS``, ``VSZ``, command\n    and the time at which the process information was collected\n    \"\"\"\n\n    def __init__(self, name=None, pid=None):\n        self.name = name\n        self.pid = pid\n        self.rss = None\n        self.vsz = None\n        self.pcpu = None\n        self.pmem = None\n        self.size = None\n        self.cputime = None\n        self.time = time.time()\n        self.command = None\n\n    def __str__(self):\n        return \"%s pid: %s rss: %s vsz: %s pcpu: %s pmem: %s \\\n               size: %s cputime: %s command: %s\" % \\\n               (self.name, str(self.pid), str(self.rss), str(self.vsz),\n                str(self.pcpu), str(self.pmem), str(self.size),\n                str(self.cputime), 
self.command)\n\n\nclass ProcMonitor(threading.Thread):\n\n    \"\"\"\n    A background process monitoring tool\n    \"\"\"\n    logger = logging.getLogger(__name__)\n    du = DshUtils()\n\n    def __init__(self, name=None, regexp=False, frequency=60):\n        threading.Thread.__init__(self)\n        self.name = name\n        self.frequency = frequency\n        self.regexp = regexp\n        self._pu = ProcUtils()\n        self.stop_thread = threading.Event()\n        self.db_proc_info = []\n\n    def set_frequency(self, value=60):\n        \"\"\"\n        Set the monitoring frequency\n\n        :param value: Frequency value in seconds\n        :type value: int\n        \"\"\"\n        self.logger.debug('procmonitor: set frequency to ' + str(value))\n        self.frequency = value\n\n    def get_system_stats(self, nw_protocols=None):\n        \"\"\"\n        Collect system statistics via sar\n        \"\"\"\n        timenow = int(time.time())\n        sysstat = {}\n        # if no protocols set, use default\n        if not nw_protocols:\n            nw_protocols = ['TCP']\n        cmd = 'sar -rSub -n %s 1 1' % ','.join(nw_protocols)\n        rv = self.du.run_cmd(cmd=cmd, as_script=True)\n        if rv['err']:\n            return None\n        op = rv['out'][2:]\n        op = [i.split()[2:] for i in op if\n              (i and not i.startswith('Average'))]\n        sysstat['name'] = \"System\"\n        sysstat['time'] = time.ctime(timenow)\n        for i in range(0, len(op), 2):\n            sysstat.update(dict(zip(op[i], op[i + 1])))\n        return sysstat\n\n    def run(self):\n        \"\"\"\n        Run the process monitoring\n        \"\"\"\n        while not self.stop_thread.is_set():\n            self._pu.get_proc_info(name=self.name, regexp=self.regexp)\n            for _p in self._pu.processes.values():\n                for _per_proc in _p:\n                    if bool(re.search(\"^((?!benchpress).)*$\", _per_proc.name)):\n                        _to_db = {}\n                        _to_db['time'] = 
time.ctime(int(_per_proc.time))\n                        _to_db['rss'] = _per_proc.rss\n                        _to_db['vsz'] = _per_proc.vsz\n                        _to_db['pcpu'] = _per_proc.pcpu\n                        _to_db['pmem'] = _per_proc.pmem\n                        _to_db['size'] = _per_proc.size\n                        _to_db['cputime'] = _per_proc.cputime\n                        _to_db['name'] = _per_proc.name\n                        self.db_proc_info.append(_to_db)\n            _sys_info = self.get_system_stats(nw_protocols=['TCP'])\n            if _sys_info is not None:\n                self.db_proc_info.append(_sys_info)\n            # db_proc_info holds all records so far; rewrite the file each\n            # cycle so it remains a single valid JSON document\n            with open('proc_monitor.json', 'w', encoding='utf-8') as proc:\n                json.dump(\n                    self.db_proc_info,\n                    proc,\n                    ensure_ascii=False,\n                    indent=4)\n            time.sleep(self.frequency)\n\n    def stop(self):\n        \"\"\"\n        Stop the process monitoring\n        \"\"\"\n        self.stop_thread.set()\n        self.join()\n\n\nif __name__ == '__main__':\n    pm = ProcMonitor(name='.*pbs_server.*|.*pbs_sched.*', regexp=True,\n                     frequency=1)\n    pm.start()\n    time.sleep(4)\n    pm.stop()\n
  },
  {
    "path": "test/fw/ptl/utils/pbs_snaputils.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport collections\nimport logging\nimport os\nimport pprint\nimport random\nimport re\nimport shlex\nimport shutil\nimport socket\nimport tarfile\nimport time\nimport platform\nfrom subprocess import STDOUT\nfrom pathlib import Path\nfrom multiprocessing import Process\n\nfrom ptl.lib.pbs_ifl_mock import *\nfrom ptl.lib.pbs_testlib import (SCHED, BatchUtils, Scheduler, Server,\n                                 PbsAttribute)\nfrom ptl.utils.pbs_dshutils import DshUtils\nfrom ptl.utils.pbs_logutils import PBSLogUtils\n\n# Define an enum which is used to label various pieces of information\n(   # qstat outputs\n    QSTAT_B_OUT,\n    QSTAT_BF_OUT,\n    QSTAT_OUT,\n    QSTAT_F_OUT,\n    QSTAT_T_OUT,\n    QSTAT_TF_OUT,\n    QSTAT_X_OUT,\n    QSTAT_XF_OUT,\n    QSTAT_NS_OUT,\n    QSTAT_FX_DSV_OUT,\n    QSTAT_F_DSV_OUT,\n    QSTAT_F_JSON_OUT,\n    QSTAT_Q_OUT,\n    QSTAT_QF_OUT,\n    # qmgr outputs\n    QMGR_PS_OUT,\n    QMGR_PH_OUT,\n    QMGR_LPBSHOOK_OUT,\n    QMGR_LSCHED_OUT,\n    QMGR_PN_OUT,\n    QMGR_PR_OUT,\n    QMGR_PQ_OUT,\n    QMGR_PSCHED_OUT,\n    # pbsnodes outputs\n    PBSNODES_VA_OUT,\n    PBSNODES_A_OUT,\n    PBSNODES_AVSJ_OUT,\n    PBSNODES_ASJ_OUT,\n    PBSNODES_AVS_OUT,\n    PBSNODES_AS_OUT,\n    PBSNODES_AFDSV_OUT,\n    PBSNODES_AVFDSV_OUT,\n    PBSNODES_AVFJSON_OUT,\n    # pbs_rstat outputs\n    PBS_RSTAT_OUT,\n    PBS_RSTAT_F_OUT,\n    # PBS 
config related outputs\n    PBS_CONF,\n    PBS_PROBE_OUT,\n    PBS_HOSTN_OUT,\n    PBS_ENVIRONMENT,\n    # System related outputs\n    OS_INFO,\n    PROCESS_INFO,\n    LSOF_PBS_OUT,\n    ETC_HOSTS,\n    ETC_NSSWITCH_CONF,\n    VMSTAT_OUT,\n    DF_H_OUT,\n    DMESG_OUT,\n    PS_LEAF_OUT,\n    # Logs\n    ACCT_LOGS,\n    SVR_LOGS,\n    SCHED_LOGS,\n    MOM_LOGS,\n    PG_LOGS,\n    COMM_LOGS,\n    # Daemon priv directories\n    SVR_PRIV,\n    MOM_PRIV,\n    SCHED_PRIV,\n    # Core file information\n    CORE_SCHED,\n    CORE_SERVER,\n    CORE_MOM,\n    # Miscellaneous\n    CTIME) = list(range(59))\n\n\n# Define paths to various files/directories with respect to the snapshot\n# server/\nSERVER_DIR = \"server\"\nQSTAT_B_PATH = os.path.join(SERVER_DIR, \"qstat_B.out\")\nQSTAT_BF_PATH = os.path.join(SERVER_DIR, \"qstat_Bf.out\")\nQMGR_PS_PATH = os.path.join(SERVER_DIR, \"qmgr_ps.out\")\nQSTAT_Q_PATH = os.path.join(SERVER_DIR, \"qstat_Q.out\")\nQSTAT_QF_PATH = os.path.join(SERVER_DIR, \"qstat_Qf.out\")\nQMGR_PR_PATH = os.path.join(SERVER_DIR, \"qmgr_pr.out\")\nQMGR_PQ_PATH = os.path.join(SERVER_DIR, \"qmgr_pq.out\")\n# server_priv/\nSVR_PRIV_PATH = \"server_priv\"\nACCT_LOGS_PATH = os.path.join(\"server_priv\", \"accounting\")\nRSCDEF_PATH = os.path.join(\"server_priv\", \"resourcedef\")\n# server_logs/\nSVR_LOGS_PATH = \"server_logs\"\n# job/\nJOB_DIR = \"job\"\nQSTAT_PATH = os.path.join(JOB_DIR, \"qstat.out\")\nQSTAT_F_PATH = os.path.join(JOB_DIR, \"qstat_f.out\")\nQSTAT_T_PATH = os.path.join(JOB_DIR, \"qstat_t.out\")\nQSTAT_TF_PATH = os.path.join(JOB_DIR, \"qstat_tf.out\")\nQSTAT_X_PATH = os.path.join(JOB_DIR, \"qstat_x.out\")\nQSTAT_XF_PATH = os.path.join(JOB_DIR, \"qstat_xf.out\")\nQSTAT_NS_PATH = os.path.join(JOB_DIR, \"qstat_ns.out\")\nQSTAT_FX_DSV_PATH = os.path.join(JOB_DIR, \"qstat_fx_F_dsv.out\")\nQSTAT_F_DSV_PATH = os.path.join(JOB_DIR, \"qstat_f_F_dsv.out\")\nQSTAT_F_JSON_PATH = os.path.join(JOB_DIR, \"qstat_f_F_json.out\")\n# node/\nNODE_DIR = 
\"node\"\nPBSNODES_VA_PATH = os.path.join(NODE_DIR, \"pbsnodes_va.out\")\nPBSNODES_A_PATH = os.path.join(NODE_DIR, \"pbsnodes_a.out\")\nPBSNODES_AVSJ_PATH = os.path.join(NODE_DIR, \"pbsnodes_avSj.out\")\nPBSNODES_ASJ_PATH = os.path.join(NODE_DIR, \"pbsnodes_aSj.out\")\nPBSNODES_AVS_PATH = os.path.join(NODE_DIR, \"pbsnodes_avS.out\")\nPBSNODES_AS_PATH = os.path.join(NODE_DIR, \"pbsnodes_aS.out\")\nPBSNODES_AFDSV_PATH = os.path.join(NODE_DIR, \"pbsnodes_aFdsv.out\")\nPBSNODES_AVFDSV_PATH = os.path.join(NODE_DIR, \"pbsnodes_avFdsv.out\")\nPBSNODES_AVFJSON_PATH = os.path.join(NODE_DIR, \"pbsnodes_avFjson.out\")\nQMGR_PN_PATH = os.path.join(NODE_DIR, \"qmgr_pn_default.out\")\n# mom_priv/\nMOM_PRIV_PATH = \"mom_priv\"\n# mom_logs/\nMOM_LOGS_PATH = \"mom_logs\"\n# comm_logs/\nCOMM_LOGS_PATH = \"comm_logs\"\n# hook/\nHOOK_DIR = \"hook\"\nQMGR_PH_PATH = os.path.join(HOOK_DIR, \"qmgr_ph_default.out\")\nQMGR_LPBSHOOK_PATH = os.path.join(HOOK_DIR, \"qmgr_lpbshook.out\")\n# scheduler/\nSCHED_DIR = \"scheduler\"\nQMGR_LSCHED_PATH = os.path.join(SCHED_DIR, \"qmgr_lsched.out\")\nQMGR_PSCHED_PATH = os.path.join(SCHED_DIR, \"qmgr_psched.out\")\n# sched_priv/\nDFLT_SCHED_PRIV_PATH = \"sched_priv\"\n# sched_logs/\nDFLT_SCHED_LOGS_PATH = \"sched_logs\"\n# reservation/\nRESV_DIR = \"reservation\"\nPBS_RSTAT_PATH = os.path.join(RESV_DIR, \"pbs_rstat.out\")\nPBS_RSTAT_F_PATH = os.path.join(RESV_DIR, \"pbs_rstat_f.out\")\n# datastore/\nDATASTORE_DIR = \"datastore\"\nPG_LOGS_PATH = os.path.join(DATASTORE_DIR, \"pg_log\")\n# core_file_bt/\nCORE_DIR = \"core_file_bt\"\nCORE_SERVER_PATH = os.path.join(CORE_DIR, \"server_priv\")\nCORE_SCHED_PATH = os.path.join(CORE_DIR, \"sched_priv\")\nCORE_MOM_PATH = os.path.join(CORE_DIR, \"mom_priv\")\n# system/\nSYS_DIR = \"system\"\nPBS_PROBE_PATH = os.path.join(SYS_DIR, \"pbs_probe_v.out\")\nPBS_HOSTN_PATH = os.path.join(SYS_DIR, \"pbs_hostn_v.out\")\nPBS_ENV_PATH = os.path.join(SYS_DIR, \"pbs_environment\")\nOS_PATH = os.path.join(SYS_DIR, 
\"os_info\")\nPROCESS_PATH = os.path.join(SYS_DIR, \"process_info\")\nETC_HOSTS_PATH = os.path.join(SYS_DIR, \"etc_hosts\")\nETC_NSSWITCH_PATH = os.path.join(SYS_DIR, \"etc_nsswitch_conf\")\nLSOF_PBS_PATH = os.path.join(SYS_DIR, \"lsof_pbs.out\")\nVMSTAT_PATH = os.path.join(SYS_DIR, \"vmstat.out\")\nDF_H_PATH = os.path.join(SYS_DIR, \"df_h.out\")\nDMESG_PATH = os.path.join(SYS_DIR, \"dmesg.out\")\nPS_LEAF_PATH = os.path.join(SYS_DIR, \"ps_leaf.out\")\n# top-level\nPBS_CONF_PATH = \"pbs.conf\"\nCTIME_PATH = \"ctime\"\n\n# Define paths to PBS commands used to capture data with respect to PBS_EXEC\nQSTAT_CMD = os.path.join(\"bin\", \"qstat\")\nPBSNODES_CMD = os.path.join(\"bin\", \"pbsnodes\")\nQMGR_CMD = os.path.join(\"bin\", \"qmgr\")\nPBS_RSTAT_CMD = os.path.join(\"bin\", \"pbs_rstat\")\nPBS_PROBE_CMD = os.path.join(\"sbin\", \"pbs_probe\")\nPBS_HOSTN_CMD = os.path.join(\"bin\", \"pbs_hostn\")\n\n\nclass ObfuscateSnapshot(object):\n    val_obf_map = {}\n    vals_to_del = []\n    bu = BatchUtils()\n    du = DshUtils()\n    num_bad_acct_records = 0\n    logger = logging.getLogger(__name__)\n\n    job_attrs_del = [ATTR_v, ATTR_e, ATTR_jobdir,\n                     ATTR_submit_arguments, ATTR_o, ATTR_S]\n    resv_attrs_del = [ATTR_v]\n    svr_attrs_del = [ATTR_mailfrom]\n    job_attrs_obf = [ATTR_euser, ATTR_egroup, ATTR_project, ATTR_A,\n                     ATTR_g, ATTR_M, ATTR_u, ATTR_owner, ATTR_name]\n    resv_attrs_obf = [ATTR_A, ATTR_g, ATTR_M, ATTR_auth_u, ATTR_auth_g,\n                      ATTR_auth_h, ATTR_resv_owner]\n    svr_attrs_obf = [ATTR_SvrHost, ATTR_acluser, ATTR_aclResvuser,\n                     ATTR_aclResvhost, ATTR_aclhost, ATTR_operators,\n                     ATTR_managers]\n    node_attrs_obf = [ATTR_NODE_Host, ATTR_NODE_Mom, ATTR_rescavail + \".host\",\n                      ATTR_rescavail + \".vnode\"]\n    sched_attrs_obf = [ATTR_SchedHost]\n    queue_attrs_obf = [ATTR_acluser, ATTR_aclgroup, ATTR_aclhost]\n    skip_vals = 
[\"_pbs_project_default\", \"*\", \"pbsadmin\", \"pbsuser\"]\n\n    def _obfuscate_stat(self, file_path, attrs_to_obf, attrs_to_del):\n        \"\"\"\n        Helper function to obfuscate qstat/rstat -f & pbsnodes -av outputs\n\n        :param file_path - path to the qstat output file in snapshot\n        :type file_path - str\n        :param attrs_to_obf - attribute list to obfuscate\n        :type list\n        :param attrs_to_del- attribute list to delete\n        :type list\n        \"\"\"\n        fout = self.du.create_temp_file()\n\n        with open(file_path, \"r\") as fdin, open(fout, \"w\") as fdout:\n            delete_line = False\n            val_obf = None\n            val_del = None\n            key_obf = None\n            for line in fdin:\n                # Check if this is a line extension for an attr being deleted\n                if line[0] == \"\\t\":\n                    if delete_line:\n                        val_del += line.strip()\n                        continue\n                    elif val_obf is not None:\n                        val_obf += line.strip()\n                        continue\n\n                delete_line = False\n                if val_del is not None:\n                    self.vals_to_del.append(val_del)\n                    val_del = None\n\n                if val_obf is not None:\n                    # Write the previous, obfuscated attribute first\n                    val_to_write = []\n                    for val in val_obf.split(\",\"):\n                        val = val.strip()\n                        val = val.split(\"@\")\n                        out_val = []\n                        for _val in val:\n                            if _val in self.skip_vals:\n                                obf = _val\n                            elif _val not in self.val_obf_map:\n                                obf = PbsAttribute.random_str(\n                                    length=random.randint(8, 30))\n                      
          self.val_obf_map[_val] = obf\n                            else:\n                                obf = self.val_obf_map[_val]\n                            out_val.append(obf)\n                        out_val = \"@\".join(out_val)\n                        val_to_write.append(out_val)\n\n                    # Some PBS outputs have inconsistent whitespace\n                    # e.g. - pbs_rstat -f doesn't print leading spaces\n                    # So, extract the lead from the original line\n                    lead = \"\"\n                    for c in line:\n                        if not c.isspace():\n                            break\n                        lead += c\n\n                    obf_line = lead + key_obf + \" = \" + \\\n                        \",\".join(val_to_write) + \"\\n\"\n                    fdout.write(obf_line)\n                    val_obf = None\n                    key_obf = None\n\n                if \"=\" in line:\n                    attrname, attrval = line.split(\"=\", 1)\n                    attrname = attrname.strip()\n                    attrval = attrval.strip()\n\n                    # Check if this attribute needs to be deleted\n                    if attrname in attrs_to_del:\n                        delete_line = True\n                        val_del = attrval\n\n                    if delete_line is True:\n                        continue\n\n                    # Check if this attribute needs to be obfuscated\n                    if attrname in attrs_to_obf:\n                        val_obf = attrval\n                        key_obf = attrname\n\n                if val_obf is None:\n                    fdout.write(line)\n\n        shutil.move(fout, file_path)\n\n    def _obfuscate_acct_file(self, attrs_obf, file_path):\n        \"\"\"\n        Helper function to anonymize a single accounting log file\n\n        :param attrs_obf - set of attributes to obfuscate\n        :type attrs_obf - set\n        :param file_path - path of acct log file\n      
  :type file_path - str\n        \"\"\"\n        newcontent = []\n        with open(file_path, \"r\") as fd:\n            for record in fd:\n                # accounting log format is\n                # %Y/%m/%d %H:%M:%S;<Key>;<Id>;<key1=val1> <key2=val2> ...\n                record_list = record.split(\";\", 3)\n                if record_list is None or len(record_list) < 4:\n                    continue\n                if record_list[1] in (\"A\", \"L\"):\n                    newcontent.append(record)\n                    continue\n                content_list = shlex.split(record_list[3].strip())\n\n                skip_record = False\n                kvl_list = [kv.split(\"=\", 1) for kv in content_list]\n                if kvl_list is None:\n                    self.num_bad_acct_records += 1\n                    self.logger.debug(\"Bad accounting record found:\\n\" +\n                                      record)\n                    continue\n                for kvl in kvl_list:\n                    try:\n                        k, v = kvl\n                    except ValueError:\n                        self.num_bad_acct_records += 1\n                        self.logger.debug(\"Bad accounting record found:\\n\" +\n                                          record)\n                        skip_record = True\n                        break\n\n                    if k in attrs_obf:\n                        val = v.split(\"@\")\n                        obf = []\n                        for _val in val:\n                            if _val == \"_pbs_project_default\":\n                                obf.append(_val)\n                            elif _val not in self.val_obf_map:\n                                obf_v = PbsAttribute.random_str(\n                                    length=random.randint(8, 30))\n                                self.val_obf_map[_val] = obf_v\n                                obf.append(obf_v)\n                            else:\n      
                          obf.append(self.val_obf_map[_val])\n                        kvl[1] = \"@\".join(obf)\n\n                if not skip_record:\n                    record = \";\".join(record_list[:3]) + \";\" + \\\n                        \" \".join([\"=\".join(n) for n in kvl_list])\n                    newcontent.append(record + \"\\n\")\n\n        with open(file_path, \"w\") as fd:\n            fd.write(\"\".join(newcontent))\n\n    def obfuscate_acct_logs(self, snap_dir, sudo_val):\n        \"\"\"\n        Helper function to obfuscate accounting logs\n\n        :param snap_dir - the snapshot directory path\n        :type snap_dir - str\n        :param sudo_val - list the accounting directory with sudo?\n        :type sudo_val - bool\n        \"\"\"\n        attrs_to_obf = self.job_attrs_obf + self.resv_attrs_obf +\\\n            self.svr_attrs_obf + self.queue_attrs_obf + self.node_attrs_obf +\\\n            self.sched_attrs_obf\n\n        # Some accounting record attributes are named differently\n        acct_extras = [\"user\", \"requestor\", \"group\", \"account\"]\n        attrs_to_obf += acct_extras\n        attrs_to_obf = set(attrs_to_obf)\n\n        acct_path = os.path.join(snap_dir, \"server_priv\", \"accounting\")\n        if not os.path.isdir(acct_path):\n            return\n        acct_fpaths = self.du.listdir(path=acct_path, sudo=sudo_val)\n\n        # Limit the number of cores used to 10 (cpu_count() can return None)\n        ncpus = os.cpu_count() or 1\n        ncpus = min(ncpus, 10)\n        nfiles = len(acct_fpaths)\n        i = 0\n        while i < nfiles:\n            plist = []\n            for _ in range(ncpus):\n                acct_fpath = acct_fpaths[i]\n                p = Process(target=self._obfuscate_acct_file,\n                            args=(attrs_to_obf, acct_fpath))\n                p.start()\n                plist.append(p)\n                i += 1\n                if i >= nfiles:\n                    break\n            for p in plist:\n                p.join()\n\n        if self.num_bad_acct_records > 0:\n            self.logger.info(\"Total bad 
records found: \" +\n                             str(self.num_bad_acct_records))\n\n    def _obfuscate_with_map(self, fpath, sudo=False):\n        \"\"\"\n        Helper function to obfuscate a file with obfuscation map\n\n        :param filepath - path to the file\n        :type filepath - str\n        :param sudo - sudo True/False?\n        :type bool\n\n        :return str - possibly updated path to the obfuscated file\n        \"\"\"\n        fout = self.du.create_temp_file()\n        pathobj = Path(fpath)\n        fname = pathobj.name\n        fparent = pathobj.parent\n        newfpath = fpath\n        with open(fpath, \"r\", encoding=\"latin-1\") as fd, \\\n                open(fout, \"w\") as fdout:\n            alltext = fd.read()\n            # Obfuscate values from val_obf_map\n            for key, val in self.val_obf_map.items():\n                alltext = re.sub(r'\\b' + key + r'\\b', val, alltext)\n                if key in fname:\n                    fname = fname.replace(key, val)\n                    newfpath = os.path.join(fparent, fname)\n            # Remove the attr values from vals_to_del list\n            for val in self.vals_to_del:\n                alltext = alltext.replace(val, \"\")\n            fdout.write(alltext)\n\n        self.du.rm(path=fpath, sudo=sudo)\n        shutil.move(fout, newfpath)\n\n        return newfpath\n\n    def obfuscate_snapshot(self, snap_dir, map_file, sudo_val):\n        \"\"\"\n        Helper function to obfuscate a snapshot\n\n        :param snap_dir - path to snapshot directory to obfsucate\n        :type snap_dir - str\n        :param map_file - path to the map file to create\n        :type map_file - str\n        :param sudo_val - value of the --with-sudo option (needed for printjob)\n        :type sudo_val bool\n        \"\"\"\n        if not os.path.isdir(snap_dir):\n            raise ValueError(\"Snapshot directory path not accessible\"\n                             \" for obfuscation\")\n\n        # 
Let's go through the qmgr, qstat, pbsnodes and resourcedef file\n        # Get the values associated with attributes to obfuscate and\n        # obfuscate them everywhere in the snapshot\n        # Delete the attribute-value pair in the delete lists\n        stat_f_files = {\n            QSTAT_BF_PATH: [self.svr_attrs_obf, self.svr_attrs_del],\n            QSTAT_F_PATH: [self.job_attrs_obf, self.job_attrs_del],\n            QSTAT_TF_PATH: [self.job_attrs_obf, self.job_attrs_del],\n            QSTAT_XF_PATH: [self.job_attrs_obf, self.job_attrs_del],\n            QSTAT_QF_PATH: [self.queue_attrs_obf, []],\n            PBSNODES_VA_PATH: [self.node_attrs_obf, []],\n            PBS_RSTAT_F_PATH: [self.resv_attrs_obf, self.resv_attrs_del]\n        }\n        for s_f_file, attrs in stat_f_files.items():\n            qstat_f_path = os.path.join(snap_dir, s_f_file)\n            if os.path.isfile(qstat_f_path):\n                self._obfuscate_stat(qstat_f_path, attrs[0], attrs[1])\n\n        # Parse resourcedef file and add custom resources to obfuscation map\n        # We will later do a sed on the whole snapshot, that's when these\n        # will get obfuscated\n        custom_rscs = []\n        custrscs_path = os.path.join(snap_dir, RSCDEF_PATH)\n        if os.path.isfile(custrscs_path):\n            with open(custrscs_path, \"r\") as fd:\n                for line in fd:\n                    rscs_name = line.split(\" \", 1)[0]\n                    custom_rscs.append(rscs_name.strip())\n        for rscs in custom_rscs:\n            if rscs not in self.val_obf_map:\n                obf = PbsAttribute.random_str(length=random.randint(8, 30))\n                self.val_obf_map[rscs] = obf\n\n        # Obfuscate accounting logs\n        # Note: We can't rely on sed to do this because there might be logs\n        # From long back which have usernames & hostnames that didn't get\n        # captured in the qstat/pbs_rstat/pbsnodes outputs\n        
self.obfuscate_acct_logs(snap_dir, sudo_val)\n\n        # Until we can support obfuscating daemon logs, delete them\n        svr_logs = os.path.join(snap_dir, SVR_LOGS_PATH)\n        mom_logs = os.path.join(snap_dir, MOM_LOGS_PATH)\n        comm_logs = os.path.join(snap_dir, COMM_LOGS_PATH)\n        db_logs = os.path.join(snap_dir, PG_LOGS_PATH)\n        topology = os.path.join(snap_dir, SVR_PRIV_PATH, \"topology\")\n        sched_logs = []\n        for dirname in self.du.listdir(path=snap_dir, sudo=sudo_val,\n                                       fullpath=False):\n            if dirname.startswith(DFLT_SCHED_LOGS_PATH):\n                dirpath = os.path.join(snap_dir, str(dirname))\n                sched_logs.append(dirpath)\n        # Also delete any .JB files, store printjob outputs of them instead\n        conf = self.du.parse_pbs_config()\n        printjob = None\n        if conf is not None:\n            printjob = os.path.join(conf[\"PBS_EXEC\"], \"bin\", \"printjob\")\n            if not os.path.isfile(printjob):\n                printjob = None\n        if printjob is None:\n            self.logger.error(\"printjob not found, so .JB files will \"\n                              \"simply be deleted\")\n        jobspath = os.path.join(snap_dir, MOM_PRIV_PATH, \"jobs\")\n        jbcontent = {}\n        jbfilelist = self.du.listdir(path=jobspath, sudo=sudo_val,\n                                     fullpath=False)\n        if jbfilelist is not None:\n            for name in jbfilelist:\n                if name.endswith(\".JB\"):\n                    ret = None\n                    fpath = os.path.join(jobspath, name)\n                    if printjob is not None:\n                        cmd = [printjob, fpath]\n                        ret = self.du.run_cmd(cmd=cmd, sudo=sudo_val,\n                                              as_script=True)\n                    self.du.rm(path=fpath)\n                    if ret is not None and ret[\"out\"] is not None:\n     
                   jbcontent[name] = \"\\n\".join(ret[\"out\"])\n                # Also delete any other files/directories inside mom_priv/jobs\n                else:\n                    path = os.path.join(jobspath, name)\n                    self.du.rm(path=path, recursive=True, force=True)\n        for name, content in jbcontent.items():\n            # Save the printjob outputs, these will be obfuscated later\n            fpath = os.path.join(jobspath, name + \"_printjob\")\n            with open(fpath, \"w\") as fd:\n                fd.write(str(content))\n\n        dirs_to_del = [svr_logs, mom_logs, comm_logs, db_logs, topology]\n        dirs_to_del += sched_logs\n        for dirpath in dirs_to_del:\n            self.du.rm(path=dirpath, recursive=True, force=True)\n\n        # Now, go through the obfuscation map and replace all other instances\n        # of the sensitive values in the snapshot with their obfuscated values\n        for root, _, fnames in os.walk(snap_dir):\n            for fname in fnames:\n                fpath = os.path.join(root, fname)\n                self._obfuscate_with_map(fpath, sudo=sudo_val)\n\n        with open(map_file, \"w\") as fd:\n            fd.write(\"Attributes Obfuscated:\\n\")\n            fd.write(pprint.pformat(self.val_obf_map) + \"\\n\")\n            fd.write(\"Attributes Deleted:\\n\")\n            fd.write(\"\\n\".join(self.vals_to_del) + \"\\n\")\n\n\nclass PBSSnapUtils(object):\n    \"\"\"\n    Wrapper class around _PBSSnapUtils\n    This makes sure that we do necessay cleanup before destroying objects\n    \"\"\"\n\n    def __init__(self, out_dir, basic=None, acct_logs=None,\n                 daemon_logs=None, create_tar=False, log_path=None,\n                 with_sudo=False):\n        self.out_dir = out_dir\n        self.basic = basic\n        self.acct_logs = acct_logs\n        self.srvc_logs = daemon_logs\n        self.create_tar = create_tar\n        self.log_path = log_path\n        self.with_sudo = 
with_sudo\n        self.utils_obj = None\n\n    def __enter__(self):\n        self.utils_obj = _PBSSnapUtils(self.out_dir, self.basic,\n                                       self.acct_logs, self.srvc_logs,\n                                       self.create_tar, self.log_path,\n                                       self.with_sudo)\n        return self.utils_obj\n\n    def __exit__(self, exc_type, exc_value, traceback):\n        # Do some cleanup\n        self.utils_obj.finalize()\n\n        return False\n\n\nclass _PBSSnapUtils(object):\n\n    \"\"\"\n    PBS snapshot utilities\n    \"\"\"\n\n    def __init__(self, out_dir, basic=None, acct_logs=None,\n                 daemon_logs=None, create_tar=False, log_path=None,\n                 with_sudo=False):\n        \"\"\"\n        Initialize a PBSSnapUtils object with the arguments specified\n\n        :param out_dir: path to the directory where snapshot will be created\n        :type out_dir: str\n        :param basic: only capture basic PBS configuration & state data?\n        :type basic: bool\n        :param acct_logs: number of accounting logs to capture\n        :type acct_logs: int or None\n        :param daemon_logs: number of daemon logs to capture\n        :type daemon_logs: int or None\n        :param create_tar: Create a tarball of the output snapshot?\n        :type create_tar: bool or None\n        :param log_path: Path to pbs_snapshot's log file\n        :type log_path: str or None\n        :param with_sudo: Capture relevant information with sudo?\n        :type with_sudo: bool\n        \"\"\"\n        self.logger = logging.getLogger(__name__)\n        self.du = DshUtils()\n        self.basic = basic\n        self.server_info = {}\n        self.job_info = {}\n        self.node_info = {}\n        self.comm_info = {}\n        self.hook_info = {}\n        self.sched_info = {}\n        self.resv_info = {}\n        self.sys_info = {}\n        self.core_info = {}\n        self.all_hosts = []\n        
self.server = None\n        self.mom = None\n        self.comm = None\n        self.scheduler = None\n        self.log_utils = PBSLogUtils()\n        self.outtar_path = None\n        self.outtar_fd = None\n        self.create_tar = create_tar\n        self.snapshot_name = None\n        self.with_sudo = with_sudo\n        self.log_path = log_path\n        self.server_up = False\n        self.server_info_avail = False\n        self.mom_info_avail = False\n        self.comm_info_avail = False\n        self.sched_info_avail = False\n        if self.log_path is not None:\n            self.log_filename = os.path.basename(self.log_path)\n        else:\n            self.log_filename = None\n        self.capture_core_files = True\n\n        filecmd = \"file\"\n        self.filecmd = self.du.which(exe=filecmd)\n        # du.which returns the input cmd name if it can't find the cmd\n        if self.filecmd == filecmd:\n            self.capture_core_files = False\n            self.logger.info(\"Warning: file command not found, \"\n                             \"can't capture traces from any core files\")\n\n        # finalize() is called by the context's __exit__() automatically\n        # however, finalize() is non-reentrant, so set a flag to keep\n        # track of whether it has been called or not.\n        self.finalized = False\n\n        # Parse the input arguments\n        timestamp_str = time.strftime(\"%Y%m%d_%H_%M_%S\")\n        self.snapshot_name = \"snapshot_\" + timestamp_str\n        # Make sure that the target directory exists\n        dir_path = os.path.abspath(out_dir)\n        if not os.path.isdir(dir_path):\n            raise ValueError(\"Target directory either doesn't exist \" +\n                             \"or is not accessible. 
Quitting.\")\n        self.snapdir = os.path.join(dir_path, self.snapshot_name)\n        self.num_acct_logs = int(acct_logs) if acct_logs is not None else 0\n        if daemon_logs is not None:\n            self.num_daemon_logs = int(daemon_logs)\n        else:\n            self.num_daemon_logs = 0\n\n        # Check which of the PBS daemons' information is available\n        self.server = Server()\n        self.scheduler = None\n        daemon_status = self.server.pi.status()\n        if len(daemon_status) > 0 and daemon_status['rc'] == 0 and \\\n                len(daemon_status['err']) == 0:\n            for d_stat in daemon_status['out']:\n                if d_stat.startswith(\"pbs_server\"):\n                    self.server_info_avail = True\n                    if \"not running\" not in d_stat:\n                        self.server_up = True\n                elif d_stat.startswith(\"pbs_sched\"):\n                    self.sched_info_avail = True\n                    self.scheduler = Scheduler(server=self.server)\n                elif d_stat.startswith(\"pbs_mom\"):\n                    self.mom_info_avail = True\n                elif d_stat.startswith(\"pbs_comm\"):\n                    self.comm_info_avail = True\n        self.custom_rscs = None\n        if self.server_up:\n            self.custom_rscs = self.server.parse_resources()\n\n        # Store paths to PBS_HOME and PBS_EXEC\n        self.pbs_home = self.server.pbs_conf[\"PBS_HOME\"]\n        self.pbs_exec = self.server.pbs_conf[\"PBS_EXEC\"]\n\n        # If output needs to be a tarball, create the tarfile name\n        # tarfile name = <output directory name>.tgz\n        self.outtar_path = self.snapdir + \".tgz\"\n\n        # Set up some infrastructure\n        self.__init_cmd_path_map()\n\n        # Create the snapshot directory tree\n        self.__initialize_snapshot()\n\n    def __init_cmd_path_map(self):\n        \"\"\"\n        Fill in various dicts which map the commands used for capturing\n  
      various classes of outputs along with the paths to the files where\n        they will be stored inside the snapshot as a tuple.\n        \"\"\"\n        if self.server_up:\n            # Server information\n            value = (QSTAT_BF_PATH, [QSTAT_CMD, \"-Bf\"])\n            self.server_info[QSTAT_BF_OUT] = value\n            value = (QSTAT_QF_PATH, [QSTAT_CMD, \"-Qf\"])\n            self.server_info[QSTAT_QF_OUT] = value\n            if not self.basic:\n                value = (QSTAT_B_PATH, [QSTAT_CMD, \"-B\"])\n                self.server_info[QSTAT_B_OUT] = value\n                value = (QMGR_PS_PATH, [QMGR_CMD, \"-c\", \"p s\"])\n                self.server_info[QMGR_PS_OUT] = value\n                value = (QSTAT_Q_PATH, [QSTAT_CMD, \"-Q\"])\n                self.server_info[QSTAT_Q_OUT] = value\n                value = (QMGR_PR_PATH, [QMGR_CMD, \"-c\", \"p r\"])\n                self.server_info[QMGR_PR_OUT] = value\n                value = (QMGR_PQ_PATH, [QMGR_CMD, \"-c\", \"p q @default\"])\n                self.server_info[QMGR_PQ_OUT] = value\n\n            # Job information\n            value = (QSTAT_F_PATH, [QSTAT_CMD, \"-f\"])\n            self.job_info[QSTAT_F_OUT] = value\n            value = (QSTAT_TF_PATH, [QSTAT_CMD, \"-tf\"])\n            self.job_info[QSTAT_TF_OUT] = value\n            if not self.basic:\n                value = (QSTAT_PATH, [QSTAT_CMD])\n                self.job_info[QSTAT_OUT] = value\n                value = (QSTAT_T_PATH, [QSTAT_CMD, \"-t\"])\n                self.job_info[QSTAT_T_OUT] = value\n                value = (QSTAT_X_PATH, [QSTAT_CMD, \"-x\"])\n                self.job_info[QSTAT_X_OUT] = value\n                value = (QSTAT_XF_PATH, [QSTAT_CMD, \"-xf\"])\n                self.job_info[QSTAT_XF_OUT] = value\n                value = (QSTAT_NS_PATH, [QSTAT_CMD, \"-ns\"])\n                self.job_info[QSTAT_NS_OUT] = value\n                value = (QSTAT_FX_DSV_PATH, [QSTAT_CMD, \"-fx\", \"-F\", 
\"dsv\"])\n                self.job_info[QSTAT_FX_DSV_OUT] = value\n                value = (QSTAT_F_DSV_PATH, [QSTAT_CMD, \"-f\", \"-F\", \"dsv\"])\n                self.job_info[QSTAT_F_DSV_OUT] = value\n                value = (QSTAT_F_JSON_PATH, [QSTAT_CMD, \"-f\", \"-F\", \"json\"])\n                self.job_info[QSTAT_F_JSON_OUT] = value\n\n            # Node information\n            value = (PBSNODES_VA_PATH, [PBSNODES_CMD, \"-va\"])\n            self.node_info[PBSNODES_VA_OUT] = value\n            if not self.basic:\n                value = (PBSNODES_A_PATH, [PBSNODES_CMD, \"-a\"])\n                self.node_info[PBSNODES_A_OUT] = value\n                value = (PBSNODES_AVSJ_PATH, [PBSNODES_CMD, \"-avSj\"])\n                self.node_info[PBSNODES_AVSJ_OUT] = value\n                value = (PBSNODES_ASJ_PATH, [PBSNODES_CMD, \"-aSj\"])\n                self.node_info[PBSNODES_ASJ_OUT] = value\n                value = (PBSNODES_AVS_PATH, [PBSNODES_CMD, \"-avS\"])\n                self.node_info[PBSNODES_AVS_OUT] = value\n                value = (PBSNODES_AS_PATH, [PBSNODES_CMD, \"-aS\"])\n                self.node_info[PBSNODES_AS_OUT] = value\n                value = (PBSNODES_AFDSV_PATH, [PBSNODES_CMD, \"-aFdsv\"])\n                self.node_info[PBSNODES_AFDSV_OUT] = value\n                value = (PBSNODES_AVFDSV_PATH, [PBSNODES_CMD, \"-avFdsv\"])\n                self.node_info[PBSNODES_AVFDSV_OUT] = value\n                value = (PBSNODES_AVFJSON_PATH, [PBSNODES_CMD, \"-avFjson\"])\n                self.node_info[PBSNODES_AVFJSON_OUT] = value\n                value = (QMGR_PN_PATH, [QMGR_CMD, \"-c\", \"p n @default\"])\n                self.node_info[QMGR_PN_OUT] = value\n\n            # Hook information\n            value = (QMGR_LPBSHOOK_PATH, [QMGR_CMD, \"-c\", \"l pbshook\"])\n            self.hook_info[QMGR_LPBSHOOK_OUT] = value\n            if not self.basic:\n                value = (QMGR_PH_PATH, [QMGR_CMD, \"-c\", \"p h @default\"])\n         
       self.hook_info[QMGR_PH_OUT] = value\n\n            # Reservation information\n            value = (PBS_RSTAT_F_PATH, [PBS_RSTAT_CMD, \"-f\"])\n            self.resv_info[PBS_RSTAT_F_OUT] = value\n            if not self.basic:\n                value = (PBS_RSTAT_PATH, [PBS_RSTAT_CMD])\n                self.resv_info[PBS_RSTAT_OUT] = value\n\n            # Scheduler information\n            value = (QMGR_LSCHED_PATH, [QMGR_CMD, \"-c\", \"l sched\"])\n            self.sched_info[QMGR_LSCHED_OUT] = value\n            if not self.basic:\n                value = (QMGR_PSCHED_PATH, [QMGR_CMD, \"-c\", \"p sched\"])\n                self.sched_info[QMGR_PSCHED_OUT] = value\n\n        if self.server_info_avail:\n            # Server priv and logs\n            value = (SVR_PRIV_PATH, None)\n            self.server_info[SVR_PRIV] = value\n            value = (SVR_LOGS_PATH, None)\n            self.server_info[SVR_LOGS] = value\n            value = (ACCT_LOGS_PATH, None)\n            self.server_info[ACCT_LOGS] = value\n\n            # Core file information\n            value = (CORE_SERVER_PATH, None)\n            self.core_info[CORE_SERVER] = value\n\n        if self.mom_info_avail:\n            # Mom priv and logs\n            value = (MOM_PRIV_PATH, None)\n            self.node_info[MOM_PRIV] = value\n            value = (MOM_LOGS_PATH, None)\n            self.node_info[MOM_LOGS] = value\n\n            # Core file information\n            value = (CORE_MOM_PATH, None)\n            self.core_info[CORE_MOM] = value\n\n        if self.comm_info_avail:\n            # Comm information\n            value = (COMM_LOGS_PATH, None)\n            self.comm_info[COMM_LOGS] = value\n\n        if self.sched_info_avail:\n            # Scheduler logs and priv\n            value = (DFLT_SCHED_PRIV_PATH, None)\n            self.sched_info[SCHED_PRIV] = value\n            value = (DFLT_SCHED_LOGS_PATH, None)\n            self.sched_info[SCHED_LOGS] = value\n\n            # Core file 
information\n            value = (CORE_SCHED_PATH, None)\n            self.core_info[CORE_SCHED] = value\n\n        # System information\n        if not self.basic:\n            value = (PBS_PROBE_PATH, [PBS_PROBE_CMD, \"-v\"])\n            self.sys_info[PBS_PROBE_OUT] = value\n            # We'll append hostname to this later (see capture_system_info)\n            value = (PBS_HOSTN_PATH, [PBS_HOSTN_CMD, \"-v\"])\n            self.sys_info[PBS_HOSTN_OUT] = value\n            value = (PBS_ENV_PATH, None)\n            self.sys_info[PBS_ENVIRONMENT] = value\n            value = (OS_PATH, None)\n            self.sys_info[OS_INFO] = value\n            value = (PROCESS_PATH, [\"ps\", \"aux\", \"|\", \"grep\", \"[p]bs\"])\n            self.sys_info[PROCESS_INFO] = value\n            value = (ETC_HOSTS_PATH,\n                     [\"cat\", os.path.join(os.sep, \"etc\", \"hosts\")])\n            self.sys_info[ETC_HOSTS] = value\n            value = (ETC_NSSWITCH_PATH,\n                     [\"cat\", os.path.join(os.sep, \"etc\", \"nsswitch.conf\")])\n            self.sys_info[ETC_NSSWITCH_CONF] = value\n            value = (LSOF_PBS_PATH, [\"lsof\", \"|\", \"grep\", \"[p]bs\"])\n            self.sys_info[LSOF_PBS_OUT] = value\n            value = (VMSTAT_PATH, [\"vmstat\"])\n            self.sys_info[VMSTAT_OUT] = value\n            value = (DF_H_PATH, [\"df\", \"-h\"])\n            self.sys_info[DF_H_OUT] = value\n            value = (DMESG_PATH, [\"dmesg\", \"-T\"])\n            self.sys_info[DMESG_OUT] = value\n            value = (PS_LEAF_PATH, [\"ps\", \"-leaf\"])\n            self.sys_info[PS_LEAF_OUT] = value\n\n    def __initialize_snapshot(self):\n        \"\"\"\n        Create a snapshot directory along with the directory structure\n        Also create a tarfile and add the snapshot dir if create_tar is True\n        \"\"\"\n\n        os.mkdir(self.snapdir)\n\n        if self.create_tar:\n            self.outtar_fd = tarfile.open(self.outtar_path, \"w:gz\")\n\n   
     dirs_in_snapshot = [SYS_DIR, CORE_DIR]\n        if self.server_up:\n            dirs_in_snapshot.extend([SERVER_DIR, JOB_DIR, HOOK_DIR, RESV_DIR,\n                                     NODE_DIR, SCHED_DIR])\n        if self.server_info_avail:\n            dirs_in_snapshot.extend([SVR_PRIV_PATH, SVR_LOGS_PATH,\n                                     ACCT_LOGS_PATH, DATASTORE_DIR,\n                                     PG_LOGS_PATH])\n        if self.mom_info_avail:\n            dirs_in_snapshot.extend([MOM_PRIV_PATH, MOM_LOGS_PATH])\n        if self.comm_info_avail:\n            dirs_in_snapshot.append(COMM_LOGS_PATH)\n        if self.sched_info_avail:\n            dirs_in_snapshot.extend([DFLT_SCHED_LOGS_PATH,\n                                     DFLT_SCHED_PRIV_PATH])\n\n        for item in dirs_in_snapshot:\n            rel_path = os.path.join(self.snapdir, item)\n            os.makedirs(rel_path, 0o755)\n\n    def __capture_cmd_output(self, out_path, cmd, as_script=False,\n                             ret_out=False, sudo=False):\n        \"\"\"\n        Run a command and capture its output\n\n        :param out_path: path of the output file for this command\n        :type out_path: str\n        :param cmd: The command to execute\n        :type cmd: list\n        :param as_script: Passed to run_cmd()\n        :type as_script: bool\n        :param ret_out: Return output of the command?\n        :type ret_out: bool\n        :param sudo: Run the command with sudo?\n        :type sudo: bool\n        \"\"\"\n        retstr = None\n\n        with open(out_path, \"a+\") as out_fd:\n            try:\n                self.du.run_cmd(cmd=cmd, stdout=out_fd,\n                                sudo=sudo, as_script=as_script)\n                if ret_out:\n                    out_fd.seek(0, 0)\n                    retstr = out_fd.read()\n            except OSError as e:\n                # This usually happens when the command is not found\n                # Just log and return\n                self.logger.error(str(e))\n                return\n\n     
   if self.create_tar:\n            self.__add_to_archive(out_path)\n\n        if ret_out:\n            return retstr\n\n    @staticmethod\n    def __convert_flag_to_numeric(flag):\n        \"\"\"\n        Convert a resource's flag attribute to its numeric equivalent\n\n        :param flag: the resource flag to convert\n        :type flag: string\n\n        :returns: numeric value of the resource flag\n        \"\"\"\n        # Variable assignments below mirror definitions\n        # from src/include/pbs_internal.h\n        ATR_DFLAG_USRD = 0x01\n        ATR_DFLAG_USWR = 0x02\n        ATR_DFLAG_OPRD = 0x04\n        ATR_DFLAG_OPWR = 0x08\n        ATR_DFLAG_MGRD = 0x10\n        ATR_DFLAG_MGWR = 0x20\n        ATR_DFLAG_MOM = 0x400\n        ATR_DFLAG_RASSN = 0x4000\n        ATR_DFLAG_ANASSN = 0x8000\n        ATR_DFLAG_FNASSN = 0x10000\n        ATR_DFLAG_CVTSLT = 0x20000\n\n        NO_USER_SET = (ATR_DFLAG_USRD | ATR_DFLAG_OPRD | ATR_DFLAG_MGRD |\n                       ATR_DFLAG_OPWR | ATR_DFLAG_MGWR)\n        READ_WRITE = (ATR_DFLAG_USRD | ATR_DFLAG_OPRD | ATR_DFLAG_MGRD |\n                      ATR_DFLAG_USWR | ATR_DFLAG_OPWR | ATR_DFLAG_MGWR)\n\n        resc_flag = READ_WRITE\n        if \"q\" in flag:\n            resc_flag |= ATR_DFLAG_RASSN\n        if \"f\" in flag:\n            resc_flag |= ATR_DFLAG_FNASSN\n        if \"n\" in flag:\n            resc_flag |= ATR_DFLAG_ANASSN\n        if \"h\" in flag:\n            resc_flag |= ATR_DFLAG_CVTSLT\n        if \"m\" in flag:\n            resc_flag |= ATR_DFLAG_MOM\n        if \"r\" in flag:\n            resc_flag &= ~READ_WRITE\n            resc_flag |= NO_USER_SET\n        if \"i\" in flag:\n            resc_flag &= ~READ_WRITE\n            resc_flag |= (ATR_DFLAG_OPRD | ATR_DFLAG_OPWR |\n                          ATR_DFLAG_MGRD | ATR_DFLAG_MGWR)\n        return resc_flag\n\n    @staticmethod\n    def __convert_type_to_numeric(attr_type):\n        \"\"\"\n        Convert a resource's type attribute to its numeric 
equivalent\n\n        :param attr_type: the type to convert\n        :type attr_type: string\n\n        :returns: Numeric equivalent of attr_type\n        \"\"\"\n        PBS_ATTR_TYPE_TO_INT = {\n            \"long\": 1,\n            \"string\": 3,\n            \"string_array\": 4,\n            \"size\": 5,\n            \"boolean\": 11,\n            \"float\": 14,\n        }\n\n        return PBS_ATTR_TYPE_TO_INT[attr_type.strip()]\n\n    def __capture_trace_from_core(self, core_file_name, exec_path, out_path):\n        \"\"\"\n        Capture stack trace from the core file specified\n\n        :param core_file_name: name of the core file\n        :type core_file_name: str\n        :param exec_path: path to the executable which generated the core\n        :type exec_path: str\n        :param out_path: file to print the trace out to\n        :type out_path: str\n        \"\"\"\n        self.logger.info(\"capturing stack trace from core file \" +\n                         core_file_name)\n\n        # Create a gdb-python script to capture backtrace from core\n        gdb_python = \"\"\"\nimport gdb\ngdb.execute(\"file %s\")\ngdb.execute(\"core %s\")\no = gdb.execute(\"thread apply all bt\", to_string=True)\nprint(o)\ngdb.execute(\"quit\")\nquit()\n        \"\"\" % (exec_path, core_file_name)\n        # Remove tabs from triple quoted strings\n        gdb_python = gdb_python.replace(\"\\t\", \"\")\n\n        # Write the gdb-python script in a temporary file\n        fn = self.du.create_temp_file(body=gdb_python)\n\n        # Capture the stack trace using gdb\n        gdb_cmd = [\"gdb\", \"-P\", fn]\n        with open(out_path, \"w\") as outfd:\n            self.du.run_cmd(cmd=gdb_cmd, stdout=outfd, stderr=STDOUT,\n                            sudo=self.with_sudo)\n\n        # Remove the temp file\n        os.remove(fn)\n\n        if self.create_tar:\n            self.__add_to_archive(out_path)\n\n    def __capture_logs(self, pbs_logdir, snap_logdir, num_days_logs,\n     
                  sudo=False):\n        \"\"\"\n        Capture specific logs for the days mentioned\n\n        :param pbs_logdir: path to the PBS logs directory (source)\n        :type pbs_logdir: str\n        :param snap_logdir: path to the snapshot logs directory (destination)\n        :type snap_logdir: str\n        :param num_days_logs: Number of days of logs to capture\n        :type num_days_logs: int\n        :param sudo: copy logs with sudo?\n        :type sudo: bool\n        \"\"\"\n\n        if num_days_logs < 1:\n            self.logger.debug(\"Number of days of logs < 1, skipping\")\n            return\n\n        end_time = self.server.ctime\n        start_time = end_time - ((num_days_logs - 1) * 24 * 60 * 60)\n\n        # Get the list of log file names to capture\n        pbs_logfiles = self.log_utils.get_log_files(self.server.hostname,\n                                                    pbs_logdir, start_time,\n                                                    end_time, sudo)\n        if len(pbs_logfiles) == 0:\n            self.logger.debug(pbs_logdir + \" not found/accessible\")\n            return\n\n        self.logger.debug(\"Capturing \" + str(num_days_logs) +\n                          \" days of logs from \" + pbs_logdir)\n\n        # Make sure that the target log dir exists\n        if not os.path.isdir(snap_logdir):\n            os.makedirs(snap_logdir)\n\n        # Go over the list and copy over each log file\n        for pbs_logfile in pbs_logfiles:\n            snap_logfile = os.path.join(snap_logdir,\n                                        os.path.basename(pbs_logfile))\n            self.du.run_copy(src=pbs_logfile, dest=snap_logfile,\n                             recursive=False,\n                             preserve_permission=False,\n                             sudo=sudo)\n            if sudo:\n                # Copying files with sudo makes root the owner, set it to the\n                # 
current user\n                self.du.chown(path=snap_logfile, uid=os.getuid(),\n                              gid=os.getgid(), sudo=self.with_sudo)\n\n            if self.create_tar:\n                self.__add_to_archive(snap_logfile)\n\n    def __evaluate_core_file(self, file_path, core_dir):\n        \"\"\"\n        Check whether the specified file is a core dump\n        If yes, capture its stack trace and store it\n\n        :param file_path: path to the file\n        :type file_path: str\n        :param core_dir: path to directory to store core information\n        :type core_dir: str\n\n\n        :returns: True if this was a valid core file, otherwise False\n        \"\"\"\n        if not self.capture_core_files:\n            return False\n\n        if not self.du.isfile(path=file_path, sudo=self.with_sudo):\n            self.logger.debug(\"Could not find file path \" + str(file_path))\n            return False\n\n        # Get the header of this file\n        ret = self.du.run_cmd(cmd=[self.filecmd, file_path],\n                              sudo=self.with_sudo)\n        if ret['err'] is not None and len(ret['err']) != 0:\n            self.logger.error(\n                \"\\'file\\' command failed with error: \" + ret['err'] +\n                \" on file: \" + str(file_path))\n            return False\n\n        file_header = ret[\"out\"][0]\n        if \"core file\" not in file_header:\n            return False\n\n        # Identify the program which created this core file\n        header_list = file_header.split()\n        if \"from\" not in header_list:\n            return False\n        exec_index = header_list.index(\"from\") + 1\n        exec_name = header_list[exec_index].replace(\"\\'\", \"\")\n        exec_name = exec_name.replace(\",\", \"\")\n\n        # Capture the stack trace from this core file\n        filename = os.path.basename(file_path)\n        core_dest = os.path.join(core_dir, filename)\n        if not os.path.isdir(core_dir):\n       
     os.makedirs(core_dir, 0o755)\n        self.__capture_trace_from_core(file_path, exec_name,\n                                       core_dest)\n\n        # Delete the core file itself\n        if os.path.isfile(file_path):\n            os.remove(file_path)\n\n        return True\n\n    def __copy_dir_with_core(self, src_path, dest_path, core_dir,\n                             except_list=None, only_core=False, sudo=False):\n        \"\"\"\n        Copy over a directory recursively which might have core files\n        When a core file is found, capture the stack trace from it\n\n        :param src_path: path of the source directory\n        :type src_path: str\n        :param dest_path: path of the destination directory\n        :type dest_path: str\n        :param core_dir: path to the directory to store core files' trace\n        :type core_dir: str\n        :param except_list: list of files/directories (basenames) to exclude\n        :type except_list: list\n        :param only_core: Copy over only core files?\n        :type only_core: bool\n        :param sudo: Copy with sudo?\n        :type sudo: bool\n        \"\"\"\n        if except_list is None:\n            except_list = []\n\n        # This can happen when -o is a path that we are capturing\n        # Just return success\n        if os.path.basename(src_path) == self.snapshot_name:\n            self.logger.debug(\"src_path %s seems to be snapshot directory, \"\n                              \"ignoring\" % src_path)\n            return\n        dir_list = self.du.listdir(path=src_path, fullpath=False,\n                                   sudo=sudo)\n\n        if dir_list is None:\n            self.logger.info(\"Can't find/access \" + src_path)\n            return\n\n        # Go over the list and copy over everything\n        # If we find a core file, we'll store backtrace from it inside\n        # core_file_bt\n        for item in dir_list:\n            if item in except_list:\n                
continue\n\n            item_src_path = os.path.join(src_path, item)\n            if not only_core:\n                item_dest_path = os.path.join(dest_path, item)\n            else:\n                item_dest_path = core_dir\n\n            # We can't directly use 'recursive' argument of run_copy\n            # to copy the entire directory tree as we need to take care\n            # of the 'except_list'. So, we recursively explore the whole\n            # tree and copy over files individually.\n            if self.du.isdir(path=item_src_path, sudo=sudo):\n                # Make sure that the directory exists in the snapshot\n                if not self.du.isdir(path=item_dest_path):\n                    # Create the directory\n                    os.makedirs(item_dest_path, 0o755)\n                # Recursive call to copy contents of the directory\n                self.__copy_dir_with_core(item_src_path, item_dest_path,\n                                          core_dir, except_list, only_core,\n                                          sudo=sudo)\n            else:\n                # Copy the file over\n                try:\n                    self.du.run_copy(src=item_src_path, dest=item_dest_path,\n                                     recursive=False,\n                                     preserve_permission=False,\n                                     level=logging.DEBUG, sudo=sudo)\n                    if sudo:\n                        # Copying files with sudo makes root the owner,\n                        # set it to the current user\n                        self.du.chown(path=item_dest_path, uid=os.getuid(),\n                                      gid=os.getgid(), sudo=self.with_sudo)\n                except OSError:\n                    self.logger.error(\"Could not copy %s\" % item_src_path)\n                    continue\n\n                # Check if this is a core file\n                # If it is then this 
method will capture its stack trace\n                is_core = self.__evaluate_core_file(item_dest_path, core_dir)\n\n                # If it was a core file, then it's already been captured\n                if is_core:\n                    continue\n\n                # If only_core is True and this was not a core file, then we\n                # should delete it\n                if only_core:\n                    os.remove(item_dest_path)\n                else:\n                    # This was not a core file, and 'only_core' is not True\n                    # So, we need to capture this file\n                    if self.create_tar:\n                        self.__add_to_archive(item_dest_path)\n\n    def __capture_mom_priv(self):\n        \"\"\"\n        Capture mom_priv information\n        \"\"\"\n        pbs_home = self.pbs_home\n        pbs_mom_priv = os.path.join(pbs_home, \"mom_priv\")\n        snap_mom_priv = os.path.join(self.snapdir, MOM_PRIV_PATH)\n        core_dir = os.path.join(self.snapdir, CORE_MOM_PATH)\n        self.__copy_dir_with_core(pbs_mom_priv, snap_mom_priv, core_dir,\n                                  sudo=self.with_sudo)\n\n    def __add_to_archive(self, dest_path, src_path=None):\n        \"\"\"\n        Add a file to the output tarball and delete the original file\n\n        :param dest_path: path to the file inside the target tarball\n        :type dest_path: str\n        :param src_path: path to the file to add, if different than dest_path\n        :type src_path: str\n        \"\"\"\n        if src_path is None:\n            src_path = dest_path\n\n        self.logger.debug(\"Adding \" + src_path + \" to tarball \" +\n                          self.outtar_path)\n\n        # Add file to tar\n        dest_relpath = os.path.relpath(dest_path, self.snapdir)\n        path_in_tar = os.path.join(self.snapshot_name, dest_relpath)\n        try:\n            self.outtar_fd.add(src_path, arcname=path_in_tar)\n\n            # Remove original 
file\n            os.remove(src_path)\n        except OSError:\n            self.logger.error(\n                \"File %s could not be added to tarball\" % (src_path))\n\n    def __capture_svr_logs(self):\n        \"\"\"\n        Capture server logs\n        \"\"\"\n        pbs_logdir = os.path.join(self.pbs_home, \"server_logs\")\n        snap_logdir = os.path.join(self.snapdir, SVR_LOGS_PATH)\n        self.__capture_logs(pbs_logdir, snap_logdir, self.num_daemon_logs)\n\n    def __capture_acct_logs(self):\n        \"\"\"\n        Capture accounting logs\n        \"\"\"\n        pbs_logdir = os.path.join(self.pbs_home, \"server_priv\", \"accounting\")\n        snap_logdir = os.path.join(self.snapdir, ACCT_LOGS_PATH)\n        self.__capture_logs(pbs_logdir, snap_logdir, self.num_acct_logs,\n                            sudo=self.with_sudo)\n\n    def __capture_sched_logs(self, pbs_logdir, snap_logdir):\n        \"\"\"\n        Capture scheduler logs\n        \"\"\"\n        self.__capture_logs(pbs_logdir, snap_logdir, self.num_daemon_logs)\n\n    def __capture_mom_logs(self):\n        \"\"\"\n        Capture mom logs\n        \"\"\"\n        pbs_home = self.pbs_home\n        pbs_logdir = os.path.join(pbs_home, \"mom_logs\")\n        snap_logdir = os.path.join(self.snapdir, MOM_LOGS_PATH)\n        self.__capture_logs(pbs_logdir, snap_logdir, self.num_daemon_logs)\n\n    def __capture_comm_logs(self):\n        \"\"\"\n        Capture pbs_comm logs\n        \"\"\"\n        pbs_home = self.pbs_home\n        pbs_logdir = os.path.join(pbs_home, \"comm_logs\")\n        snap_logdir = os.path.join(self.snapdir, COMM_LOGS_PATH)\n        self.__capture_logs(pbs_logdir, snap_logdir, self.num_daemon_logs)\n\n    def capture_server(self, with_svr_logs=False, with_acct_logs=False):\n        \"\"\"\n        Capture PBS server specific information\n\n        :param with_svr_logs: capture server logs as well?\n        :type with_svr_logs: bool\n        :param with_acct_logs: capture 
accounting logs as well?\n        :type with_acct_logs: bool\n\n        :returns: name of the output directory/tarfile containing the snapshot\n        \"\"\"\n        self.logger.info(\"capturing server information\")\n\n        if self.server_up:\n            # Go through 'server_info' and capture info that depends on\n            # commands\n            for (path, cmd_list) in self.server_info.values():\n                if cmd_list is None:\n                    continue\n                cmd_list_cpy = list(cmd_list)\n\n                # Add the path to PBS_EXEC to the command path\n                # The command path is the first entry in command list\n                cmd_list_cpy[0] = os.path.join(self.pbs_exec, cmd_list[0])\n                snap_path = os.path.join(self.snapdir, path)\n                self.__capture_cmd_output(snap_path, cmd_list_cpy,\n                                          sudo=self.with_sudo)\n\n        if self.server_info_avail:\n            if self.basic:\n                # Only copy over the resourcedef file\n                snap_rscdef = os.path.join(self.snapdir, RSCDEF_PATH)\n                pbs_rscdef = os.path.join(self.pbs_home, RSCDEF_PATH)\n                self.du.run_copy(src=pbs_rscdef, dest=snap_rscdef,\n                                 recursive=False,\n                                 preserve_permission=False,\n                                 level=logging.DEBUG, sudo=self.with_sudo)\n                if self.create_tar:\n                    self.__add_to_archive(snap_rscdef)\n\n            else:\n                # Copy over 'server_priv', everything except accounting logs\n                snap_server_priv = os.path.join(self.snapdir, SVR_PRIV_PATH)\n                pbs_server_priv = os.path.join(self.pbs_home, \"server_priv\")\n                core_dir = os.path.join(self.snapdir, CORE_SERVER_PATH)\n                exclude_list = [\"accounting\"]\n                self.__copy_dir_with_core(pbs_server_priv,\n                
                          snap_server_priv, core_dir,\n                                          exclude_list, sudo=self.with_sudo)\n\n            if with_svr_logs and self.num_daemon_logs > 0:\n                # Capture server logs\n                self.__capture_svr_logs()\n\n            if with_acct_logs and self.num_acct_logs > 0:\n                # Capture accounting logs\n                self.__capture_acct_logs()\n\n        if self.create_tar:\n            return self.outtar_path\n        else:\n            return self.snapdir\n\n    def capture_jobs(self):\n        \"\"\"\n        Capture information related to jobs\n\n        :returns: name of the output directory/tarfile containing the snapshot\n        \"\"\"\n        self.logger.info(\"capturing jobs information\")\n\n        if self.server_up:\n            # Go through 'job_info' and capture info that depends on commands\n            for (path, cmd_list) in self.job_info.values():\n                cmd_list_cpy = list(cmd_list)\n\n                # Add the path to PBS_EXEC to the command path\n                # The command path is the first entry in command list\n                cmd_list_cpy[0] = os.path.join(self.pbs_exec, cmd_list[0])\n                snap_path = os.path.join(self.snapdir, path)\n                self.__capture_cmd_output(snap_path, cmd_list_cpy,\n                                          sudo=self.with_sudo)\n\n        if self.create_tar:\n            return self.outtar_path\n        else:\n            return self.snapdir\n\n    def capture_nodes(self, with_mom_logs=False):\n        \"\"\"\n        Capture information related to nodes & mom along with mom logs\n\n        :param with_mom_logs: Capture mom logs?\n        :type with_mom_logs: bool\n\n        :returns: name of the output directory/tarfile containing the snapshot\n        \"\"\"\n        self.logger.info(\"capturing nodes & mom information\")\n\n        if self.server_up:\n            # Go through 'node_info' and capture 
info that depends on commands\n            for (path, cmd_list) in self.node_info.values():\n                if cmd_list is None:\n                    continue\n                cmd_list_cpy = list(cmd_list)\n\n                # Add the path to PBS_EXEC to the command path\n                # The command path is the first entry in command list\n                cmd_list_cpy[0] = os.path.join(self.pbs_exec, cmd_list[0])\n                snap_path = os.path.join(self.snapdir, path)\n                self.__capture_cmd_output(snap_path, cmd_list_cpy,\n                                          sudo=self.with_sudo)\n\n        # Collect mom logs and priv\n        if self.mom_info_avail:\n            if not self.basic:\n                # Capture mom_priv info\n                self.__capture_mom_priv()\n\n            if with_mom_logs and self.num_daemon_logs > 0:\n                # Capture mom_logs\n                self.__capture_mom_logs()\n\n        if self.create_tar:\n            return self.outtar_path\n        else:\n            return self.snapdir\n\n    def capture_comms(self, with_comm_logs=False):\n        \"\"\"\n        Capture comm-related information\n\n        :param with_comm_logs: Capture comm logs?\n        :type with_comm_logs: bool\n\n        :returns: name of the output directory/tarfile containing the snapshot\n        \"\"\"\n        self.logger.info(\"capturing comm information\")\n\n        # Capture comm logs\n        if self.comm_info_avail:\n            if self.num_daemon_logs > 0 and with_comm_logs:\n                self.__capture_comm_logs()\n\n        # If not already capturing server information, copy over server_priv\n        # as pbs_comm runs out of it\n        if not self.server_info_avail and not self.basic:\n            pbs_server_priv = os.path.join(self.pbs_home, \"server_priv\")\n            snap_server_priv = os.path.join(self.snapdir, SVR_PRIV_PATH)\n            core_dir = os.path.join(self.snapdir, CORE_SERVER_PATH)\n            exclude_list = [\"accounting\"]\n            self.__copy_dir_with_core(pbs_server_priv,\n     
                                 snap_server_priv, core_dir, exclude_list,\n                                      sudo=self.with_sudo)\n        if self.create_tar:\n            return self.outtar_path\n        else:\n            return self.snapdir\n\n    def capture_scheduler(self, with_sched_logs=False):\n        \"\"\"\n        Capture information related to the scheduler\n\n        :param with_sched_logs: Capture scheduler logs?\n        :type with_sched_logs: bool\n\n        :returns: name of the output directory/tarfile containing the snapshot\n        \"\"\"\n        self.logger.info(\"capturing scheduler information\")\n\n        qmgr_lsched = None\n        if self.server_up:\n            # Go through 'sched_info' and capture info that depends on commands\n            for (path, cmd_list) in self.sched_info.values():\n                if cmd_list is None:\n                    continue\n                cmd_list_cpy = list(cmd_list)\n\n                # Add the path to PBS_EXEC to the command path\n                # The command path is the first entry in command list\n                cmd_list_cpy[0] = os.path.join(self.pbs_exec, cmd_list[0])\n                snap_path = os.path.join(self.snapdir, path)\n                if \"l sched\" in cmd_list_cpy:\n                    qmgr_lsched = self.__capture_cmd_output(snap_path,\n                                                            cmd_list_cpy,\n                                                            ret_out=True)\n                else:\n                    self.__capture_cmd_output(snap_path, cmd_list_cpy,\n                                              sudo=self.with_sudo)\n\n        # Capture sched_priv & sched_logs for all schedulers\n        if qmgr_lsched is not None and self.sched_info_avail:\n            sched_details = {}\n            sched_name = None\n            for line in qmgr_lsched.splitlines():\n                if line.startswith(\"Sched \"):\n                    sched_name = 
line.split(\"Sched \")[1]\n                    sched_name = \"\".join(sched_name.split())\n                    sched_details[sched_name] = {}\n                    continue\n                if sched_name is not None:\n                    line = \"\".join(line.split())\n                    if line.startswith(\"sched_priv=\"):\n                        sched_details[sched_name][\"sched_priv\"] = \\\n                            line.split(\"=\")[1]\n                    elif line.startswith(\"sched_log=\"):\n                        sched_details[sched_name][\"sched_log\"] = \\\n                            line.split(\"=\")[1]\n\n            for sched_name in sched_details:\n                pbs_sched_priv = None\n                # Capture sched_priv for the scheduler\n                if len(sched_details) == 1:  # For pre-multisched outputs\n                    pbs_sched_priv = os.path.join(self.pbs_home, \"sched_priv\")\n                elif \"sched_priv\" in sched_details[sched_name]:\n                    pbs_sched_priv = sched_details[sched_name][\"sched_priv\"]\n                if sched_name == \"default\" or len(sched_details) == 1:\n                    snap_sched_priv = os.path.join(self.snapdir,\n                                                   DFLT_SCHED_PRIV_PATH)\n                    core_dir = os.path.join(self.snapdir, CORE_SCHED_PATH)\n                else:\n                    dirname = DFLT_SCHED_PRIV_PATH + \"_\" + sched_name\n                    coredirname = CORE_SCHED_PATH + \"_\" + sched_name\n                    snap_sched_priv = os.path.join(self.snapdir, dirname)\n                    os.makedirs(snap_sched_priv, 0o755)\n                    core_dir = os.path.join(self.snapdir, coredirname)\n\n                if pbs_sched_priv and os.path.isdir(pbs_sched_priv):\n                    self.__copy_dir_with_core(pbs_sched_priv,\n                                              snap_sched_priv, core_dir,\n                                              
sudo=self.with_sudo)\n                if with_sched_logs and self.num_daemon_logs > 0:\n                    pbs_sched_log = None\n                    # Capture scheduler logs\n                    if len(sched_details) == 1:  # For pre-multisched outputs\n                        pbs_sched_log = os.path.join(self.pbs_home,\n                                                     \"sched_logs\")\n                    elif \"sched_log\" in sched_details[sched_name]:\n                        pbs_sched_log = sched_details[sched_name][\"sched_log\"]\n                    if sched_name == \"default\" or len(sched_details) == 1:\n                        snap_sched_log = os.path.join(self.snapdir,\n                                                      DFLT_SCHED_LOGS_PATH)\n                    else:\n                        dirname = DFLT_SCHED_LOGS_PATH + \"_\" + sched_name\n                        snap_sched_log = os.path.join(self.snapdir, dirname)\n                        os.makedirs(snap_sched_log, 0o755)\n\n                    if pbs_sched_log and os.path.isdir(pbs_sched_log):\n                        self.__capture_sched_logs(pbs_sched_log,\n                                                  snap_sched_log)\n\n        elif self.sched_info_avail:\n            # We don't know about other multi-scheds,\n            # but can still capture the default sched's logs & priv\n            pbs_sched_priv = os.path.join(self.pbs_home, \"sched_priv\")\n            snap_sched_priv = os.path.join(self.snapdir,\n                                           DFLT_SCHED_PRIV_PATH)\n            core_dir = os.path.join(self.snapdir, CORE_SCHED_PATH)\n            self.__copy_dir_with_core(pbs_sched_priv,\n                                      snap_sched_priv, core_dir,\n                                      sudo=self.with_sudo)\n            if with_sched_logs and self.num_daemon_logs > 0:\n                pbs_sched_log = os.path.join(self.pbs_home,\n                                             
\"sched_logs\")\n                snap_sched_log = os.path.join(self.snapdir,\n                                              DFLT_SCHED_LOGS_PATH)\n                self.__capture_sched_logs(pbs_sched_log, snap_sched_log)\n\n        if self.create_tar:\n            return self.outtar_path\n        else:\n            return self.snapdir\n\n    def capture_hooks(self):\n        \"\"\"\n        Capture information related to hooks\n\n        :returns: name of the output directory/tarfile containing the snapshot\n        \"\"\"\n        self.logger.info(\"capturing hooks information\")\n\n        # Go through 'hook_info' and capture info that depends on commands\n        for (path, cmd_list) in self.hook_info.values():\n            if cmd_list is None:\n                continue\n            cmd_list_cpy = list(cmd_list)\n\n            # Add the path to PBS_EXEC to the command path\n            # The command path is the first entry in command list\n            cmd_list_cpy[0] = os.path.join(self.pbs_exec, cmd_list[0])\n            snap_path = os.path.join(self.snapdir, path)\n            self.__capture_cmd_output(snap_path, cmd_list_cpy,\n                                      sudo=self.with_sudo)\n\n        if self.create_tar:\n            return self.outtar_path\n        else:\n            return self.snapdir\n\n    def capture_reservations(self):\n        \"\"\"\n        Capture information related to reservations\n\n        :returns: name of the output directory/tarfile containing the snapshot\n        \"\"\"\n        self.logger.info(\"capturing reservations information\")\n\n        # Go through 'resv_info' and capture info that depends on commands\n        for (path, cmd_list) in self.resv_info.values():\n            if cmd_list is None:\n                continue\n            cmd_list_cpy = list(cmd_list)\n\n            # Add the path to PBS_EXEC to the command path\n            # The command path is the first entry in command list\n            cmd_list_cpy[0] = 
os.path.join(self.pbs_exec, cmd_list[0])\n            snap_path = os.path.join(self.snapdir, path)\n            self.__capture_cmd_output(snap_path, cmd_list_cpy,\n                                      sudo=self.with_sudo)\n\n        if self.create_tar:\n            return self.outtar_path\n        else:\n            return self.snapdir\n\n    def capture_datastore(self, with_db_logs=False):\n        \"\"\"\n        Capture information related to datastore\n\n        :param with_db_logs: Capture database logs?\n        :type with_db_logs: bool\n\n        :returns: name of the output directory/tarfile containing the snapshot\n        \"\"\"\n        self.logger.info(\"capturing datastore information\")\n\n        if with_db_logs and self.num_daemon_logs > 0:\n            # Capture database logs\n            pbs_logdir = os.path.join(self.pbs_home, PG_LOGS_PATH)\n            snap_logdir = os.path.join(self.snapdir, PG_LOGS_PATH)\n            self.__capture_logs(pbs_logdir, snap_logdir, self.num_daemon_logs,\n                                sudo=self.with_sudo)\n\n        if self.create_tar:\n            return self.outtar_path\n        else:\n            return self.snapdir\n\n    def capture_pbs_conf(self):\n        \"\"\"\n        Capture pbs.conf file\n\n        :returns: name of the output directory/tarfile containing the snapshot\n        \"\"\"\n        # Capture pbs.conf\n        self.logger.info(\"capturing pbs.conf\")\n        snap_confpath = os.path.join(self.snapdir, PBS_CONF_PATH)\n        with open(snap_confpath, \"w\") as fd:\n            for k, v in self.server.pbs_conf.items():\n                fd.write(k + \"=\" + str(v) + \"\\n\")\n\n        if self.create_tar:\n            self.__add_to_archive(snap_confpath)\n            return self.outtar_path\n        else:\n            return self.snapdir\n\n    def capture_system_info(self):\n        \"\"\"\n        Capture system related information\n\n        :returns: name of the output directory/tarfile containing the snapshot\n        \"\"\"\n        self.logger.info(\"capturing system 
information\")\n\n        if self.basic:\n            return\n\n        sudo_cmds = [PBS_PROBE_OUT, LSOF_PBS_OUT, DMESG_OUT]\n        as_script_cmds = [PROCESS_INFO, LSOF_PBS_OUT]\n        pbs_cmds = [PBS_PROBE_OUT, PBS_HOSTN_OUT]\n\n        host_platform = self.du.get_platform()\n        win_platform = False\n        if host_platform.startswith(\"win\"):\n            win_platform = True\n\n        # Capture information that's dependent on commands\n        for (key, values) in self.sys_info.items():\n            sudo = False\n            (path, cmd_list) = values\n            if cmd_list is None:\n                continue\n            # For Windows, only capture PBS commands\n            if win_platform and (key not in pbs_cmds):\n                continue\n\n            cmd_list_cpy = list(cmd_list)\n\n            # Find the full path to the command on the host\n            if key in pbs_cmds:\n                cmd_full = os.path.join(self.pbs_exec, cmd_list_cpy[0])\n            else:\n                cmd_full = self.du.which(exe=cmd_list_cpy[0])\n            # du.which() returns the name of the command passed if\n            # it can't find the command\n            if cmd_full == cmd_list_cpy[0]:\n                continue\n            cmd_list_cpy[0] = cmd_full\n\n            # Handle special commands\n            if \"pbs_hostn\" in cmd_list_cpy[0]:\n                # Append hostname to the command list\n                cmd_list_cpy.append(self.server.hostname)\n            if key in as_script_cmds:\n                as_script = True\n                if key in sudo_cmds and self.with_sudo:\n                    # Because this cmd needs to be run in a script,\n                    # PTL run_cmd's sudo will try to run the script\n                    # itself with sudo, not the cmd\n                    # So, append sudo as a prefix to the cmd instead\n                    cmd_list_cpy[0] = (' '.join(self.du.sudo_cmd) +\n                                       ' ' + 
cmd_list_cpy[0])\n            else:\n                as_script = False\n                if key in sudo_cmds:\n                    sudo = self.with_sudo\n\n            snap_path = os.path.join(self.snapdir, path)\n            self.__capture_cmd_output(snap_path, cmd_list_cpy,\n                                      as_script=as_script, sudo=sudo)\n\n        # Capture platform dependent information\n        if win_platform:\n            # Capture process information using tasklist command\n            cmd = [\"tasklist\", \"/v\"]\n            snap_path = os.path.join(self.snapdir, PROCESS_PATH)\n            self.__capture_cmd_output(snap_path, cmd,\n                                      sudo=self.with_sudo)\n\n        # Capture OS/platform information\n        self.logger.info(\"capturing OS information\")\n        snap_ospath = os.path.join(self.snapdir, OS_PATH)\n        with open(snap_ospath, \"w\") as osfd:\n            osinfo = platform.platform()\n            osfd.write(osinfo + \"\\n\")\n            # If /etc/os-release is available then save that as well\n            fpath = os.path.join(os.sep, \"etc\", \"os-release\")\n            if os.path.isfile(fpath):\n                with open(fpath, \"r\") as fd:\n                    fcontent = fd.read()\n                osfd.write(\"\\n/etc/os-release:\\n\" + fcontent)\n        if self.create_tar:\n            self.__add_to_archive(snap_ospath)\n\n        # Capture pbs_environment\n        self.logger.info(\"capturing pbs_environment\")\n        snap_envpath = os.path.join(self.snapdir, PBS_ENV_PATH)\n        if self.server.pbs_env is not None:\n            with open(snap_envpath, \"w\") as envfd:\n                for k, v in self.server.pbs_env.items():\n                    envfd.write(k + \"=\" + v + \"\\n\")\n        if self.create_tar:\n            self.__add_to_archive(snap_envpath)\n\n        if self.create_tar:\n            return self.outtar_path\n        else:\n            return self.snapdir\n\n    def capture_pbs_logs(self):\n      
  \"\"\"\n        Capture PBS logs from all relevant hosts\n\n        :returns: name of the output directory/tarfile containing the snapshot\n        \"\"\"\n        self.logger.info(\"capturing PBS logs\")\n\n        if self.num_daemon_logs > 0:\n            # Capture server logs\n            if self.server_info_avail:\n                self.__capture_svr_logs()\n\n            # Capture sched logs for all schedulers\n            if self.sched_info_avail:\n                if self.server_up:\n                    sched_info = self.server.status(SCHED)\n                    for sched in sched_info:\n                        sched_name = sched[\"id\"]\n                        pbs_sched_log = sched[\"sched_log\"]\n                        if sched_name != \"default\":\n                            snap_sched_log = DFLT_SCHED_LOGS_PATH + \\\n                                \"_\" + sched[\"id\"]\n                        else:\n                            snap_sched_log = DFLT_SCHED_LOGS_PATH\n                        snap_sched_log = os.path.join(self.snapdir,\n                                                      snap_sched_log)\n                        self.__capture_sched_logs(pbs_sched_log,\n                                                  snap_sched_log)\n                else:\n                    # Capture the default sched's logs\n                    pbs_sched_log = os.path.join(self.pbs_home,\n                                                 \"sched_logs\")\n                    snap_sched_log = os.path.join(self.snapdir,\n                                                  DFLT_SCHED_LOGS_PATH)\n                    self.__capture_sched_logs(pbs_sched_log, snap_sched_log)\n\n            # Capture mom & comm logs\n            if self.mom_info_avail:\n                self.__capture_mom_logs()\n            if self.comm_info_avail:\n                self.__capture_comm_logs()\n\n        if self.num_acct_logs > 0:\n            # Capture accounting logs\n            
self.__capture_acct_logs()\n\n        if self.create_tar:\n            return self.outtar_path\n        else:\n            return self.snapdir\n\n    def capture_all(self):\n        \"\"\"\n        Capture a snapshot from the PBS system\n\n        :returns: name of the output directory/tarfile containing the snapshot\n        \"\"\"\n        # Capture Server related information\n        self.capture_server(with_svr_logs=True, with_acct_logs=True)\n        # Capture scheduler information\n        self.capture_scheduler(with_sched_logs=True)\n        # Capture jobs related information\n        self.capture_jobs()\n        # Capture nodes related information\n        self.capture_nodes(with_mom_logs=True)\n        # Capture comm related information\n        self.capture_comms(with_comm_logs=True)\n        # Capture hooks related information\n        self.capture_hooks()\n        # Capture reservations related information\n        self.capture_reservations()\n        # Capture datastore related information\n        self.capture_datastore(with_db_logs=True)\n        # Capture pbs.conf\n        self.capture_pbs_conf()\n        # Capture system related information\n        self.capture_system_info()\n\n        if self.create_tar:\n            return self.outtar_path\n        else:\n            return self.snapdir\n\n    def finalize(self):\n        \"\"\"\n        Capture some common information and perform cleanup\n        \"\"\"\n\n        if self.finalized:\n            # This function is non-reentrant\n            # So just return if it's already been called once\n            self.logger.debug(\"finalize() already called once, skipping it.\")\n            return\n\n        self.finalized = True\n\n        # Record timestamp of the snapshot\n        snap_ctimepath = os.path.join(self.snapdir, CTIME_PATH)\n        with open(snap_ctimepath, \"w\") as ctimefd:\n            ctimefd.write(str(self.server.ctime) + \"\\n\")\n        if self.create_tar:\n            
self.__add_to_archive(snap_ctimepath)\n\n        # If the caller was pbs_snapshot, add its log file to the tarball\n        if self.create_tar and self.log_path is not None:\n            snap_logpath = os.path.join(self.snapdir, self.log_filename)\n            self.__add_to_archive(snap_logpath, self.log_path)\n\n        # Cleanup\n        if self.create_tar:\n            # Close the output tarfile\n            self.outtar_fd.close()\n            # Remove the snapshot directory\n            self.du.rm(path=self.snapdir, recursive=True, force=True)\n"
  },
  {
    "path": "test/fw/ptl/utils/pbs_testsuite.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport calendar\nimport grp\nimport inspect\nimport logging\nimport os\nimport platform\nimport pwd\nimport socket\nimport subprocess\nimport sys\nimport time\nimport textwrap\nimport unittest\nfrom distutils.util import strtobool\n\nimport ptl\nfrom ptl.lib.pbs_testlib import *\nfrom ptl.utils.pbs_cliutils import CliUtils\nfrom ptl.utils.pbs_dshutils import DshUtils\nfrom ptl.utils.pbs_logutils import PBSLogAnalyzer\nfrom ptl.utils.pbs_procutils import ProcMonitor\nfrom ptl.utils.pbs_testusers import *\ntry:\n    from ptl.utils.plugins.ptl_test_tags import tags\nexcept ImportError:\n    def tags(*args, **kwargs):\n        pass\ntry:\n    from nose.plugins.skip import SkipTest\nexcept ImportError:\n    class SkipTest(Exception):\n        pass\n\nSETUPLOG = 'setuplog'\nTEARDOWNLOG = 'teardownlog'\n\nSMOKE = 'smoke'\nREGRESSION = 'regression'\nNUMNODES = 'numnodes'\nTIMEOUT_KEY = '__testcase_timeout__'\nMINIMUM_TESTCASE_TIMEOUT = 600\nREQUIREMENTS_KEY = '__PTL_REQS_LIST__'\n\n# unit of min_ram and min_disk is GB\ndefault_requirements = {\n    'num_servers': 1,\n    'num_moms': 1,\n    'num_comms': 1,\n    'num_clients': 1,\n    'min_mom_ram': .128,\n    'min_mom_disk': 1,\n    'min_server_ram': .128,\n    'min_server_disk': 1,\n    'mom_on_server': False,\n    'no_mom_on_server': False,\n    'no_comm_on_server': False,\n    'no_comm_on_mom': 
True\n}\n\n\ndef skip(reason=\"Skipped test execution\"):\n    \"\"\"\n    Unconditionally skip a test.\n\n    :param reason: Reason for the skip\n    :type reason: str or None\n    \"\"\"\n    skip_flag = True\n\n    def wrapper(test_item):\n        test_item.__unittest_skip__ = skip_flag\n        test_item.__unittest_skip_why__ = reason\n        return test_item\n    return wrapper\n\n\ndef timeout(val):\n    \"\"\"\n    Decorator to set timeout value of test case\n    \"\"\"\n    def wrapper(obj):\n        setattr(obj, TIMEOUT_KEY, int(val))\n        return obj\n    return wrapper\n\n\ndef checkModule(modname):\n    \"\"\"\n    Decorator to check if named module is available on the system\n    and if not skip the test\n    \"\"\"\n    def decorated(function):\n        import imp\n        try:\n            imp.find_module(modname)\n        except ImportError:\n            function.__unittest_skip__ = True\n            function.__unittest_skip_why__ = 'Module unavailable ' + modname\n        return function\n    return decorated\n\n\ndef skipOnCray(function):\n    \"\"\"\n    Decorator to skip a test on a ``Cray`` system\n    \"\"\"\n    function.__skip_on_cray__ = True\n    return function\n\n\ndef skipOnShasta(function):\n    \"\"\"\n    Decorator to skip a test on a ``Cray Shasta`` system\n    \"\"\"\n    function.__skip_on_shasta__ = True\n    return function\n\n\ndef skipOnCpuSet(function):\n    \"\"\"\n    Decorator to skip a test on a cgroup cpuset system\n    \"\"\"\n    function.__skip_on_cpuset__ = True\n    return function\n\n\ndef runOnlyOnLinux(function):\n    \"\"\"\n    Decorator to run a test only on Linux platforms\n    \"\"\"\n    function.__run_only_on_linux__ = True\n    return function\n\n\ndef checkMomBashVersion(function):\n    \"\"\"\n    Decorator to skip a test if bash version is less than 4.2.46\n    \"\"\"\n    function.__check_mom_bash_version__ = True\n    return function\n\n\ndef requirements(*args, **kwargs):\n    \"\"\"\n    Decorator to provide the cluster information required for a 
particular\n    testcase.\n    \"\"\"\n    def wrap_obj(obj):\n        getreq = getattr(obj, REQUIREMENTS_KEY, {})\n        for name, value in kwargs.items():\n            getreq[name] = value\n        setattr(obj, REQUIREMENTS_KEY, getreq)\n        return obj\n    return wrap_obj\n\n\ndef testparams(**kwargs):\n    \"\"\"\n    Decorator to set or modify test specific parameters\n    \"\"\"\n    def decorated(function):\n        function.__doc__ += \"Test Params:\" + \"\\n\\t\"\n        for key, value in kwargs.items():\n            function.__doc__ += str(key) + ' : ' + str(value) + '\\n\\t'\n\n        def wrapper(self, *args):\n            self.testconf = {}\n            for key, value in kwargs.items():\n                keyname = type(self).__name__ + \".\" + key\n                if keyname not in self.conf.keys():\n                    self.conf[keyname] = value\n                    self.testconf[keyname] = value\n                else:\n                    self.testconf[keyname] = self.conf[keyname]\n                    t = type(value)\n                    if t == bool:\n                        if strtobool(self.conf[keyname]):\n                            self.conf[keyname] = True\n                        else:\n                            self.conf[keyname] = False\n                    else:\n                        # If value is not a boolean then typecast\n                        self.conf[keyname] = t(self.conf[keyname])\n\n            function(self, *args)\n        wrapper.__doc__ = function.__doc__\n        wrapper.__name__ = function.__name__\n        return wrapper\n    return decorated\n\n\ndef generate_hook_body_from_func(hook_func, *args):\n    \"\"\"\n    Generate a hook body string from the source of the given function,\n    followed by a call to it with the given arguments\n    \"\"\"\n    return textwrap.dedent(inspect.getsource(hook_func)) + \\\n        \"%s%s\" % (hook_func.__name__, str(args))\n\n\nclass PBSServiceInstanceWrapper(dict):\n\n    \"\"\"\n    A wrapper class to handle multiple service\n    ``(i.e., mom, server, scheduler)`` instances as passed along\n    through the test harness 
``(pbs_benchpress)``. Returns an\n    ordered dictionary of PBS service instances ``(i.e., mom/server/\n    scheduler)``\n\n    Users may invoke PTL using pointers to multiple services per\n    host, for example:\n\n    ``pbs_benchpress -p moms=hostA@/etc/pbs.conf,hostB,hostA@/etc/pbs.conf3``\n\n    In such cases, the moms instance variable must be able to distinguish\n    the ``self.moms['hostA']`` instances; each instance will be mapped\n    to a unique configuration file\n    \"\"\"\n\n    def __init__(self, *args, **kwargs):\n        super().__init__(*args, **kwargs)\n        self.orderedlist = list(super().keys())\n\n    def __setitem__(self, key, value):\n        super().__setitem__(key, value)\n        if key not in self.orderedlist:\n            self.orderedlist.append(key)\n\n    def __getitem__(self, key):\n        for k, v in self.items():\n            if k == key:\n                return v\n            if '@' in k:\n                name, _ = k.split('@')\n                if key in name:\n                    return v\n            else:\n                name = k\n            # Users may have specified shortnames instead of FQDN, in order\n            # to not enforce that PBS_SERVER match the hostname passed in as\n            # parameter, we check if a shortname matches a FQDN entry\n            if '.' in key and key.split('.')[0] in name:\n                return v\n            if '.' 
in name and name.split('.')[0] in key:\n                return v\n        return None\n\n    def __contains__(self, key):\n        if key in self.keys():\n            return True\n\n        for k in self.keys():\n            if '@' in k:\n                name, _ = k.split('@')\n                if key in name:\n                    return True\n            else:\n                name = k\n            # Users may have specified shortnames instead of FQDN, in order\n            # to not enforce that PBS_SERVER match the hostname passed in as\n            # parameter, we check if a shortname matches a FQDN entry\n            if '.' in key and key.split('.')[0] in name:\n                return True\n            if '.' in name and name.split('.')[0] in key:\n                return True\n        return False\n\n    def __iter__(self):\n        return iter(self.orderedlist)\n\n    def host_keys(self):\n        return [h.split('@')[0] for h in self.keys()]\n\n    def keys(self):\n        return list(self.orderedlist)\n\n    def values(self):\n        return list(self[key] for key in self.orderedlist)\n\n\nclass setUpClassError(Exception):\n    pass\n\n\nclass tearDownClassError(Exception):\n    pass\n\n\nclass PBSTestSuite(unittest.TestCase):\n\n    \"\"\"\n    Generic ``setup``, ``teardown``, and ``logging`` functions to\n    be used as parent class for most tests.\n    Class instantiates:\n\n    ``server object connected to localhost``\n\n    ``scheduler objected connected to localhost``\n\n    ``mom object connected to localhost``\n\n    Custom parameters:\n\n    :param server: The hostname on which the PBS ``server/scheduler``\n                   are running\n    :param mom: The hostname on which the PBS MoM is running\n    :param servers: Colon-separated list of hostnames hosting a PBS server.\n                    Servers are then accessible as a dictionary in the\n                    instance variable servers.\n    :param client: For CLI mode only, name of the host on 
which the PBS\n                   client commands are to be run from. Format is\n                   ``<host>@<path-to-config-file>``\n    :param moms: Colon-separated list of hostnames hosting a PBS MoM.\n                 MoMs are made accessible as a dictionary in the instance\n                 variable moms.\n    :param comms: Colon-separated list of hostnames hosting a PBS Comm.\n                  Comms are made accessible as a dictionary in the\n                  instance variable comms.\n    :param nomom: expect no MoM on colon-separated set of hosts\n    :param mode: Sets mode of operation to PBS server. Can be either\n                 ``'cli'`` or ``'api'``. Defaults to API behavior.\n    :param conn_timeout: set a timeout in seconds after which a pbs_connect\n                         IFL call is refreshed (i.e., disconnected)\n    :param skip-setup: Bypasses setUp of PBSTestSuite (not custom ones)\n    :param skip-teardown: Bypasses tearDown of PBSTestSuite (not custom ones)\n    :param procinfo: Enables process monitoring thread, logged into\n                     ptl_proc_info test metrics. The value can be set to\n                     _all_ to monitor all PBS processes, including\n                     ``pbs_server``, ``pbs_sched``, ``pbs_mom``, or a process\n                     defined by name.\n    :param revert-to-defaults=<True|False>: if False, will not revert to\n                                            defaults. True by default.\n    :param revert-hooks=<True|False>: if False, do not revert hooks to\n                                      defaults. Defaults to True.\n                                      ``revert-to-defaults`` set to False\n                                      overrides this setting.\n    :param del-hooks=<True|False>: If False, do not delete hooks. 
Defaults\n                                   to False. ``revert-to-defaults`` set to\n                                   False overrides this setting.\n    :param revert-queues=<True|False>: If False, do not revert queues to\n                                       defaults. Defaults to True.\n                                       ``revert-to-defaults`` set to False\n                                       overrides this setting.\n    :param revert-resources=<True|False>: If False, do not revert resources\n                                          to defaults. Defaults to True.\n                                          ``revert-to-defaults`` set to False\n                                          overrides this setting.\n    :param del-queues=<True|False>: If False, do not delete queues. Defaults\n                                    to False. ``revert-to-defaults`` set to\n                                    False overrides this setting.\n    :param del-vnodes=<True|False>: If False, do not delete vnodes on MoM\n                                    instances. Defaults to True.\n    :param server-revert-to-defaults=<True|False>: if False, don't revert\n                                                   Server to defaults\n    :param comm-revert-to-defaults=<True|False>: if False, don't revert Comm\n                                                 to defaults\n    :param mom-revert-to-defaults=<True|False>: if False, don't revert MoM\n                                                to defaults\n    :param sched-revert-to-defaults=<True|False>: if False, don't revert\n                                                  Scheduler to defaults\n    :param procmon: Enables process monitoring. Multiple values must be\n                    colon-separated. 
For example, to monitor ``server``,\n                    ``sched``, and ``mom`` use\n                    ``procmon=pbs_server:pbs_sched:pbs_mom``\n    :param procmon-freq: Sets a polling frequency for the process monitoring\n                         tool. Defaults to 10 seconds.\n    :param test-users: colon-separated list of users to use as test users.\n                       The users specified override the default users in the\n                       order in which they appear in the ``PBS_USERS`` list.\n    :param default-testcase-timeout: Default test case timeout value.\n    :param data-users: colon-separated list of data users.\n    :param oper-users: colon-separated list of operator users.\n    :param mgr-users: colon-separated list of manager users.\n    :param root-users: colon-separated list of root users.\n    :param build-users: colon-separated list of build users.\n    :param clienthost: the hostnames to set in the MoM config file\n    \"\"\"\n\n    logger = logging.getLogger(__name__)\n    metrics_data = {}\n    measurements = []\n    additional_data = {}\n    conf = {}\n    testconf = {}\n    param = None\n    du = DshUtils()\n    _procmon = None\n    _process_monitoring = False\n    revert_to_defaults = True\n    server_revert_to_defaults = True\n    mom_revert_to_defaults = True\n    sched_revert_to_defaults = True\n    revert_queues = True\n    revert_resources = True\n    revert_hooks = True\n    del_hooks = True\n    del_queues = True\n    del_scheds = True\n    del_vnodes = True\n    config_saved = False\n    server = None\n    scheduler = None\n    mom = None\n    comm = None\n    servers = None\n    schedulers = {}\n    scheds = {}\n    moms = None\n    comms = None\n\n    @classmethod\n    def setUpClass(cls):\n        cls.log_enter_setup(True)\n        cls._testMethodName = 'setUpClass'\n        cls.parse_param()\n        cls.init_param()\n        cls.check_users_exist()\n        cls.init_servers()\n        cls.init_schedulers()\n        
cls.init_moms()\n        if cls.use_cur_setup:\n            _, path = tempfile.mkstemp(prefix=\"saved_custom_setup\",\n                                       suffix=\".json\")\n            ret = cls.server.save_configuration()\n            if not ret:\n                cls.logger.error(\"Failed to save server's custom setup\")\n                raise Exception(\"Failed to save server's custom setup\")\n            for mom in cls.moms.values():\n                ret = mom.save_configuration()\n                if not ret:\n                    cls.logger.error(\"Failed to save mom's custom setup\")\n                    raise Exception(\"Failed to save mom's custom setup\")\n            ret = cls.scheduler.save_configuration(path, 'w')\n            if ret:\n                cls.saved_file = path\n            else:\n                cls.logger.error(\"Failed to save scheduler's custom setup\")\n                raise Exception(\"Failed to save scheduler's custom setup\")\n            cls.add_mgrs_opers()\n        cls.init_comms()\n        a = {ATTR_license_min: len(cls.moms)}\n        cls.server.manager(MGR_CMD_SET, SERVER, a, sudo=True)\n        cls.server.restart()\n        cls.log_end_setup(True)\n        # methods for skipping tests with ptl decorators\n        cls.populate_test_dict()\n        cls.skip_cray_tests()\n        cls.skip_shasta_tests()\n        cls.skip_cpuset_tests()\n        cls.run_only_on_linux()\n        cls.check_mom_bash_version()\n\n    def setUp(self):\n        if 'skip-setup' in self.conf:\n            return\n        self.log_enter_setup()\n        self.init_proc_mon()\n        if not PBSTestSuite.config_saved and self.use_cur_setup:\n            _, path = tempfile.mkstemp(prefix=\"saved_test_setup\",\n                                       suffix=\".json\")\n            ret = self.server.save_configuration()\n            if not ret:\n                self.logger.error(\"Failed to save server's test setup\")\n                raise Exception(\"Failed 
to save server's test setup\")\n            for mom in self.moms.values():\n                ret = mom.save_configuration()\n                if not ret:\n                    self.logger.error(\"Failed to save mom's test setup\")\n                    raise Exception(\"Failed to save mom's test setup\")\n            ret = self.scheduler.save_configuration(path, 'w')\n            if ret:\n                self.saved_file = path\n            else:\n                self.logger.error(\"Failed to save scheduler's test setup\")\n                raise Exception(\"Failed to save scheduler's test setup\")\n            PBSTestSuite.config_saved = True\n        # Only server, mom, scheduler and pbs.conf configurations are\n        # handled in the use-current-setup flow; comm will be added to\n        # this block once save & load configuration is implemented for it\n        if not self.use_cur_setup:\n            self.revert_servers()\n            self.revert_pbsconf()\n            self.revert_schedulers()\n            self.revert_moms()\n\n        # turn off opt_backfill_fuzzy to avoid unexpected calendaring behavior\n        # as many tests assume that scheduler will simulate each event\n        a = {'opt_backfill_fuzzy': 'off'}\n        for schedinfo in self.schedulers.values():\n            for schedname in schedinfo.keys():\n                self.server.manager(MGR_CMD_SET, SCHED, a, id=schedname)\n\n        self.revert_comms()\n        self.log_end_setup()\n        self.measurements = []\n\n    @classmethod\n    def populate_test_dict(cls):\n        cls.test_dict = {}\n        for attr in dir(cls):\n            if attr.startswith('test'):\n                obj = getattr(cls, attr)\n                if callable(obj):\n                    cls.test_dict[attr] = obj\n\n    @classmethod\n    def skip_cray_tests(cls):\n        if not cls.mom.is_cray():\n            return\n        msg = 'capability not supported on Cray'\n        if cls.__dict__.get('__skip_on_cray__', 
False):\n            # skip all test cases in this test suite\n            for test_item in cls.test_dict.values():\n                test_item.__unittest_skip__ = True\n                test_item.__unittest_skip_why__ = msg\n        else:\n            # skip individual test cases\n            for test_item in cls.test_dict.values():\n                if test_item.__dict__.get('__skip_on_cray__', False):\n                    test_item.__unittest_skip__ = True\n                    test_item.__unittest_skip_why__ = msg\n\n    @classmethod\n    def skip_shasta_tests(cls):\n        if not cls.mom.is_shasta():\n            return\n        msg = 'capability not supported on Cray Shasta'\n        if cls.__dict__.get('__skip_on_shasta__', False):\n            # skip all test cases in this test suite\n            for test_item in cls.test_dict.values():\n                test_item.__unittest_skip__ = True\n                test_item.__unittest_skip_why__ = msg\n        else:\n            # skip individual test cases\n            for test_item in cls.test_dict.values():\n                if test_item.__dict__.get('__skip_on_shasta__', False):\n                    test_item.__unittest_skip__ = True\n                    test_item.__unittest_skip_why__ = msg\n\n    @classmethod\n    def skip_cpuset_tests(cls):\n        skip_cpuset_tests = False\n        for mom in cls.moms.values():\n            if mom.is_cpuset_mom():\n                skip_cpuset_tests = True\n                msg = 'capability not supported on cgroup cpuset system: '\n                msg += mom.shortname\n                break\n        if not skip_cpuset_tests:\n            return\n        if cls.__dict__.get('__skip_on_cpuset__', False):\n            # skip all test cases in this test suite\n            for test_item in cls.test_dict.values():\n                test_item.__unittest_skip__ = True\n                test_item.__unittest_skip_why__ = msg\n        else:\n            # skip individual test cases\n          
  for test_item in cls.test_dict.values():\n                if test_item.__dict__.get('__skip_on_cpuset__', False):\n                    test_item.__unittest_skip__ = True\n                    test_item.__unittest_skip_why__ = msg\n\n    @classmethod\n    def run_only_on_linux(cls):\n        if cls.mom.is_only_linux():\n            return\n        msg = 'capability supported only on Linux'\n        if cls.__dict__.get('__run_only_on_linux__', False):\n            # skip all test cases in this test suite\n            for test_item in cls.test_dict.values():\n                test_item.__unittest_skip__ = True\n                test_item.__unittest_skip_why__ = msg\n        else:\n            # skip individual test cases\n            for test_item in cls.test_dict.values():\n                if test_item.__dict__.get('__run_only_on_linux__', False):\n                    test_item.__unittest_skip__ = True\n                    test_item.__unittest_skip_why__ = msg\n\n    @classmethod\n    def check_mom_bash_version(cls):\n        skip_test = False\n        msg = 'capability supported only for bash version >= 4.2.46'\n        for mom in cls.moms.values():\n            if not mom.check_mom_bash_version():\n                skip_test = True\n                break\n        if not skip_test:\n            return\n        if cls.__dict__.get('__check_mom_bash_version__', False):\n            # skip all test cases in this test suite\n            for test_item in cls.test_dict.values():\n                test_item.__unittest_skip__ = True\n                test_item.__unittest_skip_why__ = msg\n        else:\n            # skip individual test cases\n            for test_item in cls.test_dict.values():\n                if test_item.__dict__.get('__check_mom_bash_version__', False):\n                    test_item.__unittest_skip__ = True\n                    test_item.__unittest_skip_why__ = msg\n\n    @classmethod\n    def log_enter_setup(cls, iscls=False):\n        _m = ' Entered ' 
+ cls.__name__ + ' setUp'\n        if iscls:\n            _m += 'Class'\n        _m_len = len(_m)\n        cls.logger.info('=' * _m_len)\n        cls.logger.info(_m)\n        cls.logger.info('=' * _m_len)\n\n    @classmethod\n    def log_end_setup(cls, iscls=False):\n        _m = 'Completed ' + cls.__name__ + ' setUp'\n        if iscls:\n            _m += 'Class'\n        _m_len = len(_m)\n        cls.logger.info('=' * _m_len)\n        cls.logger.info(_m)\n        cls.logger.info('=' * _m_len)\n\n    @classmethod\n    def _validate_param(cls, pname):\n        \"\"\"\n        Check if parameter was enabled at the ``command-line`` and set the\n        corresponding class variable accordingly\n\n        :param pname: parameter name\n        :type pname: str\n        \"\"\"\n        if pname not in cls.conf:\n            return\n        if cls.conf[pname] in PTL_TRUE:\n            setattr(cls, pname.replace('-', '_'), True)\n        else:\n            setattr(cls, pname.replace('-', '_'), False)\n\n    @classmethod\n    def _set_user(cls, name, user_list):\n        if name in cls.conf:\n            for idx, u in enumerate(cls.conf[name].split(':')):\n                user_list[idx].__init__(u)\n\n    @classmethod\n    def check_users_exist(cls):\n        \"\"\"\n        Check whether the test users exist or not\n        \"\"\"\n        testusersexist = True\n        for u in PBS_ALL_USERS:\n            rv = cls.du.check_user_exists(u.name, u.host, u.port)\n            if not rv:\n                _msg = 'User ' + str(u) + ' does not exist!'\n                raise setUpClassError(_msg)\n        return testusersexist\n\n    @classmethod\n    def kicksched_action(cls, server, obj_type, *args, **kwargs):\n        \"\"\"\n        Custom scheduler action to kick a scheduling cycle when expecting\n        a job state change\n        \"\"\"\n        if server is None:\n            cls.logger.error('no server defined for custom action')\n            return\n        if 
obj_type == JOB:\n            if (('scheduling' in server.attributes) and\n                    (server.attributes['scheduling'] != 'False')):\n                server.manager(MGR_CMD_SET, MGR_OBJ_SERVER,\n                               {'scheduling': 'True'},\n                               level=logging.DEBUG)\n\n    @classmethod\n    def parse_param(cls):\n        \"\"\"\n        get test configuration parameters as a ``comma-separated``\n        list of attributes.\n\n        Attributes may be ``'='`` separated key value pairs or standalone\n        entries.\n\n        ``Multi-property`` attributes are colon-delimited.\n        \"\"\"\n        if cls.param is None:\n            return\n        for h in cls.param.split(','):\n            if '=' in h:\n                k, v = h.split('=')\n                cls.conf[k.strip()] = v.strip()\n            else:\n                cls.conf[h.strip()] = ''\n        if (('clienthost' in cls.conf) and\n                not isinstance(cls.conf['clienthost'], list)):\n            cls.conf['clienthost'] = cls.conf['clienthost'].split(':')\n        users_map = [('test-users', PBS_USERS),\n                     ('oper-users', PBS_OPER_USERS),\n                     ('mgr-users', PBS_MGR_USERS),\n                     ('data-users', PBS_DATA_USERS),\n                     ('root-users', PBS_ROOT_USERS),\n                     ('build-users', PBS_BUILD_USERS),\n                     ('daemon-users', PBS_DAEMON_SERVICE_USERS)]\n        for k, v in users_map:\n            cls._set_user(k, v)\n\n    @classmethod\n    def init_param(cls):\n        cls._validate_param('revert-to-defaults')\n        cls._validate_param('server-revert-to-defaults')\n        cls._validate_param('comm-revert-to-defaults')\n        cls._validate_param('mom-revert-to-defaults')\n        cls._validate_param('sched-revert-to-defaults')\n        cls._validate_param('del-hooks')\n        cls._validate_param('revert-hooks')\n        cls._validate_param('del-queues')\n       
 cls._validate_param('del-vnodes')\n        cls._validate_param('revert-queues')\n        cls._validate_param('revert-resources')\n\n    @classmethod\n    def is_server_licensed(cls, server):\n        \"\"\"\n        Check if server is licensed or not\n        \"\"\"\n        for i in range(10):\n            lic = server.status(SERVER, 'license_count', level=logging.INFOCLI)\n            if lic and 'license_count' in lic[0]:\n                lic = PbsTypeLicenseCount(lic[0]['license_count'])\n                if ('Avail_Nodes' in lic) and (int(lic['Avail_Nodes']) > 0):\n                    return True\n                elif (('Avail_Sockets' in lic) and\n                        (int(lic['Avail_Sockets']) > 0)):\n                    return True\n                elif (('Avail_Global' in lic) and\n                        (int(lic['Avail_Global']) > 0)):\n                    return True\n                elif ('Avail_Local' in lic) and (int(lic['Avail_Local']) > 0):\n                    return True\n            time.sleep(i)\n        return False\n\n    @classmethod\n    def init_from_conf(cls, conf, single=None, multiple=None, skip=None,\n                       func=None):\n        \"\"\"\n        Helper method to parse test parameters for ``mom/server/scheduler``\n        instances.\n\n        The supported format of each service request is:\n\n        ``hostname@configuration/path``\n\n        For example:\n\n        ``pbs_benchpress -p server=remote@/etc/pbs.conf.12.0``\n\n        initializes a remote server instance that is configured according to\n        the remote file ``/etc/pbs.conf.12.0``\n        \"\"\"\n        endpoints = []\n        if ((multiple in conf) and (conf[multiple] is not None)):\n            __objs = conf[multiple].split(':')\n            for _m in __objs:\n                tmp = _m.split('@')\n                if len(tmp) == 2:\n                    endpoints.append(tuple(tmp))\n                elif len(tmp) == 1:\n                    
endpoints.append((tmp[0], None))\n        elif ((single in conf) and (conf[single] is not None)):\n            tmp = conf[single].split('@')\n            if len(tmp) == 2:\n                endpoints.append(tuple(tmp))\n            elif len(tmp) == 1:\n                endpoints.append((tmp[0], None))\n        else:\n            endpoints = [(socket.gethostname(), None)]\n        objs = PBSServiceInstanceWrapper()\n        for name, objconf in endpoints:\n            if ((skip is not None) and (skip in conf) and\n                    ((name in conf[skip]) or (conf[skip] in name))):\n                continue\n            if objconf is not None:\n                n = name + '@' + objconf\n            else:\n                n = name\n            if getattr(cls, \"server\", None) is not None:\n                objs[n] = func(name, pbsconf_file=objconf,\n                               server=cls.server.hostname)\n            else:\n                objs[n] = func(name, pbsconf_file=objconf)\n            if objs[n] is None:\n                _msg = 'Failed %s(%s, %s)' % (func.__name__, name, objconf)\n                raise setUpClassError(_msg)\n            objs[n].initialise_service()\n        return objs\n\n    @classmethod\n    def init_servers(cls, init_server_func=None, skip=None):\n        \"\"\"\n        Initialize servers\n        \"\"\"\n        if init_server_func is None:\n            init_server_func = cls.init_server\n        if 'servers' in cls.conf:\n            server_param = cls.conf['servers']\n            if 'comms' not in cls.conf and 'comm' not in cls.conf:\n                cls.conf['comms'] = server_param\n            if 'scheduler' not in cls.conf and 'schedulers' not in cls.conf:\n                cls.conf['schedulers'] = server_param\n            if 'moms' not in cls.conf and 'mom' not in cls.conf:\n                cls.conf['moms'] = server_param\n        if 'server' in cls.conf:\n            server_param = cls.conf['server']\n            if 'comm' not 
in cls.conf:\n                cls.conf['comm'] = server_param\n            if 'scheduler' not in cls.conf:\n                cls.conf['scheduler'] = server_param\n            if 'mom' not in cls.conf:\n                cls.conf['mom'] = server_param\n        cls.servers = cls.init_from_conf(conf=cls.conf, single='server',\n                                         multiple='servers', skip=skip,\n                                         func=init_server_func)\n        if cls.servers:\n            cls.server = cls.servers.values()[0]\n            for _server in cls.servers.values():\n                rv = _server.isUp()\n                if not rv:\n                    cls.logger.error('server ' + _server.hostname + ' is down')\n                    _server.pi.restart(_server.hostname)\n                    msg = 'Failed to restart server ' + _server.hostname\n                    cls.assertTrue(_server.isUp(), msg)\n\n    @classmethod\n    def init_comms(cls, init_comm_func=None, skip=None):\n        \"\"\"\n        Initialize comms\n        \"\"\"\n        if init_comm_func is None:\n            init_comm_func = cls.init_comm\n        cls.comms = cls.init_from_conf(conf=cls.conf,\n                                       single='comm',\n                                       multiple='comms', skip=skip,\n                                       func=init_comm_func)\n        if cls.comms:\n            cls.comm = cls.comms.values()[0]\n        cls.server.comms = cls.comms\n\n    @classmethod\n    def init_schedulers(cls, init_sched_func=None, skip=None):\n        \"\"\"\n        Initialize schedulers\n        \"\"\"\n        if init_sched_func is None:\n            init_sched_func = cls.init_scheduler\n        cls.scheds = cls.init_from_conf(conf=cls.conf,\n                                        single='scheduler',\n                                        multiple='schedulers', skip=skip,\n                                        func=init_sched_func)\n\n        for sched in 
cls.scheds.values():\n            if sched.server.name in cls.schedulers:\n                continue\n            else:\n                cls.schedulers[sched.server.name] = sched.server.schedulers\n        # creating a short hand for current host server.schedulers\n        cls.scheds = cls.server.schedulers\n        try:\n            cls.scheduler = cls.scheds['default']\n        except KeyError:\n            cls.logger.error(\"Could not get default scheduler:%s, \"\n                             \"check the server(core), server.isUp:%s\" %\n                             (str(cls.scheds), cls.server.isUp()))\n            raise\n\n    @classmethod\n    def init_moms(cls, init_mom_func=None, skip='nomom'):\n        \"\"\"\n        Initialize moms\n        \"\"\"\n        if init_mom_func is None:\n            init_mom_func = cls.init_mom\n        cls.moms = cls.init_from_conf(conf=cls.conf, single='mom',\n                                      multiple='moms', skip=skip,\n                                      func=init_mom_func)\n        if cls.moms:\n            cls.mom = cls.moms.values()[0]\n        cls.server.moms = cls.moms\n\n    @classmethod\n    def init_server(cls, hostname, pbsconf_file=None):\n        \"\"\"\n        Initialize a server instance\n\n        Define custom expect action to trigger a scheduling cycle when job\n        is not in running state\n\n        :returns: The server instance on success and None on failure\n        \"\"\"\n        client = hostname\n        client_conf = None\n        if 'client' in cls.conf:\n            _cl = cls.conf['client'].split('@')\n            client = _cl[0]\n            if len(_cl) > 1:\n                client_conf = _cl[1]\n        server = Server(hostname, pbsconf_file=pbsconf_file, client=client,\n                        client_pbsconf_file=client_conf)\n        server._conn_timeout = 0\n        if cls.conf is not None:\n            if 'mode' in cls.conf:\n                if cls.conf['mode'] == 'cli':\n        
            server.set_op_mode(PTL_CLI)\n            if 'conn_timeout' in cls.conf:\n                conn_timeout = int(cls.conf['conn_timeout'])\n                server.set_connect_timeout(conn_timeout)\n        sched_action = ExpectAction('kicksched', True, JOB,\n                                    cls.kicksched_action)\n        server.add_expect_action(action=sched_action)\n        return server\n\n    @classmethod\n    def init_comm(cls, hostname, pbsconf_file=None, server=None):\n        \"\"\"\n        Initialize a Comm instance associated to the given hostname.\n\n        This method must be called after ``init_server``\n\n        :param hostname: The host on which the Comm is running\n        :type hostname: str\n        :param pbsconf_file: Optional path to an alternate pbs config file\n        :type pbsconf_file: str or None\n        :param server: The server name associated to the Comm\n        :type server: str\n        :returns: The instantiated Comm upon success and None on failure.\n        \"\"\"\n        try:\n            server = cls.servers[server]\n        except BaseException:\n            server = Server(hostname, pbsconf_file=pbsconf_file)\n        return Comm(server, hostname, pbsconf_file=pbsconf_file)\n\n    @classmethod\n    def init_scheduler(cls, hostname, pbsconf_file=None, server=None):\n        \"\"\"\n        Initialize a Scheduler instance associated to the given server.\n        This method must be called after ``init_server``\n\n        :param server: The server name associated to the scheduler\n        :type server: str\n        :param pbsconf_file: Optional path to an alternate config file\n        :type pbsconf_file: str or None\n        :param hostname: The host on which Sched is running\n        :type hostname: str\n        :returns: The instantiated scheduler upon success and None on failure\n        \"\"\"\n        try:\n            server = 
cls.servers[server]\n        except BaseException:\n            server = Server(hostname, pbsconf_file=pbsconf_file)\n        return Scheduler(server, hostname=hostname,\n                         pbsconf_file=pbsconf_file)\n\n    @classmethod\n    def init_mom(cls, hostname, pbsconf_file=None, server=None):\n        \"\"\"\n        Initialize a ``MoM`` instance associated to the given hostname.\n\n        This method must be called after ``init_server``\n\n        :param hostname: The host on which the MoM is running\n        :type hostname: str\n        :param pbsconf_file: Optional path to an alternate pbs config file\n        :type pbsconf_file: str or None\n        :returns: The instantiated MoM upon success and None on failure.\n        \"\"\"\n        try:\n            server = cls.servers[server]\n        except BaseException:\n            server = Server(hostname, pbsconf_file=pbsconf_file)\n        return get_mom_obj(server, hostname, pbsconf_file=pbsconf_file)\n\n    def init_proc_mon(self):\n        \"\"\"\n        Initialize process monitoring when requested\n        \"\"\"\n        if 'procmon' in self.conf:\n            _proc_mon = []\n            for p in self.conf['procmon'].split(':'):\n                _proc_mon += ['.*' + p + '.*']\n            if _proc_mon:\n                if 'procmon-freq' in self.conf:\n                    freq = int(self.conf['procmon-freq'])\n                else:\n                    freq = 10\n                self.start_proc_monitor(name='|'.join(_proc_mon), regexp=True,\n                                        frequency=freq)\n                self._process_monitoring = True\n\n    def _get_dflt_pbsconfval(self, conf, svr_hostname, hosttype, hostobj):\n        \"\"\"\n        Helper function to revert_pbsconf, tries to determine and return\n        default value for the pbs.conf variable given\n\n        :param conf: the pbs.conf variable\n        :type conf: str\n        :param svr_hostname: hostname of the server host\n  
      :type svr_hostname: str\n        :param hosttype: type of host being reverted\n        :type hosttype: str\n        :param hostobj: PTL object associated with the host\n        :type hostobj: PBSService\n\n        :returns: default value of the pbs.conf variable as a string if it\n                  can be determined, otherwise None\n        \"\"\"\n        if conf == \"PBS_SERVER\":\n            return svr_hostname\n        elif conf == \"PBS_START_SCHED\":\n            if hosttype == \"server\":\n                return \"1\"\n            else:\n                return \"0\"\n        elif conf == \"PBS_START_COMM\":\n            if hosttype == \"comm\":\n                return \"1\"\n            else:\n                return \"0\"\n        elif conf == \"PBS_START_SERVER\":\n            if hosttype == \"server\":\n                return \"1\"\n            else:\n                return \"0\"\n        elif conf == \"PBS_START_MOM\":\n            if hosttype == \"mom\":\n                return \"1\"\n            else:\n                return \"0\"\n        elif conf == \"PBS_CORE_LIMIT\":\n            return \"unlimited\"\n        elif conf == \"PBS_SCP\":\n            scppath = self.du.which(hostobj.hostname, \"scp\")\n            if scppath != \"scp\":\n                return scppath\n        elif conf == \"PBS_LOG_HIGHRES_TIMESTAMP\":\n            return \"1\"\n        elif conf == \"PBS_PUBLIC_HOST_NAME\":\n            if hostobj.platform == \"shasta\" and hosttype == \"server\":\n                return socket.gethostname()\n            else:\n                return None\n        elif conf == \"PBS_DAEMON_SERVICE_USER\":\n            # Only set if scheduler user is not default\n            if DAEMON_SERVICE_USER.name == 'root':\n                return None\n            else:\n                return DAEMON_SERVICE_USER.name\n\n        return None\n\n    def _revert_pbsconf_comm(self, primary_server, vals_to_set):\n        \"\"\"\n        Helper function to revert_pbsconf 
to revert all comm daemons' pbs.conf\n\n        :param primary_server: object of the primary PBS server\n        :type primary_server: PBSService\n        :param vals_to_set: dict of pbs.conf values to set\n        :type vals_to_set: dict\n        \"\"\"\n        svr_hostnames = [svr.hostname for svr in self.servers.values()]\n        for comm in self.comms.values():\n            if comm.hostname in svr_hostnames:\n                continue\n\n            new_pbsconf = dict(vals_to_set)\n            restart_comm = False\n            pbs_conf_val = self.du.parse_pbs_config(comm.hostname)\n            if not pbs_conf_val:\n                raise ValueError(\"Could not parse pbs.conf on host %s\" %\n                                 (comm.hostname))\n\n            # to start with, set all keys in new_pbsconf with values from the\n            # existing pbs.conf\n            keys_to_delete = []\n            for conf in new_pbsconf:\n                if new_pbsconf[conf]:\n                    if (conf in pbs_conf_val) and (new_pbsconf[conf] !=\n                                                   pbs_conf_val[conf]):\n                        restart_comm = True\n                    elif conf not in pbs_conf_val:\n                        restart_comm = True\n                    continue\n                elif conf in pbs_conf_val:\n                    new_pbsconf[conf] = pbs_conf_val[conf]\n                else:\n                    # existing pbs.conf doesn't have a default variable set\n                    # Try to determine the default\n                    val = self._get_dflt_pbsconfval(conf,\n                                                    primary_server.hostname,\n                                                    \"comm\", comm)\n                    if val is None:\n                        self.logger.info(\"Couldn't revert %s in pbs.conf\"\n                         
                \" to its default value\" %\n                                         (conf))\n                        keys_to_delete.append(conf)\n                    else:\n                        new_pbsconf[conf] = val\n\n            for key in keys_to_delete:\n                del(new_pbsconf[key])\n\n            # Set the comm start bit to 1\n            if new_pbsconf[\"PBS_START_COMM\"] != \"1\":\n                new_pbsconf[\"PBS_START_COMM\"] = \"1\"\n                restart_comm = True\n\n            # Set PBS_CORE_LIMIT, PBS_SCP and PBS_SERVER\n            if new_pbsconf[\"PBS_CORE_LIMIT\"] != \"unlimited\":\n                new_pbsconf[\"PBS_CORE_LIMIT\"] = \"unlimited\"\n                restart_comm = True\n            if new_pbsconf[\"PBS_SERVER\"] != primary_server.hostname:\n                new_pbsconf[\"PBS_SERVER\"] = primary_server.hostname\n                restart_comm = True\n            if \"PBS_SCP\" not in new_pbsconf:\n                scppath = self.du.which(comm.hostname, \"scp\")\n                if scppath != \"scp\":\n                    new_pbsconf[\"PBS_SCP\"] = scppath\n                    restart_comm = True\n            if new_pbsconf[\"PBS_LOG_HIGHRES_TIMESTAMP\"] != \"1\":\n                new_pbsconf[\"PBS_LOG_HIGHRES_TIMESTAMP\"] = \"1\"\n                restart_comm = True\n\n            # Check if existing pbs.conf has more/less entries than the\n            # default list\n            if len(pbs_conf_val) != len(new_pbsconf):\n                restart_comm = True\n            # Check if existing pbs.conf has correct ownership\n            dest = self.du.get_pbs_conf_file(comm.hostname)\n            (cf_uid, cf_gid) = (os.stat(dest).st_uid, os.stat(dest).st_gid)\n            if cf_uid != 0 or cf_gid > 10:\n                restart_comm = True\n\n            if restart_comm:\n                self.du.set_pbs_config(comm.hostname, confs=new_pbsconf)\n                comm.pbs_conf = new_pbsconf\n                
comm.pi.initd(comm.hostname, \"restart\", daemon=\"comm\")\n                if not comm.isUp():\n                    self.fail(\"comm is not up\")\n\n    def _revert_pbsconf_mom(self, primary_server, vals_to_set):\n        \"\"\"\n        Helper function to revert_pbsconf to revert all mom daemons' pbs.conf\n\n        :param primary_server: object of the primary PBS server\n        :type primary_server: PBSService\n        :param vals_to_set: dict of pbs.conf values to set\n        :type vals_to_set: dict\n        \"\"\"\n        svr_hostnames = [svr.hostname for svr in self.servers.values()]\n        for mom in self.moms.values():\n            if mom.hostname in svr_hostnames:\n                continue\n            mom.revert_mom_pbs_conf(primary_server, vals_to_set)\n\n    def _revert_pbsconf_server(self, vals_to_set):\n        \"\"\"\n        Helper function to revert_pbsconf to revert all servers' pbs.conf\n\n        :param vals_to_set: dict of pbs.conf values to set\n        :type vals_to_set: dict\n        \"\"\"\n        for server in self.servers.values():\n            new_pbsconf = dict(vals_to_set)\n            cmds_to_exec = []\n            dmns_to_restart = 0\n            restart_pbs = False\n            pbs_conf_val = self.du.parse_pbs_config(server.hostname)\n            if not pbs_conf_val:\n                raise ValueError(\"Could not parse pbs.conf on host %s\" %\n                                 (server.hostname))\n\n            # to start with, set all keys in new_pbsconf with values from the\n            # existing pbs.conf\n            keys_to_delete = []\n            for conf in new_pbsconf:\n                if new_pbsconf[conf]:\n                    if (conf in pbs_conf_val) and (new_pbsconf[conf] !=\n                                                   pbs_conf_val[conf]):\n                        restart_pbs = True\n                    elif conf not in pbs_conf_val:\n                        restart_pbs = True\n                    continue\n   
             elif conf in pbs_conf_val:\n                    new_pbsconf[conf] = pbs_conf_val[conf]\n                else:\n                    # existing pbs.conf doesn't have a default variable set\n                    # Try to determine the default\n                    val = self._get_dflt_pbsconfval(conf,\n                                                    server.hostname,\n                                                    \"server\", server)\n                    if val is None:\n                        self.logger.error(\"Couldn't revert %s in pbs.conf\"\n                                          \" to its default value\" %\n                                          (conf))\n                        keys_to_delete.append(conf)\n                    else:\n                        new_pbsconf[conf] = val\n\n            for key in keys_to_delete:\n                del(new_pbsconf[key])\n\n            # Set all start bits\n            if (new_pbsconf[\"PBS_START_SERVER\"] != \"1\"):\n                new_pbsconf[\"PBS_START_SERVER\"] = \"1\"\n                dmns_to_restart += 1\n                cmds_to_exec.append([\"server\", \"start\"])\n            if (new_pbsconf[\"PBS_START_SCHED\"] != \"1\"):\n                new_pbsconf[\"PBS_START_SCHED\"] = \"1\"\n                cmds_to_exec.append([\"sched\", \"start\"])\n                dmns_to_restart += 1\n            if self.moms and server.hostname not in self.moms:\n                if new_pbsconf[\"PBS_START_MOM\"] != \"0\":\n                    new_pbsconf[\"PBS_START_MOM\"] = \"0\"\n                    cmds_to_exec.append([\"mom\", \"stop\"])\n                    dmns_to_restart += 1\n            else:\n                if (new_pbsconf[\"PBS_START_MOM\"] != \"1\"):\n                    new_pbsconf[\"PBS_START_MOM\"] = \"1\"\n                    cmds_to_exec.append([\"mom\", \"start\"])\n                    
dmns_to_restart += 1\n            if self.comms and server.hostname not in self.comms:\n                if new_pbsconf[\"PBS_START_COMM\"] != \"0\":\n                    new_pbsconf[\"PBS_START_COMM\"] = \"0\"\n                    cmds_to_exec.append([\"comm\", \"stop\"])\n            else:\n                if (new_pbsconf[\"PBS_START_COMM\"] != \"1\"):\n                    new_pbsconf[\"PBS_START_COMM\"] = \"1\"\n                    cmds_to_exec.append([\"comm\", \"start\"])\n                    dmns_to_restart += 1\n\n            if dmns_to_restart == 4:\n                # If all daemons need to be started again, just restart PBS\n                # instead of making PTL start each of them one at a time\n                restart_pbs = True\n\n            # Set PBS_CORE_LIMIT, PBS_SCP, PBS_SERVER\n            # and PBS_LOG_HIGHRES_TIMESTAMP\n            if new_pbsconf[\"PBS_CORE_LIMIT\"] != \"unlimited\":\n                new_pbsconf[\"PBS_CORE_LIMIT\"] = \"unlimited\"\n                restart_pbs = True\n            if new_pbsconf[\"PBS_SERVER\"] != server.shortname:\n                new_pbsconf[\"PBS_SERVER\"] = server.shortname\n                restart_pbs = True\n            if \"PBS_SCP\" not in new_pbsconf:\n                scppath = self.du.which(server.hostname, \"scp\")\n                if scppath != \"scp\":\n                    new_pbsconf[\"PBS_SCP\"] = scppath\n                    restart_pbs = True\n            if new_pbsconf[\"PBS_LOG_HIGHRES_TIMESTAMP\"] != \"1\":\n                new_pbsconf[\"PBS_LOG_HIGHRES_TIMESTAMP\"] = \"1\"\n                restart_pbs = True\n            if DAEMON_SERVICE_USER.name == 'root':\n                if \"PBS_DAEMON_SERVICE_USER\" in new_pbsconf:\n                    del(new_pbsconf['PBS_DAEMON_SERVICE_USER'])\n                    restart_pbs = True\n            elif (new_pbsconf[\"PBS_DAEMON_SERVICE_USER\"] !=\n                  DAEMON_SERVICE_USER.name):\n                new_pbsconf[\"PBS_DAEMON_SERVICE_USER\"] = 
\\\n                    DAEMON_SERVICE_USER.name\n                restart_pbs = True\n            # if shasta, set PBS_PUBLIC_HOST_NAME\n            if server.platform == 'shasta':\n                localhost = socket.gethostname()\n                if new_pbsconf[\"PBS_PUBLIC_HOST_NAME\"] != localhost:\n                    new_pbsconf[\"PBS_PUBLIC_HOST_NAME\"] = localhost\n                    restart_pbs = True\n\n            # Check if existing pbs.conf has more/less entries than the\n            # default list\n            if len(pbs_conf_val) != len(new_pbsconf):\n                restart_pbs = True\n            # Check if existing pbs.conf has correct ownership\n            dest = self.du.get_pbs_conf_file(server.hostname)\n            (cf_uid, cf_gid) = (os.stat(dest).st_uid, os.stat(dest).st_gid)\n            if cf_uid != 0 or cf_gid > 10:\n                restart_pbs = True\n\n            if restart_pbs or dmns_to_restart > 0:\n                # Write out the new pbs.conf file\n                self.du.set_pbs_config(server.hostname, confs=new_pbsconf,\n                                       append=False)\n                server.pbs_conf = new_pbsconf\n\n                if restart_pbs:\n                    # Restart all\n                    server.pi.restart(server.hostname)\n                    self._check_daemons_on_server(server, \"server\")\n                    if new_pbsconf[\"PBS_START_MOM\"] == \"1\":\n                        self._check_daemons_on_server(server, \"mom\")\n                    self._check_daemons_on_server(server, \"sched\")\n                    if new_pbsconf[\"PBS_START_COMM\"] == \"1\":\n                        self._check_daemons_on_server(server, \"comm\")\n                else:\n                    for initcmd in cmds_to_exec:\n                        # start/stop the particular daemon\n                        server.pi.initd(server.hostname, initcmd[1],\n                                        daemon=initcmd[0])\n                   
     if initcmd[1] == \"start\":\n                            if initcmd[0] == \"server\":\n                                self._check_daemons_on_server(server, \"server\")\n                            elif initcmd[0] == \"sched\":\n                                self._check_daemons_on_server(server, \"sched\")\n                            elif initcmd[0] == \"mom\":\n                                self._check_daemons_on_server(server, \"mom\")\n                            elif initcmd[0] == \"comm\":\n                                self._check_daemons_on_server(server, \"comm\")\n\n    def _check_daemons_on_server(self, server_obj, daemon_name):\n        \"\"\"\n        Checks if the specified daemon is up and running on the server host\n        server_obj : server\n        daemon_name : server/sched/mom/comm\n        \"\"\"\n        if daemon_name == \"server\":\n            if not server_obj.isUp():\n                self.fail(\"Server is not up\")\n        elif daemon_name == \"sched\":\n            if not server_obj.schedulers['default'].isUp():\n                self.fail(\"Scheduler is not up\")\n        elif daemon_name == \"mom\":\n            if not list(server_obj.moms.values())[0].isUp():\n                self.fail(\"Mom is not up\")\n        elif daemon_name == \"comm\":\n            if not list(server_obj.comms.values())[0].isUp():\n                self.fail(\"Comm is not up\")\n        else:\n            self.fail(\"Incorrect daemon specified\")\n\n    def revert_pbsconf(self):\n        \"\"\"\n        Revert contents and ownership of the pbs.conf file\n        Also start/stop the appropriate daemons\n        \"\"\"\n        primary_server = self.server\n\n        vals_to_set = {\n            \"PBS_HOME\": None,\n            \"PBS_EXEC\": None,\n            \"PBS_SERVER\": None,\n            \"PBS_START_SCHED\": None,\n            \"PBS_START_COMM\": None,\n            \"PBS_START_SERVER\": None,\n            \"PBS_START_MOM\": None,\n            \"PBS_CORE_LIMIT\": None,\n 
           \"PBS_SCP\": None,\n            \"PBS_LOG_HIGHRES_TIMESTAMP\": None\n        }\n\n        self._revert_pbsconf_mom(primary_server, vals_to_set)\n\n        server_vals_to_set = copy.deepcopy(vals_to_set)\n\n        server_vals_to_set[\"PBS_DAEMON_SERVICE_USER\"] = None\n        if primary_server.platform == 'shasta':\n            server_vals_to_set[\"PBS_PUBLIC_HOST_NAME\"] = None\n\n        self._revert_pbsconf_server(server_vals_to_set)\n\n        self._revert_pbsconf_comm(primary_server, vals_to_set)\n\n    def revert_servers(self, force=False):\n        \"\"\"\n        Revert the values set for servers\n        \"\"\"\n        for server in self.servers.values():\n            self.revert_server(server, force)\n\n    def revert_comms(self, force=False):\n        \"\"\"\n        Revert the values set for comms\n        \"\"\"\n        for comm in self.comms.values():\n            self.revert_comm(comm, force)\n\n    def revert_schedulers(self, force=False):\n        \"\"\"\n        Revert the values set for schedulers\n        \"\"\"\n        for scheds in self.schedulers.values():\n            if 'default' in scheds:\n                self.revert_scheduler(scheds['default'], force)\n\n    def revert_moms(self, force=False):\n        \"\"\"\n        Revert the values set for moms\n        \"\"\"\n        self.del_all_nodes = True\n        for mom in self.moms.values():\n            self.revert_mom(mom, force)\n\n    @classmethod\n    def add_mgrs_opers(cls):\n        \"\"\"\n        Adding manager and operator users\n        \"\"\"\n        if not cls.use_cur_setup:\n            try:\n                # Unset managers list\n                cls.server.manager(MGR_CMD_UNSET, SERVER, 'managers',\n                                   sudo=True)\n                # Unset operators list\n                cls.server.manager(MGR_CMD_UNSET, SERVER, 'operators',\n                                   sudo=True)\n            except PbsManagerError as e:\n                
cls.logger.error(e.msg)\n        attr = {}\n        current_user = pwd.getpwuid(os.getuid())[0]\n        if str(current_user) in str(MGR_USER):\n            mgrs_opers = {\"managers\": [str(MGR_USER) + '@*'],\n                          \"operators\": [str(OPER_USER) + '@*']}\n        else:\n            current_user += '@*'\n            mgrs_opers = {\"managers\": [current_user, str(MGR_USER) + '@*'],\n                          \"operators\": [str(OPER_USER) + '@*']}\n        server_stat = cls.server.status(SERVER, [\"managers\", \"operators\"])\n        if len(server_stat) > 0:\n            server_stat = server_stat[0]\n        for role, users in mgrs_opers.items():\n            if role not in server_stat:\n                attr[role] = (INCR, ','.join(users))\n            else:\n                add_users = []\n                for user in users:\n                    if user not in server_stat[role]:\n                        add_users.append(user)\n                if len(add_users) > 0:\n                    attr[role] = (INCR, \",\".join(add_users))\n        if len(attr) > 0:\n            cls.server.manager(MGR_CMD_SET, SERVER, attr, sudo=True)\n\n    def revert_server(self, server, force=False):\n        \"\"\"\n        Revert the values set for server\n        \"\"\"\n        rv = server.isUp()\n        if not rv:\n            self.logger.error('server ' + server.hostname + ' is down')\n            server.start()\n            msg = 'Failed to restart server ' + server.hostname\n            self.assertTrue(server.isUp(), msg)\n        server_stat = server.status(SERVER)[0]\n        self.add_mgrs_opers()\n        if ((self.revert_to_defaults and self.server_revert_to_defaults) or\n                force):\n            server.revert_to_defaults(reverthooks=self.revert_hooks,\n                                      delhooks=self.del_hooks,\n                                      revertqueues=self.revert_queues,\n                                      
delqueues=self.del_queues,\n                                      delscheds=self.del_scheds,\n                                      revertresources=self.revert_resources,\n                                      server_stat=server_stat)\n        rv = self.is_server_licensed(server)\n        _msg = 'No license found on server %s' % (server.shortname)\n        self.assertTrue(rv, _msg)\n        self.logger.info('server: %s licensed', server.hostname)\n        server.update_special_attr(SERVER, id=server.hostname)\n\n    def revert_comm(self, comm, force=False):\n        \"\"\"\n        Revert the values set for comm\n        \"\"\"\n        rv = comm.isUp()\n        if not rv:\n            self.logger.error('comm ' + comm.hostname + ' is down')\n            comm.start()\n            msg = 'Failed to restart comm ' + comm.hostname\n            self.assertTrue(comm.isUp(), msg)\n\n    def revert_scheduler(self, scheduler, force=False):\n        \"\"\"\n        Revert the values set for scheduler\n        \"\"\"\n        rv = scheduler.isUp()\n        if not rv:\n            self.logger.error('scheduler ' + scheduler.hostname + ' is down')\n            scheduler.start()\n            msg = 'Failed to restart scheduler ' + scheduler.hostname\n            self.assertTrue(scheduler.isUp(), msg)\n        if ((self.revert_to_defaults and self.sched_revert_to_defaults) or\n                force):\n            rv = scheduler.revert_to_defaults()\n            _msg = 'Failed to revert sched %s' % (scheduler.hostname)\n            self.assertTrue(rv, _msg)\n        self.server.update_special_attr(SCHED)\n\n    def revert_mom(self, mom, force=False):\n        \"\"\"\n        Revert the values set for mom\n        :param mom: the MoM object whose values are to be reverted\n        :type mom: MoM object\n        :param force: Option to revert forcibly\n        :type force: bool\n        \"\"\"\n        rv = mom.isUp()\n        if not rv:\n            self.logger.error('mom ' + 
mom.hostname + ' is down')\n            mom.start()\n            msg = 'Failed to restart mom ' + mom.hostname\n            self.assertTrue(mom.isUp(), msg)\n        restart = False\n        enabled_cpuset = False\n        if ((self.revert_to_defaults and self.mom_revert_to_defaults and\n             mom.revert_to_default) or force):\n            # no need to delete vnodes as it is already deleted in\n            # server revert_to_defaults\n            mom.delete_pelog()\n            if mom.has_vnode_defs():\n                mom.delete_vnode_defs()\n                restart = True\n            mom.config = {}\n            conf = mom.dflt_config\n            if 'clienthost' in self.conf:\n                conf.update({'$clienthost': self.conf['clienthost']})\n            mom.apply_config(conf=conf, hup=False, restart=False)\n            if mom.is_cpuset_mom():\n                enabled_cpuset = True\n        if restart:\n            mom.restart()\n        else:\n            mom.signal('-HUP')\n        if not mom.isUp():\n            self.logger.error('mom ' + mom.shortname + ' is down after revert')\n        # give mom enough time to network sync with the server on cpuset system\n        if enabled_cpuset:\n            time.sleep(4)\n        a = {'state': 'free'}\n        self.server.manager(MGR_CMD_CREATE, NODE, None, mom.shortname,\n                            runas=ROOT_USER)\n        if enabled_cpuset:\n            # In order to avoid intermingling CF/HK/PY file copies from the\n            # create node and those caused by the following call, wait\n            # until the dialogue between MoM and the server is complete\n            time.sleep(4)\n            just_before_enable_cgroup_cset = time.time()\n            mom.enable_cgroup_cset()\n            # a high max_attempts is needed to tolerate delay receiving\n            # hook-related files, due to temporary network interruptions\n            mom.log_match('pbs_cgroups.CF;copy hook-related '\n                 
         'file request received', max_attempts=120,\n                          starttime=just_before_enable_cgroup_cset - 1,\n                          interval=1)\n            # Make sure that the MoM will generate per-NUMA node vnodes\n            # when the natural node was created above.\n            # HUP may not be enough if exechost_startup is delayed\n            time.sleep(2)\n            mom.signal('-HUP')\n            self.server.expect(NODE, a, id=mom.shortname + '[0]', interval=1)\n        else:\n            self.server.expect(NODE, a, id=mom.shortname, interval=1)\n            self.server.update_special_attr(NODE, id=mom.shortname)\n\n        return mom\n\n    def analyze_logs(self):\n        \"\"\"\n        analyze accounting and scheduler logs from time test was started\n        until it finished\n        \"\"\"\n        pla = PBSLogAnalyzer()\n        self.metrics_data = pla.analyze_logs(serverlog=self.server.logfile,\n                                             schedlog=self.scheduler.logfile,\n                                             momlog=self.mom.logfile,\n                                             acctlog=self.server.acctlogfile,\n                                             start=self.server.ctime,\n                                             end=time.time())\n\n    def set_test_measurements(self, mdic=None):\n        \"\"\"\n        set dictionary of analytical results of the test\n        in order to include it in test report\n\n        :param mdic: dictionary with analytical data\n        :type mdic: dict\n\n        :returns: True on successful append or False on failure\n        \"\"\"\n        if not (mdic and isinstance(mdic, dict)):\n            return False\n        self.measurements.append(mdic)\n        return True\n\n    def add_additional_data_to_report(self, datadic=None):\n        \"\"\"\n        set dictionary that will be merged with the test report\n        for the overall test run\n\n        :param datadic: 
dictionary with analytical data\n        :type datadic: dict\n\n        :returns: True on successful update or False on failure\n        \"\"\"\n        if not (datadic and isinstance(datadic, dict)):\n            return False\n        self.additional_data.update(datadic)\n        return True\n\n    def start_proc_monitor(self, name=None, regexp=False, frequency=60):\n        \"\"\"\n        Start the process monitoring\n\n        :param name: Process name\n        :type name: str or None\n        :param regexp: Regular expression to match\n        :type regexp: bool\n        :param frequency: Frequency of monitoring\n        :type frequency: int\n        \"\"\"\n        if self._process_monitoring:\n            self.logger.info('A process monitor is already instantiated')\n            return\n        self.logger.info('starting process monitoring of ' + name +\n                         ' every ' + str(frequency) + ' seconds')\n        self._procmon = ProcMonitor(name=name, regexp=regexp,\n                                    frequency=frequency)\n        self._procmon.start()\n        self._process_monitoring = True\n\n    def stop_proc_monitor(self):\n        \"\"\"\n        Stop the process monitoring\n        \"\"\"\n        if not self._process_monitoring:\n            return\n        self.logger.info('stopping process monitoring')\n        self._procmon.stop()\n        self.metrics_data['procs'] = self._procmon.db_proc_info\n        self.set_test_measurements(self.metrics_data)\n        self._process_monitoring = False\n\n    def skipTest(self, reason=None):\n        \"\"\"\n        Skip Test\n\n        :param reason: message to indicate why test is skipped\n        :type reason: str or None\n        \"\"\"\n        if reason:\n            self.logger.warning('test skipped: ' + reason)\n        else:\n            reason = 'unknown'\n        raise SkipTest(reason)\n\n    skip_test = skipTest\n\n    def add_pbs_python_path_to_sys_path(self):\n        \"\"\"\n        Add the path to the installed PBS 
Python modules located in the PBS\n        installation directory to the module search path if the path is not\n        already present.\n        \"\"\"\n        for lib_dir in ['lib64', 'lib']:\n            pbs_python_path = os.path.join(\n                self.server.pbs_conf['PBS_EXEC'], lib_dir, 'python', 'altair')\n            if os.path.isdir(pbs_python_path):\n                if pbs_python_path not in sys.path:\n                    sys.path.append(pbs_python_path)\n                return\n        raise Exception(\n            \"Unable to determine the path to the PBS Python modules in the \" +\n            \"PBS installation directory.\")\n\n    @classmethod\n    def log_enter_teardown(cls, iscls=False):\n        _m = ' Entered ' + cls.__name__ + ' tearDown'\n        if iscls:\n            _m += 'Class'\n        _m_len = len(_m)\n        cls.logger.info('=' * _m_len)\n        cls.logger.info(_m)\n        cls.logger.info('=' * _m_len)\n\n    @classmethod\n    def log_end_teardown(cls, iscls=False):\n        _m = 'Completed ' + cls.__name__ + ' tearDown'\n        if iscls:\n            _m += 'Class'\n        _m_len = len(_m)\n        cls.logger.info('=' * _m_len)\n        cls.logger.info(_m)\n        cls.logger.info('=' * _m_len)\n\n    @staticmethod\n    def delete_current_state(svr, moms):\n        \"\"\"\n        Delete nodes, queues, site hooks, reservations and\n        vnodedef file\n        \"\"\"\n        # unset server attributes\n        svr.unset_svr_attrib()\n        # Delete site hooks\n        svr.delete_site_hooks()\n        # cleanup reservations\n        svr.cleanup_reservations()\n        # Delete vnodedef file & vnodes\n        for m in moms:\n            # Check if vnodedef file is present\n            if moms[m].has_vnode_defs():\n                moms[m].delete_vnode_defs()\n                moms[m].delete_vnodes()\n                moms[m].restart()\n        # Delete nodes\n        svr.delete_nodes()\n        # Delete queues\n        
svr.delete_queues()\n        # Delete resources\n        svr.delete_resources()\n\n    def tearDown(self):\n        \"\"\"\n        verify that ``server`` and ``scheduler`` are up\n        clean up jobs and reservations\n        \"\"\"\n        if self.conf:\n            self.set_test_measurements({'testconfig': self.testconf})\n        if 'skip-teardown' in self.conf:\n            return\n        self.log_enter_teardown()\n        self.server.cleanup_jobs()\n        self.stop_proc_monitor()\n\n        for server in self.servers.values():\n            server.cleanup_files()\n\n        for comm in self.comms.values():\n            if not comm.isUp(max_attempts=1):\n                # If the comm was stopped, killed or left in a bad state, bring\n                # the comm back up to avoid a possible delay or failure when\n                # starting the next test.\n                comm.start()\n\n        for mom in self.moms.values():\n            mom.cleanup_files()\n            if not mom.isUp(max_attempts=1):\n                # If the MoM was stopped, killed or left in a bad state, bring\n                # the MoM back up to avoid a possible delay or failure when\n                # starting the next test.\n                mom.start()\n\n        for sched in self.scheds.values():\n            sched.cleanup_files()\n            if not sched.isUp(max_attempts=1):\n                # If the scheduler was stopped, killed or left in a bad state,\n                # bring the scheduler back up to avoid a possible delay or\n                # failure when starting the next test.\n                sched.start()\n        self.server.delete_sched_config()\n\n        if self.use_cur_setup:\n            self.delete_current_state(self.server, self.moms)\n            ret = self.server.load_configuration(self.saved_file)\n            if not ret:\n                raise Exception(\"Failed to load server's test setup\")\n            ret = 
self.scheduler.load_configuration(self.saved_file)\n            if not ret:\n                raise Exception(\"Failed to load scheduler's test setup\")\n            for mom in self.moms.values():\n                ret = mom.load_configuration(self.saved_file)\n                if not ret:\n                    raise Exception(\"Failed to load mom's test setup\")\n            self.du.rm(path=self.saved_file)\n        self.log_end_teardown()\n\n    @classmethod\n    def tearDownClass(cls):\n        cls._testMethodName = 'tearDownClass'\n        if cls.use_cur_setup:\n            PBSTestSuite.delete_current_state(cls.server, cls.moms)\n            PBSTestSuite.config_saved = False\n            ret = cls.server.load_configuration(cls.saved_file)\n            if not ret:\n                raise Exception(\"Failed to load server's custom setup\")\n            ret = cls.scheduler.load_configuration(cls.saved_file)\n            if not ret:\n                raise Exception(\"Failed to load scheduler's custom setup\")\n            for mom in cls.moms.values():\n                ret = mom.load_configuration(cls.saved_file)\n                if not ret:\n                    raise Exception(\"Failed to load mom's custom setup\")\n            cls.du.rm(path=cls.saved_file)\n"
  },
  {
    "path": "test/fw/ptl/utils/pbs_testusers.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport grp\nimport pwd\nimport copy\n\n\nclass PbsGroup(object):\n\n    \"\"\"\n    The PbsGroup type augments a PBS groupname to associate it\n    to users to which the group belongs\n\n    :param name: The group name referenced\n    :type name: str\n    :param gid: gid of group\n    :type gid: int or None\n    :param users: The list of PbsUser objects the group belongs to\n    :type users: List or None\n    \"\"\"\n\n    @staticmethod\n    def get_group(group):\n        \"\"\"\n        param: group - groupname\n        type: group - str or int or class object\n        returns PbsGroup class object or None\n        \"\"\"\n        if isinstance(group, int):\n            for g in PBS_ALL_GROUPS:\n                if g.gid == group:\n                    return g\n        elif isinstance(group, str):\n            for g in PBS_ALL_GROUPS:\n                if g.name == group:\n                    return g\n        elif isinstance(group, PbsGroup):\n            if group in PBS_ALL_GROUPS:\n                return group\n        return None\n\n    def __init__(self, name, gid=None, users=None):\n        self.name = name\n        if gid is not None:\n            self.gid = int(gid)\n        else:\n            self.gid = None\n\n        try:\n            _group = grp.getgrnam(self.name)\n            self.gid = _group.gr_gid\n        except Exception:\n            
pass\n\n        if users is None:\n            self.users = []\n        elif isinstance(users, list):\n            self.users = users\n        else:\n            self.users = users.split(\",\")\n\n        for u in list(self.users):\n            if isinstance(u, str):\n                # replace the bare username with a PbsUser object; the\n                # PbsUser constructor adds itself to this group's user list\n                self.users.remove(u)\n                PbsUser(u, groups=[self])\n            elif self not in u.groups:\n                u.groups.append(self)\n\n    def __repr__(self):\n        return str(self.name)\n\n    def __str__(self):\n        return self.__repr__()\n\n    def __int__(self):\n        return int(self.gid)\n\n\nclass PbsUser(object):\n\n    \"\"\"\n    The PbsUser type augments a PBS username to associate\n    it to groups to which the user belongs\n\n    :param name: The user name referenced\n    :type name: str\n    :param uid: uid of user\n    :type uid: int or None\n    :param groups: The list of PbsGroup objects the user\n                   belongs to\n    :type groups: List or None\n    \"\"\"\n\n    @staticmethod\n    def get_user(user):\n        \"\"\"\n        param: user - username\n        type: user - str or int or class object\n        returns PbsUser class object or None\n        \"\"\"\n        if isinstance(user, int):\n            for u in PBS_ALL_USERS:\n                if u.uid == user:\n                    return u\n        elif isinstance(user, str):\n            for u in PBS_ALL_USERS:\n                if u.name == user:\n                    return u\n        elif isinstance(user, PbsUser):\n            if user in PBS_ALL_USERS:\n                return user\n        return None\n\n    def __init__(self, name, uid=None, groups=None):\n        self.name = name\n        self.home = None\n        self.gid = None\n        self.shell = None\n        self.gecos = None\n        self.host = None\n        self.port = None\n        if uid is not None:\n            self.uid = int(uid)\n        else:\n            self.uid = None\n        if \"@\" in name:\n            user = name.split('@')\n   
          self.name = user[0]\n            host_port = user[1].split('+')\n            self.host = host_port[0]\n            if len(host_port) > 1:\n                self.port = host_port[1]\n\n        try:\n            _user = pwd.getpwnam(self.name)\n            self.uid = _user.pw_uid\n            self.home = _user.pw_dir\n            self.gid = _user.pw_gid\n            self.shell = _user.pw_shell\n            self.gecos = _user.pw_gecos\n        except Exception:\n            pass\n\n        if groups is None:\n            self.groups = []\n        elif isinstance(groups, list):\n            self.groups = groups\n        else:\n            self.groups = groups.split(\",\")\n\n        for g in list(self.groups):\n            if isinstance(g, str):\n                # replace the bare group name with a PbsGroup object; the\n                # PbsGroup constructor adds itself to this user's group list\n                self.groups.remove(g)\n                PbsGroup(g, users=[self])\n            elif self not in g.users:\n                g.users.append(self)\n\n    def __repr__(self):\n        return str(self.name)\n\n    def __str__(self):\n        return self.__repr__()\n\n    def __int__(self):\n        return int(self.uid)\n\n\n# Test users/groups are expected to exist on the test systems\n# User running the tests and the test users should have passwordless sudo\n# access configured to avoid interrupted (queries for password) test runs\n\n# Groups\n\nTSTGRP0 = PbsGroup('tstgrp00', gid=1900)\nTSTGRP1 = PbsGroup('tstgrp01', gid=1901)\nTSTGRP2 = PbsGroup('tstgrp02', gid=1902)\nTSTGRP3 = PbsGroup('tstgrp03', gid=1903)\nTSTGRP4 = PbsGroup('tstgrp04', gid=1904)\nTSTGRP5 = PbsGroup('tstgrp05', gid=1905)\nTSTGRP6 = PbsGroup('tstgrp06', gid=1906)\nTSTGRP7 = PbsGroup('tstgrp07', gid=1907)\nGRP_PBS = PbsGroup('pbs', gid=901)\nGRP_AGT = PbsGroup('agt', gid=1146)\nROOT_GRP = PbsGroup(grp.getgrgid(0).gr_name, gid=0)\n\n# Users\n# first group from group list is primary group of user\nTEST_USER = PbsUser('pbsuser', uid=4359, groups=[TSTGRP0])\nTEST_USER1 = PbsUser('pbsuser1', uid=4361, groups=[TSTGRP0, TSTGRP1, TSTGRP2])\nTEST_USER2 = PbsUser('pbsuser2', 
uid=4362, groups=[TSTGRP0, TSTGRP1, TSTGRP3])\nTEST_USER3 = PbsUser('pbsuser3', uid=4363, groups=[TSTGRP0, TSTGRP1, TSTGRP4])\nTEST_USER4 = PbsUser('pbsuser4', uid=4364, groups=[TSTGRP1, TSTGRP4, TSTGRP5])\nTEST_USER5 = PbsUser('pbsuser5', uid=4365, groups=[TSTGRP2, TSTGRP4, TSTGRP6])\nTEST_USER6 = PbsUser('pbsuser6', uid=4366, groups=[TSTGRP3, TSTGRP4, TSTGRP7])\nTEST_USER7 = PbsUser('pbsuser7', uid=4368, groups=[TSTGRP1])\n\nOTHER_USER = PbsUser('pbsother', uid=4358, groups=[TSTGRP0, TSTGRP2, GRP_PBS,\n                                                   GRP_AGT])\nPBSTEST_USER = PbsUser('pbstest', uid=4355, groups=[TSTGRP0, TSTGRP2, GRP_PBS,\n                                                    GRP_AGT])\nTST_USR = PbsUser('tstusr00', uid=11000, groups=[TSTGRP0])\nTST_USR1 = PbsUser('tstusr01', uid=11001, groups=[TSTGRP0])\n\nBUILD_USER = PbsUser('pbsbuild', uid=9000, groups=[TSTGRP0])\nDATA_USER = PbsUser('pbsdata', uid=4372, groups=[TSTGRP0])\nMGR_USER = PbsUser('pbsmgr', uid=4367, groups=[TSTGRP0])\nOPER_USER = PbsUser('pbsoper', uid=4356, groups=[TSTGRP0, TSTGRP2, GRP_PBS,\n                                                 GRP_AGT])\nADMIN_USER = PbsUser('pbsadmin', uid=4357, groups=[TSTGRP0, TSTGRP2, GRP_PBS,\n                                                   GRP_AGT])\nPBSROOT_USER = PbsUser('pbsroot', uid=4371, groups=[TSTGRP0, TSTGRP2])\nROOT_USER = PbsUser('root', uid=0, groups=[ROOT_GRP])\n\n# This is so pbs_config --make-ug will create a daemon user\n# However, the default is to use root as the daemon user\nDAEMON_USER = PbsUser('pbsdaemon', uid=4373, groups=[GRP_PBS])\n# This will let users edit the daemon user without changing ROOT_USER\nDAEMON_SERVICE_USER = copy.deepcopy(ROOT_USER)\n\nPBS_USERS = (TEST_USER, TEST_USER1, TEST_USER2, TEST_USER3, TEST_USER4,\n             TEST_USER5, TEST_USER6, TEST_USER7, OTHER_USER, PBSTEST_USER,\n             TST_USR, TST_USR1)\n\nPBS_GROUPS = (TSTGRP0, TSTGRP1, TSTGRP2, TSTGRP3, TSTGRP4, TSTGRP5, TSTGRP6,\n         
     TSTGRP7, GRP_PBS, GRP_AGT)\n\nPBS_OPER_USERS = (OPER_USER,)\n\nPBS_MGR_USERS = (MGR_USER, ADMIN_USER)\n\nPBS_DATA_USERS = (DATA_USER,)\n\nPBS_ROOT_USERS = (PBSROOT_USER, ROOT_USER)\n\nPBS_BUILD_USERS = (BUILD_USER,)\n\nPBS_DAEMON_SERVICE_USERS = (DAEMON_SERVICE_USER,)\n\nREQUIRED_USERS = (TEST_USER, TEST_USER1, TEST_USER2, TEST_USER3)\n\nPBS_ALL_USERS = (PBS_USERS + PBS_OPER_USERS + PBS_MGR_USERS +\n                 PBS_DATA_USERS + PBS_ROOT_USERS + PBS_BUILD_USERS +\n                 PBS_DAEMON_SERVICE_USERS)\n\nPBS_ALL_GROUPS = (PBS_GROUPS + (ROOT_GRP,))\n"
  },
  {
    "path": "test/fw/ptl/utils/plugins/__init__.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n"
  },
  {
    "path": "test/fw/ptl/utils/plugins/ptl_report_json.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport re\nimport copy\nimport statistics\nfrom ptl.utils.pbs_dshutils import DshUtils\nfrom ptl.utils.plugins.ptl_test_runner import PtlTextTestRunner\nimport datetime\n\n\nclass PTLJsonData(object):\n    \"\"\"\n    Generate a JSON representation of PTL test data\n    \"\"\"\n\n    cur_repeat_count = 1\n\n    def __init__(self, command):\n        self.__du = DshUtils()\n        self.__cmd = command\n\n    def get_json(self, data, prev_data=None):\n        \"\"\"\n        Generate test data in accordance with the JSON schema\n\n        :param data: dictionary of a test case's details\n        :type data: dict\n        :param prev_data: dictionary of test run details that ran before\n                          the current test\n        :type prev_data: dict\n\n        :returns: a formatted dictionary of the data\n        \"\"\"\n        FMT = '%H:%M:%S.%f'\n        run_count = str(PtlTextTestRunner.cur_repeat_count)\n        data_json = None\n        if not prev_data:\n            PTLJsonData.cur_repeat_count = 1\n            data_json = {\n                'command': self.__cmd,\n                'user': self.__du.get_current_user(),\n                'product_version': data['pbs_version'],\n                'run_id': data['start_time'].strftime('%s'),\n                
'test_conf': {},\n                'machine_info': data['machinfo'],\n                'testsuites': {},\n                'additional_data': {},\n                'test_summary': {},\n                'avg_measurements': {},\n                'result': {\n                    'tests_with_failures': [],\n                    'test_suites_with_failures': [],\n                    'start': str(data['start_time'])\n                }\n\n            }\n            test_summary = {\n                'result_summary': {\n                    'run': 0,\n                    'succeeded': 0,\n                    'failed': 0,\n                    'errors': 0,\n                    'skipped': 0,\n                    'timedout': 0\n                },\n                'test_start_time': str(data['start_time']),\n                'tests_with_failures': [],\n                'test_suites_with_failures': []\n            }\n            data_json['test_summary'][run_count] = test_summary\n            if data['testparam']:\n                for param in data['testparam'].split(','):\n                    if '=' in param:\n                        par = param.split('=', 1)\n                        data_json['test_conf'][par[0]] = par[1]\n                    else:\n                        data_json['test_conf'][param] = True\n        else:\n            data_json = prev_data\n        if PTLJsonData.cur_repeat_count != PtlTextTestRunner.cur_repeat_count:\n            test_summary = {\n                'result_summary': {\n                    'run': 0,\n                    'succeeded': 0,\n                    'failed': 0,\n                    'errors': 0,\n                    'skipped': 0,\n                    'timedout': 0\n                },\n                'test_start_time': str(data['start_time']),\n                'tests_with_failures': [],\n                'test_suites_with_failures': []\n            }\n            data_json['test_summary'][run_count] = test_summary\n            
PTLJsonData.cur_repeat_count = PtlTextTestRunner.cur_repeat_count\n        tsname = data['suite']\n        tcname = data['testcase']\n        jdata = {\n            'status': data['status'],\n            'status_data': str(data['status_data']),\n            'duration': str(data['duration']),\n            'start_time': str(data['start_time']),\n            'end_time': str(data['end_time']),\n            'measurements': []\n        }\n        if 'measurements' in data:\n            jdata['measurements'] = data['measurements']\n        if PtlTextTestRunner.cur_repeat_count == 1:\n            if tsname not in data_json['testsuites']:\n                data_json['testsuites'][tsname] = {\n                    'module': data['module'],\n                    'file': data['file'],\n                    'testcases': {}\n                }\n            # docstring fields are strings in the schema, so default to ''\n            tsdoc = ''\n            if data['suitedoc']:\n                tsdoc = (re.sub(r\"[\\t\\n ]+\", \" \", data['suitedoc'])).strip()\n            data_json['testsuites'][tsname]['docstring'] = tsdoc\n            tcdoc = ''\n            if data['testdoc']:\n                tcdoc = (re.sub(r\"[\\t\\n ]+\", \" \", data['testdoc'])).strip()\n            data_json['testsuites'][tsname]['testcases'][tcname] = {\n                'docstring': tcdoc,\n                'requirements': data['requirements'],\n                'results': {}\n            }\n            if data['testdoc']:\n                jdata_tests = data_json['testsuites'][tsname]['testcases']\n                jdata_tests[tcname]['tags'] = data['tags']\n        jdata_tests = data_json['testsuites'][tsname]['testcases']\n        jdata_tests[tcname]['results'][run_count] = jdata\n        if 'additional_data' in data:\n            data_json['additional_data'] = data['additional_data']\n        data_json['test_summary'][run_count]['test_end_time'] = str(\n            data['end_time'])\n        run_summary = data_json['test_summary'][run_count]\n        start = 
run_summary['test_start_time'].split()[1]\n        end = str(data['end_time']).split()[1]\n        dur = str(datetime.datetime.strptime(end, FMT) -\n                  datetime.datetime.strptime(start, FMT))\n        data_json['test_summary'][run_count]['tests_duration'] = dur\n        data_json['test_summary'][run_count]['result_summary']['run'] += 1\n        d_ts = data_json['test_summary'][run_count]\n        if data['status'] == 'PASS':\n            d_ts['result_summary']['succeeded'] += 1\n        elif data['status'] == 'SKIP':\n            d_ts['result_summary']['skipped'] += 1\n        elif data['status'] == 'TIMEDOUT':\n            d_ts['result_summary']['timedout'] += 1\n            d_ts['tests_with_failures'].append(data['testcase'])\n            if data['suite'] not in d_ts['test_suites_with_failures']:\n                d_ts['test_suites_with_failures'].append(data['suite'])\n        elif data['status'] == 'ERROR':\n            d_ts['result_summary']['errors'] += 1\n            d_ts['tests_with_failures'].append(data['testcase'])\n            if data['suite'] not in d_ts['test_suites_with_failures']:\n                d_ts['test_suites_with_failures'].append(data['suite'])\n        elif data['status'] == 'FAIL':\n            d_ts['result_summary']['failed'] += 1\n            d_ts['tests_with_failures'].append(data['testcase'])\n            if data['suite'] not in d_ts['test_suites_with_failures']:\n                d_ts['test_suites_with_failures'].append(data['suite'])\n        m_avg = {\n            'testsuites': {}\n        }\n\n        for tsname in data_json['testsuites']:\n            m_avg['testsuites'][tsname] = {\n                'testcases': {}\n            }\n            for tcname in data_json['testsuites'][tsname]['testcases']:\n                test_status = \"PASS\"\n                m_avg['testsuites'][tsname]['testcases'][tcname] = []\n                t_sum = []\n                count = 0\n                j_data = 
data_json['testsuites'][tsname]['testcases'][tcname]\n                measurements_data = []\n                for key in j_data['results'].keys():\n                    count += 1\n                    r_count = str(count)\n                    m_case = data_json['testsuites'][tsname]['testcases']\n                    m = m_case[tcname]['results'][r_count]['measurements']\n                    if j_data['results'][r_count]['status'] != \"PASS\":\n                        test_status = \"FAIL\"\n                    m_sum = []\n                    for i in range(len(m)):\n                        sum_mean = []\n                        sum_std = []\n                        sum_min = []\n                        sum_max = []\n                        record = []\n                        if \"test_measure\" in m[i].keys():\n                            if len(t_sum) > i:\n                                sum_mean.extend(t_sum[i][0])\n                                sum_std.extend(t_sum[i][1])\n                                sum_min.extend(t_sum[i][2])\n                                sum_max.extend(t_sum[i][3])\n                            else:\n                                measurements_data.append(m[i])\n                                sum_mean.append(m[i][\"test_data\"]['mean'])\n                            sum_std.append(m[i][\"test_data\"]['mean'])\n                            sum_min.append(m[i][\"test_data\"]['minimum'])\n                            sum_max.append(m[i][\"test_data\"]['maximum'])\n                            record = [sum_mean, sum_std, sum_min, sum_max]\n                        else:\n                            if len(measurements_data) <= i:\n                                measurements_data.append(m[i])\n                            record = [sum_mean, sum_std, sum_min, sum_max]\n                        m_sum.append(record)\n                    if len(t_sum) > len(m_sum):\n                        for v in range(len(m_sum)):\n                          
  t_sum[v] = m_sum[v]\n                    else:\n                        t_sum = m_sum\n                m_list = []\n                if test_status == \"PASS\":\n                    for i in range(len(measurements_data)):\n                        m_data = {}\n                        if \"test_measure\" in measurements_data[i].keys():\n                            measure = measurements_data[i]['test_measure']\n                            m_data['test_measure'] = measure\n                            m_data['unit'] = measurements_data[i]['unit']\n                            m_data['test_data'] = {}\n                            if len(t_sum[i][1]) < 2:\n                                std_dev = 0\n                            else:\n                                std_dev = statistics.stdev(t_sum[i][1])\n                            mean = statistics.mean(t_sum[i][1])\n                            lowv = mean - (std_dev * 2)\n                            uppv = mean + (std_dev * 2)\n                            new_list = [x for x in t_sum[i]\n                                        [1] if x > lowv and x < uppv]\n                            if len(new_list) == 0:\n                                new_list = t_sum[i][1]\n                            else:\n                                mean = statistics.mean(new_list)\n                                std_dev = statistics.stdev(new_list)\n                            minimum = min(t_sum[i][2])\n                            maximum = max(t_sum[i][3])\n                            m_data['test_data']['std_dev'] = std_dev\n                            m_data['test_data']['mean'] = mean\n                            m_data['test_data']['minimum'] = minimum\n                            m_data['test_data']['maximum'] = maximum\n                            m_data['test_data']['samples_considered'] = len(\n                                new_list)\n                            m_data['test_data']['total_samples'] = len(\n                  
              t_sum[i][1])\n                        m_list.append(m_data)\n                    m_avg['testsuites'][tsname]['testcases'][tcname] = m_list\n        data_json[\"avg_measurements\"] = m_avg\n\n        data_json['result']['end'] = str(data['end_time'])\n        start = data_json['result']['start'].split()[1]\n        end = data_json['result']['end'].split()[1]\n        dur = str(datetime.datetime.strptime(end, FMT) -\n                  datetime.datetime.strptime(start, FMT))\n        fail_tests = []\n        fail_ts = []\n        for count in range(PtlTextTestRunner.cur_repeat_count):\n            r_count = str(count + 1)\n            fail_tests.extend(\n                data_json['test_summary'][r_count]['tests_with_failures'])\n            fail_ts.extend(data_json['test_summary']\n                           [r_count]['test_suites_with_failures'])\n        data_json['result']['duration'] = dur\n        data_json['result']['tests_with_failures'] = list(set(fail_tests))\n        data_json['result']['test_suites_with_failures'] = list(set(fail_ts))\n        return data_json\n"
  },
  {
    "path": "test/fw/ptl/utils/plugins/ptl_test_data.py",
    "content": "# coding: utf-8\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nimport os\nimport sys\nimport socket\nimport logging\nimport signal\nimport pwd\nimport re\nfrom nose.util import isclass\nfrom nose.plugins.base import Plugin\nfrom nose.plugins.skip import SkipTest\nfrom ptl.utils.plugins.ptl_test_runner import TimeOut\nfrom ptl.utils.pbs_dshutils import DshUtils\n\nlog = logging.getLogger('nose.plugins.PTLTestData')\n\n\nclass PTLTestData(Plugin):\n\n    \"\"\"\n    Save post analysis data on test cases failure or error\n    \"\"\"\n    name = 'PTLTestData'\n    score = sys.maxsize - 6\n    logger = logging.getLogger(__name__)\n\n    def __init__(self):\n        Plugin.__init__(self)\n        self.post_data_dir = None\n        self.max_postdata_threshold = None\n        self.__save_data_count = 0\n        self.__priv_sn = ''\n        self.du = DshUtils()\n\n    def options(self, parser, env):\n        \"\"\"\n        Register command line options\n        \"\"\"\n        pass\n\n    def set_data(self, post_data_dir, max_postdata_threshold):\n        self.post_data_dir = post_data_dir\n        self.max_postdata_threshold = max_postdata_threshold\n\n    def configure(self, options, config):\n        \"\"\"\n        Configure the plugin and system, based on selected options\n        \"\"\"\n        self.config = config\n        if self.post_data_dir is not None:\n            self.enabled = True\n        else:\n            
self.enabled = False\n\n    def __save_home(self, test, status, err=None):\n        if hasattr(test, 'test'):\n            _test = test.test\n            sn = _test.__class__.__name__\n        elif hasattr(test, 'context'):\n            _test = test.context\n            sn = _test.__name__\n        else:\n            # test does not have any PBS Objects, so just return\n            return\n        if self.__priv_sn != sn:\n            self.__save_data_count = 0\n            self.__priv_sn = sn\n        # Saving home might take time so disable timeout\n        # handler set by runner\n        tn = getattr(_test, '_testMethodName', 'unknown')\n        testlogs = getattr(test, 'captured_logs', '')\n        datadir = os.path.join(self.post_data_dir, sn, tn)\n        if os.path.exists(datadir):\n            _msg = 'Old post analysis data exists at %s' % datadir\n            _msg += ', skipping saving data for this test case'\n            self.logger.warn(_msg)\n            _msg = 'Please remove old directory or'\n            _msg += ' provide different directory'\n            self.logger.warn(_msg)\n            return\n        if getattr(test, 'old_sigalrm_handler', None) is not None:\n            _h = getattr(test, 'old_sigalrm_handler')\n            signal.signal(signal.SIGALRM, _h)\n            signal.alarm(0)\n        self.logger.log(logging.DEBUG2, 'Saving post analysis data...')\n        current_host = socket.gethostname().split('.')[0]\n        self.du.mkdir(current_host, path=datadir, mode=0o755,\n                      parents=True, logerr=False, level=logging.DEBUG2)\n        if err is not None:\n            if isclass(err[0]) and issubclass(err[0], SkipTest):\n                status = 'SKIP'\n                status_data = 'Reason = %s' % (err[1])\n            else:\n                if isclass(err[0]) and issubclass(err[0], TimeOut):\n                    status = 'TIMEDOUT'\n                status_data = getattr(test, 'err_in_string', '')\n        else:\n       
     status_data = ''\n        logfile = os.path.join(datadir, 'logfile_' + status)\n        f = open(logfile, 'w+')\n        f.write(testlogs + '\\n')\n        f.write(status_data + '\\n')\n        f.write('test duration: %s\\n' % str(getattr(test, 'duration', '0')))\n        if status in ('PASS', 'SKIP'):\n            # Test case passed or skipped, no need to save post analysis data\n            f.close()\n            return\n        if ((self.max_postdata_threshold != 0) and\n                (self.__save_data_count >= self.max_postdata_threshold)):\n            _msg = 'Total number of saved post analysis data sets for this'\n            _msg += ' testsuite has exceeded the max postdata threshold'\n            _msg += ' (%d)' % self.max_postdata_threshold\n            f.write(_msg + '\\n')\n            self.logger.error(_msg)\n            f.close()\n            return\n\n        servers = getattr(_test, 'servers', None)\n        if servers is not None:\n            server_host = list(servers.values())[0].shortname\n        else:\n            _msg = 'Could not find Server Object in given test object'\n            _msg += ', skipping saving post analysis data'\n            f.write(_msg + '\\n')\n            self.logger.warning(_msg)\n            f.close()\n            return\n        moms = getattr(_test, 'moms', None)\n        comms = getattr(_test, 'comms', None)\n        client = getattr(list(_test.servers.values())[0], 'client', None)\n        server = list(servers.values())[0]\n        add_hosts = []\n        if len(servers) > 1:\n            for param in list(servers.values())[1:]:\n                add_hosts.append(param.shortname)\n        if moms is not None:\n            for param in moms.values():\n                add_hosts.append(param.shortname)\n        if comms is not None:\n            for param in comms.values():\n                add_hosts.append(param.shortname)\n        if client is not None:\n            add_hosts.append(client.split('.')[0])\n\n        add_hosts = 
list(set(add_hosts) - set([server_host]))\n\n        pbs_snapshot_path = os.path.join(\n            server.pbs_conf[\"PBS_EXEC\"], \"sbin\", \"pbs_snapshot\")\n        cur_user = self.du.get_current_user()\n        cur_user_dir = pwd.getpwnam(cur_user).pw_dir\n        cmd = [\n            pbs_snapshot_path,\n            '-H', server_host,\n            '--daemon-logs',\n            '2',\n            '--accounting-logs',\n            '2',\n            '--with-sudo'\n        ]\n        if len(add_hosts) > 0:\n            cmd += ['--additional-hosts=' + ','.join(add_hosts)]\n        cmd += ['-o', cur_user_dir]\n        ret = self.du.run_cmd(current_host, cmd, level=logging.DEBUG2,\n                              logerr=False)\n        if ret['rc'] != 0:\n            _msg = 'Failed to get analysis information '\n            _msg += 'on %s:' % server_host\n            _msg += '\\n\\n' + '\\n'.join(ret['err']) + '\\n\\n'\n            f.write(_msg + '\\n')\n            self.logger.error(_msg)\n            f.close()\n            return\n        else:\n            if len(ret['out']) == 0:\n                self.logger.error('Snapshot command failed')\n                f.close()\n                return\n\n        snap_out = ret['out'][0]\n        snap_out_dest = (snap_out.split(\":\")[1]).strip()\n\n        dest = os.path.join(datadir,\n                            'PBS_' + server_host + '.tar.gz')\n        ret = self.du.run_copy(current_host, src=snap_out_dest,\n                               dest=dest, sudo=True, level=logging.DEBUG2)\n        self.du.rm(current_host, path=snap_out_dest,\n                   recursive=True, force=True, level=logging.DEBUG2)\n\n        f.close()\n        self.__save_data_count += 1\n        _msg = 'Saved post analysis data'\n        self.logger.info(_msg)\n\n    def addError(self, test, err):\n        self.__save_home(test, 'ERROR', err)\n\n    def addFailure(self, test, err):\n        self.__save_home(test, 'FAIL', err)\n\n    def 
addSuccess(self, test):\n        self.__save_home(test, 'PASS')\n"
  },
  {
    "path": "test/fw/ptl/utils/plugins/ptl_test_db.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport datetime\nimport json\nimport copy\nimport logging\nimport os\nimport platform\nimport pwd\nimport sys\nimport time\nimport traceback\n\nimport ptl\nimport ptl.utils.pbs_logutils as lu\nfrom ptl.lib.pbs_testlib import PbsTypeDuration\nfrom ptl.utils.pbs_dshutils import DshUtils\nfrom ptl.utils.pbs_testsuite import default_requirements\nfrom ptl.utils.plugins.ptl_report_json import PTLJsonData\nfrom ptl.utils.plugins.ptl_test_tags import TAGKEY\n\n# The following dance is required because PTLTestDb().process_output() from\n# this file is used by the pbs_loganalyzer script, which is shipped with the\n# PBS package through the unsupported directory, where nose might not be\n# installed\ntry:\n    from nose.util import isclass\n    from nose.plugins.base import Plugin\n    from nose.plugins.skip import SkipTest\n    from ptl.utils.plugins.ptl_test_runner import TimeOut\n    log = logging.getLogger('nose.plugins.PTLTestDb')\nexcept ImportError:\n    class Plugin(object):\n        pass\n\n    def isclass(obj):\n        pass\n\n    class SkipTest(Exception):\n        pass\n\n    class TimeOut(Exception):\n        pass\n    log = logging.getLogger('PTLTestDb')\n\n# Table names\n_DBVER_TN = 'ptl_db_version'\n_TESTRESULT_TN = 'ptl_test_results'\n_SCHEDM_TN = 'ptl_scheduler_metrics'\n_SVRM_TN = 'ptl_server_metrics'\n_MOMM_TN = 'ptl_mom_metrics'\n_ACCTM_TN = 
'ptl_accounting_metrics'\n_PROCM_TN = 'ptl_proc_metrics'\n_CYCLEM_TN = 'ptl_cycle_metrics'\n_ESTINFOSUM_TN = 'ptl_estimated_info_summary'\n_ESTINFO_TN = 'ptl_estimated_info'\n_JOBM_TN = 'ptl_job_metrics'\n\n\nclass PTLDbError(Exception):\n\n    \"\"\"\n    PTL database error class\n\n    :param rv: Return value for the database error\n    :type rv: str or None\n    :param rc: Return code for the database error\n    :type rc: str or None\n    :param msg: Error message\n    :type msg: str or None\n    \"\"\"\n\n    def __init__(self, rv=None, rc=None, msg=None, post=None, *args, **kwargs):\n        self.rv = rv\n        self.rc = rc\n        self.msg = msg\n        if post is not None:\n            post(*args, **kwargs)\n\n    def __str__(self):\n        return ('rc=' + str(self.rc) + ', rv=' + str(self.rv) +\n                ', msg=' + str(self.msg))\n\n    def __repr__(self):\n        return (self.__class__.__name__ + '(rc=' + str(self.rc) + ', rv=' +\n                str(self.rv) + ', msg=' + str(self.msg) + ')')\n\n\nclass DBType(object):\n\n    \"\"\"\n    Base class for each database type\n    Any type of database must inherit from me\n\n    :param dbtype: Database type\n    :type dbtype: str\n    :param dbpath: Path to database\n    :type dbpath: str\n    :param dbaccess: Path to a file that defines db options\n    :type dbaccess: str\n    \"\"\"\n\n    def __init__(self, dbtype, dbpath, dbaccess):\n        if dbpath is None:\n            dn = _TESTRESULT_TN + '.db'\n            dbdir = os.getcwd()\n            dbpath = os.path.join(dbdir, dn)\n        elif os.path.isdir(dbpath):\n            dn = _TESTRESULT_TN + '.db'\n            dbdir = dbpath\n            dbpath = os.path.join(dbdir, dn)\n        else:\n            dbdir = os.path.dirname(dbpath)\n            dbpath = dbpath\n        self.dbtype = dbtype\n        self.dbpath = dbpath\n        self.dbdir = dbdir\n        self.dbaccess = dbaccess\n\n    def write(self, data, logfile=None):\n        \"\"\"\n 
       :param data: Data to write\n        :param logfile: Can be one of ``server``, ``scheduler``, ``mom``,\n                        ``accounting`` or ``procs``\n        :type logfile: str or None\n        \"\"\"\n        _msg = 'write method must be implemented in'\n        _msg += ' %s' % (str(self.__class__.__name__))\n        raise PTLDbError(rc=1, rv=False, msg=_msg)\n\n    def close(self, result=None):\n        \"\"\"\n        Close the database\n        \"\"\"\n        _msg = 'close method must be implemented in'\n        _msg += ' %s' % (str(self.__class__.__name__))\n        raise PTLDbError(rc=1, rv=False, msg=_msg)\n\n\nclass PostgreSQLDb(DBType):\n\n    \"\"\"\n    PostgreSQL type database\n    \"\"\"\n\n    def __init__(self, dbtype, dbpath, dbaccess):\n        DBType.__init__(self, dbtype, dbpath, dbaccess)\n        if self.dbtype != 'pgsql':\n            _msg = 'db type does not match my type (pgsql)'\n            raise PTLDbError(rc=1, rv=False, msg=_msg)\n        if self.dbaccess is None:\n            _msg = 'Db access credentials are required!'\n            raise PTLDbError(rc=1, rv=False, msg=_msg)\n        try:\n            import psycopg2\n        except ImportError:\n            _msg = 'psycopg2 is required for %s type database!' 
% (self.dbtype)\n            raise PTLDbError(rc=1, rv=False, msg=_msg)\n        try:\n            with open(self.dbaccess) as f:\n                creds = ' '.join([n.strip() for n in f.readlines()])\n            self.__dbobj = psycopg2.connect(creds)\n        except Exception as e:\n            _msg = 'Failed to connect to database:\\n%s\\n' % (str(e))\n            raise PTLDbError(rc=1, rv=False, msg=_msg)\n        self.__username = pwd.getpwuid(os.getuid())[0]\n        self.__platform = ' '.join(platform.uname()).strip()\n        self.__ptlversion = str(ptl.__version__)\n        self.__db_version = '1.0.0'\n        self.__index = self.__create_tables()\n\n    def __get_index(self, c):\n        idxs = []\n        for tn in (_TESTRESULT_TN, _SCHEDM_TN, _SVRM_TN, _MOMM_TN, _ACCTM_TN,\n                   _PROCM_TN, _CYCLEM_TN, _ESTINFOSUM_TN, _ESTINFO_TN,\n                   _JOBM_TN):\n            # psycopg2's execute() returns None, so fetch from the cursor\n            c.execute('SELECT max(id) from %s;' % (tn))\n            idxs.append(c.fetchone()[0])\n        # max(id) is NULL for an empty table; skip those entries\n        idxs = [i for i in idxs if i is not None]\n        if idxs:\n            return max(idxs) + 1\n        return 1\n\n    def __upgrade_db(self, version):\n        if version == 
self.__db_version:\n            return\n\n    def __create_tables(self):\n        c = self.__dbobj.cursor()\n\n        try:\n            stmt = ['CREATE TABLE %s (' % (_DBVER_TN)]\n            stmt += ['version TEXT);']\n            c.execute(''.join(stmt))\n        except BaseException:\n            # table already exists; roll back the aborted transaction first\n            self.__dbobj.rollback()\n            stmt = 'SELECT version from %s;' % (_DBVER_TN)\n            c.execute(stmt)\n            version = c.fetchone()[0]\n            self.__upgrade_db(version)\n            return self.__get_index(c)\n        stmt = ['INSERT INTO %s (version)' % (_DBVER_TN)]\n        stmt += [' VALUES (\\'%s\\');' % (self.__db_version)]\n        c.execute(''.join(stmt))\n\n        stmt = ['CREATE TABLE IF NOT EXISTS %s (' % (_TESTRESULT_TN)]\n        stmt += ['id INTEGER,']\n        stmt += ['suite TEXT,']\n        stmt += ['testcase TEXT,']\n        stmt += ['testdoc TEXT,']\n        stmt += ['start_time TEXT,']\n        stmt += ['end_time TEXT,']\n        stmt += ['duration TEXT,']\n        stmt += ['pbs_version TEXT,']\n        stmt += ['testparam TEXT,']\n        stmt += ['username TEXT,']\n        stmt += ['ptl_version TEXT,']\n        stmt += ['platform TEXT,']\n        stmt += ['status TEXT,']\n        stmt += ['status_data TEXT);']\n        c.execute(''.join(stmt))\n\n        stmt = ['CREATE TABLE IF NOT EXISTS %s (' % (_SCHEDM_TN)]\n        stmt += ['id INTEGER,']\n        stmt += ['logname TEXT,']\n        stmt += [lu.VER + ' TEXT,']\n        stmt += [lu.NC + ' INTEGER,']\n        stmt += [lu.NJR + ' INTEGER,']\n        stmt += [lu.NJC + ' INTEGER,']\n        stmt += [lu.NJFR + ' INTEGER,']\n        stmt += [lu.mCD + ' TIME,']\n        stmt += [lu.CD25 + ' TIME,']\n        stmt += [lu.CDA + ' TIME,']\n        stmt += [lu.CD50 + ' TIME,']\n        stmt += [lu.CD75 + ' TIME,']\n        stmt += [lu.MCD + ' TIME,']\n        stmt += [lu.mCT + ' TIMESTAMP,']\n        stmt += [lu.MCT + ' TIMESTAMP,']\n        stmt += [lu.TTC + ' TIME,']\n        stmt += [lu.DUR + ' TIME,']\n        stmt += [lu.SST + ' 
TIME,']\n        stmt += [lu.JRR + ' TEXT);']\n        c.execute(''.join(stmt))\n\n        stmt = ['CREATE TABLE IF NOT EXISTS %s (' % (_SVRM_TN)]\n        stmt += ['id INTEGER,']\n        stmt += ['logname TEXT,']\n        stmt += [lu.VER + ' TEXT,']\n        stmt += [lu.NJQ + ' INTEGER,']\n        stmt += [lu.NJR + ' INTEGER,']\n        stmt += [lu.NJE + ' INTEGER,']\n        stmt += [lu.JRR + ' TEXT,']\n        stmt += [lu.JER + ' TEXT,']\n        stmt += [lu.JSR + ' TEXT,']\n        stmt += [lu.NUR + ' TEXT,']\n        stmt += [lu.JWTm + ' TIME,']\n        stmt += [lu.JWT25 + ' TIME,']\n        stmt += [lu.JWT50 + ' TIME,']\n        stmt += [lu.JWTA + ' TIME,']\n        stmt += [lu.JWT75 + ' TIME,']\n        stmt += [lu.JWTM + ' TIME,']\n        stmt += [lu.JRTm + ' TIME,']\n        stmt += [lu.JRT25 + ' TIME,']\n        stmt += [lu.JRTA + ' TIME,']\n        stmt += [lu.JRT50 + ' TIME,']\n        stmt += [lu.JRT75 + ' TIME,']\n        stmt += [lu.JRTM + ' TIME);']\n        c.execute(''.join(stmt))\n\n        stmt = ['CREATE TABLE IF NOT EXISTS %s (' % (_MOMM_TN)]\n        stmt += ['id INTEGER,']\n        stmt += ['logname TEXT,']\n        stmt += [lu.VER + ' TEXT,']\n        stmt += [lu.NJQ + ' INTEGER,']\n        stmt += [lu.NJR + ' INTEGER,']\n        stmt += [lu.NJE + ' INTEGER,']\n        stmt += [lu.JRR + ' TEXT,']\n        stmt += [lu.JER + ' TEXT,']\n        stmt += [lu.JSR + ' TEXT);']\n        c.execute(''.join(stmt))\n\n        stmt = ['CREATE TABLE IF NOT EXISTS %s (' % (_ACCTM_TN)]\n        stmt += ['id INTEGER,']\n        stmt += ['logname TEXT,']\n        stmt += [lu.DUR + ' TEXT,']\n        stmt += [lu.NJQ + ' INTEGER,']\n        stmt += [lu.NJR + ' INTEGER,']\n        stmt += [lu.NJE + ' INTEGER,']\n        stmt += [lu.JRR + ' TEXT,']\n        stmt += [lu.JSR + ' TEXT,']\n        stmt += [lu.JER + ' TEXT,']\n        stmt += [lu.JWTm + ' TIME,']\n        stmt += [lu.JWT25 + ' TIME,']\n        stmt += [lu.JWT50 + ' TIME,']\n        stmt += 
[lu.JWTA + ' TIME,']\n        stmt += [lu.JWT75 + ' TIME,']\n        stmt += [lu.JWTM + ' TIME,']\n        stmt += [lu.JRTm + ' TIME,']\n        stmt += [lu.JRT25 + ' TIME,']\n        stmt += [lu.JRTA + ' TIME,']\n        stmt += [lu.JRT50 + ' TIME,']\n        stmt += [lu.JRT75 + ' TIME,']\n        stmt += [lu.JRTM + ' TIME,']\n        stmt += [lu.JNSm + ' INTEGER,']\n        stmt += [lu.JNS25 + ' REAL,']\n        stmt += [lu.JNSA + ' REAL,']\n        stmt += [lu.JNS50 + ' REAL,']\n        stmt += [lu.JNS75 + ' REAL,']\n        stmt += [lu.JNSM + ' REAL,']\n        stmt += [lu.JCSm + ' INTEGER,']\n        stmt += [lu.JCS25 + ' REAL,']\n        stmt += [lu.JCSA + ' REAL,']\n        stmt += [lu.JCS50 + ' REAL,']\n        stmt += [lu.JCS75 + ' REAL,']\n        stmt += [lu.JCSM + ' REAL,']\n        stmt += [lu.CPH + ' INTEGER,']\n        stmt += [lu.NPH + ' INTEGER,']\n        stmt += [lu.UNCPUS + ' TEXT,']\n        stmt += [lu.UNODES + ' TEXT,']\n        stmt += [lu.USRS + ' TEXT);']\n        c.execute(''.join(stmt))\n\n        stmt = ['CREATE TABLE IF NOT EXISTS %s (' % (_PROCM_TN)]\n        stmt += ['id INTEGER, ']\n        stmt += ['name TEXT,']\n        stmt += ['rss INTEGER,']\n        stmt += ['vsz INTEGER,']\n        stmt += ['pcpu TEXT,']\n        stmt += ['time TEXT);']\n        c.execute(''.join(stmt))\n\n        stmt = ['CREATE TABLE IF NOT EXISTS %s (' % (_CYCLEM_TN)]\n        stmt += ['id INTEGER,']\n        stmt += ['logname TEXT,']\n        stmt += [lu.CST + ' TIMESTAMP,']\n        stmt += [lu.CD + ' TIME,']\n        stmt += [lu.QD + ' TIME,']\n        stmt += [lu.NJC + ' INTEGER,']\n        stmt += [lu.NJR + ' INTEGER,']\n        stmt += [lu.NJFR + ' INTEGER,']\n        stmt += [lu.NJCAL + ' INTEGER,']\n        stmt += [lu.NJFP + ' INTEGER,']\n        stmt += [lu.NJP + ' INTEGER,']\n        stmt += [lu.TTC + ' INTEGER,']\n        stmt += [lu.SST + ' INTEGER);']\n        c.execute(''.join(stmt))\n\n        stmt = ['CREATE TABLE IF NOT EXISTS %s (' % 
(_ESTINFOSUM_TN)]\n        stmt += ['id INTEGER,']\n        stmt += ['logname TEXT,']\n        stmt += [lu.NJD + ' INTEGER,']\n        stmt += [lu.NJND + ' INTEGER,']\n        stmt += [lu.Ds15mn + ' INTEGER,']\n        stmt += [lu.Ds1hr + ' INTEGER,']\n        stmt += [lu.Ds3hr + ' INTEGER,']\n        stmt += [lu.Do3hr + ' INTEGER,']\n        stmt += [lu.DDm + ' INTEGER,']\n        stmt += [lu.DDM + ' INTEGER,']\n        stmt += [lu.DDA + ' INTEGER,']\n        stmt += [lu.DD50 + ' INTEGER);']\n        c.execute(''.join(stmt))\n\n        stmt = ['CREATE TABLE IF NOT EXISTS %s (' % (_ESTINFO_TN)]\n        stmt += ['id INTEGER,']\n        stmt += ['logname TEXT,']\n        stmt += [lu.JID + ' TEXT,']\n        stmt += [lu.Eat + ' INTEGER,']\n        stmt += [lu.JST + ' INTEGER,']\n        stmt += [lu.ESTR + ' INTEGER,']\n        stmt += [lu.ESTA + ' INTEGER,']\n        stmt += [lu.NEST + ' INTEGER,']\n        stmt += [lu.ND + ' INTEGER,']\n        stmt += [lu.JDD + ' INTEGER);']\n        c.execute(''.join(stmt))\n\n        stmt = ['CREATE TABLE IF NOT EXISTS %s (' % (_JOBM_TN)]\n        stmt += ['id INTEGER,']\n        stmt += ['logname TEXT,']\n        stmt += [lu.CST + ' TIMESTAMP,']\n        stmt += [lu.JID + ' TEXT,']\n        stmt += [lu.T2R + ' INTEGER,']\n        stmt += [lu.T2D + ' INTEGER,']\n        stmt += [lu.TiS + ' INTEGER,']\n        stmt += [lu.TTC + ' INTEGER);']\n        c.execute(''.join(stmt))\n        self.__dbobj.commit()\n        return self.__get_index(c)\n\n    def __write_data(self, tablename, data, logfile):\n        keys = ['id']\n        values = [str(self.__index)]\n        if logfile is not None:\n            keys.append('logname')\n            values.append('\\'' + str(logfile).replace(' ', '_') + '\\'')\n        for k, v in data.items():\n            if k == 'id':\n                continue\n            keys.append(str(k))\n            v = str(v)\n            if v.isdigit():\n                values.append(v)\n            else:\n          
      values.append('\\'' + v + '\\'')\n        _keys = ','.join(keys)\n        _values = ','.join(values)\n        c = self.__dbobj.cursor()\n        s = 'INSERT INTO %s (%s) VALUES (%s)' % (tablename, _keys, _values)\n        c.execute(s)\n        self.__dbobj.commit()\n\n    def __write_server_data(self, data, logfile=None):\n        self.__write_data(_SVRM_TN, data, logfile)\n\n    def __write_mom_data(self, data, logfile=None):\n        self.__write_data(_MOMM_TN, data, logfile)\n\n    def __write_sched_data(self, data, logfile=None):\n        for k, v in data.items():\n            if k == 'summary':\n                self.__write_data(_SCHEDM_TN, v, logfile)\n                continue\n            elif k == lu.EST:\n                if lu.ESTS in v:\n                    self.__write_estinfosum_data(v[lu.ESTS], logfile)\n                if lu.EJ in v:\n                    for j in v[lu.EJ]:\n                        if lu.Eat in j:\n                            dt = [str(s) for s in j[lu.Eat]]\n                            j[lu.Eat] = ','.join(dt)\n                        self.__write_estinfo_data(j, logfile)\n                continue\n            if 'jobs' in v:\n                for j in v['jobs']:\n                    j[lu.CST] = v[lu.CST]\n                    self.__write_job_data(j, logfile)\n                del v['jobs']\n            self.__write_cycle_data(v, logfile)\n\n    def __write_acct_data(self, data, logfile=None):\n        self.__write_data(_ACCTM_TN, data, logfile)\n\n    def __write_proc_data(self, data, logfile=None):\n        # the proc metrics table has no logname column\n        self.__write_data(_PROCM_TN, data, None)\n\n    def __write_cycle_data(self, data, logfile=None):\n        self.__write_data(_CYCLEM_TN, data, logfile)\n\n    def __write_estinfo_data(self, data, logfile=None):\n        self.__write_data(_ESTINFO_TN, data, logfile)\n\n    def __write_estinfosum_data(self, data, logfile=None):\n        self.__write_data(_ESTINFOSUM_TN, data, logfile)\n\n    def __write_job_data(self, data, 
logfile=None):\n        self.__write_data(_JOBM_TN, data, logfile)\n\n    def __write_test_data(self, data):\n        keys = ['id']\n        values = [str(self.__index)]\n        keys.append('suite')\n        values.append('\\'' + str(data['suite']) + '\\'')\n        keys.append('testcase')\n        values.append('\\'' + str(data['testcase']) + '\\'')\n        doc = []\n        for l in str(data['testdoc']).strip().split('\\n'):\n            doc.append(l.strip().replace('\\t', ' ').replace('\\'', '\\'\\''))\n        doc = ' '.join(doc)\n        keys.append('testdoc')\n        values.append('\\'' + doc + '\\'')\n        keys.append('start_time')\n        values.append('\\'' + str(data['start_time']) + '\\'')\n        keys.append('end_time')\n        values.append('\\'' + str(data['end_time']) + '\\'')\n        keys.append('duration')\n        values.append('\\'' + str(data['duration']) + '\\'')\n        keys.append('pbs_version')\n        values.append('\\'' + str(data['pbs_version']) + '\\'')\n        keys.append('testparam')\n        values.append('\\'' + str(data['testparam']) + '\\'')\n        keys.append('username')\n        values.append('\\'' + str(self.__username) + '\\'')\n        keys.append('ptl_version')\n        values.append('\\'' + str(self.__ptlversion) + '\\'')\n        keys.append('platform')\n        values.append('\\'' + str(self.__platform) + '\\'')\n        keys.append('status')\n        values.append('\\'' + str(data['status']) + '\\'')\n        sdata = data['status_data']\n        sdata = sdata.replace('\\'', '\\'\\'')\n        keys.append('status_data')\n        values.append('\\'' + sdata + '\\'')\n        _keys = ','.join(keys)\n        _values = ','.join(values)\n        c = self.__dbobj.cursor()\n        s = 'INSERT INTO %s (%s) VALUES (%s)' % (\n            _TESTRESULT_TN, _keys, _values)\n        c.execute(s)\n        self.__dbobj.commit()\n\n    def write(self, data, logfile=None):\n        if len(data) == 0:\n            return\n        if 'testdata' in data.keys():\n            self.__write_test_data(data['testdata'])\n        if 'metrics_data' in data.keys():\n            md = data['metrics_data']\n            if 'server' in md.keys():\n                
self.__write_server_data(md['server'], logfile)\n            if 'mom' in md.keys():\n                self.__write_mom_data(md['mom'], logfile)\n            if 'scheduler' in md.keys():\n                self.__write_sched_data(md['scheduler'], logfile)\n            if 'accounting' in md.keys():\n                self.__write_acct_data(md['accounting'], logfile)\n            if 'procs' in md.keys():\n                self.__write_proc_data(md['procs'], logfile)\n        self.__index += 1\n\n    def close(self, result=None):\n        self.__dbobj.commit()\n        self.__dbobj.close()\n        del self.__dbobj\n\n\nclass SQLiteDb(DBType):\n\n    \"\"\"\n    SQLite type database\n    \"\"\"\n\n    def __init__(self, dbtype, dbpath, dbaccess):\n        DBType.__init__(self, dbtype, dbpath, dbaccess)\n        if self.dbtype != 'sqlite':\n            _msg = 'db type does not match with my type (sqlite)'\n            raise PTLDbError(rc=1, rv=False, msg=_msg)\n        if self.dbpath is None:\n            _msg = 'Db path is required!'\n            raise PTLDbError(rc=1, rv=False, msg=_msg)\n        try:\n            import sqlite3 as db\n        except ImportError:\n            _msg = 'sqlite3 module is required'\n            _msg += ' for %s type database!' 
% (self.dbtype)\n            raise PTLDbError(rc=1, rv=False, msg=_msg)\n        try:\n            self.__dbobj = db.connect(self.dbpath)\n        except Exception as e:\n            _msg = 'Failed to connect to database:\\n%s\\n' % (str(e))\n            raise PTLDbError(rc=1, rv=False, msg=_msg)\n        self.__username = pwd.getpwuid(os.getuid())[0]\n        self.__platform = ' '.join(platform.uname()).strip()\n        self.__ptlversion = str(ptl.__version__)\n        self.__db_version = '1.0.0'\n        self.__index = self.__create_tables()\n\n    def __get_index(self, c):\n        idxs = []\n        for tn in (_TESTRESULT_TN, _SCHEDM_TN, _SVRM_TN, _MOMM_TN, _ACCTM_TN,\n                   _PROCM_TN, _CYCLEM_TN, _ESTINFOSUM_TN, _ESTINFO_TN,\n                   _JOBM_TN):\n            c.execute('SELECT max(id) from %s;' % (tn))\n            idxs.append(c.fetchone()[0])\n        # max(id) is NULL for an empty table; skip those entries\n        idxs = [i for i in idxs if i is not None]\n        if idxs:\n            return max(idxs) + 1\n        return 1\n\n    def __upgrade_db(self, version):\n        if version == self.__db_version:\n            return\n\n    def __create_tables(self):\n        c = self.__dbobj.cursor()\n\n        try:\n    
        stmt = ['CREATE TABLE %s (' % (_DBVER_TN)]\n            stmt += ['version TEXT);']\n            c.execute(''.join(stmt))\n        except BaseException:\n            stmt = 'SELECT version from %s;' % (_DBVER_TN)\n            version = c.execute(stmt).fetchone()[0]\n            self.__upgrade_db(version)\n            return self.__get_index(c)\n        stmt = ['INSERT INTO %s (version)' % (_DBVER_TN)]\n        stmt += [' VALUES (\\'%s\\');' % (self.__db_version)]\n        c.execute(''.join(stmt))\n\n        stmt = ['CREATE TABLE IF NOT EXISTS %s (' % (_TESTRESULT_TN)]\n        stmt += ['id INTEGER,']\n        stmt += ['suite TEXT,']\n        stmt += ['testcase TEXT,']\n        stmt += ['testdoc TEXT,']\n        stmt += ['start_time TEXT,']\n        stmt += ['end_time TEXT,']\n        stmt += ['duration TEXT,']\n        stmt += ['pbs_version TEXT,']\n        stmt += ['testparam TEXT,']\n        stmt += ['username TEXT,']\n        stmt += ['ptl_version TEXT,']\n        stmt += ['platform TEXT,']\n        stmt += ['status TEXT,']\n        stmt += ['status_data TEXT);']\n        c.execute(''.join(stmt))\n\n        stmt = ['CREATE TABLE IF NOT EXISTS %s (' % (_SCHEDM_TN)]\n        stmt += ['id INTEGER,']\n        stmt += ['logname TEXT,']\n        stmt += [lu.VER + ' TEXT,']\n        stmt += [lu.NC + ' INTEGER,']\n        stmt += [lu.NJR + ' INTEGER,']\n        stmt += [lu.NJC + ' INTEGER,']\n        stmt += [lu.NJFR + ' INTEGER,']\n        stmt += [lu.mCD + ' TIME,']\n        stmt += [lu.CD25 + ' TIME,']\n        stmt += [lu.CDA + ' TIME,']\n        stmt += [lu.CD50 + ' TIME,']\n        stmt += [lu.CD75 + ' TIME,']\n        stmt += [lu.MCD + ' TIME,']\n        stmt += [lu.mCT + ' INTEGER,']\n        stmt += [lu.MCT + ' INTEGER,']\n        stmt += [lu.TTC + ' TIME,']\n        stmt += [lu.DUR + ' TIME,']\n        stmt += [lu.SST + ' TIME,']\n        stmt += [lu.JRR + ' TEXT);']\n        c.execute(''.join(stmt))\n\n        stmt = ['CREATE TABLE IF NOT EXISTS %s (' % 
(_SVRM_TN)]\n        stmt += ['id INTEGER,']\n        stmt += ['logname TEXT,']\n        stmt += [lu.VER + ' TEXT,']\n        stmt += [lu.NJQ + ' INTEGER,']\n        stmt += [lu.NJR + ' INTEGER,']\n        stmt += [lu.NJE + ' INTEGER,']\n        stmt += [lu.JRR + ' TEXT,']\n        stmt += [lu.JER + ' TEXT,']\n        stmt += [lu.JSR + ' TEXT,']\n        stmt += [lu.NUR + ' TEXT,']\n        stmt += [lu.JWTm + ' TIME,']\n        stmt += [lu.JWT25 + ' TIME,']\n        stmt += [lu.JWT50 + ' TIME,']\n        stmt += [lu.JWTA + ' TIME,']\n        stmt += [lu.JWT75 + ' TIME,']\n        stmt += [lu.JWTM + ' TIME,']\n        stmt += [lu.JRTm + ' TIME,']\n        stmt += [lu.JRT25 + ' TIME,']\n        stmt += [lu.JRTA + ' TIME,']\n        stmt += [lu.JRT50 + ' TIME,']\n        stmt += [lu.JRT75 + ' TIME,']\n        stmt += [lu.JRTM + ' TIME);']\n        c.execute(''.join(stmt))\n\n        stmt = ['CREATE TABLE IF NOT EXISTS %s (' % (_MOMM_TN)]\n        stmt += ['id INTEGER,']\n        stmt += ['logname TEXT,']\n        stmt += [lu.VER + ' TEXT,']\n        stmt += [lu.NJQ + ' INTEGER,']\n        stmt += [lu.NJR + ' INTEGER,']\n        stmt += [lu.NJE + ' INTEGER,']\n        stmt += [lu.JRR + ' TEXT,']\n        stmt += [lu.JER + ' TEXT,']\n        stmt += [lu.JSR + ' TEXT);']\n        c.execute(''.join(stmt))\n\n        stmt = ['CREATE TABLE IF NOT EXISTS %s (' % (_ACCTM_TN)]\n        stmt += ['id INTEGER,']\n        stmt += ['logname TEXT,']\n        stmt += [lu.DUR + ' TEXT,']\n        stmt += [lu.NJQ + ' INTEGER,']\n        stmt += [lu.NJR + ' INTEGER,']\n        stmt += [lu.NJE + ' INTEGER,']\n        stmt += [lu.JRR + ' TEXT,']\n        stmt += [lu.JSR + ' TEXT,']\n        stmt += [lu.JER + ' TEXT,']\n        stmt += [lu.JWTm + ' TIME,']\n        stmt += [lu.JWT25 + ' TIME,']\n        stmt += [lu.JWT50 + ' TIME,']\n        stmt += [lu.JWTA + ' TIME,']\n        stmt += [lu.JWT75 + ' TIME,']\n        stmt += [lu.JWTM + ' TIME,']\n        stmt += [lu.JRTm + ' TIME,']\n      
  stmt += [lu.JRT25 + ' TIME,']\n        stmt += [lu.JRTA + ' TIME,']\n        stmt += [lu.JRT50 + ' TIME,']\n        stmt += [lu.JRT75 + ' TIME,']\n        stmt += [lu.JRTM + ' TIME,']\n        stmt += [lu.JNSm + ' INTEGER,']\n        stmt += [lu.JNS25 + ' REAL,']\n        stmt += [lu.JNSA + ' REAL,']\n        stmt += [lu.JNS50 + ' REAL,']\n        stmt += [lu.JNS75 + ' REAL,']\n        stmt += [lu.JNSM + ' REAL,']\n        stmt += [lu.JCSm + ' INTEGER,']\n        stmt += [lu.JCS25 + ' REAL,']\n        stmt += [lu.JCSA + ' REAL,']\n        stmt += [lu.JCS50 + ' REAL,']\n        stmt += [lu.JCS75 + ' REAL,']\n        stmt += [lu.JCSM + ' REAL,']\n        stmt += [lu.CPH + ' INTEGER,']\n        stmt += [lu.NPH + ' INTEGER,']\n        stmt += [lu.UNCPUS + ' TEXT,']\n        stmt += [lu.UNODES + ' TEXT,']\n        stmt += [lu.USRS + ' TEXT);']\n        c.execute(''.join(stmt))\n\n        stmt = ['CREATE TABLE IF NOT EXISTS %s (' % (_PROCM_TN)]\n        stmt += ['id INTEGER, ']\n        stmt += ['name TEXT,']\n        stmt += ['rss INTEGER,']\n        stmt += ['vsz INTEGER,']\n        stmt += ['pcpu TEXT,']\n        stmt += ['time TEXT);']\n        c.execute(''.join(stmt))\n\n        stmt = ['CREATE TABLE IF NOT EXISTS %s (' % (_CYCLEM_TN)]\n        stmt += ['id INTEGER,']\n        stmt += ['logname TEXT,']\n        stmt += [lu.CST + ' INTEGER,']\n        stmt += [lu.CD + ' TIME,']\n        stmt += [lu.QD + ' TIME,']\n        stmt += [lu.NJC + ' INTEGER,']\n        stmt += [lu.NJR + ' INTEGER,']\n        stmt += [lu.NJFR + ' INTEGER,']\n        stmt += [lu.NJCAL + ' INTEGER,']\n        stmt += [lu.NJFP + ' INTEGER,']\n        stmt += [lu.NJP + ' INTEGER,']\n        stmt += [lu.TTC + ' INTEGER,']\n        stmt += [lu.SST + ' INTEGER);']\n        c.execute(''.join(stmt))\n\n        stmt = ['CREATE TABLE IF NOT EXISTS %s (' % (_ESTINFOSUM_TN)]\n        stmt += ['id INTEGER,']\n        stmt += ['logname TEXT,']\n        stmt += [lu.NJD + ' INTEGER,']\n        stmt += 
[lu.NJND + ' INTEGER,']\n        stmt += [lu.Ds15mn + ' INTEGER,']\n        stmt += [lu.Ds1hr + ' INTEGER,']\n        stmt += [lu.Ds3hr + ' INTEGER,']\n        stmt += [lu.Do3hr + ' INTEGER,']\n        stmt += [lu.DDm + ' INTEGER,']\n        stmt += [lu.DDM + ' INTEGER,']\n        stmt += [lu.DDA + ' INTEGER,']\n        stmt += [lu.DD50 + ' INTEGER);']\n        c.execute(''.join(stmt))\n\n        stmt = ['CREATE TABLE IF NOT EXISTS %s (' % (_ESTINFO_TN)]\n        stmt += ['id INTEGER,']\n        stmt += ['logname TEXT,']\n        stmt += [lu.JID + ' TEXT,']\n        stmt += [lu.Eat + ' INTEGER,']\n        stmt += [lu.JST + ' INTEGER,']\n        stmt += [lu.ESTR + ' INTEGER,']\n        stmt += [lu.ESTA + ' INTEGER,']\n        stmt += [lu.NEST + ' INTEGER,']\n        stmt += [lu.ND + ' INTEGER,']\n        stmt += [lu.JDD + ' INTEGER);']\n        c.execute(''.join(stmt))\n\n        stmt = ['CREATE TABLE IF NOT EXISTS %s (' % (_JOBM_TN)]\n        stmt += ['id INTEGER,']\n        stmt += ['logname TEXT,']\n        stmt += [lu.CST + ' INTEGER,']\n        stmt += [lu.JID + ' TEXT,']\n        stmt += [lu.T2R + ' INTEGER,']\n        stmt += [lu.T2D + ' INTEGER,']\n        stmt += [lu.TiS + ' INTEGER,']\n        stmt += [lu.TTC + ' INTEGER);']\n        c.execute(''.join(stmt))\n        self.__dbobj.commit()\n        return self.__get_index(c)\n\n    def __write_data(self, tablename, data, logfile):\n        keys = ['id']\n        values = [str(self.__index)]\n        if logfile is not None:\n            keys.append('logname')\n            values.append('\\'' + str(logfile).replace(' ', '_') + '\\'')\n        for k, v in data.items():\n            if k == 'id':\n                continue\n            keys.append(str(k))\n            v = str(v)\n            if v.isdigit():\n                values.append(v)\n            else:\n                values.append('\\'' + v + '\\'')\n        _keys = ','.join(keys)\n        _values = ','.join(values)\n        c = self.__dbobj.cursor()\n  
      s = 'INSERT INTO %s (%s) VALUES (%s)' % (tablename, _keys, _values)\n        c.execute(s)\n        self.__dbobj.commit()\n\n    def __write_server_data(self, data, logfile=None):\n        self.__write_data(_SVRM_TN, data, logfile)\n\n    def __write_mom_data(self, data, logfile=None):\n        self.__write_data(_MOMM_TN, data, logfile)\n\n    def __write_sched_data(self, data, logfile=None):\n        for k, v in data.items():\n            if k == 'summary':\n                self.__write_data(_SCHEDM_TN, v, logfile)\n                continue\n            elif k == lu.EST:\n                if lu.ESTS in v:\n                    self.__write_estinfosum_data(v[lu.ESTS], logfile)\n                if lu.EJ in v:\n                    for j in v[lu.EJ]:\n                        if lu.Eat in j:\n                            dt = [str(s) for s in j[lu.Eat]]\n                            j[lu.Eat] = ','.join(dt)\n                        self.__write_estinfo_data(j, logfile)\n                continue\n            if 'jobs' in v:\n                for j in v['jobs']:\n                    j[lu.CST] = v[lu.CST]\n                    self.__write_job_data(j, logfile)\n                del v['jobs']\n            self.__write_cycle_data(v, logfile)\n\n    def __write_acct_data(self, data, logfile=None):\n        self.__write_data(_ACCTM_TN, data, logfile)\n\n    def __write_proc_data(self, data, logfile=None):\n        # the proc metrics table has no logname column\n        self.__write_data(_PROCM_TN, data, None)\n\n    def __write_cycle_data(self, data, logfile=None):\n        self.__write_data(_CYCLEM_TN, data, logfile)\n\n    def __write_estinfo_data(self, data, logfile=None):\n        self.__write_data(_ESTINFO_TN, data, logfile)\n\n    def __write_estinfosum_data(self, data, logfile=None):\n        self.__write_data(_ESTINFOSUM_TN, data, logfile)\n\n    def __write_job_data(self, data, logfile=None):\n        self.__write_data(_JOBM_TN, data, logfile)\n\n    def __write_test_data(self, data):\n        keys = ['id']\n        values 
= [str(self.__index)]\n        keys.append('suite')\n        values.append('\\'' + str(data['suite']) + '\\'')\n        keys.append('testcase')\n        values.append('\\'' + str(data['testcase']) + '\\'')\n        doc = []\n        for l in str(data['testdoc']).strip().split('\\n'):\n            doc.append(l.strip().replace('\\t', ' ').replace('\\'', '\\'\\''))\n        doc = ' '.join(doc)\n        keys.append('testdoc')\n        values.append('\\'' + doc + '\\'')\n        keys.append('start_time')\n        values.append('\\'' + str(data['start_time']) + '\\'')\n        keys.append('end_time')\n        values.append('\\'' + str(data['end_time']) + '\\'')\n        keys.append('duration')\n        values.append('\\'' + str(data['duration']) + '\\'')\n        keys.append('pbs_version')\n        values.append('\\'' + str(data['pbs_version']) + '\\'')\n        keys.append('testparam')\n        values.append('\\'' + str(data['testparam']) + '\\'')\n        keys.append('username')\n        values.append('\\'' + str(self.__username) + '\\'')\n        keys.append('ptl_version')\n        values.append('\\'' + str(self.__ptlversion) + '\\'')\n        keys.append('platform')\n        values.append('\\'' + str(self.__platform) + '\\'')\n        keys.append('status')\n        values.append('\\'' + str(data['status']) + '\\'')\n        sdata = data['status_data']\n        sdata = sdata.replace('\\'', '\\'\\'')\n        keys.append('status_data')\n        values.append('\\'' + sdata + '\\'')\n        _keys = ','.join(keys)\n        _values = ','.join(values)\n        c = self.__dbobj.cursor()\n        s = 'INSERT INTO %s (%s) VALUES (%s)' % (\n            _TESTRESULT_TN, _keys, _values)\n        c.execute(s)\n        self.__dbobj.commit()\n\n    def write(self, data, logfile=None):\n        if len(data) == 0:\n            return\n        if 'testdata' in data.keys():\n            self.__write_test_data(data['testdata'])\n        if 'metrics_data' in data.keys():\n            md = data['metrics_data']\n            if 'server' in md.keys():\n                self.__write_server_data(md['server'], logfile)\n            if 'mom' in md.keys():\n                self.__write_mom_data(md['mom'], logfile)\n            if 'scheduler' in 
md.keys():\n                self.__write_sched_data(md['scheduler'], logfile)\n            if 'accounting' in md.keys():\n                self.__write_acct_data(md['accounting'], logfile)\n            if 'procs' in md.keys():\n                self.__write_proc_data(md['procs'], logfile)\n        self.__index += 1\n\n    def close(self, result=None):\n        self.__dbobj.commit()\n        self.__dbobj.close()\n        del self.__dbobj\n\n\nclass FileDb(DBType):\n\n    \"\"\"\n    File type database\n    \"\"\"\n\n    def __init__(self, dbtype, dbpath, dbaccess):\n        DBType.__init__(self, dbtype, dbpath, dbaccess)\n        if self.dbtype != 'file':\n            _msg = 'db type does not match with my type (file)'\n            raise PTLDbError(rc=1, rv=False, msg=_msg)\n        if self.dbpath is None:\n            _msg = 'Db path is required!'\n            raise PTLDbError(rc=1, rv=False, msg=_msg)\n        self.__separator1 = '=' * 80\n        self.__separator2 = '___m_oo_m___'\n        self.__username = pwd.getpwuid(os.getuid())[0]\n        self.__platform = ' '.join(platform.uname()).strip()\n        self.__ptlversion = str(ptl.__version__)\n        self.__dbobj = {}\n        self.__index = 1\n\n    def __write_data(self, key, data, logfile):\n        if key not in self.__dbobj.keys():\n            f = os.path.join(self.dbdir, key + '.db')\n            self.__dbobj[key] = open(f, 'w+')\n        msg = [self.__separator1]\n        msg += ['id = %s' % (self.__index)]\n        if logfile is not None:\n            msg += ['logfile = %s' % (logfile)]\n        for k, v in data.items():\n            if k == 'id':\n                continue\n            msg += [str(k) + ' = ' + str(v)]\n        msg += [self.__separator1]\n        self.__dbobj[key].write('\\n'.join(msg) + '\\n')\n        self.__dbobj[key].flush()\n\n    def __write_server_data(self, data, logfile=None):\n        self.__write_data(_SVRM_TN, data, logfile)\n\n    def __write_mom_data(self, data, 
logfile=None):\n        self.__write_data(_MOMM_TN, data, logfile)\n\n    def __write_sched_data(self, data, logfile=None):\n        for k, v in data.items():\n            if k == 'summary':\n                self.__write_data(_SCHEDM_TN, v, logfile)\n                continue\n            elif k == lu.EST:\n                if lu.ESTS in v:\n                    self.__write_estinfosum_data(v[lu.ESTS], logfile)\n                if lu.EJ in v:\n                    for j in v[lu.EJ]:\n                        if lu.Eat in j:\n                            dt = [str(s) for s in j[lu.Eat]]\n                            j[lu.Eat] = ','.join(dt)\n                        self.__write_estinfo_data(j, logfile)\n                continue\n            if 'jobs' in v:\n                for j in v['jobs']:\n                    j[lu.CST] = v[lu.CST]\n                    self.__write_job_data(j, logfile)\n                del v['jobs']\n            self.__write_cycle_data(v, logfile)\n\n    def __write_acct_data(self, data, logfile=None):\n        self.__write_data(_ACCTM_TN, data, logfile)\n\n    def __write_proc_data(self, data, logfile=None):\n        self.__write_data(_PROCM_TN, data, None)\n\n    def __write_cycle_data(self, data, logfile=None):\n        self.__write_data(_CYCLEM_TN, data, logfile)\n\n    def __write_estinfo_data(self, data, logfile=None):\n        self.__write_data(_ESTINFO_TN, data, logfile)\n\n    def __write_estinfosum_data(self, data, logfile=None):\n        self.__write_data(_ESTINFOSUM_TN, data, logfile)\n\n    def __write_job_data(self, data, logfile=None):\n        self.__write_data(_JOBM_TN, data, logfile)\n\n    def __write_test_data(self, data):\n        if _TESTRESULT_TN not in self.__dbobj.keys():\n            self.__dbobj[_TESTRESULT_TN] = open(self.dbpath, 'w+')\n        msg = [self.__separator1]\n        msg += ['id = %s' % (self.__index)]\n        msg += ['suite = %s' % (data['suite'])]\n        msg += ['testcase = %s' % (data['testcase'])]\n        
doc = []\n        for l in str(data['testdoc']).strip().split('\\n'):\n            doc.append(l.strip())\n        doc = ' '.join(doc)\n        msg += ['testdoc = %s' % (doc)]\n        msg += ['start_time = %s' % (str(data['start_time']))]\n        msg += ['end_time = %s' % (str(data['end_time']))]\n        msg += ['duration = %s' % (str(data['duration']))]\n        msg += ['pbs_version = %s' % (data['pbs_version'])]\n        msg += ['testparam = %s' % (data['testparam'])]\n        msg += ['username = %s' % (self.__username)]\n        msg += ['ptl_version = %s' % (self.__ptlversion)]\n        msg += ['platform = %s' % (self.__platform)]\n        msg += ['status = %s' % (data['status'])]\n        msg += ['status_data = ']\n        msg += [self.__separator2]\n        msg += ['%s' % (str(data['status_data']))]\n        msg += [self.__separator2]\n        msg += [self.__separator1]\n        self.__dbobj[_TESTRESULT_TN].write('\\n'.join(msg) + '\\n')\n        self.__dbobj[_TESTRESULT_TN].flush()\n\n    def write(self, data, logfile=None):\n        if len(data) == 0:\n            return\n        if 'testdata' in data.keys():\n            self.__write_test_data(data['testdata'])\n        if 'metrics_data' in data.keys():\n            md = data['metrics_data']\n            if 'server' in md.keys():\n                self.__write_server_data(md['server'], logfile)\n            if 'mom' in md.keys():\n                self.__write_mom_data(md['mom'], logfile)\n            if 'scheduler' in md.keys():\n                self.__write_sched_data(md['scheduler'], logfile)\n            if 'accounting' in md.keys():\n                self.__write_acct_data(md['accounting'], logfile)\n            if 'procs' in md.keys():\n                self.__write_proc_data(md['procs'], logfile)\n        self.__index += 1\n\n    def close(self, result=None):\n        for v in self.__dbobj.values():\n            v.write('\\n')\n            v.flush()\n            v.close()\n\n\nclass HTMLDb(DBType):\n\n 
   \"\"\"\n    HTML type database\n    \"\"\"\n\n    def __init__(self, dbtype, dbpath, dbaccess):\n        DBType.__init__(self, dbtype, dbpath, dbaccess)\n        if self.dbtype != 'html':\n            _msg = 'db type does not match with my type(html)'\n            raise PTLDbError(rc=1, rv=False, msg=_msg)\n        if self.dbpath is None:\n            _msg = 'Db path require!'\n            raise PTLDbError(rc=1, rv=False, msg=_msg)\n        elif not self.dbpath.endswith('.html'):\n            self.dbpath = self.dbpath.rstrip('.db') + '.html'\n        self.__cmd = [os.path.basename(sys.argv[0])]\n        self.__cmd += sys.argv[1:]\n        self.__username = pwd.getpwuid(os.getuid())[0]\n        self.__platform = ' '.join(platform.uname()).strip()\n        self.__ptlversion = str(ptl.__version__)\n        self.__dbobj = {}\n        self.__index = 1\n\n    def __write_test_html_header(self, data):\n        _title = 'PTL Test Report of %s' % (data['pbs_version'])\n        __s = []\n        __s += ['<!DOCTYPE html><head>']\n        __s += ['<title>%s</title>' % (_title)]\n        __s += ['<style type=\"text/css\">']\n        __s += [' * {']\n        __s += ['     font-family: verdana;']\n        __s += [' }']\n        __s += [' h1 {']\n        __s += ['     text-align: center;']\n        __s += [' }']\n        __s += [' .data {']\n        __s += ['     font-size: 15px;']\n        __s += ['     font-weight: normal;']\n        __s += ['     width: 100%;']\n        __s += [' }']\n        __s += [' .data button {']\n        __s += ['     float: left;']\n        __s += ['     margin-right: 4px;']\n        __s += ['     border: 1px solid #d0d0d0;']\n        __s += ['     font-weight: bold;']\n        __s += ['     outline: none;']\n        __s += ['     width: 30px;']\n        __s += ['     height: 30px;']\n        __s += ['     cursor: pointer;']\n        __s += ['     background-color: #eeeeee;']\n        __s += [' }']\n        __s += [' .data div {']\n        __s += ['  
   text-align: left;']\n        __s += [' }']\n        __s += [' .data table {']\n        __s += ['     border-spacing: 0px;']\n        __s += ['     margin-top: 6px;']\n        __s += ['     text-align: center;']\n        __s += [' }']\n        __s += [' .data th {']\n        __s += ['     border: 1px solid #d0d0d0;']\n        __s += ['     border-right: 0px;']\n        __s += ['     background-color: #eeeeee;']\n        __s += ['     font-weight: normal;']\n        __s += ['     width: 13%;']\n        __s += ['     padding: 5px;']\n        __s += [' }']\n        __s += [' .data th.innert:last-child {']\n        __s += ['     width: 100%;']\n        __s += [' }']\n        __s += [' .data th.tsname {']\n        __s += ['     font-weight: bold;']\n        __s += ['     width: 500px;']\n        __s += ['     text-align: left;']\n        __s += [' }']\n        __s += [' .data th.pass {']\n        __s += ['     color: #3c763d;']\n        __s += ['     background-color: #dff0d8;']\n        __s += [' }']\n        __s += [' .data th.skip {']\n        __s += ['     color: #31708f;']\n        __s += ['     background-color: #d9edf7;']\n        __s += [' }']\n        __s += [' .data th.fail,th.error,th.timedout {']\n        __s += ['     color: #a94442;']\n        __s += ['     background-color: #f2dede;']\n        __s += [' }']\n        __s += [' .data table th:last-child {']\n        __s += ['     border: 1px solid #d0d0d0;']\n        __s += [' }']\n        __s += [' .data td {']\n        __s += ['     border: 1px solid #d0d0d0;']\n        __s += ['     border-top: 0px;']\n        __s += ['     border-right: 0px;']\n        __s += ['     padding: 5px;']\n        __s += [' }']\n        __s += [' .data td.pass_td {']\n        __s += ['     background-color: #dff0d8;']\n        __s += ['     color: #3c763d;']\n        __s += [' }']\n        __s += [' .data td.skip_td {']\n        __s += ['     background-color: #d9edf7;']\n        __s += ['     color: #31708f;']\n        __s 
+= [' }']\n        __s += [' .data td.fail_td,td.error_td,td.timedout_td {']\n        __s += ['     color: #a94442;']\n        __s += ['     background-color: #f2dede;']\n        __s += [' }']\n        __s += [' .data tr td:last-child {']\n        __s += ['     border: 1px solid #d0d0d0;']\n        __s += ['     border-top: 0px;']\n        __s += ['     text-align: left;']\n        __s += ['     word-break: break-all;']\n        __s += ['     white-space: pre-wrap;']\n        __s += [' }']\n        __s += [' .flt td {']\n        __s += ['     padding-right: 25px;']\n        __s += ['     padding-bottom: 10px;']\n        __s += ['     font-size: 14px;']\n        __s += [' }']\n        __s += ['</style><script type=\"text/javascript\">']\n        __s += ['function toggle(id) {']\n        __s += ['    var b = document.getElementById(id);']\n        __s += ['    if (b == null) {']\n        __s += ['        return;']\n        __s += ['    }']\n        __s += ['    b.textContent = b.textContent == \"+\" ? 
\"-\" : \"+\";']\n        __s += ['    if (b.textContent == \"-\") {']\n        __s += ['        var table = document.getElementById(id + \"_t\");']\n        __s += ['        table.style.display = \"\";']\n        __s += ['        var i = b.offsetLeft + b.offsetWidth;']\n        __s += ['        i = i - b.clientLeft - 1;']\n        __s += ['        table.style.marginLeft = i.toString() + \"px\";']\n        __s += ['        sessionStorage.setItem(id, \"\");']\n        __s += ['    } else {']\n        __s += ['        var table = document.getElementById(id + \"_t\");']\n        __s += ['        table.style.display = \"none\";']\n        __s += ['        sessionStorage.removeItem(id);']\n        __s += ['    }']\n        __s += ['}']\n        __s += ['function add_ts(tsn, n) {']\n        __s += ['    sum = 0;']\n        __s += ['    if (tsn == \"Summary\")']\n        __s += ['        sum = 1;']\n        __s += ['    if (document.getElementById(tsn + \"_d\") != null) {']\n        __s += ['        return;']\n        __s += ['    }']\n        __s += ['    var div = document.createElement(\"div\");']\n        __s += ['    div.setAttribute(\"class\", \"data\");']\n        __s += ['    div.setAttribute(\"id\", tsn + \"_d\");']\n        __s += ['    if (sum == 1) {']\n        __s += ['        div.setAttribute(\"style\", \"margin-bottom: 15px;\");']\n        __s += ['    }']\n        __s += ['    document.body.appendChild(div);']\n        __s += ['    if (n != null) {']\n        __s += ['        return;']\n        __s += ['    }']\n        __s += ['    var btn = document.createElement(\"button\");']\n        __s += ['    btn.appendChild(document.createTextNode(\"+\"));']\n        __s += ['    div.appendChild(btn);']\n        __s += ['    btn.setAttribute(\"id\", tsn);']\n        __s += ['    var t = \"toggle(\\\\\"\" + tsn + \"\\\\\")\";']\n        __s += ['    btn.setAttribute(\"onclick\", t);']\n        __s += ['    if (sum == 1) {']\n        __s += ['        
btn.setAttribute(\"style\", \"visibility: hidden;\");']\n        __s += ['    }']\n        __s += ['    var table = document.createElement(\"table\");']\n        __s += ['    div.appendChild(table);']\n        __s += ['    table.setAttribute(\"id\", tsn + \"_i\");']\n        __s += ['    var th = document.createElement(\"th\");']\n        __s += ['    th.setAttribute(\"class\", \"tsname\");']\n        __s += ['    if (sum == 1) {']\n        __s += ['        th.setAttribute(\"style\", \"text-align: center;\");']\n        __s += ['    }']\n        __s += ['    th.appendChild(document.createTextNode(tsn));']\n        __s += ['    table.appendChild(th);']\n        __s += ['    var th = document.createElement(\"th\");']\n        __s += ['    th.setAttribute(\"class\", \"run\");']\n        __s += ['    if (sum == 1) {']\n        __s += ['        th.setAttribute(\"style\", \"font-weight: bold;\");']\n        __s += ['    }']\n        __s += ['    table.appendChild(th);']\n        __s += ['    var th = document.createElement(\"th\");']\n        __s += ['    th.setAttribute(\"class\", \"pass\");']\n        __s += ['    if (sum == 1) {']\n        __s += ['        th.setAttribute(\"style\", \"font-weight: bold;\");']\n        __s += ['    }']\n        __s += ['    table.appendChild(th);']\n        __s += ['    f = document.createElement(\"font\");']\n        __s += ['    f.setAttribute(\"style\", \"color: #ade4ad\");']\n        __s += ['    th.appendChild(f);']\n        __s += ['    tsn = document.createTextNode(\"Passed: 0\");']\n        __s += ['    f.appendChild(tsn);']\n        __s += ['    var th = document.createElement(\"th\");']\n        __s += ['    th.setAttribute(\"class\", \"skip\");']\n        __s += ['    if (sum == 1) {']\n        __s += ['        th.setAttribute(\"style\", \"font-weight: bold;\");']\n        __s += ['    }']\n        __s += ['    table.appendChild(th);']\n        __s += ['    f = document.createElement(\"font\");']\n        __s += ['    
f.setAttribute(\"style\", \"color: #b5d7f3\");']\n        __s += ['    th.appendChild(f);']\n        __s += ['    tsn = document.createTextNode(\"Skipped: 0\");']\n        __s += ['    f.appendChild(tsn);']\n        __s += ['    var th = document.createElement(\"th\");']\n        __s += ['    th.setAttribute(\"class\", \"fail\");']\n        __s += ['    if (sum == 1) {']\n        __s += ['        th.setAttribute(\"style\", \"font-weight: bold;\");']\n        __s += ['    }']\n        __s += ['    table.appendChild(th);']\n        __s += ['    f = document.createElement(\"font\");']\n        __s += ['    f.setAttribute(\"style\", \"color: #efc0bf\");']\n        __s += ['    th.appendChild(f);']\n        __s += ['    tsn = document.createTextNode(\"Failed: 0\");']\n        __s += ['    f.appendChild(tsn);']\n        __s += ['    var th = document.createElement(\"th\");']\n        __s += ['    th.setAttribute(\"class\", \"error\");']\n        __s += ['    if (sum == 1) {']\n        __s += ['        th.setAttribute(\"style\", \"font-weight: bold;\");']\n        __s += ['    }']\n        __s += ['    table.appendChild(th);']\n        __s += ['    f = document.createElement(\"font\");']\n        __s += ['    f.setAttribute(\"style\", \"color: #efc0bf\");']\n        __s += ['    th.appendChild(f);']\n        __s += ['    tsn = document.createTextNode(\"Error: 0\");']\n        __s += ['    f.appendChild(tsn);']\n        __s += ['    var th = document.createElement(\"th\");']\n        __s += ['    th.setAttribute(\"class\", \"timedout\");']\n        __s += ['    if (sum == 1) {']\n        __s += ['        th.setAttribute(\"style\", \"font-weight: bold;\");']\n        __s += ['    }']\n        __s += ['    table.appendChild(th);']\n        __s += ['    f = document.createElement(\"font\");']\n        __s += ['    f.setAttribute(\"style\", \"color: #efc0bf\");']\n        __s += ['    th.appendChild(f);']\n        __s += ['    tsn = document.createTextNode(\"TimedOut: 
0\");']\n        __s += ['    f.appendChild(tsn);']\n        __s += ['}']\n        __s += ['function add_th(tsn) {']\n        __s += ['    if (document.getElementById(tsn + \"_t\") != null) {']\n        __s += ['        return;']\n        __s += ['    }']\n        __s += ['    var div = document.getElementById(tsn + \"_d\");']\n        __s += ['    var table = document.createElement(\"table\");']\n        __s += ['    table.setAttribute(\"id\", tsn + \"_t\");']\n        __s += ['    table.setAttribute(\"style\", \"display: none\");']\n        __s += ['    div.appendChild(table);']\n        __s += ['    var tr = document.createElement(\"tr\");']\n        __s += ['    table.appendChild(tr);']\n        __s += ['    var lenh = datah.length;']\n        __s += ['    for (var i = 0; i < lenh; i++) {']\n        __s += ['        var th = document.createElement(\"th\");']\n        __s += ['        if (i < (lenh-1)) {']\n        __s += ['            th.setAttribute(\"style\", \"border-right: 0px\");']\n        __s += ['        }']\n        __s += ['        th.setAttribute(\"class\", \"innert\");']\n        __s += ['        var txt = document.createTextNode(datah[i]);']\n        __s += ['        th.appendChild(txt);']\n        __s += ['        tr.appendChild(th);']\n        __s += ['    }']\n        __s += ['}']\n        __s += ['function restore(tc) {']\n        __s += ['    var tco = document.getElementById(tc + \"_o\");']\n        __s += ['    tco.removeAttribute(\"style\");']\n        __s += ['    tco.removeAttribute(\"id\");']\n        __s += ['    var tc = document.getElementById(tc);']\n        __s += ['    tc.parentNode.removeChild(tc);']\n        __s += ['}']\n        __s += ['function add_row(d, sel) {']\n        __s += ['    var s = d.status.toLowerCase();']\n        __s += ['    if (s != sel && sel != \"all\") {']\n        __s += ['        return;']\n        __s += ['    }']\n        __s += ['    if (sel == \"all\") {']\n        __s += ['        sel = d.suite;']\n  
      __s += ['    }']\n        __s += ['    var table = document.getElementById(sel + \"_t\");']\n        __s += ['    var tr = document.createElement(\"tr\");']\n        __s += ['    table.appendChild(tr);']\n        __s += ['    var td = document.createElement(\"td\");']\n        __s += ['    td.setAttribute(\"class\", s + \"_td\");']\n        __s += ['    var tc = d.suite + \".\" + d.testcase']\n        __s += ['    td.appendChild(document.createTextNode(tc));']\n        __s += ['    tr.appendChild(td);']\n        __s += ['    var td = document.createElement(\"td\");']\n        __s += ['    td.setAttribute(\"class\", s + \"_td\");']\n        __s += ['    var txt = document.createTextNode(d.duration);']\n        __s += ['    td.appendChild(txt);']\n        __s += ['    tr.appendChild(td);']\n        __s += ['    var td = document.createElement(\"td\");']\n        __s += ['    td.setAttribute(\"class\", s + \"_td\");']\n        __s += ['    td.appendChild(document.createTextNode(d.status));']\n        __s += ['    tr.appendChild(td);']\n        __s += ['    tr.setAttribute(\"class\", s);']\n        __s += ['    var td = document.createElement(\"td\");']\n        __s += ['    td.setAttribute(\"class\", s + \"_td\");']\n        __s += ['    var lines = d.status_data.split(\"\\\\n\");']\n        __s += ['    var llen = lines.length;']\n        __s += ['    if (llen > 10) {']\n        __s += ['        var fl = lines.slice(0, 3).join(\"\\\\n\");']\n        __s += ['        td.appendChild(document.createTextNode(fl));']\n        __s += ['        var a = document.createElement(\"a\");']\n        __s += ['        var scr = \"javascript:restore(\\\\\"\" + tc + \"\\\\\")\";']\n        __s += ['        a.setAttribute(\"href\", scr);']\n        __s += ['        var txt = document.createTextNode(\"\\\\n...\\\\n\\\\n\");']\n        __s += ['        a.appendChild(txt);']\n        __s += ['        td.setAttribute(\"id\", tc);']\n        __s += ['        td.appendChild(a);']\n    
    __s += ['        lines = lines.slice(llen - 4, llen).join(\"\\\\n\");']\n        __s += ['        td.appendChild(document.createTextNode(lines));']\n        __s += ['        var tdo = document.createElement(\"td\");']\n        __s += ['        tdo.setAttribute(\"class\", s + \"_td\");']\n        __s += ['        tdo.setAttribute(\"id\", tc + \"_o\");']\n        __s += ['        tdo.setAttribute(\"style\", \"display: none\");']\n        __s += ['        tr.appendChild(tdo);']\n        __s += ['        var txt = document.createTextNode(d.status_data);']\n        __s += ['        tdo.appendChild(txt);']\n        __s += ['    } else {']\n        __s += ['        var txt = document.createTextNode(d.status_data);']\n        __s += ['        td.appendChild(txt);']\n        __s += ['    }']\n        __s += ['    tr.appendChild(td);']\n        __s += ['    var tc = document.getElementById(sel + \"_i\");']\n        __s += ['    if (tc == null) {']\n        __s += ['        return;']\n        __s += ['    }']\n        __s += ['    var tt = tc.getElementsByClassName(s)[0];']\n        __s += ['    var t = tt.textContent.split(\" \");']\n        __s += ['    var i = parseInt(t[1]) + 1;']\n        __s += ['    tt.textContent = t[0] + \" \" + i;']\n        __s += ['    var tt = tc.getElementsByClassName(\"run\")[0]']\n        __s += ['    if (tt.textContent == \"\") {']\n        __s += ['        tt.textContent = \"Run: 1\";']\n        __s += ['    } else {']\n        __s += ['        var t = tt.textContent.split(\" \");']\n        __s += ['        var i = parseInt(t[1]) + 1;']\n        __s += ['        tt.textContent = t[0] + \" \" + i;']\n        __s += ['    }']\n        __s += ['    var tc = document.getElementById(\"Summary_i\");']\n        __s += ['    if (tc == null) {']\n        __s += ['        return;']\n        __s += ['    }']\n        __s += ['    var tt = tc.getElementsByClassName(s)[0];']\n        __s += ['    var t = tt.textContent.split(\" \");']\n        __s 
+= ['    var i = parseInt(t[1]) + 1;']\n        __s += ['    tt.textContent = t[0] + \" \" + i;']\n        __s += ['    var tt = tc.getElementsByClassName(\"run\")[0]']\n        __s += ['    if (tt.textContent == \"\") {']\n        __s += ['        tt.textContent = \"Run: 1\";']\n        __s += ['    } else {']\n        __s += ['        var t = tt.textContent.split(\" \");']\n        __s += ['        var i = parseInt(t[1]) + 1;']\n        __s += ['        tt.textContent = t[0] + \" \" + i;']\n        __s += ['    }']\n        __s += ['}']\n        __s += ['function add_dt(sel) {']\n        __s += ['    var len = data.length;']\n        __s += ['    for (i = 0; i < len; i++) {']\n        __s += ['        var d = data[i];']\n        __s += ['        if (sel == \"all\") {']\n        __s += ['            add_ts(\"Summary\");']\n        __s += ['            add_ts(d.suite);']\n        __s += ['            add_th(d.suite);']\n        __s += ['        } else {']\n        __s += ['            add_ts(sel, 1);']\n        __s += ['            add_th(sel, 1);']\n        __s += ['        }']\n        __s += ['        add_row(d, sel);']\n        __s += ['    }']\n        __s += ['    var size = \"40px\";']\n        __s += ['    if (len > 0) {']\n        __s += ['        var b = document.getElementById(data[0].suite);']\n        __s += ['        if (b != null) {']\n        __s += ['            var i = b.offsetLeft + b.offsetWidth;']\n        __s += ['            i = i - b.clientLeft - 1;']\n        __s += ['            size = i.toString() + \"px\";']\n        __s += ['        }']\n        __s += ['    }']\n        __s += ['    var t = document.getElementById(\"flt\");']\n        __s += ['    t.style.marginLeft = size;']\n        __s += ['    if (sel != \"all\") {']\n        __s += ['        t = document.getElementById(sel + \"_t\");']\n        __s += ['        t.style.display = \"\";']\n        __s += ['        t.style.marginLeft = size;']\n        __s += ['    }']\n        __s 
+= ['    sessionStorage.removeItem(\"_filter_\");']\n        __s += ['    for (i = 0; i < sessionStorage.length; i++) {']\n        __s += ['        toggle(sessionStorage.key(i));']\n        __s += ['    }']\n        __s += ['    sessionStorage.setItem(\"_filter_\", sel);']\n        __s += ['}']\n        __s += ['function filter(n) {']\n        __s += ['    var sf = sessionStorage.getItem(\"_filter_\");']\n        __s += ['    if (sf == null)']\n        __s += ['        sf = \"all\";']\n        __s += ['    var map = [sf, \"all\", \"pass\", \"skip\", \"fail\", \"error\",']\n        __s += ['               \"timedout\"];']\n        __s += ['    var sel = map[n];']\n        __s += ['    var rbs = document.getElementsByTagName(\"input\");']\n        __s += ['    for (i = 0; i < rbs.length; i++) {']\n        __s += ['        if (rbs[i].name == sel) {']\n        __s += ['            rbs[i].checked = true;']\n        __s += ['        } else {']\n        __s += ['            rbs[i].checked = false;']\n        __s += ['        }']\n        __s += ['    }']\n        __s += ['    var els = document.getElementsByClassName(\"data\");']\n        __s += ['    while (els.length > 0) {']\n        __s += ['        els[0].parentNode.removeChild(els[0]);']\n        __s += ['    }']\n        __s += ['    add_dt(sel);']\n        __s += ['}']\n        __s += ['document.addEventListener(\"keydown\", function(event) {']\n        __s += ['    if (event.shiftKey || event.ctrlKey || event.altKey']\n        __s += ['        || event.metaKey) {']\n        __s += ['        return;']\n        __s += ['    }']\n        __s += ['    //              a,  p,  s,  f,  e,  t']\n        __s += ['    var map = [-1, 65, 80, 83, 70, 69, 84]']\n        __s += ['    if (map.indexOf(event.keyCode) != -1) {']\n        __s += ['        filter(map.indexOf(event.keyCode));']\n        __s += ['    }']\n        __s += ['});']\n        __s += ['</script></head><body onload=\"filter(0)\">']\n        __s += 
['<h1>%s</h1>' % (_title)]\n        _s = 'margin: 30px;margin-bottom: 15px;text-align: left;'\n        __s += ['<div style=\"%s\"><table>' % (_s)]\n        _s = '<tr><th>%s:</th><td>%s</td></tr>'\n        __s += [_s % ('Command', ' '.join(self.__cmd))]\n        __s += [_s % ('TestParm', data['testparam'])]\n        __s += [_s % ('User', self.__username)]\n        __s += [_s % ('PTL Version', self.__ptlversion)]\n        __s += [_s % ('Platform', self.__platform)]\n        __s += ['</table></div><div id=\"flt\"><table class=\"flt\"><tr>']\n        _s = '<td><input name=\"%s\" type=\"radio\" onclick=\"filter(%d);\"/>%s</td>'\n        __s += [_s % ('all', 1, 'Show All')]\n        __s += [_s % ('pass', 2, 'Show only \"Passed\"')]\n        __s += [_s % ('skip', 3, 'Show only \"Skipped\"')]\n        __s += [_s % ('fail', 4, 'Show only \"Failed\"')]\n        __s += [_s % ('error', 5, 'Show only \"Error\"')]\n        __s += [_s % ('timedout', 6, 'Show only \"TimedOut\"')]\n        __s += ['</tr></table></div><script type=\"text/javascript\">']\n        __s += ['datah = [\"TestCase\", \"Duration\", \"Status\", \"Status Data\"];']\n        __s += ['data = [']\n        __s += ['];</script></body></html>']\n        self.__dbobj[_TESTRESULT_TN].write('\\n'.join(__s))\n        self.__dbobj[_TESTRESULT_TN].flush()\n\n    def __write_test_data(self, data):\n        if _TESTRESULT_TN not in self.__dbobj.keys():\n            self.__dbobj[_TESTRESULT_TN] = open(self.dbpath, 'w+')\n            self.__write_test_html_header(data)\n        d = {}\n        d['suite'] = data['suite']\n        d['testcase'] = data['testcase']\n        d['status'] = data['status']\n        d['status_data'] = data['status_data']\n        d['duration'] = str(data['duration'])\n        self.__dbobj[_TESTRESULT_TN].seek(0, os.SEEK_END)\n        self.__dbobj[_TESTRESULT_TN].seek(\n            self.__dbobj[_TESTRESULT_TN].tell() - 27, os.SEEK_SET)\n        t = self.__dbobj[_TESTRESULT_TN].readline().strip()\n     
   line = ''\n        if t != '[':\n            line += ',\\n'\n        else:\n            line += '\\n'\n        line += str(d) + '\\n];</script></body></html>'\n        self.__dbobj[_TESTRESULT_TN].seek(0, os.SEEK_END)\n        self.__dbobj[_TESTRESULT_TN].seek(\n            self.__dbobj[_TESTRESULT_TN].tell() - 26, os.SEEK_SET)\n        self.__dbobj[_TESTRESULT_TN].write(line)\n        self.__dbobj[_TESTRESULT_TN].flush()\n        self.__index += 1\n\n    def write(self, data, logfile=None):\n        if len(data) == 0:\n            return\n        if 'testdata' in data.keys():\n            self.__write_test_data(data['testdata'])\n\n    def close(self, result=None):\n        for v in self.__dbobj.values():\n            v.write('\\n')\n            v.flush()\n            v.close()\n\n\nclass JSONDb(DBType):\n\n    \"\"\"\n    JSON type database\n    \"\"\"\n\n    def __init__(self, dbtype, dbpath, dbaccess):\n        super(JSONDb, self).__init__(dbtype, dbpath, dbaccess)\n        if self.dbtype != 'json':\n            _msg = 'db type does not match with my type(json)'\n            raise PTLDbError(rc=1, rv=False, msg=_msg)\n        if not self.dbpath:\n            _msg = 'Db path require!'\n            raise PTLDbError(rc=1, rv=False, msg=_msg)\n        elif not self.dbpath.endswith('.json'):\n            self.dbpath = self.dbpath.rstrip('.db') + '.json'\n        self.jdata = {}\n        self.__cmd = [os.path.basename(sys.argv[0])]\n        self.__cmd += sys.argv[1:]\n        self.__cmd = ' '.join(self.__cmd)\n        self.res_data = PTLJsonData(command=self.__cmd)\n\n    def __write_test_data(self, data):\n        prev_data = copy.deepcopy(self.jdata)\n        self.jdata = self.res_data.get_json(data=data, prev_data=prev_data)\n        with open(self.dbpath, 'w') as fd:\n            json.dump(self.jdata, fd, indent=2)\n            fd.write(\"\\n\")\n\n    def write(self, data, logfile=None):\n        if len(data) == 0:\n            return\n        if 'testdata' 
in data.keys():\n            self.__write_test_data(data['testdata'])\n\n    def close(self, result=None):\n        if result is not None and self.jdata:\n            dur = str(result.stop - result.start)\n            self.jdata['result']['start'] = str(result.start)\n            self.jdata['result'][\"end\"] = str(result.stop)\n            self.jdata['result']['duration'] = dur\n            with open(self.dbpath, 'w') as fd:\n                json.dump(self.jdata, fd, indent=2)\n                fd.write(\"\\n\")\n\n\nclass PTLTestDb(Plugin):\n\n    \"\"\"\n    PTL Test Database Plugin\n    \"\"\"\n    name = 'PTLTestDb'\n    score = sys.maxsize - 5\n    logger = logging.getLogger(__name__)\n\n    def __init__(self):\n        Plugin.__init__(self)\n        self.__dbconn = None\n        self.__dbtype = None\n        self.__dbpath = None\n        self.__dbaccess = None\n        self.__dbmapping = {'file': FileDb,\n                            'html': HTMLDb,\n                            'json': JSONDb,\n                            'sqlite': SQLiteDb,\n                            'pgsql': PostgreSQLDb}\n        self.__du = DshUtils()\n\n    def options(self, parser, env):\n        \"\"\"\n        Register command line options\n        \"\"\"\n        pass\n\n    def set_data(self, dbtype, dbpath, dbaccess):\n        \"\"\"\n        Set the data\n        \"\"\"\n        self.__dbtype = dbtype\n        self.__dbpath = dbpath\n        self.__dbaccess = dbaccess\n\n    def configure(self, options, config):\n        \"\"\"\n        Configure the plugin and system, based on selected options\n\n        :param options: Configuration options for ``plugin`` and ``system``\n        \"\"\"\n        if self.__dbconn is not None:\n            return\n        if self.__dbtype is None:\n            self.__dbtype = 'json'\n        if self.__dbtype not in self.__dbmapping.keys():\n            self.logger.error('Invalid db type: %s' % self.__dbtype)\n            sys.exit(1)\n        
try:\n            self.__dbconn = self.__dbmapping[self.__dbtype](self.__dbtype,\n                                                            self.__dbpath,\n                                                            self.__dbaccess)\n        except PTLDbError as e:\n            self.logger.error(str(e) + '\\n')\n            sys.exit(1)\n        self.enabled = True\n\n    def __create_data(self, test, err=None, status=None):\n        if hasattr(test, 'test'):\n            _test = test.test\n            sn = _test.__class__.__name__\n        elif hasattr(test, 'context'):\n            test = _test = test.context\n            sn = test.__name__\n        else:\n            return {}\n        testdata = {}\n        data = {}\n        cur_time = datetime.datetime.now()\n        if (hasattr(_test, 'server') and\n                (getattr(_test, 'server', None) is not None)):\n            testdata['pbs_version'] = _test.server.attributes['pbs_version']\n            testdata['hostname'] = _test.server.hostname\n        else:\n            testdata['pbs_version'] = 'unknown'\n            testdata['hostname'] = 'unknown'\n        testdata['machinfo'] = self.__get_machine_info(_test)\n        testdata['testparam'] = getattr(_test, 'param', None)\n        testdata['suite'] = sn\n        testdata['suitedoc'] = str(_test.__class__.__doc__)\n        testdata['file'] = _test.__module__.replace('.', '/') + '.py'\n        testdata['module'] = _test.__module__\n        testdata['testcase'] = getattr(_test, '_testMethodName', '<unknown>')\n        testdata['testdoc'] = getattr(_test, '_testMethodDoc', '<unknown>')\n        testdata['start_time'] = getattr(test, 'start_time', cur_time)\n        testdata['end_time'] = getattr(test, 'end_time', cur_time)\n        testdata['duration'] = getattr(test, 'duration', 0)\n        testdata['tags'] = getattr(_test, TAGKEY, [])\n        testdata['requirements'] = getattr(_test, 'requirements',\n                                           
default_requirements)\n        measurements_dic = getattr(_test, 'measurements', {})\n        if measurements_dic:\n            testdata['measurements'] = measurements_dic\n        additional_data_dic = getattr(_test, 'additional_data', {})\n        if additional_data_dic:\n            testdata['additional_data'] = additional_data_dic\n        if err is not None:\n            if isclass(err[0]) and issubclass(err[0], SkipTest):\n                testdata['status'] = 'SKIP'\n                testdata['status_data'] = 'Reason = %s' % (err[1])\n            else:\n                if isclass(err[0]) and issubclass(err[0], TimeOut):\n                    status = 'TIMEDOUT'\n                testdata['status'] = status\n                testdata['status_data'] = getattr(test, 'err_in_string',\n                                                  '<unknown>')\n        else:\n            testdata['status'] = status\n            testdata['status_data'] = ''\n        data['testdata'] = testdata\n        md = getattr(_test, 'metrics_data', {})\n        if len(md) > 0:\n            data['metrics_data'] = md\n        return data\n\n    def __get_machine_info(self, test):\n        \"\"\"\n        Helper function to return machines dictionary with details\n\n        :param: test\n        :test type: object\n\n        returns dictionary with machines information\n        \"\"\"\n        mpinfo = {\n            'servers': [],\n            'moms': [],\n            'comms': [],\n            'clients': []\n        }\n        minstall_type = {\n            'servers': 'server',\n            'moms': 'execution',\n            'comms': 'communication',\n            'clients': 'client'\n        }\n        for name in mpinfo:\n            mlist = None\n            if (hasattr(test, name) and\n                    (getattr(test, name, None) is not None)):\n                mlist = getattr(test, name).values()\n            if mlist:\n                for mc in mlist:\n                    
mpinfo[name].append(mc)\n        machines = {}\n        for k, v in mpinfo.items():\n            for _v in v:\n                hst = _v.hostname\n                if hst not in machines:\n                    machines[hst] = {}\n                    mshort = machines[hst]\n                    mshort['platform'] = _v.get_uname(hostname=hst)\n                    mshort['os_info'] = _v.get_os_info(hostname=hst)\n                machines[hst]['pbs_install_type'] = minstall_type[k]\n                if ((k == 'moms' or k == 'comms') and\n                        hst in mpinfo['servers']):\n                    machines[hst]['pbs_install_type'] = 'server'\n        return machines\n\n    def addError(self, test, err):\n        self.__dbconn.write(self.__create_data(test, err, 'ERROR'))\n\n    def addFailure(self, test, err):\n        self.__dbconn.write(self.__create_data(test, err, 'FAIL'))\n\n    def addSuccess(self, test):\n        self.__dbconn.write(self.__create_data(test, None, 'PASS'))\n\n    def finalize(self, result):\n        self.__dbconn.close(result)\n        self.__dbconn = None\n        self.__dbaccess = None\n\n    def process_output(self, info={}, dbout=None, dbtype=None, dbaccess=None,\n                       name=None, logtype=None, summary=False):\n        \"\"\"\n        Send analyzed log information to either the screen or to a database\n        file.\n\n        :param info: A dictionary of log analysis metrics.\n        :type info: Dictionary\n        :param dbout: The name of the database file to send output to\n        :type dbout: str or None\n        :param dbtype: Type of database\n        :param dbaccess: Path to a file that defines db options\n                         (PostgreSQL only)\n        :param name: The name of the log file being analyzed\n        :type name: str or None\n        :param logtype: The log type, one of ``accounting``, ``schedsummary``,\n                        ``scheduler``, ``server``, or ``mom``\n        :param summary: If 
True, output summary only\n        \"\"\"\n        if dbout is not None:\n            try:\n                self.set_data(dbtype, dbout, dbaccess)\n                self.configure(None, None)\n                data = {'metrics_data': {logtype: info}}\n                self.__dbconn.write(data, os.path.basename(name))\n                self.finalize(None)\n            except Exception as e:\n                sys.stderr.write(traceback.format_exc())\n                sys.stderr.write('Error processing output: ' + str(e) + '\\n')\n            return\n\n        if lu.CFC in info:\n            freq_info = info[lu.CFC]\n        elif 'summary' in info and lu.CFC in info['summary']:\n            freq_info = info['summary'][lu.CFC]\n        else:\n            freq_info = None\n\n        if 'matches' in info:\n            for m in info['matches']:\n                print(m, end=' ')\n            del info['matches']\n\n        if freq_info is not None:\n            for ((l, m), n) in freq_info:\n                b = time.strftime(\"%m/%d/%y %H:%M:%S\", time.localtime(l))\n                e = time.strftime(\"%m/%d/%y %H:%M:%S\", time.localtime(m))\n                print(b + ' -', end=' ')\n                if b[:8] != e[:8]:\n                    print(e, end=' ')\n                else:\n                    print(e[9:], end=' ')\n                print(': ' + str(n))\n            return\n\n        if lu.EST in info:\n            einfo = info[lu.EST]\n            m = []\n\n            for j in einfo[lu.EJ]:\n                m.append('Job ' + j[lu.JID] + '\\n\\testimated:')\n                if lu.Eat in j:\n                    for estimate in j[lu.Eat]:\n                        m.append('\\t\\t' + str(time.ctime(estimate)))\n                if lu.JST in j:\n                    m.append('\\tstarted:\\n')\n                    m.append('\\t\\t' + str(time.ctime(j[lu.JST])))\n                    m.append('\\testimate range: ' + str(j[lu.ESTR]))\n                    m.append('\\tstart to estimated: ' 
+ str(j[lu.ESTA]))\n\n                if lu.NEST in j:\n                    m.append('\\tnumber of estimates: ' + str(j[lu.NEST]))\n                if lu.NJD in j:\n                    m.append('\\tnumber of drifts: ' + str(j[lu.NJD]))\n                if lu.JDD in j:\n                    m.append('\\tdrift duration: ' + str(j[lu.JDD]))\n                m.append('\\n')\n\n            if lu.ESTS in einfo:\n                m.append('\\nsummary: ')\n                for k, v in sorted(einfo[lu.ESTS].items()):\n                    if 'duration' in k:\n                        m.append('\\t' + k + ': ' +\n                                 str(PbsTypeDuration(int(v))))\n                    else:\n                        m.append('\\t' + k + ': ' + str(v))\n\n            print(\"\\n\".join(m))\n            return\n\n        sorted_info = sorted(info.items())\n        for (k, v) in sorted_info:\n            if summary and k != 'summary':\n                continue\n            print(str(k) + \": \", end=' ')\n            if isinstance(v, dict):\n                sorted_v = sorted(v.items())\n                for (k, val) in sorted_v:\n                    print(str(k) + '=' + str(val) + ' ')\n                print()\n            else:\n                print(str(v))\n        print('')\n"
  },
  {
    "path": "test/fw/ptl/utils/plugins/ptl_test_info.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport sys\nimport logging\nimport unittest\nfrom nose.plugins.base import Plugin\nfrom ptl.utils.pbs_testsuite import PBSTestSuite\nfrom ptl.utils.plugins.ptl_test_tags import TAGKEY\nfrom ptl.utils.pbs_testsuite import REQUIREMENTS_KEY\nfrom ptl.utils.pbs_testsuite import default_requirements\nfrom copy import deepcopy\n\nlog = logging.getLogger('nose.plugins.PTLTestInfo')\n\n\ndef get_effective_reqs(ts_reqs=None, tc_reqs=None):\n    \"\"\"\n    Get the effective requirements for a test case by overlaying the test\n    suite and test case requirements on the defaults\n    \"\"\"\n    tc_effective_reqs = deepcopy(default_requirements)\n    if ts_reqs:\n        tc_effective_reqs.update(ts_reqs)\n    if tc_reqs:\n        tc_effective_reqs.update(tc_reqs)\n    return tc_effective_reqs\n\n\nclass FakeRunner(object):\n\n    def __init__(self, config):\n        self.config = config\n\n    def run(self, test):\n        self.config.plugins.finalize(None)\n        sys.exit(0)\n\n\nclass PTLTestInfo(Plugin):\n\n    \"\"\"\n    Load test cases from given parameter\n    \"\"\"\n    name = 'PTLTestInfo'\n    score = sys.maxsize - 2\n    logger = logging.getLogger(__name__)\n\n    def __init__(self):\n        self.list_test = None\n        self.showinfo = None\n        self.verbose = None\n        self.gen_ts_tree = None\n        
self.suites = []\n        self._tree = {}\n        self.total_suite = 0\n        self.total_case = 0\n        self.__ts_tree = {}\n        self.__tags_tree = {'NoTags': {}}\n\n    def options(self, parser, env):\n        \"\"\"\n        Register command line options\n        \"\"\"\n        pass\n\n    def set_data(self, suites, list_test, showinfo, verbose, gen_ts_tree):\n        \"\"\"\n        Set the data required for running the tests\n\n        :param suites: Test suites to run\n        :param list_test: List of tests to run\n        :param showinfo: Whether to show test suite information\n        :param verbose: Whether to include per-test-case details\n        :param gen_ts_tree: Generate test suite tree\n        \"\"\"\n        self.suites = suites.split(',')\n        self.list_test = list_test\n        self.showinfo = showinfo\n        self.verbose = verbose\n        self.gen_ts_tree = gen_ts_tree\n\n    def configure(self, options, config):\n        \"\"\"\n        Configure the plugin and system, based on selected options\n\n        :param options: Options to configure plugin and system\n        \"\"\"\n        self.config = config\n        self.enabled = True\n\n    def prepareTestRunner(self, runner):\n        return FakeRunner(config=self.config)\n\n    def wantClass(self, cls):\n        \"\"\"\n        Is the class wanted?\n        \"\"\"\n        if not issubclass(cls, unittest.TestCase) or cls is PBSTestSuite \\\n                or cls is unittest.TestCase:\n            return False\n        self._tree.setdefault(cls.__name__, cls)\n        if len(cls.__bases__) > 0:\n            self.wantClass(cls.__bases__[0])\n\n    def _get_hierarchy(self, cls, level=0):\n        delim = '    ' * level\n        msg = [delim + cls.__name__]\n        try:\n            subclses = cls.__subclasses__()\n        except TypeError:\n            pass\n        else:\n            for subcls in subclses:\n                msg.extend(self._get_hierarchy(subcls, level + 1))\n        return msg\n\n    def _print_suite_info(self, suite):\n        w = sys.stdout\n        self.total_suite += 1\n        if 
self.list_test:\n            w.write('\\n\\n')\n        w.write('Test Suite: %s\\n\\n' % suite.__name__)\n        w.write('    file: %s.py\\n\\n' % suite.__module__.replace('.', '/'))\n        w.write('    module: %s\\n\\n' % suite.__module__)\n        tags = getattr(suite, TAGKEY, None)\n        if tags is not None:\n            w.write('    Tags: %s\\n\\n' % (', '.join(tags)))\n        w.write('    Suite Doc: \\n')\n        for l in str(suite.__doc__).split('\\n'):\n            w.write('    %s\\n' % l)\n        dcl = suite.__dict__\n        cases = []\n        for k in dcl.keys():\n            if k.startswith('test_'):\n                k = getattr(suite, k)\n                try:\n                    k.__name__\n                except BaseException:\n                    # not a test case, ignore\n                    continue\n                self.total_case += 1\n                cases.append('\\t%s\\n' % (k.__name__))\n                if self.verbose:\n                    tags = getattr(k, TAGKEY, None)\n                    if tags is not None:\n                        cases.append('\\n\\t    Tags: %s\\n\\n' %\n                                     (', '.join(tags)))\n                    doc = k.__doc__\n                    if doc is not None:\n                        cases.append('\\t    Test Case Doc: \\n')\n                        for l in str(doc).split('\\n'):\n                            cases.append('\\t%s\\n' % (l))\n        if len(cases) > 0:\n            w.write('    Test Cases: \\n')\n            w.writelines(cases)\n        if self.list_test or self.showinfo:\n            lines = self._get_hierarchy(suite, 1)[1:]\n            if len(lines) > 0:\n                w.write('\\n    Test suite hierarchy:\\n')\n                for l in lines:\n                    w.write(l + '\\n')\n\n    def _gen_ts_tree(self, suite):\n        n = suite.__name__\n        tsd = {}\n        tsd['doc'] = str(suite.__doc__)\n        tstags = getattr(suite, TAGKEY, [])\n        
numnodes = 1\n        for tag in tstags:\n            if 'numnodes' in tag:\n                numnodes = tag.split('=')[1].strip()\n                break\n        tsd['tags'] = tstags if len(tstags) > 0 else \"None\"\n        tsd['numnodes'] = str(numnodes)\n        tsd['file'] = suite.__module__.replace('.', '/') + '.py'\n        tsd['module'] = suite.__module__\n        dcl = suite.__dict__\n        tcs = {}\n        ts_req = getattr(suite, REQUIREMENTS_KEY, {})\n        for k in dcl.keys():\n            if k.startswith('test_'):\n                tcd = {}\n                tc = getattr(suite, k)\n                try:\n                    tc.__name__\n                except BaseException:\n                    # not a test case, ignore\n                    continue\n                tcd['doc'] = str(tc.__doc__)\n                tc_req = getattr(tc, REQUIREMENTS_KEY, {})\n                tcd['requirements'] = get_effective_reqs(ts_req, tc_req)\n                numnodes = 1\n                tctags = sorted(set(tstags + getattr(tc, TAGKEY, [])))\n                for tag in tctags:\n                    if 'numnodes' in tag:\n                        numnodes = tag.split('=')[1].strip()\n                        break\n                tcd['tags'] = tctags if len(tctags) > 0 else \"None\"\n                tcd['numnodes'] = str(numnodes)\n                tcs[k] = deepcopy(tcd)\n                if len(tctags) > 0:\n                    for tag in tctags:\n                        if tag not in self.__tags_tree.keys():\n                            self.__tags_tree[tag] = {}\n                        if n not in self.__tags_tree[tag].keys():\n                            self.__tags_tree[tag][n] = deepcopy(tsd)\n                        if 'tclist' not in self.__tags_tree[tag][n].keys():\n                            self.__tags_tree[tag][n]['tclist'] = {}\n                        self.__tags_tree[tag][n]['tclist'][k] = deepcopy(tcd)\n                else:\n                    if n not 
in self.__tags_tree['NoTags'].keys():\n                        self.__tags_tree['NoTags'][n] = deepcopy(tsd)\n                    if 'tclist' not in self.__tags_tree['NoTags'][n].keys():\n                        self.__tags_tree['NoTags'][n]['tclist'] = {}\n                    self.__tags_tree['NoTags'][n]['tclist'][k] = deepcopy(tcd)\n        if len(tcs.keys()) > 0:\n            self.__ts_tree[n] = deepcopy(tsd)\n            self.__ts_tree[n]['tclist'] = tcs\n\n    def finalize(self, result):\n        if (self.list_test and not self.suites) or self.gen_ts_tree:\n            suites = list(self._tree.keys())\n        else:\n            suites = self.suites\n        suites.sort()\n        unknown = []\n        if self.gen_ts_tree:\n            func = self._gen_ts_tree\n        else:\n            func = self._print_suite_info\n        for k in suites:\n            try:\n                suite = eval(k, globals(), self._tree)\n            except BaseException:\n                unknown.append(k)\n                continue\n            func(suite)\n        if self.list_test:\n            w = sys.stdout\n            w.write('\\n\\n')\n            w.write('Total number of Test Suites: %d\\n' % (self.total_suite))\n            w.write('Total number of Test Cases: %d\\n' % (self.total_case))\n        elif self.gen_ts_tree:\n            tsdata = ''\n            tagsdata = ''\n            try:\n                import json\n                tsdata = json.dumps(self.__ts_tree, indent=4)\n                tagsdata = json.dumps(self.__tags_tree, indent=4)\n            except ImportError:\n                try:\n                    import simplejson\n                    tsdata = simplejson.dumps(self.__ts_tree, indent=4)\n                    tagsdata = simplejson.dumps(self.__tags_tree, indent=4)\n                except ImportError:\n                    _pre = str(self.__ts_tree).replace('\"', '\\\\\"')\n                    tsdata = _pre.replace('\\'', '\"')\n                    _pre = 
str(self.__tags_tree).replace('\"', '\\\\\"')\n                    tagsdata = _pre.replace('\\'', '\"')\n            with open('ptl_ts_tree.json', 'w+') as f:\n                f.write(tsdata)\n            with open('ptl_tags_tree.json', 'w+') as f:\n                f.write(tagsdata)\n        if len(unknown) > 0:\n            self.logger.error('Unknown testsuite(s): %s' % (','.join(unknown)))\n"
  },
  {
    "path": "test/fw/ptl/utils/plugins/ptl_test_loader.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport os\nimport sys\nimport logging\nimport copy\nfrom nose.plugins.base import Plugin\nfrom ptl.utils.pbs_testsuite import PBSTestSuite\nfrom ptl.utils.pbs_dshutils import DshUtils\n\n\nclass PTLTestLoader(Plugin):\n\n    \"\"\"\n    Load test cases from given parameter\n    \"\"\"\n    name = 'PTLTestLoader'\n    score = sys.maxsize - 1\n    logger = logging.getLogger(__name__)\n\n    def __init__(self):\n        Plugin.__init__(self)\n        self.suites_list = []\n        self.excludes = []\n        self.follow = False\n        self._only_ts = '__only__ts__'\n        self._only_tc = '__only__tc__'\n        self._test_marker = 'test_'\n        self._tests_list = {self._only_ts: [], self._only_tc: []}\n        self._excludes_list = {self._only_ts: [], self._only_tc: []}\n        self.__tests_list_copy = {self._only_ts: [], self._only_tc: []}\n        self.__allowed_cls = []\n        self.__allowed_method = []\n        self.testfiles = None\n\n    def options(self, parser, env):\n        \"\"\"\n        Register command line options\n        \"\"\"\n        pass\n\n    def set_data(self, testgroup, suites, excludes, follow, testfiles=None):\n        \"\"\"\n        Set the data required for loading test data\n\n        :param testgroup: Test group\n        :param suites: Test suites to load\n        :param excludes: Tests to exclude while running\n     
   :param testfiles: Flag to check if test is run by filename\n        \"\"\"\n        if os.access(str(testgroup), os.R_OK):\n            f = open(testgroup, 'r')\n            self.suites_list.extend(f.readline().strip().split(','))\n            f.close()\n        elif suites is not None:\n            self.suites_list.extend(suites.split(','))\n        if excludes is not None:\n            self.excludes.extend(excludes.split(','))\n        self.follow = follow\n        self.testfiles = testfiles\n\n    def configure(self, options, config):\n        \"\"\"\n        Configure the ``plugin`` and ``system``, based on selected options\n        \"\"\"\n        tl = self._tests_list\n        tlc = self.__tests_list_copy\n        for _is in self.suites_list:\n            if '.' in _is:\n                suite, case = _is.split('.')\n                if case in tl[self._only_tc]:\n                    tl[self._only_tc].remove(case)\n                    tlc[self._only_tc].remove(case)\n                if suite in tl.keys():\n                    if case not in tl[suite]:\n                        tl[suite].append(case)\n                        tlc[suite].append(case)\n                else:\n                    tl.setdefault(suite, [case])\n                    tlc.setdefault(suite, [case])\n            elif _is.startswith(self._test_marker):\n                if _is not in tl[self._only_tc]:\n                    tl[self._only_tc].append(_is)\n                    tlc[self._only_tc].append(_is)\n            else:\n                if _is not in tl[self._only_ts]:\n                    tl[self._only_ts].append(_is)\n                    tlc[self._only_ts].append(_is)\n        for k, v in tl.items():\n            if k in (self._only_ts, self._only_tc):\n                continue\n            if len(v) == 0:\n                tl[self._only_ts].append(k)\n                tlc[self._only_ts].append(k)\n        for name in tl[self._only_ts]:\n            if name in tl.keys():\n                
del tl[name]\n                del tlc[name]\n        extl = self._excludes_list\n        for _is in self.excludes:\n            if '.' in _is:\n                suite, case = _is.split('.')\n                if case in extl[self._only_tc]:\n                    extl[self._only_tc].remove(case)\n                if suite in extl.keys():\n                    if case not in extl[suite]:\n                        extl[suite].append(case)\n                else:\n                    extl.setdefault(suite, [case])\n            elif _is.startswith(self._test_marker):\n                if _is not in extl[self._only_tc]:\n                    extl[self._only_tc].append(_is)\n            else:\n                if _is not in extl[self._only_ts]:\n                    extl[self._only_ts].append(_is)\n        for k, v in extl.items():\n            if k in (self._only_ts, self._only_tc):\n                continue\n            if len(v) == 0:\n                extl[self._only_ts].append(k)\n        for name in extl[self._only_ts]:\n            if name in extl.keys():\n                del extl[name]\n        self.logger.debug('included_tests:%s' % (str(self._tests_list)))\n        self.logger.debug('included_tests(copy):%s' %\n                          (str(self.__tests_list_copy)))\n        self.logger.debug('excluded_tests:%s' % (str(self._excludes_list)))\n        self.enabled = len(self.suites_list) > 0\n        del self.suites_list\n        del self.excludes\n\n    def check_unknown(self):\n        \"\"\"\n        Check for unknown test suite and test case\n        \"\"\"\n        self.logger.debug('check_unknown called')\n        tests_list_copy = copy.deepcopy(self.__tests_list_copy)\n        only_ts = tests_list_copy.pop(self._only_ts)\n        only_tc = tests_list_copy.pop(self._only_tc)\n        msg = []\n        if len(tests_list_copy) > 0:\n            for k, v in tests_list_copy.items():\n                msg.extend(map(lambda x: k + '.' 
+ x, v))\n        if len(only_tc) > 0:\n            msg.extend(only_tc)\n        if len(msg) > 0:\n            _msg = ['unknown testcase(s): %s' % (','.join(msg))]\n            msg = _msg\n        if len(only_ts) > 0:\n            msg += ['unknown testsuite(s): %s' % (','.join(only_ts))]\n        if len(msg) > 0:\n            for l in msg:\n                self.logger.error(l)\n            sys.exit(1)\n\n    def prepareTestLoader(self, loader):\n        \"\"\"\n        Prepare test loader\n        \"\"\"\n        old_loadTestsFromNames = loader.loadTestsFromNames\n\n        def check_loadTestsFromNames(names, module=None):\n            tests_dir = names\n            if not self.testfiles:\n                ptl_test_dir = __file__\n                ptl_test_dir = os.path.join(ptl_test_dir.split('ptl')[0],\n                                            \"ptl\", \"tests\")\n                user_test_dir = os.environ.get(\"PTL_TESTS_DIR\", None)\n                if user_test_dir and os.path.isdir(user_test_dir):\n                    tests_dir += [user_test_dir]\n                if os.path.isdir(ptl_test_dir):\n                    tests_dir += [ptl_test_dir]\n            rv = old_loadTestsFromNames(tests_dir, module)\n            self.check_unknown()\n            return rv\n        loader.loadTestsFromNames = check_loadTestsFromNames\n        return loader\n\n    def check_follow(self, cls, method=None):\n        cname = cls.__name__\n        if not issubclass(cls, PBSTestSuite):\n            return False\n        if cname == 'PBSTestSuite':\n            if 'PBSTestSuite' not in self._tests_list[self._only_ts]:\n                return False\n        if cname in self._excludes_list[self._only_ts]:\n            return False\n        if cname in self._tests_list[self._only_ts]:\n            if cname in self.__tests_list_copy[self._only_ts]:\n                self.__tests_list_copy[self._only_ts].remove(cname)\n            return True\n        if ((cname in 
self._tests_list.keys()) and (method is None)):\n            return True\n        if method is not None:\n            mname = method.__name__\n            if not mname.startswith(self._test_marker):\n                return False\n            if mname in self._excludes_list[self._only_tc]:\n                return False\n            if ((cname in self._excludes_list.keys()) and\n                    (mname in self._excludes_list[cname])):\n                return False\n            if ((cname in self._tests_list.keys()) and\n                    (mname in self._tests_list[cname])):\n                if cname in self.__tests_list_copy.keys():\n                    if mname in self.__tests_list_copy[cname]:\n                        self.__tests_list_copy[cname].remove(mname)\n                    if len(self.__tests_list_copy[cname]) == 0:\n                        del self.__tests_list_copy[cname]\n                return True\n            if mname in self._tests_list[self._only_tc]:\n                if mname in self.__tests_list_copy[self._only_tc]:\n                    self.__tests_list_copy[self._only_tc].remove(mname)\n                return True\n        if self.follow:\n            return self.check_follow(cls.__bases__[0], method)\n        else:\n            return False\n\n    def is_already_allowed(self, cls, method=None):\n        \"\"\"\n        :param method: Method to check\n        :returns: True if method is already allowed else False\n        \"\"\"\n        name = cls.__name__\n        if method is not None:\n            name += '.' 
+ method.__name__\n            if name in self.__allowed_method:\n                return True\n            else:\n                self.__allowed_method.append(name)\n                return False\n        else:\n            if name in self.__allowed_cls:\n                return True\n            else:\n                self.__allowed_cls.append(name)\n                return False\n\n    def wantClass(self, cls):\n        \"\"\"\n        Is the class wanted?\n        \"\"\"\n        has_test = False\n        for t in dir(cls):\n            if t.startswith(self._test_marker):\n                has_test = True\n                break\n        if not has_test:\n            return False\n        rv = self.check_follow(cls)\n        if rv and not self.is_already_allowed(cls):\n            self.logger.debug('wantClass:%s' % (str(cls)))\n        else:\n            return False\n\n    def wantFunction(self, function):\n        \"\"\"\n        Is the function wanted?\n        \"\"\"\n        return self.wantMethod(function)\n\n    def wantMethod(self, method):\n        \"\"\"\n        Is the method wanted?\n        \"\"\"\n        try:\n            cls = method.__self__.__class__\n        except AttributeError:\n            return False\n        if not method.__name__.startswith(self._test_marker):\n            return False\n        rv = self.check_follow(cls, method)\n        if rv and not self.is_already_allowed(cls, method):\n            self.logger.debug('wantMethod:%s' % (str(method)))\n        else:\n            return False\n"
  },
  {
    "path": "test/fw/ptl/utils/plugins/ptl_test_runner.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport datetime\nimport logging\nimport fnmatch\nimport os\nimport platform\nimport pwd\nimport re\nimport signal\nimport socket\nimport sys\nimport time\nimport tempfile\nimport unittest\nfrom threading import Timer\nfrom logging import StreamHandler\nfrom traceback import format_exception\nfrom types import ModuleType\n\nfrom nose.core import TextTestRunner\nfrom nose.plugins.base import Plugin\nfrom nose.plugins.skip import SkipTest\nfrom nose.suite import ContextSuite\nfrom nose.util import isclass\n\nimport ptl\nfrom ptl.lib.pbs_testlib import PBSInitServices\nfrom ptl.utils.pbs_covutils import LcovUtils\nfrom ptl.utils.pbs_dshutils import DshUtils\nfrom ptl.utils.pbs_dshutils import TimeOut\nfrom ptl.utils.pbs_testsuite import (MINIMUM_TESTCASE_TIMEOUT,\n                                     REQUIREMENTS_KEY, TIMEOUT_KEY)\nfrom ptl.utils.plugins.ptl_test_info import get_effective_reqs\nfrom ptl.utils.pbs_testusers import PBS_ALL_USERS, PBS_USERS, PbsUser\nfrom ptl.lib.ptl_constants import (PTL_TRUE, PTL_FALSE)\nfrom io import StringIO\n\nlog = logging.getLogger('nose.plugins.PTLTestRunner')\n\n\nclass TCThresholdReached(Exception):\n    \"\"\"\n    Raise this exception to tell that tc-failure-threshold reached\n    \"\"\"\n\n\nclass TestLogCaptureHandler(StreamHandler):\n    \"\"\"\n    Log handler for capturing logs which test case print\n    using 
logging module\n    \"\"\"\n\n    def __init__(self):\n        self.buffer = StringIO()\n        StreamHandler.__init__(self, self.buffer)\n        self.setLevel(logging.DEBUG2)\n        fmt = '%(asctime)-15s %(levelname)-8s %(message)s'\n        self.setFormatter(logging.Formatter(fmt))\n\n    def get_logs(self):\n        return self.buffer.getvalue()\n\n\nclass _PtlTestResult(unittest.TestResult):\n\n    \"\"\"\n    Ptl custom test result\n    \"\"\"\n    separator1 = '=' * 70\n    separator2 = '___m_oo_m___'\n    logger = logging.getLogger(__name__)\n\n    def __init__(self, stream, descriptions, verbosity, config=None):\n        unittest.TestResult.__init__(self)\n        self.stream = stream\n        self.showAll = verbosity > 1\n        self.dots = verbosity == 1\n        self.descriptions = descriptions\n        self.errorClasses = {}\n        self.config = config\n        self.success = []\n        self.skipped = []\n        self.timedout = []\n        self.handler = TestLogCaptureHandler()\n        self.start = datetime.datetime.now()\n        self.stop = datetime.datetime.now()\n\n    def getDescription(self, test):\n        \"\"\"\n        Get the test result description\n        \"\"\"\n        if hasattr(test, 'test'):\n            return str(test.test)\n        elif isinstance(test.context, ModuleType):\n            tmn = getattr(test.context, '_testMethodName', 'unknown')\n            return '%s (%s)' % (tmn, test.context.__name__)\n        elif isinstance(test, ContextSuite):\n            tmn = getattr(test.context, '_testMethodName', 'unknown')\n            return '%s (%s.%s)' % (tmn,\n                                   test.context.__module__,\n                                   test.context.__name__)\n        else:\n            return str(test)\n\n    def getTestDoc(self, test):\n        \"\"\"\n        Get test document\n        \"\"\"\n        if hasattr(test, 'test'):\n            if hasattr(test.test, '_testMethodDoc'):\n                
return test.test._testMethodDoc\n            else:\n                return None\n        else:\n            if hasattr(test, '_testMethodDoc'):\n                return test._testMethodDoc\n            else:\n                return None\n\n    def clear_stop(self):\n        self.shouldStop = False\n\n    def startTest(self, test):\n        \"\"\"\n        Start the test\n\n        :param test: Test to start\n        :type test: str\n        \"\"\"\n        ptl_logger = logging.getLogger('ptl')\n        if self.handler not in ptl_logger.handlers:\n            ptl_logger.addHandler(self.handler)\n        self.handler.buffer.truncate(0)\n        self.handler.buffer.seek(0)\n        unittest.TestResult.startTest(self, test)\n        test.start_time = datetime.datetime.now()\n        if self.showAll:\n            self.logger.info('test name: ' + self.getDescription(test) + '...')\n            self.logger.info('test start time: ' + test.start_time.ctime())\n            tdoc = self.getTestDoc(test)\n            if tdoc is not None:\n                tdoc = '\\n' + tdoc\n            self.logger.info('test docstring: %s' % (tdoc))\n\n    def addSuccess(self, test):\n        \"\"\"\n        Add success to the test result\n        \"\"\"\n        self.success.append(test)\n        unittest.TestResult.addSuccess(self, test)\n        if self.showAll:\n            self.logger.info('ok\\n')\n        elif self.dots:\n            self.logger.info('.')\n\n    def _addError(self, test, err):\n        unittest.TestResult.addError(self, test, err)\n        if self.showAll:\n            self.logger.info('ERROR\\n')\n        elif self.dots:\n            self.logger.info('E')\n\n    def addError(self, test, err):\n        \"\"\"\n        Add error to the test result\n\n        :param test: Test for which to add error\n        :type test: str\n        :param error: Error message to add\n        :type error: str\n        \"\"\"\n        if isclass(err[0]) and issubclass(err[0], 
TCThresholdReached):\n            return\n        if isclass(err[0]) and issubclass(err[0], SkipTest):\n            self.addSkip(test, err[1])\n            return\n        if isclass(err[0]) and issubclass(err[0], TimeOut):\n            self.addTimedOut(test, err)\n            return\n        for cls, (storage, label, isfail) in self.errorClasses.items():\n            if isclass(err[0]) and issubclass(err[0], cls):\n                if isfail:\n                    test.passed = False\n                storage.append((test, err))\n                if self.showAll:\n                    self.logger.info(label + '\\n')\n                elif self.dots:\n                    self.logger.info(label[0])\n                return\n        test.passed = False\n        self._addError(test, err)\n\n    def addFailure(self, test, err):\n        \"\"\"\n        Indicate failure\n        \"\"\"\n        unittest.TestResult.addFailure(self, test, err)\n        if self.showAll:\n            self.logger.info('FAILED\\n')\n        elif self.dots:\n            self.logger.info('F')\n\n    def addSkip(self, test, reason):\n        \"\"\"\n        Indicate skipping of test\n\n        :param test: Test to skip\n        :type test: str\n        :param reason: Reason for the skip\n        :type reason: str\n        \"\"\"\n        self.skipped.append((test, reason))\n        if self.showAll:\n            self.logger.info('SKIPPED')\n        elif self.dots:\n            self.logger.info('S')\n\n    def addTimedOut(self, test, err):\n        \"\"\"\n        Indicate timeout\n\n        :param test: Test for which timeout happened\n        :type test: str\n        :param err: Error for timeout\n        :type err: str\n        \"\"\"\n        self.timedout.append((test, self._exc_info_to_string(err, test)))\n        if self.showAll:\n            self.logger.info('TIMEDOUT')\n        elif self.dots:\n            self.logger.info('T')\n\n    def printErrors(self):\n        \"\"\"\n        Print the 
errors\n        \"\"\"\n        _blank_line = False\n        if ((len(self.errors) > 0) or (len(self.failures) > 0) or\n                (len(self.timedout) > 0)):\n            if self.dots or self.showAll:\n                self.logger.info('')\n                _blank_line = True\n            self.printErrorList('ERROR', self.errors)\n            self.printErrorList('FAILED', self.failures)\n            self.printErrorList('TIMEDOUT', self.timedout)\n        for cls in self.errorClasses.keys():\n            storage, label, isfail = self.errorClasses[cls]\n            if isfail:\n                if not _blank_line:\n                    self.logger.info('')\n                    _blank_line = True\n                self.printErrorList(label, storage)\n        self.config.plugins.report(self.stream)\n\n    def printErrorList(self, flavour, errors):\n        \"\"\"\n        Print the error list\n\n        :param errors: Errors to print\n        \"\"\"\n        for test, err in errors:\n            self.logger.info(self.separator1)\n            self.logger.info('%s: %s\\n' % (flavour, self.getDescription(test)))\n            self.logger.info(self.separator2)\n            self.logger.info('%s\\n' % err)\n\n    def printLabel(self, label, err=None):\n        \"\"\"\n        Print the label for the error\n\n        :param label: Label to print\n        :type label: str\n        :param err: Error for which label to be printed\n        :type err: str\n        \"\"\"\n        if self.showAll:\n            message = [label]\n            if err:\n                try:\n                    detail = str(err[1])\n                except BaseException:\n                    detail = None\n                if detail:\n                    message.append(detail)\n            self.logger.info(': '.join(message))\n        elif self.dots:\n            self.logger.info(label[:1])\n\n    def wasSuccessful(self):\n        \"\"\"\n        Check whether the test successful or not\n\n        
:returns: True if no ``errors`` or no ``failures`` or no ``timeout``\n                  else return False\n        \"\"\"\n        if self.errors or self.failures or self.timedout:\n            return False\n        for cls in self.errorClasses.keys():\n            storage, _, isfail = self.errorClasses[cls]\n            if not isfail:\n                continue\n            if storage:\n                return False\n        return True\n\n    def printSummary(self):\n        \"\"\"\n        Called by the test runner to print the final summary of test\n        run results.\n\n        :param start: Time at which test begins\n        :param stop:  Time at which test ends\n        \"\"\"\n        self.printErrors()\n        msg = ['=' * 80]\n        ef = []\n        error = 0\n        fail = 0\n        skip = 0\n        timedout = 0\n        success = len(self.success)\n        if len(self.failures) > 0:\n            for failedtest in self.failures:\n                fail += 1\n                msg += ['failed: ' + self.getDescription(failedtest[0])]\n                ef.append(failedtest)\n        if len(self.errors) > 0:\n            for errtest in self.errors:\n                error += 1\n                msg += ['error: ' + self.getDescription(errtest[0])]\n                ef.append(errtest)\n        if len(self.skipped) > 0:\n            for skiptest, reason in self.skipped:\n                skip += 1\n                _msg = 'skipped: ' + str(skiptest).strip()\n                _msg += ' reason: ' + str(reason).strip()\n                msg += [_msg]\n        if len(self.timedout) > 0:\n            for tdtest in self.timedout:\n                timedout += 1\n                msg += ['timedout: ' + self.getDescription(tdtest[0])]\n                ef.append(tdtest)\n        cases = []\n        suites = []\n        for _ef in ef:\n            if hasattr(_ef[0], 'test'):\n                cname = _ef[0].test.__class__.__name__\n                tname = getattr(_ef[0].test, 
'_testMethodName', 'unknown')\n                cases.append(cname + '.' + tname)\n                suites.append(cname)\n        cases = sorted(list(set(cases)))\n        suites = sorted(list(set(suites)))\n        if len(cases) > 0:\n            _msg = 'Test cases with failures: '\n            _msg += ','.join(cases)\n            msg += [_msg]\n        if len(suites) > 0:\n            _msg = 'Test suites with failures: '\n            _msg += ','.join(suites)\n            msg += [_msg]\n        runned = success + fail + error + skip + timedout\n        _msg = 'run: ' + str(runned)\n        _msg += ', succeeded: ' + str(success)\n        _msg += ', failed: ' + str(fail)\n        _msg += ', errors: ' + str(error)\n        _msg += ', skipped: ' + str(skip)\n        _msg += ', timedout: ' + str(timedout)\n        msg += [_msg]\n        msg += ['Tests run in ' + str(self.stop - self.start)]\n        self.logger.info('\\n'.join(msg))\n\n\nclass SystemInfo:\n\n    \"\"\"\n        used to get system's ram size and disk size information.\n\n        :system_ram: Available ram(in GB) of the test running machine\n        :system_disk: Available disk size(in GB) of the test running machine\n    \"\"\"\n    logger = logging.getLogger(__name__)\n\n    def get_system_info(self, hostname=None):\n        du = DshUtils()\n        # getting RAM size in gb\n        mem_info = du.cat(hostname, \"/proc/meminfo\")\n        if mem_info['rc'] != 0:\n            _msg = 'failed to get content of /proc/meminfo of host: '\n            self.logger.error(_msg + hostname)\n        else:\n            got_mem_available = False\n            for i in mem_info['out']:\n                if \"MemTotal\" in i:\n                    self.system_total_ram = float(i.split()[1]) / (2**20)\n                elif \"MemAvailable\" in i:\n                    mem_available = float(i.split()[1]) / (2**20)\n                    got_mem_available = True\n                    break\n                elif \"MemFree\" in i:\n  
                  mem_free = float(i.split()[1]) / (2**20)\n                elif \"Buffers\" in i:\n                    buffers = float(i.split()[1]) / (2**20)\n                elif i.startswith(\"Cached\"):\n                    cached = float(i.split()[1]) / (2**20)\n            if got_mem_available:\n                self.system_ram = mem_available\n            else:\n                self.system_ram = mem_free + buffers + cached\n        # getting disk size in gb\n        pbs_conf = du.parse_pbs_config(hostname)\n        pbs_home_info = du.run_cmd(hostname, cmd=['df', '-k',\n                                                  pbs_conf['PBS_HOME']])\n        if pbs_home_info['rc'] != 0:\n            _msg = 'failed to get output of df -k command of host: '\n            self.logger.error(_msg + hostname)\n        else:\n            disk_info = pbs_home_info['out']\n            disk_size = disk_info[1].split()\n            self.system_disk = float(disk_size[3]) / (2**20)\n            self.system_disk_used_percent = float(disk_size[4].rstrip('%'))\n\n\nclass PtlTextTestRunner(TextTestRunner):\n\n    \"\"\"\n    Test runner that uses ``PtlTestResult`` to enable errorClasses,\n    as well as providing hooks for plugins to override or replace the test\n    output stream, results, and the test case itself.\n    \"\"\"\n\n    cur_repeat_count = 1\n\n    def __init__(self, stream=sys.stdout, descriptions=True, verbosity=3,\n                 config=None, repeat_count=1, repeat_delay=0):\n        self.logger = logging.getLogger(__name__)\n        self.result = None\n        self.repeat_count = repeat_count\n        self.repeat_delay = repeat_delay\n        TextTestRunner.__init__(self, stream, descriptions, verbosity, config)\n\n    def _makeResult(self):\n        return _PtlTestResult(self.stream, self.descriptions, self.verbosity,\n                              self.config)\n\n    def run(self, test):\n        \"\"\"\n        Overrides to provide plugin hooks and defer all 
output to\n        the test result class.\n        \"\"\"\n        do_exit = False\n        wrapper = self.config.plugins.prepareTest(test)\n        if wrapper is not None:\n            test = wrapper\n        wrapped = self.config.plugins.setOutputStream(self.stream)\n        if wrapped is not None:\n            self.stream = wrapped\n        self.result = result = self._makeResult()\n        self.result.start = datetime.datetime.now()\n        try:\n            for i in range(self.repeat_count):\n                PtlTextTestRunner.cur_repeat_count = i + 1\n                if i != 0:\n                    time.sleep(self.repeat_delay)\n                test(result)\n            if self.repeat_count > 1:\n                self.logger.info(\"==========================================\")\n                self.logger.info(\"All Tests are repeated %d times\"\n                                 % self.repeat_count)\n                self.logger.info(\"==========================================\")\n        except KeyboardInterrupt:\n            do_exit = True\n        self.result.stop = datetime.datetime.now()\n        result.printSummary()\n        self.config.plugins.finalize(result)\n        if do_exit:\n            sys.exit(1)\n        return result\n\n\nclass PTLTestRunner(Plugin):\n\n    \"\"\"\n    PTL Test Runner Plugin\n    \"\"\"\n    name = 'PTLTestRunner'\n    score = sys.maxsize - 4\n    logger = logging.getLogger(__name__)\n    timeout = None\n\n    def __init__(self):\n        Plugin.__init__(self)\n        self.param = None\n        self.repeat_count = 1\n        self.repeat_delay = 0\n        self.use_cur_setup = False\n        self.lcov_bin = None\n        self.lcov_data = None\n        self.lcov_out = None\n        self.lcov_utils = None\n        self.lcov_nosrc = None\n        self.lcov_baseurl = None\n        self.genhtml_bin = None\n        self.config = None\n        self.result = None\n        self.tc_failure_threshold = None\n        
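# The run() method above repeats the whole suite ``repeat_count`` times with a
# sleep between iterations. A minimal standalone sketch of that pattern
# (helper name ``run_repeated`` is illustrative, not part of PTL):

```python
import time


def run_repeated(test, result, repeat_count=1, repeat_delay=0):
    """Call ``test(result)`` ``repeat_count`` times, sleeping
    ``repeat_delay`` seconds between iterations -- the same shape as
    the loop in PtlTextTestRunner.run."""
    for i in range(repeat_count):
        if i != 0:
            # Delay only between iterations, never before the first run
            time.sleep(repeat_delay)
        test(result)
```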
self.cumulative_tc_failure_threshold = None\n        self.__failed_tc_count = 0\n        self.__tf_count = 0\n        self.__failed_tc_count_msg = False\n        self._test_marker = 'test_'\n        self.hardware_report_timer = None\n\n    def options(self, parser, env):\n        \"\"\"\n        Register command line options\n        \"\"\"\n        pass\n\n    def set_data(self, paramfile, testparam, repeat_count,\n                 repeat_delay, lcov_bin, lcov_data, lcov_out,\n                 genhtml_bin, lcov_nosrc, lcov_baseurl,\n                 tc_failure_threshold, cumulative_tc_failure_threshold,\n                 use_cur_setup):\n        if paramfile is not None:\n            _pf = open(paramfile, 'r')\n            _params_from_file = _pf.readlines()\n            _pf.close()\n            _nparams = []\n            for l in range(len(_params_from_file)):\n                if _params_from_file[l].startswith('#'):\n                    continue\n                else:\n                    _nparams.append(_params_from_file[l])\n            _f = ','.join([l.strip('\\r\\n') for l in _nparams])\n            if testparam is not None:\n                testparam += ',' + _f\n            else:\n                testparam = _f\n        self.param = testparam\n        self.repeat_count = repeat_count\n        self.repeat_delay = repeat_delay\n        self.use_cur_setup = use_cur_setup\n        self.lcov_bin = lcov_bin\n        self.lcov_data = lcov_data\n        self.lcov_out = lcov_out\n        self.genhtml_bin = genhtml_bin\n        self.lcov_nosrc = lcov_nosrc\n        self.lcov_baseurl = lcov_baseurl\n        self.tc_failure_threshold = tc_failure_threshold\n        self.cumulative_tc_failure_threshold = cumulative_tc_failure_threshold\n\n    def configure(self, options, config):\n        \"\"\"\n        Configure the plugin and system, based on selected options\n        \"\"\"\n        self.config = config\n        self.enabled = True\n        self.param_dict = 
self.__get_param_dictionary()\n\n    def prepareTestRunner(self, runner):\n        \"\"\"\n        Prepare test runner\n        \"\"\"\n        return PtlTextTestRunner(verbosity=3, config=self.config,\n                                 repeat_count=self.repeat_count,\n                                 repeat_delay=self.repeat_delay)\n\n    def prepareTestResult(self, result):\n        \"\"\"\n        Prepare test result\n        \"\"\"\n        self.result = result\n\n    def startContext(self, context):\n        context.param = self.param\n        context.use_cur_setup = self.use_cur_setup\n        context.start_time = datetime.datetime.now()\n        if isclass(context) and issubclass(context, unittest.TestCase):\n            self.result.logger.info(self.result.separator1)\n            self.result.logger.info('suite name: ' + context.__name__)\n            doc = context.__doc__\n            if doc is not None:\n                self.result.logger.info('suite docstring: \\n' + doc + '\\n')\n            self.result.logger.info(self.result.separator1)\n            self.__failed_tc_count = 0\n            self.__failed_tc_count_msg = False\n\n    def __get_timeout(self, test):\n        _test = None\n        if hasattr(test, 'test'):\n            _test = test.test\n        elif hasattr(test, 'context'):\n            _test = test.context\n        if _test is None:\n            return MINIMUM_TESTCASE_TIMEOUT\n        dflt_timeout = int(getattr(_test,\n                                   'conf',\n                                   {}).get('default-testcase-timeout',\n                                           MINIMUM_TESTCASE_TIMEOUT))\n        tc_timeout = int(getattr(getattr(_test,\n                                         getattr(_test, '_testMethodName', ''),\n                                         None),\n                                 TIMEOUT_KEY,\n                                 0))\n        return max([dflt_timeout, tc_timeout])\n\n    def 
__set_test_end_data(self, test, err=None):\n        if self.hardware_report_timer is not None:\n            self.hardware_report_timer.cancel()\n        if not hasattr(test, 'start_time'):\n            test = test.context\n        if err is not None:\n            is_skip = issubclass(err[0], SkipTest)\n            is_tctr = issubclass(err[0], TCThresholdReached)\n            if not (is_skip or is_tctr):\n                self.__failed_tc_count += 1\n                self.__tf_count += 1\n            try:\n                test.err_in_string = self.result._exc_info_to_string(err,\n                                                                     test)\n            except BaseException:\n                etype, value, tb = err\n                test.err_in_string = ''.join(format_exception(etype, value,\n                                                              tb))\n        else:\n            test.err_in_string = 'None'\n        test.end_time = datetime.datetime.now()\n        test.duration = test.end_time - test.start_time\n        test.captured_logs = self.result.handler.get_logs()\n\n    def __get_param_dictionary(self):\n        \"\"\"\n        Method to convert data in param into dictionary of cluster\n        information\n        \"\"\"\n        def get_bool(v):\n            if v is None or v == '':\n                return False\n            if v in PTL_TRUE:\n                return True\n            if v in PTL_FALSE:\n                return False\n            raise ValueError(\"Need boolean value, not %s\" % v)\n\n        tparam_contents = {}\n        nomomlist = []\n        shortname = (socket.gethostname()).split('.', 1)[0]\n        for key in ['servers', 'moms', 'comms', 'clients', 'nomom']:\n            tparam_contents[key] = []\n        tparam_contents['mom_on_server'] = False\n        tparam_contents['no_mom_on_server'] = False\n        tparam_contents['no_comm_on_server'] = False\n        tparam_contents['no_comm_on_mom'] = False\n        if 
self.param is not None:\n            for h in self.param.split(','):\n                if '=' in h:\n                    k, v = h.split('=', 1)\n                    hosts = [x.split('@')[0] for x in v.split(':')]\n                    if (k == 'server' or k == 'servers'):\n                        tparam_contents['servers'].extend(hosts)\n                    elif (k == 'mom' or k == 'moms'):\n                        tparam_contents['moms'].extend(hosts)\n                    elif k == 'comms':\n                        tparam_contents['comms'] = hosts\n                    elif k == 'client':\n                        tparam_contents['clients'] = hosts\n                    elif k == 'nomom':\n                        nomomlist = hosts\n                    elif k == 'mom_on_server':\n                        tparam_contents['mom_on_server'] = get_bool(v)\n                    elif k == 'no_mom_on_server':\n                        tparam_contents['no_mom_on_server'] = get_bool(v)\n                    elif k == 'no_comm_on_mom':\n                        tparam_contents['no_comm_on_mom'] = get_bool(v)\n        for pkey in ['servers', 'moms', 'comms', 'clients']:\n            if not tparam_contents[pkey]:\n                tparam_contents[pkey] = set([shortname])\n            else:\n                tparam_contents[pkey] = set(tparam_contents[pkey])\n        if nomomlist:\n            tparam_contents['nomom'] = set(nomomlist)\n        return tparam_contents\n\n    @staticmethod\n    def __are_requirements_matching(param_dic=None, test=None):\n        \"\"\"\n        Validates test requirements against test cluster information\n        returns True on match or error message otherwise None\n\n        :param param_dic: dictionary of cluster information from data passed\n                          to param list\n        :param_dic type: dic\n        :param test: test object\n        :test type: object\n\n        :returns True or error message or None\n        \"\"\"\n        logger = 
logging.getLogger(__name__)\n        ts_requirements = {}\n        tc_requirements = {}\n        param_count = {}\n        _servers = set(param_dic['servers'])\n        _moms = set(param_dic['moms'])\n        _comms = set(param_dic['comms'])\n        _nomom = set(param_dic['nomom'])\n        _mom_on_server = param_dic['mom_on_server']\n        _no_mom_on_server = param_dic['no_mom_on_server']\n        _no_comm_on_mom = param_dic['no_comm_on_mom']\n        _no_comm_on_server = param_dic['no_comm_on_server']\n        shortname = (socket.gethostname()).split('.', 1)[0]\n        if test is None:\n            return None\n        test_name = getattr(test.test, '_testMethodName', None)\n        if test_name is not None:\n            method = getattr(test.test, test_name, None)\n        if method is not None:\n            tc_requirements = getattr(method, REQUIREMENTS_KEY, {})\n            cls = method.__self__.__class__\n            ts_requirements = getattr(cls, REQUIREMENTS_KEY, {})\n        if not tc_requirements:\n            if not ts_requirements:\n                return None\n        eff_tc_req = get_effective_reqs(ts_requirements, tc_requirements)\n        setattr(test.test, 'requirements', eff_tc_req)\n        for key in ['servers', 'moms', 'comms', 'clients']:\n            param_count['num_' + key] = len(param_dic[key])\n        for pk in param_count:\n            if param_count[pk] < eff_tc_req[pk]:\n                _msg = 'available ' + pk + \" (\"\n                _msg += str(param_count[pk]) + \") is less than required \" + pk\n                _msg += \" (\" + str(eff_tc_req[pk]) + \")\"\n                logger.error(_msg)\n                return _msg\n\n        if hasattr(test, 'test'):\n            _test = test.test\n        elif hasattr(test, 'context'):\n            _test = test.context\n        else:\n            return None\n\n        name = 'moms'\n        if (hasattr(_test, name) and\n                (getattr(_test, name, None) is not None)):\n      
      for mc in getattr(_test, name).values():\n                platform = mc.platform\n                if platform not in ['linux', 'shasta',\n                                    'cray'] and mc.hostname in _moms:\n                    _moms.remove(mc.hostname)\n        for hostname in _moms:\n            si = SystemInfo()\n            si.get_system_info(hostname)\n            available_sys_ram = getattr(si, 'system_ram', None)\n            if available_sys_ram is None:\n                _msg = 'failed to get ram info on host: ' + hostname\n                logger.error(_msg)\n                return _msg\n            elif eff_tc_req['min_mom_ram'] >= available_sys_ram:\n                _msg = hostname + ': available ram (' + str(available_sys_ram)\n                _msg += ') is less than the minimum required ram ('\n                _msg += str(eff_tc_req['min_mom_ram'])\n                _msg += ') for test execution'\n                logger.error(_msg)\n                return _msg\n            available_sys_disk = getattr(si, 'system_disk', None)\n            if available_sys_disk is None:\n                _msg = 'failed to get disk info on host: ' + hostname\n                logger.error(_msg)\n                return _msg\n            elif eff_tc_req['min_mom_disk'] >= available_sys_disk:\n                _msg = hostname + ': available disk space ('\n                _msg += str(available_sys_disk)\n                _msg += ') is less than the minimum required disk space ('\n                _msg += str(eff_tc_req['min_mom_disk'])\n                _msg += ') for test execution'\n                logger.error(_msg)\n                return _msg\n        for hostname in param_dic['servers']:\n            si = SystemInfo()\n            si.get_system_info(hostname)\n            available_sys_ram = getattr(si, 'system_ram', None)\n            if available_sys_ram is None:\n                _msg = 'failed to get ram info on host: ' + hostname\n                
logger.error(_msg)\n                return _msg\n            elif eff_tc_req['min_server_ram'] >= available_sys_ram:\n                _msg = hostname + ': available ram (' + str(available_sys_ram)\n                _msg += ') is less than the minimum required ram ('\n                _msg += str(eff_tc_req['min_server_ram'])\n                _msg += ') for test execution'\n                logger.error(_msg)\n                return _msg\n            available_sys_disk = getattr(si, 'system_disk', None)\n            if available_sys_disk is None:\n                _msg = 'failed to get disk info on host: ' + hostname\n                logger.error(_msg)\n                return _msg\n            elif eff_tc_req['min_server_disk'] >= available_sys_disk:\n                _msg = hostname + ': available disk space ('\n                _msg += str(available_sys_disk)\n                _msg += ') is less than the minimum required disk space ('\n                _msg += str(eff_tc_req['min_server_disk'])\n                _msg += ') for test execution'\n                logger.error(_msg)\n                return _msg\n        if _moms & _servers:\n            if eff_tc_req['no_mom_on_server'] or \\\n               (_nomom - _servers) or \\\n               _no_mom_on_server:\n                _msg = 'no mom on server'\n                logger.error(_msg)\n                return _msg\n        else:\n            if eff_tc_req['mom_on_server'] or \\\n               _mom_on_server:\n                _msg = 'mom on server'\n                logger.error(_msg)\n                return _msg\n        if _comms & _servers:\n            if eff_tc_req['no_comm_on_server'] or _no_comm_on_server:\n                _msg = 'no comm on server'\n                logger.error(_msg)\n                return _msg\n        comm_mom_list = _moms & _comms\n        if comm_mom_list and shortname in comm_mom_list:\n            # Excluding the server hostname for flag 'no_comm_on_mom'\n            
comm_mom_list.remove(shortname)\n        if comm_mom_list:\n            if eff_tc_req['no_comm_on_mom']:\n                _msg = 'no comm on mom'\n                logger.error(_msg)\n                return _msg\n        else:\n            if not eff_tc_req['no_comm_on_mom']:\n                _msg = 'no comm on server'\n                logger.error(_msg)\n                return _msg\n\n    def check_hardware_status_and_core_files(self, test):\n        \"\"\"\n        function checks hardware status and core files\n        every 5 minutes\n        \"\"\"\n        du = DshUtils()\n        systems = list(self.param_dict['servers'])\n        systems.extend(self.param_dict['moms'])\n        systems.extend(self.param_dict['comms'])\n        systems = list(set(systems))\n\n        if hasattr(test, 'test'):\n            _test = test.test\n        elif hasattr(test, 'context'):\n            _test = test.context\n        else:\n            return None\n\n        for name in ['servers', 'moms', 'comms', 'clients']:\n            mlist = None\n            if (hasattr(_test, name) and\n                    (getattr(_test, name, None) is not None)):\n                mlist = getattr(_test, name).values()\n            if mlist:\n                for mc in mlist:\n                    platform = mc.platform\n                    if ((platform not in ['linux', 'shasta', 'cray']) and\n                            (mc.hostname in systems)):\n                        systems.remove(mc.hostname)\n\n        self.hardware_report_timer = Timer(\n            300, self.check_hardware_status_and_core_files, args=(test,))\n        self.hardware_report_timer.start()\n\n        for hostname in systems:\n            hr = SystemInfo()\n            hr.get_system_info(hostname)\n            # monitors disk\n            used_disk_percent = getattr(hr,\n                                        'system_disk_used_percent', None)\n            if used_disk_percent is None:\n                _msg = hostname\n        
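# The requirement checks above reduce to set intersections over the host
# roles. A condensed sketch of that logic (function name ``check_topology``
# is illustrative; the flag names follow the requirements keys used above):

```python
def check_topology(servers, moms, comms, no_mom_on_server=False,
                   mom_on_server=False, no_comm_on_server=False):
    """Return an error string when the cluster layout violates the
    requested flags, else None (simplified from
    __are_requirements_matching)."""
    if moms & servers:
        # A mom shares a host with a server
        if no_mom_on_server:
            return 'no mom on server'
    elif mom_on_server:
        # A mom was required on the server host but none is there
        return 'mom on server'
    if comms & servers and no_comm_on_server:
        return 'no comm on server'
    return None
```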
        _msg += \": unable to get disk info\"\n                self.hardware_report_timer.cancel()\n                raise SkipTest(_msg)\n            elif 70 <= used_disk_percent < 95:\n                _msg = hostname + \": disk usage is at \"\n                _msg += str(used_disk_percent) + \"%\"\n                _msg += \", disk cleanup is recommended.\"\n                self.logger.warning(_msg)\n            elif used_disk_percent >= 95:\n                _msg = hostname + \":disk usage > 95%, skipping the test(s)\"\n                self.hardware_report_timer.cancel()\n                raise SkipTest(_msg)\n            # checks for core files\n            pbs_conf = du.parse_pbs_config(hostname)\n            mom_priv_path = os.path.join(pbs_conf[\"PBS_HOME\"], \"mom_priv\")\n            if du.isdir(hostname=hostname, path=mom_priv_path):\n                mom_priv_files = du.listdir(\n                    hostname=hostname,\n                    path=mom_priv_path,\n                    sudo=True,\n                    fullpath=False)\n                if fnmatch.filter(mom_priv_files, \"core*\"):\n                    _msg = hostname + \": core files found in \"\n                    _msg += mom_priv_path\n                    self.logger.warning(_msg)\n            server_priv_path = os.path.join(\n                pbs_conf[\"PBS_HOME\"], \"server_priv\")\n            if du.isdir(hostname=hostname, path=server_priv_path):\n                server_priv_files = du.listdir(\n                    hostname=hostname,\n                    path=server_priv_path,\n                    sudo=True,\n                    fullpath=False)\n                if fnmatch.filter(server_priv_files, \"core*\"):\n                    _msg = hostname + \": core files found in \"\n                    _msg += server_priv_path\n                    self.logger.warning(_msg)\n            sched_priv_path = os.path.join(pbs_conf[\"PBS_HOME\"], \"sched_priv\")\n            if du.isdir(hostname=hostname, 
path=sched_priv_path):\n                sched_priv_files = du.listdir(\n                    hostname=hostname,\n                    path=sched_priv_path,\n                    sudo=True,\n                    fullpath=False)\n                if fnmatch.filter(sched_priv_files, \"core*\"):\n                    _msg = hostname + \": core files found in \"\n                    _msg += sched_priv_path\n                    self.logger.warning(_msg)\n            for u in PBS_ALL_USERS:\n                user_home_files = du.listdir(hostname=hostname, path=u.home,\n                                             sudo=True, fullpath=False,\n                                             runas=u.name)\n                if user_home_files and fnmatch.filter(\n                        user_home_files, \"core*\"):\n                    _msg = hostname + \": user-\" + str(u)\n                    _msg += \": core files found in \"\n                    self.logger.warning(_msg + u.home)\n\n    def startTest(self, test):\n        \"\"\"\n        Start the test\n        \"\"\"\n        if ((self.cumulative_tc_failure_threshold != 0) and\n                (self.__tf_count >= self.cumulative_tc_failure_threshold)):\n            _msg = 'Total testcases failure count exceeded cumulative'\n            _msg += ' testcase failure threshold '\n            _msg += '(%d)' % self.cumulative_tc_failure_threshold\n            self.logger.error(_msg)\n            raise KeyboardInterrupt\n        if ((self.tc_failure_threshold != 0) and\n                (self.__failed_tc_count >= self.tc_failure_threshold)):\n            if self.__failed_tc_count_msg:\n                raise TCThresholdReached\n            _msg = 'Testcases failure for this testsuite count exceeded'\n            _msg += ' testcase failure threshold '\n            _msg += '(%d)' % self.tc_failure_threshold\n            self.logger.error(_msg)\n            self.__failed_tc_count_msg = True\n            raise TCThresholdReached\n        rv = 
None\n        rv = self.__are_requirements_matching(self.param_dict, test)\n        if rv is not None:\n            # Below method call is needed in order to get the test case\n            # details in the output and to have the skipped test count\n            # included in total run count of the test run\n            self.result.startTest(test)\n            raise SkipTest(rv)\n        # report hardware status and check for core files\n        self.check_hardware_status_and_core_files(test)\n\n        def timeout_handler(signum, frame):\n            raise TimeOut('Timed out after %s seconds' % timeout)\n        if PTLTestRunner.timeout is None:\n            timeout = self.__get_timeout(test)\n            old_handler = signal.signal(signal.SIGALRM, timeout_handler)\n            setattr(test, 'old_sigalrm_handler', old_handler)\n            signal.alarm(timeout)\n\n    def stopTest(self, test):\n        \"\"\"\n        Stop the test\n        \"\"\"\n        old_sigalrm_handler = getattr(test, 'old_sigalrm_handler', None)\n        if old_sigalrm_handler is not None:\n            signal.signal(signal.SIGALRM, old_sigalrm_handler)\n            signal.alarm(0)\n\n    def addError(self, test, err):\n        \"\"\"\n        Add error\n        \"\"\"\n        if isclass(err[0]) and issubclass(err[0], TCThresholdReached):\n            return True\n        self.__set_test_end_data(test, err)\n\n    def addFailure(self, test, err):\n        \"\"\"\n        Add failure\n        \"\"\"\n        self.__set_test_end_data(test, err)\n\n    def addSuccess(self, test):\n        \"\"\"\n        Add success\n        \"\"\"\n        self.__set_test_end_data(test)\n\n    def _cleanup(self):\n        self.logger.info('Cleaning up temporary files')\n        du = DshUtils()\n        hosts = set(self.param_dict['moms']).union(\n            set(self.param_dict['servers']))\n        for user in PBS_USERS:\n            self.logger.debug('Cleaning %s\\'s home directory' % (str(user)))\n            
runas = PbsUser.get_user(user)\n            for host in hosts:\n                ret = du.run_cmd(host, cmd=['printenv', 'HOME'], sudo=True,\n                                 runas=runas, logerr=False, as_script=False,\n                                 level=logging.DEBUG)\n                if ret['rc'] == 0:\n                    path = ret['out'][0].strip()\n                else:\n                    return None\n                ftd = []\n                files = du.listdir(host, path=path, runas=user,\n                                   level=logging.DEBUG)\n                bn = os.path.basename\n                ftd.extend([f for f in files if bn(f).startswith('PtlPbs')])\n                ftd.extend([f for f in files if bn(f).startswith('STDIN')])\n\n                # remove matches in batches of at most 1000 paths per call\n                for i in range(0, len(ftd), 1000):\n                    j = i + 1000\n                    du.rm(host, path=ftd[i:j], runas=user,\n                          force=True, level=logging.DEBUG)\n\n        root_dir = os.sep\n        dirlist = set([os.path.join(root_dir, 'tmp'),\n                       os.path.join(root_dir, 'var', 'tmp')])\n        # get tmp dir from the environment\n        for envname in 'TMPDIR', 'TEMP', 'TMP':\n            dirname = os.getenv(envname)\n            if dirname:\n                dirlist.add(dirname)\n\n        p = re.compile(r'^pbs\\.\\d+')\n        for tmpdir in dirlist:\n            # list the contents of each tmp dir and\n            # get the file list to be deleted\n            self.logger.info('Cleaning up ' + tmpdir + ' dir')\n            ftd = []\n            files = du.listdir(path=tmpdir)\n            bn = os.path.basename\n            ftd.extend([f for f in files if bn(f).startswith('PtlPbs')])\n            ftd.extend([f for f in files if bn(f).startswith('STDIN')])\n            ftd.extend([f for f in files if bn(f).startswith('pbsscrpt')])\n            ftd.extend([f for f in files if 
bn(f).startswith('pbs.conf.')])\n            ftd.extend([f for f in files if p.match(bn(f))])\n            for f in ftd:\n                du.rm(path=f, sudo=True, recursive=True, force=True,\n                      level=logging.DEBUG)\n        for f in du.tmpfilelist:\n            du.rm(path=f, sudo=True, force=True, level=logging.DEBUG)\n        del du.tmpfilelist[:]\n        tmpdir = tempfile.gettempdir()\n        os.chdir(tmpdir)\n        tmppath = os.path.join(tmpdir, 'dejagnutemp%s' % os.getpid())\n        if du.isdir(path=tmppath):\n            du.rm(path=tmppath, recursive=True, sudo=True, force=True,\n                  level=logging.DEBUG)\n\n    def begin(self):\n        command = sys.argv\n        command[0] = os.path.basename(command[0])\n        self.logger.info('input command: ' + ' '.join(command))\n        self.logger.info('param: ' + str(self.param))\n        self.logger.info('ptl version: ' + str(ptl.__version__))\n        _m = 'platform: ' + ' '.join(platform.uname()).strip()\n        self.logger.info(_m)\n        self.logger.info('python version: ' + str(platform.python_version()))\n        self.logger.info('user: ' + pwd.getpwuid(os.getuid())[0])\n        self.logger.info('-' * 80)\n\n        if self.lcov_data is not None:\n            self.lcov_utils = LcovUtils(cov_bin=self.lcov_bin,\n                                        html_bin=self.genhtml_bin,\n                                        cov_out=self.lcov_out,\n                                        data_dir=self.lcov_data,\n                                        html_nosrc=self.lcov_nosrc,\n                                        html_baseurl=self.lcov_baseurl)\n            # Initialize coverage analysis\n            self.lcov_utils.zero_coverage()\n            # The following 'dance' is done due to some oddities on lcov's\n            # part, according to this the lcov readme file at\n            # http://ltp.sourceforge.net/coverage/lcov/readme.php that reads:\n            #\n          
  # Note that this step only works after the application has\n            # been started and stopped at least once. Otherwise lcov will\n            # abort with an error mentioning that there are no data/.gcda\n            # files.\n            self.lcov_utils.initialize_coverage(name='PTLTestCov')\n            PBSInitServices().restart()\n        self._cleanup()\n\n    def finalize(self, result):\n        if self.lcov_data is not None:\n            # See note above that briefly explains the 'dance' needed to get\n            # reliable coverage data\n            PBSInitServices().restart()\n            self.lcov_utils.capture_coverage(name='PTLTestCov')\n            exclude = ['\"*work/gSOAP/*\"', '\"*/pbs/doc/*\"', 'lex.yy.c',\n                       'pbs_ifl_wrap.c', 'usr/include/*', 'unsupported/*']\n            self.lcov_utils.merge_coverage_traces(name='PTLTestCov',\n                                                  exclude=exclude)\n            self.lcov_utils.generate_html()\n            self.lcov_utils.change_baseurl()\n            self.logger.info('\\n'.join(self.lcov_utils.summarize_coverage()))\n        self._cleanup()\n"
  },
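The runner above re-arms a `threading.Timer` from inside `check_hardware_status_and_core_files` itself so the hardware check repeats every 300 seconds, and cancels the pending timer before raising `SkipTest`. A minimal, runnable sketch of that self-rescheduling pattern (the `PeriodicCheck` class and its names are illustrative, not part of PTL):

```python
import threading


class PeriodicCheck:
    """Re-arm a one-shot Timer from inside its own callback, the way
    the PTL runner repeats its hardware/core-file check (sketch)."""

    def __init__(self, interval, func):
        self.interval = interval
        self.func = func
        self.timer = None

    def _arm(self):
        # daemon timers will not keep the interpreter alive on exit
        self.timer = threading.Timer(self.interval, self._run)
        self.timer.daemon = True
        self.timer.start()

    def _run(self):
        # schedule the next firing before doing the work, mirroring the
        # runner, which starts the new Timer before running its checks
        self._arm()
        self.func()

    def start(self):
        self._arm()

    def cancel(self):
        # cancel() only stops the one *pending* timer
        if self.timer is not None:
            self.timer.cancel()
```

Because each firing creates a fresh `Timer`, `cancel()` only stops the currently pending one; that is why the runner has to cancel again after every re-arm whenever it decides to skip the test.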
  {
    "path": "test/fw/ptl/utils/plugins/ptl_test_tags.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport sys\nimport logging\nimport unittest\nfrom nose.plugins.base import Plugin\nimport collections\ntry:\n    from collections.abc import Callable  # Python 3.10+\nexcept ImportError:\n    from collections import Callable  # For Python versions before 3.10\n\nlog = logging.getLogger('nose.plugins.PTLTestTags')\n\nTAGKEY = '__PTL_TAGS_LIST__'\n\n\ndef tags(*args, **kwargs):\n    \"\"\"\n    Decorator that adds tags to classes, functions, or methods\n    \"\"\"\n    def wrap_obj(obj):\n        tagobj = getattr(obj, TAGKEY, [])\n        for name in args:\n            tagobj.append(name)\n            PTLTestTags.tags_list.append(name)\n            setattr(obj, name, True)\n        for name, value in kwargs.items():\n            tagobj.append('%s=%s' % (name, value))\n            PTLTestTags.tags_list.append(name)\n            setattr(obj, name, value)\n        setattr(obj, TAGKEY, sorted(set(tagobj)))\n        return obj\n    return wrap_obj\n\n\ndef get_tag_value(method, cls, tag_name, default=False):\n    \"\"\"\n    Look up a tag on a ``method/function``.\n    If the tag isn't found there, look it up on the\n    method's class, if any.\n    \"\"\"\n    Missing = object()\n    value = getattr(method, tag_name, Missing)\n    if value is Missing and cls is not None:\n        value = getattr(cls, tag_name, Missing)\n    if value is Missing:\n        
return default\n    return value\n\n\nclass EvalHelper(object):\n\n    \"\"\"\n    Object that can act as context dictionary for eval and looks up\n    names as attributes on a method/function and its class.\n    \"\"\"\n\n    def __init__(self, method, cls):\n        self.method = method\n        self.cls = cls\n\n    def __getitem__(self, name):\n        return get_tag_value(self.method, self.cls, name)\n\n\nclass FakeRunner(object):\n\n    def __init__(self, matched, tags_list, list_tags, verbose):\n        self.matched = matched\n        self.tags_list = tags_list\n        self.list_tags = list_tags\n        self.verbose = verbose\n\n    def run(self, test):\n        if self.list_tags:\n            print(('\\n'.join(sorted(set(self.tags_list)))))\n            sys.exit(0)\n        suites = sorted(set(self.matched.keys()))\n        if not self.verbose:\n            print(('\\n'.join(suites)))\n        else:\n            for k in suites:\n                v = sorted(set(self.matched[k]))\n                for _v in v:\n                    print((k + '.' 
+ _v))\n        sys.exit(0)\n\n\nclass PTLTestTags(Plugin):\n\n    \"\"\"\n    Select test cases based on the given tags\n    \"\"\"\n    name = 'PTLTestTags'\n    score = sys.maxsize - 3\n    logger = logging.getLogger(__name__)\n    tags_list = []\n\n    def __init__(self):\n        Plugin.__init__(self)\n        self.tags_to_check = []\n        self.tags = []\n        self.eval_tags = []\n        self.tags_info = False\n        self.list_tags = False\n        self.verbose = False\n        self.matched = {}\n        self._test_marker = 'test_'\n\n    def options(self, parser, env):\n        \"\"\"\n        Register command line options\n        \"\"\"\n        pass\n\n    def set_data(self, tags, eval_tags, tags_info=False, list_tags=False,\n                 verbose=False):\n        self.tags.extend(tags)\n        self.eval_tags.extend(eval_tags)\n        self.tags_info = tags_info\n        self.list_tags = list_tags\n        self.verbose = verbose\n\n    def configure(self, options, config):\n        \"\"\"\n        Configure the plugin and system, based on selected options.\n\n        tags and eval_tags may each be lists.\n\n        self.tags_to_check will be a list of lists of tuples. 
In that list, each\n        list is a group of attributes, all of which must match for the rule to\n        match.\n        \"\"\"\n        self.tags_to_check = []\n        for tag in self.eval_tags:\n            def eval_in_context(expr, obj, cls):\n                return eval(expr, None, EvalHelper(obj, cls))\n            self.tags_to_check.append([(tag, eval_in_context)])\n        for tags in self.tags:\n            tag_group = []\n            for tag in tags.strip().split(','):\n                if not tag:\n                    continue\n                items = tag.split('=', 1)\n                if len(items) > 1:\n                    key, value = items\n                else:\n                    key = items[0]\n                    if key[0] == '!':\n                        key = key[1:]\n                        value = False\n                    else:\n                        value = True\n                tag_group.append((key, value))\n            self.tags_to_check.append(tag_group)\n        if (len(self.tags_to_check) > 0) or self.list_tags:\n            self.enabled = True\n\n    def is_tags_matching(self, method, cls=None):\n        \"\"\"\n        Verify whether a method has the required tags\n        The method is considered a match if it matches all tags\n        for any tag group.\n        \"\"\"\n        any_matched = False\n        for group in self.tags_to_check:\n            group_matched = True\n            for key, value in group:\n                tag_value = get_tag_value(method, cls, key)\n                if isinstance(value, Callable):\n                    if not value(key, method, cls):\n                        group_matched = False\n                        break\n                elif value is True:\n                    if not bool(tag_value):\n                        group_matched = False\n                        break\n                elif value is False:\n                    if bool(tag_value):\n                        group_matched = 
False\n                        break\n                elif type(tag_value) in (list, tuple):\n                    value = str(value).lower()\n                    if value not in [str(x).lower() for x in tag_value]:\n                        group_matched = False\n                        break\n                else:\n                    if ((value != tag_value) and\n                            (str(value).lower() != str(tag_value).lower())):\n                        group_matched = False\n                        break\n            any_matched = any_matched or group_matched\n        if not any_matched:\n            return False\n\n    def prepareTestRunner(self, runner):\n        \"\"\"\n        Prepare test runner\n        \"\"\"\n        if (self.tags_info or self.list_tags):\n            return FakeRunner(self.matched, self.tags_list, self.list_tags,\n                              self.verbose)\n\n    def wantClass(self, cls):\n        \"\"\"\n        Accept the class if its subclass of TestCase and has at-least one\n        test case\n        \"\"\"\n        if not issubclass(cls, unittest.TestCase):\n            return False\n        has_test = False\n        for t in dir(cls):\n            if t.startswith(self._test_marker):\n                has_test = True\n                break\n        if not has_test:\n            return False\n\n    def wantFunction(self, function):\n        \"\"\"\n        Accept the function if its tags match.\n        \"\"\"\n        return False\n\n    def wantMethod(self, method):\n        \"\"\"\n        Accept the method if its tags match.\n        \"\"\"\n        try:\n            cls = method.__self__.__class__\n        except AttributeError:\n            return False\n        if not method.__name__.startswith(self._test_marker):\n            return False\n        rv = self.is_tags_matching(method, cls)\n        if rv is None:\n            cname = cls.__name__\n            if cname not in self.matched.keys():\n                
self.matched[cname] = []\n            self.matched[cname].append(method.__name__)\n        return rv\n"
  },
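The `tags` decorator above stores positional tags as boolean attributes and `key=value` tags with their values directly on the decorated object, while `get_tag_value` resolves a tag first on the method and then falls back to its class. A self-contained sketch of that lookup chain (the `smoke`, `priority`, and `fast` tag names and `DemoSuite` are made-up examples; the sketch also copies the inherited tag list before appending, a small hardening over the shared-list behavior in the original):

```python
TAGKEY = '__PTL_TAGS_LIST__'


def tags(*args, **kwargs):
    """Minimal re-implementation of PTL's @tags decorator (sketch)."""
    def wrap_obj(obj):
        # copy so a decorated subclass does not mutate its parent's list
        taglist = list(getattr(obj, TAGKEY, []))
        for name in args:
            taglist.append(name)
            setattr(obj, name, True)           # positional tags become flags
        for name, value in kwargs.items():
            taglist.append('%s=%s' % (name, value))
            setattr(obj, name, value)          # keyword tags keep their value
        setattr(obj, TAGKEY, sorted(set(taglist)))
        return obj
    return wrap_obj


def get_tag_value(method, cls, tag_name, default=False):
    # method-level tag wins; otherwise fall back to the class, then default
    missing = object()
    value = getattr(method, tag_name, missing)
    if value is missing and cls is not None:
        value = getattr(cls, tag_name, missing)
    return default if value is missing else value


@tags('smoke', priority=2)          # hypothetical tag names
class DemoSuite:
    @tags('fast')
    def test_one(self):
        pass
```

With this, `get_tag_value(DemoSuite.test_one, DemoSuite, 'fast')` is `True` from the method itself, `'priority'` resolves to `2` through the class fallback, and an unknown tag yields the `False` default, which is exactly the tri-level resolution `is_tags_matching` relies on.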
  {
    "path": "test/fw/ptl.csh",
    "content": "#!/usr/bin/csh\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n# This file will set path variables in case of ptl installation\nif ( -f /etc/debian_version ) then\n    set __ptlpkgname=`dpkg -W -f='${binary:Package}\\n' | grep -E '*-ptl$'`\n    if ( \"x${__ptlpkgname}\" != \"x\" ) then\n        set ptl_prefix_lib=`dpkg -L ${__ptlpkgname} | grep -m 1 lib$`\n    endif\nelse\n    set __ptlpkgname=`rpm -qa | grep -E '*-ptl-[[:digit:]]'`\n    if ( \"x${__ptlpkgname}\" != \"x\" ) then\n        set ptl_prefix_lib=`rpm -ql ${__ptlpkgname} | grep -m 1 lib$`\n    endif\nendif\nif ( $?ptl_prefix_lib ) then\n\tset python_dir=`/bin/ls -1 ${ptl_prefix_lib}`\n\tset prefix=`dirname ${ptl_prefix_lib}`\n\n\tsetenv PATH ${prefix}/bin/:${PATH}\n\tif ( $?PYTHONPATH ) then\n\t\tsetenv PYTHONPATH ${prefix}/lib/${python_dir}/site-packages/:$PYTHONPATH\n\telse\n\t\tsetenv PYTHONPATH ${prefix}/lib/${python_dir}/site-packages/\n\tendif\n\tunset python_dir\n\tunset prefix\n\tunset ptl_prefix_lib\nelse\n\tif ( $?PBS_CONF_FILE ) then\n\t\tset conf = \"$PBS_CONF_FILE\"\n\telse\n\t\tset conf = /etc/pbs.conf\n\tendif\n\tif ( -r \"${conf}\" ) then\n\t\t# we only need PBS_EXEC from pbs.conf\n\t\tset __PBS_EXEC=`grep '^[[:space:]]*PBS_EXEC=' \"$conf\" | tail -1 | sed 's/^[[:space:]]*PBS_EXEC=\\([^[:space:]]*\\)[[:space:]]*/\\1/'`\n\t\tif ( \"X${__PBS_EXEC}\" != \"X\" ) then\n\t\t\t# Define PATH and PYTHONPATH for the users\n\t\t\tset 
PTL_PREFIX=`dirname ${__PBS_EXEC}`/ptl\n\t\t\tset python_dir=`/bin/ls -1 ${PTL_PREFIX}/lib`/site-packages\n\t\t\tif ( $?PATH && -d ${PTL_PREFIX}/bin ) then\n\t\t\t\tsetenv PATH \"${PATH}:${PTL_PREFIX}/bin\"\n\t\t\tendif\n\t\t\tif ( -d \"${PTL_PREFIX}/lib/${python_dir}\" ) then\n\t\t\t\tif ( $?PYTHONPATH ) then\n\t\t\t\t\tsetenv PYTHONPATH \"${PYTHONPATH}:${PTL_PREFIX}/lib/${python_dir}\"\n\t\t\t\telse\n\t\t\t\t\tsetenv PYTHONPATH \"${PTL_PREFIX}/lib/${python_dir}\"\n\t\t\t\tendif\n\t\t\tendif\n\t\tendif\n\t\tunset __PBS_EXEC\n\t\tunset PTL_PREFIX\n\t\tunset conf\n\t\tunset python_dir\n\tendif\nendif\n"
  },
  {
    "path": "test/fw/ptl.sh",
    "content": "#!/usr/bin/sh\n#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\n# This file will set path variables in case of ptl installation\n\nif [ -f /etc/debian_version ]; then\n    __ptlpkgname=$(dpkg -W -f='${binary:Package}\\n' 2>/dev/null | grep -E '*-ptl$')\n    if [ \"x${__ptlpkgname}\" != \"x\" ]; then\n        ptl_prefix_lib=$(dpkg -L ${__ptlpkgname} 2>/dev/null | grep -m 1 lib$ 2>/dev/null)\n    fi\nelse\n    __ptlpkgname=$(rpm -qa 2>/dev/null | grep -E '*-ptl-[[:digit:]]')\n    if [ \"x${__ptlpkgname}\" != \"x\" ]; then\n        ptl_prefix_lib=$(rpm -ql ${__ptlpkgname} 2>/dev/null | grep -m 1 lib$ 2>/dev/null)\n    fi\nfi\nif [ \"x${ptl_prefix_lib}\" != \"x\" ]; then\n\tpython_dir=$( /bin/ls -1 ${ptl_prefix_lib} )\n\tprefix=$( dirname ${ptl_prefix_lib} )\n\n\texport PATH=${prefix}/bin/:${PATH}\n\texport PYTHONPATH=${prefix}/lib/${python_dir}/site-packages${PYTHONPATH:+:$PYTHONPATH}\n\tunset python_dir\n\tunset prefix\n\tunset ptl_prefix_lib\nelse\n\tconf=\"${PBS_CONF_FILE:-/etc/pbs.conf}\"\n\tif [ -r \"${conf}\" ]; then\n\t\t# we only need PBS_EXEC from pbs.conf\n\t\t__PBS_EXEC=$( grep '^[[:space:]]*PBS_EXEC=' \"$conf\" | tail -1 | sed 's/^[[:space:]]*PBS_EXEC=\\([^[:space:]]*\\)[[:space:]]*/\\1/' )\n\t\tif [ \"X${__PBS_EXEC}\" != \"X\" ]; then\n\t\t\t# Define PATH and PYTHONPATH for the users\n\t\t\tPTL_PREFIX=$( dirname ${__PBS_EXEC} )/ptl\n\t\t\tpython_dir=$( /bin/ls -1 ${PTL_PREFIX}/lib )/site-packages\n\t\t\t[ -d 
\"${PTL_PREFIX}/bin\" ] && export PATH=\"${PATH}:${PTL_PREFIX}/bin\"\n\t\t\t[ -d \"${PTL_PREFIX}/lib/${python_dir}\" ] && export PYTHONPATH=\"${PYTHONPATH:+$PYTHONPATH:}${PTL_PREFIX}/lib/${python_dir}\"\n\t\t\t[ -d \"${__PBS_EXEC}/lib/python/altair\" ] && export PYTHONPATH=\"${PYTHONPATH:+$PYTHONPATH:}${__PBS_EXEC}/lib/python/altair\"\n\t\t\t[ -d \"${__PBS_EXEC}/lib64/python/altair\" ] && export PYTHONPATH=\"${PYTHONPATH:+$PYTHONPATH:}${__PBS_EXEC}/lib64/python/altair\"\n\t\tfi\n\t\tunset __PBS_EXEC\n\t\tunset PTL_PREFIX\n\t\tunset conf\n\t\tunset python_dir\n\tfi\nfi\n"
  },
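`ptl.sh` above appends to `PYTHONPATH` with the `${PYTHONPATH:+:$PYTHONPATH}` expansion so that no stray `:` separator is left behind when the variable is unset. The idiom in isolation (the install path below is a made-up example):

```shell
#!/bin/sh
# ${VAR:+word} expands to word only when VAR is set and non-empty, so the
# ':' separator appears only when there is already a value to separate.
NEWDIR=/opt/ptl/lib/python3.9/site-packages   # hypothetical install path

unset PYTHONPATH
PYTHONPATH="${NEWDIR}${PYTHONPATH:+:$PYTHONPATH}"   # no trailing ':'
PYTHONPATH="${NEWDIR}${PYTHONPATH:+:$PYTHONPATH}"   # now 'dir:dir'
echo "$PYTHONPATH"
```

A naive `PYTHONPATH=$NEWDIR:$PYTHONPATH` would leave `dir:` when the variable was unset, and an empty path component is interpreted as the current directory, which the scripts avoid.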
  {
    "path": "test/fw/ptlreport",
    "content": "#!/bin/bash\n# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nprog=\"`basename $0`\"\n\nusage() {\n\techo -en \"${prog}\\n\"\n\techo -en \"\\tParses the PTL test output file <ptl_test_log> and reports\\n\"\n\techo -en \"\\tvarious counts like total, passed, failed, error-ed,\\n\"\n\techo -en \"\\tskipped and timedout test cases from <ptl_test_log> file.\\n\\n\"\n\techo -en \"Usage:\\n\\t${prog} <ptl_test_log> [OPTIONS]\\n\\n\"\n\techo -en \"OPTIONS:\\n\"\n\techo -en \"\\t-t | --total\\t- Print total number of test cases\\n\"\n\techo -en \"\\t-p | --passes\\t- Print passed test cases\\n\"\n\techo -en \"\\t-f | --fails\\t- Print failed test cases\\n\"\n\techo -en \"\\t-e | --errors\\t- Print error-ed test cases\\n\"\n\techo -en \"\\t-s | --skipped\\t- Print skipped test cases\\n\"\n\techo -en \"\\t-T | --timedout\\t- Print timedout test cases\\n\"\n\techo -en \"\\t-r | --runtime\\t- Print total runtime of tests\\n\"\n\techo -en \"\\t-S | --summary\\t- Print summary of tests\\n\"\n\techo -en \"\\t-v | --verbose\\t- Print verbose output, can be supplied multiple times to increase verbosity\\n\\n\"\n}\n\n# args: <is_asked> <count> <type>\nprint_info() {\n\tif [ ${1} -eq 1 ]\n\tthen\n\t\t[ ${_space} -eq 1 ] && echo \" \" || _space=1\n\t\tif [ ${verbose} -ge 1 ]\n\t\tthen\n\t\t\t[ ${2} -le 0 ] && echo \"${3^} test(s): ${2}\" && return\n\t\t\t[ ${verbose} -eq 1 -o ${3} == \"skipped\" ] && echo \"${2} test(s) ${3}:\" && 
\\\n\t\t\t\tsed -n \"/^${3}: \\(.*\\)$/p\" ${ptl_test_log} | awk '{ $1 = \"\\t\"; print $0 }' && return\n\t\t\tif [ ${verbose} -gt 1 ]\n\t\t\tthen\n\t\t\t\tlines=`sed -n \"/^${3}: \\(.*\\)$/p\" ${ptl_test_log} | awk '{ $1 = \"\"; gsub(/ /, \"@\", $0); print $0 }'`\n\t\t\t\tfor line in ${lines}\n\t\t\t\tdo\n\t\t\t\t\tline=${3^^}\":\"`echo ${line} | tr '@' ' '`\n\t\t\t\t\techo ${line}\n\t\t\t\t\tsed -n \"/${line}/,\\${N;/^\\n$/{P;q};P;D}\" ${ptl_test_log} | \\\n\t\t\t\t\t\tawk 'NR > 3 { sub(/.*Traceback/, \"Traceback\", $0); print \"  \"$0}'\n\t\t\t\tdone\n\t\t\tfi\n\t\telse\n\t\t\techo ${2}\n\t\tfi\n\tfi\n}\n\nif [ $# -le 1 ]\nthen\n\tusage\n\texit 1\nfi\n\nptl_test_log=$1\n\nif [ ! -r \"${ptl_test_log}\" ]\nthen\n\techo \"${prog}: ${ptl_test_log} doesn't exist or doesn't have read permission!\"\n\texit 1\nfi\n\ntotal=0\npasses=0\nfails=0\nerrors=0\nskipped=0\ntimedout=0\nsummary=0\nverbose=0\nruntime=0\n_space=0\n\nshift\nwhile [ \"$1\" != \"\" ]; do\n\tcase $1 in\n\t\t-p | --passes) passes=1; shift;;\n\t\t-f | --fails) fails=1; shift;;\n\t\t-e | --errors) errors=1; shift;;\n\t\t-s | --skipped) skipped=1; shift ;;\n\t\t-T | --timedout) timedout=1; shift;;\n\t\t-v | --verbose) verbose=$((${verbose} + 1)); shift ;;\n\t\t-t | --total) total=1; shift;;\n\t\t-S | --summary) summary=1; shift;;\n\t\t-r | --runtime) runtime=1; shift;;\n\t\t-h | --help ) usage; exit 0;;\n\t\t* ) echo -en \"Unknown Option: $1\\n\\n\"; usage; exit 1;\n\tesac\ndone\n\nsummary_line=`sed -n '/^run:.*: [0-9]*$/p' ${ptl_test_log}`\nread total_ct pass_ct fail_ct err_ct skip_ct timedout_ct <<< \\\n\t`echo ${summary_line} | awk -F '[,:]' \\\n\t'ORS=\" \" { for (i=2; i<=NF; i=i+2) { gsub(/^[ \\t]+/, \"\", $i); print $i } }'`\n\nif [ ${total} -eq 1 ]\nthen\n\t[ ${_space} -eq 1 ] && echo \" \" || _space=1\n\t[ ${verbose} -ge 1 ] && echo \"Total test(s): ${total_ct}\" || echo ${total_ct}\nfi\n\nif [ ${passes} -eq 1 ]\nthen\n\t[ ${_space} -eq 1 ] && echo \" \" || _space=1\n\t[ ${verbose} -ge 1 ] && echo 
\"Passed test(s): ${pass_ct}\" || echo ${pass_ct}\nfi\n\nprint_info ${fails} ${fail_ct} \"failed\"\nprint_info ${errors} ${err_ct} \"error\"\nprint_info ${skipped} ${skip_ct} \"skipped\"\nprint_info ${timedout} ${timedout_ct} \"timedout\"\n\nif [ ${summary} -eq 1 ]\nthen\n\t[ ${_space} -eq 1 ] && echo \" \" || _space=1\n\t[ ${verbose} -ge 1 ] && echo -en \"Summary: \\n\\t \"\n\techo ${summary_line}\nfi\n\nif [ ${runtime} -eq 1 ]\nthen\n\ttest_run_output=`sed -n '/^Tests run in [\\.:0-9]*$/p' ${ptl_test_log} | awk -F. '{print $1}'`\n\t[ ${_space} -eq 1 ] && echo \" \" || _space=1\n\t[ ${verbose} -ge 1 ] && echo ${test_run_output} || echo ${test_run_output} | awk '{ print $NF }'\nfi\n\nexit 0\n"
  },
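`ptlreport` pulls all six counters out of the single `run: ...` summary line by splitting on both `,` and `:`, so the odd-numbered awk fields are labels and the even-numbered fields are counts. The extraction step can be exercised standalone (the sample summary line is invented for illustration):

```shell
#!/bin/sh
# a summary line in the shape ptlreport's sed expression matches
summary_line='run: 10, passed: 7, failed: 1, errors: 0, skipped: 2, timedout: 0'

# split on ',' and ':'; even-numbered fields hold the counts,
# with leading whitespace stripped before printing
counts=$(echo "${summary_line}" | awk -F '[,:]' \
    'ORS=" " { for (i = 2; i <= NF; i = i + 2) { gsub(/^[ \t]+/, "", $i); print $i } }')
echo "${counts}"
```

The script then word-splits this output with `read total_ct pass_ct fail_ct err_ct skip_ct timedout_ct` to populate the individual counters.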
  {
    "path": "test/fw/requirements.txt",
    "content": "# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nnose\nbeautifulsoup4\npexpect\ndefusedxml\n"
  },
  {
    "path": "test/fw/setup.py.in",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom setuptools import setup, find_packages\nimport os\n\nos.chdir(os.path.dirname(os.path.abspath(__file__)))\n\n\ndef get_reqs():\n    install_requires = open('requirements.txt').readlines()\n    return [r.strip() for r in install_requires]\n\n\ndef get_scripts():\n    return ['bin/%s' % (x) for x in os.listdir('bin')]\n\n\nsetup(\n    name='PbsTestLab',\n    version='@PBS_VERSION@',\n    packages=find_packages(),\n    scripts=get_scripts(),\n    include_package_data=True,\n    license='AGPLv3 with exceptions',\n    description='PBS Testing and Benchmarking Framework',\n    long_description=open(os.path.abspath('./doc/intro.rst')).read(),\n    install_requires=get_reqs(),\n    keywords='PbsTestLab ptl pbs',\n    zip_safe=False,\n    classifiers=[\n        'Development Status :: 5 - Production/Stable',\n        'Environment :: Other Environment',\n        'Intended Audience :: Developers',\n        'License :: AGPLv3 with exceptions',\n        'Operating System :: POSIX :: Linux',\n        'Programming Language :: Python :: 3.6',\n        'Topic :: Software Development :: Testing',\n        'Topic :: Software Development :: Quality Assurance',\n    ]\n)\n"
  },
  {
    "path": "test/scripts/qsub_multi.sh",
    "content": "#!/bin/bash\n\n# Used to achieve faster submission of a large number of jobs for performance testing\n\nif [ $# -lt 2 ]; then\n\techo \"syntax: $0 <num-threads> <jobs-per-thread>\"\n\texit 1\nfi\n\nfunction submit_jobs {\n\tnjobs=$1\n\n\techo \"New thread submitting jobs=$njobs\"\n\n\tfor i in $(seq 1 $njobs)\n\tdo\n\t\tqsub -- /bin/date > /dev/null\n\tdone\n}\n\nif [ \"$1\" = \"submit\" ]; then\n\tnjobs=$2\n\tsubmit_jobs $njobs\n\texit 0\nfi\n\nnthreads=$1\nnjobs=$2\n\necho \"parameters supplied: nthreads=$nthreads, njobs=$njobs\"\n\nstart_time=`date +%s%3N`\n\nfor i in $(seq 1 $nthreads)\ndo\n\tsetsid $0 submit $njobs &\ndone\n\nwait\n\nend_time=`date +%s%3N`\n\ndiff=`bc -l <<< \"scale=3; ($end_time - $start_time) / 1000\"`\ntotal_jobs=`bc -l <<< \"$njobs * $nthreads\"`\nperf=`bc -l <<< \"scale=3; $total_jobs / $diff\"`\n\necho \"Time(ms) started=$start_time, ended=$end_time\"\necho \"Total jobs submitted=$total_jobs, time taken(secs.ms)=$diff, jobs/sec=$perf\"\n"
  },
  {
    "path": "test/tests/Makefile.am",
    "content": "#\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n#\nif ENABLEPTL\nptl_testsdir = ${ptl_prefix}/tests\ndist_ptl_tests_PYTHON = $(wildcard $(srcdir)/*.py)\n\nptl_testfunctionaldir = $(ptl_testsdir)/functional\ndist_ptl_testfunctional_DATA = $(wildcard $(srcdir)/functional/*.py)\n\nptl_testinterfacesdir = $(ptl_testsdir)/interfaces\ndist_ptl_testinterfaces_DATA = $(wildcard $(srcdir)/interfaces/*.py)\n\nptl_testperformancedir = $(ptl_testsdir)/performance\ndist_ptl_testperformance_DATA = $(wildcard $(srcdir)/performance/*.py)\n\nptl_testresiliencedir = $(ptl_testsdir)/resilience\ndist_ptl_testresilience_DATA = $(wildcard $(srcdir)/resilience/*.py)\n\nptl_testsecuritydir = $(ptl_testsdir)/security\ndist_ptl_testsecurity_DATA = $(wildcard $(srcdir)/security/*.py)\n\nptl_testselftestdir = $(ptl_testsdir)/selftest\ndist_ptl_testselftest_DATA = $(wildcard $(srcdir)/selftest/*.py)\n\nptl_testupgradesdir = $(ptl_testsdir)/upgrades\ndist_ptl_testupgrades_DATA = $(wildcard $(srcdir)/upgrades/*.py)\nendif\n"
  },
  {
    "path": "test/tests/__init__.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n"
  },
  {
    "path": "test/tests/functional/__init__.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom ptl.utils.pbs_testsuite import *\n\n\nclass TestFunctional(PBSTestSuite):\n    \"\"\"\n    Base test suite for Functional tests\n    \"\"\"\n    pass\n"
  },
  {
    "path": "test/tests/functional/pbs_Rrecord_resources_used.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\nimport re\n\n\n@requirements(num_moms=2)\nclass Test_Rrecord_with_resources_used(TestFunctional):\n\n    \"\"\"\n    This test suite tests whether the 'R' record in accounting logs has\n    information on resources_used in the following scenarios.\n        a) The node the job was running on goes down and node_fail_requeue\n           timeout is hit.\n        b) It is rerun using qrerun <job-id>.\n        c) It is rerun using qrerun -Wforce <job-id>.\n        d) mom is restarted without any options or with the '-r' option\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n\n        if len(self.moms) != 2:\n            self.skipTest('test requires two MoMs as input, ' +\n                          'use -p moms=<mom1>:<mom2>')\n\n        self.server.set_op_mode(PTL_CLI)\n\n        # PBSTestSuite returns the moms passed in as parameters as dictionary\n        # of hostname and MoM object\n        self.momA = self.moms.values()[0]\n        self.momB = self.moms.values()[1]\n\n        self.hostA = self.momA.shortname\n        self.hostB = self.momB.shortname\n\n        a = {'resources_available.ncpus': 4}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.hostA)\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.hostB)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 
'True'})\n\n    def common(self, is_nonrerunnable, restart_mom):\n\n        # Set node_fail_requeue=5 on server\n\n        a = {ATTR_nodefailrq: 5}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Job script\n\n        test = []\n        test += ['#PBS -N RequeueTest\\n']\n        test += ['#PBS -l ncpus=1\\n']\n        test += ['echo Starting test at `date`\\n']\n        test += ['sleep 1000\\n']\n\n        test1 = []\n        test1 += ['#PBS -N RequeueTest\\n']\n        test1 += ['#PBS -lselect=1:ncpus=1 -l place=scatter\\n']\n        test1 += ['echo Starting test at `date`\\n']\n        test1 += ['sleep 1000\\n']\n\n        # Submit three jobs J1,J2,J3[]\n\n        j1 = Job(TEST_USER, attrs={ATTR_k: 'oe'})\n        j1.create_script(body=test)\n        jid1 = self.server.submit(j1)\n\n        if is_nonrerunnable is True:\n            j2 = Job(TEST_USER, attrs={ATTR_r: 'n', ATTR_k: 'oe'})\n        else:\n            j2 = Job(TEST_USER, attrs={ATTR_r: 'y', ATTR_k: 'oe'})\n\n        j2.create_script(body=test1)\n        jid2 = self.server.submit(j2)\n\n        j3 = Job(TEST_USER, attrs={ATTR_J: '1-6', ATTR_k: 'oe'})\n        j3.create_script(body=test)\n        jid3 = self.server.submit(j3)\n\n        subjobs = self.server.status(JOB, id=jid3, extend='t')\n        jid3s1 = subjobs[1]['id']\n\n        # Wait for the jobs to start running.\n        self.server.expect(JOB, {ATTR_substate: '42'}, jid1)\n        self.server.expect(JOB, {ATTR_substate: '42'}, jid2)\n        self.server.expect(JOB, {ATTR_substate: '42'}, jid3s1)\n\n        # Verify that accounting logs have Resource_List.<resource> value\n        self.server.accounting_match(\n            msg='.*Resource_List.*', id=jid1, regexp=True)\n        self.server.accounting_match(\n            msg='.*Resource_List.*', id=jid2, regexp=True)\n        self.server.accounting_match(\n            msg='.*Resource_List.*', id=jid3s1, regexp=True)\n\n        # Bring both moms down using kill -9 <mom pid>\n 
       self.momA.signal('-KILL')\n        self.momB.signal('-KILL')\n\n        # Verify that both nodes are reported to be down.\n        self.server.expect(NODE, {ATTR_NODE_state: (\n            MATCH_RE, '.*down.*')}, id=self.hostA)\n        self.server.expect(NODE, {ATTR_NODE_state: (\n            MATCH_RE, '.*down.*')}, id=self.hostB)\n\n        self.server.expect(JOB, {ATTR_state: 'Q'}, jid1)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, jid3s1)\n        if is_nonrerunnable is False:\n            # All rerunnable jobs - all should be in 'Q' state.\n            self.server.expect(JOB, {ATTR_state: 'Q'}, jid2)\n        else:\n            # Job2 is non-rerunnable.\n            self.server.expect(JOB, {ATTR_state: 'F'}, jid2, extend='x')\n\n        # tracejob should show \"Job requeued, execution node <node name> down\"\n        self.server.tracejob_match(\n            msg='Job requeued, execution node .* down', id=jid1, regexp=True)\n\n        if is_nonrerunnable is False:\n            e = True\n        else:\n            e = False\n        msg = 'Job requeued, execution node .* down'\n        self.server.tracejob_match(msg=msg, id=jid2, regexp=True,\n                                   existence=e)\n\n        self.server.tracejob_match(\n            msg='Job requeued, execution node .* down', id=jid3s1, regexp=True)\n\n        self.server.accounting_match(\n            msg='.*Resource_List.*', id=jid1, regexp=True)\n        self.server.accounting_match(\n            msg='.*Resource_List.*', id=jid2, regexp=True)\n        self.server.accounting_match(\n            msg='.*Resource_List.*', id=jid3s1, regexp=True)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        if restart_mom == 's':\n            # Start mom without any option\n            self.momA.start()\n            self.momB.start()\n        elif restart_mom == 'r':\n            # Start mom with the -r option\n            self.momA.start(args=['-r'])\n            
self.momB.start(args=['-r'])\n\n        return jid1, jid2, jid3s1\n\n    def test_Rrecord_with_nodefailrequeue(self):\n        \"\"\"\n        Scenario: The node on which the job was running goes down and\n                  node_fail_requeue time-out is hit.\n        Expected outcome: Server should record last known resource usage in\n                  the 'R' record.\n        \"\"\"\n\n        jid1, jid2, jid3s1 = self.common(False, False)\n\n        self.server.accounting_match(\n            msg='.*R;' + jid1 + '.*resources_used.*', id=jid1, regexp=True)\n        self.server.accounting_match(\n            msg='.*R;' + jid2 + '.*resources_used.*', id=jid2, regexp=True)\n        self.server.accounting_match(\n            msg='.*R;' + re.escape(jid3s1) + '.*resources_used.*',\n            id=jid3s1, regexp=True)\n\n    def test_Rrecord_when_mom_restarted_with_r(self):\n        \"\"\"\n        Scenario: The node on which the job was running goes down and\n                  node_fail_requeue time-out is hit and mom is restarted\n                  with '-r'\n        Expected outcome: Server should record last known resource usage in\n                  the 'R' record.\n        \"\"\"\n\n        jid1, jid2, jid3s1 = self.common(False, 'r')\n\n        self.server.accounting_match(\n            msg='.*R;' + jid1 + '.*resources_used.*run_count=1', id=jid1,\n            regexp=True)\n        self.server.accounting_match(\n            msg='.*R;' + jid2 + '.*resources_used.*run_count=1', id=jid2,\n            regexp=True)\n        self.server.accounting_match(\n            msg='.*R;' + re.escape(jid3s1) + '.*resources_used.*run_count=1',\n            id=jid3s1, regexp=True)\n\n    def test_Rrecord_for_nonrerunnable_jobs(self):\n        \"\"\"\n        Scenario: One non-rerunnable job. 
The node on which the job was\n                  running goes down and node_fail_requeue time-out is hit.\n        Expected outcome: Server should record last known resource usage in\n                  the 'R' record only for rerunnable jobs.\n        \"\"\"\n        a = {ATTR_JobHistoryEnable: 1}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        jid1, jid2, jid3s1 = self.common(True, 'r')\n\n        self.server.accounting_match(\n            msg='.*R;' + jid1 + '.*resources_used.*run_count=1', id=jid1,\n            regexp=True)\n        self.server.accounting_match(\n            msg='.*R;' + jid2 + '.*resources_used.*run_count=1', id=jid2,\n            regexp=True, existence=False, max_attempts=5)\n        self.server.accounting_match(\n            msg='.*R;' + re.escape(jid3s1) + '.*resources_used.*run_count=1',\n            id=jid3s1, regexp=True)\n\n    def test_Rrecord_when_mom_restarted_without_r(self):\n        \"\"\"\n        Scenario: Mom restarted without '-r' option and jobs are requeued\n                   using qrerun.\n        Expected outcome: Server should record last known resource usage in\n                   the 'R' record for both.\n        \"\"\"\n\n        jid1, jid2, jid3s1 = self.common(False, 's')\n\n        self.server.accounting_match(\n            msg='.*R;' + jid1 + '.*resources_used.*run_count=1', id=jid1,\n            regexp=True)\n        self.server.accounting_match(\n            msg='.*R;' + jid2 + '.*resources_used.*run_count=1', id=jid2,\n            regexp=True)\n        self.server.accounting_match(\n            msg='.*R;' + re.escape(jid3s1) + '.*resources_used.*run_count=1',\n            id=jid3s1, regexp=True)\n\n        # Verify that the jobs are in 'Q' state.\n        self.server.expect(JOB, {ATTR_state: 'Q'}, jid1)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, jid2)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, jid3s1)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 
'True'})\n\n        self.server.expect(JOB, {ATTR_substate: '42'}, jid1)\n        self.server.expect(JOB, {ATTR_substate: '42'}, jid2)\n        self.server.expect(JOB, {ATTR_substate: '42'}, jid3s1)\n\n        # qrerun the jobs and wait for them to start running.\n        self.server.rerunjob(jobid=jid1)\n        self.server.rerunjob(jobid=jid2)\n        self.server.rerunjob(jobid=jid3s1)\n\n        # Confirm that the 'R' record is generated and the run_count is 2.\n        self.server.accounting_match(\n            msg='.*R;' + jid1 + '.*resources_used.*run_count=2', id=jid1,\n            regexp=True)\n        self.server.accounting_match(\n            msg='.*R;' + jid2 + '.*resources_used.*run_count=2', id=jid2,\n            regexp=True)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n    def test_Rrecord_with_multiple_reruns(self):\n        \"\"\"\n        Scenario: Job is rerun multiple times.\n        Expected outcome: Server should record last known resource usage\n                  every time the job is rerun.\n        \"\"\"\n\n        dflt_q = self.server.default_queue\n\n        # As user submit three jobs.\n        test = []\n        test += ['#PBS -N RequeueTest\\n']\n        test += ['#PBS -l ncpus=1\\n']\n        test += ['echo Starting test at `date`\\n']\n        test += ['sleep 1000\\n']\n\n        j1 = Job(TEST_USER)\n        j1.create_script(body=test)\n        j1.set_attributes({ATTR_r: 'y', ATTR_l + '.ncpus': 2})\n        jid1 = self.server.submit(j1)\n\n        j2 = Job(TEST_USER)\n        j2.create_script(body=test)\n        j2.set_attributes({ATTR_r: 'n', ATTR_l + '.ncpus': 2})\n        jid2 = self.server.submit(j2)\n\n        j3 = Job(TEST_USER)\n        j3.create_script(body=test)\n        j3.set_attributes({ATTR_J: '1-4', ATTR_k: 'oe'})\n        jid3 = self.server.submit(j3)\n\n        subjobs = self.server.status(JOB, id=jid3, extend='t')\n        jid3s1 = subjobs[1]['id']\n\n        # Verify that the jobs 
have started running.\n        self.server.expect(JOB, {ATTR_substate: '42', 'run_count': 1}, jid1)\n        self.server.expect(JOB, {ATTR_substate: '42', 'run_count': 1}, jid2)\n        self.server.expect(JOB, {ATTR_state: 'B'}, jid3)\n        self.server.expect(JOB, {ATTR_substate: '42', 'run_count': 1}, jid3s1)\n\n        # Verify that the accounting logs have Resource_List.<resource> but no\n        # R records.\n        self.server.accounting_match(\n            msg='.*Resource_List.*', id=jid1, regexp=True)\n        msg = '.*R;' + jid1 + '.*resources_used.*'\n        self.server.accounting_match(msg=msg, id=jid1, regexp=True,\n                                     existence=False)\n        self.server.accounting_match(\n            msg='.*Resource_List.*', id=jid2, regexp=True)\n        msg = '.*R;' + jid2 + '.*resources_used.*'\n        self.server.accounting_match(msg=msg, id=jid2, regexp=True,\n                                     existence=False)\n        self.server.accounting_match(\n            msg='.*Resource_List.*', id=jid3s1, regexp=True)\n        self.server.accounting_match(msg='.*R;' + re.escape(jid3s1) +\n                                         '.*resources_used.*', id=jid3s1,\n                                         regexp=True, existence=False)\n\n        # sleep for 5 seconds so the jobs use some resources.\n        time.sleep(5)\n\n        self.server.rerunjob(jid1)\n        self.server.rerunjob(jid3s1)\n\n        # Verify that the accounting logs have R logs with last known resource\n        # usage. 
 No R logs for J2.\n\n        self.server.accounting_match(\n            msg='.*R;' + jid1 +\n            '.*Exit_status=-11.*.*resources_used.*.*run_count=1.*',\n            id=jid1, regexp=True)\n\n        msg = '.*R;' + jid2 + '.*resources_used.*'\n        self.server.accounting_match(msg=msg, id=jid2, regexp=True,\n                                     existence=False)\n\n        self.server.accounting_match(msg='.*R;' + re.escape(\n            jid3s1) + '.*Exit_status=-11.*.*resources_used.*.*run_count=1.*',\n            id=jid3s1, regexp=True)\n\n        # sleep for 5 seconds so the jobs use some resources.\n        time.sleep(5)\n\n        self.server.rerunjob(jid1)\n        self.server.rerunjob(jid3s1)\n\n        # Verify that the accounting logs have R records with the last known\n        # resource usage (resources_used) and run_count=2 for J1.\n        # No R records in accounting for J2.\n        self.server.accounting_match(\n            msg='.*R;' + jid1 +\n            '.*Exit_status=-11.*.*resources_used.*.*run_count=2.*',\n            id=jid1, regexp=True)\n        msg = '.*R;' + jid2 + '.*resources_used.*'\n        self.server.accounting_match(msg=msg, id=jid2, regexp=True,\n                                     existence=False)\n        self.server.accounting_match(\n            msg='.*R;' + re.escape(jid3s1) +\n            '.*Exit_status=-11.*.*resources_used.*.*run_count=1.*',\n            id=jid3s1, regexp=True)\n\n    def test_Rrecord_with_multiple_reruns_case2(self):\n        \"\"\"\n        Scenario: Jobs submitted with select cput and ncpus. 
Job is rerun\n                  multiple times.\n        Expected outcome: Server should record last known resource usage\n                  that has cputime.\n        \"\"\"\n        dflt_q = self.server.default_queue\n\n        script = []\n        script += ['i=0;\\n']\n        script += ['while [ $i -ne 0 ] || sleep 0.125;\\n']\n        script += ['do i=$(((i+1) % 10000 ));\\n']\n        script += ['done\\n']\n        j1 = Job(TEST_USER)\n        j1.create_script(body=script)\n\n        j1.set_attributes(\n            {ATTR_l + '.cput': 160, ATTR_l + '.ncpus': 3, ATTR_k: 'oe'})\n        jid1 = self.server.submit(j1)\n\n        j2 = Job(TEST_USER)\n        j2.create_script(body=script)\n        j2.set_attributes(\n            {ATTR_l + '.cput': 180, ATTR_l + '.ncpus': 3, ATTR_k: 'oe'})\n        jid2 = self.server.submit(j2)\n\n        # Verify that the jobs have started running.\n        self.server.expect(JOB, {ATTR_substate: '42', 'run_count': 1}, jid1)\n        self.server.expect(JOB, {ATTR_substate: '42', 'run_count': 1}, jid2)\n\n        # Verify that the accounting logs have Resource_List.<resource> but no\n        # R records.\n        self.server.accounting_match(\n            msg='.*Resource_List.*', id=jid1, regexp=True)\n        msg = '.*R;' + jid1 + '.*resources_used.*'\n        self.server.accounting_match(msg=msg, id=jid1, regexp=True,\n                                     existence=False)\n        self.server.accounting_match(\n            msg='.*Resource_List.*', id=jid2, regexp=True)\n        msg = '.*R;' + jid2 + '.*resources_used.*'\n        self.server.accounting_match(msg=msg, id=jid2, regexp=True,\n                                     existence=False)\n\n        time.sleep(5)\n\n        jids = self.server.select()\n        self.server.rerunjob(jids)\n\n        # Verify that the accounting logs have R record with last known\n        # resource usage and run_count should be 2 for J1 and J2.\n\n        self.server.accounting_match(\n           
 msg='.*R;' + jid1 +\n            '.*.*resources_used.cput=[0-9]*:[0-9]*:[0-9]*.*.*run_count=1.*',\n            id=jid1, regexp=True)\n        self.server.accounting_match(\n            msg='.*R;' + jid2 +\n            '.*.*resources_used.cput=[0-9]*:[0-9]*:[0-9]*.*.*run_count=1.*',\n            id=jid2, regexp=True)\n\n        time.sleep(5)\n\n        jids = self.server.select()\n        self.server.rerunjob(jids)\n\n        self.server.accounting_match(\n            msg='.*R;' + jid1 +\n            '.*.*resources_used.cput=[0-9]*:[0-9]*:[0-9]*.*.*run_count=2.*',\n            id=jid1, regexp=True)\n        self.server.accounting_match(\n            msg='.*R;' + jid2 +\n            '.*.*resources_used.cput=[0-9]*:[0-9]*:[0-9]*.*.*run_count=2.*',\n            id=jid2, regexp=True)\n\n    def test_Rrecord_job_rerun_forcefully(self):\n        \"\"\"\n        Scenario: Job is forcefully rerun.\n        Expected outcome: server should record last known resource usage in\n                  the R record.\n        \"\"\"\n\n        dflt_q = self.server.default_queue\n\n        test = []\n        test += ['#PBS -N RequeueTest\\n']\n        test += ['#PBS -l ncpus=1\\n']\n        test += ['echo Starting test at `date`\\n']\n        test += ['sleep 1000\\n']\n\n        j1 = Job(TEST_USER)\n        j1.create_script(body=test)\n        j1.set_attributes({ATTR_r: 'y', ATTR_l + '.ncpus': 2})\n        jid1 = self.server.submit(j1)\n\n        j2 = Job(TEST_USER)\n        j2.create_script(body=test)\n        j2.set_attributes({ATTR_r: 'n', ATTR_l + '.ncpus': 2})\n        jid2 = self.server.submit(j2)\n\n        j3 = Job(TEST_USER)\n        j3.create_script(body=test)\n        j3.set_attributes({ATTR_J: '1-4', ATTR_k: 'oe'})\n        jid3 = self.server.submit(j3)\n        subjobs = self.server.status(JOB, id=jid3, extend='t')\n        jid3s1 = subjobs[1]['id']\n\n        # Verify that the jobs have started running.\n        self.server.expect(JOB, {ATTR_substate: '42', 'run_count': 
1}, jid1)\n        self.server.expect(JOB, {ATTR_substate: '42', 'run_count': 1}, jid2)\n        self.server.expect(JOB, {ATTR_state: 'B'}, jid3)\n        self.server.expect(JOB, {ATTR_substate: '42', 'run_count': 1}, jid3s1)\n\n        # Verify that the accounting logs have Resource_List.<resource> but no\n        # R records.\n        self.server.accounting_match(\n            msg='.*Resource_List.*', id=jid1, regexp=True)\n        msg = '.*R;' + jid1 + '.*resources_used.*'\n        self.server.accounting_match(msg=msg, id=jid1, regexp=True,\n                                     existence=False)\n        self.server.accounting_match(\n            msg='.*Resource_List.*', id=jid2, regexp=True)\n        msg = '.*R;' + jid2 + '.*resources_used.*'\n        self.server.accounting_match(msg=msg, id=jid2, regexp=True,\n                                     existence=False)\n        self.server.accounting_match(\n            msg='.*Resource_List.*', id=jid3s1, regexp=True)\n        self.server.accounting_match(msg='.*R;' +\n                                     re.escape(jid3s1) +\n                                     '.*resources_used.*',\n                                     id=jid3s1, regexp=True,\n                                     existence=False)\n\n        time.sleep(5)\n\n        jids = self.server.select(extend='T')\n        self.server.rerunjob(jids, extend='force')\n\n        # Verify that the accounting logs have R record with last known\n        # resource usage and run_count should be 1 for J1 and J2\n\n        self.server.accounting_match(\n            msg='.*R;' + jid1 +\n            '.*Exit_status=-11.*.*resources_used.*.*run_count=1.*',\n            id=jid1, regexp=True)\n        self.server.accounting_match(\n            msg='.*R;' + jid2 +\n            '.*Exit_status=-11.*.*resources_used.*.*run_count=1.*',\n            id=jid2, regexp=True)\n        self.server.accounting_match(msg='.*R;' + re.escape(\n            jid3s1) + 
'.*Exit_status=-11.*.*resources_used.*.*run_count=1.*',\n            id=jid3s1, regexp=True)\n        time.sleep(5)\n\n        jids = self.server.select(extend='T')\n        self.server.rerunjob(jids, extend='force')\n\n        # Verify that the accounting logs have R record with last known\n        # usage and run_count should be 2 for J1 and J2.\n        self.server.accounting_match(\n            msg='.*R;' + jid1 +\n            '.*Exit_status=-11.*.*resources_used.*.*run_count=2.*',\n            id=jid1, regexp=True)\n        self.server.accounting_match(\n            msg='.*R;' + jid2 +\n            '.*Exit_status=-11.*.*resources_used.*.*run_count=2.*',\n            id=jid2, regexp=True)\n        self.server.accounting_match(msg='.*R;' + re.escape(\n            jid3s1) +\n            '.*Exit_status=-11.*.*resources_used.*.*run_count=2.*',\n            id=jid3s1, regexp=True)\n\n    def tearDown(self):\n        TestFunctional.tearDown(self)\n"
  },
  {
    "path": "test/tests/functional/pbs_acct_log.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestAcctLog(TestFunctional):\n    \"\"\"\n    Tests dealing with the PBS accounting logs\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        a = {'type': 'string', 'flag': 'h'}\n        self.server.manager(MGR_CMD_CREATE, RSC, a, id='foo_str')\n\n    def test_long_resource_end(self):\n        \"\"\"\n        Test to see if a very long string resource is neither truncated\n        in the job's resources_used attr nor the accounting log at job end\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'job_history_enable': 'True'})\n\n        # Create a very long string - the truncation was 2048 characters\n        # 4096 is plenty big to show it\n\n        hstr = '1'*4096\n        hook_body = \"import pbs\\n\"\n        hook_body += \"e = pbs.event()\\n\"\n        hook_body += \"hstr=\\'\" + hstr + \"\\'\\n\"\n        hook_body += \"e.job.resources_used[\\\"foo_str\\\"] = hstr\\n\"\n\n        a = {'event': 'execjob_epilogue', 'enabled': 'True'}\n        self.server.create_import_hook(\"ep\", a, hook_body)\n\n        J = Job()\n        J.set_sleep_time(1)\n        jid = self.server.submit(J)\n\n        # Make sure the resources_used value hasn't been truncated\n        self.server.expect(JOB, {'job_state': 'F'}, id=jid, extend='x')\n        
self.server.expect(\n            JOB, {'resources_used.foo_str': hstr}, extend='x', max_attempts=1)\n\n        # Make sure the accounting log hasn't been truncated\n        log_match = 'resources_used.foo_str=' + hstr\n        self.server.accounting_match(\n            \"E;%s;.*%s.*\" % (jid, log_match), regexp=True)\n\n        # Make sure the server log hasn't been truncated\n        log_match = 'resources_used.foo_str=' + hstr\n        self.server.log_match(\"%s;.*%s.*\" % (jid, log_match), regexp=True)\n\n    def test_long_resource_reque(self):\n        \"\"\"\n        Test to see if a very long string value is not truncated\n        in the 'R' requeue accounting record\n        \"\"\"\n\n        # Create a very long string - the truncation was 2048 characters\n        # 4096 is plenty big to show it\n        hstr = '1' * 4096\n\n        hook_body = \"import pbs\\n\"\n        hook_body += \"e = pbs.event()\\n\"\n        hook_body += \"hstr=\\'\" + hstr + \"\\'\\n\"\n        hook_body += \"e.job.resources_used[\\\"foo_str\\\"] = hstr\\n\"\n\n        a = {'event': 'execjob_prologue', 'enabled': 'True'}\n        self.server.create_import_hook(\"pr\", a, hook_body)\n\n        J = Job()\n        jid = self.server.submit(J)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        self.server.rerunjob(jid)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n\n        # Make sure the accounting log hasn't been truncated\n        acctlog_match = 'resources_used.foo_str=' + hstr\n        self.server.accounting_match(\n            \"R;%s;.*%s.*\" % (jid, acctlog_match), regexp=True)\n\n    def test_queue_record(self):\n        \"\"\"\n        Test the correct data is being printed in the queue record\n        \"\"\"\n        t = time.time()\n        a = {ATTR_g: TEST_USER.groups[0], ATTR_project: 'foo',\n      
       ATTR_A: 'bar', ATTR_N: 'baz', ATTR_l + '.walltime': '1:00:00'}\n        j1 = Job(TEST_USER, a)\n        jid1 = self.server.submit(j1)\n\n        (_, line) = self.server.accounting_match(';Q;' + jid1)\n\n        # Check for euser\n        self.assertIn('user=' + str(TEST_USER), line)\n\n        # Check for egroup\n        self.assertIn('group=' + str(TEST_USER.groups[0]), line)\n\n        # Check for project\n        self.assertIn('project=foo', line)\n\n        # Check for account name\n        self.assertIn('account=\\\"bar\\\"', line)\n\n        # Check for job name\n        self.assertIn('jobname=baz', line)\n\n        # Check for queue\n        self.assertIn('queue=workq', line)\n\n        # Check for the existence of times\n        self.assertIn('etime=', line)\n        self.assertIn('ctime=', line)\n        self.assertIn('qtime=', line)\n        self.assertNotIn('start=', line)\n\n        # Check for walltime\n        self.assertIn('Resource_List.walltime=01:00:00', line)\n\n        j2 = Job(TEST_USER, {ATTR_J: '1-2', ATTR_depend: 'afterok:' + jid1})\n        jid2 = self.server.submit(j2)\n\n        (_, line) = self.server.accounting_match(';Q;' + jid2)\n\n        self.assertIn('array_indices=1-2', line)\n        self.assertIn('depend=afterok:' + jid1, line)\n\n        r = Reservation()\n        rid1 = self.server.submit(r)\n        a = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, a, id=rid1)\n        j3 = Job(TEST_USER, {ATTR_queue: rid1.split('.')[0]})\n        jid3 = self.server.submit(j3)\n\n        (_, line) = self.server.accounting_match(';Q;' + jid3)\n\n        self.assertIn('resvID=' + rid1, line)\n\n    def test_queue_record_hook(self):\n        \"\"\"\n        Test that changes made in a queuejob hook are reflected in the Q record\n        \"\"\"\n        qj_hook = \"\"\"\nimport pbs\npbs.event().job.project = 'foo'\npbs.event().accept()\n\"\"\"\n        qj_attrs = {'event': 'queuejob', 'enabled': 
'True'}\n        self.server.create_import_hook('qj', qj_attrs, qj_hook)\n\n        j = Job()\n        jid1 = self.server.submit(j)\n\n        (_, line) = self.server.accounting_match(';Q;' + jid1)\n        self.assertIn('project=foo', line)\n\n    def test_alter_record(self):\n        \"\"\"\n        Test the accounting log alter record\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        j1 = Job(TEST_USER1)\n        jid1 = self.server.submit(j1)\n\n        # Basic test for existence of record for Resource_List\n        self.server.alterjob(jid1, {ATTR_l + '.walltime': '1:00:00'})\n        self.server.accounting_match(';a;' + jid1 +\n                                     ';Resource_List.walltime=01:00:00')\n\n        # Check for default value when unsetting\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {ATTR_rescdflt + '.walltime': '30:00'})\n        self.server.alterjob(jid1, {ATTR_l + '.walltime': ''})\n        self.server.accounting_match(';a;' + jid1 +\n                                     ';Resource_List.walltime=00:30:00')\n\n        self.server.alterjob(jid1, {ATTR_l + '.software': 'foo'})\n        self.server.accounting_match(';a;' + jid1 +\n                                     ';Resource_List.software=foo')\n        # Check for UNSET record when value is unset\n        self.server.alterjob(jid1, {ATTR_l + '.software': '\\\"\\\"'})\n        self.server.accounting_match(';a;' + jid1 +\n                                     ';Resource_List.software=UNSET')\n\n        # Check for non-resource attribute\n        self.server.alterjob(jid1, {ATTR_p: 150})\n        self.server.accounting_match(';a;' + jid1 + ';Priority=150')\n\n        self.server.alterjob(jid1, {ATTR_g: str(TSTGRP1)})\n        self.server.accounting_match(';a;' + jid1 +\n                                     ';group_list=' + str(TSTGRP1))\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n 
       # Check that scheduler's alters are not logged\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n        self.server.accounting_match(\n            ';a;' + jid1 + ';comment', existence=False, max_attempts=2)\n\n    def test_alter_record_hooks(self):\n        \"\"\"\n        Test that when hooks set attributes, an 'a' record is logged\n        \"\"\"\n        mj_hook = \"\"\"\nimport pbs\npbs.event().job.comment = 'foo'\npbs.event().accept()\n\"\"\"\n        mj_attrs = {'event': 'modifyjob', 'enabled': 'True'}\n        rj_hook = \"\"\"\nimport pbs\npbs.event().job.project = 'abc'\npbs.event().reject('foo')\n\"\"\"\n        rj_attrs = {'event': 'runjob', 'enabled': 'True'}\n\n        self.server.create_import_hook('mj', mj_attrs, mj_hook)\n        self.server.create_import_hook('rj', rj_attrs, rj_hook)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        j1 = Job()\n        jid1 = self.server.submit(j1)\n\n        self.server.alterjob(jid1, {ATTR_p: 150})\n        (_, line) = self.server.accounting_match(';a;' + jid1 + ';')\n        self.assertIn('Priority=150', line)\n        self.assertIn('comment=foo', line)\n\n        try:\n            self.server.runjob(jid1)\n        except PbsRunError:\n            # runjob hook is rejecting the run request\n            pass\n        self.server.accounting_match(';a;' + jid1 + ';project=abc')\n\n    def test_alter_record_queuejob_hook(self):\n        \"\"\"\n        Test that when a queuejob hook sets an attribute, an 'a' record is\n        logged.\n        \"\"\"\n        qj_hook = \"\"\"\nimport pbs\ne1 = pbs.event()\ne1.job.project = 'abc'\ne2 = pbs.event()\ne2.accept()\n\"\"\"\n        qj_attrs = {'event': 'queuejob', 'enabled': 'True'}\n        self.server.create_import_hook('qj', qj_attrs, qj_hook)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        j1 = Job(TEST_USER, {'Resource_List.walltime': 42})\n        j1.set_sleep_time(1)\n       
 jid1 = self.server.submit(j1)\n        self.server.alterjob(jid1, {ATTR_p: 150})\n        (_, line) = self.server.accounting_match(';a;' + jid1 + ';')\n        self.assertIn('Priority=150', line)\n        (_, line) = self.server.accounting_match(';Q;' + jid1 + ';')\n        self.assertIn('project=abc', line)\n        self.server.runjob(jid1)\n        (_, line) = self.server.accounting_match(';E;' + jid1 + ';')\n        self.assertIn('project=abc', line)\n\n    def test_alter_record_modifyjob_hook(self):\n        \"\"\"\n        Test that when a modifyjob hook sets attributes, an 'a' record is\n        logged.\n        \"\"\"\n        mj_hook = \"\"\"\nimport pbs\ne1 = pbs.event()\ne1.job.comment = 'foo'\ne1.job.project = 'abc'\ne2 = pbs.event()\ne2.accept()\n\"\"\"\n        mj_attrs = {'event': 'modifyjob', 'enabled': 'True'}\n        self.server.create_import_hook('mj', mj_attrs, mj_hook)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        j1 = Job(TEST_USER, {'Resource_List.walltime': 42})\n        j1.set_sleep_time(1)\n        jid1 = self.server.submit(j1)\n        self.server.alterjob(jid1, {ATTR_p: 150})\n        (_, line) = self.server.accounting_match(';a;' + jid1 + ';')\n        self.assertIn('Priority=150', line)\n        self.assertIn('comment=foo', line)\n        self.assertIn('project=abc', line)\n        self.server.runjob(jid1)\n        (_, line) = self.server.accounting_match(';E;' + jid1 + ';')\n        # self.assertIn('comment=foo', line) # Doesn't exist in E\n        self.assertIn('project=abc', line)\n\n    def test_alter_record_runjob_hook(self):\n        \"\"\"\n        Test that when a runjob hook sets attributes, an 'a' record is logged.\n        \"\"\"\n        info_hook = \"\"\"\nimport pbs\ne1 = pbs.event()\npbs.logmsg(pbs.LOG_ERROR, f\"HOOK:e1:{hex(id(e1))}\"\n                    f\" job.id:{e1.job.id}\"\n                    f\" job.project:{e1.job.project}\"\n                    f\" 
comment:{e1.job.comment}\"\n                    f\" hex(id(job)):{hex(id(e1.job))}\")\ne1.accept()\n\"\"\"\n\n        qj_attrs = {'event': 'queuejob', 'enabled': 'True'}\n        mj_attrs = {'event': 'modifyjob', 'enabled': 'True'}\n        self.server.create_import_hook('qj', qj_attrs, info_hook)\n        self.server.create_import_hook('mj', mj_attrs, info_hook)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        j1 = Job(TEST_USER, {'Resource_List.walltime': 42})\n        j1.set_sleep_time(1)\n        jid1 = self.server.submit(j1)\n        self.server.alterjob(jid1, {ATTR_p: 150})\n\n        rj_hook = \"\"\"\nimport pbs\ne1 = pbs.event()\ne1.job.Output_Path = '/tmp/job-%s-output'\ne1.job.Error_Path = '/tmp/job-%s-error'\ne2 = pbs.event()\ne2.accept()\n\"\"\" % (jid1, jid1)\n        rj_attrs = {'event': 'runjob', 'enabled': 'True'}\n        self.server.create_import_hook('rj', rj_attrs, rj_hook)\n\n        self.server.runjob(jid1)\n        (_, line) = self.server.accounting_match(';a;' + jid1 + ';')\n        self.assertIn('Priority=150', line)\n        (_, line) = self.server.accounting_match(';a;' + jid1 + ';Output_Path')\n        self.assertIn('Output_Path=/tmp/job-%s-output' % jid1, line)\n        (_, line) = self.server.accounting_match(';a;' + jid1 + ';Error_Path')\n        self.assertIn('Error_Path=/tmp/job-%s-error' % jid1, line)\n\n    def test_multiple_alter_record_hooks(self):\n        \"\"\"\n        Test that when hooks set attributes, an 'a' record is logged.\n        \"\"\"\n        mj_hook_00 = \"\"\"\nimport pbs\ne1 = pbs.event()\ne1.job.comment = 'foo'\ne1.job.project = \"aaa\"\ne2 = pbs.event()\ne2.accept()\n\"\"\"\n        mj_hook_01 = \"\"\"\nimport pbs\ne1 = pbs.event()\ne1.job.comment = 'foo2'\ne1.job.project = \"bbb\"\ne2 = pbs.event()\ne2.accept()\n    \"\"\"\n        mj_attrs_00 = {'event': 'modifyjob', 'order': '1', 'enabled': 'True'}\n        mj_attrs_01 = {'event': 'modifyjob', 'order': '2', 'enabled': 
'True'}\n        rj_hook = \"\"\"\nimport pbs\ne1 = pbs.event()\ne1.job.project = 'abc'\ne2 = pbs.event()\ne2.reject('bar')\n\"\"\"\n        rj_attrs = {'event': 'runjob', 'enabled': 'True'}\n\n        self.server.create_import_hook('mj01', mj_attrs_01, mj_hook_01)\n        # create out of order.\n        self.server.create_import_hook('mj00', mj_attrs_00, mj_hook_00)\n        self.server.create_import_hook('rj', rj_attrs, rj_hook)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        j1 = Job(TEST_USER, {'Resource_List.walltime': 42})\n        jid1 = self.server.submit(j1)\n\n        self.server.alterjob(jid1, {ATTR_p: 150})\n        (_, line) = self.server.accounting_match(';a;' + jid1 + ';')\n        self.assertIn('Priority=150', line)\n        self.assertIn('comment=foo2', line)\n        self.assertIn('project=bbb', line)\n\n        try:\n            self.server.runjob(jid1)\n        except PbsRunError:\n            # runjob hook is rejecting the run request\n            pass\n        self.server.accounting_match(f';a;{jid1};project=abc')\n\n    def test_queue_record_multiple_hook_00(self):\n        \"\"\"\n        Test that changes made in queuejob hooks are reflected in the\n        Q record\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False',\n                             'job_history_enable': 'True',\n                             })\n        qj_hook_00 = \"\"\"\nimport pbs\ne1 = pbs.event()\ne1.job.project = 'foo00'\npbs.logmsg(pbs.LOG_ERROR, f\"HOOK:e1:{hex(id(e1))}\"\n                    f\" job.id:{e1.job.id}\"\n                    f\" job.project:{e1.job.project}\"\n                    f\" Resource_List:{e1.job.Resource_List}\"\n                    f\" hex(id(job)):{hex(id(e1.job))}\")\ne1.accept()\n\"\"\"\n        qj_hook_01 = \"\"\"\nimport pbs\ne1 = pbs.event()\ne1.job.project = str(e1.job.project) + '_foo01'\ne1.accept()\n\"\"\"\n        qj_attrs = 
{'event': 'queuejob', 'enabled': 'True'}\n        self.server.create_import_hook('qj00', qj_attrs, qj_hook_00)\n        # FIXME set hook attr to priority order\n        self.server.create_import_hook('qj01', qj_attrs, qj_hook_01)\n\n        j = Job(TEST_USER, {'Resource_List.walltime': 42})\n        j.set_sleep_time(1)\n        jid1 = self.server.submit(j)\n        self.server.alterjob(jid1, {ATTR_p: 150})\n        (_, line) = self.server.accounting_match(';Q;' + jid1)\n        self.assertIn('project=foo00_foo01', line)\n        (_, line) = self.server.accounting_match(';a;' + jid1)\n        self.assertIn('Priority=150', line)\n        self.server.runjob(jid1)\n        self.server.expect(JOB, {'job_state': 'F'}, extend='x', id=jid1)\n        (_, line) = self.server.accounting_match(';E;' + jid1)\n        self.assertIn('project=foo00_foo01', line)\n\n    def test_queue_record_multiple_hook_01(self):\n        \"\"\"\n        Test that changes made in a modifyjob hook are reflected in the\n        E record\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False',\n                             'job_history_enable': 'True',\n                             })\n        mj_hook_00 = \"\"\"\nimport pbs\ne1 = pbs.event()\ne1.job.project = 'foo02'\npbs.logmsg(pbs.LOG_ERROR, f\"HOOKQ0:e1:{hex(id(e1))}\"\n                    f\" job.id:{e1.job.id}\"\n                    f\" job.project:{e1.job.project}\"\n                    f\" hex(id(job)):{hex(id(e1.job))}\")\ne1.accept()\n\"\"\"\n        mj_hook_01 = \"\"\"\nimport pbs\ne1 = pbs.event()\ne1.job.project = str(e1.job.project) + '_foo03'\npbs.logmsg(pbs.LOG_ERROR, f\"HOOKQ0:e1:{hex(id(e1))}\"\n                    f\" job.id:{e1.job.id}\"\n                    f\" job.project:{e1.job.project}\"\n                    f\" hex(id(job)):{hex(id(e1.job))}\")\ne1.accept()\n\"\"\"\n        mj_attrs_00 = {'event': 'modifyjob', 'order': 1, 'enabled': 'True'}\n        mj_attrs_01 = 
{'event': 'modifyjob', 'order': 2, 'enabled': 'True'}\n        self.server.create_import_hook('mj_00', mj_attrs_00, mj_hook_00)\n        self.server.create_import_hook('mj_01', mj_attrs_01, mj_hook_01)\n        j = Job(TEST_USER, {'Resource_List.walltime': 42})\n        j.set_sleep_time(1)\n        jid1 = self.server.submit(j)\n        self.server.alterjob(jid1, {ATTR_p: 150})\n        (_, line) = self.server.accounting_match(';a;' + jid1)\n        self.assertIn('Priority=150', line)\n        self.assertIn('project=foo02_foo03', line)\n        self.server.runjob(jid1)\n        self.server.expect(JOB, {'job_state': 'F'}, extend='x', id=jid1)\n        (_, line) = self.server.accounting_match(';E;' + jid1)\n        self.assertIn('project=foo02_foo03', line)\n\n    def test_queue_record_multiple_hook_02(self):\n        \"\"\"\n        Test that changes made in a queuejob then modifyjob are stacking using\n        job_o in the modifyjob hook.\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False',\n                             'job_history_enable': 'True',\n                             })\n        qj_hook_00 = \"\"\"\nimport pbs\ne1 = pbs.event()\ne1.job.project = 'foo00'\ne1 = pbs.event()\npbs.logmsg(pbs.LOG_ERROR, f\"HOOKQ0:e1:{hex(id(e1))}\"\n                    f\" job.id:{e1.job.id}\"\n                    f\" job.project:{e1.job.project}\"\n                    f\" Resource_List:{e1.job.Resource_List}\"\n                    f\" hex(id(job)):{hex(id(e1.job))}\")\ne1.accept()\n\"\"\"\n        qj_attrs = {'event': 'queuejob', 'enabled': 'True'}\n        self.server.create_import_hook('qj00', qj_attrs, qj_hook_00)\n\n        mj_hook_00 = \"\"\"\nimport pbs\ne1 = pbs.event()\npbs.logmsg(pbs.LOG_ERROR, f\"HOOKM0a:e1:{hex(id(e1))}\"\n                    f\" job.id:{e1.job.id}\"\n                    f\" job.project:{e1.job.project}\"\n                    f\" job_o.id:{e1.job_o.id}\"\n                    f\" 
job_o.project:{e1.job_o.project}\"\n                    f\" hex(id(job)):{hex(id(e1.job))}\")\ne1.job.project = str(e1.job_o.project) + '_foo01'\npbs.logmsg(pbs.LOG_ERROR, f\"HOOKM0b:e1:{hex(id(e1))}\"\n                    f\" job.id:{e1.job.id}\"\n                    f\" job.project:{e1.job.project}\"\n                    f\" job_o.id:{e1.job_o.id}\"\n                    f\" job_o.project:{e1.job_o.project}\"\n                    f\" hex(id(job)):{hex(id(e1.job))}\")\ne1.accept()\n\"\"\"\n        mj_attrs = {'event': 'modifyjob', 'enabled': 'True'}\n        # FIXME: there is a problem here when you enable the modifyjob hook.\n        # the modifyjob hook doesn't get the change from the queuejob.\n        self.server.create_import_hook('mj_00', mj_attrs, mj_hook_00)\n\n        j = Job(TEST_USER, {'Resource_List.walltime': 1})\n        j.set_sleep_time(1)\n        jid1 = self.server.submit(j)\n        self.server.alterjob(jid1, {ATTR_p: 150})\n        self.server.runjob(jid1)\n        self.server.expect(JOB, {'job_state': 'F'}, extend='x', id=jid1)\n\n        (_, line) = self.server.accounting_match(';Q;' + jid1)\n        self.assertIn('project=foo00', line)\n\n        (_, line) = self.server.accounting_match(';a;' + jid1)\n        self.assertIn('Priority=150', line)\n        self.assertIn('project=foo00_foo01', line)\n\n        (_, line) = self.server.accounting_match(';E;' + jid1)\n        self.assertIn('project=foo00_foo01', line)\n\n    def test_queue_record_multiple_hook_03(self):\n        \"\"\"\n        Test that changes made in a queuejob then modifyjob are stacking using\n        job_o in the first modifyjob hook, but not in the second.\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False',\n                             'job_history_enable': 'True',\n                             })\n        qj_hook_00 = \"\"\"\nimport pbs\ne1 = pbs.event()\ne1.job.project = 'foo00'\ne1.accept()\n\"\"\"\n        
qj_hook_01 = \"\"\"\nimport pbs\ne1 = pbs.event()\ne1.job.project = str(e1.job.project) + '_foo01'\npbs.logmsg(pbs.LOG_ERROR, f\"HOOKQ1:e1:{hex(id(e1))}\"\n                    f\" job.id:{e1.job.id}\"\n                    f\" job.project:{e1.job.project}\"\n                    f\" Resource_List:{e1.job.Resource_List}\"\n                    f\" hex(id(job)):{hex(id(e1.job))}\")\ne1.accept()\n\"\"\"\n        qj_attrs = {'event': 'queuejob', 'enabled': 'True'}\n        self.server.create_import_hook('qj00', qj_attrs, qj_hook_00)\n        self.server.create_import_hook('qj01', qj_attrs, qj_hook_01)\n\n        mj_hook_00 = \"\"\"\nimport pbs\ne1 = pbs.event()\npbs.logmsg(pbs.LOG_ERROR, f\"HOOKM0a:e1:{hex(id(e1))}\"\n                    f\" job.id:{e1.job.id}\"\n                    f\" job.project:{e1.job.project}\"\n                    f\" job_o.id:{e1.job_o.id}\"\n                    f\" job_o.project:{e1.job_o.project}\"\n                    f\" hex(id(job)):{hex(id(e1.job))}\")\ne1.job.project = str(e1.job_o.project) + '_foo02'\npbs.logmsg(pbs.LOG_ERROR, f\"HOOKM0b:e1:{hex(id(e1))}\"\n                    f\" jobid:{e1.job.id}\"\n                    f\" project:{e1.job.project}\"\n                    f\" job_o.id:{e1.job_o.id}\"\n                    f\" job_o.project:{e1.job_o.project}\"\n                    f\" hex(id(job)):{hex(id(e1.job))}\")\ne1.accept()\n\"\"\"\n        mj_hook_01 = \"\"\"\nimport pbs\ne1 = pbs.event()\ne1.job.project = str(e1.job.project) + '_foo03'\npbs.logmsg(pbs.LOG_ERROR, f\"HOOKM1:e1:{hex(id(e1))}\"\n                    f\" job.id:{e1.job.id}\"\n                    f\" job.project:{e1.job.project}\"\n                    f\" job_o.id:{e1.job_o.id}\"\n                    f\" job_o.project:{e1.job_o.project}\"\n                    f\" hex(id(job)):{hex(id(e1.job))}\")\ne1.accept()\n\"\"\"\n        mj_attrs_00 = {'event': 'modifyjob', 'order': 1, 'enabled': 'True'}\n        mj_attrs_01 = {'event': 'modifyjob', 'order': 2, 'enabled': 'True'}\n   
     self.server.create_import_hook('mj_00', mj_attrs_00, mj_hook_00)\n        self.server.create_import_hook('mj_01', mj_attrs_01, mj_hook_01)\n\n        j = Job(TEST_USER, {'Resource_List.walltime': 1})\n        j.set_sleep_time(1)\n        jid1 = self.server.submit(j)\n        self.server.alterjob(jid1, {ATTR_p: 150})\n        (_, line) = self.server.accounting_match(';Q;' + jid1)\n        self.assertIn('project=foo00_foo01', line)\n        self.server.runjob(jid1)\n        self.server.expect(JOB, {'job_state': 'F'}, extend='x', id=jid1)\n\n        (_, line) = self.server.accounting_match(';a;' + jid1)\n        self.assertIn('Priority=150', line)\n        self.assertIn('project=foo00_foo01_foo02_foo03', line)\n\n        (_, line) = self.server.accounting_match(';E;' + jid1)\n        self.assertIn('project=foo00_foo01_foo02_foo03', line)\n"
  },
  {
    "path": "test/tests/functional/pbs_accumulate_resc_used.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nfrom tests.functional import *\nimport ast\n\n\n@requirements(num_moms=3)\nclass TestPbsAccumulateRescUsed(TestFunctional):\n\n    \"\"\"\n    This tests the feature in PBS that enables mom hooks to accumulate\n    resources_used values for resources besides cput, cpupercent, and mem.\n    This includes accumulation of custom resources. The mom hooks that support\n    this feature are: exechost_periodic, execjob_prologue,\n    and execjob_epilogue.\n\n\n    PRE: Have a cluster of PBS with 3 mom hosts, with an exechost_startup\n         that adds custom resources.\n\n    POST: When a job ends, accounting_logs reflect the aggregated\n          resources_used values. 
And with job_history_enable=true, one\n          can do a 'qstat -x -f <jobid>' to obtain information of a previous\n          job.\n    \"\"\"\n\n    # Class variables\n\n    def setUp(self):\n\n        TestFunctional.setUp(self)\n        self.logger.info(\"len moms = %d\" % (len(self.moms)))\n        if len(self.moms) != 3:\n            usage_string = 'test requires 3 MoMs as input, ' + \\\n                           'use -p moms=<mom1>:<mom2>:<mom3>'\n            self.skip_test(usage_string)\n\n        # PBSTestSuite returns the moms passed in as parameters as dictionary\n        # of hostname and MoM object\n        self.momA = self.moms.values()[0]\n        self.momB = self.moms.values()[1]\n        self.momC = self.moms.values()[2]\n        self.momA.delete_vnode_defs()\n        self.momB.delete_vnode_defs()\n        self.momC.delete_vnode_defs()\n\n        self.hostA = self.momA.shortname\n        self.hostB = self.momB.shortname\n        self.hostC = self.momC.shortname\n\n        rc = self.server.manager(MGR_CMD_DELETE, NODE, None, \"\")\n        self.assertEqual(rc, 0)\n\n        rc = self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostA)\n        self.assertEqual(rc, 0)\n\n        rc = self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostB)\n        self.assertEqual(rc, 0)\n\n        rc = self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostC)\n        self.assertEqual(rc, 0)\n\n        # Give the moms a chance to contact the server.\n        self.server.expect(NODE, {'state': 'free'}, id=self.hostA)\n        self.server.expect(NODE, {'state': 'free'}, id=self.hostB)\n        self.server.expect(NODE, {'state': 'free'}, id=self.hostC)\n\n        # First set some custom resources via exechost_startup hook.\n        startup_hook_body = \"\"\"\nimport pbs\ne=pbs.event()\nlocalnode=pbs.get_local_nodename()\n\ne.vnode_list[localnode].resources_available['foo_i'] = 7\ne.vnode_list[localnode].resources_available['foo_f'] = 
5.0\ne.vnode_list[localnode].resources_available['foo_str'] = \"seventyseven\"\n\"\"\"\n        hook_name = \"start\"\n        a = {'event': \"exechost_startup\", 'enabled': 'True'}\n        rv = self.server.create_import_hook(\n            hook_name,\n            a,\n            startup_hook_body,\n            overwrite=True)\n        self.assertTrue(rv)\n\n        self.momA.signal(\"-HUP\")\n        self.momB.signal(\"-HUP\")\n        self.momC.signal(\"-HUP\")\n\n        a = {'job_history_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Next set some custom resources via qmgr -c 'create resource'\n        attr = {}\n        attr['type'] = 'string'\n        attr['flag'] = 'h'\n        r = 'foo_str2'\n        rc = self.server.manager(\n            MGR_CMD_CREATE, RSC, attr, id=r, runas=ROOT_USER, logerr=False)\n        self.assertEqual(rc, 0)\n\n        # Ensure the new resource is seen by all moms.\n        momlist = [self.momA, self.momB, self.momC]\n        for m in momlist:\n            m.log_match(\"resourcedef;copy hook-related file\")\n\n        attr['type'] = 'string'\n        attr['flag'] = 'h'\n        r = 'foo_str3'\n        rc = self.server.manager(\n            MGR_CMD_CREATE, RSC, attr, id=r, runas=ROOT_USER, logerr=False)\n        self.assertEqual(rc, 0)\n\n        # Ensure the new resource is seen by all moms.\n        for m in momlist:\n            m.log_match(\"resourcedef;copy hook-related file\")\n\n        attr['type'] = 'string'\n        attr['flag'] = 'h'\n        r = 'foo_str4'\n        rc = self.server.manager(\n            MGR_CMD_CREATE, RSC, attr, id=r, runas=ROOT_USER, logerr=False)\n        self.assertEqual(rc, 0)\n\n        # Ensure the new resource is seen by all moms.\n        for m in momlist:\n            m.log_match(\"resourcedef;copy hook-related file\")\n\n        attr['type'] = 'string_array'\n        attr['flag'] = 'h'\n        r = 'stra'\n        rc = self.server.manager(\n            
MGR_CMD_CREATE, RSC, attr, id=r, runas=ROOT_USER, logerr=False)\n        self.assertEqual(rc, 0)\n\n        # Give the moms a chance to receive the updated resource.\n        # Ensure the new resource is seen by all moms.\n        for m in momlist:\n            m.log_match(\"resourcedef;copy hook-related file\")\n\n    def test_epilogue(self):\n        \"\"\"\n        Test accumulation of resources of a multinode job from an\n        execjob_epilogue hook.\n        \"\"\"\n        self.logger.info(\"test_epilogue\")\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"executed epilogue hook\")\nif e.job.in_ms_mom():\n    e.job.resources_used[\"vmem\"] = pbs.size(\"9gb\")\n    e.job.resources_used[\"foo_i\"] = 9\n    e.job.resources_used[\"foo_f\"] = 0.09\n    e.job.resources_used[\"foo_str\"] = '{\"seven\":7}'\n    e.job.resources_used[\"cput\"] = 10\n    e.job.resources_used[\"stra\"] = '\"glad,elated\",\"happy\"'\n    e.job.resources_used[\"foo_str3\"] = \\\n        \\\"\\\"\\\"{\"a\":6,\"b\":\"some value #$%^&*@\",\"c\":54.4,\"d\":\"32.5gb\"}\\\"\\\"\\\"\n    e.job.resources_used[\"foo_str2\"] = \"seven\"\n    e.job.resources_used[\"foo_str4\"] = \"eight\"\nelse:\n    e.job.resources_used[\"vmem\"] = pbs.size(\"10gb\")\n    e.job.resources_used[\"foo_i\"] = 10\n    e.job.resources_used[\"foo_f\"] = 0.10\n    e.job.resources_used[\"foo_str\"] = '{\"eight\":8,\"nine\":9}'\n    e.job.resources_used[\"foo_str2\"] = '{\"seven\":7}'\n    e.job.resources_used[\"cput\"] = 20\n    e.job.resources_used[\"stra\"] = '\"cucumbers,bananas\"'\n    e.job.resources_used[\"foo_str3\"] = \\\"\\\"\\\"\"vn1\":4,\"vn2\":5,\"vn3\":6\\\"\\\"\\\"\n\"\"\"\n\n        hook_name = \"epi\"\n        a = {'event': \"execjob_epilogue\", 'enabled': 'True', 'order': 999}\n        rv = self.server.create_import_hook(\n            hook_name,\n            a,\n            hook_body,\n            overwrite=True)\n        self.assertTrue(rv)\n\n        a = 
{'Resource_List.select': '3:ncpus=1',\n             'Resource_List.walltime': 10,\n             'Resource_List.place': \"scatter\"}\n        j = Job(TEST_USER)\n        j.set_attributes(a)\n        j.set_sleep_time(\"10\")\n        jid = self.server.submit(j)\n\n        # The results should show values for the custom resources 'foo_i',\n        # 'foo_f', 'foo_str', 'foo_str3', and builtin resources 'vmem',\n        # 'cput', and should be accumulated based\n        # on the hook script, where MS defines 1 value, while the 2 sister\n        # Moms define the same value. For 'string' type, it will be a\n        # union of all values obtained from sister moms and local mom, and\n        # the result will be in JSON-format.\n        #\n        # foo_str is for testing normal values.\n        # foo_str2 is for testing a non-JSON format value received from MS.\n        # foo_str3 is for testing a non-JSON format value received from a\n        # sister mom.\n        # foo_str4 is for testing MS-only set values.\n        #\n        # For the string_array type resource 'stra', it is not accumulated but\n        # will be set to the last value seen from a mom epilogue hook.\n        self.server.expect(JOB, {\n            'job_state': 'F',\n            'resources_used.foo_f': '0.29',\n            'resources_used.foo_i': '29',\n            'resources_used.foo_str4': \"eight\",\n            'resources_used.stra': \"\\\"glad,elated\\\",\\\"happy\\\"\",\n            'resources_used.vmem': '29gb',\n            'resources_used.cput': '00:00:50',\n            'resources_used.ncpus': '3'},\n            extend='x', offset=10, attrop=PTL_AND, id=jid)\n\n        foo_str_dict_in = {\"eight\": 8, \"seven\": 7, \"nine\": 9}\n        qstat = self.server.status(\n            JOB, 'resources_used.foo_str', id=jid, extend='x')\n        foo_str_dict_out_str = eval(qstat[0]['resources_used.foo_str'])\n        foo_str_dict_out = eval(foo_str_dict_out_str)\n        self.assertTrue(foo_str_dict_in == 
foo_str_dict_out)\n\n        # resources_used.foo_str3 must not be set since a sister value is not\n        # of JSON-format.\n        self.server.expect(JOB, 'resources_used.foo_str3',\n                           op=UNSET, extend='x', id=jid)\n\n        self.momA.log_match(\n            \"Job %s resources_used.foo_str3 cannot be \" % (jid,) +\n            \"accumulated: value '\\\"vn1\\\":4,\\\"vn2\\\":5,\\\"vn3\\\":6' \" +\n            \"from mom %s not JSON-format\" % (self.hostB,))\n\n        # resources_used.foo_str2 must not be set.\n        self.server.expect(JOB, 'resources_used.foo_str2', op=UNSET, id=jid)\n        self.momA.log_match(\n            \"Job %s resources_used.foo_str2 cannot be \" % (jid,) +\n            \"accumulated: value 'seven' from mom %s \" % (self.hostA,) +\n            \"not JSON-format\")\n\n        # Match accounting_logs entry\n\n        acctlog_match = 'resources_used.foo_f=0.29'\n        self.server.accounting_match(\n            \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n\n        acctlog_match = 'resources_used.foo_i=29'\n        self.server.accounting_match(\n            \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n\n        acctlog_match = \"resources_used.foo_str='%s'\" % (foo_str_dict_out_str,)\n        self.server.accounting_match(\n            \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n\n        acctlog_match = 'resources_used.vmem=29gb'\n        self.server.accounting_match(\n            \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n\n        acctlog_match = 'resources_used.cput=00:00:50'\n        self.server.accounting_match(\n            \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n\n        # Ensure resources_used.foo_str2 is not reported in accounting_logs\n        # since it's unset due to its non-JSON-format value.\n        acctlog_match = 'resources_used.foo_str2='\n        self.server.accounting_match(\"E;%s;.*%s.*\" % (jid, 
acctlog_match),\n                                     regexp=True, n=100, existence=False)\n\n        acctlog_match = 'resources_used.foo_str4=eight'\n        self.server.accounting_match(\n            \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n\n        acctlog_match = 'resources_used.ncpus=3'\n        self.server.accounting_match(\n            \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n\n        # resources_used.foo_str3 must not show up in accounting_logs\n        acctlog_match = 'resources_used.foo_str3='\n        self.server.accounting_match(\"E;%s;.*%s.*\" % (jid, acctlog_match),\n                                     regexp=True, n=100, existence=False)\n\n        acctlog_match = r'resources_used.stra=\\\"glad\\,elated\\\"\\,\\\"happy\\\"'\n        self.server.accounting_match(\n            \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n\n    def test_prologue(self):\n        \"\"\"\n        Test accumulation of resources of a multinode job from an\n        execjob_prologue hook.\n        On cpuset systems, don't check cput because the pbs_cgroups hook\n        will be enabled and will overwrite the cput value set in the prologue\n        hook.\n        \"\"\"\n        has_cpuset = False\n        for mom in self.moms.values():\n            if mom.is_cpuset_mom():\n                has_cpuset = True\n\n        self.logger.info(\"test_prologue\")\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"executed prologue hook\")\nif e.job.in_ms_mom():\n    e.job.resources_used[\"vmem\"] = pbs.size(\"11gb\")\n    e.job.resources_used[\"foo_i\"] = 11\n    e.job.resources_used[\"foo_f\"] = 0.11\n    e.job.resources_used[\"foo_str\"] = '{\"seven\":7}'\n    e.job.resources_used[\"cput\"] = 11\n    e.job.resources_used[\"stra\"] = '\"glad,elated\",\"happy\"'\n    e.job.resources_used[\"foo_str3\"] = \\\n      \\\"\\\"\\\"{\"a\":6,\"b\":\"some value 
#$%^&*@\",\"c\":54.4,\"d\":\"32.5gb\"}\\\"\\\"\\\"\n    e.job.resources_used[\"foo_str2\"] = \"seven\"\n    e.job.resources_used[\"foo_str4\"] = \"eight\"\nelse:\n    e.job.resources_used[\"vmem\"] = pbs.size(\"12gb\")\n    e.job.resources_used[\"foo_i\"] = 12\n    e.job.resources_used[\"foo_f\"] = 0.12\n    e.job.resources_used[\"foo_str\"] = '{\"eight\":8,\"nine\":9}'\n    e.job.resources_used[\"foo_str2\"] = '{\"seven\":7}'\n    e.job.resources_used[\"cput\"] = 12\n    e.job.resources_used[\"stra\"] = '\"cucumbers,bananas\"'\n    e.job.resources_used[\"foo_str3\"] = \\\"\\\"\\\"\"vn1\":4,\"vn2\":5,\"vn3\":6\\\"\\\"\\\"\n\"\"\"\n\n        hook_name = \"prolo\"\n        a = {'event': \"execjob_prologue\", 'enabled': 'True'}\n        rv = self.server.create_import_hook(\n            hook_name,\n            a,\n            hook_body,\n            overwrite=True)\n        self.assertTrue(rv)\n\n        a = {'Resource_List.select': '3:ncpus=1',\n             'Resource_List.walltime': 10,\n             'Resource_List.place': 'scatter'}\n        j = Job(TEST_USER)\n        j.set_attributes(a)\n\n        # The pbsdsh call is what allows a first task to get spawned on\n        # on a sister mom, causing the execjob_prologue hook to execute.\n        j.create_script(\n            \"pbsdsh -n 1 hostname\\n\" + \"pbsdsh -n 2 hostname\\n\" + \"sleep 10\\n\")\n\n        jid = self.server.submit(j)\n\n        # The results should show results for custom resources 'foo_i',\n        # 'foo_f', 'foo_str', 'foo_str3', and bultin resources 'vmem',\n        # 'cput', and should be accumulating  based\n        # on the hook script, where MS defines 1 value, while the 2 sister\n        # Moms define the same value. 
For 'string' type, it will be a\n        # union of all values obtained from sister moms and local mom, and\n        # the result will be in JSON-format.\n        #\n        # foo_str is for testing normal values.\n        # foo_str2 is for testing a non-JSON format value received from MS.\n        # foo_str3 is for testing a non-JSON format value received from a\n        # sister mom.\n        # foo_str4 is for testing MS-only set values.\n        #\n        # For the string_array type resource 'stra', it is not accumulated but\n        # will be set to the last value seen from a mom prologue hook.\n        a = {\n            'job_state': 'F',\n            'resources_used.foo_f': '0.35',\n            'resources_used.foo_i': '35',\n            'resources_used.foo_str4': \"eight\",\n            'resources_used.stra': \"\\\"glad,elated\\\",\\\"happy\\\"\",\n            'resources_used.vmem': '35gb',\n            'resources_used.ncpus': '3'}\n\n        if not has_cpuset:\n            a['resources_used.cput'] = '00:00:35'\n\n        self.server.expect(JOB, a, extend='x', offset=10,\n                           attrop=PTL_AND, id=jid)\n\n        foo_str_dict_in = {\"eight\": 8, \"seven\": 7, \"nine\": 9}\n        qstat = self.server.status(\n            JOB, 'resources_used.foo_str', id=jid, extend='x')\n        foo_str_dict_out_str = eval(qstat[0]['resources_used.foo_str'])\n        foo_str_dict_out = eval(foo_str_dict_out_str)\n        self.assertTrue(foo_str_dict_in == foo_str_dict_out)\n\n        # resources_used.foo_str3 must not be set since a sister value is\n        # not of JSON-format.\n        self.server.expect(JOB, 'resources_used.foo_str3',\n                           op=UNSET, extend='x', id=jid)\n\n        self.momA.log_match(\n            \"Job %s resources_used.foo_str3 cannot be \" % (jid,) +\n            \"accumulated: value '\\\"vn1\\\":4,\\\"vn2\\\":5,\\\"vn3\\\":6' \" +\n            \"from mom %s not JSON-format\" % (self.hostB,))\n        
self.momA.log_match(\n            \"Job %s resources_used.foo_str3 cannot be \" % (jid,) +\n            \"accumulated: value '\\\"vn1\\\":4,\\\"vn2\\\":5,\\\"vn3\\\":6' \" +\n            \"from mom %s not JSON-format\" % (self.hostC,))\n\n        # Ensure resources_used.foo_str3 is not set since it has a\n        # non-JSON format value.\n        self.server.expect(JOB, 'resources_used.foo_str3', op=UNSET,\n                           extend='x', id=jid)\n\n        # resources_used.foo_str2 must not be set.\n        self.server.expect(JOB, 'resources_used.foo_str2', op=UNSET, id=jid)\n        self.momA.log_match(\n            \"Job %s resources_used.foo_str2 cannot be \" % (jid,) +\n            \"accumulated: value 'seven' from \" +\n            \"mom %s not JSON-format\" % (self.hostA,))\n\n        # Match accounting_logs entry\n\n        acctlog_match = 'resources_used.foo_f=0.35'\n        self.server.accounting_match(\n            \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n\n        acctlog_match = 'resources_used.foo_i=35'\n        self.server.accounting_match(\n            \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n\n        acctlog_match = \"resources_used.foo_str='%s'\" % (foo_str_dict_out_str,)\n        self.server.accounting_match(\n            \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n\n        acctlog_match = 'resources_used.vmem=35gb'\n        self.server.accounting_match(\n            \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n\n        if not has_cpuset:\n            acctlog_match = 'resources_used.cput=00:00:35'\n            self.server.accounting_match(\n                \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n\n        # resources_used.foo_str2 should not be reported in accounting_logs.\n        acctlog_match = 'resources_used.foo_str2='\n        self.server.accounting_match(\"E;%s;.*%s.*\" % (jid, acctlog_match),\n                                     
regexp=True, n=100, existence=False)\n\n        acctlog_match = 'resources_used.ncpus=3'\n        self.server.accounting_match(\n            \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n\n        # resources_used.foo_str3 must not show up in accounting_logs\n        acctlog_match = 'resources_used.foo_str3='\n        self.server.accounting_match(\"E;%s;.*%s.*\" % (jid, acctlog_match),\n                                     regexp=True, n=100, existence=False)\n\n        acctlog_match = 'resources_used.foo_str4=eight'\n        self.server.accounting_match(\n            \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n\n        acctlog_match = r'resources_used.stra=\\\"glad\\,elated\\\"\\,\\\"happy\\\"'\n        self.server.accounting_match(\n            \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n\n    def test_periodic(self):\n        \"\"\"\n        Test accumulation of resources from an exechost_periodic hook.\n        \"\"\"\n        self.logger.info(\"test_periodic\")\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"executed periodic hook\")\ni = 0\nl = []\nfor v in pbs.server().vnodes():\n    pbs.logmsg(pbs.LOG_DEBUG, \"node %s\" % (v.name,))\n    l.append(v.name)\n\nlocal_node=pbs.get_local_nodename()\nfor jk in e.job_list.keys():\n    if local_node == l[0]:\n        e.job_list[jk].resources_used[\"vmem\"] = pbs.size(\"11gb\")\n        e.job_list[jk].resources_used[\"foo_i\"] = 11\n        e.job_list[jk].resources_used[\"foo_f\"] = 0.11\n        e.job_list[jk].resources_used[\"foo_str\"] = '{\"seven\":7}'\n        e.job_list[jk].resources_used[\"cput\"] = 11\n        e.job_list[jk].resources_used[\"stra\"] = '\"glad,elated\",\"happy\"'\n        e.job_list[jk].resources_used[\"foo_str3\"] = \\\n         \\\"\\\"\\\"{\"a\":6,\"b\":\"some value #$%^&*@\",\"c\":54.4,\"d\":\"32.5gb\"}\\\"\\\"\\\"\n        e.job_list[jk].resources_used[\"foo_str2\"] = \"seven\"\n    elif local_node == 
l[1]:\n        e.job_list[jk].resources_used[\"vmem\"] = pbs.size(\"12gb\")\n        e.job_list[jk].resources_used[\"foo_i\"] = 12\n        e.job_list[jk].resources_used[\"foo_f\"] = 0.12\n        e.job_list[jk].resources_used[\"foo_str\"] = '{\"eight\":8}'\n        e.job_list[jk].resources_used[\"cput\"] = 12\n        e.job_list[jk].resources_used[\"stra\"] = '\"cucumbers,bananas\"'\n        e.job_list[jk].resources_used[\"foo_str2\"] =  '{\"seven\":7}'\n        e.job_list[jk].resources_used[\"foo_str3\"] = \\\n                \\\"\\\"\\\"{\"vn1\":4,\"vn2\":5,\"vn3\":6}\\\"\\\"\\\"\n    else:\n        e.job_list[jk].resources_used[\"vmem\"] = pbs.size(\"13gb\")\n        e.job_list[jk].resources_used[\"foo_i\"] = 13\n        e.job_list[jk].resources_used[\"foo_f\"] = 0.13\n        e.job_list[jk].resources_used[\"foo_str\"] = '{\"nine\":9}'\n        e.job_list[jk].resources_used[\"foo_str2\"] =  '{\"seven\":7}'\n        e.job_list[jk].resources_used[\"cput\"] = 13\n        e.job_list[jk].resources_used[\"stra\"] = '\"cucumbers,bananas\"'\n        e.job_list[jk].resources_used[\"foo_str3\"] = \\\n                \\\"\\\"\\\"{\"vn1\":4,\"vn2\":5,\"vn3\":6}\\\"\\\"\\\"\n\"\"\"\n\n        hook_name = \"period\"\n        a = {'event': \"exechost_periodic\", 'enabled': 'True', 'freq': 15}\n        rv = self.server.create_import_hook(\n            hook_name,\n            a,\n            hook_body,\n            overwrite=True)\n        self.assertTrue(rv)\n\n        a = {'resources_available.ncpus': '2'}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.hostA)\n\n        self.server.manager(MGR_CMD_SET, NODE, a, self.hostB)\n\n        self.server.manager(MGR_CMD_SET, NODE, a, self.hostC)\n\n        a = {'Resource_List.select': '3:ncpus=1',\n             'Resource_List.place': 'scatter'}\n        j = Job(TEST_USER)\n        j.set_attributes(a)\n        j.set_sleep_time(\"35\")\n        jid1 = self.server.submit(j)\n        jid2 = self.server.submit(j)\n\n        for 
jid in [jid1, jid2]:\n\n            # The results should show values for the custom resources 'foo_i',\n            # 'foo_f', 'foo_str', 'foo_str3', and builtin resources 'vmem',\n            # 'cput', and should be accumulated based\n            # on the hook script, where MS defines 1 value, while the 2 sister\n            # Moms define the same value. For 'string' type, it will be a\n            # union of all values obtained from sister moms and local mom, and\n            # the result will be in JSON-format.\n            # foo_str is for testing normal values.\n            # foo_str2 is for testing a non-JSON format value received from MS.\n            # foo_str3 is for testing a non-JSON format value received from a\n            # sister mom.\n            #\n\n            self.server.expect(JOB, {\n                'job_state': 'F',\n                'resources_used.foo_f': '0.36',\n                'resources_used.foo_i': '36',\n                'resources_used.stra': \"\\\"glad,elated\\\",\\\"happy\\\"\",\n                'resources_used.vmem': '36gb',\n                'resources_used.cput': '00:00:36',\n                'resources_used.ncpus': '3'},\n                extend='x', offset=35, attrop=PTL_AND, id=jid)\n\n            foo_str_dict_in = {\"eight\": 8, \"seven\": 7, \"nine\": 9}\n            qstat = self.server.status(\n                JOB, 'resources_used.foo_str', id=jid, extend='x')\n            foo_str_dict_out_str = eval(qstat[0]['resources_used.foo_str'])\n            foo_str_dict_out = eval(foo_str_dict_out_str)\n            self.assertTrue(foo_str_dict_in == foo_str_dict_out)\n\n            foo_str3_dict_in = {\"a\": 6, \"b\": \"some value #$%^&*@\",\n                                \"c\": 54.4, \"d\": \"32.5gb\", \"vn1\": 4,\n                                \"vn2\": 5, \"vn3\": 6}\n            qstat = self.server.status(\n                JOB, 'resources_used.foo_str3', id=jid, extend='x')\n            foo_str3_dict_out_str = 
eval(qstat[0]['resources_used.foo_str3'])\n            foo_str3_dict_out = eval(foo_str3_dict_out_str)\n            self.assertTrue(foo_str3_dict_in == foo_str3_dict_out)\n\n            # resources_used.foo_str2 must be unset since its value is not of\n            # JSON-format.\n            self.server.expect(JOB, 'resources_used.foo_str2', op=UNSET,\n                               extend='x', id=jid)\n\n            # Match accounting_logs entry\n\n            acctlog_match = 'resources_used.foo_f=0.36'\n            self.server.accounting_match(\n                \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n\n            acctlog_match = 'resources_used.foo_i=36'\n            self.server.accounting_match(\n                \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n\n            acctlog_match = \"resources_used.foo_str='%s'\" % (\n                foo_str_dict_out_str,)\n            self.server.accounting_match(\n                \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n\n            acctlog_match = 'resources_used.vmem=36gb'\n            self.server.accounting_match(\n                \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n\n            acctlog_match = 'resources_used.cput=00:00:36'\n            self.server.accounting_match(\n                \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n\n            # resources_used.foo_str2 must not show in accounting_logs\n            acctlog_match = 'resources_used.foo_str2='\n            self.server.accounting_match(\"E;%s;.*%s.*\" % (jid, acctlog_match),\n                                         regexp=True, n=100, existence=False)\n\n            acctlog_match = 'resources_used.ncpus=3'\n            self.server.accounting_match(\n                \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n\n            acctlog_match = \"resources_used.foo_str3='%s'\" % (\n                foo_str3_dict_out_str.replace('.', r'\\.').\n        
        replace(\"#$%^&*@\", r\"\\#\\$\\%\\^\\&\\*\\@\"))\n            self.server.accounting_match(\n                \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n            acctlog_match = r'resources_used.stra=\\\"glad\\,elated\\\"\\,\\\"happy\\\"'\n            self.server.accounting_match(\n                \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n\n    def test_resource_bool(self):\n        \"\"\"\n        To test that boolean value are not getting aggregated\n        \"\"\"\n\n        # Create a boolean type resource\n        attr = {}\n        attr['type'] = 'boolean'\n        self.server.manager(\n            MGR_CMD_CREATE, RSC, attr,\n            id='foo_bool', runas=ROOT_USER,\n            logerr=False)\n\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\nj=e.job\nif j.in_ms_mom():\n    j.resources_used[\"foo_bool\"] = True\nelse:\n    j.resources_used[\"foo_bool\"] = False\n\"\"\"\n\n        hook_name = \"epi_bool\"\n        a = {'event': \"execjob_epilogue\", 'enabled': \"True\"}\n        self.server.create_import_hook(\n            hook_name,\n            a,\n            hook_body,\n            overwrite=True)\n\n        a = {'Resource_List.select': '3:ncpus=1',\n             'Resource_List.walltime': 10,\n             'Resource_List.place': 'scatter'}\n        j = Job(TEST_USER)\n        j.set_attributes(a)\n        j.set_sleep_time(\"5\")\n        jid = self.server.submit(j)\n\n        # foo_bool is True\n        a = {'resources_used.foo_bool': \"True\",\n             'job_state': 'F'}\n        self.server.expect(JOB, a, extend='x', offset=5, attrop=PTL_AND,\n                           id=jid)\n\n    def test_resource_invisible(self):\n        \"\"\"\n        Test that value aggregation is same for invisible resources\n        \"\"\"\n\n        # Set float and string_array to be invisible resource\n        attr = {}\n        attr['flag'] = 'ih'\n        self.server.manager(\n            MGR_CMD_SET, RSC, attr, 
id='foo_f', runas=ROOT_USER)\n        self.server.manager(\n            MGR_CMD_SET, RSC, attr, id='foo_str', runas=ROOT_USER)\n\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\nj = e.job\nif j.in_ms_mom():\n    j.resources_used[\"foo_f\"] = 2.114\n    j.resources_used[\"foo_str\"] = '{\"one\":1,\"two\":2}'\nelse:\n    j.resources_used[\"foo_f\"] = 3.246\n    j.resources_used[\"foo_str\"] = '{\"two\":2, \"three\":3}'\n\"\"\"\n\n        hook_name = \"epi_invis\"\n        a = {'event': \"execjob_epilogue\", 'enabled': 'True'}\n        self.server.create_import_hook(\n            hook_name,\n            a,\n            hook_body,\n            overwrite=True)\n\n        a = {'Resource_List.select': '3:ncpus=1',\n             'Resource_List.walltime': 10,\n             'Resource_List.place': 'scatter'}\n        j = Job(TEST_USER)\n        j.set_attributes(a)\n        j.set_sleep_time(\"5\")\n        jid = self.server.submit(j)\n\n        # Verify that values are accumulated for float and string array\n        a = {'resources_used.foo_f': '8.606'}\n        self.server.expect(JOB, a, extend='x', offset=5, id=jid)\n\n        foo_str_dict_in = {\"one\": 1, \"two\": 2, \"three\": 3}\n        qstat = self.server.status(\n            JOB, 'resources_used.foo_str', id=jid, extend='x')\n        foo_str_dict_out_str = eval(qstat[0]['resources_used.foo_str'])\n        foo_str_dict_out = eval(foo_str_dict_out_str)\n        self.assertEqual(foo_str_dict_in, foo_str_dict_out)\n\n    def test_reservation(self):\n        \"\"\"\n        Test that a job inside a reservation works the same.\n        NOTE: Due to the reservation duration and the job duration\n        both being equal, this test found 2 race conditions.\n        KEEP the durations equal to each other.\n        \"\"\"\n        # Create non-host level resources from qmgr\n        attr = {}\n        attr['type'] = 'size'\n        self.server.manager(\n            MGR_CMD_CREATE, RSC, attr, id='foo_i2', runas=ROOT_USER)\n       
 # Ensure the new resource is seen by all moms.\n        momlist = [self.momA, self.momB, self.momC]\n        for m in momlist:\n            m.log_match(\"resourcedef;copy hook-related file\")\n\n        attr['type'] = 'float'\n        self.server.manager(\n            MGR_CMD_CREATE, RSC, attr, id='foo_f2', runas=ROOT_USER)\n        # Ensure the new resource is seen by all moms.\n        for m in momlist:\n            m.log_match(\"resourcedef;copy hook-related file\")\n\n        attr['type'] = 'string_array'\n        self.server.manager(\n            MGR_CMD_CREATE, RSC, attr, id='stra2', runas=ROOT_USER)\n        # Ensure the new resource is seen by all moms.\n        for m in momlist:\n            m.log_match(\"resourcedef;copy hook-related file\")\n\n        # Create an epilogue hook\n        hook_body = \"\"\"\nimport pbs\ne = pbs.event()\nj = e.job\npbs.logmsg(pbs.LOG_DEBUG, \"executed epilogue hook\")\nj.resources_used[\"foo_i\"] = 2\nj.resources_used[\"foo_i2\"] = pbs.size(1000)\nj.resources_used[\"foo_f\"] = 1.02\nj.resources_used[\"foo_f2\"] = 2.01\nj.resources_used[\"stra\"] = '\"happy\"'\nj.resources_used[\"stra2\"] = '\"glad\"'\n\"\"\"\n\n        # Create and import hook\n        a = {'event': \"execjob_epilogue\", 'enabled': 'True'}\n        self.server.create_import_hook(\n            \"epi\", a, hook_body,\n            overwrite=True)\n\n        # Submit a reservation\n        a = {'Resource_List.select': '3:ncpus=1',\n             'Resource_List.place': 'scatter',\n             'reserve_start': time.time() + 10,\n             'reserve_end': time.time() + 30, }\n        r = Reservation(TEST_USER, a)\n        rid = self.server.submit(r)\n        a = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, a, id=rid)\n\n        rname = rid.split('.')\n        # Submit a job inside reservation\n        a = {'Resource_List.select': '3:ncpus=1', ATTR_queue: rname[0]}\n        j = Job(TEST_USER)\n        j.set_attributes(a)\n 
       j.set_sleep_time(20)\n        jid = self.server.submit(j)\n\n        # Verify the resource values\n        a = {'resources_used.foo_i': '6',\n             'resources_used.foo_i2': '3kb',\n             'resources_used.foo_f': '3.06',\n             'resources_used.foo_f2': '6.03',\n             'resources_used.stra': \"\\\"happy\\\"\",\n             'resources_used.stra2': \"\\\"glad\\\"\",\n             'job_state': 'F'}\n        self.server.expect(JOB, a, extend='x', attrop=PTL_AND,\n                           offset=30, interval=1, id=jid)\n\n        # Below is commented out due to a problem with history jobs\n        # disappearing after a server restart when the reservation is\n        # in state BD during restart.\n        # Once that bug is fixed, this test code should be uncommented\n        # and run.\n\n        # Restart the server and verify that the values are still the same\n        # self.server.restart()\n        # self.server.expect(JOB, a, extend='x', id=jid)\n\n    def test_server_restart(self):\n        \"\"\"\n        Test that resource accumulation is not\n        impacted if the server is restarted during job execution.\n        On cpuset systems, don't check cput because the pbs_cgroups hook\n        will be enabled and will overwrite the cput value set in the prologue\n        hook.\n        \"\"\"\n        has_cpuset = False\n        for mom in self.moms.values():\n            if mom.is_cpuset_mom():\n                has_cpuset = True\n\n        # Create a prologue hook\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"executed prologue hook\")\nif e.job.in_ms_mom():\n    e.job.resources_used[\"vmem\"] = pbs.size(\"11gb\")\n    e.job.resources_used[\"foo_i\"] = 11\n    e.job.resources_used[\"foo_f\"] = 0.11\n    e.job.resources_used[\"foo_str\"] = '{\"seven\":7}'\n    e.job.resources_used[\"cput\"] = 11\n    e.job.resources_used[\"stra\"] = '\"glad,elated\",\"happy\"'\n    
e.job.resources_used[\"foo_str4\"] = \"eight\"\nelse:\n    e.job.resources_used[\"vmem\"] = pbs.size(\"12gb\")\n    e.job.resources_used[\"foo_i\"] = 12\n    e.job.resources_used[\"foo_f\"] = 0.12\n    e.job.resources_used[\"foo_str\"] = '{\"eight\":8,\"nine\":9}'\n    e.job.resources_used[\"cput\"] = 12\n    e.job.resources_used[\"stra\"] = '\"cucumbers,bananas\"'\n\"\"\"\n\n        hook_name = \"prolo\"\n        a = {'event': \"execjob_prologue\", 'enabled': 'True'}\n        self.server.create_import_hook(\n            hook_name,\n            a,\n            hook_body,\n            overwrite=True)\n\n        a = {'Resource_List.select': '3:ncpus=1',\n             'Resource_List.walltime': 20,\n             'Resource_List.place': 'scatter'}\n        j = Job(TEST_USER)\n        j.set_attributes(a)\n\n        # The pbsdsh call is what allows a first task to get spawned\n        # on a sister mom, causing the execjob_prologue hook to execute.\n        j.create_script(\n            \"pbsdsh -n 1 hostname\\n\" +\n            \"pbsdsh -n 2 hostname\\n\" +\n            \"sleep 10\\n\")\n\n        jid = self.server.submit(j)\n\n        # Once the job has started running, restart the server\n        self.server.expect(JOB, {'job_state': \"R\", \"substate\": 42}, id=jid)\n        self.server.restart()\n\n        # Job will be requeued and rerun. 
Verify that the\n        # resource accumulation is the same as if the server had\n        # not been restarted\n        a = {'resources_used.foo_i': '35',\n             'resources_used.foo_f': '0.35',\n             'resources_used.vmem': '35gb',\n             'resources_used.stra': \"\\\"glad,elated\\\",\\\"happy\\\"\",\n             'resources_used.foo_str4': \"eight\",\n             'job_state': 'F'}\n        if not has_cpuset:\n            a['resources_used.cput'] = '00:00:35'\n        self.server.expect(JOB, a, extend='x',\n                           offset=5, id=jid, interval=1, attrop=PTL_AND)\n\n        foo_str_dict_in = {\"eight\": 8, \"seven\": 7, \"nine\": 9}\n        qstat = self.server.status(\n            JOB, 'resources_used.foo_str', id=jid, extend='x')\n        foo_str_dict_out_str = ast.literal_eval(\n            qstat[0]['resources_used.foo_str'])\n        foo_str_dict_out = ast.literal_eval(foo_str_dict_out_str)\n        self.assertEqual(foo_str_dict_in, foo_str_dict_out)\n\n    def test_mom_down(self):\n        \"\"\"\n        Test that resource accumulation is not impacted by a\n        mom restart\n        \"\"\"\n\n        # Set node_fail_requeue to requeue job\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'node_fail_requeue': 10})\n\n        hook_body = \"\"\"\nimport pbs\ne = pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"executed periodic hook\")\n\nfor jj in e.job_list.keys():\n    e.job_list[jj].resources_used[\"foo_i\"] = 1\n    e.job_list[jj].resources_used[\"foo_str\"] = '{\"happy\":\"true\"}'\n    e.job_list[jj].resources_used[\"stra\"] = '\"one\",\"two\"'\n\"\"\"\n\n        a = {'event': \"exechost_periodic\", 'enabled': 'True', 'freq': 10}\n        self.server.create_import_hook(\n            \"period\",\n            a,\n            hook_body,\n            overwrite=True)\n\n        a = {'Resource_List.select': '3:ncpus=1',\n             'Resource_List.walltime': 300,\n             'Resource_List.place': 'scatter'}\n        j = Job(TEST_USER)\n        
j.set_attributes(a)\n        jid1 = self.server.submit(j)\n\n        # Submit a job that can never run\n        a = {'Resource_List.select': '5:ncpus=1',\n             'Resource_List.place': 'scatter'}\n        j.set_attributes(a)\n        j.set_sleep_time(\"300\")\n        jid2 = self.server.submit(j)\n\n        # Wait approximately 10s for the hook to get executed\n        # and verify resources_used.foo_i\n        self.server.expect(JOB, {'resources_used.foo_i': '3'},\n                           offset=10, id=jid1, interval=1)\n        self.server.expect(JOB, \"resources_used.foo_i\", op=UNSET, id=jid2)\n\n        # Bring sister mom down\n        self.momB.stop()\n\n        # Wait for 20 more seconds for the periodic hook to run\n        # more than once and verify that the value is still 3\n        self.server.expect(JOB, {'resources_used.foo_i': '3'},\n                           offset=20, id=jid1, interval=1)\n\n        # Requeue the job instead of waiting for node_fail_requeue\n        self.server.rerunjob(jid1, runas=ROOT_USER)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid1)\n\n        # Verify that resources_used.foo_i is unset\n        self.server.expect(JOB, \"resources_used.foo_i\", op=UNSET, id=jid1)\n\n        # Bring sister mom up\n        self.momB.start()\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1, interval=1)\n\n        # Verify that the value of foo_i for job1 is set back\n        self.server.expect(JOB, {'resources_used.foo_i': '3'},\n                           offset=10, id=jid1, interval=1)\n\n    def test_job_rerun(self):\n        \"\"\"\n        Test that resources accumulate only once when a job\n        is rerun\n        \"\"\"\n\n        hook_body = \"\"\"\nimport pbs\ne = pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"executed periodic hook\")\n\nfor jj in e.job_list.keys():\n    e.job_list[jj].resources_used[\"foo_f\"] = 1.01\n    e.job_list[jj].resources_used[\"cput\"] = 
10\n\"\"\"\n\n        a = {'event': \"exechost_periodic\", 'enabled': 'True', 'freq': 10}\n        self.server.create_import_hook(\n            \"period\",\n            a,\n            hook_body,\n            overwrite=True)\n\n        a = {'Resource_List.select': '3:ncpus=1',\n             'Resource_List.place': 'scatter'}\n        j = Job(TEST_USER)\n        j.set_attributes(a)\n        jid1 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': \"R\", \"substate\": 42}, id=jid1)\n\n        # Wait approximately 10s for the hook to get executed\n        # Verify the resources_used.foo_f\n        a = {'resources_used.foo_f': '3.03',\n             'resources_used.cput': 30}\n        self.server.expect(JOB, a,\n                           offset=10, id=jid1, attrop=PTL_AND, interval=1)\n\n        # Rerun the job\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n        self.server.rerunjob(jobid=jid1, runas=ROOT_USER)\n        self.server.expect(JOB,\n                           {'job_state': 'Q'}, id=jid1)\n\n        # Verify that foo_f is unset\n        self.server.expect(JOB,\n                           'resources_used.foo_f',\n                           op=UNSET, id=jid1)\n\n        # Turn scheduling back on\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n        self.server.expect(JOB, {'job_state': \"R\", \"substate\": 42},\n                           attrop=PTL_AND, id=jid1)\n\n        # Validate that resources_used.foo_f is reset\n        self.server.expect(JOB, a,\n                           offset=10, id=jid1, attrop=PTL_AND, interval=1)\n\n    def test_job_array(self):\n        \"\"\"\n        Test that resource accumulation for subjobs also works\n        \"\"\"\n\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"executed epilogue hook\")\nif e.job.in_ms_mom():\n    e.job.resources_used[\"vmem\"] = 
pbs.size(\"9gb\")\n    e.job.resources_used[\"foo_i\"] = 9\n    e.job.resources_used[\"foo_f\"] = 0.09\n    e.job.resources_used[\"foo_str\"] = '{\"seven\":7}'\n    e.job.resources_used[\"cput\"] = 10\n    e.job.resources_used[\"stra\"] = '\"glad,elated\",\"happy\"'\nelse:\n    e.job.resources_used[\"vmem\"] = pbs.size(\"10gb\")\n    e.job.resources_used[\"foo_i\"] = 10\n    e.job.resources_used[\"foo_f\"] = 0.10\n    e.job.resources_used[\"foo_str\"] = '{\"eight\":8,\"nine\":9}'\n    e.job.resources_used[\"cput\"] = 20\n    e.job.resources_used[\"stra\"] = '\"cucumbers,bananas\"'\n\"\"\"\n\n        a = {'event': \"execjob_epilogue\", 'enabled': 'True'}\n        self.server.create_import_hook(\n            \"test\",\n            a,\n            hook_body,\n            overwrite=True)\n\n        a = {'Resource_List.select': '3:ncpus=1',\n             'Resource_List.walltime': 10,\n             'Resource_List.place': 'scatter'}\n        j = Job(TEST_USER, attrs={ATTR_J: '1-2'})\n        j.set_attributes(a)\n        j.set_sleep_time(\"5\")\n        jid = self.server.submit(j)\n\n        # Verify that once the subjobs are over, values are\n        # set for each subjob in the accounting logs\n        subjob1 = str.replace(jid, '[]', '[1]')\n\n        acctlog_match = 'resources_used.foo_f=0.29'\n        # The code below is commented out due to a PTL issue\n        # s = self.server.accounting_match(\n        #    \"E;%s;.*%s.*\" % (subjob1, acctlog_match), regexp=True, n=100)\n        # self.assertTrue(s)\n\n        acctlog_match = 'resources_used.foo_i=29'\n        # s = self.server.accounting_match(\n        #    \"E;%s;.*%s.*\" % (subjob1, acctlog_match), regexp=True, n=100)\n        # self.assertTrue(s)\n\n        foo_str_dict_in = {\"eight\": 8, \"seven\": 7, \"nine\": 9}\n        acctlog_match = \"resources_used.foo_str='%s'\" % (foo_str_dict_in,)\n        # s = self.server.accounting_match(\n        #    \"E;%s;.*%s.*\" % (subjob1, acctlog_match), regexp=True, n=100)\n        
# self.assertTrue(s)\n\n        acctlog_match = 'resources_used.vmem=29gb'\n        # s = self.server.accounting_match(\n        #    \"E;%s;.*%s.*\" % (subjob1, acctlog_match), regexp=True, n=100)\n        # self.assertTrue(s)\n\n        acctlog_match = 'resources_used.cput=00:00:50'\n        # s = self.server.accounting_match(\n        #    \"E;%s;.*%s.*\" % (subjob1, acctlog_match), regexp=True, n=100)\n        # self.assertTrue(s)\n\n        acctlog_match = r'resources_used.stra=\\\"glad\\,elated\\\"\\,\\\"happy\\\"'\n        # s = self.server.accounting_match(\n        #    \"E;%s;.*%s.*\" % (subjob1, acctlog_match), regexp=True, n=100)\n        # self.assertTrue(s)\n\n    def test_epi_pro(self):\n        \"\"\"\n        Test epilogue and prologue hooks changing the same\n        and different resources. The value of a resource set by\n        both hooks is overwritten by the last hook to run.\n        On cpuset systems don't check for cput because the pbs_cgroups hook\n        will be enabled and will overwrite the cput value set in the prologue\n        hook\n        \"\"\"\n        has_cpuset = False\n        for mom in self.moms.values():\n            if mom.is_cpuset_mom():\n                has_cpuset = True\n\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"In prologue hook\")\ne.job.resources_used[\"foo_i\"] = 10\ne.job.resources_used[\"foo_f\"] = 0.10\n\"\"\"\n\n        a = {'event': \"execjob_prologue\", 'enabled': 'True'}\n        self.server.create_import_hook(\n            \"pro\", a, hook_body,\n            overwrite=True)\n\n        # Verify the copy message in the logs to avoid\n        # race conditions\n        momlist = [self.momA, self.momB, self.momC]\n        for m in momlist:\n            m.log_match(\"pro.PY;copy hook-related file\")\n\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"In epilogue hook\")\ne.job.resources_used[\"foo_f\"] = 0.20\ne.job.resources_used[\"cput\"] = 10\n\"\"\"\n\n  
      a = {'event': \"execjob_epilogue\", 'enabled': 'True'}\n        self.server.create_import_hook(\n            \"epi\", a, hook_body,\n            overwrite=True)\n\n        # Verify the copy message in the logs to avoid\n        # race conditions\n        for m in momlist:\n            m.log_match(\"epi.PY;copy hook-related file\")\n\n        a = {'Resource_List.select': '3:ncpus=1',\n             'Resource_List.place': 'scatter'}\n        j = Job(TEST_USER)\n        j.set_attributes(a)\n        j.create_script(\n            \"pbsdsh -n 1 hostname\\n\" +\n            \"pbsdsh -n 2 hostname\\n\" +\n            \"sleep 5\\n\")\n        jid = self.server.submit(j)\n\n        # Verify the resources_used once the job is over\n        b = {\n            'resources_used.foo_i': '30',\n            'resources_used.foo_f': '0.6',\n            'job_state': 'F'}\n\n        if not has_cpuset:\n            b['resources_used.cput'] = '30'\n        self.server.expect(JOB, b, extend='x', id=jid, offset=5, interval=1)\n\n        # Submit another job\n        j1 = Job(TEST_USER)\n        j1.set_attributes(a)\n        j1.create_script(\n            \"pbsdsh -n 1 hostname\\n\" +\n            \"pbsdsh -n 2 hostname\\n\" +\n            \"sleep 300\\n\")\n        jid1 = self.server.submit(j1)\n\n        # Verify that prologue hook has set the values\n        self.server.expect(JOB, {\n            'job_state': 'R',\n            'resources_used.foo_i': '30',\n            'resources_used.foo_f': '0.3'}, attrop=PTL_AND,\n            id=jid1, interval=2)\n\n        # Force delete the job\n        self.server.deljob(id=jid1, wait=True, attr_W=\"force\")\n\n        # Verify values are accumulated by prologue hook only\n        self.server.expect(JOB, {\n            'resources_used.foo_i': '30',\n            'resources_used.foo_f': '0.3'}, attrop=PTL_AND,\n            extend='x', id=jid1)\n\n    def test_server_restart2(self):\n        \"\"\"\n        Test that server restart during hook 
execution\n        has no impact\n        \"\"\"\n\n        hook_body = \"\"\"\nimport pbs\nimport time\n\ne = pbs.event()\n\npbs.logmsg(pbs.LOG_DEBUG, \"executed epilogue hook\")\nif e.job.in_ms_mom():\n    e.job.resources_used[\"vmem\"] = pbs.size(\"9gb\")\n    e.job.resources_used[\"foo_i\"] = 9\n    e.job.resources_used[\"foo_f\"] = 0.09\n    e.job.resources_used[\"foo_str\"] = '{\"seven\":7}'\n    e.job.resources_used[\"cput\"] = 10\nelse:\n    e.job.resources_used[\"vmem\"] = pbs.size(\"10gb\")\n    e.job.resources_used[\"foo_i\"] = 10\n    e.job.resources_used[\"foo_f\"] = 0.10\n    e.job.resources_used[\"foo_str\"] = '{\"eight\":8,\"nine\":9}'\n    e.job.resources_used[\"cput\"] = 20\n\ntime.sleep(15)\n\"\"\"\n\n        a = {'event': \"execjob_epilogue\", 'enabled': 'True'}\n        self.server.create_import_hook(\n            \"epi\", a, hook_body, overwrite=True)\n\n        # Submit a job\n        a = {'Resource_List.select': '3:ncpus=1',\n             'Resource_List.walltime': 10,\n             'Resource_List.place': \"scatter\",\n             'Keep_Files': 'oe'}\n        j = Job(TEST_USER)\n        j.set_attributes(a)\n        j.set_sleep_time(\"5\")\n        jid = self.server.submit(j)\n\n        # Verify the resource values\n        a = {'resources_used.foo_i': 29,\n             'resources_used.foo_f': 0.29}\n        a_dict = {'eight': 8, 'seven': 7, 'nine': 9}\n\n        self.server.expect(JOB, a, extend='x', attrop=PTL_AND,\n                           offset=5, id=jid, interval=1)\n        # check for dictionary resource\n        job_status = self.server.status(JOB, id=jid, extend='x')\n        job_str_resource = dict(job_status[0])['resources_used.foo_str']\n        job_str_resource = ast.literal_eval(ast.literal_eval(job_str_resource))\n        self.assertEqual(job_str_resource, a_dict)\n\n        # Restart server while hook is still executing\n        self.server.restart()\n\n        # Verify the values 
again\n        self.server.expect(JOB, a, extend='x', attrop=PTL_AND,\n                           id=jid)\n        # check for dictionary resource\n        job_status = self.server.status(JOB, id=jid, extend='x')\n        job_str_resource = dict(job_status[0])['resources_used.foo_str']\n        job_str_resource = ast.literal_eval(ast.literal_eval(job_str_resource))\n        self.assertEqual(job_str_resource, a_dict)\n\n    def test_mom_down2(self):\n        \"\"\"\n        Test that when mom is down values are still\n        accumulated for resources\n        \"\"\"\n\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"executed epilogue hook\")\nif e.job.in_ms_mom():\n    e.job.resources_used[\"vmem\"] = pbs.size(\"9gb\")\n    e.job.resources_used[\"foo_i\"] = 9\n    e.job.resources_used[\"foo_f\"] = 0.09\n    e.job.resources_used[\"foo_str\"] = '{\"seven\":7}'\n    e.job.resources_used[\"cput\"] = 10\n    e.job.resources_used[\"stra\"] = '\"glad,elated\",\"happy\"'\nelse:\n    e.job.resources_used[\"vmem\"] = pbs.size(\"10gb\")\n    e.job.resources_used[\"foo_i\"] = 10\n    e.job.resources_used[\"foo_f\"] = 0.10\n    e.job.resources_used[\"foo_str\"] = '{\"eight\":8,\"nine\":9}'\n    e.job.resources_used[\"cput\"] = 20\n    e.job.resources_used[\"stra\"] = '\"cucumbers,bananas\"'\n\"\"\"\n\n        a = {'event': \"execjob_epilogue\",\n             'enabled': 'True'}\n        self.server.create_import_hook(\n            \"epi\", a, hook_body,\n            overwrite=True)\n\n        # Submit a job\n        a = {'Resource_List.select': '3:ncpus=1',\n             'Resource_List.walltime': 40,\n             'Resource_List.place': \"scatter\"}\n        j = Job(TEST_USER)\n        j.set_attributes(a)\n        jid = self.server.submit(j)\n\n        # Verify job is running\n        self.server.expect(JOB,\n                           {'job_state': \"R\"}, id=jid)\n\n        # Bring sister mom down\n        self.momB.stop()\n\n        # Wait 
for job to end\n        # Validate that the values are being set\n        # with 2 moms only\n        self.server.expect(JOB,\n                           {'job_state': 'F',\n                            'resources_used.foo_i': '19',\n                            'resources_used.foo_f': '0.19'},\n                           offset=10, id=jid, interval=1, extend='x',\n                           attrop=PTL_AND)\n        a_dict = {'eight': 8, 'nine': 9, 'seven': 7}\n\n        # check for dictionary resource\n        job_status = self.server.status(JOB, id=jid, extend='x')\n        job_str_resource = dict(job_status[0])['resources_used.foo_str']\n        job_str_resource = ast.literal_eval(ast.literal_eval(job_str_resource))\n        self.assertEqual(job_str_resource, a_dict)\n\n        # Bring the mom back up\n        self.momB.start()\n\n    def test_finished_walltime(self):\n        \"\"\"\n        If used resources are modified from hook, this test makes sure\n        that mem used resources are merged and once the job ends,\n        the walltime is not zero.\n        \"\"\"\n        hook_body = \"\"\"\nimport pbs\ne = pbs.event()\nif e.type == pbs.EXECHOST_PERIODIC:\n    for jobid in e.job_list:\n        e.job_list[jobid].resources_used[\"mem\"] = pbs.size('1024kb')\nelse:\n    e.job.resources_used[\"mem\"] = pbs.size('1024kb')\n\"\"\"\n        hook_name = \"multinode_used\"\n        attr = {'event': 'exechost_periodic,execjob_epilogue,execjob_end',\n                'freq': '3',\n                'enabled': 'True'}\n        rv = self.server.create_import_hook(hook_name, attr, hook_body)\n        self.assertTrue(rv)\n\n        sleeptime = 30\n        a = {'Resource_List.select': '3:ncpus=1',\n             'Resource_List.walltime': sleeptime,\n             'Resource_List.place': \"scatter\"}\n        j = Job(TEST_USER)\n        j.set_attributes(a)\n        j.set_sleep_time(f\"{sleeptime}\")\n        jid = self.server.submit(j)\n\n        self.server.expect(JOB, {\n       
     'job_state': 'R',\n            'resources_used.mem': '3072kb'},\n            attrop=PTL_AND, offset=sleeptime/2, id=jid)\n\n        self.server.expect(JOB, {\n            'job_state': 'F',\n            'resources_used.mem': '3072kb',\n            'resources_used.walltime': sleeptime}, op=GE,\n            extend='x', offset=sleeptime/2, attrop=PTL_AND, id=jid)\n"
  },
  {
    "path": "test/tests/functional/pbs_acl_groups.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass Test_acl_groups(TestFunctional):\n    \"\"\"\n    Test to check acl_groups and acl_resv_groups considers secondary group\n    \"\"\"\n\n    def test_acl_grp_queue(self):\n        \"\"\"\n        Set acl_groups on a queue and submit a job with a user\n        for whom the set group is a secondary group\n        \"\"\"\n        a = {'queue_type': 'execution', 'started': 't', 'enabled': 't',\n             'acl_group_enable': 't', 'acl_groups': TSTGRP1}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='workq2')\n        a = {'queue': 'workq2'}\n        j = Job(TEST_USER1, attrs=a)\n        # If 'Unauthorized Request' is found in error message the test would\n        # fail as user was not able to submit job as a secondary group member\n        try:\n            jid = self.server.submit(j)\n        except PbsSubmitError as e:\n            self.assertFalse('Unauthorized Request' in e.msg[0])\n\n    def test_acl_resv_groups(self):\n        \"\"\"\n        Set acl_resv_groups on server and submit a reservation\n        from a user for whom the set group is a secondary group\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER, {\n                            'acl_resv_group_enable': 'true'})\n        self.server.manager(MGR_CMD_SET, SERVER, {'acl_resv_groups': TSTGRP1})\n        # If 'Requestor's group 
not authorized' is found in error message the\n        # test would fail as user was not able to submit reservation\n        # as a secondary group member\n        try:\n            r = Reservation(TEST_USER1)\n            rstart = int(time.time()) + 10\n            rend = int(time.time()) + 360\n            a = {'reserve_start': rstart,\n                 'reserve_end': rend}\n            r.set_attributes(a)\n            rid = self.server.submit(r)\n        except PbsSubmitError as e:\n            self.assertFalse(\n                'Requestor\\'s group not authorized' in e.msg[0])\n"
  },
  {
    "path": "test/tests/functional/pbs_acl_host_moms.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass Test_acl_host_moms(TestFunctional):\n    \"\"\"\n    This test suite is for testing the server attribute acl_host_moms_enable\n    and this test requires two moms.\n    \"\"\"\n\n    def setUp(self):\n        \"\"\"\n        Determine the remote host and set acl_host_enable = True\n        \"\"\"\n\n        TestFunctional.setUp(self)\n\n        usage_string = 'test requires a MoM and a client as input, ' + \\\n                       ' use -p moms=<mom>,client=<client>'\n\n        # PBSTestSuite returns the moms passed in as parameters as dictionary\n        # of hostname and MoM object\n        self.momA = self.moms.values()[0]\n        self.momA.delete_vnode_defs()\n\n        self.hostA = self.momA.shortname\n        if not self.du.is_localhost(self.server.client):\n            # acl_hosts expects FQDN\n            self.hostB = socket.getfqdn(self.server.client)\n        else:\n            self.skip_test(usage_string)\n\n        self.remote_host = None\n\n        if not self.du.is_localhost(self.hostA):\n            self.remote_host = self.hostA\n        else:\n            self.skip_test(usage_string)\n\n        self.assertTrue(self.remote_host)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {\n                            'acl_hosts': self.hostB})\n        self.server.manager(MGR_CMD_SET, SERVER, 
{'acl_host_enable': True})\n\n        self.pbsnodes_cmd = os.path.join(self.server.pbs_conf[\n            'PBS_EXEC'], 'bin', 'pbsnodes') + ' -av'\n\n        self.qstat_cmd = os.path.join(self.server.pbs_conf[\n            'PBS_EXEC'], 'bin', 'qstat')\n\n    def test_acl_host_moms_enable(self):\n        \"\"\"\n        Set acl_host_moms_enable = True and check whether or not the remote\n        host is able to run pbsnodes and qstat.\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, SERVER, {\n                            'acl_host_moms_enable': True})\n        ret = self.du.run_cmd(self.remote_host, cmd=self.pbsnodes_cmd)\n        self.assertEqual(ret['rc'], 0)\n\n        ret = self.du.run_cmd(self.remote_host, cmd=self.qstat_cmd)\n        self.assertEqual(ret['rc'], 0)\n\n    def test_acl_host_moms_disable(self):\n        \"\"\"\n        Set acl_host_moms_enable = False and check whether or not the remote\n        host is forbidden to run pbsnodes and qstat.\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER, {\n                            'acl_host_moms_enable': False})\n\n        ret = self.du.run_cmd(self.remote_host, cmd=self.pbsnodes_cmd)\n        self.assertNotEqual(ret['rc'], 0)\n\n        ret = self.du.run_cmd(self.remote_host, cmd=self.qstat_cmd)\n        self.assertNotEqual(ret['rc'], 0)\n\n    def test_acl_host_moms_hooks_and_jobs(self):\n        \"\"\"\n        Use hooks to test whether remote host is able to run pbs.server()\n        and check whether the job that is submitted goes to the 'R' state.\n        \"\"\"\n        hook_name = \"hook_acl_host_moms_t\"\n        hook_body = \"\"\"\nimport pbs\ne = pbs.event()\nsvr = pbs.server().server_state\ne.accept()\n\"\"\"\n        try:\n            self.server.manager(MGR_CMD_DELETE, HOOK, None, hook_name)\n        except Exception:\n            pass\n\n        a = {'event': 'execjob_begin', 'enabled': 'True'}\n        self.server.create_import_hook(\n            hook_name, a, 
hook_body, overwrite=True)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {\n                            'acl_host_moms_enable': False})\n\n        j = Job()\n        j.set_sleep_time(10)\n        jid = self.server.submit(j)\n\n        self.server.expect(JOB, {'job_state': 'H'}, id=jid)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {\n                            'acl_host_moms_enable': True})\n        j = Job()\n        j.set_sleep_time(10)\n        jid = self.server.submit(j)\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n    def test_acl_host_mom_queue_access(self):\n        \"\"\"\n        Test that remote host cannot submit jobs to queue where\n        acl_host_enable is True and acl_host_moms_enable is set\n        on server, but remote host is not added in acl_hosts.\n        \"\"\"\n        queue_n = 'tempq'\n        queue_params = {'queue_type': 'Execution', 'enabled': 'True',\n                        'started': 'True', 'acl_host_enable': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, queue_params, id='tempq')\n\n        self.server.manager(MGR_CMD_SET, SERVER, {\n                            'acl_host_moms_enable': True})\n        # Setting acl_host_enable on queue overrides acl_host_moms_enable\n        # on server and requires acl_hosts to include remote host's name.\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'flatuid': True})\n        # Setting flatuid lets us submit jobs on server as a remote\n        # host without creating a separate user account there.\n        qsub_cmd_on_queue = os.path.join(self.server.pbs_conf[\n            'PBS_EXEC'], 'bin',\n            'qsub') + ' -q ' + queue_n + ' -- /bin/sleep 10'\n\n        j = Job(attrs={ATTR_queue: queue_n})\n        j.set_sleep_time(10)\n        cannot_submit = 0\n        try:\n            jid = self.server.submit(j)\n        except PbsSubmitError:\n            cannot_submit = 1\n        self.assertEqual(cannot_submit, 1)\n"
  },
  {
    "path": "test/tests/functional/pbs_acl_host_queue.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass Test_acl_host_queue(TestFunctional):\n    \"\"\"\n    This test suite is for testing the queue attributes acl_host_enable\n    and acl_hosts.\n    \"\"\"\n\n    def test_acl_host_enable_refuse(self):\n        \"\"\"\n        Set acl_host_enable = True on queue and check whether or not\n        the submit is refused.\n        \"\"\"\n        a = {\"acl_host_enable\": True,\n             \"acl_hosts\": \"foo\"}\n        self.server.manager(MGR_CMD_SET, QUEUE, a,\n                            self.server.default_queue)\n\n        j = Job(TEST_USER)\n        try:\n            self.server.submit(j)\n        except PbsSubmitError as e:\n            error_msg = \"qsub: Access from host not allowed, or unknown host\"\n            self.assertEqual(e.msg[0], error_msg)\n        else:\n            self.fail(\"Queue is violating acl_hosts\")\n\n    def test_acl_host_enable_allow(self):\n        \"\"\"\n        Set acl_host_enable = True along with acl_hosts and check\n        whether or not a job can be submitted.\n        \"\"\"\n        a = {\"acl_host_enable\": True,\n             \"acl_hosts\": self.server.hostname}\n        self.server.manager(MGR_CMD_SET, QUEUE, a,\n                            self.server.default_queue)\n\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.logger.info('Job 
submitted successfully: ' + jid)\n"
  },
  {
    "path": "test/tests/functional/pbs_acl_host_server.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass Test_acl_host_server(TestFunctional):\n    \"\"\"\n    This test suite is for testing the subnets in server's\n    attribute acl_hosts. 
This test requires a remote client.\n    \"\"\"\n\n    def setUp(self):\n        \"\"\"\n        Determine the server ip and remote host\n        \"\"\"\n\n        TestFunctional.setUp(self)\n\n        usage_string = 'test requires a remote client as input,' + \\\n                       ' use -p client=<client>'\n\n        self.serverip = socket.gethostbyname(self.server.hostname)\n\n        if not self.du.is_localhost(self.server.client):\n            self.remote_host = socket.getfqdn(self.server.client)\n        else:\n            self.skip_test(usage_string)\n\n        self.assertTrue(self.remote_host)\n\n        self.pbsnodes_cmd = os.path.join(self.server.pbs_conf[\n            'PBS_EXEC'], 'bin', 'pbsnodes') + ' -av' \\\n            + ' -s ' + self.server.hostname\n\n    def test_acl_subnet_enable_allow(self):\n        \"\"\"\n        Set acl_host_enable = True, subnet to server ip with the mask\n        255.255.0.0 or 16 and check whether or not the remote host\n        is able to run pbsnodes. It should allow.\n        \"\"\"\n\n        a = {\"acl_host_enable\": True,\n             \"acl_hosts\": self.serverip + \"/255.255.0.0\"}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        ret = self.du.run_cmd(self.remote_host, cmd=self.pbsnodes_cmd)\n        self.assertEqual(ret['rc'], 0)\n\n        a = {\"acl_host_enable\": True,\n             \"acl_hosts\": self.serverip + \"/16\"}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        ret = self.du.run_cmd(self.remote_host, cmd=self.pbsnodes_cmd)\n        self.assertEqual(ret['rc'], 0)\n\n    def test_acl_subnet_enable_refuse(self):\n        \"\"\"\n        Set acl_host_enable = True, subnet to server ip with the mask\n        255.255.255.255 or 32 and check whether or not the remote host\n        is able to run pbsnodes. 
It should refuse.\n        \"\"\"\n\n        a = {\"acl_host_enable\": True,\n             \"acl_hosts\": self.serverip + \"/255.255.255.255\"}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        ret = self.du.run_cmd(self.remote_host, cmd=self.pbsnodes_cmd)\n        self.assertNotEqual(ret['rc'], 0)\n\n        a = {\"acl_host_enable\": True,\n             \"acl_hosts\": self.serverip + \"/32\"}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        ret = self.du.run_cmd(self.remote_host, cmd=self.pbsnodes_cmd)\n        self.assertNotEqual(ret['rc'], 0)\n\n    def tearDown(self):\n        \"\"\"\n        Unset the acl attributes so tearDown can process on remote host.\n        \"\"\"\n\n        a = [\"acl_host_enable\", \"acl_hosts\"]\n        self.server.manager(MGR_CMD_UNSET, SERVER, a)\n"
  },
  {
    "path": "test/tests/functional/pbs_admin_suspend.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport time\nfrom tests.functional import *\n\n\nclass TestAdminSuspend(TestFunctional):\n\n    \"\"\"\n    Test the admin-suspend/admin-resume feature for node maintenance\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        a = {'resources_available.ncpus': 4, 'resources_available.mem': '4gb'}\n        self.mom.create_vnodes(a, 1)\n\n    def test_basic(self):\n        \"\"\"\n        Test basic admin-suspend functionality\n        \"\"\"\n        j1 = Job(TEST_USER)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid1)\n\n        j2 = Job(TEST_USER)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid2)\n        vnode = self.mom.shortname + '[0]'\n        # admin-suspend job 1.\n        self.server.sigjob(jid1, 'admin-suspend', runas=ROOT_USER)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid1)\n        self.server.expect(NODE, {'state': 'maintenance'}, id=vnode)\n        self.server.expect(NODE, {'maintenance_jobs': jid1})\n\n        # admin-suspend job 2\n        self.server.sigjob(jid2, 'admin-suspend', runas=ROOT_USER)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid2)\n        self.server.expect(NODE, {'state': 'maintenance'}, id=vnode)\n        self.server.expect(NODE, 
{'maintenance_jobs': jid1 + \",\" + jid2})\n\n        # admin-resume job 1.  Make sure the node is still in state maintenance\n        self.server.sigjob(jid1, 'admin-resume', runas=ROOT_USER)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(NODE, {'state': 'maintenance'}, id=vnode)\n        self.server.expect(NODE, {'maintenance_jobs': jid2})\n\n        # admin-resume job 2.  Make sure the node returns to state free\n        self.server.sigjob(jid2, 'admin-resume', runas=ROOT_USER)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.server.expect(NODE, {'state': 'free'}, id=vnode)\n\n    def test_basic_ja(self):\n        \"\"\"\n        Test basic admin-suspend functionality for job arrays\n        \"\"\"\n        jA = Job(TEST_USER)\n        jA.set_attributes({'Resource_List.select': '1:ncpus=1', ATTR_J: '1-2'})\n        jidA = self.server.submit(jA)\n        self.server.expect(JOB, {'job_state': 'B'}, id=jidA)\n\n        subjobs = self.server.status(JOB, id=jidA, extend='t')\n        # subjobs[0] is the array itself.  
Need the subjobs\n        jid1 = subjobs[1]['id']\n        jid2 = subjobs[2]['id']\n\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid2)\n        vnode = self.mom.shortname + '[0]'\n\n        # admin-suspend job 1.\n        self.server.sigjob(jid1, 'admin-suspend', runas=ROOT_USER)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid1)\n        self.server.expect(NODE, {'state': 'maintenance'}, id=vnode)\n        self.server.expect(NODE, {'maintenance_jobs': jid1})\n\n        # admin-suspend job 2\n        self.server.sigjob(jid2, 'admin-suspend', runas=ROOT_USER)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid2)\n        self.server.expect(NODE, {'state': 'maintenance'}, id=vnode)\n        self.server.expect(NODE, {'maintenance_jobs': jid1 + \",\" + jid2})\n\n        # admin-resume job 1.  Make sure the node is still in state maintenance\n        self.server.sigjob(jid1, 'admin-resume', runas=ROOT_USER)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(NODE, {'state': 'maintenance'}, id=vnode)\n        self.server.expect(NODE, {'maintenance_jobs': jid2})\n\n        # admin-resume job 2.  
Make sure the node returns to state free\n        self.server.sigjob(jid2, 'admin-resume', runas=ROOT_USER)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.server.expect(NODE, {'state': 'free'}, id=vnode)\n\n    def test_basic_restart(self):\n        \"\"\"\n        Test basic admin-suspend functionality with server restart\n        The restart will test if the node recovers properly in maintenance\n        \"\"\"\n        j1 = Job(TEST_USER)\n        jid = self.server.submit(j1)\n        self.server.expect(\n            JOB, {'job_state': 'R', 'substate': 42}, attrop=PTL_AND, id=jid)\n        vnode = self.mom.shortname + '[0]'\n        # admin-suspend job\n        self.server.sigjob(jid, 'admin-suspend', runas=ROOT_USER)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid)\n        self.server.expect(NODE, {'state': 'maintenance'}, id=vnode)\n        self.server.expect(NODE, {'maintenance_jobs': jid})\n\n        self.server.restart()\n\n        self.server.expect(NODE, {'state': 'maintenance'}, id=vnode)\n        self.server.expect(NODE, {'maintenance_jobs': jid})\n\n        # Checking licenses to avoid failure at resume since PBS licenses\n        # might not be available and as a result resume fails\n        rv = self.is_server_licensed(self.server)\n        _msg = 'No license found on server %s' % (self.server.shortname)\n        self.assertTrue(rv, _msg)\n\n        # admin-resume job\n        self.server.sigjob(jid, 'admin-resume', runas=ROOT_USER)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.expect(NODE, {'state': 'free'}, id=vnode)\n\n    def test_cmd_perm(self):\n        \"\"\"\n        Test permissions on admin-suspend, admin-resume, maintenance_jobs\n        and the maintenance node state.\n        \"\"\"\n        vnode = self.mom.shortname + '[0]'\n        # Test to make sure we can't set the maintenance node state\n        try:\n            self.server.manager(\n                
MGR_CMD_SET, NODE,\n                {'state': 'maintenance'}, id=vnode, runas=ROOT_USER)\n        except PbsManagerError as e:\n            self.assertTrue('Illegal value for node state' in e.msg[0])\n\n        self.server.expect(NODE, {'state': 'free'}, id=vnode)\n\n        # Test to make sure we can't set the 'maintenance_jobs' attribute\n        try:\n            self.server.manager(\n                MGR_CMD_SET, NODE,\n                {'maintenance_jobs': 'foo'}, id=vnode, runas=ROOT_USER)\n        except PbsManagerError as e:\n            self.assertTrue(\n                'Cannot set attribute, read only or insufficient permission'\n                in e.msg[0])\n\n        self.server.expect(NODE, 'maintenance_jobs', op=UNSET, id=vnode)\n\n        # Test to make sure regular users can't admin-suspend jobs\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(\n            JOB, {'job_state': 'R', 'substate': 42}, attrop=PTL_AND, id=jid)\n\n        try:\n            self.server.sigjob(jid, 'admin-suspend', runas=TEST_USER)\n        except PbsSignalError as e:\n            self.assertTrue('Unauthorized Request' in e.msg[0])\n\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid)\n\n        # Test to make sure regular users can't admin-resume jobs\n        self.server.sigjob(jid, 'admin-suspend', runas=ROOT_USER)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid)\n\n        try:\n            self.server.sigjob(jid, 'admin-resume', runas=TEST_USER)\n        except PbsSignalError as e:\n            self.assertTrue('Unauthorized Request' in e.msg[0])\n\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid)\n\n    def test_wrong_state1(self):\n        \"\"\"\n        Test using wrong resume signal is correctly rejected\n        \"\"\"\n\n        j1 = Job(TEST_USER)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid1)\n\n        
self.server.sigjob(jid1, \"suspend\", runas=ROOT_USER)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid1)\n\n        try:\n            self.server.sigjob(jid1, \"admin-resume\", runas=ROOT_USER)\n        except PbsSignalError as e:\n            self.assertTrue(\n                'Job can not be resumed with the requested resume signal'\n                in e.msg[0])\n\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid1)\n\n    def test_wrong_state2(self):\n        \"\"\"\n        Test using wrong resume signal is correctly rejected\n        \"\"\"\n\n        j1 = Job(TEST_USER)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid1)\n\n        self.server.sigjob(jid1, \"admin-suspend\", runas=ROOT_USER)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid1)\n        self.server.expect(JOB, {'substate': 43}, id=jid1)\n\n        try:\n            self.server.sigjob(jid1, \"resume\", runas=ROOT_USER)\n        except PbsSignalError as e:\n            self.assertTrue(\n                'Job can not be resumed with the requested resume signal'\n                in e.msg[0])\n\n        # If resume had worked, the job would be in substate 45\n        self.server.expect(JOB, {'substate': 43}, id=jid1)\n\n    def test_deljob(self):\n        \"\"\"\n        Test whether a node leaves the maintenance state when\n        an admin-suspended job is deleted\n        \"\"\"\n\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid)\n        vnode = self.mom.shortname + '[0]'\n\n        self.server.sigjob(jid, 'admin-suspend', runas=ROOT_USER)\n        self.server.expect(NODE, {'state': 'maintenance'}, id=vnode)\n\n        self.server.deljob(jid, wait=True)\n        self.server.expect(NODE, {'state': 'free'}, id=vnode)\n\n    def test_deljob_force(self):\n        \"\"\"\n        Test whether a node leaves the 
maintenance state when\n        an admin-suspended job is deleted with -Wforce\n        \"\"\"\n\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid)\n        vnode = self.mom.shortname + '[0]'\n\n        self.server.sigjob(jid, 'admin-suspend', runas=ROOT_USER)\n        self.server.expect(NODE, {'state': 'maintenance'}, id=vnode)\n\n        self.server.deljob(jid, extend='force', wait=True)\n        self.server.expect(NODE, {'state': 'free'}, id=vnode)\n\n    def test_rerunjob(self):\n        \"\"\"\n        Test whether a node leaves the maintenance state when\n        an admin-suspended job is requeued\n        \"\"\"\n\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid)\n        vnode = self.mom.shortname + '[0]'\n\n        self.server.sigjob(jid, 'admin-suspend', runas=ROOT_USER)\n        self.server.expect(NODE, {'state': 'maintenance'}, id=vnode)\n\n        self.server.rerunjob(jid, extend='force')\n        # Job eventually goes to R state after being requeued for short time\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.expect(NODE, {'state': 'free'}, id=vnode)\n\n    def test_multivnode(self):\n        \"\"\"\n        Submit a job to multiple vnodes.  
Send an admin-suspend signal\n        and see all nodes go into maintenance\n        \"\"\"\n        a = {'resources_available.ncpus': 4, 'resources_available.mem': '4gb'}\n        self.mom.create_vnodes(a, 3, usenatvnode=True)\n\n        j = Job(TEST_USER)\n        j.set_attributes({'Resource_List.select': '3:ncpus=1',\n                          'Resource_List.place': 'vscatter'})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid)\n\n        self.server.sigjob(jid, 'admin-suspend', runas=ROOT_USER)\n        self.server.expect(NODE, {'state=maintenance': 3})\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid)\n\n        self.server.sigjob(jid, 'admin-resume', runas=ROOT_USER)\n        self.server.expect(NODE, {'state=free': 3})\n\n    def test_multivnode2(self):\n        \"\"\"\n        Submit a job to multiple vnodes.  Send an admin-suspend signal\n        and see all nodes go into maintenance\n        Submit a single node job to one of the nodes.  
Resume the multinode\n        job and see the single node job's node still in maintenance\n        \"\"\"\n        a = {'resources_available.ncpus': 4, 'resources_available.mem': '4gb'}\n        self.mom.create_vnodes(a, 3, usenatvnode=True)\n\n        # Submit multinode job 1\n        j1 = Job(TEST_USER)\n        j1.set_attributes({'Resource_List.select': '3:ncpus=1',\n                           'Resource_List.place': 'vscatter'})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid1)\n\n        vnode = self.mom.shortname + '[0]'\n        # Submit Job 2 to specific node\n        j2 = Job(TEST_USER)\n        j2.set_attributes({'Resource_List.select': '1:ncpus=1:vnode=' + vnode})\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid2)\n\n        # admin-suspend job 1 and see all three nodes go into maintenance\n        self.server.sigjob(jid1, 'admin-suspend')\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid1)\n        self.server.expect(NODE, {'state=maintenance': 3})\n\n        # admin-suspend job 2\n        self.server.sigjob(jid2, 'admin-suspend', runas=ROOT_USER)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid2)\n\n        # admin-resume job1 and see one node stay in maintenance\n        self.server.sigjob(jid1, 'admin-resume', runas=ROOT_USER)\n        self.server.expect(NODE, {'state=free': 2})\n        self.server.expect(NODE, {'state': 'maintenance'}, id=vnode)\n\n    def test_multivnode_excl(self):\n        \"\"\"\n        Submit an excl job to multiple vnodes.  
Send an admin-suspend\n        signal and see all nodes go into maintenance\n        \"\"\"\n        a = {'resources_available.ncpus': 4, 'resources_available.mem': '4gb'}\n        self.mom.create_vnodes(a, 3, usenatvnode=True)\n\n        j = Job(TEST_USER)\n        j.set_attributes({'Resource_List.select': '3:ncpus=1',\n                          'Resource_List.place': 'vscatter:excl'})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid)\n        self.server.expect(NODE, {'state=job-exclusive': 3})\n\n        self.server.sigjob(jid, 'admin-suspend', runas=ROOT_USER)\n        self.server.expect(NODE, {'state=maintenance': 3})\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid)\n\n        self.server.sigjob(jid, 'admin-resume', runas=ROOT_USER)\n        self.server.expect(NODE, {'state=job-exclusive': 3})\n\n    def test_degraded_resv(self):\n        \"\"\"\n        Test if a reservation goes into the degraded state after its node is\n        put into maintenance\n        \"\"\"\n\n        # Submit a reservation\n        r = Reservation(TEST_USER)\n        r.set_attributes({'Resource_List.select': '1:ncpus=1',\n                          'reserve_start': time.time() + 3600,\n                          'reserve_end': time.time() + 7200})\n        rid = self.server.submit(r)\n\n        # See reservation is confirmed\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        d = self.server.expect(RESV, a, rid)\n\n        # Submit a job and see it run\n        j = Job(TEST_USER)\n        j.set_attributes({'Resource_List.select': '1:ncpus=1',\n                          'Resource_List.walltime': 120})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid)\n        vnode = self.mom.shortname + '[0]'\n        # Admin-suspend job\n        self.server.sigjob(jid, 'admin-suspend', runas=ROOT_USER)\n        self.server.expect(NODE, 
{'state': 'maintenance'}, id=vnode)\n\n        # See reservation in degraded state\n        a = {'reserve_state': (MATCH_RE, 'RESV_DEGRADED|10')}\n        d = self.server.expect(RESV, a, rid)\n\n    def test_resv_jobend(self):\n        \"\"\"\n        Test if a node goes back to free state when reservation ends and\n        admin-suspended job is killed\n        \"\"\"\n\n        # Submit a reservation\n        r = Reservation(TEST_USER)\n        r.set_attributes({'Resource_List.select': '1:ncpus=1',\n                          'reserve_start': time.time() + 30,\n                          'reserve_end': time.time() + 60})\n        rid = self.server.submit(r)\n\n        # See reservation is confirmed\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        d = self.server.expect(RESV, a, id=rid)\n\n        # Submit a job\n        j = Job(TEST_USER)\n        rque = rid.split(\".\")\n        j.set_attributes({'queue': rque[0]})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n\n        # Wait for reservation to start\n        a = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|3')}\n        d = self.server.expect(RESV, a, rid, offset=30)\n\n        # job is running as well\n        self.server.expect(\n            JOB, {'job_state': 'R', 'substate': 42},\n            id=jid, max_attempts=30)\n        vnode = self.mom.shortname + '[0]'\n        # Admin-suspend job\n        self.server.sigjob(jid, 'admin-suspend', runas=ROOT_USER)\n        self.server.expect(NODE, {'state': 'maintenance'}, id=vnode)\n\n        # Submit another job outside of reservation\n        j = Job(TEST_USER)\n        jid2 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n\n        # Wait for the reservation to end\n        # Job also gets deleted and node state goes back to free\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid, offset=120)\n        self.server.expect(NODE, {'state': 
'free'}, id=vnode)\n\n        # job2 starts running\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2, max_attempts=60)\n\n    def test_que(self):\n        \"\"\"\n        Test to check that job gets suspended on non-default queue\n        \"\"\"\n\n        # create a high priority workq2 and a routeq\n        a = {'queue_type': 'execution', 'started': 't', 'enabled': 't',\n             'priority': 150}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='workq2')\n        a = {'queue_type': 'route', 'started': 't', 'enabled': 't',\n             'route_destinations': 'workq2'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='route')\n\n        # submit a normal job\n        j = Job(TEST_USER)\n        j.set_attributes({'Resource_List.select': '1:ncpus=3'})\n        jid1 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid1)\n\n        # submit a high priority job. Make sure job1 is suspended.\n        j = Job(TEST_USER)\n        j.set_attributes(\n            {'Resource_List.select': '1:ncpus=3', 'queue': 'route'})\n        jid2 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid1)\n\n        # Above will not cause node state to go to maintenance\n        vnode = self.mom.shortname + '[0]'\n        self.server.expect(\n            NODE, {'state': (MATCH_RE, 'free|job-exclusive')}, id=vnode)\n\n        # admin suspend job2\n        self.server.sigjob(jid2, 'admin-suspend', runas=ROOT_USER)\n        self.server.expect(NODE, {'state': 'maintenance'}, id=vnode)\n        self.server.expect(JOB, {'job_state=S': 2})\n\n        # Releasing job1 will fail and not change node state\n        rv = self.server.sigjob(jid1, 'resume', runas=ROOT_USER, logerr='True')\n        self.assertFalse(rv)\n        self.server.expect(NODE, {'state': 'maintenance'}, id=vnode)\n\n        # deleting job1 will not 
change node state either\n        self.server.deljob(jid1, wait=True)\n        self.server.expect(NODE, {'state': 'maintenance'}, id=vnode)\n\n        # Admin-resume job2\n        self.server.sigjob(jid2, 'admin-resume', runas=ROOT_USER)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid2)\n        self.server.expect(NODE, {'state': 'free'}, id=vnode)\n\n        # suspend the job\n        self.server.sigjob(jid2, 'suspend', runas=ROOT_USER)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid2)\n        self.server.expect(\n            NODE, {'state': (MATCH_RE, 'free|job-exclusive')}, id=vnode)\n\n    def test_resume(self):\n        \"\"\"\n        Test node state remains in maintenance until\n        all jobs are resumed\n        \"\"\"\n\n        a = {'resources_available.ncpus': 4, 'resources_available.mem': '4gb'}\n        self.mom.create_vnodes(a, 3, usenatvnode=True)\n\n        j = Job(TEST_USER)\n        j.set_attributes({'Resource_List.select': '3:ncpus=1',\n                          'Resource_List.place': 'vscatter'})\n        jid1 = self.server.submit(j)\n        jid2 = self.server.submit(j)\n        jid3 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state=R': 3, 'substate=42': 3})\n        self.server.expect(NODE, {'state=free': 3})\n\n        # admin suspend first 2 jobs and let 3rd job run\n        # First only suspend job1 and verify that it will\n        # put all the nodes to maintenance state\n        self.server.sigjob(jid1, 'admin-suspend', runas=ROOT_USER)\n        self.server.expect(NODE, {'state=maintenance': 3})\n        self.server.sigjob(jid2, 'admin-suspend', runas=ROOT_USER)\n        self.server.expect(JOB, {'job_state=S': 2})\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n\n        # submit a new job and it will be queued\n        j = Job(TEST_USER)\n        jid4 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid4)\n\n        # List all 
maintenance_jobs\n        self.server.expect(NODE, {'maintenance_jobs': jid1 + \",\" + jid2})\n\n        # resume 1 job that will not change node state\n        self.server.sigjob(jid1, 'admin-resume', runas=ROOT_USER)\n        self.server.expect(NODE, {'state=maintenance': 3})\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid3)\n\n        # resume the remaining job\n        self.server.sigjob(jid2, 'admin-resume', runas=ROOT_USER)\n        self.server.expect(NODE, {'state=free': 3})\n        self.server.expect(JOB, {'job_state=R': 4})\n\n    def test_admin_resume_loop(self):\n        \"\"\"\n        Test that running admin-resume in a loop will have no impact on PBS\n        \"\"\"\n\n        # submit a job\n        j = Job(TEST_USER)\n        j.set_sleep_time(300)\n        jid1 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid1)\n        vnode = self.mom.shortname + '[0]'\n        # admin suspend and resume job in a loop\n        for x in range(15):\n            self.server.sigjob(jid1, 'admin-suspend', runas=ROOT_USER)\n            self.server.expect(JOB, {'job_state': 'S'}, id=jid1)\n            self.server.expect(NODE, {'state': 'maintenance'}, id=vnode)\n\n            # sleep for some time\n            time.sleep(3)\n\n            # resume the job\n            self.server.sigjob(jid1, 'admin-resume', runas=ROOT_USER)\n            self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n            self.server.expect(NODE, {'state': 'free'}, id=vnode)\n\n    def test_custom_res(self):\n        \"\"\"\n        Test that job will not run on a node in\n        maintenance state if explicitly asking\n        for a resource on that node\n        \"\"\"\n\n        # create multiple vnodes\n        a = {'resources_available.ncpus': 4, 
'resources_available.mem': '4gb'}\n        self.mom.create_vnodes(a, 3, usenatvnode=True)\n\n        # create a node level resource\n        self.server.manager(\n            MGR_CMD_CREATE, RSC, {'type': 'float', 'flag': 'nh'}, id=\"foo\",\n            runas=ROOT_USER)\n        vnode = self.mom.shortname + '[1]'\n        # set foo on vn[1]\n        self.server.manager(\n            MGR_CMD_SET, NODE, {'resources_available.foo': 5}, id=vnode,\n            runas=ROOT_USER)\n\n        # set foo in sched_config\n        self.scheduler.add_resource('foo')\n\n        # submit a few jobs\n        j = Job(TEST_USER)\n        j.set_attributes({'Resource_List.select': 'vnode=' + vnode})\n        jid1 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid1)\n\n        # admin suspend the job to put the node to maintenance\n        self.server.sigjob(jid1, 'admin-suspend', runas=ROOT_USER)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid1)\n        self.server.expect(NODE, {'state': 'maintenance'}, id=vnode)\n\n        # submit other jobs asking for specific resources on vn[1]\n        j = Job(TEST_USER)\n        j.set_attributes({'Resource_List.foo': '2'})\n        jid2 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n\n        # submit more jobs. 
They should be running\n        j = Job(TEST_USER)\n        jid3 = self.server.submit(j)\n        jid4 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid3)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid4)\n\n        # verify that vn[1] is still in maintenance and\n        # job3 and job4 not running on vn[1]\n        self.server.expect(NODE, {'state': 'maintenance'}, id=vnode)\n        try:\n            self.server.expect(JOB, {'exec_vnode': (MATCH_RE, vnode)},\n                               id=jid3, max_attempts=20)\n            self.server.expect(JOB, {'exec_vnode': (MATCH_RE, vnode)},\n                               id=jid4, max_attempts=20)\n        except Exception as e:\n            self.assertFalse(e.rv)\n            msg = \"jid3 and jid4 not running on \" + vnode + \" as expected\"\n            self.logger.info(msg)\n\n    def test_list_jobs_1(self):\n        \"\"\"\n        Test to list and set maintenance_jobs as various users\n        \"\"\"\n        # This test is run with CLI mode only\n        _m = self.server.get_op_mode()\n        if _m != PTL_CLI:\n            self.skipTest(\"Not all commands can be run with API mode\")\n\n        # submit a few jobs\n        j = Job(TEST_USER)\n        jid1 = self.server.submit(j)\n        jid2 = self.server.submit(j)\n        jid3 = self.server.submit(j)\n\n        # verify that all are running\n        self.server.expect(JOB, {'job_state=R': 3, 'substate=42': 3})\n\n        # admin-suspend 2 of them\n        self.server.sigjob(jid2, 'admin-suspend', runas=ROOT_USER)\n        self.server.sigjob(jid3, 'admin-suspend', runas=ROOT_USER)\n        vnode = self.mom.shortname + '[0]'\n        # node state is in maintenance\n        self.server.expect(NODE, {'state': 'maintenance'}, id=vnode)\n\n        # list maintenance_jobs as root\n        self.server.expect(NODE, {'maintenance_jobs': jid2 + \",\" + jid3},\n                           
runas=ROOT_USER)\n\n        # list maintenance jobs as user\n        self.server.expect(NODE, {'maintenance_jobs': jid2 + \",\" + jid3},\n                           runas=TEST_USER)\n\n        # set an operator\n        self.server.manager(MGR_CMD_SET, SERVER, {'operators': 'pbsoper@*'})\n\n        # List all jobs in maintenance mode as operator\n        self.server.expect(\n            NODE, {'maintenance_jobs': jid2 + \",\" + jid3}, runas='pbsoper')\n\n        # set maintenance_jobs as root\n        try:\n            self.server.manager(MGR_CMD_SET, NODE,\n                                {'maintenance_jobs': jid1}, id=vnode,\n                                runas=ROOT_USER)\n        except PbsManagerError as e:\n            self.assertFalse(e.rv)\n            msg = \"Cannot set attribute, read only\" +\\\n                  \" or insufficient permission  maintenance_jobs\"\n            self.assertTrue(msg in e.msg[0])\n\n        # Set maintenance_jobs as operator\n        try:\n            self.server.manager(MGR_CMD_SET, NODE,\n                                {'maintenance_jobs': jid1}, id=vnode,\n                                runas='pbsoper')\n        except PbsManagerError as e:\n            self.assertFalse(e.rv)\n            msg = \"Cannot set attribute, read only\" +\\\n                  \" or insufficient permission  maintenance_jobs\"\n            self.assertTrue(msg in e.msg[0])\n\n        # Set maintenance_jobs as user\n        try:\n            self.server.manager(MGR_CMD_SET, NODE,\n                                {'maintenance_jobs': jid1}, id=vnode,\n                                runas=TEST_USER)\n        except PbsManagerError as e:\n            self.assertFalse(e.rv)\n            self.assertTrue(\"Unauthorized Request\" in e.msg[0])\n\n    def test_list_jobs_2(self):\n        \"\"\"\n        Test to list maintenance_jobs when no job is admin-suspended\n        \"\"\"\n\n        # Submit a few jobs\n        j = Job(TEST_USER)\n        jid1 = 
self.server.submit(j)\n        jid2 = self.server.submit(j)\n        jid3 = self.server.submit(j)\n\n        # verify that all are running\n        self.server.expect(JOB, {'job_state=R': 3, 'substate=42': 3})\n        vnode = self.mom.shortname + '[0]'\n        # list maintenance_jobs. It should be empty\n        self.server.expect(NODE, 'maintenance_jobs', op=UNSET, id=vnode)\n\n        # Regular suspend a job\n        self.server.sigjob(jid2, 'suspend', runas=ROOT_USER)\n\n        # List maintenance_jobs again\n        self.server.expect(NODE, 'maintenance_jobs', op=UNSET, id=vnode)\n\n    def test_preempt_order(self):\n        \"\"\"\n        Test that scheduler preempt_order has no impact\n        on admin-suspend\n        \"\"\"\n\n        # create a high priority queue\n        a = {'queue_type': 'e', 'enabled': 't', 'started': 't',\n             'priority': 150}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id=\"highp\")\n\n        # set preempt_order to R\n        self.server.manager(MGR_CMD_SET, SCHED, {'preempt_order': 'R'},\n                            runas=ROOT_USER)\n        vnode = self.mom.shortname + '[0]'\n        # submit a job\n        j = Job(TEST_USER)\n        j.set_attributes({'Resource_List.select': 'vnode=' + vnode})\n        jid1 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid1)\n\n        # submit a high priority job\n        j = Job(TEST_USER)\n        j.set_attributes({'queue': 'highp', 'Resource_List.select':\n                          '1:ncpus=4:vnode=' + vnode})\n        jid2 = self.server.submit(j)\n\n        # job2 is running and job1 is requeued\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid1)\n\n        # admin-suspend job1. 
It will fail\n        try:\n            self.server.sigjob(jid1, 'admin-suspend', logerr=False)\n        except Exception as e:\n            self.assertFalse(e.rv)\n\n        # admin suspend job2\n        self.server.sigjob(jid2, 'admin-suspend')\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid2)\n        self.server.expect(NODE, {'state': 'maintenance'}, id=vnode)\n\n        # admin-resume job2. node state will become job-busy.\n        self.server.sigjob(jid2, 'admin-resume')\n        self.server.expect(NODE, {'state': 'job-busy'}, id=vnode)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid1)\n\n    def test_hook(self):\n        \"\"\"\n        List maintenance_jobs via hook\n        \"\"\"\n\n        # Create and import a hook\n        hook_name = \"test\"\n        hook_body = \"\"\"\nimport pbs\n\nvn = pbs.server().vnode('vn[0]')\npbs.logmsg(pbs.LOG_DEBUG,\\\n\"list of maintenance_jobs are %s\" % vn.maintenance_jobs)\n\"\"\"\n        a = {'resources_available.ncpus': 4, 'resources_available.mem': '4gb'}\n        self.mom.create_vnodes(a, 1, vname='vn')\n        a = {'event': 'exechost_periodic', 'enabled': 'True', 'freq': 5}\n        self.server.create_import_hook(hook_name, a, hook_body)\n\n        # submit few jobs\n        j = Job(TEST_USER)\n        jid1 = self.server.submit(j)\n        jid2 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state=R': 2})\n\n        # wait for the periodic hook\n        time.sleep(5)\n\n        # look for the log message\n        self.mom.log_match(\"list of maintenance_jobs are None\")\n\n        # admin-suspend jobs\n        self.server.sigjob(jid1, 'admin-suspend')\n        self.server.sigjob(jid2, 'admin-suspend')\n\n        # wait for periodic hook and check mom_log\n        time.sleep(5)\n        self.mom.log_match(\"list of maintenance_jobs are %s\" %\n                           ((jid1 + \",\" + jid2),))\n\n 
       # admin-resume job1\n        self.server.sigjob(jid1, 'admin-resume')\n\n        # wait for periodic hook and check mom_log\n        time.sleep(5)\n        self.mom.log_match(\n            \"list of maintenance_jobs are %s\" % (jid2,))\n\n    def test_offline(self):\n        \"\"\"\n        Test that if a node is put to offline\n        and removed from maintenance state it\n        remains offlined\n        \"\"\"\n\n        # submit a job and admin-suspend it\n        j1 = Job(TEST_USER)\n        jid1 = self.server.submit(j1)\n        j2 = Job(TEST_USER)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': \"R\", 'substate': 42}, id=jid1)\n        self.server.expect(JOB, {'job_state': \"R\", 'substate': 42}, id=jid2)\n        self.server.sigjob(jid1, 'admin-suspend')\n        self.server.sigjob(jid2, 'admin-suspend')\n\n        vnode = self.mom.shortname + '[0]'\n        # node state is in maintenance\n        self.server.expect(NODE, {'state': 'maintenance'}, id=vnode)\n\n        # submit another job. It will be queued\n        j3 = Job(TEST_USER)\n        jid3 = self.server.submit(j3)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid3)\n\n        # mark the node as offline too\n        self.server.manager(MGR_CMD_SET, NODE, {'state': 'offline'},\n                            id=vnode)\n\n        # delete job1 as user and resume job2\n        self.server.deljob(jid1, wait=True, runas=TEST_USER)\n        self.server.sigjob(jid2, 'admin-resume')\n\n        # verify that node state is offline and\n        # job3 is still queued\n        self.server.expect(NODE, {'state': 'offline'}, id=vnode)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid3)\n"
  },
  {
    "path": "test/tests/functional/pbs_allpart.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestSchedAllPart(TestFunctional):\n    \"\"\"\n    Test the scheduler's allpart optimization\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        a = {'resources_available.ncpus': 1, 'resources_available.mem': '1gb'}\n        self.mom.create_vnodes(a, 2, usenatvnode=True)\n\n    def test_free_nodes(self):\n        \"\"\"\n        Test that if there aren't enough free nodes available, it is reported\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        a = {'Resource_List.select': '2:ncpus=1'}\n        j1 = Job(TEST_USER, a)\n        jid1 = self.server.submit(j1)\n        j2 = Job(TEST_USER, a)\n        jid2 = self.server.submit(j2)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        a = {'job_state': 'Q', 'comment':\n             'Not Running: Not enough free nodes available'}\n        self.server.expect(JOB, a, id=jid2)\n\n    def test_vscatter(self):\n        \"\"\"\n        Test that we determine we can't run a job when there aren't enough\n        free nodes available due to vscatter\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        a = {'Resource_List.select': '1:ncpus=1'}\n        j1 = 
Job(TEST_USER, a)\n        jid1 = self.server.submit(j1)\n\n        a = {'Resource_List.select': '2:ncpus=1',\n             'Resource_List.place': 'vscatter'}\n        j2 = Job(TEST_USER, a)\n        jid2 = self.server.submit(j2)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        a = {'job_state': 'Q', 'comment':\n             'Not Running: Not enough free nodes available'}\n        self.server.expect(JOB, a, id=jid2)\n\n    def test_vscatter2(self):\n        \"\"\"\n        Test that we can determine a job can never run if it is requesting\n        more nodes than are in the complex via vscatter\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        a = {'Resource_List.select': '3:ncpus=1',\n             'Resource_List.place': 'vscatter'}\n        j = Job(TEST_USER, a)\n        jid = self.server.submit(j)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        a = {'job_state': 'Q', 'comment':\n             'Can Never Run: Not enough total nodes available'}\n        self.server.expect(JOB, a, id=jid)\n\n    def test_rassn(self):\n        \"\"\"\n        Test that when the rassn resource (ncpus) is unavailable, the\n        comment is shown with a RAT line\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        a = {'Resource_List.select': '1:ncpus=1'}\n        j1 = Job(TEST_USER, a)\n        jid1 = self.server.submit(j1)\n\n        a = {'Resource_List.select': '2:ncpus=1'}\n        j2 = Job(TEST_USER, a)\n        jid2 = self.server.submit(j2)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        m = 'Not Running: Insufficient amount of resource: ncpus ' + \\\n            '(R: 2 A: 1 T: 2)'\n        a = {'job_state': 'Q', 'comment': m}\n        self.server.expect(JOB, a, 
id=jid2)\n\n    def test_nonexistent_non_consumable(self):\n        \"\"\"\n        Test that a nonexistent non-consumable value is caught as 'Never Run'\n        \"\"\"\n        a = {'Resource_List.select': '1:ncpus=1:vnode=foo'}\n        j = Job(TEST_USER, a)\n        jid = self.server.submit(j)\n\n        m = r'Can Never Run: Insufficient amount of resource: vnode \\(foo !='\n        a = {'job_state': 'Q', 'comment': (MATCH_RE, m)}\n        self.server.expect(JOB, a, id=jid)\n\n    def test_too_many_ncpus(self):\n        \"\"\"\n        test that a job is marked as can never run if it requests more cpus\n        than are available on the entire complex\n        \"\"\"\n        a = {'Resource_List.select': '3:ncpus=1'}\n        j = Job(TEST_USER, a)\n        jid = self.server.submit(j)\n\n        m = 'Can Never Run: Insufficient amount of resource: ncpus ' + \\\n            '(R: 3 A: 2 T: 2)'\n        a = {'job_state': 'Q', 'comment': m}\n        self.server.expect(JOB, a, id=jid)\n"
  },
  {
    "path": "test/tests/functional/pbs_alps_inventory_check_hook.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport os\n\nfrom tests.functional import *\n\n\n@tags('cray', 'mom')\nclass TestAlpsInventoryCheckHook(TestFunctional):\n    \"\"\"\n    PBS mom does not appear to periodically re-query the\n    node inventory on Cray automatically.\n    \"\"\"\n\n    def setUp(self):\n        self.platform = DshUtils().get_platform()\n        if self.platform != 'cray' and self.platform != 'craysim':\n            self.skipTest(\"This is not a cray platform\")\n\n        TestFunctional.setUp(self)\n        with open(\"/etc/xthostname\") as xthost_file:\n            self.crayhostname = xthost_file.readline().rstrip()\n\n        self.server.manager(MGR_CMD_SET, PBS_HOOK,\n                            {'enabled': 'true', 'freq': 3},\n                            id='PBS_alps_inventory_check')\n\n    def delete_cray_compute_node(self):\n        \"\"\"\n        Deletes a Cray compute node from the PBS node list\n        \"\"\"\n        vnl = self.server.filter(\n            VNODE, {'resources_available.vntype': 'cray_compute'})\n        vlist = vnl[\"resources_available.vntype=cray_compute\"]\n        self.server.manager(MGR_CMD_DELETE, NODE, id=vlist[0])\n\n    def test_apstat_cmd(self):\n        \"\"\"\n        Test the log when apstat is not present in the\n        expected/default location, which indicates a Cray system issue.\n        \"\"\"\n        now = time.time()\n        if 
self.platform == \"craysim\":\n            if os.path.exists(\"/opt/cray/alps/default/bin/stat\"):\n                # The file to be renamed is conflicting with existing file\n                self.skipTest(\"Conflict in the testcase settings\")\n            os.rename(\n                \"/opt/cray/alps/default/bin/apstat\",\n                \"/opt/cray/alps/default/bin/stat\")\n            try:\n                self.mom.log_match(\n                    \"ALPS Inventory Check: apstat command can not \" +\n                    \"be found at /opt/cray/alps/default/bin/apstat\",\n                    starttime=now,\n                    max_attempts=10,\n                    interval=2)\n            finally:\n                os.rename(\n                    \"/opt/cray/alps/default/bin/stat\",\n                    \"/opt/cray/alps/default/bin/apstat\")\n        else:\n            self.skipTest(\"This test can be run on a simulator\")\n\n    def test_xthostname(self):\n        \"\"\"\n        Test when hook attempts to read the /etc/xthostname file to\n        determine Cray hostname, but the hostname file is missing.\n        \"\"\"\n        now = time.time()\n        if self.platform == \"craysim\":\n            if os.path.exists(\"/etc/xt\"):\n                # The file to be renamed is conflicting with existing file\n                self.skipTest(\"Conflict in the testcase settings\")\n            os.rename(\"/etc/xthostname\", \"/etc/xt\")\n            try:\n                self.mom.log_match(\n                    \"/etc/xthostname file found on this host\",\n                    starttime=now,\n                    max_attempts=10,\n                    interval=2)\n            finally:\n                os.rename(\"/etc/xt\", \"/etc/xthostname\")\n        else:\n            self.skipTest(\"This test can be run on a simulator\")\n\n    def test_start_of_hook(self):\n        \"\"\"\n        Test log at the start of hook processing.\n        \"\"\"\n        now = time.time()\n 
       self.mom.log_match(\n            \"Processing ALPS inventory for crayhost %s\" % self.crayhostname,\n            starttime=now,\n            max_attempts=10,\n            interval=2)\n\n    def test_cray_login_nodes(self):\n        \"\"\"\n        Test log when no nodes with vntype 'cray_login' are present.\n        \"\"\"\n        now = time.time()\n        mc = self.mom.parse_config()\n        save = mc[\"$alps_client\"]\n        del mc[\"$alps_client\"]\n        self.mom.apply_config(mc)\n        self.host = self.mom.shortname\n        try:\n            self.server.manager(MGR_CMD_DELETE, NODE, None, \"\")\n            self.server.manager(MGR_CMD_CREATE, NODE, id=self.host)\n            self.mom.log_match(\n                \"ALPS Inventory Check: No eligible \" +\n                \"login nodes to perform inventory check\",\n                starttime=now,\n                max_attempts=10,\n                interval=2)\n        finally:\n            mc[\"$alps_client\"] = save\n            self.mom.apply_config(mc, False)\n\n    def test_pbs_home_path(self):\n        \"\"\"\n        Test log when mom_priv directory is not in the expected/default\n        location (PBS_HOME), indicating a PBS installation issue.\n        \"\"\"\n        if self.platform == \"craysim\":\n            now = time.time()\n            pbs_conf = self.du.parse_pbs_config(self.server.shortname)\n            save = pbs_conf['PBS_HOME']\n            self.du.set_pbs_config(\n                self.server.shortname, confs={\n                    'PBS_HOME': ''})\n            try:\n                self.delete_cray_compute_node()\n                self.mom.log_match(\n                    \"ALPS Inventory Check: Internal error in retrieving \" +\n                    \"path to mom_priv\",\n                    starttime=now,\n                    max_attempts=10,\n                    interval=2)\n            finally:\n                self.du.set_pbs_config(\n                    
self.server.shortname, confs={\n                        'PBS_HOME': save})\n        else:\n            self.skipTest(\"This test can be run on a simulator\")\n\n    def test_alps_and_pbs_are_in_sync(self):\n        \"\"\"\n        Test log when both PBS and ALPS are in sync i.e. they report the\n        same number of compute nodes in the Cray cluster.\n        \"\"\"\n        now = time.time()\n        self.mom.log_match(\n            \"ALPS Inventory Check: PBS and ALPS are in sync\",\n            starttime=now,\n            max_attempts=10,\n            interval=2)\n\n    def test_nodes_out_of_sync(self):\n        \"\"\"\n         Test the log when PBS and ALPS are out of sync\n        \"\"\"\n        now = time.time()\n        self.delete_cray_compute_node()\n        self.mom.log_match(\n            \"ALPS Inventory Check: Compute \" +\n            \"nodes defined in ALPS, but not in PBS\",\n            starttime=now,\n            max_attempts=10,\n            interval=2)\n\n    def test_failure_in_refreshing_nodes(self):\n        \"\"\"\n        Test log when the Hook is unable to HUP the Mom and successfully\n        refresh nodes.\n        \"\"\"\n        if self.platform == \"craysim\":\n            now = time.time()\n            pbs_conf = self.du.parse_pbs_config(self.server.shortname)\n            save = pbs_conf['PBS_HOME']\n            self.du.set_pbs_config(\n                self.server.shortname, confs={'PBS_HOME': 'xyz'})\n            try:\n                self.delete_cray_compute_node()\n                self.mom.log_match(\n                    \"ALPS Inventory Check: Failure in refreshing nodes on \" +\n                    \"login node (%s)\" %\n                    self.mom.hostname,\n                    starttime=now,\n                    max_attempts=10,\n                    interval=2)\n            finally:\n                self.du.set_pbs_config(\n                    self.server.shortname, confs={\n                        'PBS_HOME': save})\n   
     else:\n            self.skipTest(\"This test can be run on a Cray simulator\")\n"
  },
  {
    "path": "test/tests/functional/pbs_alps_release_tunables.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nfrom tests.functional import *\nimport math\nfrom ptl.utils.pbs_logutils import PBSLogUtils\n\n\n@tags('cray')\nclass TestCrayAlpsReleaseTunables(TestFunctional):\n    \"\"\"\n    Set of tests to verify the alps release tunables, namely\n    alps_release_wait_time and alps_release_jitter\n    \"\"\"\n\n    def setUp(self):\n        machine = self.du.get_platform()\n        if not machine == 'cray':\n            self.skipTest(\"Test suite only meant to run on a Cray\")\n        TestFunctional.setUp(self)\n\n    @staticmethod\n    def get_epoch(msg):\n        # Since it's a log message, split on ';' to get the timestamp\n        a = PBSLogUtils.convert_date_time(msg.split(';')[0])\n        return a\n\n    def test_alps_release_wait_time(self):\n        \"\"\"\n        Set alps_release_wait_time to a higher value and then verify that\n        subsequent reservation cancellation requests are made at least\n        the set interval apart.\n        \"\"\"\n        # assigning an arbitrary value to alps_release_wait_time that is\n        # measurable using mom log messages\n        arwt = 4.298\n        self.mom.add_config({'$alps_release_wait_time': arwt})\n\n        # submit a job and then delete it after it starts running\n        start_time = time.time()\n        j1 = Job(TEST_USER)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, 
id=jid1)\n        time.sleep(2)\n        self.server.delete(jid1)\n\n        # Look for a message that confirms that the reservation is deleted\n        self.mom.log_match(\"%s;ALPS reservation cancelled\" % jid1,\n                           starttime=start_time)\n        # Now that we know the reservation is cleared, check the\n        # time difference between each cancellation request\n        out = self.mom.log_match(\"%s;Canceling ALPS reservation *\" % jid1,\n                                 n='ALL', regexp=True, allmatch=True)\n\n        # We found something. First check that there are at least 2 such\n        # log messages; if not, the reservation was cancelled on the\n        # first attempt, and the right thing to do is to either run the\n        # test again or find a way to delay the reservation\n        # cancellation at the ALPS level itself.\n        if len(out) >= 2:\n            # variable 'out' is a list of tuples and every second element\n            # in a tuple is the matched log message\n            time_prev = self.get_epoch(out[0][1])\n            for data in out[1:]:\n                time_current = self.get_epoch(data[1])\n                fail_msg = \"alps_release_wait_time not working\"\n                self.assertGreaterEqual(time_current - time_prev,\n                                        math.floor(arwt),\n                                        msg=fail_msg)\n                time_prev = time_current\n\n        else:\n            self.skipTest(\"Reservation cancelled without retry, Try again!\")\n\n    def test_alps_release_jitter(self):\n        \"\"\"\n        Set alps_release_jitter to a higher value and then verify that\n        subsequent reservation cancellation requests are made by adding\n        a random time interval (less than the jitter) to\n        alps_release_wait_time.\n        \"\"\"\n        # assigning an arbitrary value to alps_release_jitter that is\n        # measurable using mom log messages\n        arj 
= 2.198\n        arwt = 1\n        max_delay = (arwt + math.ceil(arj))\n        self.mom.add_config({'$alps_release_jitter': arj})\n        self.mom.add_config({'$alps_release_wait_time': arwt})\n        # There is no good way to test jitter since it is a random number\n        # less than the value set in alps_release_jitter, so in this case\n        # we try deleting a reservation a few times.\n        n = retry = 5\n        for _ in range(n):\n            # submit a job and then delete it after it starts running\n            start_time = time.time()\n            j1 = Job(TEST_USER)\n            jid1 = self.server.submit(j1)\n            self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n            time.sleep(2)\n            self.server.delete(jid1)\n\n            # Look for a message that confirms that the reservation is deleted\n            self.mom.log_match(\"%s;ALPS reservation cancelled\" % jid1,\n                               starttime=start_time)\n            # Now that we know the reservation is cleared, check the\n            # time difference between each cancellation request\n            out = self.mom.log_match(\"%s;Canceling ALPS reservation *\" % jid1,\n                                     n='ALL', regexp=True, allmatch=True)\n\n            # We found something. First check that there are at least 2 such\n            # log messages; if not, the reservation was cancelled on the\n            # first attempt, and the right thing to do is to either run the\n            # test again or find a way to delay the reservation\n            # cancellation at the ALPS level itself.\n            if len(out) >= 2:\n                retry -= 1\n                # variable 'out' is a list of tuples and every second element\n                # in a tuple is the matched log message\n                time_prev = self.get_epoch(out[0][1])\n                for data in out[1:]:\n                    time_current = self.get_epoch(data[1])\n   
                 self.assertLessEqual(time_current - time_prev,\n                                         max_delay,\n                                         msg=\"alps_release_jitter not working\")\n                    time_prev = time_current\n        if retry == 5:\n            self.skipTest(\"Reservation cancelled without retry, Try again!\")\n"
  },
  {
    "path": "test/tests/functional/pbs_array_job_mail.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\nimport os\n\n\nclass Test_array_job_email(TestFunctional):\n    \"\"\"\n    This test suite is for testing arrayjob e-mailing (parent job and subjob)\n    \"\"\"\n\n    def test_emails(self):\n        \"\"\"\n        Run arrayjob with -m jabe and test if the e-mails are received\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'job_history_enable': 'true'})\n\n        mailfile = os.path.join(\"/var/mail\", str(TEST_USER))\n        if not os.path.isfile(mailfile):\n            self.skip_test(\"Mail file '%s' does not exist or \"\n                           \"mail is not setup. \"\n                           \"Hence this step would be skipped. 
\"\n                           \"Please check manually.\" % mailfile)\n\n        J = Job(TEST_USER, attrs={ATTR_m: 'jabe', ATTR_J: '1-2'})\n        J.set_sleep_time(1)\n        parent_jid = self.server.submit(J)\n\n        self.server.expect(JOB, {'job_state': 'F'}, parent_jid,\n                           extend='x', max_attempts=15, interval=2)\n\n        subjob_jid = parent_jid.replace(\"[]\", \"[1]\")\n\n        emails = [(\"PBS Job Id: \" + parent_jid, \"Begun execution\"),\n                  (\"PBS Job Id: \" + parent_jid, \"Execution terminated\"),\n                  (\"PBS Job Id: \" + subjob_jid, \"Begun execution\"),\n                  (\"PBS Job Id: \" + subjob_jid, \"Execution terminated\")]\n\n        for (jobid, msg) in emails:\n            emailpass = 0\n            for j in range(5):\n                time.sleep(5)\n                ret = self.du.tail(filename=mailfile, sudo=True,\n                                   option=\"-n 600\")\n                maillog = [x.strip() for x in ret['out']]\n                for i in range(0, len(maillog) - 2):\n                    if jobid == maillog[i] and msg == maillog[i + 2]:\n                        emailpass = 1\n                        break\n                if emailpass:\n                    break\n            self.assertTrue(emailpass, \"Message '\" + jobid + \" \" + msg +\n                            \"' not found in \" + mailfile)\n\n    def test_qsub_errors_j_mailpoint(self):\n        \"\"\"\n        Try to submit 'qsub -m j' and test possible errors\n        \"\"\"\n\n        J = Job(TEST_USER, attrs={ATTR_m: 'j'})\n\n        error_msg = \"mail option 'j' can not be used without array job\"\n        try:\n            self.server.submit(J)\n        except PbsSubmitError as e:\n            self.assertTrue(error_msg in e.msg[0])\n\n        J = Job(TEST_USER, attrs={ATTR_m: 'j', ATTR_J: '1-2'})\n\n        error_msg = \"illegal -m value\"\n        try:\n            self.server.submit(J)\n        except 
PbsSubmitError as e:\n            self.assertTrue(error_msg in e.msg[0])\n\n    def test_email_non_existent_user(self):\n        \"\"\"\n        Verify that when a job array is submitted with both a valid and an\n        invalid mail recipient and all file stageout attempts fail, an\n        email is delivered to the valid recipient and no email is sent\n        to the invalid recipient.\n        \"\"\"\n        non_existent_user = PbsAttribute.random_str(length=5)\n        non_existent_mailfile = os.path.join(os.sep, \"var\", \"mail\",\n                                             non_existent_user)\n        pbsuser_mailfile = os.path.join(os.sep, \"var\", \"mail\",\n                                        str(TEST_USER))\n\n        # The mail file should exist for the existing user\n        if not os.path.isfile(pbsuser_mailfile):\n            msg = \"Skipping this test as mail file '%s' \" % pbsuser_mailfile\n            msg += \"does not exist or mail is not set up.\"\n            self.skip_test(msg)\n\n        # The non-existent user's mail file should not exist\n        self.assertFalse(os.path.isfile(non_existent_mailfile))\n\n        src_file = PbsAttribute.random_str(length=5)\n        stageout_path = os.path.join(os.sep, '1', src_file)\n        dest_file = stageout_path + '1'\n        if not os.path.isdir(stageout_path) and os.path.exists(src_file):\n            os.remove(src_file)\n\n        # Submit a job with an invalid stageout path\n        usermail_list = str(TEST_USER) + \",\" + non_existent_user\n        set_attrib = {ATTR_stageout: stageout_path + '@' +\n                      self.mom.shortname + ':' + dest_file,\n                      ATTR_M: usermail_list, ATTR_J: '1-2',\n                      ATTR_S: '/bin/bash'}\n        j = Job()\n        j.set_attributes(set_attrib)\n        j.set_sleep_time(1)\n        jid = self.server.submit(j)\n        subjid = j.create_subjob_id(jid, 1)\n\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid)\n\n        # 
The stageout file should not be present\n        self.assertFalse(os.path.exists(dest_file))\n\n        exp_msg = \"PBS Job Id: \" + subjid\n        err_msg = \"%s msg not found in pbsuser's mail log\" % exp_msg\n\n        email_pass = 0\n        for i in range(5):\n            time.sleep(5)\n            # Check whether mail was delivered to the valid user's mail file\n            ret = self.du.tail(filename=pbsuser_mailfile, runas=TEST_USER,\n                               option=\"-n 50\")\n            maillog = [x.strip() for x in ret['out']]\n            if exp_msg in maillog:\n                email_pass = 1\n                break\n        self.assertTrue(email_pass, err_msg)\n\n        # Verify that no email was delivered for the invalid user\n        self.assertFalse(os.path.isfile(non_existent_mailfile))\n"
  },
  {
    "path": "test/tests/functional/pbs_basil_parser_err.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\n@tags('cray', 'mom')\nclass TestBasilParserErrors(TestFunctional):\n    \"\"\"\n    Test the BASIL parser error messages\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        momA = self.moms.values()[0]\n        if not momA.is_cray():\n            self.skipTest(\"%s: not a cray mom.\" % (momA.shortname))\n\n    def test_basil_errors(self):\n        \"\"\"\n        Check for the non existence of BASIL errors in mom logs\n        \"\"\"\n        self.mom.log_match(\"PERMANENT BASIL error from SYNTAX\",\n                           max_attempts=10,\n                           interval=1,\n                           existence=False)\n        self.mom.log_match(\"Error in BASIL response\",\n                           max_attempts=10,\n                           interval=1,\n                           existence=False)\n"
  },
  {
    "path": "test/tests/functional/pbs_basil_support.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\nfrom string import Template\nimport os\nimport defusedxml.ElementTree as ET\n\n\n@tags('cray', 'mom')\nclass TestBasilQuery(TestFunctional):\n    \"\"\"\n    This test suite is for testing the support for BASIL 1.7/1.4 basil\n    query.Test if query is made with correct BASIL version, and that\n    vnodes are getting created as per the query response.\n    \"\"\"\n    basil_version = ['1.7', '1.4', '1.3']\n    available_version = \"\"\n\n    @staticmethod\n    def init_inventory_node():\n        node = {}\n        node['vnode'] = \"\"\n        node['arch'] = \"\"\n        node['current_aoe'] = \"\"\n        node['host'] = \"\"\n        node['hbmem'] = \"\"\n        node['mem'] = \"\"\n        node['ncpus'] = \"\"\n        node['PBScrayhost'] = \"\"\n        node['PBScraynid'] = \"\"\n        node['vntype'] = \"\"\n        node['accelerator_memory'] = \"\"\n        node['accelerator_model'] = \"\"\n        node['naccelerators'] = \"\"\n        return node\n\n    def reset_nodes(self, hostA):\n        # Remove all nodes\n        self.server.manager(MGR_CMD_DELETE, NODE, None, \"\")\n        # Restart PBS\n        self.server.restart()\n        # Create node\n        self.server.manager(MGR_CMD_CREATE, NODE, None, hostA)\n        # Wait for 3 seconds for changes to take effect\n        time.sleep(3)\n\n    def setUp(self):\n   
     TestFunctional.setUp(self)\n\n        self.server.manager(MGR_CMD_SET, PBS_HOOK,\n                            {'enabled': 'true', 'freq': 10},\n                            id='PBS_alps_inventory_check')\n\n        momA = self.moms.values()[0]\n        if not momA.is_cray():\n            self.skipTest(\"%s: not a cray mom.\" % (momA.shortname))\n        mom_config = momA.parse_config()\n        if '$alps_client' not in mom_config:\n            self.skipTest(\"alps_client not set in mom config.\")\n\n        if '$vnode_per_numa_node' in mom_config:\n            momA.unset_mom_config('$vnode_per_numa_node', False)\n\n        momA.add_config({'$logevent': '0xffffffff'})\n\n        # check if required BASIL version available on the machine.\n        for ver in self.basil_version:\n            xml_out = self.query_alps(ver, 'QUERY', 'ENGINE')\n            xml_tree = ET.parse(xml_out)\n            os.remove(xml_out)\n            response = xml_tree.find(\".//ResponseData\")\n            status = response.attrib['status']\n            if status == \"SUCCESS\":\n                self.available_version = ver\n                break\n        if self.available_version == \"\":\n            self.skipTest(\"No supported basil version found on the platform.\")\n\n        # Reset nodes\n        self.reset_nodes(momA.shortname)\n\n    def query_alps(self, ver, method, qtype):\n        \"\"\"\n        Send a query to ALPS of a certain type and return the xml output file.\n        \"\"\"\n        basil_protocol = 'protocol=\"%s\"' % (ver)\n        basil_method = 'method=\"%s\"' % (method)\n        basil_qtype = 'type=\"%s\"' % (qtype)\n        queryt = Template('<BasilRequest $ver $method $qtype/>\\n')\n        query = queryt.substitute(ver=basil_protocol,\n                                  method=basil_method, qtype=basil_qtype)\n        mom_config = self.mom.parse_config()\n        alps_client = mom_config['$alps_client']\n        fn = self.du.create_temp_file(body=query)\n      
  xout = self.du.create_temp_file()\n        self.du.run_cmd(cmd=\"%s < %s > %s\" % (alps_client, fn, xout),\n                        as_script=True)\n        os.remove(fn)\n        return xout\n\n    def comp_node(self, vnode):\n        \"\"\"\n        Check that the compute node appears in pbsnodes -av output.\n        If so, check that the vnode attributes have the correct values.\n        \"\"\"\n        name = vnode['vnode']\n        try:\n            pbs_node = self.server.status(NODE, id=name)[0]\n        except PbsStatusError:\n            self.fail(\"Cray compute node %s doesn't exist on pbs server\"\n                      % (name))\n\n        for rsc, xval in vnode.items():\n            if rsc != 'current_aoe':\n                resource = 'resources_available.' + rsc\n            else:\n                resource = rsc\n            if xval != \"\":\n                if resource in pbs_node:\n                    rval = pbs_node[resource]\n                    if rval == xval:\n                        self.logger.info(\n                            \"%s: node has %s=%s\" % (name, rsc, rval))\n                    else:\n                        self.fail(\"%s: node has %s=%s but XML %s=%s\"\n                                  % (name, resource, rval,\n                                     rsc, xval))\n                else:\n                    self.fail(\n                        \"%s\\t: node has no resource %s\" % (name, rsc))\n\n    def get_knl_vnodes(self):\n        xml_out = self.query_alps('1.7', 'QUERY', 'SYSTEM')\n        tree = ET.parse(xml_out)\n        os.remove(xml_out)\n        root = tree.getroot()\n        knl_vnodes = {}\n        knl_info = {}\n\n        # If a node has a KNL processor, add it\n        # to the knl_vnodes dictionary\n        for node in root.getiterator('Nodes'):\n            # XML values\n            role = 
node.attrib[\"role\"]\n            state = node.attrib[\"state\"]\n            numa_cfg = node.attrib[\"numa_cfg\"]\n            hbm_size_mb = node.attrib[\"hbm_size_mb\"]\n            hbm_cache_pct = node.attrib[\"hbm_cache_pct\"]\n\n            if role == 'batch' and state == 'up' and numa_cfg != \"\"\\\n               and hbm_size_mb != \"\" and hbm_cache_pct != \"\":\n                # derived values from XML; build a fresh dict per node so\n                # that entries in knl_vnodes do not alias each other\n                knl_info = {'current_aoe': numa_cfg + '_' + hbm_cache_pct,\n                            'hbmem': hbm_size_mb + 'mb'}\n                nid_ranges = node.text.strip()\n                nid_range_list = list(nid_ranges.split(','))\n                while len(nid_range_list) > 0:\n                    nid_range = nid_range_list.pop()\n                    nid1 = nid_range.split('-')\n                    if len(nid1) == 2:\n                        # range of nodes\n                        r1 = int(nid1[0])\n                        r2 = int(nid1[1]) + 1\n                        for node_id in range(r1, r2):\n                            # associate each nid with its KNL information\n                            knl_vnodes['%d' % node_id] = knl_info\n                    else:\n                        # single node\n                        node_id = int(nid1[0])\n                        knl_vnodes['%d' % node_id] = knl_info\n        return knl_vnodes\n\n    def retklist(self):\n        \"\"\"\n        Return a list of KNL vnodes, or an empty list if there are none.\n        \"\"\"\n        klist = []\n        # Find the list of KNL vnodes\n        kvnl = self.server.filter(VNODE, {'current_aoe': (NE, \"\")})\n        if len(kvnl) == 0:\n            self.skipTest(reason='No KNL vnodes present')\n        else:\n            klist = list(kvnl.values())[0]\n            self.logger.info(\"KNL vnode list: %s\" % (klist))\n        return klist\n\n    def set_provisioning(self):\n        \"\"\"\n        Set provisioning enabled and aoe resource on Xeon Phi 
nodes.\n        \"\"\"\n        # Check for provisioning setup\n        momA = self.moms.values()[0].shortname\n        serverA = self.servers.values()[0].shortname\n        msg = (\"Provide a mom not present on server host while invoking\"\n               \" the test: -p moms=<m1>\")\n        if momA == serverA:\n            self.skipTest(reason=msg)\n\n        nodelist = self.server.status(NODE, 'current_aoe')\n        for node in nodelist:\n            a = {'provision_enable': 'true',\n                 'resources_available.aoe': '%s' % node['current_aoe']}\n            self.server.manager(MGR_CMD_SET, NODE, a, id=node['id'])\n\n    def unset_provisioning(self):\n        \"\"\"\n        Unset provisioning attribute and aoe resource on Xeon Phi nodes.\n        \"\"\"\n        nodelist = self.server.status(NODE, 'current_aoe')\n        for node in nodelist:\n            a = ['provision_enable',\n                 'resources_available.aoe']\n            self.server.manager(MGR_CMD_UNSET, NODE, a, id=node['id'])\n\n    def request_current_aoe(self):\n        \"\"\"\n        Get the value of current_aoe set on the XeonPhi vnodes\n        \"\"\"\n        aoe_val = self.server.status(NODE, 'current_aoe')\n        req_aoe = aoe_val[0]['current_aoe']\n        return req_aoe\n\n    def test_InventoryQueryVersion(self):\n        \"\"\"\n        Test if BASIL version is set to required BASIL version\n        on cray/simulator platform.\n        \"\"\"\n        self.mom.signal('-HUP')\n\n        engine_query_log = \"<BasilRequest protocol=\\\"%s\\\" method=\\\"QUERY\\\" \\\ntype=\\\"ENGINE\\\"/>\" % (self.basil_version[1])\n        self.mom.log_match(engine_query_log, n='ALL', max_attempts=3)\n\n        if self.available_version == '1.7':\n            msg = 'This Cray system supports the BASIL 1.7 protocol'\n            self.mom.log_match(msg, n='ALL', max_attempts=3)\n            basil_version_log = 'alps_engine_query;The basilversion is' \\\n                ' set to 1.4'\n   
     else:\n            basil_version_log = 'alps_engine_query;The basilversion is' \\\n                ' set to ' + self.available_version\n        self.mom.log_match(basil_version_log, max_attempts=3)\n\n    def test_InventoryVnodes(self):\n        \"\"\"\n        This test validates the vnode created using alps BASIL 1.4 & 1.7\n        inventory query response.\n        \"\"\"\n        knl_vnodes = {}\n        # Parse inventory query response and fetch node information.\n        xml_out = self.query_alps('1.4', 'QUERY', 'INVENTORY')\n        xml_tree = ET.parse(xml_out)\n        os.remove(xml_out)\n        inventory_1_4_el = xml_tree.find(\".//Inventory\")\n        hn = inventory_1_4_el.attrib[\"mpp_host\"]\n\n        if self.available_version == '1.7':\n            knl_vnodes = self.get_knl_vnodes()\n\n        # Fill vnode structure using BASIL response\n        for node in inventory_1_4_el.getiterator('Node'):\n            role = node.attrib[\"role\"]\n            if role == 'BATCH':\n                # XML values\n                node_id = node.attrib[\"node_id\"]\n                cu_el = node.findall('.//ComputeUnit')\n                mem_el = node.findall('.//Memory')\n                ac_el = node.findall('.//Accelerator')\n                page_size_kb = mem_el[0].attrib[\"page_size_kb\"]\n                page_count = mem_el[0].attrib[\"page_count\"]\n\n                vnode = self.init_inventory_node()\n                vnode['arch'] = node.attrib['architecture']\n                vnode['vnode'] = hn + '_' + node_id\n                vnode['vntype'] = \"cray_compute\"\n                vnode['mem'] = str(int(page_size_kb) *\n                                   int(page_count) * len(mem_el)) + \"kb\"\n                vnode['host'] = vnode['vnode']\n                vnode['PBScraynid'] = node_id\n                vnode['PBScrayhost'] = hn\n                vnode['ncpus'] = str(len(cu_el))\n                if ac_el:\n                    vnode['naccelerators'] = 
str(len(ac_el))\n                    vnode['accelerator_memory'] = str(\n                        ac_el[0].attrib['memory_mb']) + \"mb\"\n                    vnode['accelerator_model'] = ac_el[0].attrib['family']\n\n                if node_id in knl_vnodes:\n                    vnode['hbmem'] = knl_vnodes[node_id]['hbmem']\n                    vnode['current_aoe'] = knl_vnodes[node_id]['current_aoe']\n                    vnode['vnode'] = hn + '_' + node_id\n\n                # Compare xml vnode with pbs node.\n                self.logger.info(\"Validating vnode:%s\" % (vnode['vnode']))\n                self.comp_node(vnode)\n\n    def test_cray_login_node(self):\n        \"\"\"\n        This test validates that cray mom node resources value remain\n        unchanged before and after adding $alps_client in mom config.\n        \"\"\"\n        mom_id = self.mom.shortname\n        try:\n            cray_login_node = self.server.status(NODE, id=mom_id)[0]\n            self.mom.unset_mom_config('$alps_client', False)\n            self.reset_nodes(mom_id)\n            pbs_node = self.server.status(NODE, id=mom_id)[0]\n\n        except PbsStatusError:\n            self.assertFalse(True,\n                             \"Mom node %s doesn't exist on pbs server\"\n                             % (mom_id))\n        # List of resources to be ignored while comparing.\n        ignr_rsc = ['license', 'last_state_change_time']\n\n        for rsc, val in pbs_node.items():\n            if rsc in ignr_rsc:\n                continue\n            self.assertTrue(rsc in cray_login_node,\n                            (\"%s\\t: login node has no rsc %s\") %\n                            (mom_id, rsc))\n            rval = cray_login_node[rsc]\n            self.assertEqual(rval, val,\n                             (\"%s\\t: pbs node has %s=%s but login \"\n                              \"node has %s=%s\") %\n                             (mom_id, rsc, val, rsc, rval))\n\n    def 
test_hbmemm_rsc(self):\n        \"\"\"\n        Create a job that requests enough HBMEM. Submit the job to\n        the Server. Check if the job is in the 'R' state and if the\n        job runs on a KNL vnode. Delete the job.\n        \"\"\"\n\n        knl_vnodes = self.get_knl_vnodes()\n\n        if len(knl_vnodes) == 0:\n            self.skipTest(reason='No KNL vnodes present')\n        else:\n            self.logger.info(\"KNL vnode list: %s\" % (knl_vnodes))\n\n        hbm_req = 4192\n        a = {'Resource_List.select': '1:hbmem=%dmb' % hbm_req}\n        job = Job(TEST_USER, attrs=a)\n\n        job_id = self.server.submit(job)\n        self.server.expect(JOB, {'job_state': 'R'}, id=job_id)\n\n        # Check that exec_vnode is a KNL vnode.`\n        self.server.status(JOB, 'exec_vnode', id=job_id)\n        evnode = list(job.execvnode()[0].keys())[0]\n        nid = evnode.split('_')[1]\n        if nid in knl_vnodes.keys():\n            self.logger.info(\"exec_vnode %s is a KNL vnode.\" % (evnode))\n            rv = 1\n        else:\n            self.logger.info(\"exec_vnode %s is not a KNL vnode.\" % (evnode))\n            rv = 0\n        self.assertTrue(rv == 1)\n\n        nodes = self.server.status(NODE)\n        for n in nodes:\n            v_name = n['id']\n            if v_name == evnode:\n                hbm_assig = n['resources_assigned.hbmem']\n                hbm_int = int(re.search(r'\\d+', hbm_assig).group())\n                hbm_in_kb = hbm_req * 1024\n                self.logger.info(\n                    \"vnode name=%s -- hbm assigned=%s -- hbm requested=%dkb\"\n                    % (v_name, hbm_assig, hbm_in_kb))\n                if hbm_int == hbm_in_kb:\n                    self.logger.info(\n                        \"The requested hbmem of %s mb has been assigned.\" %\n                        (str(hbm_req)))\n                    self.assertTrue(True)\n                else:\n                    self.logger.info(\n                        \"The 
assigned hbmem of %s, on %s, does not match \"\n                        \"requested hbmem of %d mb\" %\n                        (hbm_assig, v_name, hbm_req))\n                    self.assertTrue(False)\n\n    def test_job_request_insufficent_hbmemm_rsc(self):\n        \"\"\"\n        Submit a job request that requests more than available HBMEM.\n        Check if the job is in the 'Q' state with valid comment.\n        Delete the job\n        \"\"\"\n        # Find the list of KNL vnodes\n        knl_vnodes = self.get_knl_vnodes()\n\n        if len(knl_vnodes) == 0:\n            self.skipTest(reason='No KNL vnodes present')\n        else:\n            self.logger.info(\"KNL vnode list: %s\" % (knl_vnodes))\n\n        hbm_req = 18000\n        a = {'Resource_List.select': '1:hbmem=%dmb' % hbm_req}\n        job = Job(TEST_USER, attrs=a)\n\n        job_id = self.server.submit(job)\n\n        # Check that job is in Q state with valid comment\n        job_comment = \"Not Running: Insufficient amount of resource: hbmem\"\n        self.server.expect(JOB, {'job_state': 'Q', 'comment':\n                                 (MATCH_RE, job_comment)}, attrop=PTL_AND,\n                           id=job_id)\n\n    def test_job_request_knl(self):\n        \"\"\"\n        Create a job that requests aoe should run on a KNL vnode.\n        Submit the job to the Server. 
Check if the job runs on a KNL vnode\n        and if the job is in the 'R' state.\n        \"\"\"\n        if self.du.platform == 'craysim':\n            self.skipTest(reason='Test is not applicable for Craysim')\n\n        # Find the list of KNL vnodes\n        klist = self.retklist()\n\n        # Set provisioning attributes on KNL vnode.\n        self.set_provisioning()\n\n        # Submit job that request aoe\n        req_aoe = self.request_current_aoe()\n        job = Job(TEST_USER)\n\n        job.create_script(\n            \"#PBS -joe -o localhost:/tmp -lselect=1:ncpus=1:aoe=%s\\n\"\n            % req_aoe +\n            \" cd /tmp\\n\"\n            \"aprun -B sleep 10\\n\"\n            \"sleep 10\")\n\n        job_id = self.server.submit(job)\n        self.server.expect(JOB, {'job_state': 'R'}, id=job_id)\n\n        # Check that exec_vnode is a KNL vnode.\n        self.server.status(JOB, 'exec_vnode', id=job_id)\n        evnode = job.get_vnodes()[0]\n        self.assertIn(evnode, klist, \"exec_vnode %s is not a KNL vnode.\"\n                      % (evnode))\n        self.logger.info(\"exec_vnode %s is a KNL vnode.\" % (evnode))\n\n        # Unset provisioning attributes.\n        self.unset_provisioning()\n\n    def test_job_request_subchunk(self):\n        \"\"\"\n        Test job request consist of subchunks with and without aoe resource.\n        \"\"\"\n        if self.du.platform == 'craysim':\n            self.skipTest(reason='Test is not applicable for craysim')\n\n        # Find the list of KNL vnodes\n        klist = self.retklist()\n\n        # Set provisioning attributes.\n        self.set_provisioning()\n\n        # Submit job that request sub-chunk with and without aoe resources\n        req_aoe = self.request_current_aoe()\n        job = Job(TEST_USER)\n\n        job.create_script(\n            \"#PBS -joe -o localhost:/tmp -lplace=scatter \"\n            \"-lselect=1:ncpus=1:aoe=%s+1:ncpus=1\\n\" % req_aoe +\n            \" cd /tmp\\n\"\n      
      \"aprun -B sleep 10\\n\"\n            \"sleep 10\")\n        job_id = self.server.submit(job)\n        self.server.expect(JOB, {'job_state': 'R'}, id=job_id)\n\n        # Check that exec_vnode is a KNL vnode.\n        self.server.status(JOB, 'exec_vnode', id=job_id)\n        evnode = job.get_vnodes()\n        self.assertIn(evnode[0], klist, \"exec_vnode %s is not a KNL vnode.\"\n                      % (evnode[0]))\n        self.logger.info(\"exec_vnode %s is a KNL vnode.\" % (evnode[0]))\n\n        self.assertNotIn(evnode[1], klist, \"exec_vnode %s is a KNL\"\n                         \" vnode.\" % (evnode[1]))\n        self.logger.info(\"exec_vnode %s is not a KNL vnode.\" % (evnode[1]))\n\n        # Unset provisioning attributes.\n        self.unset_provisioning()\n\n    def test_pbs_alps_in_sync(self):\n        \"\"\"\n        Check for the presence of message indicating PBS and ALPS are\n        in sync.\n        \"\"\"\n        # Determine if BASIL 1.7 is supported.\n        try:\n            rv = self.mom.log_match(\n                \"This Cray system supports the BASIL 1.7 protocol.\",\n                n='ALL', max_attempts=10)\n        except PtlLogMatchError:\n            self.skipTest(\n                reason='Test not applicable for system not having BASIL 1.7')\n\n        # Determine if KNL vnodes are present.\n        knl_vnodes = self.get_knl_vnodes()\n\n        if len(knl_vnodes) == 0:\n            self.skipTest(reason='No KNL vnodes present')\n        else:\n            self.logger.info(\"KNL vnode list: %s\" % (knl_vnodes))\n\n        # Check for PBS ALPS Inventory Hook message.\n        now = time.time()\n        rv = self.mom.log_match(\"ALPS Inventory Check: PBS and ALPS\"\n                                \" are in sync\",\n                                starttime=now, interval=5)\n        self.assertTrue(rv)\n\n    def test_knl_batch_to_interactive(self):\n        \"\"\"\n        Change the mode of any two KNL nodes to interactive. 
Then check if the\n        PBS_alps_inventory_check hook picks up on the change and nodes are\n        marked as stale. Restore changes to hook and mode of KNL nodes.\n        \"\"\"\n        if self.du.platform == 'craysim':\n            self.skipTest(reason='xtprocadmin cmd is not on cray simulator')\n\n        # Find the list of KNL vnodes\n        klist = self.retklist()\n\n        # Change mode of two KNL nodes to interactive\n        if len(klist) >= 2:\n            k1 = klist[0]\n            k2 = klist[len(klist) - 1]\n            knl1 = re.search(r'\\d+', k1).group()\n            knl2 = re.search(r'\\d+', k2).group()\n\n        cmd = ['xtprocadmin', '-k', 'm', 'interactive', '-n', knl1]\n        ret = self.server.du.run_cmd(self.server.hostname,\n                                     cmd, logerr=True)\n        self.assertEqual(ret['rc'], 0)\n\n        cmd = ['xtprocadmin', '-k', 'm', 'interactive', '-n', knl2]\n        ret = self.server.du.run_cmd(self.server.hostname,\n                                     cmd, logerr=True)\n        self.assertEqual(ret['rc'], 0)\n\n        # Do Mom HUP\n        self.mom.signal('-HUP')\n\n        # Check that the nodes are now stale.\n        self.server.expect(VNODE, {'state': 'Stale'}, id=k1,\n                           max_attempts=10, interval=5)\n        self.server.expect(VNODE, {'state': 'Stale'}, id=k2)\n\n        # Change nodes back to batch mode\n        cmd = ['xtprocadmin', '-k', 'm', 'batch']\n        ret = self.server.du.run_cmd(self.server.hostname,\n                                     cmd, logerr=True)\n        self.assertEqual(ret['rc'], 0)\n\n        # Do Mom HUP\n        self.mom.signal('-HUP')\n\n        # Check that the nodes are now free.\n        self.server.expect(VNODE, {'state': 'free'}, id=k1,\n                           max_attempts=10, interval=5)\n        self.server.expect(VNODE, {'state': 'free'}, id=k2)\n\n    def test_job_run_on_knl_node(self):\n        \"\"\"\n        Change the mode of 
KNL nodes to batch.\n        Then check if the PBS_alps_inventory_check hook picks up on the change.\n        Submit job and confirm job should be in R state\n        \"\"\"\n        if self.du.platform == 'craysim':\n            self.skipTest(reason='xtprocadmin cmd is not on cray simulator')\n\n        # Find the list of KNL vnodes\n        klist = self.retklist()\n\n        # Change mode of all nodes to interactive\n        cmd = ['xtprocadmin', '-k', 'm', 'interactive']\n        ret = self.server.du.run_cmd(self.server.hostname,\n                                     cmd, logerr=True)\n        self.assertEqual(ret['rc'], 0)\n\n        # Change mode of two KNL nodes to batch\n        if len(klist) >= 2:\n            k1 = klist[0]\n            k2 = klist[len(klist) - 1]\n            knl1 = re.search(r'\\d+', k1).group()\n            knl2 = re.search(r'\\d+', k2).group()\n\n        cmd = ['xtprocadmin', '-k', 'm', 'batch', '-n', knl1]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd, logerr=True)\n        self.assertEqual(ret['rc'], 0)\n        cmd = ['xtprocadmin', '-k', 'm', 'batch', '-n', knl2]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd, logerr=True)\n        self.assertEqual(ret['rc'], 0)\n\n        # Do Mom HUP\n        self.mom.signal('-HUP')\n\n        # Check that the nodes are Free.\n        self.server.expect(VNODE, {'state': 'free'}, id=k1, max_attempts=10,\n                           interval=5)\n        self.server.expect(VNODE, {'state': 'free'}, id=k2)\n\n        # Submit few jobs\n        a = {'Resource_List.select': '1:vntype=cray_compute'}\n        job = Job(TEST_USER, attrs=a)\n\n        job_id = self.server.submit(job)\n        self.server.expect(JOB, {'job_state': 'R'}, id=job_id)\n        # Check that exec_vnode is a KNL vnode.\n        self.server.status(JOB, 'exec_vnode', id=job_id)\n        evnode = job.get_vnodes()[0]\n        self.assertIn(evnode, klist, \"exec_vnode %s is not a KNL vnode.\"\n           
           % (evnode))\n        self.logger.info(\"exec_vnode %s is a KNL vnode.\" % (evnode))\n\n        job2 = Job(TEST_USER, attrs=a)\n\n        job_id2 = self.server.submit(job2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=job_id2)\n        # Check that exec_vnode is a KNL vnode.\n        self.server.status(JOB, 'exec_vnode', id=job_id2)\n        evnode = job2.get_vnodes()[0]\n        self.assertIn(evnode, klist, \"exec_vnode %s is not a KNL vnode.\"\n                      % (evnode))\n        self.logger.info(\"exec_vnode %s is a KNL vnode.\" % (evnode))\n\n        job3 = Job(TEST_USER, attrs=a)\n\n        job_id3 = self.server.submit(job3)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=job_id3)\n\n        # Delete the Job1.\n        self.server.delete(job_id, wait=True)\n\n        # Verify Job3 should start running\n        self.server.expect(JOB, {'job_state': 'R'}, id=job_id3)\n        # Check that exec_vnode is a KNL vnode.\n        self.server.status(JOB, 'exec_vnode', id=job_id3)\n        evnode = job3.get_vnodes()[0]\n        self.assertIn(evnode, klist, \"exec_vnode %s is not a KNL vnode.\"\n                      % (evnode))\n        self.logger.info(\"exec_vnode %s is a KNL vnode.\" % (evnode))\n\n    def test_validate_pbs_xeon_phi_provision_hook(self):\n        \"\"\"\n        Verify the default attribute of pbs_hook PBS_xeon_phi_provision hook.\n        \"\"\"\n        if self.du.platform != 'cray':\n            self.skipTest(reason='pbs_hook PBS_xeon_phi_provision is not'\n                          ' available on non-cray machine')\n\n        attr = {'type': 'pbs', 'enabled': 'false', 'event': 'provision',\n                'alarm': 1800, 'order': 1, 'debug': 'false',\n                'user': 'pbsadmin', 'fail_action': 'none'}\n\n        self.server.manager(MGR_CMD_LIST, PBS_HOOK,\n                            attr, id='PBS_xeon_phi_provision')\n\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, {'enabled': 'true',\n            
                                        'alarm': 1000},\n                            id='PBS_xeon_phi_provision')\n        self.server.manager(MGR_CMD_LIST, PBS_HOOK, {'enabled': 'true',\n                                                     'alarm': 1000},\n                            id='PBS_xeon_phi_provision')\n\n        # Reset pbs_hook value to default PBS_xeon_phi_provision hook\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, {'enabled': 'false',\n                                                    'alarm': 1800},\n                            id='PBS_xeon_phi_provision')\n\n        self.server.manager(MGR_CMD_LIST, PBS_HOOK,\n                            attr, id='PBS_xeon_phi_provision')\n\n    def tearDown(self):\n        TestFunctional.tearDown(self)\n        if self.du.platform == 'cray':\n            # Change all nodes back to batch mode and restart PBS\n            cmd = ['xtprocadmin', '-k', 'm', 'batch']\n            self.logger.info(cmd)\n            ret = self.server.du.run_cmd(self.server.hostname,\n                                         cmd, logerr=True)\n            self.assertEqual(ret['rc'], 0)\n\n        # Restore hook freq to 300\n        self.server.manager(MGR_CMD_SET, PBS_HOOK,\n                            {'enabled': 'true', 'freq': 300},\n                            id='PBS_alps_inventory_check')\n        # Do Mom HUP\n        self.mom.signal('-HUP')\n"
  },
  {
    "path": "test/tests/functional/pbs_calendaring.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport time\nfrom tests.functional import *\nfrom ptl.utils.pbs_logutils import PBSLogUtils\n\n\nclass TestCalendaring(TestFunctional):\n\n    \"\"\"\n    This test suite tests if PBS scheduler calendars events correctly\n    \"\"\"\n\n    def test_topjob_start_time(self):\n        \"\"\"\n        In this test we test that the top job which gets added to the\n        calendar has estimated start time correctly set for future when\n        job history is enabled and opt_backfill_fuzzy is turned off.\n        \"\"\"\n\n        self.scheduler.set_sched_config({'strict_ordering': 'true all'})\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        a = {'backfill_depth': '2', 'job_history_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        # Turn opt_backfill_fuzzy off because we want to check if the job can\n        # run after performing every end event in calendaring code instead\n        # of rounding it off to next time boundary (default it 60 seconds)\n        a = {'opt_backfill_fuzzy': 'off'}\n        self.server.manager(MGR_CMD_SET, SCHED, a)\n\n        res_req = {'Resource_List.select': '1:ncpus=1',\n                   'Resource_List.walltime': 30,\n                   'array_indices_submitted': '1-6'}\n        j1 = Job(TEST_USER, attrs=res_req)\n        
j1.set_sleep_time(30)\n        jid1 = self.server.submit(j1)\n        j1_sub1 = j1.create_subjob_id(jid1, 1)\n        j1_sub2 = j1.create_subjob_id(jid1, 2)\n\n        res_req = {'Resource_List.select': '1:ncpus=1',\n                   'Resource_List.walltime': 30}\n        j2 = Job(TEST_USER, attrs=res_req)\n        jid2 = self.server.submit(j2)\n\n        self.server.expect(JOB, {'job_state': 'X'}, j1_sub1, interval=1)\n        self.server.expect(JOB, {'job_state': 'R'}, j1_sub2)\n        self.server.expect(JOB, {'job_state': 'Q'}, jid2)\n        job1 = self.server.status(JOB, id=jid1)\n        job2 = self.server.status(JOB, id=jid2)\n        time_now = int(time.time())\n\n        # get the estimated start time of both jobs\n        self.assertIn('estimated.start_time', job1[0])\n        est_val1 = job1[0]['estimated.start_time']\n        self.assertIn('estimated.start_time', job2[0])\n        est_val2 = job2[0]['estimated.start_time']\n        est1 = time.strptime(est_val1, \"%a %b %d %H:%M:%S %Y\")\n        est2 = time.strptime(est_val2, \"%a %b %d %H:%M:%S %Y\")\n        est_epoch1 = int(time.mktime(est1))\n        est_epoch2 = int(time.mktime(est2))\n\n        # since only one subjob of the array parent can become a top job,\n        # the second job must start 30 seconds after it because the\n        # walltime of the array job is 30 seconds.\n        self.assertEqual(est_epoch2, est_epoch1 + 30)\n        # Also make sure that, since the second subjob of the array is\n        # running, the third subjob's estimated.start_time is in the future.\n        self.assertGreater(est_epoch1, time_now)\n\n    def test_topjob_start_time_of_subjob(self):\n        \"\"\"\n        In this test we test that the subjob which gets added to the\n        calendar as a top job has its estimated start time correctly set when\n        opt_backfill_fuzzy is turned off.\n        \"\"\"\n\n        self.scheduler.set_sched_config({'strict_ordering': 'true all'})\n        a = {'resources_available.ncpus': 1}\n
        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        a = {'backfill_depth': '2'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        # Turn opt_backfill_fuzzy off because we want to check if the job can\n        # run after performing every end event in calendaring code instead\n        # of rounding it off to the next time boundary (default is 60 seconds)\n        a = {'opt_backfill_fuzzy': 'off'}\n        self.server.manager(MGR_CMD_SET, SCHED, a)\n\n        res_req = {'Resource_List.select': '1:ncpus=1',\n                   'Resource_List.walltime': 20,\n                   'array_indices_submitted': '1-6'}\n        j = Job(TEST_USER, attrs=res_req)\n        j.set_sleep_time(10)\n        jid = self.server.submit(j)\n        j1_sub1 = j.create_subjob_id(jid, 1)\n        j1_sub2 = j.create_subjob_id(jid, 2)\n\n        self.server.expect(JOB, {'job_state': 'X'}, j1_sub1, interval=1)\n        self.server.expect(JOB, {'job_state': 'R'}, j1_sub2)\n        job_arr = self.server.status(JOB, id=jid)\n\n        # check that the estimated start time is set on the job array\n        self.assertIn('estimated.start_time', job_arr[0])\n        errmsg = jid + \";Error in calculation of start time of top job\"\n        self.scheduler.log_match(errmsg, existence=False, max_attempts=10)\n\n    def test_topjob_fail(self):\n        \"\"\"\n        Test that when we fail to add a job to the calendar it doesn't\n        take up a topjob slot.  The server's backfill_depth is 1 by default,\n        so we just need to submit a job that can never run and a job that can.\n        The never-runnable job will fail to be added to the calendar and the\n        second job will be added.\n        \"\"\"\n\n        # We need two nodes to create the situation where a job can never run.\n        # We need to create this situation in such a way that the scheduler\n        # doesn't detect it.  If the scheduler detects that a job can't run,\n        # it won't try to add it to the calendar. 
 To do this, we ask for\n        # 1 node with 2 cpus.  There are 2 nodes with 1 cpu each.\n        attrs = {'resources_available.ncpus': 1}\n        self.mom.create_vnodes(attrib=attrs, num=2,\n                               sharednode=False)\n\n        self.scheduler.set_sched_config({'strict_ordering': 'True ALL'})\n\n        # Submit job to eat up all the resources\n        attrs = {'Resource_List.select': '2:ncpus=1',\n                 'Resource_List.walltime': '1:00:00'}\n        j1 = Job(TEST_USER, attrs)\n        jid1 = self.server.submit(j1)\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        # submit job that can never run.\n        attrs['Resource_List.select'] = '1:ncpus=2'\n        j2 = Job(TEST_USER, attrs)\n        jid2 = self.server.submit(j2)\n\n        # submit a job that can run, but just not now\n        attrs['Resource_List.select'] = '1:ncpus=1'\n        j3 = Job(TEST_USER, attrs)\n        jid3 = self.server.submit(j3)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        msg = jid2 + ';Error in calculation of start time of top job'\n        self.scheduler.log_match(msg)\n\n        msg = jid3 + ';Job is a top job and will run at'\n        self.scheduler.log_match(msg)\n\n    def test_topjob_bucket(self):\n        \"\"\"\n        In this test we test that a bucket job will be calendared to start\n        at the end of the last job on a node\n        \"\"\"\n\n        self.scheduler.set_sched_config({'strict_ordering': 'true all'})\n        a = {'resources_available.ncpus': 2}\n        self.mom.create_vnodes(a, 1)\n\n        res_req = {'Resource_List.select': '1:ncpus=1',\n                   'Resource_List.walltime': 30}\n        j1 = Job(TEST_USER, attrs=res_req)\n        j1.set_sleep_time(30)\n        jid1 = self.server.submit(j1)\n\n        res_req = {'Resource_List.select': '1:ncpus=1',\n                   
'Resource_List.walltime': 45}\n        j2 = Job(TEST_USER, attrs=res_req)\n        j2.set_sleep_time(45)\n        jid2 = self.server.submit(j2)\n\n        res_req = {'Resource_List.select': '1:ncpus=1',\n                   'Resource_List.place': 'excl'}\n        j3 = Job(TEST_USER, attrs=res_req)\n        jid3 = self.server.submit(j3)\n\n        self.server.expect(JOB, {'job_state': 'R'}, jid1)\n        self.server.expect(JOB, {'job_state': 'R'}, jid2)\n        self.server.expect(JOB, {'job_state': 'Q'}, jid3)\n        job1 = self.server.status(JOB, id=jid1)\n        job2 = self.server.status(JOB, id=jid2)\n        job3 = self.server.status(JOB, id=jid3)\n\n        end_time = time.mktime(time.strptime(job2[0]['stime'], '%c')) + 45\n        est_time = job3[0]['estimated.start_time']\n        est_time = time.mktime(time.strptime(est_time, '%c'))\n        self.assertAlmostEqual(end_time, est_time, delta=1)\n\n    def test_zero_resource_pushes_topjob(self):\n        \"\"\"\n        This test case tests the scenario where a job that requests zero\n        instance of a resource as the last resource in the select statement\n        pushes the start time of top jobs\n        \"\"\"\n        attrs = {'resources_available.ncpus': 4}\n        self.mom.create_vnodes(attrib=attrs, num=5,\n                               sharednode=False)\n\n        attr = {ATTR_RESC_TYPE: 'long', ATTR_RESC_FLAG: 'hn'}\n        self.server.manager(MGR_CMD_CREATE, RSC, attr, id='ngpus')\n\n        resources = self.scheduler.sched_config['resources']\n        resources = resources[:-1] + ', ngpus, zz\\\"'\n        a = {'job_sort_key': '\"job_priority HIGH ALL\"',\n             'resources': resources,\n             'strict_ordering': 'True ALL'}\n        self.scheduler.set_sched_config(a)\n\n        a = {'Resource_List.select': '2:ncpus=4',\n             'Resource_List.walltime': '1:00:00',\n             'Resource_List.place': 'vscatter'}\n\n        j = Job(TEST_USER)\n        j.set_attributes(a)\n 
       jid1 = self.server.submit(j)\n\n        j = Job(TEST_USER)\n        j.set_attributes(a)\n        jid2 = self.server.submit(j)\n\n        a = {'Resource_List.select': '5:ncpus=4',\n             'Resource_List.walltime': '1:00:00',\n             ATTR_p: \"1000\",\n             'Resource_List.place': 'vscatter'}\n\n        j = Job(TEST_USER)\n        j.set_attributes(a)\n        jid3 = self.server.submit(j)\n\n        a = {'Resource_List.select': '1:ncpus=4',\n             'Resource_List.walltime': '24:00:01',\n             'Resource_List.place': 'vscatter'}\n\n        j = Job(TEST_USER)\n        j.set_attributes(a)\n        jid4 = self.server.submit(j)\n\n        a = {'Resource_List.select': '1:ncpus=4:ngpus=0',\n             'Resource_List.walltime': '24:00:01',\n             'Resource_List.place': 'vscatter'}\n\n        j = Job(TEST_USER)\n        j.set_attributes(a)\n        jid5 = self.server.submit(j)\n\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid3)\n        c = \"Not Running: Job would conflict with reservation or top job\"\n        self.server.expect(JOB, {ATTR_state: 'Q', ATTR_comment: c}, id=jid4)\n        self.server.expect(JOB, {ATTR_state: 'Q', ATTR_comment: c}, id=jid5)\n\n    def test_zero_resource_job_conflict_resv(self):\n        \"\"\"\n        This test case tests the scenario where a job that requests zero\n        instance of a resource as the last resource in the select statement\n        pushes the start time of reservations\n        \"\"\"\n        attrs = {'resources_available.ncpus': 4}\n        self.mom.create_vnodes(attrib=attrs, num=5,\n                               sharednode=False)\n\n        attr = {ATTR_RESC_TYPE: 'long', ATTR_RESC_FLAG: 'hn'}\n        self.server.manager(MGR_CMD_CREATE, RSC, attr, id='ngpus')\n\n        resources = self.scheduler.sched_config['resources']\n        resources = 
resources[:-1] + ', ngpus, zz\\\"'\n        a = {'job_sort_key': '\"job_priority HIGH ALL\"',\n             'resources': resources,\n             'strict_ordering': 'True ALL'}\n        self.scheduler.set_sched_config(a)\n\n        a = {'Resource_List.select': '2:ncpus=4',\n             'Resource_List.walltime': '1:00:00',\n             'Resource_List.place': 'vscatter'}\n\n        j = Job(TEST_USER)\n        j.set_attributes(a)\n        jid1 = self.server.submit(j)\n\n        j = Job(TEST_USER)\n        j.set_attributes(a)\n        jid2 = self.server.submit(j)\n\n        now = int(time.time())\n        a = {'Resource_List.select': '5:ncpus=4',\n             'reserve_start': now + 3610,\n             'reserve_end': now + 6610,\n             'Resource_List.place': 'vscatter'}\n\n        r = Reservation(TEST_USER)\n        r.set_attributes(a)\n        rid = self.server.submit(r)\n        exp = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, exp, id=rid)\n\n        a = {'Resource_List.select': '1:ncpus=4',\n             'Resource_List.walltime': '24:00:01',\n             'Resource_List.place': 'vscatter'}\n\n        j = Job(TEST_USER)\n        j.set_attributes(a)\n        jid3 = self.server.submit(j)\n\n        a = {'Resource_List.select': '1:ncpus=4:ngpus=0',\n             'Resource_List.walltime': '24:00:01',\n             'Resource_List.place': 'vscatter'}\n\n        j = Job(TEST_USER)\n        j.set_attributes(a)\n        jid4 = self.server.submit(j)\n\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n        c = \"Not Running: Job would conflict with reservation or top job\"\n        self.server.expect(JOB, {ATTR_state: 'Q', ATTR_comment: c}, id=jid3)\n        self.server.expect(JOB, {ATTR_state: 'Q', ATTR_comment: c}, id=jid4)\n\n    def test_topjob_stale_estimates_clearing_on_clear_attr_set(self):\n        \"\"\"\n        In this test we test that 
former top job with stale estimate\n        gets the estimate cleared once the server attribute\n        clear_topjob_estimates_enable is set to True\n        \"\"\"\n\n        self.scheduler.set_sched_config({'strict_ordering': 'true all'})\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        a = {'backfill_depth': '2'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {'scheduler_iteration': '5'}\n        self.server.manager(MGR_CMD_SET, SCHED, a)\n\n        res_req = {'Resource_List.select': '1:ncpus=1',\n                   'Resource_List.walltime': 300}\n        j1 = Job(TEST_USER, attrs=res_req)\n        jid1 = self.server.submit(j1)\n\n        self.server.expect(JOB, {'job_state': 'R'}, jid1)\n\n        j2 = Job(TEST_USER, attrs=res_req)\n        jid2 = self.server.submit(j2)\n        job2 = self.server.status(JOB, id=jid2)\n        self.assertIn('estimated.start_time', job2[0])\n        self.assertIn('estimated.exec_vnode', job2[0])\n        self.server.expect(JOB, {'topjob': True}, jid2, max_attempts=5)\n\n        a = {'backfill_depth': '0'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        time.sleep(6)\n\n        job2 = self.server.status(JOB, id=jid2)\n        self.assertIn('estimated.start_time', job2[0])\n        self.assertIn('estimated.exec_vnode', job2[0])\n        self.server.expect(JOB, {'topjob': False}, jid2, max_attempts=5)\n\n        a = {'clear_topjob_estimates_enable': True}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        self.server.expect(JOB, 'estimated.start_time', id=jid2, op=UNSET,\n                           interval=1, max_attempts=10)\n        self.server.expect(JOB, 'estimated.exec_vnode', id=jid2, op=UNSET,\n                           interval=1, max_attempts=10)\n\n    def test_topjob_estimates_clearing_enabled(self):\n        \"\"\"\n        In this test we test that the top job which gets added to the\n        
calendar with a valid estimate has its estimate cleared once it loses\n        top job status. The clearing requires the server attribute\n        clear_topjob_estimates_enable to be set to true. Also, the job's topjob\n        attribute is set accordingly.\n        \"\"\"\n\n        self.scheduler.set_sched_config({'strict_ordering': 'true all'})\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        a = {'backfill_depth': '2', 'clear_topjob_estimates_enable': True}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {'scheduler_iteration': '5'}\n        self.server.manager(MGR_CMD_SET, SCHED, a)\n\n        res_req = {'Resource_List.select': '1:ncpus=1',\n                   'Resource_List.walltime': 300}\n        j1 = Job(TEST_USER, attrs=res_req)\n        jid1 = self.server.submit(j1)\n\n        self.server.expect(JOB, {'job_state': 'R'}, jid1)\n\n        j2 = Job(TEST_USER, attrs=res_req)\n        jid2 = self.server.submit(j2)\n        job2 = self.server.status(JOB, id=jid2)\n        self.assertIn('estimated.start_time', job2[0])\n        self.assertIn('estimated.exec_vnode', job2[0])\n        self.server.expect(JOB, {'topjob': True}, jid2, max_attempts=5)\n\n        a = {'backfill_depth': '0'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        time.sleep(6)\n\n        job2 = self.server.status(JOB, id=jid2)\n        self.assertNotIn('estimated.start_time', job2[0])\n        self.assertNotIn('estimated.exec_vnode', job2[0])\n        self.server.expect(JOB, {'topjob': False}, jid2, max_attempts=5)\n\n    def test_topjob_estimates_clearing_disabled(self):\n        \"\"\"\n        In this test we test that the top job which gets added to the\n        calendar with a valid estimate does not have its estimate cleared if it\n        loses top job status. The clearing is prevented by\n
        clear_topjob_estimates_enable being set to false or left unset.\n        Also, the job's topjob attribute is set accordingly.\n        \"\"\"\n\n        self.scheduler.set_sched_config({'strict_ordering': 'true all'})\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        a = {'backfill_depth': '2', 'clear_topjob_estimates_enable': False}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {'scheduler_iteration': '5'}\n        self.server.manager(MGR_CMD_SET, SCHED, a)\n\n        res_req = {'Resource_List.select': '1:ncpus=1',\n                   'Resource_List.walltime': 300}\n        j1 = Job(TEST_USER, attrs=res_req)\n        jid1 = self.server.submit(j1)\n\n        self.server.expect(JOB, {'job_state': 'R'}, jid1)\n\n        j2 = Job(TEST_USER, attrs=res_req)\n        jid2 = self.server.submit(j2)\n        job2 = self.server.status(JOB, id=jid2)\n        self.assertIn('estimated.start_time', job2[0])\n        self.assertIn('estimated.exec_vnode', job2[0])\n        self.server.expect(JOB, {'topjob': True}, jid2, max_attempts=5)\n\n        a = {'backfill_depth': '0'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        time.sleep(6)\n\n        job2 = self.server.status(JOB, id=jid2)\n        self.assertIn('estimated.start_time', job2[0])\n        self.assertIn('estimated.exec_vnode', job2[0])\n        self.server.expect(JOB, {'topjob': False}, jid2, max_attempts=5)\n"
  },
  {
    "path": "test/tests/functional/pbs_cgroups_hook.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport glob\n\nfrom tests.functional import *\n\n\n#\n# FUNCTION convert_size\n#\ndef convert_size(value, units='b'):\n    \"\"\"\n    Convert a string containing a size specification (e.g. \"1m\") to a\n    string using different units (e.g. 
\"1024k\").\n\n    This function only interprets a decimal number at the start of the string,\n    stopping at any unrecognized character and ignoring the rest of the string.\n\n    When down-converting (e.g. MB to KB), all calculations involve integers and\n    the result returned is exact. When up-converting (e.g. KB to MB) floating\n    point numbers are involved. The result is rounded up. For example:\n\n    1023MB -> GB yields 1g\n    1024MB -> GB yields 1g\n    1025MB -> GB yields 2g  <-- This value was rounded up\n\n    Pattern matching or conversion may result in exceptions.\n    \"\"\"\n    logs = {'b': 0, 'k': 10, 'm': 20, 'g': 30,\n            't': 40, 'p': 50, 'e': 60, 'z': 70, 'y': 80}\n    try:\n        new = units[0].lower()\n        if new not in logs:\n            raise ValueError('Invalid unit value')\n        result = re.match(r'([-+]?\\d+)([bkmgtpezy]?)',\n                          str(value).lower())\n        if not result:\n            raise ValueError('Unrecognized value')\n        val, old = result.groups()\n        if int(val) < 0:\n            raise ValueError('Value may not be negative')\n        if old not in logs:\n            old = 'b'\n        factor = logs[old] - logs[new]\n        val = float(val)\n        val *= 2 ** factor\n        if (val - int(val)) > 0.0:\n            val += 1.0\n        val = int(val)\n        return str(val) + units.lower()\n    except Exception:\n        return None\n\n\ndef have_swap():\n    \"\"\"\n    Returns 1 if swap space is not 0 otherwise returns 0\n    \"\"\"\n    tt = 0\n    with open(os.path.join(os.sep, 'proc', 'meminfo'), 'r') as fd:\n        for line in fd:\n            entry = line.split()\n            if ((entry[0] == 'SwapFree:') and (entry[1] != '0')):\n                tt = 1\n    return tt\n\n\ndef systemd_escape(buf):\n    \"\"\"\n    Escape strings for usage in system unit names\n    Some distros don't provide the systemd-escape command\n    \"\"\"\n    if not isinstance(buf, str):\n     
   raise ValueError('Not a basetype string')\n    ret = ''\n    for i, char in enumerate(buf):\n        if i < 1 and char == '.':\n            if (sys.version_info[0] < 3):\n                ret += '\\\\x' + '.'.encode('hex')\n            else:\n                ret += '\\\\x' + b'.'.hex()\n        elif char.isalnum() or char in '_.':\n            ret += char\n        elif char == '/':\n            ret += '-'\n        else:\n            # Will turn non-ASCII into UTF-8 hex sequence on both Py2/3\n            if (sys.version_info[0] < 3):\n                hexval = char.encode('hex')\n            else:\n                hexval = char.encode('utf-8').hex()\n            for j in range(0, len(hexval), 2):\n                ret += '\\\\x' + hexval[j:j + 2]\n    return ret\n\n\ndef count_items(items):\n    \"\"\"\n    Given a comma-separated string of numerical items of either\n    singular value or a range of values (<start>-<stop>),\n    return the actual number of items.\n    For example,\n         items=\"4-6,9,12-15\"\n         count(items) = 8\n    since items expands to \"4,5,6,9,12,13,14,15\"\n    \"\"\"\n    ct = 0\n    if items is None:\n        return ct\n    for i in items.split(','):\n        j = i.split('-')\n        if len(j) == 2:\n            ct += len(range(int(j[0]), int(j[1]))) + 1\n        else:\n            ct += 1\n    return ct\n\n\n@tags('mom', 'multi_node')\nclass TestCgroupsHook(TestFunctional):\n\n    \"\"\"\n    This test suite targets Linux Cgroups hook functionality.\n    \"\"\"\n\n    def is_memsw_enabled(self, host, mem_path):\n        \"\"\"\n        Check if system has swapcontrol enabled, then return true\n        else return false\n        \"\"\"\n        if not mem_path:\n            self.logger.info(\"memory controller not enabled on this host\")\n            return 'false'\n        # List all files and check if memsw files exists\n        if self.du.isfile(hostname=host,\n                          path=mem_path + os.path.sep\n           
               + \"memory.memsw.usage_in_bytes\"):\n            self.logger.info(\"memsw swap accounting is enabled on this host\")\n            return 'true'\n        else:\n            self.logger.info(\"memsw swap accounting not enabled on this host\")\n            return 'false'\n\n    def setUp(self):\n\n        self.hook_name = 'pbs_cgroups'\n        # Cleanup previous pbs_cgroup hook so as to not interfere with test\n        c_hook = self.server.filter(HOOK,\n                                    {'enabled': True}, id=self.hook_name)\n        if c_hook:\n            self.server.manager(MGR_CMD_DELETE, HOOK, id=self.hook_name)\n\n        a = {'resources_available.ncpus': (EQ, 0), 'state': 'free'}\n        no_cpu_vnodes = self.server.filter(VNODE, a, attrop=PTL_AND)\n        if no_cpu_vnodes:\n            # TestFunctional.setUp() would error out if leftover setup\n            # has no cpus vnodes. Best to cleanup vnodes altogether.\n            self.logger.info(\"Deleting the existing vnodes\")\n            self.mom.delete_vnode_defs()\n            self.mom.restart()\n\n        for mom in self.moms.values():\n            if mom.is_cpuset_mom():\n                mom.revert_to_default = False\n\n        TestFunctional.setUp(self)\n\n        # Some of the tests requires 2 or 3 nodes.\n        # Setting the default values when no mom is specified\n\n        self.vntypename = []\n        self.iscray = False\n        self.noprefix = False\n        self.tempfile = []\n        self.moms_list = []\n        self.hosts_list = []\n        self.nodes_list = []\n        self.paths = {}\n        for cnt in range(0, len(self.moms)):\n            mom = self.moms.values()[cnt]\n            if mom.is_cray():\n                self.iscray = True\n            host = mom.shortname\n            # Check if mom has needed cgroup mounted, otherwise skip test\n            self.paths[host] = self.get_paths(host)\n            if not self.paths[host]['cpuset']:\n                
self.skipTest('cpuset subsystem not mounted')\n            self.logger.info(\"%s: cgroup cpuset is mounted\" % host)\n            if self.iscray:\n                node = self.get_hostname(host)\n            else:\n                node = host\n            vntype = self.get_vntype(host)\n            if vntype is None:\n                vntype = \"no_cgroups\"\n            self.logger.info(\"vntype value is %s\" % vntype)\n            self.logger.info(\"Deleting the existing vnodes on %s\" % host)\n            mom.delete_vnode_defs()\n\n            # Restart MoM\n            time.sleep(2)\n            time_before_restart = int(time.time())\n            time.sleep(2)\n            mom.restart()\n\n            # Make sure that MoM has restarted far enough before reconfiguring\n            # as that sends a HUP and may otherwise interfere with the restart\n            # We send either a HELLO or a restart to server -- wait for that\n            mom.log_match(\"sent to server\",\n                          starttime=time_before_restart,\n                          n='ALL')\n\n            self.logger.info(\"increase log level for mom and \"\n                             \"set polling intervals\")\n            c = {'$logevent': '0xffffffff', '$clienthost': self.server.name,\n                 '$min_check_poll': 8, '$max_check_poll': 12}\n            mom.add_config(c)\n\n            self.moms_list.append(mom)\n            self.hosts_list.append(host)\n            self.nodes_list.append(node)\n            self.vntypename.append(vntype)\n\n        # Default self.mom to the primary mom, as some of the\n        # library methods assume that\n        self.mom = self.moms_list[0]\n        host = self.moms_list[0].shortname\n\n        # Delete ALL vnodes\n        # Re-creation moved to the end *after* we correctly set up the hook\n        self.server.manager(MGR_CMD_DELETE, NODE, None, \"\")\n\n        self.serverA = self.servers.values()[0].name\n        self.mem = 'true'\n        if not 
self.paths[host]['memory']:\n            self.mem = 'false'\n        self.swapctl = self.is_memsw_enabled(host, self.paths[host]['memsw'])\n        self.server.set_op_mode(PTL_CLI)\n        self.server.cleanup_jobs()\n        if not self.iscray:\n            self.remove_vntype()\n\n        self.eatmem_script = \"\"\"\nimport sys\nimport time\nMB = 2 ** 20\niterations = 1\nchunkSizeMb = 1\nsleeptime = 0\nif (len(sys.argv) > 1):\n    iterations = int(sys.argv[1])\nif (len(sys.argv) > 2):\n    chunkSizeMb = int(sys.argv[2])\nif (len(sys.argv) > 3):\n    sleeptime = int(sys.argv[3])\nif (iterations < 1):\n    print('Iteration count must be greater than zero.')\n    exit(1)\nif (chunkSizeMb < 1):\n    print('Chunk size must be greater than zero.')\n    exit(1)\ntotalSizeMb = chunkSizeMb * iterations\nprint('Allocating %d chunk(s) of size %dMB. (%dMB total)' %\n      (iterations, chunkSizeMb, totalSizeMb))\nbuf = ''\nfor i in range(iterations):\n    print('allocating %dMB' % ((i + 1) * chunkSizeMb))\n    buf += ('#' * MB * chunkSizeMb)\nif sleeptime > 0:\n    time.sleep(sleeptime)\n\"\"\"\n        self.eatmem_script2 = \"\"\"\nimport sys\nimport time\nMB = 2 ** 20\n\niterations1 = 1\nchunkSizeMb1 = 1\nsleeptime1 = 0\nif (len(sys.argv) > 1):\n    iterations1 = int(sys.argv[1])\nif (len(sys.argv) > 2):\n    chunkSizeMb1 = int(sys.argv[2])\nif (len(sys.argv) > 3):\n    sleeptime1 = int(sys.argv[3])\nif (iterations1 < 1):\n    print('Iteration count must be greater than zero.')\n    exit(1)\nif (chunkSizeMb1 < 1):\n    print('Chunk size must be greater than zero.')\n    exit(1)\ntotalSizeMb1 = chunkSizeMb1 * iterations1\nprint('Allocating %d chunk(s) of size %dMB. 
(%dMB total)' %\n      (iterations1, chunkSizeMb1, totalSizeMb1))\nstart_time1 = time.time()\nbuf = ''\nfor i in range(iterations1):\n    print('allocating %dMB' % ((i + 1) * chunkSizeMb1))\n    buf += ('#' * MB * chunkSizeMb1)\nend_time1 = time.time()\nif sleeptime1 > 0 and (end_time1 - start_time1) < sleeptime1 :\n    time.sleep(sleeptime1 - end_time1 + start_time1)\n\nif len(sys.argv) <= 4:\n    exit(0)\n\niterations2 = 1\nchunkSizeMb2 = 1\nsleeptime2 = 0\nif (len(sys.argv) > 4):\n    iterations2 = int(sys.argv[4])\nif (len(sys.argv) > 5):\n    chunkSizeMb2 = int(sys.argv[5])\nif (len(sys.argv) > 6):\n    sleeptime2 = int(sys.argv[6])\nif (iterations2 < 1):\n    print('Iteration count must be greater than zero.')\n    exit(1)\nif (chunkSizeMb2 < 1):\n    print('Chunk size must be greater than zero.')\n    exit(1)\ntotalSizeMb2 = chunkSizeMb2 * iterations2\nprint('Allocating %d chunk(s) of size %dMB. (%dMB total)' %\n      (iterations2, chunkSizeMb2, totalSizeMb2))\nstart_time2 = time.time()\n# Do not reinitialize buf!!\nfor i in range(iterations2):\n    print('allocating %dMB' % ((i + 1) * chunkSizeMb2))\n    buf += ('#' * MB * chunkSizeMb2)\nend_time2 = time.time()\nif sleeptime2 > 0 and (end_time2 - start_time2) < sleeptime2 :\n    time.sleep(sleeptime2 - end_time2 + start_time2)\n\"\"\"\n        self.eatmem_job1 = \\\n            '#PBS -joe\\n' \\\n            '#PBS -S /bin/bash\\n' \\\n            'sleep 10\\n' \\\n            'python_path=`which python 2>/dev/null`\\n' \\\n            'python3_path=`which python3 2>/dev/null`\\n' \\\n            'python2_path=`which python2 2>/dev/null`\\n' \\\n            'if [ -z \"$python_path\" ]; then\\n' \\\n            '    if [ -n \"$python3_path\" ]; then\\n' \\\n            '        python_path=$python3_path\\n' \\\n            '    else\\n' \\\n            '        python_path=$python2_path\\n' \\\n            '    fi\\n' \\\n            'fi\\n' \\\n            'if [ -z \"$python_path\" ]; then\\n' \\\n           
 '    echo Exiting -- no python found\\n' \\\n            '    exit 1\\n' \\\n            'fi\\n' \\\n            '$python_path - 80 10 10 <<EOF\\n' \\\n            '%s\\nEOF\\n' % self.eatmem_script\n        self.eatmem_job2 = \\\n            '#PBS -joe\\n' \\\n            '#PBS -S /bin/bash\\n' \\\n            'python_path=`which python 2>/dev/null`\\n' \\\n            'python3_path=`which python3 2>/dev/null`\\n' \\\n            'python2_path=`which python2 2>/dev/null`\\n' \\\n            'if [ -z \"$python_path\" ]; then\\n' \\\n            '    if [ -n \"$python3_path\" ]; then\\n' \\\n            '        python_path=$python3_path\\n' \\\n            '    else\\n' \\\n            '        python_path=$python2_path\\n' \\\n            '    fi\\n' \\\n            'fi\\n' \\\n            'if [ -z \"$python_path\" ]; then\\n' \\\n            '    echo Exiting -- no python found\\n' \\\n            '    exit 1\\n' \\\n            'fi\\n' \\\n            'let i=0; while [ $i -lt 400000 ]; do let i+=1 ; done\\n' \\\n            '$python_path - 200 2 10 <<EOF\\n' \\\n            '%s\\nEOF\\n' \\\n            'let i=0; while [ $i -lt 400000 ]; do let i+=1 ; done\\n' \\\n            '$python_path - 100 4 10 <<EOF\\n' \\\n            '%s\\nEOF\\n' \\\n            'let i=0; while [ $i -lt 400000 ]; do let i+=1 ; done\\n' \\\n            'sleep 25\\n' % (self.eatmem_script, self.eatmem_script)\n        self.eatmem_job3 = \\\n            '#PBS -joe\\n' \\\n            '#PBS -S /bin/bash\\n' \\\n            'python_path=`which python 2>/dev/null`\\n' \\\n            'python3_path=`which python3 2>/dev/null`\\n' \\\n            'python2_path=`which python2 2>/dev/null`\\n' \\\n            'if [ -z \"$python_path\" ]; then\\n' \\\n            '    if [ -n \"$python3_path\" ]; then\\n' \\\n            '        python_path=$python3_path\\n' \\\n            '    else\\n' \\\n            '        python_path=$python2_path\\n' \\\n            '    fi\\n' \\\n            'fi\\n' 
\\\n            'if [ -z \"$python_path\" ]; then\\n' \\\n            '    echo Exiting -- no python found\\n' \\\n            '    exit 1\\n' \\\n            'fi\\n' \\\n            'timeout 8 md5sum </dev/urandom\\n' \\\n            '# Args are segments1 sizeMB1 sleep1 segments2 sizeMB2 sleep2\\n' \\\n            '$python_path -  9 25 9  8 25 300 <<EOF\\n' \\\n            '%s\\nEOF\\n' % self.eatmem_script2\n\n        self.cpuset_mem_script = \"\"\"\nbase='%s'\necho \"cgroups base path for cpuset is $base\"\nif [ -d $base ]; then\n    cpupath1=$base/cpuset.cpus\n    cpupath2=$base/cpus\n    if [ -f $cpupath1 ]; then\n        cpus=`cat $cpupath1`\n    elif [ -f $cpupath2 ]; then\n        cpus=`cat $cpupath2`\n    fi\n    echo \"CpuIDs=${cpus}\"\n    mempath1=\"$base/cpuset.mems\"\n    mempath2=\"$base/mems\"\n    if [ -f $mempath1 ]; then\n        mems=`cat $mempath1`\n    elif [ -f $mempath2 ]; then\n        mems=`cat $mempath2`\n    fi\n    echo \"MemorySocket=${mems}\"\nelse\n    echo \"Cpuset subsystem job directory not created.\"\nfi\nmbase='%s'\nif [ \"${mbase}\" != \"None\" ] ; then\n    echo \"cgroups base path for memory is $mbase\"\n    if [ -d $mbase ]; then\n        mem_limit=`cat $mbase/memory.limit_in_bytes`\n        echo \"MemoryLimit=${mem_limit}\"\n        memsw_limit=`cat $mbase/memory.memsw.limit_in_bytes`\n        echo \"MemswLimit=${memsw_limit}\"\n    else\n        echo \"Memory subsystem job directory not created.\"\n    fi\nfi\n\"\"\"\n        # no need to cater for cgroup_prefix options, it is obviously sbp here\n        self.check_dirs_script = \"\"\"\nPBS_JOBID='%s'\njobnum=${PBS_JOBID%%.*}\ndevices_base='%s'\nif [ -d \"$devices_base/sbp\" ]; then\n    if [ -d \"$devices_base/sbp/$PBS_JOBID\" ]; then\n        devices_job=\"$devices_base/sbp/$PBS_JOBID\"\n    elif [ -d \"$devices_base/sbp.service/jobid/$PBS_JOBID\" ]; then\n        devices_job=\"$devices_base/sbp.service/jobid/$PBS_JOBID\"\n    else\n        
devices_job=\"$devices_base/sbp/sbp-${jobnum}.*.slice\"\n    fi\nelif [ -d \"$devices_base/sbp.service/jobid/$PBS_JOBID\" ]; then\n    devices_job=\"$devices_base/sbp.service/jobid/$PBS_JOBID\"\nelse\n    devices_job=\"$devices_base/sbp.slice/sbp-${jobnum}.*.slice\"\nfi\necho \"devices_job: $devices_job\"\nsleep 10\nif [ -d $devices_job ]; then\n    device_list=`cat $devices_job/devices.list`\n    echo \"${device_list}\"\nelse\n    echo \"Devices directory should be populated\"\nfi\n\"\"\"\n\n        self.check_gpu_script = \"\"\"#!/bin/bash\n#PBS -joe\n\njobnum=${PBS_JOBID%%.*}\ndevices_base=`grep cgroup /proc/mounts | grep devices | cut -d' ' -f2`\nif [ -d \"$devices_base/sbp\" ]; then\n    if [ -d \"$devices_base/sbp/$PBS_JOBID\" ]; then\n        devices_job=\"$devices_base/sbp/$PBS_JOBID\"\n    elif [ -d \"$devices_base/sbp.service/jobid/$PBS_JOBID\" ]; then\n        devices_job=\"$devices_base/sbp.service/jobid/$PBS_JOBID\"\n    else\n        devices_job=\"$devices_base/sbp/sbp-${jobnum}.*.slice\"\n    fi\nelif [ -d \"$devices_base/sbp.service/jobid/$PBS_JOBID\" ]; then\n    devices_job=\"$devices_base/sbp.service/jobid/$PBS_JOBID\"\nelse\n    devices_job=\"$devices_base/sbp.slice/sbp-${jobnum}.*.slice\"\nfi\n\ndevice_list=`cat $devices_job/devices.list`\ngrep \"195\" $devices_job/devices.list\n\nngpus=$(nvidia-smi -L | grep \"MIG-GPU\" | wc -l)\nif [ \"$ngpus\" -eq \"0\" ]; then\n    ngpus=$(nvidia-smi -L | grep \"GPU\" | wc -l)\nfi\necho \"There are $ngpus GPUs\"\necho $CUDA_VISIBLE_DEVICES\nsleep 10\n\"\"\"\n        self.cpu_controller_script = \"\"\"\nbase='%s'\necho \"cgroups base path for cpuset is $base\"\nif [ -d $base ]; then\n    shares=$base/cpu.shares\n    echo shares=${shares}\n    if [ -f $shares ]; then\n        cpu_shares=`cat $shares`\n        echo \"cpu_shares=${cpu_shares}\"\n    fi\n    cfs_period_us=$base/cpu.cfs_period_us\n    echo cfs_period_us=${cfs_period_us}\n    if [ -f $cfs_period_us ]; then\n        cpu_cfs_period_us=`cat $cfs_period_us`\n   
     echo \"cpu_cfs_period_us=${cpu_cfs_period_us}\"\n    fi\n    cfs_quota_us=$base/cpu.cfs_quota_us\n    echo cfs_quota_us=$cfs_quota_us\n    if [ -f $cfs_quota_us ]; then\n        cpu_cfs_quota_us=`cat $cfs_quota_us`\n        echo \"cpu_cfs_quota_us=${cpu_cfs_quota_us}\"\n    fi\nelse\n    echo \"Cpu subsystem job directory not created.\"\nfi\n\"\"\"\n        self.sleep15_job = \"\"\"#!/bin/bash\n#PBS -joe\nsleep 15\n\"\"\"\n        self.sleep30_job = \"\"\"#!/bin/bash\n#PBS -joe\nsleep 30\n\"\"\"\n        self.sleep600_job = \"\"\"#!/bin/bash\n#PBS -joe\nsleep 600\n\"\"\"\n        self.sleep5_job = \"\"\"#!/bin/bash\n#PBS -joe\nsleep 5\n\"\"\"\n        self.eat_cpu_script = \"\"\"#!/bin/bash\n#PBS -joe\nfor i in 1 2 3 4; do while : ; do : ; done & done\n\"\"\"\n        self.job_scr2 = \"\"\"#!/bin/bash\n#PBS -l select=host=%s:ncpus=1+ncpus=1\n#PBS -l place=vscatter\n#PBS -W umask=022\n#PBS -koe\necho \"$PBS_NODEFILE\"\ncat $PBS_NODEFILE\nsleep 300\n\"\"\"\n        self.job_scr3 = \"\"\"#!/bin/bash\n#PBS -l select=2:ncpus=1:mem=100mb\n#PBS -l place=vscatter\n#PBS -W umask=022\n#PBS -W tolerate_node_failures=job_start\n#PBS -koe\necho \"$PBS_NODEFILE\"\ncat $PBS_NODEFILE\nsleep 300\n\"\"\"\n        self.cfg0 = \"\"\"{\n    \"exclude_hosts\"         : [],\n    \"exclude_vntypes\"       : [],\n    \"run_only_on_hosts\"     : [],\n    \"periodic_resc_update\"  : false,\n    \"vnode_per_numa_node\"   : false,\n    \"online_offlined_nodes\" : false,\n    \"use_hyperthreads\"      : false,\n    \"cgroup\" : {\n        \"cpuacct\" : {\n            \"enabled\"         : false\n        },\n        \"cpuset\" : {\n            \"enabled\"         : false\n        },\n        \"devices\" : {\n            \"enabled\"         : false\n        },\n        \"hugetlb\" : {\n            \"enabled\"         : false\n        },\n        \"memory\" : {\n            \"enabled\"         : false\n        },\n        \"memsw\" : {\n            \"enabled\"         : false\n        }\n    
}\n}\n\"\"\"\n        self.cfg1 = \"\"\"{\n    \"exclude_hosts\"         : [%s],\n    \"exclude_vntypes\"       : [%s],\n    \"run_only_on_hosts\"     : [%s],\n    \"periodic_resc_update\"  : true,\n    \"vnode_per_numa_node\"   : false,\n    \"online_offlined_nodes\" : true,\n    \"use_hyperthreads\"      : false,\n    \"cgroup\":\n    {\n        \"cpuacct\":\n        {\n            \"enabled\"         : true,\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : []\n        },\n        \"cpuset\":\n        {\n            \"enabled\"         : true,\n            \"exclude_hosts\"   : [%s],\n            \"exclude_vntypes\" : []\n        },\n        \"devices\":\n        {\n            \"enabled\"         : false\n        },\n        \"hugetlb\":\n        {\n            \"enabled\"         : false\n        },\n        \"memory\":\n        {\n            \"enabled\"         : %s,\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : [],\n            \"soft_limit\"      : false,\n            \"default\"         : \"96MB\",\n            \"reserve_percent\" : \"0\",\n            \"reserve_amount\"  : \"0MB\"\n        },\n        \"memsw\":\n        {\n            \"enabled\"         : %s,\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : [],\n            \"default\"         : \"96MB\",\n            \"reserve_percent\" : \"0\",\n            \"reserve_amount\"  : \"128MB\"\n        }\n    }\n}\n\"\"\"\n        self.cfg2 = \"\"\"{\n    \"cgroup_prefix\"         : \"sbp\",\n    \"exclude_hosts\"         : [],\n    \"exclude_vntypes\"       : [],\n    \"run_only_on_hosts\"     : [],\n    \"periodic_resc_update\"  : false,\n    \"vnode_per_numa_node\"   : false,\n    \"online_offlined_nodes\" : false,\n    \"use_hyperthreads\"      : false,\n    \"cgroup\":\n    {\n        \"cpuacct\":\n        {\n            \"enabled\"         : false\n        },\n        \"cpuset\":\n        {\n            \"enabled\"         
: false\n        },\n        \"devices\":\n        {\n            \"enabled\"         : true,\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : [],\n            \"allow\"           : [\n                \"b *:* rwm\",\n                [\"console\",\"rwm\"],\n                [\"tty0\",\"rwm\", \"*\"],\n                \"c 1:* rwm\",\n                \"c 10:* rwm\"\n            ]\n        },\n        \"hugetlb\":\n        {\n            \"enabled\"         : false\n        },\n        \"memory\":\n        {\n            \"enabled\"         : false\n        },\n        \"memsw\":\n        {\n            \"enabled\"         : false\n        }\n    }\n}\n\"\"\"\n        self.cfg3 = \"\"\"{\n    \"exclude_hosts\"         : [],\n    \"exclude_vntypes\"       : [%s],\n    \"run_only_on_hosts\"     : [],\n    \"periodic_resc_update\"  : true,\n    \"vnode_per_numa_node\"   : %s,\n    \"online_offlined_nodes\" : true,\n    \"use_hyperthreads\"      : true,\n    \"cgroup\":\n    {\n        \"cpuacct\":\n        {\n            \"enabled\"         : true,\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : []\n        },\n        \"cpuset\":\n        {\n            \"enabled\"         : true,\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : [%s]\n        },\n        \"devices\":\n        {\n            \"enabled\"         : false\n        },\n        \"hugetlb\":\n        {\n            \"enabled\"         : false\n        },\n        \"memory\":\n        {\n            \"enabled\"         : %s,\n            \"default\"         : \"96MB\",\n            \"reserve_amount\"  : \"50MB\",\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : [%s]\n        },\n        \"memsw\":\n        {\n            \"enabled\"         : %s,\n            \"default\"         : \"96MB\",\n            \"reserve_amount\"  : \"45MB\",\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : [%s]\n 
       }\n    }\n}\n\"\"\"\n        self.cfg3b = \"\"\"{\n    \"exclude_hosts\"         : [],\n    \"exclude_vntypes\"       : [],\n    \"run_only_on_hosts\"     : [],\n    \"periodic_resc_update\"  : true,\n    \"vnode_per_numa_node\"   : %s,\n    \"online_offlined_nodes\" : true,\n    \"use_hyperthreads\"      : true,\n    \"cgroup\":\n    {\n        \"cpuacct\":\n        {\n            \"enabled\"         : true,\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : []\n        },\n        \"cpuset\":\n        {\n            \"enabled\"         : true,\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : []\n        },\n        \"devices\":\n        {\n            \"enabled\"         : false\n        },\n        \"hugetlb\":\n        {\n            \"enabled\"         : false\n        },\n        \"memory\":\n        {\n            \"enabled\"         : true,\n            \"default\"         : \"96MB\",\n            \"reserve_amount\"  : \"50MB\",\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : [],\n            \"swappiness\"      : 0\n        },\n        \"memsw\":\n        {\n            \"enabled\"         : false,\n            \"default\"         : \"96MB\",\n            \"reserve_amount\"  : \"45MB\",\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : []\n        }\n    }\n}\n\"\"\"\n        self.cfg4 = \"\"\"{\n    \"exclude_hosts\"         : [],\n    \"exclude_vntypes\"       : [\"no_cgroups\"],\n    \"run_only_on_hosts\"     : [],\n    \"periodic_resc_update\"  : true,\n    \"vnode_per_numa_node\"   : false,\n    \"online_offlined_nodes\" : true,\n    \"use_hyperthreads\"      : false,\n    \"cgroup\":\n    {\n        \"cpuacct\":\n        {\n            \"enabled\"         : true,\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : []\n        },\n        \"cpuset\":\n        {\n            \"enabled\"         : true,\n            
\"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : [\"no_cgroups_cpus\"]\n        },\n        \"devices\":\n        {\n            \"enabled\"         : false\n        },\n        \"hugetlb\":\n        {\n            \"enabled\"         : false\n        },\n        \"memory\":\n        {\n            \"enabled\"         : %s,\n            \"default\"         : \"96MB\",\n            \"reserve_amount\"  : \"100MB\",\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : [\"no_cgroups_mem\"]\n        },\n        \"memsw\":\n        {\n            \"enabled\"         : %s,\n            \"default\"         : \"96MB\",\n            \"reserve_amount\"  : \"90MB\",\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : []\n        }\n    }\n}\n\"\"\"\n        self.cfg5 = \"\"\"{\n    \"vnode_per_numa_node\" : %s,\n    \"cgroup\" : {\n        \"cpuset\" : {\n            \"enabled\"            : true,\n            \"exclude_cpus\"       : [%s],\n            \"mem_fences\"         : %s,\n            \"mem_hardwall\"       : %s,\n            \"memory_spread_page\" : %s\n        },\n        \"memory\" : {\n            \"enabled\" : %s\n        },\n        \"memsw\" : {\n            \"enabled\" : %s\n        }\n    }\n}\n\"\"\"\n        self.cfg6 = \"\"\"{\n    \"vnode_per_numa_node\" : false,\n    \"cgroup\" : {\n        \"memory\":\n        {\n            \"enabled\"         : %s,\n            \"default\"         : \"64MB\",\n            \"reserve_percent\" : \"0\",\n            \"reserve_amount\"  : \"0MB\"\n        },\n        \"memsw\":\n        {\n            \"enabled\"         : %s,\n            \"default\"         : \"64MB\",\n            \"reserve_percent\" : \"0\",\n            \"reserve_amount\"  : \"0MB\"\n        }\n    }\n}\n\"\"\"\n        self.cfg7 = \"\"\"{\n    \"exclude_hosts\"         : [],\n    \"exclude_vntypes\"       : [],\n    \"run_only_on_hosts\"     : [],\n    \"periodic_resc_update\"  : true,\n    
\"vnode_per_numa_node\"   : true,\n    \"online_offlined_nodes\" : true,\n    \"use_hyperthreads\"      : false,\n    \"cgroup\" : {\n        \"cpuacct\" : {\n            \"enabled\"            : true,\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : []\n        },\n        \"cpuset\" : {\n            \"enabled\"            : true,\n            \"exclude_cpus\"       : [],\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : []\n        },\n        \"devices\" : {\n            \"enabled\"            : false\n        },\n        \"hugetlb\" : {\n            \"enabled\"            : false\n        },\n        \"memory\" : {\n            \"enabled\"            : %s,\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : [],\n            \"default\"            : \"256MB\",\n            \"reserve_amount\"     : \"64MB\"\n        },\n        \"memsw\" : {\n            \"enabled\"            : %s,\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : [],\n            \"default\"            : \"256MB\",\n            \"reserve_amount\"     : \"64MB\"\n        }\n    }\n}\n\"\"\"\n        self.cfg8 = \"\"\"{\n    \"exclude_hosts\"         : [],\n    \"exclude_vntypes\"       : [%s],\n    \"run_only_on_hosts\"     : [],\n    \"periodic_resc_update\"  : true,\n    \"vnode_per_numa_node\"   : false,\n    \"online_offlined_nodes\" : true,\n    \"use_hyperthreads\"      : true,\n    \"ncpus_are_cores\"       : true,\n    \"cgroup\":\n    {\n        \"cpuacct\":\n        {\n            \"enabled\"         : true,\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : []\n        },\n        \"cpuset\":\n        {\n            \"enabled\"         : true,\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : [%s]\n        },\n        \"devices\":\n        {\n            \"enabled\"         : false\n        },\n        \"hugetlb\":\n      
  {\n            \"enabled\"         : false\n        },\n        \"memory\":\n        {\n            \"enabled\"         : %s,\n            \"default\"         : \"96MB\",\n            \"reserve_amount\"  : \"50MB\",\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : [%s]\n        },\n        \"memsw\":\n        {\n            \"enabled\"         : %s,\n            \"default\"         : \"96MB\",\n            \"reserve_amount\"  : \"45MB\",\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : [%s]\n        }\n    }\n}\n\"\"\"\n        self.cfg9 = \"\"\"{\n    \"exclude_hosts\"         : [],\n    \"exclude_vntypes\"       : [],\n    \"run_only_on_hosts\"     : [],\n    \"periodic_resc_update\"  : true,\n    \"vnode_per_numa_node\"   : true,\n    \"online_offlined_nodes\" : true,\n    \"use_hyperthreads\"      : true,\n    \"cgroup\" : {\n        \"cpuacct\" : {\n            \"enabled\"            : true,\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : []\n        },\n        \"cpuset\" : {\n            \"enabled\"            : true,\n            \"exclude_cpus\"       : [],\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : []\n        },\n        \"devices\" : {\n            \"enabled\"            : false\n        },\n        \"hugetlb\" : {\n            \"enabled\"            : false\n        },\n        \"memory\" : {\n            \"enabled\"            : %s,\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : [],\n            \"default\"            : \"256MB\",\n            \"reserve_amount\"     : \"64MB\"\n        },\n        \"memsw\" : {\n            \"enabled\"            : %s,\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : [],\n            \"default\"            : \"256MB\",\n            \"reserve_amount\"     : \"64MB\"\n        }\n    }\n}\n\"\"\"\n        self.cfg10 = \"\"\"{\n    
\"exclude_hosts\"         : [],\n    \"exclude_vntypes\"       : [\"no_cgroups\"],\n    \"run_only_on_hosts\"     : [],\n    \"periodic_resc_update\"  : true,\n    \"vnode_per_numa_node\"   : false,\n    \"online_offlined_nodes\" : true,\n    \"use_hyperthreads\"      : true,\n    \"cgroup\":\n    {\n        \"cpuacct\":\n        {\n            \"enabled\"         : true,\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : []\n        },\n        \"cpuset\":\n        {\n            \"enabled\"         : true,\n            \"exclude_cpus\"    : [],\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : []\n        },\n        \"devices\":\n        {\n            \"enabled\"         : false\n        },\n        \"hugetlb\":\n        {\n            \"enabled\"         : false\n        },\n        \"memory\":\n        {\n            \"enabled\"         : %s,\n            \"default\"         : \"256MB\",\n            \"reserve_amount\"  : \"64MB\",\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : []\n        },\n        \"memsw\":\n        {\n            \"enabled\"         : %s,\n            \"default\"         : \"256MB\",\n            \"reserve_amount\"  : \"64MB\",\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : []\n        },\n        \"cpu\" : {\n            \"enabled\"                    : true,\n            \"enforce_per_period_quota\"   : true\n          }\n    }\n}\n\"\"\"\n        self.cfg11 = \"\"\"{\n    \"exclude_hosts\"         : [],\n    \"exclude_vntypes\"       : [\"no_cgroups\"],\n    \"run_only_on_hosts\"     : [],\n    \"periodic_resc_update\"  : true,\n    \"vnode_per_numa_node\"   : false,\n    \"online_offlined_nodes\" : true,\n    \"use_hyperthreads\"      : true,\n    \"cgroup\":\n    {\n        \"cpuacct\":\n        {\n            \"enabled\"         : true,\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : []\n        },\n       
 \"cpuset\":\n        {\n            \"enabled\"         : true,\n            \"exclude_cpus\"    : [],\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : []\n        },\n        \"devices\":\n        {\n            \"enabled\"         : false\n        },\n        \"hugetlb\":\n        {\n            \"enabled\"         : false\n        },\n        \"memory\":\n        {\n            \"enabled\"         : %s,\n            \"default\"         : \"256MB\",\n            \"reserve_amount\"  : \"64MB\",\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : []\n        },\n        \"memsw\":\n        {\n            \"enabled\"         : %s,\n            \"default\"         : \"256MB\",\n            \"reserve_amount\"  : \"64MB\",\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : []\n        },\n        \"cpu\" : {\n            \"enabled\"                    : true,\n            \"enforce_per_period_quota\"   : true,\n            \"cfs_period_us\"              : %d,\n            \"cfs_quota_fudge_factor\"     : %f\n          }\n    }\n}\n\"\"\"\n        self.cfg12 = \"\"\"{\n    \"exclude_hosts\"         : [],\n    \"exclude_vntypes\"       : [\"no_cgroups\"],\n    \"run_only_on_hosts\"     : [],\n    \"periodic_resc_update\"  : true,\n    \"vnode_per_numa_node\"   : false,\n    \"online_offlined_nodes\" : true,\n    \"use_hyperthreads\"      : false,\n    \"cgroup\":\n    {\n        \"cpuacct\":\n        {\n            \"enabled\"         : true,\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : []\n        },\n        \"cpuset\":\n        {\n            \"enabled\"         : true,\n            \"exclude_cpus\"    : [],\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : [],\n            \"allow_zero_cpus\" : true\n        },\n        \"devices\":\n        {\n            \"enabled\"         : false\n        },\n        \"hugetlb\":\n        {\n            
\"enabled\"         : false\n        },\n        \"memory\":\n        {\n            \"enabled\"         : %s,\n            \"default\"         : \"256MB\",\n            \"reserve_amount\"  : \"64MB\",\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : []\n        },\n        \"memsw\":\n        {\n            \"enabled\"         : %s,\n            \"default\"         : \"256MB\",\n            \"reserve_amount\"  : \"64MB\",\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : []\n        },\n        \"cpu\" : {\n            \"enabled\"                    : true,\n            \"enforce_per_period_quota\"   : true\n          }\n    }\n}\n\"\"\"\n        self.cfg13 = \"\"\"{\n    \"exclude_hosts\"         : [],\n    \"exclude_vntypes\"       : [\"no_cgroups\"],\n    \"run_only_on_hosts\"     : [],\n    \"periodic_resc_update\"  : true,\n    \"vnode_per_numa_node\"   : false,\n    \"online_offlined_nodes\" : true,\n    \"use_hyperthreads\"      : false,\n    \"cgroup\":\n    {\n        \"cpuacct\":\n        {\n            \"enabled\"         : true,\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : []\n        },\n        \"cpuset\":\n        {\n            \"enabled\"         : true,\n            \"exclude_cpus\"    : [],\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : [],\n            \"allow_zero_cpus\" : true\n        },\n        \"devices\":\n        {\n            \"enabled\"         : false\n        },\n        \"hugetlb\":\n        {\n            \"enabled\"         : false\n        },\n        \"memory\":\n        {\n            \"enabled\"         : %s,\n            \"default\"         : \"256MB\",\n            \"reserve_amount\"  : \"64MB\",\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : []\n        },\n        \"memsw\":\n        {\n            \"enabled\"         : %s,\n            \"default\"         : \"256MB\",\n            
\"reserve_amount\"  : \"64MB\",\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : []\n        },\n        \"cpu\" : {\n            \"enabled\"                    : true,\n            \"enforce_per_period_quota\"   : true,\n            \"cfs_period_us\"              : %d,\n            \"cfs_quota_fudge_factor\"     : %f,\n            \"zero_cpus_shares_fraction\"  : %f,\n            \"zero_cpus_quota_fraction\"   : %f\n          }\n    }\n}\n\"\"\"\n        self.cfg14 = \"\"\"{\n    \"exclude_hosts\"         : [],\n    \"exclude_vntypes\"       : [],\n    \"run_only_on_hosts\"     : [],\n    \"periodic_resc_update\"  : false,\n    \"vnode_per_numa_node\"   : false,\n    \"online_offlined_nodes\" : false,\n    \"use_hyperthreads\"      : false,\n    \"discover_gpus\"         : %s,\n    \"cgroup\":\n    {\n        \"cpuacct\":\n        {\n            \"enabled\"         : true\n        },\n        \"cpuset\":\n        {\n            \"enabled\"         : false\n        },\n        \"devices\":\n        {\n            \"enabled\"         : %s,\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : [],\n            \"allow\"           : [\n                \"b *:* rwm\",\n                [\"console\",\"rwm\"],\n                [\"tty0\",\"rwm\", \"*\"],\n                \"c 1:* rwm\",\n                \"c 10:* rwm\"\n            ]\n        },\n        \"hugetlb\":\n        {\n            \"enabled\"         : false\n        },\n        \"memory\":\n        {\n            \"enabled\"         : true\n        },\n        \"memsw\":\n        {\n            \"enabled\"         : false\n        }\n    }\n}\n\"\"\"\n        self.cfg15 = \"\"\"{\n    \"cgroup_prefix\"         : \"pbs_jobs\",\n    \"exclude_hosts\"         : [],\n    \"exclude_vntypes\"       : [\"no_cgroups\"],\n    \"run_only_on_hosts\"     : [],\n    \"periodic_resc_update\"  : true,\n    \"vnode_per_numa_node\"   : %s,\n    \"online_offlined_nodes\" : true,\n    
\"use_hyperthreads\"      : false,\n    \"ncpus_are_cores\"       : false,\n    \"cgroup\" : {\n        \"cpuacct\" : {\n            \"enabled\"            : true,\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : []\n        },\n        \"cpuset\" : {\n            \"enabled\"            : true,\n            \"exclude_cpus\"       : [],\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : [],\n            \"mem_fences\"         : true,\n            \"mem_hardwall\"       : false,\n            \"memory_spread_page\" : false,\n            \"allow_zero_cpus\"    : true\n        },\n        \"devices\" : {\n            \"enabled\"            : true,\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : [],\n            \"allow\"              : [\n                \"b *:* rwm\",\n                \"c *:* rwm\"\n            ]\n        },\n        \"hugetlb\" : {\n            \"enabled\"            : false,\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : [],\n            \"default\"            : \"0MB\",\n            \"reserve_percent\"    : 0,\n            \"reserve_amount\"     : \"0MB\"\n        },\n        \"memory\" : {\n            \"enabled\"            : true,\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : [],\n            \"soft_limit\"         : false,\n            \"default\"            : \"256MB\",\n            \"reserve_percent\"    : 0,\n            \"swappiness\"         : 0,\n            \"reserve_amount\"     : \"1GB\",\n            \"enforce_default\"    : true,\n            \"exclhost_ignore_default\" : true\n        },\n        \"memsw\" : {\n            \"enabled\"            : true,\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : [],\n            \"default\"            : \"256MB\",\n            \"reserve_percent\"    : 0,\n            \"reserve_amount\"     : \"10GB\",\n      
      \"manage_cgswap\"      : true,\n            \"enforce_default\"    : true,\n            \"exclhost_ignore_default\" : true\n        }\n    }\n}\n\"\"\"\n        self.cfg16 = \"\"\"{\n    \"cgroup_prefix\"         : \"pbs_jobs\",\n    \"exclude_hosts\"         : [],\n    \"exclude_vntypes\"       : [\"no_cgroups\"],\n    \"run_only_on_hosts\"     : [],\n    \"periodic_resc_update\"  : true,\n    \"vnode_per_numa_node\"   : false,\n    \"online_offlined_nodes\" : true,\n    \"use_hyperthreads\"      : false,\n    \"ncpus_are_cores\"       : false,\n    \"manage_rlimit_as\"      : true,\n    \"cgroup\" : {\n        \"cpuacct\" : {\n            \"enabled\"            : true,\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : []\n        },\n        \"cpuset\" : {\n            \"enabled\"            : true,\n            \"exclude_cpus\"       : [],\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : [],\n            \"mem_fences\"         : true,\n            \"mem_hardwall\"       : false,\n            \"memory_spread_page\" : false\n        },\n        \"devices\" : {\n            \"enabled\"            : true,\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : [],\n            \"allow\"              : [\n                \"b *:* rwm\",\n                \"c *:* rwm\"\n            ]\n        },\n        \"hugetlb\" : {\n            \"enabled\"            : false,\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : [],\n            \"default\"            : \"0MB\",\n            \"reserve_percent\"    : 0,\n            \"reserve_amount\"     : \"0MB\"\n        },\n        \"memory\" : {\n            \"enabled\"            : true,\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : [],\n            \"soft_limit\"         : false,\n            \"default\"            : \"100MB\",\n            \"reserve_percent\"    : 0,\n         
   \"swappiness\"         : 0,\n            \"reserve_amount\"     : \"1GB\",\n            \"enforce_default\"    : %s,\n            \"exclhost_ignore_default\" : true\n        },\n        \"memsw\" : {\n            \"enabled\"            : true,\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : [],\n            \"default\"            : \"100MB\",\n            \"reserve_percent\"    : 0,\n            \"reserve_amount\"     : \"10GB\",\n            \"manage_cgswap\"      : true,\n            \"enforce_default\"    : %s,\n            \"exclhost_ignore_default\" : true\n        }\n    }\n}\n\"\"\"\n        self.cfg17 = \"\"\"{\n    \"cgroup_prefix\"         : \"pbs_jobs\",\n    \"exclude_hosts\"         : [],\n    \"exclude_vntypes\"       : [\"no_cgroups\"],\n    \"run_only_on_hosts\"     : [],\n    \"periodic_resc_update\"  : true,\n    \"vnode_per_numa_node\"   : false,\n    \"online_offlined_nodes\" : true,\n    \"use_hyperthreads\"      : false,\n    \"ncpus_are_cores\"       : false,\n    \"manage_rlimit_as\"      : true,\n    \"cgroup\" : {\n        \"cpuacct\" : {\n            \"enabled\"            : true,\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : []\n        },\n        \"cpuset\" : {\n            \"mount_path\"          : \"/sys/fs/cgroup/cpuset\",\n            \"enabled\"            : true,\n            \"exclude_cpus\"       : [],\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : [],\n            \"mem_fences\"         : true,\n            \"mem_hardwall\"       : false,\n            \"memory_spread_page\" : false\n        },\n        \"devices\" : {\n            \"enabled\"            : false,\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : [],\n            \"allow\"              : [\n                \"b *:* rwm\",\n                \"c *:* rwm\"\n            ]\n        },\n        \"hugetlb\" : {\n            \"enabled\"       
     : false,\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : [],\n            \"default\"            : \"0MB\",\n            \"reserve_percent\"    : 0,\n            \"reserve_amount\"     : \"0MB\"\n        },\n        \"memory\" : {\n            \"enabled\"            : true,\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : [],\n            \"soft_limit\"         : false,\n            \"default\"            : \"100MB\",\n            \"reserve_percent\"    : 0,\n            \"swappiness\"         : 0,\n            \"reserve_amount\"     : \"1GB\",\n            \"enforce_default\"    : true,\n            \"exclhost_ignore_default\" : true\n        },\n        \"memsw\" : {\n            \"enabled\"            : false,\n            \"exclude_hosts\"      : [],\n            \"exclude_vntypes\"    : [],\n            \"default\"            : \"100MB\",\n            \"reserve_percent\"    : 0,\n            \"reserve_amount\"     : \"10GB\",\n            \"manage_cgswap\"      : true,\n            \"enforce_default\"    : true,\n            \"exclhost_ignore_default\" : true\n        }\n    }\n}\n\"\"\"\n\n        Job.dflt_attributes[ATTR_k] = 'oe'\n        # Increase the server log level\n        a = {'log_events': '4095'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        # Configure the scheduler to schedule using vmem\n        a = {'resources': 'ncpus,mem,vmem,host,vnode,ngpus,nmics'}\n        self.scheduler.set_sched_config(a)\n        # Create resources\n        attr = {'type': 'long', 'flag': 'nh'}\n\n        rss = self.server.status(RSC)\n        self.logger.info('resources on server are: %s' % str(rss))\n        if not next((item for item in rss if item['id'] == 'nmics'), None):\n            self.server.manager(MGR_CMD_CREATE, RSC, attr, id='nmics',\n                                logerr=False)\n        if not next((item for item in rss if item['id'] == 'ngpus'), None):\n            
self.server.manager(MGR_CMD_CREATE, RSC, attr, id='ngpus',\n                                logerr=False)\n        # Import the hook\n        self.hook_file = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                      'lib',\n                                      'python',\n                                      'altair',\n                                      'pbs_hooks',\n                                      'pbs_cgroups.PY')\n\n        # Load hook, but do not check MoMs\n        # since the vnodes are deleted on the server\n        self.load_hook(self.hook_file, mom_checks=False)\n\n        # Recreate the nodes moved to the end, after we set up\n        # the hook with its default config\n        # Make sure the load_hook is done on server first\n        time.sleep(2)\n        for host in self.hosts_list:\n            self.server.manager(MGR_CMD_CREATE, NODE, id=host)\n\n        # Make sure that by the time we send a HUP and the test\n        # actually tinkers with the hooks once more,\n        # MoMs will already have gone through their initial setup\n        # and copied the hooks after the new hello from the server\n\n        # perhaps we could replace this by matching a HELLO from\n        # the server\n        time.sleep(10)\n\n        # HUP mom so exechost_startup hook is run for each mom...\n        for mom in self.moms_list:\n            mom.signal('-HUP')\n\n        # ...then wait for exechost_startup updates to propagate to server\n        time.sleep(6)\n\n        # queuejob hook\n        self.qjob_hook_body = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"queuejob hook executed\")\n# Save current select spec in resource 'site'\ne.job.Resource_List[\"site\"] = str(e.job.Resource_List[\"select\"])\n\n# Add 1 chunk to each chunk (except the first chunk) in the job's select spec\nnew_select = e.job.Resource_List[\"select\"].increment_chunks(1)\ne.job.Resource_List[\"select\"] = new_select\n\n# Make job tolerate 
node failures that occur only during start.\ne.job.tolerate_node_failures = \"job_start\"\n\"\"\"\n        # launch hook\n        self.launch_hook_body = \"\"\"\nimport pbs\nimport time\ne=pbs.event()\n\npbs.logmsg(pbs.LOG_DEBUG, \"Executing launch\")\n\n# print out the vnode_list[] values\nfor vn in e.vnode_list:\n    v = e.vnode_list[vn]\n    pbs.logjobmsg(e.job.id, \"launch: found vnode_list[\" + v.name + \"]\")\n\n# print out the vnode_list_fail[] values:\nfor vn in e.vnode_list_fail:\n    v = e.vnode_list_fail[vn]\n    pbs.logjobmsg(e.job.id, \"launch: found vnode_list_fail[\" + v.name + \"]\")\nif e.job.in_ms_mom():\n    pj = e.job.release_nodes(keep_select=%s)\n    if pj is None:\n        e.job.Hold_Types = pbs.hold_types(\"s\")\n        e.job.rerun()\n        e.reject(\"unsuccessful at LAUNCH\")\n\"\"\"\n        # resize hook\n        self.resize_hook_body = \"\"\"\nimport pbs\ne=pbs.event()\nif %s e.job.in_ms_mom():\n    e.reject(\"Cannot resize the job\")\n\"\"\"\n\n    def get_paths(self, host):\n        \"\"\"\n        Returns a dictionary containing the location where each cgroup\n        is mounted on host.\n        \"\"\"\n        paths = {'pids': None,\n                 'blkio': None,\n                 'systemd': None,\n                 'cpuset': None,\n                 'memory': None,\n                 'memsw': None,\n                 'cpuacct': None,\n                 'devices': None,\n                 'cpu': None,\n                 'hugetlb': None,\n                 'perf_event': None,\n                 'freezer': None,\n                 'net_cls': None,\n                 'net_prio': None}\n        # Loop through the mounts and collect the ones for cgroups\n        fd = self.du.cat(host, '/proc/mounts')\n        for line in fd['out']:\n            entries = line.split()\n            if entries[2] != 'cgroup':\n                continue\n            flags = entries[3].split(',')\n            if 'noprefix' in flags:\n                self.noprefix = 
True\n            subsys = os.path.basename(entries[1])\n            paths[subsys] = entries[1]\n            if 'memory' in flags:\n                paths['memsw'] = paths[subsys]\n                paths['memory'] = paths[subsys]\n            if 'cpuacct' in flags:\n                paths['cpuacct'] = paths[subsys]\n            if 'devices' in flags:\n                paths['devices'] = paths[subsys]\n            if 'cpu' in flags:\n                paths['cpu'] = paths[subsys]\n            # Add these to support future unified hierarchy\n            # (everything in one dir)\n            if paths['pids'] is None and 'pids' in flags:\n                paths['pids'] = paths[subsys]\n            if paths['blkio'] is None and 'blkio' in flags:\n                paths['blkio'] = paths[subsys]\n            if paths['systemd'] is None and 'systemd' in flags:\n                paths['systemd'] = paths[subsys]\n            if paths['cpuset'] is None and 'cpuset' in flags:\n                paths['cpuset'] = paths[subsys]\n            if paths['hugetlb'] is None and 'hugetlb' in flags:\n                paths['hugetlb'] = paths[subsys]\n            if paths['perf_event'] is None and 'perf_event' in flags:\n                paths['perf_event'] = paths[subsys]\n            if paths['freezer'] is None and 'freezer' in flags:\n                paths['freezer'] = paths[subsys]\n            if paths['net_cls'] is None and 'net_cls' in flags:\n                paths['net_cls'] = paths[subsys]\n            if paths['net_prio'] is None and 'net_prio' in flags:\n                paths['net_prio'] = paths[subsys]\n        return paths\n\n    def is_dir(self, cpath, host):\n        \"\"\"\n        Returns True if the directory exists, otherwise False\n        \"\"\"\n        for _ in range(5):\n            rv = self.du.isdir(hostname=host, path=cpath, sudo=True)\n            if rv:\n                return True\n            time.sleep(0.1)\n        return False\n\n    def is_file(self, cpath, host):\n        
\"\"\"\n        Returns True if path exists otherwise false\n        \"\"\"\n        for _ in range(5):\n            rv = self.du.isfile(hostname=host, path=cpath, sudo=True)\n            if rv:\n                return True\n            time.sleep(0.5)\n        return False\n\n    def get_cgroup_job_dir(self, subsys, jobid, host):\n        \"\"\"\n        Returns path of subsystem for jobid\n        \"\"\"\n        basedir = self.paths[host][subsys]\n        # One of the entries in the following list should exist\n        #\n        # This cleaned version assumes cgroup_prefix is always pbs_jobs,\n        # i.e. that cgroup_prefix is not changed if you use this routine\n        #\n        # The separate test for a different prefix (\"sbp\") uses its own\n        # script instead; that script need not support multi-host jobs\n        #\n        # Older possible per job paths (relative to the basedir) looked:\n        # 1) <prefix>.slice/<prefix>-<jobid>.slice\n        #     (and <jobid> needs to be passed through systemd_escape)\n        # 2) <prefix>/<jobid>\n        #\n        # Some older hooks used either depending on the OS platform\n        # which was the reason to support a list in the first place\n        #\n        # If you need to add paths to make the tests support older hooks,\n        # put the least likely paths at the end of the list, to avoid\n        # changing test timings too much.\n        #\n        jobdirs = [os.path.join(basedir, 'pbs_jobs.service/jobid', jobid)]\n        for jdir in jobdirs:\n            if self.du.isdir(hostname=host, path=jdir, sudo=True):\n                return jdir\n        return None\n\n    def find_main_cpath(self, cdir, host=None):\n        if host is None:\n            host = self.hosts_list[0]\n        rc = self.du.isdir(host, path=cdir)\n        if rc:\n            paths = ['pbs_jobs.service/jobid',\n                     'pbs.service/jobid',\n                     'pbs.slice',\n                     'pbs']\n        
    for p in paths:\n                cpath = os.path.join(cdir, p)\n                rc = self.du.isdir(host, path=cpath)\n                if rc:\n                    return cpath\n        return None\n\n    def load_hook(self, filename, mom_checks=True):\n        \"\"\"\n        Import and enable a hook pointed to by the URL specified.\n        \"\"\"\n        try:\n            with open(filename, 'r') as fd:\n                script = fd.read()\n        except IOError:\n            self.assertTrue(False, 'Failed to open hook file %s' % filename)\n        events = ['execjob_begin', 'execjob_launch', 'execjob_attach',\n                  'execjob_epilogue', 'execjob_end', 'exechost_startup',\n                  'exechost_periodic', 'execjob_resize', 'execjob_abort']\n        # Alarm timeout should be set really large because some tests will\n        # create a lot of simultaneous jobs on a single (slow) MoM\n        # Shipped default is 90 seconds, which is reasonable for real hosts,\n        # but not for containers or VMs sharing a host\n        a = {'enabled': 'True',\n             'freq': '10',\n             'alarm': 120,\n             'event': events}\n        # Sometimes the deletion of the old hook is still pending\n        failed = True\n        for _ in range(5):\n            try:\n                self.server.create_import_hook(self.hook_name, a, script,\n                                               overwrite=True,\n                                               level=logging.DEBUG)\n            except Exception:\n                time.sleep(2)\n            else:\n                failed = False\n                break\n        if failed:\n            self.skipTest('pbs_cgroups_hook: failed to load hook')\n        # Add the configuration\n        self.load_default_config(mom_checks=mom_checks)\n\n    def load_config(self, cfg, mom_checks=True):\n        \"\"\"\n        Create a hook configuration file with the provided contents.\n        \"\"\"\n        fn = 
self.du.create_temp_file(hostname=self.serverA, body=cfg)\n        self.tempfile.append(fn)\n        self.logger.info('Current config: %s' % cfg)\n        a = {'content-type': 'application/x-config',\n             'content-encoding': 'default',\n             'input-file': fn}\n        # In tests that use this, make sure that other hook CF\n        # copies from setup, node creations, MoM restarts etc.\n        # are all finished, so that we don't match a CF copy\n        # message in the logs from someone else!\n        time.sleep(5)\n        just_before_import = int(time.time())\n        time.sleep(2)\n        self.server.manager(MGR_CMD_IMPORT, HOOK, a, self.hook_name)\n        if mom_checks:\n            self.moms_list[0].log_match('pbs_cgroups.CF;'\n                                        'copy hook-related '\n                                        'file request received',\n                                        starttime=just_before_import,\n                                        n='ALL')\n        pbs_home = self.server.pbs_conf['PBS_HOME']\n        svr_conf = os.path.join(\n            os.sep, pbs_home, 'server_priv', 'hooks', 'pbs_cgroups.CF')\n        pbs_home = self.mom.pbs_conf['PBS_HOME']\n        mom_conf = os.path.join(\n            os.sep, pbs_home, 'mom_priv', 'hooks', 'pbs_cgroups.CF')\n        if mom_checks:\n            # reload config if server and mom cfg differ up to count times\n            count = 5\n            while (count > 0):\n                r1 = self.du.run_cmd(cmd=['cat', svr_conf], sudo=True,\n                                     hosts=self.serverA)\n                r2 = self.du.run_cmd(cmd=['cat', mom_conf], sudo=True,\n                                     hosts=self.mom.shortname)\n                if r1['out'] != r2['out']:\n                    self.logger.info('server & mom pbs_cgroups.CF differ')\n                    time.sleep(2)\n                    just_before_import = int(time.time())\n                    time.sleep(2)\n   
                 self.server.manager(MGR_CMD_IMPORT, HOOK, a,\n                                        self.hook_name)\n                    self.moms_list[0].log_match('pbs_cgroups.CF;'\n                                                'copy hook-related '\n                                                'file request received',\n                                                starttime=just_before_import,\n                                                n='ALL')\n                else:\n                    self.logger.info('server & mom pbs_cgroups.CF match')\n                    break\n                time.sleep(1)\n                count -= 1\n            self.assertGreater(count, 0, \"pbs_cgroups.CF failed to load\")\n            # A HUP of each mom ensures update to hook config file is\n            # seen by the exechost_startup hook.\n\n            time.sleep(2)\n            stime = int(time.time())\n            time.sleep(2)\n            for mom in self.moms_list:\n                mom.signal('-HUP')\n                mom.log_match('hook_perf_stat;label=hook_exechost_startup_'\n                              'pbs_cgroups_.* profile_stop',\n                              regexp=True,\n                              starttime=stime, existence=True,\n                              interval=1, n='ALL')\n\n    def load_default_config(self, mom_checks=True):\n        \"\"\"\n        Load the default pbs_cgroups hook config file\n        \"\"\"\n        self.config_file = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                        'lib',\n                                        'python',\n                                        'altair',\n                                        'pbs_hooks',\n                                        'pbs_cgroups.CF')\n        time.sleep(2)\n        now = int(time.time())\n        time.sleep(2)\n        a = {'content-type': 'application/x-config',\n             'content-encoding': 'default',\n             
'input-file': self.config_file}\n        self.server.manager(MGR_CMD_IMPORT, HOOK, a, self.hook_name)\n        if not mom_checks:\n            return\n        self.moms_list[0].log_match('pbs_cgroups.CF;copy hook-related '\n                                    'file request received',\n                                    starttime=now, n='ALL')\n\n    def set_vntype(self, host, typestring='myvntype'):\n        \"\"\"\n        Set the vnode type for the local mom.\n        \"\"\"\n        pbs_home = self.server.pbs_conf['PBS_HOME']\n        vntype_file = os.path.join(pbs_home, 'mom_priv', 'vntype')\n        self.logger.info('Setting vntype to %s in %s on mom %s' %\n                         (typestring, vntype_file, host))\n        localhost = socket.gethostname()\n        fn = self.du.create_temp_file(hostname=localhost, body=typestring)\n        self.tempfile.append(fn)\n        ret = self.du.run_copy(hosts=host, src=fn,\n                               dest=vntype_file, sudo=True, uid='root',\n                               gid='root', mode=0o644)\n        if ret['rc'] != 0:\n            self.skipTest('pbs_cgroups_hook: failed to set vntype')\n\n    def remove_vntype(self):\n        \"\"\"\n        Unset the vnode type on the moms.\n        \"\"\"\n        for mom in self.moms_list:\n            pbs_home = mom.pbs_conf['PBS_HOME']\n            vn_file = os.path.join(pbs_home, 'mom_priv', 'vntype')\n            host = mom.shortname\n            self.logger.info('Deleting vntype files %s from mom %s'\n                             % (vn_file, host))\n            ret = self.du.rm(hostname=host, path=vn_file,\n                             force=True, sudo=True, logerr=False)\n            if not ret:\n                self.skipTest('pbs_cgroups_hook: failed to remove vntype')\n\n    def get_vntype(self, host):\n        \"\"\"\n        Get the vntype if it exists for example on cray\n        \"\"\"\n        vntype = 'no_cgroups'\n        pbs_home = 
self.server.pbs_conf['PBS_HOME']\n        vntype_f = os.path.join(pbs_home, 'mom_priv', 'vntype')\n        self.logger.info('Reading the vntype value for mom %s' % host)\n        if self.du.isfile(hostname=host, path=vntype_f):\n            output = self.du.cat(hostname=host, filename=vntype_f, sudo=True)\n            vntype = output['out'][0]\n        return vntype\n\n    def wait_and_read_file(self, host, filename=''):\n        \"\"\"\n        Make several attempts to read a file and return its contents\n        \"\"\"\n        self.logger.info('Reading file: %s on host: %s' % (filename, host))\n        if not filename:\n            raise ValueError('Invalid filename')\n        for _ in range(30):\n            if self.du.isfile(hostname=host, path=filename):\n                break\n            time.sleep(0.5)\n        self.assertTrue(self.du.isfile(hostname=host, path=filename),\n                        'File %s not found on host %s' % (filename, host))\n        # Wait for output to flush\n        time.sleep(2)\n        output = self.du.cat(hostname=host, filename=filename, sudo=True)\n        if output['rc'] == 0:\n            return output['out']\n        else:\n            return []\n\n    def get_hostname(self, host):\n        \"\"\"\n        get hostname of the mom.\n        This is needed since cgroups logs hostname not mom name\n        \"\"\"\n        cmd = 'hostname'\n        rv = self.du.run_cmd(hosts=host, cmd=cmd)\n        ret = rv['out'][0].split('.')[0]\n        return ret\n\n    def get_host_names(self, host):\n        \"\"\"\n        get shortname and hostname of the mom. 
This is needed\n        for some systems where hostname and shortname are different.\n        \"\"\"\n        cmd1 = 'hostname -s'\n        rv1 = self.du.run_cmd(hosts=host, cmd=cmd1)\n        host2 = self.get_hostname(host)\n        hostlist = '\"' + host2 + '\"'\n        moms = [hostlist]\n        mlog = [\"'\" + host2 + \"'\"]\n        # if shortname and hostname are not the same then construct a\n        # list including both to be passed to the cgroups hook\n        if (str(rv1['out'][0]) != host2):\n            moms.append('\"' + str(rv1['out'][0]) + '\"')\n            mlog.append(\"'\" + str(rv1['out'][0]) + \"'\")\n        if len(moms) > 1:\n            mom1 = ','.join(moms)\n            log1 = ', '.join(mlog)\n        else:\n            mom1 = '\"' + host2 + '\"'\n            log1 = \"'\" + host2 + \"'\"\n        return mom1, log1\n\n    @requirements(num_moms=2)\n    def test_cgroup_vntype_excluded(self):\n        \"\"\"\n        Test to verify that cgroups are not enforced on nodes\n        that have an exclude vntype file set\n        \"\"\"\n        name = 'CGROUP8'\n        if self.vntypename[0] == 'no_cgroups':\n            self.logger.info('Adding vntype %s to mom %s ' %\n                             (self.vntypename[0], self.moms_list[0]))\n            self.set_vntype(typestring=self.vntypename[0],\n                            host=self.hosts_list[0])\n        a = self.cfg1 % ('', '\"' + self.vntypename[0] + '\"',\n                         '', '', self.mem, self.swapctl)\n        self.load_config(a)\n        for m in self.moms.values():\n            m.restart()\n\n        a = {'Resource_List.select': '1:ncpus=1:mem=300mb:host=%s' %\n             self.hosts_list[0], ATTR_N: name}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.sleep600_job)\n        time.sleep(2)\n        stime = int(time.time())\n        time.sleep(2)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid)\n        
self.server.status(JOB, ATTR_o, jid)\n        o = j.attributes[ATTR_o]\n        self.tempfile.append(o)\n        self.logger.info('memory subsystem is at location %s' %\n                         self.paths[self.hosts_list[0]]['memory'])\n        cpath = self.get_cgroup_job_dir('memory', jid, self.hosts_list[0])\n        self.assertFalse(self.is_dir(cpath, self.hosts_list[0]))\n        self.moms_list[0].log_match(\n            \"%s is in the excluded vnode type list: ['%s']\"\n            % (self.vntypename[0],\n               self.vntypename[0]),\n            starttime=stime, n='ALL')\n        self.logger.info('vntypes on both hosts are: %s and %s'\n                         % (self.vntypename[0], self.vntypename[1]))\n        if self.vntypename[1] == self.vntypename[0]:\n            self.logger.info('Skipping the second part of this test '\n                             'since hostB also has same vntype value')\n            return\n\n        a = {'Resource_List.select': '1:ncpus=1:mem=300mb:host=%s' %\n             self.hosts_list[1], ATTR_N: name}\n        j1 = Job(TEST_USER, attrs=a)\n        j1.create_script(self.sleep600_job)\n        jid2 = self.server.submit(j1)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid2)\n        self.server.status(JOB, ATTR_o, jid2)\n        o = j1.attributes[ATTR_o]\n        self.tempfile.append(o)\n        cpath = self.get_cgroup_job_dir('memory', jid2, self.hosts_list[1])\n        self.assertTrue(self.is_dir(cpath, self.hosts_list[1]))\n\n    @requirements(num_moms=2)\n    def test_cgroup_host_excluded(self):\n        \"\"\"\n        Test to verify that cgroups are not enforced on nodes\n        that have the exclude_hosts set\n        \"\"\"\n        name = 'CGROUP9'\n        mom, log = self.get_host_names(self.hosts_list[0])\n        self.load_config(self.cfg1 % ('%s' % mom, '', '', '',\n                                      self.mem, self.swapctl))\n        for m in self.moms.values():\n            
m.restart()\n\n        a = {'Resource_List.select': '1:ncpus=1:mem=300mb:host=%s' %\n             self.hosts_list[0], ATTR_N: name}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.sleep600_job)\n        time.sleep(2)\n        stime = int(time.time())\n        time.sleep(2)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid)\n        self.server.status(JOB, ATTR_o, jid)\n        o = j.attributes[ATTR_o]\n        self.tempfile.append(o)\n        cpath = self.get_cgroup_job_dir('memory', jid, self.hosts_list[0])\n        self.assertFalse(self.is_dir(cpath, self.hosts_list[0]))\n        host = self.get_hostname(self.hosts_list[0])\n        self.moms_list[0].log_match('%s is in the excluded host list: [%s]' %\n                                    (host, log), starttime=stime,\n                                    n='ALL')\n        self.server.delete(jid, wait=True)\n\n        a = {'Resource_List.select': '1:ncpus=1:mem=300mb:host=%s' %\n             self.hosts_list[1], ATTR_N: name}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.sleep600_job)\n        jid2 = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid2)\n        self.server.status(JOB, ATTR_o, jid2)\n        o = j.attributes[ATTR_o]\n        self.tempfile.append(o)\n        cpath = self.get_cgroup_job_dir('memory', jid2, self.hosts_list[1])\n        self.assertTrue(self.is_dir(cpath, self.hosts_list[1]))\n\n    @requirements(num_moms=2)\n    def test_cgroup_exclude_vntype_mem(self):\n        \"\"\"\n        Test to verify that cgroups are not enforced on nodes\n        that have an exclude vntype file set\n        \"\"\"\n        name = 'CGROUP12'\n        if self.vntypename[0] == 'no_cgroups':\n            self.logger.info('Adding vntype %s to mom %s' %\n                             (self.vntypename[0], self.moms_list[0]))\n            self.set_vntype(typestring='no_cgroups', 
host=self.hosts_list[0])\n        self.load_config(self.cfg3 % ('', 'false', '', self.mem,\n                                      '\"' + self.vntypename[0] + '\"',\n                                      self.swapctl,\n                                      '\"' + self.vntypename[0] + '\"'))\n        for m in self.moms.values():\n            m.restart()\n\n        a = {'Resource_List.select': '1:ncpus=1:mem=100mb:host=%s'\n             % self.hosts_list[0], ATTR_N: name}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.sleep600_job)\n        time.sleep(2)\n        stime = int(time.time())\n        time.sleep(2)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid)\n        self.server.status(JOB, ATTR_o, jid)\n        o = j.attributes[ATTR_o]\n        self.tempfile.append(o)\n        self.moms_list[0].log_match('cgroup excluded for subsystem memory '\n                                    'on vnode type %s' % self.vntypename[0],\n                                    starttime=stime, n='ALL')\n        self.logger.info('vntype values for each hosts are: %s and %s'\n                         % (self.vntypename[0], self.vntypename[1]))\n        if self.vntypename[0] == self.vntypename[1]:\n            self.logger.info('Skipping the second part of this test '\n                             'since hostB also has same vntype value')\n            return\n\n        a = {'Resource_List.select': '1:ncpus=1:mem=100mb:host=%s' %\n             self.hosts_list[1], ATTR_N: name}\n        j1 = Job(TEST_USER, attrs=a)\n        j1.create_script(self.sleep600_job)\n        jid2 = self.server.submit(j1)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid2)\n        self.server.status(JOB, ATTR_o, jid2)\n        o = j1.attributes[ATTR_o]\n        self.tempfile.append(o)\n        cpath = self.get_cgroup_job_dir('memory', jid2, self.hosts_list[1])\n        self.assertTrue(self.is_dir(cpath, 
self.hosts_list[1]))\n\n    def test_cgroup_periodic_update_check_values(self):\n        \"\"\"\n        Test to verify that cgroups are reporting usage for cput and mem\n        \"\"\"\n        if not self.paths[self.hosts_list[0]]['memory']:\n            self.skipTest('Test requires memory subsystem mounted')\n        name = 'CGROUP13'\n        conf = {'freq': 2}\n        self.server.manager(MGR_CMD_SET, HOOK, conf, self.hook_name)\n        self.load_config(self.cfg3 % ('', 'false', '', self.mem, '',\n                                      self.swapctl, ''))\n\n        a = {'Resource_List.select': '1:ncpus=1:mem=500mb:host=%s' %\n             self.hosts_list[0], ATTR_N: name}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.eatmem_job3)\n        time.sleep(2)\n        stime = int(time.time())\n        time.sleep(2)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid)\n        self.server.status(JOB, ATTR_o, jid)\n        o = j.attributes[ATTR_o]\n        self.tempfile.append(o)\n        # Scouring the logs for initial values takes too long\n        resc_list = ['resources_used.mem']\n        if self.swapctl == 'true':\n            resc_list.append('resources_used.vmem')\n        qstat = self.server.status(JOB, resc_list, id=jid)\n        mem = convert_size(qstat[0]['resources_used.mem'], 'kb')\n        match = re.match(r'(\\d+)kb', mem)\n        self.assertFalse(match is None)\n        usage = int(match.groups()[0])\n        self.assertGreater(300000, usage)\n        if self.swapctl == 'true':\n            vmem = convert_size(qstat[0]['resources_used.vmem'], 'kb')\n            match = re.match(r'(\\d+)kb', vmem)\n            self.assertFalse(match is None)\n            usage = int(match.groups()[0])\n            self.assertGreater(300000, usage)\n        err_msg = \"Unexpected error in pbs_cgroups \" + \\\n            \"handling exechost_periodic event: TypeError\"\n        
self.moms_list[0].log_match(err_msg, max_attempts=3,\n                                    interval=1, n='ALL',\n                                    starttime=stime, existence=False)\n\n        # Allow some time to pass for values to be updated\n        # sleep 2s: make sure no old log lines will match 'begin' time\n        time.sleep(2)\n        begin = int(time.time())\n        # sleep 2s to allow for small time differences and rounding errors\n        time.sleep(2)\n\n        self.logger.info('Waiting for periodic hook to update usage data.')\n        # loop to check if cput, mem, vmem are expected values\n        cput_usage = 0.0\n        mem_usage = 0\n        vmem_usage = 0\n        # On faster systems the usage you finally expect may be\n        # recorded after 8-10 seconds; on TH it can take up to a minute\n        time.sleep(8)\n        for count in range(30):\n            time.sleep(2)\n            if self.paths[self.hosts_list[0]]['cpuacct'] and cput_usage <= 1.0:\n                # Match last line from the bottom\n                line = self.moms_list[0].log_match(\n                    '%s;update_job_usage: CPU usage:' % jid,\n                    starttime=begin, n='ALL')\n                match = re.search(r'CPU usage: ([0-9.]+) secs', line[1])\n                cput_usage = float(match.groups()[0])\n                self.logger.info(\"Found cput_usage: %ss\" % str(cput_usage))\n            if (self.paths[self.hosts_list[0]]['memory'] and\n                    mem_usage <= 400000):\n                # Match last line from the bottom\n                line = self.moms_list[0].log_match(\n                    '%s;update_job_usage: Memory usage: mem=' % jid,\n                    starttime=begin, n='ALL')\n                match = re.search(r'mem=(\\d+)kb', line[1])\n                mem_usage = int(match.groups()[0])\n                self.logger.info(\"Found mem_usage: %skb\" % str(mem_usage))\n                if self.swapctl == 'true' and vmem_usage <= 
400000:\n                    # Match last line from the bottom\n                    line = self.moms_list[0].log_match(\n                        '%s;update_job_usage: Memory usage: vmem=' % jid,\n                        starttime=begin, n='ALL')\n                    match = re.search(r'vmem=(\\d+)kb', line[1])\n                    vmem_usage = int(match.groups()[0])\n                    self.logger.info(\"Found vmem_usage: %skb\"\n                                     % str(vmem_usage))\n            if cput_usage > 1.0 and mem_usage > 400000:\n                if self.swapctl == 'true':\n                    if vmem_usage > 400000:\n                        break\n                else:\n                    break\n            # try to make next loop match the _next_ updates\n            # note: we might still be unlucky and just match an old update,\n            # but not next time: the loop's sleep will make 'begin' advance\n            begin = int(time.time())\n\n        self.assertGreater(cput_usage, 1.0)\n        self.assertGreater(mem_usage, 400000)\n        if self.swapctl == 'true':\n            self.assertGreater(vmem_usage, 400000)\n\n    def test_cgroup_cpuset_and_memory(self):\n        \"\"\"\n        Test to verify that the job cgroup is created correctly\n        Check to see that cpuset.cpus=0, cpuset.mems=0 and that\n        memory.limit_in_bytes = 314572800\n        \"\"\"\n        if not self.paths[self.hosts_list[0]]['memory']:\n            self.skipTest('Test requires memory subsystem mounted')\n        name = 'CGROUP1'\n        self.load_config(self.cfg3 % ('', 'false', '', self.mem, '',\n                                      self.swapctl, ''))\n        # This test expects the job to land on CPU 0.\n        # The previous test may have qdel -Wforce its jobs, and then it takes\n        # some time for MoM to run the execjob_epilogue and execjob_end\n        # *after* the job has disappeared on the server.\n        # So wait a while before restarting 
MoM\n        time.sleep(10)\n        # Restart mom for changes made by cgroups hook to take effect\n        self.mom.restart()\n        a = {'Resource_List.select':\n             '1:ncpus=1:mem=300mb:host=%s' % self.hosts_list[0],\n             ATTR_N: name, ATTR_k: 'oe'}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.sleep600_job)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid)\n        self.server.status(JOB, [ATTR_o, 'exec_host'], jid)\n        fna = self.get_cgroup_job_dir('cpuset', jid, self.hosts_list[0])\n        self.assertFalse(fna is None, 'No job directory for cpuset subsystem')\n        fnma = self.get_cgroup_job_dir('memory', jid, self.hosts_list[0])\n        self.assertFalse(fnma is None, 'No job directory for memory subsystem')\n        memscr = self.du.run_cmd(cmd=[self.cpuset_mem_script % (fna, fnma)],\n                                 as_script=True, hosts=self.mom.shortname)\n        memscr_out = memscr['out']\n        self.logger.info('memscr_out:\\n%s' % memscr_out)\n        self.assertTrue('CpuIDs=0' in memscr_out)\n        self.logger.info('CpuIDs check passed')\n        self.assertTrue('MemorySocket=0' in memscr_out)\n        self.logger.info('MemorySocket check passed')\n        if self.mem == 'true':\n            self.assertTrue('MemoryLimit=314572800' in memscr_out)\n            self.logger.info('MemoryLimit check passed')\n\n    def test_cgroup_cpuset_and_memsw(self):\n        \"\"\"\n        Test to verify that the job cgroup is created correctly\n        using the default memory and vmem\n        Check to see that cpuset.cpus=0, cpuset.mems=0 and that\n        memory.limit_in_bytes = 100663296\n        memory.memsw.limit_in_bytes = 201326592\n        If there is too little swap, the latter could be smaller\n        \"\"\"\n        if not self.paths[self.hosts_list[0]]['memory']:\n            self.skipTest('Test requires memory subsystem mounted')\n        
name = 'CGROUP2'\n        self.load_config(self.cfg3 % ('', 'false', '', self.mem, '',\n                                      self.swapctl, ''))\n        a = {'Resource_List.select': '1:ncpus=1:host=%s' %\n             self.hosts_list[0], ATTR_N: name}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.sleep600_job)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid)\n        self.server.status(JOB, [ATTR_o, 'exec_host'], jid)\n        fn = self.get_cgroup_job_dir('cpuset', jid, self.hosts_list[0])\n        fnm = self.get_cgroup_job_dir('memory', jid, self.hosts_list[0])\n        scr = self.du.run_cmd(cmd=[self.cpuset_mem_script % (fn, fnm)],\n                              as_script=True, hosts=self.mom.shortname)\n        scr_out = scr['out']\n        self.logger.info('scr_out:\\n%s' % scr_out)\n        self.assertTrue('CpuIDs=0' in scr_out)\n        self.logger.info('CpuIDs check passed')\n        self.assertTrue('MemorySocket=0' in scr_out)\n        self.logger.info('MemorySocket check passed')\n        if self.mem == 'true':\n            self.assertTrue('MemoryLimit=100663296' in scr_out)\n            self.logger.info('MemoryLimit check passed')\n        if self.swapctl == 'true':\n            # Get total phys+swap memory available\n            mem_base = os.path.join(self.paths[self.hosts_list[0]]\n                                    ['memory'], 'pbs_jobs.service',\n                                    'jobid')\n            vmem_avail = os.path.join(mem_base,\n                                      'memory.memsw.limit_in_bytes')\n            result = self.du.cat(hostname=self.mom.hostname,\n                                 filename=vmem_avail, sudo=True)\n            vmem_avail_in_bytes = None\n            try:\n                vmem_avail_in_bytes = int(result['out'][0])\n            except Exception:\n                # None will be seen as a failure, nothing to do\n                pass\n  
          self.logger.info(\"total available memsw: %d\"\n                             % vmem_avail_in_bytes)\n            self.assertTrue(vmem_avail_in_bytes is not None,\n                            \"Unable to read total memsw available\")\n\n            mem_avail = os.path.join(mem_base,\n                                     'memory.limit_in_bytes')\n            result = self.du.cat(hostname=self.mom.hostname,\n                                 filename=mem_avail, sudo=True)\n            mem_avail_in_bytes = None\n            try:\n                mem_avail_in_bytes = int(result['out'][0])\n            except Exception:\n                # None will be seen as a failure, nothing to do\n                pass\n            self.logger.info(\"total available mem: %d\"\n                             % mem_avail_in_bytes)\n            self.assertTrue(mem_avail_in_bytes is not None,\n                            \"Unable to read total mem available\")\n\n            swap_avail_in_bytes = vmem_avail_in_bytes - mem_avail_in_bytes\n            MemswLimitExpected = (100663296\n                                  + min(100663296, swap_avail_in_bytes))\n            self.assertTrue(('MemswLimit=%d' % MemswLimitExpected)\n                            in scr_out)\n            self.logger.info('MemswLimit check passed')\n\n    def test_cgroup_prefix_and_devices(self):\n        \"\"\"\n        Test to verify that the cgroup prefix is set to \"sbp\" and that\n        the devices subsystem exists with the correct devices allowed\n        \"\"\"\n        if not self.paths[self.hosts_list[0]]['devices']:\n            self.skipTest('Skipping test since no devices subsystem defined')\n        name = 'CGROUP3'\n        self.load_config(self.cfg2)\n        # Restart mom for changes made by cgroups hook to take effect\n        self.mom.restart()\n        # Make sure to run on the MoM just restarted\n        a = {ATTR_N: name}\n        a['Resource_List.select'] = \\\n            
'1:ncpus=1:mem=300mb:host=%s' % self.hosts_list[0]\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(600)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid)\n        self.server.status(JOB, [ATTR_o, 'exec_host'], jid)\n        devd = self.paths[self.hosts_list[0]]['devices']\n        scr = self.du.run_cmd(\n            cmd=[self.check_dirs_script % (jid, devd)],\n            as_script=True, hosts=self.mom.shortname)\n        scr_out = scr['out']\n        self.logger.info('scr_out:\\n%s' % scr_out)\n        # the config file named entries must be translated to major/minor\n        # containers will make them different!!\n        # self.du.run_cmd returns a list of one-line strings\n        # the console awk command produces major and minor on separate lines\n        console_results = \\\n            self.du.run_cmd(cmd=['ls -al /dev/console'\n                                 '| awk \\'BEGIN {FS=\" |,\"} '\n                                 '{print $5} {print $7}\\''],\n                            as_script=True, hosts=self.hosts_list[0])\n        (console_major, console_minor) = console_results['out']\n        # only one line here\n        tty0_major_results = \\\n            self.du.run_cmd(cmd=['ls -al /dev/tty0'\n                                 '| awk \\'BEGIN {FS=\" |,\"} '\n                                 '{print $5}\\''],\n                            as_script=True, hosts=self.hosts_list[0])\n        tty0_major = tty0_major_results['out'][0]\n        check_devices = ['b *:* rwm',\n                         'c %s:%s rwm' % (console_major, console_minor),\n                         'c %s:* rwm' % (tty0_major),\n                         'c 1:* rwm',\n                         'c 10:* rwm']\n\n        for device in check_devices:\n            self.assertTrue(device in scr_out,\n                            '\"%s\" not found in: %s' % (device, scr_out))\n        self.logger.info('device_list check 
passed')\n\n    def test_devices_and_gpu_discovery(self):\n        \"\"\"\n        Test to verify that if the device subsystem is enabled\n        and discover_gpus is true, _discover_gpus is called\n\n        The GPU tests should in theory make this redundant,\n        but they require a test harness that has GPUs. This test will\n        allow us to see if the GPU discovery is at least called even when\n        the test harness has no GPUs.\n        \"\"\"\n        if not self.paths[self.hosts_list[0]]['devices']:\n            self.skipTest('Skipping test since no devices subsystem defined')\n        name = 'CGROUP3'\n        time.sleep(2)\n        begin = int(time.time())\n        time.sleep(2)\n        self.load_config(self.cfg14 % ('true', 'true'))\n\n        # These will throw an exception if the expected routines\n        # were not called.\n        # n='ALL' is needed because the cgroup hook is so verbose\n        # that 50 lines will not suffice\n        self.moms_list[0].log_match('_discover_devices', starttime=begin,\n                                    existence=True, max_attempts=2,\n                                    interval=1, n='ALL')\n        self.moms_list[0].log_match('NVIDIA SMI', starttime=begin,\n                                    existence=True, max_attempts=2,\n                                    interval=1, n='ALL')\n        self.logger.info('devices_and_gpu_discovery check passed')\n\n    def test_suppress_devices_discovery(self):\n        \"\"\"\n        Test to verify that if the device subsystem is turned off,\n        neither _discover_devices nor _discover_gpus is called\n        \"\"\"\n        if not self.paths[self.hosts_list[0]]['devices']:\n            self.skipTest('Skipping test since no devices subsystem defined')\n        name = 'CGROUP3'\n        time.sleep(2)\n        begin = int(time.time())\n        time.sleep(2)\n        self.load_config(self.cfg14 % ('true', 'false'))\n\n        # These will throw an 
exception if the routines that should not\n        # have been called were called.\n        # n='ALL' is needed because the cgroup hook is so verbose\n        # that 50 lines will not suffice\n        self.moms_list[0].log_match('_discover_devices', starttime=begin,\n                                    existence=False, max_attempts=2,\n                                    interval=1, n='ALL')\n        self.moms_list[0].log_match('_discover_gpus', starttime=begin,\n                                    existence=False, max_attempts=2,\n                                    interval=1, n='ALL')\n        self.logger.info('suppress_devices_discovery check passed')\n\n    def test_suppress_gpu_discovery(self):\n        \"\"\"\n        Test to verify that if the device subsystem is enabled\n        and discover_gpus is false, nvidia-smi is not called\n        discover_gpus is called but just returns {}\n        \"\"\"\n        if not self.paths[self.hosts_list[0]]['devices']:\n            self.skipTest('Skipping test since no devices subsystem defined')\n        name = 'CGROUP3'\n        time.sleep(2)\n        begin = int(time.time())\n        time.sleep(2)\n        self.load_config(self.cfg14 % ('false', 'true'))\n\n        # These will throw an exception if the routines that should not\n        # have been called were called.\n        # n='ALL' is needed because the cgroup hook is so verbose\n        # that 50 lines will not suffice\n        self.moms_list[0].log_match('_discover_devices', starttime=begin,\n                                    existence=True, max_attempts=2,\n                                    interval=1, n='ALL')\n        self.moms_list[0].log_match('NVIDIA SMI', starttime=begin,\n                                    existence=False, max_attempts=2,\n                                    interval=1, n='ALL')\n        self.logger.info('suppress_gpu_discovery check passed')\n\n    def test_cgroup_cpuset(self):\n        \"\"\"\n        Test to verify that 2 jobs 
are not assigned the same cpus\n        \"\"\"\n        pcpus = 0\n        with open('/proc/cpuinfo', 'r') as desc:\n            for line in desc:\n                if re.match('^processor', line):\n                    pcpus += 1\n        if pcpus < 2:\n            self.skipTest('Test requires at least two physical CPUs')\n        name = 'CGROUP4'\n        # since we do not configure vnodes ourselves wait for the setup\n        # of this test to propagate all hooks etc.\n        # otherwise the load_config tests to see if it's all done\n        # might get confused\n        # occasional trouble seen on TH2\n        self.load_config(self.cfg3 % ('', 'false', '', self.mem, '',\n                                      self.swapctl, ''))\n        # Submit two jobs\n        a = {'Resource_List.select': '1:ncpus=1:mem=300mb:host=%s' %\n             self.hosts_list[0], ATTR_N: name + 'a'}\n        j1 = Job(TEST_USER, attrs=a)\n        j1.create_script(self.sleep600_job)\n        jid1 = self.server.submit(j1)\n        b = {'Resource_List.select': '1:ncpus=1:mem=300mb:host=%s' %\n             self.hosts_list[0], ATTR_N: name + 'b'}\n        j2 = Job(TEST_USER, attrs=b)\n        j2.create_script(self.sleep600_job)\n        jid2 = self.server.submit(j2)\n        a = {'job_state': 'R'}\n        # Make sure they are both running\n        self.server.expect(JOB, a, jid1)\n        self.server.expect(JOB, a, jid2)\n        # cpuset paths for both jobs\n        fn1 = self.get_cgroup_job_dir('cpuset', jid1, self.hosts_list[0])\n        fn2 = self.get_cgroup_job_dir('cpuset', jid2, self.hosts_list[0])\n        # Capture the output of cpuset_mem_script for both jobs\n        scr1 = self.du.run_cmd(cmd=[self.cpuset_mem_script % (fn1, None)],\n                               as_script=True, hosts=self.hosts_list[0])\n        scr1_out = scr1['out']\n        self.logger.info('scr1_out:\\n%s' % scr1_out)\n        scr2 = self.du.run_cmd(cmd=[self.cpuset_mem_script % (fn2, None)],\n              
                 as_script=True, hosts=self.hosts_list[0])\n        scr2_out = scr2['out']\n        self.logger.info('scr2_out:\\n%s' % scr2_out)\n        # Ensure the CPU ID for each job differs\n        cpuid1 = None\n        for kv in scr1_out:\n            if 'CpuIDs=' in kv:\n                cpuid1 = kv\n                break\n        self.assertNotEqual(cpuid1, None, 'Could not read first CPU ID.')\n        cpuid2 = None\n        for kv in scr2_out:\n            if 'CpuIDs=' in kv:\n                cpuid2 = kv\n                break\n        self.assertNotEqual(cpuid2, None, 'Could not read second CPU ID.')\n        self.logger.info(\"cpuid1 = %s and cpuid2 = %s\" % (cpuid1, cpuid2))\n        self.assertNotEqual(cpuid1, cpuid2,\n                            'Processes should be assigned to different CPUs')\n        self.logger.info('CpuIDs check passed')\n\n    @timeout(1800)\n    def test_cgroup_cpuset_ncpus_are_cores(self):\n        \"\"\"\n        Test to verify that correct number of jobs run on a hyperthread\n        enabled system when ncpus_are_cores is set to true.\n        \"\"\"\n        # Check that system has hyperthreading enabled and has\n        # at least two threads (\"pcpus\")\n        # WARNING: do not assume that physical CPUs are numbered from 0\n        # and that all processors from a physical ID are contiguous\n        # count the number of different physical IDs with a set!\n        pcpus = 0\n        sibs = 0\n        cores = 0\n        pval = 0\n        phys_set = set()\n        with open('/proc/cpuinfo', 'r') as desc:\n            for line in desc:\n                if re.match('^processor', line):\n                    pcpus += 1\n                sibs_match = re.search(r'siblings\t: ([0-9]+)', line)\n                cores_match = re.search(r'cpu cores\t: ([0-9]+)', line)\n                phys_match = re.search(r'physical id\t: ([0-9]+)', line)\n                if sibs_match:\n                    sibs = int(sibs_match.groups()[0])\n   
             if cores_match:\n                    cores = int(cores_match.groups()[0])\n                if phys_match:\n                    pval = int(phys_match.groups()[0])\n                    phys_set.add(pval)\n        phys = len(phys_set)\n        if (sibs == 0 or cores == 0):\n            self.skipTest('Insufficient information about the processors.')\n        if pcpus < 2:\n            self.skipTest('This test requires at least two processors.')\n        if sibs / cores == 1:\n            self.skipTest('This test requires hyperthreading to be enabled.')\n\n        name = 'CGROUP18'\n        self.load_config(self.cfg8 % ('', '', self.mem, '', self.swapctl,\n                                      ''))\n        # Make sure to restart MOM\n        # HUP is not enough to get rid of earlier\n        # per socket vnodes created when vnode_per_numa_node=True\n        self.mom.restart()\n\n        # Submit M jobs N cpus wide, where M is the number of physical\n        # processors and N is the number of 'cpu cores' per M. 
Expect them to run.\n        njobs = phys\n        if njobs > 100:\n            self.skipTest(\"too many jobs (%d) to submit\" % njobs)\n        a = {'Resource_List.select': '1:ncpus=%s:mem=300mb:host=%s' %\n             (cores, self.hosts_list[0]), ATTR_N: name + 'a'}\n        for _ in range(njobs):\n            j = Job(TEST_USER, attrs=a)\n            # make sure this stays around for an hour\n            # (or until deleted in teardown)\n            j.set_sleep_time(3600)\n            jid = self.server.submit(j)\n            a1 = {'job_state': 'R'}\n            # give the scheduler, server and MoM some time\n            # it's not a luxury on containers with few CPU resources\n            time.sleep(2)\n            self.server.expect(JOB, a1, jid)\n        # Submit another job, expect in Q state -- this one with only 1 CPU\n        b = {'Resource_List.select': '1:ncpus=1:mem=300mb:host=%s' %\n             self.hosts_list[0], ATTR_N: name + 'b'}\n        j2 = Job(TEST_USER, attrs=b)\n        jid2 = self.server.submit(j2)\n        b1 = {'job_state': 'Q'}\n        # Make sure to give the scheduler ample time here:\n        # we want to make sure jid2 doesn't run because it can't,\n        # not because the scheduler has not yet gotten to it\n        time.sleep(30)\n        self.server.expect(JOB, b1, jid2)\n\n    def test_cgroup_enforce_memory(self):\n        \"\"\"\n        Test to verify that the job is killed when it tries to\n        use more memory than it requested\n        \"\"\"\n        if not self.paths[self.hosts_list[0]]['memory'] or not self.mem:\n            self.skipTest('Test requires memory subsystem mounted')\n        name = 'CGROUP5'\n\n        self.load_config(self.cfg3b % ('false'))\n\n        a = {'Resource_List.select': '1:ncpus=1:mem=300mb:host=%s' %\n             self.hosts_list[0], ATTR_N: name}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.eatmem_job1)\n        time.sleep(2)\n        stime = int(time.time())\n        
time.sleep(2)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid)\n        self.server.status(JOB, ATTR_o, jid)\n        o = j.attributes[ATTR_o]\n        self.tempfile.append(o)\n        # mem and vmem limit will both be set, and either could be detected\n        self.mom.log_match('%s;Cgroup mem(ory|sw) limit exceeded' % jid,\n                           regexp=True, n='ALL', starttime=stime)\n\n    def test_cgroup_enforce_memsw(self):\n        \"\"\"\n        Test to verify that the job is killed when it tries to\n        use more vmem than it requested\n        \"\"\"\n        if not self.paths[self.hosts_list[0]]['memory']:\n            self.skipTest('Test requires memory subsystem mounted')\n        # only run the test if swap space is available\n        if not self.mem or not self.swapctl:\n            self.skipTest('Test requires memory controller with memsw '\n                          'swap accounting enabled')\n        if have_swap() == 0:\n            self.skipTest('no swap space available on the local host')\n        # Get the grandparent directory\n        fn = self.paths[self.hosts_list[0]]['memory']\n        fn = os.path.join(fn, 'memory.memsw.limit_in_bytes')\n        if not self.is_file(fn, self.hosts_list[0]):\n            self.skipTest('vmem resource not present on node')\n\n        self.load_config(self.cfg3 % ('', 'false', '', self.mem, '',\n                                      self.swapctl, ''))\n\n        name = 'CGROUP6'\n        # Make sure output file is gone, otherwise wait and read\n        # may pick up stale copy of earlier test\n        self.du.rm(runas=TEST_USER, path='~/' + name + '.*', as_script=True)\n\n        a = {\n            'Resource_List.select':\n            '1:ncpus=1:mem=400mb:vmem=420mb:host=%s' % self.hosts_list[0],\n            ATTR_N: name}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.eatmem_job1)\n        jid = self.server.submit(j)\n        a 
= {'job_state': 'R'}\n        self.server.expect(JOB, a, jid)\n        self.server.status(JOB, [ATTR_o, 'exec_host'], jid)\n        filename = j.attributes[ATTR_o]\n        ehost = j.attributes['exec_host']\n        tmp_file = filename.split(':')[1]\n        tmp_host = ehost.split('/')[0]\n        tmp_out = self.wait_and_read_file(filename=tmp_file, host=tmp_host)\n        self.tempfile.append(tmp_file)\n        success = False\n        foundstr = ''\n        if tmp_out == []:\n            success = False\n        else:\n            joined_out = '\\n'.join(tmp_out)\n            if 'Cgroup memsw limit exceeded' in joined_out:\n                success = True\n                foundstr = 'Cgroup memsw limit exceeded'\n            elif 'Cgroup mem limit exceeded' in joined_out:\n                success = True\n                foundstr = 'Cgroup mem limit exceeded'\n            elif 'MemoryError' in joined_out:\n                success = True\n                foundstr = 'MemoryError'\n        self.assertTrue(success, 'No Cgroup memory/memsw limit exceeded '\n                        'or MemoryError found in joined stdout/stderr')\n        self.logger.info('Joined stdout/stderr contained expected string: '\n                         + foundstr)\n\n    def test_cgroup_diag_messages(self):\n        \"\"\"\n        Test to verify that a job that exceeded resources has diag_messages\n        set correctly.\n        \"\"\"\n\n        if not self.paths[self.hosts_list[0]]['memory']:\n            self.skipTest('Test requires memory subsystem mounted')\n        # only run the test if swap space is available\n        if not self.mem or not self.swapctl:\n            self.skipTest('Test requires memory controller with memsw '\n                          'swap accounting enabled')\n        if have_swap() == 0:\n            self.skipTest('no swap space available on the local host')\n        # Get the grandparent directory\n        fn = self.paths[self.hosts_list[0]]['memory']\n        fn = 
os.path.join(fn, 'memory.memsw.limit_in_bytes')\n        if not self.is_file(fn, self.hosts_list[0]):\n            self.skipTest('vmem resource not present on node')\n\n        # Make sure job history is enabled to see when job is gone\n        a = {'job_history_enable': 'True'}\n        rc = self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.assertEqual(rc, 0)\n\n        self.load_config(self.cfg3 % ('', 'false', '', self.mem, '',\n                                      self.swapctl, ''))\n\n        a = {\n            'Resource_List.select':\n            '1:ncpus=1:mem=400mb:vmem=420mb:host=%s' % self.hosts_list[0]}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.eatmem_job1)\n        jid = self.server.submit(j)\n        a = {'job_state': 'F'}\n        self.server.expect(JOB, a, jid, extend='x', offset=10)\n        resc = ['resources_used.diag_messages']\n        s = self.server.status(JOB, resc, id=jid, extend='x')\n        dmsg = s[0]['resources_used.diag_messages'].replace(\"'\", \"\")\n        json_exceeded = json.loads(dmsg)\n        msg = json_exceeded[self.mom.shortname]\n        self.assertEqual(msg, 'Cgroup mem limit exceeded, '\n                         'Cgroup memsw limit exceeded')\n\n    def cgroup_offline_node(self, name, vnpernuma=False):\n        \"\"\"\n        Per vnode_per_numa_node config setting, return True if able to\n        verify that the node is offlined when it can't clean up the cgroup\n        and brought back online once the cgroup is cleaned up.\n        \"\"\"\n\n        # Make sure job history is enabled to see when job is gone\n        a = {'job_history_enable': 'True'}\n        rc = self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.assertEqual(rc, 0)\n        self.server.expect(SERVER, {'job_history_enable': 'True'})\n\n        if 'freezer' not in self.paths[self.hosts_list[0]]:\n            self.skipTest('Freezer cgroup is not mounted')\n        # Get the grandparent directory\n        fdir = 
self.paths[self.hosts_list[0]]['freezer']\n        if not self.is_dir(fdir, self.hosts_list[0]):\n            self.skipTest('Freezer cgroup is not found')\n        # Configure the hook\n        self.load_config(self.cfg3 % ('', vnpernuma, '', self.mem, '',\n                                      self.swapctl, ''))\n        a = {'Resource_List.select': '1:ncpus=1:mem=300mb:host=%s' %\n             self.hosts_list[0], 'Resource_List.walltime': 600, ATTR_N: name}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.sleep600_job)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid)\n        job_status = self.server.status(JOB, id=jid)\n        filename = j.attributes[ATTR_o]\n        tmp_file = filename.split(':')[1]\n        self.tempfile.append(tmp_file)\n        self.logger.info(\"Added %s to temp files to clean up\"\n                         % tmp_file)\n        self.logger.info(\"Job session ID is apparently %s\"\n                         % str(j.attributes['session_id']))\n        # Query the pids in the cgroup\n        jdir = self.get_cgroup_job_dir('cpuset', jid, self.hosts_list[0])\n        tasks_file = os.path.join(jdir, 'tasks')\n        time.sleep(2)\n        ret = self.du.cat(self.hosts_list[0], tasks_file, sudo=True)\n        tasks = ret['out']\n        if len(tasks) < 2:\n            self.skipTest('pbs_cgroups_hook: only one task in cgroup')\n        self.logger.info('Tasks: %s' % tasks)\n        self.assertTrue(tasks, 'No tasks in cpuset cgroup for job')\n        # Make dir in freezer subsystem under directory where we\n        # have delegate control from systemd\n        fdir_pbs = os.path.join(fdir, 'pbs_jobs.service', 'PtlPbs')\n        if not self.du.isdir(self.hosts_list[0], fdir_pbs):\n            self.du.mkdir(hostname=self.hosts_list[0], path=fdir_pbs,\n                          mode=0o755, sudo=True)\n        # Write PIDs into the tasks file for the freezer cgroup\n        # 
All except the top job process -- it remains thawed to\n        # let the job exit\n        task_file = os.path.join(fdir_pbs, 'tasks')\n        success = True\n        body = ''\n        for pidstr in tasks:\n            if pidstr.strip() == j.attributes['session_id']:\n                self.logger.info('Skipping top job process ' + pidstr)\n            else:\n                cmd = ['echo ' + pidstr + ' >>' + task_file]\n                ret = self.du.run_cmd(hosts=self.hosts_list[0],\n                                      cmd=cmd,\n                                      sudo=True,\n                                      as_script=True)\n                if ret['rc'] != 0:\n                    success = False\n                    self.logger.info('Failed to put %s into %s on %s' %\n                                     (pidstr, task_file, self.hosts_list[0]))\n                    self.logger.info('rc = %d', ret['rc'])\n                    self.logger.info('stdout = %s', ret['out'])\n                    self.logger.info('stderr = %s', ret['err'])\n        if not success:\n            self.skipTest('pbs_cgroups_hook: Failed to copy freezer tasks')\n\n        # Freeze the cgroup\n        freezer_file = os.path.join(fdir_pbs, 'freezer.state')\n        state = 'FROZEN'\n        fn = self.du.create_temp_file(body=state)\n        self.tempfile.append(fn)\n        ret = self.du.run_copy(self.hosts_list[0], src=fn,\n                               dest=freezer_file, sudo=True,\n                               uid='root', gid='root',\n                               mode=0o644)\n        if ret['rc'] != 0:\n            self.skipTest('pbs_cgroups_hook: Failed to copy '\n                          'freezer state FROZEN')\n\n        confirmed_frozen = False\n\n        for count in range(30):\n            ret = self.du.cat(hostname=self.hosts_list[0],\n                              filename=freezer_file,\n                              sudo=True)\n            if ret['rc'] != 0:\n           
     self.logger.info(\"Cannot confirm freezer state; \"\n                                 \"sleeping 30 seconds instead\")\n                time.sleep(30)\n                break\n            if ret['out'][0] == 'FROZEN':\n                self.logger.info(\"job processes reported as FROZEN\")\n                confirmed_frozen = True\n                break\n            else:\n                self.logger.info(\"freezer state reported as \"\n                                 + ret['out'][0])\n                time.sleep(1)\n\n        if not confirmed_frozen:\n            self.logger.info(\"Freezer did not work; skip test after cleanup\")\n\n        # Catch any exception so we can thaw the cgroup or the jobs\n        # will remain frozen and impact subsequent tests\n        passed = True\n\n        # Now delete the job\n        try:\n            self.server.delete(id=jid)\n        except Exception as exc:\n            passed = False\n            self.logger.info('Job could not be deleted')\n\n        if confirmed_frozen:\n            # The cgroup hook should fail to clean up the cgroups\n            # because of the freeze, and offline the node\n            # Note that when vnode per numa node is enabled, this\n            # will take longer: the execjob_epilogue will first mark\n            # the per-socket vnode offline, but only the exechost_periodic\n            # will mark the natural node offline\n            try:\n                self.server.expect(NODE, {'state': (MATCH_RE, 'offline')},\n                                   id=self.nodes_list[0], offset=10,\n                                   interval=3)\n            except Exception as exc:\n                passed = False\n                self.logger.info('Node never went offline')\n\n        # Thaw the cgroup\n        state = 'THAWED'\n        fn = self.du.create_temp_file(body=state)\n        self.tempfile.append(fn)\n        ret = self.du.run_copy(self.hosts_list[0], src=fn,\n                               
 dest=freezer_file, sudo=True,\n                               uid='root', gid='root',\n                               mode=0o644)\n\n        if ret['rc'] != 0:\n            # Skip the test at the end when this happens,\n            # but still attempt to clean up!\n            confirmed_frozen = False\n\n        # First confirm the processes were thawed\n        for count in range(30):\n            ret = self.du.cat(hostname=self.hosts_list[0],\n                              filename=freezer_file,\n                              sudo=True)\n            if ret['rc'] != 0:\n                self.logger.info(\"Cannot confirm freezer state; \"\n                                 \"sleeping 30 seconds instead\")\n                time.sleep(30)\n                break\n            if ret['out'][0] == 'THAWED':\n                self.logger.info(\"job processes reported as THAWED\")\n                break\n            else:\n                self.logger.info(\"freezer state reported as \"\n                                 + ret['out'][0])\n                time.sleep(1)\n\n        # once the freezer is thawed, all the processes should receive\n        # the cgroup hook's kill signal and disappear;\n        # confirm they're gone before deleting freezer\n        freezer_tasks = os.path.join(fdir_pbs, 'tasks')\n        for count in range(30):\n            ret = self.du.cat(hostname=self.hosts_list[0],\n                              filename=freezer_tasks,\n                              sudo=True)\n            if ret['rc'] != 0:\n                self.logger.info(\"Cannot confirm freezer tasks; \"\n                                 \"sleeping 30 seconds instead\")\n                time.sleep(30)\n                break\n            if ret['out'] == [] or ret['out'][0] == '':\n                self.logger.info(\"Processes in thawed freezer are gone\")\n                break\n            else:\n                self.logger.info(\"tasks still in thawed freezer: \"\n                              
   + str(ret['out']))\n                time.sleep(1)\n\n        cmd = [\"rmdir\", fdir_pbs]\n        self.logger.info(\"Removing %s\" % fdir_pbs)\n        self.du.run_cmd(self.hosts_list[0], cmd=cmd, sudo=True)\n        # Due to orphaned jobs, the node does not come back to the free\n        # state; the workaround is to recreate the nodes. Orphaned jobs\n        # will get cleaned up in tearDown, hence not doing it here\n\n        # try deleting the job once more, to ensure that the node isn't\n        # busy\n        try:\n            self.server.delete(id=jid)\n        except Exception as exc:\n            pass\n\n        bs = {'job_state': 'F'}\n        self.server.expect(JOB, bs, jid, extend='x', offset=1)\n\n        if not confirmed_frozen:\n            self.cgroup_recreate_nodes()\n            self.skipTest('Could not confirm freeze/thaw worked')\n\n        return passed\n\n    def cgroup_recreate_nodes(self):\n        \"\"\"\n        Since the job delete action was purposefully bent out of shape,\n        the node state might stay busy for some time;\n        retry until it works -- this is for the sanity of the next\n        test\n        \"\"\"\n        for count in range(30):\n            try:\n                self.server.manager(MGR_CMD_DELETE, NODE, None, \"\")\n                self.logger.info('Managed to delete nodes')\n                break\n            except Exception:\n                self.logger.info('Failed to delete nodes (still busy?)')\n                time.sleep(1)\n\n        for host in self.hosts_list:\n            try:\n                self.server.manager(MGR_CMD_CREATE, NODE, id=host)\n            except Exception:\n                # the delete might have failed and then the create will,\n                # but still confirm the node goes back to free state\n                pass\n            self.server.expect(NODE, {'state': 'free'},\n                               id=host, interval=3)\n\n    def test_cgroup_offline_node_preserve_comment(self):\n        
\"\"\"\n        Test to verify that an offlined node that is brought back online\n        preserves its custom comment.\n        \"\"\"\n        a = {'comment': \"foo bar\"}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.hosts_list[0])\n        name = 'CGROUP7.1'\n        vn_per_numa = 'false'\n        rv = self.cgroup_offline_node(name, vn_per_numa)\n        self.assertTrue(rv)\n        a = {'comment': \"foo bar\"}\n        self.server.expect(NODE, a, id=self.hosts_list[0])\n        self.cgroup_recreate_nodes()\n\n    def test_cgroup_offline_node(self):\n        \"\"\"\n        Test to verify that the node is offlined when it can't clean up\n        the cgroup and brought back online once the cgroup is cleaned up.\n        vnode_per_numa_node = false\n        \"\"\"\n        name = 'CGROUP7.1'\n        vn_per_numa = 'false'\n        rv = self.cgroup_offline_node(name, vn_per_numa)\n        self.assertTrue(rv)\n        self.cgroup_recreate_nodes()\n\n    def test_cgroup_offline_node_vnpernuma(self):\n        \"\"\"\n        Test to verify that the node is offlined when it can't clean up\n        the cgroup and brought back online once the cgroup is cleaned up.\n        vnode_per_numa_node = true\n        \"\"\"\n        with open(os.path.join(os.sep, 'proc', 'meminfo'), 'r') as fd:\n            meminfo = fd.read()\n        if 'Hugepagesize' not in meminfo:\n            self.skipTest('Hugepagesize not in meminfo')\n        name = 'CGROUP7.2'\n        vn_per_numa = 'true'\n        rv = self.cgroup_offline_node(name, vn_per_numa)\n        self.assertTrue(rv)\n        self.cgroup_recreate_nodes()\n\n    @requirements(num_moms=2)\n    def test_cgroup_cpuset_host_excluded(self):\n        \"\"\"\n        Test to verify that cgroup subsystems are not enforced on nodes\n        that have the exclude_hosts set but are enforced on other systems\n        \"\"\"\n        name = 'CGROUP10'\n        mom, _ = self.get_host_names(self.hosts_list[0])\n        
self.load_config(self.cfg1 % ('', '', '', '%s' % mom,\n                                      self.mem, self.swapctl))\n        a = {'Resource_List.select': '1:ncpus=1:mem=300mb:host=%s' %\n             self.hosts_list[0], ATTR_N: name}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.sleep600_job)\n        time.sleep(2)\n        stime = int(time.time())\n        time.sleep(2)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid)\n        self.server.status(JOB, ATTR_o, jid)\n        o = j.attributes[ATTR_o]\n        self.tempfile.append(o)\n        hostn = self.get_hostname(self.hosts_list[0])\n        self.moms_list[0].log_match('cgroup excluded for subsystem cpuset '\n                                    'on host %s' % hostn,\n                                    starttime=stime, n='ALL')\n        cpath = self.get_cgroup_job_dir('cpuset', jid, self.hosts_list[0])\n        self.assertFalse(self.is_dir(cpath, self.hosts_list[0]))\n        # Now try a job on momB\n        a = {'Resource_List.select': '1:ncpus=1:mem=300mb:host=%s' %\n             self.hosts_list[1], ATTR_N: name}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.sleep600_job)\n        jid2 = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid2)\n        cpath = self.get_cgroup_job_dir('cpuset', jid2, self.hosts_list[1])\n        self.logger.info('Checking for %s on %s' % (cpath, self.moms_list[1]))\n        self.assertTrue(self.is_dir(cpath, self.hosts_list[1]))\n\n    @requirements(num_moms=2)\n    def test_cgroup_run_on_host(self):\n        \"\"\"\n        Test to verify that the cgroup hook only runs on nodes\n        in the run_only_on_hosts\n        \"\"\"\n        name = 'CGROUP11'\n        mom, log = self.get_host_names(self.hosts_list[0])\n        self.load_config(self.cfg1 % ('', '', '%s' % mom, '',\n                                      self.mem, 
self.swapctl))\n        a = {'Resource_List.select': '1:ncpus=1:mem=300mb:host=%s' %\n             self.hosts_list[1], ATTR_N: name}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.sleep600_job)\n        time.sleep(2)\n        stime = int(time.time())\n        time.sleep(2)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid)\n        self.server.status(JOB, ATTR_o, jid)\n        o = j.attributes[ATTR_o]\n        self.tempfile.append(o)\n        hostn = self.get_hostname(self.hosts_list[1])\n        self.moms_list[1].log_match(\n            'set enabled to False based on run_only_on_hosts',\n            starttime=stime, n='ALL')\n        cpath = self.get_cgroup_job_dir('memory', jid, self.hosts_list[1])\n        self.assertFalse(self.is_dir(cpath, self.hosts_list[1]))\n        a = {'Resource_List.select': '1:ncpus=1:mem=300mb:host=%s' %\n             self.hosts_list[0], ATTR_N: name}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.sleep600_job)\n        jid2 = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid2)\n        self.server.status(JOB, ATTR_o, jid2)\n        o = j.attributes[ATTR_o]\n        self.tempfile.append(o)\n        cpath = self.get_cgroup_job_dir('memory', jid2, self.hosts_list[0])\n        self.assertTrue(self.is_dir(cpath, self.hosts_list[0]))\n\n    def test_cgroup_qstat_resources(self):\n        \"\"\"\n        Test to verify that cgroups are reporting usage for\n        mem, and vmem in qstat\n        \"\"\"\n        name = 'CGROUP14'\n        self.load_config(self.cfg3 % ('', 'false', '', self.mem, '',\n                                      self.swapctl, ''))\n        a = {'Resource_List.select': '1:ncpus=1:mem=500mb', ATTR_N: name}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.eatmem_job2)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R'}\n        
self.server.expect(JOB, a, jid)\n        self.server.status(JOB, [ATTR_o, 'exec_host'], jid)\n        o = j.attributes[ATTR_o]\n        self.tempfile.append(o)\n        host = j.attributes['exec_host']\n        self.logger.info('OUTPUT: %s' % o)\n        resc_list = ['resources_used.cput']\n        resc_list += ['resources_used.mem']\n        resc_list += ['resources_used.vmem']\n        qstat1 = self.server.status(JOB, resc_list, id=jid)\n        for q in qstat1:\n            self.logger.info('Q1: %s' % q)\n        cput1 = qstat1[0]['resources_used.cput']\n        mem1 = qstat1[0]['resources_used.mem']\n        vmem1 = qstat1[0]['resources_used.vmem']\n        self.logger.info('Waiting 35 seconds for CPU time to accumulate')\n        time.sleep(35)\n        qstat2 = self.server.status(JOB, resc_list, id=jid)\n        for q in qstat2:\n            self.logger.info('Q2: %s' % q)\n        cput2 = qstat2[0]['resources_used.cput']\n        mem2 = qstat2[0]['resources_used.mem']\n        vmem2 = qstat2[0]['resources_used.vmem']\n        self.assertNotEqual(cput1, cput2)\n        self.assertNotEqual(mem1, mem2)\n        # Check vmem only if system has swap control\n        if self.swapctl == 'true':\n            self.assertNotEqual(vmem1, vmem2)\n\n    def test_cgroup_reserve_mem(self):\n        \"\"\"\n        Test to verify that the mom reserve memory for OS\n        when there is a reserve mem request in the config.\n        Install cfg3 and then cfg4 and measure difference\n        between the amount of available memory and memsw.\n        For example, on a system with 1GB of physical memory\n        and 1GB of active swap. With cfg3 in place, we should\n        see 1GB - 50MB = 950MB of available memory and\n        2GB - (50MB + 45MB) = 1905MB of available vmem.\n        With cfg4 in place, we should see 1GB - 100MB = 900MB\n        of available memory and 2GB - (100MB + 90MB) = 1810MB\n        of available vmem. 
When we calculate the differences\n        we get:\n        mem: 950MB - 900MB = 50MB = 51200KB\n        vmem: 1905MB - 1810MB = 95MB = 97280KB\n        \"\"\"\n        if not self.paths[self.hosts_list[0]]['memory']:\n            self.skipTest('Test requires memory subsystem mounted')\n        self.load_config(self.cfg3 % ('', 'false', '', self.mem, '',\n                                      self.swapctl, ''))\n        self.server.expect(NODE, {'state': 'free'},\n                           id=self.nodes_list[0], interval=3, offset=10)\n        if self.swapctl == 'true':\n            vmem = self.server.status(NODE, 'resources_available.vmem',\n                                      id=self.nodes_list[0])\n            self.logger.info('vmem: %s' % str(vmem))\n            vmem1 = PbsTypeSize(vmem[0]['resources_available.vmem'])\n            self.logger.info('Vmem-1: %s' % vmem1.value)\n        mem = self.server.status(NODE, 'resources_available.mem',\n                                 id=self.nodes_list[0])\n        mem1 = PbsTypeSize(mem[0]['resources_available.mem'])\n        self.logger.info('Mem-1: %s' % mem1.value)\n        self.load_config(self.cfg4 % (self.mem, self.swapctl))\n        self.server.expect(NODE, {'state': 'free'},\n                           id=self.nodes_list[0], interval=3, offset=10)\n        if self.swapctl == 'true':\n            vmem = self.server.status(NODE, 'resources_available.vmem',\n                                      id=self.nodes_list[0])\n            vmem2 = PbsTypeSize(vmem[0]['resources_available.vmem'])\n            self.logger.info('Vmem-2: %s' % vmem2.value)\n            vmem_resv = vmem1 - vmem2\n            if (vmem_resv.unit == 'b'):\n                vmem_resv_bytes = vmem_resv.value\n            elif (vmem_resv.unit == 'kb'):\n                vmem_resv_bytes = vmem_resv.value * 1024\n            elif (vmem_resv.unit == 'mb'):\n                vmem_resv_bytes = vmem_resv.value * 1024 * 1024\n            
self.logger.info('Vmem resv diff in bytes: %s' % vmem_resv_bytes)\n            # rounding differences may make diff slightly smaller than we expect\n            # accept 1MB deviation as irrelevant\n            # Note: since we don't know if there is swap, memsw reserved\n            # increase might not have been heeded. Change this to a higher\n            # value (cfr. above) only on test harnesses that have enough swap\n            self.assertGreaterEqual(vmem_resv_bytes, (51200 - 1024) * 1024)\n        mem = self.server.status(NODE, 'resources_available.mem',\n                                 id=self.nodes_list[0])\n        mem2 = PbsTypeSize(mem[0]['resources_available.mem'])\n        self.logger.info('Mem-2: %s' % mem2.value)\n        mem_resv = mem1 - mem2\n        if (mem_resv.unit == 'b'):\n            mem_resv_bytes = mem_resv.value\n        elif (mem_resv.unit == 'kb'):\n            mem_resv_bytes = mem_resv.value * 1024\n        elif (mem_resv.unit == 'mb'):\n            mem_resv_bytes = mem_resv.value * 1024 * 1024\n        self.logger.info('Mem resv diff in bytes: %s' % mem_resv_bytes)\n        # rounding differences may make diff slightly smaller than we expect\n        # accept 1MB deviation as irrelevant\n        self.assertGreaterEqual(mem_resv_bytes, (51200 - 1024) * 1024)\n\n    @requirements(num_moms=2)\n    def test_cgroup_multi_node(self):\n        \"\"\"\n        Test multi-node jobs with cgroups\n        \"\"\"\n        name = 'CGROUP16'\n        self.load_config(self.cfg6 % (self.mem, self.swapctl))\n        a = {'Resource_List.select': '2:ncpus=1:mem=100mb',\n             'Resource_List.place': 'scatter', ATTR_N: name}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.sleep30_job)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid)\n        self.server.status(JOB, 'exec_host', jid)\n        ehost = j.attributes['exec_host']\n        tmp_host = ehost.split('+')\n     
   ehost1 = tmp_host[0].split('/')[0]\n        ehjd1 = self.get_cgroup_job_dir('memory', jid, ehost1)\n        self.assertTrue(self.is_dir(ehjd1, ehost1),\n                        'Missing memory subdirectory: %s' % ehjd1)\n        ehost2 = tmp_host[1].split('/')[0]\n        ehjd2 = self.get_cgroup_job_dir('memory', jid, ehost2)\n        self.assertTrue(self.is_dir(ehjd2, ehost2),\n                        'Missing memory subdirectory: %s' % ehjd2)\n        # Wait for job to finish and make sure that cgroup directories\n        # have been cleaned up by the hook\n        self.server.expect(JOB, 'queue', op=UNSET, offset=30, interval=1,\n                           id=jid)\n        self.assertFalse(self.is_dir(ehjd1, ehost1),\n                         'Directory still present: %s' % ehjd1)\n        self.assertFalse(self.is_dir(ehjd2, ehost2),\n                         'Directory still present: %s' % ehjd2)\n\n    def test_cgroup_job_array(self):\n        \"\"\"\n        Test that cgroups are created for subjobs like a regular job\n        \"\"\"\n        if not self.paths[self.hosts_list[0]]['memory']:\n            self.skipTest('Test requires memory subsystem mounted')\n        name = 'CGROUP17'\n        self.load_config(self.cfg1 % ('', '', '', '', self.mem, self.swapctl))\n        a = {'Resource_List.select': '1:ncpus=1:mem=300mb:host=%s' %\n             self.hosts_list[0], ATTR_N: name, ATTR_J: '1-4',\n             'Resource_List.place': 'pack:excl'}\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(60)\n        jid = self.server.submit(j)\n        a = {'job_state': 'B'}\n        self.server.expect(JOB, a, jid)\n        # Get subjob ID\n        subj1 = jid.replace('[]', '[1]')\n        self.server.expect(JOB, {'job_state': 'R'}, subj1)\n        rv = self.server.status(JOB, ['exec_host'], subj1)\n        ehost = rv[0].get('exec_host')\n        ehost1 = ehost.split('/')[0]\n        # Verify that cgroup files are created for subjobs\n        # but not for 
the parent job array\n        cpath = self.get_cgroup_job_dir('memory', subj1, ehost1)\n        self.assertTrue(self.is_dir(cpath, ehost1))\n        cpath = self.get_cgroup_job_dir('memory', jid, ehost1)\n        self.assertFalse(self.is_dir(cpath, ehost1))\n        # Verify that subjob4 is queued and no cgroup\n        # files are created for the queued subjob\n        subj4 = jid.replace('[]', '[4]')\n        self.server.expect(JOB, {'job_state': 'Q'}, id=subj4)\n        cpath = self.get_cgroup_job_dir('memory', subj4, ehost1)\n        self.assertFalse(self.is_dir(cpath, self.hosts_list[0]))\n        # Delete subjob1 and verify that cgroup files are cleaned up\n        self.server.delete(id=subj1)\n        self.server.expect(JOB, {'job_state': 'X'}, subj1)\n        cpath = self.get_cgroup_job_dir('memory', subj1, ehost1)\n        self.assertFalse(self.is_dir(cpath, ehost1))\n        # Verify that subjob2 is running\n        subj2 = jid.replace('[]', '[2]')\n        self.server.expect(JOB, {'job_state': 'R'}, id=subj2)\n        # Force delete the subjob and verify cgroup\n        # files are cleaned up\n        self.server.delete(id=subj2, extend='force')\n        self.server.expect(JOB, {'job_state': 'X'}, subj2)\n        # Adding extra sleep for files to clean up\n        # since qdel -Wforce changed the state of the subjob\n        # without waiting for MoM\n        # retry 10 times (for 20 seconds max. 
in total)\n        # if the directory is still there...\n        cpath = self.get_cgroup_job_dir('memory', subj2, ehost1)\n        for trial in range(0, 10):\n            time.sleep(2)\n            if not self.is_dir(cpath, ehost1):\n                # we're done\n                break\n        self.assertFalse(self.is_dir(cpath, ehost1))\n\n    @requirements(num_moms=2)\n    def test_cgroup_cleanup(self):\n        \"\"\"\n        Test that cgroups files are cleaned up after qdel\n        \"\"\"\n        self.load_config(self.cfg1 % ('', '', '', '', self.mem, self.swapctl))\n        a = {'Resource_List.select': '2:ncpus=1:mem=100mb',\n             'Resource_List.place': 'scatter'}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.sleep600_job)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid)\n        self.server.status(JOB, ['exec_host'], jid)\n        ehost = j.attributes['exec_host']\n        tmp_host = ehost.split('+')\n        ehost1 = tmp_host[0].split('/')[0]\n        ehost2 = tmp_host[1].split('/')[0]\n        ehjd1 = self.get_cgroup_job_dir('cpuset', jid, ehost1)\n        self.assertTrue(self.is_dir(ehjd1, ehost1))\n        ehjd2 = self.get_cgroup_job_dir('cpuset', jid, ehost2)\n        self.assertTrue(self.is_dir(ehjd2, ehost2))\n        self.server.delete(id=jid, wait=True)\n        self.assertFalse(self.is_dir(ehjd1, ehost1))\n        self.assertFalse(self.is_dir(ehjd2, ehost2))\n\n    def test_cgroup_execjob_end_should_delete_cgroup(self):\n        \"\"\"\n        Test to verify that if execjob_epilogue hook failed to run or to\n        clean up cgroup files for a job, execjob_end hook should clean\n        them up\n        \"\"\"\n        self.load_config(self.cfg4 % (self.mem, self.swapctl))\n        # remove epilogue and periodic from the list of events\n        attr = {'enabled': 'True',\n                'event': ['execjob_begin', 'execjob_launch',\n                         
  'execjob_attach', 'execjob_end', 'exechost_startup']}\n        self.server.manager(MGR_CMD_SET, HOOK, attr, self.hook_name)\n        self.server.expect(NODE, {'state': 'free'}, id=self.nodes_list[0])\n        j = Job(TEST_USER)\n        j.set_sleep_time(1)\n        jid = self.server.submit(j)\n        # wait for job to finish\n        self.server.expect(JOB, 'queue', id=jid, op=UNSET,\n                           interval=1, offset=1)\n        # verify that cgroup files for this job are gone even if\n        # epilogue and periodic events are disabled\n        for subsys, path in self.paths[self.hosts_list[0]].items():\n            # only check under subsystems that are enabled\n            enabled_subsys = ['cpuacct', 'cpuset', 'memory', 'memsw']\n            if not any([x in subsys for x in enabled_subsys]):\n                continue\n            if path:\n                # Following code only works with recent hooks\n                # and default cgroup_prefix\n                # change the path if testing with older hooks\n                # see comments in get_cgroup_job_dir()\n                filename = os.path.join(path,\n                                        'pbs_jobs.service',\n                                        'jobid', str(jid))\n                self.logger.info('Checking that file %s should not exist'\n                                 % filename)\n                self.assertFalse(self.du.isfile(self.hosts_list[0], filename))\n\n    @skipOnCray\n    def test_cgroup_assign_resources_mem_only_vnode(self):\n        \"\"\"\n        Test to verify that a job requesting mem larger than any single vnode\n        works properly\n        \"\"\"\n        if not self.paths[self.hosts_list[0]]['memory']:\n            self.skipTest('Test requires memory subsystem mounted')\n\n        # vnode_per_numa_node enabled, so we get per-socket vnodes\n        self.load_config(self.cfg3\n                         % ('', 'true', '', self.mem, '', self.swapctl, ''))\n        
self.server.expect(NODE, {ATTR_NODE_state: 'free'},\n                           id=self.hosts_list[0]+'[0]')\n        socket1_found = False\n        nodestat = self.server.status(NODE)\n        total_kb = 0\n        for node in nodestat:\n            if (self.mom.shortname + '[') not in node['id']:\n                self.logger.info('Skipping vnode %s' % node['id'])\n            else:\n                if node['id'] == self.mom.shortname + '[0]':\n                    self.logger.info('Found socket 0, vnode %s'\n                                     % node['id'])\n                if node['id'] == self.mom.shortname + '[1]':\n                    socket1_found = True\n                    self.logger.info('Found socket 1, vnode %s '\n                                     '(multi socket!)'\n                                     % node['id'])\n                # PbsTypeSize value is in kb\n                node_kb = PbsTypeSize(node['resources_available.mem']).value\n                self.logger.info('Vnode %s memory: %skb'\n                                 % (node['id'], node_kb))\n                total_kb += node_kb\n        total_mb = int(total_kb / 1024)\n        self.logger.info(\"Total memory on first MoM: %smb\" % total_mb)\n        if not socket1_found:\n            self.skipTest('Test requires more than one NUMA node '\n                          '(i.e. 
\"socket\") on first host')\n        memreq_mb = total_mb - 2\n        a = {'Resource_List.select':\n             '1:ncpus=1:host=%s:mem=%smb'\n             % (self.mom.shortname, str(memreq_mb))}\n        j1 = Job(TEST_USER, attrs=a)\n        j1.create_script('date')\n        jid1 = self.server.submit(j1)\n        # Job should finish and thus dequeued\n        self.server.expect(JOB, 'queue', id=jid1, op=UNSET,\n                           interval=1, offset=1)\n        a = {'Resource_List.select':\n             '1:ncpus=1:host=%s:mem=%smb'\n             % (self.mom.shortname, str(memreq_mb + 1024))}\n        j3 = Job(TEST_USER, attrs=a)\n        j3.create_script('date')\n        jid3 = self.server.submit(j3)\n        # Will either start with \"Can Never Run\" or \"Not Running\"\n        # Don't match only one\n        a = {'job_state': 'Q',\n             'comment':\n             (MATCH_RE,\n              '.*: Insufficient amount of resource: mem.*')}\n        self.server.expect(JOB, a, attrop=PTL_AND, id=jid3, offset=10,\n                           interval=1)\n\n    @timeout(1800)\n    def test_cgroup_cpuset_exclude_cpu(self):\n        \"\"\"\n        Confirm that exclude_cpus reduces resources_available.ncpus\n        \"\"\"\n        # Fetch the unmodified value of resources_available.ncpus\n        self.load_config(self.cfg5 % ('false', '', 'false', 'false',\n                                      'false', self.mem, self.swapctl))\n        self.server.expect(NODE, {'state': 'free'},\n                           id=self.nodes_list[0], interval=1)\n        result = self.server.status(NODE, 'resources_available.ncpus',\n                                    id=self.nodes_list[0])\n        orig_ncpus = int(result[0]['resources_available.ncpus'])\n        self.assertGreater(orig_ncpus, 0)\n        self.logger.info('Original value of ncpus: %d' % orig_ncpus)\n        if orig_ncpus < 2:\n            self.skipTest('Node must have at least two CPUs')\n        # Now exclude 
CPU zero\n        self.load_config(self.cfg5 % ('false', '0', 'false', 'false',\n                                      'false', self.mem, self.swapctl))\n        self.server.expect(NODE, {'state': 'free'},\n                           id=self.nodes_list[0], interval=1)\n        result = self.server.status(NODE, 'resources_available.ncpus',\n                                    id=self.nodes_list[0])\n        new_ncpus = int(result[0]['resources_available.ncpus'])\n        self.assertGreater(new_ncpus, 0)\n        self.logger.info('New value with one CPU excluded: %d' % new_ncpus)\n        self.assertEqual((new_ncpus + 1), orig_ncpus)\n        # Repeat the process with vnode_per_numa_node set to true\n        vnode = '%s[0]' % self.nodes_list[0]\n        self.load_config(self.cfg5 % ('true', '', 'false', 'false',\n                                      'false', self.mem, self.swapctl))\n        self.server.expect(NODE, {'state': 'free'},\n                           id=vnode, interval=1)\n        result = self.server.status(NODE, 'resources_available.ncpus',\n                                    id=vnode)\n        orig_ncpus = int(result[0]['resources_available.ncpus'])\n        self.assertGreater(orig_ncpus, 0)\n        self.logger.info('Original value of vnode ncpus: %d' % orig_ncpus)\n        # Exclude CPU zero again\n        self.load_config(self.cfg5 % ('true', '0', 'false', 'false',\n                                      'false', self.mem, self.swapctl))\n        self.server.expect(NODE, {'state': 'free'},\n                           id=vnode, interval=1)\n        result = self.server.status(NODE, 'resources_available.ncpus',\n                                    id=vnode)\n        new_ncpus = int(result[0]['resources_available.ncpus'])\n        self.assertEqual((new_ncpus + 1), orig_ncpus)\n\n    def test_cgroup_cpuset_mem_fences(self):\n        \"\"\"\n        Confirm that mem_fences affects setting of cpuset.mems\n        \"\"\"\n        if not 
self.paths[self.hosts_list[0]]['memory']:\n            self.skipTest('Test requires memory subsystem mounted')\n        # Get the grandparent directory\n        cpuset_base = self.paths[self.hosts_list[0]]['cpuset']\n        cpuset_mems = os.path.join(cpuset_base, 'cpuset.mems')\n        result = self.du.cat(hostname=self.hosts_list[0], filename=cpuset_mems,\n                             sudo=True)\n        if result['rc'] != 0 or result['out'][0] == '0':\n            self.skipTest('Test requires two NUMA nodes')\n        # First try with mem_fences set to true (the default)\n        self.load_config(self.cfg5 % ('false', '', 'true', 'false',\n                                      'false', self.mem, self.swapctl))\n        # Do not use node_list -- vnode_per_numa_node is now off,\n        # so use the natural node; otherwise we might 'expect' a stale vnode\n        self.server.expect(NODE, {'state': 'free'},\n                           id=self.hosts_list[0], interval=3, offset=10)\n        a = {'Resource_List.select': '1:ncpus=1:mem=100mb:host=%s' %\n             self.hosts_list[0]}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.sleep600_job)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid)\n        self.server.status(JOB, ATTR_o, jid)\n        o = j.attributes[ATTR_o]\n        self.tempfile.append(o)\n        fn = self.get_cgroup_job_dir('cpuset', jid, self.hosts_list[0])\n        fn = os.path.join(fn, 'cpuset.mems')\n        result = self.du.cat(hostname=self.hosts_list[0],\n                             filename=fn, sudo=True)\n        self.assertEqual(result['rc'], 0)\n        value_mem_fences = result['out'][0]\n        self.logger.info(\"value with mem_fences: %s\" % value_mem_fences)\n        self.server.delete(jid, wait=True)\n\n        # Now try with mem_fences set to false\n        self.load_config(self.cfg5 % ('false', '', 'false', 'false',\n                                      
'false', self.mem, self.swapctl))\n        self.server.expect(NODE, {'state': 'free'},\n                           id=self.nodes_list[0], interval=3, offset=10)\n        a = {'Resource_List.select': '1:ncpus=1:mem=100mb:host=%s' %\n             self.hosts_list[0]}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.sleep600_job)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid)\n        self.server.status(JOB, ATTR_o, jid)\n        o = j.attributes[ATTR_o]\n        self.tempfile.append(o)\n        fn = self.get_cgroup_job_dir('cpuset', jid, self.hosts_list[0])\n        fn = os.path.join(fn, 'cpuset.mems')\n        result = self.du.cat(hostname=self.hosts_list[0],\n                             filename=fn, sudo=True)\n        self.assertEqual(result['rc'], 0)\n        # Compare cpuset.mems values with and without mem_fences\n        value_no_mem_fences = result['out'][0]\n        self.logger.info(\"value with no mem_fences: %s\" % value_no_mem_fences)\n        self.assertNotEqual(value_no_mem_fences, value_mem_fences)\n\n    def test_cgroup_cpuset_mem_hardwall(self):\n        \"\"\"\n        Confirm that mem_hardwall affects setting of cpuset.mem_hardwall\n        \"\"\"\n        if not self.paths[self.hosts_list[0]]['memory']:\n            self.skipTest('Test requires memory subsystem mounted')\n\n        self.load_config(self.cfg5 % ('false', '', 'true', 'false',\n                                      'false', self.mem, self.swapctl))\n        self.server.expect(NODE, {'state': 'free'},\n                           id=self.nodes_list[0], interval=3, offset=10)\n        a = {'Resource_List.select': '1:ncpus=1:mem=100mb:host=%s' %\n             self.hosts_list[0]}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.sleep600_job)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid)\n        self.server.status(JOB, ATTR_o, 
jid)\n        o = j.attributes[ATTR_o]\n        self.tempfile.append(o)\n        memh_path = 'cpuset.mem_hardwall'\n        fn = self.get_cgroup_job_dir('cpuset', jid, self.hosts_list[0])\n        if self.noprefix:\n            memh_path = 'mem_hardwall'\n        fn = os.path.join(fn, memh_path)\n        self.logger.info('fn is %s' % fn)\n        if not (self.is_file(fn, self.hosts_list[0])):\n            self.skipTest('cgroup mem_hardwall of job does not exist')\n        result = self.du.cat(hostname=self.hosts_list[0],\n                             filename=fn, sudo=True)\n        self.assertEqual(result['rc'], 0)\n        self.assertEqual(result['out'][0], '0')\n        self.server.delete(jid, wait=True)\n\n        self.load_config(self.cfg5 % ('false', '', 'true', 'true',\n                                      'false', self.mem, self.swapctl))\n        self.server.expect(NODE, {'state': 'free'},\n                           id=self.nodes_list[0], interval=3, offset=10)\n        a = {'Resource_List.select': '1:ncpus=1:mem=100mb:host=%s' %\n             self.hosts_list[0]}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.sleep600_job)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid)\n        self.server.status(JOB, ATTR_o, jid)\n        o = j.attributes[ATTR_o]\n        self.tempfile.append(o)\n        fn = self.get_cgroup_job_dir('cpuset', jid, self.hosts_list[0])\n        fn = os.path.join(fn, memh_path)\n        if not (self.is_file(fn, self.hosts_list[0])):\n            self.skipTest('cgroup mem_hardwall of job does not exist')\n        result = self.du.cat(hostname=self.hosts_list[0],\n                             filename=fn, sudo=True)\n        self.assertEqual(result['rc'], 0)\n        self.assertEqual(result['out'][0], '1')\n\n    def test_cgroup_find_gpus(self):\n        \"\"\"\n        Confirm that the hook finds the correct number of GPUs.\n        Note: This assumes all 
GPUs have the same MIG configuration,\n        either on or off.\n        \"\"\"\n        if not self.paths[self.hosts_list[0]]['devices']:\n            self.skipTest('Skipping test since no devices subsystem defined')\n        name = 'CGROUP3'\n        self.load_config(self.cfg2)\n\n        cmd = ['nvidia-smi', '-L']\n        try:\n            rv = self.du.run_cmd(hosts=self.moms_list[0].hostname, cmd=cmd)\n        except OSError:\n            rv = {'err': True}\n        if rv['err'] or 'GPU' not in rv['out'][0]:\n            self.skipTest('Skipping test since nvidia-smi not found')\n        last_gpu_was_physical = False\n        gpus = 0\n        # store uuids of the MIG devices\n        uuid_list = []\n        for l in rv['out']:\n            if l.startswith('GPU'):\n                last_gpu_was_physical = True\n                gpus += 1\n            elif l.lstrip().startswith('MIG'):\n                uuid_list.append(l.split()[-1].rstrip(\")\"))\n                if last_gpu_was_physical:\n                    gpus -= 1\n                last_gpu_was_physical = False\n                gpus += 1\n        if gpus < 1:\n            self.skipTest('Skipping test since no gpus found on %s'\n                          % (self.nodes_list[0]))\n        ngpus_stat = self.server.status(NODE, id=self.nodes_list[0])[0]\n        self.logger.info(\"pbsnodes for %s reported: %s\"\n                         % (self.nodes_list[0], ngpus_stat))\n        self.assertTrue('resources_available.ngpus' in ngpus_stat,\n                        \"No resources_available.ngpus found on node %s\"\n                        % (self.nodes_list[0]))\n        ngpus = int(ngpus_stat['resources_available.ngpus'])\n        self.assertEqual(gpus, ngpus, 'ngpus is incorrect')\n        a = {'Resource_List.select': '1:ngpus=1', ATTR_N: name}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.check_gpu_script)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 
'R'}, jid)\n        self.server.status(JOB, [ATTR_o, 'exec_host'], jid)\n        filename = j.attributes[ATTR_o]\n        self.tempfile.append(filename)\n        ehost = j.attributes['exec_host']\n        tmp_file = filename.split(':')[1]\n        tmp_host = ehost.split('/')[0]\n        tmp_out = self.wait_and_read_file(filename=tmp_file, host=tmp_host)\n\n        mig_devices_in_use = tmp_out[-1]\n        for mig_device in mig_devices_in_use.split(\",\"):\n            self.assertIn(mig_device, uuid_list,\n                          \"MIG identifiers do not match\")\n\n        self.logger.info(tmp_out)\n        self.assertIn('There are 1 GPUs', tmp_out, 'No gpus were assigned')\n        self.assertIn('c 195:255 rwm', tmp_out, 'Nvidia controller not found')\n        m = re.search(r'195:(?!255)', '\\n'.join(tmp_out))\n        self.assertIsNotNone(m, 'No gpu assigned in cgroups')\n\n    def test_cgroup_cpuset_memory_spread_page(self):\n        \"\"\"\n        Confirm that mem_spread_page affects setting of\n        cpuset.memory_spread_page\n        \"\"\"\n        if not self.paths[self.hosts_list[0]]['memory']:\n            self.skipTest('Test requires memory subsystem mounted')\n\n        self.load_config(self.cfg5 % ('false', '', 'true', 'false',\n                                      'false', self.mem, self.swapctl))\n        nid = self.nodes_list[0]\n        self.server.expect(NODE, {'state': 'free'}, id=nid,\n                           interval=3, offset=10)\n        hostn = self.hosts_list[0]\n        a = {'Resource_List.select': '1:ncpus=1:mem=100mb:host=%s' % hostn}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.sleep600_job)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid)\n        self.server.status(JOB, ATTR_o, jid)\n        o = j.attributes[ATTR_o]\n        self.tempfile.append(o)\n        spread_path = 'cpuset.memory_spread_page'\n        fn = 
self.get_cgroup_job_dir('cpuset', jid, hostn)\n        if self.noprefix:\n            spread_path = 'memory_spread_page'\n        fn = os.path.join(fn, spread_path)\n        self.assertTrue(self.is_file(fn, hostn))\n        result = self.du.cat(hostname=hostn, filename=fn, sudo=True)\n        self.assertEqual(result['rc'], 0)\n        self.assertEqual(result['out'][0], '0')\n        self.server.delete(jid, wait=True)\n\n        self.load_config(self.cfg5 % ('false', '', 'true', 'false',\n                                      'true', self.mem, self.swapctl))\n        self.server.expect(NODE, {'state': 'free'}, id=nid,\n                           interval=3, offset=10)\n        a = {'Resource_List.select': '1:ncpus=1:mem=100mb:host=%s' % hostn}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.sleep600_job)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid)\n        self.server.status(JOB, ATTR_o, jid)\n        o = j.attributes[ATTR_o]\n        self.tempfile.append(o)\n        fn = self.get_cgroup_job_dir('cpuset', jid, hostn)\n        fn = os.path.join(fn, spread_path)\n        result = self.du.cat(hostname=hostn, filename=fn, sudo=True)\n        self.assertEqual(result['rc'], 0)\n        self.assertEqual(result['out'][0], '1')\n\n    def test_cgroup_use_hierarchy(self):\n        \"\"\"\n        Test that memory.use_hierarchy is enabled by default\n        when PBS cgroups hook is instantiated\n        \"\"\"\n        # Remove PBS directories from memory subsystem\n        cpath = None\n        if ('memory' in self.paths[self.hosts_list[0]] and\n                self.paths[self.hosts_list[0]]['memory']):\n            cdir = self.paths[self.hosts_list[0]]['memory']\n            cpath = self.find_main_cpath(cdir)\n        else:\n            self.skipTest(\n                \"memory subsystem is not enabled for cgroups\")\n        if cpath is not None:\n            cmd = [\"rmdir\", cpath]\n      
      self.du.run_cmd(cmd=cmd, sudo=True, hosts=self.hosts_list[0])\n        self.logger.info(\"Removing %s\" % cpath)\n        self.load_config(self.cfg6 % (self.mem, self.swapctl))\n        # check where cpath is once more\n        # since we loaded a new cgroup config file\n        cpath = None\n        if ('memory' in self.paths[self.hosts_list[0]] and\n                self.paths[self.hosts_list[0]]['memory']):\n            cdir = self.paths[self.hosts_list[0]]['memory']\n            cpath = self.find_main_cpath(cdir)\n        # Verify that memory.use_hierarchy is enabled\n        fpath = os.path.join(cpath, \"memory.use_hierarchy\")\n        self.logger.info(\"looking for file %s\" % fpath)\n        rc = self.du.isfile(hostname=self.hosts_list[0], path=fpath)\n        if rc:\n            ret = self.du.cat(hostname=self.hosts_list[0], filename=fpath,\n                              logerr=False)\n            val = (' '.join(ret['out'])).strip()\n            self.assertEqual(\n                val, \"1\", \"%s is not equal to 1\" % val)\n            self.logger.info(\"memory.use_hierarchy is enabled\")\n        else:\n            self.assertFalse(1, \"File %s not present\" % fpath)\n\n    def test_cgroup_periodic_update_known_jobs(self):\n        \"\"\"\n        Verify that jobs known to mom are updated, not orphans\n        \"\"\"\n        conf = {'freq': 5, 'order': 100}\n        self.server.manager(MGR_CMD_SET, HOOK, conf, self.hook_name)\n        self.load_config(self.cfg3 % ('', 'false', '', self.mem, '',\n                                      self.swapctl, ''))\n        # Submit a short job and let it run to completion\n        a = {'Resource_List.select': '1:ncpus=1:mem=100mb:host=%s' %\n             self.hosts_list[0]}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.sleep5_job)\n        time.sleep(2)\n        stime = int(time.time())\n        time.sleep(2)\n        jid1 = self.server.submit(j)\n        a = {'job_state': 'R'}\n        
self.server.expect(JOB, a, jid1)\n        self.server.status(JOB, ATTR_o, jid1)\n        o = j.attributes[ATTR_o]\n        self.tempfile.append(o)\n        err_msg = \"Unexpected error in pbs_cgroups \" + \\\n            \"handling exechost_periodic event: TypeError\"\n        self.moms_list[0].log_match(err_msg, max_attempts=3,\n                                    interval=1, n='ALL',\n                                    starttime=stime,\n                                    existence=False)\n        self.server.log_match(jid1 + ';Exit_status=0', n='ALL',\n                              starttime=stime)\n        # Create a periodic hook that runs more frequently than the\n        # cgroup hook to prepend jid1 to mom_priv/hooks/hook_data/cgroup_jobs\n        hookname = 'prependjob'\n        hookbody = \"\"\"\nimport pbs\nimport os\nimport re\nimport time\nimport traceback\nevent = pbs.event()\njid_to_prepend = '%s'\npbs_home = ''\npbs_mom_home = ''\nif 'PBS_HOME' in os.environ:\n    pbs_home = os.environ['PBS_HOME']\nif 'PBS_MOM_HOME' in os.environ:\n    pbs_mom_home = os.environ['PBS_MOM_HOME']\npbs_conf = pbs.get_pbs_conf()\nif pbs_conf:\n    if not pbs_home and 'PBS_HOME' in pbs_conf:\n        pbs_home = pbs_conf['PBS_HOME']\n    if not pbs_mom_home and 'PBS_MOM_HOME' in pbs_conf:\n        pbs_mom_home = pbs_conf['PBS_MOM_HOME']\nif not pbs_home or not pbs_mom_home:\n    if 'PBS_CONF_FILE' in os.environ:\n        pbs_conf_file = os.environ['PBS_CONF_FILE']\n    else:\n        pbs_conf_file = os.path.join(os.sep, 'etc', 'pbs.conf')\n    regex = re.compile(r'\\\\s*([^\\\\s]+)\\\\s*=\\\\s*([^\\\\s]+)\\\\s*')\n    try:\n        with open(pbs_conf_file, 'r') as desc:\n            for line in desc:\n                match = regex.match(line)\n                if match:\n                    if not pbs_home and match.group(1) == 'PBS_HOME':\n                        pbs_home = match.group(2)\n                    if not pbs_mom_home and (match.group(1) ==\n                    
                         'PBS_MOM_HOME'):\n                        pbs_mom_home = match.group(2)\n    except Exception:\n        pass\nif not pbs_home:\n    pbs.logmsg(pbs.EVENT_DEBUG, 'Failed to locate PBS_HOME')\n    event.reject()\nif not pbs_mom_home:\n    pbs_mom_home = pbs_home\njobsfile = os.path.join(pbs_mom_home, 'mom_priv', 'hooks',\n                        'hook_data', 'cgroup_jobs')\ntry:\n    with open(jobsfile, 'r+') as desc:\n        jobdict = eval(desc.read())\n        if jid_to_prepend not in jobdict:\n            jobdict[jid_to_prepend] = time.time()\n            desc.seek(0)\n            desc.write(str(jobdict))\n            desc.truncate()\nexcept Exception as exc:\n    pbs.logmsg(pbs.EVENT_DEBUG, 'Failed to modify ' + jobsfile)\n    pbs.logmsg(pbs.EVENT_DEBUG,\n               str(traceback.format_exc().strip().splitlines()))\n    event.reject()\nevent.accept()\n\"\"\" % jid1\n        events = ['execjob_begin', 'exechost_periodic']\n        hookconf = {'enabled': 'True', 'freq': 2, 'alarm': 30, 'event': events}\n        self.server.create_import_hook(hookname, hookconf, hookbody,\n                                       overwrite=True)\n        # Submit a second job and verify that the following message\n        # does NOT appear in the mom log:\n        # _exechost_periodic_handler: Failed to update jid1\n        a = {'Resource_List.select': '1:ncpus=1:mem=100mb:host=%s' %\n             self.hosts_list[0]}\n        j = Job(TEST_USER, attrs=a)\n        # Here a short job is OK, since we are waiting for it to end\n        j.create_script(self.sleep30_job)\n        time.sleep(2)\n        presubmit = int(time.time())\n        time.sleep(2)\n        jid2 = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid2)\n        self.server.status(JOB, ATTR_o, jid2)\n        o = j.attributes[ATTR_o]\n        self.tempfile.append(o)\n        err_msg = \"Unexpected error in pbs_cgroups \" + \\\n            \"handling 
exechost_periodic event: TypeError\"\n        self.moms_list[0].log_match(err_msg, max_attempts=3,\n                                    interval=1, n='ALL',\n                                    starttime=presubmit,\n                                    existence=False)\n        self.server.log_match(jid2 + ';Exit_status=0', n='ALL',\n                              starttime=presubmit)\n        self.server.manager(MGR_CMD_DELETE, HOOK, None, hookname)\n        command = ['rm', '-rf',\n                   os.path.join(self.moms_list[0].pbs_conf['PBS_HOME'],\n                                'mom_priv', 'hooks', 'hook_data',\n                                'cgroup_jobs')]\n        self.du.run_cmd(cmd=command, hosts=self.hosts_list[0], sudo=True)\n        logmsg = '_exechost_periodic_handler: Failed to update %s' % jid1\n        self.moms_list[0].log_match(msg=logmsg, starttime=presubmit,\n                                    n='ALL', max_attempts=1, existence=False)\n\n    @requirements(num_moms=3)\n    def test_cgroup_release_nodes(self):\n        \"\"\"\n        Verify that exec_vnode values are trimmed\n        when execjob_launch hook prunes job via release_nodes(),\n        tolerate_node_failures=job_start\n        \"\"\"\n        self.load_config(self.cfg7 % (self.mem, self.mem))\n        # instantiate queuejob hook\n        hook_event = 'queuejob'\n        hook_name = 'qjob'\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.qjob_hook_body)\n        # instantiate execjob_launch hook\n        hook_event = 'execjob_launch'\n        hook_name = 'launch'\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.keep_select = 'e.job.Resource_List[\"site\"]'\n        self.server.create_import_hook(\n            hook_name, a, self.launch_hook_body % (self.keep_select))\n        # Submit a job that requires 2 nodes\n        j = Job(TEST_USER)\n        j.create_script(self.job_scr2 % 
(self.hosts_list[1]))\n        jid = self.server.submit(j)\n        # Check the exec_vnode while in substate 41\n        self.server.expect(JOB, {ATTR_substate: '41'}, id=jid)\n        self.server.expect(JOB, 'exec_vnode', id=jid, op=SET)\n        job_stat = self.server.status(JOB, id=jid)\n        execvnode1 = job_stat[0]['exec_vnode']\n        self.logger.info(\"initial exec_vnode: %s\" % execvnode1)\n        initial_vnodes = execvnode1.split('+')\n        # Check the exec_vnode after job is in substate 42\n        self.server.expect(JOB, {ATTR_substate: '42'}, id=jid)\n        self.server.expect(JOB, 'exec_vnode', id=jid, op=SET)\n        job_stat = self.server.status(JOB, id=jid)\n        execvnode2 = job_stat[0]['exec_vnode']\n        self.logger.info(\"pruned exec_vnode: %s\" % execvnode2)\n        pruned_vnodes = execvnode2.split('+')\n        # Check that the pruned exec_vnode has one less than initial value\n        self.assertEqual(len(pruned_vnodes) + 1, len(initial_vnodes))\n        # Find the released vnode\n        for vn in initial_vnodes:\n            if vn not in pruned_vnodes:\n                rel_vn = vn\n        vnodeB = rel_vn.split(':')[0].split('(')[1]\n        self.logger.info(\"released vnode: %s\" % vnodeB)\n        # Submit a second job requesting the released vnode, job runs\n        j2 = Job(TEST_USER,\n                 {ATTR_l + '.select': '1:ncpus=1:mem=100mb:vnode=%s' % vnodeB})\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n\n    @requirements(num_moms=3)\n    def test_cgroup_sismom_resize_fail(self):\n        \"\"\"\n        Verify that exec_vnode values are trimmed\n        when execjob_launch hook prunes job via release_nodes(),\n        exec_job_resize failure in sister mom,\n        tolerate_node_failures=job_start\n        \"\"\"\n        self.load_config(self.cfg7 % (self.mem, self.mem))\n        # instantiate queuejob hook\n        hook_event = 'queuejob'\n        
hook_name = 'qjob'\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.qjob_hook_body)\n        # instantiate execjob_launch hook\n        hook_event = 'execjob_launch'\n        hook_name = 'launch'\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.keep_select = 'e.job.Resource_List[\"site\"]'\n        self.server.create_import_hook(\n            hook_name, a, self.launch_hook_body % (self.keep_select))\n        # instantiate execjob_resize hook\n        hook_event = 'execjob_resize'\n        hook_name = 'resize'\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(\n            hook_name, a, self.resize_hook_body % ('not'))\n        # Submit a job that requires 2 nodes\n        j = Job(TEST_USER)\n        # Note mother superior is mom[1] not mom[0]\n        j.create_script(self.job_scr2 % (self.hosts_list[1]))\n        time.sleep(2)\n        stime = int(time.time())\n        time.sleep(2)\n        jid = self.server.submit(j)\n        # Check the exec_vnode while in substate 41\n        self.server.expect(JOB, {ATTR_substate: '41'}, id=jid)\n        self.server.expect(JOB, 'exec_vnode', id=jid, op=SET)\n        job_stat = self.server.status(JOB, id=jid)\n        execvnode1 = job_stat[0]['exec_vnode']\n        self.logger.info(\"initial exec_vnode: %s\" % execvnode1)\n        # Check the exec_resize hook reject message in sister mom logs\n        self.moms_list[0].log_match(\n            \"Job;%s;Cannot resize the job\" % (jid),\n            starttime=stime, interval=2, n='ALL')\n        # Check that MS saw that the sister mom failed to update the job\n        # This message is on MS mom[1] but mentions sismom mom[0]\n        self.moms_list[1].log_match(\n            \"Job;%s;sister node %s.* failed to update job\"\n            % (jid, self.hosts_list[0]),\n            starttime=stime, interval=2, regexp=True, n='ALL')\n        # Because of resize 
hook reject Mom failed to update the job.\n        # Check that job got requeued.\n        self.server.log_match(\"Job;%s;Job requeued\" % (jid),\n                              starttime=stime, n='ALL')\n\n    @requirements(num_moms=3)\n    def test_cgroup_msmom_resize_fail(self):\n        \"\"\"\n        Verify that exec_vnode values are trimmed\n        when execjob_launch hook prunes job via release_nodes(),\n        exec_job_resize failure in mom superior,\n        tolerate_node_failures=job_start\n        \"\"\"\n        self.load_config(self.cfg7 % (self.mem, self.mem))\n        # instantiate queuejob hook\n        hook_event = 'queuejob'\n        hook_name = 'qjob'\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.qjob_hook_body)\n        # instantiate execjob_launch hook\n        hook_event = 'execjob_launch'\n        hook_name = 'launch'\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.keep_select = 'e.job.Resource_List[\"site\"]'\n        self.server.create_import_hook(\n            hook_name, a, self.launch_hook_body % (self.keep_select))\n        # instantiate execjob_resize hook\n        hook_event = 'execjob_resize'\n        hook_name = 'resize'\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(\n            hook_name, a, self.resize_hook_body % (''))\n        # Submit a job that requires 2 nodes\n        j = Job(TEST_USER)\n        j.create_script(self.job_scr2 % (self.hosts_list[1]))\n        time.sleep(2)\n        stime = int(time.time())\n        time.sleep(2)\n        jid = self.server.submit(j)\n        # Check the exec_vnode while in substate 41\n        self.server.expect(JOB, {ATTR_substate: '41'}, id=jid)\n        self.server.expect(JOB, 'exec_vnode', id=jid, op=SET)\n        job_stat = self.server.status(JOB, id=jid)\n        execvnode1 = job_stat[0]['exec_vnode']\n        self.logger.info(\"initial exec_vnode: %s\" 
% execvnode1)\n        # Check the exec_resize hook reject message in MS log\n        self.moms_list[1].log_match(\n            \"Job;%s;Cannot resize the job\" % (jid),\n            starttime=stime, interval=2, n='ALL')\n        # Because of resize hook reject Mom failed to update the job.\n        # Check that job got requeued\n        self.server.log_match(\"Job;%s;Job requeued\" % (jid), starttime=stime)\n\n    @requirements(num_moms=3)\n    def test_cgroup_msmom_nodes_only(self):\n        \"\"\"\n        Verify that exec_vnode values are trimmed\n        when execjob_launch hook prunes job via release_nodes(),\n        job is using only vnodes from mother superior host,\n        tolerate_node_failures=job_start\n        \"\"\"\n        self.load_config(self.cfg7 % (self.mem, self.mem))\n        # disable queuejob hook\n        hook_event = 'queuejob'\n        hook_name = 'qjob'\n        a = {'event': hook_event, 'enabled': 'false'}\n        self.server.create_import_hook(hook_name, a, self.qjob_hook_body)\n        # instantiate execjob_launch hook\n        hook_event = 'execjob_launch'\n        hook_name = 'launch'\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.keep_select = '\"ncpus=1:mem=100mb\"'\n        self.server.create_import_hook(\n            hook_name, a, self.launch_hook_body % (self.keep_select))\n        # disable execjob_resize hook\n        hook_event = 'execjob_resize'\n        hook_name = 'resize'\n        a = {'event': hook_event, 'enabled': 'false'}\n        self.server.create_import_hook(\n            hook_name, a, self.resize_hook_body % (''))\n        # Submit a job that requires two vnodes\n        j = Job(TEST_USER)\n        j.create_script(self.job_scr3)\n        time.sleep(2)\n        stime = int(time.time())\n        time.sleep(2)\n        jid = self.server.submit(j)\n        # Check the exec_vnode while in substate 41\n        self.server.expect(JOB, {ATTR_substate: '41'}, id=jid)\n        
self.server.expect(JOB, 'exec_vnode', id=jid, op=SET)\n        job_stat = self.server.status(JOB, id=jid)\n        execvnode1 = job_stat[0]['exec_vnode']\n        self.logger.info(\"initial exec_vnode: %s\" % execvnode1)\n        initial_vnodes = execvnode1.split('+')\n        # Check the exec_vnode after job is in substate 42\n        self.server.expect(JOB, {ATTR_substate: '42'}, id=jid)\n        self.server.expect(JOB, 'exec_vnode', id=jid, op=SET)\n        job_stat = self.server.status(JOB, id=jid)\n        execvnode2 = job_stat[0]['exec_vnode']\n        self.logger.info(\"pruned exec_vnode: %s\" % execvnode2)\n        pruned_vnodes = execvnode2.split('+')\n        # Check that the pruned exec_vnode has one less than initial value\n        self.assertEqual(len(pruned_vnodes) + 1, len(initial_vnodes))\n        # Check that the exec_vnode got pruned\n        self.moms_list[0].log_match(\"Job;%s;pruned from exec_vnode=%s\" % (\n            jid, execvnode1), starttime=stime, n='ALL')\n        self.moms_list[0].log_match(\"Job;%s;pruned to exec_vnode=%s\" % (\n            jid, execvnode2), starttime=stime, n='ALL')\n        # Find out the released vnode\n        if initial_vnodes[0] == execvnode2:\n            execvnodeB = initial_vnodes[1]\n        else:\n            execvnodeB = initial_vnodes[0]\n        vnodeB = execvnodeB.split(':')[0].split('(')[1]\n        self.logger.info(\"released vnode: %s\" % vnodeB)\n        # Submit job2 requesting the released vnode, job runs\n        j2 = Job(TEST_USER, {\n            ATTR_l + '.select': '1:ncpus=1:mem=100mb:vnode=%s' % vnodeB})\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n\n    @requirements(num_moms=3)\n    def test_cgroups_abort(self):\n        \"\"\"\n        Verify that if one of the sister mom is down then\n        cgroups hook will call the abort event which will\n        cleanup the cgroups files on sister moms and primary\n        mom\n        \"\"\"\n  
      self.logger.info(\"Stopping mom on host %s\" % self.hosts_list[1])\n        self.moms_list[1].signal('-19')\n\n        a = {'Resource_List.select':\n             '1:ncpus=1:host=%s+1:ncpus=1:host=%s+1:ncpus=1:host=%s' %\n             (self.hosts_list[0], self.hosts_list[1], self.hosts_list[2])}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.sleep600_job)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R', 'substate': '41'}\n        self.server.expect(JOB, a, jid)\n\n        self.logger.info(\"Killing mom on host %s\" % self.hosts_list[1])\n        time.sleep(2)\n        now = int(time.time())\n        time.sleep(2)\n        self.moms_list[1].signal('-9')\n\n        self.server.expect(NODE, {'state': \"down\"}, id=self.hosts_list[1])\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n\n        # Verify that cgroups directories are cleaned on primary mom\n        cpath = self.get_cgroup_job_dir('memory', jid, self.hosts_list[0])\n        self.assertFalse(self.is_dir(cpath, self.hosts_list[0]))\n\n        # Verify that cgroups directories are cleaned by execjob_abort\n        # hook on sister mom\n        cpath = self.get_cgroup_job_dir('memory', jid, self.hosts_list[2])\n        self.assertFalse(self.is_dir(cpath, self.hosts_list[2]))\n\n        self.moms_list[0].log_match(\"job_start_error\",\n                                    starttime=now, n='ALL')\n        self.moms_list[0].log_match(\"Event type is execjob_abort\",\n                                    starttime=now, n='ALL')\n        self.moms_list[0].log_match(\"Event type is execjob_epilogue\",\n                                    starttime=now, n='ALL')\n        self.moms_list[0].log_match(\"Event type is execjob_end\",\n                                    starttime=now, n='ALL')\n        self.moms_list[2].log_match(\"Event type is execjob_abort\",\n                                    starttime=now, n='ALL')\n\n        self.moms_list[1].pi.restart()\n    
    self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n    @timeout(1800)\n    def test_big_cgroup_cpuset(self):\n        \"\"\"\n        With vnodes_per_numa and use_hyperthreads set to \"true\",\n        test to verify that a job requesting at least 10 vnodes\n        (i.e. 10 memory sockets) gets a cgroup cpuset with the\n        correct number of cpus and memory sockets.\n        \"\"\"\n        name = 'CGROUP_BIG'\n        self.load_config(self.cfg9 % (self.mem, self.mem))\n\n        vnodes_count = 10\n        try:\n            self.server.expect(VNODE, {'state=free': vnodes_count},\n                               op=GE, count=True, interval=2)\n        except Exception:\n            self.skipTest(\"Test requires >= %d free vnodes\" % (vnodes_count,))\n\n        rncpus = 'resources_available.ncpus'\n        a = {rncpus: (GT, 0), 'state': 'free'}\n        free_nodes = self.server.filter(VNODE, a, attrop=PTL_AND, idonly=False)\n        vnodes = list(free_nodes.values())[0]\n        self.assertGreaterEqual(len(vnodes), vnodes_count,\n                                'Test does not have enough free vnodes')\n        # Find the minimum number of cpus among the vnodes\n        cpus_per_vnode = None\n        for v in vnodes:\n            v_rncpus = int(v[rncpus])\n            if not cpus_per_vnode:\n                cpus_per_vnode = v_rncpus\n            if v_rncpus < cpus_per_vnode:\n                cpus_per_vnode = v_rncpus\n\n        # Submit a job\n        select_spec = \"%d:ncpus=%d\" % (vnodes_count, cpus_per_vnode)\n        a = {'Resource_List.select': select_spec, ATTR_N: name + 'a'}\n        j1 = Job(TEST_USER, attrs=a)\n        j1.create_script(self.sleep600_job)\n        jid1 = self.server.submit(j1)\n        a = {'job_state': 'R'}\n        # Make sure job is running\n        self.server.expect(JOB, a, jid1)\n        # cpuset path for job\n        fn1 = self.get_cgroup_job_dir('cpuset', jid1, self.hosts_list[0])\n        # Capture the output 
of cpuset_mem_script for job\n        scr1 = self.du.run_cmd(cmd=[self.cpuset_mem_script % (fn1, None)],\n                               as_script=True, hosts=self.hosts_list[0])\n        tmp_out1 = scr1['out']\n        self.logger.info(\"test output for job1: %s\" % (tmp_out1))\n        # Ensure the number of cpus assigned matches request\n        cpuids = None\n        for kv in tmp_out1:\n            if 'CpuIDs=' in kv:\n                cpuids = kv.split(\"=\")[1]\n                break\n        cpus_assn = count_items(cpuids)\n        cpus_req = vnodes_count * cpus_per_vnode\n        self.logger.info(\"CpuIDs assn=%d req=%d\" % (cpus_assn, cpus_req))\n        self.assertEqual(cpus_assn, cpus_req,\n                         'CpuIDs assigned did not match requested')\n        self.logger.info('CpuIDs check passed')\n\n        # Ensure the number of sockets assigned matches request\n        memsocket = None\n        for kv in tmp_out1:\n            if 'MemorySocket=' in kv:\n                memsocket = kv.split(\"=\")[1]\n                break\n        mem_assn = count_items(memsocket)\n        self.logger.info(\"MemSocket assn=%d req=%d\" % (mem_assn, vnodes_count))\n        self.assertEqual(mem_assn, vnodes_count,\n                         'MemSocket assigned not match requested')\n        self.logger.info('MemSocket check passed')\n\n    @requirements(num_moms=2)\n    def test_checkpoint_abort_preemption(self):\n        \"\"\"\n        Test to make sure that when scheduler preempts a multi-node job with\n        checkpoint_abort, execjob_abort cgroups hook on secondary node\n        gets called.  
The abort hook cleans up assigned cgroups, allowing\n        the higher priority job to run on the same node.\n        \"\"\"\n        # create express queue\n        a = {'queue_type': 'execution',\n             'started': 'True',\n             'enabled': 'True',\n             'Priority': 200}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, \"express\")\n\n        # have scheduler preempt lower priority jobs using 'checkpoint'\n        self.server.manager(MGR_CMD_SET, SCHED, {'preempt_order': 'C'})\n\n        # have moms do checkpoint_abort\n        chk_script = \"\"\"#!/bin/bash\nkill $1\nexit 0\n\"\"\"\n        a = {'resources_available.ncpus': 1}\n        for m in self.moms.values():\n            chk_file = m.add_checkpoint_abort_script(body=chk_script)\n            # ensure resulting checkpoint file has correct permission\n            self.du.chown(hostname=m.shortname, path=chk_file, uid=0, gid=0,\n                          sudo=True)\n            self.server.manager(MGR_CMD_SET, NODE, a, id=m.shortname)\n\n        # submit multi-node job\n        a = {'Resource_List.select': '1:ncpus=1:host=%s+1:ncpus=1:host=%s' % (\n            self.hosts_list[0], self.hosts_list[1]),\n            'Resource_List.place': 'scatter:exclhost'}\n        j1 = Job(TEST_USER, attrs=a)\n        jid1 = self.server.submit(j1)\n\n        # to work around a scheduling race, check for substate 42\n        # if you test for R then a slow job startup might update\n        # resources_assigned late and make scheduler overcommit nodes\n        # and run both jobs\n        self.server.expect(JOB, {'substate': '42'}, id=jid1)\n\n        # Submit an express queue job requesting needing also 2 nodes\n        a[ATTR_q] = 'express'\n        j2 = Job(TEST_USER, attrs=a)\n        time.sleep(2)\n        stime = int(time.time())\n        time.sleep(2)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid1)\n        err_msg = \"%s;.*Failed to assign 
resources.*\" % (jid2,)\n        for m in self.moms.values():\n            m.log_match(err_msg, max_attempts=3, interval=1, starttime=stime,\n                        regexp=True, existence=False, n='ALL')\n\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid2)\n\n    @requirements(num_moms=2)\n    def test_checkpoint_restart(self):\n        \"\"\"\n        Test to make sure that when a preempted and checkpointed multi-node\n        job restarts, execjob_begin cgroups hook gets called on both mother\n        superior and sister moms.\n        \"\"\"\n        # create express queue\n        a = {'queue_type': 'execution',\n             'started': 'True',\n             'enabled': 'True',\n             'Priority': 200}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, \"express\")\n\n        # have scheduler preempt lower priority jobs using 'checkpoint'\n        self.server.manager(MGR_CMD_SET, SCHED, {'preempt_order': 'C'})\n\n        # have moms do checkpoint_abort\n        chk_script = \"\"\"#!/bin/bash\nkill $1\nexit 0\n\"\"\"\n        restart_script = \"\"\"#!/bin/bash\nsleep 300\n\"\"\"\n        a = {'resources_available.ncpus': 1}\n        for m in self.moms.values():\n            # add checkpoint script\n            m.add_checkpoint_abort_script(body=chk_script)\n            m.add_restart_script(body=restart_script, abort_time=300)\n            self.server.manager(MGR_CMD_SET, NODE, a, id=m.shortname)\n\n        # submit multi-node job\n        a = {'Resource_List.select': '1:ncpus=1:host=%s+1:ncpus=1:host=%s' % (\n            self.hosts_list[0], self.hosts_list[1]),\n            'Resource_List.place': 'scatter:exclhost'}\n        j1 = Job(TEST_USER, attrs=a)\n        j1.set_sleep_time(300)\n        jid1 = self.server.submit(j1)\n        # to work around a scheduling race, check for substate 42\n        # if you test for R then a slow job startup might update\n        # resources_assigned late and make scheduler overcommit nodes\n  
      # and run both jobs\n        self.server.expect(JOB, {'substate': '42'}, id=jid1)\n        time.sleep(5)\n        cpath = self.get_cgroup_job_dir('cpuset', jid1, self.hosts_list[0])\n        self.assertTrue(self.is_dir(cpath, self.hosts_list[0]))\n        cpath = self.get_cgroup_job_dir('cpuset', jid1, self.hosts_list[1])\n        self.assertTrue(self.is_dir(cpath, self.hosts_list[1]))\n\n        # Submit an express queue job requesting needing also 2 nodes\n        a[ATTR_q] = 'express'\n        j2 = Job(TEST_USER, attrs=a)\n        j2.set_sleep_time(300)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid1)\n        self.server.expect(JOB, {'substate': '42'}, id=jid2)\n        time.sleep(5)\n        cpath = self.get_cgroup_job_dir('cpuset', jid2, self.hosts_list[0])\n        self.assertTrue(self.is_dir(cpath, self.hosts_list[0]))\n        cpath = self.get_cgroup_job_dir('cpuset', jid2, self.hosts_list[1])\n        self.assertTrue(self.is_dir(cpath, self.hosts_list[1]))\n\n        # delete express queue job\n        self.server.delete(jid2)\n        # wait until the preempted job is sent to MoM again\n        # the checkpointing script hangs, so it stays in substate 41\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 41}, id=jid1)\n        # we need to give the hooks some time here...\n        time.sleep(10)\n        # check the cpusets for the deleted preemptor are gone\n        cpath = self.get_cgroup_job_dir('cpuset', jid2, self.hosts_list[0])\n        self.assertFalse(self.is_dir(cpath, self.hosts_list[0]))\n        cpath = self.get_cgroup_job_dir('cpuset', jid2, self.hosts_list[1])\n        self.assertFalse(self.is_dir(cpath, self.hosts_list[1]))\n        # check the cpusets for the restarted formerly-preempted are there\n        cpath = self.get_cgroup_job_dir('cpuset', jid1, self.hosts_list[0])\n        self.assertTrue(self.is_dir(cpath, self.hosts_list[0]))\n        cpath = 
self.get_cgroup_job_dir('cpuset', jid1, self.hosts_list[1])\n        self.assertTrue(self.is_dir(cpath, self.hosts_list[1]))\n\n    def test_cpu_controller_enforce_default(self):\n        \"\"\"\n        Test an enabled cgroup 'cpu' controller with quotas enforced\n        using default (non-specified) values of cfs_period_us, and\n        cfs_quota_fudge_factor.\n        \"\"\"\n        root_quota_host1 = None\n        try:\n            root_quota_host1_str = \\\n                self.du.run_cmd(hosts=self.hosts_list[0],\n                                cmd=['cat',\n                                     '/sys/fs/cgroup/cpu/cpu.cfs_quota_us'])\n            root_quota_host1 = int(root_quota_host1_str['out'][0])\n        except Exception:\n            pass\n        # If that link is missing and it's only\n        # mounted under the cpu/cpuacct unified directory...\n        if root_quota_host1 is None:\n            try:\n                root_quota_host1_str = \\\n                    self.du.run_cmd(hosts=self.hosts_list[0],\n                                    cmd=['cat',\n                                         '/sys/fs/cgroup/'\n                                         'cpu,cpuacct/cpu.cfs_quota_us'])\n                root_quota_host1 = int(root_quota_host1_str['out'][0])\n            except Exception:\n                pass\n        # If still not found, try to see if it is in a unified cgroup mount\n        # as in cgroup v2\n        if root_quota_host1 is None:\n            try:\n                root_quota_host1_str = \\\n                    self.du.run_cmd(hosts=self.hosts_list[0],\n                                    cmd=['cat',\n                                         '/sys/fs/cgroup/cpu.cfs_quota_us'])\n                root_quota_host1 = int(root_quota_host1_str['out'][0])\n            except Exception:\n                pass\n\n        if root_quota_host1 is None:\n            self.skipTest('cpu group controller test: '\n                          'could not 
determine root cfs_quota_us')\n        elif root_quota_host1 != -1:\n            self.skipTest('cpu group controller test: '\n                          'root cfs_quota_us is not unlimited, cannot test '\n                          'cgroup hook CPU quotas in this environment')\n\n        name = 'CGROUP1'\n        self.load_config(self.cfg10 % (self.mem, self.mem))\n        default_cfs_period_us = 100000\n        default_cfs_quota_fudge_factor = 1.03\n\n        # Restart mom for changes made by cgroups hook to take effect\n        self.mom.restart()\n        self.server.expect(NODE, {'state': 'free'},\n                           id=self.nodes_list[0], interval=1)\n        result = self.server.status(NODE, 'resources_available.ncpus',\n                                    id=self.nodes_list[0])\n        orig_ncpus = int(result[0]['resources_available.ncpus'])\n        self.assertGreater(orig_ncpus, 0)\n        self.logger.info('Original value of ncpus: %d' % orig_ncpus)\n        if orig_ncpus >= 2:\n            ncpus_req = 2\n        else:\n            ncpus_req = 1\n\n        a = {'Resource_List.select':\n             \"ncpus=%d\" % ncpus_req,\n             ATTR_N: name, ATTR_k: 'oe'}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.sleep600_job)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid)\n        self.server.status(JOB, [ATTR_o, 'exec_host'], jid)\n        fna = self.get_cgroup_job_dir('cpu', jid, self.hosts_list[0])\n        self.assertFalse(fna is None, 'No job directory for cpu subsystem')\n        cpu_scr = self.du.run_cmd(cmd=[self.cpu_controller_script % fna],\n                                  as_script=True, hosts=self.hosts_list[0])\n        cpu_scr_out = cpu_scr['out']\n        self.logger.info('cpu_scr_out:\\n%s' % cpu_scr_out)\n\n        shares_match = (ncpus_req * 1000)\n        self.assertTrue(\"cpu_shares=%d\" % shares_match in cpu_scr_out)\n        
self.logger.info(\"cpu_shares check passed (match %d)\" % shares_match)\n\n        self.assertTrue(\"cpu_cfs_period_us=%d\" %\n                        (default_cfs_period_us) in cpu_scr_out)\n        self.logger.info(\"cpu_cfs_period_us check passed (match %d)\" %\n                         (default_cfs_period_us))\n\n        cfs_quota_us_match = default_cfs_period_us * \\\n            ncpus_req * default_cfs_quota_fudge_factor\n        self.assertTrue(\"cpu_cfs_quota_us=%d\" %\n                        (cfs_quota_us_match) in cpu_scr_out)\n        self.logger.info(\"cpu_cfs_quota_us check passed (match %d)\" %\n                         (cfs_quota_us_match))\n\n    def test_cpu_controller_enforce(self):\n        \"\"\"\n        Test an enabled cgroup 'cpu' controller with quotas enforced,\n        using specific values to:\n              cfs_period_us\n              cfs_quota_fudge_factor\n        in config file 'cfg11'.\n        \"\"\"\n        root_quota_host1 = None\n        try:\n            root_quota_host1_str = \\\n                self.du.run_cmd(hosts=self.hosts_list[0],\n                                cmd=['cat',\n                                     '/sys/fs/cgroup/cpu/cpu.cfs_quota_us'])\n            root_quota_host1 = int(root_quota_host1_str['out'][0])\n        except Exception:\n            pass\n        # If that link is missing and it's only\n        # mounted under the cpu/cpuacct unified directory...\n        if root_quota_host1 is None:\n            try:\n                root_quota_host1_str = \\\n                    self.du.run_cmd(hosts=self.hosts_list[0],\n                                    cmd=['cat',\n                                         '/sys/fs/cgroup/'\n                                         'cpu,cpuacct/cpu.cfs_quota_us'])\n                root_quota_host1 = int(root_quota_host1_str['out'][0])\n            except Exception:\n                pass\n        # If still not found, try to see if it is in a unified cgroup mount\n        # 
as in cgroup v2\n        if root_quota_host1 is None:\n            try:\n                root_quota_host1_str = \\\n                    self.du.run_cmd(hosts=self.hosts_list[0],\n                                    cmd=['cat',\n                                         '/sys/fs/cgroup/cpu.cfs_quota_us'])\n                root_quota_host1 = int(root_quota_host1_str['out'][0])\n            except Exception:\n                pass\n\n        if root_quota_host1 is None:\n            self.skipTest('cpu group controller test: '\n                          'could not determine root cfs_quota_us')\n        elif root_quota_host1 != -1:\n            self.skipTest('cpu group controller test: '\n                          'root cfs_quota_us is not unlimited, cannot test '\n                          'cgroup hook CPU quotas in this environment')\n\n        name = 'CGROUP1'\n        cfs_period_us = 200000\n        cfs_quota_fudge_factor = 1.05\n        self.load_config(self.cfg11 % (self.mem, self.mem,\n                                       cfs_period_us, cfs_quota_fudge_factor))\n        self.server.expect(NODE, {'state': 'free'},\n                           id=self.nodes_list[0], interval=1)\n        result = self.server.status(NODE, 'resources_available.ncpus',\n                                    id=self.nodes_list[0])\n        orig_ncpus = int(result[0]['resources_available.ncpus'])\n        self.assertGreater(orig_ncpus, 0)\n        self.logger.info('Original value of ncpus: %d' % orig_ncpus)\n        if orig_ncpus >= 2:\n            ncpus_req = 2\n        else:\n            ncpus_req = 1\n        a = {'Resource_List.select':\n             \"ncpus=%d\" % ncpus_req,\n             ATTR_N: name, ATTR_k: 'oe'}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.sleep600_job)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid)\n        self.server.status(JOB, [ATTR_o, 'exec_host'], jid)\n        fna = 
self.get_cgroup_job_dir('cpu', jid, self.hosts_list[0])\n        self.assertFalse(fna is None, 'No job directory for cpu subsystem')\n        cpu_scr = self.du.run_cmd(cmd=[self.cpu_controller_script % fna],\n                                  as_script=True, hosts=self.hosts_list[0])\n        cpu_scr_out = cpu_scr['out']\n        self.logger.info('cpu_scr_out:\\n%s' % cpu_scr_out)\n        shares_match = (ncpus_req * 1000)\n        self.assertTrue(\"cpu_shares=%d\" % shares_match in cpu_scr_out)\n        self.logger.info(\"cpu_shares check passed (match %d)\" % shares_match)\n\n        self.assertTrue(\"cpu_cfs_period_us=%d\" %\n                        (cfs_period_us) in cpu_scr_out)\n        self.logger.info(\n            \"cpu_cfs_period_us check passed (match %d)\" % (cfs_period_us))\n        cfs_quota_us_match = cfs_period_us * ncpus_req * cfs_quota_fudge_factor\n        self.assertTrue(\"cpu_cfs_quota_us=%d\" %\n                        (cfs_quota_us_match) in cpu_scr_out)\n        self.logger.info(\"cpu_cfs_quota_us check passed (match %d)\" %\n                         (cfs_quota_us_match))\n\n    def test_cpu_controller_enforce_default_zero_job(self):\n        \"\"\"\n        Test an enabled cgroup 'cpu' controller with quotas enforced\n        on zero-cpu job, using default (non-specified) values of:\n              cfs_period_us\n              cfs_quota_fudge_factor\n              zero_cpus_shares_fraction\n              zero_cpus_quota_fraction\n        \"\"\"\n        root_quota_host1 = None\n        try:\n            root_quota_host1_str = \\\n                self.du.run_cmd(hosts=self.hosts_list[0],\n                                cmd=['cat',\n                                     '/sys/fs/cgroup/cpu/cpu.cfs_quota_us'])\n            root_quota_host1 = int(root_quota_host1_str['out'][0])\n        except Exception:\n            pass\n        # If that link is missing and it's only\n        # mounted under the cpu/cpuacct unified directory...\n        if 
root_quota_host1 is None:\n            try:\n                root_quota_host1_str = \\\n                    self.du.run_cmd(hosts=self.hosts_list[0],\n                                    cmd=['cat',\n                                         '/sys/fs/cgroup/'\n                                         'cpu,cpuacct/cpu.cfs_quota_us'])\n                root_quota_host1 = int(root_quota_host1_str['out'][0])\n            except Exception:\n                pass\n        # If still not found, try to see if it is in a unified cgroup mount\n        # as in cgroup v2\n        if root_quota_host1 is None:\n            try:\n                root_quota_host1_str = \\\n                    self.du.run_cmd(hosts=self.hosts_list[0],\n                                    cmd=['cat',\n                                         '/sys/fs/cgroup/cpu.cfs_quota_us'])\n                root_quota_host1 = int(root_quota_host1_str['out'][0])\n            except Exception:\n                pass\n\n        if root_quota_host1 is None:\n            self.skipTest('cpu group controller test: '\n                          'could not determine root cfs_quota_us')\n        elif root_quota_host1 != -1:\n            self.skipTest('cpu group controller test: '\n                          'root cfs_quota_us is not unlimited, cannot test '\n                          'cgroup hook CPU quotas in this environment')\n\n        name = 'CGROUP1'\n        # config file 'cfg12' has 'allow_zero_cpus=true' under cpuset, to allow\n        # zero-cpu jobs.\n        self.load_config(self.cfg12 % (self.mem, self.mem))\n        default_cfs_period_us = 100000\n        default_cfs_quota_fudge_factor = 1.03\n        default_zero_shares_fraction = 0.002\n        default_zero_quota_fraction = 0.2\n        # Restart mom for changes made by cgroups hook to take effect\n        self.mom.restart()\n        a = {'Resource_List.select': 'ncpus=0',\n             ATTR_N: name, ATTR_k: 'oe'}\n        j = Job(TEST_USER, attrs=a)\n        
j.create_script(self.sleep600_job)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid)\n        self.server.status(JOB, [ATTR_o, 'exec_host'], jid)\n        fna = self.get_cgroup_job_dir('cpu', jid, self.hosts_list[0])\n        self.assertFalse(fna is None, 'No job directory for cpu subsystem')\n        cpu_scr = self.du.run_cmd(cmd=[self.cpu_controller_script % fna],\n                                  as_script=True, hosts=self.hosts_list[0])\n        cpu_scr_out = cpu_scr['out']\n        self.logger.info('cpu_scr_out:\\n%s' % cpu_scr_out)\n        shares_match = (default_zero_shares_fraction * 1000)\n        self.assertTrue(\"cpu_shares=%d\" % shares_match in cpu_scr_out)\n        self.logger.info(\"cpu_shares check passed (match %d)\" % shares_match)\n\n        self.assertTrue(\"cpu_cfs_period_us=%d\" %\n                        (default_cfs_period_us) in cpu_scr_out)\n        self.logger.info(\"cpu_cfs_period_us check passed (match %d)\" %\n                         (default_cfs_period_us))\n        cfs_quota_us_match = default_cfs_period_us * \\\n            default_zero_quota_fraction * default_cfs_quota_fudge_factor\n        self.assertTrue(\"cpu_cfs_quota_us=%d\" %\n                        (cfs_quota_us_match) in cpu_scr_out)\n        self.logger.info(\"cpu_cfs_quota_us check passed (match %d)\" %\n                         (cfs_quota_us_match))\n\n    def test_cpu_controller_enforce_zero_job(self):\n        \"\"\"\n        Test an enabled cgroup 'cpu' controller with quotas enforced on a\n        zero-cpu job. 
Quotas are enforced using specific values to:\n              cfs_period_us\n              cfs_quota_fudge_factor\n              zero_cpus_shares_fraction\n              zero_cpus_quota_fraction\n        in config file 'cfg13'.\n        \"\"\"\n        root_quota_host1 = None\n        try:\n            root_quota_host1_str = \\\n                self.du.run_cmd(hosts=self.hosts_list[0],\n                                cmd=['cat',\n                                     '/sys/fs/cgroup/cpu/cpu.cfs_quota_us'])\n            root_quota_host1 = int(root_quota_host1_str['out'][0])\n        except Exception:\n            pass\n        # If that link is missing and it's only\n        # mounted under the cpu/cpuacct unified directory...\n        if root_quota_host1 is None:\n            try:\n                root_quota_host1_str = \\\n                    self.du.run_cmd(hosts=self.hosts_list[0],\n                                    cmd=['cat',\n                                         '/sys/fs/cgroup/'\n                                         'cpu,cpuacct/cpu.cfs_quota_us'])\n                root_quota_host1 = int(root_quota_host1_str['out'][0])\n            except Exception:\n                pass\n        # If still not found, try to see if it is in a unified cgroup mount\n        # as in cgroup v2\n        if root_quota_host1 is None:\n            try:\n                root_quota_host1_str = \\\n                    self.du.run_cmd(hosts=self.hosts_list[0],\n                                    cmd=['cat',\n                                         '/sys/fs/cgroup/cpu.cfs_quota_us'])\n                root_quota_host1 = int(root_quota_host1_str['out'][0])\n            except Exception:\n                pass\n\n        if root_quota_host1 is None:\n            self.skipTest('cpu group controller test: '\n                          'could not determine root cfs_quota_us')\n        elif root_quota_host1 != -1:\n            self.skipTest('cpu group controller test: '\n               
           'root cfs_quota_us is not unlimited, cannot test '\n                          'cgroup hook CPU quotas in this environment')\n        name = 'CGROUP1'\n        cfs_period_us = 200000\n        cfs_quota_fudge_factor = 1.05\n        zero_cpus_shares_fraction = 0.3\n        zero_cpus_quota_fraction = 0.5\n        # config file 'cfg13' has 'allow_zero_cpus=true' under cpuset, to allow\n        # zero-cpu jobs.\n        self.load_config(self.cfg13 % (self.mem, self.mem, cfs_period_us,\n                                       cfs_quota_fudge_factor,\n                                       zero_cpus_shares_fraction,\n                                       zero_cpus_quota_fraction))\n        a = {'Resource_List.select': 'ncpus=0',\n             ATTR_N: name, ATTR_k: 'oe'}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.sleep600_job)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid)\n        self.server.status(JOB, [ATTR_o, 'exec_host'], jid)\n        fna = self.get_cgroup_job_dir('cpu', jid, self.hosts_list[0])\n        self.assertFalse(fna is None, 'No job directory for cpu subsystem')\n        cpu_scr = self.du.run_cmd(cmd=[self.cpu_controller_script % fna],\n                                  as_script=True, hosts=self.hosts_list[0])\n        cpu_scr_out = cpu_scr['out']\n        self.logger.info('cpu_scr_out:\\n%s' % cpu_scr_out)\n        shares_match = (zero_cpus_shares_fraction * 1000)\n        self.assertTrue(\"cpu_shares=%d\" % shares_match in cpu_scr_out)\n        self.logger.info(\"cpu_shares check passed (match %d)\" % shares_match)\n\n        self.assertTrue(\"cpu_cfs_period_us=%d\" %\n                        (cfs_period_us) in cpu_scr_out)\n        self.logger.info(\n            \"cpu_cfs_period_us check passed (match %d)\" % (cfs_period_us))\n        cfs_quota_us_match = cfs_period_us * \\\n            zero_cpus_quota_fraction * cfs_quota_fudge_factor\n        
self.assertTrue(\"cpu_cfs_quota_us=%d\" %\n                        (cfs_quota_us_match) in cpu_scr_out)\n        self.logger.info(\"cpu_cfs_quota_us check passed (match %d)\" %\n                         (cfs_quota_us_match))\n\n    def test_vnodepernuma_use_hyperthreads(self):\n        \"\"\"\n        Test to verify that correct number of jobs run with\n        vnodes_per_numa=true and use_hyperthreads=true\n        \"\"\"\n        pcpus = 0\n        sibs = 0\n        cores = 0\n        pval = 0\n        phys = {}\n        with open('/proc/cpuinfo', 'r') as desc:\n            for line in desc:\n                if re.match('^processor', line):\n                    pcpus += 1\n                sibs_match = re.search(r'siblings\t: ([0-9]+)', line)\n                cores_match = re.search(r'cpu cores\t: ([0-9]+)', line)\n                phys_match = re.search(r'physical id\t: ([0-9]+)', line)\n                if sibs_match:\n                    sibs = int(sibs_match.groups()[0])\n                if cores_match:\n                    cores = int(cores_match.groups()[0])\n                if phys_match:\n                    pval = int(phys_match.groups()[0])\n                    phys[pval] = 1\n        if (sibs == 0 or cores == 0):\n            self.skipTest('Insufficient information about the processors.')\n        if pcpus < 2:\n            self.skipTest('This test requires at least two processors.')\n\n        hyperthreads_per_core = int(sibs / cores)\n        name = 'CGROUP20'\n        # set vnode_per_numa=true with use_hyperthreads=true\n        self.load_config(self.cfg3 % ('', 'true', '', self.mem, '',\n                                      self.swapctl, ''))\n        # Submit M*N*P jobs, where M is the number of physical processors,\n        # N is the number of 'cpu cores' per M. 
and P being the\n        # number of hyperthreads per core.\n        njobs = len(phys) * cores * hyperthreads_per_core\n        if njobs > 100:\n            self.skipTest(\"too many jobs (%d) to submit\" % njobs)\n        a = {'Resource_List.select': '1:ncpus=1:mem=300mb:host=%s' %\n             self.hosts_list[0], ATTR_N: name + 'a'}\n        for _ in range(njobs):\n            j = Job(TEST_USER, attrs=a)\n            # make sure this stays around for an hour\n            # (or until deleted in teardown)\n            j.set_sleep_time(3600)\n            jid = self.server.submit(j)\n            a1 = {'job_state': 'R'}\n            self.server.expect(JOB, a1, jid)\n\n        # Submit another job, expect in Q state\n        b = {'Resource_List.select': '1:ncpus=1:mem=300mb:host=%s' %\n             self.hosts_list[0], ATTR_N: name + 'b'}\n        j2 = Job(TEST_USER, attrs=b)\n        jid2 = self.server.submit(j2)\n        b1 = {'job_state': 'Q'}\n        self.server.expect(JOB, b1, jid2)\n\n    def test_cgroup_default_config(self):\n        \"\"\"\n        Test to make sure using the default hook config file\n        still run a basic job, and cleans up cpuset upon qdel.\n        \"\"\"\n        # The default hook config has 'memory' subsystem enabled\n        if not self.paths[self.hosts_list[0]]['memory']:\n            self.skipTest('Test requires memory subystem mounted')\n        self.load_default_config()\n        # Reduce the noise in mom_logs for existence=False matching\n        c = {'$logevent': '511'}\n        self.mom.add_config(c)\n        a = {'Resource_List.select': 'ncpus=1:mem=100mb'}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.sleep600_job)\n        time.sleep(2)\n        stime = int(time.time())\n        time.sleep(2)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, jid)\n        err_msg = \"write_value: Permission denied.*%s.*memsw\" % (jid)\n        self.mom.log_match(err_msg, 
max_attempts=3, interval=1, n='ALL',\n                           starttime=stime, regexp=True, existence=False)\n        self.server.status(JOB, ['exec_host'], jid)\n        ehost = j.attributes['exec_host']\n        ehost1 = ehost.split('/')[0]\n        ehjd1 = self.get_cgroup_job_dir('cpuset', jid, ehost1)\n        self.assertTrue(self.is_dir(ehjd1, ehost1), \"job cpuset dir not found\")\n        self.server.delete(id=jid, wait=True)\n        self.assertFalse(self.is_dir(ehjd1, ehost1), \"job cpuset dir found\")\n\n    def test_cgroup_cgswap(self, vnode_per_numa_node=False):\n        \"\"\"\n        Test to verify (with vnode_per_numa_node disabled by default):\n        - whether queuejob/modifyjob set cgswap to vmem-mem in jobs\n        - whether nodes get resources_available.cgswap filled in\n        - whether a collection of jobs submitted that do not exceed available\n          vmem but would deplete cgswap are indeed not all run simultaneously\n        \"\"\"\n        if not self.mem:\n            self.skipTest('Test requires memory subystem mounted')\n        if self.swapctl != 'true':\n            self.skipTest('Test requires memsw accounting enabled')\n        self.server.remove_resource('cgswap')\n        self.server.add_resource('cgswap', 'size', 'nh')\n        self.scheduler.add_resource('cgswap')\n        events = ['execjob_begin', 'execjob_launch', 'execjob_attach',\n                  'execjob_epilogue', 'execjob_end', 'exechost_startup',\n                  'exechost_periodic', 'execjob_resize', 'execjob_abort',\n                  'queuejob', 'modifyjob']\n        # Enable the cgroups hook new events\n        conf = {'enabled': 'True', 'freq': 10, 'event': events}\n        self.server.manager(MGR_CMD_SET, HOOK, conf, self.hook_name)\n\n        self.load_config(self.cfg15\n                         % ('true' if vnode_per_numa_node else 'false'))\n        vnode_name = self.mom.shortname\n        if vnode_per_numa_node:\n            vnode_name += 
\"[0]\"\n        cgswapstat = self.server.status(NODE, 'resources_available.cgswap',\n                                        id=vnode_name)\n        self.assertTrue(cgswapstat\n                        and 'resources_available.cgswap' in cgswapstat[0],\n                        'cgswap resource not found on node')\n\n        cgswap = PbsTypeSize(cgswapstat[0]['resources_available.cgswap'])\n        self.logger.info('Test node appears to have %s cgswap'\n                         % cgswap.encode())\n        if cgswap == PbsTypeSize(\"0kb\"):\n            self.logger.info('First Mom has no swap, test will just '\n                             'check if job cgswap is added')\n            a = {'Resource_List.select':\n                 '1:ncpus=0:mem=100mb:vmem=1100mb:vnode=%s'\n                 % vnode_name}\n\n            j = Job(TEST_USER, attrs=a)\n            j.create_script(self.sleep30_job)\n            jid = self.server.submit(j)\n\n            # scheduler sets comment when the job cannot run,\n            # server sets comment when the job runs\n            # in both cases the comment gets set\n            self.server.expect(JOB, 'comment', op=SET)\n            job_status = self.server.status(JOB, id=jid)\n\n            cgswap = None\n            select_resource = job_status[0]['Resource_List.select']\n            chunkspecs = select_resource.split(':')\n            for c in chunkspecs:\n                if '=' in c:\n                    name, value = c.split('=')\n                    if name == 'cgswap':\n                        cgswap = PbsTypeSize(value)\n            self.assertTrue(cgswap is not None, 'job cgswap was not added')\n            self.assertTrue(cgswap == PbsTypeSize('1000mb'),\n                            'job cgswap is %s instead of expected 1000mb'\n                            % str(cgswap))\n            self.logger.info('job cgswap detected to be correct, roughly %s'\n                             % str(cgswap))\n\n            # check that indeed 
you cannot run the job since it requests\n            # swap usage and there is none\n            job_comment = job_status[0]['comment']\n            self.assertTrue('Insufficient amount of resource: cgswap'\n                            in job_comment,\n                            'Job comment should indicate insufficient cgswap '\n                            'but is: %s' % job_comment)\n            self.logger.info('job comment as expected: %s' % job_comment)\n\n        else:\n            self.logger.info('First MoM has swap, confirming cgswap '\n                             'correctly throttles jobs accepted')\n            # PbsTypeSize value is stored in kb units\n            cgreqval = int(float(cgswap.value)\n                           / 1024.0 / 3.0 * 2.0)\n            cgreqsuffix = 'mb'\n            cgreq = PbsTypeSize(str(cgreqval) + cgreqsuffix)\n            vmemreqsize = PbsTypeSize(\"100mb\") + cgreq\n            vmemreq = str(int(vmemreqsize.value / 1024))+'mb'\n            self.logger.info('will submit jobs with 100mb mem and %s vmem'\n                             % vmemreq)\n            a = {'Resource_List.select':\n                 '1:ncpus=0:mem=100mb:vmem=%s:vnode=%s'\n                 % (vmemreq, vnode_name)}\n\n            j = Job(TEST_USER, attrs=a)\n            j.create_script(self.sleep600_job)\n            jid = self.server.submit(j)\n            bs = {'job_state': 'R'}\n            self.server.expect(JOB, bs, jid, offset=1)\n\n            cgswap = None\n            job_status = self.server.status(JOB, id=jid)\n            select_resource = job_status[0]['Resource_List.select']\n            chunkspecs = select_resource.split(':')\n            for c in chunkspecs:\n                if '=' in c:\n                    name, value = c.split('=')\n                    if name == 'cgswap':\n                        cgswap = PbsTypeSize(value)\n            self.assertTrue(cgswap is not None, 'job cgswap was not added')\n            
self.assertTrue(cgswap == cgreq,\n                            'job cgswap is %s instead of expected %s'\n                            % (str(cgswap), str(cgreq)))\n            self.logger.info('job cgswap detected to be correct, roughly %s'\n                             % str(cgswap))\n            j = Job(TEST_USER, attrs=a)\n            j.create_script(self.sleep600_job)\n            jid = self.server.submit(j)\n\n            # Second job should not run - not enough cgswap\n            # scheduler sets comment when the job cannot run,\n            # server sets comment when the job runs\n            # in both cases the comment gets set\n            self.server.expect(JOB, 'comment', op=SET)\n            job_status = self.server.status(JOB, id=jid)\n\n            # check that indeed you cannot run the job since it requests\n            # too much swap usage while the first job runs\n            job_comment = job_status[0]['comment']\n            self.assertTrue('Insufficient amount of resource: cgswap'\n                            in job_comment,\n                            'Job comment should indicate insufficient cgswap '\n                            'but is: %s' % job_comment)\n            self.logger.info('job comment as expected: %s' % job_comment)\n\n    def test_cgroup_cgswap_numa(self):\n        \"\"\"\n        Test to verify (with vnode_per_numa_node enabled):\n        - whether queuejob/modifyjob set cgswap to vmem-mem in jobs\n        - whether nodes get resources_available.cgswap filled in\n        - whether a collection of jobs submitted that do not exceed available\n          vmem but would deplete cgswap are indeed not all run simultaneously\n        \"\"\"\n        self.test_cgroup_cgswap(vnode_per_numa_node=True)\n\n    def test_cgroup_enforce_default(self,\n                                    enforce_flags=('true', 'true'),\n                                    exclhost=False):\n        \"\"\"\n        Test to verify if the flags to enforce default 
mem are working\n        and to ensure mem and memsw limits are set as expected;\n        default is to enforce both mem and memsw defaults:\n        job should get small mem limit and larger memsw limit\n        if there is swap.\n        \"\"\"\n        if not self.mem:\n            self.skipTest('Test requires memory subsystem mounted')\n        if self.swapctl != 'true':\n            self.skipTest('Test requires memsw accounting enabled')\n\n        self.load_config(self.cfg16\n                         % enforce_flags)\n\n        a = {'Resource_List.select':\n             '1:ncpus=1:vnode=%s'\n             % self.mom.shortname}\n        if exclhost:\n            a['Resource_List.place'] = 'exclhost'\n\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.sleep600_job)\n        jid = self.server.submit(j)\n        bs = {'job_state': 'R'}\n        self.server.expect(JOB, bs, jid, offset=1)\n\n        mem_base = os.path.join(self.paths[self.hosts_list[0]]['memory'],\n                                'pbs_jobs.service', 'jobid')\n\n        # Get total physical memory available\n        mem_avail = os.path.join(mem_base,\n                                 'memory.limit_in_bytes')\n        result = self.du.cat(hostname=self.mom.hostname, filename=mem_avail,\n                             sudo=True)\n        mem_avail_in_bytes = None\n        try:\n            mem_avail_in_bytes = int(result['out'][0])\n        except Exception:\n            # None will be seen as a failure, nothing to do\n            pass\n        self.assertTrue(mem_avail_in_bytes is not None,\n                        \"Unable to read total memory available\")\n        self.logger.info(\"total available mem: %d\"\n                         % mem_avail_in_bytes)\n\n        # Get total phys+swap memory available\n        vmem_avail = os.path.join(mem_base,\n                                  'memory.memsw.limit_in_bytes')\n        result = self.du.cat(hostname=self.mom.hostname, 
filename=vmem_avail,\n                             sudo=True)\n        vmem_avail_in_bytes = None\n        try:\n            vmem_avail_in_bytes = int(result['out'][0])\n        except Exception:\n            # None will be seen as a failure, nothing to do\n            pass\n        self.assertTrue(vmem_avail_in_bytes is not None,\n                        \"Unable to read total memsw available\")\n        self.logger.info(\"total available memsw: %d\"\n                         % vmem_avail_in_bytes)\n\n        # Get job physical mem limit\n        mem_limit = os.path.join(mem_base, str(jid),\n                                 'memory.limit_in_bytes')\n        result = self.du.cat(hostname=self.mom.hostname, filename=mem_limit,\n                             sudo=True)\n        mem_limit_in_bytes = None\n        try:\n            mem_limit_in_bytes = int(result['out'][0])\n        except Exception:\n            # None will be seen as a failure, nothing to do\n            pass\n        self.assertTrue(mem_limit_in_bytes is not None,\n                        \"Unable to read job mem limit\")\n        self.logger.info(\"job mem limit: %d\"\n                         % mem_limit_in_bytes)\n\n        # Get job phys+swap mem limit\n        vmem_limit = os.path.join(mem_base, str(jid),\n                                  'memory.memsw.limit_in_bytes')\n        result = self.du.cat(hostname=self.mom.hostname, filename=vmem_limit,\n                             sudo=True)\n        vmem_limit_in_bytes = None\n        try:\n            vmem_limit_in_bytes = int(result['out'][0])\n        except Exception:\n            # None will be seen as a failure, nothing to do\n            pass\n        self.assertTrue(vmem_limit_in_bytes is not None,\n                        \"Unable to read job memsw limit\")\n        self.logger.info(\"job memsw limit: %d\"\n                         % vmem_limit_in_bytes)\n\n        # Check results correspond to enforcement flags and job placement\n        
swap_avail = vmem_avail_in_bytes - mem_avail_in_bytes\n        if enforce_flags[0] == 'true' and not exclhost:\n            self.assertTrue(mem_limit_in_bytes == 100 * 1024 * 1024,\n                            \"Job mem limit is %d, expected %d\"\n                            % (mem_limit_in_bytes, 100 * 1024 * 1024))\n        else:\n            self.assertTrue(mem_avail_in_bytes == mem_limit_in_bytes,\n                            \"job mem limit (%d) should be identical to \"\n                            \"total mem available (%d)\"\n                            % (mem_limit_in_bytes, mem_avail_in_bytes))\n            self.logger.info(\"job mem limit is total mem available (%d)\"\n                             % mem_avail_in_bytes)\n        if enforce_flags[1] == 'true' and not exclhost:\n            expected_vmem = (mem_limit_in_bytes\n                             + min(100 * 1024 * 1024, swap_avail))\n            self.assertTrue(vmem_limit_in_bytes == expected_vmem,\n                            \"memsw limit: expected %d, got %d\"\n                            % (expected_vmem, vmem_limit_in_bytes))\n            self.logger.info(\"job memsw limit is the expected %d\"\n                             % vmem_limit_in_bytes)\n        else:\n            if swap_avail:\n                self.assertTrue(vmem_avail_in_bytes == vmem_limit_in_bytes,\n                                \"job memsw limit (%d) should be identical to \"\n                                \"total memsw available (%d)\"\n                                % (vmem_limit_in_bytes, vmem_avail_in_bytes))\n                self.logger.info(\"job memsw limit is total memsw available \"\n                                 \"(%d)\" % vmem_avail_in_bytes)\n            else:\n                self.assertTrue(mem_limit_in_bytes == vmem_limit_in_bytes,\n                                \"no swap, mem (%d) and vmem (%d) limits \"\n                                \"should be identical but are not\"\n                                
% (mem_limit_in_bytes, vmem_limit_in_bytes))\n                self.logger.info(\"no swap: job memsw limit is job mem limit\")\n\n    def test_cgroup_enforce_default_tf(self):\n        \"\"\"\n        Test to verify if the flags to enforce default mem are working\n        and to ensure mem and memsw limits are set as expected;\n        enforce mem but not memsw:\n        job should get a small mem limit; memsw should be unlimited\n        (i.e. able to consume memsw set as limit for all jobs)\n        \"\"\"\n        self.test_cgroup_enforce_default(enforce_flags=('true', 'false'))\n\n    def test_cgroup_enforce_default_ft(self):\n        \"\"\"\n        Test to verify if the flags to enforce default mem are working\n        and to ensure mem and memsw limits are set as expected;\n        enforce memsw but not mem:\n        job should be able to consume all physical memory\n        set as limit for all jobs but only a small amount of additional swap\n        \"\"\"\n        self.test_cgroup_enforce_default(enforce_flags=('false', 'true'))\n\n    def test_cgroup_enforce_default_exclhost(self):\n        \"\"\"\n        Test to verify if the flags to enforce default mem are working\n        and to ensure mem and memsw limits are set as expected;\n        enforce neither mem nor memsw by enabling flags to ignore\n        enforcement for exclhost jobs and submitting an exclhost job:\n        job should be able to consume all physical memory\n        and memsw set as limit for all jobs\n        \"\"\"\n        # enforce flags should both be overridden by exclhost\n        self.test_cgroup_enforce_default(enforce_flags=('true', 'true'),\n                                         exclhost=True)\n\n    def test_manage_rlimit_as(self):\n        \"\"\"\n        Test that RLIMIT_AS is unlimited when a job requests vmem but\n        no pvmem, and corresponds to pvmem when pvmem is requested.\n        \"\"\"\n        if not self.mem:\n            self.skipTest('Test requires memory subsystem mounted')\n        if self.swapctl != 'true':\n            self.skipTest('Test requires memsw accounting enabled')\n\n        # Make sure job history is enabled to 
see when job has ended\n        a = {'job_history_enable': 'True'}\n        rc = self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.assertEqual(rc, 0)\n        self.server.expect(SERVER, {'job_history_enable': 'True'})\n\n        self.load_config(self.cfg16 % ('true', 'true'))\n\n        # First job -- request vmem and no pvmem,\n        # RLIMIT_AS should be unlimited\n        a = {'Resource_List.select':\n             '1:ncpus=0:mem=400mb:vmem=400mb:vnode=%s'\n             % self.mom.shortname}\n\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(\"#!/bin/bash\\nulimit -v\")\n        jid = self.server.submit(j)\n        bs = {'job_state': 'F'}\n        self.server.expect(JOB, bs, jid, extend='x', offset=1)\n\n        thisjob = self.server.status(JOB, id=jid, extend='x')\n        try:\n            job_output_file = thisjob[0]['Output_Path'].split(':')[1]\n        except Exception:\n            self.assertTrue(False, \"Could not determine job output path\")\n        result = self.du.cat(hostname=self.server.hostname,\n                             filename=job_output_file,\n                             sudo=True)\n        self.assertTrue('out' in result, \"Nothing in job output file?\")\n        job_out = '\\n'.join(result['out'])\n        self.logger.info(\"job_out=%s\" % job_out)\n        self.assertTrue('unlimited' in job_out)\n        self.logger.info(\"Job that requests vmem \"\n                         \"but no pvmem correctly has unlimited RLIMIT_AS\")\n\n        # Second job -- see if pvmem still works\n        # RLIMIT_AS should correspond to pvmem\n        a['Resource_List.pvmem'] = '400mb'\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(\"#!/bin/bash\\nulimit -v\")\n        jid = self.server.submit(j)\n        bs = {'job_state': 'F'}\n        self.server.expect(JOB, bs, jid, extend='x', offset=1)\n\n        thisjob = self.server.status(JOB, id=jid, extend='x')\n        try:\n            job_output_file = 
thisjob[0]['Output_Path'].split(':')[1]\n        except Exception:\n            self.assertTrue(False, \"Could not determine job output path\")\n\n        result = self.du.cat(hostname=self.server.hostname,\n                             filename=job_output_file,\n                             sudo=True)\n        self.assertTrue('out' in result, \"Nothing in job output file?\")\n        job_out = '\\n'.join(result['out'])\n        self.logger.info(\"job_out=%s\" % job_out)\n        # ulimit reports kb, not bytes\n        self.assertTrue(str(400 * 1024) in job_out)\n        self.logger.info(\"Job that requests 400mb pvmem \"\n                         \"correctly has 400mb RLIMIT_AS\")\n\n    def test_cgroup_mount_paths(self):\n        \"\"\"\n        Test to see if the cgroup hook picks the shortest path,\n        but also if it can be overridden in the config file\n        \"\"\"\n\n        if self.du.isdir(self.hosts_list[0], '/dev/tstc'):\n            self.skipTest('Test requires /dev/tstc not to exist')\n        if self.du.isdir(self.hosts_list[0], '/dev/tstm'):\n            self.skipTest('Test requires /dev/tstm not to exist')\n\n        self.load_config(self.cfg17)\n\n        dir_created = self.du.mkdir(hostname=self.hosts_list[0],\n                                    path='/dev/tstm', mode=0o0755,\n                                    sudo=True)\n        if not dir_created:\n            self.skipTest('not able to create /dev/tstm')\n        result = self.du.run_cmd(self.hosts_list[0],\n                                 ['mount', '-t', 'cgroup', '-o',\n                                  'rw,nosuid,nodev,noexec,relatime,seclabel,'\n                                  'memory',\n                                  'cgroup', '/dev/tstm'],\n                                 sudo=True)\n        if result['rc'] != 0:\n            self.du.run_cmd(self.hosts_list[0],\n                            ['rmdir', '/dev/tstm'],\n                            sudo=True)\n            
self.skipTest('not able to mount /dev/tstm')\n\n        dir_created = self.du.mkdir(hostname=self.hosts_list[0],\n                                    path='/dev/tstc', mode=0o0755,\n                                    sudo=True)\n        if not dir_created:\n            self.du.run_cmd(self.hosts_list[0],\n                            ['umount', '/dev/tstm'],\n                            sudo=True)\n            self.du.run_cmd(self.hosts_list[0],\n                            ['rmdir', '/dev/tstm'],\n                            sudo=True)\n            self.skipTest('not able to create /dev/tstc')\n\n        result = self.du.run_cmd(self.hosts_list[0],\n                                 ['mount', '-t', 'cgroup', '-o',\n                                  'rw,nosuid,nodev,noexec,relatime,seclabel,'\n                                  'cpuset',\n                                  'cgroup', '/dev/tstc'],\n                                 sudo=True)\n        if result['rc'] != 0:\n            self.du.run_cmd(self.hosts_list[0],\n                            ['umount', '/dev/tstm'],\n                            sudo=True)\n            self.du.run_cmd(self.hosts_list[0],\n                            ['rmdir', '/dev/tstm'],\n                            sudo=True)\n            self.du.run_cmd(self.hosts_list[0],\n                            ['rmdir', '/dev/tstc'],\n                            sudo=True)\n            self.skipTest('not able to mount /dev/tstc')\n\n        # sleep 2s: make sure no old log lines will match 'begin' time\n        time.sleep(2)\n        begin = int(time.time())\n        # sleep 2s to allow for small time differences and rounding errors\n        time.sleep(2)\n\n        a = {'Resource_List.select':\n             \"1:ncpus=1:host=%s\" % self.hosts_list[0]}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(self.sleep600_job)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, jid)\n        
failure = False\n\n        try:\n            self.moms_list[0].log_match(msg='create_job: Creating directory '\n                                            '/sys/fs/cgroup/cpuset/'\n                                            'pbs_jobs.service/jobid/%s'\n                                            % jid,\n                                        n='ALL', starttime=begin,\n                                        max_attempts=1)\n        except Exception:\n            failure = True\n        try:\n            self.moms_list[0].log_match(msg='create_job: Creating directory '\n                                            '/dev/tstm/'\n                                            'pbs_jobs.service/jobid/%s'\n                                            % jid,\n                                        n='ALL', starttime=begin,\n                                        max_attempts=1)\n        except Exception:\n            failure = True\n\n        self.du.run_cmd(self.hosts_list[0],\n                        ['umount', '/dev/tstm'],\n                        sudo=True)\n        self.du.run_cmd(self.hosts_list[0],\n                        ['rmdir', '/dev/tstm'],\n                        sudo=True)\n        self.du.run_cmd(self.hosts_list[0],\n                        ['umount', '/dev/tstc'],\n                        sudo=True)\n        self.du.run_cmd(self.hosts_list[0],\n                        ['rmdir', '/dev/tstc'],\n                        sudo=True)\n\n        self.assertFalse(failure,\n                         'Did not find correct paths for created cgroup dirs')\n\n    def cleanup_cgroup_subsys(self, host):\n        # Remove the jobdir if any under other cgroups\n        cgroup_subsys = ('systemd', 'cpu', 'cpuacct', 'cpuset', 'devices',\n                         'memory', 'hugetlb', 'perf_event', 'freezer',\n                         'blkio', 'pids', 'net_cls', 'net_prio')\n        for subsys in cgroup_subsys:\n            if (subsys in self.paths[host] and\n                
    self.paths[host][subsys]):\n                self.logger.info('Looking for orphaned jobdir in %s' % subsys)\n                cdir = self.paths[host][subsys]\n                if self.du.isdir(host, cdir):\n                    self.logger.info(\"Inspecting \" + cdir)\n                    cpath = self.find_main_cpath(cdir, host)\n                    # not always immediately under main path\n                    if cpath is not None and self.du.isdir(host, cpath):\n                        tasks_files = (\n                            glob.glob(os.path.join(cpath,\n                                                   '*', '*', 'tasks'))\n                            + glob.glob(os.path.join(cpath,\n                                                     '*', 'tasks')))\n                        if tasks_files != []:\n                            self.logger.info(\"Tasks files found in %s: %s\"\n                                             % (cpath, tasks_files))\n                        for tasks_file in tasks_files:\n                            jdir = os.path.dirname(tasks_file)\n                            if not self.du.isdir(host, jdir):\n                                continue\n                            self.logger.info('deleting jobdir %s' % jdir)\n\n                            # Kill tasks before trying to rmdir freezer\n                            cgroup_tasks = os.path.join(jdir, 'tasks')\n                            ret = self.du.cat(hostname=host,\n                                              filename=cgroup_tasks,\n                                              sudo=True)\n                            if ret['rc'] == 0:\n                                for taskstr in ret['out']:\n                                    self.logger.info(\"trying to kill %s on %s\"\n                                                     % (taskstr,\n                                                        host))\n                                    self.du.run_cmd(host,\n                  
                                  ['kill', '-9'] + [taskstr],\n                                                    sudo=True)\n                            for count in range(30):\n                                ret = self.du.cat(hostname=host,\n                                                  filename=cgroup_tasks,\n                                                  sudo=True)\n                                if ret['rc'] != 0:\n                                    self.logger.info(\"Cannot confirm \"\n                                                     \"cgroup tasks; sleeping \"\n                                                     \"30 seconds instead\")\n                                    time.sleep(30)\n                                    break\n                                if ret['out'] == [] or ret['out'][0] == '':\n                                    self.logger.info(\"Processes in cgroup \"\n                                                     \"are gone\")\n                                    break\n                                else:\n                                    self.logger.info(\"tasks still in cgroup: \"\n                                                     + str(ret['out']))\n                                    time.sleep(1)\n\n                            cmd2 = ['rmdir', jdir]\n                            self.du.run_cmd(host, cmd=cmd2, sudo=True)\n\n    def cleanup_frozen_jobs(self, host):\n        # Cleanup frozen jobs\n        # Thaw ALL freezers found\n        # If directory starts with a number (i.e. 
a job)\n        # kill processes in the freezers and remove them\n\n        if 'freezer' in self.paths[host]:\n            # Find freezers to thaw\n            self.logger.info('Cleaning up frozen jobs ****')\n            fdir = self.paths[host]['freezer']\n            freezer_states = \\\n                glob.glob(os.path.join(fdir, '*', '*', '*', 'freezer.state'))\n            freezer_states += \\\n                glob.glob(os.path.join(fdir, '*', '*', 'freezer.state'))\n            freezer_states += \\\n                glob.glob(os.path.join(fdir, '*', 'freezer.state'))\n            self.logger.info('*** found freezer states %s'\n                             % str(freezer_states))\n\n            for freezer_state in freezer_states:\n                # thaw the freezer\n                self.logger.info('Thawing ' + freezer_state)\n                state = 'THAWED'\n                fn = self.du.create_temp_file(body=state)\n                self.du.run_copy(hosts=host, src=fn,\n                                 dest=freezer_state, sudo=True,\n                                 uid='root', gid='root',\n                                 mode=0o644)\n                # Confirm it's thawed\n                for count in range(30):\n                    ret = self.du.cat(hostname=host,\n                                      filename=freezer_state,\n                                      sudo=True)\n                    if ret['rc'] != 0:\n                        self.logger.info(\"Cannot confirm freezer state; \"\n                                         \"sleeping 30 seconds instead\")\n                        time.sleep(30)\n                        break\n                    if ret['out'][0] == 'THAWED':\n                        self.logger.info(\"freezer processes reported as\"\n                                         \" THAWED\")\n                        break\n                    else:\n                        self.logger.info(\"freezer state reported as \"\n                   
                      + ret['out'][0])\n                        time.sleep(1)\n\n                freezer_basename = os.path.basename(\n                    os.path.dirname(freezer_state))\n                jobid = None\n                try:\n                    jobid = int(freezer_basename.split('.')[0])\n                except Exception:\n                    # not a job directory\n                    pass\n                if jobid is not None:\n                    self.logger.info(\"Apparently found job freezer for job %s\"\n                                     % freezer_basename)\n                    freezer_tasks = os.path.join(\n                        os.path.dirname(freezer_state), \"tasks\")\n\n                    # Kill tasks before trying to rmdir freezer\n                    ret = self.du.cat(hostname=host,\n                                      filename=freezer_tasks,\n                                      sudo=True)\n                    if ret['rc'] == 0:\n                        for taskstr in ret['out']:\n                            self.logger.info(\"trying to kill %s on %s\"\n                                             % (taskstr, host))\n                            self.du.run_cmd(host,\n                                            ['kill', '-9', taskstr],\n                                            sudo=True)\n                    for count in range(30):\n                        ret = self.du.cat(hostname=host,\n                                          filename=freezer_tasks,\n                                          sudo=True)\n                        if ret['rc'] != 0:\n                            self.logger.info(\"Cannot confirm freezer tasks; \"\n                                             \"sleeping 30 seconds instead\")\n                            time.sleep(30)\n                            break\n                        if ret['out'] == [] or ret['out'][0] == '':\n         
                   self.logger.info(\"Processes in thawed freezer\"\n                                             \" are gone\")\n                            break\n                        else:\n                            self.logger.info(\"tasks still in thawed freezer: \"\n                                             + str(ret['out']))\n                            time.sleep(1)\n\n                    cmd = [\"rmdir\", os.path.dirname(freezer_state)]\n                    self.logger.info(\"Executing %s\" % ' '.join(cmd))\n                    self.du.run_cmd(hosts=host, cmd=cmd, sudo=True)\n\n    def tearDown(self):\n        TestFunctional.tearDown(self)\n        mom_checks = True\n        if self.moms_list[0].is_cpuset_mom():\n            mom_checks = False\n        self.load_default_config(mom_checks=mom_checks)\n        if not self.iscray:\n            self.remove_vntype()\n        events = ['execjob_begin', 'execjob_launch', 'execjob_attach',\n                  'execjob_epilogue', 'execjob_end', 'exechost_startup',\n                  'exechost_periodic', 'execjob_resize', 'execjob_abort']\n        # Disable the cgroups hook\n        conf = {'enabled': 'False', 'freq': 10, 'event': events}\n        self.server.manager(MGR_CMD_SET, HOOK, conf, self.hook_name)\n        # Cleanup any temp file created\n        self.logger.info('Deleting temporary files %s' % self.tempfile)\n        self.du.rm(hostname=self.serverA, path=self.tempfile, force=True,\n                   recursive=True, sudo=True)\n        for host in self.hosts_list:\n            self.cleanup_frozen_jobs(host)\n            self.cleanup_cgroup_subsys(host)\n"
  },
  {
    "path": "test/tests/functional/pbs_check_job_attrib.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestCheckJobAttrib(TestFunctional):\n    \"\"\"\n    This test suite validates job attributes and values\n    \"\"\"\n\n    def test_exec_vnode_after_job_rerun(self):\n        \"\"\"\n        Test unsetting of exec_vnode for a job that gets requeued\n        after stage-in, and make sure stage-in files are cleaned up.\n        \"\"\"\n        hook_name = \"momhook\"\n        hook_body = \"import pbs\\npbs.event().reject('my custom message')\\n\"\n        a = {'event': 'execjob_begin', 'enabled': 'True'}\n        self.server.create_import_hook(hook_name, a, hook_body)\n\n        self.server.log_match(\".*successfully sent hook file.*\" +\n                              hook_name + \".PY\" + \".*\", regexp=True,\n                              max_attempts=100, interval=5)\n        storage_info = {}\n        starttime = int(time.time())\n        stagein_path = self.mom.create_and_format_stagein_path(\n            storage_info, asuser=str(TEST_USER))\n        a = {ATTR_stagein: stagein_path}\n        j = Job(TEST_USER, a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, 'exec_vnode', id=jid, op=UNSET)\n        # turn scheduling off to avoid race conditions;\n        # otherwise the scheduler tries to run the job until it reaches\n        # the H state\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 
'False'})\n        self.server.expect(JOB, {'run_count': (GT, 0)}, id=jid)\n        self.server.log_match('my custom message', starttime=starttime)\n        path = stagein_path.split(\"@\")\n        msg = \"Staged in file not cleaned\"\n        self.assertFalse(self.mom.isfile(path[0]), msg)\n"
  },
  {
    "path": "test/tests/functional/pbs_checkpoint.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom ptl.utils.pbs_crayutils import CrayUtils\nfrom tests.functional import *\n\n\nclass TestCheckpoint(TestFunctional):\n    \"\"\"\n    This test suite targets Checkpoint functionality.\n    \"\"\"\n    abort_file = ''\n    cu = CrayUtils()\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        a = {'job_history_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        abort_script = \"\"\"#!/bin/bash\nkill $1\nexit 0\n\"\"\"\n        self.abort_file = self.mom.add_checkpoint_abort_script(\n            body=abort_script)\n        self.platform = self.du.get_platform()\n        if self.platform != 'cray' and self.platform != 'craysim':\n            self.attrs = {ATTR_l + '.select': '1:ncpus=1',\n                          ATTR_l + '.place': 'excl'}\n        else:\n            nv = self.cu.num_compute_vnodes(self.server)\n            self.assertNotEqual(nv, 0, \"No cray_compute vnodes are present.\")\n            self.attrs = {ATTR_l + '.select': '%d:ncpus=1' % nv,\n                          ATTR_l + '.place': 'scatter'}\n\n    def verify_checkpoint_abort(self, jid, stime):\n        \"\"\"\n        Verify that checkpoint and abort happened.\n        \"\"\"\n        self.ck_dir = os.path.join(self.mom.pbs_conf['PBS_HOME'],\n                                   'checkpoint', jid + '.CK')\n        
self.assertTrue(self.du.isdir(hostname=self.mom.hostname,\n                                      path=self.ck_dir, runas=ROOT_USER),\n                        msg=\"Checkpoint directory %s not found\" % self.ck_dir)\n        _msg1 = \"%s;req_holdjob: Checkpoint initiated.\" % jid\n        self.mom.log_match(_msg1, starttime=stime)\n        _msg2 = \"%s;checkpoint_abort script %s: exit code 0\" % (\n            jid, self.abort_file)\n        self.mom.log_match(_msg2, starttime=stime)\n        _msg3 = \"%s;checkpointed to %s\" % (jid, self.ck_dir)\n        self.mom.log_match(_msg3, starttime=stime)\n        _msg4 = \"%s;task 00000001 terminated\" % jid\n        self.mom.log_match(_msg4, starttime=stime)\n\n    def start_server_hot(self):\n        \"\"\"\n        Start the server with the hot option.\n        \"\"\"\n        pbs_exec = self.server.pbs_conf['PBS_EXEC']\n        svrname = self.server.pbs_server_name\n        pbs_server_hot = [os.path.join(\n            pbs_exec, 'sbin', 'pbs_server'), '-t', 'hot']\n        self.du.run_cmd(svrname, cmd=pbs_server_hot, sudo=True)\n        self.assertTrue(self.server.isUp())\n\n    def checkpoint_abort_with_qterm_restart_hot(self, qterm_type):\n        \"\"\"\n        Checkpointing with qterm -t <type>, hot server restart.\n        \"\"\"\n\n        j1 = Job(TEST_USER, self.attrs)\n        j1.set_sleep_time(20)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n\n        start_time = int(time.time())\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        self.server.qterm(manner=qterm_type)\n\n        self.verify_checkpoint_abort(jid1, start_time)\n\n        self.start_server_hot()\n        self.assertTrue(self.server.isUp())\n\n        msg = \"%s;Requeueing job, substate: 10 Requeued in queue: workq\" % jid1\n        self.server.log_match(msg, starttime=start_time)\n\n        # wait for the server to hot start the job\n        
self.server.expect(JOB, {'job_state': 'R'}, id=jid1, interval=2)\n        self.server.expect(JOB, 'exec_vnode', id=jid1, op=SET)\n        self.assertFalse(os.path.exists(self.ck_dir),\n                         msg=self.ck_dir + \" still exists\")\n        self.server.expect(JOB, {'job_state': 'F'},\n                           jid1, extend='x', interval=5)\n\n    def test_checkpoint_abort_with_preempt(self):\n        \"\"\"\n        This test verifies that checkpoint_abort works as expected when\n        a job is preempted via checkpoint. It does so by submitting a job\n        in express queue which preempts a running job in the default queue.\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SCHED, {'preempt_order': 'C'},\n                            runas=ROOT_USER)\n        a = {'queue_type': 'execution',\n             'started': 'True',\n             'enabled': 'True',\n             'Priority': 200}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, \"expressq\")\n\n        j1 = Job(TEST_USER, self.attrs)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n\n        self.attrs['queue'] = 'expressq'\n        j2 = Job(TEST_USER, self.attrs)\n        j2.set_sleep_time(20)\n        start_time = int(time.time())\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid1)\n\n        self.verify_checkpoint_abort(jid1, start_time)\n\n        self.server.expect(JOB, {'job_state': 'F'},\n                           jid2, extend='x', interval=5)\n        self.server.expect(JOB, {'job_state': 'F'},\n                           jid1, extend='x', interval=5)\n\n    def test_checkpoint_abort_with_qhold(self):\n        \"\"\"\n        This test uses qhold for checkpointing.\n        \"\"\"\n        j1 = Job(TEST_USER, self.attrs)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 
'R'}, id=jid1)\n        start_time = int(time.time())\n        self.server.holdjob(jid1)\n        self.server.expect(JOB, {'job_state': 'H'}, id=jid1)\n\n        self.verify_checkpoint_abort(jid1, start_time)\n\n    def test_checkpoint_abort_with_qterm_immediate_restart_hot(self):\n        \"\"\"\n        This tests checkpointing with qterm -t immediate, hot server restart.\n        \"\"\"\n        self.checkpoint_abort_with_qterm_restart_hot(\"immediate\")\n\n    def test_checkpoint_abort_with_qterm_delay_restart_hot(self):\n        \"\"\"\n        This tests checkpointing with qterm -t delay, hot server restart.\n        \"\"\"\n        self.checkpoint_abort_with_qterm_restart_hot(\"delay\")\n\n    def tearDown(self):\n        TestFunctional.tearDown(self)\n        self.du.rm(hostname=self.mom.hostname, path=self.abort_file,\n                   sudo=True, force=True)\n"
  },
  {
    "path": "test/tests/functional/pbs_client_response.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\nimport time\n\n\nclass TestClientResponse(TestFunctional):\n    \"\"\"\n    Test cases to check the number of responses received from a\n    client command in 1 second\n    \"\"\"\n    def test_qstat_response(self):\n        \"\"\"\n        Test to check how many qstat commands can be run in 1 second.\n        \"\"\"\n        count = 0\n        t = time.time() + 1\n        qstat_cmd = os.path.join(self.server.pbs_conf[\"PBS_EXEC\"], \"bin\",\n                                 \"qstat\")\n        while time.time() < t:\n            ret = self.du.run_cmd(self.server.hostname, qstat_cmd)\n            self.assertEqual(ret['rc'], 0)\n            count += 1\n        self.logger.info(\"Number of qstat responses: %d\", count)\n"
  },
  {
    "path": "test/tests/functional/pbs_complete_running_parent_job.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass Test_complete_running_parent_job(TestFunctional):\n    \"\"\"\n    This test suite tests that the complete_running() procedure\n    is processed for the parent array job.\n    \"\"\"\n\n    def setUp(self):\n        \"\"\"\n        Set eligible_time_enable = True.\n        This is needed to test the issue in PP-1211\n        \"\"\"\n\n        TestFunctional.setUp(self)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {\n                            'eligible_time_enable': True})\n\n    def test_parent_job_S_accounting_record(self):\n        \"\"\"\n        Submit an array job and test whether the 'S' accounting record\n        is created for the parent job.\n        \"\"\"\n\n        J = Job(TEST_USER, attrs={ATTR_J: '1-2'})\n        J.set_sleep_time(1)\n        parent_jid = self.server.submit(J)\n\n        self.server.accounting_match(msg='.*;S;' +\n                                     re.escape(parent_jid) + \".*\",\n                                     id=parent_jid, regexp=True)\n\n    def test_parent_job_comment_and_stime(self):\n        \"\"\"\n        Submit an array job and test whether the comment and stime are set\n        for the parent job.\n        \"\"\"\n\n        J = Job(TEST_USER, attrs={ATTR_J: '1-2'})\n        J.set_sleep_time(10)\n        parent_jid = self.server.submit(J)\n\n        attr = {\n            ATTR_comment: (MATCH_RE, 'Job Array Began at .*'),\n            ATTR_stime: (MATCH_RE, '.+')\n        }\n        self.server.expect(JOB, attr, id=parent_jid, attrop=PTL_AND)\n"
  },
  {
    "path": "test/tests/functional/pbs_conf_resv_stale_vnode.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestResvStaleVnode(TestFunctional):\n    \"\"\"\n    Test that the scheduler won't confirm a reservation on a stale vnode and\n    make sure reservations whose nodes have gone stale get degraded\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        # Create 3 vnodes named different things in different vnodedef files\n        # This allows us to delete a vnodedef file and make that node stale\n        self.mom.add_config(conf={'$vnodedef_additive': 'False'})\n        a = {'resources_available.ncpus': 1, 'priority': 100}\n        self.mom.create_vnodes(a, 1, fname='nat', restart=False,\n                               usenatvnode=True, expect=False, vname='foo')\n        a['priority'] = 10\n        self.mom.create_vnodes(a, 1, fname='fname1', delall=False,\n                               restart=False, additive=True,\n                               expect=False, vname='vn')\n        a['priority'] = 1\n        self.mom.create_vnodes(a, 1, fname='fname2', delall=False,\n                               additive=True, expect=False, vname='vnode')\n\n        self.scheduler.set_sched_config({'node_sort_key':\n                                         '\\\"sort_priority HIGH\\\"'})\n\n    def test_conf_resv_stale_vnode(self):\n        \"\"\"\n        Test that the scheduler won't confirm a reservation on a stale node.\n        \"\"\"\n        # Ensure the hostsets aren't used by associating a node to a queue\n        a = {'queue_type': 'Execution', 'enabled': 'True', 'started': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='workq2')\n        self.server.manager(MGR_CMD_SET, NODE, {'queue': 'workq2'},\n                            id=self.mom.shortname)\n\n        # Submit a job that will run on our stale vnode\n        a = {'Resource_List.select': '1:vnode=vn[0]',\n             'Resource_List.walltime': 3600}\n        J = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(J)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n        self.mom.delete_vnode_defs(vdefname='fname1')\n        self.mom.signal('-HUP')\n        self.server.expect(NODE, {'state': (MATCH_RE, 'Stale')}, id='vn[0]')\n\n        now = int(time.time())\n        a = {'reserve_start': now + 5400, 'reserve_end': now + 7200}\n        R = Reservation(TEST_USER, a)\n        rid = self.server.submit(R)\n\n        # Reservation should be confirmed on vnode[0] since vn[0] is Stale\n        a = {'resv_nodes': '(vnode[0]:ncpus=1)'}\n        a2 = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, a, id=rid)\n        self.server.expect(RESV, a2, id=rid)\n\n    def test_stale_degraded(self):\n        \"\"\"\n        Test that a reservation goes into the degraded state\n        when one of its vnodes goes stale\n        \"\"\"\n        self.server.expect(NODE, {'state=free': 3})\n        now = int(time.time())\n        a = {'Resource_List.select': '3:ncpus=1',\n             'Resource_List.place': 'vscatter',\n             'reserve_start': now + 3600, 'reserve_end': now + 7200}\n\n        R = Reservation(TEST_USER, attrs=a)\n        rid = self.server.submit(R)\n\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, a, id=rid)\n\n        self.mom.delete_vnode_defs(vdefname='fname1')\n        self.mom.signal('-HUP')\n        self.server.expect(NODE, {'state': (MATCH_RE, 'Stale')}, id='vn[0]')\n\n        a = {'reserve_state': (MATCH_RE, 'RESV_DEGRADED|10')}\n        self.server.expect(RESV, a, id=rid)\n"
  },
  {
    "path": "test/tests/functional/pbs_config.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport tarfile\nfrom tests.functional import *\n\n\nclass TestPBSConfig(TestFunctional):\n    \"\"\"\n    Test cases for pbs_config tool\n    \"\"\"\n\n    snapdirs = []\n    snaptars = []\n\n    def test_config_for_snapshot(self):\n        \"\"\"\n        Test pbs_config's --snap option\n        \"\"\"\n        pbs_snapshot_path = os.path.join(\n            self.server.pbs_conf[\"PBS_EXEC\"], \"sbin\", \"pbs_snapshot\")\n        if not os.path.isfile(pbs_snapshot_path):\n            self.skipTest(\"pbs_snapshot not found\")\n        pbs_config_path = os.path.join(\n            self.server.pbs_conf[\"PBS_EXEC\"], \"unsupported\", \"pbs_config\")\n        if not os.path.isfile(pbs_config_path):\n            self.skipTest(\"pbs_config not found\")\n\n        # Create 4 vnodes\n        a = {ATTR_rescavail + \".ncpus\": 2}\n        self.mom.create_vnodes(attrib=a, num=4,\n                               usenatvnode=True)\n        self.server.expect(VNODE, {'state=free': 4}, count=True)\n\n        # Create a queue\n        a = {'queue_type': 'execution',\n             'started': 'True',\n             'enabled': 'True',\n             'Priority': 200}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id=\"expressq\")\n\n        # Set preempt_order to 'R'\n        a = {\"preempt_order\": \"R\"}\n        self.server.manager(MGR_CMD_SET, SCHED, a, 
id=\"default\")\n\n        # Set sched_config 'smp_cluster_dist' to 'round_robin'\n        self.scheds[\"default\"].set_sched_config(\n            {\"smp_cluster_dist\": \"round_robin\"})\n\n        # Now that we have a custom configuration, take a snapshot\n        outdir = pwd.getpwnam(self.du.get_current_user()).pw_dir\n        snap_cmd = [pbs_snapshot_path, \"-o \" + outdir, \"--with-sudo\"]\n        ret = self.du.run_cmd(cmd=snap_cmd, logerr=False, as_script=True)\n        self.assertEqual(ret[\"rc\"], 0, \"pbs_snapshot command failed\")\n        snap_out = ret['out'][0]\n        output_tar = snap_out.split(\":\")[1]\n        output_tar = output_tar.strip()\n\n        # Check that the output tarball was created\n        self.assertTrue(os.path.isfile(output_tar),\n                        \"Error capturing snapshot:\\n\" + str(ret))\n        self.snaptars.append(output_tar)\n\n        # Unwrap the tarball\n        tar = tarfile.open(output_tar)\n        tar.extractall(path=outdir)\n        tar.close()\n\n        # snapshot directory name = <snapshot>.tgz[:-4]\n        snap_dir = output_tar[:-4]\n        self.assertTrue(os.path.isdir(snap_dir))\n        self.snapdirs.append(snap_dir)\n\n        # Let's revert the system back to default now\n        TestFunctional.setUp(self)\n\n        # Now, use pbs_config --snap to build the system captured\n        # previously in the snapshot\n        config_cmd = [pbs_config_path, \"--snap=\" + snap_dir]\n        self.du.run_cmd(cmd=config_cmd, sudo=True, logerr=False)\n\n        # Verify that there are 4 vnodes, expressq, preempt_order=R and\n        # smp_cluster_dist=round_robin\n        self.server.expect(VNODE, {'state=free': 4}, count=True)\n        self.server.expect(QUEUE, {\"Priority\": 200}, id=\"expressq\")\n        self.server.expect(SCHED, {\"preempt_order\": \"R\"}, id=\"default\")\n        self.scheds[\"default\"].parse_sched_config()\n        self.assertEqual(\n            
self.scheds[\"default\"].sched_config[\"smp_cluster_dist\"],\n            \"round_robin\",\n            \"pbs_config didn't load sched_config correctly\")\n\n    def tearDown(self):\n        # Cleanup snapshot dirs and tars\n        for snap_dir in self.snapdirs:\n            self.du.rm(path=snap_dir, recursive=True, force=True)\n        for snap_tar in self.snaptars:\n            self.du.rm(path=snap_tar, force=True)\n        TestFunctional.tearDown(self)\n"
  },
  {
    "path": "test/tests/functional/pbs_cpuset.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\n@requirements(num_moms=2)\nclass TestPbsCpuset(TestFunctional):\n\n    \"\"\"\n    This testsuite covers various features using cgroup cpuset systems\n        - Reliable Job Startup Feature\n        - Node Rampdown Feature\n    \"\"\"\n\n    def check_stageout_file_size(self):\n        \"\"\"\n        Check that the test.img file to be staged out grows to at\n        least 1GB within 10 seconds.\n        \"\"\"\n        fpath = os.path.join(TEST_USER.home, \"test.img\")\n        cmd = ['stat', '-c', '%s', fpath]\n        fsize = 0\n        for i in range(11):\n            rc = self.du.run_cmd(hosts=self.h0, cmd=cmd,\n                                 runas=TEST_USER)\n            if rc['rc'] == 0 and len(rc['out']) == 1:\n                try:\n                    fsize = int(rc['out'][0])\n                except Exception:\n                    pass\n            # 1073741824 bytes == 1GB\n            if fsize > 1073741824:\n                break\n            else:\n                time.sleep(1)\n        if fsize <= 1073741824:\n            self.fail(\"Failed to create 1GB file at %s\" % fpath)\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n\n        # skip if there are no cpuset systems in the test cluster\n        no_csetmom = True\n        for mom in self.moms.values():\n            if mom.is_cpuset_mom():\n                no_csetmom = False\n        if no_csetmom:\n            self.skipTest(\"Skip on cluster without cgroup cpuset system.\")\n\n        # Various host names\n        self.h0 = self.moms.values()[0].shortname\n        self.h1 = self.moms.values()[1].shortname\n        self.hostA = socket.getfqdn(self.h0)\n        self.hostB = socket.getfqdn(self.h1)\n        # Various node names. First mom may or may not be a cpuset system.\n        try:\n            self.n0 = self.server.status(\n                NODE, id='%s[0]' % (self.h0))[0]['id']\n        except PbsStatusError:\n            self.n0 = self.h0\n        self.n1 = self.h1\n        self.n2 = '%s[0]' % (self.n1)\n        self.n3 = '%s[1]' % (self.n1)\n\n        # Skip if there are less than four vnodes. There should be\n        # three from cpuset system (natural + two NUMA vnodes)\n        nodeinfo = self.server.status(NODE)\n        if len(nodeinfo) < 4:\n            self.skipTest(\"Not enough vnodes to run the test.\")\n        # skip if second mom has less than two NUMA vnodes\n        try:\n            self.server.status(NODE, id=self.n3)\n        except PbsStatusError:\n            self.skipTest(\"vnode %s doesn't exist on pbs server\" % (self.n3))\n        # skip if vnodes are not in free state\n        for node in nodeinfo:\n            if node['state'] != 'free':\n                self.skipTest(\"Not all the vnodes are in free state\")\n\n        self.pbs_release_nodes_cmd = os.path.join(\n            self.server.pbs_conf['PBS_EXEC'], 'bin', 'pbs_release_nodes')\n        # number of resource ncpus to request initially\n        ncpus = self.server.status(NODE, 'resources_available.ncpus',\n                                   id=self.n3)[0]\n        # request a partial amount of ncpus in self.n3\n        # (use integer division so select strings get an int, not a float)\n        self.ncpus2 = int(ncpus['resources_available.ncpus']) // 2\n        # cgroup cpuset path on second node\n        cmd = ['grep cgroup', '/proc/mounts', '|', 'grep cpuset',
'|',\n               'grep -v', '/dev/cpuset']\n        ret = self.server.du.run_cmd(self.n1, cmd, runas=TEST_USER)\n        self.cset_path = ret['out'][0].split()[1]\n\n        # launch hook\n        self.launch_hook_body = \"\"\"\nimport pbs\nimport time\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"Executing launch\")\n# print out the vnode_list[] values\nfor vn in e.vnode_list:\n    v = e.vnode_list[vn]\n    pbs.logjobmsg(e.job.id, \"launch: found vnode_list[\" + v.name + \"]\")\n# print out the vnode_list_fail[] values:\nfor vn in e.vnode_list_fail:\n    v = e.vnode_list_fail[vn]\n    pbs.logjobmsg(e.job.id, \"launch: found vnode_list_fail[\" + v.name + \"]\")\nif e.job.in_ms_mom():\n    pj = e.job.release_nodes(keep_select=\"ncpus=1:mem=1gb\")\n    if pj is None:\n        e.job.Hold_Types = pbs.hold_types(\"s\")\n        e.job.rerun()\n        e.reject(\"unsuccessful at LAUNCH\")\npbs.logmsg(pbs.LOG_DEBUG, \"Sleeping for 20sec\")\ntime.sleep(20)\n\"\"\"\n\n        self.script = {}\n        self.job1_select = \"ncpus=1:mem=1gb+\" + \\\n            \"ncpus=%d:mem=1gb:vnode=%s+\" % (self.ncpus2, self.n2) + \\\n            \"ncpus=%d:mem=1gb:vnode=%s\" % (self.ncpus2, self.n3)\n        self.job1_place = \"vscatter\"\n\n        # expected values upon successful job submission\n        self.job1_schedselect = \"1:ncpus=1:mem=1gb+\" + \\\n            \"1:ncpus=%d:mem=1gb:vnode=%s+\" % (self.ncpus2, self.n2) + \\\n            \"1:ncpus=%d:mem=1gb:vnode=%s\" % (self.ncpus2, self.n3)\n        self.job1_exec_host = \"%s/0+%s/0*%d+%s/1*%d\" % (\n            self.h0, self.h1, self.ncpus2, self.n1, self.ncpus2)\n        self.job1_exec_vnode = \"(%s:ncpus=1:mem=1048576kb)+\" % (self.n0,) + \\\n            \"(%s:ncpus=%d:mem=1048576kb)+\" % (self.n2, self.ncpus2) + \\\n            \"(%s:ncpus=%d:mem=1048576kb)\" % (self.n3, self.ncpus2)\n\n        # expected values after release of vnode of self.n3\n        self.job1_schedsel1 = \"1:ncpus=1:mem=1048576kb+\" + \\\n            
\"1:ncpus=%d:mem=1048576kb:vnode=%s\" % (self.ncpus2, self.n2)\n        self.job1_exec_host1 = \"%s/0+%s/0*%d\" % (self.h0, self.h1, self.ncpus2)\n        self.job1_exec_vnode1 = \"(%s:ncpus=1:mem=1048576kb)+\" % (self.n0,) + \\\n            \"(%s:ncpus=%d:mem=1048576kb)\" % (self.n2, self.ncpus2)\n\n        # expected values during lengthy stageout\n        self.job1_newsel = \"1:ncpus=1:mem=1048576kb\"\n        self.job1_new_exec_host = \"%s/0\" % self.h0\n        self.job1_new_exec_vnode = \"(%s:ncpus=1:mem=1048576kb)\" % self.n0\n\n        # values to use when matching accounting logs\n        self.job1_exec_host_esc = self.job1_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        self.job1_exec_vnode_esc = self.job1_exec_vnode.replace(\n            \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\n            \")\", r\"\\)\").replace(\"+\", r\"\\+\")\n        self.job1_sel_esc = self.job1_select.replace(\n            \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\n            \")\", r\"\\)\").replace(\"+\", r\"\\+\")\n        self.job1_new_exec_vnode_esc = self.job1_new_exec_vnode.replace(\n            \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\n            \")\", r\"\\)\").replace(\"+\", r\"\\+\")\n\n    def tearDown(self):\n        for host in [self.h0, self.h1]:\n            test_img = os.path.join(\"/home\", \"pbsuser\", \"test.img\")\n            self.du.rm(hostname=host, path=test_img, force=True,\n                       runas=TEST_USER)\n        TestFunctional.tearDown(self)\n\n    def test_reliable_job_startup_on_cpuset(self):\n        \"\"\"\n        A job is started with two numa nodes and goes in R state.\n        An execjob_launch hook will force job to have only one numa node.\n        The released numa node can be used in another job.\n        \"\"\"\n        
# instantiate execjob_launch hook\n        hook_event = \"execjob_launch\"\n        hook_name = \"launch\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        stime = time.time()\n        self.server.create_import_hook(hook_name, a, self.launch_hook_body)\n\n        # Check mom logs that the launch hook got propagated\n        msg = \"Hook;launch.PY;copy hook-related file request received\"\n        self.moms.values()[1].log_match(msg, starttime=stime)\n\n        # Submit job1 that uses second mom's two NUMA nodes, in R state\n        a = {ATTR_l + '.select': self.job1_select,\n             ATTR_l + '.place': self.job1_place,\n             ATTR_W: 'tolerate_node_failures=job_start'}\n        j = Job(TEST_USER, attrs=a)\n        stime = time.time()\n        jid = self.server.submit(j)\n\n        # Check the exec_vnode while in substate 41\n        self.server.expect(JOB, {ATTR_substate: '41'}, id=jid)\n        self.server.expect(JOB, 'exec_vnode', id=jid, op=SET)\n        job_stat = self.server.status(JOB, id=jid)\n        execvnode1 = job_stat[0]['exec_vnode']\n        self.logger.info(\"initial exec_vnode: %s\" % execvnode1)\n        initial_vnodes = execvnode1.split('+')\n\n        # Check the exec_vnode after job is in substate 42\n        self.server.expect(JOB, {ATTR_substate: '42'}, offset=20, id=jid)\n        self.server.expect(JOB, 'exec_vnode', id=jid, op=SET)\n        job_stat = self.server.status(JOB, id=jid)\n        execvnode2 = job_stat[0]['exec_vnode']\n        self.logger.info(\"pruned exec_vnode: %s\" % execvnode2)\n\n        # Check mom logs for pruned from and pruned to messages\n        self.moms.values()[0].log_match(\"Job;%s;pruned from exec_vnode=%s\" % (\n            jid, execvnode1), starttime=stime)\n        self.moms.values()[0].log_match(\"Job;%s;pruned to exec_vnode=%s\" % (\n            jid, execvnode2), starttime=stime)\n\n        # Find out the released vnode\n        if initial_vnodes[0] == execvnode2:\n            
execvnodeB = initial_vnodes[1]\n        else:\n            execvnodeB = initial_vnodes[0]\n        vnodeB = execvnodeB.split(':')[0].split('(')[1]\n        self.logger.info(\"released vnode: %s\" % vnodeB)\n\n        # Submit job2 requesting all of the released vnode's cpus, job runs\n        a = {ATTR_l + '.select': '1:ncpus=%d:mem=1gb:vnode=%s' % (\n            self.ncpus2 * 2, vnodeB)}\n        j2 = Job(TEST_USER, attrs=a)\n        stime = time.time()\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, offset=20, id=jid2)\n\n        # Check if vnode for job2 matches released vnode from job1\n        self.server.expect(JOB, 'exec_vnode', id=jid2, op=SET)\n        job_stat = self.server.status(JOB, id=jid2)\n        execvnode3 = job_stat[0]['exec_vnode']\n        vnode3 = execvnode3.split(':')[0].split('(')[1]\n        self.assertEqual(vnode3, vnodeB)\n        self.logger.info(\"job2 vnode %s is the released vnode %s\" % (\n            vnode3, vnodeB))\n\n    def test_release_nodes_on_cpuset_sis(self):\n        \"\"\"\n        On a cluster where the second mom is a cgroup cpuset system with two\n        NUMA nodes, submit a job that will use cpus on both NUMA vnodes.\n        The job goes in R state. Use pbs_release_nodes to successfully release\n        one of the NUMA vnodes and its resources used in the job. 
Compare the\n        job's cgroup cpuset info before and after calling pbs_release_nodes\n        to verify that NUMA node's cpu resources were released.\n        \"\"\"\n        # Submit a job that uses second mom's two NUMA nodes, in R state\n        a = {ATTR_l + '.select': self.job1_select,\n             ATTR_l + '.place': self.job1_place}\n        j1 = Job(TEST_USER, attrs=a)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '3gb',\n                                 'Resource_List.ncpus': 1 + self.ncpus2 * 2,\n                                 'Resource_List.nodect': 3,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid1)\n\n        # Check the cpuset before releasing self.n3 from jid1\n        cset_file = os.path.join(self.cset_path, 'pbs_jobs.service/jobid',\n                                 jid1, 'cpuset.cpus')\n        cset_before = self.du.cat(self.n1, cset_file)\n        cset_j1_before = cset_before['out']\n        self.logger.info(\"cset_j1_before : %s\" % cset_j1_before)\n\n        before_release = time.time()\n\n        # Release a NUMA vnode on second mom using command pbs_release_nodes\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid1, self.n3]\n        ret = self.server.du.run_cmd(self.server.hostname,\n                                     cmd, runas=TEST_USER)\n        self.assertEqual(ret['rc'], 0)\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.ncpus': 1 + self.ncpus2,\n                                 'Resource_List.nodect': 2,\n                                 'schedselect': self.job1_schedsel1,\n                                 'exec_host': self.job1_exec_host1,\n                                 'exec_vnode': 
self.job1_exec_vnode1}, id=jid1)\n\n        # Check if sister mom updated its internal nodes table after release\n        self.moms.values()[1].log_match('Job;%s;updated nodes info' % jid1,\n                                        starttime=before_release - 1)\n\n        # Check the cpuset for the job after releasing self.n3\n        cset_after = self.du.cat(self.n1, cset_file)\n        cset_j1_after = cset_after['out']\n        self.logger.info(\"cset_j1_after : %s\" % cset_j1_after)\n\n        # Compare the before and after cpusets info\n        msg = \"%s: cpuset cpus remain after release of %s\" % (jid1, self.n3)\n        self.assertNotEqual(cset_j1_before, cset_j1_after, msg)\n\n    def test_release_nodes_on_stageout_cset(self):\n        \"\"\"\n        Submit a job, with -W release_nodes_on_stageout=true as a PBS directive\n        in the job script, that will use cpus and mem on two NUMA vnodes on the\n        second mom. The job goes in R state. The job creates a huge stageout\n        file. 
When the job is deleted the sister NUMA vnodes are released\n        during lengthy stageout and only the primary execution host's vnode\n        is left assigned to the job.\n        \"\"\"\n        FIB40 = os.path.join(self.server.pbs_conf['PBS_EXEC'], 'bin', '') + \\\n            'pbs_python -c \"exec(\\\\\\\"def fib(i):\\\\n if i < 2:\\\\n  \\\nreturn i\\\\n return fib(i-1) + fib(i-2)\\\\n\\\\nprint(fib(40))\\\\\\\")\"'\n        FIB400 = os.path.join(self.server.pbs_conf['PBS_EXEC'], 'bin', '') + \\\n            'pbs_python -c \"exec(\\\\\\\"def fib(i):\\\\n if i < 2:\\\\n  \\\nreturn i\\\\n return fib(i-1) + fib(i-2)\\\\n\\\\nprint(fib(400))\\\\\\\")\"'\n\n        self.script['job1'] = \\\n            \"#PBS -S /bin/bash\\n\" \\\n            \"#PBS -l select=\" + self.job1_select + \"\\n\" + \\\n            \"#PBS -l place=\" + self.job1_place + \"\\n\" + \\\n            \"#PBS -W stageout=test.img@%s:test.img\\n\" % (self.n1,) + \\\n            \"#PBS -W release_nodes_on_stageout=true\\n\" + \\\n            \"dd if=/dev/zero of=test.img count=1024 bs=2097152\\n\" + \\\n            \"pbsdsh -n 1 -- %s\\n\" % (FIB40,) + \\\n            \"pbsdsh -n 2 -- %s\\n\" % (FIB40,) + \\\n            \"%s\\n\" % (FIB400,)\n\n        stime = time.time()\n        j = Job(TEST_USER)\n        j.create_script(self.script['job1'])\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'release_nodes_on_stageout': 'True',\n                                 'Resource_List.mem': '3gb',\n                                 'Resource_List.ncpus': 1 + self.ncpus2 * 2,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n         
                        'exec_vnode': self.job1_exec_vnode}, id=jid)\n        # Check various vnode statuses.\n        attr0 = {'state': 'job-busy', 'jobs': jid + '/0',\n                 'resources_assigned.ncpus': 1,\n                 'resources_assigned.mem': '1048576kb'}\n        self.server.expect(VNODE, attr0, id=self.n0)\n        attr1 = {'state': 'free', 'resources_assigned.ncpus': 0,\n                 'resources_assigned.mem': '0kb'}\n        self.server.expect(VNODE, attr1, id=self.n1)\n        jobs = ''\n        for i in range(0, int(self.ncpus2)):\n            jobs += ' %s/%d,' % (jid, i)\n        jobs = jobs.strip().strip(',')\n        attr2 = {'state': 'free',\n                 'jobs': jobs,\n                 'resources_assigned.ncpus': int(self.ncpus2),\n                 'resources_assigned.mem': '1048576kb'}\n        for vn in [self.n2, self.n3]:\n            self.server.expect(VNODE, attr2, id=vn)\n        # job's PBS_NODEFILE contents should match exec_host\n        pbs_nodefile = os.path.join(self.server.\n                                    pbs_conf['PBS_HOME'], 'aux', jid)\n        cmd = ['cat', pbs_nodefile]\n        ret = self.server.du.run_cmd(self.h0, cmd, sudo=False)\n        self.assertTrue(self.hostA in ret['out'])\n        self.assertTrue(self.hostB in ret['out'])\n\n        # The job writes out enough data to make the stageout lengthy\n        self.check_stageout_file_size()\n\n        # Deleting the job will trigger the stageout process\n        # at which time the sister node is automatically released\n        # due to release_nodes_on_stageout=true being set\n        self.server.delete(jid)\n\n        # Verify remaining job resources.\n        self.server.expect(JOB, {'job_state': 'E',\n                                 'Resource_List.mem': '1gb',\n                                 'Resource_List.ncpus': 1,\n                                 'Resource_List.select': self.job1_newsel,\n                                 'Resource_List.place': self.job1_place,\n                    
             'Resource_List.nodect': 1,\n                                 'schedselect': self.job1_newsel,\n                                 'exec_host': self.job1_new_exec_host,\n                                 'exec_vnode': self.job1_new_exec_vnode},\n                           id=jid)\n        # Check various vnode status\n        attr0 = {'state': 'job-busy', 'jobs': jid + '/0',\n                 'resources_assigned.ncpus': 1,\n                 'resources_assigned.mem': '1048576kb'}\n        self.server.expect(VNODE, attr0, id=self.n0)\n        attr1 = {'state': 'free', 'resources_assigned.ncpus': '0',\n                 'resources_assigned.mem': '0kb'}\n        for vn in [self.n1, self.n2, self.n3]:\n            self.server.expect(VNODE, attr1, id=vn)\n        # job's PBS_NODEFILE contents should match exec_host\n        ret = self.server.du.run_cmd(self.h0, cmd, sudo=False)\n        self.assertTrue(self.hostA in ret['out'])\n        self.assertFalse(self.hostB in ret['out'])\n        # Verify mom_logs\n        self.moms.values()[0].log_match(\n            \"Job;%s;%s.+cput=.+ mem=.+\" % (jid, self.n1), n=10, regexp=True)\n        self.moms.values()[1].log_match(\n            \"Job;%s;DELETE_JOB2 received\" % (jid,), n=20)\n        # Check account update ('u') record\n        msg0 = \".*%s;%s.*exec_host=%s\" % ('u', jid, self.job1_exec_host_esc)\n        msg1 = \".*exec_vnode=%s\" % self.job1_exec_vnode_esc\n        msg2 = r\".*Resource_List\\.mem=%s\" % '3gb'\n        msg3 = r\".*Resource_List\\.ncpus=%d\" % 9\n        msg4 = r\".*Resource_List\\.place=%s\" % self.job1_place\n        msg5 = r\".*Resource_List\\.select=%s.*\" % self.job1_sel_esc\n        msg = msg0 + msg1 + msg2 + msg3 + msg4 + msg5\n        self.server.accounting_match(msg=msg, regexp=True, n=\"ALL\",\n                                     starttime=stime)\n        # Check to make sure 'c' (next) record got generated\n        msg0 = \".*%s;%s.*exec_host=%s\" % ('c', jid, 
self.job1_new_exec_host)\n        msg1 = \".*exec_vnode=%s\" % self.job1_new_exec_vnode_esc\n        msg2 = r\".*Resource_List\\.mem=%s\" % '1048576kb'\n        msg3 = r\".*Resource_List\\.ncpus=%d\" % 1\n        msg4 = r\".*Resource_List\\.place=%s\" % self.job1_place\n        msg5 = r\".*Resource_List\\.select=%s.*\" % self.job1_newsel\n        msg = msg0 + msg1 + msg2 + msg3 + msg4 + msg5\n        self.server.accounting_match(msg=msg, regexp=True, n=\"ALL\",\n                                     starttime=stime)\n"
  },
  {
    "path": "test/tests/functional/pbs_cray_check_node_exclusivity.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\n@tags('cray', 'reservation')\nclass TestCheckNodeExclusivity(TestFunctional):\n\n    \"\"\"\n    Test suite for reservation. 
This test suite checks\n    node exclusivity when a reservation asks for it.\n    Adapted for a Cray configuration.\n    \"\"\"\n    ncpus = None\n    vnode = None\n\n    def setUp(self):\n        if not self.du.get_platform().startswith('cray'):\n            self.skipTest(\"Test suite only meant to run on a Cray\")\n        self.script = []\n        self.script += ['echo Hello World\\n']\n        self.script += ['aprun -b -B /bin/sleep 10']\n\n        TestFunctional.setUp(self)\n\n    def submit_and_confirm_resv(self, a=None, index=None):\n        \"\"\"\n        Common helper to submit a reservation\n        and verify that it gets confirmed\n        \"\"\"\n        r = Reservation(TEST_USER, attrs=a)\n        rid = self.server.submit(r)\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        if index is not None:\n            a['reserve_index'] = index\n        self.server.expect(RESV, a, id=rid)\n        return rid\n\n    def get_vnode_ncpus_value(self):\n        all_nodes = self.server.status(NODE)\n        for n in all_nodes:\n            if n['resources_available.vntype'] == 'cray_compute':\n                self.ncpus = n['resources_available.ncpus']\n                self.vnode = n['resources_available.vnode']\n                break\n\n    def test_node_state_with_advance_resv(self):\n        \"\"\"\n        Test that the node state changes when a reservation\n        asks for exclusivity.\n        \"\"\"\n        # Submit a reservation with place=excl\n        start_time = time.time()\n        now = int(start_time)\n        a = {'Resource_List.select': '1:ncpus=1:vntype=cray_compute',\n             'Resource_List.place': 'excl', 'reserve_start': now + 30,\n             'reserve_end': now + 60}\n        rid = self.submit_and_confirm_resv(a)\n        rid_q = rid.split('.')[0]\n        self.server.status(RESV, 'resv_nodes', id=rid)\n        resv_node = self.server.reservations[rid].get_vnodes()[0]\n        self.server.expect(NODE, 
{'state': 'free'},\n                           id=resv_node)\n        self.server.restart()\n        self.server.expect(NODE, {'state': 'free'},\n                           id=resv_node)\n\n        self.logger.info('Waiting 20s for reservation to start')\n        a = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        self.server.expect(RESV, a, id=rid, offset=20)\n        self.server.expect(NODE, {'state': 'resv-exclusive'},\n                           id=resv_node)\n        # Wait for reservation to delete from server\n        msg = \"Que;\" + rid_q + \";deleted at request of pbs_server@\"\n        self.server.log_match(msg, starttime=start_time, interval=10)\n        self.server.expect(NODE, {'state': 'free'},\n                           id=resv_node)\n\n    def test_node_state_with_standing_resv(self):\n        \"\"\"\n        Test node state will change when reservation\n        asks for exclusivity.\n        \"\"\"\n        if 'PBS_TZID' in self.conf:\n            tzone = self.conf['PBS_TZID']\n        elif 'PBS_TZID' in os.environ:\n            tzone = os.environ['PBS_TZID']\n        else:\n            self.logger.info('Missing timezone, using America/Los_Angeles')\n            tzone = 'America/Los_Angeles'\n        # Submit a standing reservation to occur every other minute for a\n        # total count of 2\n        start = time.time() + 20\n        now = start + 20\n        start = int(start)\n        end = int(now)\n        a = {'Resource_List.select': '1:ncpus=1:vntype=cray_compute',\n             'Resource_List.place': 'excl',\n             ATTR_resv_rrule: 'FREQ=MINUTELY;COUNT=2',\n             ATTR_resv_timezone: tzone,\n             'reserve_start': start,\n             'reserve_end': end,\n             }\n        rid = self.submit_and_confirm_resv(a, 1)\n        rid_q = rid.split(\".\")[0]\n        self.server.status(RESV, 'resv_nodes', id=rid)\n        resv_node = self.server.reservations[rid].get_vnodes()[0]\n        
self.server.expect(NODE, {'state': 'free'},\n                           id=resv_node)\n        self.logger.info('Waiting 10s for reservation to start')\n        a = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\"),\n             'reserve_index': 1}\n        self.server.expect(RESV, a, id=rid, offset=10)\n        self.server.expect(NODE, {'state': 'resv-exclusive'},\n                           id=resv_node)\n        # Wait for the standing reservation's first instance to finish\n        self.logger.info(\n            'Waiting 20 sec for first instance of reservation to finish')\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\"),\n                    'reserve_index': 2}\n        self.server.expect(RESV, exp_attr, id=rid, offset=20)\n        # Node state of the nodes in resv_nodes should be free\n        self.server.expect(NODE, {'state': 'free'},\n                           id=resv_node)\n        # Wait for the standing reservation's second instance to start\n        self.logger.info(\n            'Waiting 40 sec for second instance of reservation to start')\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\"),\n                    'reserve_index': 2}\n        self.server.expect(RESV, exp_attr, id=rid, offset=40, interval=1)\n        # Check the node state of the nodes in resv_nodes\n        self.server.expect(NODE, {'state': 'resv-exclusive'},\n                           id=resv_node)\n        # Wait for the reservation to finish\n        msg = \"Que;\" + rid_q + \";deleted at request of pbs_server@\"\n        self.server.log_match(msg, starttime=now, interval=2)\n        self.server.expect(NODE, {'state': 'free'},\n                           id=resv_node)\n\n    def test_job_outside_resv_not_allowed(self):\n        \"\"\"\n        Test that a job outside the reservation is not allowed\n        to run if the reservation has place=excl.\n        \"\"\"\n        # Submit a reservation with place=excl\n        start_time = time.time()\n        now 
= int(start_time)\n        a = {'Resource_List.select': '1:ncpus=1:vntype=cray_compute',\n             'Resource_List.place': 'excl', 'reserve_start': now + 20,\n             'reserve_end': now + 30}\n        rid = self.submit_and_confirm_resv(a)\n        rid_q = rid.split('.')[0]\n        self.server.status(RESV, 'resv_nodes', id=rid)\n        resv_node = self.server.reservations[rid].get_vnodes()[0]\n        self.server.expect(NODE, {'state': 'free'},\n                           id=resv_node)\n        self.logger.info('Waiting 20s for reservation to start')\n        a = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        self.server.expect(RESV, a, id=rid, offset=20)\n        self.server.expect(NODE, {'state': 'resv-exclusive'},\n                           id=resv_node)\n        # Submit a job outside the reservation requesting resv_nodes\n        submit_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        a = {ATTR_q: 'workq', ATTR_l + '.select': '1:vnode=%s' % resv_node}\n        j1 = Job(TEST_USER, attrs=a)\n        j1.create_script(self.script)\n        jid1 = self.server.submit(j1, submit_dir=submit_dir)\n        comment = 'Not Running: Insufficient amount of resource: vnode'\n        self.server.expect(\n            JOB, {'job_state': 'Q', 'comment': comment}, id=jid1)\n        # Wait for the reservation to end and verify that the node state\n        # changes to job-exclusive\n        msg = \"Que;\" + rid_q + \";deleted at request of pbs_server@\"\n        self.server.log_match(msg, starttime=start_time, interval=2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(NODE, {'state': 'job-exclusive'},\n                           id=resv_node)\n\n    def test_conflict_reservation_on_resv_exclusive_node(self):\n        \"\"\"\n        Test that no other reservation gets confirmed (in the duration)\n        when a node has an exclusive reservation confirmed on it.\n        Reservation2 is inside the duration of the confirmed 
reservation\n        requesting the same vnode as Reservation1.\n        \"\"\"\n        # Submit a reservation with place=excl\n        start_time = time.time()\n        now = int(start_time)\n        a = {'Resource_List.select': '1:ncpus=1:vntype=cray_compute',\n             'Resource_List.place': 'excl', 'reserve_start': now + 20,\n             'reserve_end': now + 60}\n        rid = self.submit_and_confirm_resv(a)\n        self.server.status(RESV, 'resv_nodes', id=rid)\n        resv_node = self.server.reservations[rid].get_vnodes()[0]\n        self.logger.info('Waiting 20s for reservation to start')\n        a = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        self.server.expect(RESV, a, id=rid, offset=20)\n        self.server.expect(NODE, {'state': 'resv-exclusive'},\n                           id=resv_node)\n        # Submit another reservation requesting the vnode in resv_node\n        a = {ATTR_l + '.select': '1:ncpus=1:vnode=%s' % resv_node,\n             'reserve_start': now + 25,\n             'reserve_end': now + 30}\n        r = Reservation(TEST_USER, attrs=a)\n        rid2 = self.server.submit(r)\n        msg = \"Resv;\" + rid2 + \";Reservation denied\"\n        self.server.log_match(msg, starttime=start_time, interval=2)\n        msg2 = \"Resv;\" + rid2 + \";reservation deleted\"\n        self.server.log_match(msg2, starttime=now, interval=2)\n        msg3 = \"Resv;\" + rid2 + \";PBS Failed to confirm resv: Insufficient \"\n        msg3 += \"amount of resource: vnode\"\n        self.scheduler.log_match(msg3, starttime=now, interval=2)\n\n    def test_node_exclusivity_with_multinode_reservation(self):\n        \"\"\"\n        Test that jobs run correctly in a multinode reservation\n        and update node exclusivity accordingly.\n        \"\"\"\n        self.get_vnode_ncpus_value()\n        # Submit a reservation with place=excl\n        now = int(time.time())\n        a = {ATTR_l + '.select': '2:ncpus=%d' % (int(self.ncpus)),\n             
'Resource_List.place': 'excl', 'reserve_start': now + 10,\n             'reserve_end': now + 1600}\n        rid = self.submit_and_confirm_resv(a)\n        rid_q = rid.split(\".\")[0]\n        self.server.status(RESV, 'resv_nodes', id=rid)\n        resv_node = self.server.reservations[rid].get_vnodes()\n        self.logger.info('Waiting 10s for reservation to start')\n        a = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        self.server.expect(RESV, a, id=rid, offset=10)\n        self.server.expect(NODE, {'state': 'resv-exclusive'},\n                           id=resv_node[0])\n        self.server.expect(NODE, {'state': 'resv-exclusive'},\n                           id=resv_node[1])\n        # Submit a job inside the reservation\n        submit_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        a = {ATTR_q: rid_q, ATTR_l + '.select': '1:ncpus=1',\n             'Resource_List.place': 'shared'}\n        j1 = Job(TEST_USER, attrs=a)\n        j1.create_script(self.script)\n        jid1 = self.server.submit(j1, submit_dir=submit_dir)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(NODE, {'state': 'job-exclusive,resv-exclusive'},\n                           id=resv_node[0])\n        # Submit another job inside the reservation\n        submit_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        a = {ATTR_q: rid_q, ATTR_l + '.select': '2:ncpus=1',\n             'Resource_List.place': 'shared'}\n        j2 = Job(TEST_USER, attrs=a)\n        j2.create_script(self.script)\n        jid2 = self.server.submit(j2, submit_dir=submit_dir)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.server.expect(NODE, {'state': 'job-exclusive,resv-exclusive'},\n                           id=resv_node[0])\n\n    def test_multiple_reservation_request_exclusive_placement(self):\n        \"\"\"\n        Test Multiple reservations requesting exclusive placement\n        are confirmed when not overlapping in 
time.\n        \"\"\"\n        self.get_vnode_ncpus_value()\n        # Submit a reservation with place=excl\n        now = int(time.time())\n        a = {ATTR_l + '.select': '1:ncpus=1:vnode=%s' % self.vnode,\n             'Resource_List.place': 'excl', 'reserve_start': now + 10,\n             'reserve_duration': 3600}\n        rid = self.submit_and_confirm_resv(a)\n        self.server.status(RESV, 'resv_nodes', id=rid)\n        resv_node = self.server.reservations[rid].get_vnodes()[0]\n        # Submit a non-overlapping reservation requesting place=excl\n        a = {ATTR_l + '.select': '1:ncpus=1:vnode=%s' % resv_node,\n             'Resource_List.place': 'excl',\n             'reserve_start': now + 7200,\n             'reserve_duration': 3600}\n        self.submit_and_confirm_resv(a)\n\n    def test_delete_future_resv_not_affect_node_state(self):\n        \"\"\"\n        Test (Advance Reservation) Multiple reservations requesting exclusive\n        placement are confirmed when not overlapping.\n        Deleting the latter reservation after the earlier one starts running\n        leaves the node in state resv-exclusive.\n        \"\"\"\n        self.get_vnode_ncpus_value()\n        # Submit a reservation with place=excl\n        now = int(time.time())\n        a = {ATTR_l + '.select': '1:ncpus=1:vnode=%s' % self.vnode,\n             'Resource_List.place': 'excl', 'reserve_start': now + 10,\n             'reserve_duration': 3600}\n        rid = self.submit_and_confirm_resv(a)\n        self.server.status(RESV, 'resv_nodes', id=rid)\n        resv_node = self.server.reservations[rid].get_vnodes()[0]\n        # Submit a non-overlapping reservation requesting place=excl\n        # on vnode in resv_node\n        a = {ATTR_l + '.select': '1:ncpus=1:vnode=%s' % resv_node,\n             'Resource_List.place': 'excl',\n             'reserve_start': now + 7200,\n             'reserve_duration': 3600}\n        rid2 = self.submit_and_confirm_resv(a)\n        self.logger.info('Waiting 
10s for reservation to start')\n        a = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        self.server.expect(RESV, a, id=rid, offset=10)\n        self.server.expect(NODE, {'state': 'resv-exclusive'},\n                           id=resv_node)\n        # Delete future reservation rid2 and verify that the resv node\n        # is still in state resv-exclusive\n        self.server.delete(rid2)\n        self.server.expect(NODE, {'state': 'resv-exclusive'},\n                           id=resv_node)\n\n    def test_delete_future_standing_resv_not_affect_node_state(self):\n        \"\"\"\n        Test (Standing Reservation) Multiple reservations requesting exclusive\n        placement are confirmed when not overlapping.\n        Deleting the latter reservation after the earlier one starts running\n        leaves the node in state resv-exclusive.\n        \"\"\"\n        self.get_vnode_ncpus_value()\n\n        if 'PBS_TZID' in self.conf:\n            tzone = self.conf['PBS_TZID']\n        elif 'PBS_TZID' in os.environ:\n            tzone = os.environ['PBS_TZID']\n        else:\n            self.logger.info('Missing timezone, using America/Los_Angeles')\n            tzone = 'America/Los_Angeles'\n        # Submit a standing reservation with place=excl\n        now = int(time.time())\n        a = {ATTR_l + '.select': '1:ncpus=1:vnode=%s' % self.vnode,\n             'Resource_List.place': 'excl',\n             ATTR_resv_rrule: 'FREQ=HOURLY;COUNT=2',\n             ATTR_resv_timezone: tzone,\n             'reserve_start': now + 10,\n             'reserve_end': now + 3100}\n        rid = self.submit_and_confirm_resv(a)\n        self.server.status(RESV, 'resv_nodes', id=rid)\n        resv_node = self.server.reservations[rid].get_vnodes()[0]\n        # Submit a non-overlapping reservation requesting place=excl\n        # on vnode in resv_node\n        a = {ATTR_l + '.select': '1:ncpus=1:vnode=%s' % resv_node,\n             'Resource_List.place': 'excl',\n             
ATTR_resv_rrule: 'FREQ=HOURLY;COUNT=2',\n             ATTR_resv_timezone: tzone,\n             'reserve_start': now + 7200,\n             'reserve_end': now + 10800}\n        rid2 = self.submit_and_confirm_resv(a)\n        self.logger.info('Waiting 10s for reservation to start')\n        a = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        self.server.expect(RESV, a, id=rid, offset=10)\n        self.server.expect(NODE, {'state': 'resv-exclusive'},\n                           id=resv_node)\n        # Delete future reservation rid2 and verify that resv node\n        # is still in state resv-exclusive\n        self.server.delete(rid2)\n        self.server.expect(NODE, {'state': 'resv-exclusive'},\n                           id=resv_node)\n\n    def test_job_inside_exclusive_reservation(self):\n        \"\"\"\n        Test Job will run correctly inside the exclusive\n        reservation\n        \"\"\"\n        self.script2 = []\n        self.script2 += ['echo Hello World\\n']\n        self.script2 += ['/bin/sleep 10']\n\n        # Submit a reservation with place=excl\n        start_time = time.time()\n        now = int(start_time)\n        a = {'Resource_List.select': '1:ncpus=1:vntype=cray_login',\n             'Resource_List.place': 'excl', 'reserve_start': now + 20,\n             'reserve_end': now + 40}\n        rid = self.submit_and_confirm_resv(a)\n        rid_q = rid.split('.')[0]\n        self.server.status(RESV, 'resv_nodes', id=rid)\n        resv_node = self.server.reservations[rid].get_vnodes()[0]\n        self.server.expect(NODE, {'state': 'free'},\n                           id=resv_node)\n        self.logger.info('Waiting 20s for reservation to start')\n        a = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        self.server.expect(RESV, a, id=rid, offset=20)\n        self.server.expect(NODE, {'state': 'resv-exclusive'},\n                           id=resv_node)\n        # Submit a job inside the reservation\n        submit_dir = 
self.du.create_temp_dir(asuser=TEST_USER)\n        a = {ATTR_q: rid_q, ATTR_l + '.select': '1:ncpus=1:vntype=cray_login',\n             'Resource_List.place': 'excl'}\n        j1 = Job(TEST_USER, attrs=a)\n        j1.create_script(self.script2)\n        jid1 = self.server.submit(j1, submit_dir=submit_dir)\n        self.server.expect(\n            JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(NODE, {'state': 'job-exclusive,resv-exclusive'},\n                           id=resv_node)\n        # wait 5 sec for job to end\n        self.server.expect(NODE, {'state': 'resv-exclusive'},\n                           id=resv_node, offset=5, interval=10)\n        # Wait for reservation to end and verify node state\n        # changed to free\n        msg = \"Que;\" + rid_q + \";deleted at request of pbs_server@\"\n        self.server.log_match(msg, starttime=start_time, interval=2)\n        self.server.expect(NODE, {'state': 'free'},\n                           id=resv_node)\n\n        #  Test Job will run correctly inside the exclusive\n        #  standing reservation requesting compute_node\n        if 'PBS_TZID' in self.conf:\n            tzone = self.conf['PBS_TZID']\n        elif 'PBS_TZID' in os.environ:\n            tzone = os.environ['PBS_TZID']\n        else:\n            self.logger.info('Missing timezone, using America/Los_Angeles')\n            tzone = 'America/Los_Angeles'\n        # Submit a standing reservation with place=excl\n        now = int(time.time())\n        a = {ATTR_l + '.select': '1:ncpus=1:vntype=cray_compute',\n             'Resource_List.place': 'excl',\n             ATTR_resv_rrule: 'FREQ=HOURLY;COUNT=1',\n             ATTR_resv_timezone: tzone,\n             'reserve_start': now + 10,\n             'reserve_end': now + 300}\n        rid = self.submit_and_confirm_resv(a)\n        rid_q = rid.split('.')[0]\n        self.server.status(RESV, 'resv_nodes', id=rid)\n        resv_node = self.server.reservations[rid].get_vnodes()[0]\n     
   self.server.expect(NODE, {'state': 'free'}, id=resv_node)\n        self.logger.info('Waiting 10s for reservation to start')\n        a = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        self.server.expect(RESV, a, id=rid, offset=10)\n        self.server.expect(NODE, {'state': 'resv-exclusive'},\n                           id=resv_node)\n        # Submit a job inside the reservation\n        submit_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        a = {ATTR_q: rid_q}\n        j1 = Job(TEST_USER, attrs=a)\n        j1.create_script(self.script)\n        jid1 = self.server.submit(j1, submit_dir=submit_dir)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(NODE, {'state': 'job-exclusive,resv-exclusive'},\n                           id=resv_node)\n        # wait 5 sec for job to end\n        self.server.expect(NODE, {'state': 'resv-exclusive'},\n                           id=resv_node, offset=5, interval=10)\n\n    def test_reservation_request_node_ignore_excl(self):\n        \"\"\"\n        Test that a reservation asking for place=excl\n        does not put the node into state resv-exclusive\n        if the node has sharing=ignore_excl set on it.\n        \"\"\"\n\n        a = {'sharing': 'ignore_excl'}\n        self.mom.create_vnodes(a, 1,\n                               createnode=False,\n                               delall=False, usenatvnode=True)\n        self.server.expect(NODE, {'state': 'free',\n                                  'sharing': 'ignore_excl'},\n                           id=self.mom.shortname)\n        # Submit a reservation\n        now = int(time.time())\n        a = {'Resource_List.select': '1:ncpus=1:vntype=cray_login',\n             'Resource_List.place': 'excl', 'reserve_start': now + 20,\n             'reserve_end': now + 40}\n        rid = self.submit_and_confirm_resv(a)\n        self.server.status(RESV, 'resv_nodes', id=rid)\n        resv_node = self.server.reservations[rid].get_vnodes()[0]\n        # Wait for reservation to 
start and verify\n        # node state should not be resv-exclusive\n        self.logger.info('Waiting 10s for reservation to start')\n        a = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        self.server.expect(RESV, a, id=rid, offset=10)\n        self.server.expect(NODE, {'state': 'resv-exclusive'},\n                           id=resv_node, op=NE)\n\n    def test_multijob_on_resv_exclusive_node(self):\n        \"\"\"\n        Test multiple job requests inside a reservation\n        when none of the node, reservation, or job asks for exclusivity\n        \"\"\"\n        now = int(time.time())\n        a = {'Resource_List.select': '1:ncpus=2:vntype=cray_compute',\n             'Resource_List.place': 'shared', 'reserve_start': now + 20,\n             'reserve_end': now + 40}\n        rid = self.submit_and_confirm_resv(a)\n        rid_q = rid.split('.')[0]\n        self.server.status(RESV, 'resv_nodes', id=rid)\n        resv_node = self.server.reservations[rid].get_vnodes()[0]\n        self.logger.info('Waiting for reservation to start')\n        a = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        self.server.expect(RESV, a, id=rid, offset=10)\n        self.server.expect(NODE, {'state': 'resv-exclusive'}, id=resv_node)\n        a = {ATTR_q: rid_q}\n        j1 = Job(TEST_USER, attrs=a)\n        j1.create_script(self.script)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(NODE, {'state': 'job-exclusive,resv-exclusive'},\n                           id=resv_node)\n        j2 = Job(TEST_USER, attrs=a)\n        j2.create_script(self.script)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n\n    def test_job_with_exclusive_placement(self):\n        \"\"\"\n        Job will honour exclusivity inside the reservation\n        \"\"\"\n        now = int(time.time())\n        a = {'Resource_List.select': 
'1:ncpus=2:vntype=cray_compute',\n             'Resource_List.place': 'excl', 'reserve_start': now + 20,\n             'reserve_end': now + 40}\n        rid = self.submit_and_confirm_resv(a)\n        rid_q = rid.split('.')[0]\n        self.logger.info('Waiting for reservation to start')\n        a = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        self.server.expect(RESV, a, id=rid, offset=10)\n        a = {ATTR_q: rid_q, ATTR_l + '.select': '1:ncpus=1',\n             'Resource_List.place': 'excl'}\n        j1 = Job(TEST_USER, attrs=a)\n        j1.create_script(self.script)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        a = {ATTR_q: rid_q, ATTR_l + '.select': '1:ncpus=1',\n             'Resource_List.place': 'shared'}\n        j2 = Job(TEST_USER, attrs=a)\n        j2.create_script(self.script)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid1, offset=5)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n\n    def test_job_running_on_multinode_reservation(self):\n        \"\"\"\n        Test submitting jobs to a multinode reservation with\n        different placements\n        \"\"\"\n        ncpus = []\n        vnodes = self.server.status(NODE)\n        # Collect ncpus of the first two cray_compute vnodes\n        for vnode in vnodes:\n            if vnode['resources_available.vntype'] == 'cray_compute':\n                ncpus.append(int(vnode['resources_available.ncpus']))\n                if len(ncpus) == 2:\n                    break\n        req_ncpus = min(ncpus[0] // 2, ncpus[1] // 2)\n        now = int(time.time())\n        a = {\n            'Resource_List.select': '2:ncpus=%d:vntype=cray_compute' % min(\n                ncpus[0], ncpus[1]),\n            'Resource_List.place': 'excl',\n            
'reserve_start': now + 20,\n            'reserve_end': now + 60}\n        rid = self.submit_and_confirm_resv(a)\n        rid_q = rid.split('.')[0]\n        self.logger.info('Waiting for reservation to start')\n        a = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        self.server.expect(RESV, a, id=rid, offset=20)\n        a = {ATTR_q: rid_q, ATTR_l + '.select': '2:ncpus=%d' % req_ncpus,\n             'Resource_List.place': 'scatter'}\n        j1 = Job(TEST_USER, attrs=a)\n        j1.create_script(self.script)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        a = {ATTR_q: rid_q, ATTR_l + '.select': '1:ncpus=%d' % ncpus[0],\n             'Resource_List.place': 'excl'}\n        j2 = Job(TEST_USER, attrs=a)\n        j2.create_script(self.script)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n        a = {ATTR_q: rid_q, ATTR_l + '.select': '1:ncpus=%d' % ncpus[1],\n             'Resource_List.place': 'shared'}\n        j3 = Job(TEST_USER, attrs=a)\n        j3.create_script(self.script)\n        jid3 = self.server.submit(j3)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid3)\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid1, offset=5)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n\n    def test_job_with_exclhost_placement_inside_resv(self):\n        \"\"\"\n        Job inside a reservation asking for place=exclhost on host\n        will have all resources of the vnodes present on host assigned to it\n        \"\"\"\n        now = int(time.time())\n        a = {'Resource_List.select': '1:ncpus=2:vntype=cray_compute',\n             'Resource_List.place': 'exclhost', 'reserve_start': now + 20,\n             'reserve_end': now + 40}\n        rid = self.submit_and_confirm_resv(a)\n        rid_q = rid.split('.')[0]\n        self.server.status(RESV, 
'resv_nodes', id=rid)\n        resv_node = self.server.reservations[rid].get_vnodes()[0]\n        self.logger.info('Waiting for reservation to start')\n        a = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        self.server.expect(RESV, a, id=rid, offset=10)\n        self.server.expect(NODE, {'state': 'resv-exclusive'}, id=resv_node)\n        a = {ATTR_q: rid_q}\n        j1 = Job(TEST_USER, attrs=a)\n        j1.create_script(self.script)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(NODE, {'state': 'job-exclusive,resv-exclusive'},\n                           id=resv_node)\n        a = {ATTR_q: rid_q}\n        j2 = Job(TEST_USER, attrs=a)\n        j2.create_script(self.script)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid1, offset=10)\n        self.server.expect(RESV, 'queue', op=UNSET, id=rid, offset=10)\n        self.server.expect(NODE, {'state': 'free'}, id=resv_node)\n"
  },
  {
    "path": "test/tests/functional/pbs_cray_hyperthread.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\nfrom ptl.utils.pbs_crayutils import CrayUtils\nimport os\n\n\n@tags('cray')\nclass TestCrayHyperthread(TestFunctional):\n\n    \"\"\"\n    The test will submit a job script that calls aprun with the\n    option that will allow callers to use the hyperthreads\n    on a hyperthreaded compute node.\n    \"\"\"\n\n    def setUp(self):\n        if not self.du.get_platform().startswith('cray'):\n            self.skipTest(\"Test suite only meant to run on a Cray\")\n        TestFunctional.setUp(self)\n\n    def test_hyperthread(self):\n        \"\"\"\n        Check for a compute node that has hyperthreads, if there is one\n        submit a job to that node requesting the hyperthreads.  
Check\n        there are no errors in the job error output.\n        If there is no node with hyperthreads, skip the test.\n        \"\"\"\n        # Get the compute nodes from PBS and see if they are threaded\n        cu = CrayUtils()\n        all_nodes = self.server.status(NODE)\n        threaded = 0\n        for n in all_nodes:\n            if n['resources_available.vntype'] == 'cray_compute':\n                numthreads = cu.get_numthreads(\n                    n['resources_available.PBScraynid'])\n                if numthreads > 1:\n                    self.logger.info(\"Node %s has %s hyperthreads\" %\n                                     (n['resources_available.vnode'],\n                                      numthreads))\n                    ncpus = n['resources_available.ncpus']\n                    vnode = n['resources_available.vnode']\n                    threaded = 1\n                    break\n        if not threaded:\n            self.skipTest(\"Test suite needs nodes with hyperthreads\")\n\n        # There is a node with hyperthreads, get the number of cpus\n        aprun_args = '-j %d -n %d' % (int(numthreads), int(ncpus))\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'job_history_enable': 'True'})\n        j1 = Job(TEST_USER, {ATTR_l + '.select': '1:ncpus=%d:vnode=%s' %\n                             (int(ncpus), vnode),\n                             ATTR_N: 'hyperthread'})\n\n        scr = []\n        scr += ['hostname\\n']\n        scr += ['/bin/sleep 5\\n']\n        scr += ['aprun -b %s /bin/hostname\\n' % aprun_args]\n\n        sub_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        j1.create_script(scr)\n        jid1 = self.server.submit(j1, submit_dir=sub_dir)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        # Verify the contents of the output/error files\n        self.server.expect(JOB, {'job_state': 'F'}, id=jid1, extend='x')\n        error_file = os.path.join(\n            
sub_dir, 'hyperthread.e' + jid1.split('.')[0])\n        self.assertEqual(os.stat(error_file).st_size, 0,\n                         msg=\"Job error file should be empty\")\n"
  },
  {
    "path": "test/tests/functional/pbs_cray_pagg_id.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\n@tags('cray')\nclass TestCrayPaggIdUniqueness(TestFunctional):\n    \"\"\"\n    This test suite is written to verify that the PAGG ID provided to ALPS\n    while confirming and releasing an ALPS reservation is not equal to the\n    session ID of the job.\n    This test is specific to Cray and will also not work on the Cray simulator,\n    hence, will be skipped on non-Cray systems and Cray simulator.\n    \"\"\"\n    def setUp(self):\n        platform = self.du.get_platform()\n        if platform != 'cray':\n            self.skipTest(\"not a cray\")\n        TestFunctional.setUp(self)\n\n    def test_pagg_id(self):\n        \"\"\"\n        This test case submits a job, waits for it to run and then checks\n        the MoM logs to confirm that the PAGG ID provided in the ALPS\n        query is not equal to the session ID of the job.\n        \"\"\"\n        j1 = Job(TEST_USER)\n        jid = self.server.submit(j1)\n\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n        self.mom.log_match(\"Job;%s;Started, pid\" % (jid,), n=100,\n                           max_attempts=5, interval=5, regexp=True)\n\n        self.server.status(JOB, [ATTR_session], jid)\n        sess_id = j1.attributes[ATTR_session]\n\n        msg = \"pagg_id =\\\"\" + sess_id + \"\\\"\"\n        try:\n            self.mom.log_match(msg, 
n='ALL')\n        except PtlLogMatchError:\n            self.logger.info(\"pagg_id is not equal to session id, test passes\")\n        else:\n            self.fail(\"pagg_id is equal to session id, test fails.\")\n"
  },
  {
    "path": "test/tests/functional/pbs_cray_reliable_job_startup.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport time\nimport fnmatch\nfrom tests.functional import *\nfrom ptl.utils.pbs_logutils import PBSLogUtils\n\n\n@tags('cray')\nclass TestPbsReliableJobStartupOnCray(TestFunctional):\n\n    \"\"\"\n    This tests the Reliable Job Startup Feature on Cray.\n    A job can be started with extra nodes with node failures tolerated\n    during job start but setting is not supported and ignored on Cray.\n    \"\"\"\n\n    def setUp(self):\n        if not self.du.get_platform().startswith('cray'):\n            self.skipTest(\"Test suite only meant to run on a Cray\")\n        TestFunctional.setUp(self)\n\n        # queuejob hook\n        self.qjob_hook_body = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"queuejob hook executed\")\n# Save current select spec in resource 'site'\ne.job.Resource_List[\"site\"] = str(e.job.Resource_List[\"select\"])\nnew_select = e.job.Resource_List[\"select\"].increment_chunks(1)\ne.job.Resource_List[\"select\"] = new_select\ne.job.tolerate_node_failures = \"job_start\"\n\"\"\"\n\n        # prologue hook\n        self.prolo_hook_body = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"Executing prologue\")\n# print out the vnode_list[] values\nfor vn in e.vnode_list:\n    v = e.vnode_list[vn]\n    pbs.logjobmsg(e.job.id, \"prologue: found vnode_list[\" + v.name + \"]\")\n# print out the vnode_list_fail[] 
values\nfor vn in e.vnode_list_fail:\n    v = e.vnode_list_fail[vn]\n    pbs.logjobmsg(e.job.id, \"prologue: found vnode_list_fail[\" + v.name + \"]\")\nif e.job.in_ms_mom():\n    pj = e.job.release_nodes(keep_select=e.job.Resource_List[\"site\"])\n    if pj is None:\n        e.job.Hold_Types = pbs.hold_types(\"s\")\n        e.job.rerun()\n        e.reject(\"unsuccessful at PROLOGUE\")\n\"\"\"\n\n        # launch hook\n        self.launch_hook_body = \"\"\"\nimport pbs\ne=pbs.event()\nif 'PBS_NODEFILE' not in e.env:\n    e.accept()\npbs.logmsg(pbs.LOG_DEBUG, \"Executing launch\")\n# print out the vnode_list[] values\nfor vn in e.vnode_list:\n    v = e.vnode_list[vn]\n    pbs.logjobmsg(e.job.id, \"launch: found vnode_list[\" + v.name + \"]\")\n# print out the vnode_list_fail[] values:\nfor vn in e.vnode_list_fail:\n    v = e.vnode_list_fail[vn]\n    pbs.logjobmsg(e.job.id, \"launch: found vnode_list_fail[\" + v.name + \"]\")\n    v.state = pbs.ND_OFFLINE\nif e.job.in_ms_mom():\n    pj = e.job.release_nodes(keep_select=e.job.Resource_List[\"site\"])\n    if pj is None:\n        e.job.Hold_Types = pbs.hold_types(\"s\")\n        e.job.rerun()\n        e.reject(\"unsuccessful at LAUNCH\")\n\"\"\"\n\n    def match_str_in_input_file(self, file_path, file_pattern, search_str):\n        \"\"\"\n        Assert that search string appears in the input file\n        that matches file_pattern\n        \"\"\"\n        input_file = None\n        for item in self.du.listdir(path=file_path, sudo=True):\n            if fnmatch.fnmatch(item, file_pattern):\n                input_file = item\n                break\n        self.assertTrue(input_file is not None)\n        with PBSLogUtils().open_log(input_file, sudo=True) as f:\n            self.assertTrue(search_str in f.read().decode())\n            self.logger.info(\"Found \\\"%s\\\" in %s\" % (search_str, input_file))\n\n    @tags('cray')\n    def test_reliable_job_startup_not_supported_on_cray(self):\n        \"\"\"\n        A job 
is started with extra nodes. Mom superior will show no sign\n        of tolerating node failure.  Accounting logs won't have 's' record.\n        Input files to prologue and launch hooks will show the\n        tolerate_node_failures=none value.\n        \"\"\"\n        # instantiate queuejob hook\n        hook_event = 'queuejob'\n        hook_name = 'qjob'\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.qjob_hook_body)\n\n        # instantiate execjob_prologue hook\n        hook_event = 'execjob_prologue'\n        hook_name = 'prolo'\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.prolo_hook_body)\n\n        # instantiate execjob_launch hook\n        hook_event = 'execjob_launch'\n        hook_name = 'launch'\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.launch_hook_body)\n\n        # Submit a job\n        j = Job(TEST_USER, {ATTR_l + '.select': '1:ncpus=3:mem=2gb:vntype=' +\n                            'cray_compute+1:ncpus=3:mem=2gb:vntype=' +\n                            'cray_compute',\n                            ATTR_l + '.place': 'scatter'})\n        start_time = time.time()\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n        # Check for msg in mom superior logs\n        msg = \"no nodes released as job does not tolerate node failures\"\n        self.server.expect(JOB, 'exec_host', id=jid, op=SET)\n        job_stat = self.server.status(JOB, id=jid)\n        exechost = job_stat[0]['exec_host'].partition('/')[0]\n        mom_superior = self.moms[exechost]\n        mom_superior.log_match(msg, starttime=start_time)\n\n        # Check that 's' record is absent since release_nodes() was not called\n        self.server.accounting_match(\n            msg=\".*%s;%s;.*\" % ('s', jid),\n            regexp=True, n=50, 
max_attempts=10, existence=False)\n        self.logger.info(\n            \"There was no 's' record found for job %s, test passes\" % jid)\n\n        # On mom superior check the input files to prologue and launch hooks\n        # showed the tolerate_node_failures=none value\n        search_str = 'pbs.event().job.tolerate_node_failures=none'\n        self.mom_hooks_tmp_dir = os.path.join(\n            self.server.pbs_conf['PBS_HOME'], 'mom_priv', 'hooks', 'tmp')\n\n        hook_name = 'prolo'\n        input_file_pattern = os.path.join(\n            self.mom_hooks_tmp_dir, 'hook_execjob_prologue_%s*.in' % hook_name)\n        self.match_str_in_input_file(\n            self.mom_hooks_tmp_dir, input_file_pattern, search_str)\n\n        hook_name = 'launch'\n        input_file_pattern = os.path.join(\n            self.mom_hooks_tmp_dir, 'hook_execjob_launch_%s*.in' % hook_name)\n        self.match_str_in_input_file(\n            self.mom_hooks_tmp_dir, input_file_pattern, search_str)\n"
  },
  {
    "path": "test/tests/functional/pbs_cray_smoketest.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\nfrom ptl.utils.pbs_crayutils import CrayUtils\nimport os\n\n\n@tags('cray', 'smoke')\nclass TestCraySmokeTest(TestFunctional):\n\n    \"\"\"\n    Set of tests that qualifies as smoketest for Cray platform\n    \"\"\"\n\n    def setUp(self):\n        if not self.du.get_platform().startswith('cray'):\n            self.skipTest(\"Test suite only meant to run on a Cray\")\n        TestFunctional.setUp(self)\n\n        # no node in 'resv' and 'use' in apstat\n        cu = CrayUtils()\n        self.assertEqual(cu.count_node_summ('resv'), 0,\n                         \"No compute node should be having ALPS reservation\")\n        self.assertEqual(cu.count_node_summ('use'), 0,\n                         \"No compute node should be in use\")\n\n        # The number of compute nodes in State up and batch mode\n        # (State = 'UP  B') should equal the number of cray_compute nodes.\n        nodes_up_b = cu.count_node_state('UP  B')\n        self.logger.info(\"Nodes with State 'UP  B' : %s\" % nodes_up_b)\n        nodes_up_i = cu.count_node_state('UP  I')\n        self.logger.info(\"Nodes with State 'UP  I' : %s\" % nodes_up_i)\n        nodes = self.server.filter(NODE,\n                                   {ATTR_rescavail + '.vntype':\n                                    'cray_compute'})\n        num_cray_compute = 
len(nodes[ATTR_rescavail + '.vntype=cray_compute'])\n        self.assertEqual(nodes_up_b, num_cray_compute)\n        self.logger.info(\"nodes in State 'UP  B': %s == cray_compute: %s\" %\n                         (nodes_up_b, num_cray_compute))\n\n        # nodes are free and resources are available.\n        nodes = self.server.status(NODE)\n        for node in nodes:\n            self.assertEqual(node['state'], 'free')\n            self.assertEqual(node['resources_assigned.ncpus'], '0')\n            self.assertEqual(node['resources_assigned.mem'], '0kb')\n\n    @staticmethod\n    def find_hw(output_file):\n        \"\"\"\n        Find the string \"Hello World\" in the specified file.\n        Return 1 if found.\n        \"\"\"\n        found = 0\n        with open(output_file, 'r') as outf:\n            for line in outf:\n                if \"Hello World\" in line:\n                    found = 1\n                    break\n                else:\n                    continue\n        return found\n\n    @tags('cray', 'smoke')\n    def test_cray_login_job(self):\n        \"\"\"\n        Submit a simple sleep job that requests to run on a login node\n        and expect that job to go in running state on a login node.\n        Verify that the job runs to completion and check job output/error.\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'job_history_enable': 'True'})\n        j1 = Job(TEST_USER, {ATTR_l + '.vntype': 'cray_login',\n                             ATTR_N: 'cray_login'})\n\n        scr = []\n        scr += ['echo Hello World\\n']\n        scr += ['/bin/sleep 5\\n']\n\n        sub_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        j1.create_script(scr)\n        jid1 = self.server.submit(j1, submit_dir=sub_dir)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n        # fetch node name where the job is running and check that the\n        # node is a login node\n        
self.server.status(JOB, 'exec_vnode', id=jid1)\n        vname = j1.get_vnodes()[0]\n        self.server.expect(NODE, {ATTR_rescavail + '.vntype': 'cray_login'},\n                           id=vname, max_attempts=1)\n\n        cu = CrayUtils()\n        # Check that the number of compute nodes in use is 0\n        self.assertEqual(cu.count_node_summ('use'), 0)\n\n        # verify the contents of output/error files\n        self.server.expect(JOB, {'job_state': 'F'}, id=jid1, extend='x')\n        error_file = os.path.join(sub_dir, 'cray_login.e' + jid1.split('.')[0])\n        self.assertEqual(os.stat(error_file).st_size, 0,\n                         msg=\"Job error file should be empty\")\n\n        output_file = os.path.join(\n            sub_dir, 'cray_login.o' + jid1.split('.')[0])\n        foundhw = self.find_hw(output_file)\n        self.assertEqual(foundhw, 1, msg=\"Job output file incorrect\")\n\n    @tags('cray', 'smoke')\n    def test_cray_compute_job(self):\n        \"\"\"\n        Submit a simple sleep job that runs on a compute node and\n        expect the job to go in running state on a compute node.\n        Verify that the job runs to completion and check job output/error.\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'job_history_enable': 'True'})\n        j1 = Job(TEST_USER, {ATTR_l + '.vntype': 'cray_compute',\n                             ATTR_N: 'cray_compute'})\n\n        scr = []\n        scr += ['echo Hello World\\n']\n        scr += ['/bin/sleep 5\\n']\n        scr += ['aprun -b -B /bin/sleep 10\\n']\n\n        sub_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        j1.create_script(scr)\n        jid1 = self.server.submit(j1, submit_dir=sub_dir)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n        # fetch node name where the job is running and check that the\n        # node is a compute node\n        self.server.status(JOB, 'exec_vnode', id=jid1)\n        vname = j1.get_vnodes()[0]\n        self.server.expect(NODE, {ATTR_rescavail + '.vntype': 'cray_compute'},\n                           id=vname)\n        # Sleep for some time before aprun actually starts\n        # using the reservation\n        self.logger.info(\n            \"Sleeping 6 seconds before aprun starts using the reservation\")\n        time.sleep(6)\n\n        cu = CrayUtils()\n        # Check that the number of compute nodes with an ALPS reservation is 1\n        self.assertEqual(cu.count_node_summ('resv'), 1)\n        if self.du.get_platform() == 'cray':\n            # Cray simulator will not show anything in 'use' because\n            # aprun command is just a pass through on simulator\n            self.assertEqual(cu.count_node_summ('use'), 1)\n        # verify the contents of output/error files\n        self.server.expect(JOB, {'job_state': 'F'}, id=jid1, extend='x')\n        error_file = os.path.join(\n            sub_dir, 'cray_compute.e' + jid1.split('.')[0])\n        self.assertEqual(os.stat(error_file).st_size, 0,\n                         msg=\"Job error file should be empty\")\n\n        output_file = os.path.join(\n            sub_dir, 'cray_compute.o' + jid1.split('.')[0])\n        foundhw = self.find_hw(output_file)\n        self.assertEqual(foundhw, 1, msg=\"Job output file incorrect\")\n\n        (cu.node_status, cu.node_summary) = cu.parse_apstat_rn()\n        self.assertEqual(cu.count_node_summ('resv'), 0)\n        if self.du.get_platform() == 'cray':\n            self.assertEqual(cu.count_node_summ('use'), 0)\n"
  },
  {
    "path": "test/tests/functional/pbs_cray_suspend_resume.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport time\nfrom tests.functional import *\nfrom ptl.utils.pbs_crayutils import CrayUtils\n\n\n@tags('cray')\nclass TestSuspendResumeOnCray(TestFunctional):\n\n    \"\"\"\n    Test special cases where suspend/resume functionality differs on cray\n    as compared to other platforms.\n    This test suite expects the platform to be 'cray' and assumes that\n    suspend/resume feature is enabled on it.\n    \"\"\"\n    cu = CrayUtils()\n\n    def setUp(self):\n        if not self.du.get_platform().startswith('cray'):\n            self.skipTest(\"Test suite only meant to run on a Cray\")\n        TestFunctional.setUp(self)\n\n    @tags('cray', 'smoke')\n    def test_default_restrict_res_to_release_on_suspend_setting(self):\n        \"\"\"\n        Check that on Cray restrict_res_to_release_on_suspend is always set\n        to 'ncpus' by default\n        \"\"\"\n\n        # Set restrict_res_to_release_on_suspend server attribute\n        a = {ATTR_restrict_res_to_release_on_suspend: 'ncpus'}\n        self.server.expect(SERVER, a)\n\n    def test_exclusive_job_not_suspended(self):\n        \"\"\"\n        If a running job is a job with exclusive placement then this job can\n        not be suspended.\n        This test is checking for a log message which is an unstable\n        interface and may need change in future when interface changes.\n        \"\"\"\n\n    
    msg_expected = \"BASIL;ERROR: ALPS error: apsched: \\\nat least resid .* is exclusive\"\n        # Submit a job\n        j = Job(TEST_USER, {ATTR_l + '.select': '1:ncpus=1',\n                            ATTR_l + '.place': 'excl'})\n        check_after = time.time()\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n        # suspend job\n        try:\n            self.server.sigjob(jobid=jid, signal=\"suspend\")\n        except PbsSignalError as e:\n            self.assertTrue(\"Switching ALPS reservation failed\" in e.msg[0])\n\n        self.server.expect(JOB, 'exec_host', id=jid, op=SET)\n        job_stat = self.server.status(JOB, id=jid)\n        s = self.mom.log_match(msg_expected, starttime=check_after,\n                               regexp=True, max_attempts=10)\n        self.assertTrue(s)\n\n    @tags('cray')\n    def test_basic_admin_suspend_restart(self):\n        \"\"\"\n        Test basic admin-suspend functionality for jobs and array jobs with\n        restart on Cray. The restart will test if the node recovers properly\n        in maintenance. 
After turning off scheduling and a mom restart, a\n        subjob is always requeued and the node shows up as free.\n        \"\"\"\n        j1 = Job(TEST_USER)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        qstat = self.server.status(JOB, 'exec_vnode', id=jid1)\n        vname = qstat[0]['exec_vnode'].partition(':')[0].strip('(')\n\n        # admin-suspend regular job\n        self.server.sigjob(jid1, 'admin-suspend', runas=ROOT_USER)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid1)\n        self.server.expect(NODE, {'state': 'maintenance'}, id=vname)\n        self.server.expect(NODE, {'maintenance_jobs': jid1})\n\n        self.server.restart()\n        self.server.expect(NODE, {'state': 'maintenance'}, id=vname)\n        self.server.expect(NODE, {'maintenance_jobs': jid1})\n\n        # Adding sleep to avoid failure at resume since PBS licenses\n        # might not be available and as a result resume fails\n        time.sleep(2)\n\n        # admin-resume regular job. Make sure the node returns to state\n        # job-exclusive.\n        self.server.sigjob(jid1, 'admin-resume', runas=ROOT_USER)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n        self.server.expect(NODE, {'state': 'job-exclusive'}, id=vname)\n        self.server.cleanup_jobs()\n\n        # admin-suspend job array\n        jA = Job(TEST_USER, {ATTR_l + '.select': '1:ncpus=1', ATTR_J: '1-2'})\n        jidA = self.server.submit(jA)\n        self.server.expect(JOB, {ATTR_state: 'B'}, id=jidA)\n\n        subjobs = self.server.status(JOB, id=jidA, extend='t')\n        # subjobs[0] is the array itself.  
Need the subjobs\n        jid1 = subjobs[1]['id']\n        jid2 = subjobs[2]['id']\n\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n\n        qstat = self.server.status(JOB, 'exec_vnode', id=jid1)\n        vname1 = qstat[0]['exec_vnode'].partition(':')[0].strip('(')\n        qstat = self.server.status(JOB, 'exec_vnode', id=jid2)\n        vname2 = qstat[0]['exec_vnode'].partition(':')[0].strip('(')\n\n        # admin-suspend subjob 1\n        self.server.sigjob(jid1, 'admin-suspend', runas=ROOT_USER)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid1)\n        self.server.expect(NODE, {'state': 'maintenance'}, id=vname1)\n        self.server.expect(NODE, {'maintenance_jobs': jid1})\n\n        # admin-resume subjob 1. Make sure the node returns to state\n        # job-exclusive.\n        self.server.sigjob(jid1, 'admin-resume', runas=ROOT_USER)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n        self.server.expect(NODE, {'state': 'job-exclusive'}, id=vname1)\n\n        # admin-suspend subjob 2\n        self.server.sigjob(jid2, 'admin-suspend', runas=ROOT_USER)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid2)\n        self.server.expect(NODE, {'state': 'maintenance'}, id=vname2)\n        self.server.expect(NODE, {'maintenance_jobs': jid2})\n\n        # Turn off scheduling and restart mom\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        self.mom.restart()\n\n        # Check that nodes are now free\n        self.server.expect(NODE, {'state': 'free'}, id=vname1)\n        self.server.expect(NODE, {'state': 'free'}, id=vname2)\n\n    def test_admin_suspend_wrong_state(self):\n        \"\"\"\n        Check that a wrong 'resume' signal is correctly rejected.\n        \"\"\"\n        j1 = Job(TEST_USER)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n        self.server.sigjob(jid1, \"suspend\", runas=ROOT_USER)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid1)\n\n        try:\n            self.server.sigjob(jid1, \"admin-resume\", runas=ROOT_USER)\n        except PbsSignalError as e:\n            self.assertTrue(\n                'Job can not be resumed with the requested resume signal'\n                in e.msg[0])\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid1)\n\n        j2 = Job(TEST_USER)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n        self.server.sigjob(jid2, \"admin-suspend\", runas=ROOT_USER)\n        self.server.expect(JOB, {ATTR_state: 'S', ATTR_substate: 43}, id=jid2)\n\n        try:\n            self.server.sigjob(jid2, \"resume\", runas=ROOT_USER)\n        except PbsSignalError as e:\n            self.assertTrue(\n                'Job can not be resumed with the requested resume signal'\n                in e.msg[0])\n\n        # The job should be in the same state as it was prior to the signal\n        self.server.expect(JOB, {ATTR_state: 'S', ATTR_substate: 43}, id=jid2)\n\n    def submit_resv(self, resv_start, chunks, resv_dur):\n        \"\"\"\n        Function to request a PBS reservation with start time, chunks and\n        duration as arguments.\n        \"\"\"\n        a = {'Resource_List.select': '%d:ncpus=1:vntype=cray_compute' % chunks,\n             'Resource_List.place': 'scatter',\n             'reserve_start': int(resv_start),\n             'reserve_duration': int(resv_dur)\n             }\n        r = Reservation(TEST_USER, attrs=a)\n        rid = self.server.submit(r)\n        try:\n            a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n            d = self.server.expect(RESV, a, id=rid)\n        except PtlExpectError as e:\n            d = e.rv\n        return d\n\n    @timeout(300)\n    def test_preempt_STF(self):\n        \"\"\"\n        Test shrink to fit by creating a reservation for all 
compute nodes\n        starting in 100 sec. with a duration of two hours.  A preempted STF job\n        with min_walltime of 1 min. and max_walltime of 2 hours will stay\n        suspended after higher priority job goes away if its\n        min_walltime can't be satisfied.\n        \"\"\"\n        qname = 'highp'\n        a = {'queue_type': 'execution'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, qname)\n        a = {'enabled': 'True', 'started': 'True', 'priority': '150'}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, qname)\n\n        # Reserve all the compute nodes\n        nv = self.cu.num_compute_vnodes(self.server)\n        self.assertNotEqual(nv, 0, \"There are no cray_compute vnodes present.\")\n        now = time.time()\n        resv_start = now + 100\n        resv_dur = 7200\n        d = self.submit_resv(resv_start, nv, resv_dur)\n        self.assertTrue(d)\n\n        j = Job(TEST_USER, {ATTR_l + '.select': '%d:ncpus=1' % nv,\n                            ATTR_l + '.place': 'scatter',\n                            ATTR_l + '.min_walltime': '00:01:00',\n                            ATTR_l + '.max_walltime': '02:00:00'})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n        self.server.expect(\n            JOB, {ATTR_l + '.walltime': (LE, '00:01:40')}, id=jid)\n        self.server.expect(\n            JOB, {ATTR_l + '.walltime': (GE, '00:01:00')}, id=jid)\n\n        # The sleep below will leave less than 1 minute window for jid\n        # after j2id is deleted. 
The min_walltime of jid can't be\n        # satisfied and jid will stay in S state.\n        time.sleep(35)\n\n        j2 = Job(TEST_USER, {ATTR_l + '.select': '%d:ncpus=1' % nv,\n                             ATTR_l + '.walltime': '00:01:00',\n                             ATTR_l + '.place': 'scatter',\n                             ATTR_q: 'highp'})\n        j2id = self.server.submit(j2)\n\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=j2id)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid)\n\n        # The sleep below will leave less than 1 minute window for jid\n        time.sleep(50)\n\n        self.server.delete(j2id)\n        a = {'scheduling': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.server.expect(SERVER, {'server_state': 'Active'})\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid)\n\n    def test_multi_express(self):\n        \"\"\"\n        Test of multiple express queues of different priorities.\n        See that jobs from the higher express queues preempt jobs\n        from lower express queues.  
Also see when express jobs finish\n        (or are deleted), suspended jobs restart.\n        Make sure loadLimit is set to 4 on the server node:\n        # apmgr config loadLimit 4\n        \"\"\"\n\n        _t = ('\\\"express_queue, normal_jobs, server_softlimits,' +\n              ' queue_softlimits\\\"')\n        a = {'preempt_prio': _t}\n        self.scheduler.set_sched_config(a)\n\n        a = {'queue_type': 'e',\n             'started': 'True',\n             'enabled': 'True',\n             'Priority': 150}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, \"expressq\")\n\n        a['Priority'] = 160\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, \"expressq2\")\n\n        a['Priority'] = 170\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, \"expressq3\")\n\n        # Count the compute nodes\n        nv = self.cu.num_compute_vnodes(self.server)\n        self.assertNotEqual(nv, 0, \"There are no cray_compute vnodes present.\")\n\n        j1 = Job(TEST_USER, {ATTR_l + '.select': '%d:ncpus=1' % nv,\n                             ATTR_l + '.place': 'scatter',\n                             ATTR_l + '.walltime': 3600})\n        j1id = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=j1id)\n\n        j2 = Job(TEST_USER, {ATTR_l + '.select': '%d:ncpus=1' % nv,\n                             ATTR_l + '.place': 'scatter',\n                             ATTR_l + '.walltime': 3600,\n                             ATTR_q: 'expressq'})\n        j2id = self.server.submit(j2)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=j1id)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=j2id)\n\n        j3 = Job(TEST_USER, {ATTR_l + '.select': '%d:ncpus=1' % nv,\n                             ATTR_l + '.place': 'scatter',\n                             ATTR_l + '.walltime': 3600,\n                             ATTR_q: 'expressq2'})\n        j3id = self.server.submit(j3)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=j2id)\n  
      self.server.expect(JOB, {ATTR_state: 'R'}, id=j3id)\n\n        j4 = Job(TEST_USER, {ATTR_l + '.select': '%d:ncpus=1' % nv,\n                             ATTR_l + '.place': 'scatter',\n                             ATTR_l + '.walltime': 3600,\n                             ATTR_q: 'expressq3'})\n        j4id = self.server.submit(j4)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=j3id)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=j4id)\n\n        self.server.delete(j4id)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=j3id)\n\n    def test_preempted_topjob_calendared(self):\n        \"\"\"\n        Test that even if topjob_ineligible is set for\n        a preempted job and sched_preempt_enforce_resumption\n        is set to true, the preempted job will be calendared.\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'sched_preempt_enforce_resumption': 'true'})\n        self.server.manager(MGR_CMD_SET, SERVER, {'backfill_depth': '2'})\n\n        # Count the compute nodes\n        nv = self.cu.num_compute_vnodes(self.server)\n        self.assertNotEqual(nv, 0, \"There are no cray_compute vnodes present.\")\n\n        # Submit a job\n        j = Job(TEST_USER, {ATTR_l + '.select': '%d:ncpus=1' % nv,\n                            ATTR_l + '.place': 'scatter',\n                            ATTR_l + '.walltime': '120'})\n        jid1 = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        # Alter topjob_ineligible for running job\n        self.server.alterjob(jid1, {ATTR_W: \"topjob_ineligible = true\"},\n                             runas=ROOT_USER, logerr=True)\n\n        # Create a high priority queue\n        a = {'queue_type': 'e', 'started': 't',\n             'enabled': 'True', 'priority': '150'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id=\"highp\")\n\n        # Submit a job to high priority queue\n        j = Job(TEST_USER, {ATTR_queue: 'highp', 
ATTR_l + '.walltime': '60'})\n        jid2 = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n\n        # Verify that job1 is calendared\n        self.server.expect(JOB, 'estimated.start_time',\n                           op=SET, id=jid1)\n        qstat = self.server.status(JOB, 'estimated.start_time',\n                                   id=jid1)\n        est_time = qstat[0]['estimated.start_time']\n        self.assertNotEqual(est_time, None)\n        self.scheduler.log_match(jid1 + \";Job is a top job\",\n                                 starttime=self.server.ctime,\n                                 max_attempts=10)\n"
  },
  {
    "path": "test/tests/functional/pbs_cray_vnode_per_numa.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\n@tags('cray', 'mom', 'configuration')\nclass TestVnodePerNumaNode(TestFunctional):\n\n    \"\"\"\n    This test suite is for testing the new mom_priv configuration\n    parameter, vnode_per_numa_node.\n    Test that the information is correctly being compressed into one vnode\n    using the default setting (equivalent to FALSE).\n    \"\"\"\n\n    def setUp(self):\n        if not self.du.get_platform().startswith('cray'):\n            self.skipTest(\"Test suite only meant to run on a Cray\")\n        TestFunctional.setUp(self)\n\n    @tags('cray', 'smoke')\n    def test_settings(self):\n        \"\"\"\n        vnode_per_numa_node is unset (defaults to FALSE).\n        Set $vnode_per_numa_node to TRUE\n        Sum up the ncpus, memory, and naccelerators for all vnodes that\n        have the same host (i.e. 
NUMA nodes that belong to the same compute\n        node).\n        Unset $vnode_per_numa_node in mom_priv/config.\n        Now for each host, compare the ncpus, mem, and naccelerators against\n        the values we got when $vnode_per_numa_node was set to TRUE.\n        They should be equal.\n\n        Verify that PBS created only one vnode, and:\n            - PBScrayseg attribute is not set\n            - ncpus is a total from all NUMA nodes of that node\n            - mem is a total from all NUMA nodes of that node\n            - the naccelerators value is correct\n            - the accelerator_memory value is correct\n\n        Set $vnode_per_numa_node to FALSE.\n        Compare the pbsnodes output when vnode_per_numa_node was unset\n        versus when vnode_per_numa_node was set to False.\n        \"\"\"\n        dncpus = {}\n        dmem = {}\n        dacc = {}\n        daccmem = {}\n\n        # First we mimic old behavior by setting vnode_per_numa_node to TRUE\n        # Do not HUP now, we will do so when we reset the nodes\n        rv = self.mom.add_config({'$vnode_per_numa_node': True}, False)\n        self.assertTrue(rv)\n\n        # Start from a clean slate, delete any existing nodes and re-create\n        # them\n        momname = self.mom.shortname\n        self.reset_nodes(momname)\n\n        # Get the pbsnodes -av output for comparison later\n        vnodes_pernuma = self.server.status(NODE)\n        for n in vnodes_pernuma:\n            if n['resources_available.host'] not in dncpus.keys():\n                dncpus[n['resources_available.host']] = int(\n                    n['resources_available.ncpus'])\n            else:\n                dncpus[n['resources_available.host']\n                       ] += int(n['resources_available.ncpus'])\n            if n['resources_available.host'] not in dmem.keys():\n                dmem[n['resources_available.host']] = int(\n                    n['resources_available.mem'][0:-2])\n            else:\n          
      dmem[n['resources_available.host']\n                     ] += int(n['resources_available.mem'][0:-2])\n            if 'resources_available.naccelerators' in n.keys():\n                if n['resources_available.naccelerators'][0] != '@':\n                    if n['resources_available.host'] not in dacc.keys():\n                        dacc[n['resources_available.host']] = int(\n                            n['resources_available.naccelerators'])\n                    else:\n                        dacc[n['resources_available.host']\n                             ] += int(n['resources_available.naccelerators'])\n            if 'resources_available.accelerator_memory' in n.keys():\n                if n['resources_available.accelerator_memory'][0] != '@':\n                    if n['resources_available.host'] not in daccmem.keys():\n                        daccmem[n['resources_available.host']] = int(\n                            n['resources_available.accelerator_memory'][0:-2])\n                    else:\n                        daccmem[n['resources_available.host']] += int(n[\n                            'resources_available.accelerator_memory'][0:-2])\n\n        # Remove the configuration setting and re-read the vnodes\n        rv = self.mom.unset_mom_config('$vnode_per_numa_node', False)\n        self.assertTrue(rv)\n        self.reset_nodes(momname)\n\n        vnodes_combined = self.server.status(NODE)\n\n        # Compare the multiple vnodes values to the combined vnode output\n\n        for n in vnodes_combined:\n            if 'resources_available.PBScrayseg' in n:\n                self.logger.error(\n                    \"ERROR resources_available.PBScrayseg was found.\")\n                self.assertTrue(False)\n\n            self.assertEqual(int(n['resources_available.ncpus']), dncpus[\n                n['resources_available.host']])\n            self.assertEqual(int(n['resources_available.mem'][0:-2]), dmem[\n                
n['resources_available.host']])\n            if 'resources_available.naccelerators' in n:\n                self.assertEqual(int(n['resources_available.naccelerators']),\n                                 dacc[n['resources_available.host']])\n            if 'resources_available.accelerator_memory' in n:\n                self.assertEqual(int(n['resources_available.accelerator_memory'\n                                       ][0:-2]),\n                                 daccmem[n['resources_available.host']])\n\n        # Set vnode_per_numa_node to FALSE and re-read the vnodes\n        rv = self.mom.add_config({'$vnode_per_numa_node': False}, False)\n        self.assertTrue(rv)\n        self.reset_nodes(momname)\n\n        vnodes_combined1 = self.server.status(NODE)\n\n        # Compare the pbsnodes output when vnode_per_numa_node was unset\n        # versus when vnode_per_numa_node was set to False.\n        # List of resources to be ignored while comparing.\n        ignr_rsc = ['license', 'last_state_change_time']\n        len_vnodes_combined1 = len(vnodes_combined1)\n        len_vnodes_combined = len(vnodes_combined)\n        n = 0\n        if len_vnodes_combined == len_vnodes_combined1:\n            self.logger.info(\n                \"pbsnodes outputs are equal in length\")\n            for vdict in vnodes_combined:\n                for key in vdict:\n                    if key in ignr_rsc:\n                        continue\n                    if key in vnodes_combined1[n]:\n                        if vdict[key] != vnodes_combined1[n][key]:\n                            self.fail(\"ERROR vnode %s has \"\n                                      \"differing element.\" % key)\n                    else:\n                        self.fail(\"ERROR vnode %s has \"\n                                  \"differing element.\" % key)\n                n += 1\n\n        else:\n            self.fail(\"ERROR pbsnodes outputs differ in length.\")\n\n    def restartPBS(self):\n        
try:\n            svcs = PBSInitServices()\n            svcs.restart()\n        except PbsInitServicesError as e:\n            self.logger.error(\"PBS restart failed: \\n\" + e.msg)\n            self.assertTrue(e.rv)\n\n    def reset_nodes(self, hostA):\n        \"\"\"\n        Reset nodes.\n        \"\"\"\n\n        # Remove all nodes\n        rv = self.server.manager(MGR_CMD_DELETE, NODE, None, \"\")\n        self.assertEqual(rv, 0)\n\n        # Restart PBS\n        self.restartPBS()\n\n        # Create node\n        rv = self.server.manager(MGR_CMD_CREATE, NODE, None, hostA)\n        self.assertEqual(rv, 0)\n\n        # Wait for 3 seconds for changes to take effect\n        time.sleep(3)\n"
  },
  {
    "path": "test/tests/functional/pbs_cray_vnode_pool.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\n@tags('cray', 'configuration')\nclass TestVnodePool(TestFunctional):\n\n    \"\"\"\n    This test suite tests how PBS makes use of node attribute \"vnode_pool\"\n    It expects at least 2 moms to be specified to it while executing.\n    \"\"\"\n\n    def setUp(self):\n        if not self.du.get_platform().startswith('cray'):\n            self.skipTest(\"This test can only run on a cray\")\n        TestFunctional.setUp(self)\n        if len(self.moms.values()) < 2:\n            self.skipTest(\"Provide at least 2 moms while invoking test\")\n\n        # The moms provided to the test may have unwanted vnodedef files.\n        if self.moms.values()[0].has_vnode_defs():\n            self.moms.values()[0].delete_vnode_defs()\n        if self.moms.values()[1].has_vnode_defs():\n            self.moms.values()[1].delete_vnode_defs()\n\n        # Check if vnodes exist before deleting nodes.\n        # Clean all default nodes because each test case will set up nodes.\n        try:\n            self.server.status(NODE)\n            self.server.manager(MGR_CMD_DELETE, NODE, None, \"\")\n        except PbsStatusError as e:\n            self.assertTrue(\"Server has no node list\" in e.msg[0])\n\n    def test_invalid_values(self):\n        \"\"\"\n        Invalid vnode_pool values shall result in errors.\n        \"\"\"\n        
self.momA = self.moms.values()[0]\n        self.momB = self.moms.values()[1]\n\n        self.hostA = self.momA.shortname\n        self.hostB = self.momB.shortname\n\n        attr_A = {'vnode_pool': '-1'}\n        try:\n            self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostA,\n                                attrib=attr_A)\n        except PbsManagerError as e:\n            self.assertTrue(\"Illegal attribute or resource value\" in e.msg[0])\n\n        attr_A = {'vnode_pool': '0'}\n        try:\n            self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostA,\n                                attrib=attr_A)\n        except PbsManagerError as e:\n            self.assertTrue(\"Illegal attribute or resource value\" in e.msg[0])\n\n        attr_A = {'vnode_pool': 'a'}\n        try:\n            self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostA,\n                                attrib=attr_A)\n        except PbsManagerError as e:\n            self.assertTrue(\"Illegal attribute or resource value\" in e.msg[0])\n\n    def test_two_moms_single_vnode_pool(self):\n        \"\"\"\n        Same vnode_pool for two moms shall result in one mom being the\n        inventory mom and the other the non-inventory mom.\n        The inventory mom goes down (e.g. 
killed).\n        Compute nodes remain up even when the inventory mom is killed,\n        since another mom is reporting them.\n        Check that a new inventory mom is listed in the log.\n        Bring up killed mom.\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER, {\"log_events\": -1})\n\n        self.momA = self.moms.values()[0]\n        self.momB = self.moms.values()[1]\n        self.hostA = self.momA.shortname\n        self.hostB = self.momB.shortname\n\n        attr = {'vnode_pool': '1'}\n\n        start_time = time.time()\n\n        self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostA, attrib=attr)\n        self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostB, attrib=attr)\n\n        self.server.log_match(\"Mom %s added to vnode_pool %s\" %\n                              (self.momB.hostname, '1'), max_attempts=5,\n                              starttime=start_time)\n\n        _msg = \"Hello (no inventory required) from server\"\n        try:\n            self.momA.log_match(_msg, max_attempts=9, starttime=start_time)\n            found_in_momA = 1\n        except PtlLogMatchError:\n            found_in_momA = 0\n        try:\n            self.momB.log_match(_msg, max_attempts=9, starttime=start_time)\n            found_in_momB = 1\n        except PtlLogMatchError:\n            found_in_momB = 0\n        self.assertEqual(found_in_momA + found_in_momB,\n                         1, msg=\"an inventory mom not chosen correctly\")\n\n        # Only one mom is inventory mom\n        if (found_in_momA == 0):\n            inv_mom = self.momA\n            noninv_mom = self.momB\n        else:\n            inv_mom = self.momB\n            noninv_mom = self.momA\n\n        self.logger.info(\"Inventory mom is %s.\" % inv_mom.shortname)\n        self.logger.info(\"Non-inventory mom is %s.\" %\n                         noninv_mom.shortname)\n\n        start_time = time.time()\n\n        # Kill inventory mom\n        inv_mom.signal('-KILL')\n\n       
 # Check that former inventory mom is down\n        rv = self.server.expect(\n            VNODE, {'state': 'down'}, id=inv_mom.shortname,\n            max_attempts=10, interval=2)\n        self.assertTrue(rv)\n\n        # Check if inventory mom changed and is listed in the server log.\n        self.server.log_match(\n            \"Setting inventory_mom for vnode_pool %s to %s\" %\n            ('1', noninv_mom.shortname), max_attempts=5,\n            starttime=start_time)\n        self.logger.info(\n            \"Inventory mom is now %s in server logs.\" %\n            (noninv_mom.shortname))\n\n        # Check compute nodes are up\n        vlist = []\n        try:\n            vnl = self.server.filter(\n                VNODE, {'resources_available.vntype': 'cray_compute'})\n            vlist = vnl[\"resources_available.vntype=cray_compute\"]\n        except Exception:\n            pass\n\n        # Loop through each compute vnode in the list and check if state = free\n        for v1 in vlist:\n            # Check that the node is in free state\n            rv = self.server.expect(\n                VNODE, {'state': 'free'}, id=v1, max_attempts=3, interval=2)\n            self.assertTrue(rv)\n\n        # Start the previous inv mom.\n        inv_mom.start()\n\n        # Check previous inventory mom is up\n        rv = self.server.expect(\n            VNODE, {'state': 'free'}, id=inv_mom.shortname,\n            max_attempts=3, interval=2)\n        self.assertTrue(rv)\n\n    def test_two_moms_different_vnode_pool(self):\n        \"\"\"\n        Differing vnode_pool for two moms shall result in both moms reporting\n        inventory.\n        \"\"\"\n        self.momA = self.moms.values()[0]\n        self.momB = self.moms.values()[1]\n        self.hostA = self.momA.shortname\n        self.hostB = self.momB.shortname\n\n        attr_A = {'vnode_pool': '1'}\n        attr_B = {'vnode_pool': '2'}\n\n        start_time = time.time()\n\n        
self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostA, attrib=attr_A)\n        self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostB, attrib=attr_B)\n\n        _msg = \"Hello (no inventory required) from server\"\n        try:\n            self.momA.log_match(_msg, max_attempts=5, starttime=start_time)\n            found_in_momA = 1\n        except PtlLogMatchError:\n            found_in_momA = 0\n        try:\n            self.momB.log_match(_msg, max_attempts=5, starttime=start_time)\n            found_in_momB = 1\n        except PtlLogMatchError:\n            found_in_momB = 0\n        self.assertTrue((found_in_momA + found_in_momB == 0),\n                        msg=\"Both moms must report inventory\")\n\n    def test_invalid_usage(self):\n        \"\"\"\n        Setting vnode_pool for an existing mom that does not have a vnode_pool\n        attribute shall not be allowable.\n        Setting vnode_pool for an existing mom having a vnode_pool attribute\n        shall not be allowable.\n        Unsetting vnode_pool for an existing mom having a vnode_pool attribute\n        shall not be allowable.\n        \"\"\"\n        self.momA = self.moms.values()[0]\n        self.hostA = self.momA.shortname\n        self.logger.info(\"hostA is %s.\" % self.hostA)\n\n        start_time = time.time()\n\n        self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostA)\n\n        attr_2 = {'vnode_pool': '2'}\n        try:\n            self.server.manager(\n                MGR_CMD_SET, NODE, id=self.hostA, attrib=attr_2)\n        except PbsManagerError as e:\n            self.assertTrue(\"Invalid request\" in e.msg[0])\n\n        self.server.log_match(\"Unsupported actions for vnode_pool\",\n                              max_attempts=5, starttime=start_time)\n        self.logger.info(\"Found correct server log message\")\n\n        self.momB = self.moms.values()[1]\n        self.hostB = self.momB.shortname\n\n        attr_1 = {'vnode_pool': '1'}\n\n        start_time = 
time.time()\n\n        self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostB, attrib=attr_1)\n\n        attr_2 = {'vnode_pool': '2'}\n        try:\n            self.server.manager(MGR_CMD_SET, NODE, id=self.hostB,\n                                attrib=attr_2)\n        except PbsManagerError as e:\n            self.assertTrue(\"Invalid request\" in e.msg[0])\n\n        self.server.log_match(\"Unsupported actions for vnode_pool\",\n                              max_attempts=5, starttime=start_time)\n        try:\n            self.server.manager(MGR_CMD_UNSET, NODE, id=self.hostB,\n                                attrib='vnode_pool')\n        except PbsManagerError as e:\n            self.assertTrue(\"Illegal value for node vnode_pool\" in e.msg[0])\n"
  },
  {
    "path": "test/tests/functional/pbs_daemon_service_user.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport resource\n\nfrom tests.functional import *\n\n\nclass TestDaemonServiceUser(TestFunctional):\n\n    \"\"\"\n    Test suite to test running schedulers as a non-root user\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n\n    def common_test(self, binary, runas, scheduser, msg, setup_sched=False):\n        \"\"\"\n        Test if running `binary` as `runas` with\n        PBS_DAEMON_SERVICE_USER set as `scheduser`\n        Check to see `msg` is in stderr\n        If `msg` is None, make sure command passed\n        \"\"\"\n        if scheduser:\n            self.du.set_pbs_config(\n                self.server.hostname,\n                confs={'PBS_DAEMON_SERVICE_USER': str(scheduser)}\n            )\n        else:\n            self.du.unset_pbs_config(\n                self.server.hostname,\n                confs='PBS_DAEMON_SERVICE_USER'\n            )\n        self.server.restart()\n        pbs_conf = self.du.parse_pbs_config(self.server.shortname)\n        if setup_sched:\n            sched_logs = os.path.join(pbs_conf['PBS_HOME'], 'sched_logs')\n            sched_priv = os.path.join(pbs_conf['PBS_HOME'], 'sched_priv')\n            self.du.chown(path=sched_logs, uid=scheduser,\n                          recursive=True, sudo=True, level=logging.INFO)\n            self.du.chown(path=sched_priv, uid=scheduser,\n                  
        recursive=True, sudo=True, level=logging.INFO)\n\n        binpath = os.path.join(pbs_conf['PBS_EXEC'], 'sbin', binary)\n        ret = self.du.run_cmd(self.server.shortname,\n                              cmd=[binpath], runas=runas)\n        if msg:\n            self.assertEqual(ret['rc'], 1)\n            self.assertIn(msg, '\\n'.join(ret['err']))\n        else:\n            self.assertEqual(ret['rc'], 0)\n            self.assertFalse(ret['err'])\n\n    def test_sched_runas_nonroot(self):\n        \"\"\"\n        Test if running sched as nonroot with\n        PBS_DAEMON_SERVICE_USER set as another user\n        \"\"\"\n        self.common_test('pbs_sched', TEST_USER, TEST_USER1,\n                         'Must be run by PBS_DAEMON_SERVICE_USER')\n\n    def test_pbsfs_runas_nonroot(self):\n        \"\"\"\n        Test if running pbsfs as nonroot with\n        PBS_DAEMON_SERVICE_USER set as another user\n        \"\"\"\n        self.common_test('pbsfs', TEST_USER, TEST_USER1,\n                         'Must be run by PBS_DAEMON_SERVICE_USER')\n\n    def test_sched_runas_nonroot_notset(self):\n        \"\"\"\n        Test if running sched as nonroot with\n        PBS_DAEMON_SERVICE_USER not set\n        \"\"\"\n        self.common_test('pbs_sched', TEST_USER, None,\n                         'Must be run by PBS_DAEMON_SERVICE_USER if '\n                         'set or root if not set')\n\n    def test_pbsfs_runas_nonroot_notset(self):\n        \"\"\"\n        Test if running pbsfs as nonroot with\n        PBS_DAEMON_SERVICE_USER not set\n        \"\"\"\n        self.common_test('pbsfs', TEST_USER, None,\n                         'Must be run by PBS_DAEMON_SERVICE_USER if '\n                         'set or root if not set')\n\n    def test_sched_runas_nonroot_pass(self):\n        \"\"\"\n        Test if sched runs as non-root user\n        \"\"\"\n        self.scheduler.stop()\n        self.common_test('pbs_sched', TEST_USER, TEST_USER, None,\n                   
      setup_sched=True)\n        j = Job(TEST_USER1)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n"
  },
  {
    "path": "test/tests/functional/pbs_dup_acc_log_for_resv.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestDupAccLogForResv(TestFunctional):\n    \"\"\"\n    This test suite is for testing duplicate records in accounting log\n    for start of reservations.\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n\n    def test_accounting_logs(self):\n        r1 = Reservation(TEST_USER)\n        a = {'Resource_List.select': '1:ncpus=1', 'reserve_start': int(\n            time.time() + 5), 'reserve_end': int(time.time() + 60)}\n        r1.set_attributes(a)\n        r1id = self.server.submit(r1)\n        time.sleep(8)\n        self.server.restart()\n        m = self.server.accounting_match(\n            msg='.*B;' + r1id, id=r1id, n='ALL', allmatch=True, regexp=True)\n        self.assertEqual(len(m), 1)\n"
  },
  {
    "path": "test/tests/functional/pbs_eligible_time.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\nfrom ptl.utils.pbs_logutils import PBSLogUtils\n\n\nclass TestEligibleTime(TestFunctional):\n    \"\"\"\n    Test suite for eligible time tests\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        a = {'eligible_time_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.accrue = {'ineligible': 1, 'eligible': 2, 'run': 3, 'exit': 4}\n\n    def test_eligible_time_updated(self):\n        \"\"\"\n        Test that eligible time gets updated when a job is eligible\n        \"\"\"\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {\"eligible_time_enable\": \"True\"})\n\n        jid1 = self.server.submit(Job())\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        jid2 = self.server.submit(Job())\n        a = {ATTR_state: 'Q', \"accrue_type\": \"2\"}\n        self.server.expect(JOB, a, id=jid2)\n\n        self.server.expect(JOB, {\"eligible_time\": \"00:00:00\"}, op=NE, id=jid2)\n\n    def test_qsub_a(self):\n        \"\"\"\n        Test that jobs requesting qsub -a <time> do not accrue\n        eligible time until <time> is reached\n        \"\"\"\n        a = {'scheduling': 'False'}\n        
self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        now = int(time.time())\n        now += 120\n        s = time.strftime(\"%H%M.%S\", time.localtime(now))\n\n        J1 = Job(TEST_USER, attrs={ATTR_a: s})\n        jid = self.server.submit(J1)\n        self.server.expect(JOB, {ATTR_state: 'W'}, id=jid)\n\n        self.logger.info(\"Sleeping 120s till job is out of 'W' state\")\n        time.sleep(120)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid)\n        # eligible_time should really be 0, but just in case there is some\n        # lag on some slow systems, add a little leeway.\n        self.server.expect(JOB, {'eligible_time': 10}, op=LT)\n\n    def test_job_array(self):\n        \"\"\"\n        Test that a job array switches from accruing eligible time\n        to ineligible time when its last subjob starts running\n        \"\"\"\n        logutils = PBSLogUtils()\n        a = {'resources_available.ncpus': 2}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        a = {'log_events': 2047}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        J1 = Job(TEST_USER, attrs={ATTR_J: '1-3'})\n        J1.set_sleep_time(20)\n        jid = self.server.submit(J1)\n        jid_short = jid.split('[')[0]\n        sjid1 = jid_short + '[1]'\n        sjid2 = jid_short + '[2]'\n        sjid3 = jid_short + '[3]'\n\n        # Capture the time stamp when subjob 1 starts run. 
Accrue type changes\n        # to eligible time\n        msg1 = J1.create_subjob_id(jid, 1) + \";Job Run at request of Scheduler\"\n        m1 = self.server.log_match(msg1)\n        t1 = logutils.convert_date_time(m1[1].split(';')[0])\n\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=sjid1, extend='t')\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=sjid2, extend='t')\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=sjid3, extend='t')\n\n        self.server.expect(JOB, {'accrue_type': self.accrue['eligible']},\n                           id=jid)\n\n        self.logger.info(\"subjobs 1 and 2 finished; subjob 3 must run now\")\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=sjid3,\n                           extend='t', offset=20)\n        self.server.expect(JOB, {'accrue_type': self.accrue['ineligible']},\n                           id=jid)\n\n        # Capture the time stamp when subjob 3 starts run. Accrue type changes\n        # to ineligible time. eligible_time calculation is completed.\n        msg2 = J1.create_subjob_id(jid, 3) + \";Job Run at request of Scheduler\"\n        m2 = self.server.log_match(msg2)\n        t2 = logutils.convert_date_time(m2[1].split(';')[0])\n        eligible_time = int(t2) - int(t1)\n\n        m1 = jid + \";Accrue type has changed to ineligible_time, \"\n        m1 += \"previous accrue type was eligible_time\"\n\n        m2 = m1 + \" for %d secs, \" % eligible_time\n        # Format timedelta object as it does not print a preceding 0 for\n        # hours in HH:MM:SS\n        m2 += \"total eligible_time={!s:0>8}\".format(\n              datetime.timedelta(seconds=eligible_time))\n        try:\n            self.server.log_match(m2)\n        except PtlLogMatchError as e:\n            # In some slow machines, there is a delay observed between\n            # job run and accrue type change.\n            # Checking if log_match failed because eligible_time\n            # value was off only by a few seconds(5 
seconds).\n            # This is done to accommodate differences in the eligible\n            # time calculated by the test and the eligible time\n            # calculated by PBS.\n            # If the eligible_time value was off by > 5 seconds, test fails.\n            match = self.server.log_match(m1)\n            e_time = re.search(r'(\\d+) secs', match[1])\n            if e_time:\n                self.logger.info(\"Checking if log_match failed because \"\n                                 \"the eligible_time value was off by \"\n                                 \"a few seconds, but within the allowed \"\n                                 \"range (5 secs). Expected %d secs Got: %s\"\n                                 % (eligible_time, e_time.group(1)))\n                if int(e_time.group(1)) - eligible_time > 5:\n                    raise PtlLogMatchError(rc=1, rv=False, msg=e.msg)\n            else:\n                raise PtlLogMatchError(rc=1, rv=False, msg=e.msg)\n\n    def test_after_depend(self):\n        \"\"\"\n        Make sure jobs accrue eligible time (or not) appropriately with an\n        after dependency\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {'resources_available.ncpus': 2},\n                            id=self.mom.shortname)\n        J1 = Job(TEST_USER)\n        jid1 = self.server.submit(J1)\n        attribs = {'job_state': 'R', 'accrue_type': self.accrue['run']}\n        self.server.expect(JOB, attribs, id=jid1)\n\n        J2 = Job(TEST_USER, {'Resource_List.select': '1:ncpus=2'})\n        jid2 = self.server.submit(J2)\n        attribs = {'job_state': 'Q', 'accrue_type': self.accrue['eligible']}\n        self.server.expect(JOB, attribs, id=jid2)\n\n        a = {'Resource_List.select': '1:ncpus=1',\n             ATTR_depend: 'afterany:' + jid2}\n        J3 = Job(TEST_USER, a)\n        jid3 = self.server.submit(J3)\n        attribs = {'job_state': 'H', 'accrue_type': self.accrue['ineligible']}\n   
     self.server.expect(JOB, attribs, id=jid3)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'max_run_res.ncpus': '[u:PBS_GENERIC=1]'})\n\n        # Make sure there are enough resources to run the job, so the reason\n        # the job can't run is the limit.  Otherwise, we'd accrue eligible time\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {'resources_available.ncpus': 3},\n                            id=self.mom.shortname)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.server.expect(JOB, {'accrue_type': self.accrue['ineligible']},\n                           id=jid2)\n\n        # force the server to reassess the accrue type\n        self.server.holdjob(jid2, 'u')\n        self.server.rlsjob(jid2, 'u')\n\n        self.server.expect(JOB, {'accrue_type': self.accrue['ineligible']},\n                           id=jid2)\n\n    def test_default_accrue_type(self):\n        \"\"\"\n        Test that the default accrue_type for jobs is \"eligible time\"\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {'resources_available.ncpus': 1},\n                            id=self.mom.shortname)\n        self.server.manager(MGR_CMD_SET, SCHED, {\"scheduling\": \"false\"})\n\n        jid1 = self.server.submit(Job())\n\n        # Check that the job's accrue_type is set to eligible time\n        a = {\"accrue_type\": self.accrue['eligible']}\n        self.server.expect(JOB, a, id=jid1)\n\n    def test_delayed_ineligible(self):\n        \"\"\"\n        Test that jobs are still correctly marked ineligible by sched\n        even if server thinks that they are eligible\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {'resources_available.ncpus': 2},\n                            id=self.mom.shortname)\n        self.server.manager(MGR_CMD_SET, SCHED, {\"scheduling\": \"false\"})\n\n        
a = {\"max_run_res.ncpus\": \"[u:PBS_GENERIC=1]\"}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        jid1 = self.server.submit(Job(attrs={\"Resource_List.ncpus\": 2}))\n\n        # Check that server sets job's accrue_type to eligible time\n        a = {\"accrue_type\": self.accrue['eligible']}\n        self.server.expect(JOB, a, id=jid1)\n\n        self.scheduler.run_scheduling_cycle()\n\n        # Check that scheduler corrects the accrue_type to ineligible\n        a = {\"accrue_type\": self.accrue['ineligible']}\n        self.server.expect(JOB, a, id=jid1)\n"
  },
  {
    "path": "test/tests/functional/pbs_equiv_classes.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestEquivClass(TestFunctional):\n\n    \"\"\"\n    Test equivalence class functionality\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        a = {'resources_available.ncpus': 8}\n        self.mom.create_vnodes(a, 1, usenatvnode=True)\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events': 2047})\n        # capture the start time of the test for log matching\n        self.t = time.time()\n\n    def submit_jobs(self, num_jobs=1,\n                    attrs={'Resource_List.select': '1:ncpus=1'},\n                    user=TEST_USER):\n        \"\"\"\n        Submit num_jobs number of jobs with attrs attributes for user.\n        Return a list of job ids\n        \"\"\"\n        ret_jids = []\n        for n in range(num_jobs):\n            J = Job(user, attrs)\n            jid = self.server.submit(J)\n            ret_jids += [jid]\n\n        return ret_jids\n\n    def test_basic(self):\n        \"\"\"\n        Test the basic behavior of job equivalence classes: submit two\n        different types of jobs and see they are in two different classes\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8'}\n        J = Job(TEST_USER, 
attrs=a)\n        self.server.submit(J)\n\n        jids1 = self.submit_jobs(3, a)\n\n        a = {'Resource_List.select': '1:ncpus=4'}\n        jids2 = self.submit_jobs(3, a)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        self.scheduler.log_match(\"Number of job equivalence classes: 2\",\n                                 starttime=self.t)\n\n    def test_select(self):\n        \"\"\"\n        Test to see if jobs with select resources not in the resources line\n        fall into the same equivalence class\n        \"\"\"\n        self.server.manager(MGR_CMD_CREATE, RSC,\n                            {'type': 'long', 'flag': 'nh'}, id='foo')\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8'}\n        J = Job(TEST_USER, attrs=a)\n        self.server.submit(J)\n\n        a = {'Resource_List.select': '1:ncpus=1:foo=4'}\n        jids1 = self.submit_jobs(3, a)\n\n        a = {'Resource_List.select': '1:ncpus=1:foo=8'}\n        jids2 = self.submit_jobs(3, a)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Two equivalence classes: one for the resource eating job and one\n        # for the other jobs. 
While jobs have different amounts of\n        # the foo resource, foo is not on the resources line.\n        self.scheduler.log_match(\"Number of job equivalence classes: 2\",\n                                 starttime=self.t)\n\n    def test_place(self):\n        \"\"\"\n        Test to see if jobs with different place statements\n        fall into different equivalence classes\n        \"\"\"\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8'}\n        J = Job(TEST_USER, attrs=a)\n        self.server.submit(J)\n\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.place': 'free'}\n        jids1 = self.submit_jobs(3, a)\n\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.place': 'excl'}\n        jids2 = self.submit_jobs(3, a)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Three equivalence classes: one for the resource eating job and\n        # one for each place statement\n        self.scheduler.log_match(\"Number of job equivalence classes: 3\",\n                                 starttime=self.t)\n\n    def test_reslist1(self):\n        \"\"\"\n        Test to see if jobs with resources in Resource_List that are not in\n        the sched_config resources line fall into the same equivalence class\n        \"\"\"\n        self.server.manager(MGR_CMD_CREATE, RSC, {'type': 'string'},\n                            id='baz')\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8'}\n        J = Job(TEST_USER, attrs=a)\n        self.server.submit(J)\n\n        a = {'Resource_List.software': 'foo'}\n        jids1 = self.submit_jobs(3, a)\n\n        a = {'Resource_List.software': 'bar'}\n        jids2 = self.submit_jobs(3, a)\n\n        a = {'Resource_List.baz': 'foo'}\n        jids3 = self.submit_jobs(3, a)\n\n        a = {'Resource_List.baz': 'bar'}\n        jids4 = self.submit_jobs(3, a)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Two equivalence classes.  One for the resource eating job and\n        # one for the rest.  The rest of the jobs have differing values of\n        # resources not on the resources line.  They fall into one class.\n        self.scheduler.log_match(\"Number of job equivalence classes: 2\",\n                                 starttime=self.t)\n\n    def test_reslist2(self):\n        \"\"\"\n        Test to see if jobs with resources in Resource_List that are in the\n        sched_config resources line fall into different equivalence classes\n        \"\"\"\n        self.server.manager(MGR_CMD_CREATE, RSC, {'type': 'string'},\n                            id='baz')\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n        self.scheduler.add_resource('software')\n        self.scheduler.add_resource('baz')\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8'}\n        J = Job(TEST_USER, attrs=a)\n        self.server.submit(J)\n\n        a = {'Resource_List.software': 'foo'}\n        jids1 = self.submit_jobs(3, a)\n\n        a = {'Resource_List.software': 'bar'}\n        jids2 = self.submit_jobs(3, a)\n\n        a = {'Resource_List.baz': 'foo'}\n        jids3 = self.submit_jobs(3, a)\n\n        a = {'Resource_List.baz': 'bar'}\n        jids4 = self.submit_jobs(3, a)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Five equivalence classes.  
One for the resource eating job and\n        # one for each value of software and baz.\n        self.scheduler.log_match(\"Number of job equivalence classes: 5\",\n                                 starttime=self.t)\n\n    def test_nolimits(self):\n        \"\"\"\n        Test to see that jobs from different users, groups, and projects\n        all fall into the same equivalence class when there are no limits\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8'}\n        J = Job(TEST_USER, attrs=a)\n        self.server.submit(J)\n\n        jids1 = self.submit_jobs(3, user=TEST_USER)\n        jids2 = self.submit_jobs(3, user=TEST_USER2)\n\n        b = {'group_list': TSTGRP1, 'Resource_List.select': '1:ncpus=8'}\n        jids3 = self.submit_jobs(3, b, TEST_USER1)\n\n        b = {'group_list': TSTGRP2, 'Resource_List.select': '1:ncpus=8'}\n        jids4 = self.submit_jobs(3, b, TEST_USER1)\n\n        b = {'project': 'p1', 'Resource_List.select': '1:ncpus=8'}\n        jids5 = self.submit_jobs(3, b)\n\n        b = {'project': 'p2', 'Resource_List.select': '1:ncpus=8'}\n        jids6 = self.submit_jobs(3, b)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Two equivalence classes: one for the ncpus=8 jobs and one\n        # for the rest.  
Since there are no limits, neither user, group, nor project\n        # is taken into account\n        self.scheduler.log_match(\"Number of job equivalence classes: 2\",\n                                 starttime=self.t)\n\n    def test_user(self):\n        \"\"\"\n        Test to see that jobs from different users fall into the same\n        equivalence class without user limits set\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8'}\n        J = Job(TEST_USER, attrs=a)\n        self.server.submit(J)\n\n        jids1 = self.submit_jobs(3, user=TEST_USER)\n        jids2 = self.submit_jobs(3, user=TEST_USER2)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Two equivalence classes: One for the resource eating job and\n        # one for the rest.  Since there are no limits, both users are\n        # in one class.\n        self.scheduler.log_match(\"Number of job equivalence classes: 2\",\n                                 starttime=self.t)\n\n    def test_user_old(self):\n        \"\"\"\n        Test to see that jobs from different users fall into different\n        equivalence classes with old style limits set\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'max_user_run': 4})\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8'}\n        J = Job(TEST_USER, attrs=a)\n        self.server.submit(J)\n\n        jids1 = self.submit_jobs(3, user=TEST_USER)\n        jids2 = self.submit_jobs(3, user=TEST_USER2)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Three equivalence classes.  
One for the resource eating job\n        # and one for each user.\n        self.scheduler.log_match(\"Number of job equivalence classes: 3\",\n                                 starttime=self.t)\n\n    def test_user_server(self):\n        \"\"\"\n        Test to see that jobs from different users fall into different\n        equivalence classes with server hard limits set\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'max_run': '[u:PBS_GENERIC=4]'})\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8'}\n        J = Job(TEST_USER, attrs=a)\n        self.server.submit(J)\n\n        jids1 = self.submit_jobs(3, user=TEST_USER)\n        jids2 = self.submit_jobs(3, user=TEST_USER2)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Three equivalence classes.  
One for the resource eating job\n        # and one for each user.\n        self.scheduler.log_match(\"Number of job equivalence classes: 3\",\n                                 starttime=self.t)\n\n    def test_user_server_soft(self):\n        \"\"\"\n        Test to see that jobs from different users fall into different\n        equivalence classes with server soft limits set\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'max_run_soft': '[u:PBS_GENERIC=4]'})\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8'}\n        J = Job(TEST_USER, attrs=a)\n        self.server.submit(J)\n\n        jids1 = self.submit_jobs(3, user=TEST_USER)\n        jids2 = self.submit_jobs(3, user=TEST_USER2)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Three equivalence classes.  
One for the resource eating job and\n        # one for each user.\n        self.scheduler.log_match(\"Number of job equivalence classes: 3\",\n                                 starttime=self.t)\n\n    def test_user_queue(self):\n        \"\"\"\n        Test to see that jobs from different users fall into different\n        equivalence classes with queue limits set\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        self.server.manager(MGR_CMD_SET, QUEUE,\n                            {'max_run': '[u:PBS_GENERIC=4]'}, id='workq')\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8'}\n        J = Job(TEST_USER, attrs=a)\n        self.server.submit(J)\n\n        jids1 = self.submit_jobs(3, user=TEST_USER)\n        jids2 = self.submit_jobs(3, user=TEST_USER2)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Three equivalence classes.  
One for the resource eating job and\n        # one for each user.\n        self.scheduler.log_match(\"Number of job equivalence classes: 3\",\n                                 starttime=self.t)\n\n    def test_user_queue_without_limits(self):\n        \"\"\"\n        Test that jobs from different users submitted to a queue without\n        a user limit set will not create multiple equivalence classes.\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        self.server.manager(MGR_CMD_SET, QUEUE,\n                            {'max_run': '[u:PBS_GENERIC=4]'}, id='workq')\n\n        # Eat up all the resources; this job will make the first equiv class\n        a = {'Resource_List.select': '1:ncpus=8'}\n        J = Job(TEST_USER, attrs=a)\n        self.server.submit(J)\n\n        # Create a new queue and submit jobs to this queue\n        a = {'queue_type': 'e', 'started': 'True', 'enabled': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='workq2')\n\n        a = {'Resource_List.select': '1:ncpus=1', ATTR_q: 'workq2'}\n        jids1 = self.submit_jobs(3, user=TEST_USER, attrs=a)\n        jids2 = self.submit_jobs(3, user=TEST_USER2, attrs=a)\n        a = {'Resource_List.select': '1:ncpus=1'}\n        J3 = Job(TEST_USER3, attrs=a)\n        self.server.submit(J3)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Three equivalence classes.  
One for the resource eating job and\n        # one for all jobs in workq2 and one for TEST_USER3\n        self.scheduler.log_match(\"Number of job equivalence classes: 3\",\n                                 starttime=self.t)\n\n    def test_user_queue_soft(self):\n        \"\"\"\n        Test to see that jobs from different users fall into different\n        equivalence classes with queue soft limits set\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        self.server.manager(MGR_CMD_SET, QUEUE,\n                            {'max_run_soft': '[u:PBS_GENERIC=4]'}, id='workq')\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8'}\n        J = Job(TEST_USER, attrs=a)\n        self.server.submit(J)\n\n        jids1 = self.submit_jobs(3, user=TEST_USER)\n        jids2 = self.submit_jobs(3, user=TEST_USER2)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Three equivalence classes.  
One for the resource eating job and\n        # one for each user.\n        self.scheduler.log_match(\"Number of job equivalence classes: 3\",\n                                 starttime=self.t)\n\n    def test_user_queue_without_soft_limits(self):\n        \"\"\"\n        Test that jobs from different users submitted to a queue without\n        a user soft limit set will not create multiple equivalence classes.\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        self.server.manager(MGR_CMD_SET, QUEUE,\n                            {'max_run_soft': '[u:PBS_GENERIC=4]'}, id='workq')\n\n        # Eat up all the resources; this job will make the first equiv class\n        a = {'Resource_List.select': '1:ncpus=8'}\n        J = Job(TEST_USER, attrs=a)\n        self.server.submit(J)\n\n        # Create a new queue and submit jobs to this queue\n        a = {'queue_type': 'e', 'started': 'True', 'enabled': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='workq2')\n\n        a = {'Resource_List.select': '1:ncpus=1', ATTR_q: 'workq2'}\n        jids1 = self.submit_jobs(3, user=TEST_USER, attrs=a)\n        jids2 = self.submit_jobs(3, user=TEST_USER2, attrs=a)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Two equivalence classes.  
One for the resource eating job and\n        # one for all jobs in workq2.\n        self.scheduler.log_match(\"Number of job equivalence classes: 2\",\n                                 starttime=self.t)\n\n    def test_group(self):\n        \"\"\"\n        Test to see that jobs from different groups fall into the same\n        equivalence class without group limits set\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8'}\n        J = Job(TEST_USER1, attrs=a)\n        self.server.submit(J)\n\n        a = {'group_list': TSTGRP1}\n        jids1 = self.submit_jobs(3, a, TEST_USER1)\n\n        a = {'group_list': TSTGRP2}\n        jids2 = self.submit_jobs(3, a, TEST_USER1)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Two equivalence classes: One for the resource eating job and\n        # one for the rest.  
Since there are no limits, both groups are\n        # in one class.\n        self.scheduler.log_match(\"Number of job equivalence classes: 2\",\n                                 starttime=self.t)\n\n    @skipOnShasta\n    def test_group_old(self):\n        \"\"\"\n        Test to see that jobs from different groups fall into different\n        equivalence classes with old style group limits set\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'max_group_run': 4})\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8'}\n        J = Job(TEST_USER1, attrs=a)\n        self.server.submit(J)\n\n        a = {'group_list': TSTGRP1}\n        jids1 = self.submit_jobs(3, a, TEST_USER1)\n\n        a = {'group_list': TSTGRP2}\n        jids2 = self.submit_jobs(3, a, TEST_USER1)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Three equivalence classes.  
One for the resource eating job and\n        # one for each group.\n        self.scheduler.log_match(\"Number of job equivalence classes: 3\",\n                                 starttime=self.t)\n\n    @skipOnShasta\n    def test_group_server(self):\n        \"\"\"\n        Test to see that jobs from different groups fall into different\n        equivalence classes with server group limits set\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'max_run': '[g:PBS_GENERIC=4]'})\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8'}\n        J = Job(TEST_USER1, attrs=a)\n        self.server.submit(J)\n\n        a = {'group_list': TSTGRP1}\n        jids1 = self.submit_jobs(3, a, TEST_USER1)\n\n        a = {'group_list': TSTGRP2}\n        jids2 = self.submit_jobs(3, a, TEST_USER1)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Three equivalence classes.  
One for the resource eating job and\n        # one for each group.\n        self.scheduler.log_match(\"Number of job equivalence classes: 3\",\n                                 starttime=self.t)\n\n    @skipOnShasta\n    def test_group_server_soft(self):\n        \"\"\"\n        Test to see that jobs from different groups fall into different\n        equivalence classes with server soft group limits set\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'max_run_soft': '[g:PBS_GENERIC=4]'})\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8'}\n        J = Job(TEST_USER1, attrs=a)\n        self.server.submit(J)\n\n        a = {'group_list': TSTGRP1}\n        jids1 = self.submit_jobs(3, a, TEST_USER1)\n\n        a = {'group_list': TSTGRP2}\n        jids2 = self.submit_jobs(3, a, TEST_USER1)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Three equivalence classes.  
One for the resource eating job and\n        # one for each group.\n        self.scheduler.log_match(\"Number of job equivalence classes: 3\",\n                                 starttime=self.t)\n\n    @skipOnShasta\n    def test_group_queue(self):\n        \"\"\"\n        Test to see that jobs from different groups fall into different\n        equivalence classes with queue group limits set\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        self.server.manager(MGR_CMD_SET, QUEUE,\n                            {'max_run': '[g:PBS_GENERIC=4]'}, id='workq')\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8'}\n        J = Job(TEST_USER1, attrs=a)\n        self.server.submit(J)\n\n        a = {'group_list': TSTGRP1}\n        jids1 = self.submit_jobs(3, a, TEST_USER1)\n\n        a = {'group_list': TSTGRP2}\n        jids2 = self.submit_jobs(3, a, TEST_USER1)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Three equivalence classes.  
One for the resource eating job and\n        # one for each group.\n        self.scheduler.log_match(\"Number of job equivalence classes: 3\",\n                                 starttime=self.t)\n\n    @skipOnShasta\n    def test_group_queue_soft(self):\n        \"\"\"\n        Test to see that jobs from different groups fall into different\n        equivalence classes with queue group soft limits set\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        self.server.manager(MGR_CMD_SET, QUEUE,\n                            {'max_run_soft': '[g:PBS_GENERIC=4]'}, id='workq')\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8'}\n        J = Job(TEST_USER1, attrs=a)\n        self.server.submit(J)\n\n        a = {'group_list': TSTGRP1}\n        jids1 = self.submit_jobs(3, a, TEST_USER1)\n\n        a = {'group_list': TSTGRP2}\n        jids2 = self.submit_jobs(3, a, TEST_USER1)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Three equivalence classes.  
One for the resource eating job and\n        # one for each group.\n        self.scheduler.log_match(\"Number of job equivalence classes: 3\",\n                                 starttime=self.t)\n\n    def test_proj(self):\n        \"\"\"\n        Test to see that jobs from different projects fall into the same\n        equivalence class without project limits set\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8'}\n        J = Job(TEST_USER1, attrs=a)\n        self.server.submit(J)\n\n        a = {'project': 'p1'}\n        jids1 = self.submit_jobs(3, a)\n\n        a = {'project': 'p2'}\n        jids2 = self.submit_jobs(3, a)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Two equivalence classes: One for the resource eating job and\n        # one for the rest.  Since there are no limits, both projects are\n        # in one class.\n        self.scheduler.log_match(\"Number of job equivalence classes: 2\",\n                                 starttime=self.t)\n\n    def test_proj_server(self):\n        \"\"\"\n        Test to see that jobs from different projects fall into different\n        equivalence classes with server project limits set\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'max_run': '[p:PBS_GENERIC=4]'})\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8'}\n        J = Job(TEST_USER1, attrs=a)\n        self.server.submit(J)\n\n        a = {'project': 'p1'}\n        jids1 = self.submit_jobs(3, a)\n\n        a = {'project': 'p2'}\n        jids2 = self.submit_jobs(3, a)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                    
        {'scheduling': 'True'})\n\n        # Three equivalence classes.  One for the resource eating job and\n        # one for each project.\n        self.scheduler.log_match(\"Number of job equivalence classes: 3\",\n                                 starttime=self.t)\n\n    def test_proj_server_soft(self):\n        \"\"\"\n        Test to see that jobs from different projects fall into different\n        equivalence classes with server project soft limits set\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'max_run_soft': '[p:PBS_GENERIC=4]'})\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8'}\n        J = Job(TEST_USER1, attrs=a)\n        self.server.submit(J)\n\n        a = {'project': 'p1'}\n        jids1 = self.submit_jobs(3, a)\n\n        a = {'project': 'p2'}\n        jids2 = self.submit_jobs(3, a)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Three equivalence classes.  
One for the resource eating job and\n        # one for each project.\n        self.scheduler.log_match(\"Number of job equivalence classes: 3\",\n                                 starttime=self.t)\n\n    def test_proj_queue(self):\n        \"\"\"\n        Test to see that jobs from different projects fall into different\n        equivalence classes with queue project limits set\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        self.server.manager(MGR_CMD_SET, QUEUE,\n                            {'max_run': '[p:PBS_GENERIC=4]'}, id='workq')\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8'}\n        J = Job(TEST_USER1, attrs=a)\n        self.server.submit(J)\n\n        a = {'project': 'p1'}\n        jids1 = self.submit_jobs(3, a)\n\n        a = {'project': 'p2'}\n        jids2 = self.submit_jobs(3, a)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Three equivalence classes.  
One for the resource eating job and\n        # one for each project.\n        self.scheduler.log_match(\"Number of job equivalence classes: 3\",\n                                 starttime=self.t)\n\n    def test_proj_queue_soft(self):\n        \"\"\"\n        Test to see that jobs from different projects fall into different\n        equivalence classes with queue project soft limits set\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        self.server.manager(MGR_CMD_SET, QUEUE,\n                            {'max_run_soft': '[p:PBS_GENERIC=4]'}, id='workq')\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8'}\n        J = Job(TEST_USER1, attrs=a)\n        self.server.submit(J)\n\n        a = {'project': 'p1'}\n        jids1 = self.submit_jobs(3, a)\n\n        a = {'project': 'p2'}\n        jids2 = self.submit_jobs(3, a)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Three equivalence classes.  
One for the resource eating job and\n        # one for each project.\n        self.scheduler.log_match(\"Number of job equivalence classes: 3\",\n                                 starttime=self.t)\n\n    def test_queue(self):\n        \"\"\"\n        Test to see that jobs from different generic queues fall into\n        the same equivalence class\n        \"\"\"\n\n        self.server.manager(MGR_CMD_CREATE, QUEUE,\n                            {'queue_type': 'e', 'started': 'True',\n                             'enabled': 'True'}, id='workq2')\n\n        self.server.manager(MGR_CMD_SET, QUEUE,\n                            {'Priority': 120}, id='workq')\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8'}\n        J = Job(TEST_USER1, attrs=a)\n        self.server.submit(J)\n\n        a = {'queue': 'workq'}\n        jids1 = self.submit_jobs(3, a)\n\n        a = {'queue': 'workq2'}\n        jids2 = self.submit_jobs(3, a)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Two equivalence classes.  One for the resource eating job and\n        # one for the rest.  
There is nothing to differentiate the queues\n        # so all jobs are in one class.\n        self.scheduler.log_match(\"Number of job equivalence classes: 2\",\n                                 starttime=self.t)\n\n    def test_queue_limits(self):\n        \"\"\"\n        Test to see if jobs in a queue with limits use their queue as part\n        of what defines their equivalence class.\n        \"\"\"\n\n        self.server.manager(MGR_CMD_CREATE, QUEUE,\n                            {'queue_type': 'e', 'started': 'True',\n                             'enabled': 'True'}, id='workq2')\n\n        self.server.manager(MGR_CMD_CREATE, QUEUE,\n                            {'queue_type': 'e', 'started': 'True',\n                             'enabled': 'True'}, id='limits1')\n\n        self.server.manager(MGR_CMD_CREATE, QUEUE,\n                            {'queue_type': 'e', 'started': 'True',\n                             'enabled': 'True'}, id='limits2')\n\n        self.server.manager(MGR_CMD_SET, QUEUE,\n                            {'Priority': 120}, id='workq')\n\n        self.server.manager(MGR_CMD_SET, QUEUE,\n                            {'max_run': '[o:PBS_ALL=20]'}, id='limits1')\n\n        self.server.manager(MGR_CMD_SET, QUEUE,\n                            {'max_run_soft': '[o:PBS_ALL=20]'}, id='limits2')\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8'}\n        J = Job(TEST_USER1, attrs=a)\n        self.server.submit(J)\n\n        a = {'queue': 'workq'}\n        jids1 = self.submit_jobs(3, a)\n\n        a = {'queue': 'workq2'}\n        jids2 = self.submit_jobs(3, a)\n\n        a = {'queue': 'limits1'}\n        jids3 = self.submit_jobs(3, a)\n\n        a = {'queue': 'limits2'}\n        jids4 = self.submit_jobs(3, a)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 
'True'})\n\n        # Four equivalence classes.  One for the resource eating job,\n        # one for the queues without limits, and one\n        # each for the two queues with limits.\n        self.scheduler.log_match(\"Number of job equivalence classes: 4\",\n                                 starttime=self.t)\n\n    def test_queue_nodes(self):\n        \"\"\"\n        Test to see if jobs that are submitted into a queue with nodes\n        associated with it fall into their own equivalence class\n        \"\"\"\n\n        a = {'resources_available.ncpus': 8}\n        self.mom.create_vnodes(a, 2, usenatvnode=True)\n\n        self.server.manager(MGR_CMD_CREATE, QUEUE,\n                            {'queue_type': 'e', 'started': 'True',\n                             'enabled': 'True', 'Priority': 100}, id='workq2')\n\n        self.server.manager(MGR_CMD_CREATE, QUEUE,\n                            {'queue_type': 'e', 'started': 'True',\n                             'enabled': 'True'}, id='nodes_queue')\n        vn = self.mom.shortname + '[0]'\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {'queue': 'nodes_queue'}, id=vn)\n\n        self.server.manager(MGR_CMD_SET, QUEUE,\n                            {'Priority': 120}, id='workq')\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        # Eat up all the resources on the normal node\n        a = {'Resource_List.select': '1:ncpus=8', 'queue': 'workq'}\n        J = Job(TEST_USER1, attrs=a)\n        self.server.submit(J)\n\n        # Eat up all the resources on the node associated with nodes_queue\n        a = {'Resource_List.select': '1:ncpus=4', 'queue': 'nodes_queue'}\n        J = Job(TEST_USER1, attrs=a)\n        self.server.submit(J)\n\n        a = {'Resource_List.select': '1:ncpus=4', 'queue': 'workq'}\n        jids1 = self.submit_jobs(3, a)\n\n        a = {'Resource_List.select': '1:ncpus=4', 'queue': 'workq2'}\n        jids2 = 
self.submit_jobs(3, a)\n\n        a = {'Resource_List.select': '1:ncpus=4', 'queue': 'nodes_queue'}\n        jids3 = self.submit_jobs(3, a)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Three equivalence classes.  One for the resource eating job and\n        # one class for the queue with nodes associated with it.\n        # One class for normal queues.\n        self.scheduler.log_match(\"Number of job equivalence classes: 3\",\n                                 starttime=self.t)\n\n    def test_prime_queue(self):\n        \"\"\"\n        Test to see if a job in a primetime queue has its queue be part of\n        what defines its equivalence class.  Also see that jobs in anytime\n        queues do not use queue as part of what determines their class\n        \"\"\"\n\n        # Force primetime\n        self.scheduler.holidays_set_day(\"weekday\", prime=\"all\",\n                                        nonprime=\"none\")\n        self.scheduler.holidays_set_day(\"saturday\", prime=\"all\",\n                                        nonprime=\"none\")\n        self.scheduler.holidays_set_day(\"sunday\", prime=\"all\",\n                                        nonprime=\"none\")\n\n        self.server.manager(MGR_CMD_CREATE, QUEUE,\n                            {'queue_type': 'e', 'started': 'True',\n                             'enabled': 'True', 'Priority': 100},\n                            id='anytime1')\n        self.server.manager(MGR_CMD_CREATE, QUEUE,\n                            {'queue_type': 'e', 'started': 'True',\n                             'enabled': 'True'}, id='anytime2')\n\n        self.server.manager(MGR_CMD_CREATE, QUEUE,\n                            {'queue_type': 'e', 'started': 'True',\n                             'enabled': 'True', 'Priority': 100},\n                            id='p_queue1')\n        self.server.manager(MGR_CMD_CREATE, QUEUE,\n                            
{'queue_type': 'e', 'started': 'True',\n                             'enabled': 'True'}, id='p_queue2')\n\n        self.server.manager(MGR_CMD_SET, QUEUE,\n                            {'Priority': 120}, id='workq')\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8', 'queue': 'workq'}\n        J = Job(TEST_USER1, attrs=a)\n        self.server.submit(J)\n\n        a = {'Resource_List.select': '1:ncpus=4', 'queue': 'anytime1'}\n        jids1 = self.submit_jobs(3, a)\n\n        a = {'Resource_List.select': '1:ncpus=4', 'queue': 'anytime2'}\n        jids2 = self.submit_jobs(3, a)\n\n        a = {'Resource_List.select': '1:ncpus=4', 'queue': 'p_queue1'}\n        jids3 = self.submit_jobs(3, a)\n\n        a = {'Resource_List.select': '1:ncpus=4', 'queue': 'p_queue2'}\n        jids4 = self.submit_jobs(3, a)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Four equivalence classes.  One for the resource eating job and\n        # one for the normal queues and one for each prime time queue\n        self.scheduler.log_match(\"Number of job equivalence classes: 4\",\n                                 starttime=self.t)\n\n    def test_non_prime_queue(self):\n        \"\"\"\n        Test to see if a job in a non-primetime queue has its queue be part of\n        what defines its equivalence class.  
Also see that jobs in anytime\n        queues do not use queue as part of what determines their class\n        \"\"\"\n\n        # Force non-primetime\n        self.scheduler.holidays_set_day(\"weekday\", prime=\"none\",\n                                        nonprime=\"all\")\n        self.scheduler.holidays_set_day(\"saturday\", prime=\"none\",\n                                        nonprime=\"all\")\n        self.scheduler.holidays_set_day(\"sunday\", prime=\"none\",\n                                        nonprime=\"all\")\n\n        self.server.manager(MGR_CMD_CREATE, QUEUE,\n                            {'queue_type': 'e', 'started': 'True',\n                             'enabled': 'True', 'Priority': 100},\n                            id='anytime1')\n        self.server.manager(MGR_CMD_CREATE, QUEUE,\n                            {'queue_type': 'e', 'started': 'True',\n                             'enabled': 'True'}, id='anytime2')\n\n        self.server.manager(MGR_CMD_CREATE, QUEUE,\n                            {'queue_type': 'e', 'started': 'True',\n                             'enabled': 'True', 'Priority': 100},\n                            id='np_queue1')\n\n        self.server.manager(MGR_CMD_CREATE, QUEUE,\n                            {'queue_type': 'e', 'started': 'True',\n                             'enabled': 'True'}, id='np_queue2')\n\n        self.server.manager(MGR_CMD_SET, QUEUE,\n                            {'Priority': 120}, id='workq')\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8', 'queue': 'workq'}\n        J = Job(TEST_USER1, attrs=a)\n        self.server.submit(J)\n\n        a = {'Resource_List.select': '1:ncpus=4', 'queue': 'anytime1'}\n        jids1 = self.submit_jobs(3, a)\n\n        a = {'Resource_List.select': '1:ncpus=4', 'queue': 'anytime2'}\n        jids2 = self.submit_jobs(3, 
a)\n\n        a = {'Resource_List.select': '1:ncpus=4', 'queue': 'np_queue1'}\n        jids3 = self.submit_jobs(3, a)\n\n        a = {'Resource_List.select': '1:ncpus=4', 'queue': 'np_queue2'}\n        jids4 = self.submit_jobs(3, a)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Four equivalence classes.  One for the resource eating job and\n        # one for the normal queues and one for each non-prime time queue\n        self.scheduler.log_match(\"Number of job equivalence classes: 4\",\n                                 starttime=self.t)\n\n    def test_ded_time_queue(self):\n        \"\"\"\n        Test to see if a job in a dedicated time queue has its queue be part\n        of what defines its equivalence class.  Also see that jobs in anytime\n        queues do not use queue as part of what determines their class\n        \"\"\"\n\n        # Force dedicated time\n        now = time.time()\n        self.scheduler.add_dedicated_time(start=now - 5, end=now + 3600)\n\n        self.server.manager(MGR_CMD_CREATE, QUEUE,\n                            {'queue_type': 'e', 'started': 'True',\n                             'enabled': 'True', 'Priority': 100},\n                            id='ded_queue1')\n\n        self.server.manager(MGR_CMD_CREATE, QUEUE,\n                            {'queue_type': 'e', 'started': 'True',\n                             'enabled': 'True', 'Priority': 100},\n                            id='ded_queue2')\n\n        self.server.manager(MGR_CMD_SET, QUEUE,\n                            {'Priority': 120}, id='workq')\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8', 'queue': 'workq'}\n        J = Job(TEST_USER1, attrs=a)\n        self.server.submit(J)\n\n        a = {'Resource_List.select': '1:ncpus=4',\n             
'Resource_List.walltime': 600, 'queue': 'ded_queue1'}\n        jids1 = self.submit_jobs(3, a)\n\n        a = {'Resource_List.select': '1:ncpus=4',\n             'Resource_List.walltime': 600, 'queue': 'ded_queue2'}\n        jids2 = self.submit_jobs(3, a)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Three equivalence classes: one for the resource eating job and\n        # one for each dedicated time queue job\n        self.scheduler.log_match(\"Number of job equivalence classes: 3\",\n                                 starttime=self.t)\n\n    def test_job_array(self):\n        \"\"\"\n        Test that jobs of various types fall into a single equivalence\n        class when they make the same type of request.\n        \"\"\"\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=8', 'queue': 'workq'}\n        J = Job(TEST_USER1, attrs=a)\n        self.server.submit(J)\n\n        # Submit a job array\n        j = Job(TEST_USER)\n        j.set_attributes(\n            {ATTR_J: '1-3:1',\n             'Resource_List.select': '1:ncpus=8',\n             'queue': 'workq'})\n        jid = self.server.submit(j)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # One equivalence class\n        self.scheduler.log_match(\"Number of job equivalence classes: 1\",\n                                 starttime=self.t)\n\n    def test_reservation(self):\n        \"\"\"\n        Test that similar jobs inside a reservation fall under the same\n        equivalence class.\n        \"\"\"\n\n        # Submit a reservation\n        a = {'Resource_List.select': '1:ncpus=3',\n             'reserve_start': int(time.time()) + 10,\n             'reserve_end': int(time.time()) + 300, }\n        r = Reservation(TEST_USER, a)\n        rid = self.server.submit(r)\n        a = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        
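These tests lean on a `submit_jobs()` helper defined elsewhere in this test class; its implementation is not shown in this section. A minimal self-contained sketch of what such a helper might look like (the fabricated id format and the optional `submit` callback are illustrative assumptions, not the real PTL helper):

```python
# Hypothetical sketch of the submit_jobs() helper used throughout these
# tests; the real helper lives elsewhere in the test class.  It submits
# `num` jobs with identical attributes and returns their job ids so
# callers can unpack them, e.g. (jid1,) = submit_jobs(1).
def submit_jobs(num, attrs=None, user='pbsuser', submit=None):
    attrs = dict(attrs or {})
    jids = []
    for i in range(num):
        if submit is not None:
            # In the real suite: job = Job(user, attrs=attrs);
            # jid = self.server.submit(job)
            jid = submit(attrs, user)
        else:
            jid = '%d.server' % (i + 1)  # fabricated id, for illustration
        jids.append(jid)
    return jids

print(submit_jobs(3, {'Resource_List.ncpus': 1}))
# → ['1.server', '2.server', '3.server']
```

Returning a list keeps single-job submissions uniform with batches: callers unpack one-element results with `(jid,) = ...`, exactly as the tests below do.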
self.server.expect(RESV, a, id=rid)\n\n        rname = rid.split('.')\n        # Submit jobs inside reservation\n        a = {ATTR_queue: rname[0], 'Resource_List.select': '1:ncpus=1'}\n        jids1 = self.submit_jobs(3, a)\n\n        # Submit jobs outside of reservations\n        jids2 = self.submit_jobs(3)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Two equivalence classes: one for jobs inside reservations\n        # and one for regular jobs\n        self.scheduler.log_match(\"Number of job equivalence classes: 2\",\n                                 starttime=self.t)\n\n    def test_time_limit(self):\n        \"\"\"\n        Test that various time limits will have their own\n        equivalence classes\n        \"\"\"\n\n        # Submit a reservation\n        a = {'Resource_List.select': '1:ncpus=8',\n             'reserve_start': time.time() + 30,\n             'reserve_end': time.time() + 300, }\n        r = Reservation(TEST_USER, a)\n        rid = self.server.submit(r)\n        a = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, a, id=rid)\n\n        rname = rid.split('.')\n\n        # Submit jobs with cput limit inside reservation\n        a = {'Resource_List.cput': '20', ATTR_queue: rname[0]}\n        jid1 = self.submit_jobs(2, a)\n\n        # Submit jobs with min and max walltime inside reservation\n        a = {'Resource_List.min_walltime': '20',\n             'Resource_List.max_walltime': '200',\n             ATTR_queue: rname[0]}\n        jid2 = self.submit_jobs(2, a)\n\n        # Submit jobs with regular walltime inside reservation\n        a = {'Resource_List.walltime': '20', ATTR_queue: rname[0]}\n        jid3 = self.submit_jobs(2, a)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Three equivalence classes: one for each job set\n        self.scheduler.log_match(\"Number 
of job equivalence classes: 3\",\n                                 starttime=self.t)\n\n    def test_fairshare(self):\n        \"\"\"\n        Test that scheduler do not create any equiv classes\n        if fairshare is set\n        \"\"\"\n\n        a = {'fair_share': 'true ALL',\n             'fairshare_usage_res': 'ncpus*walltime',\n             'unknown_shares': 10}\n        self.scheduler.set_sched_config(a)\n\n        # Submit jobs as different user\n        jid1 = self.submit_jobs(8, user=TEST_USER1)\n        jid2 = self.submit_jobs(8, user=TEST_USER2)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # One equivalence class\n        self.scheduler.log_match(\"Number of job equivalence classes: 1\",\n                                 starttime=self.t)\n\n        # Wait sometime for jobs to accumulate walltime\n        time.sleep(20)\n\n        # Submit another job\n        self.t = time.time()\n        jid3 = self.submit_jobs(1, user=TEST_USER3)\n\n        # Look at the job equivalence classes again\n        self.scheduler.log_match(\"Number of job equivalence classes: 1\",\n                                 starttime=self.t)\n\n    def test_server_hook(self):\n        \"\"\"\n        Test that job equivalence classes are updated\n        when job attributes get updated by hooks\n        \"\"\"\n\n        # Define a queuejob hook\n        hook1 = \"\"\"\nimport pbs\ne = pbs.event()\ne.job.Resource_List[\"walltime\"] = 200\n\"\"\"\n\n        # Define a runjob hook\n        hook2 = \"\"\"\nimport pbs\ne = pbs.event()\ne.job.Resource_List[\"cput\"] = 40\n\"\"\"\n\n        # Define a modifyjob hook\n        hook3 = \"\"\"\nimport pbs\ne = pbs.event()\ne.job.Resource_List[\"cput\"] = 20\n\"\"\"\n\n        # Create a queuejob hook\n        a = {'event': 'queuejob', 'enabled': 'True'}\n        self.server.create_import_hook(\"t_q\", a, hook1)\n\n        # Create a runjob hook\n        a = {'event': 
'runjob', 'enabled': 'True'}\n        self.server.create_import_hook(\"t_r\", a, hook2)\n\n        # Create a modifyjob hook\n        a = {'event': 'modifyjob', 'enabled': 'True'}\n        self.server.create_import_hook(\"t_m\", a, hook3)\n\n        # Turn scheduling off\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        # Submit jobs as different users\n        a = {'Resource_List.ncpus': 2}\n        jid1 = self.submit_jobs(4, a, user=TEST_USER1)\n        jid2 = self.submit_jobs(4, a, user=TEST_USER2)\n        jid3 = self.submit_jobs(4, a, user=TEST_USER3)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # One equivalence class\n        self.scheduler.log_match(\"Number of job equivalence classes: 1\",\n                                 starttime=self.t)\n\n        # Alter a queued job\n        self.t = time.time()\n        self.server.alterjob(jid3[2], {ATTR_N: \"test\"})\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Three equivalence classes: one is for queued jobs that\n        # do not have cput set. 
2 for the different cput values\n        # set by the runjob and modifyjob hooks\n        self.scheduler.log_match(\"Number of job equivalence classes: 3\",\n                                 starttime=self.t)\n\n    def test_mom_hook(self):\n        \"\"\"\n        Test for job equivalence classes with mom hooks.\n        \"\"\"\n\n        # Create resource\n        attrib = {}\n        attrib['type'] = \"string_array\"\n        attrib['flag'] = 'h'\n        self.server.manager(MGR_CMD_CREATE, RSC, attrib, id='foo_str')\n\n        # Create vnodes\n        a = {'resources_available.ncpus': 4,\n             'resources_available.foo_str': \"foo,bar,buba\"}\n        self.mom.create_vnodes(a, 4)\n\n        # Add resources to sched_config\n        self.scheduler.add_resource(\"foo_str\")\n\n        # Create execjob_begin hook\n        hook1 = \"\"\"\nimport pbs\ne = pbs.event()\nj = e.job\n\nif j.Resource_List[\"host\"] == \"vnode[0]\":\n    j.Resource_List[\"foo_str\"] = \"foo\"\nelif j.Resource_List[\"host\"] == \"vnode[1]\":\n    j.Resource_List[\"foo_str\"] = \"bar\"\nelse:\n    j.Resource_List[\"foo_str\"] = \"buba\"\n\"\"\"\n\n        a = {'event': \"execjob_begin\", 'enabled': 'True'}\n        self.server.create_import_hook(\"test\", a, hook1)\n\n        # Turn scheduling off\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n\n        # Submit jobs\n        a = {'Resource_List.select': \"vnode=vnode[0]:ncpus=2\"}\n        jid1 = self.submit_jobs(2, a)\n        a = {'Resource_List.select': \"vnode=vnode[1]:ncpus=2\"}\n        jid2 = self.submit_jobs(2, a)\n        a = {'Resource_List.select': \"vnode=vnode[2]:ncpus=2\"}\n        jid3 = self.submit_jobs(2, a)\n\n        # Turn scheduling on\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Three equivalence classes, one for each string value\n        # set by the mom hook\n        
self.scheduler.log_match(\"Number of job equivalence classes: 3\",\n                                 starttime=self.t)\n\n    def test_incr_decr(self):\n        \"\"\"\n        Test that the number of job equivalence classes increases and\n        decreases as jobs are submitted and deleted\n        \"\"\"\n\n        # Submit a job\n        j = Job(TEST_USER,\n                attrs={'Resource_List.select': '1:ncpus=8',\n                       'Resource_List.walltime': '20'})\n        jid1 = self.server.submit(j)\n\n        # One equivalence class\n        self.scheduler.log_match(\"Number of job equivalence classes: 1\",\n                                 starttime=self.t)\n\n        # Submit another job\n        self.t = time.time()\n        j = Job(TEST_USER,\n                attrs={'Resource_List.select': '1:ncpus=8',\n                       'Resource_List.walltime': '30'})\n        jid2 = self.server.submit(j)\n\n        # Two equivalence classes\n        self.scheduler.log_match(\"Number of job equivalence classes: 2\",\n                                 starttime=self.t)\n\n        # Submit another job\n        self.t = time.time()\n        j = Job(TEST_USER,\n                attrs={'Resource_List.select': '1:ncpus=8',\n                       'Resource_List.walltime': '40'})\n        jid3 = self.server.submit(j)\n\n        # Three equivalence classes\n        self.scheduler.log_match(\"Number of job equivalence classes: 3\",\n                                 starttime=self.t)\n\n        # Delete job1\n        self.server.delete(jid1, wait='True')\n\n        # Rerun scheduling cycle\n        self.t = time.time()\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # Two equivalence classes\n        self.scheduler.log_match(\"Number of job equivalence classes: 2\",\n                                 starttime=self.t)\n\n        # Delete job2\n        self.server.delete(jid2, wait='true')\n\n        # Rerun scheduling cycle\n        self.t = time.time()\n        
self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # One equivalence class\n        self.scheduler.log_match(\"Number of job equivalence classes: 1\",\n                                 starttime=self.t)\n\n        # Delete job3\n        self.server.delete(jid3, wait='true')\n\n        time.sleep(1)  # brief delay to avoid a race condition\n        # Rerun scheduling cycle\n        self.t = time.time()\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n\n        # No equivalence class message\n        self.scheduler.log_match(\"Number of job equivalence classes\",\n                                 starttime=self.t,\n                                 existence=False)\n        self.logger.info(\n            \"Number of job equivalence classes message \" +\n            \"not present when there are no jobs as expected\")\n\n    def test_server_queue_limit(self):\n        \"\"\"\n        Test with a mix of hard and soft limits\n        on resources for users and groups\n        \"\"\"\n\n        # Create workq2\n        self.server.manager(MGR_CMD_CREATE, QUEUE,\n                            {'queue_type': 'e', 'started': 'True',\n                             'enabled': 'True'}, id='workq2')\n\n        # Set queue limit\n        a = {\n            'max_run': '[o:PBS_ALL=100],[g:PBS_GENERIC=20],\\\n                       [u:PBS_GENERIC=20],[g:%s = 8],[u:%s=10]' %\n                       (str(TSTGRP1), str(TEST_USER1))}\n        self.server.manager(MGR_CMD_SET, QUEUE,\n                            a, id='workq2')\n\n        a = {'max_run_res.ncpus':\n             '[o:PBS_ALL=100],[g:PBS_GENERIC=50],\\\n             [u:PBS_GENERIC=20],[g:%s=13],[u:%s=12]' %\n             (str(TSTGRP1), str(TEST_USER1))}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, id='workq2')\n\n        a = {'max_run_res_soft.ncpus':\n             '[o:PBS_ALL=100],[g:PBS_GENERIC=30],\\\n        
     [u:PBS_GENERIC=10],[g:%s=10],[u:%s=10]' %\n             (str(TSTGRP1), str(TEST_USER1))}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, id='workq2')\n\n        # Create server limits\n        a = {\n            'max_run': '[o:PBS_ALL=100],[g:PBS_GENERIC=50],\\\n            [u:PBS_GENERIC=20],[g:%s=13],[u:%s=13]' %\n            (str(TSTGRP1), str(TEST_USER1))}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {'max_run_soft':\n             '[o:PBS_ALL=50],[g:PBS_GENERIC=25],[u:PBS_GENERIC=10],\\\n             [g:%s=10],[u:%s=10]' % (str(TSTGRP1), str(TEST_USER1))}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Turn scheduling off\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'false'})\n\n        # Submit jobs as pbsuser1 from group tstgrp01 in workq2\n        a = {'Resource_List.select': '1:ncpus=1',\n             'group_list': TSTGRP1, ATTR_q: 'workq2'}\n        jid1 = self.submit_jobs(10, a, TEST_USER1)\n\n        # Submit jobs as pbsuser1 from group tstgrp02 in workq2\n        a = {'Resource_List.select': '1:ncpus=1',\n             'group_list': TSTGRP2, ATTR_q: 'workq2'}\n        jid2 = self.submit_jobs(10, a, TEST_USER1)\n\n        # Submit jobs as pbsuser2 from tstgrp01 in workq2\n        a = {'Resource_List.select': '1:ncpus=1',\n             'group_list': TSTGRP1, ATTR_q: 'workq2'}\n        jid3 = self.submit_jobs(10, a, TEST_USER2)\n\n        # Submit jobs as pbsuser2 from tstgrp03 in workq2\n        a = {'Resource_List.select': '1:ncpus=1',\n             'group_list': TSTGRP3, ATTR_q: 'workq2'}\n        jid4 = self.submit_jobs(10, a, TEST_USER2)\n\n        # Submit jobs as pbsuser1 from tstgrp01 in workq\n        a = {'Resource_List.select': '1:ncpus=1',\n             'group_list': TSTGRP1, ATTR_q: 'workq'}\n        jid5 = self.submit_jobs(10, a, TEST_USER1)\n\n        # Submit jobs as pbsuser1 from tstgrp02 in workq\n        a = {'Resource_List.select': '1:ncpus=1',\n             
'group_list': TSTGRP2, ATTR_q: 'workq'}\n        jid6 = self.submit_jobs(10, a, TEST_USER1)\n\n        # Submit jobs as pbsuser2 from tstgrp01 in workq\n        a = {'Resource_List.select': '1:ncpus=1',\n             'group_list': TSTGRP1, ATTR_q: 'workq'}\n        jid7 = self.submit_jobs(10, a, TEST_USER2)\n\n        # Submit jobs as pbsuser2 from tstgrp03 in workq\n        a = {'Resource_List.select': '1:ncpus=1',\n             'group_list': TSTGRP3, ATTR_q: 'workq'}\n        jid8 = self.submit_jobs(10, a, TEST_USER2)\n\n        self.t = time.time()\n\n        # Run only one cycle\n        self.server.manager(MGR_CMD_SET, MGR_OBJ_SERVER,\n                            {'scheduling': 'True'})\n        self.server.manager(MGR_CMD_SET, MGR_OBJ_SERVER,\n                            {'scheduling': 'False'})\n\n        # Eight equivalence classes; one for each combination of\n        # users and groups\n        self.scheduler.log_match(\"Number of job equivalence classes: 8\",\n                                 starttime=self.t)\n\n    def test_preemption(self):\n        \"\"\"\n        Suspended jobs are placed into their own equivalence class.  If\n        they remain in the class they were in when they were queued, they\n        can stop other jobs in that class from running.\n\n        Equivalence classes are created in query-order.  
Test to see if\n        suspended job which comes first in query-order is added to its own\n        class.\n        \"\"\"\n\n        a = {'resources_available.ncpus': 1}\n        self.mom.create_vnodes(a, 4, usenatvnode=True)\n\n        a = {'queue_type': 'e', 'started': 't',\n             'enabled': 't', 'Priority': 150}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='expressq')\n\n        (jid1, ) = self.submit_jobs(1)\n        (jid2, ) = self.submit_jobs(1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n\n        a = {'Resource_List.ncpus': 3, 'queue': 'expressq'}\n        (jid3,) = self.submit_jobs(1, a)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n\n        # Make sure one of the job is suspended\n        sus_job = self.server.select(attrib={'job_state': 'S'})\n        self.assertEqual(len(sus_job), 1,\n                         \"Either no or more jobs are suspended\")\n        self.logger.info(\"Job %s is suspended\" % sus_job[0])\n\n        (jid4,) = self.submit_jobs(1)\n        self.server.expect(JOB, 'comment', op=SET)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid4)\n\n        # 3 equivalence classes: 1 for jid2 and jid4; 1 for jid3; and 1 for\n        # jid1 by itself because it is suspended.\n        self.scheduler.log_match(\"Number of job equivalence classes: 3\",\n                                 starttime=self.t)\n\n        # Make sure suspended job is in its own class. If it is still in\n        # jid4's class jid4 will not run.  
This is because suspended job\n        # will be considered first and mark the entire class as can not run.\n        if sus_job[0] == jid2:\n            self.server.deljob(jid1, wait=True)\n        else:\n            self.server.deljob(jid2, wait=True)\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid4)\n\n    def test_preemption2(self):\n        \"\"\"\n        Suspended jobs are placed into their own equivalence class.  If\n        they remain in the class they were in when they were queued, they\n        can stop other jobs in that class from running.\n\n        Equivalence classes are created in query-order.  Test to see if\n        suspended job which comes later in query-order is added to its own\n        class instead of the class it was in when it was queued.\n        \"\"\"\n\n        a = {'resources_available.ncpus': 1}\n        self.mom.create_vnodes(a, 4, usenatvnode=True)\n\n        a = {'queue_type': 'e', 'started': 't',\n             'enabled': 't', 'Priority': 150}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='expressq')\n\n        a = {'preempt_sort': 'min_time_since_start'}\n        self.server.manager(MGR_CMD_SET, SCHED, a)\n\n        (jid1,) = self.submit_jobs(1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n\n        (jid2,) = self.submit_jobs(1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n\n        # Jobs most recently started are suspended first.\n        # Sleep for a second to force jid3 to be suspended.\n        time.sleep(1)\n        (jid3,) = self.submit_jobs(1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n\n        a = {'Resource_List.ncpus': 2, 'queue': 'expressq'}\n        (jid4,) = self.submit_jobs(1, a)\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid3)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid4)\n\n    
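Both preemption tests set `preempt_sort` to `min_time_since_start`, meaning the scheduler picks the most recently started job as the first preemption victim. A rough model of that victim ordering (the job ids and timestamps below are made up for illustration):

```python
# Model of preempt_sort = min_time_since_start: preemption candidates are
# ordered newest-start-first, so the job that started most recently is
# suspended first.
def preemption_order(start_times):
    """start_times: dict of jobid -> start timestamp; newest first."""
    return sorted(start_times, key=start_times.get, reverse=True)

running = {'jid1': 100.0, 'jid2': 101.0, 'jid3': 102.0}  # jid3 started last
print(preemption_order(running))  # → ['jid3', 'jid2', 'jid1']
```

This is why the tests sleep for a second between submissions: the delay forces distinct start times so the preemption victim is deterministic.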
    (jid5,) = self.submit_jobs(1)\n        self.server.expect(JOB, 'comment', op=SET)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid5)\n\n        # 3 equivalence classes: 1 for jid1, jid2, and jid5; 1 for jid4;\n        # and 1 for jid3 by itself because it is suspended.\n        self.scheduler.log_match(\"Number of job equivalence classes: 3\",\n                                 starttime=self.t)\n\n        # Make sure jid3 is in its own class.  If it is still in jid5's class\n        # jid5 will not run.  This is because jid3 will be considered first\n        # and mark the entire class as unable to run.\n\n        self.server.deljob(jid2, wait=True)\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid5)\n\n    def test_multiple_job_preemption_order(self):\n        \"\"\"\n        1) Test that when multiple jobs from the same equivalence class are\n        preempted in the reverse order they were created in, they are placed\n        into the same equivalence class.\n        2) Test that for jobs of the same type, a suspended job which comes\n        later in query-order is in its own equivalence class, and can\n        be picked up to run along with the queued job in\n        the same scheduling cycle.\n        \"\"\"\n\n        # Create 1 vnode with 3 ncpus\n        a = {'resources_available.ncpus': 3}\n        self.mom.create_vnodes(a, 1, usenatvnode=True)\n\n        # Create expressq\n        a = {'queue_type': 'execution', 'started': 'true',\n             'enabled': 'true', 'Priority': 150}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='expressq')\n\n        a = {'preempt_sort': 'min_time_since_start'}\n        self.server.manager(MGR_CMD_SET, SCHED, a)\n\n        # Submit 3 jobs with a 1 second delay between them.\n        # With min_time_since_start, jid3 is preempted first, then jid2.\n        a = {'Resource_List.ncpus': 1}\n        J = Job(TEST_USER, attrs=a)\n        jid1 = self.server.submit(J)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n\n        
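Conceptually, two jobs share an equivalence class when everything the scheduler considers matches: the resource request, plus the queue itself when that queue carries limits, attached nodes, or prime/dedicated time. A toy model of that grouping, which reproduces the counts these tests assert on (the signature fields are illustrative, not the scheduler's internal key):

```python
# Toy model of equivalence-class grouping: jobs with identical
# scheduling-relevant signatures land in the same class.
def count_equiv_classes(jobs, special_queues=()):
    classes = set()
    for job in jobs:
        # The queue is only part of the signature when it affects
        # scheduling (limits, associated nodes, prime/dedicated time).
        queue = job.get('queue')
        if queue not in special_queues:
            queue = None
        classes.add((job.get('select'), job.get('walltime'), queue))
    return len(classes)

jobs = [
    {'select': '1:ncpus=4', 'queue': 'workq'},
    {'select': '1:ncpus=4', 'queue': 'workq2'},   # same class as above
    {'select': '1:ncpus=4', 'queue': 'limits1'},  # own class: queue has limits
]
print(count_equiv_classes(jobs, special_queues=('limits1',)))  # → 2
```

Under this model the first two jobs collapse into one class because their queues add nothing to the signature, matching the behavior test_queue_limits checks above.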
time.sleep(1)\n\n        J2 = Job(TEST_USER, attrs=a)\n        jid2 = self.server.submit(J2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n\n        time.sleep(1)\n        J3 = Job(TEST_USER, attrs=a)\n        jid3 = self.server.submit(J3)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n\n        # Preempt jid3 with expressq, check 1 equivalence class is created\n        a = {'Resource_List.ncpus': 1, 'queue': 'expressq'}\n        Je = Job(TEST_USER, attrs=a)\n        jid4 = self.server.submit(Je)\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid3)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid4)\n\n        self.scheduler.log_match(\"Number of job equivalence classes: 2\",\n                                 starttime=self.t)\n        self.t = time.time()\n\n        # Preempt jid2, check no new equivalence class is created\n        Je2 = Job(TEST_USER, attrs=a)\n        jid5 = self.server.submit(Je2)\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid3)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid4)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid5)\n\n        # Only One equivalence class for jid2 and jid3 is present since both\n        # suspended jobs are of same type and running on same vnode\n\n        self.scheduler.log_match(\"Number of job equivalence classes: 2\",\n                                 starttime=self.t)\n\n        # Add a job to Queue state\n        a = {'Resource_List.ncpus': 1}\n        J = Job(TEST_USER, attrs=a)\n        
jid6 = self.server.submit(J)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid3)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid4)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid5)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid6)\n\n        # Set scheduling to false before deleting jobs to free nodes, so that\n        # suspended and queued jobs do not run. These jobs will be picked up\n        # in the next scheduling cycle when scheduling is again set to true\n        self.server.manager(MGR_CMD_SET, MGR_OBJ_SERVER,\n                            {'scheduling': 'False'})\n\n        # Delete one running job, one suspended job, and one high priority\n        # job.  This leaves 2 free nodes to pick up the suspended and\n        # queued jobs.\n\n        self.server.deljob([jid1, jid2, jid5])\n\n        # deljob(wait=True) could kick off a scheduling cycle if the jobs\n        # take too long to be deleted.\n        # The loop below checks that the jobs have been deleted without\n        # kicking off a new scheduling cycle.\n\n        deleted = False\n        for _ in range(20):\n            workq_dict = self.server.status(QUEUE, id='workq')[0]\n            expressq_dict = self.server.status(QUEUE, id='expressq')[0]\n\n            if workq_dict['total_jobs'] == '2'\\\n                    and expressq_dict['total_jobs'] == '1':\n                deleted = True\n                break\n            else:\n                # jobs take longer than one second to delete, use two seconds\n                time.sleep(2)\n\n        self.assertTrue(deleted)\n\n        self.server.manager(MGR_CMD_SET, MGR_OBJ_SERVER,\n                            {'scheduling': 'True'})\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid6)\n\n    def 
test_multiple_equivalence_class_preemption(self):\n        \"\"\"\n        This test verifies that -\n        1) Suspended jobs of different types go to different equiv classes\n        2) Different types of jobs suspended by a qsig signal\n        go to different equivalence classes\n        3) Jobs of the same type on the same node go to the same\n        equivalence class on suspension by qsig or preemption\n        4) Jobs of the same type suspended by qsig and later resumed,\n        and jobs suspended by preemption, go to the same equivalence class\n        \"\"\"\n\n        # Create vnode with 4 ncpus\n        a = {'resources_available.ncpus': 4}\n        self.mom.create_vnodes(a, 1, usenatvnode=True)\n\n        # Create an expressq\n        a = {'queue_type': 'execution', 'started': 'true',\n             'enabled': 'true', 'Priority': 150}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='expressq')\n\n        # Submit regular jobs\n        a = {'Resource_List.ncpus': 1}\n        (jid1, jid2) = self.submit_jobs(2, a)\n\n        # Submit jobs with walltime\n        a2 = {'Resource_List.ncpus': 1, 'Resource_List.walltime': 600}\n        (jid3, jid4) = self.submit_jobs(2, a2)\n\n        self.scheduler.log_match(\"Number of job equivalence classes: 2\",\n                                 starttime=self.t)\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid4)\n\n        # Suspend 1 job from each equivalence class\n        self.server.sigjob(jobid=jid1, signal=\"suspend\")\n        self.server.sigjob(jobid=jid3, signal=\"suspend\")\n\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid3)\n        self.server.expect(JOB, {'job_state': 'R'}, 
id=jid4)\n\n        # Check that both suspended jobs go to different equivalence classes:\n        # 1 for jid1, 1 for jid2, 1 for jid3, and 1 for jid4\n        self.scheduler.log_match(\"Number of job equivalence classes: 4\",\n                                 starttime=self.t)\n\n        # Start a high priority job to preempt jid2 and jid4\n        a = {'Resource_List.ncpus': 4, 'queue': 'expressq'}\n        Je = Job(TEST_USER, attrs=a)\n        jid5 = self.server.submit(Je)\n\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid3)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid4)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid5)\n\n        # Check that only 3 equivalence classes are present,\n        # i.e. 1 equivalence class for jid1 and jid2, 1 equivalence class\n        # for jid3 and jid4, and 1 equivalence class for jid5\n\n        self.scheduler.log_match(\"Number of job equivalence classes: 3\",\n                                 starttime=self.t)\n        self.t = time.time()\n\n        # Resume the jobs suspended by qsig\n        # A 1 second delay is added so that the time of the next log entry\n        # moves ahead. This makes sure log_match does not match a previous\n        # entry.\n        time.sleep(1)\n        self.server.sigjob(jobid=jid1, signal=\"resume\")\n        self.server.sigjob(jobid=jid3, signal=\"resume\")\n\n        # On resume, check that there is the same number of equivalence\n        # classes\n        self.scheduler.log_match(\"Number of job equivalence classes: 3\",\n                                 starttime=self.t)\n        self.t = time.time()\n\n        # Delete the expressq job and check that the suspended jobs\n        # go back to running state. 
Equivalence classes should be 2 again\n        self.server.deljob(jid5, wait=True)\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid4)\n\n        # Check that equivalence classes = 2\n        self.scheduler.log_match(\"Number of job equivalence classes: 2\",\n                                 starttime=self.t)\n\n    def test_held_jobs_equiv_class(self):\n        \"\"\"\n        1) Test that held jobs do not go into another equivalence class.\n        2) Running jobs do not go into a separate equivalence class\n        \"\"\"\n\n        a = {'resources_available.ncpus': 1}\n        self.mom.create_vnodes(a, 1, usenatvnode=True)\n\n        a = {'Resource_List.select': '1:ncpus=1', ATTR_h: None}\n        J1 = Job(TEST_USER, attrs=a)\n        jid1 = self.server.submit(J1)\n\n        a = {'Resource_List.select': '1:ncpus=1'}\n        J2 = Job(TEST_USER, attrs=a)\n        jid2 = self.server.submit(J2)\n\n        self.server.expect(JOB, {'job_state': 'H'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n\n        self.scheduler.log_match(\"Number of job equivalence classes: 1\",\n                                 starttime=self.t)\n\n    def test_queue_resav(self):\n        \"\"\"\n        Test that jobs in queues with resources_available limits use queue as\n        part of the criteria for making an equivalence class\n        \"\"\"\n\n        a = {'resources_available.ncpus': 2}\n        self.mom.create_vnodes(a, 1, usenatvnode=True)\n\n        attrs = {'queue_type': 'Execution', 'started': 'True',\n                 'enabled': 'True', 'resources_available.ncpus': 1,\n                 'Priority': 10}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, attrs, id='workq2')\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        a = 
{'queue': 'workq', 'Resource_List.select': '1:ncpus=1'}\n        a2 = {'queue': 'workq2', 'Resource_List.select': '1:ncpus=1'}\n        J = Job(TEST_USER, attrs=a)\n        jid1 = self.server.submit(J)\n\n        J = Job(TEST_USER, attrs=a2)\n        jid2 = self.server.submit(J)\n\n        J = Job(TEST_USER, attrs=a2)\n        jid3 = self.server.submit(J)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid3)\n\n        # 2 equivalence classes - one for jobs inside workq2\n        # and one for jobs inside workq\n        self.scheduler.log_match(\"Number of job equivalence classes: 2\",\n                                 starttime=self.t)\n\n    def test_overlap_resv(self):\n        \"\"\"\n        Test that 2 overlapping reservations create 2 different\n        equivalence classes\n        \"\"\"\n\n        # Submit two overlapping reservations\n        a = {'Resource_List.select': '1:ncpus=1',\n             'reserve_start': int(time.time()) + 20,\n             'reserve_end': int(time.time()) + 300, }\n        r1 = Reservation(TEST_USER, a)\n        rid1 = self.server.submit(r1)\n        r2 = Reservation(TEST_USER, a)\n        rid2 = self.server.submit(r2)\n        a = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, a, id=rid1)\n        self.server.expect(RESV, a, id=rid2)\n\n        r1name = rid1.split('.')\n        r2name = rid2.split('.')\n        a = {ATTR_queue: r1name[0], 'Resource_List.select': '1:ncpus=1'}\n        j1 = Job(TEST_USER, a)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, 'comment', op=SET, id=jid1)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid1)\n        j2 = Job(TEST_USER, a)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, 'comment', op=SET, id=jid2)\n       
 self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n\n        a = {ATTR_queue: r2name[0], 'Resource_List.select': '1:ncpus=1'}\n        j3 = Job(TEST_USER, a)\n        jid3 = self.server.submit(j3)\n        self.server.expect(JOB, 'comment', op=SET, id=jid3)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid3)\n        j4 = Job(TEST_USER, a)\n        jid4 = self.server.submit(j4)\n        self.server.expect(JOB, 'comment', op=SET, id=jid4)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid4)\n\n        # Wait for the reservations to start\n        self.server.expect(RESV, {'reserve_state=RESV_RUNNING': 2}, offset=20)\n\n        # Verify that the number of equivalence classes is 2; one for\n        # each reservation queue\n        self.scheduler.log_match(\"Number of job equivalence classes: 2\",\n                                 starttime=self.t)\n\n        # Verify that one job from R1 is running and\n        # one job from R2 is running\n\n        self.server.expect(JOB, {\"job_state\": 'R'}, id=jid1)\n        self.server.expect(JOB, {\"job_state\": 'R'}, id=jid3)\n\n    def test_limit_res(self):\n        \"\"\"\n        Test the case where limits are set on resources that are not\n        in the sched_config resources line.  
Jobs requesting these resources\n        should be split into their own equivalence classes.\n        \"\"\"\n        a = {ATTR_RESC_TYPE: 'long'}\n        self.server.manager(MGR_CMD_CREATE, RSC, a, id='foores')\n\n        a = {'max_run_res.foores': '[u:PBS_GENERIC=4]'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {'Resource_List.foores': 1, 'Resource_List.select': '1:ncpus=1'}\n        self.submit_jobs(2, a)\n        a['Resource_List.foores'] = 2\n        (_, jid4) = self.submit_jobs(2, a)\n        self.server.expect(JOB, {'job_state=R': 3})\n        self.server.expect(JOB, 'comment', op=SET, id=jid4)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid4)\n        (jid5, ) = self.submit_jobs(1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid5)\n\n        # Verify that the number of equivalence classes is 3; one for\n        # foores=1, one for foores=2, and\n        # one for no foores\n        self.scheduler.log_match(\"Number of job equivalence classes: 3\",\n                                 starttime=self.t)\n\n    def change_res(self, name, total, node_num, attribs):\n        \"\"\"\n        Callback to change the value of memory on one of the nodes\n\n        :param name: Name of the vnode which is being created\n        :type name: str\n        :param total: Total number of vnodes being created\n        :type total: int\n        :param node_num: Index of the node for which callback is being called\n        :type node_num: int\n        :param attribs: attributes of the node being created\n        :type attribs: dict\n        \"\"\"\n        if node_num % 2 != 0:\n            attribs['resources_available.mem'] = '16gb'\n        else:\n            attribs['resources_available.mem'] = '4gb'\n        return attribs\n\n    def test_equiv_class_not_marked_on_suspend(self):\n        \"\"\"\n        Test that if a job is suspended then the scheduler does not mark its\n        equivalence class as can_not_run within the same cycle when it gets\n 
       suspended.\n        \"\"\"\n        a = {'resources_available.ncpus': 2}\n        self.mom.create_vnodes(a, 2,\n                               attrfunc=self.change_res)\n\n        # Create an express queue\n        a = {'queue_type': 'execution', 'started': 'true',\n             'enabled': 'true', 'Priority': 200}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='wq2')\n        # Set node sort key so that the higher memory node comes up first\n        a = {'node_sort_key': '\"mem HIGH\" ALL'}\n        self.scheduler.set_sched_config(a)\n\n        a = {'Resource_List.select': '1:ncpus=1:mem=4gb'}\n        (jid1, ) = self.submit_jobs(1, a)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n\n        # Turn off scheduling\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        # submit another normal job\n        (jid2, ) = self.submit_jobs(1, a)\n\n        # submit a high priority job\n        a = {'queue': 'wq2', 'Resource_List.select': '1:ncpus=2:mem=14gb',\n             'Resource_List.place': 'excl'}\n        (jidh, ) = self.submit_jobs(1, a)\n\n        # Turn on scheduling\n        st = time.time()\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jidh)\n\n        # make sure that the second job ran in the same cycle as the high\n        # priority job\n        c = self.scheduler.cycles(start=st)\n        found = False\n        for sched_cycle in c:\n            if jidh.split('.')[0] in sched_cycle.sched_job_run:\n                found = True\n                break\n        self.assertTrue(found, \"%s was not found in any sched cycle\" % jidh)\n        self.assertIn(jid2.split('.')[0], sched_cycle.sched_job_run)\n
  },
  {
    "path": "test/tests/functional/pbs_exceeded_resources_notification.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nfrom tests.functional import *\n\n\nclass TestExceededResourcesNotification(TestFunctional):\n    \"\"\"\n    This test suite tests exceeding resources notification.\n    The notification is done via email, job comment, and job exit code.\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n\n        a = {'job_history_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n    def check_mail(self, jid, msg):\n        \"\"\"\n        Check the mail file of TEST_USER for message.\n        \"\"\"\n\n        mailfile = os.path.join('/var/mail', str(TEST_USER))\n\n        self.logger.info('Wait 3s for saving the e-mail')\n        time.sleep(3)\n\n        if not os.path.isfile(mailfile):\n            self.skip_test(\"Mail file '%s' does not exist or \"\n                           \"mail is not setup. \"\n                           \"Hence this step would be skipped. 
\"\n                           \"Please check manually.\" % mailfile)\n\n        ret = self.du.tail(filename=mailfile, runas=TEST_USER, option='-n 10')\n        maillog = [x.strip() for x in ret['out']]\n\n        self.assertIn('PBS Job Id: ' + jid, maillog)\n        self.assertIn(msg, maillog)\n\n    def test_exceeding_walltime(self):\n        \"\"\"\n        This test checks exceeding walltime.\n        \"\"\"\n\n        a = {'Resource_List.walltime': 1}\n        j = Job(TEST_USER, a)\n\n        jid = self.server.submit(j)\n        j_comment = '.* and exceeded resource walltime'\n        self.server.expect(JOB, {ATTR_state: 'F',\n                                 ATTR_comment: (MATCH_RE, j_comment),\n                                 ATTR_exit_status: -29},\n                           id=jid, extend='x')\n\n        msg = 'Job exceeded resource walltime'\n        self.check_mail(jid, msg)\n\n    def test_exceeding_mem(self):\n        \"\"\"\n        This test checks exceeding memory.\n        \"\"\"\n\n        self.mom.add_config({'$enforce mem': ''})\n\n        self.mom.restart()\n\n        a = {'Resource_List.walltime': 3600,\n             'Resource_List.select': 'mem=1kb'}\n        j = Job(TEST_USER, a)\n\n        test = []\n        test += ['#!/bin/bash']\n        test += ['tail -c 100K /dev/zero']\n        j.create_script(body=test)\n\n        jid = self.server.submit(j)\n        j_comment = '.* and exceeded resource mem'\n        self.server.expect(JOB, {ATTR_state: \"F\",\n                                 ATTR_comment: (MATCH_RE, j_comment),\n                                 ATTR_exit_status: -27},\n                           id=jid, extend='x')\n\n        msg = 'Job exceeded resource mem'\n        self.check_mail(jid, msg)\n\n    def test_exceeding_ncpus_sum(self):\n        \"\"\"\n        This test checks exceeding ncpus (sum).\n        \"\"\"\n\n        self.mom.add_config(\n            {'$enforce average_percent_over': '0',\n             
'$enforce average_cpufactor': '0.1',\n             '$enforce average_trialperiod': '1',\n             '$enforce cpuaverage': ''})\n\n        self.mom.restart()\n\n        a = {'Resource_List.walltime': 3600}\n        j = Job(TEST_USER, a)\n\n        test = []\n        test += ['#!/bin/bash']\n        test += ['dd if=/dev/zero of=/dev/null']\n        j.create_script(body=test)\n\n        jid = self.server.submit(j)\n        j_comment = r'.* and exceeded resource ncpus \\(sum\\)'\n        self.server.expect(JOB, {ATTR_state: 'F',\n                                 ATTR_comment: (MATCH_RE, j_comment),\n                                 ATTR_exit_status: -25},\n                           id=jid, extend='x')\n\n        msg = 'Job exceeded resource ncpus (sum)'\n        self.check_mail(jid, msg)\n\n    def test_exceeding_ncpus_burst(self):\n        \"\"\"\n        This test checks exceeding ncpus (burst).\n        \"\"\"\n\n        self.mom.add_config(\n            {'$enforce delta_percent_over': '0',\n             '$enforce delta_cpufactor': '0.1',\n             '$enforce cpuburst': ''})\n\n        self.mom.restart()\n\n        a = {'Resource_List.walltime': 3600}\n        j = Job(TEST_USER, a)\n\n        test = []\n        test += ['#!/bin/bash']\n        test += ['dd if=/dev/zero of=/dev/null']\n        j.create_script(body=test)\n\n        jid = self.server.submit(j)\n        j_comment = r'.* and exceeded resource ncpus \\(burst\\)'\n        self.server.expect(JOB, {ATTR_state: 'F',\n                                 ATTR_comment: (MATCH_RE, j_comment),\n                                 ATTR_exit_status: -24},\n                           id=jid, extend='x')\n\n        msg = 'Job exceeded resource ncpus (burst)'\n        self.check_mail(jid, msg)\n\n    def test_exceeding_cput(self):\n        \"\"\"\n        This test checks exceeding cput.\n        \"\"\"\n\n        a = {'Resource_List.cput': 10}\n        j = Job(TEST_USER, a)\n\n        # we need at least two 
processes otherwise the kernel\n        # would kill the process first\n        test = []\n        test += ['#!/bin/bash']\n        test += ['dd if=/dev/zero of=/dev/null & \\\ndd if=/dev/zero of=/dev/null']\n        j.create_script(body=test)\n\n        jid = self.server.submit(j)\n        j_comment = '.* and exceeded resource cput'\n        self.server.expect(JOB, {ATTR_state: 'F',\n                                 ATTR_comment: (MATCH_RE, j_comment),\n                                 ATTR_exit_status: -28},\n                           id=jid, extend='x')\n\n        msg = 'Job exceeded resource cput'\n        self.check_mail(jid, msg)\n"
  },
  {
    "path": "test/tests/functional/pbs_execjob_susp_resume.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport time\nfrom tests.functional import *\nfrom ptl.utils.pbs_logutils import PBSLogUtils\n\n\n@requirements(num_moms=2)\nclass TestPbsExecjobSuspendResume(TestFunctional):\n    \"\"\"\n    Tests the hook events execjob_postsuspend, execjob_preresume which are\n    called after a job is suspended, and before the job is resumed.\n    \"\"\"\n    logutils = PBSLogUtils()\n\n    def setUp(self):\n        if len(self.moms) != 2:\n            self.skipTest('test requires two MoMs as input, ' +\n                          'use -p moms=<mom1>:<mom2>')\n        TestFunctional.setUp(self)\n        self.momA = self.moms.values()[0]\n        self.momB = self.moms.values()[1]\n\n        # execjob_postsuspend hook\n        self.postsuspend_hook_body = \"\"\"import pbs\ne=pbs.event()\n\ndef proc_status(pid):\n    try:\n        for line in open(\"/proc/%d/status\" % pid).readlines():\n            if line.startswith(\"State:\"):\n                return line.split(\":\",1)[1].strip().split(' ')[0]\n    except:\n        pass\n    return None\n\ndef print_attribs(pbs_obj, header):\n    for a in pbs_obj.attributes:\n        v = getattr(pbs_obj, a)\n        if v and str(v):\n            pbs.logjobmsg(e.job.id, \"%s: %s = %s\" % (header, a, v))\n            if a == \"session_id\":\n                st = proc_status(v)\n                if st == 'T':\n                    
pbs.logjobmsg(e.job.id,\n                                  \"%s: process seen as suspended\" % header)\n\nif e.type == pbs.EXECJOB_POSTSUSPEND:\n    pbs.logmsg(pbs.LOG_DEBUG, \"%s;called execjob_postsuspend hook\" %e.job.id)\nprint_attribs(e.job, \"JOB\")\n\nfor vn in e.vnode_list:\n    v = e.vnode_list[vn]\n    print_attribs(v, \"vnode_list[\" + vn + \"]\")\n\"\"\"\n        # execjob_postsuspend hook, reject\n        self.postsuspend_hook_reject_body = \"\"\"import pbs\ne=pbs.event()\njob=e.job\ne.reject(\"bad suspend on ms\")\n\"\"\"\n        # execjob_postsuspend hook, reject by sister only\n        self.postsuspend_hook_sis_reject_body = \"\"\"import pbs\ne=pbs.event()\njob=e.job\nif not e.job.in_ms_mom():\n    e.reject(\"bad suspend on sis\")\n\"\"\"\n        # hook with an unhandled exception\n        self.hook_error_body = \"\"\"import pbs\ne=pbs.event()\njob=e.job\nraise NameError\n\"\"\"\n        # hook with an unhandled exception, sister only\n        self.hook_sis_error_body = \"\"\"import pbs\ne=pbs.event()\njob=e.job\nif not job.in_ms_mom():\n    raise NameError\n\"\"\"\n\n        # execjob_preresume hook\n        self.preresume_hook_body = \"\"\"import pbs\ne=pbs.event()\n\ndef proc_status(pid):\n    try:\n        for line in open(\"/proc/%d/status\" % pid).readlines():\n            if line.startswith(\"State:\"):\n                return line.split(\":\",1)[1].strip().split(' ')[0]\n    except:\n        pass\n    return None\n\ndef print_attribs(pbs_obj, header):\n    for a in pbs_obj.attributes:\n        v = getattr(pbs_obj, a)\n        if v and str(v):\n            pbs.logjobmsg(e.job.id, \"%s: %s = %s\" % (header, a, v))\n            if a == \"session_id\":\n                st = proc_status(v)\n                if st == 'T':\n                    pbs.logjobmsg(e.job.id,\n                                  \"%s: process seen as suspended\" % header)\n\nif e.type == pbs.EXECJOB_PRERESUME:\n    pbs.logmsg(pbs.LOG_DEBUG, \"%s;called execjob_preresume 
hook\" %e.job.id)\n\nprint_attribs(e.job, \"JOB\")\n\nfor vn in e.vnode_list:\n    v = e.vnode_list[vn]\n    print_attribs(v, \"vnode_list[\" + vn + \"]\")\n\"\"\"\n        # execjob_preresume hook, reject\n        self.preresume_hook_reject_body = \"\"\"import pbs\ne=pbs.event()\njob=e.job\ne.reject(\"bad resumption on ms\")\n\"\"\"\n        # execjob_preresume hook, reject by sister only\n        self.preresume_hook_sis_reject_body = \"\"\"import pbs\ne=pbs.event()\njob=e.job\nif not e.job.in_ms_mom():\n    e.reject(\"bad resumption on sis\")\n\"\"\"\n        # job used in the tests\n        self.j = Job(self.du.get_current_user())\n\n        script = \"\"\"\n#PBS -l select=2:ncpus=1\n#PBS -l place=scatter\n#PBS -S /bin/bash\npbsdsh -n 1 -- sleep 60 &\nsleep 60\n\"\"\"\n        self.j.create_script(script)\n\n    def test_execjob_postsuspend(self):\n        \"\"\"\n        An execjob_postsuspend hook is executed by primary mom and then by\n        the connected sister moms after a job has been suspended.\n        \"\"\"\n        # instantiate execjob_postsuspend hook\n        hook_event = 'execjob_postsuspend'\n        hook_name = 'psus'\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(\n            hook_name, a, self.postsuspend_hook_body)\n\n        # Submit a job\n        jid = self.server.submit(self.j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n        self.server.status(JOB, 'exec_vnodes', id=jid)\n        job_node = self.j.get_vnodes()[0]\n\n        # Suspend job\n        self.server.sigjob(jobid=jid, signal=\"suspend\")\n\n        for vn in [self.momA, self.momB]:\n            if vn == self.momA:\n                vn.log_match(\"Job;%s;signal job with suspend\" % jid)\n            else:\n                vn.log_match(\"Job;%s;SUSPEND received\" % jid)\n\n            vn.log_match(\"%s;called execjob_postsuspend hook\" % jid)\n            if vn == self.momA:\n                # as postsuspend 
hook is executing,\n                # job's process should be seen as suspended\n                vn.log_match(\"Job;%s;JOB: process seen as suspended\" % jid)\n\n            # Check presence of pbs.event().job\n            vn.log_match(\"Job;%s;JOB: id = %s\" % (jid, jid))\n\n            # Check presence of vnode_list[] parameter\n            vnode_list = [self.momA.name, self.momB.name]\n            for v in vnode_list:\n                vn.log_match(\"Job;%s;vnode_list[%s]: name = %s\" % (\n                             jid, job_node, job_node))\n\n        # after hook executes, job continues to be suspended\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid)\n\n    def test_execjob_preresume(self):\n        \"\"\"\n        An execjob_preresume hook is executed by primary mom and then by\n        the connected sister moms just before a job is resumed.\n        \"\"\"\n        # instantiate execjob_preresume hook\n        hook_event = 'execjob_preresume'\n        hook_name = 'pres'\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(\n            hook_name, a, self.preresume_hook_body)\n\n        # Submit a job\n        jid = self.server.submit(self.j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n        self.server.status(JOB, 'exec_vnodes', id=jid)\n        job_node = self.j.get_vnodes()[0]\n\n        # Suspend, then resume job\n        self.server.sigjob(jobid=jid, signal=\"suspend\")\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid)\n        self.server.sigjob(jobid=jid, signal=\"resume\")\n\n        for vn in [self.momA, self.momB]:\n            if vn == self.momA:\n                vn.log_match(\"Job;%s;signal job with resume\" % jid)\n            else:\n                vn.log_match(\"Job;%s;RESUME received\" % jid)\n\n            vn.log_match(\"%s;called execjob_preresume hook\" % jid)\n            if vn == self.momA:\n                # as preresume hook is executing,\n               
 # job's process should be seen as suspended\n                vn.log_match(\"Job;%s;JOB: process seen as suspended\" % jid)\n            # Check presence of pbs.event().job\n            vn.log_match(\"Job;%s;JOB: id = %s\" % (jid, jid))\n\n            # Check presence of vnode_list[] parameter\n            vnode_list = [self.momA.name, self.momB.name]\n            for v in vnode_list:\n                vn.log_match(\"Job;%s;vnode_list[%s]: name = %s\" % (\n                             jid, job_node, job_node))\n\n        # after hook executes, job should be running again.\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n    def test_execjob_postsuspend_reject(self):\n        \"\"\"\n        An execjob_postsuspend hook that results in a reject action\n        would not affect the currently suspended job.\n        \"\"\"\n        # instantiate execjob_postsuspend hook\n        hook_event = 'execjob_postsuspend'\n        hook_name = 'psus'\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(\n            hook_name, a, self.postsuspend_hook_reject_body)\n\n        # Submit a job\n        jid = self.server.submit(self.j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n        # Suspend job\n        self.server.sigjob(jobid=jid, signal=\"suspend\")\n\n        hook_msg = \"bad suspend on ms\"\n        reject_msg = \"%s hook rejected request: %s\" % (hook_event, hook_msg)\n        for vn in [self.momA, self.momB]:\n            if vn == self.momA:\n                vn.log_match(\"Job;%s;signal job with suspend\" % jid)\n            else:\n                vn.log_match(\"Job;%s;SUSPEND received\" % jid)\n\n            vn.log_match(\"Job;%s;%s\" % (jid, hook_msg))\n            vn.log_match(\"Job;%s;%s\" % (jid, reject_msg))\n\n        # after hook executes, job continues to be suspended\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid)\n\n    def test_execjob_postsuspend_reject_sis(self):\n        
\"\"\"\n        An execjob_postsuspend hook that results in a reject action\n        by sister mom only would not affect the currently suspended\n        job.\n        \"\"\"\n        # instantiate execjob_postsuspend hook\n        hook_event = 'execjob_postsuspend'\n        hook_name = 'psus'\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(\n            hook_name, a, self.postsuspend_hook_sis_reject_body)\n\n        # Submit a job\n        jid = self.server.submit(self.j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n        # Suspend job\n        self.server.sigjob(jobid=jid, signal=\"suspend\")\n\n        hook_msg = \"bad suspend on sis\"\n        reject_msg = \"%s hook rejected request: %s\" % (hook_event, hook_msg)\n        for vn in [self.momA, self.momB]:\n            if vn == self.momA:\n                vn.log_match(\"Job;%s;signal job with suspend\" % jid)\n                vn.log_match(\"Job;%s;%s\" % (jid, hook_msg),\n                             existence=False, max_attempts=30)\n                vn.log_match(\"Job;%s;%s\" % (jid, reject_msg),\n                             existence=False, max_attempts=30)\n            else:\n                vn.log_match(\"Job;%s;SUSPEND received\" % jid)\n                vn.log_match(\"Job;%s;%s\" % (jid, hook_msg))\n                vn.log_match(\"Job;%s;%s\" % (jid, reject_msg))\n\n        # after hook executes, job continues to be suspended\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid)\n\n    def test_execjob_postsuspend_error(self):\n        \"\"\"\n        An execjob_postsuspend hook that results in an error action\n        would not affect the currently suspended job.\n        \"\"\"\n        # instantiate execjob_postsuspend hook\n        hook_event = 'execjob_postsuspend'\n        hook_name = 'psus'\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(\n            hook_name, a, 
self.hook_error_body)\n\n        # Submit a job\n        jid = self.server.submit(self.j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n        # Suspend job\n        self.server.sigjob(jobid=jid, signal=\"suspend\")\n\n        error_msg = \\\n            \"%s hook \\'%s\\' encountered an exception, request rejected\" \\\n            % (hook_event, hook_name)\n\n        for vn in [self.momA, self.momB]:\n            if vn == self.momA:\n                vn.log_match(\"Job;%s;signal job with suspend\" % jid)\n            else:\n                vn.log_match(\"Job;%s;SUSPEND received\" % jid)\n\n            vn.log_match(\"Job;%s;%s\" % (jid, error_msg))\n\n        # after hook executes, job continues to be suspended\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid)\n\n    def test_execjob_postsuspend_error_sis(self):\n        \"\"\"\n        An execjob_postsuspend hook that results in an error action\n        by sister mom only would not affect the currently suspended\n        job.\n        \"\"\"\n        # instantiate execjob_postsuspend hook\n        hook_event = 'execjob_postsuspend'\n        hook_name = 'psus'\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(\n            hook_name, a, self.hook_sis_error_body)\n\n        # Submit a job\n        jid = self.server.submit(self.j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n        # Suspend job\n        self.server.sigjob(jobid=jid, signal=\"suspend\")\n\n        error_msg = \\\n            \"%s hook \\'%s\\' encountered an exception, request rejected\" \\\n            % (hook_event, hook_name)\n\n        for vn in [self.momA, self.momB]:\n            if vn == self.momA:\n                vn.log_match(\"Job;%s;signal job with suspend\" % jid)\n                vn.log_match(\"Job;%s;%s\" % (jid, error_msg),\n                             existence=False, max_attempts=30)\n            else:\n                
vn.log_match(\"Job;%s;SUSPEND received\" % jid)\n                vn.log_match(\"Job;%s;%s\" % (jid, error_msg))\n\n        # after hook executes, job continues to be suspended\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid)\n\n    def test_execjob_preresume_reject(self):\n        \"\"\"\n        An execjob_preresume hook that results in a reject action\n        would prevent suspended job from being resumed.\n        \"\"\"\n        # instantiate execjob_preresume hook\n        hook_event = 'execjob_preresume'\n        hook_name = 'pres'\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(\n            hook_name, a, self.preresume_hook_reject_body)\n\n        # Submit a job\n        jid = self.server.submit(self.j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n        # Suspend, then resume job\n        self.server.sigjob(jobid=jid, signal=\"suspend\")\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid)\n        self.server.sigjob(jobid=jid, signal=\"resume\")\n\n        hook_msg = \"bad resumption on ms\"\n        reject_msg = \"%s hook rejected request: %s\" % (hook_event, hook_msg)\n        # Mom hook executes on momA and gets a rejection,\n        # so a resume request is not sent to sister momB.\n        self.momA.log_match(\"Job;%s;signal job with resume\" % jid)\n        self.momA.log_match(\"Job;%s;%s\" % (jid, hook_msg))\n        self.momA.log_match(\"Job;%s;%s\" % (jid, reject_msg))\n        # after hook executes, job continues to be suspended\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid)\n\n    def test_execjob_preresume_reject_sis(self):\n        \"\"\"\n        An execjob_preresume hook that results in a reject action\n        by sister mom only would not affect the currently suspended\n        job.\n        \"\"\"\n        # instantiate execjob_preresume hook\n        hook_event = 'execjob_preresume'\n        hook_name = 'psus'\n        a = {'event': 
hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(\n            hook_name, a, self.preresume_hook_sis_reject_body)\n\n        # Submit a job\n        jid = self.server.submit(self.j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n        # Suspend, then resume job\n        self.server.sigjob(jobid=jid, signal=\"suspend\")\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid)\n        self.server.sigjob(jobid=jid, signal=\"resume\")\n\n        hook_msg = \"bad resumption on sis\"\n        reject_msg = \"%s hook rejected request: %s\" % (hook_event, hook_msg)\n        for vn in [self.momA, self.momB]:\n            if vn == self.momA:\n                vn.log_match(\"Job;%s;signal job with resume\" % jid)\n                vn.log_match(\"Job;%s;%s\" % (jid, hook_msg),\n                             existence=False, max_attempts=30)\n                vn.log_match(\"Job;%s;%s\" % (jid, reject_msg),\n                             existence=False, max_attempts=30)\n            else:\n                vn.log_match(\"Job;%s;RESUME received\" % jid)\n                vn.log_match(\"Job;%s;%s\" % (jid, hook_msg))\n                vn.log_match(\"Job;%s;%s\" % (jid, reject_msg))\n\n        # after hook executes, job continues to be suspended\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid)\n\n    def test_execjob_preresume_error(self):\n        \"\"\"\n        An execjob_preresume hook that results in an error action\n        would prevent suspended job from being resumed.\n        \"\"\"\n        # instantiate execjob_preresume hook\n        hook_event = 'execjob_preresume'\n        hook_name = 'pres'\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(\n            hook_name, a, self.hook_error_body)\n\n        # Submit a job\n        jid = self.server.submit(self.j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n        # Suspend, then resume job\n        
self.server.sigjob(jobid=jid, signal=\"suspend\")\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid)\n        self.server.sigjob(jobid=jid, signal=\"resume\")\n\n        error_msg = \\\n            \"%s hook \\'%s\\' encountered an exception, request rejected\" \\\n            % (hook_event, hook_name)\n\n        # Mom hook executes on momA and gets an error,\n        # so a resume request is not sent to sister momB.\n        self.momA.log_match(\"Job;%s;signal job with resume\" % jid)\n        self.momA.log_match(\"Job;%s;%s\" % (jid, error_msg))\n        # after hook executes, job continues to be suspended\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid)\n\n    def test_execjob_preresume_error_sis(self):\n        \"\"\"\n        An execjob_preresume hook that results in an error action\n        by sister mom only would not affect the currently suspended\n        job.\n        \"\"\"\n        # instantiate execjob_preresume hook\n        hook_event = 'execjob_preresume'\n        hook_name = 'psus'\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(\n            hook_name, a, self.hook_sis_error_body)\n\n        # Submit a job\n        jid = self.server.submit(self.j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n        # Suspend, then resume job\n        self.server.sigjob(jobid=jid, signal=\"suspend\")\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid)\n        self.server.sigjob(jobid=jid, signal=\"resume\")\n\n        error_msg = \\\n            \"%s hook \\'%s\\' encountered an exception, request rejected\" \\\n            % (hook_event, hook_name)\n\n        for vn in [self.momA, self.momB]:\n            if vn == self.momA:\n                vn.log_match(\"Job;%s;signal job with resume\" % jid)\n                vn.log_match(\"Job;%s;%s\" % (jid, error_msg),\n                             existence=False, max_attempts=30)\n            else:\n                
vn.log_match(\"Job;%s;RESUME received\" % jid)\n                vn.log_match(\"Job;%s;%s\" % (jid, error_msg))\n\n        # after hook executes, job continues to be suspended\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid)\n"
  },
  {
    "path": "test/tests/functional/pbs_fairshare.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\n@tags('sched')\nclass TestFairshare(TestFunctional):\n\n    \"\"\"\n    Test the pbs_sched fairshare functionality.  
Note there are two\n    fairshare tests in the standard smoke test that are not included here.\n    \"\"\"\n\n    def set_up_resource_group(self):\n        \"\"\"\n        Set up the resource_group file for test suite\n        \"\"\"\n        self.scheduler.add_to_resource_group('group1', 10, 'root', 40)\n        self.scheduler.add_to_resource_group('group2', 20, 'root', 60)\n        self.scheduler.add_to_resource_group(TEST_USER, 11, 'group1', 50)\n        self.scheduler.add_to_resource_group(TEST_USER1, 12, 'group1', 50)\n        self.scheduler.add_to_resource_group(TEST_USER2, 21, 'group2', 60)\n        self.scheduler.add_to_resource_group(TEST_USER3, 22, 'group2', 40)\n        self.scheduler.fairshare.set_fairshare_usage(TEST_USER, 100)\n        self.scheduler.fairshare.set_fairshare_usage(TEST_USER1, 100)\n        self.scheduler.fairshare.set_fairshare_usage(TEST_USER3, 1000)\n\n    def test_formula_keyword(self):\n        \"\"\"\n        Test to see if 'fairshare_tree_usage' and 'fairshare_perc' are allowed\n        to be set in the job_sort_formula\n        \"\"\"\n\n        # manager() will throw a PbsManagerError exception if this fails\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'job_sort_formula': 'fairshare_tree_usage'})\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'job_sort_formula': 'fairshare_perc'})\n\n        formula = 'pow(2,-(fairshare_tree_usage/fairshare_perc))'\n        self.server.manager(MGR_CMD_SET, SERVER, {'job_sort_formula': formula})\n\n        formula = 'fairshare_factor'\n        self.server.manager(MGR_CMD_SET, SERVER, {'job_sort_formula': formula})\n\n        formula = 'fair_share_factor'\n        try:\n            self.server.manager(\n                MGR_CMD_SET, SERVER, {'job_sort_formula': formula})\n            self.fail(\"Invalid formula keyword '%s' was accepted\" % formula)\n        except PbsManagerError as e:\n            self.assertTrue(\"Formula contains invalid keyword\" in e.msg[0])\n\n    def test_fairshare_formula(self):\n  
      \"\"\"\n        Test fairshare in the formula.  Make sure the fairshare_tree_usage\n        is correct\n        \"\"\"\n\n        self.set_up_resource_group()\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events': 2047})\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'job_sort_formula': 'fairshare_tree_usage'})\n        J1 = Job(TEST_USER)\n        jid1 = self.server.submit(J1)\n        J2 = Job(TEST_USER1)\n        jid2 = self.server.submit(J2)\n        J3 = Job(TEST_USER2)\n        jid3 = self.server.submit(J3)\n        J4 = Job(TEST_USER3)\n        jid4 = self.server.submit(J4)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        msg = ';Formula Evaluation = '\n        self.scheduler.log_match(str(jid1) + msg + '0.1253')\n        self.scheduler.log_match(str(jid2) + msg + '0.1253')\n        self.scheduler.log_match(str(jid3) + msg + '0.5004')\n        self.scheduler.log_match(str(jid4) + msg + '0.8330')\n\n    def test_fairshare_formula2(self):\n        \"\"\"\n        Test fairshare in the formula.  
Make sure the fairshare_perc\n        is correct\n        \"\"\"\n\n        self.set_up_resource_group()\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events': 2047})\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'job_sort_formula': 'fairshare_perc'})\n        J1 = Job(TEST_USER)\n        jid1 = self.server.submit(J1)\n        J2 = Job(TEST_USER1)\n        jid2 = self.server.submit(J2)\n        J3 = Job(TEST_USER2)\n        jid3 = self.server.submit(J3)\n        J4 = Job(TEST_USER3)\n        jid4 = self.server.submit(J4)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        msg = ';Formula Evaluation = '\n        self.scheduler.log_match(str(jid1) + msg + '0.2')\n        self.scheduler.log_match(str(jid2) + msg + '0.2')\n        self.scheduler.log_match(str(jid3) + msg + '0.36')\n        self.scheduler.log_match(str(jid4) + msg + '0.24')\n\n    def test_fairshare_formula3(self):\n        \"\"\"\n        Test fairshare in the formula.  Make sure entities with small usage\n        are negatively affected by their high usage siblings.  Make sure that\n        jobs run in the correct order.  
Use fairshare_tree_usage in a\n        pow() formula\n        \"\"\"\n\n        self.set_up_resource_group()\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events': 2047})\n\n        formula = 'pow(2,-(fairshare_tree_usage/fairshare_perc))'\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        self.server.manager(MGR_CMD_SET, SERVER, {'job_sort_formula': formula})\n        J1 = Job(TEST_USER2)\n        jid1 = self.server.submit(J1)\n        J2 = Job(TEST_USER3)\n        jid2 = self.server.submit(J2)\n        J3 = Job(TEST_USER)\n        jid3 = self.server.submit(J3)\n        J4 = Job(TEST_USER1)\n        jid4 = self.server.submit(J4)\n        t = time.time()\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        msg = ';Formula Evaluation = '\n        self.scheduler.log_match(str(jid1) + msg + '0.3816')\n        self.scheduler.log_match(str(jid2) + msg + '0.0902')\n        self.scheduler.log_match(str(jid3) + msg + '0.6477')\n        self.scheduler.log_match(str(jid4) + msg + '0.6477')\n        self.scheduler.log_match('Leaving Scheduling Cycle', starttime=t)\n\n        c = self.scheduler.cycles(lastN=1)[0]\n        job_order = [jid3, jid4, jid1, jid2]\n        for i in range(len(job_order)):\n            self.assertEqual(job_order[i].split('.')[0], c.political_order[i])\n\n    def test_fairshare_formula4(self):\n        \"\"\"\n        Test fairshare in the formula.  Make sure entities with small usage\n        are negatively affected by their high usage siblings.  Make sure that\n        jobs run in the correct order.  
Use keyword fairshare_factor\n        \"\"\"\n\n        self.set_up_resource_group()\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events': 2047})\n\n        formula = 'fairshare_factor'\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        self.server.manager(MGR_CMD_SET, SERVER, {'job_sort_formula': formula})\n\n        J1 = Job(TEST_USER2)\n        jid1 = self.server.submit(J1)\n        J2 = Job(TEST_USER3)\n        jid2 = self.server.submit(J2)\n        J3 = Job(TEST_USER)\n        jid3 = self.server.submit(J3)\n        J4 = Job(TEST_USER1)\n        jid4 = self.server.submit(J4)\n        t = time.time()\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        msg = ';Formula Evaluation = '\n        self.scheduler.log_match(str(jid1) + msg + '0.3816')\n        self.scheduler.log_match(str(jid2) + msg + '0.0902')\n        self.scheduler.log_match(str(jid3) + msg + '0.6477')\n        self.scheduler.log_match(str(jid4) + msg + '0.6477')\n        self.scheduler.log_match('Leaving Scheduling Cycle', starttime=t)\n\n        c = self.scheduler.cycles(lastN=1)[0]\n        job_order = [jid3, jid4, jid1, jid2]\n        for i in range(len(job_order)):\n            self.assertEqual(job_order[i].split('.')[0], c.political_order[i])\n\n    def test_fairshare_formula5(self):\n        \"\"\"\n        Test fairshare in the formula with fair_share set to true in scheduler.\n        Make sure formula takes precedence over fairshare usage. 
Output will be\n        same as in test_fairshare_formula4.\n        \"\"\"\n\n        self.set_up_resource_group()\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events': 2047})\n        a = {'fair_share': \"True ALL\"}\n        self.scheduler.set_sched_config(a)\n\n        formula = 'fairshare_factor'\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        self.server.manager(MGR_CMD_SET, SERVER, {'job_sort_formula': formula})\n\n        J1 = Job(TEST_USER2, {'Resource_List.cput': 10})\n        jid1 = self.server.submit(J1)\n        J2 = Job(TEST_USER3, {'Resource_List.cput': 20})\n        jid2 = self.server.submit(J2)\n        J3 = Job(TEST_USER, {'Resource_List.cput': 30})\n        jid3 = self.server.submit(J3)\n        J4 = Job(TEST_USER1, {'Resource_List.cput': 40})\n        jid4 = self.server.submit(J4)\n        t = time.time()\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        msg = ';Formula Evaluation = '\n        self.scheduler.log_match(str(jid1) + msg + '0.3816')\n        self.scheduler.log_match(str(jid2) + msg + '0.0902')\n        self.scheduler.log_match(str(jid3) + msg + '0.6477')\n        self.scheduler.log_match(str(jid4) + msg + '0.6477')\n        self.scheduler.log_match('Leaving Scheduling Cycle', starttime=t)\n\n        c = self.scheduler.cycles(start=t, lastN=1)[0]\n        job_order = [jid3, jid4, jid1, jid2]\n        for i in range(len(job_order)):\n            self.assertEqual(job_order[i].split('.')[0], c.political_order[i])\n\n    def test_fairshare_formula6(self):\n        \"\"\"\n        Test fairshare in the formula.  Make sure entities with small usage\n        are negatively affected by their high usage siblings.  Make sure that\n        jobs run in the correct order.  
Use keyword fairshare_factor\n        with ncpus/walltime\n        \"\"\"\n\n        self.set_up_resource_group()\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events': 2047})\n\n        formula = 'fairshare_factor + (walltime/ncpus)'\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        self.server.manager(MGR_CMD_SET, SERVER, {'job_sort_formula': formula})\n\n        J1 = Job(TEST_USER2, {'Resource_List.ncpus': 1,\n                              'Resource_List.walltime': \"00:01:00\"})\n        jid1 = self.server.submit(J1)\n        J2 = Job(TEST_USER3, {'Resource_List.ncpus': 2,\n                              'Resource_List.walltime': \"00:01:00\"})\n        jid2 = self.server.submit(J2)\n        J3 = Job(TEST_USER, {'Resource_List.ncpus': 3,\n                             'Resource_List.walltime': \"00:02:00\"})\n        jid3 = self.server.submit(J3)\n        J4 = Job(TEST_USER1, {'Resource_List.ncpus': 4,\n                              'Resource_List.walltime': \"00:02:00\"})\n        jid4 = self.server.submit(J4)\n        t = time.time()\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        msg = ';Formula Evaluation = '\n        self.scheduler.log_match(str(jid1) + msg + '60.3816')\n        self.scheduler.log_match(str(jid2) + msg + '30.0902')\n        self.scheduler.log_match(str(jid3) + msg + '40.6477')\n        self.scheduler.log_match(str(jid4) + msg + '30.6477')\n        self.scheduler.log_match('Leaving Scheduling Cycle', starttime=t)\n\n        c = self.scheduler.cycles(start=t, lastN=1)[0]\n        job_order = [jid1, jid3, jid4, jid2]\n        for i in range(len(job_order)):\n            self.assertEqual(job_order[i].split('.')[0], c.political_order[i])\n\n    def test_pbsfs(self):\n        \"\"\"\n        Test to see if running pbsfs affects the scheduler's view of the\n        fairshare usage.  This is done by calling the Scheduler()'s\n        revert_to_defaults().  
This will call pbsfs -e to remove all usage.\n        \"\"\"\n\n        self.scheduler.add_to_resource_group(TEST_USER, 11, 'root', 10)\n        self.scheduler.add_to_resource_group(TEST_USER1, 12, 'root', 10)\n        self.scheduler.set_sched_config({'fair_share': 'True'})\n\n        self.scheduler.fairshare.set_fairshare_usage(TEST_USER, 100)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        J1 = Job(TEST_USER)\n        jid1 = self.server.submit(J1)\n        J2 = Job(TEST_USER1)\n        jid2 = self.server.submit(J2)\n\n        self.scheduler.run_scheduling_cycle()\n\n        c = self.scheduler.cycles(lastN=1)[0]\n        job_order = [jid2, jid1]\n        for i in range(len(job_order)):\n            self.assertEqual(job_order[i].split('.')[0], c.political_order[i])\n\n        self.server.deljob(id=jid1, wait=True)\n        self.server.deljob(id=jid2, wait=True)\n        self.scheduler.revert_to_defaults()\n\n        # revert_to_defaults() will set the default scheduler's scheduling\n        # back to true.  Need to turn it off again\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        # Set TEST_USER1 to 50.  If revert_to_defaults() has affected the\n        # scheduler's view of the fairshare usage, it's the only entity with\n        # usage.  Its job will run second.  
If revert_to_defaults() did\n        # nothing, 50 is less than 100, so TEST_USER1's job will run first\n        self.scheduler.add_to_resource_group(TEST_USER, 11, 'root', 10)\n        self.scheduler.add_to_resource_group(TEST_USER1, 12, 'root', 10)\n        self.scheduler.set_sched_config({'fair_share': 'True'})\n\n        self.scheduler.fairshare.set_fairshare_usage(TEST_USER1, 50)\n\n        J3 = Job(TEST_USER)\n        jid3 = self.server.submit(J3)\n        J4 = Job(TEST_USER1)\n        jid4 = self.server.submit(J4)\n\n        self.scheduler.run_scheduling_cycle()\n\n        c = self.scheduler.cycles(lastN=1)[0]\n        job_order = [jid3, jid4]\n        for i in range(len(job_order)):\n            self.assertEqual(job_order[i].split('.')[0], c.political_order[i])\n\n    def test_fairshare_decay_min_usage(self):\n        \"\"\"\n        Test that fairshare decay doesn't reduce the usage below 1\n        \"\"\"\n        self.scheduler.set_sched_config({'fair_share': 'True'})\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events': 4095})\n        self.scheduler.add_to_resource_group(TEST_USER, 10, 'root', 50)\n        self.scheduler.fairshare.set_fairshare_usage(TEST_USER, 1)\n        self.scheduler.set_sched_config({\"fairshare_decay_time\": \"00:00:02\"})\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        t = time.time()\n        time.sleep(3)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.scheduler.log_match(\"Decaying Fairshare Tree\", starttime=t)\n\n        # Check that TEST_USER's usage is 1\n        fs = self.scheduler.fairshare.query_fairshare(name=str(TEST_USER))\n        fs_usage = int(fs.usage)\n        self.assertEqual(fs_usage, 1,\n                         \"Fairshare usage %d not equal to 1\" % fs_usage)\n\n    def test_fairshare_topjob(self):\n        \"\"\"\n        Test that jobs are run 
in the augmented fairshare order after a topjob\n        is added to the calendar\n        \"\"\"\n        self.scheduler.set_sched_config({'fair_share': 'True'})\n        self.scheduler.set_sched_config({'fairshare_usage_res': 'ncpus'})\n        self.scheduler.set_sched_config({'strict_ordering': 'True'})\n        self.scheduler.add_to_resource_group(TEST_USER, 11, 'root', 10)\n        self.scheduler.add_to_resource_group(TEST_USER1, 12, 'root', 10)\n        self.scheduler.add_to_resource_group(TEST_USER2, 13, 'root', 10)\n        a = {'resources_available.ncpus': 5}\n        self.mom.create_vnodes(a, 1)\n        a = {'Resource_List.select': '5:ncpus=1'}\n        j1 = Job(TEST_USER, attrs=a)\n        jid1 = self.server.submit(j1)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        j2 = Job(TEST_USER1, attrs=a)\n        jid2 = self.server.submit(j2)\n        j3 = Job(TEST_USER1, attrs=a)\n        jid3 = self.server.submit(j3)\n        j4 = Job(TEST_USER2, attrs=a)\n        jid4 = self.server.submit(j4)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        self.scheduler.log_match(jid2 + ';Job is a top job and will run at')\n        c = self.scheduler.cycles(lastN=1)[0]\n        jorder = [jid2, jid4, jid3]\n        jorder = [j.split('.')[0] for j in jorder]\n        msg = 'Jobs ran out of order'\n        self.assertEqual(jorder, c.political_order, msg)\n\n    def test_fairshare_acct_name(self):\n        \"\"\"\n        Test fairshare with fairshare_entity as Account_Name\n        \"\"\"\n        self.scheduler.set_sched_config({'fair_share': 'True'})\n        self.scheduler.set_sched_config({'fairshare_usage_res': 'ncpus'})\n        self.scheduler.set_sched_config({'fairshare_entity': ATTR_A})\n\n        self.scheduler.add_to_resource_group('acctA', 11, 'root', 10)\n        self.scheduler.fairshare.set_fairshare_usage('acctA', 
1)\n        self.scheduler.add_to_resource_group('acctB', 12, 'root', 25)\n        self.scheduler.fairshare.set_fairshare_usage('acctB', 1)\n\n        self.server.manager(MGR_CMD_SET, SCHED, {'scheduling': False})\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {'resources_available.ncpus': 1},\n                            id=self.mom.shortname)\n        a = {ATTR_A: 'acctA'}\n        j1 = Job(attrs=a)\n        j1.set_sleep_time(15)\n        jid1 = self.server.submit(j1)\n\n        a = {ATTR_A: 'acctB'}\n        j2 = Job(attrs=a)\n        j2.set_sleep_time(15)\n        jid2 = self.server.submit(j2)\n\n        j3 = Job(attrs=a)\n        j3.set_sleep_time(15)\n        jid3 = self.server.submit(j3)\n\n        j4 = Job(attrs=a)\n        j4.set_sleep_time(15)\n        jid4 = self.server.submit(j4)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': True})\n\n        # acctB has two-thirds of the shares, so 2 of its jobs will run\n        # before acctA's.  A second cycle has to be kicked between jobs to\n        # make sure the scheduler accumulates the fairshare usage.\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': True})\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3, offset=15)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': True})\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1, offset=15)\n
  },
  {
    "path": "test/tests/functional/pbs_gen_nodefile_on_sister_mom.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\n@requirements(num_moms=2)\nclass TestGenNodefileOnSisterMom(TestFunctional):\n    \"\"\"\n    This test suite tests the PBS_NODEFILE creation on\n    sister moms of a job.\n    \"\"\"\n    def test_gen_nodefile_on_sister_mom_default(self):\n        \"\"\"\n        This test case verifies PBS_NODEFILE gets created on\n        sister mom by default\n        \"\"\"\n        # Skip test if number of moms provided is not equal to two\n        if not len(self.moms) == 2:\n            self.skipTest(\"test requires two MoMs as input, \" +\n                          \"use -p moms=<mom1:mom2>\")\n        ms = list(self.moms.keys())[0]\n        sister_mom = list(self.moms.keys())[1]\n        j = Job(TEST_USER)\n        j.set_attributes({'Resource_List.select': '2:ncpus=1',\n                          'Resource_List.place': 'scatter'})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        nodefile = os.path.join(self.server.pbs_conf['PBS_HOME'],\n                                \"aux\", jid)\n\n        file_exists = self.du.isfile(hostname=sister_mom, path=nodefile,\n                                     sudo=True)\n        self.assertTrue(file_exists, \"PBS_NODEFILE not created in \"\n                                     \"sister mom %s\" % sister_mom)\n        sister_nodes = 
\"\\n\".join(self.du.cat(sister_mom, nodefile,\n                                             sudo=True)['out'])\n        ms_nodes = \"\\n\".join(self.du.cat(ms, nodefile, sudo=True)['out'])\n        self.assertEqual(ms_nodes, sister_nodes)\n        self.server.log_match(jid + \";Exit_status=0\", interval=4,\n                              max_attempts=30)\n        file_exists = self.du.isfile(hostname=sister_mom, path=nodefile,\n                                     sudo=True)\n        self.assertFalse(file_exists, \"PBS_NODEFILE not deleted in \"\n                                      \"sister mom %s\" % sister_mom)\n\n    def test_gen_nodefile_on_sister_mom_config_enabled(self):\n        \"\"\"\n        This test case verifies PBS_NODEFILE gets created on\n        sister mom on setting gen_nodefile_on_sister_mom mom\n        config parameter to true.\n        \"\"\"\n        # Skip test if number of moms provided is not equal to two\n        if not len(self.moms) == 2:\n            self.skipTest(\"test requires two MoMs as input, \" +\n                          \"use -p moms=<mom1:mom2>\")\n        ms = list(self.moms.keys())[0]\n        sister_mom = list(self.moms.keys())[1]\n        sister_mom_obj = list(self.moms.values())[1]\n        sister_mom_obj.add_config({'$gen_nodefile_on_sister_mom': 1})\n\n        j = Job(TEST_USER)\n        j.set_attributes({'Resource_List.select': '2:ncpus=1',\n                          'Resource_List.place': 'scatter'})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        nodefile = os.path.join(self.server.pbs_conf['PBS_HOME'],\n                                \"aux\", jid)\n\n        file_exists = self.du.isfile(hostname=sister_mom, path=nodefile,\n                                     sudo=True)\n        self.assertTrue(file_exists, \"PBS_NODEFILE not created in \"\n                                     \"sister mom %s\" % sister_mom)\n        sister_nodes = \"\\n\".join(self.du.cat(sister_mom, 
nodefile,\n                                             sudo=True)['out'])\n        ms_nodes = \"\\n\".join(self.du.cat(ms, nodefile, sudo=True)['out'])\n        self.assertEqual(ms_nodes, sister_nodes)\n        self.server.log_match(jid + \";Exit_status=0\", interval=4,\n                              max_attempts=30)\n        file_exists = self.du.isfile(hostname=sister_mom, path=nodefile,\n                                     sudo=True)\n        self.assertFalse(file_exists, \"PBS_NODEFILE not deleted in \"\n                                      \"sister mom %s\" % sister_mom)\n\n    def test_gen_nodefile_on_sister_mom_config_disabled(self):\n        \"\"\"\n        This test case verifies PBS_NODEFILE does not get created\n        on sister mom on setting gen_nodefile_on_sister_mom mom\n        config parameter to false.\n        \"\"\"\n        # Skip test if number of mom provided is not equal to two\n        if not len(self.moms) == 2:\n            self.skipTest(\"test requires two MoMs as input, \" +\n                          \"use -p moms=<mom1:mom2>\")\n\n        sister_mom = self.moms.keys()[1]\n        sister_mom_obj = self.moms.values()[1]\n        sister_mom_obj.add_config({'$gen_nodefile_on_sister_mom': 0})\n        j = Job(TEST_USER)\n        j.set_attributes({'Resource_List.select': '2:ncpus=1',\n                          'Resource_List.place': 'scatter'})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        nodefile = os.path.join(self.server.pbs_conf['PBS_HOME'],\n                                \"aux\", jid)\n\n        file_exists = self.du.isfile(hostname=sister_mom, path=nodefile,\n                                     sudo=True)\n        self.assertFalse(file_exists, \"PBS_NODEFILE created in \"\n                                      \"sister mom %s\" % sister_mom)\n\n    def test_gen_nodefile_on_sister_mom_hup(self):\n        \"\"\"\n        This test case verifies PBS_NODEFILE does not 
get created\n        on sister mom when the gen_nodefile_on_sister_mom mom\n        config parameter is set to false, and that PBS_NODEFILE is\n        created on the sister once the parameter is removed and the\n        mom is HUPed.\n        \"\"\"\n        # Skip test if number of mom provided is not equal to two\n        if not len(self.moms) == 2:\n            self.skipTest(\"test requires two MoMs as input, \" +\n                          \"use -p moms=<mom1:mom2>\")\n        ms = self.moms.keys()[0]\n        sister_mom = self.moms.keys()[1]\n        sister_mom_obj = self.moms.values()[1]\n        sister_mom_obj.add_config({'$gen_nodefile_on_sister_mom': 0})\n        j = Job(TEST_USER)\n        j.set_attributes({'Resource_List.select': '2:ncpus=1',\n                          'Resource_List.place': 'scatter'})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        nodefile = os.path.join(self.server.pbs_conf['PBS_HOME'], \"aux\", jid)\n\n        file_exists = self.du.isfile(hostname=sister_mom, path=nodefile,\n                                     sudo=True)\n        self.assertFalse(file_exists, \"PBS_NODEFILE created in \"\n                                      \"sister mom %s\" % sister_mom)\n        self.server.delete(jid)\n        # Replace the mom config, dropping gen_nodefile_on_sister_mom;\n        # apply_config() HUPs the mom so the new config takes effect\n        config = {'$clienthost': self.server.hostname}\n        sister_mom_obj.config = config\n        sister_mom_obj.apply_config(config)\n        j = Job(TEST_USER)\n        j.set_attributes({'Resource_List.select': '2:ncpus=1',\n                          'Resource_List.place': 'scatter'})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        nodefile = os.path.join(self.server.pbs_conf['PBS_HOME'], \"aux\", jid)\n        file_exists = self.du.isfile(hostname=sister_mom, path=nodefile,\n                                     sudo=True)\n        self.assertTrue(file_exists, \"PBS_NODEFILE not created in \"\n                                     \"sister mom %s\" % sister_mom)\n   
     sister_nodes = \"\\n\".join(self.du.cat(sister_mom, nodefile,\n                                             sudo=True)['out'])\n        ms_nodes = \"\\n\".join(self.du.cat(ms, nodefile, sudo=True)['out'])\n        self.assertEqual(ms_nodes, sister_nodes)\n        self.server.log_match(jid + \";Exit_status=0\", interval=4,\n                              max_attempts=30)\n        file_exists = self.du.isfile(hostname=sister_mom, path=nodefile,\n                                     sudo=True)\n        self.assertFalse(file_exists, \"PBS_NODEFILE not deleted in \"\n                                      \"sister mom %s\" % sister_mom)\n"
  },
  {
    "path": "test/tests/functional/pbs_grunt.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestGrunt(TestFunctional):\n\n    \"\"\"\n    This test suite is for testing the grunt routines\n    \"\"\"\n    our_queue = 'workq2'\n    kvp_size = 50\n    # Define a lot of resource names\n    resources = [\"resource%d\" % i for i in range(2*kvp_size)]\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n\n        rlist = ','.join(self.resources)\n        a = {ATTR_RESC_TYPE: 'long', ATTR_RESC_FLAG: 'hn'}\n        self.server.manager(MGR_CMD_CREATE, RSC, a, id=rlist)\n\n        # Create a queue to test against\n        a = {'queue_type': 'execution', 'enabled': 'True', 'started': 'False'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id=self.our_queue)\n\n    def try_a_job(self, base, job_res, que_res, svr_res):\n        \"\"\" Submit a job and check its schedselect result\n\n        :param base: the prefix of the job select statement before the\n            resources under test\n        :type base: list of str\n        :param job_res: resources requested by job\n        :type job_res: dict with keys of resource names, values of values\n        :param que_res: resource defaults for queue\n        :type que_res: same as job_res\n        :param svr_res: resource defaults for server\n        :type svr_res: same as job_res\n        :returns: job id\n        \"\"\"\n        attrs = {ATTR_queue: 
self.our_queue}\n\n        job_part = ['%s=%s' % r for r in job_res.items()]\n        sel_arg = ':'.join(base + job_part)\n        attrs[ATTR_l] = 'select=' + sel_arg\n        j = Job(TEST_USER, attrs)\n        jid = self.server.submit(j)\n\n        # Merge server, queue, and job requested resources into\n        # expected resource list.  Last one wins.\n        expected = svr_res.copy()\n        expected.update(que_res)\n        expected.update(job_res)\n        e_set = set([\"%s=%s\" % r for r in expected.items()])\n\n        # See which resources ended up in job's schedselect\n        a = [ATTR_SchedSelect]\n        job_stat = self.server.status(JOB, a, id=jid)\n        ssel = job_stat[0][ATTR_SchedSelect]\n        # Generate actual resource list\n        a_set = set(ssel.split(':'))\n        # Ignore any pieces from base\n        a_set -= set((':'.join(base)).split(':'))\n        # Determine where expected and actual differ\n        missing = e_set.difference(a_set)\n        extra = a_set.difference(e_set)\n        if missing:\n            msg = \"Actual schedselect missing %s for select=%s\" % \\\n                   (', '.join(sorted(missing)), sel_arg)\n            self.fail(msg)\n        if extra:\n            msg = \"Actual schedselect includes extra %s for select=%s\" % \\\n                   (', '.join(sorted(extra)), sel_arg)\n            self.fail(msg)\n        return jid\n\n    @tags('server')\n    def test_nkve_overflow(self):\n        \"\"\"\n        Test whether do_schedselect() in vnparse.c can overflow the\n        grunt nkve array if a queue has a large number of default resources\n        (> KVP_SIZE (50)).\n        Note: This test can corrupt memory in an unpatched server.\n        \"\"\"\n\n        # Remove any current server defaults\n        rlist = ['default_chunk.%s' % r for r in self.resources]\n        a = ','.join(rlist)\n        self.server.manager(MGR_CMD_UNSET, SERVER, a)\n\n        # Set our queue to have default values for all the 
resources.\n        a = {'default_chunk.%s' % r: 1 for r in self.resources}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, id=self.our_queue)\n\n        base_sel = ['1:ncpus=1']\n        job_res = {}\n        que_res = {r: 1 for r in self.resources}\n        svr_res = {}\n        # Submit job to that queue and check schedselect value\n        self.try_a_job(base_sel, job_res, que_res, svr_res)\n\n    @tags('server')\n    def test_general_dflt_chunks(self):\n        \"\"\"\n        Test that chunk specifications are handled correctly. That is,\n        job specs should override queue defaults which override server\n        defaults.\n        \"\"\"\n        # Remove any current defaults\n        rlist = ['default_chunk.%s' % r for r in self.resources]\n        a = ','.join(rlist)\n        self.server.manager(MGR_CMD_UNSET, QUEUE, a, id=self.our_queue)\n        self.server.manager(MGR_CMD_UNSET, SERVER, a)\n\n        # Job with no resources\n\n        base_sel = ['1:ncpus=1']\n        job_res = {}\n        que_res = {}\n        svr_res = {}\n        self.try_a_job(base_sel, job_res, que_res, svr_res)\n\n        # Job specifying resources\n\n        job_res = {'resource10': 1, 'resource11': 2}\n        self.try_a_job(base_sel, job_res, que_res, svr_res)\n\n        # Add a server chunk default\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'default_chunk.resource12': 3})\n        svr_res = {'resource12': 3}\n        self.try_a_job(base_sel, job_res, que_res, svr_res)\n\n        # Add a queue chunk default\n\n        self.server.manager(MGR_CMD_SET, QUEUE,\n                            {'default_chunk.resource13': 4},\n                            id=self.our_queue)\n        que_res = {'resource13': 4}\n\n        self.try_a_job(base_sel, job_res, que_res, svr_res)\n\n        # Check that job can override a default\n\n        job_res = {'resource12': 10}\n        self.try_a_job(base_sel, job_res, que_res, svr_res)\n"
  },
  {
    "path": "test/tests/functional/pbs_highreslog.py",
    "content": "# coding: utf-8\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestHighResLogging(TestFunctional):\n    \"\"\"\n    TestSuite for High resolution logging in PBS\n    \"\"\"\n    tm_micro_re = re.compile(\n        r'(\\d{2}/\\d{2}/\\d{4}\\s\\d{2}:\\d{2}:\\d{2}\\.\\d{6})')\n\n    def validate_trace_job_lines(self, jid=None):\n        \"\"\"\n        Validate that tracejob lines have a high resolution time\n        stamp\n        \"\"\"\n        lines = self.server.log_lines(logtype='tracejob', id=jid, n='ALL')\n        if len(lines) > 1:\n            # Remove the first line as it is not a log line\n            lines = lines[1:]\n        for line in lines:\n            m = self.tm_micro_re.match(line)\n            if m is None:\n                # If this is an accounting log line, ignore it, as the\n                # accounting log does not have high res logging\n                line = line.split()\n                _msg = 'High resolution time stamp not found in log'\n                self.assertFalse(len(line) >= 3 and\n                                 (line[2].strip() != 'A'), _msg)\n        _msg = self.server.shortname + \\\n            ': High resolution time stamp found in log'\n        self.logger.info(_msg)\n\n    def validate_server_log_lines(self):\n        \"\"\"\n        Validates that the server_log lines have a high resolution\n        time stamp\n        \"\"\"\n        lines 
= self.server.log_lines(logtype=self.server, n=20)\n        for line in lines:\n            m = self.tm_micro_re.match(line)\n            _msg = 'High resolution time stamp not found in log'\n            self.assertTrue(m, _msg)\n        _msg = self.server.shortname + \\\n            ': High resolution time stamp found in log'\n        self.logger.info(_msg)\n\n    def switch_microsecondlogging(self, hostname=None, highrestimestamp=1):\n        \"\"\"\n        Set microsecond logging in pbs.conf\n        \"\"\"\n        if hostname is None:\n            hostname = self.server.hostname\n        a = {'PBS_LOG_HIGHRES_TIMESTAMP': highrestimestamp}\n        self.du.set_pbs_config(hostname=hostname, confs=a, append=True)\n        PBSInitServices().restart()\n        self.assertTrue(self.server.isUp(), 'Failed to restart PBS Daemons')\n\n    def test_disabled(self):\n        \"\"\"\n        Disable High res logging, and test that high res timestamp is not\n        present in the server log lines\n        \"\"\"\n        self.switch_microsecondlogging(highrestimestamp=0)\n        now = time.time()\n\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n        lines = self.server.log_lines(logtype=self.server,\n                                      starttime=now)\n\n        _msg = 'Found high resolution time stamp in log,' \\\n               ' it shouldn\\'t be there'\n\n        for line in lines:\n            m = self.tm_micro_re.match(line)\n            self.assertIsNone(m, _msg)\n\n        _msg = self.server.shortname + \\\n            ': High resolution time stamp correctly not set in log'\n        self.logger.info(_msg)\n\n    def test_disabled_tracejob(self):\n        \"\"\"\n        Disable High res logging, and test that high res timestamp is not\n        present in the tracejob output\n        \"\"\"\n        self.switch_microsecondlogging(highrestimestamp=0)\n\n        j = Job(TEST_USER)\n      
  jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n        lines = self.server.log_lines(logtype='tracejob', id=jid, n='ALL')\n        if len(lines) > 1:\n            # Remove the first line as it is not a log line\n            lines = lines[1:]\n\n        _msg = 'Found high resolution time stamp in tracejob, ' \\\n               'it should not be there'\n\n        for line in lines:\n            m = self.tm_micro_re.match(line)\n            self.assertIsNone(m, _msg)\n\n        _msg = self.server.shortname + \\\n            ': High resolution time stamp correctly not set in ' \\\n            'tracejob output'\n        self.logger.info(_msg)\n\n    def test_basic(self):\n        \"\"\"\n        Enable High resolution logging, restart server\n        and look for high resolution time stamp in server log\n        \"\"\"\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n        self.validate_server_log_lines()\n\n    def test_basic_tracejob(self):\n        \"\"\"\n        Enable High resolution logging, restart PBS Daemons\n        and look for high resolution time stamp in tracejob output\n        \"\"\"\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n        self.validate_trace_job_lines(jid=jid)\n\n    def test_env_variable_overwrite(self):\n        \"\"\"\n        Check that the environment variable overrides the pbs.conf value\n        \"\"\"\n        a = {'PBS_LOG_HIGHRES_TIMESTAMP': 0}\n        self.du.set_pbs_config(confs=a, append=True)\n        conf_path = self.du.parse_pbs_config()\n        pbs_init = os.path.join(os.sep, conf_path['PBS_EXEC'],\n                                'libexec', 'pbs_init.d')\n        cmd = copy.copy(self.du.sudo_cmd)\n        # No spaces around '=': the shell must see this as an environment\n        # variable assignment prefixing the restart command\n        cmd += ['PBS_LOG_HIGHRES_TIMESTAMP=1', pbs_init, 'restart']\n        self.du.run_cmd(cmd=cmd, as_script=True, wait_on_script=True)\n   
     j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n        self.validate_server_log_lines()\n        self.validate_trace_job_lines(jid=jid)\n"
  },
  {
    "path": "test/tests/functional/pbs_holidays.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport datetime\nfrom tests.functional import *\n\n\nclass TestHolidays(TestFunctional):\n\n    \"\"\"\n    This test suite tests if PBS scheduler's holidays file feature\n    works correctly\n    \"\"\"\n    days = [\"monday\", \"tuesday\", \"wednesday\", \"thursday\", \"friday\",\n            \"saturday\", \"sunday\"]\n    np_queue = \"np_workq\"\n    p_queue = \"p_workq\"\n    cur_year = datetime.datetime.today().year\n    allprime_msg = \"It is primetime.  It will never end\"\n    tom = datetime.date.today() + datetime.timedelta(days=1)\n    tom = tom.strftime(\"%m/%d/%Y\")\n    dayprime_msg = r\"It is primetime.  It will end in \\d+ seconds at \" +\\\n        tom + \" 00:00:00\"\n    daynp_msg = r\"It is non-primetime.  
It will end in \\d+ seconds at \" +\\\n        tom + \" 00:00:00\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n\n        # Enable DEBUG2 sched log messages\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events': 1023})\n\n    def test_missing_days(self):\n        \"\"\"\n        Test that scheduler correctly assumes 24hr prime-time for days that\n        are missing from the holidays file\n        \"\"\"\n        self.scheduler.holidays_delete_entry('a')\n\n        self.scheduler.holidays_set_year(self.cur_year)\n\n        # Set all days of the week as non-prime time and today as prime time\n        today_idx = datetime.datetime.today().weekday()\n        today = self.days[today_idx]\n        for i in range(7):\n            if i != today_idx:\n                self.scheduler.holidays_set_day(self.days[i], \"none\", \"all\")\n\n        ctime = time.time()\n        time.sleep(1)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {ATTR_scheduling: True})\n\n        # Verify that there's no entry in holidays file for today\n        ret = self.scheduler.holidays_get_day(today)\n        self.assertIsNone(ret['p'])\n        self.assertIsNone(ret['np'])\n        self.assertIsNone(ret['valid'])\n\n        # Verify that it's prime-time until tomorrow\n        self.scheduler.log_match(msg=self.dayprime_msg, regexp=True,\n                                 starttime=ctime)\n\n    def test_inconsistent_days(self):\n        \"\"\"\n        Test that scheduler correctly assumes 24hr prime-time for days that\n        have inconsistent data\n        \"\"\"\n        self.scheduler.holidays_delete_entry('a')\n\n        self.scheduler.holidays_set_year(self.cur_year)\n\n        # Set all days of the week as non-prime time except today\n        today_idx = datetime.datetime.today().weekday()\n        today = self.days[today_idx]\n        for i in range(7):\n            if i != today_idx:\n                self.scheduler.holidays_set_day(self.days[i], \"none\", 
\"all\")\n\n        # Set both prime and non-prime start times to 'all' for today\n        # and check that scheduler assumes that day as prime-time\n        self.scheduler.holidays_set_day(today, \"all\", \"all\")\n\n        # Verify that it's prime-time until tomorrow\n        ctime = time.time()\n        time.sleep(1)\n        self.server.manager(MGR_CMD_SET, SERVER, {ATTR_scheduling: True})\n        self.scheduler.log_match(msg=self.dayprime_msg, regexp=True,\n                                 starttime=ctime)\n\n        # Set both prime and non-prime start times to 'none' for today\n        self.scheduler.holidays_set_day(today, \"none\", \"none\")\n\n        # Verify that it's prime-time until tomorrow\n        ctime = time.time()\n        time.sleep(1)\n        self.server.manager(MGR_CMD_SET, SERVER, {ATTR_scheduling: True})\n        self.scheduler.log_match(msg=self.dayprime_msg, regexp=True,\n                                 starttime=ctime)\n\n    def test_only_year(self):\n        \"\"\"\n        Test that scheduler assumes all prime-time if there's only a year\n        entry in the holidays file\n        \"\"\"\n        self.scheduler.holidays_delete_entry('a')\n\n        self.scheduler.holidays_set_year(self.cur_year)\n\n        # Verify that the holidays file has only the year line\n        h_content = self.scheduler.holidays_parse_file()\n        self.assertEqual(len(h_content), 1)\n        self.assertEqual(h_content[0], \"YEAR\\t%d\" % self.cur_year)\n\n        # Verify that it's 24x7 prime-time\n        ctime = time.time()\n        time.sleep(1)\n        self.server.manager(MGR_CMD_SET, SERVER, {ATTR_scheduling: True})\n        self.scheduler.log_match(msg=self.allprime_msg, regexp=True,\n                                 starttime=ctime)\n\n    def test_empty_holidays_file(self):\n        \"\"\"\n        Test that the scheduler assumes all prime-time if the holidays\n        file is completely empty\n        \"\"\"\n        ctime = time.time()\n     
   self.scheduler.holidays_delete_entry('a')\n\n        # Verify that the holidays file is empty\n        h_content = self.scheduler.holidays_parse_file()\n        self.assertEqual(len(h_content), 0)\n\n        # Verify that it's 24x7 prime-time\n        ctime2 = time.time()\n        time.sleep(1)\n        self.server.manager(MGR_CMD_SET, SERVER, {ATTR_scheduling: True})\n        self.scheduler.log_match(msg=self.allprime_msg, regexp=True,\n                                 starttime=ctime2)\n\n        # Verify that scheduler didn't log a message for out of date file\n        msg = \"holidays;The holiday file is out of date; please update it.\"\n        self.scheduler.log_match(msg, starttime=ctime, existence=False,\n                                 max_attempts=5)\n\n    def test_stale_year(self):\n        \"\"\"\n        Test that the scheduler logs an out-of-date message and assumes\n        all prime-time for a holidays file with just a stale year line\n        \"\"\"\n        ctime = time.time()\n        self.scheduler.holidays_delete_entry('a')\n\n        self.scheduler.holidays_set_year(self.cur_year - 1)\n\n        # Verify that the holidays file has only the year line\n        h_content = self.scheduler.holidays_parse_file()\n        self.assertEqual(len(h_content), 1)\n        self.assertEqual(h_content[0], \"YEAR\\t%d\" % (self.cur_year - 1))\n\n        # Verify that it's 24x7 prime-time\n        ctime2 = time.time()\n        time.sleep(1)\n        self.server.manager(MGR_CMD_SET, SERVER, {ATTR_scheduling: True})\n        self.scheduler.log_match(msg=self.allprime_msg, regexp=True,\n                                 starttime=ctime2)\n\n        # Verify that scheduler logged a message for out of date file\n        msg = \"holidays;The holiday file is out of date; please update it.\"\n        self.scheduler.log_match(msg, starttime=ctime)\n\n    def test_commented_holidays_file(self):\n        \"\"\"\n        Test that the scheduler assumes all prime-time 
if the holidays\n        file is completely commented out\n        \"\"\"\n        ctime = time.time()\n        self.scheduler.holidays_delete_entry('a')\n\n        content = \"\"\"# YEAR 1970\n#  weekday 0600  1730\n#  saturday  none  all\n#  sunday  none  all\"\"\"\n\n        self.scheduler.holidays_write_file(content=content)\n\n        # Verify that the holidays file has 4 lines\n        h_content = self.du.cat(filename=self.scheduler.holidays_file,\n                                sudo=True)['out']\n        self.assertEqual(len(h_content), 4)\n\n        # Verify that it's 24x7 prime-time\n        ctime2 = time.time()\n        time.sleep(1)\n        self.server.manager(MGR_CMD_SET, SERVER, {ATTR_scheduling: True})\n        self.scheduler.log_match(msg=self.allprime_msg, regexp=True,\n                                 starttime=ctime2)\n\n        # Verify that scheduler didn't log a message for out of date file\n        msg = \"holidays;The holiday file is out of date; please update it.\"\n        self.scheduler.log_match(msg, starttime=ctime, existence=False,\n                                 max_attempts=5)\n\n    def test_non_prime(self):\n        \"\"\"\n        Test that non-prime time set via holidays file works correctly\n        \"\"\"\n        self.scheduler.holidays_delete_entry('a')\n\n        self.scheduler.holidays_set_year(self.cur_year)\n\n        # Set all days of the week as prime time except today\n        today_idx = datetime.datetime.today().weekday()\n        today = self.days[today_idx]\n        for i in range(7):\n            if i != today_idx:\n                self.scheduler.holidays_set_day(self.days[i], \"all\", \"none\")\n\n        # Set today as all non-prime time\n        self.scheduler.holidays_set_day(today, \"none\", \"all\")\n\n        # Verify that it's non-prime time until tomorrow\n        ctime = time.time()\n        time.sleep(1)\n        self.server.manager(MGR_CMD_SET, SERVER, {ATTR_scheduling: True})\n        
self.scheduler.log_match(msg=self.daynp_msg, regexp=True,\n                                 starttime=ctime)\n\n    def test_missing_year(self):\n        \"\"\"\n        Test that scheduler assumes all prime time if the year entry is\n        missing from holidays file\n        \"\"\"\n        ctime = time.time()\n        self.scheduler.holidays_delete_entry('a')\n\n        # Create a holidays file with no year entry and all days set to\n        # 24hrs non-prime time\n        content = \"\"\"  weekday none  all\n  saturday  none  all\n  sunday  none  all\"\"\"\n\n        self.scheduler.holidays_write_file(content=content)\n\n        # Verify that the holidays file has 3 lines\n        h_content = self.du.cat(filename=self.scheduler.holidays_file,\n                                sudo=True)['out']\n        self.assertEqual(len(h_content), 3)\n\n        # Verify that it's 24x7 prime-time\n        ctime2 = time.time()\n        time.sleep(1)\n        self.server.manager(MGR_CMD_SET, SERVER, {ATTR_scheduling: True})\n        self.scheduler.log_match(msg=self.allprime_msg, regexp=True,\n                                 starttime=ctime2)\n\n        # Verify that scheduler didn't log a message for out of date file\n        msg = \"holidays;The holiday file is out of date; please update it.\"\n        self.scheduler.log_match(msg, starttime=ctime, existence=False,\n                                 max_attempts=5)\n\n    def test_prime_events_calendar(self):\n        \"\"\"\n        Test that for a commented out holidays file, scheduler doesn't\n        add policy change events to the calendar\n        \"\"\"\n        self.scheduler.set_sched_config({'strict_ordering': \"true    all\"})\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events': 2047})\n\n        self.scheduler.holidays_delete_entry('a')\n\n        # Create a holidays file that is completely commented out\n        content = \"\"\"# YEAR 1970\n#  weekday 0600  
1730\n#  saturday  none  all\n#  sunday  none  all\"\"\"\n        self.scheduler.holidays_write_file(content=content)\n\n        time.sleep(1)\n        ctime = time.time()\n\n        # Verify that it's 24x7 prime-time\n        ctime2 = time.time()\n        time.sleep(1)\n        self.server.manager(MGR_CMD_SET, SERVER, {ATTR_scheduling: True})\n        self.scheduler.log_match(msg=self.allprime_msg, regexp=True,\n                                 starttime=ctime2)\n\n        # Set ncpus on vnode to 1\n        attrs = {ATTR_rescavail + \".ncpus\": '1'}\n        self.server.manager(MGR_CMD_SET, NODE, attrs)\n\n        # Submit a job that will occupy all resources, without a walltime\n        a = {'Resource_List.select': '1:ncpus=1'}\n        j = Job(TEST_USER, attrs=a)\n        self.server.submit(j)\n\n        # Submit another job which will get calendared\n        j = Job(TEST_USER, attrs=a)\n        self.server.submit(j)\n\n        # Verify that scheduler did not calendar any policy change events\n        msg = r\".*Simulation: Policy change.*\"\n        self.scheduler.log_match(msg, regexp=True, starttime=ctime,\n                                 existence=False,\n                                 max_attempts=5)\n\n    def test_week_day_after_weekday(self):\n        \"\"\"\n        Test that an individual weekday's entry after the 'weekday'\n        entry takes precedence in the holidays file\n        \"\"\"\n        self.scheduler.holidays_delete_entry('a')\n\n        self.scheduler.holidays_set_year(self.cur_year)\n\n        # Add a weekday entry with all prime-time\n        self.scheduler.holidays_set_day(\"weekday\", \"all\", \"none\")\n\n        # Set today as all non-prime time\n        today_idx = datetime.datetime.today().weekday()\n        today = self.days[today_idx]\n        self.scheduler.holidays_set_day(today, \"none\", \"all\")\n\n        # Verify that it's non-prime time until tomorrow\n        ctime = time.time()\n        time.sleep(1)\n        
self.server.manager(MGR_CMD_SET, SERVER, {ATTR_scheduling: True})\n        self.scheduler.log_match(msg=self.daynp_msg, regexp=True,\n                                 starttime=ctime)\n\n    def test_year_0(self):\n        \"\"\"\n        Test that setting holidays file's year entry to 0 causes 24x7\n        prime-time\n        \"\"\"\n        ctime = time.time()\n        self.scheduler.holidays_delete_entry('a')\n\n        self.scheduler.holidays_set_year('0')\n\n        # Verify that the holidays file has only the year line\n        h_content = self.scheduler.holidays_parse_file()\n        self.assertEqual(len(h_content), 1)\n        self.assertEqual(h_content[0], \"YEAR\\t0\")\n\n        # Verify that it's 24x7 prime-time\n        ctime2 = time.time()\n        time.sleep(1)\n        self.server.manager(MGR_CMD_SET, SERVER, {ATTR_scheduling: True})\n        self.scheduler.log_match(msg=self.allprime_msg, regexp=True,\n                                 starttime=ctime2)\n\n        # Verify that scheduler didn't log a message for out of date file\n        msg = \"holidays;The holiday file is out of date; please update it.\"\n        self.scheduler.log_match(msg, starttime=ctime, existence=False,\n                                 max_attempts=5)\n"
  },
  {
    "path": "test/tests/functional/pbs_hook_config_os_env.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nfrom tests.functional import *\n\n\n@tags('hooks')\nclass TestHookConfigOSEnv(TestFunctional):\n    \"\"\"\n    Test suite to check if hook can access\n    os.environ when a config file is configured\n    for a hook\n    \"\"\"\n\n    def test_hook_config_os_env(self):\n        \"\"\"\n        Create a hook, import a config file for the hook\n        and test the os.environ call in the hook\n        \"\"\"\n        hook_name = \"test_hook\"\n        hook_body = \"\"\"\nimport pbs\nimport os\npbs.logmsg(pbs.LOG_DEBUG, \"Printing os.environ %s\" % os.environ)\n\"\"\"\n        cfg = \"\"\"\n{'hook_config':'testhook'}\n\"\"\"\n        a = {'event': 'queuejob', 'enabled': 'True'}\n        self.server.create_import_hook(hook_name, a, hook_body)\n        fn = self.du.create_temp_file(body=cfg)\n        a = {'content-type': 'application/x-config',\n             'content-encoding': 'default',\n             'input-file': fn}\n        self.server.manager(MGR_CMD_IMPORT, HOOK, a, hook_name)\n        a = {'Resource_List.select': '1:ncpus=1'}\n        j = Job(TEST_USER, a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.log_match(\"Printing os.environ\", self.server.shortname)\n"
  },
  {
    "path": "test/tests/functional/pbs_hook_crosslink_mom.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\n@tags('hooks')\n@requirements(num_moms=2)\nclass TestPbsHookCrossLinkMom(TestFunctional):\n    \"\"\"\n    When a hook updates attributes of vnodes not belonging to the MoM on which\n    the hook is running, the server wrongly cross-links the MoM with the vnode.\n    This test suite tests the fix for this issue and needs two MoMs.\n    \"\"\"\n    def setUp(self):\n        TestFunctional.setUp(self)\n\n        if len(self.moms) != 2:\n            self.skipTest('test requires two MoMs as input, ' +\n                          'use -p moms=<mom1>:<mom2>')\n\n        self.momA = self.moms.values()[0]\n        self.momB = self.moms.values()[1]\n\n        self.hostA = self.momA.shortname\n        self.hostB = self.momB.shortname\n\n        a = {'job_history_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n    def test_crosslink(self):\n        \"\"\"\n        This test creates an execjob_end hook which updates an attribute of\n        all vnodes. 
A job is submitted that runs on two different MoM hosts.\n        When the job has finished, the test checks if the server did the wrong\n        cross-linking or not.\n        \"\"\"\n        status = self.server.status(NODE, id=self.hostA)\n        Mom1_before = status[0][ATTR_NODE_Mom]\n\n        status = self.server.status(NODE, id=self.hostB)\n        Mom2_before = status[0][ATTR_NODE_Mom]\n\n        hook_name = \"job_end\"\n        hook_body = \"\"\"\nimport pbs\n\nthis_event = pbs.event()\nif this_event.type == pbs.EXECJOB_END:\n    job = this_event.job\n    exec_vnode = str(job.exec_vnode).replace(\"(\", \"\").replace(\")\", \"\")\n    vnodes = sorted(set([x.partition(':')[0]\n                        for x in exec_vnode.split('+')]))\n    for h in vnodes:\n        try:\n            pbs.logjobmsg(job.id, \"vnode is %s ======\" % h)\n            pbs.event().vnode_list[h].current_eoe = None\n        except:\n            pass\n\nthis_event.accept()\n\"\"\"\n\n        a = {'event': \"execjob_end\", 'enabled': 'true', 'debug': 'true'}\n        self.server.create_import_hook(hook_name, a, hook_body)\n\n        select = \"1:host=\" + self.hostA + \":ncpus=1+1:host=\" \\\n            + self.hostB + \":ncpus=1\"\n        a = {'Resource_List.select': select,  ATTR_k: 'oe'}\n\n        j = Job(TEST_USER, a)\n        j.set_sleep_time(1)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'F'}, id=jid, extend='x')\n\n        status = self.server.status(NODE, id=self.hostA)\n        Mom = status[0][ATTR_NODE_Mom]\n        self.assertEqual(Mom, Mom1_before)\n\n        status = self.server.status(NODE, id=self.hostB)\n        Mom = status[0][ATTR_NODE_Mom]\n        self.assertEqual(Mom, Mom2_before)\n"
  },
  {
    "path": "test/tests/functional/pbs_hook_debug_input.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\nimport os\nimport fnmatch\nfrom ptl.utils.pbs_logutils import PBSLogUtils\n\n\nclass TestHookDebugInput(TestFunctional):\n    \"\"\"\n    Tests related to hook debug input files\n    \"\"\"\n    def setUp(self):\n        TestFunctional.setUp(self)\n        if not hasattr(self, 'server_hooks_tmp_dir'):\n            self.server_hooks_tmp_dir = \\\n                os.path.join(self.server.pbs_conf['PBS_HOME'],\n                             'server_priv', 'hooks', 'tmp')\n        if not hasattr(self, 'mom_hooks_tmp_dir'):\n            self.mom_hooks_tmp_dir = \\\n                self.mom.get_formed_path(self.mom.pbs_conf['PBS_HOME'],\n                                         'mom_priv', 'hooks', 'tmp')\n\n    def remove_files_match(self, pattern, mom=False):\n        \"\"\"\n        Remove hook debug files in hooks/tmp folder that\n        match pattern\n        \"\"\"\n        if mom:\n            hooks_tmp_dir = self.mom_hooks_tmp_dir\n            a = self.mom.listdir(path=hooks_tmp_dir, sudo=True)\n        else:\n            hooks_tmp_dir = self.server_hooks_tmp_dir\n            a = self.du.listdir(path=hooks_tmp_dir, sudo=True)\n\n        for item in a:\n            if fnmatch.fnmatch(item, pattern):\n                if mom:\n                    self.mom.rm(path=item, sudo=True)\n                    ret = 
self.mom.isfile(path=item, sudo=True)\n                else:\n                    self.du.rm(path=item, sudo=True)\n                    ret = self.du.isfile(path=item, sudo=True)\n\n                # Check if the file was removed\n                self.assertFalse(ret)\n\n    def match_queue_name_in_input_file(self, input_file_pattern, qname):\n        \"\"\"\n        Assert that qname appears in the hook debug input file\n        that matches input_file_pattern\n        \"\"\"\n        input_file = None\n        for item in self.du.listdir(path=self.server_hooks_tmp_dir, sudo=True):\n            if fnmatch.fnmatch(item, input_file_pattern):\n                input_file = item\n                break\n        self.assertTrue(input_file is not None)\n        with PBSLogUtils().open_log(input_file, sudo=True) as f:\n            search_str = 'pbs.event().job.queue=%s' % qname\n            self.assertTrue(search_str in f.read().decode())\n        self.remove_files_match(input_file_pattern)\n\n    def match_in_debug_file(self, input_file_pattern, search_list, mom=False):\n        \"\"\"\n        Assert that all the strings in 'search_list' appear in the hook\n        debug file that matches input_file_pattern\n        \"\"\"\n        input_file = None\n        if mom:\n            hooks_tmp_dir = self.mom_hooks_tmp_dir\n            a = self.mom.listdir(path=hooks_tmp_dir, sudo=True)\n        else:\n            hooks_tmp_dir = self.server_hooks_tmp_dir\n            a = self.du.listdir(path=hooks_tmp_dir, sudo=True)\n\n        for item in a:\n            if fnmatch.fnmatch(item, input_file_pattern):\n                input_file = item\n                break\n        self.assertTrue(input_file is not None)\n        if mom:\n            ret = self.mom.cat(filename=input_file, sudo=True)\n        else:\n            ret = self.du.cat(filename=input_file, sudo=True)\n\n        if ret['rc'] == 0 and len(ret['out']) > 0:\n            flag = False\n            if(all(x in ret['out'] 
for x in search_list)):\n                flag = True\n            self.assertTrue(flag)\n        self.remove_files_match(input_file_pattern, mom)\n\n    def test_queuejob_hook_debug_input_has_queue_name(self):\n        \"\"\"\n        Test that user requested queue name is written to\n        queuejob hook debug input file\n        \"\"\"\n        hook_name = \"queuejob_debug\"\n        hook_body = (\"import pbs\\n\"\n                     \"pbs.event().accept()\")\n        attr = {'enabled': 'true', 'event': 'queuejob', 'debug': 'true'}\n        self.server.create_import_hook(hook_name, attr, hook_body)\n\n        new_queue = 'happyq'\n        attr = {ATTR_qtype: 'execution', ATTR_enable: 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, attr, id=new_queue)\n\n        input_file_pattern = os.path.join(self.server_hooks_tmp_dir,\n                                          'hook_queuejob_%s*.in' % hook_name)\n        self.remove_files_match(input_file_pattern)\n\n        j1 = Job(TEST_USER)\n        self.server.submit(j1)\n\n        self.match_queue_name_in_input_file(input_file_pattern,\n                                            self.server.default_queue)\n\n        attr = {ATTR_queue: new_queue}\n        j2 = Job(TEST_USER, attrs=attr)\n        self.server.submit(j2)\n        self.match_queue_name_in_input_file(input_file_pattern, new_queue)\n\n    def test_mom_hook_debug_data(self):\n        \"\"\"\n        Test that a debug enabled mom hook produces expected debug data.\n        \"\"\"\n        def_que = self.server.default_queue\n        hname = \"debug\"\n        hook_body = \"\"\"\nimport pbs\ns = pbs.server()\nq = s.queue(\"%s\")\nfor vn in s.vnodes():\n    pbs.logmsg(pbs.LOG_DEBUG, \"found vn=\" + vn.name)\npbs.event().accept()\n\"\"\" % def_que\n        attr = {'enabled': 'true', 'event': 'execjob_begin', 'debug': 'true'}\n        self.server.create_import_hook(hname, attr, hook_body)\n\n        data_file_pattern = self.mom.get_formed_path(\n     
                        self.mom_hooks_tmp_dir,\n                             'hook_execjob_begin_%s*.data'\n                             % hname)\n        self.remove_files_match(data_file_pattern, mom=True)\n\n        j1 = Job(TEST_USER)\n        j1.set_sleep_time(5)\n        jid = self.server.submit(j1)\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid)\n\n        search = [\"pbs.server().queue(%s).queue_type=Execution\" % def_que]\n        search.append(\"pbs.server().vnode(%s).ntype=0\" % self.mom.shortname)\n        self.logger.info(search)\n        self.match_in_debug_file(data_file_pattern, search, mom=True)\n"
  },
  {
    "path": "test/tests/functional/pbs_hook_debug_nocrash.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestHookDebugNoCrash(TestFunctional):\n\n    \"\"\"\n    This tests to make sure the following does not occur:\n          Hook debug causes file descriptor leak that crashes PBS server\n\n    PRE: Have 3 queuejob hooks, qjob1, qjob2, qjob3 with order=1, order=2,\n         order=2 respectively. qjob1 and qjob2 have debug=True while\n         qjob3 has debug=False. 
Try submitting 1000 jobs.\n    POST: On a fixed PBS, this test case will run to completion.\n          On a PBS containing the bug, the test could fail on a server crash,\n          a failure in qsub with \"Invalid credential\", or even a qstat\n          hang with ptl returning:\n             corretja: /opt/pbs/bin/qstat -f 4833.corretja\n              2016-07-08 12:56:52,799 INFO     TIMEDOUT\n          and server_logs having the message \"Too many open files\".\n\n         This is because a previous bug causes pbs_server to not close the\n         debug output file descriptors opened by subsequent hook executions.\n\n         NOTE: This is assuming on one's local system, we have the\n                following limit:\n                # ulimit -a\n                ...\n                open files                      (-n) 1024\n    \"\"\"\n\n    # Class variables\n    open_files_limit_expected = 1024\n\n    def setUp(self):\n        ret = self.du.run_cmd(\n            self.server.hostname, [\n                'ulimit', '-n'], as_script=True, logerr=False)\n        self.assertEqual(ret['rc'], 0)\n        open_files_limit = ret['out'][0]\n        if (open_files_limit == \"unlimited\") or (\n                int(open_files_limit) > self.open_files_limit_expected):\n            msg = \"\\nThis test requires 'open files' system limit\"\n            msg += \" to be <= %d \" % self.open_files_limit_expected\n            msg += \"(current value=%s).\" % open_files_limit\n            self.skipTest(msg)\n        TestFunctional.setUp(self)\n\n    @timeout(2400)\n    def test_hook_debug_no_crash(self):\n\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"hook %s executed\" % (e.hook_name,))\n\"\"\"\n        hook_name = \"qjob1\"\n        a = {\n            'event': \"queuejob\",\n            'enabled': 'True',\n            'debug': 'True',\n            'order': 1}\n        rv = self.server.create_import_hook(\n            hook_name,\n            
a,\n            hook_body,\n            overwrite=True)\n        self.assertTrue(rv)\n\n        hook_name = \"qjob2\"\n        a = {\n            'event': \"queuejob\",\n            'enabled': 'True',\n            'debug': 'True',\n            'order': 2}\n        rv = self.server.create_import_hook(\n            hook_name,\n            a,\n            hook_body,\n            overwrite=True)\n        self.assertTrue(rv)\n\n        hook_name = \"qjob3\"\n        a = {\n            'event': \"queuejob\",\n            'enabled': 'True',\n            'debug': 'False',\n            'order': 2}\n        rv = self.server.create_import_hook(\n            hook_name,\n            a,\n            hook_body,\n            overwrite=True)\n        self.assertTrue(rv)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        for i in range(1000):\n            j = Job(TEST_USER)\n            a = {\n                'Resource_List.select': '1:ncpus=1',\n                'Resource_List.walltime': 3600}\n            j.set_attributes(a)\n            j.set_sleep_time(\"5\")\n            jid = self.server.submit(j)\n            self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n"
  },
  {
    "path": "test/tests/functional/pbs_hook_exechost_periodic.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\ncommon_periodic_hook_script = \"\"\"import pbs\npbs.logmsg(pbs.LOG_DEBUG, \"In exechost_periodic hook\")\nvn = pbs.event().vnode_list\nhost = pbs.get_local_nodename()\nnode = ''\nfor k in vn.keys():\n    if host in k:\n        node = k\n        break\nvn[node].resources_available[\"mem\"] = pbs.size(\"90gb\")\nother_node = \"invalid_node\"\nif other_node not in vn:\n    vn[other_node] = pbs.vnode(other_node)\nvn[other_node].resources_available[\"mem\"] = pbs.size(\"9gb\")\n\"\"\"\n\n\nclass TestHookExechostPeriodic(TestFunctional):\n    \"\"\"\n    Tests for the exechost_periodic hook\n    \"\"\"\n    def setUp(self):\n        TestFunctional.setUp(self)\n\n    def test_multiple_exechost_periodic_hooks(self):\n        \"\"\"\n        This test sets two exechost_periodic hooks and restarts the mom,\n        which tests whether both hooks successfully run on node\n        startup and the mom does not crash.\n        \"\"\"\n        self.attr = {'event': 'exechost_periodic',\n                     'enabled': 'True', 'freq': '30'}\n        self.hook_body1 = (\"import pbs\\n\"\n                           \"e = pbs.event()\\n\"\n                           \"pbs.logmsg(pbs.EVENT_DEBUG,\\n\"\n                           \"\\t\\\"exechost_periodic hook1\\\")\\n\"\n                           \"e.accept()\\n\")\n        
self.hook_body2 = (\"import pbs\\n\"\n                           \"e = pbs.event()\\n\"\n                           \"pbs.logmsg(pbs.EVENT_DEBUG,\\n\"\n                           \"\\t\\\"exechost_periodic hook2\\\")\\n\"\n                           \"e.accept()\\n\")\n        self.server.create_import_hook(\"exechost_periodic1\",\n                                       self.attr, self.hook_body1)\n        self.server.create_import_hook(\"exechost_periodic2\",\n                                       self.attr, self.hook_body2)\n        self.mom.restart()\n        self.assertTrue(self.mom.isUp())\n        self.mom.log_match(\"exechost_periodic hook1\",\n                           max_attempts=5, interval=5)\n        self.mom.log_match(\"exechost_periodic hook2\",\n                           max_attempts=5, interval=5)\n\n    @skipOnCpuSet\n    @requirements(num_moms=2)\n    def test_exechost_periodic_accept(self):\n        \"\"\"\n        Test exechost_periodic which accepts event and verify that\n        error is thrown when updating resources of a vnode\n        which is owned by different mom\n        \"\"\"\n        self.momA = self.moms.values()[0]\n        self.hostA = self.momA.shortname\n        self.momB = self.moms.values()[1]\n        self.hostB = self.momB.shortname\n        hook_name = \"periodic\"\n        hook_attrs = {'event': 'exechost_periodic', 'enabled': 'True'}\n        hook_body = common_periodic_hook_script\n        self.server.create_import_hook(hook_name, hook_attrs, hook_body)\n\n        exp_msg = \"In exechost_periodic hook\"\n        for mom in self.moms.values():\n            mom.log_match(exp_msg)\n\n        other_node = \"invalid_node\"\n        common_msg = \" as it is owned by a different mom\"\n        common_msg2 = \"resources_available.mem=9gb per mom hook request\"\n\n        exp_msg1 = \"autocreated vnode %s\" % (other_node)\n        msg1 = \"%s;Updated vnode %s's resource \" % (self.momA.hostname,\n                              
                      other_node)\n        exp_msg2 = msg1 + common_msg2\n        msg2 = \"%s;Not allowed to update vnode '%s',\" % (self.momB.hostname,\n                                                         other_node)\n        exp_msg3 = msg2 + common_msg\n\n        for msg in [exp_msg1, exp_msg2, exp_msg3]:\n            self.server.log_match(msg)\n\n        node_attribs = {'resources_available.mem': \"90gb\"}\n        self.server.expect(NODE, node_attribs, id=self.momB.shortname)\n\n    @skipOnCpuSet\n    @requirements(num_moms=2)\n    def test_exechost_periodic_alarm(self):\n        \"\"\"\n        Test exechost_periodic with alarm timeout in hook script\n        \"\"\"\n        hook_name = \"periodic\"\n        hook_attrs = {'event': 'exechost_periodic', 'enabled': 'True',\n                      'alarm': '5'}\n        hook_script = \"\"\"time.sleep(10)\"\"\"\n        hook_body = \"\"\"import time \\n\"\"\"\n        hook_body += common_periodic_hook_script + hook_script\n        self.server.create_import_hook(hook_name, hook_attrs, hook_body)\n        log_msg = \"alarm call while running exechost_periodic hook\"\n        log_msg += \" '%s', request rejected\" % hook_name\n        exp_msg = [\"In exechost_periodic hook\",\n                   log_msg,\n                   \"Non-zero exit status 253 encountered for periodic hook\",\n                   \"exechost_periodic request rejected by '%s'\" % hook_name]\n        for mom in self.moms.values():\n            for msg in exp_msg:\n                mom.log_match(msg)\n\n    @skipOnCpuSet\n    @requirements(num_moms=2)\n    def test_exechost_periodic_error(self):\n        \"\"\"\n        Test exechost_periodic with an unhandled exception in the hook script\n        \"\"\"\n        hook_name = \"periodic\"\n        hook_attrs = {'event': 'exechost_periodic', 'enabled': 'True'}\n        hook_script = \"\"\"raise Exception('x')\"\"\"\n        hook_body = common_periodic_hook_script + hook_script\n        
self.server.create_import_hook(hook_name, hook_attrs, hook_body)\n\n        common_msg = \"PBS server internal error (15011) in \"\n        common_msg += \"Error evaluating Python script\"\n        exp_msg = [\"In exechost_periodic hook\",\n                   common_msg + \", <class 'Exception'>\",\n                   common_msg + \", x\",\n                   \"Non-zero exit status 254 encountered for periodic hook\",\n                   \"exechost_periodic request rejected by '%s'\" % hook_name]\n        for mom in self.moms.values():\n            for msg in exp_msg:\n                mom.log_match(msg)\n\n    @skipOnCpuSet\n    @requirements(num_moms=2)\n    def test_exechost_periodic_custom_resc(self):\n        \"\"\"\n        Test setting custom resource setting on vnode using exechost_periodic\n        hook\n        \"\"\"\n        self.momB = self.moms.values()[1]\n        self.hostB = self.momB.shortname\n        hook_name = \"periodic\"\n        hook_attrs = {'event': 'exechost_periodic', 'enabled': 'True'}\n        hook_script = \"\"\"vn[node].resources_available[\"foo\"] = True\"\"\"\n        hook_body = common_periodic_hook_script + hook_script\n        self.server.create_import_hook(hook_name, hook_attrs, hook_body)\n        node_attribs = {'resources_available.foo': True}\n        self.server.expect(NODE, node_attribs, id=self.hostB)\n"
  },
  {
    "path": "test/tests/functional/pbs_hook_execjob_abort.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport time\nfrom tests.functional import *\nfrom ptl.utils.pbs_logutils import PBSLogUtils\n\n\n@requirements(num_moms=3)\nclass TestPbsExecjobAbort(TestFunctional):\n    \"\"\"\n    Tests the hook event execjob_abort for when a job prematurely exits\n    during startup and the epilogue hook or end hook may not always execute.\n    \"\"\"\n    logutils = PBSLogUtils()\n\n    def setUp(self):\n        if len(self.moms) != 3:\n            self.skipTest('test requires three MoMs as input, ' +\n                          'use -p moms=<mom1>:<mom2>:<mom3>')\n        TestFunctional.setUp(self)\n        self.momC = self.moms.values()[2]\n\n        # execjob_abort hook\n        self.abort_hook_body = \"\"\"import pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"called execjob_abort hook\")\n\ndef print_attribs(pbs_obj, header):\n    for a in pbs_obj.attributes:\n        v = getattr(pbs_obj, a)\n        if v and str(v) != \"\":\n            pbs.logmsg(pbs.LOG_DEBUG, \"%s: %s = %s\" % (header, a, v))\nprint_attribs(e.job, \"JOB\")\n\nfor vn in e.vnode_list:\n    v = e.vnode_list[vn]\n    print_attribs(v, \"vnode_list[\" + vn + \"]\")\n\"\"\"\n        # instantiate execjob_abort hook\n        hook_event = 'execjob_abort'\n        hook_name = 'abort'\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, 
self.abort_hook_body)\n\n        # execjob_abort hook with sleep\n        self.abort1_hook_body = \"\"\"import pbs\nimport time\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"called execjob_abort hook\")\ntime.sleep(2)\n\"\"\"\n        # execjob_prologue hook, unhandled exception\n        self.prolo_hook_body = \"\"\"import pbs\ne=pbs.event()\njob=e.job\nif %s job.in_ms_mom():\n    raise NameError\n\"\"\"\n        # execjob_launch hook, unhandled exception on MS mom\n        self.launch_hook_body = \"\"\"import pbs\ne=pbs.event()\njob=e.job\nif job.in_ms_mom():\n    raise NameError\n\"\"\"\n        # execjob_end hook\n        self.end_hook_body = \"\"\"import pbs\ne=pbs.event()\ne.reject(\"end hook rejected\")\n\"\"\"\n        # execjob_begin hook with sleep\n        self.begin_hook_body = \"\"\"import pbs\nimport time\ne=pbs.event()\ne.job.delete()\npbs.logmsg(pbs.LOG_DEBUG, \"called execjob_begin hook with job.delete()\")\ntime.sleep(2)\n\"\"\"\n        # job used in the tests\n        a = {ATTR_l + '.select': '3:ncpus=1', ATTR_l + '.place': 'scatter'}\n        self.j = Job(TEST_USER, attrs=a)\n\n    def test_execjob_abort_ms_prologue(self):\n        \"\"\"\n        An execjob_abort hook is executed in the primary mom and then in the\n        connected sister moms when a job has problems starting up and needs\n        to be aborted because the execjob_prologue hook rejected it in the\n        primary mom. 
Job is requeued, gets held (H state).\n        \"\"\"\n        # instantiate execjob_prologue hook to run on MS mom\n        hook_event = 'execjob_prologue'\n        hook_name = 'prolo'\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(\n            hook_name, a, self.prolo_hook_body % (''))\n        # Submit a job that eventually goes in H state\n        start_time = time.time()\n        j1attr = {ATTR_l + '.select': '3:ncpus=1',\n                  ATTR_l + '.place': 'scatter',\n                  ATTR_W: 'run_count=18'}\n        j1 = Job(TEST_USER, attrs=j1attr)\n        jid = self.server.submit(j1)\n        msg = \"Job;%s;Job requeued, execution node  down\" % jid\n        self.server.log_match(msg, starttime=start_time)\n        # Check for abort hook message in each of the moms\n        msg = \"called execjob_abort hook\"\n        for mom in self.moms.values():\n            mom.log_match(msg, starttime=start_time)\n        self.server.expect(JOB, {ATTR_state: 'H'}, id=jid)\n\n    def test_execjob_abort_exit_job_launch_reject(self):\n        \"\"\"\n        An execjob_abort hook is executed in the primary mom and then in the\n        connected sister moms when a job has problems starting up and needs\n        to be aborted because the execjob_launch hook rejected it in the\n        primary mom. 
Job exits.\n        \"\"\"\n        # instantiate execjob_launch hook to run on primary moms\n        hook_event = 'execjob_launch'\n        hook_name = 'launch'\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(\n            hook_name, a, self.launch_hook_body)\n        # Submit a job\n        start_time = time.time()\n        jid = self.server.submit(self.j)\n        self.server.expect(JOB, 'exec_host', id=jid, op=SET)\n        job_stat = self.server.status(JOB, id=jid)\n        exechost = job_stat[0]['exec_host'].split('/')[0]\n        mom_superior = self.moms[exechost]\n        msg = \"Job;%s;execjob_launch hook 'launch' \" % (\n            jid) + \"encountered an exception, request rejected\"\n        mom_superior.log_match(msg, starttime=start_time)\n        # Check for abort hook message in each of the moms\n        msg = \"called execjob_abort hook\"\n        for mom in self.moms.values():\n            mom.log_match(msg, starttime=start_time)\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid, offset=1)\n\n    def msg_order(self, node, msg1, msg2, stime):\n        \"\"\"\n        Checks if msg1 appears after stime, and msg1 appears before msg2 in\n        mom logs of node. Returns date and time when msg1 and msg2 appeared.\n        \"\"\"\n        # msg1 appears before msg2\n        (_, str1) = node.log_match(msg1, starttime=stime)\n        date_time1 = str1.split(\";\")[0]\n        epoch1 = self.logutils.convert_date_time(date_time1)\n        # Use epoch1 to mark the starttime of msg2\n        (_, str2) = node.log_match(msg2, starttime=epoch1)\n        date_time2 = str2.split(\";\")[0]\n        return (date_time1, date_time2)\n\n    def test_execjob_abort_sis_joinjob_requeue(self):\n        \"\"\"\n        An execjob_abort hook is executed on a sister mom when a sister mom\n        fails to join job. On connected primary mom an execjob_abort hook is\n        executed first then execjob_end hook. 
On the connected sister mom only an\n        execjob_abort hook is executed. Job gets requeued.\n        \"\"\"\n        # instantiate execjob_abort hook with sleep\n        hook_event = 'execjob_abort'\n        hook_name = 'abort'\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.abort1_hook_body)\n        # instantiate execjob_end hook\n        hook_event = 'execjob_end'\n        hook_name = 'end'\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(\n            hook_name, a, self.end_hook_body)\n\n        # Simulate a sister failure to join job\n        # kill -STOP momC, submit a multi-node job, kill -9 momC\n        self.momC.signal(\"-STOP\")\n        stime = time.time()\n        jid = self.server.submit(self.j)\n        self.server.expect(JOB, 'exec_host', id=jid, op=SET)\n        job_stat = self.server.status(JOB, id=jid)\n        exechost = job_stat[0]['exec_host'].split('/')[0]\n        pri_mom = self.moms[exechost]\n        self.momC.signal(\"-KILL\")\n        msg = \"Job;%s;job_start_error.+from node %s \" % (\n            jid, self.momC.hostname) + \"could not JOIN_JOB successfully\"\n        msg_abort = \"called execjob_abort hook\"\n        msg_end = \"end hook rejected\"\n\n        # momC failed to join job\n        pri_mom.log_match(msg, starttime=stime, regexp=True)\n        # abort hook executed before end hook on primary mom\n        (dt1, dt2) = self.msg_order(pri_mom, msg_abort, msg_end, stime)\n        self.logger.info(\n            \"\\n%s: abort hook executed at: %s\"\n            \"\\n%s: end   hook executed at: %s\" %\n            (pri_mom.shortname, dt1, pri_mom.shortname, dt2))\n        # only abort hook executed on connected sister mom\n        for mom in self.moms.values():\n            if mom != pri_mom and mom != self.momC:\n                mom.log_match(msg_abort, starttime=stime)\n                mom.log_match(\n                    
msg_end, starttime=stime, max_attempts=10,\n                    interval=2, existence=False)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid, offset=1)\n\n    def test_execjob_abort_sis_joinjob_exit(self):\n        \"\"\"\n        An execjob_abort hook is executed on a sister mom when a sister mom\n        fails to join job. An execjob_begin hook which instructs the job to\n        be deleted via the pbs.event().job.delete() call executes before the\n        execjob_abort hook. Job exits.\n        \"\"\"\n        # instantiate execjob_begin hook\n        hook_event = 'execjob_begin'\n        hook_name = 'begin'\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(\n            hook_name, a, self.begin_hook_body)\n\n        # Simulate a sister failure to join job\n        # kill -STOP momC, submit a multi-node job, kill -9 momC\n        self.momC.signal(\"-STOP\")\n        stime = time.time()\n        jid = self.server.submit(self.j)\n        self.server.expect(JOB, 'exec_host', id=jid, op=SET)\n        job_stat = self.server.status(JOB, id=jid)\n        exechost = job_stat[0]['exec_host'].split('/')[0]\n        pri_mom = self.moms[exechost]\n        self.momC.signal(\"-KILL\")\n\n        msg = \"Job;%s;job_start_error.+from node %s \" % (\n            jid, self.momC.hostname) + \"could not JOIN_JOB successfully\"\n        msg_begin = \"called execjob_begin hook with job.delete()\"\n        msg_abort = \"called execjob_abort hook\"\n\n        # momC failed to join job\n        pri_mom.log_match(msg, starttime=stime, regexp=True)\n        # begin hook executed before abort hook on connected sister mom\n        for mom in self.moms.values():\n            if mom != pri_mom and mom != self.momC:\n                (dt1, dt2) = self.msg_order(\n                    mom, msg_begin, msg_abort, stime)\n                self.logger.info(\n                    \"\\n%s: begin hook executed at: %s\"\n                    
\"\\n%s: abort hook executed at: %s\" %\n                    (mom.shortname, dt1, mom.shortname, dt2))\n        # begin hook executed before abort hook executed on primary mom\n        (dt1, dt2) = self.msg_order(pri_mom, msg_begin, msg_abort, stime)\n        self.logger.info(\n            \"\\n%s: begin hook executed at: %s\"\n            \"\\n%s: abort hook executed at: %s\" %\n            (pri_mom.shortname, dt1, pri_mom.shortname, dt2))\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid, offset=1)\n"
  },
  {
    "path": "test/tests/functional/pbs_hook_execjob_end.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nimport textwrap\n\nfrom ptl.utils.pbs_logutils import PBSLogUtils\nfrom tests.functional import *\n\n\ndef get_hook_body(sleep_time):\n    \"\"\"\n    method to return hook body\n    :param sleep_time: sleep time added in the hook\n    :type sleep_time: int\n    \"\"\"\n    hook_body = \"\"\"\n    import pbs\n    import time\n    e = pbs.event()\n\n    if e.type == pbs.EXECJOB_EPILOGUE:\n        hook_type = \"EXECJOB_EPILOGUE\"\n    elif e.type == pbs.EXECJOB_END:\n        hook_type = \"EXECJOB_END\"\n    pbs.logjobmsg(e.job.id, \"starting hook event \" + hook_type)\n    time.sleep(%s)\n    pbs.logjobmsg(e.job.id, \"ending hook event \" + hook_type)\n    \"\"\" % sleep_time\n    hook_body = textwrap.dedent(hook_body)\n    return hook_body\n\n\nclass TestPbsExecjobEnd(TestFunctional):\n    \"\"\"\n    This tests the feature in PBS that allows\n    execjob_end hook to execute such that\n    pbs_mom is not blocked upon execution.\n    \"\"\"\n    logutils = PBSLogUtils()\n    job_list = []\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        self.attr = {'event': 'execjob_end', 'enabled': 'True', 'alarm': '50'}\n        self.hook_body = (\"import pbs\\n\"\n                          \"import time\\n\"\n                          \"e = pbs.event()\\n\"\n                          \"pbs.logjobmsg(e.job.id, \\\n                                        
 'executed execjob_end hook')\\n\"\n                          \"time.sleep(10)\\n\"\n                          \"pbs.logjobmsg(e.job.id, \\\n                                         'execjob_end hook ended')\\n\"\n                          \"e.accept()\\n\")\n\n    def test_execjob_end_non_blocking(self):\n        \"\"\"\n        Test to make sure that mom is unblocked and running\n        exechost_periodic hook while its child process is executing\n        execjob_end hook.\n        \"\"\"\n        hook_name = \"execjob_end_logmsg\"\n        self.server.create_import_hook(hook_name, self.attr, self.hook_body)\n        hook_name = \"exechost_periodic_logmsg\"\n        hook_body = (\"import pbs\\n\"\n                     \"e = pbs.event()\\n\"\n                     \"pbs.logmsg(pbs.LOG_DEBUG, \\\n                                 'executed exechost_periodic hook')\\n\"\n                     \"e.accept()\\n\")\n        attr = {'event': 'exechost_periodic', 'freq': '3', 'enabled': 'True'}\n        j = Job(TEST_USER)\n        j.set_sleep_time(1)\n        self.server.create_import_hook(hook_name, attr, hook_body)\n        jid = self.server.submit(j)\n        self.job_list.append(jid)\n        # need to verify hook messages in the below mentioned order to\n        # confirm mom is not blocked on execjob_end hook execution.\n        # The order is verified with the use of starttime and endtime\n        # parameters.\n        (_, str1) = self.mom.log_match(\"Job;%s;executed execjob_end hook\" %\n                                       jid, n=100, max_attempts=10, interval=2)\n        date_time1 = str1.split(\";\")[0]\n        epoch1 = self.logutils.convert_date_time(date_time1)\n        # following message should be logged while execjob_end hook is in sleep\n        (_, str1) = self.mom.log_match(\"executed exechost_periodic hook\",\n                                       starttime=epoch1 - 1,\n                                       endtime=epoch1 + 10,\n                    
                   n=100, max_attempts=10, interval=1)\n        date_time2 = str1.split(\";\")[0]\n        epoch2 = self.logutils.convert_date_time(date_time2)\n        (_, str1) = self.mom.log_match(\n            \"Job;%s;execjob_end hook ended\" %\n            jid, starttime=epoch2 - 1, n=100,\n            max_attempts=10, interval=2)\n        date_time3 = str1.split(\";\")[0]\n        self.logger.info(\n            \"execjob_end hook executed at: %s,\"\n            \"exechost_periodic at: %s and execjob_end hook ended at: %s\" %\n            (date_time1, date_time2, date_time3))\n\n    def test_execjob_end_hook_order_and_reject(self):\n        \"\"\"\n        Test with multiple execjob_end hooks having different order\n        with one of the hooks rejecting the job.\n        \"\"\"\n        hook_name1 = \"execjob_end_logmsg1\"\n        hook_body_accept = (\"import pbs\\n\"\n                            \"e = pbs.event()\\n\"\n                            \"pbs.logjobmsg(e.job.id, \\\n                                  'executed %s hook' % e.hook_name)\\n\"\n                            \"e.accept()\\n\")\n        attr = {'event': 'execjob_end', 'order': '1', 'enabled': 'True'}\n        self.server.create_import_hook(hook_name1, attr, hook_body_accept)\n        hook_name = \"execjob_end_logmsg2\"\n        hook_body_reject = (\n            \"import pbs\\n\"\n            \"e = pbs.event()\\n\"\n            \"pbs.logjobmsg(e.job.id, 'executed execjob_end hook')\\n\"\n            \"e.reject('Job is rejected')\\n\")\n        attr = {'event': 'execjob_end', 'order': '2', 'enabled': 'True'}\n        self.server.create_import_hook(hook_name, attr, hook_body_reject)\n        hook_name2 = \"execjob_end_logmsg3\"\n        attr = {'event': 'execjob_end', 'order': '170', 'enabled': 'True'}\n        self.server.create_import_hook(hook_name2, attr, hook_body_accept)\n        j = Job(TEST_USER)\n        j.set_sleep_time(1)\n        jid = self.server.submit(j)\n        
self.job_list.append(jid)\n        self.mom.log_match(\"Job;%s;executed %s hook\" % (jid, hook_name1),\n                           n=100, max_attempts=10, interval=2)\n        self.mom.log_match(\"Job;%s;Job is rejected\" % jid,\n                           n=100, max_attempts=10, interval=2)\n        self.mom.log_match(\"Job;%s;executed %s hook\" % (jid, hook_name2),\n                           n=100, max_attempts=10, interval=2, existence=False)\n\n    def test_execjob_end_multi_job(self):\n        \"\"\"\n        Test to make sure that mom is unblocked with\n        execjob_end hook with multiple jobs\n        \"\"\"\n        if self.mom.is_cpuset_mom():\n            status = self.server.status(NODE,\n                                        id=self.server.status(NODE)[1]['id'])\n            if int(status[0][\"resources_available.ncpus\"]) < 2:\n                self.skip_test(reason=\"need 2 or more available ncpus\")\n        else:\n            a = {'resources_available.ncpus': 2}\n            self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        hook_name = \"execjob_end_logmsg4\"\n        self.server.create_import_hook(hook_name, self.attr, self.hook_body)\n        # jobs need to land on the same host even in a multi-node setup\n        a = {'Resource_List.select': '1:ncpus=1:host=%s' % self.mom.shortname}\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(1)\n        jid1 = self.server.submit(j)\n        self.job_list.append(jid1)\n        # jid1 should be in E state, in the sleep of the execjob_end hook,\n        # during jid2 submission.\n        self.server.expect(JOB, {'job_state': 'E'}, id=jid1)\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(1)\n        jid2 = self.server.submit(j)\n        self.job_list.append(jid2)\n        # hook message of jid2 should be logged after the message of jid1 and\n        # before completion of sleep in hook for jid1 in order to prove mom\n        # is not in a blocked state.\n        (_, str1) = 
self.mom.log_match(\"Job;%s;executed execjob_end hook\" %\n                                       jid1, n=100, max_attempts=10,\n                                       interval=2)\n        date_time1 = str1.split(\";\")[0]\n        epoch1 = self.logutils.convert_date_time(date_time1)\n        # hook message for jid2 should appear while hook is in sleep for jid1\n        (_, str1) = self.mom.log_match(\"Job;%s;executed execjob_end hook\" %\n                                       jid2, starttime=epoch1 - 1,\n                                       endtime=epoch1 + 10,\n                                       n=100, max_attempts=10, interval=1)\n        date_time1 = str1.split(\";\")[0]\n        epoch1 = self.logutils.convert_date_time(date_time1)\n        (_, str1) = self.mom.log_match(\"Job;%s;execjob_end hook ended\" % jid1,\n                                       starttime=epoch1 - 1,\n                                       n=100, max_attempts=10, interval=2)\n        self.mom.log_match(\"Job;%s;execjob_end hook ended\" % jid2,\n                           n=100, max_attempts=10, interval=2)\n\n    @requirements(num_moms=2)\n    def test_execjob_end_non_blocking_multi_node(self):\n        \"\"\"\n        Test to make sure sister mom is unblocked\n        when execjob_end hook is running on sister mom\n        \"\"\"\n        if len(self.moms) != 2:\n            self.skip_test(reason=\"need 2 mom hosts: -p moms=<m1>:<m2>\")\n        self.momA = self.moms.values()[0]\n        self.momB = self.moms.values()[1]\n        hook_name = \"execjob_end_logmsg5\"\n        self.server.create_import_hook(hook_name, self.attr, self.hook_body)\n        hook_name = \"exechost_periodic_logmsg2\"\n        hook_body = (\"import pbs\\n\"\n                     \"e = pbs.event()\\n\"\n                     \"pbs.logmsg(pbs.LOG_DEBUG, \\\n                                 'executed exechost_periodic hook')\\n\"\n                     \"e.accept()\\n\")\n        attr = {'event': 
'exechost_periodic', 'freq': '3', 'enabled': 'True'}\n        a = {'Resource_List.select': '1:ncpus=1:host=%s+1:ncpus=1:host=%s' %\n             (self.momA.shortname, self.momB.shortname)}\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(1)\n        self.server.create_import_hook(hook_name, attr, hook_body)\n        jid = self.server.submit(j)\n        self.job_list.append(jid)\n        for host, mom in self.moms.items():\n            (_, str1) = mom.log_match(\"Job;%s;executed execjob_end hook\" %\n                                      jid, n=100, max_attempts=10,\n                                      interval=2)\n            date_time1 = str1.split(\";\")[0]\n            epoch1 = self.logutils.convert_date_time(date_time1)\n            (_, str1) = mom.log_match(\"executed exechost_periodic hook\",\n                                      starttime=epoch1 - 1,\n                                      endtime=epoch1 + 10,\n                                      n=100, max_attempts=10, interval=1)\n            date_time2 = str1.split(\";\")[0]\n            epoch2 = self.logutils.convert_date_time(date_time2)\n            (_, str1) = mom.log_match(\n                \"Job;%s;execjob_end hook ended\" %\n                jid, starttime=epoch2 - 1, n=100,\n                max_attempts=10, interval=2)\n            date_time3 = str1.split(\";\")[0]\n            msg = \"Got expected log_msg on host:%s\" % host\n            self.logger.info(msg)\n            self.logger.info(\n                \"execjob_end hook executed at: %s,\"\n                \"exechost_periodic at: %s and execjob_end hook ended at: %s\" %\n                (date_time1, date_time2, date_time3))\n\n    @requirements(num_moms=2)\n    def test_execjob_end_delete_request(self):\n        \"\"\"\n        Test to make sure execjob_end hook is running\n        after job force deletion request(IS_DISCARD_JOB) when\n        mom is unblocked.\n        \"\"\"\n        hook_name = \"execjob_end_logmsg6\"\n    
    self.server.create_import_hook(hook_name, self.attr, self.hook_body)\n        if len(self.moms) == 2:\n            self.momA = self.moms.values()[0]\n            self.momB = self.moms.values()[1]\n            a = {'Resource_List.select':\n                 '1:ncpus=1:host=%s+1:ncpus=1:host=%s' %\n                 (self.momA.shortname, self.momB.shortname)}\n            j = Job(TEST_USER, attrs=a)\n        elif len(self.moms) == 1:\n            j = Job(TEST_USER)\n        j.set_sleep_time(10)\n        jid = self.server.submit(j)\n        self.job_list.append(jid)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.deljob(id=jid, wait=True, attr_W=\"force\")\n        for host, mom in self.moms.items():\n            mom.log_match(\"Job;%s;executed execjob_end hook\" %\n                          jid, n=100, max_attempts=10,\n                          interval=2)\n            mom.log_match(\"Job;%s;execjob_end hook ended\" %\n                          jid, n=100, max_attempts=10,\n                          interval=2)\n            msg = \"Got expected log_msg on host:%s\" % host\n            self.logger.info(msg)\n\n    @requirements(num_moms=2)\n    def test_execjob_end_reject_request(self):\n        \"\"\"\n        Test to make sure the hook job reject message appears in the mom log\n        when the sister mom goes down before executing the execjob_end hook\n        \"\"\"\n\n        if len(self.moms) != 2:\n            self.skip_test(reason=\"need 2 mom hosts: -p moms=<m1>:<m2>\")\n        self.momA = self.moms.values()[0]\n        self.momB = self.moms.values()[1]\n\n        # Create hook\n        hook_name = \"execjob_end_logmsg7\"\n        self.server.create_import_hook(hook_name, self.attr, self.hook_body)\n\n        # Submit a multi-node job\n        a = {'Resource_List.select':\n             '1:ncpus=1:host=%s+1:ncpus=1:host=%s' %\n             (self.momA.shortname, self.momB.shortname)}\n        j = Job(TEST_USER, attrs=a)\n        
j.set_sleep_time(60)\n        jid = self.server.submit(j)\n        self.job_list.append(jid)\n\n        # Verify the job spawned on the sister mom\n        self.momB.log_match(\"Job;%s;JOIN_JOB as node\" % jid, n=100,\n                            max_attempts=10, interval=2)\n\n        # When the job has run for approx 5 sec, bring the sister mom down\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid, offset=5)\n        msg = 'mom is not down'\n        self.assertTrue(self.momB.stop(), msg)\n\n        # Verify momB is down and job is running\n        a = {'state': (MATCH_RE, \"down\")}\n        self.server.expect(\n            NODE, a, id=self.momB.shortname)\n        self.server.expect(JOB, {'job_state': \"R\"}, id=jid)\n\n        hook_execution_time = time.time()\n        self.server.expect(JOB, {'job_state': \"E\"}, id=jid, max_attempts=300)\n\n        # Following message should be logged on momA after the job delete\n        # request is received\n        msg = \"%s;Unable to send delete job request to one or more\" % jid\n        msg += \" sisters\"\n\n        self.momA.log_match(msg, interval=2, starttime=hook_execution_time)\n\n        # Following message should be logged on momA while execjob_end hook is\n        # in sleep\n        self.momA.log_match(\"Job;%s;executed execjob_end hook\" % jid,\n                            starttime=hook_execution_time, max_attempts=10,\n                            interval=2)\n\n        self.momA.log_match(\"Job;%s;execjob_end hook ended\" % jid,\n                            starttime=hook_execution_time, max_attempts=10,\n                            interval=2)\n\n        # Verify reject reply code 15059 for the hook job is logged on mother\n        # superior (momA)\n        self.momA.log_match(\"Req;req_reject;Reject reply code=15059,\",\n                            starttime=hook_execution_time, max_attempts=10,\n                            interval=2)\n\n        # Start pbs on MomA\n        
self.server.pi.restart(hostname=self.server.hostname)\n        # Verify mother superior is not down\n        self.assertTrue(self.momA.isUp())\n\n        # Start pbs on MomB\n        self.momB.start()\n        # Verify sister mom is not down\n        self.assertTrue(self.momB.isUp())\n\n    def test_rerun_on_epilogue_hook(self):\n        \"\"\"\n        Test force qrerun when epilogue hook is running\n        \"\"\"\n\n        hook_name = \"epiend_hook\"\n        hook_body = get_hook_body(5)\n        attr = {'event': 'execjob_epilogue,execjob_end', 'enabled': 'True'}\n        self.server.create_import_hook(hook_name, attr, hook_body)\n        j = Job(TEST_USER)\n        j.set_sleep_time(10)\n        jid = self.server.submit(j)\n        self.job_list.append(jid)\n        self.mom.log_match(\"starting hook event EXECJOB_EPILOGUE\")\n        # Force rerun job\n        self.server.rerunjob(jid, extend='force')\n        self.mom.log_match(\"starting hook event EXECJOB_END\")\n        self.server.expect(JOB, {'job_state': 'R'}, jid)\n        self.mom.log_match(\"ending hook event EXECJOB_END\")\n\n    def common_steps(self, jid, host):\n        \"\"\"\n        Function to run common steps for test job on mom breakdown\n        and restarts\n        \"\"\"\n\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n        host.signal('-KILL')\n        self.logger.info(\n            \"Successfully killed mom process on %s\" %\n            host.shortname)\n\n        # set scheduling false\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        # check for the job's state after the node_fail_requeue time hits\n        self.logger.info(\"Waiting for 30s so that node_fail_requeue time hits\")\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid, offset=30)\n\n        host.start()\n        self.logger.info(\n            \"Successfully started mom process on %s\" %\n            host.shortname)\n        self.server.expect(NODE, {'state': 
'free'}, id=host.shortname)\n\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid)\n        # run job\n        try:\n            now = time.time()\n            # qrun will fail as it is discarding the job\n            self.server.runjob(jid)\n        except PbsRunError as e:\n            self.logger.info(\"Runjob throws error: \" + e.msg[0])\n            self.assertTrue(\n                'qrun: Request invalid for state of job'\n                in e.msg[0])\n            self.mom.log_match(\"ending hook event EXECJOB_END\",\n                               starttime=now, interval=2)\n            time.sleep(5)\n            self.server.runjob(jid)\n            self.server.expect(JOB, {'job_state': 'R'}, jid)\n            now = time.time()\n            self.mom.log_match(\n                \"starting hook event EXECJOB_END\",\n                starttime=now, interval=2)\n            time.sleep(5)\n            self.mom.log_match(\"ending hook event EXECJOB_END\",\n                               starttime=now, interval=2)\n\n    def test_qrun_on_mom_breakdown(self):\n        \"\"\"\n        Test qrun when mom breaks down and restarts\n        \"\"\"\n\n        hook_name = \"end_hook\"\n        hook_body = get_hook_body(5)\n        attr = {'event': 'execjob_end', 'enabled': 'True', 'alarm': '50'}\n        self.server.create_import_hook(hook_name, attr, hook_body)\n        attrib = {ATTR_nodefailrq: 30}\n        self.server.manager(MGR_CMD_SET, SERVER, attrib=attrib)\n        j = Job(TEST_USER)\n        j.set_sleep_time(10)\n        jid = self.server.submit(j)\n        self.job_list.append(jid)\n        self.common_steps(jid, self.mom)\n\n    def test_qrun_arrayjob_on_mom_breakdown(self):\n        \"\"\"\n        Test qrun array job when mom breaks down and restarts\n        \"\"\"\n\n        hook_name = \"end_hook\"\n        hook_body = get_hook_body(5)\n        attr = {'event': 'execjob_end', 'enabled': 'True', 'alarm': '50'}\n        
self.server.create_import_hook(hook_name, attr, hook_body)\n        attrib = {ATTR_nodefailrq: 30}\n        self.server.manager(MGR_CMD_SET, SERVER, attrib=attrib)\n        j = Job(TEST_USER, attrs={\n            ATTR_J: '1-2', ATTR_k: 'oe',\n            'Resource_List.select': 'ncpus=1'\n        })\n        j.set_sleep_time(10)\n        jid = self.server.submit(j)\n        self.job_list.append(jid)\n        # check job array has begun\n        self.server.expect(JOB, {'job_state': 'B'}, jid)\n        subjid_1 = j.create_subjob_id(jid, 1)\n        self.common_steps(subjid_1, self.mom)\n\n    def test_mom_restart(self):\n        \"\"\"\n        Test to restart mom while execjob_end hook is running\n        \"\"\"\n        hook_name = \"end_hook\"\n        hook_body = get_hook_body(20)\n        attr = {'event': 'execjob_end', 'enabled': 'True', 'alarm': '40'}\n        self.server.create_import_hook(hook_name, attr, hook_body)\n        j = Job(TEST_USER)\n        j.set_sleep_time(10)\n        jid = self.server.submit(j)\n        self.job_list.append(jid)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.mom.log_match(\"Job;%s;starting hook event EXECJOB_END\" %\n                           jid, n=100, interval=2)\n        self.mom.restart()\n        self.mom.log_match(\"Job;%s;ending hook event EXECJOB_END\" %\n                           jid, n=100, interval=2)\n        self.server.log_match(jid + \";Exit_status=0\", interval=4)\n\n    def tearDown(self):\n        for mom_val in self.moms.values():\n            if mom_val.is_cpuset_mom():\n                mom_val.restart()\n        self.job_list.clear()\n"
  },
  {
    "path": "test/tests/functional/pbs_hook_execjob_prologue.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nfrom tests.functional import *\n\n\n@requirements(num_moms=3)\nclass TestPbsExecutePrologue(TestFunctional):\n    \"\"\"\n    This tests the feature in PBS that allows the execjob_prologue hook to\n    execute on all sister moms all the time, and not just when the first\n    task is spawned on the node.\n\n    PRE: Have a cluster of PBS with 3 mom hosts.\n    \"\"\"\n\n    def setUp(self):\n        if len(self.moms) != 3:\n            self.skip_test(reason=\"need 3 mom hosts: -p moms=<m1>:<m2>:<m3>\")\n\n        TestFunctional.setUp(self)\n\n        self.momA = self.moms.values()[0]\n        self.momB = self.moms.values()[1]\n        self.momC = self.moms.values()[2]\n\n        self.hostA = self.momA.shortname\n        self.hostB = self.momB.shortname\n        self.hostC = self.momC.shortname\n\n        for mom in self.moms.values():\n            self.server.expect(NODE, {'state': 'free'}, id=mom.shortname)\n\n    def test_prologue_execute_on_all_moms(self):\n        \"\"\"\n        Test to make sure execjob_prologue always gets\n        executed on all sister moms when mother superior\n        has successfully executed its prologue hook.\n        \"\"\"\n        hook_name = \"prologue_logmsg\"\n        hook_body = (\"import pbs\\n\"\n                     \"e = pbs.event()\\n\"\n                     \"pbs.logjobmsg(e.job.id, 'executed prologue hook')\\n\")\n        
attr = {'event': 'execjob_prologue', 'enabled': 'True'}\n        self.server.create_import_hook(hook_name, attr, hook_body)\n\n        attr = {'Resource_List.select': '3:ncpus=1',\n                'Resource_List.place': 'scatter',\n                'Resource_List.walltime': 30}\n        j = Job(TEST_USER, attrs=attr)\n        jid = self.server.submit(j)\n\n        self.momB.log_match(\"Job;%s;JOIN_JOB as node\" % jid, n=100,\n                            max_attempts=10, interval=2)\n        self.momC.log_match(\"Job;%s;JOIN_JOB as node\" % jid, n=100,\n                            max_attempts=10, interval=2)\n        self.momA.log_match(\"Job;%s;executed prologue hook\" % jid,\n                            n=100, max_attempts=10, interval=2)\n        self.momB.log_match(\"Job;%s;executed prologue hook\" % jid,\n                            n=100, max_attempts=10, interval=2)\n        self.momC.log_match(\"Job;%s;executed prologue hook\" % jid,\n                            n=100, max_attempts=10, interval=2)\n\n    def test_prologue_internal_error_no_fail_action(self):\n        \"\"\"\n        Test a prologue hook with an internal error and no fail_action.\n        \"\"\"\n        hook_name = \"prologue_exception\"\n        hook_body = (\"import pbs\\n\"\n                     \"e = pbs.event()\\n\"\n                     \"x\\n\")\n\n        attr = {'event': 'execjob_prologue',\n                'enabled': 'True'}\n        self.server.create_import_hook(hook_name, attr, hook_body)\n\n        attr = {'Resource_List.select': 'vnode=%s' % self.hostA,\n                'Resource_List.walltime': 30}\n        j = Job(TEST_USER, attrs=attr)\n        j.set_sleep_time(1)\n        self.server.submit(j)\n\n        self.server.expect(NODE, {'state': 'free'}, id=self.hostA, offset=1)\n\n    def test_prologue_internal_error_offline_vnodes(self):\n        \"\"\"\n        Test a prologue hook with an internal error and\n        fail_action=offline_vnodes.\n        \"\"\"\n        attr = 
{'resources_available.mem': '2gb',\n                'resources_available.ncpus': '1'}\n        self.momC.create_vnodes(attr, 3,\n                                delall=True, usenatvnode=True)\n        hook_name = \"prologue_exception\"\n        hook_body = (\"import pbs\\n\"\n                     \"e = pbs.event()\\n\"\n                     \"x\\n\")\n        attr = {'event': 'execjob_prologue',\n                'enabled': 'True',\n                'fail_action': 'offline_vnodes'}\n        self.server.create_import_hook(hook_name, attr, hook_body)\n\n        attr = {'Resource_List.select': 'vnode=%s[0]' % self.hostC,\n                'Resource_List.walltime': 30}\n        j = Job(TEST_USER, attrs=attr)\n        self.server.submit(j)\n\n        attr = {'state': 'offline',\n                'comment': \"offlined by hook '%s' due to hook error\"\n                % hook_name}\n        self.server.expect(VNODE, attr, id=self.hostC, max_attempts=10,\n                           interval=2)\n        self.server.expect(VNODE, attr, id='%s[0]' % self.hostC,\n                           max_attempts=10, interval=2)\n        self.server.expect(VNODE, attr, id='%s[1]' % self.hostC,\n                           max_attempts=10, interval=2)\n\n        # revert momC\n        self.server.manager(MGR_CMD_SET, NODE, {'state': (DECR, 'offline')},\n                            id=self.hostC)\n        self.server.manager(MGR_CMD_SET, NODE, {'state': (DECR, 'offline')},\n                            id='%s[0]' % self.hostC)\n        self.server.manager(MGR_CMD_SET, NODE, {'state': (DECR, 'offline')},\n                            id='%s[1]' % self.hostC)\n\n        self.server.manager(MGR_CMD_UNSET, NODE, 'comment',\n                            id=self.hostC)\n        self.server.manager(MGR_CMD_UNSET, NODE, 'comment',\n                            id='%s[0]' % self.hostC)\n        self.server.manager(MGR_CMD_UNSET, NODE, 'comment',\n                            id='%s[1]' % self.hostC)\n        
self.momC.revert_to_defaults()\n\n    def test_prologue_hook_set_fail_action(self):\n        \"\"\"\n        Test that fail_actions can be set on execjob_prologue\n        hooks by qmgr.\n        \"\"\"\n        hook_name = \"prologue\"\n        hook_body = (\"import pbs\\n\"\n                     \"pbs.event().accept()\\n\")\n        attr = {'event': 'execjob_prologue',\n                'enabled': 'True'}\n        self.server.create_import_hook(hook_name, attr, hook_body)\n        self.server.expect(HOOK, {'fail_action': 'none'}, id=hook_name)\n\n        self.server.manager(MGR_CMD_SET, HOOK,\n                            {'fail_action': 'offline_vnodes'},\n                            id=hook_name)\n        self.server.expect(HOOK, {'fail_action': 'offline_vnodes'},\n                           id=hook_name)\n\n        self.server.manager(MGR_CMD_SET, HOOK,\n                            {'fail_action': 'scheduler_restart_cycle'},\n                            id=hook_name)\n        self.server.expect(HOOK, {'fail_action': 'scheduler_restart_cycle'},\n                           id=hook_name)\n\n    def test_prologue_hook_set_job_attr(self):\n        \"\"\"\n        Test that an execjob_prologue hook can modify job attributes.\n        \"\"\"\n        hook_name = \"prologue_set_job_attr\"\n        hook_body = (\"import pbs\\n\"\n                     \"pbs.event().job.resources_used['file']=\"\n                     \"pbs.size('2gb')\\n\")\n        attr = {'event': 'execjob_prologue',\n                'enabled': 'True'}\n        self.server.create_import_hook(hook_name, attr, hook_body)\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'job_history_enable': 'True'})\n\n        j = Job(TEST_USER)\n        j.set_sleep_time(1)\n        jid = self.server.submit(j)\n\n        attr = {'resources_used.file': '2gb'}\n        self.server.expect(JOB, attr, id=jid, extend='x', offset=1)\n        self.server.accounting_match(\n            \"E;\" + jid + \";.*resources_used.file=2gb\", 
regexp=True,\n            max_attempts=10)\n\n    def test_prologue_hook_fail_action_non_mom_hook(self):\n        \"\"\"\n        Test that when fail_action is set to anything other than 'None' on\n        a mom hook, and the mom hook event is not one of execjob_begin,\n        exechost_startup, or execjob_prologue, an error message is displayed.\n        \"\"\"\n        hook_name = \"prologue\"\n        hook_body = (\"import pbs\\n\"\n                     \"pbs.event().accept()\\n\")\n        attr = {'event': 'exechost_periodic',\n                'fail_action': 'offline_vnodes'}\n        try:\n            self.server.create_import_hook(hook_name, attr, hook_body)\n        except PbsManagerError as e:\n            exp_err = \"Can't set hook fail_action value to 'offline_vnodes':\"\n            exp_err += \" hook event must\"\n            exp_err += \" contain at least one of execjob_begin\"\n            exp_err += \", exechost_startup, execjob_prologue\"\n            self.assertTrue(exp_err in e.msg[0])\n\n    def test_prologue_hook_does_not_execute_twice_on_pbsdsh(self):\n        \"\"\"\n        This test creates a hook and then submits a job.\n        It then uses the job output file to do a log_match\n        on both moms.\n        \"\"\"\n        hook_name = 'prologue'\n        hook_body = (\"import pbs\\n\"\n                     \"e = pbs.event()\\n\"\n                     \"pbs.logjobmsg(e.job.id, 'executed prologue hook')\\n\")\n        attr = {'event': 'execjob_prologue'}\n        self.server.create_import_hook(hook_name, attr, hook_body)\n\n        j = Job(TEST_USER, {'Resource_List.select': '2:ncpus=1',\n                            'Resource_List.place': 'scatter'})\n        pbsdsh_path = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                   \"bin\", \"pbsdsh\")\n        j.create_script('#!/bin/sh\\n%s  hostname\\nsleep 10\\n' % pbsdsh_path)\n        jid = self.server.submit(j)\n        attribs = self.server.status(JOB, 
id=jid)\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid, offset=10)\n        host, opath = attribs[0]['Output_Path'].split(':', 1)\n        ret = self.du.cat(hostname=host, filename=opath, runas=TEST_USER)\n        _msg = \"cat command failed with error: %s\" % ret['err']\n        self.assertEqual(ret['rc'], 0, _msg)\n        ret['out'] = ret['out'][-2:]\n        mom1 = ret['out'][0].split(\".\")[0]\n        mom2 = ret['out'][1].split(\".\")[0]\n        self.exec_mom1 = self.moms[mom1]\n        self.exec_mom2 = self.moms[mom2]\n        self.exec_mom1.log_match(\"Job;%s;executed prologue hook\" % jid)\n        self.exec_mom2.log_match(\"Job;%s;executed prologue hook\" % jid)\n\n    def test_prologue_exception_sisters(self):\n        \"\"\"\n        Test requeueing jobs due to a prologue hook with an exception\n        when executed by sister moms only.\n        Jobs should all start, fail (due to prologue hook error),\n        requeue, and rerun several times before eventually getting held\n        due to too many failed attempts.\n        The test also confirms that all execjob_end hooks get executed.\n        \"\"\"\n        hook_name = \"prologue_exception\"\n        hook_body = (\"import pbs\\n\"\n                     \"e = pbs.event()\\n\"\n                     \"if not e.job.in_ms_mom():\\n\"\n                     \"    raise NameError\\n\")\n\n        attr = {'event': 'execjob_prologue', 'enabled': 'True'}\n        self.server.create_import_hook(hook_name, attr, hook_body)\n\n        hook_name = \"endjob_hook1\"\n        hook_body = (\"import pbs\\n\"\n                     \"e = pbs.event()\\n\"\n                     \"pbs.logjobmsg(e.job.id, 'executed endjob hook 1')\\n\")\n\n        attr = {'event': 'execjob_end', 'enabled': 'True'}\n        self.server.create_import_hook(hook_name, attr, hook_body)\n\n        hook_name = \"endjob_hook2\"\n        hook_body = (\"import pbs\\n\"\n                     \"e = pbs.event()\\n\"\n                     
\"pbs.logjobmsg(e.job.id, 'executed endjob hook 2')\\n\")\n\n        attr = {'event': 'execjob_end', 'enabled': 'True'}\n        self.server.create_import_hook(hook_name, attr, hook_body)\n\n        attr = {'Resource_List.select': '3:ncpus=1',\n                'Resource_List.place': 'scatter:excl',\n                'Resource_List.walltime': 30}\n\n        num_jobs = 3\n        job_list = []\n        search_after = time.time()\n        for _ in range(num_jobs):\n            j = Job(TEST_USER, attrs=attr)\n            jid = self.server.submit(j)\n            job_list.append(jid)\n\n        held_cmt = \"job held, too many failed attempts to run\"\n        criteria = {'job_state': 'H', 'comment': held_cmt}\n        for jid in job_list:\n            for _ in range(21):\n                self.momA.log_match(\"Job;%s;executed endjob hook 1\" % jid,\n                                    max_attempts=10, interval=1,\n                                    starttime=search_after)\n                self.momA.log_match(\"Job;%s;executed endjob hook 2\" % jid,\n                                    max_attempts=10, interval=1,\n                                    starttime=search_after)\n                search_after = time.time()\n            self.server.expect(JOB, criteria, id=jid, max_attempts=100,\n                               interval=2)\n"
  },
  {
    "path": "test/tests/functional/pbs_hook_jobobit.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport time\nfrom tests.functional import *\n\n\ndef jobobit_hook():\n    import pbs\n    import sys\n\n    try:\n        e = pbs.event()\n        job = e.job\n        pbs.logjobmsg(\n            job.id, 'jobobit hook started for test %s' % (e.hook_name,))\n        pbs.logjobmsg(job.id, 'jobobit hook, job starttime:%s' % (job.stime,))\n        pbs.logjobmsg(\n            job.id, 'jobobit hook, job obittime:%s' % (job.obittime,))\n        pbs.logjobmsg(job.id, 'jobobit hook, job_state=%s' % (job.job_state,))\n        pbs.logjobmsg(\n            job.id, 'jobobit hook, job_substate=%s' % (job.substate,))\n        state_desc = pbs.REVERSE_JOB_STATE.get(job.job_state, '(None)')\n        substate_desc = pbs.REVERSE_JOB_SUBSTATE.get(job.substate, '(None)')\n        pbs.logjobmsg(\n            job.id, 'jobobit hook, job_state_desc=%s' % (state_desc,))\n        pbs.logjobmsg(\n            job.id, 'jobobit hook, job_substate_desc=%s' % (substate_desc,))\n        if hasattr(job, \"resv\") and job.resv:\n            pbs.logjobmsg(\n                job.id, 'jobobit hook, resv:%s' % (job.resv.resvid,))\n            pbs.logjobmsg(\n                job.id,\n                'jobobit hook, resv_nodes:%s' % (job.resv.resv_nodes,))\n            pbs.logjobmsg(\n                job.id,\n                'jobobit hook, resv_state:%s' % (job.resv.reserve_state,))\n        else:\n 
           pbs.logjobmsg(job.id, 'jobobit hook, resv:(None)')\n        pbs.logjobmsg(\n            job.id, 'jobobit hook finished for test %s' % (e.hook_name,))\n    except Exception as err:\n        ty, _, tb = sys.exc_info()\n        pbs.logmsg(\n            pbs.LOG_DEBUG, str(ty) + str(tb.tb_frame.f_code.co_filename) +\n            str(tb.tb_lineno))\n        e.reject()\n    else:\n        e.accept()\n\n\n@tags('hooks')\nclass TestHookJobObit(TestFunctional):\n    node_cpu_count = 4\n    job_default_nchunks = 1\n    job_default_ncpus = 1\n    job_array_num_subjobs = node_cpu_count\n    job_time_success = 5\n    job_time_rerun = 10\n    job_time_qdel = 30\n    resv_default_nchunks = 1\n    resv_default_ncpus = node_cpu_count\n    resv_start_delay = 20\n    resv_duration = 180\n\n    node_fail_timeout = 15\n    job_requeue_timeout = 5\n    resv_retry_time = 5\n\n    @property\n    def is_array_job(self):\n        return len(self.subjob_ids) > 0\n\n    def run_test_func(self, test_body_func, *args, **kwargs):\n        \"\"\"\n        Setup the environment for running jobobit hook related tests, execute\n        the test function and then perform common checks and clean up.\n        \"\"\"\n        self.job = None\n        self.subjob_ids = []\n        self.started_job_ids = set()\n        self.ended_job_ids = set()\n        self.deleted_job_ids = set()\n        self.delete_failed_job_ids = set()\n        self.rerun_job_ids = set()\n        self.resv_id = None\n        self.resv_queue = None\n        self.resv_start_time = None\n        self.scheduling_enabled = True\n        self.moms_stopped = False\n        self.node_count = len(self.server.moms)\n        self.hook_name = test_body_func.__name__\n\n        self.logger.info(\"***** JOBOBIT HOOK TEST START *****\")\n\n        a = {'resources_available.ncpus': self.node_cpu_count}\n        for mom in self.moms.values():\n            self.server.manager(MGR_CMD_SET, NODE, a, mom.shortname)\n\n        try:\n           
 # If hook exists from a previous test, remove it\n            self.server.delete_hook(self.hook_name)\n        except PtlException:\n            pass\n\n        a = {'event': 'jobobit', 'enabled': 'True'}\n        ret = self.server.create_hook(self.hook_name, a)\n        self.assertTrue(ret, \"Could not create hook %s\" % self.hook_name)\n\n        hook_body = generate_hook_body_from_func(jobobit_hook)\n        ret = self.server.import_hook(self.hook_name, hook_body)\n        self.assertTrue(ret, \"Could not import hook %s\" % self.hook_name)\n\n        a = {\n            'job_history_enable': 'True',\n            'job_requeue_timeout': self.job_requeue_timeout,\n            'node_fail_requeue': self.node_fail_timeout,\n            'reserve_retry_time': self.resv_retry_time,\n        }\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        self.log_start_time = time.time()\n        try:\n            test_body_func(*args, **kwargs)\n            self.check_log_for_jobobit_hook_messages()\n        finally:\n            # Make an effort to start the MoMs if they are not running\n            for mom in self.moms.values():\n                if not mom.isUp(max_attempts=3):\n                    try:\n                        mom.start()\n                    except PtlException:\n                        pass\n            ret = self.server.delete_hook(self.hook_name)\n            self.assertTrue(ret, \"Could not delete hook %s\" % self.hook_name)\n\n        self.logger.info(\"***** JOBOBIT HOOK TEST END *****\")\n\n    def job_verify_jobobit_hook_messages(self, job_id, existence):\n        \"\"\"\n        Look for messages logged by the jobobit hook.  
This method assumes that\n        a started job has been verified as terminated (ended/requeued) or\n        forced deleted, thus ensuring that the jobobit hook has run for the\n        job.\n        \"\"\"\n        self.server.log_match(\n            '%s;jobobit hook started for test %s' % (job_id, self.hook_name),\n            starttime=self.log_start_time, n='ALL', max_attempts=1,\n            existence=existence)\n        # TODO: Add checks for expected job state and substate\n        self.server.log_match(\n            '%s;jobobit hook, resv:%s' % (job_id, self.resv_queue or \"(None)\"),\n            starttime=self.log_start_time, n='ALL', max_attempts=1,\n            existence=existence)\n        self.server.log_match(\n            '%s;jobobit hook finished for test %s' % (job_id, self.hook_name),\n            starttime=self.log_start_time, n='ALL', max_attempts=1,\n            existence=existence)\n\n    def check_log_for_jobobit_hook_messages(self):\n        \"\"\"\n        Look for messages logged by the jobobit hook.  This method assumes that\n        all started jobs have been verified as terminated or forced deleted,\n        thus ensuring that the jobobit hook has run for those jobs.\n        \"\"\"\n        for jid in [self.job_id] + self.subjob_ids:\n            job_ended = jid in self.ended_job_ids or jid in self.rerun_job_ids\n            self.job_verify_jobobit_hook_messages(jid, job_ended)\n            # Remove any jobs that ended from the list of started and deleted\n            # jobs.  
At this point, they should no longer exist and thus are\n            # irrelevant in either set.\n            if job_ended:\n                self.started_job_ids.discard(jid)\n                self.rerun_job_ids.discard(jid)\n                self.deleted_job_ids.discard(jid)\n                self.delete_failed_job_ids.discard(jid)\n                self.ended_job_ids.discard(jid)\n        # Reset the start time so that searches on requeued jobs don't match\n        # state or log messages prior to the current search.  This assumes that\n        # previous state and log messages for a test will not contain a time\n        # stamp equal to or greater than the new start time.\n        self.log_start_time = time.time()\n\n    def get_job_id_set(self, job_ids):\n        try:\n            return set(job_ids)\n        except TypeError:\n            return set([job_ids]) if job_ids else set([self.job_id])\n\n    def job_submit(\n            self,\n            subjob_count=0,\n            user=TEST_USER,\n            nchunks=job_default_nchunks,\n            ncpus=job_default_ncpus,\n            job_time=job_time_success,\n            job_rerunnable=True,\n            job_attrs=None):\n        if self.scheduling_enabled:\n            # Disable scheduling so that jobs won't be immediately started\n            # until we've verified that they have been queued\n            self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n            self.scheduling_enabled = False\n        a = {}\n        a['Resource_List.select'] = str(nchunks) + ':ncpus=' + str(ncpus)\n        if subjob_count == 1:\n            a[ATTR_J] = '0-1:2'\n        elif subjob_count > 1:\n            a[ATTR_J] = '0-' + str(subjob_count - 1)\n        self.job_rerunnable = job_rerunnable\n        if not job_rerunnable:\n            a[ATTR_r] = 'n'\n        a.update(job_attrs or {})\n        self.job = Job(user, attrs=a)\n        self.job.set_sleep_time(job_time)\n        self.job_id = 
self.server.submit(self.job)\n        self.subjob_ids = [\n            self.job.create_subjob_id(self.job_id, i)\n            for i in range(subjob_count)]\n\n    def job_rerun(self, job_ids=None, force=False, user=None):\n        if self.scheduling_enabled:\n            # Disable scheduling so that requeued (rerun) jobs won't be\n            # immediately restarted\n            self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n            self.scheduling_enabled = False\n        jids = self.get_job_id_set(job_ids)\n        extend = 'force' if force else None\n        try:\n            self.server.rerunjob(list(jids), extend=extend, runas=user)\n        except PbsRerunError:\n            # A failed rerun should eventually result in the jobobit hook being\n            # run and the job being requeued\n            pass\n        if self.is_array_job and self.job_id in jids:\n            self.rerun_job_ids.update(self.started_job_ids)\n            self.rerun_job_ids.remove(self.job_id)\n        else:\n            self.rerun_job_ids.update(jids & self.started_job_ids)\n\n    def job_delete(self, job_ids=None, force=False, user=None):\n        if self.scheduling_enabled:\n            # Disable scheduling so that requeued (rerun) jobs won't be\n            # immediately restarted\n            self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n            self.scheduling_enabled = False\n        jids = self.get_job_id_set(job_ids)\n        extend = 'force' if force else None\n        try:\n            self.server.delete(list(jids), extend=extend, runas=user)\n        except PbsDeleteError:\n            # This assumes that if deleting one job fails then the delete\n            # failed for all jobs, which in our controlled testing is likely\n            # true\n            job_id_set = self.delete_failed_job_ids\n        else:\n            job_id_set = self.deleted_job_ids\n        job_id_set.update(jids)\n        if self.is_array_job:\n     
       if self.job_id in jids:\n                job_id_set.update(self.subjob_ids)\n            # If a single subjob is deleted (TERMINATED), the substate for the\n            # array job is also set to TERMINATED.\n            self.deleted_job_ids.add(self.job_id)\n\n    def job_verify_queued(self, job_ids=None):\n        # Verifying that the jobs are queued should only be performed when the\n        # scheduler has been disabled before submitting or rerunning the jobs.\n        # If the scheduler is active, then the jobs may have been started and\n        # thus may no longer be in the queued state.\n        self.assertFalse(self.scheduling_enabled, \"scheduling is enabled!\")\n        jids = self.get_job_id_set(job_ids)\n        if self.is_array_job and self.job_id in jids:\n            if self.job_id not in self.started_job_ids:\n                self.server.expect(JOB, {'job_state': 'Q'}, id=self.job_id)\n                self.server.accounting_match(\n                    \"Q;%s;\" % (self.job_id,),\n                    starttime=int(self.log_start_time),\n                    n='ALL', max_attempts=1)\n            else:\n                self.server.expect(JOB, {'job_state': 'B'}, id=self.job_id)\n            jids.update(self.subjob_ids)\n            jids.remove(self.job_id)\n        for jid in jids:\n            self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n            if jid in self.started_job_ids:\n                # Jobs/Subjobs that were started and then requeued need to be\n                # added to the rerun list to indicate that output from the hook\n                # should be present.\n                self.rerun_job_ids.add(jid)\n            if jid in self.rerun_job_ids:\n                self.server.accounting_match(\n                    \"R;%s;\" % (jid,),\n                    starttime=int(self.log_start_time),\n                    n='ALL', max_attempts=1)\n            elif not self.is_array_job:\n                self.server.accounting_match(\n 
                    \"Q;%s;\" % (jid,),\n                    starttime=int(self.log_start_time),\n                    n='ALL', max_attempts=1)\n\n    def job_verify_started(self, job_ids=None):\n        if not self.scheduling_enabled:\n            # If scheduling was previously disabled to allow time for log\n            # scraping, etc. before requeued jobs were restarted, then enable\n            # scheduling again.\n            self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n            self.scheduling_enabled = True\n        jids = self.get_job_id_set(job_ids)\n        if self.is_array_job and self.job_id in jids:\n            self.server.expect(JOB, {'job_state': 'B'}, id=self.job_id)\n            if self.job_id not in self.started_job_ids:\n                self.server.accounting_match(\n                    \"S;%s;\" % (self.job_id,),\n                    starttime=int(self.log_start_time),\n                    n='ALL', max_attempts=1)\n            jids.update(self.subjob_ids)\n            jids.remove(self.job_id)\n            self.started_job_ids.add(self.job_id)\n        for jid in jids:\n            self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n            self.server.accounting_match(\n                \"S;%s;\" % (jid,),\n                starttime=int(self.log_start_time),\n                n='ALL', max_attempts=1)\n            self.started_job_ids.add(jid)\n\n    def job_verify_ended(self, job_ids=None):\n        jids = self.get_job_id_set(job_ids)\n        if self.is_array_job and self.job_id in jids:\n            if self.job_id not in self.delete_failed_job_ids:\n                jids.update(self.started_job_ids)\n            else:\n                jids.remove(self.job_id)\n        for jid in jids & self.started_job_ids:\n            # If the job failed, then verify that the substate is FAILED (93).\n            # If the job was deleted, then verify that the substate is set to\n            # TERMINATED (91).  
Otherwise, verify that the substate is set to\n            # FINISHED (92).\n            if jid in self.delete_failed_job_ids:\n                substate = 93\n            elif jid in self.deleted_job_ids:\n                substate = 91\n            else:\n                substate = 92\n            self.server.expect(\n                JOB, {'job_state': 'F', 'substate': substate}, extend='x',\n                id=jid)\n            # If the job was deleted without the force flag and the moms were\n            # stopped and not restarted, then an accounting 'E' record will not\n            # be immediately written.\n            if not (jid in self.delete_failed_job_ids and self.moms_stopped):\n                self.server.accounting_match(\n                    \"E;%s;\" % (jid,),\n                    starttime=int(self.log_start_time),\n                    n='ALL', max_attempts=1)\n            self.ended_job_ids.add(jid)\n\n    def resv_submit(\n            self,\n            user=TEST_USER,\n            nchunks=resv_default_nchunks,\n            ncpus=resv_default_ncpus,\n            resv_start_time=None,\n            resv_end_time=None,\n            resv_attrs=None):\n        start_time = resv_start_time or int(time.time()) + \\\n            self.resv_start_delay\n        end_time = resv_end_time or start_time + self.resv_duration\n        a = {}\n        a['Resource_List.select'] = str(nchunks) + ':ncpus=' + str(ncpus)\n        a['Resource_List.place'] = 'free'\n        a['reserve_start'] = start_time\n        a['reserve_end'] = end_time\n        a.update(resv_attrs or {})\n        resv = Reservation(user, a)\n        self.resv_id = self.server.submit(resv)\n        self.resv_start_time = start_time\n\n    def resv_verify_confirmed(self):\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, a, id=self.resv_id)\n        self.resv_queue = self.resv_id.split('.')[0]\n        self.server.status(RESV, 'resv_nodes')\n\n    def 
resv_verify_started(self):\n        self.logger.info('Sleeping until reservation starts')\n        self.server.expect(\n            RESV, {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')},\n            id=self.resv_id,\n            offset=self.resv_start_time - int(time.time() + 1))\n\n    def moms_start(self, *args, **kwargs):\n        for mom in self.moms.values():\n            mom.start(*args, **kwargs)\n        self.moms_stopped = False\n        if not self.scheduling_enabled:\n            self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n            self.scheduling_enabled = True\n\n    def moms_stop(self, *args, **kwargs):\n        if self.scheduling_enabled:\n            self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n            self.scheduling_enabled = False\n        for mom in self.moms.values():\n            mom.stop(*args, **kwargs)\n        self.moms_stopped = True\n\n    # -------------------------------------------------------------------------\n\n    def jobobit_simple_run_job(self, subjob_count=0):\n        self.job_submit(subjob_count=subjob_count)\n        self.job_verify_queued()\n        self.job_verify_started()\n        self.job_verify_ended()\n\n    def test_hook_jobobit_run_single_job(self):\n        \"\"\"\n        Run a single job to completion and verify that the jobobit hook is\n        executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_simple_run_job)\n\n    @tags('smoke')\n    def test_hook_jobobit_run_array_job(self):\n        \"\"\"\n        Run an array of jobs to completion and verify that the jobobit hook is\n        executed for all subjobs and the array job.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_simple_run_job,\n            subjob_count=self.job_array_num_subjobs)\n\n    # -------------------------------------------------------------------------\n\n    def test_hook_jobobit_run_array_job_in_resv(self):\n        \"\"\"\n        Run 
an array of jobs to completion within a reservation and verify that\n        the jobobit hook is executed for all subjobs and the array job.\n        \"\"\"\n        def jobobit_run_array_job_in_resv():\n            self.resv_submit()\n            self.resv_verify_confirmed()\n            a = {ATTR_queue: self.resv_queue}\n            self.job_submit(\n                subjob_count=2,\n                ncpus=self.node_cpu_count // 2,\n                job_attrs=a)\n            self.job_verify_queued()\n            self.resv_verify_started()\n            self.job_verify_started()\n            self.job_verify_ended()\n\n        self.run_test_func(jobobit_run_array_job_in_resv)\n\n    # -------------------------------------------------------------------------\n\n    def jobobit_rerun_job(\n            self,\n            subjob_count=0,\n            rerun_force=False,\n            rerun_user=None,\n            stop_moms=False,\n            restart_moms=False):\n        self.job_submit(\n            subjob_count=subjob_count,\n            job_time=self.job_time_rerun)\n        self.job_verify_queued()\n        self.job_verify_started()\n        if stop_moms:\n            self.moms_stop()\n        self.job_rerun(force=rerun_force, user=rerun_user)\n        self.job_verify_queued()\n        self.check_log_for_jobobit_hook_messages()\n        if restart_moms:\n            self.moms_start()\n        if not stop_moms or restart_moms:\n            self.job_verify_started()\n        self.job_verify_ended()\n\n    def test_hook_jobobit_rerun_single_job_as_root(self):\n        \"\"\"\n        Start a single job, issue a rerun as root, and verify that the jobobit\n        hook is executed for both runs.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_rerun_job)\n\n    def test_hook_jobobit_rerun_single_job_as_mgr(self):\n        \"\"\"\n        Start a single job, issue a rerun as manager, and verify that the\n        jobobit hook is executed for both 
runs.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_rerun_job,\n            rerun_user=MGR_USER)\n\n    def test_hook_jobobit_force_rerun_single_job_as_root(self):\n        \"\"\"\n        Start a single job, issue a force rerun as root, and verify that the\n        jobobit hook is executed for both runs.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_rerun_job,\n            rerun_force=True)\n\n    def test_hook_jobobit_force_rerun_single_job_as_mgr(self):\n        \"\"\"\n        Start a single job, issue a force rerun as manager, and verify that the\n        jobobit hook is executed for both runs.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_rerun_job,\n            rerun_force=True,\n            rerun_user=MGR_USER)\n\n    def test_hook_jobobit_rerun_single_job_stop_moms(self):\n        \"\"\"\n        Start a single job, issue a rerun after stopping the MoMs. Verify that\n        the job is requeued and that the jobobit hook is executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_rerun_job,\n            stop_moms=True,\n            restart_moms=False)\n\n    def test_hook_jobobit_force_rerun_single_job_stop_moms(self):\n        \"\"\"\n        Start a single job, issue a force rerun after stopping the MoMs. 
Verify\n        that the job is requeued and that the jobobit hook is executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_rerun_job,\n            rerun_force=True,\n            stop_moms=True,\n            restart_moms=False)\n\n    def test_hook_jobobit_rerun_single_job_restart_moms(self):\n        \"\"\"\n        Start a single job, issue a rerun after stopping the MoMs, then enable\n        the MoMs again, verifying that the jobobit hook is executed for both\n        runs.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_rerun_job,\n            stop_moms=True,\n            restart_moms=True)\n\n    def test_hook_jobobit_force_rerun_single_job_restart_moms(self):\n        \"\"\"\n        Start a single job, issue a force rerun after stopping the MoMs, then\n        enable the MoMs again, verifying that the jobobit hook is executed for\n        both runs.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_rerun_job,\n            rerun_force=True,\n            stop_moms=True,\n            restart_moms=True)\n\n    def test_hook_jobobit_rerun_array_job(self):\n        \"\"\"\n        Start an array job, issue a rerun, and verify that the jobobit hook is\n        executed for all subjobs on both runs and only once for the array job.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_rerun_job,\n            subjob_count=self.job_array_num_subjobs)\n\n    def test_hook_jobobit_force_rerun_array_job(self):\n        \"\"\"\n        Start an array job, issue a force rerun, and verify that the jobobit\n        hook is executed for all subjobs on both runs and only once for the\n        array job.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_rerun_job,\n            subjob_count=self.job_array_num_subjobs,\n            rerun_force=True)\n\n    def test_hook_jobobit_rerun_array_job_restart_moms(self):\n        \"\"\"\n        Start an array job, issue a rerun 
after stopping the MoMs, then start\n        the MoMs again, verifying that the jobobit hook is executed for both\n        runs.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_rerun_job,\n            subjob_count=self.job_array_num_subjobs,\n            stop_moms=True,\n            restart_moms=True)\n\n    def test_hook_jobobit_force_rerun_array_job_restart_moms(self):\n        \"\"\"\n        Start an array job, issue a force rerun after stopping the MoMs, then\n        start the MoMs again, verifying that the jobobit hook is executed for\n        both runs.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_rerun_job,\n            subjob_count=self.job_array_num_subjobs,\n            rerun_force=True,\n            stop_moms=True,\n            restart_moms=True)\n\n    # -------------------------------------------------------------------------\n\n    def jobobit_rerun_and_delete_job(\n            self,\n            subjob_count=0,\n            delete_force=False):\n        self.job_submit(\n            subjob_count=subjob_count,\n            job_time=self.job_time_rerun)\n        self.job_verify_started()\n        self.job_rerun()\n        self.job_verify_queued()\n        self.check_log_for_jobobit_hook_messages()\n        self.job_delete(force=delete_force)\n        self.job_verify_ended()\n\n    def test_hook_jobobit_rerun_and_delete_single_job(self):\n        \"\"\"\n        Start a single job, issue a rerun and immediately delete it.  
Verify\n        that the jobobit hook is only executed once.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_rerun_and_delete_job)\n\n    def test_hook_jobobit_rerun_and_force_delete_single_job(self):\n        \"\"\"\n        Start a single job, issue a rerun and immediately force delete it.\n        Verify that the jobobit hook is only executed once.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_rerun_and_delete_job,\n            delete_force=True)\n\n    def test_hook_jobobit_rerun_and_delete_array_job(self):\n        \"\"\"\n        Start an array job, issue a rerun and immediately delete it. Verify\n        that the jobobit hook is only executed once for each subjob and the\n        job array.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_rerun_and_delete_job,\n            subjob_count=self.job_array_num_subjobs)\n\n    def test_hook_jobobit_rerun_and_force_delete_array_job(self):\n        \"\"\"\n        Start an array job, issue a rerun and immediately force delete it.\n        Verify that the jobobit hook is only executed once for each subjob and\n        the job array.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_rerun_and_delete_job,\n            subjob_count=self.job_array_num_subjobs,\n            delete_force=True)\n\n    # -------------------------------------------------------------------------\n\n    def jobobit_delete_unstarted_job(\n            self,\n            subjob_count=0,\n            delete_force=False,\n            delete_user=None):\n        self.job_submit(\n            subjob_count=subjob_count,\n            nchunks=self.node_count,\n            ncpus=self.node_cpu_count * 2,\n            job_time=self.job_time_qdel)\n        self.job_verify_queued()\n        self.job_delete(force=delete_force, user=delete_user)\n        self.job_verify_ended()\n\n    def test_hook_jobobit_delete_unstarted_single_job_as_root(self):\n        \"\"\"\n        
Queue a single job, but delete it as root before it starts.  Verify\n        that the jobobit hook is not executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_unstarted_job)\n\n    def test_hook_jobobit_delete_unstarted_single_job_as_user(self):\n        \"\"\"\n        Queue a single job, but delete it as the user before it starts.  Verify\n        that the jobobit hook is not executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_unstarted_job,\n            delete_user=TEST_USER)\n\n    def test_hook_jobobit_force_delete_unstarted_single_job_as_root(self):\n        \"\"\"\n        Queue a single job, but force delete it as root before it starts.\n        Verify that the jobobit hook is not executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_unstarted_job,\n            delete_force=True)\n\n    def test_hook_jobobit_force_delete_unstarted_single_job_as_user(self):\n        \"\"\"\n        Queue a single job, but force delete it as the user before it starts.\n        Verify that the jobobit hook is not executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_unstarted_job,\n            delete_force=True,\n            delete_user=TEST_USER)\n\n    def test_hook_jobobit_delete_unstarted_array_job_as_root(self):\n        \"\"\"\n        Queue an array job, but delete it as root before it starts.  Verify\n        that the jobobit hook is not executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_unstarted_job,\n            subjob_count=self.job_array_num_subjobs)\n\n    def test_hook_jobobit_delete_unstarted_array_job_as_user(self):\n        \"\"\"\n        Queue an array job, but delete it as the user before it starts.  
Verify\n        that the jobobit hook is not executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_unstarted_job,\n            subjob_count=self.job_array_num_subjobs,\n            delete_user=TEST_USER)\n\n    def test_hook_jobobit_force_delete_unstarted_array_job_as_root(self):\n        \"\"\"\n        Queue an array job, but force delete it as root before it starts.\n        Verify that the jobobit hook is not executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_unstarted_job,\n            subjob_count=self.job_array_num_subjobs,\n            delete_force=True)\n\n    def test_hook_jobobit_force_delete_unstarted_array_job_as_user(self):\n        \"\"\"\n        Queue an array job, but force delete it as the user before it starts.\n        Verify that the jobobit hook is not executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_unstarted_job,\n            subjob_count=self.job_array_num_subjobs,\n            delete_force=True,\n            delete_user=TEST_USER)\n\n    # -------------------------------------------------------------------------\n\n    def jobobit_delete_running_job(\n            self,\n            subjob_count=0,\n            job_rerunnable=True,\n            delete_force=False,\n            delete_user=None):\n        self.job_submit(\n            job_rerunnable=job_rerunnable,\n            subjob_count=subjob_count,\n            job_time=self.job_time_qdel)\n        self.job_verify_queued()\n        self.job_verify_started()\n        self.job_delete(force=delete_force, user=delete_user)\n        self.job_verify_ended()\n\n    def test_hook_jobobit_delete_running_single_job_as_root(self):\n        \"\"\"\n        Run a single job, but delete as root before completion.  
Verify that\n        the jobobit hook is executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_running_job)\n\n    def test_hook_jobobit_delete_running_single_job_as_user(self):\n        \"\"\"\n        Run a single job, but delete as the user before completion.  Verify\n        that the jobobit hook is executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_running_job,\n            delete_user=TEST_USER)\n\n    def test_hook_jobobit_force_delete_running_single_job_as_root(self):\n        \"\"\"\n        Run a single job, but force delete as root before completion.  Verify\n        that the jobobit hook is executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_running_job,\n            delete_force=True)\n\n    def test_hook_jobobit_force_delete_running_single_job_as_user(self):\n        \"\"\"\n        Run a single job, but force delete as the user before completion.\n        Verify that the jobobit hook is executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_running_job,\n            delete_force=True,\n            delete_user=TEST_USER)\n\n    def test_hook_jobobit_delete_running_array_job_as_root(self):\n        \"\"\"\n        Run an array job, where all jobs are started but also deleted (by root)\n        before completion.  Verify that the jobobit hook is executed for all\n        subjobs and the array job.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_running_job,\n            subjob_count=self.job_array_num_subjobs)\n\n    def test_hook_jobobit_delete_running_array_job_as_user(self):\n        \"\"\"\n        Run an array job, where all jobs are started but also deleted (by the\n        user) before completion.  
Verify that the jobobit hook is executed for\n        all subjobs and the array job.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_running_job,\n            subjob_count=self.job_array_num_subjobs,\n            delete_user=TEST_USER)\n\n    def test_hook_jobobit_force_delete_running_array_job_as_root(self):\n        \"\"\"\n        Run an array job, where all jobs are started but also force deleted (by\n        root) before completion.  Verify that the jobobit hook is executed for\n        all subjobs and the array job.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_running_job,\n            delete_force=True,\n            subjob_count=self.job_array_num_subjobs)\n\n    def test_hook_jobobit_force_delete_running_array_job_as_user(self):\n        \"\"\"\n        Run an array job, where all jobs are started but also force deleted\n        (by the user) before completion.  Verify that the jobobit hook is\n        executed for all subjobs and the array job.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_running_job,\n            subjob_count=self.job_array_num_subjobs,\n            delete_force=True,\n            delete_user=TEST_USER)\n\n    # -------------------------------------------------------------------------\n\n    def jobobit_delete_running_job_moms_stopped(\n            self,\n            subjob_count=0,\n            job_rerunnable=True,\n            delete_user=None,\n            delete_force=False,\n            restart_moms=False):\n        self.job_submit(\n            subjob_count=subjob_count,\n            job_time=self.job_time_qdel,\n            job_rerunnable=job_rerunnable)\n        self.job_verify_started()\n        self.moms_stop()\n        self.job_delete(force=delete_force, user=delete_user)\n        if job_rerunnable and not delete_force:\n            self.job_verify_queued()\n            self.check_log_for_jobobit_hook_messages()\n            if 
restart_moms:\n                self.moms_start()\n                self.job_verify_started()\n        self.job_verify_ended()\n\n    def test_hook_jobobit_delete_running_single_job_as_root_nrr_sm(self):\n        \"\"\"\n        Run a single non-rerunable job, but delete as root before completion\n        after stopping the MoM. Verify that the jobobit hook is executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_running_job_moms_stopped,\n            job_rerunnable=False)\n\n    def test_hook_jobobit_delete_running_single_job_as_user_nrr_sm(self):\n        \"\"\"\n        Run a single non-rerunable job, but delete as user before completion\n        after stopping the MoM. Verify that the jobobit hook is executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_running_job_moms_stopped,\n            delete_user=TEST_USER,\n            job_rerunnable=False)\n\n    def test_hook_jobobit_force_delete_running_single_job_as_root_nrr_sm(self):\n        \"\"\"\n        Run a single non-rerunable job, but force delete as root before\n        completion after stopping the MoM. Verify that the jobobit hook is\n        executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_running_job_moms_stopped,\n            delete_force=True,\n            job_rerunnable=False)\n\n    def test_hook_jobobit_force_delete_running_single_job_as_user_nrr_sm(self):\n        \"\"\"\n        Run a single non-rerunable job, but force delete as user before\n        completion after stopping the MoM. 
Verify that the jobobit hook is\n        executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_running_job_moms_stopped,\n            delete_force=True,\n            delete_user=TEST_USER,\n            job_rerunnable=False)\n\n    def test_hook_jobobit_delete_running_single_job_as_root_sm(self):\n        \"\"\"\n        Run a single rerunable job, but delete as root before completion\n        after stopping the MoM. Verify that the jobobit hook is executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_running_job_moms_stopped,\n            job_rerunnable=True)\n\n    def test_hook_jobobit_delete_running_single_job_as_user_sm(self):\n        \"\"\"\n        Run a single rerunable job, but delete as user before completion\n        after stopping the MoM. Verify that the jobobit hook is executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_running_job_moms_stopped,\n            delete_user=TEST_USER,\n            job_rerunnable=True)\n\n    def test_hook_jobobit_force_delete_running_single_job_as_root_sm(self):\n        \"\"\"\n        Run a single rerunable job, but force delete as root before completion\n        after stopping the MoM. Verify that the jobobit hook is executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_running_job_moms_stopped,\n            delete_force=True,\n            job_rerunnable=True)\n\n    def test_hook_jobobit_force_delete_running_single_job_as_user_sm(self):\n        \"\"\"\n        Run a single rerunable job, but force delete as user before completion\n        after stopping the MoM. 
Verify that the jobobit hook is executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_running_job_moms_stopped,\n            delete_force=True,\n            delete_user=TEST_USER,\n            job_rerunnable=True)\n\n    def test_hook_jobobit_delete_running_single_job_as_root_rm(self):\n        \"\"\"\n        Run a single rerunable job, but delete as root before completion after\n        stopping the MoM. Verify that the jobobit hook is executed.  Then\n        restart the mom and verify that the job is restarted and the jobobit\n        hook is executed again.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_running_job_moms_stopped,\n            restart_moms=True,\n            job_rerunnable=True)\n\n    def test_hook_jobobit_delete_running_single_job_as_user_rm(self):\n        \"\"\"\n        Run a single rerunable job, but delete as user before completion after\n        stopping the MoM. Verify that the jobobit hook is executed.  Then\n        restart the mom and verify that the job is restarted and the jobobit\n        hook is executed again.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_running_job_moms_stopped,\n            delete_user=TEST_USER,\n            restart_moms=True,\n            job_rerunnable=True)\n\n    def test_hook_jobobit_force_delete_running_single_job_as_root_rm(self):\n        \"\"\"\n        Run a single rerunable job, but force delete as root before completion\n        after stopping the MoM. Verify that the jobobit hook is executed.  
Then\n        restart the mom and verify that the jobobit hook is not executed again.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_running_job_moms_stopped,\n            delete_force=True,\n            restart_moms=True,\n            job_rerunnable=True)\n\n    def test_hook_jobobit_force_delete_running_single_job_as_user_rm(self):\n        \"\"\"\n        Run a single rerunable job, but force delete as user before completion\n        after stopping the MoM. Verify that the jobobit hook is executed.  Then\n        restart the mom and verify that the jobobit hook is not executed again.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_running_job_moms_stopped,\n            delete_force=True,\n            delete_user=TEST_USER,\n            restart_moms=True,\n            job_rerunnable=True)\n\n    def test_hook_jobobit_delete_running_array_job_as_root_sm(self):\n        \"\"\"\n        Run an array job, but delete as root before completion after stopping\n        the MoM. Verify that the jobobit hook is executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_running_job_moms_stopped,\n            subjob_count=self.job_array_num_subjobs)\n\n    def test_hook_jobobit_delete_running_array_job_as_user_sm(self):\n        \"\"\"\n        Run an array job, but delete as user before completion after stopping\n        the MoM. Verify that the jobobit hook is executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_running_job_moms_stopped,\n            subjob_count=self.job_array_num_subjobs,\n            delete_user=TEST_USER)\n\n    def test_hook_jobobit_force_delete_running_array_job_as_root_sm(self):\n        \"\"\"\n        Run an array job, but force delete as root before completion after\n        stopping the MoM. 
Verify that the jobobit hook is executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_running_job_moms_stopped,\n            subjob_count=self.job_array_num_subjobs,\n            delete_force=True)\n\n    def test_hook_jobobit_force_delete_running_array_job_as_user_sm(self):\n        \"\"\"\n        Run an array job, but force delete as user before completion after\n        stopping the MoM. Verify that the jobobit hook is executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_delete_running_job_moms_stopped,\n            subjob_count=self.job_array_num_subjobs,\n            delete_force=True,\n            delete_user=TEST_USER)\n\n    # -------------------------------------------------------------------------\n\n    def jobobit_job_running_during_mom_restart(\n            self,\n            subjob_count=0,\n            mom_preserve_jobs=True,\n            mom_restart_delayed=False):\n        a = {\n            'node_fail_requeue': self.job_time_rerun + 60,\n        }\n        if mom_preserve_jobs:\n            mom_stop_kwargs = {'sig': '-INT'}\n            mom_start_kwargs = {'args': ['-p']}\n        else:\n            mom_stop_kwargs = {}\n            mom_start_kwargs = {}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.job_submit(\n            subjob_count=subjob_count,\n            job_time=self.job_time_rerun)\n        self.job_verify_started()\n        self.moms_stop(**mom_stop_kwargs)\n        if mom_restart_delayed:\n            time.sleep(self.job_time_rerun + 10)\n        if not mom_preserve_jobs:\n            self.rerun_job_ids.update(self.started_job_ids)\n            if self.is_array_job:\n                self.rerun_job_ids.remove(self.job_id)\n            self.job_verify_queued()\n            self.check_log_for_jobobit_hook_messages()\n        self.moms_start(**mom_start_kwargs)\n        if not mom_preserve_jobs:\n            self.job_verify_started()\n        
self.job_verify_ended()\n\n    def test_hook_jobobit_finish_single_job_during_mom_restart(self):\n        \"\"\"\n        Run a single rerunable job and restart the MoMs.  Verify that the job\n        successfully completes without being rerun and that the jobobit hook\n        is executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_job_running_during_mom_restart\n        )\n\n    def test_hook_jobobit_finish_single_job_during_mom_restart_delayed(self):\n        \"\"\"\n        Run a single rerunable job.  Stop the MoMs long enough for the job to\n        complete and then restart the MoMs.  Verify that the job successfully\n        completes without being rerun and that the jobobit hook is executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_job_running_during_mom_restart,\n            mom_restart_delayed=True\n        )\n\n    def test_hook_jobobit_rerun_single_job_during_mom_restart(self):\n        \"\"\"\n        Run a single rerunable job and restart the MoMs.  Verify that the job\n        is requeued and successfully completes, and that the jobobit hook is\n        executed twice.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_job_running_during_mom_restart,\n            mom_preserve_jobs=False\n        )\n\n    def test_hook_jobobit_rerun_single_job_during_mom_restart_delayed(self):\n        \"\"\"\n        Run a single rerunable job.  Stop the MoMs long enough for the job to\n        complete and then restart the MoMs.  Verify that the job is requeued\n        and successfully completes, and that the jobobit hook is executed\n        twice.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_job_running_during_mom_restart,\n            mom_preserve_jobs=False,\n            mom_restart_delayed=True\n        )\n\n    def test_hook_jobobit_finish_array_job_during_mom_restart(self):\n        \"\"\"\n        Run an array job and restart the MoMs.  
Verify that the subjobs\n        successfully complete without being rerun and that the jobobit hook\n        is executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_job_running_during_mom_restart,\n            subjob_count=self.job_array_num_subjobs\n        )\n\n    def test_hook_jobobit_finish_array_job_during_mom_restart_delayed(self):\n        \"\"\"\n        Run an array job.  Stop the MoMs long enough for the job to\n        complete and then restart the MoMs.  Verify that the job successfully\n        completes without being rerun and that the jobobit hook is executed.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_job_running_during_mom_restart,\n            subjob_count=self.job_array_num_subjobs,\n            mom_restart_delayed=True\n        )\n\n    def test_hook_jobobit_rerun_array_job_during_mom_restart(self):\n        \"\"\"\n        Run an array job and restart the MoMs without preserving existing jobs.\n        Verify that the subjobs are successfully rerun and that the jobobit\n        hook is executed twice for each subjob.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_job_running_during_mom_restart,\n            subjob_count=self.job_array_num_subjobs,\n            mom_preserve_jobs=False\n        )\n\n    def test_hook_jobobit_rerun_array_job_during_mom_restart_delayed(self):\n        \"\"\"\n        Run an array job.  Stop the MoMs long enough for the job to complete\n        and then restart the MoMs without preserving existing jobs.  
Verify\n        that the subjobs are successfully rerun and that the jobobit hook is\n        executed twice for each subjob.\n        \"\"\"\n        self.run_test_func(\n            self.jobobit_job_running_during_mom_restart,\n            subjob_count=self.job_array_num_subjobs,\n            mom_preserve_jobs=False,\n            mom_restart_delayed=True\n        )\n\n    # -------------------------------------------------------------------------\n\n    # TODO: Test aborted single/array job for going over time.\n\n    # TODO: Test deletion of individual and ranges of subjobs in an array job.\n\n    # TODO: Test delete and rerun of an array job when a subset of the possible\n    # subjobs are running.  Verify that the jobobit hooks are called for all\n    # jobs/subjobs that were previously started.\n\n    # TODO: Test various scenarios of the server being stopped and restarted,\n    # ensuring that the jobobit hooks are called for all jobs/subjobs that were\n    # previously started.\n\n    # TODO: Test deletion of a job during provisioning to ensure that the\n    # jobobit hook is not run.\n"
  },
  {
    "path": "test/tests/functional/pbs_hook_management.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nimport datetime\nimport os\nimport sys\nimport socket\nimport textwrap\nimport time\nfrom pprint import pformat\nfrom ptl.utils.pbs_testsuite import generate_hook_body_from_func\n\nfrom tests.functional import *\nfrom tests.functional import JOB, MGR_CMD_SET, SERVER, TEST_USER, ATTR_h, Job\n\n\ndef get_hook_body(hook_msg):\n    hook_body = \"\"\"\n    import pbs\n    e = pbs.event()\n    m = e.management\n    pbs.logmsg(pbs.LOG_DEBUG, '%s')\n    \"\"\" % hook_msg\n    hook_body = textwrap.dedent(hook_body)\n    return hook_body\n\n\ndef get_hook_body_str(hook_msg):\n    hook_body = \"\"\"\n    import pbs\n    e = pbs.event()\n    m = e.management\n    for a in m.attribs:\n        pbs.logmsg(pbs.LOG_DEBUG, str(a))\n    pbs.logmsg(pbs.LOG_DEBUG, '%s')\n    \"\"\" % hook_msg\n    hook_body = textwrap.dedent(hook_body)\n    return hook_body\n\n\ndef hook_accept(hook_msg):\n    import pbs\n    e = pbs.event()\n    m = e.management\n    pbs.logmsg(pbs.LOG_DEBUG, hook_msg)\n    e.accept()\n\n\ndef hook_reject(hook_msg):\n    import pbs\n    e = pbs.event()\n    m = e.management\n    pbs.logmsg(pbs.LOG_DEBUG, hook_msg)\n    e.reject()\n\n\ndef get_hook_body_reject_with_text(hook_msg, bad_message=\"badmsg\"):\n    hook_body = \"\"\"\n    import pbs\n    e = pbs.event()\n    m = e.management\n    pbs.logmsg(pbs.LOG_DEBUG, '%s')\n    e.reject('%s')\n    \"\"\" % 
(hook_msg, bad_message)\n    hook_body = textwrap.dedent(hook_body)\n    return hook_body\n\n\ndef get_hook_body_traceback(hook_msg, bad_message=\"badmsg\"):\n    hook_body = \"\"\"\n    import pbs\n    e = pbs.event()\n    m = e.management\n    pbs.logmsg(pbs.LOG_DEBUG, '%s')\n    raise Exception('%s')\n    \"\"\" % (hook_msg, bad_message)\n    hook_body = textwrap.dedent(hook_body)\n    return hook_body\n\n\ndef get_hook_body_sleep(hook_msg, sleeptime=0.0):\n    hook_body = \"\"\"\n    import pbs\n    import time\n    e = pbs.event()\n    m = e.management\n    pbs.logmsg(pbs.LOG_DEBUG, '%s')\n    time.sleep(%s)\n    e.accept()\n    \"\"\" % (hook_msg, sleeptime)\n    hook_body = textwrap.dedent(hook_body)\n    return hook_body\n\n\ndef hook_attrs_func(hook_msg):\n    def get_traceback():\n        import sys\n        import traceback\n        (exc_cls, exc, tracbk) = sys.exc_info()\n        exc_str = traceback.format_exception_only(exc_cls, exc)[0]\n        stack = traceback.format_tb(tracbk)\n        tracebacklst = []\n        tracebacklst.append(\"EX(%s)\" % (exc_str.strip()))\n        for stackpiece in stack:\n            stackpiece = stackpiece.strip().replace(\"\\n\", \"||\")\n            tracebacklst.append(stackpiece)\n        return tracebacklst\n\n    attributes = [\"cmd\", \"objtype\", \"objname\", \"request_time\",\n                  \"reply_code\", \"reply_auxcode\", \"reply_choice\",\n                  \"reply_text\", 'attribs']\n    import pbs\n    from datetime import datetime\n    missing = []\n    e = pbs.event()\n    try:\n        import sys\n        import pbs_ifl\n        from pprint import pformat\n        m = e.management\n        # pbs.logmsg(pbs.LOG_DEBUG, str(dir(pbs)))\n        # pbs.logmsg(pbs.LOG_DEBUG, str(dir(e)))\n        # pbs.logmsg(pbs.LOG_DEBUG, str(dir(m)))\n        for attr in attributes:\n            if not hasattr(m, attr):\n                missing.append(attr)\n            else:\n                value = getattr(m, attr)\n    
            value_lst = []\n                if attr == 'attribs':\n                    if type(value) == list:\n                        for obj in value:\n                            value_dct = {}\n                            value_dct['name'] = obj.name\n                            value_dct['value'] = obj.value\n                            value_dct['flags'] = obj.flags\n                            subvalue_lst = []\n                            for k, v in pbs.REVERSE_ATR_VFLAGS.items():\n                                if int(k) & int(obj.flags):\n                                    subvalue_lst.append(str(v))\n                            value_dct[f\"flags_lst\"] = subvalue_lst\n                            value_dct['op'] = obj.op\n                            try:\n                                value_dct[f\"op_str\"] = \\\n                                    pbs.REVERSE_BATCH_OPS[obj.op]\n                            except Exception as err:\n                                value_dct[f\"op_str\"] = \"?\"\n                            value_dct['resource'] = obj.resource\n                            value_dct['sisters'] = obj.sisters\n                            value_lst.append(value_dct)\n                elif attr == 'objtype':\n                    value_str = pbs.REVERSE_MGR_OBJS[value]\n                    pbs.logmsg(pbs.LOG_DEBUG, f\"{attr}=>{value_str} \"\n                                              f\"(reversed)\")\n                elif attr == 'reply_choice':\n                    value_str = pbs.REVERSE_BRP_CHOICES[value]\n                    pbs.logmsg(pbs.LOG_DEBUG, f\"{attr}=>{value_str} \"\n                                              f\"(reversed)\")\n                elif attr == 'cmd':\n                    value_str = pbs.REVERSE_MGR_CMDS[value]\n                    pbs.logmsg(pbs.LOG_DEBUG, f\"{attr}=>{value_str} \"\n                                              f\"(reversed)\")\n                if attr == 'attribs':\n                    for 
idx, dct in enumerate(value_lst):\n                        dct_lst = []\n                        for key, value in dct.items():\n                            dct_lst.append(f\"{key}:{value}\")\n                        # need to sort the list to allow for the test to\n                        # find the string correctly.\n                        dct_lst = sorted(dct_lst)\n                        dct_lst_str = f\"{attr}[{idx}]=>{','.join(dct_lst)}\"\n                        pbs.logmsg(pbs.LOG_DEBUG, f\"{dct_lst_str} \"\n                                                  f\"(stringified)\")\n                pbs.logmsg(pbs.LOG_DEBUG, f\"{attr}=>{value}\")\n        if len(missing) > 0:\n            pbs.logmsg(pbs.LOG_DEBUG, \"Hook, processed normally.\")\n            e.reject(\"missing attributes in pbs:\" + \",\".join(missing))\n        else:\n            pbs.logmsg(pbs.LOG_DEBUG, 'all attributes found in pbs')\n            pbs.logmsg(pbs.LOG_DEBUG, hook_msg)\n            pbs.logmsg(pbs.LOG_DEBUG, \"Hook, processed normally.\")\n            e.accept()\n    except Exception as err:\n        now_str = datetime.utcnow().strftime(\"%Y-%m-%d %H:%M:%S.%f\")\n        pbs.logmsg(pbs.LOG_DEBUG, \"%s|Error in hook:%s\" %\n                   (now_str, '||'.join(get_traceback()).replace(\"\\n\", \"|||\")))\n        pbs.logmsg(pbs.LOG_DEBUG, \"Error in hook:%s\" % str(err))\n        # errstr = str(sys.exc_info()[:2])\n        # errstr = errstr.replace('\\n', '||')\n        # pbs.logmsg(pbs.LOG_DEBUG, \"Error in hook:%s\" % errstr)\n        e.reject(\"a hook error has occurred\")\n\n\n@tags('hooks', 'smoke')\nclass TestHookManagement(TestFunctional):\n\n    def test_hook_00(self):\n        \"\"\"\n        By creating an import hook, it executes a management hook.\n        \"\"\"\n        self.logger.info(\"**************** HOOK START ****************\")\n        hook_name = \"management\"\n        hook_msg = 'running management hook_00'\n        hook_body = get_hook_body(hook_msg)\n    
    attrs = {'event': 'management', 'enabled': 'True'}\n        start_time = time.time()\n        ret = self.server.create_hook(hook_name, attrs)\n        self.assertEqual(ret, True, \"Could not create hook %s\" % hook_name)\n        ret = self.server.import_hook(hook_name, hook_body)\n        self.assertEqual(ret, True, \"Could not import hook %s\" % hook_name)\n        ret = self.server.delete_hook(hook_name)\n        self.assertEqual(ret, True, \"Could not delete hook %s\" % hook_name)\n\n        self.server.log_match(hook_msg, starttime=start_time)\n        self.logger.info(\"**************** HOOK END ****************\")\n\n    def test_hook_01(self):\n        \"\"\"\n        By creating an import hook, it executes a management hook.\n        Create three hooks, and create, import and delete each one.\n        \"\"\"\n        self.logger.info(\"**************** HOOK START ****************\")\n        hook_msg = 'running management hook_01'\n        hook_body = get_hook_body(hook_msg)\n        attrs = {'event': 'management', 'enabled': 'True'}\n        start_time = time.time()\n        for hook_name in ['a1234', 'b1234', 'c1234']:\n            self.logger.info(\"hook_name:%s\" % hook_name)\n            ret = self.server.create_hook(hook_name, attrs)\n            self.assertEqual(ret, True, \"Could not create hook %s\" % hook_name)\n            ret = self.server.import_hook(hook_name, hook_body)\n            self.assertEqual(ret, True, \"Could not import hook %s\" % hook_name)\n            ret = self.server.delete_hook(hook_name)\n            self.assertEqual(ret, True, \"Could not delete hook %s\" % hook_name)\n            self.server.log_match(hook_msg, starttime=start_time)\n        self.logger.info(\"**************** HOOK END ****************\")\n\n    def test_hook_02(self):\n        \"\"\"\n        By creating an import hook, it executes a management hook.\n        Create three hooks serially, then delete them out of order.\n        \"\"\"\n        
self.logger.info(\"**************** HOOK START ****************\")\n        attrs = {'event': 'management', 'enabled': 'True'}\n\n        hook_name_00 = 'a1234'\n        hook_name_01 = 'b1234'\n        hook_name_02 = 'c1234'\n        hook_msg_00 = 'running management hook_02 name:%s' % hook_name_00\n        hook_body_00 = get_hook_body(hook_msg_00)\n        hook_msg_01 = 'running management hook_02 name:%s' % hook_name_01\n        hook_body_01 = get_hook_body(hook_msg_01)\n        hook_msg_02 = 'running management hook_02 name:%s' % hook_name_02\n        hook_body_02 = get_hook_body(hook_msg_02)\n\n        start_time = time.time()\n        ret = self.server.create_hook(hook_name_00, attrs)\n        self.assertEqual(ret, True, \"Could not create hook %s\" % hook_name_00)\n        ret = self.server.create_hook(hook_name_01, attrs)\n        self.assertEqual(ret, True, \"Could not create hook %s\" % hook_name_01)\n        ret = self.server.create_hook(hook_name_02, attrs)\n        self.assertEqual(ret, True, \"Could not create hook %s\" % hook_name_02)\n\n        ret = self.server.import_hook(hook_name_00, hook_body_00)\n        self.assertEqual(ret, True, \"Could not import hook %s\" % hook_name_00)\n        ret = self.server.import_hook(hook_name_01, hook_body_01)\n        self.assertEqual(ret, True, \"Could not import hook %s\" % hook_name_01)\n        ret = self.server.import_hook(hook_name_02, hook_body_02)\n        self.assertEqual(ret, True, \"Could not import hook %s\" % hook_name_02)\n\n        # out of order delete\n        ret = self.server.delete_hook(hook_name_01)\n        self.assertEqual(ret, True, \"Could not delete hook %s\" % hook_name_01)\n        ret = self.server.delete_hook(hook_name_00)\n        self.assertEqual(ret, True, \"Could not delete hook %s\" % hook_name_00)\n        ret = self.server.delete_hook(hook_name_02)\n        self.assertEqual(ret, True, \"Could not delete hook %s\" % hook_name_02)\n\n        self.server.log_match(hook_msg_00, 
starttime=start_time)\n        self.server.log_match(hook_msg_01, starttime=start_time)\n        self.server.log_match(hook_msg_02, starttime=start_time)\n\n        self.logger.info(\"**************** HOOK END ****************\")\n\n    def test_hook_03(self):\n        \"\"\"\n        By creating an import hook, it executes a management hook.\n        Also sets debug to True.\n        \"\"\"\n        self.logger.info(\"**************** HOOK START ****************\")\n        hook_name_a = \"management_03a\"\n        hook_msg_a = 'running management hook_03a'\n        hook_body_a = get_hook_body(hook_msg_a)\n        attrs = {'event': 'management', 'enabled': 'True', 'debug': 'True'}\n        start_time_a = time.time()\n        ret = self.server.create_hook(hook_name_a, attrs)\n        self.assertEqual(ret, True, f\"Could not create hook {hook_name_a}\")\n        ret = self.server.import_hook(hook_name_a, hook_body_a)\n        self.assertEqual(ret, True, f\"Could not import hook {hook_name_a}\")\n\n        self.server.add_resource(\"management_03_1_resource\", type=\"string\")\n\n        hook_name_b = \"management_03b\"\n        hook_msg_b = 'running management hook_03b'\n        hook_body_b = get_hook_body(hook_msg_b)\n        attrs = {'event': 'management', 'enabled': 'True', 'debug': 'True'}\n        start_time_b = time.time()\n        ret = self.server.create_hook(hook_name_b, attrs)\n        self.assertEqual(ret, True, f\"Could not create hook {hook_name_b}\")\n        ret = self.server.import_hook(hook_name_b, hook_body_b)\n        self.assertEqual(ret, True, f\"Could not import hook {hook_name_b}\")\n\n        self.server.add_resource(\"management_03_2_resource\", type=\"string\")\n        self.server.add_resource(\"management_03_3_resource\", type=\"string\")\n        self.server.delete_resources()\n\n        ret = self.server.delete_hook(hook_name_a)\n        self.assertEqual(ret, True, f\"Could not delete hook {hook_name_a}\")\n        ret = 
self.server.delete_hook(hook_name_b)\n        self.assertEqual(ret, True, f\"Could not delete hook {hook_name_b}\")\n\n        self.server.log_match(hook_msg_a, starttime=start_time_a)\n        self.server.log_match(hook_msg_b, starttime=start_time_b)\n        self.logger.info(\"**************** HOOK END ****************\")\n\n    def test_hook_04(self):\n        \"\"\"\n        Creating and importing a hook fires the management hook event.\n        The second hook starts out disabled with debug False and is\n        enabled afterwards via qmgr; both hooks must fire.\n        \"\"\"\n        self.logger.info(\"**************** HOOK START ****************\")\n        hook_name_a = \"management_03a\"\n        hook_msg_a = 'running management hook_03a'\n        hook_body_a = get_hook_body(hook_msg_a)\n        attrs = {'event': 'management', 'enabled': 'True',\n                 'debug': 'True', 'order': 2}\n        start_time_a = time.time()\n        ret = self.server.create_hook(hook_name_a, attrs)\n        self.assertEqual(ret, True, f\"Could not create hook {hook_name_a}\")\n        ret = self.server.import_hook(hook_name_a, hook_body_a)\n        self.assertEqual(ret, True, f\"Could not import hook {hook_name_a}\")\n\n        self.server.add_resource(\"management_03_1_resource\", type=\"string\")\n\n        hook_name_b = \"management_03b\"\n        hook_msg_b = 'running management hook_03b'\n        hook_body_b = get_hook_body(hook_msg_b)\n        attrs = {'event': 'management', 'enabled': 'False', 'debug': 'False'}\n        start_time_b = time.time()\n        ret = self.server.create_hook(hook_name_b, attrs)\n        self.assertEqual(ret, True, f\"Could not create hook {hook_name_b}\")\n        ret = self.server.import_hook(hook_name_b, hook_body_b)\n        self.assertEqual(ret, True, f\"Could not import hook {hook_name_b}\")\n\n        attrs = {'enabled': 'true'}\n        self.logger.info(f\"Enabling {hook_name_b}...\")\n        rc = self.server.manager(MGR_CMD_SET, HOOK, attrs, 
id=hook_name_b)\n        self.logger.info(f\"Result for {hook_name_b}->{rc}\")\n\n        self.server.add_resource(\"management_03_2_resource\", type=\"string\")\n        self.server.add_resource(\"management_03_3_resource\", type=\"string\")\n        self.server.delete_resources()\n\n        ret = self.server.delete_hook(hook_name_a)\n        self.assertEqual(ret, True, f\"Could not delete hook {hook_name_a}\")\n        ret = self.server.delete_hook(hook_name_b)\n        self.assertEqual(ret, True, f\"Could not delete hook {hook_name_b}\")\n\n        self.server.log_match(hook_msg_a, starttime=start_time_a)\n        self.server.log_match(hook_msg_b, starttime=start_time_b)\n        self.logger.info(\"**************** HOOK END ****************\")\n\n    def test_hook_str_00(self):\n        \"\"\"\n        By creating an import hook, it executes a management hook.\n        \"\"\"\n        self.logger.info(\"**************** HOOK START ****************\")\n        hook_name = \"management\"\n        hook_msg = 'running management hook_str_00'\n        hook_body = get_hook_body_str(hook_msg)\n        attrs = {'event': 'management', 'enabled': 'True'}\n        start_time = time.time()\n        ret = self.server.create_hook(hook_name, attrs)\n        self.assertEqual(ret, True, \"Could not create hook %s\" % hook_name)\n        ret = self.server.import_hook(hook_name, hook_body)\n        self.assertEqual(ret, True, \"Could not import hook %s\" % hook_name)\n        ret = self.server.delete_hook(hook_name)\n        self.assertEqual(ret, True, \"Could not delete hook %s\" % hook_name)\n\n        self.server.log_match(hook_msg, starttime=start_time)\n        self.logger.info(\"**************** HOOK END ****************\")\n\n    def test_hook_accept_00(self):\n        \"\"\"\n        Tests the event.accept() of a hook.\n        \"\"\"\n        self.logger.info(\"**************** HOOK START ****************\")\n        attrs = {'event': 'management', 'enabled': 'True'}\n\n  
      hook_name_00 = 'a1234'\n        hook_name_01 = 'b1234'\n        hook_name_02 = 'c1234'\n        hook_msg_00 = 'running management hook_accept_00 name:%s' % \\\n                      hook_name_00\n        hook_body_00 = get_hook_body(hook_msg_00)\n        hook_msg_01 = 'running management hook_accept_00 name:%s' % \\\n                      hook_name_01\n        hook_body_01 = get_hook_body(hook_msg_01)\n        hook_msg_02 = 'running management hook_accept_00 name:%s' % \\\n                      hook_name_02\n        hook_body_02 = get_hook_body(hook_msg_02)\n\n        start_time = time.time()\n        ret = self.server.create_hook(hook_name_00, attrs)\n        self.assertEqual(ret, True, \"Could not create hook %s\" % hook_name_00)\n        ret = self.server.create_hook(hook_name_01, attrs)\n        self.assertEqual(ret, True, \"Could not create hook %s\" % hook_name_01)\n        ret = self.server.create_hook(hook_name_02, attrs)\n        self.assertEqual(ret, True, \"Could not create hook %s\" % hook_name_02)\n\n        ret = self.server.import_hook(hook_name_00, hook_body_00)\n        self.assertEqual(ret, True, \"Could not import hook %s\" % hook_name_00)\n        ret = self.server.import_hook(hook_name_01, hook_body_01)\n        self.assertEqual(ret, True, \"Could not import hook %s\" % hook_name_01)\n        ret = self.server.import_hook(hook_name_02, hook_body_02)\n        self.assertEqual(ret, True, \"Could not import hook %s\" % hook_name_02)\n\n        # out of order delete\n        ret = self.server.delete_hook(hook_name_01)\n        self.assertEqual(ret, True, \"Could not delete hook %s\" % hook_name_01)\n        ret = self.server.delete_hook(hook_name_00)\n        self.assertEqual(ret, True, \"Could not delete hook %s\" % hook_name_00)\n        ret = self.server.delete_hook(hook_name_02)\n        self.assertEqual(ret, True, \"Could not delete hook %s\" % hook_name_02)\n\n        self.server.log_match(hook_msg_00, starttime=start_time)\n        
self.server.log_match(hook_msg_01, starttime=start_time)\n        self.server.log_match(hook_msg_02, starttime=start_time)\n\n        self.logger.info(\"**************** HOOK END ****************\")\n\n    def test_hook_reject_00(self):\n        \"\"\"\n        Tests the event.reject() of a hook.  The third hook will not fire\n        due to the second calling reject.\n        \"\"\"\n        self.logger.info(\"**************** HOOK START ****************\")\n        attrs = {'event': 'management', 'enabled': 'True'}\n\n        hook_name_00 = 'a1234'\n        hook_name_01 = 'b1234'\n        hook_name_02 = 'c1234'\n        hook_msg_00 = 'running management hook_reject_00 name:%s' % \\\n                      hook_name_00\n        hook_body_00 = get_hook_body(hook_msg_00)\n        hook_msg_01 = 'running management hook_reject_00 name:%s' % \\\n                      hook_name_01\n        hook_body_01 = generate_hook_body_from_func(hook_reject, hook_msg_01)\n        hook_msg_02 = 'running management hook_reject_00 name:%s' % \\\n                      hook_name_02\n        hook_body_02 = get_hook_body(hook_msg_02)\n\n        start_time = time.time()\n        ret = self.server.create_hook(hook_name_00, attrs)\n        self.assertEqual(ret, True, \"Could not create hook %s\" % hook_name_00)\n        ret = self.server.create_hook(hook_name_01, attrs)\n        self.assertEqual(ret, True, \"Could not create hook %s\" % hook_name_01)\n        ret = self.server.create_hook(hook_name_02, attrs)\n        self.assertEqual(ret, True, \"Could not create hook %s\" % hook_name_02)\n\n        self.server.log_match(\"%s;created at request\" % hook_name_00,\n                              starttime=start_time)\n        self.server.log_match(\"%s;created at request\" % hook_name_01,\n                              starttime=start_time)\n        self.server.log_match(\"%s;created at request\" % hook_name_02,\n                              starttime=start_time)\n\n        ret = 
self.server.import_hook(hook_name_00, hook_body_00)\n        self.assertEqual(ret, True, \"Could not import hook %s\" % hook_name_00)\n        ret = self.server.import_hook(hook_name_01, hook_body_01)\n        self.assertEqual(ret, True, \"Could not import hook %s\" % hook_name_01)\n        ret = self.server.import_hook(hook_name_02, hook_body_02)\n        self.assertEqual(ret, True, \"Could not import hook %s\" % hook_name_02)\n\n        self.server.log_match(hook_msg_00, starttime=start_time)\n        self.server.log_match(hook_msg_01, starttime=start_time)\n        # we should not see vvv it vvv fire because ^^^ b1234 ^^^ rejects\n        self.server.log_match(hook_msg_02, starttime=start_time,\n                              existence=False)\n\n        # out of order delete, make sure the reject hook is last\n        ret = self.server.delete_hook(hook_name_00)\n        self.assertEqual(ret, True, \"Could not delete hook %s\" % hook_name_00)\n        ret = self.server.delete_hook(hook_name_02)\n        self.assertEqual(ret, True, \"Could not delete hook %s\" % hook_name_02)\n        # reject hook\n        ret = self.server.delete_hook(hook_name_01)\n        self.assertEqual(ret, True, \"Could not delete hook %s\" % hook_name_01)\n\n        self.server.log_match(\"%s;deleted at request of\" % hook_name_00,\n                              starttime=start_time)\n        self.server.log_match(\"%s;deleted at request of\" % hook_name_01,\n                              starttime=start_time)\n        self.server.log_match(\"%s;deleted at request of\" % hook_name_02,\n                              starttime=start_time)\n\n    def test_hook_reject_01(self):\n        \"\"\"\n        Tests the event.reject() of a hook.  All hooks should fire.  
The\n        second hook is added last thus all three will fire.\n        \"\"\"\n        self.logger.info(\"**************** HOOK START ****************\")\n        attrs = {'event': 'management', 'enabled': 'True'}\n\n        hook_name_00 = 'a1234'\n        hook_name_01 = 'b1234'\n        hook_name_02 = 'c1234'\n        hook_msg_00 = 'running management hook_reject_00 name:%s' % \\\n                      hook_name_00\n        hook_body_00 = get_hook_body(hook_msg_00)\n        hook_msg_01 = 'running management hook_reject_00 name:%s' % \\\n                      hook_name_01\n        hook_body_01 = generate_hook_body_from_func(hook_reject, hook_msg_01)\n        hook_msg_02 = 'running management hook_reject_00 name:%s' % \\\n                      hook_name_02\n        hook_body_02 = get_hook_body(hook_msg_02)\n\n        start_time = time.time()\n        ret = self.server.create_hook(hook_name_00, attrs)\n        self.assertEqual(ret, True, \"Could not create hook %s\" % hook_name_00)\n        ret = self.server.create_hook(hook_name_01, attrs)\n        self.assertEqual(ret, True, \"Could not create hook %s\" % hook_name_01)\n        ret = self.server.create_hook(hook_name_02, attrs)\n        self.assertEqual(ret, True, \"Could not create hook %s\" % hook_name_02)\n\n        self.server.log_match(\"%s;created at request\" % hook_name_00,\n                              starttime=start_time)\n        self.server.log_match(\"%s;created at request\" % hook_name_01,\n                              starttime=start_time)\n        self.server.log_match(\"%s;created at request\" % hook_name_02,\n                              starttime=start_time)\n\n        ret = self.server.import_hook(hook_name_00, hook_body_00)\n        self.assertEqual(ret, True, \"Could not import hook %s\" % hook_name_00)\n        ret = self.server.import_hook(hook_name_02, hook_body_02)\n        self.assertEqual(ret, True, \"Could not import hook %s\" % hook_name_02)\n        # the bad one\n        ret = 
self.server.import_hook(hook_name_01, hook_body_01)\n        self.assertEqual(ret, True, \"Could not import hook %s\" % hook_name_01)\n\n        self.server.log_match(hook_msg_00, starttime=start_time)\n        self.server.log_match(hook_msg_02, starttime=start_time)\n        self.server.log_match(hook_msg_01, starttime=start_time)\n\n        # out of order delete, make sure the reject hook is last\n        ret = self.server.delete_hook(hook_name_00)\n        self.assertEqual(ret, True, \"Could not delete hook %s\" % hook_name_00)\n        ret = self.server.delete_hook(hook_name_02)\n        self.assertEqual(ret, True, \"Could not delete hook %s\" % hook_name_02)\n        # reject hook\n        ret = self.server.delete_hook(hook_name_01)\n        self.assertEqual(ret, True, \"Could not delete hook %s\" % hook_name_01)\n\n        self.server.log_match(\"%s;deleted at request of\" % hook_name_00,\n                              starttime=start_time)\n        self.server.log_match(\"%s;deleted at request of\" % hook_name_01,\n                              starttime=start_time)\n        self.server.log_match(\"%s;deleted at request of\" % hook_name_02,\n                              starttime=start_time)\n\n        self.logger.info(\"**************** HOOK END ****************\")\n\n    def test_hook_reject_02(self):\n        \"\"\"\n        Tests the event.reject() of a hook.  
The hook will fire and reject\n        with a message.\n        \"\"\"\n        self.logger.info(\"**************** HOOK START ****************\")\n        attrs = {'event': 'management', 'enabled': 'True'}\n\n        hook_name_00 = 'd1234'\n        hook_msg_00 = 'running management hook_reject_02 name:%s' % \\\n                      hook_name_00\n        hook_bad_msg = \"badmessagetext\"\n        hook_body_00 = get_hook_body_reject_with_text(hook_msg_00,\n                                                      hook_bad_msg)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 2047})\n\n        start_time = time.time()\n        ret = self.server.create_hook(hook_name_00, attrs)\n        self.assertEqual(ret, True, \"Could not create hook %s\" % hook_name_00)\n\n        self.server.log_match(\"%s;created at request\" % hook_name_00,\n                              starttime=start_time)\n\n        ret = self.server.import_hook(hook_name_00, hook_body_00)\n        self.assertEqual(ret, True, \"Could not import hook %s\" % hook_name_00)\n\n        self.server.log_match(hook_msg_00, starttime=start_time)\n        self.server.log_match(hook_bad_msg, starttime=start_time)\n\n        ret = self.server.delete_hook(hook_name_00)\n        self.assertEqual(ret, True, \"Could not delete hook %s\" % hook_name_00)\n\n        self.server.log_match(\"%s;deleted at request of\" % hook_name_00,\n                              starttime=start_time)\n\n        self.logger.info(\"**************** HOOK END ****************\")\n\n    def test_hook_traceback_00(self):\n        \"\"\"\n        Tests a traceback in a hook.  
The hook will fire and reject\n        with a message.\n        \"\"\"\n        self.logger.info(\"**************** HOOK START ****************\")\n        attrs = {'event': 'management', 'enabled': 'True'}\n\n        hook_name_00 = 'e1234'\n        hook_msg_00 = 'running management hook_traceback_00 name:%s' % \\\n                      hook_name_00\n        hook_bad_msg = \"badmessagetext\"\n        hook_body_00 = get_hook_body_traceback(hook_msg_00,\n                                               hook_bad_msg)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 2047})\n\n        start_time = time.time()\n        ret = self.server.create_hook(hook_name_00, attrs)\n        self.assertEqual(ret, True, \"Could not create hook %s\" % hook_name_00)\n\n        self.server.log_match(\"%s;created at request\" % hook_name_00,\n                              starttime=start_time)\n\n        ret = self.server.import_hook(hook_name_00, hook_body_00)\n        self.assertEqual(ret, True, \"Could not import hook %s\" % hook_name_00)\n\n        self.server.log_match(hook_msg_00, starttime=start_time)\n        self.server.log_match(hook_bad_msg, starttime=start_time)\n\n        ret = self.server.delete_hook(hook_name_00)\n        self.assertEqual(ret, True, \"Could not delete hook %s\" % hook_name_00)\n\n        self.server.log_match(\"%s;deleted at request of\" % hook_name_00,\n                              starttime=start_time)\n\n        self.server.log_match(\"hook '%s' encountered an exception\"\n                              % hook_name_00, starttime=start_time)\n\n        self.logger.info(\"**************** HOOK END ****************\")\n\n    def test_hook_alarm_00(self):\n        \"\"\"\n        Tests an alarm with a hook.  
The hook sleeps past its alarm\n        limit, so the server logs an alarm call.\n        \"\"\"\n        self.logger.info(\"**************** HOOK START ****************\")\n        attrs = {'event': 'management', 'enabled': 'True', 'alarm': 1}\n\n        hook_name_00 = 'f1234'\n        hook_msg_00 = 'running management hook_alarm_00 name:%s' % \\\n                      hook_name_00\n        hook_body_00 = get_hook_body_sleep(hook_msg_00, sleeptime=2.0)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 2047})\n\n        start_time = time.time()\n        ret = self.server.create_hook(hook_name_00, attrs)\n        self.assertEqual(ret, True, \"Could not create hook %s\" % hook_name_00)\n\n        self.server.log_match(\"%s;created at request\" % hook_name_00,\n                              starttime=start_time)\n\n        ret = self.server.import_hook(hook_name_00, hook_body_00)\n        self.assertEqual(ret, True, \"Could not import hook %s\" % hook_name_00)\n\n        self.server.log_match(\"alarm call while running management hook '%s'\"\n                              % hook_name_00, starttime=start_time)\n\n        self.server.log_match(hook_msg_00, starttime=start_time)\n\n        ret = self.server.delete_hook(hook_name_00)\n        self.assertEqual(ret, True, \"Could not delete hook %s\" % hook_name_00)\n\n        self.server.log_match(\"%s;deleted at request of\" % hook_name_00,\n                              starttime=start_time)\n\n        self.logger.info(\"**************** HOOK END ****************\")\n\n    def test_hook_import_00(self):\n        \"\"\"\n        Test for a set of the management hook attributes.\n        \"\"\"\n\n        def _get_hook_body(hook_msg):\n            attributes = [\"MGR_CMD_NONE\", \"MGR_CMD_CREATE\", \"MGR_CMD_DELETE\",\n                          \"MGR_CMD_SET\", \"MGR_CMD_UNSET\", \"MGR_CMD_LIST\",\n                          \"MGR_CMD_PRINT\", 
\"MGR_CMD_ACTIVE\", \"MGR_CMD_IMPORT\",\n                          \"MGR_CMD_EXPORT\", \"MGR_CMD_LAST\", \"MGR_OBJ_NONE\",\n                          \"MGR_OBJ_SERVER\", \"MGR_OBJ_QUEUE\", \"MGR_OBJ_JOB\",\n                          \"MGR_OBJ_NODE\", \"MGR_OBJ_RESV\", \"MGR_OBJ_RSC\",\n                          \"MGR_OBJ_SCHED\", \"MGR_OBJ_HOST\", \"MGR_OBJ_HOOK\",\n                          \"MGR_OBJ_PBS_HOOK\", \"MGR_OBJ_LAST\"]\n            hook_body = \"\"\"\n            import pbs\n            import pbs_ifl\n            attributes = %s\n            missing = []\n            e = pbs.event()\n            for attr in attributes:\n                if not hasattr(pbs, attr):\n                    missing.append(attr)\n            if len(missing) > 0:\n                e.reject(\"missing attributes in pbs:\" + \",\".join(missing))\n            else:\n                pbs.logmsg(pbs.LOG_DEBUG, 'all attributes found in pbs')\n                pbs.logmsg(pbs.LOG_DEBUG, 'dir(pbs_ifl):')\n                pbs.logmsg(pbs.LOG_DEBUG, str(dir(pbs_ifl)))\n                pbs.logmsg(pbs.LOG_DEBUG, 'dir(pbs):')\n                pbs.logmsg(pbs.LOG_DEBUG, str(dir(pbs)))\n                m = e.management\n                pbs.logmsg(pbs.LOG_DEBUG, '%s')\n                e.accept()\n            \"\"\" % (attributes, hook_msg)\n            hook_body = textwrap.dedent(hook_body)\n            return hook_body\n\n        self.logger.info(\"**************** HOOK START ****************\")\n        attrs = {'event': 'management', 'enabled': 'True'}\n\n        hook_name_00 = 'g1234'\n        hook_msg_00 = 'running management hook_import_00 name:%s' % \\\n                      hook_name_00\n        hook_body_00 = _get_hook_body(hook_msg_00)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 2047})\n\n        start_time = time.time()\n        ret = self.server.create_hook(hook_name_00, attrs)\n        self.assertEqual(ret, True, \"Could not create hook %s\" % hook_name_00)\n\n      
  self.server.log_match(\"%s;created at request\" % hook_name_00,\n                              starttime=start_time)\n\n        ret = self.server.import_hook(hook_name_00, hook_body_00)\n        self.assertEqual(ret, True, \"Could not import hook %s\" % hook_name_00)\n\n        self.server.log_match(hook_msg_00, starttime=start_time)\n\n        ret = self.server.delete_hook(hook_name_00)\n        self.assertEqual(ret, True, \"Could not delete hook %s\" % hook_name_00)\n\n        self.server.log_match(\"%s;deleted at request of\" % hook_name_00,\n                              starttime=start_time)\n\n        self.server.log_match(\"missing attributes in pbs\",\n                              starttime=start_time, existence=False)\n        self.server.log_match(\"all attributes found in pbs\",\n                              starttime=start_time, existence=True)\n\n        self.logger.info(\"**************** HOOK END ****************\")\n\n    def test_hook_attrs_00(self):\n        \"\"\"\n        Test for a set of the management hook attributes.\n        \"\"\"\n        self.logger.info(\"**************** HOOK START ****************\")\n        attrs = {'event': 'management', 'enabled': 'True'}\n\n        hook_name_00 = 'h1234'\n        hook_msg_00 = 'running management hook_attrs_00 name:%s' % \\\n                      hook_name_00\n        hook_body_00 = generate_hook_body_from_func(hook_attrs_func,\n                                                    hook_msg_00)\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 2047})\n\n        start_time = time.time()\n        ret = self.server.create_hook(hook_name_00, attrs)\n        self.assertEqual(ret, True, \"Could not create hook %s\" % hook_name_00)\n\n        self.server.log_match(\"%s;created at request\" % hook_name_00,\n                              starttime=start_time)\n\n        ret = self.server.import_hook(hook_name_00, hook_body_00)\n
        self.assertEqual(ret, True, \"Could not import hook %s\" % hook_name_00)\n\n        self.server.log_match(hook_msg_00, starttime=start_time)\n        self.server.log_match(\"cmd=>7\", starttime=start_time)\n        self.server.log_match(\"objtype=>8\", starttime=start_time)\n        self.server.log_match(\"objname=>%s\" % hook_name_00,\n                              starttime=start_time)\n        self.server.log_match(\"reply_code=>0\", starttime=start_time)\n        self.server.log_match(\"reply_auxcode=>0\", starttime=start_time)\n        self.server.log_match(\"reply_choice=>1\", starttime=start_time)\n        self.server.log_match(\"reply_text=>None\", starttime=start_time)\n\n        ret = self.server.delete_hook(hook_name_00)\n        self.assertEqual(ret, True, \"Could not delete hook %s\" % hook_name_00)\n\n        self.server.log_match(\"%s;deleted at request of\" % hook_name_00,\n                              starttime=start_time)\n\n        self.server.log_match(\"missing attributes in pbs\",\n                              starttime=start_time, existence=False)\n        self.server.log_match(\"all attributes found in pbs\",\n                              starttime=start_time, existence=True)\n\n        self.logger.info(\"**************** HOOK END ****************\")\n\n    def test_hook_attrs_01(self):\n        \"\"\"\n        Test the management hook event attributes while a second\n        queuejob hook is created and imported.\n        \"\"\"\n        self.logger.info(\"**************** HOOK START ****************\")\n        attrs = {'event': 'management', 'enabled': 'True'}\n\n        hook_name_00 = 'i1234'\n        hook_msg_00 = 'running management hook_attrs_01 name:%s' % \\\n                      hook_name_00\n        hook_body_00 = generate_hook_body_from_func(hook_attrs_func,\n                                                    hook_msg_00)\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 2047})\n\n        start_time = time.time()\n        ret = self.server.create_hook(hook_name_00, attrs)\n        self.assertEqual(ret, True, 
\"Could not create hook %s\" % hook_name_00)\n\n        self.server.log_match(\"%s;created at request\" % hook_name_00,\n                              starttime=start_time)\n\n        ret = self.server.import_hook(hook_name_00, hook_body_00)\n        self.assertEqual(ret, True, \"Could not import hook %s\" % hook_name_00)\n\n        self.server.log_match(hook_msg_00, starttime=start_time)\n        self.server.log_match(\"cmd=>7\", starttime=start_time)\n        self.server.log_match(\"objtype=>8\", starttime=start_time)\n        self.server.log_match(\"objname=>%s\" % hook_name_00,\n                              starttime=start_time)\n        self.server.log_match(\"reply_code=>0\", starttime=start_time)\n        self.server.log_match(\"reply_auxcode=>0\", starttime=start_time)\n        self.server.log_match(\"reply_choice=>1\", starttime=start_time)\n        self.server.log_match(\"reply_text=>None\", starttime=start_time)\n\n        # run a qmgr command and import the script.\n        hook_name_01 = 'i1234accept'\n        hook_msg_01 = \"%s accept hook\" % hook_name_01\n        hook_body_01 = generate_hook_body_from_func(hook_accept, hook_msg_01)\n        attrs = {'event': 'queuejob', 'enabled': 'True'}\n        ret = self.server.create_hook(hook_name_01, attrs)\n        ret = self.server.import_hook(hook_name_01, hook_body_01)\n\n        # we don't need to run a job, we just want to check the attributes.\n        self.server.log_match(hook_msg_00, starttime=start_time)\n        self.server.log_match(\"cmd=>0\", starttime=start_time)\n        self.server.log_match(\"objtype=>8\", starttime=start_time)\n        self.server.log_match(\"objname=>%s\" % hook_name_01,\n                              starttime=start_time)\n        self.server.log_match(\"reply_code=>0\", starttime=start_time)\n        self.server.log_match(\"reply_auxcode=>0\", starttime=start_time)\n        self.server.log_match(\"reply_choice=>1\", starttime=start_time)\n        
self.server.log_match(\"reply_text=>None\", starttime=start_time)\n        self.server.log_match(\"server_priv/hooks/%s.PY\" % hook_name_01,\n                              starttime=start_time)\n\n        ret = self.server.delete_hook(hook_name_00)\n        self.assertEqual(ret, True, \"Could not delete hook %s\" % hook_name_00)\n        self.server.log_match(\"%s;deleted at request of\" % hook_name_00,\n                              starttime=start_time)\n        ret = self.server.delete_hook(hook_name_01)\n        self.assertEqual(ret, True, \"Could not delete hook %s\" % hook_name_01)\n        self.server.log_match(\"%s;deleted at request of\" % hook_name_01,\n                              starttime=start_time)\n        self.server.log_match(\"missing attributes in pbs\",\n                              starttime=start_time, existence=False)\n        self.server.log_match(\"all attributes found in pbs\",\n                              starttime=start_time, existence=True)\n\n        self.logger.info(\"**************** HOOK END ****************\")\n\n    def test_hook_attrs_02(self):\n        \"\"\"\n        Test the management hook event attributes across qmgr set\n        and unset operations on nodes and queues.\n        \"\"\"\n        self.logger.info(\"**************** HOOK START ****************\")\n        attrs = {'event': 'management', 'enabled': 'True'}\n\n        hook_name_00 = 'j1234'\n        hook_msg_00 = 'running management hook_attrs_02 name:%s' % \\\n                      hook_name_00\n        hook_body_00 = generate_hook_body_from_func(hook_attrs_func,\n                                                    hook_msg_00)\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 2047})\n\n        start_time = time.time()\n        ret = self.server.create_hook(hook_name_00, attrs)\n        self.assertEqual(ret, True, \"Could not create hook %s\" % hook_name_00)\n\n        self.server.log_match(\"%s;created at request\" % hook_name_00,\n                              starttime=start_time)\n\n        ret 
= self.server.import_hook(hook_name_00, hook_body_00)\n        self.assertEqual(ret, True, \"Could not import hook %s\" % hook_name_00)\n\n        self.server.log_match(hook_msg_00, starttime=start_time)\n        self.server.log_match(\"cmd=>7\", starttime=start_time)\n        self.server.log_match(\"objtype=>8\", starttime=start_time)\n        self.server.log_match(\"objname=>%s\" % hook_name_00,\n                              starttime=start_time)\n        self.server.log_match(\"reply_code=>0\", starttime=start_time)\n        self.server.log_match(\"reply_auxcode=>0\", starttime=start_time)\n        self.server.log_match(\"reply_choice=>1\", starttime=start_time)\n        self.server.log_match(\"reply_text=>None\", starttime=start_time)\n\n        for mom in self.server.moms.values():\n            start_time_mom = time.time()\n            self.logger.info(f\"setting and unsetting attributes on\\n\"\n                             f\"mom.hostname:{mom.hostname}\\n\"\n                             f\"mom.fqdn:{mom.fqdn}\\n\"\n                             f\"mom.name:{mom.name}\\n\"\n                             f\"mom.__dict__:{mom.__dict__}\\n\"\n                             )\n            self.server.manager(MGR_CMD_SET, NODE,\n                                {'resources_available.ncpus': '700000'},\n                                id=mom.shortname)\n            self.server.manager(MGR_CMD_UNSET, NODE,\n                                'resources_available.ncpus',\n                                id=mom.shortname)\n\n            a = {'max_run_res_soft.ncpus': \"[u:\" + str(TEST_USER1) + \"=2]\"}\n            self.server.manager(MGR_CMD_SET, QUEUE, a, 'workq')\n            self.server.manager(MGR_CMD_UNSET, QUEUE,\n                                'max_run_res_soft.ncpus', 'workq')\n\n            self.server.log_match(\"cmd=>MGR_CMD_SET\",\n                              
    starttime=start_time_mom)\n            self.server.log_match(\"objtype=>MGR_OBJ_NODE\",\n                                  starttime=start_time_mom)\n            self.server.log_match(\"objname=>%s\" % mom.shortname,\n                                  starttime=start_time_mom)\n            match = self.server.log_match(\"attribs[0]=>flags:0,flags_lst:[]\",\n                                          starttime=start_time_mom,\n                                          existence=True,\n                                          allmatch=True,\n                                          n=\"ALL\")\n            self.logger.info(pformat(match))\n            match = self.server.log_match(\"(stringified)\",\n                                          starttime=start_time_mom,\n                                          allmatch=True,\n                                          n=\"ALL\"\n                                          )\n            self.logger.info(pformat(match))\n            match = self.server.log_match(\"resources_available.ncpus\",\n                                          starttime=start_time_mom,\n                                          allmatch=True,\n                                          n=\"ALL\"\n                                          )\n            self.logger.info(pformat(match))\n            match = self.server.log_match(\"max_run_res_soft.ncpus\",\n                                          starttime=start_time_mom,\n                                          allmatch=True,\n                                          n=\"ALL\"\n                                          )\n            self.logger.info(pformat(match))\n            match = self.server.log_match(\"Hook, processed normally.\",\n                                          starttime=start_time_mom)\n            self.logger.info(pformat(match))\n            try:\n                match = self.server.log_match(\"Error in hook\",\n                                              
starttime=start_time_mom,\n                                              existence=False)\n            except Exception:\n                match = self.server.log_match(\"Error in hook\",\n                                              starttime=start_time_mom,\n                                              existence=True,\n                                              allmatch=True,\n                                              n=\"ALL\")\n                self.logger.info(pformat(match))\n                raise\n\n            self.server.log_match(\"attribs[0]=>flags:0,flags_lst:[],name:reso\"\n                                  \"urces_available,op:0,op_str:BATCH_OP_SET,r\"\n                                  \"esource:ncpus,sisters:[],value:700000 (str\"\n                                  \"ingified)\",\n                                  starttime=start_time_mom)\n            self.server.log_match(\"attribs[0]=>flags:0,flags_lst:[],name:reso\"\n                                  \"urces_available,op:1,op_str:BATCH_OP_UNSET\"\n                                  \",resource:ncpus,sisters:[],value: (stringi\"\n                                  \"fied)\",\n                                  starttime=start_time_mom)\n        ret = self.server.delete_hook(hook_name_00)\n        self.assertEqual(ret, True, \"Could not delete hook %s\" % hook_name_00)\n        self.server.log_match(\"%s;deleted at request of\" % hook_name_00,\n                              starttime=start_time)\n\n        self.server.log_match(\"missing attributes in pbs\",\n                              starttime=start_time, existence=False)\n        self.server.log_match(\"all attributes found in pbs\",\n                              starttime=start_time, existence=True)\n\n        self.logger.info(\"**************** HOOK END ****************\")\n"
  },
  {
    "path": "test/tests/functional/pbs_hook_modifyvnode_state_changes.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nimport socket\nimport logging\nimport textwrap\nfrom pprint import pformat\nfrom tests.functional import *\nfrom ptl.utils.pbs_dshutils import get_method_name\n\n\nnode_states = {\n    'ND_STATE_FREE': 0,\n    'ND_STATE_OFFLINE': 1,\n    'ND_STATE_DOWN': 2,\n    'ND_STATE_DELETED': 4,\n    'ND_STATE_UNRESOLVABLE': 8,\n    'ND_STATE_STALE': 32,\n    'ND_STATE_JOBBUSY': 16,\n    'ND_STATE_JOB_EXCLUSIVE': 64,\n    'ND_STATE_RESV_EXCLUSIVE': 8192,\n    'ND_STATE_BUSY': 128,\n    'ND_STATE_UNKNOWN': 256,\n    'ND_STATE_NEEDS_HELLOSVR': 512,\n    'ND_STATE_INIT': 1024,\n    'ND_STATE_PROV': 2048,\n    'ND_STATE_WAIT_PROV': 4096,\n    'ND_STATE_SLEEP': 262144,\n    'ND_STATE_OFFLINE_BY_MOM': 16384,\n    'ND_STATE_MARKEDDOWN': 32768,\n    'ND_STATE_NEED_ADDRS': 65536,\n    'ND_STATE_MAINTENANCE': 131072,\n    'ND_STATE_NEED_CREDENTIALS': 524288,\n    'ND_STATE_VNODE_UNAVAILABLE': 409903\n}\n\n\ndef get_hook_body_modifyvnode_param_rpt():\n    hook_body = \"\"\"\n    import pbs\n    import sys\n    try:\n        e = pbs.event()\n        v = e.vnode\n        v_o = e.vnode_o\n        lsct = v.last_state_change_time\n        lsct_o = v_o.last_state_change_time\n        state_str_buf_v = \",\".join(v.extract_state_strs())\n        state_str_buf_v_o = \",\".join(v_o.extract_state_strs())\n        state_int_buf_v = ','.join([str(_) for _ in v.extract_state_ints()])\n       
 state_int_buf_v_o = ','.join(\n            [str(_) for _ in v_o.extract_state_ints()])\n        # print show_vnode_state record (bi consumable)\n        svs1_data = \"v.state_hex=%s v_o.state_hex=%s v.state_strs=%s \" \\\n                    \"v_o.state_strs=%s\" % \\\n                    (hex(v.state), hex(v_o.state), state_str_buf_v,\n                     state_str_buf_v_o)\n        svs2_data = \"v.state_ints=%s v_o.state_ints=%s v.lsct=%s \" \\\n                    \"v_o.lsct=%s\" % \\\n                    (state_int_buf_v, state_int_buf_v_o, str(lsct),\n                     str(lsct_o))\n        svs_data = \"%s %s\" % (svs1_data, svs2_data)\n        pbs.logmsg(pbs.LOG_DEBUG,\n                   \"show_vnode_state;name=%s %s\" % (v.name, svs_data))\n        # print additional hook parameter values\n        pbs.logmsg(pbs.LOG_DEBUG, \"name: v=%s, v_o=%s\" % (v.name, v_o.name))\n        pbs.logmsg(pbs.LOG_DEBUG,\n                   \"state: v=%s, v_o=%s\" % (hex(v.state), hex(v_o.state)))\n        pbs.logmsg(pbs.LOG_DEBUG, \"last_state_change_time: v=%s, v_o=%s\" % (\n                   str(lsct), str(lsct_o)))\n        pbs.logmsg(pbs.LOG_DEBUG,\n                   \"comment: v=%s, v_o=%s\" % (v.comment, v_o.comment))\n        pbs.logmsg(pbs.LOG_DEBUG,\n                   \"aoe: v=%s, v_o=%s\" % (v.current_aoe, v_o.current_aoe))\n        pbs.logmsg(pbs.LOG_DEBUG, \"in_mvn_host: v=%s, v_o=%s\" % (\n                   v.in_multivnode_host, v_o.in_multivnode_host))\n        pbs.logmsg(pbs.LOG_DEBUG, \"jobs: v=%s, v_o=%s\" % (v.jobs, v_o.jobs))\n        pbs.logmsg(pbs.LOG_DEBUG, \"Mom: v=%s, v_o=%s\" % (v.Mom, v_o.Mom))\n        pbs.logmsg(pbs.LOG_DEBUG,\n                   \"ntype: v=%s, v_o=%s\" % (hex(v.ntype), hex(v_o.ntype)))\n        pbs.logmsg(pbs.LOG_DEBUG, \"pcpus: v=%s, v_o=%s\" % (v.pcpus, v_o.pcpus))\n        pbs.logmsg(pbs.LOG_DEBUG,\n                   \"pnames: v=%s, v_o=%s\" % (v.pnames, v_o.pnames))\n        pbs.logmsg(pbs.LOG_DEBUG, \"Port: v=%s, 
v_o=%s\" % (v.Port, v_o.Port))\n        pbs.logmsg(pbs.LOG_DEBUG,\n                   \"Priority: v=%s, v_o=%s\" % (v.Priority, v_o.Priority))\n        pbs.logmsg(pbs.LOG_DEBUG, \"provision_enable: v=%s, v_o=%s\" % (\n                   v.provision_enable, v_o.provision_enable))\n        pbs.logmsg(pbs.LOG_DEBUG, \"queue: v=%s, v_o=%s\" % (v.queue, v_o.queue))\n        pbs.logmsg(pbs.LOG_DEBUG, \"res_assigned: v=%s, v_o=%s\" % (\n                   v.resources_assigned, v_o.resources_assigned))\n        pbs.logmsg(pbs.LOG_DEBUG, \"res_avail: v=%s, v_o=%s\" % (\n                   v.resources_available, v_o.resources_available))\n        pbs.logmsg(pbs.LOG_DEBUG, \"resv: v=%s, v_o=%s\" % (v.resv, v_o.resv))\n        pbs.logmsg(pbs.LOG_DEBUG, \"resv_enable: v=%s, v_o=%s\" % (\n                   v.resv_enable, v_o.resv_enable))\n        pbs.logmsg(pbs.LOG_DEBUG,\n                   \"sharing: v=%s, v_o=%s\" % (v.sharing, v_o.sharing))\n        # sanity test some values\n        if (lsct < lsct_o) or (lsct_o <= 0):\n            e.reject(\"last_state_change_time: bad timestamp value\")\n        else:\n            pbs.logmsg(pbs.LOG_DEBUG, \"last_state_change_time: good times\")\n        if (v.name != v_o.name) or (not v.name):\n            e.reject(\n                \"name: vnode and vnode_o name values are null or mismatched\")\n        else:\n            pbs.logmsg(pbs.LOG_DEBUG, \"name: good names\")\n        if (isinstance(v.state, int)) and (isinstance(v_o.state, int)):\n            pbs.logmsg(pbs.LOG_DEBUG, \"state: good states\")\n        else:\n            e.reject(\"state: bad state value\")\n        if len(v.extract_state_strs()) == len(v.extract_state_ints()):\n            pbs.logmsg(pbs.LOG_DEBUG, \"state sets: good v sets\")\n        else:\n            e.reject(\"state sets: bad v sets\")\n        if len(v_o.extract_state_strs()) == len(v_o.extract_state_ints()):\n            pbs.logmsg(pbs.LOG_DEBUG, \"state sets: good v_o sets\")\n        else:\n         
   e.reject(\"state sets: bad v_o sets\")\n        e.accept()\n    except SystemExit:\n        pass\n    except:\n        pbs.event().reject(\"%s hook failed with %s\" % (\n                           pbs.event().hook_name, sys.exc_info()[:2]))\n    \"\"\"\n    hook_body = textwrap.dedent(hook_body)\n    return hook_body\n\n\ndef get_hook_body_reverse_node_state():\n    hook_body = \"\"\"\n    import pbs\n    e = pbs.event()\n    pbs.logmsg(pbs.LOG_DEBUG, \"pbs.__file__:\" + pbs.__file__)\n    # this is backwards as it's a reverse lookup.\n    for value, key in pbs.REVERSE_NODE_STATE.items():\n        pbs.logmsg(pbs.LOG_DEBUG, \"key:%s value:%s\" % (key, value))\n    e.accept()\n    \"\"\"\n    hook_body = textwrap.dedent(hook_body)\n    return hook_body\n\n\nclass TestPbsModifyvnodeStateChanges(TestFunctional):\n\n    \"\"\"\n    Test the modifyvnode hook by inducing various vnode state changes and\n    inspecting the pbs log for expected values.\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        Job.dflt_attributes[ATTR_k] = 'oe'\n\n    def checkLog(self, start_time, mom, check_up, check_down):\n        self.server.log_match(\"set_vnode_state;vnode.state=\",\n                              starttime=start_time)\n        self.server.log_match(\"show_vnode_state;name=\",\n                              starttime=start_time)\n        self.server.log_match(\"name: v=\", starttime=start_time)\n        self.server.log_match(\"state: v=\", starttime=start_time)\n        self.server.log_match(\"last_state_change_time: v=\",\n                              starttime=start_time)\n        self.server.log_match(\"good times\", starttime=start_time)\n        self.server.log_match(\"good names\", starttime=start_time)\n        self.server.log_match(\"good states\", starttime=start_time)\n        self.server.log_match(\"good v sets\", starttime=start_time)\n        self.server.log_match(\"good v_o sets\", starttime=start_time)\n        if check_up:\n   
         self.server.log_match(\"Node;%s;node up\" % mom,\n                                  starttime=start_time)\n        if check_down:\n            self.server.log_match(\"Node;%s;node down\" % mom,\n                                  starttime=start_time)\n\n    def checkNodeFree(self, start_time):\n        self.server.log_match(\"v.state_hex=0x0\",\n                              starttime=start_time)\n        self.server.log_match(\"v.state_strs=ND_STATE_FREE\",\n                              starttime=start_time)\n        self.server.log_match(\"v.state_ints=0\",\n                              starttime=start_time)\n\n    def checkNodeDown(self, start_time):\n        self.server.log_match(\"v.state_hex=0x2\",\n                              starttime=start_time)\n        self.server.log_match(\n            \"v.state_strs=ND_STATE_DOWN,ND_STATE_VNODE_UNAVAILABLE\",\n            starttime=start_time)\n        self.server.log_match(\"v.state_ints=2,409903\",\n                              starttime=start_time)\n\n    def checkNodeOffline(self, start_time):\n        self.server.log_match(\"v.state_hex=0x1\",\n                              starttime=start_time)\n        self.server.log_match(\n            \"v.state_strs=ND_STATE_OFFLINE,ND_STATE_VNODE_UNAVAILABLE\",\n            starttime=start_time)\n        self.server.log_match(\"v.state_ints=1,409903\",\n                              starttime=start_time)\n\n    def checkNodeResvExclusive(self, start_time):\n        self.server.log_match(\"v.state_hex=0x2000\",\n                              starttime=start_time)\n        self.server.log_match(\"v.state_strs=ND_STATE_RESV_EXCLUSIVE\",\n                              starttime=start_time)\n        self.server.log_match(\"v.state_ints=8192\",\n                              starttime=start_time)\n\n    def checkpreviousStateChain(self, start_time, end_time, mom):\n        # Check the state change entries for the specified mom\n        search_string = 
\";show_vnode_state;name=\" + mom\n        self.logger.info(\n            'checkpreviousStateChain search_string='+search_string+' start=' +\n            str(start_time)+' end='+str(end_time))\n\n        # Retrieve requested entries from the last 2000 lines of the server log\n        lines = self.server.log_match(msg=search_string,\n                                      allmatch=True,\n                                      starttime=start_time,\n                                      endtime=end_time,\n                                      tail=True,\n                                      n=2000)\n\n        not_first = False\n        previous_state = None\n        previous_lsct = None\n        for tupleline in lines:\n            line = tupleline[1]\n            head, tail = line.rsplit(';', 1)\n            pairs = tail.split(' ')\n            line_dict = dict([key_value.split(\"=\", 1) for key_value in pairs])\n            self.logger.debug('Examining line: ' + line)\n            if not_first:\n                # compare current \"v_o\" values with previous entry's \"v\" values\n                self.assertEqual(\n                    previous_state, line_dict['v_o.state_hex'],\n                    'Node state chain mismatch! previous_state=%s line=%s' %\n                    (previous_state, line))\n                self.assertEqual(\n                    previous_lsct, line_dict['v_o.lsct'],\n                    'Node lsct chain mismatch! 
previous_lsct=%s line=%s' %\n                    (previous_lsct, line))\n                self.logger.debug('Current and previous matched!')\n            else:\n                not_first = True\n                self.logger.debug('Setting not_first!')\n            # current values become the previous values for the next iteration\n            previous_state = line_dict['v.state_hex']\n            previous_lsct = line_dict['v.lsct']\n\n    @skipOnCpuSet\n    def test_hook_state_changes_00(self):\n        \"\"\"\n        Test: induce a variety of vnode state changes with debug turned on\n        and inspect the pbs log for expected entries.\n        \"\"\"\n        if os.getuid() != 0 or sys.platform in ('cygwin', 'win32'):\n            self.skipTest(\"Test needs to run as root\")\n\n        self.logger.debug(\"---- %s TEST STARTED ----\" % get_method_name(self))\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 4095})\n        attrs = {'event': 'modifyvnode', 'enabled': 'True', 'debug': 'True'}\n        hook_name_00 = 'm1234'\n        hook_body_00 = get_hook_body_modifyvnode_param_rpt()\n        ret = self.server.create_hook(hook_name_00, attrs)\n        self.assertTrue(ret, \"Could not create hook %s\" % hook_name_00)\n        ret = self.server.import_hook(hook_name_00, hook_body_00)\n\n        # print info about the test deployment\n        self.logger.debug(\"socket.gethostname():%s\" % socket.gethostname())\n        self.logger.debug(\"***self.server.name:%s\" % str(self.server.name))\n        self.logger.debug(\"self.server.moms:%s\" % str(self.server.moms))\n        self.logger.debug(\"self.server.hostname=%s\" % self.server.hostname)\n        nodeinfo = self.server.status(NODE)\n\n        # test effects of various state changes on each mom\n        for mom in self.server.moms.values():\n            # State change test: mom stop\n            start_time = time.time()\n            state_chain_start_time = start_time\n            mom.stop()\n         
   self.checkLog(start_time, mom.fqdn, check_up=False,\n                          check_down=True)\n            self.checkNodeDown(start_time)\n\n            # State change test: mom start\n            start_time = time.time()\n            mom.start()\n            self.checkLog(start_time, mom.fqdn, check_up=True,\n                          check_down=False)\n            self.checkNodeFree(start_time)\n\n            # State change test: mom restart\n            start_time = time.time()\n            mom.restart()\n            self.checkLog(start_time, mom.fqdn, check_up=True,\n                          check_down=True)\n            self.checkNodeDown(start_time)\n            self.checkNodeFree(start_time)\n\n            # State change test: take mom offline then online\n            # take offline\n            start_time = time.time()\n            self.logger.debug(\"    ***offline mom:%s\" % mom)\n            self.server.manager(MGR_CMD_SET, NODE, {'state': (INCR,\n                                                              'offline')},\n                                id=mom.shortname)\n            self.checkLog(start_time, mom.fqdn, check_up=False,\n                          check_down=False)\n            self.checkNodeOffline(start_time)\n            # back online\n            start_time = time.time()\n            self.logger.debug(\"    ***online mom:%s\" % mom)\n            self.server.manager(MGR_CMD_SET, NODE, {'state': (DECR,\n                                                              'offline')},\n                                id=mom.shortname)\n            self.checkLog(start_time, mom.fqdn, check_up=False,\n                          check_down=False)\n            self.checkNodeFree(start_time)\n\n            # State change test: create and release maintenance reservation\n            start_time = time.time()\n            res_start_time = start_time + 15\n            res_end_time = res_start_time + 1\n            attrs = {\n                
'reserve_start': res_start_time,\n                'reserve_end': res_end_time\n            }\n            self.logger.debug(\"    ***reserve & release mom:%s\" % mom)\n            rid = self.server.submit(Reservation(ROOT_USER, attrs,\n                                                 hosts=[mom.shortname]))\n            self.logger.debug(\"rid=%s\" % rid)\n            self.checkLog(start_time, mom.fqdn, check_up=False,\n                          check_down=False)\n            self.checkNodeResvExclusive(start_time)\n            self.checkNodeFree(start_time)\n\n            # Verify each preceding state matches the current previous state\n            state_chain_end_time = time.time()\n            self.checkpreviousStateChain(state_chain_start_time,\n                                         state_chain_end_time,\n                                         mom.shortname)\n\n        self.logger.debug(\"---- %s TEST ENDED ----\" % get_method_name(self))\n\n    @tags('smoke')\n    def test_check_node_state_constants_00(self):\n        \"\"\"\n        Test: verify expected node state constants and associated reverse map\n        are defined in the pbs module and contain the expected values.\n        \"\"\"\n        self.logger.debug(\"---- %s TEST STARTED ----\" % get_method_name(self))\n        self.add_pbs_python_path_to_sys_path()\n        import pbs\n        self.assertEqual(\n            len(pbs.REVERSE_NODE_STATE), len(node_states),\n            \"node state count mismatch: actual=%s, expected=%s\" %\n            (len(pbs.REVERSE_NODE_STATE), len(node_states)))\n        for attr, value in node_states.items():\n            self.logger.debug(\"checking attribute '%s' in pbs module\", attr)\n            self.assertTrue(\n                hasattr(pbs, attr), \"pbs.%s does not exist.\" % attr)\n            self.assertEqual(\n                getattr(pbs, attr), value,\n                \"pbs.%s is incorrect: actual=%s, expected=%s.\" %\n                (attr, getattr(pbs, 
attr), value))\n            self.assertIn(value, pbs.REVERSE_NODE_STATE)\n            self.assertEqual(\n                pbs.REVERSE_NODE_STATE[value], attr,\n                (\"pbs.REVERSE_NODE_STATE[%s] is incorrect: actual=%s, \" +\n                 \"expected=%s.\") % (value, pbs.REVERSE_NODE_STATE[value],\n                                    attr))\n        self.logger.debug(\"---- %s TEST ENDED ----\" % get_method_name(self))\n\n    def test_check_node_state_lookup_00(self):\n        \"\"\"\n        Test: check for the existence and values of the\n        pbs.REVERSE_NODE_STATE dictionary\n\n        Run a hook that converts a state change hex into a string, then search\n        for it in the server log.\n        \"\"\"\n\n        self.add_pbs_python_path_to_sys_path()\n        import pbs\n        self.logger.debug(\"---- %s TEST STARTED ----\" % get_method_name(self))\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 4095})\n        attrs = {'event': 'modifyvnode', 'enabled': 'True', 'debug': 'True'}\n        hook_name_00 = 'x1234'\n        hook_body_00 = get_hook_body_reverse_node_state()\n        ret = self.server.create_hook(hook_name_00, attrs)\n        self.assertTrue(ret, \"Could not create hook %s\" % hook_name_00)\n        ret = self.server.import_hook(hook_name_00, hook_body_00)\n        for mom in self.server.moms.values():\n            start_time = time.time()\n            mom.restart()\n            self.server.log_match(\"Node;%s;node up\" % mom.fqdn,\n                                  starttime=start_time)\n            self.server.log_match(\"Node;%s;node down\" % mom.fqdn,\n                                  starttime=start_time)\n            for value, key in pbs.REVERSE_NODE_STATE.items():\n                self.server.log_match(\"key:%s value:%s\" % (key, value),\n                                      starttime=start_time)\n        self.logger.debug(\"---- %s TEST ENDED ----\" % get_method_name(self))\n\n    @skipOnCpuSet\n    
@tags('smoke')\n    def test_hook_state_changes_01(self):\n        \"\"\"\n        Test each mom for state changes:\n        1.  sigkill the mom; check server log for expected messages\n        2.  start the mom; check server log for expected messages\n        3.  verify the chain of reported current/previous states is unbroken\n        \"\"\"\n\n        self.logger.debug(\"---- %s TEST STARTED ----\" % get_method_name(self))\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 4095})\n        attrs = {'event': 'modifyvnode', 'enabled': 'True', 'debug': 'True'}\n        hook_name_00 = 'p1234'\n        hook_body_00 = get_hook_body_modifyvnode_param_rpt()\n        ret = self.server.create_hook(hook_name_00, attrs)\n        self.assertTrue(ret, \"Could not create hook %s\" % hook_name_00)\n        ret = self.server.import_hook(hook_name_00, hook_body_00)\n\n        for mom in self.server.moms.values():\n            self.logger.debug(\"    ***sigkilling mom:%s\", mom.fqdn)\n\n            start_time = time.time()\n            state_chain_start_time = start_time\n            mom.signal('-KILL')\n            self.checkLog(start_time, mom.fqdn, check_up=False,\n                          check_down=True)\n            self.checkNodeDown(start_time)\n\n            start_time = time.time()\n            mom.start()\n            self.checkLog(start_time, mom.fqdn, check_up=True,\n                          check_down=False)\n            self.checkNodeFree(start_time)\n\n            # Verify each preceding state matches the current previous state\n            state_chain_end_time = time.time()\n            self.checkpreviousStateChain(state_chain_start_time,\n                                         state_chain_end_time,\n                                         mom.shortname)\n\n        self.logger.debug(\"---- %s TEST ENDED ----\" % get_method_name(self))\n\n    @skipOnCpuSet\n    def test_hook_state_changes_02(self):\n        \"\"\"\n        Test: stop and start 
the pbs server; look for proper log messages\n        \"\"\"\n\n        self.logger.debug(\"---- %s TEST STARTED ----\" % get_method_name(self))\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 4095})\n        attrs = {'event': 'modifyvnode', 'enabled': 'True', 'debug': 'True'}\n        hook_name_00 = 's1234'\n        hook_body_00 = get_hook_body_modifyvnode_param_rpt()\n        ret = self.server.create_hook(hook_name_00, attrs)\n        self.assertTrue(ret, \"Could not create hook %s\" % hook_name_00)\n        ret = self.server.import_hook(hook_name_00, hook_body_00)\n\n        # stop the server and then start it\n        start_time = time.time()\n        state_chain_start_time = start_time\n        self.server.stop()\n        self.server.start()\n\n        # look for messages indicating all the vnodes came up\n        for mom in self.server.moms.values():\n            self.checkLog(start_time, mom.fqdn, check_up=True,\n                          check_down=False)\n            self.checkNodeFree(start_time)\n            # Verify each preceding state matches the current previous state\n            state_chain_end_time = time.time()\n            self.checkpreviousStateChain(state_chain_start_time,\n                                         state_chain_end_time,\n                                         mom.shortname)\n\n        self.logger.debug(\"---- %s TEST ENDED ----\" % get_method_name(self))\n"
  },
  {
    "path": "test/tests/functional/pbs_hook_perf_stat.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass Test_hook_perf_stat(TestFunctional):\n    \"\"\"\n    This test suite tests that the hook performance stats are in place\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        # ensure LOG_EVENT_DEBUG3 is being recorded to see perf stats\n        a = {'log_events': 4095}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        self.mom.add_config({'$logevent': 4095})\n        self.mom.signal('-HUP')\n\n        self.hook_content = \"\"\"\nimport pbs\npbs.logmsg(pbs.LOG_DEBUG, \"server hook called\")\ns = pbs.server()\npbs.logmsg(pbs.LOG_DEBUG, \"server data collected for %s\" % s.name)\n\"\"\"\n        self.mhook_content = \"\"\"\nimport pbs\npbs.logmsg(pbs.LOG_DEBUG, \"mom hook called\")\ns = pbs.server()\npbs.logmsg(pbs.LOG_DEBUG, \"mom data collected for %s\" % s.name)\n\"\"\"\n\n    def test_queuejob_hook(self):\n        \"\"\"\n        Test that pbs_server collects performance stats for queuejob hook\n        \"\"\"\n        hook_name = 'qhook'\n        hook_event = 'queuejob'\n        hook_attr = {'enabled': 'true', 'event': hook_event}\n        self.server.create_import_hook(hook_name, hook_attr, self.hook_content)\n\n        j = Job(TEST_USER)\n        self.server.submit(j)\n\n        hd = \"hook_perf_stat\"\n        lbl = \"label=hook_%s_%s_.*\" % (hook_event, hook_name)\n  
      tr = \"profile_start\"\n        act = \"action=server_process_hooks\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, tr),\n                              regexp=True)\n\n        stat = \"walltime=.* cputime=.*\"\n        act = r\"action=populate:pbs\\.event\\(\\)\\.job\\(.*\\)\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                              regexp=True)\n\n        lbl = \"label=hook_func\"\n        act = r\"action=populate:pbs.server\\(\\)\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                              regexp=True)\n\n        lbl = \"label=hook_%s_%s_.*\" % (hook_event, hook_name)\n        act = \"action=run_code\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                              regexp=True)\n        tr = \"profile_stop\"\n        act = \"action=server_process_hooks\"\n        self.server.log_match(\"%s;%s %s %s %s\" % (hd, lbl, act, stat, tr),\n                              regexp=True)\n\n        lbl = \"label=hook_%s_.*\" % (hook_event,)\n        act = \"action=hook_output\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                              regexp=True)\n\n    def test_modifyjob_hook(self):\n        \"\"\"\n        Test that pbs_server collects performance stats for modifyjob hook\n        \"\"\"\n        hook_name = 'mhook'\n        hook_event = 'modifyjob'\n        hook_attr = {'enabled': 'true', 'event': hook_event}\n        self.server.create_import_hook(hook_name, hook_attr, self.hook_content)\n\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.alterjob(jid, {'Priority': 7}, runas=TEST_USER)\n\n        hd = \"hook_perf_stat\"\n        lbl = \"label=hook_%s_%s_.*\" % (hook_event, hook_name)\n        tr = \"profile_start\"\n        act = \"action=server_process_hooks\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, tr),\n                 
             regexp=True)\n\n        stat = \"walltime=.* cputime=.*\"\n        act = r\"action=populate:pbs\\.event\\(\\)\\.job\\(.*\\)\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                              regexp=True)\n\n        act = r\"action=populate:pbs.server\\(\\).job\\(.*\\)\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                              regexp=True)\n\n        act = r\"action=populate:pbs.server\\(\\).queue\\(.*\\)\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                              regexp=True)\n\n        act = r\"action=populate:pbs.server\\(\\)\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                              regexp=True)\n\n        lbl = \"label=hook_%s_%s_.*\" % (hook_event, hook_name)\n        act = \"action=run_code\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                              regexp=True)\n        tr = \"profile_stop\"\n        act = \"action=server_process_hooks\"\n        self.server.log_match(\"%s;%s %s %s %s\" % (hd, lbl, act, stat, tr),\n                              regexp=True)\n\n        lbl = \"label=hook_%s_.*\" % (hook_event,)\n        act = \"action=hook_output\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                              regexp=True)\n\n    def test_movejob_hook(self):\n        \"\"\"\n        Test that pbs_server collects performance stats for movejob hook\n        \"\"\"\n        hook_name = 'mvhook'\n        hook_event = 'movejob'\n        hook_attr = {'enabled': 'true', 'event': hook_event}\n        self.server.create_import_hook(hook_name, hook_attr, self.hook_content)\n\n        j = Job(TEST_USER, {'Hold_Types': None})\n        jid = self.server.submit(j)\n        self.server.movejob(jobid=jid, destination=\"workq\")\n\n        hd = \"hook_perf_stat\"\n        lbl = \"label=hook_%s_%s_.*\" % 
(hook_event, hook_name)\n        tr = \"profile_start\"\n        act = \"action=server_process_hooks\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, tr),\n                              regexp=True)\n\n        stat = \"walltime=.* cputime=.*\"\n        act = r\"action=populate:pbs.server\\(\\).job\\(.*\\)\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                              regexp=True)\n\n        act = r\"action=populate:pbs.server\\(\\).queue\\(.*\\)\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                              regexp=True)\n\n        act = r\"action=populate:pbs.server\\(\\)\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                              regexp=True)\n\n        lbl = \"label=hook_%s_%s_.*\" % (hook_event, hook_name)\n        act = \"action=run_code\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                              regexp=True)\n        tr = \"profile_stop\"\n        act = \"action=server_process_hooks\"\n        self.server.log_match(\"%s;%s %s %s %s\" % (hd, lbl, act, stat, tr),\n                              regexp=True)\n\n        lbl = \"label=hook_%s_.*\" % (hook_event,)\n        act = \"action=hook_output\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                              regexp=True)\n\n    def test_runjob_hook(self):\n        \"\"\"\n        Test that pbs_server collects performance stats for runjob hook\n        \"\"\"\n        hook_name = 'rhook'\n        hook_event = 'runjob'\n        hook_attr = {'enabled': 'true', 'event': hook_event}\n        self.server.create_import_hook(hook_name, hook_attr, self.hook_content)\n\n        j = Job(TEST_USER)\n        self.server.submit(j)\n\n        hd = \"hook_perf_stat\"\n        lbl = \"label=hook_%s_%s_.*\" % (hook_event, hook_name)\n        tr = \"profile_start\"\n        act = 
\"action=server_process_hooks\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, tr),\n                              regexp=True)\n\n        stat = \"walltime=.* cputime=.*\"\n        act = r\"action=populate:pbs.server\\(\\).job\\(.*\\)\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                              regexp=True)\n\n        act = r\"action=populate:pbs.server\\(\\).queue\\(.*\\)\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                              regexp=True)\n\n        act = r\"action=populate:pbs.server\\(\\)\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                              regexp=True)\n\n        lbl = \"label=hook_%s_%s_.*\" % (hook_event, hook_name)\n        act = \"action=run_code\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                              regexp=True)\n        tr = \"profile_stop\"\n        act = \"action=server_process_hooks\"\n        self.server.log_match(\"%s;%s %s %s %s\" % (hd, lbl, act, stat, tr),\n                              regexp=True)\n\n    def test_resvsub_hook(self):\n        \"\"\"\n        Test that pbs_server collects performance stats for resvsub hook\n        \"\"\"\n        hook_name = 'rhook'\n        hook_event = 'resvsub'\n        hook_attr = {'enabled': 'true', 'event': hook_event}\n        self.server.create_import_hook(hook_name, hook_attr, self.hook_content)\n\n        r = Reservation(TEST_USER)\n        self.server.submit(r)\n\n        hd = \"hook_perf_stat\"\n        lbl = \"label=hook_%s_%s_.*\" % (hook_event, hook_name)\n        tr = \"profile_start\"\n        act = \"action=server_process_hooks\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, tr),\n                              regexp=True)\n\n        stat = \"walltime=.* cputime=.*\"\n        act = r\"action=populate:pbs\\.event\\(\\)\\.resv\\(.*\\)\"\n        
self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                              regexp=True)\n\n        lbl = \"label=hook_func\"\n        act = r\"action=populate:pbs.server\\(\\)\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                              regexp=True)\n\n        lbl = \"label=hook_%s_%s_.*\" % (hook_event, hook_name)\n        act = \"action=run_code\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                              regexp=True)\n        tr = \"profile_stop\"\n        act = \"action=server_process_hooks\"\n        self.server.log_match(\"%s;%s %s %s %s\" % (hd, lbl, act, stat, tr),\n                              regexp=True)\n\n        lbl = \"label=hook_%s_.*\" % (hook_event,)\n        act = \"action=hook_output\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                              regexp=True)\n\n    def test_periodic_hook(self):\n        \"\"\"\n        Test that pbs_server collects performance stats for periodic hook\n        \"\"\"\n        hook_name = 'phook'\n        hook_event = 'periodic'\n        hook_attr = {'event': hook_event, 'freq': 5}\n        self.server.create_import_hook(hook_name, hook_attr,\n                                       self.hook_content, overwrite=True)\n\n        hd = \"hook_perf_stat\"\n        lbl = \"label=hook_%s_%s_.*\" % (hook_event, hook_name)\n        tr = \"profile_start\"\n        act = \"action=server_process_hooks\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, tr),\n                              regexp=True)\n\n        stat = \"walltime=.* cputime=.*\"\n        act = r\"action=populate:pbs\\.event\\(\\)\\.vnode_list\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                              regexp=True)\n\n        act = r\"action=populate:pbs\\.event\\(\\)\\.resv_list\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n         
                     regexp=True)\n\n        lbl = \"label=hook_func\"\n        act = r\"action=populate:pbs.server\\(\\)\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                              regexp=True)\n\n        lbl = \"label=hook_%s_%s_.*\" % (hook_event, hook_name)\n        act = \"action=run_code\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                              regexp=True)\n        tr = \"profile_stop\"\n        act = \"action=server_process_hooks\"\n        self.server.log_match(\"%s;%s %s %s %s\" % (hd, lbl, act, stat, tr),\n                              regexp=True)\n\n        lbl = \"label=hook_%s_.*\" % (hook_event,)\n        act = \"action=hook_output\"\n        self.server.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                              regexp=True)\n\n    def test_mom_hooks(self):\n        \"\"\"\n        Test that pbs_mom collects performance stats for mom hooks\n        \"\"\"\n        for hook_event in ['execjob_begin',\n                           'execjob_launch',\n                           'execjob_prologue',\n                           'execjob_epilogue',\n                           'execjob_end']:\n            hook_name = hook_event.replace('execjob_', '')\n            hook_attr = {'enabled': 'true', 'event': hook_event}\n            self.server.create_import_hook(hook_name, hook_attr,\n                                           self.mhook_content)\n        j = Job(TEST_USER)\n        j.set_sleep_time(5)\n        self.server.submit(j)\n\n        for hook_event in ['execjob_begin',\n                           'execjob_launch',\n                           'execjob_prologue',\n                           'execjob_epilogue',\n                           'execjob_end']:\n            hook_name = hook_event.replace('execjob_', '')\n\n            hd = \"hook_perf_stat\"\n            lbl = \"label=hook_%s_%s_.*\" % (hook_event, hook_name)\n            tr = 
\"profile_start\"\n            act = \"action=mom_process_hooks\"\n            self.mom.log_match(\"%s;%s %s %s\" % (hd, lbl, act, tr),\n                               regexp=True)\n\n            act = \"action=pbs_python\"\n            self.mom.log_match(\"%s;%s %s %s\" % (hd, lbl, act, tr),\n                               regexp=True)\n\n            stat = \"walltime=.* cputime=.*\"\n            act = \"action=load_hook_input_file\"\n            self.mom.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                               regexp=True)\n\n            act = \"action=start_interpreter\"\n            self.mom.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                               regexp=True)\n\n            act = r\"action=populate:pbs\\.event\\(\\)\\.job\\(.*\\)\"\n            self.mom.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                               regexp=True)\n\n            act = \"action=run_code\"\n            self.mom.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                               regexp=True)\n\n            act = \"action=hook_output:.*\"\n            self.mom.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                               regexp=True)\n\n            tr = \"profile_stop\"\n            act = \"action=pbs_python\"\n            self.mom.log_match(\"%s;%s %s %s %s\" % (hd, lbl, act, stat, tr),\n                               regexp=True)\n\n            act = \"action=mom_process_hooks\"\n            self.mom.log_match(\"%s;%s %s %s %s\" % (hd, lbl, act, stat, tr),\n                               regexp=True)\n\n    def test_mom_period_hook(self):\n        \"\"\"\n        Test that pbs_mom collects performance stats for mom period hooks\n        \"\"\"\n        hook_name = \"mom_period\"\n        hook_event = \"exechost_periodic\"\n        hook_attr = {'enabled': 'true', 'event': hook_event}\n        self.server.create_import_hook(hook_name, hook_attr,\n                                     
  self.mhook_content)\n\n        hd = \"hook_perf_stat\"\n        lbl = \"label=hook_%s_%s_.*\" % (hook_event, hook_name)\n        tr = \"profile_start\"\n        act = \"action=pbs_python\"\n        self.mom.log_match(\"%s;%s %s %s\" % (hd, lbl, act, tr),\n                           regexp=True)\n\n        stat = \"walltime=.* cputime=.*\"\n        act = \"action=load_hook_input_file\"\n        self.mom.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                           regexp=True)\n\n        act = \"action=start_interpreter\"\n        self.mom.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                           regexp=True)\n\n        act = r\"action=populate:pbs\\.event\\(\\)\\.vnode_list\"\n        self.mom.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                           regexp=True)\n\n        act = r\"action=populate:pbs\\.event\\(\\)\\.job_list\"\n        self.mom.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                           regexp=True)\n\n        act = \"action=run_code\"\n        self.mom.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                           regexp=True)\n\n        act = \"action=hook_output:.*\"\n        self.mom.log_match(\"%s;%s %s %s\" % (hd, lbl, act, stat),\n                           regexp=True)\n\n        tr = \"profile_stop\"\n        act = \"action=pbs_python\"\n        self.mom.log_match(\"%s;%s %s %s %s\" % (hd, lbl, act, stat, tr),\n                           regexp=True)\n"
  },
  {
    "path": "test/tests/functional/pbs_hook_postqueuejob.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2022 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\nhook_body = \"\"\"\nimport pbs\n\nhook_events = ['queuejob', 'postqueuejob', 'runjob']\nhook_event = {}\n\n# iterate over a copy so unsupported events can be removed from\n# hook_events without skipping the entry that follows them\nfor he in list(hook_events):\n    if hasattr(pbs, he.upper()):\n        event_code = getattr(pbs, he.upper())\n        hook_event[event_code] = he\n        hook_event[he] = event_code\n        hook_event[he.upper()] = event_code\n        del event_code\n    else:\n        hook_events.remove(he)\n\n\npbs_event = pbs.event()\njob = pbs_event.job\n\npbs.logmsg(pbs.LOG_DEBUG, \"Starting... %s\" % (hook_event[pbs_event.type]))\nif hook_event[pbs_event.type] == \"postqueuejob\":\n    myjob = pbs.server().job(job.id)\n    pbs.logmsg(pbs.LOG_DEBUG, \"my jobid=> %s\" % (myjob.id))\n    pbs.logmsg(pbs.LOG_DEBUG, \"my job queue=> %s\" % (myjob.queue))\n\n\npbs.logmsg(pbs.LOG_DEBUG, \"hook name=> %s\" % (pbs_event.hook_name))\npbs.logmsg(pbs.LOG_DEBUG, \"Ending... 
%s\" % (hook_event[pbs_event.type]))\n\npbs_event.accept()\n\"\"\"\n\n\n@tags('hooks')\nclass TestHookPostQueueJob(TestFunctional):\n    \"\"\"\n    This test suite is to test the postqueuejob hook event\n    \"\"\"\n\n    def test_postqueuejob_hook_single_job(self):\n        \"\"\"\n        Verify postqueuejob is running\n        \"\"\"\n\n        hook_name = \"postqueuejob_hook\"\n        attr = {'event': 'postqueuejob', 'enabled': 'True', 'alarm': '50'}\n        self.server.create_import_hook(hook_name, attr, hook_body)\n\n        j = Job(TEST_USER)\n        j.set_sleep_time(10)\n        jid = self.server.submit(j)\n        self.server.log_match(\"Starting... postqueuejob\")\n        self.server.log_match(\"my jobid=> %s\" % jid)\n        self.server.log_match(\"Ending... postqueuejob\")\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n    def test_postqueuejob_hook_multiple_job(self):\n        \"\"\"\n        Verify postqueuejob hook with multiple jobs\n        \"\"\"\n\n        hook_name = \"postqueuejob_hook\"\n        attr = {'event': 'postqueuejob', 'enabled': 'True', 'alarm': '50'}\n        self.server.create_import_hook(hook_name, attr, hook_body)\n\n        j = Job(TEST_USER)\n        j.set_sleep_time(10)\n        jid1 = self.server.submit(j)\n        self.server.log_match(\"Starting... postqueuejob\")\n        self.server.log_match(\"my jobid=> %s\" % jid1)\n        self.server.log_match(\"Ending... postqueuejob\")\n        jid2 = self.server.submit(j)\n        self.server.log_match(\"Starting... postqueuejob\")\n        self.server.log_match(\"my jobid=> %s\" % jid2)\n        self.server.log_match(\"Ending... 
postqueuejob\")\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n\n    def test_postqueuejob_hook_multiple_hooks(self):\n        \"\"\"\n        Verify postqueuejob event with multiple hooks\n        \"\"\"\n\n        hook_name1 = \"postqueuejob_hook1\"\n        attr = {'event': 'postqueuejob', 'enabled': 'True', 'alarm': '50'}\n        self.server.create_import_hook(hook_name1, attr, hook_body)\n\n        hook_name2 = \"postqueuejob_hook2\"\n        attr = {'event': 'postqueuejob', 'enabled': 'True', 'alarm': '50'}\n        self.server.create_import_hook(hook_name2, attr, hook_body)\n\n        j = Job(TEST_USER)\n        j.set_sleep_time(10)\n        jid1 = self.server.submit(j)\n        self.server.log_match(\"Starting... postqueuejob\")\n        self.server.log_match(\"my jobid=> %s\" % jid1)\n        self.server.log_match(\"hook name=> %s\" % (hook_name1))\n        self.server.log_match(\"hook name=> %s\" % (hook_name2))\n        self.server.log_match(\"Ending... postqueuejob\")\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n\n    def test_postqueuejob_hook_multiple_hooks_multiple_jobs(self):\n        \"\"\"\n        Verify multiple postqueuejob hooks with multiple jobs\n        \"\"\"\n\n        hook_name1 = \"postqueuejob_hook1\"\n        attr = {'event': 'postqueuejob', 'enabled': 'True', 'alarm': '50'}\n        self.server.create_import_hook(hook_name1, attr, hook_body)\n\n        hook_name2 = \"postqueuejob_hook2\"\n        attr = {'event': 'postqueuejob', 'enabled': 'True', 'alarm': '50'}\n        self.server.create_import_hook(hook_name2, attr, hook_body)\n\n        j = Job(TEST_USER)\n        j.set_sleep_time(10)\n        jid1 = self.server.submit(j)\n        self.server.log_match(\"Starting... 
postqueuejob\")\n        self.server.log_match(\"my jobid=> %s\" % jid1)\n        self.server.log_match(\"hook name=> %s\" % (hook_name1))\n        self.server.log_match(\"hook name=> %s\" % (hook_name2))\n        self.server.log_match(\"Ending... postqueuejob\")\n        jid2 = self.server.submit(j)\n        self.server.log_match(\"Starting... postqueuejob\")\n        self.server.log_match(\"my jobid=> %s\" % jid2)\n        self.server.log_match(\"Ending... postqueuejob\")\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n\n    def test_queuejob_with_postqueuejob_hook(self):\n        \"\"\"\n        Test postqueuejob hook along with queuejob hook\n        \"\"\"\n\n        hook_queuejob = \"queuejob_hook\"\n        attr = {'event': 'queuejob', 'enabled': 'True', 'alarm': '50'}\n        self.server.create_import_hook(hook_queuejob, attr, hook_body)\n\n        hook_postqueuejob = \"postqueuejob_hook\"\n        attr = {'event': 'postqueuejob', 'enabled': 'True', 'alarm': '50'}\n        self.server.create_import_hook(hook_postqueuejob, attr, hook_body)\n\n        j = Job(TEST_USER)\n        j.set_sleep_time(10)\n        jid1 = self.server.submit(j)\n\n        self.server.log_match(\"Starting... queuejob\")\n        self.server.log_match(\"hook name=> %s\" % (hook_queuejob))\n        self.server.log_match(\"Ending... queuejob\")\n\n        self.server.log_match(\"Starting... postqueuejob\")\n        self.server.log_match(\"my jobid=> %s\" % jid1)\n        self.server.log_match(\"hook name=> %s\" % (hook_postqueuejob))\n        self.server.log_match(\"Ending... 
postqueuejob\")\n\n    def test_postqueuejob_hook_reject(self):\n        \"\"\"\n        Test postqueuejob reject event\n        \"\"\"\n\n        reject_hook_script = \"\"\"\nimport pbs\npbs.event().reject(\"postqueuejob hook rejected the job\")\n\"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 2047})\n        hook_postqueuejob = \"postqueuejob_hook\"\n        attr = {'event': 'postqueuejob', 'enabled': 'True'}\n        self.server.create_import_hook(\n            hook_postqueuejob, attr, reject_hook_script)\n        j = Job(TEST_USER)\n        j.set_sleep_time(10)\n        jid = self.server.submit(j)\n        self.server.log_match(\"postqueuejob hook rejected the job\")\n\n    def test_postqueuejob_hook_with_route_queue(self):\n        \"\"\"\n        Verify that a routing queue routes a job into the appropriate\n        execution queue and postqueuejob hook is executed all the\n        time.\n        \"\"\"\n\n        hook_queuejob = \"queuejob_hook\"\n        attr = {'event': 'queuejob', 'enabled': 'True', 'alarm': '50'}\n        self.server.create_import_hook(hook_queuejob, attr, hook_body)\n\n        hook_postqueuejob = \"postqueuejob_hook\"\n        attr = {'event': 'postqueuejob', 'enabled': 'True', 'alarm': '50'}\n        self.server.create_import_hook(hook_postqueuejob, attr, hook_body)\n\n        a = {'queue_type': 'Execution', 'resources_min.ncpus': 1,\n             'enabled': 'True', 'started': 'False'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='specialq')\n        dflt_q = self.server.default_queue\n        a = {'queue_type': 'route',\n             'route_destinations': dflt_q + ',specialq',\n             'enabled': 'True', 'started': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='routeq')\n        a = {'resources_min.ncpus': 4}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, id=dflt_q)\n        j = Job(TEST_USER, attrs={ATTR_queue: 'routeq',\n                                  
'Resource_List.ncpus': 1})\n        jid = self.server.submit(j)\n        self.server.log_match(\"my job queue=> %s\" % 'routeq')\n        self.server.expect(JOB, {ATTR_queue: 'specialq'}, id=jid)\n        self.server.log_match(\"my job queue=> %s\" % 'specialq')\n\n    def test_postqueuejob_hook_with_movejob(self):\n        \"\"\"\n        Verify that a job can be moved to another queue than the one it was\n        originally submitted to and postqueuejob hook is executed\n        \"\"\"\n\n        hook_postqueuejob = \"postqueuejob_hook\"\n        attr = {'event': 'postqueuejob', 'enabled': 'True', 'alarm': '50'}\n        self.server.create_import_hook(hook_postqueuejob, attr, hook_body)\n\n        a = {'queue_type': 'Execution', 'enabled': 'True', 'started': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='solverq')\n        a = {'scheduling': 'False'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.log_match(\"Starting... postqueuejob\")\n        self.server.log_match(\"my job queue=> %s\" % 'workq')\n        self.server.log_match(\"Ending... postqueuejob\")\n        self.server.movejob(jid, 'solverq')\n        self.server.log_match(\"Starting... postqueuejob\")\n        self.server.log_match(\"my job queue=> %s\" % 'solverq')\n        self.server.log_match(\"Ending... 
postqueuejob\")\n        a = {'scheduling': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.server.expect(JOB, {ATTR_queue: 'solverq', 'job_state': 'R'},\n                           id=jid, attrop=PTL_AND)\n\n    def create_hook_and_submit_job(self, hook_script):\n        \"\"\"\n        Helper function to create a hook and submit a job\n        \"\"\"\n\n        hook_postqueuejob = \"postqueuejob_hook\"\n        attr = {'event': 'postqueuejob', 'enabled': 'True', 'alarm': '50'}\n\n        self.server.create_import_hook(hook_postqueuejob, attr, hook_script)\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        j_status = self.server.status(JOB, id=jid)[0]\n        return j_status\n\n    def test_altering_job_attribute_in_accepted_hook(self):\n        \"\"\"\n        Verify the postqueuejob hook can update the job attributes\n        (Resource_List.ncpus, project) when the hook is accepted.\n        \"\"\"\n\n        req_ncpus = '2'\n        req_project = 'ptl_test'\n        hook_script = \"\"\"\nimport pbs\nevent = pbs.event()\njob = event.job\njob.Resource_List[\"ncpus\"] = %s\njob.project = \"%s\"\nevent.accept()\n\"\"\"\n        j_status = self.create_hook_and_submit_job(\n            hook_script % (req_ncpus, req_project))\n        job_ncpus = j_status['Resource_List.ncpus']\n        job_project = j_status['project']\n        self.assertEqual(\n            req_ncpus,\n            job_ncpus,\n            \"Requested ncpus is not updated after postqueuejob \"\n            \"hook run\")\n        self.assertEqual(\n            req_project,\n            job_project,\n            \"Requested project is not updated after postqueuejob \"\n            \"hook run\")\n\n    def test_altering_job_attribute_in_rejected_hook(self):\n        \"\"\"\n        Verify the postqueuejob hook cannot update the job attributes\n        (Resource_List.ncpus, project) when the hook is rejected.\n        \"\"\"\n\n        req_ncpus = '2'\n        req_project = 
'ptl_test'\n        hook_script = \"\"\"\nimport pbs\nevent = pbs.event()\njob = event.job\njob.Resource_List[\"ncpus\"] = %s\njob.project = \"%s\"\nevent.reject()\n\"\"\"\n        j_status = self.create_hook_and_submit_job(\n            hook_script % (req_ncpus, req_project))\n        job_ncpus = j_status['Resource_List.ncpus']\n        job_project = j_status['project']\n        self.assertNotEqual(\n            req_ncpus,\n            job_ncpus,\n            \"Requested ncpus is same as job's ncpus\")\n        self.assertNotEqual(\n            req_project,\n            job_project,\n            \"Requested project is same as job's project\")\n\n    def test_setting_job_readonly_attr(self):\n        \"\"\"\n        Verify postqueuejob hook cannot set a job's read-only attribute\n        \"\"\"\n        req_substate = '50'\n        hook_script = \"\"\"\nimport pbs\nevent = pbs.event()\njob = event.job\njob.substate=%s\nevent.accept()\n\"\"\"\n        j_status = self.create_hook_and_submit_job(\n            hook_script % (req_substate))\n        log_msg = \"PBS server internal error (15011) in Error \" \\\n            \"evaluating Python script, job attribute 'substate' is \" \\\n            \"readonly\"\n        self.server.log_match(log_msg)\n        job_substate = j_status['substate']\n        self.assertNotEqual(\n            req_substate,\n            job_substate,\n            \"Requested substate was updated after postqueuejob \"\n            \"hook run\")\n\n    def test_postqueuejob_in_list_hook(self):\n        \"\"\"\n        Set a hook event to queuejob and postqueuejob\n        and test if it is listed successfully in qmgr\n        \"\"\"\n\n        hook_name = \"myhook\"\n        a = {'event': 'queuejob,postqueuejob', 'enabled': 'True'}\n        self.server.create_hook(hook_name, a)\n        attrs = {'event': 'queuejob,postqueuejob'}\n        rv = self.server.expect(HOOK, attrs, id=hook_name)\n        self.assertTrue(rv)\n"
  },
  {
    "path": "test/tests/functional/pbs_hook_set_attr.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\nhook_body_node_attr_alter = \"\"\"\nimport pbs\ne = pbs.event()\nvnl = pbs.event().vnode_list\nlocal_node = pbs.get_local_nodename()\nvnl[local_node].Mom = None\nvnl[local_node].Port = 123\n\"\"\"\n\n\nclass TestHookSetAttr(TestFunctional):\n\n    def test_node_ro_attr_hook(self):\n        \"\"\"\n        Try to alter read-only node attributes from a hook and check\n        that the attributes are protected against writes.\n        \"\"\"\n        hook_name = \"node_attr_ro\"\n        a = {'event': 'exechost_periodic',\n             'enabled': 'True',\n             'freq': 3}\n        rv = self.server.create_import_hook(\n            hook_name, a, hook_body_node_attr_alter, overwrite=True)\n        self.assertTrue(rv)\n\n        msg = 'Error 15003 setting attribute Mom in update from mom hook'\n        self.server.log_match(msg, starttime=time.time())\n\n        msg = 'Error 15003 setting attribute Port in update from mom hook'\n        self.server.log_match(msg, starttime=time.time())\n"
  },
  {
    "path": "test/tests/functional/pbs_hook_set_interrupt.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestHookInterrupt(TestFunctional):\n\n    \"\"\"\n    This suite contains a hook test to verify that PBS generates a\n    KeyboardInterrupt via the hook alarm on a long-running hook\n    \"\"\"\n\n    def test_hook_interrupt(self):\n        \"\"\"\n        Test hook interrupt\n        \"\"\"\n        hook_name = \"testhook\"\n        hook_body = \"\"\"\nimport pbs\nimport time\n\npbs.logmsg(pbs.LOG_DEBUG, \"TestHook Started\")\ntime.sleep(100000)\npbs.logmsg(pbs.LOG_DEBUG, \"TestHook Ended\")\n\"\"\"\n        a = {'event': 'runjob', 'enabled': 'True', 'alarm': '5'}\n        self.server.create_import_hook(hook_name, a, hook_body)\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': -1})\n        j = Job(TEST_USER)\n        st = time.time()\n        jid = self.server.submit(j)\n\n        self.server.log_match(\"TestHook Started\", starttime=st)\n\n        _msg = \"Not Running: PBS Error: request rejected\"\n        _msg += \" as filter hook '%s' got an alarm call.\" % hook_name\n        _msg += \" Please inform Admin\"\n        self.server.expect(JOB,\n                           {'job_state': 'Q', 'comment': _msg},\n                           id=jid, offset=5)\n\n        self.server.log_match(\"Hook;catch_hook_alarm;alarm call received\",\n                              starttime=st)\n        _msg1 = \"PBS server 
internal error (15011)\"\n        _msg1 += \" in Python script received a KeyboardInterrupt, \"\n        _msg2 = \"<class 'KeyboardInterrupt'>\"\n        _msg3 = \"<could not figure out the exception value>\"\n        self.server.log_match(_msg1 + _msg2, starttime=st)\n        self.server.log_match(_msg1 + _msg3, starttime=st)\n        self.server.log_match(\"Hook;%s;finished\" % hook_name, starttime=st)\n        _msg = \"alarm call while running runjob hook\"\n        _msg += \" '%s', request rejected\" % hook_name\n        self.server.log_match(_msg, starttime=st)\n        self.server.log_match(\"TestHook Ended\", existence=False, starttime=st)\n"
  },
  {
    "path": "test/tests/functional/pbs_hook_set_jobenv.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nimport os\nfrom tests.functional import *\nfrom ptl.utils.pbs_logutils import PBSLogUtils\n\n\nclass TestPbsHookSetJobEnv(TestFunctional):\n    \"\"\"\n    This test suite makes sure hooks properly\n    handle environment variables whose values contain special\n    characters, in particular newline (\\n), commas (,), semicolons (;),\n    single quotes ('), double quotes (\"), and backslashes.\n    PRE: Set up currently executing user's environment to have variables\n         whose values have the special characters.\n         Job A: Submit a job using the -V option (pass current environment)\n         where there are NO hooks in the system.\n         Introduce execjob_begin and execjob_launch hooks in the system.\n         Let the former update pbs.event().job.Variable_List while the latter\n         update pbs.event().env.\n         Job B: Submit a job using the -V option (pass current environment)\n         where there are now mom hooks in the system.\n    POST: Job A and Job B would see the same environment variables, with\n          Job B also seeing the changes made to the job by the 2 mom hooks.\n    \"\"\"\n\n    # List of environment variables not to compare between\n    # job ran without hooks, job ran with hooks.\n    exclude_env = []\n    env_nohook = {}\n    env_nohook_exclude = {}\n    env_hook = {}\n    env_hook_exclude = {}\n\n    def 
setUp(self):\n        \"\"\"\n        Set environment variables\n        \"\"\"\n        TestFunctional.setUp(self)\n        self.interactive = False\n        # Set environment variables with special characters\n        os.environ['TEST_COMMA'] = '1,2,3,4'\n        os.environ['TEST_RETURN'] = \"\"\"'3,\n4,\n5'\"\"\"\n        os.environ['TEST_SEMICOLON'] = ';'\n        os.environ['TEST_ENCLOSED'] = '\\',\\''\n        os.environ['TEST_COLON'] = ':'\n        os.environ['TEST_BACKSLASH'] = '\\\\'\n        os.environ['TEST_DQUOTE'] = '\"'\n        os.environ['TEST_DQUOTE2'] = 'happy days\"are\"here to stay'\n        os.environ['TEST_DQUOTE3'] = 'nothing compares\" to you'\n        os.environ['TEST_DQUOTE4'] = '\"music makes the people\"'\n        os.environ['TEST_DQUOTE5'] = 'music \"makes \\'the\\'\"people'\n        os.environ['TEST_DQUOTE6'] = 'lalaland\"'\n        os.environ['TEST_SQUOTE'] = '\\''\n        os.environ['TEST_SQUOTE2'] = 'happy\\'days'\n        os.environ['TEST_SQUOTE3'] = 'the days\\'are here now\\'then'\n        os.environ['TEST_SQUOTE4'] = '\\'the way that was\\''\n        os.environ['TEST_SQUOTE5'] = 'music \\'makes \"the\\'\"people'\n        os.environ['TEST_SQUOTE6'] = 'loving\\''\n        os.environ['TEST_SPECIAL'] = \"{}[]()~@#$%^&*!\"\n        os.environ['TEST_SPECIAL2'] = \"<dumb-test_text>\"\n\n        # List of environment variables not to compare between\n        # job ran without hooks, job ran with hooks.\n        self.exclude_env = ['PBS_NODEFILE']\n        self.exclude_env += ['PBS_JOBID']\n        self.exclude_env += ['PBS_JOBCOOKIE']\n        # Each job submitted by default gets a unique jobname\n        self.exclude_env += ['PBS_JOBNAME']\n        self.exclude_env += ['TMPDIR']\n        self.exclude_env += ['happy']\n\n        self.ATTR_V = 'Full_Variable_List'\n        api_to_cli.setdefault(self.ATTR_V, 'V')\n\n        # temporary files\n        fn = self.du.create_temp_file(prefix=\"job_out1\")\n        self.job_out1_tempfile = 
fn\n\n        fn = self.du.create_temp_file(prefix=\"job_out2\")\n        self.job_out2_tempfile = fn\n\n        fn = self.du.create_temp_file(prefix=\"job_out3\")\n        self.job_out3_tempfile = fn\n\n    def tearDown(self):\n        TestFunctional.tearDown(self)\n        try:\n            os.remove(self.job_out1_tempfile)\n            os.remove(self.job_out2_tempfile)\n            os.remove(self.job_out3_tempfile)\n            for _file in [self.job_out1_tempfile, self.job_out2_tempfile,\n                          self.job_out3_tempfile]:\n                rc = self.du.isfile(hostname=self.mom.shortname,\n                                    path=_file, sudo=True)\n                if rc:\n                    self.du.rm(self.mom.hostname, _file)\n        except OSError:\n            pass\n\n    def read_env(self, outputfile, ishook):\n        \"\"\"\n        Parse the output file and store the\n        variable list in a dictionary\n        \"\"\"\n        if (not self.du.is_localhost(self.mom.hostname)) and self.interactive:\n            srchost = self.mom.hostname\n            destpath = self.du.get_tempdir(self.server.hostname)\n            self.du.run_copy(srchost=srchost, src=outputfile, dest=destpath)\n\n        with open(outputfile) as fd:\n            pkey = \"\"\n            tmpenv = {}\n            penv = {}\n            penv_exclude = {}\n            for line in fd:\n                fields = line.split(\"=\", 1)\n                if (len(fields) == 2):\n                    pkey = fields[0]\n                    if pkey not in self.exclude_env:\n                        penv[pkey] = fields[1]\n                        tmpenv = penv\n                    else:\n                        penv_exclude[pkey] = fields[1]\n                        tmpenv = penv_exclude\n                elif pkey != \"\":\n                    # append to previous dictionary entry\n                    tmpenv[pkey] += fields[0]\n        if (ishook == \"hook\"):\n            self.env_hook 
= penv\n            self.env_hook_exclude = penv_exclude\n        else:\n            self.env_nohook = penv\n            self.env_nohook_exclude = penv_exclude\n\n    def common_log_match(self, daemon):\n        \"\"\"\n        Validate the env variable output in daemon logs\n        \"\"\"\n        logutils = PBSLogUtils()\n        logmsg = [r\"TEST_COMMA=1\\,2\\,3\\,4\",\n                  \"TEST_SEMICOLON=;\",\n                  r\"TEST_ENCLOSED=\\\\'\\,\\\\'\",\n                  \"TEST_COLON=:\",\n                  \"TEST_BACKSLASH=\\\\\\\\\",\n                  \"TEST_DQUOTE=\\\\\\\"\",\n                  \"TEST_DQUOTE2=happy days\\\\\\\"are\\\\\\\"here to stay\",\n                  \"TEST_DQUOTE3=nothing compares\\\\\\\" to you\",\n                  \"TEST_DQUOTE4=\\\\\\\"music makes the people\\\\\\\"\",\n                  \"TEST_DQUOTE5=music \\\\\\\"makes \\\\'the\\\\'\\\\\\\"people\",\n                  \"TEST_DQUOTE6=lalaland\\\\\\\"\",\n                  \"TEST_SQUOTE=\\\\'\",\n                  \"TEST_SQUOTE2=happy\\\\'days\",\n                  \"TEST_SQUOTE3=the days\\\\'are here now\\\\'then\",\n                  \"TEST_SQUOTE4=\\\\'the way that was\\\\'\",\n                  \"TEST_SQUOTE5=music \\\\'makes \\\\\\\"the\\\\'\\\\\\\"people\",\n                  \"TEST_SQUOTE6=loving\\\\'\",\n                  \"TEST_SPECIAL={}[]()~@#$%^&*!\",\n                  \"TEST_SPECIAL2=<dumb-test_text>\",\n                  r\"TEST_RETURN=\\\\'3\\,\",\n                  # Cannot add '\\n' here because '\\n' is not included in\n                  # the items of the list returned by log_lines(), (though\n                  # lines are split by '\\n')\n                  r\"4\\,\",\n                  \"5\\\\',\"]\n\n        if (daemon == \"mom\"):\n            self.logger.info(\"Matching in mom logs\")\n            logfile_type = self.mom\n            host = self.mom.hostname\n        elif (daemon == \"server\"):\n            self.logger.info(\"Matching in server 
logs\")\n            logfile_type = self.server\n            host = self.server.hostname\n        else:\n            self.logger.info(\"Provide a valid daemon name; server or mom\")\n            return\n        lines = None\n        ret_linenum = 0\n        search_msg = 'log match: searching for '\n        nomatch_msg = ' No match for '\n        for msg in logmsg:\n            for attempt in range(1, 61):\n                lines = self.server.log_lines(logfile_type,\n                                              starttime=self.server.ctime,\n                                              host=host, n='ALL')\n                match = logutils.match_msg(lines, msg=msg)\n                if match:\n                    # Don't want the test to pass if there are\n                    # unwanted matches for \"4\\,\" and \"5\\\\'.\n                    if msg == r\"TEST_RETURN=\\\\'3\\,\":\n                        ret_linenum = match[0]\n                    if (msg == r\"4\\,\" and match[0] != (ret_linenum - 1)) or \\\n                       (msg == \"5\\\\'\" and match[0] != (ret_linenum - 2)):\n                        pass\n                    else:\n                        self.logger.info(search_msg + msg + ' ... 
OK')\n                        break\n                else:\n                    self.logger.info(nomatch_msg + msg +\n                                     ' attempt ' + str(attempt))\n                time.sleep(0.5)\n            if match is None:\n                _msg = nomatch_msg + msg\n                raise PtlLogMatchError(rc=1, rv=False, msg=_msg)\n\n    def common_validate(self):\n        \"\"\"\n        This is a common function to validate the\n        environment values with and without hook\n        \"\"\"\n\n        self.assertEqual(self.env_nohook, self.env_hook)\n        self.logger.info(\"Environment variables are same\"\n                         \" with and without hooks\")\n        match_str = self.env_hook['TEST_COMMA'].rstrip('\\n')\n        self.assertEqual(os.environ['TEST_COMMA'], match_str)\n        self.logger.info(\n            \"TEST_COMMA matched - \" + os.environ['TEST_COMMA'] +\n            \" == \" + match_str)\n        self.assertEqual(os.environ['TEST_RETURN'],\n                         self.env_hook['TEST_RETURN'].rstrip('\\n'))\n        self.logger.info(\n            \"TEST_RETURN matched - \" + os.environ['TEST_RETURN'] +\n            \" == \" + self.env_hook['TEST_RETURN'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_SEMICOLON'],\n                         self.env_hook['TEST_SEMICOLON'].rstrip('\\n'))\n        self.logger.info(\n            \"TEST_SEMICOLON matched - \" + os.environ['TEST_SEMICOLON'] +\n            \" == \" + self.env_hook['TEST_SEMICOLON'].rstrip('\\n'))\n        self.assertEqual(\n            os.environ['TEST_ENCLOSED'],\n            self.env_hook['TEST_ENCLOSED'].rstrip('\\n'))\n        self.logger.info(\n            \"TEST_ENCLOSED matched - \" + os.environ['TEST_ENCLOSED'] +\n            \" == \" + self.env_hook['TEST_ENCLOSED'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_COLON'],\n                         self.env_hook['TEST_COLON'].rstrip('\\n'))\n        
self.logger.info(\"TEST_COLON matched - \" + os.environ['TEST_COLON'] +\n                         \" == \" + self.env_hook['TEST_COLON'].rstrip('\\n'))\n        self.assertEqual(\n            os.environ['TEST_BACKSLASH'],\n            self.env_hook['TEST_BACKSLASH'].rstrip('\\n'))\n        self.logger.info(\n            \"TEST_BACKSLASH matched - \" + os.environ['TEST_BACKSLASH'] +\n            \" == \" + self.env_hook['TEST_BACKSLASH'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_DQUOTE'],\n                         self.env_hook['TEST_DQUOTE'].rstrip('\\n'))\n        self.logger.info(\"TEST_DQUOTE matched - \" +\n                         os.environ['TEST_DQUOTE'] +\n                         \" == \" + self.env_hook['TEST_DQUOTE'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_DQUOTE2'],\n                         self.env_hook['TEST_DQUOTE2'].rstrip('\\n'))\n        self.logger.info(\"TEST_DQUOTE2 matched - \" +\n                         os.environ['TEST_DQUOTE2'] +\n                         \" == \" + self.env_hook['TEST_DQUOTE2'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_DQUOTE3'],\n                         self.env_hook['TEST_DQUOTE3'].rstrip('\\n'))\n        self.logger.info(\"TEST_DQUOTE3 matched - \" +\n                         os.environ['TEST_DQUOTE3'] +\n                         \" == \" + self.env_hook['TEST_DQUOTE3'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_DQUOTE4'],\n                         self.env_hook['TEST_DQUOTE4'].rstrip('\\n'))\n        self.logger.info(\"TEST_DQUOTE4 matched - \" +\n                         os.environ['TEST_DQUOTE4'] +\n                         \" == \" + self.env_hook['TEST_DQUOTE4'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_DQUOTE5'],\n                         self.env_hook['TEST_DQUOTE5'].rstrip('\\n'))\n        self.logger.info(\"TEST_DQUOTE5 matched - \" +\n                         os.environ['TEST_DQUOTE5'] +\n                         \" == \" + 
self.env_hook['TEST_DQUOTE5'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_DQUOTE6'],\n                         self.env_hook['TEST_DQUOTE6'].rstrip('\\n'))\n        self.logger.info(\"TEST_DQUOTE6 matched - \" +\n                         os.environ['TEST_DQUOTE6'] +\n                         \" == \" + self.env_hook['TEST_DQUOTE6'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_SQUOTE'],\n                         self.env_hook['TEST_SQUOTE'].rstrip('\\n'))\n        self.logger.info(\"TEST_SQUOTE matched - \" + os.environ['TEST_SQUOTE'] +\n                         \" == \" + self.env_hook['TEST_SQUOTE'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_SQUOTE2'],\n                         self.env_hook['TEST_SQUOTE2'].rstrip('\\n'))\n        self.logger.info(\"TEST_SQUOTE2 matched - \" +\n                         os.environ['TEST_SQUOTE2'] +\n                         \" == \" + self.env_hook['TEST_SQUOTE2'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_SQUOTE3'],\n                         self.env_hook['TEST_SQUOTE3'].rstrip('\\n'))\n        self.logger.info(\"TEST_SQUOTE3 matched - \" +\n                         os.environ['TEST_SQUOTE3'] +\n                         \" == \" + self.env_hook['TEST_SQUOTE3'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_SQUOTE4'],\n                         self.env_hook['TEST_SQUOTE4'].rstrip('\\n'))\n        self.logger.info(\"TEST_SQUOTE4 matched - \" +\n                         os.environ['TEST_SQUOTE4'] +\n                         \" == \" + self.env_hook['TEST_SQUOTE4'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_SQUOTE5'],\n                         self.env_hook['TEST_SQUOTE5'].rstrip('\\n'))\n        self.logger.info(\"TEST_SQUOTE5 matched - \" +\n                         os.environ['TEST_SQUOTE5'] +\n                         \" == \" + self.env_hook['TEST_SQUOTE5'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_SQUOTE6'],\n                       
  self.env_hook['TEST_SQUOTE6'].rstrip('\\n'))\n        self.logger.info(\"TEST_SQUOTE6 matched - \" +\n                         os.environ['TEST_SQUOTE6'] +\n                         \" == \" + self.env_hook['TEST_SQUOTE6'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_SPECIAL'],\n                         self.env_hook['TEST_SPECIAL'].rstrip('\\n'))\n        self.logger.info(\"TEST_SPECIAL matched - \" +\n                         os.environ['TEST_SPECIAL'] +\n                         \" == \" + self.env_hook['TEST_SPECIAL'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_SPECIAL2'],\n                         self.env_hook['TEST_SPECIAL2'].rstrip('\\n'))\n        self.logger.info(\"TEST_SPECIAL2 matched - \" +\n                         os.environ['TEST_SPECIAL2'] +\n                         \" == \" + self.env_hook['TEST_SPECIAL2'].rstrip('\\n'))\n\n    def create_and_submit_job(self, user=None, attribs=None, content=None,\n                              content_interactive=None, preserve_env=False):\n        \"\"\"\n        Create the job object and submit it to the server as 'user',\n        with attribute list 'attribs', script 'content' or\n        'content_interactive', and 'preserve_env' set for\n        interactive jobs.\n        \"\"\"\n\n        # A user=None value means the job will be executed by the\n        # current user, whose environment is already set up\n        if attribs is None:\n            use_attribs = {}\n        else:\n            use_attribs = attribs\n        retjob = Job(username=user, attrs=use_attribs)\n\n        if content is not None:\n            retjob.create_script(body=content)\n        elif content_interactive is not None:\n            retjob.interactive_script = content_interactive\n            retjob.preserve_env = preserve_env\n\n        return self.server.submit(retjob)\n\n    @skipOnShasta\n    def test_begin_launch(self):\n        \"\"\"\n        Test to verify that job environment variables having special\n        
characters are not truncated with execjob_launch and\n        execjob_begin hook\n        \"\"\"\n\n        self.exclude_env += ['HAPPY']\n        self.exclude_env += ['happy']\n\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.walltime': 10,\n             self.ATTR_V: None}\n        script = ['env\\n']\n        script += ['sleep 5\\n']\n\n        # Submit a job without hooks in the system\n        jid = self.create_and_submit_job(attribs=a, content=script)\n        qstat = self.server.status(JOB, ATTR_o, id=jid)\n        job_outfile = qstat[0][ATTR_o].split(':')[1]\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid, offset=10)\n\n        # Read the env variables from job output\n        self.env_nohook = {}\n        self.env_nohook_exclude = {}\n        self.read_env(job_outfile, \"nohook\")\n\n        # Now start introducing hooks\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\ne.job.Variable_List[\"happy\"] = \"days\"\npbs.logmsg(pbs.LOG_DEBUG,\"Variable List is %s\" % (e.job.Variable_List,))\n\"\"\"\n        hook_name = \"begin\"\n        a2 = {'event': \"execjob_begin\", 'enabled': 'True', 'debug': 'True'}\n\n        rv = self.server.create_import_hook(\n            hook_name,\n            a2,\n            hook_body,\n            overwrite=True)\n        self.assertTrue(rv)\n\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\ne.env[\"HAPPY\"] = \"nights\"\n\"\"\"\n        hook_name = \"launch\"\n        a2 = {'event': \"execjob_launch\", 'enabled': 'True', 'debug': 'True'}\n\n        rv = self.server.create_import_hook(\n            hook_name,\n            a2,\n            hook_body,\n            overwrite=True)\n        self.assertTrue(rv)\n\n        # Submit a job with hooks in the system\n        jid2 = self.create_and_submit_job(attribs=a, content=script)\n        qstat = self.server.status(JOB, ATTR_o, id=jid2)\n        job_outfile = qstat[0][ATTR_o].split(':')[1]\n        self.server.expect(JOB, 
'queue', op=UNSET, id=jid2, offset=10)\n\n        self.env_hook = {}\n        self.env_hook_exclude = {}\n        self.read_env(job_outfile, \"hook\")\n\n        # Validate the values printed in job output file\n        self.assertTrue('HAPPY' not in self.env_nohook_exclude)\n        self.assertTrue('happy' not in self.env_nohook_exclude)\n        self.assertEqual(self.env_hook_exclude['HAPPY'], 'nights\\n')\n        self.assertEqual(self.env_hook_exclude['happy'], 'days\\n')\n        self.common_validate()\n\n        # Check the values in mom logs as well\n        self.common_log_match(\"mom\")\n\n    @skipOnShasta\n    def test_que(self):\n        \"\"\"\n        Test that the Variable_List does not change with and\n        without a queuejob hook\n        \"\"\"\n        self.exclude_env += ['happy']\n\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.walltime': 10}\n\n        script = ['#PBS -V']\n        script += ['env\\n']\n        script += ['sleep 5\\n']\n\n        # Submit a job without hooks in the system\n        jid = self.create_and_submit_job(attribs=a, content=script)\n        qstat = self.server.status(JOB, ATTR_o, id=jid)\n        job_outfile = qstat[0][ATTR_o].split(':')[1]\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid, offset=10)\n\n        # Read the env variable from job output file\n        self.env_nohook = {}\n        self.env_nohook_exclude = {}\n        self.read_env(job_outfile, \"nohook\")\n\n        # Now start introducing hooks\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\ne.job.Variable_List[\"happy\"] = \"days\"\npbs.logmsg(pbs.LOG_DEBUG,\"Variable List is %s\" % (e.job.Variable_List,))\n\"\"\"\n        hook_name = \"qjob\"\n        a2 = {'event': \"queuejob\", 'enabled': 'True', 'debug': 'True'}\n\n        rv = self.server.create_import_hook(\n            hook_name,\n            a2,\n            hook_body,\n            overwrite=True)\n        self.assertTrue(rv)\n\n        # Submit a 
job with hooks in the system\n        jid2 = self.create_and_submit_job(attribs=a, content=script)\n        qstat = self.server.status(JOB, ATTR_o, id=jid2)\n        job_outfile = qstat[0][ATTR_o].split(':')[1]\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid2, offset=10)\n\n        self.env_hook = {}\n        self.env_hook_exclude = {}\n        self.read_env(job_outfile, \"hook\")\n\n        # Validate the env values from job output file\n        # with and without queuejob hook\n        self.assertTrue('happy' not in self.env_nohook_exclude)\n        self.assertEqual(self.env_hook_exclude['happy'], 'days\\n')\n        self.common_validate()\n\n        self.common_log_match(\"server\")\n\n    @skipOnShasta\n    def test_execjob_epi(self):\n        \"\"\"\n        Test that Variable_List will contain environment variable\n        with commas, newline and all special characters even for\n        other mom hooks\n        \"\"\"\n        self.exclude_env += ['happy']\n\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.walltime': 10}\n        script = ['#PBS -V']\n        script += ['env\\n']\n        script += ['sleep 5\\n']\n\n        # Submit a job without hooks in the system\n        jid = self.create_and_submit_job(attribs=a, content=script)\n        qstat = self.server.status(JOB, ATTR_o, id=jid)\n        job_outfile = qstat[0][ATTR_o].split(':')[1]\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid, offset=10)\n\n        # Read the output file and parse the values\n        self.env_nohook = {}\n        self.env_nohook_exclude = {}\n        self.read_env(job_outfile, \"nohook\")\n\n        # Now start the hooks\n        hook_name = \"test_epi\"\n        hook_body = \"\"\"\nimport pbs\ne = pbs.event()\nj = e.job\nj.Variable_List[\"happy\"] = \"days\"\npbs.logmsg(pbs.LOG_DEBUG,\"Variable_List is %s\" % (j.Variable_List,))\n\"\"\"\n\n        a2 = {'event': \"execjob_epilogue\", 'enabled': \"true\", 'debug': 
\"true\"}\n\n        self.server.create_import_hook(\n            hook_name,\n            a2,\n            hook_body,\n            overwrite=True)\n\n        # Submit a job with hooks in the system\n        jid2 = self.create_and_submit_job(attribs=a, content=script)\n        qstat = self.server.status(JOB, ATTR_o, id=jid2)\n        job_outfile = qstat[0][ATTR_o].split(':')[1]\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid2, offset=10)\n\n        # read the output file for env with hooks\n        self.env_hook = {}\n        self.env_hook_exclude = {}\n        self.read_env(job_outfile, \"hook\")\n\n        # Validate\n        self.common_validate()\n\n        # Verify the env variables in logs too\n        self.common_log_match(\"mom\")\n\n    @skipOnShasta\n    def test_execjob_pro(self):\n        \"\"\"\n        Test that environment variables do not get truncated\n        with an execjob_prologue hook\n        \"\"\"\n\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.walltime': 10}\n        script = ['#PBS -V']\n        script += ['env\\n']\n        script += ['sleep 5\\n']\n\n        # Submit a job without hooks in the system\n        jid = self.create_and_submit_job(attribs=a, content=script)\n        qstat = self.server.status(JOB, ATTR_o, id=jid)\n        job_outfile = qstat[0][ATTR_o].split(':')[1]\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid, offset=10)\n\n        # read the output file for env without hook\n        self.env_nohook = {}\n        self.env_nohook_exclude = {}\n        self.read_env(job_outfile, \"nohook\")\n\n        # Now start the hooks\n        hook_name = \"test_pro\"\n        hook_body = \"\"\"\nimport pbs\ne = pbs.event()\nj = e.job\nj.Variable_List[\"happy\"] = \"days\"\npbs.logmsg(pbs.LOG_DEBUG,\"Variable_List is %s\" % (j.Variable_List,))\n\"\"\"\n        a2 = {'event': \"execjob_prologue\", 'enabled': \"true\", 'debug': \"true\"}\n\n        rv = self.server.create_import_hook(\n     
       hook_name,\n            a2,\n            hook_body,\n            overwrite=True)\n        self.assertTrue(rv)\n\n        # Submit a job with hooks in the system\n        jid2 = self.create_and_submit_job(attribs=a, content=script)\n        qstat = self.server.status(JOB, ATTR_o, id=jid2)\n        job_outfile = qstat[0][ATTR_o].split(':')[1]\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid2, offset=10)\n\n        # Read the job output file\n        self.env_hook = {}\n        self.env_hook_exclude = {}\n        self.read_env(job_outfile, \"hook\")\n\n        # Validate the env values with and without hook\n        self.common_validate()\n\n        # compare the values in mom_logs as well\n        self.common_log_match(\"mom\")\n\n    @skipOnShasta\n    @checkModule(\"pexpect\")\n    def test_interactive(self):\n        \"\"\"\n        Test that interactive jobs do not have truncated environment\n        variable list with execjob_launch hook\n        \"\"\"\n\n        self.interactive = True\n        self.exclude_env += ['happy']\n\n        # submit an interactive job without hook\n        cmd = 'env > ' + self.job_out1_tempfile\n        a = {ATTR_inter: '', self.ATTR_V: None}\n\n        interactive_script = [('hostname', '.*'), (cmd, '.*')]\n        jid = self.create_and_submit_job(\n            attribs=a,\n            content_interactive=interactive_script,\n            preserve_env=True)\n        # Once all commands sent and matched, job exits\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid, offset=10)\n\n        # read the environment list from the job without hook\n        self.env_nohook = {}\n        self.env_nohook_exclude = {}\n        self.read_env(self.job_out1_tempfile, \"nohook\")\n\n        # now do the same with the hook\n        hook_name = \"launch\"\n        hook_body = \"\"\"\nimport pbs\ne = pbs.event()\nj = e.job\nj.Variable_List[\"happy\"] = \"days\"\npbs.logmsg(pbs.LOG_DEBUG, \"Variable_List is %s\" % 
(j.Variable_List,))\n\"\"\"\n\n        a2 = {'event': \"execjob_launch\", 'enabled': 'true', 'debug': 'true'}\n        self.server.create_import_hook(hook_name, a2, hook_body)\n\n        # submit an interactive job with the hook\n        cmd = 'env > ' + self.job_out2_tempfile\n        interactive_script = [('hostname', '.*'), (cmd, '.*')]\n        jid2 = self.create_and_submit_job(\n            attribs=a,\n            content_interactive=interactive_script,\n            preserve_env=True)\n        # Once all commands sent and matched, job exits\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid2, offset=10)\n\n        # read the environment list from the job with the hook\n        self.env_hook = {}\n        self.env_hook_exclude = {}\n        self.read_env(self.job_out2_tempfile, \"hook\")\n\n        # validate the environment values\n        self.common_validate()\n\n        # verify the env values in logs\n        self.common_log_match(\"mom\")\n\n    @skipOnShasta\n    def test_no_hook(self):\n        \"\"\"\n        Test to verify that environment variables are\n        not truncated and also not modified by PBS when\n        no hook is present\n        \"\"\"\n\n        os.environ['BROL'] = r'hii\\\\\\haha'\n        os.environ['BROL1'] = \"\"\"'hii\nhaa'\"\"\"\n\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.walltime': 10}\n        script = ['#PBS -V']\n        script += ['env\\n']\n        script += ['sleep 5\\n']\n\n        # Submit a job without hooks in the system\n        jid = self.create_and_submit_job(attribs=a, content=script)\n        qstat = self.server.status(JOB, id=jid)\n        job_outfile = qstat[0]['Output_Path'].split(':')[1]\n        job_var = qstat[0]['Variable_List']\n        self.logger.info(\"job variable list is %s\" % job_var)\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid, offset=10)\n\n        # Read the env variable from job output file\n        self.env_nohook = {}\n        
self.env_nohook_exclude = {}\n        self.read_env(job_outfile, \"nohook\")\n\n        # Verify the job output against the submission environment\n        self.assertEqual(os.environ['TEST_COMMA'],\n                         self.env_nohook['TEST_COMMA'].rstrip('\\n'))\n        self.logger.info(\n            \"TEST_COMMA matched - \" + os.environ['TEST_COMMA'] +\n            \" == \" + self.env_nohook['TEST_COMMA'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_RETURN'],\n                         self.env_nohook['TEST_RETURN'].rstrip('\\n'))\n        self.logger.info(\n            \"TEST_RETURN matched - \" + os.environ['TEST_RETURN'] +\n            \" == \" + self.env_nohook['TEST_RETURN'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_SEMICOLON'],\n                         self.env_nohook['TEST_SEMICOLON'].rstrip('\\n'))\n        self.logger.info(\n            \"TEST_SEMICOLON matched - \" + os.environ['TEST_SEMICOLON'] +\n            \" == \" + self.env_nohook['TEST_SEMICOLON'].rstrip('\\n'))\n        self.assertEqual(\n            os.environ['TEST_ENCLOSED'],\n            self.env_nohook['TEST_ENCLOSED'].rstrip('\\n'))\n        self.logger.info(\n            \"TEST_ENCLOSED matched - \" + os.environ['TEST_ENCLOSED'] +\n            \" == \" + self.env_nohook['TEST_ENCLOSED'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_COLON'],\n                         self.env_nohook['TEST_COLON'].rstrip('\\n'))\n        self.logger.info(\"TEST_COLON matched - \" + os.environ['TEST_COLON'] +\n                         \" == \" + self.env_nohook['TEST_COLON'].rstrip('\\n'))\n        self.assertEqual(\n            os.environ['TEST_BACKSLASH'],\n            self.env_nohook['TEST_BACKSLASH'].rstrip('\\n'))\n        self.logger.info(\n            \"TEST_BACKSLASH matched - \" + os.environ['TEST_BACKSLASH'] +\n            \" == \" + self.env_nohook['TEST_BACKSLASH'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_DQUOTE'],\n                         
self.env_nohook['TEST_DQUOTE'].rstrip('\\n'))\n        self.logger.info(\"TEST_DQUOTE - \" + os.environ['TEST_DQUOTE'] +\n                         \" == \" + self.env_nohook['TEST_DQUOTE'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_DQUOTE2'],\n                         self.env_nohook['TEST_DQUOTE2'].rstrip('\\n'))\n        self.logger.info(\"TEST_DQUOTE2 - \" + os.environ['TEST_DQUOTE2'] +\n                         \" == \" + self.env_nohook['TEST_DQUOTE2'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_DQUOTE3'],\n                         self.env_nohook['TEST_DQUOTE3'].rstrip('\\n'))\n        self.logger.info(\"TEST_DQUOTE3 - \" + os.environ['TEST_DQUOTE3'] +\n                         \" == \" + self.env_nohook['TEST_DQUOTE3'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_DQUOTE4'],\n                         self.env_nohook['TEST_DQUOTE4'].rstrip('\\n'))\n        self.logger.info(\"TEST_DQUOTE4 - \" + os.environ['TEST_DQUOTE4'] +\n                         \" == \" + self.env_nohook['TEST_DQUOTE4'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_DQUOTE5'],\n                         self.env_nohook['TEST_DQUOTE5'].rstrip('\\n'))\n        self.logger.info(\"TEST_DQUOTE5 - \" + os.environ['TEST_DQUOTE5'] +\n                         \" == \" + self.env_nohook['TEST_DQUOTE5'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_DQUOTE6'],\n                         self.env_nohook['TEST_DQUOTE6'].rstrip('\\n'))\n        self.logger.info(\"TEST_DQUOTE6 - \" + os.environ['TEST_DQUOTE6'] +\n                         \" == \" + self.env_nohook['TEST_DQUOTE6'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_SQUOTE'],\n                         self.env_nohook['TEST_SQUOTE'].rstrip('\\n'))\n        self.logger.info(\"TEST_SQUOTE - \" + os.environ['TEST_SQUOTE'] +\n                         \" == \" + self.env_nohook['TEST_SQUOTE'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_SQUOTE2'],\n                  
       self.env_nohook['TEST_SQUOTE2'].rstrip('\\n'))\n        self.logger.info(\"TEST_SQUOTE2 - \" + os.environ['TEST_SQUOTE2'] +\n                         \" == \" + self.env_nohook['TEST_SQUOTE2'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_SQUOTE3'],\n                         self.env_nohook['TEST_SQUOTE3'].rstrip('\\n'))\n        self.logger.info(\"TEST_SQUOTE3 - \" + os.environ['TEST_SQUOTE3'] +\n                         \" == \" + self.env_nohook['TEST_SQUOTE3'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_SQUOTE4'],\n                         self.env_nohook['TEST_SQUOTE4'].rstrip('\\n'))\n        self.logger.info(\"TEST_SQUOTE4 - \" + os.environ['TEST_SQUOTE4'] +\n                         \" == \" + self.env_nohook['TEST_SQUOTE4'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_SQUOTE5'],\n                         self.env_nohook['TEST_SQUOTE5'].rstrip('\\n'))\n        self.logger.info(\"TEST_SQUOTE5 - \" + os.environ['TEST_SQUOTE5'] +\n                         \" == \" + self.env_nohook['TEST_SQUOTE5'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_SQUOTE6'],\n                         self.env_nohook['TEST_SQUOTE6'].rstrip('\\n'))\n        self.logger.info(\"TEST_SQUOTE6 - \" + os.environ['TEST_SQUOTE6'] +\n                         \" == \" + self.env_nohook['TEST_SQUOTE6'].rstrip('\\n'))\n        self.assertEqual(os.environ['BROL'],\n                         self.env_nohook['BROL'].rstrip('\\n'))\n        self.logger.info(\"BROL - \" + os.environ['BROL'] + \" == \" +\n                         self.env_nohook['BROL'].rstrip('\\n'))\n        self.assertEqual(os.environ['BROL1'],\n                         self.env_nohook['BROL1'].rstrip('\\n'))\n        self.logger.info(\"BROL1 - \" + os.environ['BROL1'] + \" == \" +\n                         self.env_nohook['BROL1'].rstrip('\\n'))\n\n        # match the values in qstat -f Variable_List\n        # Following is blocked on PTL bug PP-1008\n\n        # 
self.assertTrue(\"TEST_COMMA=1\\,2\\,3\\,4\" in job_var)\n        # self.assertTrue(\"TEST_SEMICOLON=\\;\" in job_var)\n        # self.assertTrue(\"TEST_COLON=:\" in job_var)\n        # self.assertTrue(\"TEST_DQUOTE=\\\"\" in job_var)\n        # self.assertTrue(\"TEST_SQUOTE=\\'\" in job_var)\n        # self.assertTrue(\"TEST_BACKSLASH=\\\\\" in job_var)\n        # self.assertTrue(\"BROL=hii\\\\\\\\\\\\haha\" in job_var)\n        # self.assertTrue(\"TEST_ENCLOSED=\\,\" in job_var)\n        # self.assertTrue(\"BROL1=hii\\nhaa\" in job_var)\n        # self.assertTrue(\"TEST_RETURN=3\\,\\n4\\,\\n5\\,\" in job_var)\n\n    @skipOnShasta\n    @checkModule(\"pexpect\")\n    def test_interactive_no_hook(self):\n        \"\"\"\n        Test to verify that environment variable values\n        are not truncated or wrongly escaped within a\n        job even when there is no hook present\n        \"\"\"\n\n        self.interactive = True\n        os.environ['BROL'] = r'hii\\\\\\haha'\n        os.environ['BROL1'] = \"\"\"'hii\nhaa'\"\"\"\n\n        # submit an interactive job without hook\n        cmd = 'env > ' + self.job_out3_tempfile\n        a = {ATTR_inter: '', self.ATTR_V: None}\n        interactive_script = [('hostname', '.*'), (cmd, '.*')]\n        jid = self.create_and_submit_job(\n            attribs=a,\n            content_interactive=interactive_script,\n            preserve_env=True)\n        # Once all commands sent and matched, job exits\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid, offset=10)\n\n        # read the environment list from the job without hook\n        self.env_nohook = {}\n        self.env_nohook_exclude = {}\n        self.read_env(self.job_out3_tempfile, \"nohook\")\n\n        # Verify the job output against the submission environment\n        self.logger.info(\"job Variable list is \")\n        self.assertEqual(os.environ['TEST_COMMA'],\n                         self.env_nohook['TEST_COMMA'].rstrip('\\n'))\n        self.logger.info(\n            
\"TEST_COMMA matched - \" + os.environ['TEST_COMMA'] +\n            \" == \" + self.env_nohook['TEST_COMMA'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_RETURN'],\n                         self.env_nohook['TEST_RETURN'].rstrip('\\n'))\n        self.logger.info(\n            \"TEST_RETURN matched - \" + os.environ['TEST_RETURN'] +\n            \" == \" + self.env_nohook['TEST_RETURN'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_SEMICOLON'],\n                         self.env_nohook['TEST_SEMICOLON'].rstrip('\\n'))\n        self.logger.info(\n            \"TEST_SEMICOLON matched - \" + os.environ['TEST_SEMICOLON'] +\n            \" == \" + self.env_nohook['TEST_SEMICOLON'].rstrip('\\n'))\n        self.assertEqual(\n            os.environ['TEST_ENCLOSED'],\n            self.env_nohook['TEST_ENCLOSED'].rstrip('\\n'))\n        self.logger.info(\n            \"TEST_ENCLOSED matched - \" + os.environ['TEST_ENCLOSED'] +\n            \" == \" + self.env_nohook['TEST_ENCLOSED'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_COLON'],\n                         self.env_nohook['TEST_COLON'].rstrip('\\n'))\n        self.logger.info(\"TEST_COLON matched - \" + os.environ['TEST_COLON'] +\n                         \" == \" + self.env_nohook['TEST_COLON'].rstrip('\\n'))\n        self.assertEqual(\n            os.environ['TEST_BACKSLASH'],\n            self.env_nohook['TEST_BACKSLASH'].rstrip('\\n'))\n        self.logger.info(\n            \"TEST_BACKSLASH matched - \" + os.environ['TEST_BACKSLASH'] +\n            \" == \" + self.env_nohook['TEST_BACKSLASH'].rstrip('\\n'))\n        self.assertEqual(os.environ['TEST_DQUOTE'],\n                         self.env_nohook['TEST_DQUOTE'].rstrip('\\n'))\n        self.logger.info(\"TEST_DQUOTE - \" + os.environ['TEST_DQUOTE'] +\n                         \" == \" + self.env_nohook['TEST_DQUOTE'].rstrip('\\n'))\n        self.logger.info(\"TEST_DQUOTE2 - \" + os.environ['TEST_DQUOTE2'] +\n                   
      \" == \" + self.env_nohook['TEST_DQUOTE2'].rstrip('\\n'))\n        self.logger.info(\"TEST_DQUOTE3 - \" + os.environ['TEST_DQUOTE3'] +\n                         \" == \" + self.env_nohook['TEST_DQUOTE3'].rstrip('\\n'))\n        self.logger.info(\"TEST_DQUOTE4 - \" + os.environ['TEST_DQUOTE4'] +\n                         \" == \" + self.env_nohook['TEST_DQUOTE4'].rstrip('\\n'))\n        self.logger.info(\"TEST_DQUOTE5 - \" + os.environ['TEST_DQUOTE5'] +\n                         \" == \" + self.env_nohook['TEST_DQUOTE5'].rstrip('\\n'))\n        self.logger.info(\"TEST_DQUOTE6 - \" + os.environ['TEST_DQUOTE6'] +\n                         \" == \" + self.env_nohook['TEST_DQUOTE6'].rstrip('\\n'))\n        self.logger.info(\"TEST_SQUOTE - \" + os.environ['TEST_SQUOTE'] +\n                         \" == \" + self.env_nohook['TEST_SQUOTE'].rstrip('\\n'))\n        self.logger.info(\"TEST_SQUOTE2 - \" + os.environ['TEST_SQUOTE2'] +\n                         \" == \" + self.env_nohook['TEST_SQUOTE2'].rstrip('\\n'))\n        self.logger.info(\"TEST_SQUOTE3 - \" + os.environ['TEST_SQUOTE3'] +\n                         \" == \" + self.env_nohook['TEST_SQUOTE3'].rstrip('\\n'))\n        self.logger.info(\"TEST_SQUOTE4 - \" + os.environ['TEST_SQUOTE4'] +\n                         \" == \" + self.env_nohook['TEST_SQUOTE4'].rstrip('\\n'))\n        self.logger.info(\"TEST_SQUOTE5 - \" + os.environ['TEST_SQUOTE5'] +\n                         \" == \" + self.env_nohook['TEST_SQUOTE5'].rstrip('\\n'))\n        self.logger.info(\"TEST_SQUOTE6 - \" + os.environ['TEST_SQUOTE6'] +\n                         \" == \" + self.env_nohook['TEST_SQUOTE6'].rstrip('\\n'))\n        self.assertEqual(os.environ['BROL'],\n                         self.env_nohook['BROL'].rstrip('\\n'))\n        self.logger.info(\"BROL - \" + os.environ['BROL'] + \" == \" +\n                         self.env_nohook['BROL'].rstrip('\\n'))\n        self.assertEqual(os.environ['BROL1'],\n                         
self.env_nohook['BROL1'].rstrip('\\n'))\n        self.logger.info(\"BROL1 - \" + os.environ['BROL1'] + \" == \" +\n                         self.env_nohook['BROL1'].rstrip('\\n'))\n\n    @skipOnShasta\n    def test_execjob_epi2(self):\n        \"\"\"\n        Test that Variable_List will contain environment variables\n        with commas, newlines and all special characters for a job\n        that has been recovered from a prematurely killed mom. This\n        is a test from an execjob_epilogue hook's view.\n        PRE: Set up currently executing user's environment to have variables\n             whose values have the special characters.\n             Submit a job using the -V option (pass current environment)\n             where there is an execjob_epilogue hook that references\n             Variable_List value.\n             Now kill -9 pbs_mom and then restart it.\n             This causes pbs_mom to read in job data from the *.JB file on\n             disk, and pbs_mom immediately kills the job causing\n             execjob_epilogue hook to execute.\n        POST: The epilogue hook should see the proper value of the\n              Variable_List.\n        \"\"\"\n\n        msg = \"skipped due to issue: \"\n        msg += \"PTL failed to parse env variable\"\n        msg += \" when qstat has multiline variable/attr value.\"\n        self.skipTest(msg)\n\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.walltime': 60}\n        j = Job(attrs=a)\n        script = ['#PBS -V']\n        script += ['env\\n']\n        script += ['sleep 30\\n']\n        j.create_script(body=script)\n\n        # Now create/start the hook\n        hook_name = \"test_epi\"\n        hook_body = \"\"\"\nimport pbs\nimport time\ne = pbs.event()\nj = e.job\npbs.logmsg(pbs.LOG_DEBUG,\"Variable_List is %s\" % (j.Variable_List,))\npbs.logmsg(pbs.LOG_DEBUG,\n    \"PBS_O_LOGNAME is %s\" % j.Variable_List[\"PBS_O_LOGNAME\"])\n\"\"\"\n\n        a = {'event': 
\"execjob_epilogue\", 'enabled': \"true\", 'debug': \"true\"}\n\n        self.server.create_import_hook(\n            hook_name,\n            a,\n            hook_body,\n            overwrite=True)\n\n        # Submit a job with hooks in the system\n        jid = self.server.submit(j)\n\n        # Wait for the job to start running.\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n        # kill -9 mom\n        self.mom.signal('-KILL')\n\n        # now restart mom\n        self.mom.start()\n\n        self.mom.log_match(\"Restart sent to server\")\n\n        # Verify the env variables are seen in logs\n        self.common_log_match(\"mom\")\n        self.mom.log_match(\n            \"PBS_O_LOGNAME is %s\" % (self.du.get_current_user()))\n"
  },
  {
    "path": "test/tests/functional/pbs_hook_set_nonexist.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nfrom tests.functional import *\n\n\nclass TestHookSetNonExist(TestFunctional):\n\n    def test_set_on_nonexist_hook(self):\n        try:\n            a = {ATTR_enable: True}\n            self.server.manager(MGR_CMD_SET, MGR_OBJ_HOOK, a, id=\"test\")\n        except PbsManagerError:\n            pass\n        else:\n            _msg = \"Mgr set operation should fail on a nonexistent hook\"\n            self.fail(_msg)\n"
  },
  {
    "path": "test/tests/functional/pbs_hook_timeout.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport os\nfrom tests.functional import *\nfrom time import sleep\n\n\n@requirements(num_moms=3)\nclass TestHookTimeout(TestFunctional):\n    \"\"\"\n    Test to make sure hooks are resent to moms that don't ack when\n    the hooks are sent\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n\n        if len(self.moms) != 3:\n            self.skip_test('Test requires 3 moms, use -p <moms>')\n\n        self.momA = self.moms.values()[0]\n        self.momB = self.moms.values()[1]\n        self.momC = self.moms.values()[2]\n        self.momA.delete_vnode_defs()\n        self.momB.delete_vnode_defs()\n        self.momC.delete_vnode_defs()\n\n        self.hostA = self.momA.shortname\n        self.hostB = self.momB.shortname\n        self.hostC = self.momC.shortname\n\n        self.server.manager(MGR_CMD_DELETE, NODE, None, \"\")\n\n        self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostA)\n\n        self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostB)\n\n        self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostC)\n        for mom in self.moms.values():\n            self.server.expect(NODE, {'state': 'free'}, id=mom.shortname)\n\n    def timeout_messages(self, num_msg=1, starttime=None):\n        msg_found = None\n        for count in range(10):\n            sleep(30)\n            allmatch_msg = self.server.log_match(\n       
         \"Timing out previous send of mom hook updates \",\n                max_attempts=1, starttime=starttime, allmatch=True)\n            if len(allmatch_msg) >= num_msg:\n                msg_found = allmatch_msg\n                break\n        self.assertIsNotNone(msg_found,\n                             msg=\"Didn't get expected timeout messages\")\n\n    def test_hook_send(self):\n        \"\"\"\n        Test when the server doesn't receive an ACK from a mom for\n        sending hooks he resends them\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 2047})\n        timeout_max_attempt = 7\n\n        # Make momB unresponsive\n        self.logger.info(\"Stopping MomB\")\n        self.momB.signal(\"-STOP\")\n\n        start_time = time.time()\n\n        hook_body = \"import pbs\\n\"\n        a = {'event': 'execjob_epilogue', 'enabled': 'True'}\n\n        self.server.create_hook(\"test\", a)\n        self.server.import_hook(\"test\", hook_body)\n\n        # First batch of hook update is for the *.HK files\n        self.server.log_match(\n            \"Timing out previous send of mom hook updates \", n=600,\n            max_attempts=timeout_max_attempt, interval=30,\n            starttime=start_time)\n\n        # sent hook control file\n        for h in [self.hostA, self.hostB, self.hostC]:\n            hfile = os.path.join(self.server.pbs_conf['PBS_HOME'],\n                                 \"server_priv\", \"hooks\", \"test.HK\")\n            if h != self.hostB:\n                exist = True\n            else:\n                exist = False\n            self.server.log_match(\n                \".*successfully sent hook file %s to %s.*\" %\n                (hfile, h), max_attempts=5, interval=1,\n                regexp=True, existence=exist,\n                starttime=start_time)\n\n        # Second batch of hook update is for the *.PY files + resend of\n        # *.HK file to momB\n        self.timeout_messages(2, start_time)\n     
   # Verify the hook content file was sent\n        for h in [self.hostA, self.hostB, self.hostC]:\n            hfile = os.path.join(self.server.pbs_conf['PBS_HOME'],\n                                 \"server_priv\", \"hooks\", \"test.PY\")\n            if h != self.hostB:\n                exist = True\n            else:\n                exist = False\n\n            self.server.log_match(\n                \".*successfully sent hook file %s to %s.*\" %\n                (hfile, h), max_attempts=3, interval=1,\n                regexp=True, existence=exist,\n                starttime=start_time)\n\n        # Now check to make sure moms have received the hook files\n        for m in [self.momA, self.momB, self.momC]:\n            if m != self.momB:\n                exist = True\n            else:\n                exist = False\n            m.log_match(\n                \"test.HK;copy hook-related file request received\",\n                regexp=True, max_attempts=10, interval=1,\n                existence=exist, starttime=start_time)\n\n            m.log_match(\n                \"test.PY;copy hook-related file request received\",\n                regexp=True, max_attempts=10, interval=1,\n                existence=exist, starttime=start_time)\n\n        # Ensure that hook send updates are retried for\n        # the *.HK and *.PY file to momB\n        self.timeout_messages(3, start_time)\n        # Submit a job, it should still run\n        a = {'Resource_List.select': '3:ncpus=1',\n             'Resource_List.place': 'scatter'}\n        j1 = Job(TEST_USER, attrs=a)\n        j1id = self.server.submit(j1)\n\n        # Wait for the job to start running.\n        a = {ATTR_state: (EQ, 'R'), ATTR_substate: (EQ, 41)}\n        self.server.expect(JOB, a, attrop=PTL_AND, id=j1id)\n\n        self.server.log_match(\n            \"%s;vnode %s.*parent mom.*has a pending copy hook \"\n            \"or delete hook request.*\" % (j1id, self.hostB),\n            max_attempts=5, interval=1, regexp=True,\n 
           starttime=start_time)\n\n    def tearDown(self):\n        self.momB.signal(\"-CONT\")\n        TestFunctional.tearDown(self)\n"
  },
  {
    "path": "test/tests/functional/pbs_hook_unset_res.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\nhook_body_modifyjob = \"\"\"\nimport pbs\ne = pbs.event()\nj = e.job\nselect = \"1:ncpus=1:mem=10m\"\nj.Resource_List['ncpus'] = None\nj.Resource_List['select'] = pbs.select(select)\nj.comment = \"Modified this job\"\n\"\"\"\n\nhook_body_node_res_unset = \"\"\"\nimport pbs\ne = pbs.event()\nvnl = pbs.event().vnode_list\nlocal_node = pbs.get_local_nodename()\nvnl[local_node].resources_available[\"foo\"] = None\n\"\"\"\n\n\nclass TestHookUnsetRes(TestFunctional):\n\n    def test_modifyjob_hook(self):\n        \"\"\"\n        Unsetting ncpus, that is ['ncpus'] = None, in modifyjob hook\n        \"\"\"\n        hook_name = \"myhook\"\n        a = {'event': 'modifyjob', 'enabled': 'True'}\n        rv = self.server.create_import_hook(\n            hook_name, a, hook_body_modifyjob, overwrite=True)\n        self.assertTrue(rv)\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 2047})\n        j = Job(TEST_USER, attrs={\n                'Resource_List.select': '1:ncpus=1', ATTR_h: None})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'H'}, id=jid)\n        self.server.alterjob(jid, {'Resource_List.ncpus': '2'})\n\n    def test_node_res_unset_hook(self):\n        \"\"\"\n        Unsetting custom node resource via hook and test\n        the resource can be set again on the node.\n  
      \"\"\"\n        a = {'type': 'string', 'flag': 'h'}\n        r = 'foo'\n        self.server.manager(MGR_CMD_CREATE, RSC, a, id=r)\n\n        vnode = self.mom.shortname\n        self.server.manager(\n            MGR_CMD_SET, NODE,\n            {'resources_available.foo': 'bar'},\n            id=vnode,\n            runas=ROOT_USER)\n\n        hook_name = \"node_res_unset\"\n        a = {'event': 'exechost_periodic',\n             'enabled': 'True',\n             'freq': 10}\n        rv = self.server.create_import_hook(\n            hook_name, a, hook_body_node_res_unset, overwrite=True)\n        self.assertTrue(rv)\n\n        msg = 'resource resources_available.foo= per mom hook request'\n        self.server.log_match(msg, starttime=time.time())\n\n        rc = self.server.manager(\n            MGR_CMD_SET, NODE,\n            {'resources_available.foo': 'bar'},\n            id=vnode,\n            runas=ROOT_USER)\n        self.assertEqual(rc, 0)\n"
  },
  {
    "path": "test/tests/functional/pbs_hooksmoketest.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nfrom tests.functional import *\n\n\n@tags('hooks', 'smoke')\nclass TestHookSmokeTest(TestFunctional):\n    \"\"\"\n    Hooks Smoke Test\n    \"\"\"\n    hook_name = \"test_hook\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        a = {'log_events': 2047, 'scheduling': 'False'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        self.script = []\n        self.script += ['echo Hello World\\n']\n        self.script += ['%s 30\\n' % (self.mom.sleep_cmd)]\n        if self.du.get_platform() == \"cray\" or \\\n           self.du.get_platform() == \"craysim\":\n            self.script += ['aprun -b -B /bin/sleep 10']\n\n    def check_hk_file(self, hook_name, existence=False):\n        \"\"\"\n        Function to check existence of server's hook\n        directory, name of the .HK file and path of HK file\n        \"\"\"\n        conf = self.du.parse_pbs_config()\n        pbs_home = conf['PBS_HOME']\n        server_hooks_dir = os.path.join(pbs_home, \"server_priv\", \"hooks\")\n        rc = self.du.isdir(hostname=self.server.hostname,\n                           path=server_hooks_dir, sudo=True)\n        msg = \"Dir '%s' not present\" % server_hooks_dir\n        self.assertEqual(rc, True, msg)\n        self.logger.info(\"As expected dir '%s' present\" % server_hooks_dir)\n        hk_file = hook_name + \".HK\"\n        hk_file_location = 
os.path.join(server_hooks_dir, hk_file)\n        self.logger.info(\"Check existence of .HK file\")\n        count = 2\n        if existence:\n            count = 10\n            msg = \"As expected file '%s' is present\" % hk_file\n            msg += \" in '%s' directory\" % server_hooks_dir\n            _msg = \"File '%s' is not present\" % hk_file\n            _msg += \" in '%s' directory\" % server_hooks_dir\n        else:\n            msg = \"As expected file '%s' is not present\" % hk_file\n            msg += \" in '%s' directory\" % server_hooks_dir\n            _msg = \"File '%s' is present\" % hk_file\n            _msg += \" in '%s' directory\" % server_hooks_dir\n\n        # sleeping for some time as generation of *.HK file takes time\n        while True:\n            rc = self.du.isfile(hostname=self.server.hostname,\n                                path=hk_file_location,\n                                sudo=True)\n            count = count - 1\n            if rc or count == 0:\n                break\n            time.sleep(1)\n        self.assertEqual(rc, existence, _msg)\n        self.logger.info(msg)\n\n    def test_create_and_print_hook(self):\n        \"\"\"\n        Test create and print a hook\n        \"\"\"\n        attrs = {'event': 'queuejob'}\n        self.logger.info('Create a queuejob hook')\n        self.server.create_hook(self.hook_name, attrs)\n        self.check_hk_file(self.hook_name, existence=True)\n\n        attrs = {'type': 'site', 'enabled': 'true', 'event': 'queuejob',\n                 'alarm': 30, 'order': 1, 'debug': 'false',\n                 'user': 'pbsadmin', 'fail_action': 'none'}\n        self.logger.info('Verify hook values for test_hook')\n        rc = self.server.manager(MGR_CMD_LIST, HOOK,\n                                 id=self.hook_name)\n        self.assertEqual(rc, 0)\n        rv = self.server.expect(HOOK, attrs, id=self.hook_name)\n        self.assertTrue(rv)\n\n    def test_import_and_export_hook(self):\n     
   \"\"\"\n        Test import and export hook\n        \"\"\"\n        hook_body = \"\"\"import pbs\ne = pbs.event()\nj = e.job\nif j.Resource_List[\"walltime\"] is None:\n  e.reject(\"no walltime specified\")\nj.Resource_List[\"mem\"] = pbs.size(\"7mb\")\ne.accept()\"\"\"\n        imp_hook_body = hook_body.split('\\n')\n        exp_hook_body = imp_hook_body\n        attrs = {'event': 'queuejob'}\n        self.server.create_import_hook(self.hook_name, attrs, hook_body)\n        fn = self.du.create_temp_file(asuser=ROOT_USER)\n        hook_attrs = 'application/x-config default %s' % fn\n        rc = self.server.manager(MGR_CMD_EXPORT, HOOK, hook_attrs,\n                                 self.hook_name)\n        self.assertEqual(rc, 0)\n        # For Cray PTL does not run on the server host\n        if self.du.is_localhost(self.server.hostname):\n            cmd = \"export h test_hook application/x-python default\"\n        else:\n            cmd = \"'export h test_hook application/x-python default'\"\n        export_cmd = [os.path.join(self.server.pbs_conf['PBS_EXEC'], 'bin',\n                                   'qmgr'), '-c', cmd]\n        ret = self.du.run_cmd(self.server.hostname, export_cmd, sudo=True)\n        self.assertEqual(ret['out'], exp_hook_body,\n                         msg=\"Failed to get expected usage message\")\n\n    def test_enable_and_disable_hook(self):\n        \"\"\"\n        Test enable and disable a hook\n        \"\"\"\n        rc = self.server.manager(MGR_CMD_CREATE, HOOK, None, self.hook_name)\n        self.assertEqual(rc, 0)\n        self.server.manager(MGR_CMD_SET, HOOK, {\n                            'enabled': 0}, id=self.hook_name)\n        attrs = {'type': 'site', 'enabled': 'false', 'event': '\"\"',\n                 'alarm': 30, 'order': 1, 'debug': 'false',\n                 'user': 'pbsadmin', 'fail_action': 'none'}\n        self.logger.info('Verify hook values for test_hook')\n        rc = self.server.manager(MGR_CMD_LIST, 
HOOK,\n                                 id=self.hook_name)\n        self.assertEqual(rc, 0)\n        rv = self.server.expect(HOOK, attrs, id=self.hook_name)\n        self.assertTrue(rv)\n        self.server.manager(MGR_CMD_SET, HOOK, {\n                            'enabled': 1}, id=self.hook_name)\n        self.logger.info('Verify hook values for test_hook')\n        attrs['enabled'] = 'true'\n        rc = self.server.manager(MGR_CMD_LIST, HOOK,\n                                 id=self.hook_name)\n        self.assertEqual(rc, 0)\n        rv = self.server.expect(HOOK, attrs, id=self.hook_name)\n        self.assertTrue(rv)\n\n    def test_modify_hook(self):\n        \"\"\"\n        Test to modify a hook\n        \"\"\"\n        attrs = {'event': 'queuejob', 'alarm': 60,\n                 'enabled': 'false', 'order': 7}\n        self.logger.info('Create hook test_hook')\n        rv = self.server.create_hook(self.hook_name, attrs)\n        self.assertTrue(rv)\n        rc = self.server.manager(MGR_CMD_LIST, HOOK,\n                                 id=self.hook_name)\n        self.assertEqual(rc, 0)\n        rv = self.server.expect(HOOK, attrs, id=self.hook_name)\n        self.assertTrue(rv)\n        self.logger.info(\"Modify hook test_hook event\")\n        self.server.manager(MGR_CMD_SET, HOOK, {\n                            'event+': 'resvsub'}, id=self.hook_name)\n        self.logger.info('Verify hook values for test_hook')\n        attrs2 = {'event': 'queuejob,resvsub',\n                  'alarm': 60, 'enabled': 'false', 'order': 7}\n        rc = self.server.manager(MGR_CMD_LIST, HOOK,\n                                 id=self.hook_name)\n        self.assertEqual(rc, 0)\n        rv = self.server.expect(HOOK, attrs2, id=self.hook_name)\n        self.assertTrue(rv)\n        self.server.manager(MGR_CMD_SET, HOOK, {\n                            'event-': 'resvsub'}, id=self.hook_name)\n        self.logger.info('Verify hook values for test_hook')\n        rc = 
self.server.manager(MGR_CMD_LIST, HOOK,\n                                 id=self.hook_name)\n        self.assertEqual(rc, 0)\n        rv = self.server.expect(HOOK, attrs, id=self.hook_name)\n        self.assertTrue(rv)\n\n    def test_delete_hook(self):\n        \"\"\"\n        Test delete a hook\n        \"\"\"\n        rc = self.server.manager(MGR_CMD_CREATE, HOOK, None, self.hook_name)\n        self.assertEqual(rc, 0)\n        self.server.manager(MGR_CMD_DELETE, HOOK, id=self.hook_name)\n        # For Cray PTL does not run on the server host\n        if self.du.is_localhost(self.server.hostname):\n            cmd = \"l h test_hook\"\n        else:\n            cmd = \"'l h test_hook'\"\n        export_cmd = [os.path.join(self.server.pbs_conf['PBS_EXEC'], 'bin',\n                                   'qmgr'), '-c', cmd]\n        ret = self.du.run_cmd(self.server.hostname, export_cmd, sudo=True)\n        err_msg = []\n        err_msg.append(\"qmgr obj=test_hook svr=default: hook not found\")\n        err_msg.append(\"qmgr: hook error returned from server\")\n        for i in err_msg:\n            self.assertIn(i, ret['err'],\n                          msg=\"Failed to get expected error message\")\n        self.check_hk_file(self.hook_name)\n\n    def test_queuejob_hook(self):\n        \"\"\"\n        Test queuejob hook\n        \"\"\"\n        # Create a hook with event queuejob\n        hook_body = \"\"\"import pbs\nimport time\n\ne = pbs.event()\nj = e.job\nif not j.Resource_List[\"walltime\"]:\n\n        e.reject(\"No walltime specified. Master does not approve! 
;o)\")\n        # select resource\n        sel = pbs.select(\"1:ncpus=2\")\n        s = repr(sel)\nelse:\n        e.accept()\"\"\"\n\n        attrs = {'event': 'queuejob', 'enabled': 'True'}\n        rv = self.server.create_import_hook(self.hook_name, attrs, hook_body)\n        self.assertTrue(rv)\n        # As a user submit a job requesting walltime\n        submit_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        a = {'Resource_List.walltime': 30}\n        j1 = Job(TEST_USER, a)\n        j1.create_script(self.script)\n        jid1 = self.server.submit(j1, submit_dir=submit_dir)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid1)\n        # As a user submit a job without requesting walltime\n        # Job is denied with the message\n        _msg = \"qsub: No walltime specified. Master does not approve! ;o)\"\n        submit_dir = self.du.create_temp_dir(asuser=TEST_USER1)\n        j2 = Job(TEST_USER1)\n        j2.create_script(self.script)\n        try:\n            jid2 = self.server.submit(j2, submit_dir=submit_dir)\n        except PbsSubmitError as e:\n            self.assertEqual(\n                e.msg[0], _msg, msg=\"Did not get expected qsub err message\")\n            self.logger.info(\"Got expected qsub err message as %s\", e.msg[0])\n        # To handle delay in  ALPS reservation cancelation on cray simulator\n        # Deleting job explicitly\n        self.server.delete([jid1])\n\n    def test_modifyjob_hook(self):\n        \"\"\"\n        Test modifyjob hook\n        \"\"\"\n        # Create a hook with event modifyjob\n        hook_body = \"\"\"import pbs\n\ntry:\n e = pbs.event()\n r = e.resv\n\nexcept pbs.EventIncompatibleError:\n e.reject( \"Event is incompatible\")\"\"\"\n\n        a = {'eligible_time_enable': 'True', 'scheduling': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        attrs = {'event': 'modifyjob', 'enabled': 'True'}\n        rv = self.server.create_import_hook(self.hook_name, attrs, hook_body)\n   
     self.assertTrue(rv)\n        # As user submit a job j1\n        submit_dir = self.du.create_temp_dir(asuser=TEST_USER2)\n        j1 = Job(TEST_USER2)\n        j1.create_script(self.script)\n        jid1 = self.server.submit(j1, submit_dir=submit_dir)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        # qalter the job, qalter will fail with error\n        _msg = \"qalter: Event is incompatible \" + jid1\n        try:\n            self.server.alterjob(jid1, {ATTR_p: '5'})\n        except PbsAlterError as e:\n            self.assertEqual(\n                e.msg[0], _msg, msg=\"Did not get expected qalter err message\")\n            self.logger.info(\"Got expected qalter err message as %s\", e.msg[0])\n        # To handle delay in  ALPS reservation cancelation on cray simulator\n        # Deleting job explicitly\n        self.server.delete([jid1])\n\n    def test_resvsub_hook(self):\n        \"\"\"\n        Test resvsub hook\n        \"\"\"\n        # Create a hook with event resvsub\n        hook_body = \"\"\"import pbs\ne = pbs.event()\nr = e.resv\n\nr.Resource_List[\"place\"] = pbs.place(\"pack:freed\")\"\"\"\n\n        attrs = {'event': 'resvsub', 'enabled': 'True'}\n        rv = self.server.create_import_hook(self.hook_name, attrs, hook_body)\n        self.assertTrue(rv)\n        # Submit a reservation\n        now = int(time.time())\n        a = {'reserve_start': now + 10,\n             'reserve_end': now + 120}\n        r = Reservation(TEST_USER3, attrs=a)\n        # The reservation gets an error as\n        _msg = \"pbs_rsub: request rejected as filter hook \" + \"'\" + \\\n            self.hook_name + \"'\" + \\\n            \" encountered an exception. 
Please inform Admin\"\n        try:\n            rid = self.server.submit(r)\n        except PbsSubmitError as e:\n            self.assertEqual(\n                e.msg[0], _msg,\n                msg=\"Did not get expected pbs_rsub err message\")\n            self.logger.info(\n                \"Got expected pbs_rsub err message as %s\", e.msg[0])\n\n    def test_movejob_hook(self):\n        \"\"\"\n        Test movejob hook\n        \"\"\"\n        # Create testq\n        qname = 'testq'\n        err_msg = []\n        err_msg.append(\"qmgr obj=testq svr=default: Unknown queue\")\n        err_msg.append(\"qmgr: Error (15018) returned from server\")\n        try:\n            self.server.manager(MGR_CMD_DELETE, QUEUE, None, qname)\n        except PbsManagerError as e:\n            for i in err_msg:\n                self.assertIn(i, e.msg,\n                              msg=\"Failed to get expected error message\")\n            self.logger.info(\"Got expected qmgr err message as %s\", e.msg)\n\n        a = {'queue_type': 'Execution', 'enabled': 'True', 'started': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, qname)\n        # Create a hook with event movejob\n        hook_body = \"\"\"import pbs\ne = pbs.event()\nj = e.job\n\nif j.queue.name == \"testq\" and not j.Resource_List[\"mem\"]:\n e.reject(\"testq requires job to have mem\")\"\"\"\n        attrs = {'event': 'movejob', 'enabled': 'True'}\n        rv = self.server.create_import_hook(self.hook_name, attrs, hook_body)\n        self.assertTrue(rv)\n        # submit a job j1 to default queue\n        submit_dir = self.du.create_temp_dir(asuser=TEST_USER4)\n        a = {ATTR_h: None, 'Resource_List.mem': '30mb'}\n        j1 = Job(TEST_USER4, a)\n        j1.create_script(self.script)\n        jid1 = self.server.submit(j1, submit_dir=submit_dir)\n        self.server.expect(JOB, {'job_state': 'H'}, id=jid1)\n        # qmove the job to queue testq\n        self.server.movejob(jid1, \"testq\")\n        
self.server.expect(\n            JOB, {'job_state': 'H', ATTR_queue: 'testq'},\n            attrop=PTL_AND, id=jid1)\n        # Submit a job j2\n        submit_dir = self.du.create_temp_dir(asuser=TEST_USER5)\n        a = {ATTR_h: None}\n        j2 = Job(TEST_USER5, a)\n        j2.create_script(self.script)\n        jid2 = self.server.submit(j2, submit_dir=submit_dir)\n        self.server.expect(JOB, {'job_state': 'H'}, id=jid2)\n        # qmove the job j2 to queue testq\n        # Qmove will fail with an error\n        _msg = \"qmove: testq requires job to have mem \" + jid2\n        try:\n            self.server.movejob(jid2, \"testq\")\n        except PbsMoveError as e:\n            self.assertEqual(\n                e.msg[0], _msg, msg=\"Did not get expected qmove err message\")\n            self.logger.info(\"Got expected qmove err message as %s\", e.msg[0])\n        # Delete the jobs and delete queue testq\n        self.server.delete([jid1, jid2])\n        self.server.manager(MGR_CMD_DELETE, QUEUE, None, qname)\n\n    def test_runjob_hook(self):\n        \"\"\"\n        Test runjob hook using qrun\n        \"\"\"\n        # Create a hook with event runjob\n        hook_body = \"\"\"import pbs\n\ndef print_attribs(pbs_obj):\n\n pbs.logmsg(pbs.LOG_DEBUG, \"Printing a PBS object of type %s\" % (type(pbs_obj)\n))\n for a in pbs_obj.attributes:\n  v = getattr(pbs_obj, a)\n  if v and str(v) != \"\":\n   pbs.logmsg(pbs.LOG_DEBUG, \"%s = %s\" % (a,v))\n\ne = pbs.event()\n\nprint_attribs(e)\n\nj = e.job\nprint_attribs(j)\"\"\"\n        now = time.time()\n        attrs = {'event': 'runjob', 'enabled': 'True'}\n        rv = self.server.create_import_hook(self.hook_name, attrs, hook_body)\n        self.assertTrue(rv)\n        # Set scheduling false\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n        # submit job j1\n        submit_dir = self.du.create_temp_dir(asuser=TEST_USER7)\n        # submit job j1\n       
 j1 = Job(TEST_USER7)\n        j1.create_script(self.script)\n        jid1 = self.server.submit(j1, submit_dir=submit_dir)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid1)\n        # qrun job j1\n        self.server.runjob(jid1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        # Get exec_vnode for job j1\n        ret = self.server.status(JOB, {'exec_vnode'}, id=jid1)\n        ev = ret[0]['exec_vnode']\n        # Verify server logs\n        self.logger.info(\"Verifying logs in server\")\n        msg_1 = \"Server@.*;Hook;%s;started\" % \\\n                (re.escape(self.hook_name))\n        msg_2 = \"Hook;Server@.*;requestor = Scheduler\"\n        msg_3 = \"Hook;Server@.*;hook_name = %s\" % \\\n                (re.escape(self.hook_name))\n        msg_4 = \"Hook;Server@.*;exec_vnode = %s\" % \\\n                (re.escape(ev))\n        # Character escaping '()' as the log_match is regexp\n        msg_5 = \"Server@.*;Hook;%s;finished\" % \\\n                (re.escape(self.hook_name))\n        msg = [msg_1, msg_2, msg_3, msg_4, msg_5]\n        for i in msg:\n            self.server.log_match(i, starttime=now, regexp=True)\n            self.logger.info(\"Got expected logs in server as %s\", i)\n\n    def tearDown(self):\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n        PBSTestSuite.tearDown(self)\n"
  },
  {
    "path": "test/tests/functional/pbs_hookswig.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestHookSwig(TestFunctional):\n\n    \"\"\"\n    This test suite contains a hook test to verify that the internal,\n    swig-generated 'pbs_ifl.py' does not cause an exception.\n    \"\"\"\n\n    def test_hook(self):\n        \"\"\"\n        Create a hook and import hook content that exercises the\n        pbs.server() call.\n        \"\"\"\n        hook_name = \"testhook\"\n        hook_body = \"\"\"\nimport pbs\ne = pbs.event()\ns = pbs.server()\nif e.job.id:\n    jid = e.job.id\nelse:\n    jid = \"newjob\"\npbs.logjobmsg(jid, \"server is %s\" % (s.name,))\n\"\"\"\n        a = {'event': ['queuejob', 'execjob_begin'], 'enabled': 'True'}\n        self.server.create_import_hook(hook_name, a, hook_body)\n\n        j = Job(TEST_USER)\n        a = {'Resource_List.select': '1:ncpus=1', 'Resource_List.walltime': 30}\n\n        j.set_attributes(a)\n        j.set_sleep_time(10)\n\n        jid = None\n        try:\n            jid = self.server.submit(j)\n        except PbsSubmitError:\n            pass\n        self.server.log_match(\"Job;%s;server is %s\" % (\n            \"newjob\", self.server.shortname))\n\n        # Only check the mom log if the submission actually succeeded\n        if jid is not None:\n            self.mom.log_match(\"Job;%s;server is %s\" % (\n                jid, self.server.shortname))\n"
  },
  {
    "path": "test/tests/functional/pbs_indirect_resources.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestHostResources(TestFunctional):\n\n    def test_set_direct_on_indirect_resc(self):\n        \"\"\"\n        Set a direct resource on an indirect resource and make sure\n        this change is reflected in resources_assigned.\n        \"\"\"\n        # Create a consumable custom resource 'fooi'\n        self.server.add_resource('fooi', 'long', 'nh')\n        # Create 2 vnodes with individual hosts\n        attr = {'resources_available.ncpus': 1}\n        self.mom.create_vnodes(attr, 2, sharednode=False)\n        vnode0 = self.mom.shortname + '[0]'\n        vnode1 = self.mom.shortname + '[1]'\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {'resources_available.fooi': 100}, vnode0)\n        self.server.manager(MGR_CMD_SET, NODE, {'resources_available.fooi':\n                                                '@' + vnode0}, vnode1)\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {'resources_available.fooi': 100}, vnode1)\n        self.server.expect(NODE, {'resources_assigned.fooi': ''},\n                           id=vnode1, op=UNSET, max_attempts=1)\n\n    def test_set_direct_on_indirect_resc_busy(self):\n        \"\"\"\n        Set a direct resource on an indirect resource\n        on a busy node and make sure it fails.\n        \"\"\"\n        # Create 2 
vnodes with individual hosts\n        attr = {'resources_available.ncpus': 1}\n        self.mom.create_vnodes(attr, 2, sharednode=False)\n        vnode0 = self.mom.shortname + '[0]'\n        vnode1 = self.mom.shortname + '[1]'\n        # Create a consumable custom resource 'fooi'\n        self.server.add_resource('fooi', 'long', 'nh')\n        self.scheduler.add_resource(\"fooi\")\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {'resources_available.fooi': 100}, vnode0)\n        self.server.manager(MGR_CMD_SET, NODE, {'resources_available.fooi':\n                                                '@' + vnode0}, vnode1)\n        # Submit jobs.\n        a = {'Resource_List.fooi': 50}\n        J = Job(attrs=a)\n        self.server.submit(J)\n        j2 = self.server.submit(J)\n        self.server.expect(JOB, {'job_state': 'R'}, id=j2)\n        self.server.expect(NODE, {'state': 'job-busy'}, id=vnode1)\n        try:\n            self.server.manager(MGR_CMD_SET, NODE,\n                                {'resources_available.fooi': 100},\n                                vnode1)\n        except PbsManagerError:\n            pass\n        else:\n            self.server.expect(NODE, {'resources_available.fooi': 100},\n                               id=vnode1, op=UNSET, max_attempts=1)\n\n    def test_set_direct_on_target_node(self):\n        \"\"\"\n        Set an indirect resource on a target node, which should\n        fail with an error.\n        A and B have direct resources.\n        C -> B\n        B -> A should fail as B is already a target.\n        \"\"\"\n        # Create 3 vnodes with individual hosts\n        attr = {'resources_available.ncpus': 1}\n        self.mom.create_vnodes(attr, 3, sharednode=False)\n        # Create a consumable custom resource 'fooi'\n        self.server.add_resource('fooi', 'long', 'nh')\n        vnode0 = self.mom.shortname + '[0]'\n        vnode1 = self.mom.shortname + '[1]'\n        vnode2 = self.mom.shortname 
+ '[2]'\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {'resources_available.fooi': 100}, vnode0)\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {'resources_available.fooi': 100}, vnode1)\n        self.server.manager(MGR_CMD_SET, NODE, {'resources_available.fooi':\n                                                '@' + vnode1}, vnode2)\n        try:\n            self.server.manager(MGR_CMD_SET, NODE,\n                                {'resources_available.fooi': '@' + vnode0},\n                                vnode1)\n        except PbsManagerError:\n            pass\n        else:\n            _msg = \"Setting indirect resources on a target object should fail\"\n            self.assertTrue(False, _msg)\n\n    def test_create_node_without_resc_set(self):\n        \"\"\"\n        Create a consumable resource, then create a new node.\n        The resources_assigned value should not get set on the node\n        without explicitly setting the resource on it.\n        \"\"\"\n        # Create a consumable custom resource 'fooi'\n        self.server.add_resource('fooi', 'long', 'nh')\n        self.server.manager(MGR_CMD_DELETE, NODE, None, \"\")\n        self.server.manager(MGR_CMD_CREATE, NODE, id=self.mom.shortname)\n        self.server.expect(NODE, {'resources_assigned.fooi': ''},\n                           id=self.mom.shortname, op=UNSET, max_attempts=1)\n
  },
  {
    "path": "test/tests/functional/pbs_init_script.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestPbsInitScript(TestFunctional):\n    \"\"\"\n    Testing the PBS init script\n    \"\"\"\n\n    def test_env_vars_precede_pbs_conf_file(self):\n        \"\"\"\n        Test that PBS_START environment variables override values in the\n        pbs.conf file\n        \"\"\"\n        conf = {'PBS_START_SERVER': '1', 'PBS_START_SCHED': '1',\n                'PBS_START_COMM': '1', 'PBS_START_MOM': '1'}\n        self.du.set_pbs_config(confs=conf)\n\n        conf_path = self.du.parse_pbs_config()\n        pbs_init = os.path.join(os.sep, conf_path['PBS_EXEC'],\n                                'libexec', 'pbs_init.d')\n        self.du.run_cmd(cmd=[pbs_init, 'stop'], sudo=True)\n\n        conf['PBS_START_MOM'] = '0'\n        self.du.set_pbs_config(confs=conf)\n\n        cmd = copy.copy(self.du.sudo_cmd)\n        cmd += ['PBS_START_SERVER=0', 'PBS_START_SCHED=0',\n                'PBS_START_COMM=0', 'PBS_START_MOM=1',\n                pbs_init, 'start']\n\n        rc = self.du.run_cmd(cmd=cmd, as_script=True)\n        output = rc['out']\n\n        self.assertNotIn(\"PBS server\", output)\n        self.assertNotIn(\"PBS sched\", output)\n        self.assertNotIn(\"PBS comm\", output)\n        self.assertIn(\"PBS mom\", output)\n\n    def tearDown(self):\n        # Above test leaves system in unusable state for PTL and PBS.\n        # Hence 
restarting PBS explicitly\n        PBSInitServices().restart()\n        TestFunctional.tearDown(self)\n"
  },
  {
    "path": "test/tests/functional/pbs_job_array.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\nfrom ptl.utils.pbs_logutils import PBSLogUtils\n\n\nclass TestJobArray(TestFunctional):\n    \"\"\"\n    Test suite for job array feature\n    \"\"\"\n    lu = PBSLogUtils()\n    qjh = \"\"\"\nimport pbs\n\ne = pbs.event()\nj = e.job\nif j.max_run_subjobs is None:\n    j.max_run_subjobs = %d\npbs.logmsg(pbs.LOG_DEBUG, \"max_run_subjobs set to %%d\" %% j.max_run_subjobs)\ne.accept()\n\"\"\"\n    mjh = \"\"\"\nimport pbs\n\ne = pbs.event()\nj = e.job\nif j.max_run_subjobs != 0:\n    if j.max_run_subjobs > 10:\n        j.max_run_subjobs = %d\npbs.logmsg(pbs.LOG_DEBUG, \"max_run_subjobs set to %%d\" %% j.max_run_subjobs)\ne.accept()\n\"\"\"\n    mjh2 = \"\"\"\nimport pbs\n\ne = pbs.event()\nj = e.job\nj.max_run_subjobs = %d\npbs.logmsg(pbs.LOG_DEBUG, \"max_run_subjobs set to %%d\" %% j.max_run_subjobs)\ne.accept()\n\"\"\"\n\n    def create_max_run_subjobs_hook(self, max_run, event, name, script):\n        \"\"\"\n        function to create a hook\n        - max_run Number of subjobs that can concurrently run\n        - event queuejob or modifyjob\n        - name hook name\n        - script hook script\n        \"\"\"\n        hook = script % int(max_run)\n        attrs = {'event': event}\n        self.server.create_import_hook(name, attrs, hook, overwrite=True)\n\n    def test_arrayjob_Erecord_startval(self):\n        \"\"\"\n  
      Check that an arrayjob's E record's 'start' value is not set to 0\n        \"\"\"\n        j = Job(TEST_USER, attrs={\n            ATTR_J: '1-2', ATTR_k: 'oe',\n            'Resource_List.select': 'ncpus=1'\n        })\n        j.set_sleep_time(1)\n\n        j_id = self.server.submit(j)\n\n        # Check for the E record for the arrayjob\n        acct_string = \";E;\" + str(j_id)\n        _, record = self.server.accounting_match(acct_string, max_attempts=10,\n                                                 interval=1)\n\n        # Extract the 'start' value from the E record\n        values = record.split(\";\", 3)[3]\n        start_str = \" start=\"\n        values_temp = values.split(start_str, 1)[1]\n        start_val = int(values_temp.split()[0])\n\n        # Verify that the value of 'start' isn't 0\n        self.assertNotEqual(start_val, 0,\n                            \"E record value of 'start' for arrayjob is 0\")\n\n    def kill_and_restart_svr(self):\n        try:\n            self.server.stop('-KILL')\n        except PbsServiceError as e:\n            # The server failed to stop\n            raise self.failureException(\"Server failed to stop:\" + e.msg)\n\n        try:\n            self.server.start()\n        except PbsServiceError as e:\n            # The server failed to start\n            raise self.failureException(\"Server failed to start:\" + e.msg)\n        self.server.isUp()\n        rv = self.is_server_licensed(self.server)\n        _msg = 'No license found on server %s' % (self.server.shortname)\n        self.assertTrue(rv, _msg)\n        attr = {'state': (MATCH_RE, 'free|job-busy')}\n        self.server.expect(NODE, attr, id=self.mom.shortname)\n\n    def test_running_subjob_survive_restart(self):\n        \"\"\"\n        Test to check if a running subjob of an array job survives a\n        pbs_server restart\n        \"\"\"\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, 
self.mom.shortname)\n        j = Job(TEST_USER, attrs={\n            ATTR_J: '1-3', 'Resource_List.select': 'ncpus=1'})\n\n        j_id = self.server.submit(j)\n        subjid_1 = j.create_subjob_id(j_id, 1)\n\n        # 1. check job array has begun\n        self.server.expect(JOB, {'job_state': 'B'}, j_id)\n\n        # 2. check subjob 1 started running\n        self.server.expect(JOB, {'job_state': 'R'}, subjid_1)\n\n        # 3. Kill and restart the server\n        self.kill_and_restart_svr()\n\n        # 4. array job should be B\n        self.server.expect(JOB, {'job_state': 'B'}, j_id)\n\n        # 5. subjob 1 should be R\n        self.server.expect(JOB, {'job_state': 'R'}, subjid_1)\n\n    def test_running_subjob_survive_restart_with_history(self):\n        \"\"\"\n        Test to check if a running subjob of an array job survives a\n        pbs_server restart when history is enabled\n        \"\"\"\n        attr = {'job_history_enable': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, attr)\n        self.test_running_subjob_survive_restart()\n\n    def test_suspended_subjob_survive_restart(self):\n        \"\"\"\n        Test to check if a suspended subjob of an array job survives a\n        pbs_server restart\n        \"\"\"\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        j = Job(TEST_USER, attrs={\n            ATTR_J: '1-3', 'Resource_List.select': 'ncpus=1'})\n\n        j.set_sleep_time(10)\n\n        j_id = self.server.submit(j)\n        subjid_2 = j.create_subjob_id(j_id, 2)\n\n        # 1. check job array has begun\n        self.server.expect(JOB, {'job_state': 'B'}, j_id)\n\n        # 2. 
wait till subjob_2 starts running\n        self.server.expect(JOB, {'job_state': 'R'}, subjid_2)\n\n        try:\n            self.server.sigjob(subjid_2, 'suspend')\n        except PbsSignalError as e:\n            raise self.failureException(\"Failed to suspend subjob:\" + e.msg)\n\n        self.server.expect(JOB, {'job_state': 'S'}, subjid_2, max_attempts=1)\n\n        # 3. Kill and restart the server\n        self.kill_and_restart_svr()\n\n        # 4. array job should be B\n        self.server.expect(JOB, {'job_state': 'B'}, j_id, max_attempts=1)\n\n        # 5. subjob_2 should be S\n        self.server.expect(JOB, {'job_state': 'S'}, subjid_2, max_attempts=1)\n\n        try:\n            self.server.sigjob(subjid_2, 'resume')\n        except PbsSignalError as e:\n            raise self.failureException(\"Failed to resume subjob:\" + e.msg)\n\n    def test_suspended_subjob_survive_restart_with_history(self):\n        \"\"\"\n        Test to check if a suspended subjob of an array job survives a\n        pbs_server restart when history is enabled\n        \"\"\"\n        attr = {'job_history_enable': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, attr)\n        self.test_suspended_subjob_survive_restart()\n\n    def test_deleted_q_subjob_survive_restart(self):\n        \"\"\"\n        Test to check if a deleted queued subjob of an array job survives a\n        pbs_server restart when history is disabled\n        \"\"\"\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        j = Job(TEST_USER, attrs={\n            ATTR_J: '1-3', 'Resource_List.select': 'ncpus=1'})\n\n        j.set_sleep_time(10)\n\n        j_id = self.server.submit(j)\n        subjid_3 = j.create_subjob_id(j_id, 3)\n\n        self.server.expect(JOB, {'job_state': 'B'}, j_id)\n        self.server.deljob(subjid_3)\n        self.server.expect(JOB, {'job_state': 'X'}, subjid_3)\n\n        self.kill_and_restart_svr()\n\n   
     self.server.expect(JOB, {'job_state': 'B'}, j_id, max_attempts=1)\n        self.server.expect(JOB, {'job_state': 'X'}, subjid_3, max_attempts=1)\n\n    def test_deleted_q_subjob_survive_restart_w_history(self):\n        \"\"\"\n        Test to check if a deleted queued subjob of an array job survives a\n        pbs_server restart when history is enabled\n        \"\"\"\n        attr = {'job_history_enable': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, attr)\n        self.test_deleted_q_subjob_survive_restart()\n\n    def test_deleted_r_subjob_survive_restart(self):\n        \"\"\"\n        Test to check if a deleted running subjob of an array job survives a\n        pbs_server restart when history is disabled\n        \"\"\"\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        j = Job(TEST_USER, attrs={\n            ATTR_J: '1-3', 'Resource_List.select': 'ncpus=1'})\n\n        j.set_sleep_time(10)\n\n        j_id = self.server.submit(j)\n        subjid_1 = j.create_subjob_id(j_id, 1)\n\n        self.server.expect(JOB, {'job_state': 'B'}, j_id)\n        self.server.expect(JOB, {'job_state': 'R'}, subjid_1)\n        self.server.deljob(subjid_1)\n        self.server.expect(JOB, {'job_state': 'X'}, subjid_1)\n\n        self.kill_and_restart_svr()\n\n        self.server.expect(JOB, {'job_state': 'B'}, j_id, max_attempts=1)\n        self.server.expect(JOB, {'job_state': 'X'}, subjid_1, max_attempts=1)\n\n    def test_deleted_r_subjob_survive_restart_w_history(self):\n        \"\"\"\n        Test to check if a deleted running subjob of an array job survives a\n        pbs_server restart when history is enabled\n        \"\"\"\n        attr = {'job_history_enable': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, attr)\n        self.test_deleted_r_subjob_survive_restart()\n\n    def test_qdel_expired_subjob(self):\n        \"\"\"\n        Test to check if qdel of an expired subjob is 
disallowed\n        \"\"\"\n        attr = {'job_history_enable': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, attr)\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        j = Job(TEST_USER, attrs={\n            ATTR_J: '1-3', 'Resource_List.select': 'ncpus=1'})\n\n        j.set_sleep_time(5)\n\n        j_id = self.server.submit(j)\n        subjid_1 = j.create_subjob_id(j_id, 1)\n\n        # 1. check job array has begun\n        self.server.expect(JOB, {'job_state': 'B'}, j_id)\n\n        # 2. wait till subjob 1 becomes expired\n        self.server.expect(JOB, {'job_state': 'X'}, subjid_1)\n\n        try:\n            self.server.deljob(subjid_1)\n        except PbsDeljobError as e:\n            err_msg = \"Request invalid for finished array subjob\"\n            self.assertTrue(err_msg in e.msg[0],\n                            \"Error message is not expected\")\n        else:\n            raise self.failureException(\"subjob in X state can be deleted\")\n\n        try:\n            self.server.deljob(subjid_1, extend=\"deletehist\")\n        except PbsDeljobError as e:\n            err_msg = \"Request invalid for finished array subjob\"\n            self.assertTrue(err_msg in e.msg[0],\n                            \"Error message is not expected\")\n        else:\n            raise self.failureException(\"subjob in X state can be deleted\")\n\n    def test_subjob_comments(self):\n        \"\"\"\n        Test subjob comments for finished and terminated subjobs\n        \"\"\"\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        j = Job(TEST_USER, attrs={\n            ATTR_J: '1-30', 'Resource_List.select': 'ncpus=1'})\n        j.set_sleep_time(8)\n        j_id = self.server.submit(j)\n        subjid_1 = j.create_subjob_id(j_id, 1)\n        subjid_2 = j.create_subjob_id(j_id, 2)\n        self.server.expect(JOB, 
{'comment': 'Subjob finished'}, subjid_1,\n                           offset=8)\n        self.server.delete(subjid_2, extend='force')\n        self.server.expect(JOB, {'comment': 'Subjob finished'}, subjid_2)\n        self.kill_and_restart_svr()\n        self.server.expect(\n            JOB, {'comment': 'Subjob finished'}, subjid_1, max_attempts=1)\n\n    def test_subjob_comments_with_history(self):\n        \"\"\"\n        Test subjob comments for finished, failed and terminated subjobs\n        when history is enabled\n        \"\"\"\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        a = {'job_history_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        j = Job(TEST_USER, attrs={\n            ATTR_J: '1-2', 'Resource_List.select': 'ncpus=1'})\n        j.set_sleep_time(5)\n        j_id = self.server.submit(j)\n        subjid_1 = j.create_subjob_id(j_id, 1)\n        subjid_2 = j.create_subjob_id(j_id, 2)\n        self.server.delete(subjid_2, extend='force')\n        self.server.expect(\n            JOB, {'comment': (MATCH_RE, 'finished')}, subjid_2, extend='x')\n        self.server.expect(JOB, {'comment': (\n            MATCH_RE, 'Job run at.*and finished')}, subjid_1, extend='x')\n        self.kill_and_restart_svr()\n        self.server.expect(JOB, {'comment': (\n            MATCH_RE, 'Job run at.*and finished')}, subjid_1, extend='x',\n            max_attempts=1)\n        script_body = \"exit 1\"\n        j = Job(TEST_USER, attrs={\n            ATTR_J: '1-2', 'Resource_List.select': 'ncpus=1'})\n        j.create_script(body=script_body)\n        j_id = self.server.submit(j)\n        subjid_1 = j.create_subjob_id(j_id, 1)\n        subjid_2 = j.create_subjob_id(j_id, 2)\n        self.server.expect(\n            JOB, {'comment': (MATCH_RE, 'Job run at.*and failed')}, subjid_1,\n            extend='x')\n        self.server.expect(\n            JOB, {'comment': 
(MATCH_RE, 'Job run at.*and failed')}, subjid_2,\n            extend='x')\n        self.kill_and_restart_svr()\n        self.server.expect(\n            JOB, {'comment': (MATCH_RE, 'Job run at.*and failed')}, subjid_1,\n            extend='x', max_attempts=1)\n        self.server.expect(\n            JOB, {'comment': (MATCH_RE, 'Job run at.*and failed')}, subjid_2,\n            extend='x')\n\n    def test_multiple_server_restarts(self):\n        \"\"\"\n        Test that subjobs won't rerun after multiple server restarts\n        \"\"\"\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        a = {'job_history_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        j = Job(TEST_USER, attrs={\n            ATTR_J: '1-2', 'Resource_List.select': 'ncpus=1'})\n        j.set_sleep_time(300)\n        j_id = self.server.submit(j)\n        subjid_1 = j.create_subjob_id(j_id, 1)\n        a = {'job_state': 'R', 'run_count': 1}\n        self.server.expect(JOB, a, subjid_1, attrop=PTL_AND)\n        for _ in range(5):\n            self.kill_and_restart_svr()\n            self.server.expect(\n                JOB, a, subjid_1, attrop=PTL_AND)\n\n    def test_job_array_history_duration(self):\n        \"\"\"\n        Test that a job array and its subjobs are purged after history\n        duration\n        \"\"\"\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        a = {'job_history_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {'job_history_duration': 30}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        j = Job(TEST_USER, attrs={\n            ATTR_J: '1-2', 'Resource_List.select': 'ncpus=1'})\n        j.set_sleep_time(15)\n        j_id = self.server.submit(j)\n        subjid_1 = j.create_subjob_id(j_id, 1)\n        subjid_2 = j.create_subjob_id(j_id, 2)\n        a = {'job_state': 'R', 
'run_count': 1}\n        self.server.expect(JOB, a, subjid_1, attrop=PTL_AND)\n        self.server.delete(subjid_1, extend='force')\n        b = {'job_state': 'X'}\n        self.server.expect(JOB, b, subjid_1)\n        self.server.expect(JOB, a, subjid_2, attrop=PTL_AND)\n        msg = \"Waiting for 150 secs as server will purge db once\"\n        msg += \" in 2 mins plus 30 sec of history duration\"\n        self.logger.info(msg)\n        self.server.expect(JOB, 'job_state', op=UNSET,\n                           id=subjid_1, offset=150, extend='x')\n        self.server.expect(JOB, 'job_state', op=UNSET,\n                           id=subjid_2, extend='x')\n\n    def test_queue_deletion_after_terminated_subjob(self):\n        \"\"\"\n        Test that queue can be deleted after the job array is\n        terminated and server is restarted.\n        \"\"\"\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        j = Job(TEST_USER, attrs={\n            ATTR_J: '1-2', 'Resource_List.select': 'ncpus=1'})\n        j_id = self.server.submit(j)\n        subjid_1 = j.create_subjob_id(j_id, 1)\n        a = {'job_state': 'R', 'run_count': 1}\n        self.server.expect(JOB, a, subjid_1, attrop=PTL_AND)\n        self.server.delete(subjid_1, extend='force')\n        self.kill_and_restart_svr()\n        subjid_2 = j.create_subjob_id(j_id, 2)\n        self.server.expect(JOB, {'job_state': 'R'}, subjid_2)\n        self.server.delete(j_id, wait=True)\n        self.server.manager(MGR_CMD_DELETE, QUEUE, id='workq')\n\n    def test_held_job_array_survive_server_restart(self):\n        \"\"\"\n        Test held job array can be released after server restart\n        \"\"\"\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        j = Job(TEST_USER, attrs={\n            ATTR_J: '1-2', 'Resource_List.select': 'ncpus=1'})\n        j.set_sleep_time(60)\n  
      j_id = self.server.submit(j)\n        j_id2 = self.server.submit(j)\n        subjid_1 = j.create_subjob_id(j_id, 1)\n        subjid_3 = j.create_subjob_id(j_id2, 1)\n        a = {'job_state': 'R', 'run_count': 1}\n        self.server.expect(JOB, a, subjid_1, attrop=PTL_AND)\n        self.server.holdjob(j_id2, USER_HOLD)\n        self.server.expect(JOB, {'job_state': 'H'}, j_id2)\n        self.kill_and_restart_svr()\n        self.server.delete(j_id, wait=True)\n        self.server.expect(JOB, {'job_state': 'H'}, j_id2)\n        self.server.rlsjob(j_id2, USER_HOLD)\n        self.server.expect(JOB, {'job_state': 'B'}, j_id2)\n        self.server.expect(JOB, a, subjid_3, attrop=PTL_AND)\n\n    def test_held_job_array_survive_server_restart_w_history(self):\n        \"\"\"\n        Test that a held job array can be released after a server restart\n        when history is enabled\n        \"\"\"\n        a = {'job_history_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.test_held_job_array_survive_server_restart()\n\n    def test_subjobs_qrun(self):\n        \"\"\"\n        Test that a job array's subjobs can be qrun\n        \"\"\"\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        a = {'scheduling': 'false'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        j = Job(TEST_USER, attrs={\n            ATTR_J: '1-2', 'Resource_List.select': 'ncpus=1'})\n        j.set_sleep_time(60)\n        j_id = self.server.submit(j)\n        subjid_1 = j.create_subjob_id(j_id, 1)\n        self.server.runjob(subjid_1)\n        self.server.expect(JOB, {'job_state': 'B'}, j_id)\n        self.server.expect(JOB, {'job_state': 'R'}, subjid_1)\n\n    def test_dependent_job_array_server_restart(self):\n        \"\"\"\n        Check that a job array dependency is not released merely by a\n        server restart\n        \"\"\"\n        a = {'job_history_enable': 'true'}\n        
self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {'resources_available.ncpus': 2}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        j = Job(TEST_USER, attrs={\n            ATTR_J: '1-2', 'Resource_List.select': 'ncpus=1'})\n        j.set_sleep_time(10)\n        j_id = self.server.submit(j)\n        subjid_1 = j.create_subjob_id(j_id, 1)\n        subjid_2 = j.create_subjob_id(j_id, 2)\n        self.server.expect(JOB, {'job_state': 'B'}, j_id)\n        self.server.expect(JOB, {'job_state': 'R'}, subjid_1)\n        self.server.expect(JOB, {'job_state': 'R'}, subjid_2)\n        depend_value = 'afterok:' + j_id\n        j = Job(TEST_USER, attrs={\n            ATTR_J: '1-2', 'Resource_List.select': 'ncpus=1',\n            ATTR_depend: depend_value})\n        j_id2 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'H'}, j_id2)\n        self.kill_and_restart_svr()\n        self.server.expect(JOB, {'job_state': 'F'},\n                           j_id, extend='x', interval=5)\n        self.server.expect(JOB, {'job_state': 'B'}, j_id2, interval=5)\n\n    def test_rerun_subjobs_server_restart(self):\n        \"\"\"\n        Test that subjobs which are requeued remain queued after server restart\n        \"\"\"\n        a = {'job_history_enable': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        j = Job(TEST_USER, attrs={\n            ATTR_J: '1-2', 'Resource_List.select': 'ncpus=1'})\n        j.set_sleep_time(60)\n        j_id = self.server.submit(j)\n        subjid_1 = j.create_subjob_id(j_id, 1)\n        self.server.expect(JOB, {'job_state': 'R'}, subjid_1)\n        a = {'scheduling': 'false'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.server.rerunjob(subjid_1)\n        self.server.expect(JOB, {'job_state': 'Q'}, subjid_1)\n        
self.kill_and_restart_svr()\n        self.server.expect(JOB, {'job_state': 'Q'}, subjid_1)\n        a = {'scheduling': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {'job_state': 'R'}\n        self.server.expect(JOB, a, subjid_1)\n\n    def test_rerun_node_fail_requeue(self):\n        \"\"\"\n        Test that subjobs get requeued after the node_fail_requeue time\n        \"\"\"\n        a = {'node_fail_requeue': 10}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        j = Job(TEST_USER, attrs={\n            ATTR_J: '1-2', 'Resource_List.select': 'ncpus=1'})\n        j.set_sleep_time(60)\n        j_id = self.server.submit(j)\n        subjid_1 = j.create_subjob_id(j_id, 1)\n        self.server.expect(JOB, {'job_state': 'R'}, subjid_1)\n        self.mom.stop()\n        self.server.expect(JOB, {'job_state': 'Q'}, subjid_1, offset=5)\n\n    def test_qmove_job_array(self):\n        \"\"\"\n        Test that a job array can be qmoved to a high-priority queue\n        and that the qmoved job array preempts a running subjob\n        \"\"\"\n        a = {'queue_type': 'execution',\n             'started': 'True',\n             'enabled': 'True',\n             'priority': 150}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='wq1')\n        a = {'job_history_enable': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        j = Job(TEST_USER, attrs={\n            ATTR_J: '1-2', 'Resource_List.select': 'ncpus=1'})\n        j.set_sleep_time(60)\n        j_id = self.server.submit(j)\n        subjid_1 = j.create_subjob_id(j_id, 1)\n        self.server.expect(JOB, {'job_state': 'R'}, subjid_1)\n        j_id2 = self.server.submit(j)\n        subjid_3 = j.create_subjob_id(j_id2, 1)\n        self.server.movejob(j_id2, 
'wq1')\n        a = {'scheduling': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.server.expect(JOB, {'job_state': 'S'}, subjid_1)\n        self.server.expect(JOB, {'job_state': 'R'}, subjid_3)\n\n    def test_delete_history_subjob_server_restart(self):\n        \"\"\"\n        Test that subjobs can be deleted from history\n        and they remain deleted after server restart\n        \"\"\"\n        a = {'job_history_enable': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {'resources_available.ncpus': 2}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        j = Job(TEST_USER, attrs={\n            ATTR_J: '1-2', 'Resource_List.select': 'ncpus=1',\n            ATTR_k: 'oe'})\n        j.set_sleep_time(5)\n        j_id = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'F'}, j_id, extend='x', offset=5)\n        self.server.delete(j_id, extend='deletehist')\n        self.server.expect(JOB, 'job_state', op=UNSET, extend='x', id=j_id)\n        self.kill_and_restart_svr()\n        self.server.expect(JOB, 'job_state', op=UNSET, extend='x', id=j_id)\n\n    def test_job_id_duplicate_server_restart(self):\n        \"\"\"\n        Test that after server restart there is no duplication\n        of job identifiers\n        \"\"\"\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        j = Job(TEST_USER, attrs={\n            ATTR_J: '1-2', 'Resource_List.select': 'ncpus=1'})\n        self.server.submit(j)\n        j = Job(TEST_USER)\n        self.server.submit(j)\n        self.kill_and_restart_svr()\n        try:\n            j = Job(TEST_USER, attrs={\n                ATTR_J: '1-2', 'Resource_List.select': 'ncpus=1'})\n            self.server.submit(j)\n        except PbsSubmitError as e:\n            raise self.failureException(\"Failed to submit job: \" + str(e.msg))\n\n    def 
test_expired_subjobs_not_reported(self):\n        \"\"\"\n        Test that once a subjob finishes and moves to the expired state,\n        it is not reported to the scheduler in the next scheduling\n        cycle. The scheduler expects only running subjobs to be reported\n        to it.\n        \"\"\"\n\n        a = {'job_history_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        req_node = \":host=\" + self.mom.shortname\n        res_req = {'Resource_List.select': '1:ncpus=1' + req_node,\n                   'array_indices_submitted': '1-16',\n                   'Resource_List.place': 'excl'}\n        j1 = Job(TEST_USER, attrs=res_req)\n        j1.set_sleep_time(2)\n        jid1 = self.server.submit(j1)\n        j1_sub1 = j1.create_subjob_id(jid1, 1)\n\n        self.server.expect(JOB, {'job_state': 'X'}, j1_sub1)\n        # Trigger a sched cycle\n        a = {'scheduling': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        msg = j1_sub1 + \";\" + \"Subjob found in undesirable state\"\n        msg += \", ignoring this job\"\n        self.scheduler.log_match(msg, existence=False, max_attempts=10)\n\n    def test_subjob_stdfile_custom_dir(self):\n        \"\"\"\n        Test that subjobs' standard error and output files are generated\n        in the custom directory provided with the -e and -o qsub options\n        \"\"\"\n        tmp_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        a = {ATTR_e: tmp_dir, ATTR_o: tmp_dir, ATTR_J: '1-4'}\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(2)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: 'B'}, id=jid)\n        self.server.expect(JOB, ATTR_state, op=UNSET, id=jid)\n        file_list = [name for name in os.listdir(\n            tmp_dir) if os.path.isfile(os.path.join(tmp_dir, name))]\n        self.assertEqual(8, len(file_list), 'expected 8 std files')\n        for ext in ['.OU', '.ER']:\n            for sub_ind in range(1, 5):\n                f_name = 
j.create_subjob_id(jid, sub_ind) + ext\n                if f_name not in file_list:\n                    raise self.failureException(\"std file \" + f_name +\n                                                \" not found\")\n\n    @skipOnCray\n    def test_subjob_wrong_state(self):\n        \"\"\"\n        Test that after submitting a job and restarting the server,\n        the subjobs are not in the wrong substate and can be scheduled.\n        \"\"\"\n        a = {'resources_available.ncpus': 200}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        j = Job(attrs={ATTR_J: '1-200'})\n        j.set_sleep_time(200)\n        self.server.submit(j)\n        # while the server is sending the jobs to the MoM, restart the server\n        self.server.restart()\n        # trigger a scheduling cycle; all subjobs should reach the R state\n        self.scheduler.run_scheduling_cycle()\n        # ensure all the subjobs are running\n        self.server.expect(JOB, {'job_state=R': 200}, extend='t')\n\n    def test_recover_big_array_job(self):\n        \"\"\"\n        Test that on restart the server is able to recover valid\n        array jobs which are bigger than the current value of the\n        max_array_size server attribute\n        \"\"\"\n        # submit a medium size array job\n        a = {'resources_available.ncpus': 4}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        j = Job(attrs={ATTR_J: '1-200'})\n        j_id = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: 'B'}, id=j_id)\n\n        # reduce max_array_size\n        a = {ATTR_maxarraysize: 40}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.server.expect(SERVER, a)\n        try:\n            self.server.submit(Job(attrs={ATTR_J: '1-200'}))\n        except PbsSubmitError as e:\n            exp_msg = 'qsub: Array job exceeds server or queue size limit'\n            self.assertEqual(exp_msg, e.msg[0])\n        else:\n            self.fail('array job submission should have been rejected')\n\n        # restart the 
server to check for crash\n        try:\n            self.server.restart()\n        except PbsServiceError as e:\n            if 'pbs_server startup failed' in e.msg:\n                reset_db = 'echo y | ' + \\\n                    os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                 'sbin', 'pbs_server') + ' -t create'\n                self.du.run_cmd(cmd=reset_db, sudo=True, as_script=True)\n            self.fail('TC failed as server recovery failed')\n        else:\n            self.server.expect(JOB, {ATTR_state: 'B'}, id=j_id)\n\n    def test_max_run_subjobs_basic(self):\n        \"\"\"\n        Test that if a job is submitted with the 'max_run_subjobs' attribute,\n        the number of running subjobs does not exceed the attribute value.\n        \"\"\"\n\n        a = {'resources_available.ncpus': 8}\n        self.mom.create_vnodes(a, 1)\n        j = Job(attrs={ATTR_J: '1-20%2'})\n        j_id = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: 'B'}, id=j_id)\n        self.server.expect(JOB, {'job_state=R': 2}, extend='t')\n\n        self.server.alterjob(j_id, {ATTR_W: 'max_run_subjobs=5'})\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.expect(JOB, {'job_state=R': 5}, extend='t')\n        msg = \"Number of concurrent running subjobs limit reached\"\n        self.scheduler.log_match(j_id + ';' + msg)\n\n    @skipOnCpuSet\n    def test_max_run_subjobs_equiv_class(self):\n        \"\"\"\n        Test that a job submitted with the 'max_run_subjobs' attribute\n        does not stop jobs in its equivalence class from running\n        \"\"\"\n\n        a = {'resources_available.ncpus': 8}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        j = Job(attrs={ATTR_J: '1-20%2', 'Resource_List.walltime': 3600,\n                       'Resource_List.select': 'ncpus=2'})\n        j_id = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: 'B'}, 
id=j_id)\n        self.server.expect(JOB, {'job_state=R': 2}, extend='t')\n\n        j = Job(attrs={'Resource_List.walltime': 3600,\n                       'Resource_List.select': 'ncpus=2'})\n        j_id_equiv = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=j_id_equiv)\n\n    @skipOnCpuSet\n    def test_max_run_subjobs_calendar(self):\n        \"\"\"\n        Test that a job submitted with the 'max_run_subjobs' attribute\n        gets into the calendar when it cannot run.\n        \"\"\"\n\n        a = {'resources_available.ncpus': 8}\n        self.mom.create_vnodes(a, 1)\n        a = {'backfill_depth': '2'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.scheduler.set_sched_config({'strict_ordering': 'True'})\n        j1 = Job(attrs={'Resource_List.walltime': 200})\n        j1.set_sleep_time(200)\n        j1_id = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=j1_id)\n        j2 = Job(attrs={ATTR_J: '1-20%2', 'Resource_List.walltime': 300})\n        j2.set_sleep_time(300)\n        j2_id = self.server.submit(j2)\n        self.server.expect(JOB, {ATTR_state: 'B'}, id=j2_id)\n        self.server.expect(JOB, {'job_state=R': 3}, extend='t')\n        j2_sub1 = j2.create_subjob_id(j2_id, 1)\n        job_arr = self.server.status(JOB, id=j2_sub1)\n        stime = self.lu.convert_date_time(job_arr[0]['stime'],\n                                          fmt=\"%a %b %d %H:%M:%S %Y\")\n        job_arr = self.server.status(JOB, id=j2_id)\n\n        # check estimated start time is set on job array\n        self.assertIn('estimated.start_time', job_arr[0])\n        errmsg = j2_id + \";Error in calculation of start time of top job\"\n        self.scheduler.log_match(errmsg, existence=False, max_attempts=10)\n        est = self.lu.convert_date_time(job_arr[0]['estimated.start_time'],\n                                        fmt=\"%a %b %d %H:%M:%S %Y\")\n        self.assertAlmostEqual(stime + 300, est, 
1)\n\n    def test_max_run_subjobs_queuejob_hook(self):\n        \"\"\"\n        Test that a queuejob hook is able to set the max_run_subjobs\n        attribute.\n        \"\"\"\n        a = {'resources_available.ncpus': 8}\n        self.mom.create_vnodes(a, 1)\n\n        self.create_max_run_subjobs_hook(3, \"queuejob\", \"h1\", self.qjh)\n        j1 = Job(attrs={ATTR_J: '1-20'})\n        jid1 = self.server.submit(j1)\n        self.server.log_match(\"max_run_subjobs set to 3\")\n        self.server.expect(JOB, {ATTR_state: 'B'}, id=jid1)\n        self.server.expect(JOB, {'job_state=R': 3}, extend='t')\n\n        # Submit a normal job and verify that the queuejob hook cannot set\n        # the attribute.\n        with self.assertRaises(PbsSubmitError) as e:\n            self.server.submit(Job())\n        self.assertIn(\"Attribute has to be set on an array job\",\n                      e.exception.msg[0])\n\n    def test_max_run_subjobs_modifyjob_hook(self):\n        \"\"\"\n        Submit an array job with a large max_run_subjobs limit and see if a\n        modifyjob hook modifies it.\n        \"\"\"\n        a = {'resources_available.ncpus': 20}\n        self.mom.create_vnodes(a, 1)\n\n        self.create_max_run_subjobs_hook(3, \"modifyjob\", \"h1\", self.mjh)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        j = Job(attrs={ATTR_J: '1-50'})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid)\n        self.server.alterjob(jid, {ATTR_W: 'max_run_subjobs=20'})\n        self.server.log_match(\"max_run_subjobs set to 3\")\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.expect(JOB, {'job_state=R': 3}, extend='t')\n\n        # Modify a normal job and verify that the modifyjob hook cannot set\n        # the attribute.\n        self.create_max_run_subjobs_hook(3, \"modifyjob\", \"h1\", self.mjh2)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        nj = 
self.server.submit(Job())\n        with self.assertRaises(PbsAlterError) as e:\n            self.server.alterjob(nj, {'Resource_List.soft_walltime': 50})\n        self.assertIn(\"Attribute has to be set on an array job\",\n                      e.exception.msg[0])\n\n    def test_max_run_subjobs_preemption(self):\n        \"\"\"\n        Submit an array job with a max_run_subjobs limit and verify that\n        when such a job hits the limit, no preemption is attempted.\n        \"\"\"\n        a = {'queue_type': 'execution',\n             'started': 'True',\n             'enabled': 'True',\n             'Priority': 200}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, \"wq2\")\n\n        a = {'resources_available.ncpus': 8}\n        self.mom.create_vnodes(a, 1)\n\n        a = {'Resource_List.select': 'ncpus=2'}\n        j = Job(attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n        a = {ATTR_J: '1-20%3', 'Resource_List.select': 'ncpus=2',\n             ATTR_q: 'wq2'}\n        j_arr = Job(attrs=a)\n        jid_arr = self.server.submit(j_arr)\n        self.server.expect(JOB, {ATTR_state: 'B'}, id=jid_arr)\n        self.server.expect(JOB, {'job_state=R': 4}, extend='t')\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n    def test_max_run_subjobs_qrun(self):\n        \"\"\"\n        Submit an array job with a max_run_subjobs limit and verify that\n        when a subjob is run using qrun, the max_run_subjobs limit is\n        ignored.\n        \"\"\"\n        a = {'resources_available.ncpus': 8}\n        self.mom.create_vnodes(a, 1)\n\n        a = {'Resource_List.select': 'ncpus=2'}\n        j = Job(attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n        a = {ATTR_J: '1-20%3', 'Resource_List.select': 'ncpus=2'}\n        j_arr = Job(attrs=a)\n        jid_arr = self.server.submit(j_arr)\n        self.server.expect(JOB, {ATTR_state: 'B'}, id=jid_arr)\n        
self.server.expect(JOB, {'job_state=R': 4}, extend='t')\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n        subjid_4 = j_arr.create_subjob_id(jid_arr, 4)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=subjid_4)\n        self.server.runjob(subjid_4)\n        self.server.expect(JOB, {ATTR_state: 'B'}, id=jid_arr)\n        self.server.expect(JOB, {'job_state=R': 4}, extend='t')\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=subjid_4)\n\n    def test_max_run_subjobs_suspend(self):\n        \"\"\"\n        Submit an array job with a max_run_subjobs limit and verify that\n        when such a job has suspended subjobs, those subjobs are not\n        counted against the limit.\n        \"\"\"\n\n        a = {'resources_available.ncpus': 8}\n        self.mom.create_vnodes(a, 1)\n\n        a = {ATTR_J: '1-20%3', 'Resource_List.select': 'ncpus=2'}\n        j_arr = Job(attrs=a)\n        jid_arr = self.server.submit(j_arr)\n        self.server.expect(JOB, {ATTR_state: 'B'}, id=jid_arr)\n        self.server.expect(JOB, {'job_state=R': 3}, extend='t')\n        subjid_2 = j_arr.create_subjob_id(jid_arr, 2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=subjid_2)\n        self.server.sigjob(jobid=subjid_2, signal=\"suspend\")\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=subjid_2)\n        self.server.expect(JOB, {'job_state=R': 3}, extend='t')\n        subjid_4 = j_arr.create_subjob_id(jid_arr, 4)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=subjid_4)\n\n    def test_max_run_subjobs_eligible_time(self):\n        \"\"\"\n        Test that array jobs hitting the max_run_subjobs limit still\n        accrue eligible time.\n        \"\"\"\n\n        a = {'resources_available.ncpus': 8}\n        self.mom.create_vnodes(a, 1)\n\n        a = {'eligible_time_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        accrue = {'ineligible': 1, 'eligible': 2, 'run': 3, 
'exit': 4}\n\n        a = {ATTR_J: '1-20%3', 'Resource_List.select': 'ncpus=2'}\n        j_arr = Job(attrs=a)\n        jid_arr = self.server.submit(j_arr)\n        self.server.expect(JOB, {ATTR_state: 'B'}, id=jid_arr)\n        self.server.expect(JOB, {'job_state=R': 3}, extend='t')\n        self.server.expect(JOB, {'accrue_type': accrue['eligible']},\n                           id=jid_arr)\n\n    def test_max_run_subjobs_on_non_array(self):\n        \"\"\"\n        Test that setting max_run_subjobs on non-array jobs is rejected.\n        \"\"\"\n        a = {ATTR_W: 'max_run_subjobs=4'}\n        with self.assertRaises(PbsSubmitError) as e:\n            self.server.submit(Job(attrs=a))\n        self.assertIn(\"Attribute has to be set on an array job\",\n                      e.exception.msg[0])\n\n    def test_multiple_max_run_subjobs_values(self):\n        \"\"\"\n        Test that setting max_run_subjobs more than once on an array\n        job is rejected.\n        \"\"\"\n\n        qsub_cmd = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                'bin', 'qsub')\n\n        cmd = [qsub_cmd, '-J1-4%2', '-Wmax_run_subjobs=4', '--',\n               self.mom.sleep_cmd, '100']\n        rv = self.du.run_cmd(self.server.hostname, cmd=cmd)\n        self.assertNotEqual(rv['rc'], 0, 'qsub must fail')\n        msg = \"qsub: multiple max_run_subjobs values found\"\n        self.assertEqual(rv['err'][0], msg)\n\n    def test_qdel_job_array_downed_mom(self):\n        \"\"\"\n        Test that qdel of a job array returns\n        an error when the MoM is down.\n        \"\"\"\n\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        j = Job(TEST_USER, attrs={\n            ATTR_J: '1-3', 'Resource_List.select': 'ncpus=1'})\n\n        j_id = self.server.submit(j)\n\n        # 1. 
check job array has begun\n        self.server.expect(JOB, {'job_state': 'B'}, j_id)\n\n        self.mom.stop()\n\n        try:\n            self.server.deljob(j_id)\n        except PbsDeljobError as e:\n            err_msg = \"could not connect to MOM\"\n            self.assertTrue(err_msg in e.msg[0],\n                            \"Did not get the expected message\")\n            self.assertTrue(e.rc != 0, \"Exit code shows success\")\n        else:\n            raise self.failureException(\"qdel job array did not return error\")\n"
  },
  {
    "path": "test/tests/functional/pbs_job_array_comment.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestJobArrayComment(TestFunctional):\n    \"\"\"\n    Testing job array comment is accurate\n    \"\"\"\n\n    def test_job_array_comment(self):\n        \"\"\"\n        Testing that the job array comment is correct when one or more\n        subjobs are rejected by the MoM\n        \"\"\"\n        attr = {'resources_available.ncpus': 10}\n        self.server.manager(MGR_CMD_SET, NODE, attr, id=self.mom.shortname)\n        attr = {'job_history_enable': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, attr)\n        # create a mom hook that rejects and deletes subjobs 0, 5, and 7\n        hook_name = \"reject_subjob\"\n        hook_body = (\n            \"import pbs\\n\"\n            \"import re\\n\"\n            \"e = pbs.event()\\n\"\n            \"jid = str(e.job.id)\\n\"\n            r\"if re.match(r'[0-9]*\[[057]\]', jid):\" \"\\n\"\n            \"    e.job.delete()\\n\"\n            \"    e.reject()\\n\"\n            \"else:\\n\"\n            \"    e.accept()\\n\"\n        )\n        attr = {'event': 'execjob_begin', 'enabled': 'True'}\n        self.server.create_import_hook(hook_name, attr, hook_body)\n\n        # Check if the hook copy was successful\n        self.server.log_match(\"successfully sent hook file.*\" +\n                              hook_name + \".PY\", regexp=True,\n                              
max_attempts=60, interval=2)\n\n        test_job_array = Job(TEST_USER, attrs={\n            ATTR_J: '0-9',\n            'Resource_List.select': 'ncpus=1'\n        })\n        jid = self.server.submit(test_job_array)\n        attr = {\n            ATTR_state: 'B',\n            ATTR_comment: (MATCH_RE, 'Job Array Began at .*')\n        }\n        self.server.expect(JOB, attr, id=jid, attrop=PTL_AND)\n        self.server.expect(JOB, {ATTR_comment: 'Not Running: PBS Error:' +\n                                 ' Execution server rejected request' +\n                                 ' and failed'},\n                           id=test_job_array.create_subjob_id(jid, 0),\n                           extend='x')\n        attr = {\n            ATTR_state: 'R',\n            ATTR_comment: (MATCH_RE, 'Job run at .*')\n        }\n        self.server.expect(JOB, attr, extend='x',\n                           id=test_job_array.create_subjob_id(jid, 1),\n                           attrop=PTL_AND)\n        self.server.expect(JOB, {ATTR_comment: 'Not Running: PBS Error:' +\n                                 ' Execution server rejected request' +\n                                 ' and failed'},\n                           id=test_job_array.create_subjob_id(jid, 5),\n                           extend='x')\n        self.server.expect(JOB, {ATTR_comment: 'Not Running: PBS Error:' +\n                                 ' Execution server rejected request' +\n                                 ' and failed'},\n                           id=test_job_array.create_subjob_id(jid, 7),\n                           extend='x')\n"
  },
  {
    "path": "test/tests/functional/pbs_job_comment_on_resume.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestJobComment(TestFunctional):\n\n    \"\"\"\n    Testing job comment is accurate\n    \"\"\"\n\n    def test_job_comment_on_resume(self):\n        \"\"\"\n        Testing whether the job comment is accurate\n        after resuming from the suspended state.\n        \"\"\"\n        attr = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, attr, self.mom.shortname)\n        attr = {'queue_type': 'execution', 'started': 't', 'enabled': 't',\n                'priority': 200}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, attr, id='expressq')\n        J = Job(TEST_USER)\n        jid1 = self.server.submit(J)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        J = Job(TEST_USER, {'queue': 'expressq'})\n        jid2 = self.server.submit(J)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid1)\n        self.server.delete(jid2, wait=True)\n        self.server.log_match(\n            jid2 + \";dequeuing from expressq\")\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(JOB, {'comment': (MATCH_RE, '.*Job run at.*')},\n                           id=jid1)\n
  },
  {
    "path": "test/tests/functional/pbs_job_default_group.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2022 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nimport pwd\nfrom tests.functional import *\n\n\nclass TestJobDefaultGroup(TestFunctional):\n    \"\"\"\n    This test suite contains a test for a job which gets the default group\n    \"\"\"\n\n    def test_job_with_default_group(self):\n        \"\"\"\n        When a user's group is not present on the server machine,\n        the job gets a \"-default-\" group set on it,\n        and the MoM should still be able to run that job\n        \"\"\"\n        if self.server.hostname == self.mom.hostname:\n            self.skipTest(\"Server and Execution host must be different\")\n        # add a temporary user on the execution host and verify that the\n        # user is not present on the server host\n        self.user_name = \"ptlpbstestuser1\"\n        try:\n            pwd.getpwnam(self.user_name)\n            # user is present on the server host; must delete the user\n            # before qsub\n            cmd = f\"userdel {self.user_name}\"\n            res = self.du.run_cmd(self.server.hostname, cmd=cmd, sudo=True)\n            if res[\"rc\"] != 0:\n                raise PtlException(\"Unable to delete user on server host\")\n            self.logger.info(f\"Deleted {self.user_name} on server host.\")\n        except KeyError:\n            # good! 
user is not present on server host\n            # just as we needed\n            pass\n        self.testuser_created = True\n        cmd = f\"useradd -m {self.user_name}\"\n        res = self.du.run_cmd(self.mom.hostname, cmd=cmd, sudo=True)\n        if res[\"rc\"] != 0:\n            self.testuser_created = False\n            raise PtlException(\"Unable to create user on execution host\")\n        attr = {\"flatuid\": True}\n        self.server.manager(MGR_CMD_SET, SERVER, attr)\n        starttime = int(time.time())\n        user = PbsUser(self.user_name)\n        self.server.client = self.mom.hostname\n        jid = self.server.submit(Job(user), submit_dir=\"/tmp\")\n        self.server.client = self.server.hostname\n        attr = {\"job_state\": \"R\"}\n        self.server.expect(JOB, attr, id=jid)\n        self.mom.log_match(\n            f\"Job;{jid};No Group Entry for Group -default-\",\n            starttime=starttime,\n            existence=False,\n            max_attempts=30,\n        )\n\n    def tearDown(self):\n        super().tearDown()\n        # the attribute is unset if the test was skipped before user creation\n        if getattr(self, \"testuser_created\", False):\n            cmd = f\"userdel {self.user_name}\"\n            self.du.run_cmd(self.mom.hostname, cmd=cmd, sudo=True)\n
  },
  {
    "path": "test/tests/functional/pbs_job_dependency.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestJobDependency(TestFunctional):\n\n    \"\"\"\n    Test suite to test different job dependencies\n    \"\"\"\n    hook_body = \"\"\"\nimport pbs\ne = pbs.event()\nj = e.job\nif ('DEPENDENT_JOB' in j.Variable_List):\n    j.depend = pbs.depend(\"runone:\" + str(j.Variable_List['DEPENDENT_JOB']))\ne.accept()\n\"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        attr = {ATTR_RESC_TYPE: 'string', ATTR_RESC_FLAG: 'h'}\n        self.server.manager(MGR_CMD_CREATE, RSC, attr, id='NewRes')\n        self.scheduler.add_resource('NewRes', apply=True)\n        attr = {'resources_available.ncpus': 1}\n        self.mom.create_vnodes(attr, 6, attrfunc=self.cust_attr,\n                               usenatvnode=False)\n\n    def cust_attr(self, name, totnodes, numnode, attrib):\n        res_str = \"ver\" + str(numnode)\n        attr = {'resources_available.NewRes': res_str}\n        return {**attrib, **attr}\n\n    def assert_dependency(self, *jobs):\n        dl = []\n        num = len(jobs)\n        for ind, job in enumerate(jobs):\n            temp = self.server.status(JOB, id=job)[0][ATTR_depend]\n            dl.append([i.split('@')[0] for i in temp.split(':')[1:]])\n            self.assertEqual(num - 1, len(dl[ind]))\n\n        for ind, job in enumerate(jobs):\n            # make a list of dependency 
list that does not contain the\n            # enumerated job\n            check_dl = dl[:ind] + dl[ind + 1:]\n            for job_list in check_dl:\n                self.assertIn(job, job_list)\n\n    def test_runone_depend_basic(self):\n        \"\"\"\n        Test basic runone dependency tests\n        1 - Submit a job that runs and then submit a job having \"runone\"\n        dependency on the first job. Check that second job is deleted as\n        soon as the first job ends.\n        2 - Submit a can never run job and submit a second job having \"runone\"\n        dependency on the first job. Check that first job is deleted when\n        second job ends.\n        \"\"\"\n\n        job = Job(attrs={'Resource_List.select': '1:NewRes=ver3'})\n        job.set_sleep_time(10)\n        j1 = self.server.submit(job)\n        d_job = Job(attrs={'Resource_List.select': '2:ncpus=1',\n                           ATTR_depend: 'runone:' + j1})\n        j1_2 = self.server.submit(d_job)\n        self.server.expect(JOB, {'job_state': 'R'}, id=j1)\n        self.server.expect(JOB, {'job_state': 'H'}, id=j1_2)\n        self.server.accounting_match(\"Job deleted as result of dependency\",\n                                     id=j1_2)\n\n        job = Job(attrs={'Resource_List.select': '1:ncpus=4:NewRes=ver3'})\n        job.set_sleep_time(10)\n        j2 = self.server.submit(job)\n        d_job = Job(attrs={'Resource_List.select': '2:ncpus=1',\n                           ATTR_depend: 'runone:' + j2})\n        j2_2 = self.server.submit(d_job)\n        self.server.expect(JOB, {'job_state': 'H'}, id=j2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=j2_2)\n        self.server.accounting_match(\"Job deleted as result of dependency\",\n                                     id=j2)\n\n    def test_runone_dependency_on_running_job(self):\n        \"\"\"\n        Submit a job putting runone dependency on an already running job\n        check to see that the second job is put on system 
hold as soon\n        as it is submitted, then submit another job dependent on the\n        second held job and see if that gets held as well.\n        Also test that all jobs have dependency on other two jobs.\n        \"\"\"\n\n        job = Job()\n        j1 = self.server.submit(job)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=j1)\n\n        a = {ATTR_depend: 'runone:' + j1}\n        job2 = Job(attrs=a)\n        j2 = self.server.submit(job2)\n        self.server.expect(JOB, {ATTR_state: 'H', ATTR_h: 's'}, id=j2)\n        self.assert_dependency(j1, j2)\n\n        a = {ATTR_depend: 'runone:' + j2}\n        job3 = Job(attrs=a)\n        j3 = self.server.submit(job3)\n        self.server.expect(JOB, {ATTR_state: 'H', ATTR_h: 's'}, id=j3)\n        self.assert_dependency(j1, j2, j3)\n\n    def test_runone_depend_basic_on_job_array(self):\n        \"\"\"\n        Test basic runone dependency tests on job arrays\n        1 - Submit a job array that runs and then submit a job having \"runone\"\n        dependency on the parent job array. Check that second job is held as\n        soon as it is submitted.\n        2 - Submit a can never run array job and submit a second job having\n        \"runone\" dependency on the array parent. 
Check that first job is\n        deleted when second job runs.\n        3 - Submit a runone dependency on an array subjob and check if job\n        submission fails in this case.\n        \"\"\"\n\n        job = Job(attrs={'Resource_List.select': '1:ncpus=1',\n                         ATTR_J: '1-2'})\n        job.set_sleep_time(10)\n        j1 = self.server.submit(job)\n        d_job = Job(attrs={'Resource_List.select': '2:ncpus=1',\n                           ATTR_depend: 'runone:' + j1})\n        j1_2 = self.server.submit(d_job)\n        self.server.expect(JOB, {'job_state': 'B'}, id=j1)\n        self.server.expect(JOB, {'job_state': 'H'}, id=j1_2)\n        self.server.accounting_match(\"Job deleted as result of dependency\",\n                                     id=j1_2)\n\n        job = Job(attrs={'Resource_List.select': '1:ncpus=4:NewRes=ver3',\n                         ATTR_J: '1-2'})\n        job.set_sleep_time(10)\n        j2 = self.server.submit(job)\n        d_job = Job(attrs={'Resource_List.select': '2:ncpus=1',\n                           ATTR_depend: 'runone:' + j2})\n        j2_1 = self.server.submit(d_job)\n        self.server.expect(JOB, {'job_state': 'R'}, id=j2_1)\n        self.server.expect(JOB, {'job_state': 'H'}, id=j2)\n        self.server.accounting_match(\"Job deleted as result of dependency\",\n                                     id=j2)\n\n        job = Job(attrs={'Resource_List.select': '1:ncpus=1',\n                         ATTR_J: '1-2'})\n        job.set_sleep_time(10)\n        j3 = self.server.submit(job)\n        j3_1 = job.create_subjob_id(j3, 1)\n        d_job = Job(attrs={'Resource_List.select': '2:ncpus=1',\n                           ATTR_depend: 'runone:' + j3_1})\n        with self.assertRaises(PbsSubmitError):\n            self.server.submit(d_job)\n\n    def test_runone_queuejob_hook(self):\n        \"\"\"\n        Check to see that a queue job hook can set runone job\n        dependency.\n        \"\"\"\n        a = 
{'event': 'queuejob', 'enabled': 'True'}\n        self.server.create_import_hook('h1', a, self.hook_body)\n        job = Job()\n        j1 = self.server.submit(job)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=j1)\n\n        a = {ATTR_v: 'DEPENDENT_JOB=' + j1}\n        job2 = Job(attrs=a)\n        j2 = self.server.submit(job2)\n        self.server.expect(JOB, {ATTR_state: 'H', ATTR_h: 's'}, id=j2)\n\n        self.assert_dependency(j1, j2)\n\n    def test_runone_runjob_hook(self):\n        \"\"\"\n        Check to see that a run job hook cannot set runone job\n        dependency.\n        \"\"\"\n        a = {'event': 'runjob', 'enabled': 'True'}\n        self.server.create_import_hook('h1', a, self.hook_body)\n        job = Job()\n        j1 = self.server.submit(job)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=j1)\n\n        a = {ATTR_v: 'DEPENDENT_JOB=' + j1}\n        job2 = Job(attrs=a)\n        j2 = self.server.submit(job2)\n        logmsg = \"cannot modify job after runjob request has been accepted\"\n        self.server.log_match(logmsg)\n\n    def test_deleting_one_runone_dependency_job(self):\n        \"\"\"\n        Submit a job putting runone dependency on an already running job\n        check to see that the second job is put on system hold as soon\n        as it is submitted, then submit another job dependent on the\n        second held job and see if that gets held as well.\n        Then test that deleting second job updates dependencies on\n        other two jobs.\n        \"\"\"\n        job = Job()\n        j1 = self.server.submit(job)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=j1)\n\n        a = {ATTR_depend: 'runone:' + j1}\n        job2 = Job(attrs=a)\n        j2 = self.server.submit(job2)\n        self.server.expect(JOB, {ATTR_state: 'H', ATTR_h: 's'}, id=j2)\n\n        a = {ATTR_depend: 'runone:' + j2}\n        job3 = Job(attrs=a)\n        j3 = self.server.submit(job3)\n        self.server.expect(JOB, 
{ATTR_state: 'H', ATTR_h: 's'}, id=j3)\n\n        self.assert_dependency(j1, j2, j3)\n\n        self.server.delete(j2)\n        self.assert_dependency(j1, j3)\n\n    def check_job(self, attr, msg, state):\n        \"\"\"\n        helper function to submit a dependent job and check the job\n        to see if the dependency is met or rejected\n        \"\"\"\n        j = Job(attrs=attr)\n        jid = self.server.submit(j)\n        msg_to_check = jid + ';' + msg\n        self.server.expect(JOB, {ATTR_state: state}, id=jid, extend='x')\n        self.server.log_match(msg_to_check)\n        if state == 'R':\n            self.server.delete(jid)\n\n    def test_dependency_on_finished_job(self):\n        \"\"\"\n        Submit a short job and when it ends submit jobs dependent on the\n        finished job and check that after, afterok, afterany dependencies\n        do not hold the dependent job. Also check that the job is not\n        accepted for any other afternotok, before, beforeok, beforenotok\n        dependency types.\n        \"\"\"\n        a = {'job_history_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        job = Job()\n        job.set_sleep_time(5)\n        j1 = self.server.submit(job)\n        self.server.expect(JOB, {ATTR_state: 'F'}, id=j1, extend='x')\n        accept_msg = j1 + \" Job has finished, dependency satisfied\"\n        reject_msg = j1 + \" Finished job did not satisfy dependency\"\n\n        a = {ATTR_depend: 'after:' + j1}\n        self.check_job(a, accept_msg, 'R')\n\n        a = {ATTR_depend: 'afterok:' + j1}\n        self.check_job(a, accept_msg, 'R')\n\n        a = {ATTR_depend: 'afterany:' + j1}\n        self.check_job(a, accept_msg, 'R')\n\n        a = {ATTR_depend: 'afternotok:' + j1}\n        self.check_job(a, reject_msg, 'F')\n\n        a = {ATTR_depend: 'before:' + j1}\n        self.check_job(a, reject_msg, 'F')\n\n        a = {ATTR_depend: 'beforeany:' + j1}\n        self.check_job(a, reject_msg, 'F')\n\n   
     a = {ATTR_depend: 'beforeok:' + j1}\n        self.check_job(a, reject_msg, 'F')\n\n        a = {ATTR_depend: 'beforenotok:' + j1}\n        self.check_job(a, reject_msg, 'F')\n\n    def test_dependency_on_multiple_finished_job(self):\n        \"\"\"\n        Submit a short job and a long job; when the short job ends,\n        submit dependent jobs on the finished and running jobs,\n        check that the after dependency runs, while afterok, afterany\n        dependencies hold the dependent job, because there is a running\n        job in the system. Also check that the job is not accepted for any\n        other afternotok, before, beforeok, beforenotok dependency types.\n        \"\"\"\n        a = {'job_history_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        job = Job()\n        job.set_sleep_time(1)\n        j1 = self.server.submit(job)\n        self.server.expect(JOB, {ATTR_state: 'F'}, id=j1, extend='x')\n        job = Job()\n        job.set_sleep_time(500)\n        j2 = self.server.submit(job)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=j2)\n        accept_msg = j1 + \" Job has finished, dependency satisfied\"\n        reject_msg = j1 + \" Finished job did not satisfy dependency\"\n\n        a = {ATTR_depend: 'after:' + j1 + \":\" + j2}\n        self.check_job(a, accept_msg, 'R')\n\n        a = {ATTR_depend: 'afterok:' + j1 + \":\" + j2}\n        self.check_job(a, accept_msg, 'H')\n\n        a = {ATTR_depend: 'afterany:' + j1 + \":\" + j2}\n        self.check_job(a, accept_msg, 'H')\n\n        a = {ATTR_depend: 'afternotok:' + j1 + \":\" + j2}\n        self.check_job(a, reject_msg, 'F')\n\n        a = {ATTR_depend: 'before:' + j1 + \":\" + j2}\n        self.check_job(a, reject_msg, 'F')\n\n        a = {ATTR_depend: 'beforeany:' + j1 + \":\" + j2}\n        self.check_job(a, reject_msg, 'F')\n\n        a = {ATTR_depend: 'beforeok:' + j1 + \":\" + j2}\n        self.check_job(a, reject_msg, 'F')\n\n        a = {ATTR_depend: 
'beforenotok:' + j1 + \":\" + j2}\n        self.check_job(a, reject_msg, 'F')\n\n    def check_depend_delete_msg(self, pjid, cjid):\n        \"\"\"\n        helper function to check for a message that the dependent job (cjid)\n        is deleted because of the parent job (pjid)\n        \"\"\"\n        msg = cjid + \";Job deleted as result of dependency on job \" + pjid\n        self.server.log_match(msg)\n\n    def test_job_end_deleting_chain_of_dependency(self):\n        \"\"\"\n        Submit a chain of dependent jobs and see that when one of the\n        running jobs ends, all the dependent jobs (and their dependent jobs)\n        are also deleted.\n        \"\"\"\n\n        a = {'job_history_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        job = Job()\n        job.set_sleep_time(10)\n        j1 = self.server.submit(job)\n\n        a = {ATTR_depend: \"afternotok:\" + j1}\n        job = Job(attrs=a)\n        j2 = self.server.submit(job)\n\n        a = {ATTR_depend: \"afterok:\" + j2}\n        job = Job(attrs=a)\n        j3 = self.server.submit(job)\n\n        a = {ATTR_depend: \"afterok:\" + j1}\n        job = Job(attrs=a)\n        j4 = self.server.submit(job)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.expect(JOB, {ATTR_state: 'F'}, id=j1, extend='x')\n        self.server.expect(JOB, {ATTR_state: 'F'}, id=j2, extend='x',\n                           max_attempts=3)\n        self.check_depend_delete_msg(j1, j2)\n        self.server.expect(JOB, {ATTR_state: 'F'}, id=j3, extend='x',\n                           max_attempts=3)\n        self.check_depend_delete_msg(j2, j3)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=j4, max_attempts=3)\n\n    def test_qdel_deleting_chain_of_dependency(self):\n        \"\"\"\n        Submit a chain of dependent jobs and see that when one of the\n        running jobs is deleted, all the 
jobs whose dependencies can no longer be satisfied\n        are also deleted.\n        Try the same test with array jobs as well.\n        \"\"\"\n\n        a = {'job_history_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        job = Job()\n        j1 = self.server.submit(job)\n\n        a = {ATTR_depend: \"afterok:\" + j1}\n        job = Job(attrs=a)\n        j2 = self.server.submit(job)\n\n        a = {ATTR_depend: \"afternotok:\" + j2}\n        job = Job(attrs=a)\n        j3 = self.server.submit(job)\n\n        a = {ATTR_depend: \"after:\" + j1}\n        job = Job(attrs=a)\n        j4 = self.server.submit(job)\n\n        a = {ATTR_depend: \"afternotok:\" + j1}\n        job = Job(attrs=a)\n        j5 = self.server.submit(job)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=j4)\n        self.server.expect(JOB, {ATTR_state: 'H'}, id=j5)\n        self.server.delete(j1)\n        self.server.expect(JOB, {ATTR_state: 'F'}, id=j1, extend='x')\n        self.server.expect(JOB, {ATTR_state: 'F'}, id=j2, extend='x',\n                           max_attempts=3)\n        self.check_depend_delete_msg(j1, j2)\n        self.server.expect(JOB, {ATTR_state: 'F'}, id=j3, extend='x',\n                           max_attempts=3)\n        self.check_depend_delete_msg(j2, j3)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=j4)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=j5)\n        self.server.delete(j4)\n        self.server.delete(j5)\n\n        # repeat the steps for array job\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        job = Job(attrs={ATTR_J: '1-2'})\n        j5 = self.server.submit(job)\n\n        a = {ATTR_depend: \"afterok:\" + j5}\n        job = Job(attrs=a)\n        j6 = 
self.server.submit(job)\n\n        a = {ATTR_depend: \"afternotok:\" + j6}\n        job = Job(attrs=a)\n        j7 = self.server.submit(job)\n\n        a = {ATTR_depend: \"after:\" + j5}\n        job = Job(attrs=a)\n        j8 = self.server.submit(job)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.expect(JOB, {'job_state': 'B'}, id=j5)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=j8)\n        self.server.delete(j5)\n        self.server.expect(JOB, {ATTR_state: 'F'}, id=j5, extend='x')\n        self.server.expect(JOB, {ATTR_state: 'F'}, id=j6, extend='x',\n                           max_attempts=3)\n        self.check_depend_delete_msg(j5, j6)\n        self.server.expect(JOB, {ATTR_state: 'F'}, id=j7, extend='x',\n                           max_attempts=3)\n        self.check_depend_delete_msg(j6, j7)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=j8)\n\n    def test_qdel_held_job_deleting_chain_of_dependency(self):\n        \"\"\"\n        Submit a chain of dependent jobs and see that when one of the\n        held jobs is deleted, all the jobs whose dependencies can no\n        longer be satisfied are also deleted.\n        \"\"\"\n\n        a = {'job_history_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        job = Job()\n        j1 = self.server.submit(job)\n\n        a = {ATTR_depend: \"afternotok:\" + j1}\n        job = Job(attrs=a)\n        j2 = self.server.submit(job)\n\n        a = {ATTR_depend: \"afterok:\" + j2}\n        job = Job(attrs=a)\n        j3 = self.server.submit(job)\n\n        a = {ATTR_depend: \"after:\" + j2}\n        job = Job(attrs=a)\n        j4 = self.server.submit(job)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.expect(JOB, {ATTR_state: 'H'}, id=j2)\n        self.server.expect(JOB, {ATTR_state: 'H'}, id=j3)\n        
self.server.expect(JOB, {ATTR_state: 'H'}, id=j4)\n        self.server.delete(j2)\n        self.server.expect(JOB, {ATTR_state: 'F'}, id=j2, extend='x',\n                           max_attempts=3)\n        self.server.expect(JOB, {ATTR_state: 'F'}, id=j3, extend='x',\n                           max_attempts=3)\n        self.check_depend_delete_msg(j2, j3)\n        self.server.expect(JOB, {ATTR_state: 'F'}, id=j4, extend='x',\n                           max_attempts=3)\n        self.check_depend_delete_msg(j2, j4)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=j1)\n\n    def test_only_after_dependency_chain_is_deleted(self):\n        \"\"\"\n        Submit a chain of dependent jobs and see that only downstream jobs\n        with after dependencies are deleted.\n        \"\"\"\n\n        a = {'job_history_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        job = Job()\n        j1 = self.server.submit(job)\n\n        a = {ATTR_depend: \"afterok:\" + j1}\n        job = Job(attrs=a)\n        j2 = self.server.submit(job)\n\n        a = {ATTR_depend: \"afterok:\" + j2}\n        job = Job(attrs=a)\n        j3 = self.server.submit(job)\n\n        a = {ATTR_depend: \"after:\" + j3}\n        job = Job(attrs=a)\n        j4 = self.server.submit(job)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.expect(JOB, {ATTR_state: 'H'}, id=j2)\n        self.server.expect(JOB, {ATTR_state: 'H'}, id=j3)\n        self.server.expect(JOB, {ATTR_state: 'H'}, id=j4)\n        self.server.delete(j3)\n        self.server.expect(JOB, {ATTR_state: 'H'}, id=j2)\n        self.server.expect(JOB, {ATTR_state: 'F'}, id=j3, extend='x',\n                           max_attempts=3)\n        self.server.expect(JOB, {ATTR_state: 'F'}, id=j4, extend='x',\n                           max_attempts=3)\n        self.check_depend_delete_msg(j3, j4)\n        
self.server.expect(JOB, {ATTR_state: 'R'}, id=j1)\n"
  },
  {
    "path": "test/tests/functional/pbs_job_purge.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestJobPurge(TestFunctional):\n    \"\"\"\n    This test suite tests the Job purge process\n    \"\"\"\n\n    def test_job_files_after_execution(self):\n        \"\"\"\n        Checks the job related files and ensures that files are\n        deleted successfully upon job completion\n        \"\"\"\n        a = {'resources_available.ncpus': 3}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        # Submit a normal and an array job\n        j = Job(TEST_USER)\n        j.set_sleep_time(30)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid, offset=25)\n        j1 = Job(TEST_USER)\n        j1.set_sleep_time(30)\n        j1.set_attributes({ATTR_J: '1-2'})\n        jid_1 = self.server.submit(j1)\n        jobid_list = [jid, j1.create_subjob_id(jid_1, 1),\n                      j1.create_subjob_id(jid_1, 2)]\n        self.server.expect(JOB, {'job_state': 'B'}, id=jid_1)\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid_1, offset=25)\n        # Checking the job control(.JB) file, job script(.SC) file\n        # and job task(.TK) directory after successful job execution\n        jobs_suffix_list = ['.JB', '.SC', '.TK']\n        for jobid in jobid_list:\n            for suffix in 
jobs_suffix_list:\n                job_file = self.mom.get_formed_path(\n                            self.mom.pbs_conf['PBS_HOME'],\n                            'mom_priv', 'jobs', jobid + suffix)\n                self.assertFalse(self.mom.isfile(path=job_file, sudo=True))\n"
  },
  {
    "path": "test/tests/functional/pbs_job_requeue_timeout_error.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\n@requirements(num_moms=2)\nclass TestJobRequeueTimeoutErrorMsg(TestFunctional):\n    \"\"\"\n    This test suite is for testing the new job_requeue_timeout error\n    message that informs the user that the job rerun process is still\n    in progress.\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n\n        if len(self.moms) != 2:\n            self.skip_test(reason=\"need 2 mom hosts: -p moms=<m1>:<m2>\")\n\n        self.server.set_op_mode(PTL_CLI)\n\n        # PBSTestSuite returns the moms passed in as parameters as dictionary\n        # of hostname and MoM object\n        self.momA = self.moms.values()[0]\n        self.momB = self.moms.values()[1]\n        self.momA.delete_vnode_defs()\n        self.momB.delete_vnode_defs()\n\n        self.hostA = self.momA.shortname\n        self.hostB = self.momB.shortname\n\n        self.server.manager(MGR_CMD_DELETE, NODE, None, \"\")\n\n        islocal = self.du.is_localhost(self.hostA)\n        if islocal is False:\n            self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostA)\n        else:\n            self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostB)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'job_requeue_timeout': 1})\n\n    def test_error_message(self):\n        j = Job(TEST_USER, attrs={ATTR_N: 'job_requeue_timeout'})\n\n    
    test = []\n        test += ['dd if=/dev/zero of=file bs=1024 count=0 seek=10240\\n']\n        test += ['cat file\\n']\n        test += ['sleep 30\\n']\n\n        j.create_script(test, hostname=self.server.client)\n        jid = self.server.submit(j)\n\n        self.server.expect(\n            JOB, {'job_state': 'R', 'substate': 42}, id=jid, max_attempts=30,\n            interval=2)\n        try:\n            self.server.rerunjob(jid)\n        except PbsRerunError as e:\n            self.assertTrue('qrerun: Response timed out. Job rerun request ' +\n                            'still in progress for' in e.msg[0])\n"
  },
  {
    "path": "test/tests/functional/pbs_job_routing.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestJobRouting(TestFunctional):\n    \"\"\"\n    This test suite validates state of parent job and subjobs in a Job Array.\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n\n        a = {'resources_available.ncpus': 3}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'false'})\n\n    def test_t1(self):\n        \"\"\"\n        This test case validates Job array state when one\n        of the subjobs is deleted while the array job is HELD in a routing\n        queue and is released after the subjob is deleted.\n        \"\"\"\n        dflt_q = self.server.default_queue\n        # Create a route queue with destination to default queue\n        queue_attrib = {ATTR_qtype: 'route',\n                        ATTR_routedest: dflt_q,\n                        ATTR_enable: 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, queue_attrib, id='routeq')\n\n        job_attrib = Job(TEST_USER, attrs={ATTR_queue: 'routeq',\n                                           ATTR_l + '.ncpus': 1,\n                                           ATTR_h: None,\n                                           ATTR_J: '1-2',\n                                           ATTR_r: 'y'})\n\n        # Submit an array job in Held state\n   
     jid = self.server.submit(job_attrib)\n\n        self.server.expect(JOB, {ATTR_state: 'H'}, jid)\n        self.server.expect(JOB, {ATTR_state + '=Q': 2}, count=True,\n                           id=jid, extend='t')\n        subjobs = self.server.status(JOB, id=jid, extend='t')\n\n        # Delete one of the subjobs\n        self.server.deljob(subjobs[-1]['id'])\n\n        self.server.expect(JOB, {ATTR_state: 'H'}, jid)\n        self.server.expect(JOB, {ATTR_state + '=Q': 1}, count=True,\n                           id=jid, extend='t')\n        self.server.expect(JOB, {ATTR_state + '=X': 1}, count=True,\n                           id=jid, extend='t')\n        self.server.expect(JOB, {ATTR_queue + '=routeq': 3}, count=True,\n                           id=jid, extend='t')\n\n        # Release the array and verify job array state\n        self.server.rlsjob(jid, 'u')\n        self.server.expect(JOB, {ATTR_state: 'Q'}, jid)\n        self.server.expect(JOB, {ATTR_state + '=Q': 2}, count=True,\n                           id=jid, extend='t')\n        self.server.expect(JOB, {ATTR_state + '=X': 1}, count=True,\n                           id=jid, extend='t')\n        self.server.expect(JOB, {ATTR_queue + '=routeq': 3}, count=True,\n                           id=jid, extend='t')\n\n        # No errors should be in server logs\n        msg = '(job_route) Request invalid for state of job, state=7'\n        self.server.log_match(msg, id=jid, existence=False)\n\n        # Start routing queue and verify job array queue set to default queue\n        a = {ATTR_start: 'True'}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, id='routeq')\n        self.server.expect(JOB, {ATTR_queue + '=' + dflt_q: 3}, count=True,\n                           id=jid, extend='t')\n\n    def test_t2(self):\n        \"\"\"\n        This test case validates the job array state when running subjobs\n        are forcefully deleted. 
After deleting the running subjobs,\n        the array job is held and released; this should cause the job array\n        state to change from B to Q.\n        \"\"\"\n\n        # Submit a job array of size 3\n        job = Job()\n        job.set_attributes({ATTR_l + '.ncpus': 1,\n                            ATTR_J: '1-3',\n                            ATTR_r: 'y'})\n\n        job.set_sleep_time(1000)\n        jid = self.server.submit(job)\n\n        self.server.expect(JOB, {ATTR_state + '=Q': 4}, count=True,\n                           id=jid, extend='t')\n\n        # Start a scheduling cycle. This will move all 3 subjobs to R state\n        # and the parent job to B state.\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'true'})\n\n        self.server.expect(JOB, {ATTR_state + '=R': 3}, count=True,\n                           id=jid, extend='t')\n        self.server.expect(JOB, {ATTR_state + '=B': 1}, count=True,\n                           id=jid, extend='t')\n\n        # Delete two of the subjobs.\n        subjobs = self.server.status(JOB, id=jid, extend='t')\n        self.server.deljob(subjobs[1]['id'])\n        self.server.deljob(subjobs[2]['id'])\n\n        # Mark the node offline and rerun the third subjob.\n        self.momA = list(self.moms.values())[0]\n        self.hostA = self.momA.shortname\n        a = {'state': 'offline'}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.hostA)\n\n        # Rerun the third subjob; it will move to Q state.\n        self.server.rerunjob(subjobs[3]['id'])\n        self.server.expect(JOB, {ATTR_state + '=Q': 1}, count=True,\n                           id=jid, extend='t')\n        self.server.expect(JOB, {ATTR_state + '=X': 2}, count=True,\n                           id=jid, extend='t')\n        self.server.expect(JOB, {ATTR_state + '=B': 1}, count=True,\n                           id=jid, extend='t')\n\n        # Hold the job array. 
Parent job will move to H state.\n\n        self.server.holdjob(jid)\n        self.server.expect(JOB, {ATTR_state + '=H': 1}, count=True,\n                           id=jid, extend='t')\n        self.server.expect(JOB, {ATTR_state + '=Q': 1}, count=True,\n                           id=jid, extend='t')\n        self.server.expect(JOB, {ATTR_state + '=X': 2}, count=True,\n                           id=jid, extend='t')\n\n        # Release the job and validate the array job state.\n        # The expected parent array job state is Q.\n\n        self.server.rlsjob(jid, 'u')\n        self.server.expect(JOB, {ATTR_state + '=Q': 2}, count=True,\n                           id=jid, extend='t')\n        self.server.expect(JOB, {ATTR_state + '=X': 2}, count=True,\n                           id=jid, extend='t')\n\n    def test_route_resource_with_cr(self):\n        \"\"\"\n        Test submitting and routing a job whose select spec\n        contains stray CR characters from Windows line endings\n        \"\"\"\n\n        dflt_q = self.server.default_queue\n        # Create a route queue with destination to default queue\n        queue_attrib = {ATTR_qtype: 'route',\n                        ATTR_routedest: dflt_q,\n                        ATTR_enable: 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, queue_attrib, id='routeq')\n\n        select = \"select=ncpus=1\\r:mem=1gb\\r:arch=linux\"\n        job = Job(TEST_USER, attrs={ATTR_queue: 'routeq',\n                                    ATTR_l: select})\n        try:\n            jid = self.server.submit(job)\n        except PbsSubmitError as e:\n            error_msg = \"qsub: Illegal attribute or \" \\\n                        \"resource value Resource_List.:mem\"\n            self.assertEqual(e.msg[0], error_msg)\n        else:\n            self.fail(\"Job submit did not fail as expected.\")\n"
  },
  {
    "path": "test/tests/functional/pbs_job_script.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nfrom tests.functional import *\n\n\nclass TestPbsJobScript(TestFunctional):\n    \"\"\"\n    Test suite for testing PBS's job script functionality\n    \"\"\"\n\n    def submit_job(self, addnewline=False):\n        a = {'resources_available.ncpus': 2}\n        self.mom.create_vnodes(a, 500, vname='Verylongvnodename')\n\n        selstr = \"#PBS -l select=1\"\n        for node in range(500):\n            selstr += \":ncpus=1:vnode=Verylongvnodename[\" + str(node) + \"]+1\"\n            if addnewline and node == 250:\n                selstr += \"\\\\\\n\"\n        selstr = selstr[0:-2]\n\n        scr = []\n        scr += [selstr + '\\n']\n        scr += ['%s 100\\n' % (self.mom.sleep_cmd)]\n\n        j = Job()\n        j.create_script(scr)\n        jid = self.server.submit(j)\n        return jid\n\n    def test_long_select_spec(self):\n        \"\"\"\n        Test that PBS is able to accept job scripts with a very long select\n        specification with no newline in it.\n        \"\"\"\n        jid = self.submit_job()\n        execvnode = \"\"\n        for node in range(500):\n            execvnode += \"(Verylongvnodename[\" + str(node) + \"]:ncpus=1)+\"\n        execvnode = execvnode[0:-1]\n        self.server.expect(JOB, {'job_state': 'R', 'exec_vnode': execvnode},\n                           id=jid)\n\n    def test_long_select_spec_extend(self):\n        \"\"\"\n        Test that PBS is able to accept job scripts with a very long select\n        specification with a newline in it.\n        \"\"\"\n        jid = self.submit_job(addnewline=True)\n        execvnode = \"\"\n        for node in range(500):\n            execvnode += \"(Verylongvnodename[\" + str(node) + \"]:ncpus=1)+\"\n        execvnode = execvnode[0:-1]\n        self.server.expect(JOB, {'job_state': 'R', 'exec_vnode': execvnode},\n                           id=jid)\n"
  },
  {
    "path": "test/tests/functional/pbs_job_sort_formula.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestJobSortFormula(TestFunctional):\n    \"\"\"\n    Tests for the job_sort_formula\n    \"\"\"\n\n    def test_job_sort_formula_negative_value(self):\n        \"\"\"\n        Test to see that negative values in the\n        job_sort_formula sort properly\n        \"\"\"\n        a = {'resources_available.ncpus': 2}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        self.server.manager(MGR_CMD_CREATE, RSC, {'type': 'float'}, id='foo')\n\n        a = {'job_sort_formula': 'foo', 'scheduling': 'False'}\n        self.server.manager(MGR_CMD_SET, SERVER, a, runas=ROOT_USER)\n\n        j1 = Job(TEST_USER, attrs={'Resource_List.foo': -2})\n        jid1 = self.server.submit(j1)\n        j2 = Job(TEST_USER, attrs={'Resource_List.foo': -1})\n        jid2 = self.server.submit(j2)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        c = self.scheduler.cycles(lastN=1)[0]\n        job_order = [jid2, jid1]\n        for i, job in enumerate(job_order):\n            self.assertEqual(job.split('.')[0], c.political_order[i])\n\n        self.server.expect(JOB, {'job_state=R': 2})\n"
  },
  {
    "path": "test/tests/functional/pbs_job_status_after_mom_hup.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\nfrom time import sleep\n\n\nclass Test_job_status_after_mom_hup(TestFunctional):\n    \"\"\"\n    Test the status of jobs after the pbs_mom daemon has been signalled HUP.\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n\n    def test_job_status(self):\n        \"\"\"\n        Check whether the job is still running after the MoM process is\n        sent a HUP signal.\n        \"\"\"\n        job = Job()\n        job.set_sleep_time(1000)\n        jid = self.server.submit(job)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n        self.mom.signal('-HUP')\n\n        sleep(2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n"
  },
  {
    "path": "test/tests/functional/pbs_job_task.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestJobTask(TestFunctional):\n    \"\"\"\n    This test suite validates job tasks started using pbsdsh or pbs_tmrsh\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'job_history_enable': 'true'})\n\n    def check_jobs_file(self, out_file):\n        \"\"\"\n        This function validates the job's output file\n        \"\"\"\n        ret = self.du.cat(hostname=self.server.shortname,\n                          filename=out_file,\n                          runas=TEST_USER)\n        _msg = \"cat command failed with error:%s\" % ret['err']\n        self.assertEqual(ret['rc'], 0, _msg)\n        _msg = 'Job\\'s output file has unexpected content:\"%s\"' % ret['out']\n        self.assertEqual(ret['out'][0], \"OK\", _msg)\n        self.logger.info(\"Job has executed without any error\")\n\n    def test_singlenode_pbsdsh(self):\n        \"\"\"\n        This test case validates that a task started by pbsdsh runs\n        properly within a single-node job.\n        \"\"\"\n        a = {ATTR_S: '/bin/bash'}\n        job = Job(TEST_USER, attrs=a)\n        pbsdsh_cmd = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                  'bin', 'pbsdsh')\n        script = ['%s echo \"OK\"' % pbsdsh_cmd]\n        
job.create_script(body=script)\n        jid = self.server.submit(job)\n        self.server.expect(JOB, {'job_state': 'F'}, id=jid, extend='x')\n\n        job_status = self.server.status(JOB, id=jid, extend='x')\n        if job_status:\n            job_output_file = job_status[0]['Output_Path'].split(':')[1]\n        self.check_jobs_file(job_output_file)\n\n    def test_singlenode_pbs_tmrsh(self):\n        \"\"\"\n        This test case validates that a task started by pbs_tmrsh runs\n        properly within a single-node job.\n        \"\"\"\n        a = {ATTR_S: '/bin/bash'}\n        job = Job(TEST_USER, attrs=a)\n        pbstmrsh_cmd = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                    'bin', 'pbs_tmrsh')\n        script = ['%s $(hostname -f) echo \"OK\"' % pbstmrsh_cmd]\n        job.create_script(body=script)\n        jid = self.server.submit(job)\n\n        self.server.expect(JOB, {'job_state': 'F'}, id=jid, extend='x')\n\n        job_status = self.server.status(JOB, id=jid, extend='x')\n        if job_status:\n            job_output_file = job_status[0]['Output_Path'].split(':')[1]\n        self.check_jobs_file(job_output_file)\n\n    @requirements(num_moms=3)\n    def test_invoke_pbs_tmrsh_from_sister_mom(self):\n        \"\"\"\n        This test case verifies that pbs_tmrsh invoked from a sister MoM\n        executes successfully\n        \"\"\"\n        # Skip the test if the number of MoMs provided is not equal to three\n        if len(self.moms) != 3:\n            self.skipTest(\"test requires three MoMs as input, \" +\n                          \"use -p moms=<mom1:mom2:mom3>\")\n        mom1 = list(self.moms)[0]\n        mom2 = list(self.moms)[1]\n        mom3 = list(self.moms)[2]\n        fqdn_mom2 = socket.getfqdn(mom2)\n        fqdn_mom3 = socket.getfqdn(mom3)\n        pbstmrsh_cmd = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                    'bin', 'pbs_tmrsh')\n\n        script_mom2 = 
\"\"\"#!/bin/bash\\n%s %s hostname\"\"\" % \\\n                      (pbstmrsh_cmd, fqdn_mom3)\n        fn = self.du.create_temp_file(hostname=mom2, body=script_mom2)\n        self.du.chmod(hostname=mom2, path=fn, mode=0o755)\n        a = {ATTR_S: '/bin/bash'}\n        script = ['%s %s %s' % (pbstmrsh_cmd, fqdn_mom2, fn)]\n        job = Job(TEST_USER, attrs=a)\n        job.set_attributes({'Resource_List.select': '3:ncpus=1',\n                            'Resource_List.place': 'scatter'})\n        job.create_script(body=script)\n        jid = self.server.submit(job)\n\n        self.server.expect(JOB, {'job_state': 'F'}, id=jid, extend='x')\n        job_status = self.server.status(JOB, id=jid, extend='x')\n        if job_status:\n            job_output_file = job_status[0]['Output_Path'].split(':')[1]\n\n        ret = self.du.cat(hostname=mom1, filename=job_output_file,\n                          runas=TEST_USER)\n        self.assertEqual(ret['out'][0], mom3, \"pbs_tmrsh invoked from sister\"\n                                              \" mom did not execute \"\n                                              \"successfully\")\n"
  },
  {
    "path": "test/tests/functional/pbs_maintenance_reservations.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestMaintenanceReservations(TestFunctional):\n    \"\"\"\n    Various tests to verify the behavior of maintenance reservations.\n    Two moms (-p \"servers=M1,moms=M1:M2\") are recommended for the tests.\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        self.server.set_op_mode(PTL_CLI)\n\n    def test_maintenance_acl_denied(self):\n        \"\"\"\n        Test if the maintenance reservation is denied for a common user\n        \"\"\"\n        now = int(time.time())\n\n        a = {'reserve_start': now + 3600,\n             'reserve_end': now + 7200}\n        h = [self.mom.shortname]\n        r = Reservation(TEST_USER, attrs=a, hosts=h)\n\n        with self.assertRaises(PbsSubmitError) as err:\n            self.server.submit(r)\n\n        msg = err.exception.msg[0].strip()\n\n        self.assertEqual(\"pbs_rsub: Unauthorized Request\", msg)\n\n    def test_maintenance_conflicting_parameters(self):\n        \"\"\"\n        Test if conflicting parameters are refused\n        \"\"\"\n        now = int(time.time())\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'managers': '%s@*' % TEST_USER})\n\n        a = {'Resource_List.select': '1',\n             'reserve_start': now + 3600,\n             'reserve_end': now + 7200}\n        h = [self.mom.shortname]\n      
  r = Reservation(TEST_USER, attrs=a, hosts=h)\n\n        with self.assertRaises(PbsSubmitError) as err:\n            self.server.submit(r)\n\n        msg = err.exception.msg[0].strip()\n\n        self.assertEqual(\"pbs_rsub: can't use -l with --hosts\", msg)\n\n        a = {'Resource_List.place': 'scatter',\n             'reserve_start': now + 3600,\n             'reserve_end': now + 7200}\n        h = [self.mom.shortname]\n        r = Reservation(TEST_USER, attrs=a, hosts=h)\n\n        with self.assertRaises(PbsSubmitError) as err:\n            self.server.submit(r)\n\n        msg = err.exception.msg[0].strip()\n\n        self.assertEqual(\"pbs_rsub: can't use -l with --hosts\", msg)\n\n        a = {'interactive': 300,\n             'reserve_start': now + 3600,\n             'reserve_end': now + 7200}\n        h = [self.mom.shortname]\n        r = Reservation(TEST_USER, attrs=a, hosts=h)\n\n        with self.assertRaises(PbsSubmitError) as err:\n            self.server.submit(r)\n\n        msg = err.exception.msg[0].strip()\n\n        self.assertEqual(\"pbs_rsub: can't use -I with --hosts\", msg)\n\n    def test_maintenance_unknown_hosts(self):\n        \"\"\"\n        Test if pbs_rsub with all unknown hosts returns an error.\n        Test if pbs_rsub with any unknown host returns an error.\n        Test if --hosts without a host parameter returns an error.\n        \"\"\"\n        now = int(time.time())\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'managers': '%s@*' % TEST_USER})\n\n        a = {'reserve_start': now + 3600,\n             'reserve_end': now + 7200}\n        h = [\"foo\"]\n        r = Reservation(TEST_USER, attrs=a, hosts=h)\n\n        msg = \"\"\n\n        try:\n            self.server.submit(r)\n        except PbsSubmitError as err:\n            msg = err.msg[0].strip()\n\n        self.assertEqual(\"pbs_rsub: Host with resources not found: foo\", msg)\n\n        a = {'reserve_start': now + 3600,\n             
'reserve_end': now + 7200}\n        h = [self.mom.shortname, \"foo\"]\n        r = Reservation(TEST_USER, attrs=a, hosts=h)\n\n        with self.assertRaises(PbsSubmitError) as err:\n            self.server.submit(r)\n\n        msg = err.exception.msg[0].strip()\n\n        self.assertEqual(\"pbs_rsub: Host with resources not found: foo\", msg)\n\n        a = {'reserve_start': now + 3600,\n             'reserve_end': now + 7200}\n        h = [\"\"]\n        r = Reservation(TEST_USER, attrs=a, hosts=h)\n\n        with self.assertRaises(PbsSubmitError) as err:\n            self.server.submit(r)\n\n        msg = err.exception.msg[0].strip()\n\n        self.assertEqual(\"pbs_rsub: missing host(s)\", msg)\n\n    def test_maintenance_duplicate_host(self):\n        \"\"\"\n        Test if pbs_rsub with a duplicate host returns an error.\n        \"\"\"\n        now = int(time.time())\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'managers': '%s@*' % TEST_USER})\n\n        a = {'reserve_start': now + 3600,\n             'reserve_end': now + 7200}\n        h = [\"foo\", \"foo\"]\n        r = Reservation(TEST_USER, attrs=a, hosts=h)\n\n        with self.assertRaises(PbsSubmitError) as err:\n            self.server.submit(r)\n\n        msg = err.exception.msg[0].strip()\n\n        self.assertEqual(\"pbs_rsub: Duplicate host: foo\", msg)\n\n    def test_maintenance_confirm(self):\n        \"\"\"\n        Test if the maintenance reservation (prefixed with 'M') is immediately\n        confirmed without the scheduler and the select, place, and resv_nodes\n        are correctly crafted. 
Also check if resv_enable = False is\n        ignored.\n        \"\"\"\n        now = int(time.time())\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': False})\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'managers': '%s@*' % TEST_USER})\n\n        a = {'resources_available.ncpus': 2}\n        self.mom.create_vnodes(a, num=2)\n        vn = self.mom.shortname\n\n        a = {'resv_enable': False}\n        self.server.manager(MGR_CMD_SET, NODE, a,\n                            id=self.mom.shortname, runas=TEST_USER)\n        self.server.manager(MGR_CMD_SET, NODE, a, vn + '[0]', runas=TEST_USER)\n        self.server.manager(MGR_CMD_SET, NODE, a, vn + '[1]', runas=TEST_USER)\n\n        a = {'reserve_start': now + 3600,\n             'reserve_end': now + 7200}\n        h = [self.mom.shortname]\n        r = Reservation(TEST_USER, attrs=a, hosts=h)\n\n        rid = self.server.submit(r)\n\n        self.assertTrue(rid.startswith('M'))\n        resv_vnodes = '(' + vn + '[0]:ncpus=2)+(' + vn + '[1]:ncpus=2)'\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2'),\n                    'Resource_List.select':\n                    'host=%s:ncpus=4' % self.mom.shortname,\n                    'Resource_List.place': 'exclhost',\n                    'resv_nodes': resv_vnodes}\n        self.server.expect(RESV, exp_attr, id=rid)\n\n    def test_maintenance_delete(self):\n        \"\"\"\n        Test if the maintenance cannot be deleted by a common user.\n        Test if the maintenance reservation can be deleted by a manager.\n        \"\"\"\n        now = int(time.time())\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'managers': (INCR, '%s@*' % TEST_USER)})\n\n        a = {'reserve_start': now + 3600,\n             'reserve_end': now + 7200}\n        h = [self.mom.shortname]\n        r = Reservation(TEST_USER, attrs=a, hosts=h)\n\n        rid = self.server.submit(r)\n\n        
self.assertTrue(rid.startswith('M'))\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'managers': (DECR, '%s@*' % TEST_USER)})\n\n        with self.assertRaises(PbsDeleteError) as err:\n            self.server.delete(rid, runas=TEST_USER)\n\n        msg = err.exception.msg[0].strip()\n\n        self.assertEqual(\"pbs_rdel: Unauthorized Request  \" + rid, msg)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'managers': (INCR, '%s@*' % TEST_USER)})\n\n        self.server.delete(rid, runas=TEST_USER)\n\n    def test_maintenance_degrade_reservation_overlap1(self):\n        \"\"\"\n        Test if the reservation is degraded by overlapping\n        maintenance reservation - overlap: beginning\n        \"\"\"\n        now = int(time.time())\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'managers': '%s@*' % TEST_USER})\n\n        a1 = {'Resource_List.select': 'host=%s' % self.mom.shortname,\n              'reserve_start': now + 3600,\n              'reserve_end': now + 7200}\n        r1 = Reservation(TEST_USER, attrs=a1)\n        rid1 = self.server.submit(r1)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, exp_attr, id=rid1)\n\n        a2 = {'reserve_start': now + 1800,\n              'reserve_end': now + 5400}\n        h2 = [self.mom.shortname]\n        r2 = Reservation(TEST_USER, attrs=a2, hosts=h2)\n\n        self.server.submit(r2)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_DEGRADED|10'),\n                    'reserve_substate': 12}\n        self.server.expect(RESV, exp_attr, id=rid1)\n\n    def test_maintenance_degrade_reservation_overlap2(self):\n        \"\"\"\n        Test if the reservation is degraded by overlapping\n        maintenance reservation - overlap: ending\n        \"\"\"\n        now = int(time.time())\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                           
 {'managers': '%s@*' % TEST_USER})\n\n        a1 = {'Resource_List.select': 'host=%s' % self.mom.shortname,\n              'reserve_start': now + 3600,\n              'reserve_end': now + 7200}\n        r1 = Reservation(TEST_USER, attrs=a1)\n        rid1 = self.server.submit(r1)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, exp_attr, id=rid1)\n\n        a2 = {'reserve_start': now + 5400,\n              'reserve_end': now + 9000}\n        h2 = [self.mom.shortname]\n        r2 = Reservation(TEST_USER, attrs=a2, hosts=h2)\n\n        self.server.submit(r2)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_DEGRADED|10'),\n                    'reserve_substate': 12}\n        self.server.expect(RESV, exp_attr, id=rid1)\n\n    def test_maintenance_degrade_reservation_overlap3(self):\n        \"\"\"\n        Test if the reservation is degraded by overlapping\n        maintenance reservation - overlap: inner\n        \"\"\"\n        now = int(time.time())\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'managers': '%s@*' % TEST_USER})\n\n        a1 = {'Resource_List.select': 'host=%s' % self.mom.shortname,\n              'reserve_start': now + 3600,\n              'reserve_end': now + 10800}\n        r1 = Reservation(TEST_USER, attrs=a1)\n        rid1 = self.server.submit(r1)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, exp_attr, id=rid1)\n\n        a2 = {'reserve_start': now + 5400,\n              'reserve_end': now + 9000}\n        h2 = [self.mom.shortname]\n        r2 = Reservation(TEST_USER, attrs=a2, hosts=h2)\n\n        self.server.submit(r2)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_DEGRADED|10'),\n                    'reserve_substate': 12}\n        self.server.expect(RESV, exp_attr, id=rid1)\n\n    def test_maintenance_degrade_reservation_overlap4(self):\n        \"\"\"\n        Test if the 
reservation is degraded by overlapping\n        maintenance reservation - overlap: outer\n        \"\"\"\n        now = int(time.time())\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'managers': '%s@*' % TEST_USER})\n\n        a1 = {'Resource_List.select': 'host=%s' % self.mom.shortname,\n              'reserve_start': now + 3600,\n              'reserve_end': now + 7200}\n        r1 = Reservation(TEST_USER, attrs=a1)\n        rid1 = self.server.submit(r1)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, exp_attr, id=rid1)\n\n        a2 = {'reserve_start': now + 1800,\n              'reserve_end': now + 9000}\n        h2 = [self.mom.shortname]\n        r2 = Reservation(TEST_USER, attrs=a2, hosts=h2)\n\n        self.server.submit(r2)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_DEGRADED|10'),\n                    'reserve_substate': 12}\n        self.server.expect(RESV, exp_attr, id=rid1)\n\n    def test_maintenance_and_reservation_not_overlap1(self):\n        \"\"\"\n        Test if the reservation is not degraded by maintenance\n        reservation on the same host - not-overlap: before\n        \"\"\"\n        now = int(time.time())\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'managers': '%s@*' % TEST_USER})\n\n        a1 = {'Resource_List.select': 'host=%s' % self.mom.shortname,\n              'reserve_start': now + 10800,\n              'reserve_end': now + 14400}\n        r1 = Reservation(TEST_USER, attrs=a1)\n        rid1 = self.server.submit(r1)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, exp_attr, id=rid1)\n\n        a2 = {'reserve_start': now + 3600,\n              'reserve_end': now + 7200}\n        h2 = [self.mom.shortname]\n        r2 = Reservation(TEST_USER, attrs=a2, hosts=h2)\n\n        self.server.submit(r2)\n\n        exp_attr = {'reserve_state': 
(MATCH_RE, 'RESV_CONFIRMED|2'),\n                    'reserve_substate': 2}\n        self.server.expect(RESV, exp_attr, id=rid1)\n\n    def test_maintenance_and_reservation_not_overlap2(self):\n        \"\"\"\n        Test if the reservation is not degraded by maintenance\n        reservation on the same host - not-overlap: after\n        \"\"\"\n        now = int(time.time())\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'managers': '%s@*' % TEST_USER})\n\n        a1 = {'Resource_List.select': 'host=%s' % self.mom.shortname,\n              'reserve_start': now + 3600,\n              'reserve_end': now + 7200}\n        r1 = Reservation(TEST_USER, attrs=a1)\n        rid1 = self.server.submit(r1)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, exp_attr, id=rid1)\n\n        a2 = {'reserve_start': now + 9000,\n              'reserve_end': now + 12600}\n        h2 = [self.mom.shortname]\n        r2 = Reservation(TEST_USER, attrs=a2, hosts=h2)\n\n        self.server.submit(r2)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2'),\n                    'reserve_substate': 2}\n        self.server.expect(RESV, exp_attr, id=rid1)\n\n    def get_reg_expr(self, _mom, _host):\n        \"\"\"\n        This function forms the match string based on cpuset mom\n        \"\"\"\n        if _mom.is_cpuset_mom():\n            n = self.server.status(NODE)\n            cpuset_nodes = [i['id'] for i in n if i['Mom'] == _mom.hostname]\n            reg_str = r'\\(%s\\[0\\]:ncpus=[0-9]+\\)' % _host\n            if (len(cpuset_nodes) - 1) > 1:\n                for i in range(1, len(cpuset_nodes) - 1):\n                    reg_str += r'\\+' + \\\n                        r'\\(%s\\[%s\\]:ncpus=[0-9]+\\)' % (_host, i)\n        else:\n            reg_str = r\"\\(%s:ncpus=[0-9]+\\)\" % _host\n        return reg_str\n\n    @requirements(num_moms=2)\n    def 
test_maintenance_two_hosts(self):\n        \"\"\"\n        Test if the maintenance reservation is confirmed on multiple hosts.\n        Test the crafted resv_nodes, select, and place.\n        Two moms (-p \"servers=M1,moms=M1:M2\") are needed for this test.\n        \"\"\"\n        self.momA = self.moms.values()[0]\n        self.momB = self.moms.values()[1]\n        self.hostA = self.momA.shortname\n        self.hostB = self.momB.shortname\n        reg_expr_hostA = self.get_reg_expr(self.momA, self.hostA)\n        reg_expr_hostB = self.get_reg_expr(self.momB, self.hostB)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'managers': (INCR, '%s@*' % TEST_USER)})\n        now = int(time.time())\n        a = {'reserve_start': now + 900,\n             'reserve_end': now + 5400}\n        h = [self.hostA, self.hostB]\n        r = Reservation(TEST_USER, attrs=a, hosts=h)\n\n        rid = self.server.submit(r)\n\n        possibility1 = reg_expr_hostA + r'\\+' + reg_expr_hostB\n        possibility2 = reg_expr_hostB + r'\\+' + reg_expr_hostA\n\n        resv_nodes_re = \"%s|%s\" % (possibility1, possibility2)\n\n        possibility1 = r\"host=%s:ncpus=[0-9]+\\+host=%s:ncpus=[0-9]+\" \\\n                       % (self.momA.shortname, self.momB.shortname)\n        possibility2 = r\"host=%s:ncpus=[0-9]+\\+host=%s:ncpus=[0-9]+\" \\\n                       % (self.momB.shortname, self.momA.shortname)\n        select_re = \"%s|%s\" % (possibility1, possibility2)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2'),\n                    'reserve_substate': 2,\n                    'resv_nodes': (MATCH_RE, resv_nodes_re),\n                    'Resource_List.select': (MATCH_RE, select_re),\n                    'Resource_List.place': 'exclhost'}\n        self.server.expect(RESV, exp_attr, id=rid)\n\n    @requirements(num_moms=2)\n    def test_maintenance_reconfirm_reservation_and_run(self):\n        \"\"\"\n        Test if the 
overlapping reservation is reconfirmed correctly.\n        Wait for both reservations to start and submit jobs into these\n        reservations. Test if the jobs will run on the correct nodes.\n        Two moms (-p \"servers=M1,moms=M1:M2\") are needed for this test.\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'managers': (INCR, '%s@*' % TEST_USER)})\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduler_iteration': 3})\n\n        self.momA = self.moms.values()[0]\n        self.momB = self.moms.values()[1]\n        self.hostA = self.momA.shortname\n        self.hostB = self.momB.shortname\n\n        if self.momA.is_cpuset_mom():\n            self.hostA += '[0]'\n        if self.momB.is_cpuset_mom():\n            self.hostB += '[0]'\n\n        now = int(time.time())\n        a1 = {'Resource_List.select': '1:ncpus=1',\n              'reserve_start': now + 60,\n              'reserve_end': now + 7200}\n        r1 = Reservation(TEST_USER, attrs=a1)\n        rid1 = self.server.submit(r1)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2'),\n                    'resv_nodes': (EQ, '(%s:ncpus=1)' % self.hostA)}\n        self.server.expect(RESV, exp_attr, id=rid1)\n\n        a2 = {'reserve_start': now + 60,\n              'reserve_end': now + 7200}\n        h2 = [self.momA.shortname]\n        r2 = Reservation(TEST_USER, attrs=a2, hosts=h2)\n\n        rid2 = self.server.submit(r2)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2'),\n                    'resv_nodes': (EQ, '(%s:ncpus=1)' % self.hostB)}\n        self.server.expect(RESV, exp_attr, id=rid1)\n\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.walltime': '1000',\n             'queue': rid1.split('.')[0]}\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(990)\n        jid1 = self.server.submit(j)\n\n        a = {'Resource_List.select': '1:ncpus=1',\n     
        'Resource_List.walltime': '1000',\n             'queue': rid2.split('.')[0]}\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(990)\n        jid2 = self.server.submit(j)\n\n        self.logger.info(\"Wait for reservations to start (2 minutes)\")\n        time.sleep(120)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, exp_attr, id=rid1)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, exp_attr, id=rid2)\n\n        exp_attr = {'job_state': 'R',\n                    'exec_vnode': \"(%s:ncpus=1)\" % self.hostB}\n        self.server.expect(JOB, exp_attr, id=jid1)\n\n        exp_attr = {'job_state': 'R',\n                    'exec_vnode': \"(%s:ncpus=1)\" % self.hostA}\n        self.server.expect(JOB, exp_attr, id=jid2)\n\n    @requirements(num_moms=2)\n    def test_maintenance_progressive_degrade_reservation(self):\n        \"\"\"\n        Test if the reservation is partially degraded by overlapping\n        maintenance reservation at first, and then do full overlap.\n        Test if the job in fully overlapped reservation will not run.\n        Two moms (-p \"servers=M1,moms=M1:M2\") are needed for this test.\n        \"\"\"\n        if len(self.moms) != 2:\n            cmt = \"need 2 mom hosts: -p servers=<m1>,moms=<m1>:<m2>\"\n            self.skip_test(reason=cmt)\n\n        now = int(time.time())\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'managers': '%s@*' % TEST_USER})\n\n        self.momA = self.moms.values()[0]\n        self.momB = self.moms.values()[1]\n\n        select = 'host=%s+host=%s' % (self.momA.shortname,\n                                      self.momB.shortname)\n        a1 = {'Resource_List.select': select,\n              'reserve_start': now + 60,\n              'reserve_end': now + 7200}\n        r1 = Reservation(TEST_USER, attrs=a1)\n        rid1 = self.server.submit(r1)\n\n     
   exp_attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, exp_attr, id=rid1)\n\n        a2 = {'reserve_start': now + 1800,\n              'reserve_end': now + 5400}\n        h2 = [self.momA.shortname]\n        r2 = Reservation(TEST_USER, attrs=a2, hosts=h2)\n\n        self.server.submit(r2)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_DEGRADED|10'),\n                    'reserve_substate': 12}\n        self.server.expect(RESV, exp_attr, id=rid1)\n        self.server.status(RESV, id=rid1)\n        vnodes = r1.get_vnodes()\n        self.assertEqual(len(vnodes), 1)\n        vnode = self.server.status(NODE, id=vnodes[0])[0]\n        self.assertEqual(vnode['Mom'], self.momB.hostname)\n\n        self.logger.info(\"Wait for reservation to start (2 minutes)\")\n        time.sleep(120)\n\n        a3 = {'reserve_start': now + 1800,\n              'reserve_end': now + 5400}\n        h3 = [self.momB.shortname]\n        r3 = Reservation(TEST_USER, attrs=a3, hosts=h3)\n\n        self.server.submit(r3)\n\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.walltime': '1000',\n             'queue': rid1.split('.')[0]}\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(990)\n        jid1 = self.server.submit(j)\n\n        self.logger.info(\"Wait for the job to try to run (30 seconds)\")\n        time.sleep(30)\n\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid1)\n\n    def test_maintenance_degrade_reservation_jobs_dont_run(self):\n        \"\"\"\n        Test that if the reservation is degraded by an overlapping\n        maintenance reservation, the jobs inside that degraded reservation do\n        not run when the maintenance reservation starts running.\n        \"\"\"\n\n        now = int(time.time())\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'managers': (INCR, '%s@*' % TEST_USER)})\n\n        a1 = {'reserve_start': now + 30,\n              
'reserve_end': now + 1200}\n        start = now + 30\n        r1 = Reservation(TEST_USER, attrs=a1)\n        rid1 = self.server.submit(r1)\n        resv_name = rid1.split('.')[0]\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, exp_attr, id=resv_name)\n        self.server.status(RESV, 'resv_nodes', id=rid1)\n        resv_node_list = self.server.reservations[rid1].get_vnodes()\n        resv_node = resv_node_list[0]\n\n        # On a cpuset machine the resv_node could be a vnode,\n        # always pull the hostname from the node attribute\n        status = self.server.status(NODE, id=resv_node)\n        h2 = [status[0]['resources_available.host']]\n\n        jid = self.server.submit(Job(attrs={ATTR_q: resv_name}))\n\n        a2 = {'reserve_start': now + 35,\n              'reserve_end': now + 1000}\n        r2 = Reservation(TEST_USER, attrs=a2, hosts=h2)\n        rid2 = self.server.submit(r2)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_DEGRADED|10'),\n                    'reserve_substate': 12}\n        self.server.expect(RESV, exp_attr, id=rid1)\n        resv_state = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.logger.info('Sleeping until reservation starts')\n        offset = start - int(time.time())\n        self.server.expect(RESV, resv_state, id=rid1,\n                           offset=offset)\n        self.server.expect(RESV, resv_state, id=rid2,\n                           offset=offset)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': True})\n        a = {'comment': 'Not Running: Queue not started.',\n             'job_state': 'Q'}\n        self.server.expect(JOB, a, id=jid)\n\n    def test_maintenance_parse_numerous_hosts(self):\n        \"\"\"\n        Test if the parsing of numerous hosts succeeds.\n        \"\"\"\n        now = int(time.time())\n\n        a = {'reserve_start': now + 3600,\n             'reserve_end': now + 7200}\n        chars = 
'abcdefghijklmnopqrstuvwxyz'\n        h = [''.join(random.choices(chars, k=8)) for i in range(200)]\n        r = Reservation(TEST_USER, attrs=a, hosts=h)\n\n        with self.assertRaises(PbsSubmitError) as err:\n            self.server.submit(r)\n\n        msg = err.exception.msg[0].strip()\n\n        regex = \"^pbs_rsub: Host with resources not found: .*\"\n        self.assertTrue(re.search(regex, msg))\n"
  },
  {
    "path": "test/tests/functional/pbs_modifyresv_hook.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2020 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nimport textwrap\nfrom tests.functional import *\n\n\nclass TestModifyResvHook(TestFunctional):\n    \"\"\"\n    Tests to verify the reservation modify hook for a confirmed standing/\n    advance/degraded reservation when the reservation is modified.\n    \"\"\"\n\n    advance_resv_hook_script = textwrap.dedent(\"\"\"\\\n        import pbs\n        e=pbs.event()\n\n        pbs.logmsg(pbs.LOG_DEBUG, 'Reservation Modify Hook name - %s' %\n                   e.hook_name)\n\n        if e.type == pbs.MODIFYRESV:\n            pbs.logmsg(pbs.LOG_DEBUG, 'Reservation ID - %s' % e.resv.resvid)\n    \"\"\")\n\n    def setUp(self):\n        \"\"\"\n        Create a reservation modify hook and set the server log level.\n        \"\"\"\n        super(TestModifyResvHook, self).setUp()\n        self.hook_name = 'modifyresv_hook'\n        attrs = {'event': 'modifyresv'}\n        self.server.create_hook(self.hook_name, attrs)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 2047})\n\n    @tags('hooks')\n    def test_server_down_case_1(self):\n        \"\"\"\n        Testcase to submit and confirm an advance reservation, turn the server\n        off, attempt to modify the reservation, turn the server on, delete the\n        reservation after start and verify the modifyresv hook didn't run.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n              
                  TestModifyResvHook.advance_resv_hook_script)\n\n        offset = 20\n        duration = 300\n        shift = 30\n        rid, start, end = self.server.submit_resv(offset, duration)\n\n        self.server.stop()\n        self.server.alter_a_reservation(rid, start, end, shift, whichMessage=0)\n        self.server.start()\n\n        time.sleep(11)\n\n        self.server.delete(rid)\n\n        msg = 'Hook;Server@%s;Reservation ID - %s' % \\\n              (self.server.shortname, rid)\n        self.server.log_match(msg, tail=True, interval=1, max_attempts=10,\n                              existence=False)\n\n    @tags('hooks')\n    def test_alter_advance_resv(self):\n        \"\"\"\n        Testcase to submit and confirm an advance reservation, wait for it\n        to begin and verify the reservation begin hook.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestModifyResvHook.advance_resv_hook_script)\n\n        offset = 20\n        duration = 30\n        shift = 10\n        rid, start, end = self.server.submit_resv(offset, duration)\n\n        self.server.alter_a_reservation(rid, start, end, shift, alter_s=True,\n                                        alter_e=True)\n        msg = 'Hook;Server@%s;Reservation ID - %s' % \\\n              (self.server.shortname, rid)\n        self.server.log_match(msg, tail=True, interval=2, max_attempts=30)\n        off = offset + shift + 10\n        self.logger.info(\"Waiting %s sec for resv to run.\", off)\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, attrs, id=rid, offset=off)\n\n    @tags('hooks')\n    def test_set_attrs(self):\n        \"\"\"\n        Testcase to submit and confirm an advance reservation, delete the\n        reservation and verify permissions in the modifyresv hook.\n        \"\"\"\n        msg_rw = 'Reservation modify Hook - check rw Authorized_Groups'\n        msg_ro = 'Reservation modify 
Hook - ctime is read-only.'\n\n        hook_script = textwrap.dedent(\"\"\"\\\n            import pbs\n            e = pbs.event()\n            r = e.resv\n            pbs.logmsg(pbs.LOG_DEBUG,\n                       'Reservation modify Hook name - %%s' %% e.hook_name)\n            if e.type == pbs.MODIFYRESV:\n                vo = r.Authorized_Groups\n                r.Authorized_Groups = None\n                r.Authorized_Groups = vo\n                pbs.logmsg(pbs.LOG_DEBUG, \"%s\")\n                try:\n                    r.ctime = None\n                except pbs.v1._exc_types.BadAttributeValueError:\n                    pbs.logmsg(pbs.LOG_DEBUG, \"%s\")\n        \"\"\" % (msg_rw, msg_ro))\n\n        self.logger.info(hook_script)\n        self.server.import_hook(self.hook_name, hook_script)\n\n        offset = 10\n        duration = 30\n        shift = 10\n        rid, start, end = self.server.submit_resv(offset, duration)\n        self.server.alter_a_reservation(rid, start, end, shift, alter_s=True,\n                                        alter_e=True)\n\n        time.sleep(15)\n\n        self.server.delete(rid)\n        msg = \"Reservation modify Hook name - %s\" % (self.hook_name)\n        self.server.log_match(msg, tail=True, max_attempts=30, interval=2)\n        self.server.log_match(msg_rw, tail=True, max_attempts=30, interval=2)\n        self.server.log_match(msg_ro, tail=True, max_attempts=30, interval=2)\n\n    @tags('hooks')\n    def test_scheduler_down(self):\n        \"\"\"\n        Testcase to turn off the scheduler and submit a reservation,\n        the same will be in unconfirmed state and upon ending the\n        resvmodify hook would have run.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestModifyResvHook.advance_resv_hook_script)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        offset = 10\n        duration = 30\n        shift = 10\n\n        rid, 
start, end = self.server.submit_resv(offset, duration,\n                                                  confirmed=False)\n        self.server.alter_a_reservation(rid, start, end, shift, alter_s=True,\n                                        alter_e=True, check_log=False,\n                                        sched_down=True)\n\n        time.sleep(5)\n\n        msg = 'Hook;Server@%s;Reservation ID - %s' % (self.server.shortname,\n                                                      rid)\n        self.server.log_match(msg, tail=True)\n\n    @tags('hooks')\n    def test_multiple_hooks(self):\n        \"\"\"\n        Define multiple hooks for the modifyresv event and make sure both\n        get run.\n\n        \"\"\"\n        test_hook_script = textwrap.dedent(\"\"\"\n        import pbs\n        e=pbs.event()\n\n        pbs.logmsg(pbs.LOG_DEBUG,\n                   'Reservation Modify Hook name - %%s' %% e.hook_name)\n\n        if e.type == pbs.MODIFYRESV:\n            pbs.logmsg(pbs.LOG_DEBUG,\n                       'Test %d Reservation ID - %%s' %% e.resv.resvid)\n        \"\"\")\n\n        attrs = {'event': 'modifyresv'}\n        self.server.create_import_hook(\"test_hook_1\", attrs,\n                                       test_hook_script % 1)\n        self.server.create_import_hook(\"test_hook_2\", attrs,\n                                       test_hook_script % 2)\n        offset = 10\n        duration = 30\n        shift = 10\n        rid, start, end = self.server.submit_resv(offset, duration)\n        self.server.alter_a_reservation(rid, start, end, shift, alter_s=True,\n                                        alter_e=True)\n\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, attrs, id=rid, offset=10)\n\n        msg = 'Hook;Server@%s;Test 1 Reservation ID - %s' % \\\n              (self.server.shortname, rid)\n        self.server.log_match(msg, tail=True, interval=2, max_attempts=3)\n\n        msg = 
'Hook;Server@%s;Test 2 Reservation ID - %s' % \\\n              (self.server.shortname, rid)\n        self.server.log_match(msg, tail=True, interval=2, max_attempts=3)\n"
  },
  {
    "path": "test/tests/functional/pbs_mom_hook_sync.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\n@requirements(num_moms=2)\nclass TestMomHookSync(TestFunctional):\n    \"\"\"\n    This test suite tests to make sure a hook does not disappear in\n    a series of hook event change from mom hook to server hook and\n    then back to a mom hook. This is a good exercise to make sure\n    hook updates are not lost even when mom is stopped, killed, and\n    restarted during hook event changes.\n    \"\"\"\n\n    def setUp(self):\n        if len(self.moms) != 2:\n            self.skip_test(reason=\"need 2 mom hosts: -p moms=<m1>:<m2>\")\n        TestFunctional.setUp(self)\n\n        self.momA = self.moms.values()[0]\n        self.momB = self.moms.values()[1]\n        self.momA.delete_vnode_defs()\n        self.momB.delete_vnode_defs()\n\n        self.hostA = self.momA.shortname\n        self.hostB = self.momB.shortname\n\n        rc = self.server.manager(MGR_CMD_DELETE, NODE, None, \"\")\n        self.assertEqual(rc, 0)\n\n        rc = self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostA)\n        self.assertEqual(rc, 0)\n\n        rc = self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostB)\n        self.assertEqual(rc, 0)\n\n        self.hook_name = \"cpufreq\"\n        hook_body = \"import pbs\\n\"\n        a = {'event': 'execjob_begin', 'enabled': 'True'}\n        self.server.create_import_hook(self.hook_name, a, 
hook_body)\n\n        hook_config = \"\"\"{\n    \"apple\"         : \"pears\",\n    \"banana\"         : \"cucumbers\"\n}\n\"\"\"\n        fn = self.du.create_temp_file(body=hook_config)\n        a = {'content-type': 'application/x-config',\n             'content-encoding': 'default',\n             'input-file': fn}\n        self.server.manager(MGR_CMD_IMPORT, HOOK, a, self.hook_name)\n        os.remove(fn)\n\n        self.server.log_match(\n            'successfully sent hook file.*cpufreq.HK ' +\n            'to %s.*' % self.momA.hostname,\n            max_attempts=10, regexp=True)\n\n        self.server.log_match(\n            'successfully sent hook file.*cpufreq.CF ' +\n            'to %s.*' % self.momA.hostname,\n            max_attempts=10, regexp=True)\n\n        self.server.log_match(\n            'successfully sent hook file.*cpufreq.PY ' +\n            'to %s.*' % self.momA.hostname,\n            max_attempts=10, regexp=True)\n\n        self.server.log_match(\n            'successfully sent hook file.*cpufreq.HK ' +\n            'to %s.*' % self.momB.hostname,\n            max_attempts=10, regexp=True)\n\n        self.server.log_match(\n            'successfully sent hook file.*cpufreq.CF ' +\n            'to %s.*' % self.momB.hostname,\n            max_attempts=10, regexp=True)\n\n        self.server.log_match(\n            'successfully sent hook file.*cpufreq.PY ' +\n            'to %s.*' % self.momB.hostname,\n            max_attempts=10, regexp=True)\n\n    def tearDown(self):\n        self.momB.signal(\"-CONT\")\n        TestFunctional.tearDown(self)\n\n    def test_momhook_to_serverhook_with_resume(self):\n        \"\"\"\n        Given an existing mom hook, suspend mom on hostB,\n        change the hook to be a server hook (causes a\n        delete action), then change it back to a mom hook\n        (results in a send action), and then resume mom.\n        The delete action occurs first and then the send\n        action so we end up with a mom 
hook in place.\n        \"\"\"\n\n        self.momB.signal('-STOP')\n\n        # Turn current mom hook into a server hook\n        self.server.manager(MGR_CMD_SET, HOOK,\n                            {'event': 'queuejob'},\n                            id=self.hook_name)\n\n        # Turn current mom hook back to a mom hook\n        self.server.manager(MGR_CMD_SET, HOOK,\n                            {'event': 'exechost_periodic'},\n                            id=self.hook_name)\n\n        # For testability, delay resuming the mom so we can\n        # get a different timestamp on the hook updates\n        self.logger.info(\"Waiting 3 secs for earlier hook updates to complete\")\n        time.sleep(3)\n\n        now = time.time()\n        self.momB.signal('-CONT')\n\n        # Put another sleep delay so log_match() can see all the matches\n        self.logger.info(\"Waiting 3 secs for new hook updates to complete\")\n        time.sleep(3)\n        match_delete = self.server.log_match(\n            'successfully deleted hook file cpufreq.HK ' +\n            'from %s.*' % self.momB.hostname,\n            starttime=now, max_attempts=10, regexp=True)\n\n        # Without the fix, there won't be these sent hook file messages\n        match_sent1 = self.server.log_match(\n            'successfully sent hook file.*cpufreq.HK ' +\n            'to %s.*' % self.momB.hostname,\n            starttime=now, max_attempts=10, regexp=True)\n\n        match_sent2 = self.server.log_match(\n            'successfully sent hook file.*cpufreq.CF ' +\n            'to %s.*' % self.momB.hostname,\n            starttime=now, max_attempts=10, regexp=True)\n\n        match_sent3 = self.server.log_match(\n            'successfully sent hook file.*cpufreq.PY ' +\n            'to %s.*' % self.momB.hostname,\n            starttime=now, max_attempts=10, regexp=True)\n\n        # Lower the linecount, earlier the line appears in log\n        self.assertTrue(match_delete[0] < match_sent1[0])\n        
self.assertTrue(match_delete[0] < match_sent2[0])\n        self.assertTrue(match_delete[0] < match_sent3[0])\n\n    def test_momhook_to_momhook_with_resume(self):\n        \"\"\"\n        Given an existing mom hook, suspend mom on hostB,\n        change the hook event to be another mom hook event\n        (results in a send action), change the hook to be a\n        server hook (causes a delete action),\n        and then resume mom.\n        The send action occurs first and then the delete\n        action so we end up with no mom hook in place.\n        \"\"\"\n\n        self.momB.signal('-STOP')\n\n        # Turn current mom hook back to a mom hook\n        self.server.manager(MGR_CMD_SET, HOOK,\n                            {'event': 'exechost_periodic'},\n                            id=self.hook_name)\n\n        # Turn current mom hook into a server hook\n        self.server.manager(MGR_CMD_SET, HOOK,\n                            {'event': 'queuejob'},\n                            id=self.hook_name)\n\n        # For testability, delay resuming the mom so we can\n        # get a different timestamp on the hook updates\n        self.logger.info(\"Waiting 3 secs for earlier hook updates to complete\")\n        time.sleep(3)\n\n        now = time.time()\n        self.momB.signal('-CONT')\n\n        # Put another sleep delay so log_match() can see all the matches\n        self.logger.info(\"Waiting 3 secs for new hook updates to complete\")\n        time.sleep(3)\n        match_delete = self.server.log_match(\n            'successfully deleted hook file cpufreq.HK ' +\n            'from %s.*' % self.momB.hostname,\n            starttime=now, max_attempts=10, regexp=True)\n\n        # Only the hook control file (.HK) is sent since that contains\n        # the hook event change to exechost_periodic.\n        match_sent = self.server.log_match(\n            'successfully sent hook file .*cpufreq.HK ' +\n            'to %s.*' % self.momB.hostname,\n            
starttime=now, max_attempts=10, regexp=True)\n\n        # Lower the linecount, earlier the line appears in log\n        self.assertTrue(match_sent[0] < match_delete[0])\n\n        self.server.log_match(\n            'successfully sent hook file .*cpufreq.CF ' +\n            'to %s.*' % self.momB.hostname, existence=False,\n            starttime=now, max_attempts=10, regexp=True)\n\n        self.server.log_match(\n            'successfully sent hook file .*cpufreq.PY ' +\n            'to %s.*' % self.momB.hostname, existence=False,\n            starttime=now, max_attempts=10, regexp=True)\n\n    def test_momhook_to_serverhook_with_restart(self):\n        \"\"\"\n        Like test_momhook_to_serverhook_with_resume except instead\n        of resuming mom, we kill -9 it and restart.\n        \"\"\"\n\n        self.momB.signal('-STOP')\n\n        # Turn current mom hook into a server hook\n        self.server.manager(MGR_CMD_SET, HOOK,\n                            {'event': 'queuejob'},\n                            id=self.hook_name)\n\n        # Turn current mom hook back to a mom hook\n        self.server.manager(MGR_CMD_SET, HOOK,\n                            {'event': 'exechost_periodic'},\n                            id=self.hook_name)\n\n        # For testability, delay restarting the mom so we can\n        # get a different timestamp on the hook updates\n        self.logger.info(\"Waiting 3 secs for earlier hook updates to complete\")\n        time.sleep(3)\n\n        now = time.time()\n        self.momB.signal('-KILL')\n        self.momB.start()\n\n        # Killing and restarting mom would cause the server to sync\n        # up its version of the mom hook file resulting in an\n        # additional send action, which would not alter the\n        # outcome, as the send action occurs after the delete action.\n        self.server.log_match(\n            'Node;%s.*;' % (self.momB.hostname,) +\n            'Hello from MoM',\n            starttime=now, max_attempts=10, regexp=True)\n\n        # Put another 
sleep delay so log_match() can see all the matches\n        self.logger.info(\"Waiting 3 secs for new hook updates to complete\")\n        time.sleep(3)\n        match_delete = self.server.log_match(\n            'successfully deleted hook file cpufreq.HK ' +\n            'from %s.*' % self.momB.hostname,\n            starttime=now, max_attempts=10, regexp=True)\n\n        # Without the fix, there won't be these sent hook file messages\n        match_sent1 = self.server.log_match(\n            'successfully sent hook file.*cpufreq.HK ' +\n            'to %s.*' % self.momB.hostname,\n            starttime=now, max_attempts=10, regexp=True)\n\n        match_sent2 = self.server.log_match(\n            'successfully sent hook file.*cpufreq.CF ' +\n            'to %s.*' % self.momB.hostname,\n            starttime=now, max_attempts=10, regexp=True)\n\n        match_sent3 = self.server.log_match(\n            'successfully sent hook file.*cpufreq.PY ' +\n            'to %s.*' % self.momB.hostname,\n            starttime=now, max_attempts=10, regexp=True)\n\n        # Lower the linecount, earlier the line appears in log\n        self.assertTrue(match_delete[0] < match_sent1[0])\n        self.assertTrue(match_delete[0] < match_sent2[0])\n        self.assertTrue(match_delete[0] < match_sent3[0])\n\n    def test_momhook_to_momhook_with_restart(self):\n        \"\"\"\n        Like test_momhook_to_momhook_with_resume except instead\n        of resuming mom, we kill -9 it and restart.\n        \"\"\"\n\n        self.momB.signal('-STOP')\n\n        # Turn current mom hook back to a mom hook\n        self.server.manager(MGR_CMD_SET, HOOK,\n                            {'event': 'exechost_periodic'},\n                            id=self.hook_name)\n\n        # Turn current mom hook into a server hook\n        self.server.manager(MGR_CMD_SET, HOOK,\n                            {'event': 'queuejob'},\n                            id=self.hook_name)\n\n        # For testability, delay restarting the mom so we can\n  
      # get a different timestamp on the hook updates\n        self.logger.info(\"Waiting 3 secs for earlier hook updates to complete\")\n        time.sleep(3)\n\n        # Killing and restarting mom would cause the server to sync\n        # up its version of the mom hook file resulting in a\n        # delete mom hook action as that hook is now seen as a\n        # server hook. Since it's now a server hook, no further\n        # mom hook sends are done.\n        now = time.time()\n        self.momB.signal('-KILL')\n        self.momB.start()\n\n        # Put another sleep delay so log_match() can see all the matches\n        self.logger.info(\"Waiting 3 secs for new hook updates to complete\")\n        time.sleep(3)\n        self.server.log_match(\n            'successfully deleted hook file cpufreq.HK ' +\n            'from %s.*' % self.momB.hostname,\n            starttime=now, max_attempts=10, regexp=True)\n\n        self.server.log_match(\n            'successfully sent hook file .*cpufreq.HK ' +\n            'to %s.*' % self.momB.hostname, existence=False,\n            starttime=now, max_attempts=10, regexp=True)\n\n        self.server.log_match(\n            'successfully sent hook file .*cpufreq.CF ' +\n            'to %s.*' % self.momB.hostname, existence=False,\n            starttime=now, max_attempts=10, regexp=True)\n\n        self.server.log_match(\n            'successfully sent hook file .*cpufreq.PY ' +\n            'to %s.*' % self.momB.hostname, existence=False,\n            starttime=now, max_attempts=10, regexp=True)\n\n    def compare_resourcedef(self):\n        srvret = None\n\n        for _ in range(5):\n            time.sleep(1)\n            if srvret is None:\n                file = os.path.join(self.server.pbs_conf['PBS_HOME'],\n                                    'server_priv', 'hooks', 'resourcedef')\n                srvret = self.du.cat(self.server.hostname, file, logerr=False,\n                                     sudo=True)\n                if srvret['rc'] != 0 or len(srvret['out']) == 0:\n                    srvret = None\n                    continue\n\n            file = self.momB.get_formed_path(self.momB.pbs_conf['PBS_HOME'],\n                                             'mom_priv', 'hooks',\n                                             'resourcedef')\n            momret = self.momB.cat(file, logerr=False,\n                                   sudo=True)\n            if momret['rc'] != 0 or len(momret['out']) == 0:\n                continue\n\n            if momret['out'] == srvret['out']:\n                return\n            else:\n                srvret = None\n        raise self.failureException(\"resourcedef file is not in sync\")\n\n    def test_rescdef_mom_recreate(self):\n        \"\"\"\n        Test that the rescdef file is recreated when a mom is deleted\n        and added back\n        \"\"\"\n\n        # create a custom resource\n        self.server.manager(MGR_CMD_CREATE, RSC,\n                            {'type': 'string', 'flag': 'h'}, id='foo')\n\n        # compare rescdef files between mom and server\n        self.compare_resourcedef()\n\n        # delete node\n        self.server.manager(MGR_CMD_DELETE, NODE, id=self.momB.shortname)\n        self.server.expect(NODE, 'state', id=self.momB.shortname, op=UNSET)\n\n        # check if rescdef is deleted\n        file = self.momB.get_formed_path(self.momB.pbs_conf['PBS_HOME'],\n                                         'mom_priv', 'resourcedef')\n        self.assertFalse(\n            self.momB.du.isfile(self.momB.hostname, file, sudo=True),\n            \"resourcedef not deleted at mom\")\n\n        # recreate node\n        self.server.manager(MGR_CMD_CREATE, NODE, id=self.momB.shortname)\n\n        # check for status of the node\n        self.server.expect(NODE, {'state': 'free'}, id=self.momB.shortname)\n\n        # compare rescdef files between mom and server\n        self.compare_resourcedef()\n"
  },
  {
    "path": "test/tests/functional/pbs_mom_hooks_test.py",
    "content": "# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nfrom tests.functional import *\n\n\n@requirements(num_moms=2)\nclass TestMoMHooks(TestFunctional):\n    \"\"\"\n    This test covers basic functionality of MoM Hooks\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        if len(self.moms) != 2:\n            self.skipTest('test requires two MoMs as input, ' +\n                          'use -p moms=<mom1>:<mom2>')\n        self.momA = self.moms.values()[0]\n        self.momB = self.moms.values()[1]\n        self.hostA = self.momA.shortname\n     
   self.hostB = self.momB.shortname\n        a = {'resources_available.ncpus': 8, 'resources_available.mem': '8gb'}\n        for mom in self.moms.values():\n            self.server.manager(MGR_CMD_SET, NODE, a, mom.shortname)\n        pbsdsh_cmd = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                  'bin', 'pbsdsh')\n        self.job1 = Job()\n        self.job1.create_script(\n            \"#PBS -l select=vnode=\" + self.hostA + \"+vnode=\" + self.hostB +\n            \":mem=4mb\\n\" +\n            pbsdsh_cmd + \" -n 1 /bin/date\\n\" +\n            \"sleep 20\\n\")\n        self.job2 = Job()\n        self.job2.create_script(\n            \"#PBS -l select=vnode=\" + self.hostA + \"+vnode=\" + self.hostB +\n            \":mem=4mb\\n\" +\n            pbsdsh_cmd + \" -n 1 /bin/date\\n\" +\n            pbsdsh_cmd + \" -n 1 /bin/echo hi\\n\" +\n            pbsdsh_cmd + \" -n 1 /bin/ls\\n\" +\n            \"sleep 20\")\n        self.job3 = Job()\n        self.job3.create_script(\n            \"#PBS -l select=vnode=\" + self.hostA + \"+vnode=\" + self.hostB +\n            \":mem=4mb\\n\" +\n            \"sleep 600\\n\")\n        self.job4 = Job()\n        self.job4.create_script(\n            \"#PBS -l select=vnode=\" + self.hostA + \"+vnode=\" + self.hostB +\n            \":mem=4mb\\n\" +\n            pbsdsh_cmd + \" -n 1 /bin/date\\n\" +\n            \"sleep 20\\n\" +\n            \"exit 7\")\n\n    def hook_init(self, hook_name, hook_event, hook_body=None,\n                  freq=None):\n        \"\"\"\n        Dynamically create and import a MoM hook into the server.\n        hook_name: the name of the hook to create. No default\n        hook_event: the event type of the hook. No default\n        hook_body: the body of the hook. Defaults to None\n        freq: the frequency of the periodic hook. Defaults to None\n        \"\"\"\n        a = {}\n        if hook_event:\n            a['event'] = hook_event\n        if freq:\n            a['freq'] = freq\n        a['enabled'] = 'true'\n        a['alarm'] = 5\n\n        self.server.create_import_hook(hook_name, a, hook_body,\n                                       overwrite=True)\n\n    def basic_hook_accept_periodic(self, hook_name, freq, hook_body):\n\n        hook_event = \"exechost_periodic\"\n        self.hook_init(hook_name, hook_event, hook_body, freq=freq)\n        exp_msg = [\"Hook;pbs_python;event is %s\" % hook_event.upper(),\n                   \"Hook;pbs_python;hook_name is %s\" % hook_name,\n                   \"Hook;pbs_python;hook_type is site\",\n                   \"Hook;pbs_python;requestor_host is %s\" % self.hostA,\n                   ]\n        for msg in exp_msg:\n            self.momA.log_match(msg)\n        exp_msg[3] = \"Hook;pbs_python;requestor_host is %s\" % self.hostB\n        for msg in exp_msg:\n            self.momB.log_match(msg)\n        msg = \"Not allowed to update vnode 'aspasia'\"\n        msg += \", as it is owned by a different mom\"\n        self.server.log_match(msg)\n\n        a = {'state': 'offline',\n             'resources_available.file': '17tb',\n             'resources_available.ncpus': 17,\n             'resources_available.mem': '700gb',\n             'comment': \"Comment update done  by %s hook @ %s\" %\n             (hook_name, self.hostA)}\n        self.server.expect(NODE, a, id=self.hostA)\n        a['comment'] = \"Comment update done  by %s hook @ %s\" % (\n            hook_name, self.hostB)\n        self.server.expect(NODE, a, 
id=self.hostB)\n        a = {'resources_available.file': '500tb',\n             'resources_available.mem': '300gb'}\n        self.server.expect(NODE, a, id=\"aspasia\")\n\n    def test_exechost_periodic_with_accept(self):\n        \"\"\"\n        Test exechost_periodic which accepts the event\n        \"\"\"\n        self.basic_hook_accept_periodic(\"period\", 5, period_py)\n\n    def tearDown(self):\n        TestFunctional.tearDown(self)\n        hooks = self.server.status(HOOK)\n        for h in hooks:\n            if h['id'] in (\"period\",):\n                self.server.manager(MGR_CMD_DELETE, HOOK, id=h['id'])\n\n\nperiod_py = \"\"\"import pbs\nimport os\nimport sys\nimport time\n\nlocal_node = pbs.get_local_nodename()\nother_node = local_node\nother_node2 = \"aspasia\"\n\ndef print_attribs(pbs_obj):\n   for a in pbs_obj.attributes:\n      v = getattr(pbs_obj, a)\n      if (v != None) and str(v) != \"\":\n         pbs.logmsg(pbs.LOG_DEBUG, \"%s = %s\" % (a,v))\n\ns = pbs.server()\n\njobs = pbs.server().jobs()\nfor j in jobs:\n   pbs.logmsg(pbs.LOG_DEBUG, \"Found job %s\" % (j.id))\n   print_attribs(j)\n\nqueues = s.queues()\nfor q in queues:\n   pbs.logmsg(pbs.LOG_DEBUG, \"Found queue %s\" % (q.name))\n   for k in q.jobs():\n     pbs.logmsg(pbs.LOG_DEBUG, \"Found job %s in queue %s\" % (k.id, q.name))\n\nresvs = s.resvs()\nfor r in resvs:\n   pbs.logmsg(pbs.LOG_DEBUG, \"Found resv %s\" % (r.id))\n\nvnodes = s.vnodes()\nfor v in vnodes:\n   pbs.logmsg(pbs.LOG_DEBUG, \"Found vnode %s\" % (v.name))\n\ne = pbs.event()\n\npbs.logmsg(pbs.LOG_DEBUG,\n           \"printing pbs.event() values ---------------------->\")\npbs.logmsg(pbs.LOG_DEBUG, \"event is %s\" % (\"EXECHOST_PERIODIC\"))\n\npbs.logmsg(pbs.LOG_DEBUG, \"hook_name is %s\" % (e.hook_name))\npbs.logmsg(pbs.LOG_DEBUG, \"hook_type is %s\" % (e.hook_type))\npbs.logmsg(pbs.LOG_DEBUG, \"requestor is %s\" % (e.requestor))\npbs.logmsg(pbs.LOG_DEBUG, \"requestor_host is %s\" % (e.requestor_host))\n\nvn = 
pbs.event().vnode_list\n\npbs.logmsg(pbs.LOG_DEBUG, \"vn is %s type is %s\" % (str(vn), type(vn)))\nfor k in pbs.event().vnode_list.keys():\n\n   if k == local_node:\n      pbs.logmsg(pbs.LOG_DEBUG, \"%s: pcpus=%d\" % (k, vn[k].pcpus));\n      pbs.logmsg(pbs.LOG_DEBUG, \"%s: pbs_version=%s\" % (k, vn[k].pbs_version));\n      pbs.logmsg(pbs.LOG_DEBUG, \"%s: resources_available[%s]=%d\" % (k,\n                \"ncpus\", vn[k].resources_available[\"ncpus\"]));\n      pbs.logmsg(pbs.LOG_DEBUG, \"%s: resources_available[%s]=%s type=%s\" % (k,\n                \"mem\", vn[k].resources_available[\"mem\"],\n                type(vn[k].resources_available[\"mem\"])));\n      pbs.logmsg(pbs.LOG_DEBUG, \"%s: resources_available[%s]=%s\" % (k, \"arch\",\n                    vn[k].resources_available[\"arch\"]));\n\nif other_node not in vn:\n   vn[other_node] = pbs.vnode(other_node)\n\nvn[other_node].pcpus = 7\nvn[other_node].ntype = pbs.ND_PBS\nvn[other_node].state = pbs.ND_OFFLINE\nvn[other_node].sharing = pbs.ND_FORCE_EXCL\nvn[other_node].resources_available[\"ncpus\"] = 17\nvn[other_node].resources_available[\"file\"] = pbs.size(\"17tb\")\nvn[other_node].resources_available[\"mem\"] = pbs.size(\"700gb\")\nvn[other_node].comment = \"Comment update done  by period hook \"\nvn[other_node].comment += \"@ %s\" % (local_node)\n\nif other_node2 not in vn:\n   vn[other_node2] = pbs.vnode(other_node2)\n\nvn[other_node2].resources_available[\"file\"] = pbs.size(\"500tb\")\nvn[other_node2].resources_available[\"mem\"] = pbs.size(\"300gb\")\n\"\"\"\n"
  },
  {
    "path": "test/tests/functional/pbs_mom_job_dir.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestMomJobDir(TestFunctional):\n    \"\"\"\n    This test suite tests the mom's ability to create job directories.\n    \"\"\"\n\n    def change_server_name(self, servername):\n        \"\"\"\n        Stops the server, changes the server name to `servername`,\n        sets the server hostname to the old servername,\n        and starts the server again.\n        \"\"\"\n        self.server.stop()\n        self.assertFalse(self.server.isUp(), 'Failed to stop PBS')\n\n        conf = self.du.parse_pbs_config(self.server.hostname)\n        self.du.set_pbs_config(\n            self.server.hostname,\n            confs={'PBS_SERVER_HOST_NAME': conf['PBS_SERVER'],\n                   'PBS_SERVER': servername})\n\n        self.server.start()\n        self.assertTrue(self.server.isUp(), 'Failed to start PBS')\n        return\n\n    def test_existing_directory_longid(self):\n        \"\"\"\n        If a job directory already exists, the mom should clean it up\n        after rejecting the job. 
When the server sends the request for\n        the second time, the mom will run the job correctly.\n        The mom has special code if the job id is less than 11 characters,\n        so this will test job ids longer than 11 characters.\n        \"\"\"\n\n        # Change the server name to create a job id longer than 11 characters\n        self.change_server_name('superlongservername')\n\n        j = Job(TEST_USER, attrs={ATTR_h: None})\n        j.set_sleep_time(3)\n        jid = self.server.submit(j)\n\n        # Create the job directory in mom_priv\n        path = self.mom.get_formed_path(self.mom.pbs_conf['PBS_HOME'],\n                                        'mom_priv', 'jobs', jid + '.TK')\n        self.logger.info('Creating directory %s', path)\n        self.du.mkdir(hostname=self.mom.hostname, path=path, sudo=True)\n\n        # Rls the job, ensure it finishes and the directory no longer exists\n        self.server.rlsjob(jid, USER_HOLD)\n        self.server.expect(JOB, 'queue', id=jid, op=UNSET, max_attempts=20,\n                           interval=1, offset=1)\n        ret = self.du.isdir(hostname=self.mom.hostname, path=path, sudo=True)\n        self.assertFalse(ret, 'Directory %s still exists.' % path)\n\n    def test_existing_directory_shortid(self):\n        \"\"\"\n        If a job directory already exists, the mom should clean it up\n        after rejecting the job. 
When the server sends the request for\n        the second time, the mom will run the job correctly.\n        The mom has special code if the job id is less than 11 characters,\n        so this will test job ids shorter than 11 characters.\n        \"\"\"\n\n        # Change the server name to create a job id shorter than 11 characters\n        self.change_server_name('svr')\n\n        # Submit a held job so the job id is known\n        j = Job(TEST_USER, attrs={ATTR_h: None})\n        j.set_sleep_time(3)\n        jid = self.server.submit(j)\n\n        # Create the job directory in mom_priv\n        path = self.mom.get_formed_path(self.mom.pbs_conf['PBS_HOME'],\n                                        'mom_priv', 'jobs', jid + '.TK')\n        self.logger.info('Creating directory %s', path)\n        self.du.mkdir(hostname=self.mom.hostname, path=path, sudo=True)\n\n        # Rls the job, ensure it finishes and the directory no longer exists\n        self.server.rlsjob(jid, USER_HOLD)\n        self.server.expect(JOB, 'queue', id=jid, op=UNSET, max_attempts=20,\n                           interval=1, offset=1)\n        ret = self.du.isdir(hostname=self.mom.hostname, path=path, sudo=True)\n        self.assertFalse(ret, 'Directory %s still exists.' % path)\n"
  },
  {
    "path": "test/tests/functional/pbs_mom_local_nodename.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nfrom tests.functional import *\nfrom ptl.lib.pbs_testlib import BatchUtils\nimport socket\n\n\nclass TestMomLocalNodeName(TestFunctional):\n    \"\"\"\n    This test suite tests that mom sets its short name correctly\n    and that mom.get_local_nodename() returns the correct value\n    \"\"\"\n\n    def test_url_nodename_not_truncated(self):\n        \"\"\"\n        This test case tests that mom does not truncate the value of\n        PBS_MOM_NODE_NAME when it contains dots\n        \"\"\"\n        self.du.set_pbs_config(hostname=self.mom.shortname,\n                               confs={'PBS_MOM_NODE_NAME': 'a.b.c.d'})\n        # Restart PBS for changes\n        self.server.restart()\n        self.mom.restart()\n        # add hook\n        a = {'event': 'execjob_begin', 'enabled': 'True'}\n        hook_name = \"begin\"\n        hook_body = \"\"\"\nimport pbs\npbs.logmsg(pbs.LOG_DEBUG,\n           'my local nodename is %s'\n           % pbs.get_local_nodename())\n\"\"\"\n        self.server.create_import_hook(hook_name, a, hook_body)\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.mom.log_match(\"my local nodename is a.b.c.d\")\n\n    def test_ip_nodename_not_truncated(self):\n        \"\"\"\n        This test case tests that mom does not truncate the value of\n        PBS_MOM_NODE_NAME when it is an IP address\n        
\"\"\"\n        ipaddr = socket.gethostbyname(self.mom.hostname)\n        self.du.set_pbs_config(hostname=self.mom.shortname,\n                               confs={'PBS_MOM_NODE_NAME': ipaddr})\n        # Restart PBS for changes\n        self.server.restart()\n        self.mom.restart()\n        # add hook\n        a = {'event': 'execjob_begin', 'enabled': 'True'}\n        hook_name = \"begin\"\n        hook_body = \"\"\"\nimport pbs\npbs.logmsg(pbs.LOG_DEBUG,\n           'my local nodename is %s'\n           % pbs.get_local_nodename())\n\"\"\"\n        self.server.create_import_hook(hook_name, a, hook_body)\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.mom.log_match(\"my local nodename is %s\" % ipaddr)\n\n    def tearDown(self):\n        self.du.unset_pbs_config(hostname=self.mom.shortname,\n                                 confs=['PBS_MOM_NODE_NAME'])\n        self.server.restart()\n        self.server.manager(MGR_CMD_DELETE, VNODE, id=\"@default\",\n                            runas=ROOT_USER)\n        self.mom.restart()\n        TestFunctional.tearDown(self)\n"
  },
  {
    "path": "test/tests/functional/pbs_mom_mock_run.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\n@skipOnCpuSet\nclass TestMomMockRun(TestFunctional):\n\n    def test_rsc_used(self):\n        \"\"\"\n        Test that resources_used are set correctly by mom under mock run\n        \"\"\"\n        # Kill the existing mom process\n        self.mom.stop()\n\n        # Start mom in mock run mode\n        mompath = os.path.join(self.server.pbs_conf[\"PBS_EXEC\"], \"sbin\",\n                               \"pbs_mom\")\n        cmd = [mompath, \"-m\"]\n        self.du.run_cmd(hosts=self.mom.shortname, cmd=cmd, sudo=True)\n\n        # Submit a job requesting ncpus, mem and walltime\n        attr = {ATTR_l + \".select\": \"1:ncpus=1:mem=5mb\",\n                ATTR_l + \".walltime\": \"00:00:20\"}\n        j = Job(attrs=attr)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        self.logger.info(\"Waiting until job finishes\")\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid, offset=22)\n\n        # Check accounting record for this job\n        used_ncpus = \"resources_used.ncpus=1\"\n        self.server.accounting_match(msg=used_ncpus, id=jid, n='ALL')\n        used_mem = \"resources_used.mem=5mb\"\n        self.server.accounting_match(msg=used_mem, id=jid, n='ALL')\n        used_walltime = \"resources_used.walltime=00:00:00\"\n        
self.server.accounting_match(\n            msg=\"resources_used.walltime\", id=jid, n='ALL')\n        self.server.accounting_match(\n            msg=used_walltime, existence=False, id=jid,\n            max_attempts=1, n='ALL')\n"
  },
  {
    "path": "test/tests/functional/pbs_mom_walltime.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\nfrom ptl.lib.pbs_testlib import BatchUtils\n\n\nclass TestMomWalltime(TestFunctional):\n\n    def test_mom_hook_not_counted_in_walltime(self):\n        \"\"\"\n        Test that time spent on mom hooks is not counted in walltime of the job\n        \"\"\"\n        hook_name_event_dict = {\n            'begin': 'execjob_begin',\n            'prologue': 'execjob_prologue',\n            'launch': 'execjob_launch',\n            'epilogue': 'execjob_epilogue',\n            'preterm': 'execjob_preterm',\n            'end': 'execjob_end'\n        }\n        hook_script = (\n            \"import pbs\\n\"\n            \"import time\\n\"\n            \"time.sleep(2)\\n\"\n            \"pbs.event().accept()\\n\"\n        )\n        hook_attrib = {'event': '', 'enabled': 'True'}\n        for name, event in hook_name_event_dict.items():\n            hook_attrib['event'] = event\n            self.server.create_import_hook(name, hook_attrib, hook_script)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'job_history_enable': 'True'})\n        job = Job(TEST_USER)\n        job.set_sleep_time(3)\n        jid = self.server.submit(job)\n\n        self.server.expect(JOB, {ATTR_state: 'F'}, id=jid, extend='x',\n                           offset=15)\n        self.server.expect(JOB, {'resources_used.walltime': 5}, 
op=LE, id=jid,\n                           extend='x')\n\n    def test_hold_time_not_counted_in_walltime(self):\n        \"\"\"\n        Test that hold time is not counted in walltime\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'job_history_enable': 'True'})\n\n        a = {'Resource_List.ncpus': 1}\n        J1 = Job(TEST_USER, attrs=a)\n        J1.set_sleep_time(60)\n        jid1 = self.server.submit(J1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        # Wait for the job to run for some time\n        time.sleep(15)\n        self.server.expect(JOB, {'resources_used.walltime': 0}, op=GT, id=jid1,\n                           extend='x')\n\n        self.server.holdjob(jid1, USER_HOLD)\n        self.server.rerunjob(jid1)\n        self.server.expect(JOB, {'Hold_Types': 'u'}, jid1)\n        # Wait for some time to verify that this time is not\n        # accounted in 'resources_used.walltime'\n        time.sleep(20)\n        self.server.rlsjob(jid1, USER_HOLD)\n        self.server.expect(JOB, {ATTR_state: 'F'}, id=jid1, extend='x',\n                           offset=45)\n        # Verify that the job's walltime is between 60 and 70 seconds\n        self.server.expect(JOB, {'resources_used.walltime': 60}, op=GE,\n                           id=jid1, extend='x')\n        self.server.expect(JOB, {'resources_used.walltime': 70}, op=LE,\n                           id=jid1, extend='x')\n\n    def test_suspend_time_not_counted_in_walltime(self):\n        \"\"\"\n        Test that suspend time is not counted in walltime\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'job_history_enable': 'True'})\n        a = {'Resource_List.ncpus': 1}\n\n        script_content = (\n            '#!/bin/bash\\n'\n            'for i in {1..30}\\n'\n            'do\\n'\n            '\\techo \"time wait\"\\n'\n            '\\tsleep 1\\n'\n            'done'\n        )\n\n        J1 = 
Job(TEST_USER, attrs=a)\n        J1.create_script(body=script_content)\n        jid1 = self.server.submit(J1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n\n        # Accumulate wall time\n        time.sleep(10)\n\n        self.server.sigjob(jobid=jid1, signal=\"suspend\")\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid1)\n\n        # Make sure the sched cycle is completed before reading\n        # the walltime\n        self.server.manager(MGR_CMD_SET, MGR_OBJ_SERVER,\n                            {'scheduling': 'True'})\n        self.server.manager(MGR_CMD_SET, MGR_OBJ_SERVER,\n                            {'scheduling': 'False'})\n\n        jstat = self.server.status(JOB, id=jid1,\n                                   attrib=['resources_used.walltime'])\n        walltime = BatchUtils().convert_duration(\n            jstat[0]['resources_used.walltime'])\n        self.logger.info(\"Walltime before sleep: %d secs\" % walltime)\n        self.server.manager(MGR_CMD_SET, MGR_OBJ_SERVER,\n                            {'scheduling': 'True'})\n\n        # Sleep for the job's entire walltime secs so we can catch any\n        # walltime increment during job suspension time\n        self.logger.info(\"Suspending job for 30s, job's execution time. 
\" +\n                         \"Walltime should not get incremented while job \" +\n                         \"is suspended\")\n        time.sleep(30)\n\n        # Used walltime should remain the same\n        self.server.expect(JOB, {'resources_used.walltime': walltime}, op=EQ,\n                           id=jid1)\n\n        self.server.sigjob(jobid=jid1, signal=\"resume\")\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(JOB, {ATTR_state: 'F'}, id=jid1, extend='x',\n                           offset=20)\n\n        # Verify if the job's total walltime is within limits\n        # Adding 10s buffer since min mom poll time is 10s\n        jstat = self.server.status(JOB, id=jid1,\n                                   attrib=['resources_used.walltime'],\n                                   extend='x')\n        walltime_final = BatchUtils().convert_duration(\n            jstat[0]['resources_used.walltime'])\n        self.assertGreater(walltime_final, 0,\n                           'Error fetching resources_used.walltime value')\n        self.logger.info(\"Walltime at job completion: %d secs\"\n                         % walltime_final)\n        self.assertIn(walltime_final, range(25, 41),\n                      'Walltime is not in expected range')\n\n    def test_mom_restart(self):\n        \"\"\"\n        Test that time spent on jobs running on MoM will not reset when\n        MoM is restarted\n        \"\"\"\n        job = Job(TEST_USER)\n        job.set_sleep_time(300)\n        jid = self.server.submit(job)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n        self.server.expect(JOB, {'resources_used.walltime': 30}, op=GT,\n                           id=jid, offset=30)\n\n        self.mom.stop(sig='-INT')\n        self.mom.start(args=['-p'])\n\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n        try:\n            self.assertFalse(\n                self.server.expect(JOB, {'resources_used.walltime': 
30},\n                                   op=LT, id=jid, max_attempts=5, interval=5))\n        except PtlExpectError:\n            pass\n"
  },
  {
    "path": "test/tests/functional/pbs_moved_job.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestMovedJob(TestFunctional):\n    \"\"\"\n    This test suite tests moved jobs between two servers\n    \"\"\"\n    @timeout(500)\n    def test_moved_job_history(self):\n        \"\"\"\n        This test verifies that a moved (M) job is preserved in the system\n        at least until the job is really finished on the target server.\n        Once the job is finished, this test also checks whether\n        the M job is removed after <job_history_duration>.\n        \"\"\"\n        # Skip test if number of servers is not equal to two\n        if len(self.servers) != 2:\n            self.skipTest(\"test requires at least two servers as input, \" +\n                          \"use -p servers=<server1:server2>,moms=<server1>\")\n\n        second_server = list(self.servers.keys())[1]\n\n        attr = {'job_history_enable': 'True', 'job_history_duration': 5}\n        self.servers[second_server].manager(MGR_CMD_SET, SERVER, attr)\n\n        attr = {'queue_type': 'execution',\n                'started': 'True', 'enabled': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, attr, id='p2p')\n        self.servers[second_server].manager(MGR_CMD_CREATE, QUEUE,\n                                            attr, id='p2p')\n\n        attr = {'peer_queue': '\\\"p2p p2p@' + second_server + '\\\"'}\n        
self.scheduler.set_sched_config(attr)\n\n        attr = {'scheduler_iteration': 5}\n        self.server.manager(MGR_CMD_SET, SERVER, attr)\n        self.server.restart()\n\n        attr = {ATTR_queue: \"p2p\", ATTR_j: \"oe\",\n                ATTR_W: \"Output_Path=%s:/dev/null\"\n                % list(self.servers.keys())[0],\n                'Resource_List.select': 'host=%s'\n                % list(self.moms.keys())[0]}\n        j = Job(TEST_USER, attrs=attr)\n        j.set_sleep_time(300)\n        jid = self.servers[second_server].submit(j)\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.servers[second_server].expect(JOB, {'job_state': 'M'},\n                                           id=jid, extend='x')\n\n        # history work task runs every two minutes\n        self.logger.info(\"Wait for history work task to process...\")\n        time.sleep(125)\n\n        # the jid should still be present with state M\n        self.servers[second_server].expect(JOB, {'job_state': 'M'},\n                                           id=jid, extend='x')\n\n        self.server.delete(id=jid, wait=True)\n\n        # history work task runs every two minutes\n        self.logger.info(\"Wait for history work task to process...\")\n        time.sleep(125)\n\n        # the jid should be gone\n        try:\n            qstat = self.servers[second_server].status(JOB, 'status',\n                                                       id=jid,\n                                                       extend='x')\n        except PbsStatusError as err:\n            #  rc = 153 is for 'Unknown Job Id'\n            self.assertEqual(err.rc, 153)\n            qstat = \"\"\n\n        self.assertEqual(qstat, \"\")\n\n    def test_movejob_to_unknown_host(self):\n        \"\"\"\n        This test verifies the error message qmove gives when a user tries to\n        move a job to an unknown server\n        \"\"\"\n\n        attr = {'scheduling': 'false'}\n        
self.server.manager(MGR_CMD_SET, SERVER, attr)\n\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n\n        err = \"Access from host not allowed, or unknown host \" + jid\n        with self.assertRaises(PbsMoveError) as e:\n            self.server.movejob(jid, 'workq@unknownserver')\n        self.assertIn(err, e.exception.msg[0])\n"
  },
  {
    "path": "test/tests/functional/pbs_moved_job_local.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestMovedJobLocal(TestFunctional):\n    \"\"\"\n    This test suite tests moved jobs between two queues\n    \"\"\"\n\n    def test_moved_job_acl_hosts_allow(self):\n        \"\"\"\n        This test verifies that a job can be moved\n        into a queue with acl_host_enable set.\n        \"\"\"\n        queue = 'testq'\n        a = {'queue_type': 'Execution',\n             'enabled': 'True',\n             'started': 'True',\n             'acl_host_enable': 'True',\n             'acl_hosts': list(self.servers.keys())[0]}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id=queue)\n        a = {'scheduling': 'False'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.movejob(jid, queue, runas=TEST_USER)\n        a = {'scheduling': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.server.expect(JOB, {ATTR_queue: queue, 'job_state': 'R'},\n                           attrop=PTL_AND)\n\n    def test_moved_job_acl_hosts_denial(self):\n        \"\"\"\n        This test verifies that a job cannot be moved\n        into a queue with acl_host_enable set without the right hostname.\n        \"\"\"\n        queue = 'testq'\n        a = {'queue_type': 'Execution',\n             'enabled': 'True',\n             
'started': 'True',\n             'acl_host_enable': 'True',\n             'acl_hosts': 'foo'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id=queue)\n        a = {'scheduling': 'False'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        err = \"Access from host not allowed, or unknown host \" + jid\n        with self.assertRaises(PbsMoveError) as e:\n            self.server.movejob(jid, queue, runas=TEST_USER)\n        self.assertIn(err, e.exception.msg[0])\n"
  },
  {
    "path": "test/tests/functional/pbs_multi_sched.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport resource\n\nfrom tests.functional import *\n\n\nclass TestMultipleSchedulers(TestFunctional):\n\n    \"\"\"\n    Test suite to test different scheduler interfaces\n    \"\"\"\n\n    def setup_sc1(self):\n        a = {'partition': 'P1',\n             'sched_host': self.server.hostname}\n        self.server.manager(MGR_CMD_CREATE, SCHED,\n                            a, id=\"sc1\")\n        self.scheds['sc1'].create_scheduler()\n        self.scheds['sc1'].start()\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'True'}, id=\"sc1\")\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events': 2047}, id='sc1')\n\n    def setup_sc2(self):\n        dir_path = os.path.join(os.sep, 'var', 'spool', 'pbs', 'sched_dir')\n        if not os.path.exists(dir_path):\n            self.du.mkdir(path=dir_path, sudo=True)\n        a = {'partition': 'P2',\n             'sched_priv': os.path.join(dir_path, 'sched_priv_sc2'),\n             'sched_log': os.path.join(dir_path, 'sched_logs_sc2'),\n             'sched_host': self.server.hostname}\n        self.server.manager(MGR_CMD_CREATE, SCHED,\n                            a, id=\"sc2\")\n        self.scheds['sc2'].create_scheduler(dir_path)\n        self.scheds['sc2'].start(dir_path)\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 
'True'}, id=\"sc2\")\n\n    def setup_sc3(self):\n        a = {'partition': 'P3',\n             'sched_host': self.server.hostname}\n        self.server.manager(MGR_CMD_CREATE, SCHED,\n                            a, id=\"sc3\")\n        self.scheds['sc3'].create_scheduler()\n        self.scheds['sc3'].start()\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'True'}, id=\"sc3\")\n\n    def setup_queues_nodes(self):\n        a = {'queue_type': 'execution',\n             'started': 'True',\n             'enabled': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='wq1')\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='wq2')\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='wq3')\n        p1 = {'partition': 'P1'}\n        self.server.manager(MGR_CMD_SET, QUEUE, p1, id='wq1')\n        p2 = {'partition': 'P2'}\n        self.server.manager(MGR_CMD_SET, QUEUE, p2, id='wq2')\n        p3 = {'partition': 'P3'}\n        self.server.manager(MGR_CMD_SET, QUEUE, p3, id='wq3')\n        a = {'resources_available.ncpus': 2}\n        self.mom.create_vnodes(a, 4)\n        vnode0 = self.mom.shortname + '[0]'\n        vnode1 = self.mom.shortname + '[1]'\n        vnode2 = self.mom.shortname + '[2]'\n        self.server.manager(MGR_CMD_SET, NODE, p1, id=vnode0)\n        self.server.manager(MGR_CMD_SET, NODE, p2, id=vnode1)\n        self.server.manager(MGR_CMD_SET, NODE, p3, id=vnode2)\n\n    def common_setup(self):\n        self.setup_sc1()\n        self.setup_sc2()\n        self.setup_sc3()\n        self.setup_queues_nodes()\n\n    def check_vnodes(self, j, vnodes, jid):\n        self.server.status(JOB, 'exec_vnode', id=jid)\n        nodes = j.get_vnodes(j.exec_vnode)\n        for vnode in vnodes:\n            if vnode not in nodes:\n                self.assertFalse(True, str(vnode) +\n                                 \" is not in exec_vnode list as expected\")\n\n    def get_tzid(self):\n        if 'PBS_TZID' 
in self.conf:\n            tzone = self.conf['PBS_TZID']\n        elif 'PBS_TZID' in os.environ:\n            tzone = os.environ['PBS_TZID']\n        else:\n            tzone = 'America/Los_Angeles'\n        return tzone\n\n    def set_scheduling(self, scheds=None, op=False):\n        if scheds is not None:\n            for each in scheds:\n                self.server.manager(MGR_CMD_SET, SCHED, {'scheduling': op},\n                                    id=each)\n\n    def delete_sched(self, sched_name):\n        \"\"\"\n        Helper function to delete a scheduler\n        \"\"\"\n        self.scheds[sched_name].terminate()\n        sched_log = self.scheds[sched_name].attributes['sched_log']\n        sched_priv = self.scheds[sched_name].attributes['sched_priv']\n        self.du.rm(path=sched_log, sudo=True, recursive=True, force=True)\n        self.du.rm(path=sched_priv, sudo=True, recursive=True, force=True)\n        self.server.manager(MGR_CMD_DELETE, SCHED, id=sched_name)\n\n    def test_job_sort_formula_multisched(self):\n        \"\"\"\n        Test that job_sort_formula can be set for each sched\n        \"\"\"\n        self.common_setup()\n\n        # Set JSF on server and test that it is used by all scheds\n        self.server.manager(MGR_CMD_SET, SERVER, {\n                            'job_sort_formula': '1*walltime'})\n\n        # Submit 2 jobs to each sched with different walltimes and\n        # test that the one with higher walltime is scheduled first\n        queues = ['wq1', 'wq2', 'wq3']\n        for i in range(1, 4):\n            scid = \"sc\" + str(i)\n            self.server.manager(MGR_CMD_SET, SCHED,\n                                {'scheduling': 'False'}, id=scid)\n            a = {'Resource_List.walltime': 100, ATTR_queue: queues[i - 1],\n                 'Resource_List.ncpus': 2}\n            j = Job(TEST_USER1, attrs=a)\n            jid1 = self.server.submit(j)\n            a['Resource_List.walltime'] = 1000\n            j = Job(TEST_USER1, 
attrs=a)\n            jid2 = self.server.submit(j)\n            self.server.manager(MGR_CMD_SET, SCHED,\n                                {'scheduling': 'True'}, id=scid)\n            self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n            self.server.expect(JOB, {'job_state': 'Q'}, id=jid1)\n\n        # Set a different JSF on sc1, this should fail\n        try:\n            self.server.manager(MGR_CMD_SET, SCHED,\n                                {'job_sort_formula': '2*walltime'}, id='sc1',\n                                logerr=False)\n            self.fail(\"Setting job_sort_formula on sched should have failed\")\n        except PbsManagerError:\n            pass\n\n        # Unset server's JSF and set sc1's JSF again\n        self.server.manager(MGR_CMD_UNSET, SERVER, 'job_sort_formula')\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'job_sort_formula': '2*walltime'}, id='sc1')\n\n        self.server.cleanup_jobs()\n\n        # Submit 2 jobs with different walltimes to each sched again\n        # This time, sc1 should be the only sched to care about walltime\n        for i in range(1, 4):\n            scid = \"sc\" + str(i)\n            self.server.manager(MGR_CMD_SET, SCHED,\n                                {'scheduling': 'False'}, id=scid)\n            a = {'Resource_List.walltime': 100, ATTR_queue: queues[i - 1],\n                 'Resource_List.ncpus': 2}\n            j = Job(TEST_USER1, attrs=a)\n            jid1 = self.server.submit(j)\n            a['Resource_List.walltime'] = 1000\n            j = Job(TEST_USER1, attrs=a)\n            jid2 = self.server.submit(j)\n            self.server.manager(MGR_CMD_SET, SCHED,\n                                {'scheduling': 'True'}, id=scid)\n            if scid == \"sc1\":\n                self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n                self.server.expect(JOB, {'job_state': 'Q'}, id=jid1)\n            else:\n                self.server.expect(JOB, 
{'job_state': 'Q'}, id=jid2)\n                self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n\n    def test_set_sched_priv(self):\n        \"\"\"\n        Test that sched_priv can only be set to valid paths\n        and check for appropriate comments\n        \"\"\"\n        self.setup_sc1()\n        if not os.path.exists('/var/sched_priv_do_not_exist'):\n            self.server.manager(MGR_CMD_SET, SCHED,\n                                {'sched_priv': '/var/sched_priv_do_not_exist'},\n                                id=\"sc1\")\n        msg = 'PBS failed validation checks for sched_priv directory'\n        a = {'sched_priv': '/var/sched_priv_do_not_exist',\n             'comment': msg,\n             'scheduling': 'False'}\n        self.server.expect(SCHED, a, id='sc1', attrop=PTL_AND, max_attempts=10)\n        pbs_home = self.server.pbs_conf['PBS_HOME']\n        self.du.run_copy(self.server.hostname,\n                         src=os.path.join(pbs_home, 'sched_priv'),\n                         dest=os.path.join(pbs_home, 'sc1_new_priv'),\n                         recursive=True)\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'sched_priv': '/var/spool/pbs/sc1_new_priv'},\n                            id=\"sc1\")\n        a = {'sched_priv': '/var/spool/pbs/sc1_new_priv'}\n        self.server.expect(SCHED, a, id='sc1', max_attempts=10)\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'True'}, id=\"sc1\")\n        # Blocked by PP-1202; will revisit once it's fixed\n        # self.server.expect(SCHED, 'comment', id='sc1', op=UNSET)\n\n    def test_set_sched_log(self):\n        \"\"\"\n        Test that sched_log can only be set to valid paths\n        and check for appropriate comments\n        \"\"\"\n        self.setup_sc1()\n        if not os.path.exists('/var/sched_log_do_not_exist'):\n            self.server.manager(MGR_CMD_SET, SCHED,\n                                {'sched_log': 
'/var/sched_log_do_not_exist'},\n                                id=\"sc1\")\n        a = {'sched_log': '/var/sched_log_do_not_exist',\n             'comment': 'Unable to change the sched_log directory',\n             'scheduling': 'False'}\n        self.server.expect(SCHED, a, id='sc1', max_attempts=10)\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'True'}, id=\"sc1\")\n        pbs_home = self.server.pbs_conf['PBS_HOME']\n        self.du.mkdir(path=os.path.join(pbs_home, 'sc1_new_logs'),\n                      sudo=True)\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'sched_log': '/var/spool/pbs/sc1_new_logs'},\n                            id=\"sc1\")\n        a = {'sched_log': '/var/spool/pbs/sc1_new_logs'}\n        self.server.expect(SCHED, a, id='sc1', max_attempts=10)\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'True'}, id=\"sc1\")\n        # Blocked by PP-1202; will revisit once it's fixed\n        # self.server.expect(SCHED, 'comment', id='sc1', op=UNSET)\n\n    def test_start_scheduler(self):\n        \"\"\"\n        Test that scheduler won't start without appropriate folders created.\n        Scheduler will log a message if started without partition. 
Test\n        scheduler states down, idle, scheduling.\n        \"\"\"\n        self.setup_queues_nodes()\n        pbs_home = self.server.pbs_conf['PBS_HOME']\n        self.server.manager(MGR_CMD_CREATE, SCHED,\n                            id=\"sc5\")\n        a = {'sched_host': self.server.hostname,\n             'scheduling': 'True'}\n        self.server.manager(MGR_CMD_SET, SCHED, a, id=\"sc5\")\n        # Try starting without sched_priv and sched_logs\n        ret = self.scheds['sc5'].start()\n        self.server.expect(SCHED, {'state': 'down'}, id='sc5', max_attempts=10)\n        msg = \"sched_priv dir is not present for scheduler\"\n        self.assertTrue(ret['rc'], msg)\n        self.du.run_copy(self.server.hostname,\n                         src=os.path.join(pbs_home, 'sched_priv'),\n                         dest=os.path.join(pbs_home, 'sched_priv_sc5'),\n                         recursive=True, sudo=True)\n        ret = self.scheds['sc5'].start()\n        msg = \"sched_logs dir is not present for scheduler\"\n        self.assertTrue(ret['rc'], msg)\n        self.du.run_copy(self.server.hostname,\n                         src=os.path.join(pbs_home, 'sched_logs'),\n                         dest=os.path.join(pbs_home, 'sched_logs_sc5'),\n                         recursive=True, sudo=True)\n        ret = self.scheds['sc5'].start()\n        self.scheds['sc5'].log_match(\n            \"Scheduler does not contain a partition\",\n            max_attempts=10, starttime=self.server.ctime)\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'partition': 'P3'}, id=\"sc5\")\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'True'}, id=\"sc5\")\n        self.server.expect(SCHED, {'state': 'idle'}, id='sc5', max_attempts=10)\n        a = {'resources_available.ncpus': 100}\n        vn = self.mom.shortname\n        self.server.manager(MGR_CMD_SET, NODE, a, id=vn + '[2]')\n        
self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'False'}, id=\"sc5\")\n        for _ in range(500):\n            j = Job(TEST_USER1, attrs={ATTR_queue: 'wq3'})\n            self.server.submit(j)\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'True'}, id=\"sc5\")\n        self.server.expect(SCHED, {'state': 'scheduling'},\n                           id='sc5', max_attempts=10)\n\n    def test_resource_sched_reconfigure(self):\n        \"\"\"\n        Test that all schedulers reconfigure while creating,\n        setting or deleting a resource\n        \"\"\"\n        self.common_setup()\n        t = time.time()\n        self.server.manager(MGR_CMD_CREATE, RSC, id='foo')\n        for name in self.scheds:\n            self.scheds[name].log_match(\n                \"Scheduler is reconfiguring\",\n                max_attempts=10, starttime=t)\n        # sleeping to make sure we are not checking for the\n        # same scheduler reconfiguring message again\n        time.sleep(1)\n        t = time.time()\n        attr = {ATTR_RESC_TYPE: 'long'}\n        self.server.manager(MGR_CMD_SET, RSC, attr, id='foo')\n        for name in self.scheds:\n            self.scheds[name].log_match(\n                \"Scheduler is reconfiguring\",\n                max_attempts=10, starttime=t)\n        # sleeping to make sure we are not checking for the\n        # same scheduler reconfiguring message again\n        time.sleep(1)\n        t = time.time()\n        self.server.manager(MGR_CMD_DELETE, RSC, id='foo')\n        for name in self.scheds:\n            self.scheds[name].log_match(\n                \"Scheduler is reconfiguring\",\n                max_attempts=10, starttime=t)\n\n    def test_remove_partition_sched(self):\n        \"\"\"\n        Test that removing all the partitions from a scheduler\n        unsets the partition attribute and updates the scheduler logs.\n        \"\"\"\n        
self.setup_sc1()\n        self.server.manager(MGR_CMD_UNSET, SCHED,\n                            'partition', id=\"sc1\")\n        self.server.manager(MGR_CMD_SET, SCHED, {'scheduling': 'True'},\n                            id=\"sc1\")\n        log_msg = \"Scheduler does not contain a partition\"\n        self.scheds['sc1'].log_match(log_msg, max_attempts=10,\n                                     starttime=self.server.ctime)\n        # Blocked by PP-1202; will revisit once it's fixed\n        # self.server.manager(MGR_CMD_UNSET, SCHED, 'partition', id=\"sc2\")\n\n    def test_job_queue_partition(self):\n        \"\"\"\n        Test that a job submitted to a queue associated with a partition\n        lands on a node associated with that partition.\n        \"\"\"\n        self.common_setup()\n        vn = ['%s[%d]' % (self.mom.shortname, i) for i in range(3)]\n        j = Job(TEST_USER1, attrs={ATTR_queue: 'wq1',\n                                   'Resource_List.select': '1:ncpus=2'})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.check_vnodes(j, [vn[0]], jid)\n        self.scheds['sc1'].log_match(\n            jid + ';Job run', max_attempts=10,\n            starttime=self.server.ctime)\n        j = Job(TEST_USER1, attrs={ATTR_queue: 'wq2',\n                                   'Resource_List.select': '1:ncpus=2'})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.check_vnodes(j, [vn[1]], jid)\n        self.scheds['sc2'].log_match(\n            jid + ';Job run', max_attempts=10,\n            starttime=self.server.ctime)\n        j = Job(TEST_USER1, attrs={ATTR_queue: 'wq3',\n                                   'Resource_List.select': '1:ncpus=2'})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.check_vnodes(j, [vn[2]], jid)\n        self.scheds['sc3'].log_match(\n            jid + 
';Job run', max_attempts=10,\n            starttime=self.server.ctime)\n\n    def test_multiple_queue_same_partition(self):\n        \"\"\"\n        Test that multiple queues associated with the same partition\n        are serviced by the same scheduler\n        \"\"\"\n        self.setup_sc1()\n        self.setup_queues_nodes()\n        vn0 = self.mom.shortname + '[0]'\n        j = Job(TEST_USER1, attrs={ATTR_queue: 'wq1',\n                                   'Resource_List.select': '1:ncpus=1'})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.check_vnodes(j, [vn0], jid)\n        self.scheds['sc1'].log_match(\n            jid + ';Job run', max_attempts=10,\n            starttime=self.server.ctime)\n        p1 = {'partition': 'P1'}\n        self.server.manager(MGR_CMD_SET, QUEUE, p1, id='wq3')\n        j = Job(TEST_USER1, attrs={ATTR_queue: 'wq3',\n                                   'Resource_List.select': '1:ncpus=1'})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.check_vnodes(j, [vn0], jid)\n        self.scheds['sc1'].log_match(\n            jid + ';Job run', max_attempts=10,\n            starttime=self.server.ctime)\n\n    def test_preemption_highp_queue(self):\n        \"\"\"\n        Test that preemption occurs only within queues that are assigned\n        to the same partition\n        \"\"\"\n        self.common_setup()\n        prio = {'Priority': 150, 'partition': 'P1'}\n        self.server.manager(MGR_CMD_SET, QUEUE, prio, id='wq3')\n        j = Job(TEST_USER1, attrs={ATTR_queue: 'wq1',\n                                   'Resource_List.select': '1:ncpus=2'})\n        jid1 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n\n        t = time.time()\n        j = Job(TEST_USER1, attrs={ATTR_queue: 'wq3',\n                                   'Resource_List.select': '1:ncpus=2'})\n        jid2 = 
self.server.submit(j)\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        j = Job(TEST_USER1, attrs={ATTR_queue: 'wq3',\n                                   'Resource_List.select': '1:ncpus=2'})\n        jid3 = self.server.submit(j)\n        self.server.expect(JOB, ATTR_comment, op=SET, id=jid3)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid3)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid1)\n        self.scheds['sc1'].log_match(\n            jid1 + ';Job preempted by suspension',\n            max_attempts=10, starttime=t)\n\n    def test_preemption_two_sched(self):\n        \"\"\"\n        Test two schedulers preempting jobs at the same time\n        \"\"\"\n        self.common_setup()\n        q = {'queue_type': 'Execution', 'started': 'True',\n             'enabled': 'True', 'Priority': 150}\n        q['partition'] = 'P1'\n        self.server.manager(MGR_CMD_CREATE, QUEUE, q, id='highp_P1')\n        q['partition'] = 'P2'\n        self.server.manager(MGR_CMD_CREATE, QUEUE, q, id='highp_P2')\n\n        n = {'resources_available.ncpus': 20}\n        vn = ['%s[%d]' % (self.mom.shortname, i) for i in range(2)]\n        self.server.manager(MGR_CMD_SET, NODE, n, id=vn)\n\n        jids1 = []\n        job_attrs = {'Resource_List.select': '1:ncpus=1', 'queue': 'wq1'}\n        for _ in range(20):\n            j = Job(TEST_USER, job_attrs)\n            jid = self.server.submit(j)\n            jids1.append(jid)\n\n        jids2 = []\n        job_attrs['queue'] = 'wq2'\n        for _ in range(20):\n            j = Job(TEST_USER, job_attrs)\n            jid = self.server.submit(j)\n            jids2.append(jid)\n\n        self.server.expect(JOB, {'job_state=R': 40})\n\n        s = {'scheduling': 'False'}\n        self.server.manager(MGR_CMD_SET, SCHED, s, id='sc1')\n        self.server.manager(MGR_CMD_SET, SCHED, s, id='sc2')\n\n        job_attrs = {'Resource_List.select': '20:ncpus=1', 'queue': 'highp_P1'}\n        hj1 = 
Job(TEST_USER, job_attrs)\n        hj1_jid = self.server.submit(hj1)\n\n        job_attrs['queue'] = 'highp_P2'\n        hj2 = Job(TEST_USER, job_attrs)\n        hj2_jid = self.server.submit(hj2)\n\n        s = {'scheduling': 'True'}\n        self.server.manager(MGR_CMD_SET, SCHED, s, id='sc1')\n        self.server.manager(MGR_CMD_SET, SCHED, s, id='sc2')\n\n        for jid in jids1:\n            self.server.expect(JOB, {'job_state': 'S'}, id=jid)\n            self.scheds['sc1'].log_match(jid + ';Job preempted by suspension')\n\n        for jid in jids2:\n            self.server.expect(JOB, {'job_state': 'S'}, id=jid)\n            self.scheds['sc2'].log_match(jid + ';Job preempted by suspension')\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=hj1_jid)\n        self.server.expect(JOB, {'job_state': 'R'}, id=hj2_jid)\n\n    def test_backfill_per_scheduler(self):\n        \"\"\"\n        Test that backfilling is applied per scheduler\n        \"\"\"\n        self.common_setup()\n        t = time.time()\n        self.scheds['sc2'].set_sched_config(\n            {'strict_ordering': 'True ALL'})\n        a = {ATTR_queue: 'wq2',\n             'Resource_List.select': '1:ncpus=2',\n             'Resource_List.walltime': 60}\n        j = Job(TEST_USER1, attrs=a)\n        jid1 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        j = Job(TEST_USER1, attrs=a)\n        jid2 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n        self.scheds['sc2'].log_match(\n            jid2 + ';Job is a top job and will run at',\n            starttime=t)\n        a['queue'] = 'wq3'\n        j = Job(TEST_USER1, attrs=a)\n        jid3 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n        j = Job(TEST_USER1, attrs=a)\n        jid4 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid4)\n        self.scheds['sc3'].log_match(\n         
   jid4 + ';Job is a top job and will run at',\n            max_attempts=5, starttime=t, existence=False)\n\n    def test_resource_per_scheduler(self):\n        \"\"\"\n        Test that a resource is considered only by the scheduler\n        to which it has been added in sched_config\n        \"\"\"\n        self.common_setup()\n        a = {'type': 'float', 'flag': 'nh'}\n        self.server.manager(MGR_CMD_CREATE, RSC, a, id='gpus')\n        self.scheds['sc3'].add_resource(\"gpus\")\n        a = {'resources_available.gpus': 2}\n        self.server.manager(MGR_CMD_SET, NODE, a, id='@default')\n        j = Job(TEST_USER1, attrs={ATTR_queue: 'wq3',\n                                   'Resource_List.select': '1:gpus=2',\n                                   'Resource_List.walltime': 60})\n        jid1 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        j = Job(TEST_USER1, attrs={ATTR_queue: 'wq3',\n                                   'Resource_List.select': '1:gpus=2',\n                                   'Resource_List.walltime': 60})\n        jid2 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n        job_comment = \"Not Running: Insufficient amount of resource: \"\n        job_comment += \"gpus (R: 2 A: 0 T: 2)\"\n        self.server.expect(JOB, {'comment': job_comment}, id=jid2)\n        j = Job(TEST_USER1, attrs={ATTR_queue: 'wq2',\n                                   'Resource_List.select': '1:gpus=2'})\n        jid3 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n        j = Job(TEST_USER1, attrs={ATTR_queue: 'wq2',\n                                   'Resource_List.select': '1:gpus=2'})\n        jid4 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid4)\n\n    def test_restart_server(self):\n        \"\"\"\n        Test that sched attributes persist after a server restart\n        \"\"\"\n        self.setup_sc1()\n  
      sched_priv = os.path.join(\n            self.server.pbs_conf['PBS_HOME'], 'sched_priv_sc1')\n        sched_logs = os.path.join(\n            self.server.pbs_conf['PBS_HOME'], 'sched_logs_sc1')\n        a = {'sched_host': self.server.hostname,\n             'sched_priv': sched_priv,\n             'sched_log': sched_logs,\n             'scheduling': 'True',\n             'scheduler_iteration': 600,\n             'state': 'idle',\n             'sched_cycle_length': '00:20:00'}\n        self.server.expect(SCHED, a, id='sc1',\n                           attrop=PTL_AND, max_attempts=10)\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduler_iteration': 300,\n                             'sched_cycle_length': '00:10:00'},\n                            id='sc1')\n        self.server.restart()\n        a['scheduler_iteration'] = 300\n        a['sched_cycle_length'] = '00:10:00'\n        self.server.expect(SCHED, a, id='sc1',\n                           attrop=PTL_AND, max_attempts=10)\n\n    def test_job_sorted_per_scheduler(self):\n        \"\"\"\n        Test jobs are sorted as per job_sort_formula\n        inside each scheduler\n        \"\"\"\n        self.common_setup()\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'job_sort_formula': 'ncpus'})\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'False'}, id=\"default\")\n        j = Job(TEST_USER1, attrs={'Resource_List.select': '1:ncpus=1'})\n        jid1 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid1)\n        j = Job(TEST_USER1, attrs={'Resource_List.select': '1:ncpus=2'})\n        jid2 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'True'}, id=\"default\")\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        
self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'False'}, id=\"sc3\")\n        j = Job(TEST_USER1, attrs={ATTR_queue: 'wq3',\n                                   'Resource_List.select': '1:ncpus=1'})\n        jid3 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid3)\n        j = Job(TEST_USER1, attrs={ATTR_queue: 'wq3',\n                                   'Resource_List.select': '1:ncpus=2'})\n        jid4 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid4)\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'True'}, id=\"sc3\")\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid4)\n\n    def test_qrun_job(self):\n        \"\"\"\n        Test that jobs can be run via qrun by a newly created scheduler.\n        \"\"\"\n        self.setup_sc1()\n        self.setup_queues_nodes()\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'False'}, id=\"sc1\")\n        j = Job(TEST_USER1, attrs={ATTR_queue: 'wq1',\n                                   'Resource_List.select': '1:ncpus=2'})\n        jid1 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid1)\n        self.server.runjob(jid1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n\n    def test_run_limits_per_scheduler(self):\n        \"\"\"\n        Test that run limits applied at the server level are\n        applied to every scheduler separately.\n        \"\"\"\n        self.common_setup()\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'max_run': '[u:PBS_GENERIC=1]'})\n        j = Job(TEST_USER1, attrs={'Resource_List.select': '1:ncpus=1'})\n        jid1 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        j = Job(TEST_USER1, attrs={'Resource_List.select': '1:ncpus=1'})\n        jid2 = self.server.submit(j)\n        
self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n        jc = \"Not Running: User has reached server running job limit.\"\n        self.server.expect(JOB, {'comment': jc}, id=jid2)\n        j = Job(TEST_USER1, attrs={ATTR_queue: 'wq3',\n                                   'Resource_List.select': '1:ncpus=1'})\n        jid3 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n        j = Job(TEST_USER1, attrs={ATTR_queue: 'wq3',\n                                   'Resource_List.select': '1:ncpus=1'})\n        jid4 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid4)\n        jc = \"Not Running: User has reached server running job limit.\"\n        self.server.expect(JOB, {'comment': jc}, id=jid4)\n\n    def test_multi_fairshare(self):\n        \"\"\"\n        Test that different schedulers have their own fairshare trees with\n        their own usage\n        \"\"\"\n        self.common_setup()\n        default_shares = 10\n        default_usage = 100\n\n        sc1_shares = 20\n        sc1_usage = 200\n\n        sc2_shares = 30\n        sc2_usage = 300\n\n        sc3_shares = 40\n        sc3_usage = 400\n\n        self.scheds['default'].add_to_resource_group(TEST_USER, 10, 'root',\n                                                     default_shares)\n        self.scheds['default'].fairshare.set_fairshare_usage(\n            TEST_USER, default_usage)\n\n        self.scheds['sc1'].add_to_resource_group(TEST_USER, 10, 'root',\n                                                 sc1_shares)\n        self.scheds['sc1'].fairshare.set_fairshare_usage(TEST_USER, sc1_usage)\n\n        self.scheds['sc2'].add_to_resource_group(TEST_USER, 10, 'root',\n                                                 sc2_shares)\n        self.scheds['sc2'].fairshare.set_fairshare_usage(TEST_USER, sc2_usage)\n\n        
self.scheds['sc3'].add_to_resource_group(TEST_USER, 10, 'root',\n                                                 sc3_shares)\n        self.scheds['sc3'].fairshare.set_fairshare_usage(TEST_USER, sc3_usage)\n\n        # requery fairshare info from pbsfs\n        default_fs = self.scheds['default'].fairshare.query_fairshare()\n        sc1_fs = self.scheds['sc1'].fairshare.query_fairshare()\n        sc2_fs = self.scheds['sc2'].fairshare.query_fairshare()\n        sc3_fs = self.scheds['sc3'].fairshare.query_fairshare()\n\n        n = default_fs.get_node(id=10)\n        self.assertEqual(n.nshares, default_shares)\n        self.assertEqual(n.usage, default_usage)\n\n        n = sc1_fs.get_node(id=10)\n        self.assertEqual(n.nshares, sc1_shares)\n        self.assertEqual(n.usage, sc1_usage)\n\n        n = sc2_fs.get_node(id=10)\n        self.assertEqual(n.nshares, sc2_shares)\n        self.assertEqual(n.usage, sc2_usage)\n\n        n = sc3_fs.get_node(id=10)\n        self.assertEqual(n.nshares, sc3_shares)\n        self.assertEqual(n.usage, sc3_usage)\n\n    def test_fairshare_usage(self):\n        \"\"\"\n        Test the scheduler's fairshare usage file and\n        check that the usage file updates correctly\n        \"\"\"\n        self.setup_sc1()\n        a = {'queue_type': 'execution',\n             'started': 'True',\n             'enabled': 'True',\n             'partition': 'P1'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='wq1')\n        # Set resources on the node\n        if self.mom.is_cpuset_mom():\n            hostname = self.server.status(NODE)[1]['id']\n        else:\n            hostname = self.mom.shortname\n\n        resc = {'resources_available.ncpus': 1,\n                'partition': 'P1'}\n        self.server.manager(MGR_CMD_SET, NODE, resc, hostname)\n        # Add entry to the resource group of multisched 'sc1'\n        self.scheds['sc1'].add_to_resource_group('grp1', 100, 'root', 60)\n        
self.scheds['sc1'].add_to_resource_group('grp2', 200, 'root', 40)\n        self.scheds['sc1'].add_to_resource_group(TEST_USER1,\n                                                 101, 'grp1', 40)\n        self.scheds['sc1'].add_to_resource_group(TEST_USER2,\n                                                 102, 'grp1', 20)\n        self.scheds['sc1'].add_to_resource_group(TEST_USER3,\n                                                 201, 'grp2', 30)\n        self.scheds['sc1'].add_to_resource_group(TEST_USER4,\n                                                 202, 'grp2', 10)\n        # Set scheduler iteration\n        sc_attr = {'scheduler_iteration': 7,\n                   'scheduling': 'False'}\n        self.server.manager(MGR_CMD_SET, SCHED, sc_attr, id='sc1')\n        # Update scheduler config file\n        sc_config = {'fair_share': 'True',\n                     'fairshare_usage_res': 'ncpus*100'}\n        self.scheds['sc1'].set_sched_config(sc_config)\n        # submit jobs to multisched 'sc1'\n        sc1_attr = {ATTR_queue: 'wq1',\n                    'Resource_List.select': '1:ncpus=1',\n                    'Resource_List.walltime': 100}\n        sc1_J1 = Job(TEST_USER1, attrs=sc1_attr)\n        sc1_jid1 = self.server.submit(sc1_J1)\n        sc1_J2 = Job(TEST_USER2, attrs=sc1_attr)\n        sc1_jid2 = self.server.submit(sc1_J2)\n        sc1_J3 = Job(TEST_USER3, attrs=sc1_attr)\n        sc1_jid3 = self.server.submit(sc1_J3)\n        self.server.manager(MGR_CMD_SET, SCHED, {'scheduling': 'True'},\n                            id='sc1')\n        # pbsuser1 job will run and other two will be queued\n        self.server.expect(JOB, {'job_state': 'R'}, id=sc1_jid1)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=sc1_jid3)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=sc1_jid2)\n        # need to delete the running job because PBS has only 1 ncpu and\n        # our work is also done with the job.\n        # this step will decrease the execution 
time as well\n        self.server.manager(MGR_CMD_SET, SCHED, {'scheduling': 'True'},\n                            id='sc1')\n        self.server.delete(sc1_jid1, wait=True)\n        # pbsuser3 job will run after pbsuser1\n        self.server.expect(JOB, {'job_state': 'R'}, id=sc1_jid3)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=sc1_jid2)\n        self.server.manager(MGR_CMD_SET, SCHED, {'scheduling': 'True'},\n                            id='sc1')\n        # deleting the currently running job\n        self.server.delete(sc1_jid3, wait=True)\n        # pbsuser2 job will run in the end\n        self.server.expect(JOB, {'job_state': 'R'}, id=sc1_jid2)\n        self.server.manager(MGR_CMD_SET, SCHED, {'scheduling': 'True'},\n                            id='sc1')\n        # deleting the currently running job\n        self.server.delete(sc1_jid2, wait=True)\n        # query fairshare and check usage\n        sc1_fs_user1 = self.scheds['sc1'].fairshare.query_fairshare(\n            name=str(TEST_USER1))\n        self.assertEqual(sc1_fs_user1.usage, 101)\n        sc1_fs_user2 = self.scheds['sc1'].fairshare.query_fairshare(\n            name=str(TEST_USER2))\n        self.assertEqual(sc1_fs_user2.usage, 101)\n        sc1_fs_user3 = self.scheds['sc1'].fairshare.query_fairshare(\n            name=str(TEST_USER3))\n        self.assertEqual(sc1_fs_user3.usage, 101)\n        sc1_fs_user4 = self.scheds['sc1'].fairshare.query_fairshare(\n            name=str(TEST_USER4))\n        self.assertEqual(sc1_fs_user4.usage, 1)\n        # Restart the scheduler\n        t = time.time()\n        self.scheds['sc1'].restart()\n        # Check the multisched 'sc1' usage file whether it's updating or not\n        self.assertTrue(self.scheds['sc1'].isUp())\n        # The scheduler will set scheduler attributes on the first scheduling\n        # cycle, so we need to trigger a cycle, have the scheduler configure,\n        # then turn it off again\n        
self.server.manager(MGR_CMD_SET, SCHED, {'scheduling': 'True'},\n                            id='sc1')\n        self.scheds['sc1'].log_match(\"Scheduler is reconfiguring\", starttime=t)\n        self.server.manager(MGR_CMD_SET, SCHED, {'scheduling': 'False'},\n                            id='sc1')\n        sc1_J1 = Job(TEST_USER1, attrs=sc1_attr)\n        sc1_jid1 = self.server.submit(sc1_J1)\n        sc1_J2 = Job(TEST_USER2, attrs=sc1_attr)\n        sc1_jid2 = self.server.submit(sc1_J2)\n        sc1_J4 = Job(TEST_USER4, attrs=sc1_attr)\n        sc1_jid4 = self.server.submit(sc1_J4)\n        self.server.manager(MGR_CMD_SET, SCHED, {'scheduling': 'True'},\n                            id='sc1')\n        # pbsuser4 job will run and other two will be queued\n        self.server.expect(JOB, {'job_state': 'R'}, id=sc1_jid4)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=sc1_jid1)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=sc1_jid2)\n        self.server.manager(MGR_CMD_SET, SCHED, {'scheduling': 'True'},\n                            id='sc1')\n        # deleting the currently running job\n        self.server.delete(sc1_jid4, wait=True)\n        # pbsuser1 job will run after pbsuser4\n        self.server.expect(JOB, {'job_state': 'R'}, id=sc1_jid1)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=sc1_jid2)\n        self.server.manager(MGR_CMD_SET, SCHED, {'scheduling': 'True'},\n                            id='sc1')\n        # deleting the currently running job\n        self.server.delete(sc1_jid1, wait=True)\n        # pbsuser2 job will run in the end\n        self.server.expect(JOB, {'job_state': 'R'}, id=sc1_jid2)\n        self.server.manager(MGR_CMD_SET, SCHED, {'scheduling': 'True'},\n                            id='sc1')\n        # deleting the currently running job\n        self.server.delete(sc1_jid2, wait=True)\n        # query fairshare and check usage\n        sc1_fs_user1 = self.scheds['sc1'].fairshare.query_fairshare(\n            
name=str(TEST_USER1))\n        self.assertEqual(sc1_fs_user1.usage, 201)\n        sc1_fs_user2 = self.scheds['sc1'].fairshare.query_fairshare(\n            name=str(TEST_USER2))\n        self.assertEqual(sc1_fs_user2.usage, 201)\n        sc1_fs_user3 = self.scheds['sc1'].fairshare.query_fairshare(\n            name=str(TEST_USER3))\n        self.assertEqual(sc1_fs_user3.usage, 101)\n        sc1_fs_user4 = self.scheds['sc1'].fairshare.query_fairshare(\n            name=str(TEST_USER4))\n        self.assertEqual(sc1_fs_user4.usage, 101)\n\n    def test_sched_priv_change(self):\n        \"\"\"\n        Test that when the sched_priv directory changes, all of the\n        PTL internal scheduler objects (e.g. fairshare tree) are reread\n        \"\"\"\n\n        new_sched_priv = os.path.join(self.server.pbs_conf['PBS_HOME'],\n                                      'sched_priv2')\n        if os.path.exists(new_sched_priv):\n            self.du.rm(path=new_sched_priv, recursive=True,\n                       sudo=True, force=True)\n\n        dflt_sched_priv = os.path.join(self.server.pbs_conf['PBS_HOME'],\n                                       'sched_priv')\n\n        self.du.run_copy(src=dflt_sched_priv, dest=new_sched_priv,\n                         recursive=True, sudo=True)\n        self.setup_sc3()\n        s = self.server.status(SCHED, id='sc3')\n        old_sched_priv = s[0]['sched_priv']\n\n        self.scheds['sc3'].add_to_resource_group(TEST_USER, 10, 'root', 20)\n        self.scheds['sc3'].holidays_set_year(new_year=\"3000\")\n        self.scheds['sc3'].set_sched_config({'fair_share': 'True ALL'})\n\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'sched_priv': new_sched_priv}, id='sc3')\n\n        n = self.scheds['sc3'].fairshare_tree.get_node(id=10)\n        self.assertFalse(n)\n\n        y = self.scheds['sc3'].holidays_get_year()\n        self.assertNotEqual(y, \"3000\")\n        self.assertTrue(self.scheds['sc3'].\n              
          sched_config['fair_share'].startswith('false'))\n\n        # clean up: revert_to_defaults() will remove the new sched_priv.  We\n        # need to remove the old one\n        self.du.rm(path=old_sched_priv, sudo=True, recursive=True, force=True)\n\n    def test_fairshare_decay(self):\n        \"\"\"\n        Test pbsfs's fairshare decay for multisched\n        \"\"\"\n        self.setup_sc3()\n        self.scheds['sc3'].add_to_resource_group(TEST_USER, 10, 'root', 20)\n        self.scheds['sc3'].fairshare.set_fairshare_usage(\n            name=TEST_USER, usage=10)\n        self.scheds['sc3'].decay_fairshare_tree()\n        n = self.scheds['sc3'].fairshare_tree.get_node(id=10)\n        self.assertEqual(n.usage, 5)\n\n    def test_cmp_fairshare(self):\n        \"\"\"\n        Test pbsfs's compare fairshare functionality for multisched\n        \"\"\"\n        self.setup_sc3()\n        self.scheds['sc3'].add_to_resource_group(TEST_USER, 10, 'root', 20)\n        self.scheds['sc3'].fairshare.set_fairshare_usage(\n            name=TEST_USER, usage=10)\n        self.scheds['sc3'].add_to_resource_group(TEST_USER2, 20, 'root', 20)\n        self.scheds['sc3'].fairshare.set_fairshare_usage(\n            name=TEST_USER2, usage=100)\n\n        user = self.scheds['sc3'].fairshare.cmp_fairshare_entities(\n            TEST_USER, TEST_USER2)\n        self.assertEqual(user, str(TEST_USER))\n\n    def test_pbsfs_invalid_sched(self):\n        \"\"\"\n        Test pbsfs -I <sched_name> where sched_name does not exist\n        \"\"\"\n        sched_name = 'foo'\n        pbsfs_cmd = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                 'sbin', 'pbsfs') + ' -I ' + sched_name\n        ret = self.du.run_cmd(cmd=pbsfs_cmd, runas=self.scheduler.user)\n        err_msg = 'Scheduler %s does not exist' % sched_name\n        self.assertEqual(err_msg, ret['err'][0])\n\n    def test_pbsfs_no_fairshare_data(self):\n        \"\"\"\n        Test pbsfs -I <sched_name> 
where sched_priv_<sched_name> dir\n        does not exist\n        \"\"\"\n        a = {'partition': 'P5',\n             'sched_host': self.server.hostname}\n        self.server.manager(MGR_CMD_CREATE, SCHED, a, id=\"sc5\")\n        err_msg = 'Unable to access fairshare data: No such file or directory'\n        try:\n            # Only a scheduler object is created. Corresponding sched_priv\n            # dir not created yet. Try to query fairshare data.\n            self.scheds['sc5'].fairshare.query_fairshare()\n        except PbsFairshareError as e:\n            self.assertTrue(err_msg in e.msg)\n        else:\n            self.fail(\"query_fairshare() did not raise PbsFairshareError\")\n\n    def test_pbsfs_server_restart(self):\n        \"\"\"\n        Verify that server restart has no impact on fairshare data\n        \"\"\"\n        self.setup_sc1()\n        self.scheds['sc1'].add_to_resource_group(TEST_USER, 20, 'root', 50)\n        self.scheds['sc1'].fairshare.set_fairshare_usage(\n            name=TEST_USER, usage=25)\n        n = self.scheds['sc1'].fairshare.query_fairshare().get_node(\n            name=str(TEST_USER))\n        self.assertEqual(n.usage, 25)\n\n        self.server.restart()\n        n = self.scheds['sc1'].fairshare.query_fairshare().get_node(\n            name=str(TEST_USER))\n        self.assertEqual(n.usage, 25)\n\n    def test_pbsfs_revert_to_defaults(self):\n        \"\"\"\n        Test if revert_to_defaults() works properly with multi scheds.\n        revert_to_defaults() removes entities from the resource_group file and\n        removes their usage (with pbsfs -e)\n        \"\"\"\n        self.setup_sc1()\n        a = {'queue_type': 'execution',\n             'started': 'True',\n             'enabled': 'True',\n             'partition': 'P1'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='wq1')\n        a = {'partition': 'P1', 'resources_available.ncpus': 2}\n        self.server.manager(MGR_CMD_SET, NODE, a,\n                            id=self.mom.shortname)\n\n        
self.scheds['sc1'].add_to_resource_group(TEST_USER,\n                                                 11, 'root', 10)\n        self.scheds['sc1'].add_to_resource_group(TEST_USER1,\n                                                 12, 'root', 10)\n        self.scheds['sc1'].set_sched_config({'fair_share': 'True'})\n        self.scheds['sc1'].fairshare.set_fairshare_usage(TEST_USER, 100)\n\n        self.server.manager(MGR_CMD_SET, SCHED, {'scheduling': 'False'},\n                            id='sc1')\n        j1 = Job(TEST_USER, attrs={ATTR_queue: 'wq1'})\n        jid1 = self.server.submit(j1)\n        j2 = Job(TEST_USER1, attrs={ATTR_queue: 'wq1'})\n        jid2 = self.server.submit(j2)\n\n        t_start = time.time()\n        self.server.manager(MGR_CMD_SET, SCHED, {'scheduling': 'True'},\n                            id='sc1')\n        self.scheds['sc1'].log_match(\n            'Leaving Scheduling Cycle', starttime=t_start)\n        t_end = time.time()\n        job_list = self.scheds['sc1'].log_match(\n            'Considering job to run', starttime=t_start,\n            allmatch=True, endtime=t_end)\n\n        # job 1 runs second as it's run by an entity with usage = 100\n        self.assertTrue(jid1 in job_list[-1][1])\n\n        self.server.deljob(id=jid1, wait=True)\n        self.server.deljob(id=jid2, wait=True)\n\n        # revert_to_defaults() does a pbsfs -I <sched_name> -e and cleans up\n        # the resource_group file\n        self.scheds['sc1'].revert_to_defaults()\n\n        # Fairshare tree is trimmed now.  TEST_USER1 is the only entity with\n        # usage set.  So its job, job2 will run second. 
If trimming was not\n        # successful TEST_USER would still have usage=100 and job1 would run\n        # second\n\n        self.scheds['sc1'].add_to_resource_group(TEST_USER,\n                                                 15, 'root', 10)\n        self.scheds['sc1'].add_to_resource_group(TEST_USER1,\n                                                 16, 'root', 10)\n        self.scheds['sc1'].set_sched_config({'fair_share': 'True'})\n        self.scheds['sc1'].fairshare.set_fairshare_usage(TEST_USER1, 50)\n\n        self.server.manager(MGR_CMD_SET, SCHED, {'scheduling': 'False'},\n                            id='sc1')\n        j1 = Job(TEST_USER, attrs={ATTR_queue: 'wq1'})\n        jid1 = self.server.submit(j1)\n        j2 = Job(TEST_USER1, attrs={ATTR_queue: 'wq1'})\n        jid2 = self.server.submit(j2)\n\n        t_start = time.time()\n        self.server.manager(MGR_CMD_SET, SCHED, {'scheduling': 'True'},\n                            id='sc1')\n        self.scheds['sc1'].log_match(\n            'Leaving Scheduling Cycle', starttime=t_start)\n        t_end = time.time()\n        job_list = self.scheds['sc1'].log_match(\n            'Considering job to run', starttime=t_start,\n            allmatch=True, endtime=t_end)\n\n        self.assertTrue(jid2 in job_list[-1][1])\n\n    def submit_jobs(self, num_jobs=1, attrs=None, user=TEST_USER):\n        \"\"\"\n        Submit num_jobs number of jobs with attrs attributes for user.\n        Return a list of job ids\n        \"\"\"\n        if attrs is None:\n            attrs = {'Resource_List.select': '1:ncpus=2'}\n        ret_jids = []\n        for _ in range(num_jobs):\n            J = Job(user, attrs)\n            jid = self.server.submit(J)\n            ret_jids += [jid]\n\n        return ret_jids\n\n    def test_equiv_multisched(self):\n        \"\"\"\n        Test the basic behavior of job equivalence classes: submit two\n        different types of jobs into 2 different schedulers and see they\n        are 
in two different classes in each scheduler\n        \"\"\"\n        self.setup_sc1()\n        self.setup_sc2()\n        self.setup_queues_nodes()\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events': 2047}, id='sc2')\n        t = time.time()\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'False'}, id=\"sc1\")\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'False'}, id=\"sc2\")\n\n        # Eat up all the resources with the first job to each queue\n        a = {'Resource_List.select': '1:ncpus=2', ATTR_queue: 'wq1'}\n        self.submit_jobs(4, a)\n        a = {'Resource_List.select': '1:ncpus=2', ATTR_queue: 'wq2'}\n        self.submit_jobs(4, a)\n\n        a = {'Resource_List.select': '1:ncpus=1', ATTR_queue: 'wq1'}\n        self.submit_jobs(3, a)\n        a = {'Resource_List.select': '1:ncpus=1', ATTR_queue: 'wq2'}\n        self.submit_jobs(3, a)\n\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'True'}, id=\"sc1\")\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'True'}, id=\"sc2\")\n        self.scheds['sc1'].log_match(\"Number of job equivalence classes: 2\",\n                                     max_attempts=10, starttime=t)\n        self.scheds['sc2'].log_match(\"Number of job equivalence classes: 2\",\n                                     max_attempts=10, starttime=t)\n\n    def test_limits_queues(self):\n        \"\"\"\n        Test to see that jobs from different users fall into different\n        equivalence classes with queue hard limits.\n        \"\"\"\n        self.setup_sc1()\n        self.setup_queues_nodes()\n        p1 = {'partition': 'P1'}\n        self.server.manager(MGR_CMD_SET, QUEUE, p1, id='wq3')\n        t = time.time()\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'False'}, id=\"sc1\")\n        
self.server.manager(MGR_CMD_SET, QUEUE,\n                            {'max_run': '[u:PBS_GENERIC=1]'}, id='wq1')\n        self.server.manager(MGR_CMD_SET, QUEUE,\n                            {'max_run': '[u:PBS_GENERIC=1]'}, id='wq3')\n\n        # Eat up all the resources\n        a = {'Resource_List.select': '1:ncpus=2', ATTR_queue: 'wq1'}\n        J = Job(TEST_USER, attrs=a)\n        self.server.submit(J)\n        a = {'Resource_List.select': '1:ncpus=2', ATTR_queue: 'wq3'}\n        J = Job(TEST_USER, attrs=a)\n        self.server.submit(J)\n        a = {ATTR_queue: 'wq1'}\n        self.submit_jobs(3, a, user=TEST_USER1)\n        self.submit_jobs(3, a, user=TEST_USER2)\n        a = {ATTR_queue: 'wq3'}\n        self.submit_jobs(3, a, user=TEST_USER1)\n        self.submit_jobs(3, a, user=TEST_USER2)\n\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'True'}, id=\"sc1\")\n\n        # Six equivalence classes. Two for the resource eating job in\n        # different partitions and one for each user per partition.\n        self.scheds['sc1'].log_match(\"Number of job equivalence classes: 6\",\n                                     starttime=t)\n\n    def test_list_multi_sched(self):\n        \"\"\"\n        Test to verify that qmgr list sched works when multiple\n        schedulers are present\n        \"\"\"\n\n        self.setup_sc1()\n        self.setup_sc2()\n        self.setup_sc3()\n\n        self.server.manager(MGR_CMD_LIST, SCHED)\n\n        self.server.manager(MGR_CMD_LIST, SCHED, id=\"default\")\n\n        self.server.manager(MGR_CMD_LIST, SCHED, id=\"sc1\")\n\n        dir_path = os.path.join(os.sep, 'var', 'spool', 'pbs', 'sched_dir')\n        a = {'partition': 'P2',\n             'sched_priv': os.path.join(dir_path, 'sched_priv_sc2'),\n             'sched_log': os.path.join(dir_path, 'sched_logs_sc2'),\n             'sched_host': self.server.hostname}\n        self.server.manager(MGR_CMD_LIST, SCHED, a, 
id=\"sc2\")\n\n        self.server.manager(MGR_CMD_LIST, SCHED, id=\"sc3\")\n\n        try:\n            self.server.manager(MGR_CMD_LIST, SCHED, id=\"invalid_scname\")\n        except PbsManagerError as e:\n            err_msg = \"Unknown Scheduler\"\n            self.assertTrue(err_msg in e.msg[0],\n                            \"Error message is not as expected\")\n\n        # delete sc3 sched\n        self.delete_sched(\"sc3\")\n\n        try:\n            self.server.manager(MGR_CMD_LIST, SCHED, id=\"sc3\")\n        except PbsManagerError as e:\n            err_msg = \"Unknown Scheduler\"\n            self.assertTrue(err_msg in e.msg[0],\n                            \"Error message is not as expected\")\n\n        self.server.manager(MGR_CMD_LIST, SCHED)\n\n        self.server.manager(MGR_CMD_LIST, SCHED, id=\"default\")\n\n        self.server.manager(MGR_CMD_LIST, SCHED, id=\"sc1\")\n\n        self.server.manager(MGR_CMD_LIST, SCHED, id=\"sc2\")\n\n        # delete sc1 sched\n        self.delete_sched(\"sc1\")\n\n        try:\n            self.server.manager(MGR_CMD_LIST, SCHED, id=\"sc1\")\n        except PbsManagerError as e:\n            err_msg = \"Unknown Scheduler\"\n            self.assertTrue(err_msg in e.msg[0],\n                            \"Error message is not as expected\")\n\n    def test_job_sort_formula_threshold(self):\n        \"\"\"\n        Test the scheduler attribute job_sort_formula_threshold for multisched\n        \"\"\"\n        # Multisched setup\n        self.setup_sc3()\n        p3 = {'partition': 'P3'}\n        a = {'queue_type': 'execution',\n             'started': 'True',\n             'enabled': 'True'}\n        a.update(p3)\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='wq1')\n        a = {'resources_available.ncpus': 2}\n        self.mom.create_vnodes(a, 2)\n        vn0 = self.mom.shortname + '[0]'\n        self.server.manager(MGR_CMD_SET, NODE, p3, id=vn0)\n        # Set job_sort_formula on the server\n        
self.server.manager(MGR_CMD_SET, SERVER, {'job_sort_formula': 'ncpus'})\n        # Set job_sort_formula_threshold on the multisched\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'job_sort_formula_threshold': '2'}, id=\"sc3\")\n        # Submit job to multisched\n        j1_attrs = {ATTR_queue: 'wq1', 'Resource_List.ncpus': '1'}\n        J1 = Job(TEST_USER, j1_attrs)\n        jid_1 = self.server.submit(J1)\n        # Submit job to default scheduler\n        J2 = Job(TEST_USER, attrs={'Resource_List.ncpus': '1'})\n        jid_2 = self.server.submit(J2)\n        msg = {'job_state': 'Q',\n               'comment': ('Not Running: Job is ' +\n                           'under job_sort_formula threshold value')}\n        self.server.expect(JOB, msg, id=jid_1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid_2)\n\n        # test to make sure server can still start with job_sort_formula set\n        self.server.restart()\n        restart_msg = 'Failed to restart PBS'\n        self.assertTrue(self.server.isUp(), restart_msg)\n\n    @staticmethod\n    def cust_attr(name, totnodes, numnode, attrib):\n        a = {}\n        if numnode in range(0, 3):\n            a['resources_available.switch'] = 'A'\n        if numnode in range(3, 5):\n            a['resources_available.switch'] = 'B'\n        if numnode in range(6, 9):\n            a['resources_available.switch'] = 'A'\n            a['partition'] = 'P2'\n        if numnode in range(9, 11):\n            a['resources_available.switch'] = 'B'\n            a['partition'] = 'P2'\n        if numnode == 11:\n            a['partition'] = 'P2'\n        return {**attrib, **a}\n\n    def setup_placement_set(self):\n        self.server.add_resource('switch', 'string_array', 'h')\n        a = {'resources_available.ncpus': 2}\n        self.mom.create_vnodes(\n            a, 12, attrfunc=self.cust_attr)\n        self.server.manager(MGR_CMD_SET, SERVER, {'node_group_key': 'switch'})\n        
self.server.manager(MGR_CMD_SET, SERVER, {'node_group_enable': 't'})\n\n    def test_multi_sched_explicit_ps(self):\n        \"\"\"\n        Test that only_explicit_psets set as a sched attr takes effect\n        and is not read from the default scheduler\n        \"\"\"\n        self.setup_placement_set()\n        self.setup_sc2()\n        a = {'queue_type': 'execution',\n             'started': 'True',\n             'enabled': 'True',\n             'partition': 'P2'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='wq2')\n        a = {'Resource_List.select': '1:ncpus=2'}\n        j = Job(TEST_USER, attrs=a)\n        j1id = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=j1id)\n        vn = ['%s[%d]' % (self.mom.shortname, i) for i in range(10)]\n        nodes = [vn[5]]\n        self.check_vnodes(j, nodes, j1id)\n        a = {'Resource_List.select': '2:ncpus=2'}\n        j = Job(TEST_USER, attrs=a)\n        j2id = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=j2id)\n        nodes = vn[3:5]\n        self.check_vnodes(j, nodes, j2id)\n        a = {'Resource_List.select': '3:ncpus=2'}\n        j = Job(TEST_USER, attrs=a)\n        j3id = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=j3id)\n        self.check_vnodes(j, vn[0:3], j3id)\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'only_explicit_psets': 't'}, id='sc2')\n        a = {'Resource_List.select': '1:ncpus=2', ATTR_queue: 'wq2'}\n        j = Job(TEST_USER, attrs=a)\n        j4id = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=j4id)\n        nodes = [vn[9]]\n        self.check_vnodes(j, nodes, j4id)\n        a = {'Resource_List.select': '2:ncpus=2', ATTR_queue: 'wq2'}\n        j = Job(TEST_USER, attrs=a)\n        j5id = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=j5id)\n        self.check_vnodes(j, vn[6:8], j5id)\n       
 a = {'Resource_List.select': '3:ncpus=2', ATTR_queue: 'wq2'}\n        j = Job(TEST_USER, attrs=a)\n        j6id = self.server.submit(j)\n        self.server.expect(JOB, {\n                           'job_state': 'Q',\n                           'comment': 'Not Running: Placement set switch=A'\n                           ' has too few free resources'}, id=j6id)\n\n    def test_jobs_do_not_span_ps(self):\n        \"\"\"\n        Test that do_not_span_psets set as a sched attr takes effect\n        and is not read from the default scheduler\n        \"\"\"\n        self.setup_placement_set()\n        self.setup_sc2()\n        a = {'queue_type': 'execution',\n             'started': 'True',\n             'enabled': 'True',\n             'partition': 'P2'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='wq2')\n        # Scheduler sc2 cannot span across placement sets\n        self.server.manager(MGR_CMD_SET, SCHED, {\n                            'do_not_span_psets': 't'}, id='sc2')\n        self.server.manager(MGR_CMD_SET, SCHED, {\n                            'scheduling': 't'}, id='sc2')\n        a = {'Resource_List.select': '4:ncpus=2', ATTR_queue: 'wq2'}\n        j = Job(TEST_USER, attrs=a)\n        j1id = self.server.submit(j)\n        self.server.expect(\n            JOB, {'job_state': 'Q', 'comment': 'Can Never Run: can\'t fit in '\n                  'the largest placement set, and can\'t span psets'}, id=j1id)\n        # Default scheduler can span as do_not_span_psets is not set\n        a = {'Resource_List.select': '4:ncpus=2'}\n        j = Job(TEST_USER, attrs=a)\n        j2id = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=j2id)\n\n    def test_sched_preempt_enforce_resumption(self):\n        \"\"\"\n        Test that sched_preempt_enforce_resumption can be set on a multi sched\n        and that even if topjob_ineligible is set for a preempted job\n        and sched_preempt_enforce_resumption is set to true, the\n        
preempted job will be calendared\n        \"\"\"\n        self.setup_sc1()\n        self.setup_queues_nodes()\n        p1 = {'partition': 'P1'}\n        self.server.manager(MGR_CMD_SET, QUEUE, p1, id='wq3')\n        prio = {'Priority': 150, 'partition': 'P1'}\n        self.server.manager(MGR_CMD_SET, QUEUE, prio, id='wq3')\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'sched_preempt_enforce_resumption': 'true'},\n                            id='sc1')\n        self.server.manager(MGR_CMD_SET, SERVER, {'backfill_depth': '2'})\n\n        # Submit a job\n        j = Job(TEST_USER, {'Resource_List.walltime': '120',\n                            'Resource_List.ncpus': '2'})\n        jid1 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        j = Job(TEST_USER, {'Resource_List.walltime': '120',\n                            'Resource_List.ncpus': '2',\n                            ATTR_queue: 'wq1'})\n        jid2 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n\n        # Alter topjob_ineligible for both running jobs\n        self.server.alterjob(jid1, {ATTR_W: \"topjob_ineligible = true\"},\n                             runas=ROOT_USER, logerr=True)\n        self.server.alterjob(jid2, {ATTR_W: \"topjob_ineligible = true\"},\n                             runas=ROOT_USER, logerr=True)\n\n        # Create a high priority queue\n        a = {'queue_type': 'e', 'started': 't',\n             'enabled': 't', 'Priority': '150'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id=\"highp\")\n\n        # Submit 2 jobs to high priority queues\n        j = Job(TEST_USER, {'queue': 'highp', 'Resource_List.walltime': '60',\n                            'Resource_List.ncpus': '2'})\n        jid3 = self.server.submit(j)\n        j = Job(TEST_USER, {'queue': 'wq3', 'Resource_List.walltime': '60',\n                        
    'Resource_List.ncpus': '2'})\n        jid4 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid4)\n        # Verify that job1 is not calendared\n        self.server.expect(JOB, 'estimated.start_time',\n                           op=UNSET, id=jid1)\n        # Verify that job2 is calendared\n        self.server.expect(JOB, 'estimated.start_time',\n                           op=SET, id=jid2)\n        qstat = self.server.status(JOB, 'estimated.start_time',\n                                   id=jid2)\n        est_time = qstat[0]['estimated.start_time']\n        self.assertNotEqual(est_time, None)\n        self.scheds['sc1'].log_match(jid2 + \";Job is a top job\",\n                                     starttime=self.server.ctime)\n\n    def set_primetime(self, ptime_start, ptime_end, scid='default'):\n        \"\"\"\n        This function will set the prime time\n        in the holidays file\n        \"\"\"\n        self.scheds[scid].holidays_delete_entry('a')\n\n        cur_year = datetime.datetime.today().year\n        self.scheds[scid].holidays_set_year(cur_year)\n\n        p_day = 'weekday'\n        p_hhmm = time.strftime('%H%M', time.localtime(ptime_start))\n        np_hhmm = time.strftime('%H%M', time.localtime(ptime_end))\n        self.scheds[scid].holidays_set_day(p_day, p_hhmm, np_hhmm)\n\n        p_day = 'saturday'\n        self.scheds[scid].holidays_set_day(p_day, p_hhmm, np_hhmm)\n\n        p_day = 'sunday'\n        self.scheds[scid].holidays_set_day(p_day, p_hhmm, np_hhmm)\n\n    def test_prime_time_backfill(self):\n        \"\"\"\n        Test that opt_backfill_fuzzy can be set on a multi sched and\n        that primetime/nonprimetime is considered while calendaring\n        \"\"\"\n        self.setup_sc2()\n        self.setup_queues_nodes()\n        a = {'strict_ordering': \"True   ALL\"}\n        self.scheds['sc2'].set_sched_config(a)\n        # set primetime which 
will start after 30min\n        prime_start = int(time.time()) + 1800\n        prime_end = int(time.time()) + 3600\n        self.set_primetime(prime_start, prime_end, scid='sc2')\n\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'opt_backfill_fuzzy': 'high'}, id='sc2')\n        self.server.manager(MGR_CMD_SET, SERVER, {'backfill_depth': '2'})\n\n        # Submit a job\n        j = Job(TEST_USER, {'Resource_List.walltime': '60',\n                            'Resource_List.ncpus': '2',\n                            ATTR_queue: 'wq2'})\n        jid1 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n\n        j = Job(TEST_USER1, {'Resource_List.ncpus': '2',\n                             ATTR_queue: 'wq2'})\n        jid2 = self.server.submit(j)\n\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n\n        # Verify that job2 is calendared to start at primetime start\n        self.server.expect(JOB, 'estimated.start_time',\n                           op=SET, id=jid2)\n        qstat = self.server.status(JOB, 'estimated.start_time',\n                                   id=jid2)\n        est_time = qstat[0]['estimated.start_time']\n        est_epoch = est_time\n        if self.server.get_op_mode() == PTL_CLI:\n            est_epoch = int(time.mktime(time.strptime(est_time, '%c')))\n        prime_mod = prime_start % 60  # ignoring the seconds\n        self.assertEqual((prime_start - prime_mod), est_epoch)\n\n    def test_prime_time_multisched(self):\n        \"\"\"\n        Test that a prime time queue can be assigned a partition and that\n        the multi sched considers prime time for jobs submitted to p_queue\n        \"\"\"\n        self.setup_sc2()\n        self.setup_queues_nodes()\n        # set primetime which will start after 30min\n        prime_start = int(time.time()) + 1800\n        prime_end = int(time.time()) + 3600\n        self.set_primetime(prime_start, prime_end, scid='sc2')\n        a = 
{'queue_type': 'e', 'started': 't',\n             'enabled': 't', 'partition': 'P2'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id=\"p_queue\")\n\n        j = Job(TEST_USER1, {'Resource_List.ncpus': '1',\n                             ATTR_queue: 'wq2'})\n        jid1 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        j = Job(TEST_USER1, {'Resource_List.ncpus': '1',\n                             ATTR_queue: 'p_queue'})\n        jid2 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n        msg = 'Job will run in primetime only'\n        self.server.expect(JOB, {ATTR_comment: \"Not Running: \" + msg}, id=jid2)\n        self.scheds['sc2'].log_match(jid2 + \";Job only runs in primetime\",\n                                     starttime=self.server.ctime)\n\n    def test_dedicated_time_multisched(self):\n        \"\"\"\n        Test that a dedicated time queue can be assigned a partition and\n        that the multi sched honors dedicated time for jobs submitted to\n        ded_queue\n        \"\"\"\n        self.setup_sc2()\n        self.setup_queues_nodes()\n        # Create a dedicated time queue\n        ded_start = int(time.time()) + 1800\n        ded_end = int(time.time()) + 3600\n        self.scheds['sc2'].add_dedicated_time(start=ded_start, end=ded_end)\n        a = {'queue_type': 'e', 'started': 't',\n             'enabled': 't', 'partition': 'P2'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id=\"ded_queue\")\n        j = Job(TEST_USER1, {'Resource_List.ncpus': '1',\n                             ATTR_queue: 'wq2'})\n        jid1 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        j = Job(TEST_USER1, {'Resource_List.ncpus': '1',\n                             ATTR_queue: 'ded_queue'})\n        jid2 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n        msg = 'Dedicated time conflict'\n        
self.server.expect(JOB, {ATTR_comment: \"Not Running: \" + msg}, id=jid2)\n        self.scheds['sc2'].log_match(jid2 + \";Dedicated Time\",\n                                     starttime=self.server.ctime)\n\n    def test_auto_sched_off_due_to_fds_limit(self):\n        \"\"\"\n        Test to make sure scheduling is not turned off automatically\n        when the number of open files per process is exhausted\n        \"\"\"\n\n        if os.getuid() != 0 or sys.platform in ('cygwin', 'win32'):\n            self.skipTest(\"Test needs to run as root\")\n\n        self.setup_sc3()\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduler_iteration': 1}, id=\"sc3\")\n        self.server.manager(MGR_CMD_SET, SCHED, {'scheduling': 'True'},\n                            id=\"sc3\")\n        try:\n            # get the number of open files per process\n            (open_files_soft_limit, open_files_hard_limit) = \\\n                resource.getrlimit(resource.RLIMIT_NOFILE)\n\n            # set the soft limit of number of open files per process to 10\n            resource.setrlimit(resource.RLIMIT_NOFILE,\n                               (10, open_files_hard_limit))\n\n        except (ValueError, resource.error):\n            self.fail(\"Error in accessing system RLIMIT_ \"\n                      \"variables, test fails.\")\n        try:\n            self.logger.info('The sleep is 15 seconds which will '\n                             'trigger required number of scheduling '\n                             'cycles that are needed to exhaust open '\n                             'files per process which is 10 in our case')\n            time.sleep(15)\n        finally:\n            try:\n                resource.setrlimit(resource.RLIMIT_NOFILE,\n                                   (open_files_soft_limit,\n                                    
open_files_hard_limit))\n                # scheduling should not go to false once all fds per process\n                # are exhausted.\n                self.server.expect(SCHED, {'scheduling': 'True'},\n                                   id='sc3', max_attempts=10)\n            except (ValueError, resource.error):\n                self.fail(\"Error in accessing system RLIMIT_ \"\n                          \"variables, test fails.\")\n\n    def test_set_msched_attr_sched_log_with_sched_off(self):\n        \"\"\"\n        Test that Multisched attributes can be set even when its scheduling\n        is off, and check whether they actually take effect\n        \"\"\"\n        self.setup_sc3()\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events': 2047}, id='sc3')\n\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'False'}, id=\"sc3\")\n\n        new_sched_log = os.path.join(self.server.pbs_conf['PBS_HOME'],\n                                     'sc3_new_logs')\n        if os.path.exists(new_sched_log):\n            self.du.rm(path=new_sched_log, recursive=True,\n                       sudo=True, force=True)\n\n        self.du.mkdir(path=new_sched_log, sudo=True)\n        self.du.chown(path=new_sched_log, recursive=True,\n                      uid=self.scheds['sc3'].user, sudo=True)\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'sched_log': new_sched_log}, id=\"sc3\")\n\n        a = {'sched_log': new_sched_log}\n        self.server.expect(SCHED, a, id='sc3', max_attempts=10)\n\n        # This is required since we need to call log_match only after\n        # the new log file is created.\n        time.sleep(1)\n        self.scheds['sc3'].log_match(\n            \"scheduler log directory is changed to \" + new_sched_log,\n            max_attempts=10, starttime=self.server.ctime)\n\n    def test_set_msched_attr_sched_priv_with_sched_off(self):\n        \"\"\"\n        
Test that Multisched attributes can be set even when its scheduling\n        is off, and check whether they actually take effect\n        \"\"\"\n        self.setup_sc3()\n        self.server.manager(MGR_CMD_SET, SCHED, {'scheduling': 'False',\n                                                 'log_events': 2047}, id=\"sc3\")\n\n        # create and set-up a new priv directory for sc3\n        new_sched_priv = os.path.join(self.server.pbs_conf['PBS_HOME'],\n                                      'sched_priv_new')\n        if os.path.exists(new_sched_priv):\n            self.du.rm(path=new_sched_priv, recursive=True,\n                       sudo=True, force=True)\n        dflt_sched_priv = os.path.join(self.server.pbs_conf['PBS_HOME'],\n                                       'sched_priv')\n\n        self.du.run_copy(src=dflt_sched_priv, dest=new_sched_priv,\n                         recursive=True, sudo=True)\n        self.server.manager(MGR_CMD_SET, SCHED, {'sched_priv': new_sched_priv},\n                            id=\"sc3\")\n\n        a = {'sched_priv': new_sched_priv}\n        self.server.expect(SCHED, a, id='sc3', max_attempts=10)\n\n        # This is required since we need to call log_match only after\n        # the new log file is created.\n        time.sleep(1)\n        self.scheds['sc3'].log_match(\n            \"scheduler priv directory has changed to \" + new_sched_priv,\n            max_attempts=10, starttime=self.server.ctime)\n\n    def test_set_msched_update_inbuilt_attrs_accrue_type(self):\n        \"\"\"\n        Test to make sure Multisched is able to update any one of the builtin\n        attributes like accrue_type\n        \"\"\"\n        a = {'eligible_time_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        self.setup_sc3()\n        self.setup_queues_nodes()\n\n        a = {'Resource_List.select': '1:ncpus=2', ATTR_queue: 'wq3'}\n\n        J1 = Job(TEST_USER1, attrs=a)\n\n        J2 = Job(TEST_USER1, attrs=a)\n\n        
jid1 = self.server.submit(J1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        jid2 = self.server.submit(J2)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid2)\n\n        # accrue_type = 2 is eligible_time\n        self.server.expect(JOB, {ATTR_accrue_type: 2}, id=jid2)\n\n        self.server.delete(jid1, wait=True)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n        # This makes sure that accrue_type is indeed getting changed\n        self.server.expect(JOB, {ATTR_accrue_type: 3}, id=jid2)\n\n    def test_multisched_not_crash(self):\n        \"\"\"\n        Test to make sure Multisched does not crash when not all nodes in a\n        partition are associated with the corresponding queue\n        \"\"\"\n        self.setup_sc1()\n        self.setup_queues_nodes()\n\n        # Associate a queue with partition P1. This queue association is not\n        # required by the current Multisched feature; it only verifies that\n        # the scheduler won't crash when a queue is associated with just one\n        # of the nodes in the partition.\n        # Ex: Here we are associating wq1 to vnode[0] but vnode[4] has no\n        # queue associated to it. 
The expectation is that the scheduler won't\n        # crash in this case\n        a = {ATTR_queue: 'wq1'}\n        vn = self.mom.shortname\n        self.server.manager(MGR_CMD_SET, NODE, a, id=vn + '[0]')\n\n        self.scheds['sc1'].terminate()\n\n        self.scheds['sc1'].start()\n\n        j = Job(TEST_USER1, attrs={ATTR_queue: 'wq1',\n                                   'Resource_List.select': '1:ncpus=1'})\n        jid1 = self.server.submit(j)\n        # If the job goes to R state, the scheduler is still alive.\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n\n    def test_multi_sched_job_sort_key(self):\n        \"\"\"\n        Test to make sure that jobs are sorted as per\n        job_sort_key in a multi sched\n        \"\"\"\n        self.setup_sc1()\n        self.setup_queues_nodes()\n        a = {'job_sort_key': '\"ncpus LOW\"'}\n        self.scheds['sc1'].set_sched_config(a)\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'False'}, id=\"sc1\")\n        j = Job(TEST_USER, {'Resource_List.ncpus': '2',\n                            ATTR_queue: 'wq1'})\n        jid1 = self.server.submit(j)\n        j = Job(TEST_USER, {'Resource_List.ncpus': '1',\n                            ATTR_queue: 'wq1'})\n        jid2 = self.server.submit(j)\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'True'}, id=\"sc1\")\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid1)\n\n    def test_multi_sched_node_sort_key(self):\n        \"\"\"\n        Test to make sure nodes are sorted in the order\n        as per node_sort_key in a multi sched\n        \"\"\"\n        self.setup_sc1()\n        self.setup_queues_nodes()\n        a = {'partition': 'P1'}\n        vn = ['%s[%d]' % (self.mom.shortname, i) for i in range(4)]\n        self.server.manager(MGR_CMD_SET, NODE, a, id='@default')\n        a = {'node_sort_key': '\"ncpus HIGH 
\" ALL'}\n        self.scheds['sc1'].set_sched_config(a)\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=vn[0])\n        a = {'resources_available.ncpus': 2}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=vn[1])\n        a = {'resources_available.ncpus': 3}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=vn[2])\n        a = {'resources_available.ncpus': 4}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=vn[3])\n\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.place': 'excl',\n             ATTR_queue: 'wq1'}\n        j = Job(TEST_USER1, a)\n        jid1 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.check_vnodes(j, [vn[3]], jid1)\n        j = Job(TEST_USER1, a)\n        jid2 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.check_vnodes(j, [vn[2]], jid2)\n        j = Job(TEST_USER1, a)\n        jid3 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n        self.check_vnodes(j, [vn[1]], jid3)\n        j = Job(TEST_USER1, a)\n        jid4 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid4)\n        self.check_vnodes(j, [vn[0]], jid4)\n\n    def test_multi_sched_priority_sockets(self):\n        \"\"\"\n        Test scheduler socket connections from all the schedulers\n        are processed on priority\n        \"\"\"\n        self.common_setup()\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 2047})\n        for name in self.scheds:\n            self.server.manager(MGR_CMD_SET, SCHED,\n                                {'scheduling': 'False'}, id=name)\n        a = {ATTR_queue: 'wq1',\n             'Resource_List.select': '1:ncpus=2',\n             'Resource_List.walltime': 60}\n        j = Job(TEST_USER1, attrs=a)\n        self.server.submit(j)\n        t = time.time()\n        
self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'True'}, id='sc1')\n        self.server.log_match(\"processing priority socket\", starttime=t)\n        a = {ATTR_queue: 'wq2',\n             'Resource_List.select': '1:ncpus=2',\n             'Resource_List.walltime': 60}\n        j = Job(TEST_USER1, attrs=a)\n        self.server.submit(j)\n        t = time.time()\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'True'}, id='sc2')\n        self.server.log_match(\"processing priority socket\", starttime=t)\n\n    def test_advance_resv_in_multi_sched(self):\n        \"\"\"\n        Test that advance reservations in a multi-sched environment can be\n        serviced by any scheduler\n        \"\"\"\n        # Create 3 multi-scheds sc1, sc2 and sc3, 3 partitions and 4 vnodes\n        self.common_setup()\n        # Consume all resources in partitions serviced by sc1 and sc3 and\n        # default scheduler\n        a = {ATTR_queue: 'wq1',\n             'Resource_List.select': '1:ncpus=2',\n             'Resource_List.walltime': 60}\n        j = Job(TEST_USER1, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        a = {ATTR_queue: 'wq3',\n             'Resource_List.select': '1:ncpus=2'}\n        j2 = Job(TEST_USER, attrs=a)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n\n        a = {ATTR_queue: 'workq',\n             'Resource_List.select': '1:ncpus=2'}\n        j3 = Job(TEST_USER, attrs=a)\n        jid3 = self.server.submit(j3)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n\n        # Now submit a reservation which only sc2 can confirm because\n        # it has free nodes\n        t = int(time.time())\n        a = {'Resource_List.select': '1:ncpus=2', 'reserve_start': t + 5,\n             'reserve_end': t + 35}\n        r = Reservation(TEST_USER, a)\n       
 rid = self.server.submit(r)\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, a, rid)\n        vn1 = self.mom.shortname + '[1]'\n        rnodes = {'resv_nodes': '(' + vn1 + ':ncpus=2)'}\n        self.server.expect(RESV, rnodes, id=rid)\n\n        # Wait for reservation to run and then submit a job to the\n        # reservation\n        a = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, a, rid)\n        a = {ATTR_q: rid.split('.')[0]}\n        j4 = Job(TEST_USER, attrs=a)\n        jid4 = self.server.submit(j4)\n        result = {'job_state': 'R', 'exec_vnode': '(' + vn1 + ':ncpus=1)'}\n        self.server.expect(JOB, result, id=jid4)\n\n    def test_resv_in_empty_multi_sched_env(self):\n        \"\"\"\n        Test that advance reservations get confirmed by all the schedulers\n        running in the complex\n        \"\"\"\n        # Create 3 multi-scheds sc1, sc2 and sc3, 3 partitions and 4 vnodes\n        self.common_setup()\n        # Submit 4 reservations and check they get confirmed\n        for _ in range(4):\n            t = int(time.time())\n            a = {'Resource_List.select': '1:ncpus=2', 'reserve_start': t + 25,\n                 'reserve_end': t + 55}\n            r = Reservation(TEST_USER, attrs=a)\n            rid = self.server.submit(r)\n            a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n            self.server.expect(RESV, a, rid)\n\n        # Submit a 5th reservation and check that it is denied\n        t = int(time.time())\n        a = {'Resource_List.select': '1:ncpus=2', 'reserve_start': t + 25,\n             'reserve_end': t + 55}\n        r = Reservation(TEST_USER, a)\n        rid = self.server.submit(r)\n        msg = \"Resv;\" + rid + \";Reservation denied\"\n        self.server.log_match(msg)\n\n    def test_asap_resv(self):\n        \"\"\"\n        Test ASAP reservation in a multisched environment. 
It should not\n        matter if a job is part of a partition. An ASAP reservation could\n        confirm on any of the existing partitions and then be moved to the\n        reservation queue.\n        \"\"\"\n        # Create 3 multi-scheds sc1, sc2 and sc3, 3 partitions and 4 vnodes\n        self.common_setup()\n        # Turn off scheduling in all schedulers but one (say sc3)\n        self.set_scheduling(['sc1', 'sc2', 'default'], False)\n\n        # Submit a job in the partition serviced by sc1\n        a = {ATTR_queue: 'wq1',\n             'Resource_List.select': '1:ncpus=2',\n             'Resource_List.walltime': 600}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n\n        # Now turn this job into a reservation and check that it runs inside\n        # a reservation running on vnode[2], which is part of sc3\n        a = {ATTR_convert: jid}\n        r = Reservation(TEST_USER, a)\n        r.unset_attributes(['reserve_start', 'reserve_end'])\n        rid = self.server.submit(r)\n        exp_attrs = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, exp_attrs, id=rid)\n        exec_vn = '(' + self.mom.shortname + '[2]:ncpus=2)'\n        result = {'job_state': 'R', 'exec_vnode': exec_vn}\n        self.server.expect(JOB, result, id=jid)\n\n    def test_standing_resv_reject(self):\n        \"\"\"\n        Test that if a scheduler serving a partition is not able to\n        confirm all the occurrences of a standing reservation on the same\n        partition, then it will reject it.\n        \"\"\"\n\n        self.common_setup()\n        # Turn off scheduling in all schedulers but sc1 because sc1 serves\n        # partition P1\n        self.set_scheduling(['sc2', 'sc3', 'default'], False)\n\n        # Submit an advance reservation which is going to occupy the full\n        # partition in the future\n        t = int(time.time())\n        a = 
{'Resource_List.select': '1:ncpus=2', 'reserve_start': t + 200,\n             'reserve_end': t + 4000}\n        r = Reservation(TEST_USER, a)\n        rid = self.server.submit(r)\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, a, rid)\n\n        # Submit a standing reservation such that it consumes one partition\n        # and an occurrence finishes before the advance reservation starts.\n        # This means the scheduler can place the first occurrence right\n        # before the advance reservation starts because the node is free,\n        # but it will not be able to place the second occurrence\n        # because of the advance reservation\n        start = int(time.time()) + 10\n        end = start + 150\n        tzone = self.get_tzid()\n        a = {ATTR_resv_rrule: 'FREQ=HOURLY;COUNT=2',\n             ATTR_resv_timezone: tzone,\n             'reserve_start': start,\n             'reserve_end': end,\n             'Resource_List.select': '1:ncpus=2'\n             }\n        sr = Reservation(TEST_USER, attrs=a)\n        srid = self.server.submit(sr)\n        msg = \"Resv;\" + srid + \";Reservation denied\"\n        self.server.log_match(msg)\n\n    def test_printing_partition_resv_hook(self):\n        \"\"\"\n        Test that the partition set on a reservation is readable in\n        a reservation hook\n        \"\"\"\n        hook_body = \"\"\"\nimport pbs\ne = pbs.event()\nresv = e.resv\npbs.logmsg(pbs.EVENT_DEBUG, \"Resv partition is %s\" % resv.partition)\ne.accept()\n\"\"\"\n        a = {'event': 'resv_end', 'enabled': 'true', 'debug': 'true'}\n        self.server.create_import_hook(\"h1\", a, hook_body)\n        # Create 3 multi-scheds sc1, sc2 and sc3, 3 partitions and 4 vnodes\n        self.common_setup()\n        # Turn off scheduling in all schedulers but one (say sc3)\n        self.set_scheduling(['sc1', 'sc2', 'default'], False)\n        t = int(time.time())\n        a = 
{'Resource_List.select': '1:ncpus=2', 'reserve_start': t + 5,\n             'reserve_end': t + 15}\n        r = Reservation(TEST_USER, a)\n        rid = self.server.submit(r)\n        a = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, a, rid)\n        self.logger.info(\"Wait for reservation to end\")\n        time.sleep(10)\n        msg = \"Resv partition is P3\"\n        self.server.log_match(msg)\n\n    def test_setting_partition_resv_hook(self):\n        \"\"\"\n        Test that a hook cannot set the partition name on a reservation\n        object because the attribute is read-only\n        \"\"\"\n        hook_body = \"\"\"\nimport pbs\ne = pbs.event()\nresv = e.resv\nresv.partition = \"P-3\"\npbs.logmsg(pbs.EVENT_DEBUG, \"Resv partition is %s\" % resv.partition)\ne.accept()\n\"\"\"\n        a = {'event': 'resvsub', 'enabled': 'true', 'debug': 'true'}\n        self.server.create_import_hook(\"h1\", a, hook_body)\n        # Create 3 multi-scheds sc1, sc2 and sc3, 3 partitions and 4 vnodes\n        self.common_setup()\n        t = int(time.time())\n        a = {'Resource_List.select': '1:ncpus=2', 'reserve_start': t + 5,\n             'reserve_end': t + 15}\n        r = Reservation(TEST_USER, a)\n        with self.assertRaises(PbsSubmitError) as e:\n            rid = self.server.submit(r)\n        msg = \"resv attribute 'partition' is readonly\"\n        self.server.log_match(msg)\n        self.assertIn(\"hook 'h1' encountered an exception\",\n                      e.exception.msg[0])\n\n    def test_resv_alter(self):\n        \"\"\"\n        Test that a reservation confirmed by a multi-sched can be altered by\n        the same scheduler.\n        \"\"\"\n        self.common_setup()\n        # Submit 4 reservations to fill up the system and check they are\n        # confirmed\n        for _ in range(4):\n            t = int(time.time())\n            a = {'Resource_List.select': '1:ncpus=2', 'reserve_start': t + 60,\n                 'reserve_end': t + 120}\n            r = 
Reservation(TEST_USER, a)\n            rid = self.server.submit(r)\n            attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n            self.server.expect(RESV, attr, rid)\n            partition = self.server.status(RESV, 'partition', id=rid)\n            if (partition[0]['partition'] == 'P1'):\n                old_end_time = a['reserve_end']\n                modify_resv = rid\n        # Modify the endtime of reservation confirmed on partition P1 and\n        # make sure the node solution is correct.\n        end_time = old_end_time + 60\n        bu = BatchUtils()\n        new_end_time = bu.convert_seconds_to_datetime(end_time)\n        attrs = {'reserve_end': new_end_time}\n        time_now = time.time()\n        self.server.alterresv(modify_resv, attrs)\n        attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2'),\n                'partition': 'P1'}\n        self.server.expect(RESV, attr, modify_resv)\n        vn = self.mom.shortname\n        rnodes = {'resv_nodes': '(' + vn + '[0]:ncpus=2)'}\n        self.server.expect(RESV, rnodes, id=modify_resv)\n        msg = modify_resv + \";Reservation Confirmed\"\n        self.scheds['sc1'].log_match(msg, starttime=time_now)\n\n    def test_setting_default_partition(self):\n        \"\"\"\n        Test if setting default partition on pbs scheduler/queue/node fails\n        \"\"\"\n\n        self.common_setup()\n        a = {'partition': 'pbs-default'}\n        with self.assertRaises(PbsManagerError) as e:\n            self.server.manager(MGR_CMD_SET, QUEUE, a, id='workq')\n        self.assertIn(\"Default partition name is not allowed\",\n                      e.exception.msg[0])\n        with self.assertRaises(PbsManagerError) as e:\n            vn3 = self.mom.shortname + '[3]'\n            self.server.manager(MGR_CMD_SET, NODE, a, id=vn3)\n        self.assertIn(\"Default partition name is not allowed\",\n                      e.exception.msg[0])\n        with self.assertRaises(PbsManagerError) as e:\n   
         self.server.manager(MGR_CMD_SET, SCHED, a, id='sc1')\n        self.assertIn(\"Default partition name is not allowed\",\n                      e.exception.msg[0])\n\n    def degraded_resv_reconfirm(self, start, end, rrule=None, run=False):\n        \"\"\"\n        Test that a degraded reservation gets reconfirmed in a multi-sched env\n        \"\"\"\n        # Add two nodes to partition P1 and turn off scheduling for all other\n        # schedulers serving partition P2 and P3. Make scheduler sc1 serve\n        # only partition P1 (vnode[0], vnode[1]).\n        p1 = {'partition': 'P1'}\n        vn = ['%s[%d]' % (self.mom.shortname, i) for i in range(2)]\n        self.server.manager(MGR_CMD_SET, NODE, p1, id=vn[1])\n        self.server.expect(SCHED, p1, id=\"sc1\")\n\n        a = {'reserve_retry_time': 5}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        self.set_scheduling(['sc2', 'sc3', 'default'], False)\n\n        now = int(time.time())\n        attr = {'Resource_List.select': '1:ncpus=2',\n                'reserve_start': now + start,\n                'reserve_end': now + end}\n        if rrule is not None:\n            attr.update({ATTR_resv_rrule: rrule,\n                         ATTR_resv_timezone: self.get_tzid()})\n\n        resv = Reservation(TEST_USER, attr)\n        rid = self.server.submit(resv)\n\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, a, id=rid)\n\n        self.server.status(RESV, 'resv_nodes', id=rid)\n        resv_node = self.server.reservations[rid].get_vnodes()[0]\n\n        if run:\n            resv_state = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n            self.logger.info('Sleeping until reservation starts')\n            offset = start - int(time.time())\n            self.server.expect(RESV, resv_state, id=rid,\n                               offset=offset, interval=1)\n        else:\n            resv_state = {'reserve_state': (MATCH_RE, 
'RESV_DEGRADED|10')}\n\n        self.server.manager(MGR_CMD_SET, SCHED, {'scheduling': 'false'},\n                            id=\"sc1\")\n        self.server.status(RESV, 'partition', id=rid)\n        a = {'state': 'offline'}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=resv_node)\n\n        a = {'reserve_substate': 10}\n        a.update(resv_state)\n        self.server.expect(RESV, a, id=rid)\n\n        self.server.manager(MGR_CMD_SET, SCHED, {'scheduling': 'True'},\n                            id=\"sc1\")\n        other_node = vn[resv_node == vn[0]]\n\n        if run:\n            a = {'reserve_substate': 5}\n        else:\n            a = {'reserve_substate': 2}\n        a.update({'resv_nodes': (MATCH_RE, re.escape(other_node))})\n\n        self.server.expect(RESV, a, id=rid, interval=1)\n\n    def test_advance_confirmed_resv_reconfirm(self):\n        \"\"\"\n        Test degraded reservation gets reconfirmed on a different\n        node of the same partition in multi-sched environment\n        \"\"\"\n        self.common_setup()\n        self.degraded_resv_reconfirm(start=600, end=800)\n\n    def test_advance_running_resv_reconfirm(self):\n        \"\"\"\n        Test degraded running reservation gets reconfirmed on a different\n        node of the same partition in multi-sched environment\n        \"\"\"\n        self.common_setup()\n        self.degraded_resv_reconfirm(start=20, end=200, run=True)\n\n    def test_standing_confirmed_resv_reconfirm(self):\n        \"\"\"\n        Test degraded standing resv gets reconfirmed on a different\n        node of the same partition in multi-sched environment\n        \"\"\"\n        self.common_setup()\n        self.degraded_resv_reconfirm(start=600, end=800,\n                                     rrule='FREQ=HOURLY;COUNT=2')\n\n    def test_standing_running_resv_reconfirm(self):\n        \"\"\"\n        Test degraded running standing resv gets reconfirmed on a different\n        node of the same 
partition in multi-sched environment\n        \"\"\"\n        self.common_setup()\n        self.degraded_resv_reconfirm(start=20, end=200, run=True,\n                                     rrule='FREQ=HOURLY;COUNT=2')\n\n    def test_resv_from_job_in_multi_sched_using_qsub(self):\n        \"\"\"\n        Test that a user is able to create a reservation out of a job using\n        qsub when the job is part of a non-default partition\n        \"\"\"\n        self.common_setup()\n        # Turn off scheduling in all schedulers but sc1\n        self.set_scheduling(['sc2', 'sc3', 'default'], False)\n\n        a = {ATTR_W: 'create_resv_from_job=1', ATTR_q: 'wq1',\n             'Resource_List.walltime': 1000}\n        job = Job(TEST_USER, a)\n        jid = self.server.submit(job)\n        self.server.expect(JOB, {ATTR_state: 'R'}, jid)\n\n        a = {ATTR_job: jid}\n        rid = self.server.status(RESV, a)[0]['id'].split(\".\")[0]\n\n        a = {ATTR_job: jid, 'reserve_state': (MATCH_RE, 'RESV_RUNNING|5'),\n             'partition': 'P1'}\n        self.server.expect(RESV, a, id=rid)\n\n    def test_resv_from_job_in_multi_sched_using_rsub(self):\n        \"\"\"\n        Test that a user is able to create a reservation out of a job using\n        pbs_rsub when the job is part of a non-default partition\n        \"\"\"\n        self.common_setup()\n        # Turn off scheduling in all schedulers but sc1\n        self.set_scheduling(['sc2', 'sc3', 'default'], False)\n\n        a = {'Resource_List.select': '1:ncpus=2', ATTR_q: 'wq1',\n             'Resource_List.walltime': 1000}\n        job = Job(TEST_USER, a)\n        jid = self.server.submit(job)\n        self.server.expect(JOB, {ATTR_state: 'R'}, jid)\n\n        a = {ATTR_job: jid}\n        resv = Reservation(attrs=a)\n        rid = self.server.submit(resv)\n\n        a = {ATTR_job: jid, 'reserve_state': (MATCH_RE, 'RESV_RUNNING|5'),\n             'partition': 'P1'}\n        self.server.expect(RESV, a, id=rid)\n\n    def 
test_resv_alter_force_for_confirmed_resv(self):\n        \"\"\"\n        Test that in a multi-sched setup ralter -Wforce can\n        modify a confirmed reservation successfully even when\n        the ralter results in oversubscription of resources.\n        \"\"\"\n\n        self.common_setup()\n        # Submit 4 reservations to fill up the system and check they are\n        # confirmed\n        for _ in range(4):\n            t = int(time.time())\n            a = {'Resource_List.select': '1:ncpus=2', 'reserve_start': t + 300,\n                 'reserve_end': t + 900}\n            r = Reservation(TEST_USER, a)\n            rid = self.server.submit(r)\n            attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n            self.server.expect(RESV, attr, rid)\n            partition = self.server.status(RESV, 'partition', id=rid)\n            if (partition[0]['partition'] == 'P1'):\n                p1_start_time = t + 300\n        # Submit a reservation that will end before the start time of the\n        # reservation confirmed in partition P1\n        bu = BatchUtils()\n        stime = int(time.time()) + 30\n        etime = p1_start_time - 10\n        # Turn off scheduling for all schedulers, except sc1\n        self.set_scheduling(['sc2', 'sc3', 'default'], False)\n\n        attrs = {'reserve_end': etime, 'reserve_start': stime,\n                 'Resource_List.select': '1:ncpus=2'}\n        rid_new = self.server.submit(Reservation(TEST_USER, attrs))\n\n        check_attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2'),\n                      'partition': 'P1'}\n        self.server.expect(RESV, check_attr, rid_new)\n\n        # Turn off the last running scheduler\n        self.server.manager(MGR_CMD_SET, SCHED, {'scheduling': 'false'},\n                            id=\"sc1\")\n        # Extend the end time so that it overlaps with an existing reservation\n        etime = etime + 300\n        a = {'reserve_end': bu.convert_seconds_to_datetime(etime),\n        
     'reserve_start': bu.convert_seconds_to_datetime(stime)}\n\n        self.server.alterresv(rid_new, a, extend='force')\n        msg = \"pbs_ralter: \" + rid_new + \" CONFIRMED\"\n        self.assertEqual(msg, self.server.last_out[0])\n        resv_attr = self.server.status(RESV, id=rid_new)[0]\n        resv_end = bu.convert_stime_to_seconds(resv_attr['reserve_end'])\n        self.assertEqual(int(resv_end), etime)\n"
  },
  {
    "path": "test/tests/functional/pbs_multiple_execjob_launch_hook.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestExecJobHook(TestFunctional):\n\n    hooks = {\n        \"execjob_hook1\":\n        \"\"\"\nimport pbs\npbs.logmsg(pbs.LOG_DEBUG, \"executing execjob_launch 1\")\ne = pbs.event()\ne.progname = \"/usr/bin/echo\"\ne.argv = []\ne.argv.append(\"launch 1\")\npbs.logmsg(pbs.LOG_DEBUG, \"environment var from execjob_hook1 is %s\" % (e.env))\n        \"\"\",\n        \"execjob_hook2\":\n        \"\"\"\nimport pbs\npbs.logmsg(pbs.LOG_DEBUG, \"executing execjob_launch 2\")\ne = pbs.event()\nif (e.progname != \"/usr/bin/echo\"):\n    pbs.logmsg(pbs.LOG_DEBUG,\n        \"Modified progname value did not propagate from launch1 hook\")\n    e.reject(\"\")\nelse:\n    pbs.logmsg(pbs.LOG_DEBUG,\n        \"Modified progname value got updated from launch1\")\npbs.logmsg(pbs.LOG_DEBUG, \"environment var from execjob_hook2 is %s\" % (e.env))\n        \"\"\",\n    }\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 2047})\n\n    def test_multi_execjob_hook(self):\n        \"\"\"\n        Test that a progname modification made by the first execjob_launch\n        hook is visible to the second execjob_launch hook\n        \"\"\"\n        hook_names = [\"execjob_hook1\", \"execjob_hook2\"]\n        hook_attrib = {'event': 'execjob_launch', 'enabled': 'True',\n                       'order': ''}\n        for hook_name in 
hook_names:\n            hook_script = self.hooks[hook_name]\n            hook_attrib['order'] = hook_names.index(hook_name) + 1\n            retval = self.server.create_import_hook(hook_name,\n                                                    hook_attrib,\n                                                    hook_script,\n                                                    overwrite=True)\n            self.assertTrue(retval)\n\n        job = Job(TEST_USER1, attrs={ATTR_l: 'select=1:ncpus=1',\n                                     ATTR_e: '/tmp/',\n                                     ATTR_o: '/tmp/'})\n        job.set_sleep_time(1)\n        jid = self.server.submit(job)\n        self.mom.log_match(\n            \"Modified progname value got updated from launch1\",\n            max_attempts=3, interval=3)\n\n        hook1 = [\"execjob_hook1\", \"execjob_hook2\"]\n\n        for hk in hook1:\n            msg = \"environment var from \" + hk\n            rv = self.mom.log_match(msg, starttime=self.server.ctime,\n                                    max_attempts=10)\n            val = rv[1] + ','  # appending comma at the end\n            self.assertTrue(\"PBS_TASKNUM=1,\" in val,\n                            \"Message not found for hook \" + hk)\n"
  },
  {
    "path": "test/tests/functional/pbs_node_buckets.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestNodeBuckets(TestFunctional):\n    \"\"\"\n    Test basic functionality of node buckets.\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        day = time.strftime(\"%Y%m%d\", time.localtime(time.time()))\n        filename = os.path.join(self.server.pbs_conf['PBS_HOME'],\n                                'sched_logs', day)\n        self.du.rm(path=filename, force=True, sudo=True, level=logging.DEBUG2)\n\n        self.colors = \\\n            ['red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet']\n        self.shapes = ['circle', 'square', 'triangle',\n                       'diamond', 'pyramid', 'sphere', 'cube']\n        self.letters = ['A', 'B', 'C', 'D', 'E', 'F', 'G']\n\n        self.server.manager(MGR_CMD_CREATE, RSC,\n                            {'type': 'string', 'flag': 'h'}, id='color')\n        self.server.manager(MGR_CMD_CREATE, RSC,\n                            {'type': 'string_array', 'flag': 'h'}, id='shape')\n        self.server.manager(MGR_CMD_CREATE, RSC,\n                            {'type': 'string_array', 'flag': 'h'}, id='letter')\n        self.server.manager(MGR_CMD_CREATE, RSC,\n                            {'type': 'boolean', 'flag': 'h'}, id='bool')\n\n        a = {'resources_available.ncpus': 2, 'resources_available.mem': '8gb'}\n        # 10010 nodes 
since it divides into 7 evenly.\n        # Each node bucket will have 1430 nodes in it\n        self.mom.create_vnodes(attrib=a, num=10010,\n                               sharednode=False,\n                               expect=False, attrfunc=self.cust_attr_func)\n        # Make sure all the nodes are in state free.  We can't let\n        # create_vnodes() do this because it does a pbsnodes -v on each vnode.\n        # This takes a long time.\n        self.server.expect(NODE, {'state=free': (GE, 10010)})\n\n        self.scheduler.add_resource('color')\n\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events': 2047})\n\n    def cust_attr_func(self, name, totalnodes, numnode, attribs):\n        \"\"\"\n        Add resources to vnodes.  There are 10010 nodes, which means 1430\n        nodes of each color, letter, and shape.  The value of bool is True\n        for the first 5005 nodes and unset for the last 5005 nodes\n        \"\"\"\n        a = {'resources_available.color': self.colors[numnode // 1430],\n             'resources_available.shape': self.shapes[numnode % 7],\n             'resources_available.letter': self.letters[numnode % 7]}\n\n        if numnode // 5005 == 0:\n            a['resources_available.bool'] = 'True'\n\n        # Yellow buckets get a higher priority\n        if numnode // 1430 == 2:\n            a['Priority'] = 100\n        return {**attribs, **a}\n\n    def check_normal_path(self, sel='2:ncpus=2:mem=1gb', pl='scatter:excl',\n                          queue='workq'):\n        \"\"\"\n        Check if a job runs in the normal code path\n        \"\"\"\n        a = {'Resource_List.select': sel, 'Resource_List.place': pl,\n             'queue': queue}\n        j = Job(TEST_USER, attrs=a)\n\n        jid = self.server.submit(j)\n        self.scheduler.log_match(jid + ';Evaluating subchunk', n=10000,\n                                 interval=1)\n\n        self.server.delete(jid, wait=True)\n\n    @timeout(900)\n    def 
test_basic(self):\n        \"\"\"\n        Request nodes of a specific color and make sure they are correctly\n        allocated to the job\n        \"\"\"\n        chunk = '4:ncpus=1:color=yellow'\n        a = {'Resource_List.select': chunk,\n             'Resource_List.place': 'scatter:excl'}\n        J = Job(TEST_USER, a)\n        jid = self.server.submit(J)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.scheduler.log_match(jid + ';Chunk: ' + chunk, n=10000)\n\n        js = self.server.status(JOB, id=jid)\n        nodes = J.get_vnodes(js[0]['exec_vnode'])\n        for node in nodes:\n            n = self.server.status(NODE, 'resources_available.color', id=node)\n            self.assertTrue('yellow' in\n                            n[0]['resources_available.color'])\n\n    @timeout(900)\n    @skip(\"issue 2334\")\n    def test_multi_bucket(self):\n        \"\"\"\n        Request two different chunk types which need to be allocated from\n        different buckets and make sure they are allocated correctly.\n        \"\"\"\n        a = {'Resource_List.select':\n             '4:ncpus=1:color=yellow+4:ncpus=1:color=blue',\n             'Resource_List.place': 'scatter:excl'}\n        J = Job(TEST_USER, a)\n        jid = self.server.submit(J)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.scheduler.log_match(jid + ';Chunk: ', n=10000)\n\n        js = self.server.status(JOB, id=jid)\n        nodes = J.get_vnodes(js[0]['exec_vnode'])\n        # Yellow nodes were requested first.\n        # Make sure they come before the blue nodes.\n        for i in range(4):\n            n = self.server.status(NODE, id=nodes[i])\n            self.assertTrue('yellow' in n[0]['resources_available.color'])\n        for i in range(4, 8):\n            n = self.server.status(NODE, id=nodes[i])\n            self.assertTrue('blue' in n[0]['resources_available.color'])\n\n    @timeout(900)\n    @skip(\"issue 2334\")\n    def 
test_multi_bucket2(self):\n        \"\"\"\n        Request nodes from all 7 different buckets and see them allocated\n        correctly\n        \"\"\"\n        select = \"\"\n        for c in self.colors:\n            select += \"1:ncpus=1:color=%s+\" % (c)\n\n        # remove the trailing '+'\n        select = select[:-1]\n\n        a = {'Resource_List.select': select,\n             'Resource_List.place': 'scatter:excl'}\n\n        J = Job(TEST_USER, a)\n        jid = self.server.submit(J)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.scheduler.log_match(jid + ';Chunk:', n=10000)\n\n        js = self.server.status(JOB, id=jid)\n        nodes = J.get_vnodes(js[0]['exec_vnode'])\n        for i, node in enumerate(nodes):\n            n = self.server.status(NODE, id=node)\n            self.assertTrue(self.colors[i] in\n                            n[0]['resources_available.color'])\n\n    @skip(\"issue 2334\")\n    def test_not_run(self):\n        \"\"\"\n        Request more nodes of one color than are available to make sure\n        the job is not run on incorrect nodes.\n        \"\"\"\n        chunk = '1431:ncpus=1:color=yellow'\n        a = {'Resource_List.select': chunk,\n             'Resource_List.place': 'scatter:excl'}\n        J = Job(TEST_USER, a)\n        jid = self.server.submit(J)\n        a = {'comment': (MATCH_RE, '^Can Never Run'),\n             'job_state': 'Q'}\n        self.server.expect(JOB, a, attrop=PTL_AND, id=jid)\n        self.scheduler.log_match(jid + ';Chunk: ' + chunk, n=10000)\n\n    @timeout(900)\n    @skip(\"issue 2334\")\n    def test_calendaring1(self):\n        \"\"\"\n        Test to see that nodes that are used in the future for\n        calendared jobs are not used for filler jobs that would\n        disrupt the scheduled time.\n        \"\"\"\n        self.scheduler.set_sched_config({'strict_ordering': 'True'})\n\n        chunk1 = '1:ncpus=1'\n        a = {'Resource_List.select': chunk1,\n            
 'Resource_List.place': 'scatter:excl',\n             'Resource_List.walltime': '1:00:00'}\n        j = Job(TEST_USER, attrs=a)\n        jid1 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.scheduler.log_match(jid1 + ';Chunk: ' + chunk1, n=10000)\n\n        chunk2 = '10010:ncpus=1'\n        a = {'Resource_List.select': chunk2,\n             'Resource_List.place': 'scatter:excl',\n             'Resource_List.walltime': '2:00:00'}\n        j = Job(TEST_USER, attrs=a)\n        jid2 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n        self.server.expect(JOB, 'comment', op=SET, id=jid2, interval=1)\n        self.scheduler.log_match(jid2 + ';Chunk: ' + chunk2, n=10000)\n\n        chunk3 = '2:ncpus=1'\n        a = {'Resource_List.select': chunk3,\n             'Resource_List.place': 'scatter:excl',\n             'Resource_List.walltime': '30:00'}\n        j = Job(TEST_USER, attrs=a)\n        jid3 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3, interval=1)\n        self.scheduler.log_match(jid3 + ';Chunk: ' + chunk3, n=10000)\n\n        a = {'Resource_List.select': chunk3,\n             'Resource_List.place': 'scatter:excl',\n             'Resource_List.walltime': '2:30:00'}\n        j = Job(TEST_USER, attrs=a)\n        jid4 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid4)\n        self.server.expect(JOB, 'comment', op=SET, id=jid4, interval=1)\n        self.scheduler.log_match(jid4 + ';Chunk: ' + chunk3, n=10000)\n\n    @timeout(900)\n    @skip(\"issue 2334\")\n    def test_calendaring2(self):\n        \"\"\"\n        Test that nodes that a reservation calendared on them later on\n        are used before totally free nodes\n        \"\"\"\n\n        self.scheduler.set_sched_config({'strict_ordering': 'True'})\n\n        now = int(time.time())\n        vnode = self.mom.shortname\n        select_s = 
'1:vnode=' + vnode + '[2865]+1:vnode=' + vnode + '[2870]'\n        a = {'Resource_List.select': select_s,\n             'Resource_List.place': 'scatter:excl',\n             'Resource_List.walltime': '1:00:00',\n             'reserve_start': now + 3600, 'reserve_end': now + 7200}\n        r = Reservation(TEST_USER, attrs=a)\n        rid = self.server.submit(r)\n        self.server.expect(RESV, {'reserve_state':\n                                  (MATCH_RE, 'RESV_CONFIRMED|2')}, id=rid)\n\n        chunk = '2:ncpus=1:color=yellow'\n        a = {'Resource_List.select': chunk,\n             'Resource_List.place': 'scatter:excl',\n             'Resource_List.walltime': '30:00'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.scheduler.log_match(jid + ';Chunk: ' + chunk, n=10000)\n\n        s = self.server.status(JOB, 'exec_vnode', id=jid)\n        n = j.get_vnodes(s[0]['exec_vnode'])\n        msg = 'busy_later nodes not chosen first'\n        self.assertTrue(vnode + '[2865]' in n, msg)\n        self.assertTrue(vnode + '[2870]' in n, msg)\n\n    @timeout(900)\n    @skip(\"issue 2334\")\n    def test_calendaring3(self):\n        \"\"\"\n        Test that a future reservation's nodes are used first for a job\n        that is put into the calendar.\n        \"\"\"\n\n        self.scheduler.set_sched_config({'strict_ordering': 'True'})\n        vnode = self.mom.shortname\n        now = int(time.time())\n        select_s = '1:vnode=' + vnode + '[2865]+1:vnode=' + vnode + '[2870]'\n        a = {'Resource_List.select': select_s,\n             'Resource_List.place': 'scatter:excl',\n             'Resource_List.walltime': '1:00:00',\n             'reserve_start': now + 3600, 'reserve_end': now + 7200}\n        r = Reservation(TEST_USER, attrs=a)\n        rid = self.server.submit(r)\n        self.server.expect(RESV, {'reserve_state':\n                                  (MATCH_RE, 
'RESV_CONFIRMED|2')}, id=rid)\n\n        chunk1 = '1430:ncpus=1:color=yellow'\n        a = {'Resource_List.select': chunk1,\n             'Resource_List.place': 'scatter:excl',\n             'Resource_List.walltime': '30:00'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.scheduler.log_match(jid + ';Chunk: ' + chunk1, n=10000)\n\n        chunk2 = '2:ncpus=1:color=yellow'\n        a = {'Resource_List.select': chunk2,\n             'Resource_List.place': 'scatter:excl',\n             'Resource_List.walltime': '15:00'}\n        j2 = Job(TEST_USER, attrs=a)\n        jid2 = self.server.submit(j2)\n        self.scheduler.log_match(jid2 + ';Chunk: ' + chunk2, n=10000)\n        self.server.expect(JOB, 'estimated.exec_vnode', op=SET, id=jid2)\n\n        s = self.server.status(JOB, 'estimated.exec_vnode', id=jid2)\n        n = j2.get_vnodes(s[0]['estimated.exec_vnode'])\n        msg = 'busy_later nodes not chosen first'\n        self.assertTrue(vnode + '[2865]' in n, msg)\n        self.assertTrue(vnode + '[2870]' in n, msg)\n\n    @timeout(900)\n    @skip(\"issue 2334\")\n    def test_buckets_and_non(self):\n        \"\"\"\n        Test that jobs requesting buckets and not requesting buckets\n        play nice together\n        \"\"\"\n\n        # vnode[1435] is orange\n        vn = self.mom.shortname\n        a = {'Resource_List.ncpus': 1,\n             'Resource_List.vnode': vn + '[1435]'}\n        j1 = Job(TEST_USER, attrs=a)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.scheduler.log_match(jid1 + ';Evaluating subchunk', n=10000)\n\n        chunk = '1429:ncpus=1:color=orange'\n        a = {'Resource_List.select': chunk,\n             'Resource_List.place': 'scatter:excl'}\n        j2 = Job(TEST_USER, attrs=a)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'R'}, 
id=jid2)\n        self.scheduler.log_match(jid2 + ';Chunk: ' + chunk, n=10000)\n\n        s1 = self.server.status(JOB, 'exec_vnode', id=jid1)\n        s2 = self.server.status(JOB, 'exec_vnode', id=jid2)\n\n        nodes1 = j1.get_vnodes(s1[0]['exec_vnode'])\n        nodes2 = j2.get_vnodes(s2[0]['exec_vnode'])\n\n        msg = 'Job 1 and Job 2 are sharing nodes'\n        for n in nodes2:\n            self.assertNotEqual(n, nodes1[0], msg)\n\n    @timeout(900)\n    @skip(\"issue 2334\")\n    def test_not_buckets(self):\n        \"\"\"\n        Test to make sure the jobs that should use the standard node searching\n        code path do not use the bucket code path\n        \"\"\"\n\n        # Running a 10010 cpu job through the normal code path spams the log.\n        # We don't care about it, so there is no reason to increase\n        # the log size by so much.\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events': 767})\n        # Run a job on all nodes leaving 1 cpu available on each node\n        j = Job(TEST_USER, {'Resource_List.select': '10010:ncpus=1',\n                            'Resource_List.place': 'scatter'})\n        j.set_sleep_time(600)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events': 2047})\n\n        # Node sorting via unused resources uses the standard code path\n        self.logger.info('Test node_sort_key with unused resources')\n        a = {'node_sort_key': '\\\"ncpus HIGH unused\\\"'}\n        self.scheduler.set_sched_config(a)\n        self.check_normal_path()\n\n        self.scheduler.revert_to_defaults()\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events': 2047})\n\n        # provision_policy: avoid_provision uses the standard code path\n        self.logger.info('Test avoid_provision')\n        a = {'provision_policy': 'avoid_provision'}\n        self.scheduler.set_sched_config(a)\n        
self.check_normal_path()\n\n        self.scheduler.revert_to_defaults()\n        self.scheduler.add_resource('color')\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events': 2047})\n\n        # the bucket codepath requires excl\n        self.logger.info('Test different place specs')\n        self.check_normal_path(pl='scatter:shared')\n        self.check_normal_path(pl='free')\n\n        vn = self.mom.shortname\n        # can't request host or vnode resources on the bucket codepath\n        self.logger.info('Test jobs requesting host and vnode')\n        self.check_normal_path(sel='1:ncpus=2:host=' + vn + '[0]')\n        self.check_normal_path(sel='1:ncpus=2:vnode=' + vn + '[0]')\n\n        # suspended jobs use the normal codepath\n        self.logger.info('Test suspended job')\n        a = {'queue_type': 'execution', 'started': 'True', 'enabled': 'True',\n             'priority': 200}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='expressq')\n        self.server.delete(jid, wait=True)\n\n        a = {'Resource_List.select': '1430:ncpus=1:color=orange',\n             'Resource_List.place': 'scatter:excl'}\n        j2 = Job(TEST_USER, a)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n\n        a = {'Resource_List.select': '1:ncpus=1:color=orange',\n             'queue': 'expressq'}\n        j3 = Job(TEST_USER, a)\n        jid3 = self.server.submit(j3)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.scheduler.log_match(jid3 + ';Evaluating subchunk', n=10000)\n        self.server.delete([jid2, jid3], wait=True)\n\n        # Checkpointed jobs use normal code path\n        self.logger.info('Test checkpointed job')\n        chk_script = \"\"\"#!/bin/bash\n                kill $1\n                exit 0\n                \"\"\"\n        
self.mom.add_checkpoint_abort_script(body=chk_script)\n\n        self.server.manager(MGR_CMD_SET, SCHED, {'preempt_order': 'C'},\n                            runas=ROOT_USER)\n        attrs = {'Resource_List.select': '1430:ncpus=1:color=orange',\n                 'Resource_List.place': 'scatter:excl'}\n        j_c1 = Job(TEST_USER, attrs)\n        jid_c1 = self.server.submit(j_c1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid_c1)\n        self.scheduler.log_match(\n            jid_c1 + ';Chunk: 1430:ncpus=1:color=orange', n=10000)\n        a = {'Resource_List.select': '1:ncpus=1:color=orange',\n             'queue': 'expressq'}\n        j_c2 = Job(TEST_USER, a)\n        jid_c2 = self.server.submit(j_c2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid_c1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid_c2)\n        self.scheduler.log_match(\n            jid_c1 + \";Job preempted by checkpointing\", n=10000)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.scheduler.log_match(jid_c2 + ';Evaluating subchunk', n=10000)\n        self.server.delete([jid_c1, jid_c2], wait=True)\n\n        # Jobs in reservations use the standard codepath\n        self.logger.info('Test job in reservation')\n        now = int(time.time())\n        a = {'Resource_List.select': '4:ncpus=2:mem=4gb',\n             'Resource_List.place': 'scatter:excl',\n             'reserve_start': now + 30, 'reserve_end': now + 120}\n        r = Reservation(TEST_USER, a)\n        rid = self.server.submit(r)\n        self.server.expect(RESV,\n                           {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')})\n        self.logger.info('Waiting 30s for reservation to start')\n        self.server.expect(RESV,\n                           {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')},\n                           offset=30)\n        r_queue = rid.split('.')[0]\n        self.check_normal_path(sel='1:ncpus=3', queue=r_queue)\n 
       self.server.delete(rid)\n\n        # Jobs on multi-vnoded systems use the standard codepath\n        self.logger.info('Test job on multi-vnoded system')\n        a = {'resources_available.ncpus': 2, 'resources_available.mem': '8gb'}\n        self.mom.create_vnodes(a, 8, sharednode=False,\n                               vnodes_per_host=4)\n        self.check_normal_path(sel='2:ncpus=8')\n\n    @timeout(900)\n    @skip(\"issue 2334\")\n    def test_multi_vnode_resv(self):\n        \"\"\"\n        Test that node buckets do not get in the way of running jobs on\n        multi-vnoded systems in reservations\n        \"\"\"\n        a = {'resources_available.ncpus': 2, 'resources_available.mem': '8gb'}\n        self.mom.create_vnodes(a, 12,\n                               sharednode=False, vnodes_per_host=4,\n                               attrfunc=self.cust_attr_func)\n\n        now = int(time.time())\n        a = {'Resource_List.select': '8:ncpus=1',\n             'Resource_List.place': 'vscatter',\n             'reserve_start': now + 30,\n             'reserve_end': now + 3600}\n\n        r = Reservation(TEST_USER, attrs=a)\n        rid = self.server.submit(r)\n\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, a, id=rid)\n\n        self.logger.info('Waiting 30s for reservation to start')\n        a['reserve_state'] = (MATCH_RE, 'RESV_RUNNING|5')\n        self.server.expect(RESV, a, id=rid, offset=30)\n\n        a = {'Resource_List.select': '2:ncpus=1',\n             'Resource_List.place': 'group=shape',\n             'queue': rid.split('.')[0]}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.scheduler.log_match(jid + ';Evaluating subchunk', n=10000)\n\n        ev = self.server.status(JOB, 'exec_vnode', id=jid)\n        used_nodes = j.get_vnodes(ev[0]['exec_vnode'])\n\n        n = self.server.status(NODE, 
'resources_available.shape')\n        s = [x['resources_available.shape']\n             for x in n if x['id'] in used_nodes]\n        self.assertEqual(len(set(s)), 1,\n                         \"Job ran in more than one placement set\")\n\n    @timeout(900)\n    @skip(\"issue 2334\")\n    def test_bucket_sort(self):\n        \"\"\"\n        Test that buckets are sorted properly: all of the yellow bucket's\n        nodes also have priority 100.  It should be the first bucket.\n        \"\"\"\n        a = {'node_sort_key': '\\\"sort_priority HIGH\\\"'}\n        self.scheduler.set_sched_config(a)\n\n        chunk = '2:ncpus=1'\n        j = Job(TEST_USER, {'Resource_List.select': chunk,\n                            'Resource_List.place': 'scatter:excl'})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.scheduler.log_match(jid + ';Chunk: ' + chunk, n=10000)\n\n        jobs = self.server.status(JOB, {'exec_vnode'})\n        jn = j.get_vnodes(jobs[0]['exec_vnode'])\n        n1 = self.server.status(NODE, 'resources_available.color',\n                                id=jn[0])\n        n2 = self.server.status(NODE, 'resources_available.color',\n                                id=jn[1])\n\n        c1 = n1[0]['resources_available.color']\n        c2 = n2[0]['resources_available.color']\n        self.assertEqual(c1, 'yellow', \"Job didn't run on yellow nodes\")\n        self.assertEqual(c2, 'yellow', \"Job didn't run on yellow nodes\")\n\n    @timeout(900)\n    @skip(\"issue 2334\")\n    def test_psets(self):\n        \"\"\"\n        Test placement sets with node buckets\n        \"\"\"\n        a = {'node_group_key': 'shape', 'node_group_enable': 'True',\n             'scheduling': 'False'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        chunk = '1430:ncpus=1'\n        a = {'Resource_List.select': chunk,\n             'Resource_List.place': 'scatter:excl'}\n        j1 = Job(TEST_USER, attrs=a)\n        jid1 = 
self.server.submit(j1)\n\n        j2 = Job(TEST_USER, attrs=a)\n        jid2 = self.server.submit(j2)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.scheduler.log_match(jid1 + ';Chunk: ' + chunk, n=10000)\n        self.scheduler.log_match(jid2 + ';Chunk: ' + chunk, n=10000)\n\n        ev = self.server.status(JOB, 'exec_vnode', id=jid1)\n        used_nodes1 = j1.get_vnodes(ev[0]['exec_vnode'])\n\n        n = self.server.status(NODE, 'resources_available.shape')\n        s = [x['resources_available.shape']\n             for x in n if x['id'] in used_nodes1]\n        self.assertEqual(len(set(s)), 1,\n                         \"Job1 ran in more than one placement set\")\n\n        ev = self.server.status(JOB, 'exec_vnode', id=jid2)\n        used_nodes2 = j2.get_vnodes(ev[0]['exec_vnode'])\n\n        s = [x['resources_available.shape']\n             for x in n if x['id'] in used_nodes2]\n        self.assertEqual(len(set(s)), 1,\n                         \"Job2 ran in more than one placement set\")\n\n        for node in used_nodes1:\n            self.assertNotIn(node, used_nodes2, 'Jobs share nodes: ' + node)\n\n    @timeout(900)\n    @skip(\"issue 2334\")\n    def test_psets_calendaring(self):\n        \"\"\"\n        Test that jobs in the calendar fit within a placement set\n        \"\"\"\n        self.scheduler.set_sched_config({'strict_ordering': 'True'})\n        svr_attr = {'node_group_key': 'shape', 'node_group_enable': 'True',\n                    'backfill_depth': 5}\n        self.server.manager(MGR_CMD_SET, SERVER, svr_attr)\n\n        chunk1 = '10010:ncpus=1'\n        a = {'Resource_List.select': chunk1,\n             'Resource_List.place': 'scatter:excl',\n             'Resource_List.walltime': '1:00:00'}\n\n        j1 = Job(TEST_USER, attrs=a)\n        jid1 = self.server.submit(j1)\n\n      
  self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.scheduler.log_match(jid1 + ';Chunk: ' + chunk1, n=10000)\n\n        chunk2 = '1430:ncpus=1'\n        a['Resource_List.select'] = chunk2\n\n        j2 = Job(TEST_USER, a)\n        jid2 = self.server.submit(j2)\n\n        self.scheduler.log_match(\n            jid2 + ';Chunk: ' + chunk2, interval=1, n=10000)\n        self.scheduler.log_match(jid2 + ';Job is a top job', n=10000)\n\n        n = self.server.status(NODE, 'resources_available.shape')\n\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n        self.server.expect(JOB, 'estimated.start_time', id=jid2, op=SET)\n        ev = self.server.status(JOB, 'estimated.exec_vnode', id=jid2)\n        used_nodes2 = j2.get_vnodes(ev[0]['estimated.exec_vnode'])\n\n        s = [x['resources_available.shape']\n             for x in n if x['id'] in used_nodes2]\n        self.assertEqual(len(set(s)), 1,\n                         \"Job2 will run in more than one placement set\")\n\n        j3 = Job(TEST_USER, a)\n        jid3 = self.server.submit(j3)\n\n        self.scheduler.log_match(\n            jid3 + ';Chunk: ' + chunk2, interval=1, n=10000)\n        self.scheduler.log_match(jid3 + ';Job is a top job', n=10000)\n\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid3)\n        self.server.expect(JOB, 'estimated.start_time', id=jid3, op=SET)\n        ev = self.server.status(JOB, 'estimated.exec_vnode', id=jid3)\n        used_nodes3 = j3.get_vnodes(ev[0]['estimated.exec_vnode'])\n\n        s = [x['resources_available.shape']\n             for x in n if x['id'] in used_nodes3]\n        self.assertEqual(len(set(s)), 1,\n                         \"Job3 will run in more than one placement set\")\n\n        for node in used_nodes2:\n            self.assertNotIn(node, used_nodes3,\n                             'Jobs will share nodes: ' + node)\n\n    @timeout(900)\n    @skip(\"issue 2334\")\n    def test_psets_calendaring_resv(self):\n        
\"\"\"\n        Test that jobs do not run into a reservation and will correctly\n        be added to the calendar on the correct vnodes with placement sets\n        \"\"\"\n\n        self.scheduler.set_sched_config({'strict_ordering': True})\n        self.server.manager(MGR_CMD_SET, SERVER, {'node_group_key': 'shape',\n                                                  'node_group_enable': True})\n\n        now = int(time.time())\n        a = {'Resource_List.select': '10010:ncpus=1',\n             'Resource_List.place': 'scatter:excl',\n             'reserve_start': now + 600, 'reserve_end': now + 3600}\n        r = Reservation(attrs=a)\n        rid = self.server.submit(r)\n        self.server.expect(RESV, {'reserve_state':\n                                  (MATCH_RE, 'RESV_CONFIRMED|2')}, id=rid)\n\n        a = {'Resource_List.select': '1430:ncpus=1',\n             'Resource_List.place': 'scatter:excl',\n             'Resource_List.walltime': '1:00:00'}\n        j = Job(attrs=a)\n        jid = self.server.submit(j)\n\n        self.server.expect(JOB, 'estimated.exec_vnode', id=jid, op=SET)\n\n        n = self.server.status(NODE, 'resources_available.shape')\n        st = self.server.status(JOB, 'estimated.exec_vnode', id=jid)[0]\n        nodes = j.get_vnodes(st['estimated.exec_vnode'])\n\n        s = [x['resources_available.shape']\n             for x in n if x['id'] in nodes]\n        self.assertEqual(len(set(s)), 1,\n                         \"Job will run in more than one placement set\")\n\n    @timeout(900)\n    @skip(\"issue 2334\")\n    def test_place_group(self):\n        \"\"\"\n        Test node buckets with place=group\n        \"\"\"\n        chunk = '1430:ncpus=1'\n        a = {'Resource_List.select': chunk,\n             'Resource_List.place': 'scatter:excl:group=letter'}\n\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.scheduler.log_match(jid + 
';Chunk: ' + chunk, n=10000)\n\n        ev = self.server.status(JOB, 'exec_vnode', id=jid)\n        used_nodes = j.get_vnodes(ev[0]['exec_vnode'])\n\n        n = self.server.status(NODE, 'resources_available.letter')\n        s = [x['resources_available.letter']\n             for x in n if x['id'] in used_nodes]\n        self.assertEqual(len(set(s)), 1,\n                         \"Job ran in more than one placement set\")\n\n    @timeout(900)\n    @skip(\"issue 2334\")\n    def test_psets_spanning(self):\n        \"\"\"\n        Request more nodes than available in one placement set and see\n        the job span or not depending on the value of do_not_span_psets\n        \"\"\"\n        # Turn off scheduling to be sure there is no cycle running when\n        # configurations are changed\n        a = {'scheduling': 'False'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {'node_group_key': 'shape', 'node_group_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {'do_not_span_psets': 'True'}\n        self.server.manager(MGR_CMD_SET, SCHED, a, id='default')\n\n        # request one more node than the largest placement set\n        chunk = '1431:ncpus=1'\n        a = {'Resource_List.select': chunk,\n             'Resource_List.place': 'scatter:excl'}\n\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n\n        # Trigger a scheduling cycle\n        a = {'scheduling': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {'job_state': 'Q', 'comment':\n             (MATCH_RE, 'can\\'t fit in the largest placement set, '\n              'and can\\'t span psets')}\n        self.server.expect(JOB, a, attrop=PTL_AND, id=jid)\n        self.scheduler.log_match(jid + ';Chunk: ' + chunk, n=10000)\n\n        a = {'do_not_span_psets': 'False'}\n        self.server.manager(MGR_CMD_SET, SCHED, a, id='default')\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n  
      self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        ev = self.server.status(JOB, 'exec_vnode', id=jid)\n        used_nodes = j.get_vnodes(ev[0]['exec_vnode'])\n\n        n = self.server.status(NODE, 'resources_available.shape')\n        s = [x['resources_available.shape']\n             for x in n if x['id'] in used_nodes]\n        self.assertGreater(len(set(s)), 1,\n                           \"Job did not span properly\")\n\n    @timeout(900)\n    @skip(\"issue 2334\")\n    def test_psets_queue(self):\n        \"\"\"\n        Test that placement sets work for nodes associated with queues\n        \"\"\"\n\n        a = {'node_group_key': 'shape', 'node_group_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {'queue_type': 'Execution', 'started': 'True', 'enabled': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='workq2')\n\n        # Take the first 14 vnodes.  This means there are two nodes per shape\n        vn = self.mom.shortname\n        nodes = [vn + '[' + str(x) + ']' for x in range(14)]\n        self.server.manager(MGR_CMD_SET, NODE, {'queue': 'workq2'}, id=nodes)\n\n        chunk = '2:ncpus=1'\n        a = {'Resource_List.select': chunk, 'queue': 'workq2',\n             'Resource_List.place': 'scatter:excl'}\n        for _ in range(7):\n            j = Job(TEST_USER, a)\n            j.set_sleep_time(1000)\n            jid = self.server.submit(j)\n            self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n            self.scheduler.log_match(jid + ';Chunk: ' + chunk, n=10000)\n\n        # Check to see if jobs ran in one placement set\n        jobs = self.server.status(JOB)\n        for job in jobs:\n            ev = self.server.status(JOB, 'exec_vnode', id=job['id'])\n            used_nodes = j.get_vnodes(ev[0]['exec_vnode'])\n\n            n = self.server.status(NODE, 'resources_available.shape')\n            s = [x['resources_available.shape']\n                 for x in n if 
x['id'] in used_nodes]\n            self.assertEqual(len(set(s)), 1,\n                             \"Job \" + job['id'] +\n                             \" ran in more than one placement set\")\n\n        s = self.server.select()\n        for jid in s:\n            self.server.delete(jid, wait=True)\n\n        # Check to see if jobs span correctly\n        chunk = '7:ncpus=1'\n        a = {'Resource_List.select': chunk, 'queue': 'workq2',\n             'Resource_List.place': 'scatter:excl'}\n        j = Job(TEST_USER, a)\n        j.set_sleep_time(1000)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.scheduler.log_match(jid + ';Chunk: ' + chunk, n=10000)\n        ev = self.server.status(JOB, 'exec_vnode', id=jid)\n        used_nodes = j.get_vnodes(ev[0]['exec_vnode'])\n\n        n = self.server.status(NODE, 'resources_available.shape')\n        s = [x['resources_available.shape']\n             for x in n if x['id'] in used_nodes]\n        self.assertGreater(len(set(s)), 1,\n                           \"Job did not span properly\")\n\n    @timeout(900)\n    @skip(\"issue 2334\")\n    def test_free(self):\n        \"\"\"\n        Test that free placement works with the bucket code path\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        chunk = '1430:ncpus=1:color=yellow'\n        a = {'Resource_List.select': chunk,\n             'Resource_List.place': 'excl'}\n        j1 = Job(TEST_USER, attrs=a)\n        jid1 = self.server.submit(j1)\n\n        j2 = Job(TEST_USER, attrs=a)\n        jid2 = self.server.submit(j2)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.scheduler.log_match(jid1 + ';Chunk: ' + chunk, n=10000)\n        self.scheduler.log_match(jid2 + ';Chunk: ' + chunk, n=10000)\n\n       
 s1 = self.server.status(JOB, 'exec_vnode', id=jid1)\n        s2 = self.server.status(JOB, 'exec_vnode', id=jid2)\n\n        n1 = j1.get_vnodes(s1[0]['exec_vnode'])\n        n2 = j2.get_vnodes(s2[0]['exec_vnode'])\n\n        msg = 'job did not run on correct number of nodes'\n        self.assertEqual(len(n1), 715, msg)\n        self.assertEqual(len(n2), 715, msg)\n\n        for node in n1:\n            self.assertTrue(node not in n2, 'Jobs share nodes: ' + node)\n\n    @timeout(900)\n    @skip(\"issue 2334\")\n    def test_queue_nodes(self):\n        \"\"\"\n        Test that buckets work with nodes associated with a queue\n        \"\"\"\n        v1 = self.mom.shortname + '[1431]'\n        v2 = self.mom.shortname + '[1435]'\n        a = {'queue_type': 'execution', 'started': 'True', 'enabled': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='q2')\n\n        self.server.manager(MGR_CMD_SET, NODE, {'queue': 'q2'}, id=v1)\n        self.server.manager(MGR_CMD_SET, NODE, {'queue': 'q2'}, id=v2)\n\n        chunk1 = '1428:ncpus=1:color=orange'\n        a = {'Resource_List.select': chunk1,\n             'Resource_List.place': 'scatter:excl'}\n        j1 = Job(TEST_USER, attrs=a)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.scheduler.log_match(jid1 + ';Chunk: ' + chunk1, n=10000)\n        job = self.server.status(JOB, 'exec_vnode', id=jid1)[0]\n        ev = j1.get_vnodes(job['exec_vnode'])\n        msg = 'Job is using queue\\'s nodes'\n        self.assertNotIn(v1, ev, msg)\n        self.assertNotIn(v2, ev, msg)\n\n        chunk2 = '2:ncpus=1'\n        a = {'Resource_List.select': chunk2,\n             'Resource_List.place': 'scatter:excl',\n             'queue': 'q2'}\n        j2 = Job(TEST_USER, attrs=a)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.scheduler.log_match(jid2 + ';Chunk: ' + chunk2, n=10000)\n        job = 
self.server.status(JOB, 'exec_vnode', id=jid2)[0]\n        ev = j2.get_vnodes(job['exec_vnode'])\n        msg = 'Job running on nodes not associated with queue'\n        self.assertIn(v1, ev, msg)\n        self.assertIn(v2, ev, msg)\n\n    @timeout(900)\n    @skip(\"issue 2334\")\n    def test_booleans(self):\n        \"\"\"\n        Test that booleans are correctly handled if not in the sched_config\n        resources line.  This means that an unset boolean is considered false\n        and that booleans that are True are considered even though they\n        aren't on the resources line.\n        \"\"\"\n\n        chunk1 = '2:ncpus=1'\n        a = {'Resource_List.select': chunk1,\n             'Resource_List.place': 'scatter:excl'}\n        j1 = Job(TEST_USER, attrs=a)\n        jid1 = self.server.submit(j1)\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.scheduler.log_match(jid1 + ';Chunk: ' + chunk1, n=10000)\n        jst = self.server.status(JOB, 'exec_vnode', id=jid1)[0]\n        ev = j1.get_vnodes(jst['exec_vnode'])\n        for n in ev:\n            self.server.expect(\n                NODE, {'resources_available.bool': 'True'}, id=n)\n\n        chunk2 = '2:ncpus=1:bool=False'\n        a['Resource_List.select'] = chunk2\n        j2 = Job(TEST_USER, attrs=a)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.scheduler.log_match(jid2 + ';Chunk: ' + chunk2, n=10000)\n\n        jst = self.server.status(JOB, 'exec_vnode', id=jid2)[0]\n        ev = j2.get_vnodes(jst['exec_vnode'])\n        for n in ev:\n            self.server.expect(\n                NODE, 'resources_available.bool', op=UNSET, id=n)\n\n    @timeout(900)\n    @skip(\"issue 2334\")\n    def test_last_pset_can_never_run(self):\n        \"\"\"\n        Test that the job does not retain the error value of last placement\n        set seen by the node bucketing code. 
To check this, make sure that the\n        last placement set check results in a 'can never run' case because\n        resources do not match and check that the job is not marked as\n        never run.\n        \"\"\"\n\n        self.server.manager(MGR_CMD_CREATE, RSC,\n                            {'type': 'long', 'flag': 'nh'}, id='foo')\n        self.server.manager(MGR_CMD_CREATE, RSC,\n                            {'type': 'string', 'flag': 'h'}, id='bar')\n        self.server.manager(MGR_CMD_SET, SERVER, {'node_group_key': 'bar'})\n        self.server.manager(MGR_CMD_SET, SERVER, {'node_group_enable': 'true'})\n        self.mom.delete_vnode_defs()\n        a = {'resources_available.ncpus': 80,\n             'resources_available.bar': 'large'}\n        self.mom.create_vnodes(attrib=a, num=8,\n                               sharednode=False)\n        self.scheduler.add_resource('foo')\n        a['resources_available.foo'] = 8\n        a['resources_available.ncpus'] = 8\n        a['resources_available.bar'] = 'small'\n        for val in range(0, 5):\n            vname = self.mom.shortname + \"[\" + str(val) + \"]\"\n            self.server.manager(MGR_CMD_SET, NODE, a, id=vname)\n        chunk1 = '4:ncpus=5:foo=5'\n        a = {'Resource_List.select': chunk1,\n             'Resource_List.place': 'scatter:excl'}\n        j1 = Job(TEST_USER, attrs=a)\n        jid1 = self.server.submit(j1)\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        j2 = Job(TEST_USER, attrs=a)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n        self.scheduler.log_match(jid2 + ';Job will never run',\n                                 existence=False, max_attempts=10)\n"
  },
  {
    "path": "test/tests/functional/pbs_node_jobs_restart.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestNodeJobsRestart(TestFunctional):\n    \"\"\"\n    Make sure that jobs remain on the node's jobs line after a server restart\n    \"\"\"\n\n    def test_node_jobs_restart(self):\n        \"\"\"\n        Make sure that jobs attribute remains set properly after the\n        server is restarted\n        \"\"\"\n        J = Job()\n        jid = self.server.submit(J)\n        self.server.expect(JOB, 'exec_vnode', op=SET, id=jid)\n\n        job_nodes = J.get_vnodes(J.exec_vnode)\n        svr_nodes = self.server.status(NODE, id=job_nodes[0])\n        msg = 'Job ' + jid + ' not in node ' + job_nodes[0] + '\\'s jobs line'\n        self.assertTrue(jid in svr_nodes[0]['jobs'], msg)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        self.server.restart()\n\n        self.server.expect(NODE, 'jobs', op=SET, id=job_nodes[0])\n        svr_nodes2 = self.server.status(NODE, id=job_nodes[0])\n        self.assertTrue(jid in svr_nodes2[0]['jobs'], msg)\n"
  },
  {
    "path": "test/tests/functional/pbs_node_jobs_restart_multinode.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\nfrom time import sleep\n\n\n@requirements(num_moms=2)\nclass TestMultiNodeJobsRestart(TestFunctional):\n    \"\"\"\n    Make sure that jobs remain active after node restart\n    \"\"\"\n\n    def test_restart_hosts_resume(self):\n\n        momA = self.moms.values()[0]\n        momB = self.moms.values()[1]\n\n        # Make sure moms are running with -p flag\n        momA.stop(sig='-INT')\n        momA.start(args=['-p'])\n        momB.stop(sig='-INT')\n        momB.start(args=['-p'])\n\n        pbsdsh_path = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                   \"bin\", \"pbsdsh\")\n        script = \"sleep 60 && %s echo 'Hello, World'\" % pbsdsh_path\n        j = Job(TEST_USER)\n        j.set_attributes({'Resource_List.select': '2',\n                          'Resource_List.place': 'scatter'})\n        j.create_script(script)\n        start_time = time.time()\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        momA.stop(sig='-INT')\n        momA.start(args=['-p'])\n        momB.stop(sig='-INT')\n        momB.start(args=['-p'])\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        sleep(60)\n\n        self.server.log_match(\"%s;Exit_status=0\" % jid, starttime=start_time)\n\n        # Restart moms without -p flag\n       
 momA.restart()\n        momB.restart()\n\n    def test_restart_hosts_resume_withoutp(self):\n\n        momA = self.moms.values()[0]\n        momB = self.moms.values()[1]\n\n        # Make sure moms are running without the -p flag\n        momA.restart(args=[])\n        momB.restart(args=[])\n\n        pbsdsh_path = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                   \"bin\", \"pbsdsh\")\n        script = \"sleep 60 && %s echo 'Hello, World'\" % pbsdsh_path\n        j = Job(TEST_USER)\n        j.set_attributes({'Resource_List.select': '2',\n                          'Resource_List.place': 'scatter'})\n        j.create_script(script)\n        start_time = time.time()\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        momA.stop(sig='-INT')\n        momA.start(args=['-p'])\n        momB.stop(sig='-INT')\n        momB.start(args=['-p'])\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        sleep(60)\n\n        self.server.log_match(\"%s;Exit_status=0\" % jid, starttime=start_time)\n\n        # Restart moms without -p flag\n        momA.restart()\n        momB.restart()\n\n    def test_premature_kill_restart(self):\n\n        momA = self.moms.values()[0]\n        momB = self.moms.values()[1]\n\n        # Make sure moms are running with -p flag\n        momA.restart(args=['-p'])\n        momB.restart(args=['-p'])\n\n        pbsdsh_path = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                   \"bin\", \"pbsdsh\")\n        script = \"sleep 60 && %s echo 'Hello, World'\" % pbsdsh_path\n        j = Job(TEST_USER)\n        j.set_attributes({'Resource_List.select': '2',\n                          'Resource_List.place': 'scatter'})\n        j.create_script(script)\n        start_time = time.time()\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        momA.signal(\"-KILL\")\n        
momB.signal(\"-KILL\")\n        sleep(5)\n        momA.start(args=['-p'])\n        momB.start(args=['-p'])\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        sleep(55)\n\n        self.server.log_match(\"%s;Exit_status=0\" % jid, starttime=start_time)\n\n        # Restart moms without -p flag\n        momA.restart()\n        momB.restart()\n"
  },
  {
    "path": "test/tests/functional/pbs_node_rampdown.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nfrom tests.functional import *\n\n\ndef convert_time(fmt, tm, fixdate=False):\n    \"\"\"\n    Convert the given time stamp <tm> into the given format <fmt>.\n    If fixdate is True, add a <space> before the date if the date is <= 9\n    (this is to match ctime-style output, as qstat uses it).\n    \"\"\"\n    rv = time.strftime(fmt, time.localtime(float(tm)))\n    if ((sys.platform not in ('cygwin', 'win32')) and (fixdate)):\n        rv = rv.split()\n        date = int(rv[2])\n        if date <= 9:\n            date = ' ' + str(date)\n        rv[2] = str(date)\n        rv = ' '.join(rv)\n    return rv\n\n\n@requirements(num_moms=3)\nclass TestPbsNodeRampDown(TestFunctional):\n\n    \"\"\"\n    This tests the Node Rampdown Feature,\n    where, while a job is running, nodes/resources\n    assigned by a non-mother superior can be released.\n\n    Custom parameters:\n    moms: colon-separated hostnames of three MoMs\n    \"\"\"\n\n    def transform_select(self, select):\n        \"\"\"\n        Takes a select substring:\n            \"<res1>=<val1>:<res2>=<val2>...:<resN>=<valN>\"\n        and transforms it so that if any of the resources\n        (res1, res2,...,resN) matches 'mem', and\n        the corresponding value has a suffix of 'gb',\n        then it converts it to a 'kb' value. 
Also,\n        this will attach a \"1:\" to the returned select\n        substring.\n        Ex:\n             % str = \"ncpus=7:mem=2gb:ompthreads=3\"\n             % transform_select(str)\n             1:ompthreads=3:mem=2097152kb:ncpus=7\n        \"\"\"\n        sel_list = select.split(':')\n        mystr = \"1:\"\n        for index in range(len(sel_list) - 1, -1, -1):\n            if (index != len(sel_list) - 1):\n                mystr += \":\"\n            nums = [s for s in sel_list[index] if s.isdigit()]\n            key = sel_list[index].split('=')[0]\n            if key == \"mem\":\n                mystr += sel_list[index].\\\n                    replace(nums[0] + \"gb\",\n                            str(int(nums[0]) * 1024 * 1024)) + \"kb\"\n            else:\n                mystr += sel_list[index]\n        return mystr\n\n    def pbs_nodefile_match_exec_host(self, jid, exec_host,\n                                     schedselect=None):\n        \"\"\"\n        Looks into the PBS_NODEFILE on the first host listed in 'exec_host'\n        and returns True if all host entries in 'exec_host' match the entries\n        in the file. 
Otherwise, return False.\n\n        Look for 'mpiprocs' values in 'schedselect' (if not None), and\n        verify that the corresponding node hosts appear in\n        PBS_NODEFILE 'mpiprocs' number of times.\n        \"\"\"\n\n        pbs_nodefile = os.path.join(self.server.\n                                    pbs_conf['PBS_HOME'], 'aux', jid)\n\n        # look for mpiprocs settings\n        mpiprocs = []\n        if schedselect is not None:\n            select_list = schedselect.split('+')\n\n            for chunk in select_list:\n                chl = chunk.split(':')\n                for ch in chl:\n                    if ch.find('=') != -1:\n                        c = ch.split('=')\n                        if c[0] == \"mpiprocs\":\n                            mpiprocs.append(c[1])\n        ehost = exec_host.split('+')\n        first_host = ehost[0].split('/')[0]\n\n        cmd = ['cat', pbs_nodefile]\n        ret = self.server.du.run_cmd(first_host, cmd, sudo=False)\n        ehost2 = []\n        for h in ret['out']:\n            ehost2.append(h.split('.')[0])\n\n        ehost1 = []\n        j = 0\n        for eh in ehost:\n            h = eh.split('/')\n            if (len(mpiprocs) > 0):\n                for k in range(int(mpiprocs[j])):\n                    ehost1.append(h[0])\n            else:\n                ehost1.append(h[0])\n            j += 1\n\n        if ehost1 != ehost2:\n            return False\n        return True\n\n    def check_stageout_file_size(self):\n        \"\"\"\n        This function checks that at least 1gb of the test.img\n        file to be staged out is created within 10 seconds\n        \"\"\"\n        fpath = os.path.join(TEST_USER.home, \"test.img\")\n        cmd = ['stat', '-c', '%s', fpath]\n        fsize = 0\n        for i in range(11):\n            rc = self.du.run_cmd(hosts=self.hostA, cmd=cmd,\n                                 runas=TEST_USER)\n            if rc['rc'] == 0 
and len(rc['out']) == 1:\n                try:\n                    fsize = int(rc['out'][0])\n                except Exception:\n                    pass\n            # 1073741824 == 1Gb\n            if fsize > 1073741824:\n                break\n            else:\n                time.sleep(1)\n        if fsize <= 1073741824:\n            self.fail(\"Failed to create 1gb file at %s\" % fpath)\n\n    def match_accounting_log(self, atype, jid, exec_host, exec_vnode,\n                             mem, ncpus, nodect, place, select):\n        \"\"\"\n        This checks if there's an accounting log record 'atype' for\n        job 'jid' containing the values given (i.e.\n        Resource_List.exec_host, Resource_List.exec_vnode, etc...)\n        This throws an exception upon encountering a non-matching\n        accounting_logs entry.\n        Some example values of 'atype' are: 'u' (update record due to\n        release node request), 'c' (record containing the next\n        set of resources to be used by a phased job as a result of\n        release node request), 'e' (last update record for a phased job\n        due to a release node request), 'E' (end of job record).\n        \"\"\"\n        self.server.accounting_match(\n            msg=\".*%s;%s.*exec_host=%s.*\" % (atype, jid, exec_host),\n            regexp=True, n=\"ALL\", starttime=self.stime)\n\n        self.server.accounting_match(\n            msg=\".*%s;%s.*exec_vnode=%s.*\" % (atype, jid, exec_vnode),\n            regexp=True, n=\"ALL\", starttime=self.stime)\n\n        self.server.accounting_match(\n            msg=r\".*%s;%s.*Resource_List\\.mem=%s.*\" % (atype, jid, mem),\n            regexp=True, n=\"ALL\", starttime=self.stime)\n\n        self.server.accounting_match(\n            msg=r\".*%s;%s.*Resource_List\\.ncpus=%d.*\" % (atype, jid, ncpus),\n            regexp=True, n=\"ALL\", starttime=self.stime)\n\n        self.server.accounting_match(\n            
msg=r\".*%s;%s.*Resource_List\\.nodect=%d.*\" % (atype, jid, nodect),\n            regexp=True, n=\"ALL\", starttime=self.stime)\n\n        self.server.accounting_match(\n            msg=r\".*%s;%s.*Resource_List\\.place=%s.*\" % (atype, jid, place),\n            regexp=True, n=\"ALL\", starttime=self.stime)\n\n        self.server.accounting_match(\n            msg=r\".*%s;%s.*Resource_List\\.select=%s.*\" % (atype, jid, select),\n            regexp=True, n=\"ALL\", starttime=self.stime)\n\n        if atype != 'c':\n            self.server.accounting_match(\n                msg=r\".*%s;%s.*resources_used\\..*\" % (atype, jid),\n                regexp=True, n=\"ALL\", starttime=self.stime)\n\n    def match_vnode_status(self, vnode_list, state, jobs=None, ncpus=None,\n                           mem=None):\n        \"\"\"\n        Given a list of vnode names in 'vnode_list', check to make\n        sure each vnode's state, jobs string, resources_assigned.mem,\n        and resources_assigned.ncpus match the passed arguments.\n        This will throw an exception if a match is not found.\n        \"\"\"\n        for vn in vnode_list:\n            dict_match = {'state': state}\n            if jobs is not None:\n                dict_match['jobs'] = jobs\n            if ncpus is not None:\n                dict_match['resources_assigned.ncpus'] = ncpus\n            if mem is not None:\n                dict_match['resources_assigned.mem'] = mem\n\n            self.server.expect(VNODE, dict_match, id=vn)\n\n    def create_and_submit_job(self, job_type, attribs={}):\n        \"\"\"\n        create the job object and submit it to the server\n        based on 'job_type' and attributes list 'attribs'.\n        \"\"\"\n        retjob = Job(TEST_USER, attrs=attribs)\n\n        if job_type == 'job1':\n            retjob.create_script(self.script['job1'])\n        elif job_type == 'job1_1':\n            retjob.create_script(self.script['job1_1'])\n        elif job_type == 'job1_2':\n  
          retjob.create_script(self.script['job1_2'])\n        elif job_type == 'job1_3':\n            retjob.create_script(self.script['job1_3'])\n        elif job_type == 'job1_5':\n            retjob.create_script(self.script['job1_5'])\n        elif job_type == 'job1_6':\n            retjob.create_script(self.script['job1_6'])\n        elif job_type == 'job1_extra_res':\n            retjob.create_script(self.script['job1_extra_res'])\n        elif job_type == 'job2':\n            retjob.create_script(self.script['job2'])\n        elif job_type == 'job3':\n            retjob.create_script(self.script['job3'])\n        elif job_type == 'job5':\n            retjob.create_script(self.script['job5'])\n        elif job_type == 'job11':\n            retjob.create_script(self.script['job11'])\n        elif job_type == 'job11x':\n            retjob.create_script(self.script['job11x'])\n        elif job_type == 'job12':\n            retjob.create_script(self.script['job12'])\n        elif job_type == 'job13':\n            retjob.create_script(self.script['job13'])\n        elif job_type == 'jobA':\n            retjob.create_script(self.script['jobA'])\n\n        return self.server.submit(retjob)\n\n    def setUp(self):\n\n        if len(self.moms) != 3:\n            self.skip_test(reason=\"need 3 mom hosts: -p moms=<m1>:<m2>:<m3>\")\n\n        TestFunctional.setUp(self)\n        Job.dflt_attributes[ATTR_k] = 'oe'\n\n        self.server.cleanup_jobs()\n\n        self.momA = self.moms.values()[0]\n        self.momB = self.moms.values()[1]\n        self.momC = self.moms.values()[2]\n\n        # Now start setting up and creating the vnodes\n        self.server.manager(MGR_CMD_DELETE, NODE, None, \"\")\n\n        # set node momA\n        self.hostA = self.momA.shortname\n        self.momA.delete_vnode_defs()\n        vnode_prefix = self.hostA\n        a = {'resources_available.mem': '1gb',\n             'resources_available.ncpus': '1'}\n        vnodedef = 
self.momA.create_vnode_def(vnode_prefix, a, 4)\n        self.assertNotEqual(vnodedef, None)\n        self.momA.insert_vnode_def(vnodedef, 'vnode.def')\n        self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostA)\n\n        # set node momB\n        self.hostB = self.momB.shortname\n        self.momB.delete_vnode_defs()\n        vnode_prefix = self.hostB\n        a = {'resources_available.mem': '1gb',\n             'resources_available.ncpus': '1'}\n        vnodedef = self.momB.create_vnode_def(vnode_prefix, a, 5,\n                                              usenatvnode=True)\n        self.assertNotEqual(vnodedef, None)\n        self.momB.insert_vnode_def(vnodedef, 'vnode.def')\n        self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostB)\n\n        # set node momC\n        # This one has no vnode definition.\n\n        self.hostC = self.momC.shortname\n        self.momC.delete_vnode_defs()\n        self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostC)\n        a = {'resources_available.ncpus': 2,\n             'resources_available.mem': '2gb'}\n        # set natural vnode of hostC\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.hostC)\n\n        a = {'state': 'free', 'resources_available.ncpus': (GE, 1)}\n        self.server.expect(VNODE, {'state=free': 11}, count=True,\n                           interval=2)\n\n        # Various node names\n        self.n0 = self.hostA\n        self.n1 = '%s[0]' % (self.hostA,)\n        self.n2 = '%s[1]' % (self.hostA,)\n        self.n3 = '%s[2]' % (self.hostA,)\n        self.n4 = self.hostB\n        self.n5 = '%s[0]' % (self.hostB,)\n        self.n6 = '%s[1]' % (self.hostB,)\n        self.n7 = self.hostC\n        self.n8 = '%s[3]' % (self.hostA,)\n        self.n9 = '%s[2]' % (self.hostB,)\n        self.n10 = '%s[3]' % (self.hostB,)\n\n        SLEEP_CMD = self.mom.sleep_cmd\n\n        self.pbs_release_nodes_cmd = os.path.join(\n            self.server.pbs_conf['PBS_EXEC'], 'bin', 'pbs_release_nodes')\n\n    
    FIB40 = os.path.join(self.server.pbs_conf['PBS_EXEC'], 'bin', '') + \\\n            'pbs_python -c \"exec(\\\\\\\"def fib(i):\\\\n if i < 2:\\\\n  \\\nreturn i\\\\n return fib(i-1) + fib(i-2)\\\\n\\\\nprint(fib(40))\\\\\\\")\"'\n\n        FIB45 = os.path.join(self.server.pbs_conf['PBS_EXEC'], 'bin', '') + \\\n            'pbs_python -c \"exec(\\\\\\\"def fib(i):\\\\n if i < 2:\\\\n  \\\nreturn i\\\\n return fib(i-1) + fib(i-2)\\\\n\\\\nprint(fib(45))\\\\\\\")\"'\n\n        FIB50 = os.path.join(self.server.pbs_conf['PBS_EXEC'], 'bin', '') + \\\n            'pbs_python -c \"exec(\\\\\\\"def fib(i):\\\\n if i < 2:\\\\n  \\\nreturn i\\\\n return fib(i-1) + fib(i-2)\\\\n\\\\nprint(fib(50))\\\\\\\")\"'\n\n        FIB400 = os.path.join(self.server.pbs_conf['PBS_EXEC'], 'bin', '') + \\\n            'pbs_python -c \"exec(\\\\\\\"def fib(i):\\\\n if i < 2:\\\\n  \\\nreturn i\\\\n return fib(i-1) + fib(i-2)\\\\n\\\\nprint(fib(400))\\\\\\\")\"'\n\n        # job submission arguments\n        self.script = {}\n        self.job1_select = \"ncpus=3:mem=2gb+ncpus=3:mem=2gb+ncpus=2:mem=2gb\"\n        self.job1_place = \"scatter\"\n\n        # expected values upon successful job submission\n        self.job1_schedselect = \"1:ncpus=3:mem=2gb+1:ncpus=3:mem=2gb+\" + \\\n            \"1:ncpus=2:mem=2gb\"\n        self.job1_exec_host = \"%s/0*0+%s/0*0+%s/0*2\" % (\n            self.n0, self.n4, self.n7)\n        self.job1_exec_vnode = \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.n1,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n2,) + \\\n            \"%s:ncpus=1)+\" % (self.n3) + \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.n4,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n5,) + \\\n            \"%s:ncpus=1)+\" % (self.n6,) + \\\n            \"(%s:ncpus=2:mem=2097152kb)\" % (self.n7,)\n\n        self.job1_sel_esc = self.job1_select.replace(\"+\", r\"\\+\")\n        self.job1_exec_host_esc = self.job1_exec_host.replace(\n            \"*\", 
r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n            \"+\", r\"\\+\")\n        self.job1_exec_vnode_esc = self.job1_exec_vnode.replace(\n            \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\n            \")\", r\"\\)\").replace(\"+\", r\"\\+\")\n        self.job1_newsel = self.transform_select(self.job1_select.split(\n            '+')[0])\n        self.job1_new_exec_host = self.job1_exec_host.split('+')[0]\n        self.job1_new_exec_vnode = self.job1_exec_vnode.split(')')[0] + ')'\n        self.job1_new_exec_vnode_esc = \\\n            self.job1_new_exec_vnode.replace(\"[\", r\"\\[\").replace(\n                \"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\n                \"+\", r\"\\+\")\n\n        self.script['job1'] = \\\n            \"#PBS -S /bin/bash\\n\" \\\n            \"#PBS -l select=\" + self.job1_select + \"\\n\" + \\\n            \"#PBS -l place=\" + self.job1_place + \"\\n\" + \\\n            \"#PBS -W stageout=test.img@%s:test.img\\n\" % (self.n4,) + \\\n            \"#PBS -W release_nodes_on_stageout=true\\n\" + \\\n            \"dd if=/dev/zero of=test.img count=1024 bs=2097152\\n\" + \\\n            \"pbsdsh -n 1 -- %s\\n\" % (FIB40,) + \\\n            \"pbsdsh -n 2 -- %s\\n\" % (FIB40,) + \\\n            \"%s\\n\" % (FIB400,)\n\n        self.script['job1_1'] = \\\n            \"#PBS -S /bin/bash\\n\" \\\n            \"#PBS -l select=\" + self.job1_select + \"\\n\" + \\\n            \"#PBS -l place=\" + self.job1_place + \"\\n\" + \\\n            \"#PBS -W stageout=test.img@%s:test.img\\n\" % (self.n4,) + \\\n            \"#PBS -W release_nodes_on_stageout=false\\n\" + \\\n            \"dd if=/dev/zero of=test.img count=1024 bs=2097152\\n\" + \\\n            \"pbsdsh -n 1 -- %s\\n\" % (FIB40,) + \\\n            \"pbsdsh -n 2 -- %s\\n\" % (FIB40,) + \\\n            \"%s\\n\" % (FIB400,)\n\n        self.script['job1_2'] = \\\n            \"#PBS -S 
/bin/bash\\n\" \\\n            \"#PBS -l select=\" + self.job1_select + \"\\n\" + \\\n            \"#PBS -l place=\" + self.job1_place + \"\\n\" + \\\n            \"#PBS -W stageout=test.img@%s:test.img\\n\" % (self.n4,) + \\\n            \"dd if=/dev/zero of=test.img count=1024 bs=2097152\\n\" + \\\n            \"pbsdsh -n 1 -- %s\\n\" % (FIB40,) + \\\n            \"pbsdsh -n 2 -- %s\\n\" % (FIB40,) + \\\n            \"%s\\n\" % (FIB400,)\n\n        self.script['job1_3'] = \\\n            \"#PBS -S /bin/bash\\n\" \\\n            \"#PBS -l select=\" + self.job1_select + \"\\n\" + \\\n            \"#PBS -l place=\" + self.job1_place + \"\\n\" + \\\n            SLEEP_CMD + \" 30\\n\" + \\\n            \"pbs_release_nodes -a\\n\" + \\\n            \"%s\\n\" % (FIB50,)\n\n        self.script['job1_5'] = \\\n            \"#PBS -S /bin/bash\\n\" \\\n            \"#PBS -l select=\" + self.job1_select + \"\\n\" + \\\n            \"#PBS -l place=\" + self.job1_place + \"\\n\" + \\\n            \"pbsdsh -n 1 -- %s &\\n\" % (FIB45,) + \\\n            \"pbsdsh -n 2 -- %s &\\n\" % (FIB45,) + \\\n            \"%s\\n\" % (FIB400,)\n\n        self.script['jobA'] = \\\n            \"#PBS -S /bin/bash\\n\" \\\n            \"#PBS -l select=\" + self.job1_select + \"\\n\" + \\\n            \"#PBS -l place=\" + self.job1_place + \"\\n\" + \\\n            \"#PBS -J 1-5\\n\"\\\n            \"pbsdsh -n 1 -- %s &\\n\" % (FIB45,) + \\\n            \"pbsdsh -n 2 -- %s &\\n\" % (FIB45,) + \\\n            \"%s\\n\" % (FIB45,)\n\n        self.script['job1_6'] = \\\n            \"#PBS -S /bin/bash\\n\" \\\n            \"#PBS -l select=\" + self.job1_select + \"\\n\" + \\\n            \"#PBS -l place=\" + self.job1_place + \"\\n\" + \\\n            SLEEP_CMD + \" 30\\n\" + \\\n            self.pbs_release_nodes_cmd + \" \" + self.n4 + \"\\n\" + \\\n            \"%s\\n\" % (FIB50,)\n\n        self.job1_extra_res_select = \\\n            \"ncpus=3:mem=2gb:mpiprocs=3:ompthreads=2+\" + \\\n           
 \"ncpus=3:mem=2gb:mpiprocs=3:ompthreads=3+\" + \\\n            \"ncpus=2:mem=2gb:mpiprocs=2:ompthreads=2\"\n        self.job1_extra_res_place = \"scatter\"\n        self.job1_extra_res_schedselect = \\\n            \"1:ncpus=3:mem=2gb:mpiprocs=3:ompthreads=2+\" + \\\n            \"1:ncpus=3:mem=2gb:mpiprocs=3:ompthreads=3+\" + \\\n            \"1:ncpus=2:mem=2gb:mpiprocs=2:ompthreads=2\"\n        self.job1_extra_res_exec_host = \"%s/0*0+%s/0*0+%s/0*2\" % (\n            self.n0, self.n4, self.n7)\n        self.job1_extra_res_exec_vnode = \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.n1,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n2,) + \\\n            \"%s:ncpus=1)+\" % (self.n3,) + \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.n4,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n5,) + \\\n            \"%s:ncpus=1)+\" % (self.n6,) + \\\n            \"(%s:ncpus=2:mem=2097152kb)\" % (self.n7,)\n        self.script['job1_extra_res'] = \\\n            \"#PBS -S /bin/bash\\n\" \\\n            \"#PBS -l select=\" + self.job1_extra_res_select + \"\\n\" + \\\n            \"#PBS -l place=\" + self.job1_extra_res_place + \"\\n\" + \\\n            \"pbsdsh -n 1 -- %s &\\n\" % (FIB40,) + \\\n            \"pbsdsh -n 2 -- %s &\\n\" % (FIB40,) + \\\n            \"%s\\n\" % (FIB50,)\n\n        self.job2_select = \"ncpus=1:mem=1gb+ncpus=4:mem=4gb+ncpus=2:mem=2gb\"\n        self.job2_place = \"scatter\"\n        self.job2_schedselect = \"1:ncpus=1:mem=1gb+1:ncpus=4:mem=4gb+\" + \\\n            \"1:ncpus=2:mem=2gb\"\n        self.job2_exec_host = \"%s/1+%s/1*0+%s/1*2\" % (\n            self.n0, self.n4, self.n7)\n        self.job2_exec_vnode = \\\n            \"(%s:ncpus=1:mem=1048576kb)+\" % (self.n8,) + \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.n4,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n5,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n9,) + \\\n            \"%s:mem=1048576kb:ncpus=1)+\" % 
(self.n10,) + \\\n            \"(%s:ncpus=2:mem=2097152kb)\" % (self.n7,)\n\n        self.job2_exec_vnode_var1 = \\\n            \"(%s:ncpus=1:mem=1048576kb)+\" % (self.n8,) + \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.n4,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n5,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n6,) + \\\n            \"%s:mem=1048576kb:ncpus=1)+\" % (self.n9,) + \\\n            \"(%s:ncpus=2:mem=2097152kb)\" % (self.n7,)\n\n        self.script['job2'] = \\\n            \"#PBS -l select=\" + self.job2_select + \"\\n\" + \\\n            \"#PBS -l place=\" + self.job2_place + \"\\n\" + \\\n            SLEEP_CMD + \" 60\\n\"\n\n        self.script['job3'] = \\\n            \"#PBS -l select=vnode=\" + self.n4 + \"+vnode=\" + self.n0 + \\\n            \":mem=4mb\\n\" + SLEEP_CMD + \" 30\\n\"\n\n        self.script['job5'] = \\\n            \"#PBS -l select=vnode=\" + self.n0 + \":mem=4mb\\n\" + \\\n            SLEEP_CMD + \" 300\\n\"\n\n        self.job11x_select = \"ncpus=3:mem=2gb+ncpus=3:mem=2gb+ncpus=1:mem=1gb\"\n        self.job11x_place = \"scatter:excl\"\n        self.job11x_schedselect = \"1:ncpus=3:mem=2gb+\" + \\\n            \"1:ncpus=3:mem=2gb+1:ncpus=1:mem=1gb\"\n        self.job11x_exec_host = \"%s/0*0+%s/0*0+%s/0\" % (\n            self.n0, self.n4, self.n7)\n        self.job11x_exec_vnode = \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.n1,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n2,) + \\\n            \"%s:ncpus=1)+\" % (self.n3,) + \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.n4,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n5,) + \\\n            \"%s:ncpus=1)+\" % (self.n6,) + \\\n            \"(%s:ncpus=1:mem=1048576kb)\" % (self.n7,)\n        self.job11x_exec_vnode_match = \\\n            r\"\\(.+:mem=1048576kb:ncpus=1\\+\" + \\\n            r\".+:mem=1048576kb:ncpus=1\\+\" + \\\n            r\".+:ncpus=1\\)\\+\" + \\\n            
r\"\\(.+:mem=1048576kb:ncpus=1\\+\" + \\\n            r\".+:mem=1048576kb:ncpus=1\\+\" + \\\n            r\".+:ncpus=1\\)\\+\" + \\\n            r\"\\(.+:ncpus=1:mem=1048576kb\\)\"\n        self.script['job11x'] = \\\n            \"#PBS -S /bin/bash\\n\" \\\n            \"#PBS -l select=\" + self.job11x_select + \"\\n\" + \\\n            \"#PBS -l place=\" + self.job11x_place + \"\\n\" + \\\n            \"pbsdsh -n 1 -- %s\\n\" % (FIB40,) + \\\n            \"pbsdsh -n 2 -- %s\\n\" % (FIB40,) + \\\n            \"%s\\n\" % (FIB50,)\n\n        self.job11_select = \"ncpus=3:mem=2gb+ncpus=3:mem=2gb+ncpus=1:mem=1gb\"\n        self.job11_place = \"scatter\"\n        self.job11_schedselect = \"1:ncpus=3:mem=2gb+1:ncpus=3:mem=2gb+\" + \\\n            \"1:ncpus=1:mem=1gb\"\n        self.job11_exec_host = \"%s/0*0+%s/0*0+%s/0\" % (\n            self.n0, self.n4, self.n7)\n        self.job11_exec_vnode = \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.n1,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n2,) + \\\n            \"%s:ncpus=1)+\" % (self.n3,) + \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.n4,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n5,) + \\\n            \"%s:ncpus=1)+\" % (self.n6,) + \\\n            \"(%s:ncpus=1:mem=1048576kb)\" % (self.n7,)\n        self.script['job11'] = \\\n            \"#PBS -S /bin/bash\\n\" \\\n            \"#PBS -l select=\" + self.job11_select + \"\\n\" + \\\n            \"#PBS -l place=\" + self.job11_place + \"\\n\" + \\\n            \"pbsdsh -n 1 -- %s\\n\" % (FIB40,) + \\\n            \"pbsdsh -n 2 -- %s\\n\" % (FIB40,) + \\\n            \"%s\\n\" % (FIB50,)\n\n        self.job12_select = \"vnode=%s:ncpus=1:mem=1gb\" % (self.n7,)\n        self.job12_schedselect = \"1:vnode=%s:ncpus=1:mem=1gb\" % (self.n7,)\n        self.job12_place = \"free\"\n        self.job12_exec_host = \"%s/1\" % (self.n7,)\n        self.job12_exec_vnode = \"(%s:ncpus=1:mem=1048576kb)\" % (self.n7,)\n        
self.script['job12'] = \\\n            \"#PBS -l select=\" + self.job12_select + \"\\n\" + \\\n            \"#PBS -l place=\" + self.job12_place + \"\\n\" + \\\n            SLEEP_CMD + \" 60\\n\"\n\n        self.job13_select = \"3:ncpus=1\"\n        self.script['job13'] = \\\n            \"#PBS -S /bin/bash\\n\" \\\n            \"#PBS -l select=\" + self.job13_select + \"\\n\" + \\\n            \"#PBS -l place=\" + self.job1_place + \"\\n\" + \\\n            \"pbsdsh -n 1 -- %s\\n\" % (FIB400,) + \\\n            \"pbsdsh -n 2 -- %s\\n\" % (FIB400,) + \\\n            \"pbsdsh -n 3 -- %s\\n\" % (FIB400,)\n\n        self.stime = time.time()\n\n    def tearDown(self):\n        self.momA.signal(\"-CONT\")\n        self.momB.signal(\"-CONT\")\n        self.momC.signal(\"-CONT\")\n        for host in [self.hostA, self.hostB, self.hostC]:\n            test_img = os.path.join(\"/home\", \"pbsuser\", \"test.img\")\n            self.du.rm(hostname=host, path=test_img, force=True,\n                       runas=TEST_USER)\n        TestFunctional.tearDown(self)\n\n    def release_nodes_rerun(self, option=\"rerun\"):\n        \"\"\"\n        Test:\n            Test the behavior of a job with released nodes when it\n            gets rerun. Specifying an option \"kill_mom_and_restart\" will\n            kill primary mom and restart, which would cause the job\n            to requeue/rerun. 
Otherwise, a job qrerun will be issued\n            directly.\n\n            Given a job submitted with a select spec of\n            2 super-chunks of ncpus=3 and mem=2gb each,\n            and 1 chunk of ncpus=2 and mem=2gb, along with\n            place spec of \"scatter\", resulting in:\n\n             exec_vnode=\n                  (<n1>+<n2>+<n3>)+(<n4>+<n5>+<n6>)+(<n7>)\n\n            First call:\n              pbs_release_nodes -j <job-id> <n5> <n6> <n7>\n\n            Then call:\n              if option is \"kill_mom_and_restart\":\n                  kill -KILL pbs_mom\n                  start pbs_mom\n              otherwise,\n                  qrerun <job-id>\n            Causes the job to rerun with the originally requested\n            resources.\n        \"\"\"\n        jid = self.create_and_submit_job('job1_5')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        
self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        # Run pbs_release_nodes\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n5,\n               self.n6, self.n7]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     sudo=True)\n        self.assertEqual(ret['rc'], 0)\n\n        # only mom hostC released the job since the sole vnode\n        # <n7> has been released\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostB), n=10, regexp=True,\n            existence=False, max_attempts=5, interval=1)\n\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostC), n=10, regexp=True)\n\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            existence=False, max_attempts=5, interval=1)\n\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20)\n\n        # Verify remaining job resources.\n\n        sel_esc = self.job1_select.replace(\"+\", r\"\\+\")\n        exec_host_esc = self.job1_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        exec_vnode_esc = self.job1_exec_vnode.replace(\"[\", r\"\\[\").replace(\n            \"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\n                    \"+\", r\"\\+\")\n        newsel = \"1:mem=2097152kb:ncpus=3+1:mem=1048576kb:ncpus=1\"\n        newsel_esc = newsel.replace(\"+\", r\"\\+\")\n        new_exec_host = self.job1_exec_host.replace(\n            \"+%s/0*2\" % (self.n7,), \"\")\n        new_exec_host_esc = new_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        new_exec_vnode = self.job1_exec_vnode.replace(\n            \"+%s:mem=1048576kb:ncpus=1\" % (self.n5,), \"\")\n     
   new_exec_vnode = new_exec_vnode.replace(\n            \"+%s:ncpus=1\" % (self.n6,), \"\")\n        new_exec_vnode = new_exec_vnode.replace(\n            \"+(%s:ncpus=2:mem=2097152kb)\" % (self.n7,), \"\")\n        new_exec_vnode_esc = new_exec_vnode.replace(\"[\", r\"\\[\").replace(\n            \"]\", r\"\\]\").replace(\n            \"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\"+\", r\"\\+\")\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '3gb',\n                                 'Resource_List.ncpus': 4,\n                                 'Resource_List.select': newsel,\n                                 'Resource_List.place': self.job1_place,\n                                 'Resource_List.nodect': 2,\n                                 'schedselect': newsel,\n                                 'exec_host': new_exec_host,\n                                 'exec_vnode': new_exec_vnode}, id=jid)\n\n        # Check account update ('u') record\n        self.match_accounting_log('u', jid, exec_host_esc,\n                                  exec_vnode_esc, \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  sel_esc)\n\n        # Check to make sure 'c' (next) record got generated\n        self.match_accounting_log('c', jid, new_exec_host_esc,\n                                  new_exec_vnode_esc, \"3145728kb\",\n                                  4, 2, self.job1_place, newsel_esc)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n5], 'job-busy', jobs_assn1,\n                                1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        
self.match_vnode_status([self.n0, self.n7, self.n8, self.n9, self.n10],\n                                'free')\n\n        # Now rerun the job\n\n        if option == \"kill_mom_and_restart\":\n            self.momA.signal(\"-KILL\")\n            self.momA.start()\n        else:\n            self.server.rerunjob(jid)\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n    def test_release_nodes_on_stageout_true(self):\n        \"\"\"\n        Test:\n              qsub -W release_nodes_on_stageout=true job.script\n              where job.script specifies a select spec of\n              2 super-chunks of ncpus=3 and mem=2gb each,\n              and 1 chunk of ncpus=2 and mem=2gb, along with\n              place spec of \"scatter\".\n\n              With release_nodes_on_stageout=true option, when\n              job is deleted and runs a lengthy stageout process,\n              only the primary execution host's\n           
   vnodes are left assigned to the job.\n        \"\"\"\n        # job1's script contains the\n        # release_nodes_on_stageout=true directive\n        jid = self.create_and_submit_job('job1')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'release_nodes_on_stageout': 'True',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1_exec_host))\n\n        # Deleting the job will trigger the stageout process\n        # at which time sister nodes are automatically released\n        # due to release_nodes_on_stageout=true being set\n        self.check_stageout_file_size()\n        self.server.delete(jid)\n\n        # Verify remaining job resources.\n        self.server.expect(JOB, {'job_state': 'E',\n                                 'Resource_List.mem': '2gb',\n    
                             'Resource_List.ncpus': 3,\n                                 'Resource_List.select': self.job1_newsel,\n                                 'Resource_List.place': self.job1_place,\n                                 'Resource_List.nodect': 1,\n                                 'schedselect': self.job1_newsel,\n                                 'exec_host': self.job1_new_exec_host,\n                                 'exec_vnode': self.job1_new_exec_vnode},\n                           id=jid)\n        # Check various vnode status\n        self.match_vnode_status([self.n1, self.n2],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3], 'job-busy', jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.n0, self.n4, self.n5, self.n6,\n                                 self.n7, self.n8, self.n9, self.n10], 'free')\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1_new_exec_host))\n\n        # Verify mom_logs\n        self.momA.log_match(\n            \"Job;%s;%s.+cput=.+ mem=.+\" % (jid, self.n4), n=10,\n            interval=2, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;%s.+cput=.+ mem=.+\" % (jid, self.n7), n=10,\n            interval=2, regexp=True)\n\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            interval=2)\n\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            interval=2)\n\n        # Check account update ('u') record\n        self.match_accounting_log('u', jid, self.job1_exec_host_esc,\n                                  self.job1_exec_vnode_esc, \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  self.job1_sel_esc)\n\n        # Check to make sure 'c' (next) record got generated\n        self.match_accounting_log('c', jid, self.job1_new_exec_host,\n       
                           self.job1_new_exec_vnode_esc, \"2097152kb\",\n                                  3, 1, self.job1_place, self.job1_newsel)\n\n    def test_release_nodes_on_stageout_false(self):\n        \"\"\"\n        Test:\n              qsub -W release_nodes_on_stageout=False job.script\n              where job.script specifies a select spec of\n              2 super-chunks of ncpus=3 and mem=2gb each,\n              and 1 chunk of ncpus=2 and mem=2gb, along with\n              place spec of \"scatter\".\n\n              With the release_nodes_on_stageout=false option, when the job\n              is deleted and runs a lengthy stageout process, nothing\n              changes in the job's vnode assignment.\n        \"\"\"\n        # job1_1's script contains the\n        # release_nodes_on_stageout=false directive\n        jid = self.create_and_submit_job('job1_1')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'release_nodes_on_stageout': 'False',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, 
jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        # Deleting a job should not trigger automatic\n        # release of nodes due to release_nodes_on_stageout=False\n        self.check_stageout_file_size()\n        self.server.delete(jid)\n\n        # Verify no change in remaining job resources.\n        self.server.expect(JOB, {'job_state': 'E',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6], 'job-busy',\n                                jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        # Verify mom_logs\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            max_attempts=5, interval=1,\n                            existence=False)\n\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            max_attempts=5, interval=1,\n                            existence=False)\n\n        # Check for no existence of 
account update ('u') record\n        self.server.accounting_match(\n            msg='.*u;' + jid + \".*exec_host=%s.*\" % (self.job1_exec_host_esc,),\n            regexp=True, n=\"ALL\", existence=False, max_attempts=5, interval=1,\n            starttime=self.stime)\n\n        # Check for no existence of account next ('c') record\n        self.server.accounting_match(\n            msg='.*c;' + jid + \".*exec_host=%s.*\" % (self.job1_new_exec_host,),\n            regexp=True, n=\"ALL\", existence=False, max_attempts=5, interval=1,\n            starttime=self.stime)\n\n    def test_release_nodes_on_stageout_default(self):\n        \"\"\"\n        Test:\n              qsub: no -Wrelease_nodes_on_stageout\n              option given.\n\n              Job runs as normal.\n        \"\"\"\n        jid = self.create_and_submit_job('job1_2')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        
self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        self.check_stageout_file_size()\n        self.server.delete(jid)\n\n        # Verify no change in remaining job resources.\n        self.server.expect(JOB, {'job_state': 'E',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n        # Check various vnode status.\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6], 'job-busy',\n                                jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10],\n                                'free')\n\n        # Verify mom_logs\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            max_attempts=5, interval=1,\n                            existence=False)\n\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            max_attempts=5, interval=1,\n                            existence=False)\n\n        # Check for no existence of account update ('u') record\n        self.server.accounting_match(\n            msg='.*u;' + jid + \".*exec_host=%s.*\" % (self.job1_exec_host_esc,),\n            regexp=True, n=\"ALL\", existence=False, max_attempts=5, 
interval=1,\n            starttime=self.stime)\n\n        # Check for no existence of account next ('c') record\n        self.server.accounting_match(\n            msg='.*c;' + jid + \".*exec_host=%s.*\" % (self.job1_new_exec_host,),\n            regexp=True, n=\"ALL\", existence=False, max_attempts=5, interval=1,\n            starttime=self.stime)\n\n    def test_release_nodes_on_stageout_true_qalter(self):\n        \"\"\"\n        Test:\n              qalter -W release_nodes_on_stageout=true.\n\n              After the running job is modified by qalter\n              with the release_nodes_on_stageout=true option, when\n              the job is deleted and runs a lengthy stageout process,\n              only the primary execution host's\n              vnodes are left assigned to the job.\n        \"\"\"\n\n        jid = self.create_and_submit_job('job1_2')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # run qalter -Wrelease_nodes_on_stageout=true\n        self.server.alterjob(jid,\n                             {ATTR_W: 'release_nodes_on_stageout=true'})\n\n        self.server.expect(JOB, {'release_nodes_on_stageout': 'True'}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n     
                           'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        # This triggers the lengthy stageout process\n        # Wait for the Job to create test.img file\n        self.check_stageout_file_size()\n        self.server.delete(jid)\n\n        self.server.expect(JOB, {'job_state': 'E',\n                                 'Resource_List.mem': '2gb',\n                                 'Resource_List.ncpus': 3,\n                                 'Resource_List.select': self.job1_newsel,\n                                 'Resource_List.place': self.job1_place,\n                                 'Resource_List.nodect': 1,\n                                 'schedselect': self.job1_newsel,\n                                 'exec_host': self.job1_new_exec_host,\n                                 'exec_vnode': self.job1_new_exec_vnode},\n                           id=jid)\n        # Check various vnode status\n        self.match_vnode_status([self.n1, self.n2],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3], 'job-busy', jobs_assn1,\n                                1, '0kb')\n\n        self.match_vnode_status([self.n0, self.n4, self.n5, self.n6,\n                                 self.n7, self.n8, self.n9, self.n10], 'free')\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1_new_exec_host))\n\n        # Verify mom_logs\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostB), n=10,\n            interval=2, regexp=True)\n\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostC), n=10,\n            interval=2, regexp=True)\n\n        
self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            interval=2)\n\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            interval=2)\n\n        # Check account update ('u') record\n        self.match_accounting_log('u', jid, self.job1_exec_host_esc,\n                                  self.job1_exec_vnode_esc, \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  self.job1_sel_esc)\n\n        # Check to make sure 'c' (next) record got generated\n        self.match_accounting_log('c', jid, self.job1_new_exec_host,\n                                  self.job1_new_exec_vnode_esc, \"2097152kb\",\n                                  3, 1, self.job1_place, self.job1_newsel)\n\n    def test_release_nodes_on_stageout_false_qalter(self):\n        \"\"\"\n        Test:\n              qalter -W release_nodes_on_stageout=False.\n\n              After the running job is modified by qalter\n              with the release_nodes_on_stageout=false option, when the job\n              is deleted and runs a lengthy stageout process, nothing\n              changes in the job's vnode assignment.\n        \"\"\"\n        jid = self.create_and_submit_job('job1_2')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # run qalter -Wrelease_nodes_on_stageout=false\n        self.server.alterjob(jid,\n               
              {ATTR_W: 'release_nodes_on_stageout=false'})\n\n        self.server.expect(JOB, {'release_nodes_on_stageout': 'False'}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        # This triggers long stageout process\n        # Wait for the Job to create test.img file\n        self.check_stageout_file_size()\n        self.server.delete(jid)\n\n        # Verify no change in remaining job resources.\n        self.server.expect(JOB, {'job_state': 'E',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6], 'job-busy',\n                                jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, 
'2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        # Verify mom_logs\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            max_attempts=5, interval=1,\n                            existence=False)\n\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            max_attempts=5, interval=1,\n                            existence=False)\n\n        # Check for no existence of account update ('u') record\n        self.server.accounting_match(\n            msg='.*u;' + jid + \".*exec_host=%s.*\" % (self.job1_exec_host_esc,),\n            regexp=True, n=\"ALL\", existence=False, max_attempts=5, interval=1,\n            starttime=self.stime)\n\n        # Check for no existence of account next ('c') record\n        self.server.accounting_match(\n            msg='.*c;' + jid + \".*exec_host=%s.*\" % (self.job1_new_exec_host,),\n            regexp=True, n=\"ALL\", existence=False, max_attempts=5, interval=1,\n            starttime=self.stime)\n\n    def test_hook_release_nodes_on_stageout_true(self):\n        \"\"\"\n        Test:\n              Using a queuejob hook to set\n              release_nodes_on_stageout=true.\n\n              When job is deleted and runs a\n              lengthy stageout process, only\n              the primary execution host's\n              vnodes are left assigned to the job.\n        \"\"\"\n\n        hook_body = \"\"\"\nimport pbs\npbs.logmsg(pbs.LOG_DEBUG, \"queuejob hook executed\")\npbs.event().job.release_nodes_on_stageout=True\n\"\"\"\n        hook_event = \"queuejob\"\n        hook_name = \"qjob\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, hook_body)\n\n        jid = self.create_and_submit_job('job1_2')\n\n        self.server.log_match(\"queuejob hook executed\", n=20,\n                              interval=2)\n\n        
self.server.expect(JOB, {'job_state': 'R',\n                                 'release_nodes_on_stageout': 'True',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1_exec_host))\n\n        # Deleting the job will trigger the stageout process\n        # at which time sister nodes are automatically released\n        # due to release_nodes_on_stageout=true being set\n        # Wait for the Job to create test.img file\n        self.check_stageout_file_size()\n        self.server.delete(jid)\n\n        # Verify remaining job resources.\n\n        self.server.expect(JOB, {'job_state': 'E',\n                                 'Resource_List.mem': '2gb',\n                                 'Resource_List.ncpus': 3,\n                                 'Resource_List.select': self.job1_newsel,\n                               
  'Resource_List.place': self.job1_place,\n                                 'Resource_List.nodect': 1,\n                                 'schedselect': self.job1_newsel,\n                                 'exec_host': self.job1_new_exec_host,\n                                 'exec_vnode': self.job1_new_exec_vnode},\n                           id=jid)\n\n        # Check various vnode status\n        self.match_vnode_status([self.n1, self.n2],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3], 'job-busy', jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.n0, self.n4, self.n5, self.n6,\n                                 self.n7, self.n8, self.n9, self.n10], 'free')\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1_new_exec_host))\n\n        # Verify mom_logs\n\n        self.momA.log_match(\n            \"Job;%s;%s.+cput=.+ mem=.+\" % (jid, self.n4,), n=10,\n            interval=2, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;%s.+cput=.+ mem=.+\" % (jid, self.hostC), n=10,\n            interval=2, regexp=True)\n\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            interval=2)\n\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            interval=2)\n\n        # Check account update ('u') record\n        self.match_accounting_log('u', jid, self.job1_exec_host_esc,\n                                  self.job1_exec_vnode_esc, \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  self.job1_sel_esc)\n\n        # Check to make sure 'c' (next) record got generated\n        self.match_accounting_log('c', jid, self.job1_new_exec_host,\n                                  self.job1_new_exec_vnode_esc, \"2097152kb\",\n                                  3, 1, self.job1_place, self.job1_newsel)\n\n    
def test_hook_release_nodes_on_stageout_false(self):\n        \"\"\"\n        Test:\n              Using a queuejob hook to set\n              -Wrelease_nodes_on_stageout=False.\n\n              When job is deleted and runs a\n              lengthy stageout process, nothing\n              changes in job's vnodes assignment.\n        \"\"\"\n\n        hook_body = \"\"\"\nimport pbs\npbs.logmsg(pbs.LOG_DEBUG, \"queuejob hook executed\")\npbs.event().job.release_nodes_on_stageout=False\n\"\"\"\n        hook_event = \"queuejob\"\n        hook_name = \"qjob\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, hook_body)\n\n        jid = self.create_and_submit_job('job1_2')\n\n        self.server.log_match(\"queuejob hook executed\", n=20,\n                              interval=2)\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'release_nodes_on_stageout': 'False',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        
self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        # Deleting a job should not trigger automatic\n        # release of nodes due to release_nodes_on_stageout=False\n        # Wait for the job to create the test.img file\n        self.check_stageout_file_size()\n        self.server.delete(jid)\n\n        # Verify no change in remaining job resources.\n        self.server.expect(JOB, {'job_state': 'E',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6], 'job-busy',\n                                jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        # Verify mom_logs\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            max_attempts=5, interval=1,\n                            existence=False)\n\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            max_attempts=5, interval=1,\n                            
existence=False)\n\n        # Check for no existence of account update ('u') record\n        self.server.accounting_match(\n            msg='.*u;' + jid + \".*exec_host=%s.*\" % (self.job1_exec_host_esc,),\n            regexp=True, n=\"ALL\", existence=False, max_attempts=5, interval=1,\n            starttime=self.stime)\n\n        # Check for no existence of account next ('c') record\n        self.server.accounting_match(\n            msg='.*c;' + jid + \".*exec_host=%s.*\" % (self.job1_new_exec_host,),\n            regexp=True, n=\"ALL\", existence=False, max_attempts=5, interval=1,\n            starttime=self.stime)\n\n    def test_hook2_release_nodes_on_stageout_true(self):\n        \"\"\"\n        Test:\n              Using a modifyjob hook to set\n              release_nodes_on_stageout=true.\n\n              When job is deleted and runs a\n              lengthy stageout process, only\n              the primary execution host's\n              vnodes are left assigned to the job.\n        \"\"\"\n\n        hook_body = \"\"\"\nimport pbs\npbs.logmsg(pbs.LOG_DEBUG, \"modifyjob hook executed\")\npbs.event().job.release_nodes_on_stageout=True\n\"\"\"\n        hook_event = \"modifyjob\"\n        hook_name = \"mjob\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, hook_body)\n\n        jid = self.create_and_submit_job('job1_2')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': 
self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        # This triggers the modifyjob hook\n        self.server.alterjob(jid, {ATTR_N: \"test\"})\n\n        self.server.log_match(\"modifyjob hook executed\", n=100,\n                              interval=2)\n\n        self.server.expect(JOB, {'release_nodes_on_stageout': 'True'}, id=jid)\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1_exec_host))\n\n        # Deleting the job will trigger the stageout process,\n        # at which time sister nodes are automatically released\n        # because release_nodes_on_stageout=True is set\n        # Wait for the job to create the test.img file\n        self.check_stageout_file_size()\n        self.server.delete(jid)\n\n        # Verify remaining job resources.\n        self.server.expect(JOB, {'job_state': 'E',\n                                 'Resource_List.mem': '2gb',\n                                 'Resource_List.ncpus': 3,\n                                 'Resource_List.select': self.job1_newsel,\n                                 'Resource_List.place': self.job1_place,\n                                 'Resource_List.nodect': 1,\n                                 'schedselect': self.job1_newsel,\n                                 'exec_host': self.job1_new_exec_host,\n                                 'exec_vnode': self.job1_new_exec_vnode},\n                      
     id=jid)\n\n        # Check various vnode status.\n        self.match_vnode_status([self.n1, self.n2],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3], 'job-busy', jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.n0, self.n4, self.n5, self.n6,\n                                 self.n7, self.n8, self.n9, self.n10], 'free')\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1_new_exec_host))\n\n        # Verify mom_logs\n        self.momA.log_match(\n            \"Job;%s;%s.+cput=.+ mem=.+\" % (jid, self.hostB), n=10,\n            interval=2, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;%s.+cput=.+ mem=.+\" % (jid, self.hostC), n=10,\n            interval=2, regexp=True)\n\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            interval=2)\n\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            interval=2)\n\n        # Check account update ('u') record\n        self.match_accounting_log('u', jid, self.job1_exec_host_esc,\n                                  self.job1_exec_vnode_esc, \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  self.job1_sel_esc)\n\n        # Check to make sure 'c' (next) record got generated\n        self.match_accounting_log('c', jid, self.job1_new_exec_host,\n                                  self.job1_new_exec_vnode_esc, \"2097152kb\",\n                                  3, 1, self.job1_place, self.job1_newsel)\n\n    def test_hook2_release_nodes_on_stageout_false(self):\n        \"\"\"\n        Test:\n              Using a modifyjob hook to set\n              release_nodes_on_stageout=False.\n\n              When job is deleted and runs a\n              lengthy stageout process, nothing\n              changes in job's vnodes assignment.\n        
\"\"\"\n\n        hook_body = \"\"\"\nimport pbs\npbs.logmsg(pbs.LOG_DEBUG, \"modifyjob hook executed\")\npbs.event().job.release_nodes_on_stageout=False\n\"\"\"\n        hook_event = \"modifyjob\"\n        hook_name = \"mjob\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, hook_body)\n\n        jid = self.create_and_submit_job('job1_2')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        # This triggers the modifyjob hook\n        self.server.alterjob(jid, {ATTR_N: \"test\"})\n\n        self.server.log_match(\"modifyjob hook executed\", n=100,\n                              interval=2)\n\n        self.server.expect(JOB, {'release_nodes_on_stageout': 'False'}, id=jid)\n\n        # Deleting a job should not trigger automatic\n        # release of nodes due to 
release_nodes_on_stageout=False\n        # Wait for the job to create the test.img file\n        self.check_stageout_file_size()\n        self.server.delete(jid)\n\n        # Verify no change in remaining job resources.\n        self.server.expect(JOB, {'job_state': 'E',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6], 'job-busy',\n                                jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        # Verify mom_logs\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            max_attempts=5, interval=1,\n                            existence=False)\n\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            max_attempts=5, interval=1,\n                            existence=False)\n\n        # Check for no existence of account update ('u') record\n        self.server.accounting_match(\n            msg='.*u;' + jid + \".*exec_host=%s.*\" % (self.job1_exec_host_esc,),\n            regexp=True, n=\"ALL\", existence=False, max_attempts=5, interval=1,\n            
starttime=self.stime)\n\n        # Check for no existence of account next ('c') record\n        self.server.accounting_match(\n            msg='.*c;' + jid + \".*exec_host=%s.*\" % (self.job1_new_exec_host,),\n            regexp=True, n=\"ALL\", existence=False, max_attempts=5, interval=1,\n            starttime=self.stime)\n\n    def test_release_nodes_error(self):\n        \"\"\"\n        Tests erroneous cases:\n            - pbs_release_nodes (no options given)\n            - pbs_release_nodes -j <job-id> (and nothing else)\n            - pbs_release_nodes -a (not run inside a job)\n            - pbs_release_nodes -j <job-id> -a <node1>\n              (both -a and listed nodes are given)\n            - pbs_release_nodes -j <unknown-job-id> -a\n            - pbs_release_nodes -j <job-id> -a run by a user\n              who is not the job owner\n            - pbs_release_nodes -j <job-id> -a\n              and job is not in a running state.\n\n        Each case returns the appropriate error message.\n        \"\"\"\n        # Test no option given\n        cmd = [self.pbs_release_nodes_cmd]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     runas=TEST_USER)\n        self.assertNotEqual(ret['rc'], 0)\n        self.assertTrue(ret['err'][0].startswith('usage:'))\n\n        # Test only -j <jobid> given\n        cmd = [self.pbs_release_nodes_cmd, '-j', '23']\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     runas=TEST_USER)\n        self.assertNotEqual(ret['rc'], 0)\n        self.assertTrue(ret['err'][0].startswith('usage:'))\n\n        # Test only -a given\n        cmd = [self.pbs_release_nodes_cmd, '-a']\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     runas=TEST_USER)\n        self.assertNotEqual(ret['rc'], 0)\n        self.assertTrue(ret['err'][0].startswith(\n            'pbs_release_nodes: No jobid given'))\n\n        # Test specifying an unknown job id\n        cmd = [self.pbs_release_nodes_cmd, '-j', '300000', 
'-a']\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     runas=TEST_USER)\n        self.assertNotEqual(ret['rc'], 0)\n        self.assertTrue(ret['err'][0].startswith(\n            'pbs_release_nodes: Unknown Job Id 300000'))\n\n        # Test having '-a' and vnode parameter given to pbs_release_nodes\n        a = {'Resource_List.select': '3:ncpus=1',\n             'Resource_List.place': 'scatter'}\n        jid = self.create_and_submit_job('job', a)\n\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, '-a', self.n4]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     runas=TEST_USER)\n        self.assertNotEqual(ret['rc'], 0)\n        self.assertTrue(ret['err'][0].startswith('usage:'))\n\n        self.server.delete(jid)\n\n        # Test pbs_release_nodes' permission\n        jid = self.create_and_submit_job('job', a)\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        # Run pbs_release_nodes as the executing user != TEST_USER\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, '-a']\n\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     runas=TEST_USER1)\n        self.assertNotEqual(ret['rc'], 0)\n        self.assertTrue(ret['err'][0].startswith(\n            'pbs_release_nodes: Unauthorized Request'))\n\n        self.server.delete(jid)\n\n        # Test pbs_release_nodes on a non-running job\n        a = {'Resource_List.select': '3:ncpus=1',\n             ATTR_h: None,\n             'Resource_List.place': 'scatter'}\n        jid = self.create_and_submit_job('job', a)\n\n        self.server.expect(JOB, {'job_state': 'H'}, id=jid)\n\n        # Run pbs_release_nodes\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, '-a']\n\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     runas=TEST_USER)\n        self.assertNotEqual(ret['rc'], 
0)\n        self.assertTrue(ret['err'][0].startswith(\n            'pbs_release_nodes: Request invalid for state of job'))\n\n    def test_release_ms_nodes(self):\n        \"\"\"\n        Test:\n             Given: a job that has been submitted with a select spec\n             of 2 super-chunks of ncpus=3 and mem=2gb each,\n             and 1 chunk of ncpus=2 and mem=2gb, along with\n             place spec of \"scatter\", resulting in an\n\n             exec_vnode=\n                  (<n1>+<n2>+<n3>)+(<n4>+<n5>+<n6>)+(<n7>)\n\n             Executing pbs_release_nodes -j <job-id> <n5> <n6> <n1> <n7> where\n             <n1> is a mother superior vnode, results in the\n             entire request being rejected.\n        \"\"\"\n        jid = self.create_and_submit_job('job1')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        # 
Run pbs_release_nodes\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n5, self.n6,\n               self.n1, self.n7]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     runas=TEST_USER)\n        self.assertNotEqual(ret['rc'], 0)\n\n        self.assertTrue(ret['err'][0].startswith(\n            \"pbs_release_nodes: \" +\n            \"Can't free '%s' since \" % (self.n1,) +\n            \"it's on a primary execution host\"))\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6], 'job-busy',\n                                jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        # Check for no existence of account update ('u') record\n        self.server.accounting_match(\n            msg='.*u;' + jid + \".*exec_host=%s.*\" % (self.job1_exec_host_esc,),\n            regexp=True, n=\"ALL\", existence=False, max_attempts=5, interval=1,\n            starttime=self.stime)\n\n        # Check for no existence of account next ('c') record\n        
self.server.accounting_match(\n            msg='.*c;' + jid + \".*exec_host=%s.*\" % (self.job1_new_exec_host,),\n            regexp=True, n=\"ALL\", existence=False, max_attempts=5, interval=1,\n            starttime=self.stime)\n\n    def test_release_not_assigned_nodes(self):\n        \"\"\"\n        Test:\n             Given: a job that has been submitted with a select spec\n             of 2 super-chunks of ncpus=3 and mem=2gb each,\n             and 1 chunk of ncpus=2 and mem=2gb, along with\n             place spec of \"scatter\", resulting in an\n\n             exec_vnode=\n                  (<n1>+<n2>+<n3>)+(<n4>+<n5>+<n6>)+(<n7>)\n\n             Executing:\n                 pbs_release_nodes -j <job-id> <n4> <n5> <no_node> <n6> <n7>\n             where <no_node> is a node that is not assigned to the job.\n        Result:\n              Returns an error message and no nodes get released.\n        \"\"\"\n        jid = self.create_and_submit_job('job1')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', 
jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1_exec_host))\n\n        # Run pbs_release_nodes\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n4, self.n5,\n               self.n8, self.n6, self.n7]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     runas=TEST_USER)\n\n        self.assertNotEqual(ret['rc'], 0)\n        self.assertTrue(ret['err'][0].startswith(\n            \"pbs_release_nodes: node(s) requested \" +\n            \"to be released not \" +\n            \"part of the job: %s\" % (self.n8,)))\n\n        # Ensure nothing has changed with the job.\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6], 'job-busy',\n                                jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        
self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        # Check for no existence of account update ('u') record\n        self.server.accounting_match(\n            msg='.*u;' + jid + \".*exec_host=%s.*\" % (self.job1_exec_host_esc,),\n            regexp=True, n=\"ALL\", existence=False, max_attempts=5, interval=1,\n            starttime=self.stime)\n\n        # Check for no existence of account next ('c') record\n        self.server.accounting_match(\n            msg='.*c;' + jid + \".*exec_host=%s.*\" % (self.job1_new_exec_host,),\n            regexp=True, n=\"ALL\", existence=False, max_attempts=5, interval=1,\n            starttime=self.stime)\n\n    def test_release_cray_nodes(self):\n        \"\"\"\n        Test:\n             Given: a job that has been submitted with a select spec\n             of 2 super-chunks of ncpus=3 and mem=2gb each,\n             and 1 chunk of ncpus=2 and mem=2gb, along with\n             place spec of \"scatter\", resulting in an\n\n             exec_vnode=\n                  (<n1>+<n2>+<n3>)+(<n4>+<n5>+<n6>)+(<n7>)\n\n             Executing:\n                  pbs_release_nodes -j <job-id> <n4> <n5> <n6> <n7>\n              where <n7> is a Cray node,\n        Result:\n              Returns an error message and no nodes get released.\n        \"\"\"\n        jid = self.create_and_submit_job('job1')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # 
Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1_exec_host))\n\n        # Set the hostC node to be of Cray type\n        a = {'resources_available.vntype': 'cray_login'}\n        # Set the natural vnode of hostC\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.n7)\n\n        # Run pbs_release_nodes\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n4, self.n5,\n               self.n6, self.n7]\n\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     runas=TEST_USER)\n\n        self.assertNotEqual(ret['rc'], 0)\n        self.assertTrue(ret['err'][0].startswith(\n            \"pbs_release_nodes: not currently supported \" +\n            \"on Cray X* series nodes: \"\n            \"%s\" % (self.n7,)))\n\n        # Ensure nothing has changed with the job.\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': 
self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6], 'job-busy',\n                                jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        # Check for no existence of account update ('u') record\n        self.server.accounting_match(\n            msg='.*u;' + jid + \".*exec_host=%s.*\" % (self.job1_exec_host_esc,),\n            regexp=True, n=\"ALL\", existence=False, max_attempts=5, interval=1,\n            starttime=self.stime)\n\n        # Check for no existence of account next ('c') record\n        self.server.accounting_match(\n            msg='.*c;' + jid + \".*exec_host=%s.*\" % (self.job1_new_exec_host,),\n            regexp=True, n=\"ALL\", existence=False, max_attempts=5, interval=1,\n            starttime=self.stime)\n\n    def test_release_nodes_all(self):\n        \"\"\"\n        Test:\n              Given a job that specifies a select spec of\n              2 super-chunks of ncpus=3 and mem=2gb each,\n              and 1 chunk of ncpus=2 and mem=2gb, along with\n              place spec of \"scatter\".\n\n              Calling\n                  pbs_release_nodes -j <job-id> -a\n\n              will result in all the sister nodes getting\n              unassigned from the job.\n        \"\"\"\n        jid = self.create_and_submit_job('job1_2')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n    
                             'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1_exec_host))\n\n        # Run pbs_release_nodes as regular user\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, '-a']\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     runas=TEST_USER)\n        self.assertEqual(ret['rc'], 0)\n\n        # Verify mom_logs\n        self.momA.log_match(\n            \"Job;%s;%s.+cput=.+ mem=.+\" % (jid, self.hostB), n=10,\n            interval=2, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;%s.+cput=.+ mem=.+\" % (jid, self.hostC), n=10,\n            interval=2, regexp=True)\n\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            interval=2)\n\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            interval=2)\n\n        # Verify remaining job resources.\n        self.server.expect(JOB, {'job_state': 'R',\n                   
               'Resource_List.mem': '2gb',\n                                 'Resource_List.ncpus': 3,\n                                 'Resource_List.select': self.job1_newsel,\n                                 'Resource_List.place': self.job1_place,\n                                 'Resource_List.nodect': 1,\n                                 'schedselect': self.job1_newsel,\n                                 'exec_host': self.job1_new_exec_host,\n                                 'exec_vnode': self.job1_new_exec_vnode},\n                           id=jid)\n\n        # Check various vnode status.\n        self.match_vnode_status([self.n1, self.n2],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3], 'job-busy', jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.n0, self.n4, self.n5, self.n6,\n                                 self.n7, self.n8, self.n9, self.n10], 'free')\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 3,\n                                    'resources_assigned.mem': '2097152kb'})\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 3,\n                                   'resources_assigned.mem': '2097152kb'},\n                           id=\"workq\")\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1_new_exec_host))\n\n        # Check account update ('u') record\n        self.match_accounting_log('u', jid, self.job1_exec_host_esc,\n                                  self.job1_exec_vnode_esc, \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  self.job1_sel_esc)\n\n        # Check to make sure 'c' (next) record got generated\n        self.match_accounting_log('c', jid, self.job1_new_exec_host,\n                                  self.job1_new_exec_vnode_esc, \"2097152kb\",\n                                  3, 1, self.job1_place, self.job1_newsel)\n\n    def 
test_release_nodes_all_as_root(self):\n        \"\"\"\n        Test:\n             Same test as test_release_nodes_all except the pbs_release_nodes\n             call is executed by root. Result is the same.\n        \"\"\"\n        jid = self.create_and_submit_job('job1_2')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1_exec_host))\n\n        # Run pbs_release_nodes as root\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, '-a']\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     sudo=True)\n        self.assertEqual(ret['rc'], 0)\n\n        # Verify mom_logs\n        self.momA.log_match(\n            \"Job;%s;%s.+cput=.+ mem=.+\" % (jid, self.hostB), n=10,\n            interval=2, 
regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;%s.+cput=.+ mem=.+\" % (jid, self.hostC), n=10,\n            interval=2, regexp=True)\n\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            interval=2)\n\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            interval=2)\n\n        # Verify remaining job resources.\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '2gb',\n                                 'Resource_List.ncpus': 3,\n                                 'Resource_List.select': self.job1_newsel,\n                                 'Resource_List.place': self.job1_place,\n                                 'Resource_List.nodect': 1,\n                                 'schedselect': self.job1_newsel,\n                                 'exec_host': self.job1_new_exec_host,\n                                 'exec_vnode': self.job1_new_exec_vnode},\n                           id=jid)\n\n        # Check various vnode status.\n        self.match_vnode_status([self.n1, self.n2],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3], 'job-busy', jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.n0, self.n4, self.n5, self.n6,\n                                 self.n7, self.n8, self.n9, self.n10], 'free')\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 3,\n                                    'resources_assigned.mem': '2097152kb'})\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 3,\n                                   'resources_assigned.mem': '2097152kb'},\n                           id=\"workq\")\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1_new_exec_host))\n\n        # Check account update ('u') record\n        self.match_accounting_log('u', 
jid, self.job1_exec_host_esc,\n                                  self.job1_exec_vnode_esc, \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  self.job1_sel_esc)\n\n        # Check to make sure 'c' (next) record got generated\n        self.match_accounting_log('c', jid, self.job1_new_exec_host,\n                                  self.job1_new_exec_vnode_esc, \"2097152kb\",\n                                  3, 1, self.job1_place, self.job1_newsel)\n\n    def test_release_nodes_all_inside_job(self):\n        \"\"\"\n        Test:\n            Like test_release_nodes_all except that instead of calling\n            pbs_release_nodes from the command line, it is executed\n            inside the job script of a running job. Same results.\n        \"\"\"\n        # This one has a job script that calls 'pbs_release_nodes'\n        # (no jobid specified)\n        jid = self.create_and_submit_job('job1_3')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        
self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1_exec_host))\n\n        # wait for the job to execute pbs_release_nodes\n        time.sleep(10)\n\n        # Verify mom_logs\n        self.momA.log_match(\n            \"Job;%s;%s.+cput=.+ mem=.+\" % (jid, self.hostB), n=10,\n            interval=2, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;%s.+cput=.+ mem=.+\" % (jid, self.hostC), n=10,\n            interval=2, regexp=True)\n\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            interval=2)\n\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            interval=2)\n\n        # Verify remaining job resources.\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '2gb',\n                                 'Resource_List.ncpus': 3,\n                                 'Resource_List.select': self.job1_newsel,\n                                 'Resource_List.place': self.job1_place,\n                                 'Resource_List.nodect': 1,\n                                 'schedselect': self.job1_newsel,\n                                 'exec_host': self.job1_new_exec_host,\n                                 'exec_vnode': self.job1_new_exec_vnode},\n                           id=jid)\n\n        # Check various vnode status.\n        self.match_vnode_status([self.n1, self.n2],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3], 'job-busy', jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.n0, self.n4, self.n5, self.n6,\n                                 self.n7, self.n8, self.n9, self.n10], 
'free')\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 3,\n                                    'resources_assigned.mem': '2097152kb'})\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 3,\n                                   'resources_assigned.mem': '2097152kb'},\n                           id=\"workq\")\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1_new_exec_host))\n\n        # Check account update ('u') record\n        self.match_accounting_log('u', jid, self.job1_exec_host_esc,\n                                  self.job1_exec_vnode_esc, \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  self.job1_sel_esc)\n\n        # Check to make sure 'c' (next) record got generated\n        self.match_accounting_log('c', jid, self.job1_new_exec_host,\n                                  self.job1_new_exec_vnode_esc, \"2097152kb\",\n                                  3, 1, self.job1_place, self.job1_newsel)\n\n    def test_release_nodes1(self):\n        \"\"\"\n        Test:\n             Given: a job that has been submitted with a select spec\n             of 2 super-chunks of ncpus=3 and mem=2gb each,\n             and 1 chunk of ncpus=2 and mem=2gb, along with\n             place spec of \"scatter\", resulting in an\n\n             exec_vnode=\n                  (<n1>+<n2>+<n3>)+(<n4>+<n5>+<n6>)+(<n7>)\n\n             Executing pbs_release_nodes -j <job-id> <n4>\n             results in:\n             1. node <n4> no longer appearing in job's\n                exec_vnode value,\n             2. resources associated with the\n                node are taken out of job's Resources_List.*,\n                schedselect values,\n             3. 
Since node <n4> is just one of the vnodes in the\n                host assigned to the second super-chunk, the node\n                still won't accept new jobs until all the other\n                allocated vnodes from the same mom host are released.\n                The resources assigned to the job from\n                node <n4> continue to be assigned, including the\n                corresponding licenses.\n\n             NOTE: This is testing to make sure the position of <n4>\n             in the exec_vnode string (left end of a super-chunk) will\n             not break the recreation of the attribute value after\n             release.\n        \"\"\"\n        jid = self.create_and_submit_job('job1_5')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        self.assertTrue(\n            
self.pbs_nodefile_match_exec_host(jid, self.job1_exec_host))\n\n        # Run pbs_release_nodes as root\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n4]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     sudo=True)\n        self.assertEqual(ret['rc'], 0)\n\n        # Verify mom_logs\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostB), n=10,\n            regexp=True,\n            existence=False, max_attempts=5, interval=1)\n\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostC), n=10,\n            regexp=True,\n            existence=False, max_attempts=5, interval=1)\n\n        # momB's host will not get DELETE_JOB2 request since\n        # not all its vnodes have been released yet from the job.\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            existence=False, max_attempts=5, interval=1)\n\n        # Verify remaining job resources.\n        newsel = \"1:mem=2097152kb:ncpus=3+1:mem=1048576kb:ncpus=2+\" + \\\n                 \"1:ncpus=2:mem=2097152kb\"\n        newsel_esc = newsel.replace(\"+\", r\"\\+\")\n        new_exec_host = self.job1_exec_host\n        new_exec_host_esc = self.job1_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        new_exec_vnode = self.job1_exec_vnode.replace(\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n4,), \"\")\n        new_exec_vnode_esc = new_exec_vnode.replace(\n            \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n            \"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\"+\", r\"\\+\")\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '5gb',\n                                 'Resource_List.ncpus': 7,\n                                 
'Resource_List.select': newsel,\n                                 'Resource_List.place': self.job1_place,\n                                 'Resource_List.nodect': 3,\n                                 'schedselect': newsel,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': new_exec_vnode}, id=jid)\n\n        # Check account update ('u') record\n        self.match_accounting_log('u', jid, self.job1_exec_host_esc,\n                                  self.job1_exec_vnode_esc, \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  self.job1_sel_esc)\n\n        # Check to make sure 'c' (next) record got generated\n        self.match_accounting_log('c', jid, self.job1_exec_host_esc,\n                                  new_exec_vnode_esc, \"5242880kb\",\n                                  7, 3, self.job1_place, newsel_esc)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6], 'job-busy', jobs_assn1, 1,\n                                '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 8,\n                                    'resources_assigned.mem': '6291456kb'})\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 8,\n                                   'resources_assigned.mem': '6291456kb'},\n                           id=\"workq\")\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, new_exec_host))\n\n        
self.server.delete(jid)\n\n        # Check account phased end ('e') record\n        self.match_accounting_log('e', jid, new_exec_host_esc,\n                                  new_exec_vnode_esc,\n                                  \"5242880kb\", 7, 3,\n                                  self.job1_place,\n                                  newsel_esc)\n\n        # Check to make sure 'E' (end of job) record got generated\n        self.match_accounting_log('E', jid, self.job1_exec_host_esc,\n                                  self.job1_exec_vnode_esc, \"6gb\",\n                                  8, 3, self.job1_place, self.job1_sel_esc)\n\n    def test_release_nodes1_as_user(self):\n        \"\"\"\n        Test:\n             Same as test_release_nodes1 except that pbs_release_nodes\n             is executed as a regular user. Same results.\n        \"\"\"\n        jid = self.create_and_submit_job('job1_5')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                  
              2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1_exec_host))\n\n        # Run pbs_release_nodes as regular user\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n4]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     runas=TEST_USER)\n        self.assertEqual(ret['rc'], 0)\n\n        # Verify mom_logs\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostB), n=10,\n            regexp=True,\n            existence=False, max_attempts=5, interval=1)\n\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostC), n=10,\n            regexp=True,\n            existence=False, max_attempts=5, interval=1)\n\n        # momB and momC's hosts will not get DELETE_JOB2 request since\n        # not all their vnodes have been released yet from the job.\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            existence=False, max_attempts=5, interval=1)\n\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            existence=False, max_attempts=5, interval=1)\n\n        # Verify remaining job resources.\n        newsel = \"1:mem=2097152kb:ncpus=3+1:mem=1048576kb:ncpus=2+\" + \\\n                 \"1:ncpus=2:mem=2097152kb\"\n        newsel_esc = newsel.replace(\"+\", r\"\\+\")\n        new_exec_host = self.job1_exec_host\n        new_exec_host_esc = self.job1_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        new_exec_vnode = self.job1_exec_vnode.replace(\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n4,), \"\")\n        new_exec_vnode_esc = new_exec_vnode.replace(\n            \"[\", 
r\"\\[\").replace(\"]\", r\"\\]\").replace(\n            \"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\"+\", r\"\\+\")\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '5gb',\n                                 'Resource_List.ncpus': 7,\n                                 'Resource_List.select': newsel,\n                                 'Resource_List.place': self.job1_place,\n                                 'Resource_List.nodect': 3,\n                                 'schedselect': newsel,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': new_exec_vnode}, id=jid)\n\n        # Check account update ('u') record\n        self.match_accounting_log('u', jid, self.job1_exec_host_esc,\n                                  self.job1_exec_vnode_esc, \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  self.job1_sel_esc)\n\n        # Check to make sure 'c' (next) record got generated\n        self.match_accounting_log('c', jid, self.job1_exec_host_esc,\n                                  new_exec_vnode_esc, \"5242880kb\",\n                                  7, 3, self.job1_place, newsel_esc)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6], 'job-busy', jobs_assn1, 1,\n                                '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 8,\n                                    
'resources_assigned.mem': '6291456kb'})\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 8,\n                                   'resources_assigned.mem': '6291456kb'},\n                           id=\"workq\")\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, new_exec_host))\n\n        self.server.delete(jid)\n\n        # Check account phased end ('e') record\n        self.match_accounting_log('e', jid, new_exec_host_esc,\n                                  new_exec_vnode_esc,\n                                  \"5242880kb\", 7, 3,\n                                  self.job1_place,\n                                  newsel_esc)\n\n        # Check to make sure 'E' (end of job) record got generated\n        self.match_accounting_log('E', jid, self.job1_exec_host_esc,\n                                  self.job1_exec_vnode_esc, \"6gb\",\n                                  8, 3, self.job1_place, self.job1_sel_esc)\n\n    def test_release_nodes1_extra(self):\n        \"\"\"\n        Test:\n             Like test_release_nodes1 except instead of the super-chunk\n             and chunks getting only ncpus and mem values, additional\n             resources mpiprocs and ompthreads are also requested and\n             assigned:\n\n             For example:\n\n               qsub -l select=\"ncpus=3:mem=2gb:mpiprocs=3:ompthreads=2+\n                               ncpus=3:mem=2gb:mpiprocs=3:ompthreads=3+\n                               ncpus=2:mem=2gb:mpiprocs=2:ompthreads=2\"\n\n             We want to make sure the ompthreads and mpiprocs values are\n             preserved in the new exec_vnode, and that in the $PBS_NODEFILE,\n             the host names are duplicated according to the  number of\n             mpiprocs. 
For example, if <n1> is assigned to the first\n             chunk, with mpiprocs=3, <n1> will appear 3 times in\n             $PBS_NODEFILE.\n        \"\"\"\n        jid = self.create_and_submit_job('job1_extra_res')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select':\n                                 self.job1_extra_res_select,\n                                 'Resource_List.place':\n                                 self.job1_extra_res_place,\n                                 'schedselect':\n                                 self.job1_extra_res_schedselect,\n                                 'exec_host':\n                                 self.job1_extra_res_exec_host,\n                                 'exec_vnode':\n                                 self.job1_extra_res_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        # pbs_nodefile_match_exec_host() takes care of verifying that the\n        # host names appear according to the number of mpiprocs assigned\n        # to the chunk.\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(\n                jid, self.job1_extra_res_exec_host,\n                
self.job1_extra_res_schedselect))\n\n        # Run pbs_release_nodes as root\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n4]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     sudo=True)\n        self.assertEqual(ret['rc'], 0)\n\n        # Verify mom_logs\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostB), n=10,\n            regexp=True,\n            existence=False, max_attempts=5, interval=1)\n\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostC), n=10,\n            regexp=True,\n            existence=False, max_attempts=5, interval=1)\n\n        # momB and momC's hosts will not get DELETE_JOB2 request since\n        # not all their vnodes have been released yet from the job.\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            existence=False, max_attempts=5, interval=1)\n\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            existence=False, max_attempts=5, interval=1)\n\n        # Verify remaining job resources.\n        sel_esc = self.job1_extra_res_select.replace(\"+\", r\"\\+\")\n        exec_host_esc = self.job1_extra_res_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n            \"+\", r\"\\+\")\n        exec_vnode_esc = \\\n            self.job1_extra_res_exec_vnode.replace(\n                \"[\", r\"\\[\").replace(\n                \"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\n                \"+\", r\"\\+\")\n\n        newsel = \"1:mem=2097152kb:ncpus=3:mpiprocs=3:ompthreads=2+\" + \\\n            \"1:mem=1048576kb:ncpus=2:mpiprocs=3:ompthreads=3+\" + \\\n            \"1:ncpus=2:mem=2097152kb:mpiprocs=2:ompthreads=2\"\n        newsel_esc = newsel.replace(\"+\", r\"\\+\")\n        new_exec_host = 
self.job1_extra_res_exec_host
        new_exec_host_esc = self.job1_extra_res_exec_host.replace(
            "*", r"\*").replace("[", r"\[").replace("]", r"\]").replace(
            "+", r"\+")
        new_exec_vnode = self.job1_extra_res_exec_vnode.replace(
            "%s:mem=1048576kb:ncpus=1+" % (self.n4,), "")
        new_exec_vnode_esc = new_exec_vnode.replace("[", r"\[").replace(
            "]", r"\]").replace(
            "(", r"\(").replace(")", r"\)").replace("+", r"\+")
        self.server.expect(JOB,
                           {'job_state': 'R',
                            'Resource_List.mem': '5gb',
                            'Resource_List.ncpus': 7,
                            'Resource_List.select': newsel,
                            'Resource_List.place': self.job1_extra_res_place,
                            'Resource_List.nodect': 3,
                            'schedselect': newsel,
                            'exec_host': new_exec_host,
                            'exec_vnode': new_exec_vnode}, id=jid)

        # Check account update ('u') record
        self.match_accounting_log('u', jid, exec_host_esc,
                                  exec_vnode_esc, "6gb", 8, 3,
                                  self.job1_extra_res_place,
                                  sel_esc)

        # Check to make sure 'c' (next) record got generated
        self.match_accounting_log('c', jid, new_exec_host_esc,
                                  new_exec_vnode_esc, "5242880kb",
                                  7, 3, self.job1_extra_res_place, newsel_esc)

        # Check various vnode status.
        jobs_assn1 = "%s/0" % (jid,)
        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],
                                'job-busy', jobs_assn1, 1, '1048576kb')

        self.match_vnode_status([self.n3, self.n6], 'job-busy', jobs_assn1, 1,
                                '0kb')

        jobs_assn2 = "%s/0, %s/1" % (jid, jid)
        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,
                                2, '2097152kb')

        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')

        self.server.expect(SERVER, {'resources_assigned.ncpus': 8,
                                    'resources_assigned.mem': '6291456kb'})
        self.server.expect(QUEUE, {'resources_assigned.ncpus': 8,
                                   'resources_assigned.mem': '6291456kb'},
                           id="workq")

        self.assertTrue(
            self.pbs_nodefile_match_exec_host(jid, new_exec_host, newsel))

        self.server.delete(jid)

        # Check account phased end ('e') record
        self.match_accounting_log('e', jid, new_exec_host_esc,
                                  new_exec_vnode_esc,
                                  "5242880kb", 7, 3,
                                  self.job1_extra_res_place,
                                  newsel_esc)

        # Check to make sure 'E' (end of job) record got generated
        self.match_accounting_log('E', jid, exec_host_esc,
                                  exec_vnode_esc, "6gb",
                                  8, 3, self.job1_extra_res_place,
                                  sel_esc)

    @timeout(400)
    def test_release_nodes2(self):
        """
        Test:
             Given: a job that has been submitted with a select spec
             of 2 super-chunks of ncpus=3 and mem=2gb each,
             and 1 chunk of ncpus=2 and mem=2gb, along with
             place spec of "scatter", resulting in an

             exec_vnode=
                  (<n1>+<n2>+<n3>)+(<n4>+<n5>+<n6>)+(<n7>)

             Executing pbs_release_nodes -j <job-id> <n5>
             results in:
             1. node <n5> no longer appearing in job's
                exec_vnode value,
             2. resources associated with the
                node are taken out of job's Resources_List.*,
                schedselect values,
             3. Since node <n5> is just one of the vnodes in the
                host assigned to the second super-chunk, the node
                still won't accept new jobs until all the other
                allocated vnodes from the same mom host are released.
                The resources then assigned to the job from
                node <n5> continue to be assigned including
                corresponding licenses.

             NOTE: This is testing to make sure the position of <n5>
             in the exec_vnode string (middle of a super-chunk) will
             not break the recreation of the attribute value after
             release.
        """
        jid = self.create_and_submit_job('job1_5')

        self.server.expect(JOB, {'job_state': 'R',
                                 'Resource_List.mem': '6gb',
                                 'Resource_List.ncpus': 8,
                                 'Resource_List.nodect': 3,
                                 'Resource_List.select': self.job1_select,
                                 'Resource_List.place': self.job1_place,
                                 'schedselect': self.job1_schedselect,
                                 'exec_host': self.job1_exec_host,
                                 'exec_vnode': self.job1_exec_vnode}, id=jid)

        # Check various vnode status.
        jobs_assn1 = "%s/0" % (jid,)
        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],
                                'job-busy', jobs_assn1, 1, '1048576kb')

        self.match_vnode_status([self.n3, self.n6],
                                'job-busy', jobs_assn1, 1, '0kb')

        jobs_assn2 = "%s/0, %s/1" % (jid, jid)
        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,
                                2, '2097152kb')

        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')

        self.assertTrue(
            self.pbs_nodefile_match_exec_host(jid, self.job1_exec_host))

        # Run pbs_release_nodes as root
        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n5]
        ret = self.server.du.run_cmd(self.server.hostname, cmd,
                                     sudo=True)
        self.assertEqual(ret['rc'], 0)

        # Verify mom_logs
        self.momA.log_match("Job;%s;%s.+cput=.+ mem=.+" % (
            jid, self.hostB), n=10,
            regexp=True,
            existence=False, max_attempts=5, interval=1)

        self.momA.log_match("Job;%s;%s.+cput=.+ mem=.+" % (
            jid, self.hostC), n=10,
            regexp=True,
            existence=False, max_attempts=5, interval=1)

        # momB and momC's hosts will not get DELETE_JOB2 request since
        # not all their vnodes have been released yet from the job.
        self.momB.log_match("Job;%s;DELETE_JOB2 received" % (jid,), n=20,
                            existence=False, max_attempts=5, interval=1)

        self.momC.log_match("Job;%s;DELETE_JOB2 received" % (jid,), n=20,
                            existence=False, max_attempts=5, interval=1)

        # Verify remaining job resources.
        exec_host_esc = self.job1_exec_host.replace(
            "*", r"\*").replace("[", r"\[").replace("]", r"\]").replace(
            "+", r"\+")
        exec_vnode_esc = self.job1_exec_vnode.replace("[", r"\[").replace(
            "]", r"\]").replace("(", r"\(").replace(")", r"\)").replace(
            "+", r"\+")
        newsel = "1:mem=2097152kb:ncpus=3+1:mem=1048576kb:ncpus=2+" + \
                 "1:ncpus=2:mem=2097152kb"
        newsel_esc = newsel.replace("+", r"\+")
        new_exec_host = self.job1_exec_host
        new_exec_host_esc = self.job1_exec_host.replace(
            "*", r"\*").replace("[", r"\[").replace("]", r"\]").replace(
            "+", r"\+")
        new_exec_vnode = self.job1_exec_vnode.replace(
            "%s:mem=1048576kb:ncpus=1+" % (self.n5,), "")
        new_exec_vnode_esc = new_exec_vnode.replace(
            "[", r"\[").replace("]", r"\]").replace(
            "(", r"\(").replace(")", r"\)").replace("+", r"\+")
        self.server.expect(JOB, {'job_state': 'R',
                                 'Resource_List.mem': '5gb',
                                 'Resource_List.ncpus': 7,
                                 'Resource_List.select': newsel,
                                 'Resource_List.place': self.job1_place,
                                 'Resource_List.nodect': 3,
                                 'schedselect': newsel,
                                 'exec_host': new_exec_host,
                                 'exec_vnode': new_exec_vnode}, id=jid)

        # Check account update ('u') record
        self.match_accounting_log('u', jid, self.job1_exec_host_esc,
                                  self.job1_exec_vnode_esc, "6gb", 8, 3,
                                  self.job1_place,
                                  self.job1_sel_esc)

        # Check to make sure 'c' (next) record got generated
        self.match_accounting_log('c', jid, self.job1_exec_host_esc,
                                  new_exec_vnode_esc, "5242880kb",
                                  7, 3, self.job1_place, newsel_esc)

        # Check various vnode status.
        jobs_assn1 = "%s/0" % (jid,)
        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],
                                'job-busy', jobs_assn1, 1, '1048576kb')

        self.match_vnode_status([self.n3, self.n6], 'job-busy', jobs_assn1, 1,
                                '0kb')

        jobs_assn2 = "%s/0, %s/1" % (jid, jid)
        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,
                                2, '2097152kb')

        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')

        self.server.expect(SERVER, {'resources_assigned.ncpus': 8,
                                    'resources_assigned.mem': '6291456kb'})
        self.server.expect(QUEUE, {'resources_assigned.ncpus': 8,
                                   'resources_assigned.mem': '6291456kb'},
                           id="workq")
        self.assertTrue(
            self.pbs_nodefile_match_exec_host(jid, new_exec_host))

        self.server.delete(jid)

        # Check account phased end ('e') record
        self.match_accounting_log('e', jid, new_exec_host_esc,
                                  new_exec_vnode_esc,
                                  "5242880kb", 7, 3,
                                  self.job1_place,
                                  newsel_esc)

        # Check to make sure 'E' (end of job) record got generated
        self.match_accounting_log('E', jid, self.job1_exec_host_esc,
                                  self.job1_exec_vnode_esc, "6gb",
                                  8, 3, self.job1_place, self.job1_sel_esc)

    def test_release_nodes2_extra(self):
        """
        Test:
             Like test_release_nodes2 except instead of the super-chunk
             and chunks getting only ncpus and mem values, additional
             resources mpiprocs and ompthreads are also requested and
             assigned:

             For example:

               qsub -l select="ncpus=3:mem=2gb:mpiprocs=3:ompthreads=2+
                               ncpus=3:mem=2gb:mpiprocs=3:ompthreads=3+
                               ncpus=2:mem=2gb:mpiprocs=2:ompthreads=2"

             We want to make sure the ompthreads and mpiprocs values are
             preserved in the new exec_vnode, and that in the $PBS_NODEFILE,
             the host names are duplicated according to the number of
             mpiprocs. For example, if <n1> is assigned to first
             chunk, with mpiprocs=3, <n1> will appear 3 times in
             $PBS_NODEFILE.
        """
        jid = self.create_and_submit_job('job1_extra_res')

        self.server.expect(JOB, {'job_state': 'R',
                                 'Resource_List.mem': '6gb',
                                 'Resource_List.ncpus': 8,
                                 'Resource_List.nodect': 3,
                                 'Resource_List.select':
                                 self.job1_extra_res_select,
                                 'Resource_List.place':
                                 self.job1_extra_res_place,
                                 'schedselect':
                                 self.job1_extra_res_schedselect,
                                 'exec_host':
                                 self.job1_extra_res_exec_host,
                                 'exec_vnode':
                                 self.job1_extra_res_exec_vnode}, id=jid)

        # Check various vnode status.
        jobs_assn1 = "%s/0" % (jid,)
        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],
                                'job-busy', jobs_assn1, 1, '1048576kb')

        self.match_vnode_status([self.n3, self.n6],
                                'job-busy', jobs_assn1, 1, '0kb')

        jobs_assn2 = "%s/0, %s/1" % (jid, jid)
        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,
                                2, '2097152kb')

        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')

        # pbs_nodefile_match_exec_host() takes care of verifying that the
        # host names appear according to the number of mpiprocs assigned
        # to the chunk.
        self.assertTrue(
            self.pbs_nodefile_match_exec_host(
                jid, self.job1_extra_res_exec_host,
                self.job1_extra_res_schedselect))

        # Run pbs_release_nodes as root
        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n5]
        ret = self.server.du.run_cmd(self.server.hostname, cmd,
                                     sudo=True)
        self.assertEqual(ret['rc'], 0)

        # Verify mom_logs
        self.momA.log_match("Job;%s;%s.+cput=.+ mem=.+" % (
            jid, self.hostB), n=10,
            regexp=True,
            existence=False, max_attempts=5, interval=1)

        self.momA.log_match("Job;%s;%s.+cput=.+ mem=.+" % (
            jid, self.hostC), n=10,
            regexp=True,
            existence=False, max_attempts=5, interval=1)

        # momB and momC's hosts will not get DELETE_JOB2 request since
        # not all their vnodes have been released yet from the job.
        self.momB.log_match("Job;%s;DELETE_JOB2 received" % (jid,), n=20,
                            existence=False, max_attempts=5, interval=1)

        self.momC.log_match("Job;%s;DELETE_JOB2 received" % (jid,), n=20,
                            existence=False, max_attempts=5, interval=1)

        # Verify remaining job resources.
        sel_esc = self.job1_extra_res_select.replace("+", r"\+")
        exec_host_esc = self.job1_extra_res_exec_host.replace(
            "*", r"\*").replace("[", r"\[").replace("]", r"\]").replace(
            "+", r"\+")
        exec_vnode_esc = self.job1_extra_res_exec_vnode.replace(
            "[", r"\[").replace("]", r"\]").replace(
            "(", r"\(").replace(")", r"\)").replace("+", r"\+")
        newsel = "1:mem=2097152kb:ncpus=3:mpiprocs=3:ompthreads=2+" + \
                 "1:mem=1048576kb:ncpus=2:mpiprocs=3:ompthreads=3+" + \
                 "1:ncpus=2:mem=2097152kb:mpiprocs=2:ompthreads=2"

        newsel_esc = newsel.replace("+", r"\+")
        new_exec_host = self.job1_extra_res_exec_host
        new_exec_host_esc = self.job1_extra_res_exec_host.replace(
            "*", r"\*").replace("[", r"\[").replace("]", r"\]").replace(
            "+", r"\+")
        new_exec_vnode = self.job1_extra_res_exec_vnode.replace(
            "%s:mem=1048576kb:ncpus=1+" % (self.n5,), "")
        new_exec_vnode_esc = new_exec_vnode.replace(
            "[", r"\[").replace("]", r"\]").replace(
            "(", r"\(").replace(")", r"\)").replace("+", r"\+")
        self.server.expect(JOB,
                           {'job_state': 'R',
                            'Resource_List.mem': '5gb',
                            'Resource_List.ncpus': 7,
                            'Resource_List.select': newsel,
                            'Resource_List.place': self.job1_extra_res_place,
                            'Resource_List.nodect': 3,
                            'schedselect': newsel,
                            'exec_host': new_exec_host,
                            'exec_vnode': new_exec_vnode}, id=jid)

        # Check account update ('u') record
        self.match_accounting_log('u', jid, exec_host_esc,
                                  exec_vnode_esc, "6gb", 8, 3,
                                  self.job1_extra_res_place,
                                  sel_esc)

        # Check to make sure 'c' (next) record got generated
        self.match_accounting_log('c', jid, new_exec_host_esc,
                                  new_exec_vnode_esc, "5242880kb",
                                  7, 3, self.job1_extra_res_place, newsel_esc)

        # Check various vnode status.
        jobs_assn1 = "%s/0" % (jid,)
        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],
                                'job-busy', jobs_assn1, 1, '1048576kb')

        self.match_vnode_status([self.n3, self.n6], 'job-busy', jobs_assn1, 1,
                                '0kb')

        jobs_assn2 = "%s/0, %s/1" % (jid, jid)
        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,
                                2, '2097152kb')

        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')

        self.server.expect(SERVER, {'resources_assigned.ncpus': 8,
                                    'resources_assigned.mem': '6291456kb'})
        self.server.expect(QUEUE, {'resources_assigned.ncpus': 8,
                                   'resources_assigned.mem': '6291456kb'},
                           id="workq")

        self.assertTrue(
            self.pbs_nodefile_match_exec_host(jid, new_exec_host, newsel))

        self.server.delete(jid)

        # Check account phased end ('e') record
        self.match_accounting_log('e', jid, new_exec_host_esc,
                                  new_exec_vnode_esc,
                                  "5242880kb", 7, 3,
                                  self.job1_extra_res_place,
                                  newsel_esc)

        # Check to make sure 'E' (end of job) record got generated
        self.match_accounting_log('E', jid, exec_host_esc,
                                  exec_vnode_esc, "6gb",
                                  8, 3, self.job1_extra_res_place,
                                  sel_esc)

    @timeout(400)
    def test_release_nodes3(self):
        """
        Test:
             Given: a job that has been submitted with a select spec
             of 2 super-chunks of ncpus=3 and mem=2gb each,
             and 1 chunk of ncpus=2 and mem=2gb, along with
             place spec of "scatter", resulting in an

             exec_vnode=
                  (<n1>+<n2>+<n3>)+(<n4>+<n5>+<n6>)+(<n7>)

             Executing pbs_release_nodes -j <job-id> <n6>
             results in:
             1. node <n6> no longer appearing in job's
                exec_vnode value,
             2. resources associated with the
                node are taken out of job's Resources_List.*,
                schedselect values,
             3. Since node <n6> is just one of the vnodes in the
                host assigned to the second super-chunk, the node
                still won't accept new jobs until all the other
                allocated vnodes from the same mom host are released.
                The resources then assigned to the job from
                node <n6> continue to be assigned including
                corresponding licenses.

             NOTE: This is testing to make sure the position of <n6>
             in the exec_vnode string (right end of a super-chunk) will
             not break the recreation of the attribute value after
             release.
        """
        jid = self.create_and_submit_job('job1_5')

        self.server.expect(JOB, {'job_state': 'R',
                                 'Resource_List.mem': '6gb',
                                 'Resource_List.ncpus': 8,
                                 'Resource_List.nodect': 3,
                                 'Resource_List.select': self.job1_select,
                                 'Resource_List.place': self.job1_place,
                                 'schedselect': self.job1_schedselect,
                                 'exec_host': self.job1_exec_host,
                                 'exec_vnode': self.job1_exec_vnode}, id=jid)

        # Check various vnode status.
        jobs_assn1 = "%s/0" % (jid,)
        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],
                                'job-busy', jobs_assn1, 1, '1048576kb')

        self.match_vnode_status([self.n3, self.n6],
                                'job-busy', jobs_assn1, 1, '0kb')

        jobs_assn2 = "%s/0, %s/1" % (jid, jid)
        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,
                                2, '2097152kb')

        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')

        self.assertTrue(
            self.pbs_nodefile_match_exec_host(jid, self.job1_exec_host))

        # Run pbs_release_nodes as root
        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n6]
        ret = self.server.du.run_cmd(self.server.hostname, cmd,
                                     sudo=True)
        self.assertEqual(ret['rc'], 0)

        # Verify mom_logs
        self.momA.log_match("Job;%s;%s.+cput=.+ mem=.+" % (
            jid, self.hostB), n=10,
            regexp=True,
            existence=False, max_attempts=5, interval=1)

        self.momA.log_match("Job;%s;%s.+cput=.+ mem=.+" % (
            jid, self.hostC), n=10,
            regexp=True,
            existence=False, max_attempts=5, interval=1)

        # momB and momC's hosts will not get DELETE_JOB2 request since
        # not all their vnodes have been released yet from the job.
        self.momB.log_match("Job;%s;DELETE_JOB2 received" % (jid,), n=20,
                            existence=False, max_attempts=5, interval=1)

        self.momC.log_match("Job;%s;DELETE_JOB2 received" % (jid,), n=20,
                            existence=False, max_attempts=5, interval=1)

        # Verify remaining job resources.
        exec_host_esc = self.job1_exec_host.replace(
            "*", r"\*").replace("[", r"\[").replace("]", r"\]").replace(
            "+", r"\+")
        exec_vnode_esc = self.job1_exec_vnode.replace("[", r"\[").replace(
            "]", r"\]").replace("(", r"\(").replace(")", r"\)").replace(
            "+", r"\+")

        newsel = "1:mem=2097152kb:ncpus=3+1:mem=2097152kb:ncpus=2+" + \
                 "1:ncpus=2:mem=2097152kb"
        newsel_esc = newsel.replace("+", r"\+")
        new_exec_host = self.job1_exec_host
        new_exec_host_esc = self.job1_exec_host.replace(
            "*", r"\*").replace("[", r"\[").replace("]", r"\]").replace(
            "+", r"\+")
        new_exec_vnode = self.job1_exec_vnode.replace(
            "+%s:ncpus=1" % (self.n6,), "")
        new_exec_vnode_esc = new_exec_vnode.replace("[", r"\[").replace(
            "]", r"\]").replace(
            "(", r"\(").replace(")", r"\)").replace("+", r"\+")
        self.server.expect(JOB, {'job_state': 'R',
                                 'Resource_List.mem': '6gb',
                                 'Resource_List.ncpus': 7,
                                 'Resource_List.select': newsel,
                                 'Resource_List.place': self.job1_place,
                                 'Resource_List.nodect': 3,
                                 'schedselect': newsel,
                                 'exec_host': new_exec_host,
                                 'exec_vnode': new_exec_vnode}, id=jid)

        # Check account update ('u') record
        self.match_accounting_log('u', jid, self.job1_exec_host_esc,
                                  self.job1_exec_vnode_esc, "6gb", 8, 3,
                                  self.job1_place,
                                  self.job1_sel_esc)

        # Check to make sure 'c' (next) record got generated
        self.match_accounting_log('c', jid, self.job1_exec_host_esc,
                                  new_exec_vnode_esc, "6291456kb",
                                  7, 3, self.job1_place, newsel_esc)

        # Check various vnode status.
        jobs_assn1 = "%s/0" % (jid,)
        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],
                                'job-busy', jobs_assn1, 1, '1048576kb')

        self.match_vnode_status([self.n3, self.n6], 'job-busy', jobs_assn1, 1,
                                '0kb')

        jobs_assn2 = "%s/0, %s/1" % (jid, jid)
        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,
                                2, '2097152kb')

        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')

        self.server.expect(SERVER, {'resources_assigned.ncpus': 8,
                                    'resources_assigned.mem': '6291456kb'})
        self.server.expect(QUEUE, {'resources_assigned.ncpus': 8,
                                   'resources_assigned.mem': '6291456kb'},
                           id="workq")

        self.assertTrue(
            self.pbs_nodefile_match_exec_host(jid, new_exec_host))

        self.server.delete(jid)

        # Check account phased end ('e') record
        self.match_accounting_log('e', jid, new_exec_host_esc,
                                  new_exec_vnode_esc,
                                  "6291456kb", 7, 3,
                                  self.job1_place,
                                  newsel_esc)

        # Check to make sure 'E' (end of job) record got generated
        self.match_accounting_log('E', jid, self.job1_exec_host_esc,
                                  self.job1_exec_vnode_esc, "6gb",
                                  8, 3, self.job1_place, self.job1_sel_esc)

    @timeout(400)
    def test_release_nodes3_extra(self):
        """
        Test:
             Like test_release_nodes3 except instead of the super-chunk
             and chunks getting only ncpus and mem values, additional
             resources mpiprocs and ompthreads are also requested and
             assigned:

             For example:

               qsub -l select="ncpus=3:mem=2gb:mpiprocs=3:ompthreads=2+
                               ncpus=3:mem=2gb:mpiprocs=3:ompthreads=3+
                               ncpus=2:mem=2gb:mpiprocs=2:ompthreads=2"

             We want to make sure the ompthreads and mpiprocs values are
             preserved in the new exec_vnode, and that in the $PBS_NODEFILE,
             the host names are duplicated according to the number of
             mpiprocs. For example, if <n1> is assigned to first
             chunk, with mpiprocs=3, <n1> will appear 3 times in
             $PBS_NODEFILE.
        """
        jid = self.create_and_submit_job('job1_extra_res')

        self.server.expect(JOB, {'job_state': 'R',
                                 'Resource_List.mem': '6gb',
                                 'Resource_List.ncpus': 8,
                                 'Resource_List.nodect': 3,
                                 'Resource_List.select':
                                 self.job1_extra_res_select,
                                 'Resource_List.place':
                                 self.job1_extra_res_place,
                                 'schedselect':
                                 self.job1_extra_res_schedselect,
                                 'exec_host':
                                 self.job1_extra_res_exec_host,
                                 'exec_vnode':
                                 self.job1_extra_res_exec_vnode}, id=jid)

        # Check various vnode status.
        jobs_assn1 = "%s/0" % (jid,)
        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],
                                'job-busy', jobs_assn1, 1, '1048576kb')

        self.match_vnode_status([self.n3, self.n6],
                                'job-busy', jobs_assn1, 1, '0kb')

        jobs_assn2 = "%s/0, %s/1" % (jid, jid)
        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,
                                2, '2097152kb')

        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')

        # pbs_nodefile_match_exec_host() takes care of verifying that the
        # host names appear according to the number of mpiprocs assigned
        # to the chunk.
        self.assertTrue(
            self.pbs_nodefile_match_exec_host(
                jid, self.job1_extra_res_exec_host,
                self.job1_extra_res_schedselect))

        # Run pbs_release_nodes as root
        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n6]
        ret = self.server.du.run_cmd(self.server.hostname, cmd,
                                     sudo=True)
        self.assertEqual(ret['rc'], 0)

        # Verify mom_logs
        self.momA.log_match("Job;%s;%s.+cput=.+ mem=.+" % (
            jid, self.hostB), n=10,
            regexp=True,
            existence=False, max_attempts=5, interval=1)

        self.momA.log_match("Job;%s;%s.+cput=.+ mem=.+" % (
            jid, self.hostC), n=10,
            regexp=True,
            existence=False, max_attempts=5, interval=1)

        # momB and momC's hosts will not get DELETE_JOB2 request since
        # not all their vnodes have been released yet from the job.
        self.momB.log_match("Job;%s;DELETE_JOB2 received" % (jid,), n=20,
                            existence=False, max_attempts=5, interval=1)

        self.momC.log_match("Job;%s;DELETE_JOB2 received" % (jid,), n=20,
                            existence=False, max_attempts=5, interval=1)

        # Verify remaining job resources.
        sel_esc = self.job1_extra_res_select.replace("+", r"\+")
        exec_host_esc = self.job1_extra_res_exec_host.replace(
            "*", r"\*").replace("[", r"\[").replace("]", r"\]").replace(
            "+", r"\+")
        exec_vnode_esc = self.job1_extra_res_exec_vnode.replace(
            "[", r"\[").replace(
            "]", r"\]").replace("(", r"\(").replace(")", r"\)").replace(
            "+", r"\+")

        newsel = "1:mem=2097152kb:ncpus=3:mpiprocs=3:ompthreads=2+" + \
                 "1:mem=2097152kb:ncpus=2:mpiprocs=3:ompthreads=3+" + \
                 "1:ncpus=2:mem=2097152kb:mpiprocs=2:ompthreads=2"
        newsel_esc = newsel.replace("+", r"\+")
        new_exec_host = self.job1_extra_res_exec_host
        new_exec_host_esc = self.job1_extra_res_exec_host.replace(
            "*", r"\*").replace("[", r"\[").replace("]", r"\]").replace(
            "+", r"\+")
        new_exec_vnode = self.job1_extra_res_exec_vnode.replace(
            "+%s:ncpus=1" % (self.n6,), "")
        new_exec_vnode_esc = new_exec_vnode.replace(
            "[", r"\[").replace("]", r"\]").replace(
            "(", r"\(").replace(")", r"\)").replace("+", r"\+")
        self.server.expect(JOB,
                           {'job_state': 'R',
                            'Resource_List.mem': '6gb',
                            'Resource_List.ncpus': 7,
                            'Resource_List.select': newsel,
                            'Resource_List.place':
                            self.job1_extra_res_place,
                            'Resource_List.nodect': 3,
                            'schedselect': newsel,
                            'exec_host': new_exec_host,
                            'exec_vnode': new_exec_vnode}, id=jid)

        # Check account update ('u') record
        self.match_accounting_log('u', jid, exec_host_esc,
                                  exec_vnode_esc, "6gb", 8, 3,
                                  self.job1_extra_res_place,
                                  sel_esc)

        # Check to make sure 'c' (next) record got generated
        self.match_accounting_log('c', jid, new_exec_host_esc,
                                  new_exec_vnode_esc, "6291456kb",
                                  7, 3, self.job1_extra_res_place, newsel_esc)

        # Check various vnode status.
        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],
                                'job-busy', jobs_assn1, 1, '1048576kb')

        self.match_vnode_status([self.n3, self.n6], 'job-busy', jobs_assn1, 1,
                                '0kb')

        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,
                                2, '2097152kb')

        self.server.expect(SERVER, {'resources_assigned.ncpus': 8,
                                    'resources_assigned.mem': '6291456kb'})
        self.server.expect(QUEUE, {'resources_assigned.ncpus': 8,
                                   'resources_assigned.mem': '6291456kb'},
                           id="workq")

        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')

        self.assertTrue(
            self.pbs_nodefile_match_exec_host(jid, new_exec_host, newsel))

        self.server.delete(jid)

        # Check account phased end ('e') record
        self.match_accounting_log('e', jid, new_exec_host_esc,
                                  new_exec_vnode_esc,
                                  "6291456kb", 7, 3,
                                  self.job1_extra_res_place,
                                  newsel_esc)

        # Check to make sure 'E' (end of job) record got generated
        self.match_accounting_log('E', jid, exec_host_esc,
                                  exec_vnode_esc, "6gb",
                                  8, 3, self.job1_extra_res_place,
                                  sel_esc)

    def test_release_nodes4(self):
        """
        Test:
             Given: a job that has been submitted with a select spec
             of 2 super-chunks of ncpus=3 and mem=2gb each,
             and 1 chunk of ncpus=2 and mem=2gb, along with
             place spec of "scatter", resulting in an

             exec_vnode=
                  (<n1>+<n2>+<n3>)+(<n4>+<n5>+<n6>)+(<n7>)

             Executing pbs_release_nodes -j <job-id> <n4> <n5> <n7>
             results in:
             1. nodes <n4>, <n5>, and <n7> no longer appearing in
                job's exec_vnode value,
             2. resources associated with the released
                nodes are taken out of job's Resources_List.*,
                schedselect values,
             3. Since nodes <n4> and <n5> are some of the vnodes in the
                host assigned to the second super-chunk, the node
                still won't accept new jobs until all the other
                allocated vnodes (<n6>) from the same mom host are
                released.
             4. The resources then assigned to the job from
                nodes <n4> and <n5> continue to be assigned including
                corresponding licenses.
             5. <n7> is the only vnode assigned to the host mapped
                to the third chunk so it's fully deallocated and
                its assigned resources are removed from the job.
        """
        jid = self.create_and_submit_job('job1_5')

        self.server.expect(JOB, {'job_state': 'R',
                                 'Resource_List.mem': '6gb',
                                 'Resource_List.ncpus': 8,
                                 'Resource_List.nodect': 3,
                                 'Resource_List.select': self.job1_select,
                                 'Resource_List.place': self.job1_place,
                                 'schedselect': self.job1_schedselect,
                                 'exec_host': self.job1_exec_host,
                                 'exec_vnode': self.job1_exec_vnode}, id=jid)

        # Check various vnode status.
        jobs_assn1 = "%s/0" % (jid,)
        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],
                                'job-busy', jobs_assn1, 1, '1048576kb')

        self.match_vnode_status([self.n3, self.n6],
                                'job-busy', jobs_assn1, 1, '0kb')

        jobs_assn2 = "%s/0, %s/1" % (jid, jid)
        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,
                                2, '2097152kb')

        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')

        self.assertTrue(
            self.pbs_nodefile_match_exec_host(jid, self.job1_exec_host))

        # Run pbs_release_nodes as root
        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n4, self.n5,
               self.n7]
        ret = self.server.du.run_cmd(self.server.hostname, cmd,
                                     sudo=True)
        self.assertEqual(ret['rc'], 0)

        # momB's host will not get job summary reported but
        # momC's host will get the job summary since all vnodes
        # from the host have been released.
        self.momA.log_match("Job;%s;%s.+cput=.+ mem=.+" % (
            jid, self.hostB), n=10, regexp=True, existence=False,
            max_attempts=5, interval=1)

        self.momA.log_match("Job;%s;%s.+cput=.+ mem=.+" % (
            jid, self.hostC), n=10, regexp=True)

        # momB's host will not get DELETE_JOB2 request since
        # not all its vnodes have been released yet from the job.
        # momC's host will get DELETE_JOB2 request since its sole vnode
        # <n7> has been released from the job.
        self.momB.log_match("Job;%s;DELETE_JOB2 received" % (jid,), n=20,
                            existence=False, max_attempts=5, interval=1)

        self.momC.log_match("Job;%s;DELETE_JOB2 received" % (jid,), n=20)

        # Ensure the 'fib' process is gone on hostC when DELETE_JOB request
        # is received
        self.server.pu.get_proc_info(
            self.momC.hostname, ".*fib.*", None, regexp=True)
        self.assertEqual(len(self.server.pu.processes), 0)

        # Verify remaining job resources.
        sel_esc = self.job1_select.replace("+", r"\+")
        exec_host_esc = self.job1_exec_host.replace(
            "*", r"\*").replace("[", r"\[").replace("]", r"\]").replace(
            "+", r"\+")
        exec_vnode_esc = self.job1_exec_vnode.replace("[", r"\[").replace(
            "]", r"\]").replace("(", r"\(").replace(")", r"\)").replace(
            "+", r"\+")

        newsel = "1:mem=2097152kb:ncpus=3+1:ncpus=1"
        newsel_esc = newsel.replace("+", r"\+")
        new_exec_host = self.job1_exec_host.replace(
            "+%s/0*2" % (self.n7,), "")
        new_exec_host_esc = new_exec_host.replace(
            "*", r"\*").replace("[", r"\[").replace("]", r"\]").replace(
            "+", r"\+")
        new_exec_vnode = self.job1_exec_vnode.replace(
            "%s:mem=1048576kb:ncpus=1+" % (self.n4,), "")
        new_exec_vnode = new_exec_vnode.replace(
            "%s:mem=1048576kb:ncpus=1+" % (self.n5,), "")
        new_exec_vnode = new_exec_vnode.replace(
            "+(%s:ncpus=2:mem=2097152kb)" % (self.n7,), "")
        new_exec_vnode_esc = new_exec_vnode.replace(
            "[", r"\[").replace("]", r"\]").replace(
            "(", r"\(").replace(")", r"\)").replace("+", r"\+")
        self.server.expect(JOB, {'job_state': 'R',
                                 'Resource_List.mem': '2gb',
                                 'Resource_List.ncpus': 4,
                                 'Resource_List.select': newsel,
                                 'Resource_List.place': self.job1_place,
                                 'Resource_List.nodect': 2,
                                 'schedselect': newsel,
                                 'exec_host': new_exec_host,
                                 'exec_vnode': new_exec_vnode}, id=jid)

        # Though the job is listed with ncpus=4 taking away released vnode
        # <n4> (1 cpu), <n5> (1 cpu), <n7> (2 cpus),
        # only <n7> got released.
<n4> and <n5> are part of a super\n        # chunk that wasn't fully released.\n\n        # Check account update ('u') record\n        self.match_accounting_log('u', jid, self.job1_exec_host_esc,\n                                  self.job1_exec_vnode_esc, \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  self.job1_sel_esc)\n\n        # Check to make sure 'c' (next) record got generated\n        self.match_accounting_log('c', jid, new_exec_host_esc,\n                                  new_exec_vnode_esc, \"2097152kb\",\n                                  4, 2, self.job1_place, newsel_esc)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6], 'job-busy', jobs_assn1, 1,\n                                '0kb')\n\n        self.match_vnode_status([self.n0, self.n7, self.n8, self.n9, self.n10],\n                                'free')\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 6,\n                                    'resources_assigned.mem': '4194304kb'})\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 6,\n                                   'resources_assigned.mem': '4194304kb'},\n                           id=\"workq\")\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, new_exec_host))\n\n        self.server.delete(jid)\n\n        # Check account phased end ('e') record\n        self.match_accounting_log('e', jid, new_exec_host_esc,\n                                  new_exec_vnode_esc,\n                                  \"2097152kb\", 4, 2,\n                                  self.job1_place,\n                                  newsel_esc)\n\n        # Check to make sure 'E' (end of job) record got generated\n        
self.match_accounting_log('E', jid, self.job1_exec_host_esc,\n                                  self.job1_exec_vnode_esc, \"6gb\",\n                                  8, 3, self.job1_place, self.job1_sel_esc)\n\n    def test_release_nodes4_extra(self):\n        \"\"\"\n        Test:\n             Like test_release_nodes4 except instead of the super-chunk\n             and chunks getting only ncpus and mem values, additional\n             resources mpiprocs and ompthreads are also requested and\n             assigned:\n\n             For example:\n\n               qsub -l select=\"ncpus=3:mem=2gb:mpiprocs=3:ompthreads=2+\n                               ncpus=3:mem=2gb:mpiprocs=3:ompthreads=3+\n                               ncpus=2:mem=2gb:mpiprocs=2:ompthreads=2\"\n\n             We want to make sure the ompthreads and mpiprocs values are\n             preserved in the new exec_vnode, and that in the $PBS_NODEFILE,\n             the host names are duplicated according to the number of\n             mpiprocs. 
For example, if <n1> is assigned to the first\n             chunk, with mpiprocs=3, <n1> will appear 3 times in\n             $PBS_NODEFILE.\n        \"\"\"\n        jid = self.create_and_submit_job('job1_extra_res')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select':\n                                 self.job1_extra_res_select,\n                                 'Resource_List.place':\n                                 self.job1_extra_res_place,\n                                 'schedselect':\n                                 self.job1_extra_res_schedselect,\n                                 'exec_host':\n                                 self.job1_extra_res_exec_host,\n                                 'exec_vnode':\n                                 self.job1_extra_res_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        # The pbs_nodefile_match_exec_host() function takes care of\n        # verifying that the host names appear according to the number of\n        # mpiprocs assigned to the chunk.\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(\n                jid, self.job1_extra_res_exec_host,\n                
self.job1_extra_res_schedselect))\n\n        # Run pbs_release_nodes as root\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n4, self.n5,\n               self.n7]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     sudo=True)\n        self.assertEqual(ret['rc'], 0)\n\n        # momB's host will not get job summary reported but\n        # momC's host will get the job summary since all vnodes\n        # from the host have been released.\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostB), n=10, regexp=True, existence=False,\n            max_attempts=5, interval=1)\n\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostC), n=10, regexp=True)\n\n        # momB's host will not get DELETE_JOB2 request since\n        # not all their vnodes have been released yet from the job.\n        # momC will get DELETE_JOB2 request since sole vnode\n        # <n7> has been released from the job.\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            existence=False, max_attempts=5, interval=1)\n\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20)\n\n        # Ensure the 'fib' process is gone from hostC when DELETE_JOB request\n        # received\n        self.server.pu.get_proc_info(\n            self.momC.hostname, \".*fib.*\", None, regexp=True)\n        self.assertEqual(len(self.server.pu.processes), 0)\n\n        # Verify remaining job resources.\n        sel_esc = self.job1_extra_res_select.replace(\"+\", r\"\\+\")\n        exec_host_esc = self.job1_extra_res_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        exec_vnode_esc = self.job1_extra_res_exec_vnode.replace(\n            \"[\", r\"\\[\").replace(\n            \"]\", r\"\\]\").replace(\"(\", 
r\"\\(\").replace(\")\", r\"\\)\").replace(\n                    \"+\", r\"\\+\")\n\n        newsel = \"1:mem=2097152kb:ncpus=3:mpiprocs=3:ompthreads=2+\" + \\\n                 \"1:ncpus=1:mpiprocs=3:ompthreads=3\"\n        newsel_esc = newsel.replace(\"+\", r\"\\+\")\n        new_exec_host = self.job1_extra_res_exec_host.replace(\n            \"+%s/0*2\" % (self.n7,), \"\")\n        new_exec_host_esc = new_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        new_exec_vnode = self.job1_extra_res_exec_vnode.replace(\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n4,), \"\")\n        new_exec_vnode = new_exec_vnode.replace(\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n5,), \"\")\n        new_exec_vnode = new_exec_vnode.replace(\n            \"+(%s:ncpus=2:mem=2097152kb)\" % (self.n7,), \"\")\n        new_exec_vnode_esc = new_exec_vnode.replace(\"[\", r\"\\[\").replace(\n            \"]\", r\"\\]\").replace(\n            \"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\"+\", r\"\\+\")\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '2gb',\n                                 'Resource_List.ncpus': 4,\n                                 'Resource_List.select': newsel,\n                                 'Resource_List.place':\n                                 self.job1_extra_res_place,\n                                 'Resource_List.nodect': 2,\n                                 'schedselect': newsel,\n                                 'exec_host': new_exec_host,\n                                 'exec_vnode': new_exec_vnode}, id=jid)\n\n        # Though the job is listed with ncpus=4 taking away released vnode\n        # <n4> (1 cpu), <n5> (1 cpu), <n7> (2 cpus),\n        # only <n7> got released.  
<n4> and <n5> are part of a super\n        # chunk that wasn't fully released.\n\n        # Check account update ('u') record\n        self.match_accounting_log('u', jid, exec_host_esc,\n                                  exec_vnode_esc, \"6gb\", 8, 3,\n                                  self.job1_extra_res_place,\n                                  sel_esc)\n\n        # Check to make sure 'c' (next) record got generated\n        self.match_accounting_log('c', jid, new_exec_host_esc,\n                                  new_exec_vnode_esc, \"2097152kb\",\n                                  4, 2, self.job1_extra_res_place, newsel_esc)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6], 'job-busy', jobs_assn1, 1,\n                                '0kb')\n\n        self.match_vnode_status([self.n0, self.n7, self.n8, self.n9, self.n10],\n                                'free')\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 6,\n                                    'resources_assigned.mem': '4194304kb'})\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 6,\n                                   'resources_assigned.mem': '4194304kb'},\n                           id=\"workq\")\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, new_exec_host, newsel))\n\n        self.server.delete(jid)\n\n        # Check account phased end ('e') record\n        self.match_accounting_log('e', jid, new_exec_host_esc,\n                                  new_exec_vnode_esc,\n                                  \"2097152kb\", 4, 2,\n                                  self.job1_extra_res_place,\n                                  newsel_esc)\n\n        # Check to make sure 'E' (end of job) record got generated\n        
self.match_accounting_log('E', jid, exec_host_esc,\n                                  exec_vnode_esc, \"6gb\",\n                                  8, 3, self.job1_extra_res_place,\n                                  sel_esc)\n\n    def test_release_nodes5(self):\n        \"\"\"\n        Test:\n             Given: a job that has been submitted with a select spec\n             of 2 super-chunks of ncpus=3 and mem=2gb each,\n             and 1 chunk of ncpus=2 and mem=2gb, along with\n             place spec of \"scatter\", resulting in an\n\n             exec_vnode=\n                  (<n1>+<n2>+<n3>)+(<n4>+<n5>+<n6>)+(<n7>)\n\n             Executing pbs_release_nodes -j <job-id> <n5> <n6> <n7>\n             results in:\n             1. nodes <n5>, <n6>, and <n7> are no longer appearing in\n                job's exec_vnode value,\n             2. resources associated with the released\n                nodes are taken out of job's Resources_List.*,\n                schedselect values,\n             3. Since nodes <n5> and <n6> are some of the vnodes in the\n                host assigned to the second super-chunk, the node\n                still won't accept new jobs until all the other\n                allocated vnodes (<n4>) from the same mom host are\n                released.\n             4. The resources then assigned to the job from\n                nodes <n5> and <n6> continue to be assigned including\n                corresponding licenses.\n             5. 
<n7> is the only vnode assigned to the host mapped\n                to the third chunk so it's fully deallocated and\n                its assigned resources are removed from the job.\n        \"\"\"\n        jid = self.create_and_submit_job('job1_5')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1_exec_host))\n\n        # Run pbs_release_nodes as root\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n5, self.n6,\n               self.n7]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     sudo=True)\n        self.assertEqual(ret['rc'], 0)\n\n        # momB's host will not get job summary reported but\n        # momC's host will get the job summary since all vnodes\n        # from the 
host have been released.\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostB), n=10, regexp=True, existence=False,\n            max_attempts=5, interval=1)\n\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostC), n=10, regexp=True)\n\n        # momB's host will not get DELETE_JOB2 request since\n        # not all their vnodes have been released yet from the job.\n        # momC will get DELETE_JOB2 request since sole vnode\n        # <n7> has been released from the job.\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            existence=False, max_attempts=5, interval=1)\n\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20)\n\n        # Ensure the 'fib' process is gone from hostC when DELETE_JOB request\n        # received\n        self.server.pu.get_proc_info(\n            self.momC.hostname, \".*fib.*\", None, regexp=True)\n        self.assertEqual(len(self.server.pu.processes), 0)\n\n        # Verify remaining job resources.\n        sel_esc = self.job1_select.replace(\"+\", r\"\\+\")\n        exec_host_esc = self.job1_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        exec_vnode_esc = self.job1_exec_vnode.replace(\"[\", r\"\\[\").replace(\n            \"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\n                    \"+\", r\"\\+\")\n        newsel = \"1:mem=2097152kb:ncpus=3+1:mem=1048576kb:ncpus=1\"\n\n        newsel_esc = newsel.replace(\"+\", r\"\\+\")\n        new_exec_host = self.job1_exec_host.replace(\n            \"+%s/0*2\" % (self.n7,), \"\")\n        new_exec_host_esc = new_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        new_exec_vnode = 
self.job1_exec_vnode.replace(\n            \"+%s:mem=1048576kb:ncpus=1\" % (self.n5,), \"\")\n        new_exec_vnode = new_exec_vnode.replace(\n            \"+%s:ncpus=1\" % (self.n6,), \"\")\n        new_exec_vnode = new_exec_vnode.replace(\n            \"+(%s:ncpus=2:mem=2097152kb)\" % (self.n7,), \"\")\n        new_exec_vnode_esc = \\\n            new_exec_vnode.replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                \"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\"+\", r\"\\+\")\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '3gb',\n                                 'Resource_List.ncpus': 4,\n                                 'Resource_List.select': newsel,\n                                 'Resource_List.place': self.job1_place,\n                                 'Resource_List.nodect': 2,\n                                 'schedselect': newsel,\n                                 'exec_host': new_exec_host,\n                                 'exec_vnode': new_exec_vnode}, id=jid)\n\n        # Though the job is listed with ncpus=4 taking away released vnode\n        # <n5> (1 cpu), <n6> (1 cpu), <n7> (2 cpus),\n        # only <n7> got released.  
<n5> and <n6> are part of a super\n        # chunk that wasn't fully released.\n\n        # Check account update ('u') record\n        self.match_accounting_log('u', jid, self.job1_exec_host_esc,\n                                  self.job1_exec_vnode_esc, \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  self.job1_sel_esc)\n\n        # Check to make sure 'c' (next) record got generated\n        self.match_accounting_log('c', jid, new_exec_host_esc,\n                                  new_exec_vnode_esc, \"3145728kb\",\n                                  4, 2, self.job1_place, newsel_esc)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        # <n5> still job-busy\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        # <n6> still job-busy\n        self.match_vnode_status([self.n3, self.n6], 'job-busy', jobs_assn1, 1,\n                                '0kb')\n\n        # <n7> now free\n        self.match_vnode_status([self.n0, self.n7, self.n8, self.n9, self.n10],\n                                'free')\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 6,\n                                    'resources_assigned.mem': '4194304kb'})\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 6,\n                                   'resources_assigned.mem': '4194304kb'},\n                           id=\"workq\")\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, new_exec_host))\n\n        self.server.delete(jid)\n\n        # Check account phased end ('e') record\n        self.match_accounting_log('e', jid, new_exec_host_esc,\n                                  new_exec_vnode_esc,\n                                  \"3145728kb\", 4, 2,\n                                  self.job1_place,\n                                  newsel_esc)\n\n     
   # Check to make sure 'E' (end of job) record got generated\n        self.match_accounting_log('E', jid, self.job1_exec_host_esc,\n                                  self.job1_exec_vnode_esc, \"6gb\",\n                                  8, 3, self.job1_place, self.job1_sel_esc)\n\n    def test_release_nodes5_extra(self):\n        \"\"\"\n        Test:\n             Like test_release_nodes5 except instead of the super-chunk\n             and chunks getting only ncpus and mem values, additional\n             resources mpiprocs and ompthreads are also requested and\n             assigned:\n\n             For example:\n\n               qsub -l select=\"ncpus=3:mem=2gb:mpiprocs=3:ompthreads=2+\n                               ncpus=3:mem=2gb:mpiprocs=3:ompthreads=3+\n                               ncpus=2:mem=2gb:mpiprocs=2:ompthreads=2\"\n\n             We want to make sure the ompthreads and mpiprocs values are\n             preserved in the new exec_vnode, and that in the $PBS_NODEFILE,\n             the host names are duplicated according to the number of\n             mpiprocs. 
For example, if <n1> is assigned to the first\n             chunk, with mpiprocs=3, <n1> will appear 3 times in\n             $PBS_NODEFILE.\n        \"\"\"\n        jid = self.create_and_submit_job('job1_extra_res')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select':\n                                 self.job1_extra_res_select,\n                                 'Resource_List.place':\n                                 self.job1_extra_res_place,\n                                 'schedselect':\n                                 self.job1_extra_res_schedselect,\n                                 'exec_host':\n                                 self.job1_extra_res_exec_host,\n                                 'exec_vnode':\n                                 self.job1_extra_res_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        # The pbs_nodefile_match_exec_host() function takes care of\n        # verifying that the host names appear according to the number of\n        # mpiprocs assigned to the chunk.\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(\n                jid, self.job1_extra_res_exec_host,\n                
self.job1_extra_res_schedselect))\n\n        # Run pbs_release_nodes as root\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n5, self.n6,\n               self.n7]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     sudo=True)\n        self.assertEqual(ret['rc'], 0)\n\n        # momB's host will not get job summary reported but\n        # momC's host will get the job summary since all vnodes\n        # from the host have been released.\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostB), n=10, regexp=True, existence=False,\n            max_attempts=5, interval=1)\n\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostC), n=10, regexp=True)\n\n        # momB's host will not get DELETE_JOB2 request since\n        # not all their vnodes have been released yet from the job.\n        # momC will get DELETE_JOB2 request since sole vnode\n        # <n7> has been released from the job.\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            existence=False, max_attempts=5, interval=1)\n\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20)\n\n        # Ensure the 'fib' process is gone from hostC when DELETE_JOB request\n        # received\n        self.server.pu.get_proc_info(\n            self.momC.hostname, \".*fib.*\", None, regexp=True)\n        self.assertEqual(len(self.server.pu.processes), 0)\n\n        # Verify remaining job resources.\n        sel_esc = self.job1_extra_res_select.replace(\"+\", r\"\\+\")\n        exec_host_esc = self.job1_extra_res_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        exec_vnode_esc = \\\n            self.job1_extra_res_exec_vnode.replace(\"[\", r\"\\[\").replace(\n                \"]\", r\"\\]\").replace(\"(\", 
r\"\\(\").replace(\")\", r\"\\)\").replace(\n                \"+\", r\"\\+\")\n        newsel = \\\n            \"1:mem=2097152kb:ncpus=3:mpiprocs=3:ompthreads=2+\" + \\\n            \"1:mem=1048576kb:ncpus=1:mpiprocs=3:ompthreads=3\"\n\n        newsel_esc = newsel.replace(\"+\", r\"\\+\")\n        new_exec_host = self.job1_extra_res_exec_host.replace(\n            \"+%s/0*2\" % (self.n7,), \"\")\n        new_exec_host_esc = new_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        new_exec_vnode = self.job1_extra_res_exec_vnode.replace(\n            \"+%s:mem=1048576kb:ncpus=1\" % (self.n5,), \"\")\n        new_exec_vnode = new_exec_vnode.replace(\n            \"+%s:ncpus=1\" % (self.n6,), \"\")\n        new_exec_vnode = new_exec_vnode.replace(\n            \"+(%s:ncpus=2:mem=2097152kb)\" % (self.n7,), \"\")\n        new_exec_vnode_esc = \\\n            new_exec_vnode.replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                \"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\"+\", r\"\\+\")\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '3gb',\n                                 'Resource_List.ncpus': 4,\n                                 'Resource_List.select': newsel,\n                                 'Resource_List.place':\n                                 self.job1_extra_res_place,\n                                 'Resource_List.nodect': 2,\n                                 'schedselect': newsel,\n                                 'exec_host': new_exec_host,\n                                 'exec_vnode': new_exec_vnode}, id=jid)\n\n        # Though the job is listed with ncpus=4 taking away released vnode\n        # <n5> (1 cpu), <n6> (1 cpu), <n7> (2 cpus),\n        # only <n7> got released.  
<n5> and <n6> are part of a super\n        # chunk that wasn't fully released.\n\n        # Check account update ('u') record\n        self.match_accounting_log('u', jid, exec_host_esc,\n                                  exec_vnode_esc, \"6gb\", 8, 3,\n                                  self.job1_extra_res_place,\n                                  sel_esc)\n\n        # Check to make sure 'c' (next) record got generated\n        self.match_accounting_log('c', jid, new_exec_host_esc,\n                                  new_exec_vnode_esc, \"3145728kb\",\n                                  4, 2, self.job1_extra_res_place, newsel_esc)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        # <n5> still job-busy\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        # <n6> still job-busy\n        self.match_vnode_status([self.n3, self.n6], 'job-busy', jobs_assn1, 1,\n                                '0kb')\n\n        # <n7> is now free\n        self.match_vnode_status([self.n0, self.n7, self.n8, self.n9, self.n10],\n                                'free')\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 6,\n                                    'resources_assigned.mem': '4194304kb'})\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 6,\n                                   'resources_assigned.mem': '4194304kb'},\n                           id=\"workq\")\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, new_exec_host, newsel))\n\n        self.server.delete(jid)\n\n        # Check account phased end ('e') record\n        self.match_accounting_log('e', jid, new_exec_host_esc,\n                                  new_exec_vnode_esc,\n                                  \"3145728kb\", 4, 2,\n                                  self.job1_extra_res_place,\n                                  
newsel_esc)\n\n        # Check to make sure 'E' (end of job) record got generated\n        self.match_accounting_log('E', jid, exec_host_esc,\n                                  exec_vnode_esc, \"6gb\",\n                                  8, 3, self.job1_extra_res_place,\n                                  sel_esc)\n\n    def test_release_nodes6(self):\n        \"\"\"\n        Test:\n             Given: a job that has been submitted with a select spec\n             of 2 super-chunks of ncpus=3 and mem=2gb each,\n             and 1 chunk of ncpus=2 and mem=2gb, along with\n             place spec of \"scatter\", resulting in an\n\n             exec_vnode=\n                  (<n1>+<n2>+<n3>)+(<n4>+<n5>+<n6>)+(<n7>)\n\n             Executing pbs_release_nodes -j <job-id> <n4> <n5> <n6> <n7>\n             is equivalent to doing 'pbs_release_nodes -a'  which\n             will have the same result as test_release_nodes_all.\n             That is, all sister nodes assigned to the job are\n             released early from the job.\n        \"\"\"\n        jid = self.create_and_submit_job('job1_5')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n        
                        'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1_exec_host))\n\n        # Run pbs_release_nodes as regular user\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n4, self.n5,\n               self.n6, self.n7]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     runas=TEST_USER)\n        self.assertEqual(ret['rc'], 0)\n\n        # Verify mom_logs\n        self.momA.log_match(\n            \"Job;%s;%s.+cput=.+ mem=.+\" % (jid, self.hostB), n=10,\n            interval=2, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;%s.+cput=.+ mem=.+\" % (jid, self.hostC), n=10,\n            interval=2, regexp=True)\n\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            interval=2)\n\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            interval=2)\n\n        # Ensure the 'fib' process is gone when DELETE_JOB2 received on momB\n        self.server.pu.get_proc_info(\n            self.momB.hostname, \".*fib.*\", None, regexp=True)\n        self.assertEqual(len(self.server.pu.processes), 0)\n\n        # Ensure the 'fib' process is gone when DELETE_JOB2 received on momC\n        self.server.pu.get_proc_info(\n            self.momC.hostname, \".*fib.*\", None, regexp=True)\n        self.assertEqual(len(self.server.pu.processes), 0)\n\n        # Verify remaining job resources.\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '2gb',\n                                 'Resource_List.ncpus': 3,\n                
                 'Resource_List.select': self.job1_newsel,\n                                 'Resource_List.place': self.job1_place,\n                                 'Resource_List.nodect': 1,\n                                 'schedselect': self.job1_newsel,\n                                 'exec_host': self.job1_new_exec_host,\n                                 'exec_vnode': self.job1_new_exec_vnode},\n                           id=jid)\n\n        # Check various vnode status.\n        self.match_vnode_status([self.n1, self.n2],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3], 'job-busy', jobs_assn1, 1, '0kb')\n\n        # nodes <n4>, <n5>, <n6>, <n7> are all free now\n        self.match_vnode_status([self.n0, self.n4, self.n5, self.n6,\n                                 self.n7, self.n8, self.n9, self.n10], 'free')\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 3,\n                                    'resources_assigned.mem': '2097152kb'})\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 3,\n                                   'resources_assigned.mem': '2097152kb'},\n                           id=\"workq\")\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1_new_exec_host))\n\n        # Check account update ('u') record\n        self.match_accounting_log('u', jid, self.job1_exec_host_esc,\n                                  self.job1_exec_vnode_esc, \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  self.job1_sel_esc)\n\n        # Check to make sure 'c' (next) record got generated\n        self.match_accounting_log('c', jid, self.job1_new_exec_host,\n                                  self.job1_new_exec_vnode_esc, \"2097152kb\",\n                                  3, 1, self.job1_place, self.job1_newsel)\n\n        # For job to end to get the end records in the 
accounting_logs\n        self.server.delete(jid)\n\n        # Check account phased end job ('e') record\n        self.match_accounting_log('e', jid, self.job1_new_exec_host,\n                                  self.job1_new_exec_vnode_esc, \"2097152kb\", 3,\n                                  1, self.job1_place, self.job1_newsel)\n\n        # Check account end of job ('E') record\n        self.match_accounting_log('E', jid, self.job1_exec_host_esc,\n                                  self.job1_exec_vnode_esc, \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  self.job1_sel_esc)\n\n    def test_release_nodes6_extra(self):\n        \"\"\"\n        Test:\n             Like test_release_nodes6 except instead of the super-chunk\n             and chunks getting only ncpus and mem values, additional\n             resources mpiprocs and ompthreads are also requested and\n             assigned:\n\n             For example:\n\n               qsub -l select=\"ncpus=3:mem=2gb:mpiprocs=3:ompthreads=2+\n                               ncpus=3:mem=2gb:mpiprocs=3:ompthreads=3+\n                               ncpus=2:mem=2gb:mpiprocs=2:ompthreads=2\"\n\n             We want to make sure the ompthreads and mpiprocs values are\n             preserved in the new exec_vnode, and that in the $PBS_NODEFILE,\n             the host names are duplicated according to the  number of\n             mpiprocs. 
For example, if <n1> is assigned to first\n             chunk, with mpiprocs=3, <n1> will appear 3 times in\n             $PBS_NODEFILE.\n        \"\"\"\n        jid = self.create_and_submit_job('job1_extra_res')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select':\n                                 self.job1_extra_res_select,\n                                 'Resource_List.place':\n                                 self.job1_extra_res_place,\n                                 'schedselect':\n                                 self.job1_extra_res_schedselect,\n                                 'exec_host': self.job1_extra_res_exec_host,\n                                 'exec_vnode': self.job1_extra_res_exec_vnode},\n                           id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid,\n                                              self.job1_extra_res_exec_host,\n                                              self.job1_extra_res_schedselect))\n\n        # Run pbs_release_nodes as regular user\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n4, self.n5,\n               self.n6, self.n7]\n     
   ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     runas=TEST_USER)\n        self.assertEqual(ret['rc'], 0)\n\n        # Verify mom_logs\n        self.momA.log_match(\n            \"Job;%s;%s.+cput=.+ mem=.+\" % (jid, self.hostB), n=10,\n            interval=2, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;%s.+cput=.+ mem=.+\" % (jid, self.hostC), n=10,\n            interval=2, regexp=True)\n\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            interval=2)\n\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            interval=2)\n\n        # Ensure the 'fib' process is gone when DELETE_JOB2 received on momB\n        self.server.pu.get_proc_info(\n            self.momB.hostname, \".*fib.*\", None)\n        self.assertEqual(len(self.server.pu.processes), 0)\n\n        # Ensure the 'fib' process is gone when DELETE_JOB2 received on momC\n        self.server.pu.get_proc_info(\n            self.momC.hostname, \".*fib.*\", None, regexp=True)\n        self.assertEqual(len(self.server.pu.processes), 0)\n\n        # Verify remaining job resources.\n        sel_esc = self.job1_extra_res_select.replace(\"+\", r\"\\+\")\n        exec_host_esc = self.job1_extra_res_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        exec_vnode_esc = \\\n            self.job1_extra_res_exec_vnode.replace(\"[\", r\"\\[\").replace(\n                \"]\", r\"\\]\").replace(\n                \"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\"+\", r\"\\+\")\n        newsel = \"1:mem=2097152kb:ncpus=3:mpiprocs=3:ompthreads=2\"\n        newsel_esc = newsel.replace(\"+\", r\"\\+\")\n        new_exec_host = self.job1_extra_res_exec_host.replace(\n            \"+%s/0*2\" % (self.n7,), \"\")\n        new_exec_host = 
new_exec_host.replace(\"+%s/0*0\" % (self.n4,), \"\")\n        new_exec_host_esc = new_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        new_exec_vnode = self.job1_extra_res_exec_vnode.replace(\n            \"+%s:mem=1048576kb:ncpus=1\" % (self.n5,), \"\")\n        new_exec_vnode = new_exec_vnode.replace(\n            \"+%s:ncpus=1\" % (self.n6,), \"\")\n        new_exec_vnode = new_exec_vnode.replace(\n            \"+(%s:mem=1048576kb:ncpus=1)\" % (self.n4,), \"\")\n        new_exec_vnode = new_exec_vnode.replace(\n            \"+(%s:ncpus=2:mem=2097152kb)\" % (self.n7,), \"\")\n        new_exec_vnode_esc = \\\n            new_exec_vnode.replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                \"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\"+\", r\"\\+\")\n        self.server.expect(JOB,\n                           {'job_state': 'R',\n                            'Resource_List.mem': '2gb',\n                            'Resource_List.ncpus': 3,\n                            'Resource_List.select': newsel,\n                            'Resource_List.place':\n                            self.job1_extra_res_place,\n                            'Resource_List.nodect': 1,\n                            'schedselect': newsel,\n                            'exec_host': new_exec_host,\n                            'exec_vnode': new_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        self.match_vnode_status([self.n1, self.n2],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3], 'job-busy', jobs_assn1, 1, '0kb')\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 3,\n                                    'resources_assigned.mem': '2097152kb'})\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 3,\n                                   
'resources_assigned.mem': '2097152kb'},\n                           id=\"workq\")\n\n        # nodes <n4>, <n5>, <n6>, <n7> are all free now\n        self.match_vnode_status([self.n0, self.n4, self.n5, self.n6,\n                                 self.n7, self.n8, self.n9, self.n10], 'free')\n\n        # Ensure the $PBS_NODEFILE contents account for the mpiprocs value;\n        # that is, each node hostname is listed 'mpiprocs' number of times in\n        # the file.\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(\n                jid, self.job1_new_exec_host, newsel))\n\n        # Check account update ('u') record\n        self.match_accounting_log('u', jid, exec_host_esc,\n                                  exec_vnode_esc,\n                                  \"6gb\", 8, 3,\n                                  self.job1_extra_res_place,\n                                  sel_esc)\n\n        # Check to make sure 'c' (next) record got generated\n        self.match_accounting_log('c', jid, new_exec_host_esc,\n                                  self.job1_new_exec_vnode_esc, \"2097152kb\",\n                                  3, 1, self.job1_place, newsel_esc)\n\n        # For job to end to get the end records in the accounting_logs\n        self.server.delete(jid)\n\n        # Check account phased end job ('e') record\n        self.match_accounting_log('e', jid, new_exec_host_esc,\n                                  new_exec_vnode_esc, \"2097152kb\", 3,\n                                  1, self.job1_place, newsel_esc)\n\n        # Check account end of job ('E') record\n        self.match_accounting_log('E', jid, exec_host_esc,\n                                  exec_vnode_esc, \"6gb\", 8, 3,\n                                  self.job1_place, sel_esc)\n\n    # longer timeout needed as the following test takes a bit\n    # longer waiting for job to finish due to stage out\n    @timeout(400)\n    def test_release_nodes_cmd_plus_stageout(self):\n        
\"\"\"\n        Test:\n            This tests calling the pbs_release_nodes command on a job\n            submitted with the release_nodes_on_stageout option.\n\n            Given a job submitted as:\n               qsub -W release_nodes_on_stageout=true job.script\n            where job.script specifies a select spec of\n            2 super-chunks of ncpus=3 and mem=2gb each,\n            and 1 chunk of ncpus=2 and mem=2gb, along with\n            place spec of \"scatter\", resulting in an:\n\n            exec_vnode=(<n1>+<n2>+<n3>)+(<n4>+<n5>+<n6>)+(<n7>)\n\n            Then issue:\n                  pbs_release_nodes -j <job-id> <n7>\n\n            This would generate a 'u' and 'c' accounting record,\n            while the <n7> vnode gets deallocated given that it's\n            the only vnode assigned to the host mapped to the third chunk.\n\n            Now call:\n                  qdel <job-id>\n\n            This would cause the remaining vnodes <n4>, <n5>, <n6>\n            to be deallocated due to the job having the\n            -W release_nodes_on_stageout=true setting.\n            The result is reflected in the 'u', 'c', and 'e'\n            accounting logs. 
The 'E' accounting record summarizes\n            everything.\n        \"\"\"\n        jid = self.create_and_submit_job('job1')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'release_nodes_on_stageout': 'True',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        # Run pbs_release_nodes\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n7]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     sudo=True)\n        self.assertEqual(ret['rc'], 0)\n\n        # Only mom hostC will get the job summary since it was released\n        # early, courtesy of sole vnode <n7>.\n        self.momA.log_match(\n            \"Job;%s;%s.+cput=.+ mem=.+\" % (jid, self.hostB), n=10,\n            regexp=True, existence=False, max_attempts=5, interval=1)\n\n        self.momA.log_match(\n            
\"Job;%s;%s.+cput=.+ mem=.+\" % (jid, self.hostC), n=10,\n            regexp=True)\n\n        # Only mom hostC will get the IM_DELETE_JOB2 request\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            existence=False, max_attempts=5, interval=1)\n\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20)\n\n        # Ensure the 'fib' process is gone from hostC when the DELETE_JOB2\n        # request is received\n        self.server.pu.get_proc_info(\n            self.momC.hostname, \".*fib.*\", None, regexp=True)\n        self.assertEqual(len(self.server.pu.processes), 0)\n\n        # Verify remaining job resources.\n\n        sel_esc = self.job1_select.replace(\"+\", r\"\\+\")\n        exec_host_esc = self.job1_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        exec_vnode_esc = self.job1_exec_vnode.replace(\"[\", r\"\\[\").replace(\n            \"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\n                    \"+\", r\"\\+\")\n\n        newsel = \"1:mem=2097152kb:ncpus=3+1:mem=2097152kb:ncpus=3\"\n        newsel_esc = newsel.replace(\"+\", r\"\\+\")\n        new_exec_host = \"%s/0*0+%s/0*0\" % (self.n0, self.hostB)\n        new_exec_host_esc = new_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        new_exec_vnode = self.job1_exec_vnode.replace(\n            \"+(%s:ncpus=2:mem=2097152kb)\" % (self.n7,), \"\")\n        new_exec_vnode_esc = new_exec_vnode.replace(\n            \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n            \"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\"+\", r\"\\+\")\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '4194304kb',\n                                 
'Resource_List.ncpus': 6,\n                                 'Resource_List.select': newsel,\n                                 'Resource_List.place': self.job1_place,\n                                 'Resource_List.nodect': 2,\n                                 'schedselect': newsel,\n                                 'exec_host': new_exec_host,\n                                 'exec_vnode': new_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6], 'job-busy', jobs_assn1, 1,\n                                '0kb')\n\n        # <n7> now free\n        self.match_vnode_status([self.n0, self.n7, self.n8, self.n9, self.n10],\n                                'free')\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 6,\n                                    'resources_assigned.mem': '4194304kb'})\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 6,\n                                   'resources_assigned.mem': '4194304kb'},\n                           id=\"workq\")\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, new_exec_host))\n\n        # Check account update ('u') record\n        self.match_accounting_log('u', jid, exec_host_esc,\n                                  exec_vnode_esc, \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  sel_esc)\n\n        # Check to make sure 'c' (next) record got generated\n        self.match_accounting_log('c', jid, new_exec_host_esc,\n                                  new_exec_vnode_esc, \"4194304kb\",\n                                  6, 2, self.job1_place, newsel_esc)\n\n        # Terminate the job\n        self.check_stageout_file_size()\n        self.server.delete(jid)\n\n        # 
Verify remaining job resources.\n\n        sel_esc = self.job1_select.replace(\"+\", r\"\\+\")\n        exec_host_esc = self.job1_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        exec_vnode_esc = self.job1_exec_vnode.replace(\"[\", r\"\\[\").replace(\n            \"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\n                    \"+\", r\"\\+\")\n        newsel = self.transform_select(self.job1_select.split('+')[0])\n        newsel_esc = newsel.replace(\"+\", r\"\\+\")\n\n        new_exec_host = self.job1_exec_host.split('+')[0]\n        new_exec_host_esc = new_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        new_exec_vnode = self.job1_exec_vnode.split(')')[0] + ')'\n        new_exec_vnode_esc = new_exec_vnode.replace(\"[\", r\"\\[\").replace(\n            \"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\n            \"+\", r\"\\+\")\n        self.server.expect(JOB, {'job_state': 'E',\n                                 'Resource_List.mem': '2gb',\n                                 'Resource_List.ncpus': 3,\n                                 'Resource_List.select': newsel,\n                                 'Resource_List.place': self.job1_place,\n                                 'Resource_List.nodect': 1,\n                                 'schedselect': newsel,\n                                 'exec_host': new_exec_host,\n                                 'exec_vnode': new_exec_vnode}, id=jid)\n\n        # Check 'u' accounting record from release_nodes_on_stageout=true\n        self.match_accounting_log('u', jid, new_exec_host_esc,\n                                  new_exec_vnode_esc, \"4194304kb\", 6, 2,\n                                  self.job1_place,\n                                  
newsel_esc)\n\n        # Verify mom_logs\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostB), n=10,\n            interval=2, regexp=True)\n\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            interval=2)\n\n        # Check various vnode status.\n\n        # only vnodes from mother superior (self.hostA) are job-busy\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3], 'job-busy', jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.n0, self.n4, self.n5,\n                                 self.n6, self.n7, self.n8, self.n9,\n                                 self.n10], 'free')\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, new_exec_host))\n\n        # Check 'c' accounting record from release_nodes_on_stageout=true\n        self.match_accounting_log('c', jid, new_exec_host_esc,\n                                  new_exec_vnode_esc, \"2097152kb\",\n                                  3, 1, self.job1_place, newsel_esc)\n\n        # wait for job to finish\n        self.server.expect(JOB, 'queue', id=jid, op=UNSET,\n                           interval=4, offset=15)\n\n        # Check 'e' record from release_nodes_on_stageout=true\n        self.match_accounting_log('e', jid, new_exec_host,\n                                  new_exec_vnode_esc, \"2097152kb\",\n                                  3, 1, self.job1_place, newsel_esc)\n\n        # Check 'E' (end of job) record from release_nodes_on_stageout=true\n        self.match_accounting_log('E', jid, exec_host_esc,\n                                  exec_vnode_esc, \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  self.job1_sel_esc)\n\n    def test_multi_release_nodes(self):\n        
\"\"\"\n        Test:\n             This tests several calls to the pbs_release_nodes command\n             for the same job.\n\n             Given a job submitted with a select spec of\n             2 super-chunks of ncpus=3 and mem=2gb each,\n             and 1 chunk of ncpus=2 and mem=2gb, along with\n             place spec of \"scatter\", resulting in an\n\n             exec_vnode=\n                  (<n1>+<n2>+<n3>)+(<n4>+<n5>+<n6>)+(<n7>)\n\n             First call:\n\n                  pbs_release_nodes -j <job-id> <n4>\n\n             The <n4> node no longer shows in the job's exec_vnode,\n             but it will still show as job-busy\n             (not accepting jobs) since the other 2 vnodes,\n             <n5> and <n6>, from the host mapped to the second\n             chunk are still assigned. The 'u' and 'c'\n             accounting records will reflect this.\n\n             Second call:\n\n                  pbs_release_nodes -j <job-id> <n5> <n6> <n7>\n\n             Now all vnodes assigned to the job from the\n             host mapped to the second chunk will show as free.\n             Again, the accounting 'u' and 'c' records would\n             reflect this fact.\n        \"\"\"\n        jid = self.create_and_submit_job('job1')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, 
self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        # Run pbs_release_nodes\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n4]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     sudo=True)\n        self.assertEqual(ret['rc'], 0)\n\n        # Verify mom_logs\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostB), n=10,\n            regexp=True,\n            existence=False, max_attempts=5, interval=1)\n\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostC), n=10,\n            regexp=True,\n            existence=False, max_attempts=5, interval=1)\n\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            existence=False, max_attempts=5, interval=1)\n\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            existence=False, max_attempts=5, interval=1)\n\n        # Verify remaining job resources.\n\n        sel_esc = self.job1_select.replace(\"+\", r\"\\+\")\n        exec_host_esc = self.job1_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        exec_vnode_esc = self.job1_exec_vnode.replace(\"[\", r\"\\[\").replace(\n            \"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\n                    \"+\", r\"\\+\")\n\n        newsel = \"1:mem=2097152kb:ncpus=3+1:mem=1048576kb:ncpus=2+\" + \\\n          
       \"1:ncpus=2:mem=2097152kb\"\n        newsel_esc = newsel.replace(\"+\", r\"\\+\")\n        new_exec_host = self.job1_exec_host\n        new_exec_host_esc = self.job1_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        new_exec_vnode = self.job1_exec_vnode.replace(\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n4,), \"\")\n        new_exec_vnode_esc = \\\n            new_exec_vnode.replace(\"[\", r\"\\[\").replace(\n                \"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\n                \")\", r\"\\)\").replace(\"+\", r\"\\+\")\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '5gb',\n                                 'Resource_List.ncpus': 7,\n                                 'Resource_List.select': newsel,\n                                 'Resource_List.place': self.job1_place,\n                                 'Resource_List.nodect': 3,\n                                 'schedselect': newsel,\n                                 'exec_host': new_exec_host,\n                                 'exec_vnode': new_exec_vnode}, id=jid)\n\n        # Though the job is listed with ncpus=7 taking away released vnode\n        # <n4> (1 cpu), its license is not taken away as <n4> is assigned\n        # to a super chunk, and the parent mom still has not released the\n        # job as vnodes <n5> and <n6> are still allocated to the job.\n\n        # Check various vnode status.\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6], 'job-busy', jobs_assn1, 1,\n                                '0kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10],\n                                'free')\n\n        self.match_vnode_status([self.n7], 
'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 8,\n                                    'resources_assigned.mem': '6291456kb'})\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 8,\n                                   'resources_assigned.mem': '6291456kb'},\n                           id=\"workq\")\n\n        # Check account update ('u') record\n        self.match_accounting_log('u', jid, exec_host_esc,\n                                  exec_vnode_esc, \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  sel_esc)\n\n        # Check to make sure 'c' (next) record got generated\n        self.match_accounting_log('c', jid, new_exec_host_esc,\n                                  new_exec_vnode_esc, \"5242880kb\",\n                                  7, 3, self.job1_place, newsel_esc)\n\n        # Run pbs_release_nodes again\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n5,\n               self.n6, self.n7]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     sudo=True)\n        self.assertEqual(ret['rc'], 0)\n\n        # Now mom hostB and hostC can fully release the job,\n        # resulting in job summary information being reported\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostB), n=10,\n            interval=2, regexp=True)\n\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostC), n=10,\n            interval=2, regexp=True)\n\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            interval=2)\n\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            interval=2)\n\n        # Check account update ('u') record got generated for the\n        # second pbs_release_nodes call\n        
self.match_accounting_log('u', jid, new_exec_host_esc,\n                                  new_exec_vnode_esc, \"5242880kb\", 7, 3,\n                                  self.job1_place,\n                                  newsel_esc)\n\n        # Verify remaining job resources.\n        newsel = \"1:mem=2097152kb:ncpus=3\"\n        newsel_esc = newsel.replace(\"+\", r\"\\+\")\n        new_exec_host = new_exec_host.replace(\"+%s/0*2\" % (self.n7,), \"\")\n        new_exec_host = new_exec_host.replace(\"+%s/0*0\" % (self.n4,), \"\")\n        new_exec_host_esc = new_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        new_exec_vnode = new_exec_vnode.replace(\n            \"+(%s:mem=1048576kb:ncpus=1\" % (self.n5,), \"\")\n        new_exec_vnode = new_exec_vnode.replace(\n            \"+%s:ncpus=1)\" % (self.n6,), \"\")\n        new_exec_vnode = new_exec_vnode.replace(\n            \"+(%s:ncpus=2:mem=2097152kb)\" % (self.n7,), \"\")\n        new_exec_vnode_esc = new_exec_vnode.replace(\n            \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n            \"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\"+\", r\"\\+\")\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '2gb',\n                                 'Resource_List.ncpus': 3,\n                                 'Resource_List.select': newsel,\n                                 'Resource_List.place': self.job1_place,\n                                 'Resource_List.nodect': 1,\n                                 'schedselect': newsel,\n                                 'exec_host': new_exec_host,\n                                 'exec_vnode': new_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2],\n                                'job-busy', jobs_assn1, 1, 
'1048576kb')\n\n        self.match_vnode_status([self.n3], 'job-busy', jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.n0, self.n4, self.n5, self.n6,\n                                 self.n7, self.n8, self.n9, self.n10],\n                                'free')\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 3,\n                                    'resources_assigned.mem': '2097152kb'})\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 3,\n                                   'resources_assigned.mem': '2097152kb'},\n                           id=\"workq\")\n\n        # Check to make sure 'c' (next) record got generated for\n        # second pbs_release_nodes call\n        self.match_accounting_log('c', jid, new_exec_host_esc,\n                                  new_exec_vnode_esc, \"2097152kb\",\n                                  3, 1, self.job1_place, newsel_esc)\n\n    def test_release_nodes_run_next_job(self):\n        \"\"\"\n        Test:\n             Test releasing nodes of one job to allow another\n             job to use resources from the released nodes.\n\n             Given a job submitted with a select spec of\n             2 super-chunks of ncpus=3 and mem=2gb each,\n             and 1 chunk of ncpus=2 and mem=2gb, along with\n             place spec of \"scatter\", resulting in an:\n\n             exec_vnode=\n                  (<n1>+<n2>+<n3>)+(<n4>+<n5>+<n6>)+(<n7>)\n\n             First call:\n\n                pbs_release_nodes -j <job-id> <n4> <n5> <n7>\n\n            Submit another job, j2, which also needs the\n            unreleased vnode <n6>, so the job stays queued.\n\n            Now execute:\n                   pbs_release_nodes -j <job-id> <n6>\n\n            And job j2 starts executing using node <n6>\n        \"\"\"\n        jid = self.create_and_submit_job('job1_5')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n           
                      'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        # Run pbs_release_nodes\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid,\n               self.n4, self.n5, self.n7]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     sudo=True)\n        self.assertEqual(ret['rc'], 0)\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        # this is a 7-cpu job that needs <n6> which has not been freed\n        jid2 = self.create_and_submit_job('job2')\n\n        # we expect job_state to be Queued\n        self.server.expect(JOB, 'comment', op=SET, id=jid2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n\n        # Let's release the remaining <node6> vnode from hostB\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n6]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                         
            sudo=True)\n        self.assertEqual(ret['rc'], 0)\n\n        # now job 2 should start running\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '7gb',\n                                 'Resource_List.ncpus': 7,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job2_select,\n                                 'Resource_List.place': self.job2_place,\n                                 'schedselect': self.job2_schedselect,\n                                 'exec_host': self.job2_exec_host,\n                                 'exec_vnode': self.job2_exec_vnode_var1},\n                           id=jid2)\n\n        jobs_assn2 = \"%s/0\" % (jid2,)\n        self.match_vnode_status([self.n4, self.n5, self.n6, self.n8, self.n9],\n                                'job-busy', jobs_assn2, 1, '1048576kb')\n\n        jobs_assn3 = \"%s/0, %s/1\" % (jid2, jid2)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn3,\n                                2, '2097152kb')\n        self.match_vnode_status([self.n0, self.n10], 'free')\n\n    def test_release_nodes_rerun(self):\n        \"\"\"\n        Test:\n            Test the behavior of a job with released nodes when it\n            gets rerun. 
The job is killed, requeued, and assigned\n            the original set of resources before pbs_release_nodes\n            was called.\n        \"\"\"\n        self.release_nodes_rerun()\n\n    def test_release_nodes_rerun_downed_mom(self):\n        \"\"\"\n        Test:\n            Test the behavior of a job with released nodes when it\n            gets rerun, due to primary mom getting killed and restarted.\n            The job is killed, requeued, and assigned\n            the original set of resources before pbs_release_nodes\n            was called.\n        \"\"\"\n        self.release_nodes_rerun(\"kill_mom_and_restart\")\n\n    def test_release_nodes_epilogue(self):\n        \"\"\"\n        Test:\n             Test to make sure a job is removed from\n             a mom host once all vnodes from that host have\n             been released for the job, and that the\n             epilogue hook runs.\n        \"\"\"\n\n        # First, create an epilogue hook:\n\n        hook_body = \"\"\"\nimport pbs\npbs.logjobmsg(pbs.event().job.id, \"epilogue hook executed\")\n\"\"\"\n\n        a = {'event': 'execjob_epilogue', 'enabled': 'true'}\n        self.server.create_import_hook(\"epi\", a, hook_body)\n\n        jid = self.create_and_submit_job('job1_5')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, 
self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        # Run pbs_release_nodes\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n4, self.n5,\n               self.n6, self.n7]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     sudo=True)\n\n        self.assertEqual(ret['rc'], 0)\n\n        # Verify mom_logs\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20)\n\n        self.momB.log_match(\"Job;%s;epilogue hook executed\" % (jid,), n=20)\n\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostB), n=10,\n            interval=5, regexp=True)\n\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostC), n=10,\n            interval=5, regexp=True)\n\n        # Ensure the 'fib' process is gone once DELETE_JOB2 is received\n        self.server.pu.get_proc_info(\n            self.momB.hostname, \".*fib.*\", None, regexp=True)\n        self.assertEqual(len(self.server.pu.processes), 0)\n\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            interval=5)\n\n        self.momC.log_match(\"Job;%s;epilogue hook executed\" % (jid,), n=20,\n                            interval=5)\n\n        # Ensure the 'fib' process is gone once DELETE_JOB2 is received\n        self.server.pu.get_proc_info(\n            self.momC.hostname, \".*fib.*\", None, regexp=True)\n        self.assertEqual(len(self.server.pu.processes), 0)\n\n    def test_release_nodes_complex(self):\n        \"\"\"\n        Test:\n 
            Test a complicated scenario involving\n             releasing nodes from a job that has been\n             submitted with exclusive placement\n             (-l place=scatter:excl), having one of the\n             parent moms of released vnodes being\n             stopped and continued, suspending and resuming\n             of jobs, and finally submitting a new job\n             requiring non-exclusive access to a vnode.\n\n             Given a job submitted with a select spec of\n             2 super-chunks of ncpus=3 and mem=2gb each,\n             and 1 chunk of ncpus=1 and mem=1gb, along with\n             place spec of \"scatter:excl\", resulting in an:\n\n             exec_vnode=\n                  (<n1>+<n2>+<n3>)+(<n4>+<n5>+<n6>)+(<n7>)\n\n             Then stop parent mom host of <n7> (kill -STOP), now issue:\n                pbs_release_nodes -j <job-id> <n4> <n5> <n7>\n\n             causing <n4>, <n5>, and <n7> to still be tied to the job\n             as there's still node <n6> tied to the job as part of mom\n             hostB, which satisfies the second super-chunk.\n             Node <n7> is still assigned to the job as parent\n             mom hostC has been stopped.\n\n             Submit another job (job2), needing the node <n7> and\n             1 cpu, but the job ends up queued since the first job is\n             still using <n7>.\n             Now delete job2.\n\n             Now suspend the first job, and all resources_assigned to\n             the job's nodes are cleared.\n\n             Now resume the mom of <n7> (kill -CONT). 
This mom would\n             tell the server to free up node <n7> as the first job has\n             been completely removed from the node.\n\n             Now resume job1, and all resources_assigned\n             of the job's nodes including <n4>, <n5> are shown\n             allocated, with resources in node <n7> freed.\n\n             Then submit a new 1-cpu job that specifically asks for\n             vnode <n7>, and the job should run,\n             taking vnode <n7>, but on the pbsnodes listing,\n             notice that the vnode's state is still \"free\"\n             and using 1 cpu and 1gb of memory. It's because\n             there's still 1 cpu and 1 gb of memory left to\n             use in vnode <n7>.\n        \"\"\"\n        jid = self.create_and_submit_job('job11x')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '5gb',\n                                 'Resource_List.ncpus': 7,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job11x_select,\n                                 'Resource_List.place': self.job11x_place,\n                                 'schedselect': self.job11x_schedselect,\n                                 'exec_host': self.job11x_exec_host,\n                                 'exec_vnode': self.job11x_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5, self.n7],\n                                'job-exclusive', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-exclusive', jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        # temporarily suspend momC, preventing it from operating on released\n        # nodes\n        self.momC.signal(\"-STOP\")\n\n        # Run pbs_release_nodes on 
nodes belonging to momB and momC\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid,\n               self.n4, self.n5, self.n7]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     sudo=True)\n        self.assertEqual(ret['rc'], 0)\n\n        # mom hostB and mom hostC continue to hold on to the job\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostB), n=10,\n            regexp=True,\n            existence=False, max_attempts=5, interval=1)\n\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostC), n=10,\n            regexp=True,\n            existence=False, max_attempts=5, interval=1)\n\n        # since not all vnodes from momB have been freed from the job,\n        # DELETE_JOB2 request from MS is not sent\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            existence=False, max_attempts=5, interval=1)\n\n        # since node <n7> from mom hostC has not been freed from the job\n        # since mom is currently stopped, the DELETE_JOB2 request from\n        # MS is not sent\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            existence=False, max_attempts=5, interval=1)\n\n        # Verify remaining job resources.\n\n        sel_esc = self.job11x_select.replace(\"+\", r\"\\+\")\n        exec_host_esc = self.job11x_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        exec_vnode_esc = self.job11x_exec_vnode.replace(\"[\", r\"\\[\").replace(\n            \"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\n                    \"+\", r\"\\+\")\n        newsel = \"1:mem=2097152kb:ncpus=3+1:ncpus=1\"\n        newsel_esc = newsel.replace(\"+\", r\"\\+\")\n        new_exec_host = self.job11x_exec_host.replace(\n      
      \"+%s/0\" % (self.n7,), \"\")\n        new_exec_host_esc = new_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        new_exec_vnode = self.job11x_exec_vnode.replace(\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n4,), \"\")\n        new_exec_vnode = new_exec_vnode.replace(\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n5), \"\")\n        new_exec_vnode = new_exec_vnode.replace(\n            \"+(%s:ncpus=1:mem=1048576kb)\" % (self.n7,), \"\")\n        new_exec_vnode_esc = new_exec_vnode.replace(\"[\", r\"\\[\").replace(\n            \"]\", r\"\\]\").replace(\n            \"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\"+\", r\"\\+\")\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '2gb',\n                                 'Resource_List.ncpus': 4,\n                                 'Resource_List.select': newsel,\n                                 'Resource_List.place': self.job11x_place,\n                                 'Resource_List.nodect': 2,\n                                 'schedselect': newsel,\n                                 'exec_host': new_exec_host,\n                                 'exec_vnode': new_exec_vnode},\n                           id=jid)\n\n        # Though the job is listed with ncpus=4 taking away released vnode\n        # <n4> (1 cpu), <n5> (1 cpu), <n7> (1 cpu),\n        # hostB hasn't released job because <n6> is still part of the job and\n        # <n7> hasn't been released because the mom is stopped.\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5,\n                                 self.n7], 'job-exclusive', jobs_assn1, 1,\n                                '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                
'job-exclusive', jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 7,\n                                    'resources_assigned.mem': '5242880kb'})\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 7,\n                                   'resources_assigned.mem': '5242880kb'},\n                           id=\"workq\")\n\n        # submit a new job needing node <n7>, which is still currently tied\n        # to the previous job.\n        jid2 = self.create_and_submit_job('job12')\n\n        # we expect job_state to be Queued as the previous job still has\n        # vnode managed by hostC assigned exclusively.\n        self.server.expect(JOB, 'comment', op=SET, id=jid2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n        self.server.delete(jid2)\n\n        # now suspend previous job\n        self.server.sigjob(jid, 'suspend')\n\n        a = {'job_state': 'S'}\n        self.server.expect(JOB, a, id=jid)\n\n        self.match_vnode_status([self.n0, self.n1, self.n2, self.n3,\n                                 self.n4, self.n5, self.n6, self.n7,\n                                 self.n8, self.n9, self.n10], 'free')\n\n        # check server's resources_assigned values\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 0,\n                                    'resources_assigned.mem': '0kb'})\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 0,\n                                   'resources_assigned.mem': '0kb'},\n                           id=\"workq\")\n\n        # now resume previous job\n        self.server.sigjob(jid, 'resume')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '2gb',\n                                 'Resource_List.ncpus': 4,\n                                 'Resource_List.select': newsel,\n                           
      'Resource_List.place': self.job11x_place,\n                                 'Resource_List.nodect': 2,\n                                 'schedselect': newsel,\n                                 'exec_host': new_exec_host,\n                                 'exec_vnode': new_exec_vnode},\n                           id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5,\n                                 self.n7], 'job-exclusive', jobs_assn1, 1,\n                                '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-exclusive', jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 7,\n                                    'resources_assigned.mem': '5242880kb'})\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 7,\n                                   'resources_assigned.mem': '5242880kb'},\n                           id=\"workq\")\n\n        # resume momC\n        self.momC.signal(\"-CONT\")\n\n        # With momC resumed, it now receives DELETE_JOB2 request from\n        # MS\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20)\n\n        # submit this 1 cpu job that requests specifically vnode <n7>\n        jid3 = self.create_and_submit_job('job12')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '1gb',\n                                 'Resource_List.ncpus': 1,\n                                 'Resource_List.nodect': 1,\n                                 'Resource_List.select': self.job12_select,\n                                 'Resource_List.place': self.job12_place,\n                                 'schedselect': self.job12_schedselect,\n                                 
'exec_host': self.job12_exec_host,\n                                 'exec_vnode': self.job12_exec_vnode},\n                           id=jid3)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-exclusive', jobs_assn1, 1,\n                                '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-exclusive', jobs_assn1, 1, '0kb')\n\n        # Node <n7> shows as free since job 'jid3' did not request\n        # exclusive access.\n        jobs_assn2 = \"%s/0\" % (jid3,)\n        self.match_vnode_status([self.n7], 'free', jobs_assn2,\n                                1, '1048576kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 7,\n                                    'resources_assigned.mem': '5242880kb'})\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 7,\n                                   'resources_assigned.mem': '5242880kb'},\n                           id=\"workq\")\n\n    def test_release_nodes_excl_server_restart_quick(self):\n        \"\"\"\n        Test:\n             Test having a job submitted with exclusive\n             placement (-l place=scatter:excl),\n             then release a node from it where parent\n             mom is stopped, before stopping the\n             server with qterm -t quick which\n             will leave the job running, and when\n             server is started in warm mode where\n             also previous job retains its state,\n             job continues to have previous node\n             assignment including the pending\n             released node.\n\n             Given a job submitted with a select spec of\n             2 super-chunks of ncpus=3 and mem=2gb each,\n             and 1 chunk of ncpus=2 and mem=2gb, 
along with\n             place spec of \"scatter:excl\", resulting in an:\n\n             exec_vnode=\n                  (<n1>+<n2>+<n3>)+(<n4>+<n5>+<n6>)+(<n7>)\n\n             Then stop parent mom host of <n7> (kill -STOP), now issue:\n                pbs_release_nodes -j <job-id> <n4> <n5> <n7>\n\n             causing <n4>, <n5>, and <n7> to still be tied to the job\n             as there's still node <n6> tied to the job as part of mom\n             hostB, which satisfies the second super-chunk.\n             Node <n7> is still assigned to the job as parent\n             mom hostC has been stopped.\n\n             Do a qterm -t quick, which will leave the job\n             running.\n\n             Now start pbs_server in default warm mode where all\n             running jobs are retained in that state, including\n             their node assignments.\n\n             The job is restored to the same node assignment\n             as before, taking into account the released nodes.\n\n             Now resume the mom of <n7> (kill -CONT). 
This mom would\n             tell the server to free up node <n7> as the first job has\n             been completely removed from the node.\n        \"\"\"\n        jid = self.create_and_submit_job('job11x')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '5gb',\n                                 'Resource_List.ncpus': 7,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job11x_select,\n                                 'Resource_List.place': self.job11x_place,\n                                 'schedselect': self.job11x_schedselect,\n                                 'exec_host': self.job11x_exec_host,\n                                 'exec_vnode': self.job11x_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5, self.n7],\n                                'job-exclusive', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-exclusive', jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        # temporarily suspend momC, preventing it from operating on released\n        # nodes\n        self.momC.signal(\"-STOP\")\n\n        # Run pbs_release_nodes on nodes belonging to momB and momC\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid,\n               self.n4, self.n5, self.n7]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     sudo=True)\n        self.assertEqual(ret['rc'], 0)\n\n        # mom hostB and mom hostC continue to hold on to the job\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostB), n=10,\n            regexp=True,\n            existence=False, max_attempts=5, interval=1)\n\n        
self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostC), n=10,\n            regexp=True,\n            existence=False, max_attempts=5, interval=1)\n\n        # since not all vnodes from momB have been freed from the job,\n        # DELETE_JOB2 request from MS is not sent\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            existence=False, max_attempts=5, interval=1)\n\n        # since node <n7> from mom hostC has not been freed from the job\n        # since mom is currently stopped, the DELETE_JOB2 request from\n        # MS is not sent\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            existence=False, max_attempts=5, interval=1)\n\n        # Verify remaining job resources.\n\n        sel_esc = self.job11x_select.replace(\"+\", r\"\\+\")\n        exec_host_esc = self.job11x_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        exec_vnode_esc = self.job11x_exec_vnode.replace(\"[\", r\"\\[\").replace(\n            \"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\n                    \"+\", r\"\\+\")\n        newsel = \"1:mem=2097152kb:ncpus=3+1:ncpus=1\"\n        newsel_esc = newsel.replace(\"+\", r\"\\+\")\n        new_exec_host = self.job11x_exec_host.replace(\n            \"+%s/0\" % (self.n7,), \"\")\n        new_exec_host_esc = new_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        new_exec_vnode = self.job11x_exec_vnode.replace(\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n4,), \"\")\n        new_exec_vnode = new_exec_vnode.replace(\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n5), \"\")\n        new_exec_vnode = new_exec_vnode.replace(\n            \"+(%s:ncpus=1:mem=1048576kb)\" % 
(self.n7,), \"\")\n        new_exec_vnode_esc = new_exec_vnode.replace(\"[\", r\"\\[\").replace(\n            \"]\", r\"\\]\").replace(\n            \"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\"+\", r\"\\+\")\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '2gb',\n                                 'Resource_List.ncpus': 4,\n                                 'Resource_List.select': newsel,\n                                 'Resource_List.place': self.job11x_place,\n                                 'Resource_List.nodect': 2,\n                                 'schedselect': newsel,\n                                 'exec_host': new_exec_host,\n                                 'exec_vnode': new_exec_vnode},\n                           id=jid)\n\n        # Though the job is listed with ncpus=4 taking away released vnode\n        # <n4> (1 cpu), <n5> (1 cpu), <n7> (1 cpu),\n        # hostB hasn't released job because <n6> is still part of the job and\n        # <n7> hasn't been released because the mom is stopped.\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5,\n                                 self.n7], 'job-exclusive', jobs_assn1, 1,\n                                '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-exclusive', jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 7,\n                                    'resources_assigned.mem': '5242880kb'})\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 7,\n                                   'resources_assigned.mem': '5242880kb'},\n                           id=\"workq\")\n\n        # Stop and Start the server\n        om = self.server.get_op_mode()\n        
self.server.set_op_mode(PTL_CLI)\n        self.server.qterm(manner=\"quick\")\n        self.server.set_op_mode(om)\n        self.assertFalse(self.server.isUp())\n        self.server.start()\n        self.assertTrue(self.server.isUp())\n\n        # Job should have the same state as before\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '2gb',\n                                 'Resource_List.ncpus': 4,\n                                 'Resource_List.select': newsel,\n                                 'Resource_List.place': self.job11x_place,\n                                 'Resource_List.nodect': 2,\n                                 'schedselect': newsel,\n                                 'exec_host': new_exec_host,\n                                 'exec_vnode': new_exec_vnode},\n                           id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-exclusive', jobs_assn1, 1,\n                                '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-exclusive', jobs_assn1, 1, '0kb')\n\n        # parent mom of <n7> is currently in stopped state\n        self.match_vnode_status([self.n7], 'state-unknown,down,job-exclusive',\n                                jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 7,\n                                    'resources_assigned.mem': '5242880kb'})\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 7,\n                                   'resources_assigned.mem': '5242880kb'},\n                           id=\"workq\")\n\n        # resume momC\n        self.momC.signal(\"-CONT\")\n\n        # With momC resumed, it now 
receives DELETE_JOB2 request from\n        # MS\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20)\n\n    def test_release_nodes_excl_server_restart_immed(self):\n        \"\"\"\n        Test:\n             Test a job submitted with exclusive\n             placement (-l place=scatter:excl):\n             release nodes from it while a parent\n             mom is stopped, then stop the server\n             with qterm -t immediate, which\n             requeues the job completely. When the\n             server is started again, the job gets\n             assigned the resources of the original\n             request, as before the pbs_release_nodes\n             call.\n\n             Given a job submitted with a select spec of\n             2 super-chunks of ncpus=3 and mem=2gb each,\n             and 1 chunk of ncpus=1 and mem=1gb, along with\n             place spec of \"scatter:excl\", resulting in an:\n\n             exec_vnode=\n                  (<n1>+<n2>+<n3>)+(<n4>+<n5>+<n6>)+(<n7>)\n\n             Then stop parent mom host of <n7> (kill -STOP), now issue:\n                pbs_release_nodes -j <job-id> <n4> <n5> <n7>\n\n             causing <n4>, <n5>, and <n7> to still be tied to the job\n             as there's still node <n6> tied to the job as part of mom\n             hostB, which satisfies the second super-chunk.\n             Node <n7> is still assigned to the job as parent\n             mom hostC has been stopped.\n\n             Do a qterm -t immediate, which will requeue the\n             currently running job.\n\n             Now start pbs_server.\n\n             The job goes back to getting assigned resources for the\n             original request, before pbs_release_nodes\n             was called.\n        \"\"\"\n        jid = self.create_and_submit_job('job11x')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '5gb',\n                                 'Resource_List.ncpus': 7,\n   
                              'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job11x_select,\n                                 'Resource_List.place': self.job11x_place,\n                                 'schedselect': self.job11x_schedselect,\n                                 'exec_host': self.job11x_exec_host,\n                                 'exec_vnode': self.job11x_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5, self.n7],\n                                'job-exclusive', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-exclusive', jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        # temporarily suspend momC so it cannot operate on released nodes\n        self.momC.signal(\"-STOP\")\n\n        # Run pbs_release_nodes on nodes belonging to momB and momC\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid,\n               self.n4, self.n5, self.n7]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     sudo=True)\n        self.assertEqual(ret['rc'], 0)\n\n        # mom hostB and mom hostC continue to hold on to the job\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostB), n=10,\n            regexp=True,\n            existence=False, max_attempts=5, interval=1)\n\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostC), n=10,\n            regexp=True,\n            existence=False, max_attempts=5, interval=1)\n\n        # since not all vnodes from momB have been freed from the job,\n        # DELETE_JOB2 request from MS is not sent\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            
existence=False, max_attempts=5, interval=1)\n\n        # node <n7> from mom hostC has not been freed from the job\n        # because that mom is currently stopped, so the DELETE_JOB2\n        # request from MS is not sent\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            existence=False, max_attempts=5, interval=1)\n\n        # Verify remaining job resources.\n\n        sel_esc = self.job11x_select.replace(\"+\", r\"\\+\")\n        exec_host_esc = self.job11x_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        exec_vnode_esc = self.job11x_exec_vnode.replace(\"[\", r\"\\[\").replace(\n            \"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\n                    \"+\", r\"\\+\")\n        newsel = \"1:mem=2097152kb:ncpus=3+1:ncpus=1\"\n        newsel_esc = newsel.replace(\"+\", r\"\\+\")\n        new_exec_host = self.job11x_exec_host.replace(\n            \"+%s/0\" % (self.n7,), \"\")\n        new_exec_host_esc = new_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        new_exec_vnode = self.job11x_exec_vnode.replace(\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n4,), \"\")\n        new_exec_vnode = new_exec_vnode.replace(\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n5,), \"\")\n        new_exec_vnode = new_exec_vnode.replace(\n            \"+(%s:ncpus=1:mem=1048576kb)\" % (self.n7,), \"\")\n        new_exec_vnode_esc = new_exec_vnode.replace(\"[\", r\"\\[\").replace(\n            \"]\", r\"\\]\").replace(\n            \"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\"+\", r\"\\+\")\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '2gb',\n                                 'Resource_List.ncpus': 4,\n                  
               'Resource_List.select': newsel,\n                                 'Resource_List.place': self.job11x_place,\n                                 'Resource_List.nodect': 2,\n                                 'schedselect': newsel,\n                                 'exec_host': new_exec_host,\n                                 'exec_vnode': new_exec_vnode},\n                           id=jid)\n\n        # Though the job is listed with ncpus=4 taking away released vnode\n        # <n4> (1 cpu), <n5> (1 cpu), <n7> (1 cpu),\n        # hostB hasn't released job because <n6> is still part of the job and\n        # <n7> hasn't been released because the mom is stopped.\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5,\n                                 self.n7], 'job-exclusive', jobs_assn1, 1,\n                                '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-exclusive', jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 7,\n                                    'resources_assigned.mem': '5242880kb'})\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 7,\n                                   'resources_assigned.mem': '5242880kb'},\n                           id=\"workq\")\n\n        # Stop and Start the server\n        om = self.server.get_op_mode()\n        self.server.set_op_mode(PTL_CLI)\n        self.server.qterm(manner=\"immediate\")\n        self.server.set_op_mode(om)\n        self.assertFalse(self.server.isUp())\n\n        check_time = time.time()\n\n        # resume momC, but this is a stale request (nothing happens)\n        # since server is down.\n        self.momC.signal(\"-CONT\")\n\n        # start the server again\n        self.server.start()\n        
self.assertTrue(self.server.isUp())\n\n        # make sure job is now running after server restart\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        # make sure job is running with assigned resources\n        # from the original request\n        self.server.expect(JOB, {'Resource_List.mem': '5gb',\n                                 'Resource_List.ncpus': 7,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job11x_select,\n                                 'Resource_List.place': self.job11x_place,\n                                 'schedselect': self.job11x_schedselect,\n                                 'exec_host': self.job11x_exec_host}, id=jid)\n\n        self.server.log_match(\"Job;%s;Job Run.+on exec_vnode %s\" % (\n                              jid, self.job11x_exec_vnode_match), regexp=True,\n                              starttime=check_time)\n\n        self.server.expect(VNODE, {'state=job-exclusive': 7},\n                           count=True, max_attempts=20, interval=2)\n        self.server.expect(VNODE, {'state=free': 4},\n                           count=True, max_attempts=20, interval=2)\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 7,\n                                    'resources_assigned.mem': '5242880kb'})\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 7,\n                                   'resources_assigned.mem': '5242880kb'},\n                           id=\"workq\")\n\n    def test_release_nodes_shared_server_restart_quick(self):\n        \"\"\"\n        Test:\n             Like test_release_nodes_excl_server_restart_quick test\n             except the job submitted does not have exclusive\n             placement, simply -l place=scatter.\n             The results are the same, except the vnode states\n             are either \"job-busy\" or \"free\" when there are resources\n             still available to 
share.\n        \"\"\"\n\n        jid = self.create_and_submit_job('job11')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '5gb',\n                                 'Resource_List.ncpus': 7,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job11_select,\n                                 'Resource_List.place': self.job11_place,\n                                 'schedselect': self.job11_schedselect,\n                                 'exec_host': self.job11_exec_host,\n                                 'exec_vnode': self.job11_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        # node <n7> is free since there's still 1 ncpus and 1 gb\n        # that can be shared with other jobs\n        self.match_vnode_status([self.n7], 'free', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        # temporarily suspend momC, prevents from operating on released nodes\n        self.momC.signal(\"-STOP\")\n\n        # Run pbs_release_nodes on nodes belonging to momB and momC\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid,\n               self.n4, self.n5, self.n7]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     sudo=True)\n        self.assertEqual(ret['rc'], 0)\n\n        # mom hostB and mom hostC continuue to hold on to the job\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostB), n=10,\n            regexp=True,\n            existence=False, max_attempts=5, 
interval=1)\n\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostC), n=10,\n            regexp=True,\n            existence=False, max_attempts=5, interval=1)\n\n        # since not all vnodes from momB have been freed from the job,\n        # DELETE_JOB2 request from MS is not sent\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            existence=False, max_attempts=5, interval=1)\n\n        # node <n7> from mom hostC has not been freed from the job\n        # because that mom is currently stopped, so the DELETE_JOB2\n        # request from MS is not sent\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            existence=False, max_attempts=5, interval=1)\n\n        # Verify remaining job resources.\n        sel_esc = self.job11_select.replace(\"+\", r\"\\+\")\n        exec_host_esc = self.job11_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        exec_vnode_esc = self.job11_exec_vnode.replace(\"[\", r\"\\[\").replace(\n            \"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\n                    \"+\", r\"\\+\")\n        newsel = \"1:mem=2097152kb:ncpus=3+1:ncpus=1\"\n        newsel_esc = newsel.replace(\"+\", r\"\\+\")\n        new_exec_host = self.job11_exec_host.replace(\n            \"+%s/0\" % (self.n7,), \"\")\n        new_exec_host_esc = new_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        new_exec_vnode = self.job11_exec_vnode.replace(\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n4,), \"\")\n        new_exec_vnode = new_exec_vnode.replace(\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n5,), \"\")\n        new_exec_vnode = new_exec_vnode.replace(\n            
\"+(%s:ncpus=1:mem=1048576kb)\" % (self.n7,), \"\")\n        new_exec_vnode_esc = new_exec_vnode.replace(\"[\", r\"\\[\").replace(\n            \"]\", r\"\\]\").replace(\n            \"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\"+\", r\"\\+\")\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '2gb',\n                                 'Resource_List.ncpus': 4,\n                                 'Resource_List.select': newsel,\n                                 'Resource_List.place': self.job11_place,\n                                 'Resource_List.nodect': 2,\n                                 'schedselect': newsel,\n                                 'exec_host': new_exec_host,\n                                 'exec_vnode': new_exec_vnode},\n                           id=jid)\n\n        # Though the job is listed with ncpus=4 taking away released vnode\n        # <n4> (1 cpu), <n5> (1 cpu), <n7> (1 cpu),\n        # hostB hasn't released job because <n6> is still part of the job and\n        # <n7> hasn't been released because the mom is stopped.\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1,\n                                '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        # node <n7> is free since there's still 1 ncpus and 1 gb\n        # that can be shared with other jobs\n        self.match_vnode_status([self.n7], 'free', jobs_assn1,\n                                1, '1048576kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 7,\n                                    'resources_assigned.mem': '5242880kb'})\n        self.server.expect(QUEUE, 
{'resources_assigned.ncpus': 7,\n                                   'resources_assigned.mem': '5242880kb'},\n                           id=\"workq\")\n\n        # Stop and Start the server\n        om = self.server.get_op_mode()\n        self.server.set_op_mode(PTL_CLI)\n        self.server.qterm(manner=\"quick\")\n        self.server.set_op_mode(om)\n        self.assertFalse(self.server.isUp())\n        self.server.start()\n        self.assertTrue(self.server.isUp())\n\n        # Job should have the same state as before\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '2gb',\n                                 'Resource_List.ncpus': 4,\n                                 'Resource_List.select': newsel,\n                                 'Resource_List.place': self.job11_place,\n                                 'Resource_List.nodect': 2,\n                                 'schedselect': newsel,\n                                 'exec_host': new_exec_host,\n                                 'exec_vnode': new_exec_vnode},\n                           id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1,\n                                '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        # parent mom of <n7> is currently in stopped state\n        self.match_vnode_status([self.n7], 'state-unknown,down', jobs_assn1,\n                                1, '1048576kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 7,\n                                    'resources_assigned.mem': '5242880kb'})\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 7,\n           
                        'resources_assigned.mem': '5242880kb'},\n                           id=\"workq\")\n\n        # resume momC\n        self.momC.signal(\"-CONT\")\n\n        # With momC resumed, it now receives DELETE_JOB2 request from\n        # MS\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20)\n\n    def test_release_nodes_shared_server_restart_immed(self):\n        \"\"\"\n        Test:\n             Like the test_release_nodes_excl_server_restart_immed test\n             except the job submitted does not have exclusive\n             placement, simply -l place=scatter.\n             The results are the same, except the vnode states\n             are either \"job-busy\" or \"free\" when there are resources\n             still available to share.\n        \"\"\"\n\n        jid = self.create_and_submit_job('job11')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '5gb',\n                                 'Resource_List.ncpus': 7,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job11_select,\n                                 'Resource_List.place': self.job11_place,\n                                 'schedselect': self.job11_schedselect,\n                                 'exec_host': self.job11_exec_host,\n                                 'exec_vnode': self.job11_exec_vnode}, id=jid)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        # node <n7> still has resources (ncpus=1, mem=1gb) to share\n        self.match_vnode_status([self.n7], 'free', jobs_assn1,\n                                1, 
'1048576kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        # temporarily suspend momC so it cannot operate on released nodes\n        self.momC.signal(\"-STOP\")\n\n        # Run pbs_release_nodes on nodes belonging to momB and momC\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid,\n               self.n4, self.n5, self.n7]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     sudo=True)\n        self.assertEqual(ret['rc'], 0)\n\n        # mom hostB and mom hostC continue to hold on to the job\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostB), n=10,\n            regexp=True,\n            existence=False, max_attempts=5, interval=1)\n\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostC), n=10,\n            regexp=True,\n            existence=False, max_attempts=5, interval=1)\n\n        # since not all vnodes from momB have been freed from the job,\n        # DELETE_JOB2 request from MS is not sent\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            existence=False, max_attempts=5, interval=1)\n\n        # node <n7> from mom hostC has not been freed from the job\n        # because that mom is currently stopped, so the DELETE_JOB2\n        # request from MS is not sent\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            existence=False, max_attempts=5, interval=1)\n\n        # Verify remaining job resources.\n        sel_esc = self.job11_select.replace(\"+\", r\"\\+\")\n        exec_host_esc = self.job11_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        exec_vnode_esc = self.job11_exec_vnode.replace(\"[\", r\"\\[\").replace(\n            \"]\", r\"\\]\").replace(\"(\", 
r\"\\(\").replace(\")\", r\"\\)\").replace(\n                    \"+\", r\"\\+\")\n        newsel = \"1:mem=2097152kb:ncpus=3+1:ncpus=1\"\n        newsel_esc = newsel.replace(\"+\", r\"\\+\")\n        new_exec_host = self.job11_exec_host.replace(\n            \"+%s/0\" % (self.n7,), \"\")\n        new_exec_host_esc = new_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        new_exec_vnode = self.job11_exec_vnode.replace(\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n4,), \"\")\n        new_exec_vnode = new_exec_vnode.replace(\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n5,), \"\")\n        new_exec_vnode = new_exec_vnode.replace(\n            \"+(%s:ncpus=1:mem=1048576kb)\" % (self.n7,), \"\")\n        new_exec_vnode_esc = new_exec_vnode.replace(\n            \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n            \"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\"+\", r\"\\+\")\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '2gb',\n                                 'Resource_List.ncpus': 4,\n                                 'Resource_List.select': newsel,\n                                 'Resource_List.place': self.job11_place,\n                                 'Resource_List.nodect': 2,\n                                 'schedselect': newsel,\n                                 'exec_host': new_exec_host,\n                                 'exec_vnode': new_exec_vnode},\n                           id=jid)\n\n        # Though the job is listed with ncpus=4 taking away released vnode\n        # <n4> (1 cpu), <n5> (1 cpu), <n7> (1 cpu),\n        # hostB hasn't released job because <n6> is still part of the job and\n        # <n7> hasn't been released because the mom is stopped.\n\n        # Check various vnode status.\n       
 jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1,\n                                '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        # node <n7> still has resources (ncpus=1, mem=1gb) to share\n        self.match_vnode_status([self.n7], 'free', jobs_assn1,\n                                1, '1048576kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 7,\n                                    'resources_assigned.mem': '5242880kb'})\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 7,\n                                   'resources_assigned.mem': '5242880kb'},\n                           id=\"workq\")\n\n        # Stop and Start the server\n        om = self.server.get_op_mode()\n        self.server.set_op_mode(PTL_CLI)\n        self.server.qterm(manner=\"immediate\")\n        self.server.set_op_mode(om)\n        self.assertFalse(self.server.isUp())\n\n        check_time = time.time()\n\n        # resume momC, but this is a stale request (nothing happens)\n        # since server is down.\n        self.momC.signal(\"-CONT\")\n\n        # start the server again\n        self.server.start()\n        self.assertTrue(self.server.isUp())\n\n        # make sure job is now running after server restart\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        # make sure job is running with assigned resources\n        # from the original request\n        self.server.expect(JOB, {'Resource_List.mem': '5gb',\n                                 'Resource_List.ncpus': 7,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job11_select,\n                                 
'Resource_List.place': self.job11_place,\n                                 'schedselect': self.job11_schedselect,\n                                 'exec_host': self.job11_exec_host}, id=jid)\n\n        self.server.log_match(\"Job;%s;Job Run.+on exec_vnode %s\" % (\n                              jid, self.job11x_exec_vnode_match), regexp=True,\n                              starttime=check_time)\n\n        # 7 vnodes are assigned in a shared way: 6 of them have a single\n        # cpu, while 1 has multiple cpus. So those 6 will get the\n        # \"job-busy\" state, while the other will be in the \"free\" state\n        # like the rest.\n        self.server.expect(VNODE, {'state=job-busy': 6},\n                           count=True, max_attempts=20, interval=2)\n        self.server.expect(VNODE, {'state=free': 5},\n                           count=True, max_attempts=20, interval=2)\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 7,\n                                    'resources_assigned.mem': '5242880kb'})\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 7,\n                                   'resources_assigned.mem': '5242880kb'},\n                           id=\"workq\")\n\n    def test_release_mgr_oper(self):\n        \"\"\"\n        Test that nodes are getting released as manager and operator\n        \"\"\"\n\n        jid = self.create_and_submit_job('job1_5')\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, 
id=jid)\n\n        self.server.manager(MGR_CMD_UNSET, SERVER, [\"managers\", \"operators\"])\n        manager = str(MGR_USER) + '@*'\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'managers': (INCR, manager)},\n                            sudo=True)\n        operator = str(OPER_USER) + '@*'\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'operators': (INCR, operator)},\n                            sudo=True)\n\n        # Release hostC as manager\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n7]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     runas=MGR_USER)\n        self.assertEqual(ret['rc'], 0)\n\n        # Only mom hostC will get the job summary since it was released\n        # early, courtesy of its sole vnode <n7>.\n        self.momA.log_match(\n            \"Job;%s;%s.+cput=.+ mem=.+\" % (jid, self.hostC), n=10,\n            regexp=True)\n\n        # Only mom hostC will get the IM_DELETE_JOB2 request\n        self.momC.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20)\n\n        # Release vnodes from momB as operator\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n5, self.n6]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     runas=OPER_USER)\n        self.assertEqual(ret['rc'], 0)\n\n        # momB's host will not get job summary reported\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            jid, self.hostB), n=10, regexp=True, max_attempts=5,\n            existence=False, interval=1)\n\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid,), n=20,\n                            max_attempts=5, existence=False, interval=1)\n\n        # Verify remaining job resources.\n        sel_esc = self.job1_select.replace(\"+\", r\"\\+\")\n        exec_host_esc = self.job1_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", 
r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        exec_vnode_esc = self.job1_exec_vnode.replace(\"[\", r\"\\[\").replace(\n            \"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\n                    \"+\", r\"\\+\")\n        newsel = \"1:mem=2097152kb:ncpus=3+1:mem=1048576kb:ncpus=1\"\n\n        newsel_esc = newsel.replace(\"+\", r\"\\+\")\n        new_exec_host = self.job1_exec_host.replace(\n            \"+%s/0*2\" % (self.n7,), \"\")\n        new_exec_host_esc = new_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        new_exec_vnode = self.job1_exec_vnode.replace(\n            \"+%s:mem=1048576kb:ncpus=1\" % (self.n5,), \"\")\n        new_exec_vnode = new_exec_vnode.replace(\n            \"+%s:ncpus=1\" % (self.n6,), \"\")\n        new_exec_vnode = new_exec_vnode.replace(\n            \"+(%s:ncpus=2:mem=2097152kb)\" % (self.n7,), \"\")\n        new_exec_vnode_esc = \\\n            new_exec_vnode.replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                \"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\"+\", r\"\\+\")\n        self.server.expect(JOB,\n                           {'job_state': 'R',\n                            'Resource_List.mem': '3gb',\n                            'Resource_List.ncpus': 4,\n                            'Resource_List.select': newsel,\n                            'Resource_List.place': self.job1_place,\n                            'Resource_List.nodect': 2,\n                            'schedselect': newsel,\n                            'exec_host': new_exec_host,\n                            'exec_vnode': new_exec_vnode},\n                           id=jid,\n                           runas=ROOT_USER)\n\n        # Though the job is listed with ncpus=4 taking away released vnode\n        # <n5> (1 cpu), <n6> (1 cpu), <n7> (2 cpus),\n        
# only <n7> got released.  <n5> and <n6> are part of a super\n        # chunk that wasn't fully released.\n\n        # Check account update ('u') record\n        self.match_accounting_log('u', jid, self.job1_exec_host_esc,\n                                  self.job1_exec_vnode_esc, \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  self.job1_sel_esc)\n\n        # Check to make sure 'c' (next) record got generated\n        self.match_accounting_log('c', jid, new_exec_host_esc,\n                                  new_exec_vnode_esc, \"3145728kb\",\n                                  4, 2, self.job1_place, newsel_esc)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        # <n5> still job-busy\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        # <n6> still job-busy\n        self.match_vnode_status([self.n3, self.n6], 'job-busy', jobs_assn1, 1,\n                                '0kb')\n\n        # <n7> now free\n        self.match_vnode_status([self.n0, self.n7, self.n8, self.n9, self.n10],\n                                'free')\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 6,\n                                    'resources_assigned.mem': '4194304kb'})\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 6,\n                                   'resources_assigned.mem': '4194304kb'},\n                           id=\"workq\")\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, new_exec_host))\n\n        self.server.delete(jid, runas=ROOT_USER)\n\n        # Check account phased end ('e') record\n        self.match_accounting_log('e', jid, new_exec_host_esc,\n                                  new_exec_vnode_esc,\n                                  \"3145728kb\", 4, 2,\n                                  self.job1_place,\n          
                        newsel_esc)\n\n        # Check to make sure 'E' (end of job) record got generated\n        self.match_accounting_log('E', jid, self.job1_exec_host_esc,\n                                  self.job1_exec_vnode_esc, \"6gb\",\n                                  8, 3, self.job1_place, self.job1_sel_esc)\n\n    def test_release_job_array(self):\n        \"\"\"\n        Release vnodes from a job array and subjob\n        \"\"\"\n\n        jid = self.create_and_submit_job('jobA')\n\n        self.server.expect(JOB, {'job_state': 'B',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect}, id=jid)\n\n        # Release nodes from job array. 
It will fail\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n4]\n        try:\n            ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                         sudo=True)\n        except PtlException as e:\n            self.assertTrue(\"not supported for Array jobs\" in e.msg)\n            self.assertNotEqual(e.rc, 0)\n\n        # Verify the same for subjob1\n        subjob1 = jid.replace('[]', '[1]')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode},\n                           id=subjob1)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (subjob1,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (subjob1, subjob1)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(subjob1, self.job1_exec_host))\n\n        # Run pbs_release_nodes as root\n        cmd = [self.pbs_release_nodes_cmd, '-j', subjob1, self.n4]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                           
          sudo=True)\n        self.assertEqual(ret['rc'], 0)\n\n        # Verify mom_logs\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            subjob1, self.hostB), n=10,\n            regexp=True,\n            max_attempts=5,\n            existence=False, interval=1)\n\n        self.momA.log_match(\"Job;%s;%s.+cput=.+ mem=.+\" % (\n            subjob1, self.hostC), n=10,\n            regexp=True, max_attempts=5,\n            existence=False, interval=1)\n\n        # momB's host will not get DELETE_JOB2 request since\n        # not all its vnodes have been released yet from the job.\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (subjob1,),\n                            n=20, max_attempts=5,\n                            existence=False, interval=1)\n\n        # Verify remaining job resources.\n        newsel = \"1:mem=2097152kb:ncpus=3+1:mem=1048576kb:ncpus=2+\" + \\\n                 \"1:ncpus=2:mem=2097152kb\"\n        newsel_esc = newsel.replace(\"+\", r\"\\+\")\n        new_exec_host = self.job1_exec_host\n\n        # Below variable is being used for the accounting log match\n        # which is currently blocked on PTL bug PP-596.\n        # new_exec_host_esc = self.job1_exec_host.replace(\n        # \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\"+\",\n        # r\"\\+\")\n\n        new_exec_vnode = self.job1_exec_vnode.replace(\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n4,), \"\")\n        new_exec_vnode_esc = new_exec_vnode.replace(\n            \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n            \"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\"+\", r\"\\+\")\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '5gb',\n                                 'Resource_List.ncpus': 7,\n                                 'Resource_List.select': newsel,\n                                 'Resource_List.place': self.job1_place,\n  
                               'Resource_List.nodect': 3,\n                                 'schedselect': newsel,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': new_exec_vnode}, id=subjob1)\n\n        # BELOW CODE IS BLOCKED ON PP-596\n        # Check account update ('u') record\n        # self.match_accounting_log('u', subjob1, self.job1_exec_host_esc,\n        #                          self.job1_exec_vnode_esc, \"6gb\", 8, 3,\n        #                          self.job1_place,\n        #                          self.job1_sel_esc)\n\n        # Check to make sure 'c' (next) record got generated\n        # self.match_accounting_log('c', subjob1, self.job1_exec_host_esc,\n        #                          new_exec_vnode_esc, \"5242880kb\",\n        #                          7, 3, self.job1_place, newsel_esc)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (subjob1,)\n        self.match_vnode_status([self.n1, self.n2, self.n4, self.n5],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.n3, self.n6], 'job-busy', jobs_assn1, 1,\n                                '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (subjob1, subjob1)\n        self.match_vnode_status([self.n7], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.n0, self.n8, self.n9, self.n10], 'free')\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 8,\n                                    'resources_assigned.mem': '6291456kb'})\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 8,\n                                   'resources_assigned.mem': '6291456kb'},\n                           id=\"workq\")\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(subjob1, new_exec_host))\n\n        self.server.delete(subjob1)\n\n        # Check 
account phased end ('e') record\n        # self.match_accounting_log('e', subjob1, new_exec_host_esc,\n        #                          new_exec_vnode_esc,\n        #                          \"5242880kb\", 7, 3,\n        #                          self.job1_place,\n        #                          newsel_esc)\n\n        # Check to make sure 'E' (end of job) record got generated\n        # self.match_accounting_log('E', subjob1, self.job1_exec_host_esc,\n        #                          self.job1_exec_vnode_esc, \"6gb\",\n        #                          8, 3, self.job1_place, self.job1_sel_esc)\n\n    def test_release_job_states(self):\n        \"\"\"\n        Release nodes on jobs in various states; Q, H, S, W\n        \"\"\"\n\n        # Submit a regular job that cannot run\n        a = {'Resource_List.ncpus': 100}\n        j = Job(TEST_USER, a)\n        jid = self.server.submit(j)\n\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n\n        # Release nodes from a queued job\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n4]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     sudo=True)\n        self.assertNotEqual(ret['rc'], 0)\n\n        self.server.delete(jid, wait=True)\n\n        # Submit a held job and try releasing the node\n        j1 = Job(TEST_USER, {ATTR_h: None})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'H'}, id=jid1)\n\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid1, self.n4]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     sudo=True)\n        self.assertNotEqual(ret['rc'], 0)\n        self.server.delete(jid1, wait=True)\n\n        # Submit a job in W state and try releasing the node\n        mydate = int(time.time()) + 120\n        mytime = convert_time('%m%d%H%M', str(mydate))\n        j2 = Job(TEST_USER, {ATTR_a: mytime})\n        jid2 = 
self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'W'}, id=jid2)\n\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid2, self.n4]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     sudo=True)\n        self.assertNotEqual(ret['rc'], 0)\n        self.server.delete(jid2, wait=True)\n\n    def test_release_finishjob(self):\n        \"\"\"\n        Test that releasing vnodes on finished jobs will fail;\n        also verify the updated schedselect on a finished job\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'job_history_enable': \"true\"}, sudo=True)\n\n        jid = self.create_and_submit_job('job1_5')\n        self.server.expect(JOB, {'job_state': \"R\"}, id=jid)\n\n        # Release hostC\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n7]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd)\n        self.assertEqual(ret['rc'], 0)\n\n        # Submit another job and make sure it is\n        # picked up by hostC\n        j = Job(TEST_USER,\n                {'Resource_List.select': \"1:host=\" + self.hostC})\n        jid2 = self.server.submit(j)\n        ehost = self.hostC + \"/1\"\n        self.server.expect(JOB, {'job_state': \"R\",\n                                 \"exec_host\": ehost}, id=jid2)\n\n        self.server.delete(jid, wait=True)\n\n        # Release vnode4 from a finished job. 
It will throw error.\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n4]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd)\n        self.assertNotEqual(ret['rc'], 0)\n\n        # Verify the schedselect for a finished job\n        newsel = \"1:mem=2097152kb:ncpus=3+1:mem=2097152kb:ncpus=3\"\n        new_exec_host = \"%s/0*0+%s/0*0\" % (self.n0, self.hostB)\n        new_exec_vnode = self.job1_exec_vnode.replace(\n            \"+(%s:ncpus=2:mem=2097152kb)\" % (self.n7,), \"\")\n        self.server.expect(JOB, {'job_state': 'F',\n                                 'Resource_List.mem': '4194304kb',\n                                 'Resource_List.ncpus': 6,\n                                 'Resource_List.select': newsel,\n                                 'Resource_List.place': self.job1_place,\n                                 'Resource_List.nodect': 2,\n                                 'schedselect': newsel,\n                                 'exec_host': new_exec_host,\n                                 'exec_vnode': new_exec_vnode},\n                           extend='x', id=jid)\n\n    def test_release_suspendjob(self):\n        \"\"\"\n        Test that releasing nodes on suspended job will also\n        fail and schedselect will not change\n        \"\"\"\n\n        jid = self.create_and_submit_job('job1_5')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        cmd = 
[self.pbs_release_nodes_cmd, '-j', jid, self.n4]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd)\n        self.assertEqual(ret['rc'], 0)\n\n        # Verify remaining job resources\n        newsel = \"1:mem=2097152kb:ncpus=3+1:mem=1048576kb:ncpus=2+\" + \\\n                 \"1:ncpus=2:mem=2097152kb\"\n        new_exec_vnode = self.job1_exec_vnode.replace(\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n4,), \"\")\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '5gb',\n                                 'Resource_List.ncpus': 7,\n                                 'Resource_List.select': newsel,\n                                 'Resource_List.place': self.job1_place,\n                                 'Resource_List.nodect': 3,\n                                 'schedselect': newsel,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': new_exec_vnode}, id=jid)\n\n        # Suspend the job with qsig\n        self.server.sigjob(jid, 'suspend', runas=ROOT_USER)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid)\n\n        # Try releasing a node from suspended job\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n7]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     sudo=True)\n        self.assertNotEqual(ret['rc'], 0)\n\n        # Verify that resources won't change\n        self.server.expect(JOB, {'job_state': 'S',\n                                 'Resource_List.mem': '5gb',\n                                 'Resource_List.ncpus': 7,\n                                 'Resource_List.select': newsel,\n                                 'Resource_List.place': self.job1_place,\n                                 'Resource_List.nodect': 3,\n                                 'schedselect': newsel,\n                                 'exec_host': self.job1_exec_host,\n    
                             'exec_vnode': new_exec_vnode}, id=jid)\n\n        # Resume the job and make sure it is running\n        self.server.sigjob(jid, 'resume')\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '5gb',\n                                 'Resource_List.ncpus': 7,\n                                 'Resource_List.select': newsel,\n                                 'Resource_List.place': self.job1_place,\n                                 'Resource_List.nodect': 3,\n                                 'schedselect': newsel,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': new_exec_vnode}, id=jid)\n\n    @timeout(500)\n    def test_release_multi_jobs(self):\n        \"\"\"\n        Release vnodes when multiple jobs are present\n        \"\"\"\n\n        # Delete the vnodes and recreate them\n        self.momA.delete_vnode_defs()\n        self.momB.delete_vnode_defs()\n        self.momA.restart()\n        self.momB.restart()\n        self.server.manager(MGR_CMD_DELETE, NODE, None, \"\")\n\n        self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostA)\n        self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostB)\n        self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostC)\n\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {'resources_available.ncpus': 3},\n                            id=self.hostA)\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {'resources_available.ncpus': 3},\n                            id=self.hostB)\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {'resources_available.ncpus': 3},\n                            id=self.hostC)\n\n        self.server.expect(NODE, {'state=free': 3})\n\n        # Submit multiple jobs\n        jid1 = self.create_and_submit_job('job13')\n        jid2 = self.create_and_submit_job('job13')\n  
      jid3 = self.create_and_submit_job('job13')\n\n        e_host_j1 = self.hostA + \"/0+\" + self.hostB + \"/0+\" + self.hostC + \"/0\"\n        e_host_j2 = self.hostA + \"/1+\" + self.hostB + \"/1+\" + self.hostC + \"/1\"\n        e_host_j3 = self.hostA + \"/2+\" + self.hostB + \"/2+\" + self.hostC + \"/2\"\n        e_vnode = \"(%s:ncpus=1)+(%s:ncpus=1)+(%s:ncpus=1)\" \\\n            % (self.hostA, self.hostB, self.hostC)\n\n        self.server.expect(JOB, {\"job_state=R\": 3})\n        self.server.expect(JOB, {\"exec_host\": e_host_j1,\n                                 \"exec_vnode\": e_vnode}, id=jid1)\n        self.server.expect(JOB, {\"exec_host\": e_host_j2,\n                                 \"exec_vnode\": e_vnode}, id=jid2)\n        self.server.expect(JOB, {\"exec_host\": e_host_j3,\n                                 \"exec_vnode\": e_vnode}, id=jid3)\n\n        # Verify that 3 processes running on hostB\n        n = retry = 5\n        for _ in range(n):\n            process = 0\n            self.server.pu.get_proc_info(\n                self.momB.hostname, \".*fib.*\", None, regexp=True)\n            if (self.server.pu.processes is not None):\n                for key in self.server.pu.processes:\n                    if (\"fib\" in key):\n                        process = len(self.server.pu.processes[key])\n                        self.logger.info(\n                            \"length of the process is \" + str(process) +\n                            \", expected 3\")\n            if process == 3:\n                break\n            retry -= 1\n            if retry == 0:\n                raise AssertionError(\"not found 3 fib processes\")\n            self.logger.info(\"sleeping 3 secs before next retry\")\n            time.sleep(3)\n\n        # Release node2 from job1 only\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid1, self.hostB]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     
runas=TEST_USER)\n        self.assertEqual(ret['rc'], 0)\n\n        self.momB.log_match(\"Job;%s;DELETE_JOB2 received\" % (jid1,),\n                            interval=2)\n\n        # Verify that only 2 processes are left on hostB now\n        process = 0\n        self.server.pu.get_proc_info(\n            self.momB.hostname, \".*fib.*\", None, regexp=True)\n        if (self.server.pu.processes is not None):\n            for key in self.server.pu.processes:\n                if (\"fib\" in key):\n                    process = len(self.server.pu.processes[key])\n                    self.logger.info(\"length of the process is %d\" % process)\n        self.assertEqual(process, 2)\n\n        # Mom logs only have the message for job1 for node2 (hostB)\n        self.momA.log_match(\n            \"Job;%s;%s.+cput=.+mem.+\" % (jid1, self.hostB),\n            interval=2, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;%s.+cput=.+mem.+\" % (jid2, self.hostB),\n            max_attempts=5, regexp=True,\n            existence=False, interval=1)\n\n        self.momA.log_match(\n            \"Job;%s;%s.+cput=.+mem.+\" % (jid3, self.hostB),\n            max_attempts=5, regexp=True,\n            existence=False, interval=1)\n\n        # Verify the new schedselect for job1\n        new_e_host_j1 = e_host_j1.replace(\"+%s/0\" % (self.hostB,), \"\")\n        new_e_vnode = e_vnode.replace(\"+(%s:ncpus=1)\" % (self.hostB,), \"\")\n        self.server.expect(JOB, {'job_state': \"R\",\n                                 \"exec_host\": new_e_host_j1,\n                                 \"exec_vnode\": new_e_vnode,\n                                 \"schedselect\": \"1:ncpus=1+1:ncpus=1\",\n                                 \"Resource_List.ncpus\": 2,\n                                 \"Resource_List.nodect\": 2}, id=jid1)\n\n        # Verify that host and vnode won't change for job2 and job3\n        self.server.expect(JOB, {'job_state': \"R\",\n                                 \"exec_host\": 
e_host_j2,\n                                 \"exec_vnode\": e_vnode,\n                                 \"Resource_List.nodect\": 3}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'R',\n                                 \"exec_host\": e_host_j3,\n                                 \"exec_vnode\": e_vnode,\n                                 \"Resource_List.nodect\": 3}, id=jid3)\n\n    def test_PBS_JOBID(self):\n        \"\"\"\n        Test that if -j jobid is not provided then it is\n        picked by env variable $PBS_JOBID in job script\n        \"\"\"\n\n        # This one has a job script that calls 'pbs_release_nodes'\n        # (no jobid specified)\n        jid = self.create_and_submit_job('job1_6')\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode}, id=jid)\n\n        # Verify remaining job resources\n        newsel = \"1:mem=2097152kb:ncpus=3+1:mem=1048576kb:ncpus=2+\" + \\\n                 \"1:ncpus=2:mem=2097152kb\"\n        newsel_esc = newsel.replace(\"+\", r\"\\+\")\n        new_exec_vnode = self.job1_exec_vnode.replace(\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n4,), \"\")\n        new_exec_vnode_esc = new_exec_vnode.replace(\n            \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n            \"(\", r\"\\(\").replace(\")\", r\"\\)\").replace(\"+\", r\"\\+\")\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '5gb',\n                     
            'Resource_List.ncpus': 7,\n                                 'Resource_List.select': newsel,\n                                 'Resource_List.place': self.job1_place,\n                                 'Resource_List.nodect': 3,\n                                 'schedselect': newsel,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': new_exec_vnode},\n                           id=jid, interval=1)\n\n        # Check account update ('u') record\n        self.match_accounting_log('u', jid, self.job1_exec_host_esc,\n                                  self.job1_exec_vnode_esc, \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  self.job1_sel_esc)\n\n        # Check to make sure 'c' (next) record got generated\n        self.match_accounting_log('c', jid, self.job1_exec_host_esc,\n                                  new_exec_vnode_esc, \"5242880kb\",\n                                  7, 3, self.job1_place, newsel_esc)\n\n    def test_release_nodes_on_stageout_diffvalues(self):\n        \"\"\"\n        Set release_nodes_on_stageout to different values other than\n        true or false\n        \"\"\"\n\n        a = {ATTR_W: \"release_nodes_on_stageout=-1\"}\n        j = Job(TEST_USER, a)\n        try:\n            self.server.submit(j)\n        except PtlException as e:\n            self.assertTrue(\"illegal -W value\" in e.msg[0])\n\n        a = {ATTR_W: \"release_nodes_on_stageout=11\"}\n        j = Job(TEST_USER, a)\n        try:\n            self.server.submit(j)\n        except PtlException as e:\n            self.assertTrue(\"illegal -W value\" in e.msg[0])\n\n        a = {ATTR_W: \"release_nodes_on_stageout=tru\"}\n        j = Job(TEST_USER, a)\n        try:\n            self.server.submit(j)\n        except PtlException as e:\n            self.assertTrue(\"illegal -W value\" in e.msg[0])\n\n    def test_resc_accumulation(self):\n        
\"\"\"\n        Test that resources gets accumulated when a mom is released\n        \"\"\"\n\n        # skip this test due to PP-972\n        self.skip_test(reason=\"Test fails due to PP-972\")\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'job_history_enable': \"true\"}, sudo=True)\n\n        # Create custom resources\n        attr = {}\n        attr['type'] = 'float'\n        attr['flag'] = 'nh'\n        r = 'foo_f'\n        self.server.manager(\n            MGR_CMD_CREATE, RSC, attr, id=r, runas=ROOT_USER, logerr=False)\n\n        attr1 = {}\n        attr1['type'] = 'size'\n        attr1['flag'] = 'nh'\n        r1 = 'foo_i'\n        self.server.manager(\n            MGR_CMD_CREATE, RSC, attr1, id=r1, runas=ROOT_USER, logerr=False)\n\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"executed epilogue hook\")\nif e.job.in_ms_mom():\n    e.job.resources_used[\"vmem\"] = pbs.size(\"9gb\")\n    e.job.resources_used[\"foo_i\"] = pbs.size(999)\n    e.job.resources_used[\"foo_f\"] = 0.09\nelse:\n    e.job.resources_used[\"vmem\"] = pbs.size(\"10gb\")\n    e.job.resources_used[\"foo_i\"] = pbs.size(1000)\n    e.job.resources_used[\"foo_f\"] = 0.10\n\"\"\"\n\n        hook_name = \"epi\"\n        a = {'event': \"execjob_epilogue\", 'enabled': 'True'}\n        rv = self.server.create_import_hook(\n            hook_name,\n            a,\n            hook_body,\n            overwrite=True)\n        self.assertTrue(rv)\n\n        jid = self.create_and_submit_job('job1_5')\n        self.server.expect(JOB, {'job_state': \"R\"}, id=jid)\n\n        # Release hostC\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n7]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd)\n        self.assertEqual(ret['rc'], 0)\n\n        self.momC.log_match(\"executed epilogue hook\")\n        self.momC.log_match(\"DELETE_JOB2 received\")\n\n        self.server.delete(jid, wait=True)\n\n        
self.server.expect(JOB, {'job_state': 'F',\n                                 \"resources_used.foo_i\": \"3kb\",\n                                 \"resources_used.foo_f\": '0.29',\n                                 \"resources_used.vmem\": '29gb'}, id=jid)\n\n    @timeout(500)\n    def test_release_reservations(self):\n        \"\"\"\n        Releasing nodes from a reservation will throw an error. However,\n        jobs inside the reservation queue work as expected.\n        \"\"\"\n\n        # Create a reservation on multiple nodes\n        start = int(time.time()) + 30\n        a = {'Resource_List.select': self.job1_select,\n             'Resource_List.place': 'scatter',\n             'reserve_start': start}\n        r = Reservation(TEST_USER, a)\n        rid = self.server.submit(r)\n        rid = rid.split('.')[0]\n\n        self.server.expect(RESV,\n                           {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")},\n                           id=rid)\n\n        # Release a vnode from a reservation. 
It will throw an error.\n        cmd = [self.pbs_release_nodes_cmd, '-j', rid, self.n5]\n        r = self.server.du.run_cmd(self.server.hostname, cmd)\n        self.assertNotEqual(r['rc'], 0)\n\n        # Submit a job inside the reservation and release a vnode\n        a = {'queue': rid,\n             'Resource_List.select': self.job1_select}\n        j = Job(TEST_USER, a)\n        jid = self.server.submit(j)\n\n        # Wait for the job to start\n        self.server.expect(JOB, {'job_state': 'R'},\n                           offset=30, id=jid)\n\n        # Release vnodes from the job\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, self.n5]\n        r = self.server.du.run_cmd(self.server.hostname, cmd)\n        self.assertEqual(r['rc'], 0)\n\n        # Verify the new schedselect\n        newsel = \"1:mem=2097152kb:ncpus=3+1:mem=1048576kb:ncpus=2+\" + \\\n                 \"1:ncpus=2:mem=2097152kb\"\n        new_exec_host = self.job1_exec_host\n        new_exec_vnode = self.job1_exec_vnode.replace(\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.n5,), \"\")\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'Resource_List.mem': '5gb',\n                                 'Resource_List.ncpus': 7,\n                                 'Resource_List.select': newsel,\n                                 'Resource_List.nodect': 3,\n                                 'schedselect': newsel,\n                                 'exec_host': new_exec_host,\n                                 'exec_vnode': new_exec_vnode}, id=jid)\n\n    def test_execjob_end_called(self):\n        \"\"\"\n        Test to make sure that when a job is removed from\n        a mom host, the execjob_end hook is called on\n        that mom.\n        \"\"\"\n\n        # First, submit an execjob_end hook:\n\n        hook_body = \"\"\"\nimport pbs\npbs.logjobmsg(pbs.event().job.id, \"execjob_end hook executed\")\n\"\"\"\n\n        a = {'event': 
'execjob_end', 'enabled': 'true'}\n        self.server.create_import_hook(\"endjob\", a, hook_body)\n\n        # Create a multinode job request\n        a = {'Resource_List.select': '2:ncpus=1',\n             'Resource_List.place': 'scatter'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n\n        # Wait for the job to start\n        self.server.expect(JOB, {'job_state': 'R'},\n                           offset=30, id=jid)\n\n        cmd = [self.pbs_release_nodes_cmd, '-j', jid, '-a']\n        ret = self.server.du.run_cmd(self.server.hostname,\n                                     cmd, runas=TEST_USER)\n        self.assertEqual(ret['rc'], 0)\n\n        # Check the sister mom log for the \"execjob_end hook executed\"\n        self.momB.log_match(\"execjob_end hook executed\")\n\n        # Verify the rest of the job is still running\n        self.server.expect(JOB, {'job_state': 'R'},\n                           id=jid)\n"
  },
  {
    "path": "test/tests/functional/pbs_node_rampdown_keep_select.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nfrom collections import Counter\nfrom copy import deepcopy\nfrom os import name\n\nfrom tests.functional import *\n\n\nclass n_conf:\n    \"\"\"\n    used to define node configuration info\n    \"\"\"\n    def __init__(self, node_a={}, node_ct=0, usenatvn=False):\n        self.vnode_ct = node_ct\n        self.a = node_a\n        self.usenatvnode = usenatvn\n\n\nclass new_res:\n    \"\"\"\n    used to define new custom resource\n    \"\"\"\n    def __init__(self, res_name, res_a={}):\n        self.res_name = res_name\n        self.a = res_a\n\n\nclass test_config:\n    \"\"\"\n    used to store config of a test case\n    \"\"\"\n    def __init__(self, qsub_sel, keep_sel, sched_sel, expected_res, job_stat,\n                 rel_user, qsub_sel_after, sched_sel_after, job_stat_after,\n                 expected_res_after, skip_vnode_status_check=False,\n                 use_script=False):\n        self.qsub_sel = qsub_sel\n        self.keep_sel = keep_sel\n        self.sched_sel = sched_sel\n        self.expected_res = expected_res\n        self.job_stat = job_stat\n        self.rel_user = rel_user\n        self.qsub_sel_after = qsub_sel_after\n        self.sched_sel_after = sched_sel_after\n        self.job_stat_after = job_stat_after\n        self.expected_res_after = expected_res_after\n        self.skip_vnode_status_check = skip_vnode_status_check\n        
self.use_script = use_script\n\n\n@requirements(num_moms=5)\nclass TestPbsNodeRampDownKeepSelect(TestFunctional):\n    \"\"\"\n    This tests the Node Rampdown Feature's extension called keep_select,\n    where, while a job is running, nodes/resources assigned on\n    non-mother-superior hosts can be released by specifying a subselect.\n\n    Custom parameters:\n    moms: colon-separated hostnames of five MoMs\n    \"\"\"\n\n    res_s_h = {'type': 'string', 'flag': 'h'}\n    res_b_h = {'type': 'boolean', 'flag': 'h'}\n    res_l_nh = {'type': 'long', 'flag': 'nh'}\n    res_sz_nh = {'type': 'size', 'flag': 'nh'}\n    res_f_nh = {'type': 'float', 'flag': 'nh'}\n\n    def pbs_nodefile_match_exec_host(self, jid, ehost, schedselect=None):\n        \"\"\"\n        Look into the PBS_NODEFILE on the first host listed in 'exec_host'\n        and assert that all host entries in 'exec_host' match the entries\n        in the file.\n\n        Also look for 'mpiprocs' values in 'schedselect' (if not None), and\n        verify that the corresponding node hosts appear in the\n        PBS_NODEFILE 'mpiprocs' number of times.\n        \"\"\"\n\n        pbs_nodefile = os.path.join(self.server.\n                                    pbs_conf['PBS_HOME'], 'aux', jid)\n\n        # look for mpiprocs settings\n        mpiprocs = []\n        if schedselect is not None:\n            select_list = schedselect.split('+')\n\n            for chunk in select_list:\n                ch_ct = '1'\n                chl = chunk.split(':')\n                if chl[0].isnumeric():\n                    ch_ct = chl[0]\n                    del chl[0]\n                for x in range(int(ch_ct)):\n                    tmpmpi = '1'\n                    for ch in chl:\n                        if ch.find('=') != -1:\n                            c = ch.split('=')\n                            if c[0] == \"mpiprocs\":\n                                tmpmpi = c[1]\n                    
mpiprocs.append(tmpmpi)\n        first_host = ehost[0]\n\n        cmd = ['cat', pbs_nodefile]\n        ret = self.server.du.run_cmd(first_host, cmd, sudo=False)\n        ehost2 = []\n        for h in ret['out']:\n            ehost2.append(h.split('.')[0])\n\n        ehost1 = []\n        for (eh, mpin) in zip(ehost, mpiprocs):\n            for k in range(int(mpin)):\n                ehost1.append(eh)\n\n        self.assertEqual(Counter(ehost1), Counter(ehost2),\n                         'PBS_NODEFILE match failed')\n\n    def match_vnode_status(self, vnode_list, state, ncpus=None, jobs=None,\n                           mem=None):\n        \"\"\"\n        Given a list of vnode names in 'vnode_list', check to make\n        sure each vnode's state, jobs string, resources_assigned.mem,\n        and resources_assigned.ncpus match the passed arguments.\n        This will throw an exception if a match is not found.\n        \"\"\"\n        if ncpus is None:\n            if state == 'free':\n                ncpus = [0 for x in range(len(vnode_list))]\n            else:\n                ncpus = [None for x in range(len(vnode_list))]\n        for (vn, cpus) in zip(vnode_list, ncpus):\n            dict_match = {'state': state}\n            if jobs is not None:\n                dict_match['jobs'] = jobs\n            if cpus is not None:\n                dict_match['resources_assigned.ncpus'] = cpus\n            if mem is not None:\n                dict_match['resources_assigned.mem'] = mem\n\n            self.server.expect(VNODE, dict_match, id=vn)\n\n    def create_res(self, res_list):\n        \"\"\"\n        creates custom resources\n        \"\"\"\n        for res in res_list:\n            self.server.manager(MGR_CMD_CREATE, RSC, res.a, id=res.res_name)\n\n    def config_nodes(self, node_conf):\n        \"\"\"\n        configures nodes as per the node_conf list parameter\n        \"\"\"\n        self.mom_list = []\n        self.vnode_dict = {}\n        # Now start setting 
up and creating the vnodes\n\n        for (mom, conf) in zip(self.momArr, node_conf):\n            if mom.has_vnode_defs():\n                mom.delete_vnode_defs()\n            start_time = time.time()\n            mom.create_vnodes(conf.a, conf.vnode_ct,\n                              delall=False,\n                              usenatvnode=conf.usenatvnode)\n            self.vnode_dict[mom.shortname] = {'mom': mom,\n                                              'res': conf}\n            vn_ct = conf.vnode_ct\n            if conf.usenatvnode:\n                vn_ct -= 1\n            elif vn_ct:\n                self.vnode_dict[mom.shortname]['res'] = None\n            vstat = deepcopy(conf.a)\n            vstat['state'] = 'free'\n            for n in range(vn_ct):\n                vnid = mom.shortname + '[' + str(n) + ']'\n                self.server.expect(NODE, id=vnid, attrib=vstat)\n                self.vnode_dict[vnid] = {'mom': mom,\n                                         'res': conf}\n            if not conf.usenatvnode:\n                if not vn_ct:\n                    nvstat = deepcopy(conf.a)\n                else:\n                    nvstat = {}\n                nvstat['state'] = 'free'\n                self.server.expect(NODE, id=mom.shortname, attrib=nvstat)\n            mom.log_match(\"copy hook-related file request received\",\n                          starttime=start_time)\n            self.mom_list.append(mom)\n\n    def setUp(self):\n\n        if len(self.moms) != 5:\n            self.skip_test(reason=\"need 5 mom hosts: \" +\n                           \"-p moms=<m1>:<m2>:<m3>:<m4>:<m5>\")\n\n        TestFunctional.setUp(self)\n        Job.dflt_attributes[ATTR_k] = 'oe'\n\n        self.server.cleanup_jobs()\n\n        self.momA = self.moms.values()[0]\n        self.momB = self.moms.values()[1]\n        self.momC = self.moms.values()[2]\n        self.momD = self.moms.values()[3]\n        self.momE = self.moms.values()[4]\n\n        
self.momArr = [self.momA, self.momB, self.momC, self.momD, self.momE]\n\n        self.hostA = self.momA.shortname\n        self.hostB = self.momB.shortname\n        self.hostC = self.momC.shortname\n        self.hostD = self.momD.shortname\n        self.hostE = self.momE.shortname\n\n        self.server.manager(MGR_CMD_DELETE, NODE, None, \"\")\n        if sys.platform in ('cygwin', 'win32'):\n            SLEEP_CMD = \"pbs-sleep\"\n        else:\n            SLEEP_CMD = \"/bin/sleep\"\n\n        self.rel_nodes_cmd = os.path.join(\n            self.server.pbs_conf['PBS_EXEC'], 'bin', 'pbs_release_nodes')\n\n    def tearDown(self):\n        self.momA.signal(\"-CONT\")\n        self.momB.signal(\"-CONT\")\n        self.momC.signal(\"-CONT\")\n        self.momD.signal(\"-CONT\")\n        self.momE.signal(\"-CONT\")\n        TestFunctional.tearDown(self)\n        # Delete managers and operators if added\n        attrib = ['operators', 'managers']\n        self.server.manager(MGR_CMD_UNSET, SERVER, attrib)\n\n    def flatten_node_res(self, nc_list):\n        \"\"\"\n        returns a list of flattened node resource dictionaries\n        \"\"\"\n        ret = []\n        for nc in nc_list:\n            ret.append(','.join(\"%s=%r\" % (key, val) for key, val in\n                                sorted(nc.a.items())))\n        return ret\n\n    def get_mom_vn_execvn(self, execvnode):\n        \"\"\"\n        gets the vnode list and corresponding mom list from\n        the execvnode parameter\n        \"\"\"\n        mlist = []\n        vlist = []\n        for chunk in execvnode:\n            if chunk.vchunk:\n                mlist.append(self.vnode_dict[chunk.vchunk[0].vnode]\n                             ['mom'].shortname)\n                for vch in chunk.vchunk:\n                    vlist.append(vch.vnode)\n            else:\n                mlist.append(self.vnode_dict[chunk.vnode]['mom'].shortname)\n                vlist.append(chunk.vnode)\n        return (mlist, 
vlist)\n\n    def common_tc_flow(self, tc):\n        \"\"\"\n        Defines a common test case flow; the configuration of a test case is\n        passed via the 'tc' argument\n        \"\"\"\n        # 1. submit the job with the select spec\n        a = {'Resource_List.select': tc.qsub_sel, ATTR_S: '/bin/bash'}\n        job = Job(TEST_USER, attrs=a)\n        if tc.use_script is True:\n            job.create_script(self.jobscript)\n        else:\n            job.set_sleep_time(1000)\n        jid = self.server.submit(job)\n\n        # 2. validate job state and attributes.\n        self.server.expect(JOB, tc.job_stat, id=jid)\n        js = self.server.status(JOB, [ATTR_execvnode, ATTR_exechost], jid)[0]\n\n        # 3. extract vnode list and mom list from execvnode\n        (mlist, vlist) = self.get_mom_vn_execvn(job.execvnode())\n        actual_res = self.flatten_node_res([self.vnode_dict[vn]['res'] for vn\n                                            in vlist])\n\n        # 4. compare actual resources allocated vs expected\n        self.assertEqual(Counter(tc.expected_res), Counter(actual_res),\n                         'Actual Vnode resources are not as expected')\n\n        # 5. validate hostnames in exechost correspond to vnodes in execvnode\n        self.assertEqual(Counter(mlist),\n                         Counter([list(x.keys())[0] for x in\n                                  job.exechost().hosts]),\n                         'exechost failed to correspond with execvnode')\n\n        # 6. check assigned vnodes are in job-busy state\n        self.match_vnode_status(vlist, 'job-busy',\n                                [self.vnode_dict[x]['res'].a[\n                                 'resources_available.ncpus'] for x in vlist])\n\n        # 7. validate PBS_NODEFILE\n        self.pbs_nodefile_match_exec_host(jid, mlist, tc.sched_sel)\n\n        # 8. 
(drum rolls) submit release node command\n        if tc.use_script is False:\n            cmd = [self.rel_nodes_cmd, '-j', jid, '-k', tc.keep_sel]\n            ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                         runas=tc.rel_user)\n            self.assertEqual(ret['rc'], 0)\n        else:\n            time.sleep(2)\n            self.server.sigjob(jid, 'INT')\n\n        # 9. verify job state and attributes are as expected\n        self.server.expect(JOB, tc.job_stat_after, id=jid)\n\n        js = self.server.status(JOB, [ATTR_execvnode, ATTR_exechost], jid)[0]\n\n        # 10. extract vnode list and mom list from execvnode\n        (mlist_new, vlist_new) = self.get_mom_vn_execvn(job.execvnode())\n        actual_res_after = self.flatten_node_res([self.vnode_dict[vn]['res']\n                                                  for vn in vlist_new])\n\n        # 11. compare actual resources allocated vs expected\n        self.assertEqual(Counter(tc.expected_res_after),\n                         Counter(actual_res_after),\n                         'Actual Vnode resources are not as expected')\n\n        # 12. validate hostnames in exechost correspond to vnodes in execvnode\n        self.assertEqual(Counter(mlist_new),\n                         Counter([list(x.keys())[0] for x in\n                                 job.exechost().hosts]),\n                         'exechost failed to correspond with execvnode')\n\n        if tc.skip_vnode_status_check is False:\n            # 13. check assigned vnodes are in job-busy state\n            self.match_vnode_status(vlist_new, 'job-busy',\n                                    [self.vnode_dict[x]['res'].a[\n                                     'resources_available.ncpus'] for x in\n                                     vlist_new])\n\n            # 14. compute freed vnodes resource list\n            freed_res = list(set(vlist) - set(vlist_new))\n\n            # 15. 
check freed vnodes are in 'free' state\n            self.match_vnode_status(freed_res, 'free')\n\n        # 16. validate PBS_NODEFILE again\n        self.pbs_nodefile_match_exec_host(jid, mlist_new, tc.sched_sel_after)\n\n    def test_basic_use_case_ncpus(self, rel_user=TEST_USER, use_script=False):\n        \"\"\"\n        submit job with below select string\n        'select=ncpus=1+2:ncpus=2+2:ncpus=3:mpiprocs=2'\n        release nodes except the MS and nodes matching below sub select string\n        'select=ncpus=2+ncpus=3:mpiprocs=2'\n        \"\"\"\n        n1 = n_conf({'resources_available.ncpus': '1'})\n        n2 = n_conf({'resources_available.ncpus': '2'})\n        n3 = n_conf({'resources_available.ncpus': '3'})\n\n        nc_list = [n1, n2, n2, n3, n3]\n        # 1. configure the cluster\n        self.config_nodes(nc_list)\n\n        args = {\n            'qsub_sel': 'ncpus=1+2:ncpus=2+2:ncpus=3:mpiprocs=2',\n            'keep_sel': 'select=ncpus=2+ncpus=3:mpiprocs=2',\n            'sched_sel': '1:ncpus=1+2:ncpus=2+2:ncpus=3:mpiprocs=2',\n            'expected_res': self.flatten_node_res(nc_list),\n            'rel_user': rel_user,\n            'qsub_sel_after': '1:ncpus=1+1:ncpus=2+1:ncpus=3:mpiprocs=2',\n            'sched_sel_after': '1:ncpus=1+1:ncpus=2+1:ncpus=3:mpiprocs=2',\n            'expected_res_after': self.flatten_node_res([n1, n2, n3])\n            }\n\n        job_stat = {'job_state': 'R',\n                    'substate': 42,\n                    'Resource_List.mpiprocs': 4,\n                    'Resource_List.ncpus': 11,\n                    'Resource_List.nodect': 5,\n                    'Resource_List.select': args['qsub_sel'],\n                    'schedselect': args['sched_sel']}\n\n        args['job_stat'] = job_stat\n\n        job_stat_after = {'job_state': 'R',\n                          'substate': 42,\n                          'Resource_List.mpiprocs': 2,\n                          'Resource_List.ncpus': 6,\n                      
    'Resource_List.nodect': 3,\n                          'Resource_List.select': args['qsub_sel_after'],\n                          'schedselect': args['sched_sel_after']}\n\n        if use_script is True:\n            args['use_script'] = True\n\n        args['job_stat_after'] = job_stat_after\n        tc = test_config(**args)\n        self.common_tc_flow(tc)\n\n    def test_basic_use_case_ncpus_as_root(self):\n        \"\"\"\n        submit job with below select string\n        'select=ncpus=1+2:ncpus=2+2:ncpus=3:mpiprocs=2'\n        as root release nodes except the MS and nodes matching below sub\n        select string 'select=ncpus=2+ncpus=3:mpiprocs=2'\n        \"\"\"\n        self.test_basic_use_case_ncpus(rel_user=ROOT_USER)\n\n    def test_basic_use_case_ncpus_using_script(self):\n        \"\"\"\n        Like the test_basic_use_case_ncpus test, except that instead of\n        calling pbs_release_nodes from the command line, it is executed\n        inside the job script of a running job. Same results.\n        \"\"\"\n        self.jobscript = \\\n            \"#!/bin/sh\\n\" + \\\n            \"trap 'pbs_release_nodes -k select=ncpus=2+ncpus=3:mpiprocs=2\" + \\\n            \";sleep 1000;exit 0' INT\\n\" + \\\n            \"sleep 1000\\n\" + \\\n            \"exit 0\"\n        self.test_basic_use_case_ncpus(use_script=True)\n\n    def test_with_a_custom_str_res(self, partial_res_list=False):\n        \"\"\"\n        submit job with select string containing a custom string resource\n        'select=ncpus=1+ncpus=2:model=abc+ncpus=2:model=def+ncpus=3:model=def+\n        ncpus=3:model=xyz'\n        release nodes except the MS and nodes matching below sub select string\n        'select=ncpus=2:model=def+ncpus=3:model=def'\n        \"\"\"\n        # 1. create a custom string resource\n        str_res = 'model'\n        model_a = 'abc'\n        model_b = 'def'\n        model_c = 'xyz'\n        self.create_res([new_res(str_res, self.res_s_h)])\n\n        # 2. 
add the custom resource to sched_config\n        self.scheduler.add_resource(str_res)\n\n        n1 = n_conf({'resources_available.ncpus': '1'})\n        n2_a = n_conf({'resources_available.ncpus': '2',\n                       'resources_available.'+str_res: model_a})\n        n2_b = n_conf({'resources_available.ncpus': '2',\n                       'resources_available.'+str_res: model_b})\n        n3_b = n_conf({'resources_available.ncpus': '3',\n                       'resources_available.'+str_res: model_b})\n        n3_c = n_conf({'resources_available.ncpus': '3',\n                       'resources_available.'+str_res: model_c})\n\n        nc_list = [n1, n2_a, n2_b, n3_b, n3_c]\n        # 3. configure the cluster\n        self.config_nodes(nc_list)\n\n        if partial_res_list is False:\n            keep_sel = ('select=ncpus=2:model='+model_b +\n                        '+ncpus=3:model='+model_b)\n        else:\n            keep_sel = 'select=2:model='+model_b\n\n        args = {\n            'qsub_sel': 'ncpus=1+ncpus=2:model='+model_a+'+ncpus=2:model=' +\n            model_b+'+ncpus=3:model='+model_b+'+ncpus=3:model='+model_c,\n            'keep_sel': keep_sel,\n            'sched_sel': '1:ncpus=1+1:ncpus=2:model='+model_a +\n            '+1:ncpus=2:model='+model_b+'+1:ncpus=3:model='+model_b +\n            '+1:ncpus=3:model='+model_c,\n            'expected_res': self.flatten_node_res(nc_list),\n            'rel_user': TEST_USER,\n            'qsub_sel_after': '1:ncpus=1+1:ncpus=2:model='+model_b +\n            '+1:ncpus=3:model='+model_b,\n            'sched_sel_after': '1:ncpus=1+1:ncpus=2:model='+model_b +\n            '+1:ncpus=3:model='+model_b,\n            'expected_res_after': self.flatten_node_res([n1, n2_b, n3_b])\n            }\n\n        job_stat = {'job_state': 'R',\n                    'substate': 42,\n                    'Resource_List.ncpus': 11,\n                    'Resource_List.nodect': 5,\n                    'Resource_List.select': 
args['qsub_sel'],\n                    'schedselect': args['sched_sel']}\n\n        args['job_stat'] = job_stat\n\n        job_stat_after = {'job_state': 'R',\n                          'substate': 42,\n                          'Resource_List.ncpus': 6,\n                          'Resource_List.nodect': 3,\n                          'Resource_List.select': args['qsub_sel_after'],\n                          'schedselect': args['sched_sel_after']}\n\n        args['job_stat_after'] = job_stat_after\n        tc = test_config(**args)\n        self.common_tc_flow(tc)\n\n    def test_with_a_custom_str_res_partial_list(self, partial_res_list=False):\n        \"\"\"\n        submit job with select string containing a custom string resource\n        'select=ncpus=1+ncpus=2:model=abc+ncpus=2:model=def+ncpus=3:model=def+\n        ncpus=3:model=xyz'\n        release nodes except the MS and nodes matching below sub select string\n        containing partial resource list\n        'select=2:model=def'\n        \"\"\"\n        self.test_with_a_custom_str_res(partial_res_list=True)\n\n    def test_with_a_custom_bool_res(self, partial_res_list=False):\n        \"\"\"\n        submit job with select string containing a custom boolean resource\n        'select=ncpus=1+ncpus=2:bigmem=true+ncpus=2+\n        ncpus=3:bigmem=true+ncpus=3'\n        release nodes except the MS and nodes matching below sub select string\n        'select=ncpus=2:bigmem=true+ncpus=3:bigmem=true'\n        \"\"\"\n        # 1. 
create a custom boolean resource\n        bool_res = 'bigmem'\n        self.create_res([new_res(bool_res, self.res_b_h)])\n\n        n1 = n_conf({'resources_available.ncpus': '1'})\n        n2_a = n_conf({'resources_available.ncpus': '2'})\n        n2_b = n_conf({'resources_available.ncpus': '2',\n                       'resources_available.'+bool_res: 'True'})\n        n3_b = n_conf({'resources_available.ncpus': '3',\n                       'resources_available.'+bool_res: 'True'})\n        n3_c = n_conf({'resources_available.ncpus': '3'})\n\n        nc_list = [n1, n2_a, n2_b, n3_b, n3_c]\n        # 2. configure the cluster\n        self.config_nodes(nc_list)\n\n        if partial_res_list is False:\n            keep_sel = ('select=ncpus=2:'+bool_res+'=true+ncpus=3:' +\n                        bool_res+'=true')\n        else:\n            keep_sel = 'select=2:'+bool_res+'=true'\n\n        args = {\n            'qsub_sel': 'ncpus=1+ncpus=2:'+bool_res+'=true+ncpus=2+ncpus=3:' +\n            bool_res+'=true+ncpus=3',\n            'keep_sel': keep_sel,\n            'sched_sel': '1:ncpus=1+1:ncpus=2:'+bool_res +\n            '=True+1:ncpus=2+1:ncpus=3:'+bool_res+'=True+1:ncpus=3',\n            'expected_res': self.flatten_node_res(nc_list),\n            'rel_user': TEST_USER,\n            'qsub_sel_after': '1:ncpus=1+1:ncpus=2:'+bool_res +\n            '=True+1:ncpus=3:'+bool_res+'=True',\n            'sched_sel_after': '1:ncpus=1+1:ncpus=2:'+bool_res +\n            '=True+1:ncpus=3:'+bool_res+'=True',\n            'expected_res_after': self.flatten_node_res([n1, n2_b, n3_b])\n            }\n\n        job_stat = {'job_state': 'R',\n                    'substate': 42,\n                    'Resource_List.ncpus': 11,\n                    'Resource_List.nodect': 5,\n                    'Resource_List.select': args['qsub_sel'],\n                    'schedselect': args['sched_sel']}\n\n        args['job_stat'] = job_stat\n\n        job_stat_after = {'job_state': 'R',\n       
                   'substate': 42,\n                          'Resource_List.ncpus': 6,\n                          'Resource_List.nodect': 3,\n                          'Resource_List.select': args['qsub_sel_after'],\n                          'schedselect': args['sched_sel_after']}\n\n        args['job_stat_after'] = job_stat_after\n        tc = test_config(**args)\n        self.common_tc_flow(tc)\n\n    def test_with_a_custom_bool_res_partial_list(self,\n                                                 partial_res_list=False):\n        \"\"\"\n        submit job with select string containing a boolean resource\n        'select=ncpus=1+ncpus=2:bigmem=true+ncpus=2+\n        ncpus=3:bigmem=true+ncpus=3'\n        release nodes except the MS and nodes matching below sub select string\n        containing partial resource list\n        'select=2:bigmem=true'\n        \"\"\"\n        self.test_with_a_custom_bool_res(partial_res_list=True)\n\n    def test_with_a_custom_long_res(self, partial_res_list=False):\n        \"\"\"\n        submit job with select string containing a custom long resource\n        'select=ncpus=1+ncpus=2:longres=7+ncpus=2:longres=9+\n        ncpus=3:longres=9+ncpus=3:longres=10'\n        release nodes except the MS and nodes matching below sub select string\n        'select=ncpus=2:longres=9+ncpus=3:longres=9'\n        \"\"\"\n        # 1. create a custom long resource\n        long_res = 'longres'\n        self.create_res([new_res(long_res, self.res_l_nh)])\n\n        # 2. 
add the custom resource to sched_config\n        self.scheduler.add_resource(long_res)\n\n        n1 = n_conf({'resources_available.ncpus': '1'})\n        n2_a = n_conf({'resources_available.ncpus': '2',\n                       'resources_available.'+long_res: '7'})\n        n2_b = n_conf({'resources_available.ncpus': '2',\n                       'resources_available.'+long_res: '9'})\n        n3_b = n_conf({'resources_available.ncpus': '3',\n                       'resources_available.'+long_res: '9'})\n        n3_c = n_conf({'resources_available.ncpus': '3',\n                       'resources_available.'+long_res: '10'})\n\n        nc_list = [n1, n2_a, n2_b, n3_b, n3_c]\n        # 3. configure the cluster\n        self.config_nodes(nc_list)\n\n        if partial_res_list is False:\n            keep_sel = ('select=ncpus=2:'+long_res+'=9+ncpus=3:' +\n                        long_res+'=9')\n        else:\n            keep_sel = 'select=2:'+long_res+'=9'\n\n        args = {\n            'qsub_sel': 'ncpus=1+ncpus=2:'+long_res+'=7+ncpus=2:'+long_res +\n            '=9+ncpus=3:'+long_res+'=9+ncpus=3:'+long_res+'=10',\n            'keep_sel': keep_sel,\n            'sched_sel': '1:ncpus=1+1:ncpus=2:'+long_res+'=7+1:ncpus=2:' +\n            long_res+'=9+1:ncpus=3:'+long_res+'=9+1:ncpus=3:'+long_res+'=10',\n            'expected_res': self.flatten_node_res(nc_list),\n            'rel_user': TEST_USER,\n            'qsub_sel_after': '1:ncpus=1+1:ncpus=2:'+long_res +\n            '=9+1:ncpus=3:'+long_res+'=9',\n            'sched_sel_after': '1:ncpus=1+1:ncpus=2:'+long_res +\n            '=9+1:ncpus=3:'+long_res+'=9',\n            'expected_res_after': self.flatten_node_res([n1, n2_b, n3_b])\n            }\n\n        job_stat = {'job_state': 'R',\n                    'substate': 42,\n                    'Resource_List.longres': 35,\n                    'Resource_List.ncpus': 11,\n                    'Resource_List.nodect': 5,\n                    'Resource_List.select': 
args['qsub_sel'],\n                    'schedselect': args['sched_sel']}\n\n        args['job_stat'] = job_stat\n\n        job_stat_after = {'job_state': 'R',\n                          'substate': 42,\n                          'Resource_List.longres': 18,\n                          'Resource_List.ncpus': 6,\n                          'Resource_List.nodect': 3,\n                          'Resource_List.select': args['qsub_sel_after'],\n                          'schedselect': args['sched_sel_after']}\n\n        args['job_stat_after'] = job_stat_after\n        tc = test_config(**args)\n        self.common_tc_flow(tc)\n\n    def test_with_a_custom_long_partial_list(self, partial_res_list=False):\n        \"\"\"\n        submit job with select string containing a custom long resource\n        'select=ncpus=1+ncpus=2:longres=7+ncpus=2:longres=9+\n        ncpus=3:longres=9+ncpus=3:longres=10'\n        release nodes except the MS and nodes matching below sub select string\n        containing partial resource list\n        'select=2:longres=9'\n        \"\"\"\n        self.test_with_a_custom_long_res(partial_res_list=True)\n\n    def test_with_a_custom_size_res(self, partial_res_list=False):\n        \"\"\"\n        submit job with select string containing a custom size resource\n        'select=ncpus=1+ncpus=2:sizres=7k+ncpus=2:sizres=9k+\n        ncpus=3:sizres=9k+ncpus=3:sizres=10k'\n        release nodes except the MS and nodes matching below sub select string\n        'select=ncpus=2:sizres=9k+ncpus=3:sizres=9k'\n        \"\"\"\n        # 1. create a custom size resource\n        size_res = 'sizres'\n        self.create_res([new_res(size_res, self.res_sz_nh)])\n\n        # 2. 
add the custom resource to sched_config\n        self.scheduler.add_resource(size_res)\n\n        n1 = n_conf({'resources_available.ncpus': '1'})\n        n2_a = n_conf({'resources_available.ncpus': '2',\n                       'resources_available.'+size_res: '7kb'})\n        n2_b = n_conf({'resources_available.ncpus': '2',\n                       'resources_available.'+size_res: '9kb'})\n        n3_b = n_conf({'resources_available.ncpus': '3',\n                       'resources_available.'+size_res: '9kb'})\n        n3_c = n_conf({'resources_available.ncpus': '3',\n                       'resources_available.'+size_res: '10kb'})\n\n        nc_list = [n1, n2_a, n2_b, n3_b, n3_c]\n        # 3. configure the cluster\n        self.config_nodes(nc_list)\n\n        if partial_res_list is False:\n            keep_sel = ('select=ncpus=2:'+size_res+'=9k+ncpus=3:' +\n                        size_res+'=9k')\n        else:\n            keep_sel = 'select=2:'+size_res+'=9k'\n\n        args = {\n            'qsub_sel': 'ncpus=1+ncpus=2:'+size_res+'=7k+ncpus=2:'+size_res +\n            '=9k+ncpus=3:'+size_res+'=9k+ncpus=3:'+size_res+'=10k',\n            'keep_sel': keep_sel,\n            'sched_sel': '1:ncpus=1+1:ncpus=2:'+size_res+'=7kb+1:ncpus=2:' +\n            size_res+'=9kb+1:ncpus=3:'+size_res+'=9kb+1:ncpus=3:'+size_res +\n            '=10kb',\n            'expected_res': self.flatten_node_res(nc_list),\n            'rel_user': TEST_USER,\n            'qsub_sel_after': '1:ncpus=1+1:ncpus=2:'+size_res +\n            '=9kb+1:ncpus=3:'+size_res+'=9kb',\n            'sched_sel_after': '1:ncpus=1+1:ncpus=2:'+size_res +\n            '=9kb+1:ncpus=3:'+size_res+'=9kb',\n            'expected_res_after': self.flatten_node_res([n1, n2_b, n3_b])\n            }\n\n        job_stat = {'job_state': 'R',\n                    'substate': 42,\n                    'Resource_List.sizres': '35kb',\n                    'Resource_List.ncpus': 11,\n                    'Resource_List.nodect': 
5,\n                    'Resource_List.select': args['qsub_sel'],\n                    'schedselect': args['sched_sel']}\n\n        args['job_stat'] = job_stat\n\n        job_stat_after = {'job_state': 'R',\n                          'substate': 42,\n                          'Resource_List.sizres': '18kb',\n                          'Resource_List.ncpus': 6,\n                          'Resource_List.nodect': 3,\n                          'Resource_List.select': args['qsub_sel_after'],\n                          'schedselect': args['sched_sel_after']}\n\n        args['job_stat_after'] = job_stat_after\n        tc = test_config(**args)\n        self.common_tc_flow(tc)\n\n    def test_with_a_custom_size_partial_list(self, partial_res_list=False):\n        \"\"\"\n        submit job with select string containing a custom size resource\n        'select=ncpus=1+ncpus=2:sizres=7k+ncpus=2:sizres=9k+\n        ncpus=3:sizres=9k+ncpus=3:sizres=10k'\n        release nodes except the MS and nodes matching below sub select string\n        containing partial resource list\n        'select=2:sizres=9k'\n        \"\"\"\n        self.test_with_a_custom_size_res(partial_res_list=True)\n\n    def test_with_a_custom_float_res(self, partial_res_list=False):\n        \"\"\"\n        submit job with select string containing a custom float resource\n        'select=ncpus=1+ncpus=2:fltres=7.1+ncpus=2:fltres=9.1+\n        ncpus=3:fltres=9.1+ncpus=3:fltres=10.1'\n        release nodes except the MS and nodes matching below sub select string\n        'select=ncpus=2:fltres=9.1+ncpus=3:fltres=9.1'\n        \"\"\"\n        # 1. create a custom float resource\n        float_res = 'fltres'\n        self.create_res([new_res(float_res, self.res_f_nh)])\n\n        # 2. 
add the custom resource to sched_config\n        self.scheduler.add_resource(float_res)\n\n        n1 = n_conf({'resources_available.ncpus': '1'})\n        n2_a = n_conf({'resources_available.ncpus': '2',\n                       'resources_available.'+float_res: '7.1'})\n        n2_b = n_conf({'resources_available.ncpus': '2',\n                       'resources_available.'+float_res: '9.1'})\n        n3_b = n_conf({'resources_available.ncpus': '3',\n                       'resources_available.'+float_res: '9.1'})\n        n3_c = n_conf({'resources_available.ncpus': '3',\n                       'resources_available.'+float_res: '10.1'})\n\n        nc_list = [n1, n2_a, n2_b, n3_b, n3_c]\n        # 3. configure the cluster\n        self.config_nodes(nc_list)\n\n        if partial_res_list is False:\n            keep_sel = ('select=ncpus=2:'+float_res+'=9.1+ncpus=3:' +\n                        float_res+'=9.1')\n        else:\n            keep_sel = 'select=2:'+float_res+'=9.1'\n\n        args = {\n            'qsub_sel': 'ncpus=1+ncpus=2:'+float_res+'=7.1+ncpus=2:' +\n            float_res+'=9.1+ncpus=3:'+float_res+'=9.1+ncpus=3:' +\n            float_res+'=10.1',\n            'keep_sel': keep_sel,\n            'sched_sel': '1:ncpus=1+1:ncpus=2:'+float_res+'=7.1+1:ncpus=2:' +\n            float_res+'=9.1+1:ncpus=3:'+float_res+'=9.1+1:ncpus=3:'+float_res +\n            '=10.1',\n            'expected_res': self.flatten_node_res(nc_list),\n            'rel_user': TEST_USER,\n            'qsub_sel_after': '1:ncpus=1+1:ncpus=2:'+float_res +\n            '=9.1+1:ncpus=3:'+float_res+'=9.1',\n            'sched_sel_after': '1:ncpus=1+1:ncpus=2:'+float_res +\n            '=9.1+1:ncpus=3:'+float_res+'=9.1',\n            'expected_res_after': self.flatten_node_res([n1, n2_b, n3_b])\n            }\n\n        job_stat = {'job_state': 'R',\n                    'substate': 42,\n                    'Resource_List.fltres': '35.4',\n                    'Resource_List.ncpus': 11,\n     
               'Resource_List.nodect': 5,\n                    'Resource_List.select': args['qsub_sel'],\n                    'schedselect': args['sched_sel']}\n\n        args['job_stat'] = job_stat\n\n        job_stat_after = {'job_state': 'R',\n                          'substate': 42,\n                          'Resource_List.fltres': '18.2',\n                          'Resource_List.ncpus': 6,\n                          'Resource_List.nodect': 3,\n                          'Resource_List.select': args['qsub_sel_after'],\n                          'schedselect': args['sched_sel_after']}\n\n        args['job_stat_after'] = job_stat_after\n        tc = test_config(**args)\n        self.common_tc_flow(tc)\n\n    def test_with_a_custom_float_partial_list(self, partial_res_list=False):\n        \"\"\"\n        submit job with select string containing a custom float resource\n        'select=ncpus=1+ncpus=2:fltres=7.1+ncpus=2:fltres=9.1+\n        ncpus=3:fltres=9.1+ncpus=3:fltres=10.1'\n        release nodes except the MS and nodes matching below sub select string\n        containing partial resource list\n        'select=2:fltres=9.1'\n        \"\"\"\n        self.test_with_a_custom_float_res(partial_res_list=True)\n\n    def test_with_mixed_custom_res(self, partial_res_list=False):\n        \"\"\"\n        submit job with select string containing a mix of all types of\n        custom resources\n        'select=ncpus=1+ncpus=2:model=abc:longres=7:sizres=7k:fltres=7.1+\n        ncpus=2:model=def:bigmem=true:longres=9:sizres=9k:fltres=9.1+ncpus=3:\n        model=def:bigmem=true:longres=9:sizres=9k:fltres=9.1+ncpus=3:\n        model=xyz:longres=10:sizres=10k:fltres=10.1'\n        release nodes except the MS and nodes matching below sub select string\n        'select=ncpus=2:model=def:bigmem=true:longres=9:sizres=9k:fltres=9.1+\n        ncpus=3:model=def:bigmem=true:longres=9:sizres=9k:fltres=9.1'\n        \"\"\"\n        # 1. 
create custom resources of each type\n        str_res = 'model'\n        model_a = 'abc'\n        model_b = 'def'\n        model_c = 'xyz'\n        bool_res = 'bigmem'\n        long_res = 'longres'\n        size_res = 'sizres'\n        float_res = 'fltres'\n\n        self.create_res(\n            [\n                new_res(str_res, self.res_s_h),\n                new_res(bool_res, self.res_b_h),\n                new_res(long_res, self.res_l_nh),\n                new_res(size_res, self.res_sz_nh),\n                new_res(float_res, self.res_f_nh)\n            ])\n\n        # 2. add the custom resources to sched_config\n        self.scheduler.add_resource(str_res)\n        self.scheduler.add_resource(long_res)\n        self.scheduler.add_resource(size_res)\n        self.scheduler.add_resource(float_res)\n\n        n1 = n_conf({'resources_available.ncpus': '1'})\n        n2_a = n_conf({'resources_available.ncpus': '2',\n                       'resources_available.'+str_res: model_a,\n                       'resources_available.'+long_res: '7',\n                       'resources_available.'+size_res: '7kb',\n                       'resources_available.'+float_res: '7.1'})\n        n2_b = n_conf({'resources_available.ncpus': '2',\n                       'resources_available.'+str_res: model_b,\n                       'resources_available.'+bool_res: 'True',\n                       'resources_available.'+long_res: '9',\n                       'resources_available.'+size_res: '9kb',\n                       'resources_available.'+float_res: '9.1'})\n        n3_b = n_conf({'resources_available.ncpus': '3',\n                       'resources_available.'+str_res: model_b,\n                       'resources_available.'+bool_res: 'True',\n                       'resources_available.'+long_res: '9',\n                       'resources_available.'+size_res: '9kb',\n                       'resources_available.'+float_res: '9.1'})\n        n3_c = n_conf({'resources_available.ncpus': 
'3',\n                       'resources_available.'+str_res: model_c,\n                       'resources_available.'+long_res: '10',\n                       'resources_available.'+size_res: '10kb',\n                       'resources_available.'+float_res: '10.1'})\n\n        nc_list = [n1, n2_a, n2_b, n3_b, n3_c]\n        # 3. configure the cluster\n        self.config_nodes(nc_list)\n\n        if partial_res_list is False:\n            keep_sel = ('select=ncpus=2:model='+model_b+':' +\n                        bool_res+'=true:'+long_res+'=9:'+size_res+'=9k:' +\n                        float_res+'=9.1+ncpus=3:model='+model_b+':'+bool_res +\n                        '=true:'+long_res+'=9:'+size_res+'=9k:'+float_res +\n                        '=9.1')\n        else:\n            keep_sel = 'select=2:'+bool_res+'=true'\n\n        args = {\n            'qsub_sel': 'ncpus=1+ncpus=2:model='+model_a+':'+long_res+'=7:' +\n            size_res+'=7k:'+float_res+'=7.1+ncpus=2:model='+model_b+':' +\n            bool_res+'=true:'+long_res+'=9:'+size_res+'=9k:'+float_res +\n            '=9.1+ncpus=3:model='+model_b+':'+bool_res+'=true:'+long_res +\n            '=9:'+size_res+'=9k:'+float_res+'=9.1+ncpus=3:model='+model_c +\n            ':'+long_res+'=10:'+size_res+'=10k:'+float_res+'=10.1',\n            'keep_sel': keep_sel,\n            'sched_sel': '1:ncpus=1+1:ncpus=2:model='+model_a+':'+long_res +\n            '=7:'+size_res+'=7kb:'+float_res+'=7.1+1:ncpus=2:model='+model_b +\n            ':'+bool_res+'=True:'+long_res+'=9:'+size_res+'=9kb:'+float_res +\n            '=9.1+1:ncpus=3:model='+model_b+':'+bool_res+'=True:'+long_res +\n            '=9:'+size_res+'=9kb:'+float_res+'=9.1+1:ncpus=3:model='+model_c +\n            ':'+long_res+'=10:'+size_res+'=10kb:'+float_res+'=10.1',\n            'expected_res': self.flatten_node_res(nc_list),\n            'rel_user': TEST_USER,\n            'qsub_sel_after': '1:ncpus=1+1:ncpus=2:model=' +\n            
model_b+':'+bool_res+'=True:'+long_res+'=9:'+size_res +\n            '=9kb:'+float_res +\n            '=9.1+1:ncpus=3:model='+model_b+':'+bool_res+'=True:'+long_res +\n            '=9:'+size_res+'=9kb:'+float_res+'=9.1',\n            'sched_sel_after': '1:ncpus=1+1:ncpus=2:model=' +\n            model_b+':'+bool_res+'=True:'+long_res+'=9:'+size_res+'=9kb:' +\n            float_res+'=9.1+1:ncpus=3:model='+model_b+':'+bool_res+'=True:' +\n            long_res+'=9:'+size_res+'=9kb:'+float_res+'=9.1',\n            'expected_res_after': self.flatten_node_res([n1, n2_b, n3_b])\n            }\n\n        job_stat = {'job_state': 'R',\n                    'substate': 42,\n                    'Resource_List.longres': 35,\n                    'Resource_List.fltres': '35.4',\n                    'Resource_List.sizres': '35kb',\n                    'Resource_List.ncpus': 11,\n                    'Resource_List.nodect': 5,\n                    'Resource_List.select': args['qsub_sel'],\n                    'schedselect': args['sched_sel']}\n\n        args['job_stat'] = job_stat\n\n        job_stat_after = {'job_state': 'R',\n                          'substate': 42,\n                          'Resource_List.longres': 18,\n                          'Resource_List.sizres': '18kb',\n                          'Resource_List.fltres': '18.2',\n                          'Resource_List.ncpus': 6,\n                          'Resource_List.nodect': 3,\n                          'Resource_List.select': args['qsub_sel_after'],\n                          'schedselect': args['sched_sel_after']}\n\n        args['job_stat_after'] = job_stat_after\n        tc = test_config(**args)\n        self.common_tc_flow(tc)\n\n    def test_with_mixed_custom_res_partial_list(self, partial_res_list=False):\n        \"\"\"\n        submit job with select string containing a mix of all types of\n        custom resources\n        'select=ncpus=1+ncpus=2:model=abc:longres=7:sizres=7k:fltres=7.1+\n        
ncpus=2:model=def:bigmem=true:longres=9:sizres=9k:fltres=9.1+ncpus=3:\n        model=def:bigmem=true:longres=9:sizres=9k:fltres=9.1+ncpus=3:\n        model=xyz:longres=10:sizres=10k:fltres=10.1'\n        release nodes except the MS and nodes matching below sub select string\n        containing partial resource list\n        'select=2:bigmem=true'\n        \"\"\"\n        self.test_with_mixed_custom_res(partial_res_list=True)\n\n    def test_schunk_use_case(self, release_partial_schunk=False):\n        \"\"\"\n        submit job with below select string\n        'ncpus=1+2:ncpus=6+2:ncpus=9:mpiprocs=2'\n        cluster is configured such that we get 4 superchunks\n        release nodes except the MS and nodes matching below sub select string\n        'select=ncpus=6+ncpus=9:mpiprocs=2'\n        so that whole super chunks are released or kept\n        \"\"\"\n        n1 = n_conf({'resources_available.ncpus': '1'})\n        n2 = n_conf({'resources_available.ncpus': '2'}, 3)\n        n3 = n_conf({'resources_available.ncpus': '3'}, 3)\n\n        nc_list = [n1, n2, n2, n3, n3]\n        # 1. 
configure the cluster\n        self.config_nodes(nc_list)\n\n        if release_partial_schunk is False:\n            keep_sel = 'select=ncpus=6+ncpus=9:mpiprocs=2'\n            expected_res_list = [n1, n2, n2, n2, n3, n3, n3]\n        else:\n            keep_sel = 'select=ncpus=4+ncpus=6:mpiprocs=2'\n            expected_res_list = [n1, n2, n2, n3, n3]\n\n        args = {\n            'qsub_sel': 'ncpus=1+2:ncpus=6+2:ncpus=9:mpiprocs=2',\n            'keep_sel': keep_sel,\n            'sched_sel': '1:ncpus=1+2:ncpus=6+2:ncpus=9:mpiprocs=2',\n            'expected_res': self.flatten_node_res(\n                [n1, n2, n2, n2, n2, n2, n2, n3, n3, n3, n3, n3, n3]),\n            'rel_user': TEST_USER,\n            'qsub_sel_after': '1:ncpus=1+1:ncpus=6+1:ncpus=9:mpiprocs=2',\n            'sched_sel_after': '1:ncpus=1+1:ncpus=6+1:ncpus=9:mpiprocs=2',\n            'expected_res_after': self.flatten_node_res(expected_res_list)\n            }\n\n        job_stat = {'job_state': 'R',\n                    'substate': 42,\n                    'Resource_List.mpiprocs': 4,\n                    'Resource_List.ncpus': 31,\n                    'Resource_List.nodect': 5,\n                    'Resource_List.select': args['qsub_sel'],\n                    'schedselect': args['sched_sel']}\n\n        args['job_stat'] = job_stat\n\n        job_stat_after = {'job_state': 'R',\n                          'substate': 42,\n                          'Resource_List.mpiprocs': 2,\n                          'Resource_List.ncpus': 16,\n                          'Resource_List.nodect': 3,\n                          'Resource_List.select': args['qsub_sel_after'],\n                          'schedselect': args['sched_sel_after']}\n\n        args['job_stat_after'] = job_stat_after\n        if release_partial_schunk is True:\n            args['skip_vnode_status_check'] = True\n\n        tc = test_config(**args)\n        self.common_tc_flow(tc)\n\n    def test_schunk_partial_release_use_case(self):\n 
       \"\"\"\n        submit job with below select string\n        'ncpus=1+2:ncpus=6+2:ncpus=9:mpiprocs=2'\n        cluster is configured such that we get 4 superchunks\n        release nodes except the MS and nodes matching below sub select string\n        'select=ncpus=4+ncpus=6:mpiprocs=2'\n        so that some vnodes of super chunks are released or kept\n        \"\"\"\n        self.test_schunk_use_case(release_partial_schunk=True)\n\n    def test_release_nodes_error(self):\n        \"\"\"\n        Tests erroneous cases:\n        1. pbs_release_nodes -j <job-id> -a -k <select>\n            \"pbs_release_nodes: -a and -k options cannot be used together\"\n        2. pbs_release_nodes -j <job-id> -k <select> <node1>...\n            \"pbs_release_nodes: cannot supply node list with -k option\"\n        3. pbs_release_nodes -j <job-id> -k place=scatter\n            \"pbs_release_nodes: only a \"select=\" string is valid in -k option\"\n        4. pbs_release_nodes -j <job-id> -k <select containing undefined res>\n            \"pbs_release_nodes: Unknown resource: <undefined res name>\"\n        5. pbs_release_nodes -j <job-id> -k <unsatisfying/non-sub select>\n            \"pbs_release_nodes: Server returned error 15010 for job\"\n        6. pbs_release_nodes -j <job-id> -k <high node count>\n            \"pbs_release_nodes: Server returned error 15010 for job\"\n        \"\"\"\n\n        n1 = n_conf({'resources_available.ncpus': '1'})\n        n2 = n_conf({'resources_available.ncpus': '2'})\n        n3 = n_conf({'resources_available.ncpus': '3'})\n\n        nc_list = [n1, n2, n2, n3, n3]\n        self.config_nodes(nc_list)\n        qsub_sel = 'ncpus=1+2:ncpus=2+2:ncpus=3:mpiprocs=2'\n        keep_sel = 'select=ncpus=2+ncpus=3:mpiprocs=2'\n        job = Job(TEST_USER, attrs={'Resource_List.select': qsub_sel})\n        job.set_sleep_time(1000)\n        jid = self.server.submit(job)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        # 1. 
\"pbs_release_nodes: -a and -k options cannot be used together\"\n        cmd = [self.rel_nodes_cmd, '-j', jid, '-a', '-k', keep_sel]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     runas=TEST_USER)\n        self.assertNotEqual(ret['rc'], 0)\n        self.assertTrue(ret['err'][0].startswith(\n            'pbs_release_nodes: -a and -k options cannot be used together'))\n\n        # 2. \"pbs_release_nodes: cannot supply node list with -k option\"\n        cmd = [self.rel_nodes_cmd, '-j', jid, '-k', keep_sel,\n               list(self.vnode_dict.keys())[2]]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     runas=TEST_USER)\n        self.assertNotEqual(ret['rc'], 0)\n        self.assertTrue(ret['err'][0].startswith(\n            'pbs_release_nodes: cannot supply node list with -k option'))\n\n        # 3. \"pbs_release_nodes: only a \"select=\" string is valid in -k option\"\n        cmd = [self.rel_nodes_cmd, '-j', jid, '-k', 'place=scatter']\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     runas=TEST_USER)\n        self.assertNotEqual(ret['rc'], 0)\n        self.assertTrue(ret['err'][0].startswith(\n            'pbs_release_nodes: only a \"select=\" string is valid in -k option'\n            ))\n\n        # 4. \"pbs_release_nodes: Unknown resource: <undefined res name>\"\n        cmd = [self.rel_nodes_cmd, '-j', jid, '-k',\n               'select=ncpus=2:unkownres=3']\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     runas=TEST_USER)\n        self.assertNotEqual(ret['rc'], 0)\n        self.assertTrue(ret['err'][0].startswith(\n            'pbs_release_nodes: Unknown resource: unkownres'))\n\n        # 5. 
\"pbs_release_nodes: Server returned error 15010 for job\"\n        cmd = [self.rel_nodes_cmd, '-j', jid, '-k', 'select=ncpus=4']\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     runas=TEST_USER)\n        self.assertNotEqual(ret['rc'], 0)\n        self.assertTrue(ret['err'][0].startswith(\n            'pbs_release_nodes: Server returned error 15010 for job'))\n\n        # 6. \"pbs_release_nodes: Server returned error 15010 for job\"\n        cmd = [self.rel_nodes_cmd, '-j', jid, '-k', '5']\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     runas=TEST_USER)\n        self.assertNotEqual(ret['rc'], 0)\n        self.assertTrue(ret['err'][0].startswith(\n            'pbs_release_nodes: Server returned error 15010 for job'))\n\n    def test_node_count(self, rel_user=TEST_USER, use_script=False):\n        \"\"\"\n        submit job with below select string\n        'select=ncpus=1+2:ncpus=2+2:ncpus=3:mpiprocs=2'\n        release nodes except the MS and 2 nodes\n        \"\"\"\n        n1 = n_conf({'resources_available.ncpus': '1'})\n        n2 = n_conf({'resources_available.ncpus': '2'})\n        n3 = n_conf({'resources_available.ncpus': '3'})\n\n        nc_list = [n1, n2, n2, n3, n3]\n        # 1. 
configure the cluster\n        self.config_nodes(nc_list)\n\n        args = {\n            'qsub_sel': 'ncpus=1+2:ncpus=2+2:ncpus=3:mpiprocs=2',\n            'keep_sel': '2',\n            'sched_sel': '1:ncpus=1+2:ncpus=2+2:ncpus=3:mpiprocs=2',\n            'expected_res': self.flatten_node_res(nc_list),\n            'rel_user': rel_user,\n            'qsub_sel_after': '1:ncpus=1+2:ncpus=2',\n            'sched_sel_after': '1:ncpus=1+2:ncpus=2',\n            'expected_res_after': self.flatten_node_res([n1, n2, n2])\n            }\n\n        job_stat = {'job_state': 'R',\n                    'substate': 42,\n                    'Resource_List.mpiprocs': 4,\n                    'Resource_List.ncpus': 11,\n                    'Resource_List.nodect': 5,\n                    'Resource_List.select': args['qsub_sel'],\n                    'schedselect': args['sched_sel']}\n\n        args['job_stat'] = job_stat\n\n        job_stat_after = {'job_state': 'R',\n                          'substate': 42,\n                          'Resource_List.ncpus': 5,\n                          'Resource_List.nodect': 3,\n                          'Resource_List.select': args['qsub_sel_after'],\n                          'schedselect': args['sched_sel_after']}\n\n        if use_script is True:\n            args['use_script'] = True\n\n        args['job_stat_after'] = job_stat_after\n        tc = test_config(**args)\n        self.common_tc_flow(tc)\n\n    def test_node_count_as_root(self):\n        \"\"\"\n        submit job with below select string\n        'select=ncpus=1+2:ncpus=2+2:ncpus=3:mpiprocs=2'\n        as root release nodes except the MS and 2 nodes\n        \"\"\"\n        self.test_node_count(rel_user=ROOT_USER)\n\n    def test_node_count_using_script(self):\n        \"\"\"\n        Like test_node_count test except instead of calling\n        pbs_release_nodes from a command line, it is executed\n        inside the job script of a running job. 
Same results.\n        \"\"\"\n        self.jobscript = \\\n            \"#!/bin/sh\\n\" + \\\n            \"trap 'pbs_release_nodes -k 2\" + \\\n            \";sleep 1000;exit 0' INT\\n\" + \\\n            \"sleep 1000\\n\" + \\\n            \"exit 0\"\n        self.test_node_count(use_script=True)\n\n    def test_node_count_with_mixed_custom_res(self):\n        \"\"\"\n        submit job with select string containing a mix of all types of\n        custom resources\n        'select=ncpus=1+ncpus=2:model=abc:longres=7:sizres=7k:fltres=7.1+\n        ncpus=2:model=def:bigmem=true:longres=9:sizres=9k:fltres=9.1+ncpus=3:\n        model=def:bigmem=true:longres=9:sizres=9k:fltres=9.1+ncpus=3:\n        model=xyz:longres=10:sizres=10k:fltres=10.1'\n        release nodes except the MS and 2 nodes\n        \"\"\"\n        # 1. create custom resources of each type\n        str_res = 'model'\n        model_a = 'abc'\n        model_b = 'def'\n        model_c = 'xyz'\n        bool_res = 'bigmem'\n        long_res = 'longres'\n        size_res = 'sizres'\n        float_res = 'fltres'\n\n        self.create_res(\n            [\n                new_res(str_res, self.res_s_h),\n                new_res(bool_res, self.res_b_h),\n                new_res(long_res, self.res_l_nh),\n                new_res(size_res, self.res_sz_nh),\n                new_res(float_res, self.res_f_nh)\n            ])\n\n        # 2. 
add the custom resources to sched_config\n        self.scheduler.add_resource(str_res)\n        self.scheduler.add_resource(long_res)\n        self.scheduler.add_resource(size_res)\n        self.scheduler.add_resource(float_res)\n\n        n1 = n_conf({'resources_available.ncpus': '1'})\n        n2_a = n_conf({'resources_available.ncpus': '2',\n                       'resources_available.'+str_res: model_a,\n                       'resources_available.'+long_res: '7',\n                       'resources_available.'+size_res: '7kb',\n                       'resources_available.'+float_res: '7.1'})\n        n2_b = n_conf({'resources_available.ncpus': '2',\n                       'resources_available.'+str_res: model_b,\n                       'resources_available.'+bool_res: 'True',\n                       'resources_available.'+long_res: '9',\n                       'resources_available.'+size_res: '9kb',\n                       'resources_available.'+float_res: '9.1'})\n        n3_b = n_conf({'resources_available.ncpus': '3',\n                       'resources_available.'+str_res: model_b,\n                       'resources_available.'+bool_res: 'True',\n                       'resources_available.'+long_res: '9',\n                       'resources_available.'+size_res: '9kb',\n                       'resources_available.'+float_res: '9.1'})\n        n3_c = n_conf({'resources_available.ncpus': '3',\n                       'resources_available.'+str_res: model_c,\n                       'resources_available.'+long_res: '10',\n                       'resources_available.'+size_res: '10kb',\n                       'resources_available.'+float_res: '10.1'})\n\n        nc_list = [n1, n2_a, n2_b, n3_b, n3_c]\n        # 3. 
configure the cluster\n        self.config_nodes(nc_list)\n\n        args = {\n            'qsub_sel': 'ncpus=1+ncpus=2:model='+model_a+':'+long_res+'=7:' +\n            size_res+'=7k:'+float_res+'=7.1+ncpus=2:model='+model_b+':' +\n            bool_res+'=true:'+long_res+'=9:'+size_res+'=9k:'+float_res +\n            '=9.1+ncpus=3:model='+model_b+':'+bool_res+'=true:'+long_res +\n            '=9:'+size_res+'=9k:'+float_res+'=9.1+ncpus=3:model='+model_c +\n            ':'+long_res+'=10:'+size_res+'=10k:'+float_res+'=10.1',\n            'keep_sel': '2',\n            'sched_sel': '1:ncpus=1+1:ncpus=2:model='+model_a+':'+long_res +\n            '=7:'+size_res+'=7kb:'+float_res+'=7.1+1:ncpus=2:model='+model_b +\n            ':'+bool_res+'=True:'+long_res+'=9:'+size_res+'=9kb:'+float_res +\n            '=9.1+1:ncpus=3:model='+model_b+':'+bool_res+'=True:'+long_res +\n            '=9:'+size_res+'=9kb:'+float_res+'=9.1+1:ncpus=3:model='+model_c +\n            ':'+long_res+'=10:'+size_res+'=10kb:'+float_res+'=10.1',\n            'expected_res': self.flatten_node_res(nc_list),\n            'rel_user': TEST_USER,\n            'qsub_sel_after': '1:ncpus=1+1:ncpus=2:model=' +\n            model_a+':'+long_res+'=7:'+size_res+'=7kb:'+float_res +\n            '=7.1+1:ncpus=3:model='+model_c+':'+long_res+'=10:'+size_res +\n            '=10kb:'+float_res+'=10.1',\n            'sched_sel_after': '1:ncpus=1+1:ncpus=2:model=' +\n            model_a+':'+long_res+'=7:'+size_res+'=7kb:'+float_res +\n            '=7.1+1:ncpus=3:model='+model_c+':'+long_res+'=10:'+size_res +\n            '=10kb:'+float_res+'=10.1',\n            'expected_res_after': self.flatten_node_res([n1, n2_a, n3_c])\n            }\n\n        job_stat = {'job_state': 'R',\n                    'substate': 42,\n                    'Resource_List.longres': 35,\n                    'Resource_List.fltres': '35.4',\n                    'Resource_List.sizres': '35kb',\n                    'Resource_List.ncpus': 11,\n          
          'Resource_List.nodect': 5,\n                    'Resource_List.select': args['qsub_sel'],\n                    'schedselect': args['sched_sel']}\n\n        args['job_stat'] = job_stat\n\n        job_stat_after = {'job_state': 'R',\n                          'substate': 42,\n                          'Resource_List.longres': 17,\n                          'Resource_List.sizres': '17kb',\n                          'Resource_List.fltres': '17.2',\n                          'Resource_List.ncpus': 6,\n                          'Resource_List.nodect': 3,\n                          'Resource_List.select': args['qsub_sel_after'],\n                          'schedselect': args['sched_sel_after']}\n\n        args['job_stat_after'] = job_stat_after\n        tc = test_config(**args)\n        self.common_tc_flow(tc)\n\n    def test_node_count_schunk_use_case(self):\n        \"\"\"\n        submit job with below select string\n        'ncpus=1+2:ncpus=6+2:ncpus=9:mpiprocs=2'\n        cluster is configured such that we get 4 superchunks\n        release nodes except the MS and 2 nodes\n        \"\"\"\n        n1 = n_conf({'resources_available.ncpus': '1'})\n        n2 = n_conf({'resources_available.ncpus': '2'}, 3)\n        n3 = n_conf({'resources_available.ncpus': '3'}, 3)\n\n        nc_list = [n1, n2, n2, n3, n3]\n        # 1. 
configure the cluster\n        self.config_nodes(nc_list)\n\n        args = {\n            'qsub_sel': 'ncpus=1+2:ncpus=6+2:ncpus=9:mpiprocs=2',\n            'keep_sel': '2',\n            'sched_sel': '1:ncpus=1+2:ncpus=6+2:ncpus=9:mpiprocs=2',\n            'expected_res': self.flatten_node_res(\n                [n1, n2, n2, n2, n2, n2, n2, n3, n3, n3, n3, n3, n3]),\n            'rel_user': TEST_USER,\n            'qsub_sel_after': '1:ncpus=1+2:ncpus=6',\n            'sched_sel_after': '1:ncpus=1+2:ncpus=6',\n            'expected_res_after': self.flatten_node_res(\n                [n1, n2, n2, n2, n2, n2, n2])\n            }\n\n        job_stat = {'job_state': 'R',\n                    'substate': 42,\n                    'Resource_List.mpiprocs': 4,\n                    'Resource_List.ncpus': 31,\n                    'Resource_List.nodect': 5,\n                    'Resource_List.select': args['qsub_sel'],\n                    'schedselect': args['sched_sel']}\n\n        args['job_stat'] = job_stat\n\n        job_stat_after = {'job_state': 'R',\n                          'substate': 42,\n                          'Resource_List.ncpus': 13,\n                          'Resource_List.nodect': 3,\n                          'Resource_List.select': args['qsub_sel_after'],\n                          'schedselect': args['sched_sel_after']}\n\n        args['job_stat_after'] = job_stat_after\n\n        tc = test_config(**args)\n        self.common_tc_flow(tc)\n"
  },
  {
    "path": "test/tests/functional/pbs_node_sleep_state.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestNodeSleepState(TestFunctional):\n\n    \"\"\"\n    This test suite contains regression tests for node sleep state\n    \"\"\"\n\n    def test_node_set_sleep_state(self):\n        \"\"\"\n        Tests node setting state to sleep\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, NODE, {'state': 'sleep'},\n                            id=self.mom.shortname)\n        self.server.expect(NODE, {'state': 'sleep'})\n        # submit a job and it should remain in Q state\n        j = Job(self.du.get_current_user())\n        self.jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=self.jid)\n\n    def test_node_state_sleep_to_free_manual(self):\n        \"\"\"\n        Tests setting node state to free from sleep manually\n        \"\"\"\n        self.test_node_set_sleep_state()\n        self.server.manager(MGR_CMD_SET, NODE, {'state': 'free'},\n                            id=self.mom.shortname)\n        self.server.expect(NODE, {'state': 'free'})\n        self.server.expect(JOB, {'job_state': 'R'}, id=self.jid)\n\n    def test_node_state_sleep_to_free_on_node_restart(self):\n        \"\"\"\n        Tests setting node state to free from sleep on node restart\n        \"\"\"\n        self.test_node_set_sleep_state()\n        self.mom.stop()\n        self.server.expect(NODE, 
{'state': 'down,sleep'})\n        self.mom.start()\n        self.server.expect(NODE, {'state': 'free'})\n        self.server.expect(JOB, {'job_state': 'R'}, id=self.jid)\n\n    def test_node_state_offline_and_sleep_restart(self):\n        \"\"\"\n        Tests setting node state to offline and sleep on restart node\n        will still remain in offline\n        \"\"\"\n        self.test_node_set_sleep_state()\n        self.server.manager(MGR_CMD_SET, NODE, {'state': (INCR, 'offline')},\n                            id=self.mom.shortname)\n        self.mom.stop()\n        self.server.expect(NODE, {'state': 'down,offline,sleep'})\n        self.mom.start()\n        self.server.expect(NODE, {'state': 'offline'})\n"
  },
  {
    "path": "test/tests/functional/pbs_nodes_json.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\nimport json\n\n\nclass TestPbsnodes_json(TestFunctional):\n    \"\"\"\n    Tests the json output of pbsnodes\n    \"\"\"\n\n    def test_check_no_escape_json(self):\n        \"\"\"\n        Test if the comment with no special characters is valid json\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {'comment': '\"hiha\"'}, id=self.mom.shortname)\n        cmd = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                           'bin', 'pbsnodes') + ' -av -Fjson'\n        n_out = self.du.run_cmd(self.server.hostname, cmd=cmd)['out']\n\n        try:\n            json.loads(\"\\n\".join(n_out))\n        except ValueError:\n            self.assertFalse(True, \"Json failed to load\")\n\n    def test_check_newline_escape_json(self):\n        \"\"\"\n        Test if the comment with newline is valid json\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {'comment': '\"hi\\nha\"'}, id=self.mom.shortname)\n        cmd = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                           'bin', 'pbsnodes') + ' -av -Fjson'\n        n_out = self.du.run_cmd(self.server.hostname, cmd=cmd)['out']\n\n        try:\n            json.loads(\"\\n\".join(n_out))\n        except ValueError:\n            self.assertFalse(True, \"Json failed to 
load\")\n\n    def test_check_tab_escape_json(self):\n        \"\"\"\n        Test if the comment with tab is valid json\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {'comment': '\"hi\\tha\"'}, id=self.mom.shortname)\n        cmd = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                           'bin', 'pbsnodes') + ' -av -Fjson'\n        n_out = self.du.run_cmd(self.server.hostname, cmd=cmd)['out']\n\n        try:\n            json.loads(\"\\n\".join(n_out))\n        except ValueError:\n            self.assertFalse(True, \"Json failed to load\")\n\n    def test_check_quotes_escape_json(self):\n        \"\"\"\n        Test if the comment with quotes is valid json\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {'comment': '\\'hi\\\"ha\\''}, id=self.mom.shortname)\n        cmd = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                           'bin', 'pbsnodes') + ' -av -Fjson'\n        n_out = self.du.run_cmd(self.server.hostname, cmd=cmd)['out']\n\n        try:\n            json.loads(\"\\n\".join(n_out))\n        except ValueError:\n            self.assertFalse(True, \"Json failed to load\")\n\n    def test_check_reverse_solidus_escape_json(self):\n        \"\"\"\n        Test if the comment with reverse solidus is valid json\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {'comment': '\"hi\\\\ha\"'}, id=self.mom.shortname)\n        cmd = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                           'bin', 'pbsnodes') + ' -av -Fjson'\n        n_out = self.du.run_cmd(self.server.hostname, cmd=cmd)['out']\n\n        try:\n            json.loads(\"\\n\".join(n_out))\n        except ValueError:\n            self.assertFalse(True, \"Json failed to load\")\n\n    def test_empty_comment_json(self):\n        \"\"\"\n        Test an empty node comment (only a space) under json.\n        \"\"\"\n\n   
     self.server.manager(MGR_CMD_SET, NODE,\n                            {'comment': ' '}, id=self.mom.shortname)\n        cmd = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                           'bin', 'pbsnodes') + ' -av -Fjson'\n        ret = self.du.run_cmd(self.server.hostname, cmd=cmd)\n        n_out = \"\\n\".join(ret['out'])\n        try:\n            json.loads(n_out)\n        except ValueError:\n            self.logger.info(n_out)\n            self.assertFalse(True, \"Json failed to load\")\n"
  },
  {
    "path": "test/tests/functional/pbs_nodes_queues.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestNodesQueues(TestFunctional):\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        self.server.add_resource('foo', 'string', 'h')\n\n        a = {'resources_available.ncpus': 4}\n        self.mom.create_vnodes(\n            a, 8, attrfunc=self.cust_attr)\n        self.vn = ['%s[%d]' % (self.mom.shortname, i) for i in range(4)]\n        a = {'queue_type': 'execution', 'started': 't', 'enabled': 't'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='workq2')\n\n        self.server.manager(MGR_CMD_SET, NODE, {\n                            'queue': 'workq2'}, id=self.vn)\n        self.server.manager(MGR_CMD_SET, SERVER, {'node_group_key': 'foo'})\n        self.server.manager(MGR_CMD_SET, SERVER, {'node_group_enable': 't'})\n\n    def cust_attr(self, name, totnodes, numnode, attrib):\n        a = {}\n        if numnode % 2 == 0:\n            a['resources_available.foo'] = 'A'\n        else:\n            a['resources_available.foo'] = 'B'\n        return {**attrib, **a}\n\n    def test_node_queue_assoc_ignored(self):\n        \"\"\"\n        Issue with node grouping and nodes associated with queues.  
If\n        node_grouping is set at the server level, node/queue association is\n        not honored\n        \"\"\"\n\n        a = {'Resource_List.select': '2:ncpus=1',\n             'Resource_List.place': 'vscatter', 'queue': 'workq2'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, 'exec_vnode', id=jid, op=SET)\n        nodes = j.get_vnodes(j.exec_vnode)\n        self.assertTrue((nodes[0] == self.vn[0] and\n                         nodes[1] == self.vn[2]) or\n                        (nodes[0] == self.vn[1] and\n                         nodes[1] == self.vn[3]))\n"
  },
  {
    "path": "test/tests/functional/pbs_nonprint_characters.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\nimport json\nimport os\n\n\nclass TestNonprintingCharacters(TestFunctional):\n    \"\"\"\n    Test to check passing non-printable environment variables\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        a = {'job_history_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        # Mapping of ASCII non-printable character to escaped representation\n        self.npcat = {\n            \"\\x00\": \"^@\", \"\\x01\": \"^A\", \"\\x02\": \"^B\", \"\\x03\": \"^C\",\n            \"\\x04\": \"^D\", \"\\x05\": \"^E\", \"\\x06\": \"^F\", \"\\x07\": \"^G\",\n            \"\\x08\": \"^H\", \"\\x09\": \"^I\", \"\\x0A\": \"^J\", \"\\x0B\": \"^K\",\n            \"\\x0C\": \"^L\", \"\\x0D\": \"^M\", \"\\x0E\": \"^N\", \"\\x0F\": \"^O\",\n            \"\\x10\": \"^P\", \"\\x11\": \"^Q\", \"\\x12\": \"^R\", \"\\x13\": \"^S\",\n            \"\\x14\": \"^T\", \"\\x15\": \"^U\", \"\\x16\": \"^V\", \"\\x17\": \"^W\",\n            \"\\x18\": \"^X\", \"\\x19\": \"^Y\", \"\\x1A\": \"^Z\", \"\\x1B\": \"^[\",\n            \"\\x1C\": \"^\\\\\", \"\\x1D\": \"^]\", \"\\x1E\": \"^^\", \"\\x1F\": \"^_\"\n        }\n\n        # Exclude these:\n        # NULL(\\0) causes qsub error, LINE FEED(\\n) causes error in expect()\n        self.npch_exclude = ['\\x00', '\\x0A']\n\n        # Characters displayed as is: 
TAB (\\t), LINE FEED(\\n)\n        self.npch_asis = ['\\x09', '\\x0A']\n\n        # Terminal control characters used in the tests\n        self.bold = \"\\u001b[1m\"\n        self.red = \"\\u001b[31m\"\n        self.reset = \"\\u001b[0m\"\n        # Mapping of terminal control character to escaped representation\n        self.bold_esc = \"^[[1m\"\n        self.red_esc = \"^[[31m\"\n        self.reset_esc = \"^[[0m\"\n\n        self.ATTR_V = 'Full_Variable_List'\n        api_to_cli.setdefault(self.ATTR_V, 'V')\n\n        # Check if ShellShock fix for exporting shell function in bash exists\n        # on this system and what \"BASH_FUNC_\" format to use\n        foo_scr = \"\"\"#!/bin/bash\nfoo() { a=B; echo $a; }\nexport -f foo\nenv | grep foo\nunset -f foo\nexit 0\n\"\"\"\n        self.script = \"\"\"#PBS -V\nenv | grep -A2 BASH_FUNC_foo\nfoo\nsleep 5\n\"\"\"\n        fn = self.du.create_temp_file(body=foo_scr)\n        self.du.chmod(path=fn, mode=0o755)\n        foo_msg = 'Failed to run foo_scr'\n        ret = self.du.run_cmd(self.server.hostname, cmd=fn)\n        self.assertEqual(ret['rc'], 0, foo_msg)\n        msg = 'BASH_FUNC_'\n        self.n = 'foo'\n        for m in ret['out']:\n            if m.find(msg) != -1:\n                self.n = m.split('=')[0]\n                continue\n\n        # Client commands full path\n        self.qstat_cmd = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                      'bin', 'qstat')\n        self.qmgr_cmd = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                     'bin', 'qmgr')\n        self.pbsnodes_cmd = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                         'bin', 'pbsnodes')\n\n    def create_and_submit_job(self, user=None, attribs=None, content=None,\n                              content_interactive=None, preserve_env=False,\n                              set_env={}):\n        \"\"\"\n        Create the job object and submit it to 
the server as 'user', with\n        attribute list 'attribs', script 'content' or 'content_interactive',\n        and 'preserve_env' honored if the job is interactive.\n        \"\"\"\n        if attribs is None:\n            use_attribs = {}\n        else:\n            use_attribs = attribs\n        retjob = Job(username=user, attrs=use_attribs)\n\n        if content is not None:\n            retjob.create_script(body=content)\n        elif content_interactive is not None:\n            retjob.interactive_script = content_interactive\n            retjob.preserve_env = preserve_env\n        return self.server.submit(retjob, env=set_env)\n\n    def check_jobout(self, chk_var, jid, job_outfile, host=None):\n        \"\"\"\n        Check if unescaped variable is in job output\n        \"\"\"\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid, offset=1)\n        ret = self.du.cat(hostname=host, sudo=True, filename=job_outfile,\n                          option=\"-v\")\n        j_output = \"\"\n        if len(ret['out']) > 0:\n            if '\\t' in chk_var:\n                j_output = '\\n'.join(ret['out']).replace('\\t\\t', '\\t')\n            else:\n                j_output = '\\n'.join(ret['out']).replace('\\t', '')\n        self.assertIn(chk_var, j_output)\n        self.logger.info('job output has: %s' % chk_var)\n\n    def check_qstatout(self, chk_var, jid):\n        \"\"\"\n        Check if escaped variable is in qstat -f output\n        \"\"\"\n        cmd = [self.qstat_cmd, '-xf', jid]\n        ret = self.du.run_cmd(self.server.hostname, cmd=cmd)\n        if '\\t' in chk_var:\n            job_str = ''.join(ret['out']).replace('\\t\\t', '\\t')\n        else:\n            job_str = ''.join(ret['out']).replace('\\t', '')\n        self.assertIn(chk_var, job_str)\n        self.logger.info('qstat -xf output has: %s' % chk_var)\n\n    def test_nonprint_character_qsubv(self):\n        \"\"\"\n        Using each of the non-printable ASCII characters, except NULL\n        (x00), 
and LINE FEED (x0A) which will cause a qsub error,\n        submit a job script with\n        qsub -v \"var1='A,B,<non-printable character>,C,D'\"\n        and check that the value with the character is passed correctly\n        \"\"\"\n        uhost = PbsUser.get_user(TEST_USER).host\n        for ch in self.npcat:\n            self.logger.info('##### non-printable char: %s #####' % repr(ch))\n            if ch in self.npch_exclude:\n                self.logger.info('##### excluded char: %s' % repr(ch))\n                continue\n            # variable to check if with escaped nonprinting character or not\n            chk_var = r'var1=A\\,B\\,%s\\,C\\,D' % self.npcat[ch]\n            if ch in self.npch_asis:\n                chk_var = r'var1=A\\,B\\,%s\\,C\\,D' % ch\n            if uhost is None or self.du.is_localhost(uhost):\n                a = {ATTR_v: \"var1=\\'A,B,%s,C,D\\'\" % ch}\n            else:\n                a = {ATTR_v: r\"var1=\\'A\\,B\\,%s\\,C\\,D\\'\" % ch}\n            script = ['sleep 10']\n            script += ['env | grep var1']\n            jid = self.create_and_submit_job(attribs=a, content=script)\n            # Check if qstat -f output contains the escaped character\n            self.check_qstatout(chk_var, jid)\n            # Check if job output contains the character as is\n            qstat = self.server.status(JOB, ATTR_o, id=jid)\n            job_outfile = qstat[0][ATTR_o].split(':')[1]\n            job_host = qstat[0][ATTR_o].split(':')[0]\n            if ch == '\\x09':\n                chk_var = \"var1=A,B,%s,C,D\" % ch\n            else:\n                chk_var = \"var1=A,B,%s,C,D\" % self.npcat[ch]\n            self.check_jobout(chk_var, jid, job_outfile, job_host)\n\n    def test_nonprint_character_directive(self):\n        \"\"\"\n        Using each of the non-printable ASCII characters, except NULL\n        (hex 00) and LINE FEED (hex 0A) which will cause a qsub error,\n        submit a job script with PBS directive\n        
-v \"var1='A,B,<non-printable character>,C,D'\"\n        and check that the value with the character is passed correctly\n        \"\"\"\n        for ch in self.npcat:\n            self.logger.info('##### non-printable char: %s #####' % repr(ch))\n            if ch in self.npch_exclude:\n                self.logger.info('##### excluded char: %s' % repr(ch))\n                continue\n            # variable to check if with escaped nonprinting character or not\n            chk_var = r'var1=A\\,B\\,%s\\,C\\,D' % self.npcat[ch]\n            if ch in self.npch_asis:\n                chk_var = r'var1=A\\,B\\,%s\\,C\\,D' % ch\n            script = ['#PBS -v \"var1=\\'A,B,%s,C,D\\'\"' % ch]\n            script += ['sleep 1']\n            script += ['env | grep var1']\n            jid = self.create_and_submit_job(content=script)\n            # Check if qstat -f output contains the escaped character\n            self.check_qstatout(chk_var, jid)\n            # Check if job output contains the character\n            qstat = self.server.status(JOB, ATTR_o, id=jid)\n            job_outfile = qstat[0][ATTR_o].split(':')[1]\n            job_host = qstat[0][ATTR_o].split(':')[0]\n            if ch == '\\x09':\n                chk_var = \"var1=A,B,%s,C,D\" % ch\n            else:\n                chk_var = \"var1=A,B,%s,C,D\" % self.npcat[ch]\n            self.check_jobout(chk_var, jid, job_outfile, job_host)\n\n    def test_nonprint_character_qsubV(self):\n        \"\"\"\n        Using each of the non-printable ASCII characters, except NULL\n        (hex 00) and LINE FEED (hex 0A) which will cause a qsub error,\n        test exporting the character in environment variable\n        when -V is passed through command line.\n        \"\"\"\n        for ch in self.npcat:\n            self.logger.info('##### non-printable char: %s #####' % repr(ch))\n            if ch in self.npch_exclude:\n                self.logger.info('##### excluded char: %s' % repr(ch))\n                
continue\n            # variable to check if with escaped nonprinting character or not\n            chk_var = 'NONPRINT_VAR=X%sY' % self.npcat[ch]\n            if ch in self.npch_asis:\n                chk_var = 'NONPRINT_VAR=X%sY' % ch\n            script = ['sleep 5']\n            script += ['env | grep NONPRINT_VAR']\n            a = {self.ATTR_V: None, ATTR_S: '/bin/bash'}\n            j = Job(TEST_USER, attrs=a)\n            j.create_script(body=script)\n            xval = \"X%sY\" % ch\n            env_to_set = {\"NONPRINT_VAR\": xval}\n            jid = self.server.submit(j, env=env_to_set)\n            # Check if qstat -f output contains the escaped character\n            self.check_qstatout(chk_var, jid)\n            # Check if job output contains the character\n            qstat = self.server.status(JOB, ATTR_o, id=jid)\n            job_outfile = qstat[0][ATTR_o].split(':')[1]\n            job_host = qstat[0][ATTR_o].split(':')[0]\n            if ch == '\\x09':\n                chk_var = 'NONPRINT_VAR=X%sY' % ch\n            else:\n                chk_var = 'NONPRINT_VAR=X%sY' % self.npcat[ch]\n            self.check_jobout(chk_var, jid, job_outfile, job_host)\n\n    def test_nonprint_character_default_qsubV(self):\n        \"\"\"\n        Using each of the non-printable ASCII characters, except NULL\n        (hex 00) and LINE FEED (hex 0A) which will cause a qsub error,\n        test exporting the character in environment variable\n        when -V is in the server's default_qsub_arguments.\n        \"\"\"\n        user = PbsUser.get_user(TEST_USER)\n        host = user.host\n        for ch in self.npcat:\n            self.logger.info('##### non-printable char: %s #####' % repr(ch))\n            if ch in self.npch_exclude:\n                self.logger.info('##### excluded char: %s' % repr(ch))\n                continue\n            # variable to check if with escaped nonprinting character or not\n            chk_var = 'NONPRINT_VAR=X%sY' % 
self.npcat[ch]\n            if ch in self.npch_asis:\n                chk_var = 'NONPRINT_VAR=X%sY' % ch\n            os.environ[\"NONPRINT_VAR\"] = \"X%sY\" % ch\n            self.server.manager(MGR_CMD_SET, SERVER,\n                                {'default_qsub_arguments': '-V'})\n            script = ['sleep 5']\n            script += ['env | grep NONPRINT_VAR']\n            j = Job(TEST_USER, attrs={ATTR_S: '/bin/bash'})\n            j.create_script(body=script)\n            xval = \"X%sY\" % ch\n            env_to_set = {\"NONPRINT_VAR\": xval}\n            jid = self.server.submit(j, env=env_to_set)\n            # Check if qstat -f output contains the escaped character\n            self.check_qstatout(chk_var, jid)\n            # Check if job output contains the character\n            qstat = self.server.status(JOB, ATTR_o, id=jid)\n            job_outfile = qstat[0][ATTR_o].split(':')[1]\n            job_host = qstat[0][ATTR_o].split(':')[0]\n            if ch == '\\x09':\n                chk_var = 'NONPRINT_VAR=X%sY' % ch\n            else:\n                chk_var = 'NONPRINT_VAR=X%sY' % self.npcat[ch]\n            self.check_jobout(chk_var, jid, job_outfile, job_host)\n\n    @checkMomBashVersion\n    def test_nonprint_shell_function(self):\n        \"\"\"\n        Export a shell function with a non-printable character and check\n        that the function is passed correctly.\n        Using each of the non-printable ASCII characters, except\n        NULL (hex 00), SOH (hex 01), TAB (hex 09), LINE FEED (hex 0A)\n        which will cause problems in the exported shell function.\n        \"\"\"\n        self.npch_exclude += ['\\x01', '\\x09']\n\n        for ch in self.npcat:\n            self.logger.info('##### non-printable char: %s #####' % repr(ch))\n            if ch in self.npch_exclude:\n                self.logger.info('##### excluded char: %s' % repr(ch))\n                continue\n            func = '{ a=%s; echo XX${a}YY}; }' % ch\n            # 
Adjustments in bash due to ShellShock malware fix in various OS\n            env_vals = {\"foo()\": func}\n            chk_var = (self.n + '=() {  a=%s; echo XX${a}YY}}' %\n                       self.npcat[ch])\n            if ch in self.npch_asis:\n                chk_var = self.n + '=() {  a=%s; echo XX${a}YY}}' % ch\n            out = (self.n + '=() {  a=%s;\\n echo XX${a}YY}\\n}\\nXX%sYY}' %\n                   (self.npcat[ch], self.npcat[ch]))\n            jid = self.create_and_submit_job(content=self.script,\n                                             set_env=env_vals)\n            # Check if qstat -f output contains the escaped character\n            self.check_qstatout(chk_var, jid)\n            # Check if job output contains the character\n            qstat = self.server.status(JOB, ATTR_o, id=jid)\n            job_outfile = qstat[0][ATTR_o].split(':')[1]\n            job_host = qstat[0][ATTR_o].split(':')[0]\n            self.check_jobout(out, jid, job_outfile, job_host)\n\n    def test_terminal_control_in_qsubv(self):\n        \"\"\"\n        Using terminal control in environment variable\n        submit a job script with qsub\n        -v \"var1='X<terminal control>Y'\"\n        and check that the value with the character is passed correctly\n        \"\"\"\n        chk_var = \"var1=X%s%sY\" % (self.bold_esc, self.red_esc)\n        a = {ATTR_v: \"var1=\\'X%s%sY\\'\" % (self.bold, self.red)}\n        script = ['env | grep var1']\n        script += ['sleep 5']\n        jid = self.create_and_submit_job(attribs=a, content=script)\n        # Check if qstat -f output contains the escaped character\n        self.check_qstatout(chk_var, jid)\n        # Check if job output contains the character\n        qstat = self.server.status(JOB, ATTR_o, id=jid)\n        job_outfile = qstat[0][ATTR_o].split(':')[1]\n        job_host = qstat[0][ATTR_o].split(':')[0]\n        match = \"var1=X%s%sY\" % (self.bold_esc, self.red_esc)\n        self.check_jobout(match, jid, 
job_outfile, job_host)\n        # Reset the terminal\n        self.logger.info('%sReset terminal' % self.reset)\n\n    def test_terminal_control_in_directive(self):\n        \"\"\"\n        Using terminal control in environment variable\n        submit a job script with PBS directive\n        -v \"var1='X<terminal control>Y'\"\n        and check that the value with the character is passed correctly\n        \"\"\"\n        chk_var = \"var1=X%s%sY\" % (self.bold_esc, self.red_esc)\n        script = ['#PBS -v \"var1=\\'X%s%sY\\'\"' % (self.bold, self.red)]\n        script += ['env | grep var1']\n        script += ['sleep 5']\n        jid = self.create_and_submit_job(content=script)\n        # Check if qstat -f output contains the escaped character\n        self.check_qstatout(chk_var, jid)\n        # Check if job output contains the character\n        qstat = self.server.status(JOB, ATTR_o, id=jid)\n        job_outfile = qstat[0][ATTR_o].split(':')[1]\n        job_host = qstat[0][ATTR_o].split(':')[0]\n        match = \"var1=X%s%sY\" % (self.bold_esc, self.red_esc)\n        self.check_jobout(match, jid, job_outfile, job_host)\n        # Reset the terminal\n        self.logger.info('%sReset terminal' % self.reset)\n\n    def test_terminal_control_qsubV(self):\n        \"\"\"\n        Test exporting terminal control in environment variable\n        when -V is passed through command line.\n        \"\"\"\n        exp = \"X\" + self.bold + self.red + \"Y\"\n        chk_var = 'VAR_IN_TERM=X%s%sY' % (self.bold_esc, self.red_esc)\n        job_script = ['sleep 5']\n        job_script += ['env | grep VAR_IN_TERM']\n        a = {self.ATTR_V: None, ATTR_S: '/bin/bash'}\n        j = Job(TEST_USER, attrs=a)\n        file_n = j.create_script(body=job_script)\n        env_vals = {\"VAR_IN_TERM\": exp}\n        jid = self.server.submit(j, env=env_vals)\n        # Check if qstat -f output contains the escaped character\n        self.check_qstatout(chk_var, jid)\n        # Check if 
job output contains the character\n        qstat = self.server.status(JOB, ATTR_o, id=jid)\n        job_outfile = qstat[0][ATTR_o].split(':')[1]\n        job_host = qstat[0][ATTR_o].split(':')[0]\n        chk_var = 'VAR_IN_TERM=X%s%sY' % (self.bold_esc, self.red_esc)\n        self.check_jobout(chk_var, jid, job_outfile, job_host)\n        # Reset the terminal\n        self.logger.info('%sReset terminal' % self.reset)\n\n    def test_terminal_control_default_qsubV(self):\n        \"\"\"\n        Test exporting terminal control in environment variable\n        when -V is in the server's default_qsub_arguments.\n        \"\"\"\n        chk_var = 'VAR_IN_TERM=X%s%sY' % (self.bold_esc, self.red_esc)\n        exp = \"X%s%sY\" % (self.bold, self.red)\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'default_qsub_arguments': '-V'})\n        script = ['sleep 5']\n        script += ['env | grep VAR_IN_TERM']\n        env_to_set = {\"VAR_IN_TERM\": exp}\n        j = Job(TEST_USER, attrs={ATTR_S: '/bin/bash'})\n        j.create_script(body=script)\n        jid = self.server.submit(j, env=env_to_set)\n        # Check if qstat -f output contains the escaped character\n        self.check_qstatout(chk_var, jid)\n        # Check if job output contains the character\n        qstat = self.server.status(JOB, ATTR_o, id=jid)\n        job_outfile = qstat[0][ATTR_o].split(':')[1]\n        job_host = qstat[0][ATTR_o].split(':')[0]\n        chk_var = 'VAR_IN_TERM=X%s%sY' % (self.bold_esc, self.red_esc)\n        self.check_jobout(chk_var, jid, job_outfile, job_host)\n        # Reset the terminal\n        self.logger.info('%sReset terminal' % self.reset)\n\n    @checkMomBashVersion\n    def test_terminal_control_shell_function(self):\n        \"\"\"\n        Export a shell function with terminal control\n        characters and check that the function is passed correctly\n        \"\"\"\n        func = '{ a=$(%s; %s); echo XX${a}YY; }' % (self.bold, self.red)\n  
      # Adjustments in bash due to ShellShock malware fix in various OS\n        env_vals = {\"foo()\": func}\n        chk_var = self.n + '=() {  a=$(%s; %s); echo XX${a}YY}' % (\n            self.bold_esc, self.red_esc)\n        out = self.n + '=() {  a=$(%s; %s);\\n echo XX${a}YY\\n}\\nXXYY' % (\n            self.bold_esc, self.red_esc)\n        jid = self.create_and_submit_job(content=self.script, set_env=env_vals)\n        # Check if qstat -f output contains the escaped character\n        self.check_qstatout(chk_var, jid)\n        # Check if job output contains the character\n        qstat = self.server.status(JOB, ATTR_o, id=jid)\n        job_outfile = qstat[0][ATTR_o].split(':')[1]\n        job_host = qstat[0][ATTR_o].split(':')[0]\n        self.check_jobout(out, jid, job_outfile, job_host)\n        # Reset the terminal\n        self.logger.info('%sReset terminal' % self.reset)\n\n    def find_in_tracejob(self, msg, jid):\n        \"\"\"\n        Find msg in tracejob output of jid.\n        \"\"\"\n        rc = 0\n        tracejob_cmd = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                    'bin', 'tracejob')\n        cmd = [tracejob_cmd, jid]\n        ret = self.du.run_cmd(self.server.hostname, cmd=cmd, sudo=True)\n        self.assertEqual(ret['rc'], 0)\n        for m in ret['out']:\n            if m.find(msg) != -1:\n                self.logger.info('Found \\\"%s\\\" in tracejob output' % msg)\n                rc += 1\n                continue\n        return rc\n\n    def find_in_printjob(self, msg, jid):\n        \"\"\"\n        Find msg in printjob output of jid.\n        \"\"\"\n        self.server.expect(JOB, {ATTR_state: 'R'}, offset=1, id=jid)\n        rc = 0\n        ret = self.mom.printjob(jid)\n        self.assertEqual(ret['rc'], 0)\n        for m in ret['out']:\n            if m.find(msg) != -1:\n                self.logger.info('Found \\\"%s\\\" in printjob output' % msg)\n                rc += 1\n                
continue\n        return rc\n\n    def test_nonprint_character_in_qsubA(self):\n        \"\"\"\n        Using each of the non-printable ASCII characters, except\n        NULL (hex 00) and LINE FEED (hex 0A),\n        submit a job script with Account_Name containing the character\n        qsub -A \"J<non-printable character>K\"\n        and check that the value with the character is passed correctly\n        \"\"\"\n        uhost = PbsUser.get_user(TEST_USER).host\n        for ch in self.npcat:\n            self.logger.info('##### non-printable char: %s #####' % repr(ch))\n            if ch in self.npch_exclude:\n                self.logger.info('##### excluded char: %s' % repr(ch))\n                continue\n            if uhost is None or self.du.is_localhost(uhost):\n                a = {ATTR_A: \"J%sK\" % ch}\n            else:\n                a = {ATTR_A: \"'J%sK'\" % ch}\n            j = Job(TEST_USER, a)\n            jid = self.server.submit(j)\n            job_stat = self.server.status(JOB, id=jid)\n            acct_name = job_stat[0]['Account_Name']\n            self.logger.info(\"job Account_Name: %s\" % repr(acct_name))\n            exp_name = 'J%sK' % self.npcat[ch]\n            if ch in self.npch_asis:\n                exp_name = 'J%sK' % ch\n            self.logger.info(\"exp Account_Name: %s\" % repr(exp_name))\n            self.assertEqual(acct_name, exp_name)\n            # Check printjob output\n            msg = 'Account_Name = %s' % exp_name\n            rc = self.find_in_printjob(msg, jid)\n            self.assertEqual(rc, 1)\n            # Check tracejob output\n            msg = 'account=\"%s\"' % exp_name\n            rc = self.find_in_tracejob(msg, jid)\n            self.assertGreaterEqual(rc, 1)\n            self.server.delete(jid)\n            self.server.expect(JOB, 'queue', op=UNSET, id=jid, offset=1)\n\n    def test_terminal_control_in_qsubA(self):\n        \"\"\"\n        Using terminal control characters,\n        submit 
a job script with Account_Name\n        qsub -A \"J<terminal control>K\"\n        and check that the value with the character is passed correctly\n        \"\"\"\n        j = Job(TEST_USER, {ATTR_A: \"J%s%sK\" % (self.bold, self.red)})\n        jid = self.server.submit(j)\n        job_stat = self.server.status(JOB, id=jid)\n        acct_name = job_stat[0]['Account_Name']\n        self.logger.info(\"job Account_Name: %s\" % repr(acct_name))\n        exp_name = 'J%s%sK' % (self.bold_esc, self.red_esc)\n        self.logger.info(\"exp Account_Name: %s\" % exp_name)\n        self.assertEqual(acct_name, exp_name)\n        # Check printjob output\n        msg = 'Account_Name = %s' % exp_name\n        rc = self.find_in_printjob(msg, jid)\n        self.assertEqual(rc, 1)\n        # Check tracejob output\n        msg = 'account=\"%s\"' % exp_name\n        rc = self.find_in_tracejob(msg, jid)\n        self.assertGreaterEqual(rc, 1)\n        self.logger.info('%sReset terminal' % self.reset)\n\n    def find_in_json_valid(self, cmd, msg, jid):\n        \"\"\"\n        Check if qstat json output is valid of jid and find msg in output.\n        Returns 2 or greater on success (valid + found).\n        \"\"\"\n        rc = 0\n        if cmd == 'qstat':\n            qstat_cmd_json = self.qstat_cmd + ' -f -F json ' + str(jid)\n            ret = self.du.run_cmd(self.server.hostname, cmd=qstat_cmd_json)\n        elif cmd == 'nodes':\n            nodes_cmd_json = self.pbsnodes_cmd + ' -av -F json'\n            ret = self.du.run_cmd(self.server.hostname, cmd=nodes_cmd_json)\n        ret_out = \"\\n\".join(ret['out'])\n        try:\n            json.loads(ret_out)\n        except ValueError as err:\n            self.assertFalse(err)\n        rc += 1\n        self.logger.info('json output is valid')\n        # Check msg is in output\n        # cJSON uses different ANSI escape codes (like '\\u001b')\n        # we need to recode to check chars are preserved\n        out_esc = []\n        for 
m in ret['out']:\n            b = m.encode('utf-8')\n            tmp = b.decode('unicode_escape')\n            for ch in self.npcat:\n                if ch in self.npch_exclude:\n                    continue\n                if ch in tmp:\n                    tmp = tmp.replace(ch, self.npcat[ch])\n            out_esc.append(tmp)\n        for m in out_esc:\n            if m.find(msg) != -1:\n                self.logger.info('Found \\\"%s\\\" in json output' % msg)\n                rc += 1\n                continue\n        return rc\n\n    @skipOnShasta\n    def test_nonprint_character_in_qstat_json_valid(self):\n        \"\"\"\n        Using each of the non-printable ASCII characters, except NULL\n        (hex 00) and LINE FEED (hex 0A) which will cause a qsub error,\n        and File Separator (hex 1C) which will cause invalid json,\n        submit a job script with\n        qsub -v \"var1='A,B,<non-printable character>,C,D'\"\n        and check that the qstat -f -F json output is valid\n        \"\"\"\n        uhost = PbsUser.get_user(TEST_USER).host\n        self.npch_exclude += ['\\x1C']\n        for ch in self.npcat:\n            self.logger.info('##### non-printable char: %s #####' % repr(ch))\n            if ch in self.npch_exclude:\n                self.logger.info('##### excluded char: %s' % repr(ch))\n                continue\n            if self.du.is_localhost(uhost):\n                a = {ATTR_v: \"var1=\\'A,B,%s,C,D\\'\" % ch, ATTR_S: '/bin/bash'}\n            else:\n                a = {ATTR_v: r\"var1=\\'A\\,B\\,%s\\,C\\,D\\'\" % ch}\n            msg = 'A,B,%s,C,D' % self.npcat[ch]\n            script = ['env | grep var1']\n            script += ['sleep 5']\n            jid = self.create_and_submit_job(attribs=a, content=script)\n            rc = self.find_in_json_valid('qstat', msg, jid)\n            self.assertGreaterEqual(rc, 2)\n\n    def test_terminal_control_in_qstat_json_valid(self):\n        \"\"\"\n        Using terminal control in 
environment variable\n        submit a job script with qsub\n        -v \"var1='<terminal control>XY<terminal control>'\"\n        and check that the qstat -f -F json output is valid\n        \"\"\"\n        a = {ATTR_v: \"var1=\\'%s%sXY%s\\'\" % (self.bold, self.red, self.reset)}\n        msg = \"%s%sXY%s\" % (self.bold_esc, self.red_esc, self.reset_esc)\n        script = ['env | grep var1']\n        script += ['sleep 5']\n        jid = self.create_and_submit_job(attribs=a, content=script)\n        rc = self.find_in_json_valid('qstat', msg, jid)\n        self.assertGreaterEqual(rc, 2)\n\n    def test_nonprint_character_in_qstat_dsv(self):\n        \"\"\"\n        Using each of the non-printable ASCII characters, except NULL\n        (hex 00) and LINE FEED (hex 0A) which will cause a qsub error,\n        submit a job script with\n        qsub -v \"var1='AB<non-printable character>CD'\"\n        and check that the 'qstat -f -F dsv' output contains proper var1\n        \"\"\"\n        for ch in self.npcat:\n            self.logger.info('##### non-printable char: %s #####' % repr(ch))\n            if ch in self.npch_exclude:\n                self.logger.info('##### excluded char: %s' % repr(ch))\n                continue\n            a = {ATTR_v: \"var1=\\'AB%sCD\\'\" % ch}\n            script = ['env | grep var1']\n            jid = self.create_and_submit_job(attribs=a, content=script)\n            qstat_cmd_dsv = self.qstat_cmd + ' -f -F dsv ' + str(jid)\n            ret = self.du.run_cmd(self.server.hostname, cmd=qstat_cmd_dsv)\n            qstat_out = \"\\n\".join(ret['out'])\n            match = 'var1=AB%sCD' % self.npcat[ch]\n            if qstat_out.find(match) != -1:\n                self.logger.info('Found %s in qstat -f -F dsv output' % match)\n\n    def test_terminal_control_in_qstat_dsv(self):\n        \"\"\"\n        Using terminal control in environment variable\n        submit a job script with qsub\n        -v \"var1='<terminal control>XY<terminal 
control>'\"\n        and check that the 'qstat -f -F dsv' output contains proper var1\n        \"\"\"\n        a = {ATTR_v: \"var1=\\'%s%sXY%s\\'\" % (self.bold, self.red, self.reset)}\n        script = ['env | grep var1']\n        jid = self.create_and_submit_job(attribs=a, content=script)\n        qstat_cmd_dsv = self.qstat_cmd + ' -f -F dsv ' + str(jid)\n        ret = self.du.run_cmd(self.server.hostname, cmd=qstat_cmd_dsv)\n        qstat_out = \"\\n\".join(ret['out'])\n        match = 'var1=%s%sXY%s' % (self.bold_esc, self.red_esc, self.reset_esc)\n        if qstat_out.find(match) != -1:\n            self.logger.info('Found %s in qstat -f -F dsv output' % match)\n\n    @timeout(1200)\n    def test_nonprint_character_job_array(self):\n        \"\"\"\n        Using each of the non-printable ASCII characters, except NULL\n        (hex 00) and LINE FEED (hex 0A) which will cause a qsub error,\n        test exporting the character in environment variable of\n        job array when qsub -V is passed through command line.\n        \"\"\"\n        for ch in self.npcat:\n            self.logger.info('##### non-printable char: %s #####' % repr(ch))\n            if ch in self.npch_exclude:\n                self.logger.info('##### excluded char: %s' % repr(ch))\n                continue\n            # variable to check if with escaped nonprinting character or not\n            chk_var = 'NONPRINT_VAR=X%sY' % self.npcat[ch]\n            if ch in self.npch_asis:\n                chk_var = 'NONPRINT_VAR=X%sY' % ch\n            exp = \"X%sY\" % ch\n            set_env = {\"NONPRINT_VAR\": exp}\n            script = ['sleep 5']\n            script += ['env | grep NONPRINT_VAR']\n            a = {self.ATTR_V: None, ATTR_J: '1-2', ATTR_S: '/bin/bash'}\n            j = Job(TEST_USER, attrs=a)\n            j.create_script(body=script)\n            jid = self.server.submit(j, env=set_env)\n            subj1 = jid.replace('[]', '[1]')\n            subj2 = jid.replace('[]', '[2]')\n    
        # Check if qstat -f output contains the escaped character\n            ja = [jid, subj1, subj2]\n            for j in ja:\n                # Check if qstat -f output contains the escaped character\n                self.check_qstatout(chk_var, j)\n            qstat1 = self.server.status(JOB, ATTR_o, id=subj1, extend='x')\n            job_outfile1 = qstat1[0][ATTR_o].split(':')[1]\n            job_host = qstat1[0][ATTR_o].split(':')[0]\n            if job_outfile1.split('.')[2] == '^array_index^':\n                job_outfile1 = job_outfile1.replace('^array_index^', '1')\n            job_outfile2 = job_outfile1.replace('.1', '.2')\n            # Check if job array output contains the character as is\n            if ch == '\\x09':\n                chk_var = 'NONPRINT_VAR=X%sY' % ch\n            else:\n                chk_var = 'NONPRINT_VAR=X%sY' % self.npcat[ch]\n            self.check_jobout(chk_var, subj1, job_outfile1, job_host)\n            self.check_jobout(chk_var, subj2, job_outfile2, job_host)\n\n    def test_terminal_control_job_array(self):\n        \"\"\"\n        Using terminal control in environment variable of\n        job array when qsub -V is passed through command line.\n        \"\"\"\n        # variable to check if with escaped nonprinting character\n        chk_var = 'NONPRINT_VAR=X%s%sY' % (self.bold_esc, self.red_esc)\n        env_vals = {\"NONPRINT_VAR\": \"X%s%sY\" % (self.bold, self.red)}\n        script = ['sleep 5']\n        script += ['env | grep NONPRINT_VAR']\n        a = {self.ATTR_V: None, ATTR_J: '1-2', ATTR_S: '/bin/bash'}\n        j = Job(TEST_USER, attrs=a)\n        j.create_script(body=script)\n        jid = self.server.submit(j, env=env_vals)\n        subj1 = jid.replace('[]', '[1]')\n        subj2 = jid.replace('[]', '[2]')\n        # Check if qstat -f output contains the escaped character\n        ja = [jid, subj1, subj2]\n        for j in ja:\n            # Check if qstat -f output contains the escaped character\n      
      self.check_qstatout(chk_var, j)\n        qstat1 = self.server.status(JOB, ATTR_o, id=subj1)\n        job_outfile1 = qstat1[0][ATTR_o].split(':')[1]\n        job_host = qstat1[0][ATTR_o].split(':')[0]\n        if job_outfile1.split('.')[2] == '^array_index^':\n            job_outfile1 = job_outfile1.replace('^array_index^', '1')\n        job_outfile2 = job_outfile1.replace('.1', '.2')\n        # Check if job array output contains the character as is\n        chk_var = 'NONPRINT_VAR=X%s%sY' % (self.bold_esc, self.red_esc)\n        self.check_jobout(chk_var, subj1, job_outfile1, job_host)\n        self.logger.info('%sReset terminal' % self.reset)\n        self.check_jobout(chk_var, subj2, job_outfile2, job_host)\n        self.logger.info('%sReset terminal' % self.reset)\n\n    @checkModule(\"pexpect\")\n    def test_nonprint_character_interactive_job(self):\n        \"\"\"\n        Using each of the non-printable ASCII characters, except NULL\n        (hex 00) and LINE FEED (hex 0A) which will cause a qsub error,\n        test exporting the character in environment variable of\n        interactive job when qsub -V is passed through command line.\n        \"\"\"\n        for ch in self.npcat:\n            self.logger.info('##### non-printable char: %s #####' % repr(ch))\n            if ch in self.npch_exclude:\n                self.logger.info('##### excluded char: %s' % repr(ch))\n                continue\n            # variable to check if with escaped nonprinting character or not\n            chk_var = r'NONPRINT_VAR=X\\,%s\\,Y' % self.npcat[ch]\n            if ch in self.npch_asis:\n                chk_var = r'NONPRINT_VAR=X\\,%s\\,Y' % ch\n            os.environ[\"NONPRINT_VAR\"] = \"X,%s,Y\" % ch\n            fn = self.du.create_temp_file(prefix=\"job_out1\")\n            self.job_out1_tempfile = fn\n            # submit an interactive job\n            cmd = 'env > ' + self.job_out1_tempfile\n            a = {self.ATTR_V: None, ATTR_inter: ''}\n            
interactive_script = [('hostname', '.*'), (cmd, '.*'),\n                                  ('sleep 5', '.*')]\n            jid = self.create_and_submit_job(\n                attribs=a,\n                content_interactive=interactive_script,\n                preserve_env=True)\n            # Check if qstat -f output contains the escaped character\n            self.check_qstatout(chk_var, jid)\n            # Once all commands sent and matched, job exits\n            self.server.expect(JOB, 'queue', op=UNSET, id=jid, offset=1)\n            # Check for the non-printable character in the output file\n            with open(self.job_out1_tempfile, newline=\"\") as fd:\n                pkey = \"\"\n                penv = {}\n                for line in fd:\n                    fields = line.split('=', 1)\n                    if len(fields) == 2:\n                        pkey = fields[0]\n                        penv[pkey] = fields[1]\n            np_var = penv['NONPRINT_VAR']\n            np_char = np_var.split(',')[1]\n            self.assertEqual(ch, np_char)\n            self.logger.info(\n                \"non-printable %s was in interactive job environment\"\n                % repr(np_char))\n\n    @checkModule(\"pexpect\")\n    def test_terminal_control_interactive_job(self):\n        \"\"\"\n        Using terminal control characters test exporting them\n        in environment variable of interactive job\n        when qsub -V is passed through command line.\n        \"\"\"\n        # variable to check if with escaped nonprinting character\n        chk_var = r'NONPRINT_VAR=X\\,%s\\,%s\\,Y' % (self.bold_esc, self.red_esc)\n        var = \"X,%s,%s,Y\" % (self.bold, self.red)\n        os.environ[\"NONPRINT_VAR\"] = var\n        fn = self.du.create_temp_file(prefix=\"job_out1\")\n        self.job_out1_tempfile = fn\n        # submit an interactive job\n        cmd = 'env > ' + self.job_out1_tempfile\n        a = {self.ATTR_V: None, ATTR_inter: ''}\n        
interactive_script = [('hostname', '.*'), (cmd, '.*'),\n                              ('sleep 5', '.*')]\n        jid = self.create_and_submit_job(\n            attribs=a,\n            content_interactive=interactive_script,\n            preserve_env=True)\n        # Check if qstat -f output contains the escaped character\n        self.check_qstatout(chk_var, jid)\n        # Once all commands sent and matched, job exits\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid, offset=1)\n        # Parse for the non-printable character in the output file\n        with open(self.job_out1_tempfile) as fd:\n            pkey = \"\"\n            penv = {}\n            for line in fd:\n                fields = line.split('=', 1)\n                if len(fields) == 2:\n                    pkey = fields[0]\n                    penv[pkey] = fields[1]\n        np_var = penv['NONPRINT_VAR']\n        self.logger.info(\"np_var: %s\" % repr(np_var))\n        np_char1 = np_var.split(',')[1]\n        np_char2 = np_var.split(',')[2]\n        var_env = \"X,%s,%s,Y\" % (np_char1, np_char2)\n        self.logger.info(\n            \"np_chars are: %s and %s\" % (repr(np_char1), repr(np_char2)))\n        self.assertEqual(var, var_env)\n        self.logger.info(\n            \"non-printables were in interactive job environment %s\"\n            % repr(var_env))\n\n    def test_terminal_control_begin_launch_hook(self):\n        \"\"\"\n        Using terminal control characters test exporting them\n        in environment variable of job having hooks execjob_begin and\n        execjob_launch when qsub -V is passed through command line.\n        \"\"\"\n        # variable to check if with escaped nonprinting character\n        chk_var = r'NONPRINT_VAR=X\\,%s\\,%s\\,Y' % (self.bold_esc, self.red_esc)\n        var = \"X,%s,%s,Y\" % (self.bold, self.red)\n        env_vals = {\"NONPRINT_VAR\": var}\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.walltime': 3,\n     
        self.ATTR_V: None}\n        script = ['env\\n']\n        script += ['sleep 5\\n']\n        # hook1: execjob_begin\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\ne.job.Variable_List[\"BEGIN_NONPRINT\"] = \"AB\"\npbs.logmsg(pbs.LOG_DEBUG,\"Variable List is %s\" % (e.job.Variable_List,))\n\"\"\"\n        hook_name = \"begin\"\n        a2 = {'event': \"execjob_begin\", 'enabled': 'True', 'debug': 'True'}\n        rv = self.server.create_import_hook(\n            hook_name,\n            a2,\n            hook_body,\n            overwrite=True)\n        self.assertTrue(rv)\n        # hook2: execjob_launch\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\ne.env[\"LAUNCH_NONPRINT\"] = \"CD\"\n\"\"\"\n        hook_name = \"launch\"\n        a2 = {'event': \"execjob_launch\", 'enabled': 'True', 'debug': 'True'}\n        rv = self.server.create_import_hook(\n            hook_name,\n            a2,\n            hook_body,\n            overwrite=True)\n        self.assertTrue(rv)\n\n        # Submit a job with hooks in the system\n        jid = self.create_and_submit_job(attribs=a, content=script,\n                                         set_env=env_vals)\n        # Check if qstat -f output contains the escaped character\n        self.check_qstatout(chk_var, jid)\n        # Check for the non-printable character in the job output file\n        qstat = self.server.status(JOB, ATTR_o, id=jid)\n        job_outfile = qstat[0][ATTR_o].split(':')[1]\n        job_host = qstat[0][ATTR_o].split(':')[0]\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid, offset=3)\n        ret = self.du.cat(\n            hostname=job_host,\n            filename=job_outfile,\n            sudo=True,\n            option=\"-v\")\n        j_output = ret['out']\n        penv = {}\n        for line in j_output:\n            fields = line.split('=', 1)\n            if len(fields) == 2:\n                pkey = fields[0]\n                penv[pkey] = fields[1]\n        np_var = 
penv['NONPRINT_VAR']\n        self.logger.info(\"np_var: %s\" % repr(np_var))\n        np_char1 = np_var.split(',')[1]\n        np_char2 = np_var.split(',')[2]\n        var_env = \"X,%s,%s,Y\" % (np_char1, np_char2)\n        var = \"X,%s,%s,Y\" % (self.bold_esc, self.red_esc)\n        self.logger.info(\n            \"np_chars are: %s and %s\" % (repr(np_char1), repr(np_char2)))\n        self.assertEqual(var, var_env)\n        self.logger.info(\n            \"non-printables were in interactive job environment %s\"\n            % repr(var_env))\n\n    def check_print_list_hook(self, hook_name, hook_name_esc):\n        \"\"\"\n        Check if the hook_name_esc is displayed in the output of qmgr\n        'print hook' and 'list hook'\n        \"\"\"\n        # Print hook displays escaped nonprinting characters\n        phook = 'create hook %s' % hook_name_esc\n        cmd = [self.qmgr_cmd, '-c', 'print hook']\n        ret = self.du.run_cmd(self.server.hostname, cmd=cmd, sudo=True)\n        self.assertEqual(ret['rc'], 0)\n        if phook in ret['out']:\n            self.logger.info('Found \\\"%s\\\" in print hook output' % phook)\n        # List hook displays escaped nonprinting characters\n        lhook = 'Hook %s' % hook_name_esc\n        cmd = [self.qmgr_cmd, '-c', 'list hook']\n        ret = self.du.run_cmd(self.server.hostname, cmd=cmd, sudo=True)\n        self.assertEqual(ret['rc'], 0)\n        if lhook in ret['out']:\n            self.logger.info('Found \\\"%s\\\" in list hook output' % lhook)\n\n    def test_terminal_control_hook_name(self):\n        \"\"\"\n        Test using terminal control characters in hook name. 
Qmgr\n        'print hook' and 'list hook' displays the escaped nonprint character.\n        \"\"\"\n        hook_name = \"h%s%sd\" % (self.bold, self.red)\n        create_hook = [self.qmgr_cmd, '-c', 'create hook %s' % hook_name]\n        delete_hook = [self.qmgr_cmd, '-c', 'delete hook %s' % hook_name]\n        list_hook = [self.qmgr_cmd, '-c', 'list hook %s' % hook_name]\n        # Delete hook if hook already exists\n        ret = self.du.run_cmd(self.server.hostname, cmd=list_hook, sudo=True)\n        if ret['rc'] == 0:\n            ret = self.du.run_cmd(self.server.hostname,\n                                  cmd=delete_hook, sudo=True)\n            self.assertEqual(ret['rc'], 0)\n        # Create hook. Qmgr print,list hook output will have escaped chars\n        ret = self.du.run_cmd(self.server.hostname, cmd=create_hook, sudo=True)\n        self.assertEqual(ret['rc'], 0)\n        hook_name_esc = \"h%s%sd\" % (self.bold_esc, self.red_esc)\n        self.check_print_list_hook(hook_name, hook_name_esc)\n        # Delete the hook\n        ret = self.du.run_cmd(self.server.hostname, cmd=delete_hook, sudo=True)\n        self.assertEqual(ret['rc'], 0)\n        # Reset the terminal\n        self.logger.info('%sReset terminal' % self.reset)\n\n    def test_nonprint_character_hook_name(self):\n        \"\"\"\n        Use in a hook name each of the non-printable ASCII characters, except\n        NULL (x00), TAB (x09), LINE FEED (x0A), VT (x0B), FF (x0C), CR (x0D)\n        which will cause a qmgr create hook error. 
Qmgr 'print hook' and\n        'list hook' displays the escaped nonprint character.\n        \"\"\"\n        self.npch_exclude += ['\\x09', '\\x0B', '\\x0C', '\\x0D']\n        for ch in self.npcat:\n            self.logger.info('##### non-printable char: %s #####' % repr(ch))\n            if ch in self.npch_exclude:\n                self.logger.info('##### excluded char: %s' % repr(ch))\n                continue\n            # Create hook\n            hk_name = 'h%sd' % ch\n            a = {'event': \"execjob_begin\", 'enabled': 'True', 'debug': 'True'}\n            try:\n                rv = self.server.create_hook(hk_name, a)\n            except PbsManagerError:\n                # Delete pre-existing hook first then create the hook\n                self.server.manager(MGR_CMD_DELETE, HOOK, id=hk_name)\n                rv = self.server.create_hook(hk_name, a)\n            self.assertTrue(rv)\n            hk_name_esc = \"h%sd\" % self.npcat[ch]\n            self.check_print_list_hook(hk_name, hk_name_esc)\n            self.server.manager(MGR_CMD_DELETE, HOOK, id=hk_name)\n\n    def test_terminal_control_in_rsubH(self):\n        \"\"\"\n        Using terminal control in authorized hostnames submit a reservation\n        pbs_rsub -H \"h<terminal control>d\" and check that the escaped\n        representation is displayed in pbs_rstat correctly.\n        \"\"\"\n        r = Reservation(TEST_USER, {\"-H\": \"h%s%sd\" % (self.bold, self.red)})\n        rid = self.server.submit(r)\n        resv_stat = self.server.status(RESV, id=rid)\n        auth_hname = resv_stat[0]['Authorized_Hosts']\n        self.logger.info(\"job Authorized_Hosts: %s\" % auth_hname)\n        exp_name = 'h%s%sd' % (self.bold_esc, self.red_esc)\n        self.logger.info(\"expected Authorized_Hosts: %s\" % exp_name)\n        self.assertEqual(auth_hname, exp_name)\n        self.logger.info('%sReset terminal' % self.reset)\n\n    def test_nonprint_character_in_rsubH(self):\n        \"\"\"\n        Using 
each of the non-printable ASCII characters, except NULL (hex 00)\n        and LINE FEED (hex 0A), submit a reservation with authorized hostnames\n        pbs_rsub -H \"h<non-printable character>d\" and check that the escaped\n        representation is displayed in pbs_rstat correctly.\n        \"\"\"\n        uhost = PbsUser.get_user(TEST_USER).host\n        for ch in self.npcat:\n            self.logger.info('##### non-printable char: %s #####' % repr(ch))\n            if ch in self.npch_exclude:\n                self.logger.info('##### excluded char: %s' % repr(ch))\n                continue\n            if uhost is None or self.du.is_localhost(uhost):\n                h = {\"-H\": \"h%sd\" % ch}\n            else:\n                h = {\"-H\": \"'h%sd'\" % ch}\n            r = Reservation(TEST_USER, h)\n            rid = self.server.submit(r)\n            resv_stat = self.server.status(RESV, id=rid)\n            auth_hname = resv_stat[0]['Authorized_Hosts']\n            self.logger.info(\"job Authorized_Hosts: %s\" % auth_hname)\n            exp_name = 'h%sd' % self.npcat[ch]\n            if ch in self.npch_asis:\n                exp_name = 'h%sd' % ch\n            self.logger.info(\"expected Authorized_Hosts: %s\" % exp_name)\n            self.assertEqual(auth_hname, exp_name)\n            self.server.delete(rid)\n\n    def test_terminal_control_in_node_comment(self):\n        \"\"\"\n        Test if pbsnodes -C with terminal control characters results in\n        valid json and escaped representation is displayed correctly.\n        \"\"\"\n        comment = 'h%s%sd%s' % (self.bold, self.red, self.reset)\n        cmd = [self.pbsnodes_cmd, '-C', '%s' % comment, self.mom.shortname]\n        self.du.run_cmd(self.server.hostname, cmd=cmd)\n        # Check json output\n        comm1 = 'h%s%sd%s' % (self.bold_esc, self.red_esc, self.reset_esc)\n        rc = self.find_in_json_valid('nodes', comm1, None)\n        self.assertGreaterEqual(rc, 2)\n        # Check qmgr -c 'list 
node @default' output\n        comm2 = '    comment = %s' % comm1\n        cmd = [self.qmgr_cmd, '-c', 'list node @default']\n        ret = self.du.run_cmd(self.server.hostname, cmd=cmd, sudo=True)\n        self.assertEqual(ret['rc'], 0)\n        if comm2 in ret['out']:\n            self.logger.info('Found \\\"%s\\\" in qmgr list node output' % comm2)\n        # Check pbsnodes -a output\n        comm3 = '     comment = %s' % comm1\n        cmd = [self.pbsnodes_cmd, '-a']\n        ret = self.du.run_cmd(self.server.hostname, cmd=cmd)\n        self.assertEqual(ret['rc'], 0)\n        if comm3 in ret['out']:\n            self.logger.info('Found \\\"%s\\\" in pbsnodes -a output' % comm3)\n\n    def test_nonprint_character_in_node_comment(self):\n        \"\"\"\n        Using each of the non-printable ASCII characters, except NULL (hex 00),\n        LINE FEED (hex 0A), and File Separator (hex 1C) which will cause\n        invalid json, test if pbsnodes -C with special characters results in\n        valid json and escaped representation is displayed correctly.\n        \"\"\"\n        self.npch_exclude += ['\\x1C']\n        for ch in self.npcat:\n            self.logger.info('##### non-printable char: %s #####' % repr(ch))\n            if ch in self.npch_exclude:\n                self.logger.info('##### excluded char: %s' % repr(ch))\n                continue\n            comment = 'h%sd' % ch\n            cmd = [self.pbsnodes_cmd, '-C', '%s' % comment, self.mom.shortname]\n            self.du.run_cmd(self.server.hostname, cmd=cmd, sudo=True)\n            comm1 = 'h%sd' % self.npcat[ch]\n            # Check json output\n            rc = self.find_in_json_valid('nodes', comm1, None)\n            self.assertGreaterEqual(rc, 2)\n            # Check qmgr -c 'list node @default' output\n            comm2 = '    comment = %s' % comm1\n            cmd = [self.qmgr_cmd, '-c', 'list node @default']\n            ret = self.du.run_cmd(self.server.hostname, cmd=cmd, sudo=True)\n       
     self.assertEqual(ret['rc'], 0)\n            if comm2 in ret['out']:\n                self.logger.info('Found \\\"%s\\\" in qmgr list node out' % comm2)\n            # Check pbsnodes -a output\n            comm3 = '     comment = %s' % comm1\n            cmd = [self.pbsnodes_cmd, '-a']\n            ret = self.du.run_cmd(self.server.hostname, cmd=cmd)\n            self.assertEqual(ret['rc'], 0)\n            if comm3 in ret['out']:\n                self.logger.info('Found \\\"%s\\\" in pbsnodes -a output' % comm3)\n"
  },
  {
    "path": "test/tests/functional/pbs_offline_vnodes.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestOfflineVnode(TestFunctional):\n    \"\"\"\n    Tests if vnodes are marked offline:\n     - when a hook fails and the hook fail action is 'offline_vnodes'\n     - using pbsnodes -o\n    \"\"\"\n    is_cray = True\n\n    def setUp(self):\n        if not self.du.get_platform().startswith('cray'):\n            self.is_cray = False\n\n        TestFunctional.setUp(self)\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 2047})\n\n    def create_mom_hook(self):\n        name = \"h1\"\n        body = (\"import pbs\\n\"\n                \"import time\\n\"\n                \"time.sleep(60)\\n\"\n                \"pbs.event.accept()\")\n        attr = {'event': 'execjob_begin', 'fail_action': 'offline_vnodes',\n                'alarm': '3', 'enabled': 'true'}\n        self.server.create_import_hook(name, attr, body)\n\n    def create_bad_begin_hook(self):\n        name = \"h2\"\n        body = (\"import pbs\\n\"\n                \"e=pbs.event()\\n\"\n                \"if e.job.in_ms_mom():\\n\"\n                \"    e.accept()\\n\"\n                \"raise ValueError('invalid name')\\n\")\n        attr = {'event': 'execjob_begin', 'fail_action': 'offline_vnodes'}\n        self.server.create_import_hook(name, attr, body)\n\n    def create_bad_startup_hook(self):\n        name = \"h3\"\n        body = 
(\"import pbs\\n\"\n                \"raise ValueError('invalid name')\\n\")\n        attr = {'event': 'exechost_startup', 'fail_action': 'offline_vnodes'}\n        self.server.create_import_hook(name, attr, body)\n\n    def create_multi_vnodes(self, num_moms, num_vnode=3):\n        if num_moms != len(self.moms):\n            self.server.manager(MGR_CMD_DELETE, NODE, id=\"@default\")\n        if self.is_cray is True:\n            if num_moms == 1 and len(self.moms) != 1:\n                self.server.manager(MGR_CMD_CREATE, NODE,\n                                    id=self.moms.values()[0].shortname)\n                # adding a sleep of two seconds because it takes some time\n                # before node resources start showing up\n                time.sleep(2)\n            return\n        # No need to create vnodes on a cpuset mom\n        if self.moms.values()[0].is_cpuset_mom() is True:\n            return\n        vn_attrs = {ATTR_rescavail + '.ncpus': 1,\n                    ATTR_rescavail + '.mem': '1024mb'}\n        for i in range(num_moms):\n            self.moms.values()[i].create_vnodes(vn_attrs, num_vnode,\n                                                usenatvnode=True, delall=False,\n                                                expect=False)\n            # Calling an explicit expect on newly created nodes.\n            self.server.expect(NODE, {ATTR_NODE_state: 'free'},\n                               id=self.moms.values()[i].shortname)\n\n    def verify_vnodes_state(self, expected_state, nodes):\n        \"\"\"\n        Verify that the vnodes are set to the expected state\n        \"\"\"\n        vlist = []\n        for nd in nodes:\n            vn = nd.shortname\n            if self.is_cray is True:\n                vnl = self.server.filter(\n                    VNODE, {'resources_available.vntype': 'cray_compute'})\n                vlist = vnl[\"resources_available.vntype=cray_compute\"]\n            elif nd.is_cpuset_mom() is True:\n           
     vnl = self.server.status(NODE)\n                vlist = [x['id'] for x in vnl if x['id'] !=\n                         self.mom.shortname]\n            else:\n                vlist = [vn + \"[0]\", vn + \"[1]\"]\n            for v1 in vlist:\n                # Check the vnode state\n                self.server.expect(\n                    VNODE, {'state': expected_state}, id=v1, interval=2)\n        return vlist[0]\n\n    def tearDown(self):\n        TestFunctional.tearDown(self)\n\n        # Restore original node setup for future test cases.\n        self.server.cleanup_jobs()\n        self.server.manager(MGR_CMD_DELETE, NODE, id=\"@default\")\n        for m in self.moms.values():\n            self.server.manager(MGR_CMD_CREATE, NODE,\n                                id=m.shortname)\n\n    def test_single_mom_hook_failure_affects_vnode(self):\n        \"\"\"\n        Run an execjob_begin hook that sleeps for some time,\n        with an alarm value low enough that\n        the hook alarms out and the server executes the fail_action.\n        After this, check that the vnodes are marked offline.\n        In the case of a single mom reporting vnodes, it should mark\n        all the vnodes and the mom as offline.\n        Once offlined, reset the mom by issuing pbsnodes -r\n        and check that the job runs on one of the vnodes.\n        \"\"\"\n        single_mom = self.moms.values()[0]\n        start_time = time.time()\n        self.create_multi_vnodes(1)\n        self.create_mom_hook()\n\n        # Check if hook files were copied to mom\n        single_mom.log_match(\n            \"h1.HK;copy hook-related file request received\",\n            starttime=start_time, interval=2)\n        single_mom.log_match(\n            \"h1.PY;copy hook-related file request received\",\n            starttime=start_time, interval=2)\n\n        self.server.expect(NODE, {ATTR_NODE_state: 'free'},\n                           id=single_mom.shortname, interval=2)\n        j1 = 
Job(TEST_USER)\n        j1.set_sleep_time(1000)\n        jid = self.server.submit(j1)\n        # mom hook will alarm out and job will get into Q state\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid)\n\n        # since the mom hook alarmed out, its fail_action will put the\n        # node in the offline state\n        self.server.expect(\n            NODE, {ATTR_NODE_state: 'offline'},\n            id=single_mom.shortname, interval=2)\n\n        vname = self.verify_vnodes_state('offline', [single_mom])\n\n        mom_host = single_mom.shortname\n        pbs_exec = self.server.pbs_conf['PBS_EXEC']\n        pbsnodes_cmd = os.path.join(pbs_exec, 'bin', 'pbsnodes')\n        pbsnodes_reset = pbsnodes_cmd + ' -r ' + mom_host\n\n        # Set mom sync hook timeout to a low value because if mom fails to\n        # get the hook after disabling it, the next sync will happen after\n        # 2 minutes by default and we don't want to wait that long.\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'sync_mom_hookfiles_timeout': '5'})\n        self.server.manager(MGR_CMD_SET, HOOK, {'enabled': 'False'}, id=\"h1\")\n        # Make sure that the hook has been sent to mom\n        self.server.log_match(\"successfully sent hook file\")\n        self.du.run_cmd(self.server.hostname, cmd=pbsnodes_reset)\n        self.server.delete(jid, wait=True)\n\n        j2 = Job(TEST_USER)\n        j2.set_attributes({ATTR_l + '.select': '1:vnode=' + vname})\n        jid2 = self.server.submit(j2)\n        self.server.expect(NODE, {ATTR_NODE_state: 'free'},\n                           id=single_mom.shortname, interval=2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n\n    @requirements(num_moms=2)\n    def test_multi_mom_hook_failure_affects_vnode(self):\n        \"\"\"\n        Run an execjob_begin hook that sleeps for some time,\n        with an alarm value low enough that\n        the hook alarms out and the server executes the fail_action.\n  
      After this, check that the vnodes are marked offline.\n        When multiple moms report the same set of vnodes,\n        it should not mark the vnodes and moms as offline because\n        there are other moms active and reporting the same vnodes.\n        NOTE: This test needs moms to report the same set of vnodes.\n        \"\"\"\n        if len(self.moms) != 2:\n            self.skipTest(\"Provide 2 moms while invoking test\")\n\n        for m in self.moms.values():\n            if m.is_cpuset_mom():\n                self.skipTest(\"Skipping test on cpuset moms\")\n\n        # The moms provided to the test may have unwanted vnodedef files.\n        if self.moms.values()[0].has_vnode_defs():\n            self.moms.values()[0].delete_vnode_defs()\n        if self.moms.values()[1].has_vnode_defs():\n            self.moms.values()[1].delete_vnode_defs()\n\n        start_time = time.time()\n        self.create_multi_vnodes(2)\n        self.create_mom_hook()\n\n        # Check if hook files were copied to mom\n        for m in self.moms.values():\n            m.log_match(\n                \"h1.HK;copy hook-related file request received\",\n                starttime=start_time, interval=2)\n            m.log_match(\n                \"h1.PY;copy hook-related file request received\",\n                starttime=start_time, interval=2)\n\n        # set one natural node to have higher ncpus than the other one so\n        # that the job only goes to this natural node.\n        self.server.manager(MGR_CMD_SET, NODE, {\n                            ATTR_rescavail + '.ncpus': '256'},\n                            id=self.moms.values()[0].shortname)\n        self.server.manager(MGR_CMD_SET, NODE, {\n                            ATTR_rescavail + '.ncpus': '1'},\n                            id=self.moms.values()[1].shortname)\n\n        j1 = Job(TEST_USER)\n\n        if self.is_cray is True:\n            # on a cray, make sure job runs on login node with higher number of\n            
# ncpus\n            j1.set_attributes(\n                {ATTR_l + '.select': '1:ncpus=256:vntype=cray_login'})\n        else:\n            j1.set_attributes({ATTR_l + '.select': '1:ncpus=256'})\n\n        jid = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid)\n\n        self.server.expect(NODE, {ATTR_NODE_state: 'offline'},\n                           id=self.moms.values()[0].shortname, interval=2)\n        self.server.expect(NODE, {ATTR_NODE_state: 'free'},\n                           id=self.moms.values()[1].shortname, interval=2)\n\n        self.verify_vnodes_state('free', [self.moms.values()[1]])\n        self.verify_vnodes_state('offline', [self.moms.values()[0]])\n\n    @requirements(num_moms=2)\n    def test_multi_mom_hook_failure_affects_vnode2(self):\n        \"\"\"\n        Run an execjob_begin hook that raises an exception\n        when executed by the sister mom, causing\n        the server to execute the fail_action=offline_vnodes, which\n        results in the sister vnode being marked offline.\n        \"\"\"\n        if len(self.moms) != 2:\n            self.skipTest(\"Provide 2 moms while invoking test\")\n\n        for m in self.moms.values():\n            if m.is_cpuset_mom():\n                self.skipTest(\"Skipping test on cpuset moms\")\n\n        if self.is_cray is True:\n            self.skipTest(\"Skipping test on Crays\")\n\n        # The moms provided to the test may have unwanted vnodedef files.\n        if self.moms.values()[0].has_vnode_defs():\n            self.moms.values()[0].delete_vnode_defs()\n        if self.moms.values()[1].has_vnode_defs():\n            self.moms.values()[1].delete_vnode_defs()\n\n        start_time = time.time()\n        self.create_multi_vnodes(num_moms=2, num_vnode=1)\n\n        self.create_bad_begin_hook()\n\n        # Check if hook files were copied to mom\n        for m in self.moms.values():\n            m.log_match(\n                \"h2.HK;copy hook-related file request 
received\",\n                starttime=start_time, interval=2)\n            m.log_match(\n                \"h2.PY;copy hook-related file request received\",\n                starttime=start_time, interval=2)\n\n        j1 = Job(TEST_USER)\n\n        a = {ATTR_l + '.select': '2:ncpus=1',\n             ATTR_l + '.place': 'scatter'}\n        j1.set_attributes(a)\n\n        jid = self.server.submit(j1)\n\n        self.server.expect(NODE, {ATTR_NODE_state: 'free'},\n                           id=self.moms.values()[0].shortname, interval=2)\n        # sister mom's vnode gets offlined due to hook exception\n        self.server.expect(NODE,\n                           {ATTR_NODE_state: 'offline',\n                            ATTR_comment:\n                            \"offlined by hook 'h2' due to hook error\"},\n                           id=self.moms.values()[1].shortname,\n                           interval=2, attrop=PTL_AND)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid)\n\n    def test_fail_action_startup_hook(self):\n        \"\"\"\n        Run an exechost_startup hook that gets an\n        exception when local mom is restarted. 
Vnode representing\n        local mom would be marked offline.\n        \"\"\"\n        mom = self.moms.values()[0]\n        if mom.is_cpuset_mom():\n            self.skipTest(\"Skipping test on cpuset moms\")\n\n        if self.is_cray is True:\n            self.skipTest(\"Skipping test on Crays\")\n\n        # The moms provided to the test may have unwanted vnodedef files.\n        if mom.has_vnode_defs():\n            mom.delete_vnode_defs()\n\n        start_time = time.time()\n        self.create_multi_vnodes(1)\n        self.create_bad_startup_hook()\n\n        # Check if hook files were copied to mom\n        mom.log_match(\n            \"h3.HK;copy hook-related file request received\",\n            starttime=start_time, interval=2)\n        mom.log_match(\n            \"h3.PY;copy hook-related file request received\",\n            starttime=start_time, interval=2)\n\n        mom.stop()\n        mom.start()\n\n        # primary mom's vnode gets offlined due to startup hook exception\n        self.server.expect(NODE,\n                           {ATTR_NODE_state: 'offline',\n                            ATTR_comment:\n                            \"offlined by hook 'h3' due to hook error\"},\n                           id=mom.shortname,\n                           interval=2, attrop=PTL_AND)\n\n    def test_pbsnodes_o_single_mom(self):\n        \"\"\"\n        Offline a mom using pbsnodes -o.\n        Since it is the only mom, all vnodes reported by her\n        should also be offline.\n        \"\"\"\n        single_mom = self.moms.values()[0]\n        self.create_multi_vnodes(1)\n        self.server.expect(NODE, {ATTR_NODE_state: 'free'},\n                           id=single_mom.shortname, interval=2)\n\n        mom_host = single_mom.shortname\n        pbs_exec = self.server.pbs_conf['PBS_EXEC']\n        pbsnodes_cmd = os.path.join(pbs_exec, 'bin', 'pbsnodes')\n        pbsnodes_offline = [pbsnodes_cmd, '-o', mom_host]\n        
self.du.run_cmd(self.server.hostname, cmd=pbsnodes_offline)\n\n        # the mom node and all of her children should be offline\n        self.server.expect(\n            NODE, {ATTR_NODE_state: 'offline'},\n            id=single_mom.shortname, interval=2)\n\n        self.verify_vnodes_state('offline', [single_mom])\n\n    @requirements(num_moms=2)\n    def test_pbsnodes_o_multi_mom_only_one_offline(self):\n        \"\"\"\n        Offline one mom using pbsnodes -o.\n        In the case of multiple moms reporting the same set of vnodes,\n        none of the vnodes should be marked offline,\n        including the children vnodes.\n        NOTE: This test needs moms to report the same set of vnodes.\n        \"\"\"\n        if len(self.moms) != 2:\n            self.skipTest(\"Provide 2 moms while invoking test\")\n\n        for m in self.moms.values():\n            if m.is_cpuset_mom():\n                self.skipTest(\"Skipping test on cpuset moms\")\n\n        momA = self.moms.values()[0]\n        momB = self.moms.values()[1]\n\n        # The moms provided to the test may have unwanted vnodedef files.\n        if momA.has_vnode_defs():\n            momA.delete_vnode_defs()\n        if momB.has_vnode_defs():\n            momB.delete_vnode_defs()\n\n        self.create_multi_vnodes(2)\n\n        # Offline only one of the moms, the other mom and her children\n        # should still be free\n        pbs_exec = self.server.pbs_conf['PBS_EXEC']\n        pbsnodes_cmd = os.path.join(pbs_exec, 'bin', 'pbsnodes')\n        pbsnodes_offline = [pbsnodes_cmd, '-o', momA.shortname]\n        self.du.run_cmd(self.server.hostname, cmd=pbsnodes_offline)\n\n        # MomA should be offline\n        self.server.expect(\n            NODE, {ATTR_NODE_state: 'offline'},\n            id=momA.shortname, interval=2)\n\n        # momB and the rest of the vnodes should be free\n        self.server.expect(NODE, {ATTR_NODE_state: 'free'},\n                           id=momB.shortname, interval=2)\n 
       self.verify_vnodes_state('free', [momB])\n        self.verify_vnodes_state('offline', [momA])\n\n    @requirements(num_moms=2)\n    def test_pbsnodes_multi_mom_offline_online(self):\n        \"\"\"\n        When all of the moms reporting a vnode are offline,\n        the vnode should also be marked offline.\n        And when pbsnodes -r is used to clear the offline from at\n        least one of the moms reporting a vnode, then that vnode\n        should also get the offline cleared.\n        Note: This test needs moms to report the same set of vnodes.\n        \"\"\"\n        if len(self.moms) != 2:\n            self.skipTest(\"Provide 2 moms while invoking test\")\n\n        for m in self.moms.values():\n            if m.is_cpuset_mom():\n                self.skipTest(\"Skipping test on cpuset moms\")\n\n        momA = self.moms.values()[0]\n        momB = self.moms.values()[1]\n\n        # The moms provided to the test may have unwanted vnodedef files.\n        if momA.has_vnode_defs():\n            momA.delete_vnode_defs()\n        if momB.has_vnode_defs():\n            momB.delete_vnode_defs()\n\n        self.create_multi_vnodes(2)\n\n        # Offline both of the moms, the vnodes reported by them\n        # will also be offlined\n        pbs_exec = self.server.pbs_conf['PBS_EXEC']\n        pbsnodes_cmd = os.path.join(pbs_exec, 'bin', 'pbsnodes')\n        pbsnodes_offline = [pbsnodes_cmd, '-o', momA.shortname, momB.shortname]\n        self.du.run_cmd(self.server.hostname, cmd=pbsnodes_offline)\n\n        # MomA and MomB should be offline\n        self.server.expect(\n            NODE, {ATTR_NODE_state: 'offline'},\n            id=momA.shortname, interval=2)\n\n        self.server.expect(\n            NODE, {ATTR_NODE_state: 'offline'},\n            id=momB.shortname, interval=2)\n\n        self.verify_vnodes_state('offline', [momA, momB])\n\n        # Now call pbsnodes -r to clear the offline from MomA\n        pbsnodes_clear_offline = [pbsnodes_cmd, 
'-r', momA.shortname]\n        self.du.run_cmd(self.server.hostname, cmd=pbsnodes_clear_offline)\n\n        # MomB should still be offline\n        self.server.expect(\n            NODE, {ATTR_NODE_state: 'offline'},\n            id=momB.shortname, interval=2)\n\n        # momA and the vnodes she reports should be free\n        self.server.expect(NODE, {ATTR_NODE_state: 'free'},\n                           id=momA.shortname, interval=2)\n        self.verify_vnodes_state('free', [momA])\n        self.verify_vnodes_state('offline', [momB])\n"
  },
  {
    "path": "test/tests/functional/pbs_one_event_multiple_hooks.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass Test_single_event_multiple_hooks(TestFunctional):\n    \"\"\"\n    Test whether changes made to pbs objects by multiple hooks of the\n    same event type take effect.\n    \"\"\"\n    hook_string = \"\"\"\nimport pbs\ne = pbs.event()\ne.job.Resource_List[\"%s\"]=%s\n%s\n\"\"\"\n\n    def create_hook_scr(self, accept, resource, value):\n        \"\"\"\n        Function to create a hook script.\n        It accepts 3 arguments:\n        - accept\tIf set to True, the hook will accept, else reject\n        - resource\tResource whose value we want to change\n        - value\t\tNew value to be assigned\n        \"\"\"\n        hook_action = \"e.accept()\"\n        if not accept:\n            hook_action = \"e.reject()\"\n        final_hook = self.hook_string % (resource, value, hook_action)\n        return final_hook\n\n    def test_two_queuejob_hooks(self):\n        \"\"\"\n        Submit two queuejob hooks. One hook modifies the ncpus resource\n        and the other modifies walltime. 
Check the result of modification.\n        \"\"\"\n        hook_name = \"h1\"\n        scr = self.create_hook_scr(True, \"ncpus\", \"int(5)\")\n        attrs = {'event': \"queuejob\", 'order': '1'}\n        rv = self.server.create_import_hook(\n            hook_name,\n            attrs,\n            scr,\n            overwrite=True)\n        self.assertTrue(rv)\n\n        hook_name = \"h2\"\n        scr = self.create_hook_scr(True, \"walltime\", \"600\")\n        attrs = {'event': \"queuejob\", 'order': '2'}\n        rv = self.server.create_import_hook(\n            hook_name,\n            attrs,\n            scr,\n            overwrite=True)\n        self.assertTrue(rv)\n        a = {'Resource_List.ncpus': 1,\n             'Resource_List.walltime': 10}\n        j = Job(TEST_USER)\n        j.set_attributes(a)\n        j.set_sleep_time(\"10\")\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {\n            'Resource_List.ncpus': 5,\n            'Resource_List.walltime': '00:10:00'},\n            offset=2, id=jid)\n"
  },
  {
    "path": "test/tests/functional/pbs_only_explicit_psets.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass Test_explicit_psets(TestFunctional):\n    \"\"\"\n    Test if 'only_explicit_psets = True' disables the creation of pool of nodes\n    with unset resources in psets.\n    \"\"\"\n\n    def setUp(self):\n        \"\"\"\n        Set the attributes 'only_explicit_psets', 'do_not_span_psets'\n        to True and create 'foo' host resource.\n        \"\"\"\n        TestFunctional.setUp(self)\n        sched_qmgr_attr = {'do_not_span_psets': 'True',\n                           'only_explicit_psets': 'True'}\n        self.server.manager(MGR_CMD_SET, SCHED, sched_qmgr_attr)\n\n        attr = {'type': 'string',\n                'flag': 'h'}\n        r = 'foo'\n        rc = self.server.manager(\n            MGR_CMD_CREATE, RSC, attr, id=r, runas=ROOT_USER, logerr=False)\n\n    def test_only_explicit_psets(self):\n        \"\"\"\n        Test if job with '-lplace=group=foo' can run. It shouldn't.\n        \"\"\"\n\n        # submit a job that can never run with only_explicit_psets = True\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.place': 'group=foo'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n\n        c = \"Can Never Run: can't fit in the largest placement set,\\\n and can't span psets\"\n        self.server.expect(JOB, {'comment': c}, id=jid)\n"
  },
  {
    "path": "test/tests/functional/pbs_only_small_files_over_tpp.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\n@requirements(num_moms=2)\nclass TestOnlySmallFilesOverTPP(TestFunctional):\n    \"\"\"\n    This test suite is for testing that only smaller job files (.OU/.ER/.CK)\n    and scripts (size < 2MB) are sent over TPP and larger files are sent by\n    forking.\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n\n        if len(self.moms) != 2:\n            self.skip_test(reason=\"need 2 mom hosts: -p moms=<m1>:<m2>\")\n\n        self.server.set_op_mode(PTL_CLI)\n\n        # PBSTestSuite returns the moms passed in as parameters as dictionary\n        # of hostname and MoM object\n        self.momA = self.moms.values()[0]\n        self.momB = self.moms.values()[1]\n        self.momA.delete_vnode_defs()\n        self.momB.delete_vnode_defs()\n\n        self.hostA = self.momA.shortname\n        self.hostB = self.momB.shortname\n\n        self.server.manager(MGR_CMD_DELETE, NODE, None, \"\")\n\n        islocal = self.du.is_localhost(self.hostA)\n        if islocal is False:\n            self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostA)\n        else:\n            self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostB)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'job_requeue_timeout': 175})\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 4095})\n\n    def 
test_small_job_file(self):\n        \"\"\"\n        This test case tests that small output files are sent over TPP.\n        \"\"\"\n        j = Job(TEST_USER, attrs={ATTR_N: 'small_job_file'})\n\n        test = []\n        test += ['dd if=/dev/zero of=file bs=1024 count=0 seek=1024\\n']\n        test += ['cat file\\n']\n        test += ['sleep 30\\n']\n\n        j.create_script(test, hostname=self.server.client)\n        jid = self.server.submit(j)\n\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42},\n                           id=jid, max_attempts=30, interval=2)\n        time.sleep(5)\n        try:\n            self.server.rerunjob(jid)\n        except PbsRerunError as e:\n            self.assertTrue('qrerun: Response timed out. Job rerun request ' +\n                            'still in progress for' in e.msg[0])\n\n        msg = jid + \";big job files, sending via subprocess\"\n        self.server.log_match(msg, max_attempts=10, interval=2,\n                              existence=False)\n\n    def test_big_job_file(self):\n        \"\"\"\n        This test case tests that large output files are not sent over TPP.\n        \"\"\"\n        j = Job(TEST_USER, attrs={ATTR_N: 'big_job_file'})\n\n        test = []\n        test += ['dd if=/dev/zero of=file bs=1024 count=0 seek=3072\\n']\n        test += ['cat file\\n']\n        test += ['sleep 30\\n']\n\n        j.create_script(test, hostname=self.server.client)\n        jid = self.server.submit(j)\n\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42},\n                           id=jid, max_attempts=30, interval=2)\n        time.sleep(5)\n        try:\n            self.server.rerunjob(jid)\n        except PbsRerunError as e:\n            self.assertTrue('qrerun: Response timed out. 
Job rerun request ' +\n                            'still in progress for' in e.msg[0])\n\n        msg = jid + \";big job files, sending via subprocess\"\n        self.server.log_match(msg, max_attempts=30, interval=2)\n\n    def test_big_job_script(self):\n        \"\"\"\n        This test case tests that large job scripts are not sent over TPP.\n        \"\"\"\n        j = Job(TEST_USER, attrs={\n            ATTR_N: 'big_job_script'})\n\n        # Create a big job script.\n        test = []\n        for i in range(105000):\n            test += ['echo hey > /dev/null']\n            test += ['sleep 5']\n\n        j.create_script(test, hostname=self.server.client)\n        jid = self.server.submit(j)\n\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42},\n                           id=jid, max_attempts=30, interval=2)\n\n        msg = jid + \";big job files, sending via subprocess\"\n        self.server.log_match(\n            msg, max_attempts=30, interval=2)\n"
  },
  {
    "path": "test/tests/functional/pbs_passing_environment_variable.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass Test_passing_environment_variable_via_qsub(TestFunctional):\n    \"\"\"\n    Test to check passing environment variables via qsub\n    \"\"\"\n\n    def create_and_submit_job(self, user=None, attribs=None, content=None,\n                              content_interactive=None, preserve_env=False):\n        \"\"\"\n        create the job object and submit it to the server as 'user',\n        attributes list 'attribs' script 'content' or 'content_interactive',\n        and to 'preserve_env' if interactive job.\n        \"\"\"\n        # A user=None value means job will be executed by current user\n        # where the environment is set up\n        if attribs is None:\n            use_attribs = {}\n        else:\n            use_attribs = attribs\n        retjob = Job(username=user, attrs=use_attribs)\n\n        if content is not None:\n            retjob.create_script(body=content)\n        elif content_interactive is not None:\n            retjob.interactive_script = content_interactive\n            retjob.preserve_env = preserve_env\n\n        return self.server.submit(retjob)\n\n    def test_commas_in_custom_variable(self):\n        \"\"\"\n        Submit a job with -v \"var1='A,B,C,D'\" and check that the value\n        is passed correctly\n        \"\"\"\n        a = {'Resource_List.select': '1:ncpus=1',\n       
      'Resource_List.walltime': 10}\n        script = ['#PBS -v \"var1=\\'A,B,C,D\\'\"']\n        script += ['env | grep var1']\n        jid = self.create_and_submit_job(user=TEST_USER, content=script,\n                                         attribs={ATTR_S: \"/bin/bash\"})\n        qstat = self.server.status(JOB, ATTR_o, id=jid)\n        job_outfile = qstat[0][ATTR_o].split(':')[1]\n\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid, offset=10)\n        job_output = \"\"\n        ret = self.du.cat(self.server.client, filename=job_outfile,\n                          runas=TEST_USER, logerr=False)\n        job_output = (' '.join(ret['out'])).strip()\n        self.assertEqual(job_output, \"var1=A,B,C,D\")\n\n    def test_passing_shell_function(self):\n        \"\"\"\n        Define a shell function with new line characters and check that\n        the function is passed correctly\n        \"\"\"\n        # Check if ShellShock fix for exporting shell function in bash exists\n        # on this system and what \"BASH_FUNC_\" format to use\n        foo_scr = \"\"\"#!/bin/bash\nfoo() { a=B; echo $a; }\nexport -f foo\nenv | grep foo\nunset -f foo\nexit 0\n\"\"\"\n        fn = self.du.create_temp_file(hostname=self.mom.hostname, body=foo_scr)\n        self.du.chmod(hostname=self.mom.hostname, path=fn, mode=0o755)\n        foo_msg = 'Failed to run foo_scr'\n        ret = self.du.run_cmd(self.mom.hostname, cmd=fn)\n        self.assertEqual(ret['rc'], 0, foo_msg)\n        msg = 'BASH_FUNC_'\n        n = 'foo'\n        for m in ret['out']:\n            if m.find(msg) != -1:\n                n = m.split('=')[0]\n                break\n        # Adjustments in bash due to ShellShock malware fix in various OS\n        script = \"\"\"#!/bin/bash\nfoo() { if [ /bin/true ]; then\\necho hello;\\nfi\\n}\nexport -f foo\n#PBS -V\nenv | grep -A 3 foo\\n\nfoo\\n\n\"\"\"\n        # Submit a job without hooks in the system\n        jid = self.create_and_submit_job(user=TEST_USER, 
content=script,\n                                         attribs={ATTR_S: \"/bin/bash\"})\n        qstat = self.server.status(JOB, ATTR_o, id=jid)\n        job_outfile = qstat[0][ATTR_o].split(':')[1]\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid, offset=2)\n        job_output = \"\"\n        ret = self.du.cat(self.server.client, filename=job_outfile,\n                          runas=TEST_USER, logerr=False)\n        job_output = ('\\n'.join(ret['out'])).strip()\n        match = n + \\\n            '=() {  if [ /bin/true ]; then\\n echo hello;\\n fi\\n}\\nhello'\n        self.assertEqual(job_output, match,\n                         msg=\"Environment variable foo content does \"\n                         \"not match original\")\n\n    def test_option_V_dfltqsubargs(self):\n        \"\"\"\n        Test exporting environment variable when -V is enabled\n        in default_qsub_arguments.\n        \"\"\"\n        os.environ[\"SET_IN_SUBMISSION\"] = \"true\"\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'default_qsub_arguments': '-V'})\n        j = Job(self.du.get_current_user())\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'Variable_List': (MATCH_RE,\n                                                   'SET_IN_SUBMISSION=true')},\n                           id=jid)\n\n    def test_option_V_cmdline(self):\n        \"\"\"\n        Test exporting environment variable when -V is passed\n        through command line.\n        \"\"\"\n        os.environ[\"SET_IN_SUBMISSION\"] = \"true\"\n        self.ATTR_V = 'Full_Variable_List'\n        api_to_cli.setdefault(self.ATTR_V, 'V')\n        a = {self.ATTR_V: None}\n        j = Job(self.du.get_current_user(), attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'Variable_List': (MATCH_RE,\n                                                   'SET_IN_SUBMISSION=true')},\n                           id=jid)\n\n    def 
test_option_V_dfltqsubargs_qsub_daemon(self):\n        \"\"\"\n        Test whether the changed value of the exported\n        environment variable is reflected if the submitted job\n        goes to qsub daemon.\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'default_qsub_arguments': '-V'})\n        os.environ[\"SET_IN_SUBMISSION\"] = \"true\"\n        j = Job(self.du.get_current_user())\n        jid = self.server.submit(j)\n        os.environ[\"SET_IN_SUBMISSION\"] = \"false\"\n        j1 = Job(self.du.get_current_user())\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'Variable_List': (MATCH_RE,\n                                                   'SET_IN_SUBMISSION=true')},\n                           id=jid)\n        self.server.expect(JOB, {'Variable_List': (MATCH_RE,\n                                                   'SET_IN_SUBMISSION=false')},\n                           id=jid1)\n\n    def test_passing_env_special_char_via_qsub(self):\n        \"\"\"\n        Submit a job with -v ENV_TEST=N:\\\\aa\\\\bb\\\\cc\\\\dd\\\\ee\\\\ff\\\\gg\\\\hh\\\\ii\n        and check that the value is passed correctly\n\n        NOTE: As per the Guide 5.2.4.7 Special Characters\n        in Variable_List Job Attribute\n        Python requires that double quotes\n        and backslashes also be escaped with a backslash\n        \"\"\"\n        a = {ATTR_v: 'ENV_TEST=\"N:\\\\aa\\\\bb\\\\cc\\\\dd\\\\ee\\\\ff\\\\gg\\\\hh\\\\ii\"'}\n        j2 = Job(TEST_USER, attrs=a)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        qstat = self.server.status(JOB, ATTR_v, id=jid2)\n        job_outfile = qstat[0]['Variable_List']\n        var_list = job_outfile.split(\",\")\n        exp_string = 'ENV_TEST=N:\\\\\\\\\\\\\\\\aa\\\\\\\\\\\\\\\\bb\\\\\\\\\\\\\\\\cc\\\\\\\\\\\\\\\\dd'\n        exp_string += 
'\\\\\\\\\\\\\\\\ee\\\\\\\\\\\\\\\\ff\\\\\\\\\\\\\\\\gg\\\\\\\\\\\\\\\\hh\\\\\\\\\\\\\\\\ii'\n        self.assertIn(exp_string, var_list)\n\n    def test_long_env(self):\n        \"\"\"\n        Test to verify that a job is able to process a\n        very long env attribute.\n        \"\"\"\n\n        env = \"VAR0=foobar\"\n        for i in range(1, 300):\n            env = f\"{env},VAR{i}=foobar\"\n\n        a = {ATTR_v: env}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        qstat = self.server.status(JOB, ATTR_v, id=jid)\n        var_list = qstat[0]['Variable_List'].split(\",\")\n\n        for i in range(0, 300):\n            exp_string = f\"VAR{i}=foobar\"\n            self.assertIn(exp_string, var_list)\n\n    def test_passing_non_ascii_env(self):\n        \"\"\"\n        Test to verify that a job is able to process a\n        non-ascii env attribute.\n        \"\"\"\n\n        import locale\n        target_locale = 'cs_CZ.UTF-8'\n        try:\n            locale.setlocale(locale.LC_ALL, target_locale)\n        except locale.Error:\n            msg = f\"Target locale {target_locale} not available.\"\n            self.skipTest(msg)\n\n        env = \"VAR=ěščřžýáíéů\"\n\n        a = {ATTR_v: env}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        qstat = self.server.status(JOB, ATTR_v, id=jid)\n        var_list = qstat[0]['Variable_List'].split(\",\")\n\n        exp_string = \"VAR=ěščřžýáíéů\"\n        self.assertIn(exp_string, var_list)\n"
  },
  {
    "path": "test/tests/functional/pbs_pbsnodes.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\n@tags('commands')\nclass TestPbsnodes(TestFunctional):\n\n    \"\"\"\n    This test suite contains regression tests for pbsnodes command.\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n\n        self.header = ['vnode', 'state', 'OS', 'hardware', 'host',\n                       'queue', 'mem', 'ncpus', 'nmics', 'ngpus', 'comment']\n        self.pbs_exec = self.server.pbs_conf['PBS_EXEC']\n        self.pbsnodes = [os.path.join(self.pbs_exec, 'bin', 'pbsnodes')]\n        self.svrname = self.server.pbs_server_name\n        self.hostA = self.moms.values()[0].shortname\n\n    def common_setUp(self):\n        \"\"\"\n        Common setUp for tests test_pbsnodes_as_user and test_pbsnodes_as_root\n        \"\"\"\n        TestFunctional.setUp(self)\n        self.server.manager(MGR_CMD_DELETE, NODE, id=\"\", sudo=True)\n        self.server.manager(MGR_CMD_CREATE, NODE, id=self.mom.shortname)\n        self.server.expect(NODE, {'state': 'free'})\n\n    def get_newnode_attrs(self, user):\n        \"\"\"\n        return expected values of attributes on a newly created node\n        \"\"\"\n        expect_dict = {}\n        expect_dict[ATTR_NODE_Mom] = self.mom.hostname\n        expect_dict[ATTR_NODE_ntype] = 'PBS'\n        expect_dict[ATTR_NODE_state] = 'free'\n        expect_dict[ATTR_rescavail + '.vnode'] = 
self.mom.shortname\n        expect_dict[ATTR_rescavail + '.host'] = self.mom.shortname\n        expect_dict[ATTR_NODE_resv_enable] = 'True'\n\n        if user == 'root':\n            expect_dict[ATTR_version] = self.server.pbs_version\n            expect_dict[ATTR_NODE_Port] = '15002'\n\n        if self.mom.is_cpuset_mom():\n            del expect_dict['resources_available.vnode']\n\n        return expect_dict\n\n    def verify_node_dynamic_val(self, last_state_change_time, available_ncpus,\n                                pcpus, sharing, available_mem):\n        \"\"\"\n        verifies that node dynamic attributes have the expected values\n        \"\"\"\n        sharing_list = ['default_shared', 'default_excl', 'default_exclhost',\n                        'ignore_excl', 'force_excl', 'force_exclhost']\n\n        # Verify that 'last_state_change_time' has a value in datetime format\n        last_state_change_time = str(last_state_change_time)\n        try:\n            time.strptime(last_state_change_time, \"%a %b %d %H:%M:%S %Y\")\n        except ValueError:\n            self.fail(\"'last_state_change_time' value is in incorrect format\")\n        else:\n            mssg = \"'last_state_change_time' value is in correct format\"\n            self.logger.info(mssg)\n\n        # check that resources_available.ncpus and pcpus values are positive\n        ncpus_val = int(available_ncpus)\n        if ncpus_val >= 0:\n            mssg = \"resources_available.ncpus has a positive int value\"\n            self.logger.info(mssg)\n        else:\n            self.fail(\"resources_available.ncpus has a negative value\")\n        pcpus_val = int(pcpus)\n        if pcpus_val >= 0:\n            self.logger.info(\"pcpus has a positive int value\")\n        else:\n            self.fail(\"pcpus has a negative value\")\n\n        # verify that pcpus and ncpus have the same value\n        if pcpus_val == ncpus_val:\n            self.logger.info(\"pcpus and ncpus have the same value\")\n        else:\n        
    self.fail(\"pcpus and ncpus do not have the same value\")\n\n        # verify that the node sharing attribute has one of the values in\n        # sharing_list\n        mssg = \"Node sharing attribute does not have an expected value\"\n        self.assertIn(sharing, sharing_list, mssg)\n\n        # check that the resources_available.mem value is positive\n        index = available_mem.find('kb')\n        if int(available_mem[:index]) >= 0:\n            mssg = \"resources_available.mem has a positive int value\"\n            self.logger.info(mssg)\n        else:\n            self.fail(\"resources_available.mem does not have a positive int value\")\n\n    def test_pbsnodes_S(self):\n        \"\"\"\n        This verifies that 'pbsnodes -S' results in a usage message\n        \"\"\"\n        pbsnodes_S = self.pbsnodes + ['-S']\n        out = self.du.run_cmd(self.svrname, cmd=pbsnodes_S)\n        self.logger.info(out['err'][0])\n        self.assertIn('usage:', out['err'][\n                      0], 'usage not found in error message')\n\n    def test_pbsnodes_S_host(self):\n        \"\"\"\n        This verifies that 'pbsnodes -S <host>' results in an output\n        with correct headers.\n        \"\"\"\n        pbsnodes_S_host = self.pbsnodes + ['-S', self.hostA]\n        out1 = self.du.run_cmd(self.svrname, cmd=pbsnodes_S_host)\n        self.logger.info(out1['out'])\n        for hdr in self.header:\n            self.assertIn(\n                hdr, out1['out'][0],\n                \"header %s not found in output\" % hdr)\n\n    def test_pbsnodes_aS(self):\n        \"\"\"\n        This verifies that 'pbsnodes -aS' results in an output\n        with correct headers.\n        \"\"\"\n        pbsnodes_aS = self.pbsnodes + ['-aS']\n        out2 = self.du.run_cmd(self.svrname, cmd=pbsnodes_aS)\n        self.logger.info(out2['out'])\n        for hdr in self.header:\n            self.assertIn(\n                hdr, out2['out'][0],\n                \"header %s not found in output\" % hdr)\n\n    def 
test_pbsnodes_av(self):\n        \"\"\"\n        This verifies the values of last_used_time in 'pbsnodes -av'\n        result before and after server shutdown, once a job submitted.\n        \"\"\"\n        j = Job(TEST_USER)\n        j.set_sleep_time(1)\n        jid = self.server.submit(j)\n        self.server.accounting_match(\"E;%s;\" % jid)\n        if self.mom.is_cpuset_mom():\n            i = 1\n        else:\n            i = 0\n\n        prev = self.server.status(NODE, 'last_used_time')[i]['last_used_time']\n        self.logger.info(\"Restarting server\")\n        self.server.restart()\n        self.assertTrue(self.server.isUp(), 'Failed to restart Server Daemon')\n\n        now = self.server.status(NODE, 'last_used_time')[i]['last_used_time']\n        self.logger.info(\"Before: \" + prev + \". After: \" + now + \".\")\n        self.assertEqual(prev.strip(), now.strip(),\n                         'Last used time mismatch after server restart')\n\n    @skipOnCray\n    def test_pbsnodes_as_user(self):\n        \"\"\"\n        Validate default values of node attributes for non-root user\n        \"\"\"\n        self.common_setUp()\n        attr_dict = {}\n        expected_attrs = self.get_newnode_attrs(TEST_USER)\n        command = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                               'bin', 'pbsnodes -a')\n        ret = self.du.run_cmd(self.server.hostname, command)\n        self.assertEqual(ret['rc'], 0)\n        attr_list = ret['out']\n        list_len = len(attr_list) - 1\n        for i in range(1, list_len):\n            attr = attr_list[i].split('=')[0].strip()\n            val = attr_list[i].split('=')[1].strip()\n            attr_dict[attr] = val\n\n        # comparing the pbsnodes -a output with expected result\n        for attr in expected_attrs:\n            self.assertEqual(expected_attrs[attr], attr_dict[attr])\n\n        self.verify_node_dynamic_val(attr_dict['last_state_change_time'],\n                                     
attr_dict['resources_available.ncpus'],\n                                     attr_dict['pcpus'], attr_dict['sharing'],\n                                     attr_dict['resources_available.mem'])\n\n    @tags('smoke')\n    @skipOnCray\n    def test_pbsnodes_as_root(self):\n        \"\"\"\n        Validate default values of node attributes for root user\n        \"\"\"\n        self.common_setUp()\n        attr_dict = {}\n        expected_attrs = self.get_newnode_attrs('root')\n        command = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                               'bin', 'pbsnodes -a')\n        ret = self.du.run_cmd(self.server.hostname, command, sudo=True)\n        self.assertEqual(ret['rc'], 0)\n        attr_list = ret['out']\n        list_len = len(attr_list) - 1\n        for i in range(1, list_len):\n            attr = attr_list[i].split('=')[0].strip()\n            val = attr_list[i].split('=')[1].strip()\n            attr_dict[attr] = val\n\n        # comparing the pbsnodes -a output with expected result\n        for attr in expected_attrs:\n            self.assertEqual(expected_attrs[attr], attr_dict[attr])\n\n        self.verify_node_dynamic_val(attr_dict['last_state_change_time'],\n                                     attr_dict['resources_available.ncpus'],\n                                     attr_dict['pcpus'], attr_dict['sharing'],\n                                     attr_dict['resources_available.mem'])\n"
  },
  {
    "path": "test/tests/functional/pbs_pbsnodes_output_trimmed.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport socket\n\nfrom tests.functional import *\n\n\nclass TestPbsnodesOutputTrimmed(TestFunctional):\n    \"\"\"\n    This test suite tests pbsnodes executable with -l\n    and makes sure that the node names are not trimmed.\n    \"\"\"\n\n    def test_pbsnodes_output(self):\n        \"\"\"\n        This method creates a new vnode with name more than\n        20 characters, and checks the pbsnodes output and\n        makes sure it is not trimmed\n        \"\"\"\n        self.server.manager(MGR_CMD_DELETE, NODE, None, '')\n        pbsnodes = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                \"bin\", \"pbsnodes\")\n        hname = \"long123456789012345678901234567890.pbs.com\"\n        a = {'resources_available.ncpus': 4}\n        rc = self.mom.create_vnodes(a, 1, vname=hname)\n        command = pbsnodes + \" -s \" + self.server.hostname + \\\n            ' -v ' + hname + \"[0]\"\n        rc = self.du.run_cmd(cmd=command, sudo=True)\n        self.assertEqual(rc['out'][0], hname + '[0]')\n        self.server.manager(MGR_CMD_DELETE, NODE, None, '')\n"
  },
  {
    "path": "test/tests/functional/pbs_peer.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nfrom tests.functional import *\n\n\nclass TestPeering(TestFunctional):\n    \"\"\"\n    Some tests for Peering queues\n    \"\"\"\n\n    cres = \"custom_res\"\n    cqueue = \"custom_queue\"\n\n    def create_resource(self, name=None, server=None):\n        if not name:\n            name = self.cres\n\n        if not server:\n            server = self.server\n\n        server.manager(MGR_CMD_CREATE, RSC,\n                       {'type': 'long', 'flag': 'q'},\n                       id=name)\n\n        self.scheduler.add_resource(name)\n\n    def create_queue(self, name=None, server=None, a=None):\n        if not name:\n            name = self.cqueue\n\n        if not server:\n            server = self.server\n\n        if not a:\n            a = {'queue_type': 'execution', 'enabled': True, 'started': True}\n\n        server.manager(MGR_CMD_CREATE, QUEUE, a, id=name)\n\n    def test_local_resc_limits(self):\n        \"\"\"\n        Test that a local peering queue enforces new limits\n        \"\"\"\n        self.create_resource()\n        self.create_queue()\n        a = {'resources_max.' 
+ self.cres: 4}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, id=self.cqueue)\n        a = {'started': False}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, id='workq')\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': False})\n\n        a = {'peer_queue': '\"' + self.cqueue + ' workq\"'}\n        self.scheduler.set_sched_config(a)\n\n        j = Job(TEST_USER, attrs={'Resource_List.' + self.cres: 100})\n        jid = self.server.submit(j)\n        j = Job(TEST_USER, attrs={'Resource_List.' + self.cres: 1})\n        jid2 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': True})\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n        msg = (jid + ';Failed to run: Job violates queue and/or server'\n               ' resource limits (15036)')\n        self.scheduler.log_match(msg)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n\n    @requirements(num_servers=2)\n    def test_remote_resc_limits(self):\n        \"\"\"\n        Test that a remote peering queue enforces new limits\n        \"\"\"\n        s1 = self.servers.values()[0]\n        s2 = self.servers.values()[1]\n        self.create_resource(server=s1)\n        self.create_resource(server=s2)\n        a = {'resources_max.' + self.cres: 4}\n        s1.manager(MGR_CMD_SET, QUEUE, a, id='workq')\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': False})\n\n        a = {'flatuid': True}\n        s1.manager(MGR_CMD_SET, SERVER, a)\n        s2.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {'peer_queue': '\"workq workq@' + s2.hostname + '\"'}\n        self.scheduler.set_sched_config(a)\n\n        a = {'Resource_List.' + self.cres: 100,\n             ATTR_queue: 'workq@' + s2.hostname}\n        j = Job(TEST_USER, attrs=a)\n        jid = s1.submit(j)\n        a['Resource_List.' 
+ self.cres] = 1\n        j = Job(TEST_USER, attrs=a)\n        jid2 = s1.submit(j)\n        s2.expect(JOB, {'job_state': 'Q'}, id=jid)\n        s2.expect(JOB, {'job_state': 'Q'}, id=jid2)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': True})\n        s1.manager(MGR_CMD_SET, SERVER, {'scheduling': True})\n        s1.expect(JOB, {'job_state': 'Q'}, id=jid)\n        msg = (jid + r';Failed to run: .* \\(15039\\)')\n        self.scheduler.log_match(msg, regexp=True)\n        msg = jid + ';send of job to workq@.* failed error = 15036'\n        s2.log_match(msg, regexp=True)\n        s1.expect(JOB, {'job_state': 'R'}, id=jid2)\n"
  },
  {
    "path": "test/tests/functional/pbs_periodic_constant.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass Test_periodic_constant(TestFunctional):\n    \"\"\"\n    Test if pbs.PERIODIC constant is available in pbs module.\n    \"\"\"\n    hook_script = \"\"\"\nimport pbs\ne = pbs.event()\nif e.type == pbs.PERIODIC:\n    pbs.logmsg(pbs.EVENT_DEBUG,\"This hook is using pbs.PERIODIC\")\n\"\"\"\n\n    def test_periodic_constant_via_server_log(self):\n        \"\"\"\n        Create periodic hook. The hook will test the availability of\n        pbs.PERIODIC. If constant is available the hook will write a log\n        message to the server_log.\n        \"\"\"\n        hook_name = \"periodic_constant\"\n        hook_attrib = {'event': 'periodic', 'freq': 5}\n        retval = self.server.create_import_hook(hook_name,\n                                                hook_attrib,\n                                                self.hook_script,\n                                                overwrite=True)\n        self.assertTrue(retval)\n\n        self.server.log_match(\"This hook is using pbs.PERIODIC\")\n"
  },
  {
    "path": "test/tests/functional/pbs_power_provisioning_cray.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\nimport json\nfrom subprocess import Popen, PIPE\nimport time\n\n\nclass Test_power_provisioning_cray(TestFunctional):\n\n    \"\"\"\n    Test power provisioning feature for the CRAY platform.\n\n    \"\"\"\n\n    def setUp(self):\n        \"\"\"\n        Use the MOMs that are already set up or define the ones passed in.\n        \"\"\"\n        TestFunctional.setUp(self)\n        pltfom = self.du.get_platform()\n        if pltfom != 'cray':\n            self.skipTest(\"%s: not a cray\" % pltfom)\n        self.mom.add_config({\"logevent\": \"0xfffffff\"})\n        a = {'log_events': '2047'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.nids = []\n        self.names = []\n        for n in self.server.status(NODE):\n            if 'resources_available.PBScraynid' in n:\n                self.names.append(n['id'])\n                craynid = n['resources_available.PBScraynid']\n                self.nids.append(craynid)\n        self.enable_power()     # enable hooks\n\n    def modify_hook_config(self, attrs, hook_id):\n        \"\"\"\n        Modify the hook config file contents\n        \"\"\"\n        conf_file = str(hook_id) + '.CF'\n        conf_file_path = os.path.join(self.server.pbs_conf['PBS_HOME'],\n                                      'server_priv', 'hooks', conf_file)\n        with open(conf_file_path) as 
data_file:\n            data = json.load(data_file)\n        for key, value in attrs.items():\n            data[key] = value\n        with open(conf_file_path, 'w') as fp:\n            json.dump(data, fp)\n        a = {'enabled': 'True'}\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id=hook_id, sudo=True)\n\n    def setup_cray_eoe(self):\n        \"\"\"\n        Set up an eoe list for all the nodes.\n        Get possible values for pcaps using the capmc command.\n        \"\"\"\n        for n in self.server.status(NODE):\n            if 'resources_available.PBScraynid' in n:\n                self.server.manager(MGR_CMD_SET, NODE,\n                                    {\"power_provisioning\": True}, n['id'])\n        # Divide the nodes into three parts and assign each part a\n        # different power profile; jobs are later submitted with chunk\n        # counts matching the number of nodes in each profile\n        self.npp = len(self.names) // 3  # integer division: used with range()\n        for i in range(len(self.names)):\n            if i in range(0, self.npp):\n                self.server.manager(MGR_CMD_SET, NODE,\n                                    {\"resources_available.eoe\": 'low'},\n                                    self.names[i])\n            elif i in range(self.npp, self.npp * 2):\n                self.server.manager(MGR_CMD_SET, NODE,\n                                    {\"resources_available.eoe\": 'med'},\n                                    self.names[i])\n            elif i in range(self.npp * 2, self.npp * 3):\n                self.server.manager(MGR_CMD_SET, NODE,\n                                    {\"resources_available.eoe\": 'high'},\n                                    self.names[i])\n\n        # Find nid range for capmc command\n        cmd = \"/opt/cray/capmc/default/bin/capmc \"\\\n              \"get_power_cap_capabilities --nids \" + ','.join(self.nids)\n        p = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)\n        (o, e) = p.communicate()\n     
   out = json.loads(o)\n        low = 0\n        med = 0\n        high = 0\n        rv = 'groups' in out\n        msg = \"Error while creating hook content from capmc output: \" + cmd\n        self.assertTrue(rv, msg)\n        for group in out['groups']:\n            for control in group['controls']:\n                if control['name'] == 'node':\n                    min_cap = control['min']\n                    max_cap = control['max']\n            pcap_list = {}\n            for nid in group['nids']:\n                pcap_list[nid] = {}\n                pcap_list[nid]['min'] = min_cap\n                pcap_list[nid]['max'] = max_cap\n                # Track the largest min and the smallest max so the chosen\n                # caps are valid on every node\n                if low == 0 or low < min_cap:\n                    low = min_cap\n                if high == 0 or high > max_cap:\n                    high = max_cap\n        # Get med as the integer mean of low and high\n        med = (low + high) // 2\n\n        # Now create the map_eoe hook file\n        hook_content = \"\"\"\nimport pbs\ne = pbs.event()\nj = e.job\nprofile = j.Resource_List['eoe']\nif profile is None:\n    res = j.Resource_List['select']\n    if res is not None:\n        for s in str(res).split('+')[0].split(':'):\n            if s[:4] == 'eoe=':\n                profile = s.partition('=')[2]\n                break\npbs.logmsg(pbs.LOG_DEBUG, \"got profile '%s'\" % str(profile))\nif profile == \"low\":\n    j.Resource_List[\"pcap_node\"] = LOW_PCAP\n    pbs.logmsg(pbs.LOG_DEBUG, \"set low\")\nelif profile == \"med\":\n    j.Resource_List[\"pcap_node\"] = MED_PCAP\n    pbs.logmsg(pbs.LOG_DEBUG, \"set med\")\nelif profile == \"high\":\n    j.Resource_List[\"pcap_node\"] = HIGH_PCAP\n    pbs.logmsg(pbs.LOG_DEBUG, \"set high\")\nelse:\n    pbs.logmsg(pbs.LOG_DEBUG, \"unhandled profile '%s'\" % str(profile))\n\ne.accept()\n\"\"\"\n\n        hook_content = hook_content.replace('LOW_PCAP', str(low))\n        hook_content = hook_content.replace('MED_PCAP', str(med))\n        hook_content = hook_content.replace('HIGH_PCAP', 
str(high))\n        hook_name = \"map_eoe\"\n        a = {'event': 'queuejob', 'enabled': 'true'}\n        rv = self.server.create_import_hook(hook_name, a, hook_content)\n        msg = \"Error while creating and importing hook contents\"\n        self.assertTrue(rv, msg)\n        msg = \"Hook %s created and \" % hook_name\n        msg += \"hook script is imported successfully\"\n        self.logger.info(msg)\n\n    def enable_power(self):\n        \"\"\"\n        Enable power_provisioning on the server.\n        \"\"\"\n        a = {'enabled': 'True'}\n        hook_name = \"PBS_power\"\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id=hook_name,\n                            sudo=True)\n\n        # check that hook becomes active\n        nodes = self.server.status(NODE)\n        n = nodes[0]\n        host = n['Mom']\n        self.assertIsNotNone(host)\n        mom = self.moms[host]\n        mom.log_match(\n            \"Hook;PBS_power.HK;copy hook-related file request received\",\n            starttime=self.server.ctime)\n\n    def disable_power(self):\n        \"\"\"\n        Disable power_provisioning on the server.\n        \"\"\"\n        a = {'enabled': 'False'}\n        hook_name = \"PBS_power\"\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id=hook_name,\n                            sudo=True)\n\n    def submit_job(self, secs=10, a=None):\n        \"\"\"\n        secs: sleep time for the job\n        a: any job attributes\n        \"\"\"\n        # Copy the attributes to avoid a shared mutable default argument\n        # and side effects on the caller's dict\n        attrs = dict(a) if a else {}\n        attrs['Keep_Files'] = 'oe'\n        j = Job(TEST_USER, attrs=attrs)\n        j.set_sleep_time(secs)\n        self.logger.info(str(j))\n        jid = self.server.submit(j)\n        self.job = j\n        return jid\n\n    def energy_check(self, jid):\n        s = self.server.accounting_match(\"E;%s;.*\" % jid,\n                                         regexp=True)\n        self.assertIsNotNone(s)\n        # got the account record, hack it apart\n        for resc in s[1].split(';')[3].split():\n     
       if resc.partition('=')[0] == \"resources_used.energy\":\n                return True\n        return False\n\n    def mom_logcheck(self, msg, jid=None):\n        mom = self.moms[self.host]           # top mom\n        if jid is not None:\n            mom.log_match(msg % jid,\n                          regexp=True, starttime=self.server.ctime,\n                          max_attempts=10)\n        else:\n            mom.log_match(msg,\n                          regexp=True, starttime=self.server.ctime,\n                          max_attempts=10)\n\n    def eoe_check(self, jid, eoe, secs):\n        # check that job is running and that the vnode has current_eoe set\n        # check for the appropriate log messages for cray\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        qstat = self.server.status(JOB, id=jid)\n        nodes = self.job.get_vnodes(self.job.exec_vnode)\n        for vname in nodes:\n            self.server.expect(VNODE, {'current_eoe': eoe}, id=vname)\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid, offset=secs)\n        self.host = qstat[0]['exec_host'].partition('/')[0]\n        self.mom_logcheck(\"capmc get_node_energy_counter --nids\")\n        self.mom_logcheck(\";Job;%s;energy usage\", jid)\n        self.mom_logcheck(\";Job;%s;Cray: pcap node\", jid)\n        self.mom_logcheck(\"capmc set_power_cap --nids\")\n        self.mom_logcheck(\";Job;%s;PMI: reset current_eoe\", jid)\n        self.mom_logcheck(\";Job;%s;Cray: remove pcap node\", jid)\n        for vname in nodes:\n            self.server.expect(VNODE, {'current_eoe': eoe}, id=vname, op=UNSET)\n\n    def eoe_job(self, num, eoe):\n        \"\"\"\n        Helper function to submit a job with an eoe value.\n        Parameters:\n        num: number of chunks\n        eoe: profile name\n        \"\"\"\n        secs = 10\n        jid = self.submit_job(secs,\n                              {'Resource_List.place': 'scatter',\n                               
'Resource_List.select': '%d:eoe=%s' % (num,\n                                                                      eoe)})\n        self.eoe_check(jid, eoe, secs)\n        return jid\n\n    def cleanup_power_on(self):\n        \"\"\"\n        Clean up by powering all the nodes back on.\n        \"\"\"\n        capmc_cmd = os.path.join(\n            os.sep, 'opt', 'cray', 'capmc', 'default', 'bin', 'capmc')\n        self.du.run_cmd(self.server.hostname, [\n                        capmc_cmd, 'node_on', '--nids',\n                        ','.join(self.nids)], sudo=True)\n        self.logger.info(\"Waiting for 15 mins to power on all the nodes\")\n        time.sleep(900)\n\n    def cleanup_power_ramp_rate(self):\n        \"\"\"\n        Clean up by ramping all the nodes back up.\n        \"\"\"\n        capmc_cmd = os.path.join(\n            os.sep, 'opt', 'cray', 'capmc', 'default', 'bin', 'capmc')\n        for nid in self.nids:\n            self.du.run_cmd(self.server.hostname, [\n                capmc_cmd, 'set_sleep_state_limit', '--nids',\n                str(nid), '--limit', '1'], sudo=True)\n            self.logger.info(\"ramping up the node with nid \" + str(nid))\n\n    def setup_power_ramp_rate(self):\n        \"\"\"\n        Offline the nodes which do not have sleep state capabilities.\n        \"\"\"\n        self.offnodes = 0\n        for n in self.server.status(NODE):\n            if 'resources_available.PBScraynid' in n:\n                nid = n['resources_available.PBScraynid']\n                cmd = os.path.join(os.sep, 'opt', 'cray',\n                                   'capmc', 'default', 'bin', 'capmc')\n                ret = self.du.run_cmd(self.server.hostname,\n                                      [cmd,\n                                       'get_sleep_state_limit_capabilities',\n                                       '--nids', str(nid)], sudo=True)\n                try:\n                    out = json.loads(ret['out'][0])\n                except 
Exception:\n                    out = None\n                if out is not None:\n                    errno = out[\"e\"]\n                    msg = out[\"err_msg\"]\n                    if errno == 52 and msg == \"Invalid exchange\":\n                        self.offnodes += 1\n                        a = {'state': 'offline'}\n                        self.server.manager(MGR_CMD_SET, NODE, a, id=n['id'])\n\n    @timeout(700)\n    def test_cray_eoe_job(self):\n        \"\"\"\n        Submit jobs with an eoe value and check that messages are logged\n        indicating PMI activity, and current_eoe and resources_used.energy\n        get set.\n        \"\"\"\n        self.setup_cray_eoe()\n        eoes = ['low', 'med', 'high']\n        for profile in eoes:\n            jid = self.eoe_job(self.npp, profile)\n            self.energy_check(jid)\n\n    @timeout(700)\n    def test_cray_request_more_eoe(self):\n        \"\"\"\n        Submit jobs with available+1 eoe chunks and verify the job comment.\n        \"\"\"\n        self.setup_cray_eoe()\n        x = self.npp + 1\n        jid = self.submit_job(10,\n                              {'Resource_List.place': 'scatter',\n                               'Resource_List.select': '%d:eoe=%s' % (x,\n                                                                      'high')})\n        self.server.expect(JOB, {\n            'job_state': 'Q',\n            'comment': 'Not Running: No available resources on nodes'},\n            id=jid)\n\n    @timeout(700)\n    def test_cray_eoe_job_multiple_eoe(self):\n        \"\"\"\n        Submit a job requesting multiple eoe values and verify that it\n        is rejected by qsub.\n        \"\"\"\n        self.setup_cray_eoe()\n        a = {'Resource_List.place': 'scatter',\n             'Resource_List.select': '10:eoe=low+10:eoe=high'}\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(10)\n        jid = None\n        try:\n            jid = self.server.submit(j)\n        except PbsSubmitError 
as e:\n            self.assertIn('Invalid provisioning request in chunk', e.msg[0])\n        self.assertFalse(jid)\n\n    @timeout(700)\n    def test_cray_server_prov_off(self):\n        \"\"\"\n        Submit jobs requesting eoe when power provisioning is unset on the\n        server and verify that the jobs won't run.\n        \"\"\"\n        self.setup_cray_eoe()\n        eoes = ['low', 'med', 'high']\n        a = {'enabled': 'False'}\n        hook_name = \"PBS_power\"\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id=hook_name,\n                            sudo=True)\n        self.server.expect(SERVER, {'power_provisioning': 'False'})\n        for profile in eoes:\n            jid = self.submit_job(10,\n                                  {'Resource_List.place': 'scatter',\n                                   'Resource_List.select': '%d:eoe=%s'\n                                   % (self.npp, profile)})\n            self.server.expect(JOB, {\n                'job_state': 'Q',\n                'comment': 'Not Running: No available resources on nodes'},\n                id=jid)\n\n    @timeout(700)\n    def test_cray_node_prov_off(self):\n        \"\"\"\n        Submit jobs requesting eoe and verify that the jobs won't run on\n        nodes where power_provisioning is set to false.\n        \"\"\"\n        self.setup_cray_eoe()\n        eoes = ['med', 'high']\n        # turn power_provisioning off on the nodes whose eoe is 'low'\n        for i in range(0, self.npp):\n            a = {'power_provisioning': 'False'}\n            self.server.manager(MGR_CMD_SET, NODE, a, id=self.names[i])\n\n        for profile in eoes:\n            jid = self.submit_job(10,\n                                  {'Resource_List.place': 'scatter',\n                                   'Resource_List.select': '%d:eoe=%s'\n                                   % (self.npp, profile)})\n            self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        jid_low = self.submit_job(10,\n 
                                 {'Resource_List.place': 'scatter',\n                                   'Resource_List.select': '%d:eoe=%s'\n                                   % (self.npp, 'low')})\n        exp_comm = 'Not Running: Insufficient amount of resource: '\n        exp_comm += 'vntype (cray_compute != cray_login)'\n        self.server.expect(JOB, {\n                           'job_state': 'Q',\n                           'comment': exp_comm}, attrop=PTL_AND, id=jid_low)\n\n    @timeout(700)\n    def test_cray_job_preemption(self):\n        \"\"\"\n        Submit a job to a high priority queue and verify\n        that the job is preempted by requeueing.\n        \"\"\"\n        self.setup_cray_eoe()\n        self.server.manager(MGR_CMD_CREATE, QUEUE,\n                            {'queue_type': 'execution', 'started': 'True',\n                             'enabled': 'True', 'priority': 150}, id='workq2')\n        jid = self.submit_job(10,\n                              {'Resource_List.place': 'scatter',\n                               'Resource_List.select': '%d:eoe=%s'\n                               % (self.npp, 'low')})\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        t = time.time()\n        jid_hp = self.submit_job(10, {ATTR_queue: 'workq2',\n                                      'Resource_List.place': 'scatter',\n                                      'Resource_List.select': '%d:eoe=%s' %\n                                      (self.npp, 'low')})\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid_hp)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n        self.scheduler.log_match(\"Job preempted by requeuing\", starttime=t)\n\n    def test_power_provisioning_attribute(self):\n        \"\"\"\n        Test that power_provisioning on the server is set to false when\n        the hook is disabled, and to true when it is enabled.\n        \"\"\"\n        self.enable_power()\n        a = {'power_provisioning': 'True'}\n        
self.server.expect(SERVER, a)\n\n        self.disable_power()\n        a = {'power_provisioning': 'False'}\n        self.server.expect(SERVER, a)\n\n    def test_poweroff_eligible_attribute(self):\n        \"\"\"\n        Test that we can set poweroff_eligible for nodes to true/false.\n        \"\"\"\n        nodes = self.server.status(NODE)\n        host = nodes[0]['id']\n        self.server.manager(MGR_CMD_SET, NODE, {'poweroff_eligible': 'True'},\n                            id=host)\n        self.server.manager(MGR_CMD_SET, NODE, {'poweroff_eligible': 'False'},\n                            id=host)\n\n    def test_last_state_change_time(self):\n        \"\"\"\n        Test that last_state_change_time is updated when a job runs and exits.\n        \"\"\"\n        pattern = '%a %b %d %H:%M:%S %Y'\n        self.server.manager(MGR_CMD_SET, SERVER, {\n                            'job_history_enable': 'True'})\n        nodes = self.server.status(NODE)\n        vnode = nodes[0]['resources_available.vnode']\n        ncpus = nodes[0]['resources_available.ncpus']\n        vntype = nodes[0]['resources_available.vntype']\n        jid = self.submit_job(5, {'Resource_List.vnode': vnode,\n                                  'Resource_List.ncpus': ncpus,\n                                  'Resource_List.vntype': vntype})\n        self.server.expect(JOB, {'job_state': 'F'}, id=jid, extend='x')\n        status = self.server.status(NODE, id=vnode)\n        fmttime = status[0][ATTR_NODE_last_state_change_time]\n        sts_time1 = int(time.mktime(time.strptime(fmttime, pattern)))\n        jid = self.submit_job(5, {'Resource_List.vnode': vnode,\n                                  'Resource_List.ncpus': ncpus,\n                                  'Resource_List.vntype': vntype})\n        self.server.expect(JOB, {'job_state': 'F'}, id=jid, extend='x')\n        status = self.server.status(NODE, id=vnode)\n        fmttime = status[0][ATTR_NODE_last_state_change_time]\n        sts_time2 = 
int(time.mktime(time.strptime(fmttime, pattern)))\n        self.assertGreater(sts_time2, sts_time1)\n\n    def test_last_used_time(self):\n        \"\"\"\n        Test that last_used_time is updated when a job runs and exits.\n        \"\"\"\n        pattern = '%a %b %d %H:%M:%S %Y'\n        self.server.manager(MGR_CMD_SET, SERVER, {\n                            'job_history_enable': 'True'})\n        nodes = self.server.status(NODE)\n        vnode = nodes[0]['resources_available.vnode']\n        vntype = nodes[0]['resources_available.vntype']\n        jid = self.submit_job(5, {'Resource_List.vnode': vnode,\n                                  'Resource_List.vntype': vntype})\n        self.server.expect(JOB, {'job_state': 'F'}, id=jid, extend='x')\n        status = self.server.status(NODE, id=vnode)\n        fmttime = status[0][ATTR_NODE_last_used_time]\n        sts_time1 = int(time.mktime(time.strptime(fmttime, pattern)))\n        jid = self.submit_job(5, {'Resource_List.vnode': vnode,\n                                  'Resource_List.vntype': vntype})\n        self.server.expect(JOB, {'job_state': 'F'}, id=jid, extend='x')\n        status = self.server.status(NODE, id=vnode)\n        fmttime = status[0][ATTR_NODE_last_used_time]\n        sts_time2 = int(time.mktime(time.strptime(fmttime, pattern)))\n        self.assertGreater(sts_time2, sts_time1)\n\n    @timeout(1200)\n    def test_power_off_nodes(self):\n        \"\"\"\n        Test that the power hook powers off the nodes when\n        power_on_off_enable is set and poweroff_eligible is true on the nodes.\n        \"\"\"\n        for n in self.server.status(NODE):\n            if 'resources_available.PBScraynid' in n:\n                self.server.manager(MGR_CMD_SET, NODE,\n                                    {\"poweroff_eligible\": True}, n['id'])\n        a = {\"power_on_off_enable\": True,\n             \"max_concurrent_nodes\": \"30\", 'node_idle_limit': '30'}\n        self.modify_hook_config(attrs=a, 
hook_id='PBS_power')\n        a = {'freq': 30}\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id='PBS_power',\n                            sudo=True)\n        a = {'enabled': 'True'}\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id='PBS_power',\n                            sudo=True)\n        t = time.time()\n        self.logger.info(\"Waiting for 4 mins to power off all the nodes\")\n        time.sleep(240)\n        self.server.log_match(\n            \"/opt/cray/capmc/default/bin/capmc node_off\", starttime=t)\n        # Expect sleep state on all nodes except the login node\n        self.server.expect(\n            NODE, {'state=sleep': len(self.server.status(NODE)) - 1})\n        self.cleanup_power_on()\n\n    @timeout(1200)\n    def test_power_on_off_max_concurrent_nodes(self):\n        \"\"\"\n        Test that per hook run the power hook powers off only the number\n        of nodes given by max_concurrent_nodes in the conf file, even when\n        poweroff_eligible is set to true on all the nodes.\n        \"\"\"\n        for n in self.server.status(NODE):\n            if 'resources_available.PBScraynid' in n:\n                self.server.manager(MGR_CMD_SET, NODE,\n                                    {\"poweroff_eligible\": True}, n['id'])\n        a = {\"power_on_off_enable\": True, 'node_idle_limit': '10',\n             'max_concurrent_nodes': '2'}\n        self.modify_hook_config(attrs=a, hook_id='PBS_power')\n        a = {'freq': 30}\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id='PBS_power',\n                            sudo=True)\n        a = {'enabled': 'True'}\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id='PBS_power',\n                            sudo=True)\n        self.logger.info(\"Waiting for 40 secs to power off 2 nodes\")\n        time.sleep(40)\n        a = {'enabled': 'False'}\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id='PBS_power',\n                            sudo=True)\n        
self.server.expect(NODE, {'state=sleep': 2})\n        self.cleanup_power_on()\n\n    def test_poweroff_eligible_false(self):\n        \"\"\"\n        Test that the hook won't power off the nodes where\n        poweroff_eligible is set to false.\n        \"\"\"\n        for n in self.server.status(NODE):\n            if 'resources_available.PBScraynid' in n:\n                self.server.manager(MGR_CMD_SET, NODE,\n                                    {\"poweroff_eligible\": False}, n['id'])\n        a = {\"power_on_off_enable\": True,\n             \"max_concurrent_nodes\": \"30\", 'node_idle_limit': '30'}\n        self.modify_hook_config(attrs=a, hook_id='PBS_power')\n        a = {'freq': 30}\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id='PBS_power',\n                            sudo=True)\n        a = {'enabled': 'True'}\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id='PBS_power',\n                            sudo=True)\n        self.logger.info(\n            \"Waiting for 100 secs to make sure no nodes are powered off\")\n        time.sleep(100)\n        self.server.expect(NODE, {'state=free': len(self.server.status(NODE))})\n\n    @timeout(900)\n    def test_power_on_nodes(self):\n        \"\"\"\n        Test that when a job is calendared on a vnode which is in sleep\n        state, the node will be powered on and the job will run.\n        \"\"\"\n        self.scheduler.set_sched_config({'strict_ordering': 'True ALL'})\n        self.server.manager(MGR_CMD_SET, NODE, {\n                            \"poweroff_eligible\": True}, self.names[0])\n        a = {\"power_on_off_enable\": True, 'node_idle_limit': '30'}\n        self.modify_hook_config(attrs=a, hook_id='PBS_power')\n        a = {'freq': 30}\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id='PBS_power',\n                            sudo=True)\n        a = {'enabled': 'True'}\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id='PBS_power',\n                            sudo=True)\n        t = 
time.time()\n        self.logger.info(\"Waiting for 1 min to power off the 1st node\")\n        time.sleep(60)\n        self.server.log_match(\n            \"/opt/cray/capmc/default/bin/capmc node_off\", starttime=t)\n        self.server.expect(NODE, {'state': 'sleep'}, id=self.names[0])\n        a = {\"node_idle_limit\": \"1800\", 'min_node_down_delay': '30'}\n        self.modify_hook_config(attrs=a, hook_id='PBS_power')\n        t = time.time()\n        jid = self.submit_job(1000, {'Resource_List.vnode': self.names[0]})\n        self.scheduler.log_match(\n            jid + ';Job is a top job and will run at',\n            max_attempts=10, starttime=t)\n        t = time.time()\n        self.logger.info(\"Waiting for 10 min to power on the 1st node\")\n        time.sleep(600)\n        self.server.log_match(\n            \"/opt/cray/capmc/default/bin/capmc node_on\", starttime=t)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.expect(NODE, {'state': 'job-exclusive'}, id=self.names[0])\n\n    @timeout(900)\n    def test_power_on_ramp_rate_nodes(self):\n        \"\"\"\n        Test that when both ramp rate and power on/off are enabled,\n        power_on_off_enable will override and the nodes will be powered off\n        and powered on.\n        \"\"\"\n        self.scheduler.set_sched_config({'strict_ordering': 'True ALL'})\n        self.server.manager(MGR_CMD_SET, NODE, {\n                            \"poweroff_eligible\": True}, self.names[0])\n        a = {\"power_on_off_enable\": True,\n             \"power_ramp_rate_enable\": True, 'node_idle_limit': '30'}\n        self.modify_hook_config(attrs=a, hook_id='PBS_power')\n        a = {'freq': 30}\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id='PBS_power',\n                            sudo=True)\n        a = {'enabled': 'True'}\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id='PBS_power',\n                            sudo=True)\n        t = time.time()\n        
self.logger.info(\"Waiting for 1 min to power off the 1st node\")\n        time.sleep(60)\n        self.server.log_match(\n            \"power_on_off_enable is over-riding power_ramp_rate_enable\",\n            starttime=t)\n        self.server.log_match(\n            \"/opt/cray/capmc/default/bin/capmc node_off\", starttime=t)\n        self.server.expect(NODE, {'state': 'sleep'}, id=self.names[0])\n        a = {\"node_idle_limit\": \"1800\", 'min_node_down_delay': '30'}\n        self.modify_hook_config(attrs=a, hook_id='PBS_power')\n        t = time.time()\n        jid = self.submit_job(1000, {'Resource_List.vnode': self.names[0]})\n        self.scheduler.log_match(\n            jid + ';Job is a top job and will run at',\n            max_attempts=10, starttime=t)\n        t = time.time()\n        self.logger.info(\"Waiting for 10 min to power on the 1st node\")\n        time.sleep(600)\n        self.server.log_match(\n            \"/opt/cray/capmc/default/bin/capmc node_on\", starttime=t)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.expect(NODE, {'state': 'job-exclusive'}, id=self.names[0])\n\n    @timeout(1200)\n    def test_power_on_min_node_down_delay(self):\n        \"\"\"\n        Test that when a job is calendared on a vnode which is in sleep\n        state, the node will not be powered on until min_node_down_delay\n        has elapsed.\n        \"\"\"\n        self.scheduler.set_sched_config({'strict_ordering': 'True ALL'})\n        self.server.manager(MGR_CMD_SET, NODE, {\n                            \"poweroff_eligible\": True}, self.names[0])\n        a = {\"power_on_off_enable\": True, 'min_node_down_delay': '3000'}\n        self.modify_hook_config(attrs=a, hook_id='PBS_power')\n        a = {'freq': 30}\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id='PBS_power',\n                            sudo=True)\n        a = {'enabled': 'True'}\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id='PBS_power',\n                            
sudo=True)\n        self.logger.info(\"Waiting for 1 min to power off the 1st node\")\n        time.sleep(60)\n        self.server.expect(NODE, {'state': 'sleep'}, id=self.names[0])\n        jid = self.submit_job(1000, {'Resource_List.vnode': self.names[0]})\n        t = time.time()\n        self.scheduler.log_match(\n            jid + ';Job is a top job and will run at',\n            max_attempts=10, starttime=t)\n        self.logger.info(\n            \"Waiting for 2 mins to make sure the node is not powered on\")\n        time.sleep(120)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n        self.server.expect(NODE, {'state': 'sleep'}, id=self.names[0])\n        self.cleanup_power_on()\n\n    @timeout(1800)\n    def test_max_jobs_analyze_limit(self):\n        \"\"\"\n        Test that even when 4 jobs are calendared, only the nodes assigned\n        to the first max_jobs_analyze_limit jobs will be considered\n        for powering on.\n        \"\"\"\n        self.scheduler.set_sched_config({'strict_ordering': 'True ALL'})\n        self.server.manager(MGR_CMD_SET, SERVER, {'backfill_depth': '4'})\n        for n in self.server.status(NODE):\n            if 'resources_available.PBScraynid' in n:\n                self.server.manager(MGR_CMD_SET, NODE, {\n                                    \"poweroff_eligible\": True}, n['id'])\n        a = {\"power_on_off_enable\": True, 'max_jobs_analyze_limit': '2',\n             'node_idle_limit': '30', 'max_concurrent_nodes': '30'}\n        self.modify_hook_config(attrs=a, hook_id='PBS_power')\n        a = {'freq': 30}\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id='PBS_power',\n                            sudo=True)\n        a = {'enabled': 'True'}\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id='PBS_power',\n                            sudo=True)\n        self.logger.info(\"Waiting for 2 mins to power off all the nodes\")\n        time.sleep(120)\n        # Expect sleep state on all nodes except the login node\n        
self.server.expect(\n            NODE, {'state=sleep': len(self.server.status(NODE)) - 1})\n        a = {\"node_idle_limit\": \"1800\", 'min_node_down_delay': '30'}\n        self.modify_hook_config(attrs=a, hook_id='PBS_power')\n        j1id = self.submit_job(1000, {'Resource_List.vnode': self.names[0]})\n        j2id = self.submit_job(1000, {'Resource_List.vnode': self.names[1]})\n        j3id = self.submit_job(1000, {'Resource_List.vnode': self.names[2]})\n        j4id = self.submit_job(1000, {'Resource_List.vnode': self.names[3]})\n        self.logger.info(\n            \"Waiting for 10 mins to power on the nodes which are calendared\")\n        time.sleep(600)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.expect(JOB, {'job_state': 'R'}, id=j1id)\n        self.server.expect(NODE, {'state': 'job-exclusive'}, id=self.names[0])\n        self.server.expect(JOB, {'job_state': 'R'}, id=j2id)\n        self.server.expect(NODE, {'state': 'job-exclusive'}, id=self.names[1])\n        self.server.expect(JOB, {'job_state': 'Q'}, id=j3id)\n        self.server.expect(NODE, {'state': 'sleep'}, id=self.names[2])\n        self.server.expect(JOB, {'job_state': 'Q'}, id=j4id)\n        self.server.expect(NODE, {'state': 'sleep'}, id=self.names[3])\n        self.cleanup_power_on()\n\n    def test_last_used_time_node_sort_key(self):\n        \"\"\"\n        Test last_used_time as node sort key.\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER, {\n                            'job_history_enable': 'True'})\n        i = 0\n        for n in self.server.status(NODE):\n            if 'resources_available.PBScraynid' in n:\n                if i > 1:\n                    self.server.manager(MGR_CMD_SET, NODE, {\n                        'state': 'offline'}, id=n['id'])\n                i += 1\n        a = {'node_sort_key': '\"last_used_time LOW\" ALL'}\n        self.scheduler.set_sched_config(a)\n        jid = self.submit_job(\n   
         1, {'Resource_List.select': '1:ncpus=1',\n                'Resource_List.place': 'excl'})\n        self.server.expect(JOB, {'job_state': 'F'}, id=jid, extend='x')\n        status = self.server.status(JOB, 'exec_vnode', id=jid, extend='x')\n        exec_vnode = status[0]['exec_vnode']\n        node1 = exec_vnode.split(':')[0][1:]\n        jid = self.submit_job(\n            1, {'Resource_List.select': '1:ncpus=1',\n                'Resource_List.place': 'excl'})\n        self.server.expect(JOB, {'job_state': 'F'}, id=jid, extend='x')\n        jid = self.submit_job(\n            1, {'Resource_List.select': '1:ncpus=1',\n                'Resource_List.place': 'excl'})\n        self.server.expect(JOB, {'job_state': 'F'}, id=jid, extend='x')\n        status = self.server.status(JOB, 'exec_vnode', id=jid, extend='x')\n        exec_vnode = status[0]['exec_vnode']\n        node2 = exec_vnode.split(':')[0][1:]\n        # Check that 3rd job falls on the same node as 1st job as per\n        # node_sort_key. 
Node on which 1st job ran has lower last_used_time\n        # than the node on which 2nd job ran.\n        self.assertEqual(node1, node2)\n\n    @timeout(1200)\n    def test_power_ramp_down_nodes(self):\n        \"\"\"\n        Test that the power hook ramps down the nodes if power_ramp_rate_enable\n        is enabled and node_idle_limit is reached.\n        \"\"\"\n        self.setup_power_ramp_rate()\n        a = {\"power_ramp_rate_enable\": True, 'node_idle_limit': '30'}\n        self.modify_hook_config(attrs=a, hook_id='PBS_power')\n        a = {'freq': 60}\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id='PBS_power',\n                            sudo=True)\n        a = {'enabled': 'True'}\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id='PBS_power',\n                            sudo=True)\n        self.logger.info(\"Waiting for 15 mins to ramp down all the nodes\")\n        time.sleep(900)\n        # Do not check for the offline nodes and 1 login node\n        nn = self.offnodes + 1\n        self.server.expect(\n            NODE, {'state=sleep': len(self.server.status(NODE)) - nn})\n        self.cleanup_power_ramp_rate()\n\n    @timeout(1000)\n    def test_power_ramp_down_max_concurrent_nodes(self):\n        \"\"\"\n        Test that the power hook ramps down only the number of\n        max_concurrent_nodes specified in the conf file per hook run.\n        \"\"\"\n        self.setup_power_ramp_rate()\n        a = {\"power_ramp_rate_enable\": True, 'node_idle_limit': '10',\n             'max_concurrent_nodes': '2'}\n        self.modify_hook_config(attrs=a, hook_id='PBS_power')\n        a = {'freq': 60}\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id='PBS_power',\n                            sudo=True)\n        a = {'enabled': 'True'}\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id='PBS_power',\n                            sudo=True)\n        self.logger.info(\"Waiting for 90 secs to ramp down 2 nodes\")\n        time.sleep(90)\n        a 
= {'enabled': 'False'}\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id='PBS_power',\n                            sudo=True)\n        self.server.expect(NODE, {'state=sleep': 2})\n        self.cleanup_power_ramp_rate()\n\n    @timeout(1500)\n    def test_power_ramp_up_nodes(self):\n        \"\"\"\n        Test that when a job is calendared on a vnode in sleep state,\n        the node is ramped up and the job runs.\n        \"\"\"\n        self.setup_power_ramp_rate()\n        self.scheduler.set_sched_config({'strict_ordering': 'True ALL'})\n        a = {\"power_ramp_rate_enable\": True, 'node_idle_limit': '30'}\n        self.modify_hook_config(attrs=a, hook_id='PBS_power')\n        a = {'freq': 60}\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id='PBS_power',\n                            sudo=True)\n        a = {'enabled': 'True'}\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id='PBS_power',\n                            sudo=True)\n        self.logger.info(\"Waiting for 15 mins to ramp down all the nodes\")\n        time.sleep(900)\n        self.server.expect(NODE, {'state': 'sleep'}, id=self.names[0])\n        a = {\"node_idle_limit\": \"1800\", 'min_node_down_delay': '30'}\n        self.modify_hook_config(attrs=a, hook_id='PBS_power')\n        t = time.time()\n        jid = self.submit_job(1000, {'Resource_List.vnode': self.names[0]})\n        self.scheduler.log_match(\n            jid + ';Job is a top job and will run at',\n            max_attempts=10, starttime=t)\n        self.logger.info(\"Waiting for 90 secs to ramp up the calendared node\")\n        time.sleep(90)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.expect(NODE, {'state': 'job-exclusive'}, id=self.names[0])\n        self.cleanup_power_ramp_rate()\n\n    @timeout(1200)\n    def test_max_jobs_analyze_limit_ramp_up(self):\n        \"\"\"\n        Test that even when 4 jobs are calendared, only nodes assigned\n        to 
max_jobs_analyze_limit number of jobs will be considered\n        for ramping up.\n        \"\"\"\n        self.setup_power_ramp_rate()\n        self.scheduler.set_sched_config({'strict_ordering': 'True ALL'})\n        self.server.manager(MGR_CMD_SET, SERVER, {'backfill_depth': '4'})\n        a = {\"power_ramp_rate_enable\": True,\n             'max_jobs_analyze_limit': '2', 'node_idle_limit': '30'}\n        self.modify_hook_config(attrs=a, hook_id='PBS_power')\n        a = {'freq': 60}\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id='PBS_power',\n                            sudo=True)\n        a = {'enabled': 'True'}\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id='PBS_power',\n                            sudo=True)\n        self.logger.info(\"Waiting for 15 mins to ramp down all the nodes\")\n        time.sleep(900)\n        # Do not check for the offline nodes and 1 login node\n        nn = self.offnodes + 1\n        self.server.expect(\n            NODE, {'state=sleep': len(self.server.status(NODE)) - nn})\n        a = {\"node_idle_limit\": \"1800\", 'min_node_down_delay': '30'}\n        self.modify_hook_config(attrs=a, hook_id='PBS_power')\n        j1id = self.submit_job(1000, {'Resource_List.vnode': self.names[0]})\n        j2id = self.submit_job(1000, {'Resource_List.vnode': self.names[1]})\n        j3id = self.submit_job(1000, {'Resource_List.vnode': self.names[2]})\n        j4id = self.submit_job(1000, {'Resource_List.vnode': self.names[3]})\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.logger.info(\"Waiting for 90 secs to ramp up the calendared nodes\")\n        time.sleep(90)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.expect(JOB, {'job_state': 'R'}, id=j1id)\n        self.server.expect(NODE, {'state': 'job-exclusive'}, id=self.names[0])\n        self.server.expect(JOB, {'job_state': 'R'}, id=j2id)\n        self.server.expect(NODE, {'state': 
'job-exclusive'}, id=self.names[1])\n        self.server.expect(JOB, {'job_state': 'Q'}, id=j3id)\n        self.server.expect(NODE, {'state': 'sleep'}, id=self.names[2])\n        self.server.expect(JOB, {'job_state': 'Q'}, id=j4id)\n        self.server.expect(NODE, {'state': 'sleep'}, id=self.names[3])\n        self.cleanup_power_ramp_rate()\n\n    @timeout(1200)\n    def test_power_ramp_up_poweroff_eligible(self):\n        \"\"\"\n        Test that nodes are considered for ramp down and ramp up\n        even when poweroff_eligible is set to false.\n        \"\"\"\n        self.setup_power_ramp_rate()\n        self.scheduler.set_sched_config({'strict_ordering': 'True ALL'})\n        self.server.manager(MGR_CMD_SET, NODE, {'poweroff_eligible': 'False'},\n                            id=self.names[0])\n        a = {\"power_ramp_rate_enable\": True, 'node_idle_limit': '30'}\n        self.modify_hook_config(attrs=a, hook_id='PBS_power')\n        a = {'freq': 60}\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id='PBS_power',\n                            sudo=True)\n        a = {'enabled': 'True'}\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id='PBS_power',\n                            sudo=True)\n        self.logger.info(\"Waiting for 15 mins to ramp down all the nodes\")\n        time.sleep(900)\n        self.server.expect(NODE, {'state': 'sleep'}, id=self.names[0])\n        a = {\"node_idle_limit\": \"1800\", 'min_node_down_delay': '30'}\n        self.modify_hook_config(attrs=a, hook_id='PBS_power')\n        t = time.time()\n        jid = self.submit_job(1000, {'Resource_List.vnode': self.names[0]})\n        self.scheduler.log_match(\n            jid + ';Job is a top job and will run at',\n            max_attempts=10, starttime=t)\n        self.logger.info(\"Waiting for 90 secs to ramp up the calendared node\")\n        time.sleep(90)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.expect(NODE, {'state': 'job-exclusive'}, 
id=self.names[0])\n        self.cleanup_power_ramp_rate()\n\n    def tearDown(self):\n        a = {\"power_ramp_rate_enable\": False,\n             \"power_on_off_enable\": False,\n             'node_idle_limit': '1800',\n             'min_node_down_delay': '1800',\n             \"max_jobs_analyze_limit\": \"100\",\n             \"max_concurrent_nodes\": \"5\"}\n        self.modify_hook_config(attrs=a, hook_id='PBS_power')\n        self.disable_power()\n        TestFunctional.tearDown(self)\n"
  },
  {
    "path": "test/tests/functional/pbs_power_provisioning_sgi.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\n@requirements(no_mom_on_server=True)\nclass Test_power_provisioning_sgi(TestFunctional):\n\n    \"\"\"\n    Test power provisioning feature for the SGI platform.\n\n    Create a stub SGI API script at /opt/sgi/ta and load eoes from it.\n    \"\"\"\n    script = \\\n        \"\"\"\n# Fake SGI API python\nimport time\n\ndef VerifyConnection():\n    return \"connected\"\n\ndef ListAvailableProfiles():\n    return ['100W', '150W', '200W', '250W', '300W', '350W', '400W', '450W',\n            '500W', 'NONE']\n\ndef MonitorStart( nodeset_name, profile ):\n    return None\n\ndef MonitorReport( nodeset_name ):\n    # fake an energy value\n    fmt = \"%Y/%d/%m\"\n    now = time.time()\n    st = time.strptime(time.strftime(fmt, time.localtime(now)), fmt)\n    night = time.mktime(st)\n    return ['total_energy', (now - night)/60000, 1415218704.5979109]\n\ndef MonitorStop( nodeset_name ):\n    return None\n\ndef NodesetCreate( nodeset_name, node_hostname_list ):\n    return None\n\ndef NodesetDelete( nodeset_name ):\n    return None\n\"\"\"\n    power_nodes = None\n\n    def setUp(self):\n        \"\"\"\n        Don't set any special flags.\n        Use the MoMs that are already set up or define the ones passed in.\n        \"\"\"\n        TestFunctional.setUp(self)\n        nodes = self.server.status(NODE)\n        
if self.check_mom_configuration():\n            for n in nodes:\n                host = n['Mom']\n                if host is None:\n                    continue\n                # Delete the server side Mom\n                if host == self.server.shortname:\n                    self.server.manager(MGR_CMD_DELETE, NODE, None, host)\n                    break\n            # setup environment for power provisioning\n            self.power_nodes = self.setup_sgi_api(self.script)\n            if self.power_nodes == 0:\n                self.skip_test(\"No mom found with power profile setup\")\n            else:\n                # enable power hook\n                self.enable_power()\n                for i in range(0, len(self.moms)):\n                    a = {'power_provisioning': 'True'}\n                    self.server.manager(\n                        MGR_CMD_SET, NODE, a, id=self.moms.keys()[i])\n        else:\n            self.skip_test(\"No mom defined on non-server host\")\n\n    def check_mom_configuration(self):\n        \"\"\"\n        There needs to be at least one Mom that is not running on the\n        server host.\n        \"\"\"\n        moms = self.server.filter(NODE, 'Mom')\n        if moms is not None:\n            for filt in moms.values():\n                if filt[0] != self.server.shortname:\n                    self.logger.info(\"found different mom %s from local %s\" %\n                                     (filt, self.server.shortname))\n                    return True\n            return False\n        else:\n            self.skip_test(\n                \"No mom found at server/non-server host\")\n\n    def setup_sgi_api(self, script, perm=0o755):\n        \"\"\"\n        Set up a fake sgi_api script on all the nodes.\n        Return the number of nodes.\n        \"\"\"\n        fn = self.du.create_temp_file(body=script)\n        
self.du.chmod(path=fn, mode=perm, sudo=True)\n\n        done = set()\n        nodes = self.server.status(NODE)\n        for n in nodes:\n            host = n['Mom']\n            if host is None:\n                continue\n            if host in done:\n                continue\n            done.add(host)\n            pwr_dir = os.path.join(os.sep, \"opt\", \"clmgr\", \"power-service\")\n            dest = os.path.join(pwr_dir, \"hpe_clmgr_power_api.py\")\n            self.server.du.run_cmd(host, \"mkdir -p \" + pwr_dir, sudo=True)\n            self.server.du.run_copy(host, src=fn, dest=dest, sudo=True)\n            # Set PBS_PMINAME=sgi in pbs_environment so the power hook\n            # will use the SGI functionality.\n            mom = self.moms[host]\n            if mom is not None:\n                environ = {\"PBS_PMINAME\": \"sgi\"}\n                self.server.du.set_pbs_environment(host,\n                                                   environ=environ)\n                self.server.du.run_cmd(host, \"chown root %s\" %\n                                       os.path.join(mom.pbs_conf[\n                                                    'PBS_HOME'],\n                                                    \"pbs_environment\"),\n                                       sudo=True)\n            else:\n                self.skip_test(\"Need to pass at least one mom; \"\n                               \"use -p moms=<mom1:mom2>\")\n\n        os.remove(fn)\n        return len(nodes)\n\n    def revert_sgi_api(self):\n        \"\"\"\n        Remove the fake sgi_api script from the nodes.\n        \"\"\"\n        done = set()\n        nodes = self.server.status(NODE)\n        for n in nodes:\n            host = n['Mom']\n            if host is None:\n                continue\n            if host in done:\n                continue\n            done.add(host)\n            pwr_dir = os.path.join(os.sep, \"opt\", \"clmgr\", \"power-service\")\n       
     dest = os.path.join(pwr_dir, \"hpe_clmgr_power_api.py\")\n            self.server.du.run_cmd(host, \"rm \" + dest, sudo=True)\n\n    def enable_power(self):\n        \"\"\"\n        Enable power_provisioning on the server.\n        \"\"\"\n        a = {'enabled': 'True'}\n        hook_name = \"PBS_power\"\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id=hook_name,\n                            sudo=True)\n        done = set()\t\t# check that hook becomes active\n        nodes = self.server.status(NODE)\n        for n in nodes:\n            host = n['Mom']\n            if host is None:\n                continue\n            if host in done:\n                continue\n            done.add(host)\n            mom = self.moms[host]\n            s = mom.log_match(\n                \"Hook;PBS_power.HK;copy hook-related file request received\",\n                starttime=self.server.ctime, max_attempts=60)\n            self.assertTrue(s)\n            mom.signal(\"-HUP\")\n\n    def submit_job(self, secs=10, attr=None):\n        \"\"\"\n        secs: sleep time for the job\n        attr: any job attributes\n        \"\"\"\n        if attr is None:\n            attr = {}\n        attr['Keep_Files'] = 'oe'\n        j = Job(TEST_USER, attrs=attr)\n        j.set_sleep_time(secs)\n        self.logger.info(str(j))\n        jid = self.server.submit(j)\n        return jid\n\n    def energy_check(self, jid):\n        s = self.server.accounting_match(\"E;%s;.*\" % jid,\n                                         regexp=True)\n        self.assertTrue(s is not None)\n        # got the account record, hack it apart\n        for resc in s[1].split(';')[3].split():\n            if resc.partition('=')[0] == \"resources_used.energy\":\n                return True\n        return False\n\n    def eoe_check(self, jid, eoe, secs):\n        # check that job is running and that the vnode has current_eoe set\n        qstat = self.server.status(JOB, id=jid)\n        vname = qstat[0]['exec_vnode'].partition(':')[0].strip('(')\n        self.server.expect(VNODE, 
{'current_eoe': eoe}, id=vname)\n        self.server.expect(JOB, 'job_state', op=UNSET, id=jid, offset=secs)\n        host = qstat[0]['exec_host'].partition('/')[0]\n        mom = self.moms[host]\t\t# top mom\n        s = mom.log_match(\".*;Job;%s;PMI: reset current_eoe.*\" % jid,\n                          regexp=True, starttime=self.server.ctime,\n                          max_attempts=10)\n        self.assertTrue(s)\n        # check that vnode has current_eoe unset\n        self.server.expect(VNODE, {'current_eoe': eoe}, id=vname, op=UNSET)\n\n    def eoe_job(self, num, eoe):\n        \"\"\"\n        Helper function to submit a job with an eoe value.\n        Parameters:\n        num: number of chunks\n        eoe: profile name\n        \"\"\"\n        secs = 10\n        jid = self.submit_job(secs,\n                              {'Resource_List.select': '%d:eoe=%s' % (num,\n                                                                      eoe)})\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.eoe_check(jid, eoe, secs)\n        return jid\n\n    def test_sgi_job(self):\n        \"\"\"\n        Submit jobs with an eoe value and check that messages are logged\n        indicating PMI activity, and current_eoe and resources_used.energy\n        get set.\n        \"\"\"\n        # Make sure eoe is set correctly on the vnodes\n        eoes = set()\t\t# use sets to be order independent\n        nodes = list()\n        for n in self.server.status(NODE):\n            name = n['id']\n            if 'resources_available.eoe' in n:\n                self.server.manager(MGR_CMD_SET, NODE,\n                                    {\"power_provisioning\": True}, name)\n                nodes.append(name)\n                curr = n['resources_available.eoe'].split(',')\n                self.logger.info(\"%s has eoe values %s\" % (name, str(curr)))\n                if len(eoes) == 0:  # empty set\n                    eoes.update(curr)\n                
else:  # all vnodes must have same eoes\n                    self.assertTrue(eoes == set(curr))\n        self.assertTrue(len(eoes) > 0)\n\n        # submit jobs for each eoe value\n        while len(eoes) > 0:\n            eoe = eoes.pop()\n            for x in range(1, len(nodes) + 1):\n                jid = self.eoe_job(x, eoe)\n                self.energy_check(jid)\n\n    def test_sgi_eoe_job(self):\n        \"\"\"\n        Submit jobs with eoe values and check that messages are logged\n        indicating PMI activity, and current_eoe and resources_used.energy\n        get set.\n        \"\"\"\n        eoes = ['100W', '150W', '450W']\n        for x in range(1, self.power_nodes + 1):\n            while len(eoes) > 0:\n                eoe_profile = eoes.pop()\n                jid = self.eoe_job(x, eoe_profile)\n                self.energy_check(jid)\n\n    def test_sgi_request_more_power_nodes(self):\n        \"\"\"\n        Submit a job with available+1 power nodes and verify the job comment.\n        \"\"\"\n        total_nodes = self.power_nodes + 1\n        jid = self.submit_job(10, {'Resource_List.place': 'scatter',\n                                   'Resource_List.select': '%d:eoe=%s'\n                                   % (total_nodes, '150W')})\n        msg = \"Can Never Run: Not enough total nodes available\"\n        self.server.expect(JOB, {'job_state': 'Q', 'comment': msg},\n                           id=jid)\n\n    def test_sgi_job_multiple_eoe(self):\n        \"\"\"\n        Submit a job requesting multiple eoe values; it should be\n        rejected by qsub.\n        \"\"\"\n        try:\n            a = {'Resource_List.place': 'scatter',\n                 'Resource_List.select': '10:eoe=150W+10:eoe=300W'}\n            self.submit_job(attr=a)\n            self.fail(\"job submission should have been rejected\")\n        except PbsSubmitError as e:\n            self.assertTrue(\n                'Invalid provisioning request in chunk' in e.msg[0])\n\n    def test_sgi_server_prov_off(self):\n        \"\"\"\n        Submit jobs requesting 
eoe when power provisioning is unset on the server\n        and verify that the jobs won't run.\n        \"\"\"\n        a = {'enabled': 'False'}\n        hook_name = \"PBS_power\"\n        self.server.manager(MGR_CMD_SET, PBS_HOOK, a, id=hook_name,\n                            sudo=True)\n        self.server.expect(SERVER, {'power_provisioning': 'False'})\n        eoes = ['150W', '300W', '450W']\n        for profile in eoes:\n            jid = self.submit_job(10,\n                                  {'Resource_List.place': 'scatter',\n                                   'Resource_List.select': '%d:eoe=%s'\n                                   % (self.power_nodes, profile)})\n            self.server.expect(JOB, {\n                'job_state': 'Q',\n                'comment': 'Not Running: No available resources on nodes'},\n                id=jid)\n\n    def test_sgi_node_prov_off(self):\n        \"\"\"\n        Submit jobs requesting eoe and verify that jobs won't run on\n        nodes where power provisioning is set to false.\n        \"\"\"\n        eoes = ['100W', '250W', '300W', '400W']\n        # set power_provisioning to false on all the power nodes\n        for i in range(0, self.power_nodes):\n            a = {'power_provisioning': 'False'}\n            self.server.manager(\n                MGR_CMD_SET, NODE, a, id=self.moms.keys()[i])\n        for profile in eoes:\n            jid = self.submit_job(10,\n                                  {'Resource_List.place': 'scatter',\n                                   'Resource_List.select': '%d:eoe=%s'\n                                   % (self.power_nodes, profile)})\n            msg = \"Not Running: No available resources on nodes\"\n            self.server.expect(JOB, {'job_state': 'Q', 'comment': msg},\n                               id=jid)\n\n    def test_sgi_job_preemption(self):\n        \"\"\"\n        Submit a job to a high priority queue and verify\n        that the job is preempted by requeueing.\n        \"\"\"\n        for 
i in range(0, self.power_nodes):\n            a = {'resources_available.ncpus': 1}\n            self.server.manager(\n                MGR_CMD_SET, NODE, a, id=self.moms.keys()[i])\n        self.server.manager(MGR_CMD_CREATE, QUEUE,\n                            {'queue_type': 'execution', 'started': 'True',\n                             'enabled': 'True', 'priority': 150}, id='workq2')\n        jid = self.submit_job(30,\n                              {'Resource_List.place': 'scatter',\n                               'Resource_List.select': '%d:eoe=%s'\n                               % (self.power_nodes, '150W')})\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        t = time.time()\n        jid_workq2 = self.submit_job(10, {ATTR_queue: 'workq2',\n                                          'Resource_List.place': 'scatter',\n                                          'Resource_List.select': '%d:eoe=%s' %\n                                          (self.power_nodes, '150W')})\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid_workq2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n        self.scheduler.log_match(\"Job preempted by requeuing\", starttime=t)\n\n    def tearDown(self):\n        # remove SGI fake script file\n        self.revert_sgi_api()\n        TestFunctional.tearDown(self)\n"
  },
  {
    "path": "test/tests/functional/pbs_preemption.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestPreemption(TestFunctional):\n    \"\"\"\n    Contains tests for scheduler's preemption functionality\n    \"\"\"\n    chk_script = \"\"\"#!/bin/bash\nkill $1\nexit 0\n\"\"\"\n    chk_script_fail = \"\"\"#!/bin/bash\nexit 1\n\"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        # create express queue\n        a = {'queue_type': 'execution',\n             'started': 'True',\n             'enabled': 'True',\n             'Priority': 200}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, \"expressq\")\n        if len(self.moms) == 2:\n            self.mom1 = self.moms.keys()[0]\n            self.mom2 = self.moms.keys()[1]\n            # Since some tests need multi-node setup and majority don't,\n            # delete the second node so that single node tests don't fail.\n            # Tests needing multi-node setup will create the second node\n            # explicitly.\n            self.server.manager(MGR_CMD_DELETE, NODE, id=self.mom2)\n\n    def submit_jobs(self):\n        \"\"\"\n        Function to submit two normal jobs and one high priority job\n        \"\"\"\n        j1 = Job(TEST_USER)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, 
{ATTR_state: 'R'}, id=jid1)\n        time.sleep(1)\n        j2 = Job(TEST_USER)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n\n        j3 = Job(TEST_USER)\n        j3.set_attributes({ATTR_q: 'expressq'})\n        jid3 = self.server.submit(j3)\n\n        return jid1, jid2, jid3\n\n    def submit_and_preempt_jobs(self, preempt_order='R', order=None,\n                                job_array=False, extra_attrs=None):\n        \"\"\"\n        This function will set the preempt order, submit jobs,\n        preempt jobs and do log_match()\n        \"\"\"\n        if preempt_order[-1] == 'R':\n            job_state = 'Q'\n            preempted_by = 'requeuing'\n        elif preempt_order[-1] == 'C':\n            job_state = 'Q'\n            preempted_by = 'checkpointing'\n        elif preempt_order[-1] == 'S':\n            job_state = 'S'\n            preempted_by = 'suspension'\n        elif preempt_order[-1] == 'D':\n            job_state = ''\n            preempted_by = 'deletion'\n\n        # construct preempt_order with a number in between.  
We use 50\n        # since that will cause a different preempt_order to be used for the\n        # first 50% and a different one for the second 50%\n        if order == 1:  # first half\n            po = '\"' + preempt_order + ' 50 S\"'\n        elif order == 2:  # second half\n            po = '\"S 50 ' + preempt_order + '\"'\n        else:\n            po = preempt_order\n\n        # set preempt order\n        self.server.manager(MGR_CMD_SET, SCHED, {'preempt_order': po})\n\n        lpattrs = {ATTR_l + '.select': '1:ncpus=1', ATTR_l + '.walltime': 40}\n        if job_array is True:\n            lpattrs[ATTR_J] = '1-3'\n        if extra_attrs is not None:\n            lpattrs.update(extra_attrs)\n\n        # submit a job to regular queue\n        j1 = Job(TEST_USER, lpattrs)\n        jid1 = self.server.submit(j1)\n        if job_array is True:\n            run_state = 'B'\n        else:\n            run_state = 'R'\n        self.server.expect(JOB, {'job_state': run_state}, id=jid1)\n\n        if job_array is True:\n            jids1 = j1.create_subjob_id(jid1, 1)\n            self.server.expect(JOB, {'job_state': 'R'}, id=jids1)\n\n        if order == 2:\n            self.logger.info('Sleep 30s until the job is over 50% done')\n            time.sleep(30)\n\n        # submit a job to high priority queue\n        j2 = Job(TEST_USER, {'queue': 'expressq'})\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n\n        if job_array is True:\n            jid = jids1\n        else:\n            jid = jid1\n\n        if preempt_order[-1] != 'D':\n            self.server.expect(JOB, {'job_state': job_state}, id=jid)\n        elif job_array is True:\n            self.server.expect(JOB, {'job_state': 'X'}, id=jids1)\n        else:\n            self.server.expect(JOB, 'queue', op=UNSET, id=jid)\n\n        self.scheduler.log_match(jid + \";Job preempted by \" + preempted_by)\n\n    def test_preempt_suspend(self):\n        \"\"\"\n  
      Test that a job is preempted by suspension\n        \"\"\"\n        self.submit_and_preempt_jobs(preempt_order='S')\n\n    def test_preempt_suspend_ja(self):\n        \"\"\"\n        Test that a subjob is preempted by suspension\n        \"\"\"\n        self.submit_and_preempt_jobs(preempt_order='S', job_array=True)\n\n    def test_preempt_checkpoint(self):\n        \"\"\"\n        Test that a job is preempted with checkpoint\n        \"\"\"\n        self.mom.add_checkpoint_abort_script(body=self.chk_script)\n        self.submit_and_preempt_jobs(preempt_order='C')\n\n    def test_preempt_checkpoint_requeue(self):\n        \"\"\"\n        Test that when checkpoint fails, a job is correctly requeued\n        \"\"\"\n        # no checkpoint script, should requeue\n        self.submit_and_preempt_jobs(preempt_order='CR')\n        self.server.cleanup_jobs()\n\n        # checkpoint script fails, should requeue\n        self.mom.add_checkpoint_abort_script(body=self.chk_script_fail)\n        self.submit_and_preempt_jobs(preempt_order='CR')\n\n    def test_preempt_requeue(self):\n        \"\"\"\n        Test that a job is preempted by requeue\n        \"\"\"\n        self.submit_and_preempt_jobs(preempt_order='R')\n\n    @skipOnCpuSet\n    def test_preempt_requeue_exclhost(self):\n        \"\"\"\n        Test that a job is preempted by requeue on node\n        where attribute share is set to force_exclhost\n        \"\"\"\n        # set node share attribute to force_exclhost\n        a = {'resources_available.ncpus': '1',\n             'sharing': 'force_exclhost'}\n        self.mom.create_vnodes(attrib=a, num=0)\n        start_time = time.time()\n        self.submit_and_preempt_jobs(preempt_order='R')\n        self.scheduler.log_match(\n            \"Failed to run: Resource temporarily unavailable (15044)\",\n            existence=False, starttime=start_time,\n            max_attempts=5)\n\n    def test_preempt_requeue_ja(self):\n        \"\"\"\n        Test that a 
subjob is preempted by requeue\n        \"\"\"\n        self.submit_and_preempt_jobs(preempt_order='R', job_array=True)\n\n    def test_preempt_delete(self):\n        \"\"\"\n        Test preempt via delete correctly deletes a job\n        \"\"\"\n        self.submit_and_preempt_jobs(preempt_order='D')\n\n    def test_preempt_delete_ja(self):\n        \"\"\"\n        Test preempt via delete correctly deletes a subjob\n        \"\"\"\n\n        self.submit_and_preempt_jobs(preempt_order='D', job_array=True)\n\n    def test_preempt_checkpoint_delete(self):\n        \"\"\"\n        Test that when checkpoint fails, a job is correctly deleted\n        \"\"\"\n        self.mom.add_checkpoint_abort_script(body=self.chk_script_fail)\n        self.submit_and_preempt_jobs(preempt_order='CD')\n\n    def test_preempt_rerunable_false(self):\n        # in CLI mode Rerunnable requires a 'n' value.  It's different with API\n        m = self.server.get_op_mode()\n\n        self.server.set_op_mode(PTL_CLI)\n        a = {'Rerunable': 'n'}\n        self.submit_and_preempt_jobs(preempt_order='RD', extra_attrs=a)\n\n        self.server.set_op_mode(m)\n\n    def test_preempt_checkpoint_false(self):\n        # in CLI mode Checkpoint requires a 'n' value.  
It's different with API\n        m = self.server.get_op_mode()\n        self.server.set_op_mode(PTL_CLI)\n        self.mom.add_checkpoint_abort_script(body=self.chk_script)\n        a = {'Checkpoint': 'n'}\n        self.submit_and_preempt_jobs(preempt_order='CD', extra_attrs=a)\n\n        self.server.set_op_mode(m)\n\n    def test_preempt_order_requeue_first(self):\n        \"\"\"\n        Test that a low priority job is requeued if preempt_order is in\n        the form of 'R 50 S' and the job is in the first 50% of its run time\n        \"\"\"\n        self.submit_and_preempt_jobs(preempt_order='R', order=1)\n\n    def test_preempt_order_requeue_second(self):\n        \"\"\"\n        Test that a low priority job is requeued if preempt_order is in\n        the form of 'S 50 R' and the job is in the second 50% of its run time\n        \"\"\"\n        self.submit_and_preempt_jobs(preempt_order='R', order=2)\n\n    def test_preempt_requeue_never_run(self):\n        \"\"\"\n        Test that a job is preempted by requeue and the scheduler does not\n        report the job as can never run\n        \"\"\"\n        start_time = time.time()\n        self.submit_and_preempt_jobs(preempt_order='R')\n        self.scheduler.log_match(\n            \";Job will never run\", existence=False, starttime=start_time,\n            max_attempts=5)\n\n    def test_preempt_multiple_jobs(self):\n        \"\"\"\n        Test that multiple jobs are preempted by one large high priority job\n        \"\"\"\n        a = {'resources_available.ncpus': 10}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        for _ in range(10):\n            a = {'Resource_List.select': '1:ncpus=1',\n                 'Resource_List.walltime': 40}\n            j = Job(TEST_USER, a)\n            self.server.submit(j)\n\n        self.server.expect(JOB, {'job_state=R': 10})\n        a = {'Resource_List.select': '1:ncpus=10',\n             'Resource_List.walltime': 40,\n             
'queue': 'expressq'}\n        hj = Job(TEST_USER, a)\n        hjid = self.server.submit(hj)\n\n        self.server.expect(JOB, {'job_state=S': 10})\n        self.server.expect(JOB, {'job_state': 'R'}, id=hjid)\n\n    def test_qalter_preempt_targets_to_none(self):\n        \"\"\"\n        Test that a job requesting preempt targets set to two different queues\n        can be altered to set preempt_targets as NONE\n        \"\"\"\n\n        # create an addition queue\n        a = {'queue_type': 'execution',\n             'started': 'True',\n             'enabled': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, \"workq2\")\n\n        self.server.manager(MGR_CMD_SET, SCHED, {'scheduling': 'False'})\n        # submit a job in expressq with preempt targets set to workq, workq2\n        a = {'Resource_List.preempt_targets': 'queue=workq,queue=workq2'}\n        j = Job(TEST_USER, a)\n        jid = self.server.submit(j)\n\n        self.server.alterjob(jobid=jid,\n                             attrib={'Resource_List.preempt_targets': 'None'})\n        self.server.expect(JOB, id=jid,\n                           attrib={'Resource_List.preempt_targets': 'None'})\n\n    def test_preempt_sort_when_set(self):\n        \"\"\"\n        This test is for preempt_sort when it is set to min_time_since_start\n        \"\"\"\n        a = {ATTR_rescavail + '.ncpus': 2}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n\n        a = {'preempt_sort': 'min_time_since_start'}\n        self.server.manager(MGR_CMD_SET, SCHED, a)\n\n        jid1, jid2, jid3 = self.submit_jobs()\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid3)\n\n    def test_preempt_retry(self):\n        \"\"\"\n        Test that jobs can be successfully preempted after a previously failed\n        attempt at preemption.\n        \"\"\"\n        # in CLI mode 
Rerunnable requires a 'n' value.  It's different with API\n        m = self.server.get_op_mode()\n\n        self.server.set_op_mode(PTL_CLI)\n\n        a = {'resources_available.ncpus': 2}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        abort_script = \"\"\"#!/bin/bash\nexit 3\n\"\"\"\n        self.mom.add_checkpoint_abort_script(body=abort_script)\n        # submit two jobs to regular queue\n        attrs = {'Resource_List.select': '1:ncpus=1', 'Rerunable': 'n'}\n        j1 = Job(TEST_USER, attrs)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n\n        time.sleep(2)\n\n        j2 = Job(TEST_USER, attrs)\n        jid2 = self.server.submit(j2)\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n\n        # set preempt order\n        self.server.manager(MGR_CMD_SET, SCHED, {'preempt_order': 'CR'})\n\n        # submit a job to high priority queue\n        a = {ATTR_q: 'expressq'}\n        j3 = Job(TEST_USER, a)\n        jid3 = self.server.submit(j3)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid3)\n\n        self.server.log_match(jid1 + ';Job failed to be preempted')\n        self.server.log_match(jid2 + ';Job failed to be preempted')\n\n        # Allow jobs to be requeued.\n        attrs = {'Rerunable': 'y'}\n        self.server.alterjob(jid1, attrs)\n        self.server.alterjob(jid2, attrs)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n        self.server.set_op_mode(m)\n\n    def test_vnode_resource_contention(self):\n        \"\"\"\n        Test to make sure that preemption happens when the resource in\n        
contention is vnode.\n        \"\"\"\n        vn4 = self.mom.shortname + '[4]'\n        a = {'resources_available.ncpus': 2}\n        self.mom.create_vnodes(attrib=a, num=11, usenatvnode=False)\n\n        a = {'Resource_List.select': '1:ncpus=2+1:ncpus=2'}\n        for _ in range(5):\n            j = Job(TEST_USER, attrs=a)\n            jid = self.server.submit(j)\n            self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        # Randomly select a vnode with running jobs on it. Request this\n        # vnode in the high priority job later.\n        self.server.expect(NODE, {'state': 'job-busy'}, id=vn4)\n\n        a = {ATTR_q: 'expressq', 'Resource_List.vnode': vn4}\n        hj = Job(TEST_USER, attrs=a)\n        hjid = self.server.submit(hj)\n        self.server.expect(JOB, {'job_state': 'R'}, id=hjid)\n\n        # Since the high priority job consumed only one ncpu, vnode[4]'s\n        # node state should be free now\n        self.server.expect(NODE, {'state': 'free'}, id=vn4)\n\n    @requirements(num_moms=2)\n    def test_host_resource_contention(self):\n        \"\"\"\n        Test to make sure that preemption happens when the resource in\n        contention is host.\n        \"\"\"\n        # Skip test if the number of MoMs provided is not equal to two\n        if len(self.moms) != 2:\n            self.skipTest(\"test requires two MoMs as input, \" +\n                          \"use -p moms=<mom1>:<mom2>\")\n        else:\n            self.server.manager(MGR_CMD_CREATE, NODE, id=self.mom2)\n\n        a = {'resources_available.ncpus': 2}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom1)\n        a = {'resources_available.ncpus': 3}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom2)\n\n        a = {'Resource_List.select': '1:ncpus=2'}\n        j1 = Job(TEST_USER, attrs=a)\n        jid1 = self.server.submit(j1)\n        j2 = Job(TEST_USER, attrs=a)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 
'R'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n\n        # Stat job to check which job is running on mom1\n        pjid = jid2\n        job_stat = self.server.status(JOB, id=jid1)\n        ehost = job_stat[0]['exec_host'].partition('/')[0]\n        if ehost == self.mom1:\n            pjid = jid1\n\n        # Submit an express queue job requesting the host\n        a = {ATTR_q: 'expressq', 'Resource_List.host': self.mom1}\n        hj = Job(TEST_USER, attrs=a)\n        hjid = self.server.submit(hj)\n        self.server.expect(JOB, {'job_state': 'R'}, id=hjid)\n        self.server.expect(JOB, {'job_state': 'S'}, id=pjid)\n\n        # Submit another express queue job requesting the host;\n        # this job will stay queued\n        a = {ATTR_q: 'expressq', 'Resource_List.host': self.mom1,\n             'Resource_List.ncpus': 2}\n        hj2 = Job(TEST_USER, attrs=a)\n        hjid2 = self.server.submit(hj2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=hjid2)\n        comment = \"Not Running: Insufficient amount of resource: host\"\n        self.server.expect(JOB, {'comment': comment}, id=hjid2)\n\n    def test_preempt_queue_restart(self):\n        \"\"\"\n        Test that a queue which has preempt_targets set to another queue\n        recovers successfully before the target queue during server restart\n        \"\"\"\n        # create an additional queue\n        a = {'queue_type': 'execution',\n             'started': 'True',\n             'enabled': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, \"workq2\")\n\n        # create an additional queue\n        a = {'queue_type': 'execution',\n             'started': 'True',\n             'enabled': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, \"workq3\")\n\n        a = {'resources_default.preempt_targets': 'queue=workq3'}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, \"workq2\")\n        self.server.expect(QUEUE, a, id='workq2')\n        
self.server.manager(MGR_CMD_SET, QUEUE, a, \"workq3\")\n\n        self.server.restart()\n\n        try:\n            self.server.expect(QUEUE, a, id='workq2', max_attempts=1)\n        except PtlExpectError:\n            self.server.stop()\n            reset_db = 'echo y | ' + \\\n                os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                             'sbin', 'pbs_server') + ' -t create'\n            self.du.run_cmd(cmd=reset_db, sudo=True, as_script=True)\n            self.fail('TC failed as workq2 recovery failed')\n\n    def test_insufficient_server_rassn_select_resc(self):\n        \"\"\"\n        Set a rassn_select resource (like ncpus or mem) on the server and\n        check that the scheduler is able to preempt a lower priority job when\n        the resource in contention is this rassn_select resource.\n        \"\"\"\n\n        a = {ATTR_rescavail + \".ncpus\": \"8\"}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        # Make resource ncpus available on the server\n        a = {ATTR_rescavail + \".ncpus\": 4}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {ATTR_l + '.select': '1:ncpus=3'}\n        j = Job(TEST_USER, attrs=a)\n        jid_low = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid_low)\n\n        a = {ATTR_l + '.select': '1:ncpus=3', ATTR_q: 'expressq'}\n        j = Job(TEST_USER, attrs=a)\n        jid_high = self.server.submit(j)\n\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid_high)\n\n    def test_preemption_priority_escalation(self):\n        \"\"\"\n        Test that the scheduler does not try to preempt a job that escalates\n        its preemption priority when preempted.\n        \"\"\"\n        # create an additional queue\n        a = {'queue_type': 'execution',\n             'started': 'True',\n             'enabled': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, \"workq2\")\n\n        a = {'resources_available.ncpus': 8}\n     
   self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        a = {'max_run_res_soft.ncpus': \"[u:\" + str(TEST_USER) + \"=4]\"}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, 'workq')\n\n        a = {'max_run_res_soft.ncpus': \"[u:\" + str(TEST_USER2) + \"=2]\"}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        p = \"express_queue, normal_jobs, server_softlimits, queue_softlimits\"\n        a = {'preempt_prio': p}\n        self.server.manager(MGR_CMD_SET, SCHED, a)\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events':  2047})\n\n        # Submit 4 jobs requesting 1 ncpu each in workq\n        a = {ATTR_l + '.select': '1:ncpus=1'}\n        jid_list = []\n        for _ in range(4):\n            j = Job(TEST_USER, a)\n            jid = self.server.submit(j)\n            jid_list.append(jid)\n\n        # Submit a 5th job that will make all the jobs in workq go over\n        # their soft limits\n        a = {ATTR_l + '.select': '1:ncpus=1'}\n        j = Job(TEST_USER, a)\n        jid = self.server.submit(j)\n        jid_list.append(jid)\n        self.server.expect(JOB, {'job_state=R': 5})\n\n        # Submit a job in workq2 which requests 3 ncpus; this job will\n        # make user2 go over its soft limits\n        a = {ATTR_l + '.select': '1:ncpus=3', ATTR_q: 'workq2'}\n        j = Job(TEST_USER2, a)\n        jid = self.server.submit(j)\n        jid_list.append(jid)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        # Submit a job in workq2 which requests 1 ncpu; this job will\n        # not preempt because if it does then all TEST_USER jobs will move\n        # from being over queue softlimits to normal.\n        a = {ATTR_l + '.select': '1:ncpus=1', ATTR_q: 'workq2'}\n        j = Job(TEST_USER2, a)\n        jid = self.server.submit(j)\n        jid_list.append(jid)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n        msg = \";Preempting job will escalate its priority\"\n        for 
job_id in jid_list[0:-2]:\n            self.scheduler.log_match(job_id + msg)\n\n    def test_preemption_priority_escalation_2(self):\n        \"\"\"\n        Test that scheduler does not try preempting a job that escalates its\n        preemption priority when preempted. But in this case ensure that the\n        job whose preemption priority gets escalated is one of the running\n        jobs that scheduler is yet to preempt in simulated universe.\n        \"\"\"\n        # create an addition queue\n        a = {'queue_type': 'execution',\n             'started': 'True',\n             'enabled': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, \"workq2\")\n\n        a = {'resources_available.ncpus': 10}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        a = {'type': 'long', 'flag': 'nh'}\n        self.server.manager(MGR_CMD_CREATE, RSC, a, id='foo')\n\n        a = {'resources_available.foo': 10}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n        self.scheduler.add_resource('foo')\n\n        a = {'max_run_res_soft.ncpus': \"[u:PBS_GENERIC=5]\"}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, 'workq')\n        # Set a soft limit on resource foo to 0 so that all jobs requesting\n        # this resource are over soft limits.\n        a = {'max_run_res_soft.foo': \"[u:PBS_GENERIC=0]\"}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, 'workq')\n\n        p = \"express_queue, normal_jobs, queue_softlimits\"\n        a = {'preempt_prio': p}\n        self.server.manager(MGR_CMD_SET, SCHED, a)\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events':  2047})\n\n        # Submit 4 jobs requesting 1 ncpu each in workq\n        jid_list = []\n        for index in range(4):\n            a = {ATTR_l + '.select': '1:ncpus=1:foo=2'}\n            if (index == 2):\n                # Since this job is not requesting foo, preempting one job\n                # from this queue will escalate its 
preemption priority to\n                # normal and the scheduler will not attempt to preempt it.\n                a = {ATTR_l + '.select': '1:ncpus=1'}\n            j = Job(TEST_USER, a)\n            jid = self.server.submit(j)\n            jid_list.append(jid)\n            time.sleep(1)\n\n        # Submit a 5th job that will make all the jobs in workq go over\n        # their soft limits because of resource ncpus\n        a = {ATTR_l + '.select': '1:ncpus=2:foo=2'}\n        j = Job(TEST_USER, a)\n        jid = self.server.submit(j)\n        jid_list.append(jid)\n        self.server.expect(JOB, {'job_state=R': 5})\n\n        # Submit a job in workq2 which requests 8 ncpus and 3 of resource foo\n        a = {ATTR_l + '.select': '1:ncpus=8:foo=3', ATTR_q: 'workq2'}\n        j = Job(TEST_USER, a)\n        jid = self.server.submit(j)\n        jid_list.append(jid)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid_list[5])\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid_list[2])\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid_list[0])\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid_list[1])\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid_list[3])\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid_list[4])\n\n    def test_preempt_requeue_resc(self):\n        \"\"\"\n        Test that the scheduler will preempt jobs for a resource even when\n        restrict_res_to_release_on_suspend (rrtros) is set to other resources\n        \"\"\"\n        a = {'resources_available.ncpus': 2}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        a = {'type': 'long', 'flag': 'q'}\n        self.server.manager(MGR_CMD_CREATE, RSC, a, id='foo')\n\n        self.server.manager(MGR_CMD_SET, SCHED, {'preempt_order': 'R'})\n\n        a = {'resources_available.foo': 2,\n             ATTR_restrict_res_to_release_on_suspend: 'ncpus'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.scheduler.add_resource('foo')\n\n        a = 
{'Resource_List.foo': 1}\n        jid1 = self.server.submit(Job(attrs=a))\n        jid2 = self.server.submit(Job(attrs=a))\n\n        self.server.expect(JOB, {'job_state=R': 2})\n        a = {'Resource_List.foo': 1,\n             'queue': 'expressq'}\n        hjid = self.server.submit(Job(attrs=a))\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=hjid)\n        self.server.expect(JOB, {'job_state=Q': 1})\n\n    @staticmethod\n    def wrong_cull_attr(name, totnodes, numnode, attrib):\n        \"\"\"\n        Helper function for test_preempt_wrong_cull\n        \"\"\"\n\n        a = {}\n        if numnode % 2 == 0:\n            a['resources_available.app'] = 'appA'\n        else:\n            a['resources_available.app'] = 'appB'\n        return {**attrib, **a}\n\n    def test_preempt_wrong_cull(self):\n        \"\"\"\n        Test to make sure that if a preemptor cannot run because\n        it misses a non-consumable on a node, preemption candidates\n        are not incorrectly removed from consideration\n        if and because they \"do not request the relevant resource\".\n        Deciding on their utility should be left to the check\n        to see whether the nodes they occupy are useful.\n        \"\"\"\n\n        attr = {'type': 'string_array', 'flag': 'h'}\n        self.server.manager(MGR_CMD_CREATE, RSC, attr, id='app')\n        self.scheduler.add_resource('app')\n\n        a = {'resources_available.ncpus': 1}\n        self.mom.create_vnodes(attrib=a, num=2, usenatvnode=False,\n                               attrfunc=self.wrong_cull_attr)\n        # set the preempt_order to kill/requeue only -- try old and new syntax\n        self.server.manager(MGR_CMD_SET, SCHED, {'preempt_order': 'R'})\n\n        # create express queue\n        a = {'queue_type': 'execution',\n             'started': 'True',\n             'enabled': 'True',\n             'Priority': 200}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, \"hipri\")\n\n        # create normal 
queue\n        a = {'queue_type': 'execution',\n             'started': 'True',\n             'enabled': 'True',\n             'Priority': 1}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, \"lopri\")\n\n        # submit job 1\n        a = {'Resource_List.select': '1:ncpus=1:vnode=' +\n             self.mom.shortname + '[0]', ATTR_q: 'lopri'}\n        j1 = Job(TEST_USER, attrs=a)\n        jid1 = self.server.submit(j1)\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n\n        # submit job 2\n        a = {'Resource_List.select': '1:ncpus=1:app=appA', ATTR_q: 'hipri'}\n        j2 = Job(TEST_USER, attrs=a)\n        jid2 = self.server.submit(j2)\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n\n    @requirements(num_moms=2)\n    def test_chunk_level_host_resource_contention(self):\n        \"\"\"\n        Test to make sure that preemption happens when the resource in\n        contention is host requested inside a chunk.\n        \"\"\"\n        # Skip test if the number of MoMs provided is not equal to two\n        if len(self.moms) != 2:\n            self.skipTest(\"test requires two MoMs as input, \" +\n                          \"use -p moms=<mom1>:<mom2>\")\n        else:\n            self.server.manager(MGR_CMD_CREATE, NODE, id=self.mom2)\n\n        a = {'resources_available.ncpus': 2}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom1)\n        a = {'resources_available.ncpus': 2}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom2)\n\n        a = {'Resource_List.select': '1:ncpus=2'}\n        j1 = Job(TEST_USER, attrs=a)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n\n        # Stat job to check which job is running on mom1\n        pjid = jid1\n        job_stat = self.server.status(JOB, id=jid1)\n        ehost = job_stat[0]['exec_host'].partition('/')[0]\n\n        # Submit an express queue job requesting the host\n        a = {ATTR_q: 
'expressq',\n             'Resource_List.select': '1:ncpus=2:host=' + ehost}\n        hj = Job(TEST_USER, attrs=a)\n        hjid = self.server.submit(hj)\n        self.server.expect(JOB, {'job_state': 'R'}, id=hjid)\n        self.server.expect(JOB, {'job_state': 'S'}, id=pjid)\n\n        # Submit another express queue job requesting the host,\n        # this job will stay queued\n        a = {ATTR_q: 'expressq', 'Resource_List.host': ehost,\n             'Resource_List.ncpus': 2}\n        hj2 = Job(TEST_USER, attrs=a)\n        hjid2 = self.server.submit(hj2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=hjid2)\n        comment = \"Not Running: Insufficient amount of resource: host\"\n        self.server.expect(JOB, {'comment': comment}, id=hjid2)\n\n    def test_chunk_level_vnode_resource_contention(self):\n        \"\"\"\n        Test to make sure that preemption happens when the resource in\n        contention is vnode requested inside a chunk.\n        \"\"\"\n\n        a = {'resources_available.ncpus': 2}\n        self.mom.create_vnodes(attrib=a, num=11, usenatvnode=False)\n\n        a = {'Resource_List.select': '1:ncpus=2+1:ncpus=2'}\n        for _ in range(5):\n            j = Job(TEST_USER, attrs=a)\n            jid = self.server.submit(j)\n            self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        # Select a vnode with running jobs on it. Request this\n        # vnode in the high priority job later.\n        vn4 = self.mom.shortname + '[4]'\n        self.server.expect(NODE, {'state': 'job-busy'}, id=vn4)\n\n        a = {ATTR_q: 'expressq',\n             'Resource_List.select': '1:ncpus=1:vnode=' + vn4}\n        hj = Job(TEST_USER, attrs=a)\n        hjid = self.server.submit(hj)\n        self.server.expect(JOB, {'job_state': 'R'}, id=hjid)\n        self.server.expect(JOB, {'job_state=R': 5})\n        self.server.expect(JOB, {'job_state=S': 1})\n"
  },
  {
    "path": "test/tests/functional/pbs_printjob.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestPrintjob(TestFunctional):\n    def test_state_substate(self):\n        \"\"\"\n        Verify that printjob prints the state and substate in expected format\n        \"\"\"\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R', 'substate': 42}\n        self.server.expect(JOB, a, id=jid)\n        ret = self.mom.printjob(jid)\n        self.assertEqual(ret['rc'], 0)\n        sfound = False\n        ssfound = False\n        for line in ret['out']:\n            if line.startswith(\"state:\"):\n                val = line.split(\"state:\", 1)[1]\n                val = val.strip()\n                numval = int(val, 16)\n                self.assertEqual(numval, 4)  # R state = numeric 4\n                sfound = True\n            elif line.startswith(\"substate:\"):\n                val = line.split(\"substate:\", 1)[1]\n                val = val.split()[0]\n                val = val.strip()\n                numval = int(val, 16)\n                self.assertEqual(numval, 42)\n                ssfound = True\n        self.assertTrue(sfound and ssfound)\n"
  },
  {
    "path": "test/tests/functional/pbs_provisioning.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport time\nfrom tests.functional import *\n\n\nhook_begin = \"\"\"\nimport pbs\npbs.logmsg(pbs.LOG_DEBUG,\n\"executed execjob_begin hook on job %s\" % (pbs.event().job.id))\n\"\"\"\nhook_provision = \"\"\"\nimport pbs\nimport time\ne = pbs.event()\nvnode = e.vnode\naoe = e.aoe\nif aoe == 'App1':\n    pbs.logmsg(pbs.LOG_DEBUG, \"fake calling application provisioning script\")\n    e.accept(1)\npbs.logmsg(pbs.LOG_DEBUG, \"aoe=%s,vnode=%s\" % (aoe,vnode))\npbs.logmsg(pbs.LOG_DEBUG, \"fake calling os provisioning script\")\ne.accept(0)\n\"\"\"\n\n\n@requirements(no_mom_on_server=True)\nclass TestProvisioningJob(TestFunctional):\n    \"\"\"\n    This test suite verifies that OS provisioned jobs get all\n     required hook files at the MOM\n\n      PRE: Have a PBS cluster with a MOM installed on a host other\n             than the PBS server host.\n\n      1) Enable provisioning and set AOEs on the provisioning MOM\n      2) Create hooks\n          i) Provisioning hook which fakes OS provisioning.\n         ii) execjob_begin hook that prints a log message on the MOM node.\n      3) Submit a job with an aoe.\n      4) The node will go into the provisioning state while it is\n             running the provisioning hook.\n      5) Delete the execjob_begin hook file begin.PY from the MOM\n      6) Restart the MOM, as we are not doing actual OS provisioning\n      7) Then check that hook files 
are copied to the MOM node\n             and job is printing log message from execjob_begin hook.\n    \"\"\"\n    hook_list = ['begin', 'my_provisioning']\n    hostA = \"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        self.momA = self.moms.values()[0]\n        self.hostA = self.momA.shortname\n        msg = (\"We cannot provision on cpuset mom, host has vnodes\")\n        if self.momA.is_cpuset_mom():\n            self.skipTest(msg)\n        self.momA.delete_vnode_defs()\n        self.logger.info(self.momA.shortname)\n        a = {'provision_enable': 'true'}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.hostA)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 2047})\n        self.server.expect(NODE, {'state': 'free'}, id=self.hostA)\n\n        a = {'event': 'execjob_begin', 'enabled': 'True'}\n        rv = self.server.create_import_hook(\n            self.hook_list[0], a, hook_begin, overwrite=True)\n        self.assertTrue(rv)\n\n        a = {'event': 'provision', 'enabled': 'True', 'alarm': '300'}\n        rv = self.server.create_import_hook(\n            self.hook_list[1], a, hook_provision, overwrite=True)\n        self.assertTrue(rv)\n\n    def test_execjob_begin_hook_on_os_provisioned_job(self):\n        \"\"\"\n        Test the execjob_begin hook is seen by OS provisioned job.\n        \"\"\"\n        a = {'resources_available.aoe': 'osimage1'}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.hostA)\n\n        job = Job(TEST_USER1, attrs={ATTR_l: 'aoe=osimage1'})\n        job.set_sleep_time(1)\n        jid = self.server.submit(job)\n\n        rv = self.server.expect(NODE, {'state': 'provisioning'}, id=self.hostA)\n        self.assertTrue(rv)\n\n        phome = self.momA.pbs_conf['PBS_HOME']\n        begin = self.momA.get_formed_path(phome, 'mom_priv', 'hooks',\n                                          'begin.PY')\n        ret = self.momA.rm(path=begin, force=True, sudo=True, logerr=False)\n    
    if not ret:\n            self.logger.error(\"problem deleting %s\" % begin)\n        self.momA.restart()\n        time.sleep(5)\n        rv = self.server.log_match(\n            \"successfully sent hook file \"\n            \"/var/spool/pbs/server_priv/hooks/begin.PY\",\n            max_attempts=20,\n            interval=1)\n        self.assertTrue(rv)\n        rv = self.mom.log_match(\"begin.PY;copy hook-related file \"\n                                \"request received\",\n                                regexp=True,\n                                max_attempts=20,\n                                interval=1)\n        self.assertTrue(rv)\n        rv = self.mom.log_match(\"executed execjob_begin hook on job %s\" % jid,\n                                regexp=True,\n                                max_attempts=20,\n                                interval=1)\n        self.assertTrue(rv)\n\n    def test_app_provisioning(self):\n        \"\"\"\n        Test application provisioning\n        \"\"\"\n        a = {'resources_available.aoe': 'App1'}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.hostA)\n\n        job = Job(TEST_USER1, attrs={ATTR_l: 'aoe=App1'})\n        jid = self.server.submit(job)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.log_match(\n            \"fake calling application provisioning script\",\n            max_attempts=20,\n            interval=1)\n\n    def test_os_provisioning_pending_hook_copy(self):\n        \"\"\"\n        Test that job still runs after:\n        1. os provisioning succeeded\n        2. 
pending mom hook copy action has persisted on\n           downed node that is not part of job\n        \"\"\"\n        if len(self.moms) < 2:\n            cmt = \"need 2 non-server mom hosts: -p moms=<m1>:<m2>\"\n            self.skip_test(reason=cmt)\n        self.momB = self.moms.values()[1]\n        self.hostB = self.momB.shortname\n        msg = (\"We cannot provision on cpuset mom, host has vnodes\")\n        if self.momA.is_cpuset_mom() or self.momB.is_cpuset_mom():\n            self.skipTest(msg)\n        a = {'resources_available.aoe': 'osimage1'}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.hostA)\n        self.server.expect(NODE, {'state': 'free'}, id=self.hostB)\n        self.server.expect(NODE, {'state': 'free'}, id=self.hostA)\n\n        job = Job(TEST_USER1, attrs={ATTR_l: 'aoe=osimage1'})\n        job.set_sleep_time(30)\n        jid = self.server.submit(job)\n        self.server.expect(NODE, {'state': 'provisioning'}, id=self.hostA)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 71}, id=jid)\n        self.server.log_match(\"fake calling os provisioning script\")\n\n        # Bring down the mom that is not part of the job\n        self.momB.stop()\n        # Force a server hook send of mom hooks\n        # With momB being down, pending hook copy to momB action persists\n        self.server.manager(MGR_CMD_SET, HOOK,\n                            {'enabled': 'True'}, self.hook_list[0])\n\n        # a restart is needed to complete os provisioning\n        self.momA.restart()\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid)\n"
  },
  {
    "path": "test/tests/functional/pbs_provisioning_enhancement.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\n@requirements(num_moms=2)\nclass TestProvisioningJob_Enh(TestFunctional):\n    \"\"\"\n    This test suite tests newly introduced provisioning capabilities.\n    With this enhancement, PBS will be able to run jobs requesting aoe\n    in subchunks, just like any other custom non-consumable resource.\n\n    PRE: Have a cluster of PBS with two MOMs installed, with one MOM\n    on a node other than the PBS server host. Pass the provisionable mom\n    first in pbs_benchpress.\n    Eg. 
pbs_benchpress -p moms=second_node:server_node ...\n    \"\"\"\n\n    fake_prov_hook = \"\"\"\nimport pbs\nimport time\ne = pbs.event()\n\nvnode = e.vnode\naoe = e.aoe\nif aoe == 'App1':\n    pbs.logmsg(pbs.LOG_DEBUG, \"fake application provisioning script\")\n    e.accept(1)\npbs.logmsg(pbs.LOG_DEBUG, \"aoe=%s,vnode=%s\" % (aoe,vnode))\npbs.logmsg(pbs.LOG_DEBUG, \"fake os provisioning script\")\ne.accept(0)\n\"\"\"\n    reject_runjob_hook = \"\"\"\nimport pbs\ne = pbs.event()\nj = e.job\npbs.logmsg(pbs.LOG_DEBUG, \"job \" + str(j) + \" solution \" + str(j.exec_vnode))\ne.reject()\n\"\"\"\n\n    def setUp(self):\n\n        if self.du.get_platform().startswith('cray'):\n            self.skipTest(\"Test suite only meant to run on non-Cray\")\n        if len(self.moms) < 2:\n            self.skipTest(\"Provide at least 2 moms while invoking test\")\n\n        TestFunctional.setUp(self)\n        # This test suite expects the first mom given with \"-p moms\"\n        # benchpress option to be the remote mom. 
In case this assumption\n        # is not true then it reverses the order in the setup.\n        if self.moms.values()[0].shortname == self.server.shortname:\n            self.momA = self.moms.values()[1]\n            self.momB = self.moms.values()[0]\n        else:\n            self.momA = self.moms.values()[0]\n            self.momB = self.moms.values()[1]\n\n        self.hostA = self.momA.shortname\n        self.hostB = self.momB.shortname\n\n        # Remove all nodes\n        self.server.manager(MGR_CMD_DELETE, NODE, None, \"\")\n\n        # Create node\n        self.server.manager(MGR_CMD_CREATE, NODE, None, self.hostA)\n        self.server.manager(MGR_CMD_CREATE, NODE, None, self.hostB)\n        self.server.expect(NODE, {'state': 'free'}, id=self.hostA)\n        self.server.expect(NODE, {'state': 'free'}, id=self.hostB)\n\n        # Set hostA provisioning attributes.\n        a = {'provision_enable': 'true',\n             'resources_available.ncpus': '2',\n             'resources_available.aoe': 'App1,osimage1'}\n        self.server.manager(\n            MGR_CMD_SET, NODE, a, id=self.hostA)\n        self.server.manager(MGR_CMD_UNSET, NODE, id=self.hostA,\n                            attrib='current_aoe')\n\n        # Set hostB ncpus to 12\n        a = {'resources_available.ncpus': '12'}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.hostB)\n\n        # Setup provisioning hook.\n        a = {'event': 'provision', 'enabled': 'True', 'alarm': '300'}\n        rv = self.server.create_import_hook(\n            'fake_prov_hook', a, self.fake_prov_hook, overwrite=True)\n        self.assertTrue(rv)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 2047})\n\n    def test_app_provisioning(self):\n        \"\"\"\n        Test application provisioning\n        \"\"\"\n        j = Job(TEST_USER1)\n        j.set_attributes({'Resource_List.select': '1:aoe=App1'})\n        jid = self.server.submit(j)\n\n        # Job should start running after 
provisioning script finishes\n        # executing.\n        # Since this is application provisioning, a mom restart is\n        # not needed.\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        # Current aoe on momA should be set to the requested aoe in job.\n        self.server.expect(NODE, {'current_aoe': 'App1'}, id=self.hostA)\n\n        self.server.log_match(\n            \"fake application provisioning script\",\n            max_attempts=20,\n            interval=1)\n\n    def test_os_provisioning(self):\n        \"\"\"\n        Test os provisioning\n        \"\"\"\n\n        j = Job(TEST_USER1)\n        j.set_attributes({'Resource_List.select': '1:aoe=osimage1'})\n        jid = self.server.submit(j)\n\n        # Job will start and wait for provisioning to complete.\n        self.server.expect(JOB, {ATTR_substate: '71'}, id=jid)\n        self.server.log_match(\"fake os provisioning script\",\n                              max_attempts=60,\n                              interval=1)\n\n        # Since this is OS provisioning, a mom restart is\n        # required to finish provisioning. 
Sending SIGHUP\n        # to the provisioning mom will also work, but a restart\n        # is more apt to simulate a real-world scenario.\n        self.momA.restart()\n\n        # Current aoe on momA should be set to the requested aoe in job.\n        self.server.expect(NODE, {'current_aoe': 'osimage1'}, id=self.hostA)\n\n        # After the mom restart, job execution should start, as\n        # OS provisioning completes after the mom restart.\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n    def test_subchunk_application_provisioning(self):\n        \"\"\"\n        Test an application provisioning job whose request consists of\n        subchunks with and without the aoe resource.\n        \"\"\"\n        j = Job(TEST_USER1)\n        j.set_attributes({'Resource_List.select':\n                          '1:ncpus=1:aoe=App1+1:ncpus=12'})\n        jid = self.server.submit(j)\n\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n        self.server.expect(JOB, ATTR_execvnode, id=jid, op=SET)\n        nodes = j.get_vnodes(j.exec_vnode)\n        self.server.log_match(\"fake application provisioning script\",\n                              max_attempts=20,\n                              interval=1)\n        self.assertTrue((nodes[0] == self.momA.shortname and\n                         nodes[1] == self.momB.shortname) or\n                        (nodes[0] == self.momB.shortname and\n                         nodes[1] == self.momA.shortname))\n\n        # Current aoe on momA should be set to the requested aoe in job.\n        self.server.expect(NODE, {'current_aoe': 'App1'}, id=self.hostA)\n\n    def test_subchunk_os_provisioning(self):\n        \"\"\"\n        Test an os provisioning job whose request consists of\n        subchunks with and without the aoe resource.\n        \"\"\"\n        a = {'Resource_List.select': '1:aoe=osimage1+1:ncpus=12'}\n        j = Job(TEST_USER1, a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, ATTR_execvnode, id=jid, op=SET)\n        nodes = j.get_vnodes(j.exec_vnode)\n        self.assertTrue((nodes[0] == self.momA.shortname and\n                         nodes[1] == self.momB.shortname) or\n                        (nodes[0] == self.momB.shortname and\n                         nodes[1] == self.momA.shortname))\n\n        self.momA.restart()\n\n        # After the mom restart, job execution should start, as\n        # OS provisioning completes after the mom restart.\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n        # Current aoe on momA should be set to the requested aoe in job.\n        self.server.expect(NODE, {'current_aoe': 'osimage1'}, id=self.hostA)\n\n    def test_job_wide_provisioining_request(self):\n        \"\"\"\n        Test jobs with a job-wide aoe resource request.\n        \"\"\"\n\n        # The job below will not run, since the requested resources are\n        # job-wide and no single node has all the requested resources.\n\n        j = Job(TEST_USER1)\n        j.set_attributes({\"Resource_List.aoe\": \"App1\",\n                          \"Resource_List.ncpus\": 12})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: 'Q',\n                                 ATTR_comment:\n                                 (MATCH_RE, 'Not Running: Insufficient ' +\n                                  'amount of resource: .*')}, id=jid)\n\n        j = Job(TEST_USER1)\n        j.set_attributes({\"Resource_List.aoe\": \"App1\",\n                          \"Resource_List.ncpus\": 1})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n        # Current aoe on momA should be set to the requested aoe in job.\n        self.server.expect(NODE, {'current_aoe': 'App1'}, id=self.hostA)\n\n    def test_multiple_aoe_request(self):\n        \"\"\"\n        Test jobs with multiple similar/different aoe requests in subchunks.\n        A job request consisting of multiple subchunks with different aoe\n        will fail submission, whereas 
a job request with the same aoe across multiple\n        subchunks should be successful.\n        \"\"\"\n\n        a1 = {'Resource_List.select':\n              '1:ncpus=1:aoe=App1+1:ncpus=12:aoe=osimage1'}\n\n        a2 = {'Resource_List.select':\n              '1:ncpus=1:aoe=App1+1:ncpus=12:aoe=App1'}\n\n        # The job below will fail submission, since different aoes are\n        # requested across multiple subchunks.\n\n        j = Job(TEST_USER1)\n        j.set_attributes(a1)\n        jid = None\n        try:\n            jid = self.server.submit(j)\n            self.assertTrue(jid is None, 'Job successfully submitted ' +\n                            'when it should have failed')\n        except PbsSubmitError as e:\n            self.assertTrue('Invalid provisioning request in chunk(s)'\n                            in e.msg[0],\n                            'Job submission failed, but due to ' +\n                            'unexpected reason.\\n%s' % e.msg[0])\n            self.logger.info(\"Job submission failed, as expected\")\n\n        # The job below will get submitted, since the same aoe is\n        # requested across multiple subchunks.\n\n        j = Job(TEST_USER1)\n        j.set_attributes(a2)\n        jid = self.server.submit(j)\n        self.assertTrue(jid is not None, 'Job submission failed ' +\n                        'when it should have succeeded')\n        self.logger.info(\"Job submission succeeded, as expected\")\n\n    def test_provisioning_with_placement(self):\n        \"\"\"\n        Test provisioning jobs with various placement options.\n        \"\"\"\n\n        # The job below will not run, since placement is set to pack\n        # and no single node has all the requested resources.\n\n        j = Job(TEST_USER1)\n        j.set_attributes({'Resource_List.select':\n                          '1:ncpus=1:aoe=App1+1:ncpus=12',\n                          'Resource_List.place': 'pack'})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, 
{ATTR_state: 'Q',\n                                 ATTR_comment:\n                                 (MATCH_RE, 'Not Running: Insufficient ' +\n                                  'amount of resource: .*')}, id=jid)\n\n        # The job below will run with placement set to pack,\n        # since there is only one node with the requested resources.\n\n        j = Job(TEST_USER1)\n        j.set_attributes({'Resource_List.select':\n                          '1:ncpus=1:aoe=App1+1:ncpus=1',\n                          'Resource_List.place': 'pack'})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.expect(JOB, ATTR_execvnode, id=jid, op=SET)\n        nodes = j.get_vnodes(j.exec_vnode)\n        self.assertTrue(nodes[0] == self.momA.shortname)\n\n        # Current aoe on momA should be set to the requested aoe in job.\n        self.server.expect(NODE, {'current_aoe': 'App1'}, id=self.hostA)\n\n        # This was needed since sometimes the above job takes longer\n        # to finish and release the resources. 
This causes a delay for\n        # the next job to start and can cause the test to fail.\n        self.server.cleanup_jobs()\n\n        # The job below will run on two nodes with placement set to scatter,\n        # even though a single node can satisfy both the requested chunks.\n\n        j = Job(TEST_USER1)\n        j.set_attributes({'Resource_List.select':\n                          '1:ncpus=1:aoe=App1+1:ncpus=1',\n                          'Resource_List.place': 'scatter'})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, ATTR_execvnode, id=jid, op=SET)\n        nodes = j.get_vnodes(j.exec_vnode)\n        self.assertTrue((nodes[0] == self.momA.shortname and\n                         nodes[1] == self.momB.shortname) or\n                        (nodes[0] == self.momB.shortname and\n                         nodes[1] == self.momA.shortname))\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        # Current aoe on momA should be set to the requested aoe in job.\n        self.server.expect(NODE, {'current_aoe': 'App1'}, id=self.hostA)\n\n    def test_sched_provisioning_response_with_runjob(self):\n        \"\"\"\n        Test that if one provisioning job fails to run then the scheduler\n        correctly provides the node solution for the second job with aoe in\n        it.\n        \"\"\"\n        # Setup runjob hook.\n        a = {'event': 'runjob', 'enabled': 'True'}\n        rv = self.server.create_import_hook(\n            'reject_runjob_hook', a, self.reject_runjob_hook, overwrite=True)\n        self.assertTrue(rv)\n        # Set current aoe to App1\n        self.server.manager(MGR_CMD_SET, NODE, id=self.hostA,\n                            attrib={'current_aoe': 'App1'})\n\n        # Turn off scheduling\n        self.server.manager(MGR_CMD_SET,\n                            SERVER, {'scheduling': 'False'})\n\n        # submit two provisioning jobs\n        a = {'Resource_List.select': '1:aoe=osimage1:ncpus=1+1:ncpus=4',\n             
'Resource_List.place': 'vscatter'}\n        j = Job(TEST_USER1, attrs=a)\n        jid1 = self.server.submit(j)\n        jid2 = self.server.submit(j)\n\n        # Turn on scheduling\n        self.server.manager(MGR_CMD_SET,\n                            SERVER, {'scheduling': 'True'})\n\n        # Jobs will be rejected by the runjob hook, which should log\n        # the correct exec_vnode for each job.\n        msg = \"job %s \" + \"solution (%s:aoe=osimage1:ncpus=1)+(%s:ncpus=4)\"\n        job1_msg = msg % (jid1, self.hostA, self.hostB)\n        job2_msg = msg % (jid2, self.hostA, self.hostB)\n        self.server.log_match(job1_msg)\n        self.server.log_match(job2_msg)\n\n    def test_sched_provisioning_response(self):\n        \"\"\"\n        Test that if the scheduler could not find a node solution for one\n        provisioning job then it will find the correct solution for the\n        second one.\n        \"\"\"\n\n        # Set current aoe to osimage1\n        self.server.manager(MGR_CMD_SET, NODE, id=self.hostA,\n                            attrib={'current_aoe': 'osimage1'})\n\n        # submit one job that will run on the local node\n        a = {'Resource_List.select': '1:ncpus=10'}\n        j1 = Job(TEST_USER1, attrs=a)\n        j1.set_sleep_time(200)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n\n        # Turn off scheduling\n        self.server.manager(MGR_CMD_SET,\n                            SERVER, {'scheduling': 'False'})\n\n        # submit two provisioning jobs where the first job will not be able\n        # to run and the second one can\n        a = {'Resource_List.select': '1:aoe=App1:ncpus=1+1:ncpus=3',\n             'Resource_List.place': 'vscatter'}\n        j2 = Job(TEST_USER1, attrs=a)\n        jid2 = self.server.submit(j2)\n\n        a = {'Resource_List.select': '1:aoe=App1:ncpus=1+1:ncpus=2',\n             'Resource_List.place': 'vscatter'}\n        j3 = Job(TEST_USER1, attrs=a)\n        jid3 = 
self.server.submit(j3)\n\n        # Turn on scheduling\n        self.server.manager(MGR_CMD_SET,\n                            SERVER, {'scheduling': 'True'})\n\n        ev_format = \"(%s:aoe=App1:ncpus=1)+(%s:ncpus=2)\"\n        solution = ev_format % (self.hostA, self.hostB)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n        job_state = self.server.status(JOB, id=jid3)\n        self.assertEqual(job_state[0]['exec_vnode'], solution)\n\n    def test_multinode_provisioning(self):\n        \"\"\"\n        Test the effect of max_concurrent_provision\n        If set to 1 and job requests a 4 node provision, the provision should\n        occur 1 node at a time\n        \"\"\"\n        # Setup provisioning hook with smaller alarm.\n        a = {'event': 'provision', 'enabled': 'True', 'alarm': '5'}\n        rv = self.server.create_import_hook(\n            'fake_prov_hook', a, self.fake_prov_hook, overwrite=True)\n\n        a = {'max_concurrent_provision': 1}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {'resources_available.aoe': 'App1,osimage1',\n             'current_aoe': 'App1',\n             'provision_enable': 'True',\n             'resources_available.ncpus': 1}\n        rv = self.momA.create_vnodes(a, 4,\n                                     sharednode=False)\n        self.assertTrue(rv)\n        j = Job(TEST_USER,\n                attrs={'Resource_List.select': '4:ncpus=1:aoe=osimage1'})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'substate': 71}, attrop=PTL_AND, id=jid)\n        exp_msg = \"Provisioning vnode \" + self.momA.shortname\n        exp_msg += r\"\\[[0-3]\\] with AOE osimage1 started\"\n        logs = self.server.log_match(msg=exp_msg, regexp=True, allmatch=True)\n\n        # since max_concurrent_provision is 1, there should be only one\n        # log\n        
self.assertEqual(len(logs), 1)\n\n        # A node in provisioning state cannot be deleted. In order to make\n        # sure that cleanup happens properly, do the following:\n        # sleep for a few seconds so that provisioning times out and the node\n        # is marked offline, and then delete all the nodes\n        time.sleep(8)\n        # delete all nodes\n        self.server.manager(MGR_CMD_DELETE, NODE, None, \"\")\n"
  },
  {
    "path": "test/tests/functional/pbs_python_restart_settings.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\nfrom ptl.utils.pbs_logutils import PBSLogUtils\n\n\nclass TestPythonRestartSettings(TestFunctional):\n\n    \"\"\"\n    For addressing memory leak in server due to python objects python\n    interpreter needs to be restarted. 
Previously there were macros in the\n    code to do that. The new design adds server attributes to\n    configure how frequently the python interpreter should be restarted.\n    This test suite validates the server attributes. The actual memory\n    leak test is still manual.\n    \"\"\"\n    logutils = PBSLogUtils()\n\n    def test_non_integer(self):\n        \"\"\"\n        Test that qmgr will throw an error when non-integer\n        values are provided\n        \"\"\"\n\n        exp_err = \"Illegal attribute or resource value\"\n        # -1 will throw an error\n        try:\n            self.server.manager(MGR_CMD_SET, SERVER,\n                                {'python_restart_max_hooks': '-1'},\n                                runas=ROOT_USER, logerr=True)\n        except PbsManagerError as e:\n            self.assertTrue(exp_err in e.msg[0],\n                            \"Error message is not expected\")\n        try:\n            self.server.manager(MGR_CMD_SET, SERVER,\n                                {'python_restart_max_objects': '-1'},\n                                runas=ROOT_USER, logerr=True)\n        except PbsManagerError as e:\n            self.assertTrue(exp_err in e.msg[0],\n                            \"Error message is not expected\")\n        try:\n            self.server.manager(MGR_CMD_SET, SERVER,\n                                {'python_restart_min_interval': '-1'},\n                                runas=ROOT_USER, logerr=True)\n        except PbsManagerError as e:\n            self.assertTrue(exp_err in e.msg[0],\n                            \"Error message is not expected\")\n        # 0 will also give an error\n        try:\n            self.server.manager(MGR_CMD_SET, SERVER,\n                                {'python_restart_max_hooks': 0},\n                                runas=ROOT_USER, logerr=True)\n        except PbsManagerError as e:\n            self.assertTrue(exp_err in e.msg[0],\n                            \"Error message 
is not expected\")\n        try:\n            self.server.manager(MGR_CMD_SET, SERVER,\n                                {'python_restart_max_objects': 0},\n                                runas=ROOT_USER, logerr=True)\n        except PbsManagerError as e:\n            self.assertTrue(exp_err in e.msg[0],\n                            \"Error message is not expected\")\n        try:\n            self.server.manager(MGR_CMD_SET, SERVER,\n                                {'python_restart_min_interval': 0},\n                                runas=ROOT_USER, logerr=True)\n        except PbsManagerError as e:\n            self.assertTrue(exp_err in e.msg[0],\n                            \"Error message is not expected\")\n        try:\n            self.server.manager(MGR_CMD_SET, SERVER,\n                                {'python_restart_min_interval': \"00:00:00\"},\n                                runas=ROOT_USER, logerr=True)\n        except PbsManagerError as e:\n            self.assertTrue(exp_err in e.msg[0],\n                            \"Error message is not expected\")\n        try:\n            self.server.manager(MGR_CMD_SET, SERVER,\n                                {'python_restart_min_interval': \"HH:MM:SS\"},\n                                runas=ROOT_USER, logerr=True)\n        except PbsManagerError as e:\n            self.assertTrue(exp_err in e.msg[0],\n                            \"Error message is not expected\")\n\n    def test_non_manager(self):\n        \"\"\"\n        Test that hook values can not be set as operator or users.\n        \"\"\"\n        exp_err = \"Cannot set attribute, read only or insufficient permission\"\n        try:\n            self.server.manager(MGR_CMD_SET, SERVER,\n                                {'python_restart_max_hooks': 30},\n                                runas=OPER_USER, logerr=True)\n        except PbsManagerError as e:\n            self.assertIn(exp_err, e.msg[0],\n                          \"Error message is not 
expected\")\n        try:\n            self.server.manager(MGR_CMD_SET, SERVER,\n                                {'python_restart_max_objects': 2000},\n                                runas=OPER_USER, logerr=True)\n        except PbsManagerError as e:\n            self.assertIn(exp_err, e.msg[0],\n                          \"Error message is not expected\")\n        try:\n            self.server.manager(MGR_CMD_SET, SERVER,\n                                {'python_restart_min_interval': 10},\n                                runas=OPER_USER, logerr=True)\n        except PbsManagerError as e:\n            self.assertIn(exp_err, e.msg[0],\n                          \"Error message is not expected\")\n        exp_err = \"Unauthorized Request\"\n        try:\n            self.server.manager(MGR_CMD_SET, SERVER,\n                                {'python_restart_max_hooks': 30},\n                                runas=TEST_USER, logerr=True)\n        except PbsManagerError as e:\n            self.assertIn(exp_err, e.msg[0],\n                          \"Error message is not expected\")\n        try:\n            self.server.manager(MGR_CMD_SET, SERVER,\n                                {'python_restart_max_objects': 2000},\n                                runas=TEST_USER, logerr=True)\n        except PbsManagerError as e:\n            self.assertIn(exp_err, e.msg[0],\n                          \"Error message is not expected\")\n        try:\n            self.server.manager(MGR_CMD_SET, SERVER,\n                                {'python_restart_min_interval': 10},\n                                runas=TEST_USER, logerr=True)\n        except PbsManagerError as e:\n            self.assertIn(exp_err, e.msg[0],\n                          \"Error message is not expected\")\n\n    def test_log_message(self):\n        \"\"\"\n        Test that message logged in server_logs when values get set\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER,\n                       
     {'python_restart_max_hooks': 200},\n                            runas=ROOT_USER, logerr=True)\n        self.server.log_match(\"python_restart_max_hooks = 200\",\n                              max_attempts=5)\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'python_restart_max_objects': 2000},\n                            runas=ROOT_USER, logerr=True)\n        self.server.log_match(\"python_restart_max_objects = 2000\",\n                              max_attempts=5)\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'python_restart_min_interval': \"00:01:00\"},\n                            runas=ROOT_USER, logerr=True)\n        self.server.log_match(\"python_restart_min_interval = 00:01:00\",\n                              max_attempts=5)\n\n    def test_long_values(self):\n        \"\"\"\n        Test that very long values are accepted\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'python_restart_max_hooks': 2147483647},\n                            runas=ROOT_USER, logerr=True)\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'python_restart_max_objects': 2147483647},\n                            runas=ROOT_USER, logerr=True)\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'python_restart_min_interval': 2147483647},\n                            runas=ROOT_USER, logerr=True)\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'python_restart_min_interval': \"596523:00:00\"},\n                            runas=ROOT_USER, logerr=True)\n\n    def test_set_unset(self):\n        \"\"\"\n        Test that when unset, the attribute is not visible in qmgr.\n        Also, values will not change after a server restart.\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'python_restart_max_hooks': 20},\n                            
runas=ROOT_USER, logerr=True)\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'python_restart_max_objects': 20},\n                            runas=ROOT_USER, logerr=True)\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'python_restart_min_interval': \"00:00:20\"},\n                            runas=ROOT_USER, logerr=True)\n        # Restart server\n        self.server.restart()\n        self.server.expect(SERVER, {'python_restart_max_hooks': 20},\n                           op=SET, runas=ROOT_USER)\n        self.server.expect(SERVER, {'python_restart_max_objects': 20},\n                           op=SET, runas=ROOT_USER)\n        self.server.expect(SERVER, {'python_restart_min_interval': 20},\n                           op=SET, runas=ROOT_USER)\n        self.server.manager(MGR_CMD_UNSET, SERVER,\n                            'python_restart_max_hooks',\n                            runas=ROOT_USER, logerr=True)\n        self.server.manager(MGR_CMD_UNSET, SERVER,\n                            'python_restart_max_objects',\n                            runas=ROOT_USER, logerr=True)\n        self.server.manager(MGR_CMD_UNSET, SERVER,\n                            'python_restart_min_interval',\n                            runas=ROOT_USER, logerr=True)\n        # Restart server again\n        self.server.restart()\n        self.server.expect(SERVER, \"python_restart_max_hooks\",\n                           op=UNSET, runas=ROOT_USER)\n        self.server.expect(SERVER, \"python_restart_max_objects\",\n                           op=UNSET, runas=ROOT_USER)\n        self.server.expect(SERVER, \"python_restart_min_interval\",\n                           op=UNSET, runas=ROOT_USER)\n\n    def test_max_hooks(self):\n        \"\"\"\n        Test that python restarts at set interval\n        \"\"\"\n        # create a hook\n        hook_body = \"\"\"\nimport pbs\n\ne = pbs.event()\n\ns = pbs.server()\n\nlocalnode = 
pbs.get_local_nodename()\nvn = pbs.server().vnode(localnode)\npbs.event().accept()\n\"\"\"\n        a = {'event': [\"queuejob\", \"movejob\", \"modifyjob\", \"runjob\"],\n             'enabled': \"True\"}\n        self.server.create_import_hook(\"test\", a, hook_body, overwrite=True)\n        # Create workq2\n        a = {'queue_type': 'e', 'started': 't', 'enabled': 't'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, \"workq2\")\n        # Set max_hooks and min_interval so that further changes\n        # will generate a log message.\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'python_restart_max_hooks': 100},\n                            runas=ROOT_USER)\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'python_restart_min_interval': 30},\n                            runas=ROOT_USER)\n        # Need to run a job so these new settings are remembered\n        j = Job()\n        jid = self.server.submit(j)\n        # Set server log_events\n        self.server.manager(MGR_CMD_SET, SERVER, {\"log_events\": 2047})\n        # Set time to start scanning logs\n        time.sleep(1)\n        stime = time.time()\n        # Set max_hooks low so that only the max_hooks limit is hit\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'python_restart_max_hooks': 1},\n                            runas=ROOT_USER)\n        # Set min_interval to 3\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'python_restart_min_interval': 3},\n                            runas=ROOT_USER)\n        # Submit multiple jobs\n        for x in range(6):\n            j = Job()\n            j.set_attributes({ATTR_h: None})\n            j.set_sleep_time(1)\n            jid = self.server.submit(j)\n            self.server.expect(JOB, {'job_state': \"H\"}, id=jid)\n            self.server.alterjob(jid, {ATTR_N: \"yaya\"})\n            self.server.movejob(jid, \"workq2\")\n            self.server.rlsjob(jid, None)\n            time.sleep(1)\n        # Verify the logs and make sure that the Python interpreter is\n        # restarted every 3s\n        logs = self.server.log_match(\n            \"Restarting Python interpreter to reduce mem usage\",\n            allmatch=True, starttime=stime, max_attempts=8, n=\"ALL\")\n        self.assertTrue(len(logs) > 1)\n        log1 = logs[0][1]\n        log2 = logs[1][1]\n        tmp = log1.split(';')\n        # Convert the time into epoch time\n        time1 = int(self.logutils.convert_date_time(tmp[0]))\n        tmp = log2.split(';')\n        time2 = int(self.logutils.convert_date_time(tmp[0]))\n        # The difference between log messages should not be less than 3s\n        diff = time2 - time1\n        self.logger.info(\"Time difference between log messages is \" +\n                         str(diff) + \" seconds\")\n        # Leave a little wiggle room for slow systems\n        self.assertTrue(diff > 2, \"time between Python restart log messages\"\n                        \" (%s seconds) is too short;\"\n                        \" expected roughly 3 seconds\" % str(diff))\n        self.assertTrue(diff <= 10, \"time between Python restart log messages\"\n                        \" (%s seconds) is too long;\"\n                        \" expected roughly 3 seconds\" % str(diff))\n        # This message only gets printed if /proc/self/statm is present\n        if os.path.isfile(\"/proc/self/statm\"):\n            self.server.log_match(\"Current memory usage:\",\n                                  starttime=self.server.ctime,\n                                  max_attempts=5)\n        else:\n            self.server.log_match(\"unknown\", max_attempts=5)\n        # Verify other log messages\n        self.server.log_match(\"python_restart_max_hooks is now 1\",\n                              starttime=stime, max_attempts=5)\n        self.server.log_match(\"python_restart_min_interval is now 3\",\n                          
    starttime=stime, max_attempts=5)\n\n    def test_max_objects(self):\n        \"\"\"\n        Test that python restarts if the max objects limit has been met\n        \"\"\"\n        hook_body = \"\"\"\nimport pbs\npbs.event().accept()\n\"\"\"\n        a = {'event': [\"queuejob\", \"modifyjob\"], 'enabled': 'True'}\n        self.server.create_import_hook(\"test\", a, hook_body, overwrite=True)\n        # Set max_objects and min_interval so that further changes\n        # will generate a log message.\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'python_restart_max_objects': 1000},\n                            runas=ROOT_USER)\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'python_restart_min_interval': 30},\n                            runas=ROOT_USER)\n        # Need to run a job so these new settings are remembered\n        j = Job()\n        jid = self.server.submit(j)\n        # Set server log_events\n        self.server.manager(MGR_CMD_SET, SERVER, {\"log_events\": 2047})\n        # Set time to start scanning logs\n        time.sleep(1)\n        stime = time.time()\n        # Set max_objects only\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'python_restart_max_objects': 1},\n                            runas=ROOT_USER)\n        # Set min_interval to 1\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'python_restart_min_interval': '00:00:01'},\n                            runas=ROOT_USER)\n        # Submit held jobs\n        for x in range(3):\n            j = Job()\n            j.set_attributes({ATTR_h: None})\n            j.set_sleep_time(1)\n            jid = self.server.submit(j)\n            self.server.expect(JOB, {'job_state': \"H\"}, id=jid)\n            self.server.alterjob(jid, {ATTR_N: \"yaya\"})\n        # Verify that python is restarted\n        self.server.log_match(\n            \"Restarting Python interpreter to reduce mem usage\",\n            starttime=self.server.ctime, max_attempts=5)\n        # This message only gets printed if\n        # /proc/self/statm is present\n        if os.path.isfile(\"/proc/self/statm\"):\n            self.server.log_match(\n                \"Current memory usage:\",\n                starttime=self.server.ctime, max_attempts=5)\n        else:\n            self.server.log_match(\"unknown\", max_attempts=5)\n        # Verify other log messages\n        self.server.log_match(\n            \"python_restart_max_objects is now 1\",\n            starttime=stime, max_attempts=5)\n        self.server.log_match(\n            \"python_restart_min_interval is now 1\",\n            starttime=stime, max_attempts=5)\n\n    def test_no_restart(self):\n        \"\"\"\n        Test that if no limit is reached then the python interpreter\n        will not be restarted\n        \"\"\"\n        hook_body = \"\"\"\nimport pbs\npbs.event().accept()\n\"\"\"\n        a = {'event': \"queuejob\", 'enabled': \"True\"}\n        self.server.create_import_hook(\"test\", a, hook_body, overwrite=True)\n        # Set max_hooks, max_objects, and min_interval to large values\n        # to avoid restarting the Python interpreter.\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'python_restart_max_hooks': 10000},\n                            runas=ROOT_USER)\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'python_restart_max_objects': 10000},\n                            runas=ROOT_USER)\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'python_restart_min_interval': 10000},\n                            runas=ROOT_USER)\n        stime = time.time()\n        # Submit jobs\n        for x in range(10):\n            j = Job()\n            j.set_sleep_time(1)\n            jid = self.server.submit(j)\n        # Verify no restart message\n        msg = \"Restarting Python interpreter to reduce mem usage\"\n        self.server.log_match(msg, starttime=stime, max_attempts=8,\n                              existence=False)\n"
  },
  {
    "path": "test/tests/functional/pbs_python_test.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport subprocess\nfrom subprocess import PIPE, Popen\n\nfrom tests.functional import *\n\n\nclass Test_pbs_python(TestFunctional):\n    \"\"\"\n    This test suite tests pbs_python executable\n    and makes sure it works fine.\n    \"\"\"\n\n    def test_pbs_python(self):\n        \"\"\"\n        This method spawns a python process using\n        pbs_python and checks for the result\n        \"\"\"\n        fn = self.du.create_temp_file(prefix='test', suffix='.py',\n                                      body=\"print(\\\"Hello\\\")\", text=True)\n        self.logger.info(\"created temp python script \" + fn)\n        pbs_python = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                  \"bin\", \"pbs_python\")\n        msg = ['Hello']\n        cmd = [pbs_python] + [fn]\n        rc = self.du.run_cmd(cmd=cmd, sudo=True)\n        self.assertTrue('out' in rc)\n        self.assertEqual(rc['out'], msg)\n\n    def test_pbs_python_zero_size(self):\n        \"\"\"\n        This method verifies that there are no shenanigans\n        when using pbs.size('0mb') et al\n        \"\"\"\n        fn = self.du.create_temp_file(\n            prefix='test_0mb', suffix='.py',\n            body=\"import pbs\\n\"\n                 \"if pbs.size('10mb') > pbs.size('0mb'):\\n\"\n                 \"    pbs.event().accept()\\n\"\n                 
\"else:\\n\"\n                 \"    pbs.event().reject()\\n\",\n            text=True)\n        self.logger.info(\"created test python script \" + fn)\n        fn2 = self.du.create_temp_file(\n            prefix='dummy',\n            suffix='.in',\n            body=\"pbs.event().type=exechost_startup\\n\",\n            text=True)\n        self.logger.info(\"created dummy input file \" + fn2)\n\n        msg = \"pbs.event().accept=True\"\n        pbs_python = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                  \"bin\", \"pbs_python\")\n        cmd = [pbs_python] + ['--hook', '-i', fn2, fn]\n        self.logger.info(\"running %s\" % repr(cmd))\n        rc = self.du.run_cmd(cmd=cmd, sudo=True)\n        self.assertTrue('out' in rc, \"command has no output\")\n        combined_output = '\\n'.join(rc['out'])\n        self.assertTrue(msg in combined_output, \"Test hooklet rejected event\")\n        self.logger.info(\"Test hooklet accepted event\")\n"
  },
  {
    "path": "test/tests/functional/pbs_qdel.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\nimport re\n\n\nclass TestQdel(TestFunctional):\n    \"\"\"\n    This test suite contains tests for qdel\n    \"\"\"\n\n    def test_qdel_with_server_tagged_in_jobid(self):\n        \"\"\"\n        Test to make sure that qdel uses the server tagged in the jobid\n        instead of the PBS_SERVER conf setting\n        \"\"\"\n        self.du.set_pbs_config(confs={'PBS_SERVER': 'not-a-server'})\n        j = Job(TEST_USER)\n        j.set_attributes({ATTR_q: 'workq@' + self.server.hostname})\n        jid = self.server.submit(j)\n        try:\n            self.server.delete(jid)\n        except PbsDeleteError as e:\n            self.assertFalse(\n                'Unknown Host' in e.msg[0],\n                \"Error message is not expected as server name is \"\n                \"tagged in the jobid\")\n        self.du.set_pbs_config(confs={'PBS_SERVER': self.server.hostname})\n\n    def test_qdel_unknown(self):\n        \"\"\"\n        Test that qdel for an unknown job reports an 'Unknown Job Id' error\n        \"\"\"\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.delete(jid, wait=True)\n        try:\n            self.server.delete(jid)\n            self.fail(\"qdel didn't throw 'Unknown job id' error\")\n        except PbsDeleteError as e:\n            self.assertEqual(\"qdel: Unknown Job Id \" + jid, 
e.msg[0])\n\n    def test_qdel_history_job(self):\n        \"\"\"\n        Test deleting a history job after a custom resource is deleted.\n        The deletion of the history job happens in teardown.\n        \"\"\"\n        self.server.add_resource('foo')\n        a = {'job_history_enable': 'True'}\n        rc = self.server.manager(MGR_CMD_SET, SERVER, a)\n        hook_body = \"import pbs\\n\"\n        hook_body += \"e = pbs.event()\\n\"\n        hook_body += \"e.job.resources_used[\\\"foo\\\"] = \\\"10\\\"\\n\"\n        a = {'event': 'execjob_epilogue', 'enabled': 'True'}\n        self.server.create_import_hook(\"epi\", a, hook_body)\n        j = Job(TEST_USER)\n        j.set_sleep_time(10)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.expect(JOB, {'job_state': 'F'}, id=jid,\n                           extend='x', max_attempts=20)\n        msg = \"Resource allowed to be deleted\"\n        with self.assertRaises(PbsManagerError, msg=msg) as e:\n            self.server.manager(MGR_CMD_DELETE, RSC, id=\"foo\")\n        m = \"Resource busy on job\"\n        self.assertIn(m, e.exception.msg[0])\n        self.server.delete(jid, extend='deletehist')\n\n    def test_qdel_arrayjob_in_transit(self):\n        \"\"\"\n        Test deleting array jobs\n        soon after they have been signalled to run.\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'false'})\n        a = {'resources_available.ncpus': 6}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        j = Job(TEST_USER, attrs={\n            ATTR_J: '1-3', 'Resource_List.select': 'ncpus=1'})\n        job_set = []\n        for i in range(4):\n            job_set.append(self.server.submit(j))\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'true'})\n        self.server.delete(job_set)\n        # Make sure that the counters are not going negative\n        msg = 
\"job*has already been deleted from delete job list\"\n        self.scheduler.log_match(msg, existence=False,\n                                 max_attempts=3, regexp=True)\n        # Make sure the last two jobs don't start running\n        # while the deletion is in progress\n        for job in job_set[2:]:\n            jobid, server = job.split('.', 1)\n            arrjob = jobid[:-2] + '[1].' + server\n            msg = arrjob + \";Job Run at request of Scheduler\"\n            self.scheduler.log_match(msg, existence=False, max_attempts=3)\n\n    def test_qdel_history_job_rerun(self):\n        \"\"\"\n        Test rerunning a history job that was prematurely terminated due\n        to a downed mom.\n        \"\"\"\n        a = {'job_history_enable': 'True', 'job_history_duration': '5',\n             'job_requeue_timeout': '5', 'node_fail_requeue': '5',\n             'scheduler_iteration': '5'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        j = Job()\n        j.set_sleep_time(30)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        self.mom.stop()\n\n        # Force job to be prematurely terminated\n        try:\n            self.server.deljob(jid)\n        except PbsDeljobError as e:\n            err_msg = \"could not connect to MOM\"\n            self.assertTrue(err_msg in e.msg[0],\n                            \"Did not get the expected message\")\n            self.assertTrue(e.rc != 0, \"Exit code shows success\")\n        else:\n            raise self.failureException(\"qdel job did not return error\")\n\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n        self.mom.start()\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        # Upon rerun, finished status should be '92' (Finished)\n        a = {'job_state': 'F', 'substate': '92'}\n        self.server.expect(JOB, a, 
extend='x',\n                           offset=30, id=jid, interval=1)\n\n    def test_qdel_history_job_rerun_nx(self):\n        \"\"\"\n        Test rerunning a history job that was prematurely terminated due\n        to a downed mom.\n        \"\"\"\n        a = {'job_history_enable': 'True', 'job_history_duration': '5',\n             'job_requeue_timeout': '5', 'node_fail_requeue': '5',\n             'scheduler_iteration': '5'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        j = Job()\n        j.set_sleep_time(30)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        self.mom.stop()\n\n        # Force job to be prematurely terminated and try to delete it more than\n        # once.\n        err_msg = \"could not connect to MOM\"\n        msg = \"qdel job did not return error\"\n        with self.assertRaises(PbsDeljobError, msg=msg) as e:\n            self.server.deljob([jid, jid, jid, jid])\n        self.assertIn(err_msg, e.exception.msg[0])\n        self.assertNotEqual(e.exception.rc, 0)\n\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n        self.mom.start()\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        # Upon rerun, finished status should be '92' (Finished)\n        a = {'job_state': 'F', 'substate': '92'}\n        self.server.expect(JOB, a, extend='x',\n                           offset=1, id=jid, interval=1)\n\n    def test_qdel_same_jobid_nx_00(self):\n        \"\"\"\n        Test qdel deleting the job more than once in the same line.\n        \"\"\"\n        a = {'job_history_enable': 'True'}\n        rc = self.server.manager(MGR_CMD_SET, SERVER, a)\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: \"R\"}, id=jid)\n        self.server.delete([jid.split(\".\")[0], jid, jid, jid, jid], 
wait=True)\n        self.server.expect(JOB, {'job_state': 'F', 'substate': 91}, id=jid,\n                           extend='x', max_attempts=20)\n\n    def test_qdel_same_jobid_nx_01(self):\n        \"\"\"\n        Test qdel deleting the job more than once in the same line.\n        Done twice.\n        \"\"\"\n        a = {'job_history_enable': 'True'}\n        rc = self.server.manager(MGR_CMD_SET, SERVER, a)\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: \"R\"}, id=jid)\n        self.server.delete([jid, jid], wait=True)\n        self.server.expect(JOB, {'job_state': 'F', 'substate': 91}, id=jid,\n                           extend='x', max_attempts=20)\n\n        # This may take two or more attempts to break.\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: \"R\"}, id=jid)\n        self.server.delete([jid, jid], wait=True)\n        self.server.expect(JOB, {'job_state': 'F', 'substate': 91}, id=jid,\n                           extend='x', max_attempts=20)\n\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: \"R\"}, id=jid)\n        self.server.delete([jid, jid], wait=True)\n        self.server.expect(JOB, {'job_state': 'F', 'substate': 91}, id=jid,\n                           extend='x', max_attempts=20)\n\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: \"R\"}, id=jid)\n        self.server.delete([jid, jid], wait=True)\n        self.server.expect(JOB, {'job_state': 'F', 'substate': 91}, id=jid,\n                           extend='x', max_attempts=20)\n\n    def test_qdel_same_jobid_nx_02(self):\n        \"\"\"\n        Test qdel deleting the job more than once in the same line.\n        With rerun.\n        \"\"\"\n        a = {'job_history_enable': 'True'}\n        rc = self.server.manager(MGR_CMD_SET, 
SERVER, a)\n        attrs = {ATTR_r: 'y'}\n        j = Job(TEST_USER, attrs=attrs)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: \"R\"}, id=jid)\n        self.server.delete([jid, jid, jid, jid, jid], wait=True)\n        self.server.expect(JOB, {'job_state': 'F', 'substate': 91}, id=jid,\n                           extend='x', max_attempts=20)\n\n    def array_job_start(self, job_sleep_time, ncpus, sj_range=None):\n        \"\"\"\n        Start an array job and capture job and subjob info\n        \"\"\"\n        if not sj_range:\n            sj_range = f\"1-{ncpus}\"\n        elif isinstance(sj_range, int):\n            sj_range = f\"1-{sj_range}\"\n        a = {\n            # 'log_events': 4095,\n            'job_history_enable': 'True'\n        }\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {'resources_available.ncpus': ncpus}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        j = Job(TEST_USER, attrs={ATTR_J: sj_range})\n        j.set_sleep_time(job_sleep_time)\n        jid = self.server.submit(j)\n        array_id = jid.split(\"[\")[0]\n        sjm = re.match(r'(\\d+)-(\\d+)(:(\\d+))?', sj_range).groups()\n        sj_range_start = int(sjm[0])\n        sj_range_end = int(sjm[1])\n        sj_range_step = int(sjm[3]) if sjm[3] else 1\n        sjids = [j.create_subjob_id(jid, sjn) for sjn in\n                 range(sj_range_start, sj_range_end+1, sj_range_step or 1)]\n        return (jid, array_id, sjids)\n\n    def test_qdel_same_jobid_nx_array_00(self):\n        \"\"\"\n        Test qdel deleting the array job more than once in the same\n        line.\n        \"\"\"\n        jid, _, sjids = self.array_job_start(20, 6, 2)\n        self.server.expect(JOB, {ATTR_state: \"R\"}, id=sjids[0])\n        self.server.delete([jid] * 10, wait=True)\n        self.server.expect(JOB, {'job_state': 'F', 'substate': 91}, id=jid,\n                           extend='x', 
max_attempts=20)\n\n    def test_qdel_same_jobid_nx_array_01(self):\n        \"\"\"\n        Test qdel deleting the array job more than once in the same\n        line.\n        \"\"\"\n        jid, _, sjids = self.array_job_start(20, 6, '0-734:512')\n        self.server.expect(JOB, {ATTR_state: \"R\"}, id=sjids[0])\n        self.server.delete([jid] * 10, wait=True)\n        self.server.expect(JOB, {'job_state': 'F'}, id=jid,\n                           extend='x', max_attempts=60)\n\n    def test_qdel_same_jobid_nx_array_subjob_00(self):\n        \"\"\"\n        Test that the server handles deleting a running array subjob repeated\n        multiple times in the same operation (qdel command).\n        \"\"\"\n        jid, _, sjids = self.array_job_start(20, 6, 2)\n        for sjid in sjids:\n            self.server.expect(JOB, {ATTR_state: \"R\"}, id=sjid)\n        self.server.delete([sjids[0]] * 10, wait=True)\n        self.server.expect(JOB, {'job_state': 'F'}, id=jid,\n                           extend='x', max_attempts=60)\n\n    def test_qdel_same_jobid_nx_array_subjob_01(self):\n        \"\"\"\n        Test that the server handles deleting a running array subjob repeated\n        multiple times in the same operation (qdel command) where the array\n        specification contains a step.\n        \"\"\"\n        jid, _, sjids = self.array_job_start(20, 6, '0-734:512')\n        for sjid in sjids:\n            self.server.expect(JOB, {ATTR_state: \"R\"}, id=sjid)\n        self.server.delete([sjids[0]] * 10, wait=True)\n        self.server.expect(JOB, {'job_state': 'F'}, id=jid,\n                           extend='x', max_attempts=60)\n\n    @requirements(num_moms=1)\n    def test_qdel_same_jobid_nx_array_subjob_02(self):\n        \"\"\"\n        Test that the server handles deleting repeating overlapping ranges of\n        running array subjobs in the same operation (qdel command).\n        \"\"\"\n        jid, array_id, sjids = self.array_job_start(20, 
6, 12)\n        for sjid in sjids[:6]:\n            self.server.expect(JOB, {ATTR_state: \"R\"}, id=sjid)\n\n        sj_range1 = f\"{array_id}[2-4]\"\n        sj_range2 = f\"{array_id}[3-5]\"\n        sj_list = [sj_range1, sj_range2] * 10\n        self.server.delete(sj_list, wait=True)\n        self.server.expect(\n            JOB, {'job_state': 'F'}, id=jid, extend='x', max_attempts=20)\n\n    @requirements(num_moms=1)\n    def test_qdel_same_jobid_nx_array_subjob_03(self):\n        \"\"\"\n        Test that the server handles deleting repeating overlapping ranges of\n        running array subjobs in multiple operations (multiple qdel\n        commands backgrounded).\n        \"\"\"\n        jid, array_id, sjids = self.array_job_start(20, 6, 12)\n        for sjid in sjids[:6]:\n            self.server.expect(JOB, {ATTR_state: \"R\"}, id=sjid)\n\n        sj_range1 = f\"{array_id}[2-4]\"\n        sj_range2 = f\"{array_id}[3-5]\"\n        sj_list = [sj_range1, sj_range2] * 10\n        # roughly equivalent to \"qdel & qdel & qdel & wait\"\n        self.server.delete(sj_list, wait=False)\n        self.server.delete(sj_list, wait=False)\n        self.server.delete(sj_list, wait=True)\n        self.server.expect(\n            JOB, {'job_state': 'F'}, id=jid, extend='x', max_attempts=20)\n\n    @requirements(num_moms=1)\n    def test_qdel_same_jobid_nx_array_subjob_04(self):\n        \"\"\"\n        Test that the server handles deleting repeating overlapping ranges of\n        array subjobs in a single operation, where a subset are running, but\n        none have completed, and some are queued.\n        \"\"\"\n        jid, array_id, sjids = self.array_job_start(20, 6, 12)\n        for sjid in sjids[:6]:\n            self.server.expect(JOB, {ATTR_state: \"R\"}, id=sjid)\n\n        sj_range1 = f\"{array_id}[1-4]\"\n        sj_range2 = f\"{array_id}[3-8]\"\n        sj_list = [sj_range1, sj_range2] * 10\n        self.server.delete(sj_list, wait=True)\n        
self.server.expect(\n            JOB, {'job_state': 'F'}, id=jid, extend='x', max_attempts=20)\n\n    @requirements(num_moms=1)\n    def test_qdel_same_jobid_nx_array_subjob_05(self):\n        \"\"\"\n        Test that the server handles deleting repeating overlapping ranges of\n        array subjobs in a single operation, where some subjobs have completed,\n        a subset are running, and some are queued.\n        \"\"\"\n        jid, array_id, sjids = self.array_job_start(20, 4, 12)\n        for sjid in sjids[:6]:\n            self.server.expect(JOB, {ATTR_state: \"R\"}, id=sjid)\n        sj_range1 = f\"{array_id}[1-4]\"\n        sj_range2 = f\"{array_id}[3-8]\"\n        sj_list = [sj_range1, sj_range2] * 10\n        self.server.delete(sj_list, wait=True)\n        self.server.expect(\n            JOB, {'job_state': 'F'}, id=jid, extend='x', max_attempts=20)\n\n    @requirements(num_moms=1)\n    def test_qdel_same_jobid_nx_array_subjob_06(self):\n        \"\"\"\n        Test that the server handles deleting repeating ranges of\n        array subjobs in a single operation, where some subjobs have completed,\n        a subset are running, and some are queued.\n        \"\"\"\n        jid, array_id, sjids = self.array_job_start(20, 4, 12)\n        for sjid in sjids[:6]:\n            self.server.expect(JOB, {ATTR_state: \"R\"}, id=sjid)\n        sj_range1 = f\"{array_id}[1-4]\"\n        sj_list = [sj_range1] * 20\n        self.server.delete(sj_list, wait=True)\n        self.server.expect(\n            JOB, {'job_state': 'F'}, id=jid, extend='x', max_attempts=20)\n\n    @requirements(num_moms=1)\n    def test_qdel_same_jobid_nx_array_subjob_07(self):\n        \"\"\"\n        Test that the server handles deleting repeating ranges of\n        array subjobs in a single operation, where some subjobs have completed,\n        a subset are running, and some are queued.\n        \"\"\"\n        jid, array_id, sjids = self.array_job_start(20, 4, 12)\n        for sjid in 
sjids[:6]:\n            self.server.expect(JOB, {ATTR_state: \"R\"}, id=sjid)\n        sj_range1 = f\"{array_id}[1-4]\"\n        sj_range2 = f\"{array_id}[3-6]\"\n        sj_range3 = f\"{array_id}[5-8]\"\n\n        sj_list = [sj_range1, sj_range2, sj_range3]\n        self.server.delete(sj_list, wait=True)\n        self.server.expect(\n            JOB, {'job_state': 'F'}, id=jid, extend='x', max_attempts=20)\n\n    # TODO: add rerun nx for job arrays\n\n    def test_qdel_with_list_of_jobids(self):\n        \"\"\"\n        Test deleting a list of job ids containing unknown, queued,\n        and running jobs.\n        \"\"\"\n\n        self.server.manager(\n            MGR_CMD_SET, SERVER, {\n                'job_history_enable': 'True'})\n        fail_msg = 'qdel didn\\'t throw unknown job error'\n        with self.assertRaises(PbsDeleteError, msg=fail_msg) as c:\n            j = Job(TEST_USER)\n            j.set_sleep_time(5)\n            unknown_jid = \"100000\"\n            jid = self.server.submit(j)\n            stripped_jid = jid.split('.')[0]\n            self.server.expect(JOB, {'job_state': 'R'}, id=stripped_jid)\n            j = Job(TEST_USER)\n            j.set_sleep_time(1000)\n            running_jid = self.server.submit(j)\n            self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n            j = Job(TEST_USER)\n            queued_jid = self.server.submit(j)\n\n            self.server.expect(JOB, {'job_state': 'R'}, id=running_jid)\n            self.server.expect(JOB, {'job_state': 'Q'}, id=queued_jid)\n            job_set = [unknown_jid, running_jid, queued_jid, stripped_jid]\n            self.server.delete(job_set)\n        msg = f'qdel: Unknown Job Id {unknown_jid}'\n        self.assertTrue(c.exception.msg[0].startswith(msg))\n        self.server.expect(JOB, {'job_state': 'F'}, id=stripped_jid,\n                           extend='x')\n        self.server.expect(JOB, {'job_state': 'F'}, id=running_jid,\n              
             extend='x')\n        self.server.expect(JOB, {'job_state': 'F'}, id=queued_jid,\n                           extend='x')\n\n    def test_qdel_with_duplicate_jobids_in_list(self):\n        \"\"\"\n        Test that the server does not crash when deleting duplicate\n        job ids\n        \"\"\"\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        jid_list = [jid, jid, jid, jid]\n        self.server.delete(jid_list)\n        rv = self.server.isUp()\n        self.assertTrue(rv, \"Server crashed\")\n        j = Job(TEST_USER)\n        jid1 = self.server.submit(j)\n        jid2 = self.server.submit(j)\n        jid3 = self.server.submit(j)\n        jid4 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        jid_list = [jid1, jid2, jid1, jid2, jid3, jid4, jid4, jid3]\n        self.server.delete(jid_list)\n        rv = self.server.isUp()\n        self.assertTrue(rv, \"Server crashed\")\n\n    def test_qdel_with_duplicate_array_jobs(self):\n        \"\"\"\n        Test that the server does not crash when deleting duplicate\n        array jobs\n        \"\"\"\n        j = Job(TEST_USER, {\n            ATTR_J: '1-20', 'Resource_List.select': 'ncpus=1'})\n        jid1 = self.server.submit(j)\n        jid2 = self.server.submit(j)\n        jid3 = self.server.submit(j)\n        jid4 = self.server.submit(j)\n\n        self.server.expect(JOB, {'job_state': 'B'}, jid1)\n        jid_list = [jid1, jid1, jid2, jid1, jid3, jid4, jid3, jid2]\n        self.server.delete(jid_list)\n        rv = self.server.isUp()\n        self.assertTrue(rv, \"Server crashed\")\n\n    def test_qdel_with_duplicate_array_non_array_jobs(self):\n        \"\"\"\n        Test that the server does not crash when deleting duplicate\n        array and non-array jobs\n        \"\"\"\n        j = Job(TEST_USER, 
{'Resource_List.select': 'ncpus=1'})\n        jid3 = self.server.submit(j)\n        jid4 = self.server.submit(j)\n\n        self.server.expect(JOB, {'job_state': 'B'}, jid1)\n        jid_list = [jid1, jid1, jid2, jid1, jid3, jid4, jid3, jid2]\n        self.server.delete(jid_list)\n        rv = self.server.isUp()\n        self.assertTrue(rv, \"Server crashed\")\n\n    def test_qdel_with_overlapping_array_jobs(self):\n        \"\"\"\n        Test that the server does not crash when deleting overlapping\n        array subjob ranges\n        \"\"\"\n        j = Job(TEST_USER, {\n            ATTR_J: '1-20', 'Resource_List.select': 'ncpus=1'})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'B'}, jid)\n        subjob1 = jid.replace('[]', '[1-6]')\n        subjob2 = jid.replace('[]', '[5-8]')\n        jid_list = [subjob1, subjob2]\n        self.server.delete(jid_list)\n        rv = self.server.isUp()\n        self.assertTrue(rv, \"Server crashed\")\n\n        subjob3 = jid.replace('[]', '[11-18]')\n        subjob4 = jid.replace('[]', '[15-19]')\n        jid_list = [subjob3, subjob4]\n        self.server.delete(jid_list)\n        rv = self.server.isUp()\n        self.assertTrue(rv, \"Server crashed\")\n\n    def test_qdel_mix_of_job_and_arrayjob_range(self):\n        \"\"\"\n        Test that the server handles deleting a mix of a normal job\n        and an array subjob range in one request\n        \"\"\"\n        j = Job(TEST_USER, {'Resource_List.select': 'ncpus=1'})\n        jid = self.server.submit(j)\n\n        ajid, array_id, sjids = self.array_job_start(20, 2, 2)\n\n        sj_list = [jid, f\"{array_id}[1]\", f\"{array_id}[2]\"]\n        self.server.delete(sj_list, wait=True)\n        self.server.expect(\n            JOB, {'job_state': 'F'}, id=jid, extend='x', max_attempts=20)\n        self.server.expect(\n            JOB, {'job_state': 'F'}, id=ajid, extend='x', max_attempts=20)\n"
  },
  {
    "path": "test/tests/functional/pbs_qmgr.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport os\n\nfrom ptl.lib.pbs_ifl_mock import *\nfrom tests.functional import *\n\n\n@tags('commands')\nclass TestQmgr(TestFunctional):\n\n    \"\"\"\n    Test suite for qmgr command\n    \"\"\"\n\n    resc_flags = [None, 'n', 'h', 'nh', 'q', 'f', 'fh', 'm', 'mh']\n    resc_flags_ctl = [None, 'r', 'i']\n    objs = [QUEUE, SERVER, NODE, JOB, RESV]\n    resc_name = \"ptl_custom_res\"\n    avail_resc_name = 'resources_available.' 
+ resc_name\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        self.obj_map = {QUEUE: self.server.default_queue,\n                        SERVER: self.server.name,\n                        NODE: self.mom.shortname,\n                        JOB: None, RESV: None}\n\n    def __check_whitespace_prefix(self, line):\n        \"\"\"\n        Check whether the whitespace prefix for the line specified is correct\n\n        :param line: the line to check\n        :type line: String\n        \"\"\"\n        if line is None:\n            return\n\n        if line[0] == \" \":\n            # Spaces are the prefix for new attribute lines\n            # Make sure that this is a new attribute line\n            self.assertTrue(\"=\" in line)\n        elif line[0] == \"\\t\":\n            # Tabs are prefix for line extensions\n            # Make sure that this is a line extension\n            self.assertTrue(\"=\" not in line)\n\n    def test_listcmd_whitespaces(self):\n        \"\"\"\n        Check that the prefix for new attributes listed out by qmgr list\n        are spaces and that for line extensions is a tab\n        \"\"\"\n        fn = self.du.create_temp_file()\n        node_prefix = \"vn\"\n        nodename = node_prefix + \"[0]\"\n        vndef_file = None\n        qmgr_path = os.path.join(self.server.pbs_conf[\"PBS_EXEC\"], \"bin\",\n                                 \"qmgr\")\n        if not os.path.isfile(qmgr_path):\n            self.server.skipTest(\"qmgr binary not found!\")\n\n        try:\n            # Check 1: New attributes are prefixed with spaces and not tabs\n            # Execute qmgr -c 'list sched' and store output in a temp file\n            if self.du.is_localhost(self.server.hostname) is True:\n                qmgr_cmd = [qmgr_path, \"-c\", \"list sched\"]\n            else:\n                qmgr_cmd = [qmgr_path, \"-c\", \"\\'list sched\\'\"]\n            with open(fn, \"w+\") as tempfd:\n                ret = 
self.du.run_cmd(self.server.hostname, qmgr_cmd,\n                                      stdout=tempfd)\n\n                self.assertTrue(ret['rc'] == 0)\n                for line in tempfd:\n                    self.__check_whitespace_prefix(line)\n\n            # Check 2: line extensions are prefixed with tabs and not spaces\n            # Create a random long, comma separated string\n            blah = \"blah\"\n            long_string = \"\"\n            for i in range(49):\n                long_string += blah + \",\"\n            long_string += blah\n            # Create a new vnode\n            attrs = {ATTR_rescavail + \".ncpus\": 2}\n            self.mom.create_vnodes(attrs, 1, vname=node_prefix)\n            # Set 'comment' attribute to the long string we created above\n            attrs = {ATTR_comment: long_string}\n            self.server.manager(MGR_CMD_SET, VNODE, attrs, nodename)\n            # Execute \"qmgr 'list node vn[0]'\"\n            # The comment attribute should generate a line extension\n            if self.du.is_localhost(self.server.hostname) is True:\n                qmgr_cmd = [qmgr_path, \"-c\", \"list node \" + nodename]\n            else:\n                qmgr_cmd = [qmgr_path, \"-c\", \"\\'list node \" + nodename + \"\\'\"]\n            with open(fn, \"w+\") as tempfd:\n                ret = self.du.run_cmd(self.server.hostname, qmgr_cmd,\n                                      stdout=tempfd)\n                self.assertTrue(ret['rc'] == 0)\n                for line in tempfd:\n                    self.__check_whitespace_prefix(line)\n\n        finally:\n            # Cleanup\n            # Remove the temporary file\n            os.remove(fn)\n            # Delete the vnode created\n            if vndef_file is not None:\n                self.mom.delete_vnodes()\n                self.server.manager(MGR_CMD_DELETE, VNODE, id=nodename)\n\n    def test_multi_attributes(self):\n        \"\"\"\n        Test to verify that if multiple 
attributes are set\n        simultaneously and one of them fails, then none\n        of them is set.\n        \"\"\"\n\n        a = {'queue_type': 'execution',\n             'enabled': 'True',\n             'started': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='workq2')\n\n        a = {'partition': 'foo'}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, id='workq')\n\n        a = {'partition': 'bar'}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, id='workq2')\n\n        a = {'queue': 'workq', 'partition': 'bar'}\n        try:\n            self.server.manager(MGR_CMD_SET, NODE, a,\n                                id=self.mom.shortname)\n        except PbsManagerError as e:\n            self.assertNotEqual(e.rc, '0')\n            # Due to PP-1073 checking for the partial message\n            msg = \" is not part of queue for node\"\n            self.logger.info(\"looking for error, %s\" % msg)\n            self.assertTrue(msg in e.msg[0])\n        self.server.expect(NODE, 'queue', op=UNSET, id=self.mom.shortname)\n\n    def set_and_test_comment(self, comment):\n        \"\"\"\n        Set the node's comment, then print it and re-import it\n        \"\"\"\n        a = {'comment': comment}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n        qmgr_path = \\\n            os.path.join(self.server.pbs_conf['PBS_EXEC'], 'bin', 'qmgr')\n        qmgr_cmd_print = qmgr_path + \\\n            (' -c \"p n %s comment\"' % self.mom.shortname)\n        ret = self.du.run_cmd(self.server.hostname,\n                              cmd=qmgr_cmd_print, as_script=True)\n        self.assertEqual(ret['rc'], 0)\n        fn = self.du.create_temp_file()\n        for line in ret['out']:\n            if '#' in line:\n                continue\n            if 'create node' in line:\n                continue\n            with open(fn, 'w') as f:\n                f.write(line)\n            qmgr_cmd_set = qmgr_path + ' < ' + fn\n            
ret_s = self.du.run_cmd(self.server.hostname,\n                                    cmd=qmgr_cmd_set, as_script=True)\n            self.assertEqual(ret_s['rc'], 0)\n\n    def create_resource_helper(self, resc_type, resc_flag, ctrl_flag):\n        \"\"\"\n        Create a resource with the associated type, flag, and control flag\n\n        resc_type - Type of the resource\n\n        resc_flag - Permissions/flags associated with the resource\n\n        ctrl_flag - Control flags\n        \"\"\"\n        attr = {}\n        if resc_type:\n            attr['type'] = resc_type\n        if resc_flag:\n            attr['flag'] = resc_flag\n        if ctrl_flag:\n            if 'flag' in attr:\n                attr['flag'] += ctrl_flag\n            else:\n                attr['flag'] = ctrl_flag\n        if attr is not None:\n            try:\n                rc = self.server.manager(MGR_CMD_CREATE, RSC, attr,\n                                         id=self.resc_name)\n            except PbsManagerError as e:\n                msg = 'Erroneous to have'\n                self.assertIn(msg, e.msg[0])\n                return False\n        else:\n            rv = self.server.resources[self.resc_name].attributes['type']\n            if resc_type is None:\n                self.assertEqual(rv, 'string')\n            else:\n                self.assertEqual(rv, resc_type)\n\n            if ctrl_flag is not None:\n                resc_flag += ctrl_flag\n            if resc_flag:\n                rv = self.server.resources[self.resc_name].attributes['flag']\n                self.assertEqual(sorted(rv), sorted(resc_flag))\n        return True\n\n    def delete_resource_helper(self, resc_type, resc_flg, ctrl_flg,\n                               obj_type, obj_id):\n        \"\"\"\n        Verify behavior upon deleting a resource that is set on a PBS object.\n\n        resc_type - The type of resource\n\n        resc_flg - The permissions/flags of the resource\n\n        ctrl_flg - The control 
flags of the resource\n\n        obj_type - The object type (server, queue, node, job, reservation) on\n        which the resource is set.\n\n        obj_id - The object identifier/name\n        \"\"\"\n        ar = 'resources_available.' + self.resc_name\n        resc_map = {'long': 1, 'float': 1.0, 'string': 'abc', 'boolean': False,\n                    'string_array': 'abc', 'size': '1gb'}\n        if resc_type is not None:\n            val = resc_map[resc_type]\n        else:\n            val = 'abc'\n        objs = [JOB, RESV]\n        if obj_type in objs:\n            attr = {'Resource_List.' + self.resc_name: val}\n            if obj_type == JOB:\n                j = Job(TEST_USER1, attr)\n            else:\n                j = Reservation(TEST_USER1, attr)\n            try:\n                jid = self.server.submit(j)\n            except PbsSubmitError as e:\n                jid = e.rv\n            if ctrl_flg is not None and ('r' in ctrl_flg or 'i' in ctrl_flg):\n                self.assertEqual(jid, None)\n                self.server.manager(MGR_CMD_DELETE, RSC, id=self.resc_name)\n                return\n            if obj_type == RESV:\n                a = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n                self.server.expect(RESV, a, id=jid)\n            self.assertNotEqual(jid, None)\n        else:\n            self.server.manager(MGR_CMD_SET, obj_type, {ar: val},\n                                id=obj_id)\n        try:\n            rc = self.server.manager(MGR_CMD_DELETE, RSC, id=self.resc_name)\n        except PbsManagerError as e:\n            if obj_type in objs:\n                self.assertNotEqual(e.rc, 0)\n                m = \"Resource busy on \" + PBS_OBJ_MAP[obj_type]\n                self.assertIn(m, e.msg[0])\n                self.server.delete(jid)\n                self.server.expect(obj_type, 'queue', op=UNSET)\n                self.server.manager(MGR_CMD_DELETE, RSC, id=self.resc_name)\n            else:\n              
  self.assertEqual(e.rc, 0)\n                d = self.server.status(obj_type, ar, id=obj_id)\n                if d:\n                    self.assertNotIn(ar, d[0])\n\n    def test_string_single_quoting(self):\n        \"\"\"\n        Test to verify that if a string attribute has a double quote,\n        the value is single-quoted correctly\n        \"\"\"\n        self.set_and_test_comment('This is \"my\" node.')\n\n    def test_string_double_quoting(self):\n        \"\"\"\n        Test to verify that if a string attribute has a quote, the value\n        is double-quoted correctly\n        \"\"\"\n        self.set_and_test_comment(\"This node isn't good.\")\n\n    def test_string_type_resource_create_delete(self):\n        \"\"\"\n        Test behavior of string type resource creation and deletion\n        by all possible and supported types and flags.\n        \"\"\"\n        for k, v in self.obj_map.items():\n            for resc_flag in self.resc_flags:\n                for ctrl_flag in self.resc_flags_ctl:\n                    rv = self.create_resource_helper('string', resc_flag,\n                                                     ctrl_flag)\n                    if rv:\n                        self.delete_resource_helper('string', resc_flag,\n                                                    ctrl_flag, k, v)\n\n    def test_long_type_resource_create_delete(self):\n        \"\"\"\n        Test behavior of long type resource creation and deletion\n        by all possible and supported types and flags.\n        \"\"\"\n        for k, v in self.obj_map.items():\n            for resc_flag in self.resc_flags:\n                for ctrl_flag in self.resc_flags_ctl:\n                    rv = self.create_resource_helper('long', resc_flag,\n                                                     ctrl_flag)\n                    if rv:\n                        self.delete_resource_helper('long', resc_flag,\n                                                    ctrl_flag, k, 
v)\n\n    def test_float_type_resource_create_delete(self):\n        \"\"\"\n        Test behavior of float type resource creation and deletion\n        by all possible and supported types and flags.\n        \"\"\"\n        for k, v in self.obj_map.items():\n            for resc_flag in self.resc_flags:\n                for ctrl_flag in self.resc_flags_ctl:\n                    rv = self.create_resource_helper('float', resc_flag,\n                                                     ctrl_flag)\n                    if rv:\n                        self.delete_resource_helper('float', resc_flag,\n                                                    ctrl_flag, k, v)\n\n    def test_boolean_type_resource_create_delete(self):\n        \"\"\"\n        Test behavior of boolean type resource creation and deletion\n        by all possible and supported types and flags.\n        \"\"\"\n        for k, v in self.obj_map.items():\n            for resc_flag in self.resc_flags:\n                for ctrl_flag in self.resc_flags_ctl:\n                    rv = self.create_resource_helper('boolean', resc_flag,\n                                                     ctrl_flag)\n                    if rv:\n                        self.delete_resource_helper('boolean', resc_flag,\n                                                    ctrl_flag, k, v)\n\n    def test_size_type_resource_create_delete(self):\n        \"\"\"\n        Test behavior of size type resource creation and deletion\n        by all possible and supported types and flags.\n        \"\"\"\n        for k, v in self.obj_map.items():\n            for resc_flag in self.resc_flags:\n                for ctrl_flag in self.resc_flags_ctl:\n                    rv = self.create_resource_helper('size', resc_flag,\n                                                     ctrl_flag)\n                    if rv:\n                        self.delete_resource_helper('size', resc_flag,\n                                                    
ctrl_flag, k, v)\n\n    def test_string_array_type_resource_create_delete(self):\n        \"\"\"\n        Test behavior of string_array type resource creation and deletion\n        by all possible and supported types and flags.\n        \"\"\"\n        for k, v in self.obj_map.items():\n            for resc_flag in self.resc_flags:\n                for ctrl_flag in self.resc_flags_ctl:\n                    rv = self.create_resource_helper('string_array', resc_flag,\n                                                     ctrl_flag)\n                    if rv:\n                        self.delete_resource_helper('string_array', resc_flag,\n                                                    ctrl_flag, k, v)\n\n    def test_none_type_resource_create_delete(self):\n        \"\"\"\n        Test behavior of None type resource creation and deletion\n        by all possible and supported types and flags.\n        \"\"\"\n        for k, v in self.obj_map.items():\n            for resc_flag in self.resc_flags:\n                for ctrl_flag in self.resc_flags_ctl:\n                    rv = self.create_resource_helper(None, resc_flag,\n                                                     ctrl_flag)\n                    if rv:\n                        self.delete_resource_helper(None, resc_flag,\n                                                    ctrl_flag, k, v)\n"
  },
  {
    "path": "test/tests/functional/pbs_qrun.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\nimport os\nimport signal\n\n\nclass TestQrun(TestFunctional):\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        # set ncpus to a known value, 2 here\n        a = {'resources_available.ncpus': 2}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        self.pbs_exec = self.server.pbs_conf['PBS_EXEC']\n        self.qrun = os.path.join(self.pbs_exec, 'bin', 'qrun')\n\n    def test_invalid_host_val(self):\n        \"\"\"\n        Tests that pbs_server should not crash when the node list in\n        qrun is ill-formed\n        \"\"\"\n        j1 = Job(TEST_USER)\n        # submit a multi-chunk job\n        j1 = Job(attrs={'Resource_List.select':\n                        'ncpus=2:host=%s+ncpus=2:host=%s' %\n                        (self.mom.shortname, self.mom.shortname)})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, jid1)\n        exec_vnode = '\"\\'(%s)+(%s)\\'\"' % \\\n                     (self.mom.shortname, self.mom.shortname)\n        err_msg = 'qrun: Unknown node  \"\\'(%s)+(%s)\\'\"' % \\\n            (self.mom.shortname, self.mom.shortname)\n        try:\n            self.server.runjob(jobid=jid1, location=exec_vnode)\n        except PbsRunError as e:\n            self.assertIn(err_msg, e.msg[0])\n            self.logger.info('As 
expected qrun throws error: ' + err_msg)\n        else:\n            msg = \"Able to run job successfully\"\n            self.fail(msg)\n        msg = \"Server is not up\"\n        self.assertTrue(self.server.isUp(), msg)\n        self.logger.info(\"As expected server is up and running\")\n        j2 = Job(TEST_USER)\n        # submit a sleep job\n        j2 = Job(attrs={'Resource_List.select': 'ncpus=3'})\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, jid2)\n        try:\n            self.server.runjob(jobid=jid2, location=exec_vnode)\n        except PbsRunError as e:\n            self.assertIn(err_msg, e.msg[0])\n            self.logger.info('As expected qrun throws error: ' + err_msg)\n        else:\n            msg = \"Able to run job successfully\"\n            self.fail(msg)\n        msg = \"Server is not up\"\n        self.assertTrue(self.server.isUp(), msg)\n        self.logger.info(\"As expected server is up and running\")\n\n    def test_qrun_hangs(self):\n        \"\"\"\n        This test submits 500 jobs, each in a different equivalence\n        class, turns off scheduling, and uses qrun on a job to verify\n        that qrun does not hang.\n        \"\"\"\n        node = self.mom.shortname\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'False'})\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {'resources_available.ncpus': 1}, id=node)\n        for walltime in range(1, 501):\n            j = Job(TEST_USER)\n            a = {'Resource_List.walltime': walltime}\n            j.set_attributes(a)\n            if walltime == 500:\n                jid = self.server.submit(j)\n            else:\n                self.server.submit(j)\n        self.logger.info(\"Submitted 500 jobs with different walltime\")\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n        
self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        time.sleep(1)\n        now = time.time()\n        pid = os.fork()\n        if pid == 0:\n            try:\n                self.server.runjob(jobid=jid)\n                self.logger.info(\"Successfully runjob. Child process exit.\")\n                os._exit(0)\n            except PbsRunError as e:\n                self.logger.info(\"Runjob throws error: \" + e.msg[0])\n        else:\n            try:\n                self.scheduler.log_match(\"Starting Scheduling Cycle\",\n                                         interval=5, starttime=now,\n                                         max_attempts=10)\n                self.logger.info(\"No hangs. Parent process exit\")\n            except PtlLogMatchError:\n                os.kill(pid, signal.SIGKILL)\n                os.waitpid(pid, 0)\n                self.logger.info(\"Runjob hung. Child process exit.\")\n                self.fail(\"Qrun didn't start another sched cycle\")\n\n    def test_qrun_subjob(self):\n        \"\"\"\n        This test tests if PBS is able to qrun an array subjob\n        \"\"\"\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n        jid = self.server.submit(Job())\n        self.server.expect(JOB, {'job_state': 'R'}, jid)\n        j = Job(TEST_USER, {ATTR_J: '1-5'})\n        arr_jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, arr_jid)\n        subj2 = j.create_subjob_id(arr_jid, 2)\n        self.server.runjob(jobid=subj2)\n        self.server.expect(JOB, {'job_state': 'R'}, subj2)\n        self.server.expect(JOB, {'job_state': 'S'}, jid)\n"
  },
  {
    "path": "test/tests/functional/pbs_qselect.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestQselect(TestFunctional):\n    \"\"\"\n    Test suite for qselect command\n    \"\"\"\n    def test_qselect_buffer_overflow(self):\n        \"\"\"\n        Check that various qselect option arguments does not buffer overflow\n        \"\"\"\n        # test -q option\n        qselect_cmd = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                   'bin', 'qselect')\n        ret = self.du.run_cmd(cmd=[qselect_cmd, '-q',\n                                   ('a' * 30) + '@' + ('b' * 30)])\n        self.assertNotEqual(ret, None)\n        self.assertIn('err', ret)\n        self.assertIn('qselect: illegally formed destination: aaaaaaaaaaaaaaaa'\n                      'aaaaaaaaaaaaaa@bbbbbbbbbbbbbbbbbbbbbbbbbbbbbb',\n                      ret['err'])\n        # test -c\n        ret = self.du.run_cmd(cmd=[qselect_cmd, '-c.abcd.w'])\n        self.assertNotEqual(ret, None)\n        self.assertIn('err', ret)\n        self.assertIn('qselect: illegal -c value', ret['err'])\n        ret = self.du.run_cmd(cmd=[qselect_cmd, '-c.eq.abcdefg'])\n        self.assertNotEqual(ret, None)\n        self.assertIn('err', ret)\n        self.assertIn('qselect: illegal -c value', ret['err'])\n        # test -a\n        ret = self.du.run_cmd(cmd=[qselect_cmd, '-a.ne.' 
+ ('1' * 100)])\n        self.assertNotEqual(ret, None)\n        self.assertIn('err', ret)\n        self.assertIn('qselect: illegal -a value', ret['err'])\n        ret = self.du.run_cmd(cmd=[qselect_cmd, '-a.abcd.5001011212'])\n        self.assertNotEqual(ret, None)\n        self.assertIn('err', ret)\n        self.assertIn('qselect: illegal -a value', ret['err'])\n        # test accounting string buf and optarg buf using -A\n        ret = self.du.run_cmd(cmd=[qselect_cmd, '-A', ('a' * 300)])\n        self.assertNotEqual(ret, None)\n        self.assertIn('err', ret)\n        self.assertEqual(ret['err'], [])\n        # test -l\n        ret = self.du.run_cmd(cmd=[qselect_cmd, '-l',\n                                   ('a' * 300) + '.abcd.' + ('b' * 300)])\n        self.assertNotEqual(ret, None)\n        self.assertIn('err', ret)\n        self.assertIn('qselect: illegal -l value', ret['err'])\n        # test -N\n        ret = self.du.run_cmd(cmd=[qselect_cmd, '-N', ('a' * 300)])\n        self.assertNotEqual(ret, None)\n        self.assertIn('err', ret)\n        self.assertIn('qselect: illegal -N value', ret['err'])\n        # test -u\n        ret = self.du.run_cmd(cmd=[qselect_cmd, '-u',\n                                   ('a' * 300) + '@' + ('b' * 300)])\n        self.assertNotEqual(ret, None)\n        self.assertIn('err', ret)\n        self.assertIn('qselect: illegal -u value', ret['err'])\n        # test -s\n        ret = self.du.run_cmd(cmd=[qselect_cmd, '-s', 'ABCD'])\n        self.assertNotEqual(ret, None)\n        self.assertIn('err', ret)\n        self.assertIn('qselect: illegal -s value', ret['err'])\n        # test -t\n        ret = self.du.run_cmd(cmd=[qselect_cmd, '-t',\n                                   ('a' * 90) + '.eq.' + ('1' * 90)])\n        self.assertNotEqual(ret, None)\n        self.assertIn('err', ret)\n        self.assertIn('qselect: illegal -t value', ret['err'])\n"
  },
  {
    "path": "test/tests/functional/pbs_qstat.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestQstat(TestFunctional):\n    \"\"\"\n    This test suite validates output of qstat with various options\n    \"\"\"\n\n    def test_qstat_pt(self):\n        \"\"\"\n        Test that checks correct output for qstat -pt\n        \"\"\"\n\n        attr = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, attr,\n                            id=self.mom.shortname)\n\n        job_count = 10\n        j = Job(TEST_USER)\n        j.set_sleep_time(5)\n        j.set_attributes({ATTR_J: '1-' + str(job_count)})\n        jid = self.server.submit(j)\n\n        self.server.expect(JOB, {'job_state': 'B'}, id=jid)\n\n        qstat_cmd = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                 'bin', 'qstat')\n        qstat_cmd_pt = [qstat_cmd, '-pt', str(jid)]\n\n        # wait for first subjob to finish before doing qstat -pt\n        sj1 = j.create_subjob_id(jid, 1)\n        self.server.expect(JOB, {'job_state': 'X'}, id=sj1)\n\n        ret = self.du.run_cmd(self.server.hostname, cmd=qstat_cmd_pt)\n        self.assertEqual(ret['rc'], 0,\n                         'Qstat returned with non-zero exit status')\n        qstat_out = '\\n'.join(ret['out'])\n\n        sjids = [j.create_subjob_id(jid, x) for x in range(1, job_count + 1)]\n        for sjid in sjids:\n            if 
len(sjid) > 17:\n                sjid = sjid[0:16] + '*'\n            self.assertIn(sjid, qstat_out, 'Job %s not in output' % sjid)\n            sj_escaped = re.escape(sjid)\n            # check that each subjob is either running/queued or 100 percent\n            # done in X state\n            match = re.search(\n                sj_escaped + r'\\s+\\S+\\s+\\S+\\s+(--\\s+[RQ]|100\\s+X)\\s+\\S+',\n                qstat_out)\n            self.assertIsNotNone(match, 'Job output does not match')\n\n    def test_qstat_qselect(self):\n        \"\"\"\n        Test to check that qstat can query more than 150 jobs at a time\n        without any connection issues.\n        \"\"\"\n        self.server.restart()\n        j = Job(TEST_USER)\n        for i in range(150):\n            self.server.submit(j)\n        ret_msg = 'Too many open connections.'\n        qselect_cmd = ' `' + \\\n            os.path.join(\n                self.server.client_conf['PBS_EXEC'],\n                'bin',\n                'qselect') + '`'\n        qstat_cmd = os.path.join(\n            self.server.client_conf['PBS_EXEC'], 'bin', 'qstat')\n        final_cmd = qstat_cmd + qselect_cmd\n        ret = self.du.run_cmd(self.server.hostname, final_cmd,\n                              as_script=True)\n        if ret['rc'] != 0:\n            self.assertNotIn(ret_msg, ret['err'][0])\n\n    def test_qstat_n_ip(self):\n        \"\"\"\n        Test qstat -n output reports the correct node name\n        when the node is created using an IP address as the node name.\n        \"\"\"\n        self.server.manager(MGR_CMD_DELETE, NODE, None, '')\n        ipaddr = socket.gethostbyname(self.mom.hostname)\n        attr_A = {'Mom': self.mom.hostname}\n        self.server.manager(MGR_CMD_CREATE, NODE, id=ipaddr, attrib=attr_A)\n        self.server.expect(NODE, {'state': 'free'}, id=ipaddr)\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        qstat_cmd = 
os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                 'bin', 'qstat')\n        qstat_cmd_n = [qstat_cmd, '-n', str(jid)]\n        ret = self.du.run_cmd(self.server.hostname, cmd=qstat_cmd_n)\n        self.assertEqual(ret['rc'], 0,\n                         'Qstat returned with non-zero exit status')\n        qstat_out = '\\n'.join(ret['out'])\n        self.assertIn(ipaddr, qstat_out,\n                      \"Incorrect node name in qstat -n when \"\n                      \"node created using IP address\")\n\n    def test_qstat_n_fqdn(self):\n        \"\"\"\n        Test qstat -n output reports task slot and processor info\n        when node is created using FQDN.\n        \"\"\"\n        self.server.manager(MGR_CMD_DELETE, NODE, None, '')\n        self.server.manager(MGR_CMD_CREATE, NODE, id=self.mom.hostname)\n        self.server.expect(NODE, {'state': 'free'}, id=self.mom.hostname)\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        qstat_cmd = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                 'bin', 'qstat')\n        qstat_cmd_n = [qstat_cmd, '-n', str(jid)]\n        ret = self.du.run_cmd(self.server.hostname, cmd=qstat_cmd_n)\n        self.assertEqual(ret['rc'], 0,\n                         'Qstat returned with non-zero exit status')\n        qstat_out = '\\n'.join(ret['out'])\n        self.assertNotEqual(re.search(r\"%s/([0-9]+)\"\n                                      % re.escape(self.mom.shortname),\n                                      qstat_out), None, \"The exec host does\"\n                            \" not contain the task slot number\")\n"
  },
  {
    "path": "test/tests/functional/pbs_qstat_2servers.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestQstatTwoServers(TestFunctional):\n\n    \"\"\"\n    This test suite checks that qstat works correctly when there\n    are 2 PBS servers set up\n    \"\"\"\n\n    def setUp(self):\n        if len(self.servers) != 2 or self.server.client in self.servers:\n            self.skipTest(\"This test needs two servers and one client\")\n        # Because of a bug in PTL, having moms on respective server hosts\n        # doesn't work, so the server hosts need to be passed as nomom hosts\n        svrnames = list(self.servers.keys())\n        if \"nomom\" not in self.conf or \\\n                svrnames[0] not in self.conf[\"nomom\"] or \\\n                svrnames[1] not in self.conf[\"nomom\"]:\n            self.skipTest(\"This test needs the server hosts to be passed\"\n                          \" as nomom hosts: -p servers=<host1>:<host2>,\"\n                          \"nomom=<host1>:<host2>\")\n        TestFunctional.setUp(self)\n\n    def test_qstat_req_server(self):\n        _m = self.server.get_op_mode()\n        if _m != PTL_CLI:\n            self.skipTest(\"Test only supported for CLI mode\")\n\n        self.server1 = list(self.servers.values())[0]\n        self.server2 = list(self.servers.values())[1]\n        a = {'scheduling': 'false', 'flatuid': 'true'}\n        self.server1.manager(MGR_CMD_SET, SERVER, a)\n        
self.server2.manager(MGR_CMD_SET, SERVER, a)\n\n        j = Job(TEST_USER)\n        jid = self.server1.submit(j)\n        destination = '@%s' % self.server2.hostname\n        self.server1.movejob(jobid=jid, destination=destination)\n        self.server1.status(JOB, id=jid + '@%s' % self.server2.hostname,\n                            runas=str(TEST_USER))\n        expmsg = \"Type 19 request received\"\\\n                 \" from %s@%s\" % (str(TEST_USER), self.server.client)\n        self.server1.log_match(msg=expmsg, existence=False, max_attempts=5)\n        self.server2.log_match(msg=expmsg)\n"
  },
  {
    "path": "test/tests/functional/pbs_qstat_count.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestqstatStateCount(TestFunctional):\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        # set ncpus to a known value, 2 here\n        a = {'resources_available.ncpus': 2}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n\n    def submit_waiting_job(self, timedelta):\n        \"\"\"\n        Submit a job in W state using -a option.\n        The time specified for -a is current time + timedelta.\n        \"\"\"\n        attribs = {ATTR_a: BatchUtils().convert_seconds_to_datetime(\n            int(time.time()) + timedelta)}\n        j = Job(TEST_USER, attribs)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'W'}, id=jid)\n        return jid\n\n    def find_state_counts(self):\n        \"\"\"\n        From the output of qstat -Bf, parses the number of jobs in R, H, W\n        and Q states and the value of total_jobs. Calculates the total number\n        of jobs based on individual counts parsed. 
Returns these values in a\n        dictionary.\n        \"\"\"\n        counts = {}\n        # Get output of qstat\n        qstat = self.server.status(SERVER)\n        state_count = qstat[0]['state_count'].split()\n        all_state_count = 0\n        for s in state_count:\n            state = s.split(':')\n            # Check for negative value\n            self.assertGreaterEqual(\n                int(state[1]), 0, 'state count has negative values')\n            counts[state[0]] = int(state[1])\n            all_state_count = all_state_count + int(state[1])\n        counts['all_state_count'] = all_state_count\n        counts['total_jobs'] = int(qstat[0]['total_jobs'])\n        # Find queued count from output of qstat\n        counts['expected_queued_count'] = (counts['total_jobs'] -\n                                           counts['Held'] -\n                                           counts['Waiting'] -\n                                           counts['Running'])\n        return counts\n\n    def verify_count(self):\n        \"\"\"\n        The function does the following checks based on the output of\n        qstat -Bf:\n        1. total_jobs should match the number of jobs submitted\n        2. 
queued_count should match total_jobs minus the number of jobs in\n        state other than Q.\n        (each job uses ncpus=1)\n        \"\"\"\n        counts = self.find_state_counts()\n        self.assertEqual(counts['total_jobs'],\n                         counts['all_state_count'], 'Job count incorrect')\n        self.assertEqual(counts['expected_queued_count'], counts['Queued'],\n                         'Queued count incorrect')\n\n    def test_queued_no_restart(self):\n        \"\"\"\n        The test case verifies that the reported queued_count in qstat -Bf\n        without a server restart is equal to the total_jobs - number of jobs in\n        state other than Q.\n        (each job uses ncpus=1)\n        \"\"\"\n        jid = []\n        # submit 4 jobs to ensure some jobs are in state Q as available ncpus=2\n        for _ in range(4):\n            j = Job(TEST_USER)\n            jid.append(self.server.submit(j))\n\n        a = {ATTR_h: None}\n        j = Job(TEST_USER, a)\n        self.server.submit(j)\n\n        self.submit_waiting_job(600)\n\n        # Wait for jobs to go in R state\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid[0])\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid[1])\n        self.verify_count()\n\n    def test_queued_restart(self):\n        \"\"\"\n        The test case verifies that the reported queued_count in qstat -Bf\n        is equal to total_jobs - number of jobs in state other than Q,\n        even after the server is restarted.\n        (each job uses ncpus=1)\n        \"\"\"\n        jid = []\n        # submit 4 jobs to ensure some jobs are in state Q as available ncpus=2\n        for _ in range(4):\n            j = Job(TEST_USER)\n            jid.append(self.server.submit(j))\n\n        a = {ATTR_h: None}\n        j = Job(TEST_USER, a)\n        self.server.submit(j)\n\n        self.submit_waiting_job(600)\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid[0])\n        
self.server.expect(JOB, {'job_state': 'R'}, id=jid[1])\n\n        self.server.restart()\n        self.verify_count()\n\n    def test_queued_no_restart_multiple_queue(self):\n        \"\"\"\n        The test case verifies that the queued_count reported in the output\n        of qstat -Bf is equal to total_jobs - running jobs, without a server\n        restart.\n        (each job uses ncpus=1)\n        \"\"\"\n        # create 2 execution queues\n        qname = ['workq1', 'workq2']\n        for que in qname:\n            a = {\n                'queue_type': 'Execution',\n                'enabled': 'True',\n                'started': 'True'}\n            self.server.manager(MGR_CMD_CREATE, QUEUE, a, que)\n\n        q1_attr = {ATTR_queue: 'workq1'}\n        q2_attr = {ATTR_queue: 'workq2'}\n\n        # submit 1 job per queue to ensure a running job in each queue,\n        # then submit 2 more jobs per queue i.e. overall 3 jobs in each queue\n        j = Job(TEST_USER, q1_attr)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        j = Job(TEST_USER, q2_attr)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        for _ in range(2):\n            j = Job(TEST_USER, q1_attr)\n            self.server.submit(j)\n            j = Job(TEST_USER, q2_attr)\n            self.server.submit(j)\n\n        self.verify_count()\n\n    def test_queued_restart_multiple_queue(self):\n        \"\"\"\n        The test case verifies that the queued_count reported in the output\n        of qstat -Bf is equal to total_jobs - running jobs, even after the\n        server is restarted.\n        (each job uses ncpus=1)\n        \"\"\"\n        qname = ['workq1', 'workq2']\n        for que in qname:\n            a = {\n                'queue_type': 'Execution',\n                'enabled': 'True',\n                'started': 'True'}\n            self.server.manager(MGR_CMD_CREATE, QUEUE, a, que)\n\n   
     q1_attr = {ATTR_queue: 'workq1'}\n        q2_attr = {ATTR_queue: 'workq2'}\n\n        # submit 1 job per queue to ensure a running job in each queue,\n        # then submit 2 more jobs per queue i.e. overall 3 jobs in each queue\n        j = Job(TEST_USER, q1_attr)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        j = Job(TEST_USER, q2_attr)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        for _ in range(2):\n            j = Job(TEST_USER, q1_attr)\n            self.server.submit(j)\n            j = Job(TEST_USER, q2_attr)\n            self.server.submit(j)\n\n        self.server.restart()\n        self.verify_count()\n\n    def test_queued_sched_false(self):\n        \"\"\"\n        This test case verifies that the value of queued_count in the output\n        of qstat -Bf matches the number of jobs submitted (each using ncpus=1),\n        as scheduling is set to False.\n        \"\"\"\n        a = {'scheduling': 'False'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        for _ in range(4):\n            j = Job(TEST_USER)\n            self.server.submit(j)\n        self.server.restart()\n        self.verify_count()\n\n    def test_wait_to_queued(self):\n        \"\"\"\n        This test case verifies that when a job state changes from W to Q after\n        server is restarted, the value of queued_count reported in the\n        output of qstat -Bf is as expected.\n        \"\"\"\n        a = {\n            ATTR_stagein: 'inputData@' +\n            self.server.hostname +\n            ':' + os.path.join('noDir', 'nofile')}\n        j = Job(TEST_USER, a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'W'}, id=jid,\n                           offset=30, interval=2)\n\n        jid1 = self.submit_waiting_job(10)\n        j = Job(TEST_USER)\n        jid2 = self.server.submit(j)\n        j = Job(TEST_USER)\n  
      jid3 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid1, offset=10)\n        self.server.restart()\n        self.verify_count()\n\n    def test_job_state_count(self):\n        \"\"\"\n        Testing if jobs in the 'W' state will cause\n        the state_count to go negative or incorrect\n        \"\"\"\n        # Failing stage-in operation, to put job into the waiting state\n        a = {\n            ATTR_stagein: 'inputData@' +\n            self.server.hostname +\n            ':/noDir/nofile'}\n        j = Job(TEST_USER, a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'W'}, id=jid,\n                           offset=30, interval=2)\n        # Restart server\n        self.server.restart()\n        self.verify_count()\n"
  },
  {
    "path": "test/tests/functional/pbs_qstat_formats.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\nimport json\n\n\n@tags('commands')\nclass TestQstatFormats(TestFunctional):\n    \"\"\"\n    This test suite validates output of qstat for\n    various formats\n    \"\"\"\n\n    def parse_dsv(self, jid, qstat_type, delimiter=None):\n        \"\"\"\n        Common function to parse qstat dsv output using delimiter\n        \"\"\"\n        if delimiter:\n            delim = \"-D\" + str(delimiter)\n        else:\n            delim = \" \"\n        if qstat_type == \"job\":\n            cmd = ' -f -F dsv ' + delim + \" \" + str(jid)\n            qstat_cmd_dsv = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                         'bin', 'qstat') + cmd\n            qstat_cmd = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                     'bin', 'qstat') + ' -f ' + str(jid)\n        elif qstat_type == \"server\":\n            qstat_cmd_dsv = os.path.join(self.server.pbs_conf[\n                'PBS_EXEC'], 'bin', 'qstat') + ' -Bf -F dsv ' + delim\n            qstat_cmd = os.path.join(self.server.pbs_conf[\n                'PBS_EXEC'], 'bin', 'qstat') + ' -Bf '\n        elif qstat_type == \"queue\":\n            qstat_cmd_dsv = os.path.join(self.server.pbs_conf[\n                'PBS_EXEC'], 'bin', 'qstat') + ' -Qf -F dsv ' + delim\n            qstat_cmd = 
os.path.join(self.server.pbs_conf[\n                'PBS_EXEC'], 'bin', 'qstat') + ' -Qf '\n        rv = self.du.run_cmd(self.server.hostname, cmd=qstat_cmd)\n        attrs_qstatf = []\n        for line in rv['out']:\n            attr = line.split(\"=\")\n            if not re.match(r'[\\t]', attr[0]):\n                attrs_qstatf.append(attr[0].strip())\n        attrs_qstatf.pop()\n        ret = self.du.run_cmd(self.server.hostname, cmd=qstat_cmd_dsv)\n        qstat_attrs = []\n        for line in ret['out']:\n            if delimiter:\n                attr_vals = line.split(str(delimiter))\n            else:\n                attr_vals = line.split(\"|\")\n            for item in attr_vals:\n                qstat_attr = item.split(\"=\")\n                qstat_attrs.append(qstat_attr[0])\n        for attr in attrs_qstatf:\n            if attr not in qstat_attrs:\n                self.fail(attr + \" is missing\")\n\n    def parse_json(self, dictitems, qstat_attr):\n        \"\"\"\n        Common function for parsing all values in json output\n        \"\"\"\n        for key, val in dictitems.items():\n            qstat_attr.append(str(key))\n            if isinstance(val, dict):\n                for key, val in val.items():\n                    qstat_attr.append(str(key))\n                    if isinstance(val, dict):\n                        self.parse_json(val, qstat_attr)\n        return qstat_attr\n\n    def get_qstat_attribs(self, obj_type):\n        \"\"\"\n        Common function to get the qstat attributes in default format.\n        Attributes returned by this function are used to validate the\n        '-F json' format output.\n        The dictionary of attributes as returned by status() cannot\n        be used directly because some attributes are printed differently\n        in '-F json' format.  
Hence this function returns a modified\n        attributes list.\n        obj_type: Can be SERVER, QUEUE or JOB for qstat -Bf, qstat -Qf\n              and qstat -f respectively\n        \"\"\"\n        attrs = self.server.status(obj_type)\n        qstat_attrs = []\n\n        for key, val in attrs[0].items():\n            # qstat -F json output does not\n            # print the 'id' attribute. Its value\n            # is printed instead.\n            if key == 'id':\n                qstat_attrs.append(str(val))\n            else:\n                # Extract keys coming after '.' in 'qstat -f' output so they\n                # can be matched with 'qstat -f -F json' format.\n                # This is because some attributes, like below, are represented\n                # differently in 'qstat -f' output and 'qstat -f -F json'\n                # outputs\n                #\n                # Example:\n                # qstat -f output:\n                #   default_chunk.ncpus = 1\n                #   default_chunk.mem = 1gb\n                #   Resource_List.ncpus = 1\n                #   Resource_List.nodect = 1\n                #\n                # qstat -f -F json output:\n                #   \"default_chunk\":{\n                #      \"ncpus\":1\n                #      \"mem\":1gb\n                #   }\n                #    \"Resource_List\":{\n                #       \"ncpus\":1,\n                #      \"nodect\":1,\n                #   }\n\n                k = key.split('.')\n                if k[0] not in qstat_attrs:\n                    qstat_attrs.append(str(k[0]))\n                if len(k) == 2:\n                    qstat_attrs.append(str(k[1]))\n\n            # Extract individual variables under 'Variable_List' from\n            # 'qstat -f' output so they can be matched with 'qstat -f -F json'\n            # format.\n            # Example:\n            #\n            # qstat -f output:\n            #    Variable_List = PBS_O_LANG=en_US.UTF-8,\n          
  #        PBS_O_PATH=/usr/lib64/qt-3.3/bin\n            #        PBS_O_SHELL=/bin/bash,\n            #        PBS_O_WORKDIR=/home/pbsuser,\n            #        PBS_O_SYSTEM=Linux,PBS_O_QUEUE=workq,\n            #\n            # qstat -f -F json output:\n            #    \"Variable_List\":{\n            #        \"PBS_O_LANG\":\"en_US.UTF-8\",\n            #        \"PBS_O_PATH\":\"/usr/lib64/qt-3.3/bin:/usr/local/bin\n            #        \"PBS_O_SHELL\":\"/bin/bash\",\n            #        \"PBS_O_WORKDIR\":\"/home/pbsuser,\n            #        \"PBS_O_SYSTEM\":\"Linux\",\n            #        \"PBS_O_QUEUE\":\"workq\",\n            #    },\n\n            if key == ATTR_v:\n                for v in val.split(','):\n                    qstat_attrs.append(str(v).split('=')[0])\n        return qstat_attrs\n\n    def test_qstat_dsv(self):\n        \"\"\"\n        test qstat outputs job info in dsv format with default delimiter pipe\n        \"\"\"\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': \"R\"}, id=jid)\n        self.parse_dsv(jid, \"job\")\n\n    def test_qstat_bf_dsv(self):\n        \"\"\"\n        test qstat outputs server info in dsv format with default\n        delimiter pipe\n        \"\"\"\n        self.parse_dsv(None, \"server\")\n\n    def test_qstat_qf_dsv(self):\n        \"\"\"\n        test qstat outputs queue info in dsv format with default delimiter pipe\n        \"\"\"\n        self.parse_dsv(None, \"queue\")\n\n    def test_qstat_dsv_semicolon(self):\n        \"\"\"\n        test qstat outputs job info in dsv format with semicolon as delimiter\n        \"\"\"\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': \"R\"}, id=jid)\n        self.parse_dsv(jid, \"job\", \";\")\n\n    def test_qstat_bf_dsv_semicolon(self):\n        \"\"\"\n        test qstat outputs server info in dsv format with semicolon as\n        
delimiter\n        \"\"\"\n        self.parse_dsv(None, \"server\", \";\")\n\n    def test_qstat_qf_dsv_semicolon(self):\n        \"\"\"\n        test qstat outputs queue info in dsv format with semicolon as delimiter\n        \"\"\"\n        self.parse_dsv(None, \"queue\", \";\")\n\n    def test_qstat_dsv_comma_ja(self):\n        \"\"\"\n        test qstat outputs job array info in dsv format with comma as delimiter\n        \"\"\"\n        j = Job(TEST_USER)\n        j.set_attributes({ATTR_J: '1-3'})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': \"B\"}, id=jid)\n        self.parse_dsv(jid, \"job\", \",\")\n\n    def test_qstat_bf_dsv_comma(self):\n        \"\"\"\n        test qstat outputs server info in dsv format with comma as delimiter\n        \"\"\"\n        self.parse_dsv(None, \"server\", \",\")\n\n    def test_qstat_qf_dsv_comma(self):\n        \"\"\"\n        test qstat outputs queue info in dsv format with comma as delimiter\n        \"\"\"\n        self.parse_dsv(None, \"queue\", \",\")\n\n    def test_qstat_dsv_string(self):\n        \"\"\"\n        test qstat outputs job info in dsv format with string as delimiter\n        \"\"\"\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': \"R\"}, id=jid)\n        self.parse_dsv(jid, \"job\", \"QWERTY\")\n\n    def test_qstat_bf_dsv_string(self):\n        \"\"\"\n        test qstat outputs server info in dsv format with string as delimiter\n        \"\"\"\n        self.parse_dsv(None, \"server\", \"QWERTY\")\n\n    def test_qstat_qf_dsv_string(self):\n        \"\"\"\n        test qstat outputs queue info in dsv format with string as delimiter\n        \"\"\"\n        self.parse_dsv(None, \"queue\", \"QWERTY\")\n\n    def test_oneline_dsv(self):\n        \"\"\"\n        submit a single job and check that the number of attributes parsed\n        from dsv output equals the number parsed from one-line output.\n        \"\"\"\n     
   j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        time.sleep(1)\n        qstat_cmd = os.path.join(\n            self.server.pbs_conf['PBS_EXEC'], 'bin', 'qstat')\n        [qstat_dsv_script, qstat_dsv_out, qstat_oneline_script,\n         qstat_oneline_out] = [DshUtils().create_temp_file() for _ in range(4)]\n        f = open(qstat_dsv_script, 'w')\n        f.write(qstat_cmd + ' -f -F dsv ' + str(jid) + ' > ' + qstat_dsv_out)\n        f.close()\n        run_script = \"sh \" + qstat_dsv_script\n        dsv_ret = self.du.run_cmd(\n            self.server.hostname,\n            cmd=run_script)\n        f = open(qstat_dsv_out, 'r')\n        dsv_out = f.read()\n        f.close()\n        dsv_attr_count = len(dsv_out.replace(r\"\\|\", \"\").split(\"|\"))\n        f = open(qstat_oneline_script, 'w')\n        f.write(qstat_cmd + ' -f -w ' + str(jid) + ' > ' + qstat_oneline_out)\n        f.close()\n        run_script = 'sh ' + qstat_oneline_script\n        oneline_ret = self.du.run_cmd(\n            self.server.hostname, cmd=run_script)\n        oneline_attr_count = sum(1 for line in open(\n            qstat_oneline_out) if not line.isspace())\n        # map() is lazy in Python 3; iterate explicitly so the temporary\n        # files are actually removed\n        for tmp_file in [qstat_dsv_script, qstat_dsv_out,\n                         qstat_oneline_script, qstat_oneline_out]:\n            os.remove(tmp_file)\n        self.assertEqual(dsv_attr_count, oneline_attr_count)\n\n    def test_json(self):\n        \"\"\"\n        Check whether the qstat json output can be parsed using\n        python json module\n        \"\"\"\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        [qstat_json_script, qstat_json_out] = [DshUtils().create_temp_file()\n                                               for _ in range(2)]\n        qstat_cmd = os.path.join(\n            self.server.pbs_conf['PBS_EXEC'], 'bin', 'qstat')\n        f = open(qstat_json_script, 'w')\n        f.write(qstat_cmd + ' -f -F json ' + str(jid) + ' > ' + qstat_json_out)\n        f.close()\n        
self.du.chmod(path=qstat_json_script, mode=0o755)\n        run_script = 'sh ' + qstat_json_script\n        json_ret = self.du.run_cmd(\n            self.server.hostname, cmd=run_script)\n        with open(qstat_json_out, 'r') as f:\n            data = f.read()\n        # map() is lazy in Python 3; iterate explicitly so the temporary\n        # files are actually removed\n        for tmp_file in [qstat_json_script, qstat_json_out]:\n            os.remove(tmp_file)\n        try:\n            json.loads(data)\n        except ValueError:\n            self.fail(\"qstat json output failed to parse\")\n\n    def test_qstat_tag(self):\n        \"\"\"\n        Test <jsdl-hpcpa:Executable> tag is displayed with \"Executable\"\n        while doing qstat -f\n        \"\"\"\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        qstat_cmd = os.path.join(self.server.pbs_conf['PBS_EXEC'], 'bin',\n                                 'qstat') + ' -f ' + str(jid)\n        ret = self.du.run_cmd(self.server.hostname, cmd=qstat_cmd, sudo=True)\n        qstat_out = \"\\n\".join(ret['out'])\n        # if the Executable attribute is shown, it must carry the jsdl tag\n        if qstat_out.find('Executable') != -1:\n            self.assertNotEqual(qstat_out.find('<jsdl-hpcpa:Executable>'), -1,\n                                '<jsdl-hpcpa:Executable> tag missing')\n\n    @tags('smoke')\n    def test_qstat_json_valid(self):\n        \"\"\"\n        Test json output of qstat -f is in valid format when queried as a\n        super user and all attributes displayed in qstat are present in output\n        \"\"\"\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': \"R\"}, id=jid)\n\n        qstat_cmd_json = os.path.join(self.server.pbs_conf[\n            'PBS_EXEC'], 'bin', 'qstat') + ' -f -F json ' + str(jid)\n        ret = self.du.run_cmd(self.server.hostname, cmd=qstat_cmd_json)\n        qstat_out = \"\\n\".join(ret['out'])\n        try:\n            json_object = json.loads(qstat_out)\n        except ValueError:\n            self.fail(\"qstat json output failed to parse\")\n\n        json_only_attrs = ['Jobs', 'timestamp', 'pbs_version', 'pbs_server']\n        attrs_qstatf = self.get_qstat_attribs(JOB)\n        qstat_json_attr = []\n\n        for 
key, val in json_object.items():\n            qstat_json_attr.append(str(key))\n            if isinstance(val, dict):\n                self.parse_json(val, qstat_json_attr)\n\n        for attr in attrs_qstatf:\n            self.assertIn(attr, qstat_json_attr, attr + \" is missing\")\n\n        for attr in json_only_attrs:\n            self.assertIn(attr, qstat_json_attr, attr + \" is missing\")\n\n    def test_qstat_json_valid_multiple_jobs(self):\n        \"\"\"\n        Test json output of qstat -f is in valid format when multiple jobs are\n        queried and make sure that all attributes displayed in qstat are\n        present in the output\n        \"\"\"\n        j = Job(TEST_USER)\n        jid1 = self.server.submit(j)\n        jid2 = self.server.submit(j)\n        qstat_cmd_json = os.path.join(self.server.pbs_conf['PBS_EXEC'], 'bin',\n                                      'qstat') + \\\n            ' -f -F json ' + str(jid1) + ' ' + str(jid2)\n        ret = self.du.run_cmd(self.server.hostname, cmd=qstat_cmd_json)\n        qstat_out = \"\\n\".join(ret['out'])\n        try:\n            json.loads(qstat_out)\n        except ValueError:\n            self.fail(\"qstat json output failed to parse\")\n\n    def test_qstat_json_valid_multiple_jobs_p(self):\n        \"\"\"\n        Test json output of qstat -f is in valid format when multiple jobs are\n        queried and make sure that attributes are displayed with the `-p`\n        option. When -p is passed, only the Resource_List is requested. 
An\n        attribute with type resource list has to be the last attribute\n        in order to hit the bug.\n        \"\"\"\n        a = {'resources_available.ncpus': 4}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        jid2 = self.server.submit(j)\n        jid3 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': \"R\"}, id=jid)\n        self.server.expect(JOB, {'job_state': \"R\"}, id=jid2)\n        self.server.expect(JOB, {'job_state': \"R\"}, id=jid3)\n        qstat_cmd_json = os.path.join(self.server.pbs_conf['PBS_EXEC'], 'bin',\n                                      'qstat') + ' -fp -F json '\n        ret = self.du.run_cmd(self.server.hostname, cmd=qstat_cmd_json)\n        qstat_out = '\\n'.join(ret['out'])\n        try:\n            js = json.loads(qstat_out)\n        except ValueError:\n            self.assertTrue(False, 'JSON failed to load.')\n\n        self.assertIn('Jobs', js)\n        self.assertIn(jid, js['Jobs'])\n        self.assertIn('Resource_List', js['Jobs'][jid])\n        self.assertIn(jid2, js['Jobs'])\n        self.assertIn('Resource_List', js['Jobs'][jid2])\n        self.assertIn(jid3, js['Jobs'])\n        self.assertIn('Resource_List', js['Jobs'][jid3])\n\n    def test_qstat_json_valid_user(self):\n        \"\"\"\n        Test json output of qstat -f is in valid format when queried as\n        normal user\n        \"\"\"\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': \"R\"}, id=jid)\n        qstat_cmd_json = os.path.join(self.server.pbs_conf[\n            'PBS_EXEC'], 'bin', 'qstat') + ' -f -F json ' + str(jid)\n        ret = self.du.run_cmd(self.server.hostname,\n                              cmd=qstat_cmd_json, runas=TEST_USER)\n        qstat_out = \"\\n\".join(ret['out'])\n        try:\n            json_object = json.loads(qstat_out)\n        except ValueError 
as e:\n            self.fail(\"qstat json output failed to parse\")\n\n    def test_qstat_json_valid_ja(self):\n        \"\"\"\n        Test json output of qstat -f of Job arrays is in valid format\n        \"\"\"\n        j = Job(TEST_USER)\n        j.set_attributes({ATTR_J: '1-3'})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': \"B\"}, id=jid)\n        qstat_cmd_json = os.path.join(self.server.pbs_conf[\n            'PBS_EXEC'], 'bin', 'qstat') + ' -f -F json ' + str(jid)\n        ret = self.du.run_cmd(self.server.hostname, cmd=qstat_cmd_json)\n        qstat_out = \"\\n\".join(ret['out'])\n        try:\n            json.loads(qstat_out)\n        except ValueError:\n            self.fail(\"qstat json output failed to parse\")\n\n    @tags('smoke')\n    def test_qstat_bf_json_valid(self):\n        \"\"\"\n        Test json output of qstat -Bf is in valid format and all\n        attributes displayed in qstat are present in output\n        \"\"\"\n        qstat_cmd_json = os.path.join(self.server.pbs_conf[\n            'PBS_EXEC'], 'bin', 'qstat') + ' -Bf -F json'\n        ret = self.du.run_cmd(self.server.hostname, cmd=qstat_cmd_json)\n        qstat_out = \"\\n\".join(ret['out'])\n        try:\n            json_object = json.loads(qstat_out)\n        except ValueError:\n            self.fail(\"qstat json output failed to parse\")\n\n        json_only_attrs = ['Server', 'timestamp', 'pbs_version', 'pbs_server']\n        attrs_qstatbf = self.get_qstat_attribs(SERVER)\n\n        qstat_json_attr = []\n        for key, val in json_object.items():\n            qstat_json_attr.append(str(key))\n            if isinstance(val, dict):\n                self.parse_json(val, qstat_json_attr)\n\n        for attr in attrs_qstatbf:\n            self.assertIn(attr, qstat_json_attr, attr + \" is missing\")\n\n        for attr in json_only_attrs:\n            self.assertIn(attr, qstat_json_attr, attr + \" is missing\")\n\n    @tags('smoke')\n   
 def test_qstat_qf_json_valid(self):\n        \"\"\"\n        Test json output of qstat -Qf is in valid format and all\n        attributes displayed in qstat are present in output\n        \"\"\"\n        qstat_cmd_json = os.path.join(self.server.pbs_conf[\n            'PBS_EXEC'], 'bin', 'qstat') + ' -Qf -F json'\n        ret = self.du.run_cmd(self.server.hostname, cmd=qstat_cmd_json)\n        qstat_out = \"\\n\".join(ret['out'])\n        try:\n            json_object = json.loads(qstat_out)\n        except ValueError:\n            self.fail(\"qstat json output failed to parse\")\n\n        json_only_attrs = ['Queue', 'timestamp', 'pbs_version', 'pbs_server']\n        attrs_qstatqf = self.get_qstat_attribs(QUEUE)\n\n        qstat_json_attr = []\n        for key, val in json_object.items():\n            qstat_json_attr.append(str(key))\n            if isinstance(val, dict):\n                self.parse_json(val, qstat_json_attr)\n\n        for attr in attrs_qstatqf:\n            self.assertIn(attr, qstat_json_attr, attr + \" is missing\")\n\n        for attr in json_only_attrs:\n            self.assertIn(attr, qstat_json_attr, attr + \" is missing\")\n\n    def test_qstat_qf_json_valid_multiple_queues(self):\n        \"\"\"\n        Test json output of qstat -Qf is in valid format when\n        we query multiple queues\n        \"\"\"\n        a = {'queue_type': 'Execution', 'resources_max.walltime': '10:00:00'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='workq2')\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='workq3')\n        qstat_cmd_json = os.path.join(self.server.pbs_conf['PBS_EXEC'], 'bin',\n                                      'qstat') + ' -Q -f -F json'\n        ret = self.du.run_cmd(self.server.hostname, cmd=qstat_cmd_json)\n        qstat_out = \"\\n\".join(ret['out'])\n        try:\n            qs = json.loads(qstat_out)\n        except ValueError:\n            self.fail(
\"Invalid JSON, failed to load\")\n\n        self.assertIn('Queue', qs)\n        self.assertIn('workq', qs['Queue'])\n        self.assertIn('workq2', qs['Queue'])\n        self.assertIn('resources_max', qs['Queue']['workq2'])\n        self.assertIn('workq3', qs['Queue'])\n        self.assertIn('resources_max', qs['Queue']['workq3'])\n\n    def test_qstat_json_valid_job_special_env(self):\n        \"\"\"\n        Test json output of qstat -f is in valid format\n        with special chars in env\n        \"\"\"\n        os.environ[\"DOUBLEQUOTES\"] = 'hi\"ha'\n        os.environ[\"REVERSESOLIDUS\"] = r'hi\\ha'\n        os.environ[\"MYVAR\"] = \"\"\"\\'\\\"asads\\\"\\'\"\"\"\n        os.environ[\"MYHOME\"] = \"\"\"/home/pbstest01/Mo\\\\'\"\"\"\n        os.environ[\"FOO0\"] = \"\"\"00123\"\"\"\n        os.environ[\"FOO1\"] = \"\"\".123\"\"\"\n        os.environ[\"FOO2\"] = \"\"\"123.\"\"\"\n        os.environ[\"FOO3\"] = \"\"\"00\"\"\"\n        os.environ[\"FOO4\"] = \"\"\"-00\"\"\"\n        os.environ[\"FOO5\"] = \"\"\"00.123\"\"\"\n        os.environ[\"FOO6\"] = \"\"\"-00.123\"\"\"\n        os.environ[\"MYVAR0\"] = \"\"\"\\'\"\"\"\n        os.environ[\"MYVAR1\"] = \"\"\"\\\\'\"\"\"\n        os.environ[\"MYVAR2\"] = \"\"\"\\\\\\'\"\"\"\n        os.environ[\"MYVAR3\"] = \"\"\"\\\\\\\\'\"\"\"\n        os.environ[\"MYVAR4\"] = \"\"\"\\\\\\\\\\\\'\"\"\"\n        os.environ[\"MYVAR5\"] = \"\"\"\\\\\\\\\\\\\\\\\\\\'\"\"\"\n        os.environ[\"MYVAR6\"] = r\"\"\"\\,\"\"\"\n        os.environ[\"MYVAR7\"] = \"\"\"\\\\,\"\"\"\n        os.environ[\"MYVAR8\"] = r\"\"\"\\\\\\,\"\"\"\n        os.environ[\"MYVAR9\"] = \"\"\"\\\\\\\\,\"\"\"\n        os.environ[\"MYVAR10\"] = \"\"\"\\\\\\\\\\\\,\"\"\"\n        os.environ[\"MYVAR11\"] = \"\"\"\\\\\\\\\\\\\\\\\\\\,\"\"\"\n        os.environ[\"MYVAR12\"] = r\"\"\"apple\\,delight\"\"\"\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'default_qsub_arguments': '-V'})\n\n        j = 
Job(self.du.get_current_user())\n        j.preserve_env = True\n        jid = self.server.submit(j)\n        qstat_cmd_json = os.path.join(self.server.pbs_conf['PBS_EXEC'], 'bin',\n                                      'qstat') + \\\n            ' -f -F json ' + str(jid)\n        ret = self.du.run_cmd(self.server.hostname, cmd=qstat_cmd_json)\n        qstat_out = \"\\n\".join(ret['out'])\n        try:\n            json.loads(qstat_out)\n        except ValueError:\n            self.logger.info(qstat_out)\n            self.assertTrue(False)\n\n    def test_qstat_json_valid_job_longint_env(self):\n        \"\"\"\n        Test if JSON output of qstat -f is in valid format\n        with longint in env\n        \"\"\"\n        os.environ[\"LONGINT\"] = '1111111111111111111111111111111111111111' + \\\n                                '1111111111111111111111111111111111111111' + \\\n                                '11111111111111111111111111111111111'\n        os.environ[\"LONGDOUBLE\"] = '1111111111111111111111111111112.88888' + \\\n                                   '8888888888888888'\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'default_qsub_arguments': '-V'})\n\n        j = Job(self.du.get_current_user())\n        j.preserve_env = True\n        jid = self.server.submit(j)\n        qstat_cmd_json = os.path.join(self.server.pbs_conf['PBS_EXEC'], 'bin',\n                                      'qstat') + \\\n            ' -f -F json ' + str(jid)\n        ret = self.du.run_cmd(self.server.hostname, cmd=qstat_cmd_json)\n        qstat_out = \"\\n\".join(ret['out'])\n        try:\n            json.loads(qstat_out)\n        except ValueError:\n            self.assertTrue(False)\n\n    def run_namelength_test(self, options=''):\n        \"\"\"\n        Changes the server name, sets a long job and queue name,\n        and ensures they're truncated correctly in a wide format\n        \"\"\"\n        self.server.stop()\n        
self.assertFalse(self.server.isUp(), 'Failed to stop PBS')\n\n        conf = self.du.parse_pbs_config(self.server.hostname)\n        self.du.set_pbs_config(\n            self.server.hostname,\n            confs={'PBS_SERVER_HOST_NAME': conf['PBS_SERVER'],\n                   'PBS_SERVER': 'supersuperduperlongservername31'})\n\n        self.server.start()\n        self.assertTrue(self.server.isUp(), 'Failed to start PBS')\n        a = {'queue_type': 'Execution', 'enabled': 'True',\n             'started': 'True'}\n        qname = 'queuename15char'\n        jname = 'jobname16xxxchar'\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id=qname)\n        a = {ATTR_queue: qname, ATTR_name: jname}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        qstat_cmd = os.path.join(self.server.pbs_conf['PBS_EXEC'], 'bin',\n                                 'qstat') + ' ' + options + ' ' + str(jid)\n        ret = self.du.run_cmd(self.server.hostname, cmd=qstat_cmd)\n        qstat_out = '\\n'.join(ret['out'])\n        jid_trunc = jid[:29] + '*'\n        jname_trunc = jname[:14] + '*'\n        self.assertIn(jid_trunc, qstat_out)\n        self.assertIn(qname, qstat_out)\n        self.assertIn(jname_trunc, qstat_out)\n        self.assertNotIn(jid, qstat_out)\n        self.assertNotIn(jname, qstat_out)\n\n    def test_qstat_wide(self):\n        \"\"\"\n        Test if qstat -w correctly prints in wide format\n        This tests the normal display function\n        \"\"\"\n        self.run_namelength_test('-w')\n\n    def test_qstat_rwt(self):\n        \"\"\"\n        Test if qstat -rwt correctly prints in wide format.\n        This tests the alternate display function\n        \"\"\"\n        self.run_namelength_test('-rwt')\n\n    def test_qstat_answ(self):\n        \"\"\"\n        Test if qstat -answ correctly prints in wide format.\n        This tests the alternate display function\n        \"\"\"\n        self.run_namelength_test('-answ')\n\n    
def test_qstat_ans(self):\n        \"\"\"\n        Test if qstat -ans correctly prints with truncation.\n        \"\"\"\n        self.server.stop()\n        self.assertFalse(self.server.isUp(), 'Failed to stop PBS')\n\n        server_hostname = self.server.pbs_conf['PBS_SERVER']\n        self.du.set_pbs_config(\n            self.server.hostname,\n            confs={'PBS_SERVER_HOST_NAME': server_hostname,\n                   'PBS_SERVER': 'supersuperduperlongservername31'})\n\n        self.server.start()\n        self.assertTrue(self.server.isUp(), 'Failed to start PBS')\n        a = {'queue_type': 'Execution', 'enabled': 'True',\n             'started': 'True'}\n        qname = 'queuename15char'\n        jname = 'jobname16xxxchar'\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id=qname)\n        a = {ATTR_queue: qname, ATTR_name: jname}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        qstat_cmd = os.path.join(self.server.pbs_conf['PBS_EXEC'], 'bin',\n                                 'qstat') + ' -ans ' + str(jid)\n        ret = self.du.run_cmd(self.server.hostname, cmd=qstat_cmd)\n        qstat_out = '\\n'.join(ret['out'])\n        jid_trunc = jid[:14] + '*'\n        jname_trunc = jname[:9] + '*'\n        qname_trunc = qname[:7] + '*'\n        self.assertIn(jid_trunc, qstat_out)\n        self.assertIn(jname_trunc, qstat_out)\n        self.assertIn(qname_trunc, qstat_out)\n        self.assertNotIn(jid, qstat_out)\n        self.assertNotIn(jname, qstat_out)\n        self.assertNotIn(qname, qstat_out)\n\n    def test_qstat_json_empty_job_pset(self):\n        \"\"\"\n        Test an empty pset resource value under json.\n        \"\"\"\n        # create a custom resource\n        self.server.manager(MGR_CMD_CREATE, RSC,\n                            {'type': 'string', 'flag': 'h'}, id='iru')\n        attr = {'node_group_enable': 'True', 'node_group_key': 'iru'}\n        self.server.manager(MGR_CMD_SET, SERVER, attr)\n\n        
j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        time.sleep(6)\n        # when the job runs, pset will be set to iru=\"\"\n        qstat_cmd_json = os.path.join(self.server.pbs_conf['PBS_EXEC'], 'bin',\n                                      'qstat') + ' -f -F json ' + str(jid)\n        ret = self.du.run_cmd(self.server.hostname, cmd=qstat_cmd_json)\n        qstat_out = \"\\n\".join(ret['out'])\n        try:\n            json.loads(qstat_out)\n        except ValueError:\n            self.logger.info(qstat_out)\n            self.fail(\"Json failed to load\")\n\n    def test_qstat_format_conflicts(self):\n        \"\"\"\n        Test conflicting combinations of alt_opt flags with -F\n        \"\"\"\n        binpath = os.path.join(self.server.pbs_conf['PBS_EXEC'], 'bin', 'qstat')\n        conflicting_opts = [\n            '-a -F JSON',\n            '-i -F JSON',\n            '-r -F JSON',\n            '-n -F JSON',\n            '-s -F JSON',\n            '-H -F JSON',\n            '-T -F JSON',\n            '-G -F JSON',\n            '-M -F JSON',\n            '-1 -F JSON',\n            '-w -F JSON',\n        ]\n\n        for flags in conflicting_opts:\n            qstat_cmd = binpath + ' ' + flags\n            ret = self.du.run_cmd(self.server.hostname, cmd=qstat_cmd)\n\n            self.assertNotEqual(ret['rc'], 0, f\"Expected failure for: {flags}\")\n            self.assertIn(\"conflicting options\", ''.join(ret['err']).lower(),\n                          f\"Missing conflict error for: {flags}\")\n            self.assertEqual(len(ret['out']), 0, f\"Unexpected stdout for: {flags}\")\n\n    def test_qstat_format_valid_combos(self):\n        \"\"\"\n        Test valid combinations of alt_opt flags with -F JSON,\n        i.e. 
should not throw errors\n        \"\"\"\n        binpath = os.path.join(self.server.pbs_conf['PBS_EXEC'], 'bin', 'qstat')\n\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        ret = self.du.run_cmd(self.server.hostname, cmd=binpath + ' -f ' + jid)\n        self.assertIn(\"job_state\", ''.join(ret['out']))\n\n        valid_opts = [\n            '-f -F JSON',\n            f'-f -F JSON {jid}',\n            '-Bf -F JSON',\n            '-Qf -F JSON'\n        ]\n\n        for flags in valid_opts:\n            qstat_cmd = binpath + ' ' + flags\n            ret = self.du.run_cmd(self.server.hostname, cmd=qstat_cmd)\n\n            self.assertEqual(ret['rc'], 0, f\"Expected success for: {flags}\")\n            self.assertEqual(len(ret['err']), 0, f\"Unexpected stderr for: {flags}\")\n\n    def test_qstat_Qf_json_new_type_queue_format(self):\n        \"\"\"\n        Test if qstat -Qf -F JSON returns multiple new-type queue restrictions\n        as a single comma-separated string, rather than having duplicate keys\n        \"\"\"\n\n        qname = 'newtypeq'\n        a = {'queue_type': 'Execution', 'enabled': 'True',\n             'started': 'True'}\n\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id=qname)\n\n        # set multiple new-type restrictions; a trailing '+' on the\n        # attribute name appends to the existing value\n        restriction_settings = [\n            ('max_queued ', '[u:foo=3]'),\n            ('max_queued +', '[g:bar=2]'),\n            ('max_run ', '[u:foo=4]'),\n            ('max_run +', '[p:projectX=10]'),\n            ('max_run_soft ', '[g:devs=6]'),\n            ('queued_jobs_threshold ', '[u:baz=15]'),\n            ('max_queued_res.mem ', '[u:foo=100mb]'),\n            ('max_queued_res.mem +', '[g:bar=200mb]'),\n            ('max_queued_res.ncpus ', '[u:foo=3]'),\n        ]\n\n        for attr, val in restriction_settings:\n            self.server.manager(MGR_CMD_SET, QUEUE, {attr: val}, id=qname)\n\n        qstat_cmd_json = os.path.join(self.server.pbs_conf['PBS_EXEC'], 'bin',\n 
                                     'qstat') + f' -Qf -F JSON {qname}'\n        ret = self.du.run_cmd(self.server.hostname, cmd=qstat_cmd_json)\n        qstat_out = \"\\n\".join(ret['out'])\n        try:\n            j = json.loads(qstat_out)\n        except ValueError:\n            self.logger.info(qstat_out)\n            self.fail(\"Json failed to load\")\n\n        self.assertIn(\"Queue\", j)\n        self.assertIn(qname, j[\"Queue\"])\n        qdata = j[\"Queue\"][qname]\n\n        # regular new-type restrictions\n        expected = {\n            \"max_queued\": \"[u:foo=3],[g:bar=2]\",\n            \"max_run\": \"[u:foo=4],[p:projectX=10]\",\n            \"max_run_soft\": \"[g:devs=6]\",\n            \"queued_jobs_threshold\": \"[u:baz=15]\",\n        }\n\n        for key, val in expected.items():\n            self.assertIn(key, qdata)\n            self.assertIsInstance(qdata[key], str, f\"{key} should be a comma-separated string\")\n            # compare the comma-separated items irrespective of order\n            self.assertCountEqual(qdata[key].split(','), val.split(','))\n\n        # resource-based new-type restrictions\n        res_expected = {\n            \"max_queued_res\": {\n                \"mem\": \"[u:foo=100mb],[g:bar=200mb]\",\n                \"ncpus\": \"[u:foo=3]\"\n            }\n        }\n\n        for attr_key, attr_val in res_expected.items():\n            self.assertIn(attr_key, qdata)\n            self.assertIsInstance(qdata[attr_key], dict)\n            for res_key, res_val in attr_val.items():\n                self.assertIn(res_key, qdata[attr_key])\n                self.assertCountEqual(qdata[attr_key][res_key].split(','),\n                                      res_val.split(','))\n\n        self.server.manager(MGR_CMD_DELETE, QUEUE, id=qname)\n\n"
  },
  {
    "path": "test/tests/functional/pbs_qsub_direct_write.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestQsub_direct_write(TestFunctional):\n    \"\"\"\n    validate qsub direct write option.\n    \"\"\"\n\n    def setUp(self):\n        \"\"\"\n        Default setup and variable declaration\n        \"\"\"\n        TestFunctional.setUp(self)\n        self.msg = \"Job is sleeping for 10 secs as job should  be running\"\n        self.msg += \" at the time we check for directly written files\"\n\n    def checks_available_ncpus(self, ncpus=1):\n        nodes = self.server.counter(NODE, 'resources_available.ncpus',\n                                    grandtotal=True, level=logging.DEBUG)\n        if nodes and 'resources_available.ncpus' in nodes:\n            total_ncpus = nodes['resources_available.ncpus']\n            if total_ncpus < ncpus:\n                self.skip_test(reason=\"need %d available ncpus\" % ncpus)\n\n    @requirements(mom_on_server=True)\n    def test_direct_write_when_job_succeeds(self):\n        \"\"\"\n        submit a sleep job and make sure that the std_files\n        are getting directly written to the mapped directory\n        when direct_files option is used.\n        \"\"\"\n        j = Job(TEST_USER, attrs={ATTR_k: 'doe'})\n        j.set_sleep_time(10)\n        sub_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        mapping_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        
self.mom.add_config(\n            {'$usecp': self.mom.hostname + ':' + sub_dir +\n             ' ' + mapping_dir})\n        self.mom.restart()\n        jid = self.server.submit(j, submit_dir=sub_dir)\n        self.logger.info(self.msg)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        file_count = len([name for name in os.listdir(\n            mapping_dir) if os.path.isfile(os.path.join(mapping_dir, name))])\n        self.assertEqual(2, file_count)\n        self.server.expect(JOB, {ATTR_k: 'doe'}, id=jid)\n\n    @requirements(mom_on_server=True)\n    def test_direct_write_when_job_succeeds_controlled(self):\n        \"\"\"\n        submit a sleep job and make sure that the std_files\n        are getting directly written to the mapped directory\n        when direct_files option is used.\n\n        directory should be\n        1) owned by a different user\n        2) owned by a group that is not the job user's primary gid\n                (but is a gid that the user is a member of)\n        3) not accessible via other permissions\n        \"\"\"\n        j = Job(TEST_USER2, attrs={ATTR_k: 'doe'})\n        j.set_sleep_time(10)\n        sub_dir = self.du.create_temp_dir(asuser=TEST_USER5)\n        mapping_dir = self.du.create_temp_dir(\n            asuser=TEST_USER2, asgroup=TSTGRP0, mode=0o770)\n        self.mom.add_config(\n            {'$usecp': self.mom.hostname + ':' + sub_dir +\n             ' ' + mapping_dir})\n        self.mom.restart()\n        jid = self.server.submit(j, submit_dir=sub_dir)\n        self.logger.info(self.msg)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        file_count = len([name for name in os.listdir(\n            mapping_dir) if os.path.isfile(os.path.join(mapping_dir, name))])\n        self.assertEqual(2, file_count)\n        self.server.expect(JOB, {ATTR_k: 'doe'}, id=jid)\n\n    @requirements(mom_on_server=True)\n    def test_direct_write_output_file(self):\n        \"\"\"\n        submit a sleep 
job and make sure that the output file\n        is getting directly written to the mapped directory\n        when direct_files option is used with the o option.\n        \"\"\"\n        j = Job(TEST_USER, attrs={ATTR_k: 'do'})\n        j.set_sleep_time(10)\n        sub_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        mapping_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        self.mom.add_config(\n            {'$usecp': self.mom.hostname + ':' + sub_dir +\n             ' ' + mapping_dir})\n        self.mom.restart()\n        jid = self.server.submit(j, submit_dir=sub_dir)\n        self.logger.info(self.msg)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        # only the output file is direct-written here, so match STDIN.o*\n        for name in os.listdir(mapping_dir):\n            p = re.search('STDIN.o*', name)\n            if p:\n                self.logger.info('Match found: ' + p.group())\n            else:\n                self.assertTrue(False)\n        file_count = len([name for name in os.listdir(\n            mapping_dir) if os.path.isfile(os.path.join(mapping_dir, name))])\n        self.assertEqual(1, file_count)\n        self.server.expect(JOB, {ATTR_k: 'do'}, id=jid)\n\n    @requirements(mom_on_server=True)\n    def test_direct_write_error_file(self):\n        \"\"\"\n        submit a sleep job and make sure that the error file\n        is getting directly written to the mapped directory\n        when direct_files option is used with the e option.\n        \"\"\"\n        j = Job(TEST_USER, attrs={ATTR_k: 'de'})\n        j.set_sleep_time(10)\n        sub_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        mapping_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        self.mom.add_config(\n            {'$usecp': self.mom.hostname + ':' + sub_dir +\n             ' ' + mapping_dir})\n        self.mom.restart()\n        jid = self.server.submit(j, submit_dir=sub_dir)\n        self.logger.info(self.msg)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        for name in 
os.listdir(mapping_dir):\n            p = re.search('STDIN.e*', name)\n            if p:\n                self.logger.info('Match found: ' + p.group())\n            else:\n                self.assertTrue(False)\n        file_count = len([name for name in os.listdir(\n            mapping_dir) if os.path.isfile(os.path.join(mapping_dir, name))])\n        self.assertEqual(1, file_count)\n        self.server.expect(JOB, {ATTR_k: 'de'}, id=jid)\n\n    @requirements(mom_on_server=True)\n    def test_direct_write_error_custom_path(self):\n        \"\"\"\n        submit a sleep job and make sure that the files\n        are getting directly written to the custom path\n        provided in -e and -o option even when -doe is set.\n        \"\"\"\n        tmp_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        err_file = os.path.join(tmp_dir, 'error_file')\n        out_file = os.path.join(tmp_dir, 'output_file')\n        a = {ATTR_e: err_file, ATTR_o: out_file, ATTR_k: 'doe'}\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(10)\n        sub_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        mapping_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        self.mom.add_config(\n            {'$usecp': self.mom.hostname + ':' + sub_dir +\n             ' ' + mapping_dir})\n        self.mom.restart()\n        jid = self.server.submit(j, submit_dir=sub_dir)\n        self.logger.info(self.msg)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        file_count = len([name for name in os.listdir(\n            tmp_dir) if os.path.isfile(os.path.join(tmp_dir, name))])\n        self.assertEqual(2, file_count)\n        self.server.expect(JOB, {ATTR_k: 'doe'}, id=jid)\n\n    @requirements(mom_on_server=True)\n    def test_direct_write_error_custom_dir(self):\n        \"\"\"\n        submit a sleep job and make sure that the files\n        are getting directly written to the custom dir\n        provided in -e and -o option even when -doe is set.\n        
\"\"\"\n        tmp_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        a = {ATTR_e: tmp_dir, ATTR_o: tmp_dir, ATTR_k: 'doe'}\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(10)\n        sub_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        mapping_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        self.mom.add_config(\n            {'$usecp': self.mom.hostname + ':' + sub_dir +\n             ' ' + mapping_dir})\n        self.mom.restart()\n        jid = self.server.submit(j, submit_dir=sub_dir)\n        self.logger.info(self.msg)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        file_count = len([name for name in os.listdir(\n            tmp_dir) if os.path.isfile(os.path.join(tmp_dir, name))])\n        self.assertEqual(2, file_count)\n        self.server.expect(JOB, {ATTR_k: 'doe'}, id=jid)\n\n    @requirements(mom_on_server=True)\n    def test_direct_write_default_qsub_arguments(self):\n        \"\"\"\n        submit a sleep job and make sure that the std_files\n        are getting directly written to the mapped directory\n        when default_qsub_arguments is set to -kdoe.\n        \"\"\"\n        j = Job(TEST_USER)\n        j.set_sleep_time(10)\n        self.server.manager(MGR_CMD_SET, SERVER, {\n                            'default_qsub_arguments': '-kdoe'})\n        sub_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        mapping_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        self.mom.add_config(\n            {'$usecp': self.mom.hostname + ':' + sub_dir +\n             ' ' + mapping_dir})\n        self.mom.restart()\n        jid = self.server.submit(j, submit_dir=sub_dir)\n        self.logger.info(self.msg)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        file_count = len([name for name in os.listdir(\n            mapping_dir) if os.path.isfile(os.path.join(mapping_dir, name))])\n        self.assertEqual(2, file_count)\n        self.server.expect(JOB, {ATTR_k: 'doe'}, 
id=jid)\n\n    @requirements(mom_on_server=True)\n    def test_direct_write_without_config_entry(self):\n        \"\"\"\n        submit a sleep job and make sure that the std_files\n        are directly written to the submission directory when it is\n        accessible from mom and direct_files option is used\n        but submission directory is not mapped in mom config file.\n        \"\"\"\n        j = Job(TEST_USER, attrs={ATTR_k: 'doe'})\n        j.set_sleep_time(10)\n        sub_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        jid = self.server.submit(j, submit_dir=sub_dir)\n        self.logger.info(self.msg)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        file_count = len([name for name in os.listdir(\n            sub_dir) if os.path.isfile(os.path.join(sub_dir, name))])\n        self.assertEqual(2, file_count)\n\n    def test_qalter_direct_write(self):\n        \"\"\"\n        submit a job and make sure that it is in queued state.\n        alter the job with -koed and check whether it is\n        reflected in qstat -f output.\n        \"\"\"\n        mydate = int(time.time()) + 60\n        j = Job(TEST_USER)\n        attribs = {\n            ATTR_a: time.strftime(\n                '%m%d%H%M',\n                time.localtime(\n                    float(mydate)))}\n        j.set_attributes(attribs)\n        jid = self.server.submit(j)\n        attribs = {ATTR_k: 'oed'}\n        try:\n            self.server.alterjob(jid, attribs)\n            if self.server.expect(JOB, {'job_state': 'W'},\n                                  id=jid):\n                self.server.expect(JOB, attribs,\n                                   id=jid)\n        except PbsAlterError as e:\n            print(str(e))
\n\n    def test_qalter_direct_write_error(self):\n        \"\"\"\n        submit a job and after it starts running alter\n        the job with -koed and check whether the expected\n        error message appears\n        \"\"\"\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        attribs = {ATTR_k: 'oed'}\n        self.server.expect(JOB, {'job_state': 'R'})\n        try:\n            self.server.alterjob(jid, attribs)\n        except PbsAlterError as e:\n            self.assertTrue(\n                'Cannot modify attribute while job running  Keep_Files'\n                in e.msg[0])\n\n    @requirements(mom_on_server=True)\n    def test_direct_write_qrerun(self):\n        \"\"\"\n        submit a sleep job and make sure that the std_files\n        are written and that, when the job is rerun, a message\n        is logged in the mom log that it is skipping the directly\n        written/absent spool file, as the files are already\n        present from the first run of the job.\n        \"\"\"\n        self.mom.add_config({'$logevent': '0xffffffff'})\n        j = Job(TEST_USER, attrs={ATTR_k: 'doe'})\n        j.set_sleep_time(10)\n        sub_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        mapping_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        self.mom.add_config(\n            {'$usecp': self.mom.hostname + ':' + sub_dir +\n             ' ' + mapping_dir})\n        self.mom.restart()\n        jid = self.server.submit(j, submit_dir=sub_dir)\n        self.logger.info(self.msg)\n        self.server.expect(JOB, {ATTR_k: 'doe'}, id=jid)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.rerunjob(jid)\n        self.mom.log_match(\n            \"stage_file;Skipping directly written/absent spool file\",\n            max_attempts=10, interval=5)\n        file_count = len([name for name in os.listdir(\n            mapping_dir) if os.path.isfile(os.path.join(mapping_dir, name))])\n        self.assertEqual(2, file_count)\n\n    @requirements(mom_on_server=True)\n    def test_direct_write_job_array(self):\n        \"\"\"\n        submit a job array and make sure that the std_files\n        are directly written to the submission directory when it is\n        accessible from mom and 
direct_files option is used\n        but submission directory is not mapped in mom config file.\n        \"\"\"\n        self.checks_available_ncpus(4)\n        a = {'resources_available.ncpus': 4}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        j = Job(TEST_USER, attrs={ATTR_k: 'doe', ATTR_J: '1-4'})\n        j.set_sleep_time(10)\n        sub_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        jid = self.server.submit(j, submit_dir=sub_dir)\n        self.server.expect(JOB, {ATTR_state: 'B'}, id=jid)\n        self.server.expect(JOB, {ATTR_state + '=R': 4}, count=True,\n                           id=jid, extend='t')\n        self.logger.info('checking for directly written std files')\n        file_list = [name for name in os.listdir(\n            sub_dir) if os.path.isfile(os.path.join(sub_dir, name))]\n        self.assertEqual(8, len(file_list))\n        idn = jid[:jid.find('[]')]\n        for std in ['o', 'e']:\n            for sub_ind in range(1, 5):\n                f_name = 'STDIN.' + std + idn + '.' 
+ str(sub_ind)\n                if f_name not in file_list:\n                    raise self.failureException(\"std file \" + f_name +\n                                                \" not found\")\n\n    @requirements(mom_on_server=True)\n    def test_direct_write_job_array_custom_dir(self):\n        \"\"\"\n        submit a job array and make sure that the files\n        are getting directly written to the custom dir\n        provided in -e and -o option even when -doe is set.\n        \"\"\"\n        self.checks_available_ncpus(4)\n        a = {'resources_available.ncpus': 4}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        tmp_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        a = {ATTR_e: tmp_dir, ATTR_o: tmp_dir, ATTR_k: 'doe', ATTR_J: '1-4'}\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(10)\n        sub_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        mapping_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        self.mom.add_config(\n            {'$usecp': self.mom.hostname + ':' + sub_dir +\n             ' ' + mapping_dir})\n        self.mom.restart()\n        jid = self.server.submit(j, submit_dir=sub_dir)\n        self.server.expect(JOB, {ATTR_state: 'B'}, id=jid)\n        self.server.expect(JOB, {ATTR_state + '=R': 4}, count=True,\n                           id=jid, extend='t')\n        self.logger.info('checking for directly written std files')\n        file_list = [name for name in os.listdir(\n            tmp_dir) if os.path.isfile(os.path.join(tmp_dir, name))]\n        self.assertEqual(8, len(file_list))\n        for ext in ['.OU', '.ER']:\n            for sub_ind in range(1, 5):\n                f_name = j.create_subjob_id(jid, sub_ind) + ext\n                if f_name not in file_list:\n                    raise self.failureException(\"std file \" + f_name +\n                                                \" not found\")\n"
  },
  {
    "path": "test/tests/functional/pbs_qsub_opts_args.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\nimport os\n\n\nclass TestQsubOptionsArguments(TestFunctional):\n    \"\"\"\n    validate qsub submission with a script and executable.\n    note: The no-arg test is an interactive job, which is tested in\n    SmokeTest.test_interactive_job\n    \"\"\"\n    fn = None\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        script = '/bin/hostname'\n        self.fn = self.du.create_temp_file(body=script)\n        self.qsub_cmd = os.path.join(\n            self.server.pbs_conf['PBS_EXEC'], 'bin', 'qsub')\n        self.jobdir_root = \"/tmp\"\n        self.jobdir = []\n        self.remove_jobdir = False\n\n    def tearDown(self):\n        TestFunctional.tearDown(self)\n        if self.remove_jobdir:\n            for mom in self.moms.values():\n                for d in self.jobdir:\n                    if d.startswith(self.jobdir_root):\n                        self.logger.info('%s:remove jobdir %s' % (\n                                         mom.hostname, d))\n                        self.du.rm(hostname=mom.shortname, sudo=True,\n                                   path=d, recursive=True, force=True)\n            self.remove_jobdir = False\n\n    def validate_error(self, err):\n        ret_msg = 'qsub: Failed to save job/resv, '\\\n            'refer server logs for details'\n        # PBS returns 15161 error code 
when it fails to save the job in db\n        # but in PTL_CLI mode it returns modulo of the error code.\n\n        # PTL_API and PTL_CLI modes return error codes 15161 and 57 respectively\n        if err.rc == 15161 or err.rc == 15161 % 256:\n            self.assertEqual(err.msg[0], ret_msg)\n        else:\n            self.fail(\n                \"ERROR in submitting a job with future time: %s\" %\n                err.msg[0])\n\n    def jobdir_shared_body(self, location):\n        \"\"\"\n        Test submission of job with sandbox=PRIVATE,\n        and moms have $jobdir_root set to shared.\n        \"\"\"\n        self.remove_jobdir = True\n        momA = self.moms.values()[0]\n        momB = self.moms.values()[1]\n\n        loglevel = {'$logevent': 4095}\n        momB.add_config(loglevel)\n        c = {'$jobdir_root': '%s shared' % location}\n        for mom in [momA, momB]:\n            mom.add_config(c)\n            mom.restart()\n\n        a = {'Resource_List.select': '2:ncpus=1',\n             'Resource_List.place': 'scatter',\n             ATTR_sandbox: 'PRIVATE',\n             }\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(30)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        attribs = self.server.status(JOB, id=jid)\n        jobdir = attribs[0]['jobdir']\n        self.jobdir.append(jobdir)\n        relpath = os.path.join(\n            self.server.pbs_conf['PBS_EXEC'], 'bin', 'pbs_release_nodes')\n\n        rel_cmd = [relpath, '-j', jid, momB.shortname]\n        ret = self.server.du.run_cmd(self.server.hostname, cmd=rel_cmd,\n                                     runas=TEST_USER)\n        self.assertEqual(ret['rc'], 0)\n        # sister mom has preserved the file\n        errmsg = \"sister mom deleted jobdir %s\" % jobdir\n        rc = self.du.isdir(hostname=momB.shortname, path=jobdir,\n                           sudo=True)\n        self.assertTrue(rc, errmsg)\n        msg = \"shared 
jobdir %s to be removed by primary mom\" % jobdir\n        momB.log_match(msg)\n        self.server.expect(JOB, 'job_state', op=UNSET, id=jid)\n        # primary mom has deleted the file\n        errmsg = \"MS mom preserved jobdir %s\" % jobdir\n        rc = self.du.isdir(hostname=momA.shortname, path=jobdir,\n                           sudo=True)\n        self.assertFalse(rc, errmsg)\n\n    def test_qsub_with_script_with_long_TMPDIR(self):\n        \"\"\"\n        submit a job with a script and with long path in TMPDIR\n        \"\"\"\n        longpath = '%s/aaaaaaaaaa/bbbbbbbbbb/cccccccccc/eeeeeeeeee/\\\nffffffffff/gggggggggg/hhhhhhhhhh/iiiiiiiiii/jj/afdj/hlppoo/jkloiytupoo/\\\nbhtiusabsdlg' % (os.environ['HOME'])\n        os.environ['TMPDIR'] = longpath\n        if not os.path.exists(longpath):\n            os.makedirs(longpath)\n        cmd = [self.qsub_cmd, self.fn]\n        rv = self.du.run_cmd(self.server.hostname, cmd=cmd)\n        self.assertEqual(rv['rc'], 0, 'qsub failed')\n\n    def test_qsub_with_script_executable(self):\n        \"\"\"\n        submit a job with a script and executable\n        \"\"\"\n        cmd = [self.qsub_cmd, self.fn, '--', self.mom.sleep_cmd, '10']\n        rv = self.du.run_cmd(self.server.hostname, cmd=cmd)\n        failed = rv['rc'] == 2 and rv['err'][0].split(' ')[0] == 'usage:'\n        self.assertTrue(failed, 'qsub should have failed, but did not fail')\n\n    def test_qsub_with_script_dashes(self):\n        \"\"\"\n        submit a job with a script and dashes\n        \"\"\"\n        cmd = [self.qsub_cmd, self.fn, '--']\n        rv = self.du.run_cmd(self.server.hostname, cmd=cmd)\n        failed = rv['rc'] == 2 and rv['err'][0].split(' ')[0] == 'usage:'\n        self.assertTrue(failed, 'qsub should have failed, but did not fail')\n\n    def test_qsub_with_dashes(self):\n        \"\"\"\n        submit a job with only dashes\n        \"\"\"\n        cmd = [self.qsub_cmd, '--']\n        rv = 
self.du.run_cmd(self.server.hostname, cmd=cmd)\n        failed = rv['rc'] == 2 and rv['err'][0].split(' ')[0] == 'usage:'\n        self.assertTrue(failed, 'qsub should have failed, but did not fail')\n\n    def test_qsub_with_script(self):\n        \"\"\"\n        submit a job with only a script\n        \"\"\"\n        cmd = [self.qsub_cmd, self.fn]\n        rv = self.du.run_cmd(self.server.hostname, cmd=cmd)\n        self.assertEqual(rv['rc'], 0, 'qsub failed')\n\n    def test_qsub_with_executable(self):\n        \"\"\"\n        submit a job with only an executable\n        \"\"\"\n        cmd = [self.qsub_cmd, '--', self.mom.sleep_cmd, '10']\n        rv = self.du.run_cmd(self.server.hostname, cmd=cmd)\n        self.assertEqual(rv['rc'], 0, 'qsub failed')\n\n    def test_qsub_with_option_executable(self):\n        \"\"\"\n        submit a job with an option and executable\n        \"\"\"\n        cmd = [self.qsub_cmd, '-V', '--', self.mom.sleep_cmd, '10']\n        rv = self.du.run_cmd(self.server.hostname, cmd=cmd)\n        self.assertEqual(rv['rc'], 0, 'qsub failed')\n\n    def test_qsub_with_option_script(self):\n        \"\"\"\n        submit a job with an option and script\n        \"\"\"\n        cmd = [self.qsub_cmd, '-V', self.fn]\n        rv = self.du.run_cmd(self.server.hostname, cmd=cmd)\n        self.assertEqual(rv['rc'], 0, 'qsub failed')\n\n    def test_qsub_with_option_a(self):\n        \"\"\"\n        Test submission of job with execution time(future and past)\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {'resources_available.ncpus': 2},\n                            self.mom.shortname)\n        present_tm = int(time.time())\n        # submit a job with future time and should start whenever time hits\n        future_tm = time.strftime(\n            \"%Y%m%d%H%M\", time.localtime(\n                present_tm + 120))\n        j1 = Job(TEST_USER, {ATTR_a: future_tm})\n        try:\n            jid_1 = 
self.server.submit(j1)\n        except PbsSubmitError as e:\n            self.validate_error(e)\n        self.server.expect(JOB, {'job_state': 'W'}, id=jid_1)\n        self.logger.info(\n            'waiting for 90 seconds to run the job as it is a future job...')\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid_1, offset=90,\n                           interval=2)\n\n        # submit a job with past time and should start right away\n        past_tm = time.strftime(\n            \"%Y%m%d%H%M\", time.localtime(\n                present_tm - 3600))\n        j2 = Job(TEST_USER, {ATTR_a: past_tm})\n        try:\n            jid_2 = self.server.submit(j2)\n        except PbsSubmitError as e:\n            self.validate_error(e)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid_2)\n\n    @requirements(num_moms=2)\n    def test_qsub_sandbox_private_jobdir_shared(self):\n        \"\"\"\n        Test submission of job with sandbox=PRIVATE,\n        and moms have $jobdir_root set to shared.\n        \"\"\"\n        self.jobdir_shared_body(self.jobdir_root)\n\n    @requirements(num_moms=2)\n    def test_qsub_sandbox_private_jobdir_default_shared(self):\n        \"\"\"\n        Test submission of job with sandbox=PRIVATE,\n        and moms have $jobdir_root set to shared,\n        with location set to PBS_USER_HOME.\n        \"\"\"\n        self.jobdir_shared_body(\"PBS_USER_HOME\")\n\n    @runOnlyOnLinux\n    def test_qsub_with_options_o_e_with_colon(self):\n        \"\"\"\n        Test submission of job with output and error\n        paths with a colon\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'job_history_enable': 'true'})\n        tmp_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        err_file = os.path.join(tmp_dir, 'err:or_file')\n        out_file = os.path.join(tmp_dir, 'out:put_file')\n        a = {ATTR_e: err_file, ATTR_o: out_file}\n        j = Job(TEST_USER, attrs=a)\n        
j.set_sleep_time(1)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'F'}, extend='x', id=jid)\n        self.assertTrue(os.path.isfile(err_file),\n                        \"The error file was not found\")\n        self.assertTrue(os.path.isfile(out_file),\n                        \"The output file was not found\")\n"
  },
  {
    "path": "test/tests/functional/pbs_qsub_remove_files.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestQsub_remove_files(TestFunctional):\n    \"\"\"\n    validate qsub remove file option.\n    \"\"\"\n\n    def test_remove_file_when_job_succeeds(self):\n        \"\"\"\n        submit a sleep job and make sure that the std_files\n        are getting deleted when remove_files option is used.\n        \"\"\"\n        j = Job(TEST_USER, attrs={ATTR_R: 'oe'})\n        j.set_sleep_time(5)\n        sub_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        jid = self.server.submit(j, submit_dir=sub_dir)\n        self.server.expect(JOB, {ATTR_R: 'oe'}, id=jid)\n        self.server.expect(JOB, 'job_state', op=UNSET, id=jid)\n        file_count = len([name for name in os.listdir(\n            sub_dir) if os.path.isfile(os.path.join(sub_dir, name))])\n        self.assertEqual(0, file_count)\n\n    def test_remove_file_sandbox_private(self):\n        \"\"\"\n        submit a sleep job and make sure that the std_files\n        are getting deleted when remove_files option is used\n        and job is submitted with -Wsandbox=private.\n        \"\"\"\n        j = Job(TEST_USER, attrs={ATTR_R: 'oe', ATTR_sandbox: 'private'})\n        j.set_sleep_time(5)\n        sub_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        jid = self.server.submit(j, submit_dir=sub_dir)\n        self.server.expect(JOB, {ATTR_R: 'oe'}, 
id=jid)\n        self.server.expect(JOB, 'job_state', op=UNSET, id=jid)\n        file_count = len([name for name in os.listdir(\n            sub_dir) if os.path.isfile(os.path.join(sub_dir, name))])\n        self.assertEqual(0, file_count)\n\n    def test_remove_files_output_file(self):\n        \"\"\"\n        submit a job with -Ro option and make sure the output file\n        gets deleted after job finishes\n        \"\"\"\n        j = Job(TEST_USER, attrs={ATTR_R: 'o'})\n        j.set_sleep_time(5)\n        sub_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        jid = self.server.submit(j, submit_dir=sub_dir)\n        self.server.expect(JOB, {ATTR_R: 'o'}, id=jid)\n        self.server.expect(JOB, 'job_state', op=UNSET, id=jid)\n        for name in os.listdir(sub_dir):\n            p = re.search('STDIN.e*', name)\n            if p:\n                self.logger.info('Match found: ' + p.group())\n            else:\n                self.assertTrue(False)\n        file_count = len([name for name in os.listdir(\n            sub_dir) if os.path.isfile(os.path.join(sub_dir, name))])\n        self.assertEqual(1, file_count)\n\n    @requirements(mom_on_server=True)\n    def test_remove_files_error_file(self):\n        \"\"\"\n        submit a job with -Re option and make sure the error file\n        gets deleted after job finishes and works with direct_write\n        \"\"\"\n        j = Job(TEST_USER, attrs={ATTR_k: 'de', ATTR_R: 'e'})\n        j.set_sleep_time(5)\n        sub_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        mapping_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        self.mom.add_config(\n            {'$usecp': self.mom.hostname + ':' + sub_dir +\n             ' ' + mapping_dir})\n        self.mom.restart()\n        jid = self.server.submit(j, submit_dir=sub_dir)\n        self.server.expect(JOB, {ATTR_R: 'e'}, id=jid)\n        self.server.expect(JOB, 'job_state', op=UNSET, id=jid)\n        for name in os.listdir(mapping_dir):\n           
 p = re.search('STDIN.o*', name)\n            if p:\n                self.logger.info('Match found: ' + p.group())\n            else:\n                self.assertTrue(False)\n        file_count = len([name for name in os.listdir(\n            mapping_dir) if os.path.isfile(os.path.join(mapping_dir, name))])\n        self.assertEqual(1, file_count)\n\n    def test_remove_files_error_custom_path(self):\n        \"\"\"\n        submit a sleep job and make sure that the files\n        are getting deleted from custom path provided in\n        -e and -o option when -Roe is set.\n        \"\"\"\n        tmp_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        err_file = os.path.join(tmp_dir, 'error_file')\n        out_file = os.path.join(tmp_dir, 'output_file')\n        a = {ATTR_e: err_file, ATTR_o: out_file, ATTR_R: 'oe'}\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(5)\n        sub_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        jid = self.server.submit(j, submit_dir=sub_dir)\n        self.server.expect(JOB, {ATTR_R: 'oe'}, id=jid)\n        self.server.expect(JOB, 'job_state', op=UNSET, id=jid)\n        file_count = len([name for name in os.listdir(\n            tmp_dir) if os.path.isfile(os.path.join(tmp_dir, name))])\n        self.assertEqual(0, file_count)\n\n    def test_remove_files_error_custom_dir(self):\n        \"\"\"\n        submit a sleep job and make sure that the files\n        are getting deleted from custom directory path\n        provided in -e and -o option when -Roe is set.\n        \"\"\"\n        tmp_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        a = {ATTR_e: tmp_dir, ATTR_o: tmp_dir, ATTR_R: 'oe'}\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(5)\n        sub_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        jid = self.server.submit(j, submit_dir=sub_dir)\n        self.server.expect(JOB, {ATTR_R: 'oe'}, id=jid)\n        self.server.expect(JOB, 'job_state', op=UNSET, id=jid)\n        
file_count = len([name for name in os.listdir(\n            tmp_dir) if os.path.isfile(os.path.join(tmp_dir, name))])\n        self.assertEqual(0, file_count)\n\n    def test_remove_files_default_qsub_arguments(self):\n        \"\"\"\n        submit a sleep job and make sure that the std_files\n        are removed from the submission directory after the\n        job finishes when default_qsub_arguments is set to -Roe.\n        \"\"\"\n        j = Job(TEST_USER)\n        j.set_sleep_time(5)\n        self.server.manager(MGR_CMD_SET, SERVER, {\n                            'default_qsub_arguments': '-Roe'})\n        sub_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        jid = self.server.submit(j, submit_dir=sub_dir)\n        self.server.expect(JOB, {ATTR_R: 'oe'}, id=jid)\n        self.server.expect(JOB, 'job_state', op=UNSET, id=jid)\n        file_count = len([name for name in os.listdir(\n            sub_dir) if os.path.isfile(os.path.join(sub_dir, name))])\n        self.assertEqual(0, file_count)\n\n    def test_remove_file_when_job_fails(self):\n        \"\"\"\n        submit a job using an unavailable binary and make sure\n        that the std_files are available when the remove_files\n        option is used.\n        \"\"\"\n        j = Job(TEST_USER, attrs={ATTR_R: 'oe'})\n        j.set_execargs('hostname')\n        sub_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        jid = self.server.submit(j, submit_dir=sub_dir)\n        self.server.expect(JOB, 'job_state', op=UNSET, id=jid)\n        file_count = len([name for name in os.listdir(\n            sub_dir) if os.path.isfile(os.path.join(sub_dir, name))])\n        self.assertEqual(2, file_count)\n\n    def test_qalter_remove_files(self):\n        \"\"\"\n        submit a job and make sure that it is in queued state.\n        alter the job with -Roe and check whether it is\n        reflected in qstat -f output.\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        j = 
Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n        attribs = {ATTR_R: 'oe'}\n        try:\n            self.server.alterjob(jid, attribs)\n            self.server.expect(JOB, attribs, id=jid)\n        except PbsAlterError as e:\n            print(str(e))\n\n    def test_qalter_direct_write_error(self):\n        \"\"\"\n        submit a job and, after it starts running, alter\n        the job with -Roe and check whether the expected\n        error message appears\n        \"\"\"\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        attribs = {ATTR_R: 'oe'}\n        self.server.expect(JOB, {'job_state': 'R'})\n        try:\n            self.server.alterjob(jid, attribs)\n        except PbsAlterError as e:\n            self.assertTrue(\n                'Cannot modify attribute while job'\n                ' running  Remove_Files' in e.msg[0])\n\n    def test_remove_file_job_array(self):\n        \"\"\"\n        submit a job array script that makes all subjobs exit with 0 except\n        subjob[2] and make sure that the std_files for only subjob[2] are\n        available when the remove_files option is used.\n        \"\"\"\n        script = \\\n            \"#!/bin/sh\\n\"\\\n            \"%s 3;\\n\"\\\n            \"if [ $PBS_ARRAY_INDEX -eq 2 ]; then\\n\"\\\n            \"exit 1; fi; exit 0;\" % (self.mom.sleep_cmd)\n        j = Job(TEST_USER, attrs={ATTR_R: 'oe', ATTR_J: '1-3'},\n                jobname='JOB_NAME')\n        j.create_script(script)\n        sub_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        jid = self.server.submit(j, submit_dir=sub_dir)\n        self.server.expect(JOB, {ATTR_state: 'B'}, id=jid)\n        self.server.expect(JOB, ATTR_state, op=UNSET, id=jid)\n        file_list = [name for name in os.listdir(\n            sub_dir) if os.path.isfile(os.path.join(sub_dir, name))]\n        self.assertEqual(2, len(file_list), \"expected 2 std files\")\n        idn 
= jid[:jid.find('[]')]\n        std_files = ['JOB_NAME.o' + idn + '.2', 'JOB_NAME.e' + idn + '.2']\n        for f_name in std_files:\n            if f_name not in file_list:\n                raise self.failureException(\"std file \" + f_name +\n                                            \" not found\")\n\n    def test_remove_file_custom_path_job_array(self):\n        \"\"\"\n        submit a job array script that makes all subjobs exit with 0 except\n        subjob[2] and make sure that the std_files for only subjob[2] are\n        available in a custom directory when the remove_files option is used\n        with -o and -e options.\n        \"\"\"\n        script = \\\n            \"#!/bin/sh\\n\"\\\n            \"%s 3;\\n\"\\\n            \"if [ $PBS_ARRAY_INDEX -eq 2 ]; then\\n\"\\\n            \"exit 1; fi; exit 0;\" % (self.mom.sleep_cmd)\n        tmp_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        j = Job(TEST_USER, attrs={ATTR_e: tmp_dir, ATTR_o: tmp_dir,\n                                  ATTR_R: 'oe', ATTR_J: '1-3'})\n        j.create_script(script)\n        sub_dir = self.du.create_temp_dir(asuser=TEST_USER)\n        jid = self.server.submit(j, submit_dir=sub_dir)\n        self.server.expect(JOB, {ATTR_state: 'B'}, id=jid)\n        self.server.expect(JOB, ATTR_state, op=UNSET, id=jid)\n        file_list = [name for name in os.listdir(\n            tmp_dir) if os.path.isfile(os.path.join(tmp_dir, name))]\n        self.assertEqual(2, len(file_list), \"expected 2 std files\")\n        subj2_id = j.create_subjob_id(jid, 2)\n        std_files = [subj2_id + '.OU', subj2_id + '.ER']\n        for f_name in std_files:\n            if f_name not in file_list:\n                raise self.failureException(\"std file \" + f_name +\n                                            \" not found\")\n"
  },
  {
    "path": "test/tests/functional/pbs_qsub_script.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestQsubScript(TestFunctional):\n    \"\"\"\n    This test suite validates that qsub does not modify the script file\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'job_history_enable': 'true'})\n        self.qsub_cmd = os.path.join(\n            self.server.pbs_conf['PBS_EXEC'], 'bin', 'qsub')\n        self.sub_dir = self.du.create_temp_dir(asuser=TEST_USER)\n\n    def test_qsub_basic_job(self):\n        \"\"\"\n        This test case ensures that #PBS directive lines are not\n        modified\n        \"\"\"\n        script = \"\"\"#!/bin/sh\n        #PBS -m n\n        cat $0\n        \"\"\"\n\n        fn = self.du.create_temp_file(body=script, asuser=TEST_USER)\n        cmd = [self.qsub_cmd, fn]\n        rv = self.du.run_cmd(self.server.hostname,\n                             cmd=cmd,\n                             runas=TEST_USER,\n                             cwd=self.sub_dir)\n        self.assertEqual(rv['rc'], 0, 'qsub failed')\n        jid = rv['out'][0]\n        self.logger.info(\"Job ID: %s\" % jid)\n\n        self.server.expect(JOB, {'job_state': 'F'}, id=jid, extend='x')\n        job_status = self.server.status(JOB, id=jid, extend='x')\n        if job_status:\n            job_output_file = 
job_status[0]['Output_Path'].split(':')[1]\n        rc = self.du.cmp(fileA=fn, fileB=job_output_file, runas=TEST_USER)\n        self.assertEqual(rc, 0, 'cmp of job files failed')\n\n    def test_qsub_line_extensions(self):\n        \"\"\"\n        This test case ensures that only #PBS directive lines are treated\n        as candidates for line extension\n        \"\"\"\n        script = '''\\\n#!/bin/sh\n#PBS -m \\\\\n\\\\\nn\n# This is a test comment that shouldn't be extended \\\\\ncat $0\n'''\n        expected_script = '''\\\n#!/bin/sh\n#PBS -m n\n# This is a test comment that shouldn't be extended \\\\\ncat $0\n'''\n\n        j = Job()\n        j.create_script(body=script, asuser=TEST_USER)\n        submitted_script = j.script\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'F'}, id=jid, extend='x')\n        job_status = self.server.status(JOB, id=jid, extend='x')\n        if job_status:\n            job_output_file = job_status[0]['Output_Path'].split(':')[1]\n        expected_fn = self.du.create_temp_file(\n            body=expected_script, asuser=TEST_USER)\n        rc = self.du.cmp(fileA=expected_fn,\n                         fileB=job_output_file, runas=TEST_USER)\n        self.assertEqual(rc, 0, 'cmp of job files failed')\n\n    def test_qsub_crlf(self):\n        \"\"\"\n        This test case checks that qsub rejects a script ending with cr,lf.\n        \"\"\"\n        script = \"\"\"#!/bin/sh\\r\\nhostname\\r\\n\"\"\"\n        j = Job(TEST_USER)\n        j.create_script(script)\n        fail_msg = 'qsub didn\\'t throw an error'\n        with self.assertRaises(PbsSubmitError, msg=fail_msg) as c:\n            self.server.submit(j)\n        msg = 'qsub: script contains cr, lf'\n        self.assertEqual(c.exception.msg[0], msg)\n"
  },
  {
    "path": "test/tests/functional/pbs_qsub_wblock.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestQsubWblock(TestFunctional):\n    \"\"\"\n    This test suite contains the block job feature tests\n    \"\"\"\n    def test_block_job(self):\n        \"\"\"\n        Test to submit a block job and verify the Server response\n        \"\"\"\n        j = Job(TEST_USER, attrs={ATTR_block: 'true'})\n        j.set_sleep_time(1)\n        jid = self.server.submit(j)\n        client_host = socket.getfqdn(self.server.client)\n        msg = 'Server@%s;Job;%s;check_block_wt: Write successful' \\\n              ' to client %s for job %s' % \\\n              (self.server.shortname, jid, client_host, jid)\n        self.server.log_match(msg, tail=True, interval=2, max_attempts=30)\n\n    def test_block_job_array(self):\n        \"\"\"\n        Test to submit a block array job and verify the Server response\n        \"\"\"\n        j = Job(TEST_USER, attrs={ATTR_block: 'true', ATTR_J: '1-3'})\n        j.set_sleep_time(1)\n        jid = self.server.submit(j)\n        client_host = socket.getfqdn(self.server.client)\n        msg = 'Server@%s;Job;%s;check_block_wt: Write successful ' \\\n              'to client %s for job %s' % \\\n              (self.server.shortname, jid, client_host, jid)\n        self.server.log_match(msg, tail=True, interval=2, max_attempts=30)\n"
  },
  {
    "path": "test/tests/functional/pbs_que_resc_usage.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestQueRescUsage(TestFunctional):\n    \"\"\"\n    Test resource behavior after set/unset/assign/release resource on the queue\n    \"\"\"\n    err_msg = 'Failed to get the expected resource value'\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        self.server.manager(MGR_CMD_SET, NODE, {\n                            'resources_available.ncpus': 10},\n                            self.mom.shortname)\n\n    def create_custom_resc(self):\n        \"\"\"\n        helper function to create a custom resource named \"foo\"\n        \"\"\"\n        self.server.manager(MGR_CMD_CREATE, RSC, {\n                            'type': 'long', 'flag': 'q'}, id='foo')\n        self.scheduler.add_resource('foo')\n        self.server.manager(MGR_CMD_SET, QUEUE, {\n                            'resources_available.foo': 6}, id='workq')\n\n    def test_resc_assigned_set_unset(self):\n        \"\"\"\n        Test the \"resources_assigned\" attribute of the resource and its\n        behavior during set/unset\n        \"\"\"\n\n        # set a resource and unset it without using it\n        self.server.manager(MGR_CMD_SET, QUEUE, {\n                            'resources_available.ncpus': 4}, id='workq')\n        q_status = self.server.status(QUEUE, id='workq')\n        self.assertEqual(\n            
int(q_status[0]['resources_available.ncpus']), 4, self.err_msg)\n        self.assertNotIn('resources_assigned.ncpus',\n                         q_status[0], self.err_msg)\n        self.server.manager(MGR_CMD_UNSET, QUEUE,\n                            'resources_available.ncpus', id='workq')\n        q_status = self.server.status(QUEUE, id='workq')\n        self.assertNotIn('resources_available.ncpus',\n                         q_status[0], self.err_msg)\n\n        # set the resource and unset it after using\n        a = {'resources_available.ncpus': 8}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, id='workq')\n        q_status = self.server.status(QUEUE, id='workq')\n        self.assertEqual(\n            int(q_status[0]['resources_available.ncpus']), 8, self.err_msg)\n        # job submission\n        j_attr = {'Resource_List.ncpus': '3'}\n        j1 = Job(attrs=j_attr)\n        jid_1 = self.server.submit(j1)\n        j2 = Job(attrs=j_attr)\n        jid_2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'R'}, jid_1)\n        self.server.expect(JOB, {'job_state': 'R'}, jid_2)\n        q_status = self.server.status(\n            QUEUE, 'resources_assigned.ncpus', id='workq')\n        self.assertEqual(\n            int(q_status[0]['resources_assigned.ncpus']), 6, self.err_msg)\n        self.server.manager(MGR_CMD_UNSET, QUEUE,\n                            'resources_available.ncpus', id='workq')\n        q_status = self.server.status(QUEUE, id='workq')\n        self.assertNotIn('resources_available.ncpus',\n                         q_status[0], self.err_msg)\n        self.assertIn('resources_assigned.ncpus',\n                      q_status[0], self.err_msg)\n\n        # Restart the server() and check \"resources_assigned\" value when\n        # resource is not set but still some jobs are running in the system\n        # and using the same resource.\n        self.server.restart()\n        q_status = self.server.status(QUEUE, id='workq')\n  
      self.assertNotIn('resources_available.ncpus',\n                         q_status[0], self.err_msg)\n        self.assertEqual(\n            int(q_status[0]['resources_assigned.ncpus']), 6, self.err_msg)\n        # If no jobs are running at the time the server is restarted\n        self.server.delete(jid_1, extend='force', wait=True)\n        self.server.delete(jid_2, extend='force', wait=True)\n        self.server.restart()\n        q_status = self.server.status(QUEUE, id='workq')\n        self.assertNotIn('resources_available.ncpus',\n                         q_status[0], self.err_msg)\n        self.assertNotIn('resources_assigned.ncpus',\n                         q_status[0], self.err_msg)\n\n    def test_resources_assigned_with_zero_val(self):\n        \"\"\"\n        In PBS we can request positive or negative values, so sometimes\n        resources_assigned becomes zero even though some jobs are still using\n        the same resource. In this case resources_assigned shouldn't be unset\n        while jobs remain in the system; once they are gone it should be\n        unset.\n        \"\"\"\n\n        # create a resource\n        self.create_custom_resc()\n\n        # resources_assigned is zero but jobs are still in the system\n        j1_attr = {'Resource_List.foo': '3'}\n        j1 = Job(attrs=j1_attr)\n        jid_1 = self.server.submit(j1)\n        # requesting a negative resource value here\n        j2_attr = {'Resource_List.foo': '-3'}\n        j2 = Job(attrs=j2_attr)\n        jid_2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'R'}, jid_1)\n        self.server.expect(JOB, {'job_state': 'R'}, jid_2)\n        q_status = self.server.status(QUEUE, id='workq')\n        self.assertEqual(\n            int(q_status[0]['resources_assigned.foo']), 0, self.err_msg)\n        self.server.manager(MGR_CMD_UNSET, QUEUE,\n                            'resources_available.foo', id='workq')\n        self.server.restart()\n        q_status = self.server.status(QUEUE, id='workq')\n        
self.assertEqual(\n            int(q_status[0]['resources_assigned.foo']), 0, self.err_msg)\n        self.server.delete(jid_1, extend='force', wait=True)\n        self.server.delete(jid_2, extend='force', wait=True)\n        # jobs have finished now, resources_assigned should be unset this time\n        self.server.restart()\n        q_status = self.server.status(QUEUE, id='workq')\n        self.assertNotIn('resources_assigned.foo',\n                         q_status[0], self.err_msg)\n\n    def test_resources_assigned_deletion(self):\n        \"\"\"\n        Test resources_assigned.<resc_name> deletion from the system\n        \"\"\"\n        # create a resource\n        self.create_custom_resc()\n        # submit jobs\n        j_attr = {'Resource_List.foo': '3'}\n        j1 = Job(attrs=j_attr)\n        j1.set_sleep_time(30)\n        jid_1 = self.server.submit(j1)\n        j2 = Job(attrs=j_attr)\n        j2.set_sleep_time(30)\n        jid_2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'R'}, jid_1)\n        self.server.expect(JOB, {'job_state': 'R'}, jid_2)\n        # try to delete the resource when it's busy on a job\n        try:\n            self.server.manager(MGR_CMD_DELETE, RSC, id='foo')\n        except PbsManagerError as e:\n            self.assertIn(\"Resource busy on job\", e.msg[0])\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid_1)\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid_2)\n        # now that the jobs have finished, try to delete the resource again\n        self.server.manager(MGR_CMD_DELETE, RSC, id='foo')\n        q_status = self.server.status(QUEUE, id='workq')\n        self.assertNotIn('resources_assigned.foo',\n                         q_status[0], self.err_msg)\n        self.assertNotIn('resources_available.foo',\n                         q_status[0], self.err_msg)\n"
  },
  {
    "path": "test/tests/functional/pbs_ralter.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nfrom tests.functional import *\n\n\nclass TestPbsResvAlter(TestFunctional):\n\n    \"\"\"\n    This test suite is for testing the reservation times modification feature.\n    \"\"\"\n\n    # Class variables\n    tzone = None\n    # These must be equal to the declarations in pbs_error.h\n    PBSE_UNKRESVID = 236\n    PBSE_SELECT_NOT_SUBSET = 79\n    PBSE_RESV_NOT_EMPTY = 74\n    PBSE_STDG_RESV_OCCR_CONFLICT = 75\n    PBSE_INCORRECT_USAGE = 2\n    PBSE_PERM = 159\n    PBSE_NOSUP = 181\n    fmt = \"%a %b %d %H:%M:%S %Y\"\n    bu = BatchUtils()\n\n    def setUp(self):\n\n        TestFunctional.setUp(self)\n        self.server.set_op_mode(PTL_CLI)\n        # Set PBS_TZID, needed for standing reservation.\n        if 'PBS_TZID' in self.conf:\n            self.tzone = self.conf['PBS_TZID']\n        elif 'PBS_TZID' in os.environ:\n            self.tzone = os.environ['PBS_TZID']\n        else:\n            self.logger.info('Timezone not set, using Asia/Kolkata')\n            self.tzone = 'Asia/Kolkata'\n\n        a = {'resources_available.ncpus': 4, 'resources_available.mem': '1gb'}\n        self.mom.create_vnodes(a, num=2,\n                               usenatvnode=True)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 4095})\n\n    def submit_and_confirm_reservation(self, offset, duration, standing=False,\n                                       
select=\"1:ncpus=4\",\n                                       rrule=\"FREQ=HOURLY;COUNT=2\",\n                                       ExpectSuccess=1, ruser=TEST_USER):\n        \"\"\"\n        Helper function to submit a reservation and wait until it is confirmed.\n        It also checks for the corresponding server and accounting logs.\n\n        :param offset: time in seconds after which the reservation should\n                       start.\n        :type  offset: int\n\n        :param duration: duration of the reservation to be submitted.\n        :type  duration: int\n\n        :param standing: whether to submit a standing reservation. Default: No.\n        :type  standing: bool\n\n        :param select: select specification for the reservation,\n                      Default: \"1:ncpus=4\"\n        :type  select: string.\n\n        :param rrule:  iCal recurrence rule for submitting a standing\n                      reservation.  Default: \"FREQ=HOURLY;COUNT=2\"\n        :type  rrule: string.\n\n        :param ExpectSuccess: Whether the caller expects the submission to be\n                             successful or not. If set to anything other than\n                             1 or 0, reservation state is not checked at all.\n                             Default: 1\n        :type  ExpectSuccess: int.\n\n        :param ruser: User who owns the reservation. 
Default: TEST_USER.\n        :type ruser: PbsUser.\n        \"\"\"\n        start = int(time.time()) + offset\n        end = start + duration\n\n        if standing:\n            attrs = {'Resource_List.select': select,\n                     'reserve_start': start,\n                     'reserve_end': end,\n                     'reserve_timezone': self.tzone,\n                     'reserve_rrule': rrule}\n        else:\n            attrs = {'Resource_List.select': select,\n                     'reserve_start': start,\n                     'reserve_end': end}\n\n        rid = self.server.submit(Reservation(ruser, attrs))\n        msg = \"Resv;\" + rid + \";New reservation submitted start=\"\n        msg += time.strftime(self.fmt, time.localtime(int(start)))\n        msg += \" end=\"\n        msg += time.strftime(self.fmt, time.localtime(int(end)))\n        if standing:\n            msg += \" recurrence_rrule=\" + rrule + \" timezone=\"\n            msg += self.tzone\n\n        self.server.log_match(msg, interval=2, max_attempts=30)\n\n        if ExpectSuccess == 1:\n            attrs = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n            self.server.expect(RESV, attrs, id=rid)\n\n            msg = \"Resv;\" + rid + \";Reservation confirmed\"\n            self.server.log_match(msg, interval=2,\n                                  max_attempts=30)\n\n            self.server.expect(RESV, attrs, id=rid)\n            acct_msg = \"U;\" + rid + \";requestor=\" + ruser.name + \"@.*\"\n\n            if standing:\n                acct_msg += \" recurrence_rrule=\" + re.escape(rrule)\n                acct_msg += \" timezone=\" + re.escape(self.tzone)\n\n            self.server.accounting_match(acct_msg, interval=2, regexp=True,\n                                         max_attempts=30, n='ALL')\n        elif ExpectSuccess == 0:\n            msg = \"Resv;\" + rid + \";Reservation denied\"\n            self.server.log_match(msg, interval=2,\n                              
    max_attempts=30)\n\n        return rid, start, end\n\n    def check_resv_running(self, rid, offset=0, display=True):\n        \"\"\"\n        Helper method to wait for reservation to start running.\n\n        :param rid: Reservation id.\n        :type  rid: string.\n\n        :param offset: Time in seconds to wait for the reservation to start\n                      running.\n        :type  offset: int.\n\n        :param display: Whether to display a message or not, default - Yes.\n        :type  display: bool\n        \"\"\"\n        if offset > 0:\n            if display:\n                self.logger.info(\"Waiting for reservation to start running.\")\n            else:\n                self.logger.info(\"Sleeping.\")\n\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, attrs, id=rid, offset=offset)\n\n    def check_occr_finish(self, rid, duration):\n        \"\"\"\n        Helper method to wait for a reservation occurrence to finish.\n        This method will not work when waiting for the last occurrence of a\n        standing reservation to finish running.\n\n        :param rid: Reservation id.\n        :type  rid: string.\n\n        :param duration: Time in seconds to wait for the reservation to finish\n                      running.\n        :type  duration: int.\n        \"\"\"\n        if duration > 0:\n            self.logger.info(\"Waiting for reservation to finish.\")\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, attrs, id=rid,\n                           offset=(duration - 5), interval=2)\n\n    def check_standing_resv_second_occurrence(self, rid, start, end,\n                                              select=None, freq=3600,\n               
                               wait=False):\n        \"\"\"\n        Helper method to verify that the second occurrence of a standing\n        reservation retains its original start and end times and select.\n        Assumption: This method assumes that rid represents an HOURLY\n                    reservation.\n\n        :param rid: Reservation id.\n        :type  rid: string.\n\n        :param start: Start time of the first occurrence of the reservation.\n        :type  start: int.\n\n        :param end: End time of the first occurrence of the reservation.\n        :type  end: int.\n\n        :param select: Expected select specification of the occurrence,\n                      default - None.\n        :type  select: string.\n\n        :param freq: Frequency in seconds to run occurrences, default - 1 hour.\n        :type  freq: int.\n\n        :param wait: Whether to wait for occurrence to start, default - No.\n        :type  wait: bool.\n        \"\"\"\n        next_start = start + freq\n        next_end = end + freq\n        duration = end - start\n        next_start_conv = self.bu.convert_seconds_to_datetime(\n            next_start, self.fmt)\n        next_end_conv = self.bu.convert_seconds_to_datetime(\n            next_end, self.fmt)\n        attrs = {'reserve_start': next_start_conv,\n                 'reserve_end': next_end_conv,\n                 'reserve_duration': duration}\n        if select:\n            attrs.update({'Resource_List.select': select})\n        self.server.expect(RESV, attrs, id=rid, max_attempts=10,\n                           interval=5)\n        if wait is True:\n            attr = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n            t = start + freq - time.time()\n            self.server.expect(RESV, attr, id=rid,\n                               offset=t, max_attempts=10)\n\n    def submit_job_to_resv(self, rid, sleep=10, user=None):\n        \"\"\"\n        Helper method for submitting a sleep job to the reservation.\n\n        :param rid: Reservation id.\n        :type  rid: string.\n\n        :param sleep: Sleep time in seconds for the job.\n        :type  sleep: 
int.\n        \"\"\"\n        r_queue = rid.split('.')[0]\n        a = {'queue': r_queue}\n        if not user:\n            user = TEST_USER\n        j = Job(user, a)\n        j.set_sleep_time(sleep)\n        jid = self.server.submit(j)\n        return jid\n\n    def alter_a_reservation(self, r, start, end, shift=0,\n                            alter_s=False, alter_e=False,\n                            whichMessage=1, confirm=True, check_log=True,\n                            interactive=0, sequence=1,\n                            a_duration=None, select=None, extend=None,\n                            runas=None):\n        \"\"\"\n        Helper method for altering a reservation.\n        This method also checks for the server and accounting logs.\n\n        :param r: Reservation id.\n        :type  r: string.\n\n        :param start: Start time of the reservation.\n        :type  start: int.\n\n        :param end: End time of the reservation.\n        :type  end: int\n\n        :param shift: Time in seconds the reservation times will be moved.\n        :type  shift: int.\n\n        :param alter_s: Whether the caller intends to change the start time.\n                       Default - False.\n        :type  alter_s: bool.\n\n        :param alter_e: Whether the caller intends to change the end time.\n                       Default - False.\n        :type  alter_e: bool.\n\n        :param whichMessage: Which message is expected to be returned.\n                            Default: 1.\n                             =-1 - No exception, don't check logs\n                             =0 - PbsResvAlterError exception will be raised,\n                                  so check for appropriate error response.\n                             =1 - No exception, check for \"CONFIRMED\" message\n                             =2 - No exception, check for \"UNCONFIRMED\" message\n                             =3 - No exception, check for \"DENIED\" message\n        :type  
whichMessage: int.\n\n        :param confirm: The expected state of the reservation after it is\n                       altered. It can be either Confirmed or Running.\n                       Default - Confirmed State.\n        :type  confirm: bool.\n\n        :param check_log: Whether to check the server and accounting logs\n                         after a successful alter.\n                         Default - True.\n        :type  check_log: bool.\n\n        :param interactive: Time in seconds the CLI waits for a reply.\n                           Default - 0 seconds.\n        :type  interactive: int.\n\n        :param sequence: To check the number of log matches corresponding\n                        to alter.\n                        Default: 1\n        :type  sequence: int.\n\n        :param a_duration: The duration to modify.\n        :type a_duration: int.\n        :param select: New select to request, if any.\n        :type select: str.\n        :param extend: extend parameter.\n        :type extend: str.\n        :param runas: User who alters the reservation.\n                      Default: user running the test.\n        :type runas: PbsUser.\n        \"\"\"\n        new_start = start\n        new_end = end\n        attrs = {}\n\n        if alter_s:\n            new_start = start + shift\n            new_start_conv = self.bu.convert_seconds_to_datetime(\n                new_start)\n            attrs['reserve_start'] = new_start_conv\n\n        if alter_e:\n            new_end = end + shift\n            new_end_conv = self.bu.convert_seconds_to_datetime(new_end)\n            attrs['reserve_end'] = new_end_conv\n\n        if interactive > 0:\n            attrs['interactive'] = interactive\n\n        if a_duration:\n            if isinstance(a_duration, str) and ':' in a_duration:\n                new_duration_conv = self.bu.convert_duration(a_duration)\n            else:\n                new_duration_conv = a_duration\n\n            if not alter_s and not alter_e:\n                new_end = start + new_duration_conv + shift\n            elif alter_s and not alter_e:\n                new_end = new_start + new_duration_conv\n            elif not alter_s and alter_e:\n                new_start = new_end - new_duration_conv\n            # 
else new_start and new_end have already been calculated\n        else:\n            new_duration_conv = new_end - new_start\n\n        if a_duration:\n            attrs['reserve_duration'] = new_duration_conv\n\n        if select:\n            attrs['Resource_List.select'] = select\n\n        if runas is None:\n            runas = self.du.get_current_user()\n\n        if whichMessage:\n            msg = ['']\n            acct_msg = ['']\n\n            if interactive:\n                if whichMessage == 1:\n                    msg = \"pbs_ralter: \" + r + \" CONFIRMED\"\n                elif whichMessage == 2:\n                    msg = \"pbs_ralter: \" + r + \" UNCONFIRMED\"\n                else:\n                    msg = \"pbs_ralter: \" + r + \" DENIED\"\n            else:\n                msg = \"pbs_ralter: \" + r + \" ALTER REQUESTED\"\n\n            self.server.alterresv(r, attrs, extend=extend, runas=runas)\n\n            self.assertEqual(msg, self.server.last_out[0])\n            self.logger.info(msg + \" displayed\")\n\n            if check_log:\n                msg = \"Resv;\" + r + \";Attempting to modify reservation \"\n                if start != new_start:\n                    msg += \"start=\"\n                    msg += time.strftime(self.fmt,\n                                         time.localtime(int(new_start)))\n                    msg += \" \"\n\n                if end != new_end:\n                    msg += \"end=\"\n                    msg += time.strftime(self.fmt,\n                                         time.localtime(int(new_end)))\n                    msg += \" \"\n\n                if select:\n                    msg += \"select=\" + select + \" \"\n\n                # strip the last space\n                msg = msg[:-1]\n                self.server.log_match(msg, interval=2, max_attempts=30)\n\n            if whichMessage == -1:\n                return new_start, new_end\n            elif whichMessage == 1:\n                if 
alter_s:\n                    new_start_conv = self.bu.convert_seconds_to_datetime(\n                        new_start, self.fmt)\n                    attrs['reserve_start'] = new_start_conv\n\n                if alter_e:\n                    new_end_conv = self.bu.convert_seconds_to_datetime(\n                        new_end, self.fmt)\n                    attrs['reserve_end'] = new_end_conv\n\n                if a_duration:\n                    attrs['reserve_duration'] = new_duration_conv\n\n                if confirm:\n                    attrs['reserve_state'] = (MATCH_RE, 'RESV_CONFIRMED|2')\n                else:\n                    attrs['reserve_state'] = (MATCH_RE, 'RESV_RUNNING|5')\n\n                self.server.expect(RESV, attrs, id=r)\n                if check_log:\n                    acct_msg = \"Y;\" + r + \";requestor=Scheduler@.*\" + \" start=\"\n                    acct_msg += str(new_start) + \" end=\" + str(new_end)\n                    self.server.status(RESV, 'resv_nodes', id=r)\n                    acct_msg += \" nodes=\"\n                    acct_msg += re.escape(self.server.reservations[r].\n                                          resvnodes())\n\n                    if r[0] == 'S':\n                        self.server.status(RESV, 'reserve_count', id=r)\n                        count = self.server.reservations[r].attributes[\n                            'reserve_count']\n                        acct_msg += \" count=\" + count\n\n                    self.server.accounting_match(acct_msg, regexp=True,\n                                                 interval=2,\n                                                 max_attempts=30, n='ALL')\n\n                # Check if reservation reports new start time\n                # and updated duration.\n\n                msg = \"Resv;\" + r + \";Reservation alter confirmed\"\n            else:\n                msg = \"Resv;\" + r + \";Reservation alter denied\"\n            interval = 0.5\n          
  max_attempts = 20\n            for _ in range(max_attempts):\n                lines = self.server.log_match(msg, n='ALL', allmatch=True,\n                                              max_attempts=5)\n                info_msg = \"log_match: searching \" + \\\n                    str(sequence) + \" sequence of message: \" + \\\n                    msg + \": Got: \" + str(len(lines))\n                self.logger.info(info_msg)\n                if len(lines) == sequence:\n                    break\n                time.sleep(interval)\n            else:\n                # for/else: no break means the expected match count\n                # never showed up within max_attempts tries.\n                raise PtlLogMatchError(rc=1, rv=False, msg=info_msg)\n            return new_start, new_end\n        else:\n            try:\n                self.server.alterresv(r, attrs, extend=extend, runas=runas)\n            except PbsResvAlterError as e:\n                if e.rc == self.PBSE_RESV_NOT_EMPTY:\n                    msg = \"pbs_ralter: Reservation not empty\"\n                elif e.rc == self.PBSE_UNKRESVID:\n                    msg = \"pbs_ralter: Unknown Reservation Id\"\n                elif e.rc == self.PBSE_SELECT_NOT_SUBSET:\n                    msg = \"pbs_ralter: New select must be made up of a \"\n                    msg += \"subset of the original chunks\"\n                elif e.rc == self.PBSE_STDG_RESV_OCCR_CONFLICT:\n                    msg = \"Requested time(s) will interfere with \"\n                    msg += \"a later occurrence\"\n                    log_msg = \"Resv;\" + r + \";\" + msg\n                    msg = \"pbs_ralter: \" + msg\n                    self.server.log_match(log_msg, interval=2,\n                                          max_attempts=30)\n                elif e.rc == self.PBSE_INCORRECT_USAGE:\n                    pass\n                elif e.rc == self.PBSE_PERM:\n                    msg = \"pbs_ralter: Unauthorized Request\"\n                elif 
e.rc == self.PBSE_NOSUP:\n                    msg = \"pbs_ralter: No support for requested service\"\n\n                # PBSE_INCORRECT_USAGE leaves msg unset, so skip the check\n                # for it; for every other code the error text must match.\n                if e.rc != self.PBSE_INCORRECT_USAGE:\n                    self.assertIn(msg, str(e.msg))\n                return start, end\n            else:\n                self.fail(\"Reservation alter allowed when it \"\n                          \"should not be.\")\n\n    def get_resv_time_info(self, rid):\n        \"\"\"\n        Get the start time, end time and duration of a reservation\n        in seconds\n        :param rid: reservation id\n        :type  rid: string\n        \"\"\"\n        resv_data = self.server.status(RESV, id=rid)\n        t_duration = int(resv_data[0]['reserve_duration'])\n        t_end = self.bu.convert_stime_to_seconds(resv_data[0]['reserve_end'])\n        t_start = self.bu.convert_stime_to_seconds(resv_data[0]\n                                                   ['reserve_start'])\n        return t_duration, t_start, t_end\n\n    def test_alter_advance_resv_start_time_before_run(self):\n        \"\"\"\n        This test case covers the below scenarios for an advance reservation\n        that has not started running.\n\n        1. Make an advance reservation start late (empty reservation)\n        2. Make an advance reservation start early (empty reservation)\n        3. 
Make an advance reservation start early (with a job in it)\n\n        All the above operations are expected to be successful.\n        \"\"\"\n        offset = 60\n        duration = 20\n        shift = 10\n        rid, start, end = self.submit_and_confirm_reservation(offset, duration)\n\n        new_start, new_end = self.alter_a_reservation(rid, start, end, shift,\n                                                      alter_s=True, sequence=1)\n\n        new_start, new_end = self.alter_a_reservation(rid, new_start, new_end,\n                                                      -shift, alter_s=True,\n                                                      interactive=5,\n                                                      sequence=2)\n\n        # Submit a job to the reservation and change its start time.\n        self.submit_job_to_resv(rid)\n\n        new_start, new_end = self.alter_a_reservation(rid, new_start, new_end,\n                                                      -shift, alter_s=True,\n                                                      sequence=3)\n\n    def test_alter_advance_resv_start_time_after_run(self):\n        \"\"\"\n        This test case covers the below scenarios for an advance reservation\n        that has started running.\n\n        1. Make an advance reservation start late (empty reservation)\n        2. 
Make an advance reservation start late (with a job in it)\n\n        Only operation 1 should be successful, operation 2 should fail.\n        \"\"\"\n        offset = 10\n        duration = 20\n        shift = 10\n        rid, start, end = self.submit_and_confirm_reservation(offset, duration)\n\n        # Wait for the reservation to start running.\n        self.check_resv_running(rid, offset)\n\n        # Changing start time should be allowed as the reservation is empty.\n        self.alter_a_reservation(rid, start, end, shift, alter_s=True)\n\n        # Wait for the reservation to start running.\n        self.check_resv_running(rid, offset)\n\n        # Submit a job to the reservation.\n        self.submit_job_to_resv(rid)\n\n        # Changing start time should fail this time as it is not empty.\n        self.alter_a_reservation(rid, start, end, shift, alter_s=True,\n                                 whichMessage=0)\n\n    def test_alter_advance_resv_end_time_before_run(self):\n        \"\"\"\n        This test case covers the below scenarios for an advance reservation\n        that has not started running.\n\n        1. Make an advance reservation end late (empty reservation)\n        2. Make an advance reservation end late (empty reservation)\n        3. 
Make an advance reservation end late (with a job in it)\n\n        All the above operations are expected to be successful.\n        \"\"\"\n        duration = 20\n        shift = 10\n        offset = 60\n        rid, start, end = self.submit_and_confirm_reservation(offset, duration)\n\n        new_start, new_end = self.alter_a_reservation(rid, start, end, shift,\n                                                      alter_e=True, sequence=1)\n\n        new_start, new_end = self.alter_a_reservation(rid, new_start, new_end,\n                                                      shift, alter_e=True,\n                                                      sequence=2)\n\n        # Submit a job to the reservation.\n        self.submit_job_to_resv(rid)\n\n        new_start, new_end = self.alter_a_reservation(rid, new_start, new_end,\n                                                      shift, alter_e=True,\n                                                      sequence=3)\n\n    def test_alter_advance_resv_end_time_after_run(self):\n        \"\"\"\n        This test case covers the below scenarios for an advance reservation\n        that has started running.\n\n        1. 
Make an advance reservation end late (with a job in it)\n\n        The above operation is expected to be successful.\n        \"\"\"\n        duration = 20\n        shift = 30\n        offset = 10\n        sleep = 45\n        rid, start, end = self.submit_and_confirm_reservation(offset, duration)\n\n        # Submit a job to the reservation.\n        jid = self.submit_job_to_resv(rid, sleep)\n\n        # Wait for the reservation to start running.\n        self.check_resv_running(rid, offset)\n\n        self.alter_a_reservation(rid, start, end, shift, alter_e=True,\n                                 confirm=False)\n\n        self.check_resv_running(rid, duration, 0)\n        self.server.expect(JOB, {'job_state': \"R\"}, id=jid)\n\n    def test_alter_advance_resv_both_times_before_run(self):\n        \"\"\"\n        This test case covers the below scenarios for an advance reservation\n        that has not started running.\n\n        1. Make an advance reservation start and end late (empty reservation)\n        2. Make an advance reservation start and end early (empty reservation)\n        3. 
Make an advance reservation start and end early (with a job in it)\n\n        All the above operations are expected to be successful.\n        \"\"\"\n        offset = 60\n        duration = 20\n        shift = 10\n        rid, start, end = self.submit_and_confirm_reservation(offset, duration)\n\n        new_start, new_end = self.alter_a_reservation(rid, start, end,\n                                                      shift, alter_s=True,\n                                                      alter_e=True, sequence=1)\n\n        new_start, new_end = self.alter_a_reservation(rid, new_start, new_end,\n                                                      -shift, alter_s=True,\n                                                      alter_e=True, sequence=2)\n\n        # Submit a job to the reservation and change its start time.\n        self.submit_job_to_resv(rid)\n\n        new_start, new_end = self.alter_a_reservation(rid, new_start, new_end,\n                                                      -shift, alter_s=True,\n                                                      alter_e=True, sequence=3)\n\n    def test_alter_advance_resv_both_times_after_run(self):\n        \"\"\"\n        This test case covers the below scenarios for an advance reservation\n        that has started running.\n\n        1. Make an advance reservation start and end late (empty reservation)\n        2. 
Make an advance reservation start and end late (with a job in it)\n\n        Only operation 1 should be successful, operation 2 should fail.\n        \"\"\"\n        offset = 10\n        duration = 200\n        shift = 10\n        rid, start, end = self.submit_and_confirm_reservation(offset, duration)\n\n        # Wait for the reservation to start running.\n        self.check_resv_running(rid, offset)\n\n        # Changing start time should be allowed as the reservation is empty.\n        new_start, new_end = self.alter_a_reservation(rid, start, end,\n                                                      shift, alter_s=True,\n                                                      alter_e=True)\n\n        # Wait for the reservation to start running.\n        self.check_resv_running(rid, offset)\n\n        # Submit a job to the reservation.\n        self.submit_job_to_resv(rid)\n\n        # Changing start time should fail this time as it is not empty.\n        self.alter_a_reservation(rid, new_start, new_end,\n                                 shift, alter_s=True,\n                                 alter_e=True, whichMessage=0)\n\n    def test_alter_standing_resv_start_time_before_run(self):\n        \"\"\"\n        This test case covers the below scenarios for a standing reservation\n        that has not started running.\n\n        1. Make an occurrence of a standing reservation start\n           late (empty reservation)\n        2. Make an occurrence of a standing reservation start\n           early (empty reservation)\n        3. Make an occurrence of a standing reservation start\n           early (with a job in it)\n        4. 
After the first occurrence of the reservation finishes, verify that\n           the start and end time of the second occurrence have not changed.\n\n        All the above operations are expected to be successful.\n        \"\"\"\n        offset = 30\n        duration = 20\n        shift = 15\n        rid, start, end = self.submit_and_confirm_reservation(offset,\n                                                              duration,\n                                                              standing=True)\n\n        new_start, new_end = self.alter_a_reservation(rid, start, end, shift,\n                                                      alter_s=True, sequence=1)\n\n        new_start, new_end = self.alter_a_reservation(rid, new_start,\n                                                      new_end, -shift,\n                                                      alter_s=True, sequence=2)\n\n        # Submit a job to the reservation and change its start time.\n        self.submit_job_to_resv(rid)\n\n        new_start, new_end = self.alter_a_reservation(rid, new_start,\n                                                      new_end, -shift,\n                                                      alter_s=True, sequence=3)\n\n        # Wait for the reservation to start running.\n        self.check_resv_running(rid, offset - shift)\n\n        # Wait for the reservation occurrence to finish.\n        new_duration = new_end - new_start\n        self.check_occr_finish(rid, new_duration)\n\n        # Check that duration of the second occurrence is not altered.\n        self.check_standing_resv_second_occurrence(rid, start, end)\n\n    def test_alter_standing_resv_start_time_after_run(self):\n        \"\"\"\n        This test case covers the below scenarios for a standing reservation\n        that has started running.\n\n        1. Make an occurrence of a standing reservation start\n           late (empty reservation)\n        2. 
Make an occurrence of a standing reservation start\n           late (with a job in it)\n        3. After the first occurrence of the reservation finishes, verify that\n           the start and end time of the second occurrence have not changed.\n\n        Only operations 1 and 3 should be successful, operation 2 should fail.\n        \"\"\"\n        offset = 10\n        duration = 20\n        shift = 15\n        rid, start, end = self.submit_and_confirm_reservation(offset, duration,\n                                                              standing=True)\n\n        # Wait for the reservation to start running.\n        self.check_resv_running(rid, offset)\n\n        # Changing start time should be allowed as the reservation is empty.\n        new_start, new_end = self.alter_a_reservation(rid, start, end, shift,\n                                                      alter_s=True)\n\n        # Wait for the reservation to start running.\n        self.check_resv_running(rid, offset)\n\n        # Submit a job to the reservation.\n        self.submit_job_to_resv(rid, sleep=15)\n\n        # Changing start time should fail this time as it is not empty.\n        self.alter_a_reservation(rid, new_start, new_end, shift,\n                                 alter_s=True, whichMessage=0)\n\n        # Wait for the reservation occurrence to finish.\n        new_duration = new_end - new_start\n        self.check_occr_finish(rid, new_duration)\n\n        # Check that duration of the second occurrence is not altered.\n        self.check_standing_resv_second_occurrence(rid, start, end)\n\n    def test_alter_standing_resv_end_time_before_run(self):\n        \"\"\"\n        This test case covers the below scenarios for a standing reservation\n        that has not started running and some scenarios after it starts\n        running.\n\n        1. Make an occurrence of a standing reservation end\n           late (empty reservation)\n        2. 
Make an occurrence of a standing reservation end early\n           (empty reservation)\n        3. Make an occurrence of a standing reservation end\n           late (with a job in it)\n        4. Check if the reservation continues to be in Running state after the\n           original end time has passed.\n        5. Verify that the job inside this reservation also continues to run\n           after the original end time has passed.\n        6. After the first occurrence of the reservation finishes, verify that\n           the start and end time of the second occurrence have not changed.\n\n        All the above operations are expected to be successful.\n        \"\"\"\n        duration = 30\n        shift = 10\n        offset = 30\n        rid, start, end = self.submit_and_confirm_reservation(offset, duration,\n                                                              standing=True)\n\n        new_start, new_end = self.alter_a_reservation(rid, start, end, shift,\n                                                      alter_e=True, sequence=1)\n\n        new_start, new_end = self.alter_a_reservation(rid, new_start, new_end,\n                                                      -shift, alter_e=True,\n                                                      sequence=2)\n\n        # Submit a job to the reservation.\n        jid = self.submit_job_to_resv(rid, 100)\n\n        new_start, new_end = self.alter_a_reservation(rid, start, end, shift,\n                                                      alter_e=True, sequence=3)\n\n        self.check_resv_running(rid, offset)\n        self.server.expect(JOB, {'job_state': \"R\"}, id=jid)\n\n        # Sleep for the actual duration.\n        time.sleep(duration)\n        self.check_resv_running(rid)\n        self.server.expect(JOB, {'job_state': \"R\"}, id=jid)\n\n        # Wait for the reservation occurrence to finish.\n        self.check_occr_finish(rid, shift)\n\n        # Check that duration of the second occurrence is not 
altered.\n        self.check_standing_resv_second_occurrence(rid, start, end)\n\n    def test_alter_standing_resv_end_time_after_run(self):\n        \"\"\"\n        This test case covers the below scenarios for a standing reservation\n        that has started running.\n\n        1. Make an occurrence of a standing reservation end\n           late (with a job in it)\n        2. Check if the reservation continues to be in Running state after the\n           original end time has passed.\n        3. Verify that the job inside this reservation also continues to run\n           after the original end time has passed.\n        4. After the first occurrence of the reservation finishes, verify that\n           the start and end time of the second occurrence have not changed.\n\n        All the above operations are expected to be successful.\n        \"\"\"\n        duration = 20\n        shift = 10\n        offset = 10\n        sleep = 30\n        rid, start, end = self.submit_and_confirm_reservation(offset, duration,\n                                                              standing=True)\n\n        # Submit a job to the reservation.\n        jid = self.submit_job_to_resv(rid, sleep)\n\n        # Wait for the reservation to start running.\n        self.check_resv_running(rid, offset)\n\n        new_end = self.alter_a_reservation(rid, start, end,\n                                           shift, alter_e=True,\n                                           confirm=False)[1]\n\n        self.check_resv_running(rid, end - int(time.time()) + 1, True)\n        self.server.expect(JOB, {'job_state': \"R\"}, id=jid)\n\n        # Wait for the reservation occurrence to finish.\n        new_duration = int(new_end - time.time())\n        self.check_occr_finish(rid, new_duration)\n\n        # Check that duration of the second occurrence is not altered.\n        self.check_standing_resv_second_occurrence(rid, start, end)\n\n    def 
test_alter_standing_resv_both_times_before_run(self):\n        \"\"\"\n        This test case covers the below scenarios for a standing reservation\n        that has not started running.\n\n        1. Make an occurrence of a standing reservation start and end\n           late (empty reservation).\n        2. Make an occurrence of a standing reservation start and end early\n           (empty reservation).\n        3. Make an occurrence of a standing reservation start and end early\n           (with a job in it).\n        4. Check if the reservation starts and ends as per the new times.\n        5. After the first occurrence of the reservation finishes, verify that\n           the start and end time of the second occurrence have not changed.\n\n        All the above operations are expected to be successful.\n        \"\"\"\n        offset = 40\n        duration = 20\n        shift = 10\n        rid, start, end = self.submit_and_confirm_reservation(offset,\n                                                              duration,\n                                                              standing=True)\n\n        new_start, new_end = self.alter_a_reservation(rid, start, end,\n                                                      shift, alter_s=True,\n                                                      alter_e=True, sequence=1)\n\n        new_start, new_end = self.alter_a_reservation(rid, new_start, new_end,\n                                                      -shift, alter_s=True,\n                                                      alter_e=True, sequence=2)\n\n        # Submit a job to the reservation and change its start time.\n        self.submit_job_to_resv(rid)\n\n        new_start, new_end = self.alter_a_reservation(rid, new_start, new_end,\n                                                      -shift, alter_s=True,\n                                                      alter_e=True, sequence=3)\n\n        # Wait for the reservation to start running.\n   
     self.check_resv_running(rid, offset - shift)\n\n        # Wait for the reservation occurrence to finish.\n        self.check_occr_finish(rid, duration)\n\n        # Check that duration of the second occurrence is not altered.\n        self.check_standing_resv_second_occurrence(rid, start, end)\n\n    def test_alter_standing_resv_both_times_after_run(self):\n        \"\"\"\n        This test case covers the below scenarios for a standing reservation\n        that has started running.\n\n        1. Make an occurrence of a standing reservation start and end\n           late (empty reservation).\n        2. Make an occurrence of a standing reservation start and end\n           late (with a job in it).\n        3. Check if the reservation starts and ends as per the new times.\n        4. After the first occurrence of the reservation finishes, verify that\n           the start and end time of the second occurrence have not changed.\n\n        Only scenarios 1, 3 and 4 should be successful; scenario 2\n        should fail.\n        \"\"\"\n        offset = 10\n        duration = 20\n        shift = 10\n        rid, start, end = self.submit_and_confirm_reservation(offset, duration,\n                                                              standing=True)\n\n        # Wait for the reservation to start running.\n        self.check_resv_running(rid, offset)\n\n        # Changing start time should be allowed as the reservation is empty.\n        new_start, new_end = self.alter_a_reservation(rid, start, end,\n                                                      shift, alter_s=True,\n                                                      alter_e=True,\n                                                      confirm=False)\n\n        # Wait for the reservation to start running.\n        self.check_resv_running(rid, offset)\n\n        # Submit a job to the reservation.\n        self.submit_job_to_resv(rid)\n\n        # Changing start time should fail this time as it is not 
empty.\n        self.alter_a_reservation(rid, new_start, new_end, shift,\n                                 alter_s=True, alter_e=True,\n                                 whichMessage=0)\n\n        # Wait for the reservation occurrence to finish.\n        self.check_occr_finish(rid, duration)\n\n        # Check that duration of the second occurrence is not altered.\n        self.check_standing_resv_second_occurrence(rid, start, end)\n\n    def test_conflict_two_advance_resvs(self):\n        \"\"\"\n        This test confirms that an advance reservation cannot be extended\n        (made to end late) if there is a conflicting reservation and all the\n        nodes in the complex are busy.\n\n        Two back to back advance reservations are submitted that use all the\n        nodes in the complex to test this.\n        \"\"\"\n        duration = 120\n        shift = 10\n        offset1 = 60\n        offset2 = 180\n\n        rid1, start1, end1 = self.submit_and_confirm_reservation(\n            offset1, duration, select=\"2:ncpus=4\")\n        rid2, start2, end2 = self.submit_and_confirm_reservation(\n            offset2, duration, select=\"2:ncpus=4\")\n        self.submit_and_confirm_reservation(offset1, duration,\n                                            select=\"2:ncpus=4\",\n                                            ExpectSuccess=0)\n        self.alter_a_reservation(rid1, start1, end1, shift, alter_e=True,\n                                 whichMessage=3)\n        self.alter_a_reservation(rid1, start1, end1, shift, alter_e=True,\n                                 whichMessage=3, interactive=5, sequence=2)\n        self.alter_a_reservation(rid2, start2, end2, -shift, alter_s=True,\n                                 whichMessage=3)\n        self.alter_a_reservation(rid2, start2, end2, -shift, alter_s=True,\n                                 whichMessage=3, interactive=5, sequence=2)\n\n    def test_conflict_two_standing_resvs(self):\n        \"\"\"\n        
This test confirms that an occurrence of a standing reservation cannot\n        be extended (made to end late) if there is a conflicting reservation\n        and all the nodes in the complex are busy.\n\n        Two back to back standing reservations are submitted that use all the\n        nodes in the complex to test this.\n        \"\"\"\n        duration = 120\n        shift = 10\n        offset1 = 60\n        offset2 = 180\n\n        rid1, start1, end1 = self.submit_and_confirm_reservation(\n            offset1, duration, select=\"2:ncpus=4\", standing=True)\n        rid2, start2, end2 = self.submit_and_confirm_reservation(\n            offset2, duration, select=\"2:ncpus=4\", standing=True)\n        self.submit_and_confirm_reservation(offset1, duration,\n                                            select=\"2:ncpus=4\",\n                                            ExpectSuccess=0, standing=True)\n        self.alter_a_reservation(rid1, start1, end1, shift, alter_e=True,\n                                 whichMessage=3)\n        self.alter_a_reservation(rid1, start1, end1, shift, alter_e=True,\n                                 whichMessage=3, interactive=5, sequence=2)\n        self.alter_a_reservation(rid2, start2, end2, -shift, alter_s=True,\n                                 whichMessage=3)\n        self.alter_a_reservation(rid2, start2, end2, -shift, alter_s=True,\n                                 whichMessage=3, interactive=5, sequence=2)\n\n    def test_check_alternate_nodes_advance_resv_endtime(self):\n        \"\"\"\n        This test confirms that an advance reservation can be extended even if\n        there is a conflicting reservation but there are nodes in the complex\n        that satisfy the resource requirements of the reservation.\n\n        Two back to back advance reservations are submitted that use the\n        same nodes in the complex to test this and end time of the later\n        reservation is altered.\n        \"\"\"\n        duration = 
120\n        shift = 10\n        offset1 = 60\n        offset2 = 180\n\n        rid1, start1, end1 = self.submit_and_confirm_reservation(\n            offset1, duration, select=\"1:ncpus=4\")\n\n        self.server.status(RESV, 'resv_nodes', id=rid1)\n        resv_node = self.server.reservations[rid1].get_vnodes()[0]\n        select = \"1:vnode=\" + resv_node + \":ncpus=4\"\n        rid2 = self.submit_and_confirm_reservation(\n            offset2, duration, select=select)[0]\n\n        attrs = {'resv_nodes': (MATCH_RE, re.escape(resv_node))}\n        self.server.expect(RESV, attrs, id=rid2)\n\n        nodes = self.server.status(NODE)\n        free_node = nodes[resv_node == nodes[0]['id']]['id']\n\n        self.alter_a_reservation(rid1, start1, end1, shift, alter_e=True)\n        attrs = {'resv_nodes': (MATCH_RE, re.escape(free_node))}\n        self.server.expect(RESV, attrs, id=rid1)\n\n    def test_check_alternate_nodes_advance_resv_starttime(self):\n        \"\"\"\n        This test confirms that an advance reservation can be extended even if\n        there is a conflicting reservation but there are nodes in the complex\n        that satisfy the resource requirements of the reservation.\n\n        Two back to back advance reservations are submitted that use the\n        same nodes in the complex to test this and start time of the former\n        reservation is altered.\n        \"\"\"\n        duration = 120\n        shift = 40\n        offset1 = 180\n        offset2 = 30\n\n        rid1, start1, end1 = self.submit_and_confirm_reservation(\n            offset1, duration, select=\"1:ncpus=4\")\n\n        self.server.status(RESV, 'resv_nodes', id=rid1)\n        resv_node = self.server.reservations[rid1].get_vnodes()[0]\n\n        select = \"1:vnode=\" + resv_node + \":ncpus=4\"\n        rid2 = self.submit_and_confirm_reservation(offset2, duration,\n                                                   select=select)[0]\n\n        attrs = {'resv_nodes': (MATCH_RE, 
re.escape(resv_node))}\n        self.server.expect(RESV, attrs, id=rid2)\n\n        nodes = self.server.status(NODE)\n        free_node = nodes[resv_node == nodes[0]['id']]['id']\n\n        self.alter_a_reservation(rid1, start1, end1, -shift, alter_s=True)\n        attrs = {'resv_nodes': (MATCH_RE, re.escape(free_node))}\n        self.server.expect(RESV, attrs, id=rid1)\n\n    def test_check_alternate_nodes_standing_resv_endtime(self):\n        \"\"\"\n        This test confirms that an occurrence of a standing reservation can be\n        extended even if there is a conflicting reservation but there are\n        nodes in the complex that satisfy the resource requirements of the\n        reservation.\n\n        Two back to back standing reservations are submitted that use the\n        same nodes in the complex to test this and end time of the later\n        reservation is altered.\n        \"\"\"\n        duration = 120\n        shift = 10\n        offset1 = 60\n        offset2 = 180\n\n        rid1, start1, end1 = self.submit_and_confirm_reservation(\n            offset1, duration, standing=True)\n\n        self.server.status(RESV, 'resv_nodes', id=rid1)\n        resv_node = self.server.reservations[rid1].get_vnodes()[0]\n\n        select = \"1:vnode=\" + resv_node + \":ncpus=4\"\n        rid2 = self.submit_and_confirm_reservation(offset2, duration,\n                                                   select=select,\n                                                   standing=True)[0]\n\n        attrs = {'resv_nodes': (MATCH_RE, re.escape(resv_node))}\n        self.server.expect(RESV, attrs, id=rid2)\n\n        nodes = self.server.status(NODE)\n        free_node = nodes[resv_node == nodes[0]['id']]['id']\n\n        self.alter_a_reservation(rid1, start1, end1, shift, alter_e=True)\n        attrs = {'resv_nodes': (MATCH_RE, re.escape(free_node))}\n        self.server.expect(RESV, attrs, id=rid1)\n\n    def test_check_alternate_nodes_standing_resv_starttime(self):\n    
    \"\"\"\n        This test confirms that an occurrence of a standing reservation can\n        be extended even if there is a conflicting reservation but there are\n        nodes in the complex that satisfy the resource requirements of the\n        reservation.\n\n        Two back to back standing reservations are submitted that use the\n        same nodes in the complex to test this and start time of the former\n        reservation is altered.\n        \"\"\"\n        duration = 120\n        shift = 40\n        offset1 = 30\n        offset2 = 180\n\n        rid1, start1, end1 = self.submit_and_confirm_reservation(\n            offset2, duration, select=\"1:ncpus=4\", standing=True)\n\n        self.server.status(RESV, 'resv_nodes', id=rid1)\n        resv_node = self.server.reservations[rid1].get_vnodes()[0]\n\n        select = \"1:vnode=\" + resv_node + \":ncpus=4\"\n        rid2 = self.submit_and_confirm_reservation(offset1, duration,\n                                                   select=select,\n                                                   standing=True)[0]\n\n        attrs = {'resv_nodes': (MATCH_RE, re.escape(resv_node))}\n        self.server.expect(RESV, attrs, id=rid2)\n\n        nodes = self.server.status(NODE)\n        # Pick the vnode that is not assigned to the reservation.\n        free_node = nodes[resv_node == nodes[0]['id']]['id']\n\n        self.alter_a_reservation(rid1, start1, end1, -shift, alter_s=True)\n        attrs = {'resv_nodes': (MATCH_RE, re.escape(free_node))}\n        self.server.expect(RESV, attrs, id=rid1)\n\n    def test_conflict_standing_resv_occurrence(self):\n        \"\"\"\n        This test confirms that if the requested time while altering an\n        occurrence of a standing reservation will conflict with a later\n        occurrence of the same standing reservation, the alter request\n        will be denied.\n        \"\"\"\n        duration = 60\n        shift = 10\n        offset = 10\n\n        rid, start, end = self.submit_and_confirm_reservation(\n            offset, duration, select=\"1:ncpus=4\", 
standing=True,\n            rrule=\"FREQ=MINUTELY;COUNT=2\")\n\n        self.alter_a_reservation(rid, start, end, shift, alter_e=True,\n                                 whichMessage=0)\n\n    def test_large_resv_nodes_server_crash(self):\n        \"\"\"\n        This test verifies that the server does not crash when a very large\n        resv_nodes value is recorded in the 'Y' accounting record.\n        Without the fix, this test case fails.\n        \"\"\"\n        duration = 60\n        shift = 10\n        offset = 10\n\n        a = {'resources_available.ncpus': 4}\n        self.mom.create_vnodes(a, num=256,\n                               usenatvnode=True)\n\n        rid, start, end = self.submit_and_confirm_reservation(\n            offset, duration, select=\"256:ncpus=4\")\n\n        self.alter_a_reservation(rid, start, end, shift, alter_s=True)\n\n    def test_alter_advance_resv_boundary_values(self):\n        \"\"\"\n        This test checks the alter of start and end times at the boundary\n        values for an advance reservation.\n        \"\"\"\n        duration = 30\n        shift = 5\n        offset = 100\n\n        rid, start, end = self.submit_and_confirm_reservation(\n            offset, duration)\n\n        start, end = self.alter_a_reservation(\n            rid, start, end, shift, alter_e=True, sequence=1)\n        start, end = self.alter_a_reservation(\n            rid, start, end, -shift, alter_e=True, sequence=2)\n        start, end = self.alter_a_reservation(\n            rid, start, end, shift, alter_s=True, sequence=3)\n        self.alter_a_reservation(\n            rid, start, end, -shift, alter_s=True, sequence=4)\n\n    def test_alter_standing_resv_boundary_values(self):\n        \"\"\"\n        This test checks the alter of start and end times at the boundary\n        values for a standing reservation.\n        \"\"\"\n        duration = 30\n        shift = 5\n        offset = 100\n\n       
 rid, start, end = self.submit_and_confirm_reservation(\n            offset, duration, standing=True)\n\n        start, end = self.alter_a_reservation(\n            rid, start, end, shift, alter_e=True, sequence=1)\n        start, end = self.alter_a_reservation(\n            rid, start, end, -shift, alter_e=True, sequence=2)\n        start, end = self.alter_a_reservation(\n            rid, start, end, shift, alter_s=True, sequence=3)\n        self.alter_a_reservation(\n            rid, start, end, -shift, alter_s=True, sequence=4)\n\n    def test_alter_degraded_resv_mom_down(self):\n        \"\"\"\n        This test checks the alter of start and end times of reservations\n        when mom is down.\n        \"\"\"\n        duration = 30\n        shift = 5\n        offset = 200\n\n        rid1, start1, end1 = self.submit_and_confirm_reservation(\n            offset, duration, select=\"1:ncpus=2\")\n        rid2, start2, end2 = self.submit_and_confirm_reservation(\n            offset, duration, select=\"1:ncpus=2\", standing=True)\n        self.mom.stop()\n        msg = 'mom is not down'\n        self.assertFalse(self.mom.isUp(max_attempts=5), msg)\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_DEGRADED|10')}\n        self.server.expect(RESV, attrs, id=rid1)\n        self.server.expect(RESV, attrs, id=rid2)\n        self.alter_a_reservation(rid1, start1, end1, shift, alter_s=True,\n                                 alter_e=True, whichMessage=3, sequence=1)\n        self.alter_a_reservation(rid2, start2, end2, shift, alter_s=True,\n                                 alter_e=True, whichMessage=3, sequence=1)\n        self.alter_a_reservation(rid1, start1, end1, shift, alter_s=True,\n                                 alter_e=True, whichMessage=3, interactive=2,\n                                 sequence=2)\n        self.alter_a_reservation(rid2, start2, end2, shift, alter_s=True,\n                                 alter_e=True, whichMessage=3, interactive=2,\n            
                     sequence=2)\n        self.mom.start()\n        # test same for node offline case\n        attrs1 = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, attrs1, id=rid1)\n        self.server.expect(RESV, attrs1, id=rid2)\n        resv_node1 = self.server.status(RESV, 'resv_nodes', id=rid1)[0][\n            'resv_nodes'].split(':')[0].split('(')[1]\n        resv_node2 = self.server.status(RESV, 'resv_nodes', id=rid2)[0][\n            'resv_nodes'].split(':')[0].split('(')[1]\n        mtype = 0\n        seq = 0\n        if resv_node1 == resv_node2:\n            self.server.manager(MGR_CMD_SET, NODE, {\n                                'state': \"offline\"}, id=resv_node1)\n            mtype = 1\n            seq = 1\n        else:\n            self.server.manager(MGR_CMD_SET, NODE, {\n                                'state': \"offline\"}, id=resv_node1)\n            self.server.manager(MGR_CMD_SET, NODE, {\n                                'state': \"offline\"}, id=resv_node2)\n            mtype = 3\n            seq = 3\n\n        self.server.expect(RESV, attrs, id=rid1)\n        self.server.expect(RESV, attrs, id=rid2)\n        self.alter_a_reservation(rid1, start1, end1, shift, alter_s=True,\n                                 alter_e=True, whichMessage=mtype,\n                                 sequence=seq)\n        self.alter_a_reservation(rid2, start2, end2, shift, alter_s=True,\n                                 alter_e=True, whichMessage=mtype,\n                                 sequence=seq)\n        self.alter_a_reservation(rid1, start1, end1, shift, alter_s=True,\n                                 alter_e=True, whichMessage=mtype,\n                                 interactive=2, sequence=seq + 1)\n        self.alter_a_reservation(rid2, start2, end2, shift, alter_s=True,\n                                 alter_e=True, whichMessage=mtype,\n                                 interactive=2, sequence=seq + 1)\n\n    def 
test_alter_resv_name(self):\n        \"\"\"\n        This test checks the alter of reservation name.\n        \"\"\"\n        duration = 30\n        offset = 20\n\n        rid1 = self.submit_and_confirm_reservation(\n            offset, duration)\n        attr1 = {ATTR_N: \"Adv_Resv\"}\n        self.server.alterresv(rid1[0], attr1)\n        attr1 = {'Reserve_Name': \"Adv_Resv\"}\n        self.server.expect(RESV, attr1, id=rid1[0])\n        rid2 = self.submit_and_confirm_reservation(\n            offset, duration, standing=True)\n        attr2 = {ATTR_N: \"Std_Resv\"}\n        self.server.alterresv(rid2[0], attr2)\n        attr2 = {'Reserve_Name': \"Std_Resv\"}\n        self.server.expect(RESV, attr2, id=rid2[0])\n\n    def test_alter_user_permission(self):\n        \"\"\"\n        This test checks the user permissions for pbs_ralter.\n        \"\"\"\n        duration = 30\n        offset = 20\n        shift = 10\n\n        rid1, start1, end1 = self.submit_and_confirm_reservation(\n            offset, duration)\n        rid2, start2, end2 = self.submit_and_confirm_reservation(\n            offset, duration, standing=True)\n        new_start1 = self.bu.convert_seconds_to_datetime(start1 + shift)\n        new_start2 = self.bu.convert_seconds_to_datetime(start2 + shift)\n        new_end1 = self.bu.convert_seconds_to_datetime(end1 + shift)\n        new_end2 = self.bu.convert_seconds_to_datetime(end2 + shift)\n        # Use assertRaises so the test fails if the alter is wrongly allowed.\n        with self.assertRaises(PbsResvAlterError) as e:\n            attr = {'reserve_start': new_start1, 'reserve_end': new_end1}\n            self.server.alterresv(rid1, attr, runas=TEST_USER1)\n        self.assertIn(\"Unauthorized Request\", e.exception.msg[0])\n        with self.assertRaises(PbsResvAlterError) as e:\n            attr = {'reserve_start': new_start2, 'reserve_end': new_end2}\n            self.server.alterresv(rid2, attr, runas=TEST_USER1)\n        self.assertIn(\"Unauthorized Request\", e.exception.msg[0])\n\n    def test_auth_user(self):\n        \"\"\"\n 
       This test checks changing Authorized_Users\n        \"\"\"\n        duration = 30\n        offset = 1000\n        rid = self.submit_and_confirm_reservation(offset, duration)[0]\n\n        jid = self.submit_job_to_resv(rid, user=TEST_USER)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n\n        with self.assertRaises(PbsSubmitError):\n            self.submit_job_to_resv(rid, user=TEST_USER1)\n\n        attr = {ATTR_auth_u: str(TEST_USER1)}\n        self.server.alterresv(rid, attr)\n\n        with self.assertRaises(PbsSubmitError):\n            self.submit_job_to_resv(rid, user=TEST_USER)\n\n        jid = self.submit_job_to_resv(rid, user=TEST_USER1)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n\n        attr = {ATTR_auth_u: str(TEST_USER) + ',' + str(TEST_USER1)}\n        self.server.alterresv(rid, attr)\n\n        jid = self.submit_job_to_resv(rid, user=TEST_USER)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n\n        jid = self.submit_job_to_resv(rid, user=TEST_USER1)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n\n        attr = {ATTR_auth_u: str(TEST_USER) + ',-' + str(TEST_USER1)}\n        self.server.alterresv(rid, attr)\n\n        jid = self.submit_job_to_resv(rid, user=TEST_USER)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n\n        with self.assertRaises(PbsSubmitError):\n            self.submit_job_to_resv(rid, user=TEST_USER1)\n\n    @skipOnShasta\n    def test_auth_group(self):\n        \"\"\"\n        This test checks changing Authorized_Groups\n        Skipped on shasta due to groups not being setup on the server\n        \"\"\"\n        duration = 30\n        offset = 1000\n        rid = self.submit_and_confirm_reservation(offset, duration)[0]\n\n        jid = self.submit_job_to_resv(rid, user=TEST_USER)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n\n        with self.assertRaises(PbsSubmitError):\n            self.submit_job_to_resv(rid, 
user=TEST_USER4)\n\n        attr = {ATTR_auth_g: str(TSTGRP0) + ',' + str(TSTGRP1)}\n        self.server.alterresv(rid, attr)\n\n        jid = self.submit_job_to_resv(rid, user=TEST_USER)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n\n        with self.assertRaises(PbsSubmitError):\n            self.submit_job_to_resv(rid, user=TEST_USER4)\n\n        attr = {ATTR_auth_u: str(TEST_USER) + ',' + str(TEST_USER4),\n                ATTR_auth_g: str(TSTGRP0) + ',' + str(TSTGRP1)}\n        self.server.alterresv(rid, attr)\n\n        jid = self.submit_job_to_resv(rid, user=TEST_USER)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n\n        jid = self.submit_job_to_resv(rid, user=TEST_USER4)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n\n    @skipOnShasta\n    def test_auth_group_restart(self):\n        \"\"\"\n        This test checks changing Authorized_Groups survives a server restart\n        Skipped on shasta due to groups not being setup on the server\n        \"\"\"\n        self.skipTest('skipped due to existing bug unsetting attributes')\n        duration = 30\n        offset = 1000\n        rid = self.submit_and_confirm_reservation(offset, duration)[0]\n        qid = rid.split('.')[0]\n\n        attr = {ATTR_auth_g: str(TSTGRP0) + ',' + str(TSTGRP1)}\n        self.server.alterresv(rid, attr)\n        attr2 = {\n            ATTR_aclgroup: attr[ATTR_auth_g],\n            ATTR_aclgren: 'True'\n        }\n\n        self.server.expect(RESV, attr, id=rid, max_attempts=5)\n        self.server.expect(QUEUE, attr2, id=qid, max_attempts=5)\n\n        self.server.restart()\n\n        self.server.expect(RESV, attr, id=rid, max_attempts=5)\n        self.server.expect(QUEUE, attr2, id=qid, max_attempts=5)\n\n        attr = {ATTR_auth_g: ''}\n        self.server.alterresv(rid, attr)\n\n        attr = [ATTR_auth_g]\n        attr2 = [ATTR_aclgroup, ATTR_aclgren]\n        self.server.expect(RESV, attr, op=UNSET, id=rid, 
max_attempts=5)\n        self.server.expect(QUEUE, attr2, op=UNSET, id=qid, max_attempts=5)\n\n        self.server.restart()\n\n        self.server.expect(RESV, attr, op=UNSET, id=rid, max_attempts=5)\n        self.server.expect(QUEUE, attr2, op=UNSET, id=qid, max_attempts=5)\n\n    def test_ralter_psets(self):\n        \"\"\"\n        Test that PBS will not place a job across placement sets after\n        successfully being altered\n        \"\"\"\n        duration = 120\n        offset1 = 30\n        offset2 = 180\n\n        a = {'type': 'string', 'flag': 'h'}\n        self.server.manager(MGR_CMD_CREATE, RSC, a, id='color')\n\n        a = {'resources_available.ncpus': 4, 'resources_available.mem': '4gb'}\n        self.mom.create_vnodes(a, 3)\n\n        a = {'resources_available.color': 'red'}\n        vn = self.mom.shortname\n        self.server.manager(MGR_CMD_SET, NODE, a, id=vn + '[0]')\n        self.server.manager(MGR_CMD_SET, NODE, a, id=vn + '[1]')\n        a = {'resources_available.color': 'green'}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=vn + '[2]')\n\n        a = {'node_group_key': 'color', 'node_group_enable': True}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        rid1, start1, end1 = self.submit_and_confirm_reservation(\n            offset1, duration, select=\"2:ncpus=4\")\n\n        self.server.status(RESV)\n        nodes = self.server.reservations[rid1].get_vnodes()\n\n        rid2, start2, end2 = self.submit_and_confirm_reservation(\n            offset2, duration, select=\"1:ncpus=4:vnode=\" + nodes[0])\n\n        c = 'resources_available.color'\n        color1 = self.server.status(NODE, c, id=nodes[0])[0][c]\n        color2 = self.server.status(NODE, c, id=nodes[1])[0][c]\n        self.assertEqual(color1, color2)\n\n        self.alter_a_reservation(rid1, start1, end1, shift=300,\n                                 alter_e=True, whichMessage=3)\n\n        t = start1 - int(time.time())\n        self.logger.info('Sleeping 
until reservation starts')\n        self.server.expect(RESV,\n                           {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')},\n                           id=rid1, offset=t)\n\n        # sequence=2 because we'll get one message from the last alter attempt\n        # and one message from this alter attempt\n        self.alter_a_reservation(rid1, start1, end1, shift=300,\n                                 alter_e=True, sequence=2, whichMessage=3)\n\n    def test_failed_ralter(self):\n        \"\"\"\n        Test that a failed ralter does not allow jobs to interfere with\n        that reservation.\n        \"\"\"\n        duration = 120\n        offset1 = 30\n        offset2 = 180\n\n        rid1, start1, end1 = self.submit_and_confirm_reservation(\n            offset1, duration, select=\"2:ncpus=4\")\n\n        j = Job(attrs={'Resource_List.walltime': 100})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q',\n                                 'comment': (MATCH_RE, 'Not Running')}, id=jid)\n\n        rid2, start2, end2 = self.submit_and_confirm_reservation(\n            offset2, duration, select=\"2:ncpus=4\")\n\n        self.alter_a_reservation(rid1, start1, end1, shift=300,\n                                 alter_e=True, whichMessage=3)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n\n        self.logger.info('Sleeping until reservation starts')\n        t = start1 - int(time.time())\n        self.server.expect(RESV, {'reserve_state':\n                                  (MATCH_RE, 'RESV_RUNNING|5')},\n                           offset=t, id=rid1)\n\n        self.alter_a_reservation(rid1, start1, end1, shift=300,\n                                 alter_e=True, sequence=2, whichMessage=3)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n\n    def test_adv_resv_duration_before_start(self):\n        \"\"\"\n        Test duration of reservation can be changed. 
In this case end\n        time changes, and start time remains the same.\n        \"\"\"\n\n        offset = 20\n        duration = 20\n        new_duration = 30\n        shift = 10\n        rid, start, end = self.submit_and_confirm_reservation(offset, duration)\n\n        self.alter_a_reservation(rid, start, end, a_duration=new_duration,\n                                 check_log=False)\n\n        t_duration, t_start, t_end = self.get_resv_time_info(rid)\n\n        self.assertEqual(t_end, end + shift)\n        self.assertEqual(t_start, start)\n        self.assertEqual(t_duration, new_duration)\n\n        # Submit a job to the reservation and change its duration.\n        self.submit_job_to_resv(rid)\n\n        temp_end = t_end\n        temp_start = t_start\n\n        new_duration2 = new_duration + 10\n        self.alter_a_reservation(rid, temp_start, temp_end, sequence=2,\n                                 a_duration=new_duration2, check_log=False)\n\n        t_duration, t_start, t_end = self.get_resv_time_info(rid)\n        self.assertEqual(t_end, temp_end + shift)\n        self.assertEqual(t_start, temp_start)\n        self.assertEqual(t_duration, new_duration2)\n\n        sleepdur = (temp_end + shift / 2) - time.time()\n        self.logger.info('Sleeping until reservation would have ended')\n        self.server.expect(RESV,\n                           {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')},\n                           id=rid, max_attempts=5, offset=sleepdur)\n\n    def test_adv_resv_dur_and_endtime_before_start(self):\n        \"\"\"\n        Test that duration and end time of reservation can be changed together.\n        In this case the start time of reservation may also change.\n        \"\"\"\n\n        offset = 20\n        duration = 20\n        new_duration = 40\n        shift = 10\n        rid, start, end = self.submit_and_confirm_reservation(offset, duration)\n\n        self.alter_a_reservation(rid, start, end, shift=shift,\n                    
             alter_e=True, a_duration=new_duration,\n                                 check_log=False)\n\n        t_duration, t_start, t_end = self.get_resv_time_info(rid)\n\n        self.assertEqual(t_end, end + shift)\n        self.assertEqual(t_start, t_end - t_duration)\n        self.assertEqual(t_duration, new_duration)\n\n        # Submit a job to the reservation and change its start time.\n        self.submit_job_to_resv(rid)\n        temp_end = t_end\n        temp_start = t_start\n\n        new_duration2 = new_duration + 10\n        self.alter_a_reservation(rid, temp_start, temp_end, shift=shift,\n                                 alter_e=True, sequence=2,\n                                 a_duration=new_duration2, check_log=False)\n        t_duration, t_start, t_end = self.get_resv_time_info(rid)\n        self.assertEqual(t_end, temp_end + shift)\n        self.assertEqual(t_start, t_end - t_duration)\n        self.assertEqual(t_duration, new_duration2)\n\n    def test_adv_resv_dur_and_starttime_before_start(self):\n        \"\"\"\n        Test duration and starttime of reservation can be changed together.\n        In this case the endtime will change accordingly\n        \"\"\"\n\n        offset = 20\n        duration = 20\n        new_duration = 30\n        shift = 10\n        rid, start, end = self.submit_and_confirm_reservation(offset, duration)\n\n        self.alter_a_reservation(rid, start, end, shift=shift,\n                                 alter_s=True, a_duration=new_duration,\n                                 check_log=False)\n\n        t_duration, t_start, t_end = self.get_resv_time_info(rid)\n\n        self.assertEqual(t_end, t_start + t_duration)\n        self.assertEqual(t_start, start + shift)\n        self.assertEqual(t_duration, new_duration)\n\n        # Submit a job to the reservation and change its start time.\n        self.submit_job_to_resv(rid)\n        temp_end = t_end\n        temp_start = t_start\n\n        new_duration2 = 
new_duration + 10\n        self.alter_a_reservation(rid, temp_start, temp_end, shift=shift,\n                                 alter_s=True, sequence=2,\n                                 a_duration=new_duration2, check_log=False)\n        t_duration, t_start, t_end = self.get_resv_time_info(rid)\n        self.assertEqual(t_end, t_start + t_duration)\n        self.assertEqual(t_start, temp_start + shift)\n        self.assertEqual(t_duration, new_duration2)\n\n    def test_adv_res_dur_after_start(self):\n        \"\"\"\n        Test that the duration can be changed after the reservation starts.\n        Test that if the new duration moves the end time of the reservation\n        to an already passed time, the reservation is deleted.\n        \"\"\"\n        offset = 10\n        duration = 20\n        new_duration = 30\n        shift = 10\n        rid, start, end = self.submit_and_confirm_reservation(offset, duration)\n\n        self.check_resv_running(rid, offset)\n\n        self.alter_a_reservation(rid, start, end, a_duration=new_duration,\n                                 confirm=False, check_log=False)\n\n        t_duration, t_start, t_end = self.get_resv_time_info(rid)\n        self.assertEqual(t_duration, new_duration)\n\n        time.sleep(5)\n        new_duration = int(time.time()) - int(t_start) - 1\n        attr = {'reserve_duration': new_duration}\n        self.server.alterresv(rid, attr)\n        msg = \"Resv;\" + rid + \";Reservation alter confirmed\"\n        self.server.log_match(msg, id=rid)\n        rid = rid.split('.')[0]\n        self.server.log_match(rid + \";deleted at request of pbs_server\",\n                              id=rid, interval=2)\n\n    def test_adv_resv_endtime_starttime_dur_together(self):\n        \"\"\"\n        Test that all three of end time, start time and duration can be\n        changed together. If the values of start, end and duration are not\n        resolved properly, this test should fail.\n        \"\"\"\n        offset = 20\n        duration = 20\n 
       new_duration = 25\n        wrong_duration = 45\n        shift_start = 10\n        shift_end = 15\n        rid, start, end = self.submit_and_confirm_reservation(offset, duration)\n        new_start = self.bu.convert_seconds_to_datetime(start + shift_start)\n        new_end = self.bu.convert_seconds_to_datetime(end + shift_end)\n\n        with self.assertRaises(PbsResvAlterError) as e:\n            attr = {'reserve_start': new_start, 'reserve_end': new_end,\n                    'reserve_duration': wrong_duration}\n            self.server.alterresv(rid, attr)\n        self.assertIn('pbs_ralter: Bad time specification(s)',\n                      e.exception.msg[0])\n\n        t_duration, t_start, t_end = self.get_resv_time_info(rid)\n        self.assertEqual(int(t_start), start)\n        self.assertEqual(int(t_duration), duration)\n        self.assertEqual(int(t_end), end)\n\n        attr = {'reserve_start': new_start, 'reserve_end': new_end,\n                'reserve_duration': new_duration}\n        self.server.alterresv(rid, attr)\n\n        t_duration, t_start, t_end = self.get_resv_time_info(rid)\n        self.assertEqual(int(t_start), start + shift_start)\n        self.assertEqual(int(t_duration), new_duration)\n        self.assertEqual(int(t_end), end + shift_end)\n\n    def test_standing_resv_duration(self):\n        \"\"\"\n        This test case covers the below scenarios for a standing reservation.\n\n        1. Change duration of a standing reservation occurrence\n        2. 
After the first occurrence of the reservation finishes, verify that\n           the start and end time of the second occurrence have not changed.\n\n        All the above operations are expected to be successful.\n        \"\"\"\n        offset = 20\n        duration = 30\n        new_duration = 90\n        shift = 15\n        rid, start, end = self.submit_and_confirm_reservation(offset,\n                                                              duration,\n                                                              standing=True)\n\n        self.alter_a_reservation(rid, start, end, shift=shift,\n                                 a_duration=new_duration, check_log=False)\n\n        t_duration, t_start, t_end = self.get_resv_time_info(rid)\n        self.assertEqual(t_duration, new_duration)\n\n        # Wait for the reservation to start running.\n        self.check_resv_running(rid, offset - shift)\n\n        # Wait for the reservation occurrence to finish.\n        new_duration = t_end - t_start\n        self.check_occr_finish(rid, new_duration)\n\n        # Check that duration of the second occurrence is not altered.\n        self.check_standing_resv_second_occurrence(rid, start, end)\n\n    def test_standing_resv_duration_and_endtime(self):\n        \"\"\"\n        This test case covers the below scenarios for a standing reservation.\n\n        1. Change duration and endtime of standing reservation\n        2. 
After the first occurrence of the reservation finishes, verify that\n           the start and end time of the second occurrence have not changed.\n\n        All the above operations are expected to be successful.\n        \"\"\"\n        offset = 20\n        duration = 20\n        new_duration = 30\n        shift = 15\n        rid, start, end = self.submit_and_confirm_reservation(offset,\n                                                              duration,\n                                                              standing=True)\n\n        self.alter_a_reservation(rid, start, end, shift=shift, alter_e=True,\n                                 a_duration=new_duration, check_log=False)\n\n        t_duration, t_start, t_end = self.get_resv_time_info(rid)\n        self.assertEqual(t_duration, new_duration)\n        self.assertEqual(t_end, end + shift)\n        self.assertEqual(t_start, t_end - t_duration)\n\n        # Wait for the reservation to start running.\n        self.check_resv_running(rid, offset - shift)\n\n        # Wait for the reservation occurrence to finish.\n        new_duration = t_end - t_start\n        self.check_occr_finish(rid, new_duration)\n\n        # Check that duration of the second occurrence is not altered.\n        self.check_standing_resv_second_occurrence(rid, start, end)\n\n    def test_standing_resv_duration_and_starttime(self):\n        \"\"\"\n        This test case covers the below scenarios for a standing reservation.\n\n        1. Change duration and starttime of standing reservation\n        2. 
After the first occurrence of the reservation finishes, verify that\n           the start and end time of the second occurrence have not changed.\n\n        All the above operations are expected to be successful.\n        \"\"\"\n        offset = 20\n        duration = 20\n        new_duration = 30\n        shift = 15\n        rid, start, end = self.submit_and_confirm_reservation(offset,\n                                                              duration,\n                                                              standing=True)\n\n        self.alter_a_reservation(rid, start, end, shift=shift, alter_s=True,\n                                 a_duration=new_duration, check_log=False)\n\n        t_duration, t_start, t_end = self.get_resv_time_info(rid)\n        self.assertEqual(t_duration, new_duration)\n        self.assertEqual(t_end, t_start + t_duration)\n        self.assertEqual(t_start, start + shift)\n\n        # Wait for the reservation to start running.\n        self.check_resv_running(rid, offset - shift)\n\n        # Wait for the reservation occurrence to finish.\n        new_duration = t_end - t_start\n        self.check_occr_finish(rid, new_duration)\n\n        # Check that duration of the second occurrence is not altered.\n        self.check_standing_resv_second_occurrence(rid, start, end)\n\n    def test_conflict_standing_resv_occurrence_duration(self):\n        \"\"\"\n        This test confirms that if the duration requested while altering an\n        occurrence of a standing reservation conflicts with a later\n        occurrence of the same standing reservation, the alter request\n        is denied.\n        \"\"\"\n        duration = 60\n        new_duration = 70\n        shift = 10\n        offset = 10\n\n        rid, start, end = self.submit_and_confirm_reservation(\n            offset, duration, select=\"1:ncpus=4\", standing=True,\n            rrule=\"FREQ=MINUTELY;COUNT=2\")\n\n        self.alter_a_reservation(rid, start, end, 
a_duration=new_duration,\n                                 check_log=False, whichMessage=0)\n\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, attrs, id=rid)\n\n        t_duration, t_start, t_end = self.get_resv_time_info(rid)\n        self.assertEqual(int(t_start), start)\n        self.assertEqual(int(t_duration), duration)\n        self.assertEqual(int(t_end), end)\n\n    def test_alter_empty_fail(self):\n        \"\"\"\n        This test confirms that if a requested ralter fails due to the\n        reservation having running jobs, the attributes are kept the same\n        \"\"\"\n        offset = 20\n        dur = 20\n        shift = 120\n\n        rid, start, end = self.submit_and_confirm_reservation(offset, dur)\n\n        jid = self.submit_job_to_resv(rid, user=TEST_USER)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid, offset=offset)\n\n        now = int(time.time())\n        new_start = self.bu.convert_seconds_to_datetime(now + shift)\n        new_end = self.bu.convert_seconds_to_datetime(now + shift + dur)\n\n        # This bug only shows if end time is changed before start time\n        ralter_cmd = [\n            os.path.join(\n                self.server.pbs_conf['PBS_EXEC'], 'bin', 'pbs_ralter'),\n            '-E', str(new_end),\n            '-R', str(new_start),\n            rid\n        ]\n        ret = self.du.run_cmd(self.server.hostname, ralter_cmd)\n        self.assertIn('pbs_ralter: Reservation not empty', ret['err'][0])\n\n        # Test that the reservation state is Running and not RESV_NONE\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, attrs, id=rid)\n\n        t_duration, t_start, t_end = self.get_resv_time_info(rid)\n        self.assertEqual(int(t_start), start)\n        self.assertEqual(int(t_duration), dur)\n        self.assertEqual(int(t_end), end)\n\n    def test_duration_in_hhmmss_format(self):\n        \"\"\"\n        
Test duration input can be in hh:mm:ss format\n        \"\"\"\n        offset = 20\n        duration = 20\n        new_duration = \"00:00:30\"\n        new_duration_in_sec = 30\n        rid, start, end = self.submit_and_confirm_reservation(offset, duration)\n\n        new_end = end + 10\n\n        attr = {'reserve_duration': new_duration}\n        self.server.alterresv(rid, attr)\n\n        t_duration, t_start, t_end = self.get_resv_time_info(rid)\n        self.assertEqual(int(t_start), start)\n        self.assertEqual(int(t_duration), new_duration_in_sec)\n        self.assertEqual(int(t_end), new_end)\n\n    def test_adv_resv_dur_and_endtime_with_running_jobs(self):\n        \"\"\"\n        Test that duration and end time of reservation cannot be changed\n        together if there are running jobs inside it. This will fail\n        because start time cannot be changed when there are running\n        jobs in a reservation.\n        \"\"\"\n\n        offset = 10\n        duration = 20\n        new_duration = 30\n        shift = 10\n        rid, start, end = self.submit_and_confirm_reservation(offset, duration)\n\n        self.check_resv_running(rid, offset)\n\n        jid = self.submit_job_to_resv(rid, user=TEST_USER)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        new_end = self.bu.convert_seconds_to_datetime(end + 30)\n        with self.assertRaises(PbsResvAlterError) as e:\n            attr = {'reserve_end': new_end,\n                    'reserve_duration': new_duration}\n            self.server.alterresv(rid, attr)\n        self.assertIn('pbs_ralter: Reservation not empty',\n                      e.exception.msg[0])\n\n        # Test that the reservation state is Running and not RESV_NONE\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, attrs, id=rid)\n\n        t_duration, t_start, t_end = self.get_resv_time_info(rid)\n\n        self.assertEqual(t_end, end)\n        
self.assertEqual(t_start, start)\n        self.assertEqual(t_duration, duration)\n\n    def test_standing_resv_dur_and_endtime_with_running_jobs(self):\n        \"\"\"\n        Change duration and endtime of standing reservation with\n        running jobs in it. Verify that the alter fails and\n        starttime remains the same\n        \"\"\"\n        offset = 10\n        duration = 20\n        new_duration = 30\n        shift = 15\n        rid, start, end = self.submit_and_confirm_reservation(offset,\n                                                              duration,\n                                                              standing=True)\n\n        jid = self.submit_job_to_resv(rid, user=TEST_USER)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid, offset=offset)\n\n        new_end = self.bu.convert_seconds_to_datetime(end + 30)\n        with self.assertRaises(PbsResvAlterError) as e:\n            attr = {'reserve_end': new_end,\n                    'reserve_duration': new_duration}\n            self.server.alterresv(rid, attr)\n        self.assertIn('pbs_ralter: Reservation not empty',\n                      e.exception.msg[0])\n\n        # Test that the reservation state is Running and not RESV_NONE\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, attrs, id=rid)\n\n        t_duration, t_start, t_end = self.get_resv_time_info(rid)\n        self.assertEqual(t_end, end)\n        self.assertEqual(t_start, start)\n        self.assertEqual(t_duration, duration)\n\n    def test_duration_walltime(self):\n        \"\"\"\n        Check when ralter changes duration, it also changes walltime\n        \"\"\"\n\n        rid, start, end = self.submit_and_confirm_reservation(3600, 600)\n\n        self.server.expect(RESV, {'Resource_List.walltime': 600}, id=rid)\n\n        # Alter start + end\n        self.alter_a_reservation(rid, start, end, shift=30, alter_s=True,\n                                 
alter_e=True)\n        self.server.expect(RESV, {'Resource_List.walltime': 600}, id=rid)\n\n        # Alter start + duration\n        self.alter_a_reservation(rid, start, end, shift=30, alter_s=True,\n                                 a_duration=300, sequence=2)\n        self.server.expect(RESV, {'Resource_List.walltime': 300}, id=rid)\n\n        # Alter end + duration\n        self.alter_a_reservation(rid, start, end, shift=30, alter_e=True,\n                                 a_duration=450, sequence=3)\n        self.server.expect(RESV, {'Resource_List.walltime': 450}, id=rid)\n\n        # Alter start + end + duration\n        self.alter_a_reservation(rid, start, end, shift=30, alter_e=True,\n                                 alter_s=True, a_duration=600, sequence=4)\n        self.server.expect(RESV, {'Resource_List.walltime': 600}, id=rid)\n\n    def test_alter_select_basic(self):\n        \"\"\"\n        Test basic use of pbs_ralter -l select to shrink a reservation\n        We start with a 2 +'d reservation, and then we drop out the 1st and 3rd\n        chunk, and then reduce further\n        \"\"\"\n        select = \"2:ncpus=1:mem=1gb+4:ncpus=1:mem=2gb+2:ncpus=1:mem=3gb\"\n        aselect1 = \"4:ncpus=1:mem=2gb\"\n        aselect2 = \"2:ncpus=1:mem=2gb\"\n\n        rid, start, end, rnodes = self.alter_select_initial(True, select)\n\n        self.alter_select(\n            rid, start, end, True, aselect1, 4, [], 1)\n\n        self.alter_select(rid, start, end, True, aselect2, 2, [], 2)\n\n    def test_alter_select_basic_running(self):\n        \"\"\"\n        Test basic use of pbs_ralter -l select\n        to shrink a running reservation\n        We start with a 2 +'d reservation, and then we drop out the 1st and 3rd\n        chunk, and then reduce further\n        \"\"\"\n        select = \"2:ncpus=1:mem=1gb+4:ncpus=1:mem=2gb+2:ncpus=1:mem=3gb\"\n        aselect1 = \"4:ncpus=1:mem=2gb\"\n        aselect2 = \"2:ncpus=1:mem=2gb\"\n\n        rid, start, end, rnodes 
= self.alter_select_initial(False, select)\n\n        rnodes2 = self.alter_select(rid, start, end, False, aselect1, 4, [], 1)\n\n        self.alter_select(rid, start, end, False, aselect2, 2, [], 2)\n\n    def test_alter_select_complex(self):\n        \"\"\"\n        Test more complex use of pbs_ralter -l select\n        to shrink a reservation\n        We start with a 2 +'d spec, and shrink each chunk by one, and\n        then we shrink further and drop out the middle chunk\n        \"\"\"\n        select = \"2:ncpus=1:mem=1gb+4:ncpus=1:mem=2gb+2:ncpus=1:mem=3gb\"\n        aselect1 = \"1:ncpus=1:mem=1gb+2:ncpus=1:mem=2gb+1:ncpus=1:mem=3gb\"\n        aselect2 = \"1:ncpus=1:mem=2gb\"\n\n        rid, start, end, rnodes = self.alter_select_initial(True, select)\n\n        self.alter_select(rid, start, end, True, aselect1, 4, [], 1)\n\n        self.alter_select(rid, start, end, True, aselect2, 1, [], 2)\n\n    def test_alter_select_complex_running(self):\n        \"\"\"\n        Test more complex use of pbs_ralter -l select to\n        shrink a running reservation\n        We start with a 2 +'d spec, and shrink each chunk by one, and\n        then we shrink further and drop out the middle chunk\n        \"\"\"\n        select = \"2:ncpus=1:mem=1gb+4:ncpus=1:mem=2gb+2:ncpus=1:mem=3gb\"\n        aselect1 = \"1:ncpus=1:mem=1gb+2:ncpus=1:mem=2gb+1:ncpus=1:mem=3gb\"\n        aselect2 = \"1:ncpus=1:mem=2gb\"\n\n        rid, start, end, rnodes = self.alter_select_initial(False, select)\n\n        rnodes2 = self.alter_select(rid, start, end, False,\n                                    aselect1, 4, [], 1)\n\n        self.alter_select(rid, start, end, False, aselect2, 1, [], 2)\n\n    def test_alter_select_complex2(self):\n        \"\"\"\n        Test more complex use of pbs_ralter -l select\n        to shrink a reservation\n        We start with a 2 +'d chunk and then shrink each chunk by 1\n        We then shrink further and drop out the middle chunk\n        Lastly we drop 
out the first chunk\n        \"\"\"\n        select = \"3:ncpus=1:mem=1gb+2:ncpus=1:mem=2gb+3:ncpus=1:mem=3gb\"\n        aselect1 = \"2:ncpus=1:mem=1gb+1:ncpus=1:mem=2gb+2:ncpus=1:mem=3gb\"\n        aselect2 = \"1:ncpus=1:mem=1gb+1:ncpus=1:mem=3gb\"\n        aselect3 = \"1:ncpus=1:mem=3gb\"\n\n        rid, start, end, rnodes = self.alter_select_initial(True, select)\n\n        rnodes2 = self.alter_select(rid, start, end,\n                                    True, aselect1, 5, [], 1)\n\n        self.alter_select(rid, start, end, True, aselect2, 2, [], 2)\n\n        self.alter_select(rid, start, end, True, aselect3, 1, [], 3)\n\n    def test_alter_select_complex_running2(self):\n        \"\"\"\n        Test more complex use of pbs_ralter -l select to\n        shrink a running reservation\n        We start with a 2 +'d chunk and then shrink each chunk by 1\n        We then shrink further and drop out the middle chunk\n        Lastly we drop out the first chunk\n        \"\"\"\n        select = \"3:ncpus=1:mem=1gb+2:ncpus=1:mem=2gb+3:ncpus=1:mem=3gb\"\n        aselect1 = \"2:ncpus=1:mem=1gb+1:ncpus=1:mem=2gb+2:ncpus=1:mem=3gb\"\n        aselect2 = \"1:ncpus=1:mem=1gb+1:ncpus=1:mem=3gb\"\n        aselect3 = \"1:ncpus=1:mem=3gb\"\n\n        rid, start, end, rnodes = self.alter_select_initial(False, select)\n\n        rnodes2 = self.alter_select(rid, start, end,\n                                    False, aselect1, 5, [], 1)\n\n        rnodes3 = self.alter_select(rid, start, end,\n                                    False, aselect2, 2, [], 2)\n\n        self.alter_select(rid, start, end, False, aselect3, 1, [], 3)\n\n    def test_alter_select_complex_running3(self):\n        \"\"\"\n        A complex set of ralters with running jobs\n        \"\"\"\n        select = \"3:ncpus=1:mem=1gb+2:ncpus=1:mem=2gb+3:ncpus=1:mem=3gb\"\n        aselect1 = \"2:ncpus=1:mem=1gb+1:ncpus=1:mem=2gb+2:ncpus=1:mem=3gb\"\n        aselect2 = \"1:ncpus=1:mem=1gb+1:ncpus=1:mem=3gb\"\n        
aselect3 = \"1:ncpus=1:mem=3gb\"\n\n        rid, start, end, rnodes = self.alter_select_initial(False, select)\n        rq = rid.split('.')[0]\n\n        a = {'queue': rq, 'Resource_List.select': '1:ncpus=1:mem=3gb'}\n        J1 = Job(attrs=a)\n        jid1 = self.server.submit(J1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n\n        a['Resource_List.select'] = '1:ncpus=1:mem=1gb'\n        J2 = Job(attrs=a)\n        jid2 = self.server.submit(J2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        st = self.server.status(JOB)\n\n        nodes1 = J1.get_vnodes(st[0]['exec_vnode'])\n        nodes2 = J2.get_vnodes(st[1]['exec_vnode'])\n        needed_nodes = [nodes1[0], nodes2[0]]\n\n        self.alter_select(rid, start, end, False, aselect1, 5, needed_nodes, 1)\n\n        self.alter_select(rid, start, end, False, aselect2, 2, needed_nodes, 2)\n\n        # Alter will fail because we're trying to release Jid2's node\n\n        self.alter_select(rid, start, end, False, aselect3, 2,\n                          needed_nodes, 1, False)\n\n        self.server.delete(jid2, wait=True)\n\n        self.alter_select(rid, start, end, False, aselect3, 1, nodes1, 3)\n\n    def alter_select_initial(self, confirm, select):\n        \"\"\"\n        Submit initial reservation and possibly wait until it starts\n        \"\"\"\n        numnodes = 8\n        offset = 30\n        dur = 3600\n\n        a = {'resources_available.ncpus': 1,\n             'resources_available.mem': '8gb'}\n        self.mom.create_vnodes(a, num=8)\n\n        rid, start, end = self.submit_and_confirm_reservation(offset, dur,\n                                                              select=select)\n        st = self.server.status(RESV)\n        resv_nodes = self.server.reservations[rid].get_vnodes()\n\n        self.assertEquals(len(st[0]['resv_nodes'].split('+')), numnodes)\n        a = {'Resource_List.ncpus': numnodes,\n             'Resource_List.nodect': numnodes}\n    
    self.server.expect(RESV, a, id=rid)\n\n        if not confirm:\n            a = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n            off = start - int(time.time())\n            self.logger.info('Waiting until reservation runs')\n            self.server.expect(RESV, a, id=rid, offset=off)\n\n        return rid, start, end, resv_nodes\n\n    def alter_select(self, rid, start, end,\n                     confirm, selectN, numnodes, nodes, seq, success=True):\n        \"\"\"\n        Alter a reservation and make sure it is on the correct nodes\n        \"\"\"\n\n        w = 1\n        if not success:\n            w = 3\n        self.alter_a_reservation(rid, start, end, select=selectN,\n                                 confirm=confirm, sequence=seq, whichMessage=w)\n\n        st = self.server.status(RESV)\n        self.assertEquals(len(st[0]['resv_nodes'].split('+')), numnodes)\n        a = {'Resource_List.ncpus': numnodes,\n             'Resource_List.nodect': numnodes}\n        self.server.expect(RESV, a, id=rid)\n        resv_nodes = self.server.reservations[rid].get_vnodes()\n        # nodes is a list of nodes we must keep\n        for n in nodes:\n            self.assertIn(n, resv_nodes, \"Required node not in resv_nodes\")\n        return resv_nodes\n\n    def test_alter_select_with_times(self):\n        \"\"\"\n        Modify the select with the start and end times all at once\n        \"\"\"\n        offset = 3600\n        duration = 3600\n        select = '6:ncpus=1'\n        new_select = '4:ncpus=1'\n        shift = 300\n\n        rid, start, end = self.submit_and_confirm_reservation(offset, duration,\n                                                              select=select)\n        st = self.server.status(RESV)\n        self.assertEquals(len(st[0]['resv_nodes'].split('+')), 6)\n\n        self.alter_a_reservation(rid, start, end, alter_s=True, alter_e=True,\n                                 shift=shift, select=new_select, interactive=9)\n\n      
  st = self.server.status(RESV)\n        self.assertEquals(len(st[0]['resv_nodes'].split('+')), 4)\n        t = int(time.mktime(time.strptime(st[0]['reserve_start'], '%c')))\n        self.assertEquals(t, start + shift)\n\n        t = int(time.mktime(time.strptime(st[0]['reserve_end'], '%c')))\n        self.assertEquals(t, end + shift)\n\n    def test_alter_select_with_running_jobs(self):\n        \"\"\"\n        Test that when a reservation is running and has running jobs,\n        that an ralter -lselect will release nodes without running jobs\n        \"\"\"\n        offset = 20\n        duration = 600\n        select = '3:ncpus=4'\n        select2 = '2:ncpus=4'\n        select3 = '1:ncpus=4'\n\n        a = {'resources_available.ncpus': 4, 'resources_available.mem': '1gb'}\n        self.mom.create_vnodes(a, num=3,\n                               usenatvnode=True)\n\n        rid, start, end = self.submit_and_confirm_reservation(offset, duration,\n                                                              select=select)\n        resv_queue = rid.split('.')[0]\n        a = {'queue': resv_queue}\n        j1 = Job(attrs=a)\n        jid = self.server.submit(j1)\n\n        self.logger.info('Waiting for reservation to start')\n        a = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        off = int(start - time.time())\n        self.server.expect(RESV, a, id=rid, offset=off)\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.status(JOB)\n        job_node = j1.get_vnodes()[0]\n\n        self.alter_a_reservation(rid, start, end,\n                                 select=select2, confirm=False)\n        self.server.status(RESV)\n        resv_nodes = self.server.reservations[rid].get_vnodes()\n        errmsg1 = 'Reservation does not have the right number of nodes'\n        self.assertEquals(len(resv_nodes), 2, errmsg1)\n\n        errmsg2 = 'Reservation does not contain job node'\n        self.assertIn(job_node, resv_nodes, 
errmsg2)\n\n        self.alter_a_reservation(rid, start, end,\n                                 select=select3, confirm=False, sequence=2)\n        self.server.status(RESV)\n        resv_nodes = self.server.reservations[rid].get_vnodes()\n\n        self.assertEquals(len(resv_nodes), 1, errmsg1)\n        self.assertIn(job_node, resv_nodes, errmsg2)\n\n    def test_alter_select_running_degraded(self):\n        \"\"\"\n        Test that when a degraded running reservation with a running job is\n        altered, the unavailable nodes are released and the node with the\n        running job is kept\n        \"\"\"\n        offset = 20\n        duration = 3600\n        select = '3:ncpus=4'\n        select2 = '1:ncpus=4'\n\n        a = {'resources_available.ncpus': 4, 'resources_available.mem': '1gb'}\n        self.mom.create_vnodes(a, num=3,\n                               usenatvnode=True)\n\n        rid, start, end = self.submit_and_confirm_reservation(offset, duration,\n                                                              select=select)\n        resv_queue = rid.split('.')[0]\n        self.server.status(RESV)\n        resv_nodes = self.server.reservations[rid].get_vnodes()\n\n        self.assertEquals(len(resv_nodes), 3)\n\n        a = {'queue': resv_queue,\n             'Resource_List.select': '1:vnode=%s:ncpus=1' % resv_nodes[1]}\n        j1 = Job(attrs=a)\n        jid = self.server.submit(j1)\n\n        self.logger.info('Waiting for reservation to start')\n        a = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        off = int(start - time.time())\n        self.server.expect(RESV, a, id=rid, offset=off)\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.status(JOB)\n        job_node = j1.get_vnodes()[0]\n\n        self.server.manager(MGR_CMD_SET, NODE, {'state': 'offline'},\n                            id=resv_nodes[2])\n\n        self.server.expect(RESV, {'reserve_substate': 10}, id=rid)\n\n        
self.alter_a_reservation(rid, start, end,\n                                 select=select2, confirm=False)\n        self.server.status(RESV)\n        resv_nodes = self.server.reservations[rid].get_vnodes()\n\n        errmsg1 = 'Reservation does not have the right number of nodes'\n        self.assertEquals(len(resv_nodes), 1, errmsg1)\n\n        errmsg2 = 'Reservation does not contain job node'\n        self.assertIn(job_node, resv_nodes, errmsg2)\n\n    def test_alter_select_with_times_standing(self):\n        \"\"\"\n        Modify the select with start and end times on a standing reservation\n        \"\"\"\n        offset = 20\n        duration = 20\n        select = '6:ncpus=1'\n        new_select = '4:ncpus=1'\n        shift = 15\n\n        rid, start, end = self.submit_and_confirm_reservation(offset, duration,\n                                                              select=select,\n                                                              standing=True)\n        st = self.server.status(RESV)\n        self.assertEquals(len(st[0]['resv_nodes'].split('+')), 6)\n\n        self.alter_a_reservation(rid, start, end, alter_s=True, alter_e=True,\n                                 shift=shift, select=new_select, interactive=9)\n\n        st = self.server.status(RESV)\n        self.assertEquals(len(st[0]['resv_nodes'].split('+')), 4)\n        t = int(time.mktime(time.strptime(st[0]['reserve_start'], '%c')))\n        self.assertEquals(t, start + shift)\n\n        t = int(time.mktime(time.strptime(st[0]['reserve_end'], '%c')))\n        self.assertEquals(t, end + shift)\n\n        t = start + shift - int(time.time())\n\n        self.logger.info('Waiting until reservation starts')\n        self.server.expect(RESV, {'reserve_state':\n                                  (MATCH_RE, 'RESV_RUNNING|5')}, offset=t)\n\n        self.check_standing_resv_second_occurrence(rid, start, end, select)\n\n    def test_alter_select_larger_fail(self):\n        \"\"\"\n        Test 
proper failures if ralter -lselect with a larger select\n        \"\"\"\n\n        offset = 3600\n        duration = 3600\n        select = '6:ncpus=1'\n        select_more = '7:ncpus=1'\n        select_extra = '6:ncpus=1+1:ncpus=1:mem=1gb'\n        select_different = '6:ncpus=4:mem=1gb'\n\n        rid, start, end = self.submit_and_confirm_reservation(offset, duration,\n                                                              select=select)\n\n        self.alter_a_reservation(rid, start, end, select=select_more,\n                                 whichMessage=0)\n\n        self.alter_a_reservation(rid, start, end, select=select_extra,\n                                 whichMessage=0)\n        self.alter_a_reservation(rid, start, end, select=select_different,\n                                 whichMessage=0)\n\n    def test_standing_multiple_alter(self):\n        \"\"\"\n        Test that a standing reservation's second occurrence reverts to the\n        original start/end/duration/select if it is altered multiple times\n        \"\"\"\n\n        offset = 60\n        shift1 = -20\n        shift2 = -30\n        dur = 30\n        dur2 = 20\n        dur3 = 15\n        select = '6:ncpus=1'\n        select2 = '4:ncpus=1'\n        select3 = '2:ncpus=1'\n\n        rid, start, end = \\\n            self.submit_and_confirm_reservation(offset, dur, select=select,\n                                                standing=True,\n                                                rrule=\"FREQ=MINUTELY;COUNT=2\")\n\n        self.alter_a_reservation(rid, start, end, alter_s=True,\n                                 shift=shift1, a_duration=dur2, select=select2)\n        self.alter_a_reservation(rid, start, end, alter_s=True,\n                                 shift=shift2, a_duration=dur3,\n                                 select=select3, sequence=2)\n\n        t = start - int(time.time()) + shift2\n\n        self.logger.info('Sleeping %ds until resv starts' % (t))\n        
self.server.expect(RESV,\n                           {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')},\n                           id=rid, offset=t)\n\n        self.check_standing_resv_second_occurrence(rid, start, end, select,\n                                                   freq=60, wait=True)\n\n    def test_select_fail_revert(self):\n        \"\"\"\n        Test that when a ralter fails, the select is reverted properly\n        \"\"\"\n        offset = 3600\n        offset2 = 7200\n        shift = 1800\n        dur = 3600\n        select = '8:ncpus=1'\n        select2 = '4:ncpus=1'\n\n        rid, start, end = self.submit_and_confirm_reservation(offset, dur,\n                                                              select=select)\n\n        rid2, start2, end2 = self.submit_and_confirm_reservation(offset2, dur,\n                                                                 select=select)\n\n        self.alter_a_reservation(rid, start, end, alter_s=True, alter_e=True,\n                                 shift=shift, select=select2, whichMessage=3)\n\n        a = {'Resource_List.select': '8:ncpus=1', 'Resource_List.ncpus': 8,\n             'Resource_List.nodect': 8}\n        self.server.expect(RESV, a, id=rid)\n\n    def test_resv_resc_assigned(self):\n        \"\"\"\n        Test that when an ralter -D is issued, the resources on the node\n        are still correct\n        \"\"\"\n\n        offset = 60\n        dur = 60\n        select = '4:ncpus=1'\n\n        rid, start, end = self.submit_and_confirm_reservation(offset, dur,\n                                                              select=select)\n\n        self.server.status(RESV, 'resv_nodes', id=rid)\n        resv_node = self.server.reservations[rid].get_vnodes()[0]\n\n        sleepdur = start - time.time()\n        self.logger.info('Sleeping until reservation starts')\n        self.server.expect(RESV,\n                           {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')},\n                 
          offset=sleepdur)\n        self.alter_a_reservation(rid, start, end, a_duration=600,\n                                 confirm=False)\n        self.server.expect(NODE, {'resources_assigned.ncpus': 4},\n                           max_attempts=1, id=resv_node)\n\n    def test_alter_start_standing_resv_future_occrs(self):\n        \"\"\"\n        Test that when the start time of a confirmed standing reservation is\n        altered, only the upcoming occurrence changes and not all occurrences\n        are modified.\n        \"\"\"\n\n        duration = 20\n        offset = 3600\n        shift = -3000\n\n        rid, start, end = self.submit_and_confirm_reservation(\n            offset, duration, select=\"2:ncpus=4\", standing=True,\n            rrule=\"FREQ=HOURLY;COUNT=3\")\n\n        # move the reservation so that it starts 10 mins from now\n        self.alter_a_reservation(rid, start, end, confirm=True, alter_s=True,\n                                 alter_e=True, shift=shift)\n        # This reservation should be confirmed because the second occurrence\n        # of the first reservation starts almost 2 hrs from now.\n        rid2, start, end = self.submit_and_confirm_reservation(\n            3000, 1800, select=\"2:ncpus=4\")\n\n    def test_alter_duration_standing_resv_future_occrs(self):\n        \"\"\"\n        Test that when the duration of a confirmed standing reservation is\n        altered, only the upcoming occurrence changes and not all occurrences\n        are modified.\n        \"\"\"\n\n        duration = 180\n        offset = 300\n\n        rid, start, end = self.submit_and_confirm_reservation(\n            offset, duration, select=\"2:ncpus=4\", standing=True,\n            rrule=\"FREQ=HOURLY;COUNT=3\")\n\n        # change the reservation's duration to 20 seconds\n        self.alter_a_reservation(rid, start, end, confirm=True, a_duration=20)\n\n        # Submit another reservation that starts in 1 hr and 30 seconds.\n        # In 1 hr, the second occurrence of the 
reservation will start running\n        # and it will run for 3 mins. This means the new reservation will be\n        # denied.\n        new_offset = (start + 3630) - int(time.time())\n        rid2, start, end = self.submit_and_confirm_reservation(\n            new_offset, 180, select=\"2:ncpus=4\", ExpectSuccess=0)\n\n    def test_ralter_force_start_end_confirmed_resv(self):\n        \"\"\"\n        Test that forcefully altering a confirmed reservation takes effect,\n        especially when there are conflicting reservations.\n        \"\"\"\n\n        duration1 = 3600\n        offset1 = 3600\n\n        rid1, start1, end1 = self.submit_and_confirm_reservation(\n            offset1, duration1, select=\"2:ncpus=4\")\n\n        duration2 = 1800\n        offset2 = 600\n\n        rid2, start2, end2 = self.submit_and_confirm_reservation(\n            offset2, duration2, select=\"2:ncpus=4\")\n\n        self.alter_a_reservation(rid1, start1, end1, confirm=True, shift=-3000,\n                                 alter_s=True, alter_e=True, extend='force')\n        t_duration, t_start, t_end = self.get_resv_time_info(rid1)\n        start1 = start1 - 3000\n        end1 = end1 - 3000\n        self.assertEqual(int(t_start), start1)\n        self.assertEqual(int(t_duration), duration1)\n        self.assertEqual(int(t_end), end1)\n\n        # Try the same alter but in interactive mode\n        duration = 300\n        self.alter_a_reservation(rid1, start1, end1, confirm=True,\n                                 a_duration=duration, extend='force',\n                                 interactive=10, sequence=2)\n        t_duration, _, _ = self.get_resv_time_info(rid1)\n        self.assertEqual(int(t_duration), duration)\n\n    def test_ralter_force_start_end_unconfirmed_resv(self):\n        \"\"\"\n        Test that forcefully altering an unconfirmed reservation takes effect.\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, SCHED, {'scheduling': 'False'})\n\n        duration = 3600\n
        offset = 3600\n\n        rid, start, end = self.submit_and_confirm_reservation(\n            offset, duration, select=\"2:ncpus=4\", ExpectSuccess=2)\n        attrs = {}\n        new_start = start - 1800\n        new_end = end - 3600\n        new_duration = duration - 1800\n\n        new_start_conv = self.bu.convert_seconds_to_datetime(new_start)\n        attrs['reserve_start'] = new_start_conv\n\n        new_end_conv = self.bu.convert_seconds_to_datetime(new_end)\n        attrs['reserve_end'] = new_end_conv\n\n        self.server.alterresv(rid, attrs, extend='force')\n        msg = \"pbs_ralter: \" + rid + \" CONFIRMED\"\n        self.assertEqual(msg, self.server.last_out[0])\n\n        t_duration, t_start, t_end = self.get_resv_time_info(rid)\n        self.assertEqual(int(t_start), new_start)\n        self.assertEqual(int(t_duration), new_duration)\n        self.assertEqual(int(t_end), new_end)\n\n        # Try the same alter but in interactive mode\n        new_end = new_end - 100\n        new_end_conv = self.bu.convert_seconds_to_datetime(new_end)\n        attrs['reserve_end'] = new_end_conv\n        attrs['interactive'] = 10\n        self.server.alterresv(rid, attrs, extend='force')\n        msg = \"pbs_ralter: \" + rid + \" CONFIRMED\"\n        self.assertEqual(msg, self.server.last_out[0])\n\n        _, _, t_end = self.get_resv_time_info(rid)\n        self.assertEqual(int(t_end), new_end)\n        check_attr = {'reserve_state': (MATCH_RE, 'RESV_UNCONFIRMED|1')}\n        self.server.expect(RESV, check_attr, rid)\n\n    def test_alter_force_duration_standing_resv_future_occrs(self):\n        \"\"\"\n        Test that when the duration of a confirmed standing reservation is\n        forcefully altered, only the upcoming occurrence changes and not all\n        occurrences are modified.\n        \"\"\"\n\n        duration = 180\n        offset = 300\n        offset_a = 500\n\n        rid_a, start_a, end_a = self.submit_and_confirm_reservation(\n            
offset_a, duration, select=\"2:ncpus=4\")\n\n        rid, start, end = self.submit_and_confirm_reservation(\n            offset, duration, select=\"2:ncpus=4\", standing=True,\n            rrule=\"FREQ=HOURLY;COUNT=3\")\n\n        # change the reservation's duration to 300 seconds, so that it clashes\n        # with the advance reservation\n        self.alter_a_reservation(rid, start, end, confirm=True,\n                                 a_duration=300, extend='force')\n\n        # Submit another reservation that starts in 1 hr and 200 seconds.\n        # Ideally, in 1 hr the second occurrence of the reservation will start\n        # running and it will run for 3 mins. This means the new reservation\n        # will be confirmed.\n        new_offset = (start + 3800) - int(time.time())\n        rid2, start, end = self.submit_and_confirm_reservation(\n            new_offset, 180, select=\"2:ncpus=4\", ExpectSuccess=1)\n\n    def test_alter_force_non_manager_user(self):\n        \"\"\"\n        Test that the ralter -Wforce option fails for non-manager users\n        \"\"\"\n\n        duration = 180\n        offset = 300\n        rid, start, end = self.submit_and_confirm_reservation(\n            offset, duration, select=\"2:ncpus=4\", ruser=TEST_USER2)\n        self.alter_a_reservation(rid, start, end,\n                                 a_duration=300, extend='force',\n                                 runas=TEST_USER2, whichMessage=0)\n\n    def test_alter_force_select(self):\n        \"\"\"\n        Test that the ralter -Wforce option fails for the select resource\n        \"\"\"\n\n        duration = 180\n        offset = 300\n        rid, start, end = self.submit_and_confirm_reservation(\n            offset, duration, select=\"2:ncpus=4\", ruser=TEST_USER2)\n        self.alter_a_reservation(rid, start, end, select=\"1:ncpus=1\",\n                                 a_duration=20, extend='force',\n                                 whichMessage=0)\n\n    def test_ralter_force_start_end_running_resv(self):\n        \"\"\"\n        Test that forcefully altering a running reservation takes effect.\n        \"\"\"\n\n        duration = 3600\n        offset = 20\n\n        rid, start, end = self.submit_and_confirm_reservation(\n            offset, duration, select=\"2:ncpus=4\")\n\n        resv_queue = rid.split('.')[0]\n        a = {'queue': resv_queue}\n        j = Job(attrs=a)\n        jid = self.server.submit(j)\n\n        self.logger.info('Waiting for reservation to start')\n        a = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        off = int(start - time.time())\n        self.server.expect(RESV, a, id=rid, offset=off)\n\n        # This alter command is rejected because the reservation has\n        # a running job in it.\n        self.alter_a_reservation(rid, start, end, confirm=False, shift=-10,\n                                 alter_s=True, extend='force', whichMessage=0)\n\n        self.alter_a_reservation(rid, start, end, confirm=False, shift=-100,\n                                 alter_e=True, extend='force')\n        _, _, t_end = self.get_resv_time_info(rid)\n        end -= 100\n        self.assertEqual(int(t_end), end)\n\n        self.alter_a_reservation(rid, start, end, confirm=False,\n                                 a_duration=4000, extend='force', sequence=2)\n        t_duration, _, _ = self.get_resv_time_info(rid)\n        self.assertEqual(int(t_duration), 4000)\n\n        self.server.delete(jid, wait=True)\n        end = start + 4000\n        self.alter_a_reservation(rid, start, end, confirm=True, shift=1000,\n                                 alter_s=True, extend='force', sequence=3)\n        _, t_start, _ = self.get_resv_time_info(rid)\n        self.assertEqual(int(t_start), start + 1000)\n\n    def test_restart_revert(self):\n        \"\"\"\n        Test that if a reservation is in state RESV_BEING_ALTERED and\n        the server shuts down, when the server recovers the reservation\n        
from the database, it will revert the reservation to the original\n        attributes.\n        \"\"\"\n\n        duration = 60\n        offset = 60\n        shift = 5\n\n        rid, start, end = self.submit_and_confirm_reservation(\n            offset, duration)\n\n        attrs = {'reserve_start':\n                 self.bu.convert_seconds_to_datetime(start, self.fmt),\n                 'reserve_end':\n                 self.bu.convert_seconds_to_datetime(end, self.fmt),\n                 'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, attrs, id=rid)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': False})\n        new_start, new_end = self.alter_a_reservation(rid, start, end,\n                                                      alter_s=True,\n                                                      alter_e=True,\n                                                      shift=shift,\n                                                      confirm=False,\n                                                      whichMessage=-1)\n        a2 = {'reserve_start':\n              self.bu.convert_seconds_to_datetime(new_start, self.fmt),\n              'reserve_end':\n              self.bu.convert_seconds_to_datetime(new_end, self.fmt),\n              'reserve_state': (MATCH_RE, 'RESV_BEING_ALTERED|11')}\n        self.server.expect(RESV, a2, id=rid)\n        self.server.restart()\n        self.server.expect(RESV, attrs, id=rid)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': True})\n        self.server.expect(RESV, attrs, id=rid)\n        wait = start - time.time()\n        self.check_resv_running(rid, offset=wait)\n\n    def test_alter_degrade_reconfirm_standing(self):\n        \"\"\"\n        Test that if a standing reservation is altered, degraded,\n        then reconfirmed, the reservation will use the original\n        select\n        \"\"\"\n        duration = 60\n        offset = 60\n\n        confirmed = 
{'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        degraded = {'reserve_state': (MATCH_RE, 'RESV_DEGRADED|10')}\n        offline = {'state': 'offline'}\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'reserve_retry_time': 5})\n\n        rid, start, end = self.submit_and_confirm_reservation(\n            offset, duration, standing=True, select=\"2:ncpus=2\")\n\n        self.alter_a_reservation(rid, start, end, select=\"1:ncpus=2\")\n\n        self.server.status(RESV, id=rid)\n        resv_node = self.server.reservations[rid].get_vnodes()[0]\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': False})\n        self.server.manager(MGR_CMD_SET, NODE, offline, id=resv_node)\n        self.server.expect(RESV, degraded, id=rid)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': True})\n        self.server.expect(RESV, confirmed, id=rid)\n\n        stat = self.server.status(RESV, id=rid)[0]\n        resvnodes = stat['resv_nodes']\n        self.assertNotEqual(resv_node, resvnodes)\n        self.assertEqual(1, len(resvnodes.split('+')))\n\n        self.check_occr_finish(rid, end - time.time())\n        stat = self.server.status(RESV, id=rid)[0]\n        resvnodes = stat['resv_nodes']\n        self.assertEqual(2, len(resvnodes.split('+')))\n"
  },
  {
    "path": "test/tests/functional/pbs_release_limited_res_suspend.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport time\nfrom tests.functional import *\n\n\nclass TestReleaseLimitedResOnSuspend(TestFunctional):\n    \"\"\"\n    Test that, based on the admin's input, only a limited number of\n    resources are released when suspending a running job.\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        # Set default resources available on the default mom\n        a = {ATTR_rescavail + '.ncpus': 4, ATTR_rescavail + '.mem': '2gb'}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        # Create an express queue\n        b = {ATTR_qtype: 'Execution', ATTR_enable: 'True',\n             ATTR_start: 'True', ATTR_p: '200'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, b, \"expressq\")\n\n    def test_do_not_release_mem_sched_susp(self):\n        \"\"\"\n        During preemption by suspension, test that only ncpus are released\n        from the running job and memory is not released.\n        \"\"\"\n\n        # Set restrict_res_to_release_on_suspend server attribute\n        a = {ATTR_restrict_res_to_release_on_suspend: 'ncpus'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Submit a low priority job\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_l + '.select': '1:ncpus=4:mem=512mb'})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n  
      # Submit a high priority job\n        j2 = Job(TEST_USER)\n        j2.set_attributes(\n            {ATTR_l + '.select': '1:ncpus=2:mem=512mb',\n             ATTR_q: 'expressq'})\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'S', ATTR_substate: 45}, id=jid1)\n\n        ras_mem = ATTR_rescassn + '.mem'\n        ras_ncpus = ATTR_rescassn + '.ncpus'\n\n        rv = self.server.status(\n            NODE, [ras_ncpus, ras_mem], id=self.mom.shortname)\n        self.assertNotEqual(rv, None)\n\n        self.assertEqual(rv[0][ras_mem], \"1048576kb\",\n                         msg=\"pbs should not release memory\")\n        self.assertEqual(rv[0][ras_ncpus], \"2\",\n                         msg=\"pbs did not release ncpus\")\n\n    def test_do_not_release_mem_qsig_susp(self):\n        \"\"\"\n        If a running job is suspended using qsig, test that only ncpus are\n        released from the running job and memory is not released.\n        \"\"\"\n\n        # Set restrict_res_to_release_on_suspend server attribute\n        a = {ATTR_restrict_res_to_release_on_suspend: 'ncpus'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Submit a low priority job\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_l + '.select': '1:ncpus=4:mem=512mb'})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        # suspend job\n        self.server.sigjob(jobid=jid1, signal=\"suspend\")\n\n        ras_mem = ATTR_rescassn + '.mem'\n        ras_ncpus = ATTR_rescassn + '.ncpus'\n\n        rv = self.server.status(\n            NODE, [ras_ncpus, ras_mem], id=self.mom.shortname)\n        self.assertNotEqual(rv, None)\n\n        self.assertEqual(rv[0][ras_mem], \"524288kb\",\n                         msg=\"pbs should not release memory\")\n        self.assertEqual(rv[0][ras_ncpus], \"0\",\n                         
msg=\"pbs did not release ncpus\")\n\n    def test_change_in_res_to_release_on_suspend(self):\n        \"\"\"\n        Set restrict_res_to_release_on_suspend to only ncpus and then suspend\n        a job. After the job is suspended, change\n        restrict_res_to_release_on_suspend to release only memory and check\n        that the suspended job resumes and memory is not accounted for twice.\n        \"\"\"\n\n        # Set restrict_res_to_release_on_suspend server attribute\n        a = {ATTR_restrict_res_to_release_on_suspend: 'ncpus'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Submit a low priority job\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_l + '.select': '1:ncpus=4:mem=512mb'})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        # Submit a high priority job\n        j2 = Job(TEST_USER)\n        j2.set_attributes(\n            {ATTR_l + '.select': '1:ncpus=2:mem=256mb',\n             ATTR_q: 'expressq'})\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'S', ATTR_substate: 45}, id=jid1)\n\n        # Change restrict_res_to_release_on_suspend server attribute\n        a = {ATTR_restrict_res_to_release_on_suspend: 'mem'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        rc = 0\n        try:\n            rc = self.server.deljob(jid2, wait=True)\n        except PbsDeljobError as e:\n            self.assertEqual(rc, 0, e.msg[0])\n\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n        ras_mem = ATTR_rescassn + '.mem'\n        ras_ncpus = ATTR_rescassn + '.ncpus'\n\n        rv = self.server.status(\n            NODE, [ras_ncpus, ras_mem], id=self.mom.shortname)\n        self.assertNotEqual(rv, None)\n\n        self.assertEqual(rv[0][ras_mem], \"524288kb\",\n                         msg=\"pbs did not account for memory correctly\")\n        
self.assertEqual(rv[0][ras_ncpus], \"4\",\n                         msg=\"pbs did not account for ncpus correctly\")\n\n    def test_res_released_sched_susp(self):\n        \"\"\"\n        Test if job's resources_released attribute is correctly set when\n        it is suspended.\n        \"\"\"\n\n        # Set restrict_res_to_release_on_suspend server attribute\n        a = {ATTR_restrict_res_to_release_on_suspend: 'ncpus'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Submit a low priority job\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_l + '.select': '1:ncpus=4:mem=512mb'})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        # Submit a high priority job\n        j2 = Job(TEST_USER)\n        j2.set_attributes(\n            {ATTR_l + '.select': '1:ncpus=2:mem=512mb',\n             ATTR_q: 'expressq'})\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'S', ATTR_substate: 45}, id=jid1)\n\n        job = self.server.status(JOB, id=jid1)\n\n        rr = \"(%s:ncpus=4)\" % self.mom.shortname\n        self.assertEqual(job[0][ATTR_released], rr,\n                         msg=\"resources_released incorrect\")\n\n    def test_res_released_sched_susp_multi_vnode(self):\n        \"\"\"\n        Test if job's resources_released attribute is correctly set when\n        a multi vnode job is suspended.\n        \"\"\"\n\n        # Set restrict_res_to_release_on_suspend server attribute\n        a = {ATTR_restrict_res_to_release_on_suspend: 'ncpus'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        vn_attrs = {ATTR_rescavail + '.ncpus': 8,\n                    ATTR_rescavail + '.mem': '1024mb'}\n        self.mom.create_vnodes(vn_attrs, 1,\n                               fname=\"vnodedef1\", vname=\"vnode1\")\n        # Append a vnode\n        vn_attrs = {ATTR_rescavail + 
'.ncpus': 6,\n                    ATTR_rescavail + '.mem': '1024mb'}\n        self.mom.create_vnodes(vn_attrs, 1, additive=True,\n                               fname=\"vnodedef2\", vname=\"vnode2\")\n\n        # Submit a low priority job\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_l + '.select':\n                           '1:ncpus=8:mem=512mb+1:ncpus=6:mem=256mb'})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        # Submit a high priority job\n        j2 = Job(TEST_USER)\n        j2.set_attributes(\n            {ATTR_l + '.select': '1:ncpus=8:mem=256mb',\n             ATTR_q: 'expressq'})\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'S', ATTR_substate: 45}, id=jid1)\n\n        job = self.server.status(JOB, id=jid1)\n\n        rr = \"(vnode1[0]:ncpus=8)+(vnode2[0]:ncpus=6)\"\n        self.logger.info(\"resources released are \" + job[0][ATTR_released])\n        self.assertEqual(job[0][ATTR_released], rr,\n                         msg=\"resources_released incorrect\")\n\n    def test_res_released_sched_susp_arrayjob(self):\n        \"\"\"\n        Test if array subjob's resources_released attribute is correctly\n        set when it is suspended.\n        \"\"\"\n\n        # Set restrict_res_to_release_on_suspend server attribute\n        a = {ATTR_restrict_res_to_release_on_suspend: 'ncpus'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Submit a low priority job\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_l + '.select': '1:ncpus=4:mem=512mb',\n                           ATTR_J: '1-3'})\n        jid1 = self.server.submit(j1)\n        subjobs = self.server.status(JOB, id=jid1, extend='t')\n        sub_jid1 = subjobs[1]['id']\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=sub_jid1)\n\n        # Submit a high priority job\n        j2 = 
Job(TEST_USER)\n        j2.set_attributes(\n            {ATTR_l + '.select': '1:ncpus=2:mem=512mb',\n             ATTR_q: 'expressq'})\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'S', ATTR_substate: 45},\n                           id=sub_jid1)\n\n        job = self.server.status(JOB, id=sub_jid1)\n\n        rr = \"(%s:ncpus=4)\" % self.mom.shortname\n        self.assertEqual(job[0][ATTR_released], rr,\n                         msg=\"resources_released incorrect\")\n\n    def test_res_released_list_sched_susp_arrayjob(self):\n        \"\"\"\n        Test if array subjob's resources_released_list attribute is correctly\n        set when it is suspended.\n        \"\"\"\n\n        # Set restrict_res_to_release_on_suspend server attribute\n        a = {ATTR_restrict_res_to_release_on_suspend: 'ncpus,mem'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Submit a low priority job\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_l + '.select': '1:ncpus=4:mem=512mb',\n                           ATTR_J: '1-3'})\n        jid1 = self.server.submit(j1)\n        subjobs = self.server.status(JOB, id=jid1, extend='t')\n        sub_jid1 = subjobs[1]['id']\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=sub_jid1)\n\n        # Submit a high priority job\n        j2 = Job(TEST_USER)\n        j2.set_attributes(\n            {ATTR_l + '.select': '1:ncpus=2:mem=256mb',\n             ATTR_q: 'expressq'})\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'S', ATTR_substate: 45},\n                           id=sub_jid1)\n\n        job = self.server.status(JOB, id=sub_jid1)\n\n        rr_l_ncpus = job[0][ATTR_rel_list + \".ncpus\"]\n        self.assertEqual(rr_l_ncpus, \"4\", msg=\"ncpus not released\")\n        rr_l_mem = job[0][ATTR_rel_list + \".mem\"]\n  
      self.assertEqual(rr_l_mem, \"524288kb\", msg=\"memory not released\")\n\n    def test_res_released_list_sched_susp(self):\n        \"\"\"\n        Test if job's resources_released_list attribute is correctly set when\n        it is suspended.\n        \"\"\"\n\n        # Set restrict_res_to_release_on_suspend server attribute\n        a = {ATTR_restrict_res_to_release_on_suspend: 'ncpus,mem'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Submit a low priority job\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_l + '.select': '1:ncpus=4:mem=512mb'})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        # Submit a high priority job\n        j2 = Job(TEST_USER)\n        j2.set_attributes(\n            {ATTR_l + '.select': '1:ncpus=2:mem=256mb',\n             ATTR_q: 'expressq'})\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'S', ATTR_substate: 45}, id=jid1)\n\n        job = self.server.status(JOB, id=jid1)\n\n        rr_l_ncpus = job[0][ATTR_rel_list + \".ncpus\"]\n        self.assertEqual(rr_l_ncpus, \"4\", msg=\"ncpus not released\")\n        rr_l_mem = job[0][ATTR_rel_list + \".mem\"]\n        self.assertEqual(rr_l_mem, \"524288kb\", msg=\"memory not released\")\n\n    def test_res_released_list_sched_susp_multi_vnode(self):\n        \"\"\"\n        Test if job's resources_released_list attribute is correctly set when\n        a multi vnode job is suspended.\n        \"\"\"\n\n        # Set restrict_res_to_release_on_suspend server attribute\n        a = {ATTR_restrict_res_to_release_on_suspend: 'ncpus,mem'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        vn_attrs = {ATTR_rescavail + '.ncpus': 8,\n                    ATTR_rescavail + '.mem': '1024mb'}\n        self.mom.create_vnodes(vn_attrs, 1,\n                               fname=\"vnodedef1\", 
vname=\"vnode1\")\n        # Append a vnode\n        vn_attrs = {ATTR_rescavail + '.ncpus': 6,\n                    ATTR_rescavail + '.mem': '1024mb'}\n        self.mom.create_vnodes(vn_attrs, 1, additive=True,\n                               fname=\"vnodedef2\", vname=\"vnode2\")\n\n        # Submit a low priority job\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_l + '.select':\n                           '1:ncpus=8:mem=512mb+1:ncpus=6:mem=256mb'})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        # Submit a high priority job\n        j2 = Job(TEST_USER)\n        j2.set_attributes(\n            {ATTR_l + '.select': '1:ncpus=8:mem=256mb',\n             ATTR_q: 'expressq'})\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'S', ATTR_substate: 45}, id=jid1)\n\n        job = self.server.status(JOB, id=jid1)\n\n        rr_l_ncpus = job[0][ATTR_rel_list + \".ncpus\"]\n        self.assertEqual(rr_l_ncpus, \"14\", msg=\"ncpus not released\")\n        rr_l_mem = job[0][ATTR_rel_list + \".mem\"]\n        self.assertNotEqual(rr_l_mem, \"2097152kb\", msg=\"memory not released\")\n\n    def test_node_res_after_deleting_suspended_job(self):\n        \"\"\"\n        Test that once a suspended job is deleted node's resources assigned\n        are back to 0.\n        \"\"\"\n\n        # Set restrict_res_to_release_on_suspend server attribute\n        a = {ATTR_restrict_res_to_release_on_suspend: 'ncpus'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Submit a low priority job\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_l + '.select': '1:ncpus=4:mem=512mb'})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        # suspend job\n        self.server.sigjob(jobid=jid1, signal=\"suspend\")\n        self.server.expect(JOB, 
{ATTR_state: 'S', ATTR_substate: 43}, id=jid1)\n\n        ras_mem = ATTR_rescassn + '.mem'\n        ras_ncpus = ATTR_rescassn + '.ncpus'\n\n        rv = self.server.status(\n            NODE, [ras_ncpus, ras_mem], id=self.mom.shortname)\n        self.assertNotEqual(rv, None)\n\n        self.assertEqual(\n            rv[0][ras_mem], \"524288kb\",\n            msg=\"pbs did not retain memory correctly on the node\")\n        self.assertEqual(\n            rv[0][ras_ncpus], \"0\",\n            msg=\"pbs did not release ncpus correctly on the node\")\n\n        rc = 0\n        try:\n            rc = self.server.deljob(jid1, wait=True)\n        except PbsDeljobError as e:\n            self.assertEqual(rc, 0, e.msg[0])\n\n        rv = self.server.status(\n            NODE, [ras_ncpus, ras_mem], id=self.mom.shortname)\n        self.assertNotEqual(rv, None)\n\n        self.assertEqual(\n            rv[0][ras_mem], \"0kb\",\n            msg=\"pbs did not reassign memory correctly on the node\")\n        self.assertEqual(\n            rv[0][ras_ncpus], \"0\",\n            msg=\"pbs did not reassign ncpus correctly on the node\")\n\n    def test_default_restrict_res_released_on_suspend(self):\n        \"\"\"\n        Test the default value of restrict_res_to_release_on_suspend.\n        It should release all the resources by default.\n        \"\"\"\n        # Submit a low priority job\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_l + '.select': '1:ncpus=4:mem=512mb'})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        # Submit a high priority job\n        j2 = Job(TEST_USER)\n        j2.set_attributes(\n            {ATTR_l + '.select': '1:ncpus=2:mem=256mb',\n             ATTR_q: 'expressq'})\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'S', ATTR_substate: 45}, id=jid1)\n\n        ras_mem = 
ATTR_rescassn + '.mem'\n        ras_ncpus = ATTR_rescassn + '.ncpus'\n\n        rv = self.server.status(\n            NODE, [ras_ncpus, ras_mem], id=self.mom.shortname)\n        self.assertNotEqual(rv, None)\n\n        self.assertEqual(rv[0][ras_mem], \"262144kb\",\n                         msg=\"pbs did not release memory\")\n        self.assertEqual(rv[0][ras_ncpus], \"2\",\n                         msg=\"pbs did not release ncpus\")\n\n    def test_setting_unknown_resc(self):\n        \"\"\"\n        Set a non-existent resource in restrict_res_to_release_on_suspend\n        and expect an unknown resource error\n        \"\"\"\n\n        # Set restrict_res_to_release_on_suspend server attribute\n        a = {ATTR_restrict_res_to_release_on_suspend: 'ncpus,abc'}\n        try:\n            self.server.manager(MGR_CMD_SET, SERVER, a)\n        except PbsManagerError as e:\n            self.assertTrue(\"Unknown resource\" in e.msg[0])\n\n    def test_delete_res_busy_on_res_to_release_list(self):\n        \"\"\"\n        Create resources, set them in restrict_res_to_release_on_suspend,\n        then delete them and check for a resource busy error\n        \"\"\"\n\n        # create custom resources\n        attr = {ATTR_RESC_TYPE: 'long'}\n        self.server.manager(MGR_CMD_CREATE, RSC, attr, id='foo')\n        self.server.manager(MGR_CMD_CREATE, RSC, attr, id='bar')\n\n        # Set restrict_res_to_release_on_suspend server attribute\n        a = {ATTR_restrict_res_to_release_on_suspend: 'ncpus,foo,bar'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # delete the custom resources\n        try:\n            self.server.manager(MGR_CMD_DELETE, RSC, id='foo')\n        except PbsManagerError as e:\n            self.assertTrue(\"Resource busy on server\" in e.msg[0])\n\n        try:\n            self.server.manager(MGR_CMD_DELETE, RSC, id='bar')\n        except PbsManagerError as e:\n            self.assertTrue(\"Resource busy on server\" in e.msg[0])\n\n    def test_queue_res_release_upon_suspension(self):\n        \"\"\"\n        Create 2 consumable resources and set them on a queue,\n        set one of those resources in restrict_res_to_release_on_suspend,\n        submit a job requesting these resources, check if the resource\n        set in restrict_res_to_release_on_suspend shows up as released\n        on the queue\n        \"\"\"\n\n        # create custom resources\n        attr = {ATTR_RESC_TYPE: 'long',\n                ATTR_RESC_FLAG: 'q'}\n        self.server.manager(MGR_CMD_CREATE, RSC, attr, id='foo')\n        self.server.manager(MGR_CMD_CREATE, RSC, attr, id='bar')\n\n        # Set foo in restrict_res_to_release_on_suspend server attribute\n        a = {ATTR_restrict_res_to_release_on_suspend: 'ncpus,foo'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {ATTR_rescavail + \".foo\": '100',\n             ATTR_rescavail + \".bar\": '100'}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, id=\"workq\")\n\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_l + '.ncpus': '4',\n                           ATTR_l + '.foo': '30',\n                           ATTR_l + '.bar': '40'})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        # suspend job\n        self.server.sigjob(jobid=jid1, signal=\"suspend\")\n\n        ras_foo = ATTR_rescassn + '.foo'\n        ras_bar = ATTR_rescassn + '.bar'\n\n        rv = self.server.status(\n            QUEUE, [ras_foo, ras_bar], id=\"workq\")\n        self.assertNotEqual(rv, None)\n\n        self.assertEqual(rv[0][ras_foo], \"0\",\n                         msg=\"pbs did not release resource foo\")\n\n        self.assertEqual(rv[0][ras_bar], \"40\",\n                         msg=\"pbs should not release resource bar\")\n\n    def test_server_res_release_upon_suspension_using_qsig(self):\n        \"\"\"\n        Create 2 consumable resources and set them on the server,\n        set one of those resources in restrict_res_to_release_on_suspend,\n        submit a job requesting these resources, check if the resource\n        set in restrict_res_to_release_on_suspend shows up as released\n        on the server when the job is suspended using qsig\n        \"\"\"\n\n        # create custom resources\n        attr = {ATTR_RESC_TYPE: 'long',\n                ATTR_RESC_FLAG: 'q'}\n        self.server.manager(MGR_CMD_CREATE, RSC, attr, id='foo')\n        self.server.manager(MGR_CMD_CREATE, RSC, attr, id='bar')\n\n        # Set foo in restrict_res_to_release_on_suspend server attribute\n        a = {ATTR_restrict_res_to_release_on_suspend: 'ncpus,foo'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {ATTR_rescavail + \".foo\": '100',\n             ATTR_rescavail + \".bar\": '100'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_l + '.ncpus': '4',\n                           ATTR_l + '.foo': '30',\n                           ATTR_l + '.bar': '40'})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        # suspend job\n        self.server.sigjob(jobid=jid1, signal=\"suspend\")\n\n        ras_foo = ATTR_rescassn + '.foo'\n        ras_bar = ATTR_rescassn + '.bar'\n\n        rv = self.server.status(\n            SERVER, [ras_foo, ras_bar])\n        self.assertNotEqual(rv, None)\n\n        self.assertEqual(rv[0][ras_foo], \"0\",\n                         msg=\"pbs did not release resource foo\")\n\n        self.assertEqual(rv[0][ras_bar], \"40\",\n                         msg=\"pbs should not release resource bar\")\n\n    def test_server_res_release_upon_suspension_using_preemption(self):\n        \"\"\"\n        Create 2 consumable resources and set them on the server,\n        set one of those resources in restrict_res_to_release_on_suspend,\n        submit a job requesting these resources, check if the resource\n        set in restrict_res_to_release_on_suspend shows up as released\n        on the server when preemption happens\n        \"\"\"\n\n        # create custom resources\n        attr = {ATTR_RESC_TYPE: 'long',\n                ATTR_RESC_FLAG: 'q'}\n        self.server.manager(MGR_CMD_CREATE, RSC, attr, id='foo')\n        self.server.manager(MGR_CMD_CREATE, RSC, attr, id='bar')\n\n        # Set foo in restrict_res_to_release_on_suspend server attribute\n        a = {ATTR_restrict_res_to_release_on_suspend: 'ncpus,foo'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Add foo and bar to the resources the scheduler checks for\n        resources = self.scheduler.sched_config['resources']\n        resources = resources[:-1] + ', foo, bar\\\"'\n        self.scheduler.set_sched_config({'resources': resources})\n\n        a = {ATTR_rescavail + \".foo\": '100',\n             ATTR_rescavail + \".bar\": '100'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        # Submit 2 normal priority jobs\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_l + '.ncpus': '1',\n                           ATTR_l + '.foo': '40',\n                           ATTR_l + '.bar': '20'})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        j2 = Job(TEST_USER)\n        j2.set_attributes({ATTR_l + '.ncpus': '1',\n                           ATTR_l + '.foo': '40',\n                           ATTR_l + '.bar': '20'})\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n\n        # Submit a high priority job\n        j3 = Job(TEST_USER)\n        j3.set_attributes({ATTR_l + '.ncpus': '1',\n                           ATTR_l + '.foo': '70',\n                           ATTR_l + '.bar': '20',\n                           ATTR_q: 'expressq'})\n        jid3 = self.server.submit(j3)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid3)\n        self.server.expect(JOB, {ATTR_state: 
'S'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid1)\n\n        ras_foo = ATTR_rescassn + '.foo'\n        ras_bar = ATTR_rescassn + '.bar'\n\n        rv = self.server.status(\n            SERVER, [ras_foo, ras_bar])\n        self.assertNotEqual(rv, None)\n\n        self.assertEqual(rv[0][ras_foo], \"70\",\n                         msg=\"pbs did not release resource foo\")\n\n        self.assertEqual(rv[0][ras_bar], \"60\",\n                         msg=\"pbs should not release resource bar\")\n\n    def test_node_custom_res_release_upon_suspension(self):\n        \"\"\"\n        Create 2 consumable resources and set them on a node,\n        set one of those resources in restrict_res_to_release_on_suspend,\n        submit a job requesting these resources, check if the resource\n        set in restrict_res_to_release_on_suspend shows up as released\n        on the node\n        \"\"\"\n\n        # create a custom resource\n        attr = {ATTR_RESC_TYPE: 'long',\n                ATTR_RESC_FLAG: 'nh'}\n        self.server.manager(MGR_CMD_CREATE, RSC, attr, id='foo')\n        self.server.manager(MGR_CMD_CREATE, RSC, attr, id='bar')\n\n        # Set foo in restrict_res_to_release_on_suspend server attribute\n        a = {ATTR_restrict_res_to_release_on_suspend: 'ncpus,foo'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.scheduler.add_resource(\"foo,bar\")\n\n        a = {ATTR_rescavail + \".foo\": '100',\n             ATTR_rescavail + \".bar\": '100'}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_l + '.ncpus': '4',\n                           ATTR_l + '.foo': '30',\n                           ATTR_l + '.bar': '40'})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        # suspend job\n        self.server.sigjob(jobid=jid1, signal=\"suspend\")\n\n        ras_foo = ATTR_rescassn + '.foo'\n        ras_bar = ATTR_rescassn + '.bar'\n\n        rv = self.server.status(\n            NODE, [ras_foo, ras_bar], id=self.mom.shortname)\n        self.assertNotEqual(rv, None)\n\n        self.assertEqual(rv[0][ras_foo], \"0\",\n                         msg=\"pbs did not release resource foo\")\n\n        self.assertEqual(rv[0][ras_bar], \"40\",\n                         msg=\"pbs should not release resource bar\")\n\n    def test_resuming_with_no_res_released(self):\n        \"\"\"\n        Set restrict_res_to_release_on_suspend to a resource that a job\n        does not request, then suspend this running job using qsig and\n        check if the job resumes when qsig -s resume is issued\n        \"\"\"\n        # Set mem in restrict_res_to_release_on_suspend server attribute\n        a = {ATTR_restrict_res_to_release_on_suspend: 'mem'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_l + '.ncpus': '4'})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        # suspend job\n        self.server.sigjob(jobid=jid1, signal=\"suspend\")\n\n        job = self.server.status(JOB, id=jid1)\n\n        rr = \"(%s:ncpus=0)\" % self.mom.shortname\n        self.assertEqual(job[0][ATTR_released], rr,\n                         msg=\"resources_released incorrect\")\n\n        # resume job\n        self.server.sigjob(jobid=jid1, signal=\"resume\")\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n    def test_resuming_with_no_res_released_multi_vnode(self):\n        \"\"\"\n        Set restrict_res_to_release_on_suspend to a resource that a\n        multi-vnode job does not request, then suspend this running job\n        using qsig and check if the job resumes when qsig -s resume is issued\n        \"\"\"\n        # Set mem in restrict_res_to_release_on_suspend server attribute\n        a = {ATTR_restrict_res_to_release_on_suspend: 'mem'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        vn_attrs = {ATTR_rescavail + '.ncpus': 2,\n                    ATTR_rescavail + '.mem': '1024mb'}\n        self.mom.create_vnodes(vn_attrs, 1, fname=\"vnodedef1\",\n                               vname=\"vnode1\")\n        # Append a vnode\n        vn_attrs = {ATTR_rescavail + '.ncpus': 6,\n                    ATTR_rescavail + '.mem': '1024mb'}\n        self.mom.create_vnodes(vn_attrs, 1, additive=True,\n                               fname=\"vnodedef2\", vname=\"vnode2\")\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_l + '.select':\n                           '1:ncpus=2+1:ncpus=6',\n                           ATTR_l + '.place': 'vscatter'})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        # suspend job\n        self.server.sigjob(jobid=jid1, signal=\"suspend\")\n\n        job = self.server.status(JOB, id=jid1)\n\n        rr = \"(vnode1[0]:ncpus=0)+(vnode2[0]:ncpus=0)\"\n        self.assertEqual(job[0][ATTR_released], rr,\n                         msg=\"resources_released incorrect\")\n\n        # resume job\n        self.server.sigjob(jobid=jid1, signal=\"resume\")\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n    def test_resuming_excljob_with_no_res_released(self):\n        \"\"\"\n        Set restrict_res_to_release_on_suspend to a resource that a\n        node-exclusive job does not request, then suspend this running job\n        via preemption and check if the job resumes when the high priority\n        job is deleted\n        \"\"\"\n        # Set mem in restrict_res_to_release_on_suspend server attribute\n        a = {ATTR_restrict_res_to_release_on_suspend: 'mem'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_l + '.select': '1:ncpus=1',\n                           ATTR_l + '.place': 'excl'})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        # Submit a high priority job\n        j2 = Job(TEST_USER)\n        j2.set_attributes(\n            {ATTR_l + '.select': '1:ncpus=2',\n             ATTR_q: 'expressq'})\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'S', ATTR_substate: 45}, id=jid1)\n\n        job = self.server.status(JOB, id=jid1)\n\n        rr = \"(%s:ncpus=0)\" % self.mom.shortname\n        self.assertEqual(job[0][ATTR_released], rr,\n                         msg=\"resources_released incorrect\")\n\n        # resume job\n        self.server.deljob(jid2, wait=True)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n    def test_normal_user_unable_to_see_res_released(self):\n        \"\"\"\n        Check that a normal user (non-operator, non-manager) does not have\n        privileges to see the resources_released and resource_released_list\n        attributes in job status\n        \"\"\"\n        # Set mem in restrict_res_to_release_on_suspend server attribute\n        a = {ATTR_restrict_res_to_release_on_suspend: 'mem'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_l + '.select': '1:ncpus=4:mem=512mb'})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n        # suspend job\n        self.server.sigjob(jobid=jid1, signal=\"suspend\")\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid1)\n\n        # stat the job as a normal user\n        attrs = self.server.status(JOB, id=jid1, runas=TEST_USER)\n        self.assertFalse(\"resources_released\" in attrs[0],\n                         \"Normal user can see resources_released \"\n                         \"which is not expected\")\n\n        self.assertFalse(\"resource_released_list.mem\" in attrs[0],\n                         \"Normal user can see resource_released_list \"\n      
                   \"which is not expected\")\n\n    def test_if_node_gets_oversubscribed(self):\n        \"\"\"\n        Check if the node gets oversubscribed if a filler job runs\n        on resources left on the node after suspension.\n        \"\"\"\n        # Set mem in restrict_res_to_release_on_suspend server attribute\n        a = {ATTR_restrict_res_to_release_on_suspend: 'mem'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {ATTR_sched_preempt_enforce_resumption: True}\n        self.server.manager(MGR_CMD_SET, SCHED, a)\n\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_l + '.select': '1:ncpus=2:mem=512mb'})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        # Submit a filler job\n        j2 = Job(TEST_USER)\n        j2.set_attributes({ATTR_l + '.select': '1:ncpus=3',\n                           ATTR_l + '.walltime': 50})\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid2)\n\n        # Submit a high priority job\n        j3 = Job(TEST_USER)\n        j3.set_attributes({ATTR_l + '.select': '1:ncpus=1:mem=2gb',\n                           ATTR_q: 'expressq',\n                           ATTR_l + '.walltime': 100})\n        jid3 = self.server.submit(j3)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid3)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid1)\n        # Check that resources_assigned is not exceeding resources_available\n        ras_ncpus = ATTR_rescassn + '.ncpus'\n        rav_ncpus = ATTR_rescavail + '.ncpus'\n        rv = self.server.status(\n            NODE, [ras_ncpus, rav_ncpus], id=self.mom.shortname)\n        self.assertNotEqual(rv, None)\n\n        self.assertLessEqual(rv[0][ras_ncpus], rv[0][rav_ncpus],\n                             msg=\"pbs released resource ncpus incorrectly\")\n\n        # Expect filler job to be in queued state because\n        # suspended job did not release 
any ncpus\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid2)\n\n    def test_suspended_job_gets_calendared(self):\n        \"\"\"\n        Check if a job which releases a limited amount of resources gets\n        calendared in the same cycle when it gets suspended.\n        \"\"\"\n        # Set mem in restrict_res_to_release_on_suspend server attribute\n        a = {ATTR_restrict_res_to_release_on_suspend: 'mem'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {ATTR_sched_preempt_enforce_resumption: True}\n        self.server.manager(MGR_CMD_SET, SCHED, a)\n\n        # Set 5 ncpus available on the node\n        a = {ATTR_rescavail + '.ncpus': 5}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_l + '.select': '1:ncpus=3:mem=1512mb'})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        # Submit a high priority job\n        j2 = Job(TEST_USER)\n        j2.set_attributes({ATTR_l + '.select': '1:ncpus=2:mem=2gb',\n                           ATTR_q: 'expressq',\n                           ATTR_l + '.walltime': 100})\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid1)\n\n        # Check if the job is calendared\n        self.scheduler.log_match(\n            jid1 + \";Can't find start time estimate\", existence=False,\n            max_attempts=2)\n\n    def helper_test_preempt_release_all(self, preempt_method):\n        \"\"\"\n        Helper function to test that when preempting jobs, all resources\n        are released during preemption simulation for R and C methods\n        \"\"\"\n        if preempt_method == \"R\":\n            schedlog_msg = \"Job preempted by requeuing\"\n        elif preempt_method == \"C\":\n            schedlog_msg = \"Job preempted by checkpointing\"\n        else:\n            raise Exception(\"Unexpected value of argument preempt_method: %s\"\n                            % (preempt_method))\n\n        a = {ATTR_restrict_res_to_release_on_suspend: 'mem'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'preempt_order': preempt_method}, runas=ROOT_USER)\n\n        # Set 2 ncpus available on the node\n        a = {ATTR_rescavail + '.ncpus': \"2\"}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n\n        # Submit a low priority job which takes up all of the ncpus\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_l + '.select': '1:ncpus=2'})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        # Submit a high priority job which requests 1 ncpus\n        j2 = Job(TEST_USER)\n        j2.set_attributes({ATTR_l + '.select': '1:ncpus=1',\n                           ATTR_q: 'expressq'})\n        jid2 = self.server.submit(j2)\n\n        # Even though server is configured to only release mem for suspend,\n        # for requeue and checkpointing, we should have released ncpus as well\n        # and correctly preempted the low priority job\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n        self.scheduler.log_match(jid1 + \";\" + schedlog_msg)\n\n    def test_preempt_requeue_release_all(self):\n        \"\"\"\n        Test that when preempting jobs via Requeue, all resources\n        are released during the preemption simulation\n        \"\"\"\n        self.helper_test_preempt_release_all(\"R\")\n\n    def test_preempt_checkpoint_release_all(self):\n        \"\"\"\n        Test that when preempting jobs via Checkpointing, all resources\n        are released during the preemption simulation\n        \"\"\"\n        # Create checkpoint abort script\n        chk_script = \"\"\"#!/bin/bash\n                kill $1\n                exit 0\n                \"\"\"\n        self.mom.add_checkpoint_abort_script(body=chk_script)\n        self.helper_test_preempt_release_all(\"C\")\n\n    def test_server_restart_with_suspended_job(self):\n        \"\"\"\n        Test that when a job releases limited resources on a node and then\n        PBS server is restarted, the job is able to resume on the same\n        node.\n        \"\"\"\n        # Set ncpus in restrict_res_to_release_on_suspend server attribute\n        a = {ATTR_restrict_res_to_release_on_suspend: 'ncpus'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Set 2 ncpus available on the node\n        a = {ATTR_rescavail + '.ncpus': \"2\"}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n\n        # Submit a job which takes up all of the ncpus\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_l + '.select': '1:ncpus=2'})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        # make sure that job id is part of node's jobs attribute\n        node = self.server.status(NODE, id=self.mom.shortname)\n        self.assertIn(jid1, node[0]['jobs'])\n\n        # suspend job\n        self.server.sigjob(jobid=jid1, signal=\"suspend\")\n\n        self.server.restart()\n\n        self.assertTrue(self.server.isUp())\n        self.server.expect(NODE, {'state': 'free'}, id=self.mom.shortname)\n        self.server.expect(NODE, 'jobs', op=UNSET, id=self.mom.shortname)\n\n        # resume job\n        self.server.sigjob(jobid=jid1, signal=\"resume\")\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n        self.server.expect(NODE, 'jobs', op=SET, id=self.mom.shortname)\n\n    def test_server_restart_with_suspended_job_unset(self):\n        \"\"\"\n        Test that when the attribute is set and unset,\n        the server does not crash on restart with a suspended job.\n        \"\"\"\n        a = {'type': 'long', 
'flag': 'q'}\n        self.server.manager(MGR_CMD_CREATE, RSC, a, id='l1')\n        self.server.manager(MGR_CMD_CREATE, RSC, a, id='l2')\n\n        self.scheduler.add_resource('l1, l2', apply=True)\n\n        a = {'Resources_available.l1': 5, 'Resources_available.l2': 5}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {ATTR_restrict_res_to_release_on_suspend: ['ncpus', 'l1']}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {'Resource_List.select': '1:ncpus=1:mem=1024kb',\n             'Resource_List.l1': 1,\n             'Resource_List.l2': 1}\n\n        j1 = Job(TEST_USER, attrs=a)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        node = self.server.status(NODE, id=self.mom.shortname)\n        self.assertIn(jid1, node[0]['jobs'])\n        self.server.expect(NODE,\n                           {'resources_assigned.ncpus': 1,\n                            'resources_assigned.mem': '1024kb'},\n                           id=self.mom.shortname)\n        self.server.expect(SERVER,\n                           {'resources_assigned.l1': 1,\n                            'resources_assigned.l2': 1})\n\n        self.server.sigjob(jobid=jid1, signal=\"suspend\")\n        self.server.expect(NODE,\n                           {'resources_assigned.ncpus': 0,\n                            'resources_assigned.mem': '1024kb'},\n                           id=self.mom.shortname)\n        self.server.expect(SERVER,\n                           {'resources_assigned.l1': 0,\n                            'resources_assigned.l2': 1})\n\n        a = [ATTR_restrict_res_to_release_on_suspend]\n        self.server.manager(MGR_CMD_UNSET, SERVER, a)\n\n        self.server.restart()\n\n        self.assertTrue(self.server.isUp())\n        self.server.expect(NODE,\n                           {'state': 'free',\n                            'resources_assigned.ncpus': 0,\n                            
'resources_assigned.mem': '1024kb'},\n                           id=self.mom.shortname)\n        self.server.expect(SERVER,\n                           {'resources_assigned.l1': 0,\n                            'resources_assigned.l2': 1})\n        self.server.expect(NODE, 'jobs', op=UNSET, id=self.mom.shortname)\n\n        self.server.sigjob(jobid=jid1, signal=\"resume\")\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n        self.server.expect(NODE, 'jobs', op=SET, id=self.mom.shortname)\n        self.server.expect(NODE,\n                           {'resources_assigned.ncpus': 1,\n                            'resources_assigned.mem': '1024kb'},\n                           id=self.mom.shortname)\n        self.server.expect(SERVER,\n                           {'resources_assigned.l1': 1,\n                            'resources_assigned.l2': 1})\n"
  },
  {
    "path": "test/tests/functional/pbs_reliable_job_startup.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\nfrom ptl.utils.pbs_logutils import PBSLogUtils\n\n\ndef convert_time(fmt, tm, fixdate=False):\n    \"\"\"\n    Convert the given time stamp <tm> into the given format <fmt>.\n    If fixdate is True, add a <space> before the date if the date is <= 9\n    (this is done to match ctime output, which qstat uses)\n    \"\"\"\n    rv = time.strftime(fmt, time.localtime(float(tm)))\n    if ((sys.platform not in ('cygwin', 'win32')) and (fixdate)):\n        rv = rv.split()\n        date = int(rv[2])\n        if date <= 9:\n            date = ' ' + str(date)\n        rv[2] = str(date)\n        rv = ' '.join(rv)\n    return rv\n\n\ndef create_subjob_id(job_array_id, subjob_index):\n    \"\"\"\n    insert subjob index into the square brackets of job array id\n    \"\"\"\n    idx = job_array_id.find('[]')\n    return job_array_id[:idx + 1] + str(subjob_index) + job_array_id[idx + 1:]\n\n\n@requirements(num_moms=5)\nclass TestPbsReliableJobStartup(TestFunctional):\n\n    \"\"\"\n    This tests the Reliable Job Startup Feature,\n    where a job can be started with extra nodes,\n    with node failures tolerated during job start\n    (and even throughout the life of the job),\n    before pruning the job back to a set of healthy\n    nodes that satisfy the original request.\n\n    Custom parameters:\n    moms: colon-separated hostnames of five MoMs\n    \"\"\"\n    logutils = PBSLogUtils()\n\n    def pbs_nodefile_match_exec_host(self, jid, exec_host,\n                                     schedselect=None):\n        \"\"\"\n        Look into the PBS_NODEFILE on the first host listed in 'exec_host'\n        and return True if all host entries in 'exec_host' match the entries\n        in the file. Otherwise, return False.\n\n        Also look for 'mpiprocs' values in 'schedselect' (if not None), and\n        verify that the corresponding node hosts appear in the\n        PBS_NODEFILE 'mpiprocs' number of times.\n        \"\"\"\n\n        pbs_nodefile = os.path.join(self.server.\n                                    pbs_conf['PBS_HOME'], 'aux', jid)\n\n        # look for mpiprocs settings\n        mpiprocs = []\n        if schedselect is not None:\n            for chunk in schedselect.split('+'):\n                chl = chunk.split(':')\n                for ch in chl:\n                    if ch.find('=') != -1:\n                        c = ch.split('=')\n                        if c[0] == \"mpiprocs\":\n                            mpiprocs.append(c[1])\n        ehost = exec_host.split('+')\n        first_host = ehost[0].split('/')[0]\n\n        cmd = ['cat', pbs_nodefile]\n        ret = self.server.du.run_cmd(first_host, cmd, sudo=False)\n        ehost2 = []\n        for h in ret['out']:\n            ehost2.append(h.split('.')[0])\n\n        ehost1 = []\n        j = 0\n        for eh in ehost:\n            h = eh.split('/')\n            if (len(mpiprocs) > 0):\n                for _ in range(int(mpiprocs[j])):\n                    ehost1.append(h[0])\n            else:\n                ehost1.append(h[0])\n            j += 1\n\n        self.logger.info(\"EHOST1=%s\" % (ehost1,))\n        self.logger.info(\"EHOST2=%s\" % (ehost2,))\n        if ehost1 == ehost2:\n            return True\n        return False\n\n    def match_accounting_log(self, atype, jid, exec_host, exec_vnode,\n                             mem, 
ncpus, nodect, place, select):\n        \"\"\"\n        This checks if there's an accounting log record 'atype' for\n        job 'jid' containing the values given (i.e.\n        Resource_List.exec_host, Resource_List.exec_vnode, etc...)\n        This throws an exception upon encountering a non-matching\n        accounting_logs entry.\n        Some example values of 'atype' are: 'u' (update record due to\n        release node request), 'c' (record containing the next\n        set of resources to be used by a phased job as a result of\n        release node request), 'e' (last update record for a phased job\n        due to a release node request), 'E' (end of job record),\n        's' (secondary start record).\n        \"\"\"\n\n        if atype == 'e':\n            self.mom.log_match(\"Job;%s;Obit sent\" % (jid,), n=\"ALL\",\n                               max_attempts=5, interval=5,\n                               starttime=self.stime)\n\n        self.server.accounting_match(\n            msg=r\".*%s;%s.*exec_host=%s\" % (atype, jid, exec_host),\n            regexp=True, n=\"ALL\", max_attempts=3, starttime=self.stime)\n\n        self.server.accounting_match(\n            msg=r\".*%s;%s.*exec_vnode=%s\" % (atype, jid, exec_vnode),\n            regexp=True, n=\"ALL\", max_attempts=3, starttime=self.stime)\n\n        self.server.accounting_match(\n            msg=r\".*%s;%s.*Resource_List\\.mem=%s\" % (atype, jid,  mem),\n            regexp=True, n=\"ALL\", max_attempts=3, starttime=self.stime)\n\n        self.server.accounting_match(\n            msg=r\".*%s;%s.*Resource_List\\.ncpus=%d\" % (atype, jid, ncpus),\n            regexp=True, n=\"ALL\", max_attempts=3, starttime=self.stime)\n\n        self.server.accounting_match(\n            msg=r\".*%s;%s.*Resource_List\\.nodect=%d\" % (atype, jid, nodect),\n            regexp=True, n=\"ALL\", max_attempts=3, starttime=self.stime)\n\n        self.server.accounting_match(\n            
msg=r\".*%s;%s.*Resource_List\\.place=%s\" % (atype, jid, place),\n            regexp=True, n=\"ALL\", max_attempts=3, starttime=self.stime)\n\n        self.server.accounting_match(\n            msg=r\".*%s;%s.*Resource_List\\.select=%s\" % (atype, jid, select),\n            regexp=True, n=\"ALL\", max_attempts=3, starttime=self.stime)\n\n        if (atype != 'c') and (atype != 'S') and (atype != 's'):\n            self.server.accounting_match(\n                msg=r\".*%s;%s.*resources_used\\.\" % (atype, jid),\n                regexp=True, n=\"ALL\", max_attempts=3, starttime=self.stime)\n\n    def match_vnode_status(self, vnode_list, state, jobs=None, ncpus=None,\n                           mem=None):\n        \"\"\"\n        Given a list of vnode names in 'vnode_list', check to make\n        sure each vnode's state, jobs string, resources_assigned.mem,\n        and resources_assigned.ncpus match the passed arguments.\n        This will throw an exception if a match is not found.\n        \"\"\"\n        for vn in vnode_list:\n            dict_match = {'state': state}\n            if jobs is not None:\n                dict_match['jobs'] = jobs\n            if ncpus is not None:\n                dict_match['resources_assigned.ncpus'] = ncpus\n            if mem is not None:\n                dict_match['resources_assigned.mem'] = mem\n\n            self.server.expect(VNODE, dict_match, id=vn)\n\n    def create_and_submit_job(self, job_type, attribs=None):\n        \"\"\"\n        create the job object and submit it to the server\n        based on 'job_type' and attributes list 'attribs'.\n        \"\"\"\n        if attribs:\n            retjob = Job(TEST_USER, attrs=attribs)\n        else:\n            retjob = Job(TEST_USER)\n\n        if job_type == 'job1':\n            retjob.create_script(self.script['job1'])\n        elif job_type == 'job1_2':\n            retjob.create_script(self.script['job1_2'])\n        elif job_type == 'job1_3':\n            
retjob.create_script(self.script['job1_3'])\n        elif job_type == 'job1_4':\n            retjob.create_script(self.script['job1_4'])\n        elif job_type == 'job2':\n            retjob.create_script(self.script['job2'])\n        elif job_type == 'job3':\n            retjob.create_script(self.script['job3'])\n        elif job_type == 'job4':\n            retjob.create_script(self.script['job4'])\n        elif job_type == 'job5':\n            retjob.create_script(self.script['job5'])\n        elif job_type == 'jobA':\n            retjob.create_script(self.script['jobA'])\n\n        return self.server.submit(retjob)\n\n    def setUp(self):\n\n        if len(self.moms) != 5:\n            cmt = \"need 5 mom hosts: -p moms=<m1>:<m2>:<m3>:<m4>:<m5>\"\n            self.skip_test(reason=cmt)\n\n        TestFunctional.setUp(self)\n        Job.dflt_attributes[ATTR_k] = 'oe'\n\n        self.server.cleanup_jobs()\n\n        self.momA = self.moms.values()[0]\n        self.momB = self.moms.values()[1]\n        self.momC = self.moms.values()[2]\n        self.momD = self.moms.values()[3]\n        self.momE = self.moms.values()[4]\n\n        # Now start setting up and creating the vnodes\n        self.server.manager(MGR_CMD_DELETE, NODE, None, \"\")\n\n        # set node momA\n        self.hostA = self.momA.shortname\n        self.momA.delete_vnode_defs()\n        vnode_prefix = self.hostA\n        a = {'resources_available.mem': '1gb',\n             'resources_available.ncpus': '1'}\n        vnodedef = self.momA.create_vnode_def(vnode_prefix, a, 4)\n        self.assertNotEqual(vnodedef, None)\n        self.momA.insert_vnode_def(vnodedef, 'vnode.def')\n        self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostA)\n\n        # set node momB\n        self.hostB = self.momB.shortname\n        self.momB.delete_vnode_defs()\n        vnode_prefix = self.hostB\n        a = {'resources_available.mem': '1gb',\n             'resources_available.ncpus': '1'}\n        vnodedef = 
self.momB.create_vnode_def(vnode_prefix, a, 5,\n                                              usenatvnode=True)\n        self.assertNotEqual(vnodedef, None)\n        self.momB.insert_vnode_def(vnodedef, 'vnode.def')\n        self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostB)\n\n        # set node momC\n        # This one has no vnode definition.\n\n        self.hostC = self.momC.shortname\n        self.momC.delete_vnode_defs()\n        self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostC)\n        a = {'resources_available.ncpus': 2,\n             'resources_available.mem': '2gb'}\n        # set natural vnode of hostC\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.hostC)\n\n        # set node momD\n        # This one has no vnode definition.\n\n        self.hostD = self.momD.shortname\n        self.momD.delete_vnode_defs()\n        self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostD)\n        a = {'resources_available.ncpus': 5,\n             'resources_available.mem': '5gb'}\n        # set natural vnode of hostD\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.hostD)\n\n        # set node momE\n        self.hostE = self.momE.shortname\n        self.momE.delete_vnode_defs()\n        vnode_prefix = self.hostE\n        a = {'resources_available.mem': '1gb',\n             'resources_available.ncpus': '1'}\n        vnodedef = self.momE.create_vnode_def(vnode_prefix, a, 5,\n                                              usenatvnode=True)\n        self.assertNotEqual(vnodedef, None)\n        self.momE.insert_vnode_def(vnodedef, 'vnode.def')\n        self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostE)\n\n        # Various node names\n        self.nA = self.hostA\n        self.nAv0 = '%s[0]' % (self.hostA,)\n        self.nAv1 = '%s[1]' % (self.hostA,)\n        self.nAv2 = '%s[2]' % (self.hostA,)\n        self.nAv3 = '%s[3]' % (self.hostA,)\n        self.nB = self.hostB\n        self.nBv0 = '%s[0]' % (self.hostB,)\n        self.nBv1 = 
'%s[1]' % (self.hostB,)\n        self.nBv2 = '%s[2]' % (self.hostB,)\n        self.nBv3 = '%s[3]' % (self.hostB,)\n        self.nC = self.hostC\n        self.nD = self.hostD\n        self.nE = self.hostE\n        self.nEv0 = '%s[0]' % (self.hostE,)\n        self.nEv1 = '%s[1]' % (self.hostE,)\n        self.nEv2 = '%s[2]' % (self.hostE,)\n        self.nEv3 = '%s[3]' % (self.hostE,)\n\n        a = {'state': 'free', 'resources_available.ncpus': (GE, 1)}\n        self.server.expect(VNODE, {'state=free': 17}, count=True,\n                           max_attempts=10, interval=2)\n\n        if sys.platform in ('cygwin', 'win32'):\n            SLEEP_CMD = \"pbs-sleep\"\n        else:\n            SLEEP_CMD = os.path.join(os.sep, \"bin\", \"sleep\")\n\n        self.pbs_release_nodes_cmd = os.path.join(\n            self.server.pbs_conf['PBS_EXEC'], 'bin', 'pbs_release_nodes')\n\n        FIB37 = os.path.join(self.server.pbs_conf['PBS_EXEC'], 'bin',\n                             'pbs_python') + \\\n            ' -c \"exec(\\\\\\\"def fib(i):\\\\n if i < 2:\\\\n  \\\nreturn i\\\\n return fib(i-1) + fib(i-2)\\\\n\\\\nprint(fib(37))\\\\\\\")\"'\n\n        self.fib37_value = 24157817\n\n        FIB40 = os.path.join(self.server.pbs_conf['PBS_EXEC'], 'bin',\n                             'pbs_python') + \\\n            ' -c \"exec(\\\\\\\"def fib(i):\\\\n if i < 2:\\\\n  \\\nreturn i\\\\n return fib(i-1) + fib(i-2)\\\\n\\\\nprint(fib(40))\\\\\\\")\"'\n\n        # job submission arguments\n        self.script = {}\n        # original select spec\n        self.job1_oselect = \"ncpus=3:mem=2gb+ncpus=3:mem=2gb+ncpus=2:mem=2gb\"\n        self.job1_place = \"scatter\"\n        # incremented values at job start and just before actual launch\n        self.job1_iselect = \\\n            \"1:ncpus=3:mem=2gb+2:ncpus=3:mem=2gb+2:ncpus=2:mem=2gb\"\n        self.job1_ischedselect = self.job1_iselect\n        self.job1_iexec_host = \"%s/0*0+%s/0*0+%s/0*3+%s/0*2+%s/0*0\" % (\n            self.nA, 
self.nB, self.nD, self.nC, self.nE)\n        self.job1_iexec_vnode = \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nAv0,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.nAv1,) + \\\n            \"%s:ncpus=1)+\" % (self.nAv2) + \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nB,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.nBv0,) + \\\n            \"%s:ncpus=1)+\" % (self.nBv1,) + \\\n            \"(%s:ncpus=3:mem=2097152kb)+\" % (self.nD,) + \\\n            \"(%s:ncpus=2:mem=2097152kb)+\" % (self.nC,) + \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nE,) + \\\n            \"%s:mem=1048576kb:ncpus=1)\" % (self.nEv0,)\n        self.job1_isel_esc = self.job1_iselect.replace(r\"+\", r\"\\+\")\n        self.job1_iexec_host_esc = self.job1_iexec_host.replace(\n            r\"*\", r\"\\*\").replace(r\"[\", r\"\\[\").replace(r\"]\", r\"\\]\").replace(\n                    r\"+\", r\"\\+\")\n        self.job1_iexec_vnode_esc = self.job1_iexec_vnode.replace(\n            r\"[\", r\"\\[\").replace(r\"]\", r\"\\]\").replace(r\"(\", r\"\\(\").replace(\n            r\")\", r\"\\)\").replace(r\"+\", r\"\\+\")\n\n        # expected values version 1 upon successful job launch\n        self.job1_select = \\\n            \"1:ncpus=3:mem=2gb+1:ncpus=3:mem=2gb+1:ncpus=2:mem=2gb\"\n        self.job1_schedselect = self.job1_select\n        self.job1_exec_host = r\"%s/0*0+%s/0*3+%s/0*0\" % (\n            self.nA, self.nD, self.nE)\n        self.job1_exec_vnode = \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nAv0,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.nAv1,) + \\\n            \"%s:ncpus=1)+\" % (self.nAv2) + \\\n            \"(%s:ncpus=3:mem=2097152kb)+\" % (self.nD,) + \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nE,) + \\\n            \"%s:mem=1048576kb:ncpus=1)\" % (self.nEv0,)\n\n        self.job1_sel_esc = self.job1_select.replace(r\"+\", r\"\\+\")\n        self.job1_exec_host_esc = 
self.job1_exec_host.replace(\n            r\"*\", r\"\\*\").replace(r\"[\", r\"\\[\").replace(r\"]\", r\"\\]\").replace(\n                    r\"+\", r\"\\+\")\n        self.job1_exec_vnode_esc = self.job1_exec_vnode.replace(\n            r\"[\", r\"\\[\").replace(r\"]\", r\"\\]\").replace(r\"(\", r\"\\(\").replace(\n            r\")\", r\"\\)\").replace(r\"+\", r\"\\+\")\n\n        # expected values version 2 upon successful job launch\n        self.job1v2_select = \\\n            r\"1:ncpus=3:mem=2gb+1:ncpus=3:mem=2gb+1:ncpus=2:mem=2gb\"\n        self.job1v2_schedselect = self.job1v2_select\n        self.job1v2_exec_host = r\"%s/0*0+%s/0*3+%s/0*2\" % (\n            self.nA, self.nD, self.nC)\n        self.job1v2_exec_vnode = \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nAv0,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.nAv1,) + \\\n            \"%s:ncpus=1)+\" % (self.nAv2) + \\\n            \"(%s:ncpus=3:mem=2097152kb)+\" % (self.nD,) + \\\n            \"(%s:ncpus=2:mem=2097152kb)\" % (self.nC,)\n\n        self.job1v2_sel_esc = self.job1v2_select.replace(r\"+\", r\"\\+\")\n        self.job1v2_exec_host_esc = self.job1v2_exec_host.replace(\n            r\"*\", r\"\\*\").replace(r\"[\", r\"\\[\").replace(r\"]\", r\"\\]\").replace(\n                    r\"+\", r\"\\+\")\n        self.job1v2_exec_vnode_esc = self.job1v2_exec_vnode.replace(\n            r\"[\", r\"\\[\").replace(r\"]\", r\"\\]\").replace(r\"(\", r\"\\(\").replace(\n            r\")\", r\"\\)\").replace(r\"+\", r\"\\+\")\n\n        # expected values version 3 upon successful job launch\n        self.job1v3_select = \\\n            \"1:ncpus=3:mem=2gb+1:ncpus=3:mem=2gb+1:ncpus=2:mem=2gb\"\n        self.job1v3_schedselect = self.job1v3_select\n        self.job1v3_exec_host = \"%s/0*0+%s/0*0+%s/0*0\" % (\n            self.nA, self.nB, self.nE)\n        self.job1v3_exec_vnode = \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nAv0,) + \\\n            
\"%s:mem=1048576kb:ncpus=1+\" % (self.nAv1,) + \\\n            \"%s:ncpus=1)+\" % (self.nAv2) + \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nB,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.nBv0,) + \\\n            \"%s:ncpus=1)+\" % (self.nBv1,) + \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nE,) + \\\n            \"%s:mem=1048576kb:ncpus=1)\" % (self.nEv0,)\n\n        self.job1v3_sel_esc = self.job1v3_select.replace(\"+\", r\"\\+\")\n        self.job1v3_exec_host_esc = self.job1v3_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        self.job1v3_exec_vnode_esc = self.job1v3_exec_vnode.replace(\n            \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\n            \")\", r\"\\)\").replace(\"+\", r\"\\+\")\n\n        # expected values version 4 upon successful job launch\n        self.job1v4_select = \\\n            \"1:ncpus=3:mem=2gb+1:ncpus=3:mem=2gb+1:ncpus=2:mem=2gb\"\n        self.job1v4_schedselect = self.job1v4_select\n        self.job1v4_exec_host = \"%s/0*0+%s/0*0+%s/0*2\" % (\n            self.nA, self.nB, self.nD)\n        self.job1v4_exec_vnode = \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nAv0,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.nAv1,) + \\\n            \"%s:ncpus=1)+\" % (self.nAv2) + \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nB,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.nBv0,) + \\\n            \"%s:ncpus=1)+\" % (self.nBv1,) + \\\n            \"(%s:ncpus=2:mem=2097152kb)\" % (self.nD,)\n\n        self.job1v4_sel_esc = self.job1v4_select.replace(\"+\", r\"\\+\")\n        self.job1v4_exec_host_esc = self.job1v4_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        self.job1v4_exec_vnode_esc = self.job1v4_exec_vnode.replace(\n      
      \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\n            \")\", r\"\\)\").replace(\"+\", r\"\\+\")\n\n        # expected values version 5 upon successful job launch\n        self.job1v5_select = \\\n            \"1:ncpus=3:mem=2gb+1:ncpus=3:mem=2gb+1:ncpus=2:mem=2gb\"\n        self.job1v5_schedselect = self.job1v5_select\n        self.job1v5_exec_host = \"%s/0*0+%s/0*0+%s/0*2\" % (\n            self.nA, self.nB, self.nC)\n        self.job1v5_exec_vnode = \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nAv0,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.nAv1,) + \\\n            \"%s:ncpus=1)+\" % (self.nAv2) + \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nB,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.nBv0,) + \\\n            \"%s:ncpus=1)+\" % (self.nBv1,) + \\\n            \"(%s:ncpus=2:mem=2097152kb)\" % (self.nC,)\n\n        self.job1v5_sel_esc = self.job1v5_select.replace(\"+\", r\"\\+\")\n        self.job1v5_exec_host_esc = self.job1v5_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        self.job1v5_exec_vnode_esc = self.job1v5_exec_vnode.replace(\n            \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\n            \")\", r\"\\)\").replace(\"+\", r\"\\+\")\n\n        # expected values version 6 upon successful job launch\n        self.job1v6_select = \\\n            \"1:ncpus=3:mem=2gb+1:ncpus=3:mem=2gb+1:ncpus=2:mem=2gb\"\n        self.job1v6_select += \"+1:ncpus=1:mem=1gb\"\n        self.job1v6_schedselect = self.job1v6_select\n        self.job1v6_exec_host = \"%s/0*0+%s/0*0+%s/0*2+%s/0\" % (\n            self.nA, self.nB, self.nC, self.nE)\n        self.job1v6_exec_vnode = \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nAv0,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.nAv1,) + \\\n            \"%s:ncpus=1)+\" % (self.nAv2) + 
\\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nB,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.nBv0,) + \\\n            \"%s:ncpus=1)+\" % (self.nBv1,) + \\\n            \"(%s:ncpus=2:mem=2097152kb)+\" % (self.nC,) + \\\n            \"(%s:mem=1048576kb:ncpus=1)\" % (self.nE,)\n\n        self.job1v6_sel_esc = self.job1v6_select.replace(\"+\", r\"\\+\")\n        self.job1v6_exec_host_esc = self.job1v6_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        self.job1v6_exec_vnode_esc = self.job1v6_exec_vnode.replace(\n            \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\n            \")\", r\"\\)\").replace(\"+\", r\"\\+\")\n\n        self.script['job1'] = \"\"\"\n#PBS -l select=%s\n#PBS -l place=%s\n#PBS -W umask=022\n#PBS -S /bin/bash\necho \"$PBS_NODEFILE\"\ncat $PBS_NODEFILE\necho 'FIB TESTS'\necho 'pbsdsh -n 1 fib 37'\npbsdsh -n 1 -- %s\necho 'pbsdsh -n 2 fib 37'\npbsdsh -n 2 -- %s\necho 'fib 37'\n%s\necho 'HOSTNAME TESTS'\necho 'pbsdsh -n 0 hostname'\npbsdsh -n 0 --  hostname -s\necho 'pbsdsh -n 1 hostname'\npbsdsh -n 1 --  hostname -s\necho 'pbsdsh -n 2 hostname'\npbsdsh -n 2 --  hostname -s\necho 'PBS_NODEFILE tests'\nfor h in `cat $PBS_NODEFILE`\ndo\n    echo \"HOST=$h\"\n    echo \"pbs_tmrsh $h hostname\"\n    pbs_tmrsh $h hostname -s\ndone\n\"\"\" % (self.job1_oselect, self.job1_place, FIB37, FIB37, FIB37)\n\n        # original select spec\n        self.jobA_oselect = \"ncpus=1:mem=1gb+ncpus=1:mem=1gb+ncpus=1:mem=1gb\"\n        self.jobA_place = \"scatter\"\n        # incremented values at job start and just before actual launch\n        self.jobA_iselect = \\\n            \"1:ncpus=1:mem=1gb+2:ncpus=1:mem=1gb+2:ncpus=1:mem=1gb\"\n        self.jobA_ischedselect = self.jobA_iselect\n        self.jobA_iexec_host1 = \"%s/0+%s/0+%s/0+%s/0+%s/0\" % (\n            self.nA, self.nB, self.nC, self.nD, self.nE)\n  
      self.jobA_iexec_host2 = \"%s/1+%s/1+%s/1+%s/1+%s/1\" % (\n            self.nA, self.nB, self.nC, self.nD, self.nE)\n        self.jobA_iexec_host3 = \"%s/2+%s/2+%s/0+%s/2+%s/0\" % (\n            self.nA, self.nB, self.nC, self.nD, self.nE)\n        self.jobA_iexec_vnode1 = \\\n            \"(%s:ncpus=1:mem=1048576kb)+\" % (self.nAv0,) + \\\n            \"(%s:ncpus=1:mem=1048576kb)+\" % (self.nB,) + \\\n            \"(%s:ncpus=1:mem=1048576kb)+\" % (self.nC,) + \\\n            \"(%s:ncpus=1:mem=1048576kb)+\" % (self.nD,) + \\\n            \"(%s:ncpus=1:mem=1048576kb)\" % (self.nE,)\n        self.jobA_iexec_vnode2 = \\\n            \"(%s:ncpus=1:mem=1048576kb)+\" % (self.nAv1,) + \\\n            \"(%s:ncpus=1:mem=1048576kb)+\" % (self.nBv0,) + \\\n            \"(%s:ncpus=1:mem=1048576kb)+\" % (self.nC,) + \\\n            \"(%s:ncpus=1:mem=1048576kb)+\" % (self.nD,) + \\\n            \"(%s:ncpus=1:mem=1048576kb)\" % (self.nEv0,)\n        self.jobA_iexec_vnode3 = \\\n            \"(%s:ncpus=1:mem=1048576kb)+\" % (self.nAv2,) + \\\n            \"(%s:ncpus=1:mem=1048576kb)+\" % (self.nBv1,) + \\\n            \"(%s:ncpus=1:mem=1048576kb)+\" % (self.nC,) + \\\n            \"(%s:ncpus=1:mem=1048576kb)+\" % (self.nD,) + \\\n            \"(%s:ncpus=1:mem=1048576kb)\" % (self.nE,)\n        self.jobA_isel_esc = self.jobA_iselect.replace(\"+\", r\"\\+\")\n        self.jobA_iexec_host1_esc = self.jobA_iexec_host1.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        self.jobA_iexec_host2_esc = self.jobA_iexec_host2.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        self.jobA_iexec_host3_esc = self.jobA_iexec_host3.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        self.jobA_iexec_vnode1_esc = 
self.jobA_iexec_vnode1.replace(\n            \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\n            \")\", r\"\\)\").replace(\"+\", r\"\\+\")\n        self.jobA_iexec_vnode2_esc = self.jobA_iexec_vnode2.replace(\n            \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\n            \")\", r\"\\)\").replace(\"+\", r\"\\+\")\n        self.jobA_iexec_vnode3_esc = self.jobA_iexec_vnode3.replace(\n            \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\n            \")\", r\"\\)\").replace(\"+\", r\"\\+\")\n\n        # expected values version 1 upon successful job launch\n        self.jobA_select = \\\n            \"1:ncpus=1:mem=1gb+1:ncpus=1:mem=1gb+1:ncpus=1:mem=1gb\"\n        self.jobA_schedselect = self.jobA_select\n        self.jobA_exec_host1 = \"%s/0+%s/0+%s/0\" % (\n            self.nA, self.nB, self.nD)\n        self.jobA_exec_host2 = \"%s/1+%s/1+%s/1\" % (\n            self.nA, self.nB, self.nD)\n        self.jobA_exec_host3 = \"%s/2+%s/2+%s/2\" % (\n            self.nA, self.nB, self.nD)\n        self.jobA_exec_vnode1 = \\\n            \"(%s:ncpus=1:mem=1048576kb)+\" % (self.nAv0,) + \\\n            \"(%s:ncpus=1:mem=1048576kb)+\" % (self.nB,) + \\\n            \"(%s:ncpus=1:mem=1048576kb)\" % (self.nD,)\n        self.jobA_exec_vnode2 = \\\n            \"(%s:ncpus=1:mem=1048576kb)+\" % (self.nAv1,) + \\\n            \"(%s:ncpus=1:mem=1048576kb)+\" % (self.nBv0,) + \\\n            \"(%s:ncpus=1:mem=1048576kb)\" % (self.nD,)\n        self.jobA_exec_vnode3 = \\\n            \"(%s:ncpus=1:mem=1048576kb)+\" % (self.nAv2,) + \\\n            \"(%s:ncpus=1:mem=1048576kb)+\" % (self.nBv1,) + \\\n            \"(%s:ncpus=1:mem=1048576kb)\" % (self.nD,)\n\n        self.jobA_sel_esc = self.jobA_select.replace(\"+\", r\"\\+\")\n        self.jobA_exec_host1_esc = self.jobA_exec_host1.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", 
r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        self.jobA_exec_host2_esc = self.jobA_exec_host2.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        self.jobA_exec_host3_esc = self.jobA_exec_host3.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        self.jobA_exec_vnode1_esc = self.jobA_exec_vnode1.replace(\n            \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\n            \")\", r\"\\)\").replace(\"+\", r\"\\+\")\n        self.jobA_exec_vnode2_esc = self.jobA_exec_vnode2.replace(\n            \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\n            \")\", r\"\\)\").replace(\"+\", r\"\\+\")\n        self.jobA_exec_vnode3_esc = self.jobA_exec_vnode3.replace(\n            \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\n            \")\", r\"\\)\").replace(\"+\", r\"\\+\")\n        self.script['jobA'] = \"\"\"\n#PBS -J 1-3\n#PBS -l select=%s\n#PBS -l place=%s\n#PBS -S /bin/bash\necho 'HOSTNAME TESTS'\necho 'pbsdsh -n 0 hostname'\npbsdsh -n 0 -- hostname -s\necho 'pbsdsh -n 1 hostname'\npbsdsh -n 1 -- hostname -s\necho 'pbsdsh -n 2 hostname'\npbsdsh -n 2 -- hostname -s\nsleep 180\n\"\"\" % (self.jobA_oselect, self.jobA_place)\n\n        self.script['job1_2'] = \"\"\"\n#PBS -l select=%s\n#PBS -l place=%s\n#PBS -W umask=022\n#PBS -S 
/bin/bash\necho \"$PBS_NODEFILE\"\ncat $PBS_NODEFILE\necho 'FIB TESTS'\necho 'pbsdsh -n 2 fib 37'\npbsdsh -n 2 -- %s\necho 'fib 37'\n%s\necho 'HOSTNAME TESTS'\necho 'pbsdsh -n 0 hostname'\npbsdsh -n 0 -- hostname -s\necho 'pbsdsh -n 2 hostname'\npbsdsh -n 2 -- hostname -s\n\"\"\" % (self.job1_oselect, self.job1_place, FIB37, FIB37)\n\n        self.script['job1_3'] = \"\"\"\n#PBS -l select=%s\n#PBS -l place=%s\n#PBS -W umask=022\n#PBS -S /bin/bash\necho \"$PBS_NODEFILE\"\ncat $PBS_NODEFILE\necho 'FIB TESTS'\necho 'pbsdsh -n 2 fib 40'\npbsdsh -n 2 -- %s\necho 'fib 40'\n%s\necho 'HOSTNAME TESTS'\necho 'pbsdsh -n 0 hostname'\npbsdsh -n 0 -- hostname -s\necho 'pbsdsh -n 2 hostname'\npbsdsh -n 2 -- hostname -s\n\"\"\" % (self.job1_oselect, self.job1_place, FIB40, FIB40)\n\n        self.script['job1_4'] = \"\"\"\n#PBS -l select=%s\n#PBS -l place=%s\n#PBS -W umask=022\n#PBS -S /bin/bash\necho \"$PBS_NODEFILE\"\ncat $PBS_NODEFILE\necho 'FIB TESTS'\necho 'pbsdsh -n 1 fib 37'\npbsdsh -n 1 -- %s\necho 'pbsdsh -n 2 fib 37'\npbsdsh -n 2 -- %s\necho 'pbsdsh -n 3 fib 37'\npbsdsh -n 3 -- %s\necho 'fib 37'\n%s\necho 'HOSTNAME TESTS'\necho 'pbsdsh -n 0 hostname'\npbsdsh -n 0 -- hostname -s\necho 'pbsdsh -n 1 hostname'\npbsdsh -n 1 -- hostname -s\necho 'pbsdsh -n 2 hostname'\npbsdsh -n 2 -- hostname -s\necho 'pbsdsh -n 3 hostname'\npbsdsh -n 3 -- hostname -s\necho 'PBS_NODEFILE tests'\nfor h in `cat $PBS_NODEFILE`\ndo\n    echo \"HOST=$h\"\n    echo \"pbs_tmrsh $h hostname\"\n    pbs_tmrsh $h hostname -s\ndone\n\"\"\" % (self.job1_oselect, self.job1_place, FIB37, FIB37, FIB37, FIB37)\n\n        # original select spec\n        self.job2_oselect = \"ncpus=3:mem=2gb+ncpus=3:mem=2gb+ncpus=0:mem=2gb\"\n        self.job2_place = \"scatter\"\n        # incremented values at job start and just before actual launch\n        self.job2_iselect = \\\n            \"1:ncpus=3:mem=2gb+2:ncpus=3:mem=2gb+2:ncpus=0:mem=2gb\"\n        self.job2_ischedselect = self.job2_iselect\n        
self.job2_iexec_host = \"%s/0*0+%s/0*0+%s/0*3+%s/0*0+%s/0*0\" % (\n            self.nA, self.nB, self.nD, self.nC, self.nE)\n        self.job2_iexec_vnode = \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nAv0,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.nAv1,) + \\\n            \"%s:ncpus=1)+\" % (self.nAv2) + \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nB,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.nBv0,) + \\\n            \"%s:ncpus=1)+\" % (self.nBv1,) + \\\n            \"(%s:ncpus=3:mem=2097152kb)+\" % (self.nD,) + \\\n            \"(%s:ncpus=0:mem=2097152kb)+\" % (self.nC,) + \\\n            \"(%s:mem=1048576kb:ncpus=0+\" % (self.nE,) + \\\n            \"%s:mem=1048576kb)\" % (self.nEv0,)\n        self.job2_isel_esc = self.job2_iselect.replace(\"+\", r\"\\+\")\n        self.job2_iexec_host_esc = self.job2_iexec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        self.job2_iexec_vnode_esc = self.job2_iexec_vnode.replace(\n            \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\n            \")\", r\"\\)\").replace(\"+\", r\"\\+\")\n\n        # expected values version upon successful job launch\n        self.job2_select = \\\n            \"1:ncpus=3:mem=2gb+1:ncpus=3:mem=2gb+1:ncpus=0:mem=2gb\"\n        self.job2_schedselect = self.job2_select\n        self.job2_exec_host = \"%s/0*0+%s/0*3+%s/0*0\" % (\n            self.nA, self.nD, self.nE)\n\n        # ncpus=0 assigned hosts are not listed in $PBS_NODEFILE\n        self.job2_exec_host_nfile = \"%s/0*0+%s/0*3\" % (\n            self.nA, self.nD)\n\n        self.job2_exec_vnode = \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nAv0,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.nAv1,) + \\\n            \"%s:ncpus=1)+\" % (self.nAv2) + \\\n            \"(%s:ncpus=3:mem=2097152kb)+\" % (self.nD,) + \\\n            
\"(%s:mem=1048576kb+\" % (self.nE,) + \\\n            \"%s:mem=1048576kb)\" % (self.nEv0,)\n\n        self.job2_sel_esc = self.job2_select.replace(\"+\", r\"\\+\")\n        self.job2_exec_host_esc = self.job2_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        self.job2_exec_vnode_esc = self.job2_exec_vnode.replace(\n            \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\n            \")\", r\"\\)\").replace(\"+\", r\"\\+\")\n\n        self.script['job2'] = \\\n            \"#PBS -l select=\" + self.job2_oselect + \"\\n\" + \\\n            \"#PBS -l place=\" + self.job2_place + \"\\n\" + \\\n            SLEEP_CMD + \" 60\\n\"\n\n        # Job with mpiprocs and ompthreads requested\n        self.job3_oselect = \\\n            \"ncpus=3:mem=2gb:mpiprocs=3:ompthreads=2+\" + \\\n            \"ncpus=3:mem=2gb:mpiprocs=3:ompthreads=3+\" + \\\n            \"ncpus=2:mem=2gb:mpiprocs=2:ompthreads=2\"\n        self.job3_place = \"scatter\"\n        # incremented values at job start and just before actual launch\n        self.job3_iselect = \\\n            \"1:ncpus=3:mem=2gb:mpiprocs=3:ompthreads=2+\" + \\\n            \"2:ncpus=3:mem=2gb:mpiprocs=3:ompthreads=3+\" + \\\n            \"2:ncpus=2:mem=2gb:mpiprocs=2:ompthreads=2\"\n        self.job3_ischedselect = self.job3_iselect\n        self.job3_iexec_host = \\\n            \"%s/0*0+%s/0*0+%s/0*3+%s/0*2+%s/0*0\" % (\n                self.nA, self.nB, self.nD, self.nC, self.nE)\n        self.job3_iexec_vnode = \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nAv0,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.nAv1,) + \\\n            \"%s:ncpus=1)+\" % (self.nAv2) + \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nB,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.nBv0,) + \\\n            \"%s:ncpus=1)+\" % (self.nBv1,) + \\\n            
\"(%s:ncpus=3:mem=2097152kb)+\" % (self.nD,) + \\\n            \"(%s:ncpus=2:mem=2097152kb)+\" % (self.nC,) + \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nE,) + \\\n            \"%s:mem=1048576kb:ncpus=1)\" % (self.nEv0,)\n\n        # expected values upon successful job launch\n        self.job3_select = \\\n            \"1:ncpus=3:mem=2gb:mpiprocs=3:ompthreads=2+\" + \\\n            \"1:ncpus=3:mem=2gb:mpiprocs=3:ompthreads=3+\" + \\\n            \"1:ncpus=2:mem=2gb:mpiprocs=2:ompthreads=2\"\n\n        self.job3_schedselect = self.job3_select\n        self.job3_exec_host = \"%s/0*0+%s/0*3+%s/0*0\" % (\n            self.nA, self.nD, self.nE)\n        self.job3_exec_vnode = \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nAv0,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.nAv1,) + \\\n            \"%s:ncpus=1)+\" % (self.nAv2) + \\\n            \"(%s:ncpus=3:mem=2097152kb)+\" % (self.nD,) + \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nE,) + \\\n            \"%s:mem=1048576kb:ncpus=1)\" % (self.nEv0,)\n\n        self.job3_sel_esc = self.job3_select.replace(\"+\", r\"\\+\")\n        self.job3_exec_host_esc = self.job3_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        self.job3_exec_vnode_esc = self.job3_exec_vnode.replace(\n            \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\n            \")\", r\"\\)\").replace(\"+\", r\"\\+\")\n\n        self.job3_isel_esc = self.job3_iselect.replace(\"+\", r\"\\+\")\n        self.job3_iexec_host_esc = self.job3_iexec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        self.job3_iexec_vnode_esc = self.job3_iexec_vnode.replace(\n            \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\n            \")\", 
r\"\\)\").replace(\"+\", r\"\\+\")\n\n        self.script['job3'] = \\\n            \"#PBS -l select=\" + self.job3_oselect + \"\\n\" + \\\n            \"#PBS -l place=\" + self.job3_place + \"\\n\" + \\\n            SLEEP_CMD + \" 300\\n\"\n\n        self.job4_oselect = \"ncpus=3:mem=2gb+ncpus=3:mem=2gb+ncpus=2:mem=2gb\"\n        self.job4_place = \"scatter:excl\"\n        self.job4_iselect = \\\n            \"1:ncpus=3:mem=2gb+2:ncpus=3:mem=2gb+2:ncpus=2:mem=2gb\"\n        self.job4_ischedselect = self.job4_iselect\n        self.job4_iexec_host = \\\n            \"%s/0*0+%s/0*0+%s/0*3+%s/0*2+%s/0*0\" % (\n                self.nA, self.nB, self.nD, self.nC, self.nE)\n        self.job4_iexec_vnode = \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nAv0,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.nAv1,) + \\\n            \"%s:ncpus=1)+\" % (self.nAv2) + \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nB,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.nBv0,) + \\\n            \"%s:ncpus=1)+\" % (self.nBv1,) + \\\n            \"(%s:ncpus=3:mem=2097152kb)+\" % (self.nD,) + \\\n            \"(%s:ncpus=2:mem=2097152kb)+\" % (self.nC,) + \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nE,) + \\\n            \"%s:mem=1048576kb:ncpus=1)\" % (self.nEv0,)\n\n        # expected values upon successful job launch\n        self.job4_select = \\\n            \"1:ncpus=3:mem=2gb+1:ncpus=3:mem=2gb+1:ncpus=2:mem=2gb\"\n        self.job4_schedselect = \"1:ncpus=3:mem=2gb+\" + \\\n            \"1:ncpus=3:mem=2gb+1:ncpus=2:mem=2gb\"\n        self.job4_exec_host = \"%s/0*0+%s/0*3+%s/0*0\" % (\n            self.nA, self.nD, self.nE)\n        self.job4_exec_vnode = \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nAv0,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.nAv1,) + \\\n            \"%s:ncpus=1)+\" % (self.nAv2) + \\\n            \"(%s:ncpus=3:mem=2097152kb)+\" % 
(self.nD,) + \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nE,) + \\\n            \"%s:mem=1048576kb:ncpus=1)\" % (self.nEv0,)\n\n        self.script['job4'] = \\\n            \"#PBS -l select=\" + self.job4_oselect + \"\\n\" + \\\n            \"#PBS -l place=\" + self.job4_place + \"\\n\" + \\\n            SLEEP_CMD + \" 300\\n\"\n\n        self.job4_sel_esc = self.job4_select.replace(\"+\", r\"\\+\")\n        self.job4_exec_host_esc = self.job4_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        self.job4_exec_vnode_esc = self.job4_exec_vnode.replace(\n            \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\n            \")\", r\"\\)\").replace(\"+\", r\"\\+\")\n\n        self.job4_isel_esc = self.job4_iselect.replace(\"+\", r\"\\+\")\n        self.job4_iexec_host_esc = self.job4_iexec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        self.job4_iexec_vnode_esc = self.job4_iexec_vnode.replace(\n            \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\n            \")\", r\"\\)\").replace(\"+\", r\"\\+\")\n\n        self.job5_oselect = \"ncpus=3:mem=2gb+ncpus=3:mem=2gb+ncpus=2:mem=2gb\"\n        self.job5_place = \"free\"\n        self.job5_iselect = \\\n            \"1:ncpus=3:mem=2gb+2:ncpus=3:mem=2gb+2:ncpus=2:mem=2gb\"\n        self.job5_ischedselect = self.job5_iselect\n        self.job5_iexec_host = \\\n            \"%s/0*0+%s/0*0+%s/0*3+%s/1*0+%s/0*2\" % (\n                self.nA, self.nB, self.nD, self.nB, self.nC)\n        self.job5_iexec_vnode = \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nAv0,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.nAv1,) + \\\n            \"%s:ncpus=1)+\" % (self.nAv2) + \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nB,) + \\\n  
          \"%s:mem=1048576kb:ncpus=1+\" % (self.nBv0,) + \\\n            \"%s:ncpus=1)+\" % (self.nBv1,) + \\\n            \"(%s:ncpus=3:mem=2097152kb)+\" % (self.nD,) + \\\n            \"(%s:mem=1048576kb+\" % (self.nBv1,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.nBv2,) + \\\n            \"%s:ncpus=1)+\" % (self.nBv3,) + \\\n            \"(%s:ncpus=2:mem=2097152kb)\" % (self.nC,)\n\n        # expected values upon successful job launch\n        self.job5_select = \\\n            \"1:ncpus=3:mem=2gb+1:ncpus=3:mem=2gb+1:ncpus=1:mem=1gb\"\n        self.job5_schedselect = self.job5_select\n        self.job5_exec_host = \"%s/0*0+%s/0*0+%s/1*0\" % (\n            self.nA, self.nB, self.nB)\n        self.job5_exec_vnode = \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nAv0,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.nAv1,) + \\\n            \"%s:ncpus=1)+\" % (self.nAv2) + \\\n            \"(%s:mem=1048576kb:ncpus=1+\" % (self.nB,) + \\\n            \"%s:mem=1048576kb:ncpus=1+\" % (self.nBv0,) + \\\n            \"%s:ncpus=1)+\" % (self.nBv1,) + \\\n            \"(%s:mem=1048576kb+\" % (self.nBv1,) + \\\n            \"%s:ncpus=1)\" % (self.nBv2,)\n\n        self.script['job5'] = \\\n            \"#PBS -l select=\" + self.job5_oselect + \"\\n\" + \\\n            \"#PBS -l place=\" + self.job5_place + \"\\n\" + \\\n            SLEEP_CMD + \" 300\\n\"\n\n        self.job5_sel_esc = self.job5_select.replace(\"+\", r\"\\+\")\n        self.job5_exec_host_esc = self.job5_exec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        self.job5_exec_vnode_esc = self.job5_exec_vnode.replace(\n            \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\n            \")\", r\"\\)\").replace(\"+\", r\"\\+\")\n\n        self.job5_isel_esc = self.job5_iselect.replace(\"+\", r\"\\+\")\n        self.job5_iexec_host_esc = 
self.job5_iexec_host.replace(\n            \"*\", r\"\\*\").replace(\"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\n                    \"+\", r\"\\+\")\n        self.job5_iexec_vnode_esc = self.job5_iexec_vnode.replace(\n            \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\n            \")\", r\"\\)\").replace(\"+\", r\"\\+\")\n\n        # queuejob hooks used throughout the test\n        self.qjob_hook_body = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"queuejob hook executed\")\n# Save current select spec in resource 'site'\ne.job.Resource_List[\"site\"] = str(e.job.Resource_List[\"select\"])\nnew_select = e.job.Resource_List[\"select\"].increment_chunks(1)\ne.job.Resource_List[\"select\"] = new_select\ne.job.tolerate_node_failures = \"job_start\"\n\"\"\"\n        self.qjob_hook_body2 = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"queuejob hook executed\")\n# Save current select spec in resource 'site'\ne.job.Resource_List[\"site\"] = str(e.job.Resource_List[\"select\"])\nnew_select = e.job.Resource_List[\"select\"].increment_chunks(1)\ne.job.Resource_List[\"select\"] = new_select\ne.job.tolerate_node_failures = \"all\"\n\"\"\"\n        # begin hooks used throughout the test\n        self.begin_hook_body = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"Executing begin\")\nlocalnode=pbs.get_local_nodename()\nif not e.job.in_ms_mom() and (localnode == '%s'):\n    e.reject(\"bad node\")\n\"\"\" % (self.nB,)\n        # The below hook may not really be doing anything, but is\n        # used in a test of the sister join job alarm time with\n        # the hook's alarm value.\n        self.begin_hook_body2 = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"Executing begin\")\nlocalnode=pbs.get_local_nodename()\n\"\"\"\n        self.begin_hook_body3 = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"Executing 
begin\")\nlocalnode=pbs.get_local_nodename()\nif not e.job.in_ms_mom() and (localnode == '%s'):\n    x\n\"\"\" % (self.nE,)\n\n        self.begin_hook_body4 = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"Executing begin\")\nlocalnode=pbs.get_local_nodename()\nif not e.job.in_ms_mom() and (localnode == '%s'):\n    e.reject(\"bad node\")\n\"\"\" % (self.nD,)\n\n        self.begin_hook_body5 = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"Executing begin\")\nlocalnode=pbs.get_local_nodename()\nif not e.job.in_ms_mom() and (localnode == '%s'):\n    e.reject(\"bad node\")\n\"\"\" % (self.nC,)\n\n        # prologue hooks used throughout the test\n        self.prolo_hook_body = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"Executing prolo\")\n\nfor vn in e.vnode_list:\n    v = e.vnode_list[vn]\n    pbs.logjobmsg(e.job.id, \"prolo: found vnode_list[\" + v.name + \"]\")\n\nfor vn in e.vnode_list_fail:\n    v = e.vnode_list_fail[vn]\n    pbs.logjobmsg(e.job.id, \"prolo: found vnode_list_fail[\" + v.name + \"]\")\nlocalnode=pbs.get_local_nodename()\nif not e.job.in_ms_mom() and (localnode == '%s'):\n    e.reject(\"bad node\")\n\"\"\" % (self.nC,)\n\n        self.prolo_hook_body2 = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"Executing prologue\")\nlocalnode=pbs.get_local_nodename()\nif not e.job.in_ms_mom() and (localnode == '%s'):\n    x\n\"\"\" % (self.nC,)\n\n        self.prolo_hook_body3 = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"Executing prolo\")\n\nfor vn in e.vnode_list:\n    v = e.vnode_list[vn]\n    pbs.logjobmsg(e.job.id, \"prolo: found vnode_list[\" + v.name + \"]\")\n\nfor vn in e.vnode_list_fail:\n    v = e.vnode_list_fail[vn]\n    pbs.logjobmsg(e.job.id, \"prolo: found vnode_list_fail[\" + v.name + \"]\")\nlocalnode=pbs.get_local_nodename()\n\"\"\"\n        self.prolo_hook_body4 = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"Executing prolo\")\n\nfor vn 
in e.vnode_list:\n    v = e.vnode_list[vn]\n    pbs.logjobmsg(e.job.id, \"prolo: found vnode_list[\" + v.name + \"]\")\n\nfor vn in e.vnode_list_fail:\n    v = e.vnode_list_fail[vn]\n    pbs.logjobmsg(e.job.id, \"prolo: found vnode_list_fail[\" + v.name + \"]\")\nlocalnode=pbs.get_local_nodename()\n\nif e.job.in_ms_mom():\n    pj = e.job.release_nodes(keep_select=e.job.Resource_List[\"site\"])\n    if pj != None:\n        pbs.logjobmsg(e.job.id, \"prolo: job.exec_vnode=%s\" % (pj.exec_vnode,))\n        pbs.logjobmsg(e.job.id, \"prolo: job.exec_host=%s\" % (pj.exec_host,))\n        pbs.logjobmsg(e.job.id,\n                      \"prolo: job.schedselect=%s\" % (pj.schedselect,))\n    else:\n        e.job.Hold_Types = pbs.hold_types(\"s\")\n        e.job.rerun()\n        e.reject(\"unsuccessful at PROLOGUE\")\n\"\"\"\n        self.prolo_hook_body5 = \"\"\"\nimport pbs\nimport time\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"Executing prolo\")\n\nfor vn in e.vnode_list:\n    v = e.vnode_list[vn]\n    pbs.logjobmsg(e.job.id, \"prolo: found vnode_list[\" + v.name + \"]\")\n\nfor vn in e.vnode_list_fail:\n    v = e.vnode_list_fail[vn]\n    pbs.logjobmsg(e.job.id, \"prolo: found vnode_list_fail[\" + v.name + \"]\")\nif not e.job.in_ms_mom():\n    pbs.logjobmsg(e.job.id, \"sleeping for 30 secs\")\n    time.sleep(30)\n\"\"\"\n\n        # launch hooks used throughout the test\n        self.launch_hook_body = \"\"\"\nimport pbs\ne=pbs.event()\n\nif 'PBS_NODEFILE' not in e.env:\n    e.accept()\n\npbs.logmsg(pbs.LOG_DEBUG, \"Executing launch\")\n\nfor vn in e.vnode_list:\n    v = e.vnode_list[vn]\n    pbs.logjobmsg(e.job.id, \"launch: found vnode_list[\" + v.name + \"]\")\n\nfor vn in e.vnode_list_fail:\n    v = e.vnode_list_fail[vn]\n    pbs.logjobmsg(e.job.id, \"launch: found vnode_list_fail[\" + v.name + \"]\")\nif e.job.in_ms_mom():\n    pj = e.job.release_nodes(keep_select=e.job.Resource_List[\"site\"])\n    if pj != None:\n        pbs.logjobmsg(e.job.id, \"launch: 
job.exec_vnode=%s\" % (pj.exec_vnode,))\n        pbs.logjobmsg(e.job.id, \"launch: job.exec_host=%s\" % (pj.exec_host,))\n        pbs.logjobmsg(e.job.id,\n                      \"launch: job.schedselect=%s\" % (pj.schedselect,))\n    else:\n        e.job.Hold_Types = pbs.hold_types(\"s\")\n        e.job.rerun()\n        e.reject(\"unsuccessful at LAUNCH\")\n\"\"\"\n\n        self.launch_hook_body2 = \"\"\"\nimport pbs\ne=pbs.event()\n\nif 'PBS_NODEFILE' not in e.env:\n    e.accept()\n\npbs.logmsg(pbs.LOG_DEBUG, \"Executing launch\")\n\nfor vn in e.vnode_list:\n    v = e.vnode_list[vn]\n    pbs.logjobmsg(e.job.id, \"launch: found vnode_list[\" + v.name + \"]\")\n\nfor vn in e.vnode_list_fail:\n    v = e.vnode_list_fail[vn]\n    pbs.logjobmsg(e.job.id, \"launch: found vnode_list_fail[\" + v.name + \"]\")\nif e.job.in_ms_mom():\n    new_sel = \"ncpus=3:mem=2gb+ncpus=3:mem=2gb+ncpus=1:mem=1gb\"\n    pj = e.job.release_nodes(keep_select=new_sel)\n    if pj != None:\n        pbs.logjobmsg(e.job.id, \"launch: job.exec_vnode=%s\" % (pj.exec_vnode,))\n        pbs.logjobmsg(e.job.id, \"launch: job.exec_host=%s\" % (pj.exec_host,))\n        pbs.logjobmsg(e.job.id,\n                      \"launch: job.schedselect=%s\" % (pj.schedselect,))\n    else:\n        e.job.Hold_Types = pbs.hold_types(\"s\")\n        e.job.rerun()\n        e.reject(\"unsuccessful at LAUNCH\")\n\"\"\"\n        self.stime = time.time()\n\n    def tearDown(self):\n        self.momA.signal(\"-CONT\")\n        self.momB.signal(\"-CONT\")\n        self.momC.signal(\"-CONT\")\n        self.momD.signal(\"-CONT\")\n        self.momE.signal(\"-CONT\")\n        self.momA.unset_mom_config('$sister_join_job_alarm', False)\n        self.momA.unset_mom_config('$job_launch_delay', False)\n        a = {'state': (DECR, 'offline')}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.momA.shortname)\n        self.server.manager(MGR_CMD_SET, NODE, a, self.momB.shortname)\n        self.server.manager(MGR_CMD_SET, NODE, 
a, self.momC.shortname)\n        self.server.manager(MGR_CMD_SET, NODE, a, self.momD.shortname)\n        self.server.manager(MGR_CMD_SET, NODE, a, self.momE.shortname)\n        TestFunctional.tearDown(self)\n        # Delete managers and operators if added\n        attrib = ['operators', 'managers']\n        self.server.manager(MGR_CMD_UNSET, SERVER, attrib)\n\n    @timeout(400)\n    def test_t1(self):\n        \"\"\"\n        Test tolerating job_start 2 node failures after adding\n             extra nodes to the job, pruning\n             job's assigned resources to match up to the original\n             select spec, and offlining the failed vnodes.\n\n             1.\tHave a job that has been submitted with a select\n                spec of 2 super-chunks say (A) and (B), and 1 chunk\n                of (C), along with place spec of \"scatter\",\n                resulting in the following assignment:\n\n                    exec_vnode = (A)+(B)+(C)\n\n                and -Wtolerate_node_failures=job_start\n\n             2. Have a queuejob hook that adds 1 extra node to each\n                chunk (except the MS (first) chunk), resulting in the\n                assignment:\n\n                    exec_vnode = (A)+(B)+(D)+(C)+(E)\n\n                where D mirrors super-chunk B specs while E mirrors\n                 chunk C.\n\n             3. Have an execjob_begin hook that fails (causes rejection)\n                when executed by mom managing vnodes in (B).\n\n             4. Have an execjob_prologue hook that fails (causes rejection)\n                when executed by mom managing vnodes in (C).\n\n             5. Then create an execjob_launch hook that offlines the failed\n                nodes (B) and (C), and prunes back the job's exec_vnode\n                assignment back to satisfying the original 3-node select\n                spec, choosing only healthy nodes.\n\n             6. Result:\n\n                a. 
This results in the following reassignment of chunks:\n\n                   exec_vnode = (A)+(D)+(E)\n\n                   since (B) and (C) contain vnodes from failed moms.\n\n                b. vnodes in (B) and (C) are now showing a state of\n                   \"offline\".\n                c. The accounting log start record 'S' will reflect the\n                   select request where additional chunks were added, while\n                   the secondary start record 's' will reflect the assigned\n                   resources after pruning the original select request via\n                   the pbs.release_nodes(keep_select=...) call\n                   inside execjob_launch hook.\n        \"\"\"\n        # instantiate queuejob hook\n        hook_event = \"queuejob\"\n        hook_name = \"qjob\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.qjob_hook_body)\n\n        # instantiate execjob_begin hook\n        hook_event = \"execjob_begin\"\n        hook_name = \"begin\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.begin_hook_body)\n\n        # instantiate execjob_prologue hook\n        hook_event = \"execjob_prologue\"\n        hook_name = \"prolo\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.prolo_hook_body)\n\n        # instantiate execjob_launch hook\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\n\nif 'PBS_NODEFILE' not in e.env:\n    e.accept()\n\npbs.logmsg(pbs.LOG_DEBUG, \"Executing launch\")\n\nfor vn in e.vnode_list:\n    v = e.vnode_list[vn]\n    pbs.logjobmsg(e.job.id, \"launch: found vnode_list[\" + v.name + \"]\")\n\nfor vn in e.vnode_list_fail:\n    v = e.vnode_list_fail[vn]\n    pbs.logjobmsg(e.job.id, \"launch:offline vnode_list_fail[\" + v.name + \"]\")\n    v.state = pbs.ND_OFFLINE\nif e.job.in_ms_mom():\n    pj = 
e.job.release_nodes(keep_select=e.job.Resource_List[\"site\"])\n    if pj != None:\n        pbs.logjobmsg(e.job.id, \"launch: job.exec_vnode=%s\" % (pj.exec_vnode,))\n        pbs.logjobmsg(e.job.id, \"launch: job.exec_host=%s\" % (pj.exec_host,))\n        pbs.logjobmsg(e.job.id,\n                      \"launch: job.schedselect=%s\" % (pj.schedselect,))\n    else:\n        e.job.Hold_Types = pbs.hold_types(\"s\")\n        e.job.rerun()\n        e.reject(\"unsuccessful at LAUNCH\")\n\"\"\"\n        hook_event = \"execjob_launch\"\n        hook_name = \"launch\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, hook_body)\n\n        # First, turn off scheduling\n        a = {'scheduling': 'false'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        jid = self.create_and_submit_job('job1')\n        # Job gets queued and reflects the incremented values from queuejob\n        # hook\n        self.server.expect(JOB, {'job_state': 'Q',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '10gb',\n                                 'Resource_List.ncpus': 13,\n                                 'Resource_List.nodect': 5,\n                                 'Resource_List.select': self.job1_iselect,\n                                 'Resource_List.site': self.job1_oselect,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_ischedselect},\n                           id=jid, attrop=PTL_AND)\n\n        a = {'scheduling': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Job eventually launches reflecting the pruned back values\n        # to the original select spec\n        # There's a max_attempts=60 for it would take up to 60 seconds\n        # for primary mom to wait for the sisters to join\n        # (default $sister_join_job_alarm of 
30 seconds) and to wait for\n        # sisters to execjob_prologue hooks (default $job_launch_delay\n        # value of 30 seconds)\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode},\n                           id=jid, interval=1, attrop=PTL_AND, max_attempts=60)\n\n        thisjob = self.server.status(JOB, id=jid)\n        if thisjob:\n            job_output_file = thisjob[0]['Output_Path'].split(':')[1]\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status(\n            [self.nAv0, self.nAv1, self.nE, self.nEv0],\n            'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.nAv2],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn3 = \"%s/0, %s/1, %s/2\" % (jid, jid, jid)\n        self.match_vnode_status([self.nD], 'free', jobs_assn3,\n                                3, '2097152kb')\n\n        self.match_vnode_status([self.nA, self.nAv3, self.nBv2, self.nBv3,\n                                 self.nEv1, self.nEv2, self.nEv3], 'free')\n\n        self.match_vnode_status([self.nB, self.nBv0, self.nBv1, self.nC],\n                                'offline')\n\n        # Check server/queue counts.\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 8,\n                                    'resources_assigned.mem': 
'6291456kb'},\n                           attrop=PTL_AND)\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 8,\n                                   'resources_assigned.mem': '6291456kb'},\n                           id='workq', attrop=PTL_AND)\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1_exec_host))\n\n        # Verify mom_logs\n        self.momA.log_match(\n            \"Job;%s;job_start_error.+from node %s.+could not JOIN_JOB\" % (\n                jid, self.hostB), n=10, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;ignoring error from %s.+as job \" % (jid, self.hostB) +\n            \"is tolerant of node failures\",\n            regexp=True, n=10)\n\n        self.momA.log_match(\n            \"Job;%s;job_start_error.+from node %s.+\" % (jid, self.hostC) +\n            \"could not IM_EXEC_PROLOGUE\", n=10, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;ignoring error from %s.+\" % (jid, self.hostC) +\n            \"as job is tolerant of node failures\", n=10, regexp=True)\n        # Check vnode_list[] parameter in execjob_prologue hook\n        vnode_list = [self.nAv0, self.nAv1, self.nAv2,\n                      self.nB, self.nBv0, self.nBv1,\n                      self.nC, self.nD, self.nE, self.nEv0]\n        for vn in vnode_list:\n            self.momA.log_match(\"Job;%s;prolo: found vnode_list[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check vnode_list_fail[] parameter in execjob_prologue hook\n        vnode_list_fail = [self.nB, self.nBv0, self.nBv1]\n        for vn in vnode_list_fail:\n            self.momA.log_match(\"Job;%s;prolo: found vnode_list_fail[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check vnode_list[] parameter in execjob_launch hook\n        vnode_list = [self.nAv0, self.nAv1, self.nAv2,\n                      self.nB, self.nBv0, self.nBv1,\n                      self.nC, self.nD, 
self.nE, self.nEv0]\n        for vn in vnode_list:\n            self.momA.log_match(\"Job;%s;launch: found vnode_list[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check vnode_list_fail[] parameter in execjob_launch hook\n        vnode_list_fail = [self.nB, self.nBv0, self.nBv1, self.nC]\n        for vn in vnode_list_fail:\n            self.momA.log_match(\n                \"Job;%s;launch:offline vnode_list_fail[%s]\" % (jid, vn), n=10)\n\n        # Check result of pbs.event().job.release_nodes(keep_select) call\n        self.momA.log_match(\"Job;%s;launch: job.exec_vnode=%s\" % (\n            jid, self.job1_exec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;launch: job.schedselect=%s\" % (\n            jid, self.job1_schedselect), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned from exec_vnode=%s\" % (\n            jid, self.job1_iexec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned to exec_vnode=%s\" % (\n            jid, self.job1_exec_vnode), n=10)\n\n        # Check accounting_logs\n        self.match_accounting_log('S', jid, self.job1_iexec_host_esc,\n                                  self.job1_iexec_vnode_esc, \"10gb\", 13, 5,\n                                  self.job1_place,\n                                  self.job1_isel_esc)\n\n        self.match_accounting_log('s', jid, self.job1_exec_host_esc,\n                                  self.job1_exec_vnode_esc,\n                                  \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  self.job1_sel_esc)\n\n        self.momA.log_match(\"Job;%s;task.+started, hostname\" % (jid,),\n                            n=10, interval=5, regexp=True)\n\n        self.momA.log_match(\"Job;%s;copy file request received\" % (jid,),\n                            n=10, interval=5)\n\n        # validate output\n        expected_out = \"\"\"/var/spool/pbs/aux/%s\n%s\n%s\n%s\nFIB TESTS\npbsdsh -n 1 fib 37\n%d\npbsdsh 
-n 2 fib 37\n%d\nfib 37\n%d\nHOSTNAME TESTS\npbsdsh -n 0 hostname\n%s\npbsdsh -n 1 hostname\n%s\npbsdsh -n 2 hostname\n%s\nPBS_NODEFILE tests\nHOST=%s\npbs_tmrsh %s hostname\n%s\nHOST=%s\npbs_tmrsh %s hostname\n%s\nHOST=%s\npbs_tmrsh %s hostname\n%s\n\"\"\" % (jid, self.momA.hostname, self.momD.hostname, self.momE.hostname,\n            self.fib37_value, self.fib37_value, self.fib37_value,\n            self.momA.shortname, self.momD.shortname, self.momE.shortname,\n            self.momA.hostname, self.momA.hostname, self.momA.shortname,\n            self.momD.hostname, self.momD.hostname, self.momD.shortname,\n            self.momE.hostname, self.momE.hostname, self.momE.shortname)\n\n        self.logger.info(\"expected out=%s\" % (expected_out,))\n        job_out = \"\"\n        with open(job_output_file, 'r') as fd:\n            job_out = fd.read()\n            self.logger.info(\"job_out=%s\" % (job_out,))\n\n        self.assertEqual(job_out, expected_out)\n\n    @timeout(400)\n    def test_t2(self):\n        \"\"\"\n        Test tolerating job_start 2 node failures after adding\n             extra nodes to the job, pruning\n             job's assigned resources to match up to the original\n             select spec, without offlining the failed vnodes, and\n             specifying mom config file options 'sister_join_job_alarm' and\n             'job_launch_delay'.\n\n             1. Set $sister_join_job_alarm and $job_launch_delay values\n                in mom's config file.\n\n             2.\tSubmit a job that has been submitted with a select\n                spec of 2 super-chunks say (A) and (B), and 1 chunk\n                of (C), along with place spec of \"scatter\",\n                resulting in the following assignment:\n\n                    exec_vnode = (A)+(B)+(C)\n\n                and -Wtolerate_node_failures=job_start\n\n             3. 
Have a queuejob hook that adds 1 extra node to each\n                chunk (except the MS (first) chunk), resulting in the\n                assignment:\n\n                    exec_vnode = (A)+(B)+(D)+(C)+(E)\n\n                where D mirrors super-chunk B specs while E mirrors\n                chunk C.\n\n             4. Prior to submitting a job, suspend mom B. When job runs,\n                momB won't be able to join the job, so it won't be considered\n                as a \"healthy\" mom.\n\n             5. Have an execjob_begin hook that doesn't fail.\n\n             6. Have an execjob_prologue hook that fails (causes rejection)\n                when executed by mom managing vnodes in (C).\n\n             7. Have an execjob_launch hook that prunes the\n                job's exec_vnode assignment back to satisfy the original\n                3-node select spec, choosing only healthy nodes.\n\n             8. Result:\n\n                a. This results in the following reassignment of chunks:\n\n                   exec_vnode = (A)+(D)+(E)\n\n                   since (B) and (C) contain vnodes from failed moms.\n\n                b. vnodes in (B) and (C) are now showing a state of \"free\".\n\n                c. Mom's log file will show the explicit values set for\n                   $sister_join_job_alarm and $job_launch_delay.\n\n                d. The accounting log start record 'S' will reflect the\n                   select request where additional chunks were added, while\n                   the secondary start record 's' will reflect the assigned\n                   resources after pruning the original select request via\n                   the pbs.release_nodes(keep_select=...) 
call\n                   inside execjob_launch hook.\n        \"\"\"\n        # set mom config options:\n        sis_join_alarm = 45\n        c = {'$sister_join_job_alarm': sis_join_alarm}\n        self.momA.add_config(c)\n\n        job_launch_delay = 40\n        c = {'$job_launch_delay': job_launch_delay}\n        self.momA.add_config(c)\n\n        self.momA.signal(\"-HUP\")\n\n        self.momA.log_match(\n            \"sister_join_job_alarm;%d\" % (sis_join_alarm,), max_attempts=5,\n            interval=5)\n        self.momA.log_match(\n            \"job_launch_delay;%d\" % (job_launch_delay,),\n            max_attempts=5, interval=5)\n\n        # instantiate queuejob hook\n        hook_event = \"queuejob\"\n        hook_name = \"qjob\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.qjob_hook_body)\n\n        # instantiate execjob_begin hook\n        hook_event = \"execjob_begin\"\n        hook_name = \"begin\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.begin_hook_body2)\n\n        # instantiate execjob_prologue hook\n        hook_event = \"execjob_prologue\"\n        hook_name = \"prolo\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.prolo_hook_body)\n\n        # instantiate execjob_launch hook\n        hook_event = \"execjob_launch\"\n        hook_name = \"launch\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.launch_hook_body)\n\n        # First, turn off scheduling\n        a = {'scheduling': 'false'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # temporarily suspend momB, simulating a failed mom host.\n        self.momB.signal(\"-STOP\")\n        jid = self.create_and_submit_job('job1')\n        # Job gets queued and reflects the incremented values from queuejob\n        # 
hook\n        self.server.expect(JOB, {'job_state': 'Q',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '10gb',\n                                 'Resource_List.ncpus': 13,\n                                 'Resource_List.nodect': 5,\n                                 'Resource_List.select': self.job1_iselect,\n                                 'Resource_List.site': self.job1_oselect,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_ischedselect},\n                           id=jid, attrop=PTL_AND)\n\n        # Set time to start scanning logs\n        stime = time.time()\n\n        a = {'scheduling': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Job eventually launches reflecting the pruned back values\n        # to the original select spec\n        # There's a max_attempts=100 since it could take up to 85 seconds\n        # for primary mom to wait for the sisters to join\n        # ($sister_join_job_alarm set to 45 seconds) and to wait for\n        # sisters to finish their execjob_prologue hooks\n        # ($job_launch_delay set to 40 seconds)\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode},\n                           id=jid, interval=1, attrop=PTL_AND,\n             
              max_attempts=100)\n\n        thisjob = self.server.status(JOB, id=jid)\n        if thisjob:\n            job_output_file = thisjob[0]['Output_Path'].split(':')[1]\n\n        # Verify the logs and make sure sister_join_job_alarm is honored\n        logs = self.mom.log_match(\n            \"Executing begin\",\n            allmatch=True, starttime=stime, max_attempts=8)\n        log1 = logs[0][1]\n        logs = self.mom.log_match(\n            \"Executing prolo\",\n            allmatch=True, starttime=stime, max_attempts=8)\n        log2 = logs[0][1]\n        tmp = log1.split(';')\n        # Convert the time into epoch time\n        time1 = int(self.logutils.convert_date_time(tmp[0]))\n        tmp = log2.split(';')\n        time2 = int(self.logutils.convert_date_time(tmp[0]))\n\n        diff = time2 - time1\n        self.logger.info(\n            \"Time diff between begin hook and prologue hook is \" +\n            str(diff) + \" seconds\")\n        # Leave a little wiggle room for slow systems\n        self.assertTrue((diff >= sis_join_alarm) and\n                        diff <= (sis_join_alarm + 5))\n\n        self.mom.log_match(\n            \"sister_join_job_alarm wait time %d secs exceeded\" % (\n                sis_join_alarm,), starttime=stime, max_attempts=8)\n\n        # Verify the logs and make sure job_launch_delay is honored\n        logs = self.mom.log_match(\n            \"Executing prolo\",\n            allmatch=True, starttime=stime, max_attempts=8)\n        log1 = logs[0][1]\n        logs = self.mom.log_match(\n            \"Executing launch\",\n            allmatch=True, starttime=stime, max_attempts=8)\n        log2 = logs[0][1]\n        tmp = log1.split(';')\n        # Convert the time into epoch time\n        time1 = int(self.logutils.convert_date_time(tmp[0]))\n        tmp = log2.split(';')\n        time2 = int(self.logutils.convert_date_time(tmp[0]))\n\n        diff = time2 - time1\n        self.logger.info(\"Time diff between 
prolo hook and launch hook is \" +\n                         str(diff) + \" seconds\")\n        # Leave a little wiggle room for slow systems\n        self.assertTrue((diff >= job_launch_delay) and\n                        diff <= (job_launch_delay + 3))\n\n        self.momA.log_match(\n            \"not all prologue hooks to sister moms completed, \" +\n            \"but job will proceed to execute\", n=10)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status(\n            [self.nAv0, self.nAv1, self.nE, self.nEv0],\n            'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.nAv2],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn3 = \"%s/0, %s/1, %s/2\" % (jid, jid, jid)\n        self.match_vnode_status([self.nD], 'free', jobs_assn3,\n                                3, '2097152kb')\n\n        self.match_vnode_status([self.nA, self.nAv3, self.nB, self.nBv0,\n                                 self.nBv1, self.nBv2, self.nBv3, self.nC,\n                                 self.nEv1, self.nEv2, self.nEv3], 'free')\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1_exec_host))\n\n        # Verify mom_logs\n        self.momA.log_match(\n            \"Job;%s;job_start_error.+from node %s.+\" % (jid, self.hostC) +\n            \"could not IM_EXEC_PROLOGUE\", n=10, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;ignoring error from %s.+\" % (jid, self.hostC) +\n            \"as job is tolerant of node failures\", n=10, regexp=True)\n        # Check vnode_list[] parameter in execjob_prologue hook\n        vnode_list = [self.nAv0, self.nAv1, self.nAv2,\n                      self.nB, self.nBv0, self.nBv1,\n                      self.nC, self.nD, self.nE, self.nEv0]\n        for vn in vnode_list:\n            self.momA.log_match(\"Job;%s;prolo: found vnode_list[%s]\" % (\n                     
           jid, vn), n=10)\n\n        # check server/queue counts\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 8,\n                                    'resources_assigned.mem': '6291456kb'},\n                           attrop=PTL_AND)\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 8,\n                                   'resources_assigned.mem': '6291456kb'},\n                           id='workq', attrop=PTL_AND)\n\n        # Check vnode_list[] parameter in execjob_launch hook\n        vnode_list = [self.nAv0, self.nAv1, self.nAv2,\n                      self.nB, self.nBv0, self.nBv1,\n                      self.nC, self.nD, self.nE, self.nEv0]\n        for vn in vnode_list:\n            self.momA.log_match(\"Job;%s;launch: found vnode_list[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check vnode_list_fail[] parameter in execjob_launch hook\n        vnode_list_fail = [self.nC]\n        for vn in vnode_list_fail:\n            self.momA.log_match(\"Job;%s;launch: found vnode_list_fail[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check result of pbs.event().job.release_nodes(keep_select) call\n        self.momA.log_match(\"Job;%s;launch: job.exec_vnode=%s\" % (\n            jid, self.job1_exec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;launch: job.schedselect=%s\" % (\n            jid, self.job1_schedselect), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned from exec_vnode=%s\" % (\n            jid, self.job1_iexec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned to exec_vnode=%s\" % (\n            jid, self.job1_exec_vnode), n=10)\n\n        # Check accounting_logs\n        self.match_accounting_log('S', jid, self.job1_iexec_host_esc,\n                                  self.job1_iexec_vnode_esc, \"10gb\", 13, 5,\n                                  self.job1_place,\n                                  self.job1_isel_esc)\n\n        
self.match_accounting_log('s', jid, self.job1_exec_host_esc,\n                                  self.job1_exec_vnode_esc,\n                                  \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  self.job1_sel_esc)\n        self.momA.log_match(\"Job;%s;task.+started, hostname\" % (jid,),\n                            n=10, interval=5, regexp=True)\n\n        self.momA.log_match(\"Job;%s;copy file request received\" % (jid,),\n                            n=10, interval=5)\n\n        # validate output\n        expected_out = \"\"\"/var/spool/pbs/aux/%s\n%s\n%s\n%s\nFIB TESTS\npbsdsh -n 1 fib 37\n%d\npbsdsh -n 2 fib 37\n%d\nfib 37\n%d\nHOSTNAME TESTS\npbsdsh -n 0 hostname\n%s\npbsdsh -n 1 hostname\n%s\npbsdsh -n 2 hostname\n%s\nPBS_NODEFILE tests\nHOST=%s\npbs_tmrsh %s hostname\n%s\nHOST=%s\npbs_tmrsh %s hostname\n%s\nHOST=%s\npbs_tmrsh %s hostname\n%s\n\"\"\" % (jid, self.momA.hostname, self.momD.hostname, self.momE.hostname,\n            self.fib37_value, self.fib37_value, self.fib37_value,\n            self.momA.shortname, self.momD.shortname, self.momE.shortname,\n            self.momA.hostname, self.momA.hostname, self.momA.shortname,\n            self.momD.hostname, self.momD.hostname, self.momD.shortname,\n            self.momE.hostname, self.momE.hostname, self.momE.shortname)\n\n        self.logger.info(\"expected out=%s\" % (expected_out,))\n        job_out = \"\"\n        with open(job_output_file, 'r') as fd:\n            job_out = fd.read()\n            self.logger.info(\"job_out=%s\" % (job_out,))\n\n        self.assertEqual(job_out, expected_out)\n\n    @timeout(400)\n    def test_t3(self):\n        \"\"\"\n        Test: tolerating job_start 2 node failures after adding\n              extra nodes to the job, pruning\n              job's assigned resources to match up to the original\n              select spec, without offlining the failed vnodes, and\n              with 2 execjob_prologue 
hooks, with prologue hook1\n              having alarm1 and prologue hook2 having alarm2.\n              This also tests the default value of sister_join_job_alarm.\n\n             1.\tSubmit a job that has been submitted with a select\n                spec of 2 super-chunks say (A) and (B), and 1 chunk\n                of (C), along with place spec of \"scatter\",\n                resulting in the following assignment:\n\n                    exec_vnode = (A)+(B)+(C)\n\n                and -Wtolerate_node_failures=job_start\n\n             2. Have a queuejob hook that adds 1 extra node to each\n                chunk (except the MS (first) chunk), resulting in the\n                assignment:\n\n                    exec_vnode = (A)+(B)+(D)+(C)+(E)\n\n                where D mirrors super-chunk B specs while E mirrors\n                chunk C.\n\n             3. Prior to submitting a job, suspend mom B. When job runs,\n                momB won't be able to join the job, so it won't be considered\n                as a \"healthy\" mom.\n\n             4. Have an execjob_prologue hook that doesn't fail any mom host\n                with alarm=alarm1, order=1.\n\n             5. Have an execjob_prologue hook2 with alarm=alarm2, order=2,\n                that fails (causes rejection) when executed by mom managing\n                vnodes in (C).\n\n             6. Have an execjob_launch hook that prunes the\n                job's exec_vnode assignment back to satisfying the original\n                3-node select spec, choosing only healthy nodes.\n\n             7. Result:\n\n                a. This results in the following reassignment of chunks:\n\n                   exec_vnode = (A)+(D)+(E)\n\n                   since (B) and (C) contain vnodes from failed moms.\n\n                b. vnodes in (B) and (C) are now showing a state of \"free\".\n\n                c. 
Mom's log file shows that the wait time between execjob_prologue\n                   hook1 execution and the execution of the execjob_launch\n                   hook is no more than alarm1+alarm2.\n\n                d. The accounting log start record 'S' will reflect the\n                   select request where additional chunks were added, while\n                   the secondary start record 's' will reflect the assigned\n                   resources after pruning the original select request via\n                   the pbs.release_nodes(keep_select=...) call\n                   inside execjob_launch hook.\n        \"\"\"\n\n        # instantiate queuejob hook\n        hook_event = \"queuejob\"\n        hook_name = \"qjob\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.qjob_hook_body)\n\n        # instantiate execjob_begin hook\n        hook_event = \"execjob_begin\"\n        hook_name = \"begin\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.begin_hook_body2)\n\n        # instantiate execjob_prologue hook #1\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"Executing prolo1\")\nlocalnode=pbs.get_local_nodename()\n\"\"\"\n        hook_event = \"execjob_prologue\"\n        hook_name = \"prolo1\"\n        alarm1 = 17\n        a = {'event': hook_event, 'enabled': 'true', 'order': 1,\n             'alarm': alarm1}\n        self.server.create_import_hook(hook_name, a, hook_body)\n\n        # instantiate execjob_prologue hook #2\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"Executing prolo2\")\n\nfor vn in e.vnode_list:\n    v = e.vnode_list[vn]\n    pbs.logjobmsg(e.job.id, \"prolo2: found vnode_list[\" + v.name + \"]\")\n\nfor vn in e.vnode_list_fail:\n    v = e.vnode_list_fail[vn]\n    pbs.logjobmsg(e.job.id, \"prolo2: found vnode_list_fail[\" + v.name + 
\"]\nlocalnode=pbs.get_local_nodename()\nif not e.job.in_ms_mom() and (localnode == '%s'):\n    x\n\"\"\" % (self.nC,)\n        hook_event = \"execjob_prologue\"\n        hook_name = \"prolo2\"\n        alarm2 = 16\n        a = {'event': hook_event, 'enabled': 'true', 'order': 2,\n             'alarm': alarm2}\n        self.server.create_import_hook(hook_name, a, hook_body)\n\n        # instantiate execjob_launch hook\n        hook_event = \"execjob_launch\"\n        hook_name = \"launch\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.launch_hook_body)\n\n        # First, turn off scheduling\n        a = {'scheduling': 'false'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # temporarily suspend momB, simulating a failed mom host.\n        self.momB.signal(\"-STOP\")\n\n        jid = self.create_and_submit_job('job1')\n        # Job gets queued and reflects the incremented values from queuejob\n        # hook\n        self.server.expect(JOB, {'job_state': 'Q',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '10gb',\n                                 'Resource_List.ncpus': 13,\n                                 'Resource_List.nodect': 5,\n                                 'Resource_List.select': self.job1_iselect,\n                                 'Resource_List.site': self.job1_oselect,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_ischedselect},\n                           id=jid, attrop=PTL_AND)\n\n        # Set time to start scanning logs\n        stime = time.time()\n\n        a = {'scheduling': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Job eventually launches reflecting the pruned back values\n        # to the original select spec\n        # There's a max_attempts=60 since it could 
take up to 60 seconds\n        # for primary mom to wait for the sisters to join\n        # (default $sister_join_job_alarm of 30 seconds) and to wait for\n        # sisters to run execjob_prologue hooks (default $job_launch_delay\n        # value of 30 seconds)\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode},\n                           id=jid, interval=1, attrop=PTL_AND,\n                           max_attempts=100)\n\n        thisjob = self.server.status(JOB, id=jid)\n        if thisjob:\n            job_output_file = thisjob[0]['Output_Path'].split(':')[1]\n\n        # Verify the logs and make sure sister_join_job_alarm is honored\n        logs = self.mom.log_match(\n            \"Executing begin\",\n            allmatch=True, starttime=stime, max_attempts=8)\n        log1 = logs[0][1]\n        logs = self.mom.log_match(\n            \"Executing prolo1\",\n            allmatch=True, starttime=stime, max_attempts=8)\n        log2 = logs[0][1]\n        tmp = log1.split(';')\n        # Convert the time into epoch time\n        time1 = int(self.logutils.convert_date_time(tmp[0]))\n        tmp = log2.split(';')\n        time2 = int(self.logutils.convert_date_time(tmp[0]))\n\n        diff = time2 - time1\n        self.logger.info(\n            \"Time diff between begin hook and prologue hook is \" +\n            str(diff) + \" seconds\")\n        # Leave a 
little wiggle room for slow systems\n\n        # test default sister_join_job_alarm value\n        sis_join_alarm = 30\n        self.assertTrue((diff >= sis_join_alarm) and\n                        diff <= (sis_join_alarm + 5))\n\n        self.mom.log_match(\n            \"sister_join_job_alarm wait time %d secs exceeded\" % (\n                sis_join_alarm,), starttime=stime, max_attempts=8)\n\n        # Verify the logs and make sure job_launch_delay is honored\n        logs = self.mom.log_match(\n            \"Executing prolo1\",\n            allmatch=True, starttime=stime, max_attempts=8)\n        log1 = logs[0][1]\n        logs = self.mom.log_match(\n            \"Executing launch\",\n            allmatch=True, starttime=stime, max_attempts=8)\n        log2 = logs[0][1]\n        tmp = log1.split(';')\n        # Convert the time into epoch time\n        time1 = int(self.logutils.convert_date_time(tmp[0]))\n        tmp = log2.split(';')\n        time2 = int(self.logutils.convert_date_time(tmp[0]))\n\n        diff = time2 - time1\n        self.logger.info(\n            \"Time diff between prolo1 hook and launch hook is \" +\n            str(diff) + \" seconds\")\n        # Leave a little wiggle room for slow systems\n        job_launch_delay = alarm1 + alarm2\n        self.assertTrue((diff >= job_launch_delay) and\n                        diff <= (job_launch_delay + 3))\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status(\n            [self.nAv0, self.nAv1, self.nE, self.nEv0],\n            'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.nAv2],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn3 = \"%s/0, %s/1, %s/2\" % (jid, jid, jid)\n        self.match_vnode_status([self.nD], 'free', jobs_assn3,\n                                3, '2097152kb')\n\n        self.match_vnode_status([self.nA, self.nAv3, self.nB, self.nBv0,\n            
                     self.nBv1, self.nBv2, self.nBv3, self.nC,\n                                 self.nEv1, self.nEv2, self.nEv3], 'free')\n\n        # check server/queue counts\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 8,\n                                    'resources_assigned.mem': '6291456kb'},\n                           attrop=PTL_AND)\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 8,\n                                   'resources_assigned.mem': '6291456kb'},\n                           id='workq', attrop=PTL_AND)\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1_exec_host))\n\n        # Verify mom_logs\n        self.momA.log_match(\n            \"Job;%s;job_start_error.+from node %s.+\" % (jid, self.hostC) +\n            \"could not IM_EXEC_PROLOGUE\", n=10, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;ignoring error from %s.+\" % (jid, self.hostC) +\n            \"as job is tolerant of node failures\", n=10, regexp=True)\n\n        self.momA.log_match(\n            \"not all prologue hooks to sister moms completed, \" +\n            \"but job will proceed to execute\", n=10)\n\n        # Check vnode_list[] parameter in execjob_prologue hook\n        vnode_list = [self.nAv0, self.nAv1, self.nAv2,\n                      self.nB, self.nBv0, self.nBv1,\n                      self.nC, self.nD, self.nE, self.nEv0]\n        for vn in vnode_list:\n            self.momA.log_match(\"Job;%s;prolo2: found vnode_list[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check vnode_list[] parameter in execjob_launch hook\n        vnode_list = [self.nAv0, self.nAv1, self.nAv2,\n                      self.nB, self.nBv0, self.nBv1,\n                      self.nC, self.nD, self.nE, self.nEv0]\n        for vn in vnode_list:\n            self.momA.log_match(\"Job;%s;launch: found vnode_list[%s]\" % (\n                                jid, vn), n=10)\n\n       
 # Check vnode_list_fail[] parameter in execjob_launch hook\n        vnode_list_fail = [self.nC]\n        for vn in vnode_list_fail:\n            self.momA.log_match(\n                \"Job;%s;launch: found vnode_list_fail[%s]\" % (jid, vn), n=10)\n\n        # Check result of pbs.event().job.release_nodes(keep_select) call\n        self.momA.log_match(\"Job;%s;launch: job.exec_vnode=%s\" % (\n            jid, self.job1_exec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;launch: job.schedselect=%s\" % (\n            jid, self.job1_schedselect), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned from exec_vnode=%s\" % (\n            jid, self.job1_iexec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned to exec_vnode=%s\" % (\n            jid, self.job1_exec_vnode), n=10)\n\n        # Check accounting_logs\n        self.match_accounting_log('S', jid, self.job1_iexec_host_esc,\n                                  self.job1_iexec_vnode_esc, \"10gb\", 13, 5,\n                                  self.job1_place,\n                                  self.job1_isel_esc)\n\n        self.match_accounting_log('s', jid, self.job1_exec_host_esc,\n                                  self.job1_exec_vnode_esc,\n                                  \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  self.job1_sel_esc)\n        self.momA.log_match(\"Job;%s;task.+started, hostname\" % (jid,),\n                            n=10, interval=5, regexp=True)\n\n        self.momA.log_match(\"Job;%s;copy file request received\" % (jid,),\n                            n=10, interval=5)\n\n        # validate output\n        expected_out = \"\"\"/var/spool/pbs/aux/%s\n%s\n%s\n%s\nFIB TESTS\npbsdsh -n 1 fib 37\n%d\npbsdsh -n 2 fib 37\n%d\nfib 37\n%d\nHOSTNAME TESTS\npbsdsh -n 0 hostname\n%s\npbsdsh -n 1 hostname\n%s\npbsdsh -n 2 hostname\n%s\nPBS_NODEFILE tests\nHOST=%s\npbs_tmrsh %s hostname\n%s\nHOST=%s\npbs_tmrsh %s 
hostname\n%s\nHOST=%s\npbs_tmrsh %s hostname\n%s\n\"\"\" % (jid, self.momA.hostname, self.momD.hostname, self.momE.hostname,\n            self.fib37_value, self.fib37_value, self.fib37_value,\n            self.momA.shortname, self.momD.shortname, self.momE.shortname,\n            self.momA.hostname, self.momA.hostname, self.momA.shortname,\n            self.momD.hostname, self.momD.hostname, self.momD.shortname,\n            self.momE.hostname, self.momE.hostname, self.momE.shortname)\n\n        self.logger.info(\"expected out=%s\" % (expected_out,))\n        job_out = \"\"\n        with open(job_output_file, 'r') as fd:\n            job_out = fd.read()\n            self.logger.info(\"job_out=%s\" % (job_out,))\n\n        self.assertEqual(job_out, expected_out)\n\n    @timeout(400)\n    def test_t4(self):\n        \"\"\"\n        Test: tolerating job_start 1 node failure that is used\n              to satisfy a multi-chunk request, after adding\n              extra nodes to the job, pruning\n              job's assigned resources to match up to the original\n              select spec.\n\n             1.\tSubmit a job that has been submitted with a select\n                spec of 2 super-chunks say (A) and (B), and 1 chunk\n                of (C), along with place spec of \"scatter\",\n                resulting in the following assignment:\n\n                    exec_vnode = (A)+(B)+(C)\n\n                and -Wtolerate_node_failures=job_start\n\n             2. Have a queuejob hook that adds 1 extra node to each\n                chunk (except the MS (first) chunk), resulting in the\n                assignment:\n\n                    exec_vnode = (A)+(B)+(D)+(C)+(E)\n\n                where D mirrors super-chunk B specs while E mirrors\n                chunk C.\n\n             3. Have an execjob_begin hook that fails (causes rejection)\n                when executed by mom managing vnodes in (B).\n\n             4. 
Then create an execjob_launch hook that\n                prunes the job's exec_vnode assignment back to\n                satisfying the original 3-node select spec,\n                choosing only healthy nodes.\n\n             5. Result:\n\n                a. This results in the following reassignment of chunks:\n\n                   exec_vnode = (A)+(D)+(C)\n\n                   since (B) contains vnodes from the failed mom.\n\n                b. The accounting log start record 'S' will reflect the\n                   select request where additional chunks were added, while\n                   the secondary start record 's' will reflect the assigned\n                   resources after pruning the original select request via\n                   the pbs.release_nodes(keep_select=...) call\n                   inside execjob_launch hook.\n        \"\"\"\n        # instantiate queuejob hook\n        hook_event = \"queuejob\"\n        hook_name = \"qjob\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.qjob_hook_body)\n\n        # instantiate execjob_begin hook\n        hook_event = \"execjob_begin\"\n        hook_name = \"begin\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.begin_hook_body)\n\n        # instantiate execjob_launch hook\n        hook_event = \"execjob_launch\"\n        hook_name = \"launch\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.launch_hook_body)\n\n        # First, turn off scheduling\n        a = {'scheduling': 'false'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        jid = self.create_and_submit_job('job1')\n        # Job gets queued and reflects the incremented values from queuejob\n        # hook\n        self.server.expect(JOB, {'job_state': 'Q',\n                                 'tolerate_node_failures': 'job_start',\n          
                        'Resource_List.mem': '10gb',\n                                 'Resource_List.ncpus': 13,\n                                 'Resource_List.nodect': 5,\n                                 'Resource_List.select': self.job1_iselect,\n                                 'Resource_List.site': self.job1_oselect,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_ischedselect},\n                           id=jid, attrop=PTL_AND)\n\n        a = {'scheduling': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Job eventually launches reflecting the pruned back values\n        # to the original select spec\n        # There's a max_attempts=60 since it could take up to 60 seconds\n        # for primary mom to wait for the sisters to join\n        # (default $sister_join_job_alarm of 30 seconds) and to wait for\n        # sisters to run execjob_prologue hooks (default $job_launch_delay\n        # value of 30 seconds)\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1v2_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1v2_schedselect,\n                                 'exec_host': self.job1v2_exec_host,\n                                 'exec_vnode': self.job1v2_exec_vnode},\n                           id=jid, interval=1, attrop=PTL_AND, max_attempts=70)\n\n        thisjob = self.server.status(JOB, id=jid)\n        if thisjob:\n            job_output_file = thisjob[0]['Output_Path'].split(':')[1]\n\n        # Check various vnode status.\n        
jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.nAv0, self.nAv1],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.nAv2],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.nC], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        jobs_assn3 = \"%s/0, %s/1, %s/2\" % (jid, jid, jid)\n        self.match_vnode_status([self.nD], 'free', jobs_assn3,\n                                3, '2097152kb')\n\n        self.match_vnode_status([self.nA, self.nAv3, self.nB, self.nBv0,\n                                 self.nBv1, self.nBv2, self.nBv3, self.nE,\n                                 self.nEv0, self.nEv1, self.nEv2,\n                                 self.nEv3], 'free')\n\n        # check server/queue counts\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 8,\n                                    'resources_assigned.mem': '6291456kb'},\n                           attrop=PTL_AND)\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 8,\n                                   'resources_assigned.mem': '6291456kb'},\n                           id='workq', attrop=PTL_AND)\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1v2_exec_host))\n\n        # Verify mom_logs\n        self.momA.log_match(\n            \"Job;%s;job_start_error.+from node %s.+could not JOIN_JOB\" % (\n                jid, self.hostB), n=10, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;ignoring error from %s.+as job \" % (jid, self.hostB) +\n            \"is tolerant of node failures\",\n            regexp=True, n=10)\n\n        # Check vnode_list[] parameter in execjob_launch hook\n        vnode_list = [self.nAv0, self.nAv1, self.nAv2,\n                      self.nB, self.nBv0, self.nBv1,\n                   
   self.nC, self.nD, self.nE, self.nEv0]\n        for vn in vnode_list:\n            self.momA.log_match(\"Job;%s;launch: found vnode_list[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check vnode_list_fail[] parameter in execjob_launch hook\n        vnode_list_fail = [self.nB, self.nBv0, self.nBv1]\n        for vn in vnode_list_fail:\n            self.momA.log_match(\"Job;%s;launch: found vnode_list_fail[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check result of pbs.event().job.release_nodes(keep_select) call\n        self.momA.log_match(\"Job;%s;launch: job.exec_vnode=%s\" % (\n            jid, self.job1v2_exec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;launch: job.schedselect=%s\" % (\n            jid, self.job1v2_schedselect), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned from exec_vnode=%s\" % (\n            jid, self.job1_iexec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned to exec_vnode=%s\" % (\n            jid, self.job1v2_exec_vnode), n=10)\n\n        # Check accounting_logs\n        self.match_accounting_log('S', jid, self.job1_iexec_host_esc,\n                                  self.job1_iexec_vnode_esc, \"10gb\", 13, 5,\n                                  self.job1_place,\n                                  self.job1_isel_esc)\n\n        self.match_accounting_log('s', jid, self.job1v2_exec_host_esc,\n                                  self.job1v2_exec_vnode_esc,\n                                  \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  self.job1v2_sel_esc)\n        self.momA.log_match(\"Job;%s;task.+started, hostname\" % (jid,),\n                            n=10, interval=5, regexp=True)\n\n        self.momA.log_match(\"Job;%s;copy file request received\" % (jid,),\n                            n=10, interval=5)\n\n        # validate output\n        expected_out = \"\"\"/var/spool/pbs/aux/%s\n%s\n%s\n%s\nFIB 
TESTS\npbsdsh -n 1 fib 37\n%d\npbsdsh -n 2 fib 37\n%d\nfib 37\n%d\nHOSTNAME TESTS\npbsdsh -n 0 hostname\n%s\npbsdsh -n 1 hostname\n%s\npbsdsh -n 2 hostname\n%s\nPBS_NODEFILE tests\nHOST=%s\npbs_tmrsh %s hostname\n%s\nHOST=%s\npbs_tmrsh %s hostname\n%s\nHOST=%s\npbs_tmrsh %s hostname\n%s\n\"\"\" % (jid, self.momA.hostname, self.momD.hostname, self.momC.hostname,\n            self.fib37_value, self.fib37_value, self.fib37_value,\n            self.momA.shortname, self.momD.shortname, self.momC.shortname,\n            self.momA.hostname, self.momA.hostname, self.momA.shortname,\n            self.momD.hostname, self.momD.hostname, self.momD.shortname,\n            self.momC.hostname, self.momC.hostname, self.momC.shortname)\n\n        self.logger.info(\"expected out=%s\" % (expected_out,))\n        job_out = \"\"\n        with open(job_output_file, 'r') as fd:\n            job_out = fd.read()\n            self.logger.info(\"job_out=%s\" % (job_out,))\n\n        self.assertEqual(job_out, expected_out)\n\n    @timeout(400)\n    def test_t5(self):\n        \"\"\"\n        Test: tolerating job_start 1 node failure used in a regular\n              chunk after adding extra nodes to the job, pruning\n              job's assigned resources to match up to the original\n              select spec.\n\n             1.\tSubmit a job that has been submitted with a select\n                spec of 2 super-chunks say (A) and (B), and 1 chunk\n                of (C), along with place spec of \"scatter\",\n                resulting in the following assignment:\n\n                    exec_vnode = (A)+(B)+(C)\n\n                and -Wtolerate_node_failures=job_start\n\n             2. 
Have a queuejob hook that adds 1 extra node to each\n                chunk (except the MS (first) chunk), resulting in the\n                assignment:\n\n                    exec_vnode = (A)+(B)+(D)+(C)+(E)\n\n                where D mirrors super-chunk B specs while E mirrors\n                chunk C.\n\n             3. Have an execjob_prologue hook that fails (causes\n                rejection) when executed by mom managing vnodes in (C).\n\n             4. Then create an execjob_launch hook that\n                prunes the job's exec_vnode assignment back to\n                satisfying the original 3-node select spec,\n                choosing only healthy nodes.\n\n             5. Result:\n\n                a. This results in the following reassignment of chunks:\n\n                   exec_vnode = (A)+(B)+(E)\n\n                   since (C) contains vnodes from the failed mom.\n\n                b. The accounting log start record 'S' will reflect the\n                   select request where additional chunks were added, while\n                   the secondary start record 's' will reflect the assigned\n                   resources after pruning the original select request via\n                   the pbs.release_nodes(keep_select=...) 
call\n                   inside execjob_launch hook.\n        \"\"\"\n        # instantiate queuejob hook\n        hook_event = \"queuejob\"\n        hook_name = \"qjob\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.qjob_hook_body)\n\n        # instantiate execjob_prologue hook\n        hook_event = \"execjob_prologue\"\n        hook_name = \"prolo\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.prolo_hook_body2)\n\n        # instantiate execjob_launch hook\n        hook_event = \"execjob_launch\"\n        hook_name = \"launch\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.launch_hook_body)\n\n        # First, turn off scheduling\n        a = {'scheduling': 'false'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        jid = self.create_and_submit_job('job1')\n        # Job gets queued and reflects the incremented values from queuejob\n        # hook\n        self.server.expect(JOB, {'job_state': 'Q',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '10gb',\n                                 'Resource_List.ncpus': 13,\n                                 'Resource_List.nodect': 5,\n                                 'Resource_List.select': self.job1_iselect,\n                                 'Resource_List.site': self.job1_oselect,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_ischedselect},\n                           id=jid, attrop=PTL_AND)\n\n        a = {'scheduling': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Job eventually launches reflecting the pruned back values\n        # to the original select spec\n        # There's a max_attempts=60 since it could take up to 
60 seconds\n        # for primary mom to wait for the sisters to join\n        # (default $sister_join_job_alarm of 30 seconds) and to wait for\n        # sisters to run execjob_prologue hooks (default $job_launch_delay\n        # value of 30 seconds)\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1v3_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1v3_schedselect,\n                                 'exec_host': self.job1v3_exec_host,\n                                 'exec_vnode': self.job1v3_exec_vnode},\n                           id=jid, interval=1, attrop=PTL_AND, max_attempts=70)\n\n        thisjob = self.server.status(JOB, id=jid)\n        if thisjob:\n            job_output_file = thisjob[0]['Output_Path'].split(':')[1]\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.nAv0, self.nAv1, self.nB, self.nBv0,\n                                 self.nE, self.nEv0], 'job-busy', jobs_assn1,\n                                1, '1048576kb')\n\n        self.match_vnode_status([self.nAv2, self.nBv1],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.nA, self.nAv3, self.nBv2, self.nBv3,\n                                 self.nC, self.nD, self.nEv1, self.nEv2,\n                                 self.nEv3], 'free')\n\n        # check server/queue counts\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 8,\n                                    'resources_assigned.mem': '6291456kb'},\n                           attrop=PTL_AND)\n 
       self.server.expect(QUEUE, {'resources_assigned.ncpus': 8,\n                                   'resources_assigned.mem': '6291456kb'},\n                           id='workq', attrop=PTL_AND)\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1v3_exec_host))\n\n        # Verify mom_logs\n        self.momA.log_match(\n            \"Job;%s;job_start_error.+from node %s.+\" % (jid, self.hostC) +\n            \"could not IM_EXEC_PROLOGUE\", n=10, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;ignoring error from %s.+\" % (jid, self.hostC) +\n            \"as job is tolerant of node failures\", n=10, regexp=True)\n\n        # Check vnode_list[] parameter in execjob_launch hook\n        vnode_list = [self.nAv0, self.nAv1, self.nAv2,\n                      self.nB, self.nBv0, self.nBv1,\n                      self.nC, self.nD, self.nE, self.nEv0]\n        for vn in vnode_list:\n            self.momA.log_match(\"Job;%s;launch: found vnode_list[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check vnode_list_fail[] parameter in execjob_launch hook\n        vnode_list_fail = [self.nC]\n        for vn in vnode_list_fail:\n            self.momA.log_match(\n                \"Job;%s;launch: found vnode_list_fail[%s]\" % (jid, vn), n=10)\n\n        # Check result of pbs.event().job.release_nodes(keep_select) call\n        self.momA.log_match(\"Job;%s;launch: job.exec_vnode=%s\" % (\n                            jid, self.job1v3_exec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;launch: job.schedselect=%s\" % (\n            jid, self.job1v3_schedselect), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned from exec_vnode=%s\" % (\n                            jid, self.job1_iexec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned to exec_vnode=%s\" % (\n            jid, self.job1v3_exec_vnode), n=10)\n\n        # Check accounting_logs\n        self.match_accounting_log('S', jid, 
self.job1_iexec_host_esc,\n                                  self.job1_iexec_vnode_esc, \"10gb\", 13, 5,\n                                  self.job1_place,\n                                  self.job1_isel_esc)\n\n        self.match_accounting_log('s', jid, self.job1v3_exec_host_esc,\n                                  self.job1v3_exec_vnode_esc,\n                                  \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  self.job1v3_sel_esc)\n        self.momA.log_match(\"Job;%s;task.+started, hostname\" % (jid,),\n                            n=10, interval=5, regexp=True)\n\n        self.momA.log_match(\"Job;%s;copy file request received\" % (jid,),\n                            n=10, interval=5)\n\n        # validate output\n        expected_out = \"\"\"/var/spool/pbs/aux/%s\n%s\n%s\n%s\nFIB TESTS\npbsdsh -n 1 fib 37\n%d\npbsdsh -n 2 fib 37\n%d\nfib 37\n%d\nHOSTNAME TESTS\npbsdsh -n 0 hostname\n%s\npbsdsh -n 1 hostname\n%s\npbsdsh -n 2 hostname\n%s\nPBS_NODEFILE tests\nHOST=%s\npbs_tmrsh %s hostname\n%s\nHOST=%s\npbs_tmrsh %s hostname\n%s\nHOST=%s\npbs_tmrsh %s hostname\n%s\n\"\"\" % (jid, self.momA.hostname, self.momB.hostname, self.momE.hostname,\n            self.fib37_value, self.fib37_value, self.fib37_value,\n            self.momA.shortname, self.momB.shortname, self.momE.shortname,\n            self.momA.hostname, self.momA.hostname, self.momA.shortname,\n            self.momB.hostname, self.momB.hostname, self.momB.shortname,\n            self.momE.hostname, self.momE.hostname, self.momE.shortname)\n\n        self.logger.info(\"expected out=%s\" % (expected_out,))\n        job_out = \"\"\n        with open(job_output_file, 'r') as fd:\n            job_out = fd.read()\n            self.logger.info(\"job_out=%s\" % (job_out,))\n\n        self.assertEqual(job_out, expected_out)\n\n    def test_t6(self):\n        \"\"\"\n        Test: tolerating job_start of 2 node failures used to\n              
satisfy the smaller chunks, after adding extra nodes\n              to the job, pruning the job's assigned resources to match up\n              to the original select spec.\n\n             1.\tSubmit a job with a select\n                spec of 2 super-chunks, say (A) and (B), and 1 chunk\n                of (C), along with place spec of \"scatter\",\n                resulting in the following assignment:\n\n                    exec_vnode = (A)+(B)+(C)\n\n                and -Wtolerate_node_failures=job_start\n\n             2. Have a queuejob hook that adds 1 extra node to each\n                chunk (except the MS (first) chunk), resulting in the\n                assignment:\n\n                    exec_vnode = (A)+(B)+(D)+(C)+(E)\n\n                where D mirrors super-chunk B specs while E mirrors\n                chunk C. (C) and (E) are smaller chunks than (B)\n                and (D). For example:\n                    (D) =  \"(nadal:ncpus=3:mem=2097152kb)\"\n                    (C) =  \"(lendl:ncpus=2:mem=2097152kb)\"\n\n             3. Have an execjob_begin hook that fails (causes\n                rejection) when executed by the mom managing vnodes in (C).\n\n             4. Have an execjob_prologue hook that fails (causes\n                rejection) when executed by the mom managing vnodes in (E).\n\n             5. Then create an execjob_launch hook that\n                prunes the job's exec_vnode assignment back to\n                satisfy the original 3-node select spec,\n                choosing only healthy nodes.\n\n             6. Result:\n\n                a. 
This results in the following reassignment of chunks:\n\n                   exec_vnode = (A)+(B)+(D)\n\n                   since (C) and (E) contain vnodes from failed moms.\n                   Note that from (D), only enough resources are\n                   allocated to satisfy the smaller third requested chunk.\n                   If (D) originally has \"(nadal:ncpus=3:mem=2097152kb)\",\n                   the reassignment would only be\n                   \"(nadal:ncpus=2:mem=2097152kb)\".\n\n                b. The accounting log start record 'S' will reflect the\n                   select request where additional chunks were added, while\n                   the secondary start record 's' will reflect the assigned\n                   resources after pruning the original select request via\n                   the pbs.release_nodes(keep_select=...) call\n                   inside execjob_launch hook.\n        \"\"\"\n        # instantiate queuejob hook\n        hook_event = \"queuejob\"\n        hook_name = \"qjob\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.qjob_hook_body)\n\n        # instantiate execjob_begin hook\n        hook_event = \"execjob_begin\"\n        hook_name = \"begin\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.begin_hook_body3)\n\n        # instantiate execjob_prologue hook\n        hook_event = \"execjob_prologue\"\n        hook_name = \"prolo\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.prolo_hook_body2)\n\n        # instantiate execjob_launch hook\n        hook_event = \"execjob_launch\"\n        hook_name = \"launch\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.launch_hook_body)\n\n        # First, turn off scheduling\n        a = {'scheduling': 'false'}\n        
self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        jid = self.create_and_submit_job('job1')\n        # Job gets queued and reflects the incremented values from queuejob\n        # hook\n        self.server.expect(JOB, {'job_state': 'Q',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '10gb',\n                                 'Resource_List.ncpus': 13,\n                                 'Resource_List.nodect': 5,\n                                 'Resource_List.select': self.job1_iselect,\n                                 'Resource_List.site': self.job1_oselect,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_ischedselect},\n                           id=jid, attrop=PTL_AND)\n\n        a = {'scheduling': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Job eventually launches, reflecting the values pruned back\n        # to the original select spec.\n        # A large max_attempts is used since it can take up to 60 seconds\n        # for the primary mom to wait for the sisters to join\n        # (default $sister_join_job_alarm of 30 seconds) and to wait for\n        # sisters to run their execjob_prologue hooks (default\n        # $job_launch_delay value of 30 seconds)\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1v4_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1v4_schedselect,\n                                 'exec_host': self.job1v4_exec_host,\n                                
 'exec_vnode': self.job1v4_exec_vnode},\n                           id=jid, interval=1, attrop=PTL_AND, max_attempts=70)\n\n        thisjob = self.server.status(JOB, id=jid)\n        if thisjob:\n            job_output_file = thisjob[0]['Output_Path'].split(':')[1]\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.nAv0, self.nAv1, self.nB, self.nBv0],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.nD], 'free', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.nAv2, self.nBv1],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.nA, self.nAv3, self.nBv2, self.nBv3,\n                                 self.nC, self.nD, self.nEv1, self.nEv2,\n                                 self.nEv3], 'free')\n\n        # check server/queue counts\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 8,\n                                    'resources_assigned.mem': '6291456kb'},\n                           attrop=PTL_AND)\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 8,\n                                   'resources_assigned.mem': '6291456kb'},\n                           id='workq', attrop=PTL_AND)\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1v4_exec_host))\n\n        # Verify mom_logs\n        self.momA.log_match(\n            \"Job;%s;job_start_error.+from node %s.+could not JOIN_JOB\" % (\n                jid, self.hostE), n=10, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;ignoring error from %s.+as job \" % (jid, self.hostE) +\n            \"is tolerant of node failures\",\n            regexp=True, n=10)\n\n        self.momA.log_match(\n            \"Job;%s;job_start_error.+from node %s.+\" % 
(jid, self.hostC) +\n            \"could not IM_EXEC_PROLOGUE\", n=10, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;ignoring error from %s.+\" % (jid, self.hostC) +\n            \"as job is tolerant of node failures\", n=10, regexp=True)\n\n        # Check vnode_list[] parameter in execjob_launch hook\n        vnode_list = [self.nAv0, self.nAv1, self.nAv2,\n                      self.nB, self.nBv0, self.nBv1,\n                      self.nC, self.nD, self.nE, self.nEv0]\n        for vn in vnode_list:\n            self.momA.log_match(\"Job;%s;launch: found vnode_list[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check vnode_list_fail[] parameter in execjob_launch hook\n        vnode_list_fail = [self.nC, self.nE]\n        for vn in vnode_list_fail:\n            self.momA.log_match(\"Job;%s;launch: found vnode_list_fail[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check result of pbs.event().job.release_nodes(keep_select) call\n        self.momA.log_match(\"Job;%s;launch: job.exec_vnode=%s\" % (\n            jid, self.job1v4_exec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;launch: job.schedselect=%s\" % (\n            jid, self.job1v4_schedselect), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned from exec_vnode=%s\" % (\n            jid, self.job1_iexec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned to exec_vnode=%s\" % (\n            jid, self.job1v4_exec_vnode), n=10)\n\n        # Check accounting_logs\n        self.match_accounting_log('S', jid, self.job1_iexec_host_esc,\n                                  self.job1_iexec_vnode_esc, \"10gb\", 13, 5,\n                                  self.job1_place,\n                                  self.job1_isel_esc)\n\n        self.match_accounting_log('s', jid, self.job1v4_exec_host_esc,\n                                  self.job1v4_exec_vnode_esc,\n                                  \"6gb\", 8, 3,\n                          
        self.job1_place,\n                                  self.job1v4_sel_esc)\n        self.momA.log_match(\"Job;%s;task.+started, hostname\" % (jid,),\n                            n=10, interval=5, regexp=True)\n\n        self.momA.log_match(\"Job;%s;copy file request received\" % (jid,),\n                            n=10, interval=5)\n\n        # validate output\n        expected_out = \"\"\"/var/spool/pbs/aux/%s\n%s\n%s\n%s\nFIB TESTS\npbsdsh -n 1 fib 37\n%d\npbsdsh -n 2 fib 37\n%d\nfib 37\n%d\nHOSTNAME TESTS\npbsdsh -n 0 hostname\n%s\npbsdsh -n 1 hostname\n%s\npbsdsh -n 2 hostname\n%s\nPBS_NODEFILE tests\nHOST=%s\npbs_tmrsh %s hostname\n%s\nHOST=%s\npbs_tmrsh %s hostname\n%s\nHOST=%s\npbs_tmrsh %s hostname\n%s\n\"\"\" % (jid, self.momA.hostname, self.momB.hostname, self.momD.hostname,\n            self.fib37_value, self.fib37_value, self.fib37_value,\n            self.momA.shortname, self.momB.shortname, self.momD.shortname,\n            self.momA.hostname, self.momA.hostname, self.momA.shortname,\n            self.momB.hostname, self.momB.hostname, self.momB.shortname,\n            self.momD.hostname, self.momD.hostname, self.momD.shortname)\n\n        self.logger.info(\"expected out=%s\" % (expected_out,))\n        job_out = \"\"\n        with open(job_output_file, 'r') as fd:\n            job_out = fd.read()\n            self.logger.info(\"job_out=%s\" % (job_out,))\n\n        self.assertEqual(job_out, expected_out)\n\n    def test_t7(self):\n        \"\"\"\n        Test: tolerating job_start of 2 node failures used to\n              satisfy the larger chunks, after adding extra nodes\n              to the job. Pruning the job's assigned resources to match up\n              to the original select spec would fail, as the\n              unsatisfied chunk requests cannot be handled\n              by the remaining smaller-sized nodes. The failure\n              to prune the job is followed by a pbs.event().rerun()\n              action and a job hold. 
This test also\n              exercises setting tolerate_node_failures=none.\n        \"\"\"\n        # instantiate queuejob hook\n        hook_event = \"queuejob\"\n        hook_name = \"qjob\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.qjob_hook_body)\n\n        # instantiate execjob_begin hook\n        hook_event = \"execjob_begin\"\n        hook_name = \"begin\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.begin_hook_body)\n\n        # instantiate execjob_prologue hook; the bare 'x' below deliberately\n        # raises a NameError (hook rejection) on the mom managing vnode nD\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"Executing prologue\")\nlocalnode=pbs.get_local_nodename()\nif not e.job.in_ms_mom() and (localnode == '%s'):\n    x\n\"\"\" % (self.nD,)\n        hook_event = \"execjob_prologue\"\n        hook_name = \"prolo\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, hook_body)\n\n        # instantiate execjob_launch hook\n        hook_event = \"execjob_launch\"\n        hook_name = \"launch\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.launch_hook_body)\n\n        # First, turn off scheduling\n        a = {'scheduling': 'false'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        jid = self.create_and_submit_job('job1')\n        # Job gets queued and reflects the incremented values from queuejob\n        # hook\n        self.server.expect(JOB, {'job_state': 'Q',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '10gb',\n                                 'Resource_List.ncpus': 13,\n                                 'Resource_List.nodect': 5,\n                                 'Resource_List.select': self.job1_iselect,\n                                 'Resource_List.site': 
self.job1_oselect,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_ischedselect},\n                           id=jid, attrop=PTL_AND)\n\n        a = {'scheduling': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # This time the job cannot be pruned back to the original select\n        # spec: the unsatisfied chunks cannot be met by the remaining\n        # healthy (smaller) nodes, so the launch hook rejects the job\n        # and it ends up held.\n\n        # Verify mom_logs\n        self.momA.log_match(\n            \"Job;%s;job_start_error.+from node %s.+could not JOIN_JOB\" % (\n                jid, self.hostB), n=10, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;ignoring error from %s.+as job \" % (jid, self.hostB) +\n            \"is tolerant of node failures\",\n            regexp=True, n=10)\n        self.momA.log_match(\n            \"Job;%s;job_start_error.+from node %s.+\" % (jid, self.hostD) +\n            \"could not IM_EXEC_PROLOGUE\", n=10, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;ignoring error from %s.+\" % (jid, self.hostD) +\n            \"as job is tolerant of node failures\", n=10, regexp=True)\n\n        self.momA.log_match(\"Job;%s;could not satisfy select chunk\" % (jid,),\n                            n=10)\n\n        self.momA.log_match(\"Job;%s;NEED chunks for keep_select\" % (jid,),\n                            n=10)\n\n        self.momA.log_match(\n            \"Job;%s;HAVE chunks from job's exec_vnode\" % (jid,), n=10)\n\n        self.momA.log_match(\"execjob_launch request rejected by 'launch'\",\n                            n=10)\n\n        errmsg = \"unsuccessful at LAUNCH\"\n        
self.momA.log_match(\"Job;%s;%s\" % (jid, errmsg,), n=10)\n        self.server.expect(JOB, {'job_state': 'H'},\n                           id=jid, interval=1, max_attempts=70)\n\n        # turn off queuejob\n        self.server.manager(MGR_CMD_SET, HOOK, {'enabled': 'false'}, 'qjob')\n\n        # modify job so as to not tolerate_node_failures\n        a = {ATTR_tolerate_node_failures: \"none\"}\n        self.server.alterjob(jobid=jid, attrib=a)\n\n        # release hold on job\n        self.server.rlsjob(jobid=jid, holdtype='s')\n\n        a = {'scheduling': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Verify mom_logs\n        self.momA.log_match(\n            \"Job;%s;job_start_error.+could not JOIN_JOB\" % (\n                jid), n=10, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;ignoring error from %s.+as job \" % (jid, self.hostE) +\n            \"is tolerant of node failures\",\n            regexp=True, n=10, existence=False, max_attempts=10)\n\n        self.server.expect(JOB, {'job_state': 'H'},\n                           id=jid, interval=1, max_attempts=30)\n\n        # turn off begin hook, leaving prologue hook in place\n        self.server.manager(MGR_CMD_SET, HOOK, {'enabled': 'false'}, 'begin')\n\n        # release hold on job\n        self.server.rlsjob(jobid=jid, holdtype='s')\n\n        a = {'scheduling': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        self.momA.log_match(\n            \"Job;%s;job_start_error.+could not IM_EXEC_PROLOGUE\" % (jid,),\n            n=10, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;ignoring error from %s.+\" % (jid, self.hostC) +\n            \"as job is tolerant of node failures\", n=10, regexp=True,\n            existence=False, max_attempts=10)\n\n        self.server.expect(JOB, {'job_state': 'H'},\n                           id=jid, interval=1, max_attempts=15)\n\n        # turn off prologue hook, so only launch hook 
remains.\n        self.server.manager(MGR_CMD_SET, HOOK, {'enabled': 'false'}, 'prolo')\n\n        # release hold on job\n        self.server.rlsjob(jobid=jid, holdtype='s')\n\n        a = {'scheduling': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'tolerate_node_failures': 'none',\n                                 'Resource_List.mem': '10gb',\n                                 'Resource_List.ncpus': 13,\n                                 'Resource_List.nodect': 5,\n                                 'Resource_List.select': self.job1_iselect,\n                                 'Resource_List.site': self.job1_oselect,\n                                 'Resource_List.place': self.job1_place,\n                                 'exec_host': self.job1_iexec_host,\n                                 'exec_vnode': self.job1_iexec_vnode,\n                                 'schedselect': self.job1_ischedselect},\n                           id=jid, attrop=PTL_AND)\n\n        # tolerate_node_failures=none and launch hook calls release_nodes()\n        emsg = \"no nodes released as job does not tolerate node failures\"\n        self.momA.log_match(\"%s: %s\" % (jid, emsg), n=30)\n\n    def test_t8(self):\n        \"\"\"\n        Test tolerating node failures at job startup with no\n             failed moms.\n\n             1.\tSubmit a job with a select\n                spec of 2 super-chunks, say (A) and (B), and 1 chunk\n                of (C), along with place spec of \"scatter\",\n                resulting in the following assignment:\n\n                    exec_vnode = (A)+(B)+(C)\n\n                and -Wtolerate_node_failures=job_start\n\n             2. 
Have a queuejob hook that adds 1 extra node to each\n                chunk (except the MS (first) chunk), resulting in the\n                assignment:\n\n                    exec_vnode = (A)+(B)+(D)+(C)+(E)\n\n                where D mirrors super-chunk B specs while E mirrors\n                chunk C.\n\n             3. Have execjob_begin and execjob_prologue hooks that do\n                not fail on any of the sister moms.\n\n             4. Then create an execjob_launch hook that prunes the job's\n                exec_vnode assignment back to satisfy the original 3-node\n                select spec, choosing only healthy nodes.\n\n             5. Result:\n\n                a. This results in the following reassignment of chunks:\n\n                   exec_vnode = (A)+(B)+(C)\n\n                b. The accounting log start record 'S' will reflect the\n                   select request where additional chunks were added, while\n                   the secondary start record 's' will reflect the assigned\n                   resources after pruning the original select request via\n                   the pbs.release_nodes(keep_select=...) 
call\n                   inside execjob_launch hook.\n        \"\"\"\n        # instantiate queuejob hook\n        hook_event = \"queuejob\"\n        hook_name = \"qjob\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.qjob_hook_body)\n\n        # instantiate execjob_begin hook\n        hook_event = \"execjob_begin\"\n        hook_name = \"begin\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.begin_hook_body2)\n\n        # instantiate execjob_prologue hook\n        hook_event = \"execjob_prologue\"\n        hook_name = \"prolo\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.prolo_hook_body3)\n\n        # instantiate execjob_launch hook\n        hook_event = \"execjob_launch\"\n        hook_name = \"launch\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.launch_hook_body)\n\n        # First, turn off scheduling\n        a = {'scheduling': 'false'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        jid = self.create_and_submit_job('job1')\n        # Job gets queued and reflects the incremented values from queuejob\n        # hook\n        self.server.expect(JOB, {'job_state': 'Q',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '10gb',\n                                 'Resource_List.ncpus': 13,\n                                 'Resource_List.nodect': 5,\n                                 'Resource_List.select': self.job1_iselect,\n                                 'Resource_List.site': self.job1_oselect,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_ischedselect},\n                           id=jid, attrop=PTL_AND)\n\n        a = 
{'scheduling': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Job eventually launches, reflecting the values pruned back\n        # to the original select spec.\n        # There's a max_attempts=60 since it can take up to 60 seconds\n        # for the primary mom to wait for the sisters to join\n        # (default $sister_join_job_alarm of 30 seconds) and to wait for\n        # sisters to run their execjob_prologue hooks (default\n        # $job_launch_delay value of 30 seconds)\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1v5_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1v5_schedselect,\n                                 'exec_host': self.job1v5_exec_host,\n                                 'exec_vnode': self.job1v5_exec_vnode},\n                           id=jid, interval=1, attrop=PTL_AND, max_attempts=60)\n\n        thisjob = self.server.status(JOB, id=jid)\n        if thisjob:\n            job_output_file = thisjob[0]['Output_Path'].split(':')[1]\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status(\n            [self.nAv0, self.nAv1, self.nB, self.nBv0],\n            'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.nAv2, self.nBv1],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n\n        self.match_vnode_status([self.nC], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.nA, self.nAv3, self.nBv2, 
self.nBv3,\n                                 self.nE, self.nEv0, self.nEv1, self.nEv2,\n                                 self.nEv3], 'free')\n\n        # Check server/queue counts.\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 8,\n                                    'resources_assigned.mem': '6291456kb'},\n                           attrop=PTL_AND)\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 8,\n                                   'resources_assigned.mem': '6291456kb'},\n                           id='workq', attrop=PTL_AND)\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1v5_exec_host))\n\n        # Check vnode_list[] parameter in execjob_prologue hook\n        vnode_list = [self.nAv0, self.nAv1, self.nAv2,\n                      self.nB, self.nBv0, self.nBv1,\n                      self.nC, self.nD, self.nE, self.nEv0]\n        for vn in vnode_list:\n            self.momA.log_match(\"Job;%s;prolo: found vnode_list[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check vnode_list[] parameter in execjob_launch hook\n        vnode_list = [self.nAv0, self.nAv1, self.nAv2,\n                      self.nB, self.nBv0, self.nBv1,\n                      self.nC, self.nD, self.nE, self.nEv0]\n        for vn in vnode_list:\n            self.momA.log_match(\"Job;%s;launch: found vnode_list[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check result of pbs.event().job.release_nodes(keep_select) call\n        self.momA.log_match(\"Job;%s;launch: job.exec_vnode=%s\" % (\n            jid, self.job1v5_exec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;launch: job.schedselect=%s\" % (\n            jid, self.job1v5_schedselect), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned from exec_vnode=%s\" % (\n            jid, self.job1_iexec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned to exec_vnode=%s\" % (\n            jid, 
self.job1v5_exec_vnode), n=10)\n\n        # Check accounting_logs\n        self.match_accounting_log('S', jid, self.job1_iexec_host_esc,\n                                  self.job1_iexec_vnode_esc, \"10gb\", 13, 5,\n                                  self.job1_place,\n                                  self.job1_isel_esc)\n\n        self.match_accounting_log('s', jid, self.job1v5_exec_host_esc,\n                                  self.job1v5_exec_vnode_esc,\n                                  \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  self.job1v5_sel_esc)\n        self.momA.log_match(\"Job;%s;task.+started, hostname\" % (jid,),\n                            n=10, interval=5, regexp=True)\n\n        self.momA.log_match(\"Job;%s;copy file request received\" % (jid,),\n                            n=10, interval=5)\n\n        # validate output\n        expected_out = \"\"\"/var/spool/pbs/aux/%s\n%s\n%s\n%s\nFIB TESTS\npbsdsh -n 1 fib 37\n%d\npbsdsh -n 2 fib 37\n%d\nfib 37\n%d\nHOSTNAME TESTS\npbsdsh -n 0 hostname\n%s\npbsdsh -n 1 hostname\n%s\npbsdsh -n 2 hostname\n%s\nPBS_NODEFILE tests\nHOST=%s\npbs_tmrsh %s hostname\n%s\nHOST=%s\npbs_tmrsh %s hostname\n%s\nHOST=%s\npbs_tmrsh %s hostname\n%s\n\"\"\" % (jid, self.momA.hostname, self.momB.hostname, self.momC.hostname,\n            self.fib37_value, self.fib37_value, self.fib37_value,\n            self.momA.shortname, self.momB.shortname, self.momC.shortname,\n            self.momA.hostname, self.momA.hostname, self.momA.shortname,\n            self.momB.hostname, self.momB.hostname, self.momB.shortname,\n            self.momC.hostname, self.momC.hostname, self.momC.shortname)\n\n        self.logger.info(\"expected out=%s\" % (expected_out,))\n        job_out = \"\"\n        with open(job_output_file, 'r') as fd:\n            job_out = fd.read()\n            self.logger.info(\"job_out=%s\" % (job_out,))\n\n        self.assertEqual(job_out, expected_out)\n\n    
@timeout(400)\n    def test_t9(self):\n        \"\"\"\n        Test tolerating 'all' node failures at job startup and\n             within the life of the job.\n\n             1.\tSubmit a job with a select\n                spec of 2 super-chunks, say (A) and (B), and 1 chunk\n                of (C), along with place spec of \"scatter\",\n                resulting in the following assignment:\n\n                    exec_vnode = (A)+(B)+(C)\n\n                and -Wtolerate_node_failures=all\n\n             2. Have a queuejob hook that adds 1 extra node to each\n                chunk (except the MS (first) chunk), resulting in the\n                assignment:\n\n                    exec_vnode = (A)+(B)+(D)+(C)+(E)\n\n                where D mirrors super-chunk B specs while E mirrors\n                chunk C.\n\n             3. Have an execjob_begin hook that fails (causes rejection)\n                when executed by the mom managing vnodes in (B).\n\n             4. Have an execjob_prologue hook that fails (causes rejection)\n                when executed by the mom managing vnodes in (C).\n\n             5. Then create an execjob_launch hook that prunes the job's\n                exec_vnode assignment back to satisfy the original 3-node\n                select spec, choosing only healthy nodes.\n\n             6. Now kill -KILL the mom on host hostD.\n\n             7. Result:\n\n                a. This results in the following reassignment of chunks:\n\n                   exec_vnode = (A)+(D)+(E)\n\n                   since (B) and (C) contain vnodes from failed moms.\n\n                b. 
Job continues to run even after nodeD goes down with\n                   only an indication in mom_logs with the message:\n                   im_eof, Premature end of message from addr n stream 4\n\n        \"\"\"\n        # set this so as to not linger on delaying job kill.\n        c = {'$max_poll_downtime': 10}\n        self.momA.add_config(c)\n\n        # instantiate queuejob hook, tolerate_node_failure is set to 'all'\n        hook_event = \"queuejob\"\n        hook_name = \"qjob\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.qjob_hook_body2)\n\n        # instantiate execjob_begin hook\n        hook_event = \"execjob_begin\"\n        hook_name = \"begin\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.begin_hook_body)\n\n        # instantiate execjob_prologue hook\n        hook_event = \"execjob_prologue\"\n        hook_name = \"prolo\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.prolo_hook_body)\n\n        # instantiate execjob_launch hook\n        hook_event = \"execjob_launch\"\n        hook_name = \"launch\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.launch_hook_body)\n\n        # First, turn off scheduling\n        a = {'scheduling': 'false'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        jid = self.create_and_submit_job('job1_2')\n        # Job gets queued and reflects the incremented values from queuejob\n        # hook\n        self.server.expect(JOB, {'job_state': 'Q',\n                                 'tolerate_node_failures': 'all',\n                                 'Resource_List.mem': '10gb',\n                                 'Resource_List.ncpus': 13,\n                                 'Resource_List.nodect': 5,\n                                 
'Resource_List.select': self.job1_iselect,\n                                 'Resource_List.site': self.job1_oselect,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_ischedselect},\n                           id=jid, attrop=PTL_AND)\n\n        a = {'scheduling': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Job eventually launches, reflecting the values pruned back\n        # to the original select spec.\n        # max_attempts=60 is used since it can take up to 60 seconds\n        # for the primary mom to wait for the sisters to join\n        # (default $sister_join_job_alarm of 30 seconds) and for the\n        # sisters' execjob_prologue hooks to finish (default\n        # $job_launch_delay value of 30 seconds)\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'tolerate_node_failures': 'all',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode},\n                           id=jid, interval=1, attrop=PTL_AND, max_attempts=60)\n\n        thisjob = self.server.status(JOB, id=jid)\n        if thisjob:\n            job_output_file = thisjob[0]['Output_Path'].split(':')[1]\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status(\n            [self.nAv0, self.nAv1, self.nE, self.nEv0],\n            'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.nAv2],\n       
                         'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn3 = \"%s/0, %s/1, %s/2\" % (jid, jid, jid)\n        self.match_vnode_status([self.nD], 'free', jobs_assn3,\n                                3, '2097152kb')\n\n        self.match_vnode_status([self.nA, self.nAv3, self.nB, self.nBv0,\n                                 self.nBv1, self.nBv2, self.nBv3, self.nC,\n                                 self.nEv1, self.nEv2, self.nEv3], 'free')\n\n        # Check server/queue counts.\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 8,\n                                    'resources_assigned.mem': '6291456kb'},\n                           attrop=PTL_AND)\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 8,\n                                   'resources_assigned.mem': '6291456kb'},\n                           id='workq', attrop=PTL_AND)\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1_exec_host))\n\n        # Verify mom_logs\n        self.momA.log_match(\n            \"Job;%s;job_start_error.+from node %s.+could not JOIN_JOB\" % (\n                jid, self.hostB), n=10, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;ignoring error from %s.+as job \" % (jid, self.hostB) +\n            \"is tolerant of node failures\",\n            regexp=True, n=10)\n\n        self.momA.log_match(\n            \"Job;%s;job_start_error.+from node %s.+\" % (jid, self.hostC) +\n            \"could not IM_EXEC_PROLOGUE\", n=10, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;ignoring error from %s.+\" % (jid, self.hostC) +\n            \"as job is tolerant of node failures\", n=10, regexp=True)\n        # Check vnode_list[] parameter in execjob_prologue hook\n        vnode_list = [self.nAv0, self.nAv1, self.nAv2,\n                      self.nB, self.nBv0, self.nBv1,\n                      self.nC, self.nD, self.nE, self.nEv0]\n        for vn in vnode_list:\n           
 self.momA.log_match(\"Job;%s;prolo: found vnode_list[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check vnode_list_fail[] parameter in execjob_prologue hook\n        vnode_list_fail = [self.nB, self.nBv0, self.nBv1]\n        for vn in vnode_list_fail:\n            self.momA.log_match(\"Job;%s;prolo: found vnode_list_fail[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check vnode_list[] parameter in execjob_launch hook\n        vnode_list = [self.nAv0, self.nAv1, self.nAv2,\n                      self.nB, self.nBv0, self.nBv1,\n                      self.nC, self.nD, self.nE, self.nEv0]\n        for vn in vnode_list:\n            self.momA.log_match(\"Job;%s;launch: found vnode_list[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check vnode_list_fail[] parameter in execjob_launch hook\n        vnode_list_fail = [self.nB, self.nBv0, self.nBv1, self.nC]\n        for vn in vnode_list_fail:\n            self.momA.log_match(\n                \"Job;%s;launch: found vnode_list_fail[%s]\" % (jid, vn), n=10)\n\n        # Check result of pbs.event().job.release_nodes(keep_select) call\n        self.momA.log_match(\"Job;%s;launch: job.exec_vnode=%s\" % (\n            jid, self.job1_exec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;launch: job.schedselect=%s\" % (\n            jid, self.job1_schedselect), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned from exec_vnode=%s\" % (\n            jid, self.job1_iexec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned to exec_vnode=%s\" % (\n            jid, self.job1_exec_vnode), n=10)\n\n        # Check accounting_logs\n        self.match_accounting_log('S', jid, self.job1_iexec_host_esc,\n                                  self.job1_iexec_vnode_esc, \"10gb\", 13, 5,\n                                  self.job1_place,\n                                  self.job1_isel_esc)\n\n        self.match_accounting_log('s', jid, 
self.job1_exec_host_esc,\n                                  self.job1_exec_vnode_esc,\n                                  \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  self.job1_sel_esc)\n\n        # kill momD, simulating a failed mom host (momD is restarted\n        # at the end of the test).\n        self.momD.signal(\"-KILL\")\n        self.momA.log_match(\"im_eof, Premature end of message.+on stream 4\",\n                            n=10, max_attempts=30, interval=2, regexp=True)\n\n        self.momA.log_match(\"Job;%s;task.+started, hostname\" % (jid,),\n                            n=10, interval=5, regexp=True)\n\n        self.momA.log_match(\"Job;%s;copy file request received\" % (jid,),\n                            n=10, interval=5)\n\n        # validate output\n        expected_out = \"\"\"/var/spool/pbs/aux/%s\n%s\n%s\n%s\nFIB TESTS\npbsdsh -n 2 fib 37\n%d\nfib 37\n%d\nHOSTNAME TESTS\npbsdsh -n 0 hostname\n%s\npbsdsh -n 2 hostname\n%s\n\"\"\" % (jid, self.momA.hostname, self.momD.hostname, self.momE.hostname,\n            self.fib37_value, self.fib37_value, self.momA.shortname,\n            self.momE.shortname)\n\n        self.logger.info(\"expected out=%s\" % (expected_out,))\n        job_out = \"\"\n        with open(job_output_file, 'r') as fd:\n            job_out = fd.read()\n            self.logger.info(\"job_out=%s\" % (job_out,))\n\n        self.assertEqual(job_out, expected_out)\n        self.momD.start()\n\n    def test_t10(self):\n        \"\"\"\n        Test tolerating node failures at job startup but also\n             causing a failure on one of the nodes after the job has\n             started.\n\n             1. Submit a job with a select\n                spec of 2 super-chunks, say (A) and (B), and 1 chunk\n                of (C), along with place spec of \"scatter\",\n                resulting in the following assignment:\n\n                    exec_vnode = (A)+(B)+(C)\n\n                and 
-Wtolerate_node_failures=job_start\n\n             2. Have a queuejob hook that adds 1 extra node to each\n                chunk (except the MS (first) chunk), resulting in the\n                assignment:\n\n                    exec_vnode = (A)+(B)+(D)+(C)+(E)\n\n                where D mirrors super-chunk B specs while E mirrors\n                chunk C.\n\n             3. Have an execjob_begin hook that fails (causes rejection)\n                when executed by the mom managing vnodes in (B).\n\n             4. Have an execjob_prologue hook that fails (causes rejection)\n                when executed by the mom managing vnodes in (C).\n\n             5. Then create an execjob_launch hook that prunes the job's\n                exec_vnode assignment back to satisfy the original 3-node\n                select spec, choosing only healthy nodes.\n\n             6. Now kill -KILL mom host hostD.\n\n             7. Result:\n\n                a. This results in the following reassignment of chunks:\n\n                   exec_vnode = (A)+(D)+(E)\n\n                   since (B) and (C) contain vnodes from failed moms.\n\n                b. 
Job eventually aborts after nodeD goes down with\n                   an indication in mom_logs with the message:\n                   \"im_eof, lost communication with <host>\"\n                   \"node EOF 1 (<host>)\"\n                   \"kill_job\"\n\n        \"\"\"\n        # set this so as to not linger on delaying job kill.\n        c = {'$max_poll_downtime': 10}\n        self.momA.add_config(c)\n\n        # instantiate queuejob hook\n        hook_event = \"queuejob\"\n        hook_name = \"qjob\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.qjob_hook_body)\n\n        # instantiate execjob_begin hook\n        hook_event = \"execjob_begin\"\n        hook_name = \"begin\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.begin_hook_body)\n\n        # instantiate execjob_prologue hook\n        hook_event = \"execjob_prologue\"\n        hook_name = \"prolo\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.prolo_hook_body)\n\n        # instantiate execjob_launch hook\n        hook_event = \"execjob_launch\"\n        hook_name = \"launch\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.launch_hook_body)\n\n        # First, turn off scheduling\n        a = {'scheduling': 'false'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        jid = self.create_and_submit_job('job1_3')\n        # Job gets queued and reflects the incremented values from queuejob\n        # hook\n        self.server.expect(JOB, {'job_state': 'Q',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '10gb',\n                                 'Resource_List.ncpus': 13,\n                                 'Resource_List.nodect': 5,\n                            
     'Resource_List.select': self.job1_iselect,\n                                 'Resource_List.site': self.job1_oselect,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_ischedselect},\n                           id=jid, attrop=PTL_AND)\n\n        a = {'scheduling': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Job eventually launches, reflecting the values pruned back\n        # to the original select spec.\n        # max_attempts=60 is used since it can take up to 60 seconds\n        # for the primary mom to wait for the sisters to join\n        # (default $sister_join_job_alarm of 30 seconds) and for the\n        # sisters' execjob_prologue hooks to finish (default\n        # $job_launch_delay value of 30 seconds)\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_schedselect,\n                                 'exec_host': self.job1_exec_host,\n                                 'exec_vnode': self.job1_exec_vnode},\n                           id=jid, interval=1, attrop=PTL_AND, max_attempts=60)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status(\n            [self.nAv0, self.nAv1, self.nE, self.nEv0],\n            'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.nAv2],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn3 = \"%s/0, %s/1, %s/2\" % (jid, jid, jid)\n        
self.match_vnode_status([self.nD], 'free', jobs_assn3,\n                                3, '2097152kb')\n\n        self.match_vnode_status([self.nA, self.nAv3, self.nB, self.nBv0,\n                                 self.nBv1, self.nBv2, self.nBv3, self.nC,\n                                 self.nEv1, self.nEv2, self.nEv3], 'free')\n\n        # Check server/queue counts.\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 8,\n                                    'resources_assigned.mem': '6291456kb'},\n                           attrop=PTL_AND)\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 8,\n                                   'resources_assigned.mem': '6291456kb'},\n                           id='workq', attrop=PTL_AND)\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1_exec_host))\n\n        # Verify mom_logs\n        self.momA.log_match(\n            \"Job;%s;job_start_error.+from node %s.+could not JOIN_JOB\" % (\n                jid, self.hostB), n=10, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;ignoring error from %s.+as job \" % (jid, self.hostB) +\n            \"is tolerant of node failures\",\n            regexp=True, n=10)\n\n        self.momA.log_match(\n            \"Job;%s;job_start_error.+from node %s.+\" % (jid, self.hostC) +\n            \"could not IM_EXEC_PROLOGUE\", n=10, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;ignoring error from %s.+\" % (jid, self.hostC) +\n            \"as job is tolerant of node failures\", n=10, regexp=True)\n        # Check vnode_list[] parameter in execjob_prologue hook\n        vnode_list = [self.nAv0, self.nAv1, self.nAv2,\n                      self.nB, self.nBv0, self.nBv1,\n                      self.nC, self.nD, self.nE, self.nEv0]\n        for vn in vnode_list:\n            self.momA.log_match(\"Job;%s;prolo: found vnode_list[%s]\" % (\n                                jid, vn), n=10)\n\n        # 
Check vnode_list_fail[] parameter in execjob_prologue hook\n        vnode_list_fail = [self.nB, self.nBv0, self.nBv1]\n        for vn in vnode_list_fail:\n            self.momA.log_match(\"Job;%s;prolo: found vnode_list_fail[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check vnode_list[] parameter in execjob_launch hook\n        vnode_list = [self.nAv0, self.nAv1, self.nAv2,\n                      self.nB, self.nBv0, self.nBv1,\n                      self.nC, self.nD, self.nE, self.nEv0]\n        for vn in vnode_list:\n            self.momA.log_match(\"Job;%s;launch: found vnode_list[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check vnode_list_fail[] parameter in execjob_launch hook\n        vnode_list_fail = [self.nB, self.nBv0, self.nBv1, self.nC]\n        for vn in vnode_list_fail:\n            self.momA.log_match(\"Job;%s;launch: found vnode_list_fail[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check result of pbs.event().job.release_nodes(keep_select) call\n        self.momA.log_match(\"Job;%s;launch: job.exec_vnode=%s\" % (\n            jid, self.job1_exec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;launch: job.schedselect=%s\" % (\n            jid, self.job1_schedselect), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned from exec_vnode=%s\" % (\n            jid, self.job1_iexec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned to exec_vnode=%s\" % (\n            jid, self.job1_exec_vnode), n=10)\n\n        # Check accounting_logs\n        self.match_accounting_log('S', jid, self.job1_iexec_host_esc,\n                                  self.job1_iexec_vnode_esc, \"10gb\", 13, 5,\n                                  self.job1_place,\n                                  self.job1_isel_esc)\n\n        self.match_accounting_log('s', jid, self.job1_exec_host_esc,\n                                  self.job1_exec_vnode_esc,\n                                  
\"6gb\", 8, 3,\n                                  self.job1_place,\n                                  self.job1_sel_esc)\n\n        # kill momD, simulating a failed mom host (momD is restarted\n        # at the end of the test).\n        self.momD.signal(\"-KILL\")\n\n        self.momA.log_match(\n            \"Job;%s;im_eof, lost communication with %s.+killing job now\" % (\n                jid, self.nD), n=10, max_attempts=30, interval=2, regexp=True)\n\n        self.momA.log_match(\"Job;%s;kill_job\" % (jid,),\n                            n=10, max_attempts=60, interval=2)\n        self.momD.start()\n\n    def test_t11(self):\n        \"\"\"\n        Test: tolerating node failures at job startup with\n              the job having an ncpus=0 assignment. This ensures\n              the hooks will have the info for the ncpus=0 chunks\n              in pbs.event().vnode_list[].\n        \"\"\"\n        # instantiate queuejob hook\n        hook_event = \"queuejob\"\n        hook_name = \"qjob\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.qjob_hook_body)\n\n        # instantiate execjob_begin hook\n        hook_event = \"execjob_begin\"\n        hook_name = \"begin\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.begin_hook_body)\n\n        # instantiate execjob_prologue hook\n        hook_event = \"execjob_prologue\"\n        hook_name = \"prolo\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.prolo_hook_body)\n\n        # instantiate execjob_launch hook\n        hook_event = \"execjob_launch\"\n        hook_name = \"launch\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.launch_hook_body)\n\n        # First, turn off scheduling\n        a = {'scheduling': 'false'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        jid = 
self.create_and_submit_job('job2')\n        # Job gets queued and reflects the incremented values from queuejob\n        # hook\n        self.server.expect(JOB, {'job_state': 'Q',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '10gb',\n                                 'Resource_List.ncpus': 9,\n                                 'Resource_List.nodect': 5,\n                                 'Resource_List.select': self.job2_iselect,\n                                 'Resource_List.site': self.job2_oselect,\n                                 'Resource_List.place': self.job2_place,\n                                 'schedselect': self.job2_ischedselect},\n                           max_attempts=10, id=jid, attrop=PTL_AND)\n\n        a = {'scheduling': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Job eventually launches, reflecting the values pruned back\n        # to the original select spec.\n        # max_attempts=60 is used since it can take up to 60 seconds\n        # for the primary mom to wait for the sisters to join\n        # (default $sister_join_job_alarm of 30 seconds) and for the\n        # sisters' execjob_prologue hooks to finish (default\n        # $job_launch_delay value of 30 seconds)\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 6,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job2_select,\n                                 'Resource_List.place': self.job2_place,\n                                 'schedselect': self.job2_schedselect,\n                                 'exec_host': self.job2_exec_host,\n                                 'exec_vnode': self.job2_exec_vnode},\n            
               id=jid, interval=1, attrop=PTL_AND, max_attempts=60)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.nAv0, self.nAv1],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.nE, self.nEv0],\n                                'free', jobs_assn1, 0, '1048576kb')\n\n        self.match_vnode_status([self.nAv2],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn3 = \"%s/0, %s/1, %s/2\" % (jid, jid, jid)\n        self.match_vnode_status([self.nD], 'free', jobs_assn3,\n                                3, '2097152kb')\n\n        self.match_vnode_status([self.nA, self.nAv3, self.nB, self.nBv0,\n                                 self.nBv1, self.nBv2, self.nBv3, self.nC,\n                                 self.nEv1, self.nEv2, self.nEv3], 'free')\n\n        # Check server/queue counts.\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 6,\n                                    'resources_assigned.mem': '6291456kb'},\n                           attrop=PTL_AND)\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 6,\n                                   'resources_assigned.mem': '6291456kb'},\n                           id='workq', attrop=PTL_AND)\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job2_exec_host_nfile))\n\n        # Verify mom_logs\n        self.momA.log_match(\n            \"Job;%s;job_start_error.+from node %s.+could not JOIN_JOB\" % (\n                jid, self.hostB), n=10, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;ignoring error from %s.+as job \" % (jid, self.hostB) +\n            \"is tolerant of node failures\",\n            regexp=True, n=10)\n\n        self.momA.log_match(\n            \"Job;%s;job_start_error.+from node %s.+\" % (jid, self.hostC) +\n            \"could not IM_EXEC_PROLOGUE\", n=10, 
regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;ignoring error from %s.+\" % (jid, self.hostC) +\n            \"as job is tolerant of node failures\", n=10, regexp=True)\n        # Check vnode_list[] parameter in execjob_prologue hook\n        vnode_list = [self.nAv0, self.nAv1, self.nAv2,\n                      self.nB, self.nBv0, self.nBv1,\n                      self.nC, self.nD, self.nE, self.nEv0]\n        for vn in vnode_list:\n            self.momA.log_match(\"Job;%s;prolo: found vnode_list[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check vnode_list_fail[] parameter in execjob_prologue hook\n        vnode_list_fail = [self.nB, self.nBv0, self.nBv1]\n        for vn in vnode_list_fail:\n            self.momA.log_match(\"Job;%s;prolo: found vnode_list_fail[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check vnode_list[] parameter in execjob_launch hook\n        vnode_list = [self.nAv0, self.nAv1, self.nAv2,\n                      self.nB, self.nBv0, self.nBv1,\n                      self.nC, self.nD, self.nE, self.nEv0]\n        for vn in vnode_list:\n            self.momA.log_match(\"Job;%s;launch: found vnode_list[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check vnode_list_fail[] parameter in execjob_launch hook\n        vnode_list_fail = [self.nB, self.nBv0, self.nBv1, self.nC]\n        for vn in vnode_list_fail:\n            self.momA.log_match(\n                \"Job;%s;launch: found vnode_list_fail[%s]\" % (jid, vn), n=10)\n\n        # Check result of pbs.event().job.release_nodes(keep_select) call\n        self.momA.log_match(\"Job;%s;launch: job.exec_vnode=%s\" % (\n            jid, self.job2_exec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;launch: job.schedselect=%s\" % (\n            jid, self.job2_schedselect), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned from exec_vnode=%s\" % (\n            jid, self.job2_iexec_vnode), n=10)\n\n  
      self.momA.log_match(\"Job;%s;pruned to exec_vnode=%s\" % (\n            jid, self.job2_exec_vnode), n=10)\n\n        # Check accounting_logs\n        self.match_accounting_log('S', jid, self.job2_iexec_host_esc,\n                                  self.job2_iexec_vnode_esc, \"10gb\", 9, 5,\n                                  self.job2_place,\n                                  self.job2_isel_esc)\n\n        self.match_accounting_log('s', jid, self.job2_exec_host_esc,\n                                  self.job2_exec_vnode_esc,\n                                  \"6gb\", 6, 3,\n                                  self.job2_place,\n                                  self.job2_sel_esc)\n\n    def test_t12(self):\n        \"\"\"\n        Test: tolerating node failures at job startup with\n              extra resources requested such as mpiprocs and\n              ompthreads, which affect the content of $PBS_NODEFILE.\n        \"\"\"\n        # instantiate queuejob hook\n        hook_event = \"queuejob\"\n        hook_name = \"qjob\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.qjob_hook_body)\n\n        # instantiate execjob_begin hook\n        hook_event = \"execjob_begin\"\n        hook_name = \"begin\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.begin_hook_body)\n\n        # instantiate execjob_prologue hook\n        hook_event = \"execjob_prologue\"\n        hook_name = \"prolo\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.prolo_hook_body)\n\n        # instantiate execjob_launch hook\n        hook_event = \"execjob_launch\"\n        hook_name = \"launch\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.launch_hook_body)\n\n        # First, turn off scheduling\n        a = {'scheduling': 'false'}\n 
       self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        jid = self.create_and_submit_job('job3')\n        # Job gets queued and reflects the incremented values from queuejob\n        # hook\n        self.server.expect(JOB, {'job_state': 'Q',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '10gb',\n                                 'Resource_List.ncpus': 13,\n                                 'Resource_List.nodect': 5,\n                                 'Resource_List.select': self.job3_iselect,\n                                 'Resource_List.site': self.job3_oselect,\n                                 'Resource_List.place': self.job3_place,\n                                 'schedselect': self.job3_ischedselect},\n                           max_attempts=10, id=jid, attrop=PTL_AND)\n\n        a = {'scheduling': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Job eventually launches, reflecting the values pruned back\n        # to the original select spec.\n        # max_attempts=60 is used since it can take up to 60 seconds\n        # for the primary mom to wait for the sisters to join\n        # (default $sister_join_job_alarm of 30 seconds) and for the\n        # sisters' execjob_prologue hooks to finish (default\n        # $job_launch_delay value of 30 seconds)\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job3_select,\n                                 'Resource_List.place': self.job3_place,\n                                 'schedselect': self.job3_schedselect,\n                                 'exec_host': self.job3_exec_host,\n              
                   'exec_vnode': self.job3_exec_vnode},\n                           id=jid, interval=1, attrop=PTL_AND, max_attempts=60)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.nAv0, self.nAv1],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.nE, self.nEv0],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.nAv2],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn3 = \"%s/0, %s/1, %s/2\" % (jid, jid, jid)\n        self.match_vnode_status([self.nD], 'free', jobs_assn3,\n                                3, '2097152kb')\n\n        self.match_vnode_status([self.nA, self.nAv3, self.nB, self.nBv0,\n                                 self.nBv1, self.nBv2, self.nBv3, self.nC,\n                                 self.nEv1, self.nEv2, self.nEv3], 'free')\n\n        # Check server/queue counts.\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 8,\n                                    'resources_assigned.mem': '6291456kb'},\n                           attrop=PTL_AND)\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 8,\n                                   'resources_assigned.mem': '6291456kb'},\n                           id='workq', attrop=PTL_AND)\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job3_exec_host,\n                                              self.job3_schedselect))\n\n        # Verify mom_logs\n        self.momA.log_match(\n            \"Job;%s;job_start_error.+from node %s.+could not JOIN_JOB\" % (\n                jid, self.hostB), n=10, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;ignoring error from %s.+as job \" % (jid, self.hostB) +\n            \"is tolerant of node failures\",\n            regexp=True, n=10)\n\n        
self.momA.log_match(\n            \"Job;%s;job_start_error.+from node %s.+\" % (jid, self.hostC) +\n            \"could not IM_EXEC_PROLOGUE\", n=10, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;ignoring error from %s.+\" % (jid, self.hostC) +\n            \"as job is tolerant of node failures\", n=10, regexp=True)\n        # Check vnode_list[] parameter in execjob_prologue hook\n        vnode_list = [self.nAv0, self.nAv1, self.nAv2,\n                      self.nB, self.nBv0, self.nBv1,\n                      self.nC, self.nD, self.nE, self.nEv0]\n        for vn in vnode_list:\n            self.momA.log_match(\"Job;%s;prolo: found vnode_list[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check vnode_list_fail[] parameter in execjob_prologue hook\n        vnode_list_fail = [self.nB, self.nBv0, self.nBv1]\n        for vn in vnode_list_fail:\n            self.momA.log_match(\"Job;%s;prolo: found vnode_list_fail[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check vnode_list[] parameter in execjob_launch hook\n        vnode_list = [self.nAv0, self.nAv1, self.nAv2,\n                      self.nB, self.nBv0, self.nBv1,\n                      self.nC, self.nD, self.nE, self.nEv0]\n        for vn in vnode_list:\n            self.momA.log_match(\"Job;%s;launch: found vnode_list[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check vnode_list_fail[] parameter in execjob_launch hook\n        vnode_list_fail = [self.nB, self.nBv0, self.nBv1, self.nC]\n        for vn in vnode_list_fail:\n            self.momA.log_match(\"Job;%s;launch: found vnode_list_fail[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check result of pbs.event().job.release_nodes(keep_select) call\n        self.momA.log_match(\"Job;%s;launch: job.exec_vnode=%s\" % (\n            jid, self.job3_exec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;launch: job.schedselect=%s\" % (\n      
      jid, self.job3_schedselect), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned from exec_vnode=%s\" % (\n            jid, self.job3_iexec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned to exec_vnode=%s\" % (\n            jid, self.job3_exec_vnode), n=10)\n\n        # Check accounting_logs\n        self.match_accounting_log('S', jid, self.job3_iexec_host_esc,\n                                  self.job3_iexec_vnode_esc, \"10gb\", 13, 5,\n                                  self.job3_place,\n                                  self.job3_isel_esc)\n\n        self.match_accounting_log('s', jid, self.job3_exec_host_esc,\n                                  self.job3_exec_vnode_esc,\n                                  \"6gb\", 8, 3,\n                                  self.job3_place,\n                                  self.job3_sel_esc)\n\n    def test_t13(self):\n        \"\"\"\n        Test: pbs.event().job.select.increment_chunks() method.\n        \"\"\"\n        # instantiate queuejob hook\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\nsel=pbs.select(\"ncpus=3:mem=1gb+1:ncpus=2:mem=2gb+2:ncpus=1:mem=3gb\")\ninp=2\nisel=sel.increment_chunks(inp)\npbs.logmsg(pbs.LOG_DEBUG, \"sel=%s\" % (sel,))\npbs.logmsg(pbs.LOG_DEBUG, \"sel.increment_chunks(%d)=%s\" % (inp,isel))\ninp=\"3\"\nisel=sel.increment_chunks(inp)\npbs.logmsg(pbs.LOG_DEBUG, \"sel.increment_chunks(%s)=%s\" % (inp,isel))\ninp=\"23.5%\"\nisel=sel.increment_chunks(inp)\npbs.logmsg(pbs.LOG_DEBUG, \"sel.increment_chunks(%s)=%s\" % (inp,isel))\ninp={0: 0, 1: 4, 2: \"50%\"}\nisel=sel.increment_chunks(inp)\npbs.logmsg(pbs.LOG_DEBUG, \"sel.increment_chunks(%s)=%s\" % (inp,isel))\nsel=pbs.select(\"5:ncpus=3:mem=1gb+1:ncpus=2:mem=2gb+2:ncpus=1:mem=3gb\")\npbs.logmsg(pbs.LOG_DEBUG, \"sel=%s\" % (sel,))\ninp={0: \"50%\", 1: \"50%\", 2: \"50%\"}\nisel=sel.increment_chunks(inp)\npbs.logmsg(pbs.LOG_DEBUG, \"sel.increment_chunks(%s)=%s\" % (inp,isel))\n\"\"\"\n        hook_event = \"queuejob\"\n       
 hook_name = \"qjob\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, hook_body)\n\n        a = {'scheduling': 'false'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        j1 = Job(TEST_USER)\n        j1.set_sleep_time(10)\n        self.server.submit(j1)\n\n        # Verify server_logs\n        self.server.log_match(\n            \"sel=ncpus=3:mem=1gb+1:ncpus=2:mem=2gb+2:ncpus=1:mem=3gb\", n=10)\n\n        self.server.log_match(\n            \"sel.increment_chunks(2)=1:ncpus=3:mem=1gb+\" +\n            \"3:ncpus=2:mem=2gb+4:ncpus=1:mem=3gb\", n=10)\n\n        self.server.log_match(\n            \"sel.increment_chunks(3)=1:ncpus=3:mem=1gb+\" +\n            \"4:ncpus=2:mem=2gb+5:ncpus=1:mem=3gb\", n=10)\n\n        self.server.log_match(\n            \"sel.increment_chunks(23.5%)=1:ncpus=3:mem=1gb+\" +\n            \"2:ncpus=2:mem=2gb+3:ncpus=1:mem=3gb\", n=10)\n\n        self.server.log_match(\n            \"sel.increment_chunks({0: 0, 1: 4, 2: \\'50%\\'})=1:ncpus=3:\" +\n            \"mem=1gb+5:ncpus=2:mem=2gb+3:ncpus=1:mem=3gb\", n=10)\n\n        self.server.log_match(\n            \"sel=5:ncpus=3:mem=1gb+1:ncpus=2:mem=2gb+2:ncpus=1:mem=3gb\",\n            n=10)\n\n        self.server.log_match(\n            \"sel.increment_chunks({0: \\'50%\\', 1: \\'50%\\', 2: \\'50%\\'})=\" +\n            \"7:ncpus=3:mem=1gb+2:ncpus=2:mem=2gb+3:ncpus=1:mem=3gb\", n=10)\n\n    def test_t14(self):\n        \"\"\"\n        Test: tolerating job_start with no node failures,\n              but pruning the job's assigned nodes to satisfy the original\n              select spec + 1 additional node.\n              Basically, given an original spec requiring\n              3 nodes, a queuejob hook has added 2 more nodes,\n              resulting in a new assignment:\n                   exec_vnode=(A)+(B)+(C)+(D)+(E) where\n              (C) mirrors (B) and satisfies the second chunk, and (E)\n              mirrors (D) and satisfies the third chunk.\n              Now pruning the assigned nodes down to 4 nodes would\n              result in:\n                   exec_vnode=(A)+(B)+(D)+(e1)\n              where (E) is a super-chunk of the form (e1+e2) and only\n              the 'e1' part is needed.\n        \"\"\"\n        # instantiate queuejob hook\n        hook_event = \"queuejob\"\n        hook_name = \"qjob\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.qjob_hook_body)\n\n        # instantiate execjob_launch hook\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\n\nif 'PBS_NODEFILE' not in e.env:\n    e.accept()\n\npbs.logmsg(pbs.LOG_DEBUG, \"Executing launch\")\n\nfor vn in e.vnode_list:\n    v = e.vnode_list[vn]\n    pbs.logjobmsg(e.job.id, \"launch: found vnode_list[\" + v.name + \"]\")\n\nfor vn in e.vnode_list_fail:\n    v = e.vnode_list_fail[vn]\n    pbs.logjobmsg(e.job.id, \"launch: found vnode_list_fail[\" + v.name + \"]\")\nif e.job.in_ms_mom():\n    new_jsel = e.job.Resource_List[\"site\"] + \"+ncpus=1:mem=1gb\"\n    pj = e.job.release_nodes(keep_select=new_jsel)\n    pbs.logmsg(pbs.LOG_DEBUG, \"release_nodes(keep_select=%s)\" % (new_jsel,))\n    if pj is not None:\n        pbs.logjobmsg(e.job.id, \"launch: job.exec_vnode=%s\" % (pj.exec_vnode,))\n        pbs.logjobmsg(e.job.id, \"launch: job.exec_host=%s\" % (pj.exec_host,))\n        pbs.logjobmsg(e.job.id,\n                      \"launch: job.schedselect=%s\" % (pj.schedselect,))\n    else:\n        e.job.delete()\n        msg = \"unsuccessful at LAUNCH\"\n        e.reject(msg)\n\"\"\"\n        hook_event = \"execjob_launch\"\n        hook_name = \"launch\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, hook_body)\n\n        # First, turn off scheduling\n        a = {'scheduling': 'false'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        jid = 
self.create_and_submit_job('job1_4')\n        # Job gets queued and reflects the incremented values from queuejob\n        # hook\n        self.server.expect(JOB, {'job_state': 'Q',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '10gb',\n                                 'Resource_List.ncpus': 13,\n                                 'Resource_List.nodect': 5,\n                                 'Resource_List.select': self.job1_iselect,\n                                 'Resource_List.site': self.job1_oselect,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_ischedselect},\n                           id=jid, attrop=PTL_AND)\n\n        a = {'scheduling': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '7gb',\n                                 'Resource_List.ncpus': 9,\n                                 'Resource_List.nodect': 4,\n                                 'Resource_List.select': self.job1v6_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1v6_schedselect,\n                                 'exec_host': self.job1v6_exec_host,\n                                 'exec_vnode': self.job1v6_exec_vnode},\n                           id=jid, interval=1, attrop=PTL_AND, max_attempts=60)\n\n        thisjob = self.server.status(JOB, id=jid)\n        if thisjob:\n            job_output_file = thisjob[0]['Output_Path'].split(':')[1]\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status(\n            [self.nAv0, self.nAv1, self.nB, self.nBv0, self.nE],\n            'job-busy', 
jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.nAv2, self.nBv1],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n\n        self.match_vnode_status([self.nC], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.nA, self.nAv3, self.nBv2, self.nBv3,\n                                 self.nEv0, self.nEv1, self.nEv2,\n                                 self.nEv3], 'free')\n\n        # Check server/queue counts.\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 9,\n                                    'resources_assigned.mem': '7340032kb'},\n                           attrop=PTL_AND)\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 9,\n                                   'resources_assigned.mem': '7340032kb'},\n                           id='workq', attrop=PTL_AND)\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1v6_exec_host))\n\n        # Check vnode_list[] parameter in execjob_launch hook\n        vnode_list = [self.nAv0, self.nAv1, self.nAv2,\n                      self.nB, self.nBv0, self.nBv1,\n                      self.nC, self.nD, self.nE, self.nEv0]\n        for vn in vnode_list:\n            self.momA.log_match(\"Job;%s;launch: found vnode_list[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check result of pbs.event().job.release_nodes(keep_select) call\n        self.momA.log_match(\"Job;%s;launch: job.exec_vnode=%s\" % (\n            jid, self.job1v6_exec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;launch: job.schedselect=%s\" % (\n            jid, self.job1v6_schedselect), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned from exec_vnode=%s\" % (\n            jid, self.job1_iexec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned to exec_vnode=%s\" % (\n            jid, self.job1v6_exec_vnode), 
n=10)\n\n        # Check accounting_logs\n        self.match_accounting_log('S', jid, self.job1_iexec_host_esc,\n                                  self.job1_iexec_vnode_esc, \"10gb\", 13, 5,\n                                  self.job1_place,\n                                  self.job1_isel_esc)\n\n        self.match_accounting_log('s', jid, self.job1v6_exec_host_esc,\n                                  self.job1v6_exec_vnode_esc,\n                                  \"7gb\", 9, 4,\n                                  self.job1_place,\n                                  self.job1v6_sel_esc)\n        self.momA.log_match(\"Job;%s;task.+started, hostname\" % (jid,),\n                            n=10, interval=5, regexp=True)\n\n        self.momA.log_match(\"Job;%s;copy file request received\" % (jid,),\n                            n=10, interval=5)\n\n        # validate output\n        expected_out = \"\"\"/var/spool/pbs/aux/%s\n%s\n%s\n%s\n%s\nFIB TESTS\npbsdsh -n 1 fib 37\n%d\npbsdsh -n 2 fib 37\n%d\npbsdsh -n 3 fib 37\n%d\nfib 37\n%d\nHOSTNAME TESTS\npbsdsh -n 0 hostname\n%s\npbsdsh -n 1 hostname\n%s\npbsdsh -n 2 hostname\n%s\npbsdsh -n 3 hostname\n%s\nPBS_NODEFILE tests\nHOST=%s\npbs_tmrsh %s hostname\n%s\nHOST=%s\npbs_tmrsh %s hostname\n%s\nHOST=%s\npbs_tmrsh %s hostname\n%s\nHOST=%s\npbs_tmrsh %s hostname\n%s\n\"\"\" % (jid, self.momA.hostname, self.momB.hostname, self.momC.hostname,\n            self.momE.hostname,\n            self.fib37_value, self.fib37_value, self.fib37_value,\n            self.fib37_value,\n            self.momA.shortname, self.momB.shortname, self.momC.shortname,\n            self.momE.shortname,\n            self.momA.hostname, self.momA.hostname, self.momA.shortname,\n            self.momB.hostname, self.momB.hostname, self.momB.shortname,\n            self.momC.hostname, self.momC.hostname, self.momC.shortname,\n            self.momE.hostname, self.momE.hostname, self.momE.shortname)\n\n        self.logger.info(\"expected out=%s\" % 
 (expected_out,))\n        job_out = \"\"\n        with open(job_output_file, 'r') as fd:\n            job_out = fd.read()\n            self.logger.info(\"job_out=%s\" % (job_out,))\n\n        self.assertEqual(job_out, expected_out)\n\n    def test_t15(self):\n        \"\"\"\n        Test: tolerating job_start with no node failures,\n              but pruning the job's assigned nodes to satisfy the original\n              select spec minus 1 node, except one of the chunks is\n              unsatisfiable. This time, the pbs.event().job.delete()\n              action is invoked on a failure to prune the job.\n        \"\"\"\n        # instantiate queuejob hook\n        hook_event = \"queuejob\"\n        hook_name = \"qjob\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.qjob_hook_body)\n\n        # instantiate execjob_launch hook\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\n\nif 'PBS_NODEFILE' not in e.env:\n    e.accept()\n\npbs.logmsg(pbs.LOG_DEBUG, \"Executing launch\")\n\nfor vn in e.vnode_list:\n    v = e.vnode_list[vn]\n    pbs.logjobmsg(e.job.id, \"launch: found vnode_list[\" + v.name + \"]\")\n\nfor vn in e.vnode_list_fail:\n    v = e.vnode_list_fail[vn]\n    pbs.logjobmsg(e.job.id, \"launch: found vnode_list_fail[\" + v.name + \"]\")\nif e.job.in_ms_mom():\n    new_jsel = \"ncpus=3:mem=2gb+ncpus=5:mem=3gb\"\n    pj = e.job.release_nodes(keep_select=new_jsel)\n    pbs.logmsg(pbs.LOG_DEBUG, \"release_nodes(keep_select=%s)\" % (new_jsel,))\n    if pj is not None:\n        pbs.logjobmsg(e.job.id, \"launch: job.exec_vnode=%s\" % (pj.exec_vnode,))\n        pbs.logjobmsg(e.job.id, \"launch: job.exec_host=%s\" % (pj.exec_host,))\n        pbs.logjobmsg(e.job.id,\n                      \"launch: job.schedselect=%s\" % (pj.schedselect,))\n    else:\n        e.job.delete()\n        msg = \"unsuccessful at LAUNCH\"\n        e.reject(msg)\n\"\"\"\n        hook_event = 
\"execjob_launch\"\n        hook_name = \"launch\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, hook_body)\n\n        # First, turn off scheduling\n        a = {'scheduling': 'false'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        jid = self.create_and_submit_job('job1_4')\n        # Job gets queued and reflects the incremented values from queuejob\n        # hook\n        self.server.expect(JOB, {'job_state': 'Q',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '10gb',\n                                 'Resource_List.ncpus': 13,\n                                 'Resource_List.nodect': 5,\n                                 'Resource_List.select': self.job1_iselect,\n                                 'Resource_List.site': self.job1_oselect,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_ischedselect},\n                           id=jid, attrop=PTL_AND)\n\n        a = {'scheduling': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        self.momA.log_match(\"Job;%s;could not satisfy select chunk\" % (jid,),\n                            n=10, max_attempts=60, interval=2)\n\n        self.momA.log_match(\"Job;%s;NEED chunks for keep_select\" % (jid,),\n                            n=10)\n\n        self.momA.log_match(\n            \"Job;%s;HAVE chunks from job's exec_vnode\" % (jid,), n=10)\n\n        self.momA.log_match(\"execjob_launch request rejected by 'launch'\",\n                            n=10)\n\n        errmsg = \"unsuccessful at LAUNCH\"\n        self.momA.log_match(\"Job;%s;%s\" % (jid, errmsg,), n=10)\n\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid)\n\n    def test_t16(self):\n        \"\"\"\n        Test: tolerating node failures at job startup with\n              a job submitted with 
-l place=\"scatter:excl\".\n              Like jobs submitted with only \"-l place=scatter\"\n              except the vnodes assigned would have a\n              \"job-exclusive\" state.\n        \"\"\"\n        # instantiate queuejob hook\n        hook_event = \"queuejob\"\n        hook_name = \"qjob\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.qjob_hook_body)\n\n        # instantiate execjob_begin hook\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"Executing begin\")\nlocalnode=pbs.get_local_nodename()\nif not e.job.in_ms_mom() and (localnode == '%s'):\n    x  # undefined name: deliberately raise an error so this mom fails to join\n\"\"\" % (self.nB,)\n        hook_event = \"execjob_begin\"\n        hook_name = \"begin\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, hook_body)\n\n        # instantiate execjob_prologue hook\n        hook_event = \"execjob_prologue\"\n        hook_name = \"prolo\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.prolo_hook_body)\n\n        # instantiate execjob_launch hook\n        hook_event = \"execjob_launch\"\n        hook_name = \"launch\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.launch_hook_body)\n\n        # First, turn off scheduling\n        a = {'scheduling': 'false'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        jid = self.create_and_submit_job('job4')\n        # Job gets queued and reflects the incremented values from queuejob\n        # hook\n        self.server.expect(JOB, {'job_state': 'Q',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '10gb',\n                                 'Resource_List.ncpus': 13,\n                                 'Resource_List.nodect': 5,\n                            
     'Resource_List.select': self.job4_iselect,\n                                 'Resource_List.site': self.job4_oselect,\n                                 'Resource_List.place': self.job4_place,\n                                 'schedselect': self.job4_ischedselect},\n                           max_attempts=10, id=jid, attrop=PTL_AND)\n\n        a = {'scheduling': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Job eventually launches, reflecting the values pruned back\n        # to the original select spec.\n        # max_attempts=60 is used because it can take up to 60 seconds\n        # for the primary mom to wait for the sisters to join\n        # (default $sister_join_job_alarm of 30 seconds) and to wait for\n        # the sisters to run their execjob_prologue hooks (default\n        # $job_launch_delay value of 30 seconds)\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job4_select,\n                                 'Resource_List.place': self.job4_place,\n                                 'schedselect': self.job4_schedselect,\n                                 'exec_host': self.job4_exec_host,\n                                 'exec_vnode': self.job4_exec_vnode},\n                           id=jid, interval=1, attrop=PTL_AND, max_attempts=60)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.nAv0, self.nAv1],\n                                'job-exclusive', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.nE, self.nEv0],\n                                'job-exclusive', jobs_assn1, 1, '1048576kb')\n\n        
self.match_vnode_status([self.nAv2],\n                                'job-exclusive', jobs_assn1, 1, '0kb')\n\n        jobs_assn3 = \"%s/0, %s/1, %s/2\" % (jid, jid, jid)\n        self.match_vnode_status([self.nD], 'job-exclusive', jobs_assn3,\n                                3, '2097152kb')\n\n        self.match_vnode_status([self.nA, self.nAv3, self.nB, self.nBv0,\n                                 self.nBv1, self.nBv2, self.nBv3, self.nC,\n                                 self.nEv1, self.nEv2, self.nEv3], 'free')\n\n        # Check server/queue counts.\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 8,\n                                    'resources_assigned.mem': '6291456kb'},\n                           attrop=PTL_AND)\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 8,\n                                   'resources_assigned.mem': '6291456kb'},\n                           id='workq', attrop=PTL_AND)\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job4_exec_host))\n\n        # Verify mom_logs\n        self.momA.log_match(\n            \"Job;%s;job_start_error.+from node %s.+could not JOIN_JOB\" % (\n                jid, self.hostB), n=10, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;ignoring error from %s.+as job \" % (jid, self.hostB) +\n            \"is tolerant of node failures\",\n            regexp=True, n=10)\n\n        self.momA.log_match(\n            \"Job;%s;job_start_error.+from node %s.+\" % (jid, self.hostC) +\n            \"could not IM_EXEC_PROLOGUE\", n=10, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;ignoring error from %s.+\" % (jid, self.hostC) +\n            \"as job is tolerant of node failures\", n=10, regexp=True)\n        # Check vnode_list[] parameter in execjob_prologue hook\n        vnode_list = [self.nAv0, self.nAv1, self.nAv2,\n                      self.nB, self.nBv0, self.nBv1,\n                      self.nC, self.nD, 
self.nE, self.nEv0]\n        for vn in vnode_list:\n            self.momA.log_match(\"Job;%s;prolo: found vnode_list[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check vnode_list_fail[] parameter in execjob_prologue hook\n        vnode_list_fail = [self.nB, self.nBv0, self.nBv1]\n        for vn in vnode_list_fail:\n            self.momA.log_match(\"Job;%s;prolo: found vnode_list_fail[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check vnode_list[] parameter in execjob_launch hook\n        vnode_list = [self.nAv0, self.nAv1, self.nAv2,\n                      self.nB, self.nBv0, self.nBv1,\n                      self.nC, self.nD, self.nE, self.nEv0]\n        for vn in vnode_list:\n            self.momA.log_match(\"Job;%s;launch: found vnode_list[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check vnode_list_fail[] parameter in execjob_launch hook\n        vnode_list_fail = [self.nB, self.nBv0, self.nBv1, self.nC]\n        for vn in vnode_list_fail:\n            self.momA.log_match(\"Job;%s;launch: found vnode_list_fail[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check result of pbs.event().job.release_nodes(keep_select) call\n        self.momA.log_match(\"Job;%s;launch: job.exec_vnode=%s\" % (\n            jid, self.job4_exec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;launch: job.schedselect=%s\" % (\n            jid, self.job4_schedselect), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned from exec_vnode=%s\" % (\n            jid, self.job4_iexec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned to exec_vnode=%s\" % (\n            jid, self.job4_exec_vnode), n=10)\n\n        # Check accounting_logs\n        self.match_accounting_log('S', jid, self.job4_iexec_host_esc,\n                                  self.job4_iexec_vnode_esc, \"10gb\", 13, 5,\n                                  self.job4_place,\n                                  
self.job4_isel_esc)\n\n        self.match_accounting_log('s', jid, self.job4_exec_host_esc,\n                                  self.job4_exec_vnode_esc,\n                                  \"6gb\", 8, 3,\n                                  self.job4_place,\n                                  self.job4_sel_esc)\n\n    def test_t17(self):\n        \"\"\"\n        Test: tolerating 1 node failure at job startup with\n              a job submitted with -l place=\"free\".\n              Like jobs submitted with only \"-l place=scatter\"\n              except some vnodes from the same mom would get\n              allocated to satisfy different chunks.\n              This test breaks apart one of the multi-chunks of\n              the form (b1+b2+b3) so that upon reassignment,\n              (b1+b2) is used.\n        \"\"\"\n        # instantiate queuejob hook\n        hook_event = \"queuejob\"\n        hook_name = \"qjob\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.qjob_hook_body)\n\n        # instantiate execjob_begin hook\n        hook_event = \"execjob_begin\"\n        hook_name = \"begin\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.begin_hook_body4)\n\n        # instantiate execjob_prologue hook\n        hook_event = \"execjob_prologue\"\n        hook_name = \"prolo\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.prolo_hook_body3)\n\n        # instantiate execjob_launch hook\n        hook_event = \"execjob_launch\"\n        hook_name = \"launch\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.launch_hook_body2)\n\n        # First, turn off scheduling\n        a = {'scheduling': 'false'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        jid = self.create_and_submit_job('job5')\n        # 
Job gets queued and reflects the incremented values from queuejob\n        # hook\n        self.server.expect(JOB, {'job_state': 'Q',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '10gb',\n                                 'Resource_List.ncpus': 13,\n                                 'Resource_List.nodect': 5,\n                                 'Resource_List.select': self.job5_iselect,\n                                 'Resource_List.site': self.job5_oselect,\n                                 'Resource_List.place': self.job5_place,\n                                 'schedselect': self.job5_ischedselect},\n                           max_attempts=10, id=jid, attrop=PTL_AND)\n\n        a = {'scheduling': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Job eventually launches, reflecting the values pruned back\n        # to the original select spec.\n        # max_attempts=60 is used because it can take up to 60 seconds\n        # for the primary mom to wait for the sisters to join\n        # (default $sister_join_job_alarm of 30 seconds) and to wait for\n        # the sisters to run their execjob_prologue hooks (default\n        # $job_launch_delay value of 30 seconds)\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '5gb',\n                                 'Resource_List.ncpus': 7,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job5_select,\n                                 'Resource_List.place': self.job5_place,\n                                 'schedselect': self.job5_schedselect,\n                                 'exec_host': self.job5_exec_host,\n                                 'exec_vnode': self.job5_exec_vnode},\n                           id=jid, interval=1, 
attrop=PTL_AND, max_attempts=60)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status(\n            [self.nAv0, self.nAv1, self.nB, self.nBv0],\n            'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.nAv2, self.nBv2],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        # due to free placement, the job appears twice since it has been\n        # allocated twice: once for mem only and once for ncpus\n        jobs_assn2 = \"%s/0, %s/0\" % (jid, jid)\n        self.match_vnode_status([self.nBv1],\n                                'job-busy', jobs_assn2, 1, '1048576kb')\n\n        self.match_vnode_status([self.nA, self.nAv3, self.nC, self.nD,\n                                 self.nE, self.nEv0, self.nEv1,\n                                 self.nEv2, self.nEv3, self.nBv3], 'free')\n\n        # Check server/queue counts.\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 7,\n                                    'resources_assigned.mem': '5242880kb'},\n                           attrop=PTL_AND)\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 7,\n                                   'resources_assigned.mem': '5242880kb'},\n                           id='workq', attrop=PTL_AND)\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job5_exec_host))\n\n        # Verify mom_logs\n        self.momA.log_match(\n            \"Job;%s;job_start_error.+from node %s.+could not JOIN_JOB\" % (\n                jid, self.hostD), n=10, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;ignoring error from %s.+as job \" % (jid, self.hostD) +\n            \"is tolerant of node failures\",\n            regexp=True, n=10)\n\n        # Check vnode_list[] parameter in execjob_prologue hook\n        vnode_list = [self.nAv0, self.nAv1, self.nAv2,\n                      self.nB, self.nBv0, self.nBv1,\n 
                     self.nC, self.nD]\n        for vn in vnode_list:\n            self.momA.log_match(\"Job;%s;prolo: found vnode_list[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check vnode_list[] parameter in execjob_launch hook\n        vnode_list = [self.nAv0, self.nAv1, self.nAv2,\n                      self.nB, self.nBv0, self.nBv1,\n                      self.nC, self.nD]\n        for vn in vnode_list:\n            self.momA.log_match(\"Job;%s;launch: found vnode_list[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check vnode_list_fail[] parameter in execjob_launch hook\n        vnode_list_fail = [self.nD]\n        for vn in vnode_list_fail:\n            self.momA.log_match(\"Job;%s;launch: found vnode_list_fail[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check result of pbs.event().job.release_nodes(keep_select) call\n        self.momA.log_match(\"Job;%s;launch: job.exec_vnode=%s\" % (\n            jid, self.job5_exec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;launch: job.schedselect=%s\" % (\n            jid, self.job5_schedselect), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned from exec_vnode=%s\" % (\n            jid, self.job5_iexec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned to exec_vnode=%s\" % (\n            jid, self.job5_exec_vnode), n=10)\n\n        # Check accounting_logs\n        self.match_accounting_log('S', jid, self.job5_iexec_host_esc,\n                                  self.job5_iexec_vnode_esc, \"10gb\", 13, 5,\n                                  self.job5_place,\n                                  self.job5_isel_esc)\n\n        self.match_accounting_log('s', jid, self.job5_exec_host_esc,\n                                  self.job5_exec_vnode_esc,\n                                  \"5gb\", 7, 3,\n                                  self.job5_place,\n                                  self.job5_sel_esc)\n\n    def 
test_t18(self):\n        \"\"\"\n        Test: having a node failure tolerant job waiting for healthy nodes\n              to get rerun (i.e. qrerun). Upon qrerun, job should get\n              killed, requeued, and restarted.\n        \"\"\"\n        # instantiate queuejob hook\n        hook_event = \"queuejob\"\n        hook_name = \"qjob\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.qjob_hook_body)\n\n        # instantiate execjob_begin hook\n        hook_event = \"execjob_begin\"\n        hook_name = \"begin\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.begin_hook_body)\n\n        # instantiate execjob_launch hook\n        hook_event = \"execjob_launch\"\n        hook_name = \"launch\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.launch_hook_body)\n\n        jid = self.create_and_submit_job('job1')\n        # job's substate is 41 (PRERUN) since it would be waiting for\n        # healthy nodes being a node failure tolerant job.\n        # With no prologue hook, MS would wait the default 30\n        # seconds for healthy nodes.\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'substate': 41,\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '10gb',\n                                 'Resource_List.ncpus': 13,\n                                 'Resource_List.nodect': 5,\n                                 'exec_host': self.job1_iexec_host,\n                                 'exec_vnode': self.job1_iexec_vnode,\n                                 'Resource_List.select': self.job1_iselect,\n                                 'Resource_List.site': self.job1_oselect,\n                                 'Resource_List.place': self.job1_place,\n                     
            'schedselect': self.job1_ischedselect},\n                           id=jid, attrop=PTL_AND)\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.nAv0, self.nAv1, self.nB, self.nBv0,\n                                 self.nE, self.nEv0],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.nAv2, self.nBv1],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.nC], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        jobs_assn3 = \"%s/0, %s/1, %s/2\" % (jid, jid, jid)\n        self.match_vnode_status([self.nD], 'free', jobs_assn3,\n                                3, '2097152kb')\n\n        self.match_vnode_status([self.nA, self.nAv3, self.nBv2, self.nBv3,\n                                 self.nEv1, self.nEv2, self.nEv3], 'free')\n\n        # check server/queue counts\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 13,\n                                    'resources_assigned.mem': '10485760kb'},\n                           attrop=PTL_AND)\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 13,\n                                   'resources_assigned.mem': '10485760kb'},\n                           id='workq', attrop=PTL_AND)\n\n        # Verify mom_logs\n        self.momA.log_match(\n            \"Job;%s;job_start_error.+from node %s.+could not JOIN_JOB\" % (\n                jid, self.hostB), n=10, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;ignoring error from %s.+as job \" % (jid, self.hostB) +\n            \"is tolerant of node failures\",\n            regexp=True, n=10)\n\n        a = {'scheduling': 'false'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        self.server.rerunjob(jid)\n\n        self.server.expect(JOB, {'job_state': 
'Q'}, id=jid)\n\n        self.match_vnode_status([self.nA, self.nAv0, self.nAv1, self.nAv2,\n                                 self.nAv3,\n                                 self.nB, self.nBv0, self.nBv1, self.nBv2,\n                                 self.nBv3, self.nC, self.nD, self.nE,\n                                 self.nEv0, self.nEv1, self.nEv2,\n                                 self.nEv3], 'free')\n\n        # check server/queue counts\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 0,\n                                    'resources_assigned.mem': '0kb'},\n                           attrop=PTL_AND)\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 0,\n                                   'resources_assigned.mem': '0kb'},\n                           id='workq', attrop=PTL_AND)\n\n        a = {'scheduling': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Now job should start running again\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1v2_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1v2_schedselect,\n                                 'exec_host': self.job1v2_exec_host,\n                                 'exec_vnode': self.job1v2_exec_vnode},\n                           id=jid, interval=1, attrop=PTL_AND, max_attempts=70)\n\n        thisjob = self.server.status(JOB, id=jid)\n        if thisjob:\n            job_output_file = thisjob[0]['Output_Path'].split(':')[1]\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.nAv0, 
self.nAv1],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        self.match_vnode_status([self.nAv2],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.nC], 'job-busy', jobs_assn2,\n                                2, '2097152kb')\n\n        jobs_assn3 = \"%s/0, %s/1, %s/2\" % (jid, jid, jid)\n        self.match_vnode_status([self.nD], 'free', jobs_assn3,\n                                3, '2097152kb')\n\n        self.match_vnode_status([self.nA, self.nAv3, self.nB, self.nBv0,\n                                 self.nBv1, self.nBv2, self.nBv3, self.nE,\n                                 self.nEv0, self.nEv1, self.nEv2,\n                                 self.nEv3], 'free')\n\n        # check server/queue counts\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 8,\n                                    'resources_assigned.mem': '6291456kb'},\n                           attrop=PTL_AND)\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 8,\n                                   'resources_assigned.mem': '6291456kb'},\n                           id='workq', attrop=PTL_AND)\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1v2_exec_host))\n\n        # Verify mom_logs\n        self.momA.log_match(\n            \"Job;%s;job_start_error.+from node %s.+could not JOIN_JOB\" % (\n                jid, self.hostB), n=10, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;ignoring error from %s.+as job \" % (jid, self.hostB) +\n            \"is tolerant of node failures\",\n            regexp=True, n=10)\n\n        # Check vnode_list[] parameter in execjob_launch hook\n        vnode_list = [self.nAv0, self.nAv1, self.nAv2,\n                      self.nB, self.nBv0, self.nBv1,\n                      self.nC, self.nD, self.nE, self.nEv0]\n        for vn in vnode_list:\n   
         self.momA.log_match(\"Job;%s;launch: found vnode_list[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check vnode_list_fail[] parameter in execjob_launch hook\n        vnode_list_fail = [self.nB, self.nBv0, self.nBv1]\n        for vn in vnode_list_fail:\n            self.momA.log_match(\"Job;%s;launch: found vnode_list_fail[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check result of pbs.event().job.release_nodes(keep_select) call\n        self.momA.log_match(\"Job;%s;launch: job.exec_vnode=%s\" % (\n            jid, self.job1v2_exec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;launch: job.schedselect=%s\" % (\n            jid, self.job1v2_schedselect), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned from exec_vnode=%s\" % (\n            jid, self.job1_iexec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned to exec_vnode=%s\" % (\n            jid, self.job1v2_exec_vnode), n=10)\n\n        # Check accounting_logs\n        self.match_accounting_log('S', jid, self.job1_iexec_host_esc,\n                                  self.job1_iexec_vnode_esc, \"10gb\", 13, 5,\n                                  self.job1_place,\n                                  self.job1_isel_esc)\n\n        self.match_accounting_log('s', jid, self.job1v2_exec_host_esc,\n                                  self.job1v2_exec_vnode_esc,\n                                  \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  self.job1v2_sel_esc)\n        self.momA.log_match(\"Job;%s;task.+started, hostname\" % (jid,),\n                            n=10, interval=5, regexp=True)\n\n        self.momA.log_match(\"Job;%s;copy file request received\" % (jid,),\n                            n=10, interval=5)\n\n        # validate output\n        expected_out = \"\"\"/var/spool/pbs/aux/%s\n%s\n%s\n%s\nFIB TESTS\npbsdsh -n 1 fib 37\n%d\npbsdsh -n 2 fib 37\n%d\nfib 37\n%d\nHOSTNAME 
TESTS\npbsdsh -n 0 hostname\n%s\npbsdsh -n 1 hostname\n%s\npbsdsh -n 2 hostname\n%s\nPBS_NODEFILE tests\nHOST=%s\npbs_tmrsh %s hostname\n%s\nHOST=%s\npbs_tmrsh %s hostname\n%s\nHOST=%s\npbs_tmrsh %s hostname\n%s\n\"\"\" % (jid, self.momA.hostname, self.momD.hostname, self.momC.hostname,\n            self.fib37_value, self.fib37_value, self.fib37_value,\n            self.momA.shortname, self.momD.shortname, self.momC.shortname,\n            self.momA.hostname, self.momA.hostname, self.momA.shortname,\n            self.momD.hostname, self.momD.hostname, self.momD.shortname,\n            self.momC.hostname, self.momC.hostname, self.momC.shortname)\n\n        job_out = \"\"\n        with open(job_output_file, 'r') as fd:\n            job_out = fd.read()\n\n        self.assertIn(expected_out, job_out, \"job output is not present\")\n\n        # Re-check vnode_list[] parameter in execjob_launch hook\n        vnode_list = [self.nAv0, self.nAv1, self.nAv2,\n                      self.nB, self.nBv0, self.nBv1,\n                      self.nC, self.nD, self.nE, self.nEv0]\n        for vn in vnode_list:\n            self.momA.log_match(\"Job;%s;launch: found vnode_list[%s]\" % (\n                                jid, vn), n=10)\n\n        # Re-check vnode_list_fail[] parameter in execjob_launch hook\n        vnode_list_fail = [self.nB, self.nBv0, self.nBv1]\n        for vn in vnode_list_fail:\n            self.momA.log_match(\"Job;%s;launch: found vnode_list_fail[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check result of pbs.event().job.release_nodes(keep_select) call\n        self.momA.log_match(\"Job;%s;launch: job.exec_vnode=%s\" % (\n            jid, self.job1v2_exec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;launch: job.schedselect=%s\" % (\n            jid, self.job1v2_schedselect), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned from exec_vnode=%s\" % (\n            jid, self.job1_iexec_vnode), n=10)\n\n        
self.momA.log_match(\"Job;%s;pruned to exec_vnode=%s\" % (\n            jid, self.job1v2_exec_vnode), n=10)\n\n        # Check accounting_logs\n        self.match_accounting_log('S', jid, self.job1_iexec_host_esc,\n                                  self.job1_iexec_vnode_esc, \"10gb\", 13, 5,\n                                  self.job1_place,\n                                  self.job1_isel_esc)\n\n        self.match_accounting_log('s', jid, self.job1v2_exec_host_esc,\n                                  self.job1v2_exec_vnode_esc,\n                                  \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  self.job1v2_sel_esc)\n        self.momA.log_match(\"Job;%s;task.+started, hostname\" % (jid,),\n                            n=10, interval=5, regexp=True)\n\n        self.momA.log_match(\"Job;%s;copy file request received\" % (jid,),\n                            n=10, interval=5)\n\n        # validate output\n        expected_out = \"\"\"/var/spool/pbs/aux/%s\n%s\n%s\n%s\nFIB TESTS\npbsdsh -n 1 fib 37\n%d\npbsdsh -n 2 fib 37\n%d\nfib 37\n%d\nHOSTNAME TESTS\npbsdsh -n 0 hostname\n%s\npbsdsh -n 1 hostname\n%s\npbsdsh -n 2 hostname\n%s\nPBS_NODEFILE tests\nHOST=%s\npbs_tmrsh %s hostname\n%s\nHOST=%s\npbs_tmrsh %s hostname\n%s\nHOST=%s\npbs_tmrsh %s hostname\n%s\n\"\"\" % (jid, self.momA.hostname, self.momD.hostname, self.momC.hostname,\n            self.fib37_value, self.fib37_value, self.fib37_value,\n            self.momA.shortname, self.momD.shortname, self.momC.shortname,\n            self.momA.hostname, self.momA.hostname, self.momA.shortname,\n            self.momD.hostname, self.momD.hostname, self.momD.shortname,\n            self.momC.hostname, self.momC.hostname, self.momC.shortname)\n\n        job_out = \"\"\n        with open(job_output_file, 'r') as fd:\n            job_out = fd.read()\n\n        self.assertIn(expected_out, job_out, \"job output is not present\")\n\n    def test_t19(self):\n    
    \"\"\"\n        Test: having a node tolerant job waiting for healthy nodes\n              to get issued a request to release nodes. The call\n              to pbs_release_nodes would fail given that the job\n              is not fully running yet, still figuring out which nodes\n              assigned are deemed good.\n        \"\"\"\n        # instantiate queuejob hook\n        hook_event = \"queuejob\"\n        hook_name = \"qjob\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.qjob_hook_body)\n\n        # instantiate execjob_begin hook\n        hook_event = \"execjob_begin\"\n        hook_name = \"begin\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.begin_hook_body)\n\n        # instantiate execjob_launch hook\n        hook_event = \"execjob_launch\"\n        hook_name = \"launch\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.launch_hook_body)\n\n        jid = self.create_and_submit_job('job1')\n        # job's substate is 41 (PRERUN) since it would be waiting for\n        # healthy nodes being a node failure tolerant job\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'substate': 41,\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '10gb',\n                                 'Resource_List.ncpus': 13,\n                                 'Resource_List.nodect': 5,\n                                 'exec_host': self.job1_iexec_host,\n                                 'exec_vnode': self.job1_iexec_vnode,\n                                 'Resource_List.select': self.job1_iselect,\n                                 'Resource_List.site': self.job1_oselect,\n                                 'Resource_List.place': self.job1_place,\n                          
       'schedselect': self.job1_ischedselect},\n                           id=jid, attrop=PTL_AND)\n\n        # Verify mom_logs\n        self.momA.log_match(\n            \"Job;%s;job_start_error.+from node %s.+could not JOIN_JOB\" % (\n                jid, self.hostB), n=10, regexp=True)\n\n        self.momA.log_match(\n            \"Job;%s;ignoring error from %s.+as job \" % (jid, self.hostB) +\n            \"is tolerant of node failures\",\n            regexp=True, n=10)\n\n        # Run pbs_release_nodes on a job whose state is running but\n        # substate is under PRERUN\n        pbs_release_nodes_cmd = os.path.join(\n            self.server.pbs_conf['PBS_EXEC'], 'bin', 'pbs_release_nodes')\n        cmd = [pbs_release_nodes_cmd, '-j', jid, '-a']\n\n        ret = self.server.du.run_cmd(self.server.hostname, cmd,\n                                     runas=TEST_USER)\n        self.assertNotEqual(ret['rc'], 0)\n        self.assertTrue(ret['err'][0].startswith(\n            'pbs_release_nodes: Request invalid for state of job'))\n\n    def test_t20(self):\n        \"\"\"\n        Test: node failure tolerant job array, with multiple subjobs\n              starting at the same time, and job's assigned resources\n              are pruned to match up to the original select spec using\n              an execjob_prologue hook this time.\n        \"\"\"\n        # instantiate queuejob hook\n        hook_event = \"queuejob\"\n        hook_name = \"qjob\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.qjob_hook_body)\n\n        # instantiate execjob_begin hook\n        hook_event = \"execjob_begin\"\n        hook_name = \"begin\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.begin_hook_body5)\n\n        # instantiate execjob_prologue hook\n        hook_event = \"execjob_prologue\"\n        hook_name = \"prolo\"\n        a = {'event': 
hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.prolo_hook_body4)\n\n        # First, turn off scheduling\n        a = {'scheduling': 'false'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        jid = self.create_and_submit_job('jobA')\n        # Job gets queued and reflects the incremented values from queuejob\n        # hook\n        self.server.expect(JOB, {'job_state': 'Q',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '5gb',\n                                 'Resource_List.ncpus': 5,\n                                 'Resource_List.nodect': 5,\n                                 'Resource_List.select': self.jobA_iselect,\n                                 'Resource_List.site': self.jobA_oselect,\n                                 'Resource_List.place': self.jobA_place,\n                                 'schedselect': self.jobA_ischedselect},\n                           id=jid, attrop=PTL_AND)\n\n        a = {'scheduling': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        self.server.expect(JOB, {'job_state': 'B',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '5gb',\n                                 'Resource_List.ncpus': 5,\n                                 'Resource_List.nodect': 5,\n                                 'Resource_List.select': self.jobA_iselect,\n                                 'Resource_List.site': self.jobA_oselect,\n                                 'Resource_List.place': self.jobA_place,\n                                 'schedselect': self.jobA_ischedselect},\n                           id=jid, attrop=PTL_AND)\n\n        self.server.expect(JOB, {'job_state=R': 3}, extend='t')\n\n        idx_values = {}\n        sub_jobs = {}\n        for idx in range(1, 4):\n            sjid = create_subjob_id(jid, idx)\n      
      if idx == 1:\n                d = {'iexec_host_esc': self.jobA_iexec_host1_esc,\n                     'iexec_vnode': self.jobA_iexec_vnode1,\n                     'iexec_vnode_esc': self.jobA_iexec_vnode1_esc,\n                     'exec_host': self.jobA_exec_host1,\n                     'exec_host_esc': self.jobA_exec_host1_esc,\n                     'exec_vnode': self.jobA_exec_vnode1,\n                     'exec_vnode_esc': self.jobA_exec_vnode1_esc,\n                     'vnode_list': [self.nAv0, self.nB, self.nC,\n                                    self.nD, self.nE]}\n            elif idx == 2:\n                d = {'iexec_host_esc': self.jobA_iexec_host2_esc,\n                     'iexec_vnode': self.jobA_iexec_vnode2,\n                     'iexec_vnode_esc': self.jobA_iexec_vnode2_esc,\n                     'exec_host': self.jobA_exec_host2,\n                     'exec_host_esc': self.jobA_exec_host2_esc,\n                     'exec_vnode': self.jobA_exec_vnode2,\n                     'exec_vnode_esc': self.jobA_exec_vnode2_esc,\n                     'vnode_list': [self.nAv1, self.nBv0, self.nC,\n                                    self.nD, self.nEv0]}\n            elif idx == 3:\n                d = {'iexec_host_esc': self.jobA_iexec_host3_esc,\n                     'iexec_vnode': self.jobA_iexec_vnode3,\n                     'iexec_vnode_esc': self.jobA_iexec_vnode3_esc,\n                     'exec_host': self.jobA_exec_host3,\n                     'exec_host_esc': self.jobA_exec_host3_esc,\n                     'exec_vnode': self.jobA_exec_vnode3,\n                     'exec_vnode_esc': self.jobA_exec_vnode3_esc,\n                     'vnode_list': [self.nAv2, self.nBv1, self.nC,\n                                    self.nD, self.nE]}\n            idx_values[idx] = d\n            sub_jobs[idx] = sjid\n            self.server.expect(JOB, {'job_state': 'R',\n                                     'substate': 42,\n                                     
'tolerate_node_failures': 'job_start',\n                                     'Resource_List.mem': '3gb',\n                                     'Resource_List.ncpus': 3,\n                                     'Resource_List.nodect': 3,\n                                     'exec_host': d['exec_host'],\n                                     'exec_vnode': d['exec_vnode'],\n                                     'Resource_List.select': self.jobA_select,\n                                     'Resource_List.site': self.jobA_oselect,\n                                     'Resource_List.place': self.jobA_place,\n                                     'schedselect': self.jobA_schedselect},\n                               id=sjid, attrop=PTL_AND)\n\n        for idx in range(1, 4):\n            sjid = sub_jobs[idx]\n            # Verify mom_logs\n            sjid_esc = sjid.replace(\n                \"[\", r\"\\[\").replace(\"]\", r\"\\]\").replace(\"(\", r\"\\(\").replace(\n                \")\", r\"\\)\").replace(\"+\", r\"\\+\")\n            self.momA.log_match(\n                \"Job;%s;job_start_error.+from node %s.+could not JOIN_JOB\" % (\n                    sjid_esc, self.hostC), n=10, regexp=True)\n            self.momA.log_match(\n                \"Job;%s;ignoring error from %s.+as job \" % (\n                    sjid_esc, self.hostC) + \"is tolerant of node failures\",\n                regexp=True, n=10)\n\n            for vn in idx_values[idx]['vnode_list']:\n                self.momA.log_match(\"Job;%s;prolo: found vnode_list[%s]\" % (\n                                    sjid, vn), n=10)\n\n            vnode_list_fail = [self.nC]\n            for vn in vnode_list_fail:\n                self.momA.log_match(\n                    \"Job;%s;prolo: found vnode_list_fail[%s]\" % (\n                        sjid, vn), n=10)\n            # Check result of pbs.event().job.release_nodes(keep_select)\n            # call\n            self.momA.log_match(\"Job;%s;prolo: 
job.exec_vnode=%s\" % (\n                sjid, idx_values[idx]['exec_vnode']), n=10)\n            self.momA.log_match(\"Job;%s;prolo: job.schedselect=%s\" % (\n                sjid, self.jobA_schedselect), n=10)\n            self.momA.log_match(\"Job;%s;pruned from exec_vnode=%s\" % (\n                sjid, idx_values[idx]['iexec_vnode']), n=10)\n            self.momA.log_match(\"Job;%s;pruned to exec_vnode=%s\" % (\n                sjid, idx_values[idx]['exec_vnode']), n=10)\n\n            # Check accounting_logs\n            self.match_accounting_log('S', sjid_esc,\n                                      idx_values[idx]['iexec_host_esc'],\n                                      idx_values[idx]['iexec_vnode_esc'],\n                                      \"5gb\", 5, 5,\n                                      self.jobA_place,\n                                      self.jobA_isel_esc)\n            self.match_accounting_log('s', sjid_esc,\n                                      idx_values[idx]['exec_host_esc'],\n                                      idx_values[idx]['exec_vnode_esc'],\n                                      \"3gb\", 3, 3,\n                                      self.jobA_place,\n                                      self.jobA_sel_esc)\n\n    @timeout(400)\n    def test_t21(self):\n        \"\"\"\n        Test: radio silent moms causing the primary mom to not get\n              any acks from the sister moms executing prologue hooks.\n              After some 'job_launch_delay' time has passed, primary\n              mom will consider node hosts that have not acknowledged\n              the prologue hook execution as failed hosts, and will\n              not use their vnodes in the pruning of jobs.\n        \"\"\"\n        job_launch_delay = 120\n        c = {'$job_launch_delay': job_launch_delay}\n        self.momA.add_config(c)\n\n        # instantiate queuejob hook\n        hook_event = \"queuejob\"\n        hook_name = \"qjob\"\n        a = {'event': 
hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.qjob_hook_body)\n\n        # instantiate execjob_prologue hook\n        hook_event = \"execjob_prologue\"\n        hook_name = \"prolo\"\n        a = {'event': hook_event, 'enabled': 'true', 'alarm': 60}\n        self.server.create_import_hook(hook_name, a, self.prolo_hook_body5)\n\n        # instantiate execjob_launch hook\n        hook_event = \"execjob_launch\"\n        hook_name = \"launch\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, self.launch_hook_body)\n\n        # First, turn off scheduling\n        a = {'scheduling': 'false'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        jid = self.create_and_submit_job('job1')\n        # Job gets queued and reflects the incremented values from queuejob\n        # hook\n        self.server.expect(JOB, {'job_state': 'Q',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '10gb',\n                                 'Resource_List.ncpus': 13,\n                                 'Resource_List.nodect': 5,\n                                 'Resource_List.select': self.job1_iselect,\n                                 'Resource_List.site': self.job1_oselect,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1_ischedselect},\n                           id=jid, attrop=PTL_AND)\n\n        a = {'scheduling': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        self.momE.log_match(\n            \"Job;%s;sleeping for 30 secs\" % (jid, ), n=10)\n\n        # temporarily suspend momE, simulating a radio silent mom.\n        self.momE.signal(\"-STOP\")\n\n        self.momC.log_match(\n            \"Job;%s;sleeping for 30 secs\" % (jid, ), n=10)\n\n        # temporarily suspend momC, simulating a 
radio silent mom.\n        self.momC.signal(\"-STOP\")\n\n        # sleep as long as the time primary mom waits for all\n        # prologue hook acknowledgement from the sister moms\n        self.logger.info(\"sleeping for %d secs waiting for healthy nodes\" % (\n                         job_launch_delay,))\n        time.sleep(job_launch_delay)\n\n        # Job eventually launches reflecting the pruned back values\n        # to the original select spec\n        # There's a max_attempts=60 for it would take up to 60 seconds\n        # for primary mom to wait for the sisters to join\n        # (default $sister_join_job_alarm of 30 seconds) and to wait for\n        # sisters to execjob_prologue hooks (default $job_launch_delay\n        # value of 30 seconds)\n\n        self.server.expect(JOB, {'job_state': 'R',\n                                 'tolerate_node_failures': 'job_start',\n                                 'Resource_List.mem': '6gb',\n                                 'Resource_List.ncpus': 8,\n                                 'Resource_List.nodect': 3,\n                                 'Resource_List.select': self.job1v4_select,\n                                 'Resource_List.place': self.job1_place,\n                                 'schedselect': self.job1v4_schedselect,\n                                 'exec_host': self.job1v4_exec_host,\n                                 'exec_vnode': self.job1v4_exec_vnode},\n                           id=jid, interval=1, attrop=PTL_AND, max_attempts=70)\n\n        thisjob = self.server.status(JOB, id=jid)\n        if thisjob:\n            job_output_file = thisjob[0]['Output_Path'].split(':')[1]\n\n        # Check various vnode status.\n        jobs_assn1 = \"%s/0\" % (jid,)\n        self.match_vnode_status([self.nAv0, self.nAv1, self.nB, self.nBv0],\n                                'job-busy', jobs_assn1, 1, '1048576kb')\n\n        jobs_assn2 = \"%s/0, %s/1\" % (jid, jid)\n        self.match_vnode_status([self.nD], 
'free', jobs_assn2,\n                                2, '2097152kb')\n\n        self.match_vnode_status([self.nAv2, self.nBv1],\n                                'job-busy', jobs_assn1, 1, '0kb')\n\n        self.match_vnode_status([self.nA, self.nAv3, self.nBv2, self.nBv3,\n                                 self.nC, self.nD, self.nEv1, self.nEv2,\n                                 self.nEv3, self.nE, self.nEv0], 'free')\n\n        # check server/queue counts\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 8,\n                                    'resources_assigned.mem': '6291456kb'},\n                           attrop=PTL_AND)\n        self.server.expect(QUEUE, {'resources_assigned.ncpus': 8,\n                                   'resources_assigned.mem': '6291456kb'},\n                           id='workq', attrop=PTL_AND)\n\n        self.assertTrue(\n            self.pbs_nodefile_match_exec_host(jid, self.job1v4_exec_host))\n\n        # Check vnode_list[] parameter in execjob_launch hook\n        vnode_list = [self.nAv0, self.nAv1, self.nAv2,\n                      self.nB, self.nBv0, self.nBv1,\n                      self.nC, self.nD, self.nE, self.nEv0]\n        for vn in vnode_list:\n            self.momA.log_match(\"Job;%s;launch: found vnode_list[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check vnode_list_fail[] parameter in execjob_launch hook\n        vnode_list_fail = [self.nC, self.nE, self.nEv0]\n        for vn in vnode_list_fail:\n            self.momA.log_match(\"Job;%s;launch: found vnode_list_fail[%s]\" % (\n                                jid, vn), n=10)\n\n        # Check result of pbs.event().job.release_nodes(keep_select) call\n        self.momA.log_match(\"Job;%s;launch: job.exec_vnode=%s\" % (\n            jid, self.job1v4_exec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;launch: job.schedselect=%s\" % (\n            jid, self.job1v4_schedselect), n=10)\n\n        
self.momA.log_match(\"Job;%s;pruned from exec_vnode=%s\" % (\n            jid, self.job1_iexec_vnode), n=10)\n\n        self.momA.log_match(\"Job;%s;pruned to exec_vnode=%s\" % (\n            jid, self.job1v4_exec_vnode), n=10)\n\n        # Check accounting_logs\n        self.match_accounting_log('S', jid, self.job1_iexec_host_esc,\n                                  self.job1_iexec_vnode_esc, \"10gb\", 13, 5,\n                                  self.job1_place,\n                                  self.job1_isel_esc)\n\n        self.match_accounting_log('s', jid, self.job1v4_exec_host_esc,\n                                  self.job1v4_exec_vnode_esc,\n                                  \"6gb\", 8, 3,\n                                  self.job1_place,\n                                  self.job1v4_sel_esc)\n        self.momA.log_match(\"Job;%s;task.+started, hostname\" % (jid,),\n                            n=10, interval=5, regexp=True)\n\n        self.momA.log_match(\"Job;%s;copy file request received\" % (jid,),\n                            n=10, interval=5)\n\n        # validate output\n        expected_out = \"\"\"/var/spool/pbs/aux/%s\n%s\n%s\n%s\nFIB TESTS\npbsdsh -n 1 fib 37\n%d\npbsdsh -n 2 fib 37\n%d\nfib 37\n%d\nHOSTNAME TESTS\npbsdsh -n 0 hostname\n%s\npbsdsh -n 1 hostname\n%s\npbsdsh -n 2 hostname\n%s\nPBS_NODEFILE tests\nHOST=%s\npbs_tmrsh %s hostname\n%s\nHOST=%s\npbs_tmrsh %s hostname\n%s\nHOST=%s\npbs_tmrsh %s hostname\n%s\n\"\"\" % (jid, self.momA.hostname, self.momB.hostname, self.momD.hostname,\n            self.fib37_value, self.fib37_value, self.fib37_value,\n            self.momA.shortname, self.momB.shortname, self.momD.shortname,\n            self.momA.hostname, self.momA.hostname, self.momA.shortname,\n            self.momB.hostname, self.momB.hostname, self.momB.shortname,\n            self.momD.hostname, self.momD.hostname, self.momD.shortname)\n\n        job_out = \"\"\n        with open(job_output_file, 'r') as fd:\n            job_out = 
fd.read()\n\n        self.assertEqual(job_out, expected_out)\n"
  },
  {
    "path": "test/tests/functional/pbs_resc_custom_perm.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass Test_Custom_Resource_Perm(TestFunctional):\n    rsc_list = ['foo_str', 'foo_strm', 'foo_strh']\n    hook_list = [\"start\", \"begin\"]\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n\n        self.momA = self.moms.values()[0]\n        self.momA.delete_vnode_defs()\n        self.hostA = self.momA.shortname\n\n        rc = self.server.manager(MGR_CMD_DELETE, NODE, None, \"\")\n        self.assertEqual(rc, 0)\n\n        rc = self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostA)\n        self.assertEqual(rc, 0)\n        self.server.expect(NODE, {'state': 'free'}, id=self.hostA)\n\n        # Next, create custom resources with and without the m flag\n        # via qmgr -c 'create resource'\n        attr = {'type': 'string', 'flag': 'h'}\n        r = 'foo_str'\n        self.server.manager(\n            MGR_CMD_CREATE, RSC, attr, id=r, logerr=False)\n\n        attr = {'type': 'string', 'flag': 'hm'}\n        r = 'foo_strm'\n        self.server.manager(MGR_CMD_CREATE, RSC, attr, id=r, logerr=False)\n\n        # Create a custom resource via an exechost_startup hook.\n        startup_hook_body = \"\"\"\nimport pbs\ne=pbs.event()\nlocalnode=pbs.get_local_nodename()\ne.vnode_list[localnode].resources_available['foo_strh'] = \"str_hook\"\n\"\"\"\n        hook_name = \"start\"\n        a = {'event': 
\"exechost_startup\", 'enabled': 'True'}\n        self.server.create_import_hook(\n            hook_name,\n            a,\n            startup_hook_body,\n            overwrite=True)\n\n        # Restart the MoM so that the exechost_startup hook can run.\n        self.momA.restart()\n\n        # Give the moms a chance to receive the updated resource.\n        # Ensure the new resource is seen by all moms.\n        self.momA.log_match(\"resourcedef;copy hook-related file\",\n                            max_attempts=20, interval=1)\n\n        # Create an execjob_begin hook.\n        begin_hook_body = \"\"\"\nimport pbs\ne=pbs.event()\nres_list = getattr(e.job, 'Resource_List')\npbs.logjobmsg(e.job.id, \"Resource_List is %s\" % str(res_list))\n\"\"\"\n        hook_name = \"begin\"\n        a = {'event': \"execjob_begin\", 'enabled': 'True'}\n        self.server.create_import_hook(\n            hook_name,\n            a,\n            begin_hook_body,\n            overwrite=True)\n\n    def test_custom_resc_single_node(self):\n        \"\"\"\n        Test the permission flag of resources of a single-node job\n        using an execjob_begin hook.\n        \"\"\"\n        self.logger.info(\"test_custom_resc_single_node\")\n\n        a = {'Resource_List.foo_str': 'str_noperm',\n             'Resource_List.foo_strm': 'str_perm',\n             'Resource_List.foo_strh': 'str_hook'\n             }\n        j = Job(TEST_USER, a)\n        j.set_sleep_time(\"100\")\n        jid = self.server.submit(j)\n\n        self.server.expect(JOB, {\n            'job_state': 'R',\n            'Resource_List.foo_str': 'str_noperm',\n            'Resource_List.foo_strm': 'str_perm',\n            'Resource_List.foo_strh': 'str_hook'},\n            offset=1, id=jid)\n\n        # Match mom log entries; only resources with the \"m\" flag should\n        # show up.\n        msg = 'foo_strm=str_perm'\n        self.momA.log_match(\"Job;%s;.*%s.*\" % (jid, msg),\n                            regexp=True, n=10, max_attempts=10, 
interval=2)\n        msg = 'foo_strh=str_hook'\n        self.momA.log_match(\"Job;%s;.*%s.*\" % (jid, msg),\n                            regexp=True, n=10, max_attempts=10, interval=2)\n        msg = 'foo_str=str_noperm'\n        self.momA.log_match(\"Job;%s;.*%s.*\" % (jid, msg),\n                            regexp=True, n=5, max_attempts=10, interval=2,\n                            existence=False)\n"
  },
  {
    "path": "test/tests/functional/pbs_resc_used_single_node.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport time\nfrom tests.functional import *\n\n\nclass Test_singleNode_Job_ResourceUsed(TestFunctional):\n    rsc_list = ['foo_str', 'foo_f', 'foo_i', 'foo_str2', 'foo_str3']\n\n    def tearDown(self):\n        self.du.set_pbs_config(confs={'PBS_SERVER': self.server.hostname})\n        TestFunctional.tearDown(self)\n        for r in self.rsc_list:\n            try:\n                self.server.manager(MGR_CMD_DELETE, RSC, id=r, runas=ROOT_USER)\n            except Exception:\n                pass\n        self.server.restart()\n        self.mom.restart()\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        for r in self.rsc_list:\n            try:\n                self.server.manager(MGR_CMD_DELETE, RSC, id=r, runas=ROOT_USER)\n            except Exception:\n                pass\n\n        self.server.restart()\n\n        self.momA = self.moms.values()[0]\n        self.momA.restart()\n        self.momA.delete_vnode_defs()\n        self.hostA = self.momA.shortname\n\n        rc = self.server.manager(MGR_CMD_DELETE, NODE, None, \"\")\n        self.assertEqual(rc, 0)\n\n        rc = self.server.manager(MGR_CMD_CREATE, NODE, id=self.hostA)\n        self.assertEqual(rc, 0)\n        self.server.expect(NODE, {'state': 'free'}, id=self.hostA)\n\n        a = {'job_history_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n     
   # Next set some custom resources via qmgr -c 'create resource'\n        attr = {'type': 'string', 'flag': 'h'}\n        r = 'foo_str2'\n        rc = self.server.manager(\n            MGR_CMD_CREATE, RSC, attr, id=r, runas=ROOT_USER, logerr=False)\n        self.assertEqual(rc, 0)\n\n        r = 'foo_str3'\n        rc = self.server.manager(\n            MGR_CMD_CREATE, RSC, attr, id=r, runas=ROOT_USER, logerr=False)\n        self.assertEqual(rc, 0)\n\n        attr['type'] = 'string_array'\n        r = 'stra'\n        rc = self.server.manager(\n            MGR_CMD_CREATE, RSC, attr, id=r, runas=ROOT_USER, logerr=False)\n        self.assertEqual(rc, 0)\n\n        # Set some custom resources via exechost_startup hook.\n        startup_hook_body = \"\"\"\nimport pbs\ne=pbs.event()\nlocalnode=pbs.get_local_nodename()\ne.vnode_list[localnode].resources_available['foo_i'] = 7\ne.vnode_list[localnode].resources_available['foo_f'] = 5.0\ne.vnode_list[localnode].resources_available['foo_str'] = \"seventyseven\"\ne.vnode_list[localnode].resources_available['foo_str2'] = \"seven\"\n\"\"\"\n        hook_name = \"start\"\n        a = {'event': \"exechost_startup\", 'enabled': 'True'}\n        rv = self.server.create_import_hook(\n            hook_name,\n            a,\n            startup_hook_body,\n            overwrite=True)\n        self.assertTrue(rv)\n\n        self.momA.signal(\"-HUP\")\n        # Give the moms a chance to receive the updated resource.\n        # Ensure the new resource is seen by all moms.\n        m = self.momA.log_match(\"resourcedef;copy hook-related file\",\n                                max_attempts=20, interval=1)\n        self.assertTrue(m)\n\n    def test_epilogue_single_node(self):\n        \"\"\"\n        Test accumulation of resources of a single node job from an\n        exechost_epilogue hook.\n        \"\"\"\n        self.logger.info(\"test_epilogue_single_node\")\n        hook_body = \"\"\"\nimport 
pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"executed epilogue hook\")\ne.job.resources_used[\"vmem\"] = pbs.size(\"9gb\")\ne.job.resources_used[\"foo_i\"] = 9\ne.job.resources_used[\"foo_f\"] = 0.09\ne.job.resources_used[\"foo_str\"] = '{\"seven\":7}'\ne.job.resources_used[\"foo_str3\"] = \\\n\"\\\"\\\"{\"a\":6,\"b\":\"some value #$%^&*@\",\"c\":54.4,\"d\":\"32.5gb\"}\\\"\\\"\\\"\ne.job.resources_used[\"foo_str2\"] = \"seven\"\ne.job.resources_used[\"cput\"] = 10\ne.job.resources_used[\"stra\"] = '\"glad,elated\",\"happy\"'\n\"\"\"\n\n        hook_name = \"epi\"\n        # This hook must run last on the same machine, after the\n        # cgroups hook (default order=100).\n        a = {'event': \"execjob_epilogue\", 'enabled': 'True', 'order': '1000'}\n        rv = self.server.create_import_hook(\n            hook_name,\n            a,\n            hook_body,\n            overwrite=True)\n        self.assertTrue(rv)\n\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.walltime': 3\n             }\n        j = Job(TEST_USER)\n        j.set_attributes(a)\n        j.set_sleep_time(\"3\")\n        jid = self.server.submit(j)\n\n        # The results should show values for the custom resources 'foo_i',\n        # 'foo_f', 'foo_str', 'foo_str3', and the builtin resources 'vmem'\n        # and 'cput', as set by the hook script.  String-type resources set\n        # to a JSON string are reported within single quotes.\n        #\n        # foo_str and foo_str3 are string-type resources set to JSON-format\n        # strings.\n\n        self.server.expect(JOB, {\n            'job_state': 'F',\n            'resources_used.foo_f': '0.09',\n            'resources_used.foo_i': '9',\n            'resources_used.foo_str': '\\'{\"seven\": 7}\\'',\n            'resources_used.foo_str2': 'seven',\n            'resources_used.stra': \"\\\"glad,elated\\\",\\\"happy\\\"\",\n            'resources_used.vmem': '9gb',\n            
'resources_used.cput': '00:00:10',\n            'resources_used.ncpus': '1'},\n            extend='x', offset=10, attrop=PTL_AND, id=jid)\n\n        # Match accounting_logs entry\n\n        acctlog_match = 'resources_used.foo_f=0.09'\n        s = self.server.accounting_match(\n            \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n        self.assertTrue(s)\n\n        acctlog_match = 'resources_used.foo_i=9'\n        s = self.server.accounting_match(\n            \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n        self.assertTrue(s)\n\n        acctlog_match = 'resources_used.foo_str=\\'{\"seven\": 7}\\''\n        s = self.server.accounting_match(\n            \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n        self.assertTrue(s)\n\n        acctlog_match = 'resources_used.vmem=9gb'\n        s = self.server.accounting_match(\n            \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n        self.assertTrue(s)\n\n        acctlog_match = 'resources_used.cput=00:00:10'\n        s = self.server.accounting_match(\n            \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n        self.assertTrue(s)\n\n        acctlog_match = 'resources_used.foo_str2=seven'\n        s = self.server.accounting_match(\n            \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n        self.assertTrue(s)\n\n        acctlog_match = 'resources_used.ncpus=1'\n        s = self.server.accounting_match(\n            \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n        self.assertTrue(s)\n\n        acctlog_match = 'resources_used.foo_str3='\n        s = self.server.accounting_match(\n            \"E;%s;.*%s'{.*}'.*\" % (jid, acctlog_match), regexp=True, n=100)\n        self.assertTrue(s)\n        acctlog_match = r'resources_used.stra=\\\"glad\\,elated\\\"\\,\\\"happy\\\"'\n        s = self.server.accounting_match(\n            \"E;%s;.*%s.*\" % (jid, acctlog_match), regexp=True, n=100)\n    
    self.assertTrue(s)\n"
  },
  {
    "path": "test/tests/functional/pbs_reservations.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport time\nfrom datetime import datetime as dt\n\nfrom tests.functional import *\n\n\n@tags('reservations')\nclass TestReservations(TestFunctional):\n    \"\"\"\n    Various tests to verify behavior of PBS scheduler in handling\n    reservations\n    \"\"\"\n\n    def setUp(self):\n        super().setUp()\n        self._submitted_reservations = []\n        # self.server.manager(MGR_CMD_SET, SERVER, {'log_events': '8191'})\n\n    def tearDown(self):\n        for rid in self._submitted_reservations:\n            try:\n                self.server.delete(rid)\n            except PbsDeleteError:\n                pass\n        super().tearDown()\n\n    def get_tz(self):\n        if 'PBS_TZID' in self.conf:\n            tzone = self.conf['PBS_TZID']\n        elif 'PBS_TZID' in os.environ:\n            tzone = os.environ['PBS_TZID']\n        else:\n            self.logger.info('Missing timezone, using America/Los_Angeles')\n            tzone = 'America/Los_Angeles'\n        return tzone\n\n    def dst_changes(self, start, end):\n        \"\"\"\n        Returns true if it detects that DST changes between start and end\n        \"\"\"\n        s = dt.fromtimestamp(start)\n        e = dt.fromtimestamp(end)\n        s_tz = s.astimezone().strftime(\"%Z\")\n        e_tz = e.astimezone().strftime(\"%Z\")\n        if s_tz != e_tz:\n            return True\n        
return False\n\n    def submit_reservation(self, select, start, end, user, rrule=None,\n                           place='free', extra_attrs=None):\n        \"\"\"\n        Helper method to submit a reservation\n        \"\"\"\n        a = {'Resource_List.select': select,\n             'Resource_List.place': place,\n             'reserve_start': start,\n             }\n\n        if self.dst_changes(start, end):\n            a['reserve_duration'] = int(end - start)\n        else:\n            a['reserve_end'] = end\n\n        if rrule is not None:\n            tzone = self.get_tz()\n            a.update({ATTR_resv_rrule: rrule, ATTR_resv_timezone: tzone})\n\n        if extra_attrs:\n            a.update(extra_attrs)\n        r = Reservation(user, a)\n\n        rid = self.server.submit(r)\n        if rid:\n            self._submitted_reservations.append(rid)\n\n        return rid\n\n    def submit_asap_reservation(self, user, jid, extra_attrs=None):\n        \"\"\"\n        Helper method to submit an ASAP reservation\n        \"\"\"\n        a = {ATTR_convert: jid}\n        if extra_attrs:\n            a.update(extra_attrs)\n        r = Reservation(user, a)\n\n        # PTL's Reservation class sets the default ATTR_resv_start\n        # and ATTR_resv_end, but pbs_rsub's -Wqmove option is not\n        # compatible with -R or -E, so unset these attributes from\n        # the reservation instance.\n        r.unset_attributes(['reserve_start', 'reserve_end'])\n\n        rid = self.server.submit(r)\n        if rid:\n            self._submitted_reservations.append(rid)\n\n        return rid\n\n    def submit_job(self, set_attrib=None, sleep=100, job_running=False):\n        \"\"\"\n        Helper method to submit a job\n        :param set_attrib: Job attributes to set\n        :type set_attrib: Dictionary\n        :param sleep: Sleep time for the job, in seconds\n        :type sleep: int\n        :param job_running: If True, wait for the job to run and return\n                            its exec_vnode\n        :type job_running: bool\n        \"\"\"\n        j = Job(TEST_USER)\n        if set_attrib is not None:\n            j.set_attributes(set_attrib)\n        j.set_sleep_time(sleep)\n        jid 
= self.server.submit(j)\n        self.logger.info(\"Job submitted successfully-%s\" % jid)\n        job_node = None\n        if job_running:\n            self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n            get_exec_vnode = self.server.status(JOB, 'exec_vnode', id=jid)[0]\n            job_node = get_exec_vnode['exec_vnode']\n        return (jid, job_node)\n\n    @staticmethod\n    def cust_attr(name, totnodes, numnode, attrib):\n        a = {}\n        if numnode % 2 == 0:\n            a['resources_available.color'] = 'red'\n        else:\n            a['resources_available.color'] = 'blue'\n        return {**attrib, **a}\n\n    def degraded_resv_reconfirm(self, start, end, rrule=None, run=False):\n        \"\"\"\n        Test that a degraded reservation gets reconfirmed\n        \"\"\"\n        a = {'reserve_retry_time': 5}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {'type': 'string', 'flag': 'h'}\n        self.server.manager(MGR_CMD_CREATE, RSC, a, id='color')\n        self.scheduler.add_resource('color')\n\n        a = {'resources_available.ncpus': 1}\n        self.mom.create_vnodes(a, num=5,\n                               attrfunc=self.cust_attr)\n\n        now = int(time.time())\n\n        rid = self.submit_reservation(user=TEST_USER,\n                                      select='2:ncpus=1:color=red',\n                                      rrule=rrule, start=now + start,\n                                      end=now + end)\n\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, a, id=rid)\n\n        self.server.status(RESV, 'resv_nodes', id=rid)\n        resv_node_list = self.server.reservations[rid].get_vnodes()\n        resv_node = resv_node_list[0]\n\n        if run:\n            resv_state = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n            self.logger.info('Sleeping until reservation starts')\n            offset = start - int(time.time())\n            
self.server.expect(RESV, resv_state, id=rid,\n                               offset=offset, interval=1)\n        else:\n            resv_state = {'reserve_state': (MATCH_RE, 'RESV_DEGRADED|10')}\n\n        a = {'state': 'offline'}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=resv_node)\n\n        a = {'reserve_substate': 10}\n        a.update(resv_state)\n        self.server.expect(RESV, a, id=rid)\n\n        a = {'resources_available.color': 'red'}\n        free_nodes = self.server.filter(NODE, a)\n        nodes = list(free_nodes.values())[0]\n\n        other_node = [x for x in nodes if x not in resv_node_list][0]\n\n        if run:\n            a = {'reserve_substate': 5}\n        else:\n            a = {'reserve_substate': 2}\n\n        self.server.expect(RESV, a, id=rid, interval=1)\n\n        self.server.status(RESV)\n        self.assertEqual(set(self.server.reservations[rid].get_vnodes()),\n                         {resv_node_list[1], other_node},\n                         \"Node not replaced correctly\")\n        if run:\n            a = {'resources_assigned.ncpus': 0}\n            self.server.expect(NODE, a, id=resv_node)\n            a = {'resources_assigned.ncpus=1': 2}\n            self.server.expect(NODE, a)\n\n    def degraded_resv_failed_reconfirm(self, start, end, rrule=None,\n                                       run=False, resume=False):\n        \"\"\"\n        Test that reservations do not get reconfirmed if there is no place\n        to put them.\n        \"\"\"\n        a = {'reserve_retry_time': 5}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {'resources_available.ncpus': 1}\n        self.mom.create_vnodes(a, num=2)\n\n        now = time.time()\n\n        rid = self.submit_reservation(user=TEST_USER, select='1:ncpus=1',\n                                      rrule=rrule, start=now + start,\n                                      end=now + end)\n\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n 
       self.server.expect(RESV, a, id=rid)\n\n        self.server.status(RESV, 'resv_nodes', id=rid)\n        resv_node = self.server.reservations[rid].get_vnodes()[0]\n\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.walltime': '1:00:00'}\n        j = Job(attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.status(JOB, 'exec_vnode', id=jid)\n        job_node = j.get_vnodes()[0]\n\n        msg = 'Job and Resv share node'\n        self.assertNotEqual(resv_node, job_node, msg)\n\n        if run:\n            resv_state = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n            self.logger.info('Sleeping until reservation starts')\n            offset = start - int(time.time())\n            self.server.expect(RESV, resv_state, id=rid,\n                               offset=offset, interval=1)\n        else:\n            resv_state = {'reserve_state': (MATCH_RE, 'RESV_DEGRADED|10')}\n\n        a = {'state': 'offline'}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=resv_node)\n\n        a = {'reserve_substate': 10}\n        a.update(resv_state)\n        self.server.expect(RESV, a, id=rid)\n\n        self.scheduler.log_match(rid + ';Reservation is in degraded mode',\n                                 starttime=now, interval=1)\n\n        self.server.expect(RESV, a, id=rid)\n\n        self.server.expect(RESV, {'resv_nodes':\n                                  (MATCH_RE, re.escape(resv_node))}, id=rid)\n\n        if rrule and run:\n            a = {'reserve_state': (MATCH_RE, 'RESV_DEGRADED|10'),\n                 'reserve_substate': 10}\n            t = end - int(time.time())\n            self.logger.info('Sleeping until reservation ends')\n            self.server.expect(RESV, a, id=rid, offset=t)\n\n        self.server.manager(MGR_CMD_SET, NODE, {'state': (DECR, 'offline')},\n                            id=resv_node)\n        # If run and rrule are true, we 
waited until the occurrence\n        # finished and the reservation is no longer running otherwise\n        # the reservation is still running.\n        if run:\n            if rrule:\n                resv_state = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2'),\n                              'reserve_substate': 2}\n            else:\n                resv_state = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5'),\n                              'reserve_substate': 5}\n        else:\n            resv_state = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2'),\n                          'reserve_substate': 2}\n\n        self.server.expect(RESV, resv_state, id=rid)\n\n    def test_degraded_standing_reservations(self):\n        \"\"\"\n        Verify that degraded standing reservations are reconfirmed\n        on other nodes\n        \"\"\"\n        self.degraded_resv_reconfirm(start=125, end=625,\n                                     rrule='freq=HOURLY;count=5')\n\n    def test_degraded_advance_reservations(self):\n        \"\"\"\n        Verify that degraded advance reservations are reconfirmed\n        on other nodes\n        \"\"\"\n        self.degraded_resv_reconfirm(start=125, end=625)\n\n    def test_degraded_standing_running_reservations(self):\n        \"\"\"\n        Verify that degraded standing reservations are reconfirmed\n        on other nodes\n        \"\"\"\n        self.degraded_resv_reconfirm(start=25, end=625,\n                                     rrule='freq=HOURLY;count=5', run=True)\n\n    def test_degraded_advance_running_reservations(self):\n        \"\"\"\n        Verify that degraded advance reservations are not reconfirmed\n        on other nodes if no space is available\n        \"\"\"\n        self.degraded_resv_reconfirm(\n            start=25, end=625, run=True)\n\n    def test_degraded_standing_reservations_fail(self):\n        \"\"\"\n        Verify that degraded standing reservations are not\n        reconfirmed on other nodes if there is 
no space available\n        \"\"\"\n        self.degraded_resv_failed_reconfirm(start=120, end=720,\n                                            rrule='freq=HOURLY;count=5')\n\n    def test_degraded_advance_reservations_fail(self):\n        \"\"\"\n        Verify that advance reservations are not reconfirmed if there\n        is no space available\n        \"\"\"\n        self.degraded_resv_failed_reconfirm(start=120, end=720)\n\n    def test_degraded_standing_running_reservations_fail(self):\n        \"\"\"\n        Verify that degraded running standing reservations are not\n        reconfirmed on other nodes if there is no space available\n        \"\"\"\n        self.degraded_resv_failed_reconfirm(start=25, end=55,\n                                            rrule='freq=HOURLY;count=5',\n                                            run=True)\n\n    def test_degraded_advance_running_reservations_fail(self):\n        \"\"\"\n        Verify that advance running reservations are not reconfirmed if there\n        is no space available\n        \"\"\"\n        self.degraded_resv_failed_reconfirm(\n            start=25, end=625, run=True)\n\n    def test_degraded_advanced_reservation_superchunk(self):\n        \"\"\"\n        Verify that an advanced reservation requesting a superchunk is\n        correctly reconfirmed on other nodes\n        \"\"\"\n        retry = 15\n        a = {'resources_available.ncpus': 1}\n        self.mom.create_vnodes(a, num=6)\n        self.server.manager(MGR_CMD_SET, SERVER, {'reserve_retry_time': retry})\n\n        now = int(time.time())\n        rid = self.submit_reservation(user=TEST_USER,\n                                      select='1:ncpus=1+1:ncpus=3',\n                                      start=now + 60,\n                                      end=now + 240)\n\n        self.server.expect(RESV, {'reserve_state':\n                                  (MATCH_RE, 'RESV_CONFIRMED|2')}, id=rid)\n        st = self.server.status(RESV, 
'resv_nodes', id=rid)[0]\n        nds1 = st['resv_nodes']\n        # Should have 4 nodes.  Nodes 2-4 are the superchunk.  Choose the\n        # middle node to avoid the '(' and ')' of the superchunk.\n        sp = nds1.split('+')\n        sn = sp[2].split(':')[0]\n\n        # Keep the first chunk's node around to confirm it is still the same\n        sc = sp[0]\n\n        t = int(time.time())\n        self.server.manager(MGR_CMD_SET, NODE, {'state': 'offline'}, id=sn)\n        self.server.expect(RESV, {'reserve_state':\n                                  (MATCH_RE, 'RESV_DEGRADED|10')}, id=rid)\n\n        retry_time = t + retry\n        offset = retry_time - int(time.time())\n        self.server.expect(RESV, {'reserve_state':\n                                  (MATCH_RE, 'RESV_CONFIRMED|2')},\n                           id=rid, offset=offset)\n        st = self.server.status(RESV, 'resv_nodes', id=rid)[0]\n        nds2 = st['resv_nodes']\n        self.assertEqual(len(sp), len(nds2.split('+')))\n        self.assertNotEqual(nds1, nds2)\n        # Compare against the reconfirmed nodes, not the original list\n        self.assertEqual(sc, nds2.split('+')[0])\n\n    def test_degraded_running_only_replace(self):\n        \"\"\"\n        Test that when a running degraded reservation is reconfirmed,\n        only the nodes that are unavailable are replaced\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER, {'reserve_retry_time': 15})\n\n        a = {'resources_available.ncpus': 1}\n        self.mom.create_vnodes(a, 5)\n\n        # Submit two jobs to take up nodes 0 and 1. This forces the reservation\n        # onto nodes 3 and 4. 
The idea is to delete the two jobs and see\n        # if the reservation shifts onto nodes 0 and 1 after the reconfirm\n        j1 = Job(attrs={'Resource_List.select': '1:ncpus=1'})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n\n        j2 = Job(attrs={'Resource_List.select': '1:ncpus=1'})\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n\n        now = int(time.time())\n        start = now + 20\n        rid = self.submit_reservation(user=TEST_USER,\n                                      select='2:ncpus=1',\n                                      start=start,\n                                      end=start + 60)\n        self.server.expect(RESV, {'reserve_state':\n                                  (MATCH_RE, 'RESV_CONFIRMED|2')}, id=rid)\n        resv_queue = rid.split('.')[0]\n        a = {'Resource_List.select': '1:ncpus=1', 'queue': resv_queue}\n        j3 = Job(attrs=a)\n        jid3 = self.server.submit(j3)\n\n        self.logger.info('Sleeping until reservation starts')\n        self.server.expect(RESV,\n                           {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')},\n                           id=rid, offset=start - int(time.time()))\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n\n        self.server.delete(jid1, wait=True)\n        self.server.delete(jid2, wait=True)\n        self.server.status(RESV)\n        rnodes = self.server.reservations[rid].get_vnodes()\n        self.server.status(JOB)\n        jnode = j3.get_vnodes()[0]\n        other_node = rnodes[rnodes[0] == jnode]\n        self.server.manager(MGR_CMD_SET, NODE, {\n                            'state': (INCR, 'offline')}, id=other_node)\n        self.server.expect(RESV, {'reserve_substate': 10}, id=rid)\n        self.logger.info('Waiting until reconfirmation')\n        self.server.expect(RESV, {'reserve_substate': 5}, id=rid, offset=7)\n        
self.server.status(RESV)\n        rnodes2 = self.server.reservations[rid].get_vnodes()\n        self.assertIn(jnode, rnodes2, 'Reservation not on job node')\n\n    def test_standing_reservation_occurrence_two_not_degraded(self):\n        \"\"\"\n        Test that when a standing reservation's occurrence 1 is on an offline\n        vnode and occurrence 2 is not, the reservation returns to the\n        confirmed state when the first occurrence finishes\n        \"\"\"\n\n        a = {'reserve_retry_time': 15}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {'resources_available.ncpus': 1}\n        self.mom.create_vnodes(a, num=2)\n\n        start_time = time.time()\n        now = int(start_time)\n        rid1 = self.submit_reservation(user=TEST_USER, select='1:ncpus=1',\n                                       start=now + 3600, end=now + 7200)\n\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, a, id=rid1)\n\n        start = now + 25\n        end = now + 55\n        rid2 = self.submit_reservation(user=TEST_USER, select='1:ncpus=1',\n                                       start=start, end=end,\n                                       rrule='freq=HOURLY;count=5')\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, a, id=rid2)\n\n        self.server.status(RESV, 'resv_nodes')\n        resv_node = self.server.reservations[rid1].get_vnodes()[0]\n        resv_node2 = self.server.reservations[rid2].get_vnodes()[0]\n\n        msg = 'Reservations not on the same vnode'\n        self.assertEqual(resv_node, resv_node2, msg)\n\n        J = Job(attrs={'Resource_List.walltime': 1800})\n        jid = self.server.submit(J)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        self.server.manager(MGR_CMD_SET, NODE, {'state': 'offline'},\n                            id=resv_node)\n\n        self.logger.info('Sleeping until retry timer fires')\n        time.sleep(15)\n        # Can't 
reconfirm rid1 because rid2's second occurrence should be\n        # on node 2 at that time.\n        self.scheduler.log_match(rid1 + ';Reservation is in degraded mode',\n                                 starttime=start_time, interval=1)\n        # Can't reconfirm rid2 because the job is running on node 2.\n        self.scheduler.log_match(rid2 + ';Reservation is in degraded mode',\n                                 starttime=start_time, interval=1)\n\n        self.logger.info('Sleeping until standing reservation runs')\n        a = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, a, id=rid2, offset=start - int(time.time()))\n\n        self.logger.info('Sleeping until occurrence finishes')\n        # occurrence 2 is not on the offlined node.  It should be confirmed\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, a, id=rid2, offset=end - int(time.time()))\n\n    def test_degraded_reservation_reconfirm_running_job(self):\n        \"\"\"\n        Test that a reservation isn't reconfirmed while there is a running\n        job on an offlined node, and that it is reconfirmed once the job\n        finishes.\n        \"\"\"\n        a = {'reserve_retry_time': 5}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {'resources_available.ncpus': 1}\n        self.mom.create_vnodes(a, num=2)\n\n        now = int(time.time())\n        start = now + 25\n        rid = self.submit_reservation(select='1:ncpus=1', user=TEST_USER,\n                                      start=start, end=now + 625)\n\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, a, id=rid)\n        resv_queue = rid.split('.')[0]\n\n        self.server.status(RESV, 'resv_nodes', id=rid)\n        resv_node = self.server.reservations[rid].get_vnodes()[0]\n\n        a = {'Resource_List.select': '1:ncpus=1', ATTR_queue: resv_queue}\n        J = Job(attrs=a)\n        jid = self.server.submit(J)\n\n        
self.logger.info('Sleeping until reservation runs')\n        self.server.expect(RESV,\n                           {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')},\n                           offset=start - int(time.time()), id=rid)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        self.server.manager(MGR_CMD_SET, NODE, {'state': 'offline'},\n                            id=resv_node)\n        self.server.expect(RESV, {'reserve_substate': 10}, id=rid)\n        self.scheduler.log_match(rid + ';PBS Failed to confirm resv: '\n                                 'Reservation has running jobs in it',\n                                 interval=1)\n\n        self.server.delete(jid)\n\n        self.server.expect(RESV, {'reserve_substate': 5}, id=rid)\n\n    def test_not_honoring_resvs(self):\n        \"\"\"\n        PBS schedules jobs on nodes without accounting\n        for the reservation on the node\n        \"\"\"\n\n        a = {'resources_available.ncpus': 4}\n        self.mom.create_vnodes(a, 1, usenatvnode=True)\n\n        now = int(time.time())\n        start1 = now + 15\n        end1 = now + 25\n        start2 = now + 600\n        end2 = now + 7200\n\n        r1id = self.submit_reservation(user=TEST_USER,\n                                       select='1:ncpus=1',\n                                       start=start1,\n                                       end=end1)\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, a, r1id)\n\n        r2id = self.submit_reservation(user=TEST_USER,\n                                       select='1:ncpus=4',\n                                       start=start2,\n                                       end=end2)\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, a, r2id)\n\n        r1_que = r1id.split('.')[0]\n        for i in range(20):\n            j = Job(TEST_USER)\n            a = {'Resource_List.select': '1:ncpus=1',\n  
               'Resource_List.walltime': 10, 'queue': r1_que}\n            j.set_attributes(a)\n            self.server.submit(j)\n\n        j1 = Job(TEST_USER)\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.walltime': 7200}\n        j1.set_attributes(a)\n        j1id = self.server.submit(j1)\n\n        j2 = Job(TEST_USER)\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.walltime': 7200}\n        j2.set_attributes(a)\n        j2id = self.server.submit(j2)\n\n        self.logger.info('Sleeping till Resv 1 ends')\n        a = {'reserve_state': (MATCH_RE, \"RESV_BEING_DELETED|7\")}\n        off = end1 - int(time.time())\n        self.server.expect(RESV, a, id=r1id, interval=1, offset=off)\n\n        a = {'scheduling': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        self.server.expect(JOB, {'job_state': 'Q'}, id=j1id)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=j2id)\n\n    def test_sched_cycle_starts_on_resv_end(self):\n        \"\"\"\n        This test checks whether the sched cycle gets started\n        when the advance reservation ends.\n        \"\"\"\n        a = {'resources_available.ncpus': 2}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname,\n                            sudo=True)\n\n        now = int(time.time())\n        rid = self.submit_reservation(user=TEST_USER,\n                                      select='1:ncpus=2',\n                                      start=now + 10,\n                                      end=now + 30)\n\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, a, rid)\n\n        attr = {'Resource_List.walltime': '00:00:20'}\n        j = Job(TEST_USER, attr)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: 'Q'},\n                           id=jid)\n        msg = \"Job would conflict with reservation or top job\"\n        
self.server.expect(\n            JOB, {ATTR_comment: \"Not Running: \" + msg}, id=jid)\n        self.scheduler.log_match(\n            jid + \";\" + msg,\n            max_attempts=30)\n\n        a = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, a, rid)\n\n        resid = rid.split('.')[0]\n        self.server.log_match(resid + \";deleted at request of pbs_server\",\n                              id=resid, interval=5)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n    def test_exclusive_state(self):\n        \"\"\"\n        Test that the resv-exclusive and job-exclusive\n        states are appropriately set\n        \"\"\"\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname,\n                            sudo=True)\n\n        now = int(time.time())\n        rid = self.submit_reservation('1:ncpus=1', now + 30, now + 3600,\n                                      user=TEST_USER, place='excl')\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, exp_attr, id=rid)\n\n        self.logger.info('Waiting 30s for reservation to start')\n        exp_attr['reserve_state'] = (MATCH_RE, 'RESV_RUNNING|5')\n        self.server.expect(RESV, exp_attr, id=rid, offset=30)\n\n        self.server.status(RESV, 'resv_nodes', id=rid)\n        resv_vnode = self.server.reservations[rid].get_vnodes()[0]\n        self.server.expect(NODE, {'state': 'resv-exclusive'},\n                           id=resv_vnode)\n\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.place': 'excl', 'queue': rid.split('.')[0]}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        n = self.server.status(NODE, id=resv_vnode)\n        states = n[0]['state'].split(',')\n        self.assertIn('resv-exclusive', states)\n        
self.assertIn('job-exclusive', states)\n\n    def test_resv_excl_future_resv(self):\n        \"\"\"\n        Test to see that exclusive reservations in the near term do not\n        interfere with longer term reservations\n        \"\"\"\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname,\n                            sudo=True)\n\n        now = int(time.time())\n        rid1 = self.submit_reservation(user=TEST_USER,\n                                       select='1:ncpus=1',\n                                       place='excl',\n                                       start=now + 30,\n                                       end=now + 3600)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, exp_attr, id=rid1)\n\n        rid2 = self.submit_reservation(user=TEST_USER,\n                                       select='1:ncpus=1',\n                                       place='excl',\n                                       start=now + 7200,\n                                       end=now + 10800)\n\n        self.server.expect(RESV, exp_attr, id=rid2)\n\n    def test_job_exceed_resv_end(self):\n        \"\"\"\n        Test to see that a job when submitted to a reservation without the\n        walltime would not show up as exceeding the reservation and\n        making the scheduler reject future reservations.\n        \"\"\"\n\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname,\n                            sudo=True)\n\n        now = int(time.time())\n        rid = self.submit_reservation(user=TEST_USER,\n                                      select='1:ncpus=1',\n                                      place='excl',\n                                      start=now + 30,\n                                      end=now + 300)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 
'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, exp_attr, id=rid)\n\n        self.logger.info('Waiting 30s for reservation to start')\n        exp_attr['reserve_state'] = (MATCH_RE, 'RESV_RUNNING|5')\n        self.server.expect(RESV, exp_attr, id=rid, offset=30)\n\n        # Submit a job but do not specify walltime, scheduler will consider\n        # the walltime of such a job to be 5 years\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.place': 'excl',\n             'queue': rid.split('.')[0]}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        # Submit another reservation that will start after first\n        rid2 = self.submit_reservation(user=TEST_USER,\n                                       select='1:ncpus=1',\n                                       start=now + 360,\n                                       end=now + 3600)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, exp_attr, id=rid2)\n\n    def test_future_resv_conflicts_running_job(self):\n        \"\"\"\n        Test if a running exclusive job without walltime will deny the future\n        resv from getting confirmed.\n        \"\"\"\n\n        vnode_val = None\n        if self.mom.is_cpuset_mom():\n            vnode_val = '1:ncpus=1:vnode=' + self.server.status(NODE)[1]['id']\n            select_val = vnode_val\n        else:\n            select_val = '1:ncpus=1'\n\n        now = int(time.time())\n        # Submit a job but do not specify walltime, scheduler will consider\n        # the walltime of such a job to be 5 years\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.place': 'excl'}\n        if self.mom.is_cpuset_mom():\n            a['Resource_List.select'] = vnode_val\n\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, 
{'job_state': 'R'}, id=jid)\n\n        # Submit a reservation that will start after the job starts running\n        rid1 = self.submit_reservation(user=TEST_USER,\n                                       select=select_val,\n                                       start=now + 360,\n                                       end=now + 3600)\n\n        self.server.log_match(rid1 + \";Reservation denied\",\n                              id=rid1, interval=5)\n\n    def test_future_resv_confirms_after_running_job(self):\n        \"\"\"\n        Test if a future reservation gets confirmed if its start time starts\n        after the end time of a job running in an exclusive reservation\n        \"\"\"\n\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname,\n                            sudo=True)\n\n        now = int(time.time())\n        rid = self.submit_reservation(user=TEST_USER,\n                                      select='1:ncpus=1',\n                                      place='excl',\n                                      start=now + 30,\n                                      end=now + 300)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, exp_attr, id=rid)\n\n        self.logger.info('Waiting 30s for reservation to start')\n        exp_attr['reserve_state'] = (MATCH_RE, 'RESV_RUNNING|5')\n        self.server.expect(RESV, exp_attr, id=rid, offset=30)\n\n        # Submit a job with walltime exceeding reservation duration\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.place': 'excl',\n             'Resource_List.walltime': 600,\n             'queue': rid.split('.')[0]}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        # Submit another reservation that will start after the job ends\n        rid2 = 
self.submit_reservation(user=TEST_USER,\n                                       select='1:ncpus=1',\n                                       start=now + 630,\n                                       end=now + 3600)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, exp_attr, id=rid2)\n\n    def test_future_resv_confirms_before_non_excl_job(self):\n        \"\"\"\n        Test if a future reservation gets confirmed if its start time starts\n        before the end time of a non exclusive job running in an exclusive\n        reservation.\n        \"\"\"\n\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname,\n                            sudo=True)\n\n        now = int(time.time())\n        rid = self.submit_reservation(user=TEST_USER,\n                                      select='1:ncpus=1',\n                                      place='excl',\n                                      start=now + 30,\n                                      end=now + 300)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, exp_attr, id=rid)\n\n        self.logger.info('Waiting 30s for reservation to start')\n        exp_attr['reserve_state'] = (MATCH_RE, 'RESV_RUNNING|5')\n        self.server.expect(RESV, exp_attr, id=rid, offset=30)\n\n        # Submit a job with walltime exceeding reservation duration\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.walltime': 600,\n             'queue': rid.split('.')[0]}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        # Submit another reservation that will start after the first\n        # reservation ends\n        rid2 = self.submit_reservation(user=TEST_USER,\n                                       select='1:ncpus=1',\n                                    
   start=now + 330,\n                                       end=now + 3600)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, exp_attr, id=rid2)\n\n    def test_future_resv_with_non_excl_jobs(self):\n        \"\"\"\n        Test if future reservations with/without exclusive placement are\n        confirmed if their start time falls before the end time of\n        non-exclusive jobs that are running in the reservation.\n        \"\"\"\n\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname,\n                            sudo=True)\n\n        now = int(time.time())\n        rid = self.submit_reservation(user=TEST_USER,\n                                      select='1:ncpus=1',\n                                      start=now + 30,\n                                      end=now + 300)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, exp_attr, id=rid)\n\n        self.logger.info('Waiting 30s for reservation to start')\n        exp_attr['reserve_state'] = (MATCH_RE, 'RESV_RUNNING|5')\n        self.server.expect(RESV, exp_attr, id=rid, offset=30)\n\n        # Submit a job with walltime exceeding reservation\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.walltime': 600,\n             'queue': rid.split('.')[0]}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        # Submit another non-exclusive reservation that will start after\n        # previous reservation ends but before job's walltime is over.\n        rid2 = self.submit_reservation(user=TEST_USER,\n                                       select='1:ncpus=1',\n                                       start=now + 330,\n                                       end=now + 3600)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 
'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, exp_attr, id=rid2)\n\n        self.server.delete(rid2)\n\n        # Submit another exclusive reservation that will start after\n        # previous reservation ends but before job's walltime is over.\n        rid3 = self.submit_reservation(user=TEST_USER,\n                                       select='1:ncpus=1',\n                                       place='excl',\n                                       start=now + 330,\n                                       end=now + 3600)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, exp_attr, id=rid3)\n\n    def test_resv_excl_with_jobs(self):\n        \"\"\"\n        Test to see that exclusive reservations in the near term do not\n        interfere with longer term reservations with jobs inside\n        \"\"\"\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        now = int(time.time())\n        rid = self.submit_reservation(user=TEST_USER,\n                                      select='1:ncpus=1',\n                                      place='excl',\n                                      start=now + 30,\n                                      end=now + 300)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, exp_attr, id=rid)\n\n        self.logger.info('Waiting 30s for reservation to start')\n        exp_attr['reserve_state'] = (MATCH_RE, 'RESV_RUNNING|5')\n        self.server.expect(RESV, exp_attr, id=rid, offset=30)\n\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.place': 'excl',\n             'Resource_List.walltime': '30',\n             'queue': rid.split('.')[0]}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        # Submit another reservation that will 
start after first\n        rid2 = self.submit_reservation(user=TEST_USER,\n                                       select='1:ncpus=1',\n                                       place='excl',\n                                       start=now + 360,\n                                       end=now + 3600)\n\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, exp_attr, id=rid2)\n\n    def test_resv_server_restart(self):\n        \"\"\"\n        Test if a reservation correctly goes into the resv-exclusive state\n        if the server is restarted between when the reservation gets\n        confirmed and when it starts\n        \"\"\"\n        now = int(time.time())\n        start = now + 30\n        a = {'Resource_List.select': '1:ncpus=1:vnode=' +\n             self.mom.shortname}\n        if self.mom.is_cpuset_mom():\n            vnode_val = '1:ncpus=1:vnode=' + self.server.status(NODE)[1]['id']\n            a['Resource_List.select'] = vnode_val\n\n        rid = self.submit_reservation(user=TEST_USER,\n                                      select=a['Resource_List.select'],\n                                      place='excl',\n                                      start=start,\n                                      end=start + 3600)\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, a, id=rid)\n\n        self.server.restart()\n\n        sleep_time = start - int(time.time())\n\n        self.logger.info('Waiting %d seconds till resv starts' % sleep_time)\n        a = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, a, id=rid, offset=sleep_time)\n\n        mom_name = self.mom.shortname\n        if self.mom.is_cpuset_mom():\n            mom_name = self.server.status(NODE)[1]['id']\n        self.server.expect(NODE, {'state': 'resv-exclusive'},\n                           id=mom_name)\n\n    def test_resv_exclusive_after_node_offlined(self):\n        
\"\"\"\n        Test if the set of reserved nodes correctly goes into the\n        resv-exclusive state if a node in the initially confirmed reservation\n        is offlined and the reservation is only reconfirmed and assigned new\n        nodes when the reservation is starting.\n        \"\"\"\n        a = {'reserve_retry_time': 3600}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {'resources_available.ncpus': 1, 'sharing': 'default_excl'}\n        self.mom.create_vnodes(a, num=6)\n\n        now = int(time.time())\n        start = now + 30\n        rid = self.submit_reservation(user=TEST_USER,\n                                      select='4:ncpus=1',\n                                      place='excl',\n                                      start=start,\n                                      end=start + 3600)\n\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, a, id=rid)\n\n        self.server.status(RESV, 'resv_nodes', id=rid)\n        resv_nodes = self.server.reservations[rid].get_vnodes()\n        self.logger.info(\"RESV_NODES: %s\", resv_nodes)\n\n        self.server.manager(MGR_CMD_SET, NODE, {'state': 'offline'},\n                            id=resv_nodes[0])\n\n        a = {'reserve_state': (MATCH_RE, 'RESV_DEGRADED')}\n        self.server.expect(RESV, a, id=rid)\n\n        sleep_time = start - int(time.time())\n        self.logger.info('Waiting %d seconds till resv starts' % sleep_time)\n        a = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, a, id=rid, offset=sleep_time)\n\n        self.server.status(RESV, 'resv_nodes', id=rid)\n        resv_nodes = self.server.reservations[rid].get_vnodes()\n        self.logger.info(\"RESV_NODES: %s\", resv_nodes)\n\n        for n in resv_nodes:\n            self.server.expect(NODE, {'state': 'resv-exclusive'},\n                               id=n, max_attempts=1)\n\n    def test_multiple_asap_resv(self):\n        
\"\"\"\n        Test that multiple ASAP reservations are scheduled one after another\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {'resources_available.ncpus': 1},\n                            id=self.mom.shortname)\n\n        job_attrs = {'Resource_List.select': '1:ncpus=1',\n                     'Resource_List.walltime': '1:00:00'}\n        j = Job(TEST_USER, attrs=job_attrs)\n        jid1 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n\n        s = self.server.status(JOB, 'stime', id=jid1)\n        job_stime = int(time.mktime(time.strptime(s[0]['stime'], '%c')))\n\n        j = Job(TEST_USER, attrs=job_attrs)\n        jid2 = self.server.submit(j)\n        self.server.expect(JOB, 'comment', op=SET, id=jid2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n\n        rid1 = self.submit_asap_reservation(TEST_USER, jid2)\n        exp_attrs = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, exp_attrs, id=rid1)\n        s = self.server.status(RESV, 'reserve_start', id=rid1)\n        resv1_stime = int(time.mktime(\n            time.strptime(s[0]['reserve_start'], '%c')))\n        msg = 'ASAP reservation has incorrect start time'\n        self.assertEqual(resv1_stime, job_stime + 3600, msg)\n\n        j = Job(TEST_USER, attrs=job_attrs)\n        jid3 = self.server.submit(j)\n        self.server.expect(JOB, 'comment', op=SET, id=jid3)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid3)\n\n        rid2 = self.submit_asap_reservation(TEST_USER, jid3)\n        exp_attrs = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, exp_attrs, id=rid2)\n        s = self.server.status(RESV, 'reserve_start', id=rid2)\n        resv2_stime = int(time.mktime(\n            time.strptime(s[0]['reserve_start'], '%c')))\n        msg = 'ASAP reservation has incorrect start time'\n        self.assertEqual(resv2_stime, 
resv1_stime + 3600, msg)\n\n    def test_excl_asap_resv_before_longterm_resvs(self):\n        \"\"\"\n        Test if an ASAP reservation created from an exclusive\n        placement job does not interfere with subsequent long\n        term advance and standing exclusive reservations\n        \"\"\"\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        # Submit a job and let it run with available resources\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.walltime': 30}\n        j1 = Job(TEST_USER, attrs=a)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n\n        # Submit a second job with exclusive node placement\n        # and let it be queued\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.walltime': 300,\n             'Resource_List.place': 'excl'}\n        j2 = Job(TEST_USER, attrs=a)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, 'comment', op=SET, id=jid2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n\n        # Convert j2 into an ASAP reservation\n        rid1 = self.submit_asap_reservation(user=TEST_USER,\n                                            jid=jid2)\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, exp_attr, id=rid1)\n\n        # Wait for the reservation to start\n        self.logger.info('Waiting 30 seconds for reservation to start')\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        self.server.expect(RESV, exp_attr, id=rid1, offset=30)\n\n        # Submit a long term reservation with exclusive node\n        # placement when rid1 is running\n        # This reservation should be confirmed\n        now = int(time.time())\n        rid2 = self.submit_reservation(user=TEST_USER,\n                                       
select='1:ncpus=1',\n                                       place='excl',\n                                       start=now + 3600,\n                                       end=now + 3605)\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, exp_attr, id=rid2)\n\n        # Submit a long term standing reservation with exclusive node\n        # placement when rid1 is running\n        # This reservation should also be confirmed\n        now = int(time.time())\n        rid3 = self.submit_reservation(user=TEST_USER,\n                                       select='1:ncpus=1',\n                                       place='excl',\n                                       rrule='FREQ=HOURLY;COUNT=3',\n                                       start=now + 7200,\n                                       end=now + 7205)\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, exp_attr, id=rid3)\n\n    def test_excl_asap_resv_after_longterm_resvs(self):\n        \"\"\"\n        Test if an exclusive ASAP reservation created from an exclusive\n        placement job does not interfere with already existing long term\n        exclusive reservations.\n        Also, test if future exclusive reservations are successful when\n        the ASAP reservation is running.\n        \"\"\"\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        # Submit a long term advance reservation with exclusive node\n        now = int(time.time())\n        rid1 = self.submit_reservation(user=TEST_USER,\n                                       select='1:ncpus=1',\n                                       place='excl',\n                                       start=now + 360,\n                                       end=now + 365)\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, exp_attr, 
id=rid1)\n\n        # Submit a long term standing reservation with exclusive node\n        now = int(time.time())\n        rid2 = self.submit_reservation(user=TEST_USER,\n                                       select='1:ncpus=1',\n                                       place='excl',\n                                       rrule='FREQ=HOURLY;COUNT=3',\n                                       start=now + 3600,\n                                       end=now + 3605)\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, exp_attr, id=rid2)\n\n        # Submit a job and let it run with available resources\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.walltime': 30}\n        j1 = Job(TEST_USER, attrs=a)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n\n        # Submit a second job with exclusive node placement\n        # and let it be queued\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.walltime': 300,\n             'Resource_List.place': 'excl'}\n        j2 = Job(TEST_USER, attrs=a)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, 'comment', op=SET, id=jid2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n\n        # Convert j2 into an ASAP reservation; use a distinct name so the\n        # long term reservation rid1 is not shadowed\n        rid_asap = self.submit_asap_reservation(user=TEST_USER,\n                                                jid=jid2)\n\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, exp_attr, id=rid_asap)\n\n        # Wait for the reservation to start\n        self.logger.info('Waiting 30 seconds for reservation to start')\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        self.server.expect(RESV, exp_attr, id=rid_asap, offset=30)\n\n        # Submit a long term reservation with exclusive node\n        # placement when rid_asap is running\n        # This 
reservation should be confirmed\n        now = int(time.time())\n        rid3 = self.submit_reservation(user=TEST_USER,\n                                       select='1:ncpus=1',\n                                       place='excl',\n                                       start=now + 3600,\n                                       end=now + 3605)\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, exp_attr, id=rid3)\n\n    def test_multi_vnode_excl_advance_resvs(self):\n        \"\"\"\n        Test if long term exclusive reservations do not interfere\n        with current reservations on a multi-vnoded host\n        \"\"\"\n        a = {'resources_available.ncpus': 4}\n        self.mom.create_vnodes(a, num=3)\n\n        # Submit a long term standing reservation with\n        # exclusive nodes.\n        now = int(time.time())\n        rid1 = self.submit_reservation(user=TEST_USER,\n                                       select='1:ncpus=9',\n                                       place='excl',\n                                       rrule='FREQ=HOURLY;COUNT=3',\n                                       start=now + 7200,\n                                       end=now + 7205)\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, exp_attr, id=rid1)\n\n        # Submit a long term advance reservation with exclusive node\n        now = int(time.time())\n        rid2 = self.submit_reservation(user=TEST_USER,\n                                       select='1:ncpus=10',\n                                       place='excl',\n                                       start=now + 3600,\n                                       end=now + 3605)\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, exp_attr, id=rid2)\n\n        # Submit a short term reservation requesting all the nodes\n        # exclusively\n        now = 
int(time.time())\n        rid3 = self.submit_reservation(user=TEST_USER,\n                                       select='1:ncpus=12',\n                                       place='excl',\n                                       start=now + 20,\n                                       end=now + 100)\n        exp_attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, exp_attr, id=rid3)\n\n        exp_attr['reserve_state'] = (MATCH_RE, 'RESV_RUNNING|5')\n        self.server.expect(RESV, exp_attr, id=rid3, offset=30)\n\n    def test_multi_vnode_excl_asap_resv(self):\n        \"\"\"\n        Test if an ASAP reservation created from a excl placement\n        job does not interfere with future multinode exclusive\n        reservations on a multi-vnoded host\n        \"\"\"\n        a = {'resources_available.ncpus': 4}\n        self.mom.create_vnodes(a, num=3)\n\n        # Submit 3 exclusive jobs, so all the nodes are busy\n        # j1 requesting 4 cpus, j2 requesting 4 cpus and j3\n        # requesting 5 cpus\n        a = {'Resource_List.select': '1:ncpus=4',\n             'Resource_List.place': 'excl',\n             'Resource_List.walltime': 30}\n        j1 = Job(TEST_USER, attrs=a)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n\n        a['Resource_List.walltime'] = 400\n        j2 = Job(TEST_USER, attrs=a)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n\n        a = {'Resource_List.select': '1:ncpus=5',\n             'Resource_List.place': 'excl',\n             'Resource_List.walltime': 100}\n        j3 = Job(TEST_USER, attrs=a)\n        jid3 = self.server.submit(j3)\n        self.server.expect(JOB, 'comment', op=SET, id=jid3)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid3)\n\n        # Convert J3 to ASAP reservation\n        rid1 = self.submit_asap_reservation(user=TEST_USER,\n                                  
          jid=jid3)\n\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, exp_attr, id=rid1)\n\n        # Wait for the reservation to start\n        self.logger.info('Waiting 30 seconds for reservation to start')\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        self.server.expect(RESV, exp_attr, id=rid1, offset=30)\n\n        # Submit a long term reservation with exclusive node\n        # placement when rid1 is running (requesting all nodes)\n        # This reservation should be confirmed\n        now = int(time.time())\n        rid2 = self.submit_reservation(user=TEST_USER,\n                                       select='1:ncpus=12',\n                                       place='excl',\n                                       start=now + 3600,\n                                       end=now + 3605)\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, exp_attr, id=rid2)\n\n    def test_fail_confirm_resv_message(self):\n        \"\"\"\n        Test if the scheduler fails to reserve a\n        reservation, the reason will be logged.\n        \"\"\"\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        # Submit a long term advance reservation that will be denied\n        now = int(time.time())\n        rid = self.submit_reservation(user=TEST_USER,\n                                      select='1:ncpus=10',\n                                      start=now + 360,\n                                      end=now + 365)\n        self.server.log_match(rid + \";Reservation denied\",\n                              id=rid, interval=5)\n        # The scheduler should log reason why it was denied\n        self.scheduler.log_match(rid + \";PBS Failed to confirm resv: \" +\n                                 \"Insufficient amount of resource: ncpus\")\n\n    def 
common_steps(self):\n        \"\"\"\n        This function has common steps for configuration used in tests\n        \"\"\"\n        a = {'resources_available.ncpus': 4}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n        self.server.manager(MGR_CMD_SET, SERVER, {\n            'job_history_enable': 'True'})\n\n    def test_advance_reservation_with_job_array(self):\n        \"\"\"\n        Test to submit a job array within a advance reservation\n        Check if the reservation gets confimed and the jobs\n        inside the reservation starts running when the reservation runs.\n        \"\"\"\n        self.common_steps()\n        # Submit a job-array\n        j = Job(TEST_USER, attrs={ATTR_J: '1-4'})\n        j.set_sleep_time(10)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'B'}, jid)\n        self.server.expect(JOB, {'job_state=R': 4}, count=True,\n                           id=jid, extend='t')\n        # Check status of the sub-job using qstat -fx once job completes\n        self.server.expect(JOB, {'job_state': 'F'}, extend='x',\n                           offset=10, id=jid)\n\n        # Submit a advance reservation (R1) and an array job to the reservation\n        # once reservation confirmed\n        start_time = time.time()\n        resv_start_time = int(start_time) + 20\n        rid = self.submit_reservation(user=TEST_USER,\n                                      select='1:ncpus=1',\n                                      start=resv_start_time,\n                                      end=resv_start_time + 100)\n        rid_q = rid.split('.')[0]\n        a = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, a, id=rid)\n\n        a = {ATTR_q: rid_q, ATTR_J: '1-4'}\n        j2 = Job(TEST_USER, attrs=a)\n        j2.set_sleep_time(20)\n        jid2 = self.server.submit(j2)\n        subjid = []\n        for i in range(1, 5):\n            
subjid.append(j.create_subjob_id(jid2, i))\n\n        a = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        expect_offset = resv_start_time - int(time.time())\n        self.server.expect(RESV, a, id=rid, offset=expect_offset)\n        self.server.expect(JOB, {'job_state': 'B'}, jid2)\n        self.server.expect(JOB, {'job_state=R': 1}, count=True,\n                           id=jid2, extend='t')\n        self.server.expect(JOB, {'job_state=Q': 3}, count=True,\n                           extend='t', id=jid2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=subjid[0])\n        self.server.expect(JOB, {'job_state': 'Q'}, id=subjid[1])\n        self.server.expect(JOB, {'job_state': 'Q'}, id=subjid[2])\n        self.server.expect(JOB, {'job_state': 'Q'}, id=subjid[3])\n        self.server.delete(subjid[0])\n        self.server.expect(JOB, {'job_state': 'R'}, id=subjid[1])\n        # Wait for reservation to delete from server\n        msg = \"Que;\" + rid_q + \";deleted at request of pbs_server@\"\n        self.server.log_match(msg, starttime=start_time, interval=10)\n        # Check status of the sub-job using qstat -fx once job completes\n        self.server.expect(JOB, {'job_state': 'F', 'Exit_status': '271'},\n                           extend='x', attrop=PTL_AND, id=subjid[0])\n        self.server.expect(JOB, {'job_state': 'F', 'Exit_status': '0'},\n                           extend='x', attrop=PTL_AND, id=subjid[3])\n\n        # Submit a advance reservation (R2) and an array job to the reservation\n        # once reservation confirmed\n        start_time = time.time()\n        resv_start_time = int(start_time) + 20\n        rid = self.submit_reservation(user=TEST_USER,\n                                      select='1:ncpus=4',\n                                      start=resv_start_time,\n                                      end=resv_start_time + 160)\n        rid_q = rid.split('.')[0]\n        a = {'reserve_state': (MATCH_RE, 
\"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, a, id=rid)\n\n        a = {ATTR_q: rid_q, ATTR_J: '1-4'}\n        j2 = Job(TEST_USER, attrs=a)\n        j2.set_sleep_time(60)\n        jid2 = self.server.submit(j2)\n        subjid = []\n        for i in range(1, 5):\n            subjid.append(j.create_subjob_id(jid2, i))\n\n        a = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        expect_offset = resv_start_time - int(time.time())\n        self.server.expect(RESV, a, id=rid, offset=expect_offset)\n        self.server.expect(JOB, {'job_state': 'B'}, jid2)\n        self.server.expect(JOB, {'job_state=R': 4}, count=True,\n                           id=jid2, extend='t')\n        # Submit another job-array with small sleep time than job j2\n        a = {ATTR_q: rid_q, ATTR_J: '1-4'}\n        j3 = Job(TEST_USER, attrs=a)\n        j3.set_sleep_time(20)\n        jid3 = self.server.submit(j3)\n        subjid2 = []\n        for i in range(1, 5):\n            subjid2.append(j.create_subjob_id(jid3, i))\n        self.server.expect(JOB, {'job_state': 'Q'}, jid3)\n        self.server.expect(JOB, {'job_state=Q': 5}, count=True,\n                           id=jid3, extend='t')\n        self.server.expect(JOB, {'job_state': 'Q'}, id=subjid2[0])\n        # Wait for job array j2 to finish and verify all sub-job\n        # from j3 start running\n        self.server.expect(JOB, {'job_state': 'B'}, jid3, offset=30,\n                           interval=5, max_attempts=30)\n        self.server.expect(JOB, {'job_state=R': 4}, count=True,\n                           id=jid3, extend='t')\n        msg = \"Que;\" + rid_q + \";deleted at request of pbs_server@\"\n        self.server.log_match(msg, starttime=start_time, interval=10)\n        # Check status of the job-array using qstat -fx at the end of\n        # reservation\n        self.server.expect(JOB, {'job_state': 'F', 'Exit_status': '0'},\n                           extend='x', attrop=PTL_AND, id=jid2)\n        
self.server.expect(JOB, {'job_state': 'F', 'Exit_status': '0'},\n                           extend='x', attrop=PTL_AND, id=jid3)\n\n    @requirements(num_moms=2)\n    def test_advance_resv_with_multinode_job_array(self):\n        \"\"\"\n        Test multinode job array with advance reservation\n        \"\"\"\n        if (len(self.moms) < 2):\n            self.skip_test(\"Test requires 2 moms: use -p mom1:mom2\")\n        a = {'resources_available.ncpus': 4}\n        for mom in self.moms.values():\n            self.server.manager(MGR_CMD_SET, NODE, a, id=mom.shortname)\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'job_history_enable': 'True'})\n        # Submit reservation with placement type scatter\n        now = int(time.time())\n        rid = self.submit_reservation(user=TEST_USER,\n                                      select='2:ncpus=2',\n                                      place='scatter',\n                                      start=now + 30,\n                                      end=now + 300)\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, exp_attr, id=rid)\n        resv_queue = rid.split(\".\")[0]\n\n        # Submit job array in reservation queue\n        attrs = {ATTR_q: resv_queue, ATTR_J: '1-5',\n                 'Resource_List.select': '2:ncpus=1'}\n        j = Job(PBSROOT_USER, attrs)\n        j.set_sleep_time(60)\n        jid = self.server.submit(j)\n        subjid = []\n        for i in range(1, 6):\n            subjid.append(j.create_subjob_id(jid, i))\n\n        self.logger.info(\"Wait 30s for resv to be in Running state\")\n        a = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, a, id=rid, offset=30)\n        self.server.expect(JOB, {'job_state': 'B'}, id=jid)\n        self.server.expect(JOB, {'job_state=R': 2}, count=True,\n                           extend='t', id=jid)\n        self.server.expect(JOB, 
{'job_state=Q': 3}, count=True,\n                           extend='t', id=jid)\n        self.server.sigjob(jobid=subjid[0], signal=\"suspend\")\n        self.server.expect(JOB, {'job_state': 'S'}, id=subjid[0])\n        self.server.expect(JOB, {'job_state': 'R'}, id=subjid[2])\n\n        # Submit job array with placement type scatter in resv queue\n        attrs = {ATTR_q: resv_queue, ATTR_J: '1-5',\n                 'Resource_List.place': 'scatter',\n                 'Resource_List.select': '2:ncpus=1'}\n        j1 = Job(PBSROOT_USER, attrs)\n        j1.set_sleep_time(60)\n        jid2 = self.server.submit(j1)\n        subjid2 = []\n        for i in range(1, 6):\n            subjid2.append(j.create_subjob_id(jid2, i))\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n\n        self.server.sigjob(subjid[0], 'resume')\n        self.logger.info(\"Wait 120s for all the subjobs to complete\")\n        self.server.expect(JOB, {'job_state': 'F', 'exit_status': '0'},\n                           id=jid, extend='x', interval=10, offset=120)\n\n        self.server.expect(JOB, {'job_state': 'B'}, id=jid2)\n        self.server.expect(JOB, {'job_state=R': 2}, count=True,\n                           extend='t', id=jid2)\n        self.server.expect(JOB, {'job_state=Q': 3}, count=True,\n                           extend='t', id=jid2)\n        self.server.sigjob(jobid=subjid2[0], signal=\"suspend\")\n        self.server.expect(JOB, {'job_state': 'S'}, id=subjid2[0])\n        self.server.sigjob(jobid=subjid2[1], signal=\"suspend\")\n        self.server.expect(JOB, {'job_state': 'S'}, id=subjid2[1])\n        self.server.expect(JOB, {'job_state': 'R'}, id=subjid2[2])\n        self.server.expect(JOB, {'job_state': 'R'}, id=subjid2[3])\n        self.server.delete([subjid2[2], subjid2[3]])\n        self.server.expect(JOB, {'job_state': 'R'}, id=subjid2[4])\n        self.server.expect(JOB, {'job_state': 'X'}, id=subjid2[4], offset=60)\n        self.server.sigjob(subjid2[0], 
'resume')\n        self.server.sigjob(subjid2[1], 'resume')\n        self.server.expect(JOB, {'job_state=R': 2}, count=True,\n                           extend='t', id=jid2)\n        self.logger.info(\"Wait 120s for all the subjobs to complete\")\n        self.server.expect(JOB, {'job_state': 'F'},\n                           id=jid2, extend='x', interval=10, offset=120)\n\n    def test_reservations_with_expired_subjobs(self):\n        \"\"\"\n        Test that an array job submitted to a reservation ends when\n        there are expired subjobs in the array job and job history is\n        enabled\n        \"\"\"\n        self.common_steps()\n        # Submit an advance reservation and an array job to the reservation\n        # once reservation confirmed\n        start_time = time.time()\n        now = int(start_time)\n        rid = self.submit_reservation(user=TEST_USER,\n                                      select='1:ncpus=1',\n                                      start=now + 10,\n                                      end=now + 40)\n        rid_q = rid.split('.')[0]\n        a = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, a, id=rid)\n\n        # submit enough jobs that there are some expired subjobs and some\n        # queued/running subjobs left in the system by the time reservation\n        # ends\n        a = {ATTR_q: rid_q, ATTR_J: '1-20'}\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(2)\n        jid = self.server.submit(j)\n\n        a = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        self.server.expect(RESV, a, id=rid, offset=10)\n        self.server.expect(JOB, {'job_state': 'B'}, jid)\n        # Wait for reservation to delete from server\n        msg = \"Que;\" + rid_q + \";deleted at request of pbs_server@\"\n        self.server.log_match(msg, starttime=start_time, interval=10)\n        # Check status of the parent-job using qstat -fx once reservation ends\n        
self.server.expect(JOB, {'job_state': 'F', 'substate': '91'},\n                           extend='x', id=jid)\n\n    def test_ASAP_resv_request_same_time(self):\n        \"\"\"\n        Test two jobs converted in two ASAP reservation\n        which request same walltime should run and finish as\n        per available resources.\n        Also to verify 2 ASAP reservations with same start\n        time doesn't crashes PBS daemon.\n        \"\"\"\n        self.common_steps()\n\n        # Submit job j to consume all resources\n        a = {'Resource_List.walltime': '10',\n             'Resource_List.select': '1:ncpus=4'}\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(10)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, jid)\n\n        # Submit a job j2\n        a = {'Resource_List.select': '1:ncpus=2',\n             'Resource_List.walltime': '10'}\n        j2 = Job(TEST_USER, attrs=a)\n        j2.set_sleep_time(10)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'Q'}, jid2)\n\n        # Convert j2 into an ASAP reservation\n        now = time.time()\n        rid1 = self.submit_asap_reservation(user=TEST_USER,\n                                            jid=jid2)\n        rid1_q = rid1.split('.')[0]\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\"),\n                    'reserve_duration': 10}\n        self.server.expect(RESV, exp_attr, id=rid1)\n        self.server.expect(\n            JOB, {'job_state': 'Q', 'queue': rid1_q}, id=jid2)\n\n        # Submit another job j3 same as j2\n        j3 = Job(TEST_USER, attrs=a)\n        j3.set_sleep_time(10)\n        jid3 = self.server.submit(j3)\n        self.server.expect(JOB, {'job_state': 'Q'}, jid3)\n        # Convert j3 into an ASAP reservation\n        now2 = time.time()\n        rid2 = self.submit_asap_reservation(user=TEST_USER,\n                                            jid=jid3)\n        rid2_q = 
rid2.split('.')[0]\n        self.server.expect(RESV, exp_attr, id=rid2)\n        self.server.expect(\n            JOB, {'job_state': 'Q', 'queue': rid2_q}, id=jid3)\n\n        # Wait for both  reservation to start\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        self.server.expect(RESV, exp_attr, id=rid1)\n        self.server.expect(RESV, exp_attr, id=rid2)\n        # Verify j2 and j3 start running\n        self.server.expect(\n            JOB, {'job_state': 'R', 'queue': rid1_q}, id=jid2)\n        self.server.expect(\n            JOB, {'job_state': 'R', 'queue': rid2_q}, id=jid3)\n\n        # Wait for reservations to be finish\n        msg = \"Que;\" + rid1_q + \";deleted at request of pbs_server@\"\n        self.server.log_match(msg, starttime=now, interval=5)\n        msg = \"Que;\" + rid2_q + \";deleted at request of pbs_server@\"\n        self.server.log_match(msg, starttime=now2)\n        # Check status of the job using qstat -fx once reservation\n        # ends\n        jids = [jid2, jid3]\n        for job in jids:\n            self.server.expect(JOB, 'queue', op=UNSET, id=job)\n            self.server.expect(JOB, {'job_state=F': 1}, count=True,\n                               id=job, extend='x')\n\n        # Verify pbs_server and pbs_scheduler is up\n        if not self.server.isUp():\n            self.fail(\"Server is not up\")\n        if not self.scheduler.isUp():\n            self.fail(\"Scheduler is not up\")\n\n    def test_standing_resv_with_job_array(self):\n        \"\"\"\n        Test job-array with standing reservation\n        \"\"\"\n        self.common_steps()\n        if 'PBS_TZID' in self.conf:\n            tzone = self.conf['PBS_TZID']\n        elif 'PBS_TZID' in os.environ:\n            tzone = os.environ['PBS_TZID']\n        else:\n            self.logger.info('Missing timezone, using America/Los_Angeles')\n            tzone = 'America/Los_Angeles'\n        # Submit a standing reservation to occur every other 
minute for a\n        # total count of 2\n        start = time.time() + 10\n        now = start + 25\n        start = int(start)\n        end = int(now)\n        rid = self.submit_reservation(user=TEST_USER, select='1:ncpus=4',\n                                      rrule='FREQ=MINUTELY;INTERVAL=2;COUNT=2',\n                                      start=start, end=end)\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, exp_attr, id=rid)\n        rid_q = rid.split(\".\")[0]\n        # Submit a job-array within reservation\n        j = Job(TEST_USER, attrs={'Resource_List.select': '1:ncpus=1',\n                                  ATTR_q: rid_q, ATTR_J: '1-4'})\n        j.set_sleep_time(15)\n        jid = self.server.submit(j)\n        subjid = []\n        for i in range(1, 4):\n            subjid.append(j.create_subjob_id(jid, i))\n        # Wait for standing reservation first instance to start\n        self.logger.info('Waiting until reservation runs')\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        self.server.expect(RESV, exp_attr, id=rid,\n                           offset=start - int(time.time()))\n        self.server.expect(RESV, {'reserve_index': 1}, id=rid)\n        self.server.expect(JOB, {'job_state': 'B'}, jid)\n        self.server.expect(JOB, {'job_state=R': 4}, count=True,\n                           id=jid, extend='t')\n        # Wait for standing reservation first instance to finished\n        self.logger.info('Waiting for first occurrence to finish')\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, exp_attr, id=rid,\n                           offset=end - int(time.time()))\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid)\n        # Wait for standing reservation second instance to start\n        offset = end - int(time.time()) + 120 - (end - start)\n        self.logger.info(\n            'Waiting for 
second occurrence of reservation to start')\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        self.server.expect(RESV, exp_attr, id=rid, offset=offset, interval=1)\n        self.server.expect(RESV, {'reserve_index': 2}, id=rid)\n        # Wait for reservations to be finished\n        msg = \"Que;\" + rid_q + \";deleted at request of pbs_server@\"\n        self.server.log_match(msg, starttime=now, interval=2)\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid)\n\n        # Check for subjob status for job-array\n        # as all subjobs from job-array finished within the\n        # instance so it should have substate=92\n        for i in subjid:\n            self.server.expect(JOB, {'job_state': 'F', 'substate': '92'},\n                               extend='x', id=i)\n        # Check for finished jobs by issuing the command qstat\n        self.server.expect(JOB, {'job_state': 'F', 'substate': '92'},\n                           extend='xt', id=jid)\n\n        start = int(time.time()) + 25\n        end = int(time.time()) + 3660\n        rid = self.submit_reservation(user=TEST_USER, select='1:ncpus=1',\n                                      rrule='FREQ=DAILY;COUNT=2',\n                                      start=start, end=end)\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, exp_attr, id=rid)\n        rid_q = rid.split(\".\")[0]\n        # Submit a job-array within resrvation\n        j = Job(TEST_USER, attrs={\n            'Resource_List.walltime': 20, ATTR_q: rid_q, ATTR_J: '1-5'})\n        j.set_sleep_time(20)\n        jid = self.server.submit(j)\n        subjid = []\n        for i in range(1, 6):\n            subjid.append(j.create_subjob_id(jid, i))\n        # Wait for standing reservation first instance to start\n        # Verify one subjob start running and others remain queued\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        
self.server.expect(RESV, exp_attr, id=rid, interval=1)\n        self.server.expect(RESV, {'reserve_index': 1}, id=rid)\n        self.server.expect(JOB, {'job_state': 'B'}, jid)\n        self.server.expect(JOB, {'job_state=R': 1}, count=True,\n                           id=jid, extend='t')\n        self.server.expect(JOB, {'job_state=Q': 4}, count=True,\n                           id=jid, extend='t')\n        # Suspend running subjob[1], verify\n        # subjob[2] start running\n        self.server.expect(JOB, {'job_state': 'R'}, id=subjid[0])\n        self.server.sigjob(jobid=subjid[0], signal=\"suspend\")\n        self.server.expect(JOB, {'job_state': 'S'}, id=subjid[0])\n        self.server.expect(JOB, {'job_state': 'R'}, id=subjid[1])\n        # Resume subjob[1] and verify subjob[1] should\n        # run once resources are available\n        self.server.sigjob(subjid[0], 'resume')\n        self.server.expect(JOB, {'job_state': 'S'}, id=subjid[0])\n        self.server.delete(subjid[1])\n        self.server.expect(JOB, {'job_state': 'R'}, id=subjid[2])\n        self.server.delete([subjid[2], subjid[3], subjid[4]])\n        self.server.expect(JOB, {'job_state': 'R'}, id=subjid[0])\n        self.server.expect(JOB, {'job_state': 'F',  'substate': '92',\n                                 'queue': rid_q}, id=subjid[0], extend='x')\n\n        self.server.expect(JOB, {'job_state': 'F',  'queue': rid_q},\n                           id=jid, extend='x')\n\n    def test_multiple_job_array_within_standing_reservation(self):\n        \"\"\"\n        Test multiple job-array submitted to a standing reservations\n        and subjobs exceed walltime to run within instance of\n        reservation\n        \"\"\"\n        self.common_steps()\n\n        # Submit a standing reservation to occur every other minute for a\n        # total count of 2\n        start = time.time() + 10\n        now = start + 30\n        start = int(start)\n        end = int(now)\n        rid = 
self.submit_reservation(user=TEST_USER,\n                                      select='1:ncpus=4',\n                                      rrule='FREQ=MINUTELY;INTERVAL=2;COUNT=2',\n                                      start=start,\n                                      end=end)\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, exp_attr, id=rid)\n        rid_q = rid.split(\".\")[0]\n        # Submit 3 job-array within reservation with sleep time longer\n        # than instance duration\n        subjid = []\n        jids = []\n        for i in range(3):\n            j = Job(TEST_USER, attrs={ATTR_q: rid_q, ATTR_J: '1-2'})\n            j.set_sleep_time(100)\n            pjid = self.server.submit(j)\n            jids.append(pjid)\n            for subid in range(1, 3):\n                subjid.append(j.create_subjob_id(pjid, subid))\n        # Wait for first instance of reservation to be start\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        self.server.expect(RESV, exp_attr, id=rid, interval=1)\n        self.server.expect(RESV, {'reserve_index': 1}, id=rid)\n        self.server.expect(JOB, {'job_state': 'B'}, jids[0])\n        self.server.expect(JOB, {'job_state=R': 2}, count=True,\n                           id=jids[0], extend='t')\n        self.server.expect(JOB, {'job_state=R': 2}, count=True,\n                           id=jids[1], extend='t')\n        self.server.expect(JOB, {'job_state=Q': 3}, count=True,\n                           id=jids[2], extend='t')\n        # At end of first instance of reservation ,verify running subjobs\n        # should be finished\n        self.logger.info(\n            'Waiting 20 sec job-array 1 and 2 to be finished')\n        self.server.expect(JOB, {'job_state=F': 3}, extend='xt',\n                           offset=20, id=jids[0])\n        self.server.expect(JOB, {'job_state=F': 3}, extend='xt',\n                           id=jids[1])\n\n        
# Wait for standing reservation second instance to confirmed\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, exp_attr, id=rid)\n        # Check for queued jobs in second instance of reservation\n        self.server.expect(JOB, {'job_state': 'Q',\n                                 'comment': (MATCH_RE, 'Queue not started')},\n                           id=jids[2])\n        self.logger.info(\n            'Waiting 55 sec for second instance of reservation to start')\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        self.server.expect(RESV, exp_attr, id=rid, offset=55, interval=1)\n        self.server.expect(RESV, {'reserve_index': 2}, id=rid)\n        # Check for queued jobs should be running\n        self.server.expect(JOB, {'job_state=R': 2}, extend='xt',\n                           id=jids[2])\n\n        # Check for running jobs in second instance should finished\n        self.logger.info(\n            'Waiting 30 sec for second instance of reservation to finished')\n        self.server.expect(JOB, {'job_state=F': 3}, extend='xt',\n                           offset=30, id=jids[2])\n\n        # Wait for reservation to be finished\n        msg = \"Que;\" + rid_q + \";deleted at request of pbs_server@\"\n        self.server.log_match(msg, starttime=now, interval=2)\n        for job in jids:\n            self.server.expect(JOB, 'queue', op=UNSET, id=job)\n\n        # At end of reservation,verify running subjobs from job-array 3\n        # terminated\n        self.server.expect(JOB, {'job_state': 'F', 'substate': '91'},\n                           extend='x', id=jids[2])\n        self.server.expect(JOB, {'job_state': 'F', 'substate': '91',\n                                 'queue': rid_q}, extend='x',\n                           attrop=PTL_AND, id=subjid[5])\n\n        # Check for subjobs status of job-array 1 and 2\n        # as all subjobs from job-array 1 and 2 exceed walltime of\n  
      # reservation,so they will not complete running within an instance\n        # so the substate of these subjobs should be 93\n        job_list = subjid\n        job_list.pop()\n        job_list.pop()\n        for subjob in job_list:\n            self.server.expect(JOB, {'job_state': 'F', 'substate': '93',\n                                     'queue': rid_q}, extend='xt',\n                               attrop=PTL_AND, id=subjob)\n\n    def test_delete_idle_resv_basic(self):\n        \"\"\"\n        Test basic functionality of delete_idle_time.  Submit a reservation\n        with delete_idle_time and no jobs.  Wait until the timer expires\n        and see the reservation get deleted\n        \"\"\"\n        now = int(time.time())\n        start = now + 30\n        idle_timer = 15\n        extra = {'delete_idle_time': idle_timer}\n        rid = self.submit_reservation(user=TEST_USER,\n                                      select='1:ncpus=1',\n                                      start=start,\n                                      end=now + 3600,\n                                      extra_attrs=extra)\n\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, exp_attr, id=rid)\n        self.logger.info('Sleeping until reservation starts')\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        offset = start - int(time.time())\n        self.server.expect(RESV, exp_attr, id=rid, offset=offset)\n\n        self.logger.info('Sleeping until resv idle timer fires')\n        self.server.expect(RESV, 'queue', op=UNSET, id=rid, offset=idle_timer)\n\n    def test_delete_idle_resv_job_finish(self):\n        \"\"\"\n        Test that an idle reservation is properly deleted after its only\n        job runs and finishes\n        \"\"\"\n        now = int(time.time())\n        start = now + 30\n        idle_timer = 15\n        extra = {'delete_idle_time': idle_timer}\n        rid = 
self.submit_reservation(user=TEST_USER,\n                                      select='1:ncpus=1',\n                                      start=start,\n                                      end=now + 3600,\n                                      extra_attrs=extra)\n\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, exp_attr, id=rid)\n\n        rid_q = rid.split('.', 1)[0]\n\n        a = {'Resource_List.select': '1:ncpus=1', ATTR_q: rid_q}\n        j = Job(attrs=a)\n        j.set_sleep_time(5)\n        jid = self.server.submit(j)\n\n        self.logger.info('Sleeping until reservation starts')\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        offset = start - int(time.time())\n        self.server.expect(RESV, exp_attr, id=rid, offset=offset)\n\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid)\n\n        # Wait for idle resv timer to hit and delete reservation\n        self.logger.info('Sleeping until resv idle timer fires')\n        self.server.expect(RESV, 'queue', op=UNSET,\n                           id=rid, offset=idle_timer + 5)\n\n    def test_delete_idle_resv_job_delete(self):\n        \"\"\"\n        Test that when a running job is deleted, the\n        idle reservation is deleted\n        \"\"\"\n        now = int(time.time())\n        start = now + 30\n        idle_timer = 15\n        extra = {'delete_idle_time': idle_timer}\n        rid = self.submit_reservation(user=TEST_USER,\n                                      select='1:ncpus=1',\n                                      start=start,\n                                      end=now + 3600,\n                                      extra_attrs=extra)\n\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, exp_attr, id=rid)\n\n        rid_q = rid.split('.', 1)[0]\n\n        a = {'Resource_List.select': '1:ncpus=1', ATTR_q: rid_q}\n        j = Job(attrs=a)\n        
jid = self.server.submit(j)\n\n        self.logger.info('Sleeping until reservation starts')\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        offset = start - int(time.time())\n        self.server.expect(RESV, exp_attr, id=rid, offset=offset)\n\n        self.server.delete(jid)\n\n        self.logger.info('Sleeping until resv idle timer fires')\n        self.server.expect(RESV, 'queue', op=UNSET, id=rid,\n                           offset=idle_timer)\n\n    def test_delete_idle_resv_job_standing(self):\n        \"\"\"\n        Test that an idle standing reservation is properly deleted after its\n        only job finishes\n        \"\"\"\n        now = int(time.time())\n        start = now + 30\n        idle_timer = 15\n        extra = {'delete_idle_time': idle_timer}\n        rid = self.submit_reservation(\n            user=TEST_USER, select='1:ncpus=1', rrule='freq=DAILY;COUNT=3',\n            start=start, end=start + 1800, extra_attrs=extra)\n\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, exp_attr, id=rid)\n\n        rid_q = rid.split('.', 1)[0]\n\n        a = {'Resource_List.select': '1:ncpus=1', ATTR_q: rid_q}\n        j = Job(attrs=a)\n        j.set_sleep_time(5)\n        jid = self.server.submit(j)\n\n        self.logger.info('Sleeping until reservation starts')\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        offset = start - int(time.time())\n        self.server.expect(RESV, exp_attr, id=rid, offset=offset)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        strf_str = '%a %b %d %T %Y'\n        start_str = time.strftime(strf_str, time.localtime(start + 86400))\n\n        self.logger.info('Sleeping until resv idle timer fires')\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\"),\n                    'reserve_start': start_str}\n        self.server.expect(RESV, exp_attr, id=rid, offset=idle_timer + 
5)\n\n    def test_asap_delete_idle_resv_set(self):\n        \"\"\"\n        Test that an ASAP reservation gets a default 10m idle timer if not set\n        or keeps its idle timer if it is set\n        \"\"\"\n        ncpus = self.server.status(NODE)[0]['resources_available.ncpus']\n\n        a = {'Resource_List.select': '1:ncpus=' + ncpus,\n             'Resource_List.walltime': 3600}\n\n        vnode_val = None\n        if self.mom.is_cpuset_mom():\n            vnode_val = 'vnode=' + self.server.status(NODE)[1]['id']\n            ncpus = self.server.status(NODE)[1]['resources_available.ncpus']\n            a['Resource_List.select'] = vnode_val + \":ncpus=\" + ncpus\n\n        j1 = Job(attrs=a)\n        jid1 = self.server.submit(j1)\n\n        j2 = Job(attrs=a)\n        jid2 = self.server.submit(j2)\n\n        j3 = Job(attrs=a)\n        jid3 = self.server.submit(j3)\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid3)\n\n        rid = self.submit_asap_reservation(TEST_USER, jid2)\n        self.server.expect(RESV, {'delete_idle_time': '10:00'}, id=rid)\n\n        extra_attrs = {'delete_idle_time': '5:00'}\n        rid = self.submit_asap_reservation(TEST_USER, jid3, extra_attrs)\n        self.server.expect(RESV, {'delete_idle_time': '5:00'}, id=rid)\n\n    def common_config(self):\n        \"\"\"\n        This function contains common steps for test\n        \"test_ASAP_resv_with_multivnode_job\" and\n        \"test_standing_resv_with_multivnode_job_array\"\n        \"\"\"\n        vn_attrs = {ATTR_rescavail + '.ncpus': 4}\n        self.mom.create_vnodes(vn_attrs, 2,\n                               fname=\"vnodedef1\")\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'job_history_enable': 'True'})\n\n    def test_ASAP_resv_with_multivnode_job(self):\n        \"\"\"\n        Test 2 multivnode jobs 
converted to ASAP resvs\n        having the same start time run as per available resources and\n        don't crash PBS daemons on completion of the reservations.\n        \"\"\"\n        self.common_config()\n        # Submit job such that it consumes all the resources\n        # on both vnodes\n        attrs = {'Resource_List.select': '2:ncpus=4',\n                 'Resource_List.walltime': '10',\n                 'Resource_List.place': 'vscatter'}\n        j = Job(PBSROOT_USER)\n        j.set_sleep_time(10)\n        j.set_attributes(attrs)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        # Submit 2 jobs and verify that both jobs are in Q state\n        attrs = {'Resource_List.select': '2:ncpus=2',\n                 'Resource_List.walltime': '10',\n                 'Resource_List.place': 'vscatter'}\n        j1 = Job(PBSROOT_USER)\n        j1.set_sleep_time(10)\n        j1.set_attributes(attrs)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid1)\n        j2 = Job(PBSROOT_USER)\n        j2.set_sleep_time(10)\n        j2.set_attributes(attrs)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n        # Convert the 2 jobs into ASAP reservations\n        now = time.time()\n        rid1 = self.submit_asap_reservation(PBSROOT_USER, jid1)\n        rid1_q = rid1.split('.')[0]\n        rid2 = self.submit_asap_reservation(PBSROOT_USER, jid2)\n        rid2_q = rid2.split('.')[0]\n        # Check that both reservations start running\n        a = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        self.server.expect(RESV, a, id=rid1, offset=10)\n        self.server.expect(RESV, a, id=rid2)\n        # Wait for reservations to end\n        resv_queue = [rid1_q, rid2_q]\n        for queue in resv_queue:\n            msg = \"Que;\" + queue + \";deleted at request of pbs_server@\"\n            self.server.log_match(msg, 
starttime=now, interval=10)\n        # Verify all the jobs are deleted once resv ends\n        jids = [jid1, jid2]\n        for job in jids:\n            self.server.expect(JOB, 'queue', op=UNSET, id=job)\n        exp_attrib = {'job_state': 'F'}\n        for jid in jids:\n            self.server.expect(JOB, exp_attrib, id=jid, extend='x')\n        # Verify all the PBS daemons are up and running upon resv completion\n        self.server.isUp()\n        self.mom.isUp()\n        self.scheduler.isUp()\n\n    def test_standing_resv_with_multivnode_job_array(self):\n        \"\"\"\n        Test multivnode job array with standing reservation. Also\n        verify that subjobs with walltime exceeding the resv duration\n        are deleted once reservation ends\n        \"\"\"\n        self.common_config()\n\n        start = int(time.time()) + 10\n        end = int(time.time()) + 61\n        rid = self.submit_reservation(user=PBSROOT_USER,\n                                      select='2:ncpus=2',\n                                      place='vscatter',\n                                      rrule='FREQ=MINUTELY;COUNT=2',\n                                      start=start,\n                                      end=end)\n        exp_attr = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, exp_attr, id=rid)\n        resv_queue = rid.split(\".\")[0]\n        # Submit job requesting more walltime than the resv duration\n        attrs = {ATTR_q: resv_queue, ATTR_J: '1-5',\n                 'Resource_List.select': '2:ncpus=1',\n                 'Resource_List.place': 'vscatter'}\n        j = Job(PBSROOT_USER)\n        j.set_attributes(attrs)\n        jid = self.server.submit(j)\n        subjid = []\n        for i in range(1, 6):\n            subjid.append(j.create_subjob_id(jid, i))\n        exp_attrib = {'job_state': 'Q',\n                      'comment': (MATCH_RE, 'Queue not started')}\n        self.server.expect(JOB, exp_attrib, 
id=subjid[0])\n        # Wait for first instance of standing resv to start\n        a = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, a, id=rid, offset=10)\n        self.server.expect(JOB, {'job_state': 'B'}, id=jid)\n        self.server.expect(JOB, {'job_state=R': 2}, count=True,\n                           extend='t', id=jid)\n        # Wait for second instance of standing resv to start\n        a = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5'), 'reserve_index': 2}\n        self.server.expect(RESV, a, offset=60, id=rid)\n        # Verify running subjobs were terminated after the first\n        # instance of standing resv\n        jobs = [subjid[0], subjid[1]]\n        attrib = {'job_state': 'X', 'substate': '93'}\n        for jobid in jobs:\n            self.server.expect(JOB, attrib, id=jobid)\n        self.server.expect(JOB, {'job_state=R': 2}, count=True,\n                           extend='t', id=jid)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=subjid[4])\n        # Wait for reservation to end\n        self.server.log_match(resv_queue + \";deleted at request of pbs_server\",\n                              id=rid, interval=10)\n        # Verify all the jobs are deleted once resv ends\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid)\n        self.server.expect(JOB, {'job_state=F': 6}, count=True,\n                           extend='xt', id=jid)\n\n    def test_standing_resv_resc_used(self):\n        \"\"\"\n        Test that resources are released from the server when a\n        standing reservation's occurrence finishes\n        \"\"\"\n\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 0})\n        now = int(time.time())\n\n        # submitting 25 seconds from now to allow some of the older testbed\n        # systems time to process (discovered empirically)\n        rid = self.submit_reservation(user=TEST_USER,\n                                      select='1:ncpus=1',\n                
                      rrule='FREQ=MINUTELY;COUNT=2',\n                                      start=now + 25,\n                                      end=now + 35)\n\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, a, id=rid)\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 0})\n\n        offset = now + 25 - int(time.time())\n        a = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, a, id=rid, offset=offset, interval=1)\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 1})\n\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, a, id=rid, offset=10, interval=1)\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 0})\n\n        offset = now + 85 - int(time.time())\n        a = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, a, id=rid, offset=offset, interval=1)\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 1})\n\n        self.server.expect(RESV, 'queue', op=UNSET, id=rid, offset=10)\n        self.server.expect(SERVER, {'resources_assigned.ncpus': 0})\n\n    def test_server_recover_resv_queue(self):\n        \"\"\"\n        Test that PBS server can recover a reservation queue after a\n        restart\n        \"\"\"\n\n        a = {'resources_available.ncpus': 1}\n        self.mom.create_vnodes(a, num=2)\n        now = int(time.time())\n        rid = self.submit_reservation(user=TEST_USER, select='1:ncpus=1',\n                                      start=now + 5, end=now + 300)\n\n        a = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, a, id=rid)\n\n        self.server.restart()\n        self.server.expect(RESV, a, id=rid)\n\n        resv_queue = rid.split('.')[0]\n        a = {'Resource_List.select': '1:ncpus=1', ATTR_queue: resv_queue}\n        J = Job(attrs=a)\n        jid = self.server.submit(J)\n\n        
self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n    def test_resv_job_hard_walltime(self):\n        \"\"\"\n        Test that a job with a hard walltime will not conflict with a\n        reservation if the hard walltime is less than the time remaining\n        before the reservation starts.\n        \"\"\"\n        a = {'resources_available.ncpus': 4}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        now = int(time.time())\n\n        rid = self.submit_reservation(user=TEST_USER,\n                                      select='1:ncpus=4',\n                                      start=now + 65,\n                                      end=now + 240)\n        self.server.expect(RESV,\n                           {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')},\n                           id=rid)\n        a = {'Resource_List.ncpus': 4,\n             'Resource_List.walltime': 50}\n        J = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(J)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n    def test_resv_reconfirm_holding_partial_nodes(self):\n        \"\"\"\n        Test that the scheduler is able to reconfirm a reservation when\n        only some of the nodes the reservation was running on go down.\n        Also make sure it hangs on to the node that did not go down.\n        \"\"\"\n        a = {'reserve_retry_time': 5}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {'resources_available.ncpus': 2}\n        self.mom.create_vnodes(a, num=3)\n        vn_list = [\"%s[%d]\" % (self.mom.shortname, i) for i in range(3)]\n\n        now = int(time.time())\n        sel = '1:ncpus=2+1:ncpus=1'\n        rid = self.submit_reservation(user=TEST_USER, select=sel,\n                                      start=now + 5, end=now + 300)\n\n        a = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, a, id=rid)\n\n        self.server.status(RESV, 'resv_nodes', id=rid)\n        resv_node_list = 
self.server.reservations[rid].get_vnodes()\n        resv_node = resv_node_list[0]\n        resv_node2 = resv_node_list[1]\n        vn = [i for i in vn_list if i not in resv_node_list]\n\n        a = {'scheduling': 'False'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        self.server.manager(MGR_CMD_SET, NODE, {'state': 'offline'},\n                            id=resv_node)\n        self.server.expect(RESV, {'reserve_substate': 10}, id=rid)\n\n        a = {'scheduling': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        solution = '(' + vn[0] + ':ncpus=2)+(' + resv_node2 + ':ncpus=1)'\n        a = {'reserve_substate': '5'}\n        self.server.expect(RESV, a, id=rid)\n        self.server.status(RESV)\n        rnodes = self.server.reservations[rid].get_vnodes()\n        self.assertIn(vn[0], rnodes, \"Wrong node assigned to resv\")\n        self.assertIn(resv_node2, rnodes, \"Wrong node assigned to resv\")\n\n    def test_standing_resv_with_start_in_past(self):\n        \"\"\"\n        Test that PBS accepts standing reservations with its start time in the\n        past and end time in future. 
Check that PBS treats this kind of\n        reservation as a reservation for the next day.\n        \"\"\"\n\n        now = int(time.time())\n\n        # we cannot use self.server.submit to submit the reservation\n        # because we don't want to specify date in start and end options\n        start = [\" -R \" + time.strftime('%H%M', time.localtime(now - 3600))]\n        end = [\" -E \" + time.strftime('%H%M', time.localtime(now + 3600))]\n\n        runcmd = [os.path.join(self.server.pbs_conf['PBS_EXEC'], 'bin',\n                               'pbs_rsub')]\n        tz = ['PBS_TZID=' + self.get_tz()]\n        rule = [\"-r 'freq=WEEKLY;BYDAY=SU;COUNT=3'\"]\n\n        runcmd = tz + runcmd + start + end + rule\n        ret = self.du.run_cmd(self.server.hostname, runcmd, as_script=True)\n        self.assertEqual(ret['rc'], 0)\n        rid = ret['out'][0].split()[0]\n\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, a, id=rid)\n        d = datetime.datetime.today()\n\n        # weekday() returns the weekday's index. 
Monday to Sunday (0 to 6)\n        # We calculate how far today is from Sunday and move\n        # the day ahead.\n        n = d.weekday()\n        # If today is Sunday, move 7 days ahead, else move \"6 - weekday()\"\n        delta = (6 - n) if (6 - n > 0) else 7\n        d += datetime.timedelta(days=delta)\n        sunday = d.strftime('%a %b %d')\n        start = time.strftime('%H:%M', time.localtime(now - 3600))\n        sunday = sunday + \" \" + start\n\n        stat = self.server.status(RESV, 'reserve_start', id=rid)\n        self.assertIn(sunday, stat[0]['reserve_start'])\n\n    def qmove_job_to_reserv(self, Res_Status, Res_substate, start, end):\n        \"\"\"\n        Function to qmove a job into a reservation and verify the job\n        state in the reservation\n        \"\"\"\n        a = {'resources_available.ncpus': 2}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n        jid1 = self.submit_job(job_running=True)\n\n        # Submit a standing reservation to occur every other minute for a\n        # total count of 2\n        rid = self.submit_reservation(user=TEST_USER, select='1:ncpus=1',\n                                      rrule='FREQ=MINUTELY;INTERVAL=2;COUNT=2',\n                                      start=start, end=end)\n        rid_q = rid.split('.')[0]\n        exp_attr = {'reserve_state': Res_Status,\n                    'reserve_substate': Res_substate}\n        self.server.expect(RESV, exp_attr, id=rid, offset=5)\n        self.server.holdjob(jid1[0])\n        # qrerun the job\n        self.server.rerunjob(jobid=jid1[0])\n        self.server.expect(JOB, {'job_state': 'H'}, id=jid1[0])\n        # qmove the job to the reservation queue\n        self.server.movejob(jid1[0], rid_q)\n        self.server.expect(JOB, {'job_state': 'H', 'queue': rid_q},\n                           id=jid1[0])\n        self.server.rlsjob(jid1[0], 'u')\n        if Res_Status == 'RESV_CONFIRMED':\n            self.server.expect(JOB, {'job_state': 
'Q'}, id=jid1[0])\n            self.logger.info('Job %s is in Q as expected' % jid1[0])\n        if Res_Status == 'RESV_RUNNING':\n            self.server.expect(JOB, {'job_state': 'R'}, id=jid1[0])\n            self.logger.info('Job %s is in R as expected' % jid1[0])\n        jid2 = self.submit_job(job_running=True)\n        self.server.delete([rid, jid2[0]], wait=True)\n\n    def test_qmove_job_into_standing_reservation(self):\n        \"\"\"\n        Test qmove of a job into a standing reservation\n        \"\"\"\n        # Test qmove of a job to a confirmed standing reservation instance\n        self.qmove_job_to_reserv(\"RESV_CONFIRMED\", 2, time.time() + 15,\n                                 time.time() + 60)\n\n        # Test qmove of a job to a running standing reservation instance\n        self.qmove_job_to_reserv(\"RESV_RUNNING\", 5, time.time() + 10,\n                                 time.time() + 60)\n\n    def test_shared_exclusive_job_not_in_same_rsv_vnode(self):\n        \"\"\"\n        Test to verify that a user cannot submit an exclusive placement job\n        in a free placement reservation; job submission is denied\n        because the placement spec does not match.\n        Also verify that shared and exclusive jobs in a reservation do\n        not overlap on the same vnode.\n        \"\"\"\n        vn_attrs = {ATTR_rescavail + '.ncpus': 4,\n                    'sharing': 'default_excl'}\n        self.mom.create_vnodes(vn_attrs, 6)\n\n        # Submit an advance reservation (R1)\n        rid = self.submit_reservation(select='3:ncpus=4', user=TEST_USER,\n                                      start=time.time() + 10,\n                                      end=time.time() + 1000)\n        rid_q = rid.split('.')[0]\n        a = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, a, id=rid)\n        a = {'Resource_List.select': '1:ncpus=2',\n             'Resource_List.place': 'shared',\n             'queue': rid_q}\n        jid = 
self.submit_job(set_attrib=a, job_running=True)\n        vn = self.mom.shortname\n        self.assertEqual(jid[1], '(' + vn + '[0]:ncpus=2)')\n        a = {'Resource_List.select': '1:ncpus=8',\n             'Resource_List.place': 'excl',\n             'queue': rid_q}\n        _msg = \"qsub: job and reservation have conflicting specification \"\n        _msg += \"Resource_List.place\"\n        try:\n            self.submit_job(set_attrib=a)\n        except PbsSubmitError as e:\n            self.assertEqual(\n                e.msg[0], _msg, msg=\"Did not get expected qsub err message\")\n            self.logger.info(\"Got expected qsub err message as %s\", e.msg[0])\n        else:\n            self.fail(\"Job got submitted\")\n        self.server.delete([jid[0], rid], wait=True)\n\n        # Repeat the above test with a reservation having place=excl\n        # Submit an advance reservation (R2)\n        rid = self.submit_reservation(select='3:ncpus=4', user=TEST_USER,\n                                      start=time.time() + 10,\n                                      end=time.time() + 1000,\n                                      place='excl')\n        rid_q = rid.split('.')[0]\n        a = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, a, id=rid)\n        a = {'Resource_List.select': '1:ncpus=2',\n             'Resource_List.place': 'shared',\n             'queue': rid_q}\n        jid = self.submit_job(set_attrib=a, job_running=True)\n        job1_node = jid[1]\n        self.assertEqual(jid[1], '(' + vn + '[0]:ncpus=2)')\n        a = {'Resource_List.select': '1:ncpus=8',\n             'Resource_List.place': 'excl',\n             'queue': rid_q}\n        jid2 = self.submit_job(set_attrib=a, job_running=True)\n        job2_node = jid2[1]\n        errmsg = 'job2_node contains job1_node value'\n        self.assertEqual(\n            jid2[1], '(' + vn + '[1]:ncpus=4+' + vn + '[2]:ncpus=4)')\n        self.assertNotIn(job1_node, job2_node, 
errmsg)\n\n    def test_clashing_reservations(self):\n        \"\"\"\n        Test that when a standing reservation and an advance reservation\n        are submitted for overlapping times on the same set of\n        resources, the one that is submitted first wins and the second\n        is rejected.\n        \"\"\"\n\n        self.common_config()\n        a = {'scheduling': 'False'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        start1 = int(time.time()) + 3600\n        end1 = int(time.time()) + 7200\n        start2 = int(time.time()) + 1800\n        end2 = int(time.time()) + 5400\n\n        srid = self.submit_reservation(user=TEST_USER,\n                                       select='2:ncpus=4',\n                                       rrule='FREQ=DAILY;COUNT=2',\n                                       start=start1,\n                                       end=end1)\n\n        arid = self.submit_reservation(user=TEST_USER,\n                                       select='2:ncpus=4',\n                                       start=start2,\n                                       end=end2)\n        self.scheduler.run_scheduling_cycle()\n        self.server.expect(RESV, {'reserve_state':\n                                  (MATCH_RE, 'RESV_CONFIRMED|2')},\n                           id=srid, max_attempts=1)\n        self.server.log_match(arid + \";Reservation denied\", id=arid,\n                              max_attempts=1)\n"
  },
  {
    "path": "test/tests/functional/pbs_resource_multichunk.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestResourceMultiChunk(TestFunctional):\n\n    \"\"\"\n    Test suite to test value of custom resource\n    in a multi chunk job request\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        attr = {}\n        attr['type'] = 'float'\n        attr['flag'] = 'nh'\n        r = 'foo_float'\n        self.server.manager(MGR_CMD_CREATE, RSC, attr, id=r)\n        self.scheduler.add_resource('foo_float')\n        a = {'resources_available.foo_float': 4.2,\n             'resources_available.ncpus': 2}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n\n    def test_resource_float_type(self):\n        \"\"\"\n        Test to check the value of custom resource\n        in Resource_List.<custom resc> matches the value\n        requested by the multi-chunk job\n        \"\"\"\n        a = {'Resource_List.select': '2:ncpus=1:foo_float=0.8',\n             'Resource_List.place': 'shared'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.expect(JOB, {'Resource_List.foo_float': 1.6}, id=jid)\n"
  },
  {
    "path": "test/tests/functional/pbs_resource_unset.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestResourceUnset(TestFunctional):\n    \"\"\"\n    Test that resources behave properly when unset\n    \"\"\"\n    def setUp(self):\n        TestFunctional.setUp(self)\n\n        resources = {\n            'tbool': {'type': 'boolean'},\n            'tstr': {'type': 'string'},\n            'tlong': {'type': 'long'},\n            'thbool': {'type': 'boolean', 'flag': 'h'},\n            'thstr': {'type': 'string', 'flag': 'h'},\n            'thlong':  {'type': 'long', 'flag': 'h'}\n        }\n        res_str = ''\n        for r in resources:\n            self.server.manager(MGR_CMD_CREATE, RSC, resources[r], id=r)\n            res_str += r + ','\n\n        res_str = res_str[:-1]\n        self.scheduler.add_resource(res_str)\n\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {'resources_available.ncpus': 3},\n                            id=self.mom.shortname)\n\n    def test_unset_server_resources(self):\n        \"\"\"\n        test server resources are ignored if unset and the job runs\n        \"\"\"\n\n        J1 = Job(attrs={'Resource_List.tbool': True})\n        jid1 = self.server.submit(J1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n\n        J2 = Job(attrs={'Resource_List.tstr': \"foo\"})\n        jid2 = self.server.submit(J2)\n        self.server.expect(JOB, 
{'job_state': 'R'}, id=jid2)\n\n        J3 = Job(attrs={'Resource_List.tlong': 1})\n        jid3 = self.server.submit(J3)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n\n    def test_unset_node_resources(self):\n        \"\"\"\n        test if node level resources are not matched if unset.\n        The job does not run.\n        \"\"\"\n        J1 = Job(attrs={'Resource_List.thbool': True})\n        jid1 = self.server.submit(J1)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid1)\n\n        J2 = Job(attrs={'Resource_List.thstr': \"foo\"})\n        jid2 = self.server.submit(J2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n\n        J3 = Job(attrs={'Resource_List.thlong': 1})\n        jid3 = self.server.submit(J3)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid3)\n"
  },
  {
    "path": "test/tests/functional/pbs_resource_usage_log.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestResourceUsageLog(TestFunctional):\n\n    \"\"\"\n    Test various scenarios in which resource usage is logged\n    in the accounting logs\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        attr1 = {'job_history_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, attr1)\n        attr2 = {'resources_available.mem': '200gb'}\n        self.server.manager(MGR_CMD_SET, NODE, attr2, id=self.mom.shortname)\n\n    def cleanup_eatcpu(self, scripts):\n        for script in scripts:\n            cmd = 'pgrep -f ' + script\n            ret = self.du.run_cmd(cmd=cmd, level=logging.DEBUG)\n            for pid in ret['out']:\n                cmd = 'kill -9 ' + pid\n                ret = self.du.run_cmd(\n                    cmd=cmd, level=logging.DEBUG, runas=TEST_USER)\n\n    def test_acclog_for_job_states(self):\n        \"\"\"\n        Check accounting logs when a job completes successfully and when\n        a job is deleted in Q or R state\n        \"\"\"\n        a = {'Resource_List.select': '1:ncpus=1:mem=200gb'}\n        j1 = Job(TEST_USER, a)\n        j1.create_eatcpu_job(40, self.mom.shortname)\n        jid1 = self.server.submit(j1)\n\n        j2 = Job(TEST_USER, a)\n        j2.create_eatcpu_job(30, self.mom.shortname)\n        jid2 = self.server.submit(j2)\n\n        
self.server.expect(JOB, {'job_state': 'R'}, jid1)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n\n        self.server.delete(jid2, wait=True)\n        self.server.expect(JOB, {'job_state': 'F'},\n                           offset=40, extend='x', id=jid1)\n\n        j3 = Job(TEST_USER, a)\n        j3.create_eatcpu_job(hostname=self.mom.shortname)\n        jid3 = self.server.submit(j3)\n        self.server.expect(JOB, {'job_state': 'R'}, jid3)\n        self.server.delete(jid3, wait=True)\n\n        # No R record ; Only E record for job1 which finished\n        self.server.accounting_match(\n            msg='E;' + jid1 +\n            '.*Exit_status=0.*resources_used.*run_count=1', id=jid1,\n            regexp=True)\n        self.server.accounting_match(\n            msg='R;' + jid1, id=jid1, existence=False,\n            max_attempts=10)\n\n        # No R record, No E record for job2 which is in 'Q' state\n        self.server.accounting_match(\n            msg='R;' + jid2, id=jid2, existence=False,\n            max_attempts=10)\n        self.server.accounting_match(\n            msg='E;' + jid2, id=jid2, existence=False,\n            max_attempts=10)\n\n        # No R record ; Only E record for job3 which was deleted\n        # when in 'R' state\n        self.server.accounting_match(\n            msg='R;' + jid3, id=jid3, existence=False,\n            max_attempts=10)\n        self.server.accounting_match(\n            msg='E;' + jid3 +\n            '.*Exit_status=271.*resources_used.*run_count=1', id=jid3,\n            regexp=True)\n\n    def test_acclog_mom_down(self):\n        \"\"\"\n        Check accounting logs when node is down and MoM is restarted\n        \"\"\"\n        a = {ATTR_nodefailrq: 15}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {'resources_available.ncpus': 4}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        scripts = []\n        # Submit a job\n        a = 
{'Resource_List.select': '1:ncpus=1:mem=20gb'}\n        j = Job(TEST_USER, a)\n        scripts.append(j.create_eatcpu_job(hostname=self.mom.shortname))\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, jid)\n        # Submit a job array\n        ja = Job(TEST_USER, attrs={\n            ATTR_J: '1-2',\n            'Resource_List.select': 'ncpus=1:mem=20gb'}\n        )\n        scripts.append(ja.create_eatcpu_job(hostname=self.mom.shortname))\n        jid_a = self.server.submit(ja)\n\n        subjid1 = j.create_subjob_id(jid_a, 1)\n        subjid2 = j.create_subjob_id(jid_a, 2)\n\n        self.server.expect(JOB, {'job_state': 'R'}, subjid1)\n        self.server.expect(JOB, {'job_state': 'R'}, subjid2)\n\n        self.assertTrue(self.server.isUp())\n        self.assertTrue(self.mom.isUp())\n\n        # kill -9 mom\n        self.mom.signal('-KILL')\n\n        # Verify that node is reported to be down.\n        self.server.expect(NODE, {ATTR_NODE_state: 'down'},\n                           id=self.mom.shortname, offset=15)\n\n        self.server.expect(JOB, {'job_state': 'Q'}, jid)\n        self.server.expect(JOB, {'job_state': 'Q'}, subjid1)\n        self.server.expect(JOB, {'job_state': 'Q'}, subjid2)\n\n        self.server.tracejob_match(\n            msg='Job requeued, execution node .* down', id=jid,\n            regexp=True)\n        self.server.tracejob_match(\n            msg='Job requeued, execution node .* down', id=subjid1,\n            regexp=True)\n        self.server.tracejob_match(\n            msg='Job requeued, execution node .* down', id=subjid2,\n            regexp=True)\n\n        # now start mom\n        self.mom.start()\n        self.assertTrue(self.mom.isUp())\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.expect(JOB, {'job_state': 'R'}, jid)\n\n        self.server.delete(jid, wait=True)\n        self.server.delete(jid_a, wait=True)\n\n        # job1 has R and 
E record\n        self.server.accounting_match(\n            msg='R;' + jid + '.*Exit_status=0.*resources_used.*run_count=1',\n            id=jid, regexp=True, allmatch=True)\n        self.server.accounting_match(\n            msg='E;' + jid +\n            '.*Exit_status=271.*resources_used.*run_count=2',\n            id=jid, regexp=True)\n\n        # job array's subjobs have a R record and\n        # the jobarray has E record with run_count=0\n        self.server.accounting_match(\n            msg='R;' + re.escape(subjid1) +\n            '.*Exit_status=0.*resources_used.*run_count=1',\n            id=subjid1, regexp=True, allmatch=True)\n        self.server.accounting_match(\n            msg='R;' + re.escape(subjid2) +\n            '.*Exit_status=0.*resources_used.*run_count=1',\n            id=subjid2, regexp=True, allmatch=True)\n        self.server.accounting_match(\n            msg='E;' + re.escape(jid_a) +\n            '.*Exit_status=1.*run_count=0', id=jid_a, regexp=True)\n        self.cleanup_eatcpu(scripts)\n\n    def test_acclog_job_multiple_qrerun(self):\n        \"\"\"\n        Check for R record in accounting logs when job is\n        requeued using qrerun command\n        \"\"\"\n        a = {'resources_available.ncpus': 4}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n\n        # Submit job\n        a = {'Resource_List.select': '1:ncpus=1:mem=20gb'}\n        j = Job(TEST_USER, a)\n        j.create_eatcpu_job(hostname=self.mom.shortname)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, jid)\n\n        # Submit job array\n        ja = Job(TEST_USER, attrs={\n            ATTR_J: '1-2',\n            'Resource_List.select': 'ncpus=1:mem=20gb'}\n        )\n        ja.create_eatcpu_job(hostname=self.mom.shortname)\n        jid_a = self.server.submit(ja)\n        subjid1 = j.create_subjob_id(jid_a, 1)\n        subjid2 = j.create_subjob_id(jid_a, 2)\n        self.server.expect(JOB, 
{'job_state': 'R'}, subjid1)\n        self.server.expect(JOB, {'job_state': 'R'}, subjid2)\n\n        # Turn scheduling off before rerun\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        # Rerun jobs first time\n        self.server.rerunjob(jobid=jid)\n        self.server.rerunjob(jobid=jid_a)\n\n        self.server.expect(JOB, {'job_state': 'Q'}, jid)\n        self.server.expect(JOB, {'job_state': 'Q'}, subjid1)\n        self.server.expect(JOB, {'job_state': 'Q'}, subjid2)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.server.expect(JOB, {'job_state': 'R'}, jid)\n        self.server.expect(JOB, {'job_state': 'R'}, subjid1)\n        self.server.expect(JOB, {'job_state': 'R'}, subjid2)\n\n        # Rerun jobs second time\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        self.server.rerunjob(jobid=jid)\n        self.server.rerunjob(jobid=jid_a)\n\n        self.server.expect(JOB, {'job_state': 'Q'}, jid)\n        self.server.expect(JOB, {'job_state': 'Q'}, subjid1)\n        self.server.expect(JOB, {'job_state': 'Q'}, subjid2)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.expect(JOB, {'job_state': 'R'}, jid)\n        self.server.expect(JOB, {'job_state': 'R'}, subjid1)\n        self.server.expect(JOB, {'job_state': 'R'}, subjid2)\n\n        self.server.delete(jid, wait=True)\n        self.server.delete(jid_a, wait=True)\n\n        # Check for R records for every qrerun\n        self.server.accounting_match(\n            msg='R;' + jid +\n            '.*Exit_status=-11.*resources_used.*run_count=1', id=jid,\n            regexp=True)\n        self.server.accounting_match(\n            msg='R;' + jid +\n            '.*Exit_status=-11.*resources_used.*run_count=2', id=jid,\n            regexp=True)\n        self.server.accounting_match(\n            msg='E;' + jid +\n            
'.*Exit_status=271.*resources_used.*run_count=3', id=jid,\n            regexp=True)\n\n        self.server.accounting_match(\n            msg='R;' + re.escape(subjid1) +\n            '.*Exit_status=-11.*resources_used.*run_count=1',\n            id=subjid1, regexp=True, allmatch=True)\n        self.server.accounting_match(\n            msg='R;' + re.escape(subjid2) +\n            '.*Exit_status=-11.*resources_used.*run_count=1',\n            id=subjid2, regexp=True, allmatch=True)\n        self.server.accounting_match(\n            msg='E;' + re.escape(jid_a) +\n            '.*Exit_status=1.*run_count=0',\n            id=jid_a, regexp=True)\n\n    def test_acclog_force_requeue(self):\n        \"\"\"\n        Check for resource usage when job is force requeued\n        \"\"\"\n        scripts = []\n        a = {'Resource_List.select': '1:ncpus=1:mem=200gb'}\n        j1 = Job(TEST_USER, a)\n        scripts.append(j1.create_eatcpu_job(hostname=self.mom.shortname))\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R'}, jid1)\n\n        # kill -9 mom\n        self.mom.signal('-KILL')\n\n        # Verify that nodes are reported to be down.\n        self.server.expect(NODE, {ATTR_NODE_state: (MATCH_RE, 'down')},\n                           id=self.mom.shortname, offset=15)\n        self.server.rerunjob(jid1, extend='force')\n\n        # Look for R record as job was force requeued\n        self.server.accounting_match(\n            msg='.*R;' +\n            jid1 +\n            '.*Exit_status=-11.*resources_used.*run_count=1',\n            id=jid1,\n            regexp=True)\n        self.cleanup_eatcpu(scripts)\n\n    def test_acclog_services_restart(self):\n        \"\"\"\n        Check for resource usage in accounting logs after\n        PBS services are restarted\n        \"\"\"\n        a = {'Resource_List.select': '1:ncpus=1:mem=200gb'}\n        j1 = Job(TEST_USER, a)\n        j1.create_eatcpu_job(60, self.mom.shortname)\n        
jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R'}, jid1)\n\n        # Restart PBS services\n        PBSInitServices().restart()\n        if self.server.shortname != self.mom.shortname:\n            self.mom.restart()\n\n        self.assertTrue(self.server.isUp())\n        self.assertTrue(self.mom.isUp())\n\n        # Sleep so accounting logs get updated\n        self.logger.info(\"Sleep for 40s so accounting log is updated\")\n        time.sleep(40)\n\n        # Check for R record\n        self.server.accounting_match(\n            msg='R;' + jid1 + '.*resources_used.*run_count=1', id=jid1,\n            regexp=True)\n\n    def test_acclog_preempt_order(self):\n        \"\"\"\n        Check for R record when setting preempt_order to \"R\" and requeuing job\n        \"\"\"\n        # Create a high priority queue\n        a = {'queue_type': 'e', 'started': 't',\n             'enabled': 't', 'priority': '180'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id=\"highp\")\n        self.server.manager(MGR_CMD_SET, SCHED, {'preempt_order': 'R'})\n\n        a = {'Resource_List.select': '1:ncpus=1:mem=200gb'}\n        j1 = Job(TEST_USER, a)\n        j1.create_eatcpu_job(30, self.mom.shortname)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R'}, jid1)\n\n        a = {'Resource_List.select': '1:ncpus=1:mem=200gb', 'queue': 'highp'}\n        j2 = Job(TEST_USER, a)\n        j2.create_eatcpu_job(60, self.mom.shortname)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, jid2)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, jid1)\n\n        self.server.accounting_match(\n            msg='.*R;' + jid1 +\n            '.*Exit_status=-11.*resources_used.*run_count=1',\n            id=jid1, regexp=True)\n"
  },
  {
    "path": "test/tests/functional/pbs_resv_begin_hook.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2020 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nimport textwrap\nfrom tests.functional import *\n\n\ndef get_hook_body_reverse_node_state():\n    hook_body = \"\"\"\n    import pbs\n    e = pbs.event()\n    pbs.logmsg(pbs.LOG_DEBUG, \"pbs.__file__:\" + pbs.__file__)\n    # this is backwards as it's a reverse lookup.\n    for value, key in pbs.REVERSE_RESV_STATE.items():\n        pbs.logmsg(pbs.LOG_DEBUG, \"key:%s value:%s\" % (key, value))\n    e.accept()\n    \"\"\"\n    hook_body = textwrap.dedent(hook_body)\n    return hook_body\n\n\nclass TestResvBeginHook(TestFunctional):\n    \"\"\"\n    Tests to verify the reservation begin hook for a confirmed standing/\n    advance/degraded reservation once the reservation begins.\n    \"\"\"\n\n    advance_resv_hook_script = textwrap.dedent(\"\"\"\n        import pbs\n        e=pbs.event()\n\n        pbs.logmsg(pbs.LOG_DEBUG, 'Reservation Begin Hook name - %s' %\n                   e.hook_name)\n\n        if e.type == pbs.RESV_BEGIN:\n            pbs.logmsg(pbs.LOG_DEBUG, 'Reservation ID - %s' % e.resv.resvid)\n    \"\"\")\n\n    standing_resv_hook_script = textwrap.dedent(\"\"\"\n        import pbs\n        e=pbs.event()\n\n        pbs.logmsg(pbs.LOG_DEBUG, 'Reservation Begin Hook name - %s' %\n                   e.hook_name)\n\n        if e.type == pbs.RESV_BEGIN:\n            pbs.logmsg(pbs.LOG_DEBUG, 'Reservation occurrence - %s' %\n            
e.resv.reserve_index)\n    \"\"\")\n\n    def setUp(self):\n        \"\"\"\n        Create a reservation begin hook and set the server log level.\n        \"\"\"\n        super(TestResvBeginHook, self).setUp()\n        self.hook_name = 'resvbegin_hook'\n        attrs = {'event': 'resv_begin'}\n        self.server.create_hook(self.hook_name, attrs)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 2047})\n\n    @tags('hooks')\n    def test_begin_advance_resv(self):\n        \"\"\"\n        Testcase to submit and confirm an advance reservation, wait for it\n        to begin and verify the reservation begin hook.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestResvBeginHook.advance_resv_hook_script)\n\n        offset = 10\n        duration = 30\n        rid, _, _ = self.server.submit_resv(offset, duration)\n\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, attrs, id=rid, offset=10)\n\n        # Don't need to wait.  
Let teardown clear the reservation\n        msg = 'Hook;Server@%s;Reservation ID - %s' % \\\n              (self.server.shortname, rid)\n        self.server.log_match(msg, tail=True, interval=2, max_attempts=30)\n\n    @tags('hooks')\n    def test_delete_advance_resv(self):\n        \"\"\"\n        Testcase to submit and confirm advance reservation, delete the same\n        and verify the resvbegin hook did not run.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestResvBeginHook.advance_resv_hook_script)\n\n        offset = 10\n        duration = 30\n        rid, start, _ = self.server.submit_resv(offset, duration)\n        self.server.delete(rid)\n        time.sleep(start - time.time())\n        msg = 'Hook;Server@%s;Reservation ID - %s' % \\\n              (self.server.shortname, rid)\n        self.server.log_match(msg, tail=True, interval=1, max_attempts=5,\n                              existence=False)\n\n    @tags('hooks')\n    def test_delete_degraded_resv(self):\n        \"\"\"\n        Testcase to submit and confirm an advance reservation, turn the mom\n        off, delete the degraded reservation and verify the resvbegin\n        hook did not run.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestResvBeginHook.advance_resv_hook_script)\n\n        offset = 10\n        duration = 30\n\n        rid, start, _ = self.server.submit_resv(offset, duration)\n        self.server.manager(MGR_CMD_SET, NODE, {'state': (INCR, 'offline')},\n                            id=self.mom.shortname)\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_DEGRADED|10')}\n        self.server.expect(RESV, attrs, id=rid)\n        self.server.delete(rid)\n        time.sleep(start - time.time())\n\n        msg = 'Hook;Server@%s;Reservation ID - %s' % \\\n            (self.server.shortname, rid)\n        self.server.log_match(msg, tail=True, interval=1, max_attempts=5,\n                  
            existence=False)\n\n    @tags('hooks')\n    def test_server_down_case_1(self):\n        \"\"\"\n        Testcase to submit and confirm an advance reservation, turn the server\n        off, turn the server on after the reservation would have started and\n        verify the resvbegin hook ran.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestResvBeginHook.advance_resv_hook_script)\n\n        offset = 10\n        duration = 300\n        rid, start, _ = self.server.submit_resv(offset, duration)\n\n        self.server.stop()\n        time.sleep(offset + 5)\n        self.server.start()\n        self.assertTrue(self.server.isUp())\n\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, attrs, id=rid)\n\n        msg = 'Hook;Server@%s;Reservation ID - %s' % \\\n              (self.server.shortname, rid)\n        self.server.log_match(msg, tail=True, interval=1, max_attempts=5)\n\n    @tags('hooks')\n    def test_server_down_case_2(self):\n        \"\"\"\n        Testcase to submit and confirm an advance reservation, turn the\n        server off, wait for the reservation duration to finish, turn the\n        server on and verify the resvbegin hook never ran.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestResvBeginHook.advance_resv_hook_script)\n\n        offset = 10\n        duration = 30\n        wait_time = offset + duration + 5\n        rid, _, _ = self.server.submit_resv(offset, duration)\n\n        self.server.stop()\n\n        self.logger.info('wait for %s seconds until the reservation ends' %\n                         (wait_time))\n        time.sleep(wait_time)\n\n        self.server.start()\n        self.assertTrue(self.server.isUp())\n\n        msg = 'Hook;Server@%s;Reservation ID - %s' % \\\n              (self.server.shortname, rid)\n        self.server.log_match(msg, tail=True, interval=2, 
max_attempts=2,\n                              existence=False)\n\n    @tags('hooks')\n    def test_set_attrs(self):\n        \"\"\"\n        Testcase to submit and confirm an advance reservation, delete the\n        reservation and verify the read permissions in the resvbegin hook.\n        \"\"\"\n\n        hook_script = textwrap.dedent(\"\"\"\n            import pbs\n            e=pbs.event()\n\n            pbs.logmsg(pbs.LOG_DEBUG,\n                       'Reservation begin Hook name - %s' % e.hook_name)\n\n            if e.type == pbs.RESV_BEGIN:\n                pbs.logmsg(pbs.LOG_DEBUG, 'e.resv = %s' % e.resv.__dict__)\n                e.resv.queue = 'workq'\n                pbs.logmsg(pbs.LOG_DEBUG, 'Reservation ID - %s' %\n                e.resv.queue)\n        \"\"\")\n\n        self.server.import_hook(self.hook_name, hook_script)\n\n        offset = 10\n        duration = 30\n        _, start, _ = self.server.submit_resv(offset, duration)\n\n        time.sleep(start - time.time())\n\n        msg = 'Svr;Server@%s;PBS server internal error (15011) in Error ' \\\n              'evaluating Python script, attribute '\"'queue'\"' is ' \\\n              'part of a readonly object' % self.server.shortname\n        self.server.log_match(msg, tail=True, max_attempts=30, interval=2)\n\n    @tags('hooks')\n    def test_delete_resv_after_first_occurrence(self):\n        \"\"\"\n        Testcase to submit and confirm a standing reservation for two\n        occurrences, wait for the first occurrence to begin and verify\n        the begin hook for the same, delete before the second occurrence and\n        verify the resvbegin hook for the latter didn't run.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestResvBeginHook.standing_resv_hook_script)\n\n        offset = 20\n        duration = 30\n        rid, start, _ = self.server.submit_resv(offset, duration,\n                                                
rrule='FREQ=MINUTELY;COUNT=2',\n                                                conf=self.conf)\n\n        off = start - time.time()\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, attrs, id=rid, offset=off)\n\n        msg = 'Hook;Server@%s;Reservation occurrence - 1' % \\\n              self.server.shortname\n        self.server.log_match(msg, tail=True, interval=2, max_attempts=30)\n        self.logger.info('Reservation begin hook ran for first occurrence of '\n                         'a standing reservation')\n\n        self.logger.info('delete during first occurrence')\n\n        self.server.delete(rid)\n        time.sleep(max(start + 60 - time.time(), 0))\n        msg = 'Hook;Server@%s;Reservation occurrence - 2' % \\\n              self.server.shortname\n        self.server.log_match(msg, tail=True, interval=2, max_attempts=3,\n                              existence=False)\n\n    @tags('hooks')\n    def test_begin_resv_occurrences(self):\n        \"\"\"\n        Testcase to submit and confirm a standing reservation for two\n        occurrences, wait for the first occurrence to begin and verify\n        the begin hook for the same, wait for the second occurrence to\n        start and end, verify the resvbegin hook for the latter.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestResvBeginHook.standing_resv_hook_script)\n\n        offset = 20\n        duration = 30\n        rid, start, _ = self.server.submit_resv(offset, duration,\n                                                rrule='FREQ=MINUTELY;COUNT=2',\n                                                conf=self.conf)\n        off = start - time.time()\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, attrs, id=rid, attrop=PTL_AND, offset=off)\n\n        msg = 'Hook;Server@%s;Reservation occurrence - 1' % \\\n              self.server.shortname\n        
self.server.log_match(msg, tail=True, interval=2, max_attempts=30)\n        self.logger.info('Reservation begin hook ran for first occurrence of a'\n                         ' standing reservation')\n\n        off = start + 60 - time.time()\n        self.logger.info('Wait for second occurrence')\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5'),\n                 'reserve_index': 2}\n        self.server.expect(RESV, attrs, id=rid, attrop=PTL_AND, offset=off)\n\n        msg = 'Hook;Server@%s;Reservation occurrence - 2' % \\\n              self.server.shortname\n        self.server.log_match(msg, tail=True, interval=2, max_attempts=30)\n        self.logger.info('Reservation begin hook ran for second occurrence of '\n                         'a standing reservation')\n\n    @tags('hooks')\n    def test_unconfirmed_resv_with_node(self):\n        \"\"\"\n        Testcase to set the node attributes such that the number of ncpus is 1,\n        submit and confirm a reservation on the same node, submit another\n        reservation on the same node and verify the reservation begin hook\n        as the latter one stays in unconfirmed state.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestResvBeginHook.advance_resv_hook_script)\n\n        node_attrs = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, node_attrs,\n                            id=self.mom.shortname)\n        offset = 10\n        duration = 10\n        self.server.submit_resv(offset, duration)\n        unconf_rid, unconf_start, _ = self.server.submit_resv(offset,\n                                                              duration,\n                                                              confirmed=False)\n        msg = 'Server@%s;Resv;%s;Reservation denied' % \\\n            (self.server.shortname, unconf_rid)\n        self.server.log_match(msg, tail=True, max_attempts=10)\n\n        
time.sleep(max(unconf_start - time.time(), 0))\n        msg = 'Hook;Server@%s;Reservation ID - %s' % \\\n              (self.server.shortname, unconf_rid)\n        self.server.log_match(msg, tail=True, max_attempts=5, existence=False)\n\n    @tags('hooks')\n    def test_scheduler_down(self):\n        \"\"\"\n        Testcase to turn off the scheduler and submit a reservation;\n        it remains in the unconfirmed state, and the resvbegin hook\n        shall not run at the start time.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestResvBeginHook.advance_resv_hook_script)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        offset = 10\n        duration = 30\n        rid, start, _ = self.server.submit_resv(offset, duration,\n                                                confirmed=False)\n\n        self.logger.info('wait until the reservation would begin')\n        off = start - time.time() + 10\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_UNCONFIRMED|1')}\n        self.server.expect(RESV, attrs, id=rid, offset=off)\n        msg = 'Hook;Server@%s;Reservation ID - %s' % \\\n              (self.server.shortname, rid)\n        self.server.log_match(msg, tail=True, max_attempts=5,\n                              existence=False)\n\n    # Test Reverser\n    @tags(\"hooks\")\n    def test_check_reservation_state_lookup(self):\n        \"\"\"\n        Test: check for the existence and values of the\n        pbs.REVERSE_RESV_STATE dictionary\n\n        run a hook that converts reservation state change ints into a string,\n        then search for it in the server log.\n        \"\"\"\n\n        self.add_pbs_python_path_to_sys_path()\n        import pbs\n        self.server.import_hook(self.hook_name,\n                                get_hook_body_reverse_node_state())\n        start_time = int(time.time())\n\n        duration = 30\n        offset = 10\n        rid, _, _ = 
self.server.submit_resv(offset, duration)\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, attrs, id=rid)\n\n        for value, key in pbs.REVERSE_RESV_STATE.items():\n            self.server.log_match(\"key:%s value:%s\" % (key, value),\n                                  starttime=start_time)\n\n    @tags('hooks')\n    def test_multiple_hooks(self):\n        \"\"\"\n        Define multiple hooks for the resv_begin event and make sure both\n        get run.\n\n        \"\"\"\n        test_hook_script = textwrap.dedent(\"\"\"\n        import pbs\n        e=pbs.event()\n\n        pbs.logmsg(pbs.LOG_DEBUG,\n                   'Reservation Begin Hook name - %%s' %% e.hook_name)\n\n        if e.type == pbs.RESV_BEGIN:\n            pbs.logmsg(pbs.LOG_DEBUG,\n                       'Test %d Reservation ID - %%s' %% e.resv.resvid)\n        \"\"\")\n\n        attrs = {'event': 'resv_begin'}\n        self.server.create_import_hook(\"test_hook_1\", attrs,\n                                       test_hook_script % 1)\n        self.server.create_import_hook(\"test_hook_2\", attrs,\n                                       test_hook_script % 2)\n\n        offset = 20\n        duration = 30\n        rid, _, _ = self.server.submit_resv(offset, duration)\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, attrs, id=rid, offset=offset)\n\n        msg = 'Hook;Server@%s;Test 1 Reservation ID - %s' % \\\n              (self.server.shortname, rid)\n        self.server.log_match(msg, tail=True, interval=2, max_attempts=3)\n\n        msg = 'Hook;Server@%s;Test 2 Reservation ID - %s' % \\\n              (self.server.shortname, rid)\n        self.server.log_match(msg, tail=True, interval=2, max_attempts=3)\n"
  },
  {
    "path": "test/tests/functional/pbs_resv_confirm_hook.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2020 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nimport textwrap\nfrom tests.functional import *\n\n\nclass TestResvConfirmHook(TestFunctional):\n    \"\"\"\n    Tests to verify the reservation begin hook for a confirm standing/advance/\n    degraded reservation once the reservation begins.\n    \"\"\"\n\n    advance_resv_hook_script = textwrap.dedent(\"\"\"\n        import pbs\n        e=pbs.event()\n\n        pbs.logmsg(pbs.LOG_DEBUG,\n                   'Reservation Confirm Hook name - %s' % e.hook_name)\n\n        if e.type == pbs.RESV_CONFIRM:\n            pbs.logmsg(pbs.LOG_DEBUG,\n                       'Reservation ID - %s' % e.resv.resvid)\n            pbs.logmsg(pbs.LOG_DEBUG, 'Reservation occurrence - %s' %\n                       e.resv.reserve_index)\n    \"\"\")\n\n    def setUp(self):\n        \"\"\"\n        Create a reservation confirm hook and set the server log level.\n        \"\"\"\n        super(TestResvConfirmHook, self).setUp()\n        self.hook_name = 'resvconfirm_hook'\n        attrs = {'event': 'resv_confirm'}\n        self.server.create_hook(self.hook_name, attrs)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 2047})\n\n    @tags('hooks')\n    def test_run_advance_resv(self):\n        \"\"\"\n        Testcase to submit and confirm advance reservation, delete the same\n        and verify the resv_confirm hook ran.\n        \"\"\"\n        
self.server.import_hook(self.hook_name,\n                                TestResvConfirmHook.advance_resv_hook_script)\n\n        offset = 10\n        duration = 30\n        rid, _, _ = self.server.submit_resv(offset, duration)\n\n        msg = 'Hook;Server@%s;Reservation ID - %s' % \\\n              (self.server.shortname, rid)\n        self.server.log_match(msg, tail=True, interval=1, max_attempts=10)\n\n    @tags('hooks')\n    def test_degraded_resv(self):\n        \"\"\"\n        Testcase to submit and confirm an advance reservation, offline a\n        vnode, verify reservation degradation, restore the vnode and verify\n        the resv_confirm hook ran the correct number of times.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestResvConfirmHook.advance_resv_hook_script)\n\n        offset = 300\n        duration = 30\n        rid, _, _ = self.server.submit_resv(offset, duration)\n        msg = 'Hook;Server@%s;Reservation ID - %s' % (self.server.shortname,\n                                                      rid)\n        self.server.log_match(msg, tail=True)\n\n        self.server.manager(MGR_CMD_SET, NODE, {'state': (INCR, 'offline')},\n                            id=self.mom.shortname)\n        vnode_off_time = time.time()\n\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_DEGRADED|10')}\n        self.server.expect(RESV, attrs, id=rid)\n\n        self.server.manager(MGR_CMD_SET, NODE, {'state': (DECR, 'offline')},\n                            id=self.mom.shortname)\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, attrs, id=rid)\n\n        self.server.log_match(msg, starttime=vnode_off_time, interval=1,\n                              max_attempts=10, existence=False)\n\n    @tags('hooks')\n    def test_set_attrs(self):\n        \"\"\"\n        Testcase to submit and confirm an advance reservation, delete the\n        reservation and verify the read 
permissions in the resvconfirm hook.\n        \"\"\"\n\n        hook_script = \"\"\"\\\n            import pbs\n            e=pbs.event()\n\n            pbs.logmsg(pbs.LOG_DEBUG,\n                       'Reservation confirm Hook name - %s' % e.hook_name)\n\n            if e.type == pbs.RESV_CONFIRM:\n                pbs.logmsg(pbs.LOG_DEBUG, \"e.resv = %s\" % e.resv.resvid)\n                e.resv.queue = 'workq'\n                pbs.logmsg(pbs.LOG_DEBUG, 'Reservation ID - %s' %\n                e.resv.queue)\n        \"\"\"\n        hook_script = textwrap.dedent(hook_script)\n        self.server.import_hook(self.hook_name, hook_script)\n\n        offset = 10\n        duration = 30\n        rid, _, _ = self.server.submit_resv(offset, duration)\n\n        msg = 'Svr;Server@%s;PBS server internal error (15011) in Error ' \\\n              'evaluating Python script, attribute '\"'queue'\"' is ' \\\n              'part of a readonly object' % self.server.shortname\n        self.server.log_match(msg, tail=True, max_attempts=30, interval=2)\n\n    @tags('hooks')\n    def test_delete_resv_after_first_occurrence(self):\n        \"\"\"\n        Testcase to submit and confirm a standing reservation for two\n        occurrences, wait for the first occurrence to begin and verify\n        the confirm hook for the reservation, delete before the second\n        occurrence and verify the confirm ran only once.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestResvConfirmHook.advance_resv_hook_script)\n\n        offset = 20\n        duration = 30\n        rid, _, _ = self.server.submit_resv(offset, duration,\n                                            rrule='FREQ=MINUTELY;COUNT=2',\n                                            conf=self.conf)\n        msg = 'Hook;Server@%s;Reservation ID - %s' % (self.server.shortname,\n                                                      rid)\n        self.server.log_match(msg, tail=True, 
interval=2, max_attempts=30)\n        self.logger.info('Reservation confirm hook ran for first occurrence of'\n                         ' a standing reservation')\n        post_first_conf_time = time.time()\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, attrs, id=rid)\n        off = offset + duration - time.time()\n        self.logger.info('Wait %s sec until reservation completed.', off)\n\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, attrs, id=rid, offset=off)\n\n        self.server.log_match(msg, starttime=post_first_conf_time, interval=1,\n                              max_attempts=10, existence=False)\n\n    @tags('hooks')\n    def test_unconfirmed_resv_with_node(self):\n        \"\"\"\n        Testcase to set the node attributes such that the number of ncpus is 1,\n        submit and confirm a reservation on the same node, submit another\n        reservation on the same node and verify the reservation confirm hook\n        did not run as the latter one never gets past the unconfirmed state.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestResvConfirmHook.advance_resv_hook_script)\n\n        node_attrs = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, node_attrs,\n                            id=self.mom.shortname)\n        offset = 20\n        duration = 300\n        rid, _, _ = self.server.submit_resv(offset, duration)\n        msg = 'Hook;Server@%s;Reservation ID - %s' % \\\n              (self.server.shortname, rid)\n        self.server.log_match(msg, tail=True, max_attempts=10)\n\n        new_rid, _, _ = self.server.submit_resv(offset, duration,\n                                                confirmed=False)\n        msg = \"Server@%s;Resv;%s;Reservation denied\" % (self.server.shortname,\n                                                        new_rid)\n        
self.server.log_match(msg, tail=True)\n        msg = 'Hook;Server@%s;Reservation ID - %s' % \\\n              (self.server.shortname, new_rid)\n        self.server.log_match(msg, tail=True, max_attempts=10,\n                              existence=False)\n\n    @tags('hooks')\n    def test_scheduler_down(self):\n        \"\"\"\n        Testcase to turn off the scheduler and submit a reservation,\n        the same will be in unconfirmed state and upon ending the\n        resv_confirm hook shall not run.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestResvConfirmHook.advance_resv_hook_script)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        offset = 20\n        duration = 30\n        rid, _, end = self.server.submit_resv(offset, duration,\n                                              confirmed=False)\n        off = end - time.time()\n\n        self.logger.info('wait for %s seconds till the reservation begins',\n                         off)\n        time.sleep(off)\n\n        msg = 'Hook;Server@%s;Reservation ID - %s' % \\\n              (self.server.shortname, rid)\n        self.server.log_match(msg, tail=True, max_attempts=3,\n                              existence=False)\n\n    @tags('hooks')\n    def test_multiple_reconfirm_hooks(self):\n        \"\"\"\n        Define multiple hooks for the resv_confirm event and make sure\n        both get run.\n\n        Check for initial confirmation and also in a degraded/reconfirmed case.\n        \"\"\"\n        test_hook_script = textwrap.dedent(\"\"\"\n        import pbs\n        e=pbs.event()\n\n        pbs.logmsg(pbs.LOG_DEBUG,\n                   'Reservation Confirm Hook name - %%s' %% e.hook_name)\n\n        if e.type == pbs.RESV_CONFIRM:\n            pbs.logmsg(pbs.LOG_DEBUG,\n                       'Test %d Reservation ID - %%s' %% e.resv.resvid)\n        \"\"\")\n\n        attrs = {'event': 'resv_confirm'}\n        
self.server.create_import_hook(\"test_hook_1\", attrs,\n                                       test_hook_script % 1)\n        self.server.create_import_hook(\"test_hook_2\", attrs,\n                                       test_hook_script % 2)\n        a = {'resources_available.ncpus': 1}\n        self.mom.create_vnodes(a, 2)\n        offset = 10\n        duration = 30\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'reserve_retry_time': 5})\n        rid, _, _ = self.server.submit_resv(offset, duration)\n        msg1 = 'Hook;Server@%s;Test 1 Reservation ID - %s' % \\\n            (self.server.shortname, rid)\n        self.server.log_match(msg1, tail=True)\n\n        msg2 = 'Hook;Server@%s;Test 2 Reservation ID - %s' % \\\n            (self.server.shortname, rid)\n        self.server.log_match(msg2, tail=True)\n\n        self.server.status(RESV, 'resv_nodes', id=rid)\n        rnodes = self.server.reservations[rid].get_vnodes()\n        self.server.manager(MGR_CMD_SET, NODE, {'state': (INCR, 'offline')},\n                            id=rnodes[0])\n        vnode_off_time = time.time()\n\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, attrs, id=rid)\n\n        self.server.log_match(msg1, starttime=vnode_off_time, interval=1,\n                              max_attempts=10)\n        self.server.log_match(msg2, starttime=vnode_off_time, interval=1,\n                              max_attempts=10)\n"
  },
  {
    "path": "test/tests/functional/pbs_resv_end_hook.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestResvEndHook(TestFunctional):\n    \"\"\"\n    Tests to verify the reservation end hook for a confirm standing/advance/\n    degraded reservation once the reservation ends or gets deleted.\n    \"\"\"\n\n    advance_resv_hook_script = \"\"\"\nimport pbs\ne=pbs.event()\n\npbs.logmsg(pbs.LOG_DEBUG, 'Reservation End Hook name - %s' % e.hook_name)\n\nif e.type == pbs.RESV_END:\n    pbs.logmsg(pbs.LOG_DEBUG, 'Reservation ID - %s' % e.resv.resvid)\n\"\"\"\n\n    standing_resv_hook_script = \"\"\"\nimport pbs\ne=pbs.event()\n\npbs.logmsg(pbs.LOG_DEBUG, 'Reservation End Hook name - %s' % e.hook_name)\n\nif e.type == pbs.RESV_END:\n    pbs.logmsg(pbs.LOG_DEBUG, 'Reservation occurrence - %s' %\n    e.resv.reserve_index)\n\"\"\"\n\n    def setUp(self):\n        \"\"\"\n        Create a reservation end hook and set the server log level.\n        \"\"\"\n        super(TestResvEndHook, self).setUp()\n        self.hook_name = 'resvend_hook'\n        attrs = {'event': 'resv_end'}\n        self.server.create_hook(self.hook_name, attrs)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 2047})\n\n    def submit_resv(self, offset, duration, select='1:ncpus=1', rrule=''):\n        \"\"\"\n        Helper function to submit an advance/a standing reservation.\n        \"\"\"\n        start_time = int(time.time()) + 
offset\n        end_time = start_time + duration\n\n        attrs = {\n            'reserve_start': start_time,\n            'reserve_end': end_time,\n            'Resource_List.select': select\n        }\n\n        if rrule:\n            if 'PBS_TZID' in self.conf:\n                tzone = self.conf['PBS_TZID']\n            elif 'PBS_TZID' in os.environ:\n                tzone = os.environ['PBS_TZID']\n            else:\n                self.logger.info('Missing timezone, using Asia/Kolkata')\n                tzone = 'Asia/Kolkata'\n            attrs[ATTR_resv_rrule] = rrule\n            attrs[ATTR_resv_timezone] = tzone\n\n        rid = self.server.submit(Reservation(TEST_USER, attrs))\n\n        return rid\n\n    def test_delete_advance_resv(self):\n        \"\"\"\n        Testcase to submit and confirm advance reservation, delete the same\n        and verify the resvend hook.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestResvEndHook.advance_resv_hook_script)\n\n        offset = 10\n        duration = 30\n        rid = self.submit_resv(offset, duration)\n\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, attrs, id=rid)\n\n        self.server.delete(rid)\n        msg = 'Hook;Server@%s;Reservation ID - %s' % \\\n              (self.server.shortname, rid)\n        self.server.log_match(msg, tail=True, interval=2, max_attempts=30)\n\n    def test_delete_degraded_resv(self):\n        \"\"\"\n        Testcase to submit and confirm an advance reservation, turn the mom\n        off, delete the degraded reservation and verify the resvend\n        hook.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestResvEndHook.advance_resv_hook_script)\n\n        offset = 10\n        duration = 30\n        rid = self.submit_resv(offset, duration)\n\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        
self.server.expect(RESV, attrs, id=rid)\n\n        self.mom.stop()\n\n        attrs['reserve_state'] = (MATCH_RE, 'RESV_DEGRADED|10')\n        self.server.expect(RESV, attrs, id=rid)\n\n        self.server.delete(rid)\n        msg = 'Hook;Server@%s;Reservation ID - %s' % \\\n              (self.server.shortname, rid)\n        self.server.log_match(msg, tail=True, interval=2, max_attempts=30)\n\n    def test_server_down_case_1(self):\n        \"\"\"\n        Testcase to submit and confirm an advance reservation, turn the server\n        off, turn the server on, delete the reservation and verify the resvend\n        hook.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestResvEndHook.advance_resv_hook_script)\n\n        offset = 10\n        duration = 300\n        rid = self.submit_resv(offset, duration)\n\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, attrs, id=rid)\n\n        self.server.stop()\n\n        self.server.start()\n\n        self.server.delete(rid)\n\n        msg = 'Hook;Server@%s;Reservation ID - %s' % \\\n              (self.server.shortname, rid)\n        self.server.log_match(msg, tail=True, interval=2, max_attempts=30)\n\n    @timeout(300)\n    def test_server_down_case_2(self):\n        \"\"\"\n        Testcase to submit and confirm an advance reservation, turn the\n        server off, wait for the reservation duration to finish, turn the\n        server on and verify the resvend hook.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestResvEndHook.advance_resv_hook_script)\n\n        offset = 10\n        duration = 30\n        rid = self.submit_resv(offset, duration)\n\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, attrs, id=rid)\n\n        self.server.stop()\n\n        self.logger.info('wait for 30 seconds till the reservation ends')\n       
 time.sleep(30)\n\n        self.server.start()\n\n        msg = 'Hook;Server@%s;Reservation ID - %s' % \\\n              (self.server.shortname, rid)\n        self.server.log_match(msg, tail=True, interval=2, max_attempts=30)\n\n    @timeout(240)\n    def test_end_advance_resv(self):\n        \"\"\"\n        Testcase to submit and confirm an advance reservation, wait for it\n        to end and verify the reservation end hook.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestResvEndHook.advance_resv_hook_script)\n\n        offset = 10\n        duration = 30\n        rid = self.submit_resv(offset, duration)\n\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, attrs, id=rid)\n\n        attrs['reserve_state'] = (MATCH_RE, 'RESV_RUNNING|5')\n        self.server.expect(RESV, attrs, id=rid, offset=10)\n\n        self.logger.info('wait 30 seconds until the reservation ends')\n        time.sleep(30)\n\n        msg = 'Hook;Server@%s;Reservation ID - %s' % \\\n              (self.server.shortname, rid)\n        self.server.log_match(msg, tail=True, interval=2, max_attempts=30)\n\n    def test_delete_advance_resv_with_jobs(self):\n        \"\"\"\n        Testcase to submit and confirm an advance reservation, submit\n        some jobs to the same, wait for the same to end and\n        verify the reservation end hook.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestResvEndHook.advance_resv_hook_script)\n\n        offset = 10\n        duration = 30\n        rid = self.submit_resv(offset, duration)\n\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, attrs, id=rid)\n\n        attrs['reserve_state'] = (MATCH_RE, 'RESV_RUNNING|5')\n        self.server.expect(RESV, attrs, id=rid, offset=10)\n\n        job_attrs = {\n            'Resource_List.walltime': 5,\n            
'Resource_List.select': '1:ncpus=1',\n            'queue': rid.split('.')[0]\n        }\n\n        for _ in range(10):\n            self.server.submit(Job(TEST_USER, job_attrs))\n\n        self.logger.info('wait 10 seconds until the reservation runs some jobs')\n        time.sleep(10)\n\n        self.server.delete(rid)\n        msg = 'Hook;Server@%s;Reservation ID - %s' % \\\n              (self.server.shortname, rid)\n        self.server.log_match(msg, tail=True, interval=2, max_attempts=30)\n\n    @timeout(240)\n    def test_end_advance_resv_with_jobs(self):\n        \"\"\"\n        Testcase to submit and confirm an advance reservation, submit\n        some jobs to the same, wait for it to start and end, verify\n        the resvend hook.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestResvEndHook.advance_resv_hook_script)\n\n        offset = 10\n        duration = 30\n        rid = self.submit_resv(offset, duration)\n\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, attrs, id=rid)\n\n        attrs['reserve_state'] = (MATCH_RE, 'RESV_RUNNING|5')\n        self.server.expect(RESV, attrs, id=rid, offset=10)\n\n        job_attrs = {\n            'Resource_List.walltime': 10,\n            'Resource_List.select': '1:ncpus=1',\n            'queue': rid.split('.')[0]\n        }\n\n        for _ in range(10):\n            self.server.submit(Job(TEST_USER, job_attrs))\n\n        self.logger.info('wait 30 seconds until the reservation ends')\n        time.sleep(30)\n\n        msg = 'Hook;Server@%s;Reservation ID - %s' % \\\n              (self.server.shortname, rid)\n        self.server.log_match(msg, tail=True, interval=2, max_attempts=30)\n\n    def test_set_attrs(self):\n        \"\"\"\n        Testcase to submit and confirm an advance reservation, delete the\n        reservation and verify the read permissions in the resvend hook.\n        \"\"\"\n\n        
hook_script = \"\"\"\nimport pbs\ne=pbs.event()\n\npbs.logmsg(pbs.LOG_DEBUG, 'Reservation End Hook name - %s' % e.hook_name)\n\nif e.type == pbs.RESV_END:\n    e.resv.resources_used.walltime = 10\n    pbs.logmsg(pbs.LOG_DEBUG, 'Reservation ID - %s' %\n    e.resv.resources_used.walltime)\n\"\"\"\n\n        self.server.import_hook(self.hook_name, hook_script)\n\n        offset = 10\n        duration = 30\n        rid = self.submit_resv(offset, duration)\n\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, attrs, id=rid)\n\n        self.server.delete(rid)\n        msg = 'Svr;Server@%s;PBS server internal error (15011) in Error ' \\\n              'evaluating Python script, attribute '\"'resources_used'\"' is ' \\\n              'part of a readonly object' % self.server.shortname\n        self.server.log_match(msg, tail=True, max_attempts=30, interval=2)\n\n    @timeout(300)\n    def test_delete_resv_occurrence(self):\n        \"\"\"\n        Testcase to submit and confirm a standing reservation for two\n        occurrences, wait for the first occurrence to end and verify\n        the end hook for the same, delete the second occurrence and\n        verify the resvend hook for the latter.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestResvEndHook.standing_resv_hook_script)\n\n        offset = 10\n        duration = 30\n        rid = self.submit_resv(offset, duration, rrule='FREQ=MINUTELY;COUNT=2')\n\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, attrs, id=rid)\n\n        attrs['reserve_state'] = (MATCH_RE, 'RESV_RUNNING|5')\n        self.server.expect(RESV, attrs, id=rid, offset=10)\n\n        self.logger.info('wait 30 seconds until the reservation ends')\n        time.sleep(30)\n\n        msg = 'Hook;Server@%s;Reservation occurrence - 1' % \\\n              self.server.shortname\n        
self.server.log_match(msg, tail=True, interval=2, max_attempts=30)\n        self.logger.info('Reservation end hook ran for first occurrence of '\n                         'a standing reservation')\n\n        self.logger.info(\n            'wait for 10 seconds till the next occurrence is submitted')\n        time.sleep(10)\n\n        self.server.delete(rid)\n        msg = 'Hook;Server@%s;Reservation occurrence - 2' % \\\n              self.server.shortname\n        self.server.log_match(msg, tail=True, interval=2, max_attempts=30)\n\n    @timeout(300)\n    def test_end_resv_occurrences(self):\n        \"\"\"\n        Testcase to submit and confirm a standing reservation for two\n        occurrences, wait for the first occurrence to end and verify\n        the end hook for the same, wait for the second occurrence to\n        start and end, verify the resvend hook for the latter.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestResvEndHook.standing_resv_hook_script)\n\n        offset = 10\n        duration = 30\n        rid = self.submit_resv(offset, duration, rrule='FREQ=MINUTELY;COUNT=2')\n\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, attrs, id=rid)\n\n        attrs['reserve_state'] = (MATCH_RE, 'RESV_RUNNING|5')\n        self.server.expect(RESV, attrs, id=rid, offset=10)\n\n        self.logger.info('Sleep for 30 seconds for the reservation occurrence '\n                         'to end')\n        time.sleep(30)\n\n        msg = 'Hook;Server@%s;Reservation occurrence - 1' % \\\n              self.server.shortname\n        self.server.log_match(msg, tail=True, interval=2, max_attempts=30)\n        self.logger.info('Reservation end hook ran for first occurrence of a '\n                         'standing reservation')\n\n        self.logger.info('Sleep for 30 seconds as this is a '\n                         'minutely occurrence')\n        time.sleep(30)\n\n      
  attrs = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5'),\n                 'reserve_index': 2}\n        self.server.expect(RESV, attrs, id=rid, attrop=PTL_AND)\n\n        msg = 'Hook;Server@%s;Reservation occurrence - 2' % \\\n              self.server.shortname\n        self.server.log_match(msg, tail=True, interval=2, max_attempts=30)\n        self.logger.info('Reservation end hook ran for second occurrence of a'\n                         ' standing reservation')\n\n    @timeout(300)\n    def test_delete_resv_occurrence_with_jobs(self):\n        \"\"\"\n        Testcase to submit and confirm a standing reservation for two\n        occurrences, submit some jobs to it, wait for the first\n        occurrence to end and verify the end hook for the same,\n        delete the second occurrence and verify the resvend hook\n        for the latter.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestResvEndHook.standing_resv_hook_script)\n\n        offset = 10\n        duration = 30\n        rid = self.submit_resv(offset, duration, rrule='FREQ=MINUTELY;COUNT=2')\n\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, attrs, id=rid)\n\n        attrs['reserve_state'] = (MATCH_RE, 'RESV_RUNNING|5')\n        self.server.expect(RESV, attrs, id=rid, offset=10)\n\n        job_attrs = {\n            'Resource_List.walltime': 5,\n            'Resource_List.select': '1:ncpus=1',\n            'queue': rid.split('.')[0]\n        }\n\n        for _ in range(10):\n            self.server.submit(Job(TEST_USER, job_attrs))\n\n        self.logger.info('Sleep for 30 seconds for the reservation occurrence '\n                         'to end')\n        time.sleep(30)\n\n        msg = 'Hook;Server@%s;Reservation occurrence - 1' % \\\n              self.server.shortname\n        self.server.log_match(msg, tail=True, interval=2, max_attempts=30)\n        self.logger.info('Reservation end hook ran 
for first occurrence of a '\n                         'standing reservation')\n\n        self.logger.info(\n            'wait for 10 seconds till the next occurrence is submitted')\n        time.sleep(10)\n\n        self.server.delete(rid)\n        msg = 'Hook;Server@%s;Reservation occurrence - 2' % \\\n              self.server.shortname\n        self.server.log_match(msg, tail=True, interval=2, max_attempts=30)\n\n    @timeout(300)\n    def test_end_resv_occurrences_with_jobs(self):\n        \"\"\"\n        Testcase to submit and confirm a standing reservation for two\n        occurrences, wait for the first occurrence to end and verify\n        the end hook for the same, wait for the second occurrence to\n        end and verify the resvend hook for the latter.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestResvEndHook.standing_resv_hook_script)\n\n        offset = 10\n        duration = 30\n        rid = self.submit_resv(offset, duration, rrule='FREQ=MINUTELY;COUNT=2')\n\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, attrs, id=rid)\n\n        job_attrs = {\n            'Resource_List.walltime': 5,\n            'Resource_List.select': '1:ncpus=1',\n            'queue': rid.split('.')[0]\n        }\n\n        for _ in range(10):\n            self.server.submit(Job(TEST_USER, job_attrs))\n\n        attrs['reserve_state'] = (MATCH_RE, 'RESV_RUNNING|5')\n        self.server.expect(RESV, attrs, id=rid, offset=10)\n\n        self.logger.info('Sleep for 30 seconds for the reservation occurrence '\n                         'to end')\n        time.sleep(30)\n\n        msg = 'Hook;Server@%s;Reservation occurrence - 1' % \\\n              self.server.shortname\n        self.server.log_match(msg, tail=True, interval=2, max_attempts=30)\n        self.logger.info('Reservation end hook ran for first occurrence of a '\n                         'standing reservation')\n\n  
      self.logger.info('Sleep for 30 seconds as this is a '\n                         'minutely occurrence')\n        time.sleep(30)\n\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5'),\n                 'reserve_index': 2}\n        self.server.expect(RESV, attrs, id=rid, attrop=PTL_AND)\n\n        msg = 'Hook;Server@%s;Reservation occurrence - 2' % \\\n              self.server.shortname\n        self.server.log_match(msg, tail=True, interval=2, max_attempts=30)\n        self.logger.info('Reservation end hook ran for second occurrence of a '\n                         'standing reservation')\n\n    def test_unconfirmed_resv_with_node(self):\n        \"\"\"\n        Testcase to set the node attributes such that the number of ncpus is 1,\n        submit and confirm a reservation on the same node, submit another\n        reservation on the same node and verify the reservation end hook\n        did not run as the latter one stays in unconfirmed state.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestResvEndHook.advance_resv_hook_script)\n\n        node_attrs = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, node_attrs,\n                            id=self.mom.shortname)\n        offset = 10\n        duration = 10\n        rid = self.submit_resv(offset, duration)\n\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, attrs, id=rid)\n\n        new_rid = self.submit_resv(offset, duration)\n\n        msg = 'Hook;Server@%s;Reservation ID - %s' % \\\n              (self.server.shortname, new_rid)\n        self.server.log_match(msg, tail=True, max_attempts=10,\n                              existence=False)\n\n    @timeout(240)\n    def test_scheduler_down_case_1(self):\n        \"\"\"\n        Testcase to turn off the scheduler and submit a reservation,\n        the same will be in unconfirmed state and upon ending the\n        resvend hook 
shall not run.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestResvEndHook.advance_resv_hook_script)\n\n        self.scheduler.stop()\n\n        offset = 10\n        duration = 30\n        rid = self.submit_resv(offset, duration)\n\n        self.logger.info('Wait 30 seconds until the reservation ends')\n        time.sleep(30)\n\n        msg = 'Hook;Server@%s;Reservation ID - %s' % \\\n              (self.server.shortname, rid)\n        self.server.log_match(msg, tail=True, max_attempts=10,\n                              existence=False)\n\n    def test_scheduler_down_case_2(self):\n        \"\"\"\n        Testcase to turn off the scheduler and submit a reservation;\n        it will stay in the unconfirmed state, and deleting it should\n        not run the resvend hook.\n        \"\"\"\n        self.server.import_hook(self.hook_name,\n                                TestResvEndHook.advance_resv_hook_script)\n\n        self.scheduler.stop()\n\n        offset = 10\n        duration = 10\n        rid = self.submit_resv(offset, duration)\n\n        self.server.delete(rid)\n        msg = 'Hook;Server@%s;Reservation ID - %s' % \\\n              (self.server.shortname, rid)\n        self.server.log_match(msg, tail=True, max_attempts=10,\n                              existence=False)\n"
  },
  {
    "path": "test/tests/functional/pbs_resv_start_dur_end.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestReservationRequests(TestFunctional):\n\n    \"\"\"\n    Various tests to verify the behavior of the server\n    in validating reservation requests\n    \"\"\"\n    # Class variables\n    bu = BatchUtils()\n    fmt = \"%a %b %d %H:%M:%S %Y\"\n\n    def test_duration_end_resv(self):\n        \"\"\"\n        To test if reservations can be made by using\n        duration and endtime, making the server calculate\n        the starttime.\n        \"\"\"\n        now = int(time.time())\n        a = {'Resource_List.select': '1:ncpus=1',\n             'reserve_end': now + 30,\n             'reserve_duration': 5}\n        R = Reservation(TEST_USER, attrs=a)\n        R.unset_attributes(['reserve_start'])\n        rid = self.server.submit(R)\n\n        a = {'reserve_start': self.bu.convert_seconds_to_datetime(\n            now + 25, self.fmt)}\n        self.server.expect(RESV, a, id=rid)\n\n    def test_duration_end_resv_fail(self):\n        \"\"\"\n        To test that a reservation made by using\n        duration and endtime, where the server calculates\n        the starttime, is rejected if the starttime is before now.\n        \"\"\"\n        a = {'Resource_List.select': '1:ncpus=1',\n             'reserve_end': int(time.time()) + 15,\n             'reserve_duration': 20}\n        R = Reservation(TEST_USER, attrs=a)\n        
R.unset_attributes(['reserve_start'])\n        rid = None\n        try:\n            rid = self.server.submit(R)\n        except PbsSubmitError as e:\n            self.assertTrue('Bad time specification(s)' in e.msg[0],\n                            'Reservation Submit failed in an unexpected way')\n        self.assertTrue(rid is None,\n                        'Reservation Submit succeeded ' +\n                        'when it should have failed')\n\n    def test_start_dur_end_resv_fail(self):\n        \"\"\"\n        Test to submit a reservation with a start, end, and duration\n        where start + duration != end.\n        \"\"\"\n        now = int(time.time())\n        a = {'Resource_List.select': '1:ncpus=1',\n             'reserve_start': now + 20,\n             'reserve_end': now + 40,\n             'reserve_duration': 10}\n        R = Reservation(TEST_USER, attrs=a)\n        rid = None\n        try:\n            rid = self.server.submit(R)\n        except PbsSubmitError as e:\n            self.assertTrue('Bad time specification(s)' in e.msg[0],\n                            'Reservation Submit failed in an unexpected way')\n        self.assertTrue(rid is None,\n                        'Reservation Submit succeeded ' +\n                        'when it should have failed')\n\n    def test_start_dur_end_resv(self):\n        \"\"\"\n        Test to submit a reservation with a start, end, and duration\n        where start + duration = end.\n        \"\"\"\n        now = int(time.time())\n        a = {'Resource_List.select': '1:ncpus=1',\n             'reserve_start': now + 20,\n             'reserve_end': now + 30,\n             'reserve_duration': 10}\n        R = Reservation(TEST_USER, attrs=a)\n        rid = self.server.submit(R)\n\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, a, id=rid)\n\n    def test_end_wall_resv(self):\n        \"\"\"\n        Test to submit a reservation with end and walltime\n        \"\"\"\n        now = 
int(time.time())\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.walltime': '10',\n             'reserve_end': now + 30}\n        R = Reservation(TEST_USER, attrs=a)\n        R.unset_attributes(['reserve_start'])\n        rid = self.server.submit(R)\n\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, a, id=rid)\n\n    def test_rstat_longterm_resv(self):\n        \"\"\"\n        Test to submit a reservation, where duration > INT_MAX.\n        Check whether pbs_rstat displays a negative number.\n        \"\"\"\n        now = int(time.time())\n        a = {'Resource_List.select': '1:ncpus=1',\n             'reserve_start': now + 3600,\n             'reserve_end': now + 4294970895}\n        r = Reservation(TEST_USER, attrs=a)\n        rid = self.server.submit(r)\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, a, id=rid)\n\n        out = self.server.status(RESV, 'reserve_duration', id=rid)[0][\n            'reserve_duration']\n        dur = int(out)\n        self.assertGreater(dur, 0, 'Duration ' + str(dur) + ' is negative.')\n"
  },
  {
    "path": "test/tests/functional/pbs_root_owned_script.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nimport os\nfrom tests.functional import *\n\n\nclass Test_RootOwnedScript(TestFunctional):\n    \"\"\"\n    Test suite to verify that a root-owned script is rejected\n    and the job comment is updated when $reject_root_scripts is set to true.\n    \"\"\"\n\n    def setUp(self):\n        \"\"\"\n        Set up the parameters required for Test_RootOwnedScript\n        \"\"\"\n        if os.getuid() != 0 or sys.platform in ('cygwin', 'win32'):\n            self.skipTest(\"Test needs to run as root\")\n        TestFunctional.setUp(self)\n        mom_conf_attr = {'$reject_root_scripts': 'true'}\n        qmgr_attr = {'acl_roots': ROOT_USER}\n        self.mom.add_config(mom_conf_attr)\n        self.mom.restart()\n        self.server.manager(MGR_CMD_SET, SERVER, qmgr_attr)\n        self.sleep_5 = \"\"\"#!/bin/bash\n        sleep 5\n        \"\"\"\n        self.qsub_cmd = os.path.join(\n            self.server.pbs_conf['PBS_EXEC'], 'bin', 'qsub')\n        # Make sure local mom is ready to run jobs\n        a = {'state': 'free', 'resources_available.ncpus': (GE, 1)}\n        self.server.expect(VNODE, a, max_attempts=10, interval=2)\n\n    def test_root_owned_script(self):\n        \"\"\"\n        Edit the mom config to reject root-owned scripts,\n        submit a script as root, and observe the job comment.\n        \"\"\"\n        j = Job(ROOT_USER)\n        
j.create_script(self.sleep_5)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        jid = self.server.submit(j)\n        self.server.runjob(jid)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid, offset=2)\n        _comment = 'Not Running: PBS Error: Execution server rejected request'\n        self.server.expect(JOB, {'comment': _comment}, id=jid)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.expect(JOB, {'job_state': 'H'}, id=jid)\n        _comment = 'job held, too many failed attempts to run'\n        self.server.expect(JOB, {'comment': _comment}, id=jid)\n\n    def test_root_owned_job_array_script(self):\n        \"\"\"\n        Like test_root_owned_script, except job array is used.\n        \"\"\"\n        a = {ATTR_J: '1-3'}\n        j = Job(ROOT_USER, a)\n        j.create_script(self.sleep_5)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        jid = self.server.submit(j)\n        sjid = j.create_subjob_id(jid, 1)\n        self.server.runjob(sjid)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid, offset=2)\n        _comment = 'Not Running: PBS Error: Execution server rejected request'\n        self.server.expect(JOB, {'comment': _comment}, id=jid)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.expect(JOB, {'job_state': 'H'}, id=jid)\n        _comment = 'job held, too many failed attempts to run'\n        self.server.expect(JOB, {'comment': _comment}, id=sjid)\n        ja_comment = \"Job Array Held, too many failed attempts to run subjob\"\n        self.server.expect(JOB, {ATTR_state: \"H\", ATTR_comment: (MATCH_RE,\n                           ja_comment)}, attrop=PTL_AND, id=jid)\n\n    def test_non_root_script(self):\n        \"\"\"\n        Edit the mom config to reject root-owned scripts,\n        submit a script as TEST_USER, and verify the job runs.\n        \"\"\"\n        j = 
Job(TEST_USER)\n        j.create_script(self.sleep_5)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n    def test_root_owned_executable(self):\n        \"\"\"\n        Edit the mom config to reject root-owned scripts and\n        submit a job as root with the -- <executable> option.\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        cmd = [self.qsub_cmd, '--', '/usr/bin/id']\n        rv = self.du.run_cmd(self.server.hostname, cmd=cmd)\n        self.assertEqual(rv['rc'], 0, 'qsub failed')\n        jid = rv['out'][0]\n        self.server.runjob(jid)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid, offset=2)\n        _comment = 'Not Running: PBS Error: Execution server rejected request'\n        self.server.expect(JOB, {'comment': _comment}, id=jid)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.expect(JOB, {'job_state': 'H'}, id=jid)\n        _comment = 'job held, too many failed attempts to run'\n        self.server.expect(JOB, {'comment': _comment}, id=jid)\n\n    def test_root_owned_job_array_executable(self):\n        \"\"\"\n        Like test_root_owned_executable, except job array is used.\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        cmd = [self.qsub_cmd, '-J', '1-3', '--', '/usr/bin/id']\n        rv = self.du.run_cmd(self.server.hostname, cmd=cmd)\n        self.assertEqual(rv['rc'], 0, 'qsub failed')\n        jid = rv['out'][0]\n        sjid = Job().create_subjob_id(jid, 1)\n        self.server.runjob(sjid)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid, offset=2)\n        _comment = 'Not Running: PBS Error: Execution server rejected request'\n        self.server.expect(JOB, {'comment': _comment}, id=jid)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.expect(JOB, {'job_state': 'H'}, id=jid)\n    
    _comment = 'job held, too many failed attempts to run'\n        self.server.expect(JOB, {'comment': _comment}, id=sjid)\n        ja_comment = \"Job Array Held, too many failed attempts to run subjob\"\n        self.server.expect(JOB, {ATTR_state: \"H\", ATTR_comment: (MATCH_RE,\n                           ja_comment)}, attrop=PTL_AND, id=jid)\n\n    def test_root_owned_job_pbs_attach(self):\n        \"\"\"\n        Submit a job as root and test the pbs_attach feature.\n        \"\"\"\n        mom_conf_attr = {'$reject_root_scripts': 'false'}\n        self.mom.add_config(mom_conf_attr)\n        self.mom.restart()\n        qmgr_attr = {'acl_roots': ROOT_USER}\n        self.server.manager(MGR_CMD_SET, SERVER, qmgr_attr)\n        pbs_attach = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                  'bin', 'pbs_attach')\n\n        # Job script\n        test = []\n        test += ['#PBS -l select=ncpus=1\\n']\n        test += ['%s -j $PBS_JOBID -P -s %s 30\\n' %\n                 (pbs_attach, self.mom.sleep_cmd)]\n\n        # Submit a job\n        j = Job(ROOT_USER)\n        j.create_script(body=test)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n        msg_expected = \".+%s;pid.+attached as task.+\" % jid\n        self.mom.log_match(msg_expected, regexp=True, max_attempts=10)\n\n    def test_user_owned_job_pbs_attach(self):\n        \"\"\"\n        Submit a job as a user and test the pbs_attach feature.\n        \"\"\"\n        pbs_attach = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                  'bin', 'pbs_attach')\n\n        # Job script\n        test = []\n        test += ['#PBS -l select=ncpus=1\\n']\n        test += ['%s -j $PBS_JOBID -P -s %s 30\\n' %\n                 (pbs_attach, self.mom.sleep_cmd)]\n\n        # Submit a job\n        j = Job(TEST_USER)\n        j.create_script(body=test)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, 
{ATTR_state: 'R'}, id=jid)\n\n        msg_expected = \".+%s;pid.+attached as task.+\" % jid\n        self.mom.log_match(msg_expected, regexp=True, max_attempts=10)\n"
  },
  {
    "path": "test/tests/functional/pbs_rstat.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestPbsRstat(TestFunctional):\n    \"\"\"\n    This test suite validates the output of pbs_rstat\n    \"\"\"\n\n    def test_rstat_missing_resv(self):\n        \"\"\"\n        Test that checks if pbs_rstat will continue to display\n        reservations after not locating one reservation\n        \"\"\"\n\n        now = int(time.time())\n        a = {'Resource_List.select': '1:ncpus=1',\n             'reserve_start': now + 1000,\n             'reserve_end': now + 2000}\n        r = Reservation(TEST_USER)\n        r.set_attributes(a)\n        rid = self.server.submit(r)\n        exp = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, exp, id=rid)\n\n        a2 = {'Resource_List.select': '1:ncpus=1',\n              'reserve_start': now + 3000,\n              'reserve_end': now + 4000}\n        r2 = Reservation(TEST_USER)\n        r2.set_attributes(a2)\n        rid2 = self.server.submit(r2)\n        self.server.expect(RESV, exp, id=rid2)\n\n        self.server.delresv(rid, wait=True)\n\n        rstat_cmd = \\\n            os.path.join(self.server.pbs_conf['PBS_EXEC'], 'bin', 'pbs_rstat')\n        rstat_opt = [rstat_cmd, '-B', rid, rid2]\n        ret = self.du.run_cmd(self.server.hostname, cmd=rstat_opt,\n                              logerr=False)\n\n        self.assertEqual(ret['rc'], 
0,\n                         'pbs_rstat returned with non-zero exit status')\n\n        rstat_out = '\\n'.join(ret['out'])\n        self.assertIn(rid2, rstat_out)\n"
  },
  {
    "path": "test/tests/functional/pbs_runjob_hook.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestRunJobHook(TestFunctional):\n    \"\"\"\n    This test suite tests the runjob hook\n    \"\"\"\n    index_hook_script = \"\"\"\nimport pbs\ne = pbs.event()\nj = e.job\npbs.logmsg(pbs.LOG_DEBUG, \"job_id=%s\" % j.id)\npbs.logmsg(pbs.LOG_DEBUG, \"sub_job_array_index=%s\"\n           % j.array_index)\ne.accept()\n\"\"\"\n\n    reject_hook_script = \"\"\"\nimport pbs\npbs.event().reject(\"runjob hook rejected the job\")\n\"\"\"\n\n    new_res_in_hook_script = \"\"\"\nimport pbs\ne = pbs.event()\ne.job.Resource_List['site'] = 'site_value'\n\"\"\"\n\n    rerun_hook_script = \"\"\"\nimport pbs\ne = pbs.event()\nj = e.job\nif j.run_version is not None:\n    pbs.logmsg(pbs.LOG_DEBUG,\n        \"rerun_job_hook %s(%s): Resource_List.foo_str=%s\" %\n        (j.id, j.run_version, j.Resource_List['foo_str']))\nelse:\n    j.Resource_List['foo_str'] = \"foo_value\"\n\"\"\"\n\n    def test_array_sub_job_index(self):\n        \"\"\"\n        Submit a job array. 
Check the array sub-job index value\n        \"\"\"\n        hook_name = \"runjob_hook\"\n        attrs = {'event': \"runjob\"}\n        rv = self.server.create_import_hook(hook_name, attrs,\n                                            self.index_hook_script,\n                                            overwrite=True)\n        self.assertTrue(rv)\n        a = {'resources_available.ncpus': 3}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        lower = 1\n        upper = 3\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_J: '%d-%d' % (lower, upper)})\n        self.server.submit(j1)\n        for i in range(lower, upper + 1):\n            self.server.log_match(\"sub_job_array_index=%d\" % (i),\n                                  starttime=self.server.ctime)\n\n    def test_array_sub_new_res_in_hook(self):\n        \"\"\"\n        Insert the site resource in the runjob hook. Submit a job array.\n        Check if the site resource is set for all subjobs\n        \"\"\"\n        hook_name = \"runjob_hook\"\n        attrs = {'event': \"runjob\"}\n        rv = self.server.create_import_hook(hook_name, attrs,\n                                            self.new_res_in_hook_script,\n                                            overwrite=True)\n        self.assertTrue(rv)\n        a = {'resources_available.ncpus': 3}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        lower = 1\n        upper = 3\n        j1 = Job(TEST_USER)\n        j1.set_sleep_time(10)\n        j1.set_attributes({ATTR_J: '%d-%d' % (lower, upper)})\n        jid = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'B'}, id=jid)\n        time.sleep(5)\n        self.server.expect(JOB, ATTR_state, op=UNSET, id=jid)\n        for i in range(lower, upper + 1):\n            sid = j1.create_subjob_id(jid, i)\n            m = \"'runjob_hook' hook set job's Resource_List.site = site_value\"\n            self.server.tracejob_match(m, id=sid, 
n='ALL', tail=False)\n            m = 'E;' + re.escape(sid) + ';.*Resource_List.site=site_value'\n            self.server.accounting_match(m, regexp=True)\n\n    def test_array_sub_res_persist_in_hook_forcereque(self):\n        \"\"\"\n        Set a custom resource in the runjob hook. Submit a job array.\n        Check if the custom resource persists after a force requeue due to\n        node_fail_requeue\n        \"\"\"\n        # configure node fail requeue to lower value\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'node_fail_requeue': 5})\n        # set three cpus for three subjobs\n        a = {'resources_available.ncpus': 3}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        # create custom non-consumable string server resource\n        attr = {'type': 'string'}\n        r = 'foo_str'\n        self.server.manager(\n            MGR_CMD_CREATE, RSC, attr, id=r, logerr=False)\n        # create and import hook\n        hook_name = \"runjob_hook\"\n        attrs = {'event': \"runjob\"}\n        rv = self.server.create_import_hook(hook_name, attrs,\n                                            self.rerun_hook_script,\n                                            overwrite=True)\n        self.assertTrue(rv)\n        # create and submit job array\n        lower = 1\n        upper = 3\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_J: '%d-%d' % (lower, upper)})\n        jid = self.server.submit(j1)\n        # check if running\n        self.server.expect(JOB, {ATTR_state: 'B'}, id=jid)\n        self.server.expect(JOB, {'job_state=R': 3}, count=True,\n                           id=jid, extend='t')\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n        # kill mom\n        self.mom.stop('-KILL')\n        m = \"'runjob_hook' hook set job's Resource_List.foo_str = foo_value\"\n        sid = {}\n        for i in range(lower, upper + 1):\n            
sid[i] = j1.create_subjob_id(jid, i)\n            self.server.tracejob_match(m, id=sid[i], n='ALL', tail=False)\n        # check subjob state change from R->Q\n        self.server.expect(JOB, {'job_state=Q': 3}, count=True,\n                           id=jid, extend='t')\n        # bring back mom\n        self.mom.start()\n        start_time = time.time()\n        self.mom.isUp()\n        # let subjobs get rerun from sched Q->R\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'True'})\n        self.server.expect(JOB, {'job_state=R': 3}, count=True,\n                           id=jid, extend='t')\n        # log match from hook log for custom res value\n        m = \"rerun_job_hook %s(1): Resource_List.foo_str=foo_value\"\n        for i in range(lower, upper + 1):\n            self.server.log_match(m % (sid[i]), starttime=start_time)\n\n    def test_array_sub_res_persist_in_hook_qrerun(self):\n        \"\"\"\n        Set a custom resource in the runjob hook. Submit a job array.\n        Check if the custom resource persists after a qrerun\n        \"\"\"\n        # set three cpus for three subjobs\n        a = {'resources_available.ncpus': 3}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        # create custom non-consumable string server resource\n        attr = {'type': 'string'}\n        r = 'foo_str'\n        self.server.manager(\n            MGR_CMD_CREATE, RSC, attr, id=r, logerr=False)\n        # create and import hook\n        hook_name = \"runjob_hook\"\n        attrs = {'event': \"runjob\"}\n        rv = self.server.create_import_hook(hook_name, attrs,\n                                            self.rerun_hook_script,\n                                            overwrite=True)\n        self.assertTrue(rv)\n        # create and submit job array\n        lower = 1\n        upper = 3\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_J: '%d-%d' % (lower, upper)})\n        jid = 
self.server.submit(j1)\n        # check if running\n        self.server.expect(JOB, {ATTR_state: 'B'}, id=jid)\n        self.server.expect(JOB, {'job_state=R': 3}, count=True,\n                           id=jid, extend='t')\n        m = \"'runjob_hook' hook set job's Resource_List.foo_str = foo_value\"\n        sid = {}\n        for i in range(lower, upper + 1):\n            sid[i] = j1.create_subjob_id(jid, i)\n            self.server.tracejob_match(m, id=sid[i], n='ALL', tail=False)\n        start_time = time.time()\n        # rerun the array job\n        self.server.rerunjob(jobid=jid, runas=ROOT_USER)\n        self.server.expect(JOB, {'job_state=R': 3}, count=True,\n                           id=jid, extend='t')\n        # log match from hook log for custom res value\n        m = \"rerun_job_hook %s(1): Resource_List.foo_str=foo_value\"\n        for i in range(lower, upper + 1):\n            self.server.log_match(m % (sid[i]), starttime=start_time)\n        start_time = time.time()\n        # rerun a single subjob\n        self.server.rerunjob(jobid=sid[2], runas=ROOT_USER)\n        self.server.expect(JOB, {'job_state': 'R'}, id=sid[2])\n        # log match from hook log for custom res value\n        m = \"rerun_job_hook %s(2): Resource_List.foo_str=foo_value\"\n        self.server.log_match(m % (sid[2]), starttime=start_time)\n\n    def test_normal_job_index(self):\n        \"\"\"\n        Submit a normal job. 
Check the job index value, which should be None.\n        \"\"\"\n        hook_name = \"runjob_hook\"\n        attrs = {'event': \"runjob\"}\n        rv = self.server.create_import_hook(hook_name, attrs,\n                                            self.index_hook_script,\n                                            overwrite=True)\n        self.assertTrue(rv)\n        j1 = Job(TEST_USER)\n        self.server.submit(j1)\n        self.server.log_match(\"sub_job_array_index=None\",\n                              starttime=self.server.ctime)\n\n    def test_reject_array_sub_job(self):\n        \"\"\"\n        Test to check array subjobs;\n        jobs should run after the runjob hook is disabled.\n        \"\"\"\n        hook_name = \"runjob_hook\"\n        attrs = {'event': \"runjob\"}\n        rv = self.server.create_import_hook(hook_name, attrs,\n                                            self.reject_hook_script,\n                                            overwrite=True)\n        self.assertTrue(rv)\n        a = {'resources_available.ncpus': 3}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_J: '1-3'})\n        jid = self.server.submit(j1)\n        msg = \"Not Running: PBS Error: runjob hook rejected the job\"\n        self.server.expect(JOB, {'job_state': 'Q', 'comment': msg}, id=jid)\n        a = {'enabled': 'false'}\n        self.server.manager(MGR_CMD_SET, HOOK, a, id=hook_name, sudo=True)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.expect(JOB, {'job_state': 'B'}, id=jid)\n        self.server.expect(JOB, {'job_state=R': 3}, count=True,\n                           id=jid, extend='t')\n"
  },
  {
    "path": "test/tests/functional/pbs_sched_attr_updates.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nfrom tests.functional import *\n\n\nclass TestSchedAttrUpdates(TestFunctional):\n\n    def test_basic_throttling(self):\n        \"\"\"\n        Test the behavior of sched's 'attr_update_period' attribute\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {\"resources_available.ncpus\": 1},\n                            id=self.mom.shortname)\n\n        jid1 = self.server.submit(Job())\n        jid2 = self.server.submit(Job())\n\n        self.server.expect(JOB, {\"job_state\": \"R\"}, id=jid1)\n        self.server.expect(JOB, {\"job_state\": \"Q\"}, id=jid2)\n        self.server.expect(JOB, \"comment\", op=SET, id=jid2)\n\n        self.server.cleanup_jobs()\n\n        a = {\"attr_update_period\": 45, \"scheduling\": \"False\"}\n        self.server.manager(MGR_CMD_SET, SCHED, a, id=\"default\")\n\n        self.server.submit(Job())\n        jid4 = self.server.submit(Job())\n\n        self.scheduler.run_scheduling_cycle()\n\n        # Scheduler should send attr updates in the first cycle\n        # after attr_update_period is set\n        self.server.expect(JOB, \"comment\", op=SET, id=jid4)\n\n        jid5 = self.server.submit(Job())\n        jid6 = self.server.submit(Job())\n\n        t = time.time()\n        self.scheduler.run_scheduling_cycle()\n\n        # Verify that scheduler didn't send attr updates for new 
jobs\n        self.server.expect(JOB, \"comment\", op=UNSET, id=jid5)\n        self.server.expect(JOB, \"comment\", op=UNSET, id=jid6)\n        self.server.log_match(\"Type 96 request received\", existence=False,\n                              starttime=t, max_attempts=5)\n\n        self.logger.info(\"Sleep for 45s for the attr_update_period to pass\")\n        time.sleep(45)\n        jid7 = self.server.submit(Job())\n        jid8 = self.server.submit(Job())\n\n        t = time.time()\n        self.scheduler.run_scheduling_cycle()\n\n        # Verify that scheduler sent attr updates for all new jobs\n        self.server.expect(JOB, \"comment\", op=SET, id=jid7)\n        self.server.expect(JOB, \"comment\", op=SET, id=jid8)\n        self.server.log_match(\"Type 96 request received\", starttime=t)\n\n    def test_accrue_type(self):\n        \"\"\"\n        Test that accrue_type updates are sent immediately\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {\"resources_available.ncpus\": 1},\n                            id=self.mom.shortname)\n\n        a = {\"attr_update_period\": 600, \"scheduling\": \"False\"}\n        self.server.manager(MGR_CMD_SET, SCHED, a, id=\"default\")\n\n        j = Job()\n        j.set_sleep_time(1000)\n        jid1 = self.server.submit(j)\n        jid2 = self.server.submit(Job())\n\n        self.scheduler.run_scheduling_cycle()\n        self.server.expect(JOB, {\"job_state\": \"R\"}, id=jid1)\n        self.server.expect(JOB, {\"job_state\": \"Q\"}, id=jid2)\n        self.server.expect(JOB, \"comment\", op=SET, id=jid2)\n\n        jid3 = self.server.submit(Job())\n        self.scheduler.run_scheduling_cycle()\n        self.server.expect(JOB, \"comment\", op=UNSET, id=jid3)\n\n        # Now, turn eligible time on and add a limit that'll be crossed\n        # by the user, this will trigger an accrue_type update from sched\n        a = {\"eligible_time_enable\": \"True\"}\n        
self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'max_run_res.ncpus': '[u:PBS_GENERIC=1]'})\n\n        # We should still be within the throttling window\n        # But, because accrue_type needed to be sent out,\n        # sched will send all updates for the jobs\n        jid4 = self.server.submit(Job())\n        self.scheduler.run_scheduling_cycle()\n        self.server.expect(JOB, \"comment\", op=SET, id=jid4, max_attempts=1)\n        self.server.expect(JOB, {\"accrue_type\": \"1\"}, id=jid4, max_attempts=1)\n        self.server.expect(JOB, \"comment\", op=SET, id=jid3, max_attempts=1)\n        self.server.expect(JOB, {\"accrue_type\": \"1\"}, id=jid3, max_attempts=1)\n        self.server.expect(JOB, {\"accrue_type\": \"1\"}, id=jid2, max_attempts=1)\n"
  },
  {
    "path": "test/tests/functional/pbs_sched_badstate.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport copy\nfrom tests.functional import *\n\n\nclass TestSchedBadstate(TestFunctional):\n\n    def test_sched_badstate_subjob(self):\n        \"\"\"\n        This test checks whether the scheduler goes into an infinite loop\n        when the following conditions are met:\n        - Kill a mom\n        - mark the mom's state as free\n        - submit an array job\n        - check the sched log for \"Leaving Scheduling Cycle\" from the time\n          the array job was submitted.\n        If we are unable to find a log match, the scheduler is in an\n        endless loop and the test case has failed.\n        \"\"\"\n\n        self.mom.signal('-KILL')\n\n        attr = {'state': 'free', 'resources_available.ncpus': '2'}\n        self.server.manager(MGR_CMD_SET, NODE, attr, self.mom.shortname)\n\n        attr = {'scheduling': 'False'}\n        self.server.manager(MGR_CMD_SET, SERVER, attr)\n\n        j1 = Job(TEST_USER)\n        j1.set_attributes({'Resource_List.ncpus': '2', ATTR_J: '1-3'})\n        j1id = self.server.submit(j1)\n\n        now = time.time()\n\n        attr = {'scheduling': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, attr)\n\n        self.scheduler.log_match(\"Leaving Scheduling Cycle\",\n                                 starttime=now,\n                                 interval=1)\n        self.server.delete(j1id)\n\n    def test_sched_unknown_node_state(self):\n        \"\"\"\n        Test to see if the scheduler reports node states as 'Unknown'\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events': 2047})\n\n        # free is when all ncpus are free\n        self.server.expect(NODE, {'state': 'free'}, id=self.mom.shortname)\n        self.scheduler.log_match(\"Unknown Node State\",\n                                 existence=False, max_attempts=2)\n        ncpus = self.server.status(NODE)[0]['resources_available.ncpus']\n        if self.mom.is_cpuset_mom():\n            vnode_id = self.server.status(NODE)[1]['id']\n            vnode_val = 'vnode=' + vnode_id\n            ncpus = self.server.status(NODE)[1]['resources_available.ncpus']\n            a = {'Resource_List.select': vnode_val + ':ncpus=' + ncpus}\n        else:\n            a = {'Resource_List.select': '1:ncpus=' + ncpus}\n        b = a.copy()\n        J = Job(attrs=a)\n        jid1 = self.server.submit(J)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        if self.mom.is_cpuset_mom():\n            self.server.expect(VNODE, {'state': 'job-busy'}, id=vnode_id)\n        else:\n            self.server.expect(NODE, {'state': 'job-busy'},\n                               id=self.mom.shortname)\n        self.scheduler.log_match(\"Unknown Node State\",\n                                 existence=False, max_attempts=2)\n\n        # maintenance is when a job has been admin-suspended\n        self.server.sigjob(jid1, 'admin-suspend')\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid1)\n        if self.mom.is_cpuset_mom():\n            self.server.expect(VNODE, {'state': 'maintenance'}, id=vnode_id)\n        else:\n            self.server.expect(NODE, {'state': 'maintenance'},\n                               id=self.mom.shortname)\n        self.scheduler.log_match(\"Unknown Node State\",\n                                 existence=False, max_attempts=2)\n\n        self.server.delete(jid1, wait=True)\n\n        # job-exclusive is when a job requests place=excl\n        b['Resource_List.place'] = 'excl'\n        J = Job(attrs=b)\n        jid2 = self.server.submit(J)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n\n        if self.mom.is_cpuset_mom():\n            self.server.expect(VNODE, {'state': 'job-exclusive'}, id=vnode_id)\n        else:\n            self.server.expect(NODE, {'state': 'job-exclusive'},\n                               id=self.mom.shortname)\n        self.scheduler.log_match(\"Unknown Node State\",\n                                 existence=False, max_attempts=2)\n\n        self.server.delete(jid2, wait=True)\n\n        # resv-exclusive is when a reservation requests -lplace=excl\n        st = time.time() + 30\n        et = st + 30\n        b['reserve_start'] = st\n        b['reserve_end'] = et\n        R = Reservation(attrs=b)\n        rid = self.server.submit(R)\n        self.server.expect(RESV, {'reserve_state':\n                                  (MATCH_RE, 'RESV_CONFIRMED|2')}, id=rid)\n\n        self.server.expect(RESV, {'reserve_state':\n                                  (MATCH_RE, 'RESV_RUNNING|5')},\n                           id=rid, offset=30)\n\n        if self.mom.is_cpuset_mom():\n            self.server.expect(VNODE, {'state': 'resv-exclusive'}, id=vnode_id)\n        else:\n            self.server.expect(NODE, {'state': 'resv-exclusive'},\n                               id=self.mom.shortname)\n        self.scheduler.log_match(\"Unknown Node State\",\n                                 existence=False, max_attempts=2)\n        self.server.delete(rid)\n\n        # Multiple node states, e.g. down + job-busy\n        J = Job(attrs=a)\n        jid3 = self.server.submit(J)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n        self.mom.signal('-KILL')\n        if self.mom.is_cpuset_mom():\n            self.server.expect(VNODE, {'state': 'down,job-busy'},\n                               id=vnode_id)\n        else:\n            self.server.expect(NODE, {'state': 'down,job-busy'},\n                               id=self.mom.shortname)\n        self.scheduler.log_match(\"Unknown Node State\",\n                                 existence=False, max_attempts=2)\n"
  },
  {
    "path": "test/tests/functional/pbs_sched_fifo.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestSchedFifo(TestFunctional):\n    \"\"\"\n    Test suite for FIFO scheduling\n    \"\"\"\n\n    def test_sched_fifo(self):\n        \"\"\"\n        Check that FIFO works.\n        \"\"\"\n\n        # Configure sched for FIFO\n        self.scheduler.set_sched_config({'strict_ordering': 'True',\n                                         'by_queue': 'False'})\n\n        # Create a new queue to test FIFO\n        queue_attrib = {ATTR_qtype: 'execution',\n                        ATTR_start: 'True',\n                        ATTR_enable: 'True'}\n        self.server.manager(MGR_CMD_CREATE,\n                            QUEUE, queue_attrib, id=\"workq2\")\n\n        # Set ncpus to 2 so it is easy to test, and scheduling off\n        self.server.manager(MGR_CMD_SET,\n                            NODE, {'resources_available.ncpus': 2},\n                            self.mom.shortname)\n        self.server.manager(MGR_CMD_SET,\n                            SERVER, {'scheduling': 'False'})\n\n        # Submit 3 jobs: j1 (workq), j2 (workq2), j3 (workq)\n        j1 = Job(TEST_USER, attrs={\n            ATTR_queue: \"workq\"})\n        j2 = Job(TEST_USER, attrs={\n            ATTR_queue: \"workq2\"})\n        j3 = Job(TEST_USER, attrs={\n            ATTR_queue: \"workq\"})\n\n        j_id1 = self.server.submit(j1)\n        j_id2 = 
self.server.submit(j2)\n        j_id3 = self.server.submit(j3)\n\n        # Turn scheduling on again\n        self.server.manager(MGR_CMD_SET,\n                            SERVER, {'scheduling': 'True'})\n\n        # j1 and j2 should be running\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=j_id1, max_attempts=10)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=j_id2, max_attempts=10)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=j_id3, max_attempts=10)\n"
  },
  {
    "path": "test/tests/functional/pbs_sched_preempt_enforce_resumption.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestSchedPreemptEnforceResumption(TestFunctional):\n    \"\"\"\n    Test sched_preempt_enforce_resumption working\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n\n        a = {ATTR_qtype: 'Execution', ATTR_enable: 'True',\n             ATTR_start: 'True', ATTR_p: '151'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, \"expressq\")\n\n        a = {ATTR_sched_preempt_enforce_resumption: True}\n        self.server.manager(MGR_CMD_SET, SCHED, a)\n\n        a = {'job_history_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n    def test_filler_job_higher_walltime(self):\n        \"\"\"\n        This test confirms that the filler job does not run if it conflicts\n        with running of a suspended job.\n        \"\"\"\n        a = {ATTR_rescavail + '.ncpus': 2}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_l + '.select': '1:ncpus=2',\n                           ATTR_l + '.walltime': 80})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        j2 = Job(TEST_USER)\n        j2.set_attributes({ATTR_l + '.select': '1:ncpus=1',\n                           ATTR_q: 'expressq',\n                           ATTR_l + '.walltime': 
30})\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n\n        j3 = Job(TEST_USER)\n        j3.set_attributes({ATTR_l + '.select': '1:ncpus=1',\n                           ATTR_l + '.walltime': 90})\n        jid3 = self.server.submit(j3)\n        logmsg = \";Job would conflict with reservation or top job\"\n        self.scheduler.log_match(jid3 + logmsg)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid3)\n\n    def test_suspended_job_ded_time_calendared(self):\n        \"\"\"\n        This test confirms that a suspended job becomes top job when unable to\n        resume due to conflict with dedicated time.\n        \"\"\"\n        a = {ATTR_rescavail + '.ncpus': 2}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n\n        now = int(time.time())\n        temp = 60 - now % 60\n        start = now + 180 + temp\n        end = start + 120\n        self.scheduler.add_dedicated_time(start=start, end=end)\n\n        j1 = Job(TEST_USER)\n        jtime = int(time.time())\n        j1.set_attributes({ATTR_l + '.select': '1:ncpus=2',\n                           ATTR_l + '.walltime': start - jtime - 10})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        j2 = Job(TEST_USER)\n        j2.set_attributes({ATTR_l + '.select': '1:ncpus=1',\n                           ATTR_l + '.walltime': 60,\n                           ATTR_q: 'expressq'})\n        j2.set_sleep_time(60)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid1)\n\n        self.server.tracejob_match(\n            msg='Job is a top job and will run at', id=jid1)\n\n        attr = 'estimated.start_time'\n        stat = self.server.status(JOB, attr, id=jid1)\n        est_val = stat[0][attr]\n        est_str = 
time.strptime(est_val, '%c')\n        est_start_time = int(time.mktime(est_str))\n\n        self.assertGreaterEqual(est_start_time, end)\n\n    def test_filler_job_lesser_walltime(self):\n        \"\"\"\n        This test confirms that the filler job does run when the walltime does\n        not conflict with running of a suspended job.\n        \"\"\"\n        a = {ATTR_rescavail + '.ncpus': 4}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_l + '.select': '1:ncpus=4',\n                           ATTR_l + '.walltime': 50})\n        jid1 = self.server.submit(j1)\n\n        j2 = Job(TEST_USER)\n        j2.set_attributes({ATTR_l + '.select': '1:ncpus=1',\n                           ATTR_l + '.walltime': 80})\n        jid2 = self.server.submit(j2)\n\n        j3 = Job(TEST_USER)\n        j3.set_attributes({ATTR_l + '.select': '1:ncpus=1',\n                           ATTR_l + '.walltime': 50})\n        jid3 = self.server.submit(j3)\n\n        j4 = Job(TEST_USER)\n        j4.set_attributes({ATTR_l + '.select': '1:ncpus=1',\n                           ATTR_l + '.walltime': 150})\n        jid4 = self.server.submit(j4)\n\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid3)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid4)\n\n        j5 = Job(TEST_USER)\n        j5.set_attributes({ATTR_l + '.select': '1:ncpus=1',\n                           ATTR_q: 'expressq',\n                           ATTR_l + '.walltime': 100})\n        jid5 = self.server.submit(j5)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid3)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid4)\n        self.server.expect(JOB, {ATTR_state: 'R'}, 
id=jid5)\n\n        logmsg = \";Job would conflict with reservation or top job\"\n        self.scheduler.log_match(jid4 + logmsg)\n\n    def test_filler_job_suspend(self):\n        \"\"\"\n        This test confirms that the filler gets suspended by a high\n        priority job.\n        \"\"\"\n        a = {ATTR_rescavail + '.ncpus': 4}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n\n        j1 = Job(TEST_USER)\n        j1.set_attributes({ATTR_l + '.select': '1:ncpus=4',\n                           ATTR_l + '.walltime': 90})\n        jid1 = self.server.submit(j1)\n\n        j2 = Job(TEST_USER)\n        j2.set_attributes({ATTR_l + '.select': '1:ncpus=2',\n                           ATTR_l + '.walltime': 30})\n        jid2 = self.server.submit(j2)\n\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid2)\n\n        j3 = Job(TEST_USER)\n        j3.set_attributes({ATTR_l + '.select': '1:ncpus=2',\n                           ATTR_q: 'expressq',\n                           ATTR_l + '.walltime': 50})\n        jid3 = self.server.submit(j3)\n\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid3)\n\n        j4 = Job(TEST_USER)\n        j4.set_sleep_time(30)\n        j4.set_attributes({ATTR_l + '.select': '1:ncpus=2',\n                           ATTR_q: 'expressq',\n                           ATTR_l + '.walltime': 30})\n        jid4 = self.server.submit(j4)\n\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid1)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid3)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid4)\n\n        self.server.expect(JOB, {ATTR_state: 'F'}, id=jid4, extend='x',\n                           offset=5, interval=2)\n        self.server.expect(JOB, 
{ATTR_state: 'S'}, id=jid1)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid3)\n\n        logmsg = \";Job would conflict with reservation or top job\"\n        self.scheduler.log_match(jid2 + logmsg)\n\n        self.server.expect(JOB, {ATTR_state: 'F'}, id=jid3, extend='x',\n                           offset=15, interval=2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid2)\n\n        self.server.expect(JOB, {ATTR_state: 'F'}, id=jid1, extend='x',\n                           offset=30, interval=2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n\n    def test_preempted_job_server_soft_limits(self):\n        \"\"\"\n        This test confirms that a preempted job remains suspended if it has\n        violated server soft limits\n        \"\"\"\n        a = {ATTR_rescavail + '.ncpus': 4}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n\n        a = {'max_run_res_soft.ncpus': \"[u:\" + str(TEST_USER1) + \"=2]\"}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        p = '\"express_queue, normal_jobs, server_softlimits\"'\n        a = {'preempt_prio': p}\n        self.server.manager(MGR_CMD_SET, SCHED, a)\n\n        j1 = Job(TEST_USER1)\n        j1.set_attributes({ATTR_l + '.select': '1:ncpus=4',\n                           ATTR_l + '.walltime': 50})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        j2 = Job(TEST_USER)\n        j2.set_attributes({ATTR_l + '.select': '1:ncpus=2',\n                           ATTR_q: 'expressq',\n                           ATTR_l + '.walltime': 20})\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n\n        j3 = Job(TEST_USER)\n        j3.set_attributes({ATTR_l + '.select': 
'1:ncpus=2',\n                           ATTR_l + '.walltime': 25})\n        jid3 = self.server.submit(j3)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid3)\n\n        self.server.expect(JOB, {ATTR_state: 'F'}, id=jid2, extend='x',\n                           offset=30, interval=2)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid3)\n\n        self.server.expect(JOB, {ATTR_state: 'F'}, id=jid3, extend='x',\n                           offset=30, interval=2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n    def test_preempted_job_queue_soft_limits(self):\n        \"\"\"\n        This test confirms that a preempted job remains suspended if it has\n        violated queue soft limits\n        \"\"\"\n        a = {ATTR_rescavail + '.ncpus': 4}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n\n        a = {'max_run_res_soft.ncpus': \"[u:\" + str(TEST_USER1) + \"=2]\"}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, 'workq')\n\n        p = '\"express_queue, normal_jobs, queue_softlimits\"'\n        a = {'preempt_prio': p}\n        self.server.manager(MGR_CMD_SET, SCHED, a)\n\n        j1 = Job(TEST_USER1)\n        j1.set_attributes({ATTR_l + '.select': '1:ncpus=4',\n                           ATTR_l + '.walltime': 50})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        j2 = Job(TEST_USER)\n        j2.set_attributes({ATTR_l + '.select': '1:ncpus=2',\n                           ATTR_q: 'expressq',\n                           ATTR_l + '.walltime': 20})\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n\n        j3 = Job(TEST_USER)\n        
j3.set_attributes({ATTR_l + '.select': '1:ncpus=2',\n                           ATTR_l + '.walltime': 25})\n        jid3 = self.server.submit(j3)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid3)\n\n        self.server.expect(JOB, {ATTR_state: 'F'}, id=jid2, extend='x',\n                           offset=30, interval=2)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid3)\n\n        self.server.expect(JOB, {ATTR_state: 'F'}, id=jid3, extend='x',\n                           offset=30, interval=2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n    def test_filler_jobs_with_no_walltime(self):\n        \"\"\"\n        This test confirms that filler jobs with no walltime remain queued\n        \"\"\"\n        a = {ATTR_rescavail + '.ncpus': 4}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n\n        j1 = Job(TEST_USER1)\n        j1.set_attributes({ATTR_l + '.select': '1:ncpus=4',\n                           ATTR_l + '.walltime': 20})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        j2 = Job(TEST_USER)\n        j2.set_attributes({ATTR_l + '.select': '1:ncpus=2',\n                           ATTR_q: 'expressq',\n                           ATTR_l + '.walltime': 8})\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n\n        j3 = Job(TEST_USER)\n        j3.set_attributes({ATTR_l + '.select': '1:ncpus=2'})\n        jid3 = self.server.submit(j3)\n\n        j4 = Job(TEST_USER)\n        j4.set_attributes({ATTR_l + '.select': '1:ncpus=2'})\n        jid4 = self.server.submit(j4)\n\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid1)\n        
self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid3)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid4)\n\n    def test_filler_stf(self):\n        \"\"\"\n        Test that confirms filler shrink to fit jobs will shrink correctly\n        \"\"\"\n        a = {'resources_available.ncpus': 3}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        a = {ATTR_l + '.select': '1:ncpus=3',\n             ATTR_l + '.walltime': 50}\n        jid1 = self.server.submit(Job(attrs=a))\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        a = {ATTR_l + '.select': '1:ncpus=1',\n             ATTR_l + '.walltime': 115,\n             ATTR_q: 'expressq'}\n        jid2 = self.server.submit(Job(attrs=a))\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid1)\n\n        a = {ATTR_l + '.select': '1:ncpus=1',\n             ATTR_l + '.min_walltime': 70,\n             ATTR_l + '.max_walltime': 90}\n        jid3 = self.server.submit(Job(attrs=a))\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid3)\n        self.scheduler.log_match('Job;%s;Job will run for duration=00:01:' %\n                                 (jid3))\n\n        a = {ATTR_l + '.select': '1:ncpus=1',\n             ATTR_l + '.min_walltime': '01:00',\n             ATTR_l + '.max_walltime': '10:00'}\n        jid4 = self.server.submit(Job(attrs=a))\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid4)\n        self.scheduler.log_match('Job;%s;Job will run for duration=00:01:' %\n                                 (jid4))\n\n        a = {ATTR_l + '.select': '1:ncpus=1',\n             ATTR_l + '.min_walltime': '02:30',\n             ATTR_l + '.max_walltime': '05:00'}\n        jid5 = self.server.submit(Job(attrs=a))\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid5)\n\n        stat = self.server.status(JOB, id=jid1)[0]\n        
j1start = datetime.datetime.strptime(stat['estimated.start_time'],\n                                             '%c')\n\n        stat = self.server.status(JOB, id=jid3)[0]\n        t = datetime.datetime.strptime(stat[ATTR_l + '.walltime'], '%H:%M:%S')\n        j3dur = datetime.timedelta(hours=t.hour,\n                                   minutes=t.minute,\n                                   seconds=t.second)\n        j3start = datetime.datetime.strptime(stat[ATTR_stime], '%c')\n        self.assertGreaterEqual(j1start, j3start + j3dur)\n        self.assertGreaterEqual(j3dur.total_seconds(), 70)\n        self.assertLessEqual(j3dur.total_seconds(), 90)\n\n        stat = self.server.status(JOB, id=jid4)[0]\n        t = datetime.datetime.strptime(stat[ATTR_l + '.walltime'], '%H:%M:%S')\n        j4dur = datetime.timedelta(hours=t.hour,\n                                   minutes=t.minute,\n                                   seconds=t.second)\n        j4start = datetime.datetime.strptime(stat[ATTR_stime], '%c')\n        self.assertEqual(j4start + j4dur, j1start)\n"
  },
  {
    "path": "test/tests/functional/pbs_sched_rerun.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestSchedRerun(TestFunctional):\n    \"\"\"\n    Tests to verify scheduling of rerun jobs.\n    \"\"\"\n\n    @requirements(num_moms=2)\n    def test_rerun_job_over_reservation(self):\n        \"\"\"\n        Test that a job will not run over a reservation/top job\n        after its first failed attempt to run (with used resources set).\n        \"\"\"\n        now = int(time.time())\n\n        usage_string = 'test requires two moms, ' + \\\n                       'use -p \"servers=M1,moms=M1:M2\"'\n\n        if len(self.moms.values()) != 2:\n            self.skip_test(usage_string)\n\n        self.momA = self.moms.values()[0]\n        self.momB = self.moms.values()[1]\n        self.hostA = self.momA.shortname\n        self.hostB = self.momB.shortname\n\n        if not self.hostA or not self.hostB:\n            self.skip_test(usage_string)\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'managers': (INCR, '%s@*' % TEST_USER)})\n\n        a = {'type': 'long', 'flag': 'i'}\n        r = 'foo'\n        self.server.manager(MGR_CMD_CREATE, RSC, a, id=r)\n\n        hook_body = f\"\"\"\nimport pbs\ne = pbs.event()\nif e.type == pbs.EXECJOB_BEGIN:\n    e.job.resources_used[\"foo\"] = 123\n    if e.job.run_count == 1:\n        for v in e.vnode_list:\n            if v == \"{self.hostA}\":\n            
    e.vnode_list[v].state = pbs.ND_OFFLINE\n        e.job.rerun()\n        e.reject(\"rerun job\")\ne.accept()\n\"\"\"\n\n        hook_name = \"rerun_job\"\n        a = {'event': 'execjob_begin',\n             'enabled': 'True'}\n        rv = self.server.create_import_hook(\n            hook_name, a, hook_body, overwrite=True)\n        self.assertTrue(rv)\n\n        a = {'reserve_start': now + 60,\n             'reserve_end': now + 7200}\n        h = [self.hostB]\n        r = Reservation(TEST_USER, attrs=a, hosts=h)\n\n        self.server.submit(r)\n\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.walltime': '24:00:00'}\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(1000)\n        jid = self.server.submit(j)\n\n        self.logger.info(\"Wait for the job to try to rerun (10 seconds)\")\n        time.sleep(10)\n\n        # the job should not run because the first mom is offline and\n        # the second mom is occupied by the maintenance reservation\n        a = {'job_state': 'Q'}\n        self.server.expect(JOB, a, id=jid)\n"
  },
  {
    "path": "test/tests/functional/pbs_sched_runjobwait.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nfrom tests.functional import *\n\n\nclass TestSchedJobRunWait(TestFunctional):\n    \"\"\"\n    Tests related to scheduler attribute job_run_wait\n    \"\"\"\n\n    def setup_scn(self, n):\n        \"\"\"\n        set up n multi-scheds for a test\n        \"\"\"\n        sc_quenames = []\n        for i in range(n):\n            scname = \"sc\" + str(i)\n            pname = \"P\" + str(i)\n            qname = \"wq\" + str(i)\n            sc_quenames.append([scname, qname])\n\n            a = {'partition': pname,\n                 'sched_host': self.server.hostname}\n            self.server.manager(MGR_CMD_CREATE, SCHED,\n                                a, id=scname)\n            self.scheds[scname].create_scheduler()\n            self.scheds[scname].start()\n            self.server.manager(MGR_CMD_SET, SCHED,\n                                {'log_events': 2047}, id=scname)\n\n            a = {'queue_type': 'execution',\n                 'started': 'True',\n                 'enabled': 'True',\n                 'partition': pname}\n            self.server.manager(MGR_CMD_CREATE, QUEUE, a, id=qname)\n            a = {'resources_available.ncpus': 1, 'partition': pname}\n            prefix = 'vnode' + str(i)\n            nname = prefix + \"[0]\"\n            self.mom.create_vnodes(a, 1, delall=False,\n                                   additive=True, 
vname=nname)\n        return sc_quenames\n\n    def test_throughput_mode_deprecated(self):\n        \"\"\"\n        Test that server logs throughput_mode as deprecated\n        \"\"\"\n        t1 = time.time()\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'throughput_mode': \"True\"}, id=\"default\")\n        msg = \"'throughput_mode' is being deprecated, \" +\\\n            \"it is recommended to use 'job_run_wait'\"\n        self.server.log_match(msg, starttime=t1)\n\n    def test_jobrunwait_throughput_clash(self):\n        \"\"\"\n        Test that job_run_wait and throughput_mode don't clash\n        \"\"\"\n        # Setting TP to True/False should set JRW correctly\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'throughput_mode': \"False\"}, id=\"default\")\n        self.server.expect(SCHED, {'job_run_wait': \"execjob_hook\"},\n                           id=\"default\")\n\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'throughput_mode': \"True\"}, id=\"default\")\n        self.server.expect(SCHED, {'job_run_wait': \"runjob_hook\"},\n                           id=\"default\")\n\n        # Setting job_run_wait to 'none' should just delete TP\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'job_run_wait': \"none\"}, id=\"default\")\n        rt = self.server.status(SCHED, id=\"default\")\n        self.assertNotIn('throughput_mode', rt[0].keys(),\n                         'throughput_mode displayed when not expected')\n\n        # Setting JRW to runjob/execjob should set TP correctly\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'job_run_wait': \"execjob_hook\"}, id=\"default\")\n        self.server.expect(SCHED, {'throughput_mode': \"False\"},\n                           id=\"default\")\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'job_run_wait': \"runjob_hook\"}, 
id=\"default\")\n        self.server.expect(SCHED, {'throughput_mode': \"True\"},\n                           id=\"default\")\n\n    def test_jobrunwait_default(self):\n        \"\"\"\n        Test that job_run_wait gets set to its default when unset\n        \"\"\"\n        # Unsetting job_run_wait should set both to default\n        self.server.manager(MGR_CMD_UNSET, SCHED,\n                            'job_run_wait', id=\"default\")\n        self.server.expect(SCHED, {'job_run_wait': 'runjob_hook'},\n                           id='default')\n        self.server.expect(SCHED, {'throughput_mode': \"True\"},\n                           id=\"default\")\n\n        # Unsetting TP should do the same\n        self.server.manager(MGR_CMD_UNSET, SCHED,\n                            'throughput_mode', id=\"default\")\n        self.server.expect(SCHED, {'job_run_wait': \"runjob_hook\"},\n                           id=\"default\")\n        self.server.expect(SCHED, {'throughput_mode': \"True\"},\n                           id=\"default\")\n\n    def test_valid_vals(self):\n        \"\"\"\n        Test that job_run_wait can only be set to its default values\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SCHED, {'job_run_wait': 'none'},\n                            id='default')\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'job_run_wait': 'runjob_hook'}, id='default')\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'job_run_wait': 'execjob_hook'}, id='default')\n        with self.assertRaises(PbsManagerError,\n                               msg=\"invalid str value was accepted\"):\n            self.server.manager(MGR_CMD_SET, SCHED,\n                                {'job_run_wait': 'badstr'}, id='default')\n\n        with self.assertRaises(PbsManagerError,\n                               msg=\"invalid int value was accepted\"):\n            self.server.manager(MGR_CMD_SET, SCHED,\n                       
         {'job_run_wait': 0}, id='default')\n\n    def test_multisched_multival(self):\n        \"\"\"\n        Test that multiple scheds can be configured with different values of\n        job_run_wait, and behave correctly\n        \"\"\"\n        sc_queue = self.setup_scn(3)\n        a = {\"scheduling\": \"False\", \"job_run_wait\": \"none\"}\n        self.server.manager(MGR_CMD_SET, SCHED, a, id=sc_queue[0][0])\n        a[\"job_run_wait\"] = \"runjob_hook\"\n        self.server.manager(MGR_CMD_SET, SCHED, a, id=sc_queue[1][0])\n        a[\"job_run_wait\"] = \"execjob_hook\"\n        self.server.manager(MGR_CMD_SET, SCHED, a, id=sc_queue[2][0])\n\n        hook_txt = \"\"\"\nimport pbs\n\nif pbs.event().job.id == '%s':\n    pbs.event().reject(\"rejecting first job\")\npbs.event().accept()\n\"\"\"\n        hk_attrs = {'event': 'runjob', 'enabled': 'True'}\n\n        # All of the scheds have a single 1-ncpu node\n        # Submit two 1-cpu jobs to each sched\n        # The runjob hook will reject the first job that's run by each sched\n        a = {\"queue\": sc_queue[0][1], \"Resource_List.ncpus\": \"1\"}\n        jid1 = self.server.submit(Job(attrs=a))\n        jid2 = self.server.submit(Job(attrs=a))\n        self.server.create_import_hook('rj', hk_attrs, hook_txt % (jid1))\n\n        # sched 1 with job_run_wait=none runs the first job without waiting\n        # for the runjob reject, so it doesn't run the second job.\n        # Ultimately, neither job should run\n        self.scheds[sc_queue[0][0]].run_scheduling_cycle()\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n\n        self.server.delete_hook('rj')\n        a[\"queue\"] = sc_queue[1][1]\n        jid3 = self.server.submit(Job(attrs=a))\n        jid4 = self.server.submit(Job(attrs=a))\n        self.server.create_import_hook('rj', hk_attrs, hook_txt % str(jid3))\n\n        # sched 2 with job_run_wait=runjob_hook should wait for runjob\n        # reject 
and then run the second job\n        self.scheds[sc_queue[1][0]].run_scheduling_cycle()\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid3)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid4)\n\n        self.server.delete_hook('rj')\n        a[\"queue\"] = sc_queue[1][1]\n        jid5 = self.server.submit(Job(attrs=a))\n        jid6 = self.server.submit(Job(attrs=a))\n        hk_attrs[\"event\"] = 'execjob_begin'\n        self.server.create_import_hook('ej', hk_attrs, hook_txt % str(jid5))\n\n        # sched 2 with job_run_wait=runjob_hook won't wait for execjob_begin\n        # reject, so it will run the first job and not run the second.\n        # Ultimately, no jobs will run\n        self.scheds[sc_queue[1][0]].run_scheduling_cycle()\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid5)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid6)\n\n        self.server.delete_hook('ej')\n        a[\"queue\"] = sc_queue[2][1]\n        jid7 = self.server.submit(Job(attrs=a))\n        jid8 = self.server.submit(Job(attrs=a))\n        hk_attrs[\"event\"] = 'execjob_begin'\n        self.server.create_import_hook('ej', hk_attrs, hook_txt % str(jid7))\n\n        # sched 3 with job_run_wait=execjob_hook should wait for the\n        # execjob_begin reject and then run the second job\n        self.scheds[sc_queue[2][0]].run_scheduling_cycle()\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid7)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid8)\n\n    def test_no_runjob_hook(self):\n        \"\"\"\n        Test that when there is no runjob hook configured, sched behaves as if\n        job_run_wait == none, even if it's set to \"runjob_hook\"\n        \"\"\"\n\n        a = {\"scheduling\": \"False\", \"job_run_wait\": \"runjob_hook\"}\n        self.server.manager(MGR_CMD_SET, SCHED, a, id=\"default\")\n\n        self.server.submit(Job())\n\n        t = time.time()\n        self.scheduler.run_scheduling_cycle()\n\n        # Check that server 
received PBS_BATCH_AsyrunJob, truly async request\n        logmsg = \"Type 23 request received\"\n        self.server.log_match(logmsg, starttime=t)\n\n    def test_with_runjob_hook(self):\n        \"\"\"\n        Test that when there is a runjob hook configured, sched doesn't\n        upgrade job_run_wait from \"runjob_hook\" to \"none\"\n        \"\"\"\n\n        a = {\"scheduling\": \"False\", \"job_run_wait\": \"runjob_hook\"}\n        self.server.manager(MGR_CMD_SET, SCHED, a, id=\"default\")\n\n        hook_txt = \"\"\"\nimport pbs\n\npbs.event().accept()\n\"\"\"\n        hk_attrs = {'event': 'runjob', 'enabled': 'True'}\n        self.server.create_import_hook('rj', hk_attrs, hook_txt)\n\n        self.server.submit(Job())\n\n        t = time.time()\n        self.scheduler.run_scheduling_cycle()\n\n        # Check that server received PBS_BATCH_AsyrunJob_ack request\n        self.server.log_match(\"Type 97 request received\", starttime=t)\n\n    @skip(\"issue 2330\")\n    def test_throughput_ok(self):\n        \"\"\"\n        Test that throughput_mode still works correctly\n        \"\"\"\n        self.server.manager(MGR_CMD_UNSET, SCHED,\n                            'job_run_wait', id=\"default\")\n\n        a = {'throughput_mode': \"True\", \"scheduling\": \"False\"}\n        self.server.manager(MGR_CMD_SET, SCHED, a, id=\"default\")\n\n        jid = self.server.submit(Job())\n\n        t = time.time()\n        self.scheduler.run_scheduling_cycle()\n        self.server.expect(JOB, {\"job_state\": \"R\"}, id=jid)\n\n        # Check that server received PBS_BATCH_AsyrunJob request\n        self.server.log_match(\"Type 23 request received\", starttime=t)\n\n        self.server.cleanup_jobs()\n\n        self.server.manager(MGR_CMD_SET, SCHED, {'throughput_mode': \"False\"},\n                            id=\"default\")\n        jid = self.server.submit(Job())\n        t = time.time()\n        self.scheduler.run_scheduling_cycle()\n        self.server.expect(JOB, 
{\"job_state\": \"R\"}, id=jid)\n\n        # Check that server received PBS_BATCH_RunJob request\n        self.server.log_match(\"Type 15 request received\", starttime=t)\n\n        hook_txt = \"\"\"\nimport pbs\n\npbs.event().accept()\n\"\"\"\n        hk_attrs = {'event': 'runjob', 'enabled': 'True'}\n        self.server.create_import_hook('rj', hk_attrs, hook_txt)\n\n        self.server.cleanup_jobs()\n\n        self.server.manager(MGR_CMD_SET, SCHED, {'throughput_mode': \"True\"},\n                            id=\"default\")\n        jid = self.server.submit(Job())\n        t = time.time()\n        self.scheduler.run_scheduling_cycle()\n        self.server.expect(JOB, {\"job_state\": \"R\"}, id=jid)\n\n        # Check that server received PBS_BATCH_AsyrunJob_ack request\n        self.server.log_match(\"Type 97 request received\", starttime=t)\n\n    def test_runhook_reject_comment_sched(self):\n        \"\"\"\n        Test that when a runjob hook rejects a job, with job_run_wait\n        unset, the job's comment is set correctly by sched\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {\"resources_available.ncpus\": 4},\n                            id=self.mom.shortname)\n\n        self.server.manager(MGR_CMD_UNSET, SCHED, \"job_run_wait\")\n\n        jid1 = self.server.submit(Job())\n        self.server.expect(JOB, {\"job_state\": \"R\"}, id=jid1)\n        jid2 = self.server.submit(Job())\n        self.server.expect(JOB, {\"job_state\": \"R\"}, id=jid2)\n        jid3 = self.server.submit(Job())\n        self.server.expect(JOB, {\"job_state\": \"R\"}, id=jid3)\n\n        hook_txt = \"\"\"\nimport pbs\n\ne = pbs.event()\nj = e.job\n\nif not j.Resource_List[\"walltime\"]:\n    e.reject(\"%s: no walltime specified\" % (e.hook_name) )\ne.accept()\"\"\"\n        hk_attrs = {'event': 'runjob'}\n        self.server.create_import_hook('rj', hk_attrs, hook_txt)\n\n        t1 = time.time()\n        jid4 = 
self.server.submit(Job())\n        self.server.expect(JOB, {\"job_state\": \"Q\"}, id=jid4)\n        a = {\"comment\": (MATCH_RE, \"no walltime specified\")}\n        self.server.expect(JOB, a, id=jid4)\n        self.server.log_match(\"Type 96 request\", starttime=t1, max_attempts=5)\n\n    def test_runhook_reject_comment_server(self):\n        \"\"\"\n        Test that when a runjob hook rejects a job, with job_run_wait\n        set to \"none\", the job's comment is set correctly by the server\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SCHED, {\"job_run_wait\": \"none\"})\n\n        hook_txt = \"\"\"\nimport pbs\n\ne = pbs.event()\nj = e.job\n\nif not j.Resource_List[\"walltime\"]:\n    e.reject(\"%s: no walltime specified\" % (e.hook_name) )\ne.accept()\"\"\"\n        hk_attrs = {'event': 'runjob'}\n        self.server.create_import_hook('rj', hk_attrs, hook_txt)\n\n        t1 = time.time()\n        jid = self.server.submit(Job())\n        self.server.expect(JOB, {\"job_state\": \"Q\"}, id=jid)\n        a = {\"comment\": (MATCH_RE, \"no walltime specified\")}\n        self.server.expect(JOB, a, id=jid)\n        self.server.log_match(\"Type 96 request\", starttime=t1, max_attempts=5,\n                              existence=False)\n"
  },
  {
    "path": "test/tests/functional/pbs_sched_signal.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestSchedSignal(TestFunctional):\n\n    def test_sigpipe(self):\n        \"\"\"\n        Test that pbs_sched receives a SIGPIPE correctly and it is not ignored\n        \"\"\"\n        self.scheduler.signal('-PIPE')\n        self.scheduler.log_match(\"We've received a sigpipe:\")\n"
  },
  {
    "path": "test/tests/functional/pbs_schedule_indirect_resources.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestIndirectResources(TestFunctional):\n    \"\"\"\n    Test that the scheduler resolves indirect resources correctly\n    \"\"\"\n\n    def config_complex_for_grouping(self, res, res_type='string', flag='h'):\n        \"\"\"\n        Configure the PBS complex for the node grouping test\n        \"\"\"\n        # Create a custom resource\n        attr = {\"type\": res_type, \"flag\": flag}\n        self.server.manager(MGR_CMD_CREATE, RSC, attr, id=res)\n\n        # Add resource to the resources line in sched_config\n        self.scheduler.add_resource(res)\n\n        # Set resource as the node_group_key\n        attr = {'node_group_enable': 'True', 'node_group_key': res}\n        self.server.manager(MGR_CMD_SET, SERVER, attr)\n\n    def submit_job(self, attr):\n        \"\"\"\n        Helper function to submit a sleep job with provided attributes\n        \"\"\"\n        job = Job(TEST_USER1, attr)\n        jobid = self.server.submit(job)\n\n        return (jobid, job)\n\n    def test_node_grouping_with_indirect_res(self):\n        \"\"\"\n        Test node grouping with indirect resources set on some nodes\n        Steps:\n        -> Configure the system to have 6 vnodes with custom resource 'foostr'\n        set as the node_group_key\n        -> Set 'foostr' to 'A', 'B', and 'C' on the first three vnodes\n        -> Set 
'foostr' for the next three vnodes to point to the first three\n        vnodes correspondingly\n        -> Verify that the last three vnodes are part of their respective\n        placement sets\n        \"\"\"\n\n        # Create 6 vnodes\n        attr = {'resources_available.ncpus': 1}\n        self.mom.create_vnodes(attr, 6)\n        vn = ['%s[%d]' % (self.mom.shortname, i) for i in range(6)]\n        # Configure a system with 6 vnodes and 'foostr' as node_group_key\n        self.config_complex_for_grouping('foostr')\n\n        # Set 'foostr' to 'A', 'B' and 'C' respectively for the first\n        # three vnodes\n        attr = {'resources_available.foostr': 'A'}\n        self.server.manager(MGR_CMD_SET, NODE, attr, vn[0])\n        attr = {'resources_available.foostr': 'B'}\n        self.server.manager(MGR_CMD_SET, NODE, attr, vn[1])\n        attr = {'resources_available.foostr': 'C'}\n        self.server.manager(MGR_CMD_SET, NODE, attr, vn[2])\n\n        # Set 'foostr' for the last three vnodes as an indirect resource\n        # pointing to the first three vnodes correspondingly\n        attr = {'resources_available.foostr': '@' + vn[0]}\n        self.server.manager(MGR_CMD_SET, NODE, attr, vn[3])\n        attr = {'resources_available.foostr': '@' + vn[1]}\n        self.server.manager(MGR_CMD_SET, NODE, attr, vn[4])\n        attr = {'resources_available.foostr': '@' + vn[2]}\n        self.server.manager(MGR_CMD_SET, NODE, attr, vn[5])\n\n        # Submit 3 jobs requesting 2 vnodes and check they ran on the nodes\n        # within the same group\n        attr = {'Resource_List.select': '2:ncpus=1'}\n\n        # since there are 6 vnodes and the test grouped them on the foostr\n        # resource, we now have groups in pairs of vnodes 0+3, 1+4, 2+5\n        # check that the jobs are running on the correct vnode groups.\n        for i in range(3):\n            jid, j = self.submit_job(attr)\n            self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n            self.server.status(JOB, 
'exec_vnode', jid)\n            vn = j.get_vnodes()\n            self.assertEqual(int(vn[0][-2]) + 3, int(vn[1][-2]))\n"
  },
  {
    "path": "test/tests/functional/pbs_server_hook_attr.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestServerHookAttr(TestFunctional):\n\n    def test_queuejob_hook_requestor_host(self):\n        \"\"\"\n        Check that the requestor_host attribute is set properly in the hook.\n        \"\"\"\n        hook_body = f\"\"\"\nimport pbs\ne = pbs.event()\nif e.requestor_host == \"{list(self.servers.values())[0].name}\":\n    e.accept()\ne.reject()\n\"\"\"\n        hook_name = \"test_requestor_host\"\n        a = {'event': 'queuejob',\n             'enabled': 'True'}\n        rv = self.server.create_import_hook(\n            hook_name, a, hook_body, overwrite=True)\n        self.assertTrue(rv)\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduling': 'False'})\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        a = {'job_state': 'Q'}\n        self.server.expect(JOB, a, id=jid)\n"
  },
  {
    "path": "test/tests/functional/pbs_server_periodic_hook.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass Test_server_periodic_hook(TestFunctional):\n    hook_string = \"\"\"\nimport pbs\nimport time\ne = pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"periodic hook started at %%d\" %% time.time())\ntime.sleep(%d)\npbs.logmsg(pbs.LOG_DEBUG, \"periodic hook ended at %%d\" %% time.time())\n%s\n\"\"\"\n\n    def create_hook(self, accept, sleep_time):\n        \"\"\"\n        Function to create a hook script.\n        It accepts 2 arguments:\n        - accept        If set to True, the hook will accept, else reject\n        - sleep_time        Number of seconds we want the hook to sleep\n        \"\"\"\n        hook_action = \"e.accept()\"\n        if accept is False:\n            hook_action = \"e.reject()\"\n        final_hook = self.hook_string % (int(sleep_time), hook_action)\n        return final_hook\n\n    start_msg = \"periodic hook started at \"\n    end_msg = \"periodic hook ended at \"\n\n    def get_timestamp(self, msg):\n        a = msg.rsplit(' ', 1)\n        return int(a[1])\n\n    def check_next_occurances(self, count, freq,\n                              hook_run_time, check_for_hook_end, alarm=0):\n        \"\"\"\n        Helper function to check the occurrences of the hook by matching\n        their messages in the server logs.\n        It needs 5 arguments:\n        - count                     to know how many times to 
repeat\n                                    checking these logs\n        - freq                      the frequency set in the PBS server to run\n                                    this hook\n        - hook_run_time             the amount of time the hook takes to run\n        - check_for_hook_end        if true, the function checks for\n                                    hook end messages\n        - alarm                     hook alarm\n        \"\"\"\n        occurance = 0\n        # time after which we want to start matching the log\n        search_after = time.time()\n        intr = freq\n        while occurance < count:\n            msg_expected = self.start_msg\n            msg = self.server.log_match(msg_expected,\n                                        interval=(intr + 1),\n                                        starttime=search_after)\n            if occurance == 0:\n                time_expected = time_logged = self.get_timestamp(msg[1])\n            else:\n                time_logged = self.get_timestamp(msg[1])\n                self.assertFalse(time_logged - time_expected > 1)\n\n            if check_for_hook_end is True:\n                time_expected = time_logged + hook_run_time\n                # set it to a second before we expect the hook to end\n                search_after = time_expected - 1\n                msg_expected = self.end_msg\n                msg = self.server.log_match(msg_expected, max_attempts=2,\n                                            interval=(hook_run_time + 1),\n                                            starttime=search_after)\n                time_logged = self.get_timestamp(msg[1])\n                if alarm != 0:\n                    self.assertLessEqual(time_logged - time_expected,\n                                         alarm - hook_run_time)\n                else:\n                    self.assertLessEqual(time_logged - time_expected, 1)\n\n                if hook_run_time <= freq:\n                    
intr = freq - hook_run_time\n                else:\n                    intr = freq - (hook_run_time % freq)\n            else:\n                if hook_run_time <= freq:\n                    intr = freq\n                else:\n                    intr = hook_run_time + (freq - (hook_run_time % freq))\n\n            # we just matched the hook start/end message; the next start\n            # message is surely after time_expected.\n            search_after = time_expected + 1\n            time_expected = time_logged + intr\n            occurance += 1\n\n    def test_sp_hook_run(self):\n        \"\"\"\n        Submit a server periodic hook that accepts\n        \"\"\"\n        hook_name = \"medium_hook\"\n        freq = 20\n        hook_run_time = 10\n        scr = self.create_hook(True, hook_run_time)\n        attrs = {'event': \"periodic\"}\n        rv = self.server.create_import_hook(\n            hook_name,\n            attrs,\n            scr,\n            overwrite=True)\n        self.assertTrue(rv)\n        attrs = {'freq': freq}\n        rv = self.server.manager(MGR_CMD_SET, HOOK, attrs, hook_name)\n        self.assertEqual(rv, 0)\n        attrs = {'enabled': 'True'}\n        rv = self.server.manager(MGR_CMD_SET, HOOK, attrs, hook_name)\n        self.assertEqual(rv, 0)\n        self.check_next_occurances(2, freq, hook_run_time, True)\n\n    def test_sp_hook_reject(self):\n        \"\"\"\n        Submit a server periodic hook that rejects\n        \"\"\"\n        hook_name = \"reject_hook\"\n        freq = 20\n        hook_run_time = 10\n        scr = self.create_hook(False, hook_run_time)\n        attrs = {'event': \"periodic\"}\n        msg_expected = \";periodic request rejected by \" + \"'\" + hook_name + \"'\"\n        rv = self.server.create_import_hook(\n            hook_name,\n            attrs,\n            scr,\n            overwrite=True)\n        self.assertTrue(rv)\n        attrs = {'freq': freq}\n        rv = self.server.manager(MGR_CMD_SET, HOOK, attrs, 
hook_name)\n        self.assertEqual(rv, 0)\n        attrs = {'enabled': 'True'}\n        rv = self.server.manager(MGR_CMD_SET, HOOK, attrs, hook_name)\n        self.assertEqual(rv, 0)\n        self.check_next_occurances(2, freq, hook_run_time, True)\n        self.server.log_match(msg_expected, interval=1)\n\n    def test_sp_hook_long_run(self):\n        \"\"\"\n        Submit a hook that runs longer than the frequency set for the hook and\n        see if the hook starts at the next subsequent freq interval.\n        In this case the hook runs for 20 seconds and freq is 6. So if a hook\n        starts at time 'x' then its next occurrence should be at 'x + 24'.\n        \"\"\"\n        hook_name = \"long_hook\"\n        freq = 6\n        hook_run_time = 20\n        scr = self.create_hook(True, hook_run_time)\n        attrs = {'event': \"periodic\"}\n        rv = self.server.create_import_hook(\n            hook_name,\n            attrs,\n            scr,\n            overwrite=True)\n        self.assertTrue(rv)\n        attrs = {'freq': freq}\n        rv = self.server.manager(MGR_CMD_SET, HOOK, attrs, hook_name)\n        self.assertEqual(rv, 0)\n        attrs = {'enabled': 'True'}\n        rv = self.server.manager(MGR_CMD_SET, HOOK, attrs, hook_name)\n        self.assertEqual(rv, 0)\n        self.check_next_occurances(2, freq, hook_run_time, True)\n\n    def test_sp_hook_aborts_after_short_alarm(self):\n        \"\"\"\n        Submit a hook that runs longer than the frequency set for the hook and\n        see if the hook starts at the next subsequent freq interval.\n        In this case the hook runs for 20 seconds, freq is 15 and alarm is 12.\n        So if a hook starts at time 'x' then its next occurrence should be\n        at 'x + 15' because the alarm is going to kill it at the 12th second\n        of its run.\n        \"\"\"\n        hook_name = \"long_hook\"\n        freq = 15\n        alarm = 12\n        hook_run_time = alarm\n        scr = self.create_hook(True, 20)\n      
  attrs = {'event': \"periodic\"}\n        rv = self.server.create_import_hook(\n            hook_name,\n            attrs,\n            scr,\n            overwrite=True)\n        self.assertTrue(rv)\n        attrs = {'freq': freq, 'alarm': alarm}\n        rv = self.server.manager(MGR_CMD_SET, HOOK, attrs, hook_name)\n        self.assertEqual(rv, 0)\n        attrs = {'enabled': 'True'}\n        rv = self.server.manager(MGR_CMD_SET, HOOK, attrs, hook_name)\n        self.assertEqual(rv, 0)\n        self.check_next_occurances(2, freq, hook_run_time, False)\n\n    def test_sp_hook_aborts_after_long_alarm(self):\n        \"\"\"\n        Submit a hook that runs longer than the frequency set for the hook and\n        see if the hook starts at the next subsequent freq interval.\n        In this case the hook runs for 20 seconds, freq is 12 and alarm is 15.\n        So if a hook starts at time 'x' then its next occurrence should be\n        at 'x + 12' but it is going to run and get killed by the alarm at\n        'x + 15' and then start execution again at 'x + 24'.\n        \"\"\"\n        hook_name = \"long_hook\"\n        freq = 12\n        alarm = 15\n        hook_run_time = alarm\n        scr = self.create_hook(True, 20)\n        attrs = {'event': \"periodic\"}\n        rv = self.server.create_import_hook(\n            hook_name,\n            attrs,\n            scr,\n            overwrite=True)\n        self.assertTrue(rv)\n        attrs = {'freq': freq, 'alarm': alarm}\n        rv = self.server.manager(MGR_CMD_SET, HOOK, attrs, hook_name)\n        self.assertEqual(rv, 0)\n        attrs = {'enabled': 'True'}\n        rv = self.server.manager(MGR_CMD_SET, HOOK, attrs, hook_name)\n        self.assertEqual(rv, 0)\n        self.check_next_occurances(2, freq, hook_run_time, False)\n\n    def test_sp_with_queuejob(self):\n        \"\"\"\n        This test case checks that periodic and queuejob\n        events can be set for the same hook\n        \"\"\"\n        events = 
[\"periodic\", \"queuejob\"]\n        hook_name = \"TestHook\"\n        hook_attrib = {'event': events, 'freq': 100}\n        scr = self.create_hook(True, 10)\n        retval = self.server.create_import_hook(hook_name,\n                                                hook_attrib,\n                                                scr,\n                                                overwrite=True)\n        self.assertTrue(retval)\n        attrs = {'enabled': 'True'}\n        self.server.manager(MGR_CMD_SET, HOOK, attrs, hook_name)\n\n        job = Job(TEST_USER1, attrs={ATTR_l: 'select=1:ncpus=1',\n                                     ATTR_l: 'walltime=1:00:00'})\n        jid = self.server.submit(job)\n        self.server.log_match(self.start_msg, interval=3)\n        self.server.log_match(self.end_msg, interval=3)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n    def test_alarm_more_than_freq(self):\n        \"\"\"\n        Test when alarm is more than freq. Ensure multiple\n        instances do not get launched\n        \"\"\"\n        hook_name = \"medium_hook\"\n        scr = self.create_hook(accept=True, sleep_time=10)\n        attrs = {'event': 'periodic', 'alarm': 15, 'freq': 6}\n        self.server.create_import_hook(hook_name, attrs, scr, overwrite=True)\n        self.check_next_occurances(2, freq=6, hook_run_time=10,\n                                   check_for_hook_end=True, alarm=15)\n\n    def test_check_for_negative_freq(self):\n        \"\"\"\n        Check for the correct messages thrown if negative values of freq is set\n        \"\"\"\n        hook_name = \"med_hook\"\n        attrs = {'event': \"periodic\", 'freq': \"0\"}\n        match_str1 = \"set_hook_freq: freq value '0'\"\n        match_str1 += \" of a hook must be > 0\"\n        try:\n            self.server.create_hook(hook_name, attrs)\n        except PbsManagerError as e:\n            self.assertIn(match_str1, e.msg[0])\n            self.logger.info('Expected error: ' + 
match_str1)\n        else:\n            self.fail(\"Able to set freq to zero\")\n        attrs = {'enabled': \"False\", 'event': \"periodic\", 'freq': '120'}\n        self.server.manager(MGR_CMD_SET, HOOK, attrs, hook_name)\n        attrs = {'freq': \"-1\"}\n        match_str1 = \"set_hook_freq: freq value '-1'\"\n        match_str1 += \" of a hook must be > 0\"\n        try:\n            self.server.manager(MGR_CMD_SET, HOOK, attrs, hook_name)\n        except PbsManagerError as e:\n            self.assertIn(match_str1, e.msg[0])\n            self.logger.info('Expected error: ' + match_str1)\n        else:\n            self.fail(\"Able to set freq to a negative value\")\n\n    @timeout(600)\n    def test_with_other_hooks(self):\n        \"\"\"\n        Test that the periodic hook works fine with other hooks\n        \"\"\"\n        hook_name = \"periodic_hook\"\n        freq = 30\n        scr = self.create_hook(accept=True, sleep_time=25)\n        attrs = {'event': \"periodic\", 'alarm': \"28\"}\n        self.server.create_import_hook(hook_name, attrs, scr, overwrite=True)\n        start_time = time.time()\n        attrs = {'freq': 30, 'enabled': 'True'}\n        self.server.manager(MGR_CMD_SET, HOOK, attrs, hook_name)\n        expected_msg = \"periodic hook started at \"\n        self.server.log_match(expected_msg, starttime=start_time,\n                              interval=(freq + 1))\n        self.check_next_occurances(count=1, freq=freq, hook_run_time=25,\n                                   check_for_hook_end=False)\n        hook_name = \"exechost_periodic_hook3\"\n        freq = 8\n        hook_run_time = 5\n        scr = self.create_hook(True, sleep_time=5)\n        attrs = {'event': \"exechost_periodic\", 'alarm': \"7\", 'freq': \"8\",\n                 'enabled': 'True'}\n        self.server.create_import_hook(hook_name, attrs, scr)\n        start_time = time.time()\n        expected_msg = 
\"periodic hook started at \"\n        self.mom.log_match(expected_msg, interval=(freq + 1),\n                           starttime=start_time)\n        expected_msg = \"periodic hook ended at \"\n        self.mom.log_match(expected_msg, interval=(hook_run_time + 1),\n                           starttime=start_time)\n\n    def test_other_pbs_operations_work(self):\n        \"\"\"\n        Test that when periodic hook is launched PBS operations do not get\n        hampered\n        \"\"\"\n        hook_name = \"medium_hook\"\n        freq = 20\n        scr = self.create_hook(accept=True, sleep_time=15)\n        attrs = {'event': \"periodic\"}\n        self.server.create_import_hook(hook_name, attrs, scr)\n        attrs = {'alarm': \"18\", 'freq': freq}\n        self.server.manager(MGR_CMD_SET, HOOK, attrs, hook_name)\n        attrs = {'enabled': 'True'}\n        self.server.manager(MGR_CMD_SET, HOOK, attrs, hook_name)\n        self.start_msg = \";periodic hook started at \"\n        self.check_next_occurances(count=1, freq=freq, hook_run_time=25,\n                                   check_for_hook_end=False)\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.walltime': 3}\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(3)\n        jid1 = self.server.submit(j)\n        self.server.expect(JOB, 'queue', id=jid1, op=UNSET, offset=3)\n        self.server.log_match(jid1 + \";Exit_status=0\")\n        j1 = Job(TEST_USER)\n        jid2 = self.server.submit(j1)\n        self.server.delete(jid2)\n\n    def test_set_as_non_admin(self):\n        \"\"\"\n        Check for the correct messages thrown if user other\n        than pbsadmin tries to set\n        \"\"\"\n        hook_name = \"medium_hook\"\n        host_name = str(self.server.hostname)\n        self.server.create_hook(hook_name, attrs={'enabled': \"False\"})\n        attrs = {'event': \"periodic\", 'freq': '120'}\n        match_str1 = \"unauthorized to access hooks data from 
server\"\n        try:\n            self.server.manager(MGR_CMD_SET, HOOK, attrs, hook_name,\n                                runas=TEST_USER1)\n        except PbsManagerError as e:\n            self.assertIn(match_str1, e.msg[0])\n            self.logger.info('Expected error: ' + match_str1)\n        else:\n            msg = \"Able to create hook as other user\"\n            self.assertTrue(False, msg)\n        self.server.manager(MGR_CMD_SET, HOOK, attrs, hook_name)\n        self.server.manager(MGR_CMD_LIST, HOOK, {'freq': '120'}, hook_name)\n"
  },
  {
    "path": "test/tests/functional/pbs_set_enforcement.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestMomEnforcement(TestFunctional):\n    \"\"\"\n    This test suite tests enforcement settings on the MoM\n    \"\"\"\n\n    def test_set_enforcement(self):\n        \"\"\"\n        This test verifies that the MoM successfully handles the setting\n        of enforcement parameters in the config file\n        \"\"\"\n        self.mom.add_config(\n            {'$enforce delta_percent_over': '50',\n             '$enforce delta_cpufactor': '1.5',\n             '$enforce delta_weightup': '0.4',\n             '$enforce delta_weightdown': '0.1',\n             '$enforce average_percent_over': '50',\n             '$enforce average_cpufactor': '1.025',\n             '$enforce average_trialperiod': '120',\n             '$enforce cpuburst': '',\n             '$enforce cpuaverage': '',\n             '$enforce mem': ''})\n\n        error = \"\"\n        try:\n            self.mom.stop()\n            self.mom.start()\n        except PbsServiceError as err:\n            error = err\n\n        self.assertEqual(error, \"\", \"mom failed to restart: %s\" % error)\n"
  },
  {
    "path": "test/tests/functional/pbs_sister_mom_crash.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\n@requirements(num_moms=2)\nclass TestSisterMom(TestFunctional):\n    \"\"\"\n    This test suite tests that a sister mom does not crash\n    \"\"\"\n    @timeout(240)\n    def test_sister_mom_crash(self):\n        \"\"\"\n        This test verifies that the sister mom\n        doesn't crash\n        \"\"\"\n        # Skip the test if the number of moms provided is not exactly two\n        if len(self.moms) != 2:\n            self.skipTest(\"test requires exactly two MoMs as input, \" +\n                          \"use -p moms=<mom1:mom2>\")\n        sister_mom = list(self.moms)[1]\n        pbsdsh_path = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                   \"bin\", \"pbsdsh\")\n        script = \"%s dd if=/dev/zero of=/dev/null\" % pbsdsh_path\n        j = Job(TEST_USER)\n        j.set_attributes({'Resource_List.select': '2',\n                          'Resource_List.place': 'scatter'})\n        j.create_script(script)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.log_match(\";%s;node down: communication closed\"\n                              % (sister_mom),\n                              max_attempts=18, interval=10, existence=False)\n"
  },
  {
    "path": "test/tests/functional/pbs_snapshot_unittest.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport json\nimport os\nimport tarfile\nimport time\n\nfrom ptl.utils.pbs_snaputils import *\nfrom tests.functional import *\n\n\nclass TestPBSSnapshot(TestFunctional):\n    \"\"\"\n    Test suite with unit tests for the pbs_snapshot tool\n    \"\"\"\n    pbs_snapshot_path = None\n    snapdirs = []\n    snaptars = []\n    parent_dir = os.getcwd()\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n\n        # Check whether pbs_snapshot is accessible\n        try:\n            self.pbs_snapshot_path = os.path.join(\n                self.server.pbs_conf[\"PBS_EXEC\"], \"sbin\", \"pbs_snapshot\")\n            ret = self.du.run_cmd(cmd=[self.pbs_snapshot_path, \"-h\"])\n            if ret['rc'] != 0:\n                self.pbs_snapshot_path = None\n        except Exception:\n            self.pbs_snapshot_path = None\n\n        # Check whether the user has root access or not\n        # pbs_snapshot only supports being run as root, so skip the entire\n        # testsuite if the user doesn't have root privileges\n        ret = self.du.run_cmd(\n            cmd=[\"ls\", os.path.join(os.sep, \"root\")], sudo=True)\n        if ret['rc'] != 0:\n            self.skipTest(\"pbs_snapshot/PBSSnapUtils need root privileges\")\n\n    def setup_sc(self, sched_id, partition, port,\n                 sched_priv=None, sched_log=None):\n        \"\"\"\n        Setup a scheduler\n\n      
  :param sched_id: id of the scheduler\n        :type sched_id: str\n        :param partition: partition name for the scheduler (e.g \"P1\", \"P1,P2\")\n        :type partition: str\n        :param port: The port number string for the scheduler\n        :type port: str\n        :param sched_priv: 'sched_priv' (full path) for the scheduler\n        :type sched_priv: str\n        :param sched_log: 'sched_log' (full path) for the scheduler\n        :type sched_log: str\n        \"\"\"\n        a = {'partition': partition,\n             'sched_host': self.server.hostname}\n        if sched_priv is not None:\n            a['sched_priv'] = sched_priv\n        if sched_log is not None:\n            a['sched_log'] = sched_log\n        self.server.manager(MGR_CMD_CREATE, SCHED, a, id=sched_id)\n        if 'sched_priv' in a:\n            sched_dir = os.path.dirname(sched_priv)\n            self.scheds[sched_id].create_scheduler(sched_dir)\n            self.scheds[sched_id].start(sched_dir)\n        else:\n            self.scheds[sched_id].create_scheduler()\n            self.scheds[sched_id].start()\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'True'}, id=sched_id)\n\n    def setup_queues_nodes(self, num_partitions):\n        \"\"\"\n        Given a no. of partitions, create equal no. 
of associated queues\n        and nodes\n\n        :param num_partitions: number of partitions\n        :type num_partitions: int\n        :return a tuple of lists of queue and node ids:\n            ([q1, q2, ..], [n1, n2, ..])\n        \"\"\"\n        queues = []\n        nodes = []\n        a_q = {\"queue_type\": \"execution\",\n               \"started\": \"True\",\n               \"enabled\": \"True\"}\n        a_n = {\"resources_available.ncpus\": 2}\n        self.mom.create_vnodes(a_n, (num_partitions + 1), vname='vnode')\n        for i in range(num_partitions):\n            partition_id = \"P\" + str(i + 1)\n\n            # Create queue i + 1 with partition i + 1\n            id_q = \"wq\" + str(i + 1)\n            queues.append(id_q)\n            a_q[\"partition\"] = partition_id\n            self.server.manager(MGR_CMD_CREATE, QUEUE, a_q, id=id_q)\n\n            # Set the partition i + 1 on node i\n            id_n = \"vnode[\" + str(i) + \"]\"\n            nodes.append(id_n)\n            a = {\"partition\": partition_id}\n            self.server.manager(MGR_CMD_SET, NODE, a, id=id_n)\n\n        return (queues, nodes)\n\n    def take_snapshot(self, acct_logs=None, daemon_logs=None,\n                      obfuscate=None, with_sudo=True, hosts=None,\n                      primary_host=None, basic=None, obf_snap=None):\n        \"\"\"\n        Take a snapshot using pbs_snapshot command\n\n        :param acct_logs: Number of accounting logs to capture\n        :type acct_logs: int\n        :param daemon_logs: Number of daemon logs to capture\n        :type daemon_logs: int\n        :param obfuscate: Obfuscate information?\n        :type obfuscate: bool\n        :param with_sudo: use the --with-sudo option?\n        :type with_sudo: bool\n        :param hosts: list of additional hosts to capture information from\n        :type hosts: list\n        :param primary_host: hostname of the primary host to capture (-H)\n        :type primary_host: str\n        :param basic: 
use --basic option\n        :type basic: bool\n        :param obf_snap: path to existing snapshot to obfuscate\n        :type obf_snap: str\n        :return a tuple of name of tarball and snapshot directory captured:\n            (tarfile, snapdir)\n        \"\"\"\n        if self.pbs_snapshot_path is None:\n            self.skipTest(\"pbs_snapshot not found\")\n\n        if obf_snap:\n            snap_cmd = [self.pbs_snapshot_path, \"--obf-snap\", obf_snap]\n        else:\n            snap_cmd = [self.pbs_snapshot_path, \"-o\", self.parent_dir]\n            if acct_logs is not None:\n                snap_cmd.append(\"--accounting-logs=\" + str(acct_logs))\n            if daemon_logs is not None:\n                snap_cmd.append(\"--daemon-logs=\" + str(daemon_logs))\n            if obfuscate:\n                snap_cmd.append(\"--obfuscate\")\n            if hosts is not None:\n                hosts_str = \",\".join(hosts)\n                snap_cmd.append(\"--additional-hosts=\" + hosts_str)\n            if primary_host is not None:\n                snap_cmd.append(\"-H \" + primary_host)\n            if basic is not None:\n                snap_cmd.append(\"--basic\")\n\n        if with_sudo:\n            snap_cmd.append(\"--with-sudo\")\n\n        ret = self.du.run_cmd(cmd=snap_cmd, logerr=False, as_script=True)\n        self.assertEqual(ret['rc'], 0)\n\n        # Get the name of the tarball that was created\n        # pbs_snapshot prints to stdout only the following:\n        #     \"Snapshot available at: <path to tarball>\"\n        self.assertTrue(len(ret['out']) > 0, str(ret))\n        snap_out = ret['out'][0]\n        output_tar = snap_out.split(\":\")[1]\n        output_tar = output_tar.strip()\n\n        # Check that the output tarball was created\n        self.assertTrue(os.path.isfile(output_tar),\n                        \"Error capturing snapshot:\\n\" + str(ret))\n\n        # Unwrap the tarball\n        tar = tarfile.open(output_tar)\n        
tar.extractall(path=self.parent_dir)\n        tar.close()\n\n        # snapshot directory name = <snapshot>.tgz[:-4]\n        snap_dir = output_tar[:-4]\n\n        # Check that the directory exists\n        self.assertTrue(os.path.isdir(snap_dir))\n\n        self.snapdirs.append(snap_dir)\n        self.snaptars.append(output_tar)\n\n        return (output_tar, snap_dir)\n\n    def check_snap_obfuscated(self, snap_dir, real_values):\n        \"\"\"\n        Check that a snapshot doesn't contain any sensitive values\n\n        :param snap_dir: path to the snapshot dir\n        :type snap_dir: str\n        :param real_values: map of {attribute name: sensitive value}\n        :type real_values: dict\n        \"\"\"\n        values = real_values.values()\n        for val_list in values:\n            for val in val_list:\n                # Just do a grep for the value in the snapshot\n                # (run_cmd executes the list directly, so the pattern must\n                # not be shell-quoted)\n                cmd = [\"grep\", \"-wR\", str(val), snap_dir]\n                ret = self.du.run_cmd(cmd=cmd, level=logging.DEBUG)\n                # grep returns 2 if an error occurred\n                self.assertNotEqual(ret[\"rc\"], 2, \"grep failed!\")\n                self.assertIn(ret[\"out\"], [\"\", None, []], str(val) +\n                              \" was not obfuscated. Real values:\\n\" +\n                              str(real_values))\n                # Also make sure that no filenames contain the sensitive val\n                cmd = [\"find\", snap_dir, \"-name\", \"*\" + str(val) + \"*\"]\n                ret = self.du.run_cmd(cmd=cmd, level=logging.DEBUG)\n                self.assertEqual(ret[\"rc\"], 0, \"find command failed!\")\n                self.assertIn(ret[\"out\"], [\"\", None, []], str(val) +\n                              \" was not obfuscated. 
Real values:\\n\" +\n                              str(real_values))\n\n    def test_capture_server(self):\n        \"\"\"\n        Test the 'capture_server' interface of PBSSnapUtils\n        \"\"\"\n\n        # Set something on the server so we can match it later\n        job_hist_duration = \"12:00:00\"\n        attr_list = {\"job_history_enable\": \"True\",\n                     \"job_history_duration\": job_hist_duration}\n        self.server.manager(MGR_CMD_SET, SERVER, attr_list)\n\n        num_daemon_logs = 2\n        num_acct_logs = 5\n\n        with PBSSnapUtils(out_dir=self.parent_dir, acct_logs=num_acct_logs,\n                          daemon_logs=num_daemon_logs,\n                          with_sudo=True) as snap_obj:\n            snap_dir = snap_obj.capture_server(True, True)\n\n            # Go through the snapshot and perform certain checks\n            # Check 1: the snapshot exists\n            self.assertTrue(os.path.isdir(snap_dir))\n            # Check 2: all directories except the 'server' directory have no\n            # files\n            svr_fullpath = os.path.join(snap_dir, \"server\")\n            for root, _, files in os.walk(snap_dir):\n                for filename in files:\n                    file_fullpath = os.path.join(root, filename)\n                    # Find the common paths between 'server' & the file\n                    common_path = os.path.commonprefix([file_fullpath,\n                                                        svr_fullpath])\n                    try:\n                        self.assertEqual(os.path.basename(common_path),\n                                         \"server\")\n                    except AssertionError:\n                        # Check if this was a server core file, which would\n                        # explain why it was captured\n                        svrcorepath = os.path.join(CORE_DIR, \"server_priv\")\n                        if svrcorepath in file_fullpath:\n                           
 continue\n                        raise\n            # Check 3: qstat_Bf.out exists\n            qstat_bf_out = os.path.join(snap_obj.snapdir, QSTAT_BF_PATH)\n            self.assertTrue(os.path.isfile(qstat_bf_out))\n            # Check 4: qstat_Bf.out has 'job_history_duration' set to 12:00:00\n            with open(qstat_bf_out, \"r\") as fd:\n                for line in fd:\n                    if \"job_history_duration\" in line:\n                        # Remove whitespace\n                        line = \"\".join(line.split())\n                        # Split it up by '='\n                        key_val = line.split(\"=\")\n                        self.assertEqual(key_val[1], job_hist_duration)\n\n        # Cleanup\n        if os.path.isdir(snap_dir):\n            self.du.rm(path=snap_dir, recursive=True, force=True)\n\n    def test_capture_all(self):\n        \"\"\"\n        Test the 'capture_all' interface of PBSSnapUtils\n\n        WARNING: Assumes that the test is being run on a type 1 PBS install\n        \"\"\"\n        num_daemon_logs = 2\n        num_acct_logs = 5\n\n        # Check that all PBS daemons are up and running\n        all_daemons_up = self.server.isUp()\n        all_daemons_up = all_daemons_up and self.mom.isUp()\n        all_daemons_up = all_daemons_up and self.comm.isUp()\n        all_daemons_up = all_daemons_up and self.scheduler.isUp()\n\n        if not all_daemons_up:\n            # Skip the test\n            self.skipTest(\"Type 1 installation not present or \" +\n                          \"not all daemons are running\")\n\n        with PBSSnapUtils(out_dir=self.parent_dir, acct_logs=num_acct_logs,\n                          daemon_logs=num_daemon_logs,\n                          with_sudo=True) as snap_obj:\n            snap_dir = snap_obj.capture_all()\n            snap_obj.finalize()\n\n            # Test that all the expected information has been captured\n            # PBSSnapUtils has various dictionaries which store 
metadata\n            # for various objects. Create a list of these dicts\n            all_info = [snap_obj.server_info, snap_obj.job_info,\n                        snap_obj.node_info, snap_obj.comm_info,\n                        snap_obj.hook_info, snap_obj.sched_info,\n                        snap_obj.resv_info, snap_obj.core_info,\n                        snap_obj.sys_info]\n            skip_list = [ACCT_LOGS, QMGR_LPBSHOOK_OUT, \"reservation\", \"job\",\n                         QMGR_PR_OUT, PG_LOGS, \"core_file_bt\",\n                         \"pbs_snapshot.log\"]\n            platform = self.du.get_platform()\n            if not platform.startswith(\"linux\"):\n                skip_list.extend([ETC_HOSTS, ETC_NSSWITCH_CONF, LSOF_PBS_OUT,\n                                  VMSTAT_OUT, DF_H_OUT, DMESG_OUT])\n            for item_info in all_info:\n                for key, info in item_info.items():\n                    info_path = info[0]\n                    if info_path is None:\n                        continue\n                    # Check if we should skip checking this info\n                    skip_item = False\n                    for item in skip_list:\n                        if isinstance(item, int):\n                            if item == key:\n                                skip_item = True\n                                break\n                        else:\n                            if item in info_path:\n                                skip_item = True\n                                break\n                    if skip_item:\n                        continue\n\n                    # Check if this information was captured\n                    info_full_path = os.path.join(snap_dir, info_path)\n                    self.assertTrue(os.path.exists(info_full_path),\n                                    msg=info_full_path + \" was not captured\")\n\n        # Cleanup\n        if os.path.isdir(snap_dir):\n            self.du.rm(path=snap_dir, 
recursive=True, force=True)\n\n    def test_capture_pbs_logs(self):\n        \"\"\"\n        Test the 'capture_pbs_logs' interface of PBSSnapUtils\n        \"\"\"\n        num_daemon_logs = 2\n        num_acct_logs = 5\n\n        # Check which PBS daemons are up on this machine.\n        # We'll only check for logs from the daemons which were up\n        # when the snapshot was taken.\n        server_up = self.server.isUp()\n        mom_up = self.mom.isUp()\n        comm_up = self.comm.isUp()\n        sched_up = self.scheduler.isUp()\n\n        if not (server_up or mom_up or comm_up or sched_up):\n            # Skip the test\n            self.skipTest(\"No PBS daemons found on the system,\" +\n                          \" skipping the test\")\n\n        with PBSSnapUtils(out_dir=self.parent_dir, acct_logs=num_acct_logs,\n                          daemon_logs=num_daemon_logs,\n                          with_sudo=True) as snap_obj:\n            snap_dir = snap_obj.capture_pbs_logs()\n\n            # Perform some checks\n            # Check that the snapshot exists\n            self.assertTrue(os.path.isdir(snap_dir))\n            if server_up:\n                # Check that 'server_logs' were captured\n                log_path = os.path.join(snap_dir, SVR_LOGS_PATH)\n                self.assertTrue(os.path.isdir(log_path))\n                # Check that 'accounting_logs' were captured\n                log_path = os.path.join(snap_dir, ACCT_LOGS_PATH)\n                self.assertTrue(os.path.isdir(log_path))\n            if mom_up:\n                # Check that 'mom_logs' were captured\n                log_path = os.path.join(snap_dir, MOM_LOGS_PATH)\n                self.assertTrue(os.path.isdir(log_path))\n            if comm_up:\n                # Check that 'comm_logs' were captured\n                log_path = os.path.join(snap_dir, COMM_LOGS_PATH)\n                self.assertTrue(os.path.isdir(log_path))\n            if sched_up:\n                # Check that 
'sched_logs' were captured\n                log_path = os.path.join(snap_dir, DFLT_SCHED_LOGS_PATH)\n                self.assertTrue(os.path.isdir(log_path))\n\n        if os.path.isdir(snap_dir):\n            self.du.rm(path=snap_dir, recursive=True, force=True)\n\n    def test_snapshot_basic(self):\n        \"\"\"\n        Test capturing a snapshot via the pbs_snapshot program\n        \"\"\"\n        if self.pbs_snapshot_path is None:\n            self.skip_test(\"pbs_snapshot not found\")\n\n        output_tar, _ = self.take_snapshot()\n\n        # Check that the output tarball was created\n        self.assertTrue(os.path.isfile(output_tar))\n\n    def test_snapshot_without_logs(self):\n        \"\"\"\n        Test capturing a snapshot via the pbs_snapshot program\n        Capture no logs\n        \"\"\"\n        if self.pbs_snapshot_path is None:\n            self.skip_test(\"pbs_snapshot not found\")\n\n        (_, snap_dir) = self.take_snapshot(0, 0)\n\n        # Check that 'server_logs' were not captured\n        log_path = os.path.join(snap_dir, SVR_LOGS_PATH)\n        self.assertFalse(os.path.isdir(log_path))\n        # Check that 'mom_logs' were not captured\n        log_path = os.path.join(snap_dir, MOM_LOGS_PATH)\n        self.assertFalse(os.path.isdir(log_path))\n        # Check that 'comm_logs' were not captured\n        log_path = os.path.join(snap_dir, COMM_LOGS_PATH)\n        self.assertFalse(os.path.isdir(log_path))\n        # Check that 'sched_logs' were not captured\n        log_path = os.path.join(snap_dir, DFLT_SCHED_LOGS_PATH)\n        self.assertFalse(os.path.isdir(log_path))\n        # Check that 'accounting_logs' were not captured\n        log_path = os.path.join(snap_dir, ACCT_LOGS_PATH)\n        self.assertFalse(os.path.isdir(log_path))\n\n    def test_obfuscate_resv_user_groups(self):\n        \"\"\"\n        Test obfuscation of user & group related attributes while capturing\n        snapshots via pbs_snapshot\n        
\"\"\"\n        if self.pbs_snapshot_path is None:\n            self.skip_test(\"pbs_snapshot not found\")\n\n        now = int(time.time())\n\n        # Let's submit a reservation with Authorized_Users and\n        # Authorized_Groups set\n        attribs = {ATTR_auth_u: TEST_USER1, ATTR_auth_g: TSTGRP0,\n                   ATTR_l + \".ncpus\": 1, 'reserve_start': now + 25,\n                   'reserve_end': now + 45}\n        resv_obj = Reservation(attrs=attribs)\n        resv_id = self.server.submit(resv_obj)\n        attribs = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, attribs, id=resv_id)\n\n        # Now, take a snapshot with --obfuscate\n        (_, snap_dir) = self.take_snapshot(0, 0, True)\n\n        # Make sure that the pbs_rstat -f output captured doesn't have the\n        # Authorized user and group names\n        pbsrstat_path = os.path.join(snap_dir, PBS_RSTAT_F_PATH)\n        self.assertTrue(os.path.isfile(pbsrstat_path))\n        with open(pbsrstat_path, \"r\") as rstatfd:\n            all_content = rstatfd.read()\n            self.assertNotIn(str(TEST_USER1), all_content)\n            self.assertNotIn(str(TSTGRP0), all_content)\n\n    def test_obfuscate_acct_bad(self):\n        \"\"\"\n        Test that pbs_snapshot --obfuscate can work with bad accounting records\n        \"\"\"\n        if os.getuid() != 0:\n            self.skipTest(\"Test needs to run as root\")\n\n        if self.pbs_snapshot_path is None:\n            self.skip_test(\"pbs_snapshot not found\")\n\n        # Delete all existing accounting logs\n        acct_logpath = os.path.join(self.server.pbs_conf[\"PBS_HOME\"],\n                                    \"server_priv\", \"accounting\")\n        self.du.rm(path=os.path.join(acct_logpath, \"*\"), force=True,\n                   as_script=True)\n        ret = os.listdir(acct_logpath)\n        self.assertEqual(len(ret), 0)\n        self.server.pi.restart()\n\n        # Make sure that the 
restart generated a new accounting log\n        # and submit a job to generate a new accounting record\n        j = Job(TEST_USER)\n        j.set_sleep_time(1)\n        jid = self.server.submit(j)\n\n        # Check that the accounting E record was generated\n        self.server.accounting_match(\";E;%s;\" % jid)\n\n        # Now, add some garbage data to the accounting file\n        ret = os.listdir(acct_logpath)\n        self.assertGreater(len(ret), 0)\n        acct_filename = ret[0]\n        filepath = os.path.join(acct_logpath, acct_filename)\n        with open(filepath, \"a+\") as fd:\n            fd.write(\"!@#$%^\")\n\n        # Now, take a snapshot with --obfuscate\n        (_, snap_dir) = self.take_snapshot(obfuscate=True, with_sudo=False)\n\n        # Make sure that the accounting log was captured with the job record\n        snapacctdir = os.path.join(snap_dir, \"server_priv\", \"accounting\")\n        self.assertTrue(os.path.isdir(snapacctdir))\n        snapacctpath = os.path.join(snapacctdir, acct_filename)\n        self.assertTrue(os.path.isfile(snapacctpath))\n        with open(snapacctpath, \"r\") as fd:\n            content = fd.read()\n            self.assertIn(\";E;%s;\" % jid, content)\n\n        # Now, modify the job record itself to add some garbage to it\n        file_contents = []\n        contents_out = []\n        with open(filepath, \"r\") as fd:\n            file_contents = fd.readlines()\n        for line in file_contents:\n            if \";E;%s;\" % jid in line:\n                line = line[:-1] + \" !@#$^\\n\"\n            contents_out.append(line)\n        with open(filepath, \"w\") as fd:\n            fd.writelines(contents_out)\n\n        # Capture another snapshot with --obfuscate\n        (_, snap_dir) = self.take_snapshot(obfuscate=True, with_sudo=False)\n\n        # Make sure that the accounting log was captured\n        # This time, the job record should not be captured as it had garbage\n        snapacctdir = 
os.path.join(snap_dir, \"server_priv\", \"accounting\")\n        self.assertTrue(os.path.isdir(snapacctdir))\n        snapacctpath = os.path.join(snapacctdir, acct_filename)\n        self.assertTrue(os.path.isfile(snapacctpath))\n        with open(snapacctpath, \"r\") as fd:\n            content = fd.read()\n            self.assertNotIn(\";E;%s;\" % jid, content)\n\n    def test_multisched_support(self):\n        \"\"\"\n        Test that pbs_snapshot can capture details of all schedulers\n        \"\"\"\n        if self.pbs_snapshot_path is None:\n            self.skip_test(\"pbs_snapshot not found\")\n\n        # Set up 3 schedulers\n        sched_ids = [\"sc1\", \"sc2\", \"sc3\", \"default\"]\n        self.setup_sc(sched_ids[0], \"P1\", \"15050\")\n        self.setup_sc(sched_ids[1], \"P2\", \"15051\")\n        # Set up a scheduler at a non-default location\n        dir_path = os.path.join(os.sep, 'var', 'spool', 'pbs', 'sched_dir')\n        if not os.path.exists(dir_path):\n            self.du.mkdir(path=dir_path, sudo=True)\n        sched_priv = os.path.join(dir_path, 'sched_priv_sc3')\n        sched_log = os.path.join(dir_path, 'sched_logs_sc3')\n        self.setup_sc(sched_ids[2], \"P3\", \"15052\", sched_priv, sched_log)\n\n        # Add 3 partitions, each associated with a queue and a node\n        (q_ids, _) = self.setup_queues_nodes(3)\n\n        # Submit some jobs to fill the system up and get the multiple\n        # schedulers busy\n        for q_id in q_ids:\n            for _ in range(2):\n                attr = {\"queue\": q_id, \"Resource_List.ncpus\": \"1\"}\n                j = Job(TEST_USER1, attrs=attr)\n                self.server.submit(j)\n\n        # Capture a snapshot of the system with multiple schedulers\n        (_, snapdir) = self.take_snapshot()\n\n        # Check that sched priv and sched logs for all schedulers were captured\n        for sched_id in sched_ids:\n            if sched_id == \"default\":\n                schedi_priv = 
os.path.join(snapdir, DFLT_SCHED_PRIV_PATH)\n                schedi_logs = os.path.join(snapdir, DFLT_SCHED_LOGS_PATH)\n            else:\n                schedi_priv = os.path.join(snapdir, \"sched_priv_\" + sched_id)\n                schedi_logs = os.path.join(snapdir, \"sched_logs_\" + sched_id)\n\n            self.assertTrue(os.path.isdir(schedi_priv))\n            self.assertTrue(os.path.isdir(schedi_logs))\n\n            # Make sure that these directories are not empty\n            self.assertTrue(len(os.listdir(schedi_priv)) > 0)\n            self.assertTrue(len(os.listdir(schedi_logs)) > 0)\n\n        # Check that qmgr -c \"l sched\" captured information about all scheds\n        lschedpath = os.path.join(snapdir, QMGR_LSCHED_PATH)\n        with open(lschedpath, \"r\") as fd:\n            scheds_found = 0\n            for line in fd:\n                if line.startswith(\"Sched \"):\n                    sched_id = line.split(\"Sched \")[1]\n                    sched_id = sched_id.strip()\n                    self.assertTrue(sched_id in sched_ids)\n                    scheds_found += 1\n            self.assertEqual(scheds_found, 4)\n\n    def test_snapshot_from_hook(self):\n        \"\"\"\n        Test that pbs_snapshot can be called from inside a hook\n        \"\"\"\n        logmsg = \"pbs_snapshot was successfully run\"\n        hook_body = \"\"\"\nimport pbs\nimport os\nimport subprocess\nimport time\n\npbs_snap_exec = os.path.join(pbs.pbs_conf['PBS_EXEC'], \"sbin\", \"pbs_snapshot\")\nif not os.path.isfile(pbs_snap_exec):\n    raise ValueError(\"pbs_snapshot executable not found\")\n\nref_time = time.time()\nsnap_cmd = [pbs_snap_exec, \"-o\", \".\"]\nassert(not subprocess.call(snap_cmd))\n\n# Check that the snapshot was captured\nsnapshot_found = False\nfor filename in os.listdir(\".\"):\n    if filename.startswith(\"snapshot\") and filename.endswith(\".tgz\"):\n        # Make sure the mtime on this file is recent enough\n        mtime_file = 
os.path.getmtime(filename)\n        if mtime_file > ref_time:\n            snapshot_found = True\n            break\nassert(snapshot_found)\npbs.logmsg(pbs.EVENT_DEBUG, \"%s\")\n\"\"\" % (logmsg)\n        hook_name = \"snapshothook\"\n        attr = {\"event\": \"periodic\", \"freq\": 5}\n        rv = self.server.create_import_hook(hook_name, attr, hook_body,\n                                            overwrite=True)\n        self.assertTrue(rv)\n        self.server.log_match(logmsg)\n\n    def snapshot_multi_mom_basic(self, obfuscate=False):\n        \"\"\"\n        Test capturing data from a multi-mom system\n\n        :param obfuscate: take snapshot with --obfuscate?\n        :type obfuscate: bool\n        \"\"\"\n        # Skip test if number of moms is not equal to two\n        if len(self.moms) != 2:\n            self.skipTest(\"test requires at least two moms as input, \"\n                          \"use -p moms=<mom 1>:<mom 2>\")\n\n        mom1 = self.moms.values()[0]\n        mom2 = self.moms.values()[1]\n\n        host1 = mom1.shortname\n        host2 = mom2.shortname\n\n        self.server.manager(MGR_CMD_DELETE, NODE, None, \"\")\n        self.server.manager(MGR_CMD_CREATE, NODE, id=host1)\n        self.server.manager(MGR_CMD_CREATE, NODE, id=host2)\n\n        # Give the moms a chance to contact the server.\n        self.server.expect(NODE, {'state': 'free'}, id=host1)\n        self.server.expect(NODE, {'state': 'free'}, id=host2)\n\n        # Capture a snapshot with details from the remote moms\n        (_, snapdir) = self.take_snapshot(hosts=[host1, host2],\n                                          obfuscate=obfuscate)\n\n        # Check that snapshots for the 2 hosts were captured\n        host1_outtar = os.path.join(snapdir, host1 + \"_snapshot.tgz\")\n        host2_outtar = os.path.join(snapdir, host2 + \"_snapshot.tgz\")\n\n        self.assertTrue(os.path.isfile(host1_outtar),\n                        \"Failed to capture snapshot on %s\" % 
(host1))\n        self.assertTrue(os.path.isfile(host2_outtar),\n                        \"Failed to capture snapshot on %s\" % (host2))\n\n        # Unwrap the host snapshots\n        host1_snapdir = host1 + \"_snapshot\"\n        host2_snapdir = host2 + \"_snapshot\"\n        os.mkdir(host1_snapdir)\n        self.snapdirs.append(host1_snapdir)\n        os.mkdir(host2_snapdir)\n        self.snapdirs.append(host2_snapdir)\n        tar = tarfile.open(host1_outtar)\n        tar.extractall(path=host1_snapdir)\n        tar.close()\n        tar = tarfile.open(host2_outtar)\n        tar.extractall(path=host2_snapdir)\n        tar.close()\n\n        # Determine the name of the child snapshots\n        snap1_path = self.du.listdir(path=host1_snapdir, fullpath=True)\n        snap2_path = self.du.listdir(path=host2_snapdir, fullpath=True)\n        snap1_path = snap1_path[0]\n        snap2_path = snap2_path[0]\n\n        # Check that at least pbs.conf was captured on all of these hosts\n        self.assertTrue(os.path.isfile(os.path.join(snapdir, \"pbs.conf\")),\n                        \"Main snapshot didn't capture all expected\"\n                        \" information\")\n        self.assertTrue(os.path.isfile(os.path.join(snap1_path, \"pbs.conf\")),\n                        \"%s snapshot didn't capture all expected\"\n                        \" information\" % (host1))\n        self.assertTrue(os.path.isfile(os.path.join(snap2_path, \"pbs.conf\")),\n                        \"%s snapshot didn't capture all expected\"\n                        \" information\" % (host2))\n\n    @requirements(num_moms=2)\n    def test_multi_mom_basic(self):\n        \"\"\"\n        Test running pbs_snapshot on a multi-mom setup\n        \"\"\"\n        self.snapshot_multi_mom_basic()\n\n    @requirements(num_moms=2)\n    def test_multi_mom_basic_obfuscate(self):\n        \"\"\"\n        Test running pbs_snapshot on a multi-mom setup with obfuscation\n        \"\"\"\n        
self.snapshot_multi_mom_basic(obfuscate=True)\n\n    def test_no_sudo(self):\n        \"\"\"\n        Test that running pbs_snapshot without sudo doesn't fail\n        \"\"\"\n        output_tar, _ = self.take_snapshot(with_sudo=False)\n\n        # Check that the output tarball was created\n        self.assertTrue(os.path.isfile(output_tar))\n\n    def test_snapshot_json(self):\n        \"\"\"\n        Test that pbs_snapshot captures job and vnode info in json\n        \"\"\"\n        _, snap_dir = self.take_snapshot()\n\n        # Verify that qstat json was captured\n        jsonpath = os.path.join(snap_dir, QSTAT_F_JSON_PATH)\n        self.assertTrue(os.path.isfile(jsonpath))\n        with open(jsonpath, \"r\") as fd:\n            json.load(fd)   # this will fail if file is not a valid json\n\n        # Verify that pbsnodes json was captured\n        jsonpath = os.path.join(snap_dir, PBSNODES_AVFJSON_PATH)\n        self.assertTrue(os.path.isfile(jsonpath))\n        with open(jsonpath, \"r\") as fd:\n            json.load(fd)\n\n    @requirements(no_mom_on_server=True)\n    def test_remote_primary_mom(self):\n        \"\"\"\n        Test that pbs_snapshot -H works correctly to capture a remote primary\n        MoM host\n        \"\"\"\n        # Skip test if there's no remote mom host available\n        if len(self.moms) == 0 or \\\n                self.du.is_localhost((self.moms.values()[0]).shortname):\n            self.skipTest(\"test requires a remote mom host as input, \"\n                          \"use -p moms=<mom host>\")\n\n        mom_host = (self.moms.values()[0]).shortname\n\n        _, snap_dir = self.take_snapshot(primary_host=mom_host)\n\n        # Verify that mom_priv was captured\n        momprivpath = os.path.join(snap_dir, \"mom_priv\")\n        self.assertTrue(os.path.isdir(momprivpath))\n\n    @requirements(num_moms=2)\n    def test_remote_primary_multinode(self):\n        \"\"\"\n        Test that pbs_snapshot -H works with 
--additional-hosts to capture a multi-node setup\n        \"\"\"\n        # Skip test if number of moms is not equal to two\n        if len(self.moms) != 2:\n            self.skipTest(\"test requires at least two moms as input, \"\n                          \"use -p moms=<mom 1>:<mom 2>\")\n\n        mom1 = self.moms.values()[0]\n        mom2 = self.moms.values()[1]\n\n        host1 = mom1.shortname\n        host2 = mom2.shortname\n\n        _, snap_dir = self.take_snapshot(hosts=[host2], primary_host=host1)\n\n        # Verify that the primary host's mom_priv was captured\n        momprivpath = os.path.join(snap_dir, \"mom_priv\")\n        self.assertTrue(os.path.isdir(momprivpath))\n\n        # The other host was captured as an additional host,\n        # so there should be a snapshot tar for it inside the main snapshot\n        host2_outtar = os.path.join(snap_dir, host2 + \"_snapshot.tgz\")\n        self.assertTrue(os.path.isfile(host2_outtar))\n\n        # Verify that mom_priv was captured; we can do this by just checking\n        # for the mom_priv/config file\n        tar = tarfile.open(host2_outtar)\n        host2_snapname = tar.getnames()[0].split(os.sep, 1)[0]\n        try:\n            config_path = os.path.join(host2_snapname, \"mom_priv\", \"config\")\n            tar.getmember(config_path)\n        except KeyError:\n            self.fail(\"mom_priv/config not found in %s's snapshot\" % host2)\n\n    @skipOnShasta\n    def test_snapshot_obf_stress(self):\n        \"\"\"\n        A stress test to make sure that snapshot --obfuscate really obfuscates\n        the attributes that it claims to\n        \"\"\"\n        real_values = {}\n\n        # We will try to set all attributes which --obfuscate anonymizes\n        manager1 = str(MGR_USER) + '@*'\n        manager2 = str(TEST_USER) + \"@*\"\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {ATTR_managers: (INCR, manager2)},\n                            sudo=True)\n        real_values[ATTR_managers] 
= [manager1, manager2]\n\n        operator = str(OPER_USER) + '@*'\n        real_values[ATTR_operators] = [operator]\n\n        real_values[ATTR_SvrHost] = [self.server.hostname]\n\n        # Create a queue with acls set\n        a = {ATTR_qtype: 'execution', ATTR_start: 'True', ATTR_enable: 'True',\n             ATTR_aclgren: 'True', ATTR_aclgroup: TSTGRP0,\n             ATTR_acluser: TEST_USER}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='workq2')\n        real_values[ATTR_aclgroup] = [TSTGRP0]\n        real_values[ATTR_acluser] = [TEST_USER]\n\n        # Create a custom resource\n        attr = {\"type\": \"long\", \"flag\": \"nh\"}\n        rsc_id = \"myres\"\n        self.server.manager(MGR_CMD_CREATE, RSC, attr, id=rsc_id,\n                            logerr=False)\n\n        # Make it schedulable\n        self.scheduler.add_resource(\"myres\")\n\n        # Set myres on the vnode\n        attr = {\"resources_available.myres\": 1}\n        self.server.manager(MGR_CMD_SET, NODE, attr, id=self.mom.shortname)\n\n        # Set acls on server\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {ATTR_aclResvgroup: TSTGRP0,\n                             ATTR_aclResvuser: TEST_USER,\n                             ATTR_aclResvhost: self.server.hostname,\n                             ATTR_aclhost: self.server.hostname},\n                            sudo=True)\n        real_values[ATTR_aclResvgroup] = [TSTGRP0]\n        real_values[ATTR_aclResvuser] = [TEST_USER]\n\n        # ATTR_SchedHost is already set on the default host\n        real_values[ATTR_SchedHost] = [self.server.hostname]\n\n        # Add node's 'Host' & 'Mom'\n        real_values[ATTR_NODE_Host] = [self.mom.shortname, self.mom.hostname]\n        real_values[ATTR_NODE_Mom] = [self.mom.shortname, self.mom.hostname]\n        real_values[ATTR_rescavail + \".host\"] = [self.mom.shortname,\n                                                 self.mom.hostname]\n        
real_values[ATTR_rescavail + \".vnode\"] = [self.mom.shortname]\n\n        # Submit a reservation with Authorized_Users & Authorized_Groups set\n        now = int(time.time())\n        a = {ATTR_auth_u: TEST_USER, ATTR_auth_g: TSTGRP0,\n             'reserve_start': now + 3600, 'reserve_end': now + 7200}\n        resv = Reservation(TEST_USER, attrs=a)\n        self.server.submit(resv)\n        real_values[ATTR_auth_u] = [TEST_USER]\n        real_values[ATTR_auth_g] = [TSTGRP0]\n        real_values[ATTR_resv_owner] = [TEST_USER, self.server.hostname]\n\n        # Set up fairshare so that resource_group file gets filled\n        self.scheduler.add_to_resource_group(TEST_USER, 11, 'root', 40)\n        self.scheduler.add_to_resource_group(TEST_USER1, 12, 'root', 60)\n\n        # Submit a job with sensitive attributes set\n        a = {ATTR_project: 'p1', ATTR_A: 'a1', ATTR_g: TSTGRP0,\n             ATTR_M: TEST_USER, ATTR_u: TEST_USER,\n             ATTR_l + \".walltime\": \"00:01:00\",\n             ATTR_l + \".myres\": 1, ATTR_S: \"/bin/bash\"}\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(1000)\n        self.server.submit(j)\n\n        # Add job's attributes to the list\n        # TEST_USER belongs to group TSTGRP0\n        real_values[ATTR_euser] = [TEST_USER]\n        real_values[ATTR_egroup] = [TSTGRP0]\n        real_values[ATTR_project] = ['p1']\n        real_values[ATTR_A] = ['a1']\n        real_values[ATTR_g] = [TSTGRP0]\n        real_values[ATTR_M] = [TEST_USER]\n        real_values[ATTR_u] = [TEST_USER]\n        real_values[ATTR_owner] = [TEST_USER, self.server.hostname]\n        real_values[ATTR_exechost] = [self.server.hostname]\n        real_values[ATTR_S] = [\"/bin/bash\"]\n        real_values[ATTR_l] = [\"myres\"]\n\n        # Take a snapshot with --obfuscate\n        (_, snap_dir) = self.take_snapshot(obfuscate=True)\n\n        # Make sure that none of the sensitive values were captured\n        self.check_snap_obfuscated(snap_dir, real_values)\n\n    def test_basic_option(self):\n        \"\"\"\n        Test pbs_snapshot --basic\n        \"\"\"\n        if self.pbs_snapshot_path 
is None:\n            self.skip_test(\"pbs_snapshot not found\")\n\n        _, snap_dir = self.take_snapshot(basic=True)\n\n        # Check that the snapshot directory was created\n        self.assertTrue(os.path.isdir(snap_dir))\n\n        # Check that only the following files were captured:\n        target_files = [\"server/qstat_Bf.out\", \"server/qstat_Qf.out\",\n                        \"scheduler/qmgr_lsched.out\", \"node/pbsnodes_va.out\",\n                        \"reservation/pbs_rstat_f.out\", \"job/qstat_f.out\",\n                        \"hook/qmgr_lpbshook.out\", \"server_priv/resourcedef\",\n                        \"pbs.conf\", \"pbs_snapshot.log\", \"ctime\",\n                        \"job/qstat_tf.out\"]\n        target_files = [os.path.join(snap_dir, f) for f in target_files]\n        sched_priv_dir = os.path.join(snap_dir, \"sched_priv\")\n        for (root, _, files) in os.walk(snap_dir):\n            for fname in files:\n                fpath = os.path.join(root, fname)\n                if fpath not in target_files:\n                    if not fpath.startswith(sched_priv_dir):\n                        self.fail(\"Unexpected file \" + fpath + \" captured\")\n\n    def test_snapshot_mom_obf(self):\n        \"\"\"\n        Test capturing a snapshot of a system that's only running pbs_mom\n        \"\"\"\n        # Kill all daemons and start only pbs_mom\n        self.server.pi.initd(op=\"stop\", daemon=\"all\")\n        self.mom.pi.start_mom()\n        self.assertTrue(self.mom.isUp())\n        self.assertFalse(self.server.isUp())\n\n        # Take & verify a snapshot with obfuscate\n        self.take_snapshot(obfuscate=True, with_sudo=True, acct_logs=10)\n\n        # Bring the rest of the daemons up, otherwise tearDown will error out\n        self.server.pi.initd(op=\"start\", daemon=\"all\")\n\n    def test_obfuscate_existing(self):\n        \"\"\"\n        Test the --obf-snap option which obfuscates an existing snapshot\n        \"\"\"\n        if 
self.pbs_snapshot_path is None:\n            self.skip_test(\"pbs_snapshot not found\")\n\n        now = int(time.time())\n\n        # Submit a job with sensitive attributes set\n        a = {ATTR_project: 'p1', ATTR_A: 'a1', ATTR_g: TSTGRP0,\n             ATTR_M: TEST_USER, ATTR_u: TEST_USER,\n             ATTR_l + \".walltime\": \"00:01:00\", ATTR_S: \"/bin/bash\"}\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(1000)\n        self.server.submit(j)\n\n        # Add job's attributes to the list\n        # TEST_USER belongs to group TESTGRP0\n        real_values = {}\n        real_values[ATTR_euser] = [TEST_USER]\n        real_values[ATTR_egroup] = [TSTGRP0]\n        real_values[ATTR_project] = ['p1']\n        real_values[ATTR_A] = ['a1']\n        real_values[ATTR_g] = [TSTGRP0]\n        real_values[ATTR_M] = [TEST_USER]\n        real_values[ATTR_u] = [TEST_USER]\n        real_values[ATTR_owner] = [TEST_USER, self.server.hostname]\n        real_values[ATTR_exechost] = [self.server.hostname]\n        real_values[ATTR_S] = [\"/bin/bash\"]\n\n        # Take a normal snapshot\n        (snap_tar, snap_dir) = self.take_snapshot(0, 0, with_sudo=True)\n\n        # Now, obfuscate the snapshot that we just took\n        (snap_obf_tar, snap_obf) = self.take_snapshot(obf_snap=snap_dir)\n\n        # Make sure that none of the sensitive values were captured\n        self.check_snap_obfuscated(snap_obf, real_values)\n        self.du.rm(path=snap_obf, force=True, recursive=True)\n        self.du.rm(path=snap_dir, force=True, recursive=True)\n\n        # Now, obfuscate using the tar directly instead of the snapshot dir\n        self.du.rm(path=snap_obf_tar, force=True)\n        (_, snap_obf) = self.take_snapshot(obf_snap=snap_tar)\n        self.check_snap_obfuscated(snap_obf, real_values)\n\n    def tearDown(self):\n        # Delete the snapshot directories and tarballs created\n        for snap_dir in self.snapdirs:\n            self.du.rm(path=snap_dir, 
recursive=True, force=True)\n        for snap_tar in self.snaptars:\n            self.du.rm(path=snap_tar, sudo=True, force=True)\n\n        TestFunctional.tearDown(self)\n"
  },
  {
    "path": "test/tests/functional/pbs_soft_walltime.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\ndef cvt_duration(duration):\n    \"\"\"\n    convert string form of a duration (HH:MM:SS) into seconds\n    \"\"\"\n    h = 0\n    m = 0\n    sp = duration.split(':')\n    if len(sp) == 3:\n        h = int(sp[0])\n        m = int(sp[1])\n        s = int(sp[2])\n    elif len(sp) == 2:\n        m = int(sp[0])\n        s = int(sp[1])\n    else:\n        s = int(sp[0])\n\n    return h * 3600 + m * 60 + s\n\n\nclass TestSoftWalltime(TestFunctional):\n\n    \"\"\"\n    Test that the soft_walltime resource is being used properly and\n    being extended properly when exceeded\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        self.server.manager(\n            MGR_CMD_UNSET, SERVER, 'Resources_default.soft_walltime')\n        # Delete operators if added\n        self.server.manager(MGR_CMD_UNSET, SERVER, 'operators')\n\n    def tearDown(self):\n        if self.mom.is_cpuset_mom():\n            # reset the freq value\n            attrs = {'freq': 120}\n            self.server.manager(MGR_CMD_SET, HOOK, attrs, \"pbs_cgroups\")\n        TestFunctional.tearDown(self)\n\n    def stat_job(self, job):\n        \"\"\"\n        stat a job for its estimated.start_time and soft_walltime or walltime\n        :param job: Job to stat\n        :type job: string\n        \"\"\"\n        a = ['estimated.start_time', 
'Resource_List.soft_walltime',\n             'Resource_List.walltime']\n        # If we're in CLI mode, qstat returns times in a human readable format\n        # We need to turn it back into an epoch.  API mode will be the epoch.\n        Jstat = self.server.status(JOB, id=job, attrib=a)\n        wt = 0\n        if self.server.get_op_mode() == PTL_CLI:\n            strp = time.strptime(Jstat[0]['estimated.start_time'], '%c')\n            est = int(time.mktime(strp))\n            if 'Resource_List.soft_walltime' in Jstat[0]:\n                wt = cvt_duration(Jstat[0]['Resource_List.soft_walltime'])\n            elif 'Resource_List.walltime' in Jstat[0]:\n                wt = cvt_duration(Jstat[0]['Resource_List.walltime'])\n        else:\n            est = int(Jstat[0]['estimated.start_time'])\n            if 'Resource_List.soft_walltime' in Jstat[0]:\n                wt = int(Jstat[0]['Resource_List.soft_walltime'])\n            elif 'Resource_List.walltime' in Jstat[0]:\n                wt = int(Jstat[0]['Resource_List.walltime'])\n\n        return (est, wt)\n\n    def compare_estimates(self, baseline_job, jobs):\n        \"\"\"\n        Check if estimated start times are correct\n        :param baseline_job: initial top job to base times off of\n        :type baseline_job: string (job id)\n        :param jobs: calendared jobs\n        :type jobs: list of strings (job ids)\n        \"\"\"\n        est, wt = self.stat_job(baseline_job)\n        for j in jobs:\n            est2, wt2 = self.stat_job(j)\n            self.assertEqual(est + wt, est2)\n            est = est2\n            wt = wt2\n\n    def setup_holidays(self, prime_offset, nonprime_offset):\n        \"\"\"\n        Set up the holidays file for test execution.  This function will\n        first remove all entries in the holidays file and then add a year,\n        prime, and nonprime for all days.  
The prime and nonprime entries\n        will be offsets from the current time.\n\n        This all is necessary because there are some holidays set by default.\n        The test should be able to be run on any day of the year.  If it is\n        run on one of these holidays, it will be nonprime time only.\n        \"\"\"\n        # Delete all entries in the holidays file\n        self.scheduler.holidays_delete_entry('a')\n\n        lt = time.localtime(time.time())\n        self.scheduler.holidays_set_year(str(lt[0]))\n\n        now = int(time.time())\n        prime = time.strftime('%H%M', time.localtime(now + prime_offset))\n        nonprime = time.strftime('%H%M', time.localtime(now + nonprime_offset))\n\n        # set prime-time and nonprime-time for all days\n        self.scheduler.holidays_set_day('weekday', prime, nonprime)\n        self.scheduler.holidays_set_day('saturday', prime, nonprime)\n        self.scheduler.holidays_set_day('sunday', prime, nonprime)\n\n    def test_soft_walltime_perms(self):\n        \"\"\"\n        Test to see if soft_walltime can't be submitted with a job or\n        altered by a normal user or operator\n        \"\"\"\n        J = Job(TEST_USER, attrs={'Resource_List.soft_walltime': 10})\n        msg = 'Cannot set attribute, read only or insufficient permission'\n\n        jid = None\n        try:\n            jid = self.server.submit(J)\n        except PbsSubmitError as e:\n            self.assertTrue(msg in e.msg[0])\n\n        self.assertEqual(jid, None)\n\n        J = Job(TEST_USER)\n        jid = self.server.submit(J)\n        try:\n            self.server.alterjob(jid, {'Resource_List.soft_walltime': 10},\n                                 runas=TEST_USER)\n        except PbsAlterError as e:\n            self.assertTrue(msg in e.msg[0])\n\n        self.server.expect(JOB, 'Resource_List.soft_walltime',\n                           op=UNSET, id=jid)\n\n        operator = str(OPER_USER) + '@*'\n        
self.server.manager(MGR_CMD_SET, SERVER,\n                            {'operators': (INCR, operator)},\n                            sudo=True)\n\n        try:\n            self.server.alterjob(jid, {'Resource_List.soft_walltime': 10},\n                                 runas=OPER_USER)\n        except PbsAlterError as e:\n            self.assertTrue(msg in e.msg[0])\n\n        self.server.expect(JOB, 'Resource_List.soft_walltime',\n                           op=UNSET, id=jid)\n\n    def test_soft_walltime_STF(self):\n        \"\"\"\n        Test that STF jobs can't have soft_walltime\n        \"\"\"\n        msg = 'soft_walltime is not supported with Shrink to Fit jobs'\n        J = Job(attrs={'Resource_List.min_walltime': 120, ATTR_h: None})\n        jid = self.server.submit(J)\n        try:\n            self.server.alterjob(jid, {'Resource_List.soft_walltime': 10})\n        except PbsAlterError as e:\n            self.assertTrue(msg in e.msg[0])\n\n        self.server.expect(JOB, 'Resource_List.soft_walltime',\n                           op=UNSET, id=jid)\n\n        J = Job(TEST_USER, attrs={ATTR_h: None})\n        jid = self.server.submit(J)\n        self.server.alterjob(jid, {'Resource_List.soft_walltime': 10})\n        try:\n            self.server.alterjob(jid, {'Resource_List.min_walltime': 120})\n        except PbsAlterError as e:\n            self.assertTrue(msg in e.msg[0])\n\n        self.server.expect(JOB, 'Resource_List.min_walltime',\n                           op=UNSET, id=jid)\n\n        J = Job(TEST_USER, attrs={ATTR_h: None})\n        jid = self.server.submit(J)\n        a = {'Resource_List.soft_walltime': 10,\n             'Resource_List.min_walltime': 120}\n        try:\n            self.server.alterjob(jid, a)\n        except PbsAlterError as e:\n            self.assertTrue(msg in e.msg[0])\n\n        al = ['Resource_List.min_walltime', 'Resource_List.soft_walltime']\n        self.server.expect(JOB, al, op=UNSET, id=jid)\n\n    def 
test_soft_greater_hard(self):\n        \"\"\"\n        Test that a job's soft_walltime can't be greater than its hard walltime\n        \"\"\"\n        msg = 'Illegal attribute or resource value'\n        J = Job(TEST_USER, attrs={'Resource_List.walltime': 120, ATTR_h: None})\n        jid = self.server.submit(J)\n\n        try:\n            self.server.alterjob(jid, {'Resource_List.soft_walltime': 240})\n        except PbsAlterError as e:\n            self.assertTrue(msg in e.msg[0])\n\n        self.server.expect(JOB, 'Resource_List.soft_walltime',\n                           op=UNSET, id=jid)\n\n        J = Job(TEST_USER, {ATTR_h: None})\n        jid = self.server.submit(J)\n        self.server.alterjob(jid, {'Resource_List.soft_walltime': 240})\n        try:\n            self.server.alterjob(jid, {'Resource_List.walltime': 120})\n        except PbsAlterError as e:\n            self.assertTrue(msg in e.msg[0])\n\n        self.server.expect(JOB, 'Resource_List.walltime', op=UNSET, id=jid)\n\n        J = Job(TEST_USER, {ATTR_h: None})\n        jid = self.server.submit(J)\n        try:\n            self.server.alterjob(jid, {'Resource_List.walltime': 120,\n                                       'Resource_List.soft_walltime': 240})\n        except PbsAlterError as e:\n            self.assertTrue(msg in e.msg[0])\n\n        al = ['Resource_List.walltime', 'Resource_List.soft_walltime']\n        self.server.expect(JOB, al, op=UNSET, id=jid)\n\n    def test_direct_set_soft_walltime(self):\n        \"\"\"\n        Test setting soft_walltime directly\n        \"\"\"\n        hook_body = \\\n            \"\"\"import pbs\ne = pbs.event()\nj = e.job\nj.Resource_List[\"soft_walltime\"] = \\\npbs.duration(j.Resource_List[\"set_soft_walltime\"])\ne.accept()\n\"\"\"\n        self.server.manager(MGR_CMD_CREATE, RSC, {'type': 'long'},\n                            id='set_soft_walltime')\n\n        a = {'event': 'queuejob', 'enabled': 'True'}\n        
self.server.create_import_hook(\"que\", a, hook_body)\n\n        J = Job(TEST_USER, attrs={'Resource_List.set_soft_walltime': 5})\n        jid = self.server.submit(J)\n\n        self.server.expect(JOB, {'Resource_List.soft_walltime': 5}, id=jid)\n\n    def test_soft_walltime_extend(self):\n        \"\"\"\n        Test to see that soft_walltime is extended properly\n        \"\"\"\n        J = Job(TEST_USER)\n        jid = self.server.submit(J)\n\n        self.server.alterjob(jid, {'Resource_List.soft_walltime': 6})\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        time.sleep(7)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.server.expect(JOB, {'estimated.soft_walltime': 6}, op=GT, id=jid)\n\n        # Get the current soft_walltime\n        jstat = self.server.status(JOB, id=jid,\n                                   attrib=['estimated.soft_walltime'])\n        est_soft_walltime = jstat[0]['estimated.soft_walltime']\n\n        time.sleep(7)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        # Check if soft_walltime extended\n        self.server.expect(JOB, {'estimated.soft_walltime':\n                                 est_soft_walltime}, op=GT, id=jid)\n\n    def test_soft_walltime_extend_hook(self):\n        \"\"\"\n        Test to see that soft_walltime is extended properly when submitted\n        through a queue job hook\n        \"\"\"\n        hook_body = \\\n            \"\"\"import pbs\ne = pbs.event()\ne.job.Resource_List[\"soft_walltime\"] = pbs.duration(5)\ne.accept()\n\"\"\"\n        a = {'event': 'queuejob', 'enabled': 'True'}\n        self.server.create_import_hook(\"que\", a, hook_body)\n\n        J = Job(TEST_USER)\n        jid = self.server.submit(J)\n\n        self.server.expect(JOB, {'Resource_List.soft_walltime': 5}, id=jid)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        time.sleep(6)\n        self.server.manager(MGR_CMD_SET, 
SERVER, {'scheduling': 'True'})\n\n        self.server.expect(JOB, {'estimated.soft_walltime': 5}, op=GT, id=jid)\n\n        # Get the current soft_walltime\n        jstat = self.server.status(JOB, id=jid,\n                                   attrib=['estimated.soft_walltime'])\n        est_soft_walltime = jstat[0]['estimated.soft_walltime']\n\n        time.sleep(6)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.server.expect(JOB, {'estimated.soft_walltime':\n                                 est_soft_walltime}, op=GT, id=jid)\n\n    def test_soft_then_hard(self):\n        \"\"\"\n        Test to see if a job has both a soft and a hard walltime, that\n        the job's soft_walltime is not extended past its hard walltime.\n        It should first extend once and then extend to its hard walltime\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'job_history_enable': 'True'})\n\n        J = Job(TEST_USER,\n                attrs={'Resource_List.ncpus': 1, 'Resource_List.walltime': 16})\n        jid = self.server.submit(J)\n\n        self.server.alterjob(jid, {'Resource_List.soft_walltime': 6})\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        time.sleep(7)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.server.expect(JOB, {'estimated.soft_walltime': 6}, op=GT, id=jid)\n\n        time.sleep(7)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.server.expect(JOB, {'estimated.soft_walltime': 16},\n                           offset=4, extend='x', id=jid)\n\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid)\n\n    def test_soft_before_dedicated(self):\n        \"\"\"\n        Make sure that if a job's soft_walltime won't complete before\n        dedicated time, the job does not start\n        \"\"\"\n\n        now = int(time.time())\n        
self.scheduler.add_dedicated_time(start=now + 120, end=now + 2500)\n\n        J = Job(TEST_USER)\n        J.set_sleep_time(200)\n        jid = self.server.submit(J)\n        self.server.alterjob(jid, {'Resource_List.soft_walltime': 180})\n        comment = 'Not Running: Job would cross dedicated time boundary'\n        self.server.expect(JOB, {'comment': comment}, id=jid)\n\n    def test_soft_extend_dedicated(self):\n        \"\"\"\n        Have a job with a soft_walltime extend into dedicated time and see\n        the job continue running like normal\n        \"\"\"\n\n        # Dedicated time is in the granularity of minutes.  This can't be set\n        # any shorter without making it dedicated time right now.\n        now = int(time.time())\n        self.scheduler.add_dedicated_time(start=now + 70, end=now + 180)\n        J = Job(TEST_USER, {'Resource_List.walltime': 180})\n        jid = self.server.submit(J)\n        self.server.alterjob(jid, {'Resource_List.soft_walltime': 5})\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        self.logger.info(\"Waiting until dedicated time starts\")\n        time.sleep(61)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.server.expect(JOB, {'estimated.soft_walltime': 65},\n                           op=GE, id=jid)\n\n    def test_soft_before_prime(self):\n        \"\"\"\n        Make sure that if a job's soft_walltime won't complete before\n        the prime boundary, the job does not start\n        \"\"\"\n        self.scheduler.set_sched_config({'backfill_prime': 'True'})\n\n        self.setup_holidays(3600, 7200)\n\n        J = Job(TEST_USER)\n        jid = self.server.submit(J)\n        self.server.alterjob(jid, {'Resource_List.soft_walltime': 5400})\n        comment = 'Not Running: Job will cross into primetime'\n        self.server.expect(JOB, {'comment': comment}, id=jid)\n\n    def test_soft_backfill_prime(self):\n        \"\"\"\n        Test if soft_walltime is used to see if a job can run before\n        the next prime boundary\n        \"\"\"\n        self.scheduler.set_sched_config({'backfill_prime': 'True'})\n\n        self.setup_holidays(60, 3600)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        J = Job(TEST_USER, {'Resource_List.walltime': 300})\n        jid = self.server.submit(J)\n        self.server.alterjob(jid, {'Resource_List.soft_walltime': 5})\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n        self.logger.info(\"Waiting until prime time starts.\")\n        time.sleep(61)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.expect(JOB, {'estimated.soft_walltime': 65}, op=GE,\n                           id=jid)\n\n    def test_resv_conf_soft(self):\n        \"\"\"\n        Test that there is no change in the reservation behavior with\n        soft_walltime set on jobs with no hard walltime set\n        \"\"\"\n\n        a = {'resources_available.ncpus': 4}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        J = Job(TEST_USER, attrs={'Resource_List.ncpus': 4})\n        jid = self.server.submit(J)\n        self.server.alterjob(jid, {'Resource_List.soft_walltime': 5})\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        now = time.time()\n        a = {'Resource_List.ncpus': 1, 'reserve_start': now + 10,\n             'reserve_end': now + 130}\n        R = Reservation(TEST_USER, attrs=a)\n        rid = self.server.submit(R)\n        self.server.log_match(rid + ';reservation deleted', max_attempts=5)\n\n    def test_resv_conf_soft_with_hard(self):\n        \"\"\"\n        Test that there is no change in the reservation behavior with\n        soft_walltime set on jobs with a hard walltime set.  
The soft_walltime\n        should be ignored and only the hard walltime should be used.\n        \"\"\"\n\n        a = {'resources_available.ncpus': 4}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        now = int(time.time())\n        J = Job(TEST_USER, attrs={'Resource_List.ncpus': 4,\n                                  'Resource_List.walltime': 120})\n        jid = self.server.submit(J)\n        self.server.alterjob(jid, {'Resource_List.soft_walltime': 5})\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        a = {'Resource_List.ncpus': 1, 'reserve_start': now + 60,\n             'reserve_end': now + 250}\n        R = Reservation(TEST_USER, attrs=a)\n        rid = self.server.submit(R)\n        self.server.log_match(rid + ';reservation deleted', max_attempts=5)\n\n    def test_resv_job_soft(self):\n        \"\"\"\n        Test to see that a job with a soft walltime which would \"end\" before\n        a reservation starts does not start.  
It would interfere with the\n        reservation.\n        \"\"\"\n        a = {'resources_available.ncpus': 4}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        now = int(time.time())\n\n        a = {'Resource_List.ncpus': 4, 'reserve_start': now + 120,\n             'reserve_end': now + 240}\n        R = Reservation(TEST_USER, attrs=a)\n        rid = self.server.submit(R)\n        self.server.expect(RESV,\n                           {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')},\n                           id=rid)\n\n        a = {'Resource_List.ncpus': 4, ATTR_h: None}\n        J = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(J)\n        self.server.alterjob(jid, {'Resource_List.soft_walltime': 60})\n        self.server.rlsjob(jid, 'u')\n        a = {ATTR_state: 'Q', ATTR_comment:\n             'Not Running: Job would conflict with reservation or top job'}\n        self.server.expect(JOB, a, id=jid, attrop=PTL_AND)\n\n    def test_resv_job_soft_hard(self):\n        \"\"\"\n        Test to see that a job with a soft walltime and a hard walltime does\n        not interfere with a confirmed reservation.  
The soft walltime would\n        have the job \"end\" before the reservation starts, but the hard\n        walltime would not.\n        \"\"\"\n        a = {'resources_available.ncpus': 4}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        now = int(time.time())\n\n        a = {'Resource_List.ncpus': 4, 'reserve_start': now + 120,\n             'reserve_end': now + 240}\n        R = Reservation(TEST_USER, attrs=a)\n        rid = self.server.submit(R)\n        self.server.expect(RESV,\n                           {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')},\n                           id=rid)\n\n        a = {'Resource_List.ncpus': 4,\n             'Resource_List.walltime': 150, ATTR_h: None}\n        J = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(J)\n        self.server.alterjob(jid, {'Resource_List.soft_walltime': 60})\n        self.server.rlsjob(jid, 'u')\n        a = {ATTR_state: 'Q', ATTR_comment:\n             'Not Running: Job would conflict with reservation or top job'}\n        self.server.expect(JOB, a, id=jid, attrop=PTL_AND)\n\n    def test_topjob(self):\n        \"\"\"\n        Test that soft_walltime is used for calendaring of topjobs\n        Submit 4 jobs:\n        Job1 has a soft_walltime=150 and runs now\n        Job2 has a soft_walltime=150 and gets added to the calendar at now+150\n        Job3 has a soft_walltime=150 and gets added to the calendar at now+300\n        Job4 has a soft_walltime=150 and gets added to the calendar at now+450\n        \"\"\"\n        self.scheduler.set_sched_config({'strict_ordering': 'True'})\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {ATTR_backfill_depth: 3})\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        J = Job(TEST_USER, {'Resource_List.walltime': 300})\n        jid1 = self.server.submit(J)\n        self.server.alterjob(jid1, {'Resource_List.soft_walltime': 150})\n\n        J = Job(TEST_USER, {'Resource_List.walltime': 300})\n        jid2 = self.server.submit(J)\n        self.server.alterjob(jid2, {'Resource_List.soft_walltime': 150})\n\n        J = Job(TEST_USER, {'Resource_List.walltime': 300})\n        jid3 = self.server.submit(J)\n        self.server.alterjob(jid3, {'Resource_List.soft_walltime': 150})\n\n        J = Job(TEST_USER, {'Resource_List.walltime': 300})\n        jid4 = self.server.submit(J)\n        self.server.alterjob(jid4, {'Resource_List.soft_walltime': 150})\n\n        now = time.time()\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.scheduler.log_match('Leaving Scheduling Cycle', starttime=now,\n                                 max_attempts=20)\n\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n        self.compare_estimates(jid2, [jid3, jid4])\n\n    def test_topjob2(self):\n        \"\"\"\n        Test a mixture of soft_walltime and walltime used in the calendar\n        Submit 5 jobs:\n        Job1 has a soft_walltime=150 and runs now\n        Job2 has a soft_walltime=150 and gets added to the calendar at now+150\n        Job3 has a soft_walltime=150 and gets added to the calendar at now+300\n        Job4 has a walltime=300 and gets added to the calendar at now+450\n        Job5 gets added to the calendar at now+750\n        \"\"\"\n        self.scheduler.set_sched_config({'strict_ordering': 'True'})\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {ATTR_backfill_depth: 4})\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        J = Job(TEST_USER, {'Resource_List.walltime': 300})\n        jid1 = self.server.submit(J)\n        self.server.alterjob(jid1, {'Resource_List.soft_walltime': 150})\n\n        J = Job(TEST_USER, 
{'Resource_List.walltime': 300})\n        jid2 = self.server.submit(J)\n        self.server.alterjob(jid2, {'Resource_List.soft_walltime': 150})\n\n        J = Job(TEST_USER, {'Resource_List.walltime': 300})\n        jid3 = self.server.submit(J)\n        self.server.alterjob(jid3, {'Resource_List.soft_walltime': 150})\n\n        J = Job(TEST_USER, {'Resource_List.walltime': 300})\n        jid4 = self.server.submit(J)\n\n        J = Job(TEST_USER, {'Resource_List.walltime': 300})\n        jid5 = self.server.submit(J)\n\n        now = time.time()\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.scheduler.log_match('Leaving Scheduling Cycle', starttime=now)\n\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n        self.compare_estimates(jid2, [jid3, jid4, jid5])\n\n    def test_filler_job(self):\n        \"\"\"\n        Test to see if filler jobs will run based on their soft_walltime\n        Submit 3 jobs:\n        Job1 requests 1cpu and runs now\n        Job2 requests 2cpus gets added to the calendar at now+300\n        Job3 requests 1cpu and has a soft_walltime=150 and walltime=450\n        Job3 should run because its soft_walltime will finish before now+300\n        \"\"\"\n        self.scheduler.set_sched_config({'strict_ordering': 'True'})\n        a = {'resources_available.ncpus': 2}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        J1 = Job(TEST_USER, {'Resource_List.walltime': 300,\n                             'Resource_List.ncpus': 1})\n        jid1 = self.server.submit(J1)\n\n        J2 = Job(TEST_USER, {'Resource_List.walltime': 300,\n                             'Resource_List.ncpus': 2})\n        jid2 = self.server.submit(J2)\n\n        J3 = Job(TEST_USER, {'Resource_List.walltime': 450,\n                             'Resource_List.ncpus': 1})\n        jid3 = self.server.submit(J3)\n       
 self.server.alterjob(jid3, {'Resource_List.soft_walltime': 150})\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid3)\n\n    def test_preempt_order(self):\n        \"\"\"\n        Test if soft_walltime is used for preempt_order.  It should be used\n        to calculate percent done and also if the soft_walltime is exceeded,\n        the percent done should remain at 100%\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SCHED, {'preempt_order': \"R 10 S\"},\n                            runas=ROOT_USER)\n        a = {'resources_available.ncpus': 2}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        a = {'queue_type': 'Execution', 'enabled': 'True',\n             'started': 'True', 'Priority': 150}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='expressq')\n\n        a = {'Resource_List.walltime': 600}\n        J1 = Job(TEST_USER, attrs=a)\n        jid1 = self.server.submit(J1)\n        a = {'Resource_List.soft_walltime': 45}\n        self.server.alterjob(jid1, a)\n\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        # test preempt_order with percentage < 90.  jid1 should be requeued.\n        express_a = {'Resource_List.ncpus': 2, ATTR_queue: 'expressq'}\n        J2 = Job(TEST_USER, attrs=express_a)\n        jid2 = self.server.submit(J2)\n\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid1)\n\n        self.server.deljob(jid2, wait=True)\n\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        # preempt_order percentage done is based on resources_used.walltime\n        # this is only periodically updated.  
Sleep until half way through\n        # the extended soft_walltime to make sure we're over 100%\n        self.logger.info(\"Sleeping 60 seconds to accumulate \"\n                         \"resources_used.walltime\")\n        time.sleep(60)\n\n        J3 = Job(TEST_USER, attrs=express_a)\n        jid3 = self.server.submit(J3)\n\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid3)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=jid1)\n\n    def test_soft_values_default(self):\n        \"\"\"\n        Test to verify that soft_walltime will only take integer/long type\n        value\n        \"\"\"\n\n        msg = 'Illegal attribute or resource value'\n        try:\n            self.server.manager(\n                MGR_CMD_SET, SERVER, {'resources_default.soft_walltime': '0'})\n        except PbsManagerError as e:\n            self.assertTrue(msg in e.msg[0])\n\n        try:\n            self.server.manager(\n                MGR_CMD_SET, SERVER,\n                {'resources_default.soft_walltime': '00:00:00'})\n        except PbsManagerError as e:\n            self.assertTrue(msg in e.msg[0])\n\n        try:\n            self.server.manager(\n                MGR_CMD_SET, SERVER,\n                {'resources_default.soft_walltime': 'abc'})\n        except PbsManagerError as e:\n            self.assertTrue(msg in e.msg[0])\n\n        try:\n            self.server.manager(\n                MGR_CMD_SET, SERVER,\n                {'resources_default.soft_walltime': '01:20:aa'})\n        except PbsManagerError as e:\n            self.assertTrue(msg in e.msg[0])\n\n        try:\n            self.server.manager(MGR_CMD_SET, SERVER, {\n                                'resources_default.soft_walltime':\n                                '1000000000000000000000000'})\n        except PbsManagerError as e:\n            self.assertTrue(msg in e.msg[0])\n\n        try:\n            self.server.manager(\n                MGR_CMD_SET, SERVER,\n                
{'resources_default.soft_walltime': '-1'})\n        except PbsManagerError as e:\n            self.assertTrue(msg in e.msg[0])\n\n        try:\n            self.server.manager(\n                MGR_CMD_SET, SERVER,\n                {'resources_default.soft_walltime': '00.10'})\n        except PbsManagerError as e:\n            self.assertTrue(msg in e.msg[0])\n\n        self.server.manager(\n            MGR_CMD_SET, SERVER,\n            {'resources_default.soft_walltime': '00:01:00'})\n\n    def test_soft_runjob_hook(self):\n        \"\"\"\n        Test that soft walltime is set by runjob hook\n        \"\"\"\n\n        hook_body = \\\n            \"\"\"import pbs\ne = pbs.event()\ne.job.Resource_List[\"soft_walltime\"] = pbs.duration(5)\ne.accept()\n\"\"\"\n        a = {'event': 'runjob', 'enabled': 'True'}\n        self.server.create_import_hook(\"que\", a, hook_body)\n\n        J = Job(TEST_USER)\n        jid = self.server.submit(J)\n\n        self.server.expect(JOB, {'Resource_List.soft_walltime': 5}, id=jid)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n    def test_soft_modifyjob_hook(self):\n        \"\"\"\n        Test that soft walltime is set by modifyjob hook\n        \"\"\"\n\n        hook_body = \\\n            \"\"\"import pbs\ne = pbs.event()\ne.job.Resource_List[\"soft_walltime\"] = pbs.duration(15)\ne.accept()\n\"\"\"\n        a = {'event': 'modifyjob', 'enabled': 'True'}\n        self.server.create_import_hook(\"que\", a, hook_body)\n\n        J = Job(TEST_USER)\n        jid = self.server.submit(J)\n\n        self.server.expect(\n            JOB, 'Resource_List.soft_walltime', op=UNSET, id=jid)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        self.server.alterjob(jid, {'Resource_List.soft_walltime': 5})\n        self.server.expect(JOB, {'Resource_List.soft_walltime': 15}, id=jid)\n\n    def test_walltime_default(self):\n        \"\"\"\n        Test soft walltime behavior with hard walltime is same\n        
even when set via resources_default\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'resources_default.soft_walltime': '15'})\n\n        J = Job(TEST_USER, attrs={'Resource_List.walltime': 15})\n        jid = self.server.submit(J)\n        self.server.expect(JOB, {'Resource_List.soft_walltime': 15}, id=jid)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        self.server.deljob(jid, wait=True)\n\n        J = Job(TEST_USER, attrs={'Resource_List.walltime': 16})\n        jid1 = self.server.submit(J)\n        self.server.expect(JOB, {'Resource_List.soft_walltime': 15}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.deljob(jid1, wait=True)\n\n        # The following piece is commented out due to PP-1058\n        # try:\n        #    J = Job(TEST_USER, attrs={'Resource_List.walltime': 10})\n        #    jid1 = self.server.submit(J)\n        # except PtlSubmitError as e:\n        #    self.assertTrue(\"illegal attribute or resource value\" in e.msg[0])\n        # self.assertEqual(jid1, None)\n\n    def test_soft_held(self):\n        \"\"\"\n        Test that if a job is held, its soft_walltime does not get extended\n        \"\"\"\n        J = Job(TEST_USER, attrs={'Resource_List.walltime': '100'})\n        jid = self.server.submit(J)\n        self.server.alterjob(jid, {'Resource_List.soft_walltime': 7})\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        self.logger.info(\n            \"Sleep to let soft_walltime get extended\")\n        time.sleep(10)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.server.expect(JOB, {'estimated.soft_walltime': 7}, op=GT,\n                           id=jid)\n\n        # Save the soft_walltime before holding the job\n        jstat = self.server.status(JOB, id=jid,\n                                   attrib=['estimated.soft_walltime'])\n        est_soft_walltime = 
jstat[0]['estimated.soft_walltime']\n\n        self.server.holdjob(jid, 'u')\n        self.server.rerunjob(jid)\n        self.server.expect(JOB, {'job_state': 'H'}, id=jid)\n\n        self.logger.info(\n            \"Sleep to verify that soft_walltime: %s\"\n            \" doesn't change while job is held\" % est_soft_walltime)\n        time.sleep(10)\n        self.server.expect(JOB, {'estimated.soft_walltime':\n                                 est_soft_walltime}, id=jid)\n\n        # release the job and look for the soft_walltime again\n        self.server.rlsjob(jid, 'u')\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.expect(JOB, {'job_state': 'R', 'estimated.soft_walltime':\n                                 est_soft_walltime}, attrop=PTL_AND, id=jid)\n\n        # Wait for some more time and verify that soft_walltime is\n        # extending again\n        self.logger.info(\n            \"Sleep enough to let soft_walltime get extended again\"\n            \" since the walltime was reset to 0\")\n        time.sleep(17)\n        self.server.expect(JOB, {'estimated.soft_walltime': est_soft_walltime},\n                           op=GT, id=jid)\n\n    def test_soft_less_cput(self):\n        \"\"\"\n        Test that soft_walltime has no impact on the cput enforcement limit\n        \"\"\"\n\n        script = \"\"\"\ni=0\nwhile [ 1 ]\ndo\n    sleep 0.125;\n    dd if=/dev/zero of=/dev/null;\ndone\n\"\"\"\n        # If it is a cpuset mom, the cgroups hook relies on the periodic hook\n        # to update cput, so make the periodic hook run more often for the\n        # purpose of this test.\n        if self.mom.is_cpuset_mom():\n            attrs = {'freq': 1}\n            self.server.manager(MGR_CMD_SET, HOOK, attrs, \"pbs_cgroups\")\n            # cause the change to take effect now\n            self.mom.restart()\n\n        j1 = Job(TEST_USER, {'Resource_List.cput': 5})\n        j1.create_script(body=script)\n        jid = 
self.server.submit(j1)\n        self.server.alterjob(jid, {'Resource_List.soft_walltime': 300})\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        self.logger.info(\"Sleep 10 secs waiting for cput to cause the\"\n                         \" job to be deleted\")\n        time.sleep(10)\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid)\n\n    def test_soft_walltime_resv(self):\n        \"\"\"\n        Submit a job with soft walltime inside a reservation\n        \"\"\"\n\n        now = int(time.time())\n\n        a = {'Resource_List.ncpus': 1, 'reserve_start': now + 10,\n             'reserve_end': now + 20}\n        R = Reservation(TEST_USER, attrs=a)\n        rid = self.server.submit(R)\n        self.server.expect(RESV,\n                           {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')},\n                           id=rid)\n        r1 = rid.split('.')[0]\n\n        j1 = Job(TEST_USER, attrs={ATTR_queue: r1})\n        jid = self.server.submit(j1)\n\n        # Set soft walltime to greater than reservation end time\n        self.server.alterjob(jid, {'Resource_List.soft_walltime': 300})\n        self.server.expect(JOB, {'Resource_List.soft_walltime': 300}, id=jid)\n\n        # verify that the job gets deleted when reservation ends\n        self.server.expect(\n            JOB, 'queue', op=UNSET, id=jid, offset=20)\n\n    def test_restart_server(self):\n        \"\"\"\n        Test that on server restart soft walltime is not reset\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'job_history_enable': 'True'})\n\n        hook_body = \\\n            \"\"\"import pbs\ne = pbs.event()\ne.job.Resource_List[\"soft_walltime\"] = pbs.duration(8)\ne.accept()\n\"\"\"\n        a = {'event': 'queuejob', 'enabled': 'True'}\n        self.server.create_import_hook(\"que\", a, hook_body)\n\n        J = Job(TEST_USER)\n        jid = self.server.submit(J)\n\n        self.server.expect(JOB, 
{'Resource_List.soft_walltime': 8}, id=jid)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        self.logger.info(\"Wait until the soft_walltime is extended once\")\n        time.sleep(9)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.server.expect(JOB, {'estimated.soft_walltime': 8}, op=GT,\n                           id=jid)\n\n        self.server.restart()\n\n        self.server.expect(JOB, {'Resource_List.soft_walltime': 8}, id=jid)\n        self.server.expect(JOB, {'estimated.soft_walltime': 8}, op=GT,\n                           id=jid)\n\n        # Get the current soft_walltime\n        jstat = self.server.status(JOB, id=jid,\n                                   attrib=['estimated.soft_walltime'])\n        est_soft_walltime = jstat[0]['estimated.soft_walltime']\n\n        # Delete the job and verify that estimated.soft_walltime is set\n        # for job history\n        self.server.deljob(jid, wait=True)\n        self.server.expect(JOB,\n                           {'job_state': 'F',\n                            'estimated.soft_walltime':\n                            est_soft_walltime}, op=GE,\n                           extend='x', attrop=PTL_AND, id=jid)\n\n    def test_soft_job_array(self):\n        \"\"\"\n        Test that soft walltime works the same way with subjobs as\n        with regular jobs\n        \"\"\"\n\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        J = Job(TEST_USER, attrs={ATTR_J: '1-5',\n                                  'Resource_List.walltime': 15})\n        jid = self.server.submit(J)\n        self.server.alterjob(jid, {'Resource_List.soft_walltime': 5})\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.server.expect(\n            JOB, {'job_state': 'B', 'Resource_List.soft_walltime': 
5}, id=jid)\n        subjob1 = jid.replace('[]', '[1]')\n        self.server.expect(\n            JOB, {'job_state': 'R', 'Resource_List.soft_walltime': 5},\n            id=subjob1)\n\n        self.logger.info(\"Wait for 6s and make sure that subjob1 is not \"\n                         \"deleted even past soft_walltime\")\n        time.sleep(6)\n        self.server.expect(JOB, {'job_state': 'R'}, id=subjob1)\n\n        # Make sure subjob1 is deleted once it is past its 15s walltime limit\n        self.server.expect(JOB, {'job_state': 'X'}, id=subjob1,\n                           offset=9)\n"
  },
  {
    "path": "test/tests/functional/pbs_stf.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestSTF(TestFunctional):\n\n    \"\"\"\n    The goal for this test suite is to contain basic STF tests that use\n    the following timed events that cause the STF job to shrink its walltime:\n\n    - dedicated time\n\n    - primetime (with backfill_prime)\n\n    - reservations\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n    def set_primetime(self, ptime_start, ptime_end):\n        \"\"\"\n        Set primetime to start at ptime_start and end at ptime_end.\n        Remove all default holidays because they will cause a test to fail on\n        a holiday\n        \"\"\"\n        # Delete all entries in the holidays file\n        self.scheduler.holidays_delete_entry('a')\n\n        # Without the YEAR entry primetime is considered to be 24 hours.\n        p_yy = time.strftime('%Y', time.localtime(ptime_start))\n        self.scheduler.holidays_set_year(p_yy)\n\n        p_day = 'weekday'\n        p_hhmm = time.strftime('%H%M', time.localtime(ptime_start))\n        np_hhmm = time.strftime('%H%M', time.localtime(ptime_end))\n        self.scheduler.holidays_set_day(p_day, p_hhmm, np_hhmm)\n\n        p_day = 'saturday'\n        self.scheduler.holidays_set_day(p_day, p_hhmm, 
np_hhmm)\n\n        p_day = 'sunday'\n        self.scheduler.holidays_set_day(p_day, p_hhmm, np_hhmm)\n\n    def submit_resv(self, resv_start, ncpus, resv_dur):\n        \"\"\"\n        Submit a reservation and expect it to be confirmed\n        \"\"\"\n        a = {'Resource_List.select': '1:ncpus=%d' % ncpus,\n             'Resource_List.place': 'free',\n             'reserve_start': int(resv_start),\n             'reserve_duration': int(resv_dur)\n             }\n        r = Reservation(TEST_USER, attrs=a)\n        rid = self.server.submit(r)\n\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, a, id=rid)\n\n    def submit_jq(self, ncpus):\n        \"\"\"\n        Submit a job and expect it to stay queued\n        \"\"\"\n        a = {'Resource_List.select': '1:ncpus=%d' % ncpus,\n             'Resource_List.place': 'free',\n             'Resource_List.walltime': '01:00:00'\n             }\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, ATTR_comment, op=SET)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid)\n\n    def test_t_4_1_3(self):\n        \"\"\"\n        Test shrink to fit by setting a dedicated time that started 20 minutes\n        ago with a duration of 2 hours.  Submit a job that can run for as\n        short as 1 minute and as long as 20 hours.  Submit a second job to the\n        dedicated time queue.  
Expect the first job to be in Q state and the\n        second job in R state with a walltime that's less than or equal to\n        1 hr 40 mins and greater than or equal to 1 min.\n        \"\"\"\n        qname = 'ded_time'\n\n        a = {'queue_type': 'execution', 'enabled': 'True', 'started': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, qname)\n        now = int(time.time())\n        self.scheduler.add_dedicated_time(start=now - 1200, end=now + 6000)\n\n        j = Job(TEST_USER)\n        a = {'Resource_List.max_walltime': '20:00:00',\n             'Resource_List.min_walltime': '00:01:00'}\n        j.set_attributes(a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n\n        a = {'queue': 'ded_time',\n             'Resource_List.max_walltime': '20:00:00',\n             'Resource_List.min_walltime': '00:01:00'}\n        j = Job(TEST_USER, attrs=a)\n        j2id = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=j2id)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n\n        attr = {'Resource_List.walltime': (LE, '01:40:00')}\n        self.server.expect(JOB, attr, id=j2id)\n\n        attr = {'Resource_List.walltime': (GE, '00:01:00')}\n        self.server.expect(JOB, attr, id=j2id)\n\n        sw = self.server.status(JOB, 'Resource_List.walltime', id=j2id)\n        wt = sw[0]['Resource_List.walltime']\n        wt2 = wt\n\n        # Make the walltime given by qstat agree with the format in\n        # sched_logs.  A leading '0' is removed in the hour string.\n        hh = wt.split(':')[0]\n        if len(hh) == 2 and hh[0] == '0':\n            wt2 = wt[1:]\n\n        msg = \"Job;%s;Job will run for duration=[%s|%s]\" % (j2id, wt, wt2)\n        self.scheduler.log_match(msg, regexp=True, max_attempts=5, interval=2)\n\n    def test_t_4_1_1(self):\n        \"\"\"\n        Test shrink to fit by setting a dedicated time that starts 1 hour\n        from now for 1 hour.  
Submit a job that can run for as low as 10 minutes\n        and as long as 10 hours.  Expect the job in R state with a walltime\n        that is less than or equal to 1 hour and greater than or equal to\n        10 minutes.\n        \"\"\"\n        now = int(time.time())\n        self.scheduler.add_dedicated_time(start=now + 3600, end=now + 7200)\n\n        a = {'Resource_List.max_walltime': '10:00:00',\n             'Resource_List.min_walltime': '00:10:00'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        attr = {'Resource_List.walltime': (LE, '01:00:00')}\n        self.server.expect(JOB, attr, id=jid)\n\n        attr = {'Resource_List.walltime': (GE, '00:10:00')}\n        self.server.expect(JOB, attr, id=jid)\n\n    def test_t_4_2_1(self):\n        \"\"\"\n        Test shrink to fit by setting primetime that starts 4 hours from now\n        and ends 12 hours from now and scheduler's backfill_prime is true.\n        A regular job is submitted which goes into Q state.  
Then a STF job\n        with a max_walltime of 10:00:00 is able to run with a shrunk walltime\n        of less than or equal to 4:00:00.\n        \"\"\"\n        now = time.time()\n        ptime_start = now + 14400\n        ptime_end = now + 43200\n\n        self.set_primetime(ptime_start, ptime_end)\n\n        self.scheduler.set_sched_config({'backfill_prime': 'True'})\n\n        a = {'Resource_List.ncpus': '1'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n\n        a2 = {'Resource_List.max_walltime': '10:00:00',\n              'Resource_List.min_walltime': '00:10:00'}\n        j = Job(TEST_USER, attrs=a2)\n        j2id = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=j2id)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n\n        attr = {'Resource_List.walltime': (LE, '04:00:00')}\n        self.server.expect(JOB, attr, id=j2id)\n\n        attr = {'Resource_List.walltime': (GE, '00:10:00')}\n        self.server.expect(JOB, attr, id=j2id)\n\n    def test_t_4_2_3(self):\n        \"\"\"\n        Test shrink to fit by setting primetime that starts 4 hours from now\n        and ends 12 hours from now and scheduler's backfill_prime is true and\n        prime_spill is 1 hour.  
A STF job with a min_walltime of 00:10:00 and\n        with a max_walltime of 10:00:00 gets queued with a shrunk walltime\n        of less than or equal to 05:00:00.\n        \"\"\"\n        now = time.time()\n        ptime_start = now + 14400\n        ptime_end = now + 43200\n\n        self.set_primetime(ptime_start, ptime_end)\n\n        self.scheduler.set_sched_config({'backfill_prime': 'True',\n                                         'prime_spill': '01:00:00'})\n\n        a2 = {'Resource_List.max_walltime': '10:00:00',\n              'Resource_List.min_walltime': '00:10:00'}\n        j = Job(TEST_USER, attrs=a2)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        attr = {'Resource_List.walltime': (LE, '05:00:00')}\n        self.server.expect(JOB, attr, id=jid)\n\n        attr = {'Resource_List.walltime': (GE, '00:10:00')}\n        self.server.expect(JOB, attr, id=jid)\n\n    def test_t_4_2_4(self):\n        \"\"\"\n        Test shrink to fit by setting primetime that started 22 minutes ago\n        and ends 5:38 hours from now and scheduler's backfill_prime is true and\n        prime_spill is 1 hour.  
A STF job with a min_walltime of 00:01:00 and\n        with a max_walltime of 20:00:00 gets queued with a shrunk walltime\n        of less than or equal to 06:38:00.\n        \"\"\"\n        now = time.time()\n        ptime_start = now - 1320\n        ptime_end = now + 20280\n\n        self.set_primetime(ptime_start, ptime_end)\n\n        self.scheduler.set_sched_config({'backfill_prime': 'True',\n                                         'prime_spill': '01:00:00'})\n\n        a = {'Resource_List.max_walltime': '20:00:00',\n             'Resource_List.min_walltime': '00:01:00'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        attr = {'Resource_List.walltime': (LE, '06:38:00')}\n        self.server.expect(JOB, attr, id=jid)\n\n        attr = {'Resource_List.walltime': (GE, '00:01:00')}\n        self.server.expect(JOB, attr, id=jid)\n\n    def test_t_4_3_1(self):\n        \"\"\"\n        Test shrink to fit by creating 16 reservations, say from R110 to R125,\n        with R117, R121, R124 having ncpus=3, all others having ncpus=2.\n        Duration of 10 min with 30 min difference between consecutive\n        reservation.\tA STF job will shrink its walltime to less than or\n        equal to 4 hours.\n        \"\"\"\n        a = {'resources_available.ncpus': 3}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        now = int(time.time())\n        resv_dur = 600\n\n        for i in range(1, 8):\n            resv_start = now + i * 1800\n            self.submit_resv(resv_start, 2, resv_dur)\n\n        resv_start = now + 8 * 1800\n        self.submit_resv(resv_start, 3, resv_dur)\n\n        for i in range(9, 12):\n            resv_start = now + i * 1800\n            self.submit_resv(resv_start, 2, resv_dur)\n\n        resv_start = now + 12 * 1800\n        self.submit_resv(resv_start, 3, resv_dur)\n\n        for i in range(13, 15):\n            
resv_start = now + i * 1800\n            self.submit_resv(resv_start, 2, resv_dur)\n\n        resv_start = now + 15 * 1800\n        self.submit_resv(resv_start, 3, resv_dur)\n\n        resv_start = now + 16 * 1800\n        self.submit_resv(resv_start, 2, resv_dur)\n\n        a = {'Resource_List.max_walltime': '10:00:00',\n             'Resource_List.min_walltime': '00:10:00'}\n        j = Job(TEST_USER, attrs=a)\n\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        attr = {'Resource_List.walltime': (LE, '04:00:00')}\n        self.server.expect(JOB, attr, id=jid)\n\n        attr = {'Resource_List.walltime': (GE, '00:10:00')}\n        self.server.expect(JOB, attr, id=jid)\n\n    def test_t_4_3_6(self):\n        \"\"\"\n        Test shrink to fit by creating one reservation having ncpus=1,\n        starting in 3 min. with a duration of two hours.  A preempted STF job\n        with min_walltime of 2 min. and max_walltime of 2 hours will stay\n        suspended after higher priority job goes away if its\n        min_walltime can't be satisfied.\n        \"\"\"\n        self.skip_test('Skipping test due to PP-1049')\n        qname = 'highp'\n\n        a = {'queue_type': 'execution', 'enabled': 'True', 'started': 'True',\n             'priority': '150'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, qname)\n\n        now = int(time.time())\n        resv_dur = 7200\n\n        resv_start = now + 180\n        d = self.submit_resv(resv_start, 1, resv_dur)\n        self.assertTrue(d)\n\n        a = {'Resource_List.max_walltime': '02:00:00',\n             'Resource_List.min_walltime': '00:02:00'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        attr = {'Resource_List.walltime': (LE, '00:03:00')}\n        self.server.expect(JOB, attr, id=jid)\n\n        attr = {'Resource_List.walltime': (GE, '00:02:00')}\n        
self.server.expect(JOB, attr, id=jid)\n\n        # The sleep below will leave less than 2 minutes window for jid\n        # after j2id is deleted.  The min_walltime of jid can't be\n        # satisfied and jid will stay in S state.\n        self.logger.info(\"Sleeping 65s to leave less than 2m before resv\")\n        time.sleep(65)\n\n        a = {'queue': 'highp', 'Resource_List.select': '1:ncpus=1',\n             'Resource_List.walltime': '00:01:00'}\n        j = Job(TEST_USER, attrs=a)\n        j2id = self.server.submit(j)\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=j2id)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid)\n\n        self.server.delete(j2id)\n\n        t = time.time()\n        a = {'scheduling': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.scheduler.log_match(\"Leaving Scheduling Cycle\", starttime=t,\n                                 max_attempts=5)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid)\n\n    def test_t_4_3_8(self):\n        \"\"\"\n        Test shrink to fit by submitting a STF job and then creating a\n        reservation having ncpus=1 that overlaps with the job.  
The\n        reservation is denied.\n        \"\"\"\n        a = {'Resource_List.max_walltime': '02:00:00',\n             'Resource_List.min_walltime': '00:02:00'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        now = int(time.time())\n        resv_start = now + 300\n        resv_dur = 7200\n\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.place': 'free',\n             'reserve_start': resv_start,\n             'reserve_duration': resv_dur\n             }\n        r = Reservation(TEST_USER, attrs=a)\n        rid1 = self.server.submit(r)\n        self.server.log_match(rid1 + \";reservation deleted\", max_attempts=10)\n\n        self.server.delete(jid, wait=True)\n\n        a = {'Resource_List.select': '1:ncpus=1'}\n        j = Job(TEST_USER, attrs=a)\n        j2id = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=j2id)\n\n        resv_start = now + 300\n        resv_dur = 7200\n\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.place': 'free',\n             'reserve_start': resv_start,\n             'reserve_duration': resv_dur\n             }\n        r = Reservation(TEST_USER, attrs=a)\n        rid2 = self.server.submit(r)\n\n        self.server.log_match(rid2 + \";reservation deleted\", max_attempts=10)\n\n    def test_t_4_4_1(self):\n        \"\"\"\n        Test shrink to fit by submitting top jobs as barrier.\n        A STF job will shrink its walltime relative to top jobs\n        \"\"\"\n        a = {'resources_available.ncpus': 3}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        a = {'strict_ordering': 'true ALL', 'backfill': 'true ALL'}\n        self.scheduler.set_sched_config(a)\n\n        a = {'backfill_depth': '20'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {'Resource_List.select': '1:ncpus=2',\n            
 'Resource_List.place': 'free',\n             'Resource_List.walltime': '01:00:00'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        for _ in range(1, 5):\n            self.submit_jq(2)\n\n        self.submit_jq(3)\n\n        for _ in range(6, 8):\n            self.submit_jq(2)\n\n        self.submit_jq(3)\n\n        for _ in range(9, 12):\n            self.submit_jq(2)\n\n        self.submit_jq(3)\n\n        for _ in range(13, 16):\n            self.submit_jq(2)\n\n        a = {'Resource_List.max_walltime': '10:00:00',\n             'Resource_List.min_walltime': '00:10:00',\n             'Resource_List.select': '1:ncpus=1'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        attr = {'Resource_List.walltime': (LE, '05:00:00')}\n        self.server.expect(JOB, attr, id=jid)\n\n    def test_t_4_5_1(self):\n        \"\"\"\n        Test shrink to fit by setting primetime that started 45 minutes ago\n        and ends 2:45 hours from now and dedicated time starting in 5 minutes\n        ending in 1:45 hours.  
A STF job with a min_walltime of 00:01:00 and\n        with a max_walltime of 20:00:00 gets queued with a shrunk walltime\n        of less than or equal to 00:05:00.\n        \"\"\"\n        now = int(time.time())\n        ptime_start = now - 2700\n        ptime_end = now + 9900\n\n        p_day = 'weekday'\n        p_hhmm = time.strftime('%H%M', time.localtime(ptime_start))\n        np_hhmm = time.strftime('%H%M', time.localtime(ptime_end))\n        self.scheduler.holidays_set_day(p_day, p_hhmm, np_hhmm)\n\n        p_day = 'saturday'\n        self.scheduler.holidays_set_day(p_day, p_hhmm, np_hhmm)\n\n        p_day = 'sunday'\n        self.scheduler.holidays_set_day(p_day, p_hhmm, np_hhmm)\n\n        self.scheduler.add_dedicated_time(start=now + 300, end=now + 6300)\n\n        a = {'Resource_List.max_walltime': '20:00:00',\n             'Resource_List.min_walltime': '00:01:00'}\n        j = Job(TEST_USER, attrs=a)\n        j2id = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=j2id)\n\n        attr = {'Resource_List.walltime': (LE, '00:05:00')}\n        self.server.expect(JOB, attr, id=j2id)\n\n    def test_t_4_6_1(self):\n        \"\"\"\n        Test shrink to fit by submitting a reservation and top jobs as\n        barriers. 
A STF job will shrink its walltime relative to top jobs\n        and reservations.\n        \"\"\"\n        a = {'resources_available.ncpus': 3}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n\n        self.scheduler.set_sched_config({'strict_ordering': 'True ALL'})\n\n        now = int(time.time())\n        resv_start = now + 4200\n        resv_dur = 900\n        self.submit_resv(resv_start, 3, resv_dur)\n\n        a = {'Resource_List.select': '1:ncpus=2',\n             'Resource_List.place': 'free',\n             'Resource_List.walltime': '00:15:00'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        a = {'Resource_List.select': '1:ncpus=3',\n             'Resource_List.place': 'free',\n             'Resource_List.walltime': '00:15:00'}\n        j = Job(TEST_USER, attrs=a)\n        jid2 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n\n        a = {'Resource_List.max_walltime': '02:00:00',\n             'Resource_List.min_walltime': '00:01:00',\n             'Resource_List.select': '1:ncpus=1'}\n        j = Job(TEST_USER, attrs=a)\n        jid3 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n\n        attr = {'Resource_List.walltime': (LE, '00:15:00')}\n        self.server.expect(JOB, attr, id=jid3)\n\n    def test_t_5_1_1(self):\n        \"\"\"\n        STF job's min/max_walltime relative to resources_min/max.walltime\n        setting on queue.\n        \"\"\"\n        a = {'resources_min.walltime': '00:10:00',\n             'resources_max.walltime': '10:00:00'}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, id=\"workq\")\n\n        a = {'Resource_List.max_walltime': '10:00:00',\n             'Resource_List.min_walltime': '00:09:00'}\n        j = Job(TEST_USER, attrs=a)\n\n        error_msg = 'Job violates queue and/or server resource limits'\n        try:\n   
         jid = self.server.submit(j)\n        except PbsSubmitError as e:\n            self.assertTrue(error_msg in e.msg[0])\n\n        a = {'Resource_List.max_walltime': '00:09:00',\n             'Resource_List.min_walltime': '00:09:00'}\n        j = Job(TEST_USER, attrs=a)\n\n        try:\n            jid = self.server.submit(j)\n        except PbsSubmitError as e:\n            self.assertTrue(error_msg in e.msg[0])\n\n        a = {'Resource_List.max_walltime': '11:00:00',\n             'Resource_List.min_walltime': '00:10:00'}\n        j = Job(TEST_USER, attrs=a)\n\n        try:\n            jid = self.server.submit(j)\n        except PbsSubmitError as e:\n            self.assertTrue(error_msg in e.msg[0])\n\n        a = {'Resource_List.max_walltime': '11:00:00',\n             'Resource_List.min_walltime': '11:00:00'}\n        j = Job(TEST_USER, attrs=a)\n        try:\n            jid = self.server.submit(j)\n        except PbsSubmitError as e:\n            self.assertTrue(error_msg in e.msg[0])\n\n        a = {'Resource_List.max_walltime': '10:00:00',\n             'Resource_List.min_walltime': '00:10:00'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.delete(jid)\n\n        a = {'Resource_List.max_walltime': '00:10:00',\n             'Resource_List.min_walltime': '00:10:00'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.delete(jid)\n\n        a = {'Resource_List.max_walltime': '10:00:00',\n             'Resource_List.min_walltime': '10:00:00'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.delete(jid)\n\n        a = {'Resource_List.max_walltime': '09:00:00',\n             'Resource_List.min_walltime': '00:11:00'}\n        j = 
Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n    def test_t_5_1_2(self):\n        \"\"\"\n        STF job's max_walltime relative to resources_max.walltime\n        setting on server.\n        \"\"\"\n        a = {'resources_max.walltime': '15:00:00'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {'Resource_List.max_walltime': '16:00:00',\n             'Resource_List.min_walltime': '00:20:00'}\n        j = Job(TEST_USER, attrs=a)\n\n        error_msg = 'Job violates queue and/or server resource limits'\n        try:\n            jid = self.server.submit(j)\n        except PbsSubmitError as e:\n            self.assertTrue(error_msg in e.msg[0])\n\n        a = {'Resource_List.max_walltime': '16:00:00',\n             'Resource_List.min_walltime': '16:00:00'}\n        j = Job(TEST_USER, attrs=a)\n\n        try:\n            jid = self.server.submit(j)\n        except PbsSubmitError as e:\n            self.assertTrue(error_msg in e.msg[0])\n\n        a = {'Resource_List.max_walltime': '15:00:00',\n             'Resource_List.min_walltime': '15:00:00'}\n        j = Job(TEST_USER, attrs=a)\n\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n    def test_t_5_2_1(self):\n        \"\"\"\n        Setting resources_max.min_walltime on a queue.\n        \"\"\"\n        a = {'resources_max.min_walltime': '10:00:00'}\n        try:\n            self.server.manager(MGR_CMD_SET, QUEUE, a, id=\"workq\")\n        except PbsManagerError as e:\n            self.assertTrue('Resource limits can not be set for the resource'\n                            in e.msg[0])\n\n    def test_t_5_2_2(self):\n        \"\"\"\n        Setting resources_max.min_walltime on the server.\n        \"\"\"\n        a = {'resources_max.min_walltime': '10:00:00'}\n        try:\n            self.server.manager(MGR_CMD_SET, SERVER, a)\n        except 
PbsManagerError as e:\n            self.assertTrue('Resource limits can not be set for the resource'\n                            in e.msg[0])\n\n    def test_t_6(self):\n        \"\"\"\n        Test to see that the min_walltime is not unset if the max_walltime\n        is attempted to be set less than the min.\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        a = {'Resource_List.min_walltime': 9, 'Resource_List.max_walltime': 60}\n        J = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(J)\n        try:\n            self.server.alterjob(jid, {'Resource_List.max_walltime': 3})\n        except PbsAlterError as e:\n            self.assertTrue('\\\"min_walltime\\\" can not be greater'\n                            ' than \\\"max_walltime\\\"' in e.msg[0])\n\n        self.server.expect(JOB, 'Resource_List.min_walltime', op=SET)\n        self.server.expect(JOB, 'Resource_List.max_walltime', op=SET)\n\n        try:\n            self.server.alterjob(jid, {'Resource_List.min_walltime': 180})\n        except PbsAlterError as e:\n            self.assertTrue('\\\"min_walltime\\\" can not be greater'\n                            ' than \\\"max_walltime\\\"' in e.msg[0])\n\n        self.server.expect(JOB, 'Resource_List.min_walltime', op=SET)\n        self.server.expect(JOB, 'Resource_List.max_walltime', op=SET)\n\n        try:\n            a = {'Resource_List.min_walltime': 60,\n                 'Resource_List.max_walltime': 30}\n            self.server.alterjob(jid, a)\n        except PbsAlterError as e:\n            self.assertTrue('\\\"min_walltime\\\" can not be greater'\n                            ' than \\\"max_walltime\\\"' in e.msg[0])\n\n        self.server.expect(JOB, 'Resource_List.min_walltime', op=SET)\n        self.server.expect(JOB, 'Resource_List.max_walltime', op=SET)\n"
  },
  {
    "path": "test/tests/functional/pbs_strict_ordering.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport os\nimport string\nimport time\n\nfrom tests.functional import *\n\n\nclass TestStrictOrderingAndBackfilling(TestFunctional):\n\n    \"\"\"\n    Test strict ordering when backfilling is turned off\n    \"\"\"\n    @timeout(1800)\n    def test_t1(self):\n\n        a = {'resources_available.ncpus': 4}\n        self.mom.create_vnodes(a, 1, usenatvnode=True)\n\n        rv = self.scheduler.set_sched_config(\n            {'round_robin': 'false all', 'by_queue': 'false all',\n             'strict_ordering': 'true all'})\n        self.assertTrue(rv)\n\n        a = {'backfill_depth': 0}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        j1 = Job(TEST_USER)\n        a = {'Resource_List.select': '1:ncpus=2',\n             'Resource_List.walltime': 9999}\n        j1.set_sleep_time(9999)\n        j1.set_attributes(a)\n        j1 = self.server.submit(j1)\n\n        j2 = Job(TEST_USER)\n        a = {'Resource_List.select': '1:ncpus=3',\n             'Resource_List.walltime': 9999}\n        j2.set_sleep_time(9999)\n        j2.set_attributes(a)\n        j2 = self.server.submit(j2)\n\n        j3 = Job(TEST_USER)\n        a = {'Resource_List.select': '1:ncpus=2',\n             'Resource_List.walltime': 9999}\n        j3.set_sleep_time(9999)\n        j3.set_attributes(a)\n        j3 = self.server.submit(j3)\n        rv = self.server.expect(\n            
JOB,\n            {'comment': 'Not Running: Job would break strict sorted order'},\n            id=j3,\n            offset=2,\n            max_attempts=2,\n            interval=2)\n        self.assertTrue(rv)\n    \"\"\"\n    Test strict ordering when queue backfilling is enabled and server\n    backfilling is off\n    \"\"\"\n\n    def test_t2(self):\n        rv = self.scheduler.set_sched_config(\n            {'by_queue': 'false prime', 'strict_ordering': 'true all'})\n        self.assertTrue(rv)\n        a = {'backfill_depth': 2}\n        self.server.manager(\n            MGR_CMD_SET, QUEUE, a, id='workq')\n        a = {\n            'queue_type': 'execution',\n            'started': 't',\n            'enabled': 't',\n            'backfill_depth': 1}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='wq2')\n        a = {\n            'queue_type': 'execution',\n            'started': 't',\n            'enabled': 't',\n            'backfill_depth': 0}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='wq3')\n        a = {'backfill_depth': 0}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {'resources_available.ncpus': 5}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        a = {'Resource_List.select': '1:ncpus=2', ATTR_queue: 'workq'}\n        j = Job(TEST_USER, a)\n        j.set_sleep_time(100)\n        j1id = self.server.submit(j)\n        j2id = self.server.submit(j)\n        j3id = self.server.submit(j)\n        a = {'Resource_List.select': '1:ncpus=1', ATTR_queue: 'wq2'}\n        j = Job(TEST_USER, a)\n        j.set_sleep_time(100)\n        j4id = self.server.submit(j)\n        a = {'Resource_List.select': '1:ncpus=1', ATTR_queue: 'wq3'}\n        j = Job(TEST_USER, a)\n        j.set_sleep_time(100)\n        j5id = self.server.submit(j)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        
self.server.expect(JOB,\n                           {'job_state': 'R'},\n                           id=j1id,\n                           max_attempts=30,\n                           interval=2)\n        self.server.expect(JOB,\n                           {'job_state': 'R'},\n                           id=j2id,\n                           max_attempts=30,\n                           interval=2)\n        self.server.expect(JOB,\n                           {'job_state': 'R'},\n                           id=j4id,\n                           max_attempts=30,\n                           interval=2)\n        self.server.expect(JOB,\n                           {'job_state': 'Q'},\n                           id=j5id,\n                           max_attempts=30,\n                           interval=2)\n\n    def test_zero_backfill_depth_on_queue(self):\n        \"\"\"\n        Test if scheduler tries to run a job when strict ordering is enabled\n        and backfill_depth is set to 0 on the queue\n        \"\"\"\n        a = {'resources_available.ncpus': 2}\n        self.mom.create_vnodes(a, 1, usenatvnode=True)\n\n        rv = self.scheduler.set_sched_config(\n            {'round_robin': 'false all', 'by_queue': 'false all',\n             'strict_ordering': 'true all'})\n        self.assertTrue(rv)\n\n        a = {'backfill_depth': 0}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, id=\"workq\")\n\n        a = {'scheduling': 'False'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {'Resource_List.select': '1:ncpus=1'}\n        j1 = Job(TEST_USER)\n        j1.set_attributes(a)\n        jid1 = self.server.submit(j1)\n\n        a = {'Resource_List.select': '1:ncpus=2'}\n        j2 = Job(TEST_USER)\n        j2.set_attributes(a)\n        jid2 = self.server.submit(j2)\n\n        a = {'Resource_List.select': '1:ncpus=1'}\n        j3 = Job(TEST_USER)\n        j3.set_attributes(a)\n        jid3 = self.server.submit(j3)\n\n        a = {'scheduling': 'True'}\n  
      self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        job_comment = \"Not Running: Job would break strict sorted order\"\n        self.server.expect(JOB, {'comment': job_comment}, id=jid3, offset=2,\n                           max_attempts=2, interval=2)\n\n        # Now try the same scenario with backfill_depth set to one and check\n        # that the first and third jobs run but the second gets calendared.\n        # Since we want the third job to backfill around the second, we need\n        # to make sure its walltime is less than that of the first job.\n        a = {'scheduling': 'False'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.server.deljob([jid1, jid2, jid3])\n\n        a = {'backfill_depth': 1}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, id=\"workq\")\n\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.walltime': '100'}\n        j4 = Job(TEST_USER)\n        j4.set_attributes(a)\n        jid4 = self.server.submit(j4)\n\n        a = {'Resource_List.select': '1:ncpus=2'}\n        j5 = Job(TEST_USER)\n        j5.set_attributes(a)\n        jid5 = self.server.submit(j5)\n\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.walltime': '50'}\n        j6 = Job(TEST_USER)\n        j6.set_attributes(a)\n        jid6 = self.server.submit(j6)\n\n        a = {'scheduling': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid4)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid6)\n        self.scheduler.log_match(jid5 + \";Job is a top job\")\n\n    def test_zero_backfill_depth_on_one_queue(self):\n        \"\"\"\n        Test if scheduler tries to run a job when strict ordering is enabled\n        and backfill_depth is set to 0 on one queue but backfill_depth is\n        enabled on another queue.\n        \"\"\"\n        a = {'resources_available.ncpus': 1}\n        
self.mom.create_vnodes(a, 1, usenatvnode=True)\n\n        rv = self.scheduler.set_sched_config(\n            {'round_robin': 'false all', 'by_queue': 'false all',\n             'strict_ordering': 'true all'})\n        self.assertTrue(rv)\n\n        a = {'backfill_depth': 0}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, id=\"workq\")\n\n        a = {'queue_type': 'execution', 'started': 'True', 'enabled': 'True',\n             'priority': '100'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id=\"workq2\")\n\n        a = {'backfill_depth': 1}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, id=\"workq2\")\n\n        a = {'scheduling': 'False'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {'Resource_List.select': '1:ncpus=1'}\n        j1 = Job(TEST_USER)\n        j1.set_attributes(a)\n        self.server.submit(j1)\n\n        j2 = Job(TEST_USER)\n        j2.set_attributes(a)\n        self.server.submit(j2)\n\n        j3 = Job(TEST_USER)\n        j3.set_attributes(a)\n        jid3 = self.server.submit(j3)\n\n        a = {'Resource_List.select': '1:ncpus=1', 'queue': 'workq2'}\n        j4 = Job(TEST_USER)\n        j4.set_attributes(a)\n        jid4 = self.server.submit(j4)\n\n        j5 = Job(TEST_USER)\n        j5.set_attributes(a)\n        self.server.submit(j5)\n\n        j6 = Job(TEST_USER)\n        j6.set_attributes(a)\n        self.server.submit(j6)\n\n        a = {'scheduling': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        job_comment = \"Not Running: Job would break strict sorted order\"\n        self.server.expect(JOB, {'comment': job_comment}, id=jid3, offset=2,\n                           max_attempts=2, interval=2)\n        self.scheduler.log_match(jid4 + \";Job is a top job\")\n"
  },
  {
    "path": "test/tests/functional/pbs_support_linux_hook_event_phase1_2.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\n@requirements(num_moms=2)\nclass TestSupportLinuxHookEventPhase1_2(TestFunctional):\n    \"\"\"\n    Tests that cover support for Linux hook events in phase 1.2.\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        self.pbs_attach = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                       'bin', 'pbs_attach')\n        self.pbs_tmrsh = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                      'bin', 'pbs_tmrsh')\n\n    def test_prologue_attach_order(self):\n        \"\"\"\n        Check that the execjob_prologue and execjob_attach event types\n        of a hook happen in the sister mom and mom superior.\n        Check that execjob_prologue event happens before execjob_attach.\n        \"\"\"\n\n        hook_body = \"\"\"\nimport pbs\nimport time\ne = pbs.event()\ntime.sleep(2)\nif e.type == pbs.EXECJOB_PROLOGUE:\n        pbs.logmsg(pbs.LOG_DEBUG, \"event is %s\" % (\"EXECJOB_PROLOGUE\"))\nelif e.type == pbs.EXECJOB_ATTACH:\n        pbs.logmsg(pbs.LOG_DEBUG, \"event is %s\" % (\"EXECJOB_ATTACH\"))\nelse:\n        pbs.logmsg(pbs.LOG_DEBUG, \"event is %s\" % (\"UNKNOWN\"))\n\"\"\"\n\n        a = {'event': 'execjob_prologue,execjob_attach', 'enabled': 'True'}\n        self.server.create_import_hook(\"hook1\", a, hook_body)\n\n        
self.momA = list(self.moms.values())[0]\n        self.momB = list(self.moms.values())[1]\n        self.hostA = self.momA.shortname\n        self.hostB = self.momB.shortname\n        if self.momA.is_cpuset_mom():\n            self.hostA = self.hostA + '[0]'\n        if self.momB.is_cpuset_mom():\n            self.hostB = self.hostB + '[0]'\n\n        # Job script\n        test = []\n        test += ['#PBS -l select=vnode=%s+vnode=%s\\n' %\n                 (self.hostA, self.hostB)]\n        test += ['setsid --fork %s -j $PBS_JOBID %s 3000\\n'\n                 % (self.pbs_attach, self.mom.sleep_cmd)]\n        test += ['%s %s setsid --fork %s -j $PBS_JOBID %s 3000\\n'\n                 % (self.pbs_tmrsh, self.momB.shortname, self.pbs_attach,\n                    self.mom.sleep_cmd)]\n        test += ['%s 3000\\n' % self.mom.sleep_cmd]\n\n        # Submit a job\n        j = Job(TEST_USER)\n        j.create_script(body=test)\n        check_after1 = time.time()\n        check_after2 = check_after1 + 2\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n        # Check mom logs\n        msg_expected = \"event is EXECJOB_PROLOGUE\"\n        s = self.momA.log_match(msg_expected, starttime=check_after1,\n                                max_attempts=10)\n        self.assertTrue(s)\n        s = self.momB.log_match(msg_expected, starttime=check_after1,\n                                max_attempts=10)\n        self.assertTrue(s)\n\n        msg_expected = \"event is EXECJOB_ATTACH\"\n        s = self.momA.log_match(msg_expected, starttime=check_after2,\n                                max_attempts=10)\n        self.assertTrue(s)\n        # Allow time for log message to appear before checking on sister mom\n        time.sleep(31)\n        s = self.momB.log_match(msg_expected, starttime=check_after2,\n                                max_attempts=10)\n        self.assertTrue(s)\n\n    def test_execjob_attach_hook_with_accept(self):\n 
       \"\"\"\n        Check that the execjob_attach event type with accept\n        of a hook happens in the sister mom and mom superior;\n        execjob_attach hook returns process id, job and vnode list.\n        \"\"\"\n\n        hook_body = \"\"\"\nimport pbs\nimport os\nimport sys\nimport time\n\ne = pbs.event()\npbs.logmsg(pbs.LOG_DEBUG,\n           \"printing pbs.event() values ---------------------->\")\nif e.type == pbs.EXECJOB_ATTACH:\n   pbs.logmsg(pbs.LOG_DEBUG, \"Event is: %s\" % (\"EXECJOB_ATTACH\"))\nelse:\n   pbs.logmsg(pbs.LOG_DEBUG, \"Event is: %s\" % (\"UNKNOWN\"))\n\npbs.logmsg(pbs.LOG_DEBUG, \"Requestor is: %s\" % (e.requestor))\npbs.logmsg(pbs.LOG_DEBUG, \"Requestor_host is: %s\" % (e.requestor_host))\n\n# Getting/setting vnode_list\nvn = pbs.event().vnode_list\n\nfor k in vn.keys():\n   pbs.logmsg(pbs.LOG_DEBUG, \"Vnode: [%s]-------------->\" % (k))\n\npbs.logjobmsg(e.job.id, \"PID = %d, type = %s\" % (e.pid, type(e.pid)))\n\nif e.job.in_ms_mom():\n        pbs.logmsg(pbs.LOG_DEBUG, \"job is in_ms_mom\")\nelse:\n        pbs.logmsg(pbs.LOG_DEBUG, \"job is NOT in_ms_mom\")\n\ne.accept()\n\"\"\"\n\n        a = {'event': 'execjob_attach', 'enabled': 'True'}\n        self.server.create_import_hook(\"hook1\", a, hook_body)\n\n        self.momA = list(self.moms.values())[0]\n        self.momB = list(self.moms.values())[1]\n        self.hostA = self.momA.shortname\n        self.hostB = self.momB.shortname\n        if self.momA.is_cpuset_mom():\n            self.hostA = self.hostA + '[0]'\n        if self.momB.is_cpuset_mom():\n            self.hostB = self.hostB + '[0]'\n\n        # Job script\n        test = []\n        test += ['#PBS -l select=vnode=%s+vnode=%s\\n' %\n                 (self.hostA, self.hostB)]\n\n        test += ['setsid --fork %s -j $PBS_JOBID %s 3000\\n'\n                 % (self.pbs_attach, self.mom.sleep_cmd)]\n        test += ['%s %s setsid --fork %s -j $PBS_JOBID %s 3000\\n'\n                 % (self.pbs_tmrsh, 
self.momB.shortname, self.pbs_attach,\n                    self.mom.sleep_cmd)]\n        test += ['%s 3000\\n' % self.mom.sleep_cmd]\n\n        # Submit a job\n        j = Job(TEST_USER)\n        j.create_script(body=test)\n        check_after = time.time()\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n        # Allow time for log messages to appear before checking mom logs\n        time.sleep(31)\n\n        # Check log msgs on sister mom\n        log_msgB = [\n            \"Hook;pbs_python;printing pbs.event() values \" +\n            \"---------------------->\",\n            \"Hook;pbs_python;Event is: EXECJOB_ATTACH\",\n            \"Hook;pbs_python;Requestor is: pbs_mom\",\n            \"Hook;pbs_python;Requestor_host is: %s\" %\n            self.momB.shortname,\n            \"Hook;pbs_python;Vnode: [%s]-------------->\" %\n            self.hostA,\n            \"Hook;pbs_python;Vnode: [%s]-------------->\" %\n            self.hostB,\n            \"pbs_python;Job;%s;PID =\" %\n            jid,\n            \"Hook;pbs_python;job is NOT in_ms_mom\"]\n\n        for msg in log_msgB:\n            rc = self.momB.log_match(msg, starttime=check_after,\n                                     max_attempts=10)\n            _msg = \"Didn't get expected log msg: %s \" % msg\n            _msg += \"on host:%s\" % self.hostB\n            self.assertTrue(rc, _msg)\n            _msg = \"Got expected log msg: %s on host: %s\" % (msg, self.hostB)\n            self.logger.info(_msg)\n\n        # Check log msgs on mom superior\n        log_msgA = [\"Hook;pbs_python;printing pbs.event() values \" +\n                    \"---------------------->\",\n                    \"Hook;pbs_python;Event is: EXECJOB_ATTACH\",\n                    \"Hook;pbs_python;Requestor is: pbs_mom\",\n                    \"Hook;pbs_python;Requestor_host is: %s\" %\n                    self.momA.shortname,\n                    \"Hook;pbs_python;Vnode: 
[%s]-------------->\" % self.hostA,\n                    \"Job;%s;PID =\" % jid,\n                    \"Hook;pbs_python;Vnode: [%s]-------------->\" % self.hostB,\n                    \"Hook;pbs_python;job is in_ms_mom\"]\n\n        for msg in log_msgA:\n            rc = self.momA.log_match(msg, starttime=check_after,\n                                     max_attempts=10)\n            _msg = \"Didn't get expected log msg: %s \" % msg\n            _msg += \"on host:%s\" % self.hostA\n            self.assertTrue(rc, _msg)\n            _msg = \"Got expected log msg: %s on host: %s\" % (msg, self.hostA)\n            self.logger.info(_msg)\n"
  },
  {
    "path": "test/tests/functional/pbs_suspend_resume_accounting.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestSuspendResumeAccounting(TestFunctional):\n    \"\"\"\n    Testsuite for verifying accounting of\n    suspend/resume events of job\n    \"\"\"\n\n    script = ['#!/bin/bash\\nfor ((c=1; c <= 1000000000; c++));']\n    script += ['do']\n    script += ['sleep 1']\n    script += ['done']\n\n    def test_suspend_resume_job_signal(self):\n        \"\"\"\n        Test case to verify accounting suspend\n        and resume records when the events are\n        triggered by client for one job.\n        \"\"\"\n        j = Job()\n        j.create_script(self.script)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n        self.server.sigjob(jobid=jid, signal=\"suspend\")\n\n        record = 'z;%s;.*resources_used.' 
% jid\n        self.server.accounting_match(msg=record, id=jid, regexp=True)\n\n        self.server.sigjob(jobid=jid, signal=\"resume\")\n        record = 'r;%s;' % jid\n        self.server.accounting_match(msg=record, id=jid)\n\n    def test_suspend_resume_job_array_signal(self):\n        \"\"\"\n        Test case to verify accounting suspend\n        and resume records when the events are\n        triggered by client for a job array.\n        \"\"\"\n        a = {ATTR_rescavail + '.ncpus': 2}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n\n        j = Job()\n        j.create_script(self.script)\n        j.set_attributes({ATTR_J: '1-2'})\n        jid = self.server.submit(j)\n\n        sub_jid1 = j.create_subjob_id(jid, 1)\n        sub_jid2 = j.create_subjob_id(jid, 2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=sub_jid1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=sub_jid2)\n\n        # suspend job\n        self.server.sigjob(jobid=jid, signal=\"suspend\")\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=sub_jid1)\n        self.server.expect(JOB, {ATTR_state: 'S'}, id=sub_jid2)\n\n        record = 'z;%s;resources_used.' % sub_jid1\n        self.server.accounting_match(msg=record, id=sub_jid1)\n\n        record = 'z;%s;resources_used.' 
% sub_jid2\n        self.server.accounting_match(msg=record, id=sub_jid2)\n\n        self.server.sigjob(jobid=jid, signal=\"resume\")\n        record = 'r;%s;' % sub_jid1\n        self.server.accounting_match(msg=record, id=sub_jid1)\n\n        record = 'r;%s;' % sub_jid2\n        self.server.accounting_match(msg=record, id=sub_jid2)\n\n    def test_interactive_job_suspend_resume(self):\n        \"\"\"\n        Test case to verify accounting suspend\n        and resume records when the events are\n        triggered by client for an interactive job.\n        \"\"\"\n\n        cmd = 'sleep 10'\n        j = Job(attrs={ATTR_inter: ''})\n        j.interactive_script = [('hostname', '.*'),\n                                (cmd, '.*')]\n\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        self.server.sigjob(jobid=jid, signal=\"suspend\")\n\n        record = 'z;%s;.*resources_used.' % jid\n        self.server.accounting_match(msg=record, id=jid, regexp=True)\n\n        self.server.sigjob(jobid=jid, signal=\"resume\")\n        record = 'r;%s;' % jid\n        self.server.accounting_match(msg=record, id=jid)\n\n    def test_suspend_resume_job_scheduler(self):\n        \"\"\"\n        Test case to verify accounting suspend\n        and resume records when the events are\n        triggered by Scheduler.\n        \"\"\"\n\n        a = {ATTR_rescavail + '.ncpus': 4, ATTR_rescavail + '.mem': '2gb'}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n\n        # Create an express queue\n        b = {ATTR_qtype: 'Execution', ATTR_enable: 'True',\n             ATTR_start: 'True', ATTR_p: '200'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, b, \"expressq\")\n\n        a = {ATTR_restrict_res_to_release_on_suspend: 'ncpus,mem'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        j1 = Job()\n        j1.create_script(self.script)\n        j1.set_attributes({ATTR_l + '.select': 
'1:ncpus=4:mem=512mb'})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        j2 = Job()\n        j2.create_script(self.script)\n        j2.set_attributes(\n            {ATTR_l + '.select': '1:ncpus=2:mem=512mb',\n             ATTR_q: 'expressq'})\n        jid2 = self.server.submit(j2)\n\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'S', ATTR_substate: 45}, id=jid1)\n\n        resc_released = \"resources_released=(%s:ncpus=4:mem=524288kb)\" \\\n                        % self.mom.shortname\n        record = 'z;%s;resources_used.' % jid1\n        line = self.server.accounting_match(msg=record, id=jid1)[1]\n        self.assertIn(resc_released, line)\n\n        self.server.delete(jid2)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n        record = 'r;%s;' % jid1\n        self.server.accounting_match(msg=record, id=jid1)\n\n    def test_admin_suspend_resume_signal(self):\n        \"\"\"\n        Test case to verify accounting of admin-suspend\n        and admin-resume records.\n        \"\"\"\n        j = Job()\n        j.create_script(self.script)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=jid)\n\n        self.server.sigjob(jid, 'admin-suspend', runas=ROOT_USER)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid)\n\n        record = 'z;%s;.*resources_used.' 
% jid\n        self.server.accounting_match(msg=record, id=jid, regexp=True)\n\n        self.server.sigjob(jid, 'admin-resume', runas=ROOT_USER)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        record = 'r;%s;' % jid\n        self.server.accounting_match(msg=record, id=jid)\n\n    def test_resc_released_susp_resume(self):\n        \"\"\"\n        Test case to verify accounting of suspend/resume\n        events with restrict_res_to_release_on_suspend set\n        on server\n        \"\"\"\n        # Set both ncpus and mem\n        a = {ATTR_restrict_res_to_release_on_suspend: 'ncpus,mem'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        j = Job()\n        j.create_script(self.script)\n        j.set_attributes({ATTR_l + '.select': '1:ncpus=1:mem=512mb'})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n        self.server.sigjob(jobid=jid, signal=\"suspend\")\n\n        # check for both ncpus and mem are released\n        resc_released = 'resources_released=(%s:ncpus=1:mem=524288kb)'\n\n        node = self.server.status(JOB, 'exec_vnode', id=jid)[0]['exec_vnode']\n        vn = node.split('+')[0].split(':')[0].split('(')[1]\n        resc_released = resc_released % vn\n        record = 'z;%s;resources_used.' 
% jid\n        line = self.server.accounting_match(msg=record, id=jid)[1]\n        self.assertIn(resc_released, line)\n\n        self.server.sigjob(jobid=jid, signal=\"resume\")\n        record = 'r;%s;' % jid\n        self.server.accounting_match(msg=record, id=jid)\n\n    def test_resc_released_susp_resume_multi_vnode(self):\n        \"\"\"\n        Test case to verify accounting of suspend/resume\n        events with restrict_res_to_release_on_suspend set\n        on server for multiple vnodes\n        \"\"\"\n        # Set restrict_res_to_release_on_suspend server attribute\n        a = {ATTR_restrict_res_to_release_on_suspend: 'ncpus'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        b = {ATTR_qtype: 'Execution', ATTR_enable: 'True',\n             ATTR_start: 'True', ATTR_p: '200'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, b, \"expressq\")\n\n        vn_attrs = {ATTR_rescavail + '.ncpus': 8,\n                    ATTR_rescavail + '.mem': '1024mb'}\n        self.mom.create_vnodes(vn_attrs, 1,\n                               fname=\"vnodedef1\", vname=\"vnode1\")\n        # Append a vnode\n        vn_attrs = {ATTR_rescavail + '.ncpus': 6,\n                    ATTR_rescavail + '.mem': '1024mb'}\n        self.mom.create_vnodes(vn_attrs, 1, additive=True,\n                               fname=\"vnodedef2\", vname=\"vnode2\")\n\n        # Submit a low priority job\n        j1 = Job()\n        j1.create_script(self.script)\n        j1.set_attributes({ATTR_l + '.select': '1:ncpus=8:mem=512mb+1'\n                                               ':ncpus=6:mem=256mb'})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        # Submit a high priority job\n        j2 = Job()\n        j2.create_script(self.script)\n        j2.set_attributes(\n            {ATTR_l + '.select': '1:ncpus=8:mem=256mb',\n             ATTR_q: 'expressq'})\n        jid2 = self.server.submit(j2)\n        
self.server.expect(JOB, {ATTR_state: 'R'}, id=jid2)\n        self.server.expect(JOB, {ATTR_state: 'S', ATTR_substate: 45}, id=jid1)\n\n        resc_released = 'resources_released=(vnode1[0]:ncpus=8)+' \\\n                        '(vnode2[0]:ncpus=6)'\n        record = 'z;%s;resources_used.' % jid1\n        line = self.server.accounting_match(msg=record, id=jid1)[1]\n        self.assertIn(resc_released, line)\n\n    def test_higher_priority_job_hook_reject(self):\n        \"\"\"\n        Test case to verify accounting of suspend/resume\n        events of a job which gets suspended by a higher priority\n        job and gets resumed when the runjob hook rejects the\n        higher priority job.\n        \"\"\"\n        a = {ATTR_rescavail + '.ncpus': 4, ATTR_rescavail + '.mem': '2gb'}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n\n        b = {ATTR_qtype: 'Execution', ATTR_enable: 'True',\n             ATTR_start: 'True', ATTR_p: '200'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, b, \"expressq\")\n\n        # Define a runjob hook\n        hook = \"\"\"\nimport pbs\ne = pbs.event()\ne.reject()\n\"\"\"\n        j1 = Job()\n        j1.create_script(self.script)\n        j1.set_attributes({ATTR_l + '.select': '1:ncpus=4:mem=512mb'})\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        a = {'event': 'runjob', 'enabled': 'True'}\n        self.server.create_import_hook(\"sr_hook\", a, hook)\n\n        j2 = Job()\n        j2.create_script(self.script)\n        j2.set_attributes(\n            {ATTR_l + '.select': '1:ncpus=2:mem=512mb',\n             ATTR_q: 'expressq'})\n        jid2 = self.server.submit(j2)\n\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid1)\n\n        record = 'z;%s;.*resources_used.' % jid1\n        self.server.accounting_match(msg=record, id=jid1, regexp=True)\n\n        record = 'r;%s;' % jid1\n        self.server.accounting_match(msg=record, id=jid1)\n"
  },
  {
    "path": "test/tests/functional/pbs_svr_dyn_res.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\nfrom ptl.lib.pbs_ifl_mock import *\nfrom ptl.utils.pbs_procutils import ProcUtils\n\n\nclass TestServerDynRes(TestFunctional):\n\n    dirnames = []\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n\n    def check_access_log(self, fp, exist=True):\n        \"\"\"\n        Helper function to check if scheduler logged a file security\n        message.\n        \"\"\"\n        # adding a second delay because log_match can then start from the\n        # correct log message and avoid false positives from previous\n        # logs\n        time.sleep(1)\n        match_from = time.time()\n        self.scheduler.apply_config(validate=False)\n        self.scheduler.get_pid()\n        self.scheduler.signal('-HUP')\n        self.scheduler.log_match(fp + ' file has a non-secure file access',\n                                 starttime=match_from, existence=exist)\n\n    def setup_dyn_res(self, resname, restype, script_body):\n        \"\"\"\n        Helper function to setup server dynamic resources\n        returns a list of dynamic resource scripts created by the function\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        val = []\n        scripts = []\n        attr = {}\n        for i, name in enumerate(resname):\n            attr[\"type\"] = restype[i]\n            
self.server.manager(MGR_CMD_CREATE, RSC, attr, id=name)\n            # Add resource to sched_config's 'resources' line\n            self.scheduler.add_resource(name)\n            dest_file = self.scheduler.add_server_dyn_res(name,\n                                                          script_body[i],\n                                                          prefix=\"svr_resc\",\n                                                          suffix=\".scr\")\n            val.append('\"' + name + ' ' + '!' + dest_file + '\"')\n            scripts.append(dest_file)\n        a = {'server_dyn_res': val}\n        self.scheduler.set_sched_config(a)\n\n        # The server dynamic resource script gets executed for every\n        # scheduling cycle\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        return scripts\n\n    def test_invalid_script_out(self):\n        \"\"\"\n        Test that the scheduler handles incorrect output from server_dyn_res\n        script correctly\n        \"\"\"\n        # Create a server_dyn_res of type long\n        resname = [\"mybadres\"]\n        restype = [\"long\"]\n        script_body = ['echo abc']\n\n        start_time = time.time()\n        # Add it as a server_dyn_res that returns a string output\n        filenames = self.setup_dyn_res(resname, restype, script_body)\n\n        # Submit a job\n        j = Job(TEST_USER)\n        j.set_sleep_time(1)\n        jid = self.server.submit(j)\n\n        # Make sure that \"Problem with creating server data structure\"\n        # is not logged in sched_logs\n        self.scheduler.log_match(\"Problem with creating server data structure\",\n                                 existence=False, max_attempts=10,\n                                 starttime=start_time)\n\n        # Also check that \"<script> returned bad output\"\n        # is in the logs\n        self.scheduler.log_match(\"%s returned bad output\" % filenames[0])\n\n        # The scheduler uses 0 as the 
available amount of the dynamic resource\n        # if the server_dyn_res script output is bad\n        # So, submit a job that requests 1 of the resource\n        attr = {\"Resource_List.\" + resname[0]: 1}\n\n        # Submit job\n        j = Job(TEST_USER, attrs=attr)\n        jid = self.server.submit(j)\n\n        # The job shouldn't run\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n\n        # Check for the expected log message for insufficient resources\n        self.scheduler.log_match(\n            \"Insufficient amount of server resource: %s (R: 1 A: 0 T: 0)\"\n            % (resname[0]), level=logging.DEBUG2)\n\n    def test_res_long_pos(self):\n        \"\"\"\n        Test that server_dyn_res accepts command line arguments to the\n        commands it runs. Resource value set to a positive long int.\n        \"\"\"\n        # Create a resource of type long. positive value\n        resname = [\"foobar\"]\n        restype = [\"long\"]\n        resval = ['/bin/echo 4']\n\n        # Add server_dyn_res entry in sched_config\n        self.setup_dyn_res(resname, restype, resval)\n\n        a = {'Resource_List.foobar': 4}\n        # Submit job\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n\n        # Job must run successfully\n        a = {'job_state': 'R', 'Resource_List.foobar': '4'}\n        self.server.expect(JOB, a, id=jid)\n\n    def test_res_long_neg(self):\n        \"\"\"\n        Test that server_dyn_res accepts command line arguments to the\n        commands it runs. Resource value set to a negative long int.\n        \"\"\"\n        # Create a resource of type long. 
negative value\n        resname = [\"foobar\"]\n        restype = [\"long\"]\n        resval = ['/bin/echo -1']\n\n        # Add server_dyn_res entry in sched_config\n        self.setup_dyn_res(resname, restype, resval)\n\n        # Submit job\n        a = {'Resource_List.foobar': '1'}\n        # Submit job\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n\n        # Check for the expected log message for insufficient resources\n        job_comment = \"Can Never Run: Insufficient amount of server resource:\"\n        job_comment += \" foobar (R: 1 A: -1 T: -1)\"\n\n        # The job shouldn't run\n        a = {'job_state': 'Q', 'comment': job_comment}\n        self.server.expect(JOB, a, id=jid, attrop=PTL_AND)\n\n    def test_res_whitespace(self):\n        \"\"\"\n        Test for parse errors when more than one white space\n        is added between the resource name and the !<script> in a\n        server_dyn_res line. There shouldn't be any errors.\n        \"\"\"\n        # Create a resource of type long\n        resname = [\"foo\"]\n        restype = [\"long\"]\n        resval = ['echo get_foo > /tmp/PtlPbs_got_foo; echo 1']\n\n        # Prep for server_dyn_resource scripts. 
The script creates the file\n        # \"PtlPbs_got_foo\" and returns 1.\n        fpath_out = os.path.join(os.sep, \"tmp\", \"PtlPbs_got_foo\")\n\n        self.setup_dyn_res(resname, restype, resval)\n\n        # Check if the file \"PtlPbs_got_foo\" was created\n        for _ in range(10):\n            self.logger.info(\"Waiting for the file [%s] to appear\",\n                             fpath_out)\n            if self.du.isfile(path=fpath_out):\n                break\n            time.sleep(1)\n        self.assertTrue(self.du.isfile(path=fpath_out))\n\n        # Submit job\n        a = {'Resource_List.foo': '1'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n\n        # Job must run successfully\n        a = {'job_state': 'R', 'Resource_List.foo': 1}\n        self.server.expect(JOB, a, id=jid)\n        # Cleanup dynamically created file\n        self.du.rm(fpath_out, sudo=True, force=True)\n\n    def test_multiple_res(self):\n        \"\"\"\n        Test multiple dynamic resources specified in resourcedef\n        and sched_config\n        \"\"\"\n        # Create resources of type long\n        resname = [\"foobar_small\", \"foobar_medium\", \"foobar_large\"]\n        restype = [\"long\", \"long\", \"long\"]\n\n        # Prep for server_dyn_resource scripts.\n        script_body = [\"echo 8\", \"echo 12\", \"echo 20\"]\n\n        self.setup_dyn_res(resname, restype, script_body)\n\n        a = {'Resource_List.foobar_small': '4'}\n        # Submit job\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n\n        # Job must run successfully\n        a = {'job_state': 'R', 'Resource_List.foobar_small': 4}\n        self.server.expect(JOB, a, id=jid)\n\n        self.server.delete(jid, wait=True)\n\n        a = {'Resource_List.foobar_medium': '10'}\n        # Submit job\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n\n        # Job must run successfully\n 
       a = {'job_state': 'R', 'Resource_List.foobar_medium': 10}\n        self.server.expect(JOB, a, id=jid)\n\n        self.server.delete(jid, wait=True)\n\n        a = {'Resource_List.foobar_large': '18'}\n        # Submit job\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n\n        # Job must run successfully\n        a = {'job_state': 'R', 'Resource_List.foobar_large': 18}\n        self.server.expect(JOB, a, id=jid)\n\n    def test_res_string(self):\n        \"\"\"\n        Test that server_dyn_res accepts a string value returned\n        by a script\n        \"\"\"\n        # Create a resource of type string\n        resname = [\"foobar\"]\n        restype = [\"string\"]\n\n        # Prep for server_dyn_resource script\n        resval = [\"echo abc\"]\n\n        self.setup_dyn_res(resname, restype, resval)\n\n        # Submit job\n        a = {'Resource_List.foobar': 'abc'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n\n        # Job must run successfully\n        a = {'job_state': 'R', 'Resource_List.foobar': 'abc'}\n        self.server.expect(JOB, a, id=jid)\n\n        self.server.delete(jid, wait=True)\n\n        # Submit job\n        a = {'Resource_List.foobar': 'xyz'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n\n        # Check for the expected log message for insufficient resources\n        job_comment = \"Can Never Run: Insufficient amount of server resource:\"\n        job_comment += \" foobar (xyz != abc)\"\n\n        # The job shouldn't run\n        a = {'job_state': 'Q', 'comment': job_comment}\n        self.server.expect(JOB, a, id=jid, attrop=PTL_AND)\n\n    def test_res_string_array(self):\n        \"\"\"\n        Test that server_dyn_res accepts string array returned\n        by a script\n        \"\"\"\n        # Create a resource of type string_array\n        resname = [\"foobar\"]\n        restype = [\"string_array\"]\n\n        # Prep for 
server_dyn_resource script\n        resval = [\"echo white, red, blue\"]\n\n        self.setup_dyn_res(resname, restype, resval)\n\n        # Submit job\n        a = {'Resource_List.foobar': 'red'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n\n        # Job must run successfully\n        a = {'job_state': 'R', 'Resource_List.foobar': 'red'}\n        self.server.expect(JOB, a, id=jid)\n\n        self.server.delete(jid, wait=True)\n\n        # Submit job\n        a = {'Resource_List.foobar': 'green'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n\n        # Check for the expected log message for insufficient resources\n        job_comment = \"Can Never Run: Insufficient amount of server resource:\"\n        job_comment += \" foobar (green != white,red,blue)\"\n\n        # The job shouldn't run\n        a = {'job_state': 'Q', 'comment': job_comment}\n        self.server.expect(JOB, a, id=jid, attrop=PTL_AND)\n\n    def test_res_size(self):\n        \"\"\"\n        Test that server_dyn_res accepts type \"size\" and a \"value\"\n        returned by a script\n        \"\"\"\n        # Create a resource of type size\n        resname = [\"foobar\"]\n        restype = [\"size\"]\n\n        # Prep for server_dyn_resource script\n        resval = [\"echo 100gb\"]\n\n        self.setup_dyn_res(resname, restype, resval)\n\n        # Submit job\n        a = {'Resource_List.foobar': '95gb'}\n        j1 = Job(TEST_USER, attrs=a)\n        jid1 = self.server.submit(j1)\n\n        # Job must run successfully\n        a = {'job_state': 'R', 'Resource_List.foobar': '95gb'}\n        self.server.expect(JOB, a, id=jid1)\n\n        self.server.delete(jid1, wait=True)\n\n        # Submit job\n        a = {'Resource_List.foobar': '101gb'}\n        j2 = Job(TEST_USER, attrs=a)\n        jid2 = self.server.submit(j2)\n\n        # Check for the expected log message for insufficient resources\n        job_comment = \"Can Never Run: 
Insufficient amount of server resource:\"\n        job_comment += \" foobar (R: 101gb A: 100gb T: 100gb)\"\n\n        # The job shouldn't run\n        a = {'job_state': 'Q', 'comment': job_comment}\n        self.server.expect(JOB, a, id=jid2, attrop=PTL_AND)\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid1)\n        self.server.deljob(jid2, wait=True, runas=TEST_USER)\n\n        # Submit jobs again\n        a = {'Resource_List.foobar': '100gb'}\n        j1 = Job(TEST_USER, attrs=a)\n        jid1 = self.server.submit(j1)\n        a = {'job_state': 'R', 'Resource_List.foobar': '100gb'}\n        self.server.expect(JOB, a, id=jid1)\n\n    def test_res_size_runtime(self):\n        \"\"\"\n        Test that server_dyn_res accepts type \"size\" and a \"value\"\n        returned by a script. Check if the script change during\n        job run is correctly considered\n        \"\"\"\n\n        # Create a resource of type size\n        resname = [\"foobar\"]\n        restype = [\"size\"]\n\n        # Prep for server_dyn_resource script\n        resval = [\"echo 100gb\"]\n\n        filenames = self.setup_dyn_res(resname, restype, resval)\n\n        # Submit job\n        a = {'Resource_List.foobar': '95gb'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n\n        # Job must run successfully\n        a = {'job_state': 'R', 'Resource_List.foobar': '95gb'}\n        self.server.expect(JOB, a, id=jid)\n\n        # Turn off scheduling. 
There is a race where scheduler could\n        # already be inside a cycle because of previous expect call and\n        # read the old dynamic resource script.\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        # Change script during job run\n        tmp_file = self.du.create_temp_file(body=\"echo 50gb\")\n        self.du.run_copy(src=tmp_file, dest=filenames[0], sudo=True,\n                         preserve_permission=False)\n\n        # Rerun job\n        self.server.rerunjob(jid)\n\n        # Turn on scheduling\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        # The job shouldn't run\n        job_comment = \"Can Never Run: Insufficient amount of server resource:\"\n        job_comment += \" foobar (R: 95gb A: 50gb T: 50gb)\"\n        a = {'job_state': 'Q', 'comment': job_comment}\n        self.server.expect(JOB, a, id=jid, attrop=PTL_AND)\n\n    def test_res_size_invalid_input(self):\n        \"\"\"\n        Test invalid values returned from server_dyn_resource\n        script for resource type 'size'.\n        Script returns a 'string' instead of type 'size'.\n        \"\"\"\n        # Create a resource of type size\n        resname = [\"foobar\"]\n        restype = [\"size\"]\n\n        # Script returns invalid value for resource type 'size'\n        resval = [\"echo two gb\"]\n\n        filenames = self.setup_dyn_res(resname, restype, resval)\n\n        # Submit job\n        a = {'Resource_List.foobar': '2gb'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n\n        # Also check that \"<script> returned bad output\"\n        # is in the logs\n        self.scheduler.log_match(\"%s returned bad output\" % filenames[0])\n\n        # The job shouldn't run\n        job_comment = \"Can Never Run: Insufficient amount of server resource:\"\n        job_comment += \" foobar (R: 2gb A: 0kb T: 0kb)\"\n        a = {'job_state': 'Q', 'comment': job_comment}\n        
self.server.expect(JOB, a, id=jid, attrop=PTL_AND)\n\n    def test_res_float_invalid_input(self):\n        \"\"\"\n        Test invalid values returned from server_dyn_resource\n        script for resource type 'float'\n        Script returns 'string' instead of type 'float'.\n        \"\"\"\n\n        # Create a resource of type float\n        resname = [\"foo\"]\n        restype = [\"float\"]\n\n        # Prep for server_dyn_resource script\n        resval = [\"echo abc\"]\n\n        filenames = self.setup_dyn_res(resname, restype, resval)\n\n        # Submit job\n        a = {'Resource_List.foo': '1.2'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n\n        # Also check that \"<script> returned bad output\"\n        # is in the logs\n        self.scheduler.log_match(\"%s returned bad output\" % filenames[0])\n\n        # The job shouldn't run\n        job_comment = \"Can Never Run: Insufficient amount of server resource:\"\n        job_comment += \" foo (R: 1.2 A: 0 T: 0)\"\n        a = {'job_state': 'Q', 'comment': job_comment}\n        self.server.expect(JOB, a, id=jid, attrop=PTL_AND)\n\n    def test_res_boolean_invalid_input(self):\n        \"\"\"\n        Test invalid values returned from server_dyn_resource\n        script for resource type 'boolean'.\n        Script returns 'non boolean' values\n        \"\"\"\n\n        # Create a resource of type boolean\n        resname = [\"foo\"]\n        restype = [\"boolean\"]\n\n        # Prep for server_dyn_resource script\n        resval = [\"echo yes\"]\n\n        filenames = self.setup_dyn_res(resname, restype, resval)\n\n        # Submit job\n        a = {'Resource_List.foo': '\"true\"'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n\n        # Also check that \"<script> returned bad output\"\n        # is in the logs\n        self.scheduler.log_match(\"%s returned bad output\" % filenames[0])\n\n        # The job shouldn't run\n        job_comment = 
\"Can Never Run: Insufficient amount of server resource:\"\n        job_comment += \" foo (True != False)\"\n        a = {'job_state': 'Q', 'comment': job_comment}\n        self.server.expect(JOB, a, id=jid)\n\n    def test_res_timeout(self):\n        \"\"\"\n        Test server_dyn_res script timeouts after 30 seconds\n        \"\"\"\n\n        # Create a resource of type boolean\n        resname = [\"foo\"]\n        restype = [\"boolean\"]\n\n        # Prep for server_dyn_resource script\n        resval = [\"sleep 60\\necho true\"]\n\n        filenames = self.setup_dyn_res(resname, restype, resval)\n\n        # Submit job\n        a = {'Resource_List.foo': 'true'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n\n        self.logger.info('Sleeping 30 seconds to wait for script to timeout')\n        time.sleep(30)\n        self.scheduler.log_match(\"%s timed out\" % filenames[0])\n        self.scheduler.log_match(\"Setting resource foo to 0\")\n\n        # The job shouldn't run\n        job_comment = \"Can Never Run: Insufficient amount of server resource:\"\n        job_comment += \" foo (True != False)\"\n        a = {'job_state': 'Q', 'comment': job_comment}\n        self.server.expect(JOB, a, id=jid)\n\n    def test_res_set_timeout(self):\n        \"\"\"\n        Test setting server_dyn_res script to timeout after 10 seconds\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {ATTR_sched_server_dyn_res_alarm: 10})\n\n        # Create a resource of type boolean\n        resname = [\"foo\"]\n        restype = [\"boolean\"]\n\n        # Prep for server_dyn_resource script\n        resval = [\"sleep 20\\necho true\"]\n\n        filenames = self.setup_dyn_res(resname, restype, resval)\n\n        # Submit job\n        a = {'Resource_List.foo': 'true'}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n\n        self.logger.info('Sleeping 10 seconds to wait for script to 
timeout')\n        time.sleep(10)\n        self.scheduler.log_match(\"%s timed out\" % filenames[0])\n        self.scheduler.log_match(\"Setting resource foo to 0\")\n\n        # The job shouldn't run\n        job_comment = \"Can Never Run: Insufficient amount of server resource:\"\n        job_comment += \" foo (True != False)\"\n        a = {'job_state': 'Q', 'comment': job_comment}\n        self.server.expect(JOB, a, id=jid, attrop=PTL_AND)\n\n    def test_svr_dyn_res_permissions(self):\n        \"\"\"\n        Test whether the scheduler rejects the server_dyn_res script when\n        the script's permissions allow write access for group and others\n        \"\"\"\n\n        # Create a new resource\n        attr = {'type': 'long', 'flag': 'q'}\n        self.server.manager(MGR_CMD_CREATE, RSC, attr, id='foo')\n        self.scheduler.add_resource('foo')\n\n        scr_body = ['echo \"10\"', 'exit 0']\n        home_dir = os.path.expanduser('~')\n        fp = self.scheduler.add_server_dyn_res(\"foo\", scr_body,\n                                               dirname=home_dir,\n                                               validate=False)\n\n        # give write permission to group and others\n        self.du.chmod(path=fp, mode=0o766, sudo=True)\n        self.check_access_log(fp)\n\n        # give write permission to group\n        self.du.chmod(path=fp, mode=0o764, sudo=True)\n        self.check_access_log(fp)\n\n        # give write permission to others\n        self.du.chmod(path=fp, mode=0o746, sudo=True)\n        self.check_access_log(fp)\n\n        # give write permission to user only\n        self.du.chmod(path=fp, mode=0o744, sudo=True)\n        if os.getuid() != 0:\n            self.check_access_log(fp, exist=True)\n        else:\n            self.check_access_log(fp, exist=False)\n\n        # Create script in a directory which has more open privileges\n        # This should make loading of this file fail in all cases\n        # Create the directory name 
with a space in it, to make sure PBS parses\n        # it correctly.\n        dir_temp = self.du.create_temp_dir(mode=0o766,\n                                           dirname=home_dir,\n                                           suffix=' tmp')\n        self.du.chmod(path=dir_temp, mode=0o766, sudo=True)\n        self.du.chown(path=dir_temp, sudo=True, uid=self.scheduler.user)\n        fp = self.scheduler.add_server_dyn_res(\"foo\", scr_body,\n                                               dirname=dir_temp,\n                                               validate=False)\n\n        # Add to dirnames for cleanup\n        self.dirnames.append(dir_temp)\n\n        # give write permission to group and others\n        self.du.chmod(path=fp, mode=0o766, sudo=True)\n        self.check_access_log(fp)\n\n        # give write permission to group\n        self.du.chmod(path=fp, mode=0o764, sudo=True)\n        self.check_access_log(fp)\n\n        # give write permission to others\n        self.du.chmod(path=fp, mode=0o746, sudo=True)\n        self.check_access_log(fp)\n\n        # give write permission to user only\n        self.du.chmod(path=fp, mode=0o744, sudo=True)\n        self.check_access_log(fp)\n\n        # Create dynamic resource script in PBS_HOME directory and check\n        # file permissions\n        # self.scheduler.add_server_dyn_res by default creates the script in\n        # PBS_HOME as root\n        fp = self.scheduler.add_server_dyn_res(\"foo\", scr_body, perm=0o766,\n                                               validate=False)\n\n        self.check_access_log(fp)\n\n        # give write permission to group\n        self.du.chmod(path=fp, mode=0o764, sudo=True)\n        self.check_access_log(fp)\n\n        # give write permission to others\n        self.du.chmod(path=fp, mode=0o746, sudo=True)\n        self.check_access_log(fp)\n\n        # give write permission to user only\n        self.du.chmod(path=fp, mode=0o744, sudo=True)\n        
self.check_access_log(fp, exist=False)\n\n    def test_res_cleanup(self):\n        \"\"\"\n        Test that the scheduler cleans up its children\n        \"\"\"\n        pu = ProcUtils()\n        resname = [\"normal\", \"invalid\", \"timeout\"]\n        restype = [\"long\", \"long\", \"long\"]\n\n        # Prep for server_dyn_resource scripts.\n        script_body = [\"echo 8\", \"echo hello\", \"sleep 40; echo 20\"]\n\n        filenames = self.setup_dyn_res(resname, restype, script_body)\n\n        a = {'Resource_List.normal': '2',\n             'Resource_List.invalid': '8',\n             'Resource_List.timeout': 10}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.logger.info('Sleeping 30 seconds to wait for script to timeout')\n        time.sleep(30)\n        self.scheduler.log_match(\"%s timed out\" % filenames[2])\n        children = pu.get_proc_children(hostname=self.scheduler.hostname,\n                                        ppid=self.scheduler.get_pid())\n        self.assertFalse(children)\n\n    def tearDown(self):\n        # remove all files created in this test\n        if len(self.dirnames) != 0:\n            self.du.rm(path=self.dirnames, sudo=True, force=True,\n                       recursive=True)\n            self.dirnames[:] = []\n        TestFunctional.tearDown(self)\n"
  },
  {
    "path": "test/tests/functional/pbs_systemd.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\nimport socket\n\n\nclass Test_systemd(TestFunctional):\n    \"\"\"\n    Test whether you are able to control pbs using systemd\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        # Skip test if systemctl command is not found.\n        is_systemctl = self.du.which(exe='systemctl')\n        if is_systemctl == 'systemctl':\n            self.skipTest(\"Systemctl command not found\")\n        ret = self.du.run_cmd(self.server.hostname, \"systemctl is-active dbus\")\n        if ret['rc'] == 1:\n            self.skipTest(\"Systemd not functional\")\n\n    def shutdown_all(self):\n        if self.server.isUp():\n            self.server.stop()\n        if self.scheduler.isUp():\n            self.scheduler.stop()\n        if self.comm.isUp():\n            self.comm.stop()\n        if self.mom.isUp():\n            self.mom.stop()\n\n    def start_using_systemd(self):\n        cmd = \"systemctl start pbs\"\n        self.du.run_cmd(self.hostname, cmd, True)\n        if ('1' == self.server.pbs_conf['PBS_START_SERVER'] and\n                not self.server.isUp()):\n            return False\n        if ('1' == self.server.pbs_conf['PBS_START_SCHED'] and\n                not self.scheduler.isUp()):\n            return False\n        if ('1' == self.server.pbs_conf['PBS_START_COMM'] and\n                not 
self.comm.isUp()):\n            return False\n        if ('1' == self.server.pbs_conf['PBS_START_MOM'] and\n                not self.mom.isUp()):\n            return False\n        return True\n\n    def stop_using_systemd(self):\n        cmd = \"systemctl stop pbs\"\n        self.du.run_cmd(self.hostname, cmd, True)\n        if ('1' == self.server.pbs_conf['PBS_START_SERVER'] and\n                self.server.isUp(max_attempts=10)):\n            return False\n        if ('1' == self.server.pbs_conf['PBS_START_SCHED'] and\n                self.scheduler.isUp(max_attempts=10)):\n            return False\n        if ('1' == self.server.pbs_conf['PBS_START_COMM'] and\n                self.comm.isUp(max_attempts=10)):\n            return False\n        if ('1' == self.server.pbs_conf['PBS_START_MOM'] and\n                self.mom.isUp(max_attempts=10)):\n            return False\n        return True\n\n    @skipOnShasta\n    def test_systemd(self):\n        \"\"\"\n        Test whether you are able to control pbs using systemd\n        \"\"\"\n        self.hostname = socket.gethostname()\n        cmd = \"systemctl daemon-reload\"\n        self.du.run_cmd(self.hostname, cmd, True)\n        self.shutdown_all()\n        rv = self.start_using_systemd()\n        self.assertTrue(rv)\n        rv = self.stop_using_systemd()\n        self.assertTrue(rv)\n        rv = self.start_using_systemd()\n        self.assertTrue(rv)\n\n    @skipOnShasta\n    def test_missing_daemon(self):\n        \"\"\"\n        Test whether a missing daemon starts without restarting other daemons\n        \"\"\"\n        self.hostname = self.server.hostname\n        self.shutdown_all()\n        rv = self.start_using_systemd()\n        self.assertTrue(rv)\n        cmd = \"systemctl reload pbs\"\n        # Mom\n        self.mom.signal(\"-KILL\")\n        if self.mom.isUp(max_attempts=10):\n            self.fail(\"MoM is still running\")\n        self.du.run_cmd(self.hostname, cmd, True)\n        if 
self.mom.isUp(max_attempts=10):\n            self.logger.info(\"MoM started and running\")\n        else:\n            self.fail(\"MoM not started\")\n        # Sched\n        self.scheduler.signal(\"-KILL\")\n        if self.scheduler.isUp(max_attempts=10):\n            self.fail(\"Sched is still running\")\n        self.du.run_cmd(self.hostname, cmd, True)\n        if self.scheduler.isUp(max_attempts=10):\n            self.logger.info(\"Sched started and running\")\n        else:\n            self.fail(\"Sched not started\")\n        # Comm\n        self.comm.signal(\"-KILL\")\n        if self.comm.isUp(max_attempts=10):\n            self.fail(\"Comm is still running\")\n        self.du.run_cmd(self.hostname, cmd, True)\n        if self.comm.isUp(max_attempts=10):\n            self.logger.info(\"Comm started and running\")\n        else:\n            self.fail(\"Comm not started\")\n        # Server\n        self.server.signal(\"-KILL\")\n        if self.server.isUp(max_attempts=10):\n            self.fail(\"Server is still running\")\n        self.du.run_cmd(self.hostname, cmd, True)\n        if self.server.isUp(max_attempts=10):\n            self.logger.info(\"Server started and running\")\n        else:\n            self.fail(\"Server not started\")\n"
  },
  {
    "path": "test/tests/functional/pbs_test_entity_limits.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestEntityLimits(TestFunctional):\n\n    \"\"\"\n    This test suite tests working of FGC limits\n\n    PBS supports entity limits at queue and server level. 
These limits\n    can be applied per user, per group, per project, or overall.\n    This test suite iterates over all the entities.\n\n    \"\"\"\n\n    limit = 10\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n\n    def common_limit_test(self, server, entstr, job_attr=None, queued=False,\n                          exp_err=''):\n        # Avoid a shared mutable default argument; this method mutates\n        # job_attr (it adds and deletes ATTR_J).\n        if job_attr is None:\n            job_attr = {}\n        if not server:\n            qname = self.server.default_queue\n            self.server.manager(MGR_CMD_SET, QUEUE, entstr, qname)\n        else:\n            self.server.manager(MGR_CMD_SET, SERVER, entstr)\n\n        if queued:\n            j = Job(TEST_USER, job_attr)\n            jid = self.server.submit(j)\n            self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        for _ in range(self.limit):\n            j = Job(TEST_USER, job_attr)\n            self.server.submit(j)\n\n        try:\n            j = Job(TEST_USER, job_attr)\n            self.server.submit(j)\n        except PbsSubmitError as e:\n            if e.msg[0] != exp_err:\n                raise self.failureException(\"rcvd unexpected err message: \" +\n                                            e.msg[0])\n        else:\n            self.assertFalse(True, \"Job violating limits got submitted.\")\n\n        self.server.cleanup_jobs()\n\n        try:\n            jval = \"1-\" + str(self.limit + 1)\n            job_attr[ATTR_J] = jval\n            j = Job(TEST_USER, job_attr)\n            jid = self.server.submit(j)\n        except PbsSubmitError as e:\n            if e.msg[0] != exp_err:\n                raise self.failureException(\"rcvd unexpected err message: \" +\n                                            e.msg[0])\n        else:\n            self.assertFalse(True, \"Array Job violating limits got submitted.\")\n\n        jval = \"1-\" + str(self.limit)\n        job_attr[ATTR_J] = jval\n\n        j = Job(TEST_USER, 
job_attr)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'B'}, id=jid)\n        subjob1 = j.create_subjob_id(jid, 1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=subjob1)\n\n        del job_attr[ATTR_J]\n\n        if queued:\n            j = Job(TEST_USER, job_attr)\n            self.server.submit(j)\n\n        try:\n            j = Job(TEST_USER, job_attr)\n            self.server.submit(j)\n        except PbsSubmitError as e:\n            if e.msg[0] != exp_err:\n                raise self.failureException(\"rcvd unexpected err message: \" +\n                                            e.msg[0])\n        else:\n            self.assertFalse(True, \"Job violating limits got submitted.\")\n\n        self.server.restart()\n\n        try:\n            self.server.submit(j)\n        except PbsSubmitError as e:\n            if e.msg[0] != exp_err:\n                raise self.failureException(\"rcvd unexpected err message: \" +\n                                            e.msg[0])\n        else:\n            self.assertFalse(True, \"Job violating limits got submitted after \"\n                             \"server restart.\")\n\n    def test_server_generic_user_limits_queued(self):\n        \"\"\"\n        Test queued_jobs_threshold for any user at the server level.\n        \"\"\"\n        a = {\"queued_jobs_threshold\":\n             \"[u:PBS_GENERIC=\" + str(self.limit) + \"]\"}\n        m = \"qsub: would exceed complex's per-user limit of jobs in 'Q' state\"\n        self.common_limit_test(True, a, queued=True, exp_err=m)\n\n    def test_server_user_limits_queued(self):\n        \"\"\"\n        Test queued_jobs_threshold for user TEST_USER at the server level.\n        \"\"\"\n        a = {\"queued_jobs_threshold\":\n             \"[u:\" + str(TEST_USER) + \"=\" + str(self.limit) + \"]\"}\n        errmsg = \"qsub: Maximum number of jobs in 'Q' state for user \" + \\\n            str(TEST_USER) + ' already in complex'\n 
       self.common_limit_test(True, a, queued=True, exp_err=errmsg)\n\n    def test_server_project_limits_queued(self):\n        \"\"\"\n        Test queued_jobs_threshold for project p1 at the server level.\n        \"\"\"\n        a = {\"queued_jobs_threshold\": \"[p:p1=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_project: 'p1'}\n        errmsg = \"qsub: Maximum number of jobs in 'Q' state for project p1 \" \\\n            + \"already in complex\"\n        self.common_limit_test(True, a, attrs, queued=True, exp_err=errmsg)\n\n    def test_server_generic_project_limits_queued(self):\n        \"\"\"\n        Test queued_jobs_threshold for any project at the server level.\n        \"\"\"\n        a = {\"queued_jobs_threshold\":\n             \"[p:PBS_GENERIC=\" + str(self.limit) + \"]\"}\n        errmsg = \"qsub: would exceed complex's per-project limit of jobs in \" \\\n            + \"'Q' state\"\n        self.common_limit_test(True, a, queued=True, exp_err=errmsg)\n\n    @skipOnShasta\n    def test_server_group_limits_queued(self):\n        \"\"\"\n        Test queued_jobs_threshold for group TSTGRP0 at the server level.\n        \"\"\"\n        a = {\"queued_jobs_threshold\":\n             \"[g:\" + str(TSTGRP0) + \"=\" + str(self.limit) + \"]\"}\n        errmsg = \"qsub: Maximum number of jobs in 'Q' state for group \" + \\\n            str(TSTGRP0) + ' already in complex'\n        self.common_limit_test(True, a, queued=True, exp_err=errmsg)\n\n    @skipOnShasta\n    def test_server_generic_group_limits_queued(self):\n        \"\"\"\n        Test queued_jobs_threshold for any group at the server level.\n        \"\"\"\n        a = {\"queued_jobs_threshold\":\n             \"[g:PBS_GENERIC=\" + str(self.limit) + \"]\"}\n        m = \"qsub: would exceed complex's per-group limit of jobs in 'Q' state\"\n        self.common_limit_test(True, a, queued=True, exp_err=m)\n\n    def test_server_overall_limits_queued(self):\n        \"\"\"\n        Test 
queued_jobs_threshold for any entity at the server level.\n        \"\"\"\n        a = {\"queued_jobs_threshold\": \"[o:PBS_ALL=\" + str(self.limit) + \"]\"}\n        errmsg = \"qsub: Maximum number of jobs in 'Q' state already in complex\"\n        self.common_limit_test(True, a, queued=True, exp_err=errmsg)\n\n    def test_queue_generic_user_limits_queued(self):\n        \"\"\"\n        Test queued_jobs_threshold for any user for the default queue.\n        \"\"\"\n        a = {\"queued_jobs_threshold\":\n             \"[u:PBS_GENERIC=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_queue: self.server.default_queue}\n        errmsg = \"qsub: would exceed queue generic's per-user limit of \" \\\n            + \"jobs in 'Q' state\"\n        self.common_limit_test(False, a, attrs, queued=True, exp_err=errmsg)\n\n    def test_queue_user_limits_queued(self):\n        \"\"\"\n        Test queued_jobs_threshold for user pbsuser1 for the default queue.\n        \"\"\"\n        a = {\"queued_jobs_threshold\":\n             \"[u:\" + str(TEST_USER) + \"=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_queue: self.server.default_queue}\n        errmsg = \"qsub: Maximum number of jobs in 'Q' state for user \" + \\\n            str(TEST_USER) + ' already in queue ' + self.server.default_queue\n        self.common_limit_test(False, a, attrs, queued=True, exp_err=errmsg)\n\n    @skipOnShasta\n    def test_queue_group_limits_queued(self):\n        \"\"\"\n        Test queued_jobs_threshold for group TSTGRP0 for the default queue.\n        \"\"\"\n        a = {\"queued_jobs_threshold\":\n             \"[g:\" + str(TSTGRP0) + \"=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_queue: self.server.default_queue}\n        errmsg = \"qsub: Maximum number of jobs in 'Q' state for group \" + \\\n            str(TSTGRP0) + ' already in queue ' + self.server.default_queue\n        self.common_limit_test(False, a, attrs, queued=True, exp_err=errmsg)\n\n    def 
test_queue_project_limits_queued(self):\n        \"\"\"\n        Test queued_jobs_threshold for project p1 for the default queue.\n        \"\"\"\n        a = {\"queued_jobs_threshold\": \"[p:p1=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_queue: self.server.default_queue, ATTR_project: 'p1'}\n        errmsg = \"qsub: Maximum number of jobs in 'Q' state for project p1 \" \\\n            'already in queue ' + self.server.default_queue\n        self.common_limit_test(False, a, attrs, queued=True, exp_err=errmsg)\n\n    def test_queue_generic_project_limits_queued(self):\n        \"\"\"\n        Test queued_jobs_threshold for any project for the default queue.\n        \"\"\"\n        a = {\"queued_jobs_threshold\":\n             \"[p:PBS_GENERIC=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_queue: self.server.default_queue}\n        errmsg = 'qsub: would exceed queue ' + self.server.default_queue + \\\n            \"'s per-project limit of jobs in 'Q' state\"\n        self.common_limit_test(False, a, attrs, queued=True, exp_err=errmsg)\n\n    @skipOnShasta\n    def test_queue_generic_group_limits_queued(self):\n        \"\"\"\n        Test queued_jobs_threshold for any group for the default queue.\n        \"\"\"\n        a = {\"queued_jobs_threshold\":\n             \"[g:PBS_GENERIC=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_queue: self.server.default_queue}\n        errmsg = 'qsub: would exceed queue ' + self.server.default_queue + \\\n            \"'s per-group limit of jobs in 'Q' state\"\n        self.common_limit_test(False, a, attrs, queued=True, exp_err=errmsg)\n\n    def test_queue_overall_limits_queued(self):\n        \"\"\"\n        Test queued_jobs_threshold for all entities for the default queue.\n        \"\"\"\n        a = {\"queued_jobs_threshold\": \"[o:PBS_ALL=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_queue: self.server.default_queue}\n        emsg = \"qsub: Maximum number of jobs in 'Q' state already in queue \" 
\\\n            + self.server.default_queue\n        self.common_limit_test(False, a, attrs, queued=True, exp_err=emsg)\n\n    def test_server_generic_user_limits_res_queued(self):\n        \"\"\"\n        Test queued_jobs_threshold_res for any user at the server level.\n        \"\"\"\n        a = {\"queued_jobs_threshold_res.ncpus\":\n             \"[u:PBS_GENERIC=\" + str(self.limit) + \"]\"}\n        attrs = {'Resource_List.select': '1:ncpus=1'}\n        errmsg = 'qsub: would exceed per-user limit on resource ncpus in ' \\\n            + \"complex for jobs in 'Q' state\"\n        self.common_limit_test(True, a, attrs, queued=True, exp_err=errmsg)\n\n    def test_server_user_limits_res_queued(self):\n        \"\"\"\n        Test queued_jobs_threshold_res for user pbsuser1 at the server level.\n        \"\"\"\n        a = {\"queued_jobs_threshold_res.ncpus\":\n             \"[u:\" + str(TEST_USER) + \"=\" + str(self.limit) + \"]\"}\n        attrs = {'Resource_List.select': '1:ncpus=1'}\n        errmsg = 'qsub: would exceed user ' + str(TEST_USER) + \\\n            \"'s limit on resource ncpus in complex for jobs in 'Q' state\"\n        self.common_limit_test(True, a, attrs, queued=True, exp_err=errmsg)\n\n    @skipOnShasta\n    def test_server_generic_group_limits_res_queued(self):\n        \"\"\"\n        Test queued_jobs_threshold_res for any group at the server level.\n        \"\"\"\n        a = {\"queued_jobs_threshold_res.ncpus\":\n             \"[g:PBS_GENERIC=\" + str(self.limit) + \"]\"}\n        attrs = {'Resource_List.select': '1:ncpus=1'}\n        errmsg = 'qsub: would exceed per-group limit on resource ncpus in '\\\n            + \"complex for jobs in 'Q' state\"\n        self.common_limit_test(True, a, attrs, queued=True, exp_err=errmsg)\n\n    def test_server_project_limits_res_queued(self):\n        \"\"\"\n        Test queued_jobs_threshold_res for project p1 at the server level.\n        \"\"\"\n        a = 
{\"queued_jobs_threshold_res.ncpus\":\n             \"[p:p1=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_project: 'p1', 'Resource_List.select': '1:ncpus=1'}\n        errmsg = \"qsub: would exceed project p1's limit on resource ncpus in\" \\\n            + \" complex for jobs in 'Q' state\"\n        self.common_limit_test(True, a, attrs, queued=True, exp_err=errmsg)\n\n    def test_server_generic_project_limits_res_queued(self):\n        \"\"\"\n        Test queued_jobs_threshold_res for any project at the server level.\n        \"\"\"\n        a = {\"queued_jobs_threshold_res.ncpus\":\n             \"[p:PBS_GENERIC=\" + str(self.limit) + \"]\"}\n        attrs = {'Resource_List.select': '1:ncpus=1'}\n        errmsg = 'qsub: would exceed per-project limit on resource ncpus in ' \\\n            + \"complex for jobs in 'Q' state\"\n        self.common_limit_test(True, a, attrs, queued=True, exp_err=errmsg)\n\n    @skipOnShasta\n    def test_server_group_limits_res_queued(self):\n        \"\"\"\n        Test queued_jobs_threshold_res for group TSTGRP0 at the server level.\n        \"\"\"\n        a = {\"queued_jobs_threshold_res.ncpus\":\n             \"[g:\" + str(TSTGRP0) + \"=\" + str(self.limit) + \"]\"}\n        attrs = {'Resource_List.select': '1:ncpus=1'}\n        errmsg = 'qsub: would exceed group ' + str(TSTGRP0) + \\\n            \"'s limit on resource ncpus in complex for jobs in 'Q' state\"\n        self.common_limit_test(True, a, attrs, queued=True, exp_err=errmsg)\n\n    def test_server_overall_limits_res_queued(self):\n        \"\"\"\n        Test queued_jobs_threshold_res for all entities at the server level.\n        \"\"\"\n        a = {\"queued_jobs_threshold_res.ncpus\":\n             \"[o:PBS_ALL=\" + str(self.limit) + \"]\"}\n        attrs = {'Resource_List.select': '1:ncpus=1'}\n        errmsg = 'qsub: would exceed limit on resource ncpus in complex for '\\\n            + \"jobs in 'Q' state\"\n        self.common_limit_test(True, a, 
attrs, queued=True, exp_err=errmsg)\n\n    def test_queue_generic_user_limits_res_queued(self):\n        \"\"\"\n        Test queued_jobs_threshold_res for all users for the default queue.\n        \"\"\"\n        a = {\"queued_jobs_threshold_res.ncpus\":\n             \"[u:PBS_GENERIC=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_queue: self.server.default_queue,\n                 'Resource_List.select': '1:ncpus=1'}\n        emsg = 'qsub: would exceed per-user limit on resource ncpus in queue '\\\n            + self.server.default_queue + \" for jobs in 'Q' state\"\n        self.common_limit_test(False, a, attrs, queued=True, exp_err=emsg)\n\n    def test_queue_user_limits_res_queued(self):\n        \"\"\"\n        Test queued_jobs_threshold_res for user pbsuser1 for the default queue.\n        \"\"\"\n        a = {\"queued_jobs_threshold_res.ncpus\":\n             \"[u:\" + str(TEST_USER) + \"=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_queue: self.server.default_queue,\n                 'Resource_List.select': '1:ncpus=1'}\n        errmsg = 'qsub: would exceed user ' + str(TEST_USER) + \\\n            \"'s limit on resource ncpus in queue \" + \\\n            self.server.default_queue + \" for jobs in 'Q' state\"\n        self.common_limit_test(False, a, attrs, queued=True, exp_err=errmsg)\n\n    @skipOnShasta\n    def test_queue_group_limits_res_queued(self):\n        \"\"\"\n        Test queued_jobs_threshold_res for group TSTGRP0 for the default queue.\n        \"\"\"\n        a = {\"queued_jobs_threshold_res.ncpus\":\n             \"[g:\" + str(TSTGRP0) + \"=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_queue: self.server.default_queue,\n                 'Resource_List.select': '1:ncpus=1'}\n        errmsg = 'qsub: would exceed group ' + str(TSTGRP0) + \\\n            \"'s limit on resource ncpus in queue \" + \\\n            self.server.default_queue + \" for jobs in 'Q' state\"\n        self.common_limit_test(False, a, attrs, 
queued=True, exp_err=errmsg)\n\n    @skipOnShasta\n    def test_queue_generic_group_limits_res_queued(self):\n        \"\"\"\n        Test queued_jobs_threshold_res for any group for the default queue.\n        \"\"\"\n        a = {\"queued_jobs_threshold_res.ncpus\":\n             \"[g:PBS_GENERIC=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_queue: self.server.default_queue,\n                 'Resource_List.select': '1:ncpus=1'}\n        errmsg = 'qsub: would exceed per-group limit on resource ncpus in ' \\\n            + 'queue ' + self.server.default_queue + \" for jobs in 'Q' state\"\n        self.common_limit_test(False, a, attrs, queued=True, exp_err=errmsg)\n\n    def test_queue_project_limits_res_queued(self):\n        \"\"\"\n        Test queued_jobs_threshold_res for project p1 for the default queue.\n        \"\"\"\n        a = {\"queued_jobs_threshold_res.ncpus\":\n             \"[p:p1=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_queue: self.server.default_queue,\n                 ATTR_project: 'p1', 'Resource_List.select': '1:ncpus=1'}\n        errmsg = \"qsub: would exceed project p1's limit on resource ncpus \" + \\\n            'in queue ' + self.server.default_queue + \" for jobs in 'Q' state\"\n        self.common_limit_test(False, a, attrs, queued=True, exp_err=errmsg)\n\n    def test_queue_generic_project_limits_res_queued(self):\n        \"\"\"\n        Test queued_jobs_threshold_res for any project for the default queue.\n        \"\"\"\n        a = {\"queued_jobs_threshold_res.ncpus\":\n             \"[p:PBS_GENERIC=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_queue: self.server.default_queue,\n                 'Resource_List.select': '1:ncpus=1'}\n        errmsg = 'qsub: would exceed per-project limit on resource ncpus in' \\\n            + ' queue ' + self.server.default_queue + \" for jobs in 'Q' state\"\n        self.common_limit_test(False, a, attrs, queued=True, exp_err=errmsg)\n\n    def 
test_queue_overall_limits_res_queued(self):\n        \"\"\"\n        Test queued_jobs_threshold_res for any entity for the default queue.\n        \"\"\"\n        a = {\"queued_jobs_threshold_res.ncpus\":\n             \"[o:PBS_ALL=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_queue: self.server.default_queue,\n                 'Resource_List.select': '1:ncpus=1'}\n        errmsg = 'qsub: would exceed limit on resource ncpus in queue ' + \\\n            self.server.default_queue + \" for jobs in 'Q' state\"\n        self.common_limit_test(False, a, attrs, queued=True, exp_err=errmsg)\n\n    def test_server_generic_user_limits_max(self):\n        \"\"\"\n        Test max_queued for any user at the server level.\n        \"\"\"\n        a = {\"max_queued\":\n             \"[u:PBS_GENERIC=\" + str(self.limit) + \"]\"}\n        errmsg = \"qsub: would exceed complex's per-user limit\"\n        self.common_limit_test(True, a, exp_err=errmsg)\n\n    def test_server_user_limits_max(self):\n        \"\"\"\n        Test max_queued for user pbsuser1 at the server level.\n        \"\"\"\n        a = {\"max_queued\":\n             \"[u:\" + str(TEST_USER) + \"=\" + str(self.limit) + \"]\"}\n        errmsg = 'qsub: Maximum number of jobs for user ' + str(TEST_USER) + \\\n            ' already in complex'\n        self.common_limit_test(True, a, exp_err=errmsg)\n\n    def test_server_project_limits_max(self):\n        \"\"\"\n        Test max_queued for project p1 at the server level.\n        \"\"\"\n        a = {\"max_queued\": \"[p:p1=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_project: 'p1'}\n        msg = 'qsub: Maximum number of jobs for project p1 already in complex'\n        self.common_limit_test(True, a, attrs, exp_err=msg)\n\n    def test_server_generic_project_limits_max(self):\n        \"\"\"\n        Test max_queued for any project at the server level.\n        \"\"\"\n        a = {\"max_queued\":\n             \"[p:PBS_GENERIC=\" + str(self.limit) + 
\"]\"}\n        errmsg = \"qsub: would exceed complex's per-project limit\"\n        self.common_limit_test(True, a, exp_err=errmsg)\n\n    @skipOnShasta\n    def test_server_group_limits_max(self):\n        \"\"\"\n        Test max_queued for group TSTGRP0 at the server level.\n        \"\"\"\n        a = {\"max_queued\":\n             \"[g:\" + str(TSTGRP0) + \"=\" + str(self.limit) + \"]\"}\n        errmsg = 'qsub: Maximum number of jobs for group ' + str(TSTGRP0) + \\\n            ' already in complex'\n        self.common_limit_test(True, a, exp_err=errmsg)\n\n    @skipOnShasta\n    def test_server_generic_group_limits_max(self):\n        \"\"\"\n        Test max_queued for any group at the server level.\n        \"\"\"\n        a = {\"max_queued\":\n             \"[g:PBS_GENERIC=\" + str(self.limit) + \"]\"}\n        errmsg = \"qsub: would exceed complex's per-group limit\"\n        self.common_limit_test(True, a, exp_err=errmsg)\n\n    def test_server_overall_limits_max(self):\n        \"\"\"\n        Test max_queued for any entity at the server level.\n        \"\"\"\n        a = {\"max_queued\": \"[o:PBS_ALL=\" + str(self.limit) + \"]\"}\n        errmsg = 'qsub: Maximum number of jobs already in complex'\n        self.common_limit_test(True, a, exp_err=errmsg)\n\n    def test_queue_generic_user_limits_max(self):\n        \"\"\"\n        Test max_queued for any user for the default queue.\n        \"\"\"\n        a = {\"max_queued\":\n             \"[u:PBS_GENERIC=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_queue: self.server.default_queue}\n        errmsg = \"qsub: would exceed queue generic's per-user limit\"\n        self.common_limit_test(False, a, attrs, exp_err=errmsg)\n\n    def test_queue_user_limits_max(self):\n        \"\"\"\n        Test max_queued for user pbsuser1 for the default queue.\n        \"\"\"\n        a = {\"max_queued\":\n             \"[u:\" + str(TEST_USER) + \"=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_queue: 
self.server.default_queue}\n        errmsg = 'qsub: Maximum number of jobs for user ' + str(TEST_USER) + \\\n            ' already in queue ' + self.server.default_queue\n        self.common_limit_test(False, a, attrs, exp_err=errmsg)\n\n    @skipOnShasta\n    def test_queue_group_limits_max(self):\n        \"\"\"\n        Test max_queued for group TSTGRP0 for the default queue.\n        \"\"\"\n        a = {\"max_queued\":\n             \"[g:\" + str(TSTGRP0) + \"=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_queue: self.server.default_queue}\n        errmsg = 'qsub: Maximum number of jobs for group ' + str(TSTGRP0) + \\\n            ' already in queue ' + self.server.default_queue\n        self.common_limit_test(False, a, attrs, exp_err=errmsg)\n\n    def test_queue_project_limits_max(self):\n        \"\"\"\n        Test max_queued for project p1 for the default queue.\n        \"\"\"\n        a = {\"max_queued\": \"[p:p1=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_queue: self.server.default_queue, ATTR_project: 'p1'}\n        msg = 'qsub: Maximum number of jobs for project p1 already in queue '\\\n            + self.server.default_queue\n        self.common_limit_test(False, a, attrs, exp_err=msg)\n\n    def test_queue_generic_project_limits_max(self):\n        \"\"\"\n        Test max_queued for any project for the default queue.\n        \"\"\"\n        a = {\"max_queued\":\n             \"[p:PBS_GENERIC=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_queue: self.server.default_queue}\n        errmsg = 'qsub: would exceed queue ' + self.server.default_queue + \\\n            \"'s per-project limit\"\n        self.common_limit_test(False, a, attrs, exp_err=errmsg)\n\n    @skipOnShasta\n    def test_queue_generic_group_limits_max(self):\n        \"\"\"\n        Test max_queued for any group for the default queue.\n        \"\"\"\n        a = {\"max_queued\":\n             \"[g:PBS_GENERIC=\" + str(self.limit) + \"]\"}\n        attrs = 
{ATTR_queue: self.server.default_queue}\n        errmsg = 'qsub: would exceed queue ' + self.server.default_queue + \\\n            \"'s per-group limit\"\n        self.common_limit_test(False, a, attrs, exp_err=errmsg)\n\n    def test_queue_overall_limits_max(self):\n        \"\"\"\n        Test max_queued for all entities for the default queue.\n        \"\"\"\n        a = {\"max_queued\": \"[o:PBS_ALL=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_queue: self.server.default_queue}\n        errmsg = 'qsub: Maximum number of jobs already in queue ' + \\\n            self.server.default_queue\n        self.common_limit_test(False, a, attrs, exp_err=errmsg)\n\n    def test_server_generic_user_limits_res_max(self):\n        \"\"\"\n        Test max_queued_res for any user at the server level.\n        \"\"\"\n        a = {\"max_queued_res.ncpus\":\n             \"[u:PBS_GENERIC=\" + str(self.limit) + \"]\"}\n        attrs = {'Resource_List.select': '1:ncpus=1'}\n        emsg = 'qsub: would exceed per-user limit on resource ncpus in complex'\n        self.common_limit_test(True, a, attrs, exp_err=emsg)\n\n    def test_server_user_limits_res_max(self):\n        \"\"\"\n        Test max_queued_res for user pbsuser1 at the server level.\n        \"\"\"\n        a = {\"max_queued_res.ncpus\":\n             \"[u:\" + str(TEST_USER) + \"=\" + str(self.limit) + \"]\"}\n        attrs = {'Resource_List.select': '1:ncpus=1'}\n        errmsg = 'qsub: would exceed user ' + str(TEST_USER) + \\\n            \"'s limit on resource ncpus in complex\"\n        self.common_limit_test(True, a, attrs, exp_err=errmsg)\n\n    @skipOnShasta\n    def test_server_generic_group_limits_res_max(self):\n        \"\"\"\n        Test max_queued_res for any group at the server level.\n        \"\"\"\n        a = {\"max_queued_res.ncpus\":\n             \"[g:PBS_GENERIC=\" + str(self.limit) + \"]\"}\n        attrs = {'Resource_List.select': '1:ncpus=1'}\n        msg = 'qsub: would exceed 
per-group limit on resource ncpus in complex'\n        self.common_limit_test(True, a, attrs, exp_err=msg)\n\n    def test_server_project_limits_res_max(self):\n        \"\"\"\n        Test max_queued_res for project p1 at the server level.\n        \"\"\"\n        a = {\"max_queued_res.ncpus\":\n             \"[p:p1=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_project: 'p1', 'Resource_List.select': '1:ncpus=1'}\n        errmsg = \"qsub: would exceed project p1's limit on resource ncpus in\" \\\n            + \" complex\"\n        self.common_limit_test(True, a, attrs, exp_err=errmsg)\n\n    def test_server_generic_project_limits_res_max(self):\n        \"\"\"\n        Test max_queued_res for any project at the server level.\n        \"\"\"\n        a = {\"max_queued_res.ncpus\":\n             \"[p:PBS_GENERIC=\" + str(self.limit) + \"]\"}\n        attrs = {'Resource_List.select': '1:ncpus=1'}\n        m = 'qsub: would exceed per-project limit on resource ncpus in complex'\n        self.common_limit_test(True, a, attrs, exp_err=m)\n\n    @skipOnShasta\n    def test_server_group_limits_res_max(self):\n        \"\"\"\n        Test max_queued_res for group TSTGRP0 at the server level.\n        \"\"\"\n        a = {\"max_queued_res.ncpus\":\n             \"[g:\" + str(TSTGRP0) + \"=\" + str(self.limit) + \"]\"}\n        attrs = {'Resource_List.select': '1:ncpus=1'}\n        errmsg = 'qsub: would exceed group ' + str(TSTGRP0) + \\\n            \"'s limit on resource ncpus in complex\"\n        self.common_limit_test(True, a, attrs, exp_err=errmsg)\n\n    def test_server_overall_limits_res_max(self):\n        \"\"\"\n        Test max_queued_res for all entities at the server level.\n        \"\"\"\n        a = {\"max_queued_res.ncpus\":\n             \"[o:PBS_ALL=\" + str(self.limit) + \"]\"}\n        attrs = {'Resource_List.select': '1:ncpus=1'}\n        errmsg = 'qsub: would exceed limit on resource ncpus in complex'\n        self.common_limit_test(True, a, 
attrs, exp_err=errmsg)\n\n    def test_queue_generic_user_limits_res_max(self):\n        \"\"\"\n        Test max_queued_res for all users for the default queue.\n        \"\"\"\n        a = {\"max_queued_res.ncpus\":\n             \"[u:PBS_GENERIC=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_queue: self.server.default_queue,\n                 'Resource_List.select': '1:ncpus=1'}\n        errmsg = 'qsub: would exceed per-user limit on resource ncpus in' \\\n            + ' queue ' + self.server.default_queue\n        self.common_limit_test(False, a, attrs, exp_err=errmsg)\n\n    def test_queue_user_limits_res_max(self):\n        \"\"\"\n        Test max_queued_res for user pbsuser1 for the default queue.\n        \"\"\"\n        a = {\"max_queued_res.ncpus\":\n             \"[u:\" + str(TEST_USER) + \"=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_queue: self.server.default_queue,\n                 'Resource_List.select': '1:ncpus=1'}\n        errmsg = 'qsub: would exceed user ' + str(TEST_USER) + \\\n            \"'s limit on resource ncpus in queue \" + \\\n            self.server.default_queue\n        self.common_limit_test(False, a, attrs, exp_err=errmsg)\n\n    @skipOnShasta\n    def test_queue_group_limits_res_max(self):\n        \"\"\"\n        Test max_queued_res for group TSTGRP0 for the default queue.\n        \"\"\"\n        a = {\"max_queued_res.ncpus\":\n             \"[g:\" + str(TSTGRP0) + \"=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_queue: self.server.default_queue,\n                 'Resource_List.select': '1:ncpus=1'}\n        errmsg = 'qsub: would exceed group ' + str(TSTGRP0) + \\\n            \"'s limit on resource ncpus in queue \" + self.server.default_queue\n        self.common_limit_test(False, a, attrs, exp_err=errmsg)\n\n    @skipOnShasta\n    def test_queue_generic_group_limits_res_max(self):\n        \"\"\"\n        Test max_queued_res for any group for the default queue.\n        \"\"\"\n        a = 
{\"max_queued_res.ncpus\":\n             \"[g:PBS_GENERIC=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_queue: self.server.default_queue,\n                 'Resource_List.select': '1:ncpus=1'}\n        errmsg = 'qsub: would exceed per-group limit on resource ncpus in' \\\n            + ' queue ' + self.server.default_queue\n        self.common_limit_test(False, a, attrs, exp_err=errmsg)\n\n    def test_queue_project_limits_res_max(self):\n        \"\"\"\n        Test max_queued_res for project p1 for the default queue.\n        \"\"\"\n        a = {\"max_queued_res.ncpus\":\n             \"[p:p1=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_queue: self.server.default_queue,\n                 ATTR_project: 'p1', 'Resource_List.select': '1:ncpus=1'}\n        errmsg = \"qsub: would exceed project p1's limit on resource ncpus\" + \\\n            ' in queue ' + self.server.default_queue\n        self.common_limit_test(False, a, attrs, exp_err=errmsg)\n\n    def test_queue_generic_project_limits_res_max(self):\n        \"\"\"\n        Test max_queued_res for any project for the default queue.\n        \"\"\"\n        a = {\"max_queued_res.ncpus\":\n             \"[p:PBS_GENERIC=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_queue: self.server.default_queue,\n                 'Resource_List.select': '1:ncpus=1'}\n        errmsg = 'qsub: would exceed per-project limit on resource ncpus in' \\\n            + ' queue ' + self.server.default_queue\n        self.common_limit_test(False, a, attrs, exp_err=errmsg)\n\n    def test_queue_overall_limits_res_max(self):\n        \"\"\"\n        Test max_queued_res for any entity for the default queue.\n        \"\"\"\n        a = {\"max_queued_res.ncpus\":\n             \"[o:PBS_ALL=\" + str(self.limit) + \"]\"}\n        attrs = {ATTR_queue: self.server.default_queue,\n                 'Resource_List.select': '1:ncpus=1'}\n        errmsg = 'qsub: would exceed limit on resource ncpus in queue ' \\\n                 
+ self.server.default_queue\n        self.common_limit_test(False, a, attrs, exp_err=errmsg)\n\n    def test_qalter_resource(self):\n        \"\"\"\n        Test that qaltering a job's resource list is accounted for in\n        queued limit enforcement\n        \"\"\"\n        res_name = 'res_long'\n        res_attr = {ATTR_RESC_TYPE: 'long', ATTR_RESC_FLAG: 'q'}\n        self.server.manager(MGR_CMD_CREATE, RSC, res_attr, id=res_name)\n\n        a = {\"max_queued_res.\" + res_name:\n             \"[o:PBS_ALL=\" + str(self.limit) + \"]\"}\n        qname = self.server.default_queue\n        self.server.manager(MGR_CMD_SET, QUEUE, a, qname)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {ATTR_scheduling: 'False'})\n\n        attrs = {ATTR_queue: qname, 'Resource_List.' + res_name: 9}\n        j_1 = Job(TEST_USER, attrs)\n        J_1_id = self.server.submit(j_1)\n\n        try:\n            attrs = {ATTR_queue: qname, 'Resource_List.' + res_name: 2}\n            j_2 = Job(TEST_USER, attrs)\n            self.server.submit(j_2)\n        except PbsSubmitError as e:\n            exp_err = 'qsub: would exceed limit on resource ' + res_name + \\\n                ' in queue ' + qname\n            if e.msg[0] != exp_err:\n                raise self.failureException(\"rcvd unexpected err message: \" +\n                                            e.msg[0])\n\n        attribs = {'Resource_List.' + res_name: 8}\n        self.server.alterjob(J_1_id, attribs)\n        self.server.expect(JOB, attribs, id=J_1_id)\n\n        self.server.submit(j_2)\n\n        try:\n            attrs = {ATTR_queue: qname, 'Resource_List.' 
+ res_name: 1}\n            j_3 = Job(TEST_USER, attrs)\n            self.server.submit(j_3)\n        except PbsSubmitError as e:\n            exp_err = 'qsub: would exceed limit on resource ' + res_name + \\\n                ' in queue ' + qname\n            if e.msg[0] != exp_err:\n                raise self.failureException(\"rcvd unexpected err message: \" +\n                                            e.msg[0])\n\n    def test_multiple_queued_limits(self):\n        \"\"\"\n        Test that queued_jobs_threshold limits set at both the queue and\n        the server level are enforced.\n        \"\"\"\n        defqname = self.server.default_queue\n        q2name = 'q2'\n        a = OrderedDict()\n        a['queue_type'] = 'execution'\n        a['enabled'] = 'True'\n        a['started'] = 'True'\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id=q2name)\n\n        a = {\"queued_jobs_threshold\":\n             \"[u:PBS_GENERIC=10]\"}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {\"queued_jobs_threshold\":\n             \"[u:PBS_GENERIC=5]\"}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, defqname)\n        jd = Job(TEST_USER, {ATTR_queue: defqname})\n        j2 = Job(TEST_USER, {ATTR_queue: q2name})\n\n        jid = self.server.submit(jd)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        for _ in range(5):\n            jd = Job(TEST_USER, {ATTR_queue: defqname})\n            self.server.submit(jd)\n\n        try:\n            self.server.submit(jd)\n        except PbsSubmitError as e:\n            exp_err = \"qsub: would exceed queue generic's per-user limit \" + \\\n                \"of jobs in 'Q' state\"\n            if e.msg[0] != exp_err:\n                raise self.failureException(\"rcvd unexpected err message: \" +\n                                            e.msg[0])\n        else:\n            self.assertFalse(True, \"Job violating limits got submitted.\")\n\n        for _ in range(5):\n            self.server.submit(j2)\n\n        try:\n            self.server.submit(j2)\n        except PbsSubmitError as e:\n            exp_err =
 \"qsub: would exceed complex's per-user limit of \" + \\\n                \"jobs in 'Q' state\"\n            if e.msg[0] != exp_err:\n                raise self.failureException(\"rcvd unexpected err message: \" +\n                                            e.msg[0])\n        else:\n            self.assertFalse(True, \"Job violating limits got submitted.\")\n\n    def test_pbs_all_soft_limits(self):\n        \"\"\"\n        Set resource soft limit on server for PBS_ALL and see that the job\n        requesting this resource is susceptible to preemption\n        \"\"\"\n        # set max_run_res_soft on mem for PBS_ALL\n        a = {'max_run_res_soft.mem': '[o:PBS_ALL=256mb]'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {'resources_available.ncpus': 4,\n             'resources_available.mem': '2gb'}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        # set preempt prio on the scheduler\n        t = \"express_queue, normal_jobs, server_softlimits\"\n        self.server.manager(MGR_CMD_SET, SCHED, {'preempt_prio': t})\n\n        # submit a job that requests mem and exceeds soft limit\n        attr = {'Resource_List.select': '1:ncpus=2:mem=1gb'}\n        j1 = Job(attrs=attr)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        # Sleep for a second and submit another job requesting all ncpus left\n        # Sleep is required so that scheduler sorts the most recently started\n        # job to the top of the preemptible candidates when it tries\n        # preemption\n        time.sleep(1)\n        attr = {'Resource_List.select': '1:ncpus=2'}\n        j2 = Job(attrs=attr)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        # Submit third job (normal priority) that will try to preempt the\n        # job which is running over its softlimits (which is J1 here).\n        j3 = Job(attrs=attr)\n        jid3 = 
self.server.submit(j3)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid1)\n        # Now that no job is over its soft limits, if we submit another\n        # normal priority job, it should stay queued\n        j4 = Job(attrs=attr)\n        jid4 = self.server.submit(j4)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid4)\n\n    def test_user_soft_limits(self):\n        \"\"\"\n        Set resource soft limit on server for a user and see that jobs\n        requesting this resource submitted by the same user are susceptible\n        to preemption\n        \"\"\"\n        # set max_run_res_soft on mem for TEST_USER\n        a = {'max_run_res_soft.mem': '[u:' + str(TEST_USER) + '=256mb]'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {'resources_available.ncpus': 4,\n             'resources_available.mem': '2gb'}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        # set preempt prio on the scheduler\n        t = \"express_queue, normal_jobs, server_softlimits\"\n        self.server.manager(MGR_CMD_SET, SCHED, {'preempt_prio': t})\n\n        # submit a job as TEST_USER that requests mem and exceeds soft limit\n        attr = {'Resource_List.select': '1:ncpus=2:mem=1gb'}\n        j1 = Job(TEST_USER, attrs=attr)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        # Sleep for a second and submit another job requesting all ncpus left\n        # Sleep is required so that scheduler sorts the most recently started\n        # job to the top of the preemptible candidates when it tries\n        # preemption\n        time.sleep(1)\n        attr = 
{'Resource_List.select': '1:ncpus=2'}\n        j2 = Job(TEST_USER, attrs=attr)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        # Submit third job (normal priority) that will try to preempt the\n        # job which is running over its softlimits (which is J1 here).\n        j3 = Job(TEST_USER, attrs=attr)\n        jid3 = self.server.submit(j3)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid1)\n        # Now that no job is over its soft limits, if we submit another\n        # normal priority job, it should stay queued\n        j4 = Job(TEST_USER, attrs=attr)\n        jid4 = self.server.submit(j4)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid4)\n\n    def test_group_soft_limits(self):\n        \"\"\"\n        Set resource soft limit on server for a group and see that jobs\n        requesting this resource submitted by the same group are susceptible\n        to preemption\n        \"\"\"\n        # set max_run_res_soft on mem for TSTGRP0\n        a = {'max_run_res_soft.mem': '[g:' + str(TSTGRP0) + '=256mb]'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {'resources_available.ncpus': 4,\n             'resources_available.mem': '2gb'}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        # set preempt prio on the scheduler\n        t = \"express_queue, normal_jobs, server_softlimits\"\n        self.server.manager(MGR_CMD_SET, SCHED, {'preempt_prio': t})\n\n        # submit a job as TSTGRP0 that requests mem and exceeds soft limit\n        attr = {'Resource_List.select': '1:ncpus=2:mem=1gb',\n                'group_list': TSTGRP0}
\n        j1 = Job(TEST_USER, attrs=attr)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        # Sleep for a second and submit another job requesting all ncpus left\n        # Sleep is required so that scheduler sorts the most recently started\n        # job to the top of the preemptible candidates when it tries\n        # preemption\n        time.sleep(1)\n        attr = {'Resource_List.select': '1:ncpus=2',\n                'group_list': TSTGRP1}\n        j2 = Job(TEST_USER1, attrs=attr)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        # Submit third job (normal priority) that will try to preempt the\n        # job which is running over its softlimits (which is J1 here).\n        j3 = Job(TEST_USER1, attrs=attr)\n        jid3 = self.server.submit(j3)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid1)\n        # Now that no job is over its soft limits, if we submit another\n        # normal priority job, it should stay queued\n        j4 = Job(TEST_USER1, attrs=attr)\n        jid4 = self.server.submit(j4)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid4)\n\n    def test_project_soft_limits(self):\n        \"\"\"\n        Set resource soft limit on server for a project and see that jobs\n        requesting this resource submitted under the same project are\n        susceptible to preemption\n        \"\"\"\n        # set max_run_res_soft on mem for project P1\n        a = {'max_run_res_soft.mem': '[p:P1=256mb]'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {'resources_available.ncpus': 4,\n             
'resources_available.mem': '2gb'}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        # set preempt prio on the scheduler\n        t = \"express_queue, normal_jobs, server_softlimits\"\n        self.server.manager(MGR_CMD_SET, SCHED, {'preempt_prio': t})\n\n        # submit a job under P1 project that requests mem and exceeds\n        # soft limit\n        attr = {'Resource_List.select': '1:ncpus=2:mem=1gb',\n                ATTR_project: 'P1'}\n        j1 = Job(TEST_USER, attrs=attr)\n        jid1 = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        # Sleep for a second and submit another job requesting all ncpus left\n        # Sleep is required so that scheduler sorts the most recently started\n        # job to the top of the preemptible candidates when it tries\n        # preemption\n        time.sleep(1)\n        attr = {'Resource_List.select': '1:ncpus=2',\n                ATTR_project: 'P2'}\n        j2 = Job(TEST_USER, attrs=attr)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        # Submit third job (normal priority) that will try to preempt the\n        # job which is running over its softlimits (which is J1 here).\n        j3 = Job(TEST_USER, attrs=attr)\n        jid3 = self.server.submit(j3)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid1)\n        # Now that no job is over its soft limits, if we submit another\n        # normal priority job, it should stay queued\n        j4 = Job(TEST_USER, attrs=attr)\n        jid4 = self.server.submit(j4)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid3)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid4)\n"
  },
  {
    "path": "test/tests/functional/pbs_test_qorder.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass Test_qorder(TestFunctional):\n    \"\"\"\n    Test suite to test whether the political order of selecting a job to run\n    in scheduler changes when one does a qorder.\n    \"\"\"\n\n    def test_qorder_job(self):\n        \"\"\"\n        Submit two jobs, switch their order using qorder and then check if the\n        jobs are selected to run in the newly created order.\n        \"\"\"\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n\n        a = {'scheduling': 'false'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        j1 = Job(TEST_USER)\n        j1.set_sleep_time(10)\n        j2 = Job(TEST_USER)\n        j2.set_sleep_time(10)\n        jid1 = self.server.submit(j1)\n        jid2 = self.server.submit(j2)\n\n        rc = self.server.orderjob(jobid1=jid1, jobid2=jid2)\n        self.assertEqual(rc, 0)\n\n        a = {'scheduling': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {'server_state': 'Scheduling'}\n        self.server.expect(SERVER, a, op=NE)\n\n        jid2 = jid2.split('.')[0]\n        cycle = self.scheduler.cycles(start=self.server.ctime, lastN=1)\n        cycle = cycle[0]\n        firstconsidered = cycle.political_order[0]\n        msg = 'testinfo: first job considered [' + 
str(firstconsidered) + \\\n              '] == second submitted [' + str(jid2) + ']'\n        self.logger.info(msg)\n\n        self.assertEqual(firstconsidered, jid2)\n\n    def test_qorder_job_across_queues(self):\n        \"\"\"\n        Submit two jobs to different queues, switch their order using qorder\n        and then check if the jobs are selected to run in the newly created\n        order.\n        \"\"\"\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n\n        a = {'scheduling': 'false'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {'queue_type': 'e', 'enabled': '1', 'started': '1'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='workq2')\n\n        self.scheduler.set_sched_config({'by_queue': 'False'})\n\n        a = {ATTR_queue: 'workq'}\n        j1 = Job(TEST_USER, a)\n        j1.set_sleep_time(10)\n        a = {ATTR_queue: 'workq2'}\n        j2 = Job(TEST_USER, a)\n        j2.set_sleep_time(10)\n        jid1 = self.server.submit(j1)\n        jid2 = self.server.submit(j2)\n\n        rc = self.server.orderjob(jobid1=jid1, jobid2=jid2)\n        self.assertEqual(rc, 0)\n\n        a = {'scheduling': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {'server_state': 'Scheduling'}\n        self.server.expect(SERVER, a, op=NE)\n\n        jid2 = jid2.split('.')[0]\n        cycle = self.scheduler.cycles(start=self.server.ctime, lastN=1)\n        cycle = cycle[0]\n        firstconsidered = cycle.political_order[0]\n        msg = 'testinfo: first job considered [' + str(firstconsidered) + \\\n              '] == second submitted [' + str(jid2) + ']'\n        self.logger.info(msg)\n\n        self.assertEqual(firstconsidered, jid2)\n"
  },
  {
    "path": "test/tests/functional/pbs_test_run_count.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass Test_run_count(TestFunctional):\n    \"\"\"\n    Test suite to test run_count attribute of a job.\n    \"\"\"\n    hook_name = \"h1\"\n    hook_body = (\"import pbs\\n\"\n                 \"e=pbs.event()\\n\"\n                 \"e.reject()\\n\")\n\n    def create_reject_begin_hook(self):\n        start_time = time.time()\n        attr = {'event': 'execjob_begin'}\n        self.server.create_import_hook(self.hook_name, attr, self.hook_body)\n\n        # make sure hook has propagated to mom\n        self.mom.log_match(\"h1.HK;copy hook-related file request received\",\n                           existence=True, starttime=start_time)\n\n    def disable_reject_begin_hook(self):\n        start_time = time.time()\n        attr = {'enabled': 'false'}\n        self.server.manager(MGR_CMD_SET, HOOK, attr, self.hook_name)\n\n        # make sure hook has propagated to mom\n        self.mom.log_match(\"h1.HK;copy hook-related file request received\",\n                           existence=True, starttime=start_time)\n\n    def check_run_count(self, input_count=\"0\", output_count=\"21\"):\n        \"\"\"\n        Creates a hook, submits a job and checks the run count.\n        input_count is the user requested run_count and output_count\n        is the run_count attribute of job from the scheduler.\n        \"\"\"\n        # 
Create an execjob_begin hook that rejects the job\n        self.create_reject_begin_hook()\n\n        a = {ATTR_W: \"run_count=\" + input_count}\n        j = Job(TEST_USER, a)\n        jid = self.server.submit(j)\n\n        self.server.expect(JOB, {'job_state': \"H\", 'run_count': output_count},\n                           attrop=PTL_AND, id=jid)\n\n    def test_run_count_overflow(self):\n        \"\"\"\n        Submit a job that requests run count exceeding 64 bit integer limit\n        and see that such a job gets rejected.\n        \"\"\"\n        a = {ATTR_W: \"run_count=18446744073709551616\"}\n        j = Job(TEST_USER, a)\n        try:\n            self.server.submit(j)\n        except PbsSubmitError as e:\n            self.assertTrue(\"illegal -W value\" in e.msg[0])\n\n    def test_large_run_count(self):\n        \"\"\"\n        Submit a job with a large (>20) but valid run_count value and create\n        an execjob_begin hook that will reject the job. Check run_count to\n        make sure that the job goes to held state just after one rejection.\n        This is so because if run_count is greater than 20 then PBS will hold\n        the job upon the first rejection from mom.\n        \"\"\"\n\n        self.check_run_count(input_count=\"184\", output_count=\"185\")\n\n    def test_less_than_20_run_count(self):\n        \"\"\"\n        Submit a job with a run count of 15, create an execjob_begin\n        hook to reject the job and test that the job goes\n        into held state after 6 rejections\n        \"\"\"\n        self.check_run_count(input_count=\"15\", output_count=\"21\")\n\n    def subjob_check(self, jid, sjid, maxruncount=\"21\"):\n        self.server.expect(JOB, {ATTR_state: \"H\", ATTR_runcount: maxruncount},\n                           attrop=PTL_AND, id=sjid)\n        ja_comment = \"Job Array Held, too many failed attempts to run subjob\"\n        self.server.expect(JOB, {ATTR_state: \"H\",\n                                 ATTR_comment: 
(MATCH_RE, ja_comment)},\n                           attrop=PTL_AND, id=jid)\n        self.disable_reject_begin_hook()\n        self.server.rlsjob(jid, 's')\n        self.server.expect(JOB, {ATTR_state: \"R\"}, id=sjid)\n        ja_comment = \"Job Array Began at\"\n        self.server.expect(JOB, {ATTR_state: \"B\",\n                                 ATTR_comment: (MATCH_RE, ja_comment)},\n                           attrop=PTL_AND, id=jid)\n\n    def test_run_count_subjob(self):\n        \"\"\"\n        Submit a job array and check if the subjob and the parent are getting\n        held after 20 rejection from mom\n        \"\"\"\n        # Create an execjob_begin hook that rejects the job\n        self.create_reject_begin_hook()\n\n        a = {ATTR_J: '1-2'}\n        j = Job(TEST_USER, a)\n        jid = self.server.submit(j)\n        self.subjob_check(jid=jid, sjid=j.create_subjob_id(jid, 1))\n\n    def test_run_count_subjob_in_x(self):\n        \"\"\"\n        Submit a job array and check if the subjob and the parent are getting\n        held after 20 rejection from mom when there is another subjob in X\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {'resources_available.ncpus': 1},\n                            id=self.mom.shortname)\n\n        a = {ATTR_J: '1-6'}\n        j = Job(TEST_USER, a)\n        j.set_sleep_time(20)\n        jid = self.server.submit(j)\n        self.logger.info(\"Waiting for second subjob to go in R state\")\n        self.server.expect(JOB, {ATTR_state: \"R\"},\n                           id=j.create_subjob_id(jid, 2), offset=15)\n        # Create an execjob_begin hook that rejects the job\n        self.create_reject_begin_hook()\n        self.logger.info(\"Waiting for subjob to finish\")\n        self.server.expect(JOB, {ATTR_state: \"X\"},\n                           id=j.create_subjob_id(jid, 2), offset=15)\n\n        self.subjob_check(jid=jid, sjid=j.create_subjob_id(jid, 3))\n\n    def 
test_large_run_count_subjob(self):\n        \"\"\"\n        Submit a job array with a large (>20) but valid run_count value and\n        check that the subjob and the parent get\n        held after one rejection from mom\n        \"\"\"\n        # Create an execjob_begin hook that rejects the job\n        self.create_reject_begin_hook()\n\n        a = {ATTR_W: \"run_count=39\", ATTR_J: '1-2'}\n        j = Job(TEST_USER, a)\n        jid = self.server.submit(j)\n        sjid = j.create_subjob_id(jid, 1)\n        self.subjob_check(jid, sjid, maxruncount=\"40\")\n        return sjid\n\n    def test_large_run_count_subjob_in_x(self):\n        \"\"\"\n        Submit a job array with a large (>20) run_count value and check that\n        the subjob and the parent get held after one rejection from mom\n        when there is another subjob in X\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {'resources_available.ncpus': 1},\n                            id=self.mom.shortname)\n\n        a = {ATTR_W: \"run_count=453\", ATTR_J: '1-6'}\n        j = Job(TEST_USER, a)\n        j.set_sleep_time(10)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_state: \"R\"},\n                           id=j.create_subjob_id(jid, 2))\n        self.server.manager(MGR_CMD_SET, SCHED, {\"scheduling\": \"false\"})\n        # Create an execjob_begin hook that rejects the job\n        self.create_reject_begin_hook()\n        self.server.manager(MGR_CMD_SET, SCHED, {\"scheduling\": \"true\"})\n        self.server.expect(JOB, {ATTR_state: \"X\"},\n                           id=j.create_subjob_id(jid, 2))\n\n        self.subjob_check(jid=jid, sjid=j.create_subjob_id(jid, 3),\n                          maxruncount=\"454\")\n\n    def test_subjob_run_count_on_rerun(self):\n        \"\"\"\n        To check that a subjob which was previously held retains its\n        run_count on rerun\n        \"\"\"\n        sjid = self.test_large_run_count_subjob()\n        
self.server.rerunjob(sjid)\n        self.server.expect(JOB, {ATTR_state: \"R\", ATTR_runcount: \"42\"},\n                           attrop=PTL_AND, id=sjid)\n"
  },
  {
    "path": "test/tests/functional/pbs_test_svr_dflt.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestServerDefaultAttrib(TestFunctional):\n\n    dflt_attr = {'scheduling': 'True',\n                 'query_other_jobs': 'True',\n                 'scheduler_iteration': '600',\n                 'resources_default.ncpus': '1',\n                 'log_events': '511',\n                 'mail_from': 'adm',\n                 'pbs_license_linger_time': '31536000',\n                 'pbs_license_min': '0',\n                 'pbs_license_max': '2147483647',\n                 'eligible_time_enable': 'False',\n                 'max_concurrent_provision': '5',\n                 'resv_enable': 'True',\n                 'max_array_size': '10000',\n                 }\n\n    def test_server_unset_dflt_attr(self):\n        \"\"\"\n        Test that the server sets the listed attributes to their default\n        values when they are unset\n        \"\"\"\n        for attr in self.dflt_attr:\n            self.server.manager(MGR_CMD_UNSET, SERVER, attr)\n\n        self.server.expect(SERVER, self.dflt_attr, attrop=PTL_AND,\n                           max_attempts=20)\n\n    def test_server_unset_dflt_attr_and_restart(self):\n        \"\"\"\n        Test that the server sets the listed attributes to their default\n        values when they are unset, and that the values are retained\n        across restarts\n        \"\"\"\n        for attr in self.dflt_attr:\n           
 self.server.manager(MGR_CMD_UNSET, SERVER, attr)\n\n        self.server.expect(SERVER, self.dflt_attr, attrop=PTL_AND,\n                           max_attempts=20)\n        self.server.restart()\n        self.server.expect(SERVER, self.dflt_attr, attrop=PTL_AND,\n                           max_attempts=20)\n"
  },
  {
    "path": "test/tests/functional/pbs_test_tpp.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nfrom tests.functional import *\nimport socket\n\n\n@tags('comm')\nclass TestTPP(TestFunctional):\n    \"\"\"\n    Test suite consists of tests to check the functionality of pbs_comm daemon\n    \"\"\"\n    node_list = []\n    default_client = None\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        self.pbs_conf = self.du.parse_pbs_config(self.server.shortname)\n        self.pbs_conf_path = self.du.get_pbs_conf_file(\n            hostname=self.server.hostname)\n        msg = \"Unable to retrieve pbs.conf file path\"\n        self.assertNotEqual(self.pbs_conf_path, None, msg)\n\n        self.exec_path = os.path.join(self.pbs_conf['PBS_EXEC'], \"bin\")\n        if not self.default_client:\n            self.default_client = self.server.client\n\n        # Retrieve temporary directory\n        self.tmp_dir = self.du.get_tempdir(hostname=self.server.hostname)\n        msg = \"Unable to get temp_dir\"\n        self.assertNotEqual(self.tmp_dir, None, msg)\n\n    def pbs_restart(self, host_name):\n        \"\"\"\n        This function restarts PBS daemons\n        :param host_name: Name of the host on which PBS\n                          has to be restarted\n        :type host_name: String\n        \"\"\"\n        pi = PBSInitServices(hostname=host_name)\n        pi.restart()\n\n    def set_pbs_conf(self, host_name, conf_param):\n        \"\"\"\n   
     This function sets attributes in pbs.conf file\n        :param host_name: Name of the host on which pbs.conf\n                          has to be updated\n        :type host_name: String\n        :param conf_param: Parameters to be updated in pbs.conf\n        :type conf_param: Dictionary\n        \"\"\"\n        pbsconfpath = self.du.get_pbs_conf_file(hostname=host_name)\n        self.du.set_pbs_config(hostname=host_name, fin=pbsconfpath,\n                               confs=conf_param)\n        self.pbs_restart(host_name)\n\n    def unset_pbs_conf(self, host_name, conf_param):\n        \"\"\"\n        This function unsets parameters in pbs.conf file\n        :param host_name: Name of the host on which pbs.conf\n                          has to be updated\n        :type host_name: String\n        :param conf_param: Parameters to be removed from pbs.conf\n        :type conf_param: List\n        \"\"\"\n        pbsconfpath = self.du.get_pbs_conf_file(hostname=host_name)\n        self.du.unset_pbs_config(hostname=host_name,\n                                 fin=pbsconfpath, confs=conf_param)\n        self.pbs_restart(host_name)\n\n    def submit_resv(self, resv_set_attr=None, resv_exp_attr=None):\n        \"\"\"\n        Submits a reservation and checks the reservation attributes\n        :param resv_set_attr: Reservation attributes to set\n        :type resv_set_attr: Dictionary. Defaults to None\n        :param resv_exp_attr: Reservation attributes to verify\n        :type resv_exp_attr: Dictionary. 
Defaults to None\n        \"\"\"\n        r = Reservation(TEST_USER)\n        if resv_set_attr is None:\n            resv_set_attr = {ATTR_l + '.select': '2:ncpus=1',\n                             ATTR_l + '.place': 'scatter',\n                             'reserve_start': time.time() + 10,\n                             'reserve_end': time.time() + 120}\n        r.set_attributes(resv_set_attr)\n        rid = self.server.submit(r)\n        if not resv_exp_attr:\n            resv_exp_attr = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, resv_exp_attr, id=rid)\n        return rid\n\n    def submit_job(self, set_attr=None, exp_attr=None, job=False,\n                   job_script=False, interactive=False, rid=None,\n                   resv_job=False, sleep=1):\n        \"\"\"\n        Submits a job and checks the job attributes\n        :param set_attr: Job attributes to set\n        :type set_attr: Dictionary. Defaults to None\n        :param exp_attr: Job attributes to verify\n        :type exp_attr: Dictionary. Defaults to None\n        :param job: Whether to submit a multi-chunk job\n        :type job: Bool. Defaults to False\n        :param job_script: Whether to submit a job using a job script\n        :type job_script: Bool. Defaults to False\n        :param interactive: Whether to submit an interactive job\n        :type interactive: Bool. Defaults to False\n        :param rid: Reservation id\n        :type rid: String\n        :param resv_job: Whether to submit the job into a reservation.\n        :type resv_job: Bool. Defaults to False\n        :param sleep: Job's sleep time\n        :type sleep: Integer. 
Defaults to 1s\n        \"\"\"\n        j = Job(TEST_USER)\n        if set_attr is None:\n            set_attr = {ATTR_l + '.select': '2:ncpus=1',\n                        ATTR_l + '.place': 'scatter', ATTR_k: 'oe'}\n        if job:\n            j.set_attributes(set_attr)\n\n        if interactive:\n            set_attr[ATTR_inter] = ''\n            j.set_attributes(set_attr)\n            j.interactive_script = [('hostname', '.*'),\n                                    ('export PATH=$PATH:%s' %\n                                     self.exec_path, '.*'),\n                                    ('qstat', '.*')]\n        if resv_job:\n            if ATTR_inter in set_attr:\n                del set_attr[ATTR_inter]\n            resv_que = rid.split('.')[0]\n            set_attr[ATTR_q] = resv_que\n            j.set_attributes(set_attr)\n\n        if job_script:\n            pbsdsh_path = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                       \"bin\", \"pbsdsh\")\n            script = \"#!/bin/sh\\n%s sleep %s\" % (pbsdsh_path, sleep)\n            j.create_script(script, hostname=self.server.client)\n        else:\n            j.set_sleep_time(sleep)\n\n        jid = self.server.submit(j)\n        if exp_attr is not None:\n            self.server.expect(JOB, exp_attr, id=jid)\n        return jid\n\n    def common_steps(self, set_attr=None, exp_attr=None, job=False,\n                     interactive=False, resv=False,\n                     resv_set_attr=None, resv_exp_attr=None,\n                     resv_job=False, client=None):\n        \"\"\"\n        This function contains common steps for submitting\n        different kinds of jobs.\n        Submits a job and checks the job attributes\n        :param set_attr: Job attributes to set\n        :type set_attr: Dictionary. Defaults to None\n        :param exp_attr: Job attributes to verify\n        :type exp_attr: Dictionary. 
Defaults to None\n        :param job: Whether to submit a multi-chunk job\n        :type job: Bool. Defaults to False\n        :param interactive: Whether to submit an interactive job\n        :type interactive: Bool. Defaults to False\n        :param resv: Whether to submit a reservation.\n        :type resv: Bool. Defaults to False\n        :param resv_set_attr: Reservation attributes to set\n        :type resv_set_attr: Dictionary. Defaults to None\n        :param resv_exp_attr: Reservation attributes to verify\n        :type resv_exp_attr: Dictionary. Defaults to None\n        :param resv_job: Whether to submit the job into a reservation.\n        :type resv_job: Bool. Defaults to False\n        :param client: Name of the client\n        :type client: String. Defaults to None\n        \"\"\"\n        if client is None:\n            self.server.client = self.server.hostname\n        else:\n            self.server.client = client\n        if job:\n            jid = self.submit_job(set_attr, exp_attr, job=True,\n                                  job_script=True)\n            self.server.expect(JOB, 'queue', id=jid, op=UNSET, offset=1)\n            self.server.log_match(\"%s;Exit_status=0\" % jid)\n        # Submit Interactive Job\n        if interactive:\n            jid = self.submit_job(set_attr, exp_attr, interactive=True)\n            self.server.expect(JOB, 'queue', id=jid, op=UNSET)\n            self.server.log_match(\"%s;Exit_status=0\" % jid)\n        # Submit reservation\n        if resv:\n            rid = self.submit_resv(resv_set_attr, resv_exp_attr)\n            jid = self.submit_job(set_attr, exp_attr, resv_job=True,\n                                  rid=rid, job_script=True)\n            # Wait for reservation to start\n            a = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n            self.server.expect(RESV, a, rid)\n            self.server.expect(JOB, 'queue', id=jid, op=UNSET, offset=1)\n            
self.server.log_match(\"%s;Exit_status=0\" % jid)\n\n    @requirements(num_moms=2)\n    def test_comm_with_mom(self):\n        \"\"\"\n        This test verifies communication between server-mom and\n        between moms through pbs_comm\n        Configuration:\n        Node 1 : Server, Mom, Sched, Comm\n        Node 2 : Mom\n        \"\"\"\n        log_msgs = [\"TPP initialization done\",\n                    \"Connected to pbs_comm %s.*:17001\" % self.server.shortname]\n        for msg in log_msgs:\n            self.server.log_match(msg, regexp=True)\n            for mom in self.moms.values():\n                mom.log_match(msg, regexp=True)\n        server_ip = socket.gethostbyname(self.server.hostname)\n        msg = \"Registering address %s:15001 to pbs_comm\" % server_ip\n        self.server.log_match(msg)\n        for mom in self.moms.values():\n            ip = socket.gethostbyname(mom.shortname)\n            msg1 = \"Registering address %s:15003 to pbs_comm\" % ip\n            msg2 = \"Leaf registered address %s:15003\" % ip\n            mom.log_match(msg1)\n            self.comm.log_match(msg2)\n        self.common_steps(job=True, interactive=True, resv=True,\n                          resv_job=True)\n\n    @requirements(num_moms=2, num_clients=1)\n    def test_client_with_mom(self):\n        \"\"\"\n        This test verifies communication between server-mom,\n        server-client and between moms through pbs_comm\n        Configuration:\n        Node 1 : Server, Mom, Sched, Comm\n        Node 2 : Mom\n        Node 3 : Client\n        \"\"\"\n        if self.server.client == self.server.hostname:\n            msg = \"Test requires client as input which is on non server\"\n            msg += \" host\"\n            self.skipTest(msg)\n        self.momA = self.moms.values()[0]\n        self.momB = self.moms.values()[1]\n        self.hostA = self.momA.shortname\n        self.hostB = self.momB.shortname\n        self.hostC = self.server.client\n        
self.node_list = [self.hostA, self.hostB, self.hostC]\n        self.server.manager(MGR_CMD_SET, SERVER, {'flatuid': True})\n        self.common_steps(job=True, interactive=True)\n        self.common_steps(resv=True, resv_job=True, client=self.hostB)\n\n    @requirements(num_moms=2, num_clients=1, no_comm_on_server=True)\n    def test_comm_non_server_host(self):\n        \"\"\"\n        This test verifies communication between server-mom,\n        server-client and between moms through pbs_comm which\n        is running on non server host\n        Configuration:\n        Node 1 : Server, Mom, Sched\n        Node 2 : Client\n        Node 3 : Mom\n        Node 4 : Comm\n        \"\"\"\n        if self.server.client == self.server.hostname:\n            msg = \"Test requires client as input which is on non server\"\n            msg += \" host\"\n            self.skipTest(msg)\n        self.momA = self.moms.values()[0]\n        self.momB = self.moms.values()[1]\n        self.comm1 = self.comms.values()[0]\n        self.hostA = self.momA.shortname\n        self.hostB = self.server.client\n        self.hostC = self.momB.shortname\n        self.hostD = self.comm1.shortname\n        self.node_list = [self.hostA, self.hostB,\n                          self.hostC, self.hostD]\n        a = {'PBS_START_COMM': '0', 'PBS_START_MOM': '1',\n             'PBS_LEAF_ROUTERS': self.hostD}\n        b = {'PBS_LEAF_ROUTERS': self.hostD}\n        hosts = [self.hostA, self.hostB, self.hostC]\n        moms = [self.hostA, self.hostC]\n        if self.server.shortname not in hosts:\n            hosts.append(self.server.shortname)\n        for host in hosts:\n            if host == self.server.shortname and host in moms:\n                self.set_pbs_conf(host_name=host, conf_param=a)\n            elif host == self.server.shortname and \\\n                    host not in self.moms.values():\n                a['PBS_START_MOM'] = \"0\"\n                self.set_pbs_conf(host_name=host, 
conf_param=a)\n            else:\n                self.set_pbs_conf(host_name=host, conf_param=b)\n        self.common_steps(job=True, resv=True, resv_job=True)\n        self.common_steps(interactive=True, client=self.hostB)\n\n    @requirements(num_moms=2, no_mom_on_server=True)\n    def test_mom_non_server_host(self):\n        \"\"\"\n        This test verifies communication between server-mom,\n        between moms which are running on non server host\n        through pbs_comm.\n        Configuration:\n        Node 1 : Server, Sched, Comm\n        Node 2 : Mom\n        Node 3 : Mom\n        \"\"\"\n        self.momA = self.moms.values()[0]\n        self.momB = self.moms.values()[1]\n        self.hostA = self.momA.shortname\n        self.hostB = self.momB.shortname\n        self.node_list = [self.hostA, self.hostB]\n        self.common_steps(job=True, resv=True,\n                          resv_job=True, client=self.hostA)\n        self.common_steps(job=True, interactive=True,\n                          client=self.hostB)\n\n    def test_comm_with_vnode_insertion(self):\n        \"\"\"\n        Test for verifying vnode insertion when using TPP\n        \"\"\"\n        vn = self.mom.shortname\n        if not self.mom.is_cpuset_mom():\n            a = {'resources_available.ncpus': 1}\n            self.mom.create_vnodes(a, 2)\n            vnode_val = \"vnode=%s[0]:ncpus=1+vnode=%s[1]:ncpus=1\" % (vn, vn)\n        else:\n            vnode_val = \"vnode=%s:ncpus=1\" % self.server.status(NODE)[1]['id']\n            vnode_val += \"+vnode=%s:ncpus=1\" % self.server.status(NODE)[\n                2]['id']\n        set_attr = {ATTR_l + '.select': vnode_val,\n                    ATTR_k: 'oe'}\n        resv_set_attr = {ATTR_l + '.select': vnode_val,\n                         'reserve_start': time.time() + 30,\n                         'reserve_end': time.time() + 120}\n        self.common_steps(job=True, set_attr=set_attr,\n                          
resv_set_attr=resv_set_attr)\n        self.comm.stop('-KILL')\n        if self.mom.is_cpuset_mom():\n            ret = self.server.status(NODE)\n            vnode_list = [ret[1]['id'], ret[2]['id']]\n        else:\n            vnode_list = [vn + \"[0]\", vn + \"[1]\"]\n        a = {'state': (MATCH_RE, \"down\")}\n        for vnode in vnode_list:\n            self.server.expect(VNODE, a, id=vnode)\n\n    def common_setup(self, no_mom_on_comm=False, no_comm_on_server=False,\n                     req_moms=2, req_comms=2):\n        \"\"\"\n        This function sets the shortnames of moms and comms in the cluster\n        accordingly.\n        Mom objects : self.momA, self.momB, self.momC\n        Mom shortnames : self.hostA, self.hostB, self.hostC\n        comm objects : self.comm2, self.comm3\n        comm shortnames : self.hostD, self.hostE\n        :param no_mom_on_comm: Flag, True if no mom is present on comm\n        :type no_mom_on_comm: Bool. Defaults to False\n        :param no_comm_on_server: Flag, True if no comm is present on server\n        :type no_comm_on_server: Bool. Defaults to False\n        :param req_moms: No of required moms\n        :type req_moms: Integer. Defaults to 2\n        :param req_comms: No of required comms\n        :type req_comms: Integer. 
Defaults to 2\n        \"\"\"\n        mom_list = [x.shortname for x in self.moms.values()]\n        comm_list = [y.shortname for y in self.comms.values()]\n        num_moms = len(mom_list)\n        num_comms = len(comm_list)\n        if (req_moms != num_moms) and (req_comms != num_comms):\n            msg = \"Test requires exact %s moms and %s\" % (req_moms, req_comms)\n            msg += \" comms as input\"\n            self.skipTest(msg)\n        if not no_mom_on_comm and not no_comm_on_server:\n            if self.server.shortname not in mom_list:\n                self.skipTest(\"Mom and comm should be on server host\")\n        if num_moms == 2 and num_comms == 2:\n            self.hostA = self.server.shortname\n            self.momB = self.moms.values()[1]\n            self.hostB = mom_list[1]\n            self.comm2 = self.comms.values()[1]\n            self.hostC = comm_list[1]\n            self.node_list = [self.hostA, self.hostB, self.hostC]\n        elif num_moms == 3 and num_comms == 3:\n            self.hostA = self.server.shortname\n            self.hostB = mom_list[1]\n            self.hostC = comm_list[1]\n            self.hostD = mom_list[2]\n            self.hostE = comm_list[2]\n            self.node_list = [\n                self.hostA,\n                self.hostB,\n                self.hostC,\n                self.hostD,\n                self.hostE]\n        elif num_moms == 2 and num_comms == 3:\n            if self.server.shortname not in comm_list:\n                self.hostA = comm_list[0]\n            else:\n                self.hostA = self.server.shortname\n            self.hostB = mom_list[0]\n            self.hostD = comm_list[1]\n            self.hostC = mom_list[1]\n            self.hostE = comm_list[2]\n            self.node_list = [\n                self.hostA,\n                self.hostB,\n                self.hostC,\n                self.hostD,\n                self.hostE]\n        elif num_moms == 4 and num_comms == 5:\n         
   self.hostA = self.server.shortname\n            self.hostB = mom_list[0]\n            self.hostC = mom_list[1]\n            self.hostD = mom_list[2]\n            self.hostE = mom_list[3]\n            self.comm2 = self.comms.values()[1]\n            self.hostF = comm_list[1]\n            self.hostG = comm_list[2]\n            self.comm4 = self.comms.values()[3]\n            self.hostH = comm_list[3]\n            self.hostI = comm_list[4]\n            self.node_list = [\n                self.hostA,\n                self.hostB,\n                self.hostC,\n                self.hostD,\n                self.hostE, self.hostF, self.hostG, self.hostH,\n                self.hostI]\n\n    @requirements(num_moms=2, num_comms=2)\n    def test_multiple_comm_with_mom(self):\n        \"\"\"\n        This test verifies communication between server-mom and\n        between mom when multiple pbs_comm are present in cluster\n        Configuration:\n        Node 1 : Server, Sched, Mom, Comm (self.hostA)\n        Node 2 : Mom (self.hostB)\n        Node 3 : Comm (self.hostC)\n        \"\"\"\n        self.common_setup()\n        a = {'PBS_COMM_ROUTERS': self.hostA}\n        self.set_pbs_conf(host_name=self.hostC, conf_param=a)\n        b = {'PBS_LEAF_ROUTERS': self.hostC}\n        self.set_pbs_conf(host_name=self.hostB, conf_param=b)\n        self.common_steps(job=True, interactive=True, resv=True,\n                          resv_job=True)\n\n    def common_steps_for_comm_failover(self):\n        \"\"\"\n        This function has common steps for comm failover used in\n        diff tests\n        \"\"\"\n        self.common_steps(job=True, interactive=True)\n        rid = self.submit_resv()\n        jid = self.submit_job(rid=rid, resv_job=True, sleep=60)\n        resv_exp_attrib = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, resv_exp_attrib, rid, offset=10)\n        job_exp_attrib = {'job_state': 'R'}\n        self.server.expect(JOB, 
job_exp_attrib, id=jid)\n        self.comm2.stop('-KILL')\n        for mom in self.moms.values():\n            self.server.expect(NODE, {'state': 'free'}, id=mom.shortname)\n        self.server.expect(RESV, resv_exp_attrib, rid)\n        self.server.expect(JOB, job_exp_attrib, id=jid)\n        self.comm2.start()\n        self.comm.stop('-KILL')\n        for mom in self.moms.values():\n            self.server.expect(NODE, {'state': 'free'}, id=mom.shortname)\n        self.server.expect(RESV, resv_exp_attrib, rid)\n        self.server.expect(JOB, job_exp_attrib, id=jid)\n\n    @requirements(num_moms=2, num_comms=2)\n    def test_comm_failover(self):\n        \"\"\"\n        This test verifies communication between server-mom and\n        between mom when multiple pbs_comm are present in cluster\n        with pbs_comm failover\n        Configuration:\n        Node 1 : Server, Sched, Mom, Comm (self.hostA)\n        Node 2 : Mom (self.hostB)\n        Node 3 : Comm (self.hostC)\n        \"\"\"\n        self.common_setup()\n        a = {'PBS_COMM_ROUTERS': self.hostA}\n        self.set_pbs_conf(host_name=self.hostC, conf_param=a)\n        leaf_val = self.hostA + \",\" + self.hostC\n        b = {'PBS_LEAF_ROUTERS': leaf_val}\n        self.set_pbs_conf(host_name=self.hostA, conf_param=b)\n        leaf_val = self.hostC + \",\" + self.hostA\n        b = {'PBS_LEAF_ROUTERS': leaf_val}\n        self.set_pbs_conf(host_name=self.hostB, conf_param=b)\n        self.common_steps_for_comm_failover()\n\n    @requirements(num_moms=2, num_comms=2)\n    def test_comm_failover_with_invalid_values(self):\n        \"\"\"\n        This test verifies communication between server-mom and\n        between mom when multiple pbs_comm are present in cluster\n        with pbs_comm failover when values of PBS_LEAF_ROUTERS\n        in pbs.conf are invalid\n        Configuration:\n        Node 1 : Server, Sched, Mom, Comm (self.hostA)\n        Node 2 : Mom (self.hostB)\n        Node 3 : Comm 
(self.hostC)\n        \"\"\"\n        self.common_setup()\n        # set a valid hostname but invalid PBS_LEAF_ROUTERS value\n        param = {'PBS_LEAF_ROUTERS': self.hostB}\n        self.set_pbs_conf(host_name=self.hostB, conf_param=param)\n        self.server.expect(NODE, {'state': 'down'}, id=self.hostB)\n        # set an invalid PBS_LEAF_ROUTERS value for the secondary comm\n        invalid_val = self.hostA + \"XXXX\"\n        leaf_val = self.hostC + \",\" + invalid_val\n        b = {'PBS_LEAF_ROUTERS': leaf_val}\n        self.set_pbs_conf(host_name=self.hostB, conf_param=b)\n        self.comm2.stop('-KILL')\n        self.server.expect(NODE, {'state': 'down'}, id=self.hostB)\n        exp_msg = [\"Error 99 while connecting to %s:17001\" % invalid_val,\n                   \"Error -2 resolving %s\" % invalid_val\n                   ]\n        for msg in exp_msg:\n            self.momB.log_match(msg)\n        # set an invalid PBS_LEAF_ROUTERS value for the primary comm\n        leaf_val = invalid_val + \",\" + self.hostC\n        b = {'PBS_LEAF_ROUTERS': leaf_val}\n        self.set_pbs_conf(host_name=self.hostA, conf_param=b)\n        self.comm.stop('-KILL')\n        self.server.expect(NODE,\n                           {'state': 'state-unknown,down'},\n                           id=self.hostA)\n        for msg in exp_msg:\n            self.momB.log_match(msg)\n        # set an invalid port value for PBS_LEAF_ROUTERS\n        invalid_val = self.hostA + \":1700\"\n        leaf_val = self.hostC + \",\" + invalid_val\n        b = {'PBS_LEAF_ROUTERS': leaf_val}\n        self.set_pbs_conf(host_name=self.hostB, conf_param=b)\n        self.comm2.stop('-KILL')\n        self.server.expect(NODE,\n                           {'state': 'state-unknown,down'},\n                           id=self.hostB)\n\n        # set the same value for secondary comm as primary in PBS_LEAF_ROUTERS\n        leaf_val = self.hostC + \",\" + self.hostC\n        b = {'PBS_LEAF_ROUTERS': leaf_val}\n        
self.set_pbs_conf(host_name=self.hostB, conf_param=b)\n        self.comm2.stop('-KILL')\n        self.server.expect(NODE,\n                           {'state': 'state-unknown,down'},\n                           id=self.hostB)\n\n    @requirements(num_moms=2, num_comms=2)\n    def test_comm_failover_with_ipaddress(self):\n        \"\"\"\n        This test verifies communication between server-mom and\n        between mom when multiple pbs_comm are present in cluster\n        with pbs_comm failover when PBS_LEAF_ROUTERS has ipaddress as value\n        Configuration:\n        Node 1 : Server, Sched, Mom, Comm (self.hostA)\n        Node 2 : Mom (self.hostB)\n        Node 3 : Comm (self.hostC)\n        \"\"\"\n        self.common_setup()\n        hostA_ip = socket.gethostbyname(self.hostA)\n        hostC_ip = socket.gethostbyname(self.hostC)\n        a = {'PBS_COMM_ROUTERS': hostA_ip}\n        self.set_pbs_conf(host_name=self.hostC, conf_param=a)\n        leaf_val = hostA_ip + \",\" + hostC_ip\n        b = {'PBS_LEAF_ROUTERS': leaf_val}\n        self.set_pbs_conf(host_name=self.hostA, conf_param=b)\n        leaf_val = hostC_ip + \",\" + hostA_ip\n        b = {'PBS_LEAF_ROUTERS': leaf_val}\n        self.set_pbs_conf(host_name=self.hostB, conf_param=b)\n        self.common_steps_for_comm_failover()\n\n    @requirements(num_moms=2, num_comms=2)\n    def test_comm_failover_with_ipaddress_hostnames(self):\n        \"\"\"\n        This test verifies communication between server-mom and\n        between mom when multiple pbs_comm are present in cluster\n        with pbs_comm failover when PBS_LEAF_ROUTERS has ipaddress\n        and hostname as values\n        Configuration:\n        Node 1 : Server, Sched, Mom, Comm (self.hostA)\n        Node 2 : Mom (self.hostB)\n        Node 3 : Comm (self.hostC)\n        \"\"\"\n        self.common_setup()\n        hostA_ip = socket.gethostbyname(self.hostA)\n        hostC_ip = socket.gethostbyname(self.hostC)\n        a = 
{'PBS_COMM_ROUTERS': self.hostA}\n        self.set_pbs_conf(host_name=self.hostC, conf_param=a)\n        leaf_val = self.hostA + \",\" + hostC_ip\n        b = {'PBS_LEAF_ROUTERS': leaf_val}\n        self.set_pbs_conf(host_name=self.hostA, conf_param=b)\n        leaf_val = self.hostC + \",\" + hostA_ip\n        b = {'PBS_LEAF_ROUTERS': leaf_val}\n        self.set_pbs_conf(host_name=self.hostB, conf_param=b)\n        self.common_steps_for_comm_failover()\n\n    @requirements(num_moms=2, num_comms=2)\n    def test_comm_failover_with_ipaddress_hostnames_port(self):\n        \"\"\"\n        This test verifies communication between server-mom and\n        between mom when multiple pbs_comm are present in cluster\n        with pbs_comm failover when PBS_LEAF_ROUTERS has ipaddress,\n        port number and hostname as its values\n        Configuration:\n        Node 1 : Server, Sched, Mom, Comm (self.hostA)\n        Node 2 : Mom (self.hostB)\n        Node 3 : Comm (self.hostC)\n        \"\"\"\n        self.common_setup()\n        hostA_ip = socket.gethostbyname(self.hostA)\n        hostC_ip = socket.gethostbyname(self.hostC)\n        a = {'PBS_COMM_ROUTERS': self.hostA}\n        self.set_pbs_conf(host_name=self.hostC, conf_param=a)\n        leaf_val = self.hostA + \":17001\" + \",\" + self.hostC\n        b = {'PBS_LEAF_ROUTERS': leaf_val}\n        self.set_pbs_conf(host_name=self.hostA, conf_param=b)\n        leaf_val = hostC_ip + \",\" + self.hostA + \":17001\"\n        b = {'PBS_LEAF_ROUTERS': leaf_val}\n        self.set_pbs_conf(host_name=self.hostB, conf_param=b)\n        self.common_steps_for_comm_failover()\n\n    def copy_pbs_conf_to_non_default_path(self):\n        \"\"\"\n        This function copies the pbs.conf from default location\n        to non default location\n        \"\"\"\n        self.new_conf_path = os.path.join(self.tmp_dir, \"pbs.conf\")\n\n        # Copy pbs.conf file to temporary location\n        rc = self.du.run_copy(src=self.pbs_conf_path, 
dest=self.new_conf_path)\n        msg = \"Cannot copy %s \" % self.pbs_conf_path\n        msg += \"to %s, error: %s\" % (self.new_conf_path, rc['err'])\n        self.assertEqual(rc['rc'], 0, msg)\n\n        # Set the PBS_CONF_FILE variable to the temp location\n        os.environ['PBS_CONF_FILE'] = self.new_conf_path\n        self.logger.info(\"Successfully exported PBS_CONF_FILE variable\")\n\n        self.server.pi.conf_file = self.new_conf_path\n        self.pbs_restart(self.server.hostname)\n        self.logger.info(\"PBS services started successfully\")\n\n    @requirements(num_moms=2, num_comms=2)\n    def test_comm_failover_with_nondefault_pbs_conf(self):\n        \"\"\"\n        This test verifies communication between server-mom and\n        between mom when multiple pbs_comm are present in cluster\n        with pbs_comm failover when PBS_LEAF_ROUTERS has ipaddress,\n        port number and hostname as values and pbs.conf is in\n        non default location\n        Configuration:\n        Node 1 : Server, Sched, Mom, Comm (self.hostA)\n        Node 2 : Mom (self.hostB)\n        Node 3 : Comm (self.hostC)\n        \"\"\"\n        self.common_setup()\n        hostA_ip = socket.gethostbyname(self.hostA)\n        hostC_ip = socket.gethostbyname(self.hostC)\n        a = {'PBS_COMM_ROUTERS': self.hostA}\n        self.set_pbs_conf(host_name=self.hostC, conf_param=a)\n        leaf_val = self.hostA + \":17001\" + \",\" + self.hostC\n        b = {'PBS_LEAF_ROUTERS': leaf_val}\n        self.set_pbs_conf(host_name=self.hostA, conf_param=b)\n        leaf_val = hostC_ip + \",\" + self.hostA + \":17001\"\n        b = {'PBS_LEAF_ROUTERS': leaf_val}\n        self.set_pbs_conf(host_name=self.hostB, conf_param=b)\n        self.copy_pbs_conf_to_non_default_path()\n        self.common_steps_for_comm_failover()\n\n    @requirements(num_moms=3, num_comms=3)\n    def test_comm_routers_with_hostname(self):\n        \"\"\"\n        This test verifies communication between server-mom 
and\n        between mom when multiple pbs_comm are present in cluster\n        with pbs_comm failover when multiple hostname values for\n        PBS_COMM_ROUTERS are set.\n        Configuration:\n        Node 1 : Server, Sched, Mom, Comm (self.hostA)\n        Node 2 : Mom (self.hostB)\n        Node 3 : Comm (self.hostC)\n        Node 4 : Mom (self.hostD)\n        Node 5 : Comm (self.hostE)\n        \"\"\"\n        self.common_setup(req_moms=3, req_comms=3)\n        a = {'PBS_COMM_ROUTERS': self.hostA}\n        self.set_pbs_conf(host_name=self.hostC, conf_param=a)\n        comm_val = self.hostA + \",\" + self.hostC\n        a = {'PBS_COMM_ROUTERS': comm_val}\n        self.set_pbs_conf(host_name=self.hostE, conf_param=a)\n        leaf_val = self.hostA + \",\" + self.hostC + \",\" + self.hostE\n        b = {'PBS_LEAF_ROUTERS': leaf_val}\n        self.set_pbs_conf(host_name=self.hostA, conf_param=b)\n        leaf_val = self.hostC + \",\" + self.hostA + \",\" + self.hostE\n        b = {'PBS_LEAF_ROUTERS': leaf_val}\n        self.set_pbs_conf(host_name=self.hostB, conf_param=b)\n        leaf_val = self.hostE + \",\" + self.hostC + \",\" + self.hostA\n        b = {'PBS_LEAF_ROUTERS': leaf_val}\n        self.set_pbs_conf(host_name=self.hostD, conf_param=b)\n        set_attr = {ATTR_l + '.select': '3:ncpus=1',\n                    ATTR_l + '.place': 'scatter', ATTR_k: 'oe'}\n        resv_set_attr = {ATTR_l + '.select': '3:ncpus=1',\n                         ATTR_l + '.place': 'scatter',\n                         'reserve_start': time.time() + 30,\n                         'reserve_end': time.time() + 120}\n        self.common_steps(set_attr=set_attr, resv_set_attr=resv_set_attr,\n                          job=True, interactive=True, resv=True,\n                          resv_job=True)\n\n    @requirements(num_moms=3, num_comms=3)\n    def test_comm_routers_with_ipaddress(self):\n        \"\"\"\n        This test verifies communication between server-mom and\n        
between mom when multiple pbs_comm are present in cluster\n        with pbs_comm failover when multiple ipaddress values for\n        PBS_COMM_ROUTERS are set.\n        Configuration:\n        Node 1 : Server, Sched, Mom, Comm (self.hostA)\n        Node 2 : Mom (self.hostB)\n        Node 3 : Comm (self.hostC)\n        Node 4 : Mom (self.hostD)\n        Node 5 : Comm (self.hostE)\n        \"\"\"\n        self.common_setup(req_moms=3, req_comms=3)\n        hostA_ip = socket.gethostbyname(self.hostA)\n        hostC_ip = socket.gethostbyname(self.hostC)\n        a = {'PBS_COMM_ROUTERS': hostA_ip}\n        self.set_pbs_conf(host_name=self.hostC, conf_param=a)\n        comm_val = hostA_ip + \",\" + hostC_ip\n        a = {'PBS_COMM_ROUTERS': comm_val}\n        self.set_pbs_conf(host_name=self.hostE, conf_param=a)\n        leaf_val = self.hostA + \",\" + self.hostC + \",\" + self.hostE\n        b = {'PBS_LEAF_ROUTERS': leaf_val}\n        self.set_pbs_conf(host_name=self.hostA, conf_param=b)\n        leaf_val = self.hostC + \",\" + self.hostA + \",\" + self.hostE\n        b = {'PBS_LEAF_ROUTERS': leaf_val}\n        self.set_pbs_conf(host_name=self.hostB, conf_param=b)\n        leaf_val = self.hostE + \",\" + self.hostC + \",\" + self.hostA\n        b = {'PBS_LEAF_ROUTERS': leaf_val}\n        self.set_pbs_conf(host_name=self.hostD, conf_param=b)\n        set_attr = {ATTR_l + '.select': '3:ncpus=1',\n                    ATTR_l + '.place': 'scatter', ATTR_k: 'oe'}\n        resv_set_attr = {ATTR_l + '.select': '3:ncpus=1',\n                         ATTR_l + '.place': 'scatter',\n                         'reserve_start': int(time.time()) + 30,\n                         'reserve_end': int(time.time()) + 120}\n        self.common_steps(set_attr=set_attr, resv_set_attr=resv_set_attr,\n                          job=True, interactive=True, resv=True,\n                          resv_job=True)\n\n    @requirements(num_moms=3, num_comms=3)\n    def 
test_comm_routers_with_ipaddress_hostnames_port(self):\n        \"\"\"\n        This test verifies communication between server-mom and\n        between mom when multiple pbs_comm are present in cluster\n        with pbs_comm failover when PBS_COMM_ROUTERS has ipaddress,\n        port number and hostname as its values\n        Configuration:\n        Node 1 : Server, Sched, Mom, Comm (self.hostA)\n        Node 2 : Mom (self.hostB)\n        Node 3 : Comm (self.hostC)\n        Node 4 : Mom (self.hostD)\n        Node 5 : Comm (self.hostE)\n        \"\"\"\n        self.common_setup(req_moms=3, req_comms=3)\n        hostA_ip = socket.gethostbyname(self.hostA)\n        comm_val = self.hostA + \":17001\"\n        a = {'PBS_COMM_ROUTERS': comm_val}\n        self.set_pbs_conf(host_name=self.hostC, conf_param=a)\n        comm_val = hostA_ip + \":17001\" + \",\" + self.hostC\n        a = {'PBS_COMM_ROUTERS': comm_val}\n        self.set_pbs_conf(host_name=self.hostE, conf_param=a)\n        leaf_val = self.hostA + \",\" + self.hostC + \",\" + self.hostE\n        b = {'PBS_LEAF_ROUTERS': leaf_val}\n        self.set_pbs_conf(host_name=self.hostA, conf_param=b)\n        leaf_val = self.hostC + \",\" + self.hostA + \",\" + self.hostE\n        b = {'PBS_LEAF_ROUTERS': leaf_val}\n        self.set_pbs_conf(host_name=self.hostB, conf_param=b)\n        leaf_val = self.hostE + \",\" + self.hostC + \",\" + self.hostA\n        b = {'PBS_LEAF_ROUTERS': leaf_val}\n        self.set_pbs_conf(host_name=self.hostD, conf_param=b)\n        set_attr = {ATTR_l + '.select': '3:ncpus=1',\n                    ATTR_l + '.place': 'scatter', ATTR_k: 'oe'}\n        resv_set_attr = {ATTR_l + '.select': '3:ncpus=1',\n                         ATTR_l + '.place': 'scatter',\n                         'reserve_start': int(time.time()) + 30,\n                         'reserve_end': int(time.time()) + 120}\n        self.common_steps(set_attr=set_attr, resv_set_attr=resv_set_attr,\n                          
job=True, interactive=True, resv=True,\n                          resv_job=True)\n\n    @requirements(num_moms=3, num_comms=3)\n    def test_comm_routers_with_nondefault_pbs_conf(self):\n        \"\"\"\n        This test verifies communication between server-mom and\n        between mom when multiple pbs_comm are present in cluster\n        when PBS_COMM_ROUTERS has ipaddress, port number and hostname\n        as values and pbs.conf is in non default location\n        Configuration:\n        Node 1 : Server, Sched, Mom, Comm (self.hostA)\n        Node 2 : Mom (self.hostB)\n        Node 3 : Comm (self.hostC)\n        Node 4 : Mom (self.hostD)\n        Node 5 : Comm (self.hostE)\n        \"\"\"\n        self.common_setup(req_moms=3, req_comms=3)\n        hostA_ip = socket.gethostbyname(self.hostA)\n        comm_val = self.hostA + \":17001\"\n        a = {'PBS_COMM_ROUTERS': comm_val}\n        self.set_pbs_conf(host_name=self.hostC, conf_param=a)\n        comm_val = hostA_ip + \":17001\" + \",\" + self.hostC\n        a = {'PBS_COMM_ROUTERS': comm_val}\n        self.set_pbs_conf(host_name=self.hostE, conf_param=a)\n        leaf_val = self.hostA + \",\" + self.hostC + \",\" + self.hostE\n        b = {'PBS_LEAF_ROUTERS': leaf_val}\n        self.set_pbs_conf(host_name=self.hostA, conf_param=b)\n        leaf_val = self.hostC + \",\" + self.hostA + \",\" + self.hostE\n        b = {'PBS_LEAF_ROUTERS': leaf_val}\n        self.set_pbs_conf(host_name=self.hostB, conf_param=b)\n        leaf_val = self.hostE + \",\" + self.hostC + \",\" + self.hostA\n        b = {'PBS_LEAF_ROUTERS': leaf_val}\n        self.set_pbs_conf(host_name=self.hostD, conf_param=b)\n        self.copy_pbs_conf_to_non_default_path()\n        set_attr = {ATTR_l + '.select': '3:ncpus=1',\n                    ATTR_l + '.place': 'scatter', ATTR_k: 'oe'}\n        resv_set_attr = {ATTR_l + '.select': '3:ncpus=1',\n                         ATTR_l + '.place': 'scatter',\n                         'reserve_start': 
int(time.time()) + 30,\n                         'reserve_end': int(time.time()) + 120}\n        self.common_steps(set_attr=set_attr, resv_set_attr=resv_set_attr,\n                          job=True, interactive=True, resv=True,\n                          resv_job=True)\n\n    def calculate_no_of_wait_threads(self, comm_pid):\n        \"\"\"\n        This function calculates the number of wait threads\n        :param comm_pid: Comm's Pid\n        :type comm_pid: int\n        \"\"\"\n        ps_cmd = \"ps -eT | grep pbs_comm | wc -l\"\n        rc = self.server.du.run_cmd(self.server.hostname,\n                                    cmd=ps_cmd, as_script=True)\n        num_wait_threads = int(rc['out'][0]) - 1\n        return num_wait_threads\n\n    def test_comm_threads(self):\n        \"\"\"\n        Test allowable values for PBS_COMM_THREADS\n        \"\"\"\n        threads = [1, 2, 100, 101, \"T\"]\n        for wait_thread in threads:\n            a = {'PBS_COMM_THREADS': wait_thread}\n            self.set_pbs_conf(host_name=self.server.shortname, conf_param=a)\n            comm_pid = self.comm.get_pid()\n            num_wait_thread = self.calculate_no_of_wait_threads(comm_pid)\n            if wait_thread == 1 or wait_thread == 101:\n                num_threads = -1\n            elif wait_thread == \"T\":\n                num_threads = 4\n            else:\n                num_threads = wait_thread\n            _msg = \"Number of wait threads is not equal to %s\" % num_threads\n            self.assertEqual(num_wait_thread, num_threads, _msg)\n            exp_msg = [\"pbs_comms should have at least 2 threads\",\n                       \"tpp init failed\"]\n            if wait_thread == 1:\n                for msg in exp_msg:\n                    self.comm.log_match(msg)\n            elif wait_thread == 101:\n                exp_msg[0] = \"pbs_comms should have <= 100 threads\"\n                for msg in exp_msg:\n                    self.comm.log_match(msg)\n\n    
def test_comm_log_events(self):\n        \"\"\"\n        Test for verifying the allowable values for PBS_COMM_LOG_EVENTS\n        \"\"\"\n        a = [0, \"T\", 511]\n        for log_event in a:\n            hook_name = \"begin_\" + str(log_event)\n            attrib = {'PBS_COMM_LOG_EVENTS': log_event}\n            if log_event == 0 or log_event == \"T\":\n                existence = False\n            else:\n                existence = True\n            self.set_pbs_conf(host_name=self.server.shortname,\n                              conf_param=attrib)\n            exp_msg = [\"MCAST packet from .*:15001\",\n                       \"mcast done\"]\n            attrs = {'event': 'execjob_begin', 'enabled': 'True'}\n            start_time = time.time()\n            self.server.create_hook(hook_name, attrs)\n            for msg in exp_msg:\n                self.comm.log_match(msg, existence=existence,\n                                    starttime=start_time, regexp=True)\n            start_time = time.time()\n            self.server.import_hook(hook_name, body=\"import pbs\")\n            for msg in exp_msg:\n                self.comm.log_match(msg, existence=existence,\n                                    starttime=start_time, regexp=True)\n            start_time = time.time()\n            self.server.manager(MGR_CMD_DELETE, HOOK, id=hook_name)\n            for msg in exp_msg:\n                self.comm.log_match(msg, existence=existence,\n                                    starttime=start_time, regexp=True)\n\n    def common_steps_for_mom_pool_tests(self):\n        \"\"\"\n        This function submits different jobs as required by tests\n        \"test_isolated_mom_pools\" and\n        \"test_isolated_mom_pools_when_comm_on_non_serverhost\"\n        \"\"\"\n        set_attr = {ATTR_l + '.select': '1:ncpus=1', ATTR_k: 'oe',\n                    ATTR_l + '.place': 'excl'}\n        jid1 = self.submit_job(job=True, set_attr=set_attr)\n        jid2 = 
self.submit_job(job=True, set_attr=set_attr)\n        jobs = [jid1, jid2]\n        for job_id in jobs:\n            self.server.expect(JOB, 'queue', op=UNSET, id=job_id, offset=1)\n            self.server.log_match(\"%s;Exit_status=0\" % job_id)\n        set_attr[ATTR_inter] = ''\n        jid1 = self.submit_job(interactive=True, set_attr=set_attr)\n        jid2 = self.submit_job(interactive=True, set_attr=set_attr)\n        jobs = [jid1, jid2]\n        for job_id in jobs:\n            self.server.expect(JOB, 'queue', op=UNSET, id=job_id)\n            self.server.log_match(\"%s;Exit_status=0\" % job_id)\n        del set_attr[ATTR_inter]\n        resv_set_attr = {ATTR_l + '.select': '1:ncpus=1',\n                         ATTR_l + '.place': 'excl',\n                         'reserve_start': time.time() + 10,\n                         'reserve_end': time.time() + 120}\n        rid1 = self.submit_resv(resv_set_attr)\n        resv_job1 = self.submit_job(set_attr=set_attr, resv_job=True, rid=rid1)\n        resv_set_attr['reserve_start'] = time.time() + 10\n        resv_set_attr['reserve_end'] = time.time() + 120\n        rid2 = self.submit_resv(resv_set_attr)\n        resv_job2 = self.submit_job(set_attr=set_attr, resv_job=True, rid=rid2)\n        resv_jobs = [resv_job1, resv_job2]\n        for job_id in resv_jobs:\n            self.server.expect(JOB, 'queue', op=UNSET, id=job_id, offset=1)\n            self.server.log_match(\"%s;Exit_status=0\" % job_id)\n\n    @requirements(num_moms=2, no_mom_on_server=True, num_comms=3)\n    def test_isolated_mom_pools(self):\n        \"\"\"\n        Test isolated mom pools\n        Configuration:\n        Node 1 : Server, Sched, Comm (self.hostA)\n        Node 2 : Mom (self.hostB)\n        Node 3 : Comm (self.hostD)\n        Node 4 : Mom (self.hostC)\n        Node 5 : Comm (self.hostE)\n        \"\"\"\n        self.common_setup(no_mom_on_comm=True, req_comms=3)\n        a = {'PBS_COMM_ROUTERS': self.hostA}\n        hosts = 
[self.hostD, self.hostE]\n        for host in hosts:\n            self.set_pbs_conf(host_name=host, conf_param=a)\n        b = {'PBS_LEAF_ROUTERS': self.hostD}\n        self.set_pbs_conf(host_name=self.hostB, conf_param=b)\n        b = {'PBS_LEAF_ROUTERS': self.hostE}\n        self.set_pbs_conf(host_name=self.hostC, conf_param=b)\n        self.common_steps_for_mom_pool_tests()\n\n    @requirements(num_moms=2, no_mom_on_server=True,\n                  num_comms=3, no_comm_on_server=True)\n    def test_isolated_mom_pools_when_comm_on_non_serverhost(self):\n        \"\"\"\n        Test isolated mom pools when comm is present on non server host\n        Configuration:\n        Node 1 : Server, Sched\n        Node 2 : Mom (self.hostB)\n        Node 3 : Comm (self.hostD)\n        Node 4 : Mom (self.hostC)\n        Node 5 : Comm (self.hostE)\n        Node 6 : Comm (self.hostA)\n        \"\"\"\n        self.common_setup(no_mom_on_comm=True, no_comm_on_server=True,\n                          req_comms=3)\n        a = {'PBS_COMM_ROUTERS': self.hostA}\n        hosts = [self.hostD, self.hostE]\n        for host in hosts:\n            self.set_pbs_conf(host_name=host, conf_param=a)\n        b = {'PBS_LEAF_ROUTERS': self.hostD}\n        self.set_pbs_conf(host_name=self.hostB, conf_param=b)\n        b = {'PBS_LEAF_ROUTERS': self.hostE}\n        self.set_pbs_conf(host_name=self.hostC, conf_param=b)\n        c = {'PBS_LEAF_ROUTERS': self.hostA}\n        self.set_pbs_conf(host_name=self.server.shortname, conf_param=c)\n        self.common_steps_for_mom_pool_tests()\n\n    @requirements(num_moms=4, no_mom_on_server=True, num_comms=5)\n    def test_comm_failover_with_isolated_mom_pools(self):\n        \"\"\"\n        Test comm failover with isolated mom pools\n        Configuration:\n        Node 1 : Server, Sched, Comm (self.hostA)\n        Node 2 : Mom (self.hostB)\n        Node 3 : Comm (self.hostF)\n        Node 4 : Mom (self.hostC)\n        Node 5 : Comm (self.hostG)\n        
Node 6 : Mom (self.hostD)\n        Node 7 : Comm (self.hostH)\n        Node 8 : Mom (self.hostE)\n        Node 9 : Comm (self.hostI)\n        \"\"\"\n        self.common_setup(no_mom_on_comm=True, req_moms=4, req_comms=5)\n        node_attr = {'resources_available.ncpus': '1'}\n        for mom in self.moms.values():\n            self.server.manager(MGR_CMD_SET, NODE, node_attr, id=mom.name)\n        a = {'PBS_COMM_ROUTERS': self.hostA}\n        comm_hosts = [self.hostF, self.hostG, self.hostH, self.hostI]\n        for host in comm_hosts:\n            self.set_pbs_conf(host_name=host, conf_param=a)\n        mom_hosts = [self.hostB, self.hostC, self.hostD, self.hostE]\n        for host in mom_hosts:\n            if host == self.hostB:\n                leaf_val = self.hostF + \",\" + self.hostG\n            elif host == self.hostC:\n                leaf_val = self.hostG + \",\" + self.hostF\n            elif host == self.hostD:\n                leaf_val = self.hostH + \",\" + self.hostI\n            elif host == self.hostE:\n                leaf_val = self.hostI + \",\" + self.hostH\n            b = {'PBS_LEAF_ROUTERS': leaf_val}\n            self.set_pbs_conf(host_name=host, conf_param=b)\n        vnode_val = \"vnode=\" + self.hostB + \":ncpus=1+vnode=\"\n        vnode_val += self.hostC + \":ncpus=1\"\n        set_attr = {ATTR_l + '.select': vnode_val,\n                    ATTR_l + '.place': 'scatter', ATTR_k: 'oe'}\n        jid = self.submit_job(set_attr=set_attr, job=True,\n                              job_script=True, sleep=30)\n        exp_attr = {'job_state': 'R'}\n        self.server.expect(JOB, exp_attr, id=jid)\n        self.comm2.stop('-KILL')\n        hosts = [self.hostB, self.hostC]\n        for mom in hosts:\n            self.server.expect(NODE, {'state': 'job-busy'}, id=mom)\n        self.server.expect(JOB, exp_attr, id=jid)\n        self.comm2.start()\n        self.server.expect(JOB, 'queue', id=jid, op=UNSET, offset=30)\n\n        vnode_val = 
\"vnode=\" + self.hostD + \":ncpus=1+vnode=\"\n        vnode_val += self.hostE + \":ncpus=1\"\n        resv_set_attr = {ATTR_l + '.select': vnode_val,\n                         ATTR_l + '.place': 'scatter',\n                         'reserve_start': time.time() + 10,\n                         'reserve_end': time.time() + 120}\n        rid = self.submit_resv(resv_set_attr)\n        jid = self.submit_job(rid=rid, resv_job=True, sleep=30)\n        resv_exp_attrib = {'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, resv_exp_attrib, rid, offset=10)\n        self.server.expect(JOB, exp_attr, id=jid)\n        self.comm4.stop('-KILL')\n        hosts = [self.hostD, self.hostE]\n        for mom in hosts:\n            self.server.expect(NODE, {'state': 'job-busy'}, id=mom)\n        self.server.expect(RESV, resv_exp_attrib, rid)\n        self.server.expect(JOB, exp_attr, id=jid)\n        self.comm4.start()\n        self.server.expect(JOB, 'queue', id=jid, op=UNSET, offset=30)\n\n    def tearDown(self):\n        os.environ['PBS_CONF_FILE'] = self.pbs_conf_path\n        self.logger.info(\"Successfully exported PBS_CONF_FILE variable\")\n        conf_param = ['PBS_LEAF_ROUTERS', 'PBS_COMM_ROUTERS',\n                      'PBS_COMM_THREADS', 'PBS_COMM_LOG_EVENTS']\n        for host in self.node_list:\n            self.unset_pbs_conf(host, conf_param)\n        self.node_list.clear()\n        self.server.client = self.default_client\n        TestFunctional.tearDown(self)\n"
  },
  {
    "path": "test/tests/functional/pbs_trillion_jobid.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestTrillionJobid(TestFunctional):\n    \"\"\"\n    This test suite tests the Trillion Job ID and sequence jobid\n    \"\"\"\n\n    update_svr_db_script = \"\"\"#!/bin/bash\n. %s\n. 
${PBS_EXEC}/libexec/pbs_db_env\n\nDATA_PORT=${PBS_DATA_SERVICE_PORT}\nif [ -z ${DATA_PORT} ]; then\n    DATA_PORT=15007\nfi\n\nsudo ls ${PBS_HOME}/server_priv/db_user &>/dev/null\nif [ $? -eq 0 ]; then\n    DATA_USER=`sudo cat ${PBS_HOME}/server_priv/db_user`\n    if [ $? -ne 0 ]; then\n        exit 1\n    fi\nfi\n\nsudo ${PBS_EXEC}/sbin/pbs_ds_password test\nif [ $? -eq 0 ]; then\n    sudo ${PBS_EXEC}/sbin/pbs_dataservice stop\n    if [ $? -ne 0 ]; then\n        exit 1\n    fi\nfi\n\nsudo ${PBS_EXEC}/sbin/pbs_dataservice status\nif [ $? -eq 0 ]; then\n    sudo ${PBS_EXEC}/sbin/pbs_dataservice stop\n    if [ $? -ne 0 ]; then\n        exit 1\n    fi\nfi\n\nsudo ${PBS_EXEC}/sbin/pbs_dataservice start\nif [ $? -ne 0 ]; then\n    exit 1\nfi\n\nargs=\"-U ${DATA_USER} -p ${DATA_PORT} -d pbs_datastore\"\nPGPASSWORD=test ${PGSQL_BIN}/psql ${args} <<-EOF\n    UPDATE pbs.server SET sv_jobidnumber = %d;\nEOF\n\nret=$?\nif [ $ret -eq 0 ]; then\n    echo \"Server sv_jobidnumber attribute has been updated successfully\"\nfi\n\nsudo ${PBS_EXEC}/sbin/pbs_dataservice stop\nif [ $? 
-ne 0 ]; then\n    exit 1\nfi\n\nexit 0\n\n\"\"\"\n\n    def set_svr_sv_jobidnumber(self, num=0):\n        \"\"\"\n        This function sets the next jobid in the server database\n        \"\"\"\n        # Stop the PBS server\n        self.server.stop()\n        stop_msg = 'Failed to stop PBS'\n        self.assertFalse(self.server.isUp(), stop_msg)\n        # Create a shell script file and update the database\n        conf_path = self.du.get_pbs_conf_file()\n        fn = self.du.create_temp_file(\n            body=self.update_svr_db_script %\n            (conf_path, num))\n        self.du.chmod(path=fn, mode=0o755)\n        fail_msg = 'Failed to set sequence id in database'\n        ret = self.du.run_cmd(cmd=fn)\n        self.assertEqual(ret['rc'], 0, fail_msg)\n        # Start the PBS server\n        start_msg = 'Failed to restart PBS'\n        self.server.start()\n        self.assertTrue(self.server.isUp(), start_msg)\n\n    def stop_and_restart_svr(self, restart_type):\n        \"\"\"\n        Abruptly or gracefully stop and restart the server\n        \"\"\"\n        try:\n            if restart_type == 'kill':\n                self.server.stop('-KILL')\n            else:\n                self.server.stop()\n        except PbsServiceError as e:\n            # The server failed to stop\n            raise self.failureException(\"Server failed to stop:\" + e.msg)\n        try:\n            self.server.start()\n        except PbsServiceError as e:\n            # The server failed to start\n            raise self.failureException(\"Server failed to start:\" + e.msg)\n        restart_msg = 'Failed to restart PBS'\n        self.assertTrue(self.server.isUp(), restart_msg)\n\n    def submit_job(self, sleep=100, lower=0,\n                   upper=0, job_id=None, job_msg=None, verify=False):\n        \"\"\"\n        Helper method to submit a normal/array job,\n        check the R state and the expected jobid on success,\n        else log the error 
message\n\n        :param sleep   : Sleep time in seconds for the job\n        :type  sleep   : int\n\n        :param lower   : Lower limit for the array job\n        :type  lower   : int\n\n        :param upper   : Upper limit for the array job\n        :type  upper   : int\n\n        :param job_id  : Expected jobid upon submission\n        :type  job_id  : string\n\n        :param job_msg : Expected message upon submission failure\n        :type  job_msg : string\n\n        :param verify : Checks Job status R\n        :type  verify : boolean(True/False)\n\n        \"\"\"\n        arr_flag = False\n        j = Job(TEST_USER)\n        if (lower >= 0) and (upper > lower):\n            j.set_attributes({ATTR_J: '%d-%d' % (lower, upper)})\n            arr_flag = True\n            total_jobs = upper - lower + 1\n        j.set_sleep_time(sleep)\n        try:\n            jid = self.server.submit(j)\n            if job_id is not None:\n                self.assertEqual(jid.split('.')[0], job_id)\n            if arr_flag:\n                if verify:\n                    self.server.expect(JOB, {'job_state': 'B'}, id=jid)\n                    self.server.expect(\n                        JOB,\n                        {'job_state=R': total_jobs},\n                        count=True, id=jid, extend='t')\n            else:\n                if verify:\n                    self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        except PbsSubmitError as e:\n            if job_msg is not None:\n                # if the JobId already exists\n                self.assertEqual(e.msg[0], job_msg)\n            else:\n                # Unable to submit the job\n                self.logger.info('Error in submitting job: %s' % e.msg)\n\n    def submit_resv(self, resv_dur=2, resv_id=None, resv_msg=None):\n        \"\"\"\n        Helper method to submit a reservation and check the\n        reservation id on success, else log the error message.\n\n        :param resv_dur : Reservation duration in 
seconds\n        :type  resv_dur : int\n\n        :param resv_id  : Expected resvid upon submission\n        :type  resv_id  : string\n\n        :param resv_msg : Expected message upon reservation failure\n        :type  resv_msg : string\n\n        \"\"\"\n        resv_start = int(time.time()) + 2\n        a = {'reserve_start': int(resv_start),\n             'reserve_duration': int(resv_dur)\n             }\n        r = Reservation(TEST_USER, attrs=a)\n        try:\n            rid = self.server.submit(r)\n            if resv_id is not None:\n                self.assertEqual(rid.split('.')[0], resv_id)\n        except PbsSubmitError as e:\n            if resv_msg is not None:\n                # if the ResvId already exists\n                self.assertEqual(e.msg[0], resv_msg)\n            else:\n                # Unable to submit the reservation\n                self.logger.info('Error in submitting reservation: %s'\n                                 % e.msg)\n\n    def test_set_unset_max_job_sequence_id(self):\n        \"\"\"\n        Set/Unset max_job_sequence_id attribute and\n        also verify the attribute value after server qterm/kill\n        \"\"\"\n        # Set as Non-admin user\n        seq_id = {ATTR_max_job_sequence_id: 123456789}\n        try:\n            self.server.manager(MGR_CMD_SET, SERVER, seq_id, runas=TEST_USER1)\n        except PbsManagerError as e:\n            self.assertTrue('Unauthorized Request' in e.msg[0])\n\n        # Set as Admin User and also check the value after server restart\n        self.server.manager(MGR_CMD_SET, SERVER, seq_id, runas=ROOT_USER)\n        self.server.expect(SERVER, seq_id)\n        self.server.log_match('svr_max_job_sequence_id set to '\n                              'val %d' % (seq_id[ATTR_max_job_sequence_id]),\n                              starttime=self.server.ctime)\n        # Abruptly kill the server\n        self.stop_and_restart_svr('kill')\n        self.server.expect(SERVER, seq_id)\n        # Gracefully stop the server\n        
self.stop_and_restart_svr('normal')\n        self.server.expect(SERVER, seq_id)\n\n        # Unset as Non-admin user\n        try:\n            self.server.manager(\n                MGR_CMD_UNSET,\n                SERVER,\n                'max_job_sequence_id',\n                runas=TEST_USER1)\n        except PbsManagerError as e:\n            self.assertTrue('Unauthorized Request' in e.msg[0])\n\n        # Unset as Admin user\n        self.server.manager(MGR_CMD_UNSET, SERVER, 'max_job_sequence_id',\n                            runas=ROOT_USER)\n        self.server.log_match('svr_max_job_sequence_id reverting back '\n                              'to default val 9999999',\n                              starttime=self.server.ctime)\n\n    def test_max_job_sequence_id_values(self):\n        \"\"\"\n        Test to check valid/invalid values for the\n        max_job_sequence_id server attribute\n        \"\"\"\n        # Invalid Values\n        invalid_values = [-9999999, '*456879846',\n                          23545.45, 'ajndd', '**45', 'asgh456']\n        for val in invalid_values:\n            try:\n                seq_id = {ATTR_max_job_sequence_id: val}\n                self.server.manager(\n                    MGR_CMD_SET, SERVER, seq_id, runas=ROOT_USER)\n            except PbsManagerError as e:\n                self.assertTrue(\n                    'Illegal attribute or resource value' in e.msg[0])\n        # Less than or Greater than the attribute limit\n        min_max_values = [120515, 999999, 1234567891234, 9999999999999]\n        for val in min_max_values:\n            try:\n                seq_id = {ATTR_max_job_sequence_id: val}\n                self.server.manager(\n                    MGR_CMD_SET, SERVER, seq_id, runas=ROOT_USER)\n            except PbsManagerError as e:\n                self.assertTrue('Cannot set max_job_sequence_id < 9999999, '\n                                'or > 999999999999' in e.msg[0])\n        # Valid values\n        
valid_values = [9999999, 123456789, 100000000000, 999999999999]\n        for val in valid_values:\n            seq_id = {ATTR_max_job_sequence_id: val}\n            self.server.manager(MGR_CMD_SET, SERVER, seq_id, runas=ROOT_USER)\n\n    def test_max_job_sequence_id_wrap(self):\n        \"\"\"\n        Test to check whether jobids/resvids wrap to zero\n        after reaching the given limit\n        \"\"\"\n        # Check the default limit (9999999) and wrap to 0\n        a = {'resources_available.ncpus': 20}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        self.submit_job(verify=True)\n        self.submit_job(lower=1, upper=2, verify=True)\n        self.submit_resv()\n        sv_jobidnumber = 9999999  # default\n        self.set_svr_sv_jobidnumber(sv_jobidnumber)\n        self.submit_job(job_id='%s' % (sv_jobidnumber), verify=True)\n        self.submit_job(lower=1, upper=2, job_id='0[]',\n                        verify=True)  # wrap it\n        self.submit_resv(resv_id='R1')\n\n        # Check the max limit (999999999999) and wrap to 0\n        sv_jobidnumber = 999999999999  # max limit\n        seq_id = {ATTR_max_job_sequence_id: sv_jobidnumber}\n        self.server.manager(MGR_CMD_SET, SERVER, seq_id, runas=ROOT_USER)\n        self.server.expect(SERVER, seq_id)\n        self.submit_job(verify=True)\n        self.submit_job(lower=1, upper=2, verify=True)\n        self.submit_resv()\n        self.set_svr_sv_jobidnumber(sv_jobidnumber)\n        self.submit_job(job_id='%s' % (sv_jobidnumber), verify=True)\n        self.submit_job(lower=1, upper=2, job_id='0[]',\n                        verify=True)  # wrap it\n        self.submit_resv(resv_id='R1')\n\n        # If max_job_sequence_id is set to less than the current jobid, the\n        # id should also wrap to 0\n        sv_jobidnumber = 1234567890\n        seq_id = {ATTR_max_job_sequence_id: sv_jobidnumber}\n        self.server.manager(MGR_CMD_SET, SERVER, seq_id, runas=ROOT_USER)\n   
     self.server.expect(SERVER, seq_id)\n        sv_jobidnumber = 123456789\n        self.set_svr_sv_jobidnumber(sv_jobidnumber)\n        self.submit_job(job_id='%s' % (sv_jobidnumber), verify=True)\n        self.submit_job(lower=1, upper=2, job_id='123456790[]', verify=True)\n        self.submit_resv(resv_id='R123456791')\n        # Set smaller (12345678) than the current jobid (123456790)\n        sv_jobidnumber = 12345678\n        seq_id = {ATTR_max_job_sequence_id: sv_jobidnumber}\n        self.server.manager(MGR_CMD_SET, SERVER, seq_id, runas=ROOT_USER)\n        self.server.expect(SERVER, seq_id)\n        self.submit_job(job_id='0', verify=True)  # wrap it to zero\n        self.submit_job(lower=1, upper=2, job_id='1[]', verify=True)\n        self.submit_resv(resv_id='R2')\n\n    @timeout(3000)\n    def test_verify_sequence_window(self):\n        \"\"\"\n        Tests the sequence window scenario, in which the jobid\n        number is saved to the database once every 1000 times\n        \"\"\"\n        # Abruptly kill the server so the next jobid is 1000 after the\n        # server starts\n        self.set_svr_sv_jobidnumber(0)\n        self.submit_job(job_id='0')\n        self.submit_job(lower=1, upper=2, job_id='1[]')\n        self.submit_resv(resv_id='R2')\n        # Kill the server forcefully\n        self.stop_and_restart_svr('kill')\n        self.submit_job(job_id='1000')\n        self.submit_job(lower=1, upper=2, job_id='1001[]')\n        self.submit_resv(resv_id='R1002')\n        # If the server is killed abruptly again, the next jobid will be 2000\n        self.stop_and_restart_svr('kill')\n        self.submit_job(job_id='2000')\n        self.submit_job(lower=1, upper=2, job_id='2001[]')\n        self.submit_resv(resv_id='R2002')\n\n        # Gracefully stop the server so jobids continue from the last\n        # jobid\n        self.stop_and_restart_svr('normal')\n        self.submit_job(job_id='2003')\n        self.submit_job(lower=1, upper=2, job_id='2004[]')\n     
   self.submit_resv(resv_id='R2005')\n\n        # Verify the sequence window in case of submitting more than 1001\n        # jobs; all jobs should submit successfully without any duplication\n        # error\n        for _ in range(1010):\n            j = Job(TEST_USER)\n            self.server.submit(j)\n\n    def test_jobid_duplication(self):\n        \"\"\"\n        Tests the JobId/ResvId duplication after wrap.\n        Job/Resv shouldn't submit because previous\n        jobs with the same ids are still running\n        \"\"\"\n        seq_id = {ATTR_max_job_sequence_id: 99999999}\n        self.server.manager(MGR_CMD_SET, SERVER, seq_id, runas=ROOT_USER)\n        self.set_svr_sv_jobidnumber(0)\n        self.submit_job(sleep=1000, job_id='0')\n        self.submit_job(sleep=1000, lower=1, upper=2, job_id='1[]')\n        self.submit_resv(resv_dur=300, resv_id='R2')\n        sv_jobidnumber = 99999999\n        self.set_svr_sv_jobidnumber(sv_jobidnumber)\n        self.submit_job(sleep=1000, job_id='%s' % (sv_jobidnumber))\n\n        # Now the job/resv shouldn't submit because the same ids are already\n        # occupied\n        msg = \"qsub: Job with requested ID already exists\"\n        self.submit_job(job_msg=msg)\n        self.submit_job(lower=1, upper=2, job_msg=msg)\n        msg = 'pbs_rsub: Reservation with '\\\n              'requested ID already exists'\n        self.submit_resv(resv_msg=msg)\n        # The job should submit successfully because all existing ids have\n        # been passed\n        self.submit_job(lower=1, upper=2, job_id='3[]')\n\n    def test_jobid_resvid_after_multiple_restart(self):\n        \"\"\"\n        Test to check that the jobid/resvid does not wrap to 0 when the server\n        is restarted multiple times consecutively, gracefully or abruptly\n        \"\"\"\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        curr_id = int(jid.split('.')[0])\n        self.submit_job(job_id='%s' % str(curr_id + 1))\n        
self.submit_job(lower=1, upper=2, job_id='%s[]' % str(curr_id + 2))\n        self.submit_resv(resv_id='R%s' % str(curr_id + 3))\n        # Gracefully stop and start the server twice consecutively\n        self.stop_and_restart_svr('normal')\n        self.stop_and_restart_svr('normal')\n        self.submit_job(job_id='%s' % str(curr_id + 4))\n        self.submit_job(lower=1, upper=2, job_id='%s[]' % str(curr_id + 5))\n        self.submit_resv(resv_id='R%s' % str(curr_id + 6))\n        # Abruptly kill and start the server twice consecutively\n        self.stop_and_restart_svr('kill')\n        self.stop_and_restart_svr('kill')\n        # The current jobid jumps to 1000 because of the sequence window\n        # buffer\n        curr_id = 1000\n        self.submit_job(job_id='%s' % str(curr_id))\n        self.submit_job(lower=1, upper=2, job_id='%s[]' % str(curr_id + 1))\n        self.submit_resv(resv_id='R%s' % str(curr_id + 2))\n\n    def tearDown(self):\n        self.server.cleanup_jobs()\n        TestFunctional.tearDown(self)\n"
  },
  {
    "path": "test/tests/functional/pbs_two_mom_hooks_resources_used.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\n@requirements(num_moms=2)\nclass TestAcctlogRescUsedWithTwoMomHooks(TestFunctional):\n\n    \"\"\"\n    This test suite tests the accounting logs to have non-zero resources_used\n    in the scenario where we have execjob_begin and execjob_end hooks.\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n        if len(self.moms) != 2:\n            self.skipTest('Test requires two MoMs as input, '\n                          'use -p moms=<mom1>:<mom2>')\n\n        hook_body = \"import time\\n\"\n        a = {'event': 'execjob_begin', 'enabled': 'True'}\n        rv = self.server.create_import_hook(\"test\", a, hook_body)\n        self.assertTrue(rv)\n\n        a = {'event': 'execjob_end', 'enabled': 'True'}\n        rv = self.server.create_import_hook(\"test2\", a, hook_body)\n        self.assertTrue(rv)\n\n        a = {ATTR_nodefailrq: 5}\n        rc = self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.assertEqual(rc, 0)\n\n        a = {'job_history_enable': 'True'}\n        rc = self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.assertEqual(rc, 0)\n\n        self.momA = self.moms.values()[0]\n        self.momB = self.moms.values()[1]\n\n        if self.momA.is_cpuset_mom() or self.momB.is_cpuset_mom():\n            node_status = self.server.status(NODE)\n\n        if 
self.momA.is_cpuset_mom():\n            self.hostA = node_status[1]['id']\n        else:\n            self.hostA = self.momA.shortname\n        if self.momB.is_cpuset_mom():\n            self.hostB = node_status[-1]['id']\n        else:\n            self.hostB = self.momB.shortname\n\n    def test_Rrecord(self):\n        \"\"\"\n        This test case runs a job on two nodes. Kills the mom process on\n        MS, waits for the job to be requeued and tests for the\n        resources_used value to be present in the 'R' record.\n        \"\"\"\n\n        # Submit job\n        select = \"vnode=\" + self.hostA + \"+vnode=\" + self.hostB\n        j1 = Job(TEST_USER, attrs={\n             ATTR_N: 'NodeFailRequeueTest',\n             'Resource_List.select': select})\n        jid1 = self.server.submit(j1)\n\n        # Wait for the job to start running.\n        self.server.expect(JOB, {ATTR_state: 'R'}, jid1)\n        # Kill the MoM process on the MS.\n\n        self.momA.signal('-KILL')\n        # Wait for the job to be requeued.\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid1)\n\n        # Check for resources_used value in the 'R' record.\n        msg = '.*R;' + str(jid1) + '.*resources_used.ncpus=2.*'\n        self.server.accounting_match(msg, regexp=True, n='ALL')\n\n    def test_Erecord(self):\n        \"\"\"\n        This test case runs a job on two nodes. 
Waits for the job to complete.\n        After that, tests for the E record to have non-zero values in\n        resources_used.\n        \"\"\"\n\n        # Submit job\n        select = \"vnode=\" + self.hostA + \"+vnode=\" + self.hostB\n        j1 = Job(TEST_USER, attrs={\n             ATTR_N: 'JobEndTest',\n             'Resource_List.select': select})\n        j1.set_sleep_time(15)\n        jid1 = self.server.submit(j1)\n\n        # Wait for the job to start running.\n        self.server.expect(JOB, {ATTR_state: 'R'}, jid1)\n\n        # Wait for the job to finish running.\n        self.server.expect(JOB, {'job_state': 'F'}, id=jid1, extend='x')\n\n        # Check that resources_used.walltime is non-zero.\n        try:\n            self.server.expect(JOB, {'resources_used.walltime': '0'},\n                               id=jid1, max_attempts=2, extend='x')\n        except PtlExpectError:\n            # resources_used.walltime is non-zero, as expected.\n            pass\n        else:\n            # resources_used.walltime is zero, test case fails.\n            self.fail(\"resources_used.walltime reported to be zero\")\n\n        # Check for the E record to NOT have zero walltime.\n        msg = '.*E;' + str(jid1) + '.*resources_used.walltime=\\\"00:00:00.*'\n        self.server.accounting_match(msg, tail=True, regexp=True,\n                                     existence=False)\n\n        # Check for the E record to have non-zero ncpus.\n        msg = '.*E;' + str(jid1) + '.*resources_used.ncpus=2.*'\n        self.server.accounting_match(msg, tail=True, regexp=True)\n"
  },
  {
    "path": "test/tests/functional/pbs_types.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass Test_pbs_types(TestFunctional):\n    \"\"\"\n    This test suite tests pbs python types and their related functions\n    \"\"\"\n    def test_pbs_size_deepcopy(self):\n        \"\"\"\n        Test that deepcopy works for pbs.size type\n        \"\"\"\n        hook_content = (\"\"\"\nimport pbs\nimport copy\na = pbs.size(1000)\nb = copy.deepcopy(a)\nc = a\npbs.logmsg(pbs.EVENT_DEBUG, 'a=%s, b=%s, c=%s' % (a, b, c))\nd = pbs.size('1m')\ne = copy.deepcopy(d)\nf = d\npbs.logmsg(pbs.EVENT_DEBUG, 'd=%s, e=%s, f=%s' % (d, e, f))\n\"\"\")\n        hook_name = 'deepcopy'\n        hook_attr = {'enabled': 'true', 'event': 'queuejob'}\n        self.server.create_import_hook(hook_name, hook_attr, hook_content)\n\n        j = Job(TEST_USER)\n        self.server.submit(j)\n        self.server.log_match(\"a=1000b, b=1000b, c=1000b\")\n        self.server.log_match(\"d=1mb, e=1mb, f=1mb\")\n"
  },
  {
    "path": "test/tests/functional/pbs_unknown_resource_hook_update.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestUnknownResourceHookUpdate(TestFunctional):\n    \"\"\"\n    Test that a resource that is not known to the server and is\n    updated in an execjob_epilogue hook doesn't crash the server.\n    \"\"\"\n\n    def test_epilogue_update(self):\n        \"\"\"\n        Test setting resources_used values of resources that\n        are unknown to the server, using an epilogue hook.\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'job_history_enable': 'True'})\n\n        hook_body = \"import pbs\\n\"\n        hook_body += \"e = pbs.event()\\n\"\n        hook_body += \"hstr = 'unknown_resource'\\n\"\n        hook_body += \"e.job.resources_used[\\\"foo_str\\\"] = 'unknown resource'\\n\"\n        hook_body += \"e.job.resources_used[\\\"foo_i\\\"] = 5\\n\"\n\n        a = {'event': 'execjob_epilogue', 'enabled': 'True'}\n        self.server.create_import_hook(\"ep\", a, hook_body)\n\n        J = Job()\n        J.set_sleep_time(1)\n        jid = self.server.submit(J)\n\n        self.server.expect(JOB, {'job_state': 'F'}, id=jid, extend='x')\n\n        # Make sure the server is still up\n        self.server.isUp()\n\n        # Server logs would only show the first resource it failed to\n        # update\n        log_match = 'unable to update attribute 
resources_used.foo_str '\n        log_match += 'in job_obit'\n        self.server.log_match(\"%s;.*%s.*\" % (jid, log_match), regexp=True)\n"
  },
  {
    "path": "test/tests/functional/pbs_unset_exectime.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass Testunset_exectime(TestFunctional):\n    \"\"\"\n    Test that unsetting execution time through hooks does not throw a parse\n    error\n    \"\"\"\n\n    def test_unset_exectime(self):\n        \"\"\"\n        Create a hook to unset execution time and check that after submitting\n        a job no error messages are logged\n        \"\"\"\n        hook_name = \"exechook\"\n        hook_body = \"\"\"\nimport pbs\ne = pbs.event()\nif e.type == pbs.QUEUEJOB:\n    o = e.job\n    o.Execution_Time = None\nelse:\n    e.reject(\"unmatched event type!\")\n\"\"\"\n        a = {'event': 'queuejob', 'enabled': 'True'}\n        self.server.create_import_hook(hook_name, a, hook_body)\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 2047})\n        j = Job(TEST_USER)\n        self.server.submit(j)\n        msg = \"Error evaluating Python script, \"\n        msg += \"exec_time could not be parsed\"\n        self.server.log_match(msg, max_attempts=5, existence=False)\n"
  },
  {
    "path": "test/tests/functional/pbs_user_reliability.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nimport re\n\nfrom tests.functional import *\n\n\nclass Test_user_reliability(TestFunctional):\n\n    \"\"\"\n    This test suite is for testing the user reliability workflow feature.\n    \"\"\"\n\n    def test_create_resv_from_job_using_runjob_hook(self):\n        \"\"\"\n        This test is for creating a reservation out of a job using runjob hook.\n        \"\"\"\n        qmgr_cmd = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                'bin', 'qmgr')\n\n        runjob_hook_body = \"\"\"\nimport pbs\ne = pbs.event()\nj = e.job\nj.create_resv_from_job=1\n\"\"\"\n        hook_event = \"runjob\"\n        hook_name = \"rsub\"\n        a = {'event': hook_event, 'enabled': 'true'}\n        self.server.create_import_hook(hook_name, a, runjob_hook_body)\n\n        s_ncpus = 'resources_assigned.ncpus'\n        s_nodect = 'resources_assigned.nodect'\n        try:\n            s_ncpus_before = self.server.status(SERVER, s_ncpus)[0][s_ncpus]\n            s_nodect_before = self.server.status(SERVER, s_nodect)[0][s_nodect]\n        except IndexError:\n            s_nodect_before = '0'\n            s_ncpus_before = '0'\n\n        a = {'Resource_List.walltime': 9999}\n        job = Job(TEST_USER, a)\n        jid = self.server.submit(job)\n        self.server.expect(JOB, {ATTR_state: 'R'}, jid)\n\n        a = {ATTR_job: jid}\n        rid = 
self.server.status(RESV, a)[0]['id'].split(\".\")[0]\n\n        a = {ATTR_job: jid, 'reserve_state': (MATCH_RE, 'RESV_RUNNING|5'),\n             'Resource_List.walltime': 9999}\n        self.server.expect(RESV, a, id=rid)\n\n        a = {ATTR_queue: rid}\n        self.server.expect(JOB, a, id=jid)\n\n        self.server.deljob(jid, wait=True)\n        self.server.expect(RESV, a, id=rid)\n        self.server.delete(rid)\n\n        s_ncpus_after = self.server.status(SERVER, s_ncpus)[0][s_ncpus]\n        s_nodect_after = self.server.status(SERVER, s_nodect)[0][s_nodect]\n\n        self.assertEqual(s_ncpus_before, s_ncpus_after)\n        self.assertEqual(s_nodect_before, s_nodect_after)\n\n    def test_create_resv_from_job_using_qsub(self):\n        \"\"\"\n        This test is for creating a reservation out of a job using qsub.\n        \"\"\"\n        s_ncpus = 'resources_assigned.ncpus'\n        s_nodect = 'resources_assigned.nodect'\n        try:\n            s_ncpus_before = self.server.status(SERVER, s_ncpus)[0][s_ncpus]\n            s_nodect_before = self.server.status(SERVER, s_nodect)[0][s_nodect]\n        except IndexError:\n            s_nodect_before = '0'\n            s_ncpus_before = '0'\n\n        now = time.time()\n\n        a = {ATTR_W: 'create_resv_from_job=True'}\n        job = Job(TEST_USER, a)\n        jid = self.server.submit(job)\n        self.server.expect(JOB, {ATTR_state: 'R'}, jid)\n\n        self.server.log_match(\"Reject reply code=15095\", starttime=now,\n                              interval=2, max_attempts=10, existence=False)\n\n        a = {ATTR_job: jid}\n        rid = self.server.status(RESV, a)[0]['id'].split(\".\")[0]\n\n        a = {ATTR_job: jid, 'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, a, id=rid)\n\n        a = {ATTR_queue: rid}\n        self.server.expect(JOB, a, id=jid)\n\n        self.server.deljob(jid, wait=True)\n        self.server.expect(RESV, a, id=rid)\n        
self.server.delete(rid)\n\n        s_ncpus_after = self.server.status(SERVER, s_ncpus)[0][s_ncpus]\n        s_nodect_after = self.server.status(SERVER, s_nodect)[0][s_nodect]\n\n        self.assertEqual(s_ncpus_before, s_ncpus_after)\n        self.assertEqual(s_nodect_before, s_nodect_after)\n\n        a = {ATTR_W: 'create_resv_from_job=False'}\n        job = Job(TEST_USER, a)\n        jid = self.server.submit(job)\n        self.server.expect(JOB, {ATTR_state: 'R'}, jid)\n        self.assertFalse(self.server.status(RESV))\n\n    def test_create_resv_from_job_using_rsub(self):\n        \"\"\"\n        This test is for creating a reservation out of a job using pbs_rsub.\n        \"\"\"\n        s_ncpus = 'resources_assigned.ncpus'\n        s_nodect = 'resources_assigned.nodect'\n        try:\n            s_ncpus_before = self.server.status(SERVER, s_ncpus)[0][s_ncpus]\n            s_nodect_before = self.server.status(SERVER, s_nodect)[0][s_nodect]\n        except IndexError:\n            s_nodect_before = '0'\n            s_ncpus_before = '0'\n\n        a = {'Resource_List.walltime': 9999}\n        job = Job(TEST_USER, a)\n        jid = self.server.submit(job)\n        self.server.expect(JOB, {ATTR_state: 'R'}, jid)\n\n        a = {ATTR_job: jid}\n        resv = Reservation(attrs=a)\n        self.server.submit(resv)\n\n        a = {ATTR_job: jid}\n        rid = self.server.status(RESV, a)[0]['id'].split(\".\")[0]\n\n        a = {ATTR_job: jid, 'reserve_state': (MATCH_RE, 'RESV_RUNNING|5')}\n        self.server.expect(RESV, a, id=rid)\n\n        a = {ATTR_queue: rid}\n        self.server.expect(JOB, a, id=jid)\n\n        self.server.deljob(jid, wait=True)\n        self.server.expect(RESV, a, id=rid)\n        self.server.delete(rid)\n\n        s_ncpus_after = self.server.status(SERVER, s_ncpus)[0][s_ncpus]\n        s_nodect_after = self.server.status(SERVER, s_nodect)[0][s_nodect]\n\n        self.assertEqual(s_ncpus_before, s_ncpus_after)\n        
self.assertEqual(s_nodect_before, s_nodect_after)\n\n    def test_create_resv_from_array_job(self):\n        \"\"\"\n        This test confirms that a reservation cannot be created out of an\n        array job.\n        \"\"\"\n\n        j = Job(TEST_USER)\n        j.set_attributes({ATTR_J: '1-3'})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'B'}, jid)\n\n        subjobs = self.server.status(JOB, id=jid, extend='t')\n        jids1 = subjobs[1]['id']\n\n        a = {ATTR_job: jids1}\n        resv = Reservation(attrs=a)\n        msg = \"Reservation may not be created from an array job\"\n        try:\n            self.server.submit(resv)\n        except PbsSubmitError as e:\n            self.assertTrue(msg in e.msg[0])\n        else:\n            self.fail(\"Reservation submission did not fail as expected\")\n\n        a = {ATTR_job: jid}\n        resv = Reservation(attrs=a)\n        try:\n            self.server.submit(resv)\n        except PbsSubmitError as e:\n            self.assertTrue(msg in e.msg[0])\n        else:\n            self.fail(\"Reservation submission did not fail as expected\")\n\n    def test_create_resv_by_other_user(self):\n        \"\"\"\n        This test confirms that a reservation cannot be created out of a\n        job owned by someone else.\n        \"\"\"\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, jid)\n\n        a = {ATTR_job: jid}\n        resv = Reservation(username=TEST_USER2, attrs=a)\n        msg = \"Unauthorized Request\"\n        try:\n            self.server.submit(resv)\n        except PbsSubmitError as e:\n            self.assertTrue(msg in e.msg[0])\n        else:\n            self.fail(\"Reservation submission did not fail as expected\")\n\n    def test_flatuid_false(self):\n        \"\"\"\n        This test confirms that a reservation can be created out of a job\n        even when flatuid is set to False.\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, 
SERVER, {'flatuid': False})\n        self.test_create_resv_from_job_using_qsub()\n\n    def test_set_attr_when_job_running(self):\n        \"\"\"\n        This test confirms that create_resv_from_job is not allowed to be\n        altered when the job is already running.\n        \"\"\"\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, jid)\n\n        msg = \"attribute allowed to be modified\"\n        with self.assertRaises(PbsAlterError, msg=msg) as c:\n            self.server.alterjob(jid, {ATTR_W: 'create_resv_from_job=1'})\n\n        msg = \"qalter: Cannot modify attribute while job running  \"\n        msg += \"create_resv_from_job\"\n        self.assertIn(msg, c.exception.msg[0])\n"
  },
  {
    "path": "test/tests/functional/pbs_validate_job_qsub_attributes.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.functional import *\n\n\nclass TestQsubWithQueuejobHook(TestFunctional):\n    \"\"\"\n    This test suite validates the job submitted through qsub\n    when a queuejob hook is enabled in the PBS complex.\n    \"\"\"\n\n    hooks = {\n        \"queuejob_hook1\":\n        \"\"\"\nimport pbs\npbs.logmsg(pbs.LOG_DEBUG, \"submitted job with long select\")\n        \"\"\",\n    }\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n\n    def test_qsub_long_select_with_hook(self):\n        \"\"\"\n        This test case validates that a long string of resources\n        requested in qsub through -lselect is handled correctly. 
The requested resource should not\n        get truncated by the server hook infra when there exists a queuejob\n        hook.\n        \"\"\"\n\n        hook_names = [\"queuejob_hook1\"]\n        hook_attrib = {'event': 'queuejob', 'enabled': 'True'}\n        for hook_name in hook_names:\n            hook_script = self.hooks[hook_name]\n            retval = self.server.create_import_hook(hook_name,\n                                                    hook_attrib,\n                                                    hook_script,\n                                                    overwrite=True)\n            self.assertTrue(retval)\n\n        # Create a long select statement for the job\n        loop_str = \"1:host=testnode\"\n        long_select = loop_str\n        for loop_i in range(1, 5120, len(loop_str) + 1):\n            long_select += \"+\" + loop_str\n\n        select_len = len(long_select)\n        long_select = \"select=\" + long_select\n        job = Job(TEST_USER1, attrs={ATTR_l: long_select})\n        jid = self.server.submit(job)\n        job_status = self.server.status(JOB, id=jid)\n        select_resource = job_status[0]['Resource_List.select']\n        self.assertEqual(select_len, len(select_resource))\n\n    def test_qsub_N_cmdline(self):\n        \"\"\"\n        This test case validates that illegal characters in the job name\n        cause qsub to error out\n        \"\"\"\n        J = Job(TEST_USER, attrs={ATTR_N: 'j&whoami>/tmp/b&'})\n        try:\n            jid = self.server.submit(J)\n        except PbsSubmitError as e:\n            self.assertTrue(\"illegal -N value\" in e.msg[0])\n            self.logger.info('qsub: illegal -N value. Job not submitted')\n        else:\n            self.logger.info('Job created with illegal name: ' + jid)\n            self.assertTrue(False, \"Job shouldn't be accepted\")\n\n    def test_qsub_N_jobscript(self):\n        \"\"\"\n        This test case validates that illegal characters in the job name\n        passed 
from a job script cause qsub to error out\n        \"\"\"\n        j = Job(TEST_USER)\n        scrpt = []\n        scrpt += ['#!/bin/bash']\n        scrpt += ['#PBS -N \"j&whoami>/tmp/b&\"\\n']\n        scrpt += ['#PBS -j oe\\n']\n        scrpt += ['#PBS -m n\\n']\n        scrpt += ['#PBS -l select=1:ncpus=1\\n']\n        scrpt += ['#PBS -l walltime=00:0:15\\n']\n        scrpt += ['#PBS -l place=scatter:excl\\n']\n        scrpt += ['date +%s']\n        j.create_script(body=scrpt, hostname=self.server.client)\n        try:\n            jid = self.server.submit(j)\n        except PbsSubmitError as e:\n            self.assertTrue(\"illegal -N value\" in e.msg[0])\n            self.logger.info('qsub: illegal -N value. Job not submitted')\n        else:\n            self.logger.info('Job created with illegal name: ' + jid)\n            self.assertTrue(False, \"Job shouldn't be accepted\")\n\n    def test_qsub_N_job_array(self):\n        \"\"\"\n        This test case validates that illegal characters in the job name\n        of a job array cause qsub to error out\n        \"\"\"\n        J = Job(TEST_USER, attrs={ATTR_N: 'j&whoami>/tmp/b&', ATTR_J: '1-2'})\n        try:\n            jid = self.server.submit(J)\n        except PbsSubmitError as e:\n            self.assertTrue(\"illegal -N value\" in e.msg[0])\n            self.logger.info('qsub: illegal -N value. Job not submitted')\n        else:\n            self.logger.info('Job created with illegal name: ' + jid)\n            self.assertTrue(False, \"Job shouldn't be accepted\")\n\n    def test_qsub_N_validchar(self):\n        \"\"\"\n        This test case validates that the character \".\"\n        in a job name passed via -N to qsub works fine\n        \"\"\"\n        j = Job(TEST_USER, {ATTR_N: 'job.scr'})\n        try:\n            jid = self.server.submit(j)\n        except PbsSubmitError as e:\n            self.assertNotIn('illegal -N value', e.msg[0],\n                             'qsub: Not accepted \".\" in 
job name')\n        else:\n            self.server.expect(JOB, {'job_state': (MATCH_RE, '[RQ]')}, id=jid)\n            self.logger.info('Job submitted successfully: ' + jid)\n"
  },
  {
    "path": "test/tests/functional/pbs_verify_log_output.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport array\nimport fcntl\nimport socket\nimport struct\nimport sys\n\nfrom tests.functional import *\n\n\nclass TestVerifyLogOutput(TestFunctional):\n    \"\"\"\n    Test that hostname and interface information\n    is added to all logs at log open\n    \"\"\"\n\n    def setUp(self):\n        TestFunctional.setUp(self)\n\n    def all_interfaces(self):\n        \"\"\"\n        Miscellaneous function to return all interface names\n        that should also be added to logs\n        \"\"\"\n        is_64bits = sys.maxsize > 2**32\n        struct_size = 40 if is_64bits else 32\n        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)\n        max_possible = 8\n        while True:\n            bytes = max_possible * struct_size\n            names = array.array('B', b'\\0' * bytes)\n            outbytes = struct.unpack('iL', fcntl.ioctl(\n                s.fileno(),\n                0x8912,  # SIOCGIFCONF\n                struct.pack('iL', bytes, names.buffer_info()[0])\n            ))[0]\n            if outbytes == bytes:\n                max_possible *= 2\n            else:\n                break\n        namestr = names.tobytes()\n        for i in range(0, outbytes, struct_size):\n            yield namestr[i:i + 16].split(b'\\0', 1)[0].decode()\n\n    def test_hostname_add(self):\n        \"\"\"\n        Test for hostname presence in log files\n        \"\"\"\n       
 log_val = socket.gethostname()\n        momname = self.mom.shortname\n        self.scheduler.log_match(\n            log_val,\n            regexp=False,\n            starttime=self.server.ctime,\n            max_attempts=5,\n            interval=2)\n        self.server.log_match(\n            log_val,\n            regexp=False,\n            starttime=self.server.ctime,\n            max_attempts=5,\n            interval=2)\n        self.mom.log_match(\n            momname,\n            regexp=False,\n            starttime=self.server.ctime,\n            max_attempts=5,\n            interval=2)\n\n    def test_if_info_add(self):\n        \"\"\"\n        Test for interface info presence in log files\n        \"\"\"\n        interfaceGenerator = self.all_interfaces()\n        for ifname in interfaceGenerator:\n            name = \"[(\" + ifname + \")]\"\n            log_val = \"[( interface: )]\" + name\n            # Workaround for PTL regex to match\n            # entire word once using () inside []\n            self.scheduler.log_match(\n                log_val,\n                regexp=True,\n                starttime=self.server.ctime,\n                max_attempts=5,\n                interval=2)\n            self.server.log_match(\n                log_val,\n                regexp=True,\n                starttime=self.server.ctime,\n                max_attempts=5,\n                interval=2)\n            self.mom.log_match(\n                log_val,\n                regexp=True,\n                starttime=self.server.ctime,\n                max_attempts=5,\n                interval=2)\n\n    def test_auto_sched_cycle_trigger(self):\n        \"\"\"\n        Test case to verify that scheduling cycle is triggered automatically\n        without any delay  after restart of PBS Services.\n        \"\"\"\n        started_time = time.time()\n        self.logger.info('Restarting PBS Services')\n        PBSInitServices().restart()\n\n        if self.server.isUp() and 
self.scheduler.isUp():\n            self.scheduler.log_match(\"Req;;Starting Scheduling Cycle\",\n                                     starttime=started_time)\n            self.scheduler.log_match(\"Req;;Leaving Scheduling Cycle\",\n                                     starttime=started_time)\n\n    def test_supported_auth_method_msgs(self):\n        \"\"\"\n        Test to verify PBS_SUPPORTED_AUTH_METHODS is logged in server\n        and comm daemon logs after start or restart\n        \"\"\"\n        attr_name = 'PBS_SUPPORTED_AUTH_METHODS'\n        started_time = time.time()\n        # check the logs after restarting the server and comm daemon\n        self.server.restart()\n        self.comm.restart()\n        resvport_msg = 'Supported authentication method: ' + 'resvport'\n        if self.server.isUp() and self.comm.isUp():\n            self.server.log_match(resvport_msg, starttime=started_time)\n            self.comm.log_match(resvport_msg, starttime=started_time)\n\n        # Added an attribute PBS_SUPPORTED_AUTH_METHODS in pbs.conf file\n        conf_attr = {'PBS_SUPPORTED_AUTH_METHODS': 'munge,resvport'}\n        self.du.set_pbs_config(confs=conf_attr)\n        started_time = time.time()\n        # check the logs after restarting the server and comm daemon\n        self.server.restart()\n        self.comm.restart()\n        munge_msg = 'Supported authentication method: ' + 'munge'\n        if self.server.isUp() and self.comm.isUp():\n            self.server.log_match(munge_msg, starttime=started_time)\n            self.comm.log_match(munge_msg, starttime=started_time)\n            self.server.log_match(resvport_msg, starttime=started_time)\n            self.comm.log_match(resvport_msg, starttime=started_time)\n"
  },
  {
    "path": "test/tests/interfaces/__init__.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom ptl.utils.pbs_testsuite import *\n\n\nclass TestInterfaces(PBSTestSuite):\n    \"\"\"\n    Base test suite for Interfaces related tests\n    \"\"\"\n    pass\n"
  },
  {
    "path": "test/tests/interfaces/pbs_libpbs_so.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.interfaces import *\n\ntest_code = '''\n#include <stdio.h>\n#include <string.h>\n#include <pbs_ifl.h>\n\nint main(int argc, char **argv)\n{\n    struct batch_status *status = NULL;\n    struct attrl *a;\n    int c = pbs_connect(NULL);\n\n    if (c <= 0)\n        return 1;\n    status = pbs_statserver(c, NULL, NULL);\n    if (status == NULL)\n        return 1;\n    a = status->attribs;\n    while (a != NULL) {\n        if (a->name != NULL &&\n            (!strcmp(a->name, ATTR_SvrHost) ||\n                !strcmp(a->name, ATTR_total))) {\n            printf(\"%s = %s\\\\n\", a->name, a->value);\n        }\n        a = a->next;\n    }\n    pbs_statfree(status);\n    pbs_disconnect(c);\n    return 0;\n}\n'''\n\n\nclass TestLibpbsLinking(TestInterfaces):\n    \"\"\"\n    Test suite to test shared libpbs library linking\n    \"\"\"\n\n    def test_libpbs(self):\n        \"\"\"\n        Test shared libpbs library linking with test code\n        \"\"\"\n        if self.du.get_platform().lower() != 'linux':\n            self.skipTest(\"This test is only supported on Linux!\")\n        _gcc = self.du.which(exe='gcc')\n        if _gcc == 'gcc':\n            self.skipTest(\"Couldn't find gcc!\")\n        _exec = self.server.pbs_conf['PBS_EXEC']\n        _id = os.path.join(_exec, 'include')\n        _ld = os.path.join(_exec, 'lib')\n        if not 
self.du.isfile(path=os.path.join(_id, 'pbs_ifl.h')):\n            _m = \"Couldn't find pbs_ifl.h in %s\" % _id\n            _m += \", Please install PBS devel package\"\n            self.skipTest(_m)\n        self.assertTrue(self.du.isfile(path=os.path.join(_ld, 'libpbs.so')))\n        _fn = self.du.create_temp_file(body=test_code, suffix='.c')\n        _en = self.du.create_temp_file()\n        self.du.rm(path=_en)\n        cmd = ['gcc', '-g', '-O2', '-Wall', '-Werror']\n        cmd += ['-o', _en]\n        cmd += ['-I%s' % _id, _fn, '-L%s' % _ld, '-lpbs', '-lz']\n        _res = self.du.run_cmd(cmd=cmd)\n        self.assertEqual(_res['rc'], 0, \"\\n\".join(_res['err']))\n        self.assertTrue(self.du.isfile(path=_en))\n        cmd = ['LD_LIBRARY_PATH=%s %s' % (_ld, _en)]\n        _res = self.du.run_cmd(cmd=cmd, as_script=True)\n        self.assertEqual(_res['rc'], 0)\n        self.assertEqual(len(_res['out']), 2)\n        _s = self.server.status(SERVER)[0]\n        _exp = [\"%s = %s\" % (ATTR_SvrHost, _s[ATTR_SvrHost])]\n        _exp += [\"%s = %s\" % (ATTR_total, _s[ATTR_total])]\n        _exp = \"\\n\".join(_exp)\n        _out = \"\\n\".join(_res['out'])\n        self.assertEqual(_out, _exp)\n"
  },
  {
    "path": "test/tests/interfaces/pbs_node_partition.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.interfaces import *\n\n\n@tags('multisched')\nclass TestNodePartition(TestInterfaces):\n    \"\"\"\n    Test suite to test partition attr for node\n    \"\"\"\n\n    def setUp(self):\n        TestInterfaces.setUp(self)\n        self.mom_hostname = self.mom.shortname\n\n    def set_node_partition_attr(self, mgr_cmd=\"MGR_CMD_SET\", n_name=None,\n                                partition=\"P1\", user=ROOT_USER):\n        \"\"\"\n        Common function to set partition attribute to node object\n        :param mgr_cmd: qmgr \"MGR_CMD_SET/MGR_CMD_UNSET\" cmd,\n        Defaults to MGR_CMD_SET\n        :type mgr_cmd: str\n        :param n_name: name of the vnode, Defaults to \"hostname of server\"\n        :type n_name: str\n        :param partition: \"partition\" attribute of node object,\n                    Defaults to \"P1\"\n        :type partition: str\n        :param user: one of the pre-defined set of users\n        :type user: :py:class:`~ptl.lib.pbs_testlib.PbsUser`\n        \"\"\"\n        attr = {'partition': partition}\n        if mgr_cmd == \"MGR_CMD_SET\":\n            self.server.manager(MGR_CMD_SET, NODE, attr, id=n_name, runas=user)\n        elif mgr_cmd == \"MGR_CMD_UNSET\":\n            self.server.manager(MGR_CMD_UNSET, NODE,\n                                \"partition\", id=n_name, runas=user)\n        else:\n            msg = 
(\"Error: set_node_partition_attr function takes only \"\n                   \"MGR_CMD_SET/MGR_CMD_UNSET value for mgr_cmd\")\n            self.assertTrue(False, msg)\n\n    def test_set_unset_partition_node_attr(self):\n        \"\"\"\n        Test to set/unset the partition attribute of node object\n        \"\"\"\n        self.set_node_partition_attr(n_name=self.mom_hostname)\n        self.set_node_partition_attr(partition=\"P2\", n_name=self.mom_hostname)\n\n        # resetting the same partition value\n        self.set_node_partition_attr(partition=\"P2\", n_name=self.mom_hostname)\n        self.set_node_partition_attr(mgr_cmd=\"MGR_CMD_UNSET\",\n                                     n_name=self.mom_hostname)\n\n    def test_set_partition_node_attr_user_permissions(self):\n        \"\"\"\n        Test to check the user permissions for set/unset the partition\n        attribute of node\n        \"\"\"\n        self.set_node_partition_attr(n_name=self.mom_hostname)\n        msg1 = \"Unauthorized Request\"\n        msg2 = \"didn't receive expected error message\"\n        try:\n            self.set_node_partition_attr(partition=\"P2\", user=TEST_USER)\n        except PbsManagerError as e:\n            self.assertTrue(msg1 in e.msg[0], msg2)\n        try:\n            self.set_node_partition_attr(mgr_cmd=\"MGR_CMD_UNSET\",\n                                         user=TEST_USER)\n        except PbsManagerError as e:\n            # self.assertEqual(e.rc, 15007)\n            # The above code has to be uncommented when the PTL framework\n            # bug PP-881 gets fixed\n            self.assertTrue(msg1 in e.msg[0], msg2)\n\n    def test_partition_association_with_node_and_queue(self):\n        \"\"\"\n        Test to check setting the partition attribute and the association\n        between queue and node\n        \"\"\"\n        attr = {'queue_type': \"execution\", 'enabled': \"True\",\n                'started': \"True\", 'partition': \"P1\"}\n        
self.server.manager(MGR_CMD_CREATE, QUEUE, attr, id=\"Q1\")\n        self.set_node_partition_attr(n_name=self.mom_hostname)\n        attr = {'queue_type': \"execution\", 'enabled': \"True\",\n                'started': \"True\", 'partition': \"P2\"}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, attr, id=\"Q2\")\n\n        self.set_node_partition_attr(mgr_cmd=\"MGR_CMD_UNSET\",\n                                     n_name=self.mom_hostname)\n        self.server.manager(MGR_CMD_SET, NODE, {\n                            'queue': \"Q2\"}, id=self.mom_hostname)\n        self.set_node_partition_attr(partition=\"P2\", n_name=self.mom_hostname)\n\n    def test_mismatch_of_partition_on_node_and_queue(self):\n        \"\"\"\n        Test to check that setting the partition attribute is disallowed\n        if partition ids do not match on queue and node\n        \"\"\"\n        self.test_partition_association_with_node_and_queue()\n        msg1 = \"Invalid partition in queue\"\n        msg2 = \"didn't receive expected error message\"\n        try:\n            self.server.manager(MGR_CMD_SET,\n                                QUEUE, {'partition': \"P1\"},\n                                id=\"Q2\")\n        except PbsManagerError as e:\n            # self.assertEqual(e.rc, 15221)\n            # The above code has to be uncommented when the PTL framework\n            # bug PP-881 gets fixed\n            self.assertTrue(msg1 in e.msg[0], msg2)\n        msg1 = \"Partition P2 is not part of queue for node\"\n        try:\n            self.server.manager(MGR_CMD_SET,\n                                NODE, {'queue': \"Q1\"},\n                                id=self.mom_hostname)\n        except PbsManagerError as e:\n            # self.assertEqual(e.rc, 15220)\n            # The above code has to be uncommented when the PTL framework\n            # bug PP-881 gets fixed\n            self.assertTrue(msg1 in e.msg[0], msg2)\n        msg1 = \"Queue Q2 is not part of partition for 
node\"\n        try:\n            self.set_node_partition_attr(n_name=self.mom_hostname)\n        except PbsManagerError as e:\n            # self.assertEqual(e.rc, 15219)\n            # The above code has to be uncommented when the PTL framework\n            # bug PP-881 gets fixed\n            self.assertTrue(msg1 in e.msg[0], msg2)\n"
  },
  {
    "path": "test/tests/interfaces/pbs_partition.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.interfaces import *\n\n\nclass TestPartition(TestInterfaces):\n    \"\"\"\n    Test suite to test partition attr\n    \"\"\"\n\n    def partition_attr(self, mgr_cmd=MGR_CMD_SET,\n                       obj_name=\"QUEUE\", q_type=None,\n                       name=\"Q1\", enable=\"True\",\n                       start=\"True\", partition=\"P1\", user=ROOT_USER):\n        \"\"\"\n        Common function to set partition attribute/s to node/queue object\n        :param mgr_cmd: qmgr \"MGR_CMD_SET/MGR_CMD_UNSET/MGR_CMD_CREATE\" cmd,\n        Defaults to MGR_CMD_SET\n        :type mgr_cmd: int\n        :param obj_name: PBS object vnode/queue. 
Defaults to queue\n        :type obj_name: str\n        :param q_type: \"queue_type\" attribute of queue object ,\n                    Defaults to \"execution\"\n        :type q_type:  str\n        :param name: name of the queue/vnode , Defaults to Q1 for Queue object,\n        and server shortname for Node object\n        :type name: str\n        :param enable: \"enabled\" attribute of queue object, Defaults to \"True\"\n        :type enable: boolean\n        :param start: \"started\" attribute of queue object, Defaults to \"True\"\n        :type start: boolean\n        :param partition: \"partition\" attribute of vnode/queue object,\n                    Defaults to \"P1\"\n        :type partition: str\n        :param user: one of the pre-defined set of users\n        :type user: :py:class:`~ptl.lib.pbs_testlib.PbsUser`\n        \"\"\"\n        if obj_name == \"QUEUE\":\n            if mgr_cmd == MGR_CMD_CREATE:\n                if q_type is None:\n                    attr = {'partition': partition}\n                else:\n                    attr = {\n                        'queue_type': q_type,\n                        'enabled': enable,\n                        'started': start,\n                        'partition': partition}\n                self.server.manager(MGR_CMD_CREATE,\n                                    QUEUE, attr, id=name, runas=user)\n            elif mgr_cmd == MGR_CMD_SET:\n                attr = {'partition': partition}\n                self.server.manager(MGR_CMD_SET, QUEUE,\n                                    attr, id=name, runas=user)\n            elif mgr_cmd == MGR_CMD_UNSET:\n                self.server.manager(MGR_CMD_UNSET, QUEUE,\n                                    \"partition\", id=name, runas=user)\n            else:\n                msg = (\"Error: partition_attr function takes only \"\n                       \"MGR_CMD_[CREATE/SET/UNSET] value for mgr_cmd when \"\n                       \"pbs object is queue\")\n                
self.assertTrue(False, msg)\n        elif obj_name == \"NODE\":\n            if name == \"Q1\":\n                name = self.mom.shortname\n            attr = {'partition': partition}\n            if mgr_cmd == MGR_CMD_SET:\n                self.server.manager(MGR_CMD_SET, NODE, attr,\n                                    id=name, runas=user)\n            elif mgr_cmd == MGR_CMD_UNSET:\n                self.server.manager(MGR_CMD_UNSET, NODE,\n                                    \"partition\", id=name, runas=user)\n            else:\n                msg = (\"Error: partition_attr function takes only \"\n                       \"MGR_CMD_SET/MGR_CMD_UNSET value for mgr_cmd when \"\n                       \"pbs object is node\")\n                self.assertTrue(False, msg)\n        else:\n            msg = (\"Error: partition_attr function takes only \"\n                   \"QUEUE/NODE objects value for obj_name\")\n            self.assertTrue(False, msg)\n\n    def test_set_unset_queue_partition(self):\n        \"\"\"\n        Test to set/unset the partition attribute of queue object\n        \"\"\"\n        self.partition_attr(mgr_cmd=MGR_CMD_CREATE, q_type=\"execution\")\n        self.partition_attr(mgr_cmd=MGR_CMD_SET, partition=\"P2\")\n        # resetting the same partition value\n        self.partition_attr(mgr_cmd=MGR_CMD_SET, partition=\"P2\")\n        self.partition_attr(mgr_cmd=MGR_CMD_UNSET)\n\n    def test_set_queue_partition_user_permissions(self):\n        \"\"\"\n        Test to check the user permissions for set/unset the partition\n        attribute of queue\n        \"\"\"\n        self.partition_attr(mgr_cmd=MGR_CMD_CREATE, q_type=\"execution\")\n        msg1 = \"Unauthorized Request\"\n        msg2 = \"checking the qmgr error message\"\n        try:\n            self.partition_attr(mgr_cmd=MGR_CMD_SET, partition=\"P2\")\n        except PbsManagerError as e:\n            self.assertTrue(msg1 in e.msg[0], msg2)\n        try:\n            
self.partition_attr(mgr_cmd=MGR_CMD_UNSET)\n        except PbsManagerError as e:\n            # self.assertEqual(e.rc, 15007)\n            # The above code has to be uncommented when the PTL framework\n            # bug PP-881 gets fixed\n            self.assertTrue(msg1 in e.msg[0], msg2)\n\n    def test_set_partition_to_routing_queue(self):\n        \"\"\"\n        Test that setting the partition attribute on a routing queue\n        is disallowed\n        \"\"\"\n        msg0 = \"Route queues are incompatible with the \"\\\n               \"partition attribute\"\n        msg1 = \"Cannot assign a partition to route queue\"\n        msg2 = \"Qmgr error message does not match\"\n        try:\n            self.partition_attr(\n                mgr_cmd=MGR_CMD_CREATE,\n                q_type=\"route\",\n                enable=\"False\",\n                start=\"False\")\n        except PbsManagerError as e:\n            # self.assertEqual(e.rc, 15217)\n            # The above code has to be uncommented when the PTL framework\n            # bug PP-881 gets fixed\n            self.assertTrue(msg0 in e.msg[0], msg2)\n        self.server.manager(\n            MGR_CMD_CREATE, QUEUE, {\n                'queue_type': 'route'}, id='Q1')\n        try:\n            self.partition_attr(mgr_cmd=MGR_CMD_SET)\n        except PbsManagerError as e:\n            # self.assertEqual(e.rc, 15007)\n            # The above code has to be uncommented when the PTL framework\n            # bug PP-881 gets fixed\n            self.assertTrue(msg1 in e.msg[0], msg2)\n\n    def test_modify_queue_with_partition_to_routing(self):\n        \"\"\"\n        Test that an execution queue cannot be changed to a routing\n        queue when the partition attribute is set\n        \"\"\"\n        self.partition_attr(mgr_cmd=MGR_CMD_CREATE, q_type=\"execution\")\n        msg1 = (\"Route queues are incompatible \"\n                \"with the partition attribute queue_type\")\n        msg2 = \"checking the qmgr error message\"\n        try:\n       
     self.partition_attr(mgr_cmd=MGR_CMD_SET, q_type=\"route\")\n        except PbsManagerError as e:\n            # self.assertEqual(e.rc, 15218)\n            # The above code has to be uncommented when the PTL framework\n            # bug PP-881 gets fixed\n            self.assertTrue(msg1 in e.msg[0], msg2)\n\n    def test_set_partition_without_queue_type(self):\n        \"\"\"\n        Test setting the partition attribute on a queue\n        with no queue_type set\n        \"\"\"\n        self.partition_attr(mgr_cmd=MGR_CMD_CREATE)\n        self.partition_attr(mgr_cmd=MGR_CMD_SET, partition=\"P2\")\n        self.partition_attr(mgr_cmd=MGR_CMD_SET, q_type=\"execution\")\n\n    def test_partition_node_attr(self):\n        \"\"\"\n        Test to set/unset the partition attribute of node object\n        \"\"\"\n        self.partition_attr(obj_name=\"NODE\")\n        self.partition_attr(obj_name=\"NODE\", partition=\"P2\")\n        # resetting the same partition value\n        self.partition_attr(obj_name=\"NODE\", partition=\"P2\")\n        self.partition_attr(mgr_cmd=MGR_CMD_UNSET, obj_name=\"NODE\")\n\n    def test_set_partition_node_attr_user_permissions(self):\n        \"\"\"\n        Test to check the user permissions for set/unset the partition\n        attribute of node\n        \"\"\"\n        self.partition_attr(obj_name=\"NODE\")\n        msg1 = \"Unauthorized Request\"\n        msg2 = \"didn't receive expected error message\"\n        try:\n            self.partition_attr(\n                obj_name=\"NODE\",\n                partition=\"P2\",\n                user=TEST_USER)\n        except PbsManagerError as e:\n            self.assertTrue(msg1 in e.msg[0], msg2)\n        try:\n            self.partition_attr(mgr_cmd=MGR_CMD_UNSET,\n                                obj_name=\"NODE\", user=TEST_USER)\n        except PbsManagerError as e:\n            # self.assertEqual(e.rc, 15007)\n            # The above code has to be uncommented when the PTL 
framework\n            # bug PP-881 gets fixed\n            self.assertTrue(msg1 in e.msg[0], msg2)\n\n    def test_partition_association_with_node_and_queue(self):\n        \"\"\"\n        Test setting the partition attribute and the association\n        between queue and node\n        \"\"\"\n        self.partition_attr(mgr_cmd=MGR_CMD_CREATE, q_type=\"execution\")\n        self.partition_attr(obj_name=\"NODE\")\n        self.partition_attr(\n            mgr_cmd=MGR_CMD_CREATE,\n            q_type=\"execution\",\n            name=\"Q2\",\n            partition=\"P2\")\n        self.partition_attr(mgr_cmd=MGR_CMD_UNSET, obj_name=\"NODE\")\n        self.server.manager(MGR_CMD_SET, NODE, {\n                            'queue': \"Q2\"}, id=self.mom.shortname)\n        self.partition_attr(obj_name=\"NODE\", partition=\"P2\")\n\n    def test_mismatch_of_partition_on_node_and_queue(self):\n        \"\"\"\n        Test that setting the partition attribute is disallowed\n        if partition ids do not match on queue and node\n        \"\"\"\n        self.test_partition_association_with_node_and_queue()\n        msg1 = \"Invalid partition in queue\"\n        msg2 = \"didn't receive expected error message\"\n        try:\n            self.partition_attr(mgr_cmd=MGR_CMD_SET, name=\"Q2\")\n        except PbsManagerError as e:\n            # self.assertEqual(e.rc, 15221)\n            # The above code has to be uncommented when the PTL framework\n            # bug PP-881 gets fixed\n            self.assertTrue(msg1 in e.msg[0], msg2)\n        msg1 = \"Partition P2 is not part of queue for node\"\n        try:\n            self.server.manager(MGR_CMD_SET,\n                                NODE, {'queue': \"Q1\"},\n                                id=self.mom.shortname)\n        except PbsManagerError as e:\n            # self.assertEqual(e.rc, 15220)\n            # The above code has to be uncommented when the PTL framework\n            # bug PP-881 gets fixed\n            
self.assertTrue(msg1 in e.msg[0], msg2)\n        msg1 = \"Queue Q2 is not part of partition for node\"\n        try:\n            self.partition_attr(obj_name=\"NODE\")\n        except PbsManagerError as e:\n            # self.assertEqual(e.rc, 15219)\n            # The above code has to be uncommented when the PTL framework\n            # bug PP-881 gets fixed\n            self.assertTrue(msg1 in e.msg[0], msg2)\n"
  },
  {
    "path": "test/tests/interfaces/pbs_preempt_params.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.interfaces import *\n\n\nclass TestPreemptParamsQmgr(TestInterfaces):\n    \"\"\"\n    This test suite tests setting/unsetting of preemption parameters\n    that were moved from sched_config to the scheduler object.\n    \"\"\"\n    UNAUTH = 1\n\n    def func_set_fail(self, a, msg, user=ROOT_USER, error_type=0):\n        \"\"\"\n        function to confirm that setting the value fails.\n        \"\"\"\n        error = \"\"\n        error_code = \"\"\n        if error_type == self.UNAUTH:\n            error = \"Unauthorized Request\"\n            error_code = \"15007\"\n        else:\n            error = \"Illegal attribute or resource value\"\n            error_code = \"15014\"\n\n        try:\n            self.server.manager(MGR_CMD_SET, SCHED, a, runas=user)\n        except PbsManagerError as e:\n            self.assertTrue(error in e.msg[0])\n            self.assertTrue(error_code in e.msg[1])\n        else:\n            self.fail(msg)\n\n    def common_tests(self, param, msg):\n        \"\"\"\n        function that executes common steps in the actual tests.\n        \"\"\"\n        a = {param: 'abc'}\n        self.func_set_fail(a, msg)\n\n        a = {param: '123abc'}\n        self.func_set_fail(a, msg)\n\n        a = {param: 'abc123'}\n        self.func_set_fail(a, msg)\n\n    def test_set_unset_preempt_queue_prio(self):\n        
\"\"\"\n        This test case sets preempt_queue_prio parameter to valid/invalid\n        values and checks if the server allows/disallows the operation.\n        \"\"\"\n        msg = \"preempt_queue_prio set to invalid value\"\n        param = 'preempt_queue_prio'\n\n        self.common_tests(param, msg)\n\n        a = {param: 120}\n        self.func_set_fail(a, msg, TEST_USER, self.UNAUTH)\n\n        self.server.manager(MGR_CMD_SET, SCHED, a, runas=ROOT_USER)\n\n        self.server.manager(MGR_CMD_UNSET, SCHED, 'preempt_queue_prio',\n                            runas=ROOT_USER)\n\n        a = {param: 150}\n        self.server.manager(MGR_CMD_LIST, SCHED, a, runas=ROOT_USER)\n\n    def test_set_unset_preempt_prio(self):\n        \"\"\"\n        This test case sets preempt_prio parameter to valid/invalid\n        values and checks if the server allows/disallows the operation.\n        \"\"\"\n        msg = \"preempt_prio set to invalid value\"\n        param = 'preempt_prio'\n\n        self.common_tests(param, msg)\n\n        p = '\"express_queue, nrmal_jobs, server_softlimits, queue_softlimits\"'\n        a = {param: p}\n        self.func_set_fail(a, msg)\n\n        p = '\"express_queue, normal_jobs, server_softlimits, queue_softlimits\"'\n        a = {param: p}\n        self.func_set_fail(a, msg, TEST_USER, self.UNAUTH)\n\n        self.server.manager(MGR_CMD_SET, SCHED, a, runas=ROOT_USER)\n\n        p = '\"express_queue, normal_jobs, express_queue+fairshare, fairshare\"'\n        a = {param: p}\n        self.server.manager(MGR_CMD_SET, SCHED, a,\n                            runas=ROOT_USER)\n\n        self.server.manager(MGR_CMD_LIST, SCHED, a, runas=ROOT_USER)\n\n        self.server.manager(MGR_CMD_UNSET, SCHED, param,\n                            runas=ROOT_USER)\n\n        p = 'express_queue, normal_jobs'\n        a = {param: p}\n        self.server.manager(MGR_CMD_LIST, SCHED, a, runas=ROOT_USER)\n\n    def test_set_unset_preempt_order(self):\n        
\"\"\"\n        This test case sets preempt_order parameter to valid/invalid\n        values and checks if the server allows/disallows the operation.\n        \"\"\"\n        msg = \"preempt_order set to invalid value\"\n        param = 'preempt_order'\n\n        self.common_tests(param, msg)\n\n        a = {param: '\"SCR 80 PQR\"'}\n        self.func_set_fail(a, msg)\n\n        a = {param: '\"PQR\"'}\n        self.func_set_fail(a, msg)\n\n        a = {param: '\"SCR SC\"'}\n        self.func_set_fail(a, msg)\n\n        a = {param: '\"80 SC\"'}\n        self.func_set_fail(a, msg)\n\n        a = {param: '\"SCR 80 70\"'}\n        self.func_set_fail(a, msg)\n\n        a = {param: '\"SCR 80 SC 50 S\"'}\n        self.server.manager(MGR_CMD_SET, SCHED, a, runas=ROOT_USER)\n\n        a = {param: 'SCR'}\n        self.func_set_fail(a, msg, TEST_USER, self.UNAUTH)\n\n        self.server.manager(MGR_CMD_SET, SCHED, a, runas=ROOT_USER)\n\n        self.server.manager(MGR_CMD_UNSET, SCHED, param, runas=ROOT_USER)\n\n        self.server.manager(MGR_CMD_LIST, SCHED, a, runas=ROOT_USER)\n\n    def test_set_unset_preempt_sort(self):\n        \"\"\"\n        This test case sets preempt_sort parameter to valid/invalid\n        values and checks if the server allows/disallows the operation.\n        \"\"\"\n        msg = \"preempt_sort set to invalid value\"\n        param = 'preempt_sort'\n\n        self.common_tests(param, msg)\n\n        a = {param: '123'}\n        self.func_set_fail(a, msg)\n\n        a = {param: 'min_time_sincestart'}\n        self.func_set_fail(a, msg)\n\n        a = {param: 'min_time_since_start'}\n        self.func_set_fail(a, msg, TEST_USER, self.UNAUTH)\n\n        self.server.manager(MGR_CMD_SET, SCHED, a, runas=ROOT_USER)\n\n        self.server.manager(MGR_CMD_UNSET, SCHED, param, runas=ROOT_USER)\n        self.server.expect(SCHED, a, runas=ROOT_USER)\n        self.server.manager(MGR_CMD_LIST, SCHED, a, runas=ROOT_USER)\n"
  },
  {
    "path": "test/tests/interfaces/pbs_sched_interface_test.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.interfaces import *\n\n\nclass TestSchedulerInterface(TestInterfaces):\n\n    \"\"\"\n    Test suite to test different scheduler interfaces\n    \"\"\"\n\n    def setUp(self):\n        TestInterfaces.setUp(self)\n        a = {'partition': 'P1',\n             'sched_host': self.server.hostname}\n        self.server.manager(MGR_CMD_CREATE,\n                            SCHED, a,\n                            id=\"TestCommonSched\")\n        self.scheds['TestCommonSched'].create_scheduler()\n        self.scheds['TestCommonSched'].start()\n\n    def test_duplicate_scheduler_name(self):\n        \"\"\"\n        Check for the scheduler object name.\n        \"\"\"\n        try:\n            self.server.manager(MGR_CMD_CREATE,\n                                SCHED,\n                                {'sched_host': self.server.hostname},\n                                id=\"TestCommonSched\")\n        except PbsManagerError as e:\n            if self.server.get_op_mode() == PTL_CLI:\n                self.assertTrue(\n                    'qmgr: Error (15211) returned from server' in e.msg[1])\n\n    def test_permission_on_scheduler(self):\n        \"\"\"\n        Check for the permission to create/delete/modify scheduler object.\n        \"\"\"\n        # Check for create permission\n        try:\n            self.server.manager(MGR_CMD_CREATE,\n          
                      SCHED,\n                                {'sched_host': self.server.hostname},\n                                id=\"testCreateSched\",\n                                runas=OPER_USER)\n        except PbsManagerError as e:\n            if self.server.get_op_mode() == PTL_CLI:\n                self.assertTrue(\n                    'qmgr: Error (15007) returned from server' in e.msg[1])\n\n        self.server.manager(MGR_CMD_CREATE,\n                            SCHED,\n                            {'sched_host': self.server.hostname},\n                            id=\"testCreateSched\",\n                            runas=ROOT_USER)\n\n        # Check for delete permission\n        self.server.manager(MGR_CMD_CREATE,\n                            SCHED,\n                            {'sched_host': self.server.hostname},\n                            id=\"testDeleteSched\")\n        try:\n            self.server.manager(MGR_CMD_DELETE,\n                                SCHED,\n                                id=\"testDeleteSched\",\n                                runas=OPER_USER)\n        except PbsManagerError as e:\n            if self.server.get_op_mode() == PTL_CLI:\n                self.assertTrue(\n                    'qmgr: Error (15007) returned from server' in e.msg[1])\n\n        self.server.manager(MGR_CMD_DELETE,\n                            SCHED,\n                            id=\"testDeleteSched\",\n                            runas=ROOT_USER)\n\n        # Check for attribute set permission\n        try:\n            self.server.manager(MGR_CMD_SET,\n                                SCHED,\n                                {'sched_cycle_length': 12000},\n                                id=\"TestCommonSched\",\n                                runas=OPER_USER)\n        except PbsManagerError as e:\n            if self.server.get_op_mode() == PTL_CLI:\n                self.assertTrue(\n                    'qmgr: Error (15007) returned from 
server' in e.msg[1])\n\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'sched_cycle_length': 12000},\n                            id=\"TestCommonSched\", runas=ROOT_USER)\n\n    def test_delete_default_sched(self):\n        \"\"\"\n        Delete default scheduler.\n        \"\"\"\n        try:\n            self.server.manager(MGR_CMD_DELETE,\n                                SCHED,\n                                id=\"default\")\n        except PbsManagerError as e:\n            if self.server.get_op_mode() == PTL_CLI:\n                self.assertTrue(\n                    'qmgr: Error (15214) returned from server' in e.msg[1])\n\n    def test_set_and_unset_sched_attributes(self):\n        \"\"\"\n        Set and unset an attribute of a scheduler object.\n        \"\"\"\n        # Set an attribute of a scheduler object.\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'sched_cycle_length': 1234}, id=\"TestCommonSched\")\n\n        # Unset an attribute of a scheduler object.\n        self.server.manager(MGR_CMD_UNSET,\n                            SCHED,\n                            'sched_cycle_length',\n                            id=\"TestCommonSched\")\n        a = {'sched_cycle_length': '00:20:00'}\n        self.server.expect(SCHED, a, id='TestCommonSched', max_attempts=10)\n\n    def test_sched_default_attrs(self):\n        \"\"\"\n        Test all sched attributes are set by default on default scheduler\n        \"\"\"\n        sched_priv = os.path.join(\n            self.server.pbs_conf['PBS_HOME'], 'sched_priv')\n        sched_logs = os.path.join(\n            self.server.pbs_conf['PBS_HOME'], 'sched_logs')\n        a = {'sched_host': self.server.hostname,\n             'sched_priv': sched_priv,\n             'sched_log': sched_logs,\n             'scheduling': 'True',\n             'scheduler_iteration': 600,\n             'state': 'idle',\n             'sched_cycle_length': '00:20:00'}\n        
self.server.expect(SCHED, a, id='default',\n                           attrop=PTL_AND, max_attempts=10)\n\n    def test_scheduling_attribute(self):\n        \"\"\"\n        Test scheduling attribute on newly created scheduler is false\n        unless made true\n        \"\"\"\n        self.server.expect(SCHED, {'scheduling': 'False'},\n                           id='TestCommonSched', max_attempts=10)\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'True'},\n                            runas=ROOT_USER, id='TestCommonSched')\n        self.server.expect(SCHED, {'scheduling': 'True'},\n                           id='TestCommonSched', max_attempts=10)\n\n    def test_set_sched_priv_log_duplicate_value(self):\n        \"\"\"\n        Test setting of sched_priv and sched_log to a\n        value assigned to another scheduler\n        \"\"\"\n        err_msg = \"Another scheduler also has same \"\n        err_msg += \"value for its sched_priv directory\"\n        try:\n            self.server.manager(MGR_CMD_SET, SCHED,\n                                {'sched_priv': '/var/spool/pbs/sched_priv'},\n                                runas=ROOT_USER, id='TestCommonSched')\n        except PbsManagerError as e:\n            self.assertTrue(err_msg in e.msg[0],\n                            \"Error message is not expected\")\n        err_msg = \"Another scheduler also has same \"\n        err_msg += \"value for its sched_log directory\"\n        try:\n            self.server.manager(MGR_CMD_SET, SCHED,\n                                {'sched_log': '/var/spool/pbs/sched_logs'},\n                                runas=ROOT_USER, id='TestCommonSched')\n        except PbsManagerError as e:\n            self.assertTrue(err_msg in e.msg[0],\n                            \"Error message is not expected\")\n\n    def test_set_default_sched_not_permitted(self):\n        \"\"\"\n        Test setting partition on default scheduler\n        \"\"\"\n   
     err_msg = \"Operation is not permitted on default scheduler\"\n        try:\n            self.server.manager(MGR_CMD_SET, SCHED,\n                                {'partition': 'P1'},\n                                runas=ROOT_USER)\n        except PbsManagerError as e:\n            self.assertTrue(err_msg in e.msg[0],\n                            \"Error message is not expected\")\n        try:\n            self.server.manager(MGR_CMD_SET, SCHED,\n                                {'sched_priv': '/var/spool/somedir'},\n                                runas=ROOT_USER)\n        except PbsManagerError as e:\n            self.assertTrue(err_msg in e.msg[0],\n                            \"Error message is not expected\")\n        try:\n            self.server.manager(MGR_CMD_SET, SCHED,\n                                {'sched_log': '/var/spool/somedir'},\n                                runas=ROOT_USER)\n        except PbsManagerError as e:\n            self.assertTrue(err_msg in e.msg[0],\n                            \"Error message is not expected\")\n\n    def test_sched_name_too_long(self):\n        \"\"\"\n        Test creating a scheduler with name longer than 15 chars\n        \"\"\"\n        try:\n            self.server.manager(MGR_CMD_CREATE, SCHED,\n                                runas=ROOT_USER, id=\"TestLongScheduler\")\n        except PbsManagerError as e:\n            self.assertTrue(\"Scheduler name is too long\" in e.msg[0],\n                            \"Error message is not expected\")\n\n    def test_set_default_sched_attrs(self):\n        \"\"\"\n        Test setting scheduling and scheduler_iteration on default scheduler\n        and it updates server attributes and vice versa\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'False'},\n                            runas=ROOT_USER)\n        self.server.expect(SERVER, {'scheduling': 'False'})\n        self.server.manager(MGR_CMD_SET, 
SERVER,\n                            {'scheduling': 'True'},\n                            runas=ROOT_USER)\n        self.server.expect(SCHED, {'scheduling': 'True'},\n                           id='default', max_attempts=10)\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduler_iteration': 300},\n                            runas=ROOT_USER)\n        self.server.expect(SERVER, {'scheduler_iteration': 300})\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {'scheduler_iteration': 500},\n                            runas=ROOT_USER)\n        self.server.expect(SCHED, {'scheduler_iteration': 500},\n                           id='default', max_attempts=10)\n\n    def test_scheduling_iteration(self):\n        \"\"\"\n        Test the scheduler_iteration attribute after it is unset. It should\n        revert to its default value of 600, so the server does not kick off\n        infinite scheduling cycles. Also make sure that all other scheduler\n        attributes are set to their correct default values after this change.\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {ATTR_schedit: 500},\n                            runas=ROOT_USER, id='TestCommonSched')\n        self.server.expect(SCHED, {ATTR_schedit: '500'},\n                           id='TestCommonSched', max_attempts=5)\n\n        self.server.manager(MGR_CMD_UNSET, SCHED, ATTR_schedit,\n                            id='TestCommonSched')\n\n        sched_priv = os.path.join(\n            self.server.pbs_conf['PBS_HOME'], 'sched_priv_TestCommonSched')\n        sched_logs = os.path.join(\n            self.server.pbs_conf['PBS_HOME'], 'sched_logs_TestCommonSched')\n        a = {'sched_host': self.server.hostname,\n             'sched_priv': sched_priv,\n             'sched_log': sched_logs,\n             'scheduling': 'False',\n             'scheduler_iteration': 600,\n             
'sched_cycle_length': '00:20:00'}\n        self.server.expect(SCHED, a, id='TestCommonSched', max_attempts=10)\n"
  },
  {
    "path": "test/tests/pbs_smoketest.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom ptl.utils.pbs_testsuite import *\n\n\n@tags('smoke')\nclass SmokeTest(PBSTestSuite):\n\n    \"\"\"\n    This test suite contains a few smoke tests of PBS\n\n    \"\"\"\n\n    def test_submit_job(self):\n        \"\"\"\n        Test to submit a job\n        \"\"\"\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n    def test_submit_job_array(self):\n        \"\"\"\n        Test to submit a job array\n        \"\"\"\n        a = {'resources_available.ncpus': 8}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        j = Job(TEST_USER)\n        j.set_attributes({ATTR_J: '1-3:1'})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'B'}, jid)\n        self.server.expect(JOB, {'job_state=R': 3}, count=True,\n                           id=jid, extend='t')\n\n    def test_advance_reservation(self):\n        \"\"\"\n        Test to submit an advanced reservation and submit jobs to that\n        reservation. 
Check if the reservation gets confirmed and the jobs\n        inside the reservation start running when the reservation runs.\n        \"\"\"\n        a = {'resources_available.ncpus': 4}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n        r = Reservation(TEST_USER)\n        now = int(time.time())\n        r_start_time = now + 30\n        a = {'Resource_List.select': '1:ncpus=4',\n             'reserve_start': r_start_time,\n             'reserve_end': now + 110}\n        r.set_attributes(a)\n        rid = self.server.submit(r)\n        rid_q = rid.split('.')[0]\n        a = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, a, id=rid)\n\n        # submit a normal job and an array job to the reservation\n        a = {'Resource_List.select': '1:ncpus=1',\n             ATTR_q: rid_q}\n        j1 = Job(TEST_USER, attrs=a)\n        jid1 = self.server.submit(j1)\n\n        a = {'Resource_List.select': '1:ncpus=1',\n             ATTR_q: rid_q, ATTR_J: '1-2'}\n        j2 = Job(TEST_USER, attrs=a)\n        jid2 = self.server.submit(j2)\n\n        offset = r_start_time - int(time.time())\n        a = {'reserve_state': (MATCH_RE, \"RESV_RUNNING|5\")}\n        self.server.expect(RESV, a, id=rid, interval=1,\n                           offset=offset)\n        self.server.expect(JOB, {'job_state': 'R'}, jid1)\n        self.server.expect(JOB, {'job_state': 'B'}, jid2)\n\n    def test_standing_reservation(self):\n        \"\"\"\n        Test to submit a standing reservation\n        \"\"\"\n        # PBS_TZID environment variable must be set, there is no way to set\n        # it through the API call, use CLI instead for this test\n\n        _m = self.server.get_op_mode()\n        if _m != PTL_CLI:\n            self.server.set_op_mode(PTL_CLI)\n        if 'PBS_TZID' in self.conf:\n            tzone = self.conf['PBS_TZID']\n        elif 'PBS_TZID' in os.environ:\n            tzone = os.environ['PBS_TZID']\n        
else:\n            self.logger.info('Missing timezone, using America/Los_Angeles')\n            tzone = 'America/Los_Angeles'\n        a = {'Resource_List.select': '1:ncpus=1',\n             ATTR_resv_rrule: 'FREQ=WEEKLY;COUNT=3',\n             ATTR_resv_timezone: tzone,\n             ATTR_resv_standing: '1',\n             'reserve_start': time.time() + 20,\n             'reserve_end': time.time() + 30, }\n        r = Reservation(TEST_USER, a)\n        rid = self.server.submit(r)\n        a = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, a, id=rid)\n        if _m == PTL_API:\n            self.server.set_op_mode(PTL_API)\n\n    def test_degraded_advance_reservation(self):\n        \"\"\"\n        Make reservations more fault tolerant\n        Test for an advance reservation\n        \"\"\"\n\n        now = int(time.time())\n        a = {'reserve_retry_init': 5}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {'resources_available.ncpus': 4}\n        self.mom.create_vnodes(a, num=2)\n        a = {'Resource_List.select': '1:ncpus=4',\n             'reserve_start': now + 3600,\n             'reserve_end': now + 7200}\n        r = Reservation(TEST_USER, attrs=a)\n        rid = self.server.submit(r)\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, a, id=rid)\n        self.server.status(RESV, 'resv_nodes', id=rid)\n        resv_node = self.server.reservations[rid].get_vnodes()[0]\n        a = {'state': 'offline'}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=resv_node)\n        a = {'reserve_state': (MATCH_RE, 'RESV_DEGRADED|10')}\n        self.server.expect(RESV, a, id=rid)\n        a = {'resources_available.ncpus': (GT, 0)}\n        free_nodes = self.server.filter(NODE, a)\n        nodes = list(free_nodes.values())[0]\n        other_node = [nodes[0], nodes[1]][resv_node == nodes[0]]\n        a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2'),\n             
'resv_nodes': (MATCH_RE, re.escape(other_node))}\n        self.server.expect(RESV, a, id=rid, offset=3, attrop=PTL_AND)\n\n    def test_select(self):\n        \"\"\"\n        Test to qselect\n        \"\"\"\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, jid)\n        jobs = self.server.select()\n        self.assertNotEqual(jobs, None)\n\n    def test_alter(self):\n        \"\"\"\n        Test to alter job\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n        self.server.alterjob(jid, {'comment': 'job comment altered'})\n        self.server.expect(JOB, {'comment': 'job comment altered'}, id=jid)\n\n    def test_sigjob(self):\n        \"\"\"\n        Test to signal job\n        \"\"\"\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42},\n                           attrop=PTL_AND, id=jid)\n        self.server.sigjob(jid, 'suspend')\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid)\n        self.server.sigjob(jid, 'resume')\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n    def test_backfilling(self):\n        \"\"\"\n        Test for backfilling\n        \"\"\"\n        a = {'resources_available.ncpus': 2}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        self.scheduler.set_sched_config({'strict_ordering': 'True'})\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.walltime': 3600}\n        j = Job(TEST_USER, attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        a = {'Resource_List.select': '1:ncpus=2',\n             'Resource_List.walltime': 3600}\n        j = Job(TEST_USER, attrs=a)\n        jid1 = 
self.server.submit(j)\n        self.server.expect(JOB, 'comment', op=SET, id=jid1)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid1)\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.walltime': 1800}\n        j = Job(TEST_USER, attrs=a)\n        jid2 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n\n    def test_hold_release(self):\n        \"\"\"\n        Test to hold and release a job\n        \"\"\"\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R', 'substate': '42'}\n        self.server.expect(JOB, a, jid, attrop=PTL_AND)\n        self.server.holdjob(jid, USER_HOLD)\n        self.server.expect(JOB, {'Hold_Types': 'u'}, jid)\n        self.server.rlsjob(jid, USER_HOLD)\n        self.server.expect(JOB, {'Hold_Types': 'n'}, jid)\n\n    def test_create_vnode(self):\n        \"\"\"\n        Test to create vnodes\n        \"\"\"\n        self.server.expect(SERVER, {'pbs_version': '8'}, op=GT)\n        self.server.manager(MGR_CMD_DELETE, NODE, None, \"\")\n        a = {'resources_available.ncpus': 20, 'sharing': 'force_excl'}\n        momstr = self.mom.create_vnode_def('testnode', a, 10)\n        self.mom.insert_vnode_def(momstr)\n        self.server.manager(MGR_CMD_CREATE, NODE, None, self.mom.hostname)\n        a = {'resources_available.ncpus=20': 10}\n        self.server.expect(VNODE, a, count=True, interval=5)\n\n    def test_create_execution_queue(self):\n        \"\"\"\n        Test to create execution queue\n        \"\"\"\n        qname = 'testq'\n        try:\n            self.server.manager(MGR_CMD_DELETE, QUEUE, None, qname)\n        except PbsManagerError:\n            pass\n        a = {'queue_type': 'Execution', 'enabled': 'True', 'started': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, qname)\n        self.server.manager(MGR_CMD_DELETE, QUEUE, id=qname)\n\n    def test_create_routing_queue(self):\n        
\"\"\"\n        Test to create routing queue\n        \"\"\"\n        qname = 'routeq'\n        try:\n            self.server.manager(MGR_CMD_DELETE, QUEUE, None, qname)\n        except PbsManagerError:\n            pass\n        a = {'queue_type': 'Route', 'started': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, qname)\n        self.server.manager(MGR_CMD_DELETE, QUEUE, id=qname)\n\n    def test_fgc_limits(self):\n        \"\"\"\n        Test for limits\n        \"\"\"\n        a = {'resources_available.ncpus': 4}\n        self.mom.create_vnodes(a, 2)\n        a = {'max_run': '[u:' + str(TEST_USER) + '=2]'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.server.expect(SERVER, a)\n        j1 = Job(TEST_USER)\n        j2 = Job(TEST_USER)\n        j3 = Job(TEST_USER)\n        j1id = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R'}, j1id)\n        j2id = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=j2id)\n        j3id = self.server.submit(j3)\n        self.server.expect(JOB, 'comment', op=SET, id=j3id)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=j3id)\n\n    def test_limits(self):\n        \"\"\"\n        Test for limits\n        \"\"\"\n        a = {'resources_available.ncpus': 4, 'resources_available.mem': '2gb'}\n        self.mom.create_vnodes(a, 2)\n        a = {'max_run_res.ncpus': '[u:' + str(TEST_USER) + '=2]'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        for _ in range(3):\n            j = Job(TEST_USER)\n            self.server.submit(j)\n        a = {'server_state': 'Scheduling'}\n        self.server.expect(SERVER, a, op=NE)\n        a = {'job_state=R': 2, 'euser=' + str(TEST_USER): 2}\n        self.server.expect(JOB, a, attrop=PTL_AND)\n\n        # Now set limit on mem as well and submit 2 jobs, each requesting\n        # a different limit resource and check both of them run\n        self.server.cleanup_jobs()\n        a = 
{'max_run_res.mem': '[u:' + str(TEST_USER) + '=1gb]'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {'Resource_List.ncpus': 1}\n        j = Job(TEST_USER, a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        a = {'Resource_List.mem': '1gb'}\n        j = Job(TEST_USER, a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n    @runOnlyOnLinux\n    def test_finished_jobs(self):\n        \"\"\"\n        Test for finished jobs and resource used for jobs.\n        \"\"\"\n        a = {'resources_available.ncpus': '2'}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        a = {'job_history_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {'Resource_List.ncpus': 2}\n        j = Job(TEST_USER, a)\n        j.set_sleep_time(15)\n        j.create_eatcpu_job(15, self.mom.shortname)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'F'}, extend='x', offset=15,\n                           interval=1, id=jid)\n        jobs = self.server.status(JOB, id=jid, extend='x')\n        exp_eq_val = {ATTR_used + '.ncpus': '2',\n                      ATTR_exit_status: '0'}\n        for key in exp_eq_val:\n            self.assertEqual(exp_eq_val[key], jobs[0][key])\n        exp_noteq_val = {ATTR_used + '.walltime': '00:00:00',\n                         ATTR_used + '.cput': '00:00:00',\n                         ATTR_used + '.mem': '0kb',\n                         ATTR_used + '.cpupercent': '0'}\n        for key in exp_noteq_val:\n            self.assertNotEqual(exp_noteq_val[key], jobs[0][key])\n\n    def test_project_based_limits(self):\n        \"\"\"\n        Test for project based limits\n        \"\"\"\n        proj = 'testproject'\n        a = {'max_run': '[p:' + proj + '=1]'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        for _ in range(5):\n            j 
= Job(TEST_USER, attrs={ATTR_project: proj})\n            self.server.submit(j)\n        self.server.expect(SERVER, {'server_state': 'Scheduling'}, op=NE)\n        self.server.expect(JOB, {'job_state=R': 1})\n\n    def test_job_scheduling_order(self):\n        \"\"\"\n        Test for job scheduling order\n        \"\"\"\n        a = {'backfill_depth': 5}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.scheduler.set_sched_config({'strict_ordering': 'True'})\n        a = {'resources_available.ncpus': '1'}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        if self.mom.is_cpuset_mom():\n            a = {'state=free': (GE, 1)}\n        else:\n            a = {'state=free': 1}\n        self.server.expect(VNODE, a, attrop=PTL_AND)\n        a = {'scheduling': 'False'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        for _ in range(6):\n            j = Job(TEST_USER, attrs={'Resource_List.select': '1:ncpus=1',\n                                      'Resource_List.walltime': 3600})\n            self.server.submit(j)\n        a = {'scheduling': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {'server_state': 'Scheduling'}\n        self.server.expect(SERVER, a, op=NE)\n        self.server.expect(JOB, {'estimated.start_time': 5},\n                           count=True, op=SET)\n\n    def test_preemption(self):\n        \"\"\"\n        Test for preemption\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events': 2047})\n        a = {'resources_available.ncpus': '1'}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        self.server.status(QUEUE)\n        if 'expressq' in self.server.queues.keys():\n            self.server.manager(MGR_CMD_DELETE, QUEUE, None, 'expressq')\n        a = {'queue_type': 'execution'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, 'expressq')\n        a = {'enabled': 'True', 'started': 'True', 'priority': 150}\n  
      self.server.manager(MGR_CMD_SET, QUEUE, a, 'expressq')\n        j = Job(TEST_USER, attrs={'Resource_List.select': '1:ncpus=1'})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        j2 = Job(TEST_USER,\n                 attrs={'queue': 'expressq',\n                        'Resource_List.select': '1:ncpus=1'})\n        j2id = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=j2id)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid)\n\n    def test_preemption_qrun(self):\n        \"\"\"\n        Test that a job is preempted when a high priority job is run via qrun\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, NODE,\n                            {'resources_available.ncpus': 1},\n                            id=self.mom.shortname)\n\n        j = Job(TEST_USER)\n        jid1 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n\n        j2 = Job(TEST_USER)\n        jid2 = self.server.submit(j2)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid2)\n\n        self.server.runjob(jid2)\n        self.server.expect(JOB, {'job_state': 'S'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n\n        self.scheduler.log_match(jid1 + \";Job preempted by suspension\")\n\n    def test_fairshare(self):\n        \"\"\"\n        Test for fairshare\n        \"\"\"\n        a = {'fair_share': 'true ALL',\n             'fairshare_usage_res': 'ncpus*walltime',\n             'unknown_shares': 10}\n        self.scheduler.set_sched_config(a)\n        a = {'resources_available.ncpus': 4}\n        self.mom.create_vnodes(a, 4)\n        a = {'Resource_List.select': '1:ncpus=4'}\n        for _ in range(10):\n            j = Job(TEST_USER1, a)\n            self.server.submit(j)\n        a = {'job_state=R': 4}\n        self.server.expect(JOB, a)\n     
   self.logger.info('testinfo: waiting for walltime accumulation')\n        running_jobs = self.server.filter(JOB, {'job_state': 'R'})\n        if running_jobs.values():\n            for _j in list(running_jobs.values())[0]:\n                a = {'resources_used.walltime': (NE, '00:00:00')}\n                self.server.expect(JOB, a, id=_j, interval=1, max_attempts=30)\n        j = Job(TEST_USER2)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid, offset=5)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        a = {'server_state': 'Scheduling'}\n        self.server.expect(SERVER, a, op=NE)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        cycle = self.scheduler.cycles(start=self.server.ctime, lastN=10)\n        if len(cycle) > 0:\n            i = len(cycle) - 1\n            while len(cycle[i].political_order) == 0:\n                i -= 1\n            cycle = cycle[i]\n            firstconsidered = cycle.political_order[0]\n            lastsubmitted = jid.split('.')[0]\n            msg = 'testinfo: first job considered [' + str(firstconsidered) + \\\n                  '] == last submitted [' + str(lastsubmitted) + ']'\n            self.logger.info(msg)\n            self.assertEqual(firstconsidered, lastsubmitted)\n\n    def test_server_hook(self):\n        \"\"\"\n        Create a hook, import a hook content that rejects all jobs, verify\n        that a job is rejected by the hook.\n        \"\"\"\n        hook_name = \"testhook\"\n        hook_body = \"import pbs\\npbs.event().reject('my custom message')\\n\"\n        a = {'event': 'queuejob', 'enabled': 'True'}\n        self.server.create_import_hook(hook_name, a, hook_body)\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 2047})\n        j = Job(TEST_USER)\n        now = time.time()\n        try:\n            self.server.submit(j)\n        except PbsSubmitError:\n            pass\n        
self.server.log_match(\"my custom message\", starttime=now)\n\n    def test_mom_hook(self):\n        \"\"\"\n        Create a hook, import hook content that rejects all jobs, and\n        verify that a job is rejected by the hook.\n        \"\"\"\n        hook_name = \"momhook\"\n        hook_body = \"import pbs\\npbs.event().reject('my custom message')\\n\"\n        a = {'event': 'execjob_begin', 'enabled': 'True'}\n        self.server.create_import_hook(hook_name, a, hook_body)\n        # Hook content is copied asynchronously; wait for the copy to occur\n        self.server.log_match(\".*successfully sent hook file.*\" +\n                              hook_name + \".PY\" + \".*\", regexp=True,\n                              max_attempts=100, interval=5)\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.mom.log_match(\"my custom message\", starttime=self.server.ctime,\n                           interval=1)\n\n    def test_shrink_to_fit(self):\n        \"\"\"\n        Smoke test shrink to fit by setting a dedicated time to start in an\n        hour and submitting a job that can run for as little as 58 min and\n        as long as 4 hours. 
Expect the job's walltime to be greater than or\n        equal to the minimum set.\n        \"\"\"\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        now = time.time()\n        self.scheduler.add_dedicated_time(start=now + 3600, end=now + 7200)\n        j = Job(TEST_USER)\n        a = {'Resource_List.max_walltime': '04:00:00',\n             'Resource_List.min_walltime': '00:58:00'}\n        j.set_attributes(a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        attr = {'Resource_List.walltime':\n                (GE, a['Resource_List.min_walltime'])}\n        self.server.expect(JOB, attr, id=jid)\n\n    def test_submit_job_with_script(self):\n        \"\"\"\n        Test to submit a job with a job script\n        \"\"\"\n        sleep_cmd = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                 'bin', 'pbs_sleep')\n        script_body = sleep_cmd + ' 120'\n        j = Job(TEST_USER, attrs={ATTR_N: 'test'})\n        j.create_script(script_body, hostname=self.server.client)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.delete(id=jid, extend='force', wait=True)\n        self.logger.info(\"Testing script with extension\")\n        j = Job(TEST_USER)\n\n        fn = self.du.create_temp_file(hostname=self.server.client,\n                                      suffix=\".scr\",\n                                      body=script_body,\n                                      asuser=str(TEST_USER))\n        jid = self.server.submit(j, script=fn)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.logger.info('Job submitted successfully: ' + jid)\n\n    def test_formula_match(self):\n        \"\"\"\n        Test for job sort formula\n        \"\"\"\n        a = {'resources_available.ncpus': 8}\n        self.server.manager(MGR_CMD_SET, 
NODE, a, self.mom.shortname)\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events': 2047})\n        a = {'job_sort_formula': 'ncpus'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        # purposely submitting a job that is highly unlikely to run so\n        # it stays Q'd\n        j = Job(TEST_USER, attrs={'Resource_List.select': '1:ncpus=128'})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n        a = {'scheduling': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        _f1 = self.scheduler.job_formula(jid)\n        _f2 = self.server.evaluate_formula(jid, full=False)\n        self.assertEqual(_f1, _f2)\n        self.logger.info(str(_f1) + \" = \" + str(_f2) + \" ... OK\")\n\n    @skipOnShasta\n    def test_staging(self):\n        \"\"\"\n        Test for file staging\n        \"\"\"\n        execution_info = {}\n        storage_info = {}\n        stagein_path = self.mom.create_and_format_stagein_path(\n            storage_info, asuser=str(TEST_USER))\n        a = {ATTR_stagein: stagein_path}\n        j = Job(TEST_USER, a)\n        j.set_sleep_time(2)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid, offset=2)\n        execution_info['hostname'] = self.mom.hostname\n        storage_info['hostname'] = self.server.hostname\n        stageout_path = self.mom.create_and_format_stageout_path(\n            execution_info, storage_info, asuser=str(TEST_USER))\n        a = {ATTR_stageout: stageout_path}\n        j = Job(TEST_USER, a)\n        j.set_sleep_time(2)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid, offset=2)\n\n    def test_route_queue(self):\n        \"\"\"\n        Verify that a routing queue routes a job into the appropriate execution\n        queue.\n        \"\"\"\n        a = {'queue_type': 'Execution', 'resources_min.ncpus': 1,\n             'enabled': 'True', 'started': 
'False'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='specialq')\n        dflt_q = self.server.default_queue\n        a = {'queue_type': 'route',\n             'route_destinations': dflt_q + ',specialq',\n             'enabled': 'True', 'started': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='routeq')\n        a = {'resources_min.ncpus': 4}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, id=dflt_q)\n        j = Job(TEST_USER, attrs={ATTR_queue: 'routeq',\n                                  'Resource_List.ncpus': 1})\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {ATTR_queue: 'specialq'}, id=jid)\n\n    def test_movejob(self):\n        \"\"\"\n        Verify that a job can be moved to another queue than the one it was\n        originally submitted to\n        \"\"\"\n        a = {'queue_type': 'Execution', 'enabled': 'True', 'started': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='solverq')\n        a = {'scheduling': 'False'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.movejob(jid, 'solverq')\n        a = {'scheduling': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.server.expect(JOB, {ATTR_queue: 'solverq', 'job_state': 'R'},\n                           attrop=PTL_AND)\n\n    def test_by_queue(self):\n        \"\"\"\n        Test by_queue scheduling policy\n        \"\"\"\n        a = OrderedDict()\n        a['queue_type'] = 'execution'\n        a['enabled'] = 'True'\n        a['started'] = 'True'\n        a['priority'] = 200\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='p1')\n        a['priority'] = 400\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='p2')\n        a = {'scheduling': 'False'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.scheduler.set_sched_config({'by_queue': 'True'})\n        a = 
{'resources_available.ncpus': 8}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        a = {'Resource_List.select': '1:ncpus=1', ATTR_queue: 'p1'}\n        j = Job(TEST_USER, a)\n        j1id = self.server.submit(j)\n        a = {'Resource_List.select': '1:ncpus=8', ATTR_queue: 'p1'}\n        j = Job(TEST_USER, a)\n        j2id = self.server.submit(j)\n        a = {'Resource_List.select': '1:ncpus=2', ATTR_queue: 'p1'}\n        j = Job(TEST_USER, a)\n        j3id = self.server.submit(j)\n        a = {'Resource_List.select': '1:ncpus=1', ATTR_queue: 'p2'}\n        j = Job(TEST_USER, a)\n        j4id = self.server.submit(j)\n        a = {'Resource_List.select': '1:ncpus=8', ATTR_queue: 'p2'}\n        j = Job(TEST_USER, a)\n        j5id = self.server.submit(j)\n        a = {'Resource_List.select': '1:ncpus=8', ATTR_queue: 'p2'}\n        j = Job(TEST_USER, a)\n        j6id = self.server.submit(j)\n        a = {'scheduling': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {'scheduling': 'False'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        # Given node configuration of 8 cpus the only jobs that could run are\n        # j4id j1id and j3id\n        self.server.expect(JOB, {'job_state=R': 3},\n                           trigger_sched_cycle=False)\n        cycle = self.scheduler.cycles(start=self.server.ctime, lastN=2)\n        if len(cycle) > 0:\n            i = len(cycle) - 1\n            while len(cycle[i].political_order) == 0:\n                i -= 1\n            cycle = cycle[i]\n            p1jobs = [j1id, j2id, j3id]\n            p2jobs = [j4id, j5id, j6id]\n            jobs = [j1id, j2id, j3id, j4id, j5id, j6id]\n            job_order = [j.split('.')[0] for j in p2jobs + p1jobs]\n            self.logger.info(\n                'Political order: ' + ','.join(cycle.political_order))\n            self.logger.info('Expected order: ' + ','.join(job_order))\n            
self.assertTrue(cycle.political_order == job_order)\n\n    def test_round_robin(self):\n        \"\"\"\n        Test round_robin scheduling policy\n        \"\"\"\n        a = OrderedDict()\n        a['queue_type'] = 'execution'\n        a['enabled'] = 'True'\n        a['started'] = 'True'\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='p1')\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='p2')\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='p3')\n        a = {'scheduling': 'False'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.scheduler.set_sched_config({'round_robin': 'true   ALL'})\n        a = {'resources_available.ncpus': 9}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        jids = []\n        queues = ['p1', 'p2', 'p3']\n        queue = queues[0]\n        for i in range(9):\n            if (i != 0) and (i % 3 == 0):\n                del queues[0]\n                queue = queues[0]\n            a = {'Resource_List.select': '1:ncpus=1', ATTR_queue: queue}\n            j = Job(TEST_USER, a)\n            jids.append(self.server.submit(j))\n        start_time = time.time()\n        a = {'scheduling': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {'scheduling': 'False'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.server.expect(JOB, {'job_state=R': 9})\n        end_time = int(time.time()) + 1\n        cycle = self.scheduler.cycles(start=start_time, end=end_time)\n        self.logger.info(\"len(cycle):%s, td:%s\" % (len(cycle),\n                                                   end_time - start_time))\n        if len(cycle) > 0:\n            i = len(cycle) - 1\n            while ((i >= 0) and (len(cycle[i].political_order) == 0)):\n                i -= 1\n            if i < 0:\n                self.fail('failed to find political order')\n            for j, _cycle in enumerate(cycle):\n                
self.logger.info(\"cycle:%s:%s\" % (i, _cycle.political_order))\n            self.logger.info(\"cycle i:%s\" % i)\n            cycle = cycle[i]\n            jobs = [jids[0], jids[3], jids[6], jids[1], jids[4], jids[7],\n                    jids[2], jids[5], jids[8]]\n            job_order = [j.split('.')[0] for j in jobs]\n            self.logger.info(\n                'Political order: ' + ','.join(cycle.political_order))\n            self.logger.info('Expected order: ' + ','.join(job_order))\n            self.assertTrue(cycle.political_order == job_order)\n\n    def test_pbs_probe(self):\n        \"\"\"\n        Verify that pbs_probe runs and returns 0 when no errors are detected\n        \"\"\"\n        probe = os.path.join(self.server.pbs_conf['PBS_EXEC'], 'sbin',\n                             'pbs_probe')\n        ret = self.du.run_cmd(self.server.hostname, [probe], sudo=True)\n        self.assertEqual(ret['rc'], 0)\n\n    def test_printjob(self):\n        \"\"\"\n        Verify that printjob can be executed\n        \"\"\"\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        a = {'job_state': 'R', 'substate': 42}\n        self.server.expect(JOB, a, id=jid)\n        ret = self.mom.printjob(jid)\n        self.assertEqual(ret['rc'], 0)\n\n    def test_comm_service(self):\n        \"\"\"\n        Examples to demonstrate how to start/stop/signal the pbs_comm service\n        \"\"\"\n        svr_obj = Server()\n        comm = Comm(svr_obj)\n        comm.isUp()\n        comm.signal('-HUP')\n        comm.stop()\n        comm.start()\n        comm.log_match('Thread')\n\n    def test_add_server_dyn_res(self):\n        \"\"\"\n        Examples to demonstrate how to add a server dynamic resource script\n        \"\"\"\n        attr = {}\n        attr['type'] = 'long'\n        self.server.manager(MGR_CMD_CREATE, RSC, attr, id='foo')\n        body = \"echo 10\"\n        self.scheduler.add_server_dyn_res(\"foo\", script_body=body)\n        
self.scheduler.add_resource(\"foo\", apply=True)\n        j1 = Job(TEST_USER)\n        j1.set_attributes({'Resource_List': 'foo=15'})\n        j1id = self.server.submit(j1)\n        msg = \"Can Never Run: Insufficient amount of server resource: foo \"\\\n              \"(R: 15 A: 10 T: 10)\"\n        a = {'job_state': 'Q', 'Resource_List.foo': '15',\n             'comment': msg}\n        self.server.expect(JOB, a, id=j1id)\n\n    def test_schedlog_preempted_info(self):\n        \"\"\"\n        Demonstrate how to retrieve a list of jobs that had to be preempted in\n        order to run a high priority job\n        \"\"\"\n        # run the preemption smoketest\n        self.test_preemption()\n        # Analyze the scheduler log\n        a = PBSLogAnalyzer()\n        a.analyze_scheduler_log(self.scheduler.logfile,\n                                start=self.server.ctime)\n        for cycle in a.scheduler.cycles:\n            if cycle.preempted_jobs:\n                self.logger.info('Preemption info: ' +\n                                 str(cycle.preempted_jobs))\n\n    def test_basic(self):\n        \"\"\"\n        basic express queue preemption test\n        \"\"\"\n        try:\n            self.server.manager(MGR_CMD_DELETE, QUEUE, id=\"expressq\")\n        except PbsManagerError:\n            pass\n        a = {'queue_type': 'e',\n             'started': 'True',\n             'enabled': 'True',\n             'Priority': 150}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, \"expressq\")\n        a = {'resources_available.ncpus': 4, 'resources_available.mem': '2gb'}\n        self.mom.create_vnodes(a, 4)\n        j1 = Job(TEST_USER)\n        j1.set_attributes(\n            {'Resource_List.select': '4:ncpus=4',\n             'Resource_List.walltime': 3600})\n        j1id = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=j1id)\n        j2 = Job(TEST_USER)\n        j2.set_attributes(\n            
{'Resource_List.select': '1:ncpus=4',\n             'Resource_List.walltime': 3600,\n             'queue': 'expressq'})\n        j2id = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'S'}, id=j1id)\n        self.server.expect(JOB, {'job_state': 'R'}, id=j2id)\n        self.server.cleanup_jobs()\n        self.server.expect(SERVER, {'total_jobs': 0})\n        self.server.manager(MGR_CMD_DELETE, QUEUE, id=\"expressq\")\n\n    def test_basic_ja(self):\n        \"\"\"\n        basic express queue preemption test with job array\n        \"\"\"\n        try:\n            self.server.manager(MGR_CMD_DELETE, QUEUE, id=\"expressq\")\n        except PbsManagerError:\n            pass\n        a = {'queue_type': 'e',\n             'started': 'True',\n             'enabled': 'True',\n             'Priority': 150}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, \"expressq\")\n        a = {'resources_available.ncpus': 4, 'resources_available.mem': '2gb'}\n        self.mom.create_vnodes(a, 4)\n        j1 = Job(TEST_USER)\n        j1.set_attributes({'Resource_List.select': '4:ncpus=4',\n                           'Resource_List.walltime': 3600})\n        j1id = self.server.submit(j1)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42}, id=j1id)\n        j2 = Job(TEST_USER)\n        j2.set_attributes({'Resource_List.select': '1:ncpus=4',\n                           'Resource_List.walltime': 3600,\n                           'queue': 'expressq',\n                           ATTR_J: '1-3'})\n        j2id = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'S'}, id=j1id)\n        self.server.expect(JOB, {'job_state=R': 3}, count=True,\n                           id=j2id, extend='t')\n        self.server.cleanup_jobs()\n        self.server.expect(SERVER, {'total_jobs': 0})\n        self.server.manager(MGR_CMD_DELETE, QUEUE, id=\"expressq\")\n\n    def submit_reserv(self, resv_start, ncpus, resv_dur):\n        a = 
{'Resource_List.select': '1:ncpus=%d' % ncpus,\n             'Resource_List.place': 'free',\n             'reserve_start': int(resv_start),\n             'reserve_duration': int(resv_dur)\n             }\n        r = Reservation(TEST_USER, attrs=a)\n        rid = self.server.submit(r)\n        try:\n            a = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n            d = self.server.expect(RESV, a, id=rid)\n        except PtlExpectError as e:\n            d = e.rv\n        return d\n\n    def test_shrink_to_fit_resv_barrier(self):\n        \"\"\"\n        Test shrink to fit by creating one reservation having ncpus=1,\n        starting in 3 hours with a duration of two hours.  A STF job with\n        a min_walltime of 10 min. and max_walltime of 20.5 hrs will shrink\n        its walltime to less than or equal to 3 hours and greater than or\n        equal to 10 mins.\n        \"\"\"\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        now = time.time()\n        resv_dur = 7200\n        resv_start = now + 10800\n        d = self.submit_reserv(resv_start, 1, resv_dur)\n        self.assertTrue(d)\n        j = Job(TEST_USER)\n        a = {'Resource_List.ncpus': '1'}\n        j.set_attributes(a)\n        jid = self.server.submit(j)\n        j2 = Job(TEST_USER)\n        a = {'Resource_List.max_walltime': '20:30:00',\n             'Resource_List.min_walltime': '00:10:00'}\n        j2.set_attributes(a)\n        jid2 = self.server.submit(j2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        attr = {'Resource_List.walltime': (LE, '03:00:00')}\n        self.server.expect(JOB, attr, id=jid2)\n        attr = {'Resource_List.walltime': (GE, '00:10:00')}\n        self.server.expect(JOB, attr, id=jid2)\n\n    def test_job_sort_formula_threshold(self):\n        \"\"\"\n        Test job_sort_formula_threshold basic 
behavior\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events': 2047})\n        a = {'resources_available.ncpus': 1}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n        a = {'job_sort_formula':\n             'ceil(fabs(-ncpus*(mem/100.00)*sqrt(walltime)))'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        a = {'job_sort_formula_threshold': '7'}\n        self.server.manager(MGR_CMD_SET, SCHED, a)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        a = {'Resource_List.select': '1:ncpus=1:mem=300kb',\n             'Resource_List.walltime': 4}\n        J1 = Job(TEST_USER1, attrs=a)\n        a = {'Resource_List.select': '1:ncpus=1:mem=350kb',\n             'Resource_List.walltime': 4}\n        J2 = Job(TEST_USER1, attrs=a)\n        a = {'Resource_List.select': '1:ncpus=1:mem=380kb',\n             'Resource_List.walltime': 4}\n        J3 = Job(TEST_USER1, attrs=a)\n        a = {'Resource_List.select': '1:ncpus=1:mem=440kb',\n             'Resource_List.walltime': 4}\n        J4 = Job(TEST_USER1, attrs=a)\n        j1id = self.server.submit(J1)\n        j2id = self.server.submit(J2)\n        j3id = self.server.submit(J3)\n        j4id = self.server.submit(J4)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        rv = self.server.expect(SERVER, {'server_state': 'Scheduling'}, op=NE)\n        self.logger.info(\"Checking the job state of \" + j4id)\n        self.server.expect(JOB, {'job_state': 'R'}, id=j4id, max_attempts=30,\n                           interval=2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=j3id, max_attempts=30,\n                           interval=2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=j2id, max_attempts=30,\n                           interval=2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=j1id, max_attempts=30,\n                           interval=2)\n        msg = \"Checking the job 
state of %s, runs after %s is deleted\" % (j3id,\n                                                                          j4id)\n        self.logger.info(msg)\n        try:\n            self.server.deljob(id=j4id, wait=True, extend='force',\n                               runas=MGR_USER)\n        except PbsDeljobError as e:\n            self.assertIn(\n                'qdel: Unknown Job Id', e.msg[0])\n        self.server.expect(JOB, {'job_state': 'R'}, id=j3id, max_attempts=30,\n                           interval=2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=j2id, max_attempts=30,\n                           interval=2)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=j1id, max_attempts=30,\n                           interval=2)\n        self.scheduler.log_match(j1id + \";Formula Evaluation = 6\",\n                                 regexp=True, starttime=self.server.ctime,\n                                 max_attempts=10, interval=2)\n        m = \";Job's formula value 6 is under threshold 7\"\n        self.scheduler.log_match(j1id + m,\n                                 regexp=True, starttime=self.server.ctime,\n                                 max_attempts=10, interval=2)\n        m = \";Job is under job_sort_formula threshold value\"\n        self.scheduler.log_match(j1id + m,\n                                 regexp=True, starttime=self.server.ctime,\n                                 max_attempts=10, interval=2)\n        self.scheduler.log_match(j2id + \";Formula Evaluation = 7\",\n                                 regexp=True, starttime=self.server.ctime,\n                                 max_attempts=10, interval=2)\n        m = \";Job's formula value 7 is under threshold 7\"\n        self.scheduler.log_match(j2id + m,\n                                 regexp=True, starttime=self.server.ctime,\n                                 max_attempts=10, interval=2)\n        m = \";Job is under job_sort_formula threshold value\"\n        
self.scheduler.log_match(j2id + m,\n                                 regexp=True, starttime=self.server.ctime,\n                                 max_attempts=10, interval=2)\n        self.scheduler.log_match(j3id + \";Formula Evaluation = 8\",\n                                 regexp=True, starttime=self.server.ctime,\n                                 max_attempts=10, interval=2)\n        self.scheduler.log_match(j4id + \";Formula Evaluation = 9\",\n                                 regexp=True, starttime=self.server.ctime,\n                                 max_attempts=10, interval=2)\n        try:\n            self.server.deljob(id=j3id, wait=True, extend='force',\n                               runas=MGR_USER)\n        except PbsDeljobError as e:\n            self.assertIn(\n                'qdel: Unknown Job Id', e.msg[0])\n        # Make sure we can qrun a job under the threshold\n        rv = self.server.expect(SERVER, {'server_state': 'Scheduling'}, op=NE)\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=j1id)\n        self.server.runjob(jobid=j1id)\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=j1id)\n\n        # test to make sure server can still start with job_sort_formula set\n        self.server.restart()\n        restart_msg = 'Failed to restart PBS'\n        self.assertTrue(self.server.isUp(), restart_msg)\n\n    def isSuspended(self, ppid):\n        \"\"\"\n        Check whether <ppid> is in the Suspended state; return True if\n        <ppid> is in the Suspended state, else return False\n        \"\"\"\n        return self.mom.is_proc_suspended(ppid)\n\n    def do_preempt_config(self):\n        \"\"\"\n        Do Scheduler Preemption configuration\n        \"\"\"\n        _t = ('\\\"express_queue, normal_jobs, server_softlimits,' +\n              ' queue_softlimits\\\"')\n        a = {'preempt_prio': _t}\n        self.scheduler.set_sched_config(a)\n        try:\n            self.server.manager(MGR_CMD_DELETE, QUEUE, None, 'expressq')\n        except 
PbsManagerError:\n            pass\n        a = {'queue_type': 'e',\n             'started': 'True',\n             'Priority': 150,\n             'enabled': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, 'expressq')\n\n    def common_stuff(self, isJobArray=False, isWithPreempt=False):\n        \"\"\"\n        Do common stuff for job like submitting, stating and suspending\n        \"\"\"\n        if isJobArray:\n            a = {'resources_available.ncpus': 3}\n        else:\n            a = {'resources_available.ncpus': 1}\n        self.mom.create_vnodes(a, 1)\n        if isWithPreempt:\n            self.do_preempt_config()\n        j1 = Job(TEST_USER, attrs={'Resource_List.walltime': 100})\n        if isJobArray:\n            j1.set_attributes({ATTR_J: '1-3'})\n        j1id = self.server.submit(j1)\n        if isJobArray:\n            a = {'job_state=R': 3, 'substate=42': 3}\n        else:\n            a = {'job_state': 'R', 'substate': 42}\n        self.server.expect(JOB, a, extend='t')\n        if isWithPreempt:\n            j2 = Job(TEST_USER, attrs={'Resource_List.walltime': 100,\n                                       'queue': 'expressq'})\n            if isJobArray:\n                j2.set_attributes({ATTR_J: '1-3'})\n            j2id = self.server.submit(j2)\n            self.assertNotEqual(j2id, None)\n            if isJobArray:\n                a = {'job_state=R': 3, 'substate=42': 3}\n            else:\n                a = {'job_state': 'R', 'substate': 42}\n            self.server.expect(JOB, a, id=j2id, extend='t')\n        else:\n            self.server.sigjob(j1id, 'suspend')\n        if isJobArray:\n            a = {'job_state=S': 3}\n        else:\n            a = {'job_state': 'S'}\n        self.server.expect(JOB, a, id=j1id, extend='t')\n        jobs = self.server.status(JOB, id=j1id)\n        for job in jobs:\n            if 'session_id' in job:\n                self.server.expect(JOB, {'session_id': self.isSuspended},\n         
                          id=job['id'])\n        if isWithPreempt:\n            return (j1id, j2id)\n        else:\n            return j1id\n\n    def test_suspend_job_with_preempt(self):\n        \"\"\"\n        Test Suspend of Job using Scheduler Preemption\n        \"\"\"\n        self.common_stuff(isWithPreempt=True)\n\n    def test_resume_job_with_preempt(self):\n        \"\"\"\n        Test Resume of Job using Scheduler Preemption\n        \"\"\"\n        (j1id, j2id) = self.common_stuff(isWithPreempt=True)\n        self.server.delete(j2id)\n        self.server.expect(JOB, {'job_state': 'R', 'substate': 42},\n                           id=j1id)\n        jobs = self.server.status(JOB, id=j1id)\n        for job in jobs:\n            if 'session_id' in job:\n                self.server.expect(JOB,\n                                   {'session_id': (NOT, self.isSuspended)},\n                                   id=job['id'])\n\n    def test_suspend_job_array_with_preempt(self):\n        \"\"\"\n        Test Suspend of Job array using Scheduler Preemption\n        \"\"\"\n        self.common_stuff(isJobArray=True, isWithPreempt=True)\n\n    def test_resume_job_array_with_preempt(self):\n        \"\"\"\n        Test Resume of Job array using Scheduler Preemption\n        \"\"\"\n        (j1id, j2id) = self.common_stuff(isJobArray=True, isWithPreempt=True)\n        self.server.delete(j2id)\n        self.server.expect(JOB,\n                           {'job_state=R': 3, 'substate=42': 3},\n                           extend='t')\n        jobs = self.server.status(JOB, id=j1id, extend='t')\n        for job in jobs:\n            if 'session_id' in job:\n                self.server.expect(JOB,\n                                   {'session_id': (NOT, self.isSuspended)},\n                                   id=job['id'])\n\n    def test_resource_create_delete(self):\n        \"\"\"\n        Verify behavior of resource on creation, deletion\n        and job.\n        \"\"\"\n\n 
       a = {'job_history_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        attr = {'type': 'boolean'}\n        self.server.manager(MGR_CMD_CREATE, RSC, attr, id='foo')\n        attr = {'type': 'long', 'flag': 'nh'}\n        self.server.manager(MGR_CMD_CREATE, RSC, attr, id='foo1')\n        attr = {'type': 'string'}\n        self.server.manager(MGR_CMD_CREATE, RSC, attr, id='foo2')\n        attr = {'type': 'size', 'flag': 'nh'}\n        self.server.manager(MGR_CMD_CREATE, RSC, attr, id='foo3')\n\n        with self.assertRaises(PbsManagerError) as e:\n            self.server.manager(MGR_CMD_CREATE, RSC, attr, id='foo1')\n        msg = 'qmgr obj=foo1 svr=default: Duplicate entry in list '\n        self.assertIn(msg, e.exception.msg)\n\n        self.scheduler.add_resource(\"foo, foo1, foo2, foo3\", apply=True)\n\n        attr = {'Resources_available.foo': True}\n        self.server.manager(MGR_CMD_SET, SERVER, attr)\n\n        vnode_val = self.mom.shortname\n        if self.mom.is_cpuset_mom():\n            nodeinfo = self.server.status(NODE)\n            if len(nodeinfo) > 1:\n                vnode_val = nodeinfo[1]['id']\n        attr = {'Resources_available.foo3': '2gb'}\n        self.server.manager(MGR_CMD_SET, NODE, attr, id=vnode_val)\n        attr = {'Resources_available.foo1': 3}\n        self.server.manager(MGR_CMD_SET, NODE, attr, id=vnode_val)\n\n        now = time.time()\n        r = Reservation(TEST_USER)\n        a = {'Resource_List.foo2': 'abc',\n             'reserve_start': now + 10,\n             'reserve_end': now + 40}\n        r.set_attributes(a)\n        rid = self.server.submit(r)\n        rid_q = rid.split('.')[0]\n        a = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED|2\")}\n        self.server.expect(RESV, a, id=rid)\n        a = {'Resource_List.foo3': '1gb',\n             'Resource_List.foo1': 2,\n             ATTR_q: rid_q}\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(15)\n        jid = 
self.server.submit(j)\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid,\n                           offset=10)\n\n        with self.assertRaises(PbsManagerError) as e:\n            self.server.manager(MGR_CMD_DELETE, RSC, id='foo1')\n        msg = 'qmgr obj=foo1 svr=default: Resource busy on job'\n        self.assertIn(msg, e.exception.msg)\n\n        self.server.expect(JOB, {'job_state': 'F'}, extend='x',\n                           offset=15, id=jid)\n\n        a = {'Resource_List.foo': True}\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(15)\n        jid1 = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'F'}, extend='x',\n                           offset=15, id=jid1)\n\n        a = {'job_history_enable': 'False'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.server.manager(MGR_CMD_DELETE, RSC, id='foo1')\n        self.server.manager(MGR_CMD_DELETE, RSC, id='foo2')\n        self.server.manager(MGR_CMD_DELETE, RSC, id='foo3')\n\n    def setup_fs(self, formula):\n\n        # change resource group file and validate after all the changes are in\n        self.scheduler.add_to_resource_group('grp1', 100, 'root', 60,\n                                             validate=False)\n        self.scheduler.add_to_resource_group('grp2', 200, 'root', 40,\n                                             validate=False)\n        self.scheduler.add_to_resource_group(TEST_USER1, 101, 'grp1', 40,\n                                             validate=False)\n        self.scheduler.add_to_resource_group(TEST_USER2, 102, 'grp1', 20,\n                                             validate=False)\n        self.scheduler.add_to_resource_group(TEST_USER3, 201, 'grp2', 30,\n                                             validate=False)\n        self.scheduler.add_to_resource_group(TEST_USER4, 202, 'grp2', 10,\n                                             
validate=True)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduler_iteration': 7})\n        a = {'fair_share': 'True', 'fairshare_decay_time': '24:00:00',\n             'fairshare_decay_factor': 0.5, 'fairshare_usage_res': formula}\n        self.scheduler.set_sched_config(a)\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events': 4095})\n\n    def test_fairshare_enhanced(self):\n        \"\"\"\n        Test the basic fairshare behavior with custom resources for math module\n        \"\"\"\n        rv = self.server.add_resource('foo1', 'float', 'nh')\n        self.assertTrue(rv)\n        # Set scheduler fairshare usage formula\n        self.setup_fs(\n            'ceil(fabs(-ncpus*(foo1/100.00)*sqrt(100)))')\n        node_attr = {'resources_available.ncpus': 1,\n                     'resources_available.foo1': 5000}\n        self.server.manager(MGR_CMD_SET, NODE, node_attr, self.mom.shortname)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        job_attr = {'Resource_List.select': '1:ncpus=1:foo1=20'}\n        J1 = Job(TEST_USER2, attrs=job_attr)\n        J2 = Job(TEST_USER3, attrs=job_attr)\n        J3 = Job(TEST_USER1, attrs=job_attr)\n        j1id = self.server.submit(J1)\n        j2id = self.server.submit(J2)\n        j3id = self.server.submit(J3)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        rv = self.server.expect(SERVER, {'server_state': 'Scheduling'}, op=NE)\n\n        self.logger.info(\"Checking the job state of \" + j3id)\n        self.server.expect(JOB, {'job_state': 'R'}, id=j3id)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=j2id)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=j1id)\n        # While nothing has changed, we must run another cycle for the\n        # scheduler to take note of the fairshare usage.\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.server.delete(j3id)\n        msg = \"Checking the 
job state of \" + j2id + \", runs after \"\n        msg += j3id + \" is deleted\"\n        self.logger.info(msg)\n        self.server.expect(JOB, {'job_state': 'R'}, id=j2id)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=j1id)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.server.delete(j2id)\n        msg = \"Checking the job state of \" + j1id + \", runs after \"\n        msg += j2id + \" is deleted\"\n        self.logger.info(msg)\n        self.server.expect(JOB, {'job_state': 'R'}, id=j1id)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.server.delete(j1id)\n\n        # query fairshare and check usage\n        fs1 = self.scheduler.fairshare.query_fairshare(name=str(TEST_USER1))\n        self.logger.info('Checking ' + str(fs1.usage) + \" == 3\")\n        self.assertEqual(fs1.usage, 3)\n        fs2 = self.scheduler.fairshare.query_fairshare(name=str(TEST_USER2))\n        self.logger.info('Checking ' + str(fs2.usage) + \" == 3\")\n        self.assertEqual(fs2.usage, 3)\n        fs3 = self.scheduler.fairshare.query_fairshare(name=str(TEST_USER3))\n        self.logger.info('Checking ' + str(fs3.usage) + \" == 3\")\n        self.assertEqual(fs3.usage, 3)\n        fs4 = self.scheduler.fairshare.query_fairshare(name=str(TEST_USER4))\n        self.logger.info('Checking ' + str(fs4.usage) + \" == 1\")\n        self.assertEqual(fs4.usage, 1)\n\n        # Check the scheduler usage file whether it's updating or not\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        J1 = Job(TEST_USER4, attrs=job_attr)\n        J2 = Job(TEST_USER2, attrs=job_attr)\n        J3 = Job(TEST_USER1, attrs=job_attr)\n        j1id = self.server.submit(J1)\n        j2id = self.server.submit(J2)\n        j3id = self.server.submit(J3)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.logger.info(\"Checking the job state of \" + j1id)\n    
    self.server.expect(JOB, {'job_state': 'R'}, id=j1id)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=j2id)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=j3id)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.server.delete(j1id)\n        msg = \"Checking the job state of \" + j3id + \", runs after \"\n        msg += j1id + \" is deleted\"\n        self.logger.info(msg)\n        self.server.expect(JOB, {'job_state': 'R'}, id=j3id)\n        self.server.expect(JOB, {'job_state': 'Q'}, id=j2id)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.server.delete(j3id)\n        msg = \"Checking the job state of \" + j2id + \", runs after \"\n        msg += j3id + \" is deleted\"\n        self.logger.info(msg)\n        self.server.expect(JOB, {'job_state': 'R'}, id=j2id)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        # query fairshare and check usage\n        fs1 = self.scheduler.fairshare.query_fairshare(name=str(TEST_USER1))\n        self.logger.info('Checking ' + str(fs1.usage) + \" == 5\")\n        self.assertEqual(fs1.usage, 5)\n        fs2 = self.scheduler.fairshare.query_fairshare(name=str(TEST_USER2))\n        self.logger.info('Checking ' + str(fs2.usage) + \" == 5\")\n        self.assertEqual(fs2.usage, 5)\n        fs3 = self.scheduler.fairshare.query_fairshare(name=str(TEST_USER3))\n        self.logger.info('Checking ' + str(fs3.usage) + \" == 3\")\n        self.assertEqual(fs3.usage, 3)\n        fs4 = self.scheduler.fairshare.query_fairshare(name=str(TEST_USER4))\n        self.logger.info('Checking ' + str(fs4.usage) + \" == 3\")\n        self.assertEqual(fs4.usage, 3)\n\n    @checkModule(\"pexpect\")\n    @skipOnShasta\n    @runOnlyOnLinux\n    def test_interactive_job(self):\n        \"\"\"\n        Submit an interactive job\n        \"\"\"\n        cmd = 'sleep 10'\n        j = Job(TEST_USER, attrs={ATTR_inter: ''})\n        
j.interactive_script = [('hostname', '.*'),\n                                (cmd, '.*')]\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.delete(jid)\n        self.server.expect(JOB, 'queue', op=UNSET, id=jid)\n\n    def test_man_pages(self):\n        \"\"\"\n        Test basic functionality of man pages\n        \"\"\"\n        pbs_conf = self.du.parse_pbs_config(self.server.shortname)\n        man_cmd = \"man\"\n        man_bin_path = self.du.which(exe=man_cmd)\n        if man_bin_path == man_cmd:\n            self.skip_test(reason='man command is not available. Please '\n                                  'install man and try again.')\n        manpath = os.path.join(pbs_conf['PBS_EXEC'], \"share\", \"man\")\n        pbs_cmnds = [\"pbsnodes\", \"qsub\"]\n        os.environ['MANPATH'] = manpath\n        for pbs_cmd in pbs_cmnds:\n            cmd = \"man %s\" % pbs_cmd\n            rc = self.du.run_cmd(cmd=cmd)\n            msg = \"Error while retrieving man page of %s \" % pbs_cmd\n            msg += \"command: %s\" % rc['err']\n            self.assertEqual(rc['rc'], 0, msg)\n            msg = \"Successfully retrieved man page for\"\n            msg += \" %s command\" % pbs_cmd\n            self.logger.info(msg)\n\n    def test_exclhost(self):\n        \"\"\"\n        Test that a job requesting exclhost is not placed on another host\n        with a running job on it.\n        \"\"\"\n        a = {'resources_available.ncpus': 2}\n        self.mom.create_vnodes(a, 8, sharednode=False,\n                               vnodes_per_host=4)\n        vn = self.mom.shortname\n        req_nodes = '1:ncpus=1:vnode=' + vn + '[3]'\n        J1 = Job(TEST_USER, {'Resource_List.select': req_nodes})\n        jid1 = self.server.submit(J1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.place': 'exclhost'}\n        J2 = 
Job(TEST_USER, a)\n        jid2 = self.server.submit(J2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n\n        st = self.server.status(JOB, 'exec_vnode', id=jid2)\n        vnodes = J2.get_vnodes(st[0]['exec_vnode'])\n        expected_vnodes = [vn + '[4]', vn + '[5]', vn + '[6]', vn + '[7]']\n\n        for v in vnodes:\n            self.assertIn(v, expected_vnodes)\n\n    def test_jobscript_max_size(self):\n        \"\"\"\n        Test that if jobscript_max_size attribute is set, users can not\n        submit jobs with job script size exceeding the limit.\n        \"\"\"\n\n        scr = []\n        for i in range(2048):\n            scr += ['echo \"This is a very long line, it will exceed 20 bytes\"']\n\n        j = Job()\n        j.create_script(scr)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'jobscript_max_size': 65537})\n        try:\n            self.server.submit(j)\n        except PbsSubmitError as e:\n            self.assertIn(\"jobscript size exceeded the jobscript_max_size\",\n                          e.msg[0])\n        self.server.log_match(\"Req;req_reject;Reject reply code=15175\",\n                              max_attempts=5)\n\n    def test_import_pbs_module(self):\n        \"\"\"\n        Test that the pbs module located in the PBS installation directory is\n        able to be loaded and symbols within it accessed.\n        \"\"\"\n        self.add_pbs_python_path_to_sys_path()\n        import pbs\n        msg = \"pbs.JOB_STATE_RUNNING=%s\" % (pbs.JOB_STATE_RUNNING,)\n        self.logger.info(msg)\n\n    def test_import_pbs_ifl_module(self):\n        \"\"\"\n        Test that the pbs_ifl module located in the PBS installation directory\n        is able to be loaded and a connection to the server can be established.\n        \"\"\"\n        self.add_pbs_python_path_to_sys_path()\n        import pbs_ifl\n        server_conn = pbs_ifl.pbs_connect(None)\n        server_stat = pbs_ifl.pbs_statserver(server_conn, None, 
None)\n        pbs_ifl.pbs_disconnect(server_conn)\n        msg = \"server name is %s\" % (server_stat.name,)\n        self.logger.info(msg)\n"
  },
  {
    "path": "test/tests/performance/__init__.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport math\nfrom math import sqrt\nfrom ptl.utils.pbs_testsuite import *\nimport statistics\n\n\nclass TestPerformance(PBSTestSuite):\n    \"\"\"\n    Base test suite for Performance tests\n    \"\"\"\n\n    def check_value(self, res):\n        if isinstance(res, list):\n            for val in res:\n                if not isinstance(val, (int, float)):\n                    raise self.failureException(\n                        \"Test result list must be int or float\")\n        else:\n            if not isinstance(res, (int, float)):\n                raise self.failureException(\"Test result must be int or float\")\n\n    def perf_test_result(self, result, test_measure, unit):\n        \"\"\"\n        Add test results to json file. 
If multiple trial values are passed,\n        calculate mean, std_dev, min, and max for the list.\n        \"\"\"\n        self.check_value(result)\n\n        if isinstance(result, list) and len(result) > 1:\n            mean_res = statistics.mean(result)\n            stddev_res = statistics.stdev(result)\n            lowv = mean_res - (stddev_res * 2)\n            uppv = mean_res + (stddev_res * 2)\n            new_result = [x for x in result if x > lowv and x < uppv]\n            if len(new_result) == 0:\n                new_result = result\n            max_res = round(max(new_result), 2)\n            min_res = round(min(new_result), 2)\n            mean_res = statistics.mean(new_result)\n            mean_res = round(mean_res, 2)\n            trial_no = 1\n            trial_data = []\n            for trial_result in result:\n                trial_result = round(trial_result, 2)\n                trial_data.append(\n                    {\"trial_no\": trial_no, \"value\": trial_result})\n                trial_no += 1\n            test_data = {\"test_measure\": test_measure,\n                         \"unit\": unit,\n                         \"test_data\": {\"mean\": mean_res,\n                                       \"std_dev\": stddev_res,\n                                       \"minimum\": min_res,\n                                       \"maximum\": max_res,\n                                       \"trials\": trial_data,\n                                       \"samples_considered\": len(new_result),\n                                       \"total_samples\": len(result)}}\n            return self.set_test_measurements(test_data)\n        else:\n            variance = 0\n            if isinstance(result, list):\n                result = result[0]\n            if isinstance(result, float):\n                result = round(result, 2)\n            testdic = {\"test_measure\": test_measure, \"unit\": unit,\n                       \"test_data\": {\"mean\": result,\n           
                          \"std_dev\": variance,\n                                     \"minimum\": result,\n                                     \"maximum\": result}}\n            return self.set_test_measurements(testdic)\n"
  },
  {
    "path": "test/tests/performance/pbs_cgroups_stress.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.performance import *\n\n\ndef is_memsw_enabled(mem_path):\n    \"\"\"\n    Check if system has swapcontrol enabled, then return true\n    else return false\n    \"\"\"\n    # List all files and check if memsw files exists\n    for files in os.listdir(mem_path):\n        if 'memory.memsw' in files:\n            return 'true'\n    return 'false'\n\n\nclass TestCgroupsStress(TestPerformance):\n    \"\"\"\n    This test suite targets Linux Cgroups hook stress.\n    \"\"\"\n\n    def setUp(self):\n        TestPerformance.setUp(self)\n\n        self.true_script = \"\"\"#!/bin/bash\n#PBS -joe\n/bin/true\n\"\"\"\n        self.cfg0 = \"\"\"{\n    \"cgroup_prefix\"         : \"pbs\",\n    \"exclude_hosts\"         : [],\n    \"exclude_vntypes\"       : [],\n    \"run_only_on_hosts\"     : [],\n    \"periodic_resc_update\"  : false,\n    \"vnode_per_numa_node\"   : false,\n    \"online_offlined_nodes\" : false,\n    \"use_hyperthreads\"      : false,\n    \"cgroup\" : {\n        \"cpuacct\" : {\n            \"enabled\"         : false\n        },\n        \"cpuset\" : {\n            \"enabled\"         : false\n        },\n        \"devices\" : {\n            \"enabled\"         : false\n        },\n        \"hugetlb\" : {\n            \"enabled\"         : false\n        },\n        \"memory\":\n        {\n            \"enabled\"         : true,\n        
    \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : [],\n            \"soft_limit\"      : false,\n            \"default\"         : \"256MB\",\n            \"reserve_percent\" : \"0\",\n            \"reserve_amount\"  : \"0MB\"\n        },\n        \"memsw\":\n        {\n            \"enabled\"         : %s,\n            \"exclude_hosts\"   : [],\n            \"exclude_vntypes\" : [],\n            \"default\"         : \"256MB\",\n            \"reserve_percent\" : \"0\",\n            \"reserve_amount\"  : \"128MB\"\n        }\n    }\n}\"\"\"\n\n        self.noprefix = False\n        self.paths = self.get_paths()\n        if not (self.paths['cpuset'] and self.paths['memory']):\n            self.skipTest('cpuset or memory cgroup subsystem not mounted')\n        self.swapctl = is_memsw_enabled(self.paths['memsw'])\n        self.server.set_op_mode(PTL_CLI)\n        self.server.cleanup_jobs()\n        Job.dflt_attributes[ATTR_k] = 'oe'\n        # Configure the scheduler to schedule using vmem\n        a = {'resources': 'ncpus,mem,vmem,host,vnode'}\n        self.scheduler.set_sched_config(a)\n        # Import the hook\n        self.hook_name = 'pbs_cgroups'\n        self.hook_file = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                      'lib',\n                                      'python',\n                                      'altair',\n                                      'pbs_hooks',\n                                      'pbs_cgroups.PY')\n        self.load_hook(self.hook_file)\n        # Enable the cgroups hook\n        conf = {'enabled': 'True', 'freq': 2}\n        self.server.manager(MGR_CMD_SET, HOOK, conf, self.hook_name)\n        # Restart mom so exechost_startup hook is run\n        self.mom.signal('-HUP')\n\n    def get_paths(self):\n        \"\"\"\n        Returns a dictionary containing the location where each cgroup\n        is mounted.\n        \"\"\"\n        paths = {'pids': None,\n                 
'blkio': None,\n                 'systemd': None,\n                 'cpuset': None,\n                 'memory': None,\n                 'memsw': None,\n                 'cpuacct': None,\n                 'devices': None}\n        # Loop through the mounts and collect the ones for cgroups\n        with open(os.path.join(os.sep, 'proc', 'mounts'), 'r') as fd:\n            for line in fd:\n                entries = line.split()\n                if entries[2] != 'cgroup':\n                    continue\n                flags = entries[3].split(',')\n                if 'noprefix' in flags:\n                    self.noprefix = True\n                subsys = os.path.basename(entries[1])\n                paths[subsys] = entries[1]\n                if 'memory' in flags:\n                    paths['memsw'] = paths[subsys]\n                    paths['memory'] = paths[subsys]\n                if 'cpuacct' in flags:\n                    paths['cpuacct'] = paths[subsys]\n                if 'devices' in flags:\n                    paths['devices'] = paths[subsys]\n        return paths\n\n    def load_hook(self, filename):\n        \"\"\"\n        Import and enable a hook pointed to by the URL specified.\n        \"\"\"\n        try:\n            with open(filename, 'r') as fd:\n                script = fd.read()\n        except IOError:\n            self.assertTrue(False, \"Failed to open hook file %s\" % filename)\n        events = '\"execjob_begin,execjob_launch,execjob_attach,'\n        events += 'execjob_epilogue,execjob_end,exechost_startup,'\n        events += 'exechost_periodic\"'\n        a = {'enabled': 'True',\n             'freq': '2',\n             'event': events}\n        self.server.create_import_hook(self.hook_name, a, script,\n                                       overwrite=True)\n        # Add the configuration\n        self.load_config(self.cfg0 % self.swapctl)\n\n    def load_config(self, cfg):\n        \"\"\"\n        Create a hook configuration file with the 
provided contents.\n        \"\"\"\n        fn = self.du.create_temp_file(body=cfg)\n        self.logger.info(\"Current config: %s\" % cfg)\n        a = {'content-type': 'application/x-config',\n             'content-encoding': 'default',\n             'input-file': fn}\n        self.server.manager(MGR_CMD_IMPORT, HOOK, a, self.hook_name)\n        os.remove(fn)\n        self.mom.log_match('pbs_cgroups.CF;copy hook-related ' +\n                           'file request received',\n                           starttime=self.server.ctime)\n        pbs_home = self.server.pbs_conf['PBS_HOME']\n        svr_conf = os.path.join(\n            os.sep, pbs_home, 'server_priv', 'hooks', 'pbs_cgroups.CF')\n        pbs_home = self.mom.pbs_conf['PBS_HOME']\n        mom_conf = os.path.join(\n            os.sep, pbs_home, 'mom_priv', 'hooks', 'pbs_cgroups.CF')\n        # reload config if server and mom cfg differ up to count times\n        count = 5\n        while (count > 0):\n            r1 = self.du.run_cmd(cmd=['cat', svr_conf], sudo=True)\n            r2 = self.du.run_cmd(cmd=['cat', mom_conf], sudo=True)\n            if r1['out'] != r2['out']:\n                self.logger.info('server & mom pbs_cgroups.CF differ')\n                self.server.manager(MGR_CMD_IMPORT, HOOK, a, self.hook_name)\n                self.mom.log_match('pbs_cgroups.CF;copy hook-related ' +\n                                   'file request received',\n                                   starttime=self.server.ctime)\n            else:\n                self.logger.info('server & mom pbs_cgroups.CF match')\n                break\n            time.sleep(1)\n            count -= 1\n        # A HUP of mom ensures update to hook config file is\n        # seen by the exechost_startup hook.\n        self.mom.signal('-HUP')\n\n    @timeout(1200)\n    def test_cgroups_race_condition(self):\n        \"\"\"\n        Test to ensure a cgroups event does not read the cgroups file system\n        while another event is 
writing to it. By submitting 1000 instant jobs,\n        the events should collide at least once.\n        \"\"\"\n        pcpus = 0\n        with open('/proc/cpuinfo', 'r') as desc:\n            for line in desc:\n                if re.match('^processor', line):\n                    pcpus += 1\n        if pcpus < 8:\n            self.skipTest(\"Test requires at least 8 physical CPUs\")\n\n        attr = {'job_history_enable': 'true'}\n        self.server.manager(MGR_CMD_SET, SERVER, attr)\n        self.load_config(self.cfg0 % self.swapctl)\n        now = time.time()\n        j = Job(TEST_USER, attrs={ATTR_J: '0-1000'})\n        j.create_script(self.true_script)\n        jid = self.server.submit(j)\n        jid = jid.split(']')[0]\n        done = False\n        for i in range(0, 1000):\n            # Build the subjob id and ensure it is complete\n            sjid = jid + str(i) + \"]\"\n            # If the array job is finished, it and all subjobs will be put\n            # into the F state. This can happen while checking the last\n            # couple of subjobs. 
If this happens, we need to check for the\n            # F state instead of the X state.\n            if done:\n                self.server.expect(\n                    JOB, {'job_state': 'F'}, id=sjid, extend='x')\n            else:\n                try:\n                    self.server.expect(\n                        JOB, {'job_state': 'X'}, id=sjid, extend='tx')\n                except PtlExpectError:\n                    # The expect failed, maybe because the array job finished\n                    # Check for the F state for this and future subjobs.\n                    done = True\n                    self.server.expect(\n                        JOB, {'job_state': 'F'}, id=sjid, extend='x')\n            # Check the logs for IOError every 100 subjobs, to reduce time of\n            # a failing test.\n            if i % 100 == 0:\n                self.mom.log_match(msg=\"IOError\", starttime=now,\n                                   existence=False, max_attempts=1, n=\"ALL\")\n        # Check the logs one last time to ensure it passed\n        self.mom.log_match(msg=\"IOError\", starttime=now,\n                           existence=False, max_attempts=10, n=\"ALL\")\n"
  },
  {
    "path": "test/tests/performance/pbs_client_nagle_performance.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport os\nimport timeit\n\nfrom tests.performance import *\n\n\nclass TestClientNagles(TestPerformance):\n\n    \"\"\"\n    Testing the effect of Nagle's algorithm on CLI performance\n    \"\"\"\n    time_command = 'time'\n\n    def setUp(self):\n        \"\"\"\n        Base class method override;\n        builds the absolute path of commands to execute\n        \"\"\"\n        TestPerformance.setUp(self)\n        self.time_command = self.du.which(exe=\"time\")\n        if self.time_command == \"time\":\n            self.skipTest(\"Time command not found\")\n\n    def tearDown(self):\n        \"\"\"\n        Clean up jobs\n        \"\"\"\n\n        TestPerformance.tearDown(self)\n        self.server.cleanup_jobs()\n\n    def compute_qdel_time(self):\n        \"\"\"\n        Compute qdel time in secs.\n        Returns:\n              -1 on qdel failure\n        \"\"\"\n        qsel_list = self.server.select()\n        qsel_list = \" \".join(qsel_list)\n        command = self.time_command\n        command += \" -f \\\"%e\\\" \"\n        command += os.path.join(self.server.client_conf['PBS_EXEC'],\n                                'bin',\n                                'qdel ')\n        command += qsel_list\n        # Compute elapsed time without the -E option\n        qdel_perf = self.du.run_cmd(self.server.hostname,\n                                    command,\n                                    as_script=True,\n                                    runas=TEST_USER1,\n                                    logerr=False)\n        if qdel_perf['rc'] != 0:\n            return -1\n\n        return qdel_perf['err'][0]\n\n    def submit_jobs(self, user, num_jobs):\n        \"\"\"\n        Submit the specified number of simple jobs\n        Arguments:\n             user - user under which to submit jobs\n             num_jobs - number of jobs to submit\n        \"\"\"\n        job = Job(user)\n        job.set_sleep_time(1)\n        for _ in range(num_jobs):\n            self.server.submit(job)\n\n    @timeout(600)\n    def test_qdel_nagle_perf(self):\n        \"\"\"\n        Submit 500 jobs, measure qdel performance before/after adding managers\n        \"\"\"\n\n        # Adding to managers ensures that packets are larger than 1023 bytes,\n        # which triggers Nagle's algorithm and slows down the communication.\n        # The effect on TCP seems irreversible until the server is restarted,\n        # so in this test case we restart the server so that any effects from\n        # earlier test cases/runs do not interfere.\n\n        # Baseline qdel performance with scheduling false and managers unset\n        # Restart server to ensure no effect from earlier tests/operations\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        self.server.manager(MGR_CMD_UNSET, SERVER, 'managers')\n        self.server.restart()\n\n        self.submit_jobs(TEST_USER1, 500)\n        qdel_perf = self.compute_qdel_time()\n        self.assertTrue((qdel_perf != -1), \"qdel command failed\")\n\n        # Add to the managers list so that TCP packets are now larger and\n        # trigger Nagle's algorithm\n        manager = TEST_USER1.name + '@' + self.server.hostname\n        self.server.manager(MGR_CMD_SET, SERVER, {\n                            'managers': (INCR, manager)}, sudo=True)\n\n        # Remeasure the qdel performance\n        self.submit_jobs(TEST_USER1, 500)\n        qdel_perf2 = self.compute_qdel_time()\n        self.assertTrue((qdel_perf2 != -1), \"qdel command failed\")\n\n        self.logger.info(\"qdel performance: \" + str(qdel_perf))\n        self.logger.info(\n            \"qdel performance after setting manager: \" + str(qdel_perf2))\n        self.perf_test_result(float(qdel_perf), \"qdel_perf\", \"sec\")\n        self.perf_test_result(float(qdel_perf2),\n                              \"qdel_perf_with_manager\", \"sec\")\n\n    @timeout(900)\n    def test_qsub_perf(self):\n        \"\"\"\n        Test that qsub performance improves when run\n        with the -f option\n        \"\"\"\n\n        # Restart server\n        self.server.restart()\n\n        # Submit a job 500 times and time it\n        cmd = os.path.join(\n            self.server.client_conf['PBS_EXEC'],\n            'bin',\n            'qsub -- /bin/true >/dev/null')\n        start_time = timeit.default_timer()\n        for x in range(500):\n            rv = self.du.run_cmd(self.server.hostname, cmd)\n            self.assertTrue(rv['rc'] == 0)\n        elap_time1 = timeit.default_timer() - float(start_time)\n\n        # Submit a job with -f 500 times and time it\n        cmd = os.path.join(\n            self.server.client_conf['PBS_EXEC'],\n            'bin',\n            'qsub -f -- /bin/true >/dev/null')\n        start_time = timeit.default_timer()\n        for x in range(500):\n            rv = self.du.run_cmd(self.server.hostname,\n                                 cmd)\n            self.assertTrue(rv['rc'] == 0)\n        elap_time2 = timeit.default_timer() - float(start_time)\n        self.logger.info(\"Time taken by qsub -f is \" + str(elap_time2) +\n                         \" and time taken by qsub is \" + str(elap_time1))\n        self.perf_test_result(elap_time1, \"qsub_time\", \"sec\")\n        self.perf_test_result(elap_time2, \"qsub_-f_time\", \"sec\")\n"
  },
  {
    "path": "test/tests/performance/pbs_equiv_classes_perf.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport os\n\nfrom tests.performance import *\n\n\nclass TestJobEquivClassPerf(TestPerformance):\n\n    \"\"\"\n    Test job equivalence class performance\n    \"\"\"\n\n    def setUp(self):\n        TestPerformance.setUp(self)\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events': 2047})\n\n        # Create vnodes\n        a = {'resources_available.ncpus': 1, 'resources_available.mem': '8gb'}\n        self.mom.create_vnodes(a, 10000, expect=False,\n                               sharednode=False)\n        self.server.expect(NODE, {'state=free': 10001})\n\n    def run_n_get_cycle_time(self):\n        \"\"\"\n        Run a scheduling cycle and calculate its duration\n        \"\"\"\n\n        t = time.time()\n\n        # Run only one cycle\n        self.server.manager(MGR_CMD_SET, MGR_OBJ_SERVER,\n                            {'scheduling': 'True'})\n        self.server.manager(MGR_CMD_SET, MGR_OBJ_SERVER,\n                            {'scheduling': 'False'})\n\n        # Wait for cycle to finish\n        self.scheduler.log_match(\"Leaving Scheduling Cycle\", starttime=t,\n                                 max_attempts=300, interval=3)\n\n        c = self.scheduler.cycles(lastN=1)[0]\n        cycle_time = c.end - c.start\n\n        return cycle_time\n\n    @timeout(2000)\n    def test_basic(self):\n        \"\"\"\n        Test basic functionality of 
job equivalence classes.\n        Pre test: one class per job\n        Post test: one class for all jobs\n        \"\"\"\n\n        self.server.manager(MGR_CMD_SET, MGR_OBJ_SERVER,\n                            {'scheduling': 'False'})\n\n        num_jobs = 5000\n        jids = []\n        # Create num_jobs different equivalence classes.  These jobs can't run\n        # because there aren't 2cpu nodes.  This bypasses the quick\n        # 'can I run?' check the scheduler does.  It will better show the\n        # equivalence class performance.\n        for n in range(num_jobs):\n            a = {'Resource_List.select': str(n + 1) + ':ncpus=2',\n                 \"Resource_List.place\": \"free\"}\n            J = Job(TEST_USER, attrs=a)\n            jid = self.server.submit(J)\n            jids += [jid]\n\n        cycle1_time = self.run_n_get_cycle_time()\n\n        # Make all jobs into one equivalence class\n        a = {'Resource_List.select': str(num_jobs) + \":ncpus=2\",\n             \"Resource_List.place\": \"free\"}\n        for n in range(num_jobs):\n            self.server.alterjob(jids[n], a)\n\n        cycle2_time = self.run_n_get_cycle_time()\n\n        self.logger.info('Cycle 1: %d Cycle 2: %d Cycle time difference: %d' %\n                         (cycle1_time, cycle2_time, cycle1_time - cycle2_time))\n        self.assertGreaterEqual(cycle1_time, cycle2_time)\n        time_diff = cycle1_time - cycle2_time\n        self.perf_test_result(cycle1_time, \"different_equiv_class\", \"sec\")\n        self.perf_test_result(cycle2_time, \"single_equiv_class\", \"sec\")\n        self.perf_test_result(time_diff,\n                              \"time_diff_bn_single_diff_equiv_classes\", \"sec\")\n\n    @timeout(10000)\n    def test_server_queue_limit(self):\n        \"\"\"\n        Test the performance with hard and soft limits\n        on resources\n        \"\"\"\n\n        # Create workq2\n        self.server.manager(MGR_CMD_CREATE, QUEUE,\n                          
  {'queue_type': 'e', 'started': 'True',\n                             'enabled': 'True'}, id='workq2')\n\n        # Set queue limit\n        a = {\n            'max_run': ('[o:PBS_ALL=100],[g:PBS_GENERIC=20],'\n                        '[u:PBS_GENERIC=20],[g:%s = 8],[u:%s=10]' %\n                        (str(TSTGRP1), str(TEST_USER1)))}\n        self.server.manager(MGR_CMD_SET, QUEUE,\n                            a, id='workq2')\n\n        a = {'max_run_res.ncpus':\n             '[o:PBS_ALL=100],[g:PBS_GENERIC=50],\\\n             [u:PBS_GENERIC=20],[g:%s=13],[u:%s=12]' %\n             (str(TSTGRP1), str(TEST_USER1))}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, id='workq2')\n\n        a = {'max_run_res_soft.ncpus':\n             '[o:PBS_ALL=100],[g:PBS_GENERIC=30],\\\n             [u:PBS_GENERIC=10],[g:%s=10],[u:%s=10]' %\n             (str(TSTGRP1), str(TEST_USER1))}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, id='workq2')\n\n        # Set server limits\n        a = {\n            'max_run': '[o:PBS_ALL=100],[g:PBS_GENERIC=50],\\\n            [u:PBS_GENERIC=20],[g:%s=13],[u:%s=13]' %\n            (str(TSTGRP1), str(TEST_USER1))}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {'max_run_soft':\n             '[o:PBS_ALL=50],[g:PBS_GENERIC=25],[u:PBS_GENERIC=10],\\\n             [g:%s=10],[u:%s=10]' %\n             (str(TSTGRP1), str(TEST_USER1))}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Turn scheduling off\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'false'})\n\n        # Submit jobs as pbsuser1 from group tstgrp01 in workq2\n        for x in range(100):\n            a = {'Resource_List.select': '1:ncpus=2',\n                 'Resource_List.walltime': int(x),\n                 'group_list': TSTGRP1, ATTR_q: 'workq2'}\n            J = Job(TEST_USER1, attrs=a)\n            for y in range(100):\n                self.server.submit(J)\n\n        # Get time for ~100 classes\n        cyc1 = 
self.run_n_get_cycle_time()\n\n        # Submit jobs as pbsuser1 from group tstgrp02 in workq2\n        for x in range(100):\n            a = {'Resource_List.select': '1:ncpus=2',\n                 'Resource_List.walltime': int(x),\n                 'group_list': TSTGRP2, ATTR_q: 'workq2'}\n            J = Job(TEST_USER1, attrs=a)\n            for y in range(100):\n                self.server.submit(J)\n\n        # Get time for ~200 classes\n        cyc2 = self.run_n_get_cycle_time()\n\n        # Submit jobs as pbsuser2 from tstgrp01 in workq2\n        for x in range(100):\n            a = {'Resource_List.select': '1:ncpus=2',\n                 'Resource_List.walltime': int(x),\n                 'group_list': TSTGRP1, ATTR_q: 'workq2'}\n            J = Job(TEST_USER2, attrs=a)\n            for y in range(100):\n                self.server.submit(J)\n\n        # Get time for ~300 classes\n        cyc3 = self.run_n_get_cycle_time()\n\n        # Submit jobs as pbsuser2 from tstgrp03 in workq2\n        for x in range(100):\n            a = {'Resource_List.select': '1:ncpus=2',\n                 'Resource_List.walltime': int(x),\n                 'group_list': TSTGRP3, ATTR_q: 'workq2'}\n            J = Job(TEST_USER2, attrs=a)\n            for y in range(100):\n                self.server.submit(J)\n\n        # Get time for ~400 classes\n        cyc4 = self.run_n_get_cycle_time()\n\n        # Submit jobs as pbsuser1 from tstgrp01 in workq\n        for x in range(100):\n            a = {'Resource_List.select': '1:ncpus=2',\n                 'Resource_List.walltime': int(x),\n                 'group_list': TSTGRP1, ATTR_q: 'workq'}\n            J = Job(TEST_USER1, attrs=a)\n            for y in range(100):\n                self.server.submit(J)\n\n        # Get time for ~500 classes\n        cyc5 = self.run_n_get_cycle_time()\n\n        # Submit jobs as pbsuser1 from tstgrp02 in workq\n        for x in range(100):\n            a = {'Resource_List.select': '1:ncpus=2',\n  
               'Resource_List.walltime': int(x),\n                 'group_list': TSTGRP2, ATTR_q: 'workq'}\n            J = Job(TEST_USER1, attrs=a)\n            for y in range(100):\n                self.server.submit(J)\n\n        # Get time for 60k jobs for ~600 classes\n        cyc6 = self.run_n_get_cycle_time()\n\n        # Submit jobs as pbsuser2 from tstgrp01 in workq\n        for x in range(100):\n            a = {'Resource_List.select': '1:ncpus=2',\n                 'Resource_List.walltime': int(x),\n                 'group_list': TSTGRP1, ATTR_q: 'workq'}\n            J = Job(TEST_USER2, attrs=a)\n            for y in range(100):\n                self.server.submit(J)\n\n        # Get time for 70k jobs for ~700 classes\n        cyc7 = self.run_n_get_cycle_time()\n\n        # Submit jobs as pbsuser2 from tstgrp03 in workq\n        for x in range(100):\n            a = {'Resource_List.select': '1:ncpus=2',\n                 'Resource_List.walltime': int(x),\n                 'group_list': TSTGRP3, ATTR_q: 'workq'}\n            J = Job(TEST_USER2, attrs=a)\n            for y in range(100):\n                self.server.submit(J)\n\n        # Get time for 80k jobs for ~800 classes\n        cyc8 = self.run_n_get_cycle_time()\n\n        # Print the time taken for all the classes and compare\n        # it against previous releases\n        self.logger.info(\"time taken for \\n100 classes is %d\"\n                         \"\\n200 classes is %d,\"\n                         \"\\n300 classes is %d,\"\n                         \"\\n400 classes is %d,\"\n                         \"\\n500 classes is %d,\"\n                         \"\\n600 classes is %d,\"\n                         \"\\n700 classes is %d,\"\n                         \"\\n800 classes is %d\"\n                         % (cyc1, cyc2, cyc3, cyc4, cyc5, cyc6, cyc7, cyc8))\n        self.perf_test_result(cyc1, \"100_class_time\", \"sec\")\n        self.perf_test_result(cyc2, \"200_class_time\", \"sec\")\n    
    self.perf_test_result(cyc3, \"300_class_time\", \"sec\")\n        self.perf_test_result(cyc4, \"400_class_time\", \"sec\")\n        self.perf_test_result(cyc5, \"500_class_time\", \"sec\")\n        self.perf_test_result(cyc6, \"600_class_time\", \"sec\")\n        self.perf_test_result(cyc7, \"700_class_time\", \"sec\")\n        self.perf_test_result(cyc8, \"800_class_time\", \"sec\")\n"
  },
  {
    "path": "test/tests/performance/pbs_history_cleanup_quasihang.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.performance import *\n\n\nclass HistoryCleanupQuasihang(TestPerformance):\n    \"\"\"\n    This test suite aims at testing the quasihang caused by a lot of jobs\n    in the history.\n    Without the fix, the server takes a lot of time to respond to a client.\n    With the fix, the amount of time is significantly reduced.\n    \"\"\"\n\n    def setUp(self):\n        TestPerformance.setUp(self)\n\n        a = {'job_history_enable': 'True', \"job_history_duration\": \"10:00:00\"}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {ATTR_rescavail + '.ncpus': 100}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n\n    @timeout(18000)\n    def test_time_for_stat_during_history_cleanup(self):\n        \"\"\"\n        This test case submits 50k very short jobs so that the job history\n        has a lot of jobs in it.\n        After submitting the jobs, the job_history_duration is reduced so\n        that the server could start purging the job history.\n\n        Another job is submitted and stat-ed. 
The test case then finds the\n        amount of time the server takes to respond.\n\n        The test case is not designed to pass/fail on builds with/without\n        the fix.\n        \"\"\"\n        test = ['echo test\\n']\n        # Submit a lot of jobs.\n        for _ in range(0, 10):\n            for _ in range(0, 10):\n                for _ in range(0, 500):\n                    j = Job(TEST_USER, attrs={ATTR_k: 'oe'})\n                    j.create_script(body=test)\n                    jid = self.server.submit(j)\n                time.sleep(1)\n            self.server.expect(JOB, {'job_state': 'F'}, id=jid, extend='x',\n                               offset=10, interval=2, max_attempts=100)\n\n        a = {\"job_history_duration\": \"00:00:05\"}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        j = Job(TEST_USER)\n        j.set_sleep_time(10000)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        now1 = 0\n        now2 = 0\n        i = 0\n\n        while now2 - now1 < 2 and i < 125:\n            time.sleep(1)\n            now1 = int(time.time())\n            self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n            now2 = int(time.time())\n            i += 1\n\n        self.logger.info(\"qstat took %d seconds to return\\n\",\n                         (now2 - now1))\n        self.perf_test_result((now2 - now1), \"qstat_return_time\", \"sec\")\n"
  },
  {
    "path": "test/tests/performance/pbs_jobperf.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport os\nimport subprocess\nimport multiprocessing\nfrom threading import Thread\nfrom tests.performance import *\nfrom ptl.utils.pbs_logutils import PBSLogAnalyzer\n\n\nclass TestJobPerf(TestPerformance):\n    \"\"\"\n    Performance Testsuite for job related tests\n    \"\"\"\n\n    def setUp(self):\n        TestPerformance.setUp(self)\n\n    def set_test_config(self, config):\n        \"\"\"\n        Sets test level configuration\n        \"\"\"\n        testconfig = {}\n        for key, value in config.items():\n            if isinstance(value, int):\n                testconfig[key] = int(\n                    self.conf[key]) if key in self.conf else value\n            else:\n                testconfig[key] = self.conf[key] if key in self.conf else value\n        self.set_test_measurements({\"test_config\": testconfig})\n        return testconfig\n\n    def submit_jobs(self, user, num_jobs, qsub_exec=None,\n                    qsub_exec_arg=None):\n        \"\"\"\n        Submit jobs with provided arguments and job\n        \"\"\"\n        job = 'sudo -u ' + str(user) + ' ' + \\\n              str(os.path.join(\n                  self.server.pbs_conf['PBS_EXEC'], 'bin', 'qsub'))\n        if qsub_exec_arg is None:\n            job += ' -koe -o /dev/null -e /dev/null'\n        else:\n            job += ' ' + qsub_exec_arg\n        if qsub_exec is 
None:\n            job += ' -- /bin/true'\n        else:\n            job += ' ' + qsub_exec\n        for _ in range(num_jobs):\n            subprocess.call(job, shell=True)\n\n    def wait_for_job_finish(self, jobiter=True):\n        \"\"\"\n        Wait for all submitted jobs to finish\n        \"\"\"\n        total_jobs = self.config['No_of_jobs_per_user']\n        total_jobs *= self.config['No_of_users']\n        if jobiter:\n            total_jobs *= self.config['No_of_iterations']\n        self.server.expect(JOB, {'job_state=F': total_jobs}, extend='x',\n                           interval=20, trigger_sched_cycle=False)\n\n    def delete_jobs_per_user(self, users, num_users):\n        \"\"\"\n        Delete jobs faster by passing multiple job IDs at once\n        \"\"\"\n        bin_path = str(os.path.join(\n            self.server.pbs_conf['PBS_EXEC'], 'bin'))\n        qdel = str(os.path.join(bin_path, 'qdel'))\n        qdel = qdel + ' -W force'\n        cmd = qdel + \" `\" + str(os.path.join(bin_path, 'qselect'))\n        for u in range(0, num_users):\n            qdel_cmd = cmd + \" -u\" + str(users[u]) + \" `\"\n            subprocess.call(qdel_cmd, shell=True)\n\n    @timeout(3600)\n    def test_job_performance_sched_off(self):\n        \"\"\"\n        Test job submission rate when scheduling is off by\n        submitting 1k jobs and then turning on scheduling with 1k ncpus.\n        Test Params:  'No_of_jobs_per_user': 100,\n                      'No_of_tries': 10,\n                      'No_of_iterations': 1,\n                      'No_of_users': 10,\n                      'svr_log_level': 511,\n                      'qsub_exec': '-- /bin/true',\n                      'qsub_exec_arg': None,\n                      'No_of_moms': 21,\n                      'No_of_ncpus_per_node': 48\n        \"\"\"\n        testconfig = {'No_of_jobs_per_user': 100,\n                      'No_of_tries': 10,\n                      'No_of_iterations': 1,\n                      'No_of_users': 
10,\n                      'svr_log_level': 511,\n                      'qsub_exec': '-- /bin/true',\n                      'qsub_exec_arg': None,\n                      'No_of_moms': 21,\n                      'No_of_ncpus_per_node': 48}\n        config = self.set_test_config(testconfig)\n\n        avg_sub_time = []\n        avg_run_rate = []\n        j = 0\n        counts = self.server.counter(NODE, {'state': 'free'})\n        if counts['state=free'] < config['No_of_moms']:\n            a = {'resources_available.ncpus': config['No_of_ncpus_per_node']}\n            self.server.create_moms(\n                'mom', a,\n                config['No_of_moms'], self.mom)\n        while j < config['No_of_tries']:\n            sub_rate = []\n            run_rate = []\n            i = 0\n            users = [TEST_USER1, TEST_USER2, TEST_USER3, TEST_USER4,\n                     TEST_USER5, TEST_USER6, TEST_USER7, TEST_USER,\n                     TST_USR, TST_USR1]\n            a = {'log_events': config['svr_log_level']}\n            self.server.manager(MGR_CMD_SET, SERVER, a)\n            a = {'scheduling': 'False'}\n            self.server.manager(MGR_CMD_SET, SERVER, a)\n            while i < config['No_of_iterations']:\n                start = time.time()\n                os.chdir('/tmp')\n                thrds = []\n                for u in range(0, config['No_of_users']):\n                    t = multiprocessing.Process(target=self.submit_jobs, args=(\n                                                users[u],\n                                                config['No_of_jobs_per_user'],\n                                                config['qsub_exec'],\n                                                config['qsub_exec_arg']))\n                    t.start()\n                    thrds.append(t)\n                for t in thrds:\n                    t.join()\n                stop = time.time()\n                res = stop - start\n                resps = 
(config['No_of_jobs_per_user'] *\n                         config['No_of_users']) / res\n                sub_rate.append(resps)\n                i += 1\n            a = {'scheduling': 'True'}\n            self.server.manager(MGR_CMD_SET, SERVER, a)\n            self.scheduler.log_match(\n                \"Starting Scheduling Cycle\", starttime=int(start),\n                max_attempts=30)\n            a = {'scheduling': 'False'}\n            self.server.manager(MGR_CMD_SET, SERVER, a)\n            self.scheduler.log_match(\"Leaving Scheduling Cycle\",\n                                     starttime=int(start) + 1,\n                                     max_attempts=30, interval=1)\n            sclg = PBSLogAnalyzer()\n            md = sclg.analyze_scheduler_log(\n                filename=self.scheduler.logfile, start=int(start))\n            rr = md['summary']['job_run_rate']\n            value = rr.strip().split('/', 1)\n            rr = float(value[0])\n            run_rate.append(rr)\n            avg_sub_time.extend(sub_rate)\n            avg_run_rate.extend(run_rate)\n            j = j + 1\n        self.perf_test_result(avg_sub_time, \"job_submission\", \"jobs/sec\")\n        self.perf_test_result(avg_run_rate, \"job_run_rate\", \"jobs/sec\")\n\n    @timeout(6000)\n    def test_job_performance_sched_on(self):\n        \"\"\"\n        Test job submit_rate, run_rate, throughput by submitting 10k jobs\n        when scheduling is on with 1k ncpus.\n        Test Params: 'No_of_jobs_per_user': 100,\n                      'No_of_tries': 1,\n                      'No_of_iterations': 10,\n                      'No_of_users': 10,\n                      'svr_log_level': 511,\n                      'qsub_exec': '-- /bin/true',\n                      'qsub_exec_arg': None,\n                      'No_of_moms': 21,\n                      'No_of_ncpus_per_node': 48\n        \"\"\"\n        testconfig = {'No_of_jobs_per_user': 100,\n                      'No_of_tries': 1,\n      
                'No_of_iterations': 10,\n                      'No_of_users': 10,\n                      'svr_log_level': 511,\n                      'qsub_exec': '-- /bin/true',\n                      'qsub_exec_arg': None,\n                      'No_of_moms': 21,\n                      'No_of_ncpus_per_node': 48}\n        self.config = self.set_test_config(testconfig)\n        num_ncpus = self.config['No_of_ncpus_per_node']\n        avg_sub_time = []\n        avg_run_rate = []\n        avg_throughput = []\n        j = 0\n        counts = self.server.counter(NODE, {'state': 'free'})\n        if counts['state=free'] < self.config['No_of_moms']:\n            a = {'resources_available.ncpus': num_ncpus}\n            self.server.create_moms(\n                'mom', a,\n                self.config['No_of_moms'], self.mom)\n        while j < self.config['No_of_tries']:\n            sub_time = []\n            run_rate = []\n            throughput = []\n            i = 0\n            log_start = time.time()\n            users = [TEST_USER1, TEST_USER2, TEST_USER3, TEST_USER4,\n                     TEST_USER5, TEST_USER6, TEST_USER7, TEST_USER,\n                     TST_USR, TST_USR1]\n            a = {'log_events': self.config['svr_log_level']}\n            self.server.manager(MGR_CMD_SET, SERVER, a)\n            a = {'job_history_enable': True}\n            self.server.manager(MGR_CMD_SET, SERVER, a)\n            while i < self.config['No_of_iterations']:\n                os.chdir('/tmp')\n                thrds = []\n                start = time.time()\n                for u in range(0, self.config['No_of_users']):\n                    t = multiprocessing.Process(target=self.submit_jobs, args=(\n                        users[u], self.config['No_of_jobs_per_user'],\n                        self.config['qsub_exec'],\n                        self.config['qsub_exec_arg']))\n                    t.start()\n                    thrds.append(t)\n                for t in thrds:\n  
                  t.join()\n                i = i + 1\n                stop = time.time()\n                res = stop - start\n                resps = (self.config['No_of_jobs_per_user'] *\n                         self.config['No_of_users']) / res\n                sub_time.append(resps)\n            j += 1\n            self.wait_for_job_finish()\n            a = {'job_history_enable': False}\n            self.server.manager(MGR_CMD_SET, SERVER, a)\n            sclg = PBSLogAnalyzer()\n            md = sclg.analyze_scheduler_log(\n                filename=self.scheduler.logfile, start=int(log_start))\n            rr = md['summary']['job_run_rate']\n            value = rr.strip().split('/', 1)\n            rr = float(value[0])\n            run_rate.append(rr)\n            md = sclg.analyze_server_log(\n                filename=self.server.logfile, start=int(log_start))\n            jobs_ended = md['num_jobs_ended']\n            if jobs_ended:\n                jtr = md['job_throughput']\n                value = jtr.strip().split('/', 1)\n                jtr = float(value[0])\n                throughput.append(jtr)\n            else:\n                throughput.append(0)\n            avg_run_rate.extend(run_rate)\n            avg_sub_time.extend(sub_time)\n            avg_throughput.extend(throughput)\n        self.perf_test_result(avg_sub_time, \"job_submission\", \"jobs/sec\")\n        self.perf_test_result(avg_run_rate, \"job_run_rate\", \"jobs/sec\")\n        self.perf_test_result(avg_throughput, \"job_throughput\", \"jobs/sec\")\n\n    def qstat_jobs(self, user, num_stats, qstat_arg=None):\n        for _ in range(num_stats):\n            qstat = 'sudo -u ' + str(user) + ' ' + \\\n                str(os.path.join(\n                    self.server.pbs_conf['PBS_EXEC'], 'bin', 'qstat'))\n            if qstat_arg:\n                qstat = qstat + ' ' + qstat_arg\n            self.logger.info(qstat)\n            subprocess.call(qstat, shell=True)\n\n    
@timeout(3600)\n    def test_qstat_perf(self):\n        \"\"\"\n        Test time taken by 100 qstat -f with 1k jobs in queue\n        Test Params: 'No_of_jobs_per_user': 100,\n                      'No_of_tries': 1,\n                      'No_of_iterations': 10,\n                      'No_of_qstats': 10,\n                      'No_of_users': 10,\n                      'svr_log_level': 511,\n                      'qstat_args': '-f',\n                      'qsub_exec_arg': None\n        \"\"\"\n        testconfig = {'No_of_jobs_per_user': 100,\n                      'No_of_tries': 1,\n                      'No_of_iterations': 10,\n                      'No_of_qstats': 10,\n                      'No_of_users': 10,\n                      'svr_log_level': 511,\n                      'qstat_args': '-f',\n                      'qsub_exec_arg': None}\n        config = self.set_test_config(testconfig)\n\n        avg_stat_time = []\n        j = 0\n        while j < config['No_of_tries']:\n            i = 0\n            stat_time = []\n            users = [TEST_USER1, TEST_USER2, TEST_USER3, TEST_USER4,\n                     TEST_USER5, TEST_USER6, TEST_USER7, TEST_USER,\n                     TST_USR, TST_USR1]\n            a = {'log_events': config['svr_log_level']}\n            self.server.manager(MGR_CMD_SET, SERVER, a)\n            a = {'scheduling': 'False'}\n            self.server.manager(MGR_CMD_SET, SERVER, a)\n            thrds = []\n            for u in range(0, config['No_of_users']):\n                t = Thread(target=self.submit_jobs, args=(\n                    users[u], config['No_of_jobs_per_user'], None,\n                    config['qsub_exec_arg']))\n                t.start()\n                thrds.append(t)\n            for t in thrds:\n                t.join()\n\n            while i < config['No_of_iterations']:\n                os.chdir('/tmp')\n                start = time.time()\n                thrds = []\n                for u in range(0, 
config['No_of_users']):\n                    t = Thread(target=self.qstat_jobs, args=(\n                        users[u], config['No_of_qstats'],\n                        config['qstat_args']))\n                    t.start()\n                    thrds.append(t)\n                for t in thrds:\n                    t.join()\n                stop = time.time()\n                res = stop - start\n                i = i + 1\n                stat_time.append(res)\n            j = j + 1\n            avg_stat_time.extend(stat_time)\n        self.delete_jobs_per_user(users, config['No_of_users'])\n        self.perf_test_result(avg_stat_time, \"job_stats\", \"secs\")\n\n    @timeout(3600)\n    def test_qstat_hist_perf(self):\n        \"\"\"\n        Test time taken by 100 qstat -fx with 1k jobs in history\n        Test Params: 'No_of_jobs_per_user': 100,\n                      'No_of_tries': 1,\n                      'No_of_iterations': 10,\n                      'No_of_qstats': 10,\n                      'No_of_users': 10,\n                      'svr_log_level': 511,\n                      'qstat_args': '-fx',\n                      'qsub_exec_arg': None,\n                      'No_of_moms': 21,\n                      'No_of_ncpus_per_node': 48\n        \"\"\"\n        testconfig = {'No_of_jobs_per_user': 100,\n                      'No_of_tries': 1,\n                      'No_of_iterations': 10,\n                      'No_of_qstats': 10,\n                      'No_of_users': 10,\n                      'svr_log_level': 511,\n                      'qstat_args': '-fx',\n                      'qsub_exec_arg': None,\n                      'No_of_moms': 21,\n                      'No_of_ncpus_per_node': 48}\n        self.config = self.set_test_config(testconfig)\n\n        avg_stat_time = []\n        num_ncpus = self.config['No_of_ncpus_per_node']\n        j = 0\n        counts = self.server.counter(NODE, {'state': 'free'})\n        if counts['state=free'] < 
self.config['No_of_moms']:\n            a = {'resources_available.ncpus': num_ncpus}\n            self.server.create_moms(\n                'mom', a,\n                self.config['No_of_moms'], self.mom)\n        while j < self.config['No_of_tries']:\n            stat_time = []\n            i = 0\n            users = [TEST_USER1, TEST_USER2, TEST_USER3, TEST_USER4,\n                     TEST_USER5, TEST_USER6, TEST_USER7, TEST_USER,\n                     TST_USR, TST_USR1]\n            a = {'log_events': self.config['svr_log_level']}\n            self.server.manager(MGR_CMD_SET, SERVER, a)\n            a = {'scheduling': 'False'}\n            self.server.manager(MGR_CMD_SET, SERVER, a)\n            thrds = []\n            for u in range(0, self.config['No_of_users']):\n                t = Thread(target=self.submit_jobs, args=(\n                    users[u], self.config['No_of_jobs_per_user'], None,\n                    self.config['qsub_exec_arg']))\n                t.start()\n                thrds.append(t)\n            for t in thrds:\n                t.join()\n            a = {'job_history_enable': 'True'}\n            self.server.manager(MGR_CMD_SET, SERVER, a)\n            a = {'scheduling': 'True'}\n            self.server.manager(MGR_CMD_SET, SERVER, a)\n            self.wait_for_job_finish(jobiter=False)\n            while i < self.config['No_of_iterations']:\n                os.chdir('/tmp')\n                start = time.time()\n                thrds = []\n                for u in range(0, self.config['No_of_users']):\n                    t = Thread(target=self.qstat_jobs, args=(\n                        users[u], self.config['No_of_qstats'],\n                        self.config['qstat_args']))\n                    t.start()\n                    thrds.append(t)\n                for t in thrds:\n                    t.join()\n                stop = time.time()\n                res = stop - start\n                i = i + 1\n                
stat_time.append(res)\n            j = j + 1\n            avg_stat_time.extend(stat_time)\n            a = {'job_history_enable': False}\n            self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.perf_test_result(avg_stat_time, \"job_stats_history\", \"secs\")\n\n    @timeout(3600)\n    def common_server_restart(self, option=None):\n        \"\"\"\n        Test server restart performance with a huge number of jobs in queue\n        \"\"\"\n        testconfig = {'No_of_jobs_per_user': 10000,\n                      'No_of_tries': 1,\n                      'No_of_iterations': 1,\n                      'No_of_users': 10,\n                      'svr_log_level': 511}\n        config = self.set_test_config(testconfig)\n\n        avg_result = []\n        j = 0\n        while j < config['No_of_tries']:\n            result = []\n            i = 0\n            users = [TEST_USER1, TEST_USER2, TEST_USER3, TEST_USER4,\n                     TEST_USER5, TEST_USER6, TEST_USER7, TEST_USER,\n                     TST_USR, TST_USR1]\n            a = {'log_events': config['svr_log_level']}\n            self.server.manager(MGR_CMD_SET, SERVER, a)\n            a = {'scheduling': 'False'}\n            self.server.manager(MGR_CMD_SET, SERVER, a)\n            thrds = []\n            for u in range(0, config['No_of_users']):\n                t = Thread(target=self.submit_jobs,\n                           args=(users[u], config['No_of_jobs_per_user']))\n                t.start()\n                thrds.append(t)\n            for t in thrds:\n                t.join()\n            while i < config['No_of_iterations']:\n                if option == 'kill':\n                    self.server.stop('-KILL')\n                else:\n                    self.server.stop()\n                start = time.time()\n                self.server.start()\n                stop = time.time()\n                res = stop - start\n                i = i + 1\n                result.append(res)\n            j = 
j + 1\n            avg_result.extend(result)\n            self.delete_jobs_per_user(users, config['No_of_users'])\n        self.perf_test_result(avg_result, \"server_restart_perf\", \"secs\")\n\n    @timeout(3600)\n    def test_server_restart_kill(self):\n        \"\"\"\n        Test server kill and restart performance with 100k jobs in queue\n        Test Params: 'No_of_jobs_per_user': 10000,\n                      'No_of_tries': 1,\n                      'No_of_iterations': 1,\n                      'No_of_users': 10,\n                      'svr_log_level': 511\n        \"\"\"\n        self.common_server_restart(option='kill')\n\n    @timeout(3600)\n    def test_server_restart(self):\n        \"\"\"\n        Test server restart performance with 100k jobs in queue\n        Test Params: 'No_of_jobs_per_user': 10000,\n                      'No_of_tries': 1,\n                      'No_of_iterations': 1,\n                      'No_of_users': 10,\n                      'svr_log_level': 511\n        \"\"\"\n        self.common_server_restart()\n\n    @timeout(3600)\n    def test_qdel_perf(self):\n        \"\"\"\n        Test job deletion performance for 10k queued jobs\n        Test Params: 'No_of_jobs_per_user': 1000,\n                      'No_of_tries': 1,\n                      'No_of_users': 10,\n                      'svr_log_level': 511,\n                      'qdel_exec_args': None\n        \"\"\"\n        testconfig = {'No_of_jobs_per_user': 1000,\n                      'No_of_tries': 1,\n                      'No_of_users': 10,\n                      'svr_log_level': 511,\n                      'qdel_exec_args': None}\n        config = self.set_test_config(testconfig)\n\n        avg_qdel_time = []\n        j = 0\n        while j < config['No_of_tries']:\n            qdel_time = []\n            users = [TEST_USER1, TEST_USER2, TEST_USER3, TEST_USER4,\n                     TEST_USER5, TEST_USER6, TEST_USER7, TEST_USER,\n                     TST_USR, TST_USR1]\n        
    a = {'log_events': config['svr_log_level']}\n            self.server.manager(MGR_CMD_SET, SERVER, a)\n            a = {'scheduling': 'False'}\n            self.server.manager(MGR_CMD_SET, SERVER, a)\n            thrds = []\n            for u in range(0, config['No_of_users']):\n                t = Thread(target=self.submit_jobs,\n                           args=(users[u], config['No_of_jobs_per_user']))\n                t.start()\n                thrds.append(t)\n            for t in thrds:\n                t.join()\n\n            start = time.time()\n            bin_path = str(os.path.join(\n                self.server.pbs_conf['PBS_EXEC'], 'bin'))\n            qdel = str(os.path.join(bin_path, 'qdel'))\n            if config['qdel_exec_args']:\n                qdel = qdel + ' -W force'\n            cmd = qdel + \" `\" + str(os.path.join(bin_path, 'qselect')) + \"`\"\n            subprocess.call(cmd, shell=True)\n            stop = time.time()\n            res = stop - start\n            qdel_time.append(res)\n            j = j + 1\n            avg_qdel_time.extend(qdel_time)\n        self.perf_test_result(avg_qdel_time, \"job_deletion\", \"secs\")\n\n    @timeout(3600)\n    def test_qdel_hist_perf(self):\n        \"\"\"\n        Test job deletion performance for 10k history jobs\n        Test Params: 'No_of_jobs_per_user': 1000,\n                      'No_of_tries': 1,\n                      'No_of_users': 10,\n                      'svr_log_level': 511,\n                      'No_of_moms': 21,\n                      'No_of_ncpus_per_node': 48\n        \"\"\"\n        testconfig = {'No_of_jobs_per_user': 1000,\n                      'No_of_tries': 1,\n                      'No_of_users': 10,\n                      'svr_log_level': 511,\n                      'No_of_moms': 21,\n                      'No_of_ncpus_per_node': 48}\n        self.config = self.set_test_config(testconfig)\n        num_ncpus = self.config['No_of_ncpus_per_node']\n        avg_qdel_time 
= []\n        qdel_time = []\n        j = 0\n        counts = self.server.counter(NODE, {'state': 'free'})\n        if counts['state=free'] < self.config['No_of_moms']:\n            a = {'resources_available.ncpus': num_ncpus}\n            self.server.create_moms(\n                'mom', a,\n                self.config['No_of_moms'], self.mom)\n        while j < self.config['No_of_tries']:\n            qdel_time = []\n            users = [TEST_USER1, TEST_USER2, TEST_USER3, TEST_USER4,\n                     TEST_USER5, TEST_USER6, TEST_USER7, TEST_USER,\n                     TST_USR, TST_USR1]\n            a = {'log_events': self.config['svr_log_level']}\n            self.server.manager(MGR_CMD_SET, SERVER, a)\n            a = {'scheduling': 'False'}\n            self.server.manager(MGR_CMD_SET, SERVER, a)\n            thrds = []\n            for u in range(0, self.config['No_of_users']):\n                t = Thread(target=self.submit_jobs,\n                           args=(users[u], self.config['No_of_jobs_per_user']))\n                t.start()\n                thrds.append(t)\n            for t in thrds:\n                t.join()\n            a = {'job_history_enable': 'True'}\n            self.server.manager(MGR_CMD_SET, SERVER, a)\n            a = {'scheduling': 'True'}\n            self.server.manager(MGR_CMD_SET, SERVER, a)\n            self.wait_for_job_finish(jobiter=False)\n            start = time.time()\n            bin_path = str(os.path.join(\n                self.server.pbs_conf['PBS_EXEC'], 'bin'))\n            qdel = str(os.path.join(bin_path, 'qdel')) + ' -x'\n            cmd = qdel + \" `\" + \\\n                str(os.path.join(bin_path, 'qselect')) + \" -x\" + \"`\"\n            self.logger.info(cmd)\n            subprocess.call(cmd, shell=True)\n            stop = time.time()\n            res = stop - start\n            qdel_time.append(res)\n            j = j + 1\n            avg_qdel_time.extend(qdel_time)\n        
self.perf_test_result(avg_qdel_time, \"job_deletion_history\", 'secs')\n"
  },
  {
    "path": "test/tests/performance/pbs_preemptperformance.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.performance import *\nfrom ptl.utils.pbs_logutils import PBSLogUtils\n\n\nclass TestPreemptPerformance(TestPerformance):\n\n    \"\"\"\n    Check the preemption performance\n    \"\"\"\n    lu = PBSLogUtils()\n\n    def setUp(self):\n        TestPerformance.setUp(self)\n        # set poll cycle to a high value because mom spends a lot of time\n        # in gathering job's resources used. 
We don't need that in this test\n        self.mom.add_config({'$min_check_poll': 7200, '$max_check_poll': 9600})\n\n    def create_workload_and_preempt(self):\n        a = {\n            'queue_type': 'execution',\n            'started': 'True',\n            'enabled': 'True'\n        }\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, 'workq2')\n\n        a = {'max_run_res_soft.ncpus': \"[u:PBS_GENERIC=2]\"}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, 'workq')\n\n        a = {'max_run_res.mem': \"[u:\" + str(TEST_USER) + \"=1500mb]\"}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        a = {'Resource_List.select': '1:ncpus=3:mem=90mb',\n             'Resource_List.walltime': 9999}\n        for _ in range(8):\n            j = Job(TEST_USER, attrs=a)\n            j.set_sleep_time(9999)\n            self.server.submit(j)\n\n        for _ in range(7):\n            j = Job(TEST_USER1, attrs=a)\n            j.set_sleep_time(9999)\n            self.server.submit(j)\n\n        sched_off = {'scheduling': 'False'}\n        self.server.manager(MGR_CMD_SET, SERVER, sched_off)\n\n        a = {'Resource_List.select': '1:ncpus=3',\n             'Resource_List.walltime': 9999}\n        for _ in range(775):\n            j = Job(TEST_USER, attrs=a)\n            j.set_sleep_time(9999)\n            self.server.submit(j)\n\n        for _ in range(800):\n            j = Job(TEST_USER1, attrs=a)\n            j.set_sleep_time(9999)\n            self.server.submit(j)\n\n        sched_on = {'scheduling': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, sched_on)\n\n        self.server.expect(JOB, {'job_state=R': 1590},\n                           offset=15, interval=20)\n\n        a = {'Resource_List.select': '1:ncpus=90:mem=1350mb',\n             'Resource_List.walltime': 9999, ATTR_queue: 'workq2'}\n        j1 = Job(TEST_USER, attrs=a)\n        j1.set_sleep_time(9999)\n        j1id = self.server.submit(j1)\n\n        self.server.expect(JOB, {'job_state': 
'R'}, id=j1id,\n                           offset=15, interval=5)\n        self.server.expect(JOB, {'job_state=S': 20}, interval=5)\n\n        (_, str1) = self.scheduler.log_match(j1id + \";Considering job to run\",\n                                             id=j1id, n='ALL',\n                                             max_attempts=1, interval=2)\n        (_, str2) = self.scheduler.log_match(j1id + \";Job run\",\n                                             id=j1id, n='ALL',\n                                             max_attempts=1, interval=2)\n\n        date_time1 = str1.split(\";\")[0]\n        date_time2 = str2.split(\";\")[0]\n        epoch1 = self.lu.convert_date_time(date_time1)\n        epoch2 = self.lu.convert_date_time(date_time2)\n        time_diff = epoch2 - epoch1\n        self.logger.info('#' * 80)\n        self.logger.info('#' * 80)\n        res_str = \"RESULT: THE TIME TAKEN IS : \" + str(time_diff) + \" SECONDS\"\n        self.logger.info(res_str)\n        self.logger.info('#' * 80)\n        self.logger.info('#' * 80)\n\n    @timeout(3600)\n    @tags('sched', 'scheduling_policy')\n    def test_preemption_with_limits(self):\n        \"\"\"\n        Measure the time scheduler takes to preempt when the high priority\n        job hits soft/hard limits under a considerable amount of workload.\n        \"\"\"\n        a = {'resources_available.ncpus': 4800,\n             'resources_available.mem': '2800mb'}\n        self.mom.create_vnodes(a, 1, usenatvnode=True)\n        p = '\"express_queue, normal_jobs, server_softlimits, queue_softlimits\"'\n        a = {'preempt_prio': p}\n        self.server.manager(MGR_CMD_SET, SCHED, a, runas=ROOT_USER)\n\n        self.create_workload_and_preempt()\n\n    @timeout(3600)\n    @tags('sched', 'scheduling_policy')\n    def test_preemption_with_insufficient_resc(self):\n        \"\"\"\n        Measure the time scheduler takes to preempt when the high priority\n        job hits soft/hard limits and there is 
scarcity of resources\n        under a considerable amount of workload.\n        \"\"\"\n        a = {'resources_available.ncpus': 4800,\n             'resources_available.mem': '1500mb'}\n        self.mom.create_vnodes(a, 1, usenatvnode=True)\n        p = '\"express_queue, normal_jobs, server_softlimits, queue_softlimits\"'\n        a = {'preempt_prio': p}\n        self.server.manager(MGR_CMD_SET, SCHED, a, runas=ROOT_USER)\n\n        self.create_workload_and_preempt()\n\n    @timeout(3600)\n    @tags('sched', 'scheduling_policy')\n    def test_insufficient_resc_non_cons(self):\n        \"\"\"\n        Submit a number of low priority jobs and then submit a high priority\n        job that needs a non-consumable resource which is assigned to the\n        last running job. This will make the scheduler go through all\n        running jobs to find the preemptable job.\n        \"\"\"\n\n        a = {'type': 'string', 'flag': 'h'}\n        self.server.manager(MGR_CMD_CREATE, RSC, a, id='qlist')\n\n        a = {ATTR_rescavail + \".qlist\": \"list1\",\n             ATTR_rescavail + \".ncpus\": \"8\"}\n        self.mom.create_vnodes(\n            a, 400, additive=True, fname=\"vnodedef1\")\n\n        a = {ATTR_rescavail + \".qlist\": \"list2\",\n             ATTR_rescavail + \".ncpus\": \"1\"}\n        self.mom.create_vnodes(\n            a, 1, additive=True, fname=\"vnodedef2\")\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        a = {ATTR_l + '.select': '1:ncpus=1:qlist=list2'}\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(3000)\n\n        # Add qlist to the resources scheduler checks for\n        self.scheduler.add_resource('qlist')\n\n        jid = self.server.submit(j)\n        time.sleep(1)\n\n        a = {ATTR_l + '.select': '1:ncpus=1:qlist=list1'}\n        for _ in range(3200):\n            j = Job(TEST_USER, attrs=a)\n            j.set_sleep_time(3000)\n            self.server.submit(j)\n\n        
self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.server.expect(JOB, {'job_state=R': 3201}, interval=20,\n                           offset=15)\n\n        qname = 'highp'\n        a = {'queue_type': 'execution', 'priority': '200',\n             'started': 'True', 'enabled': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, qname)\n\n        a = {ATTR_l + '.select': '1:ncpus=1:qlist=list2',\n             ATTR_q: 'highp'}\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(3000)\n        jid_highp = self.server.submit(j)\n\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid_highp, interval=10)\n        self.server.expect(JOB, {ATTR_state: (MATCH_RE, 'S|Q')}, id=jid)\n\n        search_str = jid_highp + \";Considering job to run\"\n        (_, str1) = self.scheduler.log_match(search_str,\n                                             id=jid_highp, n='ALL',\n                                             max_attempts=1, interval=2)\n        search_str = jid_highp + \";Job run\"\n        (_, str2) = self.scheduler.log_match(search_str,\n                                             id=jid_highp, n='ALL',\n                                             max_attempts=1, interval=2)\n        date_time1 = str1.split(\";\")[0]\n        date_time2 = str2.split(\";\")[0]\n        epoch1 = self.lu.convert_date_time(date_time1)\n        epoch2 = self.lu.convert_date_time(date_time2)\n        time_diff = epoch2 - epoch1\n        self.logger.info('#' * 80)\n        self.logger.info('#' * 80)\n        res_str = \"RESULT: PREEMPTION TOOK: \" + str(time_diff) + \" SECONDS\"\n        self.logger.info(res_str)\n        self.logger.info('#' * 80)\n        self.logger.info('#' * 80)\n        self.perf_test_result(time_diff,\n                              \"preempt_time_nonconsumable_resc\", \"sec\")\n\n    @timeout(3600)\n    @tags('sched', 'scheduling_policy')\n    def test_insufficient_resc_multiple_non_cons(self):\n        \"\"\"\n 
       Submit a number of low priority jobs and then submit a high priority\n        job that needs a non-consumable resource in 2 chunks. These resources\n        are assigned to last two running jobs. This will make scheduler go\n        through all running jobs to find preemptable jobs.\n        \"\"\"\n\n        a = {'type': 'string', 'flag': 'h'}\n        self.server.manager(MGR_CMD_CREATE, RSC, a, id='qlist')\n\n        a = {ATTR_rescavail + \".qlist\": \"list1\",\n             ATTR_rescavail + \".ncpus\": \"8\"}\n        self.mom.create_vnodes(\n            a, 400, additive=True, fname=\"vnodedef1\")\n\n        a = {ATTR_rescavail + \".qlist\": \"list2\",\n             ATTR_rescavail + \".ncpus\": \"1\"}\n        self.mom.create_vnodes(\n            a, 1, additive=True, fname=\"vnodedef2\")\n\n        a = {ATTR_rescavail + \".qlist\": \"list3\",\n             ATTR_rescavail + \".ncpus\": \"1\"}\n        self.mom.create_vnodes(\n            a, 1, additive=True, fname=\"vnodedef3\")\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        a = {ATTR_l + '.select': '1:ncpus=1:qlist=list2'}\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(3000)\n\n        b = {ATTR_l + '.select': '1:ncpus=1:qlist=list3'}\n        j2 = Job(TEST_USER, attrs=b)\n        j2.set_sleep_time(3000)\n\n        # Add qlist to the resources scheduler checks for\n        self.scheduler.add_resource('qlist')\n\n        jid = self.server.submit(j)\n        jid2 = self.server.submit(j2)\n\n        a = {ATTR_l + '.select': '1:ncpus=1:qlist=list1'}\n        for _ in range(3200):\n            j = Job(TEST_USER, attrs=a)\n            j.set_sleep_time(3000)\n            self.server.submit(j)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        self.server.expect(JOB, {'job_state=R': 3202}, interval=20,\n                           offset=15)\n\n        qname = 'highp'\n        a = {'queue_type': 'execution', 'priority': 
'200',\n             'started': 'True', 'enabled': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, qname)\n\n        a = {ATTR_l + '.select': '1:ncpus=1:qlist=list2+1:ncpus=1:qlist=list3',\n             ATTR_q: 'highp'}\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(3000)\n        jid_highp = self.server.submit(j)\n\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid_highp, interval=10)\n        self.server.expect(JOB, {ATTR_state: (MATCH_RE, 'S|Q')}, id=jid)\n        self.server.expect(JOB, {ATTR_state: (MATCH_RE, 'S|Q')}, id=jid2)\n\n        search_str = jid_highp + \";Considering job to run\"\n        (_, str1) = self.scheduler.log_match(search_str,\n                                             id=jid_highp, n='ALL',\n                                             max_attempts=1, interval=2)\n        search_str = jid_highp + \";Job run\"\n        (_, str2) = self.scheduler.log_match(search_str,\n                                             id=jid_highp, n='ALL',\n                                             max_attempts=1, interval=2)\n        date_time1 = str1.split(\";\")[0]\n        date_time2 = str2.split(\";\")[0]\n        epoch1 = self.lu.convert_date_time(date_time1)\n        epoch2 = self.lu.convert_date_time(date_time2)\n        time_diff = epoch2 - epoch1\n        self.logger.info('#' * 80)\n        self.logger.info('#' * 80)\n        res_str = \"RESULT: PREEMPTION TOOK: \" + str(time_diff) + \" SECONDS\"\n        self.logger.info(res_str)\n        self.logger.info('#' * 80)\n        self.logger.info('#' * 80)\n        self.perf_test_result(time_diff,\n                              \"preempt_time_multiplenonconsumable_resc\",\n                              \"sec\")\n\n    @timeout(3600)\n    @tags('sched', 'scheduling_policy')\n    def test_insufficient_server_resc(self):\n        \"\"\"\n        Submit a number of low priority jobs and then make the last low\n        priority job to consume some server level 
resources. Submit a\n        high priority job that requests this server level resource\n        and measure the time it takes for preemption.\n        \"\"\"\n\n        a = {'type': 'long', 'flag': 'q'}\n        self.server.manager(MGR_CMD_CREATE, RSC, a, id='foo')\n\n        a = {ATTR_rescavail + \".ncpus\": \"8\"}\n        self.mom.create_vnodes(\n            a, 401, additive=True, fname=\"vnodedef1\")\n\n        # Make resource foo available on server\n        a = {ATTR_rescavail + \".foo\": 50, 'scheduling': 'False'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n\n        # Add foo to the resources scheduler checks for\n        self.scheduler.add_resource('foo')\n\n        a = {ATTR_l + '.select': '1:ncpus=1', ATTR_l + '.foo': 25}\n        j = Job(TEST_USER, attrs=a)\n        j.set_sleep_time(3000)\n        jid = self.server.submit(j)\n        time.sleep(1)\n\n        a = {ATTR_l + '.select': '1:ncpus=1'}\n        for _ in range(3200):\n            j = Job(TEST_USER, attrs=a)\n            j.set_sleep_time(3000)\n            self.server.submit(j)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.expect(JOB, {'job_state=R': 3201}, interval=20,\n                           offset=15)\n\n        qname = 'highp'\n        a = {'queue_type': 'execution', 'priority': '200',\n             'started': 'True', 'enabled': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, qname)\n\n        a = {ATTR_l + '.select': '1:ncpus=1', ATTR_l + '.foo': 50,\n             ATTR_q: 'highp'}\n        j2 = Job(TEST_USER, attrs=a)\n        j2.set_sleep_time(3000)\n        jid_highp = self.server.submit(j2)\n\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid_highp, interval=10)\n        self.server.expect(JOB, {ATTR_state: (MATCH_RE, 'S|Q')}, id=jid)\n\n        search_str = jid_highp + \";Considering job to run\"\n        (_, str1) = self.scheduler.log_match(search_str,\n                                           
  id=jid_highp, n='ALL',\n                                             max_attempts=1, interval=2)\n        search_str = jid_highp + \";Job run\"\n        (_, str2) = self.scheduler.log_match(search_str,\n                                             id=jid_highp, n='ALL',\n                                             max_attempts=1, interval=2)\n        date_time1 = str1.split(\";\")[0]\n        date_time2 = str2.split(\";\")[0]\n        epoch1 = self.lu.convert_date_time(date_time1)\n        epoch2 = self.lu.convert_date_time(date_time2)\n        time_diff = epoch2 - epoch1\n        self.logger.info('#' * 80)\n        self.logger.info('#' * 80)\n        res_str = \"RESULT: PREEMPTION TOOK: \" + str(time_diff) + \" SECONDS\"\n        self.logger.info(res_str)\n        self.logger.info('#' * 80)\n        self.logger.info('#' * 80)\n        self.perf_test_result(time_diff, \"High_priority_preemption\", \"sec\")\n\n    @timeout(7200)\n    def test_preemption_basic(self):\n        \"\"\"\n        Submit a number of low priority jobs and then submit a high priority\n        job.\n        \"\"\"\n\n        a = {ATTR_rescavail + \".ncpus\": \"8\"}\n        self.mom.create_vnodes(\n            a, 400, additive=True, fname=\"vnodedef1\")\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        a = {ATTR_l + '.select': '1:ncpus=1'}\n        for _ in range(3200):\n            j = Job(TEST_USER, attrs=a)\n            j.set_sleep_time(3000)\n            self.server.submit(j)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.expect(JOB, {'job_state=R': 3200}, interval=20,\n                           offset=15)\n\n        qname = 'highp'\n        a = {'queue_type': 'execution', 'priority': '200',\n             'started': 'True', 'enabled': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, qname)\n\n        ncpus = 20\n        S_jobs = 20\n        for _ in range(5):\n            a = {ATTR_l + 
'.select': ncpus,\n                 ATTR_q: 'highp'}\n            j = Job(TEST_USER, attrs=a)\n            j.set_sleep_time(3000)\n            jid_highp = self.server.submit(j)\n\n            self.server.expect(JOB, {ATTR_state: 'R'}, id=jid_highp,\n                               interval=10)\n            self.server.expect(JOB, {'job_state=S': S_jobs}, interval=5)\n            search_str = jid_highp + \";Considering job to run\"\n            (_, str1) = self.scheduler.log_match(search_str,\n                                                 id=jid_highp, n='ALL',\n                                                 max_attempts=1)\n            search_str = jid_highp + \";Job run\"\n            (_, str2) = self.scheduler.log_match(search_str,\n                                                 id=jid_highp, n='ALL',\n                                                 max_attempts=1)\n            date_time1 = str1.split(\";\")[0]\n            date_time2 = str2.split(\";\")[0]\n            epoch1 = self.lu.convert_date_time(date_time1)\n            epoch2 = self.lu.convert_date_time(date_time2)\n            time_diff = epoch2 - epoch1\n            self.logger.info('#' * 80)\n            self.logger.info('#' * 80)\n            res_str = \"RESULT: PREEMPTION OF \" + str(ncpus) + \" JOBS TOOK: \" \\\n                + str(time_diff) + \" SECONDS\"\n            self.logger.info(res_str)\n            self.logger.info('#' * 80)\n            self.logger.info('#' * 80)\n            ncpus *= 3\n            S_jobs += ncpus\n            self.perf_test_result(time_diff, \"preemption_time\", \"sec\")\n\n    @timeout(3600)\n    def test_preemption_with_unrelated_soft_limits(self):\n        \"\"\"\n        Measure the time scheduler takes to preempt when there are user\n        soft limits in the system and preemptor and preemptee jobs are\n        submitted as different user.\n        \"\"\"\n        a = {'resources_available.ncpus': 4,\n             'resources_available.mem': '6400mb'}\n  
      self.mom.create_vnodes(a, 500, usenatvnode=False,\n                               sharednode=False)\n        p = \"express_queue, normal_jobs, server_softlimits, queue_softlimits\"\n        a = {'preempt_prio': p}\n        self.server.manager(MGR_CMD_SET, SCHED, a)\n\n        a = {'max_run_res_soft.ncpus': \"[u:\" + str(TEST_USER) + \"=1]\"}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, 'workq')\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        # submit a bunch of jobs as TEST_USER2\n        a = {ATTR_l + '.select=1:ncpus': 1}\n        for _ in range(2000):\n            j = Job(TEST_USER2, attrs=a)\n            j.set_sleep_time(3000)\n            self.server.submit(j)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.expect(JOB, {'job_state=R': 2000}, interval=10, offset=5,\n                           max_attempts=100)\n\n        qname = 'highp'\n        a = {'queue_type': 'execution', 'priority': '200',\n             'started': 'True', 'enabled': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, qname)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        a = {ATTR_l + '.select=2000:ncpus': 1, ATTR_q: qname}\n        j = Job(TEST_USER3, attrs=a)\n        j.set_sleep_time(3000)\n        hjid = self.server.submit(j)\n        scycle = time.time()\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        (_, str1) = self.scheduler.log_match(hjid + \";Considering job to run\")\n\n        date_time1 = str1.split(\";\")[0]\n        epoch1 = self.lu.convert_date_time(date_time1)\n        # make sure 2000 jobs were suspended\n        self.server.expect(JOB, {'job_state=S': 2000}, interval=10, offset=5,\n                           max_attempts=100)\n\n        # check when server received the request\n        (_, req_svr) = self.server.log_match(\";Type 93 request received\",\n                                     
        starttime=epoch1)\n        date_time_svr = req_svr.split(\";\")[0]\n        epoch_svr = self.lu.convert_date_time(date_time_svr)\n        # check when scheduler gets first reply from server\n        (_, resp_sched) = self.scheduler.log_match(\";Job preempted \",\n                                                   starttime=epoch1)\n        date_time_sched = resp_sched.split(\";\")[0]\n        epoch_sched = self.lu.convert_date_time(date_time_sched)\n        svr_delay = epoch_sched - epoch_svr\n\n        # record the start time of high priority job\n        (_, str2) = self.scheduler.log_match(hjid + \";Job run\",\n                                             n='ALL', interval=2)\n        date_time2 = str2.split(\";\")[0]\n        epoch2 = self.lu.convert_date_time(date_time2)\n        time_diff = epoch2 - epoch1\n        self.logger.info('#' * 80)\n        self.logger.info('#' * 80)\n        res_str = \"RESULT: TOTAL PREEMPTION TIME: \" + \\\n                  str(time_diff) + \" SECONDS, SERVER TOOK: \" + \\\n                  str(svr_delay) + \" , SCHED TOOK: \" + \\\n                  str(time_diff - svr_delay)\n        self.logger.info(res_str)\n        self.logger.info('#' * 80)\n        self.logger.info('#' * 80)\n\n    @timeout(3600)\n    def test_preemption_with_user_soft_limits(self):\n        \"\"\"\n        Measure the time scheduler takes to preempt when there are user\n        soft limits in the system for one user and only some preemptee jobs\n        are submitted as that user.\n        \"\"\"\n        a = {'resources_available.ncpus': 4,\n             'resources_available.mem': '6400mb'}\n        self.mom.create_vnodes(a, 500, usenatvnode=False,\n                               sharednode=False)\n        p = \"express_queue, normal_jobs, server_softlimits, queue_softlimits\"\n        a = {'preempt_prio': p}\n        self.server.manager(MGR_CMD_SET, SCHED, a)\n\n        a = {'max_run_res_soft.ncpus': \"[u:\" + str(TEST_USER) + \"=1]\"}\n       
 self.server.manager(MGR_CMD_SET, QUEUE, a, 'workq')\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        # submit a bunch of jobs as different users\n        a = {ATTR_l + '.select=1:ncpus': 1}\n        usr_list = [TEST_USER, TEST_USER2, TEST_USER3, TEST_USER4]\n        num_usr = len(usr_list)\n        for ind in range(2000):\n            j = Job(usr_list[ind % num_usr], attrs=a)\n            j.set_sleep_time(3000)\n            self.server.submit(j)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.expect(JOB, {'job_state=R': 2000}, interval=10, offset=5,\n                           max_attempts=100)\n\n        qname = 'highp'\n        a = {'queue_type': 'execution', 'priority': '200',\n             'started': 'True', 'enabled': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, qname)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        a = {ATTR_l + '.select=2000:ncpus': 1, ATTR_q: qname}\n        j = Job(TEST_USER5, attrs=a)\n        j.set_sleep_time(3000)\n        hjid = self.server.submit(j)\n        scycle = time.time()\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n\n        (_, str1) = self.scheduler.log_match(hjid + \";Considering job to run\")\n\n        date_time1 = str1.split(\";\")[0]\n        epoch1 = self.lu.convert_date_time(date_time1)\n        # make sure 2000 jobs were suspended\n        self.server.expect(JOB, {'job_state=S': 2000}, interval=10, offset=5,\n                           max_attempts=100)\n\n        # check when server received the request\n        (_, req_svr) = self.server.log_match(\";Type 93 request received\",\n                                             starttime=epoch1)\n        date_time_svr = req_svr.split(\";\")[0]\n        epoch_svr = self.lu.convert_date_time(date_time_svr)\n        # check when scheduler gets first reply from server\n        (_, resp_sched) = 
self.scheduler.log_match(\";Job preempted \",\n                                                   starttime=epoch1)\n        date_time_sched = resp_sched.split(\";\")[0]\n        epoch_sched = self.lu.convert_date_time(date_time_sched)\n        svr_delay = epoch_sched - epoch_svr\n\n        # record the start time of high priority job\n        (_, str2) = self.scheduler.log_match(hjid + \";Job run\",\n                                             n='ALL', interval=2)\n        date_time2 = str2.split(\";\")[0]\n        epoch2 = self.lu.convert_date_time(date_time2)\n        time_diff = epoch2 - epoch1\n        self.logger.info('#' * 80)\n        self.logger.info('#' * 80)\n        res_str = \"RESULT: TOTAL PREEMPTION TIME: \" + \\\n                  str(time_diff) + \" SECONDS, SERVER TOOK: \" + \\\n                  str(svr_delay) + \" , SCHED TOOK: \" + \\\n                  str(time_diff - svr_delay)\n        self.logger.info(res_str)\n        self.logger.info('#' * 80)\n        self.logger.info('#' * 80)\n        self.perf_test_result(time_diff, \"preempt_time_soft_limits\", \"sec\")\n\n    def tearDown(self):\n        TestPerformance.tearDown(self)\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        job_ids = self.server.select()\n        self.server.delete(id=job_ids)\n"
  },
  {
    "path": "test/tests/performance/pbs_qstat_performance.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nimport os\n\nfrom tests.performance import *\n\n\nclass TestQstatPerformance(TestPerformance):\n\n    \"\"\"\n    Testing Qstat Performance\n    \"\"\"\n    time_command = 'time'\n\n    def setUp(self):\n        \"\"\"\n            Base class method overriding;\n            builds absolute paths of commands to execute\n        \"\"\"\n        TestPerformance.setUp(self)\n        self.qstat_query_list = []\n        self.time_command = self.du.which(exe=\"time\")\n        if self.time_command == \"time\":\n            self.skipTest(\"Time command not found\")\n\n        qselect_command = os.path.join(\n            self.server.client_conf['PBS_EXEC'],\n            'bin',\n            'qselect')\n\n        self.qstat_query_list.append(\" `\" + qselect_command + \"`\")\n        self.qstat_query_list.append(\" workq `\" + qselect_command + \"`\")\n        self.qstat_query_list.append(\" `\" + qselect_command + \"` workq\")\n\n        self.qstat_query_list.append(\" -s `\" + qselect_command + \"`\")\n        self.qstat_query_list.append(\" -f `\" + qselect_command + \"` workq\")\n        self.qstat_query_list.append(\" -f workq `\" + qselect_command + \"`\")\n\n    def compute_elapse_time(self, query):\n        \"\"\"\n        Computes qstat time in secs\n        Arguments :\n             query - qstat query to run\n        return :\n              -1 on qstat 
fail\n        \"\"\"\n        command = self.time_command\n        command += \" -f \\\"%e\\\" \"\n        command += os.path.join(\n            self.server.client_conf['PBS_EXEC'],\n            'bin',\n            'qstat')\n        command += query\n\n        # compute elapse time without -E option\n        without_E_option = self.du.run_cmd(self.server.hostname,\n                                           command,\n                                           as_script=True,\n                                           logerr=False)\n        if without_E_option['rc'] != 0:\n            return -1\n        # compute elapse time with -E option\n        command += \" -E\"\n        with_E_option = self.du.run_cmd(self.server.hostname,\n                                        command,\n                                        as_script=True,\n                                        logerr=False)\n\n        if with_E_option['rc'] != 0:\n            return -1\n        self.logger.info(\"Without E option :\" + without_E_option['err'][0])\n        self.logger.info(\"With E option    :\" + with_E_option['err'][0])\n        measure = \"elapse_time qstat\" + query.split(\"`\")[0]\n        self.perf_test_result(float(without_E_option['err'][0]),\n                              measure, \"sec\")\n        self.perf_test_result(float(with_E_option['err'][0]),\n                              measure + \"-E\", \"sec\")\n        self.assertTrue(\n            (without_E_option['err'][0] >= with_E_option['err'][0]),\n            \"Qstat command with option : \" + query + \" Failed\")\n\n    def submit_jobs(self, user, num_jobs):\n        \"\"\"\n        Submit specified number of simple jobs\n        Arguments :\n             user - user under which qstat to run\n             num_jobs - number of jobs to submit and stat\n        \"\"\"\n        job = Job(user)\n        job.set_sleep_time(1000)\n        for _ in range(num_jobs):\n            self.server.submit(job)\n\n    def 
submit_and_stat_jobs(self, number_jobs):\n        \"\"\"\n        Submit the specified number of simple jobs and stat them\n        Arguments :\n             number_jobs - number of jobs to submit and stat\n        \"\"\"\n        self.submit_jobs(TEST_USER1, number_jobs)\n        for query in self.qstat_query_list:\n            self.compute_elapse_time(query)\n\n    @timeout(600)\n    def test_with_100_jobs(self):\n        \"\"\"\n        Submit 100 jobs and compute performance of qstat\n        \"\"\"\n        self.submit_and_stat_jobs(100)\n\n    @timeout(600)\n    def test_with_1000_jobs(self):\n        \"\"\"\n        Submit 1000 jobs and compute performance of qstat\n        \"\"\"\n        self.submit_and_stat_jobs(1000)\n"
  },
  {
    "path": "test/tests/performance/pbs_qsub_performance.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nfrom tests.performance import *\n\n\nclass TestQsubPerformance(TestPerformance):\n\n    def setUp(self):\n        TestPerformance.setUp(self)\n        attr = {'scheduling': 'False'}\n        self.server.manager(MGR_CMD_SET, SERVER, attr)\n\n    def submit_jobs(self, qsub_exec_arg=None, env=None):\n        \"\"\"\n        Submits 1000 jobs according to the arguments provided\n        and returns the submission time\n        :param qsub_exec_arg: Arguments to qsub.\n        :type qsub_exec_arg: String. Defaults to None.\n        :param env: Environment variable to be set before submitting jobs.\n        :type env: Dictionary. 
Defaults to None.\n        \"\"\"\n        qsub_path = os.path.join(\n                  self.server.pbs_conf['PBS_EXEC'], 'bin', 'qsub')\n\n        if qsub_exec_arg is not None:\n            job_sub_arg = qsub_path + ' ' + qsub_exec_arg\n            env = {'VARIABLE': 'b' * 13000}\n        else:\n            job_sub_arg = qsub_path\n\n        job_sub_arg += ' -- /bin/sleep 100'\n\n        start_time = time.time()\n        for _ in range(1000):\n            qsub = self.du.run_cmd(self.server.hostname,\n                                   job_sub_arg,\n                                   env=env,\n                                   as_script=True,\n                                   logerr=False)\n            if qsub['rc'] != 0:\n                return -1\n        end_time = time.time()\n        sub_time = round(end_time - start_time, 2)\n        return sub_time\n\n    def test_submit_large_env(self):\n        \"\"\"\n        This test case does the following\n        1. Submit 1000 jobs\n        2. Set env variable with huge value\n        3. Submit 1000 jobs again with -V as argument to qsub\n        4. Collect time taken for both submissions\n        \"\"\"\n        sub_time_without_env = self.submit_jobs()\n        sub_time_with_env = self.submit_jobs(qsub_exec_arg=\"-V\")\n\n        self.logger.info(\n            \"Submission time without env is %.2f and with env is %.2f sec\"\n            % (sub_time_without_env, sub_time_with_env))\n        self.perf_test_result(sub_time_without_env,\n                              \"submission_time_without_env\", \"sec\")\n        self.perf_test_result(sub_time_with_env,\n                              \"submission_time_with_env\", \"sec\")\n"
  },
  {
    "path": "test/tests/performance/pbs_rerunjob_file_transfer_perf.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.performance import *\n\n\n@requirements(num_moms=2)\nclass JobRerunFileTransferPerf(TestPerformance):\n    \"\"\"\n    This test suite is for testing the performance of job script\n    and job output (stdout) transfers in case of rerun\n    \"\"\"\n\n    def setUp(self):\n        TestPerformance.setUp(self)\n\n        if len(self.moms) != 2:\n            self.logger.error('test requires two MoMs as input, ' +\n                              '  use -p moms=<mom1>:<mom2>')\n            self.assertEqual(len(self.moms), 2)\n\n        # PBSTestSuite returns the moms passed in as parameters as dictionary\n        # of hostname and MoM object\n        self.momA = self.moms.values()[0]\n        self.momB = self.moms.values()[1]\n        self.momA.delete_vnode_defs()\n        self.momB.delete_vnode_defs()\n\n        self.hostA = self.momA.shortname\n        self.hostB = self.momB.shortname\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'log_events': 4095})\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'job_requeue_timeout': 1000})\n\n    @timeout(600)\n    def test_huge_job_file(self):\n        j = Job(TEST_USER, attrs={\n                ATTR_N: 'huge_job_file', 'Resource_List.select': '1:host=%s'\n                % self.momB.shortname})\n\n        test = []\n        test += ['dd if=/dev/zero of=file bs=1024 count=0 seek=250000\\n']\n        
test += ['cat file\\n']\n        test += ['sleep 10000\\n']\n\n        j.create_script(test, hostname=self.server.client)\n\n        now1 = int(time.time())\n        jid = self.server.submit(j)\n        self.server.expect(\n            JOB, {'job_state': 'R', 'substate': 42}, id=jid,\n            max_attempts=30, interval=5)\n        now2 = int(time.time())\n        self.logger.info(\"Job %s took %d seconds to start\\n\",\n                         jid, (now2 - now1))\n        self.perf_test_result((now2 - now1), \"job_start_time\", \"sec\")\n\n        # give a few seconds to job to create large spool file\n        time.sleep(5)\n\n        now1 = int(time.time())\n        self.server.rerunjob(jid)\n        self.server.expect(\n            JOB, {'job_state': 'R', 'substate': 42}, id=jid,\n            max_attempts=500, interval=5)\n        now2 = int(time.time())\n        self.logger.info(\"Job %s took %d seconds to rerun\\n\",\n                         jid, (now2 - now1))\n        self.perf_test_result((now2 - now1), \"job_return_time\", \"sec\")\n"
  },
  {
    "path": "test/tests/performance/pbs_runjobwait_perf.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nfrom tests.performance import *\n\n\nclass TestRunjobWaitPerf(TestPerformance):\n    \"\"\"\n    Performance tests related to performance testing of sched job_run_wait attr\n    \"\"\"\n\n    def common_test(self, rw_val):\n        \"\"\"\n        Common testing method for job_run_wait tests\n        \"\"\"\n        # Create 100 vnodes with 100 ncpus each, capable of running 10k jobs\n        a = {\"resources_available.ncpus\": 100}\n        self.mom.create_vnodes(\n            a, 100, sharednode=False, expect=False)\n        self.server.expect(NODE, {'state=free': (GE, 100)})\n\n        # Start pbs_mom in mock run mode\n        self.mom.stop()\n        mompath = os.path.join(self.server.pbs_conf[\"PBS_EXEC\"], \"sbin\",\n                               \"pbs_mom\")\n        cmd = [mompath, \"-m\"]\n        self.du.run_cmd(cmd=cmd, sudo=True)\n        self.assertTrue(self.mom.isUp())\n        self.server.expect(NODE, {'resources_available.ncpus=100': (GE, 100)})\n\n        self.server.manager(MGR_CMD_SET, SCHED, {'job_run_wait': rw_val},\n                            id=\"default\")\n\n        self.server.manager(MGR_CMD_SET, SERVER, {\"scheduling\": \"False\"})\n        a = {'Resource_List.select': '1:ncpus=1'}\n        for i in range(10000):\n            self.server.submit(Job(attrs=a))\n\n        t = time.time()\n        
self.scheduler.run_scheduling_cycle()\n\n        c = self.scheduler.cycles(lastN=1)[0]\n        t = c.end - c.start\n        self.logger.info('#' * 80)\n        m = \"Time taken for job_run_wait=%s: %s\" % (rw_val, str(t))\n        self.logger.info(m)\n        self.logger.info('#' * 80)\n\n        return t\n\n    @timeout(7200)\n    def test_rw_none(self):\n        \"\"\"\n        Test performance of job_run_wait=none\n        \"\"\"\n        t = self.common_test(\"none\")\n        self.perf_test_result(t, \"time_taken_run_wait_none\", \"sec\")\n\n    @timeout(7200)\n    def test_rw_runjobhook(self):\n        \"\"\"\n        Test performance of job_run_wait=runjob_hook\n        \"\"\"\n        # Create runjob hook so that sched doesn't upgrade runjob_hook to none\n        hook_txt = \"\"\"\nimport pbs\n\npbs.event().accept()\n\"\"\"\n        hk_attrs = {'event': 'runjob', 'enabled': 'True'}\n        self.server.create_import_hook('rj', hk_attrs, hook_txt)\n        t = self.common_test(\"runjob_hook\")\n        self.perf_test_result(t, \"time_taken_run_wait_runjobhook\", \"sec\")\n\n    @timeout(7200)\n    def test_rw_execjobhook(self):\n        \"\"\"\n        Test performance of job_run_wait=execjob_hook\n        \"\"\"\n        t = self.common_test(\"execjob_hook\")\n        self.perf_test_result(t, \"time_taken_run_wait_execjobhook\", \"sec\")\n\n    @timeout(14400)\n    def test_rw_runjobhook_nohook(self):\n        \"\"\"\n        Test performance of job_run_wait=runjob_hook without a runjob hook\n        \"\"\"\n        t_rj = self.common_test(\"runjob_hook\")\n        t_none = self.common_test(\"none\")\n\n        # Verify that time taken by runjob_hook mode was less than 1.5 times\n        # the time taken by none mode, as without a runjob hook, the\n        # scheduler should assume none mode even if job_run_wait=runjob_hook\n        self.assertLess(t_rj / t_none, 1.5)\n        self.perf_test_result(\n            t_rj, 
\"time_taken_run_wait_runjobhook_nohook\", \"sec\")\n        self.perf_test_result(t_none, \"time_taken_run_wait_none\", \"sec\")\n        self.perf_test_result(\n            (t_none - t_rj),\n            \"time_diff_run_wait_none_and_run_wait_runjobhook_nohook\", \"sec\")\n"
  },
  {
    "path": "test/tests/performance/pbs_sched_perf.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom ptl.utils.pbs_logutils import PBSLogUtils\nfrom tests.performance import *\n\n\nclass TestSchedPerf(TestPerformance):\n    \"\"\"\n    Test the performance of scheduler features\n    \"\"\"\n\n    def common_setup1(self):\n        TestPerformance.setUp(self)\n        self.server.manager(MGR_CMD_CREATE, RSC,\n                            {'type': 'string', 'flag': 'h'}, id='color')\n        self.colors = \\\n            ['red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet']\n        a = {'resources_available.ncpus': 1, 'resources_available.mem': '8gb'}\n        # 10010 nodes since it divides into 7 evenly.\n        # Each node bucket will have 1430 nodes in it\n        self.mom.create_vnodes(a, 10010,\n                               sharednode=False,\n                               attrfunc=self.cust_attr_func, expect=False)\n        self.server.expect(NODE, {'state=free': (GE, 10010)})\n        self.scheduler.add_resource('color')\n\n    def cust_attr_func(self, name, totalnodes, numnode, attribs):\n        \"\"\"\n        Add custom resources to nodes\n        \"\"\"\n        a = {'resources_available.color': self.colors[numnode % 7]}\n        return {**attribs, **a}\n\n    def submit_jobs(self, attribs, num, step=1, wt_start=100):\n        \"\"\"\n        Submit num jobs each in their individual equiv class\n        \"\"\"\n        jids = 
[]\n\n        self.server.manager(MGR_CMD_SET, MGR_OBJ_SERVER,\n                            {'scheduling': 'False'})\n\n        for i in range(num):\n            job_wt = wt_start + (i * step)\n            attribs['Resource_List.walltime'] = job_wt\n            J = Job(TEST_USER, attrs=attribs)\n            J.set_sleep_time(job_wt)\n            jid = self.server.submit(J)\n            jids.append(jid)\n\n        return jids\n\n    def run_cycle(self):\n        \"\"\"\n        Run a cycle and return the length of the cycle\n        \"\"\"\n        self.server.expect(SERVER, {'server_state': 'Scheduling'}, op=NE)\n        self.server.manager(MGR_CMD_SET, MGR_OBJ_SERVER,\n                            {'scheduling': 'True'})\n        self.server.expect(SERVER, {'server_state': 'Scheduling'})\n        self.server.manager(MGR_CMD_SET, MGR_OBJ_SERVER,\n                            {'scheduling': 'False'})\n\n        # 600 * 2sec = 20m which is the max cycle length\n        self.server.expect(SERVER, {'server_state': 'Scheduling'}, op=NE,\n                           max_attempts=600, interval=2)\n        c = self.scheduler.cycles(lastN=1)[0]\n        return c.end - c.start\n\n    def compare_normal_path_to_buckets(self, place, num_jobs):\n        \"\"\"\n        Submit num_jobs jobs and run two cycles.  First one with the normal\n        node search code path and the second with buckets.  Print the\n        time difference between the two cycles.\n        \"\"\"\n        # Submit one job to eat up the resources.  
We want to compare the\n        # time it takes for the scheduler to attempt and fail to run the jobs\n        a = {'Resource_List.select': '1429:ncpus=1:color=yellow',\n             'Resource_List.place': place,\n             'Resource_List.walltime': '1:00:00'}\n        Jyellow = Job(TEST_USER, attrs=a)\n        Jyellow.set_sleep_time(3600)\n        jid_yellow = self.server.submit(Jyellow)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid_yellow)\n\n        self.server.manager(MGR_CMD_SET, MGR_OBJ_SERVER,\n                            {'scheduling': 'False'})\n\n        # Shared jobs use standard code path\n        a = {'Resource_List.select':\n             '1429:ncpus=1:color=blue+1429:ncpus=1:color=yellow',\n             \"Resource_List.place\": place}\n        jids = self.submit_jobs(a, num_jobs)\n\n        cycle1_time = self.run_cycle()\n\n        # Excl jobs use bucket codepath\n        a = {'Resource_List.place': place + ':excl'}\n        for jid in jids:\n            self.server.alterjob(jid, a)\n\n        cycle2_time = self.run_cycle()\n\n        log_msg = 'Cycle 1: %.2f Cycle 2: %.2f Cycle time difference: %.2f'\n        self.logger.info(log_msg % (cycle1_time, cycle2_time,\n                                    cycle1_time - cycle2_time))\n        self.assertGreater(cycle1_time, cycle2_time)\n\n    @timeout(10000)\n    def test_node_bucket_perf_scatter(self):\n        \"\"\"\n        Submit a large number of jobs which use node buckets.  Run a cycle and\n        compare that to a cycle that doesn't use node buckets.  Jobs require\n        place=excl to use node buckets.\n        This test uses place=scatter.  Scatter placement is quicker than free\n        \"\"\"\n        self.common_setup1()\n        num_jobs = 3000\n        self.compare_normal_path_to_buckets('scatter', num_jobs)\n\n    @timeout(10000)\n    def test_node_bucket_perf_free(self):\n        \"\"\"\n        Submit a large number of jobs which use node buckets.  
Run a cycle and\n        compare that to a cycle that doesn't use node buckets.  Jobs require\n        place=excl to use node buckets.\n        This test uses free placement.  Free placement is slower than scatter\n        \"\"\"\n        self.common_setup1()\n        num_jobs = 3000\n        self.compare_normal_path_to_buckets('free', num_jobs)\n\n    @timeout(3600)\n    def test_run_many_normal_jobs(self):\n        \"\"\"\n        Submit many normal path jobs and time the cycle that runs all of them.\n        \"\"\"\n        self.common_setup1()\n        num_jobs = 10000\n        a = {'Resource_List.select': '1:ncpus=1'}\n        jids = self.submit_jobs(a, num_jobs, wt_start=num_jobs)\n        t = self.run_cycle()\n        self.server.expect(JOB, {'job_state=R': num_jobs},\n                           trigger_sched_cycle=False, interval=5,\n                           max_attempts=240)\n        self.logger.info('#' * 80)\n        m = 'Time taken in cycle to run %d normal jobs: %.2f' % (num_jobs, t)\n        self.logger.info(m)\n        self.logger.info('#' * 80)\n\n    @timeout(3600)\n    def test_run_many_bucket_jobs(self):\n        \"\"\"\n        Submit many bucket path jobs and time the cycle that runs all of them.\n        \"\"\"\n        self.common_setup1()\n        num_jobs = 10000\n        a = {'Resource_List.select': '1:ncpus=1',\n             'Resource_List.place': 'excl'}\n        self.submit_jobs(a, num_jobs, wt_start=num_jobs)\n        t = self.run_cycle()\n\n        self.server.expect(JOB, {'job_state=R': num_jobs},\n                           trigger_sched_cycle=False, interval=5,\n                           max_attempts=240)\n        self.logger.info('#' * 80)\n        m = 'Time taken in cycle to run %d bucket jobs: %.2f' % (num_jobs, t)\n        self.logger.info(m)\n        self.logger.info('#' * 80)\n        self.perf_test_result(t, m, \"seconds\")\n\n    @timeout(3600)\n    def test_pset_fuzzy_perf(self):\n        \"\"\"\n        Test 
opt_backfill_fuzzy with placement sets.\n        \"\"\"\n        self.common_setup1()\n        a = {'strict_ordering': 'True'}\n        self.scheduler.set_sched_config(a)\n\n        a = {'node_group_key': 'color', 'node_group_enable': 'True',\n             'scheduling': 'False'}\n\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.server.expect(SERVER, {'server_state': (NE, 'Scheduling')})\n\n        a = {'Resource_List.select': '1:ncpus=1:color=yellow'}\n        self.submit_jobs(attribs=a, num=1430, step=60, wt_start=3600)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        self.server.expect(JOB, {'job_state=R': 1430},\n                           trigger_sched_cycle=False, interval=5,\n                           max_attempts=240)\n\n        a = {'Resource_List.select': '10000:ncpus=1'}\n        tj = Job(TEST_USER, attrs=a)\n        self.server.submit(tj)\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        cycle1 = self.scheduler.cycles(lastN=1)[0]\n        cycle1_time = cycle1.end - cycle1.start\n\n        a = {'opt_backfill_fuzzy': 'High'}\n        self.server.manager(MGR_CMD_SET, SCHED, a, id='default')\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        cycle2 = self.scheduler.cycles(lastN=1)[0]\n        cycle2_time = cycle2.end - cycle2.start\n\n        self.logger.info('Cycle 1: %f Cycle 2: %f Perc %.2f%%' % (\n            cycle1_time, cycle2_time, (cycle1_time / cycle2_time) * 100))\n        self.assertLess(cycle2_time, cycle1_time,\n                        'Optimization was not faster')\n        self.perf_test_result(((cycle1_time / cycle2_time) * 100),\n                              \"optimized_percentage\", 
\"percentage\")\n\n    @timeout(1200)\n    def test_many_chunks(self):\n        self.common_setup1()\n        num_jobs = 1000\n        num_cycles = 3\n        # Submit jobs with a large number of chunks that can't run\n        a = {'Resource_List.select': '9999:ncpus=1:color=red'}\n        jids = self.submit_jobs(a, num_jobs, wt_start=1000)\n        m = 'Time taken to consider %d normal jobs' % num_jobs\n        times = []\n        for i in range(num_cycles):\n            t = self.run_cycle()\n            times.append(t)\n\n        self.logger.info('#' * 80)\n        for i in range(num_cycles):\n            m2 = '[%d] %s: %.2f' % (i, m, times[i])\n            self.logger.info(m2)\n        self.logger.info('#' * 80)\n\n        self.perf_test_result(times, m, \"sec\")\n\n    @timeout(10000)\n    def test_many_jobs_with_calendaring(self):\n        \"\"\"\n        Performance test for when there are many jobs and calendaring is on\n        \"\"\"\n        self.common_setup1()\n        # Turn strict ordering on and backfill_depth=20\n        a = {'strict_ordering': 'True'}\n        self.scheduler.set_sched_config(a)\n        self.server.manager(MGR_CMD_SET, MGR_OBJ_SERVER,\n                            {'backfill_depth': '20'})\n\n        self.server.manager(MGR_CMD_SET, MGR_OBJ_SERVER,\n                            {'scheduling': 'False'})\n        jids = []\n\n        # Submit around 10k jobs\n        chunk_size = 100\n        total_jobs = 10000\n        while total_jobs > 0:\n            for i in range(1, chunk_size + 1):\n                a = {'Resource_List.select':\n                     str(i) + \":ncpus=1:color=\" + self.colors[i % 7]}\n                njobs = int(chunk_size / i)\n                _jids = self.submit_jobs(a, njobs, wt_start=1000)\n                jids.extend(_jids)\n                total_jobs -= njobs\n                if total_jobs <= 0:\n                    break\n\n        t1 = time.time()\n        for _ in range(100):\n            self.scheduler.run_scheduling_cycle()\n        t2 = 
time.time()\n\n        self.logger.info(\"Time taken by 100 sched cycles: \" + str(t2 - t1))\n\n        # Delete all jobs\n        self.server.cleanup_jobs()\n\n    @timeout(5000)\n    def test_attr_update_period_perf(self):\n        \"\"\"\n        Test the performance boost gained by using attr_update_period\n        \"\"\"\n        # Create 1 node with 1 cpu\n        a = {\"resources_available.ncpus\": 1}\n        self.mom.create_vnodes(a, 1, sharednode=False)\n\n        a = {\"attr_update_period\": 10000, \"scheduling\": \"False\"}\n        self.server.manager(MGR_CMD_SET, SCHED, a, id=\"default\")\n\n        # Submit 5k jobs\n        for _ in range(5000):\n            self.server.submit(Job())\n\n        # The first scheduling cycle will send attribute updates\n        self.scheduler.run_scheduling_cycle()\n        cycle1 = self.scheduler.cycles(lastN=1)[0]\n        cycle1_time = cycle1.end - cycle1.start\n\n        # Delete all jobs, submit 5k jobs again\n        self.server.cleanup_jobs()\n        for _ in range(5000):\n            self.server.submit(Job())\n\n        # This is the second scheduling cycle. 
We gave a very long\n        # attr_update_period value, so we should still be within that period\n        # So, sched should NOT send updates this time\n        self.scheduler.run_scheduling_cycle()\n        cycle2 = self.scheduler.cycles(lastN=1)[0]\n        cycle2_time = cycle2.end - cycle2.start\n\n        # Compare performance of the 2 cycles\n        self.logger.info(\"##################################################\")\n        self.logger.info(\n            \"Sched cycle time with attribute updates: %f\" % cycle1_time)\n        self.logger.info(\n            \"Sched cycle time without attribute updates: %f\" % cycle2_time)\n        self.logger.info(\"##################################################\")\n        m = \"sched cycle time\"\n        self.perf_test_result([cycle1_time, cycle2_time], m, \"sec\")\n\n    def setup_scheds(self):\n        for i in range(1, 6):\n            partition = 'P' + str(i)\n            sched_name = 'sc' + str(i)\n            a = {'partition': partition,\n                 'sched_host': self.server.hostname}\n            self.server.manager(MGR_CMD_CREATE, SCHED,\n                                a, id=sched_name)\n            self.scheds[sched_name].create_scheduler()\n            self.scheds[sched_name].start()\n            self.server.manager(MGR_CMD_SET, SCHED,\n                                {'scheduling': 'True'}, id=sched_name)\n\n    def setup_queues_nodes(self):\n        a = {'queue_type': 'execution',\n             'started': 'True',\n             'enabled': 'True'}\n        for i in range(1, 6):\n            self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='wq' + str(i))\n            p = {'partition': 'P' + str(i)}\n            self.server.manager(MGR_CMD_SET, QUEUE, p, id='wq' + str(i))\n            node = str(self.mom.shortname)\n            num = i - 1\n            self.server.manager(MGR_CMD_SET, NODE, p,\n                                id=node + '[' + str(num) + ']')\n\n    def submit_njobs(self, num_jobs=1, 
attrs=None, user=TEST_USER):\n        \"\"\"\n        Submit num_jobs number of jobs with attrs attributes for user.\n        Return a list of job ids\n        \"\"\"\n        if attrs is None:\n            attrs = {ATTR_q: 'workq'}\n        ret_jids = []\n        for _ in range(num_jobs):\n            J = Job(user, attrs)\n            jid = self.server.submit(J)\n            ret_jids += [jid]\n\n        return ret_jids\n\n    @timeout(3600)\n    def test_multi_sched_perf(self):\n        \"\"\"\n        Test time taken to schedule and run 5k jobs with\n        single scheduler and workload divided among 5 schedulers.\n        \"\"\"\n        a = {'resources_available.ncpus': 1000}\n        self.mom.create_vnodes(a, 5)\n        a = {'scheduling': 'False'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        self.submit_njobs(5000)\n        start = time.time()\n        self.scheduler.run_scheduling_cycle()\n        c = self.scheduler.cycles(lastN=1)[0]\n        cyc_dur = c.end - c.start\n        self.perf_test_result(cyc_dur, \"default_cycle_duration\", \"secs\")\n        msg = 'Time taken by default scheduler to run 5k jobs is '\n        self.logger.info(msg + str(cyc_dur))\n        self.server.cleanup_jobs()\n        self.setup_scheds()\n        self.setup_queues_nodes()\n        for sc in self.scheds:\n            a = {'scheduling': 'False'}\n            self.server.manager(MGR_CMD_SET, SCHED, a, id=sc)\n        a = {ATTR_q: 'wq1'}\n        self.submit_njobs(1000, a)\n        a = {ATTR_q: 'wq2'}\n        self.submit_njobs(1000, a)\n        a = {ATTR_q: 'wq3'}\n        self.submit_njobs(1000, a)\n        a = {ATTR_q: 'wq4'}\n        self.submit_njobs(1000, a)\n        a = {ATTR_q: 'wq5'}\n        self.submit_njobs(1000, a)\n        start = time.time()\n        for sc in self.scheds:\n            a = {'scheduling': 'True'}\n            self.server.manager(MGR_CMD_SET, SCHED, a, id=sc)\n        for sc in self.scheds:\n            a = {'scheduling': 
'False'}\n            self.server.manager(MGR_CMD_SET, SCHED, a, id=sc)\n        sc_dur = []\n        for sc in self.scheds:\n            if sc != 'default':\n                self.logger.info(\"searching log for scheduler \" + str(sc))\n                log_msg = self.scheds[sc].log_match(\"Leaving Scheduling Cycle\",\n                                                    starttime=int(start),\n                                                    max_attempts=30)\n                endtime = PBSLogUtils.convert_date_time(\n                    log_msg[1].split(';')[0])\n                dur = endtime - start\n                sc_dur.append(dur)\n        max_dur = max(sc_dur)\n        self.perf_test_result(max_dur, \"max_multisched_cycle_duration\", \"secs\")\n        msg = 'Max time taken by one of the multi sched to run 1k jobs is '\n        self.logger.info(msg + str(max_dur))\n        self.perf_test_result(\n            cyc_dur - max_dur, \"multisched_defaultsched_cycle_diff\", \"secs\")\n        self.assertLess(max_dur, cyc_dur)\n        msg1 = 'Multi scheduler is faster than single scheduler by '\n        msg2 = 'secs in scheduling 5000 jobs with 5 schedulers'\n        self.logger.info(msg1 + str(cyc_dur - max_dur) + msg2)\n"
  },
  {
    "path": "test/tests/performance/pbs_standing_resv_quasihang.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.performance import *\n\n\nclass StandingResvQuasihang(TestPerformance):\n    \"\"\"\n    This test suite aims at testing the quasihang caused by a MoM HUP\n    when there is a standing reservation with more than 1000 instances.\n    Without the fix, the server takes a lot of time to respond to a client.\n    With the fix, the amount of time is significantly reduced.\n    \"\"\"\n\n    def setUp(self):\n        TestPerformance.setUp(self)\n\n        # Set PBS_TZID, needed for standing reservation.\n        if 'PBS_TZID' in self.conf:\n            self.tzone = self.conf['PBS_TZID']\n        elif 'PBS_TZID' in os.environ:\n            self.tzone = os.environ['PBS_TZID']\n        else:\n            self.logger.info('Timezone not set, using Asia/Kolkata')\n            self.tzone = 'Asia/Kolkata'\n\n        a = {'resources_available.ncpus': 2}\n        self.mom.create_vnodes(a, num=2000, usenatvnode=True)\n\n    @timeout(6000)\n    def test_time_for_stat_after_mom_hup(self):\n        \"\"\"\n        This test case submits a standing reservation with 2000 instances,\n        HUPs the MoM, stats the reservation, and measures the amount of time\n        the server took to respond.\n\n        The test case is not designed to pass/fail on builds with/without\n        the fix.\n        \"\"\"\n        start = int(time.time()) + 3600\n        attrs = {'Resource_List.select': \"64:ncpus=2\",\n                 'reserve_start': start,\n                 'reserve_duration': 2000,\n                 'reserve_timezone': self.tzone,\n                 'reserve_rrule': \"FREQ=HOURLY;BYHOUR=1,2,3,4,5;COUNT=2000\"}\n\n        rid = self.server.submit(Reservation(TEST_USER, attrs))\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n\n        # it takes a while for all the instances of the reservation to get\n        # confirmed, hence the interval of 5 seconds.\n        self.server.expect(RESV, attrs, id=rid, interval=5)\n\n        self.mom.signal('-HUP')\n\n        # sleep for 5 seconds so that the HUP takes effect.\n        time.sleep(5)\n\n        now1 = int(time.time())\n        attrs = {'reserve_state': (MATCH_RE, 'RESV_CONFIRMED|2')}\n        self.server.expect(RESV, attrs, id=rid)\n\n        now2 = int(time.time())\n        self.logger.info(\"pbs_rstat took %d seconds to return\\n\",\n                         (now2 - now1))\n        self.perf_test_result((now2 - now1), \"pbs_rstat_return_time\", \"sec\")\n"
  },
  {
    "path": "test/tests/performance/test_dependency_perf.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.performance import *\n\n\nclass TestDependencyPerformance(TestPerformance):\n    \"\"\"\n    Test the performance of the job dependency feature\n    \"\"\"\n\n    def check_depend_delete_msg(self, pjid, cjid):\n        \"\"\"\n        Helper function to check for a message that the dependent job (cjid)\n        was deleted because of the parent job (pjid)\n        \"\"\"\n        msg = cjid + \";Job deleted as result of dependency on job \" + pjid\n        self.server.log_match(msg)\n\n    @timeout(1800)\n    def test_delete_long_dependency_chains(self):\n        \"\"\"\n        Submit a very long chain of dependent jobs and then measure the time\n        PBS takes to delete all of the dependent jobs.\n        \"\"\"\n\n        a = {'job_history_enable': 'True'}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        job = Job()\n        job.set_sleep_time(3600)\n        jid = self.server.submit(job)\n        j_arr = [jid]\n        for _ in range(5000):\n            a = {ATTR_depend: 'afternotok:' + jid}\n            jid = self.server.submit(Job(attrs=a))\n            j_arr.append(jid)\n\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=j_arr[0])\n        self.server.expect(JOB, {ATTR_state: 'H'}, id=j_arr[5000])\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        t1 = time.time()\n        
self.server.delete(j_arr[0])\n        self.server.expect(JOB, {ATTR_state: 'F'}, id=j_arr[5000],\n                           extend='x', interval=2)\n        t2 = time.time()\n        self.logger.info('#' * 80)\n        self.logger.info('Time taken to delete all jobs %f' % (t2-t1))\n        self.logger.info('#' * 80)\n        self.check_depend_delete_msg(j_arr[4999], j_arr[5000])\n        self.perf_test_result((t2 - t1),\n                              \"time_taken_delete_all_dependent_jobs\", \"sec\")\n"
  },
  {
    "path": "test/tests/resilience/__init__.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom ptl.utils.pbs_testsuite import *\n\n\nclass TestResilience(PBSTestSuite):\n    \"\"\"\n    Base test suite for all kinds of resilience tests\n    like Failover, stress, load, endurance etc.\n    \"\"\"\n    pass\n"
  },
  {
    "path": "test/tests/resilience/pbs_hook_alarm_large_multinode_job.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.resilience import *\nfrom time import sleep\n\n\nclass TestPbsHookAlarmLargeMultinodeJob(TestResilience):\n\n    \"\"\"\n    This test suite contains hook tests to verify that a large\n    multi-node job does not slow down hook execution and cause an alarm.\n    \"\"\"\n\n    def setUp(self):\n\n        TestResilience.setUp(self)\n        # Increase daemon log verbosity for debugging\n        self.server.manager(MGR_CMD_SET, SERVER, {\"log_events\": '2047'})\n        self.mom.add_config({\"$logevent\": \"0xfffffff\"})\n\n        if not self.mom.is_cpuset_mom():\n            a = {'resources_available.mem': '1gb',\n                 'resources_available.ncpus': '1'}\n            self.mom.create_vnodes(a, 5000, expect=False)\n            # Make sure all the nodes are in state free.  
We can't let\n            # create_vnodes() do this because it does\n            # a pbsnodes -v on each vnode.\n            # This takes a long time.\n            self.server.expect(NODE, {'state=free': (GE, 5000)})\n            # Restart mom explicitly due to PP-993\n            self.mom.restart()\n\n    def submit_job(self):\n        a = {'Resource_List.walltime': 10}\n        if self.mom.is_cpuset_mom():\n            vnode_val = self.server.status(NODE)\n            del vnode_val[0]\n            vnode_id = vnode_val[0]['id']\n            ncpus = vnode_val[0]['resources_available.ncpus']\n            del vnode_val[0]\n            a = {'Resource_List.select': '1:ncpus=' +\n                 ncpus + ':vnode=' + vnode_id}\n            for _vnode in vnode_val:\n                vnode_id = _vnode['id']\n                ncpus = _vnode['resources_available.ncpus']\n                a['Resource_List.select'] += '+1:ncpus=' + \\\n                    ncpus + ':vnode=' + vnode_id\n        else:\n            a['Resource_List.select'] = '5000:ncpus=1:mem=1gb'\n        j = Job(TEST_USER)\n\n        j.set_attributes(a)\n        j.set_sleep_time(10)\n\n        jid = self.server.submit(j)\n        return jid\n\n    def test_begin_hook(self):\n        \"\"\"\n        Create an execjob_begin hook, import a hook content with a small\n        alarm value, and test it against a large multi-node job.\n        \"\"\"\n        hook_name = \"beginhook\"\n        hook_event = \"execjob_begin\"\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"executing begin hook %s\" % (e.hook_name,))\n\"\"\"\n        a = {'event': hook_event, 'enabled': 'True',\n             'alarm': '60'}\n        self.server.create_import_hook(hook_name, a, hook_body)\n\n        jid = self.submit_job()\n\n        self.server.expect(JOB, {'job_state': 'R'},\n                           jid, max_attempts=100, interval=2)\n        self.mom.log_match(\n            \"pbs_python;executing begin 
hook %s\" % (hook_name,), n=100,\n            max_attempts=5, interval=5, regexp=True)\n\n        self.mom.log_match(\n            \"Job;%s;alarm call while running %s hook\" % (jid, hook_event),\n            n=100, max_attempts=5, interval=5, regexp=True, existence=False)\n\n        self.mom.log_match(\"Job;%s;Started, pid\" % (jid,), n=100,\n                           max_attempts=5, interval=5, regexp=True)\n\n    def test_prolo_hook(self):\n        \"\"\"\n        Create an execjob_prologue hook, import a hook content with a\n        small alarm value, and test it against a large multi-node job.\n        \"\"\"\n        hook_name = \"prolohook\"\n        hook_event = \"execjob_prologue\"\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"executing prologue hook %s\" % (e.hook_name,))\n\"\"\"\n        a = {'event': hook_event, 'enabled': 'True',\n             'alarm': '60'}\n        self.server.create_import_hook(hook_name, a, hook_body)\n\n        jid = self.submit_job()\n\n        self.server.expect(JOB, {'job_state': 'R'},\n                           jid, max_attempts=100, interval=2)\n\n        self.mom.log_match(\n            \"pbs_python;executing prologue hook %s\" % (hook_name,), n=100,\n            max_attempts=5, interval=5, regexp=True)\n\n        self.mom.log_match(\n            \"Job;%s;alarm call while running %s hook\" % (jid, hook_event),\n            n=100, max_attempts=5, interval=5, regexp=True, existence=False)\n\n    def test_epi_hook(self):\n        \"\"\"\n        Create an execjob_epilogue hook, import a hook content with a small\n        alarm value, and test it against a large multi-node job.\n        \"\"\"\n        hook_name = \"epihook\"\n        hook_event = \"execjob_epilogue\"\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"executing epilogue hook %s\" % (e.hook_name,))\n\"\"\"\n        search_after = time.time()\n        a = {'event': hook_event, 'enabled': 
'True',\n             'alarm': '60'}\n        self.server.create_import_hook(hook_name, a, hook_body)\n\n        jid = self.submit_job()\n\n        self.server.expect(JOB, {'job_state': 'R'},\n                           jid, max_attempts=100, interval=2)\n\n        self.logger.info(\"Wait 10s for job to finish\")\n        sleep(10)\n\n        self.server.log_match(\"dequeuing from\", max_attempts=100,\n                              interval=3, starttime=search_after)\n\n        self.mom.log_match(\n            \"pbs_python;executing epilogue hook %s\" % (hook_name,), n=100,\n            max_attempts=5, interval=5, regexp=True)\n\n        self.mom.log_match(\n            \"Job;%s;alarm call while running %s hook\" % (jid, hook_event),\n            n=100, max_attempts=5, interval=5, regexp=True, existence=False)\n\n        self.mom.log_match(\"Job;%s;Obit sent\" % (jid,), n=100,\n                           max_attempts=5, interval=5, regexp=True)\n\n    def test_end_hook(self):\n        \"\"\"\n        Create an execjob_end hook, import a hook content with a small\n        alarm value, and test it against a large multi-node job.\n        \"\"\"\n        hook_name = \"endhook\"\n        hook_event = \"execjob_end\"\n        hook_body = \"\"\"\nimport pbs\ne=pbs.event()\npbs.logmsg(pbs.LOG_DEBUG, \"executing end hook %s\" % (e.hook_name,))\n\"\"\"\n        search_after = time.time()\n        a = {'event': hook_event, 'enabled': 'True',\n             'alarm': '40'}\n        self.server.create_import_hook(hook_name, a, hook_body)\n\n        jid = self.submit_job()\n        self.server.expect(JOB, {'job_state': 'R'},\n                           jid, max_attempts=100, interval=2)\n\n        self.logger.info(\"Wait 10s for job to finish\")\n        sleep(10)\n\n        self.server.log_match(\"dequeuing from\", max_attempts=100,\n                              interval=3, starttime=search_after)\n\n        self.mom.log_match(\n            \"pbs_python;executing end hook %s\" 
% (hook_name,), n=100,\n            max_attempts=5, interval=5, regexp=True)\n\n        self.mom.log_match(\n            \"Job;%s;alarm call while running %s hook\" % (jid, hook_event),\n            n=100, max_attempts=5, interval=5, regexp=True, existence=False)\n\n        self.mom.log_match(\"Job;%s;Obit sent\" % (jid,), n=100,\n                           max_attempts=5, interval=5, regexp=True)\n"
  },
  {
    "path": "test/tests/security/__init__.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom ptl.utils.pbs_testsuite import *\n\n\nclass TestSecurity(PBSTestSuite):\n    \"\"\"\n    Base test suite for Security tests\n    \"\"\"\n    pass\n"
  },
  {
    "path": "test/tests/security/pbs_command_injection.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nfrom tests.security import *\n\n\nclass Test_command_injection(TestSecurity):\n    \"\"\"\n    This test suite tests for command injection vulnerabilities\n    \"\"\"\n\n    def setUp(self):\n        TestSecurity.setUp(self)\n\n    def test_pbs_rcp_command_injection(self):\n        \"\"\"\n        Verify that pbs_rcp rejects a destination argument containing an\n        injected shell command instead of executing it.\n        \"\"\"\n        cmd = os.path.join(self.server.pbs_conf['PBS_EXEC'], 'sbin', 'pbs_rcp')\n        cmd_opt = \\\n            [cmd, 'test@1.1.1.1:/tmp/abc;cat /etc/passwd test@2.2.2.2:/tmp']\n        ret = self.du.run_cmd(self.server.hostname, cmd=cmd_opt, logerr=False)\n\n        self.assertNotEqual(ret['rc'], 0,\n                            'pbs_rcp unexpectedly succeeded')\n"
  },
  {
    "path": "test/tests/security/pbs_multiple_auth.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nfrom tests.security import *\nimport os\nimport errno\nimport socket\nimport re\nimport random\nimport string\nimport sys\nimport time\n\n\nclass TestMultipleAuthMethods(TestSecurity):\n    \"\"\"\n    This test suite contains tests for the multiple authentication\n    methods added to PBS\n    \"\"\"\n    node_list = []\n    # Regular expression that will be used to get the address and port number\n    # from the output of 'netstat' or 'ss' commands\n    re_addr_port = re.compile(r'(?P<addr>.*):(?P<port>\\d+)')\n    # Regular expression that will be used to get the second argument of write\n    # system calls from the output of 'strace' command\n    re_syscall = re.compile(r'.*\\\"(?P<write_buffer>.*)\\\".*')\n\n    def setUp(self):\n        TestSecurity.setUp(self)\n        attrib = {'log_events': 2047}\n        self.server.manager(MGR_CMD_SET, SERVER, attrib)\n        self.mom.add_config({'$logevent': '4095'})\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events': 4095})\n        self.cur_user = self.du.get_current_user()\n        self.svr_hostname = self.server.shortname\n        self.node_list.append(self.svr_hostname)\n        self.client_host = None\n\n    def update_pbs_conf(self, conf_param, host_name=None,\n                        restart=True, op='set', check_state=True):\n        \"\"\"\n        This function updates the pbs.conf file.\n\n        :param 
conf_param: PBS conf key/value pairs to update in pbs.conf\n        :type conf_param: Dictionary\n        :param host_name: Name of the host on which pbs.conf\n                          has to be updated\n        :param restart: Whether to restart PBS after updating pbs.conf\n        :type restart: Boolean\n        :param op: Operation to perform on pbs.conf;\n                   either a set or an unset operation.\n        :type op: String\n        :param check_state: Check node state to be free\n        :type check_state: Bool\n        \"\"\"\n\n        check_sched_log = False\n        if host_name is None:\n            host_name = self.svr_hostname\n        pbsconfpath = self.du.get_pbs_conf_file(host_name)\n        if op == 'set':\n            check_sched_log = True\n            self.du.set_pbs_config(hostname=host_name,\n                                   fin=pbsconfpath,\n                                   confs=conf_param)\n        else:\n            self.du.unset_pbs_config(hostname=host_name,\n                                     fin=pbsconfpath,\n                                     confs=conf_param)\n        if restart:\n            self.pbs_restart(host_name, node_state=check_state,\n                             sched_log=check_sched_log)\n            # Wait for the server to be ready after the restart\n            time.sleep(5)\n\n    def munge_operation(self, host_name, op=None):\n        \"\"\"\n        This function performs an operation on the munge daemon\n        on a given host\n\n        :param host_name: Name of the host on which the munge\n                          operation has to be performed.\n        :type host_name: String\n        :param op: Operation to perform on the munge daemon.\n        :type op: String\n        \"\"\"\n        cmd = 'service munge %s' % (op)\n        status = self.du.run_cmd(hosts=host_name, cmd=cmd, sudo=True)\n        if op == 'status':\n            return status['rc'] == 0\n        
else:\n            msg = \"Failed to %s Munge on %s, Error: %s\" % (op, host_name,\n                                                           str(status['err']))\n            self.assertEqual(status['rc'], 0, msg)\n\n    def pbs_restart(self, host_name=None, node_state=True, daemon=None,\n                    sched_log=True):\n        \"\"\"\n        This function restarts PBS on a host.\n        :param host_name: Name of the host on which the PBS daemons\n                          have to be restarted\n        :type host_name: String\n        :param node_state: Check node state to be free\n        :type node_state: Bool\n        \"\"\"\n        if host_name is None:\n            host_name = self.mom.shortname\n        if daemon == \"pbs_comm\":\n            pi = PBSInitServices(hostname=self.server.shortname)\n            pi.restart_comm()\n        else:\n            pi = PBSInitServices(hostname=host_name)\n            pi.restart()\n        if node_state:\n            self.server.expect(NODE, {'state': 'free'}, id=host_name)\n\n    def perform_op(self, choice, host_name, node_state=True):\n        \"\"\"\n        This function checks whether munge is installed or not,\n        based on the test case. 
If it is installed, check whether the munge daemon\n        is active; if not, start it.\n\n        :param choice: Which operation to perform on the host.\n        :type choice: String\n        :param host_name: Hostname on which to perform the operation.\n        :type host_name: String or None\n        :param node_state: Check node state to be free\n        :type node_state: Bool\n        \"\"\"\n        munge_cmd = self.du.which(exe=\"munge\", hostname=host_name)\n        if choice == 'check_installed_and_run':\n            if munge_cmd == 'munge':\n                self.skipTest(reason='Munge is not installed')\n            else:\n                _msg = \"Munge is installed as per test suite requirement,\"\n                _msg += \" proceeding to check if munge is active\"\n                self.logger.info(_msg)\n                if not self.munge_operation(host_name, op='status'):\n                    _msg = \"Munge daemon is not running, trying to start it...\"\n                    self.logger.info(_msg)\n                    self.munge_operation(host_name=host_name,\n                                         op='start')\n                    self.logger.info(\n                        \"Munge started successfully, proceeding further\")\n                    self.pbs_restart(\n                        host_name=host_name, node_state=node_state)\n                else:\n                    _msg = \"Munge is running as per test suite requirement, \"\n                    _msg += \"proceeding with test case execution\"\n                    self.logger.info(_msg)\n        else:\n            if munge_cmd != 'munge':\n                _msg = 'Munge is installed, which is not a prerequisite'\n                _msg += ' for test cases, skipping test case'\n                self.skipTest(reason=_msg)\n            else:\n                _msg = \"Munge is not installed as per test suite requirement,\"\n                _msg += \" proceeding with test case execution\"\n            
    self.logger.info(_msg)\n\n    def match_logs(self, exp_msg, nt_exp_msg=None):\n        \"\"\"\n        This function verifies the expected log messages with respect\n        to authentication in the daemon logs\n\n        :param exp_msg: Expected log messages in the daemon log files.\n        :type exp_msg: Dictionary\n        :param nt_exp_msg: Message not expected in the daemon log files.\n        :type nt_exp_msg: String\n        \"\"\"\n\n        st_time = self.server.ctime\n        if 'mom' in exp_msg:\n            self.mom.log_match(exp_msg['mom'], starttime=st_time)\n        for msg in exp_msg.get('server', []):\n            self.server.log_match(msg, starttime=st_time)\n        for msg in exp_msg.get('comm', []):\n            self.comm.log_match(msg, starttime=st_time)\n\n        if nt_exp_msg:\n            self.mom.log_match(nt_exp_msg, starttime=st_time,\n                               existence=False)\n            self.server.log_match(nt_exp_msg, starttime=st_time,\n                                  existence=False)\n            self.comm.log_match(nt_exp_msg, starttime=st_time,\n                                existence=False)\n\n    def common_commands_steps(self, set_attr=None, job_script=False,\n                              resv_attr=None, client=None):\n        \"\"\"\n        This function checks that all PBS commands are authenticated via\n        the respective auth method.\n\n        :param set_attr: Job attributes to set\n        :type set_attr: Dictionary. Defaults to None\n        :param job_script: Whether to submit a job using a job script\n        :type job_script: Bool. Defaults to False\n        :param resv_attr: Reservation attributes to set\n        :type resv_attr: Dictionary. Defaults to None\n        :param client: Name of the client\n        :type client: String. 
Defaults to None\n        \"\"\"\n        if client is None:\n            self.server.client = self.svr_hostname\n        else:\n            self.server.client = client\n        # Verify that PBS commands are authenticated\n        exp_msg = \"Type 95 request received\"\n        start_time = time.time()\n        self.server.status(SERVER)\n        self.server.log_match(exp_msg, starttime=start_time)\n\n        if resv_attr is None:\n            resv_attr = {'reserve_start': time.time() + 30,\n                         'reserve_end': time.time() + 60}\n\n        r = Reservation(TEST_USER, resv_attr)\n        rid = self.server.submit(r)\n        exp_state = {'reserve_state': (MATCH_RE, \"RESV_CONFIRMED\")}\n        self.server.expect(RESV, exp_state, id=rid)\n\n        start_time = time.time()\n        self.server.delete(rid)\n        self.server.log_match(exp_msg, starttime=start_time)\n\n        start_time = time.time()\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        self.server.log_match(exp_msg, starttime=start_time)\n\n        start_time = time.time()\n        j = Job(TEST_USER)\n        if set_attr is not None:\n            j.set_attributes(set_attr)\n        if job_script:\n            pbsdsh_path = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                       \"bin\", \"pbsdsh\")\n            script = \"#!/bin/sh\\n%s sleep 30\" % pbsdsh_path\n            j.create_script(script, hostname=self.server.client)\n        else:\n            j.set_sleep_time(30)\n        jid = self.server.submit(j)\n\n        self.server.log_match(exp_msg, starttime=start_time)\n\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n\n        start_time = time.time()\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        self.server.log_match(exp_msg, starttime=start_time)\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        start_time = time.time()\n        
self.server.alterjob(jid, {ATTR_N: 'test'})\n        self.server.log_match(exp_msg, starttime=start_time)\n\n        start_time = time.time()\n        self.server.expect(JOB, 'queue', id=jid, op=UNSET, offset=30)\n        self.server.log_match(\"%s;Exit_status=0\" % jid)\n\n    def common_setup(self, req_moms=2, req_comms=1):\n        \"\"\"\n        This function sets the shortnames of the server, moms, comms, and\n        client in the cluster\n        Server shortname : self.hostA\n        Mom objects : self.momB, self.momC\n        Mom shortnames : self.hostB, self.hostC\n        Comm objects : self.comm2, self.comm3\n        Comm shortnames : self.hostD, self.hostE\n        Client name : self.hostF\n        :param req_moms: Number of required moms\n        :type req_moms: Integer. Defaults to 2\n        :param req_comms: Number of required comms\n        :type req_comms: Integer. Defaults to 1\n        \"\"\"\n        num_moms = len(self.moms)\n        num_comms = len(self.comms)\n        if (req_moms != num_moms) or (req_comms != num_comms):\n            msg = \"Test requires exactly %s moms and %s\" % (req_moms,\n                                                            req_comms)\n            msg += \" comms as input\"\n            self.skipTest(msg)\n        if num_moms == 2 and num_comms == 2:\n            self.hostA = self.server.shortname\n            self.momB = self.moms.values()[0]\n            self.hostB = self.momB.shortname\n            self.momC = self.moms.values()[1]\n            self.hostC = self.momC.shortname\n            self.comm2 = self.comms.values()[0]\n            self.hostD = self.comm2.shortname\n            self.comm3 = self.comms.values()[1]\n            self.hostE = self.comm3.shortname\n            self.hostF = self.client_host = self.server.client\n            self.node_list = [self.hostA, self.hostB,\n                              self.hostC, self.hostD,\n                              self.hostE, self.hostF]\n        elif num_moms == 2 and num_comms == 3:\n            self.hostA = 
self.server.shortname\n            self.momB = self.moms.values()[0]\n            self.hostB = self.momB.shortname\n            self.momC = self.moms.values()[1]\n            self.hostC = self.momC.shortname\n            self.comm2 = self.comms.values()[1]\n            self.hostD = self.comm2.shortname\n            self.comm3 = self.comms.values()[2]\n            self.hostE = self.comm3.shortname\n            self.hostF = self.client_host = self.server.client\n            self.node_list = [self.hostA, self.hostB,\n                              self.hostC, self.hostD,\n                              self.hostE, self.hostF]\n\n    def simple_interactive_job(self):\n        self.svr_mode = self.server.get_op_mode()\n        if self.svr_mode != PTL_CLI:\n            self.server.set_op_mode(PTL_CLI)\n\n        j = Job(TEST_USER, attrs={ATTR_inter: ''})\n        j.interactive_script = [('hostname', '.*'),\n                                ('sleep 100', '.*')]\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.delete(jid)\n\n    def test_default_auth_method(self):\n        \"\"\"\n        Test to verify all PBS daemons and commands are authenticated\n        via default authentication method\n        default authentication method is resvport\n        \"\"\"\n        conf_param = {'PBS_COMM_LOG_EVENTS': \"2047\"}\n        if self.server.shortname != self.mom.shortname:\n            check_state = False\n        else:\n            check_state = True\n        self.update_pbs_conf(conf_param, check_state=check_state)\n        common_msg = 'TPP authentication method = resvport'\n        common_msg1 = 'Supported authentication method: resvport'\n        exp_msg = {'server': [common_msg, common_msg1],\n                   'comm': [common_msg1],\n                   'mom': common_msg}\n        self.match_logs(exp_msg)\n        self.common_commands_steps()\n\n    def test_munge_auth_method(self):\n        \"\"\"\n      
  Test to verify all PBS daemons and commands are authenticated\n        via the munge authentication method\n        \"\"\"\n        if self.server.shortname != self.mom.shortname:\n            self.node_list.append(self.mom.shortname)\n            check_state = False\n        else:\n            check_state = True\n
        # Function call to check if munge is installed and enabled\n        for host_name in self.node_list:\n            self.perform_op(choice='check_installed_and_run',\n                            host_name=host_name)\n
        conf_param = {'PBS_SUPPORTED_AUTH_METHODS': 'MUNGE',\n                      'PBS_AUTH_METHOD': 'MUNGE',\n                      'PBS_COMM_LOG_EVENTS': \"2047\"}\n        self.update_pbs_conf(conf_param, check_state=check_state)\n
        if self.server.shortname != self.mom.shortname:\n            conf_param = {'PBS_AUTH_METHOD': 'MUNGE'}\n            self.update_pbs_conf(conf_param, host_name=self.mom.shortname)\n
        common_msg = 'TPP authentication method = munge'\n        common_msg1 = 'Supported authentication method: munge'\n
        exp_msg = {'server': [common_msg, common_msg1],\n                   'comm': [common_msg1],\n                   'mom': common_msg}\n
        nt_exp_msg = 'TPP authentication method = resvport'\n        self.match_logs(exp_msg, nt_exp_msg)\n        self.common_commands_steps()\n
    def test_multiple_supported_auth_methods(self):\n        \"\"\"\n        Test to verify all PBS daemons and commands are authenticated\n        via multiple authentication methods.\n        We authenticate with resvport and munge.\n        \"\"\"\n        if self.server.shortname != self.mom.shortname:\n            self.node_list.append(self.mom.shortname)\n            check_state = False\n        else:\n            check_state = True\n
        # Function call to check if munge is installed and enabled\n        for host_name in self.node_list:\n            
self.perform_op(choice='check_installed_and_run',\n                            host_name=host_name)\n\n        conf_param = {'PBS_SUPPORTED_AUTH_METHODS': 'MUNGE,resvport',\n                      'PBS_AUTH_METHOD': 'MUNGE',\n                      'PBS_COMM_LOG_EVENTS': \"2047\"}\n        self.update_pbs_conf(conf_param, check_state=check_state)\n\n        if self.server.shortname != self.mom.shortname:\n            conf_param = {'PBS_AUTH_METHOD': 'MUNGE'}\n            self.update_pbs_conf(conf_param, host_name=self.mom.shortname)\n\n        common_msg = 'TPP authentication method = munge'\n        common_msg1 = 'Supported authentication method: munge'\n        common_msg2 = 'Supported authentication method: resvport'\n\n        exp_msg = {'server': [common_msg, common_msg1, common_msg2],\n                   'comm': [common_msg, common_msg1, common_msg2],\n                   'mom': common_msg}\n        nt_exp_msg = 'TPP authentication method = resvport'\n        self.match_logs(exp_msg, nt_exp_msg)\n        self.common_commands_steps()\n\n        confs = ['PBS_AUTH_METHOD']\n        self.update_pbs_conf(confs, op='unset', restart=False)\n\n        common_msg = 'TPP authentication method = resvport'\n        conf_param = {'PBS_AUTH_METHOD': 'resvport'}\n        for host_name in self.node_list:\n            if host_name == self.svr_hostname:\n                check_state = False\n            else:\n                check_state = True\n            self.update_pbs_conf(conf_param, host_name=host_name,\n                                 check_state=check_state)\n\n        exp_msg['server'][0] = exp_msg['comm'][0] = common_msg\n        exp_msg['mom'] = common_msg\n        self.match_logs(exp_msg)\n        self.common_commands_steps()\n\n    def test_multiple_auth_method(self):\n        \"\"\"\n        Test to verify getting expected error message\n        with multiple PBS_AUTH_METHOD.\n        \"\"\"\n        conf_param = {'PBS_SUPPORTED_AUTH_METHODS': 'MUNGE',\n           
           'PBS_AUTH_METHOD': 'resvport,MUNGE',\n                      'PBS_COMM_LOG_EVENTS': \"2047\"}\n        self.update_pbs_conf(conf_param, restart=False)\n
        if self.server.shortname != self.mom.shortname:\n            self.node_list.append(self.mom.shortname)\n            conf_param = {'PBS_AUTH_METHOD': 'resvport,MUNGE'}\n            self.update_pbs_conf(conf_param, host_name=self.mom.shortname,\n                                 restart=False)\n
        lib_path = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                'lib')\n        msg = \"/libauth_resvport,munge.so: cannot open shared object file:\"\n        msg += \" No such file or directory\"\n        matchs = lib_path + msg\n
        # Restart PBS and it should fail with expected error message\n        try:\n            self.pbs_restart(self.server.shortname)\n        except PbsInitServicesError as e:\n            self.assertIn(matchs, e.msg)\n            _msg = \"PBS start up failed with logger info: \" + str(e.msg)\n            self.logger.info(_msg)\n        else:\n            err_msg = \"Failed to get expected error message in PBS restart: \"\n            err_msg += msg\n            self.fail(err_msg)\n
    def test_not_listed_auth_method(self):\n        \"\"\"\n        Test to verify that PBS gives an appropriate error message\n        if an unlisted PBS_AUTH_METHOD is used.\n        \"\"\"\n        conf_param = {'PBS_SUPPORTED_AUTH_METHODS': 'resvport',\n                      'PBS_AUTH_METHOD': 'MUNGE',\n                      'PBS_COMM_LOG_EVENTS': \"2047\"}\n        self.update_pbs_conf(conf_param, restart=False)\n
        if self.server.shortname != self.mom.shortname:\n            self.node_list.append(self.mom.shortname)\n            conf_param = {'PBS_AUTH_METHOD': 'MUNGE'}\n            self.update_pbs_conf(conf_param, host_name=self.mom.shortname,\n                                 restart=False)\n
        err_msg = ['auth: error returned: 15029',\n           
        'auth: Failed to send auth request',\n                   'No support for requested service.',\n                   f'qstat: cannot connect to server {self.server.shortname} (errno=15029)']\n\n        try:\n            self.server.status(SERVER)\n        except PbsStatusError as e:\n            for msg in err_msg:\n                self.assertIn(msg, e.msg)\n        else:\n            err_msg = \"Failed to get expected error message\"\n            err_msg += \" while checking server status.\"\n            self.fail(err_msg)\n\n    def test_null_authentication_value(self):\n        \"\"\"\n        Set PBS_AUTH_METHOD to NULL (empty).\n        Check for error message\n        \"\"\"\n\n        conf_param = {'PBS_SUPPORTED_AUTH_METHODS': 'resvport',\n                      'PBS_AUTH_METHOD': '',\n                      'PBS_COMM_LOG_EVENTS': \"2047\"}\n        self.update_pbs_conf(conf_param, restart=False)\n\n        if self.server.shortname != self.mom.shortname:\n            self.node_list.append(self.mom.shortname)\n            conf_param = {'PBS_AUTH_METHOD': ''}\n            self.update_pbs_conf(conf_param, host_name=self.mom.shortname,\n                                 restart=False)\n\n        comm_path = os.path.join(self.server.pbs_conf['PBS_EXEC'],\n                                 'sbin', 'pbs_comm')\n        matchs = comm_path + \": Configuration error\"\n\n        # Restart PBS and it should fail with expected error message\n        _msg = \"PBS start up failed with logger info: \"\n        try:\n            self.pbs_restart(self.server.shortname)\n        except PbsInitServicesError as e:\n            self.assertIn(matchs, e.msg)\n            self.logger.info(_msg + str(e.msg))\n        else:\n            err_msg = \"Failed to get expected error message in PBS restart: \"\n            err_msg += matchs\n            self.fail(err_msg)\n\n        if self.mom.shortname != self.svr_hostname:\n            try:\n                self.pbs_restart()\n         
   except PbsInitServicesError as e:\n                matchs = \"pbs_mom startup failed, exit 1 aborting\"\n                self.assertIn(matchs, e.msg)\n                self.logger.info(_msg + str(e.msg))\n            else:\n                err_msg = \"Failed to get expected error message in PBS restart: \"\n                err_msg += matchs\n                self.fail(err_msg)\n
    def test_munge_not_running_state(self):\n        \"\"\"\n        Submit a job when munge process is not running on server host.\n        Job submit error should occur because of\n        Munge encode failure\n        \"\"\"\n        if self.server.shortname != self.mom.shortname:\n            self.node_list.append(self.mom.shortname)\n            check_state = False\n        else:\n            check_state = True\n
        # Function call to check if munge is installed and enabled\n        for host_name in self.node_list:\n            self.perform_op(choice='check_installed_and_run',\n                            host_name=host_name)\n
        conf_param = {'PBS_SUPPORTED_AUTH_METHODS': 'MUNGE',\n                      'PBS_AUTH_METHOD': 'MUNGE',\n                      'PBS_COMM_LOG_EVENTS': \"2047\"}\n        self.update_pbs_conf(conf_param, check_state=check_state)\n
        if self.server.shortname != self.mom.shortname:\n            conf_param = {'PBS_AUTH_METHOD': 'MUNGE'}\n            self.update_pbs_conf(conf_param, host_name=self.mom.shortname)\n
        common_msg = 'MUNGE user-authentication on encode failed with '\n        err_msg1 = common_msg + '`Munged communication error`'\n        err_msg2 = common_msg + '`Socket communication error`'\n
        # To stop munge process and check if successful\n        self.munge_operation(host_name=self.svr_hostname, op='stop')\n
        msg = f\"qsub: cannot connect to server {self.server.hostname}\"\n        msg += \" (errno=15010)\"\n        exp_msg = ['munge_get_auth_data: ' + err_msg1,\n                   'auth: error returned: 15010',\n                   'auth: ' + err_msg1,\n                   msg\n                   ]\n
        exp_msg1 = copy.copy(exp_msg)\n
        exp_msg1[0] = 'munge_get_auth_data: ' + err_msg2\n        exp_msg1[2] = 'auth: ' + err_msg2\n
        # Submit a job and it should fail resulting in expected error\n        j = Job(self.cur_user)\n        _msg = \"Trying to start munge daemon\"\n        try:\n            self.server.submit(j)\n        except PbsSubmitError as e:\n            # Check if all lines of exp_msg are present in the error message\n            all_in_exp_msg = all(any(s in msg for msg in e.msg)\n                                 for s in exp_msg)\n            # Check if all lines of exp_msg1 are present in the error message\n            all_in_exp_msg1 = all(any(s in msg for msg in e.msg)\n                                  for s in exp_msg1)\n            self.assertTrue(all_in_exp_msg or all_in_exp_msg1,\n                            f\"{str(e.msg)} does not match with {str(exp_msg)} or {str(exp_msg1)}\")\n            self.logger.info(\"Job submit failed as expected with \" + str(e.msg))\n        else:\n            err_msg = \"Failed to get expected error message\"\n            err_msg += \" while submitting job.\"\n            self.fail(err_msg)\n        finally:\n            # Clean Up: To start munge that was stopped in first step\n            self.logger.info(_msg)\n            self.munge_operation(host_name=self.svr_hostname, op='start')\n            self.logger.info(\"Munge started as a part of cleanup\")\n
    def test_invalid_authentication_value(self):\n        \"\"\"\n        Set PBS_AUTH_METHOD to invalid value.\n        Check for error message on restart of daemons\n        \"\"\"\n        conf_param = {'PBS_SUPPORTED_AUTH_METHODS': 'resvport',\n                      'PBS_AUTH_METHOD': 'testing',\n                      'PBS_COMM_LOG_EVENTS': \"2047\"}\n        self.update_pbs_conf(conf_param, restart=False)\n
        if 
self.server.shortname != self.mom.shortname:\n            self.node_list.append(self.mom.shortname)\n            conf_param = {'PBS_AUTH_METHOD': 'testing'}\n            self.update_pbs_conf(conf_param, host_name=self.mom.shortname,\n                                 restart=False)\n
        matchs = \"/opt/pbs/lib/libauth_testing.so:\"\n        matchs += \" cannot open shared object file: No such file or directory\"\n
        # Restart PBS and it should fail with expected error message\n        _msg = \"PBS restart failed as expected, error message: \"\n        try:\n            self.pbs_restart()\n        except PbsInitServicesError as e:\n            self.assertIn(matchs, e.msg)\n            self.logger.info(_msg + str(e.msg))\n        else:\n            err_msg = \"Failed to get expected error message in PBS restart: \"\n            err_msg += matchs\n            self.fail(err_msg)\n
    @requirements(num_moms=2)\n    def test_munge_disabled_on_mom_host(self):\n        \"\"\"\n        Test behavior when munge is stopped on Mom host\n        Configuration:\n        Node 1 : Server, Sched, Comm, Mom (self.hostA)\n        Node 2 : Mom (self.hostB)\n        \"\"\"\n        if len(self.moms) != 2:\n            msg = \"Test requires exactly 2 mom hosts as input\"\n            self.skipTest(msg)\n
        self.momA = self.moms.values()[0]\n        self.hostA = self.momA.shortname\n        self.momB = self.moms.values()[1]\n        self.hostB = self.momB.shortname\n        self.node_list.extend([self.hostA, self.hostB])\n        for host in self.node_list:\n            self.perform_op('check_installed_and_run', host, node_state=False)\n
        # Update pbs.conf of Server host\n        conf_param = {'PBS_COMM_LOG_EVENTS': '2047',\n                      'PBS_SUPPORTED_AUTH_METHODS': 'MUNGE,resvport',\n                      'PBS_AUTH_METHOD': 'munge'}\n        self.update_pbs_conf(\n            conf_param,\n            
host_name=self.svr_hostname, check_state=False)\n\n        # Update pbs.conf of Mom hosts\n        del conf_param['PBS_COMM_LOG_EVENTS']\n        del conf_param['PBS_SUPPORTED_AUTH_METHODS']\n        for mom in self.moms.values():\n            if mom.name != self.server.hostname:\n                self.update_pbs_conf(\n                  conf_param,\n                  host_name=mom.name)\n\n        self.munge_operation(self.hostB, op=\"stop\")\n        self.pbs_restart(self.hostB, node_state=False)\n        self.server.expect(NODE, {'state': 'down'}, id=self.hostB)\n        set_attr = {ATTR_l + '.select': '2:ncpus=1',\n                    ATTR_l + '.place': 'scatter'}\n        j = Job(attrs=set_attr)\n        jid = self.server.submit(j)\n\n        self.server.expect(JOB, {'job_state': 'Q'}, id=jid)\n\n    @requirements(no_mom_on_server=True, num_client=1)\n    def test_munge_without_supported_auth_method_on_server(self):\n        \"\"\"\n        Verify appropriate error msg is thrown when\n        PBS_SUPPORTED_AUTH_METHODS is not added to pbs.conf on server host\n        Configuration:\n        Node 1 : Server, Sched, Comm (self.hostA)\n        Node 2 : Mom, Client (self.hostB)\n        \"\"\"\n        if self.server.client == self.server.hostname:\n            msg = \"Test requires 1 mom and 1 client which is on non server\"\n            msg += \" host as input\"\n            self.skipTest(msg)\n\n        self.hostA = self.server.shortname\n        self.momA = self.moms.values()[0]\n        self.hostB = self.momA.shortname\n\n        self.node_list = [self.hostA, self.hostB]\n\n        # Verify if munge is installed on all the hosts\n        for host in self.node_list:\n            self.perform_op('check_installed_and_run', host)\n\n        conf_param = {'PBS_AUTH_METHOD': 'MUNGE'}\n        for host in self.node_list:\n            self.update_pbs_conf(conf_param, host_name=host, check_state=False)\n\n        err_msg = ['auth: error returned: 15029',\n              
     'auth: Failed to send auth request',\n                   'No support for requested service.',\n                   f'qstat: cannot connect to server {self.server.shortname} (errno=15029)']\n
        try:\n            self.server.status(SERVER)\n        except PbsStatusError as e:\n            for msg in err_msg:\n                self.assertIn(msg, e.msg)\n        else:\n            err_msg = \"Failed to get expected error message\"\n            err_msg += \" while checking server status.\"\n            self.fail(err_msg)\n
    def common_steps_without_munge(self, client=None):\n        \"\"\"\n        This function contains common steps for tests which\n        verify behavior when munge is not installed on one of the hosts\n        :param client: Name of the client\n        :type client: String. Defaults to None\n        \"\"\"\n        if client is None:\n            self.server.client = self.svr_hostname\n        else:\n            self.server.client = client\n
        cmd_path = os.path.join(self.server.pbs_conf['PBS_EXEC'], 'bin')\n
        err_msg = \": cannot connect to server %s\" % self.server.hostname\n        err_msg += \", error=15010\"\n
        pbsnodes_cmd = os.path.join(cmd_path, 'pbsnodes')\n        msg = pbsnodes_cmd + err_msg\n        exp_msgs = ['init_munge: libmunge.so not found',\n                    'auth: error returned: 15010',\n                    'auth: Munge lib is not loaded',\n                    msg\n                    ]\n        try:\n            self.server.status(NODE, id=self.server.client)\n        except PbsStatusError as e:\n            for msg in exp_msgs:\n                self.assertIn(msg, e.msg)\n            cmd_exp_msg = \"Getting expected error message.\"\n            self.logger.info(cmd_exp_msg)\n        else:\n            cmd_msg = \"Failed to get expected error message \"\n            cmd_msg += \"while checking Node status.\"\n            self.fail(cmd_msg)\n
        err_msg = \"qstat: cannot connect to 
server %s\" % self.server.hostname\n        err_msg += \" (errno=15010)\"\n        exp_msgs[3] = err_msg\n        try:\n            self.server.status(SERVER, id=self.svr_hostname)\n        except PbsStatusError as e:\n            for msg in exp_msgs:\n                self.assertIn(msg, e.msg)\n            cmd_exp_msg = \"Getting expected error message.\"\n            self.logger.info(cmd_exp_msg)\n        else:\n            cmd_msg = \"Failed to get expected error message \"\n            cmd_msg += \"while checking Server status.\"\n            self.fail(cmd_msg)\n\n    def test_without_munge_on_server_host(self):\n        \"\"\"\n        Munge is not installed on server host.\n        Set PBS_AUTH_METHOD=munge in conf and check respective error message.\n        \"\"\"\n        # Function call to check if munge is not installed and then proceeding\n        # with test case execution\n        self.perform_op(choice='check_not_installed',\n                        host_name=self.svr_hostname)\n\n        conf_attrib = {'PBS_SUPPORTED_AUTH_METHODS': 'MUNGE',\n                       'PBS_AUTH_METHOD': 'MUNGE',\n                       'PBS_COMM_LOG_EVENTS': \"2047\"}\n        self.update_pbs_conf(conf_attrib, check_state=False)\n\n        if self.svr_hostname != self.mom.shortname:\n            self.node_list.append(self.mom.shortname)\n            conf_param = {'PBS_AUTH_METHOD': 'munge'}\n            self.update_pbs_conf(\n                conf_param,\n                host_name=self.mom.shortname,\n                check_state=False)\n        exp_log = \"libmunge.so not found\"\n        self.server.log_match(exp_log)\n        self.mom.log_match(exp_log)\n        self.common_steps_without_munge(self.svr_hostname)\n\n    @requirements(num_moms=2)\n    def test_without_munge_on_mom_host(self):\n        \"\"\"\n        Test behavior when PBS_AUTH_METHOD is set to munge on remote\n        mom host where munge is not installed.\n        Node 1: Server, Sched, Mom, Comm 
[self.hostA]\n        Node 2: Mom [self.hostB]\n        \"\"\"\n        self.momA = self.moms.values()[0]\n        self.hostA = self.momA.shortname\n        self.momB = self.moms.values()[1]\n        self.hostB = self.momB.shortname\n
        self.perform_op(choice='check_not_installed',\n                        host_name=self.hostB)\n        self.node_list.extend([self.hostA, self.hostB])\n
        conf_attrib = {'PBS_SUPPORTED_AUTH_METHODS': 'MUNGE',\n                       'PBS_AUTH_METHOD': 'MUNGE',\n                       'PBS_COMM_LOG_EVENTS': \"2047\"}\n        self.update_pbs_conf(conf_attrib, check_state=False)\n
        conf_param = {'PBS_AUTH_METHOD': 'munge'}\n        for mom in self.moms.values():\n            if mom.shortname != self.svr_hostname:\n                self.update_pbs_conf(\n                    conf_param, host_name=mom.name, check_state=False)\n
        exp_log = \"libmunge.so not found\"\n        self.momB.log_match(exp_log)\n        self.common_steps_without_munge(self.hostB)\n
    @requirements(num_moms=1, num_comms=1, no_comm_on_server=True)\n    def test_without_munge_on_comm_host(self):\n        \"\"\"\n        Test behavior when PBS_AUTH_METHOD is set to munge on a\n        comm host where munge is not installed,\n        when pbs_comm and the client are on a non-server host\n        Configuration:\n        Node 1: Server, Sched, Mom [self.hostA]\n        Node 2: Comm [self.hostB]\n        \"\"\"\n        if self.svr_hostname == self.comm.shortname:\n            msg = \"Test requires a comm host which is present on \"\n            msg += \"a non-server host\"\n            self.skipTest(msg)\n
        self.momA = self.moms.values()[0]\n        self.hostA = self.momA.shortname\n        self.hostB = self.comm.shortname\n
        self.perform_op(choice='check_not_installed',\n                        host_name=self.hostB)\n        self.node_list.extend([self.hostA, self.hostB])\n
        conf_param = {'PBS_SUPPORTED_AUTH_METHODS': 
'MUNGE',\n                      'PBS_AUTH_METHOD': 'MUNGE',\n                      'PBS_LEAF_ROUTERS': self.hostB}\n        self.update_pbs_conf(conf_param, check_state=False)\n
        del conf_param['PBS_LEAF_ROUTERS']\n        conf_param['PBS_COMM_LOG_EVENTS'] = 2047\n        self.update_pbs_conf(\n            conf_param,\n            host_name=self.hostB,\n            check_state=False)\n
        del conf_param['PBS_SUPPORTED_AUTH_METHODS']\n        conf_param['PBS_LEAF_ROUTERS'] = self.hostB\n        for mom in self.moms.values():\n            if mom.shortname != self.svr_hostname:\n                self.update_pbs_conf(\n                    conf_param, host_name=mom.name, check_state=False)\n
        self.common_steps_without_munge(self.hostB)\n
    @requirements(num_client=1, num_moms=1)\n    def test_without_munge_on_client_host(self):\n        \"\"\"\n        Test behavior when PBS_AUTH_METHOD is set to munge on a\n        client host where munge is not installed,\n        when the client is on a non-server host\n        Configuration:\n        Node 1: Server, Sched, Mom, Comm [self.hostA]\n        Node 2: Client [self.hostB]\n        \"\"\"\n        if self.svr_hostname == self.server.client:\n            msg = \"Test requires a client host which is present on \"\n            msg += \"a non-server host\"\n            self.skipTest(msg)\n
        self.momA = self.moms.values()[0]\n        self.hostA = self.momA.shortname\n        self.hostB = self.server.client\n
        self.perform_op(choice='check_not_installed',\n                        host_name=self.hostB)\n        self.node_list.extend([self.hostA, self.hostB])\n
        conf_param = {'PBS_SUPPORTED_AUTH_METHODS': 'MUNGE',\n                      'PBS_AUTH_METHOD': 'MUNGE',\n                      'PBS_COMM_LOG_EVENTS': \"2047\"}\n        self.update_pbs_conf(conf_param, check_state=False)\n        del conf_param['PBS_COMM_LOG_EVENTS']\n        self.update_pbs_conf(conf_param, 
host_name=self.hostB,\n                             restart=False)\n        conf_param = {'PBS_AUTH_METHOD': 'MUNGE'}\n        for mom in self.moms.values():\n            if mom.shortname != self.svr_hostname:\n                self.update_pbs_conf(\n                    conf_param, host_name=mom.name, check_state=False)\n
        self.common_steps_without_munge(self.hostB)\n
    @requirements(num_client=1)\n    def test_diff_auth_method_on_client(self):\n        \"\"\"\n        Verify all PBS daemons and commands are authenticated when a\n        different authentication method is used on the client\n        Configuration:\n        Node 1 : Server, Sched, Comm, Mom (self.hostA)\n        Node 2 : Client (self.hostB)\n        \"\"\"\n        if self.svr_hostname == self.server.client:\n            msg = \"Test requires a client host which is present on \"\n            msg += \"a non-server host\"\n            self.skipTest(msg)\n
        if self.server.shortname != self.mom.shortname:\n            check_state = False\n        else:\n            check_state = True\n
        self.hostB = self.server.client\n        self.server.client = self.svr_hostname\n        # Update pbs.conf on server host (self.hostA)\n        conf_param = {'PBS_SUPPORTED_AUTH_METHODS': 'MUNGE',\n                      'PBS_AUTH_METHOD': 'MUNGE',\n                      'PBS_COMM_LOG_EVENTS': \"2047\"}\n        self.update_pbs_conf(conf_param, check_state=check_state)\n
        self.server.client = self.hostB\n
        msg = \"auth: Unable to authenticate connection\"\n        msg += \" (%s:15001)\" % self.server.hostname\n        err_msg = ['auth: error returned: -1',\n                   msg,\n                   'qstat: cannot connect to server %s (errno=-1)'\n                   % (self.server.shortname)]\n        try:\n            self.server.status(SERVER)\n        except PbsStatusError as e:\n            for msg in err_msg:\n                self.assertIn(msg, e.msg)\n        else:\n            err_msg 
= \"Failed to get expected error message\"\n            err_msg += \" while checking server status.\"\n            self.fail(err_msg)\n
    @requirements(num_moms=2)\n    def test_diff_auth_methods_on_moms(self):\n        \"\"\"\n        Verify all PBS daemons and commands are authenticated when\n        different authentication methods are used on the execution hosts.\n        This also tests the behavior when the authentication mechanism\n        is different on the server and execution host.\n        \"\"\"\n        comm_list = [x.shortname for x in self.comms.values()]\n        if len(self.moms) != 2 or self.server.shortname not in comm_list:\n            msg = \"Test needs 2 moms as input and a comm which is present\"\n            msg += \" on the server host\"\n            self.skipTest(msg)\n
        self.momA = self.moms.values()[0]\n        self.hostA = self.momA.shortname\n        self.momB = self.moms.values()[1]\n        self.hostB = self.momB.shortname\n
        self.node_list.extend([self.hostA, self.hostB])\n        self.perform_op('check_installed_and_run', self.hostA,\n                        node_state=False)\n
        # Update pbs.conf of Server\n        conf_param = {'PBS_COMM_LOG_EVENTS': '2047',\n                      'PBS_SUPPORTED_AUTH_METHODS': 'MUNGE,resvport',\n                      'PBS_AUTH_METHOD': 'MUNGE'\n                      }\n        self.update_pbs_conf(\n            conf_param,\n            check_state=False)\n
        server_ip = socket.gethostbyname(self.svr_hostname)\n        mom2_ip = socket.gethostbyname(self.hostB)\n
        if self.hostA != self.svr_hostname:\n            conf_param = {'PBS_AUTH_METHOD': 'MUNGE'}\n            self.update_pbs_conf(conf_param, host_name=self.hostA)\n            mom1_ip = socket.gethostbyname(self.hostA)\n            attrib = {self.server.hostname: [server_ip, 15001],\n                      self.momB.hostname: [mom2_ip, 15003],\n                      
self.momA.hostname: [mom1_ip, 15003]}\n        else:\n            self.pbs_restart(host_name=self.svr_hostname)\n            attrib = {self.server.hostname: [server_ip, 15001, 15003],\n                      self.momB.hostname: [mom2_ip, 15003]}\n
        msg = \"Unauthenticated connection from %s\" % self.server.shortname\n        self.comm.log_match(msg, existence=False, starttime=self.server.ctime)\n        for host, host_attribs in attrib.items():\n            ip = host_attribs.pop(0)\n            for port in host_attribs:\n                exp_msg = \"Leaf registered address %s:%s\" % (ip, port)\n                self.comm.log_match(exp_msg)\n
        # Submit a job on the execution hosts with different\n        # authentication mechanisms\n        set_attr = {ATTR_l + '.select': '2:ncpus=1',\n                    ATTR_l + '.place': 'scatter', ATTR_k: 'oe'}\n        resv_attr = {ATTR_l + '.select': '2:ncpus=1',\n                     'reserve_start': time.time() + 30,\n                     'reserve_end': time.time() + 60}\n        self.common_commands_steps(resv_attr=resv_attr,\n                                   set_attr=set_attr)\n
    def test_daemon_not_in_service_users_with_munge(self):\n        \"\"\"\n        Test behavior when the daemon user is not in the PBS_AUTH_SERVICE_USERS\n        list when using munge for authentication. 
No daemon should be able to\n        establish a connection with comm.\n
        Node 1: Server, Sched, Mom, Comm [self.hostA]\n        \"\"\"\n        auth_method = 'munge'\n        conf_param = {'PBS_COMM_LOG_EVENTS': \"2047\",\n                      'PBS_SUPPORTED_AUTH_METHODS': auth_method,\n                      'PBS_AUTH_METHOD': auth_method,\n                      'PBS_AUTH_SERVICE_USERS': 'random_user'}\n
        if self.server.shortname != self.mom.shortname:\n            self.node_list.append(self.mom.shortname)\n
        # Function call to check if munge is installed and enabled\n        for host_name in self.node_list:\n            self.perform_op(choice='check_installed_and_run',\n                            host_name=host_name)\n
        common_msg = f'Connection to pbs_comm {self.server.shortname}:17001 down'\n        common_msg1 = f'Supported authentication method: {auth_method}'\n        common_msg2 = f'TPP authentication method = {auth_method}'\n
        exp_msg = {'comm': [common_msg1, common_msg2, f'User {str(ROOT_USER)} not in service users list'],\n                   'mom': common_msg,\n                   'server': [common_msg, common_msg1, common_msg2]\n                   }\n        self.update_pbs_conf(conf_param, check_state=False)\n        self.match_logs(exp_msg)\n
    def test_daemon_not_in_service_users_with_resvport(self):\n        \"\"\"\n        Test behavior when the daemon user is not in the PBS_AUTH_SERVICE_USERS\n        list when using resvport for authentication. The daemon user name is not\n        available when using resvport. 
Therefore we expect that the daemons will\n        connect to comm successfully.\n\n        Node 1: Server, Sched, Mom, Comm [self.hostA]\n        \"\"\"\n        auth_method = 'resvport'\n        conf_param = {'PBS_COMM_LOG_EVENTS': \"2047\",\n                      'PBS_SUPPORTED_AUTH_METHODS': auth_method,\n                      'PBS_AUTH_METHOD': auth_method,\n                      'PBS_AUTH_SERVICE_USERS': 'random_user'}\n\n        nt_exp_msg = f'Connection to pbs_comm {self.server.shortname}:17001 down'\n        common_msg1 = f'Supported authentication method: {auth_method}'\n        common_msg2 = f'TPP authentication method = {auth_method}'\n\n        exp_msg = {'comm': [common_msg1],\n                   'mom': common_msg2,\n                   'server': [common_msg1, common_msg2]\n                   }\n        self.update_pbs_conf(conf_param, check_state=False)\n        self.match_logs(exp_msg, nt_exp_msg)\n        self.common_commands_steps()\n\n    def test_default_interactive_auth_method(self):\n        \"\"\"\n        Test that we can successfully run interactive jobs when using the default\n        (resvport) interactive authentication method.\n\n        Node 1: Server, Sched, Mom, Comm [self.hostA]\n        \"\"\"\n        conf_param = {'PBS_COMM_LOG_EVENTS': \"2047\"}\n        if self.server.shortname != self.mom.shortname:\n            check_state = False\n        else:\n            check_state = True\n\n        auth_method = 'resvport'\n        common_msg1 = f'TPP authentication method = {auth_method}'\n        common_msg2 = f'Supported authentication method: {auth_method}'\n        mom_msg = f'interactive authentication method = {auth_method}'\n        exp_msg = {'server': [common_msg1, common_msg2],\n                   'comm': [common_msg2]}\n\n        self.update_pbs_conf(conf_param, check_state=check_state)\n        self.match_logs(exp_msg)\n        self.simple_interactive_job()\n        self.match_logs({'mom': mom_msg})\n\n    def 
test_interactive_job_with_resvport(self):\n        \"\"\"\n        Test that we can successfully run interactive jobs when using resvport\n        as the interactive authentication method.\n\n        Node 1: Server, Sched, Mom, Comm [self.hostA]\n        \"\"\"\n        auth_method = 'resvport'\n        conf_param = {'PBS_COMM_LOG_EVENTS': \"2047\",\n                      'PBS_SUPPORTED_AUTH_METHODS': auth_method,\n                      'PBS_AUTH_METHOD': auth_method,\n                      'PBS_INTERACTIVE_AUTH_METHOD': auth_method}\n        if self.server.shortname != self.mom.shortname:\n            check_state = False\n        else:\n            check_state = True\n\n        common_msg1 = f'TPP authentication method = {auth_method}'\n        common_msg2 = f'Supported authentication method: {auth_method}'\n        mom_msg = f'interactive authentication method = {auth_method}'\n        exp_msg = {'server': [common_msg1, common_msg2],\n                   'comm': [common_msg2]}\n\n        self.update_pbs_conf(conf_param, check_state=check_state)\n        self.match_logs(exp_msg)\n        self.simple_interactive_job()\n        self.match_logs({'mom': mom_msg})\n\n    def test_interactive_job_with_munge(self):\n        \"\"\"\n        Test that we can successfully run interactive jobs when using munge\n        as the interactive authentication method.\n\n        Node 1: Server, Sched, Mom, Comm [self.hostA]\n        \"\"\"\n        auth_method = 'munge'\n        conf_param = {'PBS_COMM_LOG_EVENTS': \"2047\",\n                      'PBS_SUPPORTED_AUTH_METHODS': f\"resvport,{auth_method}\",\n                      'PBS_INTERACTIVE_AUTH_METHOD': auth_method}\n\n        if self.server.shortname != self.mom.shortname:\n            self.node_list.append(self.mom.shortname)\n            check_state = False\n        else:\n            check_state = True\n\n        # Function call to check if munge is installed and enabled\n        for host_name in self.node_list:\n            
self.perform_op(choice='check_installed_and_run',\n                            host_name=host_name)\n\n        common_msg1 = 'TPP authentication method = resvport'\n        common_msg2 = 'Supported authentication method: resvport'\n        common_msg3 = f'Supported authentication method: {auth_method}'\n        mom_msg = f'interactive authentication method = {auth_method}'\n        exp_msg = {'server': [common_msg1, common_msg2, common_msg3],\n                   'comm': [common_msg2, common_msg3]}\n\n        self.update_pbs_conf(conf_param, check_state=check_state)\n        self.match_logs(exp_msg)\n        self.simple_interactive_job()\n        self.match_logs({'mom': mom_msg})\n\n    def test_multiple_interactive_auth_methods(self):\n        \"\"\"\n        Test that we can successfully run interactive jobs when supporting\n        multiple interactive auth methods.\n\n        Node 1: Server, Sched, Mom, Comm [self.hostA]\n        \"\"\"\n        conf_param = {'PBS_COMM_LOG_EVENTS': \"2047\",\n                      'PBS_SUPPORTED_AUTH_METHODS': 'munge,resvport'}\n        if self.server.shortname != self.mom.shortname:\n            self.node_list.append(self.mom.shortname)\n            check_state = False\n        else:\n            check_state = True\n\n        # Function call to check if munge is installed and enabled\n        for host_name in self.node_list:\n            self.perform_op(choice='check_installed_and_run',\n                            host_name=host_name)\n\n        common_msg1 = 'Supported authentication method: munge'\n        common_msg2 = 'Supported authentication method: resvport'\n\n        exp_msg = {'server': [common_msg1, common_msg2],\n                   'comm': [common_msg1, common_msg2]}\n\n        for auth_method in ['resvport', 'munge']:\n            conf_param['PBS_INTERACTIVE_AUTH_METHOD'] = auth_method\n            self.update_pbs_conf(conf_param, check_state=check_state)\n            if self.server.shortname != 
self.mom.shortname:\n                self.update_pbs_conf(conf_param, host_name=self.mom.shortname)\n            self.match_logs(exp_msg)\n            self.simple_interactive_job()\n            self.match_logs({'mom': f'interactive authentication method = {auth_method}'})\n            self.update_pbs_conf(['PBS_INTERACTIVE_AUTH_METHOD'], op='unset', restart=False)\n\n    def test_not_listed_interactive_auth_method(self):\n        \"\"\"\n        Test behavior when the provided interactive auth method is not in\n        supported methods.\n\n        Node 1: Server, Sched, Mom, Comm [self.hostA]\n        \"\"\"\n        self.svr_mode = self.server.get_op_mode()\n        if self.svr_mode != PTL_CLI:\n            self.server.set_op_mode(PTL_CLI)\n\n        auth_method = 'munge'\n        conf_param = {'PBS_COMM_LOG_EVENTS': \"2047\",\n                      'PBS_SUPPORTED_AUTH_METHODS': 'resvport',\n                      'PBS_INTERACTIVE_AUTH_METHOD': auth_method}\n        if self.server.shortname != self.mom.shortname:\n            check_state = False\n        else:\n            check_state = True\n\n        self.update_pbs_conf(conf_param, check_state=check_state)\n        common_msg1 = f'Supported authentication method: resvport'\n\n        exp_msg = {'server': [common_msg1],\n                   'comm': [common_msg1]}\n        self.match_logs(exp_msg)\n        j = Job(TEST_USER, attrs={ATTR_inter: ''})\n        j.interactive_script = [('sleep 100', '.*')]\n        jid = self.server.submit(j)\n        self.match_logs({'mom': f'interactive authentication method {auth_method} not supported'})\n\n    def test_invalid_interactive_auth_method(self):\n        \"\"\"\n        Test behavior when the provided interactive auth method is invalid.\n\n        Node 1: Server, Sched, Mom, Comm [self.hostA]\n        \"\"\"\n        self.svr_mode = self.server.get_op_mode()\n        if self.svr_mode != PTL_CLI:\n            self.server.set_op_mode(PTL_CLI)\n\n        auth_method = 
'testing'\n        conf_param = {'PBS_COMM_LOG_EVENTS': \"2047\",\n                      'PBS_SUPPORTED_AUTH_METHODS': 'resvport',\n                      'PBS_INTERACTIVE_AUTH_METHOD': auth_method}\n        if self.server.shortname != self.mom.shortname:\n            check_state = False\n        else:\n            check_state = True\n\n        self.update_pbs_conf(conf_param, check_state=check_state)\n        common_msg1 = 'Supported authentication method: resvport'\n\n        exp_msg = {'server': [common_msg1],\n                   'comm': [common_msg1]}\n        self.match_logs(exp_msg)\n        j = Job(TEST_USER, attrs={ATTR_inter: ''})\n        j.interactive_script = [('sleep 100', '.*')]\n        jid = self.server.submit(j)\n        self.match_logs({'mom': f'interactive authentication method {auth_method} not supported'})\n\n    def test_penetration_interactive_auth_resvport(self):\n        \"\"\"\n        Test that qsub interactive will reject unauthorized incoming connections.\n        1. qsub -I starts listening for an incoming connection\n        2. We connect to the qsub socket and send a random message\n        3. qsub should reject the connection since it does not originate from a\n        privileged port.\n\n        Node 1: Server, Sched, Mom, Comm\n        \"\"\"\n        auth_method = 'resvport'\n        conf_param = {'PBS_SUPPORTED_AUTH_METHODS': auth_method,\n                      'PBS_INTERACTIVE_AUTH_METHOD': auth_method}\n        self.helper_pentest_interactive_auth_1(conf_param)\n\n    def test_penetration_interactive_auth_munge(self):\n        \"\"\"\n        Test that qsub interactive will reject unauthorized incoming connections.\n        1. qsub -I starts listening for an incoming connection\n        2. We connect to the qsub socket and send a random message\n        3. 
qsub should reject the connection since it does not match a valid\n        root munge token.\n\n        Node 1: Server, Sched, Mom, Comm\n        \"\"\"\n        # Function call to check if munge is installed and enabled\n        self.perform_op(choice='check_installed_and_run',\n                        host_name=self.server.hostname)\n\n        auth_method = 'munge'\n        conf_param = {'PBS_SUPPORTED_AUTH_METHODS': f'resvport,{auth_method}',\n                      'PBS_INTERACTIVE_AUTH_METHOD': auth_method}\n        self.helper_pentest_interactive_auth_1(conf_param)\n\n    def helper_pentest_interactive_auth_1(self, conf_param):\n        \"\"\"\n        Helper function for pen-testing interactive authentication.\n        In this test a malicious user connects to the qsub -I socket and\n        submits a random message. The connection should be refused since it\n        does not match the expected credentials. When scheduling is turned on,\n        a valid execution host should be able to connect to qsub -I and the\n        interactive job should start running.\n\n        :param conf_param: Configuration parameters to update in pbs.conf\n        :type conf_param: dict\n        \"\"\"\n        if self.server.get_op_mode() != PTL_CLI:\n            self.server.set_op_mode(PTL_CLI)\n\n        self.update_pbs_conf(conf_param, host_name=self.svr_hostname)\n\n        # turn off scheduling, so we can try to connect with qsub\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n        # submit an interactive job\n        j = Job(TEST_USER, attrs={ATTR_inter: ''})\n        j.interactive_script = [('sleep 100', '.*')]\n        jid = self.server.submit(j)\n        # Get the address and port of qsub -I\n        address, port = self.get_qsub_address_port(host=self.svr_hostname)\n        self.assertTrue(port and address, \"Failed to get qsub -I address and port\")\n        family, socktype, _, _, sockaddr = socket.getaddrinfo(address, port)[0]\n    
    # Create a TCP socket and connect to qsub\n        with socket.socket(family, socktype) as sock:\n            sock.connect(sockaddr)\n            with self.assertRaises(BrokenPipeError, msg='Connection was not refused'):\n                for _ in range(10):\n                    # Add a small delay so that MoM has the time to close the connection\n                    time.sleep(5)\n                    # Send the job id to qsub\n                    sock.sendall(jid.encode())\n        self.logger.info('Connection refused as expected')\n\n        # the job should be still in the queue\n        self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid)\n        # now turn scheduling on\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n        # the job should be running\n        self.server.expect(JOB, {ATTR_state: 'R'}, id=jid)\n\n    def test_penetration_interactive_auth_resvport_2(self):\n        \"\"\"\n        In this test the malicious user connects to the socket and stays idle.\n        1. qsub -I starts listening for an incoming connection\n        2. We connect to the qsub socket and stay idle.\n        3. After a while the connection should time out and be closed.\n        4. When scheduling is turned on, a valid execution host should be able\n        to connect to qsub -I.\n        5. The interactive job should start running.\n\n        Node 1: Server, Sched, Mom, Comm\n        \"\"\"\n        auth_method = 'resvport'\n        conf_param = {'PBS_SUPPORTED_AUTH_METHODS': auth_method,\n                      'PBS_INTERACTIVE_AUTH_METHOD': auth_method}\n        self.helper_pentest_interactive_auth_2(conf_param=conf_param)\n\n    def test_penetration_interactive_auth_munge_2(self):\n        \"\"\"\n        In this test the malicious user connects to the socket and stays idle.\n        1. qsub -I starts listening for an incoming connection\n        2. 
We connect to the qsub socket and stay idle.\n        3. After a while the connection should time out and be closed.\n        4. When scheduling is turned on, a valid execution host should be able\n        to connect to qsub -I.\n        5. The interactive job should start running.\n\n        Node 1: Server, Sched, Mom, Comm\n        \"\"\"\n\n        # Function call to check if munge is installed and enabled\n        self.perform_op(choice='check_installed_and_run',\n                        host_name=self.server.hostname)\n\n        auth_method = 'munge'\n        conf_param = {'PBS_SUPPORTED_AUTH_METHODS': f'resvport,{auth_method}',\n                      'PBS_INTERACTIVE_AUTH_METHOD': auth_method}\n        self.helper_pentest_interactive_auth_2(conf_param=conf_param)\n\n    def helper_pentest_interactive_auth_2(self, conf_param):\n        \"\"\"\n        Helper function for pen-testing interactive authentication.\n        In this test a malicious user connects to the socket and stays idle.\n        After a while the connection should time out and be closed.\n        When scheduling is turned on, a valid execution host should be able\n        to connect to qsub -I and the interactive job should start running.\n\n        :param conf_param: Configuration parameters to update in pbs.conf\n        :type conf_param: dict\n        \"\"\"\n        if self.server.get_op_mode() != PTL_CLI:\n            self.server.set_op_mode(PTL_CLI)\n\n        self.update_pbs_conf(conf_param, host_name=self.svr_hostname)\n\n        # turn off scheduling, so we can try to connect with qsub\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'False'})\n\n        # submit an interactive job\n        j = Job(TEST_USER, attrs={ATTR_inter: ''})\n        j.interactive_script = [('sleep 100', '.*')]\n        jid = self.server.submit(j)\n\n        # Get the address and port of qsub -I\n        address, port = self.get_qsub_address_port(host=self.svr_hostname)\n        
self.assertTrue(port and address, \"Failed to get qsub -I address and port\")\n        family, socktype, _, _, sockaddr = socket.getaddrinfo(address, port)[0]\n        # Create a TCP socket and try to connect to qsub\n        with socket.socket(family, socktype) as sock:\n            sock.connect(sockaddr)\n            # the job should be still in the queue\n            self.server.expect(JOB, {ATTR_state: 'Q'}, id=jid)\n            # now turn scheduling on\n            self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': 'True'})\n            # the job should be running\n            self.server.expect(JOB, {ATTR_state: 'R'}, id=jid, offset=30)\n\n    def test_penetration_root_impersonation_munge(self):\n        \"\"\"\n        Test that server will reject credentials that don't match the\n        username in the batch request.\n        1. Send to the server a connect request with root as the username\n        2. Then send an authenticate request with root as the username\n        3. Finally, we send TEST_USER's munge token.\n        4. 
The server should reject the request since the username in the\n        batch request does not match the username in the munge token.\n\n        Node 1: Server, Sched, Mom, Comm\n        \"\"\"\n        # Function call to check if munge is installed and enabled\n        self.perform_op(choice='check_installed_and_run',\n                        host_name=self.server.hostname)\n\n        auth_method = 'munge'\n        conf_param = {'PBS_AUTH_METHOD': auth_method,\n                      'PBS_SUPPORTED_AUTH_METHODS': f'resvport,{auth_method}'}\n        self.update_pbs_conf(conf_param, host_name=self.svr_hostname)\n\n        # Need to run qstat under strace, to get all write system calls\n        root_packets = self.get_qstat_write_syscalls(runas=ROOT_USER)\n        self.assertGreater(len(root_packets), 2, f\"Failed to get {ROOT_USER}'s write system calls for qstat\")\n\n        user_packets = self.get_qstat_write_syscalls(runas=TEST_USER)\n        self.assertGreater(len(user_packets), 2, f\"Failed to get {TEST_USER}'s write system calls for qstat\")\n\n        # This is the order the packets will be sent:\n        # 1. PBS_BATCH_Connect as root\n        # 2. PBS_BATCH_Authenticate as root\n        # 3. 
TEST_USER's munge credentials\n        packets_to_send = root_packets[:2] + [user_packets[2]]\n        start_time = time.time()\n        # Establish connection with server on port PBS_BATCH_SERVICE_PORT\n        server_port = self.server.pbs_conf.get('PBS_BATCH_SERVICE_PORT', 15001)\n        family, socktype, _, _, sockaddr = socket.getaddrinfo(self.svr_hostname, server_port)[0]\n        # Create a TCP socket and try to connect to the server\n        with socket.socket(family, socktype) as sock:\n            sock.connect(sockaddr)\n            # Send PBS_BATCH_Connect as root\n            sock.sendall(bytes.fromhex(packets_to_send[0].replace(r'\\x', ' ')))\n            # Send PBS_BATCH_Authenticate as root\n            sock.sendall(bytes.fromhex(packets_to_send[1].replace(r'\\x', ' ')))\n            # Send TEST_USER's munge credentials\n            sock.sendall(bytes.fromhex(packets_to_send[2].replace(r'\\x', ' ')))\n            reply = sock.recv(1024).decode()\n            # Now match expected server logs\n            self.server.log_match(f\".*Type 0 request received from {ROOT_USER}.*\",\n                                    regexp=True, starttime=start_time)\n            self.server.log_match(f\".*Type 95 request received from {ROOT_USER}.*\",\n                                    regexp=True, starttime=start_time)\n            self.server.log_match(\".*;munge_validate_auth_data;MUNGE user-authentication on decode failed with `Replayed credential`.*\",\n                                    regexp=True, starttime=start_time)\n            self.server.log_match(\".*;tcp_pre_process;MUNGE user-authentication on decode failed with `Replayed credential`.*\",\n                                    regexp=True, starttime=start_time)\n\n    def test_penetration_server_to_mom_over_tcp_1(self):\n        \"\"\"\n        Test that MoM will reject unauthenticated TCP requests at port 15002.\n        1. 
We establish a TCP connection with MoM on port 15002 and submit a\n        random request.\n        2. MoM should reject the request since it does not contain the correct\n        encrypted cipher.\n\n        Node 1: Server, Sched, Mom, Comm\n        \"\"\"\n        if os.getuid() != 0 or sys.platform in ('cygwin', 'win32'):\n            self.skipTest(\"Test needs to run as root\")\n\n        packet = 'PKTV1\\x00\\x00\\x00\\x00\\x00\\x0d+1+1+2+5ncpus'\n        # Establish connection with mom on port PBS_MOM_SERVICE_PORT\n        mom_port = self.server.pbs_conf.get('PBS_MOM_SERVICE_PORT', 15002)\n        family, socktype, _, _, sockaddr = socket.getaddrinfo(self.svr_hostname, mom_port)[0]\n        start_time = time.time()\n        # Create a TCP socket and connect to mom\n        with socket.socket(family, socktype) as sock:\n            # Bind to an available, privileged port\n            self.bind_to_privileged_port(sock)\n            sock.connect(sockaddr)\n            # Send the packet\n            sock.sendall(packet.encode())\n            # Now match expected mom log\n            self.mom.log_match(\".*pbs_mom;.*received incorrect auth token.*\",\n                               regexp=True, starttime=start_time)\n            self.mom.log_match(\".*;pbs_mom;Svr;wait_request;process socket failed.*\",\n                               regexp=True, starttime=start_time)\n\n    def test_penetration_server_to_mom_over_tcp_2(self):\n        \"\"\"\n        Test that MoM will reject unauthenticated TCP requests at port 15002.\n        1. We establish a TCP connection with MoM on port 15002 and submit a\n        request that looks similar to an encrypted cipher.\n        2. 
MoM should reject the request since it does not match the expected\n        encrypted cipher according to the authentication protocol.\n\n        Node 1: Server, Sched, Mom, Comm\n        \"\"\"\n        if os.getuid() != 0 or sys.platform in ('cygwin', 'win32'):\n            self.skipTest(\"Test needs to run as root\")\n\n        random_str = ''.join(random.choices(string.ascii_letters, k=15)) +\\\n            ';' + ''.join(random.choices(string.ascii_letters, k=33))\n        packet = 'PKTV1\\x00\\x01\\x00\\x00\\x00\\x31' + random_str\n        # Establish connection with mom on port PBS_MOM_SERVICE_PORT\n        mom_port = self.server.pbs_conf.get('PBS_MOM_SERVICE_PORT', 15002)\n        family, socktype, _, _, sockaddr = socket.getaddrinfo(self.svr_hostname, mom_port)[0]\n        start_time = time.time()\n        # Create a TCP socket and connect to mom\n        with socket.socket(family, socktype) as sock:\n            # Bind to an available, privileged port\n            self.bind_to_privileged_port(sock)\n            sock.connect(sockaddr)\n            # Send the packet\n            sock.sendall(packet.encode())\n            # Now match expected mom log\n            self.mom.log_match(\".*pbs_mom;validate_hostkey, decyrpt failed, host_keylen=.*\",\n                               regexp=True, starttime=start_time)\n            self.mom.log_match(\".*pbs_mom;.*Failed to decrypt auth data.*\",\n                               regexp=True, starttime=start_time)\n            self.mom.log_match(\".*;pbs_mom;Svr;wait_request;process socket failed.*\",\n                               regexp=True, starttime=start_time)\n\n    def bind_to_privileged_port(self, sock):\n        \"\"\"\n        Bind to an available, privileged port\n\n        :param sock: The socket to bind\n        :type sock: socket.socket\n        :return: The privileged port number that was successfully bound\n        \"\"\"\n        port_found = False\n        for local_port in range(1023, 0, 
-1):\n            try:\n                sock.bind((self.svr_hostname, local_port))\n                port_found = True\n                break\n            except socket.error as e:\n                if e.errno != errno.EADDRINUSE:\n                    raise\n        self.assertTrue(port_found, \"Failed to find an available privileged port\")\n        return local_port\n\n    def get_qsub_address_port(self, host):\n        \"\"\"\n        Extract the address and listening port of a qsub interactive session\n\n        :param host: The hostname of the host where qsub -I was run\n        :type host: str\n        :return: A tuple containing the address and port of the qsub session,\n                    or (None, None) if not found\n        \"\"\"\n        tool = self.du.which(hostname=host, exe='ss')\n        if tool == 'ss':\n            tool = self.du.which(hostname=host, exe='netstat')\n        if tool == 'netstat':\n            self.skipTest(f\"Command ss or netstat not found on {host}\")\n        cmd = [tool, '-tnap']\n        ret = self.du.run_cmd(hosts=host, cmd=cmd, runas=ROOT_USER)\n        if ret['rc'] == 0:\n            for line in ret['out']:\n                if 'LISTEN' in line and 'qsub' in line:\n                    # extract the address and port from the output\n                    m = self.re_addr_port.match(line.split()[3])\n                    if m:\n                        return m.group('addr'), m.group('port')\n        self.logger.error(f\"Failed to get qsub -I address and port with: {ret['err']}\")\n        return None, None\n\n    def get_qstat_write_syscalls(self, runas, host=None):\n        \"\"\"\n        Get all write system calls made by qstat\n\n        :param runas: The user to run the command as\n        :type runas: str\n        :return: A list of all write system calls made by qstat.\n        \"\"\"\n        strace = self.du.which(hostname=host, exe='strace')\n        if strace == 'strace':\n            self.skipTest(\"Command 
strace not found\")\n        qstat = os.path.join(self.server.pbs_conf['PBS_EXEC'], 'bin', 'qstat')\n        cmd = [strace, '--trace=write', '-s1024', '-x', '-q', '--', qstat]\n        ret = self.du.run_cmd(cmd=cmd, runas=runas, hosts=host, logerr=False)\n        calls = []\n        if ret['rc'] == 0:\n            for line in ret['err']:\n                m = self.re_syscall.match(line)\n                if m:\n                    calls.append(m.group('write_buffer'))\n        else:\n            self.logger.error(f\"Failed to get qstat write system calls with: {ret['err']}\")\n        return calls\n\n    def tearDown(self):\n        conf_param = ['PBS_SUPPORTED_AUTH_METHODS',\n                      'PBS_AUTH_METHOD',\n                      'PBS_COMM_LOG_EVENTS',\n                      'PBS_COMM_ROUTERS',\n                      'PBS_LEAF_ROUTERS',\n                      'PBS_INTERACTIVE_AUTH_METHOD',\n                      'PBS_AUTH_SERVICE_USERS']\n        restart = True\n        self.node_list = set(self.node_list)\n        for host_name in self.node_list:\n            if host_name == self.client_host:\n                restart = False\n            self.update_pbs_conf(conf_param, host_name, op='unset',\n                                 restart=restart, check_state=False)\n        self.node_list.clear()\n"
  },
  {
    "path": "test/tests/selftest/__init__.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom ptl.utils.pbs_testsuite import *\n\n\nclass TestSelf(PBSTestSuite):\n    \"\"\"\n    Base test suite for Self tests\n    \"\"\"\n"
  },
  {
    "path": "test/tests/selftest/pbs_config_sched.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 2022 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\nfrom tests.selftest import *\nimport re\n\n\nclass TestSchedConfig(TestSelf):\n    \"\"\"\n    Test setting values in sched_config file.\n    \"\"\"\n\n    def test_set_sched_config(self):\n        '''\n        Test whether Scheduler.set_sched_config() works as expected.\n        '''\n        sched = self.scheds['default']\n\n        def set_and_get(confs):\n            \"\"\"\n            Change settings in scheduler config file and fetch the new\n            file contents\n            \"\"\"\n            sched.set_sched_config(confs, apply=True, validate=False)\n            sched.parse_sched_config()\n            return sched.sched_config\n\n        def cmp_dict(old, new):\n            \"\"\"\n            Compare two dictionaries and build list of changes\n            \"\"\"\n            all_keys = set(old.keys()) | set(new.keys())\n            diffs = []\n            for key in sorted(all_keys):\n                o = old.get(key, '<missing>')\n                n = new.get(key, '<missing>')\n                if o != n:\n                    diffs.append([key, n])\n            return diffs\n\n        # See if we can change an existing value, and leave others alone\n        old_conf = sched.sched_config\n        new_conf = set_and_get({'round_robin': 'True ALL'})\n        diffs = cmp_dict(old_conf, new_conf)\n        self.assertEqual(diffs, [['round_robin', 
'True ALL']])\n\n        # Repeat for initially missing setting\n        old_conf = new_conf\n        new_conf = set_and_get({'job_sort_key': '\"ncpus HIGH\"'})\n        diffs = cmp_dict(old_conf, new_conf)\n        self.assertEqual(diffs, [['job_sort_key', '\"ncpus HIGH\"']])\n\n        # Test whether we can unset a value by setting to empty list\n        old_conf = new_conf\n        new_conf = set_and_get({'job_sort_key': []})\n        diffs = cmp_dict(old_conf, new_conf)\n        self.assertEqual(diffs, [['job_sort_key', '<missing>']])\n\n        # Test whether we can set multiple values for one setting\n        old_conf = new_conf\n        jsk = ['\"job_priority HIGH\"', '\"ncpus HIGH\"']\n        new_conf = set_and_get({'job_sort_key': jsk})\n        diffs = cmp_dict(old_conf, new_conf)\n        self.assertEqual(diffs, [['job_sort_key', jsk]])\n\n        # Test whether we can change multiple settings\n        old_conf = new_conf\n        jsk = ['\"job_priority LOW\"', '\"mem LOW\"']\n        new_conf = set_and_get({'primetime_prefix': 'xp_',\n                                'job_sort_key': jsk})\n        diffs = cmp_dict(old_conf, new_conf)\n        self.assertEqual(diffs, [['job_sort_key', jsk],\n                                 ['primetime_prefix', 'xp_']])\n\n        # Finally, check if the scheduler likes the result\n        sched.set_sched_config(confs={}, apply=True, validate=True)\n"
  },
  {
    "path": "test/tests/selftest/pbs_cycles_test.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.selftest import *\n\n\nclass PBSTestCycle(TestSelf):\n    \"\"\"\n    Tests to check that the cycle function returns correct information\n    for schedulers\n    \"\"\"\n\n    def test_cycle_multi_sched(self):\n        \"\"\"\n        Test that scheduler.cycle() reads the correct multisched log file.\n        This test checks the start time and political order from the\n        cycles() output to verify that the correct log file is being read.\n        \"\"\"\n        # Create a queue, vnode and link them to the newly created sched\n        a = {'queue_type': 'execution',\n             'started': 'True',\n             'enabled': 'True'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, id='wq')\n\n        a = {'resources_available.ncpus': 2}\n        self.mom.create_vnodes(a, 1)\n        vn = self.mom.shortname\n        p1 = {'partition': 'P1'}\n        self.server.manager(MGR_CMD_SET, QUEUE, p1, id='wq')\n        self.server.manager(MGR_CMD_SET, NODE, p1, id=vn + '[0]')\n\n        a = {'partition': 'P1',\n             'sched_host': self.server.hostname}\n        self.server.manager(MGR_CMD_CREATE, SCHED,\n                            a, id=\"sc\")\n        self.scheds['sc'].create_scheduler()\n        self.scheds['sc'].start()\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'True'}, id=\"sc\")\n        self.server.manager(MGR_CMD_SET, SCHED, {'log_events': 2047}, id='sc')\n\n        # Turn off scheduling for all the scheds.\n        for name in self.scheds:\n            self.server.manager(MGR_CMD_SET, SCHED,\n                                {'scheduling': 'False'}, id=name)\n        st = time.time()\n\n        # submit jobs and check political order\n        a = {ATTR_queue: 'wq'}\n\n        j1 = Job(TEST_USER1, attrs=a)\n        jid1 = self.server.submit(j1)\n        j2 = Job(TEST_USER1, attrs=a)\n        jid2 = self.server.submit(j2)\n\n        self.server.manager(MGR_CMD_SET, SCHED,\n                            {'scheduling': 'True'}, id='sc')\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n\n        cycles = self.scheds['sc'].cycles(start=st, firstN=1)\n        self.assertNotEqual(len(cycles), 0)\n        cycle = cycles[0]\n\n        political_order = cycle.political_order\n        self.assertNotEqual(len(political_order), 0)\n        firstconsidered = political_order[0]\n\n        # Check that the cycle start time is after st\n        self.assertGreater(cycle.start, st)\n\n        # check that the first job considered is jid1\n        self.assertEqual(firstconsidered, jid1.split('.')[0])\n"
  },
  {
    "path": "test/tests/selftest/pbs_default_timeout.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.selftest import *\nfrom ptl.utils.plugins.ptl_test_runner import TimeOut\n\n\nclass TestDefaultTimeout(TestSelf):\n    \"\"\"\n    Test suite to verify that the increased default testcase timeout\n    works properly.\n    \"\"\"\n\n    def test_default_timeout(self):\n        \"\"\"\n        Verify that the test does not time out after 180 sec.,\n        i.e. the previous default timeout value\n        \"\"\"\n        try:\n            mssg = 'sleeping for %ssec and minimum-testcase-timeout is %ssec' \\\n                   % (MINIMUM_TESTCASE_TIMEOUT/2, MINIMUM_TESTCASE_TIMEOUT)\n            self.logger.info(mssg)\n            time.sleep(MINIMUM_TESTCASE_TIMEOUT/2)\n        except TimeOut as e:\n            mssg = 'Timed out after %s second' % MINIMUM_TESTCASE_TIMEOUT\n            err_mssg = 'The test timed out for an incorrect timeout duration'\n            self.assertEqual(mssg, str(e), err_mssg)\n\n    def test_timeout_greater_default_value(self):\n        \"\"\"\n        Verify that the test times out after 600 sec.\n        \"\"\"\n        try:\n            mssg = 'sleeping for %ssec and minimum-testcase-timeout is %ssec' \\\n                   % (MINIMUM_TESTCASE_TIMEOUT+1, MINIMUM_TESTCASE_TIMEOUT)\n            self.logger.info(mssg)\n            time.sleep(MINIMUM_TESTCASE_TIMEOUT+1)\n        except TimeOut as e:\n            mssg = 'Timed out after %s second' % MINIMUM_TESTCASE_TIMEOUT\n            err_mssg = 'The test timed out for an incorrect timeout duration'\n            self.assertEqual(mssg, str(e), err_mssg)\n        else:\n            msg = 'Test did not timeout after the min. timeout period of %d' \\\n                  % MINIMUM_TESTCASE_TIMEOUT\n            self.fail(msg)\n\n    @timeout(200)\n    def test_timeout_decorator_less_default_value(self):\n        \"\"\"\n        If the timeout decorator value is less than 600, then the\n        default testcase timeout is used\n        \"\"\"\n        try:\n            mssg = 'sleeping for %ssec and minimum-testcase-timeout is %ssec' \\\n                   % (MINIMUM_TESTCASE_TIMEOUT/2, MINIMUM_TESTCASE_TIMEOUT)\n            self.logger.info(mssg)\n            time.sleep(MINIMUM_TESTCASE_TIMEOUT/2)\n        except TimeOut as e:\n            mssg = 'Timed out after %s second' % MINIMUM_TESTCASE_TIMEOUT\n            err_mssg = 'The test timed out for an incorrect timeout duration'\n            self.assertEqual(mssg, str(e), err_mssg)\n\n    @timeout(800)\n    def test_timeout_decorator_greater_default_value(self):\n        \"\"\"\n        If the timeout decorator value is greater than 600, then the\n        decorator value is used as the testcase timeout\n        \"\"\"\n        try:\n            mssg = 'sleeping for %ssec and minimum-testcase-timeout is %ssec' \\\n                   % (MINIMUM_TESTCASE_TIMEOUT+1, 800)\n            self.logger.info(mssg)\n            time.sleep(MINIMUM_TESTCASE_TIMEOUT+1)\n        except TimeOut as e:\n            mssg = 'Timed out after 800 second'\n            err_mssg = 'The test timed out for an incorrect timeout duration'\n            self.assertEqual(mssg, str(e), err_mssg)\n"
  },
  {
    "path": "test/tests/selftest/pbs_dshutils_tests.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.selftest import *\nfrom ptl.utils.pbs_dshutils import PbsConfigError\nimport pwd\nimport getpass\nimport grp\nimport os\n\n\nclass TestDshUtils(TestSelf):\n    \"\"\"\n    Test pbs distributed utilities util pbs_dshutils.py\n    \"\"\"\n\n    def test_pbs_conf(self):\n        \"\"\"\n        Test setting a pbs.conf variable and then unsetting it\n        \"\"\"\n        var = 'PBS_COMM_THREADS'\n        val = '16'\n        self.du.set_pbs_config(confs={var: val})\n        conf = self.du.parse_pbs_config()\n        msg = 'pbs.conf variable not set'\n        self.assertIn(var, conf, msg)\n        self.assertEqual(conf[var], val, msg)\n\n        msg = 'pbs.conf variable not unset'\n        self.du.unset_pbs_config(confs=[var])\n        conf = self.du.parse_pbs_config()\n        self.assertNotIn(var, conf, msg)\n\n        exception_found = False\n        try:\n            self.du.set_pbs_config(confs={'f': 'b'}, fout='/does/not/exist')\n        except PbsConfigError:\n            exception_found = True\n\n        self.assertTrue(exception_found, 'PbsConfigError not thrown')\n\n    def test_pbs_env(self):\n        \"\"\"\n        Test setting a pbs_environment variable and then unsetting it\n        \"\"\"\n        self.du.set_pbs_environment(environ={'pbs_foo': 'True'})\n        environ = self.du.parse_pbs_environment()\n        msg = 'pbs 
environment variable not set'\n        self.assertIn('pbs_foo', environ, msg)\n        self.assertEqual(environ['pbs_foo'], 'True', msg)\n\n        msg = 'pbs environment variable not unset'\n        self.du.unset_pbs_environment(environ=['pbs_foo'])\n        environ = self.du.parse_pbs_environment()\n        self.assertNotIn('pbs_foo', environ, msg)\n\n    def check_access(self, path, mode=0o755, user=None, group=None, host=None):\n        \"\"\"\n        Helper function to check user, group and mode of given path\n        \"\"\"\n        if host is None:\n            result = os.stat(path)\n            m = result.st_mode\n            u = result.st_uid\n            g = result.st_gid\n\n        else:\n            py = self.du.which(host, 'python3')\n            c = '\"import os;s = os.stat(\\'' + path + '\\');'\n            c += 'print(s.st_uid,s.st_gid,s.st_mode)\"'\n            cmd = [py, '-c', c]\n            runas = self.du.get_current_user()\n            ret = self.du.run_cmd(host, cmd, sudo=True, runas=runas)\n            self.assertEqual(ret['rc'], 0)\n            res = ret['out'][0]\n            u, g, m = [int(v) for v in\n                       res.split(' ')]\n\n        self.assertEqual(oct(m & 0o777), oct(mode))\n        if user is not None:\n            uid = pwd.getpwnam(str(user)).pw_uid\n            self.assertEqual(u, uid)\n        if group is not None:\n            gid = grp.getgrnam(str(group))[2]\n            self.assertEqual(g, gid)\n\n    def test_create_dir_as_user(self):\n        \"\"\"\n        Test creating a directory as user\n        \"\"\"\n\n        # create a directory with permissions\n        tmpdir = self.du.create_temp_dir(mode=0o750)\n        self.check_access(tmpdir, mode=0o750)\n\n        # check default configurations\n        tmpdir = self.du.create_temp_dir()\n        self.check_access(tmpdir, user=getpass.getuser())\n\n        # create a directory as a specific user\n        tmpdir = 
self.du.create_temp_dir(asuser=TEST_USER2)\n        self.check_access(tmpdir, user=TEST_USER2, mode=0o755)\n\n        # create a directory as a specific user and permissions\n        tmpdir = self.du.create_temp_dir(asuser=TEST_USER2, mode=0o750)\n        self.check_access(tmpdir, mode=0o750, user=TEST_USER2)\n\n        # create a directory as a specific user, group and permissions\n        tmpdir = self.du.create_temp_dir(asuser=TEST_USER2, mode=0o770,\n                                         asgroup=TSTGRP3)\n        self.check_access(tmpdir, mode=0o770, user=TEST_USER2, group=TSTGRP3)\n\n    @requirements(num_moms=2)\n    def test_create_remote_dir_as_user(self):\n        \"\"\"\n        Test creating a directory on a remote host as a user\n        \"\"\"\n\n        if len(self.moms) < 2:\n            self.skip_test(\"Test requires 2 moms: use -p moms=mom1:mom2\")\n        remote = None\n        for each in self.moms:\n            if not self.du.is_localhost(each):\n                remote = each\n                break\n\n        if remote is None:\n            self.skip_test(\"Provide a remote hostname\")\n\n        # check default configurations\n        tmpdir = self.du.create_temp_dir(remote)\n        self.check_access(tmpdir, user=getpass.getuser(), host=remote)\n\n        # create a directory with permissions\n        tmpdir = self.du.create_temp_dir(remote, mode=0o750)\n        self.check_access(tmpdir, mode=0o750, host=remote)\n\n        # create a directory as a specific user\n        tmpdir = self.du.create_temp_dir(remote, asuser=TEST_USER2)\n        self.check_access(tmpdir, user=TEST_USER2, mode=0o755, host=remote)\n\n        # create a directory as a specific user and permissions\n        tmpdir = self.du.create_temp_dir(remote, asuser=TEST_USER2,\n                                         mode=0o750)\n        self.check_access(tmpdir, mode=0o750, user=TEST_USER2, host=remote)\n\n        # create a directory as a specific user, group and 
permissions\n        tmpdir = self.du.create_temp_dir(remote, asuser=TEST_USER2, mode=0o770,\n                                         asgroup=TSTGRP3)\n        self.check_access(tmpdir, mode=0o770, user=TEST_USER2, group=TSTGRP3,\n                          host=remote)\n\n    @requirements(num_moms=1)\n    def test_script_cleanup(self):\n        \"\"\"\n        Make sure run_cmd with the as_script option cleans up the\n        temporary file it creates.\n        \"\"\"\n\n        if len(self.moms) < 1:\n            self.skip_test(\"Test requires a mom: use -p mom=mom_host\")\n        remote = None\n        for each in self.moms:\n            if not self.du.is_localhost(each):\n                remote = each\n                break\n\n        if remote is None:\n            self.skip_test(\"Provide a remote mom\")\n\n        # First, figure out where temp files are created\n        pfx = 'PtlPbs'      # This should be a constant someplace\n        tmpname = self.du.create_temp_file(prefix=pfx)\n        tmp_dir_path = os.path.dirname(tmpname)\n        long_pfx = os.path.join(tmp_dir_path, pfx)\n        self.du.rm(path=tmpname)\n\n        # Get current contents of temp file dir\n        before = self.du.listdir(path=tmp_dir_path, level=logging.DEBUG)\n\n        # Run an as_script command\n        example_cmd = \"printenv HOME\"\n        result = self.du.run_cmd(cmd=example_cmd, as_script=True,\n                                 level=logging.DEBUG)\n        self.assertEqual(result['rc'], 0)\n\n        # Get contents of temp file dir post script\n        after = self.du.listdir(path=tmp_dir_path, level=logging.DEBUG)\n\n        # Compare contents, looking for new PTL files\n        new_files = set(after) - set(before)\n        our_new_files = [x for x in new_files if x.startswith(long_pfx)]\n        self.assertEqual([], our_new_files, \"script file not cleaned up\")\n\n        # Repeat, running on remote host\n        before = self.du.listdir(hostname=remote, 
path=tmp_dir_path,\n                                 level=logging.DEBUG)\n        result = self.du.run_cmd(hosts=remote, cmd=example_cmd,\n                                 as_script=True, level=logging.DEBUG)\n        self.assertEqual(result['rc'], 0)\n        after = self.du.listdir(hostname=remote, path=tmp_dir_path,\n                                level=logging.DEBUG)\n        new_files = set(after) - set(before)\n        our_new_files = [x for x in new_files if x.startswith(long_pfx)]\n        self.assertEqual([], our_new_files,\n                         \"remote script file not cleaned up\")\n\n        # Repeat, running locally as a different user\n        before = self.du.listdir(path=tmp_dir_path, runas=TEST_USER,\n                                 level=logging.DEBUG)\n        result = self.du.run_cmd(cmd=example_cmd, as_script=True,\n                                 runas=TEST_USER, level=logging.DEBUG)\n        self.assertEqual(result['rc'], 0)\n        after = self.du.listdir(path=tmp_dir_path, runas=TEST_USER,\n                                level=logging.DEBUG)\n        new_files = set(after) - set(before)\n        our_new_files = [x for x in new_files if x.startswith(long_pfx)]\n        self.assertEqual([], our_new_files,\n                         \"alternate user script file not cleaned up\")\n\n        # Once more, with both remote host and different user\n        before = self.du.listdir(hostname=remote, path=tmp_dir_path,\n                                 runas=TEST_USER, level=logging.INFOCLI)\n        result = self.du.run_cmd(hosts=remote, cmd=example_cmd,\n                                 as_script=True, runas=TEST_USER,\n                                 level=logging.DEBUG)\n        self.assertEqual(result['rc'], 0)\n        after = self.du.listdir(hostname=remote, path=tmp_dir_path,\n                                runas=TEST_USER, level=logging.INFOCLI)\n        new_files = set(after) - set(before)\n        our_new_files = [x for x in 
new_files if x.startswith(long_pfx)]\n        self.assertEqual([], our_new_files,\n                         \"alt user script file on remote not cleaned up\")\n"
  },
  {
    "path": "test/tests/selftest/pbs_expect.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.selftest import *\nfrom io import StringIO\nimport logging\n\n\nclass TestExpect(TestSelf):\n    \"\"\"\n    Contains tests for the expect() function\n    \"\"\"\n\n    def test_attribute_case(self):\n        \"\"\"\n        Test that when verifying an attribute list containing attribute names\n        with different case, expect() is case insensitive\n        \"\"\"\n        # Create a queue\n        a = {'queue_type': 'execution'}\n        self.server.manager(MGR_CMD_CREATE, QUEUE, a, 'expressq')\n\n        # Set the Priority attribute on the queue but provide 'p' lowercase\n        # Set other attributes normally\n        a = {'enabled': 'True', 'started': 'True', 'priority': 150}\n        self.server.manager(MGR_CMD_SET, QUEUE, a, 'expressq')\n        self.server.expect(QUEUE, a, id='expressq')\n\n    def test_revert_attributes(self):\n        \"\"\"\n        Test that when we unset an attribute, expect() reports it as\n        unset; this is verified on a per-attribute basis.\n        \"\"\"\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': False})\n        self.server.manager(MGR_CMD_UNSET, SERVER, 'scheduling')\n        self.server.expect(SERVER, 'scheduling', op=UNSET)\n\n        self.server.manager(MGR_CMD_UNSET, SERVER, 'max_job_sequence_id')\n        self.server.expect(SERVER, 'max_job_sequence_id', op=UNSET)\n\n        self.server.manager(MGR_CMD_UNSET, SCHED, 'sched_host')\n        self.server.expect(SCHED, 'sched_host', op=UNSET)\n\n        self.server.manager(MGR_CMD_UNSET, SCHED, ATTR_sched_cycle_len)\n        self.server.expect(SCHED, ATTR_sched_cycle_len, op=UNSET)\n\n        self.server.manager(MGR_CMD_UNSET, NODE, ATTR_NODE_resv_enable,\n                            id=self.mom.shortname)\n        self.server.expect(NODE, ATTR_NODE_resv_enable,\n                           op=UNSET, id=self.mom.shortname)\n\n        hook_name = \"testhook\"\n        hook_body = \"import pbs\\npbs.event().reject('my custom message')\\n\"\n        a = {'event': 'queuejob', 'enabled': 'True', 'alarm': 10}\n        self.server.create_import_hook(hook_name, a, hook_body)\n        self.server.manager(MGR_CMD_UNSET, HOOK, 'alarm', id=hook_name)\n        self.server.expect(HOOK, 'alarm', op=UNSET, id=hook_name)\n\n        a = {'partition': 'P1',\n             'sched_host': self.server.hostname}\n        self.server.manager(MGR_CMD_CREATE, SCHED,\n                            a, id=\"sc1\")\n        new_sched_home = os.path.join(self.server.pbs_conf['PBS_HOME'],\n                                      'sc_1_mot')\n        self.scheds['sc1'].create_scheduler(sched_home=new_sched_home)\n        self.scheds['sc1'].start()\n        self.server.manager(MGR_CMD_UNSET, SCHED, 'sched_priv', id='sc1')\n        self.server.expect(SCHED, 'sched_priv', op=UNSET, id='sc1')\n\n        a = {'resources_available.ncpus': 2}\n        self.mom.create_vnodes(attrib=a, num=2, vname='vn')\n        self.server.manager(MGR_CMD_UNSET, VNODE,\n                            'resources_available.ncpus', id='vn[1]')\n        self.server.expect(VNODE, 'resources_available.ncpus',\n                           id='vn[1]', op=UNSET)\n        self.server.manager(MGR_CMD_UNSET, VNODE,\n                            ATTR_NODE_resv_enable, id='vn[1]')\n        self.server.expect(VNODE, ATTR_NODE_resv_enable,\n                           op=UNSET, id='vn[1]')\n\n        self.server.manager(MGR_CMD_UNSET, NODE, 'resources_available.ncpus',\n                            id=self.mom.shortname)\n        self.server.expect(NODE, 'resources_available.ncpus',\n                           op=UNSET, id=self.mom.shortname)\n"
  },
  {
    "path": "test/tests/selftest/pbs_initservices.py",
    "content": "# coding: utf-8\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.selftest import *\n\n\nclass TestPbsInitServices(TestSelf):\n    \"\"\"\n    Contains tests related to the PBSInitServices class\n    \"\"\"\n\n    def test_init_services_pid(self):\n        \"\"\"\n        Test that the pids of the PBS daemons are updated correctly\n        after a restart via PBSInitServices functions\n        \"\"\"\n        self.server.pi.stop()\n        self.server.pi.start_server()\n        self.server.pi.start_mom()\n        self.server.pi.start_comm()\n        self.server.pi.start_sched()\n\n        self.assertTrue(self.server.signal('-HUP'))\n        self.assertTrue(self.mom.signal('-HUP'))\n        self.assertTrue(self.scheduler.signal('-HUP'))\n        self.assertTrue(self.comm.signal('-HUP'))\n"
  },
  {
    "path": "test/tests/selftest/pbs_job_cleanup.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.selftest import *\n\n\nclass TestJobCleanup(TestSelf):\n    \"\"\"\n    Tests for checking the job cleanup functionality in PTL\n    \"\"\"\n\n    @timeout(10000)\n    def test_cleanup_perf(self):\n        \"\"\"\n        Test the time that it takes to clean up a large number of jobs\n        \"\"\"\n        num_jobs = 10000\n\n        # Set ncpus to a number that'll allow half the jobs to run\n        ncpus = int(num_jobs / 2)\n        a = {'resources_available.ncpus': str(ncpus)}\n        self.server.manager(MGR_CMD_SET, NODE, a, self.mom.shortname)\n\n        self.server.manager(MGR_CMD_SET, MGR_OBJ_SERVER,\n                            {'scheduling': 'False'})\n\n        for i in range(num_jobs):\n            j = Job()\n            j.set_sleep_time(1000)\n            self.server.submit(j)\n\n        self.scheduler.run_scheduling_cycle()\n\n        # Measure job cleanup performance\n        t1 = time.time()\n        self.server.cleanup_jobs()\n\n        # Restart the mom\n        self.mom.restart()\n        t2 = time.time()\n\n        self.logger.info(\"Time taken for job cleanup \" + str(t2 - t1))\n\n    def test_del_large_num_jobs(self):\n        \"\"\"\n        Test that the delete jobs functionality works correctly with a large\n        number of jobs\n        \"\"\"\n        num_jobs = 1000\n        a = {'resources_available.ncpus': 1,\n             'resources_available.mem': '2gb'}\n        self.mom.create_vnodes(a, num_jobs,\n                               sharednode=False, expect=False)\n        self.server.expect(NODE, {'state=free': (GE, num_jobs)})\n\n        self.server.manager(MGR_CMD_SET, MGR_OBJ_SERVER,\n                            {'scheduling': 'False'})\n\n        for i in range(num_jobs):\n            j = Job(TEST_USER)\n            self.server.submit(j)\n\n        self.scheduler.run_scheduling_cycle()\n        self.server.expect(JOB, {'job_state=R': num_jobs}, max_attempts=120)\n\n        self.server.cleanup_jobs()\n        self.server.expect(SERVER, {'total_jobs': 0})\n\n    def test_del_queued_jobs(self):\n        \"\"\"\n        Test that the delete jobs functionality works correctly with a large\n        number of queued jobs that do not have processes associated with them.\n        \"\"\"\n        num_jobs = 1000\n        a = {'resources_available.ncpus': 1,\n             'resources_available.mem': '2gb'}\n        self.mom.create_vnodes(a, num_jobs,\n                               sharednode=False, expect=False)\n        self.server.expect(NODE, {'state=free': (GE, num_jobs)})\n\n        self.server.manager(MGR_CMD_SET, MGR_OBJ_SERVER,\n                            {'scheduling': 'False'})\n\n        for i in range(num_jobs):\n            j = Job(TEST_USER, attrs={ATTR_l + '.select': '1:ncpus=4'})\n            self.server.submit(j)\n\n        self.scheduler.run_scheduling_cycle()\n        self.server.expect(JOB, {'job_state=Q': num_jobs}, max_attempts=120)\n\n        self.server.cleanup_jobs()\n        self.server.expect(SERVER, {'total_jobs': 0})\n"
  },
  {
    "path": "test/tests/selftest/pbs_json_test_report.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.selftest import *\nfrom ptl.utils.plugins.ptl_report_json import PTLJsonData\n\n\nclass TestJSONReport(TestSelf):\n    \"\"\"\n    Tests to test JSON test report\n    \"\"\"\n\n    def test_json_fields(self):\n        \"\"\"\n        Test to verify fields of JSON test report\n        \"\"\"\n        test_data = {\n            'status': 'PASS',\n            'start_time': datetime.datetime(2018, 10, 4, 0, 18, 8, 509426),\n            'hostname': self.server.hostname,\n            'status_data': '',\n            'pbs_version': '19.2.0.20180903140741',\n            'testcase': 'test_select',\n            'end_time': datetime.datetime(2018, 10, 4, 0, 18, 11, 153022),\n            'testdoc': '\\n        Test to qselect\\n        ',\n            'duration': datetime.timedelta(0, 2, 643596),\n            'suite': 'SmokeTest',\n            'testparam': 'ABC=DEF,x=100',\n            'machinfo': {\n                self.server.hostname: {\n                    'platform': 'Linux centos7alpha3.10.0-862.11.6.el7.x86_64',\n                    'pbs_install_type': 'server',\n                    'os_info': 'Linux-3.10.0-862.11.6.el7.x86_64-x86_64'\n                }\n            },\n            'suitedoc': '\\n    This test suite contains smoke tests of PBS\\n',\n            'tags': ['smoke'],\n            'file': 'tests/pbs_smoketest.py',\n            'module': 
'tests.pbs_smoketest',\n            'requirements': {\n                \"num_moms\": 1,\n                \"no_comm_on_mom\": True,\n                \"no_comm_on_server\": False,\n                \"num_servers\": 1,\n                \"no_mom_on_server\": False,\n                \"num_comms\": 1,\n                \"num_clients\": 1\n            }\n        }\n        verify_data = {\n            'test_keys': [\"command\", \"user\", \"product_version\", \"run_id\",\n                          \"test_conf\", \"machine_info\", \"testsuites\",\n                          \"test_summary\", \"additional_data\"],\n            'test_summary_keys': [\"result_summary\", \"test_start_time\",\n                                  \"test_end_time\", \"tests_with_failures\",\n                                  \"test_suites_with_failures\"],\n            'test_machine_info': [\"platform\", \"os_info\", \"pbs_install_type\"],\n            'test_results': [\"run\", \"succeeded\", \"failed\", \"errors\", \"skipped\",\n                             \"timedout\"],\n            'test_suites_info': [\"testcases\", \"docstring\", \"module\", \"file\"],\n            'test_cases_info': [\"docstring\", \"tags\", \"requirements\",\n                                \"results\"],\n            'test_results_info': [\"duration\", \"status\", \"status_data\",\n                                  \"start_time\", \"end_time\", \"measurements\"]\n        }\n        field_values = {\n            'machine_info_name': list(test_data['machinfo'].keys())[0],\n            'testresult_status': test_data['status'],\n            'requirements': {\n                \"num_moms\": 1,\n                \"no_comm_on_mom\": True,\n                \"no_comm_on_server\": False,\n                \"num_servers\": 1,\n                \"no_mom_on_server\": False,\n                \"num_comms\": 1,\n                \"num_clients\": 1\n            }\n        }\n        faulty_fields = []\n        faulty_values = []\n        test_cmd = 
\"pbs_benchpress -t SmokeTest.test_submit_job\"\n        jsontest = PTLJsonData(command=test_cmd)\n        jdata = jsontest.get_json(data=test_data, prev_data=None)\n        tsname = test_data['suite']\n        tcname = test_data['testcase']\n        vdata = jdata['testsuites'][tsname]['testcases'][tcname]\n        for k in verify_data['test_keys']:\n            if k not in jdata:\n                faulty_fields.append(k)\n        for l in verify_data['test_summary_keys']:\n            if l not in jdata['test_summary']['1']:\n                faulty_fields.append(l)\n        for o in verify_data['test_results']:\n            if o not in jdata['test_summary']['1']['result_summary']:\n                faulty_fields.append(o)\n        for node in jdata['machine_info']:\n            for m in verify_data['test_machine_info']:\n                if m not in jdata['machine_info'][node]:\n                    faulty_fields.append(m)\n        for q in jdata['testsuites']:\n            for p in verify_data['test_suites_info']:\n                if p not in jdata['testsuites'][q]:\n                    faulty_fields.append(p)\n        for s in jdata['testsuites']:\n            for t in jdata['testsuites'][s]:\n                for u in jdata['testsuites'][s]['testcases']:\n                    for r in verify_data['test_cases_info']:\n                        if r not in jdata['testsuites'][s]['testcases'][u]:\n                            faulty_fields.append(r)\n        for s in jdata['testsuites']:\n            for t in jdata['testsuites'][s]['testcases']:\n                for v in verify_data['test_results_info']:\n                    testcase = jdata['testsuites'][s]['testcases'][t]\n                    if v not in testcase['results']['1']:\n                        faulty_fields.append(v)\n        for k, v in field_values.items():\n            if k == 'machine_info_name':\n                if list(jdata['machine_info'].keys())[0] != v:\n                    faulty_values.append(k)\n   
         if k == 'testresult_status':\n                if vdata['results']['1']['status'] != v:\n                    faulty_values.append(k)\n            if k == 'requirements':\n                if vdata['requirements'] != v:\n                    faulty_values.append(k)\n        if (faulty_fields or faulty_values):\n            raise AssertionError(\"Faulty fields or values\",\n                                 (faulty_fields, faulty_values))\n"
  },
  {
    "path": "test/tests/selftest/pbs_manager.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.selftest import *\n\n\nclass TestManager(TestSelf):\n    \"\"\"\n    Test ptl.server.manager()\n    \"\"\"\n\n    def test_pbs_conf(self):\n        \"\"\"\n        Test setting a string array with quote characters\n        \"\"\"\n        attr = {'type': 'string_array', 'flag': 'h'}\n        rc = self.server.manager(MGR_CMD_CREATE, RSC, attr, id='f')\n        self.assertEqual(rc, 0)\n        self.server.manager(MGR_CMD_SET, SERVER,\n                            {\"resources_available.f\":\n                             [\"A'A\", 'B\"B', 'C C']\n                             })\n        servers = self.server.filter(SERVER,\n                                     {'resources_available.f':\n                                      \"\"\"A'A,B\"B,C C\"\"\"})\n        self.logger.info(servers)\n        self.assertTrue(servers)\n"
  },
  {
    "path": "test/tests/selftest/pbs_managers_operators.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.selftest import *\n\n\nclass TestManagersOperators(TestSelf):\n\n    \"\"\"\n        Additional managers users, except current user and MGR_USER\n        should get unset after test setUp run\n    \"\"\"\n    def test_managers_unset_setup(self):\n        \"\"\"\n        Additional managers users, except current user and MGR_USER should\n        get unset after test setUp run\n        \"\"\"\n        runas = ROOT_USER\n        manager_usr_str = str(MGR_USER) + '@*'\n        current_usr = pwd.getpwuid(os.getuid())[0]\n        current_usr_str = str(current_usr) + '@*'\n        mgr_users = {manager_usr_str, current_usr_str}\n        svr_mgr = self.server.status(SERVER, 'managers')[0]['managers']\\\n            .split(\",\")\n        self.assertEqual(mgr_users, set(svr_mgr))\n\n        mgr_user1 = str(TEST_USER)\n        mgr_user2 = str(TEST_USER1)\n        a = {ATTR_managers: (INCR, mgr_user1 + '@*,' + mgr_user2 + '@*')}\n        self.server.manager(MGR_CMD_SET, SERVER, a, runas=runas)\n        self.logger.info(\"Calling test setUp:\")\n        TestSelf.setUp(self)\n\n        svr_mgr = self.server.status(SERVER, 'managers')[0]['managers'] \\\n            .split(\",\")\n        self.assertEqual(mgr_users, set(svr_mgr))\n\n    def test_default_oper(self):\n        \"\"\"\n        Check that default operator user is set on PTL setup\n        \"\"\"\n  
      svr_opr = self.server.status(SERVER, 'operators')[0].get('operators')\n        opr_usr = str(OPER_USER) + '@*'\n        self.assertEqual(opr_usr, svr_opr)\n\n    def test_add_mgr_opr_fail(self):\n        \"\"\"\n        Check that if unsetting managers/operators in add_mgrs_opers fails,\n        an error is logged. There was a bug where self.logger.error() was\n        used instead of cls.logger.error().\n        \"\"\"\n        self.server.stop()\n        with self.assertRaises(PbsStatusError) as e:\n            self.add_mgrs_opers()\n        self.assertIn('Connection refused', e.exception.msg[0])\n        self.server.start()\n"
  },
  {
    "path": "test/tests/selftest/pbs_param_dict.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 2022 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.selftest import *\nfrom ptl.utils.plugins.ptl_test_runner import PTLTestRunner\n\n\nclass TestParamDict(TestSelf):\n    \"\"\"\n    Test parsing the pbs_benchpress -p argument\n    \"\"\"\n\n    def test_gpd(self):\n        \"\"\"\n        Pass various -p param strings to __get_param_dictionary and check\n        that result is correct.\n        \"\"\"\n        def try_one(runner, param):\n            \"\"\"\n            Run get_param_dictionary to convert param string to dict\n            \"\"\"\n            runner.param = param\n            try:\n                # Note that we call private routine directly to avoid\n                # possible side effects.\n                result = runner._PTLTestRunner__get_param_dictionary()\n            except ValueError:\n                result = None\n            return result\n\n        def compare(old, new, expect_none=False):\n            \"\"\"\n            Compare new param dict to old\n            \"\"\"\n            if new is None and not expect_none:\n                return 'Unexpected parse error'\n            if expect_none and new is not None:\n                return 'Should have failed'\n            diffs = [(k, new[k]) for k in old if old[k] != new[k]]\n            return sorted(diffs)\n\n        def check_diffs(self, new, expected):\n            \"\"\"\n            Validate changes between 
baseline and new param dict\n            \"\"\"\n            if expected is None:\n                self.assertEqual(new, expected)\n                return\n            diffs = compare(self.base, new)\n            self.assertEqual(diffs, expected)\n\n        our_runner = PTLTestRunner()\n\n        # Create baseline dictionary\n        self.base = try_one(our_runner, '')\n\n        # Try simple key=value (boolean)\n        new = try_one(our_runner, 'mom_on_server=True')\n        check_diffs(self, new, [('mom_on_server', True)])\n\n        # Try one that generates a set of hostnames\n        new = try_one(our_runner, 'moms=foo')\n        check_diffs(self, new, [('moms', set(['foo']))])\n\n        # Test that unknown parameter is ignored\n        new = try_one(our_runner, 'nonsense')\n        check_diffs(self, new, [])\n\n        # Test more complicated host list\n        new = try_one(our_runner, 'comms=foo:bar@/path/to/bleem:baz')\n        check_diffs(self, new, [('comms', set(['foo', 'baz', 'bar']))])\n\n        # Test bad boolean arg\n        new = try_one(our_runner, 'mom_on_server=oops')\n        check_diffs(self, new, None)\n\n        # Test multiple options, one of which is default\n        new = try_one(our_runner,\n                      'server=foo2:bar,mom_on_server=y,comm_on_server=f')\n        expected = [('mom_on_server', True),\n                    ('servers', set(['foo2', 'bar']))]\n        check_diffs(self, new, expected)\n"
  },
  {
    "path": "test/tests/selftest/pbs_pbstestsuite.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.selftest import *\nfrom ptl.utils.pbs_snaputils import *\n\n\nclass TestPBSTestSuite(TestSelf):\n    \"\"\"\n    Contains tests for pbs_testsuite module's functionality\n    \"\"\"\n\n    def test_revert_pbsconf_onehost(self):\n        \"\"\"\n        Test the functionality of PBSTestSuite.revert_pbsconf()\n        for a single host type 1 installation\n        \"\"\"\n        pbs_conf_val = self.du.parse_pbs_config(self.server.hostname)\n        self.assertTrue(pbs_conf_val and len(pbs_conf_val) >= 1,\n                        \"Could not parse pbs.conf on host %s\" %\n                        (self.server.hostname))\n\n        # Since the setUp already ran, check that the start bits are turned on\n        self.assertEqual(pbs_conf_val[\"PBS_START_MOM\"], \"1\")\n        self.assertEqual(pbs_conf_val[\"PBS_START_SERVER\"], \"1\")\n        self.assertEqual(pbs_conf_val[\"PBS_START_SCHED\"], \"1\")\n        self.assertEqual(pbs_conf_val[\"PBS_START_COMM\"], \"1\")\n\n        self.server.pi.stop()\n\n        # Now, change pbs.conf to turn the sched off\n        pbs_conf_val[\"PBS_START_SCHED\"] = \"0\"\n        self.du.set_pbs_config(confs=pbs_conf_val)\n\n        # Start PBS again\n        self.server.pi.start()\n\n        # Verify that the scheduler didn't come up\n        self.assertFalse(self.scheduler.isUp())\n\n        # Now call 
revert_pbsconf()\n        self.revert_pbsconf()\n\n        # Verify that the scheduler came up and start bit is 1\n        self.assertTrue(self.scheduler.isUp())\n        pbs_conf_val = self.du.parse_pbs_config(self.server.hostname)\n        self.assertEqual(pbs_conf_val[\"PBS_START_SCHED\"], \"1\")\n\n    def test_revert_pbsconf_remotemom(self):\n        \"\"\"\n        Test the functionality of PBSTestSuite.revert_pbsconf()\n        with a remote mom setup\n        \"\"\"\n        remotemom = None\n        for mom in self.moms.values():\n            if not self.du.is_localhost(mom.hostname):\n                remotemom = mom\n                break\n        if remotemom is None:\n            self.skip_test(\"Test needs at least one remote Mom host,\"\n                           \" use -p moms=<hostname>\")\n\n        pbs_conf_val = self.du.parse_pbs_config(self.server.hostname)\n        self.assertTrue(pbs_conf_val and len(pbs_conf_val) >= 1,\n                        \"Could not parse pbs.conf on host %s\" %\n                        (self.server.hostname))\n\n        # Check that the start bits on server host are set correctly\n        self.assertEqual(pbs_conf_val[\"PBS_START_SERVER\"], \"1\")\n        self.assertEqual(pbs_conf_val[\"PBS_START_SCHED\"], \"1\")\n        self.assertEqual(pbs_conf_val[\"PBS_START_COMM\"], \"1\")\n        if self.server.hostname in self.moms:\n            self.assertEqual(pbs_conf_val[\"PBS_START_MOM\"], \"1\")\n        else:\n            self.assertEqual(pbs_conf_val[\"PBS_START_MOM\"], \"0\")\n\n        # Check that the remote mom's pbs.conf has mom start bit on\n        pbs_conf_val = self.du.parse_pbs_config(remotemom.hostname)\n        self.assertEqual(pbs_conf_val[\"PBS_START_MOM\"], \"1\")\n\n        # Now set it to 0 and restart the mom\n        remotemom.pi.stop(remotemom.hostname)\n        pbs_conf_val[\"PBS_START_MOM\"] = \"0\"\n        self.du.set_pbs_config(remotemom.hostname, confs=pbs_conf_val)\n        
remotemom.pi.start(remotemom.hostname)\n\n        # Confirm that the mom is down\n        self.assertFalse(remotemom.isUp())\n\n        # Now call revert_pbsconf()\n        self.revert_pbsconf()\n\n        # Confirm that the mom came up and start bit is 1\n        self.assertTrue(remotemom.isUp())\n        pbs_conf_val = self.du.parse_pbs_config(remotemom.hostname)\n        self.assertEqual(pbs_conf_val[\"PBS_START_MOM\"], \"1\")\n\n    def test_revert_pbsconf_corelimit(self):\n        \"\"\"\n        Test the functionality of PBSTestSuite.revert_pbsconf() when\n        PBS_CORE_LIMIT is set to a value other than the default\n        \"\"\"\n        pbs_conf_val = self.du.parse_pbs_config(self.server.hostname)\n        self.assertTrue(pbs_conf_val and len(pbs_conf_val) >= 1,\n                        \"Could not parse pbs.conf on host %s\" %\n                        (self.server.hostname))\n\n        # Since the setUp already ran, check that PBS_CORE_LIMIT is set to\n        # unlimited\n        self.assertEqual(pbs_conf_val[\"PBS_CORE_LIMIT\"], \"unlimited\")\n\n        # Now, set the core limit to 0 and restart PBS\n        self.server.pi.stop()\n        pbs_conf_val[\"PBS_CORE_LIMIT\"] = \"0\"\n        self.du.set_pbs_config(confs=pbs_conf_val)\n        self.server.pi.start()\n\n        # First, check that there's no existing core file in mom_priv\n        mom_priv_path = os.path.join(self.server.pbs_conf[\"PBS_HOME\"],\n                                     \"mom_priv\")\n        mom_priv_filenames = self.du.listdir(self.server.hostname,\n                                             mom_priv_path, sudo=True,\n                                             fullpath=False)\n        for filename in mom_priv_filenames:\n            if filename.startswith(\"core\"):\n                # Found a core file, delete it\n                corepath = os.path.join(mom_priv_path, filename)\n                self.du.rm(self.server.hostname, corepath, sudo=True,\n                      
     force=True)\n\n        # Send SIGSEGV to pbs_mom\n        self.assertTrue(self.mom.isUp())\n        self.mom.signal(\"-SEGV\")\n        for _ in range(20):\n            ret = self.mom.isUp(max_attempts=1)\n            if not ret:\n                break\n            time.sleep(1)\n        self.assertFalse(ret, \"Mom was expected to go down but it didn't\")\n\n        # Confirm that no core file was generated\n        mom_priv_filenames = self.du.listdir(self.server.hostname,\n                                             mom_priv_path, sudo=True,\n                                             fullpath=False)\n        corefound = False\n        for filename in mom_priv_filenames:\n            if filename.startswith(\"core\"):\n                corefound = True\n                break\n        self.assertFalse(corefound, \"mom unexpectedly dumped core\")\n\n        # Now, call self.revert_pbsconf()\n        self.revert_pbsconf()\n\n        # Confirm that PBS_CORE_LIMIT was reverted\n        pbs_conf_val = self.du.parse_pbs_config(self.server.hostname)\n        self.assertEqual(pbs_conf_val[\"PBS_CORE_LIMIT\"], \"unlimited\")\n\n        # Send another SIGSEGV to pbs_mom\n        self.assertTrue(self.mom.isUp())\n        self.mom.signal(\"-SEGV\")\n        for _ in range(20):\n            ret = self.mom.isUp(max_attempts=1)\n            if not ret:\n                break\n            time.sleep(1)\n        self.assertFalse(ret, \"Mom was expected to go down but it didn't\")\n\n        # Confirm that a core file was generated this time\n        mom_priv_filenames = self.du.listdir(self.server.hostname,\n                                             mom_priv_path, sudo=True,\n                                             fullpath=False)\n        corefound = False\n        for filename in mom_priv_filenames:\n            if filename.startswith(\"core\"):\n                corefound = True\n                break\n        self.assertTrue(corefound,\n                        
\"mom was expected to dump core but it didn't\")\n\n    def test_revert_pbsconf_extra_vars(self):\n        \"\"\"\n        Test the functionality of PBSTestSuite.revert_pbsconf() when\n        there are more pbs.conf variables than the default\n        \"\"\"\n        pbs_conf_val = self.du.parse_pbs_config(self.server.hostname)\n        self.assertTrue(pbs_conf_val and len(pbs_conf_val) >= 1,\n                        \"Could not parse pbs.conf on host %s\" %\n                        (self.server.hostname))\n\n        # Set a non-default pbs.conf variable, say\n        # PBS_LOCALLOG, and restart PBS\n        self.server.pi.stop()\n        pbs_conf_val[\"PBS_LOCALLOG\"] = \"1\"\n        self.du.set_pbs_config(confs=pbs_conf_val)\n        self.server.pi.start()\n\n        # Confirm that the pbs.conf variable is set\n        pbs_conf_val = self.du.parse_pbs_config(self.server.hostname)\n        self.assertEqual(pbs_conf_val[\"PBS_LOCALLOG\"], \"1\")\n\n        # Now, call self.revert_pbsconf()\n        self.revert_pbsconf()\n\n        # Confirm that the value gets removed from the list as it is not\n        # a default setting\n        pbs_conf_val = self.du.parse_pbs_config(self.server.hostname)\n        self.assertFalse(\"PBS_LOCALLOG\" in pbs_conf_val)\n\n    def test_revert_pbsconf_fewer_vars(self):\n        \"\"\"\n        Test the functionality of PBSTestSuite.revert_pbsconf() when\n        there are fewer pbs.conf variables than the default\n        \"\"\"\n        pbs_conf_val = self.du.parse_pbs_config(self.server.hostname)\n        self.assertTrue(pbs_conf_val and len(pbs_conf_val) >= 1,\n                        \"Could not parse pbs.conf on host %s\" %\n                        (self.server.hostname))\n\n        # Remove a default pbs.conf variable, say PBS_CORE_LIMIT,\n        # and restart PBS\n        self.server.pi.stop()\n        del pbs_conf_val[\"PBS_CORE_LIMIT\"]\n        self.du.set_pbs_config(confs=pbs_conf_val, append=False)\n
        self.server.pi.start()\n\n        # Confirm that the pbs.conf variable is gone\n        pbs_conf_val = self.du.parse_pbs_config(self.server.hostname)\n        self.assertNotIn(\"PBS_CORE_LIMIT\", pbs_conf_val)\n\n        # Now, call self.revert_pbsconf()\n        self.revert_pbsconf()\n\n        # Confirm that the variable was set again\n        pbs_conf_val = self.du.parse_pbs_config(self.server.hostname)\n        self.assertIn(\"PBS_CORE_LIMIT\", pbs_conf_val)\n\n    def test_revert_moms_default_conf(self):\n        \"\"\"\n        Test if PBSTestSuite.revert_moms() reverts the mom configuration\n        settings to defaults\n        \"\"\"\n        c1 = self.mom.parse_config()\n        # Save a copy of default config to check it was reverted\n        # correctly later\n        c2 = c1.copy()\n        a = {'$prologalarm': '280'}\n        self.mom.add_config(a)\n        c1.update(a)\n        self.assertEqual(self.mom.parse_config(), c1)\n        self.mom.revert_to_defaults()\n        # Make sure the default config is back\n        self.assertEqual(self.mom.parse_config(), c2)\n\n    def test_revert_conf_highres_logging(self):\n        \"\"\"\n        Test that PBS_LOG_HIGHRES_TIMESTAMP is set to 1 by default and\n        that, if its value is changed in a test, it is reverted to 1 by\n        revert_pbsconf()\n        \"\"\"\n        highres_val = self.du.parse_pbs_config()\\\n            .get(\"PBS_LOG_HIGHRES_TIMESTAMP\")\n        self.assertEqual(\"1\", highres_val)\n\n        a = {'PBS_LOG_HIGHRES_TIMESTAMP': \"0\"}\n        self.du.set_pbs_config(confs=a, append=True)\n        highres_val = self.du.parse_pbs_config()\\\n            .get(\"PBS_LOG_HIGHRES_TIMESTAMP\")\n        self.assertEqual(\"0\", highres_val)\n\n        self.revert_pbsconf()\n        highres_val = self.du.parse_pbs_config()\\\n            .get(\"PBS_LOG_HIGHRES_TIMESTAMP\")\n        self.assertEqual(\"1\", highres_val)\n"
  },
  {
    "path": "test/tests/selftest/pbs_requirements_decorator.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.selftest import *\n\n\n@requirements(num_servers=1, num_comms=1, min_mom_ram=1,\n              min_mom_disk=5, min_server_ram=1, min_server_disk=5)\nclass TestRequirementsDecorator(TestSelf):\n\n    \"\"\"\n    This test suite verifies the functionality of the requirements\n    decorator. When run on a single node without any hostnames passed\n    via -p or a param file, it tests the run and skip functionality\n    of this decorator.\n\n    \"\"\"\n    @requirements(num_servers=1, num_comms=1, min_mom_ram=2,\n                  min_mom_disk=5, min_server_ram=2, min_server_disk=5)\n    def test_tc_run(self):\n        \"\"\"\n        Test to verify that a test runs when requirements are satisfied;\n        test suite requirements are overridden by test case requirements\n        \"\"\"\n        j = Job(TEST_USER)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n        self.server.expect(SERVER, {'server_state': 'Active'})\n        requirements_set = {\n            'num_servers': 1,\n            'num_comms': 1\n        }\n        ds = getattr(self, REQUIREMENTS_KEY, {})\n        if ds == requirements_set:\n            raise self.failureException(\"Requirements not as expected\")\n\n    @requirements(num_servers=3)\n    def test_tc_skip(self):\n        \"\"\"\n        Test to verify test skip when requirements are not satisfied\n
        due to num_servers\n        \"\"\"\n        self.server.expect(SERVER, {'server_state': 'Active'})\n\n    def test_ts_skip(self):\n        \"\"\"\n        Test to verify test skip when requirements are not satisfied\n        at test suite level\n        \"\"\"\n        self.server.expect(SERVER, {'server_state': 'Active'})\n\n    @requirements(num_servers=1, num_comms=1, no_mom_on_server=True)\n    def test_skip_no_server_on_mom(self):\n        \"\"\"\n        Test to verify test skip when requirements are not satisfied\n        due to no_mom_on_server flag\n        \"\"\"\n        self.server.expect(SERVER, {'server_state': 'Active'})\n\n    @requirements(num_servers=1, num_comms=1, no_comm_on_mom=True)\n    def test_skip_no_comm_on_mom(self):\n        \"\"\"\n        Test to verify test skip when requirements are not satisfied\n        due to no_comm_on_mom flag set to True\n        \"\"\"\n        self.server.expect(SERVER, {'server_state': 'Active'})\n\n    @requirements(num_comms=2, num_moms=2, no_comm_on_server=True)\n    def test_skip_no_comm_on_server(self):\n        \"\"\"\n        Test to verify test skip when requirements are not satisfied\n        due to no_comm_on_server flag set to True\n        \"\"\"\n        self.server.expect(SERVER, {'server_state': 'Active'})\n"
  },
  {
    "path": "test/tests/selftest/pbs_resvid_test.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.selftest import *\n\n\nclass TestPTLConvertsResvIDtoid(TestSelf):\n    \"\"\"\n    This test suite tests that PTL does not convert\n    'Resv ID' to 'id' when querying a reservation\n    \"\"\"\n\n    def test_attrib_ResvID(self):\n        \"\"\"\n        Test that the reservation attribute 'Resv ID' in pbs_rstat output\n        is not replaced with 'id' by the PTL framework.\n        \"\"\"\n        r = Reservation(TEST_USER)\n        now = int(time.time())\n        a = {'Resource_List.select': '1:ncpus=1',\n             'reserve_start': now + 10,\n             'reserve_end': now + 110}\n        r.set_attributes(a)\n        rid = self.server.submit(r)\n        val = self.server.status(RESV, id=rid)\n        self.logger.info(\"Verifying attribute in pbs_rstat -f rid\")\n        self.assertIn('id', val[0], msg=\"Failed to get expected attrib id\")\n        self.logger.info(\"Got expected attribute id\")\n        self.logger.info(\"Look for attribute Resv ID\")\n        self.assertIn(\n            'Resv ID', val[0], msg=\"Failed to get expected attrib Resv ID\")\n        self.logger.info(\"Got expected attribute Resv ID\")\n"
  },
  {
    "path": "test/tests/selftest/pbs_test_create_vnodes.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.selftest import *\n\n\n@requirements(num_moms=2)\nclass Test_create_vnodes(TestSelf):\n    \"\"\"\n    Tests for Server().create_vnodes()\n    \"\"\"\n\n    def test_delall_multi(self):\n        # Skip test if the number of MoMs provided is not exactly two\n        if len(self.moms) != 2:\n            self.skipTest(\"test requires exactly two MoMs as input, \"\n                          \"use -p moms=<mom1:mom2>\")\n        a = {'resources_available.ncpus': 4}\n        self.moms.values()[0].create_vnodes(a, 4)\n        self.moms.values()[1].create_vnodes(a, 5,\n                                            usenatvnode=True, delall=False)\n        stat = self.server.status(NODE)\n        self.assertEqual(len(stat), 10)\n\n    def test_delall(self):\n        a = {'resources_available.ncpus': 4}\n        self.mom.create_vnodes(a, 4, vname='first')\n        self.mom.create_vnodes(a, 5, usenatvnode=True,\n                               delall=False, vname='second')\n        self.mom.create_vnodes(a, 5, delall=False, vname='third')\n        stat = self.server.status(NODE)\n        self.assertEqual(len(stat), 14)\n"
  },
  {
    "path": "test/tests/selftest/pbs_test_revert_site_hooks.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.selftest import *\n\n\nclass TestRevertSiteHook(TestSelf):\n    \"\"\"\n    Test suite to verify default site hooks exist after\n    revert_to_defaults() is called\n    \"\"\"\n\n    def test_verify_cgroups_hook_exists_after_setup(self):\n        \"\"\"\n        Test if cgroups hook exists after revert_to_defaults() is called\n        \"\"\"\n        hooks = [h['id'] for h in self.server.status(HOOK)]\n        if 'pbs_cgroups' not in hooks:\n            self.fail(\"Default cgroups hook doesn't exist\")\n"
  },
  {
    "path": "test/tests/selftest/pbs_test_revert_to_defaults.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.selftest import *\nfrom datetime import datetime, timedelta\n\n\nclass TestPTLRevertToDefault(TestSelf):\n    \"\"\"\n    This test suite tests PTL's revert to default functionality\n    \"\"\"\n\n    svr_dflt_attr = {'scheduling': 'True',\n                     'query_other_jobs': 'True',\n                     'scheduler_iteration': '600',\n                     'resources_default.ncpus': '1',\n                     'log_events': '511',\n                     'mail_from': 'adm',\n                     'pbs_license_linger_time': '31536000',\n                     'pbs_license_min': '0',\n                     'pbs_license_max': '2147483647',\n                     'eligible_time_enable': 'False',\n                     'max_concurrent_provision': '5',\n                     'resv_enable': 'True',\n                     'max_array_size': '10000',\n                     }\n\n    def test_svr_revert_to_default(self):\n        \"\"\"\n        This test case tests the server attributes after revert_to_default\n        is called.\n        \"\"\"\n        modified_attr = {'scheduling': 'FALSE',\n                         'query_other_jobs': 'FALSE',\n                         'scheduler_iteration': '100',\n                         'resources_default.ncpus': '12',\n                         'log_events': '2047',\n                         'mail_from': 'user1',\n       
                  'pbs_license_linger_time': '6000',\n                         'pbs_license_min': '10',\n                         'pbs_license_max': '3647',\n                         'eligible_time_enable': 'TRUE',\n                         'max_concurrent_provision': '15',\n                         'resv_enable': 'FALSE',\n                         'max_array_size': '900',\n                         }\n        self.server.manager(MGR_CMD_SET, SERVER, modified_attr)\n        self.server.revert_to_defaults()\n\n        self.server.expect(SERVER, self.svr_dflt_attr,\n                           attrop=PTL_AND, max_attempts=20)\n\n    def test_sched_revert_to_defaults_dedtime(self):\n        \"\"\"\n        Test that revert_to_defaults() reverts the dedicated time file\n        \"\"\"\n        dt_start = datetime.now() + timedelta(seconds=3600)\n        dt_start_str = dt_start.strftime(\"%m/%d/%Y %H:%M\")\n        dt_end = dt_start + timedelta(seconds=3600)\n        dt_end_str = dt_end.strftime(\"%m/%d/%Y %H:%M\")\n        self.scheduler.add_dedicated_time(dt_start_str + ' ' + dt_end_str)\n\n        J = Job(attrs={'Resource_List.walltime': '7200'})\n        jid = self.server.submit(J)\n\n        a = {'job_state': 'Q',\n             'comment': 'Not Running: Job would cross dedicated time boundary'}\n        self.server.expect(JOB, a, id=jid)\n\n        self.scheduler.revert_to_defaults()\n\n        self.server.manager(MGR_CMD_SET, SERVER, {'scheduling': True})\n\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n"
  },
  {
    "path": "test/tests/selftest/pbs_testlogutils.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.selftest import *\nfrom ptl.utils.pbs_logutils import PBSLogUtils\n\n\nclass TestLogUtils(TestSelf):\n    \"\"\"\n    Test PBSLogUtils functionality in PTL\n    \"\"\"\n\n    def switch_microsecondlogging(self, hostname=None, highrestimestamp=1):\n        \"\"\"\n        Set microsecond logging in pbs.conf\n        \"\"\"\n        if hostname is None:\n            hostname = self.server.hostname\n        a = {'PBS_LOG_HIGHRES_TIMESTAMP': highrestimestamp}\n        self.du.set_pbs_config(hostname=hostname, confs=a, append=True)\n        PBSInitServices().restart()\n        self.assertTrue(self.server.isUp(), 'Failed to restart PBS Daemons')\n\n    def test_log_match_microsec_logging(self):\n        \"\"\"\n        Test that log_match will work when microsecond logging\n        is turned on and then off\n        \"\"\"\n        # Turn on microsecond logging and test log_match()\n        self.switch_microsecondlogging(highrestimestamp=1)\n        a = {'Resource_List.ncpus': 1}\n        J1 = Job(TEST_USER, attrs=a)\n        st = time.time()\n        jid1 = self.server.submit(J1)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid1)\n        msg_str = \"Job;\" + jid1 + \";Job Run at request of Scheduler\"\n        et = time.time()\n        self.server.log_match(msg_str, starttime=st, endtime=et)\n
        self.server.deljob(jid1, wait=True)\n\n        # Turn off microsecond logging and test log_match()\n        st = int(time.time())\n        self.switch_microsecondlogging(highrestimestamp=0)\n        a = {'Resource_List.ncpus': 1}\n        J2 = Job(TEST_USER, attrs=a)\n        jid2 = self.server.submit(J2)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid2)\n        et = int(time.time())\n        msg_str = \"Job;\" + jid2 + \";Job Run at request of Scheduler\"\n        self.server.log_match(msg_str, starttime=st, endtime=et)\n\n    def test_get_timestamps(self):\n        \"\"\"\n        Test that PBSLogUtils.get_timestamps() works correctly\n        \"\"\"\n        # Submit a 1-cpu job; this will create some accounting records\n        a = {'Resource_List.ncpus': 1}\n        j = Job(attrs=a)\n        jid = self.server.submit(j)\n        self.server.expect(JOB, {'job_state': 'R'}, id=jid)\n\n        acctpath = os.path.join(self.server.pbs_conf[\"PBS_HOME\"],\n                                \"server_priv\", \"accounting\")\n        self.assertTrue(self.du.isdir(self.server.hostname, acctpath,\n                                      sudo=True))\n        logs = self.du.listdir(self.server.hostname, acctpath, sudo=True)\n        log_today = sorted(logs, reverse=True)[0]\n        self.assertTrue(self.du.isfile(self.server.hostname, log_today,\n                                       sudo=True))\n\n        # Call LogUtils.get_timestamps() for today's accounting log file\n        lu = PBSLogUtils()\n        tm = lu.get_timestamps(log_today, self.server.hostname, num=1,\n                               sudo=True)\n        self.assertIsNotNone(tm)\n        self.assertEqual(len(tm), 1)\n"
  },
  {
    "path": "test/tests/selftest/pbs_testparams_decorator.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom tests.selftest import *\n\n\nclass Test_TestparamsDecorator(TestSelf):\n\n    \"\"\"\n    This test suite verifies the functionality of the testparams\n
    decorator. Test-specific parameters can be modified via\n    -p or a param file.\n\n    \"\"\"\n    @testparams(num_jobs=100, scheduling=False)\n    def test_testparams(self):\n        \"\"\"\n        Test to submit a number of jobs and expect the jobs to be\n        in the running or queued state as per the tunable scheduling\n        parameter.\n        \"\"\"\n        scheduling = self.conf[\"Test_TestparamsDecorator.scheduling\"]\n        num_jobs = self.conf[\"Test_TestparamsDecorator.num_jobs\"]\n        a = {'resources_available.ncpus': num_jobs}\n        self.server.manager(MGR_CMD_SET, NODE, a, id=self.mom.shortname)\n        a = {'scheduling': scheduling}\n        self.server.manager(MGR_CMD_SET, SERVER, a)\n        j = Job(TEST_USER)\n        for _ in range(num_jobs):\n            self.server.submit(j)\n        if scheduling:\n            self.server.expect(JOB, {'job_state=R': num_jobs})\n        else:\n            self.server.expect(JOB, {'job_state=Q': num_jobs})\n"
  },
  {
    "path": "test/tests/upgrades/__init__.py",
    "content": "# coding: utf-8\n\n# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n\nfrom ptl.utils.pbs_testsuite import *\n\n\nclass TestUpgrades(PBSTestSuite):\n    \"\"\"\n    Base test suite for Upgrades tests\n    \"\"\"\n    pass\n"
  },
  {
    "path": "valgrind.supp",
    "content": "# Copyright (C) 1994-2021 Altair Engineering, Inc.\n# For more information, contact Altair at www.altair.com.\n#\n# This file is part of both the OpenPBS software (\"OpenPBS\")\n# and the PBS Professional (\"PBS Pro\") software.\n#\n# Open Source License Information:\n#\n# OpenPBS is free software. You can redistribute it and/or modify it under\n# the terms of the GNU Affero General Public License as published by the\n# Free Software Foundation, either version 3 of the License, or (at your\n# option) any later version.\n#\n# OpenPBS is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or\n# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU Affero General Public\n# License for more details.\n#\n# You should have received a copy of the GNU Affero General Public License\n# along with this program.  If not, see <http://www.gnu.org/licenses/>.\n#\n# Commercial License Information:\n#\n# PBS Pro is commercially licensed software that shares a common core with\n# the OpenPBS software.  
For a copy of the commercial license terms and\n# conditions, go to: (http://www.pbspro.com/agreement.html) or contact the\n# Altair Legal Department.\n#\n# Altair's dual-license business model allows companies, individuals, and\n# organizations to create proprietary derivative works of OpenPBS and\n# distribute them - whether embedded or bundled with other software -\n# under a commercial license agreement.\n#\n# Use of Altair's trademarks, including but not limited to \"PBS™\",\n# \"OpenPBS®\", \"PBS Professional®\", and \"PBS Pro™\" and Altair's logos is\n# subject to Altair's trademark licensing policies.\n\n# These are unfreed memory (still reachable blocks) detected in python\n\n{\n   ADDRESS_IN_RANGE/Invalid read of size 4\n   Memcheck:Addr4\n   fun:Py_ADDRESS_IN_RANGE\n}\n\n{\n   ADDRESS_IN_RANGE/Invalid read of size 4\n   Memcheck:Value4\n   fun:Py_ADDRESS_IN_RANGE\n}\n\n{\n   ADDRESS_IN_RANGE/Invalid read of size 8 (x86_64 aka amd64)\n   Memcheck:Value8\n   fun:Py_ADDRESS_IN_RANGE\n}\n\n{\n   ADDRESS_IN_RANGE/Conditional jump or move depends on uninitialised value\n   Memcheck:Cond\n   fun:Py_ADDRESS_IN_RANGE\n}\n\n#\n# Leaks (including possible leaks)\n#    Hmmm, I wonder if this masks some real leaks.  I think it does.\n#    Will need to fix that.\n#\n\n{\n   Suppress leaking the GIL.  Happens once per process, see comment in ceval.c.\n   Memcheck:Leak\n   fun:malloc\n   fun:PyThread_allocate_lock\n   fun:PyEval_InitThreads\n}\n\n{\n   Suppress leaking the GIL after a fork.\n   Memcheck:Leak\n   fun:malloc\n   fun:PyThread_allocate_lock\n   fun:PyEval_ReInitThreads\n}\n\n{\n   Suppress leaking the autoTLSkey.  
This looks like it shouldn't leak though.\n   Memcheck:Leak\n   fun:malloc\n   fun:PyThread_create_key\n   fun:_PyGILState_Init\n   fun:Py_InitializeEx\n   fun:Py_Main\n}\n\n{\n   Hmmm, is this a real leak or like the GIL?\n   Memcheck:Leak\n   fun:malloc\n   fun:PyThread_ReInitTLS\n}\n\n{\n   Handle PyMalloc confusing valgrind (possibly leaked)\n   Memcheck:Leak\n   fun:realloc\n   fun:_PyObject_GC_Resize\n#   fun:COMMENT_THIS_LINE_TO_DISABLE_LEAK_WARNING\n}\n\n{\n   Handle PyMalloc confusing valgrind (possibly leaked)\n   Memcheck:Leak\n   fun:malloc\n   fun:_PyObject_GC_New\n#   fun:COMMENT_THIS_LINE_TO_DISABLE_LEAK_WARNING\n}\n\n{\n   Handle PyMalloc confusing valgrind (possibly leaked)\n   Memcheck:Leak\n   fun:malloc\n   fun:_PyObject_GC_NewVar\n#   fun:COMMENT_THIS_LINE_TO_DISABLE_LEAK_WARNING\n}\n\n#\n# Non-python specific leaks\n#\n\n{\n   Handle pthread issue (possibly leaked)\n   Memcheck:Leak\n   fun:calloc\n   fun:allocate_dtv\n   fun:_dl_allocate_tls_storage\n   fun:_dl_allocate_tls\n}\n\n{\n   Handle pthread issue (possibly leaked)\n   Memcheck:Leak\n   fun:memalign\n   fun:_dl_allocate_tls_storage\n   fun:_dl_allocate_tls\n}\n\n###{\n###   ADDRESS_IN_RANGE/Invalid read of size 4\n###   Memcheck:Addr4\n###   fun:PyObject_Free\n###}\n###\n###{\n###   ADDRESS_IN_RANGE/Invalid read of size 4\n###   Memcheck:Value4\n###   fun:PyObject_Free\n###}\n###\n###{\n###   ADDRESS_IN_RANGE/Use of uninitialised value of size 8\n###   Memcheck:Addr8\n###   fun:PyObject_Free\n###}\n###\n###{\n###   ADDRESS_IN_RANGE/Use of uninitialised value of size 8\n###   Memcheck:Value8\n###   fun:PyObject_Free\n###}\n###\n###{\n###   ADDRESS_IN_RANGE/Conditional jump or move depends on uninitialised value\n###   Memcheck:Cond\n###   fun:PyObject_Free\n###}\n\n###{\n###   ADDRESS_IN_RANGE/Invalid read of size 4\n###   Memcheck:Addr4\n###   fun:PyObject_Realloc\n###}\n###\n###{\n###   ADDRESS_IN_RANGE/Invalid read of size 4\n###   Memcheck:Value4\n###   
fun:PyObject_Realloc\n###}\n###\n###{\n###   ADDRESS_IN_RANGE/Use of uninitialised value of size 8\n###   Memcheck:Addr8\n###   fun:PyObject_Realloc\n###}\n###\n###{\n###   ADDRESS_IN_RANGE/Use of uninitialised value of size 8\n###   Memcheck:Value8\n###   fun:PyObject_Realloc\n###}\n###\n###{\n###   ADDRESS_IN_RANGE/Conditional jump or move depends on uninitialised value\n###   Memcheck:Cond\n###   fun:PyObject_Realloc\n###}\n\n###\n### All the suppressions below are for errors that occur within libraries\n### that Python uses.  The problems to not appear to be related to Python's\n### use of the libraries.\n###\n\n{\n   Generic ubuntu ld problems\n   Memcheck:Addr8\n   obj:/lib/ld-2.4.so\n   obj:/lib/ld-2.4.so\n   obj:/lib/ld-2.4.so\n   obj:/lib/ld-2.4.so\n}\n\n{\n   Generic gentoo ld problems\n   Memcheck:Cond\n   obj:/lib/ld-2.3.4.so\n   obj:/lib/ld-2.3.4.so\n   obj:/lib/ld-2.3.4.so\n   obj:/lib/ld-2.3.4.so\n}\n\n{\n   DBM problems, see test_dbm\n   Memcheck:Param\n   write(buf)\n   fun:write\n   obj:/usr/lib/libdb1.so.2\n   obj:/usr/lib/libdb1.so.2\n   obj:/usr/lib/libdb1.so.2\n   obj:/usr/lib/libdb1.so.2\n   fun:dbm_close\n}\n\n{\n   DBM problems, see test_dbm\n   Memcheck:Value8\n   fun:memmove\n   obj:/usr/lib/libdb1.so.2\n   obj:/usr/lib/libdb1.so.2\n   obj:/usr/lib/libdb1.so.2\n   obj:/usr/lib/libdb1.so.2\n   fun:dbm_store\n   fun:dbm_ass_sub\n}\n\n{\n   DBM problems, see test_dbm\n   Memcheck:Cond\n   obj:/usr/lib/libdb1.so.2\n   obj:/usr/lib/libdb1.so.2\n   obj:/usr/lib/libdb1.so.2\n   fun:dbm_store\n   fun:dbm_ass_sub\n}\n\n{\n   DBM problems, see test_dbm\n   Memcheck:Cond\n   fun:memmove\n   obj:/usr/lib/libdb1.so.2\n   obj:/usr/lib/libdb1.so.2\n   obj:/usr/lib/libdb1.so.2\n   obj:/usr/lib/libdb1.so.2\n   fun:dbm_store\n   fun:dbm_ass_sub\n}\n\n{\n   GDBM problems, see test_gdbm\n   Memcheck:Param\n   write(buf)\n   fun:write\n   fun:gdbm_open\n\n}\n\n{\n   ZLIB problems, see test_gzip\n   Memcheck:Cond\n   obj:/lib/libz.so.1.2.3\n   
obj:/lib/libz.so.1.2.3\n   fun:deflate\n}\n\n{\n   Avoid problems w/readline doing a putenv and leaking on exit\n   Memcheck:Leak\n   fun:malloc\n   fun:xmalloc\n   fun:sh_set_lines_and_columns\n   fun:_rl_get_screen_size\n   fun:_rl_init_terminal_io\n   obj:/lib/libreadline.so.4.3\n   fun:rl_initialize\n}\n\n###\n### These occur from somewhere within the SSL, when running\n###  test_socket_ssl.  They are too general to leave on by default.\n###\n###{\n###   somewhere in SSL stuff\n###   Memcheck:Cond\n###   fun:memset\n###}\n###{\n###   somewhere in SSL stuff\n###   Memcheck:Value4\n###   fun:memset\n###}\n###\n###{\n###   somewhere in SSL stuff\n###   Memcheck:Cond\n###   fun:MD5_Update\n###}\n###\n###{\n###   somewhere in SSL stuff\n###   Memcheck:Value4\n###   fun:MD5_Update\n###}\n\n#\n# All of these problems come from using test_socket_ssl\n#\n{\n   from test_socket_ssl\n   Memcheck:Cond\n   fun:BN_bin2bn\n}\n\n{\n   from test_socket_ssl\n   Memcheck:Cond\n   fun:BN_num_bits_word\n}\n\n{\n   from test_socket_ssl\n   Memcheck:Value4\n   fun:BN_num_bits_word\n}\n\n{\n   from test_socket_ssl\n   Memcheck:Cond\n   fun:BN_mod_exp_mont_word\n}\n\n{\n   from test_socket_ssl\n   Memcheck:Cond\n   fun:BN_mod_exp_mont\n}\n\n{\n   from test_socket_ssl\n   Memcheck:Param\n   write(buf)\n   fun:write\n   obj:/usr/lib/libcrypto.so.0.9.7\n}\n\n{\n   from test_socket_ssl\n   Memcheck:Cond\n   fun:RSA_verify\n}\n\n{\n   from test_socket_ssl\n   Memcheck:Value4\n   fun:RSA_verify\n}\n\n{\n   from test_socket_ssl\n   Memcheck:Value4\n   fun:DES_set_key_unchecked\n}\n\n{\n   from test_socket_ssl\n   Memcheck:Value4\n   fun:DES_encrypt2\n}\n\n{\n   from test_socket_ssl\n   Memcheck:Cond\n   obj:/usr/lib/libssl.so.0.9.7\n}\n\n{\n   from test_socket_ssl\n   Memcheck:Value4\n   obj:/usr/lib/libssl.so.0.9.7\n}\n\n{\n   from test_socket_ssl\n   Memcheck:Cond\n   fun:BUF_MEM_grow_clean\n}\n\n{\n   from test_socket_ssl\n   Memcheck:Cond\n   fun:memcpy\n   fun:ssl3_read_bytes\n}\n\n{\n   
from test_socket_ssl\n   Memcheck:Cond\n   fun:SHA1_Update\n}\n\n{\n   from test_socket_ssl\n   Memcheck:Value4\n   fun:SHA1_Update\n}\n\n{\n   From PBS (TPP layer) - suppress epoll_pwait() glibc bug\n   Memcheck:Param\n   epoll_pwait(sigmask)\n   fun:epoll_pwait\n   fun:tpp_em_pwait\n   fun:tpp_em_wait\n   fun:work\n   fun:start_thread\n   fun:clone\n}\n{\n   From PBS (TPP layer) - suppress warning about uninitialized bytes in sendto\n   Memcheck:Param\n   socketcall.sendto(msg)\n   fun:send\n   fun:send_data\n   fun:handle_cmd\n   fun:work\n   fun:start_thread\n   fun:clone\n}\n{\n   From PBS (server) - Suppress intentional unfreed memory for pbs_db_get_svr_id\n   Memcheck:Leak\n   match-leak-kinds: reachable\n   fun:malloc\n   fun:strdup\n   fun:pbs_db_get_svr_id\n   fun:chk_and_update_db_svrhost\n   fun:main\n}\n{\n   From PBS (server) - Suppress intentional unfreed memory from hook_recov\n   Memcheck:Leak\n   fun:malloc\n   fun:strdup\n   fun:hook_recov\n   fun:pbsd_init\n   fun:main\n}\n{\n   From PBS (server) - Suppress intentional unfreed memory from loading hook script at hook_recov\n   Memcheck:Leak\n   fun:malloc\n   fun:pbs_python_ext_alloc_python_script\n   fun:hook_recov\n   fun:pbsd_init\n   fun:main\n}\n{\n   From PBS (server) - Suppress intentional unfreed memory from loading hook script at hook_recov\n   Memcheck:Leak\n   fun:malloc\n   fun:strdup\n   fun:pbs_python_ext_alloc_python_script\n   fun:hook_recov\n   fun:pbsd_init\n   fun:main\n}\n{\n   From PBS (server) - Suppress intentional unfreed memory from allocating hook data at hook_recov\n   Memcheck:Leak\n   fun:malloc\n   fun:hook_alloc\n   fun:hook_recov\n   fun:pbsd_init\n   fun:main\n}\n{\n   From PBS (all daemons) - Suppress memory allocated for log mutex\n   Memcheck:Leak\n   fun:calloc\n   fun:log_mutex_lock\n   fun:log_record\n   fun:*\n}\n{\n   From PBS (mom) - Suppress intentional unfreed memory from python_script_alloc() inside req_copy_hookfile() that is tracked globally in 
svr_allhooks.\n   Memcheck:Leak\n   fun:malloc\n   ...\n   fun:python_script_alloc\n   ...\n   fun:req_copy_hookfile\n   fun:is_request\n   fun:do_tpp\n   fun:tpp_request\n   fun:wait_request\n   fun:main\n}\n{\n   From PBS (mom) - Suppress intentional unfreed memory from hook_recov() inside req_copy_hookfile() that is tracked globally in svr_allhooks.\n   Memcheck:Leak\n   fun:malloc\n   ...\n   fun:hook_recov\n   fun:req_copy_hookfile\n   fun:is_request\n   fun:do_tpp\n   fun:tpp_request\n   fun:wait_request\n   fun:main\n}\n{\n   From PBS (server) - Suppress intentional unfreed memory from pbs_python_populate_attributes_to_python_class() that is tracked globally in pbs_resource_value_list.\n   Memcheck:Leak\n   fun:malloc\n   fun:pbs_python_populate_attributes_to_python_class\n   fun:*\n}\n{\n   From PBS (server) - Suppress intentional unfreed memory from pbs_python_populate_attributes_to_python_class() that is tracked and freed in a local pbs_list_head.\n   Memcheck:Leak\n   fun:malloc\n   fun:attrlist_alloc\n   fun:attrlist_create\n   fun:encode_l\n   fun:encode_resc\n   fun:pbs_python_populate_attributes_to_python_class\n   fun:*\n}\n{\n   From PBS (server) - Suppress intentional unfreed memory from pbs_python_populate_attributes_to_python_class() that is tracked and freed in a local pbs_list_head.\n   Memcheck:Leak\n   fun:malloc\n   fun:attrlist_alloc\n   fun:attrlist_create\n   fun:encode_size\n   fun:encode_resc\n   fun:pbs_python_populate_attributes_to_python_class\n   fun:*\n}\n{\n   From PBS (server) - Suppress intentional unfreed memory from pbs_python_populate_attributes_to_python_class() that is tracked and freed in a local pbs_list_head.\n   Memcheck:Leak\n   fun:malloc\n   fun:attrlist_alloc\n   fun:attrlist_create\n   fun:encode_str\n   fun:encode_resc\n   fun:pbs_python_populate_attributes_to_python_class\n   fun:*\n}\n{\n   From PBS (server) - Suppress intentional unfreed memory from pbs_python_populate_attributes_to_python_class() that is 
tracked and freed in a local pbs_list_head.\n   Memcheck:Leak\n   fun:malloc\n   fun:attrlist_alloc\n   fun:attrlist_create\n   fun:encode_time\n   fun:encode_resc\n   fun:pbs_python_populate_attributes_to_python_class\n   fun:*\n}\n{\n   From PBS (server) - Suppress intentional unfreed memory from loading hook script at mgr_hook_import\n   Memcheck:Leak\n   fun:malloc\n   fun:strdup\n   fun:pbs_python_ext_alloc_python_script\n   fun:mgr_hook_import\n   fun:req_manager\n   fun:dispatch_request\n   fun:process_request\n   fun:wait_request\n   fun:main\n}\n{\n   From PBS (server) - Suppress intentional unfreed memory from loading hook script at mgr_hook_import\n   Memcheck:Leak\n   fun:malloc\n   fun:pbs_python_ext_alloc_python_script\n   fun:mgr_hook_import\n   fun:req_manager\n   fun:dispatch_request\n   fun:process_request\n   fun:wait_request\n   fun:main\n}\n{\n   From PBS (server) - Suppress intentional unfreed memory from allocating hook data at mgr_hook_create\n   Memcheck:Leak\n   fun:malloc\n   fun:hook_alloc\n   fun:mgr_hook_create\n   fun:req_manager\n   fun:dispatch_request\n   fun:process_request\n   fun:wait_request\n   fun:main\n}\n{\n   From PBS (server) - Suppress intentional unfreed memory from allocating hook name at mgr_hook_create\n   Memcheck:Leak\n   fun:malloc\n   fun:strdup\n   fun:set_hook_name\n   fun:mgr_hook_create\n   fun:req_manager\n   fun:dispatch_request\n   fun:process_request\n   fun:wait_request\n   fun:main\n}\n{\n   From PBS - Suppress intentional unfreed memory of auth struct inside global tpp_conf struct\n   Memcheck:Leak\n   fun:malloc\n   fun:make_auth_config\n   fun:set_tpp_config\n   fun:main\n}\n{\n   From PBS - Suppress intentional unfreed memory of auth struct inside global tpp_conf struct\n   Memcheck:Leak\n   fun:malloc\n   fun:strdup\n   fun:make_auth_config\n   fun:set_tpp_config\n   fun:main\n}\n{\n   From PBS - Suppress intentional unfreed memory of auth struct inside global tpp_conf struct\n   Memcheck:Leak\n   
fun:malloc\n   fun:mk_hostname\n   fun:set_tpp_config\n   fun:main\n}\n{\n   From PBS - Suppress intentional unfreed memory of auth struct inside global tpp_conf struct\n   Memcheck:Leak\n   fun:malloc\n   fun:strdup\n   fun:set_tpp_config\n   fun:main\n}\n{\n   From PBS - Suppress intentional unfreed memory of auth struct inside global tpp_conf struct\n   Memcheck:Leak\n   fun:realloc\n   fun:set_tpp_config\n   fun:main\n}\n{\n   From PBS - Suppress intentional unfreed avl tree tls data\n   Memcheck:Leak\n   fun:calloc\n   fun:get_avl_tls\n   ...\n}\n{\n   From PBS - Suppress intentional unfreed tpp tls data\n   Memcheck:Leak\n   fun:calloc\n   fun:tpp_get_tls\n   fun:work\n   ...\n}\n{\n   From PBS - Suppress uninitialized job fs structure in mom\n   Memcheck:Param\n   write(buf)\n   ...\n   fun:job_save_fs\n   ...\n}\n{\n   From PBS - Suppress hook allocated buffer\n   Memcheck:Leak\n   match-leak-kinds: possible\n   fun:malloc\n   fun:hook_alloc\n   fun:hook_recov\n   ...\n}\n{\n   From PBS - Suppress hook allocated buffer\n   Memcheck:Leak\n   match-leak-kinds: possible\n   fun:malloc\n   fun:hook_alloc\n   fun:mgr_hook_create\n   ...\n}\n{\n   From PBS - Suppress uninitialized DIS buffer\n   Memcheck:Param\n   write(buf)\n   ...\n   fun:__send_pkt\n   fun:dis_flush\n   ...\n}\n{\n   From PBS - Suppress scheduler query resources leak; it is misreported\n   Memcheck:Leak\n   match-leak-kinds: definite\n   ...\n   fun:*query_resources*\n   fun:*update_resource_def*\n   fun:schedule\n   ...\n}\n"
  }
]